How to Stream Logs from Azure Resources to Loggly
When running any production workload in Azure, you need to stay on top of what is going on within your hosted infrastructure. Logs provide a way to record and monitor the systems we run and alert us to potential problems. The Azure Event Hub messaging system has built-in support for streaming logs out of a wide variety of Azure-hosted resources, such as VMs, load balancers, and more. This makes it a great solution for capturing logs from many sources within your Azure stack. We can use it to forward logs to a best-of-breed log management system like SolarWinds® Loggly™, making it easy to monitor and search through log data.
This tutorial will show you how to stream logs to Loggly: an Azure resource generates logs and streams them to an Event Hub, and an Azure function reads them from the hub and forwards them to Loggly.
Setting Up an Event Hub to Capture the Logs
To get started, we need to set up a new Azure Event Hub namespace, which is the root container that holds our actual Event Hubs. We will use this namespace for all of our resource logging. Each Event Hub namespace can contain multiple Event Hubs running simultaneously, each with multiple partitions actively ingesting streaming log data. This allows a lot of flexibility in how you segment and scale your logs.
For this example, we’ll set up one Event Hub within our namespace and write all the data to that hub. For the best performance, create your new namespace in the same region as the resources you plan to monitor; this cuts down on cross-region traffic and reduces latency.
Log into Azure and click the Create a resource button. Search for “Event Hubs” and choose the first item that appears in the results. Follow the prompts to set up the Event Hub namespace, then click Create.
Once the deployment is complete, open it by clicking on it within the Azure portal. Click the Event Hub button to create an Event Hub endpoint that will serve as the target for our logs. In the example shown below, our namespace is ehloggy and our Event Hub name is demogrouplogs.
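If you prefer scripting over clicking through the portal, the same resources can be created with the Azure CLI. Here is a minimal sketch, assuming a resource group named my-rg already exists and reusing the names from this example:

az eventhubs namespace create --resource-group my-rg --name ehloggy --location eastus --sku Standard
az eventhubs eventhub create --resource-group my-rg --namespace-name ehloggy --name demogrouplogs --partition-count 2 --message-retention 7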
When creating this setup for production, it’s a good idea to enable the “capture” feature on your Event Hub. This will forward all events to a storage account on Azure for backup purposes, which can be handy in case anything along your pipeline breaks. Also, increasing the Message Retention value will help with resiliency in case you need to pause or take down the solution for updates.
After you create your Event Hub, open the namespace’s shared access policies page and save the RootManageSharedAccessKey connection string. It will look like this:
Endpoint=sb://ehnamespace.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=****************************************************************
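If you are scripting the setup, you can pull the same connection string with the CLI. A sketch, reusing the hypothetical my-rg resource group and the example names above:

az eventhubs namespace authorization-rule keys list --resource-group my-rg --namespace-name ehloggy --name RootManageSharedAccessKey --query primaryConnectionString --output tsv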
Forwarding Logs to Your Event Hub
Now that you have your Event Hub set up and ready to receive connections, you can start forwarding logs from your Azure resources. You can configure log streaming for many resources within the Azure portal. Start by opening the Monitoring blade and selecting Diagnostic settings.
This will give you a list of all your resources that have streaming capabilities. Click on the resource whose logs you want to forward and select Turn on diagnostics. Name your log stream something easily identifiable, check Stream to an event hub, and select the Event Hub you configured previously. At the bottom of the settings menu, make sure to check the log categories you wish to capture.
Here, I’ve set up streaming to my demo Event Hub for a network security group:
Save your settings and head back over to your Event Hub. After a few minutes, you should start seeing events flowing into it from your chosen resource.
Note: Different resource types collect and forward logs on different schedules. Some do so once every minute, while others only do so every 10 minutes. If you don’t see any hint of traffic on your Event Hub, try waiting at least 10 minutes while the resource is active.
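The diagnostic setting itself can also be created from the CLI with az monitor diagnostic-settings create. Here is a hedged sketch for a network security group, where the subscription ID, resource names, and log category are placeholders to substitute with your own:

az monitor diagnostic-settings create \
  --name stream-to-eventhub \
  --resource "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Network/networkSecurityGroups/<nsg-name>" \
  --event-hub demogrouplogs \
  --event-hub-rule "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.EventHub/namespaces/ehloggy/authorizationRules/RootManageSharedAccessKey" \
  --logs '[{"category": "NetworkSecurityGroupEvent", "enabled": true}]'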
Now that you have your logs streaming into your Event Hub, you are ready to set up your function app that will serve as the forwarder to Loggly.
Setting Up Your Forwarding Python Function App
Azure function apps have a built-in trigger for Event Hub that, when fired, passes the contents of log messages to the function. This allows us to easily grab them with some Python code and forward the contents to Loggly via a simple HTTP POST call.
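To make the flow concrete, here is a minimal sketch of what such a function can look like. This isn’t the exact code from the repo we’ll clone below, and the endpoint URL is a placeholder:

import logging

import azure.functions as func
import requests

# Placeholder endpoint; substitute your own Loggly customer token and tag.
logglyendpoint = "https://logs-01.loggly.com/inputs/<customer-token>/tag/http/"

def main(event: func.EventHubEvent):
    # Decode the raw Event Hub message and forward it to Loggly unchanged.
    body = event.get_body().decode("utf-8")
    requests.post(logglyendpoint, data=body, headers={"content-type": "application/json"})
    logging.info("Forwarded event to Loggly")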
When authoring Azure functions, I recommend using the Microsoft Visual Studio Code (VSCode) application along with the Azure CLI extension and the Azure Functions extension.
VSCode Download
Azure CLI Extension
Azure Functions Extension
Setting up a Python-based function app within Azure has changed recently with the 2.0 function app framework. Follow the instructions to create a new Linux-based function app for use in this example.
You will need Azure CLI version 2.x or later installed on your machine, as well as the Azure CLI extensions for the Linux preview. Make sure you choose “python” as the runtime when running the “az functionapp createpreviewapp” command. Once deployed, open VSCode, load the function app extension, and sign into your Azure subscription. You should see your deployed Linux function app displayed there.
In the picture above, you can see my newly created function app “logglyeh.”
Now that the Linux-based function app is set up and running, we are ready to create the actual functions that will run within it and do all the heavy lifting. I’ve already published a working function app on GitHub that you can clone. Clone the repository to an empty directory with Git, then use VSCode to open the “eventhub-loggly” folder. As soon as the folder is open, VSCode should automatically recognize that you are working with a function app and ask to initialize it so it will work properly.
Select “Yes” to allow VSCode to initialize the function app project.
[Note: I’ve included the required Python modules within the GitHub repo to help get you set up as quickly as possible. When creating your own from scratch, you will want to use a virtual environment via venv, then install the modules manually via a “pip install -r requirements.txt” command.]
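For reference, that from-scratch setup looks roughly like this when run from the project folder (the environment directory name is just a convention):

python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt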
Initializing the project will create a few new folders that will be zipped and uploaded along with your function when you publish it. For now, you can ignore these newly created folders and open the “ehlogglymain” subfolder, which contains the main Python file (__init__.py) and the accompanying function.json file. You’ll need to customize both of these files for the function to work in your environment.
Customizing the Code to Work in Your Environment
The actual function code is contained in the __init__.py file. This is the Python script that executes when the function is triggered by a new incoming Event Hub message. There is one main variable that needs to be defined in this file: the logglyendpoint variable. This is the Loggly HTTPS endpoint the function will forward each message to. To find your endpoint, log into Loggly, click “source setup” at the top of the screen, and then select the “HTTP/S Event” option.
Your HTTP endpoint will be listed under the “configure your app” section. A cool feature worth pointing out is that the “/tag/http” ending can be changed to whatever you want. I set mine to something related to the Azure resource I’m tracking, which helps me quickly filter down to the corresponding logs once the forwarder is set up and functioning.
Set the logglyendpoint variable in the __init__.py file to your HTTP URL with your chosen tag. After you make this change, click Save.
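For example, with a hypothetical customer token and a tag of azure-nsg, the line would look like this:

logglyendpoint = "https://logs-01.loggly.com/inputs/<customer-token>/tag/azure-nsg/"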
The last edit we’ll make to prep our function app is setting our Event Hub name in the function.json file, which is also located within the ehlogglymain function folder. Set this to the target Event Hub’s name; in my example, it’s demogrouplogs. You can learn more about the various settings and bindings contained in this file by taking a look at Azure’s documentation.
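For reference, the trigger binding in function.json looks roughly like this; the connection value is explained in a later section, and your binding name may differ:

{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "type": "eventHubTrigger",
      "direction": "in",
      "name": "event",
      "eventHubName": "demogrouplogs",
      "connection": "EventHub",
      "consumerGroup": "$Default",
      "cardinality": "one"
    }
  ]
}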
Once that variable has been set, we are ready to publish our app to Azure.
Publishing Your Function App to Azure
Now that we have the function customized to our particular setup, we’re ready to publish it using our Azure subscription. VSCode makes this a piece of cake through a simple interface provided by the same function app extension.
Click the Azure icon on the left-hand side of the menu and select your Linux-based function app. Then simply hit the blue check button at the top of the screen to start publishing your function. VSCode will ask which folder you want to deploy and which function app you want to publish to. Make sure you select the root folder of the project, not the ehlogglymain folder, and select your already running Linux-based function app as the deployment target.
A warning might pop up saying the function isn’t set up correctly to be published from VSCode and asking if it can make a few modifications to the layout to prep it. Click Yes on this pop-up to continue the publishing process.
After it makes the necessary corrections to the layout, you will get one final prompt asking if you are sure that you want to deploy your function to Azure. It will warn you that doing so will overwrite any previous version that might already be deployed. Since this is the first time we are deploying, we aren’t worried about overwriting anything; however, it’s a good thing to understand and keep in mind when deploying functions in this manner.
For now, we will select “Deploy” and let VSCode do its thing. You can watch the deployment’s progress in the “Output” window within VSCode. Once you see that your deployment has completed, expand your Linux function app and confirm that you see your function there.
Now that the function is live and published with your function app, there is one last variable within the Azure portal we will need to set to complete the pipeline and kick off the log forwarding.
Setting the Final Event Hub Variable in the Azure Portal
Log back into the Azure portal and browse to your function app. Once the blade for the app fully loads, select the “Platform features” button toward the top of the screen. Many options related to your function app are displayed here, but the one we are looking for is “Application settings”. These are key-value pairs your function app has access to while running. If you look back at the function.json file in the ehlogglymain folder, you will see the name of the key we need listed as the value of the “connection” field. With our example code, that value is “EventHub”. Back in your function app’s Application settings menu, click the option to add a new setting and enter “EventHub” as the key, with your Event Hub namespace’s RootManageSharedAccessKey connection string as the value.
Make sure to click Save at the top of the screen to save this new app setting. Now the trigger that our app runs off of knows where to look to monitor for new events. Within a few minutes, you should start seeing HTTP-based events showing up in Loggly!
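This step can also be scripted; a sketch, substituting your own app name, resource group, and connection string:

az functionapp config appsettings set --name logglyeh --resource-group my-rg --settings "EventHub=Endpoint=sb://ehloggy.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=<key>"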
View the Logs in Loggly
We can confirm that our forwarder is working by logging into Loggly and searching for our specialized tag using the “tag:customtagname” format in the search bar.
Success! We are now getting our exported Azure network security group diagnostic logs automatically sent to Loggly every time they are generated and written to our Event Hub. Since the forwarder is sending raw JSON, Loggly can automatically parse these logs, allowing you to search or filter on specific fields, or even graph values visually. We’ll save this kind of analysis for a future blog.
Conclusion
In this post, we set up some basic forwarding for an Azure resource using Event Hub for our message queue and an Azure function as the forwarding processor. Because Azure’s Event Hub service is set up to handle message volumes that you might see in large IoT deployments, it should scale up well to handle robust logging requirements for most stacks.
Additionally, with the custom function app you can perform all kinds of handy pre-processing and logistics on your logs before forwarding them over to Loggly, which opens up many possibilities. I highly suggest that once you get familiar with this setup, you try modifying the code a bit to really unlock the potential of this kind of solution.
Finally, since Event Hub is a central location for any Azure resource, a good next step would be to set up log forwarding for your infrastructure including your application servers, databases, load balancers, and more. We will cover these topics in more detail in future articles.
The Loggly and SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.
Guest Author