Loggly and Kubernetes give Molecule easy access to the logs that matter
“Loggly is a really good value. It’s fast, and it gives me the information that matters. With Loggly, we have a lot more capability than what we got from Kibana.”
— Adam Sunderland, Chief Architect, Molecule
Highlights:
- Cuts expenses by replacing ELK stack running on EC2 with Loggly
- Accelerates troubleshooting with Loggly Dynamic Field Explorer™
- Reduces customer impact by catching issues early with proactive alerting
Docker Architecture Called for Centralized Log Management
With a microservices architecture running in Docker containers with Kubernetes orchestration, the development team at Molecule knew that it needed a centralized log management solution. “Pairing AWS EC2 with Kubernetes means that we don’t have to do anything to scale up and down,” says Adam Sunderland, Chief Architect at Molecule. “That easy automation extends to centralizing our log data.”
ELK Stack Generated Unexpected Costs
Molecule initially deployed a log management solution based on Elasticsearch, Logstash, and Kibana (the ELK stack) running on Amazon EC2 (mostly m4.2xlarge instances) but found that it cost more to maintain than the company expected. “It got expensive for us to run because of the amount of data we were logging, which peaked at about 8GB on some days,” Sunderland reports. “That meant we had a lot to maintain.”
Why Loggly?
After searching for dedicated log management services and evaluating several, the Molecule team chose Loggly for its rich feature set, easy implementation, and support for fluentd.
Solution
Because Molecule was already aggregating its logs using fluentd, implementing Loggly was a quick transition. “We were surprised at how easy it was to switch over,” Sunderland notes. “All we had to do was add the fluentd plugin for Loggly.”
Kubernetes provides an important log management benefit: structured log data. With Docker containers orchestrated by Kubernetes, Molecule gets logs formatted as JSON and tagged with metadata indicating the application and instance they came from.
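As an illustration only (not Molecule’s actual code), the sketch below shows the application side of this pattern: the service writes one JSON object per log line to stdout, the container runtime captures stdout, and fluentd attaches Kubernetes metadata (namespace, pod, container) before forwarding each record to Loggly. The service name and field names here are hypothetical.

```python
import json
import logging
import sys

# Minimal sketch: emit one JSON object per log line to stdout. The
# Docker/Kubernetes logging pipeline captures stdout, and fluentd adds
# Kubernetes metadata before forwarding each record to Loggly.

class JsonFormatter(logging.Formatter):
    def format(self, record):
        payload = {
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        return json.dumps(payload)

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())

logger = logging.getLogger("billing-service")  # hypothetical service name
logger.setLevel(logging.INFO)
logger.addHandler(handler)

logger.info("invoice generated")
# -> {"level": "INFO", "logger": "billing-service", "message": "invoice generated"}
```

Because every line is already valid JSON, the fields show up individually in Loggly rather than as one opaque message string.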
Loggly Dynamic Field Explorer Accelerates Troubleshooting
Molecule’s primary use case is to investigate operational issues. Because all of Molecule’s log data is formatted as JSON, the team can quickly drill down into specific components using Loggly Dynamic Field Explorer™ or look at multiple components side by side. “Having the ability to slice our log data by field gets us to answers much faster,” Sunderland says.
“If your logs are in JSON, Loggly makes it super easy to filter down to what matters and get more information from your logs. And JSON comes for free from Kubernetes.”
Alerting Enables Molecule to Get Proactive
Loggly alerts help the development team at Molecule detect potential issues before they affect customers. “We are continuing to look at new alerts to add,” Sunderland says. Molecule is also starting to build dashboards to track critical data and to take advantage of Loggly Anomaly Detection for one specific process in its application: a file-parsing job that expects a certain number of lines to fail to parse each day and logs every one of those failures. If that number rises or falls unexpectedly, the team knows that action may be required.
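The sketch below is a hypothetical illustration of that “expected failure rate” pattern; the function names and field names are assumptions, not Molecule’s code. Each parse failure is logged as its own JSON event, so Loggly can count those events per day and an alert or Anomaly Detection can flag a count that drifts from the usual baseline.

```python
import json
import logging
import sys

# Hypothetical sketch: log every line that fails to parse as a structured
# JSON event. Loggly can then count events matching a field filter
# (e.g. the "event" field equal to "parse_failure") per day, and an alert
# or Anomaly Detection can flag a count far from the usual baseline.

logger = logging.getLogger("file-parser")  # hypothetical component name
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler(sys.stdout))

def parse_line(line):
    """Placeholder parser; raises ValueError on malformed input."""
    return json.loads(line)

def parse_file(path):
    records, failures = [], 0
    with open(path) as fh:
        for lineno, line in enumerate(fh, start=1):
            try:
                records.append(parse_line(line))
            except ValueError as exc:
                failures += 1
                # One JSON log event per failure, so each is countable in Loggly.
                logger.warning(json.dumps({
                    "event": "parse_failure",
                    "file": path,
                    "line": lineno,
                    "reason": str(exc),
                }))
    return records, failures
```

Logging each failure as a discrete, filterable event (rather than a summary total) is what lets the alerting side do the counting.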
Loggly Exposes the Logs That Matter
“Before, when something went wrong, we sometimes struggled to find the logs that mattered,” Sunderland concludes. “Now that we have Loggly, we get our log data broken down by namespace, task, time, and other parameters. When we start investigating a problem, we’re already really close to an answer.”
Download the full case study (PDF): https://www.loggly.com/wp-content/uploads/2016/09/CS-MOLECULE-SOFTWARE.pdf