On the Way to Impressive
I manage our Twitter stream, and for the most part I see an overwhelming amount of positive support for the product. Every so often, though, we get a brutally honest comment indicating we could be doing a better job. One such tweet came in about a week ago from @jakajancar. Jaka tweeted the following regarding Loggly's apparent lack of features:
He has a helluva point here. A lot of the things we do for scale and speed aren't readily apparent to the casual observer. It's only when you start sending us several million events an hour and then searching across hundreds of millions of events that you realize the scope of what Loggly can do. It's taken the lion's share of our time over the last six months to achieve the scalability to handle thousands of accounts, each sending in up to 8GB/day. Not everyone has those kinds of volumes, but some accounts do, and we had to make sure we could handle them before we did something really cool with the rest of the system.
Well, it's about #@!%'ing time we start doing something cool with all this scalability.
Putting the Sexy Time in JSON
About two months ago Jon and I were talking to App47 about how they use Loggly. App47 embeds Loggly in their own app, and they requested a feature where we would extract a given field, index it, and then allow searches to be narrowed on just that field. We scoped the work, codenamed the project Argonaut, and began cranking on it. The result is a new Loggly input that accepts JSON-formatted data and supports partial and ranged searches on the extracted fields.
Back on Twitter, I received a reply from Jaka about what exactly he'd like feature-wise in Loggly.
Turns out the stuff we've been working on is very nearly the set of features Jaka asked for. Cue the evil laughter.
Using the New JSON Hotness
Using the new Loggly JSON input is pretty easy. You simply create an HTTP input that is JSON enabled, and then forward a JSON-structured text blob to it. Let's start by taking a look at some sample data we generated with a custom Apache logging format:
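A representative event looks something like this (trimmed for readability, with made-up values; your field names will depend on your LogFormat):

    { "host": "10.1.2.3", "user": "-", "time": "21/Sep/2011:14:32:10 -0700",
      "request": "GET /index.html HTTP/1.1", "status": 200, "size": 5043,
      "referer": "-", "agent": "Mozilla/5.0" }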
Now let's search for an event which matches status code 200 across a bunch of these suckers:
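The query is just a field:value pair. We're sketching the syntax here, so treat it as illustrative rather than gospel:

    status:200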
Loggly will only return events for this search that have a field named status with a value of 200. The hotness doesn't stop there, though: you can also do ranged searches:
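A ranged search over the same field looks something like this (again, illustrative syntax, with a bucket covering all the 2xx codes):

    status:[200 TO 299]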
By now you're saying to yourself, "That's pretty cool, but can I graph my field shizzle with it?" Hell yeah you can! You can use either graph or compare, with single or ranged values, to conduct your searches. Here we 'bucket' the status codes and use the compare graphing command to get a breakdown of response codes:
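As a sketch (the exact command syntax may differ slightly in the app), bucketing the status codes into their classes and comparing them could look like:

    compare status:[200 TO 299], status:[300 TO 399], status:[400 TO 499], status:[500 TO 599]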
Let's not stop there. You can also do the equivalent of a grep | cut | sort | uniq -c | sort -n if you use the unique command:
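For instance, to count how many events each user agent accounts for, something like this (illustrative):

    unique agent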
We're not entirely done with all the little options in and around these features, but we'll be adding them in the coming weeks. If you have suggestions you'd like to throw our way, we're more than happy to pretend the subsequent improvements were your idea! :P
So how do you JSON all your log data?
It's pretty obvious by now that we're not planning on directly providing field extraction support in the product. Loggly's approach to log management has always erred on the side of simple, and field extraction is no exception. We'll be partnering with a few other providers in the coming months to get you endpoints for extracting fields from unstructured data, and we'll also be cranking out best practices for doing it yourself.
For now, if you log from your own applications, using a Loggly JSON input is fairly trivial: you just send us JSON on your JSON-enabled HTTP inputs.
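As a minimal sketch, here's what that might look like in Python. The input URL below is a placeholder; substitute the URL shown for your own input in the Loggly console:

    # A minimal sketch: POST a JSON event to a JSON-enabled HTTP input.
    import json
    import urllib.request

    INPUT_URL = "https://logs.loggly.com/inputs/YOUR-INPUT-KEY"  # placeholder

    # Build the event as a plain dict; any fields you include become searchable.
    event = {"status": 200, "request": "GET /index.html HTTP/1.1", "size": 5043}

    req = urllib.request.Request(
        INPUT_URL,
        data=json.dumps(event).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.status)  # a 2xx response means the input accepted the event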
For other use cases, keep an eye out for a follow-up post by Jordan showing you how to configure your Apache server to serve up custom log formats, and how to use the grok tool to convert other logging formats to JSON on the fly. Grok will even forward the events to your account from a monitored file.
Impressed yet? Just wait for real-realtime feeds and our new alerting app coming out next month!