Logging on the Force.com platform is something I’ve always had an issue with – not because it doesn’t exist, but because it feels so heavy-handed. The system log gives a lot of information, but sometimes I just want to know that I’ve hit a certain trace or that a variable has been set to a specific value, and sometimes I’d simply like to monitor how the code is performing: it’s always better to have seen an error and be working on a fix before the customer comes to you!

As such this is a subject I keep coming back to, but as yet I haven’t quite found a neat enough solution; my previous forays were based on my SF Console project and, whilst they worked, I was never really satisfied with them. The idea was kinda right, but the “console” part was just never going to be complete enough – it did, after all, start out as an excuse to have a play with some node.js.

This week, though, the issue has resurfaced, albeit in a slightly different way, which has given me the perfect excuse to get my hands dirty with Loggly. Loggly is a cloud-based logging service which has been on my radar for a while now, but I have struggled to justify using it. I’m sure I’m late to the party, but hey, I am at least here. From the little I have played with it so far I quite like it – the command line interface in the browser is pretty cool, the automatic indexing of log entries is fantastic and of course it has a multitude of ways to integrate with it. It will also let you monitor and alert based on your logs through the use of Alert Birds, although I haven’t made use of them yet.

My new lust for a logging solution has come from the need to instrument some of the managed packages that we’re building over at WickedBit. As people who are involved with software we all know that one of the most constant constants is the ability for a customer to break your software within ten minutes of getting their hands on it. It happens to us all and should be no surprise to anyone who truly works with software. When this happens the debug logs are only of any help if the customer gives you login access to their org, which is fairly intrusive and not something everyone wants. The next best step is to be able to turn some kind of remote error logging on, and this is where Loggly comes in.

First of all we need to get Loggly set up. We’re going to be using the incredibly simple REST interface provided by Loggly, so all we need is an account (just sign up for the basic account; it’s free) and then we need to create an HTTP input. I have decided to create an HTTP input for each of my applications as this seems like the most natural segregation of data, but each to their own – there are no rules here. The most important thing is to take note of (or at least remember where to find) the input key. And that is that; Loggly is now good to go.

What follows next is the APEX that I wrote to act as my logging mechanism, and it probably helps if I explain a couple of my extra requirements before I show you the code. I need to be able to turn the entire logging mechanism on and off. I also need to be able to turn only specific log statements on; this is, after all, a callout, and we have precious few of those, so we don’t want every log statement turned on all of the time. Instead I want to be able to turn on only the statements I need to investigate a certain problem. With those desires out in the open, here is the APEX logging class that I have used to integrate with Loggly – well, a version of it at least; I have stripped out the dependency injection stuff to try and keep it cleaner for this post.
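Something along these lines – note that the class name, the “LogglySettings” resource name and the exact endpoint URL are just placeholders from my version, so swap them for whatever fits your own setup:

```apex
public class LogglyLogger {

    // The Loggly input key mentioned earlier - swap this per application
    private static final String INPUT_KEY = 'YOUR-LOGGLY-INPUT-KEY';

    // The "big" switch: when false no logging callouts are ever made
    private static final Boolean LOGGING_ENABLED = true;

    // Name of the static resource holding a JSON serialized map of
    // locations that are currently allowed to make the logging callout
    private static final String SETTINGS_RESOURCE = 'LogglySettings';

    // Public entry point: pass a "location" (e.g. class and method name)
    // plus any key/value pairs you want indexed in Loggly
    public static void log(String location, Map<String, String> values) {
        if (!LOGGING_ENABLED || !isLocationEnabled(location)) {
            return;
        }
        doLog(location, values);
    }

    // Reads the static resource and checks whether this location is switched on
    private static Boolean isLocationEnabled(String location) {
        try {
            List<StaticResource> resources = [
                SELECT Body FROM StaticResource WHERE Name = :SETTINGS_RESOURCE LIMIT 1
            ];
            if (resources.isEmpty()) {
                return false;
            }
            Map<String, Object> locations =
                (Map<String, Object>) JSON.deserializeUntyped(resources[0].Body.toString());
            Object flag = locations.get(location);
            return flag instanceof Boolean && (Boolean) flag;
        } catch (Exception e) {
            // If the settings can't be read we simply don't log
            return false;
        }
    }

    // Makes the callout to Loggly: the input key goes in the URL so no
    // authentication is needed, and the data goes in the body as
    // URL encoded key/value pairs
    private static void doLog(String location, Map<String, String> values) {
        Map<String, String> data = new Map<String, String>(values);
        data.put('location', location);
        data.put('organizationId', UserInfo.getOrganizationId());
        data.put('organizationName', UserInfo.getOrganizationName());

        List<String> pairs = new List<String>();
        for (String key : data.keySet()) {
            pairs.add(EncodingUtil.urlEncode(key, 'UTF-8') + '=' +
                      EncodingUtil.urlEncode(data.get(key), 'UTF-8'));
        }

        HttpRequest req = new HttpRequest();
        // NB: the Loggly endpoint needs adding as a Remote Site Setting
        // before the callout will be allowed
        req.setEndpoint('https://logs.loggly.com/inputs/' + INPUT_KEY);
        req.setMethod('POST');
        // This header tells Loggly to take the key/value pairs in the body,
        // JSON encode them and index them, making the logs searchable
        req.setHeader('Content-Type', 'application/x-www-form-urlencoded');
        req.setBody(String.join(pairs, '&'));

        try {
            new Http().send(req);
        } catch (Exception e) {
            // A failed log should never break the application
        }
    }
}
```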

A quick run through of the code: the first line is the Loggly input key I mentioned earlier; I pulled this out to make running the class for different applications fairly easy. The next couple of lines help control when the code can actually make the logging callout – the first is a simple boolean that acts as the “big” switch I talked about. The second is the name of a static resource containing a JSON-serialized Map; the idea is that each call to the Log function passes in a “location”, and the map lists the locations that are currently allowed to make the logging callout. This could again be moved to a set of custom settings, but I have opted for the flexibility of a simple text file that I can send to the customer to change which logs I receive in Loggly; whilst still not ideal, this should make the process much easier for the customer.

The crux of the class is the doLog method, as this is the one that actually makes the callout to Loggly (if all the settings are good). The callout is very simple: the input key is passed in the URL, so there’s no authentication required, and the data is passed across as a series of URL-encoded key/value pairs in the body. It’s important to set the Content-Type header to application/x-www-form-urlencoded because when Loggly sees this it takes the key/value pairs in the request body, automatically JSON encodes them and indexes them, making your logs searchable. To help with the searching I always pass the location and some information about the Organization making the call, but otherwise the caller of the Log method is free to pass in any values they please in the values map.

All in all I’m fairly pleased with this first pass at a Loggly logging framework, and my initial tests of using it from within a managed package are going well. I’m sure that things will change over time, but it’s a good starting point.

With all of this in place, making use of the code is fairly simple. First, add a static resource that looks something like this:
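(The locations here are just examples – the keys need to match whatever location strings you pass into the log method.)

```json
{
    "AccountService.createAccounts": true,
    "InvoiceService.calculateTotals": false
}
```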

Then make an APEX call like this:
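(Again, using the placeholder names from the sketch above.)

```apex
// The location must be present and set to true in the static resource,
// otherwise the callout is skipped
LogglyLogger.log('AccountService.createAccounts', new Map<String, String>{
    'recordCount' => '5',
    'status'      => 'success'
});
```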

And then jump over into Loggly and search for the fruits of your labour.

Feel free to take, adapt and make use of the code above… it’s too simple to warrant a GitHub repo, so I’m just going to leave it as is… of course, feedback is always welcome too; as ever with my code, this is just the germ of an idea really.