Posts in "SF Console" category

Force.com and Loggly – watching the insides of managed packages

Logging on the Force.com platform is something that I’ve always had an issue with – not because it doesn’t exist but because it feels so heavy-handed. The system log gives a lot of information, but sometimes I just want to know that I’ve hit a certain trace or that a variable has been set to a specific value, and sometimes I’d simply like to monitor how the code is performing: it’s always better to have seen an error and be working on a fix before the customer comes to you!

As such this is a subject that I keep coming back to, but as yet I haven’t quite found a neat enough solution; my previous forays were based on my SF Console project and, whilst they worked, I was never really satisfied with it. The idea was kinda right but the “console” part was never going to be complete enough – it did, after all, just start out as a reason to have a play with some node.js.

This week though the issue has resurfaced, albeit in a slightly different way, which has given me the perfect excuse to get my hands dirty with Loggly. Loggly is a cloud-based logging service that has been on my radar for a while now, but I have struggled to justify using it. I’m sure I’m late to the party, but hey, I am at least here. From the little I have played with it so far I quite like it – the command line interface in the browser is pretty cool, the automatic indexing of log entries is fantastic and of course there’s a multitude of ways to integrate with it. It will also allow you to monitor and alert based on your logs through the use of Alert Birds, although I haven’t made use of those yet.

My new lust for a logging solution has come from the need to instrument some of the managed packages that we’re building over at WickedBit. As people who are involved with software we all know that one of the most constant constants is a customer’s ability to break your software within ten minutes of getting their hands on it. It happens to us all and should be no surprise to anyone who truly works with software. When it happens, the debug logs are only any help if the customer gives you login access to their org – something which is fairly intrusive and not something everyone wants. The next best step is to be able to turn on some kind of remote error logging, and this is where Loggly comes in.

First of all we need to get Loggly set up. We’re going to be using the incredibly simple REST interface provided by Loggly, so all we need is an account (just sign up for the basic account; it’s free) and then we need to create an HTTP input. I have decided to create an input for each of my applications as this seems like the most natural segregation of data, but each to their own – there are no rules here. The most important thing is to take note of (or at least remember where to find) the input key. And that is that; Loggly is now good to go.

What follows next is the APEX that I wrote to act as my logging mechanism, and it probably helps if I explain a couple of my extra requirements before I show you the code. I need to be able to turn the entire logging mechanism on and off. I also need to be able to turn on only specific log statements – this is, after all, a callout, and we have precious few of those, so we don’t want every log statement turned on all of the time. Instead I want to be able to turn on only the statements I need to investigate a certain problem. With those desires out in the open, here is the APEX logging class that I have used to integrate with Loggly – well, a version of it at least; I have stripped out the dependency injection stuff to try and keep it cleaner for this post.
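
(The version below is a representative sketch rather than the exact code from the package – the class name and the Loggly endpoint URL are illustrative, so check the Loggly docs for the current REST endpoint, and remember you’ll need a Remote Site Setting before the callout will be allowed.)

public class LogglyLogger {

	// the Loggly input key mentioned earlier - swap in your own
	private static final String INPUT_KEY = 'your-input-key-here';

	// the "big" switch - nothing at all is logged when this is false
	private static final Boolean LOGGING_ENABLED = true;

	// the static resource containing a JSON serialized Map of the
	// "locations" that are currently allowed to make the logging callout
	private static final String SETTINGS_RESOURCE = 'LogglySettings';

	public static void Log(String location, Map<String, String> values){
		if(LOGGING_ENABLED && locationEnabled(location)){
			doLog(location, values);
		}
	}

	private static Boolean locationEnabled(String location){
		try{
			StaticResource settings = [SELECT Body FROM StaticResource
			                           WHERE Name = :SETTINGS_RESOURCE LIMIT 1];
			Map<String, Object> locations =
				(Map<String, Object>)JSON.deserializeUntyped(settings.Body.toString());
			return locations.containsKey(location) && (Boolean)locations.get(location);
		} catch(Exception e){
			// a missing or malformed settings file means no logging
			return false;
		}
	}

	private static void doLog(String location, Map<String, String> values){
		// always include the location and Org details to help with searching
		values.put('location', location);
		values.put('orgId', UserInfo.getOrganizationId());
		values.put('orgName', UserInfo.getOrganizationName());

		// the body is a series of URL encoded key/value pairs
		List<String> pairs = new List<String>();
		for(String key : values.keySet()){
			pairs.add(EncodingUtil.urlEncode(key, 'UTF-8') + '=' +
			          EncodingUtil.urlEncode(values.get(key), 'UTF-8'));
		}

		HttpRequest req = new HttpRequest();
		// the input key goes in the URL, so no authentication is needed
		req.setEndpoint('https://logs.loggly.com/inputs/' + INPUT_KEY);
		req.setMethod('POST');
		// this content type tells Loggly to JSON encode and index the pairs
		req.setHeader('Content-Type', 'application/x-www-form-urlencoded');
		req.setBody(String.join(pairs, '&'));
		new Http().send(req);
	}
}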

A quick run through of the code: the first line is the Loggly input key that I mentioned earlier; I pulled this out to make running this for different applications fairly easy. The next couple of lines help control when the code can actually make the logging callout – the first is a simple boolean that acts as the “big” switch I talked about. The second is the name of a static resource containing a JSON serialized Map; the idea is that each call to the Log function passes in a “location” and the map lists the locations that are currently allowed to make the logging callout. This could be moved to a set of custom settings, but I have opted for the flexibility of a simple text file that I can send to the customer to change what logs I receive in Loggly; whilst still not ideal, this should make the process much easier for the customer.

The crux of the class is the doLog method, as this is the one that actually makes the callout (if all the settings are good) to Loggly. The callout is very simple: the input key is passed in the URL, so there’s no authentication required, and the data is passed across as a series of encoded key/value pairs in the body. It’s important to set the content type header to application/x-www-form-urlencoded because when Loggly sees this it takes the key/value pairs in the request body, automatically JSON encodes them and indexes them, making your logs searchable. To help with the searching I always pass the location and some information about the Organization making the call, but otherwise the caller of the Log method is free to pass any values they please in the values map.

All in all I’m fairly pleased with this initial implementation of a Loggly logging framework and my initial tests of using it from within a managed package are going well. I’m sure that things will change over time but it’s a good starting point.

With all of this in place, making use of the code is fairly simple. First add a static resource that looks something like this:
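
The keys are the “locations” that your code passes to Log and the values say whether each one may currently make the callout – these particular names are just examples:

{
    "Payments.capture": true,
    "Invoices.generate": false
}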

Then make an APEX call like this:
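
Using the sketch above, that might look something like this (the location and values here are made up):

// log a couple of values from the 'Payments.capture' location
LogglyLogger.Log('Payments.capture', new Map<String, String>{
	'step' => 'authorised',
	'amount' => '42.50'
});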

And then jump over into Loggly and search for the fruits of your labour.

Feel free to take, adapt and make use of the code above… it’s too simple to warrant a GitHub repo so I’m just going to leave it as is… of course feedback is always welcomed too; as ever with my code, this is just the germ of an idea really.

A quick start to SF Console

I thought I’d add a quick guide to getting the demo installed and running to save you having to read all that guff in the other posts.

1) Download and install this package into your SalesForce Org.
2) Log into this site with your SalesForce credentials; keep this window open.
3) Call Console.Write('Hello World'); from your Org – be it anonymously or otherwise.
4) Go back to the window opened in 2) and either be amazed at how good it is, or unsurprised that it didn’t work.

Alternatively if you’re feeling brave and want to get all the code and set this up yourself then you can get the SalesForce part of the app from this GitHub repository and the node.js part from this one.

It runs over HTTP at the minute as I had a socket issue on HTTPS.

I’ve also been having a bit of a think about some things that it would be nice to add to the solution and I thought that I’d just make a public note of them in the vain hope that it forces me to do something about them.

On the UI: a clear button, maybe the option of a different colour scheme, sort out the occasional need to refresh.
Within SF: the ability to turn the logging on and off, maybe log levels.
In General: perhaps a few screenshots of how to set it up and use it – although it’s not that complex.

These are my initial thoughts for enhancements, more suggestions are welcome as are forks on GitHub and pull requests… saves me the work after all!

Let me know of any difficulties… have fun.

SF Console version 2: A node.js experiment

SF Console is a cross between a need to create some form of remote logging for APEX and an excuse to try and get my hands dirty with some technologies that I hadn’t tried before. In my head there are three parts to this story; the links to the other two are below and will become available once I’ve managed to actually get the code together to back up the thoughts!

The initial pass of this project gave me the chance to play with websockets, as I thought they would be interesting and could have a great impact on how we interact with the web. And they are all those things; the trouble is they’re also a little disappointing – but disappointing in a good way. I realise I’ve contradicted myself there so let me explain. Websockets work. And they work well. The implementation is baked into the browser and just works, and if you can get a well written server then that too will just work. Putting the two together is really, really easy. And it’s that which is disappointing – I thought it would be a challenge and I like challenges. Having said that, most of the world doesn’t, and it’s for that reason that it’s disappointing in a good way… this stuff works, and when stuff just works people tend to use it; it becomes a tool, not an obstacle.

Anyway back to moving forwards… I mentioned in my previous post on this topic that I rushed together the “broker” (it’s in quotes because it’s not really a broker but it’s a handy description) part of the application using a C# solution and that I was hoping to change that later. Well, that’s where node.js comes into the picture. And not only node.js but node on Heroku – just to give it that SalesForce.com spin.

Node is, obviously, a server-side javascript implementation that, as we all know, is based on Google’s V8 javascript engine. It’s meant to be quick and it’s meant to handle certain tasks really well. I didn’t want to do much with it – I just wanted to display a couple of web pages, log into SalesForce, receive some SOAP messages and then send some data to my clients via web sockets. In the grand scheme of things that’s fairly lightweight, so I decided it should be able to handle the task with consummate ease.

For many years javascript had, to me, been that thing that sat in the browser and let you do a few bits and pieces client side; it wasn’t great and it wasn’t that powerful. So why did I choose node to act as my broker when there were a thousand other choices? To be honest I’m not too sure; there’s a growing buzz around it so maybe it was that. Maybe it’s the technical masochist in me? I’m not too sure, but let me say now, despite a few hiccoughs along the way, I’m really glad that I decided to head in this direction. I think my previous dislike for javascript was probably fear – I come from “real” languages, C#, Java, you know the ones… and javascript doesn’t really work like them and therefore it must be rubbish. Well, I’m proud to stand up and say that now I understand a little more about the language I like it – boy, it’s different – but I like it.

Once I’d got my head into the node.js space – thinking in an asynchronous fashion and trying to make sure I got my closures in the right places (I’m fairly sure I still haven’t got those right) – I actually found that writing the code came fairly easily.

My first surprise was actually with the platform… node.js support on Heroku is a relatively new thing and the web sockets protocol is also fairly new: too new, in Heroku’s eyes, to be stable enough for their platform. Which is a shame, as it meant I had to revert to some of the older techniques for holding a channel open to the client. Luckily I quickly came across the socket.io library, which is frankly fantastic. It provides both server and client js implementations of a series of protocols for client-server js communication (including web sockets) and will gracefully fall back until it finds an acceptable means of communication for the two ends of the channel. I would have made use of this library even if I was using web sockets, so for me it was an obvious choice – just tweak the config and off we go.

There are a lot of frameworks and libraries available on node and they all make your life slightly easier – I decided to make use of the connect middleware framework as this provided me with a few bits and pieces that I saw no point in reinventing, namely: cookie parsing, POST body parsing, static content serving and a favicon server. This saved me a bunch of work, and probably saved the solution from being messed up by my poor JS skills more than it already has been. The connect middleware is fairly simple to use and although the documentation is lightweight it provides enough to get you started, plus the source is always there if you get stuck. The one thing that I would mention is that if you’re writing your own handler to go in the middleware then make sure you accept the next parameter, not just req and res. More importantly, make sure that if you don’t end the response you call next – otherwise everything will get lost, nothing else in the chain will get called and the server will never complete the response to the client.

I was surprised to find that the support for SOAP from within node is fairly limited. I appreciate that it’s a heavyweight, antiquated protocol, but a lot of people still only offer this interface for integration; even SalesForce expects a SOAP endpoint for an Outbound Message. There are a couple of frameworks out there but I couldn’t find anything that satisfied me, which meant I ended up creating a fairly poor workaround to get the thing going. Looking at the code you’ll see an sfparser file which takes the XML from the SOAP message and turns it into a javascript “object”. It’s not pretty, it’s not robust and I’m only mentioning it here as I’m ashamed of it. Having said that, it does a good enough job for me, for now, so that’s good enough.

Having just moaned about the tiredness of the SOAP protocol I’m now going to say that I again decided to make use of it to log into SalesForce rather than using OAuth. I do however have a justification. After the user has logged in I need to associate an OrgId with their connection so that I know to whom to forward the messages. The response from the SOAP login call on the partner WSDL gives me this information without me having to make another call. It just felt cleaner to do it this way. I’m sure there are a million other better ways to do it… but this is what I chose and why.

There’s not much else to say; the user logs into SalesForce via the partner web service and the server stores their OrgId against their connection. SalesForce sends Outbound Messages to a fake SOAP endpoint on the node server, which responds with an Ack. Each message is then parsed and sent to all connections that have the same OrgId as the incoming message. Job done.

The client remains pretty much the same as before; the only real difference is to include a reference to the socket.io client lib (which the server serves itself) and we’re all good to go.

All of this is now available in GitHub… it’s in two projects, SF-Console and sf-console-node; I’ll let you figure out what they are from their incredibly imaginative, albeit inconsistent, names. The projects are really not much more than a starting point for those that may be interested in the idea – for example, I know there are still things missing and the testing hasn’t been all that extensive, especially around multiple orgs connecting to the same node.js endpoint. Feel free to rip off, laugh at, copy or contribute; all I ask for is a little feedback: even laughter.

There’s a small issue with the HTTPS version of this at the moment… HTTP works though so if you’re brave go for it, otherwise wait for the update later.

SF Console version 1: Just hacked together

SF Console is my attempt to bring a Console.Log() statement into being within APEX, and hence give me some way, other than debug logs, of outputting diagnostic information around code execution.

In my previous post I pretty much just threw it into the wild and let it speak for itself; it is, after all, fairly simple. Since then I have been through 2.5 iterations of the idea, so I decided to take some time to stop and actually write about it and how it’s put together. It’s not complex, but I am going to split this out into three posts to fit in with the three different approaches I have taken. Links to the next two posts will become available below once the posts are written.

Structure

So, it’s a fairly simple idea – send a message about a change to a client (not within SalesForce) when you get to a certain point in the code. When I started this the streaming API wasn’t available, so I had to approach it slightly differently; I knew that I was going to need some form of simplistic message broker to receive the data from SalesForce and then send it to the client(s). This meant that I could essentially split the problem into three pieces: getting the data from SalesForce to the broker, the broker itself, and getting the data from the broker to the client.

SalesForce to Broker

This was going to be the easy part. And it was easy – I could just create a webservice on my broker and call it from within SalesForce. Well, that was my thinking for about 10 seconds until I remembered that we’re limited to 10 callouts per execution – which put that idea to bed fairly quickly. My next idea was to send an Outbound Message from a workflow – no limits, guaranteed to go through and asynchronous: it fits the bill perfectly.

In order to have something for the workflow to work against I created a custom object called ConsoleLog__c and assigned a workflow rule to this object that is triggered whenever a record is created. I then added an outbound message to the rule which passed a couple of fields from the ConsoleLog__c object and the Org Id (I needed the Org Id to make sure that updates for different Orgs can be told apart). All I needed now was an endpoint for the message – this would be my broker.

The Broker

This part of the system is a necessary evil – well, that’s how it started anyway; in part 2 I’ll talk about how it actually became a chance to learn something else! As such I decided that a simple C# web site would suffice for my initial needs. C# is always my natural fallback position – I’ve used it for years and I always like to flex those muscles; after all, if you don’t they’ll only fade away. Creating the web service to act as the endpoint for the Outbound Message was fairly straightforward and not worth writing about – once I received the message I needed to send it to the client. The client was going to be browser based, and I had decided before I even started looking at the rest of this project that I was going to use web sockets; this meant I needed to find a web sockets server in .NET to act as the other half of my broker. I didn’t fancy writing this myself – whilst I was interested in using the protocol I didn’t really want to learn it that deeply – so a quick search across the ol’ Internet and I ended up with SuperWebSocket.

SuperWebSocket gave me pretty much everything I needed out of the box, well out of the example actually as that included a simple client too.

The Client

The job of the client was simple – authenticate the user, connect to the web socket server, display any messages it receives on the screen. The UI was always going to be simple and simple is what I got. The interesting parts of the client are the need to deal with messages arriving out of order and the web sockets code. The web sockets code actually turned out to be really simple – a few lines of javascript allow us to connect to the server and receive messages from it.

    // create a new websocket and connect
    ws = new WebSocket('ws://sfutils.com:81/sfconsole');

    ws.onopen = function () {
        appendLine('* Connection established');
    };

    // called whenever data arrives from the server
    ws.onmessage = function (evt) {
        // the user can pause the console; drop messages whilst paused
        if (paused) return;

        appendLineC(evt.data.toString());
    };

    // called when the connection is closed
    ws.onclose = function () {
        appendLine('* Connection lost');
    };

I was impressed with how clean and simple the web sockets code is. It just worked. When I started all of this I thought I was going to end up having a great long ramble about how to get it all up and working, but no – it just functioned. I know that it’s baked into the browser implementation, but the job that’s been done exposing it to javascript is brilliant… there’s really nothing more I can say from my experiences with it.

The messages from SalesForce can arrive in any order; however, we really want to see them in the order in which they were created. The broker is responsible for just forwarding messages on – it doesn’t store them at all – and hence is the wrong place to try and sequence them. That leaves the job to the client, which makes the most sense, as the client knows which messages it has and therefore where to insert the one it has just received. Again it’s just a little javascript and nothing groundbreaking – it just compares timestamps – it’s more of an acknowledgement that this responsibility was pushed to the client.

Using web sockets means that we can have many clients connected to the same server receiving the same messages – the broker essentially just broadcasts to all listening clients for that Org Id.

Usage

So that gives us the necessary pieces to make this dream a reality. All that was needed was a simple class to create a ConsoleLog__c record and we’d be on our way.


public class Console{

	// fire and forget: queues the insert via @future
	public static void Write(string message){
		Write(message, false);
	}

	// doItNow forces a synchronous insert - needed from within an
	// @future context, or once you've used up your future calls
	public static void Write(string message, boolean doItNow){
		long timeStamp = DateTime.now().getTime();

		if(doItNow)
			LogNow(timeStamp, message);
		else
			Console.Log(timeStamp, message);
	}

	@future
	private static void Log(long timeStamp, string message){
		Console.CreateLog(timeStamp, message);
	}

	private static void LogNow(long timeStamp, string message){
		Console.CreateLog(timeStamp, message);
	}

	// inserting the record fires the workflow that sends the Outbound Message
	private static void CreateLog(long timeStamp, string message){
		ConsoleLog__c log = new ConsoleLog__c(Updated__c = timeStamp, Message__c = message);
		insert log;
	}

}

As you can see from the code there are two methods to log messages; one allows us to specify whether we want to execute synchronously or not, whilst the other gives us no choice. Asynchronous is obviously the preferable choice as it allows the rest of our APEX to carry on uninterrupted. There are, however, times when calling a future method isn’t acceptable – for example, anywhere you need to make more than 10 future calls, or within a future call itself… hence the option to doItNow. Once the record is created and committed to the database the workflow fires, the message is sent and shortly afterwards it should appear in our console.
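
For example, a couple of hypothetical calls from some anonymous APEX:

// default: queued via @future so the rest of the transaction carries on
Console.Write('About to recalculate balances');

// from inside an @future method (or when you're out of future calls)
// force the record to be inserted synchronously instead
Console.Write('Recalculation complete', true);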

Problems

This is a first pass and frankly not really thought through, so I’ve ended up with something that works but has a few flaws. The first relates to the way I implemented the broker – because I used it to serve the client pages, act as the Outbound Messaging endpoint and be the web sockets server, I needed to be a little more aware of the ports that I was using. I ended up having problems running the web sockets over port 80, which meant that if you were behind any kind of half-decent firewall you’d never create the web sockets connection.

The next problem was that after a period of time the web sockets connection would drop – not great for something that was intended to be a monitoring tool. I realised at the time that I needed to create some kind of heartbeat message, I just couldn’t be bothered to do it and when I came around to doing it I decided to start on version 2 anyway.

I haven’t tested the scalability of the system at all, but I would guess that my .NET web server will quickly become the bottleneck; this needs addressing for the tool to be truly useful.

Any event in SalesForce that causes a rollback will mean that the messages won’t be sent, as the ConsoleLog__c records won’t get inserted into the database. This is a fundamental flaw in my design and one that, as yet, I can’t see a way around – SalesForce being SalesForce, I am forced to use the main system to provide me with some orthogonal functionality that should be independent of whatever else is happening. This isn’t a complaint; it’s the way the platform is designed, and if we ever want a real Console.Log it’s going to take a platform change rather than some half-arsed code from me.

This version of the Console is currently available for people to try from within their own Orgs, have a look at my earlier post to see how to use it.  Having said that: it won’t be there forever; you’ll see why in part two…

All of the code I have relating to the Console is available for anyone to use – by the end of part two my intention is to have the whole system available in GitHub; I just don’t want to publish the current version of the Broker – but it’s coming.

Force.com Custom Console

Console

The debug logs in SalesForce are great; they’re a rich mine of information about the execution of the various parts of the platform.  However in some cases they can be too rich; trying to trace through code or inspect variables using System.debug() statements can be a bit of a pain to say the least.  Then there’s the case in integration projects where you’re waiting for a third party to call into your app via a webservice and you want to monitor the incoming calls, are they occurring, are they having any issues, what kind of data are they passing?  This is more of a monitoring use case and again the debug logs don’t really lend themselves to this kind of use.  And let’s not even get started on trying to debug @future calls.

There are a couple of ways to fix these kinds of issues: one is a more configurable logging system and the other is a console or stdout that you can send random messages to.  I initially set out to create the first, a more configurable logging framework, but during its development I found the need for the second, and it’s that which I want to share with you in this post.  The logging framework, for those that are interested, is a work in progress and should follow shortly.

I’m not going to dive into the architecture or design of the implementation right now, instead I’m going for a quick overview of how to use it followed by how to set it up so that you can give it a spin yourselves – then when I’m feeling more verbose I’ll explain why it is the way it is.

So, what have we got then?  Well essentially it’s a system that will allow you to type:


Console.Write('A message from far far away');

in your APEX and then have the message appear in a “console” window.

To achieve this you need the Console class and a few associated bits and pieces, which can be found in this unmanaged package (prod/dev or sandbox).  The static class Console has one method, Write, with two overloads.  There’s Write(string message, boolean doItNow), where message is obviously the message to log and doItNow controls whether it’s done synchronously or not.  The second overload is Write(string message); this always logs asynchronously.

You will then need to log into the Console web application using your SalesForce credentials.  No data is stored in this app; it’s purely a messaging bus.  Then start making some Console.Write() calls from APEX, in fact go and just execute some anonymous code to prove it works.

And that’s it – it really is that easy to use.

There are, as ever, a few caveats:

  • Firstly, you need an HTML5 browser!
  • It currently needs port 81 open – it’s the websocket connection that receives the broadcast messages… I hope to change this going forward.
  • Be aware that the log has to insert a dummy record for the message to be sent; therefore, if your code throws any unhandled exceptions the log messages won’t be sent.
  • For this same reason, be careful when using the synchronous method as each call will contribute to your DML limits.
  • Finally, when you’re using the code from within a method marked as @future you will always have to use the synchronous call.

The order in which the messages are delivered to the console cannot be guaranteed; however, the console should always display the messages in the correct time order, inserting late-arriving messages in the right location.

If left dormant the console may disconnect itself and a Connection Closed message will appear in the window; you will need to refresh the page to reconnect.  I will work around this in the future but for now this is just the way it is.

This started out life as a bit of fun and a way to play with some different technologies; it’s only whilst doing it that I realised it could be of real use to others and that I should post it here.  It’s for that reason that you need to remember it’s a little rough around the edges – for example, because I have to create a ConsoleLog__c record for each message you send, you will see the number of records for that object increasing over time; I need to write a schedulable task to clean them down – I haven’t… it’s things like that which give this project its roughness.
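
For what it’s worth, that cleanup needn’t be much – here’s a minimal, untested sketch (the class and job names are made up):

global class ConsoleLogCleaner implements Schedulable {
	global void execute(SchedulableContext ctx){
		// bin anything older than a day - tune to taste
		DateTime cutoff = DateTime.now().addDays(-1);
		// the LIMIT keeps us inside the 10,000 row DML limit
		delete [SELECT Id FROM ConsoleLog__c WHERE CreatedDate < :cutoff LIMIT 10000];
	}
}

// schedule it to run on the hour, every hour:
// System.schedule('Clean console logs', '0 0 * * * ?', new ConsoleLogCleaner());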

I will wax lyrical soon about the architecture of it and where I see the whole thing going.  But in the meantime please have a play with it and let me know what you think.  Could it be useful to you?  What features would make it more beneficial?  Or is it really just that bad that I should walk away from it all right now?