SF Console is my attempt to bring a Console.Log() statement to APEX, giving me some way, other than debug logs, of outputting diagnostic information around code execution.

In my previous post I pretty much just threw it into the wild and let it speak for itself – it is, after all, fairly simple. Since then I have been through 2.5 iterations of the idea, so I decided to take some time to stop and actually write about it and how it’s put together. It’s not complex, but I am going to split this out into three posts to fit in with the three different approaches I have taken. Links to the next two posts will become available below once the posts are written.

Structure

So, it’s a fairly simple idea – send a message about a change to a client (not within SalesForce) when you get to a certain point in the code. When I started this the streaming API wasn’t available so I had to approach it slightly differently; I knew that I was going to need some form of simplistic message broker to receive the data from SalesForce and then send it to the client(s). This meant that I could essentially split the problem into three pieces: getting the data from SalesForce to the broker, the broker itself, and getting the data from the broker to the client.

SalesForce to Broker

This was going to be the easy part. And it was easy – I could just create a webservice on my broker and call it from within SalesForce. Well, that was my thinking for about 10 seconds until I remembered that we’re limited to 10 callouts per execution – which put that idea to bed fairly quickly. My next idea was to send an Outbound Message from a workflow – no limits, guaranteed to go through and asynchronous: it fits the bill perfectly.

In order to have something for the workflow to work against I created a custom object called ConsoleLog__c and assigned a workflow rule to this object that is triggered whenever a record is created. I then added an outbound message to the rule which passed a couple of fields from the ConsoleLog__c object and the Org Id (I needed the Org Id to make sure that updates for different Orgs can be told apart). All I needed now was an endpoint for the message – this would be my broker.
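
If you want to sanity-check that part of the setup, inserting a record by hand from anonymous APEX is enough to fire the rule and send the outbound message. This is just a minimal sketch, using the same two fields the Console class later in this post populates:

// anonymous APEX: inserting a ConsoleLog__c record triggers the workflow
// rule, which in turn sends the outbound message on to the broker
insert new ConsoleLog__c(
	Message__c = 'Testing the outbound message',
	Updated__c = DateTime.now().getTime()
);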

The Broker

This part of the system is an unfortunate evil – well, that’s how it started anyway; in part 2 I’ll talk about how it actually became a chance to learn something else! As such I decided that a simple C# web site would suffice for my initial needs. C# is always my natural fallback position – I’ve used it for years and I always like to flex those muscles; after all, if you don’t they’ll only fade away. Creating the web service to act as the endpoint for the Outbound Message was fairly straightforward and not worth writing about – once I received the message I needed to send it to the client. The client was going to be browser based, and I had decided before I even started looking at the rest of this project that I was going to use web sockets, which meant I needed to find a web sockets server in .NET to act as the other half of my broker. I didn’t fancy writing this myself – whilst I was interested in using the protocol I didn’t really want to learn it that deeply – so a quick search across the ol’ Internet and I ended up with SuperWebSocket.

SuperWebSocket gave me pretty much everything I needed out of the box – well, out of the example actually, as that included a simple client too.

The Client

The job of the client was simple – authenticate the user, connect to the web socket server and display any messages it receives on the screen. The UI was always going to be simple, and simple is what I got. The interesting parts of the client are the need to deal with messages arriving out of order and the web sockets code. The web sockets code actually turned out to be really simple – a few lines of javascript allow us to connect to the server and receive messages from it.

    // create a new websocket and connect
    ws = new WebSocket('ws://sfutils.com:81/sfconsole');

    ws.onopen = function () {
        appendLine('* Connection established');
    };

    // when data is coming from the server, this method is called
    ws.onmessage = function (evt) {
        if (paused) return;

        appendLineC(evt.data.toString());
    };

    // when the connection is closed, this method is called
    ws.onclose = function () {
        appendLine('* Connection lost');
    };

I was impressed with how clean and simple the web sockets code is. It just worked. When I started all of this I thought I was going to end up having a great long ramble about how to get it all up and working, but no – it just functioned. I know that it’s baked into the browser implementation, but the job that’s been done exposing it to javascript is brilliant… there’s really nothing more I can say about it from my experiences.

The messages from SalesForce can arrive in any order; however, we really want to see them in the order in which they were created. The broker is responsible for just forwarding messages on and doesn’t store them at all, so it’s the wrong place to try and sequence them. This leaves the job to the client, which makes the most sense – the client knows which messages it already has and therefore where to insert the one it has just received. Again it’s just a little javascript and nothing groundbreaking – it just compares timestamps – it’s more of an acknowledgement that this responsibility was pushed to the client.
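
For a flavour of what that comparison looks like, here’s a rough sketch – the function and field names are purely illustrative rather than the actual client code, and it assumes each message carries the timestamp that the APEX code stamps it with:

    // illustrative only: insert an incoming message into the list so the
    // display stays ordered by the timestamp set when the log was created
    function insertInOrder(messages, incoming) {
        var i = messages.length;

        // walk back from the end until we find a message that was
        // created before the incoming one
        while (i > 0 && messages[i - 1].timeStamp > incoming.timeStamp) {
            i--;
        }

        messages.splice(i, 0, incoming);
        return i;  // the position to insert the new line at in the UI
    }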

Using web sockets means that we can have many clients connected to the same server receiving the same messages – the broker essentially just broadcasts to all listening clients for that Org Id.

Usage

So that gives us the necessary pieces to make this dream a reality. All that was needed was a simple class to create a ConsoleLog__c record and we’d be on our way.


public class Console{

	// queue the log message asynchronously (the default)
	public static void Write(string message){
		Write(message, false);
	}

	// pass doItNow = true to insert the log record synchronously
	public static void Write(string message, boolean doItNow){
		long timeStamp = DateTime.now().getTime();

		if(doItNow)
			LogNow(timeStamp, message);
		else
			Console.Log(timeStamp, message);
	}

	@future
	private static void Log(long timeStamp, string message){
		Console.CreateLog(timeStamp, message);
	}

	private static void LogNow(long timeStamp, string message){
		Console.CreateLog(timeStamp, message);
	}

	// inserting the record fires the workflow rule and hence the outbound message
	private static void CreateLog(long timeStamp, string message){
		ConsoleLog__c log = new ConsoleLog__c(Updated__c = timeStamp, Message__c = message);
		insert log;
	}

}

As you can see from the code there are two methods to log messages; one allows us to specify whether we want to execute it synchronously or not, whilst the other gives us no choice. Asynchronous is obviously the preferable choice as it allows the rest of our APEX to carry on uninterrupted. There are however times that calling a future method isn’t acceptable – for example, anywhere you need to make more than 10 future calls, or within a future call itself… hence the option to DoItNow. Once the record is created and committed to the database the workflow fires, the message is sent and shortly afterwards it should appear in our console.
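
As a quick illustration of that choice – the trigger here is purely hypothetical, any APEX context will do – usage looks something like this:

trigger AccountConsoleDemo on Account (before insert) {
	// default call – the log record is created via a @future call, so the
	// rest of the transaction carries on uninterrupted
	Console.Write('About to process ' + Trigger.new.size() + ' accounts');

	// pass true to create the record synchronously, for the times when
	// another future call isn't allowed
	Console.Write('Processing accounts', true);
}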

Problems

This is a first pass and frankly not really thought through. As such I’ve ended up with something that works but has a few flaws. The first relates to the way I implemented the broker – because I used it to serve the client pages, act as the Outbound Messaging endpoint and run the web sockets server, I needed to be a little more aware of the ports that I was using. I ended up having problems running the web sockets over port 80, which pushed them onto a non-standard port (81, as you can see in the client code above) – and that meant if you were behind any kind of half decent firewall you’d never create the web sockets connection.

The next problem was that after a period of time the web sockets connection would drop – not great for something that was intended to be a monitoring tool. I realised at the time that I needed to create some kind of heartbeat message; I just couldn’t be bothered to do it, and when I came around to doing it I decided to start on version 2 anyway.

I haven’t tested the scalability of the system at all, but I would guess that my .NET web server will quickly become the bottleneck – that needs addressing for this to be useful.

Any event in SalesForce that causes a rollback will mean that the messages won’t be sent, as the ConsoleLog__c records won’t get inserted into the database. This is a fundamental flaw with my design and one that, as yet, I can’t see a way around – SalesForce being SalesForce, I am forced to use the main system to provide me with some orthogonal functionality that should be independent of whatever else is happening. This isn’t a complaint – it’s the way the platform is designed, and if we ever want to have a real Console.Log it’s going to take a platform change rather than some half arsed code from me.

This version of the Console is currently available for people to try from within their own Orgs – have a look at my earlier post to see how to use it. Having said that, it won’t be there forever; you’ll see why in part two…

All of the code I have relating to the Console is available for anyone to use – by the end of part two my intention is to have the whole system available on GitHub. I just don’t want to publish the current version of the Broker yet – but it’s coming.