
When Two Become One: Data in VF Controllers

We all love Visualforce, right? It gives us finer-grained control over how things look for users, and users love us making things look good for them – often much more than any functionality we provide. And when dealing with Visualforce, it's often not long until we need to write a Custom Controller or a Controller Extension in Apex to fulfil some requirement or other. Controllers provide us with three main functions: access to data, access to business logic through actions, and navigation. Actions and navigation are pretty straightforward as there is only one way to implement them in your controllers. Data, however, can be implemented in two ways – and, what's worse, both at the same time!


All decimals are equal, but some decimals are more equal than others

I learnt something rather intriguing today on the old platform: decimals just don’t behave how I expected them to when converting them to strings. I have obviously set myself up with that line but nevertheless let me go through the motions.

Before today I would have expected the following code to display the same three strings on each of the lines.
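The original snippet isn't preserved here, but a minimal reconstruction of the kind of code I mean (the variable names are mine) would be:

```apex
// Three decimals, all exactly one, declared with different scales
Decimal a = 1;
Decimal b = 1.0;
Decimal c = 1.00;

System.debug(String.valueOf(a));
System.debug(String.valueOf(b));
System.debug(String.valueOf(c));
```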

That is, despite how I declared the decimals in code, they would all be stored the same and hence all output the same way – truncated to zero decimal places in this case, given they are all exactly one. However, when I run the page I get the following result:

Output 1

This struck me as a bit odd as it’s actually remembering how I declared the number and then using that when it converts it back to a string.

This got me thinking a little bit: is there some weird bug whereby the numbers are considered to be different? I didn't think this was the case, but you can never be too sure, so I added the following two methods to my test class and got the following output on the page:
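The tests aren't shown here, but they boiled down to something like this (my reconstruction):

```apex
Decimal a = 1.0;
Decimal b = 1.00;

// Value comparison: both are exactly 1, so this is true
System.debug(a == b);

// String comparison: '1.0' vs '1.00', so this is false
System.debug(String.valueOf(a) == String.valueOf(b));
```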

Output 2

As expected, comparing the numbers shows they are the same; they are both exactly 1. However, comparing the strings we see that they are different (I know we could see that from the previous output, but it's always good to see what the compiler thinks).

My next thought was: what does it do when we add decimals together that have been declared with different precision? This leads to the following tests:
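Again the original tests are missing; a sketch of the sort of additions I tried:

```apex
// The result takes the scale of the most precise operand, regardless of order
System.debug(String.valueOf(1.0 + 1.00));  // '2.00'
System.debug(String.valueOf(1.00 + 1.0));  // '2.00'
System.debug(String.valueOf(1 + 1.000));   // '2.000'
```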

Output 3

We can see from this that order of inputs doesn’t matter and that the result will always have the precision of the most precise input.

So now we know how it all works, even if it wasn't what I originally thought – but where does that get us? Well, strangely enough I had an application for this pretty much straight after I discovered it. I had to send a JSON object to a REST service and needed to make sure that all the numbers I sent were formatted to show two decimal places. This little oddity means that I can simply multiply my input by 1.00 and I know that when I take the string value of the result I will have everything formatted just the way I want it. Perfect.
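As a hedged sketch (the values here are invented), the trick looks like this:

```apex
Decimal price = 5;              // needs to appear as 5.00 in the JSON
Decimal padded = price * 1.00;  // multiplying by 1.00 bumps the scale to two places

System.debug(String.valueOf(padded)); // '5.00'
```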

Output 4


It has been rightly pointed out to me that this phenomenon is expected and well-documented behaviour (SF Docs). Thanks again @jamesmelv; before I write my next post I promise to read the documentation.

Generating Hashcodes

I was quite excited by the introduction of the hashCode and equals methods for custom classes in Winter '13, as I was finally going to be able to get a map to behave the way that I wanted it to. I started to write such a map, and a post to go with it, when I began to discover a few, shall we say, nuances about what was necessary to implement these methods – in particular the hashCode one. I will follow up with my magic map post, but in the meantime I want to discuss the implementation of the hashCode method.

I never studied computer science, or indeed maths, in enough depth to know the intricacies of hash codes, but I know enough to know that good hashing methods are hard to come by and potentially computationally expensive. Given both of these facts I came up with a cunning plan when designing my hashCode method – in fact it was more cunning than a… never mind. My plan was thus:

First of all I would create a string representation of my object by making use of the JSON serializer, and once I had this string I would simply call its hashCode() method.

The beauty of this plan is that I would be leveraging the internal hashing algorithm of the platform, which should be great: well optimised, well dispersed, etc. And I would only be using two statements to get said hash code. (I could have made it one, I know, but for clarity I kept it at two.) Basically I would get a great, generic hash generator without using a million lines of Apex, and hence continue to play nicely with the governor limits.
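In code, the whole cunning plan amounted to no more than this (spoiler: it doesn't actually compile, for reasons that follow):

```apex
public Integer hashCode() {
    String state = JSON.serialize(this);
    return state.hashCode(); // sadly not exposed on String in Apex
}
```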

Now, the thing about plans as cunning as this is that they normally don't work, and whilst that is a gross generalisation, in this case it was the unfortunate truth. It didn't work. The problem is that the hashCode method isn't exposed on the String object. I must admit I found this a little odd: in order to use a custom class in a set we have to implement this public method, and given that a string can be used in a set I would have expected the same public method to exist. I'm sure that it does exist; the problem is that it hasn't been exposed to us mere mortals.

Whilst I'm on this subject I will mention my general confusion at the implementation of this feature. I would have expected an interface to implement, or indeed a virtual class to override. Instead we're asked to add two methods to each class, seemingly at random. I'm not a great fan of this – there is obviously some jiggery-pokery going on as the code is compiled, and I think that rather than hiding this it would have been better to expose it to us. Heading down this route flies in the face of some OO principles and as such, in my eyes, makes the code much more confusing and potentially unclear than it needs to be; but what do I know?

Anyway, putting this to one side and returning to my hunt for a good hashing method. My next thought was to have a look around the web for hashing algorithms and investigate implementing one in Apex. After a bit of hunting I came across the FNV algorithm – it is spoken well of and seems easy enough to implement. Easy enough, that is, in most other languages, but Apex as ever presents its challenges. The problem is that you need to operate on an octet of data – just one byte. Using a string as our starting point, it's quite difficult to extract just one byte of data from it. You can split the string into individual characters, but you have no way of getting back to the integer that represents each character. A shame really, given that you can do this process the other way around: you can use String.fromCharArray() to convert a list of integers into a string. Oh well, this is part of the fun of this platform – finding your way around the obstacles.

So, how to navigate around this small issue then? Well, we could build a static map that lists all possible characters and maps them to integers. This could work, although with some of the extended character sets it could become rather painful to build, not to mention that we need to keep each value less than 256 to stay within our 8 bits of data. We need to represent our string of data some other way – some way that has a limited character set. Base64 was designed for exactly this purpose, so we could base64 encode our string – the size of the map is limited and the platform offers us a way to do this. You could also convert your string to a hex representation – the map is even smaller in this case, although the resulting string would be longer. To be honest I think this is a personal preference thing; I have no strong feelings about either solution. With this adaptation we can now execute the FNV algorithm in Apex, and it ends up looking something like this:
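The original listing is gone, so here is a hedged sketch of an FNV-1a style implementation over a hex string; the constants are the standard 32-bit FNV ones, but the structure and names are my own:

```apex
// FNV-1a over a hex-encoded string: each hex character maps to a small
// integer (0-15), comfortably within our one-byte limit.
public static Integer fnv(String hexInput) {
    Map<String, Integer> hexValues = new Map<String, Integer>{
        '0' => 0, '1' => 1, '2' => 2, '3' => 3, '4' => 4, '5' => 5,
        '6' => 6, '7' => 7, '8' => 8, '9' => 9, 'a' => 10, 'b' => 11,
        'c' => 12, 'd' => 13, 'e' => 14, 'f' => 15
    };
    Long hash = 2166136261L; // 32-bit FNV offset basis
    Long prime = 16777619L;  // 32-bit FNV prime
    for (String c : hexInput.toLowerCase().split('')) {
        if (!hexValues.containsKey(c)) { continue; } // skip split() artefacts
        hash = hash ^ hexValues.get(c);
        hash = (hash * prime) & 4294967295L; // keep the value within 32 bits
    }
    return hash.intValue();
}
```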

It's beautiful and it works. However, it has a small flaw that is very much specific to the platform: it is computationally expensive. In fact, in terms of script statements it's almost unbounded – limited only by the size of the string put into it, which in turn is only limited by the size of the heap! So whilst we have a way of generating a good hash code, we need to do something about controlling its cost. Ironically, the thing about a hash code is that it is always the same length no matter what input you give it, and it's this realisation that leads to the solution. The somewhat odd answer is to hash the string before putting it through our FNV hashing code. The best part is that we can make use of standard platform functions to do this for us: the Crypto class allows us to generate an HMAC from any input – ta da! We can now, in one script statement, compress any string into a reasonably unique digest (with a very, very low chance of collision) and, best of all, it will always be the same length, so we can now predict the effort needed by our hashCode method to compute our object's hash.

At this point I can imagine what you're thinking: why on earth didn't you just make use of the built-in hash function right at the start? Well, I thought about it, but it returns a string and our hashCode method needs to return an integer. So I would still need a way to convert a string to an integer – which is exactly what the FNV piece of code does for us.

Putting all of the pieces together we get something that looks like this:
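My version of the finished method looked roughly like this; the HMAC key and the method names are placeholders of mine, and fnv() stands for the FNV routine discussed above:

```apex
public static Integer hashOf(Object obj) {
    // 1. A generic string representation of the object's state
    String state = JSON.serialize(obj);

    // 2. A fixed-length digest via the platform's Crypto class,
    //    so the FNV loop has a predictable cost
    Blob mac = Crypto.generateMac('hmacSHA1',
                                  Blob.valueOf(state),
                                  Blob.valueOf('a-key-of-your-choosing'));

    // 3. Hex encoding gives us a small, known character set to work with
    String hex = EncodingUtil.convertToHex(mac);

    // 4. Finally, the FNV code converts the hex string down to an Integer
    return fnv(hex);
}
```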

I have kept the conversion to JSON in there as it allows me to provide a generic function for finding an object's hash code, although if you need to make use of internal state to uniquely identify your objects you would need to modify this piece. I then create an HMAC of the JSON and convert it to a hex value. This hex string is then broken down and, character by character, mapped to integers which act as our octets of data in the FNV code. Then, after all of that, we're left with an integer which we can use as our hash code.

I have to admit this does seem like an awful lot of work simply to generate an integer although, as I mentioned before, finding a good hashing algorithm isn't trivial stuff, and it is further complicated by needing to stick within governor limits. I feel as though what I have come up with here is a good compromise that makes use of standard platform functionality where possible and hopefully provides decent dispersion. However, as I said, I don't really know much about the maths behind hashing and am very happy to have any flaws pointed out to me. (I'm sure I've made a rookie mistake somewhere in there.)

As I said before it would be great if the platform exposed some way for us to do this – be it exposing their own native hash methods on a string or indeed providing an object.hash method that all objects have access to. But in the meantime I will be using this method to help me create custom sets and maps.

As a small side note: you may have noticed a couple of odd lines in the code where I convert a string into a long. Trust me, I believe this has to be in here… I don't want it to be, but there's what looks like a small platform bug. After I have done some more testing on this I shall report back.

Handling Optional Platform Features From Apex

Over the past few weeks I have been getting a few requests from some of our customers to add a feature to one of our products; some minor integration with Chatter. Being a company that prides ourselves on being responsive to our customers wants (once a little common sense has been applied of course) we decided to add said feature.

The feature itself was trivial to implement, but what struck me as I was thinking about it was: "what about our customers without Chatter enabled?" I know it's becoming less and less common these days, but back when we were consulting full time we heard numerous clients ask us to make sure Chatter was turned off – and no amount of marketing material was going to convince them to turn it on. With this in mind I was suddenly a lot less bullish about this new feature – I didn't want to alienate a whole section of our potential market just for this feature.

I had a dig around the Salesforce documentation to see what, if anything, could be done about this – I couldn't believe that I was the first person to have this worry. Things were a little sparse: I found reference to a new status code for DML exceptions and also to a new exception type. It appeared that the situation had been thought about, but how exactly it was handled still seemed unclear.

When faced with a lack of documentation, indeed when faced with almost any uncertainty, I do what any developer would do: I write some code! In this case, code to prove once and for all what really happens. I whipped up a very simple managed package with a VF page which inserted a FeedItem, captured any exceptions and output them to the user. Simple. When I created the package I left the Chatter "Required Feature" unchecked – this seems a bit backwards (I have Chatter features, but you don't need Chatter turned on), but this is what makes it an optional platform feature. Once I'd uploaded the package I spun up another org, disabled Chatter in it, installed the package and navigated to my test page.

Boom! It broke. Well, actually it worked, but it threw an exception, so it did in fact break. Given the mention of exceptions in the documentation this was pretty much what I expected to happen, although to be honest it's a little disappointing.

Why do I find it disappointing? After all, I have a mechanism with which I can have Chatter features in my product without needing the user to have Chatter turned on in their org. Well, if we were measuring happiness purely in terms of whether I have met my requirements then you're right, I should be ecstatic. However, I find it disappointing that I am forced by the platform to use exception handling techniques to control the logic of my application.

Look at this code to see what I mean:
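The page's code isn't reproduced here, but the pattern was essentially this (recordId is a placeholder):

```apex
FeedItem post = new FeedItem();
post.ParentId = recordId;   // the record whose feed we want to post to
post.Body = 'Hello from our managed package';

try {
    insert post;
} catch (RequiredFeatureMissingException e) {
    // Chatter isn't enabled in this org, so quietly skip the feature
}
```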

First off, I just want to say that it's not all bad – the exception that gets thrown is very specific to this problem, so we can be sure that the only reason we're in that catch block is that Chatter isn't enabled. If we want to deal with other exceptions from that code then we can use other catch statements. All good. Now for the not-so-good side of things. First, a bit of a bugbear: the name of the exception is very generic, yet according to the documentation it can only be thrown if Chatter isn't enabled. Either change the name of the exception or update the documentation to reflect the fact that in the future this may cover features other than just Chatter.

OK, with that out of the way, let's get back to the question of logic. The idea behind exceptions is to provide a way to change the normal execution flow of a piece of code when an error occurs. It gives the developer an opportunity to handle this error – this thing that wasn't expected to happen. In this case, though, we are expecting that some of our customers will not have Chatter enabled. This isn't an erroneous state of affairs; it's just the way life is. What we really want to say here is: "Is Chatter enabled? Yes? OK then, insert a record, because we can." Instead we're forced to say: "Insert a record. Oh balls, it didn't work, must be because it's not enabled, oh well." We're using exception handling to deal with the consequences of not being able to make a logical decision upfront.

All of this is just bad practice and something that we shouldn't be doing. Another reason often cited in other languages is that throwing an exception is a computationally expensive task. I can't imagine that this is any different in Apex, especially given that it now compiles down to Java bytecode anyway, although I haven't tried to eke out any benchmarks to prove it.

Interestingly, it feels like the designers of the platform have a penchant for this anti-pattern. That seems like a strong thing to say – what do I mean? Well, a couple of things make me feel this way. The first is the fact that we see this pattern in Apex already: we're all guilty of assigning a LIMIT 1 query directly to an instance of an object and catching the exception if there are no matching records. What should happen in this case? Well, how about just assigning null to the variable? The second thing that piques my worry is the name of the exception: it's so generic that it feels like we're going to see it thrown for other features as well, which will force us to replicate said horridness over and over.

So what can be done about all of this? After all, there's no point complaining unless you have a suggestion for improvement. First things first: I am not against this exception! I know it sounds like I want to see it burnt at the stake, but that's really not the case. I can completely understand why it is needed. In fact I wouldn't change the exception at all; it's completely appropriate for the action taken in the given circumstances. However, what is needed is something to complement it – something that is the equivalent of the Limits class that already exists. How about a class called Features? It would have a series of static methods that give us the ability to test for the existence of optional features before we try to make use of them. That way we could use conditional code structures to control the logical flow of the application, allowing us to change our try/catch example into something a bit better structured:
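To be clear, no such Features class exists on the platform; this is purely how I imagine the improved version would read:

```apex
// Hypothetical: Features.isEnabled() is my imagined analogue of the Limits class
if (Features.isEnabled('Chatter')) {
    FeedItem post = new FeedItem();
    post.ParentId = recordId; // placeholder, as before
    post.Body = 'Hello from our managed package';
    insert post;
}
```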

This feels like much better code; not only does it remove the nasty exception handling, it also becomes much clearer what the code is trying to achieve.

Anyway, enough of my disappointment. The crux of the matter is: there is a way to handle "optional" features on the platform, such as Chatter, from within your code. And this is great, because it means I can add my trivial Chatter-based feature to my application and not worry about alienating a whole section of my market, or indeed having to try to manage two code bases.

Just be careful when you do make use of this exception, and please remember that you're using an anti-pattern because you have to, not because you should.

Accessing Record Types Consistently

Reading other people's code is great; be it good or bad, functioning or broken, there is always something to discover – and generally it is something that makes you think: "d'oh, why didn't I see that before?"

I liked my latest discovery so much that I thought I’d share it myself for those, like me, that just hadn’t thought of it or indeed seen it elsewhere.

Record Types are the subject. We are all aware of the complications that record types bring to Apex; often you will want to perform a different action in your code depending on the type of record that you have, or you'll want to get a value from one field for one record type but from another field for the rest. That's cool and it makes sense, but how do you write the code to do that? Hard-coding the record type id is a common practice and it just about works, so long as you have declared the record types in your production org, as the ids stay the same when you create sandboxes. However, things don't normally happen this way. Often a developer will create a new record type in her sandbox and make use of it there, but when you move between environments this code breaks, as the record type doesn't exist – and if you create it, the id will be different; try running CI in an environment like this! Another problem I see with this method is that, even if the ids are consistent between environments, the code is fairly meaningless: what exactly is id 012…? So much for self-documenting code, eh?

So, as you can probably guess this technique that I saw helps solve these problems. Actually what I saw was a step in the right direction… I’ve just taken it a little bit further.

In the code base that I was reading last week I saw a lot of declarations like this at the top of classes:
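The declarations looked something like this (the record type name here is illustrative):

```apex
// Looked up by name at runtime, so it survives the differing ids
// between sandboxes and production
private static Id accountTest1RecordTypeId =
    Schema.SObjectType.Account.getRecordTypeInfosByName()
        .get('Test1').getRecordTypeId();
```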

That solves a whole bunch of issues, but why is it being repeated time and time again at the top of each class? Surely it would be better for all involved if these variables were declared and maintained in one place? The result of this thought is below: first a class called RecordTypeContainer that actually does all the hard work, and then one called AccountRecordTypes which contains all of the record types for the Account object.
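The original classes aren't included here, but a sketch of the shape I mean would be:

```apex
// Does the hard work once per object: query the record types and cache them
public class RecordTypeContainer {
    private Map<String, Id> idsByDeveloperName = new Map<String, Id>();

    public RecordTypeContainer(String objectType) {
        for (RecordType rt : [SELECT Id, DeveloperName FROM RecordType
                              WHERE SObjectType = :objectType]) {
            idsByDeveloperName.put(rt.DeveloperName, rt.Id);
        }
    }

    public Id idFor(String developerName) {
        return idsByDeveloperName.get(developerName);
    }
}

// A thin, readable wrapper per object; repeat this pattern for each sObject
public class AccountRecordTypes {
    private static RecordTypeContainer container =
        new RecordTypeContainer('Account');

    public static Id Test1 {
        get { return container.idFor('Test1'); }
    }
}
```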

It would have been possible to wrap this all into one class, but splitting it out like this makes for easier-to-read code in my mind, as I can say AccountRecordTypes.Test1 in my code and it's fairly obvious what I'm referring to. Obviously adopting this structure means we end up with lots of XXRecordTypes classes, but it becomes a convention and not too much of a hardship. And whilst we have lots of RecordTypes classes, we can be assured that they all work the same way because they all make use of the RecordTypeContainer class – this handles the initial fetching of the record types, the storing of them and their retrieval, in a consistent and efficient manner.

The code itself is fairly simple and as such I’m not going to give a huge commentary on it; hopefully you can see how to apply it to other objects within your orgs but if not then just leave a comment.

Obviously it would be great if the platform had some form of static access to record types like this baked in and I’m sure over time it may well be added but in the meantime I’ve found this pattern to be really helpful and hopefully others will too.

An unusual use case for Rypple?

A slight departure from my normally technical posts to give me a chance to get a random thought off my chest.

Rypple is doing its best to revolutionise the way in which staff and teams are managed, and it seems to be a great tool to do it with. However, lately I've been wondering how it might be applicable in less obvious situations.

I’m going to preface the rest of this post with the admission that I’m basing everything that I write here on a vague understanding (a demo and a brief play) of Rypple. And that I only really have indirect experience of the education sector (I went to school, I worked in a school for a year and my wife is a teacher). Therefore all of this might already exist or be complete rubbish.

The other day someone from my wife’s school was asking her what the best part of being on maternity leave was. I was expecting her to respond with some comment about the joys of raising our son and watching him grow into a small boy but no. Instead she replied without hesitation that it was not having to write reports. It was at this point I realised her disdain for reports was the same as my disdain for appraisals.

A bit more thinking and I started to see more parallels between the two processes. Students have targets – generally individual when related to learning, but the concepts of "table teams" and "houses" also exist. There are also many parties invested in the performance of a pupil: teachers, administrators, parents and sometimes the pupils themselves. As things stand, the targets are generally set and monitored by the same teacher on a termly basis, and the parents on the whole only receive feedback once or twice a year in a formalised "report".

As I said, the parallels to the working world are, to me at least, very striking. So how could Rypple be used? Well, targets continue to be set by the teacher; however, rather than being kept on a dusty piece of paper they are now visible to all staff, allowing any of them to note that pupil's progress towards their targets. This could be particularly useful in a secondary school environment, where a maths target may well be met in a physics lesson. Then give parents access to their child's account so that they can quickly catch up on their child's progress and goals, allowing for more efficient teacher–parent communication and giving parents the best opportunity to support their child. More senior members of staff would be able to get a quicker and more transparent view of the progress of their pupils.

I'm a bit vague on the current ability of Rypple to support this, but imagine if, when a child switched schools, you simply transferred their Rypple record. This would give you immediate access to their current targets, not to mention their history. As a teacher you may want to change these targets, but at least you have something decent to start with rather than a crumpled piece of paper that may or may not have found its way to you.

Yes, I know I sound like I'm piling yet more administrative pressure on teachers, but in reality they should be doing a lot of this already. Couple that with such up-to-date and fluid communication with the pupil, their parents and senior management, and it should actually start to remove some of the end-of-term report pressure – hopefully, eventually, removing the need for this big-bang communication completely.

There are undoubtedly problems with all of this, but nothing that seems insurmountable. In my mind this actually just fits into a much bigger potential revolution of the education sector and the way in which it manages its data. For example, if we start to think of pupils and parents as a school's customers, then it's only a short hop, skip and a jump to realise that the data actually fits very nicely into Salesforce.

Anyway, these are the general ramblings of someone who doesn't really know what he's talking about, but it's got to be worth getting someone who does know this area to think about it. Hasn't it?

CoffeeScript in VisualForce

CoffeeScript has been around for a few years but has been gaining a lot more traction recently. For those unfamiliar with the language, it allows you to write JavaScript using a more Ruby-esque syntax. Its brevity and clarity have made it increasingly popular, and I thought it might be good to find a way to make use of it from within VisualForce pages.

CoffeeScript is "compiled" to JavaScript which is then sent to the user's browser; this means browsers don't need to know anything about this new language, but developers can write more robust JavaScript more quickly. Whilst this is great news for browsers, it's not so great for the prospects of using CoffeeScript on the platform. Why? Well, the way things are expected to happen in the CoffeeScript world means that we need a compiler that can run server side – which in turn means we need a CoffeeScript compiler written in Apex.

Obviously writing a compiler in Apex isn't beyond the realms of possibility, but it is beyond the realms of what I'm willing to do to get this working at the minute. It is probably the best solution to the problem, but I decided to look for an alternative.

My first thought was inspired by the fact that CoffeeScript is written in CoffeeScript. Given this, and the fact that CoffeeScript is just JavaScript, I realised that it must be possible to run the CoffeeScript compiler in the user's browser. Unsurprisingly this is true and has been done. In fact, if you include the coffee-script.js script and then wrap your CoffeeScript in <script type="text/coffeescript"> tags, the CoffeeScript compiler will process your scripts in the browser. Nice. Well, kind of. It seems a bit wrong in my mind to be foisting this effort onto your users, and the approach is frowned upon by the CoffeeScript community in general.

So, whilst this approach would have made it possible to use CoffeeScript in VisualForce, I set out to look for another way of achieving this that was cleaner for the developer and the end user. In this vein I turned my attention to other places that I could run JavaScript. The immediate, and somewhat obvious, answer was node.js running on Heroku. This was a simple solution: node.js is great at executing JavaScript and the CoffeeScript compiler is even available as an npm package. The idea was a simple service which accepts a request body containing CoffeeScript, compiles it and returns the JavaScript… nice!

The JavaScript above runs on node.js on Heroku and does exactly what I described in the previous paragraph. It's really that simple. I was using uglify-js to minify the output but left it out of the examples to keep them to the point. Feel free to have a play with this: just post some CoffeeScript to it and you should get a JavaScript response.

So now that we have a server-based approach to compiling CoffeeScript, how do we make use of it easily from within VisualForce? My aim here was to keep it simple; it should be as easy as adding a script tag. To achieve this simplicity I put together a basic Apex component which takes the name of a static resource (your CoffeeScript file) and in turn outputs JavaScript.

Then in the controller all we need to do is find the CoffeeScript resource, pass it out to our Heroku instance and send the response to the browser – simple.
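The controller isn't shown here; a hedged sketch of it, with a made-up Heroku endpoint, might look like:

```apex
public class CoffeeScriptController {
    // Set from the component attribute: the name of the CoffeeScript static resource
    public String resourceName { get; set; }

    public String getJavaScript() {
        // Find the CoffeeScript source uploaded as a static resource
        List<StaticResource> sources = [SELECT Body FROM StaticResource
                                        WHERE Name = :resourceName LIMIT 1];
        if (sources.isEmpty()) { return ''; }

        // Post it to the compiler service running on Heroku
        HttpRequest req = new HttpRequest();
        req.setEndpoint('https://example-coffee-compiler.herokuapp.com/'); // placeholder
        req.setMethod('POST');
        req.setBody(sources[0].Body.toString());

        try {
            HttpResponse res = new Http().send(req);
            return res.getBody();
        } catch (Exception e) {
            return ''; // minimal error handling for now
        }
    }
}
```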

There we have it, a fully functioning CoffeeScript VisualForce component; we too now have access to the benefits that CoffeeScript brings!

Using it is very simple: upload your CoffeeScript as a static resource and then include a line like this in the VisualForce page that you want to use the uploaded script in. Hey Presto! Job done.

Obviously there are shortcomings of this component as it stands. One massive improvement, for example, would be to cache the returned JavaScript as another static resource so you only need to call out to Heroku once. There probably also needs to be some better error handling; at the minute I just return a blank string if things went wrong, whereas it might be nice to know why.

Having said that, I have achieved what I set out to do and hopefully have provided enough of a platform for those who want to include CoffeeScript in their VisualForce to get started. In the meantime, if I have another evening where I'm feeling a little bored, I might round this component out and add it to GitHub for easy consumption by the masses.

The First Bristol Dev Meetup

Last Thursday was a sunny one in Bristol, UK. It was also the inaugural meeting for the South West’s dev group.

The group met in The Elephant in the city centre and was, according to those left at the end of the night, a great success. We never expected the group to be big when we were planning its birth, and the fact that a total of seven developers turned up to swap tales was pleasantly surprising.

The format was intentionally informal and the evening consisted of a few beers, a good discussion about how we'd like to see the meetup evolve and, of course, a group therapy session around the latest set of "challenges" the platform has presented us with. As you'd expect, the topics were very varied: from query parameter stripping and reordering, to testing HTTP callouts, to JSON parsing in Summer '12.

Going forward we intend to meet on the last Thursday of every other month; this allows us to fold in nicely with the South West User Group as well. It also means that we'll be more or less due an event around the time of Dreamforce, so we may well endeavour to do something terribly English, or worse, West Country-esque whilst we're there. For the meantime we're going to keep the fairly informal format, given the size, although we may think about having "themes" to base our discussions around – giving us some focus without needing to go down the whole presentation route just yet. Our biggest initial challenge is to find those developers that hang around the South West, and maybe hope for a slight downturn in the weather as well, to encourage a few more people indoors.

If you're reading this and live or work locally then I really do hope you can come along to the next event. Given our minority standing in the developer community as a whole, it's easy to shrink away into the corner and feel forgotten about. But at our events there's no need to feel like that: we're all in the same boat, and so we welcome others with outstretched arms.

Finally, no developer meetup would be complete without a picture or two to prove that it happened – unfortunately I only have a couple of pictures that prove I was in the pub with a group of people, one of whom just happened to be wearing a cap!

and Loggly – watching the insides of managed packages

Logging on the platform is something that I’ve always had an issue with – not because it doesn’t exist but because it feels so heavy-handed. The system log gives a lot of information when sometimes I just want to know that I’ve hit a certain trace, or that a variable has been set to a specific value, or simply to monitor how the code is performing: it’s always better to have seen an error and be working on a fix before the customer comes to you!

As such this is a subject that I keep coming back to, but as of yet I just haven’t found a neat enough solution; my previous forays were based on my SF Console project and, whilst they worked, I was never really satisfied with them. The idea was kinda right but the “console” part of it was just never going to be complete enough – it did, after all, just start out as a reason to have a play with some Node.js.

This week though the issue has resurfaced, albeit in a slightly different way, which has given me the perfect excuse to get my hands dirty with Loggly. Loggly is a cloud-based logging service which has been on my radar for a while now, but I have struggled to justify using it. I’m sure I’m late to the party but hey, I am at least here. From the little I have played with it so far I quite like it – the command-line interface in the browser is pretty cool, the automatic indexing of log entries is fantastic and of course it has a multitude of ways to integrate with it. It will also let you monitor and alert based on your logs through the use of Alert Birds, although I haven’t made use of them yet.

My new lust for a logging solution has come from the need to instrument some of the managed packages that we’re building over at WickedBit. As people who are involved with software we all know that one of the most constant constants is the ability of a customer to break your software within ten minutes of getting their hands on it. It happens to us all and should be no surprise to anyone who truly works with software. When this happens the debug logs are only any help if the customer gives you login access to their org, something which is fairly intrusive and not something everyone wants. The next best step is to be able to turn on some kind of remote error logging, and this is where Loggly comes in.

First of all we need to get Loggly set up. We’re going to be using the incredibly simple REST interface provided by Loggly, so all we need is an account (just sign up for the basic account; it’s free) and then we need to create an HTTP input. I have decided to create an HTTP input for each of my applications as this seems like the most natural segregation of data, but each to their own – there are no rules here. The most important thing is to take note of (or at least remember where to find) the input key. And that is that; Loggly is now good to go.

What follows next is the Apex that I wrote to act as my logging mechanism, and it probably helps if I explain a couple of my extra requirements before I show you the code. I need to be able to turn the entire logging mechanism on and off. I also need to be able to turn on only specific log statements; this is, after all, a callout and we have precious few of those, so we don’t want every log statement turned on all of the time. Instead I want to be able to turn on only the statements I need to investigate a certain problem. With those desires out in the open, here is the Apex logging class that I have used to integrate with Loggly – well, a version of it at least; I have stripped out the dependency injection stuff to try and keep it cleaner for this post.
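A minimal sketch of such a class, assuming hypothetical names throughout (LogglyLogger, a LogSettings static resource, the input key and so on are my placeholders, not the originals):

```apex
// Sketch of a Loggly logging class – all names here are illustrative.
public class LogglyLogger {

    // The Loggly HTTP input key, pulled out so the class can be
    // reused across applications.
    private static final String INPUT_KEY = 'your-loggly-input-key';

    // The "big" switch: when false, no log statement ever makes a callout.
    private static final Boolean LOGGING_ENABLED = true;

    // Static resource holding a JSON-serialized map of location => enabled.
    private static final String SETTINGS_RESOURCE = 'LogSettings';

    public static void log(String location, Map<String, String> values) {
        if (!LOGGING_ENABLED || !isLocationEnabled(location)) {
            return;
        }
        doLog(location, values);
    }

    // Only locations flagged true in the static resource may make the callout.
    private static Boolean isLocationEnabled(String location) {
        List<StaticResource> resources = [
            SELECT Body FROM StaticResource WHERE Name = :SETTINGS_RESOURCE
        ];
        if (resources.isEmpty()) {
            return false;
        }
        Map<String, Object> settings = (Map<String, Object>)
            JSON.deserializeUntyped(resources[0].Body.toString());
        Object flag = settings.get(location);
        return flag instanceof Boolean && (Boolean) flag;
    }

    private static void doLog(String location, Map<String, String> values) {
        // The input key goes in the URL, so no authentication is required.
        HttpRequest req = new HttpRequest();
        req.setEndpoint('https://logs.loggly.com/inputs/' + INPUT_KEY);
        req.setMethod('POST');
        // This content type makes Loggly index the key/value pairs.
        req.setHeader('Content-Type', 'application/x-www-form-urlencoded');

        // Always pass the location and some org information to aid searching.
        String body = 'location=' + EncodingUtil.urlEncode(location, 'UTF-8')
                    + '&orgId=' + UserInfo.getOrganizationId();
        for (String key : values.keySet()) {
            body += '&' + EncodingUtil.urlEncode(key, 'UTF-8') + '='
                  + EncodingUtil.urlEncode(values.get(key), 'UTF-8');
        }
        req.setBody(body);
        new Http().send(req);
    }
}
```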

A quick run through of the code: the first line is the Loggly input key that I mentioned earlier; I pulled this out to make running this for different applications fairly easy. The next couple of lines help control when the code can actually make the logging callout – the first is a simple boolean that is the “big” switch I talked about. The second is the name of the static resource which contains a JSON-serialized Map; the idea is that each call to the Log function passes in a “location”, and the map lists the locations that are currently allowed to make the logging callout. This could instead be moved to a set of custom settings, but I have opted for the flexibility of a simple text file that I can send to the customer to change what logs I receive in Loggly; whilst still not ideal, this should make the process much easier for the customer.

The crux of the class is the doLog method, as this is the one that actually makes the callout (if all the settings are good) to Loggly. The callout is very simple: the input key is passed in the URL, so there’s no authentication required, and the data is passed across as a series of encoded key/value pairs in the body. It’s important to set the header to application/x-www-form-urlencoded because when Loggly sees this it takes the key/value pairs in the request body, automatically JSON encodes them and indexes them, making your logs searchable. To help with the searching I always pass the location and some information about the Organization making the call, but otherwise the caller of the Log method is free to pass in any values they please in the values map.

All in all I’m fairly pleased with this initial implementation of a Loggly logging framework and my initial tests of using it from within a managed package are going well. I’m sure that things will change over time but it’s a good starting point.

With all of this in place, making use of the code is fairly simple. First, add a static resource that looks something like this:
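Something along these lines, assuming the resource is simply a map of location name to boolean (the location names here are made up):

```json
{
    "AccountTrigger.beforeInsert": true,
    "PaymentService.charge": false
}
```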

Then make an Apex call like this:
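For example (LogglyLogger, the location name and the values are all placeholders):

```apex
Map<String, String> values = new Map<String, String>{
    'message' => 'about to make the charge callout',
    'amount'  => '42.50'
};
LogglyLogger.log('PaymentService.charge', values);
```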

And then jump over into Loggly and search for the fruits of your labour:

Feel free to take, adapt and make use of the code above… it’s too simple to warrant a GitHub repo so I’m just going to leave it as is… of course feedback is always welcome too; as ever with my code, this is just the germ of an idea really.

Sharing my Salesforce screen with remote users

When it comes to technology I love proving that things are possible even if, at the time, they have little practical value.

The latest example of this trait comes in an idea I had whilst tramping through the Welsh countryside last week: wouldn’t it be cool to be able to broadcast your view of Salesforce to other people’s browsers? OK, maybe it’s not that cool but as I said it’s more about proving it’s possible rather than practical.

Now obviously this is possible, otherwise this would be a very, very short post, and as such a little video demonstration of what I’m talking about shouldn’t spoil any surprises. Oh, and there’s no sound as I didn’t think my dulcet tones would add much to this.

[vimeo id=”42704811″]

OK, so how did I manage to achieve this amazing, yet pointless, feat of engineering? If I break it down and change the language a little you’ll probably start to figure it all out for yourselves: there is a publisher that wants to broadcast its actions to one or more subscribers. When I put it this way it all started to feel much more achievable.

How Best to Broadcast Messages

The first part that needed to be solved was the transport mechanism: how do I take a message from one user and broadcast it to an unknown number of subscribers? My first thought was to jump immediately to the Salesforce Streaming API; this takes a message and broadcasts it to all subscribers. However, when I thought about it a little more I realised that it really wasn’t a great fit, as it requires the broadcasting user to effect a change to an object to force the Streaming API to push the change message out. This felt incredibly hacky and, whilst my aim was simply to prove this is possible, I have an unwritten rule that the solution should be semi-elegant as well; I mean, what’s the point otherwise?

It was then that I remembered the amazing PubNub theme song (go on, watch it, you’ll be singing it for weeks). PubNub was exactly what I was looking for as the broadcast mechanism; all that was left was to implement it in a couple of places in Salesforce and I was home and dry. PubNub, as well as being the catchy “two way internet radio”, allows you to implement the pub/sub model with very little effort. And with support for a huge range of languages out of the box, a simple REST interface if you want to roll your own, and a free tier of 1 million messages per month, it really couldn’t be easier.

What to Broadcast

The next piece of the puzzle was to decide what I was going to send. It was at this point that I decided that I should probably come up with some kind of use case for this little technological jolly; otherwise I would just be chucking information around from browser to browser for no good reason.

I didn’t want something too “real life” as that would lead to all sorts of complications that I just didn’t want to consider. As such I took a real-world situation and simplified it into the following: every Monday morning as a team we all jump on Skype (we’re spread right around the world) and go through pertinent Accounts, Opportunities and Cases to ensure we’re all up to date (we go through JIRA too but that’s completely out of scope). A kind of weekly stand up, but not! The trouble is people are rarely looking at the record we happen to be discussing which, more frequently than it should, leads to the inevitable “which Account is this again?” question. It’s boring, time consuming and downright frustrating. Therefore, using this code, it should be possible for one person to navigate to a Salesforce record and for everyone else on the call to have their browser display that record too. We don’t need to see Visualforce pages or reports, just records.

This slightly fictitious use case neatly gave me the answer to the question of what to broadcast: the currently viewed record.

How to Broadcast

Whilst part of me desperately wanted to write an Apex wrapper for the PubNub API, I decided that simply inspecting the URL of the current page via JavaScript and broadcasting that via the standard, simple PubNub JavaScript API would be my quickest and most reliable route to success. In actual fact it’s probably the only really sensible way to do this, as the browser is the best place to see what the user is currently viewing – trying to do this in Apex would be impossible.

When I looked at the PubNub JavaScript API I knew that this was going to be incredibly simple to code up. The only question was where to put it so that it appeared on every page. This led me to TehNrd’s article on Showing and Hiding Buttons on Page Layouts; in it he also needed to have some JavaScript on every page. He actually goes one step further than I needed to by including the Salesforce API library – I just needed to make sure this executed all of the time.

So what code did I need to put in the sidebar? The JavaScript needed to inspect the page URL, extract the record Id and then publish it on the PubNub channel. To achieve this I took the standard PubNub JavaScript API example and modified it slightly. This gave me the following:
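Something along these lines – a sketch only; the channel name and keys are placeholders, and the id-extraction regex is my own rather than the original:

```javascript
// Extract the 15–18 character Salesforce record id from a page URL,
// e.g. https://na1.salesforce.com/001A0000003FakeId
function extractRecordId(url) {
  var match = url.match(/\.salesforce\.com\/([a-zA-Z0-9]{15,18})(?:[/?#]|$)/);
  return match ? match[1] : null;
}

// Publish the current record id on a PubNub channel. The guard means this
// only runs in a browser with the PubNub library loaded.
if (typeof PUBNUB !== 'undefined' && typeof window !== 'undefined') {
  var pubnub = PUBNUB.init({
    publish_key: 'your-pub-key',    // placeholder
    subscribe_key: 'your-sub-key'   // placeholder
  });
  var recordId = extractRecordId(window.location.href);
  if (recordId) {
    pubnub.publish({ channel: 'sf-screen-share', message: { id: recordId } });
  }
}
```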

After I put this into a homepage component, all I needed to do was figure out how to make use of the id at the subscribers’ end.

How to Subscribe (and react)

As I wasn’t doing anything particularly exciting with the PubNub API I was again able to take the simple example from their website, modify it slightly and be up and running receiving messages. I stuck this code into a Visualforce page and added a simple alert to make sure I was receiving the message. A quick test from one browser caused a message box with the record id to appear in another browser – at this point I knew I had two and a half out of the three pieces of this puzzle working. All that was left was to display the record at the subscriber’s end.
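The subscriber side is a similarly small sketch (again the channel, keys and Visualforce page name are placeholders of mine):

```javascript
// Build the Visualforce page URL to reload with the broadcast record id.
// The page name "ScreenShare" is illustrative only.
function buildPageUrl(recordId) {
  return '/apex/ScreenShare?id=' + encodeURIComponent(recordId);
}

// Subscribe and reload the page with the received id in the query string,
// so the page can hand it to apex:detail. Guarded so it only runs in a
// browser with the PubNub library loaded.
if (typeof PUBNUB !== 'undefined' && typeof window !== 'undefined') {
  var pubnub = PUBNUB.init({ subscribe_key: 'your-sub-key' }); // placeholder
  pubnub.subscribe({
    channel: 'sf-screen-share',
    message: function (m) {
      window.location.href = buildPageUrl(m.id);
    }
  });
}
```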

As I only needed to display records and nothing more complex than that, I decided the apex:detail component would be a perfect fit. The apex:detail component simply takes the id of a record to display and then outputs it in the Visualforce page as if you were viewing the record directly – this comes with the added bonus that if a user doesn’t have permission to view a particular record or field then it won’t be displayed, with no effort from myself. All I needed now was a way to take the id from the PubNub message and set the apex:detail parameter with it. The easiest way of achieving this was to reload the Visualforce page, passing the object id in the query string at the same time. This would then allow me to extract the id in the controller and set the parameter of the apex:detail component. Simple!
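As a sketch, the page itself can be almost nothing – in fact the id can even be bound straight from the query string into apex:detail, skipping the controller entirely (a simplification of my own; the page name is illustrative):

```xml
<apex:page>
    <!-- Render the record whose id arrived in the query string. -->
    <apex:detail subject="{!$CurrentPage.parameters.id}" relatedList="false"/>
    <!-- The PubNub subscribe script goes here so the page reloads
         itself whenever a new record id is broadcast. -->
</apex:page>
```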


For 30 minutes’ work this has achieved what I set out to achieve, and that’s the ability to broadcast my view in Salesforce to others without using a screen sharing application. I have proved that it’s possible and that makes me happy. However, it is very much a proof of concept; a few immediate improvements would be:

  • it always uses the same channel, so it’s not possible for different groups to make use of it at the same time
  • it would be nice to stop users from clicking on the record as they’re viewing it (a simple div overlay would achieve this)
  • it would be good to be able to view things other than just records

Other than that it’s a great start to something else that is probably not a lot of use to anyone out there. But who really cares? I’ve proved that it’s possible!
