
Handling Optional Platform Features From APEX

Over the past few weeks I have been getting a few requests from some of our customers to add a feature to one of our products: some minor integration with Chatter. Being a company that prides itself on being responsive to our customers' wants (once a little common sense has been applied, of course), we decided to add said feature.

The feature itself was trivial to implement but what struck me as I was thinking about it was: “what about our customers without Chatter enabled?” I know it’s becoming less and less common these days, but back when we were consulting full time we heard numerous clients ask us to make sure Chatter was turned off – and no amount of marketing material was going to convince them to turn it on. With this in mind I was suddenly a lot less bullish about this new feature – I didn’t want to alienate a whole section of our potential market just for this feature.

I had a dig around the Salesforce documentation to see what, if anything, could be done about this – I couldn’t believe that I was the first person to have this worry. Things were a little sparse: I found reference to a new status for a DML exception and also to a new exception type. It appeared that the problem had been thought about, but how exactly it was handled still seemed unclear.

When faced with a lack of documentation, indeed when faced with almost any uncertainty, I do what any developer would do: I write some code! In this case, code to prove once and for all what really happens. I whipped up a very simple managed package that had a VF page which inserted a FeedItem, captured any exceptions and output them to the user. Simple. When I created the package I left the Chatter “Required Feature” unchecked. This seems a bit backwards (I have Chatter features, but you don’t need to have Chatter turned on); anyhow, this is what makes it an optional platform feature. Once I’d uploaded the package I spun up another org, disabled Chatter in it, installed the package and navigated to my test page.

Boom! It broke. Well, actually it worked, but it threw an exception, so it did in fact break. Given the mention of exceptions in the documentation this was pretty much what I expected to happen, although to be honest it’s a little disappointing.

Why do I find it disappointing? After all, I have a mechanism with which I can have Chatter features in my product without needing the user to have Chatter turned on in their org. Well, if we were measuring happiness purely in terms of whether I have met my requirements then you’re right, I should be ecstatic. However I find it disappointing that I am forced by the platform to use exception handling techniques to control the logic of my application.

Look at this code to see what I mean:
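
Something along these lines – a minimal sketch rather than the exact code from my package, and I’m assuming the exception in question is System.RequiredFeatureMissingException, the one the documentation hints at:

    try {
        // Attempt to use the optional Chatter feature
        FeedItem post = new FeedItem(
            ParentId = UserInfo.getUserId(),
            Body = 'Our shiny new Chatter integration'
        );
        insert post;
    } catch (System.RequiredFeatureMissingException e) {
        // Chatter isn't enabled in this org, so quietly carry on without the feature
        System.debug('Chatter not available: ' + e.getMessage());
    }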

First off I just want to say that it’s not all bad – the exception that gets thrown is very specific to this problem so we can be sure that the only reason we’re in that catch is that Chatter isn’t enabled. If we want to deal with other exceptions from that code then we can use other catch statements. All good. Now for the not-so-good side of things. First off, a bit of a bugbear: the name of the exception is very generic, yet according to the documentation it can only be thrown if Chatter isn’t enabled. Either change the name of the exception or update the documentation to reflect the fact that in the future it may cover features other than just Chatter.

OK, with that out of the way let’s get back to the question of logic. The idea behind exceptions is to provide a way to change the normal execution flow of a piece of code when an error occurs. It provides an opportunity for the developer to handle this error, this thing that wasn’t expected to happen. In this case we are expecting that some of our customers will not have Chatter enabled. This isn’t an erroneous state of affairs, it’s just the way life is. What we really want to do here is say: “Is Chatter enabled? Yes? OK then insert a record, because we can”. Instead we’re forced to say: “Insert a record. Oh balls, it didn’t work, must be because it’s not enabled, oh well”. We’re using exception handling to deal with the consequences of not being able to make a logical decision upfront.

All of this is just bad practice and something that we shouldn’t be doing. Another reason often cited in other languages is that throwing an exception is a computationally expensive task. I can’t imagine that this is any different in Apex, especially given that it now compiles down to Java byte code anyway, although I haven’t tried to eke out any benchmarks to prove it.

Interestingly, it feels like the designers of the platform have a penchant for this anti-pattern. That seems like a strong thing to say, so what do I mean? Well, a couple of things make me feel this way. The first is the fact that we see this pattern in Apex already – we’re all guilty of assigning a LIMIT 1 query directly to an instance of an object and catching the exception if there are no matching records. What should happen in this case? Well, how about just assigning null to the variable? The second thing that piques my worry is the name of the exception: the fact that it’s so generic makes it feel like we’re going to see it thrown for other features as well, which will force us to replicate said horridness over and over.

So what can be done about all of this? After all, there’s no point complaining unless you have a suggestion for improvement. First things first – I am not against this exception! I know it sounds like I want to see it burnt at the stake but that’s really not the case. I can completely understand why it is needed. In fact I wouldn’t change the exception at all; it’s completely appropriate for the action taken in the given circumstances. However, what is needed is something to complement it, something that is the equivalent of the Limits class that already exists. How about a class called Features? It would have a series of static methods that give us the ability to test for the existence of optional features before we try and make use of them. That way we could use some conditional code structures to control the logical flow of the application, allowing us to change our try/catch example into something a bit better structured:
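
Something like this – bearing in mind that this Features class is purely hypothetical; it doesn’t exist on the platform, it’s just what I’d like to be able to write:

    // Hypothetical Features class - test for the optional feature before using it
    if (Features.isChatterEnabled()) {
        insert new FeedItem(
            ParentId = UserInfo.getUserId(),
            Body = 'Our shiny new Chatter integration'
        );
    }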

This feels like much better code: not only does it remove the nasty exception handling but it also becomes much clearer what the code is trying to achieve.

Anyway, enough of my disappointment. The crux of the matter is: there is a way to handle “optional” platform features such as Chatter from within your code. And this is great because it means I can add my trivial Chatter-based feature to my application without worrying about alienating a whole section of my market or indeed having to manage two code bases.

Just be careful when you do make use of this exception, and please remember that you’re using an anti-pattern because you have to, not because you should.

CoffeeScript in VisualForce

CoffeeScript has been around for a few years but has been gaining a lot more traction recently. For those unfamiliar with this language, it allows you to write JavaScript using a more Ruby-esque syntax. Its brevity and clarity have made it increasingly popular and I thought that it might be good to find a way to make use of it from within VisualForce pages.

CoffeeScript is “compiled” to JavaScript which is then sent to the user’s browser; this means that browsers don’t need to know anything about this new language, but developers can write more robust JavaScript more quickly. And whilst this is great news for browsers it’s not so great for the prospects of using CoffeeScript on the Force.com platform. Why? Well, the way that things are expected to happen in the CoffeeScript world means that we need a compiler that can run server side, which in turn means we need a CoffeeScript compiler written in Apex.

Obviously writing a compiler in Apex isn’t beyond the realms of possibility but it is beyond the realms of what I’m willing to do to get this to work at the minute. It is also probably the best solution to the problem but I decided to look for an alternative.

My first thought was inspired by the fact that CoffeeScript is written in CoffeeScript. Given this, and the fact that CoffeeScript is just JavaScript, I realised that it must be possible to run the CoffeeScript compiler in the user’s browser. Unsurprisingly this is true and has been done: if you include the coffee-script.js script and then wrap your CoffeeScript in <script type="text/coffeescript"> tags, the compiler will process your scripts in the browser. Nice. Well, kind of. It seems a bit wrong in my mind to be foisting this effort onto your users, and the approach is frowned upon by the CoffeeScript community in general.

So, whilst this approach would have made it possible to use CoffeeScript in VisualForce, I set out to look for another way of achieving this that was cleaner for the developer and the end user. In this vein I turned my attention to thinking of other places that I could run JavaScript. The immediate, and somewhat obvious, answer was node.js running on Heroku. This was a simple solution: node.js is great at executing JavaScript and the CoffeeScript compiler is even available as an npm package. The idea was a simple service which accepted a request body containing CoffeeScript, compiled it and returned the JavaScript… nice!

The JavaScript above runs on node.js on Heroku and does exactly what I described in the previous paragraph. It’s really that simple. I was using uglify-js to minify the output but left it out of the examples to keep them to the point. Feel free to have a play with this: just post some CoffeeScript to http://high-window-9445.herokuapp.com/ and you should get a JavaScript response.

So now that we have a server-based approach to compiling CoffeeScript, how do we make use of it easily from within VisualForce? My aim here was to keep it simple; it should be as easy as adding a script tag. To achieve this simplicity I put together a basic Apex component which takes the name of a static resource (your CoffeeScript file) and in turn outputs JavaScript.
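
The component markup itself is tiny; something along these lines (the attribute and property names here are illustrative rather than exactly what I used):

    <apex:component controller="CoffeeScriptController">
        <apex:attribute name="resource" type="String" required="true"
                        description="Name of the CoffeeScript static resource"
                        assignTo="{!resourceName}"/>
        <script type="text/javascript">
            <apex:outputText value="{!javaScript}" escape="false"/>
        </script>
    </apex:component>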

Then in the controller all we need to do is find the CoffeeScript resource, pass it out to our Heroku instance and send the response to the browser – simple.
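
A rough sketch of that controller looks like this – the property names and error handling are mine rather than the exact original, and remember the Heroku URL needs adding as a remote site setting:

    public class CoffeeScriptController {

        // Set from the component attribute: the name of the CoffeeScript static resource
        public String resourceName { get; set; }

        public String getJavaScript() {
            List<StaticResource> resources =
                [SELECT Body FROM StaticResource WHERE Name = :resourceName LIMIT 1];
            if (resources.isEmpty()) {
                return '';
            }

            // Post the raw CoffeeScript to the compiler service and return the JavaScript
            HttpRequest req = new HttpRequest();
            req.setEndpoint('http://high-window-9445.herokuapp.com/');
            req.setMethod('POST');
            req.setBody(resources[0].Body.toString());

            try {
                HttpResponse res = new Http().send(req);
                return res.getBody();
            } catch (Exception e) {
                // As noted below, errors currently just result in an empty string
                return '';
            }
        }
    }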

There we have it, a fully functioning CoffeeScript VisualForce component; we too now have access to the benefits that CoffeeScript brings!

Using it is very simple: upload your CoffeeScript as a static resource and then include a line like this in the VisualForce page that you want to use the uploaded script in. Hey Presto! Job done.
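
Assuming the component above is called coffeeScript, the line in question is just:

    <c:coffeeScript resource="MyCoffeeScript"/>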

Obviously there are shortcomings of this component as it stands; one massive improvement, for example, would be to cache the returned JavaScript as another static resource so you only need to call out to Heroku once. There probably also needs to be some better error handling; at the minute I just return a blank string if things go wrong, whereas it might be nice to know why.

Having said that, I have achieved what I set out to do and hopefully have provided enough of a platform for those that want to include CoffeeScript in their VisualForce pages to get started with. In the meantime, if I have another evening where I’m feeling a little bored I might round this component out and add it to GitHub for easy consumption by the masses.

The First Bristol Dev Meetup

Last Thursday was a sunny one in Bristol, UK. It was also the inaugural meeting for the South West’s Force.com dev group.

The group met in The Elephant in the city centre and was, according to those left at the end of the night, a great success. We never expected the group to be big when we were planning its birth, and the fact that a total of seven developers turned up to swap tales was pleasantly surprising.

The format was intentionally informal and the evening consisted of a few beers, a good discussion around how we’d like to see the meetup evolve and of course a group therapy session around our latest set of “challenges” the platform has presented us with. As you’d expect the topics were very varied, from query parameter stripping and reordering, to testing HTTP callouts, to JSON parsing in Summer ’12.

Going forward we intend to meet on the last Thursday of every other month; this allows us to fold in nicely with the South West User Group as well. It also means that we’ll be more or less due an event around the time of Dreamforce, so we may well endeavour to do something terribly English, or worse, West Country-esque, whilst we’re there. For the meantime we’re going to keep the fairly informal format as well, given the size, although we may think about having “themes” to base our discussions around to give us some focus without needing to go down the whole presentation route – just yet. Our biggest initial challenge is to try and find those Force.com developers that hang around the South West, and maybe hope for a slight downturn in the weather as well to encourage a few more people indoors.

If you’re reading this and live/work locally then I really do hope you can come along to the next event. Given our minority standing in the developer community as a whole it’s easy to shrink away into the corner and feel forgotten about. But at our events there’s no need to feel like that: we’re all in the same boat and so welcome others with outstretched arms.

Finally no developer meetup would be complete without a picture or two to prove that it happened – unfortunately I only have a couple of pictures that prove I was in the pub with a group of people, one of whom just happened to be wearing a Force.com cap!

Force.com and Loggly – watching the insides of managed packages

Logging on the Force.com platform is something that I’ve always had an issue with – not because it doesn’t exist but more because it just feels so heavy-handed. The system log gives a lot of information when sometimes I just want to know that I’ve hit a certain trace or that a variable has been set to a specific value; and sometimes I would actually just like to monitor how the code is performing: it’s always better to have seen an error and be working on a fix before the customer comes to you!

As such this is a subject that I keep coming back to but as yet just haven’t quite found a neat enough solution for; my previous forays were based on my SF Console project and, whilst they worked, I was never really that satisfied with them. The idea was kinda right but the “console” part of it was just never going to be complete enough – it did after all just start out as a reason to have a play with some node.js.

This week though the issue has resurfaced, albeit this time in a slightly different way, which has given me the perfect excuse to get my hands dirty with Loggly. Loggly is a cloud-based logging service which has been on my radar for a while now but which I have struggled to justify using. I’m sure I’m late to the party but hey, I am at least here. From the little I have played with it so far I quite like it – the command line interface in the browser is pretty cool, the automatic indexing of log entries is fantastic and of course it has a multitude of ways to integrate with it. It will also allow you to monitor and alert based on your logs through the use of Alert Birds, although I haven’t made use of them yet.

My new lust for a logging solution has come from the need to instrument some of the managed packages that we’re building over at WickedBit. As people who are involved with software we all know that one of the most constant constants is the ability of a customer to break your software within ten minutes of getting their hands on it. It happens to us all and should be no surprise to anyone who truly works with software. When this happens the debug logs are only any help if the customer gives you login access to their org, something which is fairly intrusive and not something everyone wants. The next best step is to be able to turn on some kind of remote error logging, and this is where Loggly comes in.

First of all we need to get Loggly set up. We’re going to be using the incredibly simple REST interface provided by Loggly, so all we need is an account (just sign up for the basic account; it’s free) and then we need to create an HTTP input. I have decided to create an input for each of my applications as this seems like the most natural segregation of data, but each to their own; there are no rules here. The most important thing is to take note of (or at least remember where to find) the input key. And that is that; Loggly is now good to go.

What follows next is the Apex that I wrote to act as my logging mechanism, and it probably helps if I explain a couple of my extra requirements before I show you the code. I need to be able to turn the entire logging mechanism on and off. I also need to be able to turn on only specific log statements; this is after all a callout and we have precious few of those, so we don’t want every log statement turned on all of the time. Instead I want to be able to turn on only the statements I need to investigate a certain problem. With those desires out in the open, here is the Apex logging class that I have used to integrate with Loggly – well, a version of it at least; I have stripped out the dependency injection stuff to try and keep it cleaner for this post.
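
What follows is a sketch of that class rather than the code verbatim – the input key, static resource name, Loggly endpoint and method names are all stand-ins based on the description below, and the Loggly URL will need adding as a remote site setting:

    public class LogglyLogger {

        // The Loggly input key mentioned above (placeholder value)
        private static final String INPUT_KEY = 'your-loggly-input-key';

        // The "big" switch: when false, no logging callouts are ever made
        private static final Boolean LOGGING_ENABLED = true;

        // Static resource containing a JSON map of location => enabled flags
        private static final String LOCATIONS_RESOURCE = 'LogglyLocations';

        public static void log(String location, Map<String, String> values) {
            if (LOGGING_ENABLED && isLocationEnabled(location)) {
                doLog(location, values);
            }
        }

        private static Boolean isLocationEnabled(String location) {
            List<StaticResource> resources =
                [SELECT Body FROM StaticResource WHERE Name = :LOCATIONS_RESOURCE LIMIT 1];
            if (resources.isEmpty()) {
                return false;
            }
            Map<String, Object> locations =
                (Map<String, Object>) JSON.deserializeUntyped(resources[0].Body.toString());
            Object flag = locations.get(location);
            return flag instanceof Boolean && (Boolean) flag;
        }

        private static void doLog(String location, Map<String, String> values) {
            // Always include the location and some org information to help with searching
            values.put('location', location);
            values.put('organizationId', UserInfo.getOrganizationId());

            String body = '';
            for (String key : values.keySet()) {
                body += EncodingUtil.urlEncode(key, 'UTF-8') + '=' +
                        EncodingUtil.urlEncode(values.get(key), 'UTF-8') + '&';
            }

            HttpRequest req = new HttpRequest();
            req.setEndpoint('https://logs.loggly.com/inputs/' + INPUT_KEY);
            req.setMethod('POST');
            // This header is what makes Loggly index the key/value pairs in the body
            req.setHeader('Content-Type', 'application/x-www-form-urlencoded');
            req.setBody(body.removeEnd('&'));
            new Http().send(req);
        }
    }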

A quick run through of the code: the first line is the Loggly input key that I mentioned earlier; I pulled this out to make running this for different applications fairly easy. The next couple of lines help control when the code can actually make the logging callout – the first is a simple boolean that is the “big” switch I talked about, and the second is the name of the static resource which contains a JSON-serialised map. The idea is that each call to the log function passes in a “location”, and the map contains the list of locations that are currently allowed to make the logging callout. This could be moved to a set of custom settings but I have opted for the flexibility of a simple text file that I can send to the customer to change which logs I receive in Loggly; whilst still not ideal, this should make the process much easier for the customer.

The crux of the class is the doLog method as this is the one that actually makes the callout (if all the settings are good) to Loggly. The callout is very simple: the input key is passed in the URL, so there’s no authentication required, and the data is passed across as a series of encoded key/value pairs in the body. It’s important to set the content type header to application/x-www-form-urlencoded as, when Loggly sees this, it takes the key/value pairs in the request body, automatically JSON encodes them and indexes them, making your logs searchable. To help with the searching I always pass the location and some information about the organization making the call, but otherwise the caller of the log method is free to pass in any values they please in the values map.

All in all I’m fairly pleased with this initial implementation of a Loggly logging framework and my initial tests of using it from within a managed package are going well. I’m sure that things will change over time but it’s a good starting point.

With all of this in place making use of the code is fairly simple. First add a static resource that looks something like this:
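
Something like this, with location names of your choosing:

    {
        "AccountTrigger" : true,
        "InvoiceService.sendInvoice" : false
    }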

Then make an APEX call like this:
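
For example, using the class sketched above (the location and values are whatever makes sense for your code):

    LogglyLogger.log('AccountTrigger', new Map<String, String>{
        'severity' => 'ERROR',
        'message'  => 'Callout to the payment gateway failed'
    });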

And then jump over into Loggly and search for the fruits of your labour:

Feel free to take, adapt and make use of the code above… it’s too simple to warrant a GitHub repo so I’m just going to leave it as is… of course feedback is always welcomed too; as ever with my code this is just the germ of an idea really.

Sharing my Salesforce screen with remote users

When it comes to technology I love proving that things are possible even if, at the time, they have little practical value.

The latest example of this trait comes in an idea I had whilst tramping through the Welsh countryside last week: wouldn’t it be cool to be able to broadcast your view of Salesforce to other people’s browsers? OK, maybe it’s not that cool but as I said it’s more about proving it’s possible rather than practical.

Now obviously this is possible, otherwise this would be a very, very short post, and as such a little video demonstration of what I’m talking about shouldn’t spoil any surprises. Oh, and there’s no sound as I didn’t think my dulcet tones would add much to this.

[vimeo id=”42704811″]

OK, so how did I manage to achieve this amazing, yet pointless, feat of engineering? If I break it down and change the language a little you’ll probably start to figure it all out for yourselves: there is a publisher that wants to broadcast its actions to one or more subscribers. When I put it this way it all started to feel much more achievable.

How Best to Broadcast Messages

The first part that needed to be solved was the transport mechanism: how do I take a message from one user and broadcast it to an unknown number of subscribers? My first thought was to immediately jump to the Salesforce Streaming API; this takes a message and broadcasts it to all subscribers. However, when I thought about it a little more I realised that it really wasn’t a great fit, as it requires the broadcasting user to effect a change to an object to force the Streaming API to push the change message out. This idea felt incredibly hacky and, whilst my aim was simply to prove it’s possible to do this, I have an unwritten rule that the solution should be semi-elegant as well; I mean, what’s the point otherwise?

It was then that I remembered the amazing PubNub theme song (go on, watch it, you’ll be singing it for weeks). PubNub was exactly what I was looking for as the broadcast mechanism; all that was left was to implement it in a couple of places in Salesforce and I was home and dry. PubNub [www.pubnub.com], as well as being the catchy “two way internet radio”, allows you to implement the pub/sub model with very little effort. And with support for a huge range of languages out of the box, a simple REST interface if you want to roll your own, and a free million messages per month, it really couldn’t be easier.

What to Broadcast

The next piece of the puzzle was to decide what I was going to send. It was at this point that I decided that I should probably come up with some kind of use case for this little technological jolly; otherwise I would just be chucking information around from browser to browser for no good reason.

I didn’t want something too “real life” as that would lead to all sorts of complications that I just didn’t want to consider. As such I took a real world situation and simplified it into the following: every Monday morning as a team we all jump on Skype (we’re spread right around the world) and go through pertinent Accounts, Opportunities and Cases to ensure we’re all up to date (we go through JIRA too but that’s completely out of scope). A kind of weekly stand-up, but not! The trouble is people are rarely looking at the record we happen to be discussing which, more frequently than it should, leads to the inevitable “which Account is this again?” question. It’s boring, time consuming and downright frustrating. Therefore, using this code it should be possible for one person to navigate to a Salesforce record and for everyone else on the call to have their browser display that record too. We don’t need to see VisualForce pages or reports, just records.

This slightly facetious use case neatly gave me the answer to the question of what to broadcast: the currently viewed record.

How to Broadcast

Whilst part of me desperately wanted to write an Apex wrapper for the PubNub API, I decided that simply inspecting the URL of the current page via JavaScript and broadcasting it via the standard, simple PubNub JavaScript API would be my quickest and most reliable route to success. In actual fact it’s probably the only really sensible way to do this, as the browser is the best place to see what the user is currently viewing – trying to do this in Apex would be impossible.

When I looked at the PubNub JavaScript API I knew that this was going to be incredibly simple to code up. The only question was where to put it so that it appeared on every page. This led me to TehNrd’s article on Showing and Hiding Buttons on Page Layouts, in which he also needed to have some JavaScript on every page. He actually goes one step further than I needed to by including the Salesforce API library – I just needed to make sure the script executed all of the time.

So what code did I need to put in the sidebar? The JavaScript needed to inspect the page URL, extract the record Id and then publish it on the PubNub channel. To achieve this I took the standard PubNub JavaScript API example and modified it slightly. This gave me the following:

After I put this into a homepage component all I needed to do was figure out how to make use of the id at the subscribers end.

How to Subscribe (and react)

As I wasn’t doing anything particularly exciting with the PubNub API I was again able to take the simple example from their website, modify it slightly and be up and running receiving messages. I stuck this code into a VisualForce page and added a simple alert to make sure I was receiving the message. A quick test from one browser caused a message box with the record id to appear in another browser – at this point I knew I had two and a half out of the three pieces of this puzzle working. All that was left was to display the record to the subscriber.

As I only needed to display records and nothing more complex than that, I decided that the apex:detail component would be a perfect fit. The apex:detail component simply takes the id of a record to display and then outputs it in the VisualForce page as if you were viewing the record directly – this comes with the added bonus that if a user doesn’t have permission to view a particular record or field then it won’t be displayed, with no effort from myself. All I needed now was a way to take the id from the PubNub message and set the apex:detail parameter with it. The easiest way of achieving this was to reload the VisualForce page but pass the object id in the query string at the same time. This would then allow me to extract the id in the controller and set the parameter of the apex:detail component. Simple!
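
Pulled together, the subscriber’s page and controller look something like this. It’s only a sketch – the PubNub subscribe script is omitted and the names are mine – but the apex:detail usage and the query string handling are as described:

    <apex:page controller="ScreenShareViewerController">
        <!-- The PubNub subscribe JavaScript goes here; on each message it reloads
             this page with ?id=<the broadcast record id> -->
        <apex:detail subject="{!recordId}" relatedList="false"
                     rendered="{!NOT(ISBLANK(recordId))}"/>
    </apex:page>

    public class ScreenShareViewerController {

        public Id recordId { get; private set; }

        public ScreenShareViewerController() {
            // Pull the broadcast record id out of the query string for apex:detail
            recordId = ApexPages.currentPage().getParameters().get('id');
        }
    }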

Improvements

For 30 minutes’ work this has achieved what I set out to achieve, and that’s the ability to broadcast my view in Salesforce to others without using some screen sharing application. I have proved that it’s possible and that makes me happy. However, it is very much a proof of concept; a couple of immediate improvements would be:

  • it always uses the same channel so it’s not possible for different groups to make use of it at the same time
  • it would be nice to stop users from clicking on the record as they’re viewing it (a simple div overlay would achieve this)
  • it would be good to be able to view things other than just records

Other than that it’s a great start to something else that is probably not a lot of use to anyone out there. But who really cares? I’ve proved that it’s possible!

Hex to Dec and back again

The requirement to move numbers between bases is something that I haven’t needed to worry about in a while, with most interfaces dealing strictly with a decimal base. Having said that, as the laws of the universe dictate, I have found myself of late needing to deal with hexadecimal numbers. The ability to switch between decimal and hex is something that I’ve not been able to find on the platform, although as my post on converting blobs proves I’m not always the most alert to available functionality! As such I found myself writing a little class to handle it for me.

The algorithm to move from hex to decimal and back again is well known and simple, but still I just thought I’d chuck it out here for people to make use of and to give me somewhere to copy and paste from if I should ever need it again. I have tried to write the class in such a way that it could be easily extended to convert other bases, although as it currently stands it will always need to be to and from base 10. Also, I haven’t tested it with a base other than 16, but the theory states that it should work.
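
The class below is a sketch of that idea rather than my exact code (the method names are mine), with the base passed in so it isn’t hard-wired to 16:

    public class BaseConverter {

        // Digit symbols; as written this supports bases up to 16
        private static final String DIGITS = '0123456789abcdef';

        // Convert a string in the given base (e.g. 'ff' in base 16) to a decimal value
        public static Long toDecimal(String value, Integer base) {
            Long result = 0;
            for (Integer i = 0; i < value.length(); i++) {
                result = (result * base) + DIGITS.indexOf(value.substring(i, i + 1).toLowerCase());
            }
            return result;
        }

        // Convert a decimal value to a string in the given base
        public static String fromDecimal(Long value, Integer base) {
            if (value == 0) {
                return '0';
            }
            String result = '';
            while (value > 0) {
                Integer remainder = Math.mod(value, base).intValue();
                result = DIGITS.substring(remainder, remainder + 1) + result;
                value = value / base;
            }
            return result;
        }
    }

As a quick check, BaseConverter.toDecimal('1a3f', 16) gives 6719 and BaseConverter.fromDecimal(6719, 16) gives '1a3f' back again.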

All in all it “works on my machine”, or should that be “in my cloud”; actual results may vary, blah blah blah.

Force.com Dependency Management: A More Detailed Look

My previous post glanced over the details behind the dependency management framework that I have started to build. The intention of this post is to rectify that and go through the core parts of the code in more detail, hopefully giving you a better idea of how it currently works.

There are two main parts to the solution: the Container and the IHasDependencies interface.

The Container

The Container class is responsible for providing an instantiated class when a particular interface is requested. It uses an internal map to know which classes map to which interfaces, and this map is what gives users the flexibility to change dependencies without changing code, as everything is controlled from its contents.

Users are provided with two ways to build the map in the Container: either programmatically or via an external mapping file. To programmatically add a mapping, simply call the addMapping method passing in a ClassMap. The ClassMap expects Type parameters for the interface and class that you want to add; this is to try and make the code clearer and remove the chance of typos messing things up. To load up the map using an external mapping file, a simple call to loadMappFile passing in the name of the static resource that holds the file will populate the map. The map file itself is currently just a JSON representation of a list of ClassMapFileEntry objects (I had to use a different class as you can’t deserialize the Type class), although as you can see from the code, ultimately each ClassMapFileEntry is used to populate a ClassMap before being added to the map.
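
A mapping file might look something like this – the field names are a guess at the ClassMapFileEntry structure, so check the code on GitHub for the real ones:

    [
        { "interfaceName" : "IHttp",   "className" : "HttpImpl" },
        { "interfaceName" : "ILogger", "className" : "DebugLogger" }
    ]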

There are two ways to build the map as I think they lend themselves to different use cases: the programmatic method is great for quickly pulling together some mock classes in a test scenario, whilst the mapping file gives much more flexibility at runtime or when moving code between environments. There are clearly crinkles in this to work through – multiple mappings for the same interface, conditional mappings and ensuring the class implements the interface it’s mapped to are just a few that fall off the top of my head.

Given the map the container is now in a position to provide users with the classes they need for given interfaces and it is the getClass method on the container that provides this functionality.

This method is probably a bit longer than it needs to be at the minute but cutting through the chaff you can see that it can be broken down into three basic functions:

  • get the name of the class needed from the map
  • instantiate the class
  • get any dependencies that class requires

Whilst it is the heart of the user’s experience, this is the simplest part of the whole solution in my mind. Getting the class name from the map is self-explanatory. The instantiation makes use of the JSON parser trick to get the class; the only thing to remember here is that the constructor of the class isn’t called. And the final part, getting dependencies, is really just a recursive call to this method: rinse and repeat. OK, it’s a little more involved than that, but not much, and it falls nicely into the next section.

IHasDependencies

One of my biggest goals at the outset of this process was to not have to worry about child classes and their dependencies. I want to be able to say to the container: “give me a working one of those – and don’t bother me with how”. This is where the simple IHasDependencies interface comes to the rescue.
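
As a sketch it looks along these lines (the exact map types are my assumption – interface names as keys, instantiated classes as values):

    public interface IHasDependencies {

        // Returns a map keyed by the interfaces this class depends on;
        // the container fills in the values with concrete instances
        Map<String, Object> getDependencies();

        // Called by the container once the map has been populated, giving the
        // class a chance to assign the instances out to its own members
        void gotDependencies();
    }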

As you can see the interface is extremely simple. The key to the process is the map returned by getDependencies: the key set of the map is used by the container to know which interfaces to request classes for in the recursive call, and the values are where the container puts the instantiated classes to pass back to the class requesting the dependencies. This works because non-primitive types are passed by reference. It would obviously be possible to have a member on the interface that was this map, but we would run into problems with being able to populate it in classes that are instantiated via the JSON trick.

The second method in the interface is gotDependencies, which at first look seems a bit odd. Looking back at the Container class you can see that this method is called after all of the dependencies have been resolved and populated in the map. So why is it called? It is there to give the class implementing the interface the chance to take its new-found classes out of the map and assign them to internal members; after all, no one wants to be referencing members from a map, it just feels ugly – although if that’s what you want then knock yourself out!

An example of a simple class that implements the IHasDependencies interface would look something like this.
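
Something like this; the member names and map keys are my own, but the shape matches the description that follows:

    public class ParentClass implements IHasDependencies {

        private Map<String, Object> dependencies;
        private IHttp http;

        public Map<String, Object> getDependencies() {
            // Declare the interfaces we need concrete classes for - just IHttp here
            dependencies = new Map<String, Object>{ 'IHttp' => null };
            return dependencies;
        }

        public void gotDependencies() {
            // The container has filled the map in, so assign our dependency to a member
            this.http = (IHttp) dependencies.get('IHttp');
        }
    }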

This class has a dependency on an IHttp implementation. To declare that it has this dependency it implements the IHasDependencies interface. In the getDependencies method it creates a new map and adds to it the interfaces for which it needs concrete classes; in this case it’s just IHttp. This means that when the container instantiates the ParentClass it will know it has dependencies that need fulfilling (the IHasDependencies interface) and, by calling the getDependencies method, the container is able to get a list of required interfaces and can populate the map to return them to the ParentClass.

You can also see in the ParentClass the implementation of the gotDependencies method; in this the IHttp instance in the map is assigned out to a private member of the ParentClass to allow it to be easily used elsewhere in the class.

Bringing it all together

Getting a fully loaded ParentClass is simply a case of loading a map and then requesting the class from the container.
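
In other words, something like this – the exact method signatures are best checked against the code on GitHub, and it assumes ParentClass has been mapped to a hypothetical IParentClass interface in the mapping file:

    Container container = new Container();
    container.loadMappFile('DependencyMappings');

    // Ask for the interface; get back a ParentClass with its IHttp dependency already wired in
    ParentClass parent = (ParentClass) container.getClass(IParentClass.class);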

Two simple calls, and for each extra class you need it’s just a case of another call to getClass. There is some overhead associated with this in that you need to build and maintain your mapping file and you need to instrument your classes with the IHasDependencies interface, but rarely do you get something for nothing.

What you do get, however, is a nicely decoupled code base. And not only that, it actually starts to become quite readable: getting a class with all necessary dependencies is one call in your code, all classes with dependencies are clearly labelled and list all their dependencies in one common place, and the mapping is held in a centralised location. For me this framework looks like it will meet my needs.

Having said that I also think there is still work to be done; this is very much the seed of the idea and I think there are lots of improvements to be made as well as bells and whistles to add.

The Force.com project can be found on GitHub; feel free to dive in and hack about with it.

Force.com Dependency Management: A First Pass

I’m starting to sound like a broken record I’m sure – constantly going on about why we should be using interfaces in Salesforce more, why we should be breaking up tightly coupled code, blah blah… Go here and here if you haven’t already been exposed to my rabbiting!

Now, I don’t just write this stuff, I try and make sure I live by my own word as much as possible. But, as I’m sure all of you that have tried to decouple things in Salesforce will know, this decoupling can quickly become a mess: making sure the right concrete class is being called at the right time is only simple at first. It’s the same old story: at the time everything makes sense, but going back to that code one, two, three months later you sit there thinking: “what the hell was I on when I wrote this?”

It was such a moment earlier in the week that led me to face up to the challenge I set myself a while ago and do something about the way in which I manage interfaces and dependencies in my code, with the overall goal of building a framework to manage all of this in a clean and repeatable way. I may well find that this has been done before, but it has been a wet Sunday and I wanted to have a stab at it myself.

So – looking to other languages, the obvious answer is to tend towards some kind of dependency injection, inversion of control or service locator – but which one and why? Well, they’re all similar but different and frankly an in-depth discussion about which to use and why is way beyond the scope of what I wanted this post to be. Suffice to say I believe I have gone for a form of interface dependency injection with some kind of inversion of control container – if all of that seems a bit vague then it’s because it is; I’m not really one for theory and names so please excuse any incorrectness there.

The Solution

Also, I’m not going to post all of the code here as I don’t think it’s appropriate, but if you want to follow along as I go through the various components then the Force.com project can be found on GitHub.

The Force.com platform provides a big challenge for dependency injection as there’s no form of reflection or inspection – it’s for this reason I have decided to adopt interface injection. Constructor-based injection could have been an option, but the novel method used to dynamically instantiate classes means that there’s no constructor that I can inject into. At the heart of this system is the interface IHasDependencies: any class that you want to have dependencies injected into it by the container needs to implement this interface. Using the interface means that this is the only thing the class developer needs to be aware of; this keeps the classes decoupled and unaware of the container that’s wiring them up.

As you can see the interface is very simple, with only two methods. The first, getDependencies, returns a map describing the dependencies that the class requires. A map is used as it allows me to make use of the fact that non-primitive objects are always passed by reference on the platform – this means that the container can populate the value portion of the map with the concrete classes and they get automatically “passed” back to the class that needs them. This little trick means we can again keep everything nicely decoupled.

The second method, gotDependencies, is a little bit of a hack but a necessary one. The container calls this once it has set all of the values in the map to concrete classes, allowing the class wanting the dependencies to assign them out to members as it sees fit. It has to be done this way as, without reflection/inspection, the container cannot do it itself.

The other major part of this solution is the Container class. This class is responsible for maintaining a mapping of interfaces to concrete classes and then providing the appropriate concrete class when needed. The container calls itself recursively to get hold of all of the dependencies in all of the child classes as well; this recursion means that the consumer of the parent class need not worry about what other classes may or may not need to be configured in order to get a complete object graph. The container is able to seemingly dynamically instantiate classes, but is in fact using the (probably now well documented, and extremely useful) quirk of the JSON deserializer as described in much more detail here.
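
For reference, the quirk boils down to this one line: deserializing an empty JSON object into a dynamically obtained Type hands back an instance without the constructor ever running (the class name here is just an example):

    // Instantiate a class by name, bypassing its constructor
    Object instance = JSON.deserialize('{}', Type.forName('HttpImpl'));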

The rest of the functionality in the Container class is there to control the all-important mapping. There are two available methods to set up the mapping: the first is a code-based solution where you programmatically add the interface-to-concrete mapping; this is useful in a test scenario where the mocks are likely to be one-off test classes. The second method allows you to load the map up from a text file stored as a static resource. The file is currently structured as a simple JSON string representing a list of Container.ClassMapFileEntry objects. This approach gives you the ability to change functionality without deploying code – useful if you’re working in an environment where deployment can be an arduous process, and also very handy for changing behaviour, such as integration points, when moving code between environments.

Conclusion

This post is short and lacking in any in-depth code walk-through on purpose, as I really just wanted it to be a marker, an introduction to what I hope will be an interesting discussion around the worthiness of this idea and how it can be improved (or a signpost to an existing fully fledged framework). There are plenty of features that I can think of – conditional mapping and instantiation control to mention just two – that could be added to this. Not to mention a whole bunch of error handling and test classes. But I feel that all of that is further down the road; the core of the solution needs to be well developed and debated first.

As I mentioned before, all of the code that I have so far can be found on GitHub and I encourage you to take a look, have a play and let me know of its many flaws. I include a couple of examples below that work with that code to exercise the basics of the container and interface – hopefully they will give you some idea of how it’s meant to be used. In the meantime I will work on a follow-up post that goes through all of this in much more detail and breaks the code down section by section.

Looking forward to your thoughts.

TL;DR
I’ve had a stab at coming up with a way to manage dependent interfaces; a very simple IoC/DI solution for Force.com – I’m looking for ways to improve it and appreciate feedback. The code can be found over here.

UPDATE
I have added a bit more of a detailed walk-through here.

JSON deserialization: Have Salesforce got it wrong?

We are all grateful for, and love, the native JSON support that was introduced to the platform in Winter ’12; it has certainly made lives much simpler when it comes to integrating with external systems.

One of the reasons that the introduction of JSON support was such a step change for the platform was the prevalence of JSON in the wild as a way of communicating – it has in recent history become the de facto method for data transport, replacing the bulkier and often more complex XML. One of the biggest reasons in my eyes for the rapid adoption of JSON is its flexibility, allowing interfaces to be changed quickly and easily. This, of course, whilst an advantage for JSON, isn’t a great selling point for systems architecture in general – historically interfaces are expected to be well defined and slow moving: JSON bucks this trend massively.

Accepting that changes to interfaces have become rapid and often unpredictable we see that JSON parsers are being written to cope with these changes. In general they are very flexible; not caring if fields are added or removed and instead just doing the best they can with what they’ve got. That is of course unless you happen to be developing on the Force.com platform.

If we take the following, very simple, JSON:
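
Something like this, using the field names referred to later in the post:

    { "fielda" : "some value" }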

Using the built-in deserializer to deserialize this into the following Apex class works as expected:
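
Again using the field names from the post:

    public class Example {
        public String fielda;
    }

    Example result = (Example) JSON.deserialize('{ "fielda" : "some value" }', Example.class);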

All good. Now what happens if we add a new field to the APEX class but leave the JSON as it is, without the new field?
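
The class gains fieldb, which the incoming JSON doesn’t contain:

    public class Example {
        public String fielda;
        public String fieldb; // new field, not present in the JSON
    }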

Deserializing the JSON into this class works and, as expected, fieldb is null in the resulting object. Brilliant, this is exactly the flexibility that we would expect. So what happens when we have the opposite situation: a field in the JSON that isn’t a field on the class? Keeping the class the same as above but changing the JSON to look like this:
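
And the JSON gains a field that the class knows nothing about:

    { "fielda" : "some value", "fieldc" : "another value" }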

Trying to deserialize this JSON into the Example class will now throw an exception.

FATAL_ERROR System.JSONException: Unknown field: Example.fieldc

This is not so brilliant. In fact, out of the two situations I’ve described above, if the platform was going to throw an exception at all I’d rather it be in the first instance – after all, I was clearly expecting fieldb so it would be nice to know I didn’t receive it in my JSON. However, going back to the principle of flexibility in JSON and its parsers, I think it would be best not to throw an exception in either case.

So, why do I think it’s such bad form to throw an exception in the second case? The problem is that you become extremely tightly coupled to the interfaces that you’re integrating with, something that I don’t really see as being in the spirit of JSON. Why are you so tightly coupled? Well, every time your third party adds a field to the JSON they return, all of your deserialization is going to fail, and fail spectacularly too. Once you know about the change it’s quickly fixed – in the case above simply add fieldc to your Example class – but I shouldn’t really have to do this; it makes the system so brittle. This is obviously amplified if your code is in a managed package and installed in other orgs; suddenly a third party change makes your product fail and that’s all the user sees. They don’t know and don’t care that it’s not your fault.

I think this needs to be changed and will raise an idea to assist in this process, but I would be interested in your views on this. Am I wrong in my thinking? Am I expecting too much from the platform here? Should there be exceptions in both cases?

UPDATE

Thanks to James Melville (@jamesmelv) for pointing me to the Summer 12 release notes:

JSON methods for deserializing JSON content now perform more lenient parsing by default. They don’t throw any errors if the JSON input string contains attributes, such as fields or objects, that don’t exist in the type of the deserialized output object. They simply ignore these extra attributes when deserializing the JSON string.

This serves me right for a) not posting these things when I think of them and for b) not reading release notes carefully enough.
If this turns out to be true then I gracefully rescind my earlier rant.

Convert a Blob to a Hex String in Salesforce

Since setting up my new business Wickedbit I have been spending a lot of time looking at integrating Salesforce with various third party applications. I love integration work, which is just as well as one integration is never like another; despite there being standards for things like authentication and data payloads, people still seem to relish the chance to “roll their own”. I shouldn’t complain, I guess – it keeps me in a job!

One such “example of innovation” that I came across required me to pass a blob via HTTP, and to achieve this, rather than simply Base64 encoding the blob, I was required to send it as a string of hexadecimal values. An interesting idea for sure, and one that, after a quick bit of searching, I realised wasn’t straightforward to solve.

My solution to the problem has ended up being a little convoluted and I’m sure there are other ways around this, but for posterity’s sake here is my approach. I decided to use the platform’s EncodingUtil class to get the blob into a base64 representation; then, using some simple bit shifting and masking, I reverse the base64 encoding and therefore get the individual bytes that make up the blob. From there it’s just a case of doing a decimal to hex conversion. It sounds easy enough and here’s the code; it’s compressed into as few statements as possible to try and keep below governor limits.
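
What follows is an uncompressed sketch of that approach rather than the original statement-squeezed version, but the steps are the same: base64 encode the blob, undo the encoding with shifts and masks to recover the bytes, then map each byte to two hex characters:

    public class BlobToHex {

        private static final String B64 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/';
        private static final String HEX = '0123456789abcdef';

        public static String toHex(Blob input) {
            String b64 = EncodingUtil.base64Encode(input);
            String result = '';

            // Each group of four base64 characters encodes up to three bytes
            for (Integer i = 0; i < b64.length(); i += 4) {
                Integer bits = 0;
                Integer chars = 0;
                for (Integer j = 0; j < 4; j++) {
                    String c = b64.substring(i + j, i + j + 1);
                    if (c == '=') {
                        break;
                    }
                    bits = (bits << 6) | B64.indexOf(c);
                    chars++;
                }

                // 2 chars => 1 byte, 3 chars => 2 bytes, 4 chars => 3 bytes
                Integer byteCount = chars - 1;
                bits = bits >> ((chars * 6) - (byteCount * 8)); // drop the padding bits

                for (Integer k = byteCount - 1; k >= 0; k--) {
                    Integer b = (bits >> (k * 8)) & 255;
                    result += HEX.substring(b >> 4, (b >> 4) + 1) + HEX.substring(b & 15, (b & 15) + 1);
                }
            }
            return result;
        }
    }

As a quick sanity check, BlobToHex.toHex(Blob.valueOf('Hi')) comes back as '4869'.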

This could all be made a lot simpler if there were two little additions to the platform (other than the inclusion of a byte primitive). Firstly, access to the bytes that make up the blob: either let me initialise a list of bytes using the blob or provide a getByteAt(n) method. Secondly, a simple decimal to hex conversion method would clean this up no end and be useful in other circumstances too.

Whilst you might not have an odd integration requirement like mine to fulfil, I hope that the idea above will be of some use; if you remove the lookup into the hex map you have a routine that can give you the list of bytes that make up a blob – that’s got to be useful to someone, no? I hope so.
