
Hex to Dec and back again

The requirement to move numbers between bases is something that I haven’t needed to worry about in a while, with most interfaces dealing strictly with a decimal base. Having said that, as the laws of the universe dictate, I have found myself of late needing to deal with hexadecimal numbers. The ability to switch between decimal and hex is something that I’ve not been able to find on the platform, although as my post on converting blobs proves I’m not always the most alert to available functionality! As such I found myself writing a little class to handle it for me.

The algorithm to move from hex to decimal and back again is well known and simple, but still I thought I’d chuck it out here for people to make use of and to give me somewhere to copy and paste from should I ever need it again. I have tried to write the class in such a way that it could be easily extended to convert other bases, although as it currently stands the conversion will always need to be to and from base 10. Also, I haven’t tested it with a base other than 16, but the theory states that it should work.
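For reference, a minimal sketch of such a class might look like the following; the class and method names here are my own illustration rather than the original listing, and as above only base 16 has actually been exercised.

```apex
// Illustrative converter between decimal and another base (default use: 16).
// Extending to other bases means changing the base passed to the constructor,
// though as noted above only 16 has been tested.
public class BaseConverter {
    private static final String DIGITS = '0123456789abcdef';
    private final Integer base;

    public BaseConverter(Integer base){
        this.base = base;
    }

    // e.g. toDecimal('1f') with base 16 gives 31
    public Integer toDecimal(String value){
        Integer result = 0;
        String lower = value.toLowerCase();
        for(Integer i = 0; i < lower.length(); i++){
            result = (result * base) + DIGITS.indexOf(lower.substring(i, i + 1));
        }
        return result;
    }

    // e.g. fromDecimal(31) with base 16 gives '1f'
    public String fromDecimal(Integer value){
        if(value == 0) return '0';
        String result = '';
        while(value > 0){
            Integer digit = Math.mod(value, base);
            result = DIGITS.substring(digit, digit + 1) + result;
            value = value / base;
        }
        return result;
    }
}
```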

All in all it “works on my machine” – or should that be “in my cloud”? Actual results may vary, blah blah blah.

Dependency Management: A More Detailed Look

My previous post glossed over the details behind the dependency management framework that I have started to build. The intention of this post is to rectify that and go through the core parts of the code in more detail, hopefully giving you a better idea of how it currently works.

There are two main parts to the solution; the Container and the IHasDependencies interface.

The Container

The Container class is responsible for providing an instantiated class when a particular interface is requested. It uses an internal map to know which classes map to which interfaces; this map is what gives users the flexibility to change dependencies without changing code, as everything is controlled from its contents.

Users are provided with two ways to build the map in the Container: either programmatically or via an external mapping file. To programmatically add a mapping simply call the addMapping method passing in a ClassMap. The ClassMap expects Type parameters for the interface and class that you want to add; this is to try and make the code clearer and remove the chance of typos messing things up. To load up the map using an external mapping file, a simple call to loadMapFile passing in the name of the static resource that is the file will populate the map. The map file itself is currently just a JSON representation of a List (I had to use a different class, ClassMapFileEntry, as you can’t deserialize the Type class) – although as you can see from the code, ultimately each ClassMapFileEntry is used to populate a ClassMap before being added to the map.
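To make that concrete, the two setup styles might look something like this; the exact signatures and the JSON field names are a sketch based on the description above rather than a verbatim copy of the code:

```apex
// Programmatic mapping - handy for wiring mocks up in a test
Container c = new Container();
c.addMapping(new Container.ClassMap(IHttp.class, MockHttp.class));

// Or load mappings from a static resource whose body is JSON along
// the lines of: [ { "interfaceName" : "IHttp", "className" : "HttpImpl" } ]
c.loadMapFile('DependencyMappings');
```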

There are two ways to build the map as I think they lend themselves to different use cases: the programmatic method is great for quickly pulling together some mock classes in a test scenario, whilst the mapping file gives much more flexibility at runtime or when moving code between environments. There are clearly crinkles in this to work through – multiple mappings for the same interface, conditional mappings, and ensuring the class implements the interface it’s mapped to are just a few that come straight off the top of my head.

Given the map the container is now in a position to provide users with the classes they need for given interfaces and it is the getClass method on the container that provides this functionality.

This method is probably a bit longer than it needs to be at the minute but cutting through the chaff you can see that it can be broken down into three basic functions:

  • get the name of the class needed from the map
  • instantiate the class
  • get any dependencies that class requires

Whilst it is the heart of the user’s experience, this is the simplest part of the whole solution in my mind. Getting the class name from the map is self-explanatory. The instantiation makes use of the JSON parser trick to get the class; the only thing to remember here is that the constructor of the class isn’t called. And the final part, getting dependencies, is really just a recursive call to this method: rinse and repeat. OK, it’s a little more involved than that, but not much, and it falls nicely into the next section.
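A simplified sketch of those three steps (not the actual method body, and assuming the map is keyed by interface name) would be:

```apex
// Sketch of Container.getClass - error handling omitted
public Object getClass(Type interfaceType){
    // 1. get the name of the class needed from the map
    String className = classMap.get(interfaceType.getName());

    // 2. instantiate it via the JSON trick - note no constructor is called
    Object instance = JSON.deserialize('{}', Type.forName(className));

    // 3. recursively resolve any dependencies the new class declares
    if(instance instanceof IHasDependencies){
        IHasDependencies dependent = (IHasDependencies)instance;
        Map<String, Object> deps = dependent.getDependencies();
        for(String interfaceName : deps.keySet()){
            deps.put(interfaceName, getClass(Type.forName(interfaceName)));
        }
        dependent.gotDependencies();
    }
    return instance;
}
```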


One of my biggest goals at the outset of this process was to not have to worry about child classes and their dependencies. I want to be able to say to the container: “give me a working one of those – and don’t bother me with how”. This is where the simple IHasDependencies interface comes to the rescue.
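Based on the description that follows, the interface is along these lines; the map’s key and value types are my assumption:

```apex
public interface IHasDependencies {
    // Keys name the interfaces this class needs; the container fills the values
    Map<String, Object> getDependencies();
    // Called by the container once every value in the map has been populated
    void gotDependencies();
}
```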

As you can see the interface is extremely simple. The key to the process is the map returned by getDependencies; the key set of the map is used by the container to know which interfaces to request classes for in the recursive call. The values are where the container puts the instantiated classes to pass back to the class requesting the dependencies. This works because non-primitive types are passed by reference. It would obviously be possible to have a member on the interface that was this map, but we would run into problems being able to populate it in classes that are instantiated via the JSON trick.

The second method in the interface is gotDependencies, and at first look it seems a bit odd. Looking back at the container class you can see that this method is called after all of the dependencies have been resolved and populated in the map. So, why is it called? It is there to give the class implementing the interface the chance to take its newfound classes out of the map and assign them to internal members; after all, no one wants to be referencing members from a map – it just feels ugly – although if that’s what you want then knock yourself out!

An example of a simple class that implements the IHasDependencies interface would look something like this.
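Sketched from the description below (the map keys and member names are illustrative, not the original listing):

```apex
public class ParentClass implements IHasDependencies {
    private Map<String, Object> deps;
    private IHttp http;

    public Map<String, Object> getDependencies(){
        // Declare the interfaces we need concrete classes for
        deps = new Map<String, Object>{ 'IHttp' => null };
        return deps;
    }

    public void gotDependencies(){
        // Lift the now-populated dependency out of the map into a member
        http = (IHttp)deps.get('IHttp');
    }
}
```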

This class has a dependency on an IHttp implementation. To declare that it has this dependency it implements the IHasDependencies interface. In the getDependencies method it creates a new map and adds to it the interfaces for which it needs concrete classes – in this case just IHttp. This means that when the container instantiates the ParentClass class it will know it has dependencies that need fulfilling (the IHasDependencies interface), and by calling the getDependencies method the container is able to get a list of required interfaces and can populate the map to return them to the ParentClass.

You can also see in the ParentClass the implementation of the gotDependencies method; in this the IHttp class in the map is assigned out to a private member of the ParentClass class to allow it to be easily used elsewhere in the class.

Bringing it all together

Getting a fully loaded ParentClass is simply a case of loading a map and then requesting the class from the container.
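In code that amounts to something like this; the static resource and interface names here are placeholders of my own:

```apex
Container c = new Container();
c.loadMapFile('DependencyMappings');
IParent p = (IParent)c.getClass(IParent.class);
```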

Two simple calls – and obviously for each extra class you need it’s just another call to getClass. There is some overhead associated with this in that you need to build and maintain your mapping file and you need to instrument your classes with the IHasDependencies interface, but rarely do you get something for nothing.

What you do get, however, is a nicely decoupled code base. And not only that, it actually starts to become quite readable: getting a class with all necessary dependencies is one call in your code, all classes with dependencies are clearly labelled and list all their dependencies in one common place, and the mapping is held in a centralised location. For me this framework looks like it will meet my needs.

Having said that I also think there is still work to be done; this is very much the seed of the idea and I think there are lots of improvements to be made as well as bells and whistles to add.

The project can be found on Github – feel free to dive in and hack about with it.

Dependency Management: A First Pass

I’m starting to sound like a broken record I’m sure – constantly going on about why we should be using interfaces in Salesforce more, why we should be breaking tightly coupled code, blah blah… Go here and here if you haven’t already been exposed to my rabbiting!

Now, I don’t just write this stuff, I try and make sure I live by my own word as much as possible. But, as I’m sure all of you that have tried to decouple things more in Salesforce will know, this decoupling can quickly become a mess: making sure the right concrete class is being called at the right time is only simple at first. It’s the same old story: at the time everything makes sense, but going back to that code one, two, three months later you sit there thinking: “what the hell was I on when I wrote this?”

It was such a moment earlier in the week that led me to face up to the challenge I set myself a while ago and do something about the way in which I manage interfaces and dependencies in my code, with the overall goal of building a framework to manage all of this in a clean and repeatable way. I may well find that this has been done before, but it was a wet Sunday and I wanted to have a stab at it myself.

So – looking to other languages the obvious answer is to tend towards some kind of dependency injection, inversion of control or service locator – but which one and why? Well, they’re all similar but different, and frankly an in-depth discussion about which to use and why is way beyond the scope of what I wanted this post to be. Suffice to say I believe I have gone for a form of interface dependency injection with some kind of inversion of control container – if all of that seems a bit vague then it’s because it is; I’m not really one for theory and names so please excuse any incorrectness there.

The Solution

Also, I’m not going to post all of the code here as I don’t think it’s appropriate, but if you want to follow along as I go through/around the various components then the project can be found on Github.

The platform provides a big challenge for dependency injection as there’s no form of reflection or inspection – it’s for this reason I have decided to adopt interface injection. Constructor based injection could have been an option, but the novel method used to dynamically instantiate classes means that there’s no constructor that I can inject into. At the heart of this system is the interface IHasDependencies: any class that you want to have dependencies injected into it by the container needs to implement this interface. Using the interface means that this is the only thing the class developer needs to be aware of; this keeps the classes decoupled and unaware of the container that’s wiring them up.

As you can see the interface is very simple, with only two methods. The first, getDependencies, returns a Map describing the dependencies that the class requires. A map is passed as it allows me to make use of the fact that non-primitive objects are always passed by reference on the platform – this means that the container can populate the value portion of the map with the concrete classes and they get automatically “passed” back to the class that needs them. This little trick means we can again keep everything nicely decoupled.

The second method, gotDependencies, is a little bit of a hack but a necessary one. The container calls this once it has set all of the values in the map to concrete classes allowing the class wanting the dependencies to assign them out to members as it sees fit. This has to be done this way as without reflection/inspection the container cannot do it itself.

The other major part of this solution is the Container class. This class is responsible for maintaining a mapping of interfaces to concrete classes and then providing the appropriate concrete class when needed. The container calls itself recursively to get hold of all of the dependencies in all of the child classes as well. This recursion means that the consumer of the parent class need not worry about what other classes may or may not need to be configured in order to get a complete object graph. The container is able to seemingly dynamically instantiate classes, but is in fact using the probably now well documented, and extremely useful, quirk of the JSON deserializer as described in much more detail here.
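For anyone who hasn’t come across the quirk, it boils down to this (the class name here is just an example):

```apex
String className = 'HttpImpl'; // in practice this comes from the container's map
// Deserializing an empty JSON object into the Type yields a new instance,
// without the class's constructor ever running
Object instance = JSON.deserialize('{}', Type.forName(className));
```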

The rest of the functionality in the container class is there to control the all-important mapping. There are two available methods to set up the mapping: the first is a code based solution where you programmatically add the interface to concrete mapping; this is useful in a test scenario where the mocks are likely to be one-off test classes. The second method allows you to load the map up from a text file stored as a static resource. The file is currently structured as a simple JSON string representing a list of Container.ClassMapFileEntry objects. This approach gives you the ability to change functionality without deploying code – useful if you’re working in an environment where deployment can be an arduous process, and also very handy for changing behaviour, such as integration points, when moving code between environments, as again no code needs to be changed.


This post is short and lacking in any in-depth code walk through on purpose as I really just wanted it to be a marker, an introduction to what I hope will be an interesting discussion around the worthiness of this idea and how it can be improved (or a signpost to an existing fully fledged framework). There are plenty of features that I can think of – conditional mapping and instantiation control to mention just two – that could be added to this. Not to mention a whole bunch of error handling and test classes. But I feel that all of that is further down the road, the core of the solution needs to be well developed and debated first.

As I mentioned before, all of the code that I have so far can be found on Github and I encourage you to take a look, have a play and let me know of its many flaws. I include a couple of examples below that work with that code to exercise the basics of the container and interface – hopefully they will give you some idea of how it’s meant to be used. In the meantime I will work on a follow up post that goes through all of this in much more detail and breaks the code down section by section.

Looking forward to your thoughts.

I’ve had a stab at coming up with a way to manage dependent interfaces; a very simple IoC/DI solution – I’m looking for ways to improve it and would appreciate feedback. The code can be found over here.

I have added a bit more of a detailed walkthrough here.

JSON deserialization: Have Salesforce got it wrong?

We are all grateful for, and love, the native JSON support that was introduced to the platform in Winter ’12; it has certainly made lives much simpler when it comes to integrating with external systems.

One of the reasons that the introduction of JSON support was such a step change for the platform was the prevalence of JSON in the wild as a way of communicating – it has over recent history become the de facto method for data transport, replacing the bulkier and often more complex XML. One of the biggest reasons in my eyes for the rapid adoption of JSON is its flexibility, allowing interfaces to be changed quickly and easily. This of course, whilst an advantage for JSON, isn’t a great selling point for systems architecture in general – historically interfaces are expected to be well defined and slow moving: JSON bucks this trend massively.

Accepting that changes to interfaces have become rapid and often unpredictable we see that JSON parsers are being written to cope with these changes. In general they are very flexible; not caring if fields are added or removed and instead just doing the best they can with what they’ve got. That is of course unless you happen to be developing on the platform.

If we take the following, very simple, JSON:
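Something along these lines will serve; the field names below are a reconstruction chosen to be consistent with the error message quoted later in the post:

```json
{ "fielda" : "some value" }
```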

Using the built in deserializer to deserialize this into the following Apex class works as expected:
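That class and the deserialization call would look roughly like this (a representative reconstruction rather than the original listing):

```apex
public class Example {
    public String fielda;
}

Example e = (Example)JSON.deserialize('{ "fielda" : "some value" }', Example.class);
System.assertEquals('some value', e.fielda);
```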

All good. Now what happens if we add a new field to the Apex class but leave the JSON as it is, without the new field?
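That is, the class gains a field with no counterpart in the JSON (again a sketch consistent with the field names used here):

```apex
public class Example {
    public String fielda;
    public String fieldb; // new field - not present in the JSON
}
```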

Deserializing the JSON into this class works and, as expected, fieldb is null in the resulting object. Brilliant, this is exactly the flexibility that we would expect. So what happens when we have the opposite situation; a field in the JSON that isn’t a field on the class? Keeping the class the same as above but changing the JSON to look like this:
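Again a reconstruction, but something along these lines, with an extra field the class doesn’t know about:

```json
{ "fielda" : "some value", "fieldc" : "a surprise value" }
```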

Trying to deserialize this JSON into the Example class will now throw an exception.

FATAL_ERROR System.JSONException: Unknown field: Example.fieldc

This is not so brilliant; in fact, out of the two situations I’ve described above, if the platform was going to throw an exception at all I’d rather it be in the first instance – after all, I was clearly expecting fieldb, so it would be nice to know I didn’t receive it in my JSON. However, going back to the principle of flexibility in JSON and its parsers, I think it would be best not to throw an exception in either case.

So, why do I think it’s such bad form to throw an exception in the second case? The problem is that you become extremely tightly coupled to the interfaces that you’re integrating with, something that I don’t really see as being in the spirit of JSON. Why are you so tightly coupled? Well, every time your third party adds a field to their JSON all of your deserialization is going to fail, and fail spectacularly too. Once you know about the change it’s quickly fixed – in the case above simply add fieldc to your Example class – but I shouldn’t really have to do this; it makes the system so brittle. This is obviously amplified if your code is in a managed package and installed in other Orgs; suddenly a third party change makes your product fail and that’s all the user sees – they don’t know and don’t care that it’s not your fault.

I think this needs to be changed and will raise an Idea to assist in this process, but I would be interested in your views on this. Am I wrong in my thinking? Am I expecting too much from the platform here? Should there be exceptions in both cases?


Thanks to James Melville (@jamesmelv) for pointing me to the Summer 12 release notes:

JSON methods for deserializing JSON content now perform more lenient parsing by default. They don’t throw any errors if the JSON input string contains attributes, such as fields or objects, that don’t exist in the type of the deserialized output object. They simply ignore these extra attributes when deserializing the JSON string.

This serves me right for a) not posting these things when I think of them and for b) not reading release notes carefully enough.
If this turns out to be true then I gracefully rescind my earlier rant.

Convert a Blob to a Hex String in Salesforce

Since setting up my new business Wickedbit I have been spending a lot of time looking at integrating Salesforce with various third party applications. I love integration work, which is just as well, as one is never like another; despite there being standards for things like authentication and data payloads, people still seem to relish the chance to “roll their own”. I shouldn’t complain, I guess – it keeps me in a job!

One such “example of innovation” that I came across required me to pass a blob via HTTP, and to achieve this, rather than simply Base64 encoding the Blob, I was required to send it as a string of hexadecimal values. An interesting idea for sure, and one that, after a quick bit of searching, I realised wasn’t straightforward to solve.

My solution to the problem has ended up being a little convoluted and I’m sure there are other ways around this, but for posterity’s sake here is my approach. I decided to use the platform EncodingUtil class to get the Blob into a base64 representation; then using some simple bit shifting and masking I would reverse the base64 encoding and therefore get the individual bytes that make up the blob. From there it’s just a case of doing a decimal to hex conversion. It sounds easy enough and here’s the code; it’s compressed into as few statements as possible to try and keep below governor limits.
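The compressed original isn’t reproduced here; instead this is a more readable sketch of the same approach – base64 encode, undo the encoding with shifts and masks, then map each byte to two hex characters. Treat it as an illustration of the idea rather than the original listing.

```apex
public static String blobToHex(Blob input){
    String b64 = EncodingUtil.base64Encode(input);
    String b64Chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/';
    String hex = '0123456789abcdef';
    String result = '';
    // Every four base64 characters pack up to three bytes into 24 bits
    for(Integer i = 0; i < b64.length(); i += 4){
        Integer buffer = 0, chars = 0;
        for(Integer j = 0; j < 4; j++){
            String c = b64.substring(i + j, i + j + 1);
            buffer = buffer << 6;
            if(c != '='){
                buffer = buffer | b64Chars.indexOf(c);
                chars++;
            }
        }
        // n non-padding characters carry n - 1 whole bytes
        for(Integer j = 0; j < chars - 1; j++){
            Integer b = (buffer >> (16 - (8 * j))) & 255;
            result += hex.substring(b >> 4, (b >> 4) + 1)
                    + hex.substring(b & 15, (b & 15) + 1);
        }
    }
    return result;
}
```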

This could all be made a lot simpler with a couple of little additions to the platform, aside from the inclusion of a byte primitive. Firstly, access to the bytes that make up the blob: either let me initialise a list of bytes using the blob or provide a getByteAt(n) method. Secondly, a simple decimal to hex conversion method would clean this up no end and be useful in other circumstances too.

Whilst you might not have an odd integration requirement like I did to fulfil I hope that the idea above will be of some use; if you remove the lookup into the hex map you have a routine that can give you the list of bytes that make up a blob – that’s got to be useful to someone, no? I hope so.

A VisualForce Wizard Framework

Wizards – the step by step type, not the spell casting type – they’re not much fun.  Well, at least that’s what I think; to be honest there are few things less interesting to code in the world than another Wizard.

The trouble we have is that it’s a paradigm that’s presented to the end user over and over, and as such it is reinforced in the user’s brain over and over again as the right way to do things.  This epic reinforcement leads us down an inevitable path; the first step of which is us saying, “there’s a much better way of gathering this information”, and the final step of which is us wearily saying, “ok, I give in, so you want: next, previous and cancel, yes?”.  It’s a losing battle and one I’m sure many of us have fought through before – only to be defeated in the end by the weight of the user’s expectations.

So, what can we do to make this inevitable requirement easier to swallow?  Well, the first port of call is the Cookbook and its recipe for VisualForce wizards.  It’s a good start and shows the basics: use a common controller, return the PageReference of the next or previous page.  I’ve been there a few times over the past couple of years – it’s always a good starting point but I often feel as though it’s lacking something and as such I tend to end up tweaking it.

This continual tweaking got me thinking: surely there has to be another way.  Not necessarily a better way, please note, just a different way.

So what do I want from this different way? In general I don’t want my VisualForce pages to really need to worry about the fact they are in a Wizard. What do I mean by this? Well, I want the controller that stands behind them to provide a standardised way for carrying out certain “default” wizard tasks. As with most of my ideas they’re open to change and development but as I see things at the minute these default features are:

  • Provide the page title
  • Check the user is somewhere they can be
  • Provide Next and Previous actions for navigation

A VisualForce page that uses such a controller may have a skeleton that looks like this.

<apex:page controller="wizardController" title="{!pageTitle}" action="{!checkStep}">
  <apex:commandButton action="{!Previous}" disabled="{!NOT(hasPrevious)}" value="Previous" immediate="true"/>
  <apex:commandButton action="{!Next}" disabled="{!NOT(hasNext)}" value="Next" />
</apex:page>

This is, as you can see, a completely bare bones page. It provides the template for each page in the wizard to be built from.

The best thing about this page in my mind is that all of the navigation is taken away from each individual page and centralised in the controller. We now have only one artifact that is responsible for navigation. This has to be a good thing, no?

What about the controller itself then; how should we implement this? We first need to store the information about each page; I hold this information in a small inner class called NavigationPage.

private class NavigationPage{
  public Integer previous {get; set;}
  public Integer next {get; set;}
  public PageReference thePage {get; set;}
  public String title {get; set;}
}

The controller itself holds a reference to a Map object. This Map contains all of the available pages in the Wizard. The Integers, next and previous, defined in the NavigationPage object refer to the keys in this Map; it is these keys that dictate the flow of the pages. This gives rise to a method to set up the pages in the controller; this method would look something like this:

private void setupSteps(){
  NavigationPage np = new NavigationPage();
  np.thePage = Page.newOpptyStep1;
  np.title = 'New Opportunity - Customer Information';
  np.previous = null; = 2;
  steps.put(1, np);

  np = new NavigationPage();
  np.thePage = Page.newOpptyStep2;
  np.title = 'New Opportunity - Opportunity Information';
  np.previous = 1; = 3;
  steps.put(2, np);

  np = new NavigationPage();
  np.thePage = Page.newOpptyStep3;
  np.title = 'New Opportunity - Customer Information';
  np.previous = 2; = null;
  steps.put(3, np);
}
Given this information inside the controller it’s a fairly trivial task to fill out the other navigation related actions needed to satisfy our earlier VisualForce template. Although we’re still missing one piece of information: the key of the current page; without this we’re doomed. This is as simple as holding an Integer in the controller. For simplicity’s sake we can just default this to the first page in the Wizard or, a better solution, have it find the key of the item in the Map that has null as its previous step. Either way, once we have this we can fill out the remaining actions needed on the controller like so.

public PageReference Next(){
  if(steps.get(curStep).next == null)
    return null;
  curStep = steps.get(curStep).next;
  return steps.get(curStep).thePage;
}

public PageReference Previous(){
  if(steps.get(curStep).previous == null)
    return null;
  curStep = steps.get(curStep).previous;
  return steps.get(curStep).thePage;
}

public Boolean getHasNext(){
  return steps.get(curStep).next != null;
}

public Boolean getHasPrevious(){
  return steps.get(curStep).previous != null;
}

public String getPageTitle(){
  return steps.get(curStep).title;
}
The final piece of the puzzle is the checkStep action that I had on the page. Firstly, what is this for? Well, I don’t like users thinking that they can shortcut the wizard; after all, they asked for it, so they should go through the usability anguish the same as the rest of us. So, the idea of the checkStep action is to double check that the page they are trying to load is actually the next page in the wizard flow. This has the obvious advantage that once we get to the end of the wizard we can rest assured that the user went through every step and we’re not going to start getting null reference exceptions. The implementation is again a trivial one, but it provides us with the security we need to stop users bookmarking the wrong page, entering URLs themselves or generally finding a way to jump from page to page with no care for the implications for the data.

public PageReference checkStep(){
  if(getPagePath(ApexPages.currentPage()) != getPagePath(steps.get(curStep).thePage)){
    curStep = 1;
    return steps.get(curStep).thePage;
  }
  return null;
}

private String getPagePath(PageReference pr){
  // prefix with the org's base URL so URL can parse the relative page reference
  return new URL(URL.getSalesforceBaseUrl().toExternalForm() + pr.getUrl()).getPath();
}

There’s some messing around in there to make sure I am comparing like with like but the crux of the idea is to simply compare the PageReferences for the current page and the page curStep says we should be on. If they’re different return to the start. If not then just carry on as normal.

Whilst this is a basic, bare bones template for the implementation of a wizard in VisualForce, I have to say I’m quite pleased with it. In particular because it brings all of the code responsible for navigation into one well defined place. Yes, it means we need a code change if we want to alter the flow of the wizard, but what’s wrong with that? The bonus is that we can write test classes to make sure that the flow of the wizard is what we want.

This is a starting point that I offer to the world; I’d be keen to hear your feedback on it from your personal experiences with such user requirements.

Empty Maps and VisualForce in Spring ’12: A Conclusion

Since my initial post on this subject I have had the good fortune to be able to get some feedback from Salesforce on this issue. And I thought that it would be good to share this.

It turns out that the issue I have been seeing is due to a bug in the VisualForce engine, which looks through the Map to see which columns need to be rendered. This code is passing null as a key to the Map, which in turn throws the error that we see as end users. I am assured that this bug will be fixed before Spring ’12 is released; so whilst it may be a pain at the moment we can take some solace in knowing it’s a transient issue.

As an aside I also found out something else during my conversation with Salesforce around this which is useful. Apparently Spring ’12 contains a fix which allows null to be a valid key in a Map. I wasn’t aware that it previously wasn’t allowed but I am informed that this is the case. It might therefore be worth checking through your code to see if you rely on this behaviour. I have heard that this fix may become “versioned” as people may depend on the old behaviour but I can’t say for sure that this will be the case. Short story: check your code.

Empty Maps and VisualForce in Spring ’12

I woke up this morning to a bug in a production environment; it seemed fairly innocuous, but being a sensible individual I went off to the sandbox first to replicate it before blundering into production with a fix.

Things got interesting when I couldn’t even get to the page that I needed to in order to test said issue; for some reason I kept getting the error message: Map key null not found in map

I went back to production, ran through the test scenario and sure enough managed to get past this erroneous page and to my original bug. How odd I thought, then I realised: the sandbox had been upgraded over the weekend; I was in Spring ’12’s waters – it had to be something to do with that.

Needless to say, after some investigation it does indeed seem to be a “feature” of Spring ’12, so I thought I’d share it with you in all of its technicolor glory. And even a simple workaround, because I’m nice like that.

The code is straightforward enough: an Apex controller that looks like this:

public class TestController{
    private Map<Integer,Account> accs = new Map<Integer,Account>();
    public Map<Integer, Account> getAccounts(){
        return accs;
    }
}

And a VisualForce page that looks like this:

<apex:page controller="TestController">
    <apex:pageBlock >
        <apex:pageBlockTable value="{!accounts}" var="acc">
            <apex:column value="{!accounts[acc].name}" />
        </apex:pageBlockTable>
    </apex:pageBlock>
</apex:page>

The key to the problem is that the Map is initialised but empty. Running the same code in a Winter ’12 environment gets me the following output:

Whilst running it in a Spring ’12 environment gives me this:

Oh dear, for some reason in Spring ’12 the empty Map appears to contain a null key. I ran this through system.debug and the outputs from both environments matched and were as expected. So it would appear that it’s something to do with the VisualForce engine.

There is, however, the workaround I promised, although it’s not terribly exciting and possibly not practical in all scenarios. Having said that, it worked for me and I imagine that it could be adapted for pretty much any other situation. I mentioned earlier that the key to the problem was the fact that the Map was initialised but empty. This means that a simple change to our controller to return null when the Map is empty will cure all ills, like so:

public class TestController{
    private Map<Integer,Account> accs = new Map<Integer,Account>();
    public Map<Integer, Account> getAccounts(){
        if(accs.size() == 0)
            return null;
        return accs;
    }
}

Hopefully this will save some of you a little Monday morning pain when your code suddenly stops working after an upgrade for no apparent reason.

**** UPDATE ****

This issue seems to go deeper than I first managed to uncover, and whilst the fix above will work for the use case described, it won’t work in a whole raft of others. Yes, my initial thought that this could be adapted to work around any situation was way off the mark.

I’m having an issue now that appears to be with rehydrating the variables in the controller from the view state, although I can’t get deep enough in to understand it completely. Looks like I’m waiting on Salesforce support.

**** UPDATE ****

I have had a response on this issue; read Empty Maps and VisualForce in Spring ’12: A Conclusion for more details.

Beware of Hierarchical Custom Settings

I’ve once again had the pleasure over the past few days of using Custom Settings in Salesforce. I strangely enjoy using them; they are designed for a job and they do that job well. Well, that was how I felt until this most recent encounter with Custom Settings.

The difference this time was that I was using hierarchical settings rather than a list, and I clearly hadn’t RTFM. As such I beat my head against the desk for a while whilst I got to grips with some, shall we say, curiosities. This is probably all written down elsewhere; I just wanted to make a personal note of it so next time I don’t get so frustrated.

The thing that had me stumped for the longest was the following piece of code.

  if(MySetting__c.getInstance(UserInfo.getUserId()) == null){
    MySetting__c val = new MySetting__c(Field_1__c=false, Field_2__c='a string');
    insert val;
  }

Obviously MySetting__c is my custom setting and I’m interested here in getting hold of the values for the current user. Of note, or possibly a red herring, another user already has a MySetting__c record. This if was part of a larger, more complex piece of functionality, and when I saw the method failing I naturally thought that the problem could never be in such a simple piece of code as this and spent some time trawling through the rest of the code. And yet I was clearly wrong: this simple piece of code can fail – or at least not perform as I had expected.

In the land of list custom settings I would have been returned a null if the record I requested did not exist – a classic testing mistake, that one: not setting up your custom data in your test scripts. In this case, however, whilst the user had no record associated with them, the if evaluated to false – getInstance was returning a valid object. The interesting thing is that the object returned had Field_2__c set to null, which made life easier: I would just change the if to test one of the fields on the returned object for null. Field_1__c was my choice (for some reason); it’s a Boolean, but that didn’t matter, as here on the platform a Boolean is a class rather than a primitive type as in other languages, and as such can be null. Or at least that’s the case everywhere else I’ve come across Booleans on the platform. Here, however, Field_1__c was set to false, and that’s no use as a real object can also set this field to false. A quick switch to testing Field_2__c and everything was OK and behaving once again.

My lessons from this little excursion are that a) hierarchical custom settings will return an object even when one isn’t defined for the User/Profile/Org combination you’re searching on, and b) said object won’t necessarily be initialised as a normal object on the platform would be. Two worthwhile notes to remember, as I’m not too sure how much more head banging my desk can take.

Counting weekdays

Counting weekdays in Salesforce is something that you see cropping up every now and then, and generally speaking the same solution gets wheeled out – loop through the days between your two dates and count the number of days that are Saturday or Sunday. This has, ever since I first saw it, frustrated me; it’s OK for small date ranges but for larger ones it starts to become inefficient, wasting code statements left, right and centre – and if it’s in a loop, well, it’s just not worth thinking about.
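For reference, the loop-based solution being criticised looks roughly like the following. This is a Python sketch of the pattern rather than any actual Apex implementation (the name weekdays_between_naive is my own), so the cost of walking every day in the range is easy to see:

```python
from datetime import date, timedelta

def weekdays_between_naive(d1, d2):
    # Walk every day in the half-open range [start, finish) and skip
    # Saturdays and Sundays -- O(n) in the length of the range, which
    # is exactly the inefficiency the post complains about.
    start, finish = min(d1, d2), max(d1, d2)
    count = 0
    day = start
    while day < finish:
        if day.weekday() < 5:  # Monday=0 .. Friday=4
            count += 1
        day += timedelta(days=1)
    return count
```

In Apex the equivalent loop also burns governor-limited script statements on every iteration, which is why it hurts inside another loop.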

I’ve always felt, deep down, that there must be a more efficient way of doing this, so, returning from a couple of weeks of disconnectedness, I thought I’d have a go at finding one – you know, just to ease myself back into the swing of things!

The solution that I have come up with is available on GitHub and I have to say that I’m not particularly pleased with it. The basic approach I have taken is to use the standard daysBetween functionality in Apex and then calculate the number of weekend days between those dates that I need to take off the total. It sounds like a great idea but I soon ran into difficulties with dates that started or ended on a weekend – in this case it wasn’t as simple as multiplying the number of weeks by two to get the number of Saturdays and Sundays. In fact I’ve ended up with a fairly horrid if/else statement which makes me cringe – a lot. Here, have a look, but I’m warning you: it’s not pretty.

public static Integer weekdaysBetween(Date d1, Date d2){

    Date start = min(d1,d2);
    Date finish = max(d1,d2);

    Integer daysBetween = start.daysBetween(finish);
    Integer startDay = dayOfWeek(start);
    Integer finishDay = dayOfWeek(finish);

    // Remove two weekend days for every full week in the range
    daysBetween -= Math.round(((daysBetween + startDay - finishDay) / 7) * 2);

    // Correct for ranges starting or finishing on a weekend
    if(finishDay == 5 && startDay != 5){
        if(startDay == 6)
            daysBetween += 1;
        else
            daysBetween -= 1;
    } else if(finishDay == 6 && startDay != 6){
        if(startDay == 5)
            daysBetween -= 1;
        else
            daysBetween -= 2;
    } else if(startDay == 5 && finishDay != 5 && finishDay != 6){
        daysBetween += 1;
    } else if(startDay == 6 && finishDay != 5 && finishDay != 6){
        daysBetween += 2;
    }

    return daysBetween;
}

As I said, it isn’t pretty! It’s probably worth pointing out the two other non-standard functions in that code – the Date min/max and dayOfWeek functions. These are both also in the DateExtensions class on GitHub and are available for your pleasure.

As far as the mess of counting weekdays is concerned I’m fairly sure that what I have still isn’t the most efficient method of achieving my goal but for now I’m going to leave it as it is and hope that it’s of some use to someone. Of course if anyone wants to grab hold of it and improve it then that would be great too – hopefully it’s a good start for someone, somewhere.

Having said that, it’s not all doom and gloom; what’s there does seem to work and passes the test suite that I currently have for it – it’s just not as efficient as I’d like it to be.
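For anyone wanting to sanity-check the constant-time idea off-platform, here is a minimal Python restatement of the approach (my own sketch, not the GitHub code): every full week contributes exactly five weekdays, so only the leftover days, at most six of them, need examining individually.

```python
from datetime import date, timedelta

def weekdays_between(d1, d2):
    # Count weekdays in the half-open range [start, finish) without
    # looping over the whole range: full weeks contribute 5 weekdays
    # each, then the remainder (< 7 days) is checked directly.
    start, finish = min(d1, d2), max(d1, d2)
    total = (finish - start).days
    full_weeks, remainder = divmod(total, 7)
    count = full_weeks * 5
    for i in range(remainder):
        if (start + timedelta(days=i)).weekday() < 5:  # Mon=0 .. Fri=4
            count += 1
    return count
```

Handling the remainder with a bounded loop sidesteps the branch-heavy if/else above, since the awkward start-or-finish-on-a-weekend cases no longer need reasoning about case by case.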
