One of the biggest challenges that I face when writing my code is making sure certain things are enabled in an Org. For example, some people still don't have Chatter enabled, so trying to post to a feed is going to cause my code to blow up. Each time I come across something like this I have to go hunting to find the best way to figure out if something is on or not. This trawling of the internet finally got the better of me the other day and I have started a Feature class.
A few days ago I noticed that this blog wasn’t looking like it should.
— Simon Goodyear (@simongoodyear) January 13, 2015
In fact it turns out quite a lot of it has disappeared. So firstly, apologies if you're getting here through search results – most of the content behind those permalinks is no more, and that which is left appears to have been completely messed up.
I am currently working through getting all of this information back but it's not going to be an overnight return to its former glory. However, new posts should continue to be added and, more importantly, I will turn that backup feature back on!
We all love Visualforce, right? It gives us the chance to have that finer-grained control over how things look for users, and users love us making things look good for them – often much more than any functionality we provide. And when dealing with Visualforce, it's often not long until we need to write a Custom Controller or a Controller Extension in Apex to allow us to fulfil some requirement or other. Controllers provide us with three main functions: access to data, access to business logic through actions, and navigation. Actions and navigation are pretty straightforward as there is only one way to implement them in your controllers. Data however can be implemented in two ways and, what's worse, at the same time!
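To make the two approaches concrete, here's a minimal sketch (the class, page and member names are all my own invention):

```apex
public with sharing class ExampleController {
    // Option one: a property – the page binds to it with {!greeting}
    public String greeting { get; set; }

    // Option two: a getter method – the page binds to it with {!accounts}
    public List<Account> getAccounts() {
        return [SELECT Id, Name FROM Account LIMIT 10];
    }

    // An action method: business logic, plus navigation via the return value
    public PageReference save() {
        // ... business logic would live here ...
        return Page.ExamplePage; // hypothetical page to navigate to
    }
}
```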
I learnt something rather intriguing today on the old Force.com platform: decimals just don’t behave how I expected them to when converting them to strings. I have obviously set myself up with that line but nevertheless let me go through the motions.
Before today I would have expected the following code to display the same three strings on each of the lines.
That is, despite how I declared the decimal in code, they would all be stored the same and hence all output the same way – truncated to zero decimal places in this case, given they are all exactly one. However when I run the page I get the following result:
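The original listing is missing, but reconstructed from the description it would have looked something like this:

```apex
Decimal one = 1;
Decimal onePointOh = 1.0;
Decimal onePointOhOh = 1.00;

// I expected three identical strings; instead each decimal remembers
// the scale it was declared with:
System.debug(String.valueOf(one));          // 1
System.debug(String.valueOf(onePointOh));   // 1.0
System.debug(String.valueOf(onePointOhOh)); // 1.00
```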
This struck me as a bit odd as it’s actually remembering how I declared the number and then using that when it converts it back to a string.
This got me thinking a little bit: is there some weird bug whereby the numbers are considered to be different? I didn't think this was the case, but you can never be too sure, so I added the following two methods to my test class and got the following output on the page:
As expected, comparing the numbers shows they are the same; they are both exactly 1. However, comparing the strings we see that they are different (I know we could see that from the previous output but it's always good to see what the compiler thinks).
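Reconstructed, the two comparisons look something like this:

```apex
Decimal a = 1.0;
Decimal b = 1.00;

// Comparing the numbers: equal – the scale is ignored
System.debug(a == b);                                 // true

// Comparing the string forms: different – the scale is preserved
System.debug(String.valueOf(a) == String.valueOf(b)); // false
```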
My next thought was: what does it do when we add decimals together that have been declared with different precision? This led to the following tests:
We can see from this that order of inputs doesn’t matter and that the result will always have the precision of the most precise input.
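Sketched out, the addition tests behave like this:

```apex
// The result always carries the scale of the most precise input,
// regardless of the order of the operands:
System.debug(String.valueOf(1.0 + 1.00)); // 2.00
System.debug(String.valueOf(1.00 + 1.0)); // 2.00
System.debug(String.valueOf(1 + 1.00));   // 2.00
```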
So now that we know how it all works, even if it wasn't what I originally thought, where does that get us? Well strangely enough I had an application for this pretty much straight after I discovered it. I had to send a JSON object to a REST service and needed to make sure that all the numbers I sent were formatted to show two decimal places. This little oddity means that I can simply multiply my input by 1.00 and I know that when I take the string value of the result I will have everything formatted just the way I want it. Perfect.
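For whole-number inputs the trick looks like this (multiplication sums the scales of its operands, so the result picks up the two decimal places from 1.00):

```apex
Decimal amount = 42;                     // scale 0
Decimal formatted = amount * 1.00;       // scale 0 + scale 2 = scale 2
System.debug(String.valueOf(formatted)); // 42.00
```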
It has been rightly pointed out to me that this phenomenon is expected and well documented behaviour (SF Docs). Thanks again @jamesmelv – before I write my next post I promise to read the documentation.
I was quite excited by the introduction of the hashCode and equals methods to custom classes in Winter ’13, as I was finally going to be able to get a map to behave the way that I wanted it to. I started to write such a map and a post to go with it when I started to discover a few, shall we say, nuances about what was necessary to implement these methods – in particular the hashCode one. I will follow up with my magic map post but for the meantime I want to discuss the implementation of hashCode method.
I never studied computer science, or indeed maths in enough depth to know the intricacies of hash codes but I know enough to know that good hashing methods are hard to come by and potentially are computationally expensive. Given both of these facts I came up with a cunning plan when I was designing my hashCode method, in fact it was more cunning than a… never mind. My plan was thus:
First of all I would create a string representation of my object by making use of the JSON serializer, and once I had this string I would simply call its hashCode() method.
The beauty of this plan is that I would be leveraging the internal hashing algorithm on the platform, which should be great: well optimised, well dispersed, etc. And I would only be using two statements to get said hash code. (I could have made it one, I know, but for clarity I kept it at two.) Basically I would get a great, generic hash generator without using a million lines of Apex, and hence continue to play nicely with the governor limits.
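The intended implementation was just this pair of statements – which, as it turned out, doesn't actually compile:

```apex
public Integer hashCode() {
    String state = JSON.serialize(this);
    return state.hashCode(); // compile error: hashCode() isn't exposed on String
}
```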
Now the thing about plans as cunning as this is that they normally don't work, and whilst that is a gross generalisation, in this case it was the unfortunate truth. It didn't work. The problem is that the hashCode method isn't exposed on the string object. I must admit I found this a little odd: in order to use a custom class in a set we have to implement this public method. Given that a string can be used in a set I would have expected the same public method to exist. I'm sure that it does exist; the problem is that it hasn't been exposed to us mere mortals.

Whilst I'm on this subject I will mention my general confusion at the implementation of this feature. I would have expected an interface to implement or indeed a virtual class to override. Instead we're asked to add two methods, seemingly at random, to each class. I'm not a great fan of this – there is obviously some jiggery-pokery going on as the code is compiled and I think that rather than hiding this it would have been better to expose it to us. Heading down this route flies in the face of some OO principles and as such, in my eyes, makes the code much more confusing and potentially unclear than it needs to be; but what do I know?
Anyway, putting this to one side and returning to my hunt for a good hashing method. My next thought was to have a look around the web for hashing algorithms and investigate implementing one in Apex. After a bit of hunting I came across the FNV algorithm – it is spoken well of and seems easy enough to implement. Easy enough, that is, in most other languages, but Apex as ever presents its challenges. The problem comes in the fact that you need to operate on an octet of data – just one byte. Using a string as our starting point it's quite difficult to extract just one byte of data from it. You can split the string into individual characters but you have no way of getting back to the integer that represents that character. A shame really, given that you can do this process the other way around: you can use String.fromCharArray() to convert a list of integers into a string. Oh well, this is part of the fun of this platform – finding your way around the obstacles.
So, how to navigate around this small issue then? Well, we could build a static map that lists all possible characters and maps them to an integer. This could work, although with some of the extended character sets this could become rather painful to build, not to mention the fact that we need to keep our values less than 256 to stay within our 8 bits of data. We need to represent our string of data some other way – some way that has a limited character set. Base64 was designed exactly for this purpose, so we could base64 encode our string – the size of the map is limited and the platform offers us a way to do this. You could also convert your string to a hex representation – the map is even smaller in this case, although the resulting string would be longer. To be honest I think this is a personal preference thing; I have no strong thoughts about going with either solution. With this adaptation we can now nicely execute the FNV algorithm in Apex and it ends up looking something like this:
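The original listing is gone, but a sketch of FNV-1a in Apex over a hex string looks like this (Apex has no hex literals, hence the decimal constants; note that each hex character only yields a nibble, 0–15, rather than a full octet):

```apex
public class Fnv {
    public static Integer hash(String hexInput) {
        String hexChars = '0123456789abcdef';
        Long h = 2166136261L;                  // FNV-1a offset basis
        for (Integer i = 0; i < hexInput.length(); i++) {
            // Map each character back to its integer value via its position
            Integer octet = hexChars.indexOf(
                hexInput.toLowerCase().substring(i, i + 1));
            h = h ^ octet;
            h = (h * 16777619L) & 4294967295L; // FNV prime, masked to 32 bits
        }
        return (h & 2147483647L).intValue();   // keep it positive for an Integer
    }
}
```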
It's beautiful and it works. However it has a small flaw that is very much specific to the platform: it is computationally expensive. In fact, in terms of script statements it's almost unbounded – only really limited by the size of the string put into it, which in turn is only limited by the size of the heap! So whilst we have a way of generating a good hash code we need to do something about controlling its cost. Ironically, the thing about a hash code is that it is always the same length no matter what input you give it, and it's this realisation that leads to the solution to this problem. The somewhat odd solution is in fact to hash the string before putting it through our FNV hashing code. The best part of this is that we can make use of standard platform functions to do it for us. The Crypto class allows us to generate an HMAC from any input – ta da! This is fantastic: we can now, in one script statement, compress any string into a reasonably unique string (with a very, very low chance of collision) and, best of all, it will always be the same length, so we can now predict the effort needed by our hashCode method to compute our object's hash.
At this point I can imagine what you're thinking – why on earth didn't you just make use of the built-in hash function right at the start? Well, I thought about it, but it returns a string and our hashCode method needs to return an integer. So I would still need a way to convert a string to an integer – which is what the FNV piece of code will do for us.
Putting all of the pieces together we get something that looks like this:
I have kept the conversion to JSON in there as it allows me to provide a generic function for finding an object's hash code, although if you need to make use of internal state to uniquely identify your objects you would need to modify this piece. I then create an HMAC of the JSON and convert it to a hex value. This hex string is then broken down and, character by character, mapped to an integer which acts as our octet of data in the FNV code. Then, after all of that, we're left with an integer which we can use as our hash code.
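Since the original listing has gone missing, here's a sketch of the shape it took – serialise, HMAC, hex, then FNV (the HMAC key and class names are placeholders of mine):

```apex
public virtual class HashableObject {
    public Integer hashCode() {
        // Generic state: serialise the whole object to JSON
        String state = JSON.serialize(this);
        // HMAC it so the FNV loop always runs over a fixed-length input
        Blob mac = Crypto.generateMac('hmacSHA256',
                                      Blob.valueOf(state),
                                      Blob.valueOf('a-private-key'));
        // Hex gives us a small, predictable character set to work with
        return fnv(EncodingUtil.convertToHex(mac));
    }

    public Boolean equals(Object other) {
        // A blunt but generic equality to pair with the hash
        return JSON.serialize(this) == JSON.serialize(other);
    }

    private Integer fnv(String hexInput) {
        String hexChars = '0123456789abcdef';
        Long h = 2166136261L;                  // FNV-1a offset basis
        for (Integer i = 0; i < hexInput.length(); i++) {
            h = h ^ hexChars.indexOf(hexInput.substring(i, i + 1));
            h = (h * 16777619L) & 4294967295L; // FNV prime, masked to 32 bits
        }
        return (h & 2147483647L).intValue();
    }
}
```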
I have to admit this does seem like an awfully large amount of work simply to generate an integer, although as I mentioned before finding a good hashing algorithm isn't trivial stuff. And it is further complicated by needing to stick within governor limits. I feel as though what I have come up with here is a good compromise that makes use of standard platform functionality where possible and hopefully provides a decent dispersion. However, as I mentioned before, I don't really know much about the maths behind hashing and am very happy to have any flaws pointed out to me. (I'm sure I've made a rookie mistake somewhere in there.)
As I said before it would be great if the platform exposed some way for us to do this – be it exposing their own native hash methods on a string or indeed providing an object.hash method that all objects have access to. But in the meantime I will be using this method to help me create custom sets and maps.
As a small side note: you may have noticed a couple of odd lines in the code, where I convert a string into a long. Trust me, I believe this has to be in here… I don’t want it to be but there’s what looks like a small platform bug, after I have done some more testing on this I shall report back.
Over the past few weeks I have been getting a few requests from some of our customers to add a feature to one of our products; some minor integration with Chatter. Being a company that prides ourselves on being responsive to our customers' wants (once a little common sense has been applied, of course) we decided to add said feature.
The feature itself was trivial to implement but what struck me as I was thinking about it was: "what about our customers without Chatter enabled?" I know it's becoming less and less common these days but back when we were consulting full time we heard numerous clients ask us to make sure Chatter was turned off – and no amount of marketing material was going to convince them to turn it on. With this in mind I was suddenly a lot less bullish about this new feature – I didn't want to alienate a whole section of our potential market just for this feature.
I had a dig around the Salesforce documentation to see what, if anything, could be done about this – I couldn't believe that I was the first person to have this worry. Things were a little sparse: I found reference to a new status for a DML exception and also to a new exception type. It appeared that it had been thought about, but how exactly it was handled still seemed unclear.
When faced with a lack of documentation, indeed when faced with almost any uncertainty, I do what any developer would do: I write some code! In this case code to prove once and for all what really happens. I whipped up a very simple managed package that had a VF page which inserted a FeedItem, then captured any exceptions and output them to the user. Simple. When I created the package I left the Chatter "Required Feature" unchecked. This seems a bit backwards – I have Chatter features but you don't need to have Chatter turned on – anyhow, this is what makes it an optional platform feature. Once I'd uploaded the package I spun up another org and disabled Chatter in it. And then installed the package and navigated to my test page.
Boom! It broke. Well actually it worked but it threw an exception so it did in fact break. Given the mention of exception in the documentation this was pretty much what I expected to happen. Although to be honest it’s a little disappointing.
Why do I find it disappointing? After all, I have a mechanism with which I can have Chatter features in my product but not need the user to have Chatter turned on in their Org. Well, if we were measuring happiness purely in terms of whether I have met my requirements then you're right, I should be ecstatic. However I find it disappointing that I am forced by the platform to use exception handling techniques to control the logic of my application.
Look at this code to see what I mean:
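Something along these lines (the parent id is a stand-in; the exception is the RequiredFeatureMissingException mentioned in the docs):

```apex
try {
    FeedItem post = new FeedItem();
    post.ParentId = someRecordId; // stand-in for the record we're posting to
    post.Body = 'Our shiny new Chatter feature';
    insert post;
} catch (System.RequiredFeatureMissingException e) {
    // Chatter isn't enabled in this org, so quietly carry on without it
}
```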
First off I just want to say that it's not all bad – the exception that gets thrown is very specific to this problem, so we can be sure that the only reason we're in that catch block is that Chatter isn't enabled. If we want to deal with other exceptions from that code then we can use other catch statements. All good. Now for the not so good side of things. First off, a bit of a bugbear: the name of the exception is very generic, yet according to the documentation it can only be thrown if Chatter isn't enabled. Either change the name of the exception or update the documentation to reflect the fact that in the future this may cover features other than just Chatter.
OK, with that out of the way let's get back to the question of logic. The idea behind exceptions is to provide a method to change the normal execution flow of a piece of code when an error occurs. It provides an opportunity for the developer to handle this error, this thing that wasn't expected to happen. In this case we are expecting that some of our customers will not have Chatter enabled. This isn't an erroneous state of affairs, it's just the way life is. What we really want to do here is say: "Is Chatter enabled? Yes? OK then, insert a record because we can". Instead we're forced to say: "Insert a record. Oh balls, it didn't work, must be because it's not enabled, oh well". We're using exception handling to deal with the consequences of not being able to make a logical decision upfront.
All of this is just bad practice and something that we shouldn't be doing. Another reason often cited in other languages is that throwing an exception is a computationally expensive task. I can't imagine that this is any different in Apex, especially given that it now compiles down to Java bytecode anyway, although I haven't tried to eke out any benchmarks to prove it.
Interestingly it feels like the designers of the platform have a penchant for this anti-pattern. Seems like a strong thing to say – what do I mean? Well, a couple of things make me feel this way. The first is the fact that we see this pattern in Apex already: we're all guilty of assigning a LIMIT 1 query directly to an instance of an object and catching the exception if there are no matching records. What should happen in this case? Well, how about just assigning null to the variable? The second thing that piques my worry is the name of the exception; the fact that it's very generic makes it feel like we're going to see the exception being thrown for other features as well, which will force us to replicate said horridness over and over.
So what can be done about all of this? After all, there's no point complaining unless you have a suggestion for improvement. First things first – I am not against this exception! I know it sounds like I want to see it burnt at the stake but that's really not the case. I can completely understand why it is needed. In fact I wouldn't change the exception at all; it's completely appropriate for the action taken in the given circumstances. However, what is needed is something to complement it, something that is the equivalent of the Limits class that already exists. How about a class called Features? It would have a series of static methods that give us the ability to test for the existence of optional features before we try and make use of them. That way we could use some conditional code structures to control the logical flow of the application, allowing us to change our try/catch example into something a bit better structured:
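For instance (entirely hypothetical – no such Features class exists on the platform; this is what I'd like to be able to write):

```apex
// Ask up front rather than catching an exception after the fact
if (Features.isChatterEnabled()) {
    FeedItem post = new FeedItem(ParentId = someRecordId,
                                 Body = 'Our shiny new Chatter feature');
    insert post;
}
```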
This feels like much better code, not only does it remove the nasty exception code but it also becomes much clearer as to what the code is trying to achieve.
Anyway enough of my disappointment, the crux of the matter is: there is a way to handle “optional” features on the platform such as Chatter from within your code. And this is great because it means I can add my trivial Chatter based feature to my application and not worry about alienating a whole section of my market or indeed having to try and manage two code bases.
Just be careful when you do make use of this exception though, and please remember that you're using an anti-pattern because you have to, not because you should.
Reading other people’s code is great, be it good or bad, functioning or broken there is always something to discover – and generally it is something that makes you think: “d’oh, why didn’t I see that before?”
I liked my latest discovery so much that I thought I’d share it myself for those, like me, that just hadn’t thought of it or indeed seen it elsewhere.
Record Types are the subject. We are all aware of the complications that record types bring to Apex; often you will want to perform a different action in your code depending on the type of record that you have. Or you'll want to get a value from one field for one record type but from another for the rest. That's cool and it makes sense, but how do you write the code to do that? Hard coding the record type id is a common practice and it just about works, so long as you have declared the record types in your production org, as the ids stay the same when you create sandboxes. However things don't normally happen this way. Often a developer will create a new record type in her sandbox and make use of it there, but when you move between environments this code breaks as the record type doesn't exist, and if you create it the id will be different; try running CI in an environment like this! Another problem I see with this method is that, even if the ids are consistent between environments, the code is fairly meaningless: what exactly is id 00D123…..? So much for self-documenting code, eh?
So, as you can probably guess this technique that I saw helps solve these problems. Actually what I saw was a step in the right direction… I’ve just taken it a little bit further.
In the code base that I was reading last week I saw a lot of declarations like this at the top of classes:
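They were variations on this theme – the record type resolved by name through the describe information rather than a hard-coded id ('Test1' standing in for a real record type name):

```apex
private static final Id ACCOUNT_TEST1_RT_ID =
    Schema.SObjectType.Account.getRecordTypeInfosByName()
          .get('Test1').getRecordTypeId();
```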
That solves a whole bunch of issues but why is it being repeated time and time again at the top of each class? Surely it would be better for all involved if these variables were declared and maintained in one place? The result of this thought is below: firstly a class called RecordTypeContainer that actually does all the hard work and then one called AccountRecordTypes which contains all of the record types for the Account object.
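The listing itself has been lost, but the shape of the two classes was this (a sketch, not the original code):

```apex
public class RecordTypeContainer {
    private Map<String, Id> idsByName = new Map<String, Id>();

    public RecordTypeContainer(Schema.SObjectType objType) {
        // One describe call per object, cached for the rest of the transaction
        for (Schema.RecordTypeInfo info :
                 objType.getDescribe().getRecordTypeInfos()) {
            idsByName.put(info.getName(), info.getRecordTypeId());
        }
    }

    public Id idFor(String name) {
        return idsByName.get(name);
    }
}

public class AccountRecordTypes {
    private static RecordTypeContainer container =
        new RecordTypeContainer(Account.SObjectType);

    // One static property per record type on the Account object
    public static Id Test1 { get { return container.idFor('Test1'); } }
}
```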
It would have been possible to wrap this all into the one class, but splitting it out like this makes for easier to read code in my mind, as I can say AccountRecordTypes.Test1 in my code and it's fairly obvious what I'm referring to. Obviously adopting this structure means we end up with lots of XXRecordTypes classes, but it becomes a convention and not too much of a hardship. And whilst we have lots of RecordTypes classes we can be assured that they all work the same way, because they all make use of the RecordTypeContainer class – this handles the initial getting of the record types, the storing of them and then the retrieval, in a consistent and efficient manner.
The code itself is fairly simple and as such I’m not going to give a huge commentary on it; hopefully you can see how to apply it to other objects within your orgs but if not then just leave a comment.
Obviously it would be great if the platform had some form of static access to record types like this baked in and I’m sure over time it may well be added but in the meantime I’ve found this pattern to be really helpful and hopefully others will too.
A slight departure from my normally technical posts to give me a chance to get a random thought off my chest.
Rypple is doing its best to revolutionize the way in which staff and teams are managed, and it seems to be a great tool to do it with. However lately I've been wondering about how it might be applicable in less obvious situations.
I’m going to preface the rest of this post with the admission that I’m basing everything that I write here on a vague understanding (a demo and a brief play) of Rypple. And that I only really have indirect experience of the education sector (I went to school, I worked in a school for a year and my wife is a teacher). Therefore all of this might already exist or be complete rubbish.
The other day someone from my wife’s school was asking her what the best part of being on maternity leave was. I was expecting her to respond with some comment about the joys of raising our son and watching him grow into a small boy but no. Instead she replied without hesitation that it was not having to write reports. It was at this point I realised her disdain for reports was the same as my disdain for appraisals.
A bit more thinking about it and I started to see more parallels between the two processes. Students have targets – generally individual when related to learning, but the concept of "table teams" and "houses" also exists. There are also many parties invested in the performance of a pupil: teachers, administrators, parents and sometimes the pupils themselves. As things stand the targets are generally set and monitored by the same teacher on a termly basis, and the parents on the whole only receive feedback once or twice a year in a formalised "report".
As I said, the parallels to the working world are, to me at least, very striking. So how could Rypple be used? Well, targets continue to be set by their teacher, however rather than being kept on a dusty piece of paper they are now visible to all staff, allowing any of them to contribute to noting that pupil's progress towards their targets. This could be particularly useful in a secondary school environment where a maths target may well be met in a physics lesson. Then give parents access to their child's account so that they can quickly catch up on their child's progress and goals, allowing for more efficient teacher-parent communication and giving parents the best opportunity to support their child. More senior members of staff would be able to get a quicker and more transparent view of the progress of their pupils.
I'm a bit vague on the current ability of Rypple to support this, but imagine if, when a child switched schools, you simply transferred their Rypple record. This would give you immediate access to their current targets, not to mention their history. As a teacher you may want to change these targets but at least you have something decent to start with, rather than a crumpled piece of paper that may or may not have found its way to you.
Yes, I know I sound like I'm piling yet more pressure on teachers to carry out more administration, but in reality they should be doing a lot of this already. Couple that with having such up-to-date and fluid communication with the pupil, their parents and senior management, and it should actually start to remove some of the end of term report pressure – hopefully, eventually, removing the need for this big bang communication completely.
There are undoubtedly problems with all of this but nothing that seems insurmountable. In my mind this actually just fits into a much bigger potential revolution of the education sector and the way in which it manages its data. For example, if we start to think of pupils and parents as a school's customers then it's only a short hop, skip and a jump to realise that actually the data fits very nicely into Salesforce.
Anyway, these are the general ramblings of someone that doesn't really know what he's talking about, but it's got to be worth getting someone that does know this area to think about it. Hasn't it?
Obviously writing a compiler in Apex isn’t beyond the realms of possibility but it is beyond the realms of what I’m willing to do to get this to work at the minute. It is also probably the best solution to the problem but I decided to look for an alternative.
Using <script type="text/coffeescript"> tags, the CoffeeScript compiler will process your scripts in the browser. Nice. Well, kind of. It seems a bit wrong in my mind to be foisting this effort onto your users, and the approach is frowned upon by the CoffeeScript community in general.
Then in the controller all we need to do is find the CoffeeScript resource, pass it out to our Heroku instance and send the response to the browser – simple.
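A sketch of that controller method – the static resource name and Heroku endpoint are placeholders of mine:

```apex
public String getCompiledScript() {
    // Find the uploaded CoffeeScript static resource
    StaticResource res = [SELECT Body FROM StaticResource
                          WHERE Name = 'myCoffeeScript' LIMIT 1];

    // Pass the source out to the compiler service running on Heroku
    HttpRequest req = new HttpRequest();
    req.setEndpoint('https://my-coffee-compiler.herokuapp.com/compile');
    req.setMethod('POST');
    req.setBody(res.Body.toString());

    // Send the compiled JavaScript back to the browser
    return new Http().send(req).getBody();
}
```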
There we have it, a fully functioning CoffeeScript VisualForce component; we too now have access to the benefits that CoffeeScript brings!
Using it is very simple: upload your CoffeeScript as a static resource and then include a line like this in the VisualForce page that you want to use the uploaded script in. Hey Presto! Job done.
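The missing line would have been a component reference along these lines (the component and attribute names are my guesses at the original):

```xml
<apex:page>
  <c:coffeeScript resource="myCoffeeScript"/>
</apex:page>
```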
Having said that I have achieved what I set out to do and hopefully have provided enough of a platform for those that want to include CoffeeScript in their VisualForce to get started with. In the meantime if I have another evening where I’m feeling a little bored I might round this component out and add it to GitHub for the easy consumption by the masses.
Last Thursday was a sunny one in Bristol, UK. It was also the inaugural meeting for the South West’s Force.com dev group.
The group met in The Elephant in the city centre and was, according to those left at the end of the night, a great success. We never expected the group to be big when we were planning its birth, and the fact that a total of seven developers turned up to swap tales was pleasantly surprising.
The format was intentionally informal and the evening consisted of a few beers, a good discussion around how we'd like to see the meetup evolve and of course a group therapy session around the latest set of "challenges" the platform has presented us with. As you'd expect the topics were very varied, from query parameter stripping and reordering, to testing HTTP callouts, to JSON parsing in Summer '12.
Going forward we intend to meet on the last Thursday of every other month; this allows us to fold in nicely with the South West User Group as well. It also means that we'll be more or less due an event around the time of Dreamforce, so we may well endeavour to do something terribly English, or worse, West Country-esque, whilst we're there. For the meantime we're going to keep the fairly informal format as well, given the size, although we may think about having "themes" to base our discussions around, to give us some focus without needing to go down the whole presentation route – just yet. Our biggest initial challenge is to try and find those Force.com developers that hang around the South West, and maybe hope for a slight downturn in the weather as well, to encourage a few more people indoors.
If you're reading this and live/work locally then I really do hope you can come along to the next event. Given our minority standing in the developer community as a whole it's easy to shrink away into the corner and feel forgotten about. But at our events there's no need to feel like that: we're all in the same boat and so welcome others with outstretched arms.
Finally no developer meetup would be complete without a picture or two to prove that it happened – unfortunately I only have a couple of pictures that prove I was in the pub with a group of people, one of whom just happened to be wearing a Force.com cap!