9 Oct 2009

JavaScript performance optimization, take 1

For the last several months, Mike and I have been working on a new project, which is nearing closed beta. That means we need to start battening down the hatches, and today was the day to start tackling client-side JavaScript performance.

I’ve actually done quite a bit of performance work in my life, but not with JavaScript, so I thought I’d take some notes along the way.

Firebug is your friend

In my mind, there are really three ways to make a significant dent in performance:

  1. Find bad algorithms and replace them with fast ones.
  2. Find code that doesn’t actually have to be called and skip it.
  3. Optimize the code that gets called most often.

And you really can’t do any of the three without a profiler. You might think you know what the problem is, but you won’t know until you profile it. In my case, I started out thinking that I had event listeners hanging around that weren’t letting go of their events, but the profiler (in this case, Firebug) told me I was completely wrong.

To get started profiling in Firebug, go to the console tab, press the ‘profile’ button, do some stuff, and hit the ‘profile’ button again. That’s it.
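
Incidentally, if the gesture is awkward to repeat by hand, Firebug can also be driven from code via console.profile() and console.profileEnd(). A minimal sketch (run_gesture() is a made-up stand-in for whatever you want to measure):

    // Drive the profiler programmatically instead of using the toolbar button.
    function profile_gesture() {
        console.profile('ui gesture');   // same as pressing 'profile'
        for (var i = 0; i < 8; i++) {
            run_gesture();               // the code under test
        }
        console.profileEnd();            // same as pressing 'profile' again
    }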

You’ll then be presented with data that looks like this:

[Screenshot: Firebug profiler output]

For my money, the two most important columns are ‘own time’ and ‘time’. ‘time’ is the total time spent in a function, including any functions it calls, and ‘own time’ is the same thing minus the time spent in those called functions.

Problem: $$(‘.class’) can be SLOW!

I created a test where I did the same UI gesture 8 times, and this is what I discovered. Looking at ‘own time’ told me that most of my time was going to DOM traversal via the $$ function.

Looking at ‘time’ told me that the methods responsible for calling $$ were all central functions that were called in many places throughout my code, so it was worth making them as efficient as possible before figuring out whether there was a way to avoid calling some of them altogether.

Phase 1 — replacing traversals of the entire DOM tree (via $$) with smaller traversals

Roughly speaking, this corresponds to strategy (3).

What                                                                              | total time | %delta from prev | %delta from base
Baseline                                                                          | 2812ms     |                  |
Replace $$(‘.class’) by $(‘section’).getElements(‘.class’) in critical sections   | 2345ms     | 20%              | 20%
Change getElements(‘.class’) to getElements(‘div.class’) in critical sections     | 2094ms     | 12%              | 34%
Found more places to do the above optimizations                                   | 1723ms     | 22%              | 63%
Replaced getElements() with getChildren() where possible                          | 1641ms     | 5%               | 71%
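
For concreteness, the rewrites looked roughly like this in mootools terms (a sketch; 'section', '.row', and '.panel' are made-up names, not the ones from my code):

    // Before: $$ walks the entire document looking for the class.
    var rows = $$('.row');

    // After: start from a known container (id lookup is cheap) and give the
    // selector a tag, so the engine can narrow candidates via getElementsByTagName.
    var section = $('section');
    var rows = section.getElements('div.row');

    // When the elements are direct children of the container, getChildren()
    // avoids the deep traversal entirely.
    var panels = section.getChildren('div.panel');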

Along the way, I tried all sorts of other optimizations, but none of them yielded much benefit. Now that I was reaching the point of diminishing returns, it was time to see if there were chunks of code I could safely skip.

Phase 2 — skipping handler functions when possible

I knew that there was almost certainly code I was running that could be skipped (strategy 2). Why?

I find that when writing UI code, it is often easier to use brute force to make sure that everything is working consistently. For example, if an AJAX call updates a certain part of the screen, it is simpler to blow away all event handlers from everything and re-add them where needed than to patch up only the handlers for the portion of the screen that was updated.
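
In mootools terms, the brute-force version looks something like this (a sketch; '.panel' is a made-up class name, and handle_click stands in for whatever handler actually needs attaching):

    // Brute force: after any AJAX update, tear down and rebuild every handler,
    // whether or not its element was actually touched by the update.
    function reset_all_handlers() {
        $$('.panel').each(function (panel) {
            panel.removeEvents('click');            // drop whatever was attached
            panel.addEvent('click', handle_click);  // re-attach from scratch
        });
    }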

My rationale is that you can always fix this at the end. And well, it was now time to pay the piper.

My test case involved doing the same UI gesture 8 times. And most of the time was going to the following functions:


add_panel_handlers_if_needed(): 8 times
add_content_handlers(): 16 times
add_panel_handlers(): 8 times
actually_do_drag_cleanup(): 8 times
remove_content_handlers(): 24 times
fix_detail_handlers(): 8 times
handle_click(): 8 times
fix_toggle_rte_handlers(): 8 times
add_drag_handlers_and_start(): 8 times
add_insert_handlers(): 8 times

You can see that some functions were called 8 times and some were called 24 times. As it turns out, this was just due to programmer laziness. By adding a few checks, some of those redundant calls could be safely avoided.

The other thing that was causing extra work is that only certain interactions caused screen updates that needed event handlers to be reattached. By writing some code to check for that, I was able to avoid many of these calls altogether.
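
The checks themselves are nothing fancy; conceptually they amount to something like this (a sketch, not my actual code; fix_handlers_if_needed and content_changed are made-up names):

    // Only redo handler fixup for regions the AJAX response actually replaced.
    // How content_changed gets computed is beside the point here.
    function fix_handlers_if_needed(region, content_changed) {
        if (!content_changed) {
            return;   // the old handlers are still attached and still valid
        }
        region.getElements('div.panel').each(function (panel) {
            panel.removeEvents('click');
            panel.addEvent('click', handle_click);
        });
    }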

What                                                                         | total time | %delta from prev | %delta from base
Baseline                                                                     | 2812ms     |                  |
End of phase 1                                                               | 1641ms     | 71%              | 71%
Remove redundant calls to remove_content_handlers and add_content_handlers  | 1389ms     | 18%              | 102%
Skip certain fixup calls when content is determined not to have changed     | 1073ms     | 29%              | 162%

(P.S. There is some small part of my brain that tells me that instead of manually worrying about these event handlers, I should just bite the bullet and switch to jQuery. But I’m not there yet.)

Summary

So, what’s the moral? First off, doing $$(‘.class’) is slow. Second, large performance boosts usually come from a combination of skipping code that doesn’t have to run and optimizing the code that does. This was no exception.

One more thing. I just have to say that Firebug is amazing. I expected it to have trouble giving useful timings in the face of inconsistent UI gestures and garbage collection, but it did the “right thing”, which many desktop profilers don’t manage to do. If I had one wish, it would be the ability to bundle up calls from specified library files and allocate the time spent in them to the calling function.

Ok. Back to more optimizing.

6 Responses to “JavaScript performance optimization, take 1”

  1. keith

    Thanks for the tips :) I’m a little surprised that selecting for div.class over .class would improve performance. I used to think that selectors were matched left to right, so being more specific would make things easier on the browser. There was an article by Steve Souders (http://www.stevesouders.com/blog/2009/06/18/simplifying-css-selectors/) not too long ago, however, which mentioned that selectors are matched from right to left. Since that’s the case, wouldn’t it be quicker to just match .class instead of div.class?

  2. sho

    Ah. I forgot to mention why I tried that.

    The selectors module in mootools (and I imagine jquery) splits apart the selector into multiple parts and performs the fastest actions first.

    If the selector contains an id, the element is recovered using the low level DOM method getElementById().

    If the selector contains a tag, the list of candidate elements is computed using the low level DOM method getElementsByTagName().

    If neither a tag nor an id are provided, the entire subtree must be searched.

    For classes and pseudo elements, there is no low-level DOM call to find these, so each element must be checked one by one using JavaScript.

    Adding the class to the selector thus helps speed things up by narrowing the list of tags to search through via a quick DOM call.
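
    In code, the difference between '.class' and 'div.class' comes down to something like this (an illustrative sketch, not the actual mootools source):

        // With a tag in the selector, the engine can fetch candidates with a fast
        // native call, then do the slow per-element class check in JS over a much
        // smaller set.
        function by_tag_and_class(root, tag, className) {
            var candidates = root.getElementsByTagName(tag);   // fast, native
            var matches = [];
            for (var i = 0; i < candidates.length; i++) {
                if ((' ' + candidates[i].className + ' ').indexOf(' ' + className + ' ') != -1) {
                    matches.push(candidates[i]);                // slow part, now over fewer elements
                }
            }
            return matches;
        }

        // With '.class' alone, the tag is effectively '*', so the JS check has to
        // run over every element in the subtree.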

  3. Jay Freeman (saurik)

    In point of fact, Opera 9.5, Safari 3.1, Chrome, and Firefox 3 all provide native implementations of getElementsByClassName, as specified by HTML5. Hopefully your JS library is taking advantage of this, but given your performance figures I can only assume it isn’t. It is incredibly unfortunate how much worse the state of browser performance is due to popular libraries refusing to take advantage of browser-specific features, no matter how commonly implemented. For the record, I believe JQuery finally changed their policy on this matter, which may be another reason to switch to it.
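
    If your library won't do it for you, the feature detection is simple enough to write yourself; a rough sketch (not any particular library's implementation):

        // Prefer the browser's native class lookup where it exists; otherwise
        // fall back to scanning the subtree by hand.
        function by_class(root, className) {
            if (root.getElementsByClassName) {
                return root.getElementsByClassName(className);   // native, fast
            }
            var all = root.getElementsByTagName('*');
            var matches = [];
            for (var i = 0; i < all.length; i++) {
                if ((' ' + all[i].className + ' ').indexOf(' ' + className + ' ') != -1) {
                    matches.push(all[i]);
                }
            }
            return matches;
        }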

  4. sho

    Thanks for that info, Jay. Good to know.

  5. keith

    Thanks for the clarification sho!

    It’s funny: up until I read your article I’d managed to completely neglect the fact that how browsers handle CSS selectors may be completely different from how the popular JavaScript libraries handle them. This means that I need to start thinking about two different ways of optimizing CSS selectors: one for CSS statements (which are read right-to-left), and another for the convenience selectors provided by jQuery, etc. Well... guess it’s time to start digging around more in the jQuery code :)

  6. Amy

    Hey Sho. So true about the wrapper functions and CSS selectors (and maybe some day, with wider support of getElementsByClassName, it won’t be).

    Especially if you are doing a lot of super-interactive user interface, you might also want to be sure that not only do you not blow away/recreate the event handlers (as you describe above), but that you use capture and bubble effectively. Hopefully you’re already doing that :)
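
    For example, instead of wiring a handler onto every panel, you can attach a single listener to a container that never gets replaced and let clicks bubble up to it (a sketch in mootools style, with made-up names; handle_click stands in for whatever handler would otherwise be attached to each panel):

        // One delegated listener; clicks from any current or future '.panel'
        // descendant bubble up to it, so nothing needs re-wiring after an AJAX update.
        $('section').addEvent('click', function (event) {
            var target = $(event.target);
            // Walk up from the click target to the nearest '.panel', if any.
            var panel = target.hasClass('panel') ? target : target.getParent('.panel');
            if (panel) {
                handle_click.call(panel, event);   // as if the handler were attached to the panel itself
            }
        });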

    One other idea: tuning your traversals is great, but don’t underestimate the extra performance suck that extra elements can cause. The more extraneous elements you can strip out, the faster everything will be. Including CSS selectors.

    You might try also looking thru the other items on this JavaScript performance checklist I put out today: http://slowjavascript.com/

    Nota bene: I am the co-author of JavaScript Performance Rocks! (http://jsrocks.com/), a book specifically about this very topic. It’s long and very thorough; you might find it helpful. :)
