One thing we can be clear on is that using a flat hierarchy of class-based selectors, as is the case with ECSS, provides selectors that are as fast as any others.
For me, it has confirmed my belief that it is absolute folly to worry about the type of selector used. Second-guessing a selector engine is pointless, as the manner in which selector engines work through selectors clearly differs. Furthermore, the difference between the fastest and slowest selectors isn't massive, even on a ludicrous DOM size like this. As we say in the North of England, "There are bigger fish to fry."
Since documenting my original results, Benjamin Poulain, a WebKit engineer, got in touch to point out his concerns with the methodology used. His comments were very interesting, and some of the information he related is quoted verbatim below:
By choosing to measure performance through the loading, you are measuring plenty of much much bigger things than CSS; CSS performance is only a small part of loading a page.
If I take the time profile of [class^="wrap"], for example (taken on an old WebKit so that it is somewhat similar to Chrome), I see:
~10% of the time is spent in the rasterizer.
~21% of the time is spent on the first layout.
~48% of the time is spent in the parser and DOM tree creation.
~8% is spent on style resolution.
~5% is spent on collecting the style – this is what we should be testing and what should take most of the time. (The remaining time is spread over many many little functions).
With the test above, let's say we have a baseline of 100 ms with the fastest selector. Of that, 5 ms would be spent collecting style. If a second selector is 3 times slower, that would appear as 110 ms in total. The test should report a 300% difference, but instead it only shows 10%.
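Benjamin's masking effect can be sketched as a quick calculation. The figures below are the illustrative numbers from his example, not measurements:

```python
# Illustrative numbers from Benjamin's example: a 100 ms page load
# in which only 5 ms is spent collecting style.
baseline_total = 100.0   # ms, total load time with the fastest selector
style_portion = 5.0      # ms of that total spent collecting style
slowdown = 3             # a second selector is 3x slower at style collection

# The slower selector triples only the style-collection slice.
slower_style = style_portion * slowdown                        # 15 ms
slower_total = baseline_total - style_portion + slower_style   # 110 ms

# Real difference between the selectors (style collection alone)...
selector_difference = (slower_style / style_portion - 1) * 100   # 200% slower (3x)
# ...versus what a whole-page-load test reports.
measured_difference = (slower_total / baseline_total - 1) * 100  # ~10%

print(f"Total load: {slower_total:.0f} ms; the selector is {slowdown}x slower, "
      f"but the test reports only a {measured_difference:.0f}% difference")
```

Because style collection is such a small slice of total load time, even a large selector slowdown barely moves the overall number, which is why a page-load test hides it.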
At this point, I responded that whilst I understood what Benjamin was pointing out, my test was only supposed to illustrate that the same page, with all other things being equal, renders largely the same regardless of the selector used. Benjamin took the time to reply with further detail:
I completely agree it is useless to optimize selectors upfront, but for completely different reasons:
It is practically impossible to predict the final performance impact of a given selector by just examining the selectors. In the engine, selectors are reordered, split, collected and compiled. To know the final performance of a given selector, you would have to know in which bucket the selector was collected, how it is compiled, and finally what the DOM tree looks like.
All of that is very different between the various engines, making the whole process even less predictable.
The second argument I have against web developers optimizing selectors is that they will likely make things worse. The amount of misinformation about selectors is larger than correct cross-browser information. The chance of someone doing the right thing is pretty low.
In practice, people discover performance problems with CSS and start removing rules one by one until the problem goes away. I think that is the right way to go about this; it is easy and will lead to the correct outcome.
At this point I felt vindicated that the CSS selector used was almost entirely irrelevant. However, I did wonder what else we could glean from the tests.
If the number of DOM elements on the page was halved, as you might expect, the time to complete any of the tests dropped commensurately. But getting rid of large parts of the DOM isn't always a possibility in the real world. This made me wonder what difference the amount of unused styles in the CSS would make to the results.