There have never been better or more accessible tools for testing the performance of CSS before deployment. This makes it possible to replace common performance generalizations with empirical facts: testing provides actual data on which to base decisions, rather than mere conjecture.
To exemplify this, it has long been held that certain CSS selectors are 'slow' compared to others; the much-maligned universal selector is an obvious example. Some would also argue that it is better to use a qualified selector such as footer[role='contentinfo'] rather than merely [role='contentinfo']. The theory is that in the latter case the selector engine has to traverse every DOM node to check the attribute. However, such rules are typically based on generalizations. For your own project, any performance difference may be of little to no consequence, and the benefits of a particular selector (maintainability, low specificity) may outweigh any expected performance gain.
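For illustration, the two selector forms under comparison might look like this (the rule bodies are hypothetical placeholders):

```css
/* Qualified: the theory is that the engine need only check
   the role attribute on footer elements. Higher specificity,
   and the rule breaks if the element type ever changes. */
footer[role='contentinfo'] {
    padding: 1em;
}

/* Unqualified: the theory is that every element's role
   attribute must be checked. Lower specificity and more
   maintainable, since it is tied to the attribute alone. */
[role='contentinfo'] {
    padding: 1em;
}
```

Whether the theorized difference is measurable in practice is exactly the kind of question a real test against your own project can answer.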