Running your tests in parallel means different things to different people, as it can mean either of the following:
- Run all of your tests against multiple browsers at the same time
- Run your tests against multiple instances of the same browser
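Both interpretations come down to driving more than one browser instance at the same time. As a minimal sketch (assuming the Selenium Java bindings and the relevant driver binaries are installed; the class name is illustrative), you could hand one task per browser to a thread pool; swapping `FirefoxDriver::new` for a second `ChromeDriver::new` gives you the second interpretation:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Supplier;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class ParallelBrowsers {

    public static void main(String[] args) {
        // One supplier per browser to cover; each task creates its own
        // driver instance, since drivers cannot be shared across threads.
        List<Supplier<WebDriver>> browsers =
                List.of(ChromeDriver::new, FirefoxDriver::new);

        ExecutorService pool = Executors.newFixedThreadPool(browsers.size());
        for (Supplier<WebDriver> browser : browsers) {
            pool.submit(() -> {
                WebDriver driver = browser.get();
                try {
                    driver.get("http://www.example.com");
                    System.out.println(driver.getTitle());
                } finally {
                    driver.quit();
                }
            });
        }
        pool.shutdown();
    }
}
```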
Should we run our tests in parallel to increase coverage?
I'm sure that when you start writing automated tests for a website, you are initially told that it has to work in every browser. The reality is that this is just not true. There are many browsers out there, and it is simply not feasible to support them all. For example, will your AJAX-intensive site that has the odd Flash object work in the Lynx browser?
The next thing you will hear is, "OK, well, we will support every browser supported by Selenium." Again, that's great, but we have problems. Something that most people don't realize is that the core Selenium team's official browser support covers only the current version of each browser and the previous one, as of the release of that version of Selenium. In practice, Selenium may well work on older browsers, and the core team does a lot of work to avoid breaking support for them. However, if you run a series of tests against Internet Explorer 6, Internet Explorer 7, or even Internet Explorer 8, you are running tests against browsers that are not officially supported by Selenium.
We then come to our next set of problems. Internet Explorer is only supported on Windows machines, and you can have only one version of Internet Explorer installed on a Windows machine at a time.
Safari is only supported on OS X machines, and, again, you can have only one version installed at a time.
It soon becomes apparent that even if we do want to run all of our tests against every browser supported by Selenium, we are not going to be able to do it on one machine.
At this point, people tend to modify their test framework so that it can accept a list of browsers to run against. They write some code that detects, or specifies, which browsers are available on a machine. Once they have done this, they start running all of their tests over a few machines in parallel and end up with a coverage matrix something like the following (the exact browsers will depend on what each machine has installed):
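| Machine | Operating system | Browsers covered |
| --- | --- | --- |
| 1 | Windows | Internet Explorer, Firefox, Chrome, Opera |
| 2 | OS X | Safari, Firefox, Chrome, Opera |
| 3 | Linux | Firefox, Chrome, Opera |

The browser-selection code itself is usually nothing more than a switch over a property. Here is a minimal sketch, assuming the Selenium Java bindings are on the classpath; the `DriverFactory` name and the `-Dbrowser` system property are illustrative choices, not part of Selenium:

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.ie.InternetExplorerDriver;
import org.openqa.selenium.safari.SafariDriver;

public class DriverFactory {

    // Selects a driver based on a -Dbrowser=... system property, so the
    // same test suite can be pointed at whichever browsers a machine has.
    public static WebDriver getDriver() {
        String browser = System.getProperty("browser", "firefox").toLowerCase();
        switch (browser) {
            case "chrome":
                return new ChromeDriver();
            case "ie":
                return new InternetExplorerDriver();
            case "safari":
                return new SafariDriver();
            case "firefox":
            default:
                return new FirefoxDriver();
        }
    }
}
```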
This is great, but it doesn't get around the problem that there are always going to be one or two browsers you can't run on your local machines, so you will never get full cross-browser coverage. Using multiple driver instances (potentially in multiple threads) to run against different browsers has given us slightly increased coverage, but we still don't have full coverage.
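Where those driver instances do share a process, the usual way to keep them apart is to hold one WebDriver per thread, because WebDriver instances are not thread-safe. A minimal sketch, reusing the hypothetical `DriverFactory` from above:

```java
import org.openqa.selenium.WebDriver;

public class ThreadedDriver {

    // One WebDriver per test thread; instances must never be shared
    // across threads.
    private static final ThreadLocal<WebDriver> DRIVER =
            ThreadLocal.withInitial(DriverFactory::getDriver);

    public static WebDriver getDriver() {
        return DRIVER.get();
    }

    // Quit and clear the driver at the end of each test to avoid leaking
    // browser processes between tests.
    public static void quitDriver() {
        DRIVER.get().quit();
        DRIVER.remove();
    }
}
```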
We also suffer some side effects by doing this. Different browsers run tests at different speeds, because not all JavaScript engines are equal. We have probably also drastically slowed down the process of checking that the code works before pushing it to the source code repository.
Finally, doing this makes it much harder to diagnose issues. When a test fails, you now have to work out which browser it was running against, as well as why it failed. This may only take a minute of your time, but all those minutes do add up.
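One way to claw back some of those minutes is to have the framework report the browser automatically whenever a test fails. A sketch using a JUnit 4 `TestWatcher` rule, reusing the hypothetical `ThreadedDriver` holder above (the class and field names are illustrative):

```java
import org.junit.Rule;
import org.junit.rules.TestWatcher;
import org.junit.runner.Description;
import org.openqa.selenium.Capabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class BrowserAwareTest {

    @Rule
    public TestWatcher browserReporter = new TestWatcher() {
        @Override
        protected void failed(Throwable e, Description description) {
            // All of the drivers that ship with Selenium extend
            // RemoteWebDriver, so we can ask which browser was in use.
            Capabilities caps =
                    ((RemoteWebDriver) ThreadedDriver.getDriver()).getCapabilities();
            System.err.printf("%s failed on %s%n",
                    description.getDisplayName(), caps.getBrowserName());
        }
    };
}
```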
So, why don't we just run our tests against one type of browser for the moment? Let's make the tests run against that browser nice and quickly, and then worry about cross-browser compatibility later.