Instant Testing with CasperJS

By: Eric Brehault

Overview of this book

Professional web development implies systematic testing. While JavaScript unit tests will validate your JavaScript library's quality, web functional testing is the only way to guarantee the expected behavior of your web pages. CasperJS is a fast and simple JavaScript testing API that can run on any platform, and it is currently one of the best and easiest ways to write your functional tests. Instant Testing with CasperJS will teach you how to write efficient and accurate tests for your professional web development projects.

This practical guide explains the various CasperJS principles through clear and detailed examples, covering a large set of common use cases, and progressively covers everything you need to know, from CasperJS basic principles to the most advanced testing practices.

The book starts off by introducing you to the different testing assertions that you can perform with the CasperJS API. We then move on to cover why bad timing when triggering events can ruin tests, and learn strategies to avoid it. Finally, you will learn how to test complex web interactions such as drag and drop, authentication, and file uploading. With Instant Testing with CasperJS, you will be able to set up an advanced functional test suite for your web development projects quickly and efficiently.

Best practices (Intermediate)


This section will discuss the essential best practices for web functional testing with CasperJS.

Testing the real thing

Our software quality depends on our tests' accuracy. Testing is always good, but if we don't test the software's behavior accurately, we might miss out on some potential problems. To create accurate tests, we must forget about the system and how it works, and we must focus on user interactions.

It might sound obvious, but it is not, because most of the time we design and code the system, yet rarely use it the way a standard user would.

Let's consider a typical example of a basic web form. Here is an important thing to know about web forms: users never submit web forms. They actually do the following:

  • Enter values into inputs

  • Click on the Submit button

These actions do produce a form submission, but the users don't actually submit the form by themselves; their web browsers do it for them. Or, let's say the system does it for them.

Obviously, if we want to test this system, we cannot rely on its supposed behavior.

That is why our tests must produce the real user interactions and then assert that the resulting behavior is correct.

Let's have a look at the following example (example11.html):

<html><body>
  <form id="form1">
    <p>Firstname: <input type="text" name="firstname"/></p>
    <p>Lastname: <input type="text" name="lastname"/></p>
    <p>Age: <input type="text" name="age"/></p>
    <input type="submit" value="Save" name="save" />
  </form>
  <form id="form2">
    <p>Firstname: <input type="text" name="firstname"/></p>
    <p>Lastname: <input type="text" name="lastname"/></p>
    <p>Age: <input type="text" name="age"/></p>
  </form>
  <form id="form3">
    <p>Firstname: <input type="text" name="firstname"/></p>
    <p>Lastname: <input type="text" name="lastname"/></p>
    <p>Age: <input style="display: none;" type="text" name="age"/></p>
    <input type="submit" value="Save" name="save" />
  </form>
</body></html>

This page shows the following three forms:

  • The first one contains first name, last name, and age, and a Save button

  • The second one has the same fields but no Save button

  • The third one has all the fields and a Save button, but the age field is hidden

If we launch our simple HTTP server, the first one will work fine, but the other two will not be usable.

Now, let's test it as follows (example11-1.js):

var formid = casper.cli.options['formid'];

casper.test.begin('Test form submission', 3, function(test) {

    casper.start('http://localhost:8000/example11.html', function() {
        this.fill("#" + formid, {
            firstname: "Isaac",
            lastname: "Newton",
            age: "370",
        }, true);
    });
    casper.then(function() {
        test.assertUrlMatch(/firstname=Isaac/);
        test.assertUrlMatch(/lastname=Newton/);
        test.assertUrlMatch(/age=370/);
    });
    casper.run(function() {
        test.done();
    });
});

This test uses casper.fill() to submit the form with a value for each field, and then asserts that we obtain the three values in the resulting URL.
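The assertUrlMatch() assertion simply tests the current URL against a regular expression. A quick plain-JavaScript illustration of what each of those three checks evaluates (the URL below is a hypothetical result of a GET submission of the first form):

```javascript
// Hypothetical URL produced by a GET submission of form1
var url = "http://localhost:8000/example11.html?firstname=Isaac&lastname=Newton&age=370&save=Save";

// assertUrlMatch() passes when the pattern matches the current URL
console.log(/firstname=Isaac/.test(url)); // true
console.log(/lastname=Newton/.test(url)); // true
console.log(/age=370/.test(url));         // true
```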

Tip

In this test, we use casper.cli.options to read options passed to the casperjs command; this way, we can use the same script to test the three different forms.
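casper.cli.options collects the --name=value arguments passed on the command line into a plain object. The following is a minimal node-runnable sketch of that parsing idea, not CasperJS's actual implementation (which also handles positional arguments and type casting):

```javascript
// Minimal sketch: turn --name=value arguments into an options object
function parseOptions(args) {
    var options = {};
    args.forEach(function(arg) {
        var match = /^--([^=]+)=(.*)$/.exec(arg);
        if (match) {
            options[match[1]] = match[2];
        }
    });
    return options;
}

// e.g. casperjs test example11-1.js --formid=form2
var options = parseOptions(["--formid=form2"]);
console.log(options.formid); // "form2"
```

This is what lets the same script cover form1, form2, and form3 by just changing the --formid value on the command line.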

Let's run it. The following screenshot shows the output:

The tests pass with the first form as expected, but they also pass with the other two!

Why is that? It's because the fill() method is blind to the mistakes we have introduced in the forms; it just performs a submission without checking if a real user could actually do the same.

The following is a test that is closer to real user interaction (example11-2.js):

var formid = casper.cli.options['formid'];

casper.test.begin('Test form submission', 3, function(test) {
    casper.start('http://localhost:8000/example11.html', function() {
        this.sendKeys("form#"+formid+" input[name='firstname']", "Isaac");
        this.sendKeys("form#"+formid+" input[name='lastname']", "Newton");
        this.sendKeys("form#"+formid+" input[name='age']", "370");
        this.click("form#"+formid+" input[type=submit][value='Save']");
    });
    casper.then(function() {
        test.assertUrlMatch(/firstname=Isaac/);
        test.assertUrlMatch(/lastname=Newton/);
        test.assertUrlMatch(/age=370/);
    });

    casper.run(function() {
        test.done();
    });
});

Here we use sendKeys() to enter the values in the inputs, and the click() method to click on the Submit button. This is basically what a user would do with his or her keyboard and mouse.

Now let's run it. The following screenshot shows the output:

This is much better; now our tests fail at the second and third forms.

This does not mean that we must never use the fill() method. That was just an example. The main point here is that we must always be careful to keep as close as possible to the user interactions. But of course, we also need to create concise and maintainable tests.

So a good approach is to write several kinds of tests, such as the following:

  • Some precise tests focusing on the user interactions (where we will use sendKeys() and click() instead of fill(), for instance) to make sure each page is usable by itself

  • Some concise tests focusing on screen chaining and usage scenarios (where we can use fill(), for instance) to make sure the complete application is working fine

Surviving design changes

Some people prefer to write tests at the end of their development, when they know everything is pretty much stable.

But the best time to write tests is from the beginning to the end of the development. This has already been demonstrated in a lot of books, but one of the most obvious reasons is that we introduce most of our bugs during the development phase, and tests are a great help to fight bugs.

Unfortunately, people who write tests at the end are right: the website is (usually) more stable after development than during development.

One of the aspects that might change a lot is the design. Design changes should not impact application features. But sometimes they do, and there may be a lot of reasons for it: a CSS attribute can make a button invisible, a modification of an element's ID can break a JavaScript call, and so on. But this does not worry us much, because if the design breaks any feature, our tests will warn us immediately.

The problem is that sometimes our tests fail even if the design changes haven't broken any features. This is very bad because it implies that we cannot trust our tests to know whether something is broken or not.

Why would our tests fail if all the features have been preserved? Simply because our tests are less design-proof than our web page features, and writing design-proof tests depends mainly on selectors.

Indeed, our test inner logic (for example, if we click here, we should get that) should not be impacted by design changes.

But what could easily break if we are not careful enough is the way we define here and that. They will be defined using selectors. We must choose selectors that focus on the logic and not on the layout.

The following are some examples:

  • Instead of "div#form-container span input", prefer "form[name='registration'] input[name='firstname']", because we only depend on form elements and their names

  • Instead of "div ul li:first-child a", prefer "#results .result:first-child a", because we use IDs and classes instead of tag names

  • Instead of "a#reset-btn", prefer x("//a[normalize-space(text())='Reset']"), because we use the link text instead of its ID (and to do this, we switch to the XPath selector)
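XPath's normalize-space() trims leading and trailing whitespace and collapses internal whitespace runs to single spaces, which makes the text match robust against markup reformatting. A plain-JavaScript equivalent of that behavior:

```javascript
// JavaScript equivalent of XPath's normalize-space():
// collapse whitespace runs to single spaces, then trim the ends
function normalizeSpace(text) {
    return text.replace(/\s+/g, " ").trim();
}

// A link reindented by a design change, e.g. "\n  Reset\n", still matches
console.log(normalizeSpace("\n  Reset\n") === "Reset"); // true
```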

By doing this, we can change our design, switch from Bootstrap to Foundation, reorganize the layout, and so on, and be sure that if the tests fail, they do for a good reason: because we have actually broken the logic.

Creating test suites

Until now, we have created single test scripts for our different example pages, but when we test our real applications, we will need to test a lot of different features and scenarios.

It will definitely work if we do it in a single long test script, but obviously, it will be more difficult to refactor it, maintain it, share it with a team, and so on.

So a simple but effective practice is to split our different feature tests and testing scenarios into separate scripts. Fortunately, CasperJS provides the casperjs test command so we can run all our tests at once.

Let's reuse our example8.html page from the previous chapter. This page proposes two features: an editable header and a simple form with text input.

Let's imagine that we want to create two different tests for these two features. So let's create a folder (named suite, for instance), and in this folder, create the files test_editable_header.js and test_form.js.

Create test_editable_header.js as follows:

casper.test.begin('Test editable header', 1, function(test) {
  casper.start('http://localhost:8000/example8.html', function() {
    this.click('h1');
    this.sendKeys('h1', 'I have changed ');
  });
  casper.then(function() {
    test.assertSelectorHasText('h1', "I have changed my header");
  });
  casper.run(function() {
    test.done();
  });
});

Create test_form.js as follows:

casper.test.begin('Test form', 1, function(test) {
  casper.start('http://localhost:8000/example8.html', function() {
    this.sendKeys('input[name="firstname"]', 'Eric');
  });
  casper.then(function() {
    test.assertSelectorHasText('#message', "You have entered 4 characters.");
  });

  casper.run(function() {
    test.done();
  });
});

And now, let's launch our tests:

As we can see in the preceding screenshot, we have passed our folder path to the casperjs test command, and it has run all the tests contained in this folder.

The casperjs test command offers interesting options, as follows:

  • --fail-fast: This is used to stop the test suite at the first error

  • --pre=pre-test.js: This is used to run a test before executing the test suite

  • --post=post-test.js: This is used to run a test after executing the test suite

  • --includes=file1.js,file2.js: This is used to include the given files before running each test in the suite

  • --direct: This is used to output the log message in the console

  • --log-level=<level>: This is used to choose the log level (debug, info, warning, or error)

  • --xunit=<filename>: This is used to export the test results to the xUnit format

The --pre and --post options are typically used to implement a pre-test setup and post-test tear down, respectively.

For instance, if our system allows users to modify their preferences and we want to test it, the pre-test setup will create a fake user profile so we can test profile preference change. The post-test tear down will remove this fake user profile, so the next time we run the test suite, it will not break because the profile already exists.
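The pattern behind --pre and --post is plain setup/teardown. The following is a sketch of that idea, independent of CasperJS, using an in-memory object to stand in for the real user database (all names here are hypothetical):

```javascript
// Hypothetical in-memory user store standing in for the real backend
var users = {};

// pre-test.js plays this role: create the fixture profile before the suite
function setUp() {
    users["fake-user"] = { preferences: { theme: "default" } };
}

// the test itself changes a preference and checks the result
function testPreferenceChange() {
    users["fake-user"].preferences.theme = "dark";
    return users["fake-user"].preferences.theme === "dark";
}

// post-test.js plays this role: remove the fixture so the next run starts clean
function tearDown() {
    delete users["fake-user"];
}

setUp();
console.log(testPreferenceChange()); // true
tearDown();
console.log("fake-user" in users);   // false
```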

Running CasperJS on Jenkins

Writing tests is a good starting point, but then we have to make sure we run them often enough. One of the Continuous Integration (CI) principles is to run the tests each time we commit a change in the source repository. To do this, we need a CI tool, and Jenkins is one of the most widely used ones.

Note

We will not cover the detailed Jenkins installation and configuration here as it is not a desktop application but a service exposed by a server. We assume it is deployed on one of our servers.

To run our CasperJS tests on Jenkins, we need first to make sure that PhantomJS and CasperJS are installed on the machine where Jenkins is running (refer to the Installing CasperJS section).

Then, we open the Jenkins web interface and click on New Job as shown in the following screenshot:

In the Build section, we add a new Execute shell step:

Then we click on Save and enter our test command in the Build section.

Assuming our tests are in a folder named tests at the root of our repository, we would enter the following:

casperjs test ./tests

The following screenshot shows the textbox in which we will add the preceding command:

We can now launch a build manually or let Jenkins launch builds automatically (depending on the triggers we have chosen). Jenkins will directly interpret the CasperJS output, and we will get a build history showing failures and successes as follows:

We can also see the details of a given build as in the following screenshot:

Running CasperJS on Travis-CI

Travis-CI is a Cloud service that can be hooked to our GitHub repositories. It is free for public repositories. Each time we push changes to GitHub, Travis-CI does the following:

  • Creates a blank virtual machine

  • Checks out the current sources from GitHub

  • Deploys our application

  • Runs the tests

  • Notifies the user (via e-mail, IRC, and so on)

It also does the same when we receive a pull request on our GitHub repository so we know whether the submitted pull request breaks the tests or not before merging it. This information is displayed directly on GitHub.

To run CasperJS tests on Travis-CI, we need to do the following:

  1. Go to travis-ci.org and sign in with our GitHub account.

  2. Go to Profile and copy the token.

  3. Go to the GitHub repository, click on Settings / Service Hooks, choose Travis-CI, enter our GitHub ID and the previously copied Travis token, check Active, and click on Update.

  4. Add a .travis.yml file in the root of our repository.

This .travis.yml file is used to explain to Travis how to deploy the test environment and how to run the tests.

We just need to deploy CasperJS because PhantomJS is preinstalled on Travis.

Tip

Since PhantomJS is completely headless, there is no need to run Xvfb.

The following is a typical .travis.yml file:

install:
  - git clone git://github.com/n1k0/casperjs.git
  - cd casperjs; git checkout tags/1.1; cd -
before_script:
  - "export PHANTOMJS_EXECUTABLE='phantomjs --local-to-remote-url-access=yes --ignore-ssl-errors=yes'"
script:
  - ./casperjs/bin/casperjs test ./tests

In the install section, we download the CasperJS code and check out a stable version (the 1.1 tag). In the before_script section, we set up PhantomJS to allow access to external URLs, and in the script section, we launch the tests.

Just like Jenkins, Travis-CI will interpret the CasperJS output result as a success or failure, and we will be notified accordingly. Our tests can target a local server, and if so, our .travis.yml file will need to deploy the needed HTTP server and its components.
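If the tests target a local server, the before_script section can start one before launching the suite. The following sketch extends the previous .travis.yml using Python's built-in HTTP server (the port and the sleep delay are assumptions to adapt to your project):

```yaml
install:
  - git clone git://github.com/n1k0/casperjs.git
  - cd casperjs; git checkout tags/1.1; cd -
before_script:
  - "export PHANTOMJS_EXECUTABLE='phantomjs --local-to-remote-url-access=yes --ignore-ssl-errors=yes'"
  # Serve the working directory on port 8000 in the background
  - python -m SimpleHTTPServer 8000 &
  # Give the server a moment to start before the tests hit it
  - sleep 2
script:
  - ./casperjs/bin/casperjs test ./tests
```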

But the tests can also target an external URL, and if so, we have to make sure the code is updated on the real server as soon as it is pushed to GitHub. This can be done conveniently using GitHub Pages.