Browser Testing with PhantomJS and Mocha – Part 2

Happy New Year!

Today I am going to complete the work on browser testing. In the last article, I introduced running Mocha in a browser: you could execute the tests manually by setting up a test server and generating a static site. Now I am going to automate that task and take the visible browser out of the mix.

A big part of the process is PhantomJS – a headless browser that enables a number of scenarios, among them headless browser testing. There are plug-ins that hook it into most testing tools, including Mocha, Jasmine, and Chutzpah.

Before I get to that, I need a strategy. My build process is driven by Gulp: I run gulp test to build the library and run all the tests. I need a task that sets up a test web server, uses PhantomJS and Mocha to run the test suite (bailing on a failed test), and finally shuts the test web server down. I’ve already discussed the test server, but that version runs forever.

Fortunately for me, Mocha and PhantomJS are such a popular combination that there is a Gulp plug-in for the combo called gulp-mocha-phantomjs, which is really a thin wrapper around mocha-phantomjs. PhantomJS is bundled, so it should “just work”. I did have some trouble getting PhantomJS working on Mac OS X El Capitan due to the security policies. To fix this, open System Preferences, then Security & Privacy. There is a section to Allow applications downloaded from Anywhere:

[Screenshot: the Security & Privacy pane with “Allow applications downloaded from: Anywhere” selected]

The Gulp task looks like this:

var gulp = require('gulp'),
    express = require('express'),
    phantomjs = require('gulp-mocha-phantomjs'),
    runSequence = require('run-sequence'),
    config = require('../configuration');

var port = config.test.server.port || 3000;
var server = 'http://localhost:' + port + '/';
var listening;

var app = express();
app.use(express.static(config.test.server.rootdir));

gulp.task('browser:global', function () {
    var stream = phantomjs({ reporter: 'spec' });
    stream.write({ path: server + 'global.html' });
    stream.end();
    return stream;
});

gulp.task('testserver:close', function (callback) {
    console.log('Test Server stopped');
    listening.close();
    callback();
});

module.exports = exports = function (callback) {
    listening = app.listen(port, function () {
        console.log('Test Server started on port ', port);
        runSequence('browser:global', 'testserver:close', callback);
    });
};

The task uses a global variable, listening, to store the server reference. The testserver:close task uses it to close the connection and make the server quit. The exported task sets the server listening; once it is, the test suites run in order. I’ve only got one test suite right now. If I were expanding this to other test suites, I would add a task for each suite and insert it into the runSequence call before the testserver:close task.
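For illustration, a hypothetical second suite would look like this – amd.html is an invented page name, and the task lives in the same file as the code above so it can reuse the server variable:

```javascript
// Hypothetical second suite - amd.html is an invented test page name.
// This extends the task file above, reusing its phantomjs, server,
// app, port and listening variables.
gulp.task('browser:amd', function () {
    var stream = phantomjs({ reporter: 'spec' });
    stream.write({ path: server + 'amd.html' });
    stream.end();
    return stream;
});

// ...and the exported task's sequence grows by one entry:
module.exports = exports = function (callback) {
    listening = app.listen(port, function () {
        console.log('Test Server started on port ', port);
        runSequence('browser:global', 'browser:amd', 'testserver:close', callback);
    });
};
```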

I’ve linked the task into my main Gulpfile.js like this:

var gulp = require('gulp');

gulp.task('lint', require('./gulp/tasks/lint'));
gulp.task('mocha', require('./gulp/tasks/mocha'));

gulp.task('build:testserver', [ 'build' ], require('./gulp/tasks/buildserver'));
gulp.task('browser-tests', [ 'build:testserver' ], require('./gulp/tasks/browsertests'));

gulp.task('build', [ 'lint', 'mocha' ], require('./gulp/tasks/build'));
gulp.task('test', ['lint', 'mocha', 'browser-tests']);
gulp.task('default', ['build', 'test']);

The task is stored in gulp/tasks/browsertests.js. This sequencing ensures that the main test suite and linter run first, then the library is built, and then the browser tests run. Output should now look like this:

[Screenshot: spec-reporter output from the PhantomJS test run]

There is a small problem – the server continues to run (and the process never exits) if the browser tests fail. However, I find that reasonable since I will want to load the failing test up into a web browser to investigate if the tests fail.

Testing async functions with mocks and mocha in JavaScript

I’ve recently gone down the road of testing all my code using Mocha and Chai, and I aim for 100% code coverage. My current library makes an HTTP connection to a backend, and I’m hoping to use node-fetch for that. But how do you test a piece of asynchronous code that uses promises or callbacks?

Let’s take a look at my code under test:

import fetchImpl from 'node-fetch';

export default class Client {
    constructor(baseUrl, options = {}) {
        const defaultOptions = {
            fetch: fetchImpl
        };

        this.prvOptions = Object.assign({}, defaultOptions, options);
        this.prvBaseUrl = baseUrl;
    }

    fetch(relativeUrl, options = {}) {
        const defaultOptions = {
            method: 'GET'
        };

        let fetchOptions = Object.assign({}, defaultOptions, options);
        return this.prvOptions.fetch(`${this.prvBaseUrl}${relativeUrl}`, fetchOptions);
    }
}

This is a much shortened version of my code, but the basics are there. Here is the important thing – I set a default option that holds the fetch implementation. It’s set to the “real” version by default – you can see that in the defaultOptions object in the constructor. If I don’t override the implementation, I get the node-fetch version.

Later on, I call client.fetch('/foo'). The client library uses my provided implementation of fetch or the default one if I didn’t specify.

All this logic allows me to substitute (or mock) the fetch command. I don’t really want to test the functionality of fetch – I just want to ensure I am calling it with the right parameters.
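The mechanism is nothing more than Object.assign’s “later keys win” merge. Here is a minimal, self-contained sketch of the pattern – realFetch and mockFetch are stand-ins invented for this example:

```javascript
// Stand-ins for node-fetch and a test double.
function realFetch(url) { return 'real:' + url; }
function mockFetch(url) { return 'mock:' + url; }

function makeOptions(overrides) {
    var defaultOptions = { fetch: realFetch };
    // Later arguments win, so anything in overrides replaces the default.
    return Object.assign({}, defaultOptions, overrides);
}

var prodOptions = makeOptions({});                    // no override
var testOptions = makeOptions({ fetch: mockFetch });  // override wins

console.log(prodOptions.fetch('/foo'));  // prints "real:/foo"
console.log(testOptions.fetch('/foo'));  // prints "mock:/foo"
```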

Now for the tests. My first problem is that I have asynchronous code here. fetch returns a Promise, and Promises are asynchronous. That means I can’t just write tests like I was doing before – they would fail because the response isn’t available during the test. The mocha library helps by providing a done callback. The general pattern is this:

    describe('#fetch', function() {
        it('constructs the URL properly', function(done) {
            client.fetch('/foo')
                .then((response) => {
                    expect(response.url).to.equal('https://foo.a.com/foo');
                    done();
                })
                .catch((err) => {
                    done(err);
                });
        });
    });

You might remember the .then/.catch pattern from the standard Promise documentation. Mocha provides a callback (generally called done). You call the callback when you are finished. If you encountered an error, you call the callback with the error. Mocha uses this to deal with async tests.

Note that I have to handle both the .then() and the .catch() clause. Don’t expect Mocha to call done for you. Ensure all code paths in your test actually call done appropriately.
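As an aside, recent versions of Mocha also accept a returned Promise in place of the done callback. A toy runner – invented here for illustration, not Mocha’s actual code – shows why that works:

```javascript
// A toy stand-in for Mocha's `it`: when the test function returns a
// Promise, the runner chains onto it, treating resolution as a pass
// and rejection as a failure.
var results = [];

function it(name, fn) {
    return Promise.resolve()
        .then(fn)
        .then(function () { results.push('pass: ' + name); },
              function (err) { results.push('fail: ' + name + ' (' + err.message + ')'); });
}

var run = it('resolves with the right value', function () {
    // No done callback needed - just return the Promise chain.
    return Promise.resolve(42).then(function (value) {
        if (value !== 42) { throw new Error('expected 42'); }
    });
});

run.then(function () { console.log(results.join('\n')); });  // prints "pass: resolves with the right value"
```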

This still has me calling client.fetch without an override. I don’t want to do that – I’ve got the ability to swap out the implementation. I have a mockfetch.js file that looks like this:

export default function mockfetch(url, init) {
    return new Promise((resolve, reject) => {
        resolve({url: url, init: init});
    });
}

The only thing the mockfetch method does is return an already-resolved promise whose value echoes back the parameters that were passed in. Now I can finish my test:

    describe('#fetch', function() {
        let clientUrl = 'https://foo.a.com';
        let clientOptions = {fetch: mockfetch};
        let client = new Client(clientUrl, clientOptions);

        it('constructs the URL properly', function(done) {
            client.fetch('/foo')
                .then((response) => {
                    expect(response.url).to.equal('https://foo.a.com/foo');
                    done();
                })
                .catch((err) => {
                    done(err);
                });
        });
    });

Note that my mockfetch does not return anything resembling a real response – it’s not even the same object type or shape. That’s actually ok because it’s designed for what I need it to do – respond appropriately for the function under test.
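As a small aside, the same mock can be written more compactly with Promise.resolve, which produces an already-resolved Promise without the executor boilerplate:

```javascript
// Equivalent to the mockfetch above, minus the `new Promise` wrapper.
function mockfetch(url, init) {
    return Promise.resolve({ url: url, init: init });
}

mockfetch('/foo', { method: 'GET' }).then(function (response) {
    console.log(response.url);          // prints "/foo"
    console.log(response.init.method);  // prints "GET"
});
```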

There are three things here:

  1. Construct your libraries so that you can mock any external library calls
  2. Use the Mocha “done” parameter to handle async code
  3. Create mock versions of those external library calls

This makes testing async code easy.

Mocha Tests and ECMAScript 2015

Recently, I tried my hand at testing a library with Mocha and Chai. It went rather well, and I’ve just about integrated testing into my day-to-day life. I won’t say I’m perfect – the people I work with will attest that they sometimes need to remind me to write tests. Today my problem is testing ECMAScript 2015 code.

I have a nice API for parsing a URI. It’s based on work by Steven Levithan from way back in 2007. I wanted to bring it up to date and rewrite it as a class in ECMAScript 2015. I won’t bore you with the code – it’s relatively easy. I obviously want to write tests for this. Here is the test code:

///<reference path="../typings/mocha/mocha.d.ts"/>
///<reference path="../typings/chai/chai.d.ts"/>
import {expect} from 'chai';
import URL from '../src/url';

describe('URL', function () {
    describe('.constructor()', function () {
        it('should accept a simple URL', function () {
            var e = new URL('http://mywebsite.com/');
            expect(e).to.be.an.instanceof(URL);
        });

        it('should accept a loose URL', function () {
            var e = new URL('yahoo.com/search');
            expect(e).to.be.an.instanceof(URL);
        });

        it('should accept a strict URL', function () {
            var e = new URL('http://yahoo.com/search/', true);
            expect(e).to.be.an.instanceof(URL);
        });
    });
});

Note the import statements above the tests. They tell me it’s ES2015 code and not regular JavaScript. So what happens when you try to run mocha?

[Screenshot: mocha failing to parse the ES2015 test file]

The problem is really that Node.js doesn’t support all the ES2015 syntax yet – I need to transpile. I can do this one of two ways. The obvious one is to transpile the code into a separate directory and then run the mocha tests on that. This is unsatisfactory. Firstly, I’d have to create a gulp job to transpile and then run the unit tests, because otherwise I’ll forget. Secondly, it increases the footprint: I can’t just quickly run mocha with an argument to run one test – I have to run a full compile first.

That leads me to the second way: I can run Mocha with a transpiler. First, I install a Mocha transpiler plugin. That’s another npm package:

npm install --save-dev mocha-babel

Make sure you use the same transpiler as you would with your code. If you use traceur normally, then install mocha-traceur instead. Now I can run the tests with a command line argument:

mocha --compilers js:mocha-babel

This will run all the tests on my ES2015 code, transpiling on the fly for me. I can now place this in my package.json as follows:

  "scripts": {
    "test": "mocha --compilers js:mocha-babel"
  },

What about babel options? Well, you can create a file in the root of your project called mocha-babel.js which contains the options you want. For instance:

require('babel/register')({
  'presets': [ 'es2015' ],
  'plugins': [ 'class-properties' ]
});

The options are passed through to the Babel transpiler as-is, so make sure your options match the version you are using. There was a significant change in options between v5.x and v6.x of Babel.

Now, back to my developing!

Testing a NodeJS Library with Mocha and Chai

I’ve asserted before that I am not a “professional developer” partly because I don’t test. There are three things in testing that are important – a willingness to put in the time to learn a test framework, the writing of the tests and the adoption of a testing methodology. Today, I’m going to do all three for my latest project – a configuration framework for NodeJS that I am writing.

Testing Methodologies

Let’s start with the adoption of a testing methodology. One could write the code and then write some unit tests to make yourself feel good about releasing the library – but that’s not really a methodology.

Test Driven Development (TDD) is the first of the methodologies I can discuss. In TDD, you write the tests first, based on what the code is meant to do. This requires a level of up-front design, of course: you write out the calls your library should support. Then you continually write code until all the tests pass. You are pretty well guaranteed 100% test coverage because you are coding against the tests. Once the tests pass, the code is complete.

TDD does fall down in a couple of areas – most notably where state comes into play. TDD is not a good fit for UI testing, for example. In the case of a library, your API is a contract – it either passes or fails. If you have enough tests to describe the API fully, then you’ve got a good test suite. In the land of UI development, however, there are corner cases. What if a user does something unexpected? One could assert that the UI is also a contract between a user and the program, but there are lots of things that can happen – device differences, environment differences and so on – that make this not so straightforward.

BDD (Behaviour Driven Development) is a similar methodology, but it describes behaviours rather than unit tests. For example, in my configuration project, TDD would test each method; BDD would test the act of producing a valid configuration.

There are other tests that you should consider aside from unit tests. You should definitely do some tests that are end-to-end (normally abbreviated as E2E). In my example, I want to support a set of common patterns for producing configurations, so I definitely want to test those situations.

Choice: TDD – the writing of unit tests and some E2E tests for the common patterns.

Testing Toolsets

This brings us to testing tools. In the NodeJS world, there are choices. I often got stuck on the implementation details of tests and that caused me to spin, eventually leading me to drop testing altogether because I just couldn’t decide. In general, you need to decide on two pieces – an assertion library and a test runner. Based on my prior research, I decided on Mocha and Chai: Mocha is the test runner and Chai is the assertion library. There is good information on each website, so I’m not going to go into detail. Instead I’m going to focus on setting up testing on my project.

Writing Tests

I’m using TypeScript and Visual Studio to generate all my code for this library I am writing. In my previous post, I set up the project and loaded it into Visual Studio. Today, my first step is to create a folder called test. Since I have the Node Tools for Visual Studio installed, I can right-click on the test folder and select Add > New Item… There is a Mocha UnitTest File as an option under Node.js in both a JavaScript and TypeScript variety. I like to be able to run my build process without compilation, so my library is written in TypeScript, but the Gulpfile and unit tests are written in JavaScript:

[Screenshot: the Add New Item dialog showing the Mocha UnitTest File templates]

I have not included Mocha or Chai in my project. Since I am in Visual Studio, I can expand the npm view in my Solution Explorer, right-click on the dev node and select Install new npm packages…

[Screenshot: the npm node in Solution Explorer with “Install new npm packages…” selected]

Searching for Mocha and Chai is enough:

[Screenshot: searching for and installing Mocha and Chai]

One of the neat things about this is the warning it gives you on Windows:

[Screenshot: the npm warning on Windows suggesting npm dedupe]

Yes, you want to run npm dedupe. Fortunately, npm3 will get rid of this annoyance, but it isn’t the default release yet. Back to the test file. My configuration library has a class – Source – with a number of methods that I want to test individually. Each method will have a number of tests associated with it. I’ve created a configuration.js in the test directory. Mocha will run all of the JavaScript files in the test directory by default. Here is my initial code:

var expect = require('chai').expect,
    Source = require('../dist/Source');

describe('Source', function () {
    describe('.keys()', function () {
        // Tests for the Source.keys() method
    });

    describe('.type', function () {
        // Tests for the Source.type property
    });

    describe('.location', function () {
        // Tests for the Source.location property
    });

    describe('.get()', function () {
        // Tests for the Source.get() method
    });
});

The first line brings in the expect syntax from the Chai library. Chai supports three different syntax implementations – should, expect and assert. They are mostly similar but do have some minor implementation differences. I like the readability of expect, so I’m going to use that. I also bring in my library under test. Finally, I describe the tests I am going to run – the outer describe says I am testing the Source class, and the inner describes say I am testing a particular method or property. You can nest as much as you want.

Writing the tests

Let’s take the type property. I try to think about the tests first. Here is my logic:

  • It is set by the constructor
  • It is read-only
  • It is a string

Here is my code:

    describe('.type', function () {
        it('should return a string', function () {
            var s = new Source('static');
            expect(s.type).to.be.a('string');
        });

        it('should be the same as the constructor value', function () {
            var s = new Source('static');
            expect(s.type).to.equal('static');
        });

        it('should be read-only', function () {
            var s = new Source('static');
            expect(function () { s.type = 'new-value'; }).to.throw(Error);
        });
    });

I find these tests to be highly readable. Each test case is self-contained – you could run any of these tests by itself and not worry about the state of the test system.

Running Tests

Before running tests, you need to have mocha installed globally so you can run it:

npm install -g mocha

Now I need a stub of my eventual implementation:

class Source {
    constructor(type: string, filename?: string) {
    }

    get type(): string {
        return null;
    }
}

export = Source;

Running mocha gets me a whole bunch of errors, but look at the top of the output:

[Screenshot: mocha output with the failing tests at the top]

Now I can run mocha whenever I want. You will note that the stack trace from the assertion library is printed for each error. One of the things I like doing is working on “the next error” – you can do this easily with mocha -b:

[Screenshot: mocha -b stopping at the first failure]
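For reference, here is a plain-JavaScript sketch of an implementation that would satisfy the .type tests – the property name _type is my own choice, and the real TypeScript class obviously does more:

```javascript
// A plain-JavaScript sketch of an implementation that satisfies the
// .type tests. The _type property name is invented for this sketch.
'use strict';

class Source {
    constructor(type, filename) {
        this._type = type;
        this._filename = filename;
    }

    // A getter with no matching setter makes the property read-only:
    // reads return the constructor value, and assignments throw a
    // TypeError in strict mode.
    get type() {
        return this._type;
    }
}

var s = new Source('static');
console.log(typeof s.type);  // prints "string"
console.log(s.type);         // prints "static"
```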

Integrating into the Build Workflow

I want to integrate testing into my workflow. There are two things I want to do here:

  1. Run npm test to test the project
  2. Run Mocha as part of my Gulp standard pipeline

Adding npm test support is easy – just add a “test” entry to the “scripts” section of the package.json file:

  "scripts": {
    "test": "mocha"
  },

Integrating into gulp is also easy. Use the gulp-mocha library:

var gulp = require('gulp'),
    mocha = require('gulp-mocha');

gulp.task('build', ['compile'], function () {
    return gulp.src('./test/**/*.js', { read: false })
        .pipe(mocha({ reporter: 'spec' }));
});

Here, my compile task compiles my code into the distribution area, ready for testing and usage.

Wrap Up

I’ve said a few times in the past that I need to learn testing techniques. Mocha and Chai make it easy. Now all I have to do is ingrain testing into my development world – write tests first and then code to the test. At least I have the tools and workflow to do this task properly.

Web Dev Tools 101: Testing

You were going to test your application, weren’t you? Until recently, I was the kind of person who ran my application in Google Chrome and Internet Explorer on each little piece of functionality I developed. If that functionality ran, then I called the whole application good. This has all sorts of problems for the serious developer. If you are a serious developer then you need to be thinking about testing.

TL;DR

There are several pieces to testing. Here is what I’m going to use: Karma as the test runner, Jasmine as the test framework (with the Chai assertion library when the default isn’t enough), and PhantomJS as the headless browser.

Why do I care?

In the bad old days, JavaScript programmers got a really bad rap as bad programmers. The code wasn’t efficient or modular, there were lots of bad practices, and there was practically zero automated testing. Then the environment grew up a bit, and now we have all this infrastructure available to us – including testing environments.

Let’s be honest: you don’t just want to test your code, you NEED to test your code. There are two ways of doing that. There is the “ad-hoc” method, where you run your application, click around, and see if it fails. Then there is the “rigorous” method, where you write unit and UI tests: you write the unit tests before you write your code, and you run them before checking in your code.

Fortunately, there are a lot of choices when it comes to testing libraries these days. You need a test runner (the thing you integrate into your build process for testing), a test framework (a format that you write all your tests in), sometimes an assertion library, and something to emulate a browser so you don’t have to.

Test Runners

There are only two test runners and only one real contender in this category – so good news there. The two contenders are Karma (you should use this) and Chutzpah (erm – no). Here is what I was looking for in a test runner:

  1. Support on grunt and gulp: You want to integrate testing into your process.
  2. Support for device testing: You want to test on real devices and real browsers.
  3. Support for headless testing: You don’t want to test on real devices all the time.
  4. Test Coverage Reporting: You want to know how much is tested and get better over time.
  5. Visual Studio Support: I like Visual Studio – this is important to me!

Karma

Produced by the Angular team at Google, Karma (previously Testacular – not a good name) is pretty much the de-facto standard in test runners. It has plugins for grunt and gulp, so it can be easily integrated into your workflow. It works with Jasmine, Mocha and QUnit – the three major test frameworks (more on these below). It has support for PhantomJS and SlimerJS (the headless browsers – more on these below) and can drive multiple real browsers and devices. It can also integrate with Istanbul – a code coverage tool. There is even a Visual Studio Test Adapter.

Indeed, there is really very little down side to Karma.
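To make that concrete, a minimal karma.conf.js might look something like this (a sketch: the file patterns are placeholders, and it assumes the karma-jasmine and karma-phantomjs-launcher plug-ins are installed alongside karma):

```javascript
// karma.conf.js - a minimal sketch. File patterns are placeholders;
// assumes karma-jasmine and karma-phantomjs-launcher are installed.
module.exports = function (config) {
    config.set({
        frameworks: ['jasmine'],
        files: [
            'src/**/*.js',
            'test/**/*.spec.js'
        ],
        browsers: ['PhantomJS'],
        reporters: ['progress'],
        singleRun: true
    });
};
```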

Chutzpah

In the interest of being balanced, here is another option. Chutzpah seems to have been written with Visual Studio in mind. It comes with a Visual Studio Test Adapter from the author (rather than a third party), integrates with Jasmine, Mocha and QUnit and has support for PhantomJS.

And there it stops. No gulp/grunt integration. No SlimerJS support. No remote device support. Those are really important things to me, and I just don’t think I can recommend a test runner without them.

Test Frameworks

Test Frameworks are what take your test suite and actually execute the tests. There are three really popular ones – Jasmine, Mocha and QUnit – and a host of others. All of them come with an assertion library, but you can use your own as well if you want (something like should.js, expect.js, or Chai). These assertion libraries just improve the test framework – they are not required.

Jasmine

How about this for a Jasmine test description:

describe("The 'toBe' matcher compares with ===", function() {
  it("and has a positive case", function() {
    expect(true).toBe(true);
  });

  it("and can have a negative case", function() {
    expect(false).not.toBe(true);
  });
});

I can actually read this! It’s JavaScript and it’s semantic. Those of you who have been following me for a while know I love elegant JavaScript. When you can read it, you’ve won. As one would expect, there are a whole host of assertions out of the box, but you can integrate with an assertion library as well (I like Chai, for reference).

So what’s the downfall of this library? Jasmine is targeted at Behavior Driven Development, though I can see it being used for Test Driven Development as well. QUnit is designed for unit testing, so it’s more in the TDD area. Yes, I’m splitting hairs.

Mocha

Mocha is a more flexible take on Jasmine and QUnit. Less is done for you out of the box, but the wide range of plug-ins makes up for it. There is no built-in assertion library (use Chai) or spy framework (use sinon.js). It can be configured for BDD, TDD or both. Basically, if Jasmine or QUnit can’t do something for you and you find yourself reaching toward the other tool, it’s time to move to Mocha.

The tests for Mocha are very similar to Jasmine:

var assert = require("assert");
describe('Array', function(){
  describe('#indexOf()', function(){
    it('should return -1 when the value is not present', function(){
      assert.equal(-1, [1,2,3].indexOf(5));
      assert.equal(-1, [1,2,3].indexOf(0));
    });
  });
});

Note how we have to bring in the assert library first. One isn’t built in, so that’s more scaffolding. Otherwise, it should look very similar to the Jasmine test case.

QUnit

QUnit is made by the same fine people who brought you jQuery. It’s got plenty of tutorial and documentation materials and fine examples. It’s got support for gulp and grunt, and a whole host of plugins to handle BDD (which isn’t handled out of the box) and PhantomJS. A typical QUnit test looks like this:

QUnit.test("prettydate basics", function( assert ) {
    var now = "2008/01/28 22:25:00";
    assert.equal(prettyDate(now, "2008/01/28 22:24:30"), "just now");
    assert.equal(prettyDate(now, "2008/01/28 22:23:30"), "1 minute ago");
    assert.equal(prettyDate(now, "2008/01/28 21:23:30"), "1 hour ago");
    assert.equal(prettyDate(now, "2008/01/27 22:23:30"), "Yesterday");
    assert.equal(prettyDate(now, "2008/01/26 22:23:30"), "2 days ago");
    assert.equal(prettyDate(now, "2007/01/26 22:23:30"), undefined);
});

My major problem with this is that it just isn’t as readable as the Jasmine version. I like the level of documentation and help you can get with QUnit though.

Headless Browsers

At some point you are going to want to test on a real browser. There are basically three flavors of browser – Gecko (most notably Firefox), WebKit (most notably Chrome) and Internet Explorer. I can’t help you with Internet Explorer – you’ll have to use a real device for that. However, I can help you with Gecko and WebKit.

PhantomJS

PhantomJS embeds the WebKit engine, so it behaves very much like Safari and Chrome. This makes it an ideal test companion: you can run the test suite against a “real browser” without having a browser window open.

SlimerJS

SlimerJS is the same thing, but for the Gecko engine that powers Firefox. Unlike PhantomJS, it isn’t truly headless – you still need a graphical environment and you will see windows. This is really only a problem on Windows, as you can use Xvfb (a virtual frame buffer) on Mac and Linux to simulate a screen that you don’t see.

The Verdict

After doing all this research, the choice for me was rather obvious. I’m going to start out with the Karma test runner, run my tests with Jasmine (using the Chai assertion library when the default library isn’t enough) and when I come to do my UI tests I’ll be able to use real devices or the PhantomJS headless browser.