Testing ExpressJS Web Services

Let’s say you have a web application written in NodeJS and you want to test it. What’s the best way to go about that? Fortunately, this is a common enough problem that there are modules and recipes to go along with it.

Separating Express from HTTP

ExpressJS contains syntactic sugar to implement a complete web service. You will commonly see code like this:

var express = require('express');

var app = express();
// Do some other stuff here
app.listen(3000);

Unfortunately, this means you have to make real HTTP calls to test the API, which does not lend itself to easy testing. Fortunately, there is an easier way: separate the Express application from the HTTP logic. First of all, let’s create a web-application.js file. Here is mine:

import bodyParser from 'body-parser';
import compression from 'compression';
import express from 'express';
import logCollector from 'express-winston';
import staticFiles from 'serve-static';

import logger from './lib/logger';
import apiRoute from './routes/api';

/**
 * Create a new web application
 * @param {boolean} [logging=true] - if true, then enable transaction logging
 * @returns {express.Application} an Express Application
 */
export default function webApplication(logging = true) {
    // Create a new web application
    let webApp = express();

    // Add in logging
    if (logging) {
        webApp.use(logCollector.logger({
            winstonInstance: logger,
            colorStatus: true,
            statusLevels: true
        }));
    }

    // Add in request/response middleware
    webApp.use(compression());
    webApp.use(bodyParser.urlencoded({ extended: true }));
    webApp.use(bodyParser.json());

    // Routers - Static Files
    webApp.use(staticFiles('wwwroot', {
        dotfiles: 'ignore',
        etag: true,
        index: 'index.html',
        lastModified: true
    }));

    // Routers - the /api route
    webApp.use('/api', apiRoute);

    // Default Error Logger - should be added after routers and before other error handlers
    webApp.use(logCollector.errorLogger({
        winstonInstance: logger
    }));

    return webApp;
}

Yes, it’s written in ES2015 – I do all my work in ES2015 right now. The export is a function that creates my web application. I’ve also got a couple of extra modules – an API route (which is an ExpressJS Router object) and a logging module.

Note that I’ve provided a logging parameter to this function. Setting logging=false turns off the transaction logging. I want transaction logging when I am running this application in production, but that same logging gets in the way of the test results display when I am running tests. As a result, I want a way to turn it off when I am testing.
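
To make that concrete, here is a minimal sketch of how the same factory serves both situations (the variable names are mine, purely illustrative):

import webApplication from './web-application';

const productionApp = webApplication();      // transaction logging on (the default)
const testApp = webApplication(false);       // logging off so test output stays readable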

I also have a http-server.js file that does the HTTP logic in it:

import http from 'http';

import logger from './lib/logger';
import webApplication from './web-application';

let webApp = webApplication();
webApp.set('port', process.env.PORT || 3000);

logger.info('Booting Web Application');
let server = http.createServer(webApp);
server.on('error', (error) => {
    if (error.syscall !== 'listen') {
        throw error;
    }
    if (error.code) {
        logger.error(`Cannot listen for connections (${error.code}): ${error.message}`);
    }
    throw error;
});
server.on('listening', () => {
    let addr = server.address();
    logger.info(`Listening on ${addr.family}/${addr.address}:${addr.port}`);
});
server.listen(webApp.get('port'));

This uses the Node.js HTTP module to create a web server and start listening on a TCP port. It is pretty much the same code that ExpressJS runs when you call webApp.listen(). Finally, I have a server.js file that registers BabelJS as my ES2015 transpiler and runs the application:

require('babel-register');
require('./src/http-server');

The Web Application Tests

I’ve placed all my source code in the src directory (except for the server.js file, which is in the project root). I’ve got another directory for testing called test. It has a mocha.opts file with the following contents:

--compilers js:babel-register

This automatically compiles all my tests from ES2015 using BabelJS prior to executing the tests. Now, for the web application tests:

/// <reference path="../../typings/mocha/mocha.d.ts"/>
/// <reference path="../../typings/chai/chai.d.ts"/>
import { expect } from 'chai';
import request from 'supertest';

import webApplication from '../src/web-application';

describe('src/web-application.js', () => {
    let webApp = webApplication(false);

    it('should export a get function', () => {
        expect(webApp.get).to.be.a('function');
    });

    it('should export a set function', () => {
        expect(webApp.set).to.be.a('function');
    });

    it('should provide a /api/settings route', (done) => {
        request(webApp)
            .get('/api/settings')
            .expect('Content-Type', /application\/json/)
            .expect(200)
            .end((err) => {
                if (err) {
                    return done(err);
                }
                done();
            });
    });
});

First note that I’m creating the web application by passing the logging parameter of false. This turns off the transaction logging. Set it to true to see what happens when you leave it on. You will be able to see quite quickly that the test results get drowned out by the transaction logging.

My http-server.js file relies on a webApp having a get/set function to store the port setting. As a result, the first thing I do is check to see whether those exist. If I update express and they decide to change the API on me, these tests will point that out.

The real meat is in the third test. This uses supertest – a Web API testing facility that stands in for the Node HTTP module. You send requests into the webApp using supertest instead of over a real HTTP connection. ExpressJS handles the request and sends the response back to supertest, which allows you to check the response.

There are two parts to the test. The first is the construction of an actual request:

    request(webApp)
        .get('/api/settings')

Supertest uses superagent underneath to actually make the requests. Once you have linked in the ExpressJS application, you can send a GET, POST, DELETE or any other verb. DELETE is a special case because delete is a reserved word in JavaScript – use del() instead:

    request(webApp)
        .del('/tables/myTable/1')

You can add custom headers. For example, I do a bunch of work with azure-mobile-apps – I can test that with:

    request(webApp)
        .get('/tables/myTable/1')
        .set('ZUMO-API-VERSION', '2.0.0')
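
Requests with a body work the same way through superagent’s .send() method. For example, a POST sketch (the route and payload here are purely illustrative):

    request(webApp)
        .post('/api/settings')
        .send({ theme: 'dark' })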

Check out the superagent documentation for more examples of the request API.

The second part of the request is the assertions. You can assert on anything – a specific header, status code or body content. For example, you might want to assert on the status code:

    request(webApp).get('/api/settings')
        .expect(200)

You can also expect a body. For example:

    request(webApp).get('/index.html')
        .expect(/<html>/)

Note the use of the regular expression here. That pattern is really common. You can also check for a specific header:

    request(webApp).get('/index.html')
        .expect('X-My-Header', /value/);

Once you have your sequence of assertions, you can close out the request. Since superagent and supertest are asynchronous, you need to handle the test asynchronously. That involves accepting the ‘done’ parameter from Mocha and calling it after the test is over. You pass a callback into the .end() method:

    request(webApp).get('/index.html')
        .expect('X-My-Header', /value/)
        .end((error) => {
            done(error);
        });
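
Supertest also lets you pass done straight into an .expect() call, which ends the request and reports any error for you – a slightly more compact sketch of the same idea:

    request(webApp).get('/index.html')
        .expect('X-My-Header', /value/)
        .expect(200, done);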

Wrapping up

The supertest module, when combined with mocha, allows you to run test suites without spinning up a server and that enables you to increase your test coverage of a web service to almost 100%. With this, I’ll now be able to test my entire API surface automatically.

Browser Testing with PhantomJS and Mocha – Part 2

Happy New Year!

Today I am going to complete the work of browser testing. In the last article, I introduced MochaJS in a browser, so you could run tests manually in a browser by setting up a test server and generating a static site. I am going to automate that task and take the browser out of the mix.

A big part of the process is the inclusion of PhantomJS – a headless browser that can be used for a number of things, among them automated browser testing. There are plug-ins for most test runners, including Mocha, Jasmine, and Chutzpah.

Before I get to that, I need a strategy. My build process is driven by gulp. I run gulp test to build the library and run all the tests. I need a task that will set up a test web server, use PhantomJS and Mocha to run the test suite (bailing on a failed test), and then finally shut down the test web server. I’ve already discussed the test server, but that version runs forever.

Fortunately for me, Mocha and PhantomJS are such a popular combination that there is a Gulp plug-in for the combo called gulp-mocha-phantomjs, which is really a thin wrapper around mocha-phantomjs. PhantomJS is bundled, so it should “just work”. I did have some trouble getting PhantomJS working on Mac OS X El Capitan due to the security policies. To fix this, open System Preferences, then Security & Privacy. There is a section to Allow applications downloaded from Anywhere:

[Screenshot: the macOS Security & Privacy preferences pane, with “Allow apps downloaded from” set to Anywhere]

The Gulp task looks like this:

var gulp = require('gulp'),
    express = require('express'),
    phantomjs = require('gulp-mocha-phantomjs'),
    runSequence = require('run-sequence'),
    config = require('../configuration');

var port = config.test.server.port || 3000;
var server = 'http://localhost:' + port + '/';
var listening;

var app = express();
app.use(express.static(config.test.server.rootdir));

gulp.task('browser:global', function () {
    var stream = phantomjs({ reporter: 'spec' });
    stream.write({ path: server + 'global.html' });
    stream.end();
    return stream;
});

gulp.task('testserver:close', function (callback) {
    console.log('Test Server stopped');
    listening.close();
    callback();
});

module.exports = exports = function (callback) {
    listening = app.listen(port, function () {
        console.log('Test Server started on port ', port);
        runSequence('browser:global', 'testserver:close', callback);
    });
};

The task uses a global variable, listening, to store the server reference. This is used within the testserver:close task to close the connection and make the server quit. The main task sets up the server to listen. When the server is listening, it runs the test suites in order. I’ve only got one test suite right now. If I were expanding this to other test suites, I would add a task for each test suite and then add that task to the runSequence call before the testserver:close task.
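
For example, adding a hypothetical AMD test page would mean one more task that mirrors the existing one, plus one more entry in the sequence:

gulp.task('browser:amd', function () {
    var stream = phantomjs({ reporter: 'spec' });
    stream.write({ path: server + 'amd.html' });
    stream.end();
    return stream;
});

// ...and the sequence becomes:
// runSequence('browser:global', 'browser:amd', 'testserver:close', callback);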

I’ve linked the task into my main Gulpfile.js like this:

var gulp = require('gulp');

gulp.task('lint', require('./gulp/tasks/lint'));
gulp.task('mocha', require('./gulp/tasks/mocha'));

gulp.task('build:testserver', [ 'build' ], require('./gulp/tasks/buildserver'));
gulp.task('browser-tests', [ 'build:testserver' ], require('./gulp/tasks/browsertests'));

gulp.task('build', [ 'lint', 'mocha' ], require('./gulp/tasks/build'));
gulp.task('test', ['lint', 'mocha', 'browser-tests']);
gulp.task('default', ['build', 'test']);

The task is stored in gulp/tasks/browsertests.js. This sequencing ensures that the main test suite and the linter run first, then the library is built, and then the browser tests run. Output should now look like this:

[Screenshot: PhantomJS/Mocha spec reporter output in the terminal]

There is a small problem – the server continues to run (and the process never exits) if the browser tests fail. However, I find that reasonable, since I will want to load the failing test up in a web browser to investigate anyway.
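
If you did want the server to stop even when the tests fail, one option (a sketch – it relies on run-sequence passing the failing task’s error to its final callback) is to close the server in that callback instead of in a separate task:

module.exports = exports = function (callback) {
    listening = app.listen(port, function () {
        console.log('Test Server started on port ', port);
        runSequence('browser:global', function (error) {
            listening.close();
            callback(error);
        });
    });
};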

Browser Testing with PhantomJS and Mocha – Part 1

If you have been following along for the past couple of weeks, you will know that I’ve been writing a browser library recently. I’m writing the library in ES2015 and then transpiling it into UMD.

A sidebar on bugs in BabelJS
I did bump into a bug when transpiling into the UMD module format. The bug is pretty much across the module transforms, and manifests as a ‘Maximum Call Stack Exceeded’ error with _typeof. The bug is T6777. There is a workaround, which is to add a typeof undefined; line at the top of your library.

Back to the problem at hand. I’ve already used Mocha to test my library and I use mocks to attempt to exercise the code, but at some point you have to run it in a browser. There are two steps to this. The first is to set up a test system that runs in a browser, and the second is to run the test system through a headless browser so it can be automated. Let’s tackle the first step today.

My library is a client library to access a remote AJAX environment. I want the library to use either a provided URL or the URL the page was loaded from – whichever is appropriate. As a result, I need to load the files over HTTP – loading from a file:// URL isn’t good enough. To handle this, I’m going to:

  • Create a local test server
  • Load the files into a static service area
  • Run the pages in a browser

To this end, I’ve got a Gulp task that builds my server:

var gulp = require('gulp'),
    babel = require('gulp-babel'),
    concat = require('gulp-concat'),
    sourcemaps = require('gulp-sourcemaps'),
    config = require('../configuration');

module.exports = exports = function() {
    return gulp.src(config.source.files)
        .pipe(sourcemaps.init())
        .pipe(concat('MyLibrary.js'))
        .pipe(babel())
        .pipe(sourcemaps.write('.'))
        .pipe(gulp.dest(config.destination.directory));
};

I store my gulp tasks in a separate file – one file per task. I then require the file in the main Gulpfile.js:

var gulp = require('gulp');

gulp.task('build', require('./gulp/tasks/build'));

I now have a MyLibrary.js file and a MyLibrary.js.map file in the dist directory. Building the server area is just as easy:

var gulp = require('gulp'),
    config = require('../configuration');

// Builds the server.rootdir up to service test files
module.exports = exports = function() {
    return gulp.src(config.test.server.files)
        .pipe(gulp.dest(config.test.server.rootdir));
};

My configuration.js exposes a list of files like this:

module.exports = exports = {
    source: {
        files: [ 'src/**/*.js' ]
    },
    destination: {
        directory: 'dist'
    },
    test: {
        mocha: [ 'test/**/*.js' ],
        server: {
            files: [
                'browser-tests/global.html',
                'browser-tests/global-tests.js',
                'dist/MyLibrary.js',
                'dist/MyLibrary.js.map',
                'node_modules/chai/chai.js',
                'node_modules/mocha/mocha.css',
                'node_modules/mocha/mocha.js'
            ],
            port: 3000,
            rootdir: 'www'
        }
    }
};

Take a look at the test.server.files object. That contains three distinct sections – the browser test files (more on those in a moment), the library files under test and the testing libraries. You should already have these installed, but if you don’t, you can install them:

npm install --save-dev mocha chai

I will have a www directory with all the files I need in it once I run the gulp buildserver command. Next, I need a server. I use ExpressJS for this. First off, install ExpressJS:

npm install --save-dev express

Note that this is a dev install – not a production install, hence the --save-dev flag. I want express listed in devDependencies. Now, on to the server code, which I place in testserver.js:

var express = require('express'),
    config = require('./gulp/configuration');

var app = express();
app.use(express.static(config.test.server.rootdir));
app.listen(config.test.server.port || 3000, function() {
    console.info('Listening for connections');
});

This is about the most basic configuration for an ExpressJS server you can get. I’m serving static pages from the area I’ve built. That’s enough infrastructure – now, how about running tests? I’ve got two files in my files list that I have not written yet. The first is a test file called global-tests.js and the other is an HTML file that sets up the test run – called global.html. The global-tests.js file is a pretty normal Mocha test suite:

/* global describe, it, chai, MyLibrary */
var expect = chai.expect;

describe('MyLibrary.Client - Global Browser Object', function () {
    it('should have a MyLibrary global object', function () {
        expect(MyLibrary).to.be.a('object');
    });

    it('should have a MyLibrary.Client method', function () {
        expect(MyLibrary.Client).to.be.a('function');
    });

    it('should create a Client object when run in a browser', function () {
        var client = new MyLibrary.Client();
        expect(client).to.be.an.instanceof(MyLibrary.Client);
    });

    it('should set the url appropriately', function () {
        var client = new MyLibrary.Client();
        expect(client.url).to.equal('http://localhost:3000');
    });

    it('should set the environment appropriately', function () {
        var client = new MyLibrary.Client();
        expect(client.environment).to.equal('web/globals');
    });
});

There are a couple of changes. Firstly, this code is going to run in the browser, so you must write your tests for that environment. Secondly, it expects that the test framework is established already – the chai library must be pre-loaded. One other thing: this is a minimal set of tests. The majority of the testing is done inside my standard Mocha test run. As long as your tests exercise all paths within the code across the test suites (both the standard Mocha tests and the browser tests), you will be ok. I only test things that need the browser in order to test them.

The global.html test file sets up the tests, loads the appropriate libraries and then executes the tests:

<!DOCTYPE html>
<html>

<head>
    <title>Mocha Test File: Global Library Definition</title>
    <meta charset="utf-8">
    <link rel="stylesheet" href="mocha.css">
</head>

<body>
    <div id="mocha"></div>
    <script src="mocha.js"></script>
    <script src="chai.js"></script>
    <script>
        mocha.setup('bdd');
        mocha.reporter('html');
    </script>
    <script src="MyLibrary.js"></script>
    <script src="global-tests.js"></script>
    <script>
        mocha.run();
    </script>
</body>

</html>

I intend to write test files that cover the global object version, the AMD module definition and browserify, to ensure that the library runs in all environments. Each environment will have its own HTML file and test suite file. I can include as many of these sets as I want.

Running the tests

Running the tests at this stage is a two-step process. First, you start the server:

node testserver.js

Secondly, you browse to http://localhost:3000/global.html – note the initiator for your test suite is the HTML file. If you have done everything properly, the tests will just work:

[Screenshot: the Mocha HTML reporter showing the tests passing in the browser]

If things don’t work, you can use Developer Tools to figure out what is going on and correct the problem, then re-run the tests. Since this is an ES2015 project, there are some things that may require a polyfill. You can provide your own (mine only needs a polyfill for Object.assign – a matter of a couple of dozen lines of code), or you can use a major ES2015 polyfill like core-js – just ensure you load the polyfill in your test environment. This is also a great pointer to ensure your library has the right dependencies listed and that you have documented your requirements for the browser.
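
As an illustration of the size involved, here is a minimal sketch of an Object.assign polyfill (trimmed down, not spec-complete) that would be loaded in the test page before MyLibrary.js:

if (typeof Object.assign !== 'function') {
    Object.assign = function (target) {
        'use strict';
        if (target === null || target === undefined) {
            throw new TypeError('Cannot convert undefined or null to object');
        }
        var to = Object(target);
        for (var i = 1; i < arguments.length; i++) {
            var source = arguments[i];
            if (source === null || source === undefined) {
                continue;
            }
            for (var key in source) {
                if (Object.prototype.hasOwnProperty.call(source, key)) {
                    to[key] = source[key];
                }
            }
        }
        return to;
    };
}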

In the next article (Happy New Year!) I will integrate this into automated testing so that you don’t have to open a browser to do this task.

An ECMAScript 6, CommonJS and RequireJS Project

I’ve been writing a lot of CommonJS code recently – the sort that you would include in Node projects on the server side. I’ve recently had a thought that I would like to do a browser-side project. However, how do you produce a browser library that can be consumed by everyone?

The different styles of modules

Let’s say I have a class Client(). If I were operating in Node or Browserify, I’d do something like this:

var Client = require('my-client-package');

var myclient = new Client();

This is called CommonJS format. I like it – it’s nice and clean. However, that’s not the only way to potentially consume the library. You can also bring it in with RequireJS:

define(['Client'], function(Client) {
    var myclient = new Client();

});

Finally, you could also register the variable as a global and bring it in with a script HTML tag:

<script src="node_modules/my-client-package/index.js"></script>
<script>
    var client = new Client();
</script>

You can find a really good writeup of the differences between CommonJS and AMD in an article by Addy Osmani.

Three different techniques. If we are honest, they are all valid and have their place, although you might have your favorite. As a library developer, I want to support the widest range of JavaScript developers, which means supporting all three styles of code. This brings me to the UMD format. I’ve nicknamed it the “Ugly Module Definition”, and you can see why when you look at the code:

(function (root, factory) {
    if (typeof define === 'function' && define.amd) {
        // AMD. Register as an anonymous module.
        define(['b'], function (b) {
            return (root.returnExportsGlobal = factory(b));
        });
    } else if (typeof module === 'object' && module.exports) {
        // Node. Does not work with strict CommonJS, but
        // only CommonJS-like environments that support module.exports,
        // like Node.
        module.exports = factory(require('b'));
    } else {
        // Browser globals
        root.returnExportsGlobal = factory(root.b);
    }
}(this, function (b) {
    // Use b in some fashion

    return {
        // Your exported interface
    };
}));

Seriously, could this code be any uglier? I like writing my code in ECMAScript 2015, also known as ES6. So, can I write a class in ES6 and then transpile it to the right format? Further, can I set up a project that has everything I need to test the library? It turns out I can. Here is how I did it.

Project Setup

These days, I tend to create a directory for my project, put some stuff in it and then push it up to a newly created GitHub repository. I’m going to assume you have already created a GitHub user and then created a GitHub repository called ‘my-project’. Let’s get started:

mkdir my-project
cd my-project
git init
git remote add origin https://github.com/myuser/my-project
npm init --yes
git add package.json
git commit -m "First Commit"
git push -u origin master

Perhaps unsurprisingly, I have a PowerShell script for this since I do it so often. All I have to do is remember to check in things along the way and push the repository to GitHub at the end of my work.

My Code

I keep my code in the src directory. The tests are in the test directory. The distribution file is in the dist directory. Let’s start by looking at my src/Client.js code:

export default class Client {
    constructor(options = {}) {
    }
}

Pretty simple, right? The point of this is not to concentrate on code – it’s about the build process. I’ve also got a test in the test/Client.js file:

/* global describe, it */

// Testing Library Functions
import { expect } from 'chai';

// Objects under test
import Client from '../src/Client';

describe('Client.js', () => {
    describe('constructor', () => {
        it('should return a Client object', () => {
            let client = new Client();
            expect(client).to.be.instanceof(Client);
        });
    });
});

I like to use Mocha and Chai for my tests, so this is written with that combination in mind. Note the global comment on the first line – that prevents Visual Studio Code from putting green squiggles underneath the mocha globals.

Build Modules

I decided some time along the way that I won’t use gulp or grunt unless I have to. In this case, I don’t have to. My toolset is just npm scripts on top of Babel (with the ES2015 preset and the UMD module transform), ESLint, Mocha and Chai – all of which appear in the devDependencies below.

Let’s take a look at my package.json:

{
    "name": "my-project",
    "version": "0.1.0",
    "description": "A client library written in ES6",
    "main": "dist/Client.js",
    "scripts": {
        "pretest": "eslint src test",
        "test": "mocha",
        "build": "babel src --out-file dist/Client.js --source-maps"
    },
    "keywords": [
    ],
    "author": "Adrian Hall <adrian@shellmonger.com>",
    "license": "MIT",
    "devDependencies": {
        "babel-cli": "^6.3.17",
        "babel-plugin-transform-es2015-modules-umd": "^6.3.13",
        "babel-preset-es2015": "^6.3.13",
        "babel-register": "^6.3.13",
        "chai": "^3.4.1",
        "eslint": "^1.10.3",
        "mocha": "^2.3.4"
    },
    "babel": {
        "presets": [
            "es2015"
        ],
        "plugins": [
            "transform-es2015-modules-umd"
        ]
    }
}

A couple of regions need to be discussed here. Firstly, I’ve got two basic npm commands I can run:

  • npm test will run the tests
  • npm run build will build the client library

I’ve got a bunch of devDependencies to implement this build system. Also note the “babel” section – this is what would normally go in the .babelrc – you can also place it in your package.json file.
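
For reference, the equivalent stand-alone .babelrc would contain just that section:

{
    "presets": [ "es2015" ],
    "plugins": [ "transform-es2015-modules-umd" ]
}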

The real secret sauce here is the build script. This uses a module transform to create a UMD format library from your ES6 code. You don’t even have to worry about reading that ES5 code – it’s ugly, but it works.

Editor Files

I use Visual Studio Code, so I need a jsconfig.json file in the root of my project:

{
    "compilerOptions": {
        "target": "ES6"
    }
}

This tells Visual Studio Code to use ES6 syntax. I’m hopeful the necessity of this will go away soon. I’m also hoping that I’m not the only one contributing to this repository. Collaboration is great, but you want to set things up so that people coming new to the project can pick up your coding style straight away. I include a .editorconfig file as well:

root = true

[*]
charset = utf-8
indent_style = space
indent_size = 4
end_of_line = lf
insert_final_newline = true
trim_trailing_whitespace = true

[*.json]
insert_final_newline = false

You can read about editorconfig files on their site. This file is used by a wide variety of editors – if your editor is on the list, you should also install the plugin.

ESLint Configuration

I have a .eslintrc.js file at the root of the project. I’ve got that in a gist since it is so big and I just cut and paste it into the root directory.

Test Configuration

My test directory is different – it expects to operate within mocha, so I need an override to tell eslint that this is all about mocha. Here is the test/.eslintrc.js file:

module.exports = exports = {
    "env": {
        "es6": true,
        "mocha": true
    }
};

I also need a mocha.opts file to tell mocha that the tests are written in ES6 format:

--compilers js:babel-register

Wrapping up

You will need a dist directory. I place a README.md file in there that describes the three use cases for the library – CommonJS, AMD and globals. That README.md file is really only there to ensure the dist directory exists when you clone the repository.

I also need to add a README.md at the root of the project. It’s required if I intend to publish the project to the NPM repository. Basic instructions on how to install and use the library are de rigueur, but in reality you can put whatever you want in there.

I have not addressed jsdoc yet – you should be doing it in your source files, and it should be a postbuild step in your package.json file.
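
As a sketch of how that could look: npm runs a post&lt;name&gt; script automatically after the named script, so a hypothetical documentation step (assuming the jsdoc CLI is installed as a devDependency) would slot in like this:

    "scripts": {
        "pretest": "eslint src test",
        "test": "mocha",
        "build": "babel src --out-file dist/Client.js --source-maps",
        "postbuild": "jsdoc src --recurse --destination docs"
    }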

You can now run the tests and build through the npm commands and get a library that can be used across the board.

Testing async functions with mocks and mocha in JavaScript

I’ve recently gone down the road of testing all my code using Mocha and Chai, and I aim for 100% code coverage. My current library makes HTTP connections to a backend and I’m hoping to use node-fetch for that. But how do you test a piece of asynchronous code that uses promises or callbacks?

Let’s take a look at my code under test:

import fetchImpl from 'node-fetch';

export default class Client {
    constructor(baseUrl, options = {}) {
        const defaultOptions = {
            fetch: fetchImpl
        }
        
        this.prvOptions = Object.assign({}, defaultOptions, options);
        this.prvBaseUrl = baseUrl;
    }
    
    fetch(relativeUrl, options = {}) {
        const defaultOptions = {
            method: 'GET'
        };

        let fetchOptions = Object.assign({}, defaultOptions, options);
        return this.prvOptions.fetch(`${this.prvBaseUrl}${relativeUrl}`, fetchOptions);
    }
}

This is a much shortened version of my code, but the basics are there. Here is the important thing – I set a default option that holds the fetch implementation. It’s set to the “real” version by default (the fetch: fetchImpl entry in defaultOptions), so if I don’t override the implementation, I get the node-fetch version.

Later on, I call client.fetch('/foo'). The client library uses my provided implementation of fetch or the default one if I didn’t specify.

All this logic allows me to substitute (or mock) the fetch command. I don’t really want to test the functionality of fetch – I just want to ensure I am calling it with the right parameters.
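
A quick sketch of the two constructions side by side (the base URL is illustrative):

const realClient = new Client('https://foo.a.com');                       // uses node-fetch
const testClient = new Client('https://foo.a.com', { fetch: mockfetch }); // uses my mock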

Now for the tests. My first problem is that I have asynchronous code here. fetch returns a Promise, and Promises are asynchronous. That means I can’t just write tests like I was doing before – they would fail because the response isn’t available during the test. The mocha library helps by providing a done callback. The general pattern is this:

    describe('#fetch', function() {
        it('constructs the URL properly', function(done) {
            client.fetch('/foo').then((response) => {
                    expect(response.url).to.equal('https://foo.a.com/foo');
                    done();
                })
                .catch((err) => {
                    done(err);
                });
        });
    });

You might remember the .then/.catch pattern from the standard Promise documentation. Mocha provides a callback (generally called done). You call the callback when you are finished. If you encountered an error, you call the callback with the error. Mocha uses this to deal with async tests.

Note that I have to handle both the .then() and the .catch() clause. Don’t expect Mocha to call done for you. Ensure all code paths in your test actually call done appropriately.
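
Mocha also accepts a returned Promise in place of the done callback, which handles both code paths with less ceremony – the same test could be sketched like this:

    it('constructs the URL properly', function () {
        return client.fetch('/foo').then((response) => {
            expect(response.url).to.equal('https://foo.a.com/foo');
        });
    });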

This still has me calling client.fetch without an override, and I don’t want to do that. I’ve got this ability to swap out the implementation. I have a mockfetch.js file that looks like this:

export default function mockfetch(url, init) {
    return new Promise((resolve, reject) => {
        resolve({url: url, init: init});
    });
}

The only thing the mockfetch function does is create a Promise that resolves immediately with the parameters that were passed in. Now I can finish my test:

    describe('#fetch', function() {
        let clientUrl = 'https://foo.a.com';
        let clientOptions = {fetch: mockfetch};
        let client = new Client(clientUrl, clientOptions);

        it('constructs the URL properly', function(done) {
            client.fetch('/foo')
                .then((response) => {
                    expect(response.url).to.equal('https://foo.a.com/foo');
                    done();
                })
                .catch((err) => {
                    done(err);
                });
        });
    });

Note that my mockfetch does not return anything resembling a real response – it’s not even the same object type or shape. That’s actually ok because it’s designed for what I need it to do – respond appropriately for the function under test.
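
Because the mock echoes back both the url and the init options, the same pattern supports other assertions too – for example, a hypothetical test that the default method is applied:

    it('uses GET as the default method', function (done) {
        client.fetch('/foo')
            .then((response) => {
                expect(response.init.method).to.equal('GET');
                done();
            })
            .catch((err) => done(err));
    });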

There are three things here:

  1. Construct your libraries so that you can mock any external library calls
  2. Use the Mocha “done” parameter to handle async code
  3. Create mock versions of those external library calls

This makes testing async code easy.

Mocha Tests and ECMAScript 2015

Recently, I tried my hand at testing a library with Mocha and Chai. It went rather well, and I’ve just about integrated testing into my day-to-day life. I won’t say I’m perfect, and the people I work with will attest that they sometimes need to remind me to write tests. Today my problem is testing ECMAScript 2015 code.

I have a nice API for parsing a URI. It’s based on work by Steven Levithan from way back in 2007.  I wanted to bring it up to date and re-write it as a class in ECMAScript 2015.  I won’t bore you with the code – it’s relatively easy.  I obviously want to write tests for this.  Here is the test code:

///<reference path="../typings/mocha/mocha.d.ts"/>
///<reference path="../typings/chai/chai.d.ts"/>
import {expect} from 'chai';
import URL from '../src/url';

describe('URL', function () {
    describe('.constructor()', function () {
        it('should accept a simple URL', function () {
            var e = new URL('http://mywebsite.com/');
            expect(e).to.be.an.instanceof(URL);
        });

        it('should accept a loose URL', function () {
            var e = new URL('yahoo.com/search');
            expect(e).to.be.an.instanceof(URL);
        });

        it('should accept a strict URL', function () {
            var e = new URL('http://yahoo.com/search/', true);
            expect(e).to.be.an.instanceof(URL);
        });
    });
});

Note the import statements above the tests. They tell me this is ES2015 code and not regular JavaScript. So what happens when you try to run mocha?

[Screenshot: mocha failing with syntax errors because Node.js cannot parse the ES2015 import statements]

The problem is really that Node.js doesn’t support all the ES2015 syntax yet. I need to transpile. I can do this one of two ways. The obvious one is to transpile the code into a separate directory and then run the mocha tests on that. This is really unsatisfactory. Firstly, I’m going to have to create a gulp job for this to transpile and then run the unit tests because otherwise I’ll forget. Secondly, it’s really increasing the footprint. I can’t just quickly run mocha with an argument to run one test – I have to run a full compile.

That leads me to the second way: I can run Mocha with a transpiler. First, I have to install a Mocha transpiler plugin. That’s another npm package:

npm install --save-dev mocha-babel

Make sure you use the same transpiler as you would with your code. If you use traceur normally, then install mocha-traceur instead. Now I can run the tests with a command line argument:

mocha --compilers js:mocha-babel

This will run all the tests on my ES2015 code, transpiling on the fly for me. I can now place this in my package.json as follows:

  "scripts": {
    "test": "mocha --compilers js:mocha-babel"
  },

What about babel options? Well, you can create a file in the root of your project called mocha-babel.js which contains the options you want. For instance:

require('babel/register')({
  'presets': [ 'es2015' ],
  'plugins': [ 'class-properties' ]
});

The options are passed through to the Babel transpiler as-is, so make sure your options match the version you are using. There was a significant change in options between v5.x and v6.x of Babel.
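
For example, a sketch of the equivalent registration under Babel 6, where the package is babel-register and the class properties plugin is named transform-class-properties:

require('babel-register')({
  'presets': [ 'es2015' ],
  'plugins': [ 'transform-class-properties' ]
});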

Now, back to my developing!

Testing a NodeJS Library with Mocha and Chai

I’ve asserted before that I am not a “professional developer” partly because I don’t test. There are three things in testing that are important – a willingness to put in the time to learn a test framework, the writing of the tests and the adoption of a testing methodology. Today, I’m going to do all three for my latest project – a configuration framework for NodeJS that I am writing.

Testing Methodologies

Let’s start with the adoption of a testing methodology. One could write the code and then write some unit tests that test that code to make you feel good about releasing the library. It’s not really a methodology.

Test Driven Development is the first of the methodologies I can discuss. In TDD, you write the tests first – based on what the code is meant to do. This requires a level of design, of course. You get to write code that your library should run. Then you continually write code until all the tests pass. You are pretty well guaranteed to have 100% test coverage because you are coding against the tests. Once the tests pass, the code is complete.

TDD does fall down in a couple of areas – most notably where state comes into play. TDD is not a good fit for UI testing, for example. In the case of a library, your API is a contract – it either passes or fails. If you have enough tests to describe the API fully, then you’ve got a good test suite. In the land of UI development, however, there are corner cases. What if a user does something unexpected? One could assert that the UI is also a contract between a user and the program, but there are lots of things that can happen; including device differences, environment differences and so on that make this not so straight forward an answer.

BDD (which is Behaviour Driven Development) is a similar methodology but describes behaviours, not unit tests. For example, in my configuration example – TDD would test each method; BDD would test the act of producing a valid configuration.

There are other tests that you should consider aside from unit tests. You should definitely do some tests that are end-to-end (normally abbreviated as E2E). In my example, I want to support a set of common patterns for producing configurations, so I definitely want to test those situations.

Choice: TDD – the writing of unit tests and some E2E tests for the common patterns.

Testing Toolsets

This brings us to testing tools. In the NodeJS world, there are choices. I often got stuck on the implementation details of tests and that caused me to spin, eventually leading me to dropping testing because I just couldn’t decide. In general, you need to decide on two pieces – an assertion library and a test runner. Based on my prior research, I decided on Mocha and Chai. Mocha is the test runner and Chai is the assertion library. There is good information on each website, so I’m not going to go into detail. Instead I’m going to focus on setting up testing on my project.

Writing Tests

I’m using TypeScript and Visual Studio to generate all my code for this library I am writing. In my previous post, I set up the project and loaded it into Visual Studio. Today, my first step is to create a folder called test. Since I have the Node Tools for Visual Studio installed, I can right-click on the test folder and select Add > New Item… There is a Mocha UnitTest File as an option under Node.js in both a JavaScript and a TypeScript variety. I like to be able to run my build process without compilation, so my library is written in TypeScript, but the Gulpfile and unit tests are written in JavaScript:

[Screenshot: the Visual Studio Add New Item dialog showing the Mocha UnitTest File templates under Node.js]

I have not included Mocha or Chai in my project. Since I am in Visual Studio, I can expand the npm view in my Solution Explorer, right-click on the dev node and select Install new npm packages…

[Screenshot: the npm node in Solution Explorer with the Install New npm Packages… option]

Searching for Mocha and Chai is enough:

[Screenshot: searching for and installing the mocha and chai packages]

One of the neat things about this is the warning it gives you on Windows:

[Screenshot: the npm output warning on Windows, suggesting a run of npm dedupe]

Yes, you want to run npm dedupe. Fortunately, npm3 will get rid of this annoyance, but it isn’t the default release yet. Back to the test file. I’ve got a class – Configuration – that I want to test. It has a number of methods that I also want to test individually. Each method will have a number of tests associated with it. I’ve created a configuration.js file in the test directory. Mocha will run all of the JavaScript files in the test directory by default. Here is my initial code:

var expect = require('chai').expect,
    Source = require('../dist/Source');

describe('Source', function () {
    describe('.keys()', function () {
        // Tests for the Source.keys() method
    });

    describe('.type', function () {
        // Tests for the Source.type property
    });

    describe('.location', function () {
        // Tests for the Source.location property
    });

    describe('.get()', function () {
        // Tests for the Source.get() method
    });
});

The first line brings in the expect syntax from the Chai library. Chai supports three different assertion styles – should, expect and assert. They are mostly similar but do have some minor implementation differences. I like the readability of expect, so I’m going to use that. I also bring in my library under test. Finally, I describe the suite of tests I am going to run – the outer describe says I am testing the Source class and the inner describes say I am testing a particular method. You can nest as much as you want.

Writing the tests

Let’s take the type property. I try to think about the tests first. Here is my logic:

  • It is set by the constructor
  • It is read-only
  • It is a string

Here is my code:

    describe('.type', function () {
        it('should return a string', function () {
            var s = new Source('static');
            expect(s.type).to.be.a('string');
        });

        it('should be the same as the constructor value', function () {
            var s = new Source('static');
            expect(s.type).to.equal('static');
        });

        it('should be read-only', function () {
            var s = new Source('static');
            expect(function () { s.type = 'new-value'; }).to.throw(Error);
        });
    });

I find these tests to be highly readable. Each test case is self-contained – you could run any of these tests by itself and not worry about the state of the test system.

Running Tests

Before running tests, you need to have mocha installed globally so you can run it:

npm install -g mocha

Now I need a stub of my eventual implementation:

class Source {
    constructor(type: string, filename?: string) {
    }

    get type(): string {
        return null;
    }
}

export = Source;

Running mocha gets me a whole bunch of errors, but look at the top of the output:

[Screenshot: the top of the mocha output, listing the tests and their failures]

Now I can run mocha whenever I want. You will note that the stack trace from the assertion library is printed for each error. One of the things I like doing is working on “the next error” – you can do this easily with mocha -b:

[Screenshot: mocha -b stopping at the first failing test]

Integrating into the Build Workflow

I want to integrate testing into my workflow. There are two things I want to do here:

  1. Run npm test to test the project
  2. Run Mocha as part of my Gulp standard pipeline

Adding npm test support is easy – just add a “test” entry to the “scripts” section of the package.json file:

  "scripts": {
    "test": "mocha"
  },

Integrating into gulp is also easy. Use the gulp-mocha library:

var gulp = require('gulp'),
    mocha = require('gulp-mocha');

gulp.task('build', ['compile'], function () {
    return gulp.src('./test/**/*.js', { read: false })
        .pipe(mocha({ reporter: 'spec' }));
});

Here, my compile task compiles my code into the distribution area, ready for testing and usage.
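
For completeness, a sketch of what that compile task might look like if it used gulp-typescript (the compiler options here are illustrative):

var gulp = require('gulp'),
    ts = require('gulp-typescript');

gulp.task('compile', function () {
    var tsResult = gulp.src('./src/**/*.ts')
        .pipe(ts({ module: 'commonjs', target: 'ES5' }));
    return tsResult.js.pipe(gulp.dest('./dist'));
});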

Wrap Up

I’ve said a few times in the past that I need to learn testing techniques. Mocha and Chai make it easy. Now all I have to do is ingrain testing into my development world – write tests first and then code to the test. At least I have the tools and workflow to do this task properly.