Browser Testing with PhantomJS and Mocha – Part 2

Happy New Year!

Today I am going to complete the work of browser testing. In the last article, I introduced MochaJS in a browser, so you could run tests manually in a browser by setting up a test server and generating a static site. I am going to automate that task and take the browser out of the mix.

A big part of the process is PhantomJS – a headless browser that enables, among other things, automated browser testing without a visible browser window. There are plug-ins connecting it to most test frameworks, including Mocha and Jasmine.

Before I get to that, I need a strategy. My build process is driven by gulp: I run gulp test to build the library and run all the tests. I need a task that sets up a test web server, uses PhantomJS and Mocha to run the test suite (bailing on a failed test), and finally shuts down the test web server. I’ve already discussed the test server, but that version runs forever.

Fortunately for me, Mocha and PhantomJS are such a popular combination that there is a Gulp plug-in for the combo called gulp-mocha-phantomjs, which is really a thin wrapper around mocha-phantomjs. PhantomJS is bundled, so it should “just work”. I did have some trouble getting PhantomJS working on OS X El Capitan due to the security policies. To fix this, open System Preferences, then Security & Privacy. There is a setting to allow applications downloaded from Anywhere:

[Screenshot: macOS Security & Privacy – “Allow apps downloaded from: Anywhere”]

The Gulp task looks like this:

var gulp = require('gulp'),
    express = require('express'),
    phantomjs = require('gulp-mocha-phantomjs'),
    runSequence = require('run-sequence'),
    config = require('../configuration');

var port = config.test.server.port || 3000;
var server = 'http://localhost:' + port + '/';
var listening;

var app = express();
app.use(express.static(config.test.server.rootdir));

gulp.task('browser:global', function () {
    var stream = phantomjs({ reporter: 'spec' });
    stream.write({ path: server + 'global.html' });
    stream.end();
    return stream;
});

gulp.task('testserver:close', function (callback) {
    console.log('Test Server stopped');
    listening.close();
    callback();
});

module.exports = exports = function (callback) {
    listening = app.listen(port, function () {
        console.log('Test Server started on port ', port);
        runSequence('browser:global', 'testserver:close', callback);
    });
};

The task uses a global variable, listening, to store the server reference. The testserver:close task uses it to close the connection and make the server quit. The main task sets the server listening; once it is, the test suites run in order. I’ve only got one test suite right now. If I were expanding this to other test suites, I would add a task for each suite and insert it into the runSequence call before the testserver:close task.
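As a sketch of that expansion (the extra suite names and HTML files here are hypothetical), each suite gets a generated task, and the same list drives the runSequence call:

```javascript
// Hypothetical expansion: one browser:<suite> task per test page.
var suites = ['global', 'amd', 'browserify'];

suites.forEach(function (suite) {
    gulp.task('browser:' + suite, function () {
        var stream = phantomjs({ reporter: 'spec' });
        stream.write({ path: server + suite + '.html' });
        stream.end();
        return stream;
    });
});

module.exports = exports = function (callback) {
    // Build the ordered task list, then close the server at the end.
    var tasks = suites.map(function (suite) { return 'browser:' + suite; });
    listening = app.listen(port, function () {
        console.log('Test Server started on port ', port);
        runSequence.apply(null, tasks.concat('testserver:close', callback));
    });
};
```

Adding a new suite then becomes a one-line change to the suites array.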

I’ve linked the task into my main Gulpfile.js like this:

var gulp = require('gulp');

gulp.task('lint', require('./gulp/tasks/lint'));
gulp.task('mocha', require('./gulp/tasks/mocha'));

gulp.task('build:testserver', [ 'build' ], require('./gulp/tasks/buildserver'));
gulp.task('browser-tests', [ 'build:testserver' ], require('./gulp/tasks/browsertests'));

gulp.task('build', [ 'lint', 'mocha' ], require('./gulp/tasks/build'));
gulp.task('test', ['lint', 'mocha', 'browser-tests']);
gulp.task('default', ['build', 'test']);

The task is stored in gulp/tasks/browsertests.js. This sequencing ensures that the main test suite and the linter run first, then the library is built, and then the browser tests run. Output should now look like this:

[Screenshot: PhantomJS test run output from the spec reporter]

There is a small problem – the server continues to run (and the process never exits) if the browser tests fail. However, I find that reasonable since I will want to load the failing test up into a web browser to investigate if the tests fail.

Browser Testing with PhantomJS and Mocha – Part 1

If you have been following along for the past couple of weeks, you will know that I’ve been writing a browser library recently. I’m writing the library in ES2015 and then transpiling it into UMD.

A sidebar on bugs in BabelJS
I did bump into a bug when transpiling into the UMD module format. The bug affects pretty much all of the module transforms, and manifests as a ‘Maximum Call Stack Exceeded’ error involving _typeof. The bug is T6777. There is a workaround: add a typeof undefined; line at the top of your library.

Back to the problem at hand. I’ve already used Mocha to test my library and I use mocks to attempt to exercise the code, but at some point you have to run it in a browser. There are two steps to this. The first is to set up a test system that runs in a browser, and the second is to run the test system through a headless browser so it can be automated. Let’s tackle the first step today.

My library is a client library to access a remote AJAX environment. I want the library to use either a provided URL or the URL the page was loaded from – whichever is appropriate. As a result, I need to load the files over the Internet – loading from a file:// URL isn’t good enough. To handle this, I’m going to:

  • Create a local test server
  • Load the files into a static service area
  • Run the pages in a browser

To this end, I’ve got a Gulp task that builds my server:

var gulp = require('gulp'),
    babel = require('gulp-babel'),
    concat = require('gulp-concat'),
    sourcemaps = require('gulp-sourcemaps'),
    config = require('../configuration');

module.exports = exports = function() {
    return gulp.src(config.source.files)
        .pipe(sourcemaps.init())
        .pipe(concat('MyLibrary.js'))
        .pipe(babel())
        .pipe(sourcemaps.write('.'))
        .pipe(gulp.dest(config.destination.directory));
};

I store my gulp tasks in a separate file – one file per task. I then require the file in the main Gulpfile.js:

var gulp = require('gulp');

gulp.task('build', require('./gulp/tasks/build'));

I now have a MyLibrary.js file and a MyLibrary.js.map file in the dist directory. Building the server area is just as easy:

var gulp = require('gulp'),
    config = require('../configuration');

// Builds the server.rootdir up to service test files
module.exports = exports = function() {
    return gulp.src(config.test.server.files)
        .pipe(gulp.dest(config.test.server.rootdir));
};
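To expose this as gulp buildserver, the task gets registered in the main Gulpfile alongside build. This wiring is a hypothetical sketch following the same one-file-per-task layout, not code from the original post:

```javascript
// Gulpfile.js – hypothetical wiring for the buildserver task.
var gulp = require('gulp');

gulp.task('build', require('./gulp/tasks/build'));
// Build the library first, then copy the test files into the www area.
gulp.task('buildserver', ['build'], require('./gulp/tasks/buildserver'));
```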

My configuration.js exposes a list of files like this:

module.exports = exports = {
    source: {
        files: [ 'src/**/*.js' ]
    },
    destination: {
        directory: 'dist'
    },
    test: {
        mocha: [ 'test/**/*.js' ],
        server: {
            files: [
                'browser-tests/global.html',
                'browser-tests/global-tests.js',
                'dist/MyLibrary.js',
                'dist/MyLibrary.js.map',
                'node_modules/chai/chai.js',
                'node_modules/mocha/mocha.css',
                'node_modules/mocha/mocha.js'
            ],
            port: 3000,
            rootdir: 'www'
        }
    }
};

Take a look at the test.server.files object. That contains three distinct sections – the browser test files (more on those in a moment), the library files under test and the testing libraries. You should already have these installed, but if you don’t, you can install them:

npm install --save-dev mocha chai

I will have a www directory with all the files I need in it once I run the gulp buildserver command. Next, I need a server. I use ExpressJS for this. First off, install ExpressJS:

npm install --save-dev express

Note that this is a dev install – not a production install, hence the --save-dev flag. I want express listed in devDependencies. Now, on to the server code, which I place in testserver.js:

var express = require('express'),
    config = require('./gulp/configuration');

var app = express();
app.use(express.static(config.test.server.rootdir));
app.listen(config.test.server.port || 3000, function() {
    console.info('Listening for connections');
});

This is about the most basic configuration for an ExpressJS server you can get. I’m serving static pages from the area I’ve built. That’s enough infrastructure – now, how about running tests? I’ve got two files in my files list that I have not written yet. The first is a test file called global-tests.js and the other is an HTML file that sets up the test run, called global.html. The global-tests.js is a pretty normal Mocha test suite:

/* global describe, it, chai, MyLibrary */
var expect = chai.expect;

describe('MyLibrary.Client - Global Browser Object', function () {
    it('should have a MyLibrary global object', function () {
        expect(MyLibrary).to.be.a('object');
    });

    it('should have a MyLibrary.Client method', function () {
        expect(MyLibrary.Client).to.be.a('function');
    });

    it('should create a Client object when run in a browser', function () {
        var client = new MyLibrary.Client();
        expect(client).to.be.an.instanceof(MyLibrary.Client);
    });

    it('should set the url appropriately', function () {
        var client = new MyLibrary.Client();
        expect(client.url).to.equal('http://localhost:3000');
    });

    it('should set the environment appropriately', function () {
        var client = new MyLibrary.Client();
        expect(client.environment).to.equal('web/globals');
    });
});

There are a couple of changes. Firstly, this code runs in the browser, so you must write your tests for that environment. Secondly, it expects the test framework to be established already – the chai library must be pre-loaded. Note also that this is a minimal test load: the majority of the testing is done inside my standard Mocha test run. As long as your tests exercise all paths within the code across the two suites (the standard Mocha tests and the browser tests), you will be fine. I only test things that need a browser in order to be tested.

The global.html test file sets up the tests, loads the appropriate libraries and then executes the tests:

<!DOCTYPE html>
<html>

<head>
    <title>Mocha Test File: Global Library Definition</title>
    <meta charset="utf-8">
    <link rel="stylesheet" href="mocha.css">
</head>

<body>
    <div id="mocha"></div>
    <script src="mocha.js"></script>
    <script src="chai.js"></script>
    <script>
        mocha.setup('bdd');
        mocha.reporter('html');
    </script>
    <script src="MyLibrary.js"></script>
    <script src="global-tests.js"></script>
    <script>
        mocha.run();
    </script>
</body>

</html>

I intend to write test files that exercise the global object version, the AMD module definition, and browserify, to ensure that the library runs in all environments. Each environment will have its own HTML file and test suite file. I can include as many of these sets as I want.
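For example, the AMD variant’s bootstrap script would obtain the library through a module loader instead of a global. A rough sketch using RequireJS follows – the file names and paths here are assumptions, not files from the original post:

```javascript
// amd.html would load require.js instead of MyLibrary.js directly;
// this bootstrap script (paths assumed) then drives the test run.
mocha.setup('bdd');

require.config({
    baseUrl: '.',
    paths: {
        MyLibrary: 'MyLibrary'
    }
});

require(['MyLibrary'], function (MyLibrary) {
    // amd-tests.js would register its suites against this injected
    // module reference rather than a global object.
    mocha.run();
});
```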

Running the tests

Running the tests at this stage is a two-step process. First, you start the server:

node testserver.js

Secondly, you browse to http://localhost:3000/global.html – note the initiator for your test suite is the HTML file. If you have done everything properly, the tests will just work:

[Screenshot: Mocha test results rendered in the browser]

If things don’t work, you can use the Developer Tools to figure out what is going on, correct the problem, and re-run the tests. Since this is an ES2015 project, some things may require a polyfill. You can provide your own (mine only needs a polyfill for Object.assign – a matter of a couple of dozen lines of code), or you can use a comprehensive ES2015 polyfill like core-js – just ensure you load the polyfill in your test environment. This is also a great prompt to ensure your library has the right dependencies listed and that you have documented your browser requirements.
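As an illustration, a minimal Object.assign polyfill fits in a couple of dozen lines. This is a sketch covering the common case, not the full specification (no symbol keys, simplified edge-case handling):

```javascript
// Minimal Object.assign polyfill sketch – common case only.
if (typeof Object.assign !== 'function') {
    Object.assign = function (target) {
        if (target === null || target === undefined) {
            throw new TypeError('Cannot convert undefined or null to object');
        }
        var to = Object(target);
        for (var i = 1; i < arguments.length; i++) {
            var source = arguments[i];
            if (source === null || source === undefined) { continue; }
            for (var key in source) {
                // Copy own enumerable string-keyed properties only.
                if (Object.prototype.hasOwnProperty.call(source, key)) {
                    to[key] = source[key];
                }
            }
        }
        return to;
    };
}
```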

In the next article (Happy New Year!) I will integrate this into automated testing so that you don’t have to open a browser to do this task.

An ECMAScript 6 Search Box (Part 1)

Over the past week I’ve made decisions about:

You may not agree with all my decisions, but I’ve laid them out there for you. Now, how do I integrate all these various tools together so that they work within Visual Studio 2015 and an ASP.NET v5 application? That’s the point of this next set of posts. At the end I will have a GitHub repository with my minimal application in it for you to enjoy. More importantly, you will get to see all the steps I go through to get there with the tooling I have chosen.

Getting Started

Let’s start with an ASP.NET v5 Empty project.  I’ve covered this before.  However, here is a refresher.  Start with File -> New Project…

[Screenshot: Visual Studio 2015 – File -> New Project dialog]

Select the ASP.NET Application template and give it a name (unless you happen to like WebApplication1). Also make sure you select the other good options here – I’m storing my solution in a GitHub repository, which means I check the “Add to Source Control…” box and select a location within my cloned repository. (Sorry – I am not covering the mechanics of GitHub here.) Click Next when you are done.

[Screenshot: Visual Studio 2015 – Select ASP.NET 5 Template dialog]

From here, select the Empty ASP.NET 5 Application – this gives you the most minimal content in your project, so I can build everything from the ground up.

ASP.NET Scaffolding

My first step is to get a web service that I can use to serve up my minimal amount of pages. Since most of the work is going to be in the Javascript side, I’m going to set up an ASP.NET application that serves up static pages. You may want to do this if your application is mostly Javascript and all the interaction comes via a RESTful WebAPI. If you want to use a traditional ASP.NET MVC pattern, feel free.

My ASP.NET vNext package manager of choice, like everyone else’s, is NuGet. As a result I’ll be adding the ASP.NET packages to my project.json file. Here I am adding the static files assemblies plus BrowserLink and error pages:

{
    "webroot": "wwwroot",
    "version": "1.0.0-*",
    "dependencies": {
        "Microsoft.AspNet.Server.IIS": "1.0.0-beta3",
        "Microsoft.AspNet.StaticFiles": "1.0.0-beta3",
        "Microsoft.AspNet.Diagnostics": "1.0.0-beta3",
        "Microsoft.VisualStudio.Web.BrowserLink.Loader": "14.0.0-beta3"
    },
    "frameworks": {
        "aspnet50": { },
        "aspnetcore50": { }
    },
    "bundleExclude": [
        "node_modules",
        "bower_components",
        "**.kproj",
        "**.user",
        "**.vspscc"
    ],
    "exclude": [
        "wwwroot",
        "node_modules",
        "bower_components"
    ]
}

I’ve also got some simple changes to make to the Startup.cs file to enable all this:

using Microsoft.AspNet.Builder;
using Microsoft.Framework.DependencyInjection;
using Microsoft.AspNet.Diagnostics;

namespace BubbleSearch
{
    public class Startup
    {
        public void ConfigureServices(IServiceCollection services)
        {
        }

        public void Configure(IApplicationBuilder app)
        {
            app.UseBrowserLink();
            app.UseErrorPage(ErrorPageOptions.ShowAll);
            app.UseStaticFiles();
        }
    }
}

I’ve added a test.html page in the wwwroot directory – this is just to say “the web server is working and serving pages ok”. My HTML page is simple:

<!DOCTYPE html>

<html>
<head>
    <meta charset="utf-8" />
    <title>Test Page</title>
</head>
<body>
    <h1>Test</h1>
</body>
</html>

Running the project now should allow you to browse to the test.html page and show off the HTML you placed in there.

Adding Bootstrap and jQuery Libraries

My next work item is to get the Bootstrap and jQuery libraries installed. Initially I am going to place these in the wwwroot/lib directory – that may not last long term, but the idea is that wwwroot can be cleaned out whenever I want. All my source code needs to go through some sort of transformation before it can be run. First stop is to get the packages. My package manager of choice for Javascript is NPM, so I need a package.json file.

Visual Studio has direct support for NPM, so just right-click on the project, select Add -> New Item… and select the NPM Configuration File. This creates a template package.json file. The contents of my package.json file are relatively simple at this point:

{
    "version": "1.0.0",
    "name": "BubbleSearch",
    "private": true,
    "dependencies": {
        "bootstrap": "3.3.2",
        "jquery": "2.1.3"
    },
    "devDependencies": {
    }
}

Once I save this file, Visual Studio automatically downloads the packages. The packages are listed under the Dependencies\NPM node; on disk they live under the node_modules directory.

Initializing the build process

I now need to put the pieces of the two libraries in the right place. For that I want a task runner since this process will be a part of my build. I chose gulp as my task runner, so let’s work on the gulp configuration next.

Gulp doesn’t have an explicit item template, so just use Add -> New Item… and add a new Javascript file to the project called Gulpfile.js. It will be blank, but notice that the Gulp logo appears in the bottom right of the file editor (if you have Web Essentials installed). This is a good indication that you have the right filename.

To start with, I want two tasks that assist me with building. The idea is that I can run “gulp build” to build the system and “gulp clean” to clean out all the build artifacts. My initial Gulpfile.js looks like this:

var gulp = require('gulp'),
    del = require('del'),
    path = require('path');

var npmPath = './node_modules',
    buildPath = './wwwroot';

gulp.task('libraries:copy', function () {
});

gulp.task('libraries:clean', function (cb) {
    del([path.join(buildPath, 'lib/**')], cb);
});

gulp.task('build', [
    'libraries:copy'
]);

gulp.task('clean', [
    'libraries:clean'
]);

gulp.task('default', ['build']);

The whole point of the del package is to delete paths. In this case I’m deleting the wwwroot/lib path and everything below it in the libraries:clean task.

I also need to add gulp to the devDependencies section of my package.json file. In addition, I need to add the del package to handle the cleanup task. You don’t need to install the path plugin – it’s a part of Node and gulp runs on top of node.

    "devDependencies": {
        "gulp": "3.8.11",
        "del": "1.1.1"
    }

One thing I did notice is that Gulp did not run correctly within Visual Studio the first time. In the Task Runner Explorer, I saw the following error:

[Screenshot: Task Runner Explorer showing the gulp load error]

To fix this I started a PowerShell task as Administrator and ran the following:

npm install -g gulp

Open the Task Runner Explorer (right click on the Gulpfile.js and select it) and refresh the configuration. You should see something similar to this:

[Screenshot: Task Runner Explorer showing the gulp task list]

I’ve got several targets here. The default one should run the build target. The build target and the clean target will eventually become lists of pipelines and the libraries:clean and libraries:copy concern themselves with the libraries that I have installed. I still haven’t created a libraries:copy task, so let’s do that now:

var gulp = require('gulp'),
    del = require('del'),
    path = require('path'),
    merge = require('merge-stream');

var npmPath = './node_modules',
    buildPath = './wwwroot';

var libraries = [
    'bootstrap',
    'jquery'
];

gulp.task('libraries:copy', function () {
    var tasks = libraries.map(function (library) {
        return gulp.src(path.join(npmPath, library, 'dist/**'))
            .pipe(gulp.dest(path.join(buildPath, 'lib', library)));
    });
    return merge(tasks);
});

This took a little bit of working out. First of all I’ve added some new modules to my gulp settings – merge-stream needs to be loaded, so place that in the devDependencies of your package.json file. I’ve defined where NPM deposits the libraries, where I want them afterwards and which libraries to include in the process.

I am using the Array.map function in the libraries:copy task to iterate over the list of libraries. For each one I am going to utilize a simple copy pipeline to copy the contents of the dist directory to the wwwroot/lib directory. This returns an array. I then use the new merge-stream plugin to merge all these different tasks into one task.

An alternate way to write this would have been as follows:


gulp.task('libraries:install-bootstrap', function () {
    return gulp.src('node_modules/bootstrap/dist/**')
        .pipe(gulp.dest('wwwroot/lib/bootstrap'));
});
gulp.task('libraries:install-jquery', function () {
    return gulp.src('node_modules/jquery/dist/**')
        .pipe(gulp.dest('wwwroot/lib/jquery'));
});
gulp.task('libraries:copy', [ 'libraries:install-jquery', 'libraries:install-bootstrap' ]);

Now expand that to 20 libraries. I hate boilerplate code in most cases. By doing this with an extra plugin and an array I just have to add the library to the array and re-build.

If you have done everything properly then you can do two things in the Task Runner Explorer.

  1. Right-click on Build or Default and select Run – the wwwroot\lib directory will be populated with the two libraries.
  2. Right-click on Clean and select Run – the wwwroot\lib directory will be removed.

Once you have built, here is what your wwwroot tree should look like:

[Screenshot: wwwroot directory tree after running the build]

I’ve uploaded this code to my GitHub repository for you to review. Note that it is the complete code, so check out the other articles in the series that cover everything. In the next article I will convert my BubbleSearch code to ECMAScript 6 and integrate some more of the tool chain.

Web Dev Tools 101: Testing

You were going to test your application, weren’t you? Until recently, I was the kind of person who ran my application in Google Chrome and Internet Explorer on each little piece of functionality I developed. If that functionality ran, then I called the whole application good. This has all sorts of problems for the serious developer. If you are a serious developer then you need to be thinking about testing.

TL;DR

There are several pieces to testing. Here is what I’m going to use:

  • Test runner: Karma
  • Test framework: Jasmine
  • Assertion library: Chai (when Jasmine’s built-in matchers aren’t enough)
  • Headless browser: PhantomJS

Why do I care?

In the bad old days, Javascript programmers got a really bad rap as bad programmers. The code wasn’t efficient or modular, there were lots of bad practices, and there was practically zero automated testing. Then the environment grew up a bit, and now we have all this infrastructure available to us. This includes testing environments.

Let’s be honest: you don’t want to test your code – you NEED to test your code. There are two ways of doing that: the “ad-hoc” method, where you run your application, click around, and see if it fails; and the “rigorous” method, where you write unit and UI tests, write the unit tests before you write your code, and run them before checking in your code.

Fortunately, there are a lot of choices when it comes to testing libraries these days. You need a test runner (the thing you integrate into your build process for testing), a test framework (a format that you write all your tests in), sometimes an assertion library, and something to emulate a browser so you don’t have to.

Test Runners

There are really only two test runners in this category, and only one real contender – so good news there. The two contenders are Karma (you should use this) and Chutzpah (erm – no). Here is what I was looking for in a test runner:

  1. Support on grunt and gulp: You want to integrate testing into your process.
  2. Support for device testing: You want to test on real devices and real browsers.
  3. Support for headless testing: You don’t want to test on real devices all the time.
  4. Test Coverage Reporting: You want to know how much is tested and get better over time.
  5. Visual Studio Support: I like Visual Studio – this is important to me!

Karma

Produced by a collaboration between Google and the Angular team, Karma (previously Testacular – not a good name) is pretty much the de facto standard in test runners. It has plugins for grunt and gulp, so it can be easily integrated into your workflow. It works with Jasmine, Mocha and QUnit – the three major test frameworks (more on these below). It supports PhantomJS and SlimerJS (the headless browsers – more on these below) and can drive multiple real browsers and devices. It can also integrate with Istanbul – a code coverage tool. There is even a Visual Studio Test Adapter.

Indeed, there is really very little down side to Karma.
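To make that concrete, a minimal karma.conf.js for the kind of setup described here might look like the following. The file globs are placeholders, and this is a sketch rather than a drop-in config – it assumes karma, karma-jasmine, karma-phantomjs-launcher and karma-coverage are installed:

```javascript
// karma.conf.js – minimal sketch; file patterns are placeholders.
module.exports = function (config) {
    config.set({
        frameworks: ['jasmine'],
        files: [
            'src/**/*.js',
            'test/**/*.spec.js'
        ],
        preprocessors: {
            // Instrument the sources so Istanbul can report coverage.
            'src/**/*.js': ['coverage']
        },
        reporters: ['progress', 'coverage'],
        browsers: ['PhantomJS'],
        singleRun: true
    });
};
```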

Chutzpah

In the interest of being balanced, here is another option. Chutzpah seems to have been written with Visual Studio in mind. It comes with a Visual Studio Test Adapter from the author (rather than a third party), integrates with Jasmine, Mocha and QUnit and has support for PhantomJS.

And there it stops. No gulp/grunt integration. No SlimerJS support. No remote device support. Those are really important things to me. I just don’t think I can support a test runner without them.

Test Frameworks

Test frameworks are what take your test suite and actually execute the tests. There are three really popular ones – Jasmine, Mocha and QUnit – and a host of others. All of them come with an assertion library, but you can use your own as well if you want (something like should.js, expect.js, or Chai). These assertion libraries just improve the test framework – they are not required.

Jasmine

How about this for a Jasmine test description:

describe("The 'toBe' matcher compares with ===", function() {
  it("and has a positive case", function() {
    expect(true).toBe(true);
  });

  it("and can have a negative case", function() {
    expect(false).not.toBe(true);
  });
});

I can actually read this! It’s Javascript and it’s semantic. Those of you who have been following me for a while know I love elegant Javascript. When you can read it, you’ve won. As one would expect, there are a whole host of assertions out of the box but you can integrate with an assertion library as well (I like Chai, for reference).

So what’s the downfall of this library? Jasmine is targeted at Behavior Driven Development (BDD). I can see it being used for Test Driven Development (TDD) as well, though. QUnit is designed for unit testing, so it’s more in the TDD camp. Yes, I’m splitting hairs.

Mocha

Mocha is sort of a very flexible version of Jasmine and QUnit. Less is done for you, but it’s more flexible as a result of all the plug-ins available. There is no built-in assertion library (use Chai) or spy framework (use sinon.js). It can be configured for BDD, TDD, or both. Basically, if Jasmine or QUnit can’t do something for you and you would find yourself reaching for the other tool, it’s time to move to Mocha.

The tests for Mocha are very similar to Jasmine:

var assert = require("assert");
describe('Array', function(){
  describe('#indexOf()', function(){
    it('should return -1 when the value is not present', function(){
      assert.equal(-1, [1,2,3].indexOf(5));
      assert.equal(-1, [1,2,3].indexOf(0));
    });
  });
});

Note how we have to bring in the assert library first. One isn’t built in, so that’s more scaffolding. Otherwise, it should look very similar to the Jasmine test case.
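Since Mocha leaves spies to sinon.js, it is worth seeing what a spy actually does. This toy version – the core idea only, not sinon’s real API – wraps a function and records each call so a test can assert on it afterwards:

```javascript
// A toy spy, to illustrate what a spy framework records for you.
// sinon.js provides a far richer version (call args, return values,
// stubbing); this is just the essential mechanism.
function spy(fn) {
    function wrapped() {
        // Record the arguments of every call.
        wrapped.calls.push(Array.prototype.slice.call(arguments));
        return fn ? fn.apply(this, arguments) : undefined;
    }
    wrapped.calls = [];
    return wrapped;
}

var logger = spy();
logger('saved', 42);
logger('deleted');

console.log(logger.calls.length);  // 2
console.log(logger.calls[0]);      // [ 'saved', 42 ]
```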

QUnit

QUnit is done by the same fine people who brought you jQuery. It’s got plenty of tutorial and documentation material and fine examples. It’s got support for gulp and grunt and a whole host of plugins to handle BDD (which isn’t handled out of the box) and PhantomJS. A typical QUnit test looks like this:

QUnit.test("prettydate basics", function( assert ) {
    var now = "2008/01/28 22:25:00";
    assert.equal(prettyDate(now, "2008/01/28 22:24:30"), "just now");
    assert.equal(prettyDate(now, "2008/01/28 22:23:30"), "1 minute ago");
    assert.equal(prettyDate(now, "2008/01/28 21:23:30"), "1 hour ago");
    assert.equal(prettyDate(now, "2008/01/27 22:23:30"), "Yesterday");
    assert.equal(prettyDate(now, "2008/01/26 22:23:30"), "2 days ago");
    assert.equal(prettyDate(now, "2007/01/26 22:23:30"), undefined);
});

My major problem with this is that it just isn’t as readable as the Jasmine version. I like the level of documentation and help you can get with QUnit though.

Headless Browsers

At some point you are going to want to test on a real browser engine. There are basically three flavors of browser – Gecko (most notably Firefox), WebKit (most notably Safari; Chrome’s Blink engine is a WebKit fork) and Internet Explorer. I can’t help you with Internet Explorer – you’ll have to use a real device for that. However, I can help you with Gecko and WebKit.

PhantomJS

PhantomJS embeds the WebKit rendering engine. As a result, it closely matches the behavior of Safari (and, to a lesser degree, Chrome). This makes it an ideal test companion, as you can run the test suite against a real browser engine without having a browser window open.

SlimerJS

SlimerJS is the same idea, but built on the Gecko engine. Unlike PhantomJS, it isn’t truly headless – you still need a graphical environment and you will see windows. This is really only a problem on Windows, as you can use xvfb (a virtual frame buffer) on Mac and Linux to simulate a screen that you never see.

The Verdict

After doing all this research, the choice for me was rather obvious. I’m going to start out with the Karma test runner, run my tests with Jasmine (using the Chai assertion library when the default library isn’t enough) and when I come to do my UI tests I’ll be able to use real devices or the PhantomJS headless browser.