Gulp and Webpack – Better Together

I’ve used gulp as a workflow engine before, but I’d pretty much given up on it because webpack did so much of what I needed. However, flying back from vacation reminded me why I still need it. Not everything in my workflow is actually involved in creating bundles. In particular, some of my assets are loaded from a CDN – things like core-js and some icon fonts. When I am developing without the Internet (like on an airplane), I’d still like to use them. I need to copy the libraries that I normally grab from the CDN into the local public area, and that requires something other than webpack.

That raises the question: how do I convert the build I had been doing with webpack into something that gulp runs? It turns out there is a recipe for that. Here is my new Gulpfile.js:

var eslint = require('gulp-eslint'),
    gulp = require('gulp'),
    gutil = require('gulp-util'),
    webpack = require('webpack'),
    webpackConfig = require('./webpack.config.js');

var files = {
    client: [ 'client/**/*.js', 'client/**/*.jsx' ],
    server: [ 'server/**/*.js' ]
};

gulp.task('build', [
    'webpack:build'
]);

gulp.task('lint', [
    'server:lint',
    'webpack:lint'
]);

gulp.task('server:lint', function () {
    return gulp.src(files.server)
        .pipe(eslint())
        .pipe(eslint.format())
        .pipe(eslint.failAfterError());
});

gulp.task('webpack:lint', function () {
    return gulp.src(files.client)
        .pipe(eslint())
        .pipe(eslint.format())
        .pipe(eslint.failAfterError());
});

gulp.task('webpack:build', function (callback) {
    webpack(webpackConfig, function (err, stats) {
        if (err)
            throw new gutil.PluginError('webpack:build', err);
        gutil.log('[webpack:build] Completed\n' + stats.toString({
            assets: true,
            chunks: false,
            chunkModules: false,
            colors: true,
            hash: false,
            timings: false,
            version: false
        }));
        callback();
    });
});

The task you want to look at is webpack:build, which simply calls the webpack() API. Normally, the stats.toString() output contains a whole host of information, many hundreds of lines long – I only want the summary, so I’ve turned off the parts I don’t want to see.

I’ve also added two tasks for checking the files with eslint. My webpack configuration still specifies that linting is done as part of the build, which allows me to continue using the development server; however, I can now run the lint step on its own as well.
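
For reference, the relevant part of a webpack configuration that lints during the build looks something like this minimal sketch. It assumes the eslint-loader package and webpack 1.x preLoaders syntax; the entry and output names are placeholders, not my actual config.

module.exports = {
    entry: './client/app.jsx',
    output: {
        path: __dirname + '/public',
        filename: 'grumpywizards.js'
    },
    module: {
        // Lint first, then transpile with Babel
        preLoaders: [
            { test: /\.jsx?$/, exclude: /node_modules/, loader: 'eslint-loader' }
        ],
        loaders: [
            { test: /\.jsx?$/, exclude: /node_modules/, loader: 'babel-loader', query: { presets: [ 'es2015', 'react' ] } }
        ]
    }
};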

Now that I have this in place, I can rig my server to do a development build. Here are all the pieces:

Step 1: Install the libraries

I use font-awesome, material design icons and core-js in my project:

npm install --save font-awesome mdi core-js

Step 2: Create a task that copies the right files into the public area

Here is the code snippet for copying the files to the right place:

var eslint = require('gulp-eslint'),
    gulp = require('gulp'),
    gutil = require('gulp-util'),
    webpack = require('webpack'),
    webpackConfig = require('./webpack.config.js');

var files = {
    client: [ 'client/**/*.js', 'client/**/*.jsx' ],
    server: [ 'server/**/*.js' ],
    libraries: [
        './node_modules/font-awesome/@(css|fonts)/*',
        './node_modules/mdi/@(css|fonts)/*',
        './node_modules/core-js/client/*'
    ]
};
var destination = './public';

gulp.task('libraries:copy', function () {
    return gulp.src(files.libraries, { base: './node_modules' })
        .pipe(gulp.dest(destination));
});

Note that I’m not interested in copying all the files from the packages into my web area. In general, a package contains much more than you need. For example, font-awesome ships Less and Sass files – not really needed in my project. Take a look at what comes along with each package and copy only what you need. You can find out about the syntax of the filename glob by reading the Glob primer in node-glob.

Step 3: Update your configuration to specify the locations of the libraries.

I added the following to the config/default.json:

{
    "port": 3000,
    "env": "development",
    "base": "/",
    "library": {
        "core-js": "//cdnjs.cloudflare.com/ajax/libs/core-js/2.0.2/core.min.js",
        "mdi": "//cdn.materialdesignicons.com/1.4.57/css/materialdesignicons.min.css",
        "font-awesome": "//maxcdn.bootstrapcdn.com/font-awesome/4.5.0/css/font-awesome.min.css"
    }
}

The library block specifies the normal locations of the libraries. In this case, they are all out on the Internet on a CDN somewhere. In my config/development.json file, I specify their new locations:

{
    "env": "development",
    "base": "https://grumpy-wizards.azurewebsites.net/",
    "library": {
        "core-js": "core-js/client/core.min.js",
        "mdi": "mdi/css/materialdesignicons.min.css",
        "font-awesome": "font-awesome/css/font-awesome.min.css"
    }
}

When I import the config, I can read the library location with config.get('library.core-js'); (or whatever the library is).

Step 4: Update the home page configuration

In server/static/index.js, I have a function for loading an HTML file. I want to substitute the library locations in the same way I already substitute the env and base configuration:

function loadHtmlFile(filename) {
    var contents = '', file = path.join(__dirname, filename);
    if (!Object.prototype.hasOwnProperty.call(fileContents, filename)) {
        contents = fs.readFileSync(file, 'utf8'); // eslint-disable-line no-sync
        fileContents[filename] = contents
            .replace(/\$\{config.base\}/g, config.get('base'))
            .replace(/\$\{config.env\}/g, config.get('env'))
            .replace(/\$\{config.library.font-awesome}/g, config.get('library.font-awesome'))
            .replace(/\$\{config.library.mdi}/g, config.get('library.mdi'))
            .replace(/\$\{config.library.core-js}/g, config.get('library.core-js'))
            ;
    }
    return fileContents[filename];
}

I’ve got a relatively small number of libraries, so the overhead of a templating engine is not worth it right now. However, if I grew the number of libraries more, I’d probably switch this over to a template engine like EJS. I also need to update my index.html file to match:

<!DOCTYPE html>
<html>

<head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <title>Grumpy Wizards</title>
    <link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Roboto:300,400,500,700"/>
    <link rel="stylesheet" href="${config.library.mdi}"/>
    <link rel="stylesheet" href="${config.library.font-awesome}">
</head>

<body>
    <div id="pageview"></div>

    <script>
        window.GRUMPYWIZARDS = {
            env: '${config.env}',
            base: '${config.base}'
        };
    </script>
    <script src="${config.library.core-js}"></script>
    <script src="vendor.bundle.js"></script>
    <script src="grumpywizards.js"></script>
</body>
</html>

Step 5: Update package.json to copy the libraries to the right place before running nodemon

I added a new script to my package.json:

  "scripts": {
    "build": "gulp build",
    "prenodemon": "gulp libraries:copy",
    "nodemon": "nodemon --watch server ./server.js",
    "start": "node ./server.js"
  },

With all this done, I now have two modes:

  • In development mode, run by npm run nodemon, I copy the libraries to the right place and then serve those libraries locally
  • In production mode, run by NODE_ENV=production npm start, I serve the libraries from a CDN, saving my bandwidth

If I change the libraries that are copied into the public area, I will have to stop and restart the server. That is a relatively rare thing (I only have three libraries), so I’m willing to make that a part of my workflow when it happens.

As always, grab the latest source from my GitHub Repository.

Reduce the size of your Browserified React applications

I’ve been using React for most of my browser-side applications recently. The recommended approach here is to use Browserify to bundle your application. You create ES6 modular components, then bundle them all with React for your application. Let’s take a small application. I’ve got a basic bootstrap component called app.jsx:

import React from 'react';
import ReactDOM from 'react-dom';

import Application from './views/Application.jsx';

ReactDOM.render(
    <Application/>,
    document.getElementById('rootelement')
);

This includes the React libraries (I’m using v0.14.4 here) and my single component, which looks like this:

import React from 'react';

/**
 * Main Application Router
 * @extends React.Component
 */
export default class Application extends React.Component {
    /**
     * React API - render() method
     * Renders the application view
     * @returns {JSX.Element} a JSX Expression
     */
    render() {
        return <h1>{'Application Booted'}</h1>;
    }
}

I’ve also got a Gulp task to build the app.js that my HTML file loads that looks like this:

gulp.task('client:build:javascript', [ 'client:test' ], function () {
    return browserify({ debug: true })
        .add(config.source.client.entry.javascript, { entry: true })
        .transform(babelify, { presets: [ 'es2015', 'react' ], sourceMaps: true })
        .transform(browserifyshim)
        .bundle()
        .pipe(source('app.js'))
        .pipe(buffer())
        .pipe(sourcemaps.init({ loadMaps: true }))
        .pipe(uglify())
        .pipe(sourcemaps.write('./'))
        .pipe(gulp.dest(config.destination.directory));
});

Finally, I’ve got my HTML file:

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <title>Grumpy Wizards</title>
    <link rel="stylesheet" href="app.css">
</head>
<body>
    <div id="rootelement"></div>
    <script src="app.js"></script>
</body>
</html>

This works, but at what cost? The app.js file is 214Kb and the map file is 1.6Mb. That strikes me as a little excessive for 15 lines of code. Of course, the culprits are React and ReactDOM – those libraries occupy the majority of the space. I want economies of scale. Lots of sites use jQuery, React, Angular and other major frameworks. Why not let a CDN serve up that content for me? I get a bunch of benefits from this:

  1. My code is much smaller
  2. I don’t pay for bandwidth for serving libraries
  3. The user can take advantage of browser caching
  4. The user experiences shorter load times

Utilizing CDNs is a good idea. Back to the problem at hand – how do I rig Browserify so that it doesn’t bundle libraries? The answer is in a small module called browserify-shim. Here is how you use it.

Step 1: Update your index.html file to bring in the libraries from CDN

React is located at //fb.me/react-0.14.4.min.js and ReactDOM is located at //fb.me/react-dom-0.14.4.min.js – thank Facebook for providing the CDN for this! In fact, since Facebook uses these libraries, they are likely already in your cache. My new index.html file looks like this:

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <title>Grumpy Wizards</title>
    <link rel="stylesheet" href="app.css">
</head>
<body>
    <div id="rootelement"></div>
    <script src="//fb.me/react-0.14.4.min.js"></script>
    <script src="//fb.me/react-dom-0.14.4.min.js"></script>
    <script src="app.js"></script>
</body>
</html>

Note that I use the double-slash without the protocol – a protocol-relative URL. This tells the browser to use the same protocol as the main page, which avoids the mixed-content (“different security policies”) warnings.

Re-run your project and check out the window object with your developer tools. React and ReactDOM each add a global variable to the window object. I use Chrome Developer Tools – I can just go over to the Console tab, type window, then expand the returned variable. If you have another big library, find out its global variable. jQuery uses $, for example, and three.js uses THREE.

Step 2: Update your code to use the actual global variables

My code didn’t need to be updated. However, suppose you have imports like these:

import * as jq from 'jquery';
import three from 'three';

In that case, you need to update your code to use the actual global variables:

import * as $ from 'jquery';
import THREE from 'three';

Step 3: Install browserify-shim

Browserify-shim is available on npmjs.com:

npm install --save-dev browserify-shim

You will also need to update your gulp task:

var browserifyshim = require('browserify-shim');

gulp.task('client:build:javascript', [ 'client:test' ], function () {
    return browserify({ debug: true })
        .add(config.source.client.entry.javascript, { entry: true })
        .transform(babelify, { presets: [ 'es2015', 'react' ], sourceMaps: true })
        .transform(browserifyshim)
        .bundle()
        .pipe(source('app.js'))
        .pipe(buffer())
        .pipe(sourcemaps.init({ loadMaps: true }))
        .pipe(uglify())
        .pipe(sourcemaps.write('./'))
        .pipe(gulp.dest(config.destination.directory));
});

Step 4: Specify the global mapping in package.json

Finally, you need to add a section to your package.json file to tell browserify-shim which global variables you want to shim in your application:

  "browserify-shim": {
    "react": "global:React",
    "react-dom": "global:ReactDOM"
  }

The left-hand side (the key) is the module name in your import/require statement. The right-hand side is the global variable that the library creates.
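
With that mapping in place, the application code does not change at all; browserify-shim makes require('react') and require('react-dom') resolve to the globals at runtime instead of bundling the packages:

// Unchanged application code: at runtime these resolve to
// window.React and window.ReactDOM rather than bundled copies.
import React from 'react';
import ReactDOM from 'react-dom';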

Wrap-up

Doing this reduced my bundle from 214K to 2.5K and the map file from 1.6M to under 9K. These are massive savings. Users can also take advantage of browser caching, which is especially valuable on mobile browsers where data is at a premium. The smaller sizes mean the user gets the shortest load time possible, and I save on storage and bandwidth.

Want to support the vendors who put out these packages? Most vendors can use the data from the CDN to understand which apps are using their libraries and which browsers those libraries run in. That information can inform their development plans and help them target high-impact changes. So it’s really a plus for the vendor as well.

I do believe this is a win all around. Don’t serve up common libraries yourself – let the CDN do it.

Browser Testing with PhantomJS and Mocha – Part 2

Happy New Year!

Today I am going to complete the work of browser testing. In the last article, I introduced MochaJS in a browser, so you could run tests manually in a browser by setting up a test server and generating a static site. I am going to automate that task and take the browser out of the mix.

A big part of the process is PhantomJS – a headless browser that enables a number of things, among them automated browser testing. There are plug-ins for most test runners, including Mocha, Jasmine, and Chutzpah.

Before I get to that, I need a strategy. My build process is driven by gulp. I run gulp test to build the library and run all the tests. I need a task that will set up a test web server, then use phantomJS and mocha to run the test suite, bailing on a failed test, and then finally shutting down the test web server. I’ve already discussed the test server, but that runs forever.

Fortunately for me, Mocha and PhantomJS are such a popular combination that there is a Gulp plug-in for the combo called gulp-mocha-phantomjs, which is really a thin wrapper around mocha-phantomjs. PhantomJS is bundled, so it should “just work”. I did have some trouble getting PhantomJS working on Mac OS X El Capitan due to the security policies. To fix this, open System Preferences, then Security & Privacy. There is a section to allow applications downloaded from Anywhere:

[Screenshot: macOS Security & Privacy preferences, with “Allow apps downloaded from” set to Anywhere]

The Gulp task looks like this:

var gulp = require('gulp'),
    express = require('express'),
    phantomjs = require('gulp-mocha-phantomjs'),
    runSequence = require('run-sequence'),
    config = require('../configuration');

var port = config.test.server.port || 3000;
var server = 'http://localhost:' + port + '/';
var listening;

var app = express();
app.use(express.static(config.test.server.rootdir));

gulp.task('browser:global', function () {
    var stream = phantomjs({ reporter: 'spec' });
    stream.write({ path: server + 'global.html' });
    stream.end();
    return stream;
});

gulp.task('testserver:close', function (callback) {
    console.log('Test Server stopped');
    listening.close();
    callback();
});

module.exports = exports = function (callback) {
    listening = app.listen(port, function () {
        console.log('Test Server started on port ', port);
        runSequence('browser:global', 'testserver:close', callback);
    });
};

The task uses a global variable, listening, to store the server reference. This is used within the testserver:close task to close the connection and make the server quit. The main task sets up the server to listen. When the server is listening, it runs the test suites in order. I’ve only got one test suite right now. If I were expanding this to other test suites, I would add a task for each suite and then add it to the runSequence call before the testserver:close task.
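
For example, if there were a second suite served from a (hypothetical) amd.html page, the additions would look something like this sketch:

gulp.task('browser:amd', function () {
    var stream = phantomjs({ reporter: 'spec' });
    stream.write({ path: server + 'amd.html' });
    stream.end();
    return stream;
});

module.exports = exports = function (callback) {
    listening = app.listen(port, function () {
        console.log('Test Server started on port ', port);
        // Run each browser suite in order, then shut the test server down
        runSequence('browser:global', 'browser:amd', 'testserver:close', callback);
    });
};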

I’ve linked the task into my main Gulpfile.js like this:

var gulp = require('gulp');

gulp.task('lint', require('./gulp/tasks/lint'));
gulp.task('mocha', require('./gulp/tasks/mocha'));

gulp.task('build:testserver', [ 'build' ], require('./gulp/tasks/buildserver'));
gulp.task('browser-tests', [ 'build:testserver' ], require('./gulp/tasks/browsertests'));

gulp.task('build', [ 'lint', 'mocha' ], require('./gulp/tasks/build'));
gulp.task('test', ['lint', 'mocha', 'browser-tests']);
gulp.task('default', ['build', 'test']);

The task is stored in gulp/tasks/browsertests.js. This sequencing ensures that the main test suite and linter run first, then the library is built, and then the browser tests run. The output should now look like this:

[Screenshot: gulp output showing the PhantomJS/Mocha spec reporter results]

There is a small problem – the server continues to run (and the process never exits) if the browser tests fail. I find that reasonable, since I will want to load the failing suite into a web browser to investigate anyway.
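
If that ever became a nuisance, run-sequence hands any task error to its completion callback, so a sketch of an always-shut-down variant (not what I am running today) would be:

module.exports = exports = function (callback) {
    listening = app.listen(port, function () {
        console.log('Test Server started on port ', port);
        runSequence('browser:global', function (err) {
            // Close the server whether the tests passed or not,
            // then surface any failure to gulp
            listening.close();
            callback(err);
        });
    });
};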

Managing Coding Style in JavaScript with eslint

I’m a big fan of linters, especially for JavaScript. Many languages are statically typed and have fairly well-known, rigid formatting, which lets readers of the source code easily digest what it is doing. Not so JavaScript. Some JavaScript is so badly written that you have to run it to figure out what it does. Naturally, I want to avoid writing bad code, so for me a linter is mandatory.

JavaScript linters, including JSLint, JSHint and my favorite – ESLint, provide many functional benefits to you by statically analyzing your code.  They can uncover a slew of issues before you even run the code.  ESLint breaks down the rules (there are over 200 rules in the default set) into three major areas – potential errors, best practices and style. I use eslint because it supports ES2015 and React/JSX out of the box.

Whatever linter you choose, there really is no good reason to NOT use one.

Let’s take a quick step back and go through how you use this.  Firstly, you need to install eslint:

npm install -g eslint

Then you need to create a configuration file. I prefer my configuration file to be in JavaScript – it allows me to document the configuration so that I remember why I put a particular setting in there. The filename of the configuration file is .eslintrc.js. You can find mine as a gist on github.com. The basics of this file are here:

var OFF = 0, WARN = 1, ERROR = 2;

module.exports = exports = {
    "env": {
        "es6": true,
        "browser": true,
        "commonjs": true
    },

    "ecmaFeatures": {
        // env=es6 doesn't include modules, which we are using
        "modules": true
    },

    "extends": "eslint:recommended",

    "rules": {
        // Put your overrides here
    }
};

You can use the keywords OFF, WARN and ERROR instead of the numeric values in the rule overrides, as we will see later. You can then override the default handling of any rules in the rules section. You can run this by using:

eslint src

The eslint program looks for several config files, but the JavaScript version is the first one it looks for. It will also look in parent directories for a suitable file, so you can set up your rules at the top of your project and have the settings apply to all files. I tend to place my .eslintrc.js file in the src directory, since my Gulpfile.js, tests and so on require different rules.
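
As a sketch of that layering, a test directory could carry its own small .eslintrc.js that adds to whatever it inherits from above, for example turning on the mocha environment and relaxing a rule that chai-style assertions tend to trip:

var OFF = 0;

module.exports = exports = {
    "env": {
        "mocha": true
    },
    "rules": {
        // expect(x).to.be.true looks like an unused expression to eslint
        "no-unused-expressions": OFF
    }
};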

Back to those three areas. The first area is Possible Errors. Most of these are enabled as errors in the recommended set. You will probably want to override the ones that aren’t in the recommended set – they are useful.

The second set is Best Practices. I read the description of each rule in the set and made a judgement call on whether the rule would be an error or a warning or turned off. Most rules are simple. For example, I don’t want to ship a product with alert() calls, so I have the following override:

"no-alert": ERROR,

Some rules have additional information and you have to read the rule details for that information. An example of this is the accessor-pairs rule. This covers getters and setters. One of the things I like to do is have class properties written as accessors. This allows me to provide a “read-only” property by only specifying a getter. Thus my rule is:

"accessor-pairs": [ ERROR, {
    "getWithoutSet": false,
    "setWithoutGet": true
}],

This says that I can have just a getter or a getter and a setter. I cannot have a setter without a getter. I prefer best practices to be errors by default and need to justify to myself why something should be a warning.
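
To make the read-only case concrete, here is the sort of class (purely illustrative) that this setting permits:

class Circle {
    constructor(radius) {
        this._radius = radius;
    }

    // A getter with no matching setter - a read-only property,
    // which getWithoutSet: false allows
    get radius() {
        return this._radius;
    }
}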

The final section is Stylistic. Pretty much every development group has a coding style guide. It covers things like where curly braces should go, what sort of quotes to use, how to name variables, whether to use tabs or spaces for indentation, and so on. This section (which is the bulk of the rules) allows you to enforce that coding style. Here I prefer things to be warnings and have to justify errors. The difference is that the code will still work – it’s just coding style.

I run eslint before I do anything else in a build. I have two mechanisms for handling builds. In the first, I use the package.json scripts to build the code and eschew a task runner – generally for smaller projects. In this case, I put the eslint call in the package.json scripts section, like this:

  "scripts": {
    "pretest": "eslint src test",
    "test": "mocha"
  },

If I am doing a build, I’m usually using a task runner – gulp is my task runner of choice. In this case, I have a task to do the eslint for me:

var gulp = require('gulp'),
    eslint = require('gulp-eslint');

var config = {
    paths: {
        src: './src/**/*.js',
        dest: './dist'
    }
};

gulp.task('lint', function () {
    return gulp.src([ config.paths.src, '!**/.eslintrc.js' ])
        .pipe(eslint())
        .pipe(eslint.format())
        .pipe(eslint.failAfterError());
});

gulp.task('build', [ 'lint' ], function () {
    // my build process here
});

The gulp.src line explicitly ignores your .eslintrc.js file – linting it is a circular reference, and surprisingly, the config file tends not to follow the style rules itself.

There will come a time when you want to override a rule for a specific case. I always do something like the following when I do this:

// lint-disable: foobar is an external library (new-cap)
var x = new foobar(); //eslint-disable-line new-cap

Note two things here. Firstly, I say why the rule was disabled. This prevents any guessing on the part of people who come after me. I also prefix the line with lint-disable so that I can easily grep the entire source tree for lint-disable to get all the exceptions. Secondly, I only disable the rule on one line. This is the preferred method.

Sometimes I need to disable a rule for a whole function; I use a block disable. This has to be even rarer and again needs to be documented:

/* eslint-disable id-length */
/* lint-disable id-length - the library foobar has this identifier */
function myfoo() {
  foobar.aReallyLongIdentifierUsedByFooBarIsObnoxiousButRequired = false;
}
/* eslint-enable id-length */

You should (and can easily) audit all the lint-disable lines and blocks on a regular basis.
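
Because every disable comment carries the same lint-disable prefix, the audit is a single command:

grep -rn "lint-disable" src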

So, how do I use eslint?

  • eslint the source before every build and test – fix errors
  • eslint the source before every check in – fix warnings
  • audit the source for lint-disable before every release

There are a few things more important than static analysis of your code, but it’s right up there with testing. You need to be linting your JavaScript code on a very regular basis. Automate your usage of eslint (or another JavaScript linter) so that it’s a part of your build process.

Apache Cordova, ES2015 and Babel

I created a simple Apache Cordova app and got it working on my iOS and Android emulators in the last article. My hope was to convert the app to ECMAScript 2015 (the new fancy name for what we have been calling ES6 for the past year) and work on Browserify for the app packaging. However, the initial bits took too long. So let’s remedy that now. I’m starting from the basic app template that the cordova tool produced.

Let’s start by looking at the code that the basic app template includes (in ./www/js/index.js):

var app = {
    // Application Constructor
    initialize: function() {
        this.bindEvents();
    },
    // Bind Event Listeners
    //
    // Bind any events that are required on startup. Common events are:
    // 'load', 'deviceready', 'offline', and 'online'.
    bindEvents: function() {
        document.addEventListener('deviceready', this.onDeviceReady, false);
    },
    // deviceready Event Handler
    //
    // The scope of 'this' is the event. In order to call the 'receivedEvent'
    // function, we must explicitly call 'app.receivedEvent(...);'
    onDeviceReady: function() {
        app.receivedEvent('deviceready');
    },
    // Update DOM on a Received Event
    receivedEvent: function(id) {
        var parentElement = document.getElementById(id);
        var listeningElement = parentElement.querySelector('.listening');
        var receivedElement = parentElement.querySelector('.received');

        listeningElement.setAttribute('style', 'display:none;');
        receivedElement.setAttribute('style', 'display:block;');

        console.log('Received Event: ' + id);
    }
};

app.initialize();

This is basically a class for handling events, together with a method that contains the application code. I think I can abstract the event handling away and use the EventEmitter class from Node’s events module – I like the semantics of EventEmitter a little better. Let’s take a look at my new code (which I’ve placed in src/js/index.js):

import DeviceManager from './lib/device-manager';

var app = new DeviceManager();
app.on('deviceready', function () {
  var parentElement = document.getElementById('deviceready');
  var listeningElement = parentElement.querySelector('.listening');
  var receivedElement = parentElement.querySelector('.received');

  listeningElement.setAttribute('style', 'display:none;');
  receivedElement.setAttribute('style', 'display:block;');
});

I could have used an arrow-function for the callback in app.on(), but I like the callback semantics when I’m not in a class and have no parameters. I believe it is more readable. I now need a DeviceManager class. This is stored in the file src/js/lib/device-manager.js:

import {EventEmitter} from 'events';

/**
 * A class for handling all the event handling for Apache Cordova
 * @extends EventEmitter
 */
export default class DeviceManager extends EventEmitter {
  /**
   * Create a new DeviceManager instance
   */
  constructor() {
    super();
    document.addEventListener('deviceready', this.onDeviceReady.bind(this), false);
  }

  /**
   * Handle the deviceready event
   * @see http://cordova.apache.org/docs/en/5.4.0/cordova/events/events.deviceready.html
   * @emits {deviceready} a deviceready event
   * @param {Event} e - the deviceready event object
   */
  onDeviceReady(e) {
    console.debug('[DeviceManager#onDeviceReady] event = ', e);

    this.emit('deviceready', e);
  }
}

I’m preparing this for conversion into a library by including esdoc tags for documentation. There is more to do in this class – I want to trap all the Apache Cordova events so that I can re-emit them through the EventEmitter interface, for example – but this is enough to get us started.
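
As a sketch of where that is going (not yet in the repository), the constructor could register for the other standard Cordova lifecycle events such as pause, resume, offline and online, and re-emit each one under the same name:

import {EventEmitter} from 'events';

export default class DeviceManager extends EventEmitter {
  constructor() {
    super();
    // Re-emit each Cordova lifecycle event through the EventEmitter interface
    [ 'deviceready', 'pause', 'resume', 'offline', 'online' ].forEach((name) => {
      document.addEventListener(name, (e) => this.emit(name, e), false);
    });
  }
}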

Note that there is a little extra work needed if you want to use Visual Studio Code to edit ES2015 code. Add the following jsconfig.json file:

{
    "compilerOptions": {
        "target": "ES6"
    }
}

Now that I have the code written, how does one build it? First step is to bring in npm, which I will use as the package manager for this project:

npm init --yes

I like to answer yes to everything and then edit the file directly. In this case, I’ve set the license to MIT, added a description and updated the author. However all of these are optional, so this single line lets me start working straight away.

A Diversion: Babel 6

My next step was to download the tool chain, which includes gulp, browserify and babelify. I always browse a module’s blog before adding it to a project, and it was lucky I did in the case of Babel, as there were major changes. Here is the short, short version:

  1. Babel is just a transpiler framework now
  2. You must create a .babelrc file for it to work

Fortunately, getting ES2015 compilation working with the .babelrc file is simple. Here it is:

{
	"presets": [ "es2015" ]
}

If you are using Babel on the command line, I highly recommend you read the introductory blog post by the team.

Build Process: Gulp

As I pretty much always do nowadays, gulp is my go-to build process handler. To set it up:

npm install --save-dev gulp gulp-rename vinyl-source-stream browserify babelify babel-preset-es2015

The first five modules are from my prior experience with Browserify. The last one is new and brings in the ES2015 preset for Babel 6. The Gulpfile.js is relatively simple right now:

var babelify = require('babelify'),
    browserify = require('browserify'),
    gulp = require('gulp'),
    rename = require('gulp-rename'),
    source = require('vinyl-source-stream');

gulp.task('default', [ 'build' ]);
gulp.task('build', [ 'js:bundle' ]);

gulp.task('js:bundle', function () {
  var bundler = browserify({ debug: true });

  return bundler
    .add('./src/js/index.js')
    .transform(babelify)
    .bundle()
    .pipe(source('./src/js/index.js'))
    .pipe(rename('index.js'))
    .pipe(gulp.dest('./www/js'));
});

This is the exact same recipe I have used before, so the semantics haven’t changed. On an existing project you would need to uninstall the old Babel 5 packages and install the new ones for this to work – but since this is a new project, there was nothing to uninstall. To build the package, I now run:

gulp build

To make it even easier, I’ve added the following section to the npm package.json:

  "scripts": {
    "build": "gulp build"
  },

With this code, I can now run npm run build instead of gulp directly. It’s a small level of indirection, but a common one.

Wrapping up

The build process overwrites the original www/js/index.js. Once the build is done, you should be able to use cordova run browser to check it out. You should also be able to build the iOS and Android versions and run those. There isn’t any functional change, but the code is now ES2015, and that makes me happy.

In the meantime, check out the code!

Moving from Bower to NPM

The “definitely not the” node package manager (npm) has recently moved up a major version to v3.0 and, in the process, has signalled its intent to start handling client-side packages. It also supports any old git repository, so the benefit of bower is basically gone. It’s time to stop using two package managers and standardize on one.

There are three things I need to do. Firstly, I need to figure out how to get packages that aren’t in the npm package repository yet. Secondly, I need to handle my standard gulp recipe for building the library area. Finally, since I need to do extra stuff to the polymer package before using it, I need to adjust the build process for Polymer.

1. Handling Non-Repository Packages

npm has a bunch of ways you can handle non-repository packages. You could run your own repository – which is probably overkill for most projects. You can also install a .tgz or .zip file either locally or from a URI. You can also install from GitHub.
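
For example, all of these forms work (the package names and URLs are placeholders):

npm install --save ./vendor/some-package-1.0.0.tgz
npm install --save https://example.com/packages/some-package-1.0.0.tgz
npm install --save github:someuser/some-package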

A quick warning – the first time you install from GitHub, it may ask you for verification. On Windows this was painful; answering the prompts, then returning and re-running the command, worked for me.

Polymer itself is not in the npm registry. There is a package called polymer, but it is something different, and there are lots of packages dealing with Polymer, just not Polymer itself. Let’s install Polymer with this command:

npm install --save github:Polymer/polymer

As of this writing, it installs v1.0.5 of Polymer, which is the latest version. In the package.json, it looks a little different. Here is the top of my package.json file:

{
  "version": "1.0.0",
  "name": "AspNetPolymer",
  "private": true,
  "dependencies": {
    "Polymer": "polymer/polymer",
    "font-awesome": "^4.3.0",
    "webcomponents.js": "^0.7.2"
  },

Note that I’ve moved the packages from bower.json into the dependencies section of package.json. In the process, I had to rename webcomponentsjs to webcomponents.js – you may encounter other naming differences. You can edit the package.json in Visual Studio and it will run a restore packages process when complete. The Polymer entry is a little different though – it specifies the library name (Polymer) and the path on github.com. You can use private git repositories as well if you like (an alternative to npm Enterprise, perhaps?).

2. Building the Libraries

Now that I’ve got the libraries on my machine in my source area, I want to move them into the right place on wwwroot. I use gulp for my build process, so here is the recipe:

gulp.task("libraries", function () {
    return gulp.src(plugins.npmFiles(), { base: "./node_modules" })
        .pipe(gulp.dest(config.dest + "/lib"));
});

The plugins.npmFiles() call is provided by gulp-npm-files – a module for this exact purpose. I’m using gulp-load-plugins to create the plugins object that loads this plugin.
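
For completeness, the top of that gulpfile looks roughly like this; gulp-load-plugins picks up every gulp-* module listed in package.json and exposes it as a camel-cased property, which is where plugins.npmFiles comes from. The destination path here is a placeholder:

var gulp = require('gulp'),
    plugins = require('gulp-load-plugins')();

var config = {
    dest: "./wwwroot"
};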

3. Post-processing the libraries

Back to Polymer. Polymer is distributed as three files – polymer.html, polymer-mini.html and polymer-micro.html. I want to vulcanize these into one file. I use the following task to create an elements/polymer area:

/*
 * Build the vulcanized Polymer library that we need
 */
gulp.task("polymer", ["libraries"], function () {
    var polymerPath = "./node_modules/Polymer/polymer.html";

    return gulp.src(polymerPath)
        .pipe(plugins.vulcanize())
        .pipe(gulp.dest(config.dest + "/elements/polymer"));
});

The only real thing I had to do here was change the path to the Polymer library – it’s in node_modules instead of bower_components. I could build this into the libraries pipeline using gulp-filter (looking for **/polymer.html), but the saving of one task doesn’t seem worth it. Most libraries are distributed already built.

That’s pretty much all there was to it. If I’m developing using Aurelia, I will still use two package managers (jspm and npm), but most things can be developed with just one package manager now. The support for npm in Visual Studio is just a bonus to me.

Gulp: Bumping Versions

You may have noticed that JavaScript build utilities all have their own JSON file for configuration – whether it be bower, npm, or whatever. There is always a JSON file, and it is always versioned. So one of the things you tend to do is bump the version and check it in again.

That screams out for automation.

Someone else thought so too.

Let’s take a look at my bower.json file:

{
  "name": "WebApp",
  "version": "0.0.2",
  "private": true,
  "dependencies": {
    "app-router": "^2.6.1",
    "polymer": "^1.0.3",
    "webcomponentsjs": "^0.7.3"
  },
  "overrides": {
    "polymer": {
      "main": "*.html"
    }
  }
}

Note the version string. The package.json file that npm uses and the project.json file that ASP.NET uses have this version field as well.

Big shout out to Steve Lacy for writing gulp-bump – a Gulp plugin that bumps the version number. Here is a recipe:

// Bump the version included in bower.json and package.json
gulp.task("bump-version", function () {
    return gulp.src(["./bower.json", "./package.json"])
        .pipe(plugins.bump({ type: "patch" }))
        .pipe(gulp.dest("./"));
});

Now that I have codified what needs to change, I just run gulp bump-version and all the files get updated.
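
gulp-bump also understands the other semver segments, so the same recipe can drive larger bumps. A sketch:

// Bump the minor version (0.0.2 becomes 0.1.0) instead of the patch level
gulp.task("bump-minor", function () {
    return gulp.src(["./bower.json", "./package.json"])
        .pipe(plugins.bump({ type: "minor" }))
        .pipe(gulp.dest("./"));
});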