Learning Webpack with React and ES6

When I create a React/ES6 app, I reach for my tool of choice – browserify. However, that has certain issues – things that I am certain I can work around if I try, but I’m spending all my time in the tooling. I want to write the application – not the tools. A lot of my articles over the past year have been simply adjustments to my tooling. It’s time for a different approach. Way back in the early parts of last year, I mentioned Webpack in the same breath as Browserify. Webpack is a different style of bundler. It bundles everything together – not just your code. To get to the point of using Webpack, I need some configuration. My hope is that I can build the entire web site with a single build script.

Firstly, let’s get rid of Gulp. That means getting rid of the Gulpfile.js and my gulp directories. It also means I need to handle linting and testing elsewhere. I’ve done this by changing the scripts section of my package.json to the following:

  "scripts": {
    "clean": "rimraf public",
    "pretest": "eslint client/src server/src client/test server/test && sass-lint -v -q -f stylish",
    "test": "mocha --recursive --compilers js:babel-register --reporter dot client/test server/test",
    "start": "node ./bin/www"
  },

Of these, only the start script was there before. I can run npm test to test the package. I can also run npm run clean to clean up the generated files. This handles all the tasks except for building the public area. My next step is to integrate the index.html file into the Express server. Right now, I serve it up as a static file; the gulp build system used to copy the index.html file from the source area to the destination area. I’ve added the following to my server (the code goes above the staticFiles() middleware initialization):

    // Display the index.html file
    app.get('/', function (request, response) {
        response.status(200).type('text/html').send(index);
    });

The index variable is initialized elsewhere to the content I want to send – it’s just a copy of my old index.html file.

Webpack Basics

On to the webpack configuration. Webpack, at its core, bundles things together and transforms them via loaders. In order to load my JSX files, I need to include the Babel transpiler. The idea is that the files will be compiled from ES6 to normal browser-ready JavaScript. Babel also compiles my JSX for me, so I don’t need an extra step for that. Just like Gulp and Grunt before it, Webpack has a configuration file: webpack.config.js – a JavaScript file that exports the configuration object. Here is my simple version:

module.exports = {
    entry: {
        grumpywizards: './client/src/app.jsx'
    },
    module: {
        loaders: [
            {
                test: /\.jsx?$/,
                loaders: [ 'babel' ],
                exclude: /node_modules/
            }
        ]
    },
    output: {
        filename: 'public/[name].js'
    }
};

This will walk the dependency graph from the entry point, compile all the source files into the right form, and store the result in public/grumpywizards.js. Each entry in module.loaders has three elements. First is the test – a regular expression matched against the filename; if it matches, this loader definition is used. The version here accepts both .js and .jsx extensions. Next is exclude – this says don’t process anything in node_modules. Finally, the list of loaders is applied from right to left – I’ve only got one, babel, because Babel compiles JSX as well.
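To make the right-to-left ordering concrete, here is a hypothetical rule with two loaders (style-loader and css-loader are not part of this article’s build – they are purely illustrative):

```javascript
// Hypothetical loader definition with two loaders. Webpack applies the
// rightmost loader first, so 'css' runs before 'style'.
var cssRule = {
    test: /\.css$/,
    loaders: ['style', 'css']
};

// Reversing the list gives the actual execution order:
var executionOrder = cssRule.loaders.slice().reverse();
console.log(executionOrder); // ['css', 'style']
```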

To run this, you need to install webpack and the loader:

npm install --save-dev webpack babel-loader

You can now add the following script definition to the package.json:

  "scripts": {
    "clean": "rimraf public",
    "pretest": "eslint client/src server/src client/test server/test && sass-lint -v -q -f stylish",
    "test": "mocha --recursive --compilers js:babel-register --reporter dot client/test server/test",
    "prestart": "webpack -p",
    "start": "node ./bin/www"
  },

When you run npm start now, webpack runs prior to starting the server. What you will see is an uglified, packed file. Note that it’s on the larger side – about 1MB. That’s because the React libraries are bundled in (and after I just got rid of them with Browserify!). Also, there are no source maps yet.

Dealing with External Libraries

In my last article, I used an extra module – browserify-shim – to abstract libraries from my code. This functionality is built in to webpack. I just need to add a little bit of configuration to the webpack.config.js:

module.exports = {
    entry: {
        grumpywizards: './client/src/app.jsx'
    },
    module: {
        loaders: [
            {
                test: /\.jsx?$/,
                loaders: [ 'babel' ],
                exclude: /node_modules/
            }
        ]
    },
    externals: {
        'react': 'React',
        'react-dom': 'ReactDOM'
    },
    output: {
        filename: 'public/[name].js'
    }
};

On the left-hand side is the name by which the module is imported. On the right-hand side is the global variable it is exposed as when you load the library from the CDN. Yep – this is super simple. Building this takes my bundle from 1MB to 2KB, which is eminently more reasonable. I’ll leave serving the library to the CDN.
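With externals in place, the page itself has to load React before the bundle – typically via script tags. A sketch of what the index.html additions look like (the CDN URLs and version are assumptions; use whatever your page already references):

```html
<!-- Load React from a CDN so the bundle can rely on the React/ReactDOM globals -->
<script src="https://cdnjs.cloudflare.com/ajax/libs/react/0.14.3/react.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/react/0.14.3/react-dom.min.js"></script>
<script src="/grumpywizards.js"></script>
```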

Source Maps

Just like the external library configuration, source maps have been thought about as well. Just add an option to generate source maps to your webpack.config.js:

module.exports = {
    entry: {
        grumpywizards: './client/src/app.jsx'
    },
    devtool: 'source-map',
    module: {
        loaders: [
            {
                test: /\.jsx?$/,
                loaders: [ 'babel' ],
                exclude: /node_modules/
            }
        ]
    },
    externals: {
        'react': 'React',
        'react-dom': 'ReactDOM'
    },
    output: {
        filename: 'public/[name].js'
    }
};

The important line here is the devtool setting – it accepts various values, but source-map makes webpack emit a separate .map file alongside the bundle.

I had a fair amount of pain here, though. Most of the online tutorials suggested putting both babel-loader and jsx-loader in. This caused the source maps to contain the ES5 versions of the files – after transpiling – and it turns out jsx-loader doesn’t have source map support. Fortunately, Babel transpiles JSX as well, so jsx-loader isn’t needed any more. Just do away with it and be happy.

Wrap Up

So, where do we go from here? I like webpack – much more than gulp + browserify + babelify + all the rest. There is still some work to do. I need to find a solution for the stylesheets in my components, and I want to start looking at live reloading as I save files. That, however, is for another day. In the meantime, you can find my continuing code at my GitHub Repository.

Running BabelJS/ES2015 Apps in Azure App Service

BabelJS is a really cool in-line transpiler. You can use it as a ‘require hook’ to let your Node.js apps use full ES6 syntax without worrying about gaps in the V8 JavaScript engine’s ES6 support. For instance, I have a server.js file that looks like this:

require('babel-register');
require('./app');

My app.js file contains regular ES6 code, like this:

import express from 'express';
import process from 'process';
import staticFiles from 'serve-static';

let app = express();
app.use(staticFiles('public', { index: 'index.html' }));
app.listen(process.env.PORT || 3000);

I’ve also got a public directory and an index.html file inside the public directory for display. Finally, I need a .babelrc file:

{
    "presets": [ "es2015" ]
}

Assuming I’ve installed all the right packages:

npm install --save babel-register babel-preset-es2015 express serve-static

This will run, and I’ll be able to browse to http://localhost:3000 and get my index page. It just works – which is a great thing. Now, let’s move it up to the cloud: Azure App Service, to be precise.

To do this, I logged onto the Azure Portal, created a new Web App and set up Continuous Deployment to read my site from the GitHub repository. When it deployed, I got a successful deployment. However, when I browsed to my site, I got a failure. The failure in the logs was this:

Mon Dec 28 2015 22:36:55 GMT+0000 (Coordinated Universal Time): Unaught exception: Error: ENOENT: no such file or directory, open 'D:\local\UserProfile\.babel.json'
    at Error (native)
    at Object.fs.openSync (fs.js:584:18)
    at Object.fs.writeFileSync (fs.js:1224:33)
    at save (D:\home\site\wwwroot\node_modules\babel-register\lib\cache.js:45:19)
    at nextTickCallbackWith0Args (node.js:433:9)
    at process._tickCallback (node.js:362:13)
    at Function.Module.runMain (module.js:432:11)
    at startup (node.js:141:18)
    at node.js:980:3

I don’t even have this .babel.json file, so what’s wrong? By default, BabelJS saves a cache file in your user profile. This is fine if you are running in the context of a user account – user accounts generally have a home directory or user profile.

BabelJS saves the JSON Cache in the following places:

  • BABEL_CACHE_PATH
  • USERPROFILE
  • HOMEDRIVE + HOMEPATH
  • HOME

You can also set BABEL_DISABLE_CACHE=1 to disable the generation of this file. Since the existence of this file improves startup times (and Azure does restart your site from time to time), you probably want to keep the file.

These are all environment variables. In Azure, USERPROFILE points to a directory that does not exist – there is no user in Azure. My opinion is that it should point to the temporary directory or not be set at all. Azure has a temporary directory at D:\local\Temp. We can force BabelJS to write to the temporary directory by specifying an App Setting. To do this:

  1. Log onto the Azure Portal and select your Web App.
  2. Click on Settings, then Application Settings (under GENERAL).
  3. Scroll down to the bottom of the App settings section.
  4. In an empty box, for key, enter BABEL_CACHE_PATH and for value, enter D:\local\Temp\babel.json
  5. Click on Save.

Your application may need a restart. Once restarted, browse to your site again and it should work properly. With this app setting, your site works both for local development and within the Azure cloud.

Browser Testing with PhantomJS and Mocha – Part 1

If you have been following along for the past couple of weeks, you will know that I’ve been writing a browser library recently. I’m writing the library in ES2015 and then transpiling it into UMD.

A sidebar on bugs in BabelJS
I did bump into a bug when transpiling to the UMD module format. The bug affects pretty much all the module transforms, and manifests as a ‘Maximum call stack size exceeded’ error involving _typeof. The bug is T6777. There is a workaround: add a typeof undefined; line at the top of your library.

Back to the problem at hand. I’ve already used Mocha to test my library and I use mocks to attempt to exercise the code, but at some point you have to run it in a browser. There are two steps to this. The first is to set up a test system that runs in a browser, and the second is to run the test system through a headless browser so it can be automated. Let’s tackle the first step today.

My library is a client library to access a remote AJAX environment. I want the library to use either a provided URL or the URL the page was loaded from – whichever is appropriate. As a result, I need to load the files over the Internet – loading from a file:// URL isn’t good enough. To handle this, I’m going to:

  • Create a local test server
  • Load the files into a static service area
  • Run the pages in a browser

To this end, I’ve got a Gulp task that builds my server:

var gulp = require('gulp'),
    babel = require('gulp-babel'),
    concat = require('gulp-concat'),
    sourcemaps = require('gulp-sourcemaps'),
    config = require('../configuration');

module.exports = exports = function() {
    return gulp.src(config.source.files)
        .pipe(sourcemaps.init())
        .pipe(concat('MyLibrary.js'))
        .pipe(babel())
        .pipe(sourcemaps.write('.'))
        .pipe(gulp.dest(config.destination.directory));
};

I store my gulp tasks in a separate file – one file per task. I then require the file in the main Gulpfile.js:

var gulp = require('gulp');

gulp.task('build', require('./gulp/tasks/build'));

I now have a MyLibrary.js file and a MyLibrary.js.map file in the dist directory. Building the server area is just as easy:

var gulp = require('gulp'),
    config = require('../configuration');

// Builds the server.rootdir up to service test files
module.exports = exports = function() {
    return gulp.src(config.test.server.files)
        .pipe(gulp.dest(config.test.server.rootdir));
};

My configuration.js exposes a list of files like this:

module.exports = exports = {
    source: {
        files: [ 'src/**/*.js' ]
    },
    destination: {
        directory: 'dist'
    },
    test: {
        mocha: [ 'test/**/*.js' ],
        server: {
            files: [
                'browser-tests/global.html',
                'browser-tests/global-tests.js',
                'dist/MyLibrary.js',
                'dist/MyLibrary.js.map',
                'node_modules/chai/chai.js',
                'node_modules/mocha/mocha.css',
                'node_modules/mocha/mocha.js'
            ],
            port: 3000,
            rootdir: 'www'
        }
    }
};

Take a look at the test.server.files object. It contains three distinct sections – the browser test files (more on those in a moment), the library files under test, and the testing libraries. You should already have the testing libraries installed, but if you don’t, you can install them:

npm install --save-dev mocha chai

I will have a www directory with all the files I need in it once I run the gulp buildserver command. Next, I need a server. I use ExpressJS for this. First off, install ExpressJS:

npm install --save-dev express

Note that this is a dev install – not a production install – hence the --save-dev flag. I want express listed in devDependencies. Now, on to the server code, which I place in testserver.js:

var express = require('express'),
    config = require('./gulp/configuration');

var app = express();
app.use(express.static(config.test.server.rootdir));
app.listen(config.test.server.port || 3000, function() {
    console.info('Listening for connections');
});

This is about the most basic configuration for an ExpressJS server you can get. I’m serving static pages from the area I’ve built. That’s enough infrastructure – now, how about running tests? I’ve got two files in my files list that I have not written yet. The first is a test file called global-tests.js, and the other is an HTML file that sets up the test run – called global.html. The global-tests.js is a pretty normal Mocha test suite:

/* global describe, it, chai, MyLibrary */
var expect = chai.expect;

describe('MyLibrary.Client - Global Browser Object', function () {
    it('should have an MyLibrary global object', function () {
        expect(MyLibrary).to.be.a('object');
    });

    it('should have an MyLibrary.Client method', function () {
        expect(MyLibrary.Client).to.be.a('function');
    });

    it('should create a Client object when run in a browser', function () {
        var client = new MyLibrary.Client();
        expect(client).to.be.an.instanceof(MyLibrary.Client);
    });

    it('should set the url appropriately', function () {
        var client = new MyLibrary.Client();
        expect(client.url).to.equal('http://localhost:3000');
    });

    it('should set the environment appropriately', function () {
        var client = new MyLibrary.Client();
        expect(client.environment).to.equal('web/globals');
    });
});

There are a couple of changes. Firstly, this code is going to run in the browser, so you must write your tests for that environment. Secondly, it expects the test framework to be established already – specifically, the chai library must be pre-loaded. One other thing: this is a minimal test load. The majority of the testing is done inside my standard Mocha test run. As long as your tests exercise all paths within the code across the two suites (the standard mocha tests and the browser tests), you will be ok. I only test things that need the browser in order to be tested.

The global.html test file sets up the tests, loads the appropriate libraries and then executes the tests:

<!DOCTYPE html>
<html>

<head>
    <title>Mocha Test File: Global Library Definition</title>
    <meta charset="utf-8">
    <link rel="stylesheet" href="mocha.css">
</head>

<body>
    <div id="mocha"></div>
    <script src="mocha.js"></script>
    <script src="chai.js"></script>
    <script>
        mocha.setup('bdd');
        mocha.reporter('html');
    </script>
    <script src="MyLibrary.js"></script>
    <script src="global-tests.js"></script>
    <script>
        mocha.run();
    </script>
</body>

</html>

I intend to write test files that exercise the global object version, the AMD module definition and browserify, to ensure that the library runs in all environments. Each environment will have its own HTML file and test suite file. I can include as many of these sets as I want.

Running the tests

Running the tests at this stage is a two-step process. First, you start the server:

node testserver.js

Secondly, you browse to http://localhost:3000/global.html – note the initiator for your test suite is the HTML file. If you have done everything properly, the tests will just work:

(Screenshot: Mocha’s HTML reporter showing the browser tests passing.)

If things don’t work, you can use Developer Tools to figure out what is going on, correct the problem, then re-run the tests. Since this is an ES2015 project, some things may require a polyfill. You can provide your own (mine only needs a polyfill for Object.assign – a matter of a couple of dozen lines of code), or you can use a major ES2015 polyfill like core-js – just ensure you load the polyfill in your test environment. This is also a great prompt to ensure your library has the right dependencies listed and that you have documented your browser requirements.
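My Object.assign polyfill isn’t shown in the article; a minimal sketch of the idea (not spec-complete – it skips symbol keys and some edge cases) looks like:

```javascript
// Minimal Object.assign fallback - enough for a library's own usage.
if (typeof Object.assign !== 'function') {
    Object.assign = function (target) {
        for (var i = 1; i < arguments.length; i++) {
            var source = arguments[i];
            if (source !== null && source !== undefined) {
                for (var key in source) {
                    if (Object.prototype.hasOwnProperty.call(source, key)) {
                        target[key] = source[key];
                    }
                }
            }
        }
        return target;
    };
}

console.log(Object.assign({}, { a: 1 }, { b: 2 })); // { a: 1, b: 2 }
```

Guarding on typeof Object.assign means the native implementation is used whenever the browser provides one.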

In the next article (Happy New Year!) I will integrate this into automated testing so that you don’t have to open a browser to do this task.

Testing async functions with mocks and mocha in JavaScript

I’ve recently gone down the road of testing all my code using Mocha and Chai, and I aim for 100% code coverage. My current library does a HTTP connection to a backend and I’m hoping to use node-fetch for that. But how do you test a piece of asynchronous code that uses promises or callbacks?

Let’s take a look at my code under test:

import fetchImpl from 'node-fetch';

export default class Client {
    constructor(baseUrl, options = {}) {
        const defaultOptions = {
            fetch: fetchImpl
        };
        
        this.prvOptions = Object.assign({}, defaultOptions, options);
        this.prvBaseUrl = baseUrl;
    }
    
    fetch(relativeUrl, options = {}) {
        const defaultOptions = {
            method: 'GET'
        };

        let fetchOptions = Object.assign({}, defaultOptions, options);
        return this.prvOptions.fetch(`${this.prvBaseUrl}${relativeUrl}`, fetchOptions);
    }
}

This is a much-shortened version of my code, but the basics are there. Here is the important thing – I set a default option that holds the fetch implementation. It defaults to the “real” version (the fetch: fetchImpl line in the constructor). If I don’t override the implementation, I get the node-fetch version.

Later on, I call client.fetch('/foo'). The client library uses my provided implementation of fetch or the default one if I didn’t specify.

All this logic allows me to substitute (or mock) the fetch command. I don’t really want to test the functionality of fetch – I just want to ensure I am calling it with the right parameters.

Now for the tests. My first problem is that I have asynchronous code here. fetch returns a Promise. Promises are asynchronous. That means I can’t just write tests like I was doing before – they will fail because the response wouldn’t be available during the test. The mocha library helps by providing a done call back. The general pattern is this:

    describe('#fetch', function() {
        it('constructs the URL properly', function(done) {
            client.fetch('/foo').then((response) => {
                    expect(response.url).to.equal('https://foo.a.com/foo');
                    done();
                })
                .catch((err) => {
                    done(err);
                });
        });
    });

You might remember the .then/.catch pattern from the standard Promise documentation. Mocha provides a callback (generally called done). You call the callback when you are finished. If you encountered an error, you call the callback with the error. Mocha uses this to deal with async tests.

Note that I have to handle both the .then() and the .catch() clause. Don’t expect Mocha to call done for you. Ensure all code paths in your test actually call done appropriately.

This still has me calling client.fetch without an override. I don’t want to do that. I’ve got this ability to swap out the implementation. I have a mockfetch.js file that looks like this:

export default function mockfetch(url, init) {
    return new Promise((resolve, reject) => {
        resolve({url: url, init: init});
    });
}

The only thing the mockfetch method does is create an already-resolved promise whose resolution contains the parameters that were passed in. Now I can finish my test:

    describe('#fetch', function() {
        let clientUrl = 'https://foo.a.com';
        let clientOptions = {fetch: mockfetch};
        let client = new Client(clientUrl, clientOptions);

        it('constructs the URL properly', function(done) {
            client.fetch('/foo')
                .then((response) => {
                    expect(response.url).to.equal('https://foo.a.com/foo');
                    done();
                })
                .catch((err) => {
                    done(err);
                });
        });
    });

Note that my mockfetch does not return anything resembling a real response – it’s not even the same object type or shape. That’s actually ok because it’s designed for what I need it to do – respond appropriately for the function under test.

There are three things here:

  1. Construct your libraries so that you can mock any external library calls
  2. Use the Mocha “done” parameter to handle async code
  3. Create mock versions of those external library calls

This makes testing async code easy.
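One more pattern worth noting, though the article only shows the happy path: a companion mock that rejects lets you exercise your .catch branches too. The name and shape here are my own sketch, not from the library:

```javascript
// Hypothetical companion to mockfetch: always rejects, for error-path tests.
function mockfetchError(url, init) {
    return Promise.reject(new Error('network failure: ' + url));
}

// Exercising it outside Mocha; the .catch keeps the rejection handled.
mockfetchError('/foo', {}).catch(function (err) {
    console.log(err.message); // network failure: /foo
});
```

In a Mocha test you would inject it the same way as mockfetch, via the options object, and assert inside the .catch clause.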

An Asynchronous Task List in Apache Cordova

I’ve been spending a bunch of time recently learning Apache Cordova with an eye towards integrating something within Azure Mobile Apps. I want to have an iOS and Android app that works with my TodoItem backend in Node. I also want it to be written in ES2015. So far, I’ve done a bunch of the basic work, but now it’s time to put an app together. You can see most of the work on my GitHub repository.

What I want to consider today is an asynchronous task list store. Each task in the TodoItem has an ID (which is a GUID converted to a string), a text description and a completed flag. I need to be able to read, insert and update records. I’m not going to deal with deletions right now, but that’s coming. To abstract this, I’m going to write an ES2015 class. Let’s start with the basics:

import uuid from 'uuid';

export default class Store {
    constructor() {
        console.info('Initializing Storage Manager');
        
        this._data = [
            { id: uuid.v1(), text: 'Item 1', complete: false },
            { id: uuid.v1(), text: 'Item 2', complete: false }
        ];
    } 
}

I’m creating two example tasks to get me started. I don’t need them, but it helps to show off the HTML coding.

At the center of asynchronous programming in JavaScript is the Promise. Put simply, a promise is a representation of something that doesn’t exist yet. It will be asynchronously resolved and your code can come back to it later. You can use promises relatively simply:

asyncFunction()
   .then((result) => {
      /* do something with the result */
   }).catch((error) => {
      /* do something with the error */
   });

You can chain multiple promises and wait for multiple promises to complete. This all results in a rather flexible mechanism to make your code asynchronous. But how do you create a promise? My Store class has an array right now. I want to make it ACT asynchronously so that I can add the network code later on. You need to write a method that either resolves or rejects the promise. Here is the insert() method:

    /**
     * Insert a new object into the database.
     * @method insert
     * @param {object} data the data to insert
     * @return {Promise} - resolve(newitem), reject(error)
     */
    insert(data) {
        data.id = uuid.v1();
        console.log('[storage-manager] insert data=', data);
        var promise = new Promise((resolve, reject) => {
            // This promise always resolves
            this._data.push(data);
            resolve(data);
        });
        
        return promise;
    } 

Creating a promise is a case of constructing a new Promise object with a callback function. The callback is passed the resolve and reject functions to call when you are done. You do your processing and then call resolve or reject to say “I’m done”. Note that I’m using “fat arrows” to preserve the value of the this variable. If you don’t use a fat-arrow function, you have to preserve this by other means. All my other functions are similar. For example:

    /**
     * Read some records based on the query.  The elements must match
     * the query
     * @method read
     * @param {object} query the things to match
     * @return {Promise} - resolve(items), reject(error)
     */
    read(query) {
        console.log('[storage-manager] read query=', query);
        var promise = new Promise((resolve, reject) => {
            var filteredData = this._data.filter((element, index, array) => {
                for (let q in query) {
                    if (query[q] !== element[q]) {
                        return false;
                    }
                }
                return true;
            });
            resolve(filteredData);
        });
        
        return promise;
    }  

This will return a list of tasks that match my criteria.
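The fat-arrow note above matters in practice. Without fat arrows, the pre-ES2015 idiom is to capture this in a local first. A sketch of insert() in that style (simplified so it runs standalone – uuid is dropped and a plain object stands in for the class):

```javascript
// Pre-ES2015 alternative: capture `this` in a local before entering the
// Promise executor, because a plain function gets its own `this`.
var store = {
    _data: [],
    insert: function (data) {
        var self = this; // preserve `this` for the callback below
        return new Promise(function (resolve, reject) {
            self._data.push(data);
            resolve(data);
        });
    }
};

store.insert({ text: 'Item 1', complete: false }).then(function (item) {
    console.log(store._data.length); // 1
});
```

The fat-arrow version in the Store class avoids the self variable entirely, which is why I prefer it.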

Using this class is encapsulated in my index.js code:

app.on('deviceready', function () {
    var taskStore = new Store();
    
    // Get the various pieces of the UX so they can be referred to later
    var el = {
        todoitems: document.querySelector('#todo-items'),
        summary: document.querySelector('#summary'),
        refreshlist: document.querySelector('#refresh-tasks'),
        addnewtask: document.querySelector('#add-new-task'),
        newitemtextbox: document.querySelector('#new-item-text')
    };
    
    // This is called whenever we want to refresh the contents from the database
    function refreshTaskList() {
      el.summary.innerHTML = 'Loading...';
      console.log('taskStore = ', taskStore);

      taskStore.read({ complete: false }).then((todoItems) => {
          console.log('Read the taskStore: items = ', todoItems);
          let count = 0;
          el.todoitems.innerHTML = ''; 
          todoItems.forEach((entry, index) => {
              let checked = entry.complete ? ' checked="checked"': '';
              let entrytext = entry.text.replace('"', '&quot;');
              let html = `<li data-todoitem-id="${entry.id}"><input type="checkbox" class="item-complete"${checked}><div><input class="item-text" value="${entrytext}"></div></li>`;
              el.todoitems.innerHTML += html;
              count++;
          });  
          el.summary.innerHTML = `<strong>${count}</strong> item(s)`;  
      }).catch((error) => {
          console.error('Error reading task store: ', error);
          el.summary.innerHTML = '<strong>Error reading store.</strong>';
      });
    }

    // Set up the event handler for clicking on Refresh Tasks
    el.refreshlist.addEventListener('click', refreshTaskList);
    refreshTaskList();
    el.newitemtextbox.focus();
});

Note the call to taskStore.read({ complete: false }). I call the read() method (above) and wait for it to return the value. This can happen asynchronously. Once it completes, the results are passed into the fat-arrow function in the then clause, which renders the list of tasks for me.

The changes to this version are quite extensive, so I encourage you to check out the code at my GitHub Repository. All the ES2015 code is in src/js with the store implementation in src/js/lib/storage-manager.js.

I still have some work to do here. Specifically, I copied (and updated) the Azure Mobile Services Quick Start for Apache Cordova and ES2015. I want to update the CSS3 code to be a little more friendly towards the devices. I also want to implement sorting and filtering. That’s the topic for another blog post.

Apache Cordova, ES2015 and Babel

I created a simple Apache Cordova app and got it working on my iOS and Android emulators in the last article. My hope was to convert the app to ECMAScript 2015 (the fancy new name for what we have been calling ES6 for the past year) and work on Browserify for the app packaging. However, the initial bits took too long, so let’s remedy that now. I’m starting from the basic app template that the cordova tool produced.

Let’s start by looking at the code that the basic app template includes (in ./www/js/index.js):

var app = {
    // Application Constructor
    initialize: function() {
        this.bindEvents();
    },
    // Bind Event Listeners
    //
    // Bind any events that are required on startup. Common events are:
    // 'load', 'deviceready', 'offline', and 'online'.
    bindEvents: function() {
        document.addEventListener('deviceready', this.onDeviceReady, false);
    },
    // deviceready Event Handler
    //
    // The scope of 'this' is the event. In order to call the 'receivedEvent'
    // function, we must explicitly call 'app.receivedEvent(...);'
    onDeviceReady: function() {
        app.receivedEvent('deviceready');
    },
    // Update DOM on a Received Event
    receivedEvent: function(id) {
        var parentElement = document.getElementById(id);
        var listeningElement = parentElement.querySelector('.listening');
        var receivedElement = parentElement.querySelector('.received');

        listeningElement.setAttribute('style', 'display:none;');
        receivedElement.setAttribute('style', 'display:block;');

        console.log('Received Event: ' + id);
    }
};

app.initialize();

This is basically a class for handling events, together with a method that contains the application code. I think I can abstract the event handling and use the EventEmitter class from Node’s events module. I like the semantics of EventEmitter a little better. Let’s take a look at my new code (which I’ve placed in src/js/index.js):

import DeviceManager from './lib/device-manager';

var app = new DeviceManager();
app.on('deviceready', function () {
  var parentElement = document.getElementById('deviceready');
  var listeningElement = parentElement.querySelector('.listening');
  var receivedElement = parentElement.querySelector('.received');

  listeningElement.setAttribute('style', 'display:none;');
  receivedElement.setAttribute('style', 'display:block;');
});

I could have used an arrow-function for the callback in app.on(), but I like the callback semantics when I’m not in a class and have no parameters. I believe it is more readable. I now need a DeviceManager class. This is stored in the file src/js/lib/device-manager.js:

import {EventEmitter} from 'events';

/**
 * A class for handling all the event handling for Apache Cordova
 * @extends EventEmitter
 */
export default class DeviceManager extends EventEmitter {
  /**
   * Create a new DeviceManager instance
   */
  constructor() {
    super();
    document.addEventListener('deviceready', this.onDeviceReady.bind(this), false);
  }

  /**
   * Handle the deviceready event
   * @see http://cordova.apache.org/docs/en/5.4.0/cordova/events/events.deviceready.html
   * @emits {deviceready} a deviceready event
   * @param {Event} e the deviceready event object
   */
  onDeviceReady(e) {
    console.debug('[DeviceManager#onDeviceReady] event = ', e);

    this.emit('deviceready', e);
  }
}

I’m preparing this for conversion into a library by including ESDoc tags for documentation. There is more to do in this class – I want to trap all the Apache Cordova events so that I can re-emit them through the EventEmitter, for example – but this is enough to get us started.

Note that there is a little extra work needed if you want to use Visual Studio Code to edit ES2015 code. Add the following jsconfig.json file:

{
    "compilerOptions": {
        "target": "ES6"
    }
}

Now that I have the code written, how does one build it? The first step is to initialize npm, which I will use as the package manager for this project:

npm init --yes

I like to answer yes to everything and then edit the file directly. In this case, I’ve set the license to MIT, added a description and updated the author. However, all of these are optional, so this single line lets me start working straight away.
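For reference, the handful of fields I touch afterwards look something like this (the values here are illustrative, not my actual details):

```json
{
  "name": "cordova-es2015",
  "version": "1.0.0",
  "description": "An Apache Cordova sample application, written in ES2015",
  "author": "Your Name <you@example.com>",
  "license": "MIT"
}
```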

A Diversion: Babel 6

My next step was to download the toolchain, which includes gulp, browserify and babelify. I always browse the project’s blog before adding a module, and it was lucky I did that in the case of Babel, as there were major changes. Here is the short version:

  1. Babel is just a transpiler framework now
  2. You must create a .babelrc file for it to work

Fortunately, getting ES2015 compilation working with the .babelrc file is simple. Here it is:

{
	"presets": [ "es2015" ]
}

If you are using Babel on the command line, I highly recommend you read the introductory blog post by the team.

Build Process: Gulp

As I pretty much always do nowadays, I’m using gulp as my build process handler. To set it up:

npm install --save-dev gulp gulp-rename vinyl-source-stream browserify babelify babel-preset-es2015

The first five modules are familiar from my prior experience with Browserify. The last one is new and brings in the ES2015 preset for Babel 6. The Gulpfile.js is relatively simple right now:

var babelify = require('babelify'),
    browserify = require('browserify'),
    gulp = require('gulp'),
    rename = require('gulp-rename'),
    source = require('vinyl-source-stream');

gulp.task('default', [ 'build' ]);
gulp.task('build', [ 'js:bundle' ]);

gulp.task('js:bundle', function () {
  // The entry file is supplied via .add() below
  var bundler = browserify({
    debug: true
  });

  return bundler
    .add('./src/js/index.js')
    .transform(babelify)
    .bundle()
    .pipe(source('./src/js/index.js'))
    .pipe(rename('index.js'))
    .pipe(gulp.dest('./www/js'));
});

This is the exact same recipe I have used before, so the semantics haven’t changed. However, you need to uninstall the old Babel 5 packages and install the new ones for this to work. Of course, this is a new project, so I didn’t have anything to uninstall. To build the package, I now need to do the following:

gulp build

To make it even easier, I’ve added the following section to the npm package.json:

  "scripts": {
    "build": "gulp build"
  },

With this code, I can now run npm run build instead of gulp directly. It’s a small level of indirection, but a common one.

Wrapping up

The build process overwrites the original www/js/index.js. Once the build is done, you should be able to use cordova run browser to check it out. You should also be able to build the iOS and Android versions and run those. There isn’t any functional change, but the code is now ES2015, and that makes me happy.

In the meantime, check out the code!

Building an ES6/JSX/React/Flux App – Part 3 – Authentication

Over the last two posts, I’ve delved into React and built my own Flux Light Architecture, all the while trying to implement the most basic of tutorials – a two-page application with client-side routing and remote data access. It’s now time to turn my attention to authentication. I’m – as ever – going to use my favorite authentication service, Auth0. Let’s first of all get authentication working, then work on how to use it.

New Actions

I need two new actions – one for logging in and one for logging out – to support authentication. These are defined in actions.js like this:

    static login(token, profile) {
        dispatcher.dispatch('LOGIN', { authToken: token, authProfile: profile });
    }

    static logout() {
        dispatcher.dispatch('LOGOUT');
    }

The Auth0 system returns a JSON Web Token and a profile object when you log in to it. These are passed along for storage into the store.

Store Adjustments

I’ve created a pair of new actions that carry data, so I need somewhere to store them. That’s done in the stores/AppStore.js file. First off, I need to initialize the data within the constructor:

    constructor() {
        super('AppStore');
        this.logger.debug('Initializing AppStore');

        this.initialize('pages', [
          { name: 'welcome', title: 'Welcome', nav: true, auth: false, default: true },
          { name: 'flickr', title: 'Flickr', nav: true, auth: false },
          { name: 'spells', title: 'Spells', nav: true, auth: true }
        ]);
        this.initialize('route', this.getNavigationRoute(window.location.hash.substr(1)));
        this.initialize('images', []);
        this.initialize('lastFlickrRequest', 0);
        this.initialize('authToken', null);
        this.initialize('authProfile', null);
    }

I also need to process the two actions – this is done in the onAction() method:

            case 'LOGIN':
                if (this.get('authToken') != null) {
                    this.logger.error('Received LOGIN action, but already logged in');
                    return;
                }
                if (data.authToken == null || data.authProfile == null) {
                    this.logger.error('Received LOGIN action with null in the data');
                    return;
                }
                this.logger.info(`Logging in with token=${data.authToken}`);
                this.set('authToken', data.authToken, true);
                this.set('authProfile', data.authProfile, true);
                this.changeStore();
                break;

            case 'LOGOUT':
                if (this.get('authToken') == null) {
                    this.logger.error('Received LOGOUT action, but not logged in');
                    return;
                }
                this.logger.info(`Logging out`);
                this.set('authToken', null, true);
                this.set('authProfile', null, true);
                this.changeStore();
                break;

Both action processors take care to ensure they are receiving the right data and that the store is in the appropriate state for the action before executing it.
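The guard behavior can be exercised in isolation with a stripped-down store. To be clear, MiniStore and its errors array are illustrative only – the real AppStore logs through this.logger and notifies views via changeStore():

```javascript
// A stripped-down store illustrating the LOGIN/LOGOUT guards above.
// MiniStore and its errors array are illustrative only - the real
// AppStore logs through this.logger and notifies views via changeStore().
class MiniStore {
  constructor() {
    this.data = { authToken: null, authProfile: null };
    this.errors = [];
  }
  get(key) { return this.data[key]; }
  set(key, value) { this.data[key] = value; }

  onAction(action, data) {
    switch (action) {
      case 'LOGIN':
        if (this.get('authToken') != null) {
          this.errors.push('Received LOGIN action, but already logged in');
          return;
        }
        if (data.authToken == null || data.authProfile == null) {
          this.errors.push('Received LOGIN action with null in the data');
          return;
        }
        this.set('authToken', data.authToken);
        this.set('authProfile', data.authProfile);
        break;
      case 'LOGOUT':
        if (this.get('authToken') == null) {
          this.errors.push('Received LOGOUT action, but not logged in');
          return;
        }
        this.set('authToken', null);
        this.set('authProfile', null);
        break;
    }
  }
}
```

A second LOGIN while already logged in, or a LOGOUT while logged out, leaves the store untouched – exactly the behavior the guards in the real store enforce.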

The UI

There were three places that needed work. The first was in the NavBar.jsx file, to bring in a NavToolbar.jsx component:

import React from 'react';
import NavBrand from './NavBrand.jsx';
import NavLinks from './NavLinks.jsx';
import NavToolbar from './NavToolbar.jsx';

class NavBar extends React.Component {
    render() {
        return (
            <header>
                <div className="_navbar">
                    <NavBrand/>
                </div>
                <div className="_navbar _navbar_grow">
                    <NavLinks pages={this.props.pages} route={this.props.route}/>
                </div>
                <div className="_navbar">
                    <NavToolbar/>
                </div>
            </header>
        );
    }
}

export default NavBar;

The second was the client/views/NavToolbar.jsx component – a new component that provides a toolbar on the right side of the navbar:

import React from 'react';
import Authenticator from './Authenticator.jsx';

class NavToolbar extends React.Component {
    render() {
        return (
          <div className="_navtoolbar">
            <ul>
              <li><Authenticator/></li>
            </ul>
          </div>
      );
    }
}

export default NavToolbar;

Finally, I needed the client/views/Authenticator.jsx component. This is a Controller-View style component. I’m using the Auth0Lock library, which can be brought in through the dependencies in package.json:

  "dependencies": {
    "auth0-lock": "^7.6.2",
    "jquery": "^2.1.4",
    "lodash": "^3.10.0",
    "react": "^0.13.3"
  },

You should also add brfs, ejsify and packageify to the devDependencies, per the Auth0 documentation. Here is the top of the client/views/Authenticator.jsx file:

import React from 'react';
import Auth0Lock from 'auth0-lock';
import Logger from '../lib/Logger';
import Actions from '../actions';
import appStore from '../stores/AppStore';

class Authenticator extends React.Component {
    constructor(props) {
        super(props);

        this.state = {
            token: null
        };
        this.logger = new Logger('Authenticator');
    }

    componentWillMount() {
        this.lock = new Auth0Lock('YOUR-CLIENT-ID', 'YOUR-DOMAIN.auth0.com');
        this.appStoreId = appStore.registerView(() => { this.updateState(); });
        this.updateState();
    }

    componentWillUnmount() {
        appStore.deregisterView(this.appStoreId);
    }

    updateState() {
        this.setState({
            token: appStore.get('authToken')
        });
    }

I don’t like having the client ID and domain embedded in the file, so I’m going to introduce a local WebAPI to solve that. Ensure you swap in your own Auth0 settings here. Other than that minor change, this is the basic Controller-View pattern. Now for the rendering:

    onClick() {
        if (this.state.token != null) {
            Actions.logout();       // Generate the logout action - we will be refreshed
            return;
        }

        this.lock.show((err, profile, token) => {
            this.lock.hide();
            if (err) {
                this.logger.error(`Error in Authentication: `, err);
                return;
            }
            Actions.login(token, profile);
        });
    }

    render() {
        let icon = (this.state.token == null) ? 'fa fa-sign-in' : 'fa fa-sign-out';
        let handler = event => { return this.onClick(event); };

        return (
            <span className="_authenticator" onClick={handler}>
                <i className={icon}></i>
            </span>
        );
    }

The render() method registers a click handler (the onClick() method) and then sets the icon that is displayed based on whether the current state is signed in or signed out. The onClick() method above it handles showing the lock. Once the response is received from the Auth0 system, I initiate an action to log the user in. If the user was logged in, the click initiates the logout action.

There is a mechanism (redirect mode in Auth0 Lock) that allows you to show the lock, after which the page is refreshed with a new hash containing the token; you can then store the token and restore the original page. That is all sorts of ugly to implement and follow. I like this version for its simplicity: I store the authentication state and values in the store, use actions to get that data there, and don’t refresh the page.

Checking Authentication

I have a page within this app, called spells, that requires authentication. It never gets displayed because the code in NavLinks.jsx has logic to prevent it. Let’s fix that now.

First, NavLinks.jsx needs a new boolean property called authenticated:

NavLinks.propTypes = {
    authenticated: React.PropTypes.bool.isRequired,
    pages: React.PropTypes.arrayOf(
            React.PropTypes.shape({
                auth: React.PropTypes.bool,
                nav: React.PropTypes.bool,
                name: React.PropTypes.string.isRequired,
                title: React.PropTypes.string.isRequired
            })
        ).isRequired,
    route: React.PropTypes.string.isRequired
};

I can also change the logic within the visibleLinks to check the authenticated property:

        let visibleLinks = this.props.pages.filter(page => {
            if (this.props.authenticated === true) {
                return (page.nav === true);
            } else {
                return (page.nav === true && page.auth === false);
            }
        });

Now, I need to ensure that the NavBar and the AppView pass the authentication state down the tree of components. That means adding the authenticated property to NavBar (I’ll leave that to you – it’s in the repository) and including it in the NavLinks call:

<NavLinks pages={this.props.pages} route={this.props.route} authenticated={this.props.authenticated}/>

That also means AppView.jsx must provide it to the NavBar. This is a little more extensive. First of all, I’ve updated the state in the constructor to include an authenticated property:

        this.state = {
            pages: [],
            route: 'welcome',
            authenticated: false
        };

That means updateState() must be updated to account for the new state variable:

    updateState() {
        let token = appStore.get('authToken');
        this.setState({
            route: appStore.get('route'),
            pages: appStore.get('pages'),
            authenticated: token != null
        });
    }

Finally, I can push this state down to the NavBar:

        return (
            <div id="pagehost">
                <NavBar pages={this.state.pages} route={this.state.route} authenticated={this.state.authenticated}/>
                <Route/>
            </div>
        );

With this code, the Spells link will only appear when the user is authenticated.

Requesting API Data

So far, I’ve created an application that can re-jig itself based on the authentication state. But it’s all stored on the client. The authentication state is only useful if you request data from a remote server. I happen to have a Web API called /api/spells that must be used with a valid Auth0 token. You can read about it in a prior post, so I’m not going to cover it here. Suffice it to say, I can’t get data from that API without submitting a proper Auth0 JWT. The code in the repository uses User Secrets to store the Auth0 client secret that is required to decode the JWT. If you are using the RTM version of Visual Studio 2015, right-click on the project and select Manage User Secrets. Your user secrets should look something like this:

{
  "JWT": {
    "Domain": "YOUR-DOMAIN.auth0.com",
    "ClientID": "YOUR-CLIENT-ID",
    "ClientSecret": "YOUR-CLIENT-SECRET"
  }
}

If you run the application and browse to /api/settings, you should see the Domain and ClientID. If you browse to /api/spells, you should get a 401 response.

I can now use the same technique I used when requesting the Flickr data. Firstly, create two actions – one for the request and one for the response (in actions.js):

    static requestSpellsData() {
        dispatcher.dispatch('REQUEST-AUTHENTICATED-API', {
            api: '/api/spells',
            callback: Actions.processSpellsData
        });
    }

    static processSpellsData(data) {
        dispatcher.dispatch('PROCESS-SPELLS-DATA', data);
    }

Then, alter the Store to handle the request and response. This is a place where the request may be handled in one store and the response in a different one. I have a generic action that says “call an API with authentication”; it then sends the data to whatever action I tell it to. If I had a SpellsStore, that store could process the spells data on the return. It’s this disjoint method of handling the API call and response that allows me to have stores that don’t depend on one another. I’ve added the following to the constructor of stores/AppStore.js:

this.initialize('spells', []);

I’ve also added the following to the case statement in onAction():

            case 'REQUEST-AUTHENTICATED-API':
                if (this.get('authToken') == null) {
                    this.logger.error('Received REQUEST-AUTHENTICATED-API without authentication');
                    return;
                }
                let token = this.get('authToken');
                $.ajax({
                    url: data.api,
                    dataType: 'json',
                    headers: { 'Authorization': `Bearer ${token}` }
                }).done(response => {
                    data.callback(response);
                });
                break;

            case 'PROCESS-SPELLS-DATA':
                this.logger.info('Received Spells Data: ', data);
                this.set('spells', data);
                break;

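To see why this decoupling works, here is a minimal sketch of the round trip, with the dispatcher reduced to a list of handlers and $.ajax replaced by a fake API (all names here are illustrative):

```javascript
// Sketch of the decoupled request/response flow: the generic API action
// does not know which store consumes the response - it just invokes the
// callback action it was given. fakeApi stands in for $.ajax.
const handlers = [];
const dispatcher = {
  register(fn) { handlers.push(fn); },
  dispatch(action, data) { handlers.forEach(fn => fn(action, data)); }
};

const fakeApi = (url, cb) => cb([{ name: 'Fireball', level: 3 }]);

let spells = [];
dispatcher.register((action, data) => {
  switch (action) {
    case 'REQUEST-AUTHENTICATED-API':
      // Generic: call the API, then hand the response to the callback action
      fakeApi(data.api, response => data.callback(response));
      break;
    case 'PROCESS-SPELLS-DATA':
      // Specific: only this handler knows what spells data means
      spells = data;
      break;
  }
});

// The component only knows the request action; the response comes back
// through a second, independent action.
dispatcher.dispatch('REQUEST-AUTHENTICATED-API', {
  api: '/api/spells',
  callback: data => dispatcher.dispatch('PROCESS-SPELLS-DATA', data)
});
```

Swap the PROCESS-SPELLS-DATA handler into a different store and nothing about the request side changes – which is the whole point.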
Finally, I can adjust the views/Spells.jsx file to be converted to a Controller-View and request the data. I’ve already done this for the views/Flickr.jsx. You can check out my work on the GitHub repository.
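Stripped of React, the Controller-View wiring for Spells.jsx looks roughly like this. The store stub mirrors the registerView/deregisterView/get API used in this series, and the plain class stands in for the React lifecycle methods:

```javascript
// Framework-free sketch of the Controller-View wiring.
// StoreStub mirrors the AppStore API used in this series; the plain
// class stands in for the React component lifecycle.
class StoreStub {
  constructor() {
    this.views = new Map();
    this.data = { spells: [] };
    this.nextId = 1;
  }
  registerView(cb) { const id = this.nextId++; this.views.set(id, cb); return id; }
  deregisterView(id) { this.views.delete(id); }
  get(key) { return this.data[key]; }
  set(key, value) {
    this.data[key] = value;
    this.views.forEach(cb => cb());  // notify registered views of the change
  }
}

class SpellsControllerView {
  constructor(store) {
    this.store = store;
    this.state = { spells: [] };
  }
  componentWillMount() {
    // Register for store changes and pull the initial state
    this.storeId = this.store.registerView(() => this.updateState());
    this.updateState();
  }
  componentWillUnmount() {
    this.store.deregisterView(this.storeId);
  }
  updateState() {
    this.state = { spells: this.store.get('spells') };
  }
}
```

The real component would call this.setState() in updateState() and render the spells list; everything else is identical to the Authenticator wiring shown earlier.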

I’ve done something similar with the Settings API. The request doesn’t require authentication, so I just process it. I also cache the results (if the settings have been received, I don’t need to ask for them again). This data is stored as ‘authSettings’ in the store. I then added the authSettings to the state in the views/Authenticator.jsx component. I also need to trigger the settings grab – I do this in the views/Authenticator.jsx component via the componentWillMount() method:

    componentWillMount() {
        if (this.lock == null && this.state.settings != null) {
            this.lock = new Auth0Lock(this.state.settings.ClientID, this.state.settings.Domain);
        } else {
            Actions.requestSettingsData();
        }
        this.appStoreId = appStore.registerView(() => { this.updateState(); });
        this.updateState();
    }

I don’t want the authenticator to be clickable until the settings have been received, so I added the following to the top of the render() method:

    render() {
        // Additional code for the spinner while the settings are loaded
        if (this.state.settings == null) {
            return (
                <span className="_authenticator">
                    <i className="fa fa-spinner fa-pulse"></i>
                </span>
            );
        }

This puts a spinner in the place of the login/logout icon until the settings are received.

Wrap-up

One of the biggest differences between MVC and Flux is the data flow. In the MVC architecture, you have a Datastore object that issues the requests to the backend and somehow updates the model, which then informs the controller via a callback (since it’s async). It feels hacky. MVC really works well when the controller can just do the request to get the model from the data store and get the response back to feed the view. Flux feels right in the async front-end development world – much more so than the MVC model.

The Flux architecture provides for a better flow of data and allows the easy integration of caching algorithms that are right for the environment. If you want to cache across restarts of the application (aka page refreshes), then you can store the data in localStorage. If you want to support a server-side refresh (for example, for alerting), then you can integrate SignalR into the system and let SignalR generate the action.
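A sketch of the localStorage option: the storage object is injected so the sketch runs outside a browser (in the app it would be window.localStorage), and the cache key is illustrative:

```javascript
// Sketch: persist store data across page refreshes via localStorage.
// The storage object is injected (window.localStorage in the browser)
// so the sketch can be exercised outside a browser; the key name is
// illustrative.
class CachingStore {
  constructor(name, storage) {
    this.key = `fluxcache:${name}`;
    this.storage = storage;
    // Restore any previously cached data on construction
    const cached = this.storage.getItem(this.key);
    this.data = cached ? JSON.parse(cached) : {};
  }
  get(key) { return this.data[key]; }
  set(key, value) {
    this.data[key] = value;
    // Write through to storage so a page refresh picks the data back up
    this.storage.setItem(this.key, JSON.stringify(this.data));
  }
}
```

Because the caching lives entirely inside the store, none of the actions, dispatcher or views need to know it is happening.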

As you can guess, I’m loving the Flux architecture. Once I got my head around the flow of data, it became very easy to understand. Code you understand is much easier to debug.

As always, you can get my code on my GitHub Repository.