Configuring ASP.NET Core Applications in Azure App Service

In my last article, I introduced my plan to see what it would take to run an Azure Mobile Apps-compatible service in ASP.NET Core. There are lots of potential problems here, and I need to deal with them one by one. The first article covered how to get diagnostic logging working in Azure App Service; today’s article shows how to deal with configuration in Azure App Service.

There are two major ways to configure your application in Azure App Service. The first is via App Settings and the second is via Data Connections. App Settings appear as environment variables with the prefix APPSETTING_. For example, if you have an app setting called DEBUGMODE, you can access it via Environment.GetEnvironmentVariable("APPSETTING_DEBUGMODE"). An interesting side note: if you configure App Service Push or Authentication, those settings appear as app settings to your application as well.

Data Connections provide a mechanism for accessing connection strings. If you add a Data Connection called MS_TableConnectionString (the default for Azure Mobile Apps), you will see an environment variable called SQLAZURECONNSTR_MS_TableConnectionString. The prefix encodes the type of connection; the rest is the connection string name.
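For example, a quick way to see the raw values is to read the environment directly. This is just a sketch (the EnvironmentProbe class is an illustration, assuming the DEBUGMODE app setting and MS_TableConnectionString data connection mentioned above):

using System;

public static class EnvironmentProbe
{
    public static void Dump()
    {
        // App Settings are prefixed with APPSETTING_
        var debugMode = Environment.GetEnvironmentVariable("APPSETTING_DEBUGMODE");

        // Data Connections encode the connection type (here SQLAZURE) into the prefix
        var connectionString = Environment.GetEnvironmentVariable("SQLAZURECONNSTR_MS_TableConnectionString");

        Console.WriteLine($"DEBUGMODE = {debugMode}");
        Console.WriteLine($"MS_TableConnectionString = {connectionString}");
    }
}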

Configuration in ASP.NET Core

The .NET Core configuration framework is solid and flexible, supporting a variety of sources – JSON, XML and INI files, as well as environment variables, are all supported out of the box. You will generally see code like this in the constructor of the Startup.cs file:

        public Startup(IHostingEnvironment env)
        {
            var builder = new ConfigurationBuilder()
                .SetBasePath(env.ContentRootPath)
                .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
                .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
                .AddEnvironmentVariables();
            Configuration = builder.Build();
        }

        public IConfigurationRoot Configuration { get; }

There are a couple of problems with this, which I will illustrate by adding a view that displays the current configuration. Firstly, add a service in the ConfigureServices() method in Startup.cs:

        // This method gets called by the runtime. Use this method to add services to the container.
        public void ConfigureServices(IServiceCollection services)
        {
            // Add Configuration as a service
            services.AddSingleton<IConfiguration>(Configuration);

            // Add framework services.
            services.AddMvc();
        }

I can now add an action to the Controllers\HomeController.cs:

        public IActionResult Configuration([FromServices] IConfiguration service)
        {
            ViewBag.Configuration = service.AsEnumerable();
            return View();
        }

The [FromServices] attribute allows me to use dependency injection to inject the singleton service I registered earlier. This provides access to the configuration in just this method. I assign the enumeration of all the configuration elements to the ViewBag for later display. I’ve also added a Views\Home\Configuration.cshtml file:

<h1>Configuration</h1>

<div class="row">
    <table class="table table-striped">
        <thead>
            <tr>
                <th>Key</th>
                <th>Value</th>
            </tr>
        </thead>
        <tbody>
            @foreach (var item in ViewBag.Configuration)
            {
                <tr>
                    <td>@item.Key</td>
                    <td>@item.Value</td>
                </tr>
            }
        </tbody>
    </table>
</div>

If I run this code within a properly configured App Service (one with an associated SQL service attached via Data Connections), then I will see all the environment variables and app settings listed on the page. In addition, the environment variables configuration module has added a pair of configuration elements for me – one named ConnectionStrings:MS_TableConnectionString with the connection string, and the other called ConnectionStrings:MS_TableConnectionString_ProviderName.

The problems with this approach are numerous:

  • All environment variables override my configuration. Azure App Service is a managed service, so they can add any environment variables they want at any time and that may clobber my configuration.
  • The environment variables are not organized in any way and rely on convention.
  • Many of the environment variables are not relevant to my app – they are relevant to Azure App Service.

A Better Configuration Module

Rather than use the default environment variables module, I’m going to write a custom configuration provider for Azure App Service. When developing locally, you can use the “right” environment variables or a local JSON file to do the configuration. If I were expressing the Azure App Service configuration in JSON, it might look like this:

{
    "ConnectionStrings": {
        "MS_TableConnectionString": "my-connection-string"
    },
    "Data": {
        "MS_TableConnectionString": {
            "Type": "SQLAZURE",
            "ConnectionString": "my-connection-string"
        }
    },
    "AzureAppService": {
        "AppSettings": {
            "MobileAppsManagement_EXTENSION_VERSION": "latest"
        },
        "Auth": {
            "Enabled": "True",
            "SigningKey": "some-long-string",
            "AzureActiveDirectory": {
                "ClientId": "my-client-id",
                "ClientSecret": "my-client-secret",
                "Mode": "Express"
            }
        },
        "Push": {
            // ...
        }
    }
}

This is a much better configuration pattern in that it organizes the settings and does not pollute the configuration namespace with every environment variable. I like having the Data block carry associated information about the connection string, rather than relying on the convention of appending _ProviderName to the name. Duplicating the connection string means I can use either Configuration.GetConnectionString() or Configuration.GetSection("Data:MS_TableConnectionString") to get the information I need. I’m envisioning releasing this library at some point, so providing options like this is a good idea.
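For example, here is a sketch of both retrieval options, assuming the JSON above has been loaded into the Configuration property from earlier:

// Option 1: the standard helper reads the ConnectionStrings section
var connectionString = Configuration.GetConnectionString("MS_TableConnectionString");

// Option 2: the Data section carries the type alongside the connection string
var dataSection = Configuration.GetSection("Data:MS_TableConnectionString");
var type = dataSection["Type"];                         // "SQLAZURE"
var sameConnectionString = dataSection["ConnectionString"];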

Writing a new configuration provider is easy. There are three files:

  • An extension to the ConfigurationBuilder to bring in your configuration source
  • A configuration source that references the configuration provider
  • The configuration provider

The first two tend to be boilerplate code. Here is the AzureAppServiceConfigurationBuilderExtensions.cs file:

using Microsoft.Azure.AppService.Core.Configuration;

namespace Microsoft.Extensions.Configuration
{
    public static class AzureAppServiceConfigurationBuilderExtensions
    {
        public static IConfigurationBuilder AddAzureAppServiceSettings(this IConfigurationBuilder builder)
        {
            return builder.Add(new AzureAppServiceSettingsSource());
        }
    }
}

Note that I’ve placed the class in the same namespace as the other configuration builder extensions. This means you don’t need a using statement to use this extension method. It’s a small thing.

Here is the AzureAppServiceSettingsSource.cs file:

using System;
using Microsoft.Extensions.Configuration;

namespace Microsoft.Azure.AppService.Core.Configuration
{
    internal class AzureAppServiceSettingsSource : IConfigurationSource
    {
        public IConfigurationProvider Build(IConfigurationBuilder builder)
        {
            return new AzureAppServiceSettingsProvider(Environment.GetEnvironmentVariables());
        }
    }
}

The source just provides a new provider. Note that I pass in the environment to the provider. This allows me to mock the environment later on for unit testing. I’ve placed the three files (the two above and the next one) in their own library project within the solution. This allows me to easily write unit tests later on and it allows me to package and distribute the library if I wish.

All the work for converting the environment to a configuration is done in the AzureAppServiceSettingsProvider.cs file (with apologies for the length):

using System.Collections;
using Microsoft.Extensions.Configuration;
using System.Text.RegularExpressions;
using System.Collections.Generic;

namespace Microsoft.Azure.AppService.Core.Configuration
{
    internal class AzureAppServiceSettingsProvider : ConfigurationProvider
    {
        private IDictionary env;

        /// <summary>
        /// Where all the app settings should go in the configuration
        /// </summary>
        private const string SettingsPrefix = "AzureAppService";

        /// <summary>
        /// The regular expression used to match the key in the environment for Data Connections.
        /// </summary>
        private Regex DataConnectionsRegexp = new Regex(@"^([A-Z]+)CONNSTR_(.+)$");

        /// <summary>
        /// Mapping from environment variable to position in configuration - explicit cases
        /// </summary>
        private Dictionary<string, string> specialCases = new Dictionary<string, string>
        {
            { "WEBSITE_AUTH_CLIENT_ID",                 $"{SettingsPrefix}:Auth:AzureActiveDirectory:ClientId" },
            { "WEBSITE_AUTH_CLIENT_SECRET",             $"{SettingsPrefix}:Auth:AzureActiveDirectory:ClientSecret" },
            { "WEBSITE_AUTH_OPENID_ISSUER",             $"{SettingsPrefix}:Auth:AzureActiveDirectory:Issuer" },
            { "WEBSITE_AUTH_FB_APP_ID",                 $"{SettingsPrefix}:Auth:Facebook:ClientId" },
            { "WEBSITE_AUTH_FB_APP_SECRET",             $"{SettingsPrefix}:Auth:Facebook:ClientSecret" },
            { "WEBSITE_AUTH_GOOGLE_CLIENT_ID",          $"{SettingsPrefix}:Auth:Google:ClientId" },
            { "WEBSITE_AUTH_GOOGLE_CLIENT_SECRET",      $"{SettingsPrefix}:Auth:Google:ClientSecret" },
            { "WEBSITE_AUTH_MSA_CLIENT_ID",             $"{SettingsPrefix}:Auth:MicrosoftAccount:ClientId" },
            { "WEBSITE_AUTH_MSA_CLIENT_SECRET",         $"{SettingsPrefix}:Auth:MicrosoftAccount:ClientSecret" },
            { "WEBSITE_AUTH_TWITTER_CONSUMER_KEY",      $"{SettingsPrefix}:Auth:Twitter:ClientId" },
            { "WEBSITE_AUTH_TWITTER_CONSUMER_SECRET",   $"{SettingsPrefix}:Auth:Twitter:ClientSecret" },
            { "WEBSITE_AUTH_SIGNING_KEY",               $"{SettingsPrefix}:Auth:SigningKey" },
            { "MS_NotificationHubId",                   $"{SettingsPrefix}:Push:NotificationHubId" }
        };

        /// <summary>
        /// Mapping from environment variable to position in configuration - scoped cases
        /// </summary>
        private Dictionary<string, string> scopedCases = new Dictionary<string, string>
        {
            { "WEBSITE_AUTH_", $"{SettingsPrefix}:Auth" },
            { "WEBSITE_PUSH_", $"{SettingsPrefix}:Push" }
        };

        /// <summary>
        /// Authentication providers need to be done before the scoped cases, so their mapping
        /// is separate from the scoped cases
        /// </summary>
        private Dictionary<string, string> authProviderMapping = new Dictionary<string, string>
        {
            { "WEBSITE_AUTH_FB_",          $"{SettingsPrefix}:Auth:Facebook" },
            { "WEBSITE_AUTH_GOOGLE_",      $"{SettingsPrefix}:Auth:Google" },
            { "WEBSITE_AUTH_MSA_",         $"{SettingsPrefix}:Auth:MicrosoftAccount" },
            { "WEBSITE_AUTH_TWITTER_",     $"{SettingsPrefix}:Auth:Twitter" }
        };

        public AzureAppServiceSettingsProvider(IDictionary env)
        {
            this.env = env;
        }

        /// <summary>
        /// Loads the appropriate settings into the configuration.  The Data object is provided for us
        /// by the ConfigurationProvider
        /// </summary>
        /// <seealso cref="Microsoft.Extensions.Configuration.ConfigurationProvider"/>
        public override void Load()
        {
            foreach (DictionaryEntry e in env)
            {
                string key = e.Key as string;
                string value = e.Value as string;

                var m = DataConnectionsRegexp.Match(key);
                if (m.Success)
                {
                    var type = m.Groups[1].Value;
                    var name = m.Groups[2].Value;

                    if (!key.Equals("CUSTOMCONNSTR_MS_NotificationHubConnectionString"))
                    {
                        Data[$"Data:{name}:Type"] = type;
                        Data[$"Data:{name}:ConnectionString"] = value;
                    }
                    else
                    {
                        Data[$"{SettingsPrefix}:Push:ConnectionString"] = value;
                    }
                    Data[$"ConnectionStrings:{name}"] = value;
                    continue;
                }

                // If it is a special case, then handle it through the mapping and move on
                if (specialCases.ContainsKey(key))
                {
                    Data[specialCases[key]] = value;
                    continue;
                }

                // A special case for AUTO_AAD
                if (key.Equals("WEBSITE_AUTH_AUTO_AAD"))
                {
                    Data[$"{SettingsPrefix}:Auth:AzureActiveDirectory:Mode"] = value.Equals("True") ? "Express" : "Advanced";
                    continue;
                }

                // Scoped Cases for authentication providers
                if (dictionaryMappingFound(key, value, authProviderMapping))
                {
                    continue;
                }

                // Other scoped cases (not auth providers)
                if (dictionaryMappingFound(key, value, scopedCases))
                {
                    continue;
                }

                // Other internal settings
                if (key.StartsWith("WEBSITE_") && !containsMappedKey(key, scopedCases))
                {
                    var setting = key.Substring(8);
                    Data[$"{SettingsPrefix}:Website:{setting}"] = value;
                    continue;
                }

                // App Settings - anything not in the WEBSITE section
                if (key.StartsWith("APPSETTING_") && !key.StartsWith("APPSETTING_WEBSITE_"))
                {
                    var setting = key.Substring(11);
                    Data[$"{SettingsPrefix}:AppSettings:{setting}"] = value;
                    continue;
                }

                // Everything else goes into the Environment section
                if (!key.StartsWith("APPSETTING_"))
                {
                    Data[$"Environment:{key}"] = value;
                }
            }
        }

        /// <summary>
        /// Determines if the key starts with any of the keys in the mapping
        /// </summary>
        /// <param name="key">The environment variable</param>
        /// <param name="mapping">The mapping dictionary</param>
        /// <returns></returns>
        private bool containsMappedKey(string key, Dictionary<string, string> mapping)
        {
            foreach (var start in mapping.Keys)
            {
                if (key.StartsWith(start))
                {
                    return true;
                }
            }
            return false;
        }

        /// <summary>
        /// Handler for a mapping dictionary
        /// </summary>
        /// <param name="key">The environment variable to check</param>
        /// <param name="value">The value of the environment variable</param>
        /// <param name="mapping">The mapping dictionary</param>
        /// <returns>true if a match was found</returns>
        private bool dictionaryMappingFound(string key, string value, Dictionary<string, string> mapping)
        {
            foreach (string start in mapping.Keys)
            {
                if (key.StartsWith(start))
                {
                    var setting = key.Substring(start.Length);
                    Data[$"{mapping[start]}:{setting}"] = value;
                    return true;
                }
            }
            return false;
        }
    }
}

Unfortunately, there are a lot of special cases here to handle how I want to lay out my configuration. However, the basic flow is handled in the Load() method. It cycles through the environment; if an environment variable matches one of the ones I watch for, I add it to the Data[] dictionary, which becomes the configuration. Anything that doesn’t match is added to the default Environment section of the configuration. The ConfigurationProvider class that I inherit from handles all the other lifecycle requirements for the provider, so I don’t need to be concerned with them.
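Wiring the new provider into the application is then a one-line change in the Startup constructor – a sketch, using the AddAzureAppServiceSettings() extension method defined earlier in place of AddEnvironmentVariables():

        public Startup(IHostingEnvironment env)
        {
            var builder = new ConfigurationBuilder()
                .SetBasePath(env.ContentRootPath)
                .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
                .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
                .AddAzureAppServiceSettings();  // replaces AddEnvironmentVariables()
            Configuration = builder.Build();
        }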

Testing the Configuration Module

I’ve done some pre-work to aid in testability. Firstly, I’ve segmented the library component into its own project. Secondly, I’ve added a “mocking” capability for the environment. The default environment is passed in from the source class, but I can instantiate the provider in my test class with a suitable dictionary. The xUnit site covers how to set up a simple test, although Visual Studio 2017 has a specific xUnit test suite project template (look for xUnit Test Project (.NET Core) in the project templates list).

My testing process is relatively simple – given a suitable environment, does it produce the right configuration? I’ll have a test routine for each of the major sections – connection strings, special cases and scoped cases, and others. Then I’ll copy my environment from a real App Service and see if that causes issues. I get my environment settings from Kudu – also known as Advanced Tools in your App Service menu in the Azure portal. Here is an example of one of the tests:

        [Fact]
        public void CreatesDataConnections()
        {
            var env = new Dictionary<string, string>()
            {
                { "SQLCONNSTR_MS_TableConnectionString", "test1" },
                { "SQLAZURECONNSTR_DefaultConnection", "test2" },
                { "SQLCONNSTRMSTableConnectionString", "test3" }
            };
            var provider = new AzureAppServiceSettingsProvider(env);
            provider.Load();

            string r;
            Assert.True(provider.TryGet("Data:MS_TableConnectionString:Type", out r));
            Assert.Equal("SQL", r);
            Assert.True(provider.TryGet("Data:MS_TableConnectionString:ConnectionString", out r));
            Assert.Equal("test1", r);

            Assert.True(provider.TryGet("Data:DefaultConnection:Type", out r));
            Assert.Equal("SQLAZURE", r);
            Assert.True(provider.TryGet("Data:DefaultConnection:ConnectionString", out r));
            Assert.Equal("test2", r);

            Assert.False(provider.TryGet("Data:MSTableConnectionString:Type", out r));
            Assert.False(provider.TryGet("Data:MSTableConnectionString:ConnectionString", out r));
        }

This test ensures that the typical connection strings get placed into the right Data structure within the configuration. You can run the tests within Visual Studio 2017 via Test > Windows > Test Explorer, then click Run All – the projects will be built and the tests discovered.
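If you prefer the command line, the same tests can be run with the .NET Core CLI from the test project directory:

dotnet test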

I’m keeping my code on GitHub, so you can find this code (including the entire test suite) in my GitHub Repository at tag p4.

The Most Popular Articles of the Year

I suspect there may be a bunch of blog posts around the Internet that wrap up the year. Here are the most popular articles on my blog for the year:

React with ES6 and JSX

In fifth place, I did a series of articles on working with ECMAScript 2015 and React/Flux, working on getting a typical application working. I also poked into some stage0 proposals for ECMAScript7. I really enjoy working with React, but I’m torn between Custom Elements (and Polymer specifically) and React. Custom Elements are more standard – React is more popular. I’ll be revisiting this again next year (which is in 24 hours, but I’ll likely take longer than that).

Aurelia – a new framework for ES6

In fourth place, people were interested in how I would do my test tutorial with Aurelia. Aurelia is a really interesting framework and I prefer it over Ember and Angular. The learning curve is relatively small, although I will have to revisit the whole framework discussion as Angular 2 and Ember next-gen are coming out. This tutorial included using authentication with Auth0 and accessing remote resources.

ASP.NET MVC6 and Bootstrap

A one-off article on adding Bootstrap to ASP.NET MVC6 applications came in third place. There are other Bootstrap posts that are also interesting, including one that got made into a video.

Talking of ASP.NET MVC6

With the next revision of ASP.NET imminent, I took several strolls through the alpha and beta releases of that framework. There is a lot to like about it and a lot that is familiar. I’ve mostly switched over to a NodeJS environment now, so I’m not expecting to do much more in this area, but it is a much nicer environment than the old ASP.NET.

And finally, Visual Studio Tooling!

Fueled in large part by a link from the ASP.NET Community Articles page, the #1 page for the year was an article I wrote that described the Web Development extensions I used in Visual Studio. It also generated the most discussion with lots of people telling me about their favorite extensions. I’m using Visual Studio Code more these days – it’s lighter weight. I still love this list though.

Next Year

2015 was definitely the year that frameworks changed – In .NET land we got a look at the next revision of the ASP.NET framework, and in JavaScript land we got Aurelia, React, Flux, Relay, Angular-2, ES2015, Web Components, and several new versions of Node. I hope the framework releases calm down in 2016 so we can start sorting out the good from the bad and ugly. I’m going to take new looks at all this and work on my side projects. I hope you will continue the journey with me.

Browser Testing with PhantomJS and Mocha – Part 1

If you have been following along for the past couple of weeks, you will know that I’ve been writing a browser library recently. I’m writing the library in ES2015 and then transpiling it into UMD.

A sidebar on bugs in BabelJS
I did bump into a bug when transpiling into the UMD module format. The bug affects pretty much all of the module transforms, and manifests as a ‘Maximum Call Stack Exceeded’ error with _typeof. The bug is T6777. There is a workaround, which is to add a typeof undefined; line at the top of your library.

Back to the problem at hand. I’ve already used Mocha to test my library and I use mocks to attempt to exercise the code, but at some point you have to run it in a browser. There are two steps to this. The first is to set up a test system that runs in a browser, and the second is to run the test system through a headless browser so it can be automated. Let’s tackle the first step today.

My library is a client library to access a remote AJAX environment. I want the library to use either a provided URL or the URL the page was loaded from – whichever is appropriate. As a result, I need to serve the files over HTTP – loading from a file:// URL isn’t good enough. To handle this, I’m going to:

  • Create a local test server
  • Load the files into a static service area
  • Run the pages in a browser

To this end, I’ve got a Gulp task that builds my server:

var gulp = require('gulp'),
    babel = require('gulp-babel'),
    concat = require('gulp-concat'),
    sourcemaps = require('gulp-sourcemaps'),
    config = require('../configuration');

module.exports = exports = function() {
    return gulp.src(config.source.files)
        .pipe(sourcemaps.init())
        .pipe(concat('MyLibrary.js'))
        .pipe(babel())
        .pipe(sourcemaps.write('.'))
        .pipe(gulp.dest(config.destination.directory));
};

I store my gulp tasks in a separate file – one file per task. I then require the file in the main Gulpfile.js:

var gulp = require('gulp');

gulp.task('build', require('./gulp/tasks/build'));
gulp.task('buildserver', require('./gulp/tasks/buildserver'));

I now have a MyLibrary.js file and a MyLibrary.js.map file in the dist directory. Building the server area is just as easy:

var gulp = require('gulp'),
    config = require('../configuration');

// Builds the server.rootdir up to service test files
module.exports = exports = function() {
    return gulp.src(config.test.server.files)
        .pipe(gulp.dest(config.test.server.rootdir));
};

My configuration.js exposes a list of files like this:

module.exports = exports = {
    source: {
        files: [ 'src/**/*.js' ]
    },
    destination: {
        directory: 'dist'
    },
    test: {
        mocha: [ 'test/**/*.js' ],
        server: {
            files: [
                'browser-tests/global.html',
                'browser-tests/global-tests.js',
                'dist/MyLibrary.js',
                'dist/MyLibrary.js.map',
                'node_modules/chai/chai.js',
                'node_modules/mocha/mocha.css',
                'node_modules/mocha/mocha.js'
            ],
            port: 3000,
            rootdir: 'www'
        }
    }
};

Take a look at the test.server.files object. That contains three distinct sections – the browser test files (more on those in a moment), the library files under test and the testing libraries. You should already have these installed, but if you don’t, you can install them:

npm install --save-dev mocha chai

I will have a www directory with all the files I need in it once I run the gulp buildserver command. Next, I need a server. I use ExpressJS for this. First off, install ExpressJS:

npm install --save-dev express

Note that this is a dev install – not a production install, hence the use of the --save-dev flag. I want express listed in devDependencies. Now, on to the server code, which I place in testserver.js:

var express = require('express'),
    config = require('./gulp/configuration');

var app = express();
app.use(express.static(config.test.server.rootdir));
app.listen(config.test.server.port || 3000, function() {
    console.info('Listening for connections');
});

This is about the most basic configuration for an ExpressJS server you can get. I’m serving static pages from the area I’ve built. That’s enough infrastructure – now, how about running tests? I’ve got two files in my files list that I have not written yet. The first is a test file called global-tests.js and the other is an HTML file that sets up the test run – called global.html. The global-tests.js is a pretty normal Mocha test suite:

/* global describe, it, chai, MyLibrary */
var expect = chai.expect;

describe('MyLibrary.Client - Global Browser Object', function () {
    it('should have an MyLibrary global object', function () {
        expect(MyLibrary).to.be.a('object');
    });

    it('should have an MyLibrary.Client method', function () {
        expect(MyLibrary.Client).to.be.a('function');
    });

    it('should create a Client object when run in a browser', function () {
        var client = new MyLibrary.Client();
        expect(client).to.be.an.instanceof(MyLibrary.Client);
    });

    it('should set the url appropriately', function () {
        var client = new MyLibrary.Client();
        expect(client.url).to.equal('http://localhost:3000');
    });

    it('should set the environment appropriately', function () {
        var client = new MyLibrary.Client();
        expect(client.environment).to.equal('web/globals');
    });
});

There are a couple of changes. Firstly, this code is going to run in the browser, so you must write your tests for that environment. Secondly, it expects that the test framework is already established – the chai library must be pre-loaded. One other thing to note is that this is a minimal test load. The majority of the testing is done inside my standard Mocha test run. As long as your tests exercise all paths within the code across the two test suites (the standard Mocha tests and the browser tests), you will be ok. I only test things that need the browser in order to test them.

The global.html test file sets up the tests, loads the appropriate libraries and then executes the tests:

<!DOCTYPE html>
<html>

<head>
    <title>Mocha Test File: Global Library Definition</title>
    <meta charset="utf-8">
    <link rel="stylesheet" href="mocha.css">
</head>

<body>
    <div id="mocha"></div>
    <script src="mocha.js"></script>
    <script src="chai.js"></script>
    <script>
        mocha.setup('bdd');
        mocha.reporter('html');
    </script>
    <script src="MyLibrary.js"></script>
    <script src="global-tests.js"></script>
    <script>
        mocha.run();
    </script>
</body>

</html>

I intend to write a test file for each of the global object version, the AMD module definition and browserify, to ensure that the library runs in all environments. Each environment will have its own HTML file and test suite file. I can include as many of these sets as I want.

Running the tests

Running the tests at this stage is a two-step process. First, you start the server:

node testserver.js

Secondly, you browse to http://localhost:3000/global.html – note the initiator for your test suite is the HTML file. If you have done everything properly, the tests will just work:

[Screenshot: the Mocha HTML reporter in the browser, showing the tests passing]

If things don’t work, you can use the Developer Tools to figure out what is going on, correct the problem, then re-run the tests. Since this is an ES2015 project, there are some things that may require a polyfill. You can provide your own (mine only needs a polyfill for Object.assign – a matter of a couple of dozen lines of code), or you can use a major ES2015 polyfill like core-js – just ensure you load the polyfill in your test environment. This is also a great pointer to ensure your library has the right dependencies listed and that you have documented your requirements for the browser.
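For reference, a minimal Object.assign polyfill sketch looks something like this – enough for simple cases, though the spec-compliant version in core-js handles more edge cases:

// A minimal sketch of an Object.assign polyfill.
if (typeof Object.assign !== 'function') {
    Object.assign = function (target) {
        if (target === null || target === undefined) {
            throw new TypeError('Cannot convert undefined or null to object');
        }
        var to = Object(target);
        // Copy own enumerable properties from each source onto the target
        for (var i = 1; i < arguments.length; i++) {
            var source = arguments[i];
            if (source !== null && source !== undefined) {
                for (var key in source) {
                    if (Object.prototype.hasOwnProperty.call(source, key)) {
                        to[key] = source[key];
                    }
                }
            }
        }
        return to;
    };
}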

In the next article (Happy New Year!) I will integrate this into automated testing so that you don’t have to open a browser to do this task.

An ECMAScript 6, CommonJS and RequireJS Project

I’ve been writing a lot of CommonJS code recently – the sort that you would include in Node projects on the server side. I’ve recently had a thought that I would like to do a browser-side project. However, how do you produce a browser library that can be consumed by everyone?

The different styles of modules

Let’s say I have a class Client(). If I were operating in Node or Browserify, I’d do something like this:

var Client = require('my-client-package');

var myclient = new Client();

This is called CommonJS format. I like it – it’s nice and clean. However, that’s not the only way to potentially consume the library. You can also bring it in with RequireJS:

define(['Client'], function(Client) {
    var myclient = new Client();

});

Finally, you could also register the variable as a global and bring it in with a script HTML tag:

<script src="node_modules/my-client-package/index.js"></script>
<script>
    var client = new Client();
</script>

You can find a really good writeup of the differences between CommonJS and AMD in an article by Addy Osmani.

Three different techniques. If we were being honest, they are all valid and have their place, although you might have your favorite technique. As a library developer, I want to support the widest range of JavaScript developers, which means supporting three different styles of code. This brings me to the UMD (Universal Module Definition) format – I call it the “Ugly Module Definition”, and you can see why when you look at the code:

(function (root, factory) {
    if (typeof define === 'function' && define.amd) {
        // AMD. Register as an anonymous module.
        define(['b'], function (b) {
            return (root.returnExportsGlobal = factory(b));
        });
    } else if (typeof module === 'object' && module.exports) {
        // Node. Does not work with strict CommonJS, but
        // only CommonJS-like environments that support module.exports,
        // like Node.
        module.exports = factory(require('b'));
    } else {
        // Browser globals
        root.returnExportsGlobal = factory(root.b);
    }
}(this, function (b) {
    // Use b in some fashion

    return {}; // Your exported interface
}));

Seriously, could this code be any uglier? I like writing my code in ECMAScript 2015, also known as ES6. So, can I write a class in ES6 and then transpile it to the right format? Further, can I set up a project that has everything I need to test the library? It turns out I can. Here is how I did it.

Project Setup

These days, I tend to create a directory for my project, put some stuff in it and then push it up to a newly created GitHub repository. I’m going to assume you have already created a GitHub user and then created a GitHub repository called ‘my-project’. Let’s get started:

mkdir my-project
cd my-project
git init
git remote add origin https://github.com/myuser/my-project
npm init --yes
git add package.json
git commit -m "First Commit"
git push -u origin master

Perhaps unshockingly, I have a PowerShell script for this functionality since I do it so often. All I have to do is remember to check in things along the way now and push the repository to GitHub at the end of my work.

My Code

I keep my code in the src directory. The tests are in the test directory. The distribution file is in the dist directory. Let’s start by looking at my src/Client.js code:

export default class Client {
    constructor(options = {}) {
    }
}

Pretty simple, right? The point of this is not to concentrate on code – it’s about the build process. I’ve also got a test in the test/Client.js file:

/* global describe, it */

// Testing Library Functions
import { expect } from 'chai';

// Objects under test
import Client from '../src/Client';

describe('Client.js', () => {
    describe('constructor', () => {
        it('should return a Client object', () => {
            let client = new Client();
            expect(client).to.be.instanceof(Client);
        });
    });
});

I like to use Mocha and Chai for my tests, so this is written with that combination in mind. Note the global comment on the first line – that prevents Visual Studio Code from putting green squiggles underneath the mocha globals.

Build Modules

I decided some time along the way that I won’t use gulp or grunt unless I have to. In this case, I don’t have to. My toolset includes npm (as the task runner), Babel (for transpiling), ESLint (for linting) and Mocha with Chai (for testing).

Let’s take a look at my package.json:

{
    "name": "my-project",
    "version": "0.1.0",
    "description": "A client library written in ES6",
    "main": "dist/Client.js",
    "scripts": {
        "pretest": "eslint src test",
        "test": "mocha",
        "build": "babel src --out-file dist/Client.js --source-maps"
    },
    "keywords": [
    ],
    "author": "Adrian Hall <adrian@shellmonger.com>",
    "license": "MIT",
    "devDependencies": {
        "babel-cli": "^6.3.17",
        "babel-plugin-transform-es2015-modules-umd": "^6.3.13",
        "babel-preset-es2015": "^6.3.13",
        "babel-register": "^6.3.13",
        "chai": "^3.4.1",
        "eslint": "^1.10.3",
        "mocha": "^2.3.4"
    },
    "babel": {
        "presets": [
            "es2015"
        ],
        "plugins": [
            "transform-es2015-modules-umd"
        ]
    }
}

A couple of sections need to be discussed here. Firstly, I’ve got two basic npm commands I can run:

  • npm test will run the tests
  • npm run build will build the client library

I’ve got a bunch of devDependencies to implement this build system. Also note the “babel” section – this is what would normally go in the .babelrc – you can also place it in your package.json file.

The real secret sauce here is the build script. This uses a module transform to create a UMD-format library from your ES6 code. You don’t even have to worry about reading the generated ES5 code – it’s ugly, but it works.

Editor Files

I use Visual Studio Code, so I need a jsconfig.json file in the root of my project:

{
    "compilerOptions": {
        "target": "ES6"
    }
}

This tells Visual Studio Code to use ES6 syntax. I’m hopeful the necessity of this will go away soon. I’m also hoping that I’m not the only one contributing to this repository. Collaboration is great, but you want to set things up so that people coming new to the project can get started with your coding style straight away. I include a .editorconfig file as well:

root = true

[*]
charset = utf-8
indent_style = space
indent_size = 4
end_of_line = lf
insert_final_newline = true
trim_trailing_whitespace = true

[*.json]
insert_final_newline = false

You can read about editorconfig files on their site. This file is used by a wide variety of editors – if your editor is on the list, you should also install the plugin.

ESLint Configuration

I have a .eslintrc.js file at the root of the project. I’ve got that in a gist since it is so big and I just cut and paste it into the root directory.

Test Configuration

My test directory is different – it expects to operate within mocha, so I need an override to tell ESLint that this is all about mocha. Here is the test/.eslintrc.js file:

module.exports = exports = {
    "env": {
        "es6": true,
        "mocha": true
    }
};

I also need a mocha.opts file to tell mocha that the tests are written in ES6 format:

--compilers js:babel-register

Wrapping up

You will need a dist directory. I place a README.md file in there that describes the three use cases for the library – CommonJS, AMD and globals. That README.md file is really only there to ensure the dist directory exists when you clone the repository.

I also need to add a README.md at the root of the project. It’s required if I intend to publish the project to the NPM repository. Basic instructions on how to install and use the library are de rigueur, but in reality you can put whatever you want in there.

I have not addressed jsdoc yet – you should be doing it in your source files, and it should be a postbuild step in your package.json file.
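For example, a possible addition to the scripts section of package.json might look like this – a sketch only; the docs output directory is my assumption:

    "scripts": {
        "pretest": "eslint src test",
        "test": "mocha",
        "build": "babel src --out-file dist/Client.js --source-maps",
        "postbuild": "jsdoc src --destination docs"
    }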

You can now run the tests and build through the npm commands and get a library that can be used across the board.

Logging to Splunk with Winston

I have to admit, I’ve still got a soft spot for Splunk in my heart. I spent several years developing apps there and it is still my go-to logging platform. Recently, I’ve been playing with ExpressJS and using Winston as my logger of choice, together with express-winston to hook the two pieces up. My projects inevitably start with this:

var express = require('express'),
    winston = require('winston'),
    expressLogger = require('express-winston');

var app = express();

app.use(expressLogger.logger({
    transports: [
        new winston.transports.Console()
    ],
    level: 'debug'
}));

// Do other setup here
app.listen(process.env.PORT || 3000);

This is all well and good, but what about Splunk? My prior version of this wrote the log to a file and then Splunk would consume the file. However, I’m operating inside of Azure App Service these days and my Splunk instance is operating on a different machine – it’s a virtual machine inside of Azure. So what am I to do?

Splunk recognized this and so they produced a high-performance HTTP event collector. This is a REST endpoint that allows you to submit data as long as you have a token. I’m not going to explain how to get a token (Splunk does a really good job of that). However, I need to handle the other side of things – the Winston transport.
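For context, submitting an event to the collector is a single authenticated HTTP POST – something like this sketch (the host name is an assumption; 8088 is the default HTTP event collector port):

curl https://my-splunk-host:8088/services/collector \
    -H "Authorization: Splunk MY-DATAINPUT-TOKEN" \
    -d '{"event": {"message": "hello from winston"}}'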

Fortunately, Winston has a highly extensible transport system, and I’ve written a transport for the HTTP event collector. You can download the module from npmjs.org or get it from my GitHub repository.

So, how do you use it? It doesn’t require Express, but I’m going to alter my prior example to show how easy it is. Note the additions, which I walk through below:

var express = require('express'),
    winston = require('winston'),
    SplunkStreamEvent = require('winston-splunk-httplogger'),
    expressLogger = require('express-winston');

var app = express();

var splunkSettings = {
    host: 'my-splunk-host',
    token: 'MY-DATAINPUT-TOKEN'
};

app.use(expressLogger.logger({
    transports: [
        new winston.transports.Console(),
        new SplunkStreamEvent({ splunk: splunkSettings })
    ],
    level: 'debug'
}));

// Do other setup here
app.listen(process.env.PORT || 3000);

Block-by-block:

  1. Line 3 brings in the library – standard Node module management here
  2. Lines 8-11 define the options for the splunk-logging library
  3. Line 16 adds the transport to winston for logging

It’s as simple as that. There is one proviso. Underneath, it uses the excellent splunk-logging library. Winston expects that you send off each event individually. It doesn’t really stream events. As a result, setting any of the options in such a way that batching occurs will result in weird errors. That’s because Winston is expecting a callback for each event and the splunk-logging library doesn’t call the callback unless it actually writes to the channel. I haven’t done any high capacity tests to see what happens when batching does occur, so I’d avoid that for now.

If you find any bugs or wish to ask a question, please let me know through the GitHub Repository.

Working with Azure App Service: Kudu

I’m taking a break from coding to explain a little-known feature of Azure App Service – Kudu. You can think of Kudu as a backend system that allows you to get down to the very basics of what is happening behind the scenes of the portal. For instance, all web sites in Azure App Service are stored in a git repository. Want to know about that git repository? Kudu is your friend.

Getting to Kudu is straightforward. First, log into the Azure Portal and open your web app. Go to the Tools area, then Kudu, then click on Go:

[Screenshot: opening Kudu from the Tools blade in the Azure Portal]

You will get a rather simple site with no polish. I’m pretty sure that is deliberate. You don’t come to Kudu unless you want low level information, so the Kudu design reflects that.

Today I had a basic question – what branch is my continuous deployment hooked up to? Back on the portal, I know the project that it is linked up to, but it was set up so long ago that I’d forgotten what branch I needed to merge into. That information is not available in the portal directly. I could, of course, create a new branch in my GitHub project and then publish it, but that would mean disconnecting the SCM connection and re-establishing it.

Fortunately, that information is in the App Settings, available through Kudu, although it’s displayed as raw JSON. Happily, I use the JSONView extension for Chrome, which pretty-prints any JSON document. From the Kudu site, go to App Settings, and you will get something similar to this:

[Screenshot: the Kudu App Settings page, pretty-printed as JSON by JSONView]

As you can see, the branch that my continuous deployment is linked to is listed as the last item in this case.

There are a bunch of other things I absolutely love about Kudu. In the Debug Console menu drop-down is a “PowerShell” option – this opens a PowerShell prompt on your web app and allows you to explore the filesystem and the environment with all the power of PowerShell. Under the Environment menu, you get to see all the environmental information that your web site sees. Previously, I had written a small NodeJS application to view this information, not realizing that Azure had provided a tool already. A warning about this – the SQL passwords are shown in plain text here, so don’t just cut and paste this information. Want to stream the logs from your website? That’s available under the Tools menu option.

Short version: If you are developing for Azure App Service, you owe it to yourself to understand what Kudu offers and spend some time learning this powerful system.

Naming Exceptions in NodeJS

In my last post I started testing my new configuration SDK. My initial code was very simple boilerplate:

/// <reference path="../typings/node/node.d.ts" />
/// <reference path="IConfigurationSource.ts" />

import path = require('path');

import { IConfigurationSource } from './IConfigurationSource';
import { InvalidOperationError, NotImplementedError } from './Exceptions';

export default class JsonConfigurationSource implements IConfigurationSource {
    private location: string;

    constructor(location: string) {
        this.location = path.resolve(location);
        throw new NotImplementedError();
    }

    get name(): string {
        return 'JSON';
    }

    set name(v: string) {
        throw new InvalidOperationError();
    }

    get id(): string {
        return `JSON-${this.location}`;
    }

    set id(v: string) {
        throw new InvalidOperationError();
    }

    toObject(): Object {
        throw new NotImplementedError();
    }
}

I have implemented a few new errors in a separate file called Exceptions.ts:

/// <reference path="../typings/node/node.d.ts" />

export class InvalidOperationError extends Error {
    constructor() {
        super('Invalid Operation');
    }
}

export class NotImplementedError extends Error {
    constructor() {
        super('Not Implemented');
    }
}

The theory being that I can tell the difference between something that isn’t implemented and something that is broken. They both inherit from the standard NodeJS Error class. So far so good. What’s the problem? Well, it’s in the tests:

[Screenshot: a failing test showing an unhelpful AssertionError message]

That assertion error isn’t really helpful. Basically, “error – we were expecting an error, but we got an error”. That’s because I have not renamed the error object. There are three properties on the Error object: message, name and stack. I am setting the message property when I call super() in the constructor. What I am seeing in the display is the name property.

Note that I can still tell the difference between an InvalidOperationError and a NotImplementedError (and any other sort of error) by using the instanceof operator. This is about display only and the only place this really matters is in testing and log files.
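For example, a quick sketch of the instanceof check – the import paths match the classes above, and the catch body is just an illustration:

/// <reference path="../typings/node/node.d.ts" />

import JsonConfigurationSource from './JsonConfigurationSource';
import { InvalidOperationError, NotImplementedError } from './Exceptions';

try {
    // The constructor currently throws NotImplementedError
    let source = new JsonConfigurationSource('./config.json');
} catch (err) {
    if (err instanceof NotImplementedError) {
        console.log('not implemented yet');
    } else if (err instanceof InvalidOperationError) {
        console.log('invalid operation');
    }
}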

The solution to this is relatively simple – set a name:

/// <reference path="../typings/node/node.d.ts" />

export class InvalidOperationError extends Error {
    constructor() {
        super('Invalid Operation');
    }

    name: string = 'InvalidOperationError';
}

export class NotImplementedError extends Error {
    constructor() {
        super('Not Implemented');
    }

    name: string = 'NotImplementedError';
}

Once this is in place, the failed tests now show the proper error:

[Screenshot: the failing tests now display the InvalidOperationError and NotImplementedError names]

The big take away here is to understand the classes you are sub-classing. Make sure you set the proper properties to provide appropriate debug messages.