How to determine the Node versions available on Azure App Service

I may have mentioned this before, but Azure App Service is an awesome service for hosting your website. It’s got tons of features to support devops, production deployments, testing and monitoring tasks. One of the things I struggled with was Node deployments. You should specify the version of Node that you want to use in the engines section of your package.json file, like this:

"engines": {
    "node": ">= 4.2.3 <= 4.3.0",
    "npm": ">= 3.3.0"
},

This is great, but how do you know what versions of node and npm are acceptable together?

It turns out that this is relatively easy. First of all, create yourself an Azure App Service Web App (or Mobile App, or API App – they are all the same thing). Deploy a node/express app to the service using your favorite technique (mine is continuous deployment from GitHub); there is a minimal example below if you don’t have one handy. Now, let’s get to know Kudu.
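If you don’t have an app to deploy, a minimal sketch like this is enough for the exercise – any express app that listens on process.env.PORT will do:

var express = require('express');
var app = express();

app.get('/', function (req, res) {
    res.send('Hello from Azure App Service');
});

// App Service supplies the port via the PORT environment variable
app.listen(process.env.PORT || 3000);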

Ku-what?

Kudu is a nice web-based console for accessing the guts of your site. It contains a bunch of useful information. To get there, go to your Tools blade, click on Kudu, and then click Go.

You can also go directly to https://your-site.scm.azurewebsites.net instead. As I’ve described in the past, it’s well worth getting to know Kudu – it’s one of those hidden gems in the Azure system that really assists in problem solving. Back to the problem at hand – I have a node site, but what versions am I allowed to put in the package.json file? Simple – click on Runtime versions on the front page.

Admittedly, this isn’t the friendliest display. A lot of Kudu interacts with a REST endpoint behind the scenes and displays the result in the rawest form possible. This is good – it gives you access to the maximum amount of information – but it’s also bad – it tends to be hard to read. Fortunately, I’ve prepared for this. I’ve already installed JSON Viewer to pretty-print JSON files in Chrome – my preferred browser. There are a number of plug-ins that do this, not only for Chrome but also for Firefox, plus standalone tools. You can use whatever you want.
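If you’d rather skip the browser entirely, you can fetch the same JSON from the command line. Here is a minimal sketch, assuming the /api/diagnostics/runtime endpoint that the Runtime versions link points to, and using your site’s deployment credentials (verify both against your own Kudu instance):

var https = require('https');

https.get({
    hostname: 'your-site.scm.azurewebsites.net',   // replace with your site
    path: '/api/diagnostics/runtime',              // where the Runtime versions link goes
    auth: 'deployUser:deployPassword'              // your deployment credentials
}, function (res) {
    var body = '';
    res.on('data', function (chunk) { body += chunk; });
    res.on('end', function () {
        // Pretty-print the raw JSON so you don't need a browser plug-in
        console.log(JSON.stringify(JSON.parse(body), null, 2));
    });
});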

Now you can just cut-and-paste the version you want into the engines section of your package.json. Alternatively, you can use a range to ensure that you pick up the latest version. For example, my standard engines entry contains the following:

"engines": {
    "node": ">=5.7.0 <5.8.0",
    "npm": ">=3.3.0"
},

Matching this range against the list, I know I’ll be running Node v5.7.0 and npm v3.6.0 on the service.
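If you want to sanity-check a range before deploying, the semver package – the same library npm uses to resolve ranges – will tell you which of the listed versions a range selects. A quick sketch with an illustrative version list:

var semver = require('semver');   // npm install semver

// A few versions you might see on the Runtime versions page (illustrative)
var available = ['4.2.3', '4.3.0', '5.6.0', '5.7.0'];

// The highest version satisfying the engines range is the one you'll get
console.log(semver.maxSatisfying(available, '>=5.7.0 <5.8.0'));   // '5.7.0'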

Another thing to like about Azure App Service – the team is really responsive in keeping the Node versions up to date. The Node community is extremely active, and it seems multiple releases come out every week.

(Full disclosure – I work for Microsoft in Azure App Service and Mobile Apps. I don’t maintain the node environment, though, and my thoughts are not those of my employer nor of the group that does maintain the node environments.)

The Most Popular Articles of the Year

I suspect there may be a bunch of blog posts around the Internet that wrap up the year. Here are the most popular articles on my blog for the year:

React with ES6 and JSX

In fifth place, I did a series of articles on working with ECMAScript 2015 and React/Flux, building up a typical application. I also poked into some stage-0 proposals for ECMAScript 7. I really enjoy working with React, but I’m torn between Custom Elements (and Polymer specifically) and React. Custom Elements are more standard – React is more popular. I’ll be revisiting this next year (which is in 24 hours, but I’ll likely take longer than that).

Aurelia – a new framework for ES6

In fourth place, people were interested in how I would do my test tutorial with Aurelia. Aurelia is a really interesting framework and I prefer it over Ember and Angular. The learning curve is relatively small, although I will have to revisit the whole framework discussion as Angular 2 and Ember next-gen are coming out. This tutorial included using authentication with Auth0 and accessing remote resources.

ASP.NET MVC6 and Bootstrap

A one-off article on adding Bootstrap to ASP.NET MVC6 applications came in third place. There are other Bootstrap posts that are also interesting, including one that got made into a video.

Talking of ASP.NET MVC6

With the next revision of ASP.NET imminent, I took several strolls through the alpha and beta releases of that framework. There is a lot to like about it and a lot that is familiar. I’ve mostly switched over to a NodeJS environment now, so I’m not expecting to do much more in this area, but it is a much nicer environment than the old ASP.NET.

And finally, Visual Studio Tooling!

Fueled in large part by a link from the ASP.NET Community Articles page, the #1 page for the year was an article I wrote that described the Web Development extensions I used in Visual Studio. It also generated the most discussion with lots of people telling me about their favorite extensions. I’m using Visual Studio Code more these days – it’s lighter weight. I still love this list though.

Next Year

2015 was definitely the year that frameworks changed – in .NET land we got a look at the next revision of the ASP.NET framework, and in JavaScript land we got Aurelia, React, Flux, Relay, Angular 2, ES2015, Web Components, and several new versions of Node. I hope the framework releases calm down in 2016 so we can start sorting out the good from the bad and the ugly. I’m going to take new looks at all of this and keep working on my side projects. I hope you will continue the journey with me.

A lap around Azure Mobile Apps Node SDK Preview

Microsoft Azure App Service recently released a slew of mobile announcements – PowerApps being the big one. Of lesser note, the Azure Mobile Apps Node SDK hit a milestone and entered preview. It’s been a while since I lapped around the SDK, so let’s take a look at some of the breaking changes.

The SQL Azure data provider is now called mssql

There was one provider in the alpha version of the SDK – the sql driver, which used the mssql package to access a SQL Azure instance or an on-premises SQL Server instance. However, there are plans for more providers, so support for multiple providers was needed. If you are specifying a SQL Azure connection string via the Azure Portal, nothing changes. For local development, however, it’s fairly common to use an azureMobile.js file to specify the data connection. Something like this:

module.exports = {
    logging: {
        level: 'silly'
    },
    data: {
        provider: 'mssql',
        server: 'localhost',
        database: 'sample',
        user: 'testuser',
        password: 'testpass'
    }
};

Note the provider line. That used to be just ‘sql’ – now it’s ‘mssql’. Again, this is all to prepare for multiple providers, so it’s a good move in my book.

You should initialize the database before listening

The common server.js file would just import tables and then start listening. This caused a significant delay on first access along with potential failures because the database was not set up properly. To combat this, a Promise structure was introduced that allowed you to defer listening for connections until after the database was initialized. You use it like this:

// Import the files from the tables directory to configure the /tables API
mobile.tables.import('./tables');

// Initialize the database before listening for incoming requests
// The tables.initialize() method does the initialization asynchronously
// and returns a Promise.
mobile.tables.initialize()
  .then(function () {
    app.use(mobile);    // Register the Azure Mobile Apps middleware
    app.listen(process.env.PORT || 3000);   // Listen for requests
  });

If a table has dynamic schema, the initialize() call resolves immediately, since the database schema is managed by the SDK. If you have set up static tables, those tables are created before the app starts listening for connections, which means your users get a connection timeout rather than a 500 server failure while the database is being prepared – probably a better experience.
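One thing the snippet above glosses over: initialize() returns a Promise, so a rejected initialization is silently dropped unless you handle it. Here is a minimal sketch of catching the failure (the exit-on-error policy is my choice, not something the SDK mandates):

mobile.tables.initialize()
  .then(function () {
    app.use(mobile);
    app.listen(process.env.PORT || 3000);
  })
  .catch(function (error) {
    // If table creation fails, log and exit rather than serve a broken API
    console.error('Table initialization failed:', error);
    process.exit(1);
  });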

You can seed tables with static data

There are cases when you want data to be “already there”. An example of this is in testing – you want to test the GET response and ensure specific records are there for you. Another example is if you want to have a read-only table for settings. To assist with this, you can add a seed property to the table definition. Something like this:

var table = require('azure-mobile-apps').table();
table.columns = {
	"text": "string",
	"complete": "boolean"
};
table.dynamicSchema = false;

// Seed data into the table
table.seed = [
	{ text: "Example 1", complete: true },
	{ text: "Example 2", complete: false }
];

module.exports = table;

The seed property is defined as an array of objects. Data seeding happens during the initialize() call, so you need to ensure you call initialize() if you want to seed data.

There is a new getIdentity() API

Azure App Service introduced a new app-specific authentication mechanism during the November 2015 update. The authentication gateway is now deprecated. Along with that change is a new getIdentity() call in the Node SDK that calls the authentication backend to retrieve the claims that you defined. You can use it like this in a table definition file:

table.access = 'authenticated';

table.read(function (context) {
    return context.user.getIdentity()
        .then(function (identity) {
            logger.info('table.read identity = ', identity);
            return context;
        })
        .then(function (context) {
            return context.execute();
        })
        .catch(function (error) {
            logger.error('Error in table.read: ', error);
        });
});

The getIdentity() call returns a Promise; once it resolves, the identity is available in the resolution. You can then adjust the context (just like the personal-table sample) with any information in the claims that you registered. Note that I’m chaining the promises together – when getIdentity() resolves, I get an identity; I then adjust the context and return it, and the next link in the chain executes the query and returns the result.
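As a concrete (if hypothetical) example of adjusting the context, here is a sketch of a personal-table read that filters rows by an email claim. The shape of the identity object – keyed by provider, with a claims object – is my assumption based on the personal-table sample; inspect the actual getIdentity() response for your provider before relying on it:

table.access = 'authenticated';

table.read(function (context) {
    return context.user.getIdentity()
        .then(function (identity) {
            // Hypothetical claim shape - check your own identity response
            var email = identity.microsoftaccount.claims.emailaddress;
            // Only return rows that belong to the authenticated user
            context.query.where({ email: email });
            return context.execute();
        });
});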

You can disable access to table operations

Ever wanted to have a read-only table? How about a logging table that you can add to, but you can’t update/delete? All of those are possible by adjusting permissions on the table.

// Read-only table - turn off write operations
table.insert.access = 'disabled';
table.update.access = 'disabled';
table.delete.access = 'disabled';

You can also make it so that reads are unauthenticated but writes require authentication. The valid values for access are ‘anonymous’, ‘authenticated’ and ‘disabled’. You can set one value for the whole table and override it for individual operations.
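For example, here is a sketch of an anonymous-read, authenticated-write table, using the same per-operation access pattern as the read-only example above:

// Anonymous reads, authenticated writes
table.access = 'authenticated';     // table-wide default
table.read.access = 'anonymous';    // override just the read operation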

There are lots of samples

The team is dedicated to providing lots of sample variations to show off specific situations. You can still check out the canonical todo sample – that has been documented to a point where there is more documentation than code. However, there are lots of additional samples now.

CORS

CORS is a tricky subject right now, as Azure App Service has just rolled out a change that moves CORS handling into App Service itself. This means that you only need to care about CORS in your code when you are running the server locally. In that case, set skipVersionCheck to true in your azureMobile.js, like this:

module.exports = {
  skipVersionCheck: true,
  cors: {
    origins: [ '*' ]
  }
};

This will enable CORS for development purposes.

Need more information?

As yet another avenue for asking questions, you can log onto the Gitter channel and have a direct chat with the developers. That’s in addition to the Azure Forums, Stack Overflow and GitHub Issues. We aren’t starved of methods to discuss problems with the team!

Did I mention documentation? No? Well, here you are – the API documentation is published as well.

Updating Authentication in Azure Mobile Apps

There was a change to Azure Mobile Apps over the weekend. It was a titanic shift in how authentication is done. In short, the Azure App Service authentication gateway is out and on-stamp authentication is in. Along with that change comes a new set of WindowsAzure.MobileServices SDKs to support the new logic. This article is really about “how does that affect me?”

Advantages of the new Authentication

There are several advantages to the new authentication. Firstly, you can specify additional claims. For example, you might want to include the user’s real name in the claims so you can display it in your mobile app. Specific claims appear in the result of the GetIdentities() call (more on that in another post). Secondly, the management experience is streamlined across web and mobile – there is no separation any more. Finally, and perhaps most importantly, authentication is now per-app. With the gateway, authentication was per-resource group, which led to advice to place each mobile app in its own resource group. You can now group apps together however you want and let each have its own authentication settings.

Configuring the new Authentication

Since there is no longer any gateway, you have to look elsewhere for the authentication. My client application uses Microsoft Account authentication. I have a Client ID and Client Secret. If you don’t remember how I did that, refer to my recent blog post. There are four things I need to do.

  1. Move the Client ID and Client Secret to the new Authentication Configuration Screens
  2. Adjust the Redirect URI in the Microsoft Account client configuration settings
  3. Update your server code with the new SDK
  4. Update the client code with the new SDK

Step 1: Enable App Service Authentication

First off, find the Authentication / Authorization link in your application’s settings blade.

Once you find it, enable App Service Authentication, then click on the “Log in with Microsoft Account” authentication provider.

The next blade is very different from the gateway version.

The Client ID and Client Secret fields are hopefully obvious – enter the original Client ID and Client Secret from the gateway. However, there are a whole bunch of new claims that you can check. These claims are accessed through the GetIdentity() call for the user. I’ve checked wl.basic and wl.emails in my configuration. You should probably always check the wl.basic claim (and that is the default).

Step 2: Adjusting the Redirect URI

Once you have set up the Azure Mobile Apps side of things, you will want to adjust the Redirect URI in the Microsoft Account application settings. Go to the My Applications page and click on your application. Click on the Edit Settings link followed by the API Settings in the left hand navigation.

The URI should be https://your-site.azurewebsites.net/.auth/login/provider/callback, where provider is replaced by the provider name. Most of the provider names are obvious – facebook, twitter and google. The Microsoft Account one is microsoftaccount. Click on Save and you are done.

Step 3: Update the Server SDK

My server-side application is written in NodeJS – a simple application with a single table and dynamic schema. The only change I had to make was to upgrade the NodeJS SDK to 2.0.0-alpha5 to support the new authentication service. A quick push to my GitHub repository and the changes got deployed to my Azure App Service site.
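For reference, the package.json change is just a dependency bump (2.0.0-alpha5 being the current version as of this post):

"dependencies": {
    "azure-mobile-apps": "2.0.0-alpha5"
}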

Step 4: Update the Client SDK

To go along with the new authentication mechanism, you also need to update the NuGet packages for WindowsAzure.MobileServices. In your Visual Studio project, right click on References and select Manage NuGet packages… For each of the WindowsAzure.MobileServices packages you use, upgrade them to the latest beta. I use two packages – the base WindowsAzure.MobileServices (which is updated to 2.0.0-beta-3) and the WindowsAzure.MobileServices.SQLiteStore (updated to 2.0.0-beta-2) for offline sync.

I refactored my code over the weekend and as a result I have a new class – DataStore.cs. This handles authentication and offline sync capabilities for me. In the old version, the constructor for the MobileServiceClient (which was located in App.xaml.cs, but is now in DataStore.cs) had three arguments:

MobileServiceClient MobileService = new MobileServiceClient(clouduri, gatewayuri, appkey);

Well, there is no gateway any more and there hasn’t been an app key for some time. So we can now simplify the constructor:

MobileServiceClient MobileService = new MobileServiceClient(clouduri);

You can simply remove the two unnecessary arguments.

Wrap Up

Gateway Authentication isn’t going away. You can still configure it and your clients can still use it. You should probably be running two versions of your code – one for Gateway authentication and one for the new authentication. As your users upgrade to newer clients that support the new authentication, you can use some of the new features of the new authentication scheme to great effect. That, however, is another blog post.

Storing Data in Azure Mobile Apps

I have been creating a new Universal Windows App over the last few weeks. It’s simple and really just a vehicle for figuring out concepts. I’ve already handled authentication against my Microsoft Account. Now it’s time to use that information to store data in a backend SQL service. This comprises two parts – the client piece and the server piece. I’m going to use the latest azure-mobile-apps NodeJS SDK for the server and update my C#/.NET client on the frontend.

Let’s start with the backend. The azure-mobile-apps SDK makes this easy. I have the code in GitHub and have linked it into an Azure App Service via continuous deployment. The server.js file is simple enough:

var webApp = require('express')(),
    morgan = require('morgan'),
    mobileApp = require('azure-mobile-apps')();

webApp.use(morgan('combined'));

mobileApp.tables.import('./tables');

webApp.use(mobileApp);

webApp.listen(process.env.PORT || 3000);

This is the basic app model: I create an ExpressJS application, link in logging with Morgan and then do a standard import of the Azure Mobile Apps table definitions. The table definition in tables/TaskItem.js is pretty basic:

var mobileApps = require('azure-mobile-apps');

var table = mobileApps.table();
table.columns = {
  'email': 'string',
  'text': 'string',
  'complete': 'boolean'
};
// We want the app to manage the schema
table.dynamicSchema = true;
// We want to authenticate the users
table.authorize = true;

table.read(function (ctx) {
  ctx.query.where({ email: ctx.user.id });
  return ctx.execute();
});

table.insert(function (ctx) {
  ctx.item.email = ctx.user.id;
  return ctx.execute();
});

module.exports = exports = table;

There are a couple of other things we want to take care of here. Firstly, the insert method fills in the email field in the database based on the authenticated user. Right now, this will be something like ‘sid:really-long-string-of-hex-digits’. We can adjust this later to be the email address (but that’s another blog post). Similarly, in the read method, we add a predicate to the query so that only records matching the authenticated user are returned. We now have a personal table – the user can submit and retrieve their own records, but cannot see records belonging to other people.

So much for the server code – it was really basic. Now, on to the client. The first thing I did was move the MobileServiceUser from a private variable in MainPage.xaml.cs to a basic property in TaskStore.cs. I also rewired the references accordingly. This was a fairly simple refactor, so I’m not going to show it. Let’s take a look at the more extensive changes to TaskStore.cs:

    class TaskStore : ObservableCollection<TaskItem>
    {
        private IMobileServiceTable<TaskItem> tableController = App.MobileService.GetTable<TaskItem>();

        public TaskStore()
        {
            Add(new TaskItem { Id = Guid.NewGuid().ToString(), Title = "Task 1" });
            Add(new TaskItem { Id = Guid.NewGuid().ToString(), Title = "Task 2" });
            User = null;
        }

The important thing to note here is the tableController – this is the object that actually does the connection to the backend. You will need to add a reference to the NuGet package WindowsAzure.MobileServices here – ensure you add v2.0.0-beta-2 (which you should have done from the authentication setup). Once the table controller is set up, I can add async methods to read and write data to the backend:

        public async Task Create(TaskItem item)
        {
            item.Id = Guid.NewGuid().ToString();
            Add(item);
            if (User != null)
            {
                System.Diagnostics.Debug.WriteLine("Inserting item into remote table");
                await tableController.InsertAsync(item);
            }
        }

        public async Task Update(TaskItem item)
        {
            for (var idx = 0; idx < Count; idx++)
            {
                if (Items[idx].Id.Equals(item.Id))
                {
                    Items[idx] = item;
                }
            }
            if (User != null)
            {
                System.Diagnostics.Debug.WriteLine("Updating item in remote table");
                await tableController.UpdateAsync(item);
            }
        }

        public async Task Delete(TaskItem item)
        {
            Remove(item);
            if (User != null)
            {
                System.Diagnostics.Debug.WriteLine("Deleting item in remote table");
                await tableController.DeleteAsync(item);
            }
        }

        public async Task Refresh()
        {
            try
            {
                System.Diagnostics.Debug.WriteLine("Refreshing from the remote table");
                var items = await tableController.ToCollectionAsync();
                Clear();
                var e = items.GetEnumerator();
                while (e.MoveNext())
                {
                    Add(e.Current);
                }
            }
            catch (MobileServiceInvalidOperationException ex)
            {
                System.Diagnostics.Debug.WriteLine(String.Format("Cannot read from remote table: {0}", ex.Message));
                await new MessageDialog(ex.Message, "Error loading items").ShowAsync();
            }
        }

In each case – Create, Update and Delete – I first apply the change to the local ObservableCollection, then call the appropriate async method on the table controller to do the same thing remotely. The remote call only happens if the user is logged in. I’ve also got a Refresh method that refreshes the data from the remote table. This is wired up in MainPage.xaml.cs as follows:

        private async void LoginSync_Clicked(object sender, RoutedEventArgs e)
        {
            if (store.User == null)
            {
                try
                {
                    store.User = await App.MobileService.LoginAsync(MobileServiceAuthenticationProvider.MicrosoftAccount);
                    System.Diagnostics.Debug.WriteLine(String.Format("User is logged in - username is {0}", store.User.UserId));
                    loginSyncButton.Label = "Sync";
                    // Refresh from the backend store
                    await filteredStore.Refresh();
                }
                catch (MobileServiceInvalidOperationException ex)
                {
                    System.Diagnostics.Debug.WriteLine(String.Format("Mobile Services Error: {0}", ex.Message));
                    store.User = null;
                    var dialog = new MessageDialog(ex.Message);
                    dialog.Commands.Add(new UICommand("OK"));
                    await dialog.ShowAsync();
                }
            }
            else
            {
                await filteredStore.Refresh();
                System.Diagnostics.Debug.WriteLine("MobileServices Sync");
            }
        }

Wait – I’m calling Refresh() on the filtered store, not the store. That’s because I’m passing through the Refresh() method from the FilteredStore to the Store:

        public async Task Refresh()
        {
            await _store.Refresh();
            RefreshView();
        }

This will refresh the store first, then refresh the view based on the store. That’s pretty much it – you can use SQL Server 2014 Management Studio to connect to your SQL database in the Azure cloud and check out the data that is stored.

This isn’t the end of this topic. I’ve noticed a few things along the way that need refactoring. Specifically, I should treat the view (FilteredTaskStore) and the store separately and connect them via event delegates. That way the view would update whenever the store updates, and I wouldn’t have to proxy calls through the FilteredTaskStore to the store. Similarly, I kind of hate the UWP version of ObservableCollection – I’d like to get a generic version going that is based on something reasonable, maybe handling the manipulations all in LINQ. So I’ll continue working on this.

Next up is handling offline synchronization of data – because there are times when you aren’t on a network. Until then, there is always the GitHub Repository.

Running NodeJS in Harmony Mode on Azure Web Apps

One of my big things is taking advantage of newer language features – right now, that’s ASP.NET 5 and ECMAScript 2015. NodeJS already has a lot of the ECMAScript 2015 features. You may have to run node --es_staging or node --harmony for some features, which is a problem when you upload your service to a cloud provider like Azure. Thankfully, more and more features are in the standard set supplied by NodeJS. How do you get your scripts to run with the correct environment attached?

The main component of Microsoft Azure App Service is the Web App, and I’ve blogged about creating a web app before. One file I failed to mention is iisnode.yml. This is a YAML file that Azure creates for you if you don’t supply one. It contains, by default, the path to the NodeJS runtime executable. Unfortunately, you don’t know where that is and it can change over time, which makes it a pain to handle.

Getting the iisnode.yml

To download the iisnode.yml file, you need to get the publish settings, open them up, then use the FTP publish settings to download the file. First off, log into https://portal.azure.com and select your web app. You want to download the Publish Settings file for your site. It’s available on the main page of your web app within the portal.

This will download an XML file. The best place to open it is in a web browser – add a .xml extension to the file first so the browser will render it.

You will note that there are two publication profiles in the file – one for MSDeploy and the other for FTP. You want the FTP one; specifically, you need the publishUrl, the userName and the userPWD. Open a new web browser tab and cut and paste the publishUrl into the address bar. It will prompt you for the username and password – use cut and paste to enter those too.

Note: Don’t use Internet Explorer as your web browser for this – it doesn’t allow cut and paste from an XML document. I used Opera instead.

Assuming you’ve already got a deployed application in there and the package.json has an engines section, you will find the mysterious iisnode.yml file in the wwwroot directory.

So, what’s in that file? Here are my contents:

nodeProcessCommandLine: "D:\Program Files (x86)\nodejs\4.1.2\node.exe"

Right now, Azure Web Apps is running v4.1.2 (also known as the very latest node version). You can easily add the --harmony flag to this file and then check it into your project, as sketched below. Note that the file MUST be called iisnode.yml and it MUST appear in the top-level directory of your site once deployed, which generally means the root of your repository.
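Here is a sketch of the edited file – the path should be whatever your downloaded iisnode.yml already contains, not necessarily the one shown here:

nodeProcessCommandLine: "D:\Program Files (x86)\nodejs\4.1.2\node.exe" --harmony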

A Warning

And this is a big one. You are hard-coding a specific path. Microsoft may change this path, and it will definitely change as the version of NodeJS changes. If you specify, for example, “node >= 4.1.0” in your engines section, then when NodeJS 5.0 comes out (probably next month), this path will also change.

So don’t do this unless you absolutely need a feature that hides behind a flag.

Since ES6 features are moving within V8 from harmony to staging to production, you will find that your application is less and less likely to need the harmony flag. You can check compliance by utilizing the Kangax Compat Table, although it isn’t showing v4.1.x compatibility as of right now. Another good page is the NodeJS ES6 page, which lists the features that it supports.

Implementing a User Preferences Service in NodeJS with Azure Table Storage

One of the great things about cloud services is that you can take advantage of price-competitive solutions and utilize the best technology for the problem rather than shoehorning it into an existing solution. Take, for example, user preferences. I want to store a series of preferences for my applications on a per-user basis. I already have a SQL instance, so the obvious way to do this is to create a user preferences table and store a single row per user with a JSON blob that I can return to the user at any point.

Not so fast. Is a SQL Service really the best way to store JSON documents? You aren’t taking advantage of any of the relational database functionality – it’s just a store. Why not use a NoSQL service in the cloud instead?

In this article, I’m going to implement a user preferences store in my NodeJS application based on the Azure Table Storage – a simple NoSQL store.

The Code

My application is a NodeJS/ExpressJS application. I have authentication built in that provides a username in req.user.id. To implement my API, I’m going to write an ExpressJS Router that provides all the functionality I need. To link in a Router, I need to do the following in my main code:

webApp.use(staticFiles(config.webRoot, config.options.staticFiles));

webApp.get('/api/settings', function (req, res) {
  res.json(config.settings);
});

webApp.use(jwt);
webApp.use('/api/userprefs', userprefs());
webApp.use(mobileApp);

webApp.listen(config.webPort, function () {
  console.log('Listening on port ' + config.webPort);
});

The webApp.use(jwt) line is my JSON Web Token parser and provides the req.user object (including the all-important req.user.id). The line after it links in my user preferences router. The userprefs object is a function that is required like this:

var userprefs = require('./userprefs');

Now, on to the userprefs.js file – this is where the entire API is created:

var express = require('express'),
    bodyParser = require('body-parser'),
    azureStorage = require('azure-storage');

module.exports = exports = function () {
  var router = express.Router(),
      store = azureStorage.createTableService(),
      tableName = 'userprefs',
      partitionKey = 'P1';

  // Create the table
  store.createTableIfNotExists(tableName, function (error, result, response) {
    if (error) {
      throw error;
    }
    if (result) {
      console.info('[userPrefs] Created new Azure Table Store');
    }
  });

The top of the file brings in the important piece – azure-storage – the NodeJS library for accessing Azure Storage. I’ve followed the simple NodeJS library tutorial to create a store object, and I’ve named my table and picked a partition key. I also create the table if it doesn’t exist.

  /*
   * Middleware for this router that does an authentication check - the
   * req.user MUST be set and it MUST contain an id - if it doesn't, then
   * we cannot access the specifics of the user preferences store
   */
  router.use(function authCheck(req, res, next) {
    if (!req.user || !req.user.id) {
      res.status(401).send('Unauthorized');
      return;
    }
    return next();
  });

The next step is to put in a simple authorization check – this goes before any of the routes in my router. Basic version: if I’m not authenticated, send a 401 Unauthorized status.

  /*
   * Middleware - handles the GET /api/userprefs - sends an object.  If
   * the user does not have any user prefs, sends an empty object
   */
  router.get('/', function (req, res) {
    // Retrieve the object from the userprefs store
    store.retrieveEntity(tableName, partitionKey, req.user.id, function (err, result, response) {
      if (err && err.statusCode === 404) {
        console.info('[userPrefs] No Userprefs for user', req.user.id, ': sending empty object');
        res.status(200).type('application/json').send({});
        return;
      }

      if (err) {
        console.error('[userPrefs] Retrieval for user', req.user.id, 'failed: ', err);
        res.status(500).send('Internal Server Error (Storage)');
        return;
      }

      // Remove the system fields and ensure the content type is correct
      delete response.body.PartitionKey;
      delete response.body.RowKey;
      delete response.body.Timestamp;
      delete response.body['odata.metadata'];
      response.headers['content-type'] = 'application/json;charset=utf-8';

      // The response gives us all the information we need
      res.status(response.statusCode)
         .header(response.headers)
         .send(response.body);
      return;
    });
  });

I’m implementing two interfaces – a GET of /api/userprefs will get the user preferences for the authenticated user. Since I’m always operating on the authenticated user, I’ve decided to use the user ID as the row key. The Azure Table Storage API gives me just about everything, and I could just return the statusCode, headers and body from the API. However, I want the response to have the same shape as what was sent to the API service, so I need to remove the metadata from the response before I send it to the user.
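To make the cleanup concrete, a raw entity response looks roughly like this (illustrative values – theme and fontSize stand in for whatever preferences you store); the first four fields are the ones the code deletes:

{
  "odata.metadata": "https://myaccount.table.core.windows.net/$metadata#userprefs/@Element",
  "PartitionKey": "P1",
  "RowKey": "sid:really-long-string-of-hex-digits",
  "Timestamp": "2015-10-09T12:34:56.789Z",
  "theme": "dark",
  "fontSize": 14
}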

  /*
   * Middleware - handles the POST /api/userprefs - body must be a json
   * object.  If the object is not a JSON object, then return 400 Bad Request
   * If the object is a JSON object, then replace the original user prefs and
   * send a 200 Success
   */
  router.post('/', bodyParser.json(), function (req, res) {
    // check out req.body - must be there and be an object
    if (!req.body || typeof req.body !== 'object') {
      res.status(400).send('Invalid Body');
      return;
    }

    // Construct a new object from the parameters within the body
    var entGen = azureStorage.TableUtilities.entityGenerator,
        tableObject = {
          PartitionKey: entGen.String(partitionKey),
          RowKey: entGen.String(req.user.id)
        };

    // Iterate over the parameters and add them to the tableObject
    // Types are inferred when not specified
    for (var key in req.body) {
      // Skip the system properties
      if (key === 'PartitionKey' || key === 'RowKey' || key === 'odata.metadata' || key === 'Timestamp') {
        continue;
      }
      tableObject[key] = { '_': req.body[key] };
    }

    // Store the tableObject in the table store
    store.insertOrReplaceEntity(tableName, tableObject, function (error, result, response) {
      if (error) {
        console.error('[userPrefs] insertOrReplaceEntity: ', error);
        res.status(500).send('Internal Server Error (Storage)');
        return;
      }

      // The response gives us all the information we need
      res.status(response.statusCode)
         .header(response.headers)
         .send(response.body);
      return;
    });
  });

The POST interface is a little more complex. The code gets fed a JSON object within the body (and that is decoded by bodyParser.json()). If the code receives a valid object, then it is converted into the form that Azure Table Storage requires and then inserted. If the row already exists, the Azure Table Storage API will automatically replace the existing row. As with the GET version, the response returned from the Azure Table Storage API is “correct” – in this case, no adjustments are required.

  return router;
};

Finally, I return the router. This router can be “mounted” just like any other express middleware.

Testing the Code

I like to test code locally before committing it to Azure. To do this, I downloaded the Azure Storage Emulator. This can be easily installed on a Windows PC (sorry Mac users – you are out of luck). Once installed, it is located at C:\Program Files (x86)\Microsoft SDKs\Azure\Storage Emulator so make sure you add that to your path. To initialize the storage emulator, run:

AzureStorageEmulator.exe init

Once the storage emulator is initialized, you can start it:

AzureStorageEmulator.exe start

You will also need to set some environment variables. I have a small PowerShell script with the following:

$env:AZURE_STORAGE_ACCOUNT = "devstoreaccount1"
$env:AZURE_STORAGE_ACCESS_KEY = "Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw=="
$env:AZURE_STORAGE_CONNECTION_STRING = "UseDevelopmentStorage=true"

The account, access key and connection string are all defined by the Azure Storage Emulator and cannot be changed.

You can now start your project and use Postman to send GET and POST requests to /api/userprefs to see the results. Don’t forget that you need to send an Authorization header or otherwise get req.user.id set.
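If you prefer a scripted check over Postman, a minimal sketch like this works – it assumes your server is on localhost:3000 and that your JWT middleware accepts a bearer token (the TEST_JWT environment variable is a hypothetical stand-in for however you obtain one):

var http = require('http');

http.get({
    hostname: 'localhost',
    port: 3000,
    path: '/api/userprefs',
    headers: { 'Authorization': 'Bearer ' + process.env.TEST_JWT }   // hypothetical test token
}, function (res) {
    var body = '';
    res.on('data', function (chunk) { body += chunk; });
    res.on('end', function () {
        console.log(res.statusCode, body);   // expect a 200 and a JSON object
    });
});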

Going to Production

I’ve walked through creating an Azure App Service Web App with continuous deployment before, so I’m going to assume your application is already deployed. Once there, add a new service: select Data + Storage and then Storage Account.

Once you click on that, you will be asked what deployment model you want. You want to change it from Classic to Resource Manager. Click on Create.

In the Create storage account page, give the storage account a unique name. Most importantly, click on Select existing under Resource group, then select your web app’s resource group.

The other options (redundancy, diagnostic logging and region) are up to you. I’d suggest placing the storage in the same region as your web application.

Once the storage account is created, open up the storage account and select Settings then Access Keys.

You want to copy the connection string for one of the keys. I use KEY1. Now, go back to your web app and click on Settings then Application Settings. You want to add an App setting for AZURE_STORAGE_CONNECTION_STRING – copy the connection string in from the storage account.

Don’t forget to click on the save button. Once saved, restart your web site so that the code will read in the new app setting. At this point, you should be able to use the userprefs API endpoint just like you did in your test – it’s just a different URI to access.