Writing HTTP CRUD in Azure Functions

Over the last two posts, I’ve introduced writing Azure Functions locally and deploying them to the cloud. It’s time to do something useful with them. In this post, I’m going to introduce how to write a basic HTTP router. If you follow my blog and other work, you’ll see where this is going pretty quickly. If you are only interested in Azure Functions, you’ll have to wait a bit to see how this evolves.

Create a new Azure Function

I started by installing the latest azure-functions-cli package:

npm i -g azure-functions-cli

Then I created a new Azure Function App:

mkdir dynamic-tables
cd dynamic-tables
func new

Finally, I created a function called todoitem:


Customize the HTTP Route Prefix

By default, any HTTP trigger is bound to /api/function, where function is the name of your function. I want full control over where my function exists, so I'm going to fix this in the host.json file:

{
    "http": {
        "routePrefix": ""
    }
}

The routePrefix is the important thing here. The value is normally “/api”, but I’ve cleared it. That means I can put my routes anywhere.

Set up the Function Bindings

In the todoitem directory are two files. The first, function.json, describes the bindings. Here is the version for my function:

{
    "disabled": false,
    "bindings": [
        {
            "name": "req",
            "type": "httpTrigger",
            "direction": "in",
            "authLevel": "function",
            "methods": [ "GET", "POST", "PATCH", "PUT", "DELETE" ],
            "route": "tables/todoitem/{id:alpha?}"
        },
        {
            "type": "http",
            "direction": "out",
            "name": "res"
        }
    ]
}

This function is triggered by an HTTP trigger and accepts five methods: GET, POST, PUT, PATCH and DELETE. In addition, I've defined a route that contains an optional string for an id. I can, for example, do GET /tables/todoitem/foo and this will be accepted. On the outbound side, I want to respond to requests, so I've got a response object. The HTTP trigger for Node is modelled after ExpressJS, so the req and res objects are mostly equivalent to the ExpressJS Request and Response objects.

Write the Code

The code for this function is in todoitem/index.js:

/**
 * Routes the request to the table controller to the correct method.
 * @param {Function.Context} context - the table controller context
 * @param {Express.Request} req - the actual request
 */
function tableRouter(context, req) {
    var res = context.res;
    var id = context.bindings.id;

    switch (req.method) {
        case 'GET':
            if (id) {
                getOneItem(req, res, id);
            } else {
                getAllItems(req, res);
            }
            break;

        case 'POST':
            insertItem(req, res);
            break;

        case 'PATCH':
            patchItem(req, res, id);
            break;

        case 'PUT':
            replaceItem(req, res, id);
            break;

        case 'DELETE':
            deleteItem(req, res, id);
            break;

        default:
            res.status(405).json({ error: "Operation not supported", message: `Method ${req.method} not supported` });
            break;
    }
}

function getOneItem(req, res, id) {
    res.status(200).json({ id: id, message: "getOne" });
}

function getAllItems(req, res) {
    res.status(200).json({ query: req.query, message: "getAll" });
}

function insertItem(req, res) {
    res.status(200).json({ body: req.body, message: "insert" });
}

function patchItem(req, res, id) {
    res.status(405).json({ error: "Not Supported", message: "PATCH operations are not supported" });
}

function replaceItem(req, res, id) {
    res.status(200).json({ body: req.body, id: id, message: "replace" });
}

function deleteItem(req, res, id) {
    res.status(200).json({ id: id, message: "delete" });
}

module.exports = tableRouter;

I use a tableRouter method (and that is what our function calls) to route the HTTP call to the right CRUD method. It's up to me to put whatever CRUD code I need to execute and respond to the request in those additional methods. In this case, I'm just returning a 200 status (OK) and some JSON data. One key piece is differentiating between a GET /tables/todoitem and a GET /tables/todoitem/foo. The former is meant to return all records and the latter is meant to return a single record. If the id is set, we call the single-record GET method; if not, we call the multiple-record GET method.

What’s the difference between PATCH and PUT? In REST semantics, PATCH is used when you want to do a partial update of a record. PUT is used when you want to send a full record. This CRUD recipe handles both, but you may decide to use one or the other.
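The distinction can be sketched in a few lines of JavaScript. The helper names here are illustrative only – they are not part of the function above:

```javascript
// PATCH merges only the fields it receives into the existing record;
// PUT replaces the record wholesale.
function applyPatch(existing, partial) {
    return Object.assign({}, existing, partial);
}

function applyPut(existing, replacement) {
    // The id comes from the route, not the body, so it survives a PUT.
    return Object.assign({ id: existing.id }, replacement);
}

const record = { id: 'foo', text: 'old text', complete: false };
applyPatch(record, { complete: true }); // text is untouched
applyPut(record, { text: 'new text' }); // complete is gone entirely
```
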

Running Locally

As with the prior blog post, you can run func run test-func --debug to start the backend and get ready for the debugger. You can then use Postman to send requests to your backend. (Note: Don’t use func run todoitem --debug – this will cause a crash at the moment!). You’ll get something akin to the following:


That’s it for today. I’ll be progressing on this project for a while, so expect more information as I go along!

Creating and Debugging Azure Functions Locally

I’ve written about Azure Functions before as part of my Azure Mobile Apps series. Azure Functions is a great feature of the Azure platform that allows you to run custom code in the cloud in a “serverless” manner. In this context, “serverless” doesn’t mean “without a server”. Rather, it means that the server is abstracted away from you. In my prior blog post, I walked through creating an Azure Function using the web UI, which is a problem when you want to check your Azure Functions in to source code and deploy them as part of your application.

UPDATE: Azure App Service has released a blog post on the new CLI tools.

This is the first in a series of blog posts. I am going to walk through a process by which you can write and debug Azure Functions on your Windows 10 PC, then check the code into your favorite source code control system and deploy in a controlled manner. In short – real world.

Getting Ready

Before you start, let’s get the big elephant out of the way: the actual runtime is Windows only. Sorry, Mac users. The runtime relies on the 4.x .NET Framework, which you don’t have, so boot into Windows 10. You can still create functions locally on a Mac, but you will have to publish them to the cloud to run them. There is no local runtime on a Mac.

To get your workstation prepped, you will need the following:

  • Node
  • Visual Studio Code
  • Azure Functions Runtime

Node is relatively easy. Download the Node package from nodejs.org and install it as you would any other package. You should be able to run the node and npm programs from your command line before you continue. Visual Studio Code is similarly easy to download and install. You can download additional extensions if you like. If you write functions in C#, I would definitely download the C# extension.

The final bit is the Azure Functions Runtime. This is a set of tools produced by the Azure Functions team to create and run Functions locally and is based on Yeoman. To install:

npm install -g yo generator-azurefunctions azure-functions-cli

WARNING There is a third-party module called azure-functions which is not the same thing at all. Make sure you install the right thing!

After installing, the func command should be available:


Once you have these three pieces, you are ready to start working on Azure Functions locally.

Creating an Azure Functions Project

Creating an Azure Functions project uses the func command:

mkdir my-func-application
cd my-func-application
func init

Note that func init creates a git repository as well – one less thing to do! Our next step is to create a new function. The Azure Functions CLI uses Yeoman underneath, which we can call directly using yo azurefunctions:


You can create as many functions as you want in a single function app. In the example above, I created a simple HTTP triggered function written in JavaScript. This can be used as a custom API in a mobile app, for example. The code for my trigger is stored in test-func\index.js:

module.exports = function(context, req) {
    context.log('Node.js HTTP trigger function processed a request. RequestUri=%s', req.originalUrl);

    if (req.query.name || (req.body && req.body.name)) {
        context.res = {
            // status: 200, /* Defaults to 200 */
            body: "Hello " + (req.query.name || req.body.name)
        };
    }
    else {
        context.res = {
            status: 400,
            body: "Please pass a name on the query string or in the request body"
        };
    }
    context.done();
};

and the binding information is in test-func\function.json:

{
    "disabled": false,
    "bindings": [
        {
            "authLevel": "function",
            "type": "httpTrigger",
            "direction": "in",
            "name": "req"
        },
        {
            "type": "http",
            "direction": "out",
            "name": "res"
        }
    ]
}

Running the function

To run the Azure Functions Runtime for your function app, use func run test-func.

The runtime is kicked off first. This monitors the function app for changes, so any changes you make to the code will be reflected as soon as you save the file. If you are running something that is triggered automatically (like a cron job), it will run immediately. For my HTTP trigger, I need to hit the HTTP endpoint – in this case, http://localhost:7071/api/test-func.

Note that the runtime is running with the version of Node that you installed and it is running on your local machine. Yet it can still be triggered by whatever you set up. If you set up a blob trigger from a storage account, then that will trigger. You have to set up the environment properly, though. Remember that App Service (and Functions) app settings appear as environment variables to the runtime. When you run locally, you will need to manually set up the app settings by setting an environment variable of the same name. Do this before you use func run for the first time.
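As a sketch, mirroring an app setting locally looks like this. The setting name and value below are purely illustrative – use whatever names your function's bindings actually reference:

```shell
# App Service app settings surface as environment variables to the
# runtime, so export them with the same name before starting the host.
export MyStorageConnectionString="DefaultEndpointsProtocol=https;AccountName=mystorage;AccountKey=<your-key>"

# The local runtime now sees the same setting the cloud runtime would.
func run test-func
```
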

Debugging the function

Running the function is great, but I want to debug the function – set a breakpoint, inspect the internal state of the function, etc. This can be done easily in Visual Studio Code as the IDE has an integrated Node debugger.

  • Run func run test-func --debug
  • In Visual Studio Code, open the function code and set a breakpoint
  • Switch to the Debug tab and hit Start (or F5)


  • Trigger the function. Since I have a HTTP triggered function, I’m using Postman for this:


  • Note that the function is called and I can inspect the internals and step through the code:


You can now step through the code, inspect local variables and generally debug your script.

Next up

In the next article, I’ll discuss programmatically creating an Azure Function app and deploying our function through some of the deployment mechanisms we have available to us in Azure App Service.

30 Days of Zumo.v2 (Azure Mobile Apps): Day 1 – Setup

I’ve committed myself to learning as much Azure Mobile Apps as possible. Internally, this project is called Zumo (Azure Mobile glued together) and several sites have shown this name with reference to Azure Mobile Services. Azure Mobile Apps is Zumo v2. It’s a server SDK that runs on top of a web site. This has some really interesting things about it – like you can use all the features of Azure Web Apps (staging slots, scaling, backup and restore, authentication) and use the same API within the Web App and Mobile App. Each of the 30 days of code will cover a single topic that you can work through in about an hour.

To get started, you’ve got to have an Azure subscription. Sure, you could use TryAppService to create a backend, but that only lasts for an hour and it’s very restrictive – you don’t get to alter the backend code. If you haven’t already, there is a free trial that lasts 30 days. Once you get beyond the 30 days, you can run a development site for free.

Day 1: Setup

Day 1 is all about setup. I am going to do all my development on a Mac. Why not a PC with Visual Studio? Visual Studio is a very specific environment and doesn’t lend itself to iOS development. I want the experience to be as raw as possible and entirely free. Developing on the Mac tends to be a little more painful when you are used to an integrated environment like Visual Studio. Visual Studio, however, hides a lot of details from you. When things “just happen”, you tend not to learn what’s behind them and debugging capabilities get lost. I want to avoid that, so I’ll be using the command line, the Azure Portal and a simple text editor.

What else do I need?

A Text Editor

What’s your favorite code editor? On a mac, mine is Atom. There are a bunch of decent plugins for Atom and I’ve covered some of them in the past. I’ll probably do another post at some point about my favorite Atom plug-ins for JavaScript development. I also like Visual Studio Code, which also has some great plug-ins. I’ve heard good things about Sublime Text as well. All three of these editors are available on Mac and Windows.

I’m not advocating for a specific text editor here. There are things you definitely want: a monospace font and syntax highlighting would be enough. Just pick your favorite.

One thing to avoid is something “heavy”. Eclipse, for example, falls into this “heavy” camp. It’s marginal on the functionality improvements, yet its startup cost and memory utilization make it distinctly second-rate for what I want to do – edit files.

Google Chrome

Yes, I’m being specific here. Google Chrome gets its developer tools right. In addition, you will want a few plugins. The most notable is Postman – a mechanism for executing REST calls and viewing the output.


Git

Again, I’m being specific here. There are several git tools, but they all implement git. Don’t try to get something else. I am going to be putting things on GitHub, which uses git underneath. There is Git for Mac and Git for Windows. I’ll use these tools for storing my code in a source code repository on GitHub and for querying Azure App Service.

A Command Line

If you are on a Mac, there is a command line under Applications > Utilities. If you are on a PC, then you have the PowerShell prompt, although I prefer ConEmu. Since I’m not going to be using Visual Studio, I want somewhere to execute commands.

XCode (Mac Only)

If you are developing iOS applications, then the compilation step must use Xcode and must run on a Mac. Android applications don’t care. Windows applications don’t require a specific compiler, but you have to compile on Windows – when I get to that, I’ll switch over to Windows and use Visual Studio. There will be some points at which you will be asked to start Xcode and accept the license – at which point, you might as well download it. You can do this from the Apple App Store. Note, however, it’s a 1+Gb download, so it takes some time.

An FTP Client

I like a graphical ftp client for this. It allows me to browse the App Service behind the scenes. You can find a good list on LifeHacker. Personally, I use Cyberduck for this.

After the software, you will also need a GitHub Id – I’m going to store my code on GitHub, so there will be a repository on GitHub.

Let’s get started

The Azure Documentation already adequately covers creating an Azure Mobile App. I recommend following that tutorial to get your Azure Mobile App created and then hooked up to a SQL Azure instance. You can follow any of the tutorials – you will end up with a mobile site and a client that implements a simple TodoItem hooked up to the mobile backend.

One word on pricing and what sort of App Service plan you should choose. There are several plan tiers and they offer some interesting choices. Here is the breakdown:


Note that Basic does not offer all the features of Standard and Premium. In fact, Standard offers many features that you should be interested in:

  • Auto-scaling – while not an issue in development, this will be an issue in production applications. You want your application to grow as demand grows, automatically. Basic only scales manually, and only to a limited number of instances.
  • Staging Slots – this is an awesome feature that I’ll discuss in a later blog post. One of the things this allows you to do is to upload a new site, test it out and then swap out the production version, all with zero down time.
  • Backups – we like backups. They are important. Standard adds a daily backup.

Premium adds more disk space, Biztalk services, more staging slots and more backups. Most developers can get away with the Basic edition, since they only need limited scalability (to test what happens when the service does scale) and don’t need staging slots. There are two other tiers – Free and Shared:


Note the lack of features. Free and Shared are great if you are just learning but you will find them painful to use. Spend some of your Azure free trial credits on a minimum of Basic.

Note that I’m not saying anything about the options available on SQL Azure here. The pricing when you create an App Service has nothing to do with SQL Azure. To get your effective pricing, you need to add your App Service plan to your SQL Azure plan:


For most normal learning activities, you can use a B-Basic plan for your SQL Azure. If you want to try out Georeplication or you have bigger data needs, you can use an S0-Standard. The pricing goes up from there. As with App Service, there is a Premium offering that adds Active Georeplication – good for those mission critical revenue-on-the-line type of apps.

Want a completely free version of this? Make sure you pick an F1-Free App Service plan and an F-Free SQL Azure plan. Want to learn everything the platform has to offer? Pick an S1-Standard App Service plan and an S0-Standard SQL Azure plan. You can upgrade your plan at any point, so this allows you to start small and move up in cost as you need to.

If you are learning or developing and do pick a standard plan, make sure you shut down the App Service at the end of your activity. This will save you some cash at night when you aren’t using the service.

Setting up for Development

So, I’ve got my site all set up. I also have a nice iOS Todo app that allows me to add TodoItems (I used the Swift version, since I’m mildly interested in learning the language), but I have not been shown any of the server code as yet. I want to set up something else here – Continuous Deployment. To configure continuous deployment, I’m going to do the following:

  • Create a GitHub repository
  • Clone the GitHub repository onto your local machine
  • Download the source code for the site I created
  • Check in the source code for the site into GitHub
  • Create a branch on GitHub for Azure deployment
  • Link the site to deploy directly from GitHub

This is a cool feature. Once I’m set up, deployment happens automatically. When I push changes to GitHub, the Azure Mobile App will automatically pick them up and deploy them for me. Here is how to do it:

1. Create a GitHub Repository.

  1. Log onto GitHub
  2. Click on the + in the top right corner of the web browser and select New Repository
  3. Fill in the form – I called my repository 30-days-of-zumo-v2
  4. Click on Create Repository

2. Clone the repository

  1. Open up your command prompt
  2. Pick a place to hold your GitHub repositories. Mine is ~/github – if you need to make the directory, then do so.
  3. Change directory to that place: cd ~/github, for example
  4. Type in:
git clone https://github.com/adrianhall/30-days-of-zumo-v2

You will replace the URL with the URL of your repository – this was displayed when you created the repository.

3. Download the Azure Website

First step is to set your deployment credentials. Log onto the Azure Portal, select your site then select All Settings. Find the Deployment Credentials option, then fill in the form and click on Save. I like to use my email address with special characters replaced by underscores for my username – this ensures it is unique. Make your password very hard to guess. Use a password manager if you need to.

Let’s get the requisite information for an ftp download:

  • The server is listed on the starting blade of your site, but will be something like ftp://waws-prod-bay-043.ftp.azurewebsites.windows.net
  • Your username is SITENAME\USERNAME. The SITENAME is the name of your site. The USERNAME is what you set in the deployment credentials. This is listed on the starting blade as well, right above the FTP Hostname.
  • Your password is whatever you set in the deployment credentials.

Open up Cyberduck, enter the information (uncheck the anonymous checkbox) and click on Connect. You can use ftp or ftps protocol – I prefer ftps since it’s designated secure – information is transmitted with SSL encryption, including your deployment credentials.


You will now be able to see the site. Expand the site node then the wwwroot node. Highlight everything in the wwwroot node, right-click and select Download to…. Put the files in the directory you cloned from GitHub.

4. Check in the code for your site

Before you go all “git add” on this stuff, there is some cleanup to do. Right now, the site is set up to use Easy Tables and Easy APIs – there are some extra files that you don’t really need. That’s because we are going to act like developers and keep our files checked into source code control. That really means we can’t use Easy Tables and Easy APIs. Those facilities are great for simple sites and I highly recommend you check them out. But you will leave them behind once you get serious about developing a backend – you will write code and check it into a source code repository.

Let’s start by removing the files we don’t need because we aren’t going to be using Easy Tables or Easy APIs:

  • sentinel
  • tables/TodoItem.json

We’ll also remove the files that are handled by the server or restored during deployment

  • node_modules
  • iisnode.yml
  • web.config

You can do this within your GUI or on the command line with the rm command. On Windows, use rimraf:

npm install rimraf -g
rimraf node_modules

Finally, add a .gitignore file – go to https://gitignore.io, enter Node in the box and click on Generate. This will generate a suitable .gitignore file that you can cut and paste into an editor.

You are now ready to check in the initial version. Make sure you are in the project directory, then type:

git add .
git commit -m "Initial Checkin"
git push -u origin master

This will push everything up to the master branch on GitHub.

5. Create an Azure deployment branch

You can do this from the command line as well. Make sure you are still in the project directory, then type:

git checkout -b azure
git push -u origin azure

This will create an azure branch for you to merge into (more on that later), then push it up to GitHub.

6. Link the azure branch to continuous deployment

Log back on to the Azure portal and select your site. Click on All Settings, then click on Deployment Source. Select GitHub as your deployment source. You will probably have to enter your GitHub credentials in order to proceed. Eventually, you will see something like this:


Pick your project (it’s the GitHub repository you created) and the azure branch. Once done, click on OK. Finally, click on Sync.

Something completely magical will happen now. Well, not so magical really – that comes later. The Azure system will go off and fetch the project. It will install all the dependencies of the project (listed in the package.json) file and then deploy the results. The magical piece happens later – whenever you push a new version to the azure branch, it will automatically be deployed. You’ll be able to see it happen.

This post went a little longer than I planned, but I’m now all set up for continuous development on Azure. In the next post, I’ll look at upgrading the Node.js version and handle the check-in and merge mechanism. In addition, I’ll look at a local development cycle (rather than deploying) using the SQL Azure instance I’ve set up.

If you want to follow along on the code, I’ve set up a new GitHub repository – enjoy!

Using Azure Mobile Apps from a React Redux App

I did some work towards my React application in my last article – specifically handling authentication with Auth0 providing the UI and then swapping the token with Azure Mobile Apps for a ZUMO token. I’m now all set to do some CRUD operations within my React Redux application. There is some basic Redux stuff in here, so if you want a refresher, check out my prior Redux articles:

Refreshing Data

My first stop is “how do I get the entire table that I can see from Azure Mobile Apps?” This requires multiple actions in a React Redux world. Let’s first of all look at the action creators:

import constants from '../constants/tasks';
import zumo from '../../zumo';

/**
 * Internal Redux Action Creator: update the isLoading flag
 * @param {boolean} loading the isLoading flag
 * @returns {Object} redux-action
 */
function updateLoading(loading) {
    return {
        type: constants.UpdateLoadingFlag,
        isLoading: loading
    };
}

/**
 * Internal Redux Action Creator: replace all the tasks
 * @param {Array} tasks the new list of tasks
 * @returns {Object} redux-action
 */
function replaceTasks(tasks) {
    return {
        type: constants.ReplaceTasks,
        tasks: tasks
    };
}

/**
 * Redux Action Creator: Set the error message
 * @param {string} errorMessage the error message
 * @returns {Object} redux-action
 */
export function setErrorMessage(errorMessage) {
    return {
        type: constants.SetErrorMessage,
        errorMessage: errorMessage
    };
}

/**
 * Redux Action Creator: Refresh the task list
 * @returns {Object} redux-action
 */
export function refreshTasks() {
    return (dispatch) => {
        dispatch(updateLoading(true));

        const success = (results) => {
            console.info('results = ', results);
            dispatch(replaceTasks(results));
        };

        const failure = (error) => {
            dispatch(setErrorMessage(error.message));
        };

        zumo.table.read().then(success, failure);
    };
}

Four actions for a single operation? I’ve found this is common for Redux applications that deal with backend services – you need to have several actions to implement all the code-paths. I could have gotten away with just three – an initiator, a successful completion and an error condition. However, I wanted to ensure I had flexibility. The setErrorMessage() and updateLoading() actions are generic enough to be re-used for other actions.

Two of these actions are internal – I don’t export them and so the rest of the application never sees them. The only action creator that the application at large can use is the refreshTasks() action – the initiator for the refresh. I’ve made the setErrorMessage() action generic enough that it can be used by an error dialog to clear the error as well. Lesson learned – only export the actions that you want the rest of the application to use.

Looking at refreshTasks(), I’m not doing any filtering. Azure Mobile Apps supports filtering on the server as well as the client. I’d rather filter on the client in this application – it saves a round trip, and the data is never going to be big enough that filtering is going to be a problem. This may not be true in your application – you should make a decision on filtering on the server vs. the client in terms of performance and memory usage.
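As a sketch of what client-side filtering looks like once the full table has been read – the task shape ({ id, text, complete }) follows this article, and the helper is illustrative, not part of the SDK:

```javascript
// Filter on the client after reading the whole table from the server.
function incompleteTasks(tasks) {
    // Keep only the tasks that are not yet complete.
    return tasks.filter((task) => !task.complete);
}

// With data such as a refresh would return:
const tasks = [
    { id: '1', text: 'write post', complete: true },
    { id: '2', text: 'review code', complete: false }
];
console.log(incompleteTasks(tasks).map((t) => t.text)); // logs [ 'review code' ]
```
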

Insert, Modify and Delete Tasks

I’ve already got the actions – I just need to update them for the async server code. For example, here is the insert code:

/**
 * Internal Redux Action Creator: Insert a new task into the cache
 * @param {Object} task the task to be updated
 * @param {string} task.id the ID of the task
 * @param {string} task.text the description of the task
 * @param {bool} task.complete true if the task is completed
 * @returns {Object} redux-action
 */
function insertTask(task) {
    return {
        type: constants.Create,
        task: task
    };
}

/**
 * Redux Action Creator: Create a new Task
 * @param {string} text the description of the new task
 * @returns {Object} redux-action
 */
export function createTask(text) {
    return (dispatch) => {
        dispatch(updateLoading(true));

        const newTask = {
            text: text,
            complete: false
        };

        const success = (insertedItem) => {
            console.info('createTask: ', insertedItem);
            dispatch(insertTask(insertedItem));
        };

        const failure = (error) => {
            dispatch(setErrorMessage(error.message));
        };

        zumo.table.insert(newTask).then(success, failure);
    };
}

I’m reusing the updateLoading() and setErrorMessage() action creators that I used with the refresh tasks. The createTask() action performs the insert asynchronously, then calls the insertTask() action creator with the newly created task to update the in-memory cache (as we will see below when we come to the reducers). There are similar mechanisms for modification and deletion. I create an internal action creator to update the in-memory cache. The exported action creator initiates the change and doesn’t update the in-memory cache until the request has completed successfully.
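As an illustration, the delete pair might look like the following sketch. The constants object and zumo table here are stand-ins so the example is self-contained; the real code would import them (and export deleteTask) just as the insert code above does:

```javascript
// Stand-ins for the imported constants and Azure Mobile Apps table.
const constants = { Delete: 'DELETE_TASK' };
const zumo = {
    table: { del: (item) => Promise.resolve(item) } // real client calls the server
};

// Internal action creator: remove the task from the in-memory cache.
function deleteTaskFromCache(taskId) {
    return { type: constants.Delete, taskId: taskId };
}

// Initiator: delete on the server first, update the cache only on success.
function deleteTask(taskId) {
    return (dispatch) => {
        const success = () => dispatch(deleteTaskFromCache(taskId));
        const failure = (error) => dispatch({ type: 'SET_ERROR_MESSAGE', errorMessage: error.message });
        return zumo.table.del({ id: taskId }).then(success, failure);
    };
}
```
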

I also needed to add a dialog that appears when the error message is set in my Application.jsx component:

        const onClearError = () => { return dispatch(taskActions.setErrorMessage(null)); };
        let errorDialog = <div style={{ display: 'none' }}/>;
        if (this.props.errorMessage) {
            const actions = [ <FlatButton key="cancel-dialog" label="OK" primary={true} onTouchTap={onClearError} /> ];
            errorDialog = (
                <Dialog actions={actions} modal={true} open={true} title="Error from Server">
                    {this.props.errorMessage}
                </Dialog>
            );
        }
I then place {errorDialog} in my rendered JSX file.

Adjusting the Cache

Let’s take a look at the reducers.

import constants from '../constants/tasks';

const initialState = {
    tasks: [],
    profile: null,
    isLoading: false,
    authToken: null,
    errorMessage: null
};

/**
 * Reducer for the tasks section of the redux implementation
 * @param {Object} state the current state of the tasks area
 * @param {Object} action the Redux action (created by an action creator)
 * @returns {Object} the new state
 */
export default function reducer(state = initialState, action) {
    switch (action.type) {
    case constants.StoreProfile:
        return Object.assign({}, state, {
            authToken: action.token,
            profile: action.profile
        });

    case constants.UpdateLoadingFlag:
        return Object.assign({}, state, {
            isLoading: action.isLoading
        });

    case constants.Create:
        return Object.assign({}, state, {
            isLoading: false,
            tasks: [ ...state.tasks, action.task ]
        });

    case constants.ReplaceTasks:
        return Object.assign({}, state, {
            isLoading: false,
            tasks: [ ...action.tasks ]
        });

    case constants.Update:
        return Object.assign({}, state, {
            isLoading: false,
            tasks: state.tasks.map((tmp) => { return tmp.id === action.task.id ? Object.assign({}, tmp, action.task) : tmp; })
        });

    case constants.Delete:
        return Object.assign({}, state, {
            isLoading: false,
            tasks: state.tasks.filter((tmp) => { return tmp.id !== action.taskId; })
        });

    case constants.SetErrorMessage:
        return Object.assign({}, state, {
            isLoading: false,
            errorMessage: action.errorMessage
        });

    default:
        return state;
    }
}

You will note that my reducers only deal with the local cache. I could, I guess, also store this in localStorage so that my restart speed is faster. There would be a more complex interaction between the server, the in-memory cache and the localStorage cache that would have to be sorted out, however.

Note that all my reducers that result in a change to the in-memory cache also turn off the isLoading flag. This allows me one less dispatch via redux. I doubt it’s a significant performance increase, but I’m of the opinion that any obvious performance wins should be done. In this case, each operation results in one less action dispatch and one less Object.assign. In bigger projects, this could be significant.

Thinking about Sync and Servers

One of the things you can clearly see in this application is the delay in the round-trip to the server. I don’t update the cache until I have updated the server. This is safe. However, it’s hardly performant. There are a couple of ways I could fix this.

Firstly, I can update the local cache first. For example, let’s say I am inserting a new object. I can add two new local fields that are not transmitted to the server: clientId and isDirty. When the task is newly created, I can create a clientId instead (and use that everywhere) and set the dirty flag. When the server response comes back, I update the record from the server (not updating clientId) and clear the dirty flag. This allows me to identify “things that have not been updated on the server”, perhaps preventing multiple updates – it also allows me to identify things that have been newly created on the client.
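
Here is a minimal sketch of that first approach. The function names and the clientId format are hypothetical – in my app this logic would live in the Redux action creators and reducers:

```javascript
let nextClientId = 0;

// Optimistically insert a task into the local cache; mark it dirty
// until the server round-trip completes.
function optimisticInsert(cache, task) {
    const clientTask = Object.assign({}, task, {
        clientId: 'client-' + (nextClientId++),
        isDirty: true
    });
    return [ ...cache, clientTask ];
}

// When the server responds, merge the server record (which now has a
// real id) back into the cached task and clear the dirty flag.  The
// clientId is preserved so the UI can keep using it as a stable key.
function reconcile(cache, clientId, serverTask) {
    return cache.map((tmp) => {
        return tmp.clientId === clientId
            ? Object.assign({}, tmp, serverTask, { clientId: clientId, isDirty: false })
            : tmp;
    });
}

let cache = optimisticInsert([], { text: 'buy milk' });  // dirty, client-side only
cache = reconcile(cache, cache[0].clientId, { id: 42, text: 'buy milk' });
```

Anything still carrying isDirty: true is a record that has not yet been confirmed by the server, which is exactly the set you would want to retry or guard against double-submitting.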

Secondly, I can update a localStorage area instead of the server. This will be much faster. Then, periodically, I can trigger a refresh of the data from the server – sending the changes to the localStorage area up to the server.
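
The second approach can be sketched like this. The helper names are hypothetical; I pass the storage object in so the same code works with window.localStorage or anything else exposing the getItem/setItem interface:

```javascript
// Write-through cache: every change lands in storage immediately; a
// periodic sync job would push pending changes up to the server later.
function saveTasks(storage, tasks) {
    storage.setItem('tasks', JSON.stringify(tasks));
}

function loadTasks(storage) {
    const raw = storage.getItem('tasks');
    return raw ? JSON.parse(raw) : [];
}
```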

There are multiple ways to do synchronization of data with a server – which one depends on the requirements of accuracy of the data on the server, performance required on the client and memory consumption. There are trade offs whichever way you choose.

Where’s the code

You can find the complete code on my GitHub Repository.

Integrating Auth0 with Azure Mobile Apps JavaScript client

I included a mechanism to get Auth0 working in my Webpack-based React application during my last article. Today I want to go one step further. I want to show how you can use the information you get back from Auth0 to authenticate to Azure Mobile Apps. Azure Mobile Apps has recently released azure-mobile-apps-client v2.0.0-beta4 for JavaScript and Apache Cordova. One of the neat things about this system is that you can use whatever library you like to authenticate a user as long as you get the original identity provider token. That means that you can, for instance, use a Facebook provided library to integrate with the Facebook app and then submit that token to Azure Mobile Apps to generate an Azure App Service token. This is called “client-directed authentication flow”.

It requires a little bit of setup though. In this article, I’m going to go through the process for generating a Microsoft Account, use Auth0 as the UI for the authentication and then integrate it into the Azure Mobile Apps JavaScript SDK.

Step 1: Set up a Microsoft Account Application

Log on to the Microsoft Developer Account. Click on Create Application, then click on API Settings and fill in the form like this:


Specifically, the Mobile or Desktop Client toggle should be set to No and the Redirect URLs should match your Auth0 callback, which is based on the Auth0 Domain for your application. Log onto Auth0, click on App / APIs and then click on your application to find this information. Click on Save, then click on App Settings. Cut and paste the Client ID and Client Secret somewhere handy, as you will need them in the next steps.

Step 2: Update your Auth0 Application

You need to set up the Microsoft Account in your application within Auth0. Log into your Auth0 dashboard, click on Connections, then Social and finally Windows Live.


Cut and paste your Client ID and Client Secret from Step 1 into the relevant boxes. If you want the user's email address, make sure you have the right box checked. Click on Save, then close the box.

Step 3: Set up Authentication on Azure App Service

Log onto the Azure Portal, click on All Resources, then your Azure Mobile Apps application (if you don’t have one yet, follow their tutorial). Click on All Settings, then Authentication / Authorization. Now you are in the right place to be setting up authentication.

  • Turn App Service Authentication on
  • Set the action to take when the request is not authenticated to Allow request
  • Turn the Token Store on (under Advanced Settings).

Now click on Microsoft Account. Cut and paste the Client ID and Client Secret from Step 1, and select the same boxes as you did in Step 2 – these are the claims you are requesting be provided to you.


This is the most important step. The Client ID and Client Secret MUST be unique to your application (you can’t “try it” in the Auth0 dashboard, for example) and they must match (don’t use two different client ID/secret combos). This ensures that the token provided by Auth0 can be verified by Azure Mobile Apps.

Step 4: Load the Azure Mobile Apps SDK

When you npm install azure-mobile-apps-client, the actual library is in node_modules/azure-mobile-apps-client/dist/MobileServices.Web.min.js – you need to include this as a script reference in your HTML file. At this point, there is no CDN for this library and you can’t “require” the library into Webpack. Those facilities will come later. When it is loaded, you will be able to see a WindowsAzure.MobileServiceClient object within the global context of the browser.

I created a file to create the client like this:

/* global WindowsAzure */

const client = new WindowsAzure.MobileServiceClient(window.APPLICATION.base);
const table = client.getTable('TodoItem');

// Store the client so we can try things
window.APPLICATION.client = client;

export default {
    client: client,
    table: table
};

Now I can do something like:

import zumo from 'path/to/zumo';

This brings in the client and table reference. APPLICATION.base is set to my Azure Mobile Apps URL (in this case, https://ahall-todo-app.azurewebsites.net/). Note that I store the resulting client in my global APPLICATION object – this aids in debugging later on if I need to check something on a live connection.

Step 5: Convert the Auth0 token into an Azure Mobile Apps token

The Auth0 profile that is returned by the lock.show() callback contains an element called identities. There will only be one identity – your Microsoft Account one. In there is an access_token which is the token provided by the identity provider. You can use this as follows:
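
For reference, the part of the profile that matters here looks roughly like this (the field values are illustrative, not real tokens), and pulling out the provider token is a one-liner:

```javascript
// Illustrative shape of the Auth0 profile - only the fields used here
const profile = {
    identities: [
        {
            provider: 'windowslive',
            access_token: 'EwA4Aq1D-example'  // the identity provider's token
        }
    ]
};

// The token handed to Azure Mobile Apps is the identity provider's,
// not the Auth0 JWT.
const providerToken = profile.identities[0].access_token;
```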

export function authenticate(profile, token) {
    return (dispatch) => {
        // Start the refresh process

        const loginSuccess = (data) => {
            // Store the original profile and the mobile service auth token
            dispatch(storeProfile(profile, data.mobileServiceAuthenticationToken));
            // Update the loading flag to false
        };

        // On failure, clear the authentication
        const loginFailed = (error) => {
            dispatch(storeProfile(null, null));
        };

        // Trigger the process to swap the token for a zumo-token
        zumo.client.login('microsoftaccount', { access_token: profile.identities[0].access_token })
            .then(loginSuccess, loginFailed); // eslint-disable-line camelcase
    };
}

Note that I’m passing the access token from the identity provider (NOT the auth0 token) to my Azure Mobile Apps client.login() method. If the call succeeds, I’m using Redux and dispatching an action to update my authentication profile. If an error occurs, I’m dispatching an action to set the error message. In my application, this pops up a dialog stating that an error occurred (and clears the login).

Some Common Errors

It’s best to get down to a network level when you are diagnosing problems in this flow – do this by running the application in Chrome and opening up the Developer Tools, then switching to the Network tab. Click the XHR button to only see AJAX requests. When you see a problem, click on the Response for the request that went wrong. Look at the status:

  • A 401 Unauthorized error indicates that you’ve configured Microsoft Account, but the Client ID or Client Secret doesn’t match what’s in Auth0
  • A 404 Not Found error indicates you did not set up the appropriate Identity Provider in Azure Mobile Apps

If you aren’t moving beyond the Sign In button, check out the APPLICATION.client.currentUser and ensure the user information is being filled in.

Auth0 supports many more identity providers than Azure Mobile Apps. Azure App Service only supports Facebook, Twitter, Google, Windows Live / Microsoft Account, and Azure Active Directory, so use one of those.

Wrap Up

Want to see the fully working example? I’ve got that on my GitHub repository. My intent is to provide a React-in-the-browser example of the Todo application that Azure Mobile Apps uses for its quickstarts. Now that I’ve got authentication going, I’m going to move on to using the JavaScript library to cloud-connect this Todo app.

Integrating Auth0 into a Webpack Project

I’ve got a nice webpack-based React application moving towards “completion” (and I put that in quotes because I think a project is never really completed). One of the things I want to do is to integrate Auth0 – I like the presentation of their sign-in experience. This article is not about how to configure Auth0 – they do an excellent job of that. Rather, it is about how to get Auth0 working in a Webpack environment. The example webpack project that they provide is, quite simply, wrong (UPDATE: Auth0 corrected the issues within 2 days of this blog being written. Another thing to love about Auth0 – a responsive team!). Here is how to really do it.

Install required components

Along with webpack, you also need a few uncommon loaders:

npm install --save-dev auth0-lock transform-loader json-loader brfs packageify ejsify

The auth0-lock package is the actual Auth0 UI. The json-loader is for including JSON files in the package. The final three packages (brfs, packageify and ejsify) are the same packages used by the Browserify version of the auth0-lock build. That leaves transform-loader, which lets you run any Browserify transform inside webpack – essentially a bridge for code that was designed for Browserify.

Wondering where Auth0 went wrong with the sample? They left off brfs, packageify and ejsify from the devDependencies in the package.json file.

Adjust the webpack.config.js to compile Auth0

I work on a PC, not a Mac. As a result, the loaders provided within the sample did not work. The path separator is different between a PC and a Mac. Here are my loaders:

loaders: [
    // Javascript & React JSX Files
    { test: /\.jsx?$/, loader: jsxLoader, exclude: /node_modules/ },

    // Auth0-Lock Build
    {
        test: /node_modules[\\\/]auth0-lock[\\\/].*\.js$/,
        loaders: [ 'transform-loader/cacheable?brfs', 'transform-loader/cacheable?packageify' ]
    },
    {
        test: /node_modules[\\\/]auth0-lock[\\\/].*\.ejs$/,
        loader: 'transform-loader/cacheable?ejsify'
    },
    {
        test: /\.json$/,
        loader: 'json'
    }
]

Note that my standard JSX loader excludes the node_modules directory, while the auth0-lock loaders explicitly match files inside it.

Use Auth0Lock in your React code

I have the following in a React login controller:

import Auth0Lock from 'auth0-lock';

// and later, within the component

    /**
     * Lifecycle event - called when the component is about to mount -
     *  creates the Auth0 Lock Object
     */
    componentWillMount() {
        if (!this.lock) {
            this.lock = new Auth0Lock('rKxvwIoKdij6mwpsSvqi7doafDiGR3LA', 'shellmonger.auth0.com');
        }
    }

    /**
     * Event Handler to handle when the user clicks on Sign In button
     * @param {SyntheticEvent} event the button click
     * @returns {boolean} the result
     */
    onClickedLogin(event) {
        const authOptions = {
            closable: true,
            socialBigButtons: true,
            popup: true,
            rememberLastLogin: true,
            authParams: {
                scope: 'openid email name'
            }
        };

        /**
         * Callback for the authentication pop-up
         * @param {Error} err - the error (or null)
         * @param {Object} profile - the user profile
         * @param {string} token - the JWT token
         */
        const authCallback = (err, profile, token) => {
            this.lock.hide();       // Hide the auth0 lock after the callback

            if (err) {
                console.error('Auth0 Error: ', err);
                this.setState({ error: err });
                return;
            }

            console.info('token = ', token);
            console.info('profile = ', profile);
        };

        this.lock.show(authOptions, authCallback);

        // Click is handled here - nowhere else.
        return false;
    }

Obviously, this does not actually do anything other than print stuff on the console. You will want to store the token and use that to access any resources you want. I’ve got a full redux store update happening when I get a valid login back.

How to determine The Node versions available on Azure App Service

I may have mentioned this before, but Azure App Service is an awesome service for hosting your website. It’s got tons of features to support devops, production deployments, testing and monitoring tasks. One of the things I struggled with was node deployments. You should specify the version of node that you want to use in your package.json file in the engines section, like this:

"engines": {
    "node": ">= 4.2.3 <= 4.3.0",
    "npm": ">= 3.3.0"
}

This is great, but how do you know what versions of node and npm are acceptable together?
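
If you want to sanity-check a version against an engines-style range yourself, here is a dependency-free sketch (in a real project, the semver package on npm does this properly):

```javascript
// Compare two dotted versions: negative if a < b, 0 if equal, positive if a > b
function compareVersions(a, b) {
    const pa = a.split('.').map(Number);
    const pb = b.split('.').map(Number);
    for (let i = 0; i < 3; i++) {
        if ((pa[i] || 0) !== (pb[i] || 0)) {
            return (pa[i] || 0) - (pb[i] || 0);
        }
    }
    return 0;
}

// Does the version satisfy a ">= low <= high" style range?
function inRange(version, low, high) {
    return compareVersions(version, low) >= 0 && compareVersions(version, high) <= 0;
}

// With the engines entry above: node 4.2.4 is acceptable, 4.3.1 is not
inRange('4.2.4', '4.2.3', '4.3.0');  // true
inRange('4.3.1', '4.2.3', '4.3.0');  // false
```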

It turns out that this is relatively easy. First of all, create yourself an Azure App Service Web App (or Mobile App, or API App – they are all the same thing). Deploy a random node/express app to the service using your favorite technique (mine is using continuous deployment from GitHub). Now, let’s get to know Kudu.


Kudu is a nice web-based console for accessing the guts of your site. It contains a bunch of useful information. To get there, go to your Tools blade and then click on Kudu, then Go:


You can also go directly to https://your-site.scm.azurewebsites.net instead. As I’ve described in the past, it’s well worth getting to know Kudu – it’s one of those hidden gems in the Azure system that really assists in problem solving. Back to the problem at hand – I have a node site, but what versions am I allowed to put in the package.json file? Simple – click on Runtime versions on the front page.

Admittedly, this isn’t the friendliest display. A lot of Kudu interacts with a REST endpoint behind the scenes and displays the result in the most raw version possible. This is good – it gives you access to the maximal information possible, but it’s also bad – it tends to be hard to read. Fortunately, I’ve prepared for this. I’ve already installed JSON Viewer to assist with pretty-printing JSON files when in Chrome – my preferred browser. There are a number of plug-ins that do this, not only in Chrome, but Firefox and standalone. You can use whatever you want.

Now you can just cut-and-paste the version you want into the engines section of your package.json. Alternatively, you can use a range to ensure that you pick up the latest version. For example, my standard engines entry contains the following:

"engines": {
    "node": ">=5.7.0 <5.8.0",
    "npm": ">=3.3.0"
}

With this code, and matching it to the list, I know I’ll be running node.js v5.7.0 and npm v3.6.0 on the service.

Another thing to like about Azure App Service – they are really responsive in keeping up to date with the Node versions. Node is an extremely active community and multiple releases come out every week, it seems.

(Full disclaimer – I work for Microsoft in Azure App Service and Mobile Apps – I don’t maintain the node environment though, and my thoughts are not those of my employer nor the group that does maintain the node environments).