Creating and Debugging Azure Functions Locally

I’ve written about Azure Functions before as part of my Azure Mobile Apps series. Azure Functions is a great feature of the Azure platform that allows you to run custom code in the cloud in a “serverless” manner. In this context, “serverless” doesn’t mean “without a server”. Rather, it means that the server is abstracted away from you. In my prior blog post, I walked through creating an Azure Function using the web UI, which is a problem when you want to check your Azure Functions in to source code and deploy them as part of your application.

UPDATE: The Azure App Service team has released a blog post on the new CLI tools.

This is the first in a series of blog posts. I am going to walk through a process by which you can write and debug Azure Functions on your Windows 10 PC, then check the code into your favorite source code control system and deploy it in a controlled manner. In short – real world.

Getting Ready

Before you start, let’s get the big elephant out of the way. The actual runtime is Windows-only. Sorry, Mac users. The runtime relies on the 4.x .NET Framework, and you don’t have that on a Mac. If you want the full experience, boot into Windows 10. You can still create functions locally on a Mac, but you will have to publish them to the cloud to run them. There is no local runtime on a Mac.

To get your workstation prepped, you will need the following:

  • Node
  • Visual Studio Code
  • Azure Functions Runtime

Node is relatively easy. Download the Node package from nodejs.org and install it as you would any other package. You should be able to run the node and npm programs from your command line before you continue. Visual Studio Code is similarly easy to download and install. You can add extensions if you like; if you write functions in C#, I would definitely install the C# extension.

The final bit is the Azure Functions Runtime. This is a set of tools produced by the Azure Functions team for creating and running Functions locally; it is based on Yeoman. To install:

npm install -g yo generator-azurefunctions azure-functions-cli

WARNING: There is a third-party npm module called azure-functions which is not the same thing at all. Make sure you install the right packages!

After installing, the func command should be available:

func-1

Once you have these three pieces, you are ready to start working on Azure Functions locally.

Creating an Azure Functions Project

Creating an Azure Functions project uses the func command:

mkdir my-func-application
cd my-func-application
func init

Note that func init creates a git repository as well – one less thing to do! Our next step is to create a new function. The Azure Functions CLI uses Yeoman underneath, which we can call directly using yo azurefunctions:

func-2

You can create as many functions as you want in a single function app. In the example above, I created a simple HTTP triggered function written in JavaScript. This can be used as a custom API in a mobile app, for example. The code for my trigger is stored in test-func\index.js:

module.exports = function(context, req) {
    context.log('Node.js HTTP trigger function processed a request. RequestUri=%s', req.originalUrl);

    if (req.query.name || (req.body && req.body.name)) {
        context.res = {
            // status: 200, /* Defaults to 200 */
            body: "Hello " + (req.query.name || req.body.name)
        };
    }
    else {
        context.res = {
            status: 400,
            body: "Please pass a name on the query string or in the request body"
        };
    }
    context.done();
};

and the binding information is in test-func\function.json:

{
    "disabled": false,
    "bindings": [
        {
            "authLevel": "function",
            "type": "httpTrigger",
            "direction": "in",
            "name": "req"
        },
        {
            "type": "http",
            "direction": "out",
            "name": "res"
        }
    ]
}
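The runtime wires req and res to the handler via this binding file. You can sanity-check the handler logic without the runtime at all by calling it with a hand-rolled mock context. This is a sketch only; the real context object supplied by the Functions runtime is much richer than the pieces mocked here:

```javascript
// Sketch: exercising the HTTP trigger handler with a hand-rolled mock context.
// This mimics only the pieces the handler above touches (log, done, res).
var handler = function (context, req) {          // same logic as test-func/index.js
    context.log('RequestUri=%s', req.originalUrl);
    if (req.query.name || (req.body && req.body.name)) {
        context.res = { body: 'Hello ' + (req.query.name || req.body.name) };
    } else {
        context.res = { status: 400, body: 'Please pass a name on the query string or in the request body' };
    }
    context.done();
};

var mockContext = {
    log: function () {},                          // swallow log output for the sketch
    done: function () { mockContext.finished = true; },
    res: null,
    finished: false
};

handler(mockContext, { originalUrl: '/api/test-func?name=World', query: { name: 'World' }, body: {} });
console.log(mockContext.res.body);                // prints: Hello World
```

This sort of mock is handy for quick unit tests of the trigger logic before you ever start the host.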

Running the function

To run the Azure Functions Runtime for your function app, use func run test-func.

The runtime is kicked off first. It monitors the function app for changes, so any changes you make to the code will be reflected as soon as you save the file. If you are running something that is triggered on a schedule (like a cron job), then it will run immediately. For my HTTP trigger, I need to hit the HTTP endpoint – in this case, http://localhost:7071/api/test-func.

Note that the runtime runs with the version of Node that you installed, and it runs on your local machine. Yet it can still be fired by whatever triggers you set up – if you configure a blob trigger on a storage account, then that trigger will fire locally too. You do have to set up the environment properly, though. Remember that App Service (and Functions) app settings appear as environment variables to the runtime. When you run locally, you will need to set up the app settings manually by setting environment variables of the same names. Do this before you use func run for the first time.

Debugging the function

Running the function is great, but I want to debug the function – set a breakpoint, inspect the internal state of the function, etc. This can be done easily in Visual Studio Code as the IDE has an integrated Node debugger.

  • Run func run test-func --debug
  • Open your function code in Visual Studio Code and set a breakpoint
  • Switch to the Debugger tab and hit Start (or F5)

    func-3

  • Trigger the function. Since I have a HTTP triggered function, I’m using Postman for this:

    func-4

  • Note that the function is called and I can inspect the internals and step through the code:

    func-5

You can now step through the code, inspect local variables and generally debug your script.

Next up

In the next article, I’ll discuss programmatically creating an Azure Function app and deploying our function through some of the deployment mechanisms we have available to us in Azure App Service.

30 Days of Zumo.v2 (Azure Mobile Apps): Day 9 – Table Controller Operations

I introduced the table controller in my last article and showed off all the things you can do when defining tables at the table level. Those settings – including schema and authorization settings – are common to all operations on the table. Today, I am looking at individual operations. A table controller provides an OData v3 data source with GET, POST, PUT and DELETE operations.

The per-operation settings

There are four operations on a table, corresponding to the four operations in a typical CRUD controller:

  • CREATE is table.insert
  • READ is table.read
  • UPDATE is table.update
  • DELETE is table.delete

There is also a table.undelete() – this is a special case of a POST with an ID to undelete a previously deleted record. Soft delete must be enabled for this option. If you want to set an option on the READ operation, you would set it on table.read, for example. Let’s look at some:

Per-Operation Access Control

Last time, I showed you how to specify the default access control with something like this:

var table = require('azure-mobile-apps').table();

table.access = 'authenticated';

module.exports = table;

You can also specify access control on a per-operation basis. For example, let’s look at a read-only table:

var table = require('azure-mobile-apps').table();

table.access = 'disabled';
table.read.access = 'anonymous';

module.exports = table;

Note that I specify the default permission using table.access, then override it with individual operation access permissions. Another common pattern would be the ability to read anonymously, but only update when authenticated:

var table = require('azure-mobile-apps').table();

table.access = 'authenticated';
table.read.access = 'anonymous';

module.exports = table;

Yet another option would be a table where you can’t delete or update anything – only insert into the table:

var table = require('azure-mobile-apps').table();

table.access = 'authenticated';
table.update.access = 'disabled';
table.delete.access = 'disabled';

module.exports = table;

In all cases, the same logic as for table.access applies:

  • ‘authenticated’ requires the submission of a valid X-ZUMO-AUTH token – respond with 401 Unauthorized if not validly authenticated.
  • ‘anonymous’ allows anyone to perform the operation
  • ‘disabled’ means that the server responds with 405 Method not allowed

Per-operation Middleware

You can also define a method that executes some additional code during the operation. I’ve already shown an example of this when I was demonstrating the personal table – obtaining the email address so that I can use it in my table. The general form is:

var table = require('azure-mobile-apps').table();

table.access = 'authenticated';

table.read(function (context) {
    // Pre-operation tasks

    return context.execute().then((response) => {
        // Post-operation tasks
    });
});

module.exports = table;

Common pre-operation tasks are authorization, query adjustment and field insertion. I showed off examples of the latter two in the Personal Tables article – query adjustment to only show records that belong to the authenticated user and field insertion to insert the user’s email address. I’ll cover authorization in another article.

The Operation Context

When the operation middleware is called, a context is passed. The context has the following properties:

| Property | Type             | Usage                    |
|----------|------------------|--------------------------|
| query    | queryjs/Query    | The query to be executed |
| id       | string           | The record ID            |
| item     | object           | The inbound record       |
| req      | express.Request  | The request object       |
| res      | express.Response | The response object      |
| data     | data             | The data provider        |
| push     | notification hub | The push provider        |
| logger   | winston          | The logger provider      |
| tables   | function         | Accessor for tables      |
| table    | object           | The current table        |
| user     | object           | The authenticated user   |
| execute  | function         | Execution function       |

The query, id and item properties are used in various operations:

| Property | Used in               |
|----------|-----------------------|
| query    | read (search), update |
| id       | read (fetch), delete  |
| item     | insert, update        |

The req and res properties come from express – you can use these to access other headers and cookies or to manipulate the response directly. Most of the other properties give you access to pre-configured services. I’ve already introduced the user object – the only thing guaranteed there is the user.id value; everything else is optional, although the user object also has a getIdentity() method. Finally, we get to the execute() method. This executes the query and returns a Promise that resolves (eventually) to the response. That doesn’t stop you from altering the response as you go, however.

Let’s take a quick look at an example. Let’s say we adjust our existing TodoItem.js script like this:

// READ operation
table.read(function (context) {
  console.log('context.user = ', JSON.stringify(context.user));
    return context.user.getIdentity().then(function (userInfo) {
        console.log('userInfo.aad.claims = ', JSON.stringify(userInfo.aad.claims));
        context.query.where({ userId: userInfo.aad.claims.emailaddress });
        return context.execute();
    }).then(function (response) {
        console.log('read.execute response = ', response);
        return response;
    });
});

All this code does is print out the response to the log. You’ll need to turn on application logging at a verbose level to view this. When you do (and run your application), here is what you will see:

day-8-p1

The response in this case (for a read/search) is an array of results. We can use this fact to do some sort of business logic.

Business Logic like what?

Well, we’ll get into some very specific things dealing with Push Notifications later on in the course. One example, though, is kicking off work asynchronously via a queue. Let’s say you are using the Azure Storage API to store images. Once an image is stored, you update the database with a unique ID and some metadata about the image. But you want to add a watermark, resize the image, or notify someone that an image is available for review. You could do this in a post-execute phase.
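Here is a sketch of that pattern with everything mocked out. The enqueueWatermarkJob helper and the table/context objects are invented stand-ins for illustration; a real implementation would post a message to an actual queue service:

```javascript
// Sketch: kicking off asynchronous work in the post-execute phase. The
// enqueueWatermarkJob helper and the table/context objects are invented
// stand-ins; real code would post a message to an actual queue.
var jobs = [];
function enqueueWatermarkJob(item) {
    return new Promise(function (resolve) {
        jobs.push({ imageId: item.id });   // pretend this went to a queue
        resolve();
    });
}

var table = { insert: function (fn) { table._insert = fn; } };   // stand-in for the real table

table.insert(function (context) {
    return context.execute().then(function (item) {
        // The record is stored; now queue the watermark/resize job
        return enqueueWatermarkJob(item).then(function () { return item; });
    });
});

// Simulated invocation with a mock context
var mockContext = {
    item: { id: 'img-42', caption: 'sunset' },
    execute: function () { return Promise.resolve(mockContext.item); }
};
table._insert(mockContext).then(function (item) {
    console.log('stored %s, queued jobs: %d', item.id, jobs.length);
});
```

The important detail is that the queue submission happens only after context.execute() resolves, so the job is queued only when the database write has succeeded.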

Next Steps

Middleware is incredibly powerful functionality that allows you to do pre- and post-processing on requests on a per-operation basis. The last two articles have just been introducing the table controller functionality. In the next article, I’m going to use this functionality to provide authorization capabilities to my operations in a way that, hopefully, can be considered somewhat boilerplate. Until then, you can find today’s code at my GitHub Repository.

30 Days of Zumo.v2 (Azure Mobile Apps): Day 7 – Refresh Tokens

So far in this series, I’ve looked at the following:

Today I’m going to cover a common problem within token-based authentication. Every token that is given out has some built-in security – an expiry time. How do you deal with the expiry of a token? Facebook expires tokens every 60 days (not exactly a problem). However, most authentication providers use much shorter lifetimes – Azure Mobile Apps, Azure AD and Google all use 1 hour, a number common to a lot of providers. Some providers use very short times (miiCard, for example, uses 2 minutes, which is crazy as it practically guarantees you need two API calls instead of one to do anything). To ensure that the user is still authenticated, identity providers implement a refresh token mechanism. Today, we are going to look at integrating refresh tokens into our client and server authentication flows. If you are using Custom Authentication, then this is another topic you have to deal with yourself. Sorry – there are no generalized rules here.
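For background, the expiry travels inside the token itself, as the exp claim of the JWT payload, so a client can check it locally to decide when a refresh is due. A sketch (no signature validation happens here; this is client-side bookkeeping only):

```javascript
// Sketch: reading the expiry (the exp claim) out of a JWT. A JWT is three
// base64url-encoded sections joined by dots; the middle one is the JSON
// payload. No signature validation happens here.
function tokenExpiry(jwt) {
    var payload = jwt.split('.')[1].replace(/-/g, '+').replace(/_/g, '/');  // base64url -> base64
    var claims = JSON.parse(Buffer.from(payload, 'base64').toString('utf8'));
    return new Date(claims.exp * 1000);   // exp is seconds since the epoch
}

// Demonstrate with a fake, unsigned token (header and signature are dummies)
var claims = { sub: 'sid:12345', exp: 1458770400 };
var fakeToken = 'x.' + Buffer.from(JSON.stringify(claims)).toString('base64') + '.y';
console.log(tokenExpiry(fakeToken).toISOString());
```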

Refresh Tokens

You’ve logged in (somehow) and received a token from Azure Mobile Apps. Here is a little secret. It expires after 1 hour. Actually, that’s not much of a secret. It’s kind of well-known. What isn’t well known is how to deal with it. The short version is that there is an endpoint for refreshing all your authentication tokens at the same time: you submit the old token and the response is a new token. Some of the Azure Mobile Apps client SDKs include a method, InvokeApiAsync(), that does this for you. The Apache Cordova plugin doesn’t, but it does have a _request() method that I can use instead. Let’s start by adding a button to interactively call the refresh API:

    function initializeApp() {
        $('#wrapper').empty();

        // Replace the wrapper with the main content from the original Todo App
        var content =
              '<div id="wrapper">'
            + '<article>'
            + '  <header>'
            + '    <div id="title"><h2>Azure</h2><h1>Mobile Apps</h1></div><div id="rt-title"><button id="reftoken">Token</button></div>'
            + '    <form id="add-item">'
            + '      <button id="refresh">Refresh</button>'
            + '      <button id="add">Add</button>'
            + '      <div><input type="text" id="new-item-text" placeholder="Enter new task" /></div>'
            + '    </form>'
            + '  </header>'
            + '  <ul id="todo-items"></ul>'
            + '  <p id="summary">Initializing...</p>'
            + '</article>'
            + '<footer><ul id="errorlog"></ul></footer>'
            + '</div>';
        $('#page').html(content);

        // Refresh the todoItems
        refreshDisplay();

        // Wire up the UI Event Handler for the Add Item
        $('#add').on('click', addItemHandler);
        $('#refresh').on('click', handleRefresh);
        $('#reftoken').on('click', handleTokenRefresh);
    }

The rt-title DIV in the content string adds the button – I can write some CSS to push that entire DIV right and clean it up. The $('#reftoken').on('click', handleTokenRefresh) line adds a click handler for the button. This is fairly standard HTML and JavaScript code so far. Let’s take a look at the event handler:

    /**
     * Event Handler for clicking on the token button
     */
    function handleTokenRefresh(event) {
        event.preventDefault();

        client._request('GET', '/.auth/refresh', function (error, response) {
            console.error('refresh error = ', error);
            console.info('refresh response =  ', response);
        });
    }

Configuring OpenID Connect “Hybrid Flow”

Spoiler: It doesn’t work

I got a 400.80 Bad Request response from the server. That’s because I have not configured the Azure AD service to use refresh tokens. App Service Authentication / Authorization express configuration uses the OpenID Connect “Implicit Flow”, which is convenient because it doesn’t require a secret key, but has the disadvantage of not allowing you to get a refresh token. To get a refresh token, I need to configure the OpenID Connect “Hybrid Flow”.

Log into the Azure Portal, select Browse> and then Active Directory (it will be at the top)

Quick Tip Click on the star next to Active Directory – it will make the link appear in the side bar so you don’t need to click on Browse.

After the directory appears (you may need to click on it if you have more than one), click on the APPLICATIONS tab, then click on your application followed by the CONFIGURE tab.

day-7-p1

Scroll down to the keys section. Select Duration = 2 years in the drop-down, then click on Save. (Don’t forget to come back and re-issue the key before the 2 years are up!) Copy the key that is generated after you save.

day-7-p2

Now, go back to the Azure Portal, select All Resources or App Services, then select your app service. Mine is 30-days-of-zumo-v2. Click on Tools, then Resource Explorer, then Go

day-7-p3

You will get a new tab pointing to your site inside of https://resources.azure.com – an awesome site for viewing all the settings. Expand config and then authSettings:

day-7-p4

Because this is a dangerous place, it’s set to read-only mode. Click on the grey Read/Write box at the top of the screen, then click on Edit next to the PUT button. I need to set two things. Firstly, the key that I created in the Azure AD portal needs to be copied into the clientSecret field as a string. Secondly, I need to set the additionalLoginParams to ["response_type=code id_token"], like this:
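For reference, the two values being edited look like this as a JSON fragment (this is not the whole authSettings document; keep every other property exactly as the portal wrote it, and note the secret shown is just a placeholder):

```json
{
    "clientSecret": "<key copied from the AAD keys section>",
    "additionalLoginParams": [ "response_type=code id_token" ]
}
```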

day-7-p5

Note: Client Secrets and keys should be kept secret. I re-generated mine immediately after these screen shots were made.

Once you are happy with your edits (double-check them as they are impacting your site significantly), click on the PUT button at the top. It’s a good idea to put the interface into Read-only mode after you have made the edits – this prevents accidents from happening.

You’ve now configured a minimal Hybrid Flow. Note that this is the same flow you would need in order to access the AAD Graph API. However, there are a couple of extra steps. Refer to this blog post for more details.

The Refresh Response

Now that I’ve got the configuration done, my application will run again and will actually get a response. If you run the client with the logging, you will notice that error is null (indicating no error) and response is a standard XmlHttpRequest object. The _request() method is an under-the-covers method and isn’t really meant to be called directly. Looking at the response from the server in the Network tab, I see the following:

day-7-p6

The authenticationToken is the new token that I need to store in the client.currentUser.mobileServiceAuthenticationToken (note that capitalization changes between Client SDKs to conform to the client conventions). I can now deal with the refresh token thusly:

    /**
     * Event Handler for clicking on the token button
     */
    function handleTokenRefresh(event) {
        event.preventDefault();

        client._request('GET', '/.auth/refresh', function (error, response) {
            if (error != null) {
                console.error('Auth Refresh: Error = ', error);
                alert('Authentication Refresh Failed');
            } else if (response.status !== 200) {
                console.warn('Auth Refresh: Status = ', response.status, response.statusText);
            } else {
                var data = response.response;
                client.currentUser.mobileServiceAuthenticationToken = data.authenticationToken;
                console.info('Auth Refresh: New Token Received and stored');
            }
        });
    }

How to use Refresh Tokens

Of course, handling refresh tokens in an interactive way is definitely not the way to go. This should be a hands-off process. My personal favorite is to write a method that returns a Promise and deals with it all for you:

  1. If you are not authenticated, then call the authenticate routine – resolve the Promise when authenticate returns successfully.
  2. If you are authenticated and it’s been more than 30 minutes (for example) since the last refresh, refresh the token then resolve the Promise when the refresh returns successfully.
  3. If the refresh fails, call the authenticate routine and resolve the Promise when authenticate returns successfully.
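Those three steps can be sketched as follows. The client object with login() and refresh() methods is a stand-in for the real SDK client and the /.auth/refresh call, and the 30-minute interval is just an example threshold:

```javascript
// Sketch of the authenticate-or-refresh pattern. The client argument is a
// stand-in with login() and refresh() methods; real code would call the SDK
// login flow and the /.auth/refresh endpoint respectively.
var REFRESH_INTERVAL = 30 * 60 * 1000;   // refresh after 30 minutes
var lastRefresh = 0;

function authenticateOrRefresh(client) {
    if (!client.currentUser) {
        // Case 1: not authenticated yet - do a full login
        return client.login().then(function () { lastRefresh = Date.now(); });
    }
    if (Date.now() - lastRefresh < REFRESH_INTERVAL) {
        return Promise.resolve();        // token is still fresh enough
    }
    // Case 2: authenticated but stale - try a refresh first
    return client.refresh()
        .then(function () { lastRefresh = Date.now(); })
        .catch(function () {
            // Case 3: refresh failed - fall back to a full login
            return client.login().then(function () { lastRefresh = Date.now(); });
        });
}

// Demo with a mock client
var mockClient = {
    currentUser: null,
    login: function () { mockClient.currentUser = { userId: 'sid:1' }; return Promise.resolve(); },
    refresh: function () { return Promise.resolve(); }
};
authenticateOrRefresh(mockClient).then(function () {
    console.log('authenticated as', mockClient.currentUser.userId);
});
```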

In this way you can use the authenticate-or-refresh method like this:

// pseudo-code only - do not use this as is
authenticateOrRefresh().then(() => {
   client.insertAsync(todoItem);
});

For the purposes of this topic, I’ve just checked in the interactive refresh code. You can get it from my GitHub Repository.

What about Custom Authentication?

If you are using a third-party identity provider, then you will need to do the same sort of logic against the third party identity provider. You can read about the process for Auth0 on their site, for example. The underlying process will pretty much be the same, however. Configuration and URLs or SDKs will differ. Understand what it takes to get a refresh token and how often you need to request one.

Next Steps

We will be leaving our review of authentication in Azure Mobile Apps now. I recommend you follow Chris Gillum; his blog contains lots of additional information on the authentication system used by Azure App Service. In fact, this blog post could not have happened without him. If he hasn’t written about it, then just ask.

In my next article, I’ll move onto table controllers. We kind of got a taste of table controllers in the last article, but I’ll be going much deeper next.

30 Days of Zumo.v2 (Azure Mobile Apps): Day 6 – Personal Tables

So far in this series, I’ve looked at the following:

What I haven’t done is take a look at the data interface – a central component of the Azure Mobile Apps story. If you are using the Node.js server SDK (which includes anyone using in-portal editing), then your only real option is a SQL Azure (or local SQL Server) database. ASP.NET users have more options available to them (but that’s another blog post).

In today’s post, I’m going to talk about that data interface and make some simple changes to it.

What is the data interface?

Azure Mobile Apps exposes an OData v3 interface to your SQL database. OData itself is a mechanism for describing queries to retrieve and publish data, so it’s a natural extension of databases into the web world. It’s RESTful in nature. Azure Mobile Apps adds a few fields to every record to support sync scenarios – most notably offline capabilities, row-level conflict resolution and incremental sync.

Let’s start with the fields.

  • The ‘createdAt’ field is a DateTimeOffset field (in SQL data format terms) that represents the date and time that the record was created.
  • The ‘updatedAt’ field is also a DateTimeOffset field that represents the date and time of the last update.
  • The ‘version’ field is a base-64 encoded string that represents a unique version.
  • The ‘id’ field is a string – it can be anything, but it has to be unique as the id is a primary key. If you don’t specify it, Azure Mobile Apps will assign the string representation of a v4 GUID to this field.
  • The ‘deleted’ field is a boolean – it represents that the record has been deleted and is used to support cross-device (soft) deletion cases.
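Putting those fields together, a record from the typical TodoItem starter table might look like this (all values are illustrative):

```json
{
    "id": "7f3a9f2e-1c7d-4b0a-9a6e-0d2f5c8a1b34",
    "text": "Complete the blog post",
    "complete": false,
    "createdAt": "2016-03-23T16:30:18.123Z",
    "updatedAt": "2016-03-23T16:31:11.456Z",
    "version": "AAAAAAAAB9E=",
    "deleted": false
}
```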

All tables have an endpoint in the /tables namespace and are case-insensitive. If you have a SQL table that you define called TodoItem (the typical starter project), then it can be accessed through /tables/todoitem and individual records can be accessed through /tables/todoitem/:id where you replace the :id with the id of the record. For example, to get the records within my todoitem table:

Screen Shot 2016-03-23 at 4.30.18 PM

And to get the specific ID:

Screen Shot 2016-03-23 at 4.31.11 PM

Defining a Table

There are lots of ways to define a table, but let’s get to the basics – I want a table. I am developing, so I want a dynamic schema – that way I can define the record structure on the client. To do that, I create a tables directory in my server project and create a tables/todoitem.js file:

var azureMobileApps = require('azure-mobile-apps');

// Create a new table definition
var table = azureMobileApps.table();

module.exports = table;

Then, within the app.js, I want to import all the tables and initialize the database:

var express = require('express'),
    azureMobileApps = require('azure-mobile-apps');

// Set up a standard Express app
var app = express();
var mobileApp = azureMobileApps({
    homePage: true,
    swagger: true
});

mobileApp.tables.import('./tables');
mobileApp.api.import('./api');

mobileApp.tables.initialize()
  .then(function () {
      app.use(mobileApp);
      app.listen(process.env.PORT || 3000);
  });

The mobileApp.tables.import() call imports the tables from the tables directory, and the mobileApp.tables.initialize() call updates the database. The initialize() method returns a Promise – when the promise is resolved (i.e. the database is updated), I add the mobile API to the express app and start listening for connections.

Requiring authentication

This is the same table that I’ve been using all along. In fact, the authentication I’ve been setting up hasn’t actually prevented anyone from accessing the table via Postman or another app – it just means my app is doing the right thing. It’s time to require authentication for the entire table. To do that, I alter the table script (also known as a table controller):

var azureMobileApps = require('azure-mobile-apps');

// Create a new table definition
var table = azureMobileApps.table();

// Require authentication
table.access = 'authenticated';

module.exports = table;

Once I deploy this version of the script, I can go back to Postman and try out another GET operation on the table:

Screen Shot 2016-03-23 at 4.54.28 PM

I’m now getting an unauthorized message. My Apache Cordova app will still work because I am authenticated there.

If you aren’t getting an Unauthorized message, make sure you are using v2.0.2 or later of the azure-mobile-apps server SDK.

Multi-user applications

This still isn’t the holy grail though. All my apps, irrespective of my authentication, still use the same records. That’s nice for sharing a todo list, but I want a little more control over my data. I want to be able to limit the data so that the logged in user can see just their records. To do that, I need to modify the table controller with some custom code.

There are four script snippets that you can add.

  • table.insert() for CREATE operations
  • table.read() for READ operations
  • table.update() for UPDATE operations
  • table.delete() for DELETE operations

One method for each of the typical CRUD operations. Each one allows you to adjust the query or the inserted data. Here is the general form:

table.read(function (context) {
    // Do something pre-execution

    return context.execute();

    // To do something post-execution, chain off the Promise instead:
    //
    // return context.execute().then(function (data) {
    //     // Do something post-execution
    //     return data;
    // });
});

context.execute() will return a Promise – the returned value upon resolution will be the inserted, updated or deleted item, or the list of items that are read (in the case of a read operation).

There is a bunch of good stuff in the context object. For the purposes of my investigation into authentication, I’m going to concentrate on just three items:

  • user is an object describing the authenticated user
  • query is a queryjs object (if you are familiar with LINQ, think of it as LINQ-lite) – this is used to convert an OData query into an SQL query
  • item is the item to be inserted

In particular, there is a user.id element – that is the sub claim of the JWT that is passed to ensure authentication, and it is normally an identity-provider-specific value. Let’s change this table controller to use the user.id to set up a personal table view:

var azureMobileApps = require('azure-mobile-apps');

// Create a new table definition
var table = azureMobileApps.table();

// Require authentication
table.access = 'authenticated';

// CREATE operation
table.insert(function (context) {
  context.item.userId = context.user.id;
  return context.execute();
});

// READ operation
table.read(function (context) {
  context.query.where({ userId: context.user.id });
  return context.execute();
});

// UPDATE operation
table.update(function (context) {
  context.query.where({ userId: context.user.id });
  context.item.userId = context.user.id;
  return context.execute();
});

// DELETE operation
table.delete(function (context) {
  context.query.where({ userId: context.user.id });
  return context.execute();
});

module.exports = table;

I’ve also set up an HTML-based client – it does the server-flow authentication and is designed to act just like the apps. I find this better for testing the server than app-based testing, since you can use browser-based development tools. You can check out the code on the GitHub Repository. You will have to make some adjustments to the auth settings, but you now have the tools to do those adjustments. Refer to the Day 3 article if you are in doubt.

If you have stored items before doing this, you will notice on running this that you no longer have any items in the list. The items are not gone – they are just not accessible any more because the userId field does not match.

My project logs the token (just like I did on day 2), and I can use that to access the protected area. To do that, I add an X-ZUMO-AUTH header to the request in Postman – the value of which is the token.

Screen Shot 2016-03-23 at 7.45.00 PM

Finding the User Identity

Note the user Id. That’s not exactly friendly. How do I, for example, get rid of user data when the user wants to close their account? Generally, users don’t know their security ID. They know their email address. I want to have the records linked to an email address instead.

Let’s first of all take a look at what is in the token on the server, where we are doing the record adjustments. To do this, I added a logging statement to the read operations in tables/todoitem.js:

// READ operation
table.read(function (context) {
  console.log('context.user = ', JSON.stringify(context.user));
  context.query.where({ userId: context.user.id });
  return context.execute();
});

Use the Azure Portal to turn on Application Logs – this is located in the Diagnostics Logs section of the Settings blade. You can then go to Tools -> Log Stream to stream the log:

Screen Shot 2016-03-23 at 7.50.18 PM

Note that the user object has nothing of value in it. The next thing to check is the decode of the token – I do this on http://jwt.io. Just cut and paste the token into the Encoded box and see what the decode is:

Screen Shot 2016-03-23 at 7.52.23 PM

Once again, nothing is really of value here. I am still looking at a sid. Fortunately, there is one other area that is available with Azure App Service Authentication: the /.auth/me endpoint:

Screen Shot 2016-03-23 at 7.54.10 PM

This has all the claims that we are allowed to get from the original identity provider. The Azure Mobile Apps Server SDK has a method for this called getIdentity() – it’s available on the context.user object. The method returns a Promise that resolves to the contents of the /.auth/me response. If I change my table.read method to this, I’ll be able to see the contents:

// READ operation
table.read(function (context) {
  return context.user.getIdentity().then(function (userInfo) {
    console.log('user.getIdentity = ', JSON.stringify(userInfo));
    context.query.where({ userId: context.user.id });
    return context.execute();
  });
});

Looking at the data from this, I can see that the information I need is in the userInfo.aad.claims.emailaddress property. I can use this to adjust the table operations.

[Screenshot: log output showing the getIdentity() response]
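For reference, the object resolved by getIdentity() is shaped roughly like this (the values are placeholders and the exact set of claims depends on the identity provider, so treat this as an illustrative sketch):

```javascript
// Illustrative shape of the object resolved by context.user.getIdentity().
// The top-level key matches the provider ('aad' here); the claim values
// below are placeholders, not real data.
const userInfo = {
  aad: {
    user_id: 'user@example.com',
    claims: {
      emailaddress: 'user@example.com',
      name: 'Example User'
    }
  }
};

// The claim used to key the table records:
const email = userInfo.aad.claims.emailaddress;
console.log(email); // user@example.com
```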

The new set of functions becomes this:

// CREATE operation
table.insert(function (context) {
  return context.user.getIdentity().then(function (userInfo) {
    context.item.userId = userInfo.aad.claims.emailaddress;
    return context.execute();
  });
});

// READ operation
table.read(function (context) {
  return context.user.getIdentity().then(function (userInfo) {
    context.query.where({ userId: userInfo.aad.claims.emailaddress });
    return context.execute();
  });
});

// UPDATE operation
table.update(function (context) {
  return context.user.getIdentity().then(function (userInfo) {
    context.query.where({ userId: userInfo.aad.claims.emailaddress });
    context.item.userId = userInfo.aad.claims.emailaddress;
    return context.execute();
  });
});

// DELETE operation
table.delete(function (context) {
  return context.user.getIdentity().then(function (userInfo) {
    context.query.where({ userId: userInfo.aad.claims.emailaddress });
    return context.execute();
  });
});

When I use these functions and access the table via Postman, I get the following:

[Screenshot: Postman response showing the email address as the userId]

Note that the userId property is now the email address. If I use the Auth0 custom authentication – where I provide a choice between Facebook, Twitter, Google, MSA and potentially AAD – the email address will always be the same, so this allows me to do cross-platform authentication.

What about performance?

The getIdentity() method makes an HTTPS request under the covers to obtain the response from the /.auth/me endpoint. As a result, it isn’t going to be as performant as just dealing with the JWT. There are a couple of ways of dealing with this, and we will cover them in a later blog post.
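As an illustration of one possible mitigation (this is my own sketch, not a mechanism provided by the SDK — the cache key and TTL are assumptions), you could memoize the resolved identity per user for a short time:

```javascript
// Sketch: cache the resolved identity per user id so that repeated table
// operations in the same process don't each trigger an HTTPS round-trip to
// the /.auth/me endpoint. The TTL and cache structure are illustrative.
const identityCache = new Map();
const CACHE_TTL_MS = 5 * 60 * 1000; // five minutes

function getCachedIdentity(user) {
  const cached = identityCache.get(user.id);
  if (cached && (Date.now() - cached.at) < CACHE_TTL_MS) {
    return Promise.resolve(cached.identity);
  }
  return user.getIdentity().then((identity) => {
    identityCache.set(user.id, { identity, at: Date.now() });
    return identity;
  });
}
```

You would then call getCachedIdentity(context.user) in the table operations instead of context.user.getIdentity() directly.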

Next Steps

Today’s foray into the table controller is not the last time we will visit this topic. We are really still covering authentication. In the next blog post, we’ll take a look at what you should do when the authentication token is about to expire.

Until then, you can find today’s code on my GitHub Repository.

How to Determine the Node Versions Available on Azure App Service

I may have mentioned this before, but Azure App Service is an awesome service for hosting your website. It’s got tons of features to support devops, production deployments, testing and monitoring tasks. One of the things I struggled with was node deployments. You should specify the version of node that you want to use in your package.json file in the engines section, like this:

"engines": {
    "node": ">= 4.2.3 <= 4.3.0",
    "npm": ">= 3.3.0"
},

This is great, but how do you know what versions of node and npm are acceptable together?

It turns out that this is relatively easy. First of all, create yourself an Azure App Service Web App (or Mobile App, or API App – they are all the same thing). Deploy a random node/express app to the service using your favorite technique (mine is using continuous deployment from GitHub). Now, let’s get to know Kudu.

Ku-what?

Kudu is a nice web-based console for accessing the guts of your site. It contains a bunch of useful information. To get there, go to your Tools blade and then click on Kudu, then Go:

[Screenshot: the Tools blade with the Kudu entry]

You can also go directly to https://your-site.scm.azurewebsites.net instead. As I’ve described in the past, it’s well worth getting to know Kudu – it’s one of those hidden gems in the Azure system that really assists in problem solving. Back to the problem at hand – I have a node site, but what versions am I allowed to put in the package.json file? Simple – click on Runtime versions on the front page.

Admittedly, this isn’t the friendliest display. A lot of Kudu interacts with a REST endpoint behind the scenes and displays the result in the rawest form possible. This is good – it gives you access to as much information as possible – but it’s also bad, because it tends to be hard to read. Fortunately, I’ve prepared for this. I’ve already installed JSON Viewer to assist with pretty-printing JSON files in Chrome – my preferred browser. There are a number of plug-ins that do this, not only for Chrome, but also for Firefox, plus standalone tools. You can use whatever you want.

Now you can just cut-and-paste the version you want into the engines section of your package.json. Alternatively, you can use a range to ensure that you pick up the latest version. For example, my standard engines entry contains the following:

"engines": {
    "node": ">=5.7.0 <5.8.0",
    "npm": ">=3.3.0"
},

Matching this range to the list, I know I’ll be running Node.js v5.7.0 and npm v3.6.0 on the service.
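To illustrate how such a range is matched against a concrete version, here is a simplified checker (my own sketch — npm relies on the full semver package, which also handles spaced ranges like ">= 4.2.3" and far more besides):

```javascript
// A simplified version-range check, just to illustrate how a range such as
// ">=5.7.0 <5.8.0" is matched against the versions that Kudu reports. It
// only understands unspaced "<op><major.minor.patch>" clauses; real projects
// should lean on npm's own semver handling instead of this sketch.
function compareVersions(a, b) {
  const pa = a.split('.').map(Number);
  const pb = b.split('.').map(Number);
  for (let i = 0; i < 3; i += 1) {
    if (pa[i] !== pb[i]) return pa[i] - pb[i];
  }
  return 0;
}

function satisfies(version, range) {
  return range.split(/\s+/).every((clause) => {
    const m = clause.match(/^(>=|<=|>|<)(\d+\.\d+\.\d+)$/);
    if (!m) return false;
    const cmp = compareVersions(version, m[2]);
    switch (m[1]) {
      case '>=': return cmp >= 0;
      case '<=': return cmp <= 0;
      case '>':  return cmp > 0;
      case '<':  return cmp < 0;
      default:   return false;
    }
  });
}

console.log(satisfies('5.7.0', '>=5.7.0 <5.8.0')); // true
console.log(satisfies('5.8.1', '>=5.7.0 <5.8.0')); // false
```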

Another thing to like about Azure App Service – they are really responsive in keeping up to date with the Node versions. Node is an extremely active community and multiple releases come out every week, it seems.

(Full disclaimer – I work for Microsoft in Azure App Service and Mobile Apps – I don’t maintain the node environment though, and my thoughts are not those of my employer nor the group that does maintain the node environments).

Using Azure App Service Authentication with a Web Application

My pet project is an ExpressJS application with a React/Flux front-end running on top of Azure App Service. Since I will want to use Azure Mobile Apps to provide a common UI between a mobile edition of the app and the web edition of the app, I want to use the built-in Authentication feature to handle the authentication. Here is the question. I’ve got built-in mechanisms in the Azure Mobile Apps SDKs to handle authentication properly – how do I do this in the Web App? It’s not complicated, but the documentation is not where it should be. In this blog post, I’ll walk you through the process of using Azure App Service Authentication in a React web app.

For the purposes of this blog post, I’m going to assume you have already set up Authentication and have a good mechanism for deploying your app to Azure App Service. If you haven’t, check out:

The Web Version

My first stab at this is to look at the process that my web application has to go through. Firstly, it has to redirect to /.auth/login/provider on your site. I’m using the Microsoft Login as my authentication provider as it’s really easy to set up. You might want to do Facebook, Twitter, Google or AAD instead. The URL is https://mysite.azurewebsites.net/.auth/login/microsoftaccount.

Once I’m sent there, the Azure App Service will redirect to the identity provider (Microsoft in this case), then come back and give you a “Redirect back to Website” link. Click on that and your home page is reloaded. This time there is a new cookie. Unfortunately, you can’t access it:

[Screenshot: the browser cookie list, with the HTTP-only flag checked]

That check mark under HTTP means that JavaScript can’t access the cookie. Some other mechanism is required. Fortunately, Azure App Service provides such an endpoint – /.auth/me. This endpoint returns a JSON document that contains the token plus some other information:

[Screenshot: the JSON document returned by /.auth/me]

The endpoint returns a 401 Unauthorized status if the user is not logged in.

The JavaScript Version

My application has an app-store.js that implements a Flux store. The store is of my own design, but you could easily use Redux, Alt or Fluxxor, for example. In the constructor, I register the action handlers and then dispatch the actions to them:

        // Register any action listeners
        this.logger.debug('Creating Dispatch Table');
        this.onActionDispatched('init-store', (payload) => { this.initializeStore(payload); });
        this.onActionDispatched('check-auth', (payload) => { this.checkAuthentication(payload); });

        // Dispatch initial actions
        this.logger.debug('Dispatching Store Initializer');
        dispatcher.dispatch({ actionType: 'init-store' });
        dispatcher.dispatch({ actionType: 'check-auth' });

The first two lines register the action handlers and the last two dispatch the actions. Once someone loads the app store, the action handlers will be called. On to the action handler. I use the fetch API to actually do the request:

    /**
     * Action Handler for the check-auth event
     * @param {Object} payload the payload provided to the dispatcher
     * @returns {boolean} true if the payload was handled
     */
    checkAuthentication(payload) {
        this.logger.entry('checkAuthentication', payload);

        this.logger.debug(`Initiating fetch of /.auth/me`);
        let options = {
            method: 'GET',
            credentials: 'include',
            cache: 'no-cache'
        };

        // Fetch the authentication configuration
        fetch('/.auth/me', options).then((response) => {
            this.logger.debug('[checkauth-callback-1]: Response = ', response);
            if (!response.ok && response.status !== 401)
                throw new Error('Invalid Response from Config Endpoint', response);
            if (response.status === 401)
                return false;
            return response.json();
        }).then((config) => {
            if (!config) {
                this.logger.info('[checkauth-callback-2] unauthenticated');
                return;
            }
            this.logger.debug('[checkauth-callback-2]: config = ', config);
            this.storeData.error = false;
            this.storeChanged();
        }).catch((error) => {
            this.logger.error(`[checkauth-callback-catch] failed to check authentication status`);
            this.storeData.error = { message: error.message };
            this.storeChanged();
        });

        return this.logger.exit('checkAuthentication', true);
    }

This doesn’t actually store any data – that depends on your application. The debug statement in the second callback will log whatever is in the returned JSON object:

[Screenshot: the logged /.auth/me response object]

It’s worth taking a look at the return value and decoding it. In my application, I need to load up the this.storeData.auth object with the information I received. I’m going to convert the object that is returned into a set of claims:

[Screenshot: the decoded set of claims]

To do this, I’m going to use Array.reduce – a JavaScript function that takes a little bit of time to even understand, let alone master. It works across an array, calling the function with an accumulator. In my case, the accumulator is an empty object and the function converts the key/value pairs present in the response into object notation. Here is the code:

            /**
             * Map/Reduce mapper for dealing with claims
             * @param {object} target the target object
             * @param {object} claim each individual claim
             * @returns {object} the target
             */
            function mapClaims(target, claim) {
                target[claim.typ] = claim.val;
                if (claim.typ.indexOf('http://schemas.xmlsoap.org/ws') !== -1)
                    target[claim.typ.slice(claim.typ.lastIndexOf('/') + 1)] = claim.val;
                return target;
            }

            let providerData = auth[0];
            this.storeData.auth = {
                claims: providerData.user_claims.reduce(mapClaims, {}),
                id: providerData.user_id,
                provider: providerData.provider_name,
                token: providerData.access_token,
                providertoken: providerData.authentication_token
            };
            this.logger.debug('[checkauth-callback-2]: authdata = ', this.storeData.auth);
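To see the reducer in action outside the app, here is a standalone run with made-up claim data (the claim URIs follow the schemas.xmlsoap.org pattern shown above; the values are placeholders):

```javascript
// Standalone demonstration of the claim-flattening reducer. Each claim keeps
// its full URI as a key, and any claim under the schemas.xmlsoap.org/ws
// namespace also gets a short alias (e.g. 'emailaddress').
function mapClaims(target, claim) {
  target[claim.typ] = claim.val;
  if (claim.typ.indexOf('http://schemas.xmlsoap.org/ws') !== -1)
    target[claim.typ.slice(claim.typ.lastIndexOf('/') + 1)] = claim.val;
  return target;
}

const userClaims = [
  { typ: 'http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress',
    val: 'user@example.com' },
  { typ: 'http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name',
    val: 'Example User' }
];

const claims = userClaims.reduce(mapClaims, {});
console.log(claims.emailaddress); // user@example.com
console.log(claims.name);         // Example User
```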

Finishing Up

What else is needed? When I log in, I need to redirect the page to /.auth/login/microsoftaccount – I’m using a standard React event handler for this in my Chrome.jsx file. The click handler then sets document.location to the required value. I’m also going to adjust my user card – I have the user’s real name now, so I want to display it.

I need to figure out logging out of Azure App Service. Logging out is not implemented in the SDKs, so I’m going to have to do some research.

The other thing I need to do is to sort out the offline development model. Right now, I have to deploy this application to Azure to test even small changes. I will sometimes alter a file in numerous small ways, running the resulting code several times before I am satisfied. The Azure App Service deployment process takes six minutes. That is just way too long for the development cycle. As a result, I’m going to look at mocking the /.auth endpoint in development.
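As a sketch of where I’m heading (the function name and claim values here are hypothetical, and you would only wire this in when not running in production), a development-only middleware could fake the /.auth/me response:

```javascript
// Sketch: an Express-style middleware that fakes the /.auth/me endpoint
// during local development. The claim values are placeholders; mount this
// before the other routes, and only when NODE_ENV !== 'production'.
function mockAuthMiddleware(req, res, next) {
  if (req.url !== '/.auth/me') {
    return next();
  }
  res.setHeader('Content-Type', 'application/json');
  res.end(JSON.stringify([{
    provider_name: 'microsoftaccount',
    user_id: 'user@example.com',
    access_token: 'mock-token',
    user_claims: [
      { typ: 'http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress',
        val: 'user@example.com' }
    ]
  }]));
}
```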

As always, check out the code on my GitHub Repository.

Testing ExpressJS Web Services

Let’s say you have a web application written in NodeJS and you want to test it. What’s the best way to go about that? Fortunately, this is a common enough problem that there are modules and recipes to go along with it.

Separating Express from HTTP

ExpressJS contains syntactic sugar to implement a complete web service. You will commonly see code like this:

var express = require('express');

var app = express();
// Do some other stuff here
app.listen(3000);

Unfortunately, this means that you have to make real HTTP calls to test the API, which doesn’t lend itself to automated testing. Fortunately, there is an easier way. It involves separating the express application from the HTTP logic. First of all, let’s create a web-application.js file. Here is mine:

import bodyParser from 'body-parser';
import compression from 'compression';
import express from 'express';
import logCollector from 'express-winston';
import staticFiles from 'serve-static';

import logger from './lib/logger';
import apiRoute from './routes/api';

/**
 * Create a new web application
 * @param {boolean} [logging=true] - if true, then enable transaction logging
 * @returns {express.Application} an Express Application
 */
export default function webApplication(logging = true) {
    // Create a new web application
    let webApp = express();

    // Add in logging
    if (logging) {
        webApp.use(logCollector.logger({
            winstonInstance: logger,
            colorStatus: true,
            statusLevels: true
        }));
    }

    // Add in request/response middleware
    webApp.use(compression());
    webApp.use(bodyParser.urlencoded({ extended: true }));
    webApp.use(bodyParser.json());

    // Routers - Static Files
    webApp.use(staticFiles('wwwroot', {
        dotfiles: 'ignore',
        etag: true,
        index: 'index.html',
        lastModified: true
    }));

    // Routers - the /api route
    webApp.use('/api', apiRoute);

    // Default Error Logger - should be added after routers and before other error handlers
    webApp.use(logCollector.errorLogger({
        winstonInstance: logger
    }));

    return webApp;
}

Yes, it’s written in ES2015 – I do all my work in ES2015 right now. The export is a function that creates my web application. I’ve got a couple of extra modules – an API route (which is an ExpressJS Router object) and a logging module.

Note that I’ve provided a logging parameter to this function. Setting logging=false turns off the transaction logging. I want transaction logging when I am running this application in production. That same logging gets in the way of the test results display when I am running tests though. As a result, I want a method of turning it off when I am testing.

I also have a http-server.js file that does the HTTP logic in it:

import http from 'http';

import logger from './lib/logger';
import webApplication from './web-application';

logger.info('Booting Web Application');

// Create the application instance once, then store the port setting on it.
let webApp = webApplication();
webApp.set('port', process.env.PORT || 3000);

let server = http.createServer(webApp);
server.on('error', (error) => {
    if (error.syscall !== 'listen') {
        throw error;
    }
    logger.error(`Cannot listen for connections (${error.code}): ${error.message}`);
    throw error;
});
server.on('listening', () => {
    let addr = server.address();
    logger.info(`Listening on port ${addr.family}/(${addr.address}):${addr.port}`);
});
server.listen(webApp.get('port'));

This uses the Node.js HTTP module to create a web server and start listening on a TCP port. This is pretty much the same code that is used by ExpressJS when you call webApp.listen(). Finally, I have a server.js file that registers BabelJS as my ES2015 transpiler and runs the application:

require('babel-register');
require('./src/http-server');

The Web Application Tests

I’ve placed all my source code in the src directory (except for the server.js file, which is in the project root). I’ve got another directory for testing called test. It has a mocha.opts file with the following contents:

--compilers js:babel-register

This automatically compiles all my tests from ES2015 using BabelJS prior to executing the tests. Now, for the web application tests:

/// <reference path="../../typings/mocha/mocha.d.ts"/>
/// <reference path="../../typings/chai/chai.d.ts"/>
import { expect } from 'chai';
import request from 'supertest';

import webApplication from '../src/web-application';

describe('src/web-application.js', () => {
    let webApp = webApplication(false);

    it('should export a get function', () => {
        expect(webApp.get).to.be.a('function');
    });

    it('should export a set function', () => {
        expect(webApp.set).to.be.a('function');
    });

    it('should provide a /api/settings route', (done) => {
        request(webApp)
            .get('/api/settings')
            .expect('Content-Type', /application\/json/)
            .expect(200)
            .end((err) => {
                if (err) {
                    return done(err);
                }
                done();
            });
    });
});

First note that I’m creating the web application by passing the logging parameter of false. This turns off the transaction logging. Set it to true to see what happens when you leave it on. You will be able to see quite quickly that the test results get drowned out by the transaction logging.

My http-server.js file relies on the webApp having get and set functions to store the port setting. As a result, the first thing I do is check that those exist. If I update Express and the API changes on me, these tests will point that out.

The real meat is in the third (highlighted) test. This uses supertest – a WebAPI testing facility that pretends to be the HTTP module from Node, listening on a port. You send requests into the webApp using supertest instead of the HTTP module. ExpressJS handles the request and sends the response back to supertest and that allows you to check the response.

There are two parts to the test. The first is the construction of an actual request:

    request(webApp)
        .get('/api/settings')

Supertest uses superagent underneath to actually do the requests. Once you have linked in the ExpressJS application, you can send a GET, POST, DELETE or any other verb. DELETE is a special case because delete is a reserved word in JavaScript – use del() instead:

    request(webApp)
        .del('/tables/myTable/1')

You can add custom headers. For example, I do a bunch of work with azure-mobile-apps – I can test that with:

    request(webApp)
        .set('ZUMO-API-VERSION', '2.0.0')
        .get('/tables/myTable/1')

Check out the superagent documentation for more examples of the API.

The second part of the request is the assertions. You can assert on anything – a specific header, status code or body content. For example, you might want to assert on a non-200 response:

    request(webApp).get('/api/settings')
        .expect(200)

You can also expect a body. For example:

    request(webApp).get('/index.html')
        .expect(/<html>/)

Note the use of the regular expression here. That pattern is really common. You can also check for a specific header:

    request(webApp).get('/index.html')
        .expect('X-My-Header', /value/);
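supertest also accepts a custom assertion function via .expect(fn) – the function receives the response and should throw when the assertion fails. For example (the field name checked here is hypothetical):

```javascript
// Custom assertion for supertest's .expect(fn) form: the function receives
// the response and throws on failure. The apiVersion field is a hypothetical
// example of the kind of shape check you might want.
function hasSettingsShape(res) {
  if (typeof res.body !== 'object' || res.body === null)
    throw new Error('response body is not a JSON object');
  if (!('apiVersion' in res.body))
    throw new Error('expected an apiVersion field in the settings');
}

// Usage (inside a test):
//   request(webApp).get('/api/settings')
//       .expect(200)
//       .expect(hasSettingsShape)
//       .end(done);
```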

Once you have your sequence of assertions, you can close out the request. Since superagent and supertest are asynchronous, you need to handle the test asynchronously. That involves accepting a done parameter in the test and calling it once the test is over. You pass a callback into the .end() method:

    request(webApp).get('/index.html')
        .expect('X-My-Header', /value/)
        .end((error) => {
            done(error);
        });

Wrapping up

The supertest module, when combined with mocha, allows you to run test suites without spinning up a server and that enables you to increase your test coverage of a web service to almost 100%. With this, I’ll now be able to test my entire API surface automatically.