Configuring ASP.NET Core Applications in Azure App Service

In my last article, I introduced my plan to see what it would take to run an Azure Mobile Apps compatible service in ASP.NET Core. There are lots of potential problems here and I need to deal with them one by one. The first article covered how to get diagnostic logging working in Azure App Service, and today’s article shows how to deal with configuration in Azure App Service.

There are two major ways to configure your application in Azure App Service. The first is via App Settings and the second is via Data Connections. App Settings appear as environment variables with the prefix APPSETTING_. For example, if you have an app setting called DEBUGMODE, you can access it via Environment.GetEnvironmentVariable("APPSETTING_DEBUGMODE"). An interesting side note: If you configure App Service Push or Authentication, these settings appear as app settings to your application as well.

Data Connections provide a mechanism for accessing connection strings. If you added a Data Connection called MS_TableConnectionString (which is the default for Azure Mobile Apps), then you would see an environment variable called SQLAZURECONNSTR_MS_TableConnectionString. This encodes both the type of connection and the connection string name.
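Reading these raw values directly is straightforward, although we can do better (as the rest of this article shows). A minimal sketch, using the example names above:

```csharp
// Sketch: reading App Service settings straight from the environment.
// DEBUGMODE and MS_TableConnectionString are the example names from above.
var debugMode = Environment.GetEnvironmentVariable("APPSETTING_DEBUGMODE");

// The prefix encodes the connection type (SQLAZURE); the rest is the name.
var tableConnection = Environment.GetEnvironmentVariable(
    "SQLAZURECONNSTR_MS_TableConnectionString");
```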

Configuration in ASP.NET Core

The .NET Core configuration framework is solid and flexible, supporting a variety of sources – JSON, XML, INI files and environment variables are all supported out of the box. You will generally see code like this in the constructor of the Startup.cs file:

        public Startup(IHostingEnvironment env)
        {
            var builder = new ConfigurationBuilder()
                .SetBasePath(env.ContentRootPath)
                .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
                .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
                .AddEnvironmentVariables();
            Configuration = builder.Build();
        }

There are a couple of problems with this, which I will illustrate by adding a view that displays the current configuration. Firstly, add a service in the ConfigureServices() method in Startup.cs:

        // This method gets called by the runtime. Use this method to add services to the container.
        public void ConfigureServices(IServiceCollection services)
        {
            // Add Configuration as a service
            services.AddSingleton<IConfiguration>(Configuration);

            // Add framework services.
            services.AddMvc();
        }

I can now add an action to the Controllers\HomeController.cs:

        public IActionResult Configuration([FromServices] IConfiguration service)
        {
            ViewBag.Configuration = service.AsEnumerable();
            return View();
        }

The [FromServices] parameter allows me to use dependency injection to inject the singleton service I defined earlier. This provides access to the configuration in just this method. I can assign the enumeration of all the configuration elements to the ViewBag for later display. I’ve also added a Views\Home\Configuration.cshtml file:

<h1>Configuration</h1>

<div class="row">
    <table class="table table-striped">
        <thead>
            <tr>
                <th>Key</th>
                <th>Value</th>
            </tr>
        </thead>
        <tbody>
            @foreach (var item in ViewBag.Configuration)
            {
                <tr>
                    <td>@item.Key</td>
                    <td>@item.Value</td>
                </tr>
            }
        </tbody>
    </table>
</div>

If I run this code within a properly configured App Service (one with an associated SQL service attached via Data Connections), then I will see all the environment variables and app settings listed on the page. In addition, the environment variables configuration module has added a pair of configuration elements for me – one named ConnectionStrings:MS_TableConnectionString with the connection string, and the other called ConnectionStrings:MS_TableConnectionString_ProviderName.

There are several problems here:

  • All environment variables override my configuration. Azure App Service is a managed service, so they can add any environment variables they want at any time and that may clobber my configuration.
  • The environment variables are not organized in any way and rely on convention.
  • Many of the environment variables are not relevant to my app – they are relevant to Azure App Service.

A Better Configuration Module

Rather than use the default environment variables module, I’m going to write a custom configuration provider for Azure App Service. When developing locally, you can use the “right” environment variables or a local JSON file for configuration. If I were expressing the Azure App Service configuration in JSON, it might look like this:

{
    "ConnectionStrings": {
        "MS_TableConnectionString": "my-connection-string"
    },
    "Data": {
        "MS_TableConnectionString": {
            "Type": "SQLAZURE",
            "ConnectionString": "my-connection-string"
        }
    },
    "AzureAppService": {
        "AppSettings": {
            "MobileAppsManagement_EXTENSION_VERSION": "latest"
        },
        "Auth": {
            "Enabled": "True",
            "SigningKey": "some-long-string",
            "AzureActiveDirectory": {
                "ClientId": "my-client-id",
                "ClientSecret": "my-client-secret",
                "Mode": "Express"
            }
        },
        "Push": {
            // ...
        }
    }
}

This is a much better configuration pattern: it organizes the settings and does not pollute the configuration namespace with every environment variable. I like having the Data block to carry associated information about the connection string, rather than relying on the convention of appending _ProviderName to the name. Duplicating the connection string means I can use either Configuration.GetConnectionString() or Configuration.GetSection("Data:MS_TableConnectionString") to get the information I need. I’m envisioning releasing this library at some point, so providing options like this is a good idea.
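To illustrate, here is a sketch of both lookups (assuming Configuration has been built from JSON shaped like the sample above):

```csharp
// Option 1: the standard connection-strings helper.
var connStr = Configuration.GetConnectionString("MS_TableConnectionString");

// Option 2: the Data block, which also carries the connection type.
var section = Configuration.GetSection("Data:MS_TableConnectionString");
var connStr2 = section["ConnectionString"];
var connType = section["Type"];      // "SQLAZURE" in this example
```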

Writing a new configuration provider is easy. There are three files:

  • An extension to the ConfigurationBuilder to bring in your configuration source
  • A configuration source that references the configuration provider
  • The configuration provider

The first two tend to be boilerplate code. Here is the AzureAppServiceConfigurationBuilderExtensions.cs file:

using Microsoft.Azure.AppService.Core.Configuration;

namespace Microsoft.Extensions.Configuration
{
    public static class AzureAppServiceConfigurationBuilderExtensions
    {
        public static IConfigurationBuilder AddAzureAppServiceSettings(this IConfigurationBuilder builder)
        {
            return builder.Add(new AzureAppServiceSettingsSource());
        }
    }
}

Note that I’ve placed the class in the same namespace as the other configuration builder extensions. This means you don’t need a using statement to use this extension method. It’s a small thing.

Here is the AzureAppServiceSettingsSource.cs file:

using System;
using Microsoft.Extensions.Configuration;

namespace Microsoft.Azure.AppService.Core.Configuration
{
    internal class AzureAppServiceSettingsSource : IConfigurationSource
    {
        public IConfigurationProvider Build(IConfigurationBuilder builder)
        {
            return new AzureAppServiceSettingsProvider(Environment.GetEnvironmentVariables());
        }
    }
}

The source just provides a new provider. Note that I pass in the environment to the provider. This allows me to mock the environment later on for unit testing. I’ve placed the three files (the two above and the next one) in their own library project within the solution. This allows me to easily write unit tests later on and it allows me to package and distribute the library if I wish.

All the work for converting the environment to a configuration is done in the AzureAppServiceSettingsProvider.cs file (with apologies for the length):

using System.Collections;
using Microsoft.Extensions.Configuration;
using System.Text.RegularExpressions;
using System.Collections.Generic;

namespace Microsoft.Azure.AppService.Core.Configuration
{
    internal class AzureAppServiceSettingsProvider : ConfigurationProvider
    {
        private IDictionary env;

        /// <summary>
        /// Where all the app settings should go in the configuration
        /// </summary>
        private const string SettingsPrefix = "AzureAppService";

        /// <summary>
        /// The regular expression used to match the key in the environment for Data Connections.
        /// </summary>
        private Regex DataConnectionsRegexp = new Regex(@"^([A-Z]+)CONNSTR_(.+)$");

        /// <summary>
        /// Mapping from environment variable to position in configuration - explicit cases
        /// </summary>
        private Dictionary<string, string> specialCases = new Dictionary<string, string>
        {
            { "WEBSITE_AUTH_CLIENT_ID",                 $"{SettingsPrefix}:Auth:AzureActiveDirectory:ClientId" },
            { "WEBSITE_AUTH_CLIENT_SECRET",             $"{SettingsPrefix}:Auth:AzureActiveDirectory:ClientSecret" },
            { "WEBSITE_AUTH_OPENID_ISSUER",             $"{SettingsPrefix}:Auth:AzureActiveDirectory:Issuer" },
            { "WEBSITE_AUTH_FB_APP_ID",                 $"{SettingsPrefix}:Auth:Facebook:ClientId" },
            { "WEBSITE_AUTH_FB_APP_SECRET",             $"{SettingsPrefix}:Auth:Facebook:ClientSecret" },
            { "WEBSITE_AUTH_GOOGLE_CLIENT_ID",          $"{SettingsPrefix}:Auth:Google:ClientId" },
            { "WEBSITE_AUTH_GOOGLE_CLIENT_SECRET",      $"{SettingsPrefix}:Auth:Google:ClientSecret" },
            { "WEBSITE_AUTH_MSA_CLIENT_ID",             $"{SettingsPrefix}:Auth:MicrosoftAccount:ClientId" },
            { "WEBSITE_AUTH_MSA_CLIENT_SECRET",         $"{SettingsPrefix}:Auth:MicrosoftAccount:ClientSecret" },
            { "WEBSITE_AUTH_TWITTER_CONSUMER_KEY",      $"{SettingsPrefix}:Auth:Twitter:ClientId" },
            { "WEBSITE_AUTH_TWITTER_CONSUMER_SECRET",   $"{SettingsPrefix}:Auth:Twitter:ClientSecret" },
            { "WEBSITE_AUTH_SIGNING_KEY",               $"{SettingsPrefix}:Auth:SigningKey" },
            { "MS_NotificationHubId",                   $"{SettingsPrefix}:Push:NotificationHubId" }
        };

        /// <summary>
        /// Mapping from environment variable to position in configuration - scoped cases
        /// </summary>
        private Dictionary<string, string> scopedCases = new Dictionary<string, string>
        {
            { "WEBSITE_AUTH_", $"{SettingsPrefix}:Auth" },
            { "WEBSITE_PUSH_", $"{SettingsPrefix}:Push" }
        };

        /// <summary>
        /// Authentication providers need to be done before the scoped cases, so their mapping
        /// is separate from the scoped cases
        /// </summary>
        private Dictionary<string, string> authProviderMapping = new Dictionary<string, string>
        {
            { "WEBSITE_AUTH_FB_",          $"{SettingsPrefix}:Auth:Facebook" },
            { "WEBSITE_AUTH_GOOGLE_",      $"{SettingsPrefix}:Auth:Google" },
            { "WEBSITE_AUTH_MSA_",         $"{SettingsPrefix}:Auth:MicrosoftAccount" },
            { "WEBSITE_AUTH_TWITTER_",     $"{SettingsPrefix}:Auth:Twitter" }
        };

        public AzureAppServiceSettingsProvider(IDictionary env)
        {
            this.env = env;
        }

        /// <summary>
        /// Loads the appropriate settings into the configuration.  The Data object is provided for us
        /// by the ConfigurationProvider
        /// </summary>
        /// <seealso cref="Microsoft.Extensions.Configuration.ConfigurationProvider"/>
        public override void Load()
        {
            foreach (DictionaryEntry e in env)
            {
                string key = e.Key as string;
                string value = e.Value as string;

                var m = DataConnectionsRegexp.Match(key);
                if (m.Success)
                {
                    var type = m.Groups[1].Value;
                    var name = m.Groups[2].Value;

                    if (!key.Equals("CUSTOMCONNSTR_MS_NotificationHubConnectionString"))
                    {
                        Data[$"Data:{name}:Type"] = type;
                        Data[$"Data:{name}:ConnectionString"] = value;
                    }
                    else
                    {
                        Data[$"{SettingsPrefix}:Push:ConnectionString"] = value;
                    }
                    Data[$"ConnectionStrings:{name}"] = value;
                    continue;
                }

                // If it is a special case, then handle it through the mapping and move on
                if (specialCases.ContainsKey(key))
                {
                    Data[specialCases[key]] = value;
                    continue;
                }

                // A special case for AUTO_AAD
                if (key.Equals("WEBSITE_AUTH_AUTO_AAD"))
                {
                    Data[$"{SettingsPrefix}:Auth:AzureActiveDirectory:Mode"] = value.Equals("True") ? "Express" : "Advanced";
                    continue;
                }

                // Scoped Cases for authentication providers
                if (dictionaryMappingFound(key, value, authProviderMapping))
                {
                    continue;
                }

                // Other scoped cases (not auth providers)
                if (dictionaryMappingFound(key, value, scopedCases))
                {
                    continue;
                }

                // Other internal settings
                if (key.StartsWith("WEBSITE_") && !containsMappedKey(key, scopedCases))
                {
                    var setting = key.Substring(8);
                    Data[$"{SettingsPrefix}:Website:{setting}"] = value;
                    continue;
                }

                // App Settings - anything not in the WEBSITE section
                if (key.StartsWith("APPSETTING_") && !key.StartsWith("APPSETTING_WEBSITE_"))
                {
                    var setting = key.Substring(11);
                    Data[$"{SettingsPrefix}:AppSetting:{setting}"] = value;
                    continue;
                }

                // Add everything else into the "Environment" section
                if (!key.StartsWith("APPSETTING_"))
                {
                    Data[$"Environment:{key}"] = value;
                }
            }
        }

        /// <summary>
        /// Determines if the key starts with any of the keys in the mapping
        /// </summary>
        /// <param name="key">The environment variable</param>
        /// <param name="mapping">The mapping dictionary</param>
        /// <returns></returns>
        private bool containsMappedKey(string key, Dictionary<string, string> mapping)
        {
            foreach (var start in mapping.Keys)
            {
                if (key.StartsWith(start))
                {
                    return true;
                }
            }
            return false;
        }

        /// <summary>
        /// Handler for a mapping dictionary
        /// </summary>
        /// <param name="key">The environment variable to check</param>
        /// <param name="value">The value of the environment variable</param>
        /// <param name="mapping">The mapping dictionary</param>
        /// <returns>true if a match was found</returns>
        private bool dictionaryMappingFound(string key, string value, Dictionary<string, string> mapping)
        {
            foreach (string start in mapping.Keys)
            {
                if (key.StartsWith(start))
                {
                    var setting = key.Substring(start.Length);
                    Data[$"{mapping[start]}:{setting}"] = value;
                    return true;
                }
            }
            return false;
        }
    }
}

Unfortunately, there are a lot of special cases here to handle how I want to lay out my configuration. However, the basic flow is handled in the Load() method, which cycles through the environment. If an environment variable matches one of the patterns I watch for, I add it to the Data[] dictionary, which becomes the configuration. Anything that doesn’t match is added to the default Environment section of the configuration. The ConfigurationProvider class that I inherit from handles the other lifecycle requirements for the provider, so I don’t need to be concerned with them.
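Using the provider is then a one-line change to the Startup constructor shown earlier. A sketch, replacing AddEnvironmentVariables() with the new extension method:

```csharp
public Startup(IHostingEnvironment env)
{
    var builder = new ConfigurationBuilder()
        .SetBasePath(env.ContentRootPath)
        .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
        .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
        .AddAzureAppServiceSettings();   // instead of AddEnvironmentVariables()
    Configuration = builder.Build();
}
```

Because the extension method lives in the Microsoft.Extensions.Configuration namespace, no extra using statement is needed here.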

Testing the Configuration Module

I’ve done some pre-work to aid in testability. Firstly, I’ve segmented the library component into its own project. Secondly, I’ve added a “mocking” capability for the environment. The default environment is passed in from the source class, but I can instantiate the provider in my test class with a suitable dictionary. The xUnit site covers how to set up a simple test, although Visual Studio 2017 has a specific xUnit test suite project template (look for xUnit Test Project (.NET Core) in the project templates list).

My testing process is relatively simple – given a suitable environment, does it produce the right configuration? I’ll have a test routine for each of the major sections – connection strings, special cases and scoped cases, and others. Then I’ll copy my environment from a real App Service and see if that causes issues. I get my environment settings from Kudu – also known as Advanced Tools in your App Service menu in the Azure portal. Here is an example of one of the tests:

        [Fact]
        public void CreatesDataConnections()
        {
            var env = new Dictionary<string, string>()
            {
                { "SQLCONNSTR_MS_TableConnectionString", "test1" },
                { "SQLAZURECONNSTR_DefaultConnection", "test2" },
                { "SQLCONNSTRMSTableConnectionString", "test3" }
            };
            var provider = new AzureAppServiceSettingsProvider(env);
            provider.Load();

            string r;
            Assert.True(provider.TryGet("Data:MS_TableConnectionString:Type", out r));
            Assert.Equal("SQL", r);
            Assert.True(provider.TryGet("Data:MS_TableConnectionString:ConnectionString", out r));
            Assert.Equal("test1", r);

            Assert.True(provider.TryGet("Data:DefaultConnection:Type", out r));
            Assert.Equal("SQLAZURE", r);
            Assert.True(provider.TryGet("Data:DefaultConnection:ConnectionString", out r));
            Assert.Equal("test2", r);

            Assert.False(provider.TryGet("Data:MSTableConnectionString:Type", out r));
            Assert.False(provider.TryGet("Data:MSTableConnectionString:ConnectionString", out r));
        }

This test ensures that the typical connection strings get placed into the right Data structure within the configuration. You can run the tests within Visual Studio 2017 by using Test > Windows > Test Explorer to view the test explorer, then click Run All – the projects will be built and tests discovered.

I’m keeping my code on GitHub, so you can find this code (including the entire test suite) in my GitHub Repository at tag p4.

Running ASP.NET Core applications in Azure App Service

One of the things I get asked about semi-regularly is when Azure Mobile Apps is going to support .NET Core. It’s a logical progression for most people and many ASP.NET developers are planning future web sites to run on ASP.NET Core. Also, the ASP.NET Core programming model makes a lot more sense (at least to me) than the older ASP.NET model. Finally, we have an issue open on the subject. So, what is holding us back? Well, there are a bunch of things. Some have been solved already and some need a lot of work. In the coming weeks, I’m going to be writing about the various pieces that need to be in place before we can say “Azure Mobile Apps is there”.

Of course, if you want a mobile backend, you can always hop over to Visual Studio Mobile Center. This provides a mobile backend for you without having to write any code. (Full disclosure: I’m now a program manager on that team, so I may be slightly biased). However, if you are thinking ASP.NET Core, then you likely want to write the code.

Let’s get started with something that does exist. How does one run ASP.NET Core applications on Azure App Service? Well, there are two methods. The first involves uploading your application to Azure App Service via the Visual Studio Publish… dialog or via Continuous Integration from GitHub, Visual Studio Team Services or even Dropbox. It’s a relatively easy method and one I would recommend. There is a gotcha, which I’ll discuss below.

The second method uses a Docker container to house the code that is then deployed onto a Linux App Service. This is still in preview (as of writing), so I can’t recommend this for production workloads.

Create a New ASP.NET Core Application

Let’s say you opened up Visual Studio 2017 (RC right now) and created a brand new ASP.NET Core MVC application – the basis for my research here.

  • Open up Visual Studio 2017 RC.
  • Select File > New > Project…
  • Select the ASP.NET Core Web Application (.NET Core).
    • Fill in an appropriate name for the solution and project, just as normal.
    • Click OK to create the project.
  • Select ASP.NET Core 1.1 from the framework drop-down (it will say ASP.NET Core 1.0 initially)
  • Select Web Application in the ASP.NET Core 1.1 Templates selection.
  • Click OK.

I called my solution netcore-server and the project ExampleServer. At this point, Visual Studio will go off and create a project for you. You can see what it creates easily enough, but I’ve checked it into my GitHub repository at tag p0.

I’m not going to cover ASP.NET Core programming too much in this series. You can read the definitive guide on their documentation site, and I would recommend you start by understanding ASP.NET Core programming before getting into the changes here.

Go ahead and run the service (either as a Kestrel service or an IIS Express service – it works with both). This is just to make sure that you have a working site.

Add Logging to your App

Logging is one of those central things that is needed in any application. There are so many things you can’t do (including diagnose issues) if you don’t have appropriate logging. Fortunately, ASP.NET Core has logging built-in. Let’s add some to the Controllers\HomeController.cs file:

using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;

namespace ExampleServer.Controllers
{
    public class HomeController : Controller
    {
        private ILogger logger;

        public HomeController(ILoggerFactory loggerFactory)
        {
            logger = loggerFactory.CreateLogger(this.GetType().FullName);
        }

        public IActionResult Index()
        {
            logger.LogInformation("In Index of the HomeController", null);
            return View();
        }
        // Rest of the file here

I’ve added the logger factory via dependency injection, then logged a message whenever the Index file is served in the home controller. If you run this version of the code (available on the GitHub repository at tag p1), you will see the following in your Visual Studio output window:

[Screenshot: Visual Studio output window (20170216-01)]

It’s swamped by the Application Insights data, but you can clearly see the informational message there.

Deploy your App to Azure App Service

Publishing to Azure App Service is relatively simple – right-click on the project and select Publish… to kick off the process. The layout of the windows has changed from Visual Studio 2015, but it’s the same process. You can create a new App Service or use an existing one. Once you have answered all the questions, your site will be published. Eventually, your site will be displayed in your web browser.

Turn on Diagnostic Logging

  • Click View > Server Explorer to add the server explorer to your work space.
  • Expand the Azure node, the App Service node, and finally your resource group node.
  • Right-click the app service and select View Settings
  • Turn on logging and set the logging level to verbose:

[Screenshot: App Service diagnostic logging settings (20170216-02)]

  • Click Save to save the settings (the site will restart).
  • Right-click the app service in the server explorer again and this time select View Streaming Logs
  • Wait until you see that you are connected to the log streaming service (in the Output window)

Now refresh your browser so that it reloads the index page again. Note how you see the access logs (which files have been requested) but the log message we put into the code is not there.

The Problem and Solution

The problem is, hopefully, obvious. ASP.NET Core does not by default feed logs to Azure App Service. We need to enable that feature in the .NET Core host. We do this in the Program.cs file:

using System.IO;
using Microsoft.AspNetCore.Hosting;

namespace ExampleServer
{
    public class Program
    {
        public static void Main(string[] args)
        {
            var host = new WebHostBuilder()
                .UseKestrel()
                .UseContentRoot(Directory.GetCurrentDirectory())
                .UseIISIntegration()
                .UseStartup<Startup>()
                .UseApplicationInsights()
                .UseAzureAppServices()
                .Build();

            host.Run();
        }
    }
}

You will also need to add the Microsoft.AspNetCore.AzureAppServicesIntegration package from NuGet for this to work. Once you have done this change, you can deploy this and watch the logs again:

[Screenshot: streaming logs showing the application log messages (20170216-03)]

If you have followed the instructions, you will need to switch the Output window back to the Azure logs. The output window will have been switched to Build during the publish process.

Adjusting the WebHostBuilder for the environment

It’s likely that you won’t want Application Insights and Azure App Services logging except when you are running on Azure App Service. There are a number of environment variables that Azure App Service uses and you can leverage these as well. My favorites are REGION_NAME (which indicates which Azure region your service is running in) and WEBSITE_OWNER_NAME (which is a combination of a bunch of things). You can test for these and adjust the pipeline accordingly:

using Microsoft.AspNetCore.Hosting;
using System;
using System.IO;

namespace ExampleServer
{
    public class Program
    {
        public static void Main(string[] args)
        {
            var hostBuilder = new WebHostBuilder()
                .UseKestrel()
                .UseContentRoot(Directory.GetCurrentDirectory())
                .UseIISIntegration()
                .UseStartup<Startup>()
                .UseApplicationInsights();

            var regionName = Environment.GetEnvironmentVariable("REGION_NAME");
            if (regionName != null)
            {
                hostBuilder.UseAzureAppServices();
            }
                
            var host = hostBuilder.Build();

            host.Run();
        }
    }
}

You can download this code at my GitHub repository at tag p2.

Writing HTTP CRUD in Azure Functions

Over the last two posts, I’ve introduced writing Azure Functions locally and deploying them to the cloud. It’s time to do something useful with them. In this post, I’m going to introduce how to write a basic HTTP router. If you follow my blog and other work, you’ll see where this is going pretty quickly. If you are only interested in Azure Functions, you’ll have to wait a bit to see how this evolves.

Create a new Azure Function

I started by installing the latest azure-functions-cli package:

npm i -g azure-functions-cli

Then I created a new Azure Function App:

mkdir dynamic-tables
cd dynamic-tables
func new

Finally, I created a function called todoitem:

[Screenshot: creating the todoitem function (dec15-01)]

Customize the HTTP Route Prefix

By default, any HTTP trigger is bound to /api/{function}, where {function} is the name of your function. I want full control over where my function exists, so I’m going to fix this in the host.json file:

{
    "id":"6ada7ae64e8a496c88617b7ab6682810",
    "http": {
        "routePrefix": ""
    }
}

The routePrefix is the important thing here. The value is normally “/api”, but I’ve cleared it. That means I can put my routes anywhere.

Set up the Function Bindings

In the todoitem directory are two files. The first, function.json, describes the bindings. Here is the version for my function:

{
    "disabled": false,
    "bindings": [
        {
            "name": "req",
            "type": "httpTrigger",
            "direction": "in",
            "authLevel": "function",
            "methods": [ "GET", "POST", "PATCH", "PUT", "DELETE" ],
            "route": "tables/todoitem/{id:alpha?}"
        },
        {
            "type": "http",
            "direction": "out",
            "name": "res"
        }
    ]
}

This function is triggered via HTTP and will accept five methods: GET, POST, PUT, PATCH and DELETE. In addition, I’ve defined a route that contains an optional id segment. I can, for example, do GET /tables/todoitem/foo and this will be accepted. On the outbound side, I want to respond to requests, so I’ve got a response object. The HTTP trigger for Node is modelled after ExpressJS, so the req and res objects are mostly equivalent to the ExpressJS Request and Response objects.
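To make the optional {id:alpha?} segment concrete, here is a rough sketch of the matching behaviour in plain JavaScript (this is not the actual Functions router, just an illustration):

```javascript
// Rough approximation of the route template 'tables/todoitem/{id:alpha?}'.
// ':alpha' restricts the segment to letters; '?' makes it optional.
function matchRoute(path) {
    const m = /^tables\/todoitem(?:\/([A-Za-z]+))?$/.exec(path);
    if (m === null) {
        return null;                    // route does not match at all
    }
    return { id: m[1] };                // id is undefined when omitted
}

matchRoute('tables/todoitem');          // { id: undefined } -> "get all"
matchRoute('tables/todoitem/foo');      // { id: 'foo' }     -> "get one"
matchRoute('tables/todoitem/123');      // null (':alpha' only allows letters)
```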

Write the Code

The code for this function is in todoitem/index.js:

/**
 * Routes the request to the table controller to the correct method.
 *
 * @param {Function.Context} context - the table controller context
 * @param {Express.Request} req - the actual request
 */
function tableRouter(context, req) {
    var res = context.res;
    var id = context.bindings.id;

    switch (req.method) {
        case 'GET':
            if (id) {
                getOneItem(req, res, id);
            } else {
                getAllItems(req, res);
            }
            break;

        case 'POST':
            insertItem(req, res);
            break;

        case 'PATCH':
            patchItem(req, res, id);
            break;

        case 'PUT':
            replaceItem(req, res, id);
            break;

        case 'DELETE':
            deleteItem(req, res, id);
            break;

        default:
            res.status(405).json({ error: "Operation not supported", message: `Method ${req.method} not supported`})
    }
}

function getOneItem(req, res, id) {
    res.status(200).json({ id: id, message: "getOne" });
}

function getAllItems(req, res) {
    res.status(200).json({ query: req.query, message: "getAll" });
}

function insertItem(req, res) {
    res.status(200).json({ body: req.body, message: "insert"});
}

function patchItem(req, res, id) {
    res.status(405).json({ error: "Not Supported", message: "PATCH operations are not supported" });
}

function replaceItem(req, res, id) {
    res.status(200).json({ body: req.body, id: id, message: "replace" });
}

function deleteItem(req, res, id) {
    res.status(200).json({ id: id, message: "delete" });
}

module.exports = tableRouter;

I use a tableRouter method (which is what the function exports) to route the HTTP call to the right CRUD method. It’s up to me to put whatever CRUD code I need to execute and respond to the request in those additional methods. In this case, I’m just returning a 200 status (OK) and some JSON data. One key piece is differentiating between a GET /tables/todoitem and a GET /tables/todoitem/foo. The former is meant to return all records and the latter a single record. If the id is set, we call the single-record GET method; if not, we call the multiple-record GET method.

What’s the difference between PATCH and PUT? In REST semantics, PATCH is used when you want to do a partial update of a record; PUT is used when you want to send a full record. This CRUD recipe supports both, but you may decide to use one or the other.
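The distinction is easy to demonstrate with plain objects; a sketch (the record fields here are invented for illustration):

```javascript
// A stored record (shape is illustrative only).
const stored = { id: 'foo', text: 'buy milk', complete: false };

// PATCH: merge only the supplied fields into the existing record.
function patchRecord(record, delta) {
    return Object.assign({}, record, delta);
}

// PUT: replace the whole record, keeping only the id from the URL.
function putRecord(replacement, id) {
    return Object.assign({}, replacement, { id: id });
}

const patched = patchRecord(stored, { complete: true });
// -> { id: 'foo', text: 'buy milk', complete: true }

const replaced = putRecord({ text: 'buy bread' }, 'foo');
// -> { text: 'buy bread', id: 'foo' } (complete is gone)
```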

Running Locally

As with the prior blog post, you can run func run test-func --debug to start the backend and get ready for the debugger. You can then use Postman to send requests to your backend. (Note: Don’t use func run todoitem --debug – this will cause a crash at the moment!). You’ll get something akin to the following:

dec15-02

That’s it for today. I’ll be progressing on this project for a while, so expect more information as I go along!

Creating and Debugging Azure Functions Locally

I’ve written about Azure Functions before as part of my Azure Mobile Apps series. Azure Functions is a great feature of the Azure platform that allows you to run custom code in the cloud in a “serverless” manner. In this context, “serverless” doesn’t mean “without a server”. Rather, it means that the server is abstracted away from you. In my prior blog post, I walked through creating an Azure Function using the web UI, which is a problem when you want to check your Azure Functions into source control and deploy them as part of your application.

UPDATE: The Azure App Service team has released a blog post on the new CLI tools.

This is the first in a series of blog posts. I am going to walk through a process by which you can write and debug Azure Functions on your Windows 10 PC, then check the code into your favorite source code management system and deploy it in a controlled manner. In short – real world.

Getting Ready

Before you start, let’s get the big elephant out of the way: the actual runtime is Windows-only. Sorry, Mac users. The runtime relies on the 4.x .NET Framework, and you don’t have that – boot into Windows 10. You can still create functions locally on a Mac, but you will have to publish them to the cloud to run them. There is no local runtime on a Mac.

To get your workstation prepped, you will need the following:

  • Node
  • Visual Studio Code
  • Azure Functions Runtime

Node is relatively easy. Download the Node package from nodejs.org and install it as you would any other package. You should be able to run the node and npm programs from your command line before you continue. Visual Studio Code is similarly easy to download and install. You can download additional extensions if you like; if you write functions in C#, I would definitely grab the C# extension.

The final bit is the Azure Functions Runtime. This is a set of tools produced by the Azure Functions team to create and run Functions locally; it is based on Yeoman. To install:

npm install -g yo generator-azurefunctions azure-functions-cli

WARNING There is a third-party module called azure-functions which is not the same thing at all. Make sure you install the right thing!

After installing, the func command should be available:

func-1

Once you have these three pieces, you are ready to start working on Azure Functions locally.

Creating an Azure Functions Project

Creating an Azure Functions project uses the func command:

mkdir my-func-application
cd my-func-application
func init

Note that func init creates a git repository as well – one less thing to do! Our next step is to create a new function. The Azure Functions CLI uses Yeoman underneath, which we can call directly using yo azurefunctions:

func-2

You can create as many functions as you want in a single function app. In the example above, I created a simple HTTP triggered function written in JavaScript. This can be used as a custom API in a mobile app, for example. The code for my trigger is stored in test-func\index.js:

module.exports = function(context, req) {
    context.log('Node.js HTTP trigger function processed a request. RequestUri=%s', req.originalUrl);

    if (req.query.name || (req.body && req.body.name)) {
        context.res = {
            // status: 200, /* Defaults to 200 */
            body: "Hello " + (req.query.name || req.body.name)
        };
    }
    else {
        context.res = {
            status: 400,
            body: "Please pass a name on the query string or in the request body"
        };
    }
    context.done();
};

and the binding information is in test-func\function.json:

{
    "disabled": false,
    "bindings": [
        {
            "authLevel": "function",
            "type": "httpTrigger",
            "direction": "in",
            "name": "req"
        },
        {
            "type": "http",
            "direction": "out",
            "name": "res"
        }
    ]
}
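As an aside, the httpTrigger binding accepts a couple of optional properties that are worth knowing about – to the best of my knowledge, methods restricts which HTTP verbs the function answers and route customizes the URL. A hedged sketch (check the binding reference for your runtime version):

```json
{
    "disabled": false,
    "bindings": [
        {
            "authLevel": "function",
            "type": "httpTrigger",
            "direction": "in",
            "name": "req",
            "methods": [ "get", "post" ],
            "route": "tables/todoitem/{id?}"
        },
        {
            "type": "http",
            "direction": "out",
            "name": "res"
        }
    ]
}
```

A route like tables/todoitem/{id?} would hand you the optional id directly, rather than parsing it out of the URL yourself.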

Running the function

To run the Azure Functions Runtime for your function app, use func run test-func.

The runtime is kicked off first. This monitors the function app for changes, so any changes you do in the code will be reflected as soon as you save the file. If you are running something that can be triggered manually (like a cron job), then it will be run immediately. For my HTTP trigger, I need to hit the HTTP endpoint – in this case, http://localhost:7071/api/test-func.

Note that the runtime runs with the version of Node that you installed, on your local machine – yet it can still be triggered by whatever you set up. If you set up a blob trigger from a storage account, then that will trigger your function. You do have to set up the environment properly, though. Remember that App Service (and Functions) app settings appear as environment variables to the runtime. When you run locally, you will need to set up the app settings manually by setting environment variables of the same names. Do this before you use func run for the first time.
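In code, that just means reading process.env. This sketch (the setting name MY_QUEUE_CONNECTION is hypothetical) shows a helper that works both locally and in Azure:

```javascript
// App settings surface as environment variables, both in Azure and
// (if you set them yourself before `func run`) in the local runtime.
function getSetting(name, defaultValue) {
    const value = process.env[name];
    return value === undefined ? defaultValue : value;
}

// MY_QUEUE_CONNECTION is a hypothetical app setting name; locally,
// export it in your shell before starting the runtime.
const connection = getSetting('MY_QUEUE_CONNECTION', 'not-configured');
```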

Debugging the function

Running the function is great, but I want to debug the function – set a breakpoint, inspect the internal state of the function, etc. This can be done easily in Visual Studio Code as the IDE has an integrated Node debugger.

  • Run func run test-func --debug
  • Go into the debugger within Visual Studio Code, and set a breakpoint
  • Switch to the Debugger tab and hit Start (or F5)

    func-3

  • Trigger the function. Since I have a HTTP triggered function, I’m using Postman for this:

    func-4

  • Note that the function is called and I can inspect the internals and step through the code:

    func-5

You can now step through the code, inspect local variables and generally debug your script.

Next up

In the next article, I’ll discuss programmatically creating an Azure Function app and deploying our function through some of the deployment mechanisms we have available to us in Azure App Service.

30 Days of Zumo.v2 (Azure Mobile Apps): Day 1 – Setup

I’ve committed myself to learning as much Azure Mobile Apps as possible. Internally, this project is called Zumo (Azure Mobile glued together) and several sites have used this name with reference to Azure Mobile Services. Azure Mobile Apps is Zumo v2. It’s a server SDK that runs on top of a web site. This has some really interesting implications – you can use all the features of Azure Web Apps (staging slots, scaling, backup and restore, authentication) and use the same API within the Web App and the Mobile App. Each of the 30 days of code will cover a single topic in about an hour.

To get started, you’ve got to have an Azure subscription. Sure, you could use TryAppService to create a backend, but that only lasts for an hour and it’s very restrictive – you don’t get to alter the backend code. If you haven’t already, there is a free trial that lasts 30 days. Once you get beyond the 30 days, you can run a development site for free.

Day 1: Setup

Day 1 is all about setup. I am going to do all my development on a Mac. Why not a PC with Visual Studio? Visual Studio is a very specific environment and doesn’t lend itself to iOS development. I want the experience to be as raw as possible and entirely free. Developing on the Mac tends to be a little more painful when you are used to an integrated environment like Visual Studio. Visual Studio, however, hides a lot of details from you. When things “just happen”, you tend not to learn what’s behind them, and debugging capabilities get lost. I want to avoid that, so I’ll be using the command line, the Azure Portal and a simple text editor.

What else do I need?

A Text Editor

What’s your favorite code editor? On a Mac, mine is Atom. There are a bunch of decent plugins for Atom and I’ve covered some of them in the past. I’ll probably do another post at some point about my favorite Atom plug-ins for JavaScript development. I also like Visual Studio Code, which has some great plug-ins of its own. I’ve heard good things about Sublime Text as well. All three of these editors are available on Mac and Windows.

I’m not advocating for a specific text editor here. There are things you definitely want: a monospace font and syntax highlighting would be enough. Just pick your favorite.

One thing to avoid is anything “heavy”. Eclipse, for example, falls into this camp. Its functionality improvements are marginal, yet its startup cost and memory utilization make it distinctly second-rate for what I want to do – edit files.

Google Chrome

Yes, I’m being specific here. Google Chrome gets its developer tools right. In addition, you will want a few plugins. The most notable is Postman – a mechanism for executing REST calls and viewing the output.

Git

Again, I’m being specific here. There are several git tools, but they all implement git. Don’t try to get something else. I am going to be putting things on GitHub, which uses git underneath. There is Git for Mac and Git for Windows. I’ll use these tools for storing my code in a source code repository on GitHub and for querying Azure App Service.

A Command Line

If you are on a Mac, there is a command line under Applications > Utilities. If you are on a PC, you have the PowerShell prompt, although I prefer ConEmu. Since I’m not going to be using Visual Studio, I need somewhere to execute commands.

Xcode (Mac Only)

If you are developing iOS applications, the compilation step must use Xcode and must run on a Mac. Android applications don’t care. Windows applications don’t require a specific compiler, but you have to compile on Windows – when I get to that, I’ll switch over to Windows and use Visual Studio. At some point you will be asked to start Xcode and accept the license – at which point, you might as well download it now. You can do this from the Mac App Store. Note, however, that it’s a 1+ GB download, so it takes some time.

An FTP Client

I like a graphical ftp client for this. It allows me to browse the App Service behind the scenes. You can find a good list on LifeHacker. Personally, I use Cyberduck for this.

After the software, you will also need a GitHub ID – I’m going to store my code on GitHub, so there will be a repository there.

Let’s get started

The Azure Documentation already adequately covers creating an Azure Mobile App. I recommend following that tutorial to get your Azure Mobile App created and then hooked up to a SQL Azure instance. You can follow any of the tutorials – you will end up with a mobile site and a client that implements a simple TodoItem hooked up to the mobile backend.

One word on pricing and what sort of app pricing plan you should choose. There are several types of site sizing and they offer some interesting choices. Here is the breakdown:

Screen Shot 2016-03-09 at 6.34.39 PM

Note that Basic does not offer all the features of Standard and Premium. In fact, Standard offers many features that you should be interested in:

  • Auto-scaling – while not an issue in development, this will be an issue in production. You want your application to grow as demand grows, automatically. Basic only scales manually and has limited scaling capacity.
  • Staging Slots – this is an awesome feature that I’ll discuss in a later blog post. One of the things this allows you to do is to upload a new site, test it out and then swap out the production version, all with zero down time.
  • Backups – we like backups. They are important. Standard adds a daily backup.

Premium adds more disk space, BizTalk services, more staging slots and more backups. Most developers can get away with the Basic edition, since developers only need limited scalability (to test what happens when the service does scale) and don’t need staging slots. There are two other tiers – Free and Shared:

Screen Shot 2016-03-09 at 6.42.08 PM

Note the lack of features. Free and Shared are great if you are just learning but you will find them painful to use. Spend some of your Azure free trial credits on a minimum of Basic.

Note that I’m not saying anything about the options available on SQL Azure here. The pricing when you create an App Service has nothing to do with SQL Azure. To get your effective pricing, you need to add your App Service plan to your SQL Azure plan:

sql-azure-pricing

For most normal learning activities, you can use a B-Basic plan for your SQL Azure. If you want to try out Georeplication or you have bigger data needs, you can use an S0-Standard. The pricing goes up from there. As with App Service, there is a Premium offering that adds Active Georeplication – good for those mission critical revenue-on-the-line type of apps.

Want a completely free version of this? Pick an F1-Free App Service plan and an F-Free SQL Azure plan. Want to learn everything there is to learn about the platform without changing plans later? Pick an S1-Standard App Service plan and an S0-Standard SQL Azure plan. You can upgrade your plan at any point, so this allows you to start small and move up in cost as your needs grow.

If you are learning or developing and do pick a standard plan, make sure you shut down the App Service at the end of your activity. This will save you some cash at night when you aren’t using the service.

Setting up for Development

So, I’ve got my site all set up. I also have a nice iOS Todo app that allows me to add TodoItems (I used the Swift version, since I’m mildly interested in learning the language), but I have not been shown any of the server code as yet. I want to set up something else here – Continuous Deployment. To configure continuous deployment, I’m going to do the following:

  • Create a GitHub repository
  • Clone the GitHub repository onto your local machine
  • Download the source code for the site I created
  • Check in the source code for the site into GitHub
  • Create a branch on GitHub for Azure deployment
  • Link the site to deploy directly from GitHub

This is a cool feature. Once I’m set up, deployment happens automatically. When I push changes to GitHub, the Azure Mobile App will automatically pick them up and deploy them for me. Here is how to do it:

1. Create a GitHub Repository.

  1. Log onto GitHub
  2. Click on the + in the top right corner of the web browser and select New Repository
  3. Fill in the form – I called my repository 30-days-of-zumo-v2
  4. Click on Create Repository

2. Clone the repository

  1. Open up your command prompt
  2. Pick a place to hold your GitHub repositories. Mine is ~/github – if you need to make the directory, then do so.
  3. Change directory to that place: cd ~/github, for example
  4. Type in:
git clone https://github.com/adrianhall/30-days-of-zumo-v2

You will replace the URL with the URL of your repository – this was displayed when you created the repository.

3. Download the Azure Website

First step is to set your deployment credentials. Log onto the Azure Portal, select your site then select All Settings. Find the Deployment Credentials option, then fill in the form and click on Save. I like to use my email address with special characters replaced by underscores for my username – this ensures it is unique. Make your password very hard to guess. Use a password manager if you need to.

Let’s get the requisite information for an ftp download:

  • The server is listed on the starting blade of your site, but will be something like ftp://waws-prod-bay-043.ftp.azurewebsites.windows.net
  • Your username is SITENAME\USERNAME. The SITENAME is the name of your site. The USERNAME is what you set in the deployment credentials. This is listed on the starting blade as well, right above the FTP Hostname.
  • Your password is whatever you set in the deployment credentials.

Open up Cyberduck, enter the information (uncheck the anonymous checkbox) and click on Connect. You can use the ftp or ftps protocol – I prefer ftps since information, including your deployment credentials, is transmitted with SSL encryption.

Screen Shot 2016-03-09 at 7.32.48 PM

You will now be able to see the site. Expand the site node then the wwwroot node. Highlight everything in the wwwroot node, right-click and select Download to…. Put the files in the directory you cloned from GitHub.

4. Check in the code for your site

Before you go all “git add” on this stuff, there is some cleanup to do. Right now, the site is set up to use Easy Tables and Easy APIs – there are some extra files that you don’t really need. That’s because we are going to act like developers and keep our files checked into source code control. That really means we can’t use Easy Tables and Easy APIs. Those facilities are great for simple sites and I highly recommend you check them out. But you will leave them behind once you get serious about developing a backend – you will write code and check it into a source code repository.

Let’s start by removing the files we don’t need because we aren’t going to be using Easy Tables or Easy APIs:

  • sentinel
  • tables/TodoItem.json

We’ll also remove the files that are handled by the server or restored during deployment:

  • node_modules
  • iisnode.yml
  • web.config

You can do this within your GUI or on the command line with the rm command. On Windows, use rimraf:

npm install rimraf -g
rimraf node_modules

Finally, add a .gitignore file – go to https://gitignore.io, enter Node in the box and click on Generate. This will generate a suitable .gitignore file that you can cut and paste into an editor.

You are now ready to check in the initial version. Make sure you are in the project directory, then type:

git add .
git commit -m "Initial Checkin"
git push -u origin master

This will push everything up to the master branch on GitHub.

5. Create an Azure deployment branch

You can do this from the command line as well. Make sure you are still in the project directory, then type:

git checkout -b azure
git push -u origin azure

This will create an azure branch for you to merge into (more on that later), then push it up to GitHub.

6. Link the azure branch to continuous deployment

Log back on to the Azure portal and select your site. Click on All Settings, then click on Deployment Source. Select GitHub as your deployment source. You will probably have to enter your GitHub credentials in order to proceed. Eventually, you will see something like this:

Screen Shot 2016-03-09 at 7.51.02 PM

Pick your project (it’s the GitHub repository you created) and the azure branch. Once done, click on OK. Finally, click on Sync.

Something completely magical will happen now. Well, not so magical really – that comes later. The Azure system will go off and fetch the project. It will install all the dependencies of the project (listed in the package.json file) and then deploy the results. The magical piece happens later – whenever you push a new version to the azure branch, it will automatically be deployed. You’ll be able to see it happen.

This post went a little longer than I planned, but I’m now all set up for continuous deployment to Azure. In the next post, I’ll look at upgrading the Node.js version and handling the check-in and merge mechanism. In addition, I’ll look at a local development cycle (rather than deploying) using the SQL Azure instance I’ve set up.

If you want to follow along on the code, I’ve set up a new GitHub repository – enjoy!

Using Azure Mobile Apps from a React Redux App

I did some work towards my React application in my last article – specifically handling authentication with Auth0 providing the UI and then swapping the token with Azure Mobile Apps for a ZUMO token. I’m now all set to do some CRUD operations within my React Redux application. There is some basic Redux stuff in here, so if you want a refresher, check out my prior Redux articles.

Refreshing Data

My first stop is “how do I get the entire table that I can see from Azure Mobile Apps?” This requires multiple actions in a React Redux world. Let’s first of all look at the action creators:

import constants from '../constants/tasks';
import zumo from '../../zumo';

/**
 * Internal Redux Action Creator: update the isLoading flag
 * @param {boolean} loading the isLoading flag
 * @returns {Object} redux-action
 */
function updateLoading(loading) {
    return {
        type: constants.UpdateLoadingFlag,
        isLoading: loading
    };
}

/**
 * Internal Redux Action Creator: replace all the tasks
 * @param {Array} tasks the new list of tasks
 * @returns {Object} redux-action
 */
function replaceTasks(tasks) {
    return {
        type: constants.ReplaceTasks,
        tasks: tasks
    };
}

/**
 * Redux Action Creator: Set the error message
 * @param {string} errorMessage the error message
 * @returns {Object} redux-action
 */
export function setErrorMessage(errorMessage) {
    return {
        type: constants.SetErrorMessage,
        errorMessage: errorMessage
    };
}

/**
 * Redux Action Creator: Refresh the task list
 * @returns {Object} redux-action
 */
export function refreshTasks() {
    return (dispatch) => {
        dispatch(updateLoading(true));

        const success = (results) => {
            console.info('results = ', results);
            dispatch(replaceTasks(results));
        };

        const failure = (error) => {
            dispatch(setErrorMessage(error.message));
        };

        zumo.table.read().then(success, failure);
    };
}

Four actions for a single operation? I’ve found this is common for Redux applications that deal with backend services – you need to have several actions to implement all the code-paths. I could have gotten away with just three – an initiator, a successful completion and an error condition. However, I wanted to ensure I had flexibility. The setErrorMessage() and updateLoading() actions are generic enough to be re-used for other actions.

Two of these action creators are internal – I don’t export them, so the rest of the application never sees them. The only action creator that the application at large can use is refreshTasks() – the initiator for the refresh. I’ve made setErrorMessage() generic enough that it can also be used by an error dialog to clear the error. Lesson learned – only export the action creators that you want the rest of the application to use.

Looking at refreshTasks(), I’m not doing any filtering. Azure Mobile Apps supports filtering on the server as well as the client. I’d rather filter on the client in this application – it saves a round trip and the data is never going to be big enough for filtering to be a problem. That may not be true in your application – you should make a decision on server vs. client filtering in terms of performance and memory usage.
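Concretely, client-side filtering is just an array operation over the cached tasks; the server-side equivalent would be something like zumo.table.where({ complete: false }).read() (my assumption about the Azure Mobile Apps JS client's query API – verify against the SDK you're using). The client-side version is a sketch like:

```javascript
// Client-side filter over the in-memory cache: cheap while the table
// is small, since every record has already been downloaded.
function incompleteTasks(tasks) {
    return tasks.filter((task) => { return !task.complete; });
}
```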

Insert, Modify and Delete Tasks

I’ve already got the actions – I just need to update them for the async server code. For example, here is the insert code:

/**
 * Redux Action Creator: Insert a new task into the cache
 * @param {Object} task the task to be updated
 * @param {string} task.id the ID of the task
 * @param {string} task.text the description of the task
 * @param {bool} task.complete true if the task is completed
 * @returns {Object} redux-action
 */
function insertTask(task) {
    return {
        type: constants.Create,
        task: task
    };
}

/**
 * Redux Action Creator: Create a new Task
 * @param {string} text the description of the new task
 * @returns {Object} redux-action
 */
export function createTask(text) {
    return (dispatch) => {
        dispatch(updateLoading(true));

        const newTask = {
            text: text,
            complete: false
        };

        const success = (insertedItem) => {
            console.info('createTask: ', insertedItem);
            dispatch(insertTask(insertedItem));
        };

        const failure = (error) => {
            dispatch(setErrorMessage(error.message));
        };

        zumo.table.insert(newTask).then(success, failure);
    };
}

I’m reusing the updateLoading() and setErrorMessage() action creators from the refresh. createTask() performs the insert asynchronously, then calls the insertTask() action creator with the newly created task to update the in-memory cache (as we will see below when we come to the reducers). There are similar mechanisms for modification and deletion: I create an internal action creator to update the in-memory cache, and the exported action creator initiates the change and doesn’t update the in-memory cache until the request has completed successfully.
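For instance, the delete pair might look like this sketch – the zumo.table.del() call, the helpers and the constant names are stubbed here, so this is my guess at the shape rather than the article's exact code:

```javascript
// Stand-ins for the article's helpers so this sketch runs on its own;
// in the real app these are the action creators and client shown above.
const updateLoading = (loading) => ({ type: 'UpdateLoadingFlag', isLoading: loading });
const setErrorMessage = (msg) => ({ type: 'SetErrorMessage', errorMessage: msg });
const zumo = { table: { del: (item) => Promise.resolve(item) } }; // fake table client

// Internal action creator: remove the task from the in-memory cache
// once the server has confirmed the delete.
function removeTask(taskId) {
    return { type: 'Delete', taskId: taskId };
}

// Exported action creator: initiate the async delete; the cache is
// only touched after the server call succeeds.
function deleteTask(task) {
    return (dispatch) => {
        dispatch(updateLoading(true));
        const success = () => dispatch(removeTask(task.id));
        const failure = (error) => dispatch(setErrorMessage(error.message));
        return zumo.table.del({ id: task.id }).then(success, failure);
    };
}
```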

I did need to do some work to add a dialog on the error message being set in my Application.jsx component:

        const onClearError = () => { return dispatch(taskActions.setErrorMessage(null)); };
        let errorDialog = <div style={{ display: 'none' }}/>;
        if (this.props.errorMessage) {
            const actions = [ <FlatButton key="cancel-dialog" label="OK" primary={true} onTouchTap={onClearError} /> ];
            errorDialog = (
                <Dialog
                    title="Error from Server"
                    actions={actions}
                    modal={true}
                    open={true}
                    onRequestClose={onClearError}
                >
                    {this.props.errorMessage}
                </Dialog>);
        }

I then place {errorDialog} in my rendered JSX file.

Adjusting the Cache

Let’s take a look at the reducers.

import constants from '../constants/tasks';

const initialState = {
    tasks: [],
    profile: null,
    isLoading: false,
    authToken: null,
    errorMessage: null
};

/**
 * Reducer for the tasks section of the redux implementation
 *
 * @param {Object} state the current state of the tasks area
 * @param {Object} action the Redux action (created by an action creator)
 * @returns {Object} the new state
 */
export default function reducer(state = initialState, action) {
    switch (action.type) {
    case constants.StoreProfile:
        return Object.assign({}, state, {
            authToken: action.token,
            profile: action.profile
        });

    case constants.UpdateLoadingFlag:
        return Object.assign({}, state, {
            isLoading: action.isLoading
        });

    case constants.Create:
        return Object.assign({}, state, {
            isLoading: false,
            tasks: [ ...state.tasks, action.task ]
        });

    case constants.ReplaceTasks:
        return Object.assign({}, state, {
            isLoading: false,
            tasks: [ ...action.tasks ]
        });

    case constants.Update:
        return Object.assign({}, state, {
            isLoading: false,
            tasks: state.tasks.map((tmp) => { return tmp.id === action.task.id ? Object.assign({}, tmp, action.task) : tmp; })
        });

    case constants.Delete:
        return Object.assign({}, state, {
            isLoading: false,
            tasks: state.tasks.filter((tmp) => { return tmp.id !== action.taskId; })
        });

    case constants.SetErrorMessage:
        return Object.assign({}, state, {
            isLoading: false,
            errorMessage: action.errorMessage
        });

    default:
        return state;
    }
}

You will note that my reducers only deal with the local cache. I could, I guess, also store this in localStorage so that my restart speed is faster. There would be a more complex interaction between the server, the in-memory cache and the localStorage cache that would have to be sorted out, however.

Note that all my reducers that change the in-memory cache also turn off the isLoading flag. This saves one dispatch through Redux. I doubt it’s a significant performance increase, but I’m of the opinion that any obvious performance win should be taken. In this case, each operation results in one less action dispatch and one less Object.assign. In bigger projects, this could be significant.

Thinking about Sync and Servers

One of the things you can clearly see in this application is the delay in the round-trip to the server. I don’t update the cache until I have updated the server. This is safe. However, it’s hardly performant. There are a couple of ways I could fix this.

Firstly, I can update the local cache first. For example, let’s say I am inserting a new object. I can add two new local fields that are not transmitted to the server: clientId and isDirty. When the task is newly created, I can create a clientId instead (and use that everywhere) and set the dirty flag. When the server response comes back, I update the record from the server (not updating clientId) and clear the dirty flag. This allows me to identify “things that have not been updated on the server”, perhaps preventing multiple updates – it also allows me to identify things that have been newly created on the client.
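That first approach can be sketched like this – the clientId format and the exact field names are illustrative, not a prescribed scheme:

```javascript
// Optimistic insert: add the record to the cache immediately with a
// client-generated id and a dirty flag.
let nextClientId = 0;
function optimisticCreate(cache, text) {
    nextClientId += 1;
    const task = {
        clientId: 'client-' + nextClientId, // local-only identifier
        isDirty: true,                      // not yet confirmed by the server
        text: text,
        complete: false
    };
    return [ ...cache, task ];
}

// When the server responds, merge its fields in (keeping the clientId)
// and clear the dirty flag.
function reconcile(cache, clientId, serverRecord) {
    return cache.map((task) => {
        return task.clientId === clientId
            ? Object.assign({}, task, serverRecord, { clientId: clientId, isDirty: false })
            : task;
    });
}
```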

Secondly, I can update a localStorage area instead of the server. This will be much faster. Then, periodically, I can trigger a refresh of the data from the server – sending the changes to the localStorage area up to the server.

There are multiple ways to do synchronization of data with a server – which one depends on the requirements of accuracy of the data on the server, performance required on the client and memory consumption. There are trade offs whichever way you choose.

Where’s the code

You can find the complete code on my GitHub Repository.

Integrating Auth0 with Azure Mobile Apps JavaScript client

I included a mechanism to get Auth0 working in my Webpack-based React application during my last article. Today I want to go one step further. I want to show how you can use the information you get back from Auth0 to authenticate to Azure Mobile Apps. Azure Mobile Apps has recently released azure-mobile-apps-client v2.0.0-beta4 for JavaScript and Apache Cordova. One of the neat things about this system is that you can use whatever library you like to authenticate a user as long as you get the original identity provider token. That means that you can, for instance, use a Facebook provided library to integrate with the Facebook app and then submit that token to Azure Mobile Apps to generate an Azure App Service token. This is called “client-directed authentication flow”.

It requires a little bit of setup though. In this article, I’m going to go through the process for generating a Microsoft Account, use Auth0 as the UI for the authentication and then integrate it into the Azure Mobile Apps JavaScript SDK.

Step 1: Set up a Microsoft Account Application

Log on to the Microsoft Developer Account. Click on Create Application, then click on API Settings and fill in the form like this:

auth0-zumo-apps-1

Specifically, the Mobile or Desktop Client toggle should be set to No and the Redirect URLs should match your Auth0 callback, which is based on the Auth0 Domain for your application. Log onto Auth0, click on App / APIs and then click on your application to find this information. Click on Save, then click on App Settings. You need to cut-and-paste the Client ID and Client Secret as you will need those.

Step 2: Update your Auth0 Application

You need to set up the Microsoft Account in your application within Auth0. Log into your Auth0 dashboard, click on Connections, then Social and finally Windows Live.

auth0-zumo-apps-2

Cut and paste your Client ID and Client Secret from Step 1 into the relevant boxes. If you want the user’s email address, make sure you have the right box checked. Click on Save, then close the box.

Step 3: Set up Authentication on Azure App Service

Log onto the Azure Portal, click on All Resources, then your Azure Mobile Apps application (if you don’t have one yet, follow their tutorial). Click on All Settings, then Authentication / Authorization. Now you are in the right place to be setting up authentication.

  • Turn App Service Authentication on
  • Set the action to take when the request is not authenticated to Allow request
  • Turn the Token Store on (under Advanced Settings).

Now click on Microsoft Account. Cut and paste the Client ID and Client Secret from Step 1, and select the same boxes as you did in Step 2 – these are the claims you are requesting be provided to you.

auth0-zumo-apps-3

This is the most important step. The Client ID and Client Secret MUST be unique to your application (you can’t “try it” in the Auth0 dashboard, for example) and they must match on both sides (don’t use two different client ID/secret combos). This ensures that the token provided by Auth0 can be verified by Azure Mobile Apps.

Step 4: Load the Azure Mobile Apps SDK

When you npm install azure-mobile-apps-client, the actual library is in node_modules/azure-mobile-apps-client/dist/MobileServices.Web.min.js – you need to include this as a script reference in your HTML file. At this point, there is no CDN for this library and you can’t “require” the library into Webpack. Those facilities will come later. When it is loaded, you will be able to see a WindowsAzure.MobileServiceClient object within the global context of the browser.
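A quick sanity check helps here before constructing a client. The following helper is a sketch of my own (the name isZumoSdkLoaded is not part of the SDK); it just confirms the global the script tag is supposed to provide is actually present:

```javascript
// Returns true when the Azure Mobile Apps SDK global is available on the
// supplied scope (pass `window` in the browser).
function isZumoSdkLoaded(globalScope) {
    return Boolean(globalScope
        && globalScope.WindowsAzure
        && globalScope.WindowsAzure.MobileServiceClient);
}
```

If this returns false, check that the script reference appears before your Webpack bundle in the HTML file.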

I created a small module to construct the client:

/* global WindowsAzure */

const client = new WindowsAzure.MobileServiceClient(window.APPLICATION.base);
const table = client.getTable('TodoItem');

// Store the client so we can try things
window.APPLICATION.client = client;

export default {
    client: client,
    table: table
};

Now I can do something like:

import zumo from 'path/to/zumo';

This brings in the client and table references. APPLICATION.base is set to my Azure Mobile Apps URL (in this case, https://ahall-todo-app.azurewebsites.net/). Note that I store the resulting client in my global APPLICATION object – this aids in debugging later on if I need to check something on a live connection.
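With the client and table in hand, the table reference behaves like any promise-based API. Here is a minimal sketch of reading from it – the loadTodoItems helper and the complete column are assumptions of mine, matching the usual Todo quickstart schema:

```javascript
// Sketch: read the TodoItem table and keep only the incomplete items.
// zumo is the module defined above; table.read() issues a GET against
// /tables/TodoItem and resolves with an array of records.
function loadTodoItems(zumo) {
    return zumo.table.read()
        .then((items) => items.filter((item) => !item.complete));
}
```

Until authentication is wired up (next step), this call will fail if the table requires an authenticated user.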

Step 5: Convert the Auth0 token into an Azure Mobile Apps token

The Auth0 profile that is returned by the lock.show() callback contains an element called identities. There will only be one identity – your Microsoft Account one. In there is an access_token which is the token provided by the identity provider. You can use this as follows:

export function authenticate(profile, token) {
    return (dispatch) => {
        // Start the refresh process
        dispatch(updateLoading(true));

        const loginSuccess = (data) => {
            // Store the original profile and the mobile service auth token
            dispatch(storeProfile(profile, data.mobileServiceAuthenticationToken));
            // Update the loading task to false
            dispatch(updateLoading(false));
        };
        // On failure, clear the authentication
        const loginFailed = (error) => {
            dispatch(storeProfile(null, null));
            dispatch(setErrorMessage(error.message));
        };

        // Trigger the process to swap the token for a zumo-token
        zumo.client.login('microsoftaccount', { access_token: profile.identities[0].access_token }) // eslint-disable-line camelcase
            .then(loginSuccess, loginFailed);
    };
}

Note that I’m passing the access token from the identity provider (NOT the Auth0 token) to the Azure Mobile Apps client.login() method. If the call succeeds, I’m using Redux and dispatching an action to update my authentication profile. If an error occurs, I’m dispatching an action to set the error message. In my application, this pops up a dialog stating that an error occurred (and clears the login).
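For completeness, the action creators referenced in that thunk could be as simple as the following sketch – the action type strings are my own invention and will differ in your store:

```javascript
// Hypothetical plain Redux action creators used by authenticate() above.
const updateLoading = (loading) => ({ type: 'UPDATE_LOADING', loading });
const storeProfile = (profile, token) => ({ type: 'STORE_PROFILE', profile, token });
const setErrorMessage = (message) => ({ type: 'SET_ERROR_MESSAGE', message });
```

Your reducers would then store the profile and mobile service token in state, which is what the rest of the app consults to decide whether the user is signed in.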

Some Common Errors

It’s best to get down to a network level when you are diagnosing problems in this flow – do this by running the application in Chrome and opening up the Developer Tools, then switching to the Network tab. Click the XHR button to only see AJAX requests. When you see a problem, click on the Response for the request that went wrong. Look at the status:

  • A 401 Unauthorized error indicates that you’ve configured Microsoft Account, but the Client ID or Client Secret doesn’t match what’s in Auth0
  • A 404 Not Found error indicates you did not set up the appropriate Identity Provider in Azure Mobile Apps

If you aren’t getting beyond the Sign In button, inspect APPLICATION.client.currentUser in the browser console and ensure the user information is being filled in.
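That check can be wrapped in a tiny helper for the console – the helper name is mine, but currentUser and its userId field come from the SDK:

```javascript
// Sketch of a debugging helper: report whether the Azure Mobile Apps
// client believes a user is signed in.
function describeLogin(client) {
    if (!client.currentUser) {
        return 'not signed in';
    }
    return 'signed in as ' + client.currentUser.userId;
}
```

Run describeLogin(APPLICATION.client) after clicking Sign In; if it still says "not signed in", the token swap in Step 5 never succeeded.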

Auth0 supports many more identity providers than Azure Mobile Apps does. In Azure App Service you only get Facebook, Twitter, Google, Microsoft Account (Windows Live), and Azure Active Directory, so use one of those.

Wrap Up

Want to see the fully working example? I’ve got it on my GitHub repository. My intent is to provide a React-in-the-browser example of the Todo application that Azure Mobile Apps uses for its quickstarts. Now that I’ve got authentication working, I’m going to move on to using the JavaScript library to cloud-connect this Todo app.