Azure App Service Authentication in an ASP.NET Core Application

In my prior posts, I’ve covered:

This is a great start. Next on my list is authentication. How do I handle both a web-based and a mobile-based authentication pattern within a single application? ASP.NET Core provides a modular authentication system. (I’m sensing a theme here – everything is modular!) So, in this post, I’m going to cover the basics of authentication, then cover how Azure App Service Authentication works (although Chris Gillum does a much better job than I do and you should read his blog for all the Easy Auth articles), and then introduce an extension to the authentication library that implements Azure App Service Authentication.

Authentication Basics

To implement authentication in ASP.NET Core, you place the following in the ConfigureServices() method of your Startup.cs:

services.AddAuthentication();

Here, services is the IServiceCollection that is passed in as a parameter to the ConfigureServices() method. In addition, you need to call a UseXXX() extension method within the Configure() method to wire in a specific authentication scheme. Here is an example:

app.UseJwtBearerAuthentication(new JwtBearerOptions
{
    Authority = Configuration["JWT:Authority"],
    Audience = Configuration["JWT:Audience"]
});

Once that is done, your MVC controllers or methods can be decorated with the usual [Authorize] attribute to require authentication. Finally, you need to add the Microsoft.AspNetCore.Authentication NuGet package to your project to bring in the authentication framework.
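
For reference, adding the package in the new csproj format looks something like this (the version number is an assumption – match it to your framework version):

<ItemGroup>
  <PackageReference Include="Microsoft.AspNetCore.Authentication" Version="1.1.0" />
</ItemGroup>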

In my project, I’ve added the services.AddAuthentication() method to ConfigureServices() and added an [Authorize] tag to my /Home/Configuration controller method. This means that the configuration viewer that I used last time now needs authentication:

[Screenshot: auth-failed – the Configuration request returning a 401]

That 401 HTTP status code for the Configuration request is indicative of a failed authentication. 401 is "Unauthorized". This is completely expected because we have not configured an authentication provider yet.

How App Service Authentication Works

Working with Azure App Service Authentication is relatively easy. A JWT-based token is submitted either as a cookie or as the X-ZUMO-AUTH header. The information necessary to decode that token is provided in environment variables:

  • WEBSITE_AUTH_ENABLED is True if the Authentication system is loaded
  • WEBSITE_AUTH_SIGNING_KEY is the key used to sign the JWT
  • WEBSITE_AUTH_ALLOWED_AUDIENCES is the list of allowed audiences for the JWT

If WEBSITE_AUTH_ENABLED is set to True, decode the X-ZUMO-AUTH header to see if the user is valid. If the user is valid, then do a HTTP GET of {issuer}/.auth/me with the X-ZUMO-AUTH header passed through to get a JSON blob with the claims. If the token is expired or non-existent, then don’t authenticate the user.
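
For illustration, here is an abridged sketch of the JSON blob that /.auth/me returns (the field names come from the App Service documentation; the values here are placeholders):

[
    {
        "provider_name": "aad",
        "user_id": "user@example.com",
        "user_claims": [
            { "typ": "name", "val": "Example User" },
            { "typ": "stable_sid", "val": "sid:..." }
        ],
        "access_token": "...",
        "id_token": "..."
    }
]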

This has an issue in that you have to make another HTTP call to get the claims. This is a small overhead, and the team is working to fix this for "out of process" services. In-process services, such as PHP and ASP.NET, have access to server variables. The JSON blob that is returned by calling the /.auth/me endpoint is presented as a server variable, so it doesn't need to be fetched. ASP.NET Core applications are "out of process", so we can't use this mechanism.

Configuring the ASP.NET Core Application

In the Configure() method of the Startup.cs file, I need to do something like the following:

            app.UseAzureAppServiceAuthentication(new AzureAppServiceAuthenticationOptions
            {
                SigningKey = Configuration["AzureAppService:Auth:SigningKey"],
                AllowedAudiences = new[] { $"https://{Configuration["AzureAppService:Website:HOST_NAME"]}/" },
                AllowedIssuers = new[] { $"https://{Configuration["AzureAppService:Website:HOST_NAME"]}/" }
            });

This is just pseudo-code right now because neither the UseAzureAppServiceAuthentication() method nor the AzureAppServiceAuthenticationOptions class exists. Fortunately, there are many templates for a successful implementation of authentication. (Side note: I love open source.) The closest one to mine is the JwtBearer authentication implementation. I’m not going to show off the full implementation – you can go check it out yourself. However, the important work is done in the AzureAppServiceAuthenticationHandler file.

The basic premise is this:

  1. If we don’t have an authentication source (a token), then return AuthenticateResult.Skip().
  2. If we have an authentication source, but it’s not valid, return AuthenticateResult.Fail().
  3. If we have a valid authentication source, decode it, create an AuthenticationTicket and then return AuthenticateResult.Success().

Detecting the authentication source means digging into the Request.Headers[] collection to see if there is an appropriate header. The version I have created supports both the X-ZUMO-AUTH and Authorization headers (for future compatibility):

            // Grab the X-ZUMO-AUTH token if it is available
            // If not, then try the Authorization Bearer token
            string token = Request.Headers["X-ZUMO-AUTH"];
            if (string.IsNullOrEmpty(token))
            {
                string authorization = Request.Headers["Authorization"];
                if (string.IsNullOrEmpty(authorization))
                {
                    return AuthenticateResult.Skip();
                }
                if (authorization.StartsWith("Bearer ", StringComparison.OrdinalIgnoreCase))
                {
                    token = authorization.Substring("Bearer ".Length).Trim();
                }
                // An Authorization header without a Bearer token is not for us
                if (string.IsNullOrEmpty(token))
                {
                    return AuthenticateResult.Skip();
                }
            }
            Logger.LogDebug($"Obtained Authorization Token = {token}");

The next step is to validate the token and decode the result. If the service is running inside of Azure App Service, then the validation has already been done for me and I only need to decode the token. If I am running locally, then I should validate the token. The signing key for the JWT is encoded in the WEBSITE_AUTH_SIGNING_KEY environment variable. Theoretically, the WEBSITE_AUTH_SIGNING_KEY can be hex encoded or base-64 encoded; it will be hex-encoded the majority of the time. Using the configuration provider from the last post, this appears as the AzureAppService:Auth:SigningKey configuration variable, and I can place that into the options for the authentication provider during the Configure() method of Startup.cs.

So, what’s the code for validating and decoding the token? It looks like this:

            // Convert the signing key we have to something we can use
            var signingKeys = new List<SecurityKey>();
            // If the signing key is a plain-text secret, use its UTF-8 bytes
            signingKeys.Add(new SymmetricSecurityKey(Encoding.UTF8.GetBytes(Options.SigningKey)));
            // If it's base-64 encoded
            try
            {
                signingKeys.Add(new SymmetricSecurityKey(Convert.FromBase64String(Options.SigningKey)));
            } catch (FormatException) { /* The key was not base 64 */ }
            // If it's hex encoded, then decode the hex and add it
            try
            {
                if (Options.SigningKey.Length % 2 == 0)
                {
                    signingKeys.Add(new SymmetricSecurityKey(
                        Enumerable.Range(0, Options.SigningKey.Length)
                                  .Where(x => x % 2 == 0)
                                  .Select(x => Convert.ToByte(Options.SigningKey.Substring(x, 2), 16))
                                  .ToArray()
                    ));
                }
            } catch (Exception) {  /* The key was not hex-encoded */ }

            // validation parameters
            var websiteAuthEnabled = Environment.GetEnvironmentVariable("WEBSITE_AUTH_ENABLED");
            var inAzureAppService = (websiteAuthEnabled != null && websiteAuthEnabled.Equals("True", StringComparison.OrdinalIgnoreCase));
            var tokenValidationParameters = new TokenValidationParameters
            {
                // The signature must have been created by the signing key
                ValidateIssuerSigningKey = !inAzureAppService,
                IssuerSigningKeys = signingKeys,

                // The Issuer (iss) claim must match
                ValidateIssuer = true,
                ValidIssuers = Options.AllowedIssuers,

                // The Audience (aud) claim must match
                ValidateAudience = true,
                ValidAudiences = Options.AllowedAudiences,

                // Validate the token expiry
                ValidateLifetime = true,

                // If you want to allow clock drift, set that here
                ClockSkew = TimeSpan.FromSeconds(60)
            };

            // validate the token we received
            var tokenHandler = new JwtSecurityTokenHandler();
            SecurityToken validatedToken;
            ClaimsPrincipal principal;
            try
            {
                principal = tokenHandler.ValidateToken(token, tokenValidationParameters, out validatedToken);
            }
            catch (Exception ex)
            {
                Logger.LogError(101, ex, "Cannot validate JWT");
                return AuthenticateResult.Fail(ex);
            }

This only gives us a subset of the claims though. We want to swap out the principal (in this case) with the results of the call to /.auth/me that gives us the actual claims:

            // client is an HttpClient; in the real handler this would be a long-lived field
            var client = new HttpClient();
            try
            {
                client.BaseAddress = new Uri(validatedToken.Issuer);
                client.DefaultRequestHeaders.Clear();
                client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
                client.DefaultRequestHeaders.Add("X-ZUMO-AUTH", token);

                HttpResponseMessage response = await client.GetAsync("/.auth/me");
                if (response.IsSuccessStatusCode)
                {
                    var jsonContent = await response.Content.ReadAsStringAsync();
                    var userRecord = JsonConvert.DeserializeObject<List<AzureAppServiceClaims>>(jsonContent).First();

                    // Create a new ClaimsPrincipal based on the results of /.auth/me
                    List<Claim> claims = new List<Claim>();
                    foreach (var claim in userRecord.UserClaims)
                    {
                        claims.Add(new Claim(claim.Type, claim.Value));
                    }
                    claims.Add(new Claim("x-auth-provider-name", userRecord.ProviderName));
                    claims.Add(new Claim("x-auth-provider-token", userRecord.IdToken));
                    claims.Add(new Claim("x-user-id", userRecord.UserId));
                    var identity = new GenericIdentity(principal.Claims.Where(x => x.Type.Equals("stable_sid")).First().Value, Options.AuthenticationScheme);
                    identity.AddClaims(claims);
                    principal = new ClaimsPrincipal(identity);
                }
                else if (response.StatusCode == System.Net.HttpStatusCode.Unauthorized)
                {
                    return AuthenticateResult.Fail("/.auth/me says you are unauthorized");
                }
                else
                {
                    Logger.LogWarning($"/.auth/me returned status = {response.StatusCode} - skipping user claims population");
                }
            }
            catch (Exception ex)
            {
                Logger.LogWarning($"Unable to get /.auth/me user claims - skipping (ex = {ex.GetType().FullName}, msg = {ex.Message})");
            }

I can skip this phase if I want to by setting an option. The result of this code is a new identity that has all the claims and a “name” that is the same as the stable_sid value from the original token. If the /.auth/me endpoint says the token is bad, I return a failed authentication. Any other failure simply logs a warning and keeps the claims from the validated JWT.

I could use the UserId field from the user record. However, that is not guaranteed to be stable as users can and do change the email address on their social accounts. I may update this class in the future to use stable_sid by default but allow the developer to change it if they desire.
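
If I do, the option might look something like this sketch (hypothetical – this property does not exist in the current implementation):

// A possible addition to AzureAppServiceAuthenticationOptions: the claim
// used as the identity name, defaulting to the stable security ID.
public string NameClaimType { get; set; } = "stable_sid";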

The final step is to create an AuthenticationTicket and return success:

            // Generate a new authentication ticket and return success
            var ticket = new AuthenticationTicket(
                principal, new AuthenticationProperties(),
                Options.AuthenticationScheme);

            return AuthenticateResult.Success(ticket);

You can check out the code on my GitHub Repository at tag p5.

Wrap Up

There is still a little bit to do. Right now, the authentication filter returns a 401 Unauthorized if you try to hit an unauthorized page. This isn’t useful for a web page, but is completely suitable for a web API. It is thus “good enough” for Azure Mobile Apps. If you are using this functionality in an MVC application, then it is likely you want to set up some sort of authorization redirect to a login provider.

In the next post, I’m going to start on the work for Azure Mobile Apps.

Configuring ASP.NET Core Applications in Azure App Service

The .NET Core framework has a robust configuration system. You will normally see the following (or something akin to it) in the constructor of the Startup.cs file:

        public Startup(IHostingEnvironment env)
        {
            var builder = new ConfigurationBuilder()
                .SetBasePath(env.ContentRootPath)
                .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
                .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
                .AddEnvironmentVariables();
            Configuration = builder.Build();
        }

This version gets its settings from JSON files and then overrides those settings with values from environment variables. It seems ideal for Azure App Service because all the settings – from the app settings to the connection strings – are provided as environment variables. Unfortunately, they don’t necessarily get placed in the right place in the configuration. For example, Azure App Service has a Data Connections facility to define the various other resources that the app service uses; you can reach it via the Data Connections menu of your App Service. The common thing to do for Azure Mobile Apps is to define a connection to a SQL Azure instance called MS_TableConnectionString. However, when this is turned into an environment variable, the environment variable is called SQLAZURECONNSTR_MS_TableConnectionString, which is hardly obvious.
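
To illustrate the mapping, a data connection named MS_TableConnectionString of type SQL Azure shows up in the environment like this (connection string abridged):

SQLAZURECONNSTR_MS_TableConnectionString=Data Source=tcp:myserver.database.windows.net,1433;Initial Catalog=mydb;...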

Fortunately, the ASP.NET Core configuration library is extensible. What I am going to do is develop an extension to the configuration library for handling these data connections. When developing locally, you will be able to add an appsettings.Development.json file with the following contents:

{
    "ConnectionStrings": {
        "MS_TableConnectionString": "my-connection-string"
    },
    "Data": {
        "MS_TableConnectionString": {
            "Type": "SQLAZURE",
            "ConnectionString": "my-connection-string"
        }
    }
}

When in production, this configuration is produced by the Azure App Service. When in development (and running locally), you will want to produce this JSON file yourself. Alternatively, you can adjust the launchSettings.json file to add the appropriate environment variable for local running, as shown in the sketch below.
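
For the latter approach, a minimal Properties\launchSettings.json profile might look like this (the profile name is whatever your project uses):

{
    "profiles": {
        "ExampleServer": {
            "commandName": "Project",
            "environmentVariables": {
                "SQLAZURECONNSTR_MS_TableConnectionString": "my-connection-string"
            }
        }
    }
}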

To start, here is what our Startup.cs constructor will look like now:

        public Startup(IHostingEnvironment env)
        {
            var builder = new ConfigurationBuilder()
                .SetBasePath(env.ContentRootPath)
                .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
                .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
                .AddAzureAppServiceDataConnections()
                .AddEnvironmentVariables();
            Configuration = builder.Build();
        }

Configuration Builder Extensions

In order to wire our configuration provider into the configuration framework, I need to provide an extension method to the configuration builder. This is located in Extensions\AzureAppServiceConfigurationBuilderExtensions.cs:

using Microsoft.Extensions.Configuration;

namespace ExampleServer.Extensions
{
    public static class AzureAppServiceConfigurationBuilderExtensions
    {
        public static IConfigurationBuilder AddAzureAppServiceDataConnections(this IConfigurationBuilder builder)
        {
            return builder.Add(new AzureAppServiceDataConnectionsSource());
        }
    }
}

This is a fairly standard method of hooking extensions into extensible builder-type objects.

The Configuration Source and Provider

To actually provide the configuration source, I need to write two classes – a source, which is relatively simple, and a provider, which does most of the actual work. Fortunately, I can extend other classes within the configuration framework to ease the code I have to write. Let’s start with the easy one – the source. This is the object that is added to the configuration builder (located in Extensions\AzureAppServiceDataConnectionsSource.cs):

using Microsoft.Extensions.Configuration;
using System;

namespace ExampleServer.Extensions
{
    public class AzureAppServiceDataConnectionsSource : IConfigurationSource
    {
        public IConfigurationProvider Build(IConfigurationBuilder builder)
        {
            return new AzureAppServiceDataConnectionsProvider(Environment.GetEnvironmentVariables());
        }
    }
}

I’m passing the current environment into my provider because I want to be able to mock an environment later on for testing purposes. The configuration source (which is linked into the configuration builder) returns a configuration provider. The configuration provider is where the work is done (located in Extensions\AzureAppServiceDataConnectionsProvider.cs):

using Microsoft.Extensions.Configuration;
using System.Collections;
using System.Text.RegularExpressions;

namespace ExampleServer.Extensions
{
    internal class AzureAppServiceDataConnectionsProvider : ConfigurationProvider
    {
        /// <summary>
        /// The environment (key-value pairs of strings) that we are using to generate the configuration
        /// </summary>
        private IDictionary environment;

        /// <summary>
        /// The regular expression used to match the key in the environment for Data Connections.
        /// </summary>
        private Regex MagicRegex = new Regex(@"^([A-Z]+)CONNSTR_(.+)$");

        public AzureAppServiceDataConnectionsProvider(IDictionary environment)
        {
            this.environment = environment;
        }

        /// <summary>
        /// Loads the appropriate settings into the configuration.  The Data object is provided for us
        /// by the ConfigurationProvider
        /// </summary>
        /// <seealso cref="Microsoft.Extensions.Configuration.ConfigurationProvider"/>
        public override void Load()
        {
            foreach (string key in environment.Keys)
            {
                Match m = MagicRegex.Match(key);
                if (m.Success)
                {
                    var conntype = m.Groups[1].Value;
                    var connname = m.Groups[2].Value;
                    var connstr = environment[key] as string;

                    Data[$"Data:{connname}:Type"] = conntype;
                    Data[$"Data:{connname}:ConnectionString"] = connstr;
                    Data[$"ConnectionStrings:{connname}"] = connstr;
                }
            }
        }
    }
}

The bulk of the work is done within the Load() method. This cycles through each environment variable. If it matches the pattern we are looking for, it extracts the pieces of data we need and puts them in the Data object. The Data object is provided by the ConfigurationProvider, and all the other lifecycle methods are handled for me by the ConfigurationProvider class.
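
As a sketch of why injecting the environment matters, here is how a unit test could drive the provider with a hand-built environment instead of the real one (assuming xUnit, and that the test can see the internal class, e.g. via InternalsVisibleTo; TryGet() is inherited from ConfigurationProvider):

[Fact]
public void Load_ParsesDataConnectionsFromEnvironment()
{
    // A fake environment - no Azure App Service required
    var environment = new System.Collections.Hashtable
    {
        ["SQLAZURECONNSTR_MS_TableConnectionString"] = "my-connection-string"
    };
    var provider = new AzureAppServiceDataConnectionsProvider(environment);

    provider.Load();

    Assert.True(provider.TryGet("Data:MS_TableConnectionString:Type", out string connType));
    Assert.Equal("SQLAZURE", connType);
    Assert.True(provider.TryGet("ConnectionStrings:MS_TableConnectionString", out string connStr));
    Assert.Equal("my-connection-string", connStr);
}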

Integrating with Dependency Injection

I want to provide a singleton service to my application that provides access to the data configuration. This service will be injected into any code I want via dependency injection. To do this, I need an interface:

using System.Collections.Generic;

namespace ExampleServer.Services
{
    public interface IDataConfiguration
    {
        IEnumerable<DataConnectionSettings> GetDataConnections();
    }
}

This refers to a model (and I produce a list of these models later):

namespace ExampleServer.Services
{
    public class DataConnectionSettings
    {
        public string Name { get; set; }
        public string Type { get; set; }
        public string ConnectionString { get; set; }
    }
}

I also need a concrete implementation of the IDataConfiguration interface to act as the service:

using Microsoft.Extensions.Configuration;
using System.Collections.Generic;

namespace ExampleServer.Services
{
    public class DataConfiguration : IDataConfiguration
    {
        List<DataConnectionSettings> providerSettings;

        public DataConfiguration(IConfiguration configuration)
        {
            providerSettings = new List<DataConnectionSettings>();
            foreach (var section in configuration.GetChildren())
            {
                providerSettings.Add(new DataConnectionSettings
                {
                    Name = section.Key,
                    Type = section["Type"],
                    ConnectionString = section["ConnectionString"]
                });
            }
        }

        public IEnumerable<DataConnectionSettings> GetDataConnections()
        {
            return providerSettings;
        }
    }
}

Now that I have defined the interface and concrete implementation of the service, I can wire it into the dependency injection system in the ConfigureServices() method within Startup.cs:

        public void ConfigureServices(IServiceCollection services)
        {
            services.AddSingleton<IDataConfiguration>(new DataConfiguration(Configuration.GetSection("Data")));

            // Add framework services.
            services.AddMvc();
        }

I can now write my test page. The main work is in the HomeController.cs:

        public IActionResult DataSettings([FromServices] IDataConfiguration dataConfiguration)
        {
            ViewBag.Data = dataConfiguration.GetDataConnections();
            return View();
        }

The parameter to the DataSettings method will be provided from my service via dependency injection. Now all I have to do is write the view for it:

<h1>Data Settings</h1>

<div class="row">
    <table class="table table-striped">
        <tr>
            <th>Provider Name</th>
            <th>Type</th>
            <th>Connection String</th>
        </tr>
        <tbody>
            @foreach (var item in ViewBag.Data)
            {
                <tr>
                    <td>@item.Name</td>
                    <td>@item.Type</td>
                    <td>@item.ConnectionString</td>
                </tr>
            }
        </tbody>
    </table>
</div>

Publish this to your Azure App Service. Don’t forget to link a SQL Azure instance via the Data Connections menu. Then browse to https://yoursite.azurewebsites.net/Home/DataSettings – you should see your SQL Azure instance listed on the page.

Now for the good part…

ASP.NET Core configuration already does this for you. I didn’t find this out until AFTER I had written all this code. You can just add .AddEnvironmentVariables() to your configuration builder and the rename comes along for free. Specifically:

  • ConnectionStrings:MS_TableConnectionString will point to your connection string
  • ConnectionStrings:MS_TableConnectionString_ProviderName will point to the provider class name
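
A minimal sketch of consuming that built-in mapping (GetConnectionString() is just shorthand for reading ConnectionStrings:{name}):

var connectionString = Configuration.GetConnectionString("MS_TableConnectionString");
var providerName = Configuration["ConnectionStrings:MS_TableConnectionString_ProviderName"];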

So you don’t really need to do anything at all. However, this was a great reference exercise for producing the next configuration provider, as the sample code is really easy to follow.

The code for this blog post is in my GitHub Repository at tag p3. In the next post, I’ll tackle authentication by linking Azure App Service Authentication with ASP.NET Core Identity.

Running ASP.NET Core applications in Azure App Service

One of the things I get asked about semi-regularly is when Azure Mobile Apps is going to support .NET Core. It’s a logical progression for most people and many ASP.NET developers are planning future web sites to run on ASP.NET Core. Also, the ASP.NET Core programming model makes a lot more sense (at least to me) than the older ASP.NET applications. Finally, we have an issue open on the subject. So, what is holding us back? Well, there are a bunch of things. Some have been solved already and some need a lot of work. In the coming weeks, I’m going to be writing about the various pieces that need to be in place before we can say “Azure Mobile Apps is there”.

Of course, if you want a mobile backend, you can always hop over to Visual Studio Mobile Center. This provides a mobile backend for you without having to write any code. (Full disclosure: I’m now a program manager on that team, so I may be slightly biased). However, if you are thinking ASP.NET Core, then you likely want to write the code.

Let’s get started with something that does exist. How does one run ASP.NET Core applications on Azure App Service? Well, there are two methods. The first involves uploading your application to Azure App Service via the Visual Studio Publish… dialog or via Continuous Integration from GitHub, Visual Studio Team Services or even Dropbox. It’s a relatively easy method and one I would recommend. There is a gotcha, which I’ll discuss below.

The second method uses a Docker container to house the code that is then deployed onto a Linux App Service. This is still in preview (as of writing), so I can’t recommend this for production workloads.

Create a New ASP.NET Core Application

Let’s say you opened up Visual Studio 2017 (RC right now) and created a brand new ASP.NET Core MVC application – the basis for my research here.

  • Open up Visual Studio 2017 RC.
  • Select File > New > Project…
  • Select the ASP.NET Core Web Application (.NET Core).
    • Fill in an appropriate name for the solution and project, just as normal.
    • Click OK to create the project.
  • Select ASP.NET Core 1.1 from the framework drop-down (it will say ASP.NET Core 1.0 initially)
  • Select Web Application in the ASP.NET Core 1.1 Templates selection.
  • Click OK.

I called my solution netcore-server and the project ExampleServer. At this point, Visual Studio will go off and create a project for you. You can see what it creates easily enough, but I’ve checked it into my GitHub repository at tag p0.

I’m not going to cover ASP.NET Core programming too much in this series. You can read the definitive guide on their documentation site, and I would recommend you start by understanding ASP.NET Core programming before getting into the changes here.

Go ahead and run the service (either as a Kestrel service or an IIS Express service – it works with both). This is just to make sure that you have a working site.

Add Logging to your App

Logging is one of those central things that is needed in any application. There are so many things you can’t do (including diagnose issues) if you don’t have appropriate logging. Fortunately, ASP.NET Core has logging built-in. Let’s add some to the Controllers\HomeController.cs file:

using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;

namespace ExampleServer.Controllers
{
    public class HomeController : Controller
    {
        private ILogger logger;

        public HomeController(ILoggerFactory loggerFactory)
        {
            logger = loggerFactory.CreateLogger(this.GetType().FullName);
        }

        public IActionResult Index()
        {
            logger.LogInformation("In Index of the HomeController");
            return View();
        }
        // Rest of the file here

I’ve added the logger factory via dependency injection, then logged a message whenever the Index file is served in the home controller. If you run this version of the code (available in the GitHub repository at tag p1), you will see the following in your Visual Studio output window:

[Screenshot: 20170216-01 – the Visual Studio Output window showing the log message]

It’s swamped by the Application Insights data, but you can clearly see the informational message there.

Deploy your App to Azure App Service

Publishing to Azure App Service is relatively simple – right-click on the project and select Publish… to kick off the process. The layout of the windows has changed from Visual Studio 2015, but it’s the same process. You can create a new App Service or use an existing one. Once you have answered all the questions, your site will be published. Eventually, your site will be displayed in your web browser.

Turn on Diagnostic Logging

  • Click View > Server Explorer to add the server explorer to your work space.
  • Expand the Azure node, the App Service node, and finally your resource group node.
  • Right-click the app service and select View Settings
  • Turn on logging and set the logging level to verbose:

[Screenshot: 20170216-02 – diagnostic logging settings set to verbose]

  • Click Save to save the settings (the site will restart).
  • Right-click the app service in the server explorer again and this time select View Streaming Logs
  • Wait until you see that you are connected to the log streaming service (in the Output window)

Now refresh your browser so that it reloads the index page again. Note how you see the access logs (which files have been requested) but the log message we put into the code is not there.

The Problem and Solution

The problem is, hopefully, obvious. ASP.NET Core does not by default feed logs to Azure App Service. We need to enable that feature in the .NET Core host. We do this in the Program.cs file:

using System.IO;
using Microsoft.AspNetCore.Hosting;

namespace ExampleServer
{
    public class Program
    {
        public static void Main(string[] args)
        {
            var host = new WebHostBuilder()
                .UseKestrel()
                .UseContentRoot(Directory.GetCurrentDirectory())
                .UseIISIntegration()
                .UseStartup<Startup>()
                .UseApplicationInsights()
                .UseAzureAppServices()
                .Build();

            host.Run();
        }
    }
}

You will also need to add the Microsoft.AspNetCore.AzureAppServicesIntegration package from NuGet for this to work. Once you have done this change, you can deploy this and watch the logs again:

[Screenshot: 20170216-03 – the streaming logs showing the application log message]

If you have followed the instructions, you will need to switch the Output window back to the Azure logs. The output window will have been switched to Build during the publish process.

Adjusting the WebHostBuilder for the environment

It’s likely that you won’t want Application Insights and Azure App Services logging except when you are running on Azure App Service. There are a number of environment variables that Azure App Service uses and you can leverage these as well. My favorites are REGION_NAME (which indicates which Azure region your service is running in) and WEBSITE_OWNER_NAME (which is a combination of a bunch of things). You can test for these and adjust the pipeline accordingly:

using Microsoft.AspNetCore.Hosting;
using System;
using System.IO;

namespace ExampleServer
{
    public class Program
    {
        public static void Main(string[] args)
        {
            var hostBuilder = new WebHostBuilder()
                .UseKestrel()
                .UseContentRoot(Directory.GetCurrentDirectory())
                .UseIISIntegration()
                .UseStartup<Startup>()
                .UseApplicationInsights();

            var regionName = Environment.GetEnvironmentVariable("REGION_NAME");
            if (regionName != null)
            {
                hostBuilder.UseAzureAppServices();
            }
                
            var host = hostBuilder.Build();

            host.Run();
        }
    }
}

You can download this code at my GitHub repository at tag p2.

The Latest Wrinkle (Updated 4/10/2017)

The latest editions of the WindowsAzure.Storage package have breaking changes, so they can’t be included until a major release. In the interim, you will need to edit your .csproj file and add the following:

<PackageTargetFallback>$(PackageTargetFallback);portable-net40+sl5+win8+wp8+wpa81;portable-net45+win8+wp8+wpa81</PackageTargetFallback>

Writing HTTP CRUD in Azure Functions

Over the last two posts, I’ve introduced writing Azure Functions locally and deploying them to the cloud. It’s time to do something useful with them. In this post, I’m going to introduce how to write a basic HTTP router. If you follow my blog and other work, you’ll see where this is going pretty quickly. If you are only interested in Azure Functions, you’ll have to wait a bit to see how this evolves.

Create a new Azure Function

I started this post by installing the latest azure-functions-cli package:

npm i -g azure-functions-cli

Then I created a new Azure Function App:

mkdir dynamic-tables
cd dynamic-tables
func new

Finally, I created a function called todoitem:

[Screenshot: dec15-01 – creating the todoitem function]

Customize the HTTP Route Prefix

By default, any HTTP trigger is bound to /api/{function}, where {function} is the name of your function. I want full control over where my function exists, so I’m going to fix this in the host.json file:

{
    "id":"6ada7ae64e8a496c88617b7ab6682810",
    "http": {
        "routePrefix": ""
    }
}

The routePrefix is the important thing here. The value is normally “/api”, but I’ve cleared it. That means I can put my routes anywhere.

Set up the Function Bindings

In the todoitem directory are two files. The first, function.json, describes the bindings. Here is the version for my function:

{
    "disabled": false,
    "bindings": [
        {
            "name": "req",
            "type": "httpTrigger",
            "direction": "in",
            "authLevel": "function",
            "methods": [ "GET", "POST", "PATCH", "PUT", "DELETE" ],
            "route": "tables/todoitem/{id:alpha?}"
        },
        {
            "type": "http",
            "direction": "out",
            "name": "res"
        }
    ]
}

This is going to get triggered by an HTTP trigger, and will accept five methods: GET, POST, PUT, PATCH and DELETE. In addition, I’ve defined a route that contains an optional string for an id. I can, for example, do GET /tables/todoitem/foo and this will be accepted. On the outbound side, I want to respond to requests, so I’ve got a response object. The HTTP trigger for Node is modelled after ExpressJS, so the req and res objects are mostly equivalent to the ExpressJS Request and Response objects.

Write the Code

The code for this function is in todoitem/index.js:

/**
 * Routes the request to the table controller to the correct method.
 *
 * @param {Function.Context} context - the table controller context
 * @param {Express.Request} req - the actual request
 */
function tableRouter(context, req) {
    var res = context.res;
    var id = context.bindings.id;

    switch (req.method) {
        case 'GET':
            if (id) {
                getOneItem(req, res, id);
            } else {
                getAllItems(req, res);
            }
            break;

        case 'POST':
            insertItem(req, res);
            break;

        case 'PATCH':
            patchItem(req, res, id);
            break;

        case 'PUT':
            replaceItem(req, res, id);
            break;

        case 'DELETE':
            deleteItem(req, res, id);
            break;

        default:
            res.status(405).json({ error: "Operation not supported", message: `Method ${req.method} not supported`})
    }
}

function getOneItem(req, res, id) {
    res.status(200).json({ id: id, message: "getOne" });
}

function getAllItems(req, res) {
    res.status(200).json({ query: req.query, message: "getAll" });
}

function insertItem(req, res) {
    res.status(200).json({ body: req.body, message: "insert"});
}

function patchItem(req, res, id) {
    res.status(405).json({ error: "Not Supported", message: "PATCH operations are not supported" });
}

function replaceItem(req, res, id) {
    res.status(200).json({ body: req.body, id: id, message: "replace" });
}

function deleteItem(req, res, id) {
    res.status(200).json({ id: id, message: "delete" });
}

module.exports = tableRouter;

I use a tableRouter method (and that is what our function calls) to route the HTTP call to the right CRUD method. It’s up to me to put whatever CRUD code I need to execute and respond to the request in those additional methods. In this case, I’m just returning a 200 status (OK) and some JSON data. One key piece is differentiating between a GET /tables/todoitem and a GET /tables/todoitem/foo. The former is meant to return all records and the latter is meant to return a single record. If the id is set, we call the single-record GET method; if not, we call the multiple-record GET method.

What’s the difference between PATCH and PUT? In REST semantics, PATCH is used when you want to do a partial update of a record; PUT is used when you want to send the full record. This CRUD recipe uses both, but you may decide to use one or the other.
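
As an illustration (the record shape here is hypothetical), a PATCH sends only the changed fields while a PUT sends the whole record:

// PATCH /tables/todoitem/foo - partial update
{ "complete": true }

// PUT /tables/todoitem/foo - full replacement
{ "id": "foo", "text": "Buy milk", "complete": true }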

Running Locally

As with the prior blog post, you can run func run test-func --debug to start the backend and get ready for the debugger. You can then use Postman to send requests to your backend. (Note: Don’t use func run todoitem --debug – this will cause a crash at the moment!). You’ll get something akin to the following:

[Screenshot: dec15-02 – Postman response from the local function]

That’s it for today. I’ll be progressing on this project for a while, so expect more information as I go along!

Creating and Debugging Azure Functions Locally

I’ve written about Azure Functions before as part of my Azure Mobile Apps series. Azure Functions is a great feature of the Azure platform that allows you to run custom code in the cloud in a “serverless” manner. In this context, “serverless” doesn’t mean “without a server”. Rather, it means that the server is abstracted away from you. In my prior blog post, I walked through creating an Azure Function using the web UI, which is a problem when you want to check your Azure Functions in to source code and deploy them as part of your application.

UPDATE: The Azure App Service team has released a blog post on the new CLI tools.

This is the first in a series of blog posts. I am going to walk through a process by which you can write and debug Azure Functions on your Windows 10 PC, then check the code into your favorite source control system and deploy it in a controlled manner. In short – real world.

Getting Ready

Before you start, let’s get the big elephant out of the way. The actual runtime is Windows-only. Sorry, Mac users – the runtime relies on the 4.x .NET Framework, and you don’t have that, so boot into Windows 10. You can still create functions locally on a Mac, but you will have to publish them to the cloud to run them. There is no local runtime on a Mac.

To get your workstation prepped, you will need the following:

  • Node
  • Visual Studio Code
  • Azure Functions Runtime

Node is relatively easy. Download the Node package from nodejs.org and install it as you would any other package. You should be able to run the node and npm programs from your command line before you continue. Visual Studio Code is similarly easily downloaded and installed. You can download additional extensions if you like. If you write functions in C#, I would definitely download the C# extension.

The final bit is the Azure Functions Runtime. This is a set of tools produced by the Azure Functions team to create and run Functions locally; it is based on Yeoman. To install:

npm install -g yo generator-azurefunctions azure-functions-cli

WARNING: There is a third-party module called azure-functions which is not the same thing at all. Make sure you install the right thing!

After installing, the func command should be available:

[Screenshot: func-1 – the func command running at the command line]

Once you have these three pieces, you are ready to start working on Azure Functions locally.

Creating an Azure Functions Project

Creating an Azure Functions project uses the func command:

mkdir my-func-application
cd my-func-application
func init

Note that func init creates a git repository as well – one less thing to do! Our next step is to create a new function. The Azure Functions CLI uses Yeoman underneath, which we can call directly using yo azurefunctions:

[Screenshot: func-2 – creating a new function with the Yeoman generator]

You can create as many functions as you want in a single function app. In the example above, I created a simple HTTP triggered function written in JavaScript. This can be used as a custom API in a mobile app, for example. The code for my trigger is stored in test-func\index.js:

module.exports = function(context, req) {
    context.log('Node.js HTTP trigger function processed a request. RequestUri=%s', req.originalUrl);

    if (req.query.name || (req.body && req.body.name)) {
        context.res = {
            // status: 200, /* Defaults to 200 */
            body: "Hello " + (req.query.name || req.body.name)
        };
    }
    else {
        context.res = {
            status: 400,
            body: "Please pass a name on the query string or in the request body"
        };
    }
    context.done();
};

and the binding information is in test-func\function.json:

{
    "disabled": false,
    "bindings": [
        {
            "authLevel": "function",
            "type": "httpTrigger",
            "direction": "in",
            "name": "req"
        },
        {
            "type": "http",
            "direction": "out",
            "name": "res"
        }
    ]
}

Running the function

To run the Azure Functions Runtime for your function app, use func run test-func.

The runtime is kicked off first. This monitors the function app for changes, so any changes you do in the code will be reflected as soon as you save the file. If you are running something that can be triggered manually (like a cron job), then it will be run immediately. For my HTTP trigger, I need to hit the HTTP endpoint – in this case, http://localhost:7071/api/test-func.

Note that the runtime is running with the version of Node that you installed and it is running on your local machine. Yet it still can be triggered by whatever you set up. If you set up a blob trigger from a storage account, then that will trigger. You have to set up the environment properly. Remember that App Service (and Functions) app settings appear as environment variables to the runtime. When you run locally, you will need to manually set up the app settings by setting an environment variable of the same name. Do this before you use func run for the first time.
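
For example (PowerShell shown; the setting name is hypothetical – use whatever app setting your trigger reads, such as the connection string named in your function.json):

$env:MyStorageConnection = "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=..."
func run test-func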

Debugging the function

Running the function is great, but I want to debug the function – set a breakpoint, inspect the internal state of the function, etc. This can be done easily in Visual Studio Code as the IDE has an integrated Node debugger.

  • Run func run test-func --debug
  • Go into the debugger within Visual Studio Code, and set a breakpoint
  • Switch to the Debugger tab and hit Start (or F5)

    [Screenshot: func-3 – setting a breakpoint and starting the debugger]

  • Trigger the function. Since I have a HTTP triggered function, I’m using Postman for this:

    [Screenshot: func-4 – triggering the function from Postman]

  • Note that the function is called and I can inspect the internals and step through the code:

    [Screenshot: func-5 – stepping through the code at the breakpoint]

You can now step through the code, inspect local variables and generally debug your script.

Next up

In the next article, I’ll discuss programmatically creating an Azure Function app and deploying our function through some of the deployment mechanisms we have available to us in Azure App Service.

30 Days of Zumo.v2 (Azure Mobile Apps): Day 1 – Setup

I’ve committed myself to learning as much Azure Mobile Apps as possible. Internally, this project is called Zumo (Azure Mobile, glued together) and several sites have shown this name with reference to Azure Mobile Services. Azure Mobile Apps is Zumo v2. It’s a server SDK that runs on top of a web site. This has some really interesting benefits – you can use all the features of Azure Web Apps (staging slots, scaling, backup and restore, authentication) and use the same API within the Web App and Mobile App. Each of the 30 days of code will cover one topic, and cover it in an hour.

To get started, you’ve got to have an Azure subscription. Sure, you could use TryAppService to create a backend, but that only lasts for an hour and it’s very restrictive – you don’t get to alter the backend code. If you haven’t already, there is a free trial that lasts 30 days. Once you get beyond the 30 days, you can run a development site for free.

Day 1: Setup

Day 1 is all about setup. I am going to do all my development on a Mac. Why not a PC with Visual Studio? Visual Studio is a very specific environment and doesn’t lend itself to iOS development, and I want the experience to be as raw as possible and entirely free. Developing on the Mac tends to be a little more painful when you are used to an integrated environment like Visual Studio. Visual Studio, however, hides a lot of details from you. When things “just happen”, you tend not to learn what’s behind them, and debugging capabilities get lost. I want to avoid that, so I’ll be using the command line, the Azure Portal and a simple text editor.

What else do I need?

A Text Editor

What’s your favorite code editor? On a Mac, mine is Atom. There are a bunch of decent plugins for Atom and I’ve covered some of them in the past. I’ll probably do another post at some point about my favorite Atom plug-ins for JavaScript development. I also like Visual Studio Code, which also has some great plug-ins, and I’ve heard good things about Sublime Text as well. All three of these editors are available on Mac and Windows.

I’m not advocating for a specific text editor here. There are things you definitely want: a monospace font and syntax highlighting would be enough. Just pick your favorite.

One thing to avoid is something “heavy”. Eclipse, for example, falls into this “heavy” camp. It’s marginal on the functionality improvements, yet its startup cost and memory utilization make it distinctly second-rate for what I want to do – edit files.

Google Chrome

Yes, I’m being specific here. Google Chrome gets its developer tools right. In addition, you will want a few plugins. The most notable is Postman – a mechanism for executing REST calls and viewing the output.

Git

Again, I’m being specific here. There are several git tools, but they all implement git. Don’t try to get something else. I am going to be putting things on GitHub, which uses git underneath. There is Git for Mac and Git for Windows. I’ll use these tools for storing my code in a source code repository on GitHub and for querying Azure App Service.

A Command Line

If you are on a Mac, there is a command line under Applications > Utilities. If you are on a PC, then you have the PowerShell prompt, although I prefer ConEmu. Since I’m not going to be using Visual Studio, I want somewhere to execute commands.

Xcode (Mac Only)

If you are developing iOS applications, then the compilation step must use Xcode and must run on a Mac. Android applications don’t care. Windows applications don’t require a specific compiler, but you have to compile on Windows – when I get to that, I’ll switch over to Windows and use Visual Studio. There will be some points at which you will be asked to start Xcode and accept the license, so you might as well download it now. You can do this from the Apple App Store. Note, however, it’s a 1+ GB download, so it takes some time.

An FTP Client

I like a graphical ftp client for this. It allows me to browse the App Service behind the scenes. You can find a good list on LifeHacker. Personally, I use Cyberduck for this.

After the software, you will also need a GitHub Id – I’m going to store my code on GitHub, so there will be a repository on GitHub.

Let’s get started

The Azure Documentation already adequately covers creating an Azure Mobile App. I recommend following that tutorial to get your Azure Mobile App created and then hooked up to a SQL Azure instance. You can follow any of the tutorials – you will end up with a mobile site and a client that implements a simple TodoItem hooked up to the mobile backend.

One word on pricing and what sort of app pricing plan you should choose. There are several types of site sizing and they offer some interesting choices. Here is the breakdown:

[Screenshot: App Service pricing tiers]

Note that Basic does not offer all the features of Standard and Premium. In fact, Standard offers many features that you should be interested in:

  • Auto-scaling – while not an issue in development, this will be an issue in production applications. You want your application to grow as demand grows, automatically. Basic only scales manually, and only to a limited degree.
  • Staging Slots – this is an awesome feature that I’ll discuss in a later blog post. One of the things this allows you to do is to upload a new site, test it out and then swap out the production version, all with zero down time.
  • Backups – we like backups. They are important. Standard adds a daily backup.

Premium adds more disk space, BizTalk services, more staging slots and more backups. Most developers can get away with the Basic edition, since developers only need limited scalability (to test what happens when the service does scale) and don’t need staging slots. There are two other tiers – Free and Shared:

[Screenshot: the Free and Shared App Service tiers]

Note the lack of features. Free and Shared are great if you are just learning but you will find them painful to use. Spend some of your Azure free trial credits on a minimum of Basic.

Note that I’m not saying anything about the options available on SQL Azure here. The pricing when you create an App Service has nothing to do with SQL Azure. To get your effective pricing, you need to add your App Service plan to your SQL Azure plan:

[Screenshot: SQL Azure pricing tiers]

For most normal learning activities, you can use a B-Basic plan for your SQL Azure. If you want to try out Georeplication or you have bigger data needs, you can use an S0-Standard. The pricing goes up from there. As with App Service, there is a Premium offering that adds Active Georeplication – good for those mission critical revenue-on-the-line type of apps.

Want a completely free version of this? Make sure you pick an F1-Free App Service plan and an F-Free SQL Azure plan. Want to learn everything there is to learn about the platform without restrictions? Pick an S1-Standard App Service plan and an S0-Standard SQL Azure plan. You can upgrade your plan at any point, so this allows you to start small and move up in cost as you need to.

If you are learning or developing and do pick a standard plan, make sure you shut down the App Service at the end of your activity. This will save you some cash at night when you aren’t using the service.

Setting up for Development

So, I’ve got my site all set up. I also have a nice iOS Todo app that allows me to add TodoItems (I used the Swift version, since I’m mildly interested in learning the language), but I have not been shown any of the server code as yet. I want to set up something else here – Continuous Deployment. To configure continuous deployment, I’m going to do the following:

  • Create a GitHub repository
  • Clone the GitHub repository onto your local machine
  • Download the source code for the site I created
  • Check in the source code for the site into GitHub
  • Create a branch on GitHub for Azure deployment
  • Link the site to deploy directly from GitHub

This is a cool feature. Once I’m set up, deployment happens automatically. When I push changes to GitHub, the Azure Mobile App will automatically pick them up and deploy them for me. Here is how to do it:

1. Create a GitHub Repository.

  1. Log onto GitHub
  2. Click on the + in the top right corner of the web browser and select New Repository
  3. Fill in the form – I called my repository 30-days-of-zumo-v2
  4. Click on Create Repository

2. Clone the repository

  1. Open up your command prompt
  2. Pick a place to hold your GitHub repositories. Mine is ~/github – if you need to make the directory, then do so.
  3. Change directory to that place: cd ~/github, for example
  4. Type in:
git clone https://github.com/adrianhall/30-days-of-zumo-v2

You will replace the URL with the URL of your repository – this was displayed when you created the repository.

3. Download the Azure Website

First step is to set your deployment credentials. Log onto the Azure Portal, select your site then select All Settings. Find the Deployment Credentials option, then fill in the form and click on Save. I like to use my email address with special characters replaced by underscores for my username – this ensures it is unique. Make your password very hard to guess. Use a password manager if you need to.

Let’s get the requisite information for an ftp download:

  • The server is listed on the starting blade of your site, but will be something like ftp://waws-prod-bay-043.ftp.azurewebsites.windows.net
  • Your username is SITENAME\USERNAME. The SITENAME is the name of your site. The USERNAME is what you set in the deployment credentials. This is listed on the starting blade as well, right above the FTP Hostname.
  • Your password is whatever you set in the deployment credentials.

Open up Cyberduck, enter the information (uncheck the anonymous checkbox) and click on Connect. You can use the ftp or ftps protocol – I prefer ftps, since information (including your deployment credentials) is transmitted with SSL encryption.

[Screenshot: the Cyberduck connection settings]

You will now be able to see the site. Expand the site node then the wwwroot node. Highlight everything in the wwwroot node, right-click and select Download to…. Put the files in the directory you cloned from GitHub.

4. Check in the code for your site

Before you go all “git add” on this stuff, there is some cleanup to do. Right now, the site is set up to use Easy Tables and Easy APIs – there are some extra files that you don’t really need. That’s because we are going to act like developers and keep our files checked into source code control. That really means we can’t use Easy Tables and Easy APIs. Those facilities are great for simple sites and I highly recommend you check them out. But you will leave them behind once you get serious about developing a backend – you will write code and check it into a source code repository.

Let’s start by removing the files we don’t need because we aren’t going to be using Easy Tables or Easy APIs:

  • sentinel
  • tables/TodoItem.json

We’ll also remove the files that are handled by the server or restored during deployment:

  • node_modules
  • iisnode.yml
  • web.config

You can do this within your GUI or on the command line with the rm command. On Windows, use rimraf:

npm install rimraf -g
rimraf node_modules

Finally, add a .gitignore file – go to https://gitignore.io, enter Node in the box and click on Generate. This will generate a suitable .gitignore file that you can cut and paste into an editor.

You are now ready to check in the initial version. Make sure you are in the project directory, then type:

git add .
git commit -m "Initial Checkin"
git push -u origin master

This will push everything up to the master branch on GitHub.

5. Create an Azure deployment branch

You can do this from the command line as well. Make sure you are still in the project directory, then type:

git checkout -b azure
git push -u origin azure

This will create an azure branch for you to merge into (more on that later), then push it up to GitHub.

6. Link the azure branch to continuous deployment

Log back on to the Azure portal and select your site. Click on All Settings, then click on Deployment Source. Select GitHub as your deployment source. You will probably have to enter your GitHub credentials in order to proceed. Eventually, you will see something like this:

[Screenshot: selecting the GitHub repository and azure branch as the deployment source]

Pick your project (it’s the GitHub repository you created) and the azure branch. Once done, click on OK. Finally, click on Sync.

Something completely magical will happen now. Well, not so magical really – that comes later. The Azure system will go off and fetch the project. It will install all the dependencies of the project (listed in the package.json file) and then deploy the results. The magical piece happens later – whenever you push a new version to the azure branch, it will automatically be deployed. You’ll be able to see it happen.

This post went a little longer than I planned, but I’m now all set up for continuous development on Azure. In the next post, I’ll look at upgrading the Node.js version and handle the check-in and merge mechanism. In addition, I’ll look at a local development cycle (rather than deploying) using the SQL Azure instance I’ve set up.

If you want to follow along on the code, I’ve set up a new GitHub repository – enjoy!

Using Azure Mobile Apps from a React Redux App

I did some work on my React application in my last article – specifically handling authentication, with Auth0 providing the UI and then swapping the token with Azure Mobile Apps for a ZUMO token. I’m now all set to do some CRUD operations within my React Redux application. There is some basic Redux stuff in here, so if you want a refresher, check out my prior Redux articles:

Refreshing Data

My first stop is “how do I get the entire table that I can see from Azure Mobile Apps?” This requires multiple actions in a React Redux world. Let’s first of all look at the action creators:

import constants from '../constants/tasks';
import zumo from '../../zumo';

/**
 * Internal Redux Action Creator: update the isLoading flag
 * @param {boolean} loading the isLoading flag
 * @returns {Object} redux-action
 */
function updateLoading(loading) {
    return {
        type: constants.UpdateLoadingFlag,
        isLoading: loading
    };
}

/**
 * Internal Redux Action Creator: replace all the tasks
 * @param {Array} tasks the new list of tasks
 * @returns {Object} redux-action
 */
function replaceTasks(tasks) {
    return {
        type: constants.ReplaceTasks,
        tasks: tasks
    };
}

/**
 * Redux Action Creator: Set the error message
 * @param {string} errorMessage the error message
 * @returns {Object} redux-action
 */
export function setErrorMessage(errorMessage) {
    return {
        type: constants.SetErrorMessage,
        errorMessage: errorMessage
    };
}

/**
 * Redux Action Creator: Refresh the task list
 * @returns {Function} redux-thunk action
 */
export function refreshTasks() {
    return (dispatch) => {
        dispatch(updateLoading(true));

        const success = (results) => {
            console.info('results = ', results);
            dispatch(replaceTasks(results));
        };

        const failure = (error) => {
            dispatch(setErrorMessage(error.message));
        };

        zumo.table.read().then(success, failure);
    };
}

Four actions for a single operation? I’ve found this is common for Redux applications that deal with backend services – you need to have several actions to implement all the code-paths. I could have gotten away with just three – an initiator, a successful completion and an error condition. However, I wanted to ensure I had flexibility. The setErrorMessage() and updateLoading() actions are generic enough to be re-used for other actions.

Two of these action creators are internal – I don’t export them, so the rest of the application never sees them. The initiator for the refresh, refreshTasks(), is the one the application at large will use. I’ve also exported setErrorMessage(), making it generic enough that an error dialog can use it to clear the error as well. Lesson learned – only export the action creators that you want the rest of the application to use.

Looking at refreshTasks(), I’m not doing any filtering. Azure Mobile Apps supports filtering on the server as well as on the client. I’d rather filter on the client in this application – it saves a round trip, and the data is never going to be big enough for filtering to be a problem. This may not be true in your application – weigh server-side against client-side filtering in terms of performance and memory usage. For comparison, a server-side filter is sketched below.
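Here is roughly what a server-side filter could look like. This is a sketch only: it assumes zumo.table is a standard Azure Mobile Apps table reference whose where() clause is translated into the OData query sent to the server, and refreshIncompleteTasks() is a name I’ve invented for illustration.

/**
 * Redux Action Creator: Refresh only the incomplete tasks (illustrative sketch)
 * @returns {Function} redux-thunk action
 */
export function refreshIncompleteTasks() {
    return (dispatch) => {
        dispatch(updateLoading(true));

        zumo.table
            .where({ complete: false })     // the filter is applied on the server
            .read()
            .then(
                (results) => { dispatch(replaceTasks(results)); },
                (error) => { dispatch(setErrorMessage(error.message)); }
            );
    };
}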

Insert, Modify and Delete Tasks

I’ve already got the actions – I just need to update them for the async server code. For example, here is the insert code:

/**
 * Redux Action Creator: Insert a new task into the cache
 * @param {Object} task the task to be inserted
 * @param {string} task.id the ID of the task
 * @param {string} task.text the description of the task
 * @param {boolean} task.complete true if the task is completed
 * @returns {Object} redux-action
 */
function insertTask(task) {
    return {
        type: constants.Create,
        task: task
    };
}

/**
 * Redux Action Creator: Create a new Task
 * @param {string} text the description of the new task
 * @returns {Function} redux-thunk action
 */
export function createTask(text) {
    return (dispatch) => {
        dispatch(updateLoading(true));

        const newTask = {
            text: text,
            complete: false
        };

        const success = (insertedItem) => {
            console.info('createTask: ', insertedItem);
            dispatch(insertTask(insertedItem));
        };

        const failure = (error) => {
            dispatch(setErrorMessage(error.message));
        };

        zumo.table.insert(newTask).then(success, failure);
    };
}

I’m reusing the updateLoading() and setErrorMessage() action creators from the refresh logic. createTask() performs the insert asynchronously, then calls the insertTask() action creator with the newly created task to update the in-memory cache (as we will see below when we come to the reducers). There are similar mechanisms for modification and deletion: I create an internal action creator to update the in-memory cache, and the exported action creator initiates the change but doesn’t update the in-memory cache until the request completes successfully. The deletion pair is sketched below.
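To illustrate that pattern, here is what the deletion pair might look like. The names removeTask() and deleteTask() are mine, not from the app; the del() call assumes the Azure Mobile Apps JavaScript client signature, and removeTask() matches the constants.Delete case in the reducer shown later.

/**
 * Internal Redux Action Creator: remove a task from the cache (hypothetical name)
 * @param {string} taskId the ID of the task to remove
 * @returns {Object} redux-action
 */
function removeTask(taskId) {
    return {
        type: constants.Delete,
        taskId: taskId
    };
}

/**
 * Redux Action Creator: Delete a task on the server, then remove it locally
 * @param {string} taskId the ID of the task to delete
 * @returns {Function} redux-thunk action
 */
export function deleteTask(taskId) {
    return (dispatch) => {
        dispatch(updateLoading(true));

        zumo.table.del({ id: taskId }).then(
            () => { dispatch(removeTask(taskId)); },
            (error) => { dispatch(setErrorMessage(error.message)); }
        );
    };
}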

I did need to do some work in my Application.jsx component to display a dialog when the error message is set:

        const onClearError = () => { return dispatch(taskActions.setErrorMessage(null)); };
        let errorDialog = <div style={{ display: 'none' }}/>;
        if (this.props.errorMessage) {
            const actions = [ <FlatButton key="cancel-dialog" label="OK" primary={true} onTouchTap={onClearError} /> ];
            errorDialog = (
                <Dialog
                    title="Error from Server"
                    actions={actions}
                    modal={true}
                    open={true}
                    onRequestClose={onClearError}
                >
                    {this.props.errorMessage}
                </Dialog>);
        }

I then place {errorDialog} in the JSX returned by my render() method.
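For context, the placement might look something like this – the surrounding markup is illustrative only, and TaskList is a hypothetical child component:

render() {
    // ... the errorDialog logic shown above goes here ...
    return (
        <div>
            {errorDialog}
            <TaskList tasks={this.props.tasks} />
        </div>
    );
}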

Adjusting the Cache

Let’s take a look at the reducers.

import constants from '../constants/tasks';

const initialState = {
    tasks: [],
    profile: null,
    isLoading: false,
    authToken: null,
    errorMessage: null
};

/**
 * Reducer for the tasks section of the redux implementation
 *
 * @param {Object} state the current state of the tasks area
 * @param {Object} action the Redux action (created by an action creator)
 * @returns {Object} the new state
 */
export default function reducer(state = initialState, action) {
    switch (action.type) {
    case constants.StoreProfile:
        return Object.assign({}, state, {
            authToken: action.token,
            profile: action.profile
        });

    case constants.UpdateLoadingFlag:
        return Object.assign({}, state, {
            isLoading: action.isLoading
        });

    case constants.Create:
        return Object.assign({}, state, {
            isLoading: false,
            tasks: [ ...state.tasks, action.task ]
        });

    case constants.ReplaceTasks:
        return Object.assign({}, state, {
            isLoading: false,
            tasks: [ ...action.tasks ]
        });

    case constants.Update:
        return Object.assign({}, state, {
            isLoading: false,
            tasks: state.tasks.map((tmp) => { return tmp.id === action.task.id ? Object.assign({}, tmp, action.task) : tmp; })
        });

    case constants.Delete:
        return Object.assign({}, state, {
            isLoading: false,
            tasks: state.tasks.filter((tmp) => { return tmp.id !== action.taskId; })
        });

    case constants.SetErrorMessage:
        return Object.assign({}, state, {
            isLoading: false,
            errorMessage: action.errorMessage
        });

    default:
        return state;
    }
}

You will note that my reducers only deal with the local cache. I could, I guess, also store this in localStorage so that my restart speed is faster. However, that would mean sorting out a more complex interaction between the server, the in-memory cache and the localStorage cache.

Note that all my reducers that change the in-memory cache also turn off the isLoading flag. This saves a dispatch through Redux for each operation. I doubt it’s a significant performance increase, but I’m of the opinion that any obvious performance win should be taken. In this case, each operation results in one less action dispatch and one less Object.assign(). In bigger projects, this could be significant.

Thinking about Sync and Servers

One of the things you can clearly see in this application is the delay in the round-trip to the server. I don’t update the cache until I have updated the server. This is safe. However, it’s hardly performant. There are a couple of ways I could fix this.

Firstly, I can update the local cache before the server responds. For example, let’s say I am inserting a new object. I can add two local fields that are never transmitted to the server: clientId and isDirty. When the task is newly created, I generate a clientId (and use that everywhere) and set the dirty flag. When the server response comes back, I update the record from the server (leaving clientId alone) and clear the dirty flag. This lets me identify records that have not yet been confirmed by the server, perhaps preventing duplicate updates – it also lets me identify records that have been newly created on the client. A sketch of this follows.
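Here is a rough sketch of that optimistic insert. Everything new here is hypothetical: the uuid import, the constants.ReconcileTask type (the reducer would need a matching case that finds the record by clientId), and the reconcileTask() action creator.

import { v4 as uuid } from 'uuid';

/**
 * Hypothetical action creator: swap the optimistic record (matched by
 * clientId in the reducer) for the server's version and clear the dirty flag.
 */
function reconcileTask(clientId, serverTask) {
    return {
        type: constants.ReconcileTask,
        clientId: clientId,
        task: Object.assign({}, serverTask, { isDirty: false })
    };
}

export function createTaskOptimistic(text) {
    return (dispatch) => {
        const newTask = {
            clientId: uuid(),   // local-only ID, never sent to the server
            isDirty: true,      // not yet confirmed by the server
            text: text,
            complete: false
        };

        // Update the in-memory cache immediately for a snappy UI...
        dispatch(insertTask(newTask));

        // ...then send only the server-side fields and reconcile on response.
        zumo.table.insert({ text: text, complete: false }).then(
            (insertedItem) => { dispatch(reconcileTask(newTask.clientId, insertedItem)); },
            (error) => { dispatch(setErrorMessage(error.message)); }
        );
    };
}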

Secondly, I can write to a localStorage area instead of the server. This will be much faster. Then, periodically, I can synchronize with the server – pushing the changes held in localStorage up and pulling fresh data down. A minimal sketch of the localStorage side follows.
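The localStorage half of that scheme is straightforward. This is a minimal sketch; the 'tasks-cache' key name is an arbitrary choice.

// Persist the task list locally under an arbitrary key.
function saveTasksToLocalStorage(tasks) {
    localStorage.setItem('tasks-cache', JSON.stringify(tasks));
}

// Load the cached task list at startup, falling back to an empty list.
function loadTasksFromLocalStorage() {
    const cached = localStorage.getItem('tasks-cache');
    return cached ? JSON.parse(cached) : [];
}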

There are multiple ways to synchronize data with a server – which one you choose depends on how accurate the data on the server must be, the performance required on the client, and memory consumption. There are trade-offs whichever way you choose.

Where’s the code?

You can find the complete code on my GitHub Repository.