Configuring ASP.NET Core Applications in Azure App Service

In my last article, I introduced my plan to see what it would take to run an Azure Mobile Apps compatible service in ASP.NET Core. There are lots of potential problems here and I need to deal with them one by one. The first article covered how to get diagnostic logging working in Azure App Service, and today’s article shows how to deal with configuration in Azure App Service.

There are two major ways to configure your application in Azure App Service. The first is via App Settings and the second is via Data Connections. App Settings appear as environment variables with the prefix APPSETTING_. For example, if you have an app setting called DEBUGMODE, you can access it via Environment.GetEnvironmentVariable("APPSETTING_DEBUGMODE"). An interesting side note: If you configure App Service Push or Authentication, these settings appear as app settings to your application as well.

Data Connections provide a mechanism for accessing connection strings. If you added a Data Connection called MS_TableConnectionString (which is the default for Azure Mobile Apps), then you would see an environment variable called SQLAZURECONNSTR_MS_TableConnectionString. This encodes both the type of connection and the connection string name.
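
To make that encoding concrete, here is a minimal sketch that unpacks the type and the name from such a variable – the regular expression is the same one my configuration provider will use later in this article:

using System;
using System.Text.RegularExpressions;

class ConnStrDemo
{
    static void Main()
    {
        // e.g. SQLAZURECONNSTR_MS_TableConnectionString=Server=...;Database=...
        var pattern = new Regex(@"^([A-Z]+)CONNSTR_(.+)$");
        var match = pattern.Match("SQLAZURECONNSTR_MS_TableConnectionString");
        if (match.Success)
        {
            Console.WriteLine($"Type = {match.Groups[1].Value}");   // SQLAZURE
            Console.WriteLine($"Name = {match.Groups[2].Value}");   // MS_TableConnectionString
        }
    }
}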

Configuration in ASP.NET Core

The .NET Core configuration framework is very solid, supporting a variety of sources out of the box – JSON, XML and INI files, environment variables, and command-line arguments. You will generally see code like this in the constructor of the Startup.cs file:

        public Startup(IHostingEnvironment env)
        {
            var builder = new ConfigurationBuilder()
                .SetBasePath(env.ContentRootPath)
                .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
                .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
                .AddEnvironmentVariables();
            Configuration = builder.Build();
        }

There are a couple of problems with this, which I will illustrate by adding a view that displays the current configuration. Firstly, add a service in the ConfigureServices() method in Startup.cs:

        // This method gets called by the runtime. Use this method to add services to the container.
        public void ConfigureServices(IServiceCollection services)
        {
            // Add Configuration as a service
            services.AddSingleton<IConfiguration>(Configuration);

            // Add framework services.
            services.AddMvc();
        }

I can now add an action to the Controllers\HomeController.cs:

        public IActionResult Configuration([FromServices] IConfiguration service)
        {
            ViewBag.Configuration = service.AsEnumerable();
            return View();
        }

The [FromServices] attribute allows me to use dependency injection to inject the singleton service I defined earlier, providing access to the configuration in just this method. I assign the enumeration of all the configuration elements to the ViewBag for later display. I’ve also added a Views\Home\Configuration.cshtml file:

<h1>Configuration</h1>

<div class="row">
    <table class="table table-striped">
        <thead>
            <tr>
                <th>Key</th>
                <th>Value</th>
            </tr>
        </thead>
        <tbody>
            @foreach (var item in ViewBag.Configuration)
            {
                <tr>
                    <td>@item.Key</td>
                    <td>@item.Value</td>
                </tr>
            }
        </tbody>
    </table>
</div>

If I run this code within a properly configured App Service (one with an associated SQL service attached via Data Connections), then I will see all the environment variables and app settings listed on the page. In addition, the environment variables configuration module has added a pair of configuration elements for me – one named ConnectionStrings:MS_TableConnectionString with the connection string, and the other called ConnectionStrings:MS_TableConnectionString_ProviderName.

There are several problems with this approach:

  • All environment variables override my configuration. Azure App Service is a managed service, so the platform can add new environment variables at any time, and those may clobber my configuration.
  • The environment variables are not organized in any way and rely on convention.
  • Many of the environment variables are not relevant to my app – they are relevant to Azure App Service.

A Better Configuration Module

Rather than use the default environment variables module, I’m going to write a custom configuration provider for Azure App Service. When developing locally, you can use the “right” environment variables or a local JSON file for configuration. If I were expressing the Azure App Service configuration in JSON, it might look like this:

{
    "ConnectionStrings": {
        "MS_TableConnectionString": "my-connection-string"
    },
    "Data": {
        "MS_TableConnectionString": {
            "Type": "SQLAZURE",
            "ConnectionString": "my-connection-string"
        }
    },
    "AzureAppService": {
        "AppSettings": {
            "MobileAppsManagement_EXTENSION_VERSION": "latest"
        },
        "Auth": {
            "Enabled": "True",
            "SigningKey": "some-long-string",
            "AzureActiveDirectory": {
                "ClientId: "my-client-id",
                "ClientSecret": "my-client-secret",
                "Mode": "Express"
            }
        },
        "Push": {
            // ...
        }
    }
}

This is a much better configuration pattern in that it organizes the settings and does not pollute the configuration namespace with every environment variable. I like having the Data block for associated information about the connection string, instead of the convention of appending _ProviderName to the name. Duplicating the connection string means I can use Configuration.GetConnectionString() or Configuration.GetSection("Data:MS_TableConnectionString") to get the information I need. I’m envisioning releasing this library at some point, so providing options like this is a good idea.
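
For example, once this layout is loaded, either route retrieves the connection information. Here is a sketch, assuming the JSON above has been loaded into the Configuration property built in Startup:

            // Option 1: the standard helper reads ConnectionStrings:MS_TableConnectionString
            var connectionString = Configuration.GetConnectionString("MS_TableConnectionString");

            // Option 2: the Data block also carries the provider type
            var section = Configuration.GetSection("Data:MS_TableConnectionString");
            var providerType = section["Type"];              // e.g. "SQLAZURE"
            var connection = section["ConnectionString"];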

Writing a new configuration provider is easy. There are three files:

  • An extension to the ConfigurationBuilder to bring in your configuration source
  • A configuration source that references the configuration provider
  • The configuration provider

The first two tend to be boilerplate code. Here is the AppServiceConfigurationBuilderExtensions.cs file:

using Microsoft.Azure.AppService.Core.Configuration;

namespace Microsoft.Extensions.Configuration
{
    public static class AzureAppServiceConfigurationBuilderExtensions
    {
        public static IConfigurationBuilder AddAzureAppServiceSettings(this IConfigurationBuilder builder)
        {
            return builder.Add(new AzureAppServiceSettingsSource());
        }
    }
}

Note that I’ve placed the class in the same namespace as the other configuration builder extensions. This means you don’t need a using statement to use this extension method. It’s a small thing.

Here is the AzureAppServiceSettingsSource.cs file:

using System;
using Microsoft.Extensions.Configuration;

namespace Microsoft.Azure.AppService.Core.Configuration
{
    internal class AzureAppServiceSettingsSource : IConfigurationSource
    {
        public IConfigurationProvider Build(IConfigurationBuilder builder)
        {
            return new AzureAppServiceSettingsProvider(Environment.GetEnvironmentVariables());
        }
    }
}

The source just provides a new provider. Note that I pass in the environment to the provider. This allows me to mock the environment later on for unit testing. I’ve placed the three files (the two above and the next one) in their own library project within the solution. This allows me to easily write unit tests later on and it allows me to package and distribute the library if I wish.

All the work for converting the environment to a configuration is done in the AzureAppServiceSettingsProvider.cs file (with apologies for the length):

using System.Collections;
using Microsoft.Extensions.Configuration;
using System.Text.RegularExpressions;
using System.Collections.Generic;

namespace Microsoft.Azure.AppService.Core.Configuration
{
    internal class AzureAppServiceSettingsProvider : ConfigurationProvider
    {
        private IDictionary env;

        /// <summary>
        /// Where all the app settings should go in the configuration
        /// </summary>
        private const string SettingsPrefix = "AzureAppService";

        /// <summary>
        /// The regular expression used to match the key in the environment for Data Connections.
        /// </summary>
        private Regex DataConnectionsRegexp = new Regex(@"^([A-Z]+)CONNSTR_(.+)$");

        /// <summary>
        /// Mapping from environment variable to position in configuration - explicit cases
        /// </summary>
        private Dictionary<string, string> specialCases = new Dictionary<string, string>
        {
            { "WEBSITE_AUTH_CLIENT_ID",                 $"{SettingsPrefix}:Auth:AzureActiveDirectory:ClientId" },
            { "WEBSITE_AUTH_CLIENT_SECRET",             $"{SettingsPrefix}:Auth:AzureActiveDirectory:ClientSecret" },
            { "WEBSITE_AUTH_OPENID_ISSUER",             $"{SettingsPrefix}:Auth:AzureActiveDirectory:Issuer" },
            { "WEBSITE_AUTH_FB_APP_ID",                 $"{SettingsPrefix}:Auth:Facebook:ClientId" },
            { "WEBSITE_AUTH_FB_APP_SECRET",             $"{SettingsPrefix}:Auth:Facebook:ClientSecret" },
            { "WEBSITE_AUTH_GOOGLE_CLIENT_ID",          $"{SettingsPrefix}:Auth:Google:ClientId" },
            { "WEBSITE_AUTH_GOOGLE_CLIENT_SECRET",      $"{SettingsPrefix}:Auth:Google:ClientSecret" },
            { "WEBSITE_AUTH_MSA_CLIENT_ID",             $"{SettingsPrefix}:Auth:MicrosoftAccount:ClientId" },
            { "WEBSITE_AUTH_MSA_CLIENT_SECRET",         $"{SettingsPrefix}:Auth:MicrosoftAccount:ClientSecret" },
            { "WEBSITE_AUTH_TWITTER_CONSUMER_KEY",      $"{SettingsPrefix}:Auth:Twitter:ClientId" },
            { "WEBSITE_AUTH_TWITTER_CONSUMER_SECRET",   $"{SettingsPrefix}:Auth:Twitter:ClientSecret" },
            { "WEBSITE_AUTH_SIGNING_KEY",               $"{SettingsPrefix}:Auth:SigningKey" },
            { "MS_NotificationHubId",                   $"{SettingsPrefix}:Push:NotificationHubId" }
        };

        /// <summary>
        /// Mapping from environment variable to position in configuration - scoped cases
        /// </summary>
        private Dictionary<string, string> scopedCases = new Dictionary<string, string>
        {
            { "WEBSITE_AUTH_", $"{SettingsPrefix}:Auth" },
            { "WEBSITE_PUSH_", $"{SettingsPrefix}:Push" }
        };

        /// <summary>
        /// Authentication providers need to be done before the scoped cases, so their mapping
        /// is separate from the scoped cases
        /// </summary>
        private Dictionary<string, string> authProviderMapping = new Dictionary<string, string>
        {
            { "WEBSITE_AUTH_FB_",          $"{SettingsPrefix}:Auth:Facebook" },
            { "WEBSITE_AUTH_GOOGLE_",      $"{SettingsPrefix}:Auth:Google" },
            { "WEBSITE_AUTH_MSA_",         $"{SettingsPrefix}:Auth:MicrosoftAccount" },
            { "WEBSITE_AUTH_TWITTER_",     $"{SettingsPrefix}:Auth:Twitter" }
        };

        public AzureAppServiceSettingsProvider(IDictionary env)
        {
            this.env = env;
        }

        /// <summary>
        /// Loads the appropriate settings into the configuration.  The Data object is provided for us
        /// by the ConfigurationProvider
        /// </summary>
        /// <seealso cref="Microsoft.Extensions.Configuration.ConfigurationProvider"/>
        public override void Load()
        {
            foreach (DictionaryEntry e in env)
            {
                string key = e.Key as string;
                string value = e.Value as string;

                var m = DataConnectionsRegexp.Match(key);
                if (m.Success)
                {
                    var type = m.Groups[1].Value;
                    var name = m.Groups[2].Value;

                    if (!key.Equals("CUSTOMCONNSTR_MS_NotificationHubConnectionString"))
                    {
                        Data[$"Data:{name}:Type"] = type;
                        Data[$"Data:{name}:ConnectionString"] = value;
                    }
                    else
                    {
                        Data[$"{SettingsPrefix}:Push:ConnectionString"] = value;
                    }
                    Data[$"ConnectionStrings:{name}"] = value;
                    continue;
                }

                // If it is a special case, then handle it through the mapping and move on
                if (specialCases.ContainsKey(key))
                {
                    Data[specialCases[key]] = value;
                    continue;
                }

                // A special case for AUTO_AAD
                if (key.Equals("WEBSITE_AUTH_AUTO_AAD"))
                {
                    Data[$"{SettingsPrefix}:Auth:AzureActiveDirectory:Mode"] = value.Equals("True") ? "Express" : "Advanced";
                    continue;
                }

                // Scoped Cases for authentication providers
                if (dictionaryMappingFound(key, value, authProviderMapping))
                {
                    continue;
                }

                // Other scoped cases (not auth providers)
                if (dictionaryMappingFound(key, value, scopedCases))
                {
                    continue;
                }

                // Other internal settings
                if (key.StartsWith("WEBSITE_") && !containsMappedKey(key, scopedCases))
                {
                    var setting = key.Substring(8);
                    Data[$"{SettingsPrefix}:Website:{setting}"] = value;
                    continue;
                }

                // App Settings - anything not in the WEBSITE section
                if (key.StartsWith("APPSETTING_") && !key.StartsWith("APPSETTING_WEBSITE_"))
                {
                    var setting = key.Substring(11);
                    Data[$"{SettingsPrefix}:AppSetting:{setting}"] = value;
                    continue;
                }

                // Add everything else into { "Environment" }
                if (!key.StartsWith("APPSETTING_"))
                {
                    Data[$"Environment:{key}"] = value;
                }
            }
        }

        /// <summary>
        /// Determines if the key starts with any of the keys in the mapping
        /// </summary>
        /// <param name="key">The environment variable</param>
        /// <param name="mapping">The mapping dictionary</param>
        /// <returns></returns>
        private bool containsMappedKey(string key, Dictionary<string, string> mapping)
        {
            foreach (var start in mapping.Keys)
            {
                if (key.StartsWith(start))
                {
                    return true;
                }
            }
            return false;
        }

        /// <summary>
        /// Handler for a mapping dictionary
        /// </summary>
        /// <param name="key">The environment variable to check</param>
        /// <param name="value">The value of the environment variable</param>
        /// <param name="mapping">The mapping dictionary</param>
        /// <returns>true if a match was found</returns>
        private bool dictionaryMappingFound(string key, string value, Dictionary<string, string> mapping)
        {
            foreach (string start in mapping.Keys)
            {
                if (key.StartsWith(start))
                {
                    var setting = key.Substring(start.Length);
                    Data[$"{mapping[start]}:{setting}"] = value;
                    return true;
                }
            }
            return false;
        }
    }
}

Unfortunately, there are a lot of special cases here to handle how I want to lay out my configuration. However, the basic flow is handled in the Load() method. It cycles through the environment; if an environment variable matches one of the ones I watch for, I add it to the Data[] dictionary, which becomes the configuration. Anything that doesn’t match is added to the default Environment section of the configuration. The ConfigurationProvider class that I inherit from handles all the other lifecycle requirements for the provider, so I don’t need to be concerned with them.
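
Wiring the new provider into the application is then a one-line change in the Startup constructor – a sketch, replacing the blanket AddEnvironmentVariables() call from the original version:

        public Startup(IHostingEnvironment env)
        {
            var builder = new ConfigurationBuilder()
                .SetBasePath(env.ContentRootPath)
                .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
                .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
                // The structured provider replaces AddEnvironmentVariables()
                .AddAzureAppServiceSettings();
            Configuration = builder.Build();
        }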

Testing the Configuration Module

I’ve done some pre-work to aid in testability. Firstly, I’ve segmented the library component into its own project. Secondly, I’ve added a “mocking” capability for the environment. The default environment is passed in from the source class, but I can instantiate the provider in my test class with a suitable dictionary. The xUnit site covers how to set up a simple test, although Visual Studio 2017 has a specific xUnit test suite project template (look for xUnit Test Project (.NET Core) in the project templates list).

My testing process is relatively simple – given a suitable environment, does it produce the right configuration? I’ll have a test routine for each of the major sections – connection strings, special cases and scoped cases, and others. Then I’ll copy my environment from a real App Service and see if that causes issues. I get my environment settings from Kudu – also known as Advanced Tools in your App Service menu in the Azure portal. Here is an example of one of the tests:

        [Fact]
        public void CreatesDataConnections()
        {
            var env = new Dictionary<string, string>()
            {
                { "SQLCONNSTR_MS_TableConnectionString", "test1" },
                { "SQLAZURECONNSTR_DefaultConnection", "test2" },
                { "SQLCONNSTRMSTableConnectionString", "test3" }
            };
            var provider = new AzureAppServiceSettingsProvider(env);
            provider.Load();

            string r;
            Assert.True(provider.TryGet("Data:MS_TableConnectionString:Type", out r));
            Assert.Equal("SQL", r);
            Assert.True(provider.TryGet("Data:MS_TableConnectionString:ConnectionString", out r));
            Assert.Equal("test1", r);

            Assert.True(provider.TryGet("Data:DefaultConnection:Type", out r));
            Assert.Equal("SQLAZURE", r);
            Assert.True(provider.TryGet("Data:DefaultConnection:ConnectionString", out r));
            Assert.Equal("test2", r);

            Assert.False(provider.TryGet("Data:MSTableConnectionString:Type", out r));
            Assert.False(provider.TryGet("Data:MSTableConnectionString:ConnectionString", out r));
        }

This test ensures that the typical connection strings get placed into the right Data structure within the configuration. You can run the tests within Visual Studio 2017 by using Test > Windows > Test Explorer to view the test explorer, then click Run All – the projects will be built and tests discovered.
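
Here is a sketch of a companion test for the special and scoped cases – the keys follow the mapping tables in the provider above:

        [Fact]
        public void CreatesAuthConfiguration()
        {
            var env = new Dictionary<string, string>()
            {
                { "WEBSITE_AUTH_CLIENT_ID", "client-id" },
                { "WEBSITE_AUTH_AUTO_AAD", "True" },
                { "WEBSITE_AUTH_ENABLED", "True" }
            };
            var provider = new AzureAppServiceSettingsProvider(env);
            provider.Load();

            string r;
            // Special case: mapped directly into the AAD section
            Assert.True(provider.TryGet("AzureAppService:Auth:AzureActiveDirectory:ClientId", out r));
            Assert.Equal("client-id", r);
            // AUTO_AAD is translated into an Express/Advanced mode
            Assert.True(provider.TryGet("AzureAppService:Auth:AzureActiveDirectory:Mode", out r));
            Assert.Equal("Express", r);
            // Scoped case: anything else under WEBSITE_AUTH_ lands in the Auth section
            Assert.True(provider.TryGet("AzureAppService:Auth:ENABLED", out r));
            Assert.Equal("True", r);
        }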

I’m keeping my code on GitHub, so you can find this code (including the entire test suite) in my GitHub Repository at tag p4.

Running ASP.NET Core applications in Azure App Service

One of the things I get asked about semi-regularly is when Azure Mobile Apps is going to support .NET Core. It’s a logical progression for most people and many ASP.NET developers are planning future web sites to run on ASP.NET Core. Also, the ASP.NET Core programming model makes a lot more sense (at least to me) than the older ASP.NET applications. Finally, we have an issue open on the subject. So, what is holding us back? Well, there are a bunch of things. Some have been solved already and some need a lot of work. In the coming weeks, I’m going to be writing about the various pieces that need to be in place before we can say “Azure Mobile Apps is there”.

Of course, if you want a mobile backend, you can always hop over to Visual Studio Mobile Center. This provides a mobile backend for you without having to write any code. (Full disclosure: I’m now a program manager on that team, so I may be slightly biased). However, if you are thinking ASP.NET Core, then you likely want to write the code.

Let’s get started with something that does exist. How does one run ASP.NET Core applications on Azure App Service? Well, there are two methods. The first involves uploading your application to Azure App Service via the Visual Studio Publish… dialog or via Continuous Integration from GitHub, Visual Studio Team Services or even Dropbox. It’s a relatively easy method and one I would recommend. There is a gotcha, which I’ll discuss below.

The second method uses a Docker container to house the code that is then deployed onto a Linux App Service. This is still in preview (as of writing), so I can’t recommend this for production workloads.

Create a New ASP.NET Core Application

Let’s say you opened up Visual Studio 2017 (RC right now) and created a brand new ASP.NET Core MVC application – the basis for my research here.

  • Open up Visual Studio 2017 RC.
  • Select File > New > Project…
  • Select the ASP.NET Core Web Application (.NET Core).
    • Fill in an appropriate name for the solution and project, just as normal.
    • Click OK to create the project.
  • Select ASP.NET Core 1.1 from the framework drop-down (it will say ASP.NET Core 1.0 initially)
  • Select Web Application in the ASP.NET Core 1.1 Templates selection.
  • Click OK.

I called my solution netcore-server and the project ExampleServer. At this point, Visual Studio will go off and create a project for you. You can see what it creates easily enough, but I’ve checked it into my GitHub repository at tag p0.

I’m not going to cover ASP.NET Core programming too much in this series. You can read the definitive guide on their documentation site, and I would recommend you start by understanding ASP.NET Core programming before getting into the changes here.

Go ahead and run the service (either as a Kestrel service or an IIS Express service – it works with both). This is just to make sure that you have a working site.

Add Logging to your App

Logging is one of those central things that is needed in any application. There are so many things you can’t do (including diagnose issues) if you don’t have appropriate logging. Fortunately, ASP.NET Core has logging built-in. Let’s add some to the Controllers\HomeController.cs file:

using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;

namespace ExampleServer.Controllers
{
    public class HomeController : Controller
    {
        private ILogger logger;

        public HomeController(ILoggerFactory loggerFactory)
        {
            logger = loggerFactory.CreateLogger(this.GetType().FullName);
        }

        public IActionResult Index()
        {
            logger.LogInformation("In Index of the HomeController", null);
            return View();
        }
        // Rest of the file here

I’ve added the logger factory via dependency injection, then logged a message whenever the Index page is served by the home controller. If you run this version of the code (available on the GitHub repository at tag p1), you will see the following in your Visual Studio output window:

[Screenshot: the Visual Studio Output window, showing the informational message among the Application Insights output.]

It’s swamped by the Application Insights data, but you can clearly see the informational message there.

Deploy your App to Azure App Service

Publishing to Azure App Service is relatively simple – right-click on the project and select Publish… to kick off the process. The layout of the windows has changed from Visual Studio 2015, but it’s the same process. You can create a new App Service or use an existing one. Once you have answered all the questions, your site will be published. Eventually, your site will be displayed in your web browser.

Turn on Diagnostic Logging

  • Click View > Server Explorer to add the Server Explorer to your workspace.
  • Expand the Azure node, the App Service node, and finally your resource group node.
  • Right-click the app service and select View Settings.
  • Turn on logging and set the logging level to verbose:

[Screenshot: the App Service settings pane with application logging enabled and the level set to Verbose.]

  • Click Save to save the settings (the site will restart).
  • Right-click the app service in the Server Explorer again and this time select View Streaming Logs.
  • Wait until you see that you are connected to the log streaming service (in the Output window).

Now refresh your browser so that it reloads the index page again. Note how you see the access logs (which files have been requested) but the log message we put into the code is not there.

The Problem and Solution

The problem is, hopefully, obvious. ASP.NET Core does not by default feed logs to Azure App Service. We need to enable that feature in the .NET Core host. We do this in the Program.cs file:

using System.IO;
using Microsoft.AspNetCore.Hosting;

namespace ExampleServer
{
    public class Program
    {
        public static void Main(string[] args)
        {
            var host = new WebHostBuilder()
                .UseKestrel()
                .UseContentRoot(Directory.GetCurrentDirectory())
                .UseIISIntegration()
                .UseStartup<Startup>()
                .UseApplicationInsights()
                .UseAzureAppServices()
                .Build();

            host.Run();
        }
    }
}

You will also need to add the Microsoft.AspNetCore.AzureAppServicesIntegration package from NuGet for this to work. Once you have made this change, you can deploy and watch the logs again:

[Screenshot: the streaming logs, now including the application log message from the home controller.]

If you have followed the instructions, you will need to switch the Output window back to the Azure logs. The output window will have been switched to Build during the publish process.

Adjusting the WebHostBuilder for the environment

It’s likely that you won’t want Application Insights and Azure App Services logging except when you are running on Azure App Service. There are a number of environment variables that Azure App Service uses and you can leverage these as well. My favorites are REGION_NAME (which indicates which Azure region your service is running in) and WEBSITE_OWNER_NAME (which is a combination of a bunch of things). You can test for these and adjust the pipeline accordingly:

using Microsoft.AspNetCore.Hosting;
using System;
using System.IO;

namespace ExampleServer
{
    public class Program
    {
        public static void Main(string[] args)
        {
            var hostBuilder = new WebHostBuilder()
                .UseKestrel()
                .UseContentRoot(Directory.GetCurrentDirectory())
                .UseIISIntegration()
                .UseStartup<Startup>()
                .UseApplicationInsights();

            var regionName = Environment.GetEnvironmentVariable("REGION_NAME");
            if (regionName != null)
            {
                hostBuilder.UseAzureAppServices();
            }
                
            var host = hostBuilder.Build();

            host.Run();
        }
    }
}

You can download this code at my GitHub repository at tag p2.

30 Days of Zumo.v2 (Azure Mobile Apps): Day 24 – Push with Tags

I introduced push as a concept in the last article, but I left a teaser – push to a subset of users with tags. Tags are a meta-concept that equates to “interests”, but they are also how you would implement things like “push-to-user” and “push-to-group”. They can literally be anything. Before I can get there, though, I need to be able to register for tags.

Dirty little secret – the current registration API allows you to request tags, but it actually ignores them. There is a good reason for this: if you allow the client to specify the tags, it may register for tags that it isn’t allowed to use. For example, let’s say you implement a tag prefixed with “_email_”. Could a user register for a tag with someone else’s email address by “hacking the REST request”? The answer, unfortunately, was yes. That could happen. Don’t let it happen to you.

Today I’m going to implement a custom API that replaces the regular push installations endpoint. My endpoint is going to define two distinct sets of tags – a whitelist of tags that the user can subscribe to (anything not an exact match in the list will be thrown out); and a set of dynamic tags based on the authentication record.

The Client

Before I can do anything, I need to be able to request tags. I’ve got an Apache Cordova app and can request tags simply as part of the register() call:

    /**
     * Event Handler for response from PNS registration
     * @param {object} data the response from the PNS
     * @param {string} data.registrationId the registration Id from the PNS
     * @event
     */
    function handlePushRegistration(data) {
        var pns = 'gcm';
        var templates = {
            tags: ['News', 'Sports', 'Politics', '_email_myboss@microsoft.com' ]
        };
        client.push.register(pns, data.registrationId, templates);
    }

The registration takes an object called “templates”, which contains the list of tags as an array. All the other SDKs have something similar to this. You will notice that I’ve got three tags that are “normal” and one that is special. I’m going to create a tag list that will strip out the ones I’m not allowed to have. For example, if I list ‘News’ and ‘Sports’ as valid tags, I expect the ‘Politics’ tag to be stripped out. In addition, the ‘_email’ tag should always be stripped out since it is definitely not mine.

Note that a tag cannot start with the $ sign – that’s a reserved symbol for Notification Hubs. Don’t use it.

The Node.js Version

The Node.js version is relatively simple to implement, but I had to do some work to coerce the SDK into letting me register a replacement for the push installations endpoint:

var express = require('express'),
    serveStatic = require('serve-static'),
    azureMobileApps = require('azure-mobile-apps'),
    authMiddleware = require('./authMiddleware'),
    customRouter = require('./customRouter'),
    pushRegistrationHandler = require('./pushRegistration');

// Set up a standard Express app
var webApp = express();

// Set up the Azure Mobile Apps SDK
var mobileApp = azureMobileApps({
    notificationRootPath: '/.push/disabled'
});

mobileApp.use(authMiddleware);
mobileApp.tables.import('./tables');
mobileApp.api.import('./api');
mobileApp.use('/push/installations', pushRegistrationHandler);

The new require() at the top brings in my push registration handler. The notificationRootPath option moves the old push registration handler off to “somewhere else”. Finally, the mobileApp.use('/push/installations', ...) call registers my new push registration handler in the right place. Now, let’s look at the ./pushRegistration.js file:

var express = require('express'),
    bodyParser = require('body-parser'),
    notifications = require('azure-mobile-apps/src/notifications'),
    log = require('azure-mobile-apps/src/log');

module.exports = function (configuration) {
    var router = express.Router(),
        installationClient;

    if (configuration && configuration.notifications && Object.keys(configuration.notifications).length > 0) {
        router.use(addPushContext);
        router.route('/:installationId')
            .put(bodyParser.json(), put, errorHandler)
            .delete(del, errorHandler);

        installationClient = notifications(configuration.notifications);
    }

    return router;

    function addPushContext(req, res, next) {
        req.azureMobile = req.azureMobile || {};
        req.azureMobile.push = installationClient.getClient();
        next();
    }

    function put(req, res, next) {
        var installationId = req.params.installationId,
            installation = req.body,
            tags = [],
            user = req.azureMobile.user;

        // White list of all known tags
        var whitelist = [
            'news',
            'sports'
        ];

        // Logic for determining the correct list of tags
        installation.tags.forEach(function (tag) {
            if (whitelist.indexOf(tag.toLowerCase()) !== -1)
                tags.push(tag.toLowerCase());
        });
        // Add in the "automatic" tags
        if (user) {
            tags.push('_userid_' + user.id);
            if (user.emailaddress) tags.push('_email_' + user.emailaddress);
        }
        // Replace the installation tags requested with my list
        installation.tags = tags;

        installationClient.putInstallation(installationId, installation, user && user.id)
            .then(function (result) {
            res.status(204).end();
        })
            .catch(next);
    }

    function del(req, res, next) {
        var installationId = req.params.installationId;

        installationClient.deleteInstallation(installationId)
            .then(function (result) {
            res.status(204).end();
        })
            .catch(next);
    }

    function errorHandler(err, req, res, next) {
        log.error(err);
        res.status(400).send(err.message || 'Bad Request');
    }
};

The important code here is in the put() function. Normally, the tags would just be dropped. Instead, I take the tags that are offered and run them through a whitelist filter. I then add on some automatic tags (but only if the user is authenticated).

Note that this version was adapted from the Azure Mobile Apps Node.js Server SDK version. I’ve just added the logic to deal with the tags.

ASP.NET Version

The ASP.NET Server SDK comes with a built-in controller that I need to replace. It’s added to the application during the App_Start phase with this:

            // Configure the Azure Mobile Apps section
            new MobileAppConfiguration()
                .AddTables(
                    new MobileAppTableConfiguration()
                        .MapTableControllers()
                        .AddEntityFramework())
                .MapApiControllers()
                .AddPushNotifications() /* Adds the Push Notification Handler */
                .ApplyTo(config);

I can just comment out the .AddPushNotifications() line and the /push/installations controller is removed, allowing me to replace it. I’m not a confident ASP.NET developer – I’m sure there is a better way of doing this. I’ve found, however, that creating a Custom API and calling that custom API is a better way of doing the registration. It’s not a problem of the code within the controller – it’s a problem of routing. In my client, instead of calling client.push.register(), I’ll call client.invokeApi(). This version is in the Client.Cordova project:

    /**
     * Event Handler for response from PNS registration
     * @param {object} data the response from the PNS
     * @param {string} data.registrationId the registration Id from the PNS
     * @event
     */
    function handlePushRegistration(data) {
        var apiOptions = {
            method: 'POST',
            body: {
                pushChannel: data.registrationId,
                tags: ['News', 'Sports', 'Politics', '_email_myboss@microsoft.com' ]
            }
        };

        var success = function () {
            alert('Push Registered');
        }
        var failure = function (error) {
            alert('Push Failed: ' + error.message);
        }

        client.invokeApi("register", apiOptions).then(success, failure);
    }

Now I can write a POST handler as a Custom API in my backend:

using System.Web.Http;
using Microsoft.Azure.Mobile.Server.Config;
using System.Collections.Generic;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using System.Security.Principal;
using Microsoft.Azure.Mobile.Server.Authentication;
using System.Linq;
using Microsoft.Azure.NotificationHubs;
using System.Web.Http.Controllers;

namespace backend.dotnet.Controllers
{
    [Authorize]
    [MobileAppController]
    public class RegisterController : ApiController
    {
        protected override void Initialize(HttpControllerContext context)
        {
            // Call the original Initialize() method
            base.Initialize(context);
        }

        [HttpPost]
        public async Task<HttpResponseMessage> Post([FromBody] RegistrationViewModel model)
        {
            if (!ModelState.IsValid)
            {
                return new HttpResponseMessage(HttpStatusCode.BadRequest);
            }

            // We want to apply the push registration to an installation ID
            var installationId = Request.GetHeaderOrDefault("X-ZUMO-INSTALLATION-ID");
            if (installationId == null)
            {
                return new HttpResponseMessage(HttpStatusCode.BadRequest);
            }

            // Determine the right list of tasks to be handled
            List<string> validTags = new List<string>();
            foreach (string tag in model.tags)
            {
                if (tag.ToLower().Equals("news") || tag.ToLower().Equals("sports"))
                {
                    validTags.Add(tag.ToLower());
                }
            }
            // Add on the dynamic tags generated by authentication - note that the
            // [Authorize] tags means we are authenticated.
            var identity = await User.GetAppServiceIdentityAsync<AzureActiveDirectoryCredentials>(Request);
            validTags.Add($"_userid_{identity.UserId}");

            var emailClaim = identity.UserClaims.Where(c => c.Type.EndsWith("emailaddress")).FirstOrDefault();
            if (emailClaim != null)
            {
                validTags.Add($"_email_{emailClaim.Value}");
            }

            // Register with the hub
            await CreateOrUpdatePushInstallation(installationId, model.pushChannel, validTags);

            return new HttpResponseMessage(HttpStatusCode.OK);
        }

        /// <summary>
        /// Update an installation with notification hubs
        /// </summary>
        /// <param name="installationId">The installation</param>
        /// <param name="pushChannel">the GCM Push Channel</param>
        /// <param name="tags">The list of tags to register</param>
        /// <returns></returns>
        private async Task CreateOrUpdatePushInstallation(string installationId, string pushChannel, IList<string> tags)
        {
            var pushClient = Configuration.GetPushClient();

            Installation installation = new Installation
            {
                InstallationId = installationId,
                PushChannel = pushChannel,
                Tags = tags,
                Platform = NotificationPlatform.Gcm
            };
            await pushClient.CreateOrUpdateInstallationAsync(installation);
        }
    }

    /// <summary>
    /// Format of the registration view model that is passed to the custom API
    /// </summary>
    public class RegistrationViewModel
    {
        public string pushChannel { get; set; }

        public List<string> tags { get; set; }
    }
}

The real work here is done by the CreateOrUpdatePushInstallation() method, which uses the Notification Hub SDK to register the device according to my rules. Why write it as a Custom API? Well, I need things provided by virtue of the [MobileAppController] attribute – things like the linked notification hub and authentication. However, that attribute automatically links the controller into the /api namespace, defeating my intent of replacing the push installation endpoint. There are ways of breaking that association, but is it worth the effort? My thought is no, which is why I switched over to a Custom API. I get finer control over the invokeApi call rather than worrying about whether the Azure Mobile Apps SDK is doing something weird.

Wrap Up

I wanted to send two important messages here. Firstly, use the power of Notification Hubs by taking charge of the registration process yourself. Secondly, do the logic in the server – not the client. It’s so tempting to say “just do what my client says”, but remember rogue operators don’t think that way – you need to protect the services that you pay for so that only you are using them and you can only effectively do that from the server.

Next time, I’ll take a look at a common pattern for push that will improve the offline performance of your application. Until then, you can find the code on my GitHub Repository.

30 Days of Zumo.v2 (Azure Mobile Apps): Day 20 – Custom API

Thus far, I’ve covered authentication and table controllers in both the ASP.NET world and the Node.js world. I’ve got two clients – an Apache Cordova one and a Universal Windows one – and I’ve got two servers – a Node.js one and an ASP.NET one. I’ve looked at what it takes to bring in existing SQL tables. It’s time to move on.

Not everything you want to do fits into a nice table controller. Sometimes, you need to do something different. Take, for example, the application key. When we had Mobile Services, the API had an application key. It was meant to secure “the API” – in other words, only your applications could access the API. Others would need to know the application key to get into the API. This is insanely insecure and easily defeated. Anyone downloading your app and installing a MITM sniffer will be able to figure out the application key – it’s in a header, after all. Then, all the attacker needs to do is use the REST endpoint with your application key, and your API is as open as before. It’s trivial – which is why pretty much no one who understands security will produce an API with an application key any more. It doesn’t buy you anything.

How about a secure approach? When you have a mobile app out there, you have to register it with the various app stores – the Google Play Store, Apple iTunes or the Microsoft App Store. The only apps that can use the push notification systems (GCM for Google, APNS for Apple and WNS for Microsoft) are registered apps. So, use a Custom API to request a token. The token is sent via the push notification scheme for the device and is unique to the session. Add that token to the headers, and your API looks for it. This technique is really secure, but it relies on your application being able to receive push notifications and needs your application registered with the stores. In addition, push notifications sometimes take time. Would you want the first experience of your app to be a five-minute delay for “registration”?

There is a middle ground. Use a Custom API to create a per-device token. The token can only be used for a certain amount of time before it expires, thus limiting the exposure. Each time the token expires, it must be re-acquired from the server. It isn’t fully secure – your API can still get hijacked. However, it makes the process much more costly, and that, in the end, is probably enough.
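
Jumping ahead a little: the implementations below all sign a JWT with the installation ID as the key. Checking such a token on a later request might look like this sketch, using the same jose-jwt library as the ASP.NET version below – TryDecodeKey is a hypothetical helper, and note that jose-jwt only verifies the signature, so the exp claim still needs an explicit check:

using System.Text;
using Jose;

public static class KeyValidator
{
    // Returns the decoded payload if the signature checks out, null otherwise.
    public static string TryDecodeKey(string signedJwt, string installationId)
    {
        try
        {
            byte[] secretKey = Encoding.ASCII.GetBytes(installationId);
            return JWT.Decode(signedJwt, secretKey, JwsAlgorithm.HS256);
        }
        catch (JoseException)
        {
            // Bad signature or malformed token
            return null;
        }
    }
}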

Version 1: The Node.js Easy API

You can use the Easy API if you meet all the following criteria:

  • You have created the server with the Node.js Quickstart
  • You have not modified the main application code

If you followed Day 1, then this doesn’t apply to you. Easy Tables and Easy API are only available with a specially configured server that is deployed when you use the Quickstart deployment. Any other deployment pretty much doesn’t work.

Here is how to use Easy API after creating the server. Firstly, go to the Settings menu for your App Service and click on the Easy APIs option. (If you do not have access to Easy APIs, then this will also tell you – in which case, use Version 2 instead). Click on the + Add button and fill in the form:

[Screenshot: the Easy APIs blade – adding a new API with only the GET method enabled and anonymous access selected.]

I’m only going to access this API via GET, so I’ve disabled the others. For the GET API, I’m enabling anonymous access. I can also select authenticated access. Easy APIs integrates with your regular mobile authentication – the same authentication token used for table access.

Once the API is created, click on the API and then click on Edit script. This will open Visual Studio Online, where you can edit the script directly in the browser. A blueprint has been implemented for me:

module.exports = {
    //"get": function (req, res, next) {
    //}
}

Not much there. Here is the version I’m going to use:

var md5 = require('md5');
var jwt = require('jsonwebtoken');

module.exports = {
    "get": function (req, res, next) {
        var d = new Date();
        var now = d.getUTCFullYear() + '-' + (d.getUTCMonth() + 1) + '-' + d.getUTCDate();
        console.info('NOW = ', now);
        var installID = req.get('X-INSTALLATION-ID');
        console.info('INSTALLID = ', installID);
        
        if (typeof installID === 'undefined') {
            console.info('NO INSTALLID FOUND');
            res.status(400).send({ error: "Invalid Installation ID" });
            return;
        }
        
        var subject = now + installID;
        var token = md5(subject);
        console.info('TOKEN = ', token);
        
        var payload = {
            token: token
        };
        
        var options = {
            expiresIn: '4h',
            audience: installID,
            issuer: process.env.WEBSITE_SITE_NAME || 'unk',
            subject: subject
        };
        
        var signedJwt = jwt.sign(payload, installID, options);
        res.status(200).send({ jwt: signedJwt });
    }
};

This won’t work yet – the md5 and jsonwebtoken modules are not yet available. I can install these through Kudu. Go back to the Azure Portal, select your App Service, then Tools, followed by Kudu. Click on the PowerShell version of the Debug console, change directory into site/wwwroot, then type the following into the console:

npm install --save md5 jsonwebtoken

Did you know you can download your site for backup at any time from here? Just click on the Download icon next to the wwwroot folder.

Version 2: The Node.js Custom API

If you aren’t a candidate for the Easy API, then you can still use a Custom API with the same code. However, you need to add the Custom API into your code yourself. Place the code below into the api/createKey.js file and add the npm packages to the package.json file.

In the Easy API version, there is also a createKey.json file. In the Custom API version, the authentication information is placed in the JavaScript file, like this:

var md5 = require('md5');
var jwt = require('jsonwebtoken');

var api = {
    "get": function (req, res, next) {
        var d = new Date();
        var now = d.getUTCFullYear() + '-' + (d.getUTCMonth() + 1) + '-' + d.getUTCDate();
        console.info('NOW = ', now);
        var installID = req.get('X-INSTALLATION-ID');
        console.info('INSTALLID = ', installID);
        
        if (typeof installID === 'undefined') {
            console.info('NO INSTALLID FOUND');
            res.status(400).send({ error: "Invalid Installation ID" });
            return;
        }
        
        var subject = now + installID;
        var token = md5(subject);
        console.info('TOKEN = ', token);
        
        var payload = {
            token: token
        };
        
        var options = {
            expiresIn: '4h',
            audience: installID,
            issuer: process.env.WEBSITE_SITE_NAME || 'unk',
            subject: subject
        };
        
        var signedJwt = jwt.sign(payload, installID, options);
        res.status(200).send({ jwt: signedJwt });
    }
};

api.get.access = 'anonymous';

module.exports = api;

In addition, the custom API system must be loaded in the main server.js file:

var express = require('express'),
    serveStatic = require('serve-static'),
    azureMobileApps = require('azure-mobile-apps'),
    authMiddleware = require('./authMiddleware');

// Set up a standard Express app
var webApp = express();

// Set up the Azure Mobile Apps SDK
var mobileApp = azureMobileApps();
mobileApp.use(authMiddleware);
mobileApp.tables.import('./tables');
mobileApp.api.import('./api');

// Create the public app area
webApp.use(serveStatic('public'));

// Initialize the Azure Mobile Apps, then start listening
mobileApp.tables.initialize().then(function () {
    webApp.use(mobileApp);
    webApp.listen(process.env.PORT || 3000);
});

Once published (or, if you are doing continuous deployment, once you check the code into the relevant branch of your source control system), this will operate exactly the same as the Easy API version.

Version 3: The Node.js Custom Middleware

Both the Easy API and Custom API use the same underlying code to do the implementation. You have access to the whole Azure Mobile Apps environment (more on that in a later blog post). However, you are limited in the routes that you can use. You have four verbs (so no HEAD, for example) and very little in the way of variable routes. Sometimes, you want to take control of the routes and verbs. Maybe you want to produce a composed API that has a two-level Id structure, or you are really into doing REST “properly” (which isn’t much, but there are some accepted norms). There are many constraints to the Easy API / Custom API route in Node.js – most notably that the routes are relatively simple. Fortunately, the Node.js SDK uses ExpressJS underneath, so you can just spin up a Router and do the same thing. I’ve placed the following code in the server.js file:

var express = require('express'),
    serveStatic = require('serve-static'),
    azureMobileApps = require('azure-mobile-apps'),
    authMiddleware = require('./authMiddleware'),
    customRouter = require('./customRouter');

// Set up a standard Express app
var webApp = express();

// Set up the Azure Mobile Apps SDK
var mobileApp = azureMobileApps();
mobileApp.use(authMiddleware);
mobileApp.tables.import('./tables');
mobileApp.api.import('./api');

// Create the public app area
webApp.use(serveStatic('public'));

// Initialize the Azure Mobile Apps, then start listening
mobileApp.tables.initialize().then(function () {
    webApp.use(mobileApp);
    webApp.use('/custom', customRouter);
    webApp.listen(process.env.PORT || 3000);
});

Note that I’m putting the custom middleware after I’ve added the Azure Mobile App to the ExpressJS app. Ordering is important here – if I place it before, then authentication and table controllers will not be available – I might need those later on. The customRouter object must export an express.Router:

var express = require('express');
var jwt = require('jsonwebtoken');
var md5 = require('md5');

var router = express.Router();

router.get('/createKey', function (req, res, next) {
    var d = new Date();
    var now = d.getUTCFullYear() + '-' + (d.getUTCMonth() + 1) + '-' + d.getUTCDate();
    console.info('NOW = ', now);
    var installID = req.get('X-INSTALLATION-ID');
    console.info('INSTALLID = ', installID);

    if (typeof installID === 'undefined') {
        console.info('NO INSTALLID FOUND');
        res.status(400).send({ error: "Invalid Installation ID" });
        return;
    }

    var subject = now + installID;
    var token = md5(subject);
    console.info('TOKEN = ', token);

    var payload = {
        token: token
    };

    var options = {
        expiresIn: '4h',
        audience: installID,
        issuer: process.env.WEBSITE_SITE_NAME || 'unk',
        subject: subject
    };

    var signedJwt = jwt.sign(payload, installID, options);
    res.status(200).send({ jwt: signedJwt });
});

module.exports = router;

The actual code here is identical once you get past the change to an ExpressJS Router – in fact, I can put the algorithm in its own library to make it easier to include. The advantage of this technique is flexibility, but at the expense of complexity. I can easily add any routing scheme and use any verb since I’m just using the ExpressJS SDK. It really depends on your situation as to whether the complexity is worth it. This technique is really good for producing composed APIs where you have really thought out the mechanics of the API (as opposed to Easy API which is really good for a one-off piece of functionality). My advice is to either use Custom Middleware or Custom APIs though – don’t mix and match.

Note that this technique does not put APIs under /api – the Azure Mobile Apps SDK takes this over (which is part of the reason why you shouldn’t mix and match).

Version 4: The ASP.NET Custom API

Finally, let’s talk about ASP.NET implementation. There is already a well-known implementation for APIs in ASP.NET, so just do the same thing! The only difference is some syntactic sugar to wire up the API into the right place and to handle responses in such a way that our application can handle them. To add a custom controller, right-click on the Controllers node and use Add -> Controller… to add a new controller. The Azure Mobile Apps Custom Controller should be right at the top:

[Screenshot: the Add Controller dialog, with the Azure Mobile Apps Custom Controller template at the top of the list.]

Here is the default scaffolding:

using System.Web.Http;
using Microsoft.Azure.Mobile.Server.Config;

namespace backend.dotnet.Controllers
{
    [MobileAppController]
    public class CreateKeyController : ApiController
    {
        // GET api/CreateKey
        public string Get()
        {
            return "Hello from custom controller!";
        }
    }
}

The important piece here is the [MobileAppController] – this will wire the API controller into the right place and register some handlers so the objects are returned properly. I expanded on this in a similar way to my Node.js example:

using System.Web.Http;
using Microsoft.Azure.Mobile.Server.Config;
using System.Web;
using System.Net;
using System;
using System.Security.Cryptography;
using System.Text;
using System.Diagnostics;
using System.Collections.Generic;
using Jose;

namespace backend.dotnet.Controllers
{
    [MobileAppController]
    public class CreateKeyController : ApiController
    {
        // GET api/CreateKey
        public Dictionary<string, string> Get()
        {
            var now = DateTime.UtcNow.ToString("yyyy-M-d");
            Debug.WriteLine($"NOW = {now}");
            var installID = HttpContext.Current.Request.Headers["X-INSTALLATION-ID"];
            if (installID == null)
            {
                throw new HttpResponseException(HttpStatusCode.BadRequest);
            }
            Debug.WriteLine($"INSTALLID = {installID}");

            var subject = $"{now}-{installID}";
            var token = createMD5(subject);
            var issuer = Environment.GetEnvironmentVariable("WEBSITE_SITE_NAME");
            if (issuer == null)
            {
                issuer = "unk";
            }
            Debug.WriteLine($"SUBJECT = {subject}");
            Debug.WriteLine($"TOKEN = {token}");

            // JWT exp is expressed in seconds since the epoch
            var expires = (long)(DateTime.UtcNow.AddHours(4) - new DateTime(1970, 1, 1)).TotalSeconds;
            var payload = new Dictionary<string, object>()
            {
                { "aud", installID },
                { "iss", issuer },
                { "sub", subject },
                { "exp", expires },
                { "token", token }
            };

            byte[] secretKey = Encoding.ASCII.GetBytes(installID);
            var result = new Dictionary<string, string>()
            {
                { "jwt", JWT.Encode(payload, secretKey, JwsAlgorithm.HS256) }
            };

            return result;
        }

        /// <summary>
        /// Compute an MD5 hash of a string
        /// </summary>
        /// <param name="input">The input string</param>
        /// <returns>The MD5 hash as a string of hex</returns>
        private string createMD5(string input)
        {
            using (MD5 md5 = MD5.Create())
            {
                byte[] ib = Encoding.ASCII.GetBytes(input);
                byte[] ob = md5.ComputeHash(ib);
                StringBuilder sb = new StringBuilder();
                for (int i = 0; i < ob.Length; i++)
                {
                    sb.Append(ob[i].ToString("X2"));
                }
                return sb.ToString();
            }
        }
    }
}

Most of this code is dealing with the C#.NET equivalent of the Node code I posted earlier in the article. I’m using jose-jwt to implement the JWT signing. The algorithm is identical, so you should be able to use the same client code with either a Node or ASP.NET backend. Want it authenticated? Just add an [Authorize] annotation to the method.

Testing the API

In all cases, you should be able to do a Postman request to GET /api/createKey (or /custom/createKey if you are using the Node custom middleware technique) with an X-INSTALLATION-ID header that is a unique ID (specifically, a GUID):

[Screenshot: a Postman GET request to /api/createKey with an X-INSTALLATION-ID header, returning a signed JWT.]

If you don’t submit an X-INSTALLATION-ID, then you should get a 400 Bad Request error.
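
If you prefer code to Postman, a quick check from a console app might look like this sketch (the site URL is a placeholder for your own App Service):

using System;
using System.Net.Http;

class CreateKeyCheck
{
    static void Main()
    {
        using (var client = new HttpClient())
        {
            // Use "/custom/createKey" instead for the Node custom middleware version
            client.BaseAddress = new Uri("https://your-site.azurewebsites.net");
            client.DefaultRequestHeaders.Add("X-INSTALLATION-ID", Guid.NewGuid().ToString());

            var response = client.GetAsync("/api/createKey").GetAwaiter().GetResult();
            Console.WriteLine((int)response.StatusCode);   // 200, or 400 without the header
            Console.WriteLine(response.Content.ReadAsStringAsync().GetAwaiter().GetResult());
        }
    }
}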

What are Custom APIs good for?

I use this type of custom API commonly to provide additional settings to my clients or to kick off a process. Some examples of simple Custom APIs:

  • Push to a Tag from a client device
  • Get enabled features for a client
  • Get an Azure Storage API Key for uploading files

The possibilities are limited only by what you can dream up.

What are Custom APIs not good for?

Custom APIs are not good candidates for offline usage. There are ways to queue up changes for synchronization when the client comes back online, but these generally end up being a hacked-up version of a table controller – the client inserts a record into an offline table and, when it syncs, the backend runs the custom logic during the insert operation. I cringe even writing that. A better idea would be to implement a proper offline queue mechanism. In any case, custom APIs are not a good fit for offline sync scenarios.

Next Steps

I only covered the various server APIs this time. In the next article, I’ll take a look at calling the custom API from the clients and adjusting the request properties so that special headers can be inserted. After that, I’m going to cover accessing the Azure Mobile Apps data and authentication objects from within your custom API so that you can do some interesting things with data.

Until then, you can check all four implementations at my GitHub Repository.

30 Days of Zumo.v2 (Azure Mobile Apps): Day 18 – ASP.NET Authentication

I introduced the ASP.NET backend in my last article, but it was rather a basic backend. It just did the basic TodoItem single table controller with no authentication. Today, I’m going to integrate the Azure Authentication / Authorization and adjust the table controller to produce a personal table – similar to the Node.js environment I posted about much earlier in the series.

If you followed along the journey so far, your backend is already configured for Authentication / Authorization. If you are using a new site for the ASP.NET backend, you may want to go back to Day 3 and read about setting up Authentication again.

Setting up the Project

The team has split the NuGet packages for Azure Mobile Apps up significantly so you only have to take what you need. You need to add the following NuGet package to your project:

  • Microsoft.Azure.Mobile.Server.Authentication

You will also need to edit your App_Start/AzureMobile.cs file to take account of authentication:

using Owin;
using System.Configuration;
using System.Data.Entity;
using System.Web.Http;
using Microsoft.Azure.Mobile.Server;
using Microsoft.Azure.Mobile.Server.Authentication;
using Microsoft.Azure.Mobile.Server.Config;
using Microsoft.Azure.Mobile.Server.Tables.Config;
using backend.dotnet.Models;

namespace backend.dotnet
{
    public partial class Startup
    {
        public static void ConfigureMobileApp(IAppBuilder app)
        {
            HttpConfiguration config = new HttpConfiguration();

            // Configure the Azure Mobile Apps section
            new MobileAppConfiguration()
                .AddTables(
                    new MobileAppTableConfiguration()
                        .MapTableControllers()
                        .AddEntityFramework())
                .MapApiControllers()
                .ApplyTo(config);

            // Initialize the database with EF Code First
            Database.SetInitializer(new AzureMobileInitializer());

            MobileAppSettingsDictionary settings = config.GetMobileAppSettingsProvider().GetMobileAppSettings();
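            // HostName is only populated when running inside App Service, so this block
            // configures local JWT validation for development environments only.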
            if (string.IsNullOrEmpty(settings.HostName))
            {
                app.UseAppServiceAuthentication(new AppServiceAuthenticationOptions
                {
                    SigningKey = ConfigurationManager.AppSettings["SigningKey"],
                    ValidAudiences = new[] { ConfigurationManager.AppSettings["ValidAudience"] },
                    ValidIssuers = new[] { ConfigurationManager.AppSettings["ValidIssuer"] },
                    TokenHandler = config.GetAppServiceTokenHandler()
                });
            }

            // Link the Web API into the configuration
            app.UseWebApi(config);
        }
    }
}

There are some extra packages to deal with, and then I need to set up the authentication itself. The Authentication / Authorization provider requires me to configure it with the JWT signing key, valid audience and valid issuer. Note that this is also how I could deal with custom authentication from a provider like Auth0 – just set up the signing key, audience and issuer and let Azure Mobile Apps deal with it.

Want to do local debugging with user authentication? Check out this blog post.

In order for the app settings to work, I need to add the app settings I am using to the web.config file:

  <appSettings>
    <add key="webpages:Version" value="3.0.0.0" />
    <add key="webpages:Enabled" value="false" />
    <add key="ClientValidationEnabled" value="true" />
    <add key="UnobtrusiveJavaScriptEnabled" value="true" />
    <add key="SigningKey" value="READ FROM AZURE"/>
    <add key="ValidAudience" value="https://{yoursite}.azurewebsites.net"/>
    <add key="ValidIssuer" value="https://{yoursite}.azurewebsites.net"/>
  </appSettings>

It actually doesn’t matter what value is there – the values will be overwritten by the Azure App Service when it runs. You can put your development values in there if you like.

Configuring a Table Controller

Now that I have configured the project, I can configure a table controller. This amounts to adding the standard [Authorize] attribute to the methods and/or controllers I want to protect.

Note: One of the common problems is developers saying that “things are always authenticated, even if I don’t want them to be”. It’s likely you set the Authentication / Authorization setting to always require authentication – set it to allow anonymous connections through instead, and you can then control which routes require authentication with the [Authorize] attribute.

My personal table requires the entire table to be authenticated, so I just add the [Authorize] attribute to the entire class, like this:

namespace backend.dotnet.Controllers
{
    [Authorize]
    public class TodoItemController : TableController<TodoItem>
    {
        protected override void Initialize(HttpControllerContext controllerContext)
        {
            base.Initialize(controllerContext);
            MyDbContext context = new MyDbContext();
            DomainManager = new EntityDomainManager<TodoItem>(context, Request);
        }

        // ... the table actions follow, as shown below ...
    }
}

The Personal Table DTO

My original DTO needs to be updated in preparation for the personal table:

using Microsoft.Azure.Mobile.Server;

namespace backend.dotnet.DataObjects
{
    public class TodoItem : EntityData
    {
        public string UserId { get; set; }

        public string Text { get; set; }

        public bool Complete { get; set; }
    }
}

Since this is Entity Framework, I would normally need to do an Entity Framework Code First Migration to get that field onto my database. You can find several walk-throughs of the process online. This isn’t an Entity Framework blog, so I’ll leave that process to better minds than mine. Just know that you have to deal with this aspect when using the ASP.NET backend. (Node deals with this via dynamic schema adjustments).
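That said, the basic flow is short enough to note here. From the Package Manager Console, it looks something like this (the migration name is mine – pick whatever describes the change):

# Package Manager Console
Enable-Migrations
Add-Migration AddUserIdToTodoItem
Update-Database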

Dealing with Claims

When using the Azure Mobile Apps SDK, the User (technically, HttpContext.User) is available within your table controller. It’s specified as a ClaimsPrincipal and you can read it like this:

        private string GetAzureSID()
        {
            var principal = this.User as ClaimsPrincipal;
            var sid = principal.FindFirst(ClaimTypes.NameIdentifier).Value;
            return sid;
        }

I don’t want the Security ID. I want the email address of the user. To do that, I need to delve deeper:

        private async Task<string> GetEmailAddress()
        {
            var credentials = await User.GetAppServiceIdentityAsync<AzureActiveDirectoryCredentials>(Request);
            return credentials.UserClaims
                .Where(claim => claim.Type.EndsWith("/emailaddress"))
                .First<Claim>()
                .Value;
        }

The User.GetAppServiceIdentityAsync() method returns all the information contained in the /.auth/me endpoint, placed into a class so you can work with it. The claims are in the UserClaims property, which returns an IEnumerable&lt;Claim&gt; – a Claim is something with a Type and a Value. The email address claim type is actually something like http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress – but it may be something else on Facebook, for example. To stay provider-agnostic, I just require the claim type to end with emailaddress and take the first match.
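If you are unsure what claim types a particular provider returns, it helps to dump them all first. A throwaway sketch for debugging (the method name is mine):

        private async Task DumpUserClaims()
        {
            var credentials = await User.GetAppServiceIdentityAsync<AzureActiveDirectoryCredentials>(Request);
            foreach (var claim in credentials.UserClaims)
            {
                Debug.WriteLine($"{claim.Type} = {claim.Value}");
            }
        }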

Adjusting the controller response

I’m going to need to do some adjustments to the various endpoints in the table controller to use this.

GetAll

The GetAllTodoItem() method uses the Query() method to construct a query based on the inbound OData query. I need to adjust that with a LINQ Where() clause for the UserId:

        // GET tables/TodoItem
        public async Task<IQueryable<TodoItem>> GetAllTodoItem()
        {
            Debug.WriteLine("GET tables/TodoItem");
            var emailAddr = await GetEmailAddress();
            return Query().Where(item => item.UserId == emailAddr);
        }

There are lots of things you can do with a Query() object, so this is a great area for experimentation.
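For example, a variant sketch that additionally hides completed items server-side (my own tweak, not part of the sample – it would replace the method above):

        // GET tables/TodoItem - only my incomplete items
        public async Task<IQueryable<TodoItem>> GetAllTodoItem()
        {
            var emailAddr = await GetEmailAddress();
            return Query().Where(item => item.UserId == emailAddr && !item.Complete);
        }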

GetItem

I can also use a similar query for the GetItem method:

        // GET tables/TodoItem/48D68C86-6EA6-4C25-AA33-223FC9A27959
        public async Task<SingleResult<TodoItem>> GetTodoItem(string id)
        {
            Debug.WriteLine($"GET tables/TodoItem/{id}");
            var emailAddr = await GetEmailAddress();
            var result = Lookup(id).Queryable.Where(item => item.UserId == emailAddr);
            return new SingleResult<TodoItem>(result);
        }

The Lookup() method returns a Queryable with 0 or 1 entries. I then use LINQ to further filter based on the email address, before re-constituting the result into a SingleResult object. I find it’s easier to read (and test) when returning objects rather than IHttpActionResults. However, you can use whatever you are most comfortable with.

PatchItem and DeleteItem

The PATCH and DELETE handlers are so similar that I’ll cover them together. I’ll walk through the PATCH version here; a sketch of the DELETE version follows the logic summary below:

        // PATCH tables/TodoItem/48D68C86-6EA6-4C25-AA33-223FC9A27959
        public async Task<TodoItem> PatchTodoItem(string id, Delta<TodoItem> patch)
        {
            Debug.WriteLine($"PATCH tables/TodoItem/{id}");
            var item = Lookup(id).Queryable.FirstOrDefault<TodoItem>();
            if (item == null)
            {
                throw new HttpResponseException(HttpStatusCode.NotFound);
            }
            var emailAddr = await GetEmailAddress();
            if (item.UserId != emailAddr)
            {
                throw new HttpResponseException(HttpStatusCode.Forbidden);
            }
            return await UpdateAsync(id, patch);
        }

In this version, the logic is:

  • Look up the item – if it isn’t there, produce a 404 Not Found response
  • Check that it belongs to me – if not, produce a 403 Forbidden response
  • Update the record and return it
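
As promised, the DELETE version follows the same lookup-then-check pattern. A sketch of what it looks like (the full version is in my GitHub repository):

        // DELETE tables/TodoItem/48D68C86-6EA6-4C25-AA33-223FC9A27959
        public async Task DeleteTodoItem(string id)
        {
            Debug.WriteLine($"DELETE tables/TodoItem/{id}");
            var item = Lookup(id).Queryable.FirstOrDefault<TodoItem>();
            if (item == null)
            {
                throw new HttpResponseException(HttpStatusCode.NotFound);
            }
            var emailAddr = await GetEmailAddress();
            if (item.UserId != emailAddr)
            {
                throw new HttpResponseException(HttpStatusCode.Forbidden);
            }
            await DeleteAsync(id);
        }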

PostItem

Finally, the PostItem is relatively easy:

        // POST tables/TodoItem
        public async Task<IHttpActionResult> PostTodoItem(TodoItem item)
        {
            Debug.WriteLine($"POST tables/TodoItem");
            var emailAddr = await GetEmailAddress();
            item.UserId = emailAddr;
            TodoItem current = await InsertAsync(item);
            return CreatedAtRoute("Tables", new { id = current.Id }, current);
        }

This version overwrites whatever the user supplied with the authenticated information.

Publishing

When publishing, don’t forget to use a Code First Migration to get the extra field into the table. I must admit that I cheated here and just wiped out my database table. You can browse your database directly from Visual Studio. Open the Server Explorer, expand the Azure node (you will need to enter your Azure credentials), then expand the SQL Databases node. Finally, right-click on the database and select Open in SQL Server Object Explorer.

[Screenshot: day-18-server-explorer]

You will have to enter the credentials for your SQL server. You will also have to permit your Client IP to access the database. Once you have done that, you can use Visual Studio to browse your tables and manage your data.

Next Steps

This is actually a huge step forward – I’ve now got equivalent functionality within both the Node.js and ASP.NET backends. I’ll continue to cover both Node.js and ASP.NET equally in the future. Next, however, I’m going to take a look at some final thoughts on ASP.NET controllers – things like soft delete, logging, and using existing tables. Until next time, my code is on my GitHub Repository.

jQuery Form Validation with ASP.NET

I’ve been working on refactoring my various Account area forms so that they look good on the screen. I’ll admit to using a graphic I found on Google Images – I suspect it belongs to Wizards of the Coast, so I don’t want to use it in a production environment. Fortunately, I have a friend who is a lot more artistic than I am and she is producing a background for me. Until then, the graphic is a placeholder.

One of the things I wanted to do during the refactoring is to add some client-side validation. I already have server-side validation and that is staying in there. You should never trust the input coming from the user – there will always be malicious users who will try to circumvent your controls. However, client side validation gives the user more immediate feedback since it does not involve a round trip to the server.

To do this, I’m going to lean on the jQuery Validation Plugin as it does a good portion of what I want and has minimal configuration. My registration form is based on my RegisterAccountVM view-model, which has three fields – Email, Password and ConfirmPassword. I want the email address to be required and valid, the password to be between 6 and 128 characters and meet complexity requirements, and the confirm password to equal the password. I can handle everything except the complex password with the jQuery Validation Plugin’s standard configuration, like this:

    $("#Account form").validate({
        rules: {
            Email: {
                required: true,
                email: true
            },
            Password: {
                required: true,
                minlength: 6,
                maxlength: 128,
                complexPassword: true
            },
            ConfirmPassword: {
                required: true,
                minlength: 6,
                maxlength: 128,
                equalTo: "#regPasswordField"
            }
        }
    });

Note that the keys in the rules object are the name field of the input which, in ASP.NET MVC, are also the name of the fields in the view model.
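For context, the view model behind this form looks something like the sketch below, using the standard DataAnnotations attributes for the server side. This is an approximation – the real class (including the server-side complexity check) lives in the Account area:

using System.ComponentModel.DataAnnotations;

public class RegisterAccountVM
{
    [Required, EmailAddress]
    public string Email { get; set; }

    [Required, StringLength(128, MinimumLength = 6)]
    public string Password { get; set; }

    [Required, Compare("Password")]
    public string ConfirmPassword { get; set; }
}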

The complexPassword rule I’ve listed under Password is non-standard – I need a custom validator to handle the complexity check. My requirement is that the password must contain at least one character from each of four groups: upper case, lower case, numeric and symbols. To do this, I use a recipe from the documentation:

    jQuery.validator.addMethod("complexPassword", function(value, element) {
        // Min to Max length is already handled - just have to handle complexity
        var hasUpper = false, hasLower = false, hasNumeric = false, hasSymbol = false;

        for (var i = 0 ; i < value.length ; i++) {
            var ch = value.charAt(i);
            if ("ABCDEFGHIJKLMNOPQRSTUVWXYZ".indexOf(ch) !== -1)
                hasUpper = true;
            if ("abcdefghijklmnopqrstuvwxyz".indexOf(ch) !== -1)
                hasLower = true;
            if ("0123456789".indexOf(ch) !== -1)
                hasNumeric = true;
            if ("!@@#$%^&*()_-+=|\}{[]:;''?/.,".indexOf(ch) !== -1)
                hasSymbol = true;
        }
        return (hasUpper && hasLower && hasNumeric && hasSymbol);
    }, "Password must be more complex");

My Areas/Main/Views/Layout.cshtml file contains a section for the scripts, defined like this:

    <!-- BootStrap Javascript Dependencies -->
    <script src="~/jspm_packages/github/components/jquery@2.1.3/jquery.min.js"></script>
    <script src="~/jspm_packages/github/twbs/bootstrap@3.3.4/js/bootstrap.min.js"></script>

    <!-- JSPM Boot Loader -->
    <script src="~/jspm_packages/system.js"></script>
    <script src="~/config.js"></script>

    <!-- Page Scripts -->
    @RenderSection("scripts", required: false)
</body>
</html>

The RenderSection call is used to insert the scripts section from my view. That means I need to add the following to the bottom of my Areas/Account/Views/RegisterAccount/Index.cshtml file:

@section scripts {
<script src="~/jspm_packages/github/jzaefferer/jquery-validation@1.13.1/dist/jquery.validate.min.js"></script>
<script>
    // Add a custom rule to jquery validation
    jQuery.validator.addMethod("complexPassword", function(value, element) {
        // Min to Max length is already handled - just have to handle complexity
        var hasUpper = false, hasLower = false, hasNumeric = false, hasSymbol = false;

        for (var i = 0 ; i < value.length ; i++) {
            var ch = value.charAt(i);
            if ("ABCDEFGHIJKLMNOPQRSTUVWXYZ".indexOf(ch) !== -1)
                hasUpper = true;
            if ("abcdefghijklmnopqrstuvwxyz".indexOf(ch) !== -1)
                hasLower = true;
            if ("0123456789".indexOf(ch) !== -1)
                hasNumeric = true;
            if ("!@@#\$\%^&*()_-+=|\\}{[]:;\"'<>?/.,".indexOf(ch) !== -1)
                hasSymbol = true;
        }
        return (hasUpper && hasLower && hasNumeric && hasSymbol);
    }, "Password must be more complex");

    $("#Account form").validate({
        rules: {
            Email: {
                required: true,
                email: true
            },
            Password: {
                required: true,
                minlength: 6,
                maxlength: 128,
                complexPassword: true
            },
            ConfirmPassword: {
                required: true,
                minlength: 6,
                maxlength: 128,
                equalTo: "#regPasswordField"
            }
        }
    });
</script>
}

I’ve done some other work in the refactoring, including changing main.less to include an Account.less file rather than keeping the separate login.less file. I’ve also refactored all the Account views to handle my new format, and updated the form in the ForgotPassword workflow to have the same sort of validation as the account registration. The complexPassword definition has moved into its own JavaScript file so that the same code can be reused in both the ForgotPassword and RegisterAccount views – I suspect I will want it in some sort of profile page in the future as well. Finally, I adjusted the Gulp/javascript.js file to account for the jQuery global so I could use eslint on the new file.

One other thing to note. I had a hell of a time with Visual Studio 2015 CTP 6 today. It decided it wanted to hang on processing Javascript and Less files constantly. As a result of this, I switched my editor (there is only so much frustration one can take) and used gulp build followed by k web to run the web site. I didn’t actually use Visual Studio much today at all. Hopefully, the next build of Visual Studio will be released at BUILD at the end of the month (just one week away) and I can try that out instead.

You can check out the code at tag cs-0.0.8.

Introducing my new Side Project

With all this research and the blogging about it, one could wonder what the point of it all is. Well, I have a point, and that point is my side project. I have been a sometimes developer for a long time. I’m definitely not the one you want writing the next blockbuster application, but I get by – mostly by struggling for hours with simple code. This year I decided to actually spend the time to become at least as proficient as a college graduate programmer. I learn by doing, so I decided to direct my attention at a particular side project.

That side project is an online application that emulates a Dungeons and Dragons character sheet. Since Dungeons and Dragons is generally a tabletop paper-and-pencil game, the character sheets – where you write down all the statistics about your character – are similarly paper-driven. I figured this would be a good time to update this for a tablet world. There are likely to be three parts to this application:

  1. An online portal that you can use to view and manage your characters
  2. A Web API so that I can write other (offline, perhaps) applications to use the data
  3. A Windows “Modern” application for a tablet experience

All of this, of course, should use the absolutely latest and greatest technologies. I will use ASP.NET vNext for the backend with Entity Framework 7 doing the database work for me. I’ll host the application in Azure App Services so that it is always available.

The front end work also will get the latest and greatest treatment. All the code will use ECMAScript 6, style sheets will be coded in LESS and I’ll use the latest thinking in Web Components with perhaps a touch of Polymer.

In terms of build environment, I’m opting for Visual Studio 2015 for my main IDE; jspm for my module handling; gulp for my client-side build automation. I’ll use babel, autoprefixer and other tools as they are appropriate.

Starting with Identity

My starting point was the recent ASP.NET Identity Tutorial that I wrote. There are nine parts to it:

  1. Setting up the Database
  2. The Login Process
  3. Registration
  4. The Registration Callback
  5. Forgotten Passwords
  6. Refactoring for Areas
  7. Logging
  8. Transient Services for the User Profile
  9. Wrapping up some bugs

If you are following along, I suggest you start with these nine articles as they have all been included in the character sheet initial version. Aside from that, I’ve done some styling work to make my Account screens look like the application I envision.

Where is the Code

Each section check-in will be tagged in the blog-code repository on GitHub, and the version will be revved for each major section. Right now, I’m at cs-0.0.1. The project is called CharacterSheet.

Cloning the Repository

You can clone the repository directly within Visual Studio. Just use View -> Team Explorer. Click on the green plug (Connect to Team Projects). You should see a section for Local Git Repositories. Click on Clone:

[Screenshot: blog-code-0412-1]

Enter the information as above, selecting the location on your disk (not mine). By default, Visual Studio will pick a good place for you. Currently, the repository is small so it won’t take too long to clone. Once that is done, you can double-click on the repository to be taken to the Solutions:

[Screenshot: blog-code-0412-2]

Double-click on the CharacterSheet.sln solution to open up the project. You will need to manually select the Solution Explorer after you have done this.

Preparing the Solution

Visual Studio 2015 CTP 6 does not have support for jspm, so the package restore won’t happen automatically – you have to do it yourself. Open up a PowerShell prompt, install jspm globally (npm install -g jspm), then run jspm install from the project directory. Make sure jspm is on your PATH (or set up an alias) as you will need to drop down to a command prompt to install new packages. I’ll let you know when this has to happen.

Visual Studio Extensions

I have a few Visual Studio Extensions installed. All of these extensions can be installed from the Tools -> Extensions and Updates menu option.

  1. Bootstrap Snippet Pack
  2. CommentsPlus
  3. Grunt Launcher
  4. Indent Guides
  5. jQuery Code Snippets
  6. Open Command Line
  7. Regex Tester
  8. Trailing Whitespace Visualizer
  9. Web Essentials 2015.0 CTP 6
  10. SideWaffle Template Pack

I will likely add to this list. Extensions like these make development easier, so I’ll blog about the useful extensions I find along the way as well.

Target Browsers

It’s all well and good developing good responsive design, but you have to test everywhere. For my main machine I have Windows 10 Technical Preview (on the fast track) with the following browsers installed:

  1. Google Chrome 41
  2. Internet Explorer 11
  3. Project Spartan

In addition I have an iPad 3 and a Dell Venue 8 as my tablets. I’ll install other browsers and operating systems on my “other boxes”. I have a Mac Mini for running mac browsers and a Hyper-V box that I can run random operating systems and their browsers on.

Running in Azure

I don’t run my development stuff in Azure. Firstly, it costs money. More importantly, the code is likely to be unstable. I’ll have to figure out the pushing to Azure piece, especially with the database in the mix. I’ll post another blog about that process when I actually do it. I do have an Azure account though; this blog is run out of Azure App Services.

That’s pretty much it for the run-down of my side project. I hope you’ll join me on my journey through web applications and developing my Side Project.