Azure App Service Authentication in an ASP.NET Core Application

In my prior posts, I’ve covered running ASP.NET Core applications in Azure App Service and loading configuration from the App Service environment.

This is a great start. Next on my list is authentication. How do I handle both a web-based and a mobile-based authentication pattern within a single application? ASP.NET Core provides a modular authentication system. (I’m sensing a theme here – everything is modular!) So, in this post, I’m going to cover the basics of authentication, then cover how Azure App Service Authentication works (although Chris Gillum does a much better job than I do and you should read his blog for all the Easy Auth articles), and then introduce an extension to the authentication library that implements Azure App Service Authentication.

Authentication Basics

To implement authentication in ASP.NET Core, you place the following in the ConfigureServices() method of your Startup.cs:

services.AddAuthentication();

Here, services is the IServiceCollection that is passed in as a parameter to the ConfigureServices() method. In addition, you need to call the appropriate UseXXX() extension method within the Configure() method to wire in a specific authentication scheme. Here is an example:

app.UseJwtBearerAuthentication(new JwtBearerOptions
{
    Authority = Configuration["JWT:Authority"],
    Audience = Configuration["JWT:Audience"]
});

Once that is done, your MVC controllers or methods can be decorated with the usual [Authorize] decorator to require authentication. Finally, you need to add the Microsoft.AspNetCore.Authentication NuGet package to your project to bring in the authentication framework.
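
For example, a protected MVC action looks like the following minimal sketch (the controller and action names are illustrative):

using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

public class HomeController : Controller
{
    // Unauthenticated requests to this action receive a 401 response
    [Authorize]
    public IActionResult Configuration()
    {
        return View();
    }
}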

In my project, I’ve added the services.AddAuthentication() method to ConfigureServices() and added an [Authorize] tag to my /Home/Configuration controller method. This means that the configuration viewer that I used last time now needs authentication:

[Screenshot: the Configuration page fails to load with a 401 status code]

That 401 HTTP status code for the Configuration loading is indicative of a failed authentication. 401 is “Unauthorized”. This is completely expected because we have not configured an authentication provider yet.

How App Service Authentication Works

Working with Azure App Service Authentication is relatively easy. A JWT-based token is submitted either as a cookie or as the X-ZUMO-AUTH header. The information necessary to decode that token is provided in environment variables:

  • WEBSITE_AUTH_ENABLED is True if the Authentication system is loaded
  • WEBSITE_AUTH_SIGNING_KEY is the key used to sign the JWT
  • WEBSITE_AUTH_ALLOWED_AUDIENCES is the list of allowed audiences for the JWT

If WEBSITE_AUTH_ENABLED is set to True, decode the X-ZUMO-AUTH header to see if the user is valid. If the user is valid, then do an HTTP GET of {issuer}/.auth/me with the X-ZUMO-AUTH header passed through to get a JSON blob with the claims. If the token is expired or non-existent, then don’t authenticate the user.
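
For reference, the JSON blob that /.auth/me returns is an array with one record per identity. Abbreviated, and with invented values, it looks roughly like this:

[
  {
    "provider_name": "aad",
    "user_id": "user@example.com",
    "access_token": "(token value)",
    "id_token": "(token value)",
    "user_claims": [
      { "typ": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress", "val": "user@example.com" },
      { "typ": "stable_sid", "val": "sid:0123456789abcdef" }
    ]
  }
]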

This has an issue in that you have to make another HTTP call to get the claims. This is a small overhead, and the team is working to fix this for “out of process” services. In-process services, such as PHP and ASP.NET, have access to server variables. The JSON blob that is returned by calling the /.auth/me endpoint is presented as a server variable, so it doesn’t need to be fetched. ASP.NET Core applications are “out of process”, so we can’t use this mechanism.

Configuring the ASP.NET Core Application

In the Configure() method of the Startup.cs file, I need to do something like the following:

            app.UseAzureAppServiceAuthentication(new AzureAppServiceAuthenticationOptions
            {
                SigningKey = Configuration["AzureAppService:Auth:SigningKey"],
                AllowedAudiences = new[] { $"https://{Configuration["AzureAppService:Website:HOST_NAME"]}/" },
                AllowedIssuers = new[] { $"https://{Configuration["AzureAppService:Website:HOST_NAME"]}/" }
            });

This is just pseudo-code right now because neither the UseAzureAppServiceAuthentication() method nor the AzureAppServiceAuthenticationOptions class exists. Fortunately, there are many templates for a successful implementation of authentication. (Side note: I love open source.) The closest one to mine is the JwtBearer authentication implementation. I’m not going to show off the full implementation – you can go check it out yourself. However, the important work is done in the AzureAppServiceAuthenticationHandler file.
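
As a sketch, the options class needs little more than the values used in the Configure() call above. Assuming the ASP.NET Core 1.x AuthenticationOptions base class, it might look like this:

using Microsoft.AspNetCore.Builder;

public class AzureAppServiceAuthenticationOptions : AuthenticationOptions
{
    public AzureAppServiceAuthenticationOptions()
    {
        AuthenticationScheme = "AzureAppService";
        AutomaticAuthenticate = true;
    }

    // The WEBSITE_AUTH_SIGNING_KEY value (hex or base-64 encoded)
    public string SigningKey { get; set; }

    public string[] AllowedAudiences { get; set; }

    public string[] AllowedIssuers { get; set; }
}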

The basic premise is this:

  1. If we don’t have an authentication source (a token), then return AuthenticateResult.Skip().
  2. If we have an authentication source, but it’s not valid, return AuthenticateResult.Fail().
  3. If we have a valid authentication source, decode it, create an AuthenticationTicket and then return AuthenticateResult.Success().

Detecting the authentication source means digging into the Request.Headers[] collection to see if there is an appropriate header. The version I have created supports both the X-ZUMO-AUTH and Authorization headers (for future compatibility):

            // Grab the X-ZUMO-AUTH token if it is available
            // If not, then try the Authorization Bearer token
            string token = Request.Headers["X-ZUMO-AUTH"];
            if (string.IsNullOrEmpty(token))
            {
                string authorization = Request.Headers["Authorization"];
                if (string.IsNullOrEmpty(authorization))
                {
                    return AuthenticateResult.Skip();
                }
                if (authorization.StartsWith("Bearer ", StringComparison.OrdinalIgnoreCase))
                {
                    token = authorization.Substring("Bearer ".Length).Trim();
                }
                // If the Authorization header did not contain a usable Bearer token, skip
                if (string.IsNullOrEmpty(token))
                {
                    return AuthenticateResult.Skip();
                }
            }
            Logger.LogDebug($"Obtained Authorization Token = {token}");

The next step is to validate the token and decode the result. If the service is running inside of Azure App Service, then the validation has already been done for me and I only need to decode the token. If I am running locally, then I should validate the token. The signing key for the JWT is encoded in the WEBSITE_AUTH_SIGNING_KEY environment variable. Theoretically, the WEBSITE_AUTH_SIGNING_KEY can be hex encoded or base-64 encoded; it will be hex-encoded the majority of the time. Using the configuration provider from the last post, this appears as the AzureAppService:Auth:SigningKey configuration variable, and I can place that into the options for the authentication provider during the Configure() method of Startup.cs.

So, what’s the code for validating and decoding the token? It looks like this:

            // Convert the signing key we have to something we can use
            var signingKeys = new List<SecurityKey>();
            // If the signing key is a plain passphrase, use its UTF-8 bytes
            signingKeys.Add(new SymmetricSecurityKey(Encoding.UTF8.GetBytes(Options.SigningKey)));
            // If it's base-64 encoded
            try
            {
                signingKeys.Add(new SymmetricSecurityKey(Convert.FromBase64String(Options.SigningKey)));
            } catch (FormatException) { /* The key was not base 64 */ }
            // If it's hex encoded, then decode the hex and add it
            try
            {
                if (Options.SigningKey.Length % 2 == 0)
                {
                    signingKeys.Add(new SymmetricSecurityKey(
                        Enumerable.Range(0, Options.SigningKey.Length)
                                  .Where(x => x % 2 == 0)
                                  .Select(x => Convert.ToByte(Options.SigningKey.Substring(x, 2), 16))
                                  .ToArray()
                    ));
                }
            } catch (Exception) {  /* The key was not hex-encoded */ }

            // validation parameters
            var websiteAuthEnabled = Environment.GetEnvironmentVariable("WEBSITE_AUTH_ENABLED");
            var inAzureAppService = (websiteAuthEnabled != null && websiteAuthEnabled.Equals("True", StringComparison.OrdinalIgnoreCase));
            var tokenValidationParameters = new TokenValidationParameters
            {
                // The signature must have been created by the signing key
                ValidateIssuerSigningKey = !inAzureAppService,
                IssuerSigningKeys = signingKeys,

                // The Issuer (iss) claim must match
                ValidateIssuer = true,
                ValidIssuers = Options.AllowedIssuers,

                // The Audience (aud) claim must match
                ValidateAudience = true,
                ValidAudiences = Options.AllowedAudiences,

                // Validate the token expiry
                ValidateLifetime = true,

                // If you want to allow clock drift, set that here
                ClockSkew = TimeSpan.FromSeconds(60)
            };

            // validate the token we received
            var tokenHandler = new JwtSecurityTokenHandler();
            SecurityToken validatedToken;
            ClaimsPrincipal principal;
            try
            {
                principal = tokenHandler.ValidateToken(token, tokenValidationParameters, out validatedToken);
            }
            catch (Exception ex)
            {
                Logger.LogError(101, ex, "Cannot validate JWT");
                return AuthenticateResult.Fail(ex);
            }

This only gives us a subset of the claims though. We want to swap out the principal (in this case) with the results of the call to /.auth/me that gives us the actual claims:

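            // NOTE: "client" is assumed to be an HttpClient instance available to the handler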
            try
            {
                client.BaseAddress = new Uri(validatedToken.Issuer);
                client.DefaultRequestHeaders.Clear();
                client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
                client.DefaultRequestHeaders.Add("X-ZUMO-AUTH", token);

                HttpResponseMessage response = await client.GetAsync("/.auth/me");
                if (response.IsSuccessStatusCode)
                {
                    var jsonContent = await response.Content.ReadAsStringAsync();
                    var userRecord = JsonConvert.DeserializeObject<List<AzureAppServiceClaims>>(jsonContent).First();

                    // Create a new ClaimsPrincipal based on the results of /.auth/me
                    List<Claim> claims = new List<Claim>();
                    foreach (var claim in userRecord.UserClaims)
                    {
                        claims.Add(new Claim(claim.Type, claim.Value));
                    }
                    claims.Add(new Claim("x-auth-provider-name", userRecord.ProviderName));
                    claims.Add(new Claim("x-auth-provider-token", userRecord.IdToken));
                    claims.Add(new Claim("x-user-id", userRecord.UserId));
                    var identity = new GenericIdentity(principal.Claims.Where(x => x.Type.Equals("stable_sid")).First().Value, Options.AuthenticationScheme);
                    identity.AddClaims(claims);
                    principal = new ClaimsPrincipal(identity);
                }
                else if (response.StatusCode == System.Net.HttpStatusCode.Unauthorized)
                {
                    return AuthenticateResult.Fail("/.auth/me says you are unauthorized");
                }
                else
                {
                    Logger.LogWarning($"/.auth/me returned status = {response.StatusCode} - skipping user claims population");
                }
            }
            catch (Exception ex)
            {
                Logger.LogWarning($"Unable to get /.auth/me user claims - skipping (ex = {ex.GetType().FullName}, msg = {ex.Message})");
            }

I can skip this phase if I want to by setting an option. The result of this code is a new identity that has all the claims and a “name” that is the same as the stable_sid value from the original token. If the /.auth/me endpoint says the token is bad, I return a failed authentication. For any other error, I log a warning and continue with the claims from the original token.

I could use the UserId field from the user record as the name instead. However, that is not guaranteed to be stable, as users can and do change the email address on their social accounts. I may update this class in the future to keep stable_sid as the default but allow the developer to change it if they desire.

The final step is to create an AuthenticationTicket and return success:

            // Generate a new authentication ticket and return success
            var ticket = new AuthenticationTicket(
                principal, new AuthenticationProperties(),
                Options.AuthenticationScheme);

            return AuthenticateResult.Success(ticket);

You can check out the code on my GitHub Repository at tag p5.

Wrap Up

There is still a little bit to do. Right now, the authentication filter returns a 401 Unauthorized if you try to hit an unauthorized page. This isn’t useful for a web page, but is completely suitable for a web API. It is thus “good enough” for Azure Mobile Apps. If you are using this functionality in an MVC application, then it is likely you want to set up some sort of authorization redirect to a login provider.

In the next post, I’m going to start on the work for Azure Mobile Apps.

Running ASP.NET Core applications in Azure App Service

One of the things I get asked about semi-regularly is when Azure Mobile Apps is going to support .NET Core. It’s a logical progression for most people and many ASP.NET developers are planning future web sites to run on ASP.NET Core. Also, the ASP.NET Core programming model makes a lot more sense (at least to me) than the older ASP.NET applications. Finally, we have an issue open on the subject. So, what is holding us back? Well, there are a bunch of things. Some have been solved already and some need a lot of work. In the coming weeks, I’m going to be writing about the various pieces that need to be in place before we can say “Azure Mobile Apps is there”.

Of course, if you want a mobile backend, you can always hop over to Visual Studio Mobile Center. This provides a mobile backend for you without having to write any code. (Full disclosure: I’m now a program manager on that team, so I may be slightly biased). However, if you are thinking ASP.NET Core, then you likely want to write the code.

Let’s get started with something that does exist. How does one run ASP.NET Core applications on Azure App Service? Well, there are two methods. The first involves uploading your application to Azure App Service via the Visual Studio Publish… dialog or via Continuous Integration from GitHub, Visual Studio Team Services or even Dropbox. It’s a relatively easy method and one I would recommend. There is a gotcha, which I’ll discuss below.

The second method uses a Docker container to house the code that is then deployed onto a Linux App Service. This is still in preview (as of writing), so I can’t recommend this for production workloads.

Create a New ASP.NET Core Application

Let’s say you opened up Visual Studio 2017 (RC right now) and created a brand new ASP.NET Core MVC application – the basis for my research here.

  • Open up Visual Studio 2017 RC.
  • Select File > New > Project…
  • Select the ASP.NET Core Web Application (.NET Core).
    • Fill in an appropriate name for the solution and project, just as normal.
    • Click OK to create the project.
  • Select ASP.NET Core 1.1 from the framework drop-down (it will say ASP.NET Core 1.0 initially)
  • Select Web Application in the ASP.NET Core 1.1 Templates selection.
  • Click OK.

I called my solution netcore-server and the project ExampleServer. At this point, Visual Studio will go off and create a project for you. You can see what it creates easily enough, but I’ve checked it into my GitHub repository at tag p0.

I’m not going to cover ASP.NET Core programming too much in this series. You can read the definitive guide on their documentation site, and I would recommend you start by understanding ASP.NET Core programming before getting into the changes here.

Go ahead and run the service (either as a Kestrel service or an IIS Express service – it works with both). This is just to make sure that you have a working site.

Add Logging to your App

Logging is one of those central things that is needed in any application. There are so many things you can’t do (including diagnose issues) if you don’t have appropriate logging. Fortunately, ASP.NET Core has logging built-in. Let’s add some to the Controllers\HomeController.cs file:

using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;

namespace ExampleServer.Controllers
{
    public class HomeController : Controller
    {
        private ILogger logger;

        public HomeController(ILoggerFactory loggerFactory)
        {
            logger = loggerFactory.CreateLogger(this.GetType().FullName);
        }

        public IActionResult Index()
        {
            logger.LogInformation("In Index of the HomeController");
            return View();
        }
        // Rest of the file here

I’ve added the logger factory via dependency injection, then logged a message whenever the Index file is served in the home controller. If you run this version of the code (available on the GitHub repository at tag p1), you will see the following in your Visual Studio output window:

[Screenshot: the Visual Studio Output window showing the informational log message]

It’s swamped by the Application Insights data, but you can clearly see the informational message there.

Deploy your App to Azure App Service

Publishing to Azure App Service is relatively simple – right-click on the project and select Publish… to kick off the process. The layout of the windows has changed from Visual Studio 2015, but it’s the same process. You can create a new App Service or use an existing one. Once you have answered all the questions, your site will be published. Eventually, your site will be displayed in your web browser.

Turn on Diagnostic Logging

  • Click View > Server Explorer to add the Server Explorer to your workspace.
  • Expand the Azure node, the App Service node, and finally your resource group node.
  • Right-click the app service and select View Settings
  • Turn on logging and set the logging level to verbose:

[Screenshot: the App Service settings pane with application logging turned on and set to Verbose]

  • Click Save to save the settings (the site will restart).
  • Right-click the app service in the server explorer again and this time select View Streaming Logs
  • Wait until you see that you are connected to the log streaming service (in the Output window)

Now refresh your browser so that it reloads the index page again. Note how you see the access logs (which files have been requested) but the log message we put into the code is not there.

The Problem and Solution

The problem is, hopefully, obvious. ASP.NET Core does not by default feed logs to Azure App Service. We need to enable that feature in the .NET Core host. We do this in the Program.cs file:

using System.IO;
using Microsoft.AspNetCore.Hosting;

namespace ExampleServer
{
    public class Program
    {
        public static void Main(string[] args)
        {
            var host = new WebHostBuilder()
                .UseKestrel()
                .UseContentRoot(Directory.GetCurrentDirectory())
                .UseIISIntegration()
                .UseStartup<Startup>()
                .UseApplicationInsights()
                .UseAzureAppServices()
                .Build();

            host.Run();
        }
    }
}

You will also need to add the Microsoft.AspNetCore.AzureAppServicesIntegration package from NuGet for this to work. Once you have made this change, you can deploy it and watch the logs again:

[Screenshot: the streaming logs now include the informational message from the HomeController]

If you have followed the instructions, you will need to switch the Output window back to the Azure logs. The output window will have been switched to Build during the publish process.

Adjusting the WebHostBuilder for the environment

It’s likely that you won’t want Application Insights and Azure App Services logging except when you are running on Azure App Service. There are a number of environment variables that Azure App Service uses and you can leverage these as well. My favorites are REGION_NAME (which indicates which Azure region your service is running in) and WEBSITE_OWNER_NAME (which is a combination of a bunch of things). You can test for these and adjust the pipeline accordingly:

using Microsoft.AspNetCore.Hosting;
using System;
using System.IO;

namespace ExampleServer
{
    public class Program
    {
        public static void Main(string[] args)
        {
            var hostBuilder = new WebHostBuilder()
                .UseKestrel()
                .UseContentRoot(Directory.GetCurrentDirectory())
                .UseIISIntegration()
                .UseStartup<Startup>()
                .UseApplicationInsights();

            var regionName = Environment.GetEnvironmentVariable("REGION_NAME");
            if (regionName != null)
            {
                hostBuilder.UseAzureAppServices();
            }
                
            var host = hostBuilder.Build();

            host.Run();
        }
    }
}

You can download this code at my GitHub repository at tag p2.

The Latest Wrinkle (Updated 4/10/2017)

The latest edition of the WindowsAzure.Storage package has breaking changes, so it can’t be included until a major release. In the interim, you will need to edit your .csproj file and add the following:

<PackageTargetFallback>$(PackageTargetFallback);portable-net40+sl5+win8+wp8+wpa81;portable-net45+win8+wp8+wpa81</PackageTargetFallback>
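
For context, that setting lives inside a <PropertyGroup> in the .csproj; a sketch (keep whatever target framework your project already uses):

<PropertyGroup>
  <TargetFramework>netcoreapp1.1</TargetFramework>
  <PackageTargetFallback>$(PackageTargetFallback);portable-net40+sl5+win8+wp8+wpa81;portable-net45+win8+wp8+wpa81</PackageTargetFallback>
</PropertyGroup>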

30 Days of Zumo.v2 (Azure Mobile Apps): Day 24 – Push with Tags

I introduced push as a concept in the last article, but I left a teaser – push to a subset of users with tags. Tags are really a meta-thing that equates to “interests”, but they are also the way you would implement such things as “push-to-user” and “push-to-group”. They can literally be anything. Before I can get there, though, I need to be able to register for tags.
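
To show where this is heading, the sending side eventually looks something like the following sketch, which uses the Notification Hubs SDK (the connection string, hub name, and payload are invented):

using Microsoft.Azure.NotificationHubs;
using System.Threading.Tasks;

public static async Task PushToNewsTagAsync()
{
    var hub = NotificationHubClient.CreateClientFromConnectionString(
        "<connection-string>", "<hub-name>");

    // Only devices whose installation carries the "News" tag receive this
    await hub.SendGcmNativeNotificationAsync(
        "{\"data\":{\"message\":\"Breaking news!\"}}", "News");
}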

Dirty little secret – the current registration API allows you to request tags, but it actually ignores them. There is a good reason for this – if you allow the client to specify the tags, it may register for tags that it isn’t allowed to. For example, let’s say you implement a tag called “_email:”. Could a user register for a tag with someone else’s email address by “hacking the REST request”? The answer, unfortunately, was yes. That could happen. Don’t let it happen to you.

Today I’m going to implement a custom API that replaces the regular push installations endpoint. My endpoint is going to define two distinct sets of tags – a whitelist of tags that the user can subscribe to (anything not an exact match in the list will be thrown out); and a set of dynamic tags based on the authentication record.

The Client

Before I can do anything, I need to be able to request tags. I’ve got an Apache Cordova app and can do requests for tags simply in the register() method:

    /**
     * Event Handler for response from PNS registration
     * @param {object} data the response from the PNS
     * @param {string} data.registrationId the registration Id from the PNS
     * @event
     */
    function handlePushRegistration(data) {
        var pns = 'gcm';
        var templates = {
            tags: ['News', 'Sports', 'Politics', '_email_myboss@microsoft.com' ]
        };
        client.push.register(pns, data.registrationId, templates);
    }

The registration takes an object called “templates”, which contains the list of tags as an array. All the other SDKs have something similar to this. You will notice that I’ve got three tags that are “normal” and one that is special. I’m going to create a tag list that will strip out the ones I’m not allowed to have. For example, if I list ‘News’ and ‘Sports’ as valid tags, I expect the ‘Politics’ tag to be stripped out. In addition, the ‘_email’ tag should always be stripped out since it is definitely not mine.

Note that a tag cannot start with the $ sign – that’s a reserved symbol for Notification Hubs. Don’t use it.

The Node.js Version

The node.js version is relatively simple to implement, but I had to do some work to coerce the SDK to allow me to register a replacement for the push installations endpoint:

var express = require('express'),
    serveStatic = require('serve-static'),
    azureMobileApps = require('azure-mobile-apps'),
    authMiddleware = require('./authMiddleware'),
    customRouter = require('./customRouter'),
    pushRegistrationHandler = require('./pushRegistration');

// Set up a standard Express app
var webApp = express();

// Set up the Azure Mobile Apps SDK
var mobileApp = azureMobileApps({
    notificationRootPath: '/.push/disabled'
});

mobileApp.use(authMiddleware);
mobileApp.tables.import('./tables');
mobileApp.api.import('./api');
mobileApp.use('/push/installations', pushRegistrationHandler);

The pushRegistrationHandler require at the top brings in my push registration handler. The notificationRootPath option moves the old push registration handler to “somewhere else”. Finally, the mobileApp.use('/push/installations', ...) call registers my new push registration handler in the right place. Now, let’s look at the ‘./pushRegistration.js’ file:

var express = require('express'),
    bodyParser = require('body-parser'),
    notifications = require('azure-mobile-apps/src/notifications'),
    log = require('azure-mobile-apps/src/log');

module.exports = function (configuration) {
    var router = express.Router(),
        installationClient;

    if (configuration && configuration.notifications && Object.keys(configuration.notifications).length > 0) {
        router.use(addPushContext);
        router.route('/:installationId')
            .put(bodyParser.json(), put, errorHandler)
            .delete(del, errorHandler);

        installationClient = notifications(configuration.notifications);
    }

    return router;

    function addPushContext(req, res, next) {
        req.azureMobile = req.azureMobile || {};
        req.azureMobile.push = installationClient.getClient();
        next();
    }

    function put(req, res, next) {
        var installationId = req.params.installationId,
            installation = req.body,
            tags = [],
            user = req.azureMobile.user;

        // White list of all known tags
        var whitelist = [
            'news',
            'sports'
        ];

        // Logic for determining the correct list of tags
        (installation.tags || []).forEach(function (tag) {
            if (whitelist.indexOf(tag.toLowerCase()) !== -1)
                tags.push(tag.toLowerCase());
        });
        // Add in the "automatic" tags
        if (user) {
            tags.push('_userid_' + user.id);
            if (user.emailaddress) tags.push('_email_' + user.emailaddress);
        }
        // Replace the installation tags requested with my list
        installation.tags = tags;

        installationClient.putInstallation(installationId, installation, user && user.id)
            .then(function (result) {
            res.status(204).end();
        })
            .catch(next);
    }

    function del(req, res, next) {
        var installationId = req.params.installationId;

        installationClient.deleteInstallation(installationId)
            .then(function (result) {
            res.status(204).end();
        })
            .catch(next);
    }

    function errorHandler(err, req, res, next) {
        log.error(err);
        res.status(400).send(err.message || 'Bad Request');
    }
};

The important code here is in the put() function. Normally, the tags would just be dropped. Instead, I take the tags that are offered and put them through a whitelist filter. I then add some “automatic” tags (but only if the user is authenticated).

Note that this version was adapted from the Azure Mobile Apps Node.js Server SDK version. I’ve just added the logic to deal with the tags.

ASP.NET Version

The ASP.NET Server SDK comes with a built-in controller that I need to replace. It’s added to the application during the App_Start phase with this:

            // Configure the Azure Mobile Apps section
            new MobileAppConfiguration()
                .AddTables(
                    new MobileAppTableConfiguration()
                        .MapTableControllers()
                        .AddEntityFramework())
                .MapApiControllers()
                .AddPushNotifications() /* Adds the Push Notification Handler */
                .ApplyTo(config);

I can just comment out the .AddPushNotifications() line and the /push/installations controller is removed, allowing me to replace it. I’m not a confident ASP.NET developer – I’m sure there is a better way of doing this. I’ve found, however, that creating a Custom API and calling that custom API is a better way of doing the registration. It’s not a problem of the code within the controller; it’s a problem of routing. In my client, instead of calling client.push.register(), I’ll call client.invokeApi(). This version is in the Client.Cordova project:

    /**
     * Event Handler for response from PNS registration
     * @param {object} data the response from the PNS
     * @param {string} data.registrationId the registration Id from the PNS
     * @event
     */
    function handlePushRegistration(data) {
        var apiOptions = {
            method: 'POST',
            body: {
                pushChannel: data.registrationId,
                tags: ['News', 'Sports', 'Politics', '_email_myboss@microsoft.com' ]
            }
        };

        var success = function () {
            alert('Push Registered');
        }
        var failure = function (error) {
            alert('Push Failed: ' + error.message);
        }

        client.invokeApi("register", apiOptions).then(success, failure);
    }

Now I can write a POST handler as a Custom API in my backend:

using System.Web.Http;
using Microsoft.Azure.Mobile.Server.Config;
using System.Collections.Generic;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using System.Security.Principal;
using Microsoft.Azure.Mobile.Server.Authentication;
using System.Linq;
using Microsoft.Azure.NotificationHubs;
using System.Web.Http.Controllers;

namespace backend.dotnet.Controllers
{
    [Authorize]
    [MobileAppController]
    public class RegisterController : ApiController
    {
        protected override void Initialize(HttpControllerContext context)
        {
            // Call the original Initialize() method
            base.Initialize(context);
        }

        [HttpPost]
        public async Task<HttpResponseMessage> Post([FromBody] RegistrationViewModel model)
        {
            if (!ModelState.IsValid)
            {
                return new HttpResponseMessage(HttpStatusCode.BadRequest);
            }

            // We want to apply the push registration to an installation ID
            var installationId = Request.GetHeaderOrDefault("X-ZUMO-INSTALLATION-ID");
            if (installationId == null)
            {
                return new HttpResponseMessage(HttpStatusCode.BadRequest);
            }

            // Determine the right list of tasks to be handled
            List<string> validTags = new List<string>();
            foreach (string tag in model.tags)
            {
                if (tag.ToLower().Equals("news") || tag.ToLower().Equals("sports"))
                {
                    validTags.Add(tag.ToLower());
                }
            }
            // Add on the dynamic tags generated by authentication - note that the
            // [Authorize] tags means we are authenticated.
            var identity = await User.GetAppServiceIdentityAsync<AzureActiveDirectoryCredentials>(Request);
            validTags.Add($"_userid_{identity.UserId}");

            var emailClaim = identity.UserClaims.Where(c => c.Type.EndsWith("emailaddress")).FirstOrDefault();
            if (emailClaim != null)
            {
                validTags.Add($"_email_{emailClaim.Value}");
            }

            // Register with the hub
            await CreateOrUpdatePushInstallation(installationId, model.pushChannel, validTags);

            return new HttpResponseMessage(HttpStatusCode.OK);
        }

        /// <summary>
        /// Update an installation with notification hubs
        /// </summary>
        /// <param name="installationId">The installation</param>
        /// <param name="pushChannel">the GCM Push Channel</param>
        /// <param name="tags">The list of tags to register</param>
        /// <returns></returns>
        private async Task CreateOrUpdatePushInstallation(string installationId, string pushChannel, IList<string> tags)
        {
            var pushClient = Configuration.GetPushClient();

            Installation installation = new Installation
            {
                InstallationId = installationId,
                PushChannel = pushChannel,
                Tags = tags,
                Platform = NotificationPlatform.Gcm
            };
            await pushClient.CreateOrUpdateInstallationAsync(installation);
        }
    }

    /// <summary>
    /// Format of the registration view model that is passed to the custom API
    /// </summary>
    public class RegistrationViewModel
    {
        public string pushChannel;

        public List<string> tags;
    }
}

The real work here is done by the CreateOrUpdatePushInstallation() method. This uses the Notification Hub SDK to register the device according to my rules. Why write it as a Custom API? Well, I need things provided by virtue of the [MobileAppController] attribute – things like the linked notification hub and authentication. However, that attribute automatically links the controller into the /api namespace, thus overriding my intent of replacing the push installation version. There are ways of excluding the association, but is it worth the effort? My thought is no, which is why I switched over to a Custom API. I get finer control over the invokeApi call rather than worrying about whether the Azure Mobile Apps SDK is doing something weird.

Wrap Up

I wanted to send two important messages here. Firstly, use the power of Notification Hubs by taking charge of the registration process yourself. Secondly, do the logic in the server – not the client. It’s so tempting to say “just do what my client says”, but remember that rogue operators don’t think that way – you need to protect the services that you pay for so that only you are using them, and you can only effectively do that from the server.

Next time, I’ll take a look at a common pattern for push that will improve the offline performance of your application. Until then, you can find the code on my GitHub Repository.

30 Days of Zumo.v2 (Azure Mobile Apps): Day 20 – Custom API

Thus far, I’ve covered authentication and table controllers in both the ASP.NET world and the Node.js world. I’ve got two clients – an Apache Cordova one and a Universal Windows one – and I’ve got two servers – a Node.js one and an ASP.NET one. I’ve looked at what it takes to bring in existing SQL tables. It’s time to move on.

Not everything that you want to do can fit into a nice table controller. Sometimes, you need to do something different. Let’s take, for example, the application key. When we had Mobile Services, the API had an application key. It was meant to secure “the API” – in other words, only your applications could access the API. Others would need to know the application key to get into the API. This is insanely insecure and easily defeated. Anyone downloading your app and installing a MITM sniffer will be able to figure out the application key. It’s in a header, after all. Then, all the attacker needs to do is use the REST endpoint with your application key and your API is as open as before. It’s trivial – which is why pretty much no-one who understands security will produce an API with an application key any more. It doesn’t buy you anything.

How about a secure approach? When you have a mobile app out there, you have to register it with the various app stores – the Google App Store, Apple iTunes or the Microsoft App Store. The only apps that can use the push notification systems (GCM for Google, APNS for Apple and WNS for Microsoft) are registered apps. So, use a Custom API to request a token. The token is sent via the push notification scheme for the device and is unique to the session. Add that token to the headers, and then your API looks for it. This technique is really secure, but it relies on your application being able to receive push notifications and needs your application registered with the stores. In addition, push notifications sometimes take time. Would you want the first experience of your app to be a five minute delay for “registration”?

There is a middle ground. Use a Custom API to create a per-device token. The token can be used for only a certain amount of time before it expires, thus limiting the exposure. Each time the token expires, it must be re-acquired from the server. It isn’t secure – your API can still get hijacked. However, it makes the process much more costly and that, at the end, is probably enough.
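
The validation side isn’t shown in the versions below, but it is straightforward. Here is a sketch in C# using jose-jwt (the same library the ASP.NET version uses later); the method and parameter names are mine:

using System;
using System.Collections.Generic;
using System.Text;
using Jose;

public static bool IsTokenValid(string presentedToken, string installID)
{
    try
    {
        // JWT.Decode throws if the signature was not produced with this installation ID
        var claims = JWT.Decode<Dictionary<string, object>>(
            presentedToken, Encoding.ASCII.GetBytes(installID), JwsAlgorithm.HS256);

        // jose-jwt does not validate expiry for us, so check "exp" explicitly
        var exp = Convert.ToInt64(claims["exp"]);
        var now = (long)(DateTime.UtcNow - new DateTime(1970, 1, 1)).TotalSeconds;
        return exp > now;
    }
    catch (Exception)
    {
        return false;
    }
}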

Version 1: The Node.js Easy API

You can use the Easy API if you meet all the following criteria:

  • You have created the server with the Node.js Quickstart
  • You have not modified the main application code

If you followed Day 1, then this doesn’t apply to you. Easy Tables and Easy API are only available with a specially configured server that is deployed when you use the Quickstart deployment. Any other deployment pretty much doesn’t work.

Here is how to use Easy API after creating the server. Firstly, go to the Settings menu for your App Service and click on the Easy APIs option. (If you do not have access to Easy APIs, then this will also tell you – in which case, use Version 2 instead). Click on the + Add button and fill in the form:

[Screenshot: the Add Easy API form with GET enabled and anonymous access selected]

I’m only going to access this API via GET, so I’ve disabled the others. For the GET API, I’m enabling anonymous access. I can also select authenticated access. Easy APIs integrates with your regular mobile authentication – the same authentication token used for table access.

Once the API is created, click on the API and then click on Edit script. This opens Visual Studio Online, which allows you to edit the script in the browser. A blueprint has been provided for me:

module.exports = {
    //"get": function (req, res, next) {
    //}
}

Not much there – next is my code. The version I’m going to use is this:

var md5 = require('md5');
var jwt = require('jsonwebtoken');

module.exports = {
    "get": function (req, res, next) {
        var d = new Date();
        var now = d.getUTCFullYear() + '-' + (d.getUTCMonth() + 1) + '-' + d.getUTCDate();
        console.info('NOW = ', now);
        var installID = req.get('X-INSTALLATION-ID');
        console.info('INSTALLID = ', installID);
        
        if (typeof installID === 'undefined') {
            console.info('NO INSTALLID FOUND');
            res.status(400).send({ error: "Invalid Installation ID" });
            return;
        }
        
        var subject = now + installID;
        var token = md5(subject);
        console.info('TOKEN = ', token);
        
        var payload = {
            token: token
        };
        
        var options = {
            expiresIn: '4h',
            audience: installID,
            issuer: process.env.WEBSITE_SITE_NAME || 'unk',
            subject: subject
        };
        
        var signedJwt = jwt.sign(payload, installID, options);
        res.status(200).send({ jwt: signedJwt });
    }
};

This won’t work yet – that’s because the md5 and jsonwebtoken modules are not yet available. I can install these through Kudu. Go back to the Azure Portal, select your App Service, then Tools, followed by Kudu. Click on the PowerShell version of the Debug console, change directory into site/wwwroot, then type the following into the console:

npm install --save md5 jsonwebtoken

Did you know you can download your site for backup at any time from here? Just click on the Download icon next to the wwwroot folder.

Version 2: The Node.js Custom API

If you aren’t a candidate for the Easy API, then you can still use a Custom API with the same code. However, you need to add Custom APIs into your code. Place the code below into the api/createKey.js file, and add the npm packages to the package.json file.

In the Easy API version, there is also a createKey.json file. In the Custom API version, the authentication information is placed in the JavaScript file, like this:

var md5 = require('md5');
var jwt = require('jsonwebtoken');

var api = {
    "get": function (req, res, next) {
        var d = new Date();
        var now = d.getUTCFullYear() + '-' + (d.getUTCMonth() + 1) + '-' + d.getUTCDate();
        console.info('NOW = ', now);
        var installID = req.get('X-INSTALLATION-ID');
        console.info('INSTALLID = ', installID);
        
        if (typeof installID === 'undefined') {
            console.info('NO INSTALLID FOUND');
            res.status(400).send({ error: "Invalid Installation ID" });
            return;
        }
        
        var subject = now + installID;
        var token = md5(subject);
        console.info('TOKEN = ', token);
        
        var payload = {
            token: token
        };
        
        var options = {
            expiresIn: '4h',
            audience: installID,
            issuer: process.env.WEBSITE_SITE_NAME || 'unk',
            subject: subject
        };
        
        var signedJwt = jwt.sign(payload, installID, options);
        res.status(200).send({ jwt: signedJwt });
    }
};

api.get.access = 'anonymous';

module.exports = api;

In addition, the custom API system must be loaded in the main server.js file:

var express = require('express'),
    serveStatic = require('serve-static'),
    azureMobileApps = require('azure-mobile-apps'),
    authMiddleware = require('./authMiddleware');

// Set up a standard Express app
var webApp = express();

// Set up the Azure Mobile Apps SDK
var mobileApp = azureMobileApps();
mobileApp.use(authMiddleware);
mobileApp.tables.import('./tables');
mobileApp.api.import('./api');

// Create the public app area
webApp.use(serveStatic('public'));

// Initialize the Azure Mobile Apps, then start listening
mobileApp.tables.initialize().then(function () {
    webApp.use(mobileApp);
    webApp.listen(process.env.PORT || 3000);
});

Once published (or, if you are doing continuous deployment, once the code is checked into the relevant branch of your source-code control system), this will operate exactly the same as the Easy API version.

Version 3: The Node.js Custom Middleware

Both the Easy API and Custom API use the same underlying code for the implementation. You have access to the whole Azure Mobile Apps environment (more on that in a later blog post). However, you are limited in the routes that you can use. You have four verbs (so no HEAD, for example) and very little in the way of variable routes. Sometimes, you want to take control of the routes and verbs. Maybe you want to produce a composed API that has a two-level Id structure, or you are really into doing REST “properly” (which isn’t much, but there are some accepted norms). There are many constraints to the Easy API / Custom API route in Node.js – most notably that the routes are relatively simple. Fortunately, the Node.js SDK uses ExpressJS underneath, so you can just spin up a Router and do the same thing. I’ve placed the following code in the server.js file:

var express = require('express'),
    serveStatic = require('serve-static'),
    azureMobileApps = require('azure-mobile-apps'),
    authMiddleware = require('./authMiddleware'),
    customRouter = require('./customRouter');

// Set up a standard Express app
var webApp = express();

// Set up the Azure Mobile Apps SDK
var mobileApp = azureMobileApps();
mobileApp.use(authMiddleware);
mobileApp.tables.import('./tables');
mobileApp.api.import('./api');

// Create the public app area
webApp.use(serveStatic('public'));

// Initialize the Azure Mobile Apps, then start listening
mobileApp.tables.initialize().then(function () {
    webApp.use(mobileApp);
    webApp.use('/custom', customRouter);
    webApp.listen(process.env.PORT || 3000);
});

Note that I’m putting the custom middleware after I’ve added the Azure Mobile App to the ExpressJS app. Ordering is important here – if I place it before, then authentication and table controllers will not be available – I might need those later on. The customRouter object must export an express.Router:

var express = require('express');
var jwt = require('jsonwebtoken');
var md5 = require('md5');

var router = express.Router();

router.get('/createKey', function (req, res, next) {
    var d = new Date();
    var now = d.getUTCFullYear() + '-' + (d.getUTCMonth() + 1) + '-' + d.getUTCDate();
    console.info('NOW = ', now);
    var installID = req.get('X-INSTALLATION-ID');
    console.info('INSTALLID = ', installID);

    if (typeof installID === 'undefined') {
        console.info('NO INSTALLID FOUND');
        res.status(400).send({ error: "Invalid Installation ID" });
        return;
    }

    var subject = now + installID;
    var token = md5(subject);
    console.info('TOKEN = ', token);

    var payload = {
        token: token
    };

    var options = {
        expiresIn: '4h',
        audience: installID,
        issuer: process.env.WEBSITE_SITE_NAME || 'unk',
        subject: subject
    };

    var signedJwt = jwt.sign(payload, installID, options);
    res.status(200).send({ jwt: signedJwt });
});

module.exports = router;

The actual code here is identical once you get past the change to an ExpressJS Router – in fact, I can put the algorithm in its own library to make it easier to include. The advantage of this technique is flexibility, but at the expense of complexity. I can easily add any routing scheme and use any verb since I’m just using the ExpressJS SDK. It really depends on your situation as to whether the complexity is worth it. This technique is really good for producing composed APIs where you have really thought out the mechanics of the API (as opposed to Easy API which is really good for a one-off piece of functionality). My advice is to either use Custom Middleware or Custom APIs though – don’t mix and match.

Note that this technique does not put APIs under /api – the Azure Mobile Apps SDK takes this over (which is part of the reason why you shouldn’t mix and match).

Version 4: The ASP.NET Custom API

Finally, let’s talk about ASP.NET implementation. There is already a well-known implementation for APIs in ASP.NET, so just do the same thing! The only difference is some syntactic sugar to wire up the API into the right place and to handle responses in such a way that our application can handle them. To add a custom controller, right-click on the Controllers node and use Add -> Controller… to add a new controller. The Azure Mobile Apps Custom Controller should be right at the top:

[Screenshot: the Add Controller dialog with the Azure Mobile Apps Custom Controller template at the top]

Here is the default scaffolding:

using System.Web.Http;
using Microsoft.Azure.Mobile.Server.Config;

namespace backend.dotnet.Controllers
{
    [MobileAppController]
    public class CreateKeyController : ApiController
    {
        // GET api/CreateKey
        public string Get()
        {
            return "Hello from custom controller!";
        }
    }
}

The important piece here is the [MobileAppController] – this will wire the API controller into the right place and register some handlers so the objects are returned properly. I expanded on this in a similar way to my Node.js example:

using System.Web.Http;
using Microsoft.Azure.Mobile.Server.Config;
using System.Web;
using System.Net;
using System;
using System.Security.Cryptography;
using System.Text;
using System.Diagnostics;
using System.Collections.Generic;
using Jose;

namespace backend.dotnet.Controllers
{
    [MobileAppController]
    public class CreateKeyController : ApiController
    {
        // GET api/CreateKey
        public Dictionary<string, string> Get()
        {
            var now = DateTime.UtcNow.ToString("yyyy-M-d");
            Debug.WriteLine($"NOW = {now}");
            var installID = HttpContext.Current.Request.Headers["X-INSTALLATION-ID"];
            if (installID == null)
            {
                throw new HttpResponseException(HttpStatusCode.BadRequest);
            }
            Debug.WriteLine($"INSTALLID = {installID}");

            var subject = $"{now}{installID}";    // No separator, to match the Node.js version
            var token = createMD5(subject);
            var issuer = Environment.GetEnvironmentVariable("WEBSITE_SITE_NAME");
            if (issuer == null)
            {
                issuer = "unk";
            }
            Debug.WriteLine($"SUBJECT = {subject}");
            Debug.WriteLine($"TOKEN = {token}");

            var expires = (long)(DateTime.UtcNow.AddHours(4) - new DateTime(1970, 1, 1)).TotalSeconds;    // JWT "exp" is in seconds since the epoch
            var payload = new Dictionary<string, object>()
            {
                { "aud", installID },
                { "iss", issuer },
                { "sub", subject },
                { "exp", expires },
                { "token", token }
            };

            byte[] secretKey = Encoding.ASCII.GetBytes(installID);
            var result = new Dictionary<string, string>()
            {
                { "jwt", JWT.Encode(payload, secretKey, JwsAlgorithm.HS256) }
            };

            return result;
        }

        /// <summary>
        /// Compute an MD5 hash of a string
        /// </summary>
        /// <param name="input">The input string</param>
        /// <returns>The MD5 hash as a string of hex</returns>
        private string createMD5(string input)
        {
            using (MD5 md5 = MD5.Create())
            {
                byte[] ib = Encoding.ASCII.GetBytes(input);
                byte[] ob = md5.ComputeHash(ib);
                StringBuilder sb = new StringBuilder();
                for (int i = 0; i < ob.Length; i++)
                {
                    sb.Append(ob[i].ToString("x2"));    // lower-case hex, matching the Node.js md5 module
                }
                return sb.ToString();
            }
        }
    }
}

Most of this code is dealing with the C#.NET equivalent of the Node code I posted earlier in the article. I’m using jose-jwt to implement the JWT signing. The algorithm is identical, so you should be able to use the same client code with either a Node or ASP.NET backend. Want it authenticated? Just add an [Authorize] annotation to the method.

Testing the API

In all cases, you should be able to make a Postman request to GET /api/createKey (or /custom/createKey if you are using the Node custom middleware technique) with an X-INSTALLATION-ID header that contains a unique ID (specifically, a GUID):

[Screenshot: a Postman GET request to /api/createKey returning a signed JWT]

If you don’t submit an X-INSTALLATION-ID, then you should get a 400 Bad Request error.
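
If you prefer the command line, the equivalent curl request looks like this (the hostname and GUID are placeholders):

curl -H "X-INSTALLATION-ID: 7b0e7a35-03f4-4a8d-b51b-6f0c5b2dd54e" \
     https://yoursite.azurewebsites.net/api/createKey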

What are Custom APIs good for?

I use this type of custom API commonly to provide additional settings to my clients or to kick off a process. Some examples of simple Custom APIs:

  • Push to a Tag from a client device
  • Get enabled features for a client
  • Get an Azure Storage API Key for uploading files

The possibilities are really open to what you can dream up.
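
As an illustration of the second item in that list, a feature-flag API can be as small as the following sketch (the controller and flag names are invented):

using System.Collections.Generic;
using System.Web.Http;
using Microsoft.Azure.Mobile.Server.Config;

namespace backend.dotnet.Controllers
{
    [MobileAppController]
    public class FeaturesController : ApiController
    {
        // GET api/Features - returns the feature flags for this client
        public Dictionary<string, bool> Get()
        {
            return new Dictionary<string, bool>
            {
                { "offlineSync", true },
                { "pushNotifications", false }
            };
        }
    }
}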

What are Custom APIs not good for?

Custom APIs are not good candidates for offline usage. There are ways you can queue up changes for synchronization when you are back online. In general, these end up being a hacked-up version of a table controller – the client inserts a record into the offline table; when it syncs, the backend processes the custom API logic during the insert operation. However, I cringe when writing that. A better idea would be to implement an offline queue mechanism. In any case, custom APIs are not good for an offline sync scenario.

Next Steps

I only covered the various server APIs this time. In the next article, I’ll take a look at calling the custom API from the clients and adjusting the request properties so that special headers can be inserted. After that, I’m going to cover accessing the Azure Mobile Apps data and authentication objects from within your custom API so that you can do some interesting things with data.

Until then, you can check all four implementations at my GitHub Repository.

30 Days of Zumo.v2 (Azure Mobile Apps): Day 18 – ASP.NET Authentication

I introduced the ASP.NET backend in my last article, but it was rather a basic backend. It just did the basic TodoItem single table controller with no authentication. Today, I’m going to integrate the Azure Authentication / Authorization and adjust the table controller to produce a personal table – similar to the Node.js environment I posted about much earlier in the series.

If you followed along the journey so far, your backend is already configured for Authentication / Authorization. If you are using a new site for the ASP.NET backend, you may want to go back to Day 3 and read about setting up Authentication again.

Setting up the Project

The team has split the NuGet packages for Azure Mobile Apps up significantly so you only have to take what you need. You need to add the following NuGet packages to your project:

  • Microsoft.Azure.Mobile.Server.Authentication

You will also need to edit your App_Start/AzureMobile.cs file to take account of authentication:

using Owin;
using System.Configuration;
using System.Data.Entity;
using System.Web.Http;
using Microsoft.Azure.Mobile.Server;
using Microsoft.Azure.Mobile.Server.Authentication;
using Microsoft.Azure.Mobile.Server.Config;
using Microsoft.Azure.Mobile.Server.Tables.Config;
using backend.dotnet.Models;

namespace backend.dotnet
{
    public partial class Startup
    {
        public static void ConfigureMobileApp(IAppBuilder app)
        {
            HttpConfiguration config = new HttpConfiguration();

            // Configure the Azure Mobile Apps section
            new MobileAppConfiguration()
                .AddTables(
                    new MobileAppTableConfiguration()
                        .MapTableControllers()
                        .AddEntityFramework())
                .MapApiControllers()
                .ApplyTo(config);

            // Initialize the database with EF Code First
            Database.SetInitializer(new AzureMobileInitializer());

            MobileAppSettingsDictionary settings = config.GetMobileAppSettingsProvider().GetMobileAppSettings();
            if (string.IsNullOrEmpty(settings.HostName))
            {
                app.UseAppServiceAuthentication(new AppServiceAuthenticationOptions
                {
                    SigningKey = ConfigurationManager.AppSettings["SigningKey"],
                    ValidAudiences = new[] { ConfigurationManager.AppSettings["ValidAudience"] },
                    ValidIssuers = new[] { ConfigurationManager.AppSettings["ValidIssuer"] },
                    TokenHandler = config.GetAppServiceTokenHandler()
                });
            }

            // Link the Web API into the configuration
            app.UseWebApi(config);
        }
    }
}

I’ve got some extra packages to deal with. Then I need to set up authentication. The Authentication / Authorization provider requires me to configure it with JWT keys. Note that this is also how I could deal with custom authentication from a provider like Auth0 – just set up the signing key, audience and issuer and let Azure Mobile Apps deal with it.

Want to do local debugging with user authentication? Check out this blog post.

In order for the app settings to work, I need to add the app settings I am using to the web.config file:

  <appSettings>
    <add key="webpages:Version" value="3.0.0.0" />
    <add key="webpages:Enabled" value="false" />
    <add key="ClientValidationEnabled" value="true" />
    <add key="UnobtrusiveJavaScriptEnabled" value="true" />
    <add key="SigningKey" value="READ FROM AZURE"/>
    <add key="ValidAudience" value="https://{yoursite}.azurewebsites.net"/>
    <add key="ValidIssuer" value="https://{yoursite}.azurewebsites.net"/>
  </appSettings>

It actually doesn’t matter what value is there – the values will be overwritten by the Azure App Service when it runs. You can put your development values in there if you like.

Configuring a Table Controller

Now that I have configured the project, I can configure a table controller. This amounts to putting the standard [Authorize] attribute on the methods and/or controllers I want to authorize.

Note: One of the common problems is developers who say that “things are always authenticated, even if I don’t want them to be”. It’s likely you set the Authentication / Authorization setting to always authenticate. Let anonymous connections through instead, and then control in code which routes require authentication.

My personal table requires the entire table to be authenticated, so I just add the [Authorize] attribute to the entire class, like this:

using System.Web.Http;
using System.Web.Http.Controllers;
using Microsoft.Azure.Mobile.Server;
using backend.dotnet.DataObjects;
using backend.dotnet.Models;

namespace backend.dotnet.Controllers
{
    [Authorize]
    public class TodoItemController : TableController<TodoItem>
    {
        protected override void Initialize(HttpControllerContext controllerContext)
        {
            base.Initialize(controllerContext);
            MyDbContext context = new MyDbContext();
            DomainManager = new EntityDomainManager<TodoItem>(context, Request);
        }

        // ... the table CRUD methods, covered below ...
    }
}

The Personal Table DTO

My original DTO needs to be updated in preparation for the personal table:

using Microsoft.Azure.Mobile.Server;

namespace backend.dotnet.DataObjects
{
    public class TodoItem : EntityData
    {
        public string UserId { get; set; }

        public string Text { get; set; }

        public bool Complete { get; set; }
    }
}

Since this is Entity Framework, I would normally need to do an Entity Framework Code First Migration to get that field onto my database. You can find several walk-throughs of the process online. This isn’t an Entity Framework blog, so I’ll leave that process to better minds than mine. Just know that you have to deal with this aspect when using the ASP.NET backend. (Node deals with this via dynamic schema adjustments).
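
For the curious, the Code First migration that adds the field would look roughly like this – a hypothetical sketch (the class name, namespace and table mapping are illustrative; the code Entity Framework generates for you will differ):

using System.Data.Entity.Migrations;

namespace backend.dotnet.Migrations
{
    // Hypothetical migration adding the UserId column to the TodoItems table
    public partial class AddUserIdToTodoItem : DbMigration
    {
        public override void Up()
        {
            AddColumn("dbo.TodoItems", "UserId", c => c.String());
        }

        public override void Down()
        {
            DropColumn("dbo.TodoItems", "UserId");
        }
    }
}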

Dealing with Claims

When using the Azure Mobile Apps SDK, the User (technically, HttpContext.User) is available within your table controller. It’s specified as a ClaimsPrincipal and you can read it like this:

        private string GetAzureSID()
        {
            var principal = this.User as ClaimsPrincipal;
            var sid = principal.FindFirst(ClaimTypes.NameIdentifier).Value;
            return sid;
        }

I don’t want the Security ID. I want the email address of the user. To do that, I need to delve deeper:

        private async Task<string> GetEmailAddress()
        {
            var credentials = await User.GetAppServiceIdentityAsync<AzureActiveDirectoryCredentials>(Request);
            return credentials.UserClaims
                .Where(claim => claim.Type.EndsWith("/emailaddress"))
                .First<Claim>()
                .Value;
        }

The User.GetAppServiceIdentityAsync() method returns all the information contained in the /.auth/me endpoint, placed into a class so you can work with it. The claims are in the UserClaims property, which returns an IEnumerable of Claim objects – a Claim is something with a Type and a Value. The email address claim type is actually something like http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress – but it may be something else on Facebook, for example. To be resilient, I just match any claim whose type ends with emailaddress. The first one listed is the one I want.
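
If you want to be defensive about providers that don’t supply an email claim at all, a variant like this returns null instead of throwing – an illustrative sketch (the GetEmailAddressOrNull name is mine, not something in the repository):

        private async Task<string> GetEmailAddressOrNull()
        {
            var credentials = await User.GetAppServiceIdentityAsync<AzureActiveDirectoryCredentials>(Request);
            // FirstOrDefault() returns null when no claim type ends with "/emailaddress"
            var emailClaim = credentials.UserClaims
                .FirstOrDefault(claim => claim.Type.EndsWith("/emailaddress"));
            return emailClaim == null ? null : emailClaim.Value;
        }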

Adjusting the Controller Response

I need to make some adjustments to the various endpoints in the table controller to use this.

GetAll

The GetAllTodoItems() method uses the Query() method to construct a query based on the inbound OData query. I need to adjust that with LINQ to add a clause for the UserId:

        // GET tables/TodoItem
        public async Task<IQueryable<TodoItem>> GetAllTodoItems()
        {
            Debug.WriteLine("GET tables/TodoItem");
            var emailAddr = await GetEmailAddress();
            return Query().Where(item => item.UserId == emailAddr);
        }

There are lots of things you can do with a Query() object, so this is a great area for experimentation.
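
For example, restricting the list to incomplete items is just one more clause in the Where() – a hypothetical tweak to the method above:

            // Hypothetical: only incomplete items belonging to the current user
            return Query().Where(item => item.UserId == emailAddr && !item.Complete);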

GetItem

I can also use a similar query for the GetItem method:

        // GET tables/TodoItem/48D68C86-6EA6-4C25-AA33-223FC9A27959
        public async Task<SingleResult<TodoItem>> GetTodoItem(string id)
        {
            Debug.WriteLine($"GET tables/TodoItem/{id}");
            var emailAddr = await GetEmailAddress();
            var result = Lookup(id).Queryable.Where(item => item.UserId == emailAddr);
            return new SingleResult<TodoItem>(result);
        }

The Lookup() method returns a SingleResult whose Queryable contains 0 or 1 entries. I then use LINQ to filter further based on the email address, before re-constituting the result into a SingleResult object. I find it’s easier to read (and test) when returning objects rather than IHttpActionResults. However, you can use whatever you are most comfortable with.
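
For comparison, the IHttpActionResult style might look something like this – an illustrative sketch, not the code in my repository:

        // Hypothetical IHttpActionResult version of GetTodoItem
        public async Task<IHttpActionResult> GetTodoItem(string id)
        {
            var emailAddr = await GetEmailAddress();
            var item = Lookup(id).Queryable
                .Where(i => i.UserId == emailAddr)
                .FirstOrDefault();
            return (item == null) ? (IHttpActionResult)NotFound() : Ok(item);
        }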

PatchItem and DeleteItem

The PATCH and DELETE handlers are so similar that I’ll cover them together. I’ll take a look at the PATCH version here – a sketch of the DELETE version follows the logic breakdown below:

        // PATCH tables/TodoItem/48D68C86-6EA6-4C25-AA33-223FC9A27959
        public async Task<TodoItem> PatchTodoItem(string id, Delta<TodoItem> patch)
        {
            Debug.WriteLine($"PATCH tables/TodoItem/{id}");
            var item = Lookup(id).Queryable.FirstOrDefault<TodoItem>();
            if (item == null)
            {
                throw new HttpResponseException(HttpStatusCode.NotFound);
            }
            var emailAddr = await GetEmailAddress();
            if (item.UserId != emailAddr)
            {
                throw new HttpResponseException(HttpStatusCode.Forbidden);
            }
            return await UpdateAsync(id, patch);
        }

In this version, the logic is as follows:

  • Look up the item – if it isn’t there, produce a 404 Not Found response
  • Check that the item belongs to me – if not, produce a 403 Forbidden response
  • Update the record and return it
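
The DELETE version follows exactly the same pattern – here is a sketch of it (check the repository for the actual code):

        // DELETE tables/TodoItem/48D68C86-6EA6-4C25-AA33-223FC9A27959
        public async Task DeleteTodoItem(string id)
        {
            Debug.WriteLine($"DELETE tables/TodoItem/{id}");
            var item = Lookup(id).Queryable.FirstOrDefault<TodoItem>();
            if (item == null)
            {
                throw new HttpResponseException(HttpStatusCode.NotFound);
            }
            var emailAddr = await GetEmailAddress();
            if (item.UserId != emailAddr)
            {
                throw new HttpResponseException(HttpStatusCode.Forbidden);
            }
            await DeleteAsync(id);
        }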

PostItem

Finally, the PostItem is relatively easy:

        // POST tables/TodoItem
        public async Task<IHttpActionResult> PostTodoItem(TodoItem item)
        {
            Debug.WriteLine($"POST tables/TodoItem");
            var emailAddr = await GetEmailAddress();
            item.UserId = emailAddr;
            TodoItem current = await InsertAsync(item);
            return CreatedAtRoute("Tables", new { id = current.Id }, current);
        }

This version overwrites whatever the user supplied with the authenticated information.

Publishing

When publishing, don’t forget to use a Code First Migration to get the extra field into the table. I must admit that I cheated here and just wiped out my database table. You can browse your database directly from Visual Studio. Open the Server Explorer, expand the Azure node (you will need to enter your Azure credentials), then expand the SQL Databases node. Finally, right-click on the database and select Open in SQL Server Object Explorer.

day-18-server-explorer

You will have to enter the credentials for your SQL server. You will also have to permit your Client IP to access the database. Once you have done that, you can use Visual Studio to browse your tables and manage your data.

Next Steps

This is actually a huge step forward – I’ve now got equivalent functionality within both the Node.js and ASP.NET backends. I’ll continue to cover both Node.js and ASP.NET equally in the future. Next, however, I’m going to take a look at some final thoughts on ASP.NET controllers – things like soft delete, logging, and using existing tables. Until next time, my code is on my GitHub Repository.

jQuery Form Validation with ASP.NET

I’ve been working on refactoring my various Account area forms so that they look good on the screen. I’ll admit to using a graphic I found on Google Images – I suspect it belongs to Wizards of the Coast, so I don’t want to use it in a production environment. Fortunately, I have a friend who is a lot more artistic than I am and she is producing a background for me. Until then, the graphic is a placeholder.

One of the things I wanted to do during the refactoring is to add some client-side validation. I already have server-side validation and that is staying in there – you should never trust the input coming from the user, as there will always be malicious users who will try to circumvent your controls. However, client-side validation gives the user more immediate feedback since it does not involve a round trip to the server.

To do this, I’m going to lean on the jQuery Validation Plugin as it does a good portion of what I want and has minimal configuration. My registration form is based on my RegisterAccountVM view-model, which has three fields – Email, Password and ConfirmPassword. I want the email address to be required and valid, the password to be between 6 and 128 characters and to meet complexity requirements, and the confirm password to equal the password. I can handle everything except the complex password with the jQuery Validation Plugin’s standard configuration, like this:

    $("#Account form").validate({
        rules: {
            Email: {
                required: true,
                email: true
            },
            Password: {
                required: true,
                minlength: 6,
                maxlength: 128,
                complexPassword: true
            },
            ConfirmPassword: {
                required: true,
                minlength: 6,
                maxlength: 128,
                equalTo: "#regPasswordField"
            }
        }
    });

Note that the keys in the rules object are the name attributes of the inputs which, in ASP.NET MVC, are also the names of the fields in the view model.
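
For reference, the server-side half of these rules lives as data annotations on that view-model. Here’s a hedged sketch of RegisterAccountVM (the namespace is hypothetical and the actual class may differ – the password complexity check is enforced by separate server-side logic):

using System.ComponentModel.DataAnnotations;

namespace CharacterSheet.Areas.Account.ViewModels   // hypothetical namespace
{
    public class RegisterAccountVM
    {
        [Required, EmailAddress]
        public string Email { get; set; }

        [Required, StringLength(128, MinimumLength = 6)]
        public string Password { get; set; }

        [Required, Compare("Password")]
        public string ConfirmPassword { get; set; }
    }
}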

The rule for complexPassword I’ve listed in the Password rules is non-standard. I need a custom validator to handle the complexity. My requirements for this are that the password must contain one character from each character group – upper case, lower case, numeric and symbols. To do this, I use a recipe from the documentation:

    jQuery.validator.addMethod("complexPassword", function(value, element) {
        // Min to Max length is already handled - just have to handle complexity
        var hasUpper = false, hasLower = false, hasNumeric = false, hasSymbol = false;

        for (var i = 0 ; i < value.length ; i++) {
            var ch = value.charAt(i);
            if ("ABCDEFGHIJKLMNOPQRSTUVWXYZ".indexOf(ch) !== -1)
                hasUpper = true;
            if ("abcdefghijklmnopqrstuvwxyz".indexOf(ch) !== -1)
                hasLower = true;
            if ("0123456789".indexOf(ch) !== -1)
                hasNumeric = true;
            if ("!@#$%^&*()_-+=|\}{[]:;''?/.,".indexOf(ch) !== -1)
                hasSymbol = true;
        }
        return (hasUpper && hasLower && hasNumeric && hasSymbol);
    }, "Password must be more complex");

My Areas/Main/Views/Layout.cshtml file contains a section for the scripts, defined like this:

    <!-- BootStrap Javascript Dependencies -->
    <script src="~/jspm_packages/github/components/jquery@2.1.3/jquery.min.js"></script>
    <script src="~/jspm_packages/github/twbs/bootstrap@3.3.4/js/bootstrap.min.js"></script>

    <!-- JSPM Boot Loader -->
    <script src="~/jspm_packages/system.js"></script>
    <script src="~/config.js"></script>

    <!-- Page Scripts -->
    @RenderSection("scripts", required: false)
</body>
</html>

The RenderSection call is used to insert the scripts section from my view. That means I need to add the following to the bottom of my Areas/Account/Views/RegisterAccount/Index.cshtml file:

@section scripts {
<script src="~/jspm_packages/github/jzaefferer/jquery-validation@1.13.1/dist/jquery.validate.min.js"></script>
<script>
    // Add a custom rule to jquery validation
    jQuery.validator.addMethod("complexPassword", function(value, element) {
        // Min to Max length is already handled - just have to handle complexity
        var hasUpper = false, hasLower = false, hasNumeric = false, hasSymbol = false;

        for (var i = 0 ; i < value.length ; i++) {
            var ch = value.charAt(i);
            if ("ABCDEFGHIJKLMNOPQRSTUVWXYZ".indexOf(ch) !== -1)
                hasUpper = true;
            if ("abcdefghijklmnopqrstuvwxyz".indexOf(ch) !== -1)
                hasLower = true;
            if ("0123456789".indexOf(ch) !== -1)
                hasNumeric = true;
            if ("!@@#\$\%^&*()_-+=|\\}{[]:;\"'<>?/.,".indexOf(ch) !== -1)
                hasSymbol = true;
        }
        return (hasUpper && hasLower && hasNumeric && hasSymbol);
    }, "Password must be more complex");

    $("#Account form").validate({
        rules: {
            Email: {
                required: true,
                email: true
            },
            Password: {
                required: true,
                minlength: 6,
                maxlength: 128,
                complexPassword: true
            },
            ConfirmPassword: {
                required: true,
                minlength: 6,
                maxlength: 128,
                equalTo: "#regPasswordField"
            }
        }
    });
</script>
}

I’ve done some other work in the refactoring, including changing main.less to include an Account.less file rather than using a separate login.less file. I’ve also refactored all the Account views to handle my new format, and updated the form in the ForgotPassword workflow to have the same sort of validation as the account registration. I’ve moved the complexPassword definition into its own Javascript file so that the same code can be reused in both the ForgotPassword and RegisterAccount views – I suspect that I will want to use it in some sort of profile page in the future as well. Finally, I adjusted the Gulp/javascript.js file to account for the jQuery global so I could use eslint on the new file.

One other thing to note. I had a hell of a time with Visual Studio 2015 CTP 6 today. It decided it wanted to hang on processing Javascript and Less files constantly. As a result of this, I switched my editor (there is only so much frustration one can take) and used gulp build followed by k web to run the web site. I didn’t actually use Visual Studio much today at all. Hopefully, the next build of Visual Studio will be released at BUILD at the end of the month (just one week away) and I can try that out instead.

You can check out the code at tag cs-0.0.8.

Introducing my new Side Project

With all this research and blogging, one could wonder what the point of it all is. Well, I have a point, and that point is my side project. I have been a sometimes-developer for a long time. I’m definitely not the one you want writing the next blockbuster application, but I get by – mostly by struggling for hours with simple code. This year I decided that I would actually spend the time to become at least as proficient as a college graduate programmer. I learn by doing, so I decided to direct my attention at a particular side project.

That side project is an online application that emulates a Dungeons and Dragons character sheet. Since Dungeons and Dragons is generally a tabletop, paper-and-pencil game, the character sheets, where you write down all the statistics about your character, are similarly paper driven. I figured this would be a good time to update them for a tablet world. There are likely to be three parts to this application:

  1. An online portal that you can use to view and manage your characters
  2. A Web API so that I can write other (offline, perhaps) applications to use the data
  3. A Windows “Modern” application for a tablet experience

All of this, of course, should use the absolutely latest and greatest technologies. I will use ASP.NET vNext for the backend with Entity Framework 7 doing the database work for me. I’ll host the application in Azure App Services so that it is always available.

The front end work also will get the latest and greatest treatment. All the code will use ECMAScript 6, style sheets will be coded in LESS and I’ll use the latest thinking in Web Components with perhaps a touch of Polymer.

In terms of build environment, I’m opting for Visual Studio 2015 for my main IDE; jspm for my module handling; gulp for my client-side build automation. I’ll use babel, autoprefixer and other tools as they are appropriate.

Starting with Identity

My starting point was the recent ASP.NET Identity Tutorial that I wrote. There are nine parts to it:

  1. Setting up the Database
  2. The Login Process
  3. Registration
  4. The Registration Callback
  5. Forgotten Passwords
  6. Refactoring for Areas
  7. Logging
  8. Transient Services for the User Profile
  9. Wrapping up some bugs

If you are following along, I suggest you start with these nine articles as they have all been included in the character sheet initial version. Aside from that, I’ve done some styling work to make my Account screens look like the application I envision.

Where is the Code

Each section check-in will be tagged in the blog-code repository on GitHub. In addition, the version will be revved for each major section. Right now, I’m at cs-0.0.1. The project is called CharacterSheet.

Cloning the Repository

You can clone the repository directly within Visual Studio. Just use View -> Team Explorer. Click on the green plug (Connect to Team Projects). You should see a section for Local Git Repositories. Click on Clone:

blog-code-0412-1

Enter the information as above, selecting the location on your disk (not mine). By default, Visual Studio will pick a good place for you. Currently, the repository is small so it won’t take too long to clone. Once that is done, you can double-click on the repository to be taken to the Solutions:

blog-code-0412-2

Double-click on the CharacterSheet.sln solution to open up the project. You will need to manually select the Solution Explorer after you have done this.

Preparing the Solution

Visual Studio 2015 CTP 6 does not have support for jspm, so the package restore won’t happen automatically – you have to do it yourself. To do this, open up a PowerShell prompt, install jspm, and then run jspm install (the commands are shown below). Make sure you add jspm to your PATH or set up an alias for it, as you will need to drop down to a command prompt to install new packages. I’ll let you know when this has to happen.
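
Assuming Node.js and npm are already installed (an assumption – your setup may differ), the commands look like this:

    npm install -g jspm
    jspm install

The first installs the jspm CLI globally; the second restores the project’s client-side packages.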

Visual Studio Extensions

I have a few Visual Studio Extensions installed. All of these extensions can be installed from the Tools -> Extensions and Updates menu option.

  1. Bootstrap Snippet Pack
  2. CommentsPlus
  3. Grunt Launcher
  4. Indent Guides
  5. jQuery Code Snippets
  6. Open Command Line
  7. Regex Tester
  8. Trailing Whitespace Visualizer
  9. Web Essentials 2015.0 CTP 6
  10. SideWaffle Template Pack

I will likely add to this list. Extensions like these make development easier, so I’ll blog about the useful extensions I find along the way as well.

Target Browsers

It’s all well and good developing a responsive design, but you have to test it everywhere. On my main machine I have Windows 10 Technical Preview (on the fast track) with the following browsers installed:

  1. Google Chrome 41
  2. Internet Explorer 11
  3. Project Spartan

In addition, I have an iPad 3 and a Dell Venue 8 as my tablets. I’ll install other browsers and operating systems on my “other boxes”: I have a Mac Mini for running Mac browsers and a Hyper-V box on which I can run random operating systems and their browsers.

Running in Azure

I don’t run my development stuff in Azure. Firstly, it costs money. More importantly, the code is likely to be unstable. I’ll have to figure out the process of pushing to Azure, especially with the database in the mix – I’ll post another blog about that when I actually do it. I do have an Azure account though; this blog is run out of Azure App Services.

That’s pretty much it for the run-down of my side project. I hope you’ll join me on my journey through web applications and developing my Side Project.