Azure App Service Authentication in an ASP.NET Core Application

In my prior posts, I’ve covered running ASP.NET Core applications in Azure App Service and configuring them with custom configuration providers.

This is a great start. Next on my list is authentication. How do I handle both a web-based authentication pattern and a mobile-based authentication pattern within a single application? ASP.NET Core provides a modular authentication system. (I’m sensing a theme here – everything is modular!) So, in this post, I’m going to cover the basics of authentication, then cover how Azure App Service Authentication works (although Chris Gillum does a much better job than I do and you should read his blog for all the Easy Auth articles), and then introduce an extension to the authentication library that implements Azure App Service Authentication.

Authentication Basics

To implement authentication in ASP.NET Core, you place the following in the ConfigureServices() method of your Startup.cs:

services.AddAuthentication();

Here, services is the IServiceCollection that is passed in as a parameter to the ConfigureServices() method. In addition, you need to call a UseXXX() extension method within the Configure() method to add the appropriate middleware to the pipeline. Here is an example:

app.UseJwtBearerAuthentication(new JwtBearerOptions
{
    Authority = Configuration["JWT:Authority"],
    Audience = Configuration["JWT:Audience"]
});

Once that is done, your MVC controllers or methods can be decorated with the usual [Authorize] decorator to require authentication. Finally, you need to add the Microsoft.AspNetCore.Authentication NuGet package to your project to bring in the authentication framework.
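
For example, here is a minimal sketch of protecting a single action (the attribute can equally be applied to the whole controller):

using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

namespace ExampleServer.Controllers
{
    public class HomeController : Controller
    {
        // Anonymous requests to this action now receive a 401 response
        [Authorize]
        public IActionResult Configuration()
        {
            return View();
        }
    }
}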

In my project, I’ve added the services.AddAuthentication() method to ConfigureServices() and added an [Authorize] tag to my /Home/Configuration controller method. This means that the configuration viewer that I used last time now needs authentication:

[Screenshot: auth-failed – the Configuration request returning a 401 status]

That 401 HTTP status code for the Configuration load is indicative of a failed authentication. 401 is “Unauthorized”. This is completely expected because we have not configured an authentication provider yet.

How App Service Authentication Works

Working with Azure App Service Authentication is relatively easy. A JWT-based token is submitted either as a cookie or as the X-ZUMO-AUTH header. The information necessary to decode that token is provided in environment variables:

  • WEBSITE_AUTH_ENABLED is True if the Authentication system is loaded
  • WEBSITE_AUTH_SIGNING_KEY is the key used to sign the JWT
  • WEBSITE_AUTH_ALLOWED_AUDIENCES is the list of allowed audiences for the JWT

If WEBSITE_AUTH_ENABLED is set to True, decode the X-ZUMO-AUTH header to see if the user is valid. If the user is valid, then do a HTTP GET of {issuer}/.auth/me with the X-ZUMO-AUTH header passed through to get a JSON blob with the claims. If the token is expired or non-existent, then don’t authenticate the user.

This has an issue in that you have to make another HTTP call to get the claims. This is a small overhead, and the team is working to fix this for “out of process” services. In-process services, such as PHP and ASP.NET, have access to server variables. The JSON blob that is returned by calling the /.auth/me endpoint is presented as a server variable, so it doesn’t need to be fetched. ASP.NET Core applications are “out of process”, so we can’t use this mechanism.

Configuring the ASP.NET Core Application

In the Configure() method of the Startup.cs file, I need to do something like the following:

            app.UseAzureAppServiceAuthentication(new AzureAppServiceAuthenticationOptions
            {
                SigningKey = Configuration["AzureAppService:Auth:SigningKey"],
                AllowedAudiences = new[] { $"https://{Configuration["AzureAppService:Website:HOST_NAME"]}/" },
                AllowedIssuers = new[] { $"https://{Configuration["AzureAppService:Website:HOST_NAME"]}/" }
            });

This is just pseudo-code right now because neither the UseAzureAppServiceAuthentication() method nor the AzureAppServiceAuthenticationOptions class exist. Fortunately, there are many templates for a successful implementation of authentication. (Side note: I love open source!) The closest one to mine is the JwtBearer authentication implementation. I’m not going to show off the full implementation – you can go check it out yourself. However, the important work is done in the AzureAppServiceAuthenticationHandler file.
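
For reference, here is a minimal sketch of what the options class could look like, based purely on how it is used in the Configure() snippet above (the real implementation is in the repository):

using Microsoft.AspNetCore.Builder;
using System.Collections.Generic;

public class AzureAppServiceAuthenticationOptions : AuthenticationOptions
{
    public AzureAppServiceAuthenticationOptions()
    {
        // The scheme name that ends up on the AuthenticationTicket
        AuthenticationScheme = "AzureAppService";
        AutomaticAuthenticate = true;
    }

    // The WEBSITE_AUTH_SIGNING_KEY value (hex or base-64 encoded)
    public string SigningKey { get; set; }

    // Valid values for the aud claim of the JWT
    public IEnumerable<string> AllowedAudiences { get; set; }

    // Valid values for the iss claim of the JWT
    public IEnumerable<string> AllowedIssuers { get; set; }
}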

The basic premise is this:

  1. If we don’t have an authentication source (a token), then return AuthenticateResult.Skip().
  2. If we have an authentication source, but it’s not valid, return AuthenticateResult.Fail().
  3. If we have a valid authentication source, decode it, create an AuthenticationTicket and then return AuthenticateResult.Success().

Detecting the authentication source means digging into the Request.Headers[] collection to see if there is an appropriate header. The version I have created supports both the X-ZUMO-AUTH and Authorization headers (for future compatibility):

            // Grab the X-ZUMO-AUTH token if it is available
            // If not, then try the Authorization Bearer token
            string token = Request.Headers["X-ZUMO-AUTH"];
            if (string.IsNullOrEmpty(token))
            {
                string authorization = Request.Headers["Authorization"];
                if (string.IsNullOrEmpty(authorization))
                {
                    // Neither header is present - nothing for this handler to do
                    return AuthenticateResult.Skip();
                }
                if (authorization.StartsWith("Bearer ", StringComparison.OrdinalIgnoreCase))
                {
                    token = authorization.Substring("Bearer ".Length).Trim();
                }
                if (string.IsNullOrEmpty(token))
                {
                    // The Authorization header did not contain a usable Bearer token
                    return AuthenticateResult.Skip();
                }
            }
            Logger.LogDebug($"Obtained Authorization Token = {token}");

The next step is to validate the token and decode the result. If the service is running inside of Azure App Service, then the validation has been done for me and I only need to decode the token. If I am running locally, then I should validate the token. The signing key for the JWT is encoded in the WEBSITE_AUTH_SIGNING_KEY environment variable. Theoretically, the WEBSITE_AUTH_SIGNING_KEY can be hex encoded or base-64 encoded; it will be hex-encoded the majority of the time. Using the configuration provider from the last post, this appears as the AzureAppService:Auth:SigningKey configuration variable, and I can place that into the options for the authentication provider during the Configure() method of Startup.cs.

So, what’s the code for validating and decoding the token? It looks like this:

            // Convert the signing key we have to something we can use
            var signingKeys = new List<SecurityKey>();
            // If the key is a plain passphrase, use its UTF-8 bytes directly
            signingKeys.Add(new SymmetricSecurityKey(Encoding.UTF8.GetBytes(Options.SigningKey)));
            // If it's base-64 encoded
            try
            {
                signingKeys.Add(new SymmetricSecurityKey(Convert.FromBase64String(Options.SigningKey)));
            } catch (FormatException) { /* The key was not base 64 */ }
            // If it's hex encoded, then decode the hex and add it
            try
            {
                if (Options.SigningKey.Length % 2 == 0)
                {
                    signingKeys.Add(new SymmetricSecurityKey(
                        Enumerable.Range(0, Options.SigningKey.Length)
                                  .Where(x => x % 2 == 0)
                                  .Select(x => Convert.ToByte(Options.SigningKey.Substring(x, 2), 16))
                                  .ToArray()
                    ));
                }
            } catch (Exception) {  /* The key was not hex-encoded */ }

            // validation parameters
            var websiteAuthEnabled = Environment.GetEnvironmentVariable("WEBSITE_AUTH_ENABLED");
            var inAzureAppService = (websiteAuthEnabled != null && websiteAuthEnabled.Equals("True", StringComparison.OrdinalIgnoreCase));
            var tokenValidationParameters = new TokenValidationParameters
            {
                // The signature must have been created by the signing key
                ValidateIssuerSigningKey = !inAzureAppService,
                IssuerSigningKeys = signingKeys,

                // The Issuer (iss) claim must match
                ValidateIssuer = true,
                ValidIssuers = Options.AllowedIssuers,

                // The Audience (aud) claim must match
                ValidateAudience = true,
                ValidAudiences = Options.AllowedAudiences,

                // Validate the token expiry
                ValidateLifetime = true,

                // If you want to allow clock drift, set that here
                ClockSkew = TimeSpan.FromSeconds(60)
            };

            // validate the token we received
            var tokenHandler = new JwtSecurityTokenHandler();
            SecurityToken validatedToken;
            ClaimsPrincipal principal;
            try
            {
                principal = tokenHandler.ValidateToken(token, tokenValidationParameters, out validatedToken);
            }
            catch (Exception ex)
            {
                Logger.LogError(101, ex, "Cannot validate JWT");
                return AuthenticateResult.Fail(ex);
            }

This only gives us a subset of the claims though. We want to swap out the principal (in this case) with the results of the call to /.auth/me that gives us the actual claims:

            try
            {
                // client is an HttpClient instance (a field on the handler in the full implementation)
                client.BaseAddress = new Uri(validatedToken.Issuer);
                client.DefaultRequestHeaders.Clear();
                client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
                client.DefaultRequestHeaders.Add("X-ZUMO-AUTH", token);

                HttpResponseMessage response = await client.GetAsync("/.auth/me");
                if (response.IsSuccessStatusCode)
                {
                    var jsonContent = await response.Content.ReadAsStringAsync();
                    var userRecord = JsonConvert.DeserializeObject<List<AzureAppServiceClaims>>(jsonContent).First();

                    // Create a new ClaimsPrincipal based on the results of /.auth/me
                    List<Claim> claims = new List<Claim>();
                    foreach (var claim in userRecord.UserClaims)
                    {
                        claims.Add(new Claim(claim.Type, claim.Value));
                    }
                    claims.Add(new Claim("x-auth-provider-name", userRecord.ProviderName));
                    claims.Add(new Claim("x-auth-provider-token", userRecord.IdToken));
                    claims.Add(new Claim("x-user-id", userRecord.UserId));
                    var identity = new GenericIdentity(principal.Claims.Where(x => x.Type.Equals("stable_sid")).First().Value, Options.AuthenticationScheme);
                    identity.AddClaims(claims);
                    principal = new ClaimsPrincipal(identity);
                }
                else if (response.StatusCode == System.Net.HttpStatusCode.Unauthorized)
                {
                    return AuthenticateResult.Fail("/.auth/me says you are unauthorized");
                }
                else
                {
                    Logger.LogWarning($"/.auth/me returned status = {response.StatusCode} - skipping user claims population");
                }
            }
            catch (Exception ex)
            {
                Logger.LogWarning($"Unable to get /.auth/me user claims - skipping (ex = {ex.GetType().FullName}, msg = {ex.Message})");
            }

I can skip this phase if I want to by setting an option. The result of this code is a new identity that has all the claims and a “name” that is the same as the stable_sid value from the original token. If the /.auth/me endpoint says the token is bad, I return a failed authentication. For any other failure, I log a warning, skip the claims population, and keep the principal from the validated token.

I could use the UserId field from the user record. However, that is not guaranteed to be stable as users can and do change the email address on their social accounts. I may update this class in the future to use stable_sid by default but allow the developer to change it if they desire.

The final step is to create an AuthenticationTicket and return success:

            // Generate a new authentication ticket and return success
            var ticket = new AuthenticationTicket(
                principal, new AuthenticationProperties(),
                Options.AuthenticationScheme);

            return AuthenticateResult.Success(ticket);

You can check out the code on my GitHub Repository at tag p5.

Wrap Up

There is still a little bit to do. Right now, the authentication filter returns a 401 Unauthorized if you try to hit an unauthorized page. This isn’t useful for a web page, but is completely suitable for a web API. It is thus “good enough” for Azure Mobile Apps. If you are using this functionality in an MVC application, then it is likely you want to set up some sort of authorization redirect to a login provider.
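
For example, in an ASP.NET Core 1.x MVC application, you could layer the cookie middleware on top so that browsers get redirected to a login page instead of the bare 401 (a sketch, assuming a login page exists at /Account/Login):

app.UseCookieAuthentication(new CookieAuthenticationOptions
{
    AuthenticationScheme = "Cookies",
    AutomaticChallenge = true,
    // Unauthenticated browsers are redirected here instead of receiving a 401
    LoginPath = new PathString("/Account/Login")
});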

In the next post, I’m going to start on the work for Azure Mobile Apps.

Configuring ASP.NET Core Applications in Azure App Service

ASP.NET Core has a nice, robust configuration framework. You will normally see the following (or something akin to it) in the constructor of the Startup.cs file:

        public Startup(IHostingEnvironment env)
        {
            var builder = new ConfigurationBuilder()
                .SetBasePath(env.ContentRootPath)
                .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
                .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
                .AddEnvironmentVariables();
            Configuration = builder.Build();
        }

This version gets its settings from JSON files and then overrides these settings with values from environment variables. It seems ideal for Azure App Service because all the settings – from the app settings to the connection strings – are provided as environment variables. Unfortunately, they don’t necessarily get placed in the right place in the configuration. For example, Azure App Service has a Data Connections facility to define the various other resources that the app service uses; you can reach it via the Data Connections menu of your App Service. The common thing to do for Azure Mobile Apps is to define a connection to a SQL Azure instance called MS_TableConnectionString. However, when this is turned into an environment variable, the environment variable is called SQLAZURECONNSTR_MS_TableConnectionString, which is hardly obvious.
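
For example, the raw value of that data connection is only visible under the prefixed name:

// The prefix encodes the connection type (SQL Azure in this case)
var raw = Environment.GetEnvironmentVariable("SQLAZURECONNSTR_MS_TableConnectionString");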

Fortunately, the ASP.NET Core configuration library is extensible. What I am going to do is develop an extension to the configuration library for handling these data connections. When developing locally, you will be able to add an appsettings.Development.json file with the following contents:

{
    "ConnectionStrings": {
        "MS_TableConnectionString": "my-connection-string"
    },
    "Data": {
        "MS_TableConnectionString": {
            "Type": "SQLAZURE",
            "ConnectionString": "my-connection-string"
        }
    }
}

When in production, this configuration is produced by the Azure App Service. When in development (and running locally), you will want to produce this JSON file yourself. Alternatively, you can adjust the launchSettings.json file to add the appropriate environment variable for local running.
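
For example, a hypothetical launchSettings.json profile that simulates the App Service environment might look like this (the profile name and connection string are placeholders):

{
    "profiles": {
        "ExampleServer": {
            "commandName": "Project",
            "environmentVariables": {
                "ASPNETCORE_ENVIRONMENT": "Development",
                "SQLAZURECONNSTR_MS_TableConnectionString": "my-connection-string"
            }
        }
    }
}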

To start, here is what our Startup.cs constructor will look like now:

        public Startup(IHostingEnvironment env)
        {
            var builder = new ConfigurationBuilder()
                .SetBasePath(env.ContentRootPath)
                .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
                .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
                .AddAzureAppServiceDataConnections()
                .AddEnvironmentVariables();
            Configuration = builder.Build();
        }

Configuration Builder Extensions

In order to wire our configuration provider into the configuration framework, I need to provide an extension method to the configuration builder. This is located in Extensions\AzureAppServiceConfigurationBuilderExtensions.cs:

using Microsoft.Extensions.Configuration;

namespace ExampleServer.Extensions
{
    public static class AzureAppServiceConfigurationBuilderExtensions
    {
        public static IConfigurationBuilder AddAzureAppServiceDataConnections(this IConfigurationBuilder builder)
        {
            return builder.Add(new AzureAppServiceDataConnectionsSource());
        }
    }
}

This is a fairly standard method of hooking extensions into extensible builder-type objects.

The Configuration Source and Provider

To actually provide the configuration source, I need to write two classes – a source, which is relatively simple, and a provider, which does most of the actual work. Fortunately, I can extend other classes within the configuration framework to ease the code I have to write. Let’s start with the easy one – the source. This is the object that is added to the configuration builder (located in Extensions\AzureAppServiceDataConnectionsSource.cs):

using Microsoft.Extensions.Configuration;
using System;

namespace ExampleServer.Extensions
{
    public class AzureAppServiceDataConnectionsSource : IConfigurationSource
    {
        public IConfigurationProvider Build(IConfigurationBuilder builder)
        {
            return new AzureAppServiceDataConnectionsProvider(Environment.GetEnvironmentVariables());
        }
    }
}

I’m passing the current environment into my provider because I want to be able to mock an environment later on for testing purposes. The configuration source (which is linked into the configuration builder) returns a configuration provider. The configuration provider is where the work is done (located in Extensions\AzureAppServiceDataConnectionsProvider.cs):

using Microsoft.Extensions.Configuration;
using System.Collections;
using System.Text.RegularExpressions;

namespace ExampleServer.Extensions
{
    internal class AzureAppServiceDataConnectionsProvider : ConfigurationProvider
    {
        /// <summary>
        /// The environment (key-value pairs of strings) that we are using to generate the configuration
        /// </summary>
        private IDictionary environment;

        /// <summary>
        /// The regular expression used to match the key in the environment for Data Connections.
        /// </summary>
        private Regex MagicRegex = new Regex(@"^([A-Z]+)CONNSTR_(.+)$");

        public AzureAppServiceDataConnectionsProvider(IDictionary environment)
        {
            this.environment = environment;
        }

        /// <summary>
        /// Loads the appropriate settings into the configuration.  The Data object is provided for us
        /// by the ConfigurationProvider
        /// </summary>
        /// <seealso cref="Microsoft.Extensions.Configuration.ConfigurationProvider"/>
        public override void Load()
        {
            foreach (string key in environment.Keys)
            {
                Match m = MagicRegex.Match(key);
                if (m.Success)
                {
                    var conntype = m.Groups[1].Value;
                    var connname = m.Groups[2].Value;
                    var connstr = environment[key] as string;

                    Data[$"Data:{connname}:Type"] = conntype;
                    Data[$"Data:{connname}:ConnectionString"] = connstr;
                    Data[$"ConnectionStrings:{connname}"] = connstr;
                }
            }
        }
    }
}

The bulk of the work is done within the Load() method. This cycles through each environment variable. If it matches the pattern we are looking for, it extracts the pieces of data we need and puts them in the Data object. The Data object is provided by the ConfigurationProvider, and all the other lifecycle methods are handled for me by the ConfigurationProvider class.
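
As a quick sanity check, here is how the provider behaves when fed a mocked environment (a sketch using the classes above; Hashtable, from System.Collections, stands in for the result of Environment.GetEnvironmentVariables(), and TryGet() comes from the ConfigurationProvider base class):

// Mock environment containing one SQL Azure data connection
var env = new Hashtable
{
    ["SQLAZURECONNSTR_MS_TableConnectionString"] = "my-connection-string"
};

var provider = new AzureAppServiceDataConnectionsProvider(env);
provider.Load();

// All three keys are now populated
provider.TryGet("Data:MS_TableConnectionString:Type", out string connType);            // "SQLAZURE"
provider.TryGet("Data:MS_TableConnectionString:ConnectionString", out string connStr); // "my-connection-string"
provider.TryGet("ConnectionStrings:MS_TableConnectionString", out string legacy);      // "my-connection-string"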

Integrating with Dependency Injection

I want to provide a singleton service to my application that provides access to the data configuration. This service will be injected into any code I want via dependency injection. To do this, I need an interface:

using System.Collections.Generic;

namespace ExampleServer.Services
{
    public interface IDataConfiguration
    {
        IEnumerable<DataConnectionSettings> GetDataConnections();
    }
}

This refers to a model (and I produce a list of these models later):

namespace ExampleServer.Services
{
    public class DataConnectionSettings
    {
        public string Name { get; set; }
        public string Type { get; set; }
        public string ConnectionString { get; set; }
    }
}

I also need a concrete implementation of the IDataConfiguration interface to act as the service:

using Microsoft.Extensions.Configuration;
using System.Collections.Generic;

namespace ExampleServer.Services
{
    public class DataConfiguration : IDataConfiguration
    {
        List<DataConnectionSettings> providerSettings;

        public DataConfiguration(IConfiguration configuration)
        {
            providerSettings = new List<DataConnectionSettings>();
            foreach (var section in configuration.GetChildren())
            {
                providerSettings.Add(new DataConnectionSettings
                {
                    Name = section.Key,
                    Type = section["Type"],
                    ConnectionString = section["ConnectionString"]
                });
            }
        }

        public IEnumerable<DataConnectionSettings> GetDataConnections()
        {
            return providerSettings;
        }
    }
}

Now that I have defined the interface and concrete implementation of the service, I can wire it into the dependency injection system in the ConfigureServices() method within Startup.cs:

        public void ConfigureServices(IServiceCollection services)
        {
            services.AddSingleton<IDataConfiguration>(new DataConfiguration(Configuration.GetSection("Data")));

            // Add framework services.
            services.AddMvc();
        }

I can now write my test page. The main work is in the HomeController.cs:

        public IActionResult DataSettings([FromServices] IDataConfiguration dataConfiguration)
        {
            ViewBag.Data = dataConfiguration.GetDataConnections();
            return View();
        }

The parameter to the DataSettings method will be provided from my service via dependency injection. Now all I have to do is write the view for it:

<h1>Data Settings</h1>

<div class="row">
    <table class="table table-striped">
        <tr>
            <th>Provider Name</th>
            <th>Type</th>
            <th>Connection String</th>
        </tr>
        <tbody>
            @foreach (var item in ViewBag.Data)
            {
                <tr>
                    <td>@item.Name</td>
                    <td>@item.Type</td>
                    <td>@item.ConnectionString</td>
                </tr>
            }
        </tbody>
    </table>
</div>

Publish this to your Azure App Service. Don’t forget to link a SQL Azure instance via the Data Connections menu. Then browse to https://yoursite.azurewebsites.net/Home/DataSettings – you should see your SQL Azure instance listed on the page.

Now for the good part…

ASP.NET Core configuration already does this for you. I didn’t find this out until AFTER I had written all this code. You can just add the .AddEnvironmentVariables() call to your configuration builder and the rename comes along for free. Specifically:

  • ConnectionStrings:MS_TableConnectionString will point to your connection string
  • ConnectionStrings:MS_TableConnectionString_ProviderName will point to the provider class name
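
In other words, once .AddEnvironmentVariables() has run, the standard accessor just works:

// Reads ConnectionStrings:MS_TableConnectionString, which the environment
// variable provider populated from SQLAZURECONNSTR_MS_TableConnectionString
var connectionString = Configuration.GetConnectionString("MS_TableConnectionString");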

So you don’t really need to do anything at all. However, this is a great reference guide for producing the next configuration provider, as the sample code is really easy to follow.

The code for this blog post is in my GitHub Repository at tag p3. In the next post, I’ll tackle authentication by linking Azure App Service Authentication with ASP.NET Core Identity.

Running ASP.NET Core applications in Azure App Service

One of the things I get asked about semi-regularly is when Azure Mobile Apps is going to support .NET Core. It’s a logical progression for most people and many ASP.NET developers are planning future web sites to run on ASP.NET Core. Also, the ASP.NET Core programming model makes a lot more sense (at least to me) than the older ASP.NET applications. Finally, we have an issue open on the subject. So, what is holding us back? Well, there are a bunch of things. Some have been solved already and some need a lot of work. In the coming weeks, I’m going to be writing about the various pieces that need to be in place before we can say “Azure Mobile Apps is there”.

Of course, if you want a mobile backend, you can always hop over to Visual Studio Mobile Center. This provides a mobile backend for you without having to write any code. (Full disclosure: I’m now a program manager on that team, so I may be slightly biased). However, if you are thinking ASP.NET Core, then you likely want to write the code.

Let’s get started with something that does exist. How does one run ASP.NET Core applications on Azure App Service? Well, there are two methods. The first involves uploading your application to Azure App Service via the Visual Studio Publish… dialog or via Continuous Integration from GitHub, Visual Studio Team Services or even Dropbox. It’s a relatively easy method and one I would recommend. There is a gotcha, which I’ll discuss below.

The second method uses a Docker container to house the code that is then deployed onto a Linux App Service. This is still in preview (as of writing), so I can’t recommend this for production workloads.

Create a New ASP.NET Core Application

Let’s say you opened up Visual Studio 2017 (RC right now) and created a brand new ASP.NET Core MVC application – the basis for my research here.

  • Open up Visual Studio 2017 RC.
  • Select File > New > Project…
  • Select the ASP.NET Core Web Application (.NET Core).
    • Fill in an appropriate name for the solution and project, just as normal.
    • Click OK to create the project.
  • Select ASP.NET Core 1.1 from the framework drop-down (it will say ASP.NET Core 1.0 initially).
  • Select Web Application in the ASP.NET Core 1.1 Templates selection.
  • Click OK.

I called my solution netcore-server and the project ExampleServer. At this point, Visual Studio will go off and create a project for you. You can see what it creates easily enough, but I’ve checked it into my GitHub repository at tag p0.

I’m not going to cover ASP.NET Core programming too much in this series. You can read the definitive guide on their documentation site, and I would recommend you start by understanding ASP.NET Core programming before getting into the changes here.

Go ahead and run the service (either as a Kestrel service or an IIS Express service – it works with both). This is just to make sure that you have a working site.

Add Logging to your App

Logging is one of those central things that is needed in any application. There are so many things you can’t do (including diagnose issues) if you don’t have appropriate logging. Fortunately, ASP.NET Core has logging built-in. Let’s add some to the Controllers\HomeController.cs file:

using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;

namespace ExampleServer.Controllers
{
    public class HomeController : Controller
    {
        private ILogger logger;

        public HomeController(ILoggerFactory loggerFactory)
        {
            logger = loggerFactory.CreateLogger(this.GetType().FullName);
        }

        public IActionResult Index()
        {
            logger.LogInformation("In Index of the HomeController", null);
            return View();
        }
        // Rest of the file here

I’ve added the logger factory via dependency injection, then logged a message whenever the Index file is served in the home controller. If you run this version of the code (available on the GitHub repository at tag p1), you will see the following in your Visual Studio output window:

[Screenshot: 20170216-01 – the Visual Studio Output window showing the informational log message]

It’s swamped by the Application Insights data, but you can clearly see the informational message there.

Deploy your App to Azure App Service

Publishing to Azure App Service is relatively simple – right-click on the project and select Publish… to kick off the process. The layout of the windows has changed from Visual Studio 2015, but it’s the same process. You can create a new App Service or use an existing one. Once you have answered all the questions, your site will be published. Eventually, your site will be displayed in your web browser.

Turn on Diagnostic Logging

  • Click View > Server Explorer to add the Server Explorer to your workspace.
  • Expand the Azure node, the App Service node, and finally your resource group node.
  • Right-click the app service and select View Settings.
  • Turn on logging and set the logging level to verbose:

[Screenshot: 20170216-02 – the App Service settings with diagnostic logging enabled]

  • Click Save to save the settings (the site will restart).
  • Right-click the app service in the server explorer again and this time select View Streaming Logs.
  • Wait until you see that you are connected to the log streaming service (in the Output window).

Now refresh your browser so that it reloads the index page again. Note how you see the access logs (which files have been requested) but the log message we put into the code is not there.

The Problem and Solution

The problem is, hopefully, obvious. ASP.NET Core does not by default feed logs to Azure App Service. We need to enable that feature in the .NET Core host. We do this in the Program.cs file:

using System.IO;
using Microsoft.AspNetCore.Hosting;

namespace ExampleServer
{
    public class Program
    {
        public static void Main(string[] args)
        {
            var host = new WebHostBuilder()
                .UseKestrel()
                .UseContentRoot(Directory.GetCurrentDirectory())
                .UseIISIntegration()
                .UseStartup<Startup>()
                .UseApplicationInsights()
                .UseAzureAppServices()
                .Build();

            host.Run();
        }
    }
}

You will also need to add the Microsoft.AspNetCore.AzureAppServicesIntegration package from NuGet for this to work. Once you have made this change, you can deploy it and watch the logs again:

[Screenshot: 20170216-03 – the log stream now showing the application log message]

If you have followed the instructions, you will need to switch the Output window back to the Azure logs. The output window will have been switched to Build during the publish process.

Adjusting the WebHostBuilder for the environment

It’s likely that you won’t want Application Insights and Azure App Services logging except when you are running on Azure App Service. There are a number of environment variables that Azure App Service uses and you can leverage these as well. My favorites are REGION_NAME (which indicates which Azure region your service is running in) and WEBSITE_OWNER_NAME (which is a combination of a bunch of things). You can test for these and adjust the pipeline accordingly:

using Microsoft.AspNetCore.Hosting;
using System;
using System.IO;

namespace ExampleServer
{
    public class Program
    {
        public static void Main(string[] args)
        {
            var hostBuilder = new WebHostBuilder()
                .UseKestrel()
                .UseContentRoot(Directory.GetCurrentDirectory())
                .UseIISIntegration()
                .UseStartup<Startup>()
                .UseApplicationInsights();

            var regionName = Environment.GetEnvironmentVariable("REGION_NAME");
            if (regionName != null)
            {
                hostBuilder.UseAzureAppServices();
            }
                
            var host = hostBuilder.Build();

            host.Run();
        }
    }
}

You can download this code at my GitHub repository at tag p2.

The Latest Wrinkle (Updated 4/10/2017)

The latest edition of the WindowsAzure.Storage package has breaking changes, so it can’t be included until a major release. In the interim, you will need to edit your .csproj file and add the following:

<PackageTargetFallback>$(PackageTargetFallback);portable-net40+sl5+win8+wp8+wpa81;portable-net45+win8+wp8+wpa81</PackageTargetFallback>
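
That element belongs inside a <PropertyGroup> in the .csproj, alongside the target framework (a sketch – your other property values will differ):

<PropertyGroup>
  <TargetFramework>netcoreapp1.1</TargetFramework>
  <PackageTargetFallback>$(PackageTargetFallback);portable-net40+sl5+win8+wp8+wpa81;portable-net45+win8+wp8+wpa81</PackageTargetFallback>
</PropertyGroup>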

Writing HTTP CRUD in Azure Functions

Over the last two posts, I’ve introduced writing Azure Functions locally and deploying them to the cloud. It’s time to do something useful with them. In this post, I’m going to introduce how to write a basic HTTP router. If you follow my blog and other work, you’ll see where this is going pretty quickly. If you are only interested in Azure Functions, you’ll have to wait a bit to see how this evolves.

Create a new Azure Function

I started by installing the latest azure-functions-cli package:

npm i -g azure-functions-cli

Then I created a new Azure Function App:

mkdir dynamic-tables
cd dynamic-tables
func new

Finally, I created a function called todoitem:

[Screenshot: dec15-01 – creating the todoitem function with func new]

Customize the HTTP Route Prefix

By default, any HTTP trigger is bound to /api/function, where function is the name of your function. I want full control over where my function exists, so I’m going to fix this in the host.json file:

{
    "id":"6ada7ae64e8a496c88617b7ab6682810",
    "http": {
        "routePrefix": ""
    }
}

The routePrefix is the important thing here. The value is normally “api”, but I’ve cleared it. That means I can put my routes anywhere.

Set up the Function Bindings

In the todoitem directory are two files. The first, function.json, describes the bindings. Here is the version for my function:

{
    "disabled": false,
    "bindings": [
        {
            "name": "req",
            "type": "httpTrigger",
            "direction": "in",
            "authLevel": "function",
            "methods": [ "GET", "POST", "PATCH", "PUT", "DELETE" ],
            "route": "tables/todoitem/{id:alpha?}"
        },
        {
            "type": "http",
            "direction": "out",
            "name": "res"
        }
    ]
}

This is going to get triggered by an HTTP trigger, and will accept five methods: GET, POST, PUT, PATCH and DELETE. In addition, I’ve defined a route that contains an optional string for an id. I can, for example, do GET /tables/todoitem/foo and this will be accepted. On the outbound side, I want to respond to requests, so I’ve got a response object. The HTTP Trigger for Node is modelled after ExpressJS, so the req and res objects are mostly equivalent to the ExpressJS Request and Response objects.

Write the Code

The code for this function is in todoitem/index.js:

/**
 * Routes the request to the table controller to the correct method.
 *
 * @param {Function.Context} context - the table controller context
 * @param {Express.Request} req - the actual request
 */
function tableRouter(context, req) {
    var res = context.res;
    var id = context.bindings.id;

    switch (req.method) {
        case 'GET':
            if (id) {
                getOneItem(req, res, id);
            } else {
                getAllItems(req, res);
            }
            break;

        case 'POST':
            insertItem(req, res);
            break;

        case 'PATCH':
            patchItem(req, res, id);
            break;

        case 'PUT':
            replaceItem(req, res, id);
            break;

        case 'DELETE':
            deleteItem(req, res, id);
            break;

        default:
            res.status(405).json({ error: "Operation not supported", message: `Method ${req.method} not supported`})
    }
}

function getOneItem(req, res, id) {
    res.status(200).json({ id: id, message: "getOne" });
}

function getAllItems(req, res) {
    res.status(200).json({ query: req.query, message: "getAll" });
}

function insertItem(req, res) {
    res.status(200).json({ body: req.body, message: "insert"});
}

function patchItem(req, res, id) {
    res.status(405).json({ error: "Not Supported", message: "PATCH operations are not supported" });
}

function replaceItem(req, res, id) {
    res.status(200).json({ body: req.body, id: id, message: "replace" });
}

function deleteItem(req, res, id) {
    res.status(200).json({ id: id, message: "delete" });
}

module.exports = tableRouter;

I use a tableRouter method (and that is what our function calls) to route the HTTP call to the right CRUD method I want to execute. It’s up to me to put whatever CRUD code I need to execute and respond to the request in those additional methods. In this case, I’m just returning a 200 status (OK) and some JSON data. One key piece is differentiating between a GET /tables/todoitem and a GET /tables/todoitem/foo. The former is meant to return all records and the latter is meant to return a single record. If the id is set, we call the single record GET method and if not, then we call the multiple record GET method.

What’s the difference between PATCH and PUT? In REST semantics, PATCH is used when you want to do a partial update of a record. PUT is used when you want to send a full record. This CRUD recipe uses both, but you may decide to use one or the other.
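
As an illustration, assuming a record with id, text and complete fields, the two requests would look like this:

// PATCH sends only the fields that changed
PATCH /tables/todoitem/myid
{ "complete": true }

// PUT sends the complete record
PUT /tables/todoitem/myid
{ "id": "myid", "text": "Get milk", "complete": true }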

Running Locally

As with the prior blog post, you can run func run test-func --debug to start the backend and get ready for the debugger. You can then use Postman to send requests to your backend. (Note: Don’t use func run todoitem --debug – this will cause a crash at the moment!). You’ll get something akin to the following:

[Screenshot: dec15-02 – a Postman request against the locally running function]

That’s it for today. I’ll be progressing on this project for a while, so expect more information as I go along!

Deploying Azure Functions Automatically

In my last post, I went over how to edit, run and debug Azure Functions on your local machine. Eventually, however, you want to place these functions in the cloud. They are, after all, designed to do things in the cloud on dynamic compute. There are two levels of automation you can use:

  1. Continuous Deployment
  2. Automated Resource Creation

Most of you will be interested in continuous deployment. That is, you create your Azure Functions app once and then you just push updates to it via a source code control system. However, a true DevOps mindset requires “configuration as code”, so we’ll go over how to download an Azure Resource Manager (ARM) template for the function app and resource group.

Creating a Function App in the Portal

Creating an Azure Functions app in the portal is a straightforward process.

  1. Create a resource group.
  2. Create a function app in the resource group.

Log into the Azure portal. Select Resource Groups in the left-hand menu (which may be under the “More Services” link in your case), then click on the + Add link in the top bar to create a new resource group. Give it a unique name (it only has to be unique within your subscription), select a nominal location for the resource group, then click on Create:

[Screenshot: create-rg-1 – creating a new resource group]

Once the resource group is created, click into it and then select + Add inside the resource group. Enter “Function App” in the search box, then select Function App from the results and click Create:

[Screenshot: create-func-1 – creating the Function App]

Fill in the name and select the region that you want to place the Azure Functions in. Ensure the “Consumption Plan” is selected. This is the dynamic compute plan, so you only pay for resources when your functions are actually being executed. The service will create an associated storage account for storing your functions in the same resource group.

Continuous Deployment

In my last blog post, I created a git repository to hold my Azure Function code. I can now link this git repository to the Function App in the cloud as follows:

  • Open the Function App.
  • Click the Function app settings link in the bottom right corner.
  • Click the Go to App Service Settings button.
  • Click the Deployment Credentials menu option.
  • Fill in the form for the deployment username and password (twice).
  • Click Save at the top of the blade.

You need to know the username and password of your git repository in the cloud that is attached to your Function App so that you can push to it. You’ve just set those credentials.

  • Click the Deployment options menu option.
  • Click Choose Source.
  • Click Local Git Repository.
  • Click OK.

I could have just as easily linked my function app to GitHub, Visual Studio Team Services or BitBucket. My git repository is local to my machine, so a local git repository is suitable for this purpose.

  • Click the Properties menu option.
  • Copy the GIT URL field.

I now need to add the Azure hosted git repository as a remote on my local git repository. To do this, open a PowerShell console, change directory to the function app and type the following:

git remote add azure <the-git-url>
git push azure master

This will push the contents of the git repository up to the cloud, which will then do a deployment of the functions for you. You will be prompted for your username and password that you set when setting up the deployment credentials earlier.

[Screenshot: create-func-2 – pushing the local git repository to Azure]

Once the deployment is done, you can switch back to the Function App in the portal and you will see that your function is deployed. An important factor is that you are now editing the files associated with the function on your local machine. You can no longer edit the files in the cloud, as any changes would be overwritten by the next deployment from your local machine. To remind you of this, Azure Functions displays a helpful warning:

[Screenshot: create-func-3 – the read-only warning in the Functions portal]

If you edit your files on the local machine, remember to push them to the Azure remote to deploy them.

Saving the ARM Template

You are likely to only need the continuous deployment process shown above. However, in case you like checking in the configuration as code, here is how you do it. Firstly, go to your resource group:

[Screenshot: create-rg-2 – the resource group blade with the Automation script menu item]

Note the menu item entitled Automation script – that’s the one you want. The portal will generate an Azure Resource Manager (ARM) template plus a PowerShell script or CLI script to run it. Click on Download to download all the files – you will get a ZIP file.

Before extracting the ZIP file, you need to unblock it. In the File Explorer, right-click on the ZIP file and select Properties.

[Screenshot: create-rg-3 – the ZIP file Properties dialog]

Check the Unblock box and then click on OK. You can now extract the ZIP file with your favorite tool. I just right-click and select Extract All….

Creating a new Azure Function with the ARM Template

You can now create a new Azure Function App with the same template as the original by running .\deploy.ps1 and filling in the fields. Yep – it’s that simple!

Offline Sync with Azure Mobile Apps and Apache Cordova

In the past, I’ve introduced you to a TodoList application built in Apache Cordova so that it is available for iOS, Android or any other platform that Apache Cordova supports. Recently, we released a new beta for the Azure Mobile Apps Cordova SDK that supports offline sync, a feature we didn’t have before.

Underneath, the Cordova offline sync functionality uses SQLite – this means it isn’t an option at this point for HTML/JS applications. We’ll have to work out how to do this with IndexedDB or something similar, but for now that isn’t an option without a lot of custom work.

Let’s take a look at the differences.

Step 1: New variables

Just like other clients, I need a local store reference and a sync context that is used to keep track of the operational aspects for synchronization:

    var client,        // Connection to the Azure Mobile App backend
        store,         // Sqlite store to use for offline data sync
        syncContext,   // Offline data sync context
        todoItemTable; // Reference to a table endpoint on backend

Step 2: Initialization

All the initialization is done in the onDeviceReady() method. I have to define a table schema so that the SQLite database matches what is on the server:

function onDeviceReady() {

    // Create the connection to the backend
    client = new WindowsAzure.MobileServiceClient('https://yoursite.azurewebsites.net');

    // Set up the SQLite database
    store = new WindowsAzure.MobileServiceSqliteStore();

    // Define the table schema
    store.defineTable({
        name: 'todoitem',
        columnDefinitions: {
            // sync interface
            id: 'string',
            deleted: 'boolean',
            version: 'string',
            // Now for the model
            text: 'string',
            complete: 'boolean'
        }
    }).then(function () {
        // Initialize the sync context
        syncContext = client.getSyncContext();
        syncContext.pushHandler = {
            onConflict: function (serverRecord, clientRecord, pushError) {
                window.alert('TODO: onConflict');
            },
            onError: function(pushError) {
                window.alert('TODO: onError');
            }
        };
        return syncContext.initialize(store);
    }).then(function () {
        // I can now get a reference to the table
        todoItemTable = client.getSyncTable('todoitem');

        refreshData();

        $('#add-item').submit(addItemHandler);
        $('#refresh').on('click', refreshData);
    });
}

There are three distinct areas here, separated by promises. The first promise defines the tables. If you are using multiple tables, you must ensure that all promises are complete before progressing to the next section. You can do this with Promise.all(), for example.

The second section initializes the sync context. You need to define two sections for the push handler – the conflict handler and the error handler. I’ll go into the details of a conflict handler at a later date, but this is definitely something you will want to spend some time thinking about. Do you want the last one in to be the winner, or the current client edition to be the winner, or do you want to prompt the user on conflicts? It’s all possible.

Once I have created a sync context, I can get a reference to the local SQLite database table with getSyncTable(), which replaces the getTable() call. The rest of the code is identical – I refresh the data and add the event handlers.

Step 3: Adjusting the Refresh

In the past, refresh was just a query to the backend. Now I need to do something a bit different. When refresh is clicked, I want to do the push/pull cycle for synchronizing the data.

function refreshData() {
    updateSummaryMessage('Loading data from Azure');
    syncContext.push().then(function () {
        return syncContext.pull(new WindowsAzure.Query('todoitem'));
    }).then(function () {
        todoItemTable
            .where({ complete: false })
            .read()
            .then(createTodoItemList, handleError);
    });
}

Just like the initialization, the SDK uses promises to proceed asynchronously. First push (which resolves as a promise), then pull (which also resolves as a promise) and finally you do EXACTLY THE SAME THING AS BEFORE – you query the table, read the results and then build the todo list. Seriously – this bit really didn’t change.

That means you can add offline to your app without changing your existing code – just add the initialization and something to trigger the push/pull.

Wrap Up

This is still a beta, which means a work-in-progress. Feel free to try it out and give us feedback. You can file issues and ideas at our GitHub repository.

Cross-posted to the Azure App Service Team Blog.

Adjusting the HTTP Request with Azure Mobile Apps

Azure Mobile Apps provides an awesome client SDK for dealing with common mobile client problems – data access, offline sync, Notification Hubs registration and authentication.  Sometimes, you want to be able to do something extra in the client.  Perhaps you need to adjust the headers that are sent, or perhaps you want to understand the requests by doing diagnostic logging.  Whatever the reason, Azure Mobile Apps is extensible and can easily handle these requirements.

Android (Native)

You can implement a ServiceFilter to manipulate requests and responses in the HTTP pipeline.  The general recipe is as follows:

ServiceFilter filter = new ServiceFilter() {
    @Override
    public ListenableFuture<ServiceFilterResponse> handleRequest(ServiceFilterRequest request, NextServiceFilterCallback next) {

        // Do pre-HTTP request requirements here
        request.addHeader("X-Custom-Header", "Header Value");  // Example: Adding a Custom Header
        Log.d("Request to ", request.getUrl());                // Example: Logging the request

        ListenableFuture<ServiceFilterResponse> responseFuture = next.onNext(request);

        Futures.addCallback(responseFuture, new FutureCallback<ServiceFilterResponse>() {
            @Override
            public void onFailure(Throwable exception) {
                // Do post-HTTP response requirements for failures here
                Log.d("Exception: ", exception.getMessage());  // Example: Logging an error
            }

            @Override
            public void onSuccess(ServiceFilterResponse response) {
                // Do post-HTTP response requirements for success here
                if (response != null && response.getContent() != null) {
                    Log.d("Response: ", response.getContent());
                }
            }
        });
        
        return responseFuture;
    }
};

MobileServiceClient client = new MobileServiceClient("https://xxx.azurewebsites.net", this).withFilter(filter);

You can think of the ServiceFilter as a piece of middleware that wraps the existing request/response from the server.

iOS (Native)

Similar to the Android case, you can wrap the request in a filter. For iOS, the same code (once translated) works in both Swift and Objective-C. Here is the Swift version:

class CustomFilter: NSObject, MSFilter {

    func handleRequest(request: NSURLRequest, next: MSFilterNextBlock, response: MSFilterResponseBlock) {
        let mutableRequest = request.mutableCopy() as! NSMutableURLRequest

        // Do pre-request requirements here
        if mutableRequest.allHTTPHeaderFields?["X-Custom-Header"] == nil {
            mutableRequest.setValue("Header Value", forHTTPHeaderField: "X-Custom-Header")
        }

        // Invoke next filter
        next(mutableRequest, response)
    }
}

// In your client initialization code...
let client = MSClient(applicationURLString: "https://xxx.azurewebsites.net").clientWithFilter(CustomFilter())

The .clientWithFilter() method clones the provided client with the filters.

JavaScript & Apache Cordova

As you might expect given the Android and iOS implementations, the JavaScript client (and hence the Apache Cordova implementation) also uses a filter – this is just a function that the request gets passed through:

function filter(request, next, callback) {
    // Do any pre-request requirements here
    console.log('request = ', request);                     // Example: Logging
    request.headers['X-Custom-Header'] = "Header Value";    // Example: Adding a custom header
    
    next(request, callback);
}

// Your client initialization looks like this...
var client = new WindowsAzure.MobileServiceClient("https://xxx.azurewebsites.net").withFilter(filter);

Xamarin / .NET

The equivalent functionality in the .NET world is a Delegating Handler. The implementation and functionality are basically the same as the others:

public class MyHandler: DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Do any pre-request requirements here
        request.Headers.Add("X-Custom-Header", "Header Value");

        // Request happens here
        var response = await base.SendAsync(request, cancellationToken);

        // Do any post-request requirements here

        return response;
    }
}

// In your mobile client code:
var client = new MobileServiceClient("https://xxx.azurewebsites.net", new MyHandler());

General Notes

There are some HTTP requests that never go through the filters you have defined. A good example of this is the login process. However, all requests to custom APIs and/or tables get passed through the filters.

You can also wrap the client multiple times. For example, you can use two separate filters – one for logging and one for the request. In this case, the filters are executed in an onion-like fashion – the last one added is the outer-most. The request goes through each filter in turn until it gets to the actual client, then the response is passed through each filter on its way out to the requestor.
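
For example, in the .NET client you can pass multiple handlers to the constructor (both handler classes here are hypothetical implementations like MyHandler above):

// Each handler wraps the underlying HTTP client; requests pass through
// the chain on the way in and responses on the way out
var client = new MobileServiceClient("https://xxx.azurewebsites.net",
    new LoggingHandler(),
    new CustomHeaderHandler());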

Finally, note that this is truly a powerful method allowing you to change the REST calls that Azure Mobile Apps makes – including destroying the protocol that the server and client rely on. Certain headers are required for Azure Mobile Apps to work, including tokens for authentication and API versioning. Use wisely.

(Cross-posted to the Azure App Service Blog)