Configuring ASP.NET Core Applications in Azure App Service

In my last article, I introduced my plan to see what it would take to run an Azure Mobile Apps compatible service in ASP.NET Core. There are lots of potential problems here and I need to deal with them one by one. The first article covered how to get diagnostic logging working in Azure App Service, and today’s article shows how to deal with configuration in Azure App Service.

There are two major ways to configure your application in Azure App Service. The first is via App Settings and the second is via Data Connections. App Settings appear as environment variables with the prefix APPSETTING_. For example, if you have an app setting called DEBUGMODE, you can access it via Environment.GetEnvironmentVariable("APPSETTING_DEBUGMODE"). An interesting side note: If you configure App Service Push or Authentication, these settings appear as app settings to your application as well.

Data Connections provide a mechanism for accessing connection strings. If you add a Data Connection called MS_TableConnectionString (the default for Azure Mobile Apps), you will see an environment variable called SQLAZURECONNSTR_MS_TableConnectionString. The prefix encodes the type of connection and the suffix is the connection string name.
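To make the naming convention concrete, here is a small sketch (in JavaScript, since the convention itself is language-agnostic) that splits such a variable into its connection type and connection string name:

```javascript
// Split an App Service connection-string environment variable into its
// connection type (the prefix before CONNSTR_) and the connection name.
function parseConnectionVariable(envName) {
    const match = /^([A-Z]+)CONNSTR_(.+)$/.exec(envName);
    if (!match) {
        return null; // not a Data Connection variable
    }
    return { type: match[1], name: match[2] };
}

const parsed = parseConnectionVariable("SQLAZURECONNSTR_MS_TableConnectionString");
// parsed.type === "SQLAZURE", parsed.name === "MS_TableConnectionString"
```

This is exactly the split we will need later when writing a custom configuration provider.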

Configuration in ASP.NET Core

The .NET Core configuration framework is very solid, supporting a variety of sources – JSON, XML and INI files as well as environment variables. You will generally see code like this in the constructor of the Startup.cs file:

        public Startup(IHostingEnvironment env)
        {
            var builder = new ConfigurationBuilder()
                .SetBasePath(env.ContentRootPath)
                .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
                .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
                .AddEnvironmentVariables();
            Configuration = builder.Build();
        }

There are a couple of problems with this, which I will illustrate by adding a view that displays the current configuration. Firstly, add a service in the ConfigureServices() method in Startup.cs:

        // This method gets called by the runtime. Use this method to add services to the container.
        public void ConfigureServices(IServiceCollection services)
        {
            // Add Configuration as a service
            services.AddSingleton<IConfiguration>(Configuration);

            // Add framework services.
            services.AddMvc();
        }

I can now add an action to the Controllers\HomeController.cs:

        public IActionResult Configuration([FromServices] IConfiguration service)
        {
            ViewBag.Configuration = service.AsEnumerable();
            return View();
        }

The [FromServices] attribute allows me to use dependency injection to inject the singleton service I defined earlier, providing access to the configuration in just this method. I assign the enumeration of all the configuration elements to the ViewBag for later display. I’ve also added a Views\Home\Configuration.cshtml file:

<h1>Configuration</h1>

<div class="row">
    <table class="table table-striped">
        <thead>
            <tr>
                <th>Key</th>
                <th>Value</th>
            </tr>
        </thead>
        <tbody>
            @foreach (var item in ViewBag.Configuration)
            {
                <tr>
                    <td>@item.Key</td>
                    <td>@item.Value</td>
                </tr>
            }
        </tbody>
    </table>
</div>

If I run this code within a properly configured App Service (one with an associated SQL service attached via Data Connections), then I will see all the environment variables and app settings listed on the page. In addition, the environment variables configuration module has added a pair of configuration elements for me – one named ConnectionStrings:MS_TableConnectionString with the connection string, and the other called ConnectionStrings:MS_TableConnectionString_ProviderName.
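What the built-in environment variables provider is doing here can be sketched as follows (JavaScript used for illustration; the provider-name mapping shown is partial and approximate, not the provider's actual source):

```javascript
// Approximate the mapping the default environment-variables configuration
// provider performs for connection-string variables. The provider names
// shown are illustrative.
const providerNames = {
    SQLAZURE: "System.Data.SqlClient",
    SQL: "System.Data.SqlClient",
    MYSQL: "MySql.Data.MySqlClient"
};

function mapConnectionString(envName, value) {
    const match = /^([A-Z]+)CONNSTR_(.+)$/.exec(envName);
    if (!match) return {};
    const [, type, name] = match;
    const result = { [`ConnectionStrings:${name}`]: value };
    if (providerNames[type]) {
        result[`ConnectionStrings:${name}_ProviderName`] = providerNames[type];
    }
    return result;
}
```

The two generated keys correspond to the ConnectionStrings:MS_TableConnectionString and ConnectionStrings:MS_TableConnectionString_ProviderName entries that show up on the page.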

There are several problems here:

  • All environment variables override my configuration. Azure App Service is a managed service, so they can add any environment variables they want at any time and that may clobber my configuration.
  • The environment variables are not organized in any way and rely on convention.
  • Many of the environment variables are not relevant to my app – they are relevant to Azure App Service.

A Better Configuration Module

Rather than use the default environment variables module, I’m going to write a custom configuration provider for Azure App Service. When developing locally, you can use the “right” environment variables or a local JSON file for configuration. If I were expressing the Azure App Service configuration in JSON, it might look like this:

{
    "ConnectionStrings": {
        "MS_TableConnectionString": "my-connection-string"
    },
    "Data": {
        "MS_TableConnectionString": {
            "Type": "SQLAZURE",
            "ConnectionString": "my-connection-string"
        }
    },
    "AzureAppService": {
        "AppSetting": {
            "MobileAppsManagement_EXTENSION_VERSION": "latest"
        },
        "Auth": {
            "Enabled": "True",
            "SigningKey": "some-long-string",
            "AzureActiveDirectory": {
                "ClientId": "my-client-id",
                "ClientSecret": "my-client-secret",
                "Mode": "Express"
            }
        },
        "Push": {
            // ...
        }
    }
}

This is a much better configuration pattern: it organizes the settings and does not pollute the configuration namespace with every environment variable. I like having the Data block for associated information about the connection string, instead of the convention of appending _ProviderName to the key name. Duplicating the connection string means I can use either Configuration.GetConnectionString() or Configuration.GetSection("Data:MS_TableConnectionString") to get the information I need. I’m envisioning releasing this library at some point, so providing options like this is a good idea.
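The configuration system flattens nested sections into colon-delimited keys, which is why a path like Data:MS_TableConnectionString works. A rough sketch of that flattening, in JavaScript purely for illustration:

```javascript
// Flatten a nested configuration object into the colon-delimited keys
// that the .NET configuration system exposes.
function flatten(obj, prefix = "") {
    const result = {};
    for (const [key, value] of Object.entries(obj)) {
        const path = prefix ? `${prefix}:${key}` : key;
        if (value !== null && typeof value === "object") {
            Object.assign(result, flatten(value, path));
        } else {
            result[path] = value;
        }
    }
    return result;
}

const flat = flatten({
    Data: { MS_TableConnectionString: { Type: "SQLAZURE" } }
});
// flat["Data:MS_TableConnectionString:Type"] === "SQLAZURE"
```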

Writing a new configuration provider is easy. There are three files:

  • An extension to the ConfigurationBuilder to bring in your configuration source
  • A configuration source that references the configuration provider
  • The configuration provider

The first two tend to be boilerplate code. Here is the AppServiceConfigurationBuilderExtensions.cs file:

using Microsoft.Extensions.Configuration;

namespace Microsoft.Extensions.Configuration
{
    public static class AzureAppServiceConfigurationBuilderExtensions
    {
        public static IConfigurationBuilder AddAzureAppServiceSettings(this IConfigurationBuilder builder)
        {
            return builder.Add(new AzureAppServiceSettingsSource());
        }
    }
}

Note that I’ve placed the class in the same namespace as the other configuration builder extensions. This means you don’t need a using statement to use this extension method. It’s a small thing.

Here is the AzureAppServiceSettingsSource.cs file:

using System;
using Microsoft.Extensions.Configuration;

namespace Microsoft.Azure.AppService.Core.Configuration
{
    internal class AzureAppServiceSettingsSource : IConfigurationSource
    {
        public IConfigurationProvider Build(IConfigurationBuilder builder)
        {
            return new AzureAppServiceSettingsProvider(Environment.GetEnvironmentVariables());
        }
    }
}

The source just provides a new provider. Note that I pass in the environment to the provider. This allows me to mock the environment later on for unit testing. I’ve placed the three files (the two above and the next one) in their own library project within the solution. This allows me to easily write unit tests later on and it allows me to package and distribute the library if I wish.

All the work for converting the environment to a configuration is done in the AzureAppServiceSettingsProvider.cs file (with apologies for the length):

using System.Collections;
using Microsoft.Extensions.Configuration;
using System.Text.RegularExpressions;
using System.Collections.Generic;

namespace Microsoft.Azure.AppService.Core.Configuration
{
    internal class AzureAppServiceSettingsProvider : ConfigurationProvider
    {
        private IDictionary env;

        /// <summary>
        /// Where all the app settings should go in the configuration
        /// </summary>
        private const string SettingsPrefix = "AzureAppService";

        /// <summary>
        /// The regular expression used to match the key in the environment for Data Connections.
        /// </summary>
        private Regex DataConnectionsRegexp = new Regex(@"^([A-Z]+)CONNSTR_(.+)$");

        /// <summary>
        /// Mapping from environment variable to position in configuration - explicit cases
        /// </summary>
        private Dictionary<string, string> specialCases = new Dictionary<string, string>
        {
            { "WEBSITE_AUTH_CLIENT_ID",                 $"{SettingsPrefix}:Auth:AzureActiveDirectory:ClientId" },
            { "WEBSITE_AUTH_CLIENT_SECRET",             $"{SettingsPrefix}:Auth:AzureActiveDirectory:ClientSecret" },
            { "WEBSITE_AUTH_OPENID_ISSUER",             $"{SettingsPrefix}:Auth:AzureActiveDirectory:Issuer" },
            { "WEBSITE_AUTH_FB_APP_ID",                 $"{SettingsPrefix}:Auth:Facebook:ClientId" },
            { "WEBSITE_AUTH_FB_APP_SECRET",             $"{SettingsPrefix}:Auth:Facebook:ClientSecret" },
            { "WEBSITE_AUTH_GOOGLE_CLIENT_ID",          $"{SettingsPrefix}:Auth:Google:ClientId" },
            { "WEBSITE_AUTH_GOOGLE_CLIENT_SECRET",      $"{SettingsPrefix}:Auth:Google:ClientSecret" },
            { "WEBSITE_AUTH_MSA_CLIENT_ID",             $"{SettingsPrefix}:Auth:MicrosoftAccount:ClientId" },
            { "WEBSITE_AUTH_MSA_CLIENT_SECRET",         $"{SettingsPrefix}:Auth:MicrosoftAccount:ClientSecret" },
            { "WEBSITE_AUTH_TWITTER_CONSUMER_KEY",      $"{SettingsPrefix}:Auth:Twitter:ClientId" },
            { "WEBSITE_AUTH_TWITTER_CONSUMER_SECRET",   $"{SettingsPrefix}:Auth:Twitter:ClientSecret" },
            { "WEBSITE_AUTH_SIGNING_KEY",               $"{SettingsPrefix}:Auth:SigningKey" },
            { "MS_NotificationHubId",                   $"{SettingsPrefix}:Push:NotificationHubId" }
        };

        /// <summary>
        /// Mapping from environment variable to position in configuration - scoped cases
        /// </summary>
        private Dictionary<string, string> scopedCases = new Dictionary<string, string>
        {
            { "WEBSITE_AUTH_", $"{SettingsPrefix}:Auth" },
            { "WEBSITE_PUSH_", $"{SettingsPrefix}:Push" }
        };

        /// <summary>
        /// Authentication providers need to be done before the scoped cases, so their mapping
        /// is separate from the scoped cases
        /// </summary>
        private Dictionary<string, string> authProviderMapping = new Dictionary<string, string>
        {
            { "WEBSITE_AUTH_FB_",          $"{SettingsPrefix}:Auth:Facebook" },
            { "WEBSITE_AUTH_GOOGLE_",      $"{SettingsPrefix}:Auth:Google" },
            { "WEBSITE_AUTH_MSA_",         $"{SettingsPrefix}:Auth:MicrosoftAccount" },
            { "WEBSITE_AUTH_TWITTER_",     $"{SettingsPrefix}:Auth:Twitter" }
        };

        public AzureAppServiceSettingsProvider(IDictionary env)
        {
            this.env = env;
        }

        /// <summary>
        /// Loads the appropriate settings into the configuration.  The Data object is provided for us
        /// by the ConfigurationProvider
        /// </summary>
        /// <seealso cref="Microsoft.Extensions.Configuration.ConfigurationProvider"/>
        public override void Load()
        {
            foreach (DictionaryEntry e in env)
            {
                string key = e.Key as string;
                string value = e.Value as string;

                var m = DataConnectionsRegexp.Match(key);
                if (m.Success)
                {
                    var type = m.Groups[1].Value;
                    var name = m.Groups[2].Value;

                    if (!key.Equals("CUSTOMCONNSTR_MS_NotificationHubConnectionString"))
                    {
                        Data[$"Data:{name}:Type"] = type;
                        Data[$"Data:{name}:ConnectionString"] = value;
                    }
                    else
                    {
                        Data[$"{SettingsPrefix}:Push:ConnectionString"] = value;
                    }
                    Data[$"ConnectionStrings:{name}"] = value;
                    continue;
                }

                // If it is a special case, then handle it through the mapping and move on
                if (specialCases.ContainsKey(key))
                {
                    Data[specialCases[key]] = value;
                    continue;
                }

                // A special case for AUTO_AAD
                if (key.Equals("WEBSITE_AUTH_AUTO_AAD"))
                {
                    Data[$"{SettingsPrefix}:Auth:AzureActiveDirectory:Mode"] = value.Equals("True") ? "Express" : "Advanced";
                    continue;
                }

                // Scoped Cases for authentication providers
                if (dictionaryMappingFound(key, value, authProviderMapping))
                {
                    continue;
                }

                // Other scoped cases (not auth providers)
                if (dictionaryMappingFound(key, value, scopedCases))
                {
                    continue;
                }

                // Other internal settings
                if (key.StartsWith("WEBSITE_") && !containsMappedKey(key, scopedCases))
                {
                    var setting = key.Substring(8);
                    Data[$"{SettingsPrefix}:Website:{setting}"] = value;
                    continue;
                }

                // App Settings - anything not in the WEBSITE section
                if (key.StartsWith("APPSETTING_") && !key.StartsWith("APPSETTING_WEBSITE_"))
                {
                    var setting = key.Substring(11);
                    Data[$"{SettingsPrefix}:AppSetting:{setting}"] = value;
                    continue;
                }

                // Add everything else into { "Environment" }
                if (!key.StartsWith("APPSETTING_"))
                {
                    Data[$"Environment:{key}"] = value;
                }
            }
        }

        /// <summary>
        /// Determines if the key starts with any of the keys in the mapping
        /// </summary>
        /// <param name="key">The environment variable</param>
        /// <param name="mapping">The mapping dictionary</param>
        /// <returns></returns>
        private bool containsMappedKey(string key, Dictionary<string, string> mapping)
        {
            foreach (var start in mapping.Keys)
            {
                if (key.StartsWith(start))
                {
                    return true;
                }
            }
            return false;
        }

        /// <summary>
        /// Handler for a mapping dictionary
        /// </summary>
        /// <param name="key">The environment variable to check</param>
        /// <param name="value">The value of the environment variable</param>
        /// <param name="mapping">The mapping dictionary</param>
        /// <returns>true if a match was found</returns>
        private bool dictionaryMappingFound(string key, string value, Dictionary<string, string> mapping)
        {
            foreach (string start in mapping.Keys)
            {
                if (key.StartsWith(start))
                {
                    var setting = key.Substring(start.Length);
                    Data[$"{mapping[start]}:{setting}"] = value;
                    return true;
                }
            }
            return false;
        }
    }
}

Unfortunately, there are a lot of special cases here to handle how I want to lay out my configuration. However, the basic flow is handled in the Load() method: it cycles through the environment, and if an environment variable matches one of the patterns I watch for, I add it to the Data dictionary, which becomes the configuration. Anything that doesn’t match is added to the default Environment section of the configuration. The ConfigurationProvider base class that I inherit from handles all the other lifecycle requirements for the provider, so I don’t need to be concerned with them.

Testing the Configuration Module

I’ve done some pre-work to aid in testability. Firstly, I’ve segmented the library component into its own project. Secondly, I’ve added a “mocking” capability for the environment. The default environment is passed in from the source class, but I can instantiate the provider in my test class with a suitable dictionary. The xUnit site covers how to set up a simple test, although Visual Studio 2017 has a specific xUnit test suite project template (look for xUnit Test Project (.NET Core) in the project templates list).

My testing process is relatively simple – given a suitable environment, does it produce the right configuration? I’ll have a test routine for each of the major sections – connection strings, special cases and scoped cases, and others. Then I’ll copy my environment from a real App Service and see if that causes issues. I get my environment settings from Kudu – also known as Advanced Tools in your App Service menu in the Azure portal. Here is an example of one of the tests:

        [Fact]
        public void CreatesDataConnections()
        {
            var env = new Dictionary<string, string>()
            {
                { "SQLCONNSTR_MS_TableConnectionString", "test1" },
                { "SQLAZURECONNSTR_DefaultConnection", "test2" },
                { "SQLCONNSTRMSTableConnectionString", "test3" }
            };
            var provider = new AzureAppServiceSettingsProvider(env);
            provider.Load();

            string r;
            Assert.True(provider.TryGet("Data:MS_TableConnectionString:Type", out r));
            Assert.Equal("SQL", r);
            Assert.True(provider.TryGet("Data:MS_TableConnectionString:ConnectionString", out r));
            Assert.Equal("test1", r);

            Assert.True(provider.TryGet("Data:DefaultConnection:Type", out r));
            Assert.Equal("SQLAZURE", r);
            Assert.True(provider.TryGet("Data:DefaultConnection:ConnectionString", out r));
            Assert.Equal("test2", r);

            Assert.False(provider.TryGet("Data:MSTableConnectionString:Type", out r));
            Assert.False(provider.TryGet("Data:MSTableConnectionString:ConnectionString", out r));
        }

This test ensures that the typical connection strings get placed into the right Data structure within the configuration. You can run the tests within Visual Studio 2017 by using Test > Windows > Test Explorer to view the test explorer, then click Run All – the projects will be built and tests discovered.

I’m keeping my code on GitHub, so you can find this code (including the entire test suite) in my GitHub Repository at tag p4.

Running ASP.NET Core applications in Azure App Service

One of the things I get asked about semi-regularly is when Azure Mobile Apps is going to support .NET Core. It’s a logical progression for most people and many ASP.NET developers are planning future web sites to run on ASP.NET Core. Also, the ASP.NET Core programming model makes a lot more sense (at least to me) than the older ASP.NET applications. Finally, we have an issue open on the subject. So, what is holding us back? Well, there are a bunch of things. Some have been solved already and some need a lot of work. In the coming weeks, I’m going to be writing about the various pieces that need to be in place before we can say “Azure Mobile Apps is there”.

Of course, if you want a mobile backend, you can always hop over to Visual Studio Mobile Center. This provides a mobile backend for you without having to write any code. (Full disclosure: I’m now a program manager on that team, so I may be slightly biased). However, if you are thinking ASP.NET Core, then you likely want to write the code.

Let’s get started with something that does exist. How does one run ASP.NET Core applications on Azure App Service? Well, there are two methods. The first involves uploading your application to Azure App Service via the Visual Studio Publish… dialog or via Continuous Integration from GitHub, Visual Studio Team Services or even Dropbox. It’s a relatively easy method and one I would recommend. There is a gotcha, which I’ll discuss below.

The second method uses a Docker container to house the code that is then deployed onto a Linux App Service. This is still in preview (as of writing), so I can’t recommend this for production workloads.

Create a New ASP.NET Core Application

Let’s say you opened up Visual Studio 2017 (RC right now) and created a brand new ASP.NET Core MVC application – the basis for my research here.

  • Open up Visual Studio 2017 RC.
  • Select File > New > Project…
  • Select the ASP.NET Core Web Application (.NET Core).
    • Fill in an appropriate name for the solution and project, just as normal.
    • Click OK to create the project.
  • Select ASP.NET Core 1.1 from the framework drop-down (it will say ASP.NET Core 1.0 initially)
  • Select Web Application in the ASP.NET Core 1.1 Templates selection.
  • Click OK.

I called my solution netcore-server and the project ExampleServer. At this point, Visual Studio will go off and create a project for you. You can see what it creates easily enough, but I’ve checked it into my GitHub repository at tag p0.

I’m not going to cover ASP.NET Core programming too much in this series. You can read the definitive guide on their documentation site, and I would recommend you start by understanding ASP.NET Core programming before getting into the changes here.

Go ahead and run the service (either as a Kestrel service or an IIS Express service – it works with both). This is just to make sure that you have a working site.

Add Logging to your App

Logging is one of those central things that is needed in any application. There are so many things you can’t do (including diagnose issues) if you don’t have appropriate logging. Fortunately, ASP.NET Core has logging built-in. Let’s add some to the Controllers\HomeController.cs file:

using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;

namespace ExampleServer.Controllers
{
    public class HomeController : Controller
    {
        private ILogger logger;

        public HomeController(ILoggerFactory loggerFactory)
        {
            logger = loggerFactory.CreateLogger(this.GetType().FullName);
        }

        public IActionResult Index()
        {
            logger.LogInformation("In Index of the HomeController");
            return View();
        }
        // Rest of the file here

I’ve added the logger factory via dependency injection, then logged a message whenever the Index action is served from the home controller. If you run this version of the code (available on the GitHub repository at tag p1), you will see the following in your Visual Studio output window:

[Screenshot: the Visual Studio Output window showing the informational log message]

It’s swamped by the Application Insights data, but you can clearly see the informational message there.

Deploy your App to Azure App Service

Publishing to Azure App Service is relatively simple – right-click on the project and select Publish… to kick off the process. The layout of the windows has changed from Visual Studio 2015, but it’s the same process. You can create a new App Service or use an existing one. Once you have answered all the questions, your site will be published. Eventually, your site will be displayed in your web browser.

Turn on Diagnostic Logging

  • Click View > Server Explorer to add the Server Explorer to your workspace.
  • Expand the Azure node, the App Service node, and finally your resource group node.
  • Right-click the app service and select View Settings
  • Turn on logging and set the logging level to verbose:

[Screenshot: the App Service settings pane with logging turned on and set to verbose]

  • Click Save to save the settings (the site will restart).
  • Right-click the app service in the server explorer again and this time select View Streaming Logs
  • Wait until you see that you are connected to the log streaming service (in the Output window)

Now refresh your browser so that it reloads the index page again. Note how you see the access logs (which files have been requested) but the log message we put into the code is not there.

The Problem and Solution

The problem is, hopefully, obvious. ASP.NET Core does not by default feed logs to Azure App Service. We need to enable that feature in the .NET Core host. We do this in the Program.cs file:

using System.IO;
using Microsoft.AspNetCore.Hosting;

namespace ExampleServer
{
    public class Program
    {
        public static void Main(string[] args)
        {
            var host = new WebHostBuilder()
                .UseKestrel()
                .UseContentRoot(Directory.GetCurrentDirectory())
                .UseIISIntegration()
                .UseStartup<Startup>()
                .UseApplicationInsights()
                .UseAzureAppServices()
                .Build();

            host.Run();
        }
    }
}

You will also need to add the Microsoft.AspNetCore.AzureAppServicesIntegration package from NuGet for this to work. Once you have made this change, you can deploy it and watch the logs again:

[Screenshot: the streaming logs, now including the application log message]

If you have followed the instructions, you will need to switch the Output window back to the Azure logs. The output window will have been switched to Build during the publish process.

Adjusting the WebHostBuilder for the environment

It’s likely that you won’t want Application Insights and Azure App Services logging except when you are running on Azure App Service. There are a number of environment variables that Azure App Service uses and you can leverage these as well. My favorites are REGION_NAME (which indicates which Azure region your service is running in) and WEBSITE_OWNER_NAME (which is a combination of a bunch of things). You can test for these and adjust the pipeline accordingly:

using Microsoft.AspNetCore.Hosting;
using System;
using System.IO;

namespace ExampleServer
{
    public class Program
    {
        public static void Main(string[] args)
        {
            var hostBuilder = new WebHostBuilder()
                .UseKestrel()
                .UseContentRoot(Directory.GetCurrentDirectory())
                .UseIISIntegration()
                .UseStartup<Startup>()
                .UseApplicationInsights();

            var regionName = Environment.GetEnvironmentVariable("REGION_NAME");
            if (regionName != null)
            {
                hostBuilder.UseAzureAppServices();
            }
                
            var host = hostBuilder.Build();

            host.Run();
        }
    }
}

You can download this code at my GitHub repository at tag p2.

Writing HTTP CRUD in Azure Functions

Over the last two posts, I’ve introduced writing Azure Functions locally and deploying them to the cloud. It’s time to do something useful with them. In this post, I’m going to introduce how to write a basic HTTP router. If you follow my blog and other work, you’ll see where this is going pretty quickly. If you are only interested in Azure Functions, you’ll have to wait a bit to see how this evolves.

Create a new Azure Function

I started this blog by installing the latest azure-functions-cli package:

npm i -g azure-functions-cli

Then I created a new Azure Function App:

mkdir dynamic-tables
cd dynamic-tables
func new

Finally, I created a function called todoitem:

[Screenshot: func new prompting for the function template and name]

Customize the HTTP Route Prefix

By default, any HTTP trigger is bound to /api/function, where function is the name of your function. I want full control over where my function exists, so I’m going to fix this in the host.json file:

{
    "id":"6ada7ae64e8a496c88617b7ab6682810",
    "http": {
        "routePrefix": ""
    }
}

The routePrefix is the important thing here. The value defaults to “api”, but I’ve cleared it. That means I can put my routes anywhere.

Set up the Function Bindings

In the todoitem directory are two files. The first, function.json, describes the bindings. Here is the version for my function:

{
    "disabled": false,
    "bindings": [
        {
            "name": "req",
            "type": "httpTrigger",
            "direction": "in",
            "authLevel": "function",
            "methods": [ "GET", "POST", "PATCH", "PUT", "DELETE" ],
            "route": "tables/todoitem/{id:alpha?}"
        },
        {
            "type": "http",
            "direction": "out",
            "name": "res"
        }
    ]
}

This function is fired by an HTTP trigger and accepts five methods: GET, POST, PUT, PATCH and DELETE. In addition, I’ve defined a route that contains an optional alphabetic id segment. I can, for example, do GET /tables/todoitem/foo and this will be accepted. On the outbound side, I want to respond to requests, so I’ve got a response object. The HTTP trigger for Node is modelled after ExpressJS, so the req and res objects are mostly equivalent to the ExpressJS Request and Response objects.
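The behavior of the {id:alpha?} route parameter can be approximated with a regular expression: the segment is optional and, when present, must be purely alphabetic. The route template itself is handled by the Functions runtime; this sketch is only an illustration of the matching rules:

```javascript
// Approximate the matching behavior of the route "tables/todoitem/{id:alpha?}":
// an optional trailing segment that, when present, must be alphabetic.
function matchRoute(path) {
    const match = /^\/tables\/todoitem(?:\/([A-Za-z]+))?$/.exec(path);
    if (!match) return null;
    return { id: match[1] }; // id is undefined when the segment is absent
}

matchRoute("/tables/todoitem");      // matches, id undefined – collection route
matchRoute("/tables/todoitem/foo");  // matches, id "foo"    – single-record route
matchRoute("/tables/todoitem/123");  // null – 123 fails the :alpha constraint
```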

Write the Code

The code for this function is in todoitem/index.js:

/**
 * Routes the request to the correct table controller method.
 *
 * @param {Function.Context} context - the table controller context
 * @param {Express.Request} req - the actual request
 */
function tableRouter(context, req) {
    var res = context.res;
    var id = context.bindings.id;

    switch (req.method) {
        case 'GET':
            if (id) {
                getOneItem(req, res, id);
            } else {
                getAllItems(req, res);
            }
            break;

        case 'POST':
            insertItem(req, res);
            break;

        case 'PATCH':
            patchItem(req, res, id);
            break;

        case 'PUT':
            replaceItem(req, res, id);
            break;

        case 'DELETE':
            deleteItem(req, res, id);
            break;

        default:
            res.status(405).json({ error: "Operation not supported", message: `Method ${req.method} not supported`})
    }
}

function getOneItem(req, res, id) {
    res.status(200).json({ id: id, message: "getOne" });
}

function getAllItems(req, res) {
    res.status(200).json({ query: req.query, message: "getAll" });
}

function insertItem(req, res) {
    res.status(200).json({ body: req.body, message: "insert"});
}

function patchItem(req, res, id) {
    res.status(405).json({ error: "Not Supported", message: "PATCH operations are not supported" });
}

function replaceItem(req, res, id) {
    res.status(200).json({ body: req.body, id: id, message: "replace" });
}

function deleteItem(req, res, id) {
    res.status(200).json({ id: id, message: "delete" });
}

module.exports = tableRouter;

I use a tableRouter method (and that is what our function calls) to route the HTTP call to the right CRUD method. It’s up to me to put whatever CRUD code I need to execute and respond to the request in those additional methods. In this case, I’m just returning a 200 status (OK) and some JSON data. One key piece is differentiating between a GET /tables/todoitem and a GET /tables/todoitem/foo. The former is meant to return all records and the latter is meant to return a single record. If the id is set, we call the single record GET method and if not, then we call the multiple record GET method.

What’s the difference between PATCH and PUT? In REST semantics, PATCH is used when you want to do a partial update of a record, while PUT is used when you want to send a full replacement record. This CRUD recipe uses both, but you may decide to use one or the other.
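The distinction can be illustrated with plain object operations (a sketch, separate from the function above): PATCH merges the incoming fields into the existing record, while PUT replaces the record wholesale.

```javascript
// PATCH: merge the incoming partial record into the existing one.
function applyPatch(existing, patch) {
    return Object.assign({}, existing, patch);
}

// PUT: the incoming record replaces the existing one entirely.
function applyPut(existing, replacement) {
    return Object.assign({}, replacement);
}

const record = { id: "1", text: "get milk", complete: false };
applyPatch(record, { complete: true });
// => { id: "1", text: "get milk", complete: true }
applyPut(record, { id: "1", text: "get bread" });
// => { id: "1", text: "get bread" } – the "complete" field is gone
```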

Running Locally

As with the prior blog post, you can run func run test-func --debug to start the backend and get ready for the debugger. You can then use Postman to send requests to your backend. (Note: Don’t use func run todoitem --debug – this will cause a crash at the moment!). You’ll get something akin to the following:

[Screenshot: testing the local function with Postman]

That’s it for today. I’ll be progressing on this project for a while, so expect more information as I go along!

Deploying Azure Functions Automatically

In my last post, I went over how to edit, run and debug Azure Functions on your local machine. Eventually, however, you want to place these functions in the cloud. They are, after all, designed to do things in the cloud on dynamic compute. There are two levels of automation you can use:

  1. Continuous Deployment
  2. Automated Resource Creation

Most of you will be interested in continuous deployment. That is, you create your Azure Functions app once and then you just push updates to it via a source code control system. However, a true DevOps mindset requires “configuration as code”, so we’ll go over how to download an Azure Resource Manager (ARM) template for the function app and resource group.

Creating a Function App in the Portal

Creating an Azure Functions app in the portal is a straightforward process.

  1. Create a resource group.
  2. Create a function app in the resource group.

Log into the Azure portal. Select Resource Groups in the left-hand menu (which may be under the “More Services” link in your case), then click on the + Add link in the top bar to create a new resource group. Give it a unique name (it only has to be unique within your subscription), select a nominal location for the resource group, then click on Create:

[Screenshot: creating the resource group]

Once the resource group is created, click into it and then select + Add inside the resource group. Enter “Function App” in the search box, then select it and click on Create:

[Screenshot: creating the Function App]

Fill in the name and select the region that you want to place the Azure Functions in. Ensure the “Consumption Plan” is selected. This is the dynamic compute plan, so you only pay for resources when your functions are actually being executed. The service will create an associated storage account for storing your functions in the same resource group.

Continuous Deployment

In my last blog post, I created a git repository to hold my Azure Function code. I can now link this git repository to the Function App in the cloud as follows:

  • Open the Function App.
  • Click the Function app settings link in the bottom right corner.
  • Click the Go to App Service Settings button.
  • Click the Deployment Credentials menu option.
  • Fill in the form for the deployment username and password (twice).
  • Click Save at the top of the blade.

You need to know the username and password of your git repository in the cloud that is attached to your Function App so that you can push to it. You’ve just set those credentials.

  • Click the Deployment options menu option.
  • Click Choose Source.
  • Click Local Git Repository.
  • Click OK.

I could have just as easily linked my function app to GitHub, Visual Studio Team Services or BitBucket. My git repository is local to my machine, so a local git repository is suitable for this purpose.

  • Click the Properties menu option.
  • Copy the GIT URL field.

I now need to add the Azure hosted git repository as a remote on my local git repository. To do this, open a PowerShell console, change directory to the function app and type the following:

git remote add azure <the-git-url>
git push azure master

This will push the contents of the git repository up to the cloud, which will then do a deployment of the functions for you. You will be prompted for your username and password that you set when setting up the deployment credentials earlier.

[Screenshot: pushing the git repository to Azure]

Once the deployment is done, you can switch back to the Function App in the portal and you will see that your function is deployed. An important factor is that you are now editing the files associated with the function on your local machine. You can no longer edit the files in the cloud, as any changes would be overwritten by the next deployment from your local machine. To remind you of this, Azure Functions displays a helpful warning:

[Screenshot: the read-only warning in the Functions portal]

If you edit your files on the local machine, remember to push them to the Azure remote to deploy them.

Saving the ARM Template

You are likely to only need the Azure Function process shown above. However, in case you like checking in the configuration as code, here is how you do it. Firstly, go to your resource group:

[Screenshot: the resource group overview]

Note the menu item entitled Automation script – that’s the one you want. The portal will generate an Azure Resource Manager (ARM) template plus a PowerShell script or CLI script to run it. Click on Download to download all the files – you will get a ZIP file.

Before extracting the ZIP file, you need to unblock it. In the File Explorer, right-click on the ZIP file and select Properties.

[Screenshot: the ZIP file Properties dialog]

Check the Unblock box and then click on OK. You can now extract the ZIP file with your favorite tool. I just right-click and select Extract All….

Creating a new Azure Function with the ARM Template

You can now create a new Azure Function App with the same template as the original by running .\deploy.ps1 and filling in the fields. Yep – it’s that simple!

Offline Sync with Azure Mobile Apps and Apache Cordova

In the past, I’ve introduced you to a TodoList application built in Apache Cordova so that it is available for iOS, Android or any other platform that Apache Cordova supports. Recently, we released a new beta for the Azure Mobile Apps Cordova SDK that supports offline sync, which is a feature we didn’t have.

Underneath, the Cordova offline sync functionality uses SQLite – this means it isn’t an option at this point for HTML/JS applications. We’ll have to work out how to do this with IndexedDB or something similar, but for now that isn’t an option without a lot of custom work.

Let’s take a look at the differences.

Step 1: New variables

Just like other clients, I need a local store reference and a sync context that is used to keep track of the operational aspects for synchronization:

    var client,        // Connection to the Azure Mobile App backend
        store,         // Sqlite store to use for offline data sync
        syncContext,   // Offline data sync context
        todoItemTable; // Reference to a table endpoint on backend

Step 2: Initialization

All the initialization is done in the onDeviceReady() method. I have to define a model so that the SQLite database matches what is on the server:

function onDeviceReady() {

    // Create the connection to the backend
    client = new WindowsAzure.MobileServiceClient('https://yoursite.azurewebsites.net');

    // Set up the SQLite database
    store = new WindowsAzure.MobileServiceSqliteStore();

    // Define the table schema
    store.defineTable({
        name: 'todoitem',
        columnDefinitions: {
            // sync interface
            id: 'string',
            deleted: 'boolean',
            version: 'string',
            // Now for the model
            text: 'string',
            complete: 'boolean'
        }
    }).then(function () {
        // Initialize the sync context
        syncContext = client.getSyncContext();
        syncContext.pushHandler = {
            onConflict: function (serverRecord, clientRecord, pushError) {
                window.alert('TODO: onConflict');
            },
            onError: function(pushError) {
                window.alert('TODO: onError');
            }
        };
        return syncContext.initialize(store);
    }).then(function () {
        // I can now get a reference to the table
        todoItemTable = client.getSyncTable('todoitem');

        refreshData();

        $('#add-item').submit(addItemHandler);
        $('#refresh').on('click', refreshData);
    });
}

There are three distinct areas here, separated by promises. The first promise defines the tables. If you are using multiple tables, you must ensure that all the promises are complete before progressing to the next section. You can do this with Promise.all(), for example.
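Here is a sketch of the multiple-table case with Promise.all(). The defineAllTables helper is hypothetical (not part of the SDK) and the table definitions are illustrative:

```javascript
// Hypothetical helper: resolve only once every defineTable() promise
// has resolved, so the sync context can be initialized safely afterwards.
function defineAllTables(store, definitions) {
    return Promise.all(definitions.map(definition => store.defineTable(definition)));
}
```

You would call it as `defineAllTables(store, [todoItemDefinition, categoryDefinition]).then(...)` before initializing the sync context.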

The second section initializes the sync context. You need to define two sections for the push handler – the conflict handler and the error handler. I’ll go into the details of a conflict handler at a later date, but this is definitely something you will want to spend some time thinking about. Do you want the last one in to be the winner, or the current client edition to be the winner, or do you want to prompt the user on conflicts? It’s all possible.
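As a taste of what a resolution strategy looks like, here is a “client wins” merge sketched as a pure function (the record shape mirrors the sync columns defined earlier; how you hand the result back to the SDK depends on the pushError API, which I’ll cover later):

```javascript
// "Client wins": keep the client's fields but adopt the server's version
// token so that the retried push is accepted by the backend.
function resolveClientWins(serverRecord, clientRecord) {
    return Object.assign({}, clientRecord, { version: serverRecord.version });
}
```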

Once I have created a sync context, I can get a reference to the local SQLite database table via getSyncTable(), which replaces the getTable() call. The rest of the code is identical – I refresh the data and add the event handlers.

Step 3: Adjusting the Refresh

In the past, refresh was just a query to the backend. Now I need to do something a bit different. When refresh is clicked, I want to do the push/pull cycle for synchronizing the data.

function refreshData() {
    updateSummaryMessage('Loading data from Azure');
    syncContext.push().then(function () {
        return syncContext.pull(new WindowsAzure.Query('todoitem'));
    }).then(function () {
        return todoItemTable
            .where({ complete: false })
            .read()
            .then(createTodoItemList, handleError);
    });
}

Just like the initialization, the SDK uses promises to proceed asynchronously. First push (which resolves as a promise), then pull (which also resolves as a promise) and finally you do EXACTLY THE SAME THING AS BEFORE – you query the table, read the results and then build the todo list. Seriously – this bit really didn’t change.

That means you can add offline to your app without changing your existing code – just add the initialization and something to trigger the push/pull.

Wrap Up

This is still a beta, which means a work-in-progress. Feel free to try it out and give us feedback. You can file issues and ideas at our GitHub repository.

Cross-posted to the Azure App Service Team Blog.

Adjusting the HTTP Request with Azure Mobile Apps

Azure Mobile Apps provides an awesome client SDK for dealing with common mobile client problems – data access, offline sync, Notification Hubs registration and authentication.  Sometimes, you want to be able to do something extra in the client.  Perhaps you need to adjust the headers that are sent, or perhaps you want to understand the requests by doing diagnostic logging.  Whatever the reason, Azure Mobile Apps is extensible and can easily handle these requirements.

Android (Native)

You can implement a ServiceFilter to manipulate requests and responses in the HTTP pipeline.  The general recipe is as follows:

ServiceFilter filter = new ServiceFilter() {
    @Override
    public ListenableFuture<ServiceFilterResponse> handleRequest(ServiceFilterRequest request, NextServiceFilterCallback next) {

        // Do pre-HTTP request requirements here
        request.addHeader("X-Custom-Header", "Header Value");  // Example: Adding a Custom Header
        Log.d("Request to ", request.getUrl());                // Example: Logging the request

        ListenableFuture<ServiceFilterResponse> responseFuture = next.onNext(request);

        Futures.addCallback(responseFuture, new FutureCallback<ServiceFilterResponse>() {
            @Override
            public void onFailure(Throwable exception) {
                // Do post-HTTP response requirements for failures here
                Log.d("Exception: ", exception.getMessage());  // Example: Logging an error
            }

            @Override
            public void onSuccess(ServiceFilterResponse response) {
                // Do post-HTTP response requirements for success here
                if (response != null && response.getContent() != null) {
                    Log.d("Response: ", response.getContent());
                }
            }
        });
        
        return responseFuture;
    }
};

MobileServiceClient client = new MobileServiceClient("https://xxx.azurewebsites.net", this).withFilter(filter);

You can think of the ServiceFilter as a piece of middleware that wraps the existing request/response from the server.

iOS (Native)

Similar to the Android case, you can wrap the request in a filter. For iOS, the same code (once translated) works in both Swift and Objective-C. Here is the Swift version:

class CustomFilter: NSObject, MSFilter {

    func handleRequest(request: NSURLRequest, next: MSFilterNextBlock, response: MSFilterResponseBlock) {
        let mutableRequest = request.mutableCopy() as! NSMutableURLRequest

        // Do pre-request requirements here
        if mutableRequest.allHTTPHeaderFields?["X-Custom-Header"] == nil {
            mutableRequest.setValue("Header Value", forHTTPHeaderField: "X-Custom-Header")
        }

        // Invoke next filter
        next(mutableRequest, response)
    }
}

// In your client initialization code...
let client = MSClient(applicationURLString: "https://xxx.azurewebsites.net").clientWithFilter(CustomFilter())

The .clientWithFilter() method clones the provided client with the filters.

JavaScript & Apache Cordova

As you might expect given the Android and iOS implementations, the JavaScript client (and hence the Apache Cordova implementation) also uses a filter – this is just a function that the request gets passed through:

function filter(request, next, callback) {
    // Do any pre-request requirements here
    console.log('request = ', request);                     // Example: Logging
    request.headers['X-Custom-Header'] = "Header Value";    // Example: Adding a custom header
    
    next(request, callback);
}

// Your client initialization looks like this...
var client = new WindowsAzure.MobileServiceClient("https://xxx.azurewebsites.net").withFilter(filter);

Xamarin / .NET

The equivalent functionality in the .NET world is a Delegating Handler. The implementation and functionality are basically the same as the others:

public class MyHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Do any pre-request requirements here
        request.Headers.Add("X-Custom-Header", "Header Value");

        // Request happens here
        var response = await base.SendAsync(request, cancellationToken);

        // Do any post-request requirements here

        return response;
    }
}

// In your mobile client code:
var client = new MobileServiceClient("https://xxx.azurewebsites.net", new MyHandler());

General Notes

There are some HTTP requests that never go through the filters you have defined. A good example of this is the login process. However, all requests to custom APIs and/or tables get passed through the filters.

You can also wrap the client multiple times. For example, you can use two separate filters – one for logging and one for adjusting the request. In this case, the filters are executed in an onion-like fashion – the last one added is the outermost. The request goes through each filter in turn until it gets to the actual client, then the response is passed back through each filter on its way out to the requestor.
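The onion ordering can be simulated in a few lines. This is not SDK code – compose() is a stand-in for what withFilter() does internally, and the log records the order in which each layer runs:

```javascript
const log = [];

// Build a filter that records when it sees the request and the response.
function makeFilter(name) {
    return function (request, next, callback) {
        log.push(name + ':request');
        next(request, function (error, response) {
            log.push(name + ':response');
            callback(error, response);
        });
    };
}

// compose(inner, filter) wraps `inner` with `filter`, mimicking withFilter().
function compose(inner, filter) {
    return function (request, callback) {
        filter(request, inner, callback);
    };
}

// The innermost step stands in for the actual HTTP request.
const base = function (request, callback) {
    log.push('client');
    callback(null, 'ok');
};

// 'logging' added first, 'headers' added last => 'headers' is outermost.
const pipeline = compose(compose(base, makeFilter('logging')), makeFilter('headers'));
pipeline({}, function () {});
```

Running this leaves the log in onion order: headers sees the request first and the response last.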

Finally, note that this is truly a powerful method allowing you to change the REST calls that Azure Mobile Apps makes – including destroying the protocol that the server and client rely on. Certain headers are required for Azure Mobile Apps to work, including tokens for authentication and API versioning. Use wisely.

(Cross-posted to the Azure App Service Blog)

30 Days of Zumo.v2 (Azure Mobile Apps): Day 30: Catching Up

I’ve been doing this blog series – 29 blog articles – since late March. In the cloud world, that’s a lifetime, so let’s take a look at what new things you need to be aware of when developing mobile apps.

Microsoft acquired Xamarin

There are really three distinct technologies you need to understand. Native development uses the platform specific tools (Xcode or Android Studio) and languages (Swift, Objective-C and Java) to develop the mobile app. These offer the highest level of compatibility with the features of the mobile phone. However, there is zero code reuse between them. A common developer theme is DRY – Don’t Repeat Yourself. I’m developing two apps – one for Android and one for iOS – that do the same thing. Why can’t I reuse the models and business logic within my code? I’m not a fan of native development for that reason.

Apache Cordova places your app in a web view. I like this when there is an associated website that is already responsive. It allows you to wrap your existing web code (which is likely a SPA these days) into an app. You get access to the same libraries as your web code and you can leverage a good chunk of the talent you have for developing web code when developing your mobile app. It does have drawbacks, though. The UI is ultimately defined in HTML/CSS, which means it’s relatively hard to get a platform consistent UI. The UI will either look like an Android app or like an iOS app, even on the alternate platform. In addition, the code runs inside a web view. This is a sandboxed area that has limitations, although the limitations are becoming less important. Basically, if you want to access the bleeding edge of hardware, this is not the platform for you. Finally, I find performance lags that of native apps – there is a memory and CPU overhead. This may not be important in small apps, but as the apps grow, so do the memory and CPU requirements – the ceiling is lower.

The final segment is cross-platform apps and that’s where Xamarin comes in. Xamarin allows you to write apps in a common language – C# or F# with a .NET runtime. However, the apps compile to native code. This has two advantages. Firstly, you get native performance – the CPU and memory considerations I mentioned with Hybrid Apps are at the same level as for native apps. Secondly, you get access to the entire API surface that the device offers. Unlike native apps, you can encapsulate your models and business logic in a shared library, allowing for code re-use.

In the past (pre-acquisition), I was loath to suggest Xamarin for small projects because of the cost associated with it. With the acquisition, Microsoft made the technology free. For small groups, you can use Xamarin with Visual Studio Community. For bigger groups, the various Visual Studio offerings provide the appropriate licensing. In addition, Microsoft made Xamarin Studio free for Mac users, which – in my experience – is the platform of choice for most of the mobile developers. This means you no longer have any excuse not to use Xamarin for developing your mobile apps.

Those of you that have looked into Xamarin before will know that there are two “modes” for developing Xamarin apps. Xamarin.iOS and Xamarin.Android provide distinct advantages by providing access to the platform features directly, whereas Xamarin.Forms provides a cross-platform UI capability as well. I am recommending that teams use Xamarin.Forms for cross-platform enterprise apps (where differences in the look and feel of the app are less important), but using the Xamarin.iOS / Xamarin.Android for consumer apps. This allows you to tune the UI to be very platform specific.

File Sync for Node Backends

This is really recent. The team released the azure-mobile-apps-files NPM package to allow the implementation of File Sync with Node backends. It’s definitely still in preview – like most of the other functionality in File Sync – but it means you don’t have to make a choice of backend based solely on the features available. The Azure Mobile Apps Node SDK has been one of our most active release tracks, and I expect more features to be pulled into the alternate Node implementation.

Apache Cordova Offline Sync

I believe I’ve mentioned this several times, but it bears repeating. All of Azure Mobile Apps development – the server and client SDKs alike – is open source. All the issues are on GitHub Issues, the repositories are on GitHub and the team develops in the open. That’s why I can safely say that Apache Cordova Offline Sync is coming “Real Soon”. If you want to view any of the repositories, you can do a search on the Azure GitHub organization page. Got a bug or an idea? File an issue. We don’t bite!

RIP Azure Mobile Services

Unfortunately, I had to close down Azure Mobile Services. I’m rather sad about that since I think it was an awesome service. However, it was reliant on older technology that couldn’t easily be upgraded. As a result, we were handling customer issues dealing primarily with new libraries not working on older versions of Node. To aid the process, we’ve done two things. The first is a single-click migration of the Mobile Service to Azure App Service. It doesn’t change any of the code (and you can clone the git repository and update the code any way you see fit). My recommendation, however, is to upgrade your code to Azure Mobile Apps and utilize all the good things we’ve been doing. Upgrading also allows you to use any version of Node that you wish, which is a constant request.

To aid in upgrading your site to Azure Mobile Apps, we’ve also released a compatibility layer for node sites (which accommodates about 75% of all Mobile Services). This will take your existing mobile services site (that you have cloned after migration) and turn it into an Azure Mobile Apps site. Afterwards, your code-behind files for table controllers and APIs are all compatible with Azure App Service. Then all you have to do is publish your site, upgrade the SDK in your clients (including changing the URL to point to your new App Service) and publish those new clients to the app store. At this point you will be running both sites (the v1 site and the v2 site) in parallel.

There is a small question of database compatibility. You can fix this with Views (I mentioned the process for this in Day 19 – it needs to be adjusted for the new situation). However, once that is done, you are ready to rock the new environment.

One of the biggest gotchas I see is people running Azure Mobile Services but referencing newer SDKs and documentation, or people running Azure Mobile Apps and referencing older SDKs and documentation. We get it – it’s confusing. Make sure your libraries, documentation and lingo are all in line on both the backend and the frontend / client.

When Will .NET Core be supported?

I’ve lost count of how many times I’ve been asked this. ASP.NET Core is an awesome technology and I’m looking forward to stability and GA for it. However, that isn’t the only technology we use in the stack. We also use Entity Framework, System.Web.OData, Automapper, and others. Until the whole stack is compatible with .NET Core, we won’t be releasing a .NET Core version of the Azure Mobile Apps Server SDK. Rest assured, we are in touch with the right teams and it’s definitely on our radar.

Contacting the team?

One of the great things about this team is how involved they are in the community. There are a multitude of methods to get in touch. If you have a problem, then the best way is to post a question on Stack Overflow. This reaches the widest possible audience. You can also reach us via the Azure Forums, on Twitter, or via comments on our documentation (although we prefer that you post on Stack Overflow if you have an issue). You can also open an issue on one of our GitHub repositories.

Finally, the App Service team has a new team blog. I’ll be publishing further Azure Mobile Apps posts there instead of here. This blog is going back to (mostly) my own side work.

I hope you’ve enjoyed reading the last two months worth of posts as much as I’ve enjoyed writing them. I’ve included an Index – take a look under Pages to your right.