Configuring ASP.NET Core Applications in Azure App Service

The .NET Core framework has a robust, extensible configuration system. You will normally see the following (or something akin to it) in the constructor of the Startup class (in Startup.cs):

        public Startup(IHostingEnvironment env)
        {
            var builder = new ConfigurationBuilder()
                .SetBasePath(env.ContentRootPath)
                .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
                .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
                .AddEnvironmentVariables();
            Configuration = builder.Build();
        }

This version reads its settings from JSON files and then overrides them with values from environment variables. It seems ideal for Azure App Service because all the settings – from app settings to connection strings – are provided to the application as environment variables. Unfortunately, they don’t necessarily end up in the right place in the configuration. For example, Azure App Service has a Data Connections facility for defining the other resources that the app service uses; you can reach it via the Data Connections item in your App Service menu. The common thing to do for Azure Mobile Apps is to define a connection to a SQL Azure instance called MS_TableConnectionString. However, when this is turned into an environment variable, the variable is named SQLAZURECONNSTR_MS_TableConnectionString, which is hardly obvious.

Fortunately, the ASP.NET Core configuration library is extensible, so I am going to develop an extension that handles these data connections. When developing locally, you will be able to add an appsettings.Development.json file with the following contents:

{
    "ConnectionStrings": {
        "MS_TableConnectionString": "my-connection-string"
    },
    "Data": {
        "MS_TableConnectionString": {
            "Type": "SQLAZURE",
            "ConnectionString": "my-connection-string"
        }
    }
}

When in production, this configuration is produced by the Azure App Service. When in development (and running locally), you will need to produce this JSON file yourself. Alternatively, you can adjust the launchSettings.json file to add the appropriate environment variable for local runs.
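
As an illustration, here is a minimal sketch of such a launchSettings.json profile (the profile name and connection string value are placeholders; the real file lives under Properties\ and will already contain other profiles):

{
    "profiles": {
        "ExampleServer": {
            "commandName": "Project",
            "environmentVariables": {
                "ASPNETCORE_ENVIRONMENT": "Development",
                "SQLAZURECONNSTR_MS_TableConnectionString": "my-connection-string"
            }
        }
    }
}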

To start, here is what our Startup.cs constructor will look like now:

        public Startup(IHostingEnvironment env)
        {
            var builder = new ConfigurationBuilder()
                .SetBasePath(env.ContentRootPath)
                .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
                .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
                .AddAzureAppServiceDataConnections()
                .AddEnvironmentVariables();
            Configuration = builder.Build();
        }

Configuration Builder Extensions

In order to wire our configuration provider into the configuration framework, I need to provide an extension method to the configuration builder. This is located in Extensions\AzureAppServiceConfigurationBuilderExtensions.cs:

using Microsoft.Extensions.Configuration;

namespace ExampleServer.Extensions
{
    public static class AzureAppServiceConfigurationBuilderExtensions
    {
        public static IConfigurationBuilder AddAzureAppServiceDataConnections(this IConfigurationBuilder builder)
        {
            return builder.Add(new AzureAppServiceDataConnectionsSource());
        }
    }
}

This is a fairly standard method of hooking extensions into extensible builder-type objects.

The Configuration Source and Provider

To actually provide the configuration source, I need to write two classes – a source, which is relatively simple, and a provider, which does most of the actual work. Fortunately, I can extend other classes within the configuration framework to reduce the amount of code I have to write. Let’s start with the easy one – the source. This is the object that is added to the configuration builder (located in Extensions\AzureAppServiceDataConnectionsSource.cs):

using Microsoft.Extensions.Configuration;
using System;

namespace ExampleServer.Extensions
{
    public class AzureAppServiceDataConnectionsSource : IConfigurationSource
    {
        public IConfigurationProvider Build(IConfigurationBuilder builder)
        {
            return new AzureAppServiceDataConnectionsProvider(Environment.GetEnvironmentVariables());
        }
    }
}

I’m passing the current environment into my provider because I want to be able to mock an environment later on for testing purposes. The configuration source (which is linked into the configuration builder) returns a configuration provider. The configuration provider is where the work is done (located in Extensions\AzureAppServiceDataConnectionsProvider.cs):

using Microsoft.Extensions.Configuration;
using System.Collections;
using System.Text.RegularExpressions;

namespace ExampleServer.Extensions
{
    internal class AzureAppServiceDataConnectionsProvider : ConfigurationProvider
    {
        /// <summary>
        /// The environment (key-value pairs of strings) that we are using to generate the configuration
        /// </summary>
        private IDictionary environment;

        /// <summary>
        /// The regular expression used to match the key in the environment for Data Connections.
        /// </summary>
        private Regex MagicRegex = new Regex(@"^([A-Z]+)CONNSTR_(.+)$");

        public AzureAppServiceDataConnectionsProvider(IDictionary environment)
        {
            this.environment = environment;
        }

        /// <summary>
        /// Loads the appropriate settings into the configuration.  The Data object is provided for us
        /// by the ConfigurationProvider
        /// </summary>
        /// <seealso cref="Microsoft.Extensions.Configuration.ConfigurationProvider"/>
        public override void Load()
        {
            foreach (string key in environment.Keys)
            {
                Match m = MagicRegex.Match(key);
                if (m.Success)
                {
                    var conntype = m.Groups[1].Value;
                    var connname = m.Groups[2].Value;
                    var connstr = environment[key] as string;

                    Data[$"Data:{connname}:Type"] = conntype;
                    Data[$"Data:{connname}:ConnectionString"] = connstr;
                    Data[$"ConnectionStrings:{connname}"] = connstr;
                }
            }
        }
    }
}

The bulk of the work is done within the Load() method. This cycles through each environment variable. If the variable matches the pattern we are looking for, the method extracts the pieces of data we need and puts them into the Data dictionary. The Data dictionary, along with all the other lifecycle methods, is provided for me by the ConfigurationProvider base class.
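
To show what this produces, here is a small sketch of my own (assuming the Configuration property built in the Startup constructor above, plus the standard Microsoft.Extensions.Configuration extension methods) that reads the flattened keys Load() writes; configuration keys use ':' as the section separator:

// With SQLAZURECONNSTR_MS_TableConnectionString set, Load() produces these keys:
//   Data:MS_TableConnectionString:Type             = "SQLAZURE"
//   Data:MS_TableConnectionString:ConnectionString = "<the connection string>"
//   ConnectionStrings:MS_TableConnectionString     = "<the connection string>"
var connectionString = Configuration.GetConnectionString("MS_TableConnectionString");
var connectionType = Configuration["Data:MS_TableConnectionString:Type"];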

Integrating with Dependency Injection

I want to provide a singleton service to my application that provides access to the data configuration. This service will be injected
into any code I want via dependency injection. To do this, I need an interface:

using System.Collections.Generic;

namespace ExampleServer.Services
{
    public interface IDataConfiguration
    {
        IEnumerable<DataConnectionSettings> GetDataConnections();
    }
}

This refers to a model (and I produce a list of these models later):

namespace ExampleServer.Services
{
    public class DataConnectionSettings
    {
        public string Name { get; set; }
        public string Type { get; set; }
        public string ConnectionString { get; set; }
    }
}

I also need a concrete implementation of the IDataConfiguration interface to act as the service:

using Microsoft.Extensions.Configuration;
using System.Collections.Generic;

namespace ExampleServer.Services
{
    public class DataConfiguration : IDataConfiguration
    {
        List<DataConnectionSettings> providerSettings;

        public DataConfiguration(IConfiguration configuration)
        {
            providerSettings = new List<DataConnectionSettings>();
            foreach (var section in configuration.GetChildren())
            {
                providerSettings.Add(new DataConnectionSettings
                {
                    Name = section.Key,
                    Type = section["Type"],
                    ConnectionString = section["ConnectionString"]
                });
            }
        }

        public IEnumerable<DataConnectionSettings> GetDataConnections()
        {
            return providerSettings;
        }
    }
}

Now that I have defined the interface and concrete implementation of the service, I can wire it into the dependency injection system in the ConfigureServices() method within Startup.cs:

        public void ConfigureServices(IServiceCollection services)
        {
            services.AddSingleton<IDataConfiguration>(new DataConfiguration(Configuration.GetSection("Data")));

            // Add framework services.
            services.AddMvc();
        }

I can now write my test page. The main work is in the HomeController.cs:

        public IActionResult DataSettings([FromServices] IDataConfiguration dataConfiguration)
        {
            ViewBag.Data = dataConfiguration.GetDataConnections();
            return View();
        }

The parameter to the DataSettings method will be provided from my service via dependency injection. Now all I have to do is write the view for it:

<h1>Data Settings</h1>

<div class="row">
    <table class="table table-striped">
        <thead>
            <tr>
                <th>Provider Name</th>
                <th>Type</th>
                <th>Connection String</th>
            </tr>
        </thead>
        <tbody>
            @foreach (var item in ViewBag.Data)
            {
                <tr>
                    <td>@item.Name</td>
                    <td>@item.Type</td>
                    <td>@item.ConnectionString</td>
                </tr>
            }
        </tbody>
    </table>
</div>

Publish this to your Azure App Service. Don’t forget to link a SQL Azure instance via the Data Connections menu. Then browse to https://yoursite.azurewebsites.net/Home/DataSettings – you should see your SQL Azure instance listed on the page.

Now for the good part…

ASP.NET Core configuration already does this for you. I didn’t find this out until AFTER I had written all this code. You can just add .AddEnvironmentVariables() to your configuration builder (as in the original constructor above) and the rename comes along for free. Specifically:

  • ConnectionStrings:MS_TableConnectionString will point to your connection string
  • ConnectionStrings:MS_TableConnectionString_ProviderName will point to the provider class name

So you don’t really need to do anything at all. However, this exercise is a great reference for producing the next configuration provider, as the sample code is really easy to follow.

The code for this blog post is in my GitHub Repository at tag p3. In the next post, I’ll tackle authentication by linking Azure App Service Authentication with ASP.NET Core Identity.

Running ASP.NET Core applications in Azure App Service

One of the things I get asked about semi-regularly is when Azure Mobile Apps is going to support .NET Core. It’s a logical progression for most people, and many ASP.NET developers are planning future web sites to run on ASP.NET Core. Also, the ASP.NET Core programming model makes a lot more sense (at least to me) than that of older ASP.NET applications. Finally, we have an issue open on the subject. So, what is holding us back? Well, there are a bunch of things. Some have been solved already and some need a lot of work. In the coming weeks, I’m going to be writing about the various pieces that need to be in place before we can say “Azure Mobile Apps is there”.

Of course, if you want a mobile backend, you can always hop over to Visual Studio Mobile Center. This provides a mobile backend for you without having to write any code. (Full disclosure: I’m now a program manager on that team, so I may be slightly biased). However, if you are thinking ASP.NET Core, then you likely want to write the code.

Let’s get started with something that does exist. How does one run ASP.NET Core applications on Azure App Service? Well, there are two methods. The first involves uploading your application to Azure App Service via the Visual Studio Publish… dialog or via Continuous Integration from GitHub, Visual Studio Team Services or even Dropbox. It’s a relatively easy method and one I would recommend. There is a gotcha, which I’ll discuss below.

The second method uses a Docker container to house the code that is then deployed onto a Linux App Service. This is still in preview (as of writing), so I can’t recommend this for production workloads.

Create a New ASP.NET Core Application

Let’s say you opened up Visual Studio 2017 (RC right now) and created a brand new ASP.NET Core MVC application – the basis for my research here.

  • Open up Visual Studio 2017 RC.
  • Select File > New > Project…
  • Select the ASP.NET Core Web Application (.NET Core).
    • Fill in an appropriate name for the solution and project, just as normal.
    • Click OK to create the project.
  • Select ASP.NET Core 1.1 from the framework drop-down (it will say ASP.NET Core 1.0 initially)
  • Select Web Application in the ASP.NET Core 1.1 Templates selection.
  • Click OK.

I called my solution netcore-server and the project ExampleServer. At this point, Visual Studio will go off and create a project for you. You can see what it creates easily enough, but I’ve checked it into my GitHub repository at tag p0.

I’m not going to cover ASP.NET Core programming too much in this series. You can read the definitive guide on their documentation site, and I would recommend you start by understanding ASP.NET Core programming before getting into the changes here.

Go ahead and run the service (either as a Kestrel service or an IIS Express service – it works with both). This is just to make sure that you have a working site.

Add Logging to your App

Logging is one of those central things that is needed in any application. There are so many things you can’t do (including diagnosing issues) if you don’t have appropriate logging. Fortunately, ASP.NET Core has logging built in. Let’s add some to the Controllers\HomeController.cs file:

using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;

namespace ExampleServer.Controllers
{
    public class HomeController : Controller
    {
        private ILogger logger;

        public HomeController(ILoggerFactory loggerFactory)
        {
            logger = loggerFactory.CreateLogger(this.GetType().FullName);
        }

        public IActionResult Index()
        {
            logger.LogInformation("In Index of the HomeController", null);
            return View();
        }
        // Rest of the file here

I’ve added the logger factory via dependency injection, then logged a message whenever the Index action of the home controller is served. If you run this version of the code (available in the GitHub repository at tag p1), you will see the following in your Visual Studio Output window:

[Screenshot 20170216-01: the Visual Studio Output window showing the informational log message]

It’s swamped by the Application Insights data, but you can clearly see the informational message there.

Deploy your App to Azure App Service

Publishing to Azure App Service is relatively simple – right-click on the project and select Publish… to kick off the process. The layout of the windows has changed from Visual Studio 2015, but it’s the same process. You can create a new App Service or use an existing one. Once you have answered all the questions, your site will be published. Eventually, your site will be displayed in your web browser.

Turn on Diagnostic Logging

  • Click View > Server Explorer to add the Server Explorer to your workspace.
  • Expand the Azure node, the App Service node, and finally your resource group node.
  • Right-click the app service and select View Settings.
  • Turn on logging and set the logging level to verbose:

[Screenshot 20170216-02: the App Service settings pane with logging turned on and the level set to Verbose]

  • Click Save to save the settings (the site will restart).
  • Right-click the app service in the Server Explorer again and this time select View Streaming Logs.
  • Wait until you see that you are connected to the log streaming service (in the Output window).

Now refresh your browser so that it reloads the index page again. Note how you see the access logs (which files have been requested) but the log message we put into the code is not there.

The Problem and Solution

The problem is, hopefully, obvious. ASP.NET Core does not by default feed logs to Azure App Service. We need to enable that feature in the .NET Core host. We do this in the Program.cs file:

using System.IO;
using Microsoft.AspNetCore.Hosting;

namespace ExampleServer
{
    public class Program
    {
        public static void Main(string[] args)
        {
            var host = new WebHostBuilder()
                .UseKestrel()
                .UseContentRoot(Directory.GetCurrentDirectory())
                .UseIISIntegration()
                .UseStartup<Startup>()
                .UseApplicationInsights()
                .UseAzureAppServices()
                .Build();

            host.Run();
        }
    }
}

You will also need to add the Microsoft.AspNetCore.AzureAppServicesIntegration package from NuGet for this to work. Once you have made this change, you can deploy it and watch the logs again:

[Screenshot 20170216-03: the streaming logs now showing the informational message from the HomeController]

If you have followed the instructions, you will need to switch the Output window back to the Azure logs. The output window will have been switched to Build during the publish process.

Adjusting the WebHostBuilder for the environment

It’s likely that you won’t want Application Insights and Azure App Service logging except when you are running on Azure App Service. There are a number of environment variables that Azure App Service sets, and you can leverage these as well. My favorites are REGION_NAME (which indicates which Azure region your service is running in) and WEBSITE_OWNER_NAME (which is a combination of the subscription ID and other deployment details). You can test for these and adjust the pipeline accordingly:

using Microsoft.AspNetCore.Hosting;
using System;
using System.IO;

namespace ExampleServer
{
    public class Program
    {
        public static void Main(string[] args)
        {
            var hostBuilder = new WebHostBuilder()
                .UseKestrel()
                .UseContentRoot(Directory.GetCurrentDirectory())
                .UseIISIntegration()
                .UseStartup<Startup>()
                .UseApplicationInsights();

            var regionName = Environment.GetEnvironmentVariable("REGION_NAME");
            if (regionName != null)
            {
                hostBuilder.UseAzureAppServices();
            }
                
            var host = hostBuilder.Build();

            host.Run();
        }
    }
}

You can download this code at my GitHub repository at tag p2.

The Latest Wrinkle (Updated 4/10/2017)

The latest edition of the WindowsAzure.Storage package has breaking changes, so it can’t be included until a major release. In the interim, you will need to edit your .csproj file and add the following inside a &lt;PropertyGroup&gt;:

<PackageTargetFallback>$(PackageTargetFallback);portable-net40+sl5+win8+wp8+wpa81;portable-net45+win8+wp8+wpa81</PackageTargetFallback>

Integrating OData and DocumentDb with Azure Functions

This is the finale in a series of posts aimed at providing an alternative to Azure Mobile Apps based on Azure Functions and DocumentDb. I started by discussing how to run a CRUD HTTP API, then moved on to DocumentDb, and handled inserts and replacements. Now it’s time to fetch data. Azure Mobile Apps uses a modified OData v3 query string to perform the offline sync and online querying of data. This is mostly because ASP.NET (which was the basis for the original service) has a nice OData library for it. OData is painful to use in our context, however. Firstly, there are some necessary renames – the updatedAt field is actually the DocumentDb timestamp, for example. Secondly, there is no ready-made library for turning an OData query string into a DocumentDb SQL statement. So I don’t have an “easy” way of fulfilling the requirement.

Fortunately, the Azure Mobile Apps Node SDK has split off a couple of libraries for more general use. The first is azure-query-js, a library for converting between a set of OData query parameters and an internal query structure. The second is azure-odata-sql, which turns a normalized OData query into SQL, using Microsoft SQL or SQLite syntax. Neither of these libraries is particularly well documented, but they are relatively easy to use based on the examples within the Azure Mobile Apps SDKs. We are going to need to modify the azure-odata-sql library to generate appropriate SQL statements for DocumentDB, so I’ve copied the source of the library into my project (in the odata-sql directory). My first stab at the getAllItems() method looks like this:

var OData = require('azure-query-js').Query.Providers.OData;
var formatSql = require('../odata-sql').format;

function getAllItems(req, res) {
    // DocumentDB doesn't support SKIP yet, so we can't do TOP either without some problems
    var query = OData.fromOData(
        settings.table,
        req.query.$filter,
        req.query.$orderby,
        undefined, //parseInt(req.query.$skip),
        undefined, //parseInt(req.query.$top),
        req.query.$select,
        req.query.$inlinecount === 'allpages',
        !!req.query.__includeDeleted);

    var sql = formatSql(OData.toOData(query), {
        containerName: settings.table,
        flavor: 'documentdb'
    });
    
    res.status(200).json({ query: req.query, sql: sql, message: 'getAll' });
}

As noted in the code comment, DocumentDB hasn’t added full support for SKIP/TOP statements, so we can’t use those elements. Once the support is available within DocumentDB, I just need to include it in the odata-sql library and re-enable the two skipped parameters in the fromOData() call.

So, what does this do? Well, first, it converts the request from the browser (or client SDK) from the jumble of valid OData query params into a Query object. That Query object is actually a set of functions to do the parsing. Then we use the toOData() method (from the azure-query-js library) to convert that Query object into a normalized OData query. Finally, we use a custom SQL formatter (based on the azure-odata-sql library) to convert it into a SQL statement. If you run this, you should get something like the following out of it:

[Screenshot getall-1: the JSON response showing the generated SQL statement and its parameters]

I can now see the SQL statements being generated. The only problem is that they are not valid SQL statements for DocumentDB – they are perfectly valid for Microsoft SQL Server or SQL Azure, though. We need to adjust the odata-sql library for our needs. There are a couple of things needed here. Our first requirement is around the updatedAt field. This is not updatedAt in DocumentDB – it’s _ts, and it’s a number. We can rewrite the filter using regular expressions like this:

if (req.query.$filter) {
    while (/updatedAt [a-z]+ '[^']+'/.test(req.query.$filter)) {
        var re = new RegExp(/updatedAt ([a-z]+) '([^']+)'/);
        var results = re.exec(req.query.$filter);
        var newDate = moment(results[2]).unix();
        var newString = `_ts ${results[1]} ${newDate}`;
        req.query.$filter = req.query.$filter.replace(results[0], newString);
    }
}

I could probably have shrunk this code somewhat, but it’s clear what is going on. We loop over the filter while there is still an updatedAt clause, convert the date, then replace the old string with the new string. We need to do similar things with the $select and $orderby clauses as well – left out here because I’m trying to keep this simple, although a rough sketch follows.
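
As an illustration only (this is not in the original code, and the real handling may differ), the $orderby rename could follow the same idea – any reference to updatedAt becomes the DocumentDB _ts field:

// Hedged sketch: rename updatedAt to _ts in the $orderby clause as well
if (req.query.$orderby) {
    req.query.$orderby = req.query.$orderby.replace(/\bupdatedAt\b/g, '_ts');
}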

In terms of the odata-sql library, most of what we want to change is in the helpers.js file. Specifically, in the case of DocumentDB, we don’t need the square brackets around identifiers, so the formatMember() and formatTableName() methods must be adjusted to compensate.

I found it easier to step through the code by writing a small test program to test this logic out. You can find it in todoitem\test.js. With Visual Studio Code, you can set breakpoints, watch variables and do all the normal debugging things to really understand where the code is going and what it is doing.

Now that the SQL looks good, I need to execute the SQL commands. I’ve got a version of queryDocuments() in the driver:

    queryDocuments: function (client, collectionRef, query, callback) {
        client.queryDocuments(collectionRef._self, query).toArray(callback);
    },

This is then used in the HTTP trigger getAllItems() method. I’ve included the whole method here for you:

function getAllItems(req, res) {
    // Adjust the query parameters for DocumentDB
    if (req.query.$filter) {
        while (/updatedAt [a-z]+ '[^']+'/.test(req.query.$filter)) {
            var re = new RegExp(/updatedAt ([a-z]+) '([^']+)'/);
            var results = re.exec(req.query.$filter);
            var newDate = moment(results[2]).unix();
            var newString = `_ts ${results[1]} ${newDate}`;
            req.query.$filter = req.query.$filter.replace(results[0], newString);
        }
    }
    // Remove the updatedAt from the request
    if (req.query.$select) {
        req.query.$select = req.query.$select.replace(/,{0,1}updatedAt/g, '');
    }

    // DocumentDB doesn't support SKIP yet, so we can't do TOP either
    var query = OData.fromOData(
        settings.table,
        req.query.$filter,
        req.query.$orderby,
        undefined, //parseInt(req.query.$skip),
        undefined, //parseInt(req.query.$top),
        req.query.$select,
        req.query.$inlinecount === 'allpages',
        !!req.query.__includeDeleted);

    var sql = formatSql(OData.toOData(query), {
        containerName: settings.table,
        flavor: 'documentdb'
    });

    // Fix up the object so that the SQL object matches what DocumentDB expects
    sql[0].query = sql[0].sql;
    sql[0].parameters.forEach((value, index) => {
        sql[0].parameters[index].name = `@${value.name}`;
    });

    // Execute the query
    console.log(JSON.stringify(sql[0], null, 2));
    driver.queryDocuments(refs.client, refs.table, sql[0])
    .then((documents) => {
        documents.forEach((value, index) => {
            documents[index] = convertItem(value);
        });

        if (sql.length == 2) {
            // We requested $inlinecount == allpages.  This means we have
            // to adjust the output to include a count/results field.  It's
            // used for paging, which DocumentDB doesn't support yet.  As
            // a result, this is a hacky way of doing this.
            res.status(200).json({
                results: documents,
                count: documents.length
            });
        } else {
            res.status(200).json(documents);
        }
    })
    .catch((error) => {
        res.status(400).json(error);
    });
}

Wrapping Up

So, there you have it. A version of the Azure Mobile Apps service written with DocumentDB and executing in dynamic compute on Azure Functions.

Of course, I wouldn’t actually use this code in production. Firstly, I have not written any integration tests for it, and there are a bunch of corner cases that I would definitely want to test. DocumentDB doesn’t have good paging support yet, so you are getting all records all the time. I also haven’t looked at all the OData constructs that can be converted into SQL statements to ensure DocumentDB supports them. Finally, and this is a biggie, the service has a “cold start” time. It’s not very much, but it can be significant. In the case of a dedicated service, you pay that cold start cost once. In the case of a dynamic compute Azure Function, you can pay it continually. This isn’t actually a problem with DocumentDB, since I am mostly passing through the REST calls (adjusted). However, it can become a problem when using other sources. One final note is that I keep all the records in memory – this can drive up the memory requirements (and hence cost) of the Azure Function on a per-execution basis.

Until next time, you can find the source code for this project on my GitHub repository.

Updating Documents in DocumentDb

In my last few posts, I’ve been working on an Azure Mobile Apps replacement service. It will run in Azure Functions and use DocumentDb as a backing store. Neither of these requirements is possible with the Azure Mobile Apps server SDK today. Thus far, I’ve created a CRUD HTTP API, initialized the DocumentDb store and handled inserts. Today is all about fetching, but more importantly it is about replacing documents and handling conflict resolution.

The DocumentDb Driver

Before I get started with the code for the endpoint, I need to add some more functionality to my DocumentDb promisified driver. In the document.js file, I’ve added the following:

module.exports = {
    createDocument: function (client, collectionRef, docObject, callback) {
        client.createDocument(collectionRef._self, docObject, callback);
    },

    fetchDocument: function (client, collectionRef, docId, callback) {
        var querySpec = {
            query: 'SELECT * FROM root r WHERE r.id=@id',
            parameters: [{
                name: '@id',
                value: docId
            }]
        };

        client.queryDocuments(collectionRef._self, querySpec).current(callback);
    },

    readDocument: function (client, docLink, options, callback) {
        client.readDocument(docLink, options, callback);
    },

    replaceDocument: function(client, docLink, docObject, callback) {
        client.replaceDocument(docLink, docObject, callback);    
    }
};

My first attempt at reading a document used the readDocument() method. I would construct a docLink using the following:

var docLink = `${refs.table._self}${refs.table._docs}${docId}`;

However, this always resulted in a 400 Bad Request response from DocumentDb. The reason is likely that the _self link uses the shortened (and obfuscated) URI, whereas the document Id I am using is a GUID and is not obfuscated. If you take a look at the response from DocumentDb, there is an id field and a _rid field. The _rid field is what is used in document links. Thus, instead of using readDocument(), I’m using a queryDocuments() call on the driver to search for the Id. I’ve also promisified these calls in the normal manner using the Bluebird library.
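
As a rough sketch of that promisification (the require path and wrapper shape are my assumptions – check the repository for the actual wiring), Bluebird's Promise.promisify() turns each callback-style driver function into one that returns a Promise:

var Promise = require('bluebird');
var rawDriver = require('./document');   // assumed path to the driver module shown above

// Wrap each node-style (err, result) callback function so it returns a Promise,
// which is what lets the table code chain .then()/.catch() on driver calls.
var driver = {
    createDocument: Promise.promisify(rawDriver.createDocument),
    fetchDocument: Promise.promisify(rawDriver.fetchDocument),
    readDocument: Promise.promisify(rawDriver.readDocument),
    replaceDocument: Promise.promisify(rawDriver.replaceDocument)
};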

Fetching a Record

The Azure Mobile Apps SDK allows me to GET /tables/todoitem/id – where id is the GUID. With the driver complete, I can do the following in the Azure Function table controller:

function getOneItem(req, res, id) {
    driver.fetchDocument(refs.client, refs.table, id)
    .then((document) => {
        if (typeof document === 'undefined')
            res.status(404).json({ 'error': 'Not Found' });
        else
            res.status(200).json(convertItem(document));
    })
    .catch((error) => {
        res.status(error.code).json(convertError(error));
    });
}

When doing this, I did notice that some semantics seem to have changed in the Azure Functions SDK. I can no longer use context.bindings.id and had to switch to using req.params.id. Aside from this small change in the router code, this code is relatively straightforward. I established the convertItem() and convertError() methods in my last article.

Replacing a Record

The more complex case is replacing a record. There is a little bit of logic around conflict resolution:

  • If there is an If-Match header, ensure the version of the current record matches the If-Match header; otherwise return a 412 response.
  • If there is no If-Match header, but the new record contains a version, return a 409 response.
  • Otherwise, update the record.

Because the version and updatedAt fields are controlled by the server, we also need to ensure the new object does not contain those values when it is submitted to DocumentDb:

function replaceItem(req, res, id) {
    driver.fetchDocument(refs.client, refs.table, id)
    .then((document) => {
        if (typeof document === 'undefined') {
            res.status(404).json({ 'error': 'Not Found' });
            return;
        }

        var item = req.body, version = new Buffer(document._etag).toString('base64')
        if (item.id !== id) {
            res.status(400).json({ 'error': 'Id does not match' });
            return;
        }

        if (req.headers.hasOwnProperty('if-match') && req.headers['if-match'] !== version) {
            res.status(412).json({ 'current': version, 'new': item.version, 'error': 'Version Mismatch' })
            return;
        }

        if (item.hasOwnProperty('version') && item.version !== version) {
            res.status(409).json({ 'current': version, 'new': item.version, 'error': 'Version Mismatch' });
            return;
        }

        // Delete the version and updatedAt fields from the doc before submitting
        delete item.updatedAt;
        delete item.version;
        driver.replaceDocument(refs.client, document._self, item)
        .then((updatedDocument) => {
            res.status(200).json(convertItem(updatedDocument));
            return;
        });
    })
    .catch((error) => {
        res.status(error.code).json(convertError(error));
    });
}

I’m using the same Base64 encoding for the etag in the current document to ensure I can do a proper match. I could get DocumentDb to do all this work for me – the options parameter on the underlying replaceDocument() call allows me to specify an If-Match condition. However, to do that, I would still need to fetch the record (since I need the document link), so I may as well do the checks myself. This also keeps some load off DocumentDb, which is helpful.
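
For reference, here is a sketch of that alternative (the accessCondition shape is my reading of the DocumentDB Node SDK request options, and the driver above would need to pass the options through – verify against the SDK version you are using):

// Let DocumentDB enforce the ETag match itself; the client's version field is the
// Base64-encoded _etag, so decode it back before handing it to the SDK.
var etag = new Buffer(item.version, 'base64').toString();
client.replaceDocument(document._self, item, {
    accessCondition: { type: 'IfMatch', condition: etag }
}, callback);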

While this is almost there, there is one final item. If there is a conflict, the server version of the document should be returned. That means the 409 and 412 responses need to return convertItem(document) instead – a simple change, shown below.
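
For example, the 412 branch would become something like this (the same change applies to the 409 branch):

// Version mismatch: return the server's copy of the record so the client can resolve the conflict
res.status(412).json(convertItem(document));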

Deleting a Record

Deleting a record does not actually delete the record. Azure Mobile Apps uses soft delete (whereby the deleted flag is set to true instead). This means that I need to use replaceDocument() again for deletions:

function deleteItem(req, res, id) {
    driver.fetchDocument(refs.client, refs.table, id)
    .then((document) => {
        if (typeof document === 'undefined') {
            res.status(404).json({ 'error': 'Not Found' });
            return;
        }

        var item = convertItem(document);
        delete item.updatedAt;
        delete item.version;
        item.deleted = true;
        driver.replaceDocument(refs.client, document._self, item)
        .then((updatedDocument) => {
            res.status(200).json(convertItem(updatedDocument));
            return;
        });
    })
    .catch((error) => {
        res.status(error.code).json(convertError(error));
    });
}

This brings up a point about the getOneItem() method. It does not take the deleted flag into account. I need it to return 404 Not Found if the deleted flag is set:

function getOneItem(req, res, id) {
    driver.fetchDocument(refs.client, refs.table, id)
    .then((document) => {
        if (typeof document === 'undefined' || document.deleted === true)
            res.status(404).json({ 'error': 'Not Found' });
        else
            res.status(200).json(convertItem(document));
    })
    .catch((error) => {
        res.status(error.code).json(convertError(error));
    });
}

It’s a simple change, but important in getting the protocol right.

What’s left?

There is only one method I have not written yet, and it’s the biggest one of the set – the getAllItems() method. That’s because it deals with OData querying, which is no small task. I’ll be tackling that in my next article. Until then, get the current codebase at my GitHub repository.

Creating Documents in DocumentDB with Azure Functions HTTP API

Thus far in my story of implementing Azure Mobile Apps in a dynamic (consumption) plan of Azure Functions using DocumentDB, I’ve got the basic CRUD HTTP API stubbed out and the initialization of my DocumentDB collection done. It’s now time to work on the actual endpoints that my Azure Mobile Apps SDK will call. There are five methods to implement:

  • Insert
  • Update / Replace
  • Delete
  • Fetch a single record
  • Search

I’m going to do these in the order above. Before I do that, I need to take a look at what DocumentDB provides me. Azure Mobile Apps requires five fields to work properly:

  • id – a string (generally a GUID).
  • createdAt – the date the record was created, in ISO-8601 format.
  • updatedAt – the date the record was updated, in ISO-8601 format.
  • deleted – a boolean, if the record is deleted.
  • version – an opaque string for conflict resolution.

DocumentDB provides some of this for us:

  • id – a string (generally a GUID).
  • _ts – a POSIX/Unix timestamp (the number of seconds since the epoch) recording when the record was last updated.
  • _etag – a checksum / version identifier.

When we create a record, we need to convert the document that DocumentDB returns to us into the format that Azure Mobile Apps expects. I use the following routine:

/**
 * Given an item from DocumentDB, convert it into something that the service can use
 * @param {object} item the original item
 * @return {object} the new item
 */
function convertItem(item) {
    if (item.hasOwnProperty('_ts')) {
        item.updatedAt = moment.unix(item._ts).toISOString();
        delete item._ts;
    } else {
        throw new Error('Invalid item - no _ts field');
    }

    if (item.hasOwnProperty('_etag')) {
        item.version = new Buffer(item._etag).toString('base64');
        delete item._etag;
    } else {
        throw new Error('Invalid item - no _etag field');
    }

    // Delete all the known fields from documentdb
    if (item.hasOwnProperty('_rid')) delete item._rid;
    if (item.hasOwnProperty('_self')) delete item._self;
    if (item.hasOwnProperty('_attachments')) delete item._attachments;

    return item;
}

I’m using the moment library to do date/time manipulation. This is a very solid library and well worth learning about. In addition to the convertItem() method, I also need something to convert the error values that come back from DocumentDB. They are not nicely formed, so some massaging is in order:

/**
 * Convert a DocumentDB error into something intelligible
 * @param {Error} error the error object
 * @return {object} the intelligible error object
 */
function convertError(error) {
    var body = JSON.parse(error.body);
    if (body.hasOwnProperty("message")) {
        var msg = body.message.replace(/^Message:\s+/, '').split(/\r\n/);
        body.errors = JSON.parse(msg[0]).Errors;

        var addl = msg[1].split(/,\s*/);
        addl.forEach((t) => {
            var tt = t.split(/:\s*/);
            tt[0] = tt[0].replace(/\s/, '').toLowerCase();
            body[tt[0]] = tt[1];
        });

        delete body.message;
    }

    return body;
}

I had to work through the error object several times, experimenting with the actual responses, to come up with this routine. It seems like the right code based on that experimentation; whether it holds up during normal usage remains to be seen.

I’ve already written the createDocument() method in the DocumentDB driver:

module.exports = {
    createDocument: function (client, collectionRef, docObject, callback) {
        client.createDocument(collectionRef._self, docObject, callback);
    }
};

This is then promisified using the Bluebird promise library. With this work done, my code for inserts becomes very simple:

function insertItem(req, res) {
    var item = req.body;

    item.createdAt = moment().toISOString();
    if (!item.hasOwnProperty('deleted')) item.deleted = false;

    driver.createDocument(refs.client, refs.table, item)
    .then((document) => {
        res.status(201).json(convertItem(document));
    })
    .catch((error) => {
        res.status(error.code).json(convertError(error));
    });
}

The item that we need to insert comes in on the request body. We need to add the createdAt field and the deleted field (if it isn’t already set). Since this is an insert, we call createDocument() in the driver. If it succeeds, we return a 201 Created response with the new document (converted to the Azure Mobile Apps format). If not, we return the status code from DocumentDB together with the converted error object.

We can test inserts with Postman. For example, here is a successful insert:

[Screenshot insert-1: a Postman request showing a successful insert returning a 201 Created response]

DocumentDB creates the id for me if it doesn’t exist. I convert the _ts and _etag fields to something more usable by the Azure Mobile Apps SDK on the way back to the client. If I copy the created object and push it again, I will get a conflict:

[Screenshot insert-2: the same object pushed again, returning a conflict error from DocumentDB]

Notice how DocumentDB does all the work for me? All I need to do is make some adjustments to the output to get my insert operation working. I can use the Document Browser within the Azure Portal to look at the actual records.

In the next post, I’m going to move on to Update, Delete and Fetch, all in one go.