Integrating OData and DocumentDb with Azure Functions

This is the finale in a series of posts that aimed to provide an alternative to Azure Mobile Apps based on Azure Functions and DocumentDb. I started by discussing how to run a CRUD HTTP API, then moved on to DocumentDb and handled inserts and replacements. Now it’s time to fetch data. Azure Mobile Apps uses a modified OData v3 query string to perform the offline sync and online querying of data, mostly because ASP.NET (the basis for the original service) has a nice OData library. OData is painful to use in our context, however. Firstly, there are some necessary renames – the updatedAt field is actually the DocumentDb timestamp, for example. Secondly, there is no ready-made library for turning an OData query string into a DocumentDb SQL statement. So I don’t have an “easy” way of fulfilling the requirement.

Fortunately, the Azure Mobile Apps Node SDK has split off a couple of libraries for more general use. The first is azure-query-js, a library for converting between a set of OData query parameters and an internal query structure. The second is azure-odata-sql, which turns a normalized OData query into SQL, with Microsoft SQL Server or SQLite flavors. Neither of these libraries is particularly well documented, but they are relatively easy to use based on the examples within the Azure Mobile Apps SDKs. We are going to need to modify the azure-odata-sql library to generate appropriate SQL statements for DocumentDB, so I’ve copied the source of the library into my project (in the directory odata-sql). My first stab at the getAllItems() method looks like this:

var OData = require('azure-query-js').Query.Providers.OData;
var formatSql = require('../odata-sql').format;

function getAllItems(req, res) {
    // DocumentDB doesn't support SKIP yet, so we can't do TOP either without some problems
    var query = OData.fromOData(
        settings.table,
        req.query.$filter,
        req.query.$orderby,
        undefined, //parseInt(req.query.$skip),
        undefined, //parseInt(req.query.$top),
        req.query.$select,
        req.query.$inlinecount === 'allpages',
        !!req.query.__includeDeleted);

    var sql = formatSql(OData.toOData(query), {
        containerName: settings.table,
        flavor: 'documentdb'
    });
    
    res.status(200).json({ query: req.query, sql: sql, message: 'getAll' });
}

As noted here, DocumentDB hasn’t added full support for SKIP/TOP statements yet, so we can’t use those elements. Once the support is available within DocumentDB, I just need to add that support to the odata-sql library and change the two parameters in the fromOData() call.

So, what does this do? Well, first, it converts the request from the browser (or client SDK) from the jumble of valid OData query params into a Query object. That Query object is actually a set of functions that do the parsing. Then we use the toOData() method (from the azure-query-js library) to convert that Query object into a normalized OData query. Finally, we use a custom SQL formatter (based on the azure-odata-sql library) to convert it into a SQL statement. If you run this, you should get something like the following out of it:

[Screenshot getall-1: the JSON response echoing the parsed query and the generated SQL]
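For illustration (my own reconstruction of the output, not the actual screenshot), a request like GET /tables/todoitem?$filter=complete eq false would echo something like this:

{
  "query": { "$filter": "complete eq false" },
  "sql": [
    {
      "sql": "SELECT * FROM [todoitem] WHERE ([complete] = @p1)",
      "parameters": [ { "name": "p1", "value": false } ]
    }
  ],
  "message": "getAll"
}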

I can now see the SQL statements being generated. The only problem is that they are not valid SQL statements for DocumentDB – they are perfectly valid for Microsoft SQL Server or SQL Azure instead. We need to adjust the odata-sql library for our needs. There are a couple of things needed here. The first requirement is around the updatedAt field. This field is not called updatedAt in DocumentDB – it’s _ts, and it’s a number (a Unix timestamp). We can rewrite the filter using regular expressions like this:

if (req.query.$filter) {
    while (/updatedAt [a-z]+ '[^']+'/.test(req.query.$filter)) {
        var re = new RegExp(/updatedAt ([a-z]+) '([^']+)'/);
        var results = re.exec(req.query.$filter);
        var newDate = moment(results[2]).unix();
        var newString = `_ts ${results[1]} ${newDate}`;
        req.query.$filter = req.query.$filter.replace(results[0], newString);
    }
}

I could probably have shrunk this code somewhat, but it’s clear what is going on. We loop over the filter while it still contains an updatedAt clause, convert the date to a Unix timestamp, then replace the old clause with the new one. We need to do similar things with the $select and $orderby clauses as well – mostly left out here to keep things simple, but see the sketch below.
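For example, a minimal sketch of the $orderby rename (my own sketch – the real code would need to handle more edge cases):

if (req.query.$orderby) {
    // updatedAt is stored as the numeric _ts field in DocumentDB
    req.query.$orderby = req.query.$orderby.replace(/\bupdatedAt\b/g, '_ts');
}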

In terms of the odata-sql library, most of what we want is in the helpers.js file. Specifically, in the case of DocumentDB, we don’t need the square brackets around identifiers. That means the formatMember() and formatTableName() methods must be adjusted to compensate.
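I won’t reproduce the whole file, but the changes are along these lines (a sketch only – the actual helper signatures in azure-odata-sql differ slightly):

// odata-sql/helpers.js: DocumentDB identifiers are not quoted with
// square brackets, so the formatters just pass the names straight through.
function formatTableName(tableName) {
    return tableName;       // the mssql flavor returns '[' + tableName + ']'
}

function formatMember(memberName) {
    return memberName;      // the mssql flavor returns '[' + memberName + ']'
}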

I found it easier to step through the code by writing a small test program to test this logic out. You can find it in todoitem\test.js. With Visual Studio Code, you can set breakpoints, watch variables and do all the normal debugging things to really understand where the code is going and what it is doing.
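A minimal version of such a test program (my sketch – the repository version may differ) looks like this:

var OData = require('azure-query-js').Query.Providers.OData;
var formatSql = require('../odata-sql').format;

// Simulate the query parameters an Azure Mobile Apps client would send
var query = OData.fromOData(
    'todoitem',             // table name
    '_ts gt 1468000000',    // $filter, already rewritten for DocumentDB
    '_ts desc',             // $orderby
    undefined,              // $skip - unsupported
    undefined,              // $top - unsupported
    undefined,              // $select
    false,                  // $inlinecount !== 'allpages'
    false);                 // __includeDeleted

var sql = formatSql(OData.toOData(query), {
    containerName: 'todoitem',
    flavor: 'documentdb'
});

console.log(JSON.stringify(sql, null, 2));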

Now that the SQL looks good, I need to execute the SQL commands. I’ve got a version of queryDocuments() in the driver:

    queryDocuments: function (client, collectionRef, query, callback) {
        client.queryDocuments(collectionRef._self, query).toArray(callback);
    },

This is then used in the HTTP trigger getAllItems() method. I’ve included the whole method here for you:

function getAllItems(req, res) {
    // Adjust the query parameters for DocumentDB
    if (req.query.$filter) {
        while (/updatedAt [a-z]+ '[^']+'/.test(req.query.$filter)) {
            var re = new RegExp(/updatedAt ([a-z]+) '([^']+)'/);
            var results = re.exec(req.query.$filter);
            var newDate = moment(results[2]).unix();
            var newString = `_ts ${results[1]} ${newDate}`;
            req.query.$filter = req.query.$filter.replace(results[0], newString);
        }
    }
    // Remove the updatedAt from the request
    if (req.query.$select) {
        req.query.$select = req.query.$select.replace(/,{0,1}updatedAt/g, '');
    }

    // DocumentDB doesn't support SKIP yet, so we can't do TOP either
    var query = OData.fromOData(
        settings.table,
        req.query.$filter,
        req.query.$orderby,
        undefined, //parseInt(req.query.$skip),
        undefined, //parseInt(req.query.$top),
        req.query.$select,
        req.query.$inlinecount === 'allpages',
        !!req.query.__includeDeleted);

    var sql = formatSql(OData.toOData(query), {
        containerName: settings.table,
        flavor: 'documentdb'
    });

    // Fix up the object so that the SQL object matches what DocumentDB expects
    sql[0].query = sql[0].sql;
    sql[0].parameters.forEach((value, index) => {
        sql[0].parameters[index].name = `@${value.name}`;
    });

    // Execute the query
    console.log(JSON.stringify(sql[0], null, 2));
    driver.queryDocuments(refs.client, refs.table, sql[0])
    .then((documents) => {
        documents.forEach((value, index) => {
            documents[index] = convertItem(value);
        });

        if (sql.length == 2) {
            // We requested $inlinecount == allpages.  This means we have
            // to adjust the output to include a count/results field.  It's
            // used for paging, which DocumentDB doesn't support yet.  As
            // a result, this is a hacky way of doing this.
            res.status(200).json({
                results: documents,
                count: documents.length
            });
        } else {
            res.status(200).json(documents);
        }
    })
    .catch((error) => {
        res.status(400).json(error);
    });
}

Wrapping Up

So, there you have it. A version of the Azure Mobile Apps service written with DocumentDB and executing in dynamic compute on Azure Functions.

Of course, I wouldn’t actually use this code in production. Firstly, I have not written any integration tests for this, and there are a bunch of corner cases that I would definitely want to test. DocumentDB doesn’t have good paging support yet, so you are getting all records all the time. I also haven’t verified that every OData construct that can be converted into a SQL statement is actually supported by DocumentDB. Finally, and this is a biggie, the service has a “cold start” time. It’s not very much, but it can be significant. In the case of a dedicated service, you pay that cold start cost once. In the case of a dynamic compute Azure Function, you can pay it continually. This isn’t much of a problem with DocumentDB, since I am mostly passing through the (adjusted) REST calls. However, it can become a problem when using other sources. One final note: I keep all the records in memory, which can drive up the memory requirements (and hence cost) of the Azure Function on a per-execution basis.

Until next time, you can find the source code for this project on my GitHub repository.

Updating Documents in DocumentDb

In my last few posts, I’ve been working on an Azure Mobile Apps replacement service. It will run in Azure Functions and use DocumentDb as a backing store. Neither of these requirements is possible with the Azure Mobile Apps server SDK today. Thus far, I’ve created a CRUD HTTP API, initialized the DocumentDb store and handled inserts. Today is all about fetching, but more importantly about replacing documents and handling conflict resolution.

The DocumentDb Driver

Before I get started with the code for the endpoint, I need to add some more functionality to my DocumentDb promisified driver. In the document.js file, I’ve added the following:

module.exports = {
    createDocument: function (client, collectionRef, docObject, callback) {
        client.createDocument(collectionRef._self, docObject, callback);
    },

    fetchDocument: function (client, collectionRef, docId, callback) {
        var querySpec = {
            query: 'SELECT * FROM root r WHERE r.id=@id',
            parameters: [{
                name: '@id',
                value: docId
            }]
        };

        client.queryDocuments(collectionRef._self, querySpec).current(callback);
    },

    readDocument: function (client, docLink, options, callback) {
        client.readDocument(docLink, options, callback);
    },

    replaceDocument: function(client, docLink, docObject, callback) {
        client.replaceDocument(docLink, docObject, callback);    
    }
};

My first attempt at reading a document used the readDocument() method. I would construct a docLink using the following:

var docLink = `${refs.table._self}${refs.table._docs}${docId}`;

However, this always resulted in a 400 Bad Request response from DocumentDb. The reason is likely that the _self link uses a shortened (and obfuscated) resource id, whereas the document id I am using is a GUID and is not obfuscated. If you take a look at the response from DocumentDb, there is an id field and a _rid field; the _rid field is what is used in document links. Thus, instead of using readDocument(), I’m using a queryDocuments() call on the driver to search for the id. I’ve also promisified these calls in the normal manner using the Bluebird library.
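That promisification is a thin wrapper – something like this sketch of “the normal manner”:

var Promise = require('bluebird');
var documents = require('./document');   // the callback-based driver above

// Wrap each callback-style function so it returns a Promise instead,
// keeping the same names so callers can chain .then() directly.
module.exports = {
    createDocument: Promise.promisify(documents.createDocument),
    fetchDocument: Promise.promisify(documents.fetchDocument),
    readDocument: Promise.promisify(documents.readDocument),
    replaceDocument: Promise.promisify(documents.replaceDocument)
};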

Fetching a Record

The Azure Mobile Apps SDK allows me to GET /tables/todoitem/id – where id is the GUID. With the driver complete, I can do the following in the Azure Function table controller:

function getOneItem(req, res, id) {
    driver.fetchDocument(refs.client, refs.table, id)
    .then((document) => {
        if (typeof document === 'undefined')
            res.status(404).json({ 'error': 'Not Found' });
        else
            res.status(200).json(convertItem(document));
    })
    .catch((error) => {
        res.status(error.code).json(convertError(error));
    });
}

When doing this, I did notice that some semantics seem to have changed in the Azure Functions SDK: I can no longer use context.bindings.id and had to switch to using req.params.id. Aside from this small change in the router code, this code is relatively straightforward. I established the convertItem() and convertError() methods in my last article.

Replacing a Record

The more complex case is replacing a record. There is a little bit of logic around conflict resolution:

  • If there is an If-Match header, ensure the version of the current record matches it; otherwise return a 412 response.
  • If there is no If-Match header, but the new record contains a version, return a 409 response.
  • Otherwise, update the record.

Because we want the version and updatedAt fields to be controlled by the server as well, we need to ensure the new object does not contain those values when it is submitted to DocumentDb:

function replaceItem(req, res, id) {
    driver.fetchDocument(refs.client, refs.table, id)
    .then((document) => {
        if (typeof document === 'undefined') {
            res.status(404).json({ 'error': 'Not Found' });
            return;
        }

        var item = req.body, version = new Buffer(document._etag).toString('base64');
        if (item.id !== id) {
            res.status(400).json({ 'error': 'Id does not match' });
            return;
        }

        if (req.headers.hasOwnProperty('if-match') && req.headers['if-match'] !== version) {
            res.status(412).json({ 'current': version, 'new': item.version, 'error': 'Version Mismatch' })
            return;
        }

        if (item.hasOwnProperty('version') && item.version !== version) {
            res.status(409).json({ 'current': version, 'new': item.version, 'error': 'Version Mismatch' });
            return;
        }

        // Delete the version and updatedAt fields from the doc before submitting
        delete item.updatedAt;
        delete item.version;
        driver.replaceDocument(refs.client, document._self, item)
        .then((updatedDocument) => {
            res.status(200).json(convertItem(updatedDocument));
            return;
        });
    })
    .catch((error) => {
        res.status(error.code).json(convertError(error));
    });
}

I’m using the same Base64 encoding for the etag in the current document to ensure I can do a proper match. I could get DocumentDb to do all this work for me – the options value in the underlying replaceDocument() call allows me to specify an If-Match. However, to do that, I would still need to fetch the record (since I need the document link), so I may as well do the checks myself. This also keeps some load off DocumentDb, which is helpful.
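For reference, had I let DocumentDb do the check, the call would look something like this (a sketch – note that DocumentDb compares the raw _etag, not my Base64-encoded version, and the driver above would need an extra options parameter for this to work):

// Hypothetical variant: let DocumentDb enforce the If-Match check itself
client.replaceDocument(document._self, item, {
    accessCondition: {
        type: 'IfMatch',
        condition: document._etag   // the raw etag, not the Base64 version
    }
}, callback);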

While this is almost there, there is one final item. If there is a conflict, the server version of the document should be returned. That means the 409 and 412 responses need to return convertItem(document) instead – a simple change.
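In other words:

if (req.headers.hasOwnProperty('if-match') && req.headers['if-match'] !== version) {
    // Return the server's copy of the record so the client can resolve the conflict
    res.status(412).json(convertItem(document));
    return;
}

if (item.hasOwnProperty('version') && item.version !== version) {
    res.status(409).json(convertItem(document));
    return;
}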

Deleting a Record

Deleting a record does not actually delete the record. Azure Mobile Apps uses soft delete (whereby the deleted flag is set to true). This means that I need to use replaceDocument() again for deletions:

function deleteItem(req, res, id) {
    driver.fetchDocument(refs.client, refs.table, id)
    .then((document) => {
        if (typeof document === 'undefined') {
            res.status(404).json({ 'error': 'Not Found' });
            return;
        }

        var item = convertItem(document);
        delete item.updatedAt;
        delete item.version;
        item.deleted = true;
        driver.replaceDocument(refs.client, document._self, item)
        .then((updatedDocument) => {
            res.status(200).json(convertItem(updatedDocument));
            return;
        });
    })
    .catch((error) => {
        res.status(error.code).json(convertError(error));
    });
}

This brings up a point about the getOneItem() method. It does not take into account the deleted flag. I need it to return 404 Not Found if the deleted flag is set:

function getOneItem(req, res, id) {
    driver.fetchDocument(refs.client, refs.table, id)
    .then((document) => {
        if (typeof document === 'undefined' || document.deleted === true)
            res.status(404).json({ 'error': 'Not Found' });
        else
            res.status(200).json(convertItem(document));
    })
    .catch((error) => {
        res.status(error.code).json(convertError(error));
    });
}

It’s a simple change, but important in getting the protocol right.

What’s left?

There is only one method I have not written yet, and it’s the biggest one of the set – the getAllItems() method. That’s because it deals with OData querying, which is no small task. I’ll be tackling that in my next article. Until then, get the current codebase at my GitHub repository.

Creating Documents in DocumentDB with Azure Functions HTTP API

Thus far in my story of implementing Azure Mobile Apps in a dynamic (consumption) plan of Azure Functions using DocumentDB, I’ve got the basic CRUD HTTP API stubbed out and the initialization of my DocumentDB collection done. It’s now time to work on the actual endpoints that my Azure Mobile Apps SDK will call. There are five methods to implement:

  • Insert
  • Update / Replace
  • Delete
  • Fetch a single record
  • Search

I’m going to do these in the order above. Before I do that, I need to take a look at what DocumentDB provides me. Azure Mobile Apps requires five fields to work properly:

  • id – a string (generally a GUID).
  • createdAt – the date the record was created, in ISO-8601 format.
  • updatedAt – the date the record was updated, in ISO-8601 format.
  • deleted – a boolean, if the record is deleted.
  • version – an opaque string for conflict resolution.

DocumentDB provides some of this for us:

  • id – a string (generally a GUID).
  • _ts – a POSIX / Unix timestamp (the number of seconds since the epoch) of when the record was last updated.
  • _etag – a checksum / version identifier.

When we create a record, we need to convert the document that DocumentDB returns to us into the format that Azure Mobile Apps expects. I use the following routine:

/**
 * Given an item from DocumentDB, convert it into something that the service can use
 * @param {object} item the original item
 * @return {object} the new item
 */
function convertItem(item) {
    if (item.hasOwnProperty('_ts')) {
        item.updatedAt = moment.unix(item._ts).toISOString();
        delete item._ts;
    } else {
        throw new Error('Invalid item - no _ts field');
    }

    if (item.hasOwnProperty('_etag')) {
        item.version = new Buffer(item._etag).toString('base64');
        delete item._etag;
    } else {
        throw new Error('Invalid item - no _etag field');
    }

    // Delete all the known fields from documentdb
    if (item.hasOwnProperty('_rid')) delete item._rid;
    if (item.hasOwnProperty('_self')) delete item._self;
    if (item.hasOwnProperty('_attachments')) delete item._attachments;

    return item;
}

I’m using the moment library to do date/time manipulation. This is a very solid library and well worth learning about. In addition to the convertItem() method, I also need something to convert the error values that come back from DocumentDB. They are not nicely formed, so some massaging is in order:

/**
 * Convert a DocumentDB error into something intelligible
 * @param {Error} error the error object
 * @return {object} the intelligible error object
 */
function convertError(error) {
    var body = JSON.parse(error.body);
    if (body.hasOwnProperty("message")) {
        var msg = body.message.replace(/^Message:\s+/, '').split(/\r\n/);
        body.errors = JSON.parse(msg[0]).Errors;

        var addl = msg[1].split(/,\s*/);
        addl.forEach((t) => {
            var tt = t.split(/:\s*/);
            tt[0] = tt[0].replace(/\s/, '').toLowerCase();
            body[tt[0]] = tt[1];
        });

        delete body.message;
    }

    return body;
}

I had to work through the error object several times, experimenting with actual responses, to come up with this routine. It seems right based on that experimentation; whether it holds up during normal usage remains to be seen.

I’ve already written the createDocument() method in the DocumentDB driver:

module.exports = {
    createDocument: function (client, collectionRef, docObject, callback) {
        client.createDocument(collectionRef._self, docObject, callback);
    }
};

This is then promisified using the Bluebird promise library. With this work done, my code for inserts becomes very simple:

function insertItem(req, res) {
    var item = req.body;

    item.createdAt = moment().toISOString();
    if (!item.hasOwnProperty('deleted')) item.deleted = false;

    driver.createDocument(refs.client, refs.table, item)
    .then((document) => {
        res.status(201).json(convertItem(document));
    })
    .catch((error) => {
        res.status(error.code).json(convertError(error));
    });
}

The item that we need to insert comes in on the body. We need to add the createdAt field and the deleted field (if it isn’t already set). Since this is an insert, we call createDocument() in the driver. If it succeeds, we return a 201 Created response with the new document (converted to the Azure Mobile Apps format). If not, we return the error code from DocumentDB together with the formatted error object.

We can test inserts with Postman. For example, here is a successful insert:

[Screenshot insert-1: a successful insert in Postman, returning 201 Created]

DocumentDB creates the id for me if it doesn’t exist. I convert the _ts and _etag fields to something more usable by the Azure Mobile Apps SDK on the way back to the client. If I copy the created object and push it again, I will get a conflict:

[Screenshot insert-2: the repeated insert in Postman, returning a conflict]

Notice how DocumentDB does all the work for me? All I need to do is make some adjustments to the output to get my insert operation working. I can use the Document Browser within the Azure Portal to look at the actual records.

In the next post, I’m going to move onto Update, Delete and Fetch all in one go.

Offline Sync with Azure Mobile Apps and Apache Cordova

In the past, I’ve introduced you to a TodoList application built in Apache Cordova so that it is available for iOS, Android or any other platform that Apache Cordova supports. Recently, we released a new beta for the Azure Mobile Apps Cordova SDK that supports offline sync – a feature we didn’t have before.

Underneath, the Cordova offline sync functionality uses SQLite – this means it isn’t an option at this point for HTML/JS applications. We’ll have to work out how to do this with IndexedDB or something similar, but for now it isn’t an option without a lot of custom work.

Let’s take a look at the differences.

Step 1: New variables

Just like other clients, I need a local store reference and a sync context that is used to keep track of the operational aspects for synchronization:

    var client,        // Connection to the Azure Mobile App backend
        store,         // Sqlite store to use for offline data sync
        syncContext,   // Offline data sync context
        todoItemTable; // Reference to a table endpoint on backend

Step 2: Initialization

All the initialization is done in the onDeviceReady() method. I have to define a model so that the SQLite database matches what is on the server:

function onDeviceReady() {

    // Create the connection to the backend
    client = new WindowsAzure.MobileServiceClient('https://yoursite.azurewebsites.net');

    // Set up the SQLite database
    store = new WindowsAzure.MobileServiceSqliteStore();

    // Define the table schema
    store.defineTable({
        name: 'todoitem',
        columnDefinitions: {
            // sync interface
            id: 'string',
            deleted: 'boolean',
            version: 'string',
            // Now for the model
            text: 'string',
            complete: 'boolean'
        }
    }).then(function () {
        // Initialize the sync context
        syncContext = client.getSyncContext();
        syncContext.pushHandler = {
            onConflict: function (serverRecord, clientRecord, pushError) {
                window.alert('TODO: onConflict');
            },
            onError: function(pushError) {
                window.alert('TODO: onError');
            }
        };
        return syncContext.initialize(store);
    }).then(function () {
        // I can now get a reference to the table
        todoItemTable = client.getSyncTable('todoitem');

        refreshData();

        $('#add-item').submit(addItemHandler);
        $('#refresh').on('click', refreshData);
    });
}

There are three distinct areas here, separated by promises. The first promise defines the tables. If you are using multiple tables, you must ensure that all the defineTable() promises are complete before progressing to the next section; you can do this with Promise.all(), as sketched below.
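For example, with a second (hypothetical) settings table, the setup might look like this sketch:

var tableDefinitions = [
    store.defineTable({
        name: 'todoitem',
        columnDefinitions: {
            id: 'string', deleted: 'boolean', version: 'string',
            text: 'string', complete: 'boolean'
        }
    }),
    store.defineTable({
        name: 'settings',   // a hypothetical second table
        columnDefinitions: { id: 'string', value: 'string' }
    })
];

Promise.all(tableDefinitions).then(function () {
    // Every table is defined - now it is safe to initialize the sync context
    syncContext = client.getSyncContext();
    return syncContext.initialize(store);
});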

The second section initializes the sync context. You need to define two handlers for the push handler – the conflict handler and the error handler. I’ll go into the details of a conflict handler at a later date, but this is definitely something you will want to spend some time thinking about. Do you want the last write in to be the winner, the current client edition to be the winner, or do you want to prompt the user on conflicts? It’s all possible.
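As a placeholder for that discussion, here is the shape of a “client wins” handler – a sketch only, since I’m assuming the pushError object exposes an update() method; check the SDK documentation before relying on it:

syncContext.pushHandler = {
    onConflict: function (serverRecord, clientRecord, pushError) {
        // "Client wins": adopt the server's version tag, then retry our change.
        clientRecord.version = serverRecord.version;
        return pushError.update(clientRecord);   // assumed API - verify in the SDK
    },
    onError: function (pushError) {
        console.error('Push failed: ', pushError);
    }
};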

Once I have created a sync context, I can get a reference to the table in the local SQLite database with getSyncTable(), which replaces the getTable() call. The rest of the code is identical – I refresh the data and add the event handlers.

Step 3: Adjusting the Refresh

In the past, refresh was just a query to the backend. Now I need to do something a bit different: when refresh is clicked, I want to run the push/pull cycle to synchronize the data.

function refreshData() {
    updateSummaryMessage('Loading data from Azure');
    syncContext.push().then(function () {
        return syncContext.pull(new WindowsAzure.Query('todoitem'));
    }).then(function () {
        todoItemTable
            .where({ complete: false })
            .read()
            .then(createTodoItemList, handleError);
    });
}

Just like the initialization, the SDK uses promises to proceed asynchronously. First push (which resolves as a promise), then pull (which also resolves as a promise) and finally you do EXACTLY THE SAME THING AS BEFORE – you query the table, read the results and then build the todo list. Seriously – this bit really didn’t change.

That means you can add offline to your app without changing your existing code – just add the initialization and something to trigger the push/pull.

Wrap Up

This is still a beta, which means a work-in-progress. Feel free to try it out and give us feedback. You can file issues and ideas at our GitHub repository.

Cross-posted to the Azure App Service Team Blog.

Apache Cordova, Azure Mobile Apps and CORS

I am continuing my discovery of Apache Cordova to integrate it into Azure Mobile Apps. This post is about fixing a relatively minor issue: Cross-Origin Resource Sharing, or CORS. One might think this is not an issue with Apache Cordova. After all, a mobile app should be able to access the remote data store with no problem, so where is the CORS issue? It occurs in two situations – when using cordova run browser and when using Ripple. In both cases, Apache Cordova starts a server on localhost to serve the pages. The client (the browser session) is pulling the pages from localhost but retrieving the data from your Azure Mobile Apps service. This hits CORS.

The Server

I’ve got a simple server that is a copy of the basic app from the azure-mobile-apps samples directory. My server looks like this:

var express = require('express')(),
    morgan = require('morgan'),
    azureMobileApps = require('azure-mobile-apps');

express.use(morgan('combined'));
var app = new azureMobileApps();

app.tables.import('./tables');
app.tables.initialize().then(function () {
    express.use(app);
    express.listen(process.env.PORT || 3000);
});

I have the following code in the tables/TodoList.js:

var table = require('azure-mobile-apps').table();

table.dynamicSchema = true;

module.exports = table;

You can find the complete code sample at my GitHub repository. I’ve deployed this to an Azure App Service using continuous deployment, linking my App Service to the GitHub repository.

The Client

I’ve adjusted my src/lib/storage-manager.js file as follows:

import uuid from 'uuid';
/* global OData */

export default class Store {
    constructor() {
        console.info('Initializing Storage Manager');

        // We need to add the ZUMO-API-VERSION to the headers of the OData request
        this._defaultHttpClient = OData.defaultHttpClient;
        OData.defaultHttpClient = {
            request: (request, success, error) => {
                request.headers['ZUMO-API-VERSION'] = '2.0.0';
                this._defaultHttpClient.request(request, success, error);
            }
        };

        this._service = 'https://ahall-todo-list.azurewebsites.net';
        this._store = `${this._service}/tables/TodoList`;
    }

    /**
     * Read some records based on the query.  The elements must match
     * the query
     * @method read
     * @param {object} query the things to match
     * @return {Promise} - resolve(items), reject(error)
     */
    read(query) {
        console.log('[storage-manager] read query=', query);

        var promise = new Promise((resolve, reject) => {
            /* odata.read(url, success(data,response), error(error), handler, httpClient, metadata); */

            var successFn = (data, response) => {
                console.info('[storage-manager] read data=', data);
                console.info('[storage-manager] read response=', response);
                resolve(data);
            };
            var errorFn = (error) => {
                console.error('[storage-manager] read error=', error);
                reject(error);
            };

            OData.read(this._store, successFn, errorFn);
        });
        return promise;
    }
}

I’m using DataJS to do the actual call. Note that I’m not going to complete the call in this blog post – I’m just going to get past CORS. I’ve added the package via npm:

npm install --save datajs

I’ve also got a Gulp rule to copy the datajs library into my www/lib directory. Check out the GitHub repository to see exactly how that works.
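The rule is a simple copy – something like this sketch (the path inside the datajs package is an assumption):

var gulp = require('gulp');

// Copy the datajs distribution files into the Cordova www tree
gulp.task('copy:datajs', function () {
    return gulp.src('node_modules/datajs/lib/*.js')
        .pipe(gulp.dest('www/lib'));
});

I need to include the datajs library into my HTML page as well – that’s just a script reference. Now, let’s run the app and check out the console: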

[Screenshot cors-1: the browser console showing the CORS failure]

The Content Security Policy

There are a couple of places in Apache Cordova where you have to configure content security policies. The content security policy tells Apache Cordova where it can fetch data from. This is configured in two places – one of which is already done for you.

In config.xml, you will note the following line:

    <access origin="*" />

This tells Apache Cordova that the app should be able to access data from anywhere. But that isn’t the only place. In your index.html file – and any other HTML file you reference, there is a Content-Security-Policy tag:

        <meta http-equiv="Content-Security-Policy" content="default-src 'self' data: gap: https://ssl.gstatic.com 'unsafe-eval'; style-src 'self' 'unsafe-inline'; media-src *">

Let’s decode this into friendlier terms:

  • default-src
    • ‘self’
    • data: = Allow loading of local base64 values
    • gap: = Allow JS -> native communication on iOS
    • https://ssl.gstatic.com = Allow Talkback on Android
    • ‘unsafe-eval’ = Allow eval() and similar dynamic JavaScript
  • style-src
    • ‘self’ plus inline styles (‘unsafe-inline’)
  • media-src
    • Anywhere

You can read more about the Content-Security-Policy – it’s very flexible. For now, I need to add my URL to the default-src:

        <meta http-equiv="Content-Security-Policy" content="default-src 'self' https://ahall-todo-list.azurewebsites.net data: gap: https://ssl.gstatic.com 'unsafe-eval'; style-src 'self' 'unsafe-inline'; media-src *">

Handling CORS in Azure App Service

Before this will work, you also need to tell the server to trust your local instance. Log onto https://portal.azure.com and open up your web site. Click on Settings and find the CORS setting:

[Screenshot cors-2: the CORS settings entry in the Azure Portal]

You can enter any URLs you want here. When I run cordova run browser, Apache Cordova will start a server at http://localhost:8000:

[Screenshot cors-3: http://localhost:8000 added to the allowed origins]

Don’t forget to save your changes once you are done.

Wrapping Up

Once you have made these changes, you can try the project again. This time the pre-flight connection will succeed. The actual transfer will succeed too, but the data decoding will not work yet – that’s because I haven’t finished writing the code that uses datajs. That, however, is a blog post for another time.

A lap around Azure Mobile Apps Node SDK Preview

Microsoft Azure App Service recently released a slew of mobile announcements – PowerApps being the big one. Of lesser note, the Azure Mobile Apps Node SDK hit a milestone and entered preview. It’s been a while since I lapped around the SDK, so let’s take a look at some of the breaking changes.

The SQL Azure data provider is now called mssql

There was one provider in the alpha version of the SDK – the sql driver, which used mssql to access a SQL Azure instance or an on-premises SQL Server instance. However, there are plans for more providers, so support was needed for multiple providers. If you are specifying a SQL Azure connection string via the Azure Portal, nothing changes. For local development, however, it’s fairly common to use an azureMobile.js file to specify the data connection. Something like this:

module.exports = {
    logging: {
        level: 'silly'
    },
    data: {
        provider: 'mssql',
        server: 'localhost',
        database: 'sample',
        user: 'testuser',
        password: 'testpass'
    }
};

Note the provider line. That used to be just ‘sql’ – now it’s ‘mssql’. Again, this is all to prepare for multiple providers, so it’s a good move in my book.

You should initialize the database before listening

The common server.js file would just import tables and then start listening. This caused a significant delay on first access, along with potential failures because the database was not set up properly. To combat this, a Promise structure was introduced that allows you to defer listening for connections until after the database has been initialized. You use it like this:

// Import the files from the tables directory to configure the /tables API
mobile.tables.import('./tables');

// Initialize the database before listening for incoming requests
// The tables.initialize() method does the initialization asynchronously
// and returns a Promise.
mobile.tables.initialize()
  .then(function () {
    app.use(mobile);    // Register the Azure Mobile Apps middleware
    app.listen(process.env.PORT || 3000);   // Listen for requests
  });

If a table has dynamic schema, then the initialize() call resolves immediately, since the database is managed by the SDK. If you have set up static tables, then those tables are created before the server starts listening for connections, so during startup your users see a connection timeout instead of a 500 server failure – probably a better experience.

You can seed tables with static data

There are cases when you want data to be “already there”. An example of this is in testing – you want to test the GET response and ensure specific records are there for you. Another example is if you want to have a read-only table for settings. To assist with this, you can add a seed property to the table definition. Something like this:

var table = require('azure-mobile-apps').table();
table.columns = {
	"text": "string",
	"complete": "boolean"
};
table.dynamicSchema = false;

// Seed data into the table
table.seed = [
	{ text: "Example 1", complete: true },
	{ text: "Example 2", complete: false }
];

module.exports = table;

The seed property is defined as an array of objects. Data seeding happens during the initialize() call, so you need to ensure you call initialize() if you want to seed data.

There is a new getIdentity() API

Azure App Service introduced a new app-specific authentication mechanism during the November 2015 update. The authentication gateway is now deprecated. Along with that change is a new getIdentity() call in the Node SDK that calls the authentication backend to retrieve the claims that you defined. You can use it like this in a table definition file:

table.access = 'authenticated';

table.read(function (context) {
    return context.user.getIdentity()
        .then(function (identity) {
            logger.info('table.read identity = ', identity);
            return context;
        })
        .then(function (context) {
            return context.execute();
        })
        .catch(function (error) {
            logger.error('Error in table.read: ', error);
        });
});

The getIdentity() call returns a Promise. Once the promise is resolved, the identity is available in the resolution. You can then adjust the context (just like the personal-table sample) with any information in the claims that you registered. Note that I’m chaining the promises together – when getIdentity() resolves, I get an identity. I then adjust the context and return the adjusted context, which is used asynchronously to execute the query and return the result of that.

You can disable access to table operations

Ever wanted to have a read-only table? How about a logging table that you can add to, but you can’t update/delete? All of those are possible by adjusting permissions on the table.

// Read-only table - turn off write operations
table.insert.access = 'disabled';
table.update.access = 'disabled';
table.delete.access = 'disabled';

You can also make it so that reads can be unauthenticated while writes require authentication. The values for the access property are ‘anonymous’, ‘authenticated’ and ‘disabled’. Set a default for the whole table and override individual operations, if you like.
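For example, a sketch mixing the two (anonymous reads, authenticated writes):

// Require authentication by default, but let anyone read
table.access = 'authenticated';
table.read.access = 'anonymous';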

There are lots of samples

The team is dedicated to providing lots of sample variations to show off specific situations. You can still check out the canonical todo sample – that has been documented to the point where there is more documentation than code. However, there are lots of additional samples now.

CORS

CORS is a difficult subject right now as Azure App Service has just rolled out a change that delegates the handling of CORS to App Service. This means that you only care about CORS in your code when you are running the server locally. In this case, set skipVersionCheck to true in your azureMobile.js, like this:

module.exports = {
  skipVersionCheck: true,
  cors: {
    origins: [ '*' ]
  }
};

This will enable CORS for development purposes.

Need more information?

As yet another avenue for asking questions, you can log onto the Gitter channel and have a direct chat with the developers. That’s in addition to the Azure Forums, Stack Overflow and GitHub Issues. We aren’t starved of methods to discuss problems with the team!

Did I mention documentation? No? Well, here you are – the API documentation is published as well.