30 Days of Zumo.v2 (Azure Mobile Apps): Day 26: Relationship Advice

No, I’m not going to fix the relationship with your significant other. One of the common questions I get asked is about relationships between tables. The basic advice is easy – don’t do it. The actual relationship advice is a little more complex than that and today I’m going to cover what is involved in setting up table relationships, what to do and what to avoid.

The Node.js / Easy Tables Version

This one is easy. The Node.js Server SDK for Azure Mobile Apps does not support relationships in any way, shape or form. If you want to use relationships, use an ASP.NET backend.

The ASP.NET Version

There are two relationship types that are supported:

  1. A one-way 1:1 relationship
  2. A one-way 1:Many relationship

One-way relationships are where a table has a foreign key for another table, but the reverse is not true. Do not use the normal “two-way” relationships, as you will run into circular serialization problems.

Both are configured with Entity Framework using foreign keys. You need to ensure that you set up your data transfer objects first. Let’s take an example: say I have my normal TodoItem table, but I want to add a tag to each record. The data transfer objects are configured thus:

using Microsoft.Azure.Mobile.Server;
using System.ComponentModel.DataAnnotations.Schema;

namespace Mobile.Backend.DataObjects
{
    public class Tag : EntityData
    {
        public string TagName { get; set; }
    }

    public class TodoItem : EntityData
    {
        public string Text { get; set; }

        public bool Complete { get; set; }

        #region Relationships
        public string TagId { get; set; }

        [ForeignKey("TagId")]
        public Tag Tag { get; set; }
        #endregion
    }
}

The ForeignKey annotation is the key to this – it sets up the linkage between the two tables. Note that the TagId references the Id of the Tag table which is, of course, a GUID in string form when you are in an Azure Mobile Apps DTO. I still need to add a table controller for the Tag entity, and add a DbSet to the MobileServiceContext – so now I have two controllers.
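For completeness, the context changes look something like this. This is a sketch only – the MobileServiceContext class name and DbSet property names are assumed to match the standard ASP.NET server template:

```csharp
using System.Data.Entity;
using Mobile.Backend.DataObjects;

namespace Mobile.Backend.Models
{
    public class MobileServiceContext : DbContext
    {
        // Existing table
        public DbSet<TodoItem> TodoItems { get; set; }

        // New: the Tag entity needs its own DbSet (and its own TagController)
        public DbSet<Tag> Tags { get; set; }
    }
}
```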

The Client Side

On the client side, I need to be able to add a Tag to the TodoItem. The Tag must exist before the TodoItem exists. Let’s take a look at one of the TodoItem records when I request it through Postman – here is the response:

[
  {
    "deleted": false,
    "updatedAt": "2016-05-25T03:01:53.33Z",
    "createdAt": "2016-05-25T03:01:53.33Z",
    "version": "AAAAAAAAB9Q=",
    "id": "183df2a2-abed-4f60-bde5-ebb157343462",
    "tagId": "e6782065-6ec4-4042-a0b7-3dcfc1c334a0",
    "complete": false,
    "text": "First item"
  },
  {
    "deleted": false,
    "updatedAt": "2016-05-25T03:01:53.345Z",
    "createdAt": "2016-05-25T03:01:53.345Z",
    "version": "AAAAAAAAB9Y=",
    "id": "e1291e16-c8e1-440f-ae6e-4ec9835302c4",
    "tagId": null,
    "complete": false,
    "text": "Second item"
  }
]

My first record has a linkage to a specific tag – this is shown via the tagId being filled in. My second record does not have a linkage to a specific tag. On the client, you will need to do the join of the tables yourself. The client does not know about relationships. Each table is independent. This means that your model (assuming a C#/.NET client, like UWP or Xamarin) should look like this:

using ClientApp.Helpers;

namespace ClientApp.Models
{
    public class Tag : EntityData
    {
        public string TagName { get; set; }
    }

    public class TodoItem : EntityData
    {
        public string Text { get; set; }
        public bool Complete { get; set; }

        public string TagId { get; set; }
    }
}

In general, things will just work. However, that’s because you will naturally create a Tag before using it. You must ensure that you insert the Tag before inserting the TodoItem that references it. Again, you do the join in code – don’t expect a Tag item to magically be populated in your TodoItem as a linked reference – it won’t happen.
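As a sketch of that ordering on the client (an illustrative fragment; `client` is assumed to be an already-configured MobileServiceClient):

```csharp
// Insert the Tag first - the server holds the foreign key, so the Tag
// must exist before any TodoItem can reference it.
var tagTable = client.GetTable<Tag>();
var todoTable = client.GetTable<TodoItem>();

var urgent = new Tag { TagName = "Urgent" };
await tagTable.InsertAsync(urgent);              // Id is filled in by the server

var item = new TodoItem { Text = "First item", TagId = urgent.Id };
await todoTable.InsertAsync(item);

// Reading: do the join yourself - TodoItem.TagId is just a string.
var items = await todoTable.ToListAsync();
var tagsById = (await tagTable.ToListAsync()).ToDictionary(t => t.Id);
foreach (var i in items.Where(i => i.TagId != null))
{
    var tag = tagsById[i.TagId];                 // manual client-side join
}
```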

1:Many Relationships

You can also do one-way 1:Many relationships with the ASP.NET Server SDK. An example of a one-way 1:Many relationship is that my TodoItem may have multiple tags. As long as I am not doing the reverse (give me all todo items that use this tag, which would indicate a two-way relationship), I will be OK. In this case, the Tags would be a collection. Let’s take a look at the new models on the server side:

using Microsoft.Azure.Mobile.Server;
using System.Collections.Generic;

namespace Mobile.Backend.DataObjects
{
    public class Tag : EntityData
    {
        public string TagName { get; set; }
    }

    public class TodoItem : EntityData
    {
        public string Text { get; set; }
        public bool Complete { get; set; }
        public virtual ICollection<Tag> Tags { get; set; }
    }
}

Sending the same query as before will result in the following:

[
  {
    "deleted": false,
    "updatedAt": "2016-05-25T21:17:05.034Z",
    "createdAt": "2016-05-25T21:17:05.03Z",
    "version": "AAAAAAAAB9I=",
    "id": "1b635919-9880-4353-a4c2-f6c346a124ff",
    "complete": false,
    "text": "First item"
  },
  {
    "deleted": false,
    "updatedAt": "2016-05-25T21:17:05.106Z",
    "createdAt": "2016-05-25T21:17:05.106Z",
    "version": "AAAAAAAAB9g=",
    "id": "25bcc657-6299-4634-9ce0-aa523b89227b",
    "complete": false,
    "text": "Second item"
  }
]

Note that there are no tags. OData requires you to send the $expand parameter to expand the collection – something like https://localhost:14683/tables/TodoItem?$expand=Tags. The Azure Mobile Apps Client SDK (currently) doesn’t support $expand, so I have to do something on the server to support this. The first thing to do is create a new attribute, [ExpandProperty]:

using System;
using System.Linq;
using System.Web.Http.Controllers;
using System.Web.Http.Filters;

namespace Mobile.Backend.Helpers
{
    [AttributeUsage(AttributeTargets.Method, AllowMultiple = true)]
    public class ExpandPropertyAttribute : ActionFilterAttribute
    {
        string propertyName;

        public ExpandPropertyAttribute(string propertyName)
        {
            this.propertyName = propertyName;
        }

        public override void OnActionExecuting(HttpActionContext actionContext)
        {
            base.OnActionExecuting(actionContext);
            var uriBuilder = new UriBuilder(actionContext.Request.RequestUri);
            var queryParams = uriBuilder.Query.TrimStart('?').Split(new[] { '&' }, StringSplitOptions.RemoveEmptyEntries).ToList();
            int expandIndex = -1;
            for (var i = 0; i < queryParams.Count; i++)
            {
                if (queryParams[i].StartsWith("$expand", StringComparison.Ordinal))
                {
                    expandIndex = i;
                    break;
                }
            }

            if (expandIndex < 0)
            {
                queryParams.Add("$expand=" + this.propertyName);
            }
            else
            {
                queryParams[expandIndex] = queryParams[expandIndex] + "," + propertyName;
            }

            uriBuilder.Query = string.Join("&", queryParams);
            actionContext.Request.RequestUri = uriBuilder.Uri;
        }
    }
}

When applied to a table controller method, this will automatically add the $expand query parameter to the incoming request. Use it like this on the TodoItem controller:

        // GET tables/TodoItem
        [ExpandProperty("Tags")]
        public IQueryable<TodoItem> GetAllTodoItems() => Query();

        // GET tables/TodoItem/48D68C86-6EA6-4C25-AA33-223FC9A27959
        [ExpandProperty("Tags")]
        public SingleResult<TodoItem> GetTodoItem(string id) => Lookup(id);

You can specify this multiple times so that you can expand multiple properties. You generally only need to add this to the “Read” methods – GetAll and GetSingle. Now, when I do the same Postman query as before, I get the following:

[
  {
    "deleted": false,
    "updatedAt": "2016-05-25T21:17:05.034Z",
    "createdAt": "2016-05-25T21:17:05.03Z",
    "version": "AAAAAAAAB9I=",
    "id": "1b635919-9880-4353-a4c2-f6c346a124ff",
    "complete": false,
    "text": "First item",
    "tags": [
      {
        "tagName": "Urgent",
        "id": "3257f559-1c09-40a8-a633-df60f0970ac7",
        "version": "AAAAAAAAB9Q=",
        "createdAt": "2016-05-25T21:17:05.059Z",
        "updatedAt": "2016-05-25T21:17:05.064Z",
        "deleted": false
      },
      {
        "tagName": "Question",
        "id": "7c2f4467-973f-464e-88a8-584c7becd37c",
        "version": "AAAAAAAAB9Y=",
        "createdAt": "2016-05-25T21:17:05.091Z",
        "updatedAt": "2016-05-25T21:17:05.091Z",
        "deleted": false
      }
    ]
  },
  {
    "deleted": false,
    "updatedAt": "2016-05-25T21:17:05.106Z",
    "createdAt": "2016-05-25T21:17:05.106Z",
    "version": "AAAAAAAAB9g=",
    "id": "25bcc657-6299-4634-9ce0-aa523b89227b",
    "complete": false,
    "text": "Second item",
    "tags": []
  }
]

Note that the tag result is embedded in the result. For client platforms that support models, you can now do something similar to the following:

using ClientApp.Helpers;
using System.Collections.Generic;

namespace ClientApp.Models
{
    public class Tag : EntityData
    {
        public string TagName { get; set; }
    }

    public class TodoItem : EntityData
    {
        public string Text { get; set; }
        public bool Complete { get; set; }

        public List<Tag> Tags { get; set; }
    }
}

The tags will be included in the TodoItem. The data is denormalized. Say you have a tag called “Urgent” and it is used six times – there will be six copies of the “Urgent” tag, not one copy with references. While this is not a problem for most applications, if your data set is large you will want to handle this differently. That may mean using a TagId list instead and then linking the models yourself when you do a refresh, for example.
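One possible shape for that alternative – a sketch only, with hypothetical names, assuming Json.NET on the client – keeps the ids on the model and re-links after each refresh:

```csharp
// Client-side model variant: carry tag ids, not embedded Tag copies.
public class TodoItem : EntityData
{
    public string Text { get; set; }
    public bool Complete { get; set; }
    public List<string> TagIds { get; set; }

    [JsonIgnore]                        // local only - never serialized
    public List<Tag> Tags { get; set; }
}

// After a refresh, point each TodoItem at the single shared Tag instance.
void LinkTags(IEnumerable<TodoItem> items, IEnumerable<Tag> tags)
{
    var tagsById = tags.ToDictionary(t => t.Id);
    foreach (var item in items)
    {
        item.Tags = item.TagIds.Select(id => tagsById[id]).ToList();
    }
}
```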

Wrap Up

I hope you take away from this a few points:

  1. Relationships are strictly an ASP.NET feature.
  2. Relationships are hard and you must work at them to ensure the proper outcome.
  3. 1:1 and 1:Many relationships are doable if they are one-way (i.e. table1 refers to table2, but the reverse is not true)
  4. You have to deal with expansion in the 1:Many case, and data is denormalized (which may not be desirable)
  5. Avoid Many:Many relationships
  6. You really need to understand how Entity Framework implements these relationships

The final thing to remember is that the client does not know about relationships. That is a SQL concept, and SQL is spoken only on the server. My advice is to avoid relationships on your mobile backend – use projection mechanisms (such as SQL Views and/or automatic expansion with the ExpandProperty attribute) to denormalize the data before it gets to the client.

You can see the 1:Many code for the backend in the relationship-backend solution on my GitHub Repository.

30 Days of Zumo.v2 (Azure Mobile Apps): Day 19 – ASP.NET Table Controllers

I’ve got a pretty good table controller going from my last article now. However, there are some features of ASP.NET Table Controllers that I am not using yet. I’d like to cover those as a miscellany of features that don’t really have any association with each other – you need to consider them individually and decide if your situation warrants their use.

Soft Delete

I’ve mentioned Soft Delete before with respect to offline sync. Soft Delete allows you to notify your clients that a record has been deleted. When you delete a record, it’s just marked as deleted. That allows other clients to download the new state and update their offline sync cache. At some later date, you can remove the deleted records through another process. I’ll be covering that process in a future posting.

Soft Delete is enabled on a per-table basis in the Initialize() method:

        protected override void Initialize(HttpControllerContext controllerContext)
        {
            base.Initialize(controllerContext);
            MyDbContext context = new MyDbContext();
            DomainManager = new EntityDomainManager<TodoItem>(context, Request, enableSoftDelete: true);
        }

The addition to the initialization of the EntityDomainManager is all you need to enable soft delete. The entity domain manager that is used to manage all the requests on this table handles the actual implementation of soft delete.

Seeding of Data

Let’s say you want to initialize a table with some content. The database initializer has a method called Seed() that is called when the database is created for the first time. It does not run again on application restart, nor during an EF migration, so don’t rely on it for those scenarios. You can find the database initializer in the App_Start/AzureMobile.cs file:

    public class AzureMobileInitializer : CreateDatabaseIfNotExists<MyDbContext>
    {
        protected override void Seed(MyDbContext context)
        {
            // You can seed your database here
#if SEED_DATA
            List<TodoItem> todoItems = new List<TodoItem>
            {
                new TodoItem { Id = Guid.NewGuid().ToString(), Text = "First item", Complete = false },
                new TodoItem { Id = Guid.NewGuid().ToString(), Text = "Second item", Complete = false },
            };

            foreach (TodoItem todoItem in todoItems)
            {
                context.Set<TodoItem>().Add(todoItem);
            }
#endif

            base.Seed(context);
        }
    }

Personally, I dislike the seeding of data and try to avoid it, but it is useful in certain circumstances.

Debugging

The Node.js server SDK has a nice logging mechanism that logs the actual SQL statements that are being made. ASP.NET doesn’t do this out of the box. Fortunately, ASP.NET with SQL Azure is based on Entity Framework, so I can set up logging using the Entity Framework technique. Put this in your DbContext constructor (which, for me, is in Models/MyDbContext.cs):

        private const string connectionStringName = "Name=MS_TableConnectionString";

        public MyDbContext() : base(connectionStringName)
        {
            this.Database.Log = (message) => Debug.Write(message);
        }

You will get output in the Log Stream (which is under the Tools menu for your App Service in the Azure Portal). Make sure you publish a “Debug” version of your app.

Did You Know? You can turn on diagnostic logging and view the log stream directly from Visual Studio. Open the Server Explorer, expand the Azure -> App Service node, find your mobile app and right-click on it. Click on View Settings to update the web server logging, and View Streaming Logs to view the logs. The logs appear in the Output window.

There are also various techniques to connecting a remote debugger from Visual Studio to your site. If you want to do this, then check out the Azure Article on the subject.

One note is that model differences – where the DTO on the server and the model on the client app differ – matter. If you do have model differences, the SQL commands on the backend won’t even execute. In these cases, you will want to capture the response output on the server or check out the response on the client. I use the following in my Post handler, for example:

        // POST tables/TodoItem
        public async Task<IHttpActionResult> PostTodoItem(TodoItem item)
        {
            Debug.WriteLine("POST tables/TodoItem");
            var emailAddr = await GetEmailAddress();
            Debug.WriteLine($"Email Address = {emailAddr}");
            item.UserId = emailAddr;
            Debug.WriteLine($"Item = {item}");
            try
            {
                TodoItem current = await InsertAsync(item);
                Debug.WriteLine($"Updated Item = {current}");
                return CreatedAtRoute("Tables", new { id = current.Id }, current);
            }
            catch (HttpResponseException ex)
            {
                Debug.WriteLine($"Exception: {ex}");
                Debug.WriteLine($"Response: {ex.Response}");
                string content = await ex.Response.Content.ReadAsStringAsync();
                Debug.WriteLine($"Response Content: {content}");
                throw;   // rethrow without resetting the stack trace
            }
        }

Using Existing Tables

This is a big and – to my mind – fairly complex topic, so be aware that I consider this advanced. Let’s say you have an existing database with an existing table, and for some reason you don’t have the ability to adjust the table, so you can’t add the system columns to it. How do you deal with this? There is a way. In outline, you create another table that holds the system columns, then create an updatable view that acts like a table but isn’t.

Step 1: Don’t let the Mobile App update the Database Schema

When you go this route, you are responsible for updating the database. The Mobile App will no longer do changes for you. In your App_Start/AzureMobile.cs Startup class, do the following change:

            // Initialize the database with EF Code First
            // Database.SetInitializer(new AzureMobileInitializer());
            Database.SetInitializer<MyDbContext>(null);

The original line is above the new line and commented out.

Step 2: Create the System Columns Table

Let’s say you have a table called [myapp].[TodoItem] which is defined like this:

CREATE TABLE [myapp].[TodoItem] (
  [id]       BIGINT NOT NULL IDENTITY(1,1) PRIMARY KEY,
  [UserId]   NVARCHAR(255) NOT NULL,
  [Title]     NVARCHAR(255) NOT NULL,
  [Complete] BIT
)
GO

I can create a system properties table like this:

CREATE TABLE [mobile].[TodoItem_SystemProps] (
  [id]        NVARCHAR(255) CONSTRAINT [DF_todoitem_id] DEFAULT (CONVERT([NVARCHAR](255),NEWID(),(0))) NOT NULL,
  [createdAt] DATETIMEOFFSET(7) CONSTRAINT [DF_todoitem_createdAt] DEFAULT (CONVERT([DATETIMEOFFSET](7),SYSUTCDATETIME(),(0))) NOT NULL,
  [updatedAt] DATETIMEOFFSET(7) NULL,
  [version]   ROWVERSION NOT NULL,
  [deleted]   BIT DEFAULT ((0)) NOT NULL,
  [item_id]   BIGINT NOT NULL,
  PRIMARY KEY NONCLUSTERED ([id] ASC)
)
GO

Step 3: Create a Mobile SQL View

Here is my view:

CREATE VIEW [mobile].[TodoItem] AS
SELECT
    [mobile].[TodoItem_SystemProps].[id],
    [mobile].[TodoItem_SystemProps].[createdAt],
    [mobile].[TodoItem_SystemProps].[updatedAt],
    [mobile].[TodoItem_SystemProps].[version],
    [mobile].[TodoItem_SystemProps].[deleted],
    [mobile].[TodoItem_SystemProps].[item_id],
    [myapp].[TodoItem].[UserId],
    [myapp].[TodoItem].[Title],
    [myapp].[TodoItem].[Complete]
FROM
    [myapp].[TodoItem],
    [mobile].[TodoItem_SystemProps]
WHERE
    [myapp].[TodoItem].[id] = [mobile].[TodoItem_SystemProps].[item_id]
GO

This produces a composite read-only view of the original table with the system properties. However, it doesn’t quite work yet.

Step 4: Handle Updates to the Original Table

I need to wire up two specific areas. The first: when the original data is updated, the system properties table must also be updated. This is handled by a series of three triggers.

CREATE TRIGGER
    [myapp].[TRG_TodoItem_Insert]
ON
    [myapp].[TodoItem]
AFTER
    INSERT
AS BEGIN
    DECLARE @itemid AS BIGINT
    SELECT @itemid = inserted.id FROM inserted
    INSERT INTO [mobile].[TodoItem_SystemProps] ([item_id], [updatedAt]) VALUES (@itemid, CONVERT(DATETIMEOFFSET(7), SYSUTCDATETIME()));
END
GO

CREATE TRIGGER
    [myapp].[TRG_TodoItem_Update]
ON
    [myapp].[TodoItem]
AFTER
    UPDATE
AS BEGIN
    UPDATE
        [mobile].[TodoItem_SystemProps]
    SET
        [updatedAt] = CONVERT(DATETIMEOFFSET(7), SYSUTCDATETIME())
    FROM
        INSERTED
    WHERE
        INSERTED.id = [mobile].[TodoItem_SystemProps].[item_id]
END
GO

CREATE TRIGGER
    [myapp].[TRG_TodoItem_Delete]
ON
    [myapp].[TodoItem]
AFTER
    DELETE
AS BEGIN
    DECLARE @itemid AS BIGINT
    SELECT @itemid = deleted.id from deleted
    DELETE FROM [mobile].[TodoItem_SystemProps] WHERE [item_id] = @itemid
END
GO

Each trigger handles one case: insert, update, or delete. The primary job of the update trigger is to set the updatedAt time to the current time; insertion creates a new system properties record and deletion deletes it. Two important notes: these triggers assume single-row operations (which is what Azure Mobile Apps issues), and deletion does not handle soft delete – something else will be needed for when a record is deleted in the original table.

Step 5: Handle Updates from Azure Mobile Apps

The next step is to ensure that when an update (or an insert or a delete) comes in from Azure Mobile Apps, the right thing happens in the tables. For an insert (or delete), this means creating (or deleting) the item in the original table – not the system props record, which is handled by the triggers above. For updates, I need to update just the original table – not the system props (except for soft delete).

CREATE TRIGGER
    [mobile].[TRG_Mobile_TodoItem_Insert]
ON
    [mobile].[TodoItem]
INSTEAD OF
    INSERT
AS BEGIN
    DECLARE @userid AS NVARCHAR(255)
    SELECT @userid = inserted.UserId FROM inserted
    DECLARE @title AS NVARCHAR(255)
    SELECT @title = inserted.Title FROM inserted
    DECLARE @complete AS BIT
    SELECT @complete = inserted.Complete FROM inserted

    INSERT INTO
        [myapp].[TodoItem] ([UserId], [Title], [Complete])
    VALUES
        (@userid, @title, @complete)

    IF UPDATE(Id) BEGIN
        DECLARE @itemid AS BIGINT
        SELECT @itemid = @@identity
        DECLARE @id AS NVARCHAR(255)
        SELECT @id = inserted.Id FROM inserted
        UPDATE [mobile].[TodoItem_SystemProps] SET [Id] = @id WHERE [item_id] = @itemid
    END
END;
GO

CREATE TRIGGER
    [mobile].[TRG_Mobile_TodoItem_Update]
ON
    [mobile].[TodoItem]
INSTEAD OF
    UPDATE
AS BEGIN
    DECLARE @id AS NVARCHAR(255)
    SELECT @id = inserted.id FROM inserted
    DECLARE @itemid AS BIGINT
    SELECT @itemid = [item_id] FROM [mobile].[TodoItem_SystemProps] WHERE [id] = @id

    IF UPDATE(UserId) BEGIN
        DECLARE @userid AS NVARCHAR(255)
        SELECT @userid = inserted.UserId FROM inserted
        UPDATE [myapp].[TodoItem] SET [UserId] = @userid WHERE [id] = @itemid
    END
    IF UPDATE(Title) BEGIN
        DECLARE @title AS NVARCHAR(255)
        SELECT @title = inserted.Title FROM inserted
        UPDATE [myapp].[TodoItem] SET [Title] = @title WHERE [id] = @itemid
    END
    IF UPDATE(Complete) BEGIN
        DECLARE @complete AS BIT
        SELECT @complete = inserted.Complete FROM inserted
        UPDATE [myapp].[TodoItem] SET [Complete] = @complete WHERE [id] = @itemid
    END
    IF UPDATE(deleted) BEGIN
        DECLARE @deleted AS BIT
        SELECT @deleted = inserted.deleted FROM inserted
        UPDATE [mobile].[TodoItem_SystemProps] SET [deleted] = @deleted WHERE [item_id] = @itemid
    END
END
GO

CREATE TRIGGER
    [mobile].[TRG_Mobile_TodoItem_Delete]
ON
    [mobile].[TodoItem]
INSTEAD OF
    DELETE
AS BEGIN
    DECLARE @id AS NVARCHAR(255)
    SELECT @id = deleted.id FROM deleted
    DECLARE @itemid AS BIGINT
    SELECT @itemid = [item_id] FROM [mobile].[TodoItem_SystemProps] WHERE [id] = @id

    DELETE FROM [myapp].[TodoItem] WHERE [id] = @itemid
    DELETE FROM [mobile].[TodoItem_SystemProps] WHERE [id] = @id
END
GO

A SQL view over a join like this one is read-only. By using INSTEAD OF triggers, you can make it adjust the underlying tables that are used to construct the view, and hence make it read-write. You can read more about creating triggers on MSDN.

There are similar things you can do for multi-table views. Let’s say you have a mobile view comprised of a customer and an order. When you view the order, you want to combine the customer data with the order; when the order is removed from the mobile device, the customer data for that order is removed with it. To implement this, use the same technique: create a view that has the order plus all the customer data for that order – call it “CustomerOrder”. Then, when a mobile device updates the record, triggers do the actual updates to the underlying tables.

Step 6: Test your SQL Setup

The reality is that this is a lot of SQL for someone who doesn’t do a lot of SQL work. Mistakes happen. It’s a good idea to test the SQL by doing manual updates. There are six tests you need to do:

  1. INSERT into the main table, then SELECT from the VIEW – ensure the createdAt and id fields are set.
  2. UPDATE the record you just inserted, then SELECT from the VIEW – ensure the updatedAt and version fields are set.
  3. DELETE the record you just inserted, then SELECT from the System Properties table – ensure the record is deleted.
  4. INSERT into the view, then SELECT from the main table – ensure the record is created.
  5. UPDATE the view, then SELECT from the main table – ensure the record is updated. Also SELECT from the System Properties table – ensure the updatedAt and version fields are updated.
  6. DELETE the record you created via the view, then SELECT from the main table and system properties table – ensure both records are deleted.

This is a relatively easy test to do and only needs to be done when you update the table. If you update the main table, don’t forget to update the triggers as well.

Changing the Mobile Schema

The final thing to do here is to point my operations at the [mobile] schema. By default, Azure Mobile Apps uses the [dbo] schema, but this table is stored in the [mobile] schema. This is a pure Entity Framework problem. Probably the easiest method is to use HasDefaultSchema in the DbContext:

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            modelBuilder.HasDefaultSchema("mobile");
            modelBuilder.Conventions.Add(
                new AttributeToColumnAnnotationConvention<TableColumnAttribute, string>(
                    "ServiceTableColumn",
                    (property, attributes) => attributes.Single().ColumnType.ToString()
                )
            );
        }

This will move all the referenced tables to the [mobile] schema. If you want to do just one table, it’s a little more complex. Something like:

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            //modelBuilder.HasDefaultSchema("mobile");
            modelBuilder.Entity<TodoItem>().ToTable("TodoItem", "mobile");
            modelBuilder.Conventions.Add(
                new AttributeToColumnAnnotationConvention<TableColumnAttribute, string>(
                    "ServiceTableColumn",
                    (property, attributes) => attributes.Single().ColumnType.ToString()
                )
            );
        }

Finally, you can also do this as a data annotation on the DTO:

using Microsoft.Azure.Mobile.Server;
using Newtonsoft.Json;
using System.ComponentModel.DataAnnotations.Schema;

namespace backend.dotnet.DataObjects
{
    [Table("TodoItem", Schema="mobile")]
    public class TodoItem : EntityData
    {
        public string UserId { get; set; }

        public string Title { get; set; }

        public bool Complete { get; set; }

        // When asked for the string representation, return the JSON
        public override string ToString()
        {
            return JsonConvert.SerializeObject(this);
        }
    }
}

Any of these techniques ensure that Entity Framework is using the appropriate place in your SQL database for queries and updates.

Next Steps

I’ll be honest here. I almost broke my arm patting myself on the back for this one. SQL is not my forte and this works as advertised, including with Soft Delete, Opportunistic Concurrency, and Incremental Offline Sync.

In my next article, I’m going to take a look at the options you have available when the table controller doesn’t quite fit what you want to do. I’ll delve into the world of Custom APIs for both Node.js and ASP.NET. Until then, my changes for today are on my GitHub Repository.

30 Days of Zumo.v2 (Azure Mobile Apps): Day 15 – Understanding Offline Sync

Halfway through my expected list of posts on Azure Mobile Apps, I’m finally getting to Offline Sync. I thought I would split offline sync over a few posts. The first post will cover what offline sync is, how it works and some of the terminology. Next time, I will update my existing UWP app to use Offline Sync, including looking at conflict resolution. I’ll cover other platforms as we move forward.

Let’s start with some information first.

What is Offline Sync?

Great question. Normally, you will do LINQ (or similar) queries to get information from a backend OData service hosted in Azure Mobile Apps. You can do some really sophisticated queries. However, this approach suffers from two basic problems:

  1. You can’t do queries when there is no network available.
  2. The queries are slow and that results in a poor user experience.

Offline sync is designed to alleviate those concerns. To do that, it uses a local SQL store – Core Data on the iOS platform and SQLite on most other platforms. You perform queries against the local SQL store instead of the remote store. In the background (or via a user interaction), you perform a synchronization process to push changes to the server and bring down updated records.
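In the C#/.NET client SDK, the setup looks roughly like this (a sketch; the URL and database name are placeholders):

```csharp
using Microsoft.WindowsAzure.MobileServices;
using Microsoft.WindowsAzure.MobileServices.SQLiteStore;

var client = new MobileServiceClient("https://yoursite.azurewebsites.net");

// Define the local SQLite store and tell it which tables to mirror.
var store = new MobileServiceSQLiteStore("localstore.db");
store.DefineTable<TodoItem>();
await client.SyncContext.InitializeAsync(store);

// Queries now run against the local store, not the network.
var table = client.GetSyncTable<TodoItem>();
var incomplete = await table.Where(t => !t.Complete).ToListAsync();
```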

To implement the synchronization logic, the client SDK uses an operations table (called MS_TableOperations) in the local SQL store to record the operations that are pending transmission to the server.

What is Incremental Sync?

To perform this magical synchronization process, incremental sync relies on a specific field – the updatedAt field in the models – to determine which records have been updated. The client saves the updatedAt value of the last synchronized record and uses that to pull the latest records down. Only records with an updatedAt value greater than that of the last synchronized record will be retrieved from the backend.

For this reason, it is important that the updatedAt field (a DateTimeOffset field in each table that is to be synchronized) is indexed. The server needs to execute ORDER BY and WHERE clauses against it.

You, of course, don’t need to use incremental sync. If you don’t use it, all records are retrieved from the server during the sync process. This is great when you want to wipe out the table on the client and replace it with the server version, but in general it sucks for performance.
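In the .NET client, this choice hangs off the query ID passed to PullAsync (the name “incsync_todoitem” here is arbitrary – I chose it for illustration):

```csharp
// Incremental sync: the SDK stores the highest updatedAt seen under this
// query ID and filters subsequent pulls against it.
await table.PullAsync("incsync_todoitem", table.CreateQuery());

// Passing null as the query ID disables incremental sync - every pull
// retrieves all records matching the query.
await table.PullAsync(null, table.CreateQuery());
```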

What is Optimistic Concurrency?

When there are conflicts between the client and server, the server returns a 409 or 412 response and the developer has to handle conflict resolution. In the case of offline sync, we can’t test for conflicts until the sync happens, so we are optimistic about our chances – we just write the new record to the offline store as if the server isn’t there and ignore the potential conflict. When you finally push those changes to the server, you may have conflicts and have to handle them at that point. That’s Optimistic Concurrency.
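In the .NET client, those deferred conflicts surface as a MobileServicePushFailedException when you push. Here is a sketch of a “client wins” policy – one of several reasonable policies, shown only as an example:

```csharp
try
{
    await client.SyncContext.PushAsync();
}
catch (MobileServicePushFailedException ex)
{
    foreach (var error in ex.PushResult.Errors)
    {
        if (error.Result == null || error.Item == null)
            continue;   // not a conflict we can resolve this way

        // "Client wins": take the server's version token, keep our data,
        // and retry the operation.
        var localItem = error.Item;
        localItem[MobileServiceSystemColumns.Version] =
            error.Result[MobileServiceSystemColumns.Version];
        await error.UpdateOperationAsync(localItem);
    }
}
```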

What is Collapse Logic?

Love the jargon yet? Let’s say you have a table with three records in it – A, B and C. You change A and then change B. When you push these changes to the server, first A will be changed and then B will be changed – the client SDK pushes the changes in the order that they were made. This helps when you want to deal with foreign keys. You can have two tables and push the records in the right order on the client – the ordering that changes are made is preserved on the server so that dependent keys get updated in the right order.

What happens if I change A, then change B, then change A again? In this case, the client SDK collapses the two changes to A into one change. This has a benefit for performance (you are doing 2 updates instead of 3), but it may destroy your nice foreign key ordering.

The Offline Sync Process

You already know all the operations involved, since it’s a standard OData process. First, the changes in MS_TableOperations are pushed to the server. This is done as individual operations: if you have 500 changes, 500 separate requests are made. If you are doing bulk operations, this is not the most performant method of making changes. I’ll talk about the right way of doing bulk changes later on.

Each push operation is identical to its “live” non-offline-sync version. If you are creating a record, then a POST of the record is done and the full record (with the Id) is received and placed in the offline cache. Ditto for updates and deletes (but I’ll be talking about soft delete a little later).

Once the pushes are done, the pull is initiated. The server will do something akin to the following:

GET /tables/todoitem?$filter=updatedAt%20ge%20value&$skip=0&$top=50&$orderby=updatedAt%20asc&__includeDeleted=false HTTP/1.1

Here the string ‘value’ will be replaced by a standard ISO 8601 date-time stamp – something like 2016-04-12T09:00:00Z.
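The incremental query can be sketched as string construction from a stored high-water mark. The helper name is made up, and the exact parameter spelling may differ between SDK versions – treat this as a sketch of the shape of the request, not the SDK’s code:

```javascript
// Sketch: build an incremental-sync pull URL from the highest updatedAt
// value seen so far.  The parameter names mirror the request shown above.
function buildPullQuery(tableName, lastUpdatedAt, skip, top) {
    var filter = "updatedAt ge " + lastUpdatedAt.toISOString();
    return "/tables/" + tableName +
        "?$filter=" + encodeURIComponent(filter) +
        "&$skip=" + skip +
        "&$top=" + top +
        "&$orderby=" + encodeURIComponent("updatedAt asc");
}

var url = buildPullQuery("todoitem", new Date("2016-04-12T09:00:00Z"), 0, 50);
console.log(url);
// /tables/todoitem?$filter=updatedAt%20ge%202016-04-12T09%3A00%3A00.000Z&$skip=0&$top=50&$orderby=updatedAt%20asc
```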

Paging

Let’s say you have thousands of records. If you execute the query without paging, then it is likely you will tie up your client process on the phone for a considerable period of time as you receive and process the data. To alleviate that and allow your mobile application to remain responsive, the client SDK implements paging. By default, 50 records will be requested for each paged operation.

In reality, this means that you will see one more request than you expect. Let’s take an example. Let’s say you are doing an initial synchronization of 120 records. What you will see is:

  • Request #1 with $skip=0&$top=50 returning 50 records
  • Request #2 with $skip=50&$top=50 returning 50 records
  • Request #3 with $skip=100&$top=50 returning 20 records
  • Request #4 with $skip=120&$top=50 returning 0 records

Why not stop at the third request? We expect this to be a live system. The OData subsystem is allowed to return fewer records than requested and will do so for a variety of reasons – for example, it may be configured with a maximum transfer size, so the records won’t all fit into the transfer buffer. The only sure way of knowing that you have received all the records is to request more and be told there are no more.
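That stop condition can be sketched as a loop that keeps pulling pages until a page comes back empty. The fetchPage function below is a stand-in for the real network call – here it just serves 120 fake records:

```javascript
// Sketch of the client-side paging loop.  fetchPage stands in for the
// real HTTP request; it serves 120 fake records in pages of up to `top`.
var SERVER_RECORDS = 120;
function fetchPage(skip, top) {
    var count = Math.max(0, Math.min(top, SERVER_RECORDS - skip));
    var page = [];
    for (var i = 0; i < count; i++) {
        page.push({ id: "id-" + (skip + i) });
    }
    return page;
}

function pullAll(pageSize) {
    var results = [], requests = 0, skip = 0;
    while (true) {
        var page = fetchPage(skip, pageSize);
        requests++;
        results = results.concat(page);
        // Stop only when the server says there is nothing more -
        // a short page alone is not proof that we are done.
        if (page.length === 0) break;
        skip += page.length;
    }
    return { records: results.length, requests: requests };
}

console.log(pullAll(50)); // { records: 120, requests: 4 }
```

Note the extra fourth request – the loop cannot know it is finished until it asks for more and gets nothing back.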

Implicit Push

If you do a Pull operation with a dirty cache – i.e. there are records in the MS_TableOperations table that have yet to be pushed – then the system will automatically push those records to the server for you. This is called Implicit Push. You should not rely on it: you are likely neither expecting nor capturing conflict exceptions on a Pull operation, and conflicts only occur on a Push. So do yourself a favor and always push explicitly.

You can also be proactive about pushing. Not everywhere has ubiquitous bandwidth. Sometimes, you only want to push your changes up to the server (and get their corresponding updates back). You don’t want to pull down new records unless you are on wifi, for example. Because the push and the pull are explicit, you can do the push without the pull.

Soft Delete

I’ve mentioned soft delete previously but not really defined it. Let’s say you have two clients and both of them implement offline sync. CLIENT1 downloads items A, B, C and so does CLIENT2. CLIENT1 now deletes item A. It’s gone from the cache in CLIENT1 and it’s gone from the server. But what about CLIENT2? Since no updates are received when CLIENT2 does a sync, the record still exists in the cache of CLIENT2.

There are two ways to fix this:

  1. Do a full sync of the table without the benefit of incremental sync
  2. Implement soft delete

Soft delete adds another field to the table – the deleted field. This field is a boolean. Records that are deleted just have the deleted field set to true (and the updatedAt field is updated). Let’s take a look at what happens now:

  • CLIENT1 sets deleted=true on record A
  • CLIENT1 syncs – record A is sent to the server
  • CLIENT2 syncs – record A is received with an updated deleted flag
  • CLIENT2 removes the record from the local cache
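The last step above – folding a pulled page into the local cache – can be sketched like this, with a plain object standing in for the client’s SQLite store:

```javascript
// Sketch: fold one page of pulled records into a local cache keyed by id.
// Records flagged deleted=true are removed rather than stored.
function applyPulledRecords(cache, pulled) {
    pulled.forEach(function (record) {
        if (record.deleted) {
            delete cache[record.id];   // soft-deleted on the server -> gone locally
        } else {
            cache[record.id] = record; // insert or update
        }
    });
    return cache;
}

var cache = { A: { id: 'A' }, B: { id: 'B' }, C: { id: 'C' } };
applyPulledRecords(cache, [
    { id: 'A', deleted: true },                        // CLIENT1 deleted it
    { id: 'B', text: 'updated', deleted: false }       // CLIENT1 changed it
]);
console.log(Object.keys(cache)); // [ 'B', 'C' ]
```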

When you request records from the server, records marked as deleted are automatically excluded from the results unless you explicitly ask for them. If you are accessing the data without going through the OData service (e.g. you are using SQL directly), then ensure you add AND deleted = 0 to your queries.

With soft delete you still enjoy incremental sync and the performance and network bandwidth benefits that come with it, but at a price. You have to purge deleted records yourself. I’ll have another post on dealing with this eventuality.

Next Steps

No code for today, but next time I’ll be implementing offline sync in my UWP app, so watch for that.

30 Days of Zumo.v2 (Azure Mobile Apps): Day 14 – Linking Existing Tables

Thus far, I’ve done a bunch on the server that is associated with a new database and a new table. Most of the time, however, I am linking to an existing database and an existing table. Azure Mobile Apps requires that tables provide certain fields. In this article, I’m going to look at linking an existing table and exposing it as a Mobile Table.

I’m going to assume you have already gotten the database linked to the Azure Mobile Apps via the Data Connections Blade. You need to provide a suitable user for accessing the database (the user must have enough rights to read and write data).

Step 1: Prepare the table

There are five fields that should be considered “system fields” in all mobile tables. These fields need to be named specifically and have specific types:

| Field Name | Field Type        | Default Value    |
|------------|-------------------|------------------|
| id         | nvarchar(255)     | newid()          |
| createdAt  | datetimeoffset(3) | sysutcdatetime() |
| updatedAt  | datetimeoffset(3) | null             |
| version    | timestamp         | (automatic)      |
| deleted    | bit               | 0                |

You can create an empty table like this:

CREATE TABLE [dbo].[tableName]
(
  [id] [nvarchar](255) NOT NULL CONSTRAINT [DF_tableName_id]  DEFAULT (CONVERT([nvarchar](255),newid(),(0))),
  [createdAt] [datetimeoffset](3) NOT NULL CONSTRAINT [DF_tableName_createdAt]  DEFAULT (CONVERT([datetimeoffset](3),sysutcdatetime(),(0))),
  [updatedAt] [datetimeoffset](3) NULL,
  [version] [timestamp] NOT NULL,
  [deleted] [bit] NULL DEFAULT ((0)),

  PRIMARY KEY NONCLUSTERED ([id] ASC) 
)

Similarly, you can use an ALTER TABLE statement to adjust an existing table instead. The major concern here is, of course, the id field – this field is commonly configured as an auto-incrementing integer. Unfortunately, that is not suitable for mobile clients – two disconnected clients may each create a record, and your service has to cope when both sync.
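Because the server cannot hand out sequential ids to offline clients, clients generate a globally unique string themselves. A minimal GUID-style generator looks like this – purely illustrative; prefer your platform’s real UUID facility where one exists:

```javascript
// Illustrative GUID-in-string-form generator, matching the nvarchar(255)
// id column above.  Use your platform's proper UUID API when available.
function createId() {
    return 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, function (c) {
        var r = Math.random() * 16 | 0;
        var v = (c === 'x') ? r : ((r & 0x3) | 0x8);
        return v.toString(16);
    });
}

var id = createId();
console.log(id.length); // 36
```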

In addition to the table update, you need to create a trigger for setting the updatedAt field:

CREATE TRIGGER [TR_tableName_InsertUpdateDelete] ON [dbo].[tableName]
AFTER INSERT, UPDATE, DELETE AS
BEGIN
  SET NOCOUNT ON;
  IF TRIGGER_NESTLEVEL() > 3 RETURN;
  UPDATE [dbo].[tableName] 
    SET [updatedAt] = CONVERT (DATETIMEOFFSET(7),SYSUTCDATETIME())
    FROM INSERTED WHERE INSERTED.id = [dbo].[tableName].[id]
END

Step 2: Define the table

If you are using an existing table, you generally don’t want Azure Mobile Apps changing it for you. As a result, you should turn off dynamic schema and define the columns you want to be included in the view:

var table = require('azure-mobile-apps').table();

table.dynamicSchema = false;
table.columns = {
    column1: 'string',
    column2: 'number',
    column3: 'boolean',
    column4: 'datetime'
};

module.exports = table;

The translation of the types is:

| Azure Mobile Apps type | SQL Server type   |
|------------------------|-------------------|
| boolean                | bit               |
| datetime               | datetimeoffset(7) |
| number                 | float(53)         |
| string                 | nvarchar(max)     |

If your database needs to be locked down further and your columns don’t conform to these types, then you will need to switch over to an ASP.NET project. The ASP.NET backend has access to Entity Framework, which lets you map more specific types by locking down the model. JavaScript is a little more forgiving on the model specification, but at the price of flexibility in the SQL types available to you.

You can, of course, do all the authentication, per-table middleware and per-operation middleware that I’ve been discussing throughout this series. This table definition only includes the definition of the actual table.

Alternative Schema

Let’s say your table isn’t in the [dbo] schema – it might be in [myschema] instead. You can handle this easily with the data.schema setting. It can also be supplied through an App Setting – MS_TableSchema – so there are two ways of setting it. Firstly, you can alter the initializer of Azure Mobile Apps in your app.js:

var mobileApp = azureMobileApps({
    data: {
        schema: 'myschema'
    }
});

However, this doesn’t provide any flexibility. Let’s say you have a test version of your database in schema ‘test’ and the production database in ‘production’ – how do you do that? The answer is to set the App Settings:

  1. Log onto the Azure Portal and select your App Service
  2. Click on All Settings then Application Settings
  3. Under App settings, add a new app setting. The key should be MS_TableSchema and the value should be the schema name.

If you use deployment slots for deploying your site, you can make the app setting a deployment slot setting – this allows you to specify different schemas for (for example) staging vs. production automatically – when the slots are swapped, the app gets the replacement setting.

Next Time

It’s probably time I cover a killer feature of Azure Mobile Apps – Offline Sync. This is not just a feature to allow access to data when the network is not there – it’s also a feature used for performance – especially when there is a lot of data to transfer.

30 Days of Zumo.v2 (Azure Mobile Apps): Day 12 – Conflict Resolution

One of the common items that everyone has to deal with is conflict resolution. Let me give you the scenario. I have two devices – a web browser and a Universal Windows App. In both cases, I temporarily hold the list of tasks in memory in a list. I update a specific task on the web browser, maybe changing the title. I update the same task on the app, changing it to a different value. What happens?

Well, the answer is “it depends”.

Let’s take the two cases. In the case of my Universal Windows App, I have the following model:

using Microsoft.WindowsAzure.MobileServices;
using Newtonsoft.Json;

namespace Shellmonger.TaskList.Services
{
    public class TodoItem
    {
        public string Id { get; set; }

        [JsonProperty("text")]
        public string Title { get; set; }

        [JsonProperty("complete")]
        public bool Completed { get; set; }

        [JsonProperty("shared")]
        public bool Shared { get; set; }

        [JsonIgnore]
        public bool NotShared { get { return !Shared;  } }
    }
}

When I update the title, I send an HTTP PATCH with the updated model serialized as JSON. If I do nothing else, the default conflict resolution is “last write wins”. In the specific case that I mentioned earlier – updating the web browser, then updating the app – with this model, the app change overwrites the browser change. Refreshing the browser will show the new value.

What about the other way round? Because this model has no version property, no version is sent with the update, so the server has nothing to check a precondition against. That means last write wins again.

What if you want to do something else? Like, maybe, pop up an error and allow the user to choose? This is possible too, but all your clients need to support conflict resolution. I need to add an extra field called Version to my model for that:

using Microsoft.WindowsAzure.MobileServices;
using Newtonsoft.Json;

namespace Shellmonger.TaskList.Services
{
    public class TodoItem
    {
        public string Id { get; set; }

        [Version]
        public string Version { get; set; }

        [JsonProperty("text")]
        public string Title { get; set; }

        [JsonProperty("complete")]
        public bool Completed { get; set; }

        [JsonProperty("shared")]
        public bool Shared { get; set; }

        [JsonIgnore]
        public bool NotShared { get { return !Shared;  } }
    }
}

The version string is updated to a new version whenever a POST, PATCH or DELETE operation is performed – i.e., when the record is updated. This is done on the server. When I perform the same sequence of requests now, the server will return a 412 Precondition Failed response. This is trapped by the Azure Mobile Apps Client SDK and a MobileServicePreconditionFailedException is thrown. I can trap this myself like this (from TasksPage.xaml.cs):

            StartNetworkActivity();
            try
            {
                Trace($"taskTitle_Changed - updating title ID={item.Id} Title={item.Title} Complete={item.Completed}");
                await dataTable.UpdateAsync(item);
                TaskListView.Focus(FocusState.Unfocused);
            }
            catch (MobileServicePreconditionFailedException<TodoItem> conflict)
            {
                Trace($"taskTitle_Changed - Conflict Resolution for item {conflict.Item.Id}");
                var dialog = new MessageDialog(conflict.Item.Id, "Conflict Resolution");
                await dialog.ShowAsync();
            }
            catch (MobileServiceInvalidOperationException exception)
            {
                Trace($"taskTitle_Changed - failed to update title ID={item.Id} Title={item.Title} Error={exception.Message}");
                var dialog = new MessageDialog(exception.Message);
                await dialog.ShowAsync();
            }
            StopNetworkActivity();

Put a breakpoint on the trace and inspect the value of the conflict exception – it contains a TodoItem in the Item property. This contains the server copy of the record. You have your copy of the record already.

At this point you have a couple of choices:

  1. You can choose “Client Wins” – update the version in your record to match the version on the server, then re-push the change
  2. You can choose “Server Wins” – copy the entire record from the server into your list
  3. You can somehow merge the results (see later)
  4. You can let the user choose (I will implement this)

Option #3 is worthy of some discussion. Let’s say you have a complicated model. You could add a hidden “dirty” flag for each field in your model. As the model gets updated, you mark the affected field as dirty. When a conflict happens, you take the server record as a base and copy the dirty fields into it, creating a “server + my changes” record, then push that record. It’s more complex to handle, but the process is relatively similar to what I have below.
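That merge can be sketched as follows – in JavaScript for brevity, even though this post’s client is C#. The field names and the dirty-tracking mechanism are assumptions; dirtyFields is whatever your model’s change tracking reports:

```javascript
// Sketch of a "server + my changes" merge.  Everything here is illustrative.
function mergeConflict(serverItem, clientItem, dirtyFields) {
    var merged = {};
    Object.keys(serverItem).forEach(function (key) {
        merged[key] = serverItem[key];       // start from the server copy
    });
    dirtyFields.forEach(function (field) {
        merged[field] = clientItem[field];   // overlay only the fields I touched
    });
    merged.version = serverItem.version;     // accept the server's version so the re-push succeeds
    return merged;
}

var server = { id: '1', text: 'server title', complete: true,  version: 'v2' };
var client = { id: '1', text: 'my title',     complete: false, version: 'v1' };
var merged = mergeConflict(server, client, ['text']); // only 'text' was edited locally
console.log(merged); // { id: '1', text: 'my title', complete: true, version: 'v2' }
```

The key design point is the last line of the function: the merged record must carry the server’s version string, otherwise the re-push would simply hit another 412.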

That leaves option #4. Leave it up to the user. I can present the user with two options – the server version and the client version. I can let the user decide which one wins. If the user chooses “server”, then I replace the client version with the server version. If the user chooses “client”, then I update the version and push the change to the server. That results in something akin to the following code:

            StartNetworkActivity();
            try
            {
                Trace($"taskTitle_Changed - updating title ID={item.Id} Title={item.Title} Complete={item.Completed}");
                await dataTable.UpdateAsync(item);
                TaskListView.Focus(FocusState.Unfocused);
            }
            catch (MobileServicePreconditionFailedException<TodoItem> conflict)
            {
                Trace($"taskTitle_Changed - Conflict Resolution for item {conflict.Item.Id}");

                // If the content matches on both sides, silently accept the server's version string - client wins
                if (conflict.Item.Title.Equals(item.Title) && conflict.Item.Completed == item.Completed)
                {
                    item.Version = conflict.Item.Version;
                }
                else
                {
                    // Build the contents of the dialog box
                    var stackPanel = new StackPanel();
                    var localVersion = new TextBlock
                    {
                        Text = $"Local Version: Title={item.Title} Completed={item.Completed}"
                    };
                    var serverVersion = new TextBlock
                    {
                        Text = $"Server Version: Title={conflict.Item.Title} Completed={conflict.Item.Completed}"
                    };
                    stackPanel.Children.Add(localVersion);
                    stackPanel.Children.Add(serverVersion);

                    // Create the dialog box
                    var dialog = new ContentDialog
                    {
                        Title = "Resolve Conflict",
                        PrimaryButtonText = "Local",
                        SecondaryButtonText = "Server"
                    };
                    dialog.Content = stackPanel;

                    // Show the dialog box and handle the response
                    var result = await dialog.ShowAsync();
                    if (result == ContentDialogResult.Primary)
                    {
                        // Local Version - Copy the version from server to client and re-submit
                        item.Version = conflict.Item.Version;
                        await dataTable.UpdateAsync(item);
                    }
                    else if (result == ContentDialogResult.Secondary)
                    {
                        // Just pull the records from the server
                        await RefreshTasks();
                    }
                }
            }
            catch (MobileServiceInvalidOperationException exception)
            {
                Trace($"taskTitle_Changed - failed to update title ID={item.Id} Title={item.Title} Error={exception.Message}");
                var dialog = new MessageDialog(exception.Message);
                await dialog.ShowAsync();
            }
            StopNetworkActivity();

This is obviously not perfect code. Firstly, I don’t have a great story around what happens if other things (other than the title) change. Secondly, I don’t have a great story for what happens if the second update fails. Finally, I don’t have a good story around the “server” acceptance. I just refresh the tasks. However, this is good enough for you to get the general idea.

Updating the JavaScript app

This does not help me in the JavaScript world. I have an Apache Cordova app and an HTML app that are both based around the HTML/JS Client SDK. In the app, I have the following code:

    /**
     * Event handler for when the user updates the text of a todo item
     * @param {Event} event the event that caused the request
     * @returns {void}
     */
    function updateItemTextHandler(event) {
        var itemId = getTodoItemId(event.currentTarget),
            newText = $(event.currentTarget).val();

        updateSummaryMessage('Updating Item in Azure');
        console.info('updateItemTextHandler: itemId = ', itemId);
        todoItemTable
            .update({ id: itemId, text: newText })  // Async send the update to backend
            .then(refreshDisplay, handleError); // Update the UI
        event.preventDefault();
    }

You can find the code in public/application.js for the web browser version. When you run this, you get the following request sent:

[Screenshot: the PATCH request sent by the browser]

Note that the request does not have the version included. This means that “last write wins”. The server will do the change, ignoring the version field completely. There are several methods by which this can be fixed. My version is to include the version in the record, like this:

    function createTodoItem(item) {
        return $('<li>')
            .attr('data-todoitem-id', item.id)
            .attr('data-todoitem-version', item.version)
            .append($('<button class="item-delete">Delete</button>'))
            .append($('<input type="checkbox" class="item-complete">')
                .prop('checked', item.complete))
            .append($('<div>')
                .append($('<input class="item-text">').val(item.text)));
    }

I can now write a copy of the getTodoItemId(el) method to grab the version of a record:

    /**
     * Given a sub-element of an LI, find the TodoItem Version associated with the list member
     *
     * @param {DOMElement} el the form element
     * @returns {string} the Version of the TodoItem
     */
    function getTodoItemVersion(el) {
        return $(el).closest('li').attr('data-todoitem-version');
    }

Now I can use these in the code to add the version attribute to my update:

    function updateItemTextHandler(event) {
        var itemId = getTodoItemId(event.currentTarget),
            itemVersion = getTodoItemVersion(event.currentTarget),
            newText = $(event.currentTarget).val();

        updateSummaryMessage('Updating Item in Azure');
        console.info('updateItemTextHandler: itemId = ', itemId);
        todoItemTable
            .update({ id: itemId, version: itemVersion, text: newText })  // Async send the update to backend
            .then(refreshDisplay, handleError); // Update the UI
        event.preventDefault();
    }

If there is a conflict, the handleError routine will get called and will show a 412 error.
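A more useful handleError would detect the 412 and react. Here is a sketch of that classification step – the error shape (error.request.status) is an assumption about how the HTML/JS client SDK surfaces HTTP failures, so verify it against your SDK version:

```javascript
// Sketch: classify an error coming back from todoItemTable.update().
// The error.request.status shape is an assumption - check your SDK.
function classifyUpdateError(error) {
    var status = error && error.request && error.request.status;
    if (status === 412 || status === 409) {
        // Precondition Failed / Conflict: someone else changed the record.
        // A real handler would re-fetch the record and let the user choose.
        return 'conflict';
    }
    return 'error';
}

console.log(classifyUpdateError({ request: { status: 412 } })); // conflict
console.log(classifyUpdateError({ request: { status: 500 } })); // error
```

In the handler you would then branch: on 'conflict', refresh the record and prompt the user; on anything else, fall back to the generic error display.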

Next Steps

This is not the last time I will deal with conflict resolution, as this topic is different when covering offline sync. In the next post, I’m going to cover the OData interface and all the errors that you can get from the server SDK. I’ll also take a look at a more reasonable version of the table managers I have been using up to now, providing an overall conflict resolution and two-way binding. Until then, here is the code:

30 Days of Zumo.v2 (Azure Mobile Apps): Day 11 – Table Access

There are times when accessing one table for your data is just not enough. Let’s take an example. I have a simple task list app right now. I’ve got it running on all the mobile platforms – iOS, Android and Windows. But I can only see my tasks. What if I want to show my tasks to someone else? I’m going to put together a fairly simple adjustment to my application during this article. Firstly, I’m going to add a “Friends” page to my Universal Windows application, where I can enter the email address of a friend so that they can then see my tasks (with a different icon).

The Friends Table

To start with, I need a friends table. This is a new table called friend.js but is just like the todoitem.js table:

var azureMobileApps = require('azure-mobile-apps');

// Create a new table definition
var table = azureMobileApps.table();

// Require authentication
table.access = 'authenticated';

// CREATE operation
table.insert(function (context) {
    context.item.userId = context.user.emailaddress;
    return context.execute();
});

// READ operation
table.read(function (context) {
    context.query.where({ userId: context.user.emailaddress });
    return context.execute();
});

// UPDATE operation
table.update(function (context) {
    context.query.where({ userId: context.user.emailaddress });
    context.item.userId = context.user.emailaddress;
    return context.execute();
});

// DELETE operation
table.delete(function (context) {
    context.query.where({ userId: context.user.emailaddress });
    return context.execute();
});

module.exports = table;

I’ve also created a new Universal Windows client that uses this table. You can find the client at my GitHub repository. It has a People tab and a Tasks tab. The People tab allows you to enter email addresses into the table so that other people can view your data. Note that this is in no way how I would write this code in reality. There are way too many ways to abuse this. However, it’s good as a proof of concept. I’m also not going to cover the .NET client – this is about the backend code, not the frontend.

The question for this blog post is this: How do I implement a mechanism whereby my table view is limited by another table? In this case, I want to view all records that I am allowed to see – either because I own them or because someone has given me permission to view them. This will be a change to the read() method in the todoitem.js table controller. I am splitting this into two halves. In the first half I will get a list of all the user IDs that I can view:

// READ operation
table.read(function (context) {
    return context.tables('friend')
        .where({ viewer: context.user.emailaddress })
        .select('userId')
        .read()
        .then(function (friends) {
            console.log('READ: Response from SQL Query = ', friends);
            var list = friends.map(function (f) { return f.userId; });
            list.push(context.user.emailaddress);
            console.log('READ: List of user ids = ', list);
            
            // TODO: Adjust the query according to the requirements
            return context.execute();
        });
});

Let’s take this step by step. The call to context.tables('friend') gets a reference to the friend table. I then construct a query for all records where the current user is marked as a viewer. The table itself has just two columns – userId is the “owner” and viewer is the email address I place in the table in the UI. It reads “viewer can read the tasks for userId”. Once I’ve got that, I limit the returned columns to just userId and finally call read() to execute the query. The result is a Promise that resolves to an array of objects, each with a single property (the userId field).

There are more methods that you can use that are “LINQ-like”. I’ve already introduced where, select and read. You can also use:

  • orderBy(field) and orderByDescending(field) to do sorting
  • skip(count) and take(count) to implement paging
  • includeTotalCount() to add a total number of records field to the results

Once I’ve resolved the promise, I need to convert the result to a plain list of email addresses – this becomes the viewership list. Notice that I add the current user’s email address to the list.

Next, I need to adjust my search of the database to return tasks owned by anyone in the list of email addresses I’ve just created. This is actually quite hard to do. One of the methods of altering the query that is sent to the database is to provide a function:

// READ operation
table.read(function (context) {
    return context.tables('friend')
    .where({ viewer: context.user.emailaddress })
    .select('userId')
    .read()
    .then(function (friends) {
        var list = friends.map(function (f) { return f.userId; })
        list.push(context.user.emailaddress);
        context.query.where(function(list) { return this.userId in list; }, list);
        return context.execute().then(function (results) {
          // Process the results
          console.log('results = ', results);
          return results;
        });
    });
});

This is a different mechanism for altering the query. Underneath, the query function is compiled to an AST by esprima and then converted to SQL from that compiled form. You never have to worry about this until it bites you: you see JavaScript and assume you can use any JavaScript in the where method. In fact, only a limited set of constructs is supported. In the context of the function, this refers to the current item. You can pass in one or two arguments (such as lists), and anything you need to reference must be passed in – you can’t rely on the surrounding closure, because the function is never actually executed as JavaScript. Inside the function, there is a set of methods you can use:

  • toUpperCase()
  • toLowerCase()
  • trim()
  • indexOf()
  • concat()
  • substring() or substr()
  • replace()
  • getFullYear()
  • getUTCFullYear()
  • getYear()
  • getDate()
  • getUTCDate()
  • in

Obviously, most of these only work on specific data types – strings and dates, most commonly. However, you can use in with arrays, and you can also use math operations and boolean logic inside the function. I’m using this functionality to change the query to “records I am allowed to view”. My final step in this rather complicated read logic is to change the results so that I return a computed “shared” flag:

// READ operation
table.read(function (context) {
    return context.tables('friend')
    .where({ viewer: context.user.emailaddress })
    .select('userId')
    .read()
    .then(function (friends) {
        var list = friends.map(function (f) { return f.userId; })
        list.push(context.user.emailaddress);
        context.query.where(function(list) { return this.userId in list; }, list);
        return context.execute().then(function (results) {
          results.forEach(function (item) {
            // Item is shared if the owner of item is not current owner
            item.shared = (item.userId !== context.user.emailaddress);
          });
          return results;
        });
    });
});

Here, the highlighted lines add in a computed field called shared that is true when the record does not belong to the current user. I also need to add in some logic to REMOVE the shared flag from the incoming records:

// CREATE operation
table.insert(function (context) {
    context.item.userId = context.user.emailaddress;
    delete context.item.shared;
    delete context.item.Shared;
    return context.execute();
});

// UPDATE operation
table.update(function (context) {
    context.query.where({ userId: context.user.emailaddress });
    context.item.userId = context.user.emailaddress;
    delete context.item.shared;
    delete context.item.Shared;
    return context.execute();
});

Next Steps

Today, I’ve gone through the more esoteric adjustments to the query you can do, doing sub-queries and manipulating the results. In the next article, I’m going to start looking at doing conflict resolution.

In the mean time, you can download my client and server:

30 Days of Zumo.v2 (Azure Mobile Apps): Day 9 – Table Controller Operations

I introduced the table controller in my last article and showed off all the things you can do in defining tables at the table level. All the settings are common across all operations to the table. This includes schema and authorization settings. Today, I am looking at individual operations. A table controller provides an OData v3 data source with GET, POST, PUT and DELETE operations.

The per-operation settings

There are four operations on a table, corresponding to the four operations in a typical CRUD controller:

  • CREATE is table.insert
  • READ is table.read
  • UPDATE is table.update
  • DELETE is table.delete

There is also a table.undelete() – this is a special case of a POST with an ID to undelete a previously deleted record. Soft delete must be enabled for this option. If you want to set an option on the READ operation, you would set it on table.read, for example. Let’s look at some:

Per-Operation Access Control

Last time, I showed you how to specify the default access control with something like this:

var table = require('azure-mobile-apps').table();

table.access = 'authenticated';

module.exports = table;

You can also specify access control on a per-operation basis. For example, let’s look at a read-only table:

var table = require('azure-mobile-apps').table();

table.access = 'disabled';
table.read.access = 'anonymous';

module.exports = table;

Note that I specify the default permission using table.access, then override it with individual operation access permissions. Another common pattern would be the ability to read anonymously, but only update when authenticated:

var table = require('azure-mobile-apps').table();

table.access = 'authenticated';
table.read.access = 'anonymous';

module.exports = table;

Yet another option would be a table where you can’t delete or update anything – only insert into the table:

var table = require('azure-mobile-apps').table();

table.access = 'authenticated';
table.update.access = 'disabled';
table.delete.access = 'disabled';

module.exports = table;

In all cases, the same logic as for table.access applies:

  • ‘authenticated’ requires the submission of a valid X-ZUMO-AUTH token; the server responds with 401 Unauthorized if the request is not validly authenticated.
  • ‘anonymous’ allows anyone to perform the operation.
  • ‘disabled’ means that the server responds with 405 Method Not Allowed.

Per-operation Middleware

You can also define middleware that executes additional code during an operation. I’ve already shown an example of this when I was demonstrating the personal table – obtaining the email address so that I could use it in my table. The general form is:

var table = require('azure-mobile-apps').table();

table.access = 'authenticated';

table.read(function (context) {
    // Pre-operation tasks

    return context.execute().then((response) => {
        // Post-operation tasks
    });
});

module.exports = table;

Common pre-operation tasks are authorization, query adjustment and field insertion. I showed off examples of the latter two in the Personal Tables article – query adjustment to only show records that belong to the authenticated user, and field insertion to insert the user’s email address. I’ll cover authorization in another article.
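As a reminder of the shape of a pre-operation task, here is a sketch of field insertion – stamping the authenticated user’s id onto every inserted record so the client can never claim someone else’s ownership (this follows the personal-table pattern from the earlier article):

```javascript
// Sketch: field insertion as a pre-operation task.
var table = require('azure-mobile-apps').table();

table.access = 'authenticated';

table.insert(function (context) {
    // Pre-operation: overwrite any client-supplied userId so records
    // are always owned by the caller.
    context.item.userId = context.user.id;
    return context.execute();
});

module.exports = table;
```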

The Operation Context

When the operation middleware is called, a context is passed. The context has the following properties:

Property   Type              Usage
--------   ----------------  ------------------------
query      queryjs/Query     The query to be executed
id         string            The record ID
item       object            The inbound record
req        express.Request   The request object
res        express.Response  The response object
data       data              The data provider
push       notification hub  The push provider
logger     winston           The logger provider
tables     function          Accessor for tables
table      object            The current table
user       object            The authenticated user
execute    function          Execution function
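For instance, the tables accessor lets middleware read or write other tables from within an operation. Here is a sketch that writes an audit record during a delete – the ‘audit’ table name is hypothetical:

```javascript
// Sketch: using context.tables to write to another table from within
// delete middleware. Assumes an 'audit' table exists in the backend.
table.delete(function (context) {
    return context.execute().then(function (result) {
        // context.tables('audit') returns a table access object with
        // the usual CRUD operations, returning Promises.
        return context.tables('audit').insert({
            operation: 'delete',
            recordId: context.id,
            userId: context.user ? context.user.id : null
        }).then(function () {
            return result;  // still return the original response
        });
    });
});
```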

The query, id and item properties are used in various operations:

Property   Used in
--------   ---------------------
query      read (search), update
id         read (fetch), delete
item       insert, update

The req and res properties come from Express – you can use them to access other headers and cookies, or to manipulate the response directly. Most of the other properties give you access to pre-configured services. I’ve already introduced the user object: the only value guaranteed to be present is user.id (everything else is optional), although the user object also provides a getIdentity() method. Finally, we get to the execute() method. This executes the operation and returns a Promise that (eventually) resolves to the response. However, that doesn’t stop you from altering the response as you go.
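As a sketch of altering the response after execute(), a small pure helper can strip a server-side field from each row before it goes back to the client – the helper itself has no SDK dependency, and the field name ‘userId’ is just illustrative:

```javascript
// Pure helper: remove a named field from every row of a read response,
// without mutating the original rows. Useful in a post-execute handler
// to keep server-only fields out of the client payload.
function redactField(rows, field) {
    return rows.map(function (row) {
        var copy = Object.assign({}, row);
        delete copy[field];
        return copy;
    });
}

// In the table script (sketch):
// table.read(function (context) {
//     return context.execute().then(function (rows) {
//         return redactField(rows, 'userId');
//     });
// });
```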

Let’s take a quick look at an example. Let’s say we adjust our existing TodoItem.js script like this:

// READ operation
table.read(function (context) {
  console.log('context.user = ', JSON.stringify(context.user));
    return context.user.getIdentity().then(function (userInfo) {
        console.log('userInfo.aad.claims = ', JSON.stringify(userInfo.aad.claims));
        context.query.where({ userId: userInfo.aad.claims.emailaddress });
        return context.execute();
    }).then(function (response) {
        console.log('read.execute response = ', response);
        return response;
    });
});

In addition to the personal-table query adjustment from earlier, this code prints the response out to the log. You’ll need to turn on application logging at the verbose level to view it. When you do (and run your application), here is what you will see:

[Screenshot: verbose application log showing context.user, the AAD claims, and the read response array]

The response in this case (for a read/search) is an array of results. We can use this fact to do some sort of business logic.

Business Logic like what?

Well, we’ll get into some very specific things dealing with Push Notifications later on in the course. However, some things could be to asynchronously kick off a queue submission. As a quick example, let’s say you are using the Azure Storage API to store images. Once the image is stored, you update the database with a unique ID and some meta-data on the image. But you want to add a watermark, resize the image, or notify someone that an image is available for review. You could do this in a post-execute phase.
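The queue-submission idea can be sketched like this, using the (legacy) azure-storage package in a post-execute phase of the insert operation – the ‘image-jobs’ queue name and the message shape are illustrative assumptions:

```javascript
// Sketch: enqueue a post-processing job (resize, watermark, review
// notification) after an image record is inserted. Assumes the
// azure-storage package and an AZURE_STORAGE_CONNECTION_STRING setting.
var azure = require('azure-storage');
var queueSvc = azure.createQueueService();

table.insert(function (context) {
    return context.execute().then(function (item) {
        // Fire-and-forget: queue a job keyed by the new record's id.
        // Errors are logged, not surfaced to the client.
        queueSvc.createMessage('image-jobs', JSON.stringify({ id: item.id }), function (err) {
            if (err) { context.logger.error('queue submission failed', err); }
        });
        return item;
    });
});
```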

Next Steps

Middleware is an incredibly powerful facility that lets you do pre- and post-processing of requests on a per-operation basis. The last two articles have just introduced the table controller functionality. In the next article, I’m going to use this functionality to add authorization capabilities to my operations in a way that, hopefully, can be considered somewhat boilerplate. Until then, you can find today’s code at my GitHub Repository.