30 Days of Zumo.v2 (Azure Mobile Apps): Day 19 – ASP.NET Table Controllers

I’ve got a pretty good table controller going from my last article. However, there are some features of ASP.NET Table Controllers that I am not using yet. I’d like to cover those as a miscellany of features that don’t really have any association with each other – you need to consider them individually and decide whether your situation warrants their use.

Soft Delete

I’ve mentioned Soft Delete before with respect to offline sync. Soft Delete allows you to notify your clients that a record has been deleted. When you delete a record, it’s just marked as deleted. That allows other clients to download the new state and update their offline sync cache. At some later date, you can remove the deleted records through another process. I’ll be covering that process in a future posting.

Soft Delete is enabled on a per-table basis in the Initialize() method:

        protected override void Initialize(HttpControllerContext controllerContext)
        {
            base.Initialize(controllerContext);
            MyDbContext context = new MyDbContext();
            DomainManager = new EntityDomainManager<TodoItem>(context, Request, enableSoftDelete: true);
        }

The enableSoftDelete: true parameter on the EntityDomainManager constructor is all you need to enable soft delete. The entity domain manager that manages all the requests on this table handles the actual implementation of soft delete.

Seeding of Data

Let’s say you want to initialize a table with some content. The database initializer has a method called Seed() that is called when the database is created for the first time. It does not run on application restart, nor does it run during an EF migration, so it’s not a good solution for those cases. You can find the database initializer in the App_Start/AzureMobile.cs file:

    public class AzureMobileInitializer : CreateDatabaseIfNotExists<MyDbContext>
    {
        protected override void Seed(MyDbContext context)
        {
            // You can seed your database here
#if SEED_DATA
            List<TodoItem> todoItems = new List<TodoItem>
            {
                new TodoItem { Id = Guid.NewGuid().ToString(), Text = "First item", Complete = false },
                new TodoItem { Id = Guid.NewGuid().ToString(), Text = "Second item", Complete = false },
            };

            foreach (TodoItem todoItem in todoItems)
            {
                context.Set<TodoItem>().Add(todoItem);
            }

#endif

            base.Seed(context);
        }
    }

Personally, I dislike the seeding of data and try to avoid it, but it is useful in certain circumstances.
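If you do need seed data that survives EF migrations, a common alternative is to seed from the migrations configuration instead of the database initializer. The sketch below is hypothetical for this project – Enable-Migrations generates a similar Configuration class for you – and uses AddOrUpdate so the seed is idempotent:

```csharp
using System;
using System.Data.Entity.Migrations;
using backend.dotnet.DataObjects;
using backend.dotnet.Models;

namespace backend.dotnet.Migrations
{
    // Hypothetical class - Enable-Migrations generates one like it for you
    internal sealed class Configuration : DbMigrationsConfiguration<MyDbContext>
    {
        // Unlike the database initializer, this Seed() runs after every
        // Update-Database, so use AddOrUpdate to keep it idempotent.
        protected override void Seed(MyDbContext context)
        {
            context.TodoItems.AddOrUpdate(
                t => t.Text,   // match on Text so re-running does not duplicate rows
                new TodoItem { Id = Guid.NewGuid().ToString(), Text = "First item", Complete = false }
            );
        }
    }
}
```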

Debugging

The Node.js server SDK has a nice logging mechanism that logs the actual SQL statements that are being made. ASP.NET doesn’t do this out of the box. Fortunately, ASP.NET with SQL Azure is based on Entity Framework, so I can set up logging using the Entity Framework technique. Put this in your DbContext constructor (which, for me, is in Models/MyDbContext.cs):

        // Requires "using System.Diagnostics;" at the top of the file
        private const string connectionStringName = "Name=MS_TableConnectionString";

        public MyDbContext() : base(connectionStringName)
        {
            this.Database.Log = (message) => Debug.Write(message);
        }

You will get output in the Log Stream (which is under the Tools menu for your App Service in the Azure Portal). Make sure you publish a “Debug” version of your app.

Did You Know? You can turn on diagnostic logging and view the log stream directly from Visual Studio. Open the Server Explorer, expand the Azure -> App Service node, find your mobile app and right-click on it. Click on View Settings to update the web server logging, and View Streaming Logs to view the logs. The logs appear in the Output window.

There are also various techniques for connecting a remote debugger from Visual Studio to your site. If you want to do this, check out the Azure article on the subject.

One note: model differences – where the DTO on the server and the model on the client app differ – matter. If you do have model differences, the SQL commands on the backend won’t even execute. In these cases, you will want to capture the response output on the server or check the response on the client. I use the following in my POST handler, for example:

        // POST tables/TodoItem
        public async Task<IHttpActionResult> PostTodoItem(TodoItem item)
        {
            Debug.WriteLine($"POST tables/TodoItem");
            var emailAddr = await GetEmailAddress();
            Debug.WriteLine($"Email Address = {emailAddr}");
            item.UserId = emailAddr;
            Debug.WriteLine($"Item = {item}");
            try
            {
                TodoItem current = await InsertAsync(item);
                Debug.WriteLine($"Updated Item = {current}");
                return CreatedAtRoute("Tables", new { id = current.Id }, current);
            }
            catch (HttpResponseException ex)
            {
                Debug.WriteLine($"Exception: {ex}");
                Debug.WriteLine($"Response: {ex.Response}");
                string content = await ex.Response.Content.ReadAsStringAsync();
                Debug.WriteLine($"Response Content: {content}");
                throw;  // rethrow without resetting the stack trace
            }
        }

Using Existing Tables

This is a big and – to my mind – fairly complex topic, so be aware that I consider this advanced. Let’s say you have an existing database with an existing table, and for some reason you don’t have the ability to alter the table, so you can’t add the system columns to it. How do you deal with this? Well, there is a way. Conceptually, you create another table that holds the system columns, and then you create an updatable view that acts like a table but isn’t.

Step 1: Don’t let the Mobile App update the Database Schema

When you go this route, you are responsible for updating the database – the Mobile App will no longer make schema changes for you. In your App_Start/AzureMobile.cs Startup class, make the following change:

            // Initialize the database with EF Code First
            // Database.SetInitializer(new AzureMobileInitializer());
            Database.SetInitializer<MyDbContext>(null);

The original line is above the new line and commented out.

Step 2: Create the System Columns Table

Let’s say you have a table called [myapp].[TodoItem] which is defined like this:

CREATE TABLE [myapp].[TodoItem] (
  [id]       BIGINT NOT NULL IDENTITY(1,1) PRIMARY KEY,
  [UserId]   NVARCHAR(255) NOT NULL,
  [Title]    NVARCHAR(255) NOT NULL,
  [Complete] BIT
)
GO

I can create a system properties table like this:

CREATE TABLE [mobile].[TodoItem_SystemProps] (
  [id]        NVARCHAR(255) CONSTRAINT [DF_todoitem_id] DEFAULT (CONVERT([NVARCHAR](255),NEWID(),(0))) NOT NULL,
  [createdAt] DATETIMEOFFSET(7) CONSTRAINT [DF_todoitem_createdAt] DEFAULT (CONVERT([DATETIMEOFFSET](7),SYSUTCDATETIME(),(0))) NOT NULL,
  [updatedAt] DATETIMEOFFSET(7) NULL,
  [version]   ROWVERSION NOT NULL,
  [deleted]   BIT DEFAULT ((0)) NOT NULL,
  [item_id]   BIGINT NOT NULL,
  PRIMARY KEY NONCLUSTERED ([id] ASC)
)
GO

Step 3: Create a Mobile SQL View

Here is my view:

CREATE VIEW [mobile].[TodoItem] AS
SELECT
    [mobile].[TodoItem_SystemProps].[id],
    [mobile].[TodoItem_SystemProps].[createdAt],
    [mobile].[TodoItem_SystemProps].[updatedAt],
    [mobile].[TodoItem_SystemProps].[version],
    [mobile].[TodoItem_SystemProps].[deleted],
    [mobile].[TodoItem_SystemProps].[item_id],
    [myapp].[TodoItem].[UserId],
    [myapp].[TodoItem].[Title],
    [myapp].[TodoItem].[Complete]
FROM
    [myapp].[TodoItem]
INNER JOIN
    [mobile].[TodoItem_SystemProps]
ON
    [myapp].[TodoItem].[id] = [mobile].[TodoItem_SystemProps].[item_id]
GO

This produces a composite read-only view of the original table with the system properties. However, it doesn’t quite work yet.

Step 4: Handle Updates to the Original Table

I need to wire up two specific areas. The first: when the original data is updated, the system properties table must be updated as well. This is handled by a series of three triggers (the second area – updates coming in from Azure Mobile Apps – is covered in Step 5).

CREATE TRIGGER
    [myapp].[TRG_TodoItem_Insert]
ON
    [myapp].[TodoItem]
AFTER
    INSERT
AS BEGIN
    -- Set-based so that multi-row inserts are handled correctly
    INSERT INTO [mobile].[TodoItem_SystemProps] ([item_id], [updatedAt])
    SELECT [id], CONVERT(DATETIMEOFFSET(7), SYSUTCDATETIME()) FROM inserted;
END
GO

CREATE TRIGGER
    [myapp].[TRG_TodoItem_Update]
ON
    [myapp].[TodoItem]
AFTER
    UPDATE
AS BEGIN
    UPDATE
        [mobile].[TodoItem_SystemProps]
    SET
        [updatedAt] = CONVERT(DATETIMEOFFSET(7), SYSUTCDATETIME())
    FROM
        INSERTED
    WHERE
        INSERTED.id = [mobile].[TodoItem_SystemProps].[item_id]
END
GO

CREATE TRIGGER
    [myapp].[TRG_TodoItem_Delete]
ON
    [myapp].[TodoItem]
AFTER
    DELETE
AS BEGIN
    -- Set-based so that multi-row deletes are handled correctly
    DELETE FROM [mobile].[TodoItem_SystemProps]
    WHERE [item_id] IN (SELECT [id] FROM deleted)
END
GO

Each trigger handles one case: insert, update, or delete. The primary job of the update trigger is to set the updatedAt time to the current time. Insertion creates a new system properties record and deletion deletes it. An important note: the delete trigger does not implement soft delete – something else will have to be done if records deleted on the original table should be soft-deleted for mobile clients.
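If you did want deletions on the original table to be surfaced as soft deletes, one possible approach is a trigger along these lines. This is a sketch, not part of the project – the trigger name is hypothetical, and it would replace the hard-delete trigger above rather than coexist with it:

```sql
CREATE TRIGGER
    [myapp].[TRG_TodoItem_SoftDelete]
ON
    [myapp].[TodoItem]
AFTER
    DELETE
AS BEGIN
    -- Mark the system properties record as deleted (instead of removing it)
    -- so that mobile clients can sync the deletion
    UPDATE [mobile].[TodoItem_SystemProps]
    SET [deleted] = 1, [updatedAt] = CONVERT(DATETIMEOFFSET(7), SYSUTCDATETIME())
    WHERE [item_id] IN (SELECT [id] FROM deleted)
END
GO
```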

Step 5: Handle Updates from Azure Mobile Apps

The next step is to ensure that when an update (or an insert or a delete) comes in from Azure Mobile Apps, the right thing happens on the tables. For insertion (or deletion), this means creating (or deleting) the original item – not the system props record, which is handled by the triggers above. For updates, I need to update just the original table – not the system props (except for soft delete).

CREATE TRIGGER
    [mobile].[TRG_Mobile_TodoItem_Insert]
ON
    [mobile].[TodoItem]
INSTEAD OF
    INSERT
AS BEGIN
    DECLARE @userid AS NVARCHAR(255)
    SELECT @userid = inserted.UserId FROM inserted
    DECLARE @title AS NVARCHAR(255)
    SELECT @title = inserted.Title FROM inserted
    DECLARE @complete AS BIT
    SELECT @complete = inserted.Complete FROM inserted

    INSERT INTO
        [myapp].[TodoItem] ([UserId], [Title], [Complete])
    VALUES
        (@userid, @title, @complete)

    IF UPDATE(Id) BEGIN
        DECLARE @itemid AS BIGINT
        SELECT @itemid = SCOPE_IDENTITY()  -- safer than @@identity
        DECLARE @id AS NVARCHAR(255)
        SELECT @id = inserted.Id FROM inserted
        UPDATE [mobile].[TodoItem_SystemProps] SET [Id] = @id WHERE [item_id] = @itemid
    END
END;
GO

CREATE TRIGGER
    [mobile].[TRG_Mobile_TodoItem_Update]
ON
    [mobile].[TodoItem]
INSTEAD OF
    UPDATE
AS BEGIN
    DECLARE @id AS NVARCHAR(255)
    SELECT @id = inserted.id FROM inserted
    DECLARE @itemid AS BIGINT
    SELECT @itemid = [item_id] FROM [mobile].[TodoItem_SystemProps] WHERE [id] = @id

    IF UPDATE(UserId) BEGIN
        DECLARE @userid AS NVARCHAR(255)
        SELECT @userid = inserted.UserId FROM inserted
        UPDATE [myapp].[TodoItem] SET [UserId] = @userid WHERE [id] = @itemid
    END
    IF UPDATE(Title) BEGIN
        DECLARE @title AS NVARCHAR(255)
        SELECT @title = inserted.Title FROM inserted
        UPDATE [myapp].[TodoItem] SET [Title] = @title WHERE [id] = @itemid
    END
    IF UPDATE(Complete) BEGIN
        DECLARE @complete AS BIT
        SELECT @complete = inserted.Complete FROM inserted
        UPDATE [myapp].[TodoItem] SET [Complete] = @complete WHERE [id] = @itemid
    END
    IF UPDATE(deleted) BEGIN
        DECLARE @deleted AS BIT
        SELECT @deleted = inserted.deleted FROM inserted
        UPDATE [mobile].[TodoItem_SystemProps] SET [deleted] = @deleted WHERE [item_id] = @itemid
    END
END
GO

CREATE TRIGGER
    [mobile].[TRG_Mobile_TodoItem_Delete]
ON
    [mobile].[TodoItem]
INSTEAD OF
    DELETE
AS BEGIN
    DECLARE @id AS NVARCHAR(255)
    SELECT @id = deleted.id FROM deleted
    DECLARE @itemid AS BIGINT
    SELECT @itemid = [item_id] FROM [mobile].[TodoItem_SystemProps] WHERE [id] = @id

    DELETE FROM [myapp].[TodoItem] WHERE [id] = @itemid
    DELETE FROM [mobile].[TodoItem_SystemProps] WHERE [id] = @id
END
GO

A standard SQL view is read-only. By using INSTEAD OF triggers, you can make updates to the view adjust the underlying tables that are used to construct it, and hence make the view read-write. You can read more about creating triggers on MSDN.

You can do similar things with multi-table views. Let’s say you have a mobile view that is comprised of a customer and an order. When you view the order, you want to combine the customer data with the order, and when the order is deleted or removed from the mobile device, the customer data for that order is removed as well. To implement this, use the same technique: create a view that has all the customer data for the order plus the order – call it “CustomerOrder” – and then, when a mobile device updates the record, use triggers to apply the updates to the underlying tables.
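As a sketch of that idea, the combined view might look like the following. The [myapp].[Customer] and [myapp].[Order] tables, their columns, and the [mobile].[Order_SystemProps] table are all hypothetical; the INSTEAD OF triggers would then follow the same pattern as the TodoItem triggers above:

```sql
CREATE VIEW [mobile].[CustomerOrder] AS
SELECT
    [sys].[id], [sys].[createdAt], [sys].[updatedAt], [sys].[version], [sys].[deleted],
    [o].[OrderDate], [o].[Total],
    [c].[Name] AS [CustomerName], [c].[Email] AS [CustomerEmail]
FROM
    [myapp].[Order] [o]
INNER JOIN
    [myapp].[Customer] [c] ON [o].[CustomerId] = [c].[id]
INNER JOIN
    [mobile].[Order_SystemProps] [sys] ON [sys].[item_id] = [o].[id]
GO
```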

Step 6: Test your SQL Setup

The reality is that this is a lot of SQL for someone who doesn’t do a lot of SQL work, and mistakes happen. It’s a good idea to test the SQL by doing manual updates. There are six tests you need to run:

  1. INSERT into the main table, then SELECT from the VIEW – ensure the createdAt and id fields are set.
  2. UPDATE the record you just inserted, then SELECT from the VIEW – ensure the updatedAt and version fields are set.
  3. DELETE the record you just inserted, then SELECT from the System Properties table – ensure the record is deleted.
  4. INSERT into the view, then SELECT from the main table – ensure the record is created.
  5. UPDATE the view, then SELECT from the main table – ensure the record is updated. Also SELECT from the System Properties table – ensure the updatedAt and version fields are updated.
  6. DELETE the record you created via the view, then SELECT from the main table and the System Properties table – ensure both records are deleted.

These are relatively easy tests to run, and they only need to be repeated when you change the tables. If you update the main table, don’t forget to update the triggers as well.
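For example, the first three tests might look like this in a SQL query window (the values are illustrative):

```sql
-- Test 1: INSERT into the main table, then SELECT from the view
INSERT INTO [myapp].[TodoItem] ([UserId], [Title], [Complete])
VALUES ('test@example.com', 'Trigger test', 0);
DECLARE @itemid BIGINT = SCOPE_IDENTITY();

SELECT [id], [createdAt] FROM [mobile].[TodoItem] WHERE [item_id] = @itemid;
-- expect: id and createdAt are populated

-- Test 2: UPDATE the record, then SELECT from the view
UPDATE [myapp].[TodoItem] SET [Complete] = 1 WHERE [id] = @itemid;
SELECT [updatedAt], [version] FROM [mobile].[TodoItem] WHERE [item_id] = @itemid;
-- expect: updatedAt and version have changed

-- Test 3: DELETE the record, then check the system properties table
DELETE FROM [myapp].[TodoItem] WHERE [id] = @itemid;
SELECT COUNT(*) FROM [mobile].[TodoItem_SystemProps] WHERE [item_id] = @itemid;
-- expect: 0
```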

Changing the Mobile Schema

The final thing to do here is to change the SQL schema for my operations to the [mobile] schema. By default, Azure Mobile Apps uses the [dbo] schema, but this table is stored in the [mobile] schema. This is purely an Entity Framework problem. Probably the easiest method is to use HasDefaultSchema in the DbContext:

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            modelBuilder.HasDefaultSchema("mobile");
            modelBuilder.Conventions.Add(
                new AttributeToColumnAnnotationConvention<TableColumnAttribute, string>(
                    "ServiceTableColumn",
                    (property, attributes) => attributes.Single().ColumnType.ToString()
                )
            );
        }

This will move all the referenced tables to the [mobile] schema. If you want to do just one table, it’s a little more complex. Something like:

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            //modelBuilder.HasDefaultSchema("mobile");
            modelBuilder.Entity<TodoItem>().ToTable("TodoItem", "mobile");
            modelBuilder.Conventions.Add(
                new AttributeToColumnAnnotationConvention<TableColumnAttribute, string>(
                    "ServiceTableColumn",
                    (property, attributes) => attributes.Single().ColumnType.ToString()
                )
            );
        }

Finally, you can also do this as a data annotation on the DTO:

using Microsoft.Azure.Mobile.Server;
using Newtonsoft.Json;
using System.ComponentModel.DataAnnotations.Schema;

namespace backend.dotnet.DataObjects
{
    [Table("TodoItem", Schema="mobile")]
    public class TodoItem : EntityData
    {
        public string UserId { get; set; }

        public string Title { get; set; }

        public bool Complete { get; set; }

        // When asked for the string representation, return the JSON
        public override string ToString()
        {
            return JsonConvert.SerializeObject(this);
        }
    }
}

Any of these techniques ensures that Entity Framework uses the appropriate place in your SQL database for queries and updates.

Next Steps

I’ll be honest here. I almost broke my arm patting myself on the back for this one. SQL is not my forte and this works as advertised, including with Soft Delete, Opportunistic Concurrency, and Incremental Offline Sync.

In my next article, I’m going to take a look at the options you have available when the table controller doesn’t quite fit what you want to do. I’ll delve into the world of Custom APIs for both Node.js and ASP.NET. Until then, my changes for today are on my GitHub Repository.

30 Days of Zumo.v2 (Azure Mobile Apps): Day 17 – ASP.NET Backend Introduction

I have concentrated on the Node.js backend for Azure Mobile Apps thus far. However, Azure Mobile Apps also supports an ASP.NET 4.6 (not ASP.NET core) backend. I’m going to start exploring the ASP.NET backend in this article, looking at a lot of the same functionality as I did for the Node.js backend.

Why use the ASP.NET backend?

Given the functionality of the Node.js backend, you may be wondering why you would even consider the ASP.NET backend. Well, there are a few reasons:

  1. You are more familiar with C# and ASP.NET in general.
  2. You want to utilize the large package library available via NuGet.
  3. You are doing cross-platform development in Xamarin and want to share code between backend and client apps.
  4. You want to use more complex types than a string, number, date or boolean.

Similarly, there are good reasons to use the Node.js backend:

  1. You are more familiar with JavaScript and/or Node programming.
  2. You want to utilize the large package library available via npm.
  3. You are primarily a frontend developer, don’t care about the schema and want to use the dynamic schema feature.

I find Node.js simpler to write – it requires less code to produce functionally identical backends. I find ASP.NET easier to debug a lot of the time, with many errors being caught at compile time (something that doesn’t exist in Node) rather than at run time.

Starting with an ASP.NET MVC application

When kicking off a new project, I’d normally tell you to start with an example or template that is close to the pattern you want to deploy. Azure Mobile Apps has templates for the ASP.NET backend in the Azure SDK. Why not start with one of those?

Great question – and you can certainly start with a specific Azure Mobile Apps project template. However, I want to show off how you can add Azure Mobile Apps to any ASP.NET application. Azure Mobile Apps is a good platform for exposing SQL data to a web or mobile application. There is nothing magical about the Azure Mobile Apps templates – they are really just ASP.NET templates with some built-in code. So I’ve started my code with the standard ASP.NET 4.6 MVC template. I’ve made some changes to it – most notably removing the built-in identity manager (which creates a database table for handling sign-ins) and removing Application Insights.

You might be wondering how I get two backends in the same solution and choose which one to deploy. The .deployment file has a project property that tells Azure App Service which project to deploy.
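For reference, the .deployment file is a small ini-style file at the root of the repository. A sketch – the project path here is a guess at this repository’s layout:

```
[config]
project = backend.dotnet/backend.dotnet.csproj
```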

You can find my initial code at my GitHub Repository, tagged as pre-day-17.

Introducing the Azure Mobile Apps .NET Server SDK

Did you know the Azure Mobile Apps team develops all their SDKs as open source on GitHub? Here is a complete list of the SDKs, together with their links:

In addition, the Azure Mobile Apps team publishes the SDKs in “the normal places” – that depends on the client language – npm for JavaScript, for example, or NuGet for the .NET Server SDKs. That means you can generally just work with the SDKs as you normally would.

There are several parts to installing the Server SDK into an ASP.NET application:

  1. Configure Entity Framework for your database
  2. Configure Azure Mobile Apps Server SDK
  3. Create your model
  4. Create a table controller

I’m going to do the basic one today – starting where I did with the Node.js backend – at the beginning, configuring the server to handle the TodoItem model and table.

Configuring Entity Framework

My first stop is to the web.config file. In the standard template, the connection string is called DefaultConnection – I need to change it to match the connection string that is used by Azure Mobile Apps:

  <connectionStrings>
    <add name="MS_TableConnectionString" connectionString="Data Source=(LocalDb)\MSSQLLocalDB;AttachDbFilename=|DataDirectory|\aspnet-backend.dotnet-20160417041539.mdf;Initial Catalog=aspnet-backend.dotnet-20160417041539;Integrated Security=True" providerName="System.Data.SqlClient" />
  </connectionStrings>

Just the name needs to change here. I don’t need to do any other changes because the Azure Mobile Apps Server SDK takes care of it for me.

Configuring Azure Mobile Apps

First stop – I need to add some NuGet packages. Here is the list:

  • Microsoft.Azure.Mobile.Server
  • Microsoft.Azure.Mobile.Server.Entity
  • Microsoft.WindowsAzure.ConfigurationManager
  • Microsoft.AspNet.WebApi.Owin

To install the NuGet packages, right-click on the References node and select Manage NuGet Packages… Click on Browse and then search for the packages. Once you’ve found one, click on the Install button:

day-17-nuget

This is a minimal list – I haven’t included packages that are pulled in as dependencies.

Pretty much any plugin of this magnitude has a startup process. I am placing mine in App_Start/AzureMobile.cs:

using Owin;
using System.Data.Entity;
using System.Web.Http;
using Microsoft.Azure.Mobile.Server.Config;
using Microsoft.Azure.Mobile.Server.Tables.Config;
using backend.dotnet.Models;

namespace backend.dotnet
{
    public partial class Startup
    {
        public static void ConfigureMobileApp(IAppBuilder app)
        {
            HttpConfiguration config = new HttpConfiguration();

            // Configure the Azure Mobile Apps section
            new MobileAppConfiguration()
                .AddTables(
                    new MobileAppTableConfiguration()
                        .MapTableControllers()
                        .AddEntityFramework())
                .MapApiControllers()
                .ApplyTo(config);

            // Initialize the database with EF Code First
            Database.SetInitializer(new AzureMobileInitializer());

            // Link the Web API into the configuration
            app.UseWebApi(config);
        }
    }

    public class AzureMobileInitializer : CreateDatabaseIfNotExists<MyDbContext>
    {
        protected override void Seed(MyDbContext context)
        {
            // You can seed your database here
            base.Seed(context);
        }
    }
}

The major work here is done after the MobileAppConfiguration(). I add any table controllers that are defined and hook them up to the data source defined via the DbContext. Once that is done, I call the database initializer, which will create the database and tables that I need. For that, I need a database context, which is defined in Models/MyDbContext.cs:

using backend.dotnet.DataObjects;
using Microsoft.Azure.Mobile.Server.Tables;
using System.Data.Entity;
using System.Data.Entity.ModelConfiguration.Conventions;
using System.Linq;

namespace backend.dotnet.Models
{
    public class MyDbContext : DbContext
    {
        private const string connectionStringName = "Name=MS_TableConnectionString";

        public MyDbContext() : base(connectionStringName)
        {

        }

        public DbSet<TodoItem> TodoItems { get; set; }

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            modelBuilder.Conventions.Add(
                new AttributeToColumnAnnotationConvention<TableColumnAttribute, string>(
                    "ServiceTableColumn",
                    (property, attributes) => attributes.Single().ColumnType.ToString()
                )
            );
        }
    }
}

This is fairly standard stuff. I set the connection string name to MS_TableConnectionString – I’m using that name because it’s the connection string that Azure App Service creates when you add a data connection. The OnModelCreating() method is boilerplate for Azure Mobile Apps – just go with it.

Finally, don’t forget to link the configuration into the Startup.cs file:

using Microsoft.Owin;
using Owin;

[assembly: OwinStartupAttribute(typeof(backend.dotnet.Startup))]

namespace backend.dotnet
{
    public partial class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            ConfigureMobileApp(app);
        }
    }
}

If you find yourself not able to query anything at all and are getting 404 responses to everything, then it’s likely you missed this step.

Create a Model (or DTO)

Normally, I would be talking about models here. Azure Mobile Apps uses things called “Data Transfer Objects” or DTOs for short. They inherit from EntityData – this adds the five system columns to my table. Here is my DataObjects/TodoItem.cs file:

using Microsoft.Azure.Mobile.Server;

namespace backend.dotnet.DataObjects
{
    public class TodoItem : EntityData
    {
        public string Text { get; set; }

        public bool Complete { get; set; }
    }
}

Want a different field name on the remote end? Just use a JsonProperty transform – like this:

using Microsoft.Azure.Mobile.Server;
using Newtonsoft.Json;

namespace backend.dotnet.DataObjects
{
    public class TodoItem : EntityData
    {
        public string Text { get; set; }

        [JsonProperty(PropertyName = "complete")]
        public bool IsComplete { get; set; }
    }
}

I’m not doing this since I’m reusing the database I used for the Node.js backend. At this point, everything should compile, so you can do some sanity checks on your code – are you missing the NuGet packages, for example?

Create a Table Controller

A Table Controller looks just like a regular controller. In fact, you get some scaffolding help from the Azure SDK. Just right-click on the Controllers folder in the Solution Explorer and select Add -> Controller… – the Azure Mobile Apps Table Controller will be listed:

day-17-add-new-controller

On the next screen, you will be asked for the model class and the DbContext – I’ve conveniently just completed those:

day-17-add-new-controller-2

Click on OK and that’s it – you don’t need to do anything else.

Running locally

You can just press F5 to run this now (or right-click on the project, use Set As Startup Project… and then run it). Test it with Postman:

day-1-postman

Next Steps

I’m not ready to deploy this to Azure yet. I’ve got just a basic ASP.NET backend running. My Node.js backend handled authentication and provided a unique view of the data based on the user’s email address. I want to implement that capability before I deploy to Azure. That will be the topic next time.

Until then, you can get my code from my GitHub Repository.

The Most Popular Articles of the Year

I suspect there may be a bunch of blog posts around the Internet that wrap up the year. Here are the most popular articles on my blog for the year:

React with ES6 and JSX

In fifth place, I did a series of articles on working with ECMAScript 2015 and React/Flux, working on getting a typical application working. I also poked into some stage0 proposals for ECMAScript7. I really enjoy working with React, but I’m torn between Custom Elements (and Polymer specifically) and React. Custom Elements are more standard – React is more popular. I’ll be revisiting this again next year (which is in 24 hours, but I’ll likely take longer than that).

Aurelia – a new framework for ES6

In fourth place, people were interested in how I would do my test tutorial with Aurelia. Aurelia is a really interesting framework and I prefer it over Ember and Angular. The learning curve is relatively small, although I will have to revisit the whole framework discussion as Angular 2 and Ember next-gen are coming out. This tutorial included using authentication with Auth0 and accessing remote resources.

ASP.NET MVC6 and Bootstrap

A one-off article on adding Bootstrap to ASP.NET MVC6 applications came in third place. There are other Bootstrap posts that are also interesting, including one that got made into a video.

Talking of ASP.NET MVC6

With the next revision of ASP.NET imminent, I took several strolls through the alpha and beta releases of that framework. There is a lot to like about it and a lot that is familiar. I’ve mostly switched over to a Node.js environment now, so I’m not expecting to do much more in this area, but it is a much nicer environment than the old ASP.NET.

And finally, Visual Studio Tooling!

Fueled in large part by a link from the ASP.NET Community Articles page, the #1 page for the year was an article I wrote that described the Web Development extensions I used in Visual Studio. It also generated the most discussion with lots of people telling me about their favorite extensions. I’m using Visual Studio Code more these days – it’s lighter weight. I still love this list though.

Next Year

2015 was definitely the year that frameworks changed – In .NET land we got a look at the next revision of the ASP.NET framework, and in JavaScript land we got Aurelia, React, Flux, Relay, Angular-2, ES2015, Web Components, and several new versions of Node. I hope the framework releases calm down in 2016 so we can start sorting out the good from the bad and ugly. I’m going to take new looks at all this and work on my side projects. I hope you will continue the journey with me.

Azure Web Apps and ASP.NET5 Configuration

In one of my prior posts, I covered how to deal with SQL Azure with an Azure App Service Web App. I described how to create the SQL Azure database and how to set up an app setting to tell your NodeJS application where the SQL Azure database was.

What about ASP.NET5?

Well, it turns out that the AddEnvironmentVariables() method takes care of it for you, if you know where to look. Take a look at my Startup.cs file:

using Microsoft.AspNet.Builder;
using Microsoft.AspNet.Http;
using Microsoft.Framework.DependencyInjection;
using Microsoft.AspNet.Hosting;
using Microsoft.Dnx.Runtime;
using Microsoft.Framework.Configuration;
using System;

namespace AspNetSqlConnStr
{
    public class Startup
    {
        public Startup(IHostingEnvironment env, IApplicationEnvironment appEnv)
        {
            var cb = new ConfigurationBuilder(appEnv.ApplicationBasePath);
            cb.AddJsonFile("config.json", optional: true);
            cb.AddEnvironmentVariables();

            Configuration = cb.Build();

        }

        public IConfiguration Configuration
        {
            get;
            private set;
        }

        public void ConfigureServices(IServiceCollection services)
        {
            // The SQLCONNSTR_MS_TableConnectionString becomes Data:MS_TableConnectionString
            var dataSource = Configuration.GetSection("Data:MS_TableConnectionString");
            var strings = dataSource.GetChildren();
            Console.Out.WriteLine("dataSource found!");
        }

        public void Configure(IApplicationBuilder app)
        {
            app.Run(async (context) =>
            {
                await context.Response.WriteAsync("Hello World!");
            });
        }
    }
}

This is ASP.NET5 Beta6 code, so it may be slightly different if you are using a different beta release. Create a new ASP.NET Application and add the highlighted lines. Put a breakpoint on the Console line (#33) so that you can take a look at the data.

Next, set a value for SQLCONNSTR_MS_TableConnectionString. From my prior post, this is an app setting in the Azure Portal and appears as an environment variable when the application is run. You can do this in Visual Studio by right-clicking on the project and selecting Properties. Under the Debug tab is a place for environment variables:

[Screenshot: the project Properties Debug tab in Visual Studio, with the environment variables section]

The actual value doesn’t matter at this point – it just has to be something. However, you need to ensure it is a valid SQL Server connection string when you are actually using it.

Now you can run that application and check out the output and the breakpoint:

[Screenshot: the debugger stopped at the breakpoint, showing the dataSource configuration section]

Note that the dataSource section has two children: Data:MS_TableConnectionString:ConnectionString holds the connection string itself, and Data:MS_TableConnectionString:ProviderName contains the class that you need to instantiate to connect to it. This works for MySQL (with a prefix of MYSQLCONNSTR) as well as other providers.

So, how do you use this? Well, probably the easiest way is to grab the connection string like this:

var connectionString = Configuration.GetSection("Data:MS_TableConnectionString:ConnectionString").Value;
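As a concrete sketch of where that string might end up: if you are using Entity Framework 7, it feeds the provider setup in ConfigureServices(). This is a sketch only – the MyDbContext class and the beta-era EF7 fluent wiring are my assumptions, so adjust for your beta release:

```csharp
using Microsoft.Data.Entity;
using Microsoft.Framework.DependencyInjection;

public void ConfigureServices(IServiceCollection services)
{
    // Pull the connection string out of the translated environment variable
    var connectionString = Configuration
        .GetSection("Data:MS_TableConnectionString:ConnectionString").Value;

    // Hand it to the SQL Server provider (EntityFramework.SqlServer package,
    // MyDbContext is a hypothetical DbContext of your own)
    services.AddEntityFramework()
        .AddSqlServer()
        .AddDbContext<MyDbContext>(options => options.UseSqlServer(connectionString));
}
```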

You can then use this string as part of the Entity Framework setup. You could also do something like the following for injection into other objects:

        public void ConfigureServices(IServiceCollection services)
        {
            // The SQLCONNSTR_MS_TableConnectionString becomes Data:MS_TableConnectionString
            services.Configure<DbSettings>(Configuration.GetSection("Data:MS_TableConnectionString"));
            Console.Out.WriteLine("dataSource found!");
        }

The DbSettings class looks like the following:

namespace AspNetSqlConnStr.Models
{
    public class DbSettings
    {
        public string ConnectionString { get; set; }
        public string ProviderName { get; set; }
    }
}

You can now ask for this in your controllers and other code using dependency injection – just like normal.
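A sketch of what that looks like in practice – the controller here is hypothetical, and IOptions&lt;T&gt; comes from Microsoft.Framework.OptionsModel in this beta:

```csharp
using Microsoft.AspNet.Mvc;
using Microsoft.Framework.OptionsModel;
using AspNetSqlConnStr.Models;

namespace AspNetSqlConnStr.Controllers
{
    [Route("api/[controller]")]
    public class DiagnosticsController : Controller
    {
        private readonly DbSettings dbSettings;

        // The DI container fills this in from the services.Configure<DbSettings>() call
        public DiagnosticsController(IOptions<DbSettings> settings)
        {
            this.dbSettings = settings.Options;
        }

        // GET: api/diagnostics - returns the provider name for inspection
        [HttpGet]
        public string Get()
        {
            return dbSettings.ProviderName;
        }
    }
}
```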

ASP.NET, ES2015, React, SystemJS and JSPM

I’ve started investigating a relative newcomer to the JavaScript ecosystem, but one that is making a lot of noise. That library is React. When I combined it with ASP.NET, I found the build processes confusing. There just isn’t a good recipe out there that allows you to write React/JSX components in ECMAScript 6 and get anything like the debugging you want. You lose the source map and any association with the actual source code – not good for a debugging environment. So how do you handle this?

Let’s rewind a bit.

What is React again? It’s a component technology. It occupies the same space as Polymer in that respect, although with vastly differing implementation details. It handles web-based components. It’s got various advantages and disadvantages over other component technologies, but it does the same thing at the end of the day.

I’m not going to go over yet another React tutorial. Really, there are plenty of them even if you don’t know much web dev, including tutorials on React and ES6.

Why am I learning them? Isn’t Polymer enough? Well, no. Firstly, React and Flux are a framework combination that I wanted to learn. I want to learn it mostly because it isn’t MVC and I wanted to see what a non-MVC framework looked like. Flux is the framework piece and React provides the views. Then there are things like React Native – a method of making mobile applications (only iOS at the moment) out of React elements. It turns out to be extremely useful.

As a module system, I like to use jspm. It’s optimized for ES6, so that was my first stop. Can I use jspm + ES6 + JSX + React all in the same application? Let’s make a basic Hello World React app using an ASP.NET based server in Visual Studio.

Step 1: Set up the Server

There really isn’t anything complicated about the server this time. I’m just adding Microsoft.AspNet.StaticFiles to the project.json:

{
  "webroot": "wwwroot",
  "version": "1.0.0-beta5",

  "dependencies": {
    "Microsoft.AspNet.Server.IIS": "1.0.0-beta5",
    "Microsoft.AspNet.Server.WebListener": "1.0.0-beta5",
    "Microsoft.AspNet.StaticFiles": "1.0.0-beta5"
  },

This isn’t the whole file, but I only changed one line. The Startup.cs file is similarly easy:

using Microsoft.AspNet.Builder;
using Microsoft.Framework.DependencyInjection;

namespace WebApplication1
{
    public class Startup
    {
        public void ConfigureServices(IServiceCollection services)
        {
        }

        public void Configure(IApplicationBuilder app)
        {
            app.UseStaticFiles();
        }
    }
}

This gets us a web server that serves up the stuff inside wwwroot.

Step 2: Install Libraries

Next is jspm. Run jspm init like I have shown before. Then run:

jspm install react npm:react-dom jsx

This will install the ReactJS library and the JSX transformer for us. I’m using v0.14.0-beta1 of the ReactJS library. They’ve just made a change where some of the rendering code is separated out into a react-dom library. That library hasn’t made it into the JSPM registry yet, so I have to specify where it is.

Step 3: Write Code

First off, here is my wwwroot/index.html file:

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8">
    <title>Hello World with React</title>
</head>
<body>
    <div id="root"></div>

    <script src="jspm_packages/system.js"></script>
    <script src="config.js"></script>
    <script>System.import("app.js!jsx");</script>
</body>
</html>

Note the !jsx at the end of the System.import statement. That tells SystemJS to run the file through the JSX transformer first. Now, let’s write wwwroot/app.js:

import React from "react";
import ReactDOM from "react-dom";

class HelloWorld extends React.Component {
    render() {
        return (<h1>Hello World</h1>);
    }
}

ReactDOM.render(<HelloWorld/>, document.getElementById("root"));

Don’t try this on any version of React prior to v0.14.0-beta1. As I mentioned, there are two libraries now – react for creating react components and react-dom for rendering them. You need both.

Step 4: Debug Code

This is a debugger’s dream. I can see the code and the DOM side-by-side in the browser:

[Screenshot: browser dev tools showing the original ES6/JSX source alongside the rendered DOM]

Yep – that’s the original code. The JSX has been transformed into JavaScript, but the ES6 code is right there. That means I can alter it in situ, set breakpoints, and generally work with it. If an exception occurs, it points at the original source code.
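For the curious, the transform itself is mechanical: every JSX tag becomes a React.createElement() call. Here is a sketch with a stand-in createElement (my own stub, not React’s) so you can see the shape of what the jsx plugin emits:

```javascript
// Stand-in for React.createElement so the transform's shape is visible
// without loading the real React library. The real function returns a
// proper React element, but the call signature is the same.
const React = {
  createElement(type, props, ...children) {
    return { type, props: props || {}, children };
  }
};

// JSX:      return (<h1>Hello World</h1>);
// becomes:  return React.createElement("h1", null, "Hello World");
const element = React.createElement("h1", null, "Hello World");
```

The element is just a description of the desired DOM – type, props, children – which is what ReactDOM.render() later reconciles against the page.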

I wouldn’t want to ship this code, though. This small HelloWorld project loads 2.8MB across more than 230 requests, taking 4.4 seconds (on localhost!) – just to render one React component. I’d definitely use Webpack or Browserify on a production site. But this is great for debugging.

Step 5: Code Control – Separate Components

Let’s say I wanted to give HelloWorld its own file. Code organization is important. It’s relatively simple with SystemJS. Just place the following code in wwwroot/components/HelloWorld.js:

import React from "react";

export default class HelloWorld extends React.Component {
    render() {
        return (<h1>Hello World</h1>);
    }
}

This is a copy of the original code, made into an ES6 module. Now I can alter the app.js file accordingly:

import React from "react";
import ReactDOM from "react-dom";

import HelloWorld from "./components/HelloWorld.js!jsx";

ReactDOM.render(<HelloWorld/>, document.getElementById("root"));

Final Notes

The Visual Studio editor is not doing me any favors here. I get errors all over the place. However, I can use this same technique in Visual Studio Code (which handles this syntax better), Atom and other places. This, however, is a great step towards debugging React code in a browser.

ASP.NET and the Secret Store

I’ve lost count of the number of times I’ve checked in something I shouldn’t have: the ClientSecret for my Auth0 app configuration, an embedded Administrator password, or worse. It’s all gone into Git and then been synced to GitHub. I’ve mentioned numerous times that you should add some JSON file or other to .gitignore. So I was rather pleased when ASP.NET 5 beta5 came out: it had a solution to my problem in the form of UserSecrets. In this tutorial, I’m going to cover the nuts and bolts of user secrets.

Step 1: Install DNVM

The Getting Started with ASP.NET 5 and DNX page suggests that the “latest preview of Visual Studio 2015” installs dnvm. Well, not for me. I had to install DNVM on my own. To do this, I opened up a PowerShell prompt and ran the following:

&{$Branch='dev';iex ((new-object net.webclient).DownloadString('https://raw.githubusercontent.com/aspnet/Home/dev/dnvminstall.ps1'))}

The installer says it is adding things to your path. It didn’t for me. I added the resulting path – C:\Users\adrian\.dnx\bin – to my PATH in my profile.ps1 file. Specifically, I added the following:

Add-Path "$(${env:HOME})\.dnx\bin"

You can find out about my PathUtils module elsewhere on my blog.

Once I’d got dnvm installed, I could download the latest version of DNX:

dnvm upgrade

Step 2: Install SecretManager

Once you have dnx installed, you can install the SecretManager:

dnu commands install SecretManager

This will allow you to run a command called user-secret:

user-secret --help

If you get this far, you are all set.

Step 3: Create an ASP.NET5 Application

I’m going to create a simple ASP.NET 5 WebAPI application for demonstration purposes. It will have one route – /api/settings – that outputs the information I need to configure Auth0 in the browser. The idea is that my browser application downloads this config as a JSON document and then uses the information in it to configure the Auth0 authentication. To test it, I’m going to use Postman, a plug-in for Google Chrome. Let’s start with an Empty ASP.NET 5 application and do a quick tour of the code. First off, the project.json file:

{
    "webroot": "wwwroot",
    "version": "1.0.0-*",

    "dependencies": {
        "Microsoft.AspNet.Mvc": "6.0.0-beta5",
        "Microsoft.AspNet.Server.IIS": "1.0.0-beta5",
        "Microsoft.AspNet.Server.WebListener": "1.0.0-beta5",
        "Microsoft.Framework.Configuration.Json": "1.0.0-beta5"
    },

    .....
}

I haven’t reproduced the entire file – just the bits that matter – the dependencies. Just two new packages are absolutely required here. Now the Startup.cs file:

using Microsoft.AspNet.Builder;
using Microsoft.AspNet.Hosting;
using Microsoft.Framework.Configuration;
using Microsoft.Framework.DependencyInjection;
using Microsoft.Framework.Runtime;

using UserSecretWeb.Settings;

namespace UserSecretWeb
{
    public class Startup
    {
        public Startup(IHostingEnvironment env, IApplicationEnvironment appEnv)
        {
            var configBuilder = new ConfigurationBuilder(appEnv.ApplicationBasePath);
            configBuilder.AddJsonFile("config.json", optional: true);
            configBuilder.AddJsonFile($"config.{env.EnvironmentName}.json", optional: true);
            configBuilder.AddEnvironmentVariables();

            Configuration = configBuilder.Build();
        }

        public IConfiguration Configuration
        {
            get;
            private set;
        }

        public void ConfigureServices(IServiceCollection services)
        {
            services.Configure<Auth0Settings>(this.Configuration.GetConfigurationSection("Auth0"));

            services.AddMvc();
        }

        public void Configure(IApplicationBuilder app)
        {
            app.UseMvc();
        }
    }
}

This comes directly from my Configuration and Dependency Injection article. I’ve got a Settings/Auth0Settings class to hold my configuration – it’s basically a model:

namespace UserSecretWeb.Settings
{
    public class Auth0Settings
    {
        public string Domain
        {
            get;
            set;
        }

        public string ClientID
        {
            get;
            set;
        }

        public string ClientSecret
        {
            get;
            set;
        }
    }
}

I also have a controller to expose the /api/settings WebAPI – that’s in Controllers/SettingsController.cs:

using Microsoft.AspNet.Mvc;
using Microsoft.Framework.OptionsModel;
using UserSecretWeb.Settings;

namespace UserSecretWeb.Controllers
{
    [Route("api/[controller]")]
    public class SettingsController : Controller
    {
        private Auth0Settings auth0Settings = null;

        public SettingsController(IOptions<Auth0Settings> settings)
        {
            this.auth0Settings = settings.Options;
        }

        // GET: api/settings
        [HttpGet]
        public Auth0Settings Get()
        {
            return auth0Settings;
        }
    }
}

Finally, I need some configuration to send, so here is my config.json file.

{
    "Auth0": {
        "Domain": "YOUR-DOMAIN.auth0.com",
        "ClientID": "YOUR-CLIENT-ID",
        "ClientSecret": "YOUR-CLIENT-SECRET"
    }
}

Note that these are the default settings. I always want something in there so that the application doesn’t blow up. They also serve as documentation for what the settings should be and their format.

Run this project and add api/settings to the end of the URI and you get the following:

[Screenshot: the JSON response from /api/settings showing the default settings]

The JSON is just as I would expect. I could, of course, ensure that the secret isn’t transmitted (but is still available to the backend). However, this is good enough to test with. If you want to start here, you can download the entire project from my GitHub Repository.

Step 4: Integrate UserSecrets

Integrating User Secrets into the process is remarkably easy. It’s a single line change to the Startup.cs constructor:

        public Startup(IHostingEnvironment env, IApplicationEnvironment appEnv)
        {
            var configBuilder = new ConfigurationBuilder(appEnv.ApplicationBasePath);

            configBuilder.AddJsonFile("config.json", optional: true);
            configBuilder.AddJsonFile($"config.{env.EnvironmentName}.json", optional: true);
            configBuilder.AddUserSecrets();
            configBuilder.AddEnvironmentVariables();

            Configuration = configBuilder.Build();
        }

You do need to add the Microsoft.Framework.Configuration.UserSecrets package to your project.json:

{
    "webroot": "wwwroot",
    "version": "1.0.0-beta5",
    "userSecretsId": "UserSecretsDemo",

    "dependencies": {
        "Microsoft.AspNet.Mvc": "6.0.0-beta5",
        "Microsoft.AspNet.Server.IIS": "1.0.0-beta5",
        "Microsoft.AspNet.Server.WebListener": "1.0.0-beta5",
        "Microsoft.Framework.Configuration.UserSecrets": "1.0.0-beta5",
        "Microsoft.Framework.Configuration.Json": "1.0.0-beta5"
    },

Note the userSecretsId – in the final version of Visual Studio, this is likely to be auto-generated for you. Right now, you have to do it yourself. It’s a bucket for your secrets. If you have multiple projects that all share the same secrets, you only have to set them once.

Step 5: Set up personal user secrets

Let’s set up a personal user-secret for the Domain and ClientID of our configuration file:

cd ~\GitHub\blog-code\UserSecretWeb
user-secret set Auth0:Domain shellmonger.auth0.com
user-secret set Auth0:ClientID "something with a space in it"

Note that I have to be in the project directory. More specifically, I need to be in the directory containing the project.json file. Also, you need to have the userSecretsId in the project.json file. If you don’t have that, it will complain.

You can do a user-secret list to list out the contents of the user secrets store. You can also get and remove individual keys.

When you run your code now and go to that same /api/settings URI, you will get the following:

[Screenshot: the /api/settings response showing the user-secret values overriding the defaults]

However, the secrets you want to protect will never be checked in, because they are stored outside of the git working tree.

Step 6: Investigate

So, where are the secrets stored? With these settings, my secrets were stored in %APPDATA%\Microsoft\UserSecrets\UserSecretsDemo in a file called secrets.json. There is no encryption involved, so you can just display the file.

Finally, I mentioned you might not want to show the ClientSecret – that’s for internal use and you shouldn’t be passing that around. However, the model on the server still needs it. ASP.NET uses Newtonsoft JSON.NET as a serializer, so I can tell the serializer to ignore it using a decorator in the model Settings/Auth0Settings.cs:

using Newtonsoft.Json;

namespace UserSecretWeb.Settings
{
    public class Auth0Settings
    {
        public string Domain
        {
            get;
            set;
        }

        public string ClientID
        {
            get;
            set;
        }

        [JsonIgnore]
        public string ClientSecret
        {
            get;
            set;
        }
    }
}

You can get this code, as always, from my GitHub Repository.

Writing Custom Middleware for ASP.NET

In my last article I decoded a JSON Web Token to get the authorization information. This was a follow on from my prior articles about submitting a JSON Web Token via the Aurelia HTTP Client, authenticating the client side in Aurelia using the Auth0 service, and getting a JSON Web Token from Auth0. However, I left the token decoding a little unfinished. Yes, I decoded a token, but from within an ASP.NET Controller. The normal way to do authorization is with middleware.

Sidestepping – Middleware?

When you make a request to an ASP.NET web application (whether it be standalone, MVC or WebAPI), your request goes through a series of software “pipes”. A pipe can handle the request, the response, or both, and modify either. You are already familiar with pipes – the ASP.NET MVC handler is an example of one, and so is ASP.NET Identity. These pipes are called middleware. They are configured in the Configure method of Startup.cs like this:

        public void Configure(IApplicationBuilder app)
        {
            app.UseErrorPage(ErrorPageOptions.ShowAll);
            app.UseStaticFiles();
            app.UseJsonWebTokenAuthorization();
            app.UseMvc();
        }

The app.UseJsonWebTokenAuthorization() line configures the new middleware. It doesn’t exist yet, so expect a red squiggly line.

Injecting Middleware

To inject that JsonWebTokenAuthorization middleware I have to write an extension class that allows me to inject it. I’ve created a folder called Middleware and created a file called JWTExtensions.cs in that folder with the following contents:

using aurelia_2.Middleware;

namespace Microsoft.AspNet.Builder
{
    public static class JWTExtensions
    {
        public static IApplicationBuilder UseJsonWebTokenAuthorization(this IApplicationBuilder builder)
        {
            return builder.UseMiddleware<JsonWebTokenAuthorization>();
        }
    }
}

The builder.UseMiddleware&lt;JsonWebTokenAuthorization&gt;() call creates an object from the JsonWebTokenAuthorization class (this is our middleware) and tells the ASP.NET pipeline builder to use it as middleware. JsonWebTokenAuthorization will get the same red squiggly because we haven’t written the middleware yet.

A Simple Middleware Example

I needed to create a simple middleware example for investigation. What did the middleware get fed? Could I access everything I needed to access? Questions like this are really only answered by setting a breakpoint and looking at the data. I created a JsonWebTokenAuthorization.cs class in the Middleware folder with these contents:

using System.Diagnostics;
using System.Threading.Tasks;
using Microsoft.AspNet.Builder;
using Microsoft.AspNet.Http;

namespace aurelia_2.Middleware
{
    public class JsonWebTokenAuthorization
    {
        private readonly RequestDelegate next;

        public JsonWebTokenAuthorization(RequestDelegate next)
        {
            this.next = next;
        }

        public Task Invoke(HttpContext context)
        {
            Debug.WriteLine("In JsonWebTokenAuthorization.Invoke");
            return next(context);
        }
    }
}

The Debug.WriteLine is only there to set a breakpoint on. In the final version, I’ll remove the System.Diagnostics and Debug.WriteLine lines. This middleware is as basic as it gets. The pipeline calls Invoke with an HttpContext (which is a wrapper for the request, response, identity, etc.) and then you call the next thing in the pipeline.

You can run this application. After clicking Continue a few times, click on the Spells link and you will be able to investigate the request that is important to you:

[Screenshot: the debugger stopped in Invoke, showing the HttpContext and its request headers]

Note that context.Request.Headers contains the information you are after. This gives me enough information to decode the JWT and add it to my context.

Decoding The Json Web Token in Middleware

Here is my replacement Invoke method in the JsonWebTokenAuthorization file:

        public Task Invoke(HttpContext context)
        {
            if (context.Request.Headers.ContainsKey("Authorization"))
            {
                var authHeader = context.Request.Headers["Authorization"];
                var authBits = authHeader.Split(' ');
                if (authBits.Length != 2)
                {
                    Debug.WriteLine("[JsonWebTokenAuthorization] Ignoring Bad Authorization Header (count!=2)");
                    return next(context);
                }
                if (!authBits[0].ToLowerInvariant().Equals("bearer"))
                {
                    Debug.WriteLine("[JsonWebTokenAuthorization] Ignoring Bad Authorization Header (type!=bearer)");
                    return next(context);
                }

                string claims;
                try
                {
                    var b64secret = config.Get("Auth0:ClientSecret").Replace('_', '/').Replace('-', '+');
                    var secret = System.Convert.FromBase64String(b64secret);
                    claims = JWT.JsonWebToken.Decode(authBits[1], secret);
                }
                catch (JWT.SignatureVerificationException)
                {
                    Debug.WriteLine("[JsonWebTokenAuthorization] Ignoring Bad Authorization (JWT signature doesn't match)");
                    return next(context);
                }
                catch (FormatException)
                {
                    Debug.WriteLine("[JsonWebTokenAuthorization] Ignoring Bad Client Secret");
                    return next(context);
                }

                Debug.WriteLine(string.Format("[JsonWebTokenAuthorization] JWT Decoded as {0}", claims));
            }
            Debug.WriteLine("In JsonWebTokenAuthorization.Invoke");
            return next(context);
        }

The logic of this code is identical to the code I wrote in the last article. I’ve tightened it up somewhat by writing error messages to the log (instead of sending an error to the user) and trapping the exceptions that the library routines generate on a bad Authorization header or configuration. I’ve also adjusted the constructor for this class as follows:

        private readonly RequestDelegate next;
        private readonly IConfiguration config;

        public JsonWebTokenAuthorization(RequestDelegate next)
        {
            this.next = next;
            this.config = Startup.Configuration;
        }

This is most definitely not the best way to configure an ASP.NET middleware class, but it works for now and I’ll get to configuration another time. Note that the decode will fail if the JWT is expired, so I don’t have to worry about checking the expiry time myself.

Creating an Identity

Eventually, I want my Controller to have an [Authorize] decorator on it. This tells the ASP.NET MVC system that it needs to authorize the user and not call the method if the user is not authorized. The Authorize decorator is actually defined in the class AuthorizeAttribute. That’s a complicated beast, able to handle users, roles and ad-hoc policies, and is contained within the Microsoft.AspNet.Authorization namespace. Check out the source code. The basic premise we are after is contained in the DenyAnonymousAuthorizationRequirement class. That class is actually fairly readable, and it basically says “the user is authorized if any Identity object in the Identities list is authenticated”.

That means what I have to do is create an Identity with a specific claim (pulled from the sub field of the JWT claim) and then set the IsAuthenticated flag on that Identity. Finally, I need to add the Identity to the list of Identities in the request. Here is the code:

                var identity = new ClaimsIdentity(
                    new[]
                    {
                        new Claim(ClaimTypes.NameIdentifier, claims, xmlString, "JWT-Issuer"),
                        new Claim(ClaimTypes.Name, claims, xmlString, "JWT-Issuer"),
                    },
                    "JWT-Issuer",
                    ClaimsIdentity.DefaultNameClaimType,
                    ClaimsIdentity.DefaultRoleClaimType);
                context.User.AddIdentity(identity);

I copied this code from the Twitter code within the AspNet.Security package. xmlString is a constant that I’ve defined at the top of the class to be http://www.w3.org/2001/XMLSchema#string.
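For completeness, that constant is nothing more than a class-level string; the exact declaration below is my reconstruction, since only its value appears in the article:

```csharp
public class JsonWebTokenAuthorization
{
    // The claim value type URI for a plain string (XML Schema)
    private const string xmlString = "http://www.w3.org/2001/XMLSchema#string";

    // ... rest of the middleware class as shown above ...
}
```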

Now I can change the code in SpellsController to this:

        [Authorize]
        [Route("")]
        public string GetAll()
        {
            return "{id:1}";
        }

Running the project will do two things:

  1. If I’m signed out, the WebAPI call will return a 401 Unauthenticated response code – I can use this in my Aurelia app to trigger an authentication.
  2. If I’m signed in, the WebAPI call will return the expected JSON string.

The “claim” is in the Name of the identity and that’s a composite JSON object. You can see it using the JSON Visualizer in Visual Studio. Set a breakpoint on the Debug.WriteLine that says JWT Decoded. When it is hit, go to the Locals tab and expand the identity. Click on the down-arrow on the right hand side of the row that shows the Name property, then select JSON Visualizer. You will get something like this:

[Screenshot: the JSON Visualizer in Visual Studio showing the decoded JWT claims]

Decoding the JSON Response

I really want to have the JWT-Issuer replaced by the iss (or Issuer) field and the name replaced by the sub (or Subject) field. Now that I have a plaintext token that I have verified, I can trust that it has not been mutilated in transit. This allows me to use a standard mechanism to decode it:

                var jwt = JsonConvert.DeserializeObject<JsonWebToken>(claims,new JsonSerializerSettings
                {
                    MissingMemberHandling = MissingMemberHandling.Ignore
                });

                var identity = new ClaimsIdentity(
                    new[]
                    {
                        new Claim(ClaimTypes.NameIdentifier, jwt.Subject, xmlString, jwt.Issuer),
                        new Claim(ClaimTypes.Name, jwt.Subject, xmlString, jwt.Issuer),
                        new Claim(ClaimTypes.UserData, claims, xmlString, jwt.Issuer)
                    },
                    jwt.Issuer,
                    ClaimsIdentity.DefaultNameClaimType,
                    ClaimsIdentity.DefaultRoleClaimType);
                context.User.AddIdentity(identity);

I’ve created a new class – JsonWebToken.cs – to hold the claim:

using Newtonsoft.Json;

namespace aurelia_2.Middleware
{
    public class JsonWebToken
    {
        [JsonProperty("iss")]
        public string Issuer { get; set; }

        [JsonProperty("sub")]
        public string Subject { get; set; }

        [JsonProperty("aud")]
        public string Audience { get; set; }

        [JsonProperty("exp")]
        public long Expiry { get; set; }

        [JsonProperty("iat")]
        public long IssuedAt { get; set; }
    }
}
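One wrinkle worth knowing: per the JWT specification, exp and iat are numeric seconds since the Unix epoch, which is why I’ve typed them as long. If you ever need them as real dates, a small helper (a sketch of my own, not part of the article’s code) does the conversion:

```csharp
using System;

public static class JwtTimes
{
    // JWT NumericDate values are seconds since 1970-01-01T00:00:00Z (RFC 7519)
    public static DateTime FromUnixTime(long seconds)
    {
        var epoch = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);
        return epoch.AddSeconds(seconds);
    }
}
```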

Now my issuer is my Auth0 domain instead of the custom issuer string and my name (which I will use as a unique identifier for the user) is the JWT subject field.

Wrap Up

That’s it for my investigation into authentication with ASP.NET MVC6 WebAPI. To re-cap:

  1. I followed the Aurelia tutorial, but adjusted for TypeScript and ASP.NET
  2. I added an Auth0 pop-up to authenticate using a service
  3. I used that authentication to affect routing on the client side
  4. I added the authorization JWT to a WebAPI request
  5. I decoded the JWT in a Controller
  6. I made the JWT an authorization middleware (this article)

That’s a lot of code and I hope you enjoy it. The code is on my GitHub Repository.