How to use Visual Studio 2015 with ASP.NET MVC5 and TypeScript 2.1+

TypeScript is a wonderful language for front-end development. It makes front-end code feel a lot like C#, and in the end the TypeScript compiles down to the same JavaScript you would have written without it. The first step is to set up Visual Studio to use TypeScript. The official website has instructions for ASP.NET MVC4, but they don't work smoothly with MVC5 and the latest version of TypeScript; following them leads to a compilation error telling you that VSTSC doesn't exist. In this article, I'll show you the quickest way to get TypeScript working.

The first step is to download the latest version of TypeScript. The installer puts TypeScript under Program Files (C:\Program Files (x86)\Microsoft SDKs\TypeScript). Be aware that you may already have older versions installed (such as 1.6 or 1.8), but you want 2.1.

Installing TypeScript takes about 3 minutes. Once it's done, the second step is to add a tsconfig.json file at the root of your project. This file configures TypeScript: for example, where to find the TypeScript sources and where to output the compiled JavaScript. Here is an example:

{
  "compilerOptions": {
    "sourceMap": true,
    "target": "es6",
    "outDir": "./Scripts/App"
  },
  "files": [
    "./src/app.ts"
  ]
}

This configuration tells the compiler to take app.ts from the src folder and create the corresponding JavaScript in the outDir. In this example, the input comes from /src/ and the output goes to /Scripts/App/, which is the default JavaScript folder in ASP.NET MVC. It could have been any other folder, as long as the .cshtml refers to it. Speaking of which, we need to update the .cshtml that consumes the JavaScript by adding a script tag as we normally would.

<script src="~/Scripts/App/app.js"></script>

Before going any further, let's talk about the tsconfig.json. The options are very basic. The first one enables source maps, which let you debug the TypeScript files directly instead of the generated JavaScript. The second is the target, which indicates which version of JavaScript (ECMAScript) to emit. The third is where to save the compiled JavaScript files, and files lists the inputs.

From here, you just need a TypeScript file called app.ts with some code in it. As you would normally do with C#, go to Visual Studio's Build menu and choose Build Solution; this produces the JavaScript output. You may not see the output file if “Show All Files” is not selected in the Solution Explorer.

At that point, you may find it cumbersome to add files manually in a big project. That is why you can change the tsconfig.json to compile all TypeScript files in specific folders.

{
  "compilerOptions": {
    "sourceMap": true,
    "target": "es6",
    "outDir": "./Scripts/App"
  },
  "include": [
        "src/**/*"
    ]
}

This will go through all .ts, .tsx, and .d.ts files under src and generate the corresponding JavaScript.

While writing your TypeScript, you may run into a problem where Visual Studio 2015 tells you that you are using a version different from the one specified in the tsconfig.json.

This comes with a related problem: the TypeScript options in the project properties are disabled, with a message saying that one or more tsconfig.json files exist.

The root cause is that Visual Studio also stores TypeScript configuration directly in the .csproj file. You can open the .csproj in a text editor and search for TypeScript.

There are two options here. The first is to remove the tsconfig.json file and configure everything from the Visual Studio project properties; however, you will be limited in terms of options. The second is to remove all TypeScript entries inside the .csproj and keep the tsconfig.json. You may have to restart Visual Studio to get IntelliSense working again.
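
For reference, the entries to look for in the .csproj usually resemble the following; the exact properties and values vary by project and TypeScript version, so treat this as an illustrative sketch rather than what your file will contain.

<PropertyGroup>
  <TypeScriptToolsVersion>2.1</TypeScriptToolsVersion>
  <TypeScriptTarget>ES6</TypeScriptTarget>
  <TypeScriptSourceMap>true</TypeScriptSourceMap>
  <TypeScriptOutDir>Scripts\App</TypeScriptOutDir>
</PropertyGroup>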

How to use multiple TypeScript files

TypeScript lets you use the ECMAScript import syntax to bring in code from another file, or from multiple files. This is very useful if you do not want all your code in a single file, or if you want to reuse code across files.

For example, let's use two files: app.ts, and fileToInclude.ts, which contains a class that we want to use in app.ts.

//app.ts content:
import { ClassA } from "./fileToInclude";

const a = new ClassA();
a.method1();

//fileToInclude.ts content:
export class ClassA {
    public method1(): void
    {
        console.log("ClassA>method1");
    }
}

As you can see, importing a class is a matter of specifying the class name and the file to import it from. On the other side, the class you want to import must be marked with export. The export and import keywords are not TypeScript-specific; they are ECMAScript 6 features that you can use instead of a custom AMD loader library like Require.js.

While this syntax has been part of ECMAScript since version 6, some browsers do not support module loading. With TypeScript you can still target ECMAScript 6 for everything except module loading. If you compile to ES6 modules and run the page in Chrome without a module loader, you end up with an unexpected error:

Uncaught SyntaxError: Unexpected token import

By telling the tsconfig.json to use a module loader, the generated code will use the module loader's syntax instead of the ECMAScript syntax. A popular loader is Require.js. To do so, the tsconfig.json file needs a module entry.

{
  "compilerOptions": {
    "sourceMap": true,
    "target": "es6",
    "module": "amd",
    "outDir": "./Scripts/App"
  },
  "include": [
        "src/**/*"
    ]
}

Without specifying a module, the generated code was:

import { ClassA } from "./fileToInclude";
const a = new ClassA();
a.method1();

With the module set to AMD, the output JavaScript wraps the module with define/require. For example:

define(["require", "exports", "./fileToInclude"], function (require, exports, fileToInclude_1) {
    "use strict";
    var a = new fileToInclude_1.ClassA();
    a.method1();
});

Finally, you can no longer reference the .js file directly in the .cshtml. Instead, we add a script tag whose src points to require.js and then call a specific method to indicate which module (which file) to load.

<script src="~/Scripts/require.js"></script>
<script>
    requirejs.config({
        baseUrl: '/Scripts/App/'
    });
    requirejs(['app']);
</script>

In our case, we want to execute app.js, so we write “app” without the “.js” extension. Before doing so, however, we configure requirejs so it knows that the root of all the JavaScript files is the /Scripts/App/ folder.

Service Worker, Push Notification and Asp.Net MVC – Part 3 of 3 Server Side

I previously discussed how to configure web push notifications from the client-side perspective, as well as how to send a notification from ASP.NET code, for example from an Azure Webjob. The remaining part is how to send to all the devices of one user. If each user only ever used a single browser, the initial solution would be good enough. The reality, however, is that users have multiple devices: not only different browsers on different machines, but they also jump from computer to phone and so on. The idea is to register subscriptions per device rather than per user.

The Google Firebase documentation briefly explains “Device group messaging”, but the page talks mostly about topics. I couldn't figure out how to use device group messaging, but topics work for the same purpose. The idea is to use a single topic per user and to send messages to that topic.

The first big change is to add subscribe and unsubscribe methods that work with the topic API.

public bool UnRegisterTopic(string userIdentifierForAllDevices, string singleDeviceNoticationKey)
{
	var serverApiKey = ConfigurationManager.AppSettings["FirebaseServerKey"];
	var firebaseGoogleUrl = $"https://iid.googleapis.com/iid/v1/{singleDeviceNoticationKey}/rel/topics/{userIdentifierForAllDevices}";

	var httpClient = new WebClient();
	httpClient.Headers.Add("Content-Type", "application/json");
	httpClient.Headers.Add(HttpRequestHeader.Authorization, "key=" + serverApiKey);

	object data = new { };
	var json = JsonConvert.SerializeObject(data);
	Byte[] byteArray = Encoding.UTF8.GetBytes(json);
	var responsebytes = httpClient.UploadData(firebaseGoogleUrl, "DELETE", byteArray);
	string responsebody = Encoding.UTF8.GetString(responsebytes);
	dynamic responseObject = JsonConvert.DeserializeObject(responsebody);

	return responseObject.success == "1";
}
public bool RegisterTopic(string userIdentifierForAllDevices, string singleDeviceNoticationKey)
{
	var serverApiKey = ConfigurationManager.AppSettings["FirebaseServerKey"];
	var firebaseGoogleUrl = $"https://iid.googleapis.com/iid/v1/{singleDeviceNoticationKey}/rel/topics/{userIdentifierForAllDevices}";

	var httpClient = new WebClient();
	httpClient.Headers.Add("Content-Type", "application/json");
	httpClient.Headers.Add(HttpRequestHeader.Authorization, "key=" + serverApiKey);

	object data = new{};
	var json = JsonConvert.SerializeObject(data);
	Byte[] byteArray = Encoding.UTF8.GetBytes(json);
	var responsebytes = httpClient.UploadData(firebaseGoogleUrl, "POST", byteArray);
	string responsebody = Encoding.UTF8.GetString(responsebytes);
	dynamic responseObject = JsonConvert.DeserializeObject(responsebody);

	return responseObject.success == "1";
}

There is quite a bit of repetition in that code, and you can easily refactor it. The biggest change is the URL. Not only is the domain different (it was https://fcm.googleapis.com/fcm/send, it is now https://iid.googleapis.com/), the route portions differ as well. The first part is the device notification key, which is the token generated on the client side by the “getToken” method. The second portion of the route is the user identifier, which I use as the topic. If you really need a topic shared across users, you can use any string for the category you need; in my case, it is simply the user's unique GUID. This POST call registers the device under a topic that is the user ID.
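
To tie this back to part 1, here is a minimal, hypothetical sketch of the MVC action that could receive the token posted by updateSubscriptionOnServer and register it under the user's topic. The action name, the route and the class hosting RegisterTopic are assumptions, not code from the project.

// Assumes: using Microsoft.AspNet.Identity; and that RegisterTopic lives on the
// same notification class as the methods above.
[HttpPost]
public ActionResult Notifications(string key)
{
	var notifications = new GoogleFirebaseNotification();
	// The topic is the user's unique identifier; "key" is the Firebase token
	// generated on the client by getToken().
	var isRegistered = notifications.RegisterTopic(
		userIdentifierForAllDevices: this.User.Identity.GetUserId(),
		singleDeviceNoticationKey: key);
	return Json(new { IsValid = isRegistered });
}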

To send a message to a user on all devices, the sending code also needs to change.

public bool QueueMessage(string to, string title, string message, string urlNotificationClick)
{
	if (string.IsNullOrEmpty(to))
	{
		return false;
	}
	var serverApiKey = ConfigurationManager.AppSettings["FirebaseServerKey"];
	var firebaseGoogleUrl = "https://fcm.googleapis.com/fcm/send";

	var httpClient = new WebClient();
	httpClient.Headers.Add("Content-Type", "application/json");
	httpClient.Headers.Add(HttpRequestHeader.Authorization, "key=" + serverApiKey);
	var timeToLiveInSecond = 24 * 60 * 60; // 1 day
	var data = new
	{
		to = "/topics/" + to ,
		data = new
		{
			notification = new
			{
				body = message,
				title = title,
				icon = "/Content/Images/Logos/BourseVirtuelle.png",
				url = urlNotificationClick,
				sound = "default"
			}
		},
		time_to_live = timeToLiveInSecond
	};

	var json = JsonConvert.SerializeObject(data);
	Byte[] byteArray = Encoding.UTF8.GetBytes(json);
	var responsebytes = httpClient.UploadData(firebaseGoogleUrl, "POST", byteArray);
	string responsebody = Encoding.UTF8.GetString(responsebytes);
	dynamic responseObject = JsonConvert.DeserializeObject(responsebody);

	return responseObject.success == "1";
}

What has changed compared to sending directly to a single token? The “to” field, which now targets a topic. The “to” in the method signature is still the user's unique identifier, but instead of sending directly to it, we use it as a topic. We no longer use the token generated by the front end, since a new one is generated per device; we only use the user ID, which is the topic.

C# Localize Properties by String Pattern with Resource File

Imagine the scenario where several classes inherit a base class, and each child class has a unique title, description and other string properties defined in a resource file. If you only need the current language, the standard way to access resources is fine. However, if you need every defined localized string, you have to call the resource manager and explicitly pass a string for the key as well as for the culture. Going in that direction leads to redundancy: every child class needs a reference to the resource manager and has to repeat, for each language, the property strings already defined in the base class. In this article, I'll show a convention-based way, using a string pattern, to get the localized strings for all desired languages once in the base class instead of in every child.

In the end, we want something as clean as the following code:

public class ChildA: Parent{
    public ChildA(){}
}

To get there, the parent takes care of filling the localized properties based on the child's type.

public class Parent
{
	public LocalizedString Name { get; set; }
	public LocalizedString Description { get; set; }

	protected Parent()
	{
		this.Name = ParentResourceManager.Instance.GetLocalizationNameByClassName(this);
		this.Description = ParentResourceManager.Instance.GetLocalizationDescriptionByClassName(this);
	}
}

You can skip the details about the LocalizedString type, or look at this article if you are curious; it's just a class holding a French and an English string. The important piece is that the constructor invokes the ParentResourceManager to retrieve the proper strings. This resource manager looks like this:

public class ParentResourceManager : ClassResourceManager
{
   private static readonly Lazy<ParentResourceManager> lazy = new Lazy<ParentResourceManager>(() => new ParentResourceManager());

   public static ParentResourceManager Instance { get { return lazy.Value; } }
   private ParentResourceManager(): base(ApplicationTier.ParentResource.ResourceManager)
   {
   }
}

This class derives from ClassResourceManager, which defines which resource file to search for the strings, and it also guarantees a single instance in the application. ClassResourceManager provides the two methods used to retrieve the name and the description. These methods could be merged into a single one, but for the purpose of this article let's keep them separate. The reason for having this reusable base class is that you can reuse it for every type that has a different resource file. In short, it holds a pointer to the resource manager generated from the resource file; the resource file does need to be set to auto-generate its designer class.

public class ClassResourceManager
{
	public static string ERROR_MESSAGE_FORMAT = "[{0}] not found for language [{1}] in file [{2}]";

	public ClassResourceManager(ResourceManager resourceManager)
	{
		this.ResourceManager = resourceManager;
	}

	public ResourceManager ResourceManager { get; set; }

	public string GetLocalizationFor(string key, LanguageType language)
	{
		string languageTwoLetters = "";
		switch (language)
		{
			case LanguageType.English:
				languageTwoLetters = "en";
				break;
			case LanguageType.French:
				languageTwoLetters = "fr";
				break;
		}
		try
		{
			var resourceString = this.ResourceManager.GetString(key, CultureInfo.CreateSpecificCulture(languageTwoLetters));
			if (resourceString == null)
			{
				return string.Format(ERROR_MESSAGE_FORMAT, key, language, this.GetResourceFileName());
			}
			return resourceString;
		}
		catch (Exception)
		{
			return string.Format(ERROR_MESSAGE_FORMAT, key, language, this.GetResourceFileName());
		}

	}

	public LocalizedString GetLocalizationFor(string key)
	{
		return new LocalizedString { French = this.GetLocalizationFor(key, LanguageType.French), English = this.GetLocalizationFor(key, LanguageType.English) };
	}

	public LocalizedString GetLocalizationNameByClassName(object objectReference)
	{
		var objectType = objectReference.GetType();
		var name = objectType.Name + "_Name";
		return new LocalizedString { French = this.GetLocalizationFor(name, LanguageType.French), English = this.GetLocalizationFor(name, LanguageType.English) };
	}

	public LocalizedString GetLocalizationDescriptionByClassName(object objectReference)
	{
		var objectType = objectReference.GetType();
		var name = objectType.Name + "_Description";
		return new LocalizedString { French = this.GetLocalizationFor(name, LanguageType.French), English = this.GetLocalizationFor(name, LanguageType.English) };
	}

	private string GetResourceFileName()
	{
		return this.ResourceManager.BaseName;
	}
}

The ClassResourceManager keeps a pointer to the resource file and concatenates the name of the child class with the property name. For example, with ClassA, the developer must define ClassA_Name and ClassA_Description in the resource file. If the developer forgets, the returned string is an explicit error message telling exactly which resource name is missing and in which file, which is convenient and pretty clear.
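
To make the convention concrete, here is a minimal, hypothetical usage: simply instantiating a child class fills its localized properties, provided the matching keys exist in the resource file.

// Assumes the ChildA class shown earlier and a resource file behind
// ParentResourceManager containing the keys "ChildA_Name" and "ChildA_Description".
var child = new ChildA();
Console.WriteLine(child.Name.English);        // value of ChildA_Name (en)
Console.WriteLine(child.Description.French);  // value of ChildA_Description (fr)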

The whole idea is to stop making manual localization calls, at the cost of depending on a naming pattern for the properties shared across all children. Since we know which of the parent's properties need to be localized, it's easy to have the parent handle retrieving the localized strings from a specific resource file.

Service Worker, Push Notification and Asp.Net MVC – Part 2 of 3 Server Side

In part one, we saw how to register a service worker and how to handle incoming messages when the user is actively on the website. However, we didn't cover how to send a message through Google Firebase. In this article, I'll show how to send a message from an Azure Webjob written in C#. This is a common scenario: a backend job runs some logic and needs the user to take an action. Since the user may or may not be on the website (or may be on the wrong page), a push notification is a great way to indicate that something must be done. The other big advantage is that push notifications through Google Firebase are almost instantaneous: within a few milliseconds, the message goes from your server to Google's Firebase server to the service worker, which uses the browser's push notification API to display it.

The first thing is to define a generic contract with an interface. I decided to create a simple one that returns a boolean indicating whether the message was sent successfully. The method signature takes the “to” token, which is the user's unique identifier for Firebase (the token saved through the Ajax call in part 1). The remaining parameters are self-explanatory: the title, the message, and the URL to open when the user clicks the notification.

public interface IPushNotification
{
    bool QueueMessage(string to, string title, string message, string urlNotificationClick);
}

The implementation is also very simple: it relies on Google Firebase's REST endpoint.

public class GoogleFirebaseNotification:IPushNotification
{
    public bool QueueMessage(string to, string title, string message, string urlNotificationClick)
    {
        if (string.IsNullOrEmpty(to))
        {
            return false;
        }
        var serverApiKey = "SuperLongKeyHere";
        var firebaseGoogleUrl = "https://fcm.googleapis.com/fcm/send";

        var httpClient = new WebClient();
        httpClient.Headers.Add("Content-Type", "application/json");
        httpClient.Headers.Add(HttpRequestHeader.Authorization, "key=" + serverApiKey);
        var timeToLiveInSecond = 24 * 60 * 60; // 1 day
        var data = new
        {
            to = to,
            data = new
            {
                notification = new
                {
                    body = message,
                    title = title,
                    icon = "/Content/Images/Logos/BourseVirtuelle.png",
                    url = urlNotificationClick
                }
            },
            time_to_live = timeToLiveInSecond
        };

        var json = JsonConvert.SerializeObject(data);
        Byte[] byteArray = Encoding.UTF8.GetBytes(json);
        var responsebytes = httpClient.UploadData(firebaseGoogleUrl, "POST", byteArray);
        string responsebody = Encoding.UTF8.GetString(responsebytes);
        dynamic responseObject = JsonConvert.DeserializeObject(responsebody);

        return responseObject.success == "1";
    }
}

The first piece of the puzzle is to use the right server API key. It is in the Firebase console, under the settings cog, in the Cloud Messaging tab.

The rest of the code configures the WebClient. The content type must be set to JSON, and the second header that must be defined is the authorization key; this is where the Cloud Messaging server key goes. Finally, we build the payload from the method's parameters. Some information is hardcoded, like the icon to display and how long we want Firebase to hold the message if the user doesn't have a browser open to collect it. The last step is to read the response and check whether the message was delivered successfully to Firebase's server.

When using a webjob, you just instantiate this implementation and pass the desired parameters. You can get the token from a new column added to the AspNetUsers table and define a specific title and message depending on what the user must do.
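
As a rough sketch of how the webjob side could look (the method name, the message text and the URL are illustrative assumptions, not the project's actual job):

// Hypothetical webjob function: the token comes from the AspNetUsers column
// mentioned above; QueueMessage returns true when Firebase accepts the message.
public static void NotifyUser(string userFirebaseToken)
{
    IPushNotification notifications = new GoogleFirebaseNotification();
    var isQueued = notifications.QueueMessage(
        to: userFirebaseToken,
        title: "Action required",
        message: "Something happened that needs your attention.",
        urlNotificationClick: "/Notifications");
    if (!isQueued)
    {
        Console.WriteLine("Firebase did not accept the message.");
    }
}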

Service Worker, Push Notification and Asp.Net MVC – Part 1 of 3 Client Side

Browsers evolve very rapidly, and for a few years now it has been possible to write JavaScript that runs in the background of the browser. That means code can run even when the user is not on the website. This is useful in many scenarios, and today we will look at one feature: push notifications. The particular setup we will describe uses a service worker that waits for messages from an ASP.NET Azure webjob written in C#, which pushes a message to a specific user at a particular time depending on some value. The end result is that the browser pops up a message box at the bottom right if the user is not on the website, or displays an HTML notification directly on the page if the user is.

This article is part one of three and concentrates on the front end, not the C# code that runs on the server. We will cover registering with Google Firebase, the service worker's code, and the JavaScript code to add to your website.

The first step is to register an account with Google Firebase. This is not very obvious, since almost all examples on the web (at this date) use the raw Service Worker + Push Notification APIs with the legacy Google system instead of Firebase. Both are fairly compatible in terms of the server contract used to generate the message; on the client side, however, it's quite different. Firebase acts as a wrapper over the native Service Worker and Push Notification APIs. You can still use the APIs directly, and in some cases it's the only way to access advanced features.

To get started with Firebase, go to https://console.firebase.google.com and create an account and a project.

Firebase is a library accessed through API keys and a JavaScript API. You can also invoke it through a REST API, which we will see in part two. The first challenge is figuring out where to get the right API key, because the system has many. The first step is to create the service worker, which is registered when the user visits your website and then runs in the background of the browser. The default is to create a file called “firebase-messaging-sw.js” and to put it at the root of your website. The location matters because a service worker can only access assets that are siblings or children of the registered script.

Here is the full code that I have for my Service Worker:

importScripts('https://www.gstatic.com/firebasejs/3.5.0/firebase-app.js');
importScripts('https://www.gstatic.com/firebasejs/3.5.0/firebase-messaging.js');

var config = {
    apiKey: "AIzaSyDe0Z0NtygDUDySNMRtl2MIV5m4Hp7IAm0",
    authDomain: "bourse-virtuelle.firebaseapp.com",
    messagingSenderId: "555061918002",
};
firebase.initializeApp(config);

var messaging = firebase.messaging();
messaging.setBackgroundMessageHandler(function (payload) {
    var dataFromServer = JSON.parse(payload.data.notification);
    var notificationTitle = dataFromServer.title;
    var notificationOptions = {
        body: dataFromServer.body,
        icon: dataFromServer.icon,
        data: {
            url:dataFromServer.url
        }
    };
    return self.registration.showNotification(notificationTitle,
        notificationOptions);
});

self.addEventListener("notificationclick", function (event)
{
    var urlToRedirect = event.notification.data.url;
    event.notification.close();
    event.waitUntil(self.clients.openWindow(urlToRedirect));
});

In short, it uses two Firebase scripts: one for the Firebase app itself and one for messaging, which is the wrapper around the push notification API. The configuration is tricky. The apiKey is taken from the Firebase console, under the desired project, under the project settings gear, in the General tab.

The messagingSenderId is available in the tab next to the General tab, called Cloud Messaging.

The initializeApp call connects the service worker to the server so it can listen for new messages. The setBackgroundMessageHandler function is called when a message arrives while the website is not in focus: the website is in a browser tab that is not the current one, the website is not open at all, or the browser is minimized. The case where the user does have the website in focus is treated later.

This code reads the data sent by the server; in my case it is under the data and notification properties. I set the title, the main message, and the icon. The URL is there too, but it didn't work at the time of writing, which is why the second handler hooks directly into the push notification API's notificationclick event. That handler opens a specific URL when the user clicks the notification. In my case, for example, the notification fires when a specific event occurs, and clicking it opens the page where the user can see the result of the action.

The next step is a page where the user can subscribe to push notifications. In my case it's done in the user's profile: I have a checkbox and, when it's checked, the browser asks the user for permission and installs the service worker. So, in my profile.js file I have the following code:

$(document).ready(function()
{
    initialiseUI();
});
function initialiseUI() {
    $(document).on("click", "#" + window.Application.Variables.IsHavingNotification,
        function requestPushNotification()
        {
            var $ctrl = $(this);
            if ($ctrl.is(":checked"))
            {
                console.log("checked");
                subscribeUser();

            } else
            {
                console.log("unchecked");
                unsubscribeUser();
            }
        });
}

function subscribeUser() {
    var isSubscribed = false;
    var messaging = firebase.messaging();
    messaging.requestPermission()
      .then(function () {
          messaging.getToken()
          .then(function (currentToken) {
              if (currentToken) {
                  updateSubscriptionOnServer(currentToken);
                  isSubscribed = true;
              } else {
                  updateSubscriptionOnServer(null);
              }
              $("#" + window.Application.Variables.IsHavingNotificationt).prop('checked', isSubscribed);
          })
          .catch(function (err) {
              isSubscribed = false;
              updateSubscriptionOnServer(null);
          });
      })
      .catch(function (err) {
          console.log('Unable to get permission to notify.', err);
      });
}

function unsubscribeUser() {
    var messaging = firebase.messaging();
    messaging.getToken()
    .then(function (currentToken) {
        messaging.deleteToken(currentToken)
        .then(function () {
            updateSubscriptionOnServer(null);
        })
        .catch(function (err) {
            console.log('Unable to delete token. ', err);
        });
    })
    .catch(function (err) {
        console.log('Error retrieving Instance ID token. ', err);
    });
}

function updateSubscriptionOnServer(subscription) {
    var subscriptionDetail = { key: "" };
    if (subscription)
    {
        subscriptionDetail = { key: subscription };
    } else {
        console.log("delete on the server the token");
    }
    
    var apiUrl = window.Application.Url.UrlNotifications;
    var dataToSend = subscriptionDetail;
    $.ajax({
        url: apiUrl,
        type: 'POST',
        data: dataToSend,
        cache: true,
        dataType: 'json',
        success: function (json) {
            if (json.IsValid) {
            } else {
            }
        },
        error: function (xmlHttpRequest, textStatus, errorThrown) {
            console.log('some error occured', textStatus, errorThrown);
        },
        always: function () {
        }
    });

}

We allow the user to subscribe and unsubscribe. When subscribing, we ask the browser for permission to show notifications, then we get the token provided by Firebase. We need to save this token back to the server so it can send targeted messages later: with it, we can send a specific message to a specific user. This is where updateSubscriptionOnServer comes into play. It sends the token by Ajax, and the token is saved in the database; in my case, I added a column to the user's table to keep track of it. Unsubscribing sends a null value, which sets the column to null. This way, the server can check whether the user has a Firebase token and only send a message when one is defined.

To verify that all the previous steps went well, you can look in the Chrome developer tools under Application and check the Service Workers section.

It's important to understand that all of this only works on localhost or on an HTTPS website. From Chrome's debug panel, you can unregister the worker or check “Update on reload” to force a reinstallation of the service worker. This is handy during development to make sure you always have the latest version of your service worker.

The next step is to have your website listen for incoming messages. This covers the scenario where the user is on the website and we do not want to use the browser notification. To do so, we reuse some of the Firebase initialization code from the service worker. In my case, I added a reference to the Firebase script in the master page (_layout.cshtml) to initialize the library. It looks like this:

    <script src="https://www.gstatic.com/firebasejs/3.6.2/firebase.js"></script>
    <script>
      var config = {
        apiKey: "AIzaSyDe0Z0NtygDUDySNMRtl2MIV5m4Hp7IAm0",
        authDomain: "bourse-virtuelle.firebaseapp.com",
        messagingSenderId: "555061918002",
      };
      firebase.initializeApp(config);
    </script>

I also have a global JavaScript file, included on every page, where I added the message listener.

$(document).ready(function()
{
    var messaging = firebase.messaging();
    messaging.onMessage(function(payload)
    {
        var dataFromServer = JSON.parse(payload.data.notification);
        var myMessageBar = new MessageBar();
        myMessageBar.setMessage(dataFromServer.title + " : " + dataFromServer.body);
    });
});

The onMessage listener fires when the user has the website in focus. So, instead of the service worker handling the message, this handler receives the data. The advantage is that you can add the message directly to the page's DOM, something the service worker cannot do. It also has the convenience of putting the notification in the user's field of view rather than outside the page.

At this point, you can use any HTTP tool to send a message through Firebase. You can use console.log to output the token and forge an HTTP request with your web API key and sender ID. I won't go into detail in this post; a future post will show how to handle it with a C# webjob that sends the HTTP request.

Service workers allow you to do a lot more than push notifications. This article covered the basics of using Google Firebase as the backbone so that your own backend infrastructure (covered in a future article) can send a message and your client can receive it. Several pieces of code are needed, each in a specific place.

Application Insights Build Version on all telemetry

Something very useful is knowing which build version each telemetry item came from. This is good for custom events and traces, and especially valuable for exceptions. However, adding this information to every call is redundant and not clean. That is why Application Insights lets you add a telemetry initializer.

A telemetry initializer is a piece of code executed when a telemetry item is created. There are two steps to make it work. First, create a class that implements ITelemetryInitializer. Second, register the class with Application Insights.

To get the system version into every telemetry item, let's create a class that adds a property named BuildVersion to the Application Insights context. I placed this class in my website project, which lets me grab that assembly's version. For this to work, you need to keep the version in AssemblyInfo.cs up to date on every release.

    public class AssemblyVersionInitializer : ITelemetryInitializer
    {
        public void Initialize(Microsoft.ApplicationInsights.Channel.ITelemetry telemetry)
        {
            telemetry.Context.Properties["BuildVersion"] = this.GetType().Assembly.GetName().Version.ToString();
        }
    }
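
For reference, the version read above comes from the assembly attributes, typically set in Properties/AssemblyInfo.cs; the numbers below are placeholders to bump on every release.

[assembly: AssemblyVersion("2.3.0.0")]
[assembly: AssemblyFileVersion("2.3.0.0")]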

The next and final step is to register this class at application startup.

public class MvcApplication : System.Web.HttpApplication
{
   protected void Application_Start()
   {
       TelemetryConfiguration.Active.TelemetryInitializers.Add(new AssemblyVersionInitializer());
   }
}

From there, whether or not you add custom properties, every telemetry item will also carry the BuildVersion property. The main goal of BuildVersion is to compare telemetry between versions: you can clearly identify whether a problem was fixed or introduced in a given version, and whether performance got worse. However, this only pays off if you release often, since Application Insights retains data for a limited time, with most of it restricted to 7 days (or 14 days).

Post-Mortem of my Asp.Net MVC Project, SQL Server, Redis and managed with VSTS

During the winter of 2014, I decided to rewrite my old 2004 PHP stock simulator. My initial plan for what needed to be done changed very little from the beginning, even though I joined Microsoft a few months in, and most of the planned work got executed. With the move and the arrival of my first daughter, I had less time, so along the way I merged all my posts about the first few months into this one; maintaining both this blog and a dedicated online journal of my progress was too much. I am now in the final stretch and can look back at what went well and what went wrong. This article covers the lessons I learned during this 35-month adventure.

Quick Recapitulation

The project started as a simple stock simulator built between 2004 and 2006, mostly during the winter of 2004. The legacy system was built with PHP (3, then 4, then 5), MySQL and Memcached, hosted on a Linux VPS. Some JavaScript was used along with CSS. I remember that jQuery didn't exist yet and Web 2.0 was still very new; I had to write my own Ajax calls using primitive JavaScript and ActiveX for Internet Explorer.

I do not have the real amount of time spent coding, because feature development continued until 2010. Roughly, I would say the core took about 100 hours, plus a few big features over the following years: about 5 x 60 hours and one big refactor of 100 hours, so around 500 hours overall.

Fast-forward to 2014: I had more experience and access to better technologies, and I achieved 75% of the features in about 2000 hours. I will not paint a prettier picture than reality, because the goal is to figure out why this project took four times the original time. While reading, keep in mind that while the new system was being built, the old one was still running with active, real users. I wish I could have released a version before July 2016 (sprint 30), but the lack of features couldn't justify a quick first release to iterate on. I first thought of releasing the month before the birth of my first baby (June 2015 / sprint 17), but too many performance improvements, bug fixes and administrative features would have been missing, which would have made the following months harder to develop and probably reduced my throughput.

Schedule

Everything was planned in sprints with VSTS. If you go back to my earlier posts, the screenshots already look dated because Visual Studio Team Services has changed a lot since the project's inception. Since I was the only developer, I decided on one sprint per month, which let me deliver a fixed set of features every month. Most months were planned at 2 hours per day, so a sprint was 60 hours. Out of the 35 months, about 6 ran at half capacity: the few months before my move from Montreal (Canada) to Redmond (USA) for Microsoft, and 2 slower months when my first daughter arrived. Overall, 29 of the 35 sprints went as planned. I knew I could do it, since being consistent is one of my strengths.

However, I was spending more than 2 hours per day. To be honest, I had planned to finish everything within one year, using the time the original took me as the baseline. I was planning more unit tests, more quality and more design, but since I already knew all the gotchas and how the logic needed to work, the one-year plan seemed reasonable. After all, 12 months at 60 hours a month comes to roughly 720 hours, which I figured gave me about 15% more time than the initial build while planning about 25% more features. Each sprint was frozen in terms of scope, so I couldn't add more features during the month or change my mind. On the other hand, I allowed myself to reshuffle priorities; for example, I pushed back the performance tasks and the landing page quite a few times.

Reasons for slipping the schedule

Why it took me more than four times the original time is hard to say. Even though all my tasks and bugs are documented in VSTS, I do not have the actual time spent on each work item. During development, I noticed I was falling behind and started asking myself why. A few things were slowing me down around the 12th month.

First, I was working on the project while watching TV. I wasn't isolated: my wife was always in the room with me, we were talking and doing other little chores, so while I was nominally putting in at least 2 hours (usually 2h30), it was really worth about 1h30. That said, after a full day of work, I think it's reasonable not to be at 100% on a side project for that many months. I also had a period of almost 8-12 months where I ran into several Entity Framework issues that slowed me down. I fought all of those issues and never gave up. In retrospect, I should have dropped Entity Framework, which is why I now have a hybrid model with Dapper. My justification for keeping up the fight was that my goal was to improve my Entity Framework skills, and I did. In many jobs I had before and after this project, I was often the one fixing Entity Framework's mysterious behaviors.

Second, another source of slowness was the initial design with several projects, which pushed the build time past 2 minutes. This might not seem like a problem, and a full build (around 2min45s) wasn't always required, but it was still always around 1 minute. Building 15 times a night adds up to a loss of around 5 hours per month. In fact, while waiting, I was browsing Twitter, Hacker News and Facebook, so each wait took more than just the minute. Those 5 hours per month probably ended up being 7-8 hours per month, which is a lot out of a 60-hour time frame. I eventually got a new laptop with an SSD and more RAM, plus a change in the architecture (see later in this post), which cut the waste by more than half.

Third, integration tests (tests hitting the database) were hard to build in some situations, which I'll explain later. Each integration test took about 20 minutes to write, debug and run.

Fourth and finally, quality has a cost. To be honest with myself, the PHP project had about 50 unit tests; the new one has 1800. The PHP project had deep bugs that sat there for more than two years; the new one is still very young, but no major bugs have surfaced yet. The pace of technology updates also changed: between the first month of this project and now, I updated several third-party service APIs and library versions, and that required changes. I am proud that I stuck to the original plan of ASP.NET MVC, a server-generated front end and Azure, instead of falling into the trap of switching technologies mid-development.

What have I learned about scheduling?

Using VSTS helped me stay focused, or at least prioritize my work. I have over 1500 work items to date. I used a sprint approach with the Backlog and Board, planning about 6 months ahead, with the next 2 months planned more seriously and the current month always frozen to make sure I didn't over-add. The rare times I finished a sprint early, I added more unit tests. I adjusted work items (user stories and bugs) for future sprints from time to time. In the end, I completed every user story except those concerning option trading and team trading, which the legacy system supported but which I decided to drop because of their low popularity and the time required. My time estimates for work items were often a few hours too low. I always tried to split work items into tasks of 1 to 3 hours, which was fine until I hit a major issue. Also, tests took longer than expected, all the time. Sometimes it was the other way around and I overestimated, but in the end I underestimated more than I overestimated.

My conclusion on the schedule is that a single-person project takes its toll when you hit a problem, and one problem can drastically change a schedule: a project planned for one year can quickly have months added to it. Since the primary goal was to learn and keep my knowledge up to date, I would say a smaller project is better. That said, I still believe a project must be significant to understand the impact on maintainability. This one has 30,100 lines of code plus 13,900 lines of comments, so more than 44k lines.

[Image: number of lines of code over the 33 months]

Finally, I would say that motivation is a factor too. I am very motivated and I enjoy the code I produced, which made me feel good even after a hard day at work. Owning everything has the downside that being stuck for days can derail a feature from 10 hours to 40 hours, but it also has the benefit of a code base that looks the way you want.

Architecture

The architecture was right from the start. I didn't make any major changes, and the reason is that I knew how to do it: I have not only maintained a lot of different projects in the past, but also created a lot of them. PHP, ASP.NET WebForms, classic ASP or ASP.NET MVC, in the end a good architecture fits any language.

[Image: layers of Bourse Virtuelle]
The division into service, accessor and repository layers is a big win. Adding Redis, even as late as around sprint 16, was a charm because the accessor layer decides whether a request goes to Redis or to the database (the repository). The same approach handled email, writing emails to files during development and sending them through a third party like SendGrid in production, all through dependency injection.

[Image: detailed architecture]

The unit of work wasn't that good. It's a pattern recommended by Julie Lerman in her Pluralsight training, and the idea is very good, but the Entity Framework context is a mess, and having the unit of work live in between the logic caused all sorts of problems. While this pattern is a charm for testing, it's a maintainability nightmare: adding logic between a simple load and a save could leave the Entity Framework context out of synchronization. I won't go into more detail in this post, but you can find some in my past post about why I wouldn't use Entity Framework again.

Having the configuration at the same level as the repository is great: it means configuration can come from web.config, the database, the cache, etc., without it mattering to the service layer (where the logic belongs). Cross-layer logic was limited to logging and to classes like the RunningContext, which holds the culture, the current user ID and the current datetime. This is a crucial decision I would repeat on every system if I could: I can now debug easily by adjusting the time, which matters a lot in a stock system where orders, transactions and prices fluctuate over time.

I started with every layer and module in a different project; the solution had about 50 projects. For performance reasons, I ended up with 5. The screenshot shows 8, but one is the architecture models, one is the MySQL connector used during the migration, and one is the migration itself; those are usually unloaded, hence 5 real projects.

[Image: the projects in the Bourse Virtuelle solution]

I didn't lose any time on the architecture. Of course, the first sprint was mostly about planning with VSTS, creating Visio documents with classes, and so on, but it was worth it. First of all, the work items got used; I didn't have all of them in the first month, but most of them. The UML Visio diagrams were quite accurate, to be honest. They are missing about 50% of the classes, since I evolved the design and added more details; those are missing because they were not core classes and because I never took the time to update the diagrams. During development, I kept up to date the architecture diagram that Visual Studio's Enterprise edition can produce.

Open sourcing code

At some point, I decided to open source some of my application's code so I could reuse it in other projects as well as share it. In the following trend graph, you can see two drops in the number of lines of code that correspond to extracting code out of the project; a few standalone libraries came out of it.

So while the project has over 44,000 lines of code and comments, the real amount of code produced approaches 50,000 lines. The idea of open sourcing code was good, but it added overhead. First of all, extracting the code from the solution: even though the code was well separated, logging and namespaces needed to change. Creating the GitHub repositories, the NuGet packages, and rewriting the unit tests in xUnit instead of VSTest (which was not supported by Travis CI) added a few nights of work. Debugging became harder too: instead of just adding a breakpoint in the code, I had to reference the project and remove the NuGet package, and so on. It certainly didn't help me go faster. That said, I am glad, because these extracted libraries are well contained and tested.

[Image: lines of code trend]
During development, I followed the code health with a tool called NDepend. It was a good “virtual friend” that tells you when you are going in the wrong direction. All the trend images come from this tool.

Some time was lost, but most of the work happened inside the main project, so the debugging overhead was not intensive. I would say I lost maybe 3 weeks (25-30 hours).

Tests

I am a big fan of automated tests, even more so on a one-man project that I will be the one maintaining. I started writing my tests almost at the same time as my code, but after a few months (around the 6th or 7th) I was writing them just before committing, as a way to make sure that what I did was right and that my code was well encapsulated with a single purpose. There are definite benefits. Of the 1800 automated tests, 1300 are unit tests and 500 are integration tests. By unit tests I mean very fast tests that exercise methods without any dependencies, with everything mocked. By integration tests, I mean tests that use the database and integrate many pieces of the puzzle.

[Image: test files structure]
The test file structure mirrors the structure of the actual code, but inside a testing project. This increases maintainability because it's easier to find the tests associated with a piece of code. Also, every test folder ends with “UnitTests” or “IntegrationTests”, which makes it easy to know what type of tests a file contains.

Unit tests

One problem I had, and still have, is that I still hit bugs I shouldn't with 70% coverage. A lot of expected exceptions are thrown and tested, but the code that uses those methods doesn't handle them. That is entirely my fault and I need to work on it. I didn't do it mostly because I knew those cases shouldn't happen, yet they do, sometimes, and those edge cases need user experiences that were never defined. The other reason was that I was short on my work item schedule, or already late. Not a good excuse, but that's the reality.
[Image: code coverage trend]

If you look at the trend graph, you can see that around March 2015 (sprint 14) the coverage started falling from 78% to the current 70%. I can blame no one but myself. That said, I was hitting a small dip in motivation (my first and only one in the first 30 months): I could see I was not even halfway through the project, I had just had a new baby, and I was working long hours at Microsoft. I cheated on the unit tests instead of cutting features. I definitely plan to bring them back up slowly. The main areas missing unit tests at this point are all the controllers and a few repository classes; the choice was deliberate, focusing on the service and model classes, which hold the core of the logic and are more likely to fail.

Integration tests

Integration tests started as a way to cover the persistence path without heavy testing through frameworks that click the UI. To cut the work and scope down, and since we have unit tests, the only integration tests written were for the repository layer. The idea is good, but it was still time consuming. I have two formats for these tests, which rely on the same architecture: the test initialization creates a transaction, and the transaction is rolled back once the test is done. The difference between the older and newer versions is that the older one has a lot of utility methods to arrange the code, for example “CreateUserThatHasNotValidatedEmail”. The newer one uses a builder model with a fluent syntax. For example, I could do:

  var data = this.DatabaseModelBuilder
                .InsertNewUser(1, (applicationUser, index) => {
                    applicationUser.CanReceiveEmailWhenContestInscriptionStart = true;
                    applicationUser.ValidationDateTime = null;
                });

Or more complex:

    var s = this.DatabaseModelBuilder
                .InsertNewFullContest()
                .InsertNewPortefolio()
                .InsertSymbolToRenameRequest()
                .InsertSymbolToRenameVote()
                .InsertStockTransactionInPortefolios(symbols);

While these examples may look just “okay”, they replace a huge amount of work. Each of these methods does not just build an object: it saves it in the database, retrieves the generated ID, wires up the associations, and so on.
This let me avoid having hundreds of utility methods for every case, in favor of a syntax that is reusable and composable. It also leads to cleaner test methods.

Here is some shameful real code written the old way:

 [TestMethod]
        public void GivenBatchSymbolsRenameInOrders_WhenListHasStockToBeRenamed_ThenOnlySpecificStocksAreRenamed()
        {
            // Arrange
            var uow = base.GetNewUnitOfWorkDapper();
            var userId = this.runningContext.Object.GetUserId();
            var repositoryExecuteContest = new ContestRepository(unitOfWorkForInsertion, uow, base.runningContext.Object);
            var repositoryExecuteModeration = new ModerationRepository(unitOfWorkForInsertion, base.GetNewUnitOfWorkDapper(), base.runningContext.Object);
            var repositoryUpdating = new OrderRepository(unitOfWorkForUpdate, base.GetNewUnitOfWorkDapper(), base.runningContext.Object);
            var repositoryReading = new OrderRepository(unitOfWorkForReading, base.GetNewUnitOfWorkDapper(), base.runningContext.Object);
            var date = this.runningContext.Object.GetCurrentTime();
            var stock1 = new StockOrder { StockSymbol = new Symbol("msft"), Quantity = 100, TransactionTypeId = TransactionType.Buy.Id, OrderStatusId = OrderStatusType.Waiting.Id, OrderTypeId = OrderType.Market.Id, PlacedOrderTime = date, ExpirationOrderTime = date, UserWhoPlacedTheOrderId = userId.ToString()};
            var stock2 = new StockOrder { StockSymbol = new Symbol("msft2"), Quantity = 100, TransactionTypeId = TransactionType.Buy.Id, OrderStatusId = OrderStatusType.Waiting.Id, OrderTypeId = OrderType.Market.Id, PlacedOrderTime = date, ExpirationOrderTime = date, UserWhoPlacedTheOrderId = userId.ToString() };
            var stock3 = new StockOrder { StockSymbol = new Symbol("msft3"), Quantity = 100, TransactionTypeId = TransactionType.Buy.Id, OrderStatusId = OrderStatusType.Waiting.Id, OrderTypeId = OrderType.Market.Id, PlacedOrderTime = date, ExpirationOrderTime = date, UserWhoPlacedTheOrderId = userId.ToString() };
            var stock4 = new StockOrder { StockSymbol = new Symbol("msft"), Quantity = 100, TransactionTypeId = TransactionType.Buy.Id, OrderStatusId = OrderStatusType.Waiting.Id, OrderTypeId = OrderType.Market.Id, PlacedOrderTime = date.AddDays(1), ExpirationOrderTime = date, UserWhoPlacedTheOrderId = userId.ToString() };
            var symbolToRename = new SymbolToRename
            {
                CurrentSymbol = new Symbol("msft")
                ,EffectiveDate = date
                ,SymbolAfterRename = new Symbol("goog")
                ,UserWhoPlacedTheChangeId = this.runningContext.Object.GetUserId().ToString()
                ,Link = "http://"
                ,DateTimeChangePlaced = date
            };

            var vote = SymbolToRenameVote.Create(symbolToRename, new ApplicationUser() { Id = this.runningContext.Object.GetUserId().ToString() }, 1, base.runningContext.Object);
            vote.Point = 100;
            vote.VoteDateTime = date;

            // Arrange - Create Contest
            var contest = contestModelBuilder.GetAStockContest();
            repositoryExecuteContest.Save(contest);
            uow.Commit();

            // Arrange - Create Request to change Symbol
            symbolToRename = repositoryExecuteModeration.Save(symbolToRename);
            unitOfWorkForInsertion.Commit();

            // Arrange - Create Vote
            vote.SymbolToRename = symbolToRename;
            vote.SymbolToRename = null;
            vote.UserWhoVoted = null;
            unitOfWorkForInsertion.Entry(vote).State = EntityState.Added;
            unitOfWorkForInsertion.Commit();

            // Arrange - Create Portefolio
            var portefolio = repositoryExecuteContest.SaveUserToContest(contest, userId);
            unitOfWorkForInsertion.Commit();
            var portefolioId = portefolio.Id;

            // Arrange - Create Stocks for Order
            stock1.PortefolioId = portefolioId;
            stock2.PortefolioId = portefolioId;
            stock3.PortefolioId = portefolioId;
            stock4.PortefolioId = portefolioId;
            unitOfWorkForInsertion.Entry(stock1).State = EntityState.Added;
            unitOfWorkForInsertion.Entry(stock2).State = EntityState.Added;
            unitOfWorkForInsertion.Entry(stock3).State = EntityState.Added;
            unitOfWorkForInsertion.Entry(stock4).State = EntityState.Added;
            unitOfWorkForInsertion.Commit();

            // Act
            repositoryUpdating.BatchSymbolsRenameInOrders(symbolToRename);
            unitOfWorkForUpdate.Commit();

            // Assert
            var stocks = repositoryReading.GetUserOrders(userId).ToList();
            Assert.IsTrue(stocks.Any(d => d.StockSymbol.Value == "goog"), "goog symbol should be present in the portefolio");
            Assert.AreEqual(1, stocks.Count(d => d.StockSymbol.Value == "msft"), "msft symbol should be present once in the portefolio");
        }

A similar test with the new version:

        [TestMethod]
        public void GivenAListOfStocks_WhenRequestFirstPageAndThisOneHasLessThanTheFixedPageAmount_ThenReturnLessThanTheFixedAmountOfContests()
        {
            // Arrange
            const int ACTIVE_ORDER = 2;
            const int INACTIVE_ORDER = 2;
            const int PAGE_NUMBER = 0;
            const int NUMBER_CONTEST_PER_PAGE = 10;
            var allDatabaseOrders = new List<StockOrder>();
            var data = this.DatabaseModelBuilder
                .InsertNewFullContest()
                .InsertNewPortefolio()
                .InsertNewOrders(ACTIVE_ORDER, PREFIX_ACTIVE,  true, order => { allDatabaseOrders = order.ToList(); })
                .InsertNewOrders(INACTIVE_ORDER, PREFIX_INACTIVE, false, null);

            var theNewestContestIs = data.GetallStockOrders().First();

            // Act
            var orders = data.GetStockOrdersByFilter(PAGE_NUMBER, NUMBER_CONTEST_PER_PAGE);

            // Assert
            Assert.AreEqual(ACTIVE_ORDER + INACTIVE_ORDER, orders.Count());
            Assert.AreEqual(theNewestContestIs.Id, orders.First().Id);
        }

Overall, the problem was the time it took to create a new integration test. The average was 20 minutes. With the new builder system it is a little faster because of the reusability: around 10 minutes when the builder is missing some methods, and under 5 minutes when it already contains what the test needs. It remains a bit expensive because of the code needed to prepare scenarios. I also had a lot of problems with Entity Framework because the context kept previous data around. If you want to arrange your test, you need a different context than the one executing the code and the one asserting; otherwise, values tracked by Entity Framework's context interfere with the reality of a real scenario, where data is inserted across different requests. Another problem with Entity Framework was that the order of commands in the test was different from the order inside the repository, which could change how some code paths set values inside Entity Framework. The result could be a DbConcurrency problem or simply an association problem. I remember that at some point I was spending more time building those integration tests than writing the actual code.
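
To make the point about contexts concrete, here is a minimal sketch of the pattern, assuming a hypothetical TradingContext derived from DbContext; the class, entity and property names are illustrative, not the project's real ones.

using System.Data.Entity;
using System.Linq;

// Simplified entity and context, only for the sketch.
public class StockOrder
{
    public int Id { get; set; }
    public string StockSymbol { get; set; }
}

public class TradingContext : DbContext
{
    public DbSet<StockOrder> StockOrders { get; set; }
}

public static class ThreeContextsExample
{
    public static void Run()
    {
        // Arrange: insert the data with a context that is disposed right away.
        using (var arrangeContext = new TradingContext())
        {
            arrangeContext.StockOrders.Add(new StockOrder { StockSymbol = "msft" });
            arrangeContext.SaveChanges();
        }

        // Act: the code under test gets a fresh context, exactly as it would in its own web request.
        using (var actContext = new TradingContext())
        {
            var order = actContext.StockOrders.Single(o => o.StockSymbol == "msft");
            order.StockSymbol = "goog";
            actContext.SaveChanges();
        }

        // Assert: read back with a third context so the assertion sees what is really
        // in the database, not what a previous context had cached.
        using (var assertContext = new TradingContext())
        {
            var renamedCount = assertContext.StockOrders.Count(o => o.StockSymbol == "goog");
            // Assert.AreEqual(1, renamedCount);
        }
    }
}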

My conclusion about integration tests is that they are required. Databases can be tricky, and it was worth having the insurance of automated tests ensuring that the values I wanted saved and retrieved came back in the expected state. What I would change is, again, to stop using Entity Framework. An additional reason Entity Framework caused pain in integration tests is that the invocation order when preparing the test wasn't the same as in the service layer that invokes the code under test. For example, inserting a user, joining a simulation and creating an order are usually done by different requests, with different Entity Framework contexts and with some logic in the service layer that the integration test skipped just to insert the required data. It would have been much easier to prepare the test data with raw SQL inserts and then use the repository to read it back, as in the sketch below. I am also sure that I do not need tests with a framework that clicks around the UI. It was planned, but even though the UI is very stable, that would have added an extra layer of tests that is not essential, and is very fragile and time consuming.
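
For what it is worth, the raw SQL arrange step I have in mind is nothing more than this kind of sketch; the table, columns and connection string are invented for the example.

using System.Data.SqlClient;

public static class TestDataSql
{
    // Sketch: arrange test data with plain ADO.NET instead of Entity Framework,
    // then let the repository under test read it back through its own context.
    public static void InsertWaitingOrder(string connectionString, string symbol, int quantity)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "INSERT INTO StockOrders (StockSymbol, Quantity) VALUES (@symbol, @quantity)",
            connection))
        {
            command.Parameters.AddWithValue("@symbol", symbol);
            command.Parameters.AddWithValue("@quantity", quantity);
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}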

Front-end

Front-end had a radical shift around sprint 25 or so, mostly because performance metrics started to come in and it was slow. I was using every capability available: some UIHint attributes, and type-based auto detection with DisplayTemplates and EditorTemplates. Even with all the performance tricks, it was slow. Razor is just not polished enough, and it never will be; the effort now goes into handling everything in JavaScript, and that is fine. That said, I could mitigate the problem by using Html helpers instead of templates. While that meant quite a few changes everywhere, it wasn't that long to do. I cannot say it was the single cause of a slip in the schedule, but as you can start to see, many unplanned 1-2 week items were getting added, and after 6-8 of them you are already 3-4 months late.

JavaScript and CSS

I have one JavaScript file and one CSS file per view (page), plus one JavaScript file and one CSS file for the whole site. Everything goes through Asp.Net bundles, so in production there is no big hit. I am not using any AMD loader because I do not need to; the code is simple. It performs simple tasks and actions, modifies the active view with jQuery, and that's it. The CSS is also very simple, and I am leveraging Bootstrap to remove a lot of complexity and keep the UI responsive from cellphone to desktop. In my day-to-day job I am using TypeScript, with modules, and React. I think this is the future, but it adds a lot of complexity and the performance isn't that much better. Having the server generate the UI might not be the current trend, but Razor aside, I can generate the whole UI in a few tens of milliseconds (mostly under 50ms). That is slower than when it was in PHP, but still faster than a lot of UIs I see these days. I really like React and those big front-end frameworks; I just believe that if you have a website under 20 pages that is not a single-page application and has a simple UI, you do not need to go the "trendy way" of Angular/React/Vue.js. I also believe that if you have an intranet website you should avoid those big frameworks. That said, it's another subject.
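
As a concrete illustration, the per-view bundling is roughly the following System.Web.Optimization registration; the bundle names and file paths are examples, not the site's actual ones.

using System.Web.Optimization;

public static class BundleConfig
{
    public static void RegisterBundles(BundleCollection bundles)
    {
        // One script and one stylesheet shared by the whole site.
        bundles.Add(new ScriptBundle("~/bundles/site").Include(
            "~/Scripts/jquery-{version}.js",
            "~/Scripts/site.js"));
        bundles.Add(new StyleBundle("~/Content/site").Include(
            "~/Content/bootstrap.css",
            "~/Content/site.css"));

        // One script and one stylesheet for a specific view, e.g. the portefolio page.
        bundles.Add(new ScriptBundle("~/bundles/portefolio").Include(
            "~/Scripts/Views/portefolio.js"));
        bundles.Add(new StyleBundle("~/Content/portefolio").Include(
            "~/Content/Views/portefolio.css"));
    }
}

Each view then renders only its own pair with Scripts.Render and Styles.Render, and in release mode the framework serves the combined, minified result.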

Entity Framework

I do not recommend Entity Framework. The initial version with .EDMX was bad, and I went straight to Code First. But the problem is deeper than that. First of all, there are about a hundred ways to configure your entities and to get it wrong. It is way too flexible, and that beast is hard to handle. While my configuration is nicely divided into configuration classes, I had a few hiccups in the first sprints. Like everyone I know who has worked with Entity Framework's migration tool (which generates new setup code when your entities and configurations change), generation works well until it doesn't and you need to regenerate from scratch. Not that it's hard, but you lose your migration steps. Also, harder scenarios are simply not covered by the tool, for example applying it to an existing database with data. The biggest issue was how Entity Framework's context handles references. You may load and want to save a new entity, and Entity Framework will tell you that you have a DbConcurrency problem because the entity has changed, or that some association cannot be saved even if you ignored it, and so on. All that pain also comes with bad performance. While that wasn't the worst part, it wasn't a nice cherry on top either. I easily wasted more than 120 hours (2 months) on problems here and there spread across these 30 months.
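
For context, the configuration classes I mention look roughly like this EF6 sketch; the entities and properties are simplified stand-ins, not the real model.

using System.Collections.Generic;
using System.Data.Entity;
using System.Data.Entity.ModelConfiguration;

// Simplified entities, only for the sketch.
public class Portefolio
{
    public int Id { get; set; }
    public ICollection<StockOrder> Orders { get; set; }
}

public class StockOrder
{
    public int Id { get; set; }
    public int Quantity { get; set; }
    public int PortefolioId { get; set; }
    public Portefolio Portefolio { get; set; }
}

// One code-first configuration class per entity keeps the flexibility manageable.
public class StockOrderConfiguration : EntityTypeConfiguration<StockOrder>
{
    public StockOrderConfiguration()
    {
        ToTable("StockOrders");
        HasKey(o => o.Id);
        Property(o => o.Quantity).IsRequired();
        HasRequired(o => o.Portefolio)
            .WithMany(p => p.Orders)
            .HasForeignKey(o => o.PortefolioId);
    }
}

// The configuration classes are registered from the context.
public class TradingContext : DbContext
{
    public DbSet<StockOrder> StockOrders { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Configurations.Add(new StockOrderConfiguration());
    }
}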

Migration from old to new

Migrating the data took me about two months of work. It was done in two passes: one around the middle of the project, when I was deciding whether to migrate, and one about a month before the actual release. The steps were to dump the MySql database locally and run it through an unstable, primitive tool called Microsoft SQL Server Migration Assistant for MySQL. I call it unstable because I had to go into Task Manager > Details tab > SSMAforMySql.exe > context menu > Set Affinity and check Core 0 only, otherwise it would crash. I also had to run it many times to get all the data transferred. The migration didn't stop there. Once I had the same schema from MySql in MsSql, I had to use a C# project against the new database so the passwords would be hashed with the same algorithm Asp.Net MVC uses. The last and longest step was to write the SQL scripts that transfer the data from the old legacy schema into the new one. The script takes more than 10 hours to run; there are millions and millions of rows to insert.
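
That hashing step boils down to something like this sketch with ASP.NET Identity's PasswordHasher; how the input password is obtained (temporary password, legacy value, and so on) depends on the migration strategy, and everything here is illustrative.

using Microsoft.AspNet.Identity;

public static class PasswordMigration
{
    private static readonly PasswordHasher Hasher = new PasswordHasher();

    // Produces a hash in the format Asp.Net Identity expects at login time.
    public static string ToIdentityHash(string password)
    {
        return Hasher.HashPassword(password);
    }
}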

I do not see how I could have done the migration faster than that. I wasted some time with the tooling, but nothing dramatic.

Routing and Localization

The routing is exactly as I wanted, which means I do not have the culture in the URL: the system figures out from the URL string whether the culture is French or English. This is one of the projects that is now open sourced on GitHub. I support two languages and I am using Microsoft's resource files. I have multiple resource files in different folders, divided by theme, for example "Exceptions.resx", "UI.resx", "DataAnnotation.resx", "HelpMessage.resx", "ValidationMessage.resx", "HomePage.resx", etc. I have about 40 resource files. While that was not a waste of time, I would say that fewer files would be better. Now that I need someone to proofread those files, handling that many files is just not simple. The counterpart is that none of these files has thousands of strings.
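
To give an idea of how the themed files are consumed, here is a hedged sketch using the classes Visual Studio generates from the .resx files; the resource class names (DataAnnotation, ValidationMessage) and keys are assumptions for the example.

using System.ComponentModel.DataAnnotations;

// Sketch: the display name and the validation message both come from themed
// resource files; DataAnnotation and ValidationMessage are the generated classes.
public class RegisterViewModel
{
    [Display(Name = "Email", ResourceType = typeof(DataAnnotation))]
    [Required(ErrorMessageResourceType = typeof(ValidationMessage),
              ErrorMessageResourceName = "EmailRequired")]
    public string Email { get; set; }
}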

Performance

I got a huge surprise here. From my experience developing with the PHP/MySql stack, publishing to a production server always increased the speed; for example, a full rendering at 400ms locally would be 300ms in production. With Azure, not everything sits on one fast server, but everything is in the same data center. The first time I published, the performance went the other way: from 400ms to 5 seconds! I am not exaggerating. Without Redis, the performance is very bad. For comparison, I was using Memcached in the legacy system: without the cache, heavy pages were around 400ms-800ms, and with Memcached around 150ms-200ms. With all the performance improvements I did, heavy pages now load between 400ms and 2000ms, which is still about 5x slower than the legacy system. The problem is that calling Redis, which is not on the same machine as the web server, adds around 75ms per call; if you have to fetch data five times, that adds 300ms or more right there, and if the cache is not hydrated it adds even more with Sql Server. I did a lot of optimization to reduce the number of calls to Redis, set up over 20 webjobs running in the background to hydrate Redis, and did a lot of work to reduce the payload between the web server and Redis. It was fun to tweak the performance, but a whole sprint was dedicated to the issue. Performance is never-ending work, and I think the time taken was required to have a usable system. I do not see waste here, but it is a reason why a whole month was added to the schedule.
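
To illustrate the kind of optimization involved, here is a sketch with StackExchange.Redis that fetches several cached entries in a single round trip instead of one call per key; the connection string and key names are invented.

using StackExchange.Redis;

public static class CacheExample
{
    public static void Run()
    {
        // One multiplexer for the whole application; in production it points to the Azure Redis endpoint.
        ConnectionMultiplexer redis = ConnectionMultiplexer.Connect("localhost");
        IDatabase cache = redis.GetDatabase();

        // Three round trips: roughly 75ms each when Redis is not on the web server.
        RedisValue symbols = cache.StringGet("symbols");
        RedisValue contests = cache.StringGet("contests");
        RedisValue portefolio = cache.StringGet("portefolio:42");

        // One round trip for the same three keys.
        RedisValue[] values = cache.StringGet(new RedisKey[] { "symbols", "contests", "portefolio:42" });
    }
}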

Azure

As we saw in the previous performance section, Azure added a whole month of work because of performance alone. However, performance wasn't the only problem. Configuring Azure with VSTS for continuous integration took me two evenings. I started inside VSTS, trying to add a build step that pushes the built code to Azure, but never succeeded. After trying for an hour, I moved to the Azure side, which can hook into VSTS's repository and do the build itself. That is a bit the reverse of what I would naturally do, but it works; it took me 20 minutes to be up and running. Some work still needs to be done on the VSTS side, because it would be better to have VSTS push to an Azure slot, since VSTS can run your unit tests as a prerequisite before doing any other step.

Using LetsEncrypt took one more evening (it's not a single checkbox...), and configuring Azure's slots and having to create Redis, delete Redis, etc. took a few more (I could not simply flush Redis; I had to delete the instance, which takes a lot of time). I also had to push the database, configure the DNS (which is more complex than in Cpanel), etc. WebJobs were also difficult because there are so many ways to set them up. I started with Visual Studio's UI, where you can use a calendar to schedule when each task starts, but I needed a cron-style syntax. I tried two different approaches; the last one worked but still had some issues, I had to open an issue on GitHub, and so on. Overall, getting everything up and running took me about 3 weeks; I thought it would take 1. I think Azure or any other cloud service is great, but most websites can be handled by a single VPS or server for far less money. I can configure Redis faster than Azure does, without the large monthly cost. For a corporation the cost is justified, since you can cut on IT resources; for a small side project on a low budget (or with almost no revenue after paying for servers and other costs), it squeezes you even more. Azure is also changing a lot in some areas while being stale in others, so you never know if what you are using will still be there in 5 years, or whether you will be forced to update an SDK at some point. On my legacy Linux VPS, I can run everything for a decade without having to worry about it: it just works. Azure is great, and I do not regret it since I learned a lot, but I didn't jump on the happy wagon and pretend it fits every scenario. I also miss being able to configure email like in CPanel on Linux; it literally took 2 minutes to create a new mailbox. On a final side note, Application Insights Analytics is great, but too limited with only 7 days of custom data retention.
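
For the record, one way to get that cron-style syntax is a settings.job file deployed next to a triggered WebJob's executable; the schedule below (every 15 minutes, in Azure's six-field cron format) is only an example, not necessarily the exact approach I ended up with.

{
  "schedule": "0 */15 * * * *"
}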

UX – User Experience

My goal was to provide a more inline help experience. The legacy system had inline help for stock order creation, and every page had a link to a Wiki. A Wiki only works if you have a motivated, participating community that is also a little bit technical and understands what a Wiki is. I also had a PHPBB forum, which worked well because I was creating a PHPBB account in the background when users created their account. In the new system, I wanted to remove the Wiki since I was the only one contributing to it (it had over 50 pages) and it was also a magnet for hackers. I wanted to remove the PHPBB forum because I didn't want to migrate it to a Windows alternative, nor did I have the time to read it. I decided instead to bring help bubbles to every single page. So far, these helpers are not used much (I have some telemetry), and the users who do open them do not read through to the end of the steps either. I also got feedback from users asking for better help. I will have to create videos explaining the features for users who are new to the stock exchange and to the system. There is a steep learning curve for sure, and my initial assumption was wrong: inline help is not enough, and people prefer video to text.

Authentication, Facebook and Twitter Login

The new system lets you connect with a Facebook or Twitter account. Asp.Net MVC has templates for this, but it still isn't perfect, with some hiccups depending on which provider you use; Twitter was the simplest to integrate, for example. That said, I do not see many users using it so far (it has only been 3 months since release, and most people already have their account). This might have been released too early and could have been delayed to a later stage; however, it wasn't more than two evenings of work. I am still very happy with my email sign-up, which requires just a single email address. The legacy system was also very simple but required a password; now it's really just an email.
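
For reference, wiring those providers in the MVC5 template happens in Startup.Auth and looks roughly like this; the keys are placeholders, and the cookie/Identity setup from the template is omitted.

using Owin;

public partial class Startup
{
    public void ConfigureAuth(IAppBuilder app)
    {
        // Cookie and Identity configuration from the template goes here.

        // Placeholder keys; the real values come from the Facebook and Twitter developer portals.
        app.UseFacebookAuthentication(
            appId: "your-facebook-app-id",
            appSecret: "your-facebook-app-secret");

        app.UseTwitterAuthentication(
            consumerKey: "your-twitter-consumer-key",
            consumerSecret: "your-twitter-consumer-secret");
    }
}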

[Image: registration window]

The idea was to get the user into the system as quickly as possible. An email is sent with a temporary password, which lets me validate that it's a real user and thus removes the need for the Captcha I used in the legacy system. I am using SendGrid to send email, and I really like their interface, which tells you whether the user received/opened/clicked the email. So far the delivery ratio for people with Hotmail accounts is better than before, which is great. In the legacy system, I was sending directly over SMTP.
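
Sending the temporary password boils down to something like this sketch against SendGrid's client library (the SendGridClient API of the current Nuget package); the API key, addresses and wording are placeholders.

using System.Threading.Tasks;
using SendGrid;
using SendGrid.Helpers.Mail;

public static class TemporaryPasswordEmail
{
    // Sketch only: the API key, addresses and wording are placeholders.
    public static async Task SendAsync(string apiKey, string userEmail, string temporaryPassword)
    {
        var client = new SendGridClient(apiKey);
        var message = MailHelper.CreateSingleEmail(
            new EmailAddress("no-reply@example.com", "The simulator"),
            new EmailAddress(userEmail),
            "Your temporary password",
            "Your temporary password is " + temporaryPassword,
            "<p>Your temporary password is <strong>" + temporaryPassword + "</strong></p>");

        await client.SendEmailAsync(message);
    }
}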

Mapping

I use AutoMapper to map classes between the Model layer and the ViewModel layer. I still believe that having different classes for the front end and the back end is required. The reason is that there are so many scenarios where the user interface needs specific fields in a specific format that it is much cleaner not to create those properties on your model classes. With Entity Framework it would have been unreasonable anyway, since we would need a lot of configuration to ignore those fields. I have seen other architectures with separate Entity classes, which means mapping ViewModel->Model->Entity, but that is a lot of mapping: easier for Entity Framework, harder for development time. So I think I made the right call. The only downside is that AutoMapper was the most problematic Nuget package to update. I am not sure what is going on with Nuget, but even if a major version implies breaking changes, it shouldn't happen that often over 30 months. That said, it is not a big deal, and I am not on the latest version because I am doing some automation, a more advanced scenario that brings the translated property names and exception messages from the model classes. For example, if FirstName on the User class fails validation, I automatically pull the property name from the resource file and attach it to the view model, along with the view model property.
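
The basic mapping looks like this sketch; the classes are illustrative, and I am showing AutoMapper's instance-based MapperConfiguration API, which is not necessarily the exact version the project is pinned to.

using AutoMapper;

// Simplified classes, only for the sketch.
public class Order
{
    public int Quantity { get; set; }
    public decimal Price { get; set; }
}

public class OrderViewModel
{
    public int Quantity { get; set; }
    public string PriceDisplay { get; set; }
}

public static class MappingExample
{
    // In a real application the configuration is built once at startup, not per call.
    private static readonly IMapper Mapper = new MapperConfiguration(cfg =>
        cfg.CreateMap<Order, OrderViewModel>()
           // UI-specific formatting stays out of the model class.
           .ForMember(vm => vm.PriceDisplay, opt => opt.MapFrom(o => o.Price.ToString("C"))))
        .CreateMapper();

    public static OrderViewModel ToViewModel(Order order)
    {
        return Mapper.Map<OrderViewModel>(order);
    }
}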

Inversion of control (IOC)

Everything is passed from the controller down to the repository by inversion of control. I have been using that pattern for many years and it is a success on all counts, mainly because it makes testing so easy: you can mock the injected interfaces without effort. I didn't waste any time here since I had already used Microsoft Unity a few times before on big projects. While I have witnessed performance issues with injection in some other projects, I didn't see anything wrong in this one. I have to say that I do not pass more than 10 interfaces per controller, which is probably why it is not that heavy. IoC also helped when an instance needed to be a singleton across requests, like logging events, and when production needed to send real emails while development wrote the email to a local .html file. This can be done with a Unity configuration file wired into the release version of the web.config.
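
In code, the registrations look like this sketch (Unity 4-era API); the interfaces and implementations are placeholders, and the real email sender swap happens through Unity's XML configuration in the release web.config rather than in code.

using Microsoft.Practices.Unity;

// Placeholder abstractions for the sketch.
public interface IEventLogger { void Log(string message); }
public class EventLogger : IEventLogger { public void Log(string message) { /* write somewhere */ } }

public interface IEmailSender { void Send(string to, string subject, string body); }
public class FileEmailSender : IEmailSender { public void Send(string to, string subject, string body) { /* write a local .html file */ } }

public static class IocConfig
{
    public static IUnityContainer Build()
    {
        var container = new UnityContainer();

        // Singleton across requests: one logger instance for the whole application.
        container.RegisterType<IEventLogger, EventLogger>(new ContainerControlledLifetimeManager());

        // New instance per resolve; in release, the registration points to a real SMTP sender instead.
        container.RegisterType<IEmailSender, FileEmailSender>();

        return container;
    }
}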

Stripe vs Paypal

I have been selling privileged accounts since the legacy system; they give users more features. This time, in addition to privileged accounts, I am also selling private simulations (contests). Before, I was using Paypal; now I am using Stripe. Why? Paypal's API changed over the years and it wasn't a pleasure to work with. I also ran into trouble, a frozen account, and so on, and I am not a fan of Paypal's lack of innovation. Stripe was, and is, a breeze to work with. Their API is clean, easy to use, and lets users pay by credit card without having any account. The integration was a charm and took less than 6 hours to set up completely. Testing it is also very easy.

Conclusion

A lot of great things happened during this project. The use of VSTS was definitely a good idea: it hosted my source code, ran my builds and organized what I needed to do, all for free. The use of C# was, as always, a delight. Asp.Net MVC was also a good choice; while Razor will never be what it should have been in terms of performance, the syntax was a pleasure to work with. Three years ago, there is no way I would have started the project differently. Right now, if I had to do it again, I would aim for a Single Page Application (SPA) with TypeScript and the new Asp.Net Core as the backend (which has just become usable for more serious projects). That said, I wouldn't redo this project, at least not for another decade. Handling more than 50 tables, 988 classes, 95 interfaces and 5736 methods is a bit too big for learning technologies: you finish the project with a lot of it already out of date. While I really believe a project needs to be bigger than a dozen entities to feel the reality, rather than an illusion, of what frameworks and libraries can do, handling too much just creates a burden. I still have a lot of good memories of MySql and PHP. PHP is not as pretty as C# but it was quick, and MySql was great and easy to work with. Sql Server is more robust and has more features, but I am not using most of them, and concerning speed I haven't seen a huge difference. One thing that worries me is being locked into Azure. I could move to an Asp.Net server, transform those webjobs into services and run on IIS, but at that point I know I can get better value with a Linux/Apache server. The situation is changing with Asp.Net Core, which will reduce that feeling since it will be possible to use any kind of server.

I still have a few months to improve unit tests, work on features that use the existing data and entities, and continue making the features easier for users to use. Application Insights will be a great help over the following months. By summer, I'll be in maintenance mode on this project and will move on to something completely new.

Asp.Net MVC Bootstrap Update Broke Bundle

Upgrading Nuget packages is never an easy process. While the packages are downloaded for you, the migration side effects are often obscure. Some packages are not upgraded often and have recurring issues. Bootstrap is one that doesn't play well with Asp.Net MVC, in a way that is easy to forget.
[Image: browser requesting the font from the wrong \bundles\ path]

Using Bootstrap is not a problem until you use Asp.Net MVC bundles. The problem you may hit is that the browser tries to get the font from \bundles\fonts\glyphicons-halflings-regular.woff2 instead of \fonts\glyphicons-halflings-regular.woff2. The \bundles\ prefix most likely appears because the relative url() references inside the CSS are resolved against the bundle's virtual path rather than against the \Content\ folder, and the trigger is that the Nuget package ships a -min.css version: the bundling engine picks up those pre-minified files, which causes the issue. The workaround is to delete all the .min files; this way, the Asp.Net MVC bundle system uses the non-minified version and produces a proper minified version itself.
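
For context, the problem shows up with a registration of this shape, where the bundle's virtual path lives under ~/bundles and therefore becomes the base URL for the ../fonts references inside bootstrap.css; the paths are illustrative.

using System.Web.Optimization;

public static class BundleConfig
{
    public static void RegisterBundles(BundleCollection bundles)
    {
        // "~/bundles/bootstrap/css" is what the browser resolves "../fonts/..." against,
        // which is how the request ends up under /bundles/fonts/.
        bundles.Add(new StyleBundle("~/bundles/bootstrap/css").Include(
            "~/Content/bootstrap.css"));
    }
}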

A second solution, which I do not like, would have been to copy the fonts into a new bundles folder at the root of the website. But it is easier to delete the .min files than to duplicate the font files.

Boosting Asp.Net MVC performance by not using Razor templates

Here is a real-world scenario: a website running on Azure, deployed in release mode with everything pre-compiled, still taking 400ms to render the whole view. With Glimpse on, we can see that many views are used; partial views, editor templates and display templates each take a few extra milliseconds here and there.

[Image: Glimpse timings for the view rendered with editor and display templates]

Here is the same view rendered with almost no templates. Everything is directly in the view, and the specific components that were editor and display templates got migrated into Html helpers.

[Image: Glimpse timings for the same view without templates]

So, in the end, Asp.Net MVC templates are just time consuming. Rendering a simple view shouldn't take 400ms; it should take about 10% of that, and that is what we get by trimming the templates out.
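
To give an idea of what migrating a template to an Html helper means, here is a minimal sketch of replacing a display template with an extension method; the helper and markup are illustrative, not the site's real code.

using System.Web.Mvc;

public static class StockHtmlHelpers
{
    // Replaces a DisplayTemplate: the markup is built once in C# instead of going
    // through a separate Razor file for every symbol rendered on the page.
    public static MvcHtmlString StockSymbol(this HtmlHelper html, string symbol)
    {
        var span = new TagBuilder("span");
        span.AddCssClass("stock-symbol");
        span.SetInnerText(symbol);
        return MvcHtmlString.Create(span.ToString());
    }
}

In the view, @Html.StockSymbol(Model.StockSymbol) then replaces the @Html.DisplayFor call that went through the template.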