Service Worker, Push Notification and Asp.Net MVC – Part 3 of 3 Server Side

I previously discussed how to configure web push notifications from the client-side perspective, as well as how to send a notification from ASP.Net code, which could be sent from an Azure Webjob. The remaining part is how to send to multiple devices belonging to one user. If each user only ever uses a single browser, the initial solution is good. However, the reality is that users use multiple devices: not only different browsers on different machines, but also jumping from computer to phone and so on. The idea is to register subscriptions by device, not by user.

Google Firebase's documentation briefly explains “Device group messaging”, but the page talks more about topics. I couldn't figure out how to use device group messaging, but topics worked for the same purpose. The idea is to use a single topic per user and to send messages to that topic.

The first big change is to add subscribe and unsubscribe methods that work with the topic API.

public bool UnRegisterTopic(string userIdentifierForAllDevices, string singleDeviceNotificationKey)
{
	var serverApiKey = ConfigurationManager.AppSettings["FirebaseServerKey"];
	var firebaseGoogleUrl = $"https://iid.googleapis.com/iid/v1/{singleDeviceNotificationKey}/rel/topics/{userIdentifierForAllDevices}";

	var httpClient = new WebClient();
	httpClient.Headers.Add("Content-Type", "application/json");
	httpClient.Headers.Add(HttpRequestHeader.Authorization, "key=" + serverApiKey);

	// The topic API expects an empty JSON body; only the HTTP verb differs from RegisterTopic.
	object data = new { };
	var json = JsonConvert.SerializeObject(data);
	byte[] byteArray = Encoding.UTF8.GetBytes(json);
	var responseBytes = httpClient.UploadData(firebaseGoogleUrl, "DELETE", byteArray);
	string responseBody = Encoding.UTF8.GetString(responseBytes);
	dynamic responseObject = JsonConvert.DeserializeObject(responseBody);

	return responseObject.success == "1";
}

public bool RegisterTopic(string userIdentifierForAllDevices, string singleDeviceNotificationKey)
{
	var serverApiKey = ConfigurationManager.AppSettings["FirebaseServerKey"];
	var firebaseGoogleUrl = $"https://iid.googleapis.com/iid/v1/{singleDeviceNotificationKey}/rel/topics/{userIdentifierForAllDevices}";

	var httpClient = new WebClient();
	httpClient.Headers.Add("Content-Type", "application/json");
	httpClient.Headers.Add(HttpRequestHeader.Authorization, "key=" + serverApiKey);

	object data = new { };
	var json = JsonConvert.SerializeObject(data);
	byte[] byteArray = Encoding.UTF8.GetBytes(json);
	var responseBytes = httpClient.UploadData(firebaseGoogleUrl, "POST", byteArray);
	string responseBody = Encoding.UTF8.GetString(responseBytes);
	dynamic responseObject = JsonConvert.DeserializeObject(responseBody);

	return responseObject.success == "1";
}

There is quite a bit of repetition in that code, and you can easily factor it out. The biggest change is the URL: not only is the domain different (https://fcm.googleapis.com/fcm/send before, https://iid.googleapis.com now), but the route segments are different too. The first segment is the device notification key, which is the token generated on the client side by the “getToken” method. The second segment is the user identifier, which I use as the topic. If you really need topics shared across users, you can use any string for the category. In my case, it is simply the unique GUID of the user. This POST HTTP call registers the device for the user under a topic which is the user ID.
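To make the factoring concrete, here is a minimal sketch in JavaScript of the same two calls reduced to a single request builder; the function name buildTopicRequest is mine, not from the project. Only the HTTP verb differs between subscribing (POST) and unsubscribing (DELETE):

```javascript
// Builds the request against the Instance ID topic API described above.
// `token` is the device token from getToken(); `topic` is the user's GUID.
// Subscribe uses POST, unsubscribe uses DELETE, on the same URL.
function buildTopicRequest(serverKey, token, topic, subscribe) {
  return {
    url: "https://iid.googleapis.com/iid/v1/" + token + "/rel/topics/" + topic,
    method: subscribe ? "POST" : "DELETE",
    headers: {
      "Content-Type": "application/json",
      "Authorization": "key=" + serverKey
    },
    body: "{}" // the topic API expects an empty JSON body
  };
}
```

A single helper like this replaces the duplicated setup in both C# methods.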

To send a message to all of a user's devices, the sending code also needs to change.

public bool QueueMessage(string to, string title, string message, string urlNotificationClick)
{
	if (string.IsNullOrEmpty(to))
	{
		return false;
	}
	var serverApiKey = ConfigurationManager.AppSettings["FirebaseServerKey"];
	var firebaseGoogleUrl = "https://fcm.googleapis.com/fcm/send";

	var httpClient = new WebClient();
	httpClient.Headers.Add("Content-Type", "application/json");
	httpClient.Headers.Add(HttpRequestHeader.Authorization, "key=" + serverApiKey);
	var timeToLiveInSecond = 24 * 60 * 60; // 1 day
	var data = new
	{
		to = "/topics/" + to,
		data = new
		{
			notification = new
			{
				body = message,
				title = title,
				icon = "/Content/Images/Logos/BourseVirtuelle.png",
				url = urlNotificationClick,
				sound = "default"
			}
		},
		time_to_live = timeToLiveInSecond
	};

	var json = JsonConvert.SerializeObject(data);
	Byte[] byteArray = Encoding.UTF8.GetBytes(json);
	var responsebytes = httpClient.UploadData(firebaseGoogleUrl, "POST", byteArray);
	string responsebody = Encoding.UTF8.GetString(responsebytes);
	dynamic responseObject = JsonConvert.DeserializeObject(responsebody);

	return responseObject.success == "1";
}

What has changed from sending to a single device? The “to” field, which now targets a topic. The “to” in the method signature is still the user's unique identifier, but instead of sending directly to it, we use it as a topic. We do not use the token generated by the front end, since a new one is generated per device; we only use the user id, which is the topic.
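For reference, the body posted to fcm/send when targeting a topic can be sketched as a small JavaScript builder (a hypothetical helper, mirroring the C# anonymous object above):

```javascript
// Builds the fcm/send body for a topic: "to" is prefixed with "/topics/",
// and the notification travels inside "data" so the service worker's
// background handler receives it as payload.data.notification.
function buildTopicMessage(userId, title, body, url) {
  return JSON.stringify({
    to: "/topics/" + userId,
    data: {
      notification: {
        title: title,
        body: body,
        url: url
      }
    },
    time_to_live: 24 * 60 * 60 // 1 day, in seconds
  });
}
```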

Service Worker, Push Notification and Asp.Net MVC – Part 1 of 3 Client Side

Browsers evolve very rapidly, and for a few years now it has been possible to write JavaScript that runs in the background of the browser. That means code can run even when the user is not on the website. This is useful in many scenarios, and today we will look at one feature: push notifications. The particular setup we will describe uses a service worker that waits for a message from an ASP.Net Azure webjob, written in C#, which pushes a message at a particular time, depending on some value, to a specific user. The end result is that the browser pops up a message box at the bottom right if the user is not on the website, or displays an HTML notification directly on the page if the user is.

This article is part one of three and concentrates on the front end, not the C# code that runs on the server. We will cover the registration with Google Firebase, the service worker's code, and the JavaScript to add to your website.

The first step is to register an account with Google Firebase. This is not very obvious, since almost all examples on the web (at this date) use the raw Service Worker and Push Notification APIs with the legacy Google system instead of Firebase. Both are fairly compatible in terms of the server contract used to generate the message; on the client side, however, they are quite different. Firebase acts as a wrapper over the native Service Worker and Push Notification APIs. You can still use the APIs directly, and in some cases it's the only way to access advanced features.

To create a Firebase account, go to https://console.firebase.google.com and create an account and a project.

Firebase is a library accessible via API keys and a JavaScript API. You can also invoke it through a REST API, which we will see in the second part. The first challenge is to figure out where to get the right API key, because the system has many. The first step is to create the service worker. It is registered when the user visits your website and runs in the background of the browser. The convention is to create a file called “firebase-messaging-sw.js” and put it at the root of your website. The location of the file is important because the service worker can only access assets that are siblings or children of the registered script.
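The default-scope rule behind that placement can be illustrated with a tiny, purely hypothetical helper (these function names are mine): the scope defaults to the directory of the registered script, and a page is only controlled if its path falls under that scope.

```javascript
// Illustrative only, not part of the browser API: a service worker
// registered at `scriptPath` gets, by default, a scope equal to the
// script's directory.
function defaultScope(scriptPath) {
  return scriptPath.slice(0, scriptPath.lastIndexOf("/") + 1);
}

// A page is controlled only if its path sits under the worker's scope,
// which is why the file goes at the root of the website.
function isControlled(scriptPath, pagePath) {
  return pagePath.startsWith(defaultScope(scriptPath));
}
```

A worker at "/firebase-messaging-sw.js" controls the whole site, while one at "/scripts/firebase-messaging-sw.js" would not control "/profile".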

Here is the full code that I have for my Service Worker:

importScripts('https://www.gstatic.com/firebasejs/3.5.0/firebase-app.js');
importScripts('https://www.gstatic.com/firebasejs/3.5.0/firebase-messaging.js');

var config = {
    apiKey: "AIzaSyDe0Z0NtygDUDySNMRtl2MIV5m4Hp7IAm0",
    authDomain: "bourse-virtuelle.firebaseapp.com",
    messagingSenderId: "555061918002",
};
firebase.initializeApp(config);

var messaging = firebase.messaging();
messaging.setBackgroundMessageHandler(function (payload) {
    var dataFromServer = JSON.parse(payload.data.notification);
    var notificationTitle = dataFromServer.title;
    var notificationOptions = {
        body: dataFromServer.body,
        icon: dataFromServer.icon,
        data: {
            url:dataFromServer.url
        }
    };
    return self.registration.showNotification(notificationTitle,
        notificationOptions);
});

self.addEventListener("notificationclick", function (event)
{
    var urlToRedirect = event.notification.data.url;
    event.notification.close();
    event.waitUntil(self.clients.openWindow(urlToRedirect));
});

In short, it uses two Firebase scripts: one for the Firebase core and one for messaging, which is the wrapper around the Push Notification API. The configuration is tricky. The apiKey is taken from the Firebase console, under the desired project, under the project settings gear, in the General tab.

The messagingSenderId is an id available in the tab next to the General tab, called Cloud Messaging.

The initializeApp call connects the service worker to the server to listen for new messages. The setBackgroundMessageHandler function is called when a new message arrives while the website does not have focus: if the website sits in a browser tab that is not the current one, if the website is not open at all, or if the browser is minimized. The case where the user has focus is treated later.

This code gets the data from the server; in my case, it's under the data and notification properties. I set the title, the main message and the icon. The URL is there, but it didn't work at the time of writing, which is why the second handler hooks directly into the Push Notification API's notificationclick event. That handler takes care of the user clicking the notification to open a specific URL. For example, in my case, the notification fires when a specific event occurs, and clicking it opens the page where the user can see the result of the action.

The next step is to have a page where the user can subscribe to push notifications. In my case, it's done in the user's profile. I have a checkbox; when checked, the browser requests the user's authorization and installs the service worker. So, in my profile.js I have the following code:

$(document).ready(function()
{
    initialiseUI();
});
function initialiseUI() {
    $(document).on("click", "#" + window.Application.Variables.IsHavingNotification,
        function requestPushNotification()
        {
            var $ctrl = $(this);
            if ($ctrl.is(":checked"))
            {
                console.log("checked");
                subscribeUser();

            } else
            {
                console.log("unchecked");
                unsubscribeUser();
            }
        });
}

function subscribeUser() {
    var isSubscribed = false;
    var messaging = firebase.messaging();
    messaging.requestPermission()
      .then(function () {
          messaging.getToken()
          .then(function (currentToken) {
              if (currentToken) {
                  updateSubscriptionOnServer(currentToken);
                  isSubscribed = true;
              } else {
                  updateSubscriptionOnServer(null);
              }
              $("#" + window.Application.Variables.IsHavingNotification).prop('checked', isSubscribed);
          })
          .catch(function (err) {
              isSubscribed = false;
              updateSubscriptionOnServer(null);
          });
      })
      .catch(function (err) {
          console.log('Unable to get permission to notify.', err);
      });
}

function unsubscribeUser() {
    var messaging = firebase.messaging();
    messaging.getToken()
    .then(function (currentToken) {
        messaging.deleteToken(currentToken)
        .then(function () {
            updateSubscriptionOnServer(null);
        })
        .catch(function (err) {
            console.log('Unable to delete token. ', err);
        });
    })
    .catch(function (err) {
        console.log('Error retrieving Instance ID token. ', err);
    });
}

function updateSubscriptionOnServer(subscription) {
    var subscriptionDetail = { key: "" };
    if (subscription)
    {
        subscriptionDetail = { key: subscription };
    } else {
        console.log("delete on the server the token");
    }
    
    var apiUrl = window.Application.Url.UrlNotifications;
    var dataToSend = subscriptionDetail;
    $.ajax({
        url: apiUrl,
        type: 'POST',
        data: dataToSend,
        cache: true,
        dataType: 'json',
        success: function (json) {
            if (json.IsValid) {
            } else {
            }
        },
        error: function (xmlHttpRequest, textStatus, errorThrown) {
            console.log('some error occurred', textStatus, errorThrown);
        },
        complete: function () {
        }
    });

}

We allow the user to subscribe and unsubscribe. When subscribing, we ask the browser to request the notification permission. Then we get the token provided by Firebase; it needs to be saved back to the server so the server can later send targeted messages. With this token, we can send a specific message to a specific user. This is where updateSubscriptionOnServer comes into play: it sends the token by Ajax, and it's saved in the database. In my case, I added a column in the user's table to keep track of the token. Unsubscribing sends a null value, which sets the column to null. This way, the server can check whether the user has a Firebase token and only send a message when one is defined.

To verify that all the previous steps executed correctly, you can look in Chrome's developer tools, under Application, and check the service worker.

It's important to understand that this only works under localhost or on an HTTPS website. From Chrome's debug panel, you can unregister the worker, or check “Update on reload” to force a reinstallation of the service worker. This is handy during development to make sure you always run the latest version of your service worker.

The next step is to have your website listen for incoming messages. This covers the scenario where the user is on the website and we do not want to use the browser notification. To do so, we reuse some of the Firebase initialization code from the service worker. In my case, I added a reference to the Firebase script in the master page (_layout.cshtml) to initialize the library. It looks like this:

    <script src="https://www.gstatic.com/firebasejs/3.6.2/firebase.js"></script>
    <script>
      var config = {
        apiKey: "AIzaSyDe0Z0NtygDUDySNMRtl2MIV5m4Hp7IAm0",
        authDomain: "bourse-virtuelle.firebaseapp.com",
        messagingSenderId: "555061918002",
      };
      firebase.initializeApp(config);
    </script>

I also have a global JavaScript file, used on every page, where I added the message listener.

$(document).ready(function()
{
    var messaging = firebase.messaging();
    messaging.onMessage(function(payload)
    {
        var dataFromServer = JSON.parse(payload.data.notification);
        var myMessageBar = new MessageBar();
        myMessageBar.setMessage(dataFromServer.title + " : " + dataFromServer.body);
    });
});

The onMessage listener fires when the user has focus on the website. So, instead of the service worker handling the message, this handler receives the data. This gives the advantage of being able to add the message directly to the webpage's DOM, something the service worker cannot do. It also has the convenience of putting the notification in the user's field of view instead of outside the page.

At this point, you can use any HTTP tool to send a message to Firebase. You can use console.log to output the token and forge an HTTP request with your web API key and sender id. I won't give the details in this post; a future post will show how to handle it with a C# webjob that sends the HTTP request.
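As a sketch of forging that request by hand, assuming the legacy fcm/send endpoint and a server key from the Cloud Messaging tab, a hypothetical builder (the name buildFcmRequest is mine) could look like this; the token is the value printed by console.log from getToken():

```javascript
// Builds a complete fcm/send request targeting a single device token.
// The server key authorizes the call; the token identifies the device.
function buildFcmRequest(serverKey, token, title, body) {
  return {
    url: "https://fcm.googleapis.com/fcm/send",
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Authorization": "key=" + serverKey
      },
      body: JSON.stringify({
        to: token,
        data: { notification: { title: title, body: body } }
      })
    }
  };
}
```

Passing the two pieces to any HTTP client (fetch, curl, Postman) then delivers a test message to that device.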

Service workers allow you to do a lot more than push notifications. This article covered the basics of using Google Firebase as the backbone for your own backend infrastructure (covered in a future article) to send a message and have your client receive it. Several pieces of code are needed in specific places.

Boosting Asp.Net MVC performance by not using Razor templates

Here is a real case scenario: a website running on Azure, deployed in release mode with everything pre-compiled, still hitting 400 ms to render the whole view. With Glimpse on, we can see that many views are used: partial views, editor templates and display templates, each taking a few extra milliseconds here and there.

[Image: ViewWithEditorTemplate]

Here is the same view rendered with almost no templates. Everything is directly in the view, and for specific components, the editor and display templates were migrated into Html helpers.

[Image: ViewWithoutEditorTemplate]

So, in the end, ASP.Net MVC templates are simply time consuming. Rendering a simple view shouldn't take 400 ms; it should take about 10% of that, and that is what we get by trimming the templates out.

How to redirect Http to Https only in production

You are working locally without an SSL certificate, and in production with one. The simplest way to handle both cases is to have a configuration that switches depending on whether you are on your production server or on your local dev box. Here are two solutions. The first one is well known on the Internet but requires IIS with the rewrite module. This is not a problem with Azure, and even locally it's not a big deal because the module can be downloaded from the IIS manager console, under Web Platform. The first solution is to change web.release.config to add the redirection in the deployed files only. This is done like this:

<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <system.webServer>
    <rewrite xdt:Transform="Insert">
      <rules>
        <rule name="Redirect HTTP to HTTPS">
          <match url="(.*)" />
          <conditions>
            <add input="{HTTPS}" pattern="off" ignoreCase="true" />
          </conditions>
          <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Permanent"/>
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>

The second solution is simpler, because it's just a code change. However, the request then needs to go through the ASP.Net pipeline, which is more demanding for your web server. You should redirect as early as you can, and doing it at the IIS level is the best place. Nevertheless, it's always good to have both solutions on hand.

In global.asax.cs:

if (!HttpContext.Current.IsDebuggingEnabled)
{
     filters.Add(new RequireHttpsAttribute());
}

Localized URL with Asp.Net MVC with Traversal Feature

I already wrote about how to have URLs in multiple languages in ASP.Net MVC without specifying the language in the URL. I then showed how to configure the routing with a Fluent API that helps create routes. In this article, I have an improved version of the previous code.

  • Routing Traversal with the Visitor Pattern
  • Mirror Url Support
  • Add Routing for Default Domain Url
  • Associate Controller to a Specific NameSpace

Routing Traversal with the Visitor Pattern

The biggest improvement since the last article is the ability to traverse the whole routing tree to search for a specific route. While the solution works with ASP.Net MVC by hooking easily into AreaRegistrationContext and RouteCollection, you may want to traverse the tree to get the translated URL in a project outside the web project. In any case, the new solution lets you traverse with different logic without changing anything: the secret is the Visitor pattern. Route classes accept a visitor in which logic can be executed. Here is an example of a concrete visitor that gets the localized url from an area, controller and action.

// Arrange
var visitor = new RouteLocalizedVisitor(LocalizedSection.EN, Constants.Areas.MODERATOR, "Symbol", "SymbolChangeList", null, null);

// Act
RoutesArea.AcceptRouteVisitor(visitor);

// Assert
var result = visitor.Result().FinalUrl();
Assert.AreEqual("Moderation/Symbol-en/Symbol-Change-List",result);

How is this possible? With two interfaces: one implemented by every element (area, controller, action, and the lists of these three) and one for the visitor.

/// <summary>
/// A route element is an Area, a Controller, an Action or a list of all these threes.
/// </summary>
public interface IRouteElement
{
    /// <summary>
    /// Entry point for the visitor into the element
    /// </summary>
    /// <param name="visitor"></param>
    void AcceptRouteVisitor(IRouteVisitor visitor);
}

/// <summary>
/// A visitor is the code that will traverse the configuration (tree) of routes.
/// </summary>
public interface IRouteVisitor
{
    /// <summary>
    /// Logic to be done by the visitor when this one visit an Area
    /// </summary>
    /// <param name="element"></param>
    /// <returns>True if a route matching the area criteria has been found</returns>
    bool Visit(AreaSectionLocalized element);

    /// <summary>
    /// Logic to be done by the visitor when this one visit a Controller
    /// </summary>
    /// <param name="element"></param>
    /// <returns>True if a route matching the controller criteria has been found</returns>
    bool Visit(ControllerSectionLocalized element);

    /// <summary>
    /// Logic to be done by the visitor when this one visit an Action
    /// </summary>
    /// <param name="element"></param>
    /// <returns>True if a route matching the action criteria has been found</returns>
    bool Visit(ActionSectionLocalized element);

    /// <summary>
    /// Flag to indicate that a route has been found and that subsequent visits can be cancelled.
    /// This is to improve performance.
    /// </summary>
    bool HasFoundRoute { get; }
}

The IRouteElement interface is the entry point for traversing the route tree. It is also the interface used to move from one node to the next, and it is implemented by every node (area, controller, action, and the lists of these three). The consumer of the fluent API shouldn't care about anything beyond passing its visitor here. For the curious, here is the implementation on the controller.

public void AcceptRouteVisitor(IRouteVisitor visitor)
{
    if (visitor.Visit(this))
    {
        foreach (var action in this.ActionTranslations)
        {
            action.AcceptRouteVisitor(visitor);
            if (visitor.HasFoundRoute)
            {
                break;
            }
        }
    }
}

The implementation is very similar for the area. What it does is let the visitor visit the controller node; if the node matches (by controller name), it then visits every child (action) of the controller. To improve performance, the loop stops as soon as a matching action is found. The most interesting part is creating a visitor. The visitor is called by the tree on every AcceptRouteVisitor, and it holds the logic of what you want to do. Here is the full code to get a localized route.

/// <summary>
/// Visitor to find, from generic information, a localized route in a tree of routes
/// </summary>
public class RouteLocalizedVisitor: IRouteVisitor
{
    private readonly CultureInfo culture;
    private readonly string area;
    private readonly string controller;
    private readonly string action;
    private readonly string[] urlInput;
    private readonly string[] tokens;
    private readonly RouteReturn result;


    /// <summary>
    /// 
    /// </summary>
    /// <param name="culture">Culture used for the route to Url convertion</param>
    /// <param name="area">Area requested. Can be null.</param>
    /// <param name="controller">Controller requested. This cannot be null.</param>
    /// <param name="action">Action requested. This cannot be null</param>
    /// <param name="urlInput">Specific input. Can be null.</param>
    /// <param name="tokens">Custom localized token. Can be null.</param>
    public RouteLocalizedVisitor(CultureInfo culture, string area, string controller, string action, string[] urlInput, string[] tokens)
    {
        if (controller == null)
        {
            throw new ArgumentNullException(nameof(controller));
        }
        if (action == null)
        {
            throw new ArgumentNullException(nameof(action));
        }
        this.culture = culture;
        this.area = area;
        this.controller = controller;
        this.action = action;
        this.urlInput = urlInput;
        this.tokens = tokens;
        this.result = new RouteReturn();
    }

    /// <summary>
    /// Visitor action for area. If the area match, the result is updated with the localized area name
    /// </summary>
    /// <param name="element">Area visited</param>
    /// <returns>True if found; False if not found</returns>
    public bool Visit(AreaSectionLocalized element)
    {
        if (element.AreaName == this.area)
        {
            this.result.UrlParts[Constants.AREA] = element.Translation.First(d => d.CultureInfo.Name == this.Culture.Name).TranslatedValue;
            return true;
        }
        else
        {
            return false;
        }
    }

    /// <summary>
    /// Visitor action for controller. If the controller match, the result is updated with the localized controller name
    /// </summary>
    /// <param name="element">Controller visited</param>
    /// <returns>True if found; False if not found</returns>
    public bool Visit(ControllerSectionLocalized element)
    {
        if (element.ControllerName == this.controller)
        {
            this.result.UrlParts[Constants.CONTROLLER] =  element.Translation.First(d => d.CultureInfo.Name == this.Culture.Name).TranslatedValue;
            return true;
        }
        else
        {
            return false;
        }
    }

    /// <summary>
    /// Visitor action for action. If the action match, the result is updated with the localized action name
    /// </summary>
    /// <param name="element">Action visited</param>
    /// <returns>True if found; False if not found</returns>
    public bool Visit(ActionSectionLocalized element)
    {
        var urlPartToAddIfGoodPart = new Dictionary<string, string>();
        if (element.ActionName == this.action)
        {
            if (!this.ExtractTokens(element, urlPartToAddIfGoodPart))
            {
                return false;
            }

            if (!this.ExtractUrlPartValues(element, urlPartToAddIfGoodPart))
            {
                return false;
            }

            this.result.UrlParts[Constants.ACTION] = element.Translation.First(d => d.CultureInfo.Name == this.Culture.Name).TranslatedValue;
        }
        else
        {
            return false;
        }
            
            
        this.RemoveOptionalWithDefaultEmpty(element, urlPartToAddIfGoodPart);
        urlPartToAddIfGoodPart.ToList().ForEach(x => this.result.UrlParts.Add(x.Key, x.Value)); //Merge the result
        this.result.UrlTemplate = element.Url;
        this.result.HasFoundRoute = true;
        return true;

    }

    /// <summary>
    /// Remove optional values by adding them in the UrlPart with an empty string, which makes GetFinalUrl replace the {xxx} with nothing
    /// </summary>
    /// <param name="element"></param>
    /// <param name="urlPartToAddIfGoodPart"></param>
    private void RemoveOptionalWithDefaultEmpty(ActionSectionLocalized element, Dictionary<string, string> urlPartToAddIfGoodPart)
    {
        if (element.Values != null)
        {
            var dict = (RouteValueDictionary) element.Values;
            foreach (var keyValues in dict)
            {
                var remove = this.urlInput == null || (this.urlInput != null && this.urlInput.All(f => f != keyValues.Key));
                if (remove)
                {
                    urlPartToAddIfGoodPart[keyValues.Key] = string.Empty;
                }
            }
        }
    }

    /// <summary>
    /// If the user requests a url part, we let it through (so the user can replace it with his value). If not defined in UrlPart, then use the default value.
    /// </summary>
    /// <param name="element"></param>
    /// <param name="urlPartToAddIfGoodPart"></param>
    /// <returns></returns>
    private bool ExtractUrlPartValues(ActionSectionLocalized element, Dictionary<string, string> urlPartToAddIfGoodPart)
    {
        //Default Values : check if there, nothing to replace
        if (this.urlInput != null)
        {
            foreach (string input in this.urlInput)
            {
                if (element.Url.IndexOf(input, StringComparison.CurrentCultureIgnoreCase) >= 0)
                {
                    var routeValues = (RouteValueDictionary) element.Values;
                    var isDefinedValue = (routeValues != null) && routeValues.Keys.Contains(input);
                    if (isDefinedValue)
                    {
                        var defaultValue = routeValues[input].ToString();
                        if (defaultValue == string.Empty)
                        {
                            urlPartToAddIfGoodPart[input] = "{" + input + "}";
                        }
                        else
                        {
                            urlPartToAddIfGoodPart[input] = defaultValue;
                        }
                    }
                    else
                    {
                        //Default if not empty
                        urlPartToAddIfGoodPart[input] = "{" + input + "}";
                    }
                }
                else
                {
                    return false;
                }
            }
        }
        return true;
    }

    /// <summary>
    /// Get localized value for every tokens
    /// </summary>
    /// <param name="element"></param>
    /// <param name="urlPartToAddIfGoodPart"></param>
    /// <returns></returns>
    private bool ExtractTokens(ActionSectionLocalized element, Dictionary<string, string> urlPartToAddIfGoodPart)
    {
        if (this.tokens != null)
        {
            if (element.Tokens == null)
            {
                return false;
            }
            for (int i = 0; i < this.tokens.Length; i++)
            {
                if (element.Tokens.ContainsKey(this.tokens[i]))
                {
                    var tokenFound = element.Tokens[this.tokens[i]];
                    var tokenTranslation = tokenFound.First(d => d.CultureInfo.Name == this.Culture.Name);
                    urlPartToAddIfGoodPart[this.tokens[i]] = tokenTranslation.TranslatedValue;
                }
                else
                {
                    return false;
                }
            }
        }
        return true;
    }

    /// <summary>
    /// Indicate if a route has been found. This means that every condition was met
    /// </summary>
    public bool HasFoundRoute
    {
        get { return this.result.HasFoundRoute; }
    }


    public CultureInfo Culture
    {
        get { return this.culture; }
    }



    public RouteReturn Result()
    {
        return this.result;
    }
}

This code lets you specify an area (or not), a controller, an action, expected values to be passed, and tokens. If a value has a default, that default is used. If the default value is empty, the part is omitted from the url. Tokens are simply translated. Here are two examples:

public static ControllerSectionLocalizedList RoutesController = FluentLocalizedRoute
											.BuildRoute()
										    .ForBilingualController("Account", "Account-en", "Compte")
												.WithBilingualAction("Profile", "Profile-en", "Afficher-Profile")
												   .WithDefaultValues(new { username = UrlParameter.Optional })
												   .WithUrl("{action}/{username}")
											.ToList();
[TestMethod]
public void GivenARouteToVisit_WhenNoAreaWithDefaultValue_ThenReturnRouteWithoutAreaWithDefaultValue()
{
    // Arrange
    var visitor = new RouteLocalizedVisitor(LocalizedSection.EN, null, "Account", "Profile", null, null);

    // Act
    RoutesController.AcceptRouteVisitor(visitor);

    // Assert
    var result = visitor.Result().FinalUrl();
    Assert.AreEqual("Profile-en",result);
}

[TestMethod]
public void GivenARouteToVisit_WhenNoAreaWithDefaultValueSet_ThenReturnRouteWithoutAreaWithDefaultValue()
{
    // Arrange
    var visitor = new RouteLocalizedVisitor(LocalizedSection.EN, null, "Account", "Profile", new [] {"username"}, null);

    // Act
    RoutesController.AcceptRouteVisitor(visitor);

    // Assert
    var result = visitor.Result().FinalUrl();
    Assert.AreEqual("Profile-en/{username}", result);
}

Mirror Url Support

A mirror URL is the capability to have more than one URL for a specific route, which is useful when several URLs should map to the same action. It is called a mirror URL because the real URL is not affected: generating a URL from route values always produces the main URL, never the mirror. The change is inside the fluent URL builder, which adds the mirror URL to the list of actions.

public IRouteBuilderAction_ToListOnlyWithAnd WithMirrorUrl(string url)
{
	this.AddInActionList();
	var mirrorAction = new ActionSectionLocalized(this.currentAction.ActionName
		, this.currentAction.Translation
		, this.currentAction.Values
		, this.currentAction.Constraints
		, url);
	var s = new RouteBuilderAction(this.currentControllerSection, mirrorAction, this.routeBuilder,this.routeBuilderController);
	this.currentControllerSection.ActionTranslations.Add(mirrorAction);
	this.currentAction = mirrorAction;
	return s;
}
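For illustration, here is a hypothetical fluent registration (the controller, action and URL names are made up for this sketch) that attaches a mirror URL after the main one:

```csharp
// "About-Us" is the main URL; "Company" is a mirror URL that
// resolves to the same action but is never produced when
// generating URLs from route values.
public static ControllerSectionLocalizedList RoutesWithMirror = FluentLocalizedRoute
    .BuildRoute()
    .ForBilingualController("Home", "Home-en", "Accueil")
        .WithBilingualAction("About", "About-en", "A-Propos")
            .WithUrl("About-Us")
            .WithMirrorUrl("Company")
    .ToList();
```

With this registration, both /About-Us and /Company reach the same action, but URL generation only ever emits /About-Us.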

Add Routing for Default Domain Url

This new feature lets you attach an action to the root URL, the domain itself. In short, it sets the controller and action as default values, so neither is required in the URL.

public IRouteBuilderAction_ToListWithoutUrl AddDomainDefaultRoute(string controller, string action)
{
	var controller1 = ForBilingualController("{controller}", "{controller}", "{controller}");
	var action1 = controller1.WithBilingualAction("{action}", "{action}", "{action}");
	var action2 = action1.WithDefaultValues(Constants.CONTROLLER, controller);
	var action3 = action2.WithDefaultValues(Constants.ACTION, action);
	var action4 = action3.WithUrl("{controller}/{action}");
	return action4;
}
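As a sketch (assuming a HomeController with an Index action), the registration for the domain root could look like this:

```csharp
// Requests to the domain root ("/") resolve to Home/Index because
// both are supplied as default values and omitted from the URL.
public static ControllerSectionLocalizedList DefaultRoute = FluentLocalizedRoute
    .BuildRoute()
    .AddDomainDefaultRoute("Home", "Index")
    .ToList();
```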

Associate Controller to a Specific NameSpace

The last modification adds the possibility to associate a namespace with the controller. This is required when the same controller name exists in different namespaces, to avoid collisions. This is also part of the fluent API. In short, it sets the namespace on the controller section if none is defined; if the section already has namespaces, the new one is appended to the array.

public IRouteBuilderControllerAndControllerConfiguration AssociateToNamespace(string @namespace)
{
	if (currentControllerSection.NameSpaces == null)
	{
		currentControllerSection.NameSpaces = new[] { @namespace };
	}
	else
	{
		var currentNamespaces = currentControllerSection.NameSpaces;
		var len = currentControllerSection.NameSpaces.Length;
		Array.Resize(ref currentNamespaces, len + 1);
		currentNamespaces[len] = @namespace; // the new slot is at index len after the resize
		currentControllerSection.NameSpaces = currentNamespaces;
	}
   
	return this;
}
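A hypothetical registration using it (the namespace here is invented for the example) chains AssociateToNamespace on the controller section:

```csharp
// Two "Account" controllers may exist in different assemblies;
// the namespace tells the routing engine which one to use.
public static ControllerSectionLocalizedList AdminRoutes = FluentLocalizedRoute
    .BuildRoute()
    .ForBilingualController("Account", "Account-en", "Compte")
        .AssociateToNamespace("MyApp.Areas.Admin.Controllers")
        .WithBilingualAction("Profile", "Profile-en", "Afficher-Profile")
            .WithUrl("{action}")
    .ToList();
```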

A change is also required inside the LocalizedRoute class.

private void AdjustForNamespaces()
{
	var namespaces = this.ControllerTranslation.NameSpaces;
	bool useNamespaceFallback = (namespaces == null || namespaces.Length == 0);
	base.DataTokens["UseNamespaceFallback"] = useNamespaceFallback;
	if ((namespaces != null) && (namespaces.Length > 0))
	{
		base.DataTokens["Namespaces"] = namespaces;
	}
}

These changes are nice additions to the previous post. You can find the whole source code on GitHub.

Telemetry with Application Insights for Website and Webjobs

If you have a website and also some WebJobs, you may want both of them to use the same library for your telemetry. One idea is to create a shared project that both projects reference. This shared project can have a class that abstracts the telemetry calls, and the real implementation can use Microsoft Azure Application Insights to send telemetry to Azure. As the official documentation says, your website needs the Microsoft.ApplicationInsights.Web and Microsoft.ApplicationInsights.WindowsServer packages. What you need to know is that the shared project also needs the Web and WindowsServer packages, and the WebJobs project needs the WindowsServer package as well. If you don't add them, you will get exceptions on Telemetry.Active…

Finally, you should always give some time for the telemetry to be sent after it is flushed. Here is a snippet of the method that send the constructed telemetry from my Telemetry class in the shared project.

private void Send(string eventName, Dictionary<string, string> properties, Dictionary<string, double> metrics)
{
	this.telemetry.TrackEvent(eventName
	  , properties
	  , metrics
	  );
	this.telemetry.Flush();
	System.Threading.Thread.Sleep(5000);
}

The 5-second sleep is more than enough; you can use less. The important thing is to give the telemetry enough time to be sent to Azure before the process ends.
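To show how a WebJob would consume the shared project, here is a sketch. The Functions class and the TrackJobEvent wrapper name are hypothetical; only the underlying TrackEvent/Flush calls from Application Insights are real API.

```csharp
using System.Collections.Generic;

// Hypothetical WebJob entry point using the shared Telemetry class.
// It assumes the shared class exposes a public method that forwards
// to the private Send shown above.
public class Functions
{
    private static readonly Telemetry Telemetry = new Telemetry();

    public static void ProcessQueueMessage(string message)
    {
        Telemetry.TrackJobEvent("QueueMessageProcessed",
            new Dictionary<string, string> { { "Message", message } },
            new Dictionary<string, double> { { "Length", message.Length } });
        // The Sleep inside Send matters most here: a WebJob process
        // can exit right after this call, before the buffer is sent.
    }
}
```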

Redis Experimentation with Full List Cache against using Redis Sorted List

I am improving the performance of a system right now with Redis and the StackExchange library. One particular case was that I needed to cache a list of data ordered by rank, from a value that changes often. One requirement is that two items can have the same rank. For example:

Rank - Data   - Value
1    - User 1 - 100
2    - User 2 - 99
3    - User 4 - 99
4    - User 5 - 97

The data is in fact a serialized object that contains multiple classes. To keep this article light, I will just use a string; nevertheless, keep in mind that this is not a simple string identifier. The value column is required by Redis when using the sorted list (Redis calls it a sorted set). In reality, this value lives inside the data, in a property.

This information must also be paged, because the list can grow to around five thousand entries.

The first step was to measure the time when using the database. I got an average of 264 ms per query for 20 items on a set of 200 items. The database contains thousands of entries, and the page uses a clause to filter down depending on other criteria defined inside the data (inside the class that we serialize). The next step was to use Redis as a simple cache: once we get the result from the database, we store it for a short time. The first hit keeps the same average because it goes to the database, but subsequent requests hit Redis instead. This was 50% faster, with an average of 125 ms. The key was built from the type of list, the filter attribute and the page number, for example "MyListOfObjectXXX_PartitionYYY_Page_1". The speed was interesting; I was aiming for around 100 ms but was satisfied with the result. The measured time also includes deserializing the objects into a generic list of all 20 results; I count the deserialization in my benchmark because I was also counting the ORM time to instantiate the objects.

My concern with that solution is that every object can change its value at any time, and the value changes the rank as a consequence. Since I also cache the data with a separate key for each instance, I duplicate this information in the cache. The size of the cache can be a problem in the long run, but the bigger problem is that the information becomes desynchronized. In fact, the individually cached version, with a key like "MyData_Key_1", is the source of truth in the system. I set an expiry on the list because it is not the real source of data: instead of invalidating it like the rest of the software does when an entity's values change, I let it expire and then rebuild it. A user who drills down from the list still gets up-to-date data. This is the cost to pay (so far): a one-minute delay.

db.StringSet("MyListOfObjectFoo_PartitionRed_Page_1", myListOfDataForPage1, TimeSpan.FromMinutes(1));

To overcome this issue, Redis offers the ability to store an ordered list sorted by a score, and several entries can share the same score, which produces the same rank. So far, this is exactly the answer to the problem. However, that solution does not fix having to duplicate the data in the cache. The sorted list can be queried by range, which is interesting for paging, but not by unique key. Thus, it only solves the desynchronized-value problem, since I can easily push an entry into the sorted list at a specific (updated) rank.

// Initial push
db.SortedSetAdd("MyListOfObjectFoo", new[] {
                    new SortedSetEntry("User 1",100),
                    new SortedSetEntry("User 2",99),
                    new SortedSetEntry("User 3",99)});

// Later, when one entity changes to a value of 100: this produces two entries at rank 1.
db.SortedSetAdd("MyListOfObjectFoo", objectToCache, 100);

This was surprising in many ways. The first problem was that if several entries share the same score, it is not possible to use a second ordering value from the object: you are stuck with the score you set, which is a double. That allows some mathematical tricks, but if you want an alphabetical secondary sort, you have to do it manually in C#. I didn't dig deeper into that solution because of the second, bigger problem: performance. To get the information, you use the get-by-range method.

db.SortedSetRangeByRank("MyListOfObjectFoo", 0, 19); // ranks are zero-based: the first 20 entries

From there, you need to loop and deserialize all the values, which is the same tax we pay when caching the whole page in a single Redis key-value entry. However, the performance was disastrous: my average over three runs was 1900 ms. This was really surprising, and I double-checked everything because it didn't make any sense to me. My initial hypothesis had been that Redis was highly optimized for this kind of scenario; I was wrong, but the fault is not Redis. After some investigation, I found that the deserialization, done with the Json.Net library, has a much harder time deserializing a very complex object 20 times than deserializing one list of 20 objects. This is mostly because, when serializing a list, an object that already appears elsewhere is not serialized again but referenced: instead of a deep object, Json.Net emits something like "$ref": "20". This has a huge impact on performance.

I finally decided to optimize my model classes and use lighter classes for this page. Instead of a list of objects with many rich sub-objects, a simple list of a basic class with plain properties did an awesome job. The list that took 1900 ms to get from Redis and deserialize now takes less than 0.17 ms. That is right, not a typo: less than a single millisecond.
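To illustrate what "lighter classes" means (the type, property and variable names here are invented for the example), cache a flat projection of the page instead of the rich object graph:

```csharp
// Flat DTO: no nested rich objects, so Json.Net never needs
// the "$ref" reference-tracking machinery when deserializing.
public class UserRankDto
{
    public int Rank { get; set; }
    public string UserName { get; set; }
    public double Value { get; set; }
}

// Project the rich entities into the flat DTO before caching the page.
var page = richEntities
    .OrderByDescending(e => e.Value)
    .Take(20)
    .Select((e, i) => new UserRankDto { Rank = i + 1, UserName = e.Name, Value = e.Value })
    .ToList();
db.StringSet("MyListOfObjectFoo_PartitionRed_Page_1",
    JsonConvert.SerializeObject(page),
    TimeSpan.FromMinutes(1));
```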

I am still learning how to get the most out of Redis, and so far I like the flexibility it offers compared to Memcached, which I used for more than a decade. I will keep you informed of any new optimization I find. In the short term, I think a solution may be to cache not the whole complex object but just part of it, in an aggregate view of objects.

How to Extend Glimpse for Redis

Glimpse is the best real-time profiler/diagnostic add-on you can have for your Asp.Net MVC solution. I will not describe all its capabilities in this article, but in one sentence: Glimpse collects, for your Asp.Net MVC project, the timing of every call, such as filters, actions, database calls, etc. Unfortunately, no extension exists for Redis. Nevertheless, creating a custom extension for the timeline is not too hard. However, the documentation is very dry, and it is not obvious what you can and cannot extend. This is really sad, and the extensibility model of Glimpse is pretty limited; for example, you cannot extend the HUD.

The objective of the Glimpse extension we are building in this article is to add to Glimpse's timeline, for every cache call, the start time, end time, duration, and the method name and key used. Here is the end result:

[Screenshot: Glimpse timeline showing the cache calls]

The first thing to know is that this extension will not be specific to Redis; it will work for any cache system. In my project, I have an abstract Cache.cs class, and my Redis implementation inherits from it. That class contains many methods, like Set, Get, Delete, etc. Here is the Set method.

public void Set<T>(string key, T objectToCache, TimeSpan? expiry = null)
{
    if (string.IsNullOrEmpty(key))
    {
        throw new ArgumentNullException("key");
    }
    if (this.isCacheEnable)
    {
        var serializedObjectToCache = Serialization.Serialize(objectToCache);
        if (!this.ExecuteUnderCircuitBreaker(()=>this.SetStringProtected(key, serializedObjectToCache, expiry),key))
        {
            Log.Error(string.Format("Cannot Set {0}", key));
        }
    }
}

As you can see, the method serializes the object to cache and calls the SetStringProtected method. One particularity is that the call is wrapped in a function named ExecuteUnderCircuitBreaker, which implements the circuit breaker design pattern. Whatever the pattern, every call to the cache goes through this function. If we strip out the circuit breaker code, this is where we can add the entry point for the Glimpse extension.

protected bool ExecuteUnderCircuitBreaker(Action action, string key, [CallerMemberName]string callerMemberName="")
{
   using (var glimpse = new GlimpseCache(key, callerMemberName))
   {
      //Code removed here about circuit breaker
      action();
      return true; // the real return value comes from the circuit breaker logic removed above
   }
}

The important part for the moment is that every cache call is proxied by this method, which executes the Redis action between the construction and the disposal of a GlimpseCache object. The GlimpseCache class starts a timer when it is constructed and stops it when it is disposed.

public class GlimpseCache:IDisposable
{
    private readonly GlimpseCacheCommandTracer tracer;
    public GlimpseCache(string key, string commandName)
    {
        this.tracer = new GlimpseCacheCommandTracer();
        tracer.CommandStart(commandName, key);
    }

    public void Dispose()
    {
        if (tracer != null)
        {
            tracer.CommandFinish(); 
        }
    }
}

The core code is in the GlimpseCacheCommandTracer. The tracer uses IMessageBroker and IExecutionTimer to read the configuration: it gets Glimpse's settings from the configuration file (web.config), including whether Glimpse is active. It also gives you hooks to start and stop the timer, which lets us get into the timeline by publishing an event. This class also configures how the information is displayed: you can define the label, the color and the highlight.

public class GlimpseCacheCommandTracer 
{
    private IMessageBroker messageBroker;
    private IExecutionTimer timerStrategy;

    private IMessageBroker MessageBroker
    {
        get { return messageBroker ?? (messageBroker = GlimpseConfiguration.GetConfiguredMessageBroker()); }
        set { messageBroker = value; }
    }

    private IExecutionTimer TimerStrategy
    {
        get { return timerStrategy ?? (timerStrategy = GlimpseConfiguration.GetConfiguredTimerStrategy()()); }
        set { timerStrategy = value; }
    }
        
    private const string LABEL = "Cache";
    private const string COLOR = "#555";
    private const string COLOR_HIGHLIGHT = "#55ff55";
        
    private string command;
    private string key;
    private TimeSpan start;

    public void CommandStart(string command, string key)
    {
        if (TimerStrategy == null)
            return;
        this.start = TimerStrategy.Start();
        this.command = command;
        this.key = key;
    }


    public void CommandFinish()
    {
        if (TimerStrategy == null || MessageBroker == null)
            return;

        var timerResult = TimerStrategy.Stop(start);

        var message = new CacheTimelineMessage(this.command, this.key)
                .AsTimelineMessage(command + ": " + key, new TimelineCategoryItem(LABEL, COLOR, COLOR_HIGHLIGHT))
                .AsTimedMessage(timerResult);

        MessageBroker.Publish(message);
    }
}

The CommandFinish method, called by Dispose, stops the timer for this event and builds the message to be added to the timeline. In this example, we display the command and the key. The third and last class you need is CacheTimelineMessage. This class inherits from Glimpse's MessageBase and implements ITimelineMessage; it is what is used to display the information in the timeline.

    public class CacheTimelineMessage : MessageBase, ITimelineMessage
    {
        public string Command { get; set; }
        public string Key { get; set; }

        #region From Interface
        public TimelineCategoryItem EventCategory { get; set; }
        public string EventName { get; set; }
        public string EventSubText { get; set; }
        public TimeSpan Duration { get; set; }
        public TimeSpan Offset { get; set; }
        public DateTime StartTime { get; set; }
        #endregion
        public CacheTimelineMessage(string command, string key)
        {
            this.Command = command;
            this.Key = key;

        }
    }

I am pretty sure we could do better and maybe even show more information, but I am satisfied with the insight I now have from these few lines of code added to Glimpse.

Using Redis in Asp.Net in an Enterprise System

I wrote about how to integrate Redis into Asp.Net MVC a few days ago. Here is a way to integrate Redis into your solution with dependency injection while abstracting Redis away. This additional layer will be helpful if in the future we change from Redis to Memcached or to another caching technology.

The first step is to create the interface that will be used.

public interface ICache
{
    void SetString(string key, string objectToCache, TimeSpan? expiry = null);
    void Set<T>(string key, T objectToCache, TimeSpan? expiry = null) where T : class;
    string GetString(string key);
    T Get<T>(string key) where T : class;
    void Delete(string key);
    void FlushAll();
}

This interface gives the primary operations that can be executed against Redis (or any other cache system). It is possible to enhance this interface with more methods, but these are the basic operations required to run a cache. The first two methods set a value in the cache: one takes a simple string, the other a class of type T; the generic one is mostly used to serialize an object before storing it. The next two methods get the deserialized data back from a key. The last two methods delete: one uses a key to delete a specific object, and the other flushes everything from the cache.

A second interface is used. This one allows us to query status: whether the cache is enabled and whether it is running properly.

public interface ICacheStatus
{
    bool IsCacheEnabled { get;}
    bool IsCacheRunning { get;}
}

The difference between IsCacheEnabled and IsCacheRunning is that the first one is controlled by us. Normally, you should have a key in the web.config to turn the cache on and off; if you notice a problem with the cache, it is always a good option to be able to turn it off. The second property reports the status of the caching server, Redis. If the server becomes inactive, it is useful to see that status from an administration panel, for example.

Beyond these interfaces, we need an abstract class with the logic shared by any cache system (not only Redis). This is where we handle serialization, error logging and the on/off mechanism. This is also where the circuit breaker pattern could be used; I will discuss it in a future article. Keep in mind for the moment that every call is wrapped in a try-catch that logs errors instead of letting a cache failure break the application.

public abstract class Cache : ICache, ICacheStatus
{
    private readonly bool isCacheEnable;

    public Cache(bool isCacheEnable)
    {
        this.isCacheEnable = isCacheEnable;
    }

    public void Set<T>(string key, T objectToCache, TimeSpan? expiry = null) where T : class
    {
        if (string.IsNullOrEmpty(key))
        {
            throw new ArgumentNullException("key");
        }
        if (this.isCacheEnable)
        {
            try
            {
                var serializedObjectToCache = JsonConvert.SerializeObject(objectToCache
                     , Formatting.Indented
                     , new JsonSerializerSettings
                     {
                         ReferenceLoopHandling = ReferenceLoopHandling.Serialize,
                         PreserveReferencesHandling = PreserveReferencesHandling.Objects,
                         TypeNameHandling = TypeNameHandling.All
                     });

                this.SetStringProtected(key, serializedObjectToCache, expiry);
            }
            catch (Exception e)
            {
                Log.Error(string.Format("Cannot Set {0}", key), e);
            }
        }
    }

    public T Get<T>(string key) where T : class
    {
        if (string.IsNullOrEmpty(key))
        {
            throw new ArgumentNullException("key");
        }
        if (this.isCacheEnable)
        {
            try{
                var stringObject = this.GetStringProtected(key);
                if(stringObject  ==  null)
                {
                     return default(T);
                }
                else
                {
                     var obj = JsonConvert.DeserializeObject<T>(stringObject
                         , new JsonSerializerSettings
                         {
                             ReferenceLoopHandling = ReferenceLoopHandling.Serialize,
                             PreserveReferencesHandling = PreserveReferencesHandling.Objects,
                             TypeNameHandling = TypeNameHandling.All
                         });
                    return obj;
                }
            }
            catch (Exception e)
            {
                Log.Error(string.Format("Cannot Get key {0}", key), e);
            }
        }
        return null;
    }

    public void Delete(string key)
    {
        if (string.IsNullOrEmpty(key))
        {
            throw new ArgumentNullException("key");
        }
        if (this.isCacheEnable)
        {
            try{
                this.DeleteProtected(key);
            }
            catch (Exception e)
            {
                Log.Error(string.Format("Cannot Delete key {0}",key), e);
            }
        }
    }

    public void DeleteByPattern(string prefixKey)
    {
        if (string.IsNullOrEmpty(prefixKey))
        {
            throw new ArgumentNullException("prefixKey");
        }
        if (this.isCacheEnable)
        {
            try
            {
                this.DeleteByPatternProtected(prefixKey);
            }
            catch (Exception e)
            {
                Log.Error(string.Format("Cannot DeleteByPattern key {0}", prefixKey), e);
            }
        }
    }

    public void FlushAll()
    {
        if (this.isCacheEnable)
        {
            try{
                this.FlushAllProtected();
            }
            catch (Exception e)
            {
                Log.Error("Cannot Flush", e);
            }
        }
    }

    public string GetString(string key)
    {
        if (string.IsNullOrEmpty(key))
        {
            throw new ArgumentNullException("key");
        }
        if (this.isCacheEnable)
        {
            try
            {
                return this.GetStringProtected(key);
            }
            catch (Exception e)
            {
                Log.Error(string.Format("Cannot Get key {0}", key), e);
            }
        }
        return null;
    }

    public void SetString(string key, string objectToCache, TimeSpan? expiry = null)
    {
        if (string.IsNullOrEmpty(key))
        {
            throw new ArgumentNullException("key");
        }
        if (this.isCacheEnable)
        {
            try
            {
                this.SetStringProtected(key, objectToCache, expiry);
            }
            catch (Exception e)
            {
                Log.Error(string.Format("Cannot Set {0}", key), e);
            }
        }
    }
    public bool IsCacheEnabled
    {
        get { return this.isCacheEnable; }

    }
    
    protected abstract void SetStringProtected(string key, string objectToCache, TimeSpan? expiry = null);
    protected abstract string GetStringProtected(string key);
    protected abstract void DeleteProtected(string key);
    protected abstract void FlushAllProtected();
    protected abstract void DeleteByPatternProtected(string key);
    public abstract bool IsCacheRunning { get;  }
}

As you can see, this abstract class delegates every method to a protected abstract method that contains the cache implementation code. The abstract class knows nothing about the concrete implementation, only about general caching concerns. It also reduces the surface to implement down to string operations, which means the implementer does not need to care about anything other than strings; meanwhile, consumers of the class get a Set method that accepts either a string or an object. The next class is the one that does the real job. Here is a simple Redis implementation of the abstract class.

public class RedisCache : Definitions.Cache
{
    private ConnectionMultiplexer redisConnections;

    private IDatabase RedisDatabase {
        get {
            if (this.redisConnections == null)
            {
                InitializeConnection();
            }
            return this.redisConnections != null ? this.redisConnections.GetDatabase() : null;
        }
    }

    public RedisCache(bool isCacheEnabled):base(isCacheEnabled)
    {
        InitializeConnection();
    }

    private void InitializeConnection()
    {
        try
        {
             this.redisConnections = ConnectionMultiplexer.Connect(System.Configuration.ConfigurationManager.AppSettings["CacheConnectionString"]);
        }
        catch (RedisConnectionException errorConnectionException)
        {
            Log.Error("Error connecting the redis cache : " + errorConnectionException.Message, errorConnectionException);
        }
    }

    protected override string GetStringProtected(string key)
    {
        if (this.RedisDatabase == null)
        {
            return null;
        }
        var redisObject = this.RedisDatabase.StringGet(key);
        if (redisObject.HasValue)
        {
            return redisObject.ToString();
        }
        else
        {
            return null;
        }
    }

    protected override void SetStringProtected(string key, string objectToCache, TimeSpan? expiry = null)
    {
        if (this.RedisDatabase == null)
        {
            return;
        }

        this.RedisDatabase.StringSet(key, objectToCache, expiry);
    }

    protected override void DeleteProtected(string key)
    {
        if (this.RedisDatabase == null)
        {
            return;
        }
        this.RedisDatabase.KeyDelete(key);
    }

    protected override void FlushAllProtected()
    {
        if (this.RedisDatabase == null)
        {
            return;
        }
        var endPoints = this.redisConnections.GetEndPoints();
        foreach (var endPoint in endPoints)
        {
            var server = this.redisConnections.GetServer(endPoint);
            server.FlushAllDatabases();
        }
    }

    public override bool IsCacheRunning
    {
        get { return this.redisConnections != null && this.redisConnections.IsConnected; }
    }
}

The Redis connection gets its settings from the web.config. The Redis object is instantiated through the ConnectionMultiplexer that comes from the StackExchange API. This one is thread-safe, which is why the cache is registered as a singleton in the dependency container.

    container.RegisterType<RedisCache>(new ContainerControlledLifetimeManager()
                                                                , new InjectionConstructor(
                                                                        Convert.ToBoolean(ConfigurationManager.AppSettings["IsCacheEnabled"])
                                                                )); //Singleton ( RedisCache use thread-safe code)
    container.RegisterType<ICache, RedisCache>(); //Re-use the singleton above
    container.RegisterType<ICacheStatus, RedisCache>(); //Re-use the singleton above

This is how to register the cache with Microsoft Unity. The first line registers the RedisCache class as a single instance shared by every query to the cache, and thus by every request. The next two registrations associate the two interfaces with that cache instance.

From there, it’s possible to use anywhere the interface. It’s also easy to unit test since you can mock the ICache interface which is the only interface that you need to pass through all your code. About what need to be used, it’s clear from the dependency injection code that we use ICache as the interface to use and not the concrete RedisCache class. The cache shouldn’t be used in the controller class, neither in your service class or in your repository class. This belong to the accessory classes which are between your service and repository class. Here is the a graphic of the layers that is recommended to have when using a cache system and a database.

Layers

The idea is that the only layer that knows about the cache is the accessor. The service layer does not know about the cache or the database; it only knows how to get and set through the accessor. The repository does not know about caching; its responsibility is to get the data from the persistent storage, whether with Entity Framework (or any other ORM) or directly with Ado.Net. On the other hand, the cache does not know about the database; it only knows how to store data for fast access. This means the accessor class is the only one that gets the cache injected. Here is a small example.

public class ContestAccessor: IContestAccessor
{
	private readonly IContestRepository contestRepository;
	private readonly ICache cache;
	public ContestAccessor(IContestRepository repository, ICache cache)
	{
		//...
	}
}

This class can have methods to get specific information. Here is an example to get a contest by id.

public Contest GetById(int id)
{
    var key = string.Format("contest_by_id_{0}", id);
    var contestObject = this.cache.Get<Contest>(key);
    if (contestObject == null)
    {
        contestObject = this.contestRepository.GetById(id);
        this.cache.Set(key, contestObject);
    }
    return contestObject;
}

This is a basic example: it gets the contest from the cache; if the cache does not have it, it gets it from the repository and stores it in the cache for the next call. Either way, we return the object, wherever it came from. The service layer uses the injected accessor (the IContestAccessor interface, for example); it does not know anything about the repository or the cache, it just knows how to get its object by id.
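To complete the picture, here is a hypothetical service built on top of the accessor (it assumes the Contest class has a Name property). It depends only on IContestAccessor and never sees ICache or IContestRepository:

```csharp
public class ContestService
{
    private readonly IContestAccessor contestAccessor;

    public ContestService(IContestAccessor contestAccessor)
    {
        this.contestAccessor = contestAccessor;
    }

    public string GetContestDisplayName(int id)
    {
        // The service does not know whether this came from Redis
        // or from the database; the accessor made that decision.
        var contest = this.contestAccessor.GetById(id);
        return contest == null ? "Unknown" : contest.Name;
    }
}
```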

Converting anonymous object to Dictionary

Some Asp.Net MVC Html helpers accept an anonymous object as a parameter to assign key-value pairs; that is the case of the htmlAttributes parameter. If you want to create your own Html helper, or simply want to accept anonymous objects, you may stumble into the case where you need to enumerate their keys and values.

public string Url(string action, string controller, string area = null, object routeValues = null)
{
    //Code here
}

The code above is an example. You may want a method that generates a URL from some parameters. The last parameter, named "routeValues", is of type object; it is meant to receive an anonymous object.

    Url("action", "controller", "area", new {Id = "123", Name="This is my name"});

The Url method can then loop through all the properties. Something that can help is an extension method on object that converts everything into an IDictionary, where keys (property names) and values are easy to manipulate.

public static class ObjectExtensions
{
    public static IDictionary<string, object> AsDictionary(this object source, BindingFlags bindingAttr = BindingFlags.DeclaredOnly | BindingFlags.Public | BindingFlags.Instance)
    {
        return source.GetType().GetProperties(bindingAttr).ToDictionary
        (
            propInfo => propInfo.Name,
            propInfo => propInfo.GetValue(source, null)
        );
    }
}

It uses reflection to get all the properties and, from those properties, all the values.
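For instance, a sketch of the Url method from the beginning of this section could use AsDictionary to append the anonymous routeValues as a query string (the formatting logic here is illustrative only, not a full route-generation implementation):

```csharp
using System;
using System.Linq;
using System.Web;

public static class UrlBuilder
{
    public static string Url(string action, string controller, string area = null, object routeValues = null)
    {
        var url = string.IsNullOrEmpty(area)
            ? string.Format("/{0}/{1}", controller, action)
            : string.Format("/{0}/{1}/{2}", area, controller, action);

        if (routeValues != null)
        {
            // AsDictionary turns the anonymous object's properties
            // into key-value pairs we can serialize into the URL.
            var pairs = routeValues.AsDictionary()
                .Select(kv => string.Format("{0}={1}",
                    HttpUtility.UrlEncode(kv.Key),
                    HttpUtility.UrlEncode(Convert.ToString(kv.Value))));
            url += "?" + string.Join("&", pairs);
        }
        return url;
    }
}
```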

Here is a unit test for the AsDictionary extension method.

[TestClass]
public class ObjectExtensionsTest
{
    [TestMethod]
    public void GivenAnObject_WhenThisOneHasMultipleProperties_ThenDictionary()
    {
        // Arrange
        var objectToConvert = new {Id="Test", Name="Test2"};

        // Act
        var dictionary = objectToConvert.AsDictionary();

        // Assert
        Assert.AreEqual(2,dictionary.Keys.Count);
        Assert.AreEqual(objectToConvert.Id,dictionary["Id"]);
        Assert.AreEqual(objectToConvert.Name,dictionary["Name"]);
    }
}