NodeJs and MongoDb on Windows: Connecting and Inserting

MongoDb is an interesting choice for permanent persistence when using NodeJs since it stores documents, which makes it ideal for persisting JSON.

You can download a free version directly from MongoDb: https://www.mongodb.com/download-center#community
Once it’s downloaded, make sure the system PATH environment variable has an entry pointing to the bin folder of the MongoDb installation.

Before working with MongoDb from NodeJs, I recommend downloading RoboMongo for free: https://robomongo.org/download. This tool lets you query MongoDb and explore the data. Before using RoboMongo or the MongoDb library, we need to run the MongoDb server. To do so, go to the development folder you are working in and execute the mongod command with the dbpath argument. For example, the following command stores the MongoDb data in the “data” folder of the development folder.

mongod --dbpath=.\data

Inside your NodeJs project, you need to have the mongodb library. If you are using TypeScript, you can get the type definitions too.

npm install mongodb --save
npm install @types/mongodb --save-dev

At this point, you can start using the library to access the information. First step: connecting to the server. Second step: connecting to the collection. The first step is the same as with any other database; the second exists because every document is stored inside a collection. Think of a collection as a table.

From here, we need to import some classes.

import { MongoClient, MongoError, Db, InsertOneWriteOpResult } from "mongodb";

MongoClient is the main class used to connect to Mongo. MongoError is the class that wraps errors, which we will use to get information about connection errors. Db is the class that holds information about MongoDb once connected. We need it to properly close the connection, but also to select the collection on which we want to invoke an action (find, insert, delete). InsertOneWriteOpResult is the result of an insert.

Here is the connection code:

// url is the connection string, e.g. "mongodb://localhost:27017/mydatabase"
MongoClient.connect(url, (errMongo: MongoError, db: Db) => {
    if (errMongo) {
        console.log(errMongo);
    } else {
        console.log("Connected successfully to server");
    }
});

To insert something, you need to get the collection and use the insertOne method. Something I haven’t yet figured out perfectly is how to handle a Date coming from a .json file. In my case, I was opening .json files and inserting them into MongoDb. The JSON parse method was returning the date as a string, hence I needed to assign the value back as an instance of Date.


// To have a real date format in MongoDb
objToInsert.fullDate = new Date(objToInsert.fullDate);

// Access the collection we want to insert
const collection = db.collection("documents"); // "documents" can be anything you want your collection to be named

// Insert with a callback that has an error or the result data
collection.insertOne(objToInsert, (err: MongoError, result: InsertOneWriteOpResult) => {

});

The code above will add a new entry and alter the object to add an “_id” property containing a generated ObjectId. This way, every entry automatically has a unique identifier.

How to boost caching performance when caching Entity Framework objects

Entity Framework objects are dangerous to cache because of their nature of keeping references to other objects. If you have an object that contains a list of objects that can refer back to the initial object, you end up with an infinite depth of references. This is not a problem in memory, since these are just pointers, but it can be problematic if you serialize. Json.Net provides a way to serialize references: an object is serialized once and then referred to by a $ref id. However, this can still be expensive because the framework needs to navigate the object tree to determine whether or not more serialization is needed. Another way to optimize serialization with Json.Net is to use a custom ContractResolver where you can evaluate the current depth and stop serializing. The reference handling plus the custom ContractResolver looks like this:

public static class Serialization
{
    public static string Serialize<T>(T objectToSerialize, int maxDepth = 5)
    {
        using (var performanceLog = new GlimpseCodeSection("Serialize"))
        {
            using (var strWriter = new StringWriter())
            {
                using (var jsonWriter = new CustomJsonTextWriter(strWriter))
                {
                    Func<bool> include = () => jsonWriter != null && jsonWriter.CurrentDepth <= maxDepth;
                    var resolver = new DepthContractResolver(include);
                    var serializer = new JsonSerializer();
                    serializer.Formatting = Formatting.Indented;
                    serializer.ContractResolver = resolver;
                    serializer.ReferenceLoopHandling = ReferenceLoopHandling.Serialize;
                    serializer.PreserveReferencesHandling = PreserveReferencesHandling.Objects;
                    serializer.TypeNameHandling = TypeNameHandling.All;
                    serializer.ConstructorHandling = ConstructorHandling.AllowNonPublicDefaultConstructor;
                    serializer.NullValueHandling = NullValueHandling.Include;
                    serializer.Serialize(jsonWriter, objectToSerialize);
                }
                return strWriter.ToString();
            }

        }
    }

    public static T Deserialize<T>(string objectSerialized)
    {
        using (var performanceLog = new GlimpseCodeSection("Deserialize"))
        {
            var contractResolver = new PrivateResolver();
            var obj = JsonConvert.DeserializeObject<T>(objectSerialized
                , new JsonSerializerSettings
                {
                    ReferenceLoopHandling = ReferenceLoopHandling.Serialize,
                    PreserveReferencesHandling = PreserveReferencesHandling.Objects,
                    TypeNameHandling = TypeNameHandling.All,
                    ConstructorHandling = ConstructorHandling.AllowNonPublicDefaultConstructor,
                    ContractResolver = contractResolver,
                    NullValueHandling = NullValueHandling.Include
                });
            return obj;
        }
    }

    /// <summary>
    /// Allows properties with a private setter to be written during deserialization
    /// </summary>
    public class PrivateResolver : DefaultContractResolver
    {
        protected override JsonProperty CreateProperty(MemberInfo member, MemberSerialization memberSerialization)
        {
            var prop = base.CreateProperty(member, memberSerialization);

            if (!prop.Writable)
            {
                var property = member as PropertyInfo;
                if (property != null)
                {
                    var hasPrivateSetter = property.GetSetMethod(true) != null;
                    prop.Writable = hasPrivateSetter;
                }
            }

            return prop;
        }
    }

    public class DepthContractResolver : DefaultContractResolver
    {
        private readonly Func<bool> includeProperty;

        public DepthContractResolver(Func<bool> includeProperty)
        {
            this.includeProperty = includeProperty;
        }

        protected override JsonProperty CreateProperty(MemberInfo member, MemberSerialization memberSerialization)
        {
            var property = base.CreateProperty(member, memberSerialization);
            //See if we should serialize with the depth
            var shouldSerialize = property.ShouldSerialize;
            property.ShouldSerialize = obj => this.includeProperty() 
                                                && (shouldSerialize == null || shouldSerialize(obj));

            //Setter if private is okay to serialize
            if (!property.Writable)
            {
                var propertyInfo = member as PropertyInfo;
                if (propertyInfo != null)
                {
                    var hasPrivateSetter = propertyInfo.GetSetMethod(true) != null;
                    property.Writable = hasPrivateSetter;
                }
            }


            return property;
        }

        protected override IList<JsonProperty> CreateProperties(Type type, MemberSerialization memberSerialization)
        {
            IList<JsonProperty> props = base.CreateProperties(type, memberSerialization);
            var propertyToSerialize = new List<JsonProperty>();
            foreach (var property in props)
            {
                if (property.Writable)
                {
                    propertyToSerialize.Add(property);
                }
                else
                {
                    var propertyInfo = type.GetProperty(property.PropertyName);
                    if (propertyInfo != null)
                    {
                        var hasPrivateSetter = propertyInfo.GetSetMethod(true) != null;
                        if (hasPrivateSetter)
                        {
                            propertyToSerialize.Add(property);
                        }
                    }
                }
            }
            return propertyToSerialize;
        }

    }

    

    public class CustomJsonTextWriter : JsonTextWriter
    {
        public int CurrentDepth { get; private set; } = 0;
        public CustomJsonTextWriter(TextWriter textWriter) : base(textWriter)
        {
        }

        public override void WriteStartObject()
        {
            this.CurrentDepth++;
            base.WriteStartObject();
        }

        public override void WriteEndObject()
        {
            this.CurrentDepth--;
            base.WriteEndObject();
        }
    }
}

The problem is that even with those optimizations, serialization can take a long time. One common pattern: you have a big Entity Framework object that you want to serialize, and before sending it to the serializer you want to cut some branches by setting properties to null. For example, if the main entity has many collections, you may want to null out some collections and store an object with fewer sub-objects in Redis. The issue is that if you null a property on the main object, that object is missing data and is now in a bad state. So the pattern is to serialize the object once and deserialize it right away, which gives you a complete clone. You null some properties on that clone; any change does not affect the real object. From that clone, you serialize again and store the result in Redis. The drawback is that it takes two serialization operations and one deserialization, while the best-case scenario would be a single serialization.
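
In code, that clone-by-serialization pattern looks something like the following sketch (bigContest stands in for the Entity Framework entity you want to cache; Contest and its Portefolios property are borrowed from the cloning example further below, and Serialization is the helper defined above):

// Clone by serializing/deserializing, trim the clone, then serialize the trimmed clone for Redis.
string firstPass = Serialization.Serialize(bigContest);           // serialization #1
Contest clone = Serialization.Deserialize<Contest>(firstPass);    // deserialization #1
clone.Portefolios = null;                                         // cut branches on the clone only
string toCache = Serialization.Serialize(clone);                  // serialization #2
// toCache goes to Redis; bigContest is left untouched.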

The pattern remains good, but the way to achieve it is wrong. A better approach is to clone the object in C#. The benefit is speed; the disadvantage is that you need cloning methods on all your classes, which can be time consuming to write. It is also not always obvious how to clone each object: often you need both a shallow clone and a deep clone, and depending on the situation and the class you need to call the right one. The speed varies a lot, but on a big object graph I saw the time go from 500ms down to 4ms. Very good for the first clone operation. After cutting some properties and serializing the clone, the same object takes about 20ms to serialize.

Here is an example:

public Contest ShallowCloneManual()
{
	var contest = (Contest)this.MemberwiseClone();
	contest.RegistrationRules = this.registrationRules.DeepCloneManual();
	contest.AllowedMarkets = this.AllowedMarkets?.ShallowCloneManual();
	contest.ContestOrderType = this.contestOrderType?.DeepCloneManual();
	contest.Creator = this.Creator?.ShallowCloneManual();
	contest.DailyStatistics = this.DailyStatistics?.ShallowCloneManual();
	contest.InitialCapital = this.InitialCapital.DeepCloneManual();
	contest.Moderators = this.Moderators?.ShallowCloneManual();
	contest.Name = this.Name.DeepCloneManual();
	contest.TransactionRules = this.TransactionRules.DeepCloneManual();
	contest.StockRules = this.StockRules?.DeepCloneManual();
	contest.ShortRules = this.ShortRules?.DeepCloneManual();
	contest.OptionRules = this.OptionRules?.DeepCloneManual();
	contest.Portefolios = this.Portefolios?.ShallowCloneManual();
	return contest;
}

public Contest DeepCloneManual()
{
	var contest = (Contest)this.MemberwiseClone();
	contest.RegistrationRules = this.registrationRules.DeepCloneManual();
	contest.AllowedMarkets = this.AllowedMarkets?.ShallowCloneManual();
	contest.ContestOrderType = this.contestOrderType?.DeepCloneManual();
	contest.Creator = this.Creator?.ShallowCloneManual();
	contest.DailyStatistics = this.DailyStatistics?.ShallowCloneManual();
	contest.InitialCapital = this.InitialCapital.DeepCloneManual();
	contest.Moderators = this.Moderators?.DeepCloneManual();
	contest.Name = this.Name.DeepCloneManual();
	contest.TransactionRules = this.TransactionRules.DeepCloneManual();
	contest.StockRules = this.StockRules?.DeepCloneManual();
	contest.ShortRules = this.ShortRules?.DeepCloneManual();
	contest.OptionRules = this.OptionRules?.DeepCloneManual();
	contest.Portefolios = this.Portefolios?.DeepCloneManual();
	return contest;
}

Some improvements could be made to make this more generic. For example, DeepCloneManual could take an option object that tracks the depth level and stops cloning past a limit. The impact of doing the cloning in C# was significant on an Azure WebJob where thousands of objects needed to be reduced and sent to Azure Redis. You can see the drop in the following graph, where the 75th percentile goes down from 16 minutes to less than 4 minutes and the 95th percentile from over 20 minutes to 4 minutes.
CustomCSharpClone
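
As a rough sketch of the suggested improvement, the manual deep clone could receive an option object that tracks the current depth and stops cloning past a maximum. This is hypothetical code, not the project's actual implementation; CloneOptions and the cloned properties are illustrative:

// Hypothetical sketch: every DeepCloneManual overload receives the same options instance.
public class CloneOptions
{
    public int MaxDepth { get; set; } = 3;
    public int CurrentDepth { get; set; }
}

public Contest DeepCloneManual(CloneOptions options)
{
    var contest = (Contest)this.MemberwiseClone();
    if (options.CurrentDepth >= options.MaxDepth)
    {
        return contest; // depth limit reached: stop at a shallow copy
    }
    options.CurrentDepth++;
    contest.Creator = this.Creator?.DeepCloneManual(options);
    contest.Portefolios = this.Portefolios?.DeepCloneManual(options);
    options.CurrentDepth--;
    return contest;
}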

To conclude, cloning an Entity Framework object by serializing and deserializing it is expensive in terms of processing but fast to put in place. It should be used sparingly.

Improving your Azure Redis performance and size with Lz4net

The title is a little misleading; I could have written it as improving Redis performance by compressing the objects you serialize. It’s not specific to Azure Redis, nor to Lz4net, which is simply one way to compress. However, I recently learned that compression improves Redis on Azure in two different ways. First, the website/webjobs need to send the information to the Redis server, and a smaller number of bytes is always faster to send. Second, you have a size limit depending on the Redis tier you chose on Azure, and compressing can save you some space. How much space varies depending on what you cache. From my personal experience, any serialized object bigger than 2 KB gains from compression. I did some logging and saw reductions between 6% and 64%, which is significant if the objects you cache are around 100 KB-200 KB. Of course, this has a CPU cost, but depending on the algorithm you use, you may not feel the penalty. I chose Lz4net, which is a lossless, very fast compression library. It’s open source and also available through NuGet.

Doing it is also simple, but the documentation around Lz4net is practically non-existent and StackExchange.Redis doesn’t provide details about how to handle compressed data. The problem with the StackExchange library is that it doesn’t let you work with a byte[] directly the way you might expect. Underneath, it converts the byte[] into a RedisValue. That works well for storing; however, when getting, converting the RedisValue back to byte[] returned null. Since the compressed data is an array of bytes, this causes a problem. The trick is to encapsulate the data in a temporary object. You can read more from Marc Gravell on StackOverflow.

private class CompressedData
{
	public CompressedData()
	{
		
	}
	public CompressedData(byte[] data)
	{
		this.Data = data;
	}
	public byte[] Data
	{
		get; private set;
	}
}

This object can be serialized and stored with StackExchange. It can also be restored from Redis, uncompressed, deserialized and used as an object. Inside my Set method, the code looks like this:

var compressed = LZ4Codec.Wrap(Encoding.UTF8.GetBytes(serializedObjectToCache));
var compressedObject = new CompressedData(compressed);
string serializedCompressedObject = Serialization.Serialize(compressedObject);
//Set serializedCompressedObject with StackExchange Redis library

The Get method does the reverse:

string stringObject = //From StackExchange Redis library
var compressedObject = Serialization.Deserialize<CompressedData>(stringObject);
var uncompressedData = LZ4Codec.Unwrap(compressedObject.Data);
string unCompressed = Encoding.UTF8.GetString(uncompressedData);
T obj = Serialization.Deserialize<T>(unCompressed);
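
Putting the two halves together, a minimal sketch of compress-on-set and decompress-on-get helpers could look like this, assuming a StackExchange.Redis IDatabase named db, the CompressedData class above and a Serialization helper like the one shown earlier:

// Minimal sketch only: compress with lz4net before caching, decompress after reading.
public void SetCompressed<T>(IDatabase db, string key, T value, TimeSpan? expiry = null)
{
    string serialized = Serialization.Serialize(value);
    byte[] compressed = LZ4Codec.Wrap(Encoding.UTF8.GetBytes(serialized));
    string payload = Serialization.Serialize(new CompressedData(compressed));
    db.StringSet(key, payload, expiry);
}

public T GetCompressed<T>(IDatabase db, string key)
{
    string payload = db.StringGet(key);
    if (payload == null)
    {
        return default(T); // cache miss
    }
    var container = Serialization.Deserialize<CompressedData>(payload);
    string json = Encoding.UTF8.GetString(LZ4Codec.Unwrap(container.Data));
    return Serialization.Deserialize<T>(json);
}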

The result is really stunning. If you look at my numbers from a project where I applied this compression, you can see that even for 5 KB objects there is a gain.

RedisSize

For example, the 50th percentile has a 23 KB size for one key. This goes down by more than half when compressed. If we look at the 95th percentile, the gain is even bigger, touching a 90% reduction, going from 478 KB to 44 KB. Compression is often criticized as being a bad fit for smaller objects. However, I found that even an object as small as 6 KB gained by being reduced to 3 KB, 35 KB went down to 8 KB, and so on. Since the compression algorithm used is very fast, the overall experience was far more positive than any negative impact on performance.

Migrating your Local SQL Server Database to Microsoft Azure

Migrating a SQL Server database into Azure is easy but not obvious. Here are a few steps that might save you some hours. These steps migrate the schema by creating a new database at the destination, along with all the data.

First of all, open Microsoft SQL Server Management Studio. Right-click the database you want to migrate to Azure and select Tasks, then Export Data-tier Application. It’s important that you do not choose the option Deploy Database to Windows Azure SQL Database because that option doesn’t keep the identity columns (primary keys) of your tables.

ExportDataTierApplication
This will generate a bacpac file that needs to be imported into Azure. To do so, connect Microsoft SQL Server Management Studio to your Azure server. To connect to the Azure SQL Server, your IP must be allowed by the Azure SQL Server firewall and you must know your server password.

To import into Azure, you need to right click the Databases folder of Azure SQL Server. Select Import Data-tier Application.
ImportDataTierApplication

A short wizard will open and let you select the name of the destination database as well as some basic Azure configuration. The transfer may take a while depending on the size of your database.

Once it’s done, it’s always a good idea to verify that everything has been transferred.

SELECT OBJECT_SCHEMA_NAME(p.object_id) AS [Schema]
    , OBJECT_NAME(p.object_id) AS [Table]
    , i.name AS [Index]
    , p.rows AS [Row Count]
FROM sys.partitions p
INNER JOIN sys.indexes i ON p.object_id = i.object_id
                         AND p.index_id = i.index_id
WHERE OBJECT_SCHEMA_NAME(p.object_id) != 'sys'
ORDER BY [Row Count], [Schema], [Table], [Index]

That query shows the number of rows per table and index. You can run it against both the local database and the Azure one and compare the results to spot any discrepancy.

SQL Arithmetic overflow error converting numeric to data type numeric

When you are using straight ADO.Net with SQL, you may run into an operation that produces an overflow. This is often hard to debug if it happens inside an update statement that updates several fields. You may think that using cast or convert to the type of the destination field solves the problem, but it does not.

Here are two examples showing that even if you convert or cast, it won’t be enough.

declare @dob1 as decimal(16,4)
set @dob1 = cast(554656545465486786844864613 as decimal(16,4))
select @dob1

declare @dob2 as decimal(16,4)
set @dob2 = CONVERT(decimal(16,4),5455531234268.68423224224244864613 )
select @dob2

In my case, the problem was that I was updating a field by executing a multiplication: field = field * 1/2. The issue was that the field was already, in some cases, at 0, and sometimes above the maximum, which is 12 (16-4) digits. Even though the following code works fine:

declare @dob3 as int
set @dob3 = cast(0*1/2 as int)
select @dob3

The following one did not:

declare @fromValue as int
declare @toValue as int

set @fromValue = 1
set @toValue = 2

SELECT cast(Quantity * @fromValue/@toValue as int) as newQuantity
FROM [Trading].[Stock]

However, adding a where clause eliminates the edge cases from being processed. The trick is to filter out quantities of 0 and quantities above the limit.

declare @fromValue as int
declare @toValue as int

set @fromValue = 1
set @toValue = 2

SELECT cast(Quantity * @fromValue/@toValue as int) as newQuantity
FROM [Trading].[Stock]
WHERE Quantity > 0
AND Quantity  < 999999999999

Redis’ Service Eating Windows Hard drive Space

I noticed my hard drive losing space faster than usual. I decided to run a free tool named “WinDirStat” that gives a portrait of your hard drive usage. Within a few seconds, the culprit was found inside the Windows folder, to be more accurate: C:\Windows\ServiceProfiles\NetworkService\AppData\Local\Redis.

RedisServerWindowsSpaceHarddrive

I have been running Redis as a service on my machine for about two months and never realized that it creates multiple files, each the size of your machine’s RAM. Each file is named like “RedisQFork_9076.dat”, where the number varies. Each of them was 8 GB because I have an 8 GB RAM machine.

This only occurs on a 64-bit OS; otherwise the size is capped at 500 MB. So why are these files created? Because Redis was created for Linux first, and the Windows version needs to mimic some behaviors. Redis clustering, backup and synchronization rely on the fork command, which Windows does not have, so the Windows port uses the file system instead.

To clean up, you need to go to C:\Windows\ServiceProfiles\NetworkService\AppData\Local\Redis, but before deleting anything, you must stop the Windows service named “Redis”. Then, you can delete all the files from that folder and restart the service.

If you want to move these files into another directory, this is possible with the latest Windows Redis version. Open the Redis configuration file located at C:\Program Files\Redis\redis.windows-service.conf and search for heapdir. It’s also possible to limit the size by changing the setting named maxheap.
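
For example, the relevant lines in redis.windows-service.conf could look something like the following; the values are only illustrative, so check the comments in the file itself for the exact accepted syntax:

heapdir D:\RedisHeap\
maxheap 4gb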

Redis Experimentation with Full List Cache against using Redis Sorted List

I am currently improving the performance of a system with Redis and the StackExchange library. One particular case was that I needed to cache a list of data ordered by rank, derived from a value that changes often. One requirement is that two items can have the same rank. For example:

Rank - Data   - Value
1    - User 1 - 100
2    - User 2 - 99
3    - User 4 - 99
4    - User 5 - 97

The data is in fact a serialized object that contains multiple classes. To keep this article light, I will just use a string, but keep in mind that in reality this is not a simple string identifier. The value column is required by Redis when using the sorted set. In reality, this value lives inside the data, in a property.

This information must also be paged because the list can grow to around five thousand entries.

The first step was to measure the time when using the database. I got an average of 264ms per query for 20 items on a set of 200 items. The database contains thousands of entries; the page uses a clause to filter down depending on other criteria defined inside the data (inside the class that we serialize). The next step was to use Redis as a simple cache: once we get the result from the database, we store it for a short time. The first hit has the same average, because it goes to the database, but subsequent requests go to Redis instead of the database. This was 50% faster, with 125ms on average. The key was determined by the type of list, the filter attribute and the page number, for example “MyListOfObjectXXX_PartitionYYY_Page_1”. The speed was interesting; I was aiming for around 100ms, but I was satisfied with the result. The time also includes deserializing the objects to create a generic list of all 20 results. I count the deserialization time in my benchmark because I was also counting the ORM time to instantiate the objects.

My concern with that solution is that every object can change its value at any time, and the value changes the rank as a consequence. Since I am also caching each instance with its own key, I duplicate this information in the cache. The size of the cache can be a problem in the long run, but the bigger problem is that the information becomes desynchronized. The source of truth in the system is the individually cached version, which uses a key like “MyData_Key_1”. I set an expiry on the list because it is not the real source of data: I will not invalidate it like the rest of the software does when values change on the entity; I will simply let it expire and then rebuild it. It means that a user who drills down from the list still gets up-to-date data, while the list itself can be up to one minute out of date. This is the cost to pay (so far) for a one-minute delay.

db.StringSet("MyListOfObjectFoo_PartitionRed_Page_1", myListOfDataForPage1, TimeSpan.FromMinutes(1));

To overcome this issue, Redis offers the ability to store a sorted set that is ordered by a value. What is interesting is that two entries can have the same value, which produces the same rank. So far, this looks exactly like the answer to the problem. However, that solution does not fix the problem of having to duplicate the data in the cache. The sorted set can be queried by range, which is interesting for paging, but not by unique key. Thus, it only solves the problem of desynchronized values, since I can easily push an entry into the sorted set at a specific (updated) rank.

// Initial push
db.SortedSetAdd("MyListOfObjectFoo_PartitionRed_Page_1", new[] {
                    new SortedSetEntry("User 1",100),
                    new SortedSetEntry("User 2",99),
                    new SortedSetEntry("User 3",99)});

// Later, when one entity changes to a value of 100. This will produce two entries with rank 1.
db.SortedSetAdd("MyListOfObjectFoo", objectToCache, 100);

This was surprising in many ways. First of all, if several entries share the same rank, it is not possible to have a secondary ordering value coming from the object. You are stuck with the value you set, which is a double. That allows some mathematical tricks, but if you want to sort alphabetically as a tiebreaker, you need to do that second sort manually in C#. I didn’t dig deeper into that solution because of the second, bigger problem: performance. To get the information, you use the get-by-range method.

db.SortedSetRangeByRank("MyListOfObjectFoo", 1, 20)

From that, you need to loop and deserialize all the values, which is the same tax we pay when caching the whole page in one Redis key-value entry. However, the performance was disastrous: my average over three runs was 1900ms. This was really surprising. I double-checked everything because it wasn’t making any sense to me. My initial hypothesis was that Redis was highly optimized for this kind of scenario — I was wrong, but the fault is not Redis. After some investigation, I found that the serialization, done with the Json.Net library, has a much harder time deserializing a very complex object 20 separate times than deserializing a single list of 20 objects. This is mostly because when serializing a list, if a complex object has already been serialized, it is not serialized again but referenced through a reference system. For example, instead of repeating a deep object, Json.Net will write “$ref”: “20”. This has a huge impact on performance.
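
To illustrate, with PreserveReferencesHandling enabled the Json.Net output looks roughly like this (simplified): the first occurrence of an object gets an "$id" and every later occurrence collapses into a tiny "$ref".

[
  { "$id": "1", "Name": "User 1", "Creator": { "$id": "2", "Name": "Admin" } },
  { "$id": "3", "Name": "User 2", "Creator": { "$ref": "2" } }
]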

I finally decided to optimize my model classes and use a lighter class for this page. Instead of a list of objects with lots of rich sub-objects, a simple list of a basic class with plain properties did an awesome job. The list that was taking 1900ms to get from Redis and deserialize now takes less than 0.17ms. That is right and not a typo: it is less than a single millisecond.
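
The lighter class was essentially a flat projection containing only the fields the page displays; something along these lines, where the class and property names are illustrative rather than the real model:

// Flat class cached for the page: no navigation properties, no deep graph,
// so deserializing 20 of them is almost instantaneous.
public class LeaderboardEntry
{
    public int Rank { get; set; }
    public string UserName { get; set; }
    public double Value { get; set; }
}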

I am still learning how to get the most out of Redis and so far I like the flexibility it offers compared to Memcached, which I used for more than a decade. It’s interesting and I will keep you informed of any new optimization I find. In the short term, I think a solution may be to cache not the whole complex object but just part of it, in an aggregated view of objects.

How to Extend Glimpse for Redis

Glimpse is the best real-time profiler/diagnostic add-on you can have for your Asp.Net MVC solution. I will not describe all its capabilities in this article, but in one sentence: Glimpse gives your Asp.Net MVC project the timings of every call, such as filters, actions, database calls, etc. Unfortunately, no extension exists for Redis. Nevertheless, creating a custom extension for the timeline is not too hard. However, the documentation is very dry and it is not obvious what you can and cannot extend. This is really sad, and the extensibility model of Glimpse is pretty limited. For example, you cannot extend the HUD.

The objective of the Glimpse extension we are building in this article is to add to Glimpse’s timeline every cache call’s start time, end time and duration, along with the method name and key used. Here is the end result:
GlimpseExtension

The first thing to note is that the extension is not specific to Redis but works for any cache system. In the project, I have an abstract Cache.cs class, and my Redis implementation inherits from it. That class contains a lot of methods like Set, Get, Delete, etc. Here is the Set method.

public void Set<T>(string key, T objectToCache, TimeSpan? expiry = null)
{
    if (string.IsNullOrEmpty(key))
    {
        throw new ArgumentNullException("key");
    }
    if (this.isCacheEnable)
    {
        var serializedObjectToCache = Serialization.Serialize(objectToCache);
        if (!this.ExecuteUnderCircuitBreaker(()=>this.SetStringProtected(key, serializedObjectToCache, expiry),key))
        {
            Log.Error(string.Format("Cannot Set {0}", key));
        }
    }
}

As you can see, the method serializes the object to cache and calls the SetStringProtected method. One particularity is that the call is wrapped in a function called ExecuteUnderCircuitBreaker, which implements the circuit breaker design pattern. Whatever the pattern does, every call to the cache goes through this function. If we strip out the circuit breaker details, this is where we can add the entry point for the Glimpse extension.

protected bool ExecuteUnderCircuitBreaker(Action action, string key, [CallerMemberName]string callerMemberName="")
{
   using (var glimpse = new GlimpseCache(key, callerMemberName))
   {
      //Code removed here about circuit breaker
      action();
   }
}

The important part for the moment is that every cache call is proxied by this method, which executes the Redis action between the creation and the disposal of a GlimpseCache object. The GlimpseCache class starts a timer when it is constructed and stops the timer when it is disposed.

public class GlimpseCache:IDisposable
{
    private readonly GlimpseCacheCommandTracer tracer;
    public GlimpseCache(string key, string commandName)
    {
        this.tracer = new GlimpseCacheCommandTracer();
        tracer.CommandStart(commandName, key);
    }

    public void Dispose()
    {
        if (tracer != null)
        {
            tracer.CommandFinish(); 
        }
    }
}

The core code is in GlimpseCacheCommandTracer. The tracer uses IMessageBroker and IExecutionTimer to read the configuration. These read Glimpse’s settings from the configuration file (web.config), including whether Glimpse is active or not, and give you hooks to start and stop the timer. This allows us to get into the timeline by publishing an event. This class also configures how to display the information: you can define the label, the color and the highlight color.

public class GlimpseCacheCommandTracer 
{
    private IMessageBroker messageBroker;
    private IExecutionTimer timerStrategy;

    private IMessageBroker MessageBroker
    {
        get { return messageBroker ?? (messageBroker = GlimpseConfiguration.GetConfiguredMessageBroker()); }
        set { messageBroker = value; }
    }

    private IExecutionTimer TimerStrategy
    {
        get { return timerStrategy ?? (timerStrategy = GlimpseConfiguration.GetConfiguredTimerStrategy()()); }
        set { timerStrategy = value; }
    }
        
    private const string LABEL = "Cache";
    private const string COLOR = "#555";
    private const string COLOR_HIGHLIGHT = "#55ff55";
        
    private string command;
    private string key;
    private TimeSpan start;

    public void CommandStart(string command, string key)
    {
        if (TimerStrategy == null)
            return;
        this.start = TimerStrategy.Start();
        this.command = command;
        this.key = key;
    }


    public void CommandFinish()
    {
        if (TimerStrategy == null || MessageBroker == null)
            return;

        var timerResult = TimerStrategy.Stop(start);

        var message = new CacheTimelineMessage(this.command, this.key)
                .AsTimelineMessage(command + ": " + key, new TimelineCategoryItem(LABEL, COLOR, COLOR_HIGHLIGHT))
                .AsTimedMessage(timerResult);

        MessageBroker.Publish(message);
    }
}

The CommandFinish method, called from Dispose, stops the timer for this event and builds the message to be added to the timeline. In this example, we display the command and the key. The third and last class you need is CacheTimelineMessage. This class inherits from Glimpse’s MessageBase and implements ITimelineMessage; it is what is used to display information in the timeline.

    public class CacheTimelineMessage : MessageBase, ITimelineMessage
    {
        public string Command { get; set; }
        public string Key { get; set; }

        #region From Interface
        public TimelineCategoryItem EventCategory { get; set; }
        public string EventName { get; set; }
        public string EventSubText { get; set; }
        public TimeSpan Duration { get; set; }
        public TimeSpan Offset { get; set; }
        public DateTime StartTime { get; set; }
        #endregion
        public CacheTimelineMessage(string command, string key)
        {
            this.Command = command;
            this.Key = key;

        }
    }

I am pretty sure we could do something better and maybe even show more information, but I am satisfied with the insight these few lines of code added to Glimpse give me.

Using Redis in Asp.Net in an Enterprise System

I wrote about how to integrate Redis into Asp.Net MVC a few days ago. Here is a way to integrate Redis into your solution with dependency injection while abstracting Redis away. This additional layer will be helpful if in the future we change from Redis to Memcached, or simply to something else.

The first step is to create the interface that will be used.

public interface ICache
{
    void SetString(string key, string objectToCache, TimeSpan? expiry = null);
    void Set<T>(string key, T objectToCache, TimeSpan? expiry = null) where T : class;
    string GetString(string key);
    T Get<T>(string key) where T : class;
    void Delete(string key);
    void FlushAll();
}

This interface gives the primary operations that can be executed against Redis (or any other cache system). It’s possible to enhance this interface with more methods, but these are the basic operations required to run a cache. The first two methods set a value inside the cache: one sets a simple string, the second takes a class of type T and will mostly be used to serialize an object. The next two methods retrieve, for a given key, the stored string or the deserialized object. The last two methods delete: one uses a key to delete a specific object and the other deletes everything from the cache.

A second interface is used. This one lets us get some status: whether the cache is enabled and whether it is running properly.

public interface ICacheStatus
{
    bool IsCacheEnabled { get;}
    bool IsCacheRunning { get;}
}

The difference between IsCacheEnabled and IsCacheRunning is that the first one is controlled by us. Normally, you should have a key in web.config to turn the cache on and off; if you notice a problem with the cache, it is always a good option to be able to turn it off. The second property is about the status of the caching server, Redis. If the server becomes inactive, it’s useful to be able to see that status from an administration panel, for example.
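
For example, the on/off switch and the connection string can live in the web.config appSettings. The key names below are the ones used later in this article; the values are illustrative:

<appSettings>
  <add key="IsCacheEnabled" value="true" />
  <add key="CacheConnectionString" value="localhost:6379" />
</appSettings>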

Beyond these interfaces, we need an abstract class with shared logic for any cache system (not only Redis). This is where we have the serialization process, the error logging and the handling of the on/off mechanism. This is also where the circuit breaker pattern could be used; I will discuss it in a future article. Keep in mind for the moment that the class below does not include it.

public abstract class Cache : ICache, ICacheStatus
{
    private readonly bool isCacheEnable;

    public Cache(bool isCacheEnable)
    {
        this.isCacheEnable = isCacheEnable;
    }

    public void Set<T>(string key, T objectToCache, TimeSpan? expiry = null) where T : class
    {
        if (string.IsNullOrEmpty(key))
        {
            throw new ArgumentNullException("key");
        }
        if (this.isCacheEnable)
        {
            try
            {
                var serializedObjectToCache = JsonConvert.SerializeObject(objectToCache
                     , Formatting.Indented
                     , new JsonSerializerSettings
                     {
                         ReferenceLoopHandling = ReferenceLoopHandling.Serialize,
                         PreserveReferencesHandling = PreserveReferencesHandling.Objects,
                         TypeNameHandling = TypeNameHandling.All
                     });

                this.SetStringProtected(key, serializedObjectToCache, expiry);
            }
            catch (Exception e)
            {
                Log.Error(string.Format("Cannot Set {0}", key), e);
            }
        }
    }

    public T Get<T>(string key) where T : class
    {
        if (string.IsNullOrEmpty(key))
        {
            throw new ArgumentNullException("key");
        }
        if (this.isCacheEnable)
        {
            try{
                var stringObject = this.GetStringProtected(key);
                if(stringObject  ==  null)
                {
                     return default(T);
                }
                else
                {
                     var obj = JsonConvert.DeserializeObject<T>(stringObject
                         , new JsonSerializerSettings
                         {
                             ReferenceLoopHandling = ReferenceLoopHandling.Serialize,
                             PreserveReferencesHandling = PreserveReferencesHandling.Objects,
                             TypeNameHandling = TypeNameHandling.All
                         });
                    return obj;
                }
            }
            catch (Exception e)
            {
                Log.Error(string.Format("Cannot Get key {0}", key), e);
            }
        }
        return null;
    }

    public void Delete(string key)
    {
        if (string.IsNullOrEmpty(key))
        {
            throw new ArgumentNullException("key");
        }
        if (this.isCacheEnable)
        {
            try{
                this.DeleteProtected(key);
            }
            catch (Exception e)
            {
                Log.Error(string.Format("Cannot Delete key {0}",key), e);
            }
        }
    }

    public void DeleteByPattern(string prefixKey)
    {
        if (string.IsNullOrEmpty(prefixKey))
        {
            throw new ArgumentNullException("prefixKey");
        }
        if (this.isCacheEnable)
        {
            try
            {
                this.DeleteByPatternProtected(prefixKey);
            }
            catch (Exception e)
            {
                Log.Error(string.Format("Cannot DeleteByPattern key {0}", prefixKey), e);
            }
        }
    }

    public void FlushAll()
    {
        if (this.isCacheEnable)
        {
            try{
                this.FlushAllProtected();
            }
            catch (Exception e)
            {
                Log.Error("Cannot Flush", e);
            }
        }
    }

    public string GetString(string key)
    {
        if (string.IsNullOrEmpty(key))
        {
            throw new ArgumentNullException("key");
        }
        if (this.isCacheEnable)
        {
            try
            {
                return this.GetStringProtected(key);
            }
            catch (Exception e)
            {
                Log.Error(string.Format("Cannot Get key {0}", key), e);
            }
        }
        return null;
    }

    public void SetString(string key, string objectToCache, TimeSpan? expiry = null)
    {
        if (string.IsNullOrEmpty(key))
        {
            throw new ArgumentNullException("key");
        }
        if (this.isCacheEnable)
        {
            try
            {
                this.SetStringProtected(key, objectToCache, expiry);
            }
            catch (Exception e)
            {
                Log.Error(string.Format("Cannot Set {0}", key), e);
            }
        }
    }
    public bool IsCacheEnabled
    {
        get { return this.isCacheEnable; }

    }
    
    protected abstract void SetStringProtected(string key, string objectToCache, TimeSpan? expiry = null);
    protected abstract string GetStringProtected(string key);
    protected abstract void DeleteProtected(string key);
    protected abstract void FlushAllProtected();
    protected abstract void DeleteByPatternProtected(string key);
    public abstract bool IsCacheRunning { get;  }
}

As you can see, this abstract class delegates all methods to protected abstract methods that contain the cache implementation code. The abstract class does not know about the concrete implementation, only about general caching concerns. It also abstracts everything down to a single method that saves a string, which means the implementer does not need to care about anything other than strings. However, whoever uses the class has access to a Set method that accepts either a string or an object. The next class is the one that does the real job. Here is a simple Redis implementation of this abstract class.

public class RedisCache : Definitions.Cache
{
    private ConnectionMultiplexer redisConnections;

    private IDatabase RedisDatabase {
        get {
            if (this.redisConnections == null)
            {
                InitializeConnection();
            }
            return this.redisConnections != null ? this.redisConnections.GetDatabase() : null;
        }
    }

    public RedisCache(bool isCacheEnabled):base(isCacheEnabled)
    {
        InitializeConnection();
    }

    private void InitializeConnection()
    {
        try
        {
             this.redisConnections = ConnectionMultiplexer.Connect(System.Configuration.ConfigurationManager.AppSettings["CacheConnectionString"]);
        }
        catch (RedisConnectionException errorConnectionException)
        {
            Log.Error("Error connecting the redis cache : " + errorConnectionException.Message, errorConnectionException);
        }
    }

    protected override string GetStringProtected(string key)
    {
        if (this.RedisDatabase == null)
        {
            return null;
        }
        var redisObject = this.RedisDatabase.StringGet(key);
        if (redisObject.HasValue)
        {
            return redisObject.ToString();
        }
        else
        {
            return null;
        }
    }

    protected override void SetStringProtected(string key, string objectToCache, TimeSpan? expiry = null)
    {
        if (this.RedisDatabase == null)
        {
            return;
        }

        this.RedisDatabase.StringSet(key, objectToCache, expiry);
    }

    protected override void DeleteProtected(string key)
    {
        if (this.RedisDatabase == null)
        {
            return;
        }
        this.RedisDatabase.KeyDelete(key);
    }

    protected override void FlushAllProtected()
    {
        if (this.RedisDatabase == null)
        {
            return;
        }
        var endPoints = this.redisConnections.GetEndPoints();
        foreach (var endPoint in endPoints)
        {
            var server = this.redisConnections.GetServer(endPoint);
            server.FlushAllDatabases();
        }
    }

    public override bool IsCacheRunning
    {
        get { return this.redisConnections != null && this.redisConnections.IsConnected; }
    }
}

The Redis connection gets its settings from web.config. The Redis object is instantiated with the ConnectionMultiplexer that comes from the StackExchange API. This class is thread-safe, which is why the cache is registered as a singleton in the dependency container.

    container.RegisterType<RedisCache>(new ContainerControlledLifetimeManager()
                                                                , new InjectionConstructor(
                                                                        Convert.ToBoolean(ConfigurationManager.AppSettings["IsCacheEnabled"])
                                                                )); //Singleton ( RedisCache use thread-safe code)
    container.RegisterType<ICache, RedisCache>(); //Re-use the singleton above
    container.RegisterType<ICacheStatus, RedisCache>(); //Re-use the singleton above

This is how to register the cache with Microsoft Unity. The first registration registers the RedisCache class as a single object shared by every query to the cache, thus by every request. The next two registrations associate the two interfaces with that cache instance.

From there, it’s possible to use the interface anywhere. It’s also easy to unit test, since you can mock the ICache interface, which is the only interface that you need to pass through your code. It’s clear from the dependency injection code that we use the ICache interface and not the concrete RedisCache class. The cache shouldn’t be used in the controller class, nor in your service class or in your repository class. It belongs in the accessor classes, which sit between your service and repository classes. Here is a graphic of the layers recommended when using a cache system and a database.

Layers

The idea is that the only layer that knows about the cache is the accessor. The service layer does not know about the cache or the database — it only knows how to get and set from the accessor. The repository does not know about caching; its responsibility is to get the data from the persistent storage, whether with Entity Framework (or any other ORM) or directly with Ado.Net. On the other hand, the cache does not know about the database; it only knows how to store data for fast access. This means the accessor class is the only one that gets the cache injected. Here is a small example.

public class ContestAccessor: IContestAccessor
{
	private readonly IContestRepository contestRepository;
	private readonly ICache cache;
	public ContestAccessor(IContestRepository repository, ICache cache)
	{
		this.contestRepository = repository;
		this.cache = cache;
	}
}

This class can have methods to get specific information. Here is an example to get a contest by id.

public Contest GetById(int id)
{
    var key = string.Format("contest_by_id_{0}", id);
    var contestObject = this.cache.Get<Contest>(key);
    if (contestObject == null)
    {
        contestObject = this.contestRepository.GetById(id);
        this.cache.Set(key, contestObject);
    }
    return contestObject;
}

This is a basic example that gets the contest from the cache; if it is not found there, it is fetched from the repository and stored in the cache for the next call. In every case, we return the object, wherever it comes from. The service layer uses the injected accessor (the IContestAccessor interface, for example). It does not know anything about the repository or the cache — the service just knows how to get its object by an id.
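
For completeness, a service method built on top of that accessor could be as simple as the following sketch; ContestService is a hypothetical class that only depends on the accessor interface:

// Hypothetical service class: it only talks to the accessor, never to the cache or the repository.
public class ContestService
{
    private readonly IContestAccessor contestAccessor;

    public ContestService(IContestAccessor contestAccessor)
    {
        this.contestAccessor = contestAccessor;
    }

    public Contest GetContest(int id)
    {
        // The service does not know (or care) whether this comes from Redis or the database.
        return this.contestAccessor.GetById(id);
    }
}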

Integrating Redis in your Asp.Net MVC Project

I have been using Memcached for the last decade and wanted to try Redis on a new project. Redis is also a cache system that lets you cache data in RAM for fast access, like Memcached, and it offers the same key-value experience. Redis is newer and has more functionality. Where Memcached can be limited, for example in the choice of eviction strategies, Redis offers several options. Memcached is also more restrictive concerning key size, whereas Redis is far less so. Redis is not restricted to strings for values and can use its hash structures, and it is also possible to do operations, a bit like SQL, on the server side instead of having to retrieve the data to manipulate it. The goal of this article is not to sell you Redis but to show you how to use it with C# and an Asp.Net MVC project.

The first thing to do is to install Redis on your machine. It was created for Linux but has a simple installation for Windows. In fact, Microsoft has an open-source implementation you can download from its GitHub page. I hear a lot of good feedback about it; most say that Microsoft tries not to inject any Microsoft flavor into it and keeps it faithful to the Linux implementation — which is great. I also believe it will be kept up-to-date, since Microsoft offers a Redis service as part of Azure. Once you download the file and install it, you will have a running service on your machine.

RedisServer

Once it’s running, you can do a quick test with the redis-cli console. You can set and get a value with a simple set and get command.
RedisCliExample
It’s also possible to list all keys by using the keys * command. Once everything works as expected, you can delete everything with the flushall command.
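
For example, a quick session in redis-cli looks like this:

127.0.0.1:6379> set greeting "hello"
OK
127.0.0.1:6379> get greeting
"hello"
127.0.0.1:6379> keys *
1) "greeting"
127.0.0.1:6379> flushall
OK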

The next step is to be able to set and get from the Asp.Net C# application. This is where it can be tricky: there are multiple clients available. Do not waste your time with the ServiceStack one. Even though a lot of documentation is available, that library stopped being free as of version 4. Version 3 is still available to download via NuGet, but it’s more than a year old, the documentation does not quite match it anymore, and it requires a lot of tricky hacks to make everything work with all the dependent packages. After wasting 3 hours, I decided to use the StackExchange one. The names are similar, so do not get confused.

StackExchangeRedis

Once installed, you’ll be able to access Redis with almost no effort for basic commands like setting and getting. Here is a short example of possible use.

public class RedisCache : ICache
{
    private readonly ConnectionMultiplexer redisConnections;

    public RedisCache()
    {
        this.redisConnections = ConnectionMultiplexer.Connect("localhost");
    }
    public void Set<T>(string key, T objectToCache) where T : class
    {
        var db = this.redisConnections.GetDatabase();
        db.StringSet(key, JsonConvert.SerializeObject(objectToCache
                    , Formatting.Indented
                    , new JsonSerializerSettings
                    {
                        ReferenceLoopHandling = ReferenceLoopHandling.Serialize,
                        PreserveReferencesHandling = PreserveReferencesHandling.Objects
                    }));
    }


    public T Get<T>(string key) where T :class 
    {
        var db = this.redisConnections.GetDatabase();
        var redisObject = db.StringGet(key);
        if (redisObject.HasValue)
        {
            return JsonConvert.DeserializeObject<T>(redisObject
                    , new JsonSerializerSettings
                    {
                        ReferenceLoopHandling = ReferenceLoopHandling.Serialize,
                        PreserveReferencesHandling = PreserveReferencesHandling.Objects
                    });
        }
        else
        {
            return null;
        }
    }
}

From here, you can inject the ICache with your IoC container and use RedisCache. You can get and set any object. Voila! Of course, this class is not ready for production code. The real ICache should have more methods, like deleting, and you should not hardcode “localhost”, but this should give you enough to get started with Redis and .Net.
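
As a quick usage sketch, where Contest is just a placeholder for any serializable class of your own:

// Store an object and read it back through the class above.
var cache = new RedisCache();
cache.Set("contest_1", new Contest { Name = "Spring Challenge" });
Contest fromCache = cache.Get<Contest>("contest_1");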