Integrating Redis in your Asp.Net MVC Project

I have been using Memcached for the last decade and wanted to try Redis on a new project. Like Memcached, Redis is a cache system that lets you keep data in RAM for fast access, and it offers the same key-value experience. Redis is newer and has more functionality. Where Memcached can be limited, for example in the choice of eviction strategies, Redis offers several options. Memcached is also more restrictive concerning key size, whereas Redis is far less so. Redis also gives you the possibility of not being restricted to strings for values: you can use the Redis Hash structure instead. It is also possible to perform operations on the server side, a bit like SQL, instead of having to retrieve the data to manipulate it. The goal of this article is not to sell you Redis but to tell you how to use it with C# and an Asp.Net MVC project.

The first thing to do is to install Redis on your machine. It was created for Linux but has a simple installation for Windows. In fact, Microsoft has an open source port that you can download from its GitHub page. I hear a lot of good feedback about it. Most say that Microsoft tries not to inject any Microsoft flavor into it and keeps it just as an implementation of the Linux one, which is great. I also believe it will be kept up to date since Microsoft offers a Redis service as part of Azure. Once you download the file and install it, you will have a running service on your machine.

[Image: RedisServer]

Once it is running, you can do a quick test with the redis-cli console. You can set and get a value with simple set and get commands.
[Image: RedisCliExample]
It is also possible to list all keys by using the keys * command. Once everything works as expected, you can delete everything with the flushall command.

The next step is to be able to set and get values from the Asp.Net C# application. This is where it can get tricky. There are multiple clients available. Do not waste your time with the ServiceStack one. Even though a lot of documentation is available, that library stopped being free as of version 4. Version 3 is still available on NuGet, but it is more than a year old, the documentation mostly targets version 4 and does not quite match it, and it requires a lot of tricky hacks to make everything work with all the dependency packages. After wasting three hours, I decided to use the StackExchange one. The names are similar, so do not get confused.

[Image: StackExchangeRedis]
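If you prefer the Package Manager Console over the NuGet dialog, the package is named StackExchange.Redis:

Install-Package StackExchange.Redis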

Once installed, you will be able to access Redis with almost no effort for basic commands like setting and getting values. Here is a short example of a possible use.

using Newtonsoft.Json;
using StackExchange.Redis;

public class RedisCache : ICache
{
    // A single multiplexer is reused for every call; it is designed to be shared.
    private readonly ConnectionMultiplexer redisConnections;

    public RedisCache()
    {
        this.redisConnections = ConnectionMultiplexer.Connect("localhost");
    }

    public void Set<T>(string key, T objectToCache) where T : class
    {
        var db = this.redisConnections.GetDatabase();

        // Serialize to JSON and store the result as a Redis string.
        db.StringSet(key, JsonConvert.SerializeObject(objectToCache
                    , Formatting.Indented
                    , new JsonSerializerSettings
                    {
                        ReferenceLoopHandling = ReferenceLoopHandling.Serialize,
                        PreserveReferencesHandling = PreserveReferencesHandling.Objects
                    }));
    }

    public T Get<T>(string key) where T : class
    {
        var db = this.redisConnections.GetDatabase();
        var redisObject = db.StringGet(key);
        if (redisObject.HasValue)
        {
            return JsonConvert.DeserializeObject<T>(redisObject
                    , new JsonSerializerSettings
                    {
                        ReferenceLoopHandling = ReferenceLoopHandling.Serialize,
                        PreserveReferencesHandling = PreserveReferencesHandling.Objects
                    });
        }
        else
        {
            // Nothing cached under that key.
            return null;
        }
    }
}

From here, you can register RedisCache against ICache in your IoC container and inject it wherever you need it. You can get and set any object. Voila! Of course, this class is not ready for production code. A real ICache should have more methods, such as delete, and you should not hardcode "localhost", but this should give you enough to get started with Redis and .Net.
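For reference, here is a minimal sketch of what the ICache contract and a consuming controller could look like; the interface shape, the Product class, and the ProductController are assumptions made for illustration, and the registration itself belongs in whatever IoC container your project already uses.

using System.Web.Mvc;

// Hypothetical contract implemented by RedisCache above (names assumed for illustration).
public interface ICache
{
    void Set<T>(string key, T objectToCache) where T : class;
    T Get<T>(string key) where T : class;
}

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Example consumer: the cache arrives through constructor injection from the IoC container.
public class ProductController : Controller
{
    private readonly ICache cache;

    public ProductController(ICache cache)
    {
        this.cache = cache;
    }

    public ActionResult Detail(int id)
    {
        var product = this.cache.Get<Product>("product:" + id);
        if (product == null)
        {
            // Stand-in for a real repository or Entity Framework call.
            product = new Product { Id = id, Name = "Loaded from the database" };
            this.cache.Set("product:" + id, product);
        }
        return View(product);
    }
}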

Serializing Complex Object for Redis

I have been using Redis for a few weeks now and I stumbled into a weird problem after a few days. At first, my serialization was fine: I was able to serialize, cache, and deserialize without any problem. After a while, I was no longer able to deserialize what I had serialized.

First of all, I am using JSON.Net for serialization and deserialization. It has been the go-to serialization library for a long time; it is even the one Microsoft now uses for its new projects. Second, I was already using some settings to handle references. The problem was that one of the objects had a collection typed as an interface, which was not deserialized correctly because JSON.Net did not know which concrete type to instantiate.

To fix this issue and have the concrete type embedded during serialization, you need to use JsonSerializerSettings with the TypeNameHandling property set to All. The result looks like the following code:

var serializedObjectToCache =  JsonConvert.SerializeObject(objectToCache
                       , Formatting.Indented
                       , new JsonSerializerSettings
                       {
                           ReferenceLoopHandling = ReferenceLoopHandling.Serialize,
                           PreserveReferencesHandling = PreserveReferencesHandling.Objects,
                           TypeNameHandling = TypeNameHandling.All
                       });

The result of the serialization is a JSON property named "$type" which contains the full namespace and class name. You can find more detail in the JSON.Net documentation about TypeNameHandling.

The deserialization also uses the same JsonSerializerSettings.

var obj = JsonConvert.DeserializeObject<T>(serializedObject
                        , new JsonSerializerSettings
                        {
                            ReferenceLoopHandling = ReferenceLoopHandling.Serialize,
                            PreserveReferencesHandling = PreserveReferencesHandling.Objects,
                            TypeNameHandling = TypeNameHandling.All
                        });
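
To illustrate the problem and the fix end to end, here is a minimal, self-contained sketch; the IAnimal, Dog, and Shelter types are invented for this example and are not from the original project.

using System.Collections.Generic;
using Newtonsoft.Json;

public interface IAnimal
{
    string Name { get; set; }
}

public class Dog : IAnimal
{
    public string Name { get; set; }
}

public class Shelter
{
    // A collection typed as an interface: this is what breaks plain deserialization.
    public List<IAnimal> Animals { get; set; }
}

public class TypeNameHandlingDemo
{
    public static void Main()
    {
        var settings = new JsonSerializerSettings
        {
            ReferenceLoopHandling = ReferenceLoopHandling.Serialize,
            PreserveReferencesHandling = PreserveReferencesHandling.Objects,
            TypeNameHandling = TypeNameHandling.All
        };

        var shelter = new Shelter { Animals = new List<IAnimal> { new Dog { Name = "Rex" } } };

        // The JSON now contains "$type" entries such as "Dog, <assembly name>" next to the data.
        string json = JsonConvert.SerializeObject(shelter, Formatting.Indented, settings);

        // Without TypeNameHandling.All this call fails because JSON.Net cannot instantiate IAnimal.
        var roundTrip = JsonConvert.DeserializeObject<Shelter>(json, settings);
    }
}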

With the three settings used in this article, you can serialize complex objects with recursive references, abstract types, or interface types, and not have to worry about serializing an Entity Framework object. So why is this not the default configuration? Because it adds a lot of extra information to the JSON. This should not be a huge problem when used with Redis; it is more of a problem if you transfer the information through an Ajax call.
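
Since the exact same three settings must be used on both sides, a small improvement to the RedisCache class shown earlier (a suggestion, not code from the original project) is to keep them in one shared helper so the serialize and deserialize paths cannot drift apart.

using Newtonsoft.Json;

public static class CacheSerialization
{
    // Single shared settings instance so serialization and deserialization always agree.
    private static readonly JsonSerializerSettings Settings = new JsonSerializerSettings
    {
        ReferenceLoopHandling = ReferenceLoopHandling.Serialize,
        PreserveReferencesHandling = PreserveReferencesHandling.Objects,
        TypeNameHandling = TypeNameHandling.All
    };

    public static string Serialize<T>(T objectToCache) where T : class
    {
        return JsonConvert.SerializeObject(objectToCache, Formatting.Indented, Settings);
    }

    public static T Deserialize<T>(string serializedObject) where T : class
    {
        return JsonConvert.DeserializeObject<T>(serializedObject, Settings);
    }
}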

Asp.Net Identity 2.2 Localization of error messages

It is interesting to see that Microsoft is trying to do better in terms of localization but still fails to provide a perfect solution. Asp.Net Identity is English by default, but at least the strings are not hardcoded: the Identity library uses a resource file internally.
[Image: BadExample]

Here is what you can see if you decompile the Asp.Net Identity assembly: a resource file with the defined strings.

[Image: IdentityResourceFile]

The problem is that you cannot just define your own resource file and set up the strings you want. You must use one of Microsoft's packages for your language. Here is the one for French.

Install-Package Microsoft.AspNet.Identity.Core.fr

[Image: AspNetIdentityCore]

The problem with that solution is that you have to be lucky enough to have your language available, and you have to like the strings they chose for you. Another option is to create your own UserValidator and PasswordValidator; by inheriting from the Identity ones, you can still keep the benefits of Identity, as sketched below. Overall, there are workarounds, but they are time consuming for nothing. A better solution, such as simply dropping in a resource file of your own, would have been way easier and more customizable. Nevertheless, things are starting to head in the right direction after more than a decade of trying to create a better identity framework.
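
Here is a rough sketch of the validator route, assuming the ApplicationUser and UserManager types from the default MVC template; the class name and the French strings below are purely illustrative.

using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNet.Identity;

// Illustrative only: reuse the built-in validation but swap the English messages for custom ones.
public class FrenchUserValidator : UserValidator<ApplicationUser>
{
    public FrenchUserValidator(UserManager<ApplicationUser, string> manager)
        : base(manager)
    {
    }

    public override async Task<IdentityResult> ValidateAsync(ApplicationUser item)
    {
        var result = await base.ValidateAsync(item);
        if (result.Succeeded)
        {
            return result;
        }

        // Naive mapping from a default English error to a custom French one (assumed strings).
        var translated = result.Errors.Select(error =>
            error.Contains("is already taken") ? "Ce nom d'utilisateur est déjà utilisé." : error);
        return IdentityResult.Failed(translated.ToArray());
    }
}

// Then, where the UserManager is created:
// manager.UserValidator = new FrenchUserValidator(manager);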

How to Use Third Party Dll Reference in a SQL CLR Function

In a previous article, I used a SQL CLR function written in C# that was executed by SQL Server during an account migration. The reason was to reuse the hashing code from Asp.Net Identity that is used by Entity Framework and Asp.Net. In this article, instead of using a copy of the real implementation's code, we are going to use the real Asp.Net Identity DLL. Our C# DLL will just be a proxy to the third party.

[Microsoft.SqlServer.Server.SqlFunction]
public static SqlString HashPassword2(string plainPassword)
{
   var hasher = new Microsoft.AspNet.Identity.PasswordHasher();
   return hasher.HashPassword(plainPassword);
}

The code is way simpler and reuses the Microsoft Asp.Net code, which is interesting. The problem is that a SQL Server project does not allow NuGet: it is not possible to get the reference from a NuGet package. So we need to select the DLL from the browse window of the Add Reference dialog. The simple way to do it is to select the one in the bin folder of your Asp.Net MVC project. Once that is done, we still need to do something to allow SQL Server to load it. If you do not, it will compile but not publish.

Creating [MySqlToMsSql]...
(63,1): SQL72014: .Net SqlClient Data Provider: Msg 6503, Level 16, State 12, Line 1 Assembly 'microsoft.aspnet.identity.core, version=2.0.0.0, culture=neutral, publickeytoken=31bf3856ad364e35.' was not found in the SQL catalog.
(63,0): SQL72045: Script execution error.  The executed script:
CREATE ASSEMBLY [MySqlToMsSql]
    AUTHORIZATION [dbo]
    FROM 0x4D5A9000...
An error occurred while the batch was being executed.

To fix that issue, we need to go into the properties of the third-party DLL added to the project.

[Image: PropertyOfThirdParty]

The Model Aware, Is Visible, and Generate Sql Script properties must be set to true.

How to use C# code within SQL

This is the first time I needed special business logic inside the database that I could not write directly in TSQL. Usually, I work around it or simply do something outside SQL, like a small C# console application that handles everything. I decided to take a look at CLR SQL Server user-defined functions. This article describes how to create your own C# method and use it within your TSQL. Despite the feasibility and success of the approach, I removed the code in the end in favor of a console application, and I will explain why.

The first thing to do is to create a SQL Server project.
[Image: SqlServerDatabaseProject]
The second task is to add a file that will contain the code. You can right-click the new project and add a new item. Select the user-defined function item.
[Image: SqlUserDefinedFunction]
This will create a partial class named "UserDefinedFunctions". You can rename the file to a custom name; what is important is to keep the class partial.

Finally, you write your own method. Mine returned a single string, so I had to return a SqlString.

using System;
using System.Data.SqlTypes;
using System.Security.Cryptography;

public partial class UserDefinedFunctions
{
    [Microsoft.SqlServer.Server.SqlFunction]
    public static SqlString HashPassword(string plainPassword)
    {
        byte[] salt;
        byte[] buffer2;

        // PBKDF2 with a 16-byte salt (0x10) and 1000 iterations (0x3e8), producing a 32-byte subkey (0x20).
        using (var bytes = new Rfc2898DeriveBytes(plainPassword, 0x10, 0x3e8))
        {
            salt = bytes.Salt;
            buffer2 = bytes.GetBytes(0x20);
        }

        // Output layout: 1 version byte (0x00), 16 bytes of salt, 32 bytes of subkey = 49 bytes (0x31).
        byte[] dst = new byte[0x31];
        Buffer.BlockCopy(salt, 0, dst, 1, 0x10);
        Buffer.BlockCopy(buffer2, 0, dst, 0x11, 0x20);
        return new SqlString(Convert.ToBase64String(dst));
    }
}

This is the code I used. It hashes a plain password and returns the hashed password. The algorithm is compatible with Entity Framework, so it was a way to convert legacy passwords into the Entity Framework format. You can see that not only is the class partial, but the method also carries an attribute. Microsoft.SqlServer.Server.SqlFunction is required to tell SQL Server that the method is a function. It also allows you to add additional information, such as whether the function is deterministic.
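
For example, a function that always returns the same output for the same input and touches no data could advertise that through the attribute; the snippet below is just an illustration of those options and does not apply to the hashing function above, which is not deterministic because of its random salt.

using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

public partial class UserDefinedFunctions
{
    // Illustrative only: deterministic and reads no data, so SQL Server can optimize calls to it.
    [SqlFunction(IsDeterministic = true, DataAccess = DataAccessKind.None)]
    public static SqlInt32 StringLengthOrZero(SqlString value)
    {
        return value.IsNull ? new SqlInt32(0) : new SqlInt32(value.Value.Length);
    }
}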

The last step is to publish. Right-click the project and select Publish. This opens a dialog where you need to select the database server, the database name, and the script to generate. This will also build the project if it is not already done. The bin folder will contain the code (DLL) but also a dacpac and a SQL file.

[Image: SqlUserDefinedFunctionBinFolder]

The dacpac is what Visual Studio installs into SQL Server. It is a file that contains SQL objects; it is not limited to C# code and could contain any SQL object, like a stored procedure for example. The format cannot be read in Notepad because it is a compressed file. You can unzip it if you are curious. And we are curious! So here is the unzipped file structure of the dacpac for the code above.
[Image: dacpactunzipped]

The interesting file is model.xml. It contains a reference to the DLL file, but also the metadata of the function we created and the source code inside a CDATA section.
[Image: insidedacpac]
That said, if you used Visual Studio, the function has already been installed for you. You can see it under Scalar-valued Functions.

[Image: ScalarValueFunctionSqlServer]

From here, you can just use a simple SELECT, pass your parameter, and you are all set.

If you cannot get your database working with CLR, it may be because CLR is disabled. To enable it, you need to run the following commands once.

sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure 'clr enabled', 1;
GO
RECONFIGURE;
GO

The problem I ran into is performance. This is very slow compared to native SQL code. It is also slower than executing the same code inside a simple console application that loops through all the data. For example, without that call (keeping the password plain instead of converting it), the script took 7 seconds, which was reasonable. Using the CLR SQL Server user-defined function, the time rose to 11 minutes. Doing the loop in a console application that reads the data and updates the value with ADO.Net took half that time, as sketched below. The time gets high because of the code itself, which is not a simple task: even if it takes only a few milliseconds to execute in a console, the hashing process is still a little expensive. Also, CLR in SQL adds a layer of slowness because the SQL engine needs to stub a bridge to the function's Microsoft intermediate language (MSIL) inside your SQL query. This generated stub is compiled into optimized code for the SQL Server instance you are executing it on.
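
For comparison, here is a rough sketch of that console approach, assuming a hypothetical Users table with Id, PlainPassword, and PasswordHash columns; the table, columns, and connection string are illustrative, not the real migration schema.

using System.Collections.Generic;
using System.Data.SqlClient;
using Microsoft.AspNet.Identity;

public class PasswordMigrationConsole
{
    public static void Main()
    {
        // Assumed connection string and schema, for illustration only.
        const string connectionString = "Server=localhost;Database=LegacyDb;Integrated Security=true;";
        var hasher = new PasswordHasher();
        var users = new List<KeyValuePair<int, string>>();

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();

            // Read every user's plain password once.
            using (var select = new SqlCommand("SELECT Id, PlainPassword FROM Users", connection))
            using (var reader = select.ExecuteReader())
            {
                while (reader.Read())
                {
                    users.Add(new KeyValuePair<int, string>(reader.GetInt32(0), reader.GetString(1)));
                }
            }

            // Hash in .NET (fast) and write the result back row by row.
            foreach (var user in users)
            {
                using (var update = new SqlCommand("UPDATE Users SET PasswordHash = @hash WHERE Id = @id", connection))
                {
                    update.Parameters.AddWithValue("@hash", hasher.HashPassword(user.Value));
                    update.Parameters.AddWithValue("@id", user.Key);
                    update.ExecuteNonQuery();
                }
            }
        }
    }
}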

That said, the performance was quite slow, going from a few seconds to several minutes. I had a huge script that was converting a lot of data, gigs and gigs across many tables from one database to another. The overhead was not something I wanted to pay during the conversion; it could be handled afterward by a small console job. I could also have used the CLR function in a second script later, because even if it took several minutes, creating the console application was not saving much more. Still, during the creation of the CLR function for SQL I got curious about performance, and I learned that this is far from a black-and-white subject. Most of the time, whether a C# CLR function is worth it depends on the operation being executed. I personally believe its use should be limited. Even if it is possible to debug those functions remotely with Visual Studio, I believe that business logic should remain in the application tier; the database should be used only for persisting information. Nevertheless, special cases like using regular expressions on strings or doing special data manipulations may not be possible with SQL alone. This is a great tool to have on your belt and it should be used wisely.