MongoDB: The Good to Know

Recently I have been working with MongoDB. Here are some highlights that might help you decide whether to use this NoSQL database.

In MongoDB, write operations are atomic at the document level. If you design your data to use references (which is possible), forget about atomic transactions. Also, if you design by referencing other documents, it will not be possible to get all the information with a single query.

It is possible to query MongoDB with comparison operators such as $gt, $gte, $lt, $lte, $in, $nin, and $ne. Logical operators exist with $or, $and, $not, and $nor. Other types of operators exist and can be used for querying the database or for projection. Here are some examples.

db.users.insert({name:"patrick", age:30})
db.users.insert({name:"mélodie", age:26})
db.users.insert({name:"vincent", age:30})
db.users.insert({name:"julie", age:28})
db.users.find({age:{$lt:30}}) // Return 2 elements
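
Building on the same users collection, here is a sketch of the set and logical operators (the expected results assume only the four documents above):

db.users.find({age:{$in:[26, 28]}}) // Return mélodie and julie
db.users.find({$or:[{name:"patrick"}, {age:{$gte:30}}]}) // Return patrick and vincent
db.users.find({age:{$ne:30}}) // Return 2 elements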

A MongoDB document cannot be larger than 16 megs. To store anything above that limit, you need GridFS, which divides the content into chunks and reassembles every part when you read it back.

In MongoDB, you can let the system generate the unique identifier by not setting any _id, but you are also free to assign the _id yourself when inserting your document.

It is better to store a one-to-many reference on the "many side", with each document referencing its parent. This way, you do not end up with a huge array on the "one side". You can also reference something that does not exist yet: for example, if A references B, you can insert A with a reference to B and then insert B.
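
As a sketch, storing the reference on the "many side" looks like this (the publisher and book documents are made up for illustration):

db.publishers.insert({_id:"oreilly", name:"O'Reilly"})
db.books.insert({title:"Book A", publisher_id:"oreilly"})
db.books.insert({title:"Book B", publisher_id:"oreilly"})
db.books.find({publisher_id:"oreilly"}) // All the books of one publisher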

It is possible to set indexes, like in SQL, to improve performance. This can be done with the ensureIndex method, which is available on the collection.

db.users.ensureIndex({name:1})

It is possible to store hierarchical information, such as a materialized path, in a string property and then query against that string with a regular expression.
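
For example, with illustrative category documents, a regular expression anchored at the start of the path finds a whole subtree:

db.categories.insert({name:"Programming", path:",Books,Programming,"})
db.categories.insert({name:"Databases", path:",Books,Programming,Databases,"})
db.categories.find({path:/^,Books,Programming,/}) // Return both categories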

It is possible to write with the MongoDB commands insert, update, findAndModify, and remove.

The update command can be executed against a collection. It takes three parameters. The first one is the query: you can specify the unique identifier of the document you want to update, or any other criteria, which can be useful with embedded resources to match multiple documents that share the same embedded information. The second parameter is what we are updating: the whole document or only a part of it. It is also possible to push ($push) information into an existing array or to increment/decrement ($inc) the value of a field. The following example comes from the MongoDB website; it updates a book by its ID only if copies are available on the shelf. It decreases the number of available copies and pushes a new entry about who checked out the book. This is all done atomically.

db.books.update (
   { _id: 123, available: { $gt: 0 } },
   {
     $inc: { available: -1 },
     $push: { checkout: { by: "abc", date: new Date() } }
   }
)

Operations like update, insert, and delete return information in a WriteResult object, with properties such as the number of documents matched by the query and the number of inserted or modified documents. The number of inserted documents can be above 0 when updating if, in the third parameter of update, you specify the option to insert when nothing is found (upsert).
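
Here is a sketch of that third parameter; with upsert set to true, the WriteResult reports an upserted document instead of a modified one (the values are illustrative):

db.books.update(
   { _id: 456 },
   { $set: { available: 5 } },
   { upsert: true } // Insert the document if no match is found
)
// WriteResult({ "nMatched" : 0, "nUpserted" : 1, "nModified" : 0 })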

When defining an index on an array, MongoDB creates an index entry for each element (a multikey index). This means that if you have a document with an index on an array of 3 elements, the backend stores three index entries for that document. For example, if you have a car document with an array of colors and you set an index on the color array, the document is indexed 3 times and also stored one time as the car itself. This has an impact on insert time (like in SQL).

If you are working with money and want to be exact, you need to scale your numbers to integers. For example, storing 9.99 with a precision of 2 decimals requires you to store 999 in MongoDB. You then divide by 100 to get back the real value. Depending on the precision you want, you multiply and divide by the corresponding power of 10.
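
A minimal sketch of that scaled-integer technique (the collection and property names are made up):

db.products.insert({name:"book", priceCents:999}) // 9.99 stored as an integer
var product = db.products.findOne({name:"book"})
var realPrice = product.priceCents / 100 // 9.99 back in the application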

The primary key is defined with a MongoDB ObjectId. This ObjectId is generated using the Unix timestamp, a machine identifier, the process id, and a counter that starts from a random value. The result looks similar to a GUID. You can generate one by calling ObjectId().
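Calling it in the shell returns a new identifier (the value shown is only illustrative):

ObjectId()
// ObjectId("507f191e810c19729de860ea")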

KeyValuePair Does Not Return NULL with the Linq to Objects SingleOrDefault Method

If you have a list of key-value pairs and you are searching for something that might not be there, you may want to use SingleOrDefault or FirstOrDefault to get the element. If it does not exist, you may think that Linq to Objects returns null, but in fact it returns the default value, which is a new instance of the KeyValuePair type.

    var kvp1 = new KeyValuePair<string, string>("a", "b");
    var kvp2 = new KeyValuePair<string, string>("c", "d");
    var list = new List<KeyValuePair<string, string>> {kvp1, kvp2};
    var value = list.SingleOrDefault(d => d.Key == "notfound").Value;

The code above returns from SingleOrDefault a new KeyValuePair object with the Key and the Value set to NULL. The return of the Linq query is not NULL.

Is it the same for any of your own classes when the element you search for is not found?

var kvp3 = new MyKeyValuePair {Key = "a", Value = "b"};
var kvp4 = new MyKeyValuePair {Key = "c", Value = "d"};
var list2 = new List<MyKeyValuePair> { kvp3, kvp4 };
var value2 = list2.SingleOrDefault(d => d.Key == "notfound").Value;

public class MyKeyValuePair
{
    public string Key { get; set; }
    public string Value { get; set; }
}

The result is that evaluating value2 throws an exception, because SingleOrDefault returned NULL. How come? It returns the default value, as the name of the method specifies. So, if we check the default value of a class, we should get an empty object, right? Wrong! We get NULL.

var defaultIs = default(MyKeyValuePair); //This returns null!

If we check the source code of SingleOrDefault, we realize that it uses that exact same default keyword.

public static TSource SingleOrDefault<TSource>(this IEnumerable<TSource> source, Func<TSource, bool> predicate) {
	if (source == null) throw Error.ArgumentNull("source");
	if (predicate == null) throw Error.ArgumentNull("predicate");
	TSource result = default(TSource);
	long count = 0;
	foreach (TSource element in source) {
		if (predicate(element)) {
			result = element;
			checked { count++; }
		}
	}
	switch (count) {
		case 0: return default(TSource);
		case 1: return result;
	}
	throw Error.MoreThanOneMatch();
}

The KeyValuePair class, or should I say the KeyValuePair struct, has a different default. The reason is that the default value of a struct is not the same as that of a class: it is a new zeroed structure, not null. The mystery is now resolved. For your information, you cannot define your own default value for your classes. Here is something interesting from MSDN:

The solution is to use the default keyword, which will return null for reference types and zero for numeric value types. For structs, it will return each member of the struct initialized to zero or null depending on whether they are value or reference type.
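
A minimal sketch of how to tell "not found" apart from a real element when the list holds structs: compare the result against the default value (ValueType.Equals compares structs field by field):

    var found = list.SingleOrDefault(d => d.Key == "notfound");
    bool exists = !found.Equals(default(KeyValuePair<string, string>)); // false here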

Installing MongoDB on a Windows Machine

The first step is to install MongoDB on your machine. Go to the official website; in the top menu, you will see Download. Click the Download link (http://www.mongodb.org/downloads) and then select the Windows version, either 64 bits or 32 bits. In this MongoDB tutorial, we will use the 64-bit MongoDB for Windows. The file is about 132 megs and is a setup executable.

[Screenshot: MongoDB setup]

The installation is pretty straightforward. You can select the typical installation to have a basic setup. During the installation, you will have to accept the permission elevation.

From here, it is time to open a Command Prompt with administrative rights. Let's go into the folder where we just installed MongoDB. Since we installed the typical package of the 64-bit version, the installation should be in Program Files.

[Screenshot: MongoDB console]

After that, you can configure MongoDB. The next steps are taken directly from the MongoDB documentation. You must create a directory where MongoDB will store its data. This can be anywhere, so let's create the default data folder, \data\db, at the root of the drive. The md command lets you create a directory. The code below creates the default path that MongoDB uses.

cd "C:\Program Files\MongoDB 2.6 Standard"
md \data\db

Then, you can start MongoDB.

cd bin
mongo.exe

The first time I started MongoDB, I got a warning followed by an error saying that it was not possible to connect.

C:\Program Files\MongoDB 2.6 Standard\bin>mongo.exe
MongoDB shell version: 2.6.4
connecting to: test
2014-09-10T13:34:40.878-0700 warning: Failed to connect to 127.0.0.1:27017, reason: errno:10061 No connection could be made because the target machine actively
refused it.
2014-09-10T13:34:40.882-0700 Error: couldn’t connect to server 127.0.0.1:27017 (127.0.0.1), connection attempt failed at src/mongo/shell/mongo.js:146
exception: connect failed

Then I realized that I was launching mongo.exe instead of mongod.exe. Once the administrator console has launched mongod.exe, you can start a new console (no need for administrative privileges on this one) and start mongo.exe. Here is what you should see.

[Screenshot: MongoDB console connected]

Do not forget to specify the --dbpath option when starting mongod.exe, because otherwise it will store everything on your C:\ drive.
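
For example, to store the data somewhere else (the path is illustrative):

mongod.exe --dbpath "D:\MongoData"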

Basic Commands

Here are a few commands that may be useful during development.
show dbs lists the databases you have.

show dbs

You can create a new database or switch to another database by using the use command.

use mydb

Information is added into collections. You can add data into a collection with the command db.<collectionName>.insert().
Here is an example of three inserts into a collection named "testdata" of the dotnet database.

use dotnet
db.testdata.insert({id:1})
db.testdata.insert({id:2})
db.testdata.insert({id:3,name:"three"})

It is possible to check that a collection really exists in the active database with the show collections command.

show collections

The last really useful command is for seeing the content of a collection. You can use the find() command.

db.testdata.find()

Keep in mind that if you see nothing, it might be because you typed the collection name with the wrong case: MongoDB is case sensitive.
Here is a screenshot of the output of all the commands that we just discussed.
[Screenshot: output of the basic commands]

New Features in C# 6.0

Soon, Microsoft will release the new version of C#. Here are some of the most interesting features that version 6.0 provides.
Auto property
Auto-properties can now have only a getter instead of a setter and a getter. Before C# 6.0, you had to have a private setter. Now, you can have only the getter by omitting the set keyword inside the curly braces.

public int MyProperty { get; }

That said, it is also convenient that you can now give an initial value to the property. This is valid for getter-only properties but also for properties with both a setter and a getter.

public int MyProperty { get; } = 5;

Static methods no longer need to be qualified with the class they belong to if you import the class with the new using static directive. For example, with C# 5, you have to go through the Math class to use the absolute value method, Abs. It does not mean that you have to use that shortcut. However, it can be useful to reduce the amount of repetition in your code if you are using static methods intensively.

using static System.Math;	//Directive that imports the static members (final C# 6.0 syntax)

Math.Abs(-1); 	//Old way, that is still valid
Abs(-1);		//New way

String.Format
String.Format has a new alternative: string interpolation. This means that instead of relying on an index to define a placeholder, you can use something more meaningful. For example, if you have a variable that you want to insert into a string, instead of specifying the index 0, you can simply use a backslash with curly braces around the variable name. Note that this was the preview syntax; in the final C# 6.0 release it became the $"..." prefix.

return String.Format("This is my name '{0}'", name); //Old way, that is still valid
return "This is my name '\{name}'";					 //Preview syntax
return $"This is my name '{name}'";					 //Final C# 6.0 syntax

Methods
Methods can be written with a lambda-like expression body. This is quite interesting for small methods.

public string ReturnStringMethod()
{
	return "A string";
}
//Can be rewritten by :
public string ReturnStringMethod() => "A string";

You can also now use the nameof operator to get a string that represents a variable name. This is very interesting because it reduces problems while refactoring: the string that is present in the exception is now dynamically linked to the variable name, which Visual Studio fully supports during refactoring.

public void YourMethod(Object yourArgument)
{
    if (yourArgument == null)
    {
        throw new ArgumentNullException("yourArgument", "Cannot be null"); //Before, we had to specify the name with a string
    }
}

public void YourMethod(Object yourArgument)
{
    if (yourArgument == null)
    {
        throw new ArgumentNullException(nameof(yourArgument), "Cannot be null"); //Now we have the nameof operator
    }
}

Index initializer
If your object defines an indexer (the square bracket operator), you can now use the object initializer to set values.

var yourObject = new YourObjectWithIndex { ["var1"] = 1, ["var2"] = 2 };
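
For a concrete case, Dictionary<TKey, TValue> defines such an indexer, so it can be initialized this way:

var numbers = new Dictionary<string, int>
{
    ["var1"] = 1,
    ["var2"] = 2
};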

Null conditional operators
A new operator is born to check if something is null: the ?. operator. It verifies whether the expression before the operator is null. If it is, nothing after it is evaluated; if it is not null, evaluation continues with what is on the right. Note that it cannot be used as the target of an assignment, so the example below reads a value instead of setting one.

//Before
string result = null;
if(variable != null && variable.property != null)
{
   result = variable.property.test;
}
//Now
var result = variable?.property?.test; //null if variable or variable.property is null

Visual Studio Build Notification

Visual Studio comes with multiple interesting tools. One of these tools is the Build Notification application.

This tool can be located in the Common7\IDE folder. Here is an example with Visual Studio 2013:

C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\BuildNotificationApp.exe

[Screenshot: Build Notification application]

Once open, you have to configure the tool to let it know which builds to check.

Here is an example that was automatically filled in, since I am already using Visual Studio and TFS for my project.

[Screenshot: Build Notifications options]

When the configuration is set, the system tray shows an icon with the status of the last build. For example, here is the icon when a build fails.
[Screenshot: build status icon in the notification tray]

This tool is quite handy if your team uses TFS with a build server. This way, you know exactly when the build fails and can be ready to react.

Entity Framework Does Not Allow Nullable Complex Types

Once in a while, I forget this weakness of Entity Framework that makes me change the design of my database: Entity Framework (for the first 6 versions at least) cannot save an entity that has a complex type set to null.

Let's take the context where you have a class named Order, and this one has a Price property. The Price property is of type Money, which is a complex type. You cannot set your Price property to null without Entity Framework crashing during the commit phase.

DbUpdateException: Null value for non-nullable member. Member: ‘PriceLimit’.

Once you realize that Entity Framework will not help you down this path, you have to change your design. There are multiple ways to handle this kind of scenario, but the one I prefer, and which I really think is quite easy, is to have an additional property inside the complex class that specifies whether the complex class is null. Of course, it would be cleaner not to have that property, but at least it is a viable solution if you own the complex type. It also has the advantage of being cohesive and of not altering all the classes that use that complex type.

In the complex class, we first change the value to be nullable.

private decimal? value;

public decimal? Value
{
     get { return value; }
     set { this.value = value; }
}

This is the first change that lets the database save a null value, not for the complex type itself but for its inner value. The next step is to create a new property that specifies whether the complex type is null.

public bool IsNull {
    get { return !this.Value.HasValue; }
    set {
        if (value)
        {
            this.Value = null;
        }
        else
        {
            this.Value = default(decimal);
        }
    }
}

As you can see, the IsNull property does not contain a value but is calculated on the fly. We also will not store this value in the database, which means we need to make Entity Framework ignore this property.

public MoneyConfiguration()
{
    this.Ignore(d => d.IsNull);   // Required because Entity Framework cannot have two properties that load the same property(value)
}

The reason is twofold. First, we do not need to save this value because we can calculate it on the fly. Second, Entity Framework does not allow us to read this type of property back. Indeed, Entity Framework can save both values (the value and the IsNull flag), but when it tries to load the data from the database, it cannot resolve the value correctly, primarily because both properties depend on each other. When setting Value, IsNull does not change, so that is fine. However, when Entity Framework sets IsNull to false, the default value is assigned. Since we cannot tell Entity Framework to avoid loading a single property, nor specify the order in which properties are loaded, it is better to avoid saving it.
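
Consumer code then checks the flag instead of comparing the complex type to null (the Order entity and context are the hypothetical names used earlier in this example):

var order = context.Orders.First();
if (order.Price.IsNull)
{
    // Treat the price as absent
}
else
{
    decimal amount = order.Price.Value.Value; // Unwrap the nullable inner value
}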

NDepend Version 5.4 Professional Edition Review

NDepend has existed for many years and is one of the best static analysis tools around. It can be integrated within Visual Studio or run independently from a GUI or a console. It is interesting because it outputs nice graphics of the dependencies between your classes, namespaces and assemblies. Also, NDepend's powerful Linq-based language lets you define custom metrics about your code and keep track of them during the development lifetime. This is possible by comparing analysis runs.

For this review, I did a run on one project that I am working on, which has 7,100 lines of code, 702 types, 34 assemblies and 3,900 lines of comments. I can tell you that because NDepend's dashboard gives you all this information straight after your first run.

[Screenshot: NDepend dashboard]

As you can see in the dashboard, code coverage is also possible. At the time of my first run, I had not configured the unit tests for coverage. After specifying the coverage output file to NDepend, it was able to show the information.

[Screenshot: code coverage in the NDepend dashboard]
This is quite interesting if you are serious about the health of your coverage, because you get a picture of how your coverage evolves during the development lifetime of your application. Since NDepend can be used from a console, you can run daily reports and then compare what happened. This is true not only for coverage but for every metric that NDepend gives you.

From the dashboard, we can see that the project has some rule violations. By default, NDepend comes with hundreds of predefined rules.
[Screenshot: code rules violated, from NDepend]

Clicking "Critical Rules Violated" opens the Queries and Rules Explorer, filtered to the rules that have passed the reasonable threshold.
[Screenshot: 10 critical rules violated, from NDepend]

The next step is to match the rule violations with the code. This is easily done by clicking the problem: NDepend opens the matched-methods panel where every method in violation appears. However, there can be false positives. For example, NDepend found that I have 5 methods with too many arguments. The problem is that those 5 methods come from an assembly that uses Entity Framework's Migration Tool. Double clicking the method from NDepend gives this:

[Screenshot: NDepend cannot open the declaration source file]

This can be resolved by changing the CQLinq query (CQLinq is NDepend's Linq language). The "too many parameters" rule looks like this:

warnif count > 0 from m in JustMyCode.Methods 
where 
  m.NbParameters > 8
  orderby m.NbParameters descending
select new { m, m.NbParameters }

To fix the problem, I edited the query so that it does not check the class in question.

warnif count > 0 from m in JustMyCode.Methods 
where 
  m.NbParameters > 8
  && !m.FullName.Contains("DataAccessMigration")
  orderby m.NbParameters descending
select new { m, m.NbParameters }

What is interesting is that you have auto-completion and the result is live: while typing the query, you see the result in the output panel. For me, the Queries and Rules Explorer is the panel I use the most of all NDepend's features. The only negative I have found is that I expected to jump to Visual Studio when double clicking a problem method inside Visual NDepend. It does open the code, but every time in a new Visual Studio instance. That means you can have 10 Visual Studio instances running if you double clicked 10 methods to be improved.

Another tool that NDepend offers is the Dependency Graph. For me, that tool is not useful because it creates a spaghetti graph.
[Screenshot: NDepend Dependency Graph]
I think the NDepend team knows about it, because the software warns you to use the Dependency Matrix for large structures. In my opinion, almost no serious software can use that tool. It would have been more interesting to see something like an assembly dependency graph, like those layered graphs that can be read from top to bottom. This is why the Dependency Matrix is much better to use.
NDependMatrixDependency
The software does a good job with hover help to guide you in how to use the tool. For example, the image above indicates that the software is perfectly structured by layer. This can be seen because the blue squares and the green squares are divided by the diagonal line.

This tool can also show you the coupling and cohesion of your code. The closer the squares are to each other, the higher the cohesion. So a group of squares shows cohesion, and the space between these groups shows coupling: the more space there is, the higher the coupling. In this Dependency Matrix example, it is hard to read the results. We can see high cohesion at the top and low coupling: high cohesion because we have 6 squares close together, and low coupling because these squares are very far away from the rest of the code.

Another panel is the Metrics graph, where each pixel represents a code element for a given metric. For example, you can display a metric graph for cyclomatic complexity. It highlights every method whose cyclomatic complexity is high and needs to be fixed.

[Screenshot: cyclomatic complexity metric view]

This can be useful, but I prefer to use the Queries and Rules Explorer directly to have a list of methods.
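
A CQLinq query over the same metric gives such a list directly (the threshold of 20 here is an arbitrary assumption):

warnif count > 0 from m in JustMyCode.Methods
where m.CyclomaticComplexity > 20
orderby m.CyclomaticComplexity descending
select new { m, m.CyclomaticComplexity }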

NDepend is a big piece of software and it would be hard for me to cover all its functionality. You can find more information on the official website, where videos, articles and documentation are available: http://www.ndepend.com/

C# Using Statement Inside or Outside the Class Namespace

.Net works with libraries that you can reference in the project and then use in any code file. The using keyword is the one that lets a C# file use classes from an external library or from a namespace different from the one the code in the file belongs to. Of course, if you do not want to use a using directive, you can specify a class from another namespace by its whole namespace path.

For example:

var x = new OtherLibrary.OtherNameSpace.Xyz.TheClass();

Having the whole namespace in the code can become cumbersome. This is why the using directive exists. By default, the usings are at the top of the file.

using System;
namespace MyNameSpace
{

    public class MyClass
    {
        //...
    }
}

But it could also be different, with the System namespace imported directly inside the namespace.

namespace MyNameSpace
{
    using System;
    public class MyClass
    {
        //...
    }
}

But what is the difference? The difference is the priority with which .Net resolves external dependencies. Priority goes to the usings inside the namespace, and then to the ones at the top of the file. This is why having the using inside the namespace can be safer, in the sense that you can be more confident that no other library will hijack a name and break your code. For example, someone could create a class named Math inside one of your enclosing namespaces and have it used instead of System.Math. To remove this possibility, put using System; inside your namespace: then you are sure to get the real Math class (or to get a conflict at compile time if both are explicitly in scope).
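
Here is a minimal sketch of that hijacking scenario (MyCompany and its Math class are hypothetical):

namespace MyCompany
{
    // Hypothetical class that shadows System.Math for sibling code
    public class Math
    {
        public static int Abs(int value) { return 42; } // Wrong on purpose
    }
}

namespace MyCompany.Product
{
    using System; // Deepest scope: System is searched before the enclosing MyCompany namespace

    public class Calculator
    {
        public int Distance(int a, int b)
        {
            // Resolves to System.Math.Abs. With "using System;" at the top
            // of the file instead, MyCompany.Math.Abs would win.
            return Math.Abs(a - b);
        }
    }
}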

A rule of thumb is to put the usings inside your namespace; this way, you have less chance of getting behavior that you do not expect. If you want to change the default behavior of Visual Studio when creating a new class or interface, you need to go to the template folder and edit the class and interface templates. For Visual Studio 2013, this folder is inside Program Files under the Common7 folder. Here is my path, which is the default installation path of Visual Studio 2013.

C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\ItemTemplates\CSharp\Code\1033

If you are using ReSharper, you can also modify Code Editing > C# > Formatting Style > Namespace Imports. Select "Add using directive to the deepest scope" and your usings will be placed inside the namespace when you run a full cleanup.

Visual Studio Extension to Attach to IIS with a Single Key

Developing a web application requires using IIS at some point. Visual Studio lets you debug easily with IIS Express by pressing F5: it starts IIS Express and automatically attaches the Visual Studio debugger to the IIS Express process. However, if you are using full IIS, nothing is automatic. You have to go into the Debug menu, select Attach to Process, and then select w3wp.exe in the list. This is something that you can do more than a dozen times per day.

Today, I found something interesting in the Visual Studio Extension Gallery: an extension that lets you do that with a single click.

[Screenshot: attach to any process, like IIS]

Since this extension adds the action to the menu, it is possible to assign a shortcut to it. The IIS attach action is assigned to the first "attach to" item.

[Screenshot: assigning a shortcut]

I have assigned mine to the F1 key. Every time I want to debug, I just need to hit F1 and I am ready to go.

Modify the Html Output of Any of Your Pages Before Rendering

In some situations, you may want to alter the Html output that Asp.Net MVC renders. An interesting use case is when several user controls inject JavaScript or CSS directly into the Html. To keep your page loading fast, you want everything at the bottom of the Html. Of course, other methods exist, but one is to let Asp.Net MVC render everything and, just before sending the Html output back to the client, remove those JavaScript and CSS tags from the Html markup and add them at the bottom. This article describes how to modify the default Asp.Net MVC rendering pipeline to inject your own hook between the end of the Asp.Net MVC rendering engine and the sending of the output to the client. It also explains how to apply this selectively, from a single action up to all requests.

The first class to create is the one that will work with the produced content. I created a small filter called MyCustomStream that removes all script tags, replaces them with a comment, and then adds all the scripts before the closing Html tag. This way, all the scripts are set at the end of the page.

public class MyCustomStream : Stream
{
    private readonly Stream filter;


    public MyCustomStream(Stream filter)
    {
        this.filter = filter;
    }

    public override void Write(byte[] buffer, int offset, int count)
    {
        var allScripts = new StringBuilder();
        string wholeHtmlDocument = Encoding.UTF8.GetString(buffer, offset, count);
        var regex = new Regex(@"<script[^>]*>(?<script>([^<]|<[^/])*)</script>", RegexOptions.IgnoreCase | RegexOptions.Multiline);
        //Remove all Script Tag
        wholeHtmlDocument = regex.Replace(wholeHtmlDocument, m => { allScripts.Append(m.Groups["script"].Value); return "<!-- Removed Script -->"; });

        //Put all Script at the end
        if (allScripts.Length > 0)
        {
            wholeHtmlDocument = wholeHtmlDocument.Replace("</html>", "<script type='text/javascript'>" + allScripts.ToString() + "</script></html>");
        }
        buffer = Encoding.UTF8.GetBytes(wholeHtmlDocument);
        this.filter.Write(buffer, 0, buffer.Length);
    }

    public override void Flush()
    {
        this.filter.Flush();
    }

    public override long Seek(long offset, SeekOrigin origin)
    {
        return this.filter.Seek(offset, origin);
    }

    public override void SetLength(long value)
    {
        this.filter.SetLength(value);
    }

    public override int Read(byte[] buffer, int offset, int count)
    {
        return this.filter.Read(buffer, offset, count);
    }

    public override bool CanRead
    {
        get { return this.filter.CanRead; }
    }

    public override bool CanSeek
    {
        get { return this.filter.CanSeek; }
    }

    public override bool CanWrite
    {
        get { return this.filter.CanWrite; }
    }

    public override long Length
    {
        get { return this.filter.Length; }
    }

    public override long Position { get { return this.filter.Position; }
        set { this.filter.Position = value; }
    }
}

To make it work per controller or per action, you must create an attribute. When the action is executed and it has the attribute (or the controller of the action has the attribute), the filter is applied.

public class MyCustomAttribute: ActionFilterAttribute
{
    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        var response = filterContext.HttpContext.Response;

        if (response.ContentType == "text/html") {
            response.Filter = new MyCustomStream(filterContext.HttpContext.Response.Filter);
        }
        
    }
}
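
Applying it to a single controller (or a single action) then looks like this (HomeController is a hypothetical example):

[MyCustom]
public class HomeController : Controller
{
    // The attribute could also be set on a single action instead of the controller
    public ActionResult Index()
    {
        return View();
    }
}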

You can also apply it to all your controllers by registering the attribute in Global.asax.cs.

protected void Application_Start() 
{
    GlobalFilters.Filters.Add(new MyCustomAttribute());
}

But so far, something is wrong. The filter is called multiple times because the stream is output in chunks of several bytes. Since we are playing with the Html rendering, we must replace Html elements only once the whole document is available. This requires us to modify the implementation above a little. The filter class must have a buffer: we append every chunk into our buffer and, when it is complete, we apply our transformation on this buffer and write the result to the underlying filter stream.

The first step is to have a Stream to buffer into. I chose the MemoryStream because it has methods like ToArray() that simplify our life when it is time to read the whole buffer. The Flush method needs modification to accumulate all the bytes of the page before applying the transformation and writing back the modified buffer.

public class MyCustomStream : Stream
{

    private readonly Stream filter;
    private readonly MemoryStream cacheStream = new MemoryStream();

    public MyCustomStream(Stream filter)
    {
        this.filter = filter;
    }

    public override void Write(byte[] buffer, int offset, int count)
    {
        cacheStream.Write(buffer, offset, count);
    }

    public override void Flush()
    {
        if (cacheStream.Length > 0)
        {
            var allScripts = new StringBuilder();
            string wholeHtmlDocument = Encoding.UTF8.GetString(cacheStream.ToArray(), 0, (int)cacheStream.Length);
            var regex = new Regex(@"<script[^>]*>(?<script>([^<]|<[^/])*)</script>", RegexOptions.IgnoreCase | RegexOptions.Multiline);
            //Remove all Script Tag
            wholeHtmlDocument = regex.Replace(wholeHtmlDocument, m => { allScripts.Append(m.Groups["script"].Value); return "<!-- Removed Script -->"; });

            //Put all Script at the end
            if (allScripts.Length > 0)
            {
                wholeHtmlDocument = wholeHtmlDocument.Replace("</html>", "<script type='text/javascript'>" + allScripts.ToString() + "</script></html>");
            }
            var buffer = Encoding.UTF8.GetBytes(wholeHtmlDocument);
            this.filter.Write(buffer, 0, buffer.Length);
            cacheStream.SetLength(0);
        }
        this.filter.Flush();
    }

    public override long Seek(long offset, SeekOrigin origin)
    {
        return this.filter.Seek(offset, origin);
    }

    public override void SetLength(long value)
    {
        this.filter.SetLength(value);
    }

    public override int Read(byte[] buffer, int offset, int count)
    {
        return this.filter.Read(buffer, offset, count);
    }

    public override bool CanRead
    {
        get { return this.filter.CanRead; }
    }

    public override bool CanSeek
    {
        get { return this.filter.CanSeek; }
    }

    public override bool CanWrite
    {
        get { return this.filter.CanWrite; }
    }

    public override long Length
    {
        get { return this.filter.Length; }
    }

    public override long Position { get { return this.filter.Position; }
        set { this.filter.Position = value; }
    }
}

You can put whatever you want inside the if statement of the Flush method. In my case, I remove all the scripts from the file, replace them with a comment and finally put all the scripts at the end of the file, just before the closing Html tag.

[Screenshot: scripts moved to the bottom of the page]

The result can be seen if you view the source in any browser. This method is effective, but it has a cost: we are manipulating the output and thereby adding overhead to the rendering pipeline. This kind of filter must be used only in specific cases where it is the only way to accomplish a transformation. JavaScript and CSS are two cases where it is logical to do so if you are developing in an older component-oriented way where each "control/component/usercontrol" injects its own JavaScript and CSS. However, in a new system, you should not rely on this kind of replacement. It tends to develop the bad habit of throwing code anywhere without checking the consequences. It also adds a performance penalty by having to pass through all the output instead of initially putting the code in the right place, which can be done efficiently by using sections with Asp.Net MVC. Finally, this kind of replacement can cause problems because of dependencies. In this small example, nothing really changes, but in a bigger code base some JavaScript may need to come before specific Html elements or may depend on other JavaScript files. Moving scripts with an automatic process may require more code than the one shown in this article.

You can find the source code of this example on GitHub or download the zip file.