Installing MongoDB on a Windows Machine

The first step is to install MongoDB on your machine. Go to the official website; in the top menu you will see Download. Click the Download link (http://www.mongodb.org/downloads) and then select the Windows version. You can download the 64-bit or the 32-bit build. In this MongoDB tutorial we will use the 64-bit MongoDB for Windows. The file is about 132 MB and is a setup package.

MongoDbSetup

The installation is pretty straightforward. You can select the typical installation for a basic setup. During the installation, you will have to accept the permission elevation prompt.

From here, it is time to open a Command Prompt with administrative rights. Let's go to the folder where we just installed MongoDB. Since we installed the typical package with the 64-bit version, the installation should be in Program Files.

MongoDbConsole

You can then configure MongoDB. The next steps come directly from the MongoDB documentation. You must create a directory where MongoDB will store its data. This can be anywhere, so let's create a data folder; the md command lets you create a directory. The code below creates the default path that MongoDB uses.

cd "C:\Program Files\MongoDB 2.6 Standard"
md \data\db

Then, you can start MongoDB.

cd bin
mongo.exe

The first time I started MongoDB, I got a warning followed by an error saying that it was not possible to connect.

C:\Program Files\MongoDB 2.6 Standard\bin>mongo.exe
MongoDB shell version: 2.6.4
connecting to: test
2014-09-10T13:34:40.878-0700 warning: Failed to connect to 127.0.0.1:27017, reason: errno:10061 No connection could be made because the target machine actively
refused it.
2014-09-10T13:34:40.882-0700 Error: couldn’t connect to server 127.0.0.1:27017 (127.0.0.1), connection attempt failed at src/mongo/shell/mongo.js:146
exception: connect failed

Then I realized that I was launching mongo.exe instead of mongod.exe. Once the administration console launches mongod.exe, you can start a new console (no administration privileges needed for this one) and run mongo.exe. Here is what you should see.

MongoDbConsole

Do not forget to specify the --dbpath option when starting mongod.exe; otherwise it will store everything on your C:\ drive.
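For example, assuming the C:\data\db folder created earlier, starting the server explicitly looks like this:

mongod.exe --dbpath "C:\data\db"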

Basic Commands

Here are a few commands that may be useful during development.
The show dbs command lists the databases you have.

show dbs

You can create a new database or switch to a database by using the use command.

use mydb

Information is added into collections. You can add data into a collection with the command db.<collectionName>.insert().
Here is an example of three inserts into a collection named "testdata" in the dotnet database.

use dotnet
db.testdata.insert({id:1})
db.testdata.insert({id:2})
db.testdata.insert({id:3,name:"three"})

It is possible to see whether the collections of the active database really exist with the show collections command.

show collections

The last really useful command lets you see the content of a collection: the find() command.

db.testdata.find()
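You can also pass a query document to find() to filter the results; for example, to return only the document inserted with id 1:

db.testdata.find({id:1})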

Keep in mind that if you see nothing, it might be because you typed the collection name with the wrong casing; MongoDB is case sensitive.
Here is a screenshot of the output of all the commands we just discussed.
MongoBasicCommands

New Features in C# 6.0

Soon, Microsoft will release the new version of C#. Here are some of the most interesting features that version 6.0 provides.
Auto-properties
Auto-properties can now have only a getter instead of both a setter and a getter. Before C# 6.0, you had to declare a private setter. Now, you can have a getter-only property by omitting the set keyword inside the curly braces.

public int MyProperty { get; } //the property name is just an example

That said, it is also convenient that you can now give an initial value to the property. This is valid for getter-only properties as well as for properties with both a setter and a getter.

public int MyProperty { get; } = 5;

Static members
Static methods no longer need to be prefixed with the class they belong to. For example, with C# 5, you have to go through the Math class to use the absolute value method, Abs. It does not mean that you have to use this shortcut, but it can be useful to reduce repetition if you use static methods intensively.

Math.Abs(-1); 	//Old way, that is still valid
Abs(-1);		//New way
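Note that, for the short form to compile, the Math class has to be imported at the top of the file. In the released C# 6.0 syntax this is done with a using static directive (the 2014 preview builds used a slightly different form); a minimal sketch:

using static System.Math;

var x = Abs(-1); //Resolves to Math.Abs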

String.Format
String.Format has an alternative with string interpolation. This means that instead of relying on an index to define a placeholder, you can use something more meaningful. For example, if you have a variable that you want to insert into a string, instead of specifying the index 0, you can simply use the backslash with curly braces around the variable.

return String.Format("This is my name '{0}'", name); //Old way, that is still valid
return "This is my name '\{name}'";					 //New way

Methods
Methods can now be written with a lambda-style expression body. This is quite interesting for small methods.

public string ReturnStringMethod()
{
	return "A string";
}
//Can be rewritten by :
public string ReturnStringMethod() => "A string";

You can also now use the nameof operator to get a string that represents a variable name. This is very interesting because it reduces problems while refactoring. The string that is present in the exception is now dynamically linked to the variable name, which Visual Studio fully supports during refactoring.

public void YourMethod(Object yourArgument)
{
    if (yourArgument == null)
    {
        throw new ArgumentNullException("yourArgument", "Cannot be null"); //Before, we had to specify the name with a string
    }
}

public void YourMethod(Object yourArgument)
{
    if (yourArgument == null)
    {
        throw new ArgumentNullException(nameof(yourArgument), "Cannot be null"); //Now we can use the nameof operator
    }
}

Index initializer
If your object defines the square-bracket (indexer) operator, you can now use the object initializer syntax to set values by index.

var yourObject = new YourObjectWithIndex { ["var1"] = 1, ["var2"] = 2 };
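The same syntax works with any type that exposes an indexer; for example, with a Dictionary<string, int>:

var values = new Dictionary<string, int>
{
    ["var1"] = 1,
    ["var2"] = 2
};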

Null conditional operators
A new operator has been added to check whether something is null: the ?. operator. It verifies whether the expression before the operator is null. If it is, nothing after it is evaluated and the whole expression returns null. If it is not null, evaluation continues with what is on the right.

//Before
string result = null;
if(variable != null && variable.property != null)
{
   result = variable.property.test;
}
//Now
string result = variable?.property?.test;

Visual Studio Build Notification

Visual Studio comes with multiple interesting tools. One of these tools is the Build Notification application.

This tool can be located in the Common7\IDE folder. Here is an example with Visual Studio 2013:

C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\BuildNotificationApp.exe

BuilNotificationApp

Once opened, you have to configure the tool to let it know which builds to check.

Here is an example that has been filled in automatically, since I am already using Visual Studio and TFS for my project.

BuildNotificationsOptions

When the configuration is set, the system tray shows an icon with the status of the last build. For example, here is the icon when a build fails.
NotificationTrayBuildNotification

This tool is quite handy if your team is using TFS and a build server. This way, you know exactly when the build fails and can be ready to react.

Entity Framework Does Not Allow To Have Nullable Complex Type

Once in a while, I forget this weakness of Entity Framework that makes me change the design of my database: Entity Framework (at least up to version 6) cannot save an entity that has a complex type set to null.

Let's say you have a class named Order and it has a Price property. The Price property is of type Money, which is a complex type. You cannot set the Price property to null without having Entity Framework crash during the commit phase.

DbUpdateException: Null value for non-nullable member. Member: ‘PriceLimit’.

Once you realize that Entity Framework will not help you down this path, you have to change your design. There are multiple ways to handle this kind of scenario, but the one I prefer, and find quite easy, is to add a property inside the complex class that specifies whether the complex value is null or not. Of course, it would be cleaner not to have that property, but it is a viable solution if you own the complex type. It also has the advantage of being cohesive and of not altering all the classes that use the complex type.

In the complex class, we first change the value to be nullable.

private decimal? value;

public decimal? Value
{
     get { return value; }
     set { this.value = value; }
}

This first change lets the database save a null value, not for the complex type itself but for its inner value. The next step is to create a new property that specifies whether the complex type is null or not.

public bool IsNull {
    get { return !this.Value.HasValue; }
    set {
        if (value)
        {
            this.Value = null;
        }
        else
        {
            this.Value = default(decimal);
        }
    }
}

As you can see, the IsNull property does not contain a value but is calculated on the fly. We also will not store this value in the database, which means we need to tell Entity Framework to ignore this property.

public class MoneyConfiguration : ComplexTypeConfiguration<Money> //Complex type configuration for the Money class
{
    public MoneyConfiguration()
    {
        this.Ignore(d => d.IsNull);   // Required because Entity Framework cannot have two properties that load the same value
    }
}

The reason is twofold. First, we do not need to save this value because we can calculate it on the fly. Second, Entity Framework does not read this kind of property back reliably. Entity Framework can save both values (the value and the IsNull flag), but when it tries to load the data from the database, it cannot resolve the value correctly, primarily because both properties depend on each other. When setting Value, IsNull does not change, so that is fine. However, when Entity Framework sets IsNull to false, the default value is set. Since we cannot tell Entity Framework to skip loading a single property, nor specify the order in which properties are loaded, it is better not to save the IsNull flag at all.
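To illustrate how this plays out, here is a minimal usage sketch; the Order entity, the Orders set and the context variable are hypothetical, only the Money/IsNull pattern comes from the design above.

var order = new Order { Price = new Money { IsNull = true } }; //Price.Value will be stored as NULL
context.Orders.Add(order);
context.SaveChanges();

//When reading back, check the flag instead of comparing the complex type to null
var reloaded = context.Orders.Find(order.Id);
if (!reloaded.Price.IsNull)
{
    Console.WriteLine(reloaded.Price.Value);
}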

NDepend Version 5.4 Professional Edition Review

NDepend has been around for many years and is one of the best static analysis tools available. It can be integrated within Visual Studio or run independently from a GUI or a console. It is interesting because it outputs nice graphics of the dependencies between your classes, namespaces and assemblies. Also, NDepend's powerful LINQ-based query language allows you to define custom metrics about your code and keep track of them during the development lifetime. This is possible with analysis comparison.

For this review, I did a run on one project that I am working on, which has 7,100 lines of code, 702 types, 34 assemblies and 3,900 lines of comments. I can tell you that because NDepend's dashboard gives you all this information straight after your first run.

NDependDashBoard

As you can see in the dashboard, code coverage is also supported. At the time I did my first run, I did not have the unit tests configured for coverage. After specifying the coverage output file to NDepend, it was able to show that information.

CodeCoverageDashBoardNDepend

This is quite interesting if you are serious about the health of your coverage, because you get a picture of how your coverage evolves during the development lifetime of your application. Since NDepend can be used from a console, you can use this tool to produce a daily report and then compare what is happening. This is true not only for coverage but for every metric that NDepend gives you.

From the dashboard, we can see that the project has some rule violations. By default, NDepend comes with hundreds of pre-defined rules.
CodeViolatedFromNDepend

Clicking "Critical Rules Violated" opens the Queries and Rules Explorer, filtered to the rules that have passed the reasonable threshold.
10CriticalRulesViolatedFromNDepend

The next step is to match the rule violations with the code. This is done easily by clicking the problem: NDepend opens the matched methods panel where every method in violation appears. However, there can be some false positives. For example, NDepend found that I have 5 methods with too many arguments. The problem is that those 5 methods come from an assembly that uses Entity Framework's migration tool. Double-clicking such a method from NDepend gives the following result:

NDependCannotOpenDeclarationSourceFile

This can be resolved by editing the rule with CQLinq, which is NDepend's LINQ-based query language. The "too many parameters" rule looks like this:

warnif count > 0 from m in JustMyCode.Methods 
where 
  m.NbParameters > 8
  orderby m.NbParameters descending
select new { m, m.NbParameters }

To fix the problem, I edited the query so it does not check the class in question.

warnif count > 0 from m in JustMyCode.Methods 
where 
  m.NbParameters > 8
  && !m.FullName.Contains("DataAccessMigration")
  orderby m.NbParameters descending
select new { m, m.NbParameters }

What is interesting is that you have auto-completion and the result is live. That means that while typing the query you see the result in the output panel. For me, the Queries and Rules Explorer is the panel that I use the most of all NDepend's features. The only negative I have found is that I was expecting to be moved to my open Visual Studio when double-clicking a method in violation inside Visual NDepend. It does open the code, but every time in a new Visual Studio instance. That means you can have 10 Visual Studio instances running if you double-clicked 10 methods to be improved.

Another tool that NDepend offers is the Dependency Graph. For me, that tool is not useful because it creates a spaghetti graph.
NDependDependencyGraph

I think the NDepend team knows about it, because they warn you inside the software to use the Dependency Matrix for large structures. In my opinion, almost no serious software can make use of that graph. It would have been more interesting to see something like an assembly dependency graph, like those layered graphs that can be read from top to bottom. This is why the Dependency Matrix is much better to use.
NDependMatrixDependency
The software does a good job with hover-sensitive help to guide you on how to use the tool. For example, the image above indicates that the software is perfectly structured by layers. This can be deduced because the blue squares and the green squares are divided by the diagonal line.

This tool can also show you the coupling and cohesion of your code. The closer the squares are to each other, the higher the cohesion. A group of squares shows cohesion, and the space between these groups shows coupling: the emptier the space between groups, the lower the coupling between them. In the example of the Dependency Matrix, it is hard to draw a clear conclusion. We can see high cohesion at the top and low coupling: high cohesion because we have six squares near each other, and low coupling because these squares are very far away from the rest of the code.

Another panel is the Metrics view. In this panel, each method is represented visually according to a metric. For example, you can display the metric view for cyclomatic complexity; it highlights in blue every method whose cyclomatic complexity is high and needs to be fixed.

NDependCyclomaticMetrics

This can be useful, but I prefer to use the Queries and Rules Explorer directly to get a list of methods.

NDepend is a big piece of software, and it would be hard for me to cover all of its functionality. You can find more information on the official website, where videos, articles and documentation are available: http://www.ndepend.com/

C# Using Statement Inside or Outside the Class Namespace

.NET works with libraries that you can reference in the project and then use in any code file. The using keyword lets a C# file use classes from an external library or from a different namespace than the one the code in the file belongs to. Of course, if you do not want to use a using directive, you can specify the fully qualified name with the whole namespace path whenever you use a class from another namespace.

For example:

var x = new OtherLibrary.OtherNameSpace.Xyz.TheClass();

Having the whole namespace in the code can become cumbersome. This is why the using directive exists. By default, the using goes at the top of the file.

using System;
namespace MyNameSpace
{

    public class MyClass
    {
        //...
    }
}

But this can also be done differently, by putting the System using directly inside the namespace.

namespace MyNameSpace
{
    using System;
    public class MyClass
    {
        //...
    }
}

But what is the difference? The difference is the priority the compiler gives when resolving names. Using directives inside the namespace take priority over the ones at the top of the file. This is why having the using inside the namespace can be safer: you reduce the chance that another library hijacks a name and breaks your code. For example, a class named Math could be added somewhere in your namespace hierarchy and end up being used instead of System.Math. By putting using System; inside your namespace, you make sure the real Math class is used (or you get a conflict at compile time if both are explicitly in scope).
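As a hedged illustration (the Outer/Inner namespaces and the Example class are hypothetical), the difference shows up with nested namespaces: with the using at the top of the file, a Math class added to a parent namespace wins over System.Math; with the using inside the innermost namespace, System.Math wins.

namespace Outer
{
    //Hypothetical class added later, possibly by someone else
    class Math { }
}

namespace Outer.Inner
{
    using System; //With the using here, Math below resolves to System.Math

    class Example
    {
        //If "using System;" were at the top of the file instead, Math would
        //resolve to Outer.Math and Math.PI would not compile.
        double pi = Math.PI;
    }
}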

A rule of thumb is to put the using inside your namespace; this way you have less chance of getting a behavior that you do not expect. If you want to change the default behavior of Visual Studio when creating a new class or interface, you need to go to the template folder and edit the class and interface templates. For Visual Studio 2013, this folder is inside Program Files under the Common7 folder. Here is my path, which is the default installation path of Visual Studio 2013.

C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\ItemTemplates\CSharp\Code\1033

If you are using ReSharper, you can also modify Code Editing > C# > Formatting Style > Namespace Imports. If you select "Add using directive to the deepest scope", your usings will be placed inside the namespace when you run a full cleanup.

Visual Studio Extension to Attach to IIS with a Single Key

Developing web applications requires, at some point, using IIS. Visual Studio lets you debug easily with IIS Express by pressing F5: it starts IIS Express and automatically attaches the Visual Studio debugger to the IIS Express process. However, if you are using full IIS, nothing is automatic. You have to go to the Debug menu, select Attach to Process, and then select w3wp.exe in the list. This is something you can end up doing more than a dozen times per day.

Today, I found something interesting in the Visual Studio Extension Gallery: an extension that lets you do that with a single click.

AttachToAnyProcessLikeIIS

Since this extension adds its action to the menu, it is possible to assign a keyboard shortcut to it. Here, IIS is assigned to the "1 attach to" item.

AssignShortCut

I have assigned mine to the F1 key. Every time I want to debug, I just hit F1 and I am ready to go.

Modify the Html Output of any of your Page Before Rendering

In some situations, you may want to alter the HTML output that ASP.NET MVC renders. An interesting use case is when several user controls inject JavaScript or CSS directly into the HTML. To keep your page loading fast, you want to have everything at the bottom of the HTML. Of course, other methods exist, but one is to let ASP.NET MVC render everything and, just before the HTML output is sent back to the client, remove those JavaScript and CSS tags from the markup and add them at the bottom. This article describes how to modify the default ASP.NET MVC rendering pipeline to inject your own hook between the end of the ASP.NET MVC rendering engine and the moment the output is sent to the client. It also explains how to apply this alteration anywhere from a single action up to all requests.

The first class to create is the one that will play with the content produced. I created a small filter called MyCustomStream that removes all script tags, replaces them with a comment, and then adds all the scripts back just before the closing html tag. This way, all the scripts end up at the end of the page.

public class MyCustomStream : Stream
{
    private readonly Stream filter;


    public MyCustomStream(Stream filter)
    {
        this.filter = filter;
    }

    public override void Write(byte[] buffer, int offset, int count)
    {
        var allScripts = new StringBuilder();
        string wholeHtmlDocument = Encoding.UTF8.GetString(buffer, offset, count);
        var regex = new Regex(@"<script[^>]*>(?<script>([^<]|<[^/])*)</script>", RegexOptions.IgnoreCase | RegexOptions.Multiline);
        //Remove all Script Tag
        wholeHtmlDocument = regex.Replace(wholeHtmlDocument, m => { allScripts.Append(m.Groups["script"].Value); return "<!-- Removed Script -->"; });

        //Put all Script at the end
        if (allScripts.Length > 0)
        {
            wholeHtmlDocument = wholeHtmlDocument.Replace("</html>", "<script type='text/javascript'>" + allScripts.ToString() + "</script></html>");
        }
        buffer = Encoding.UTF8.GetBytes(wholeHtmlDocument);
        this.filter.Write(buffer, 0, buffer.Length);
    }

    public override void Flush()
    {
        this.filter.Flush();
    }

    public override long Seek(long offset, SeekOrigin origin)
    {
        return this.filter.Seek(offset, origin);
    }

    public override void SetLength(long value)
    {
        this.filter.SetLength(value);
    }

    public override int Read(byte[] buffer, int offset, int count)
    {
        return this.filter.Read(buffer, offset, count);
    }

    public override bool CanRead
    {
        get { return this.filter.CanRead; }
    }

    public override bool CanSeek
    {
        get { return this.filter.CanSeek; }
    }

    public override bool CanWrite
    {
        get { return this.filter.CanWrite; }
    }

    public override long Length
    {
        get { return this.filter.Length; }
    }

    public override long Position { get { return this.filter.Position; }
        set { this.filter.Position = value; }
    }
}

To make it work for a controller or an action, you must create an attribute. When the action is executed and it has the attribute (or the controller of the action has the attribute), the filter is applied.

public class MyCustomAttribute: ActionFilterAttribute
{
    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        var response = filterContext.HttpContext.Response;

        if (response.ContentType == "text/html") {
            response.Filter = new MyCustomStream(filterContext.HttpContext.Response.Filter);
        }
        
    }
}
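For instance, applying the filter to a single action is just a matter of decorating that action with the attribute (the Index action below is only a hypothetical example):

[MyCustom]
public ActionResult Index()
{
    return View();
}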

You can also apply it to all your controllers by registering the attribute in Global.asax.cs.

protected void Application_Start() 
{
    GlobalFilters.Filters.Add(new MyCustomAttribute());
}

But so far, something is wrong. The filter is called multiple times because the stream is output in chunks of several bytes. Since we are playing with the HTML rendering, we must replace HTML elements only once the whole document is available. This requires us to modify the implementation above a little bit. The filter class must have a buffer: we append every chunk into our buffer and, once the response is complete, we apply our transformation on this buffer and write the result to the underlying filter stream.

The first step is to have a stream to buffer into. I chose a MemoryStream because it has methods like ToArray() that simplify our life when it is time to read the whole buffer. Write now only accumulates the bytes of the page, and the Flush method applies the transformation and writes the modified buffer back to the underlying filter stream.

public class MyCustomStream : Stream
{

    private readonly Stream filter;
    private readonly MemoryStream cacheStream = new MemoryStream();

    public MyCustomStream(Stream filter)
    {
        this.filter = filter;
    }

    public override void Write(byte[] buffer, int offset, int count)
    {
        cacheStream.Write(buffer, 0, count);
    }

    public override void Flush()
    {
        if (cacheStream.Length > 0)
        {
            var allScripts = new StringBuilder();
            string wholeHtmlDocument = Encoding.UTF8.GetString(cacheStream.ToArray(), 0, (int)cacheStream.Length);
            var regex = new Regex(@"<script[^>]*>(?<script>([^<]|<[^/])*)</script>", RegexOptions.IgnoreCase | RegexOptions.Multiline);
            //Remove all Script Tag
            wholeHtmlDocument = regex.Replace(wholeHtmlDocument, m => { allScripts.Append(m.Groups["script"].Value); return "<!-- Removed Script -->"; }); //Capture only the script content, not the tags

            //Put all Script at the end
            if (allScripts.Length > 0)
            {
                wholeHtmlDocument = wholeHtmlDocument.Replace("</html>", "<script type='text/javascript'>" + allScripts.ToString() + "</script></html>");
            }
            var buffer = Encoding.UTF8.GetBytes(wholeHtmlDocument);
            this.filter.Write(buffer, 0, buffer.Length);
            cacheStream.SetLength(0);
        }
        this.filter.Flush();
    }

    public override long Seek(long offset, SeekOrigin origin)
    {
        return this.filter.Seek(offset, origin);
    }

    public override void SetLength(long value)
    {
        this.filter.SetLength(value);
    }

    public override int Read(byte[] buffer, int offset, int count)
    {
        return this.filter.Read(buffer, offset, count);
    }

    public override bool CanRead
    {
        get { return this.filter.CanRead; }
    }

    public override bool CanSeek
    {
        get { return this.filter.CanSeek; }
    }

    public override bool CanWrite
    {
        get { return this.filter.CanWrite; }
    }

    public override long Length
    {
        get { return this.filter.Length; }
    }

    public override long Position { get { return this.filter.Position; }
        set { this.filter.Position = value; }
    }
}

You can put whatever you want inside the if statement of the Flush method. In my case, I remove all the scripts of the page, replace them with a comment, and finally put all the scripts at the end of the page, just before the closing html tag.

MovingScriptToBottom

The result can be seen if you view the source in any browser. This method works, but it has a cost: we are playing with the output and adding overhead to the rendering pipeline. This kind of filter must be used only in specific cases where it is the only way to accomplish a transformation. JavaScript and CSS are two cases where it makes sense if you are developing in an older, control-oriented way where each "control/component/usercontrol" injects its own JavaScript and CSS. However, in a new system, you should not rely on this kind of replacement. It encourages the bad habit of throwing code everywhere without checking the consequences. It also adds a performance penalty by having to pass through all the output instead of putting the code in the right place initially; this can be done efficiently by using sections with ASP.NET MVC. Finally, this kind of replacement can cause problems because of dependencies. In this small example, nothing really changes, but in a bigger code base some JavaScript may need to be before specific HTML elements or have dependencies on other JavaScript files. Moving scripts with an automatic process may require more code than what is shown in this article.

You can find the source code of this example in GitHub or download the zip file.

JavaScript: The Good Things to Know

Null and array

Both null and arrays are in fact reported as type object. You can verify this by using typeof.

console.log(typeof(null));
console.log(typeof([1,2,3]));

Variables Name

Variable names can contain otherwise-illegal characters if they are quoted when defined. For example, you can use alphanumeric characters with underscores, but you cannot directly use a dash. The variable name this-is-illegal is not legal, but if you define your object with the property name "this-is-illegal" in quotes, it works.

var yourObject = {
  "this-is-illegal":"but it works because of the quote",
  this_is_legal : "and does not require quote"
};

Even if illegal characters can be worked around with the quote approach, it is not recommended to write your code this way. Retrieving the value requires the array notation instead of the dot notation.

var v1 = yourObject["this-is-illegal"];
//instead of 
var v1 = yourObject.this_is_legal;

arguments variable

Every function can access the arguments variable. It is not a real JavaScript array (it lacks the Array methods), but every element can be accessed with square brackets.

function add(a,b)
{
   return a + b;
   // or
   return arguments[0] + arguments[1];
}

But it goes far beyond that. You can define your function without any parameters and still call it with multiple arguments. The arguments variable holds all arguments passed to the function, not only those officially declared.

function add()
{
    var index;
    var sum = 0;
    for (index = 0; index < arguments.length; index += 1)
    {
        sum += arguments[index];
    }
    return sum;
}
var result = add(1,2,3,4,5); // 15

Default Initialization

If you are not sure if a variable has already been initialized you can use the || operator to check and assign.

var variableWithValueForSure = anotherVariable.variable1 || "defaultValue";

What it does is evaluate the first expression: if it is undefined (a falsy value), the next expression is evaluated and its value is used. If it is not undefined, its own value is returned directly. This is why we often see, in JavaScript, a variable doing this trick on itself to make sure it is defined. The next example ensures that the variable "me" is defined and not undefined.

var me = me || {};

Object and Dynamic Variables

It is possible to add variables to an object at any time in JavaScript. You just need to assign a value to have the variable defined on your object. This is also true for functions.

var obj = {
   variable1 : 1
};
obj.newVariable = 2; //newVariable is added to the object obj

References are used, not copies

Every time you assign an existing object to another variable, the reference is passed, not a copy. This is true for function parameters, but also for variables inside a function.

var x2 = {};
var x1 = x2; //x1 and x2 now reference the same object
x2.v1 = 'value1';
//x1.v1 is also 'value1'

Prototype

Prototype is the concept in JavaScript that allows you to share members across different objects of the same type. When calling a function or accessing a variable on an object, if the object does not have it, JavaScript checks whether it can be found on the object's prototype. If it is not there, it goes to the prototype of the prototype, and so on, until it reaches Object.prototype. If nothing is found, it returns undefined.
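A small sketch of that lookup (the animal and dog objects are hypothetical, built with Object.create):

var animal = { legs: 4 };
var dog = Object.create(animal); //animal becomes dog's prototype
dog.name = 'Rex';

console.log(dog.name);  //'Rex' -> found directly on dog
console.log(dog.legs);  //4 -> not on dog, found on its prototype (animal)
console.log(dog.color); //undefined -> not found anywhere in the chain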

hasOwnProperty

If you loop over your object's properties (variables and functions), you will stumble onto prototype properties which you may not want to see. If you want only the members that you defined on the object itself, and not the ones from the prototype, you must use the hasOwnProperty('propertyToCheck') function.

var propertyName;
for (propertyName in yourObject)
{
    if (yourObject.hasOwnProperty(propertyName))
    {
        //Do what you want with the property that is defined on yourObject and not on its prototype
    }
}

We used the for in statement to loop through all the properties. This gives us the properties in a non-specific order. If you want the properties in the order defined in the code, you must use a regular for loop with an integer index over an array of property names.

Delete keyword

Using delete removes a property from an object. For example, if you define a property named "prop1" and you execute delete on it, reading it afterwards returns undefined, except if the prototype also has a "prop1" member. Because of the nature of the prototype chain, deleting an own property can reveal a property of the same name coming from the prototype.
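A small sketch of that behaviour (the base and child objects are hypothetical):

var base = { prop1: 'from the prototype' };
var child = Object.create(base); //base becomes child's prototype
child.prop1 = 'own value';

console.log(child.prop1); //'own value'
delete child.prop1;       //removes only the own property
console.log(child.prop1); //'from the prototype' -> the prototype's property is revealed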

Adding Method to Prototype

You can add methods to an object with the prototype. You just need to use the prototype keyword after the type you want to enhance. The example below add a trim method to any string.

String.prototype['trim'] = function () 
                       {     
                            return this.replace(/^\s+|\s+$/g, ''); 
                       };
This adds the trim method to all String instances.

Variables Declaration

In JavaScript, it is better to define variables at the beginning of the function instead of following the usual best practice of declaring a variable nearest to its use. The reason is that JavaScript scoping works differently from other languages: JavaScript has function scope, not block scope, and an inner function can access the variables of its outer function.

var Program = function()
{
  var var1 = 1;
  var Program2 = function()
  {
    var1 = var1 + 1; // Program2 can access var1 from the outer function, which is not the case in block-scoped languages.
  }
  Program2(); // Call the Program2 function
  //The value of var1 is now 2
}

For example, this does not work in C#:

class Program
{
        private int a;

        private class Program2
        {
            public Program2()
            {
                a = a + 1; // Do not compile
            }
        }
}

Apply Keyword

You can call any function by following its name with the apply method. It takes two parameters. The first one is the value that will be bound to this inside the function you call; it can be null if you do not need to pass a this value. The second parameter is an array whose elements become the function's parameters.

function add(a,b)
{
   return a + b;
}
//Can be called this way:
var result1 = add(1,2); //3
//or
var result2 = add.apply(null,[1,2]); //3

Exceptions

You can throw exceptions and catch them. The throw statement can throw any object you want; you can use anything.

throw {name: 'Error Name', message : 'message you want'};

Thrown objects are caught by the catch block. If you want to handle multiple kinds of exceptions, you must use an if statement on a property of your choice, for example the name.

try
{
    throw {name: 'StackOverFlow', message : 'message you want'};
}
catch(e)
{
    if(e.name === 'StackOverFlow')
    {
        console.log('***' + e.name + ': ' + e.message + '***');
    }
    console.log(e.name + ': ' + e.message);
}

Chaining Calls

It is always good to return the this keyword if your method returns nothing. This allows chained calls.

var Human= function() {
  this.name = 'Not Defined';
  this.gender = 'm';
};

Human.prototype.setName = function(name) {
  this.name = name;
  return this;
};

Human.prototype.setGender = function(gender) {
  this.gender = gender;
  return this;
};

This allows us to chain calls, because every function returns the this reference.

var patrick = new Human()
                  .setName('patrick')
                  .setGender('male');

Javascript Encapsulation with Closure

JavaScript provides encapsulation through something named a closure. Since everything in JavaScript is done with functions, this is too. The principle of a closure is to encapsulate variables and methods into a cohesive function. This allows us to separate what is private to the object from what is public. It is very similar to an object-oriented class: private methods and variables are not returned by the closure, while public methods and variables are. Let's start with an example to demystify the concept of closure.

var referenceToTheObject = (function ()
{
    var privateVariable = 0;
    return {
         publicMethod1: function () { }
        ,publicMethod2: function () { }
    };
}());

This is interesting because, in fact, we are invoking an anonymous function (see the closing line, where the function is immediately invoked). This function returns an object with two public functions. As you know by now, these functions can access anything in their outer scope. This means that both public methods can call each other, but also the private variable. The private variable is not reachable outside the anonymous function because it is not returned by it.

Developing with IIS Express to Full IIS

Developing with IIS Express has its limitations. The further you get into development, the more likely you are to have several websites, web APIs, WCF services and other systems that must run together. You can speed up your compilation process by only compiling and publishing the system that has changed. Visual Studio is bright enough not to recompile every library, but it also has pitfalls with IIS Express, which can suddenly end up with some of its references out of sync. The result is obvious: the start-up project will work, but some of the others will not. For example, if you have a web project and a web API with the web project as the start-up project, you may see the web project working while the web API produces all sorts of errors.

A solution is to use IIS instead of IIS Express. This way, on every compilation only the libraries that have changed are compiled, and once compilation is done, all your systems stay in a working state (provided they were in a working state to begin with). Switching is pretty easy: open IIS, create one website for your web project, and so on. Define different ports for all your websites and that's it. Not so fast! You can get an error 500:

This configuration section cannot be used at this path. This happens when the section is locked at a parent level. Locking is either by default.

Error500ConfigurationSectionFail

This error occurs if you have not added some of the IIS features. To add them, open the Windows Features dialog by typing "Turn Windows features on or off" in the Start menu. This opens a window with the Windows features. Select "Internet Information Services", then "World Wide Web Services" and all of its sub-features.

IISFeaturesEnabled

From there, make sure that your web application in IIS points to the web project folder and the other one to the web API project folder. Not the DLL folder, but the folder where the project is located. You just need to compile and you are up and running. If you need to debug with breakpoints, go to Visual Studio, then Debug > Attach to Process. Check "Show processes from all users" and select the w3wp.exe process. Click Attach and you are ready to debug.