The Back Side of the DevOps Trend

At this moment, if you do not agree that DevOps is the best thing in the world, you are out of the "gang". Indeed, Facebook does it, Google does it, Amazon does it, and Microsoft is in transition to do it. Everybody does it, so it must be the best thing in the world? Well, once you account for the fact that DevOps has always existed in small companies, it is nothing more than collapsing your workforce's expertise into one big bucket. The concept of DevOps is that a single individual can contribute almost end-to-end on a product. If you want, this is the opposite of what Henry Ford used to be efficient; the opposite of divide and conquer. So, instead of having one analyst, one developer, one tester and one IT person, you use a single person who knows everything. This is something you have to do in a small company because you do not have the money to spread all that expertise across different people. With DevOps, the same person sets up the server, talks with customers, does the planning, codes the product and tests it. The reason huge corporations are going down that path is mostly that it reduces the time to deploy a feature. The justification is true to some extent: you have less communication overhead and less waiting for a particular set of skills, and the team has a better overall picture of the system. So far, everything is "better" on paper. Who can argue against everyone being easily replaceable, or against developers writing better code because they also know the testing and deployment perspectives? One last thing: this also merges the support team with the development team, so there is no longer one team building the new stuff and another repairing it.

However, here are some problems. If you are "DevOping" only in one part of your organization, it is not really more efficient. For example, if you have three levels of managers who must agree on every decision, you have DevOps for your "coding operations", not for the overall "development operations" of your product. In a small company, you talk directly to the boss and things move fast; sometimes the boss is also a developer! It works. However, if you need to include in the loop your lead developer, your managers level 1, 2, 3, 4… then your product manager, who must go into meetings with other managers, you lose a lot of the benefits, like deploying new features fast, developing innovative features, and so on. In fact, from my experience, people are waiting, and while they are waiting they are trying to understand the fields of knowledge they do not understand. When they are not waiting, they are doing stuff, but most of that time goes into researching the knowledge they do not have. In the end, the proportion of time spent developing the feature itself is not high at all. Also, since your development team is handling all the support, the development that was supposed to be more efficient is cut by the time spent understanding every bug, making the fix, testing again, and so on. In a single week, the development time shrinks rapidly.

DevOps has a bigger caveat if you have a big software product: the code. For example, a product built by 10 developers, or 100, or 500, has more divergent coding standards and also a lot more code for the same period of development. This means that development tasks alone require a huge investment of time just to understand the current code base. And that is without mentioning that so many technologies are now involved that reading the code base forces you to know more than just 2-3 languages; it can quickly go above 6, 7, 8… At that point, we are not even talking about front-end versus back-end code. DevOps merges front-end and back-end development, but also, like I said: analysis tasks and skills; design tasks, tools, standards and meetings; coding with different technologies, standards, best practices, debugging and software; testing with unit test frameworks that differ from technology to technology, and across unit, integration and functional levels; deploying locally, on a server or in the cloud; infrastructure with clusters, load balancing, network VPNs, DNS; and so on. After a while, an expert in one field becomes average in every field.

It is impossible for a single individual to be an expert in CSS, JavaScript, TypeScript, Angular, ASP, SQL, ORMs, REST services, security, cloud storage, deployment, unit testing, etc. A single individual can certainly be an expert in multiple technologies and systems, but not in all of them. This is why Henry Ford's model was good for producing things that do not change: every phase was mastered by a single entity. In software, everything changes, so a purely segregated model does not apply; but at the other end of the spectrum, the "know it all" model does not apply either. This is even more true with the new trend of shipping versions so fast. Today your code base works with version 1 of a framework; in a month, version 2 will be out… multiply that by the tremendous number of frameworks, libraries and technologies required, and something changes almost every week. Keeping track of the right way to do things becomes harder and harder. Of course, you can learn on every task you must do, but you will still only know the basics without being an expert. The cherry on top is that everything now ships so fast that it contains bugs, and if you stumble into one you are often told "it is open source, fix it". Indeed.

So, I am all for having a wide range of knowledge. I have never been someone who divides front-end development from back-end development. In my mind, you must know how it works from the request that asks for the web page down to how the data is fetched from the database (or other source). I am also all for having developers build unit tests and even integration tests. In fact, I have projects that I do end-to-end. However, from my professional experience, once you go beyond that point with a huge code base, the performance of the team is no better with a DevOps approach than with experts in every part of the process. In fact, it is worse, because we are all average and we lose the expertise. While your expert programmers are doing functional tests, figuring out how to deploy on the IIS farm, or sitting in meetings with managers to figure out what to do, they are not at full speed at what they are good at. Also, some developers have no interest in doing analysis, gathering requirements, managing tasks or working with third-party partners; they want to do what they do best: develop. The same goes for testers or any other expert on the team.

This trend is strong right now and will be around for a while before migrating to something else. Management likes DevOps because it gives them a pool of individuals who can be easily switched around: a full team of developers for a few days and a full team of testers tomorrow, or a team on one product today that can be moved to a different division later. I am not against that movement, but contrary to a lot of people, I simply do not think it is the way to go in the long term. Keeping developers' expertise sharp without exhausting them with all those different tasks and technologies to keep up with is going to be challenging.

To conclude, I am curious why this mentality does not reach the management zone. DevOps could also be applied there: we should only have one layer of "ManOps", Management Operations. All the same benefits would follow: faster decisions, fewer levels of hierarchy to reach the person who can do something tangible, no middle man or distortion of information, faster delivery of features and of innovative ideas to be incorporated into the product…

Enterprise Asp.Net MVC Part 8: Asp.Net cache before repository

At some point in the life of your software, performance can become an issue. If you have already optimized your queries and your Entity Framework configuration, then the next step is to think about keeping some data in memory or in an external cache. The advantage is that the data is already available when requested.

First of all, we need some infrastructure classes and interfaces, because we want something flexible that is not tightly bound to Asp.Net, since this will be used in the Data Access Layer.

public interface ICacheConfiguration
{
	bool IsActivate();
}

The first interface configures the cache. So far, to keep it simple, only one method is defined: whether the cache is activated. A caching system must always have a way to be deactivated. The reason is that when your data is not what you expect, you can turn off the cache and rely on the main persistence. If the problem disappears, the problem is the cache; otherwise, the problem is in the persistence or in the logic that uses the data.
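To make this concrete, here is a minimal sketch of an implementation backed by appSettings. The key name "CacheIsActivated" and the class name are assumptions for this sketch, not part of the original series; the one-line interface is repeated so the sample compiles on its own.

```csharp
using System.Configuration;

public interface ICacheConfiguration { bool IsActivate(); } // repeated from above

// Hypothetical ICacheConfiguration backed by web.config appSettings.
public class AppSettingsCacheConfiguration : ICacheConfiguration
{
    public bool IsActivate()
    {
        // Defaults to disabled when the setting is missing or invalid,
        // so a misconfiguration falls back to the main persistence.
        bool isActivated;
        return bool.TryParse(ConfigurationManager.AppSettings["CacheIsActivated"], out isActivated)
            && isActivated;
    }
}
```

Flipping that single value in web.config is then enough to route all reads back to the database without redeploying.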

public interface ICacheProvider
{
	void Set<T>(T objectToCache) where T : ICachableModel;

	void Set(string key, Object objectToCache);

	T Get<T>(T key) where T : ICachableModel;

	object Get(string key);

	object Delete(string key);

	T Delete<T>(T objectTodelete) where T : ICachableModel;

	bool IsInCache(string key);

	bool IsInCache<T>(T objectToVerify) where T : ICachableModel;
}

This second interface puts an abstraction in front of the technology used. You can have a memory cache, an external caching system or an Azure cache behind this interface.

public interface ICachableModel
{
    string GetCacheKey();
}

Most of the methods are defined twice: one overload uses a string key, the other an ICachableModel. This interface lets the model class own the logic to build its unique key.
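For illustration, a hypothetical model implementing ICachableModel could build its key from the type name and the identifier. The Customer class and the key format below are my assumptions, not from the original series; the one-line interface is repeated so the sample compiles on its own.

```csharp
public interface ICachableModel { string GetCacheKey(); } // repeated from above

// Hypothetical model; any scheme works as long as keys are unique.
public class Customer : ICachableModel
{
    public int Id { get; set; }
    public string Name { get; set; }

    public string GetCacheKey()
    {
        // e.g. "Customer-42" for the customer with Id 42
        return this.GetType().Name + "-" + this.Id;
    }
}
```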

public class MemoryCache : ICacheProvider
{
	private readonly ObjectCache cache;
	private readonly CacheItemPolicy defaultPolicy;
	private readonly ICacheConfiguration configuration;

	public MemoryCache(ICacheConfiguration configuration)
	{
		this.configuration = configuration;
		this.cache = new System.Runtime.Caching.MemoryCache(Constants.Configurations.CacheNameConfiguration);
		this.defaultPolicy = new CacheItemPolicy();
	}

	public void Set<T>(T objectToCache) where T : ICachableModel
	{
		if (configuration.IsActivate())
		{
			cache.Set(objectToCache.GetCacheKey(), objectToCache, defaultPolicy);
		}
	}

	public void Set(string key, object objectToCache)
	{
		if (configuration.IsActivate())
		{
			cache.Set(key, objectToCache, defaultPolicy);
		}
	}

	public T Get<T>(T objectToCache) where T : ICachableModel
	{
		if (configuration.IsActivate())
		{
			return (T) cache.Get(objectToCache.GetCacheKey());
		}
		else
		{
			return default(T);
		}
	}

	public object Get(string key)
	{
		if (configuration.IsActivate())
		{
			return cache.Get(key);
		}
		else
		{
			return null;
		}
	}

	public object Delete(string key)
	{
		if (configuration.IsActivate())
		{
			return cache.Remove(key);
		}
		else
		{
			return null;
		}
	}

	public T Delete<T>(T objectTodelete) where T : ICachableModel
	{
		if (configuration.IsActivate())
		{
			return (T) cache.Remove(objectTodelete.GetCacheKey());
		}
		else
		{
			return default(T);
		}
	}

	public bool IsInCache(string key)
	{
		if (configuration.IsActivate())
		{
			return cache.Contains(key);
		}
		else
		{
			return false;
		}
	}

	public bool IsInCache<T>(T objectToVerify) where T : ICachableModel
	{
		if (configuration.IsActivate())
		{
			return cache.Contains(objectToVerify.GetCacheKey());
		}
		else
		{
			return false;
		}
	}
}

This implementation uses System.Runtime.Caching. As you can see, it also uses the configuration to disable the cache. This way of proceeding does not affect any of the calling code. In fact, every method returns the default value when the cache is disabled or does not find the value, which tells the caller to continue with the default persistence strategy.

The caller should be in the service classes if you have followed the previous posts about Enterprise Asp.Net MVC applications.

var cacheResult = (YouEntity)this.cache.Get("YouUniqueKey123");
if (cacheResult == null)
{
	var repositoryResult = yourRepository.GetYourEntity();
	this.cache.Set("YouUniqueKey123", repositoryResult);
	return repositoryResult;
}
else
{
	return cacheResult;
}

This creates a simple architecture for caching. It has the flexibility to use whatever concrete cache you want and keeps the classes highly cohesive. The configuration could carry additional information, such as how long an entity must stay in cache, or details about an external cache, like which IP and port to use for Memcached, for example.
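The check-then-load pattern in the caller above can be factored into a small extension method so that services do not repeat it everywhere. This helper is my addition, not part of the original series; it assumes the ICacheProvider interface defined earlier.

```csharp
using System;

public static class CacheProviderExtensions
{
    // Wraps the "check the cache, otherwise load and store" pattern.
    public static T GetOrLoad<T>(this ICacheProvider cache, string key, Func<T> loader) where T : class
    {
        var cached = cache.Get(key) as T;
        if (cached != null)
        {
            return cached; // cache hit: skip the repository entirely
        }

        var loaded = loader(); // cache miss: use the main persistence
        cache.Set(key, loaded);
        return loaded;
    }
}
```

The caller example then collapses to a single line: `return this.cache.GetOrLoad("YouUniqueKey123", () => yourRepository.GetYourEntity());`.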

Series Articles

Article #1: Asp.Net MVC Enterprise Quality Web Application
Article #2: Asp.Net MVC Enterprise Quality Web Application Model
Article #3: Asp.Net MVC Enterprise Quality Web Application Controller
Article #4: Asp.Net MVC Enterprise Quality Web Repository Layer
Article #5: Asp.Net MVC Enterprise Quality Web with Entity Framework
Article #6: Asp.Net MVC Enterprise Quality Layers
Article #7: Asp.Net MVC Enterprise Quality Web Security

Entity Framework and the Unit of Work pattern

Abstract

This article is a summary of how to use the unit of work pattern with Entity Framework. First of all, Entity Framework is a unit of work by itself. You can do multiple inserts, updates and deletes, and nothing is committed to SQL Server until SaveChanges is called. The problem is that you may want to have multiple repositories. This means that if you want them to be under the same transaction, they have to share the same DbContext. Here comes the unit of work, a pattern that shares the DbContext. The reference to the DbContext is shared across repositories, which is interesting because, if we want to be domain driven, we can share one DbContext between the repositories of the same domain. It is also interesting for unit testing: the unit of work has an interface which can be easily mocked.

I have seen an article on the Asp.Net website about Entity Framework and the unit of work pattern, but I believe it is wrong. I prefer the one by Julie Lerman in her Pluralsight video. The main reason is that the Asp.Net version puts the repository inside the unit of work along with the DbContext, while Julie Lerman's only contains the DbContext, and the unit of work is passed to every repository of the domain.

Here is a representation of the layers we would like with the unit of work.

Layers

As you can see, the controller contacts the service layer, where all database queries, accesses to caching services and web service calls are executed. For the database part, we contact the data access layer accessor, which is an abstraction over the unit of work and the repositories. It spares every developer who uses repositories from having to create the unit of work and pass it through constructors. The accessor holds a reference to the repositories and to the unit of work.

This article explains how to create a layered approach with a controller, a service layer, and a data access layer accessor with repositories and a unit of work, using a simple set of entities. I have already written an article about the repository pattern and Entity Framework. That was another, simpler way to design the repository: a facade passed the DbContext to all repositories, which produced the same behavior as the unit of work pattern. However, the unit of work is more elaborate; it makes unit testing easier and lets you reuse a repository in several DbContexts if required. Being able to create several DbContexts and share them by domain (for domain-driven design) is important for big software. It increases the performance of the database context by limiting the number of entities it has to handle. So, the previous way of handling repositories is perfect if you have under 50 entities. This is a rule of thumb and depends on many factors. If you have a lot of entities and can draw specific domains, the unit of work approach in this post is preferable. As you will see, a lot more classes are needed, and that is not a small detail to consider before going this way.

Creating the entities, the database context and the tables

First of all, let's create the entities and a simple context that we will call directly from the controller. This should never be done in an enterprise application, but it will let us migrate the code from a simple, basic version to a more heavily layered application.

public class Animal
{
    public int Id { get; set; }
    public string Name { get; set; }

    public virtual ICollection<Animal> Enemies { get; set; }
    public virtual ICollection<Animal> EnemyOf { get; set; }
}

public class Cat : Animal
{
    public int NumberOfMustache { get; set; }
    public int RemainingLife { get; set; }
}

public class Dog : Animal
{
    public string Type { get; set; }
}

We have two classes, one for Cat and one for Dog. Both inherit from the Animal class. These are very simple classes because we want to focus on the unit of work, not on complex classes. The next step is to create the database context.

The first step is to get Entity Framework. This can be done with Nuget, either through the interface ("Manage Nuget Packages") or with a command line:

PM> install-package entityframework

Then, we need to inherit from DbContext and set up web.config with a connection string for the database. The web.config looks like this:

<configuration>
  <configSections>
    <section name="entityFramework" type="System.Data.Entity.Internal.ConfigFile.EntityFrameworkSection, EntityFramework, Version=6.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" />
  </configSections>
  <connectionStrings>
    <add name="EntityConnectionString" connectionString="Data Source=PATRICK-I7\SQLEXPRESS;Initial Catalog=UnitOfWork;Integrated Security=SSPI;" 
         providerName="System.Data.SqlClient" />
  </connectionStrings>
</configuration>

The first thing to note is the new configSection for Entity Framework, which was added automatically. The line that must be added manually is the connection string.

The last step is to configure the entities. Since we are simplifying the whole application for the purpose of the unit of work, the model classes are used directly as entities. In an enterprise application, some may want an additional layer so that entity classes are not shared with the model.

public class AllDomainContext : DbContext
{
    public AllDomainContext():base("EntityConnectionString")
    {
    }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        base.OnModelCreating(modelBuilder);

        //Table per type configuration
        modelBuilder.Entity<Animal>().ToTable("Animals");
        modelBuilder.Entity<Dog>().ToTable("Dogs");
        modelBuilder.Entity<Cat>().ToTable("Cats");

        //Primary keys configuration
        modelBuilder.Entity<Animal>().HasKey(k => k.Id);

        modelBuilder.Entity<Animal>()
            .HasMany(entity => entity.Enemies)
            .WithMany(d => d.EnemyOf)
            .Map(d => d.ToTable("Animals_Enemies_Association").MapLeftKey("AnimalId").MapRightKey("EnemyId"));
            
    }
}

The configuration has something special for the Enemies list because I did not want to handle the association table myself. Entity Framework can handle it for us when we configure a many-to-many relationship on the Animal class. It requires a table name for the many-to-many table along with its foreign keys.

Setup the controllers, service layer and data access layer

Before even having the service layer, let's use the context directly in the controller and watch the database get created. Then we will change the code in every layer, leaving the unit of work for later. We can use scaffolding to leverage Visual Studio's code generation. First step: right-click the Controllers folder and select Add, then Controller.
MvcControllerWithEntityFramework

The second step is to select the model class; pick Animal and select the DbContext class. If you do not see your DbContext class (DatabaseContext), close the window and compile your application; the wizard bases its choices on the compiled assemblies of the project. Once generated, you can run the application: IIS Express starts by default, and you just need to browse to http://localhost:15635/Animal for the DbContext to start creating the database. If you open SQL Server Management Studio, the unit of work database should have 3 tables.

TabletsTPTForAnimal

Transforming to have service layers

At this stage, the architecture of the web application is not enterprise grade. The controller has a strong reference to the database context. The next step is to move everything related to the database into a service layer that abstracts Entity Framework. This allows us to test the controller easily without having to care about the database.

This is the current controller code at this moment.

public class AnimalController : Controller
{
    private DatabaseContext db = new DatabaseContext();

    public ActionResult Index()
    {
        return View(db.Animals.ToList());
    }

    public ActionResult Details(int? id)
    {
        if (id == null)
        {
            return new HttpStatusCodeResult(HttpStatusCode.BadRequest);
        }
        Animal animal = db.Animals.Find(id);
        if (animal == null)
        {
            return HttpNotFound();
        }
        return View(animal);
    }

    public ActionResult Create()
    {
        return View();
    }


    [HttpPost]
    [ValidateAntiForgeryToken]
    public ActionResult Create([Bind(Include="Id,Name")] Animal animal)
    {
        if (ModelState.IsValid)
        {
            db.Animals.Add(animal);
            db.SaveChanges();
            return RedirectToAction("Index");
        }

        return View(animal);
    }

    public ActionResult Edit(int? id)
    {
        if (id == null)
        {
            return new HttpStatusCodeResult(HttpStatusCode.BadRequest);
        }
        Animal animal = db.Animals.Find(id);
        if (animal == null)
        {
            return HttpNotFound();
        }
        return View(animal);
    }

    [HttpPost]
    [ValidateAntiForgeryToken]
    public ActionResult Edit([Bind(Include="Id,Name")] Animal animal)
    {
        if (ModelState.IsValid)
        {
            db.Entry(animal).State = EntityState.Modified;
            db.SaveChanges();
            return RedirectToAction("Index");
        }
        return View(animal);
    }

    public ActionResult Delete(int? id)
    {
        if (id == null)
        {
            return new HttpStatusCodeResult(HttpStatusCode.BadRequest);
        }
        Animal animal = db.Animals.Find(id);
        if (animal == null)
        {
            return HttpNotFound();
        }
        return View(animal);
    }

    [HttpPost, ActionName("Delete")]
    [ValidateAntiForgeryToken]
    public ActionResult DeleteConfirmed(int id)
    {
        Animal animal = db.Animals.Find(id);
        db.Animals.Remove(animal);
        db.SaveChanges();
        return RedirectToAction("Index");
    }

    protected override void Dispose(bool disposing)
    {
        if (disposing)
        {
            db.Dispose();
        }
        base.Dispose(disposing);
    }
}

If you want to quickly test the database, just add this code in the Index action.

    public ActionResult Index()
    {
        var animal1 = new Animal { Name = "Boss" };
        var cat1 = new Cat { Name = "Mi" };
        var cat2 = new Cat { Name = "Do" };
        animal1.Enemies = new List<Animal> { cat1,cat2};
        db.Animals.Add(animal1);
        db.Animals.Add(cat1);
        db.Animals.Add(cat2);
        db.SaveChanges();
        return View(db.Animals.AsNoTracking().ToList());
    }

tablesDataForAssociation

The first step is to create a repository class for Animal inside the DataAccessLayer folder. Normally, I create a folder called Repository to hold all repositories.

public class AnimalRepository : IAnimalRepository
{
    private DatabaseContext db = new DatabaseContext();

    public Models.Animal Find(int? id)
    {
        return db.Animals.Find(id);
    }

    public void Insert(Models.Animal animal)
    {
        db.Animals.Add(animal);
        db.SaveChanges();
    }

    public void Update(Models.Animal animal)
    {
        db.Entry(animal).State = EntityState.Modified;
        db.SaveChanges();
    }

    public void Delete(Models.Animal animal)
    {
        db.Animals.Remove(animal);
        db.SaveChanges();
    }

    public void Dispose()
    {
        db.Dispose();
    }

    public IList<Animal> GetAll()
    {
        return db.Animals.AsNoTracking().ToList();
    }
}

This class also has an interface that exposes its public methods.

The second step is to create a service layer. Normally, we would create a new project, but to keep everything simple, let's just add a new folder (namespace). Then, we move the DatabaseContext usage from the controller into the service.

The animal service class looks like the following code.

public class AnimalService: IAnimalService
{
    private IAnimalRepository animalRepository;

    public AnimalService(IAnimalRepository animalRepository)
    {
        this.animalRepository = animalRepository;
    }

    public Models.Animal Find(int? id)
    {
        return this.animalRepository.Find(id);
    }

    public void Insert(Models.Animal animal)
    {
        this.animalRepository.Insert(animal);
    }

    public void Update(Models.Animal animal)
    {
        this.animalRepository.Update(animal);
    }

    public void Delete(Models.Animal animal)
    {
        this.animalRepository.Delete(animal);
    }
    public IList<Animal> GetAll()
    {
        return this.animalRepository.GetAll();
    }
}

It is all the code that was in the controller. Later, some improvements should be made. One of them is to move the SaveChanges call, because saving every time we add, modify or delete an entity is not desirable; it causes performance problems when several entities must be posted to the database together. However, let's focus on the transformation first; these details will be addressed later. The role of the service layer is to assemble every repository. In this situation we have only one repository; in more complex, enterprise-grade situations, a service has several repositories and caching classes.

The next class that requires changes is the animal controller. It now has a constructor that needs an IAnimalService.

public class AnimalController : Controller
{
    private IAnimalService _service;

    public AnimalController()
    {
        _service = new AnimalService(new AnimalRepository()); 
    }

    public AnimalController(IAnimalService animalService)
    {
        _service = animalService;
    }


    public ActionResult Index()
    {
        return View(_service.GetAll());
    }

    public ActionResult Details(int? id)
    {
        if (id == null)
        {
            return new HttpStatusCodeResult(HttpStatusCode.BadRequest);
        }
        Animal animal = _service.Find(id);
        if (animal == null)
        {
            return HttpNotFound();
        }
        return View(animal);
    }

    public ActionResult Create()
    {
        return View();
    }


    [HttpPost]
    [ValidateAntiForgeryToken]
    public ActionResult Create([Bind(Include="Id,Name")] Animal animal)
    {
        if (ModelState.IsValid)
        {
            _service.Insert(animal);
            return RedirectToAction("Index");
        }

        return View(animal);
    }

    public ActionResult Edit(int? id)
    {
        if (id == null)
        {
            return new HttpStatusCodeResult(HttpStatusCode.BadRequest);
        }
        Animal animal = _service.Find(id);
        if (animal == null)
        {
            return HttpNotFound();
        }
        return View(animal);
    }

    [HttpPost]
    [ValidateAntiForgeryToken]
    public ActionResult Edit([Bind(Include="Id,Name")] Animal animal)
    {
        if (ModelState.IsValid)
        {
            _service.Update(animal);
            return RedirectToAction("Index");
        }
        return View(animal);
    }

    public ActionResult Delete(int? id)
    {
        if (id == null)
        {
            return new HttpStatusCodeResult(HttpStatusCode.BadRequest);
        }
        Animal animal = _service.Find(id);
        if (animal == null)
        {
            return HttpNotFound();
        }
        return View(animal);
    }

    [HttpPost, ActionName("Delete")]
    [ValidateAntiForgeryToken]
    public ActionResult DeleteConfirmed(int id)
    {
        Animal animal = _service.Find(id);
        _service.Delete(animal);
        return RedirectToAction("Index");
    }
}

At this stage, the controller is separated from the database by the service and the repository. Still, it is better not to have a strong reference to AnimalService inside the controller. This is why we will extract an interface from AnimalService and inject the concrete class by inversion of control. This gives us an entry point when testing, letting us inject a fake AnimalService that will not go to the database. You can use the refactoring tool to extract the interface easily.

ExtractInterface

public interface IAnimalService
{
    void Delete(Animal animal);
    Animal Find(int? id);
    IList<Animal> GetAll();
    void Insert(Animal animal);
    void Update(Animal animal);
}

Inside the controller, we have two constructors: one that instantiates the service layer to keep this example simple, and the real one that takes a single parameter. The latter is the one you should have in enterprise-grade software, because it allows anything that implements IAnimalService to be injected into the controller.

public class AnimalController : Controller
{
    private IAnimalService _service;

    public AnimalController(IAnimalService animalService)
    {
        _service = animalService;
    }
//...
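Wiring the concrete classes to those interfaces happens in the composition root. As an illustration only (the series does not prescribe a container, so Unity here is my assumption), the registration could look like this:

```csharp
// Hypothetical composition root using Unity; any IoC container works
// the same way. Register the interface-to-implementation mappings once,
// then let the MVC dependency resolver build controllers, injecting
// AnimalService into the single-parameter constructor.
var container = new UnityContainer();
container.RegisterType<IAnimalRepository, AnimalRepository>();
container.RegisterType<IAnimalService, AnimalService>();
DependencyResolver.SetResolver(new UnityDependencyResolver(container));
```

With this in place, the parameterless constructor that news up AnimalService by hand becomes unnecessary.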

Before implementing the unit of work, we will create a new repository to illustrate why the unit of work is required. We will also do a little refactoring by changing the repository so that it stops calling SaveChanges automatically. This allows us to insert several entities in a single transaction.

This is now the animal service class and interface.

public interface IAnimalService
{
    void Delete(Animal animal);
    void Delete(IList<Animal> animals);
    Animal Find(int? id);
    IList<Animal> GetAll();
    void Save(Animal animal);
    void Save(IList<Animal> animal);
}
public class AnimalService: IAnimalService
{
    private IAnimalRepository animalRepository;

    public AnimalService(IAnimalRepository animalRepository)
    {
        this.animalRepository = animalRepository;
    }

    public Models.Animal Find(int? id)
    {
        return this.animalRepository.Find(id);
    }

    public void Delete(IList<Animal> animals)
    {
        foreach (var animal in animals)
        {
            this.animalRepository.Delete(animal);    
        }
        
        this.animalRepository.Save();
    }

    public void Delete(Models.Animal animal)
    {
        this.Delete(new List<Animal> { animal });
    }

    public IList<Animal> GetAll()
    {
        return this.animalRepository.GetAll();
    }

    public void Save(Animal animal)
    {
        Save(new List<Animal> { animal });
    }

    public void Save(IList<Animal> animals)
    {
        foreach (var animal in animals)
        {
            if (animal.Id == default(int))
            {
                this.animalRepository.Insert(animal);
            }
            else
            {
                this.animalRepository.Update(animal);
            }
        }

        this.animalRepository.Save();
    }
}
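The matching repository change is not shown above; since the service now calls `animalRepository.Save()`, a minimal sketch of the adjusted repository could look like the following, where Insert, Update and Delete only stage changes and Save is the single commit point (the exact shape is my assumption, derived from the service code):

```csharp
// Sketch of the repository after the refactoring: mutations are staged
// on the DbContext and committed in one place.
public class AnimalRepository : IAnimalRepository
{
    private DatabaseContext db = new DatabaseContext();

    public Animal Find(int? id)
    {
        return db.Animals.Find(id);
    }

    public void Insert(Animal animal)
    {
        db.Animals.Add(animal);                    // staged, not committed
    }

    public void Update(Animal animal)
    {
        db.Entry(animal).State = EntityState.Modified;
    }

    public void Delete(Animal animal)
    {
        db.Animals.Remove(animal);
    }

    public void Save()
    {
        db.SaveChanges();                          // single commit point
    }

    public IList<Animal> GetAll()
    {
        return db.Animals.AsNoTracking().ToList();
    }
}
```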

As you can see, it is better. It also hides the complexity of update versus insert behind a single "save" method. Next, we will create a new repository. We will not code its details, but we will use it inside the AnimalService to simulate a case where we need to work with several entities.

public class HumanRepository : IHumanRepository
{
}
public interface IHumanRepository
{
    void Insert(Models.Human humain);
}

We also need to modify the service to take the IHumanRepository in its constructor.

public class AnimalService: IAnimalService
{
    private IAnimalRepository animalRepository;
    private IHumanRepository humanRepository;

    public AnimalService(IAnimalRepository animalRepository, IHumanRepository humanRepository)
    {
        this.animalRepository = animalRepository;
        this.humanRepository = humanRepository;
    }
//...
}

Then we can simulate the need to have something happen in the same transaction across the animal and human repositories. This can be in the Save method of the AnimalService. Let's create a new save method in the service which takes an Animal and also a Human. In IAnimalService we add:

    void SaveAll(Animal animal, Human humain);

And in the concrete implementation we have:

    public void SaveAll(Animal animal, Human humain)
    {
        this.animalRepository.Insert(animal);
        this.humanRepository.Insert(humain);
    }

This is where the unit of work is required. The animal repository has its own DbContext and the human repository has its own too. Since both do not share the same DbContext, they run in two different transactions. We could wrap both lines in a TransactionScope, but Entity Framework already wraps SaveChanges in a transaction, and in more complex scenarios where we want to keep working with the DbContext, sharing a single DbContext is the more viable approach.
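For illustration only, here is a sketch of the TransactionScope alternative mentioned above (it assumes a reference to System.Transactions and that each repository exposes its own save method, which IHumanRepository above does not actually declare):

```csharp
// Hypothetical sketch: both repositories keep their own DbContext, but the
// ambient transaction makes the two commits succeed or fail together.
using (var scope = new TransactionScope())
{
    this.animalRepository.Insert(animal);
    this.animalRepository.Save();  // commits on the animal DbContext

    this.humanRepository.Insert(humain);
    this.humanRepository.Save();   // commits on the human DbContext

    scope.Complete(); // without this call, everything rolls back
}
```

Note that with two separate database connections, a TransactionScope may be escalated to a distributed transaction (MSDTC), which is one more reason to prefer sharing a single DbContext.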

Implementing Unit of Work pattern

As we have seen, we need to share the DbContext. This is where the unit of work shines. The first move is to create the unit of work, which holds the DbContext.

public interface IUnitOfWork
{
    IDbSet<T> Set<T>() where T:class;

    DbEntityEntry<T> Entry<T>(T entity) where T:class;

    void SaveChanges();
}

The interface could be richer, but this should be the minimal set of methods. The implementation simply provides a central point for every database set. In a more domain-driven design application, we could restrain the entities by having a DbContext that is less general than the one created here. "AllDomainContext" contains every entity set. This is perfect for creating the whole database, or when your application has a limited number of entities (under 50). But with domain-driven design or a big application, having several DbContext classes is a good solution to make Entity Framework perform well and to restrict the domains. With the unit of work and its generic type parameter, you can pass any domain context you want.
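The article doesn't show "AllDomainContext" itself; as a hedged sketch of the idea, a general context and a narrower domain context could look like this (class and property names are illustrative):

```csharp
// Hypothetical sketch: one context with every entity set of the application,
// and one restricted to a single domain.
public class AllDomainContext : DbContext
{
    public DbSet<Animal> Animals { get; set; }
    public DbSet<Human> Humans { get; set; }
    // ...every other entity set of the application
}

public class AnimalDomainContext : DbContext
{
    public DbSet<Animal> Animals { get; set; }
}
```

The unit of work can then be instantiated with the domain needed, for example new UnitOfWork&lt;AllDomainContext&gt;() for everything, or new UnitOfWork&lt;AnimalDomainContext&gt;() for the animal domain only.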

public class UnitOfWork<TContext> : IUnitOfWork where TContext : DbContext, new()
{
    public UnitOfWork()
    {
        DatabaseContext = new TContext();
    }

    private TContext DatabaseContext { get; set; }

    public void SaveChanges()
    {
        DatabaseContext.SaveChanges();
    }

    public System.Data.Entity.IDbSet<T> Set<T>() where T : class
    {
        return DatabaseContext.Set<T>();
    }

    public DbEntityEntry<T> Entry<T>(T entity) where T : class
    {
        return DatabaseContext.Entry<T>(entity);
    }
}

This unit of work is very general since its Set method takes any T. This means that any defined entity can be used. In our example, with this modified unit of work, the controller needs to be changed too.

public class AnimalController : Controller
{
    private IAnimalService _service;

    public AnimalController()
    {
        var uow = new UnitOfWork<AllDomainContext>();
        _service = new AnimalService(uow, new AnimalRepository(uow), new HumanRepository(uow)); 
    }
    public AnimalController(IAnimalService animalService)
    {
        _service = animalService;
    }
//...
}

So, the unit of work is instantiated with the domain we want. Here, it's everything. We still have the "real" constructor that takes only the IAnimalService, which is the one that should be used in the real application with inversion of control to inject the service into the controller. Since this is an article, to keep it simple, I show you what the IoC would do in the background.
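The IoC wiring itself is not part of this article; purely as an illustration, with a container like Unity the background registration could resemble the following (all registrations are hypothetical, mirroring the constructor shown above):

```csharp
// Hypothetical Unity registration that builds the same object graph as the
// default AnimalController constructor does by hand.
var container = new UnityContainer();
container.RegisterType<IUnitOfWork, UnitOfWork<AllDomainContext>>();
container.RegisterType<IAnimalRepository, AnimalRepository>();
container.RegisterType<IHumanRepository, HumanRepository>();
container.RegisterType<IAnimalService, AnimalService>();
// A dependency resolver hooked to the container then lets Asp.Net MVC call
// the AnimalController(IAnimalService) constructor automatically.
```

In a real application, the unit of work would typically get a per-request lifetime so that all repositories of one request share the same DbContext.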

The animal service is also changed to work with the unit of work.

public class AnimalService: IAnimalService
{
    private IAnimalRepository animalRepository;
    private IHumanRepository humanRepository;
    private IUnitOfWork unitOfWork;
    public AnimalService(IUnitOfWork unitOfWork, IAnimalRepository animalRepository, IHumanRepository humanRepository)
    {
        this.unitOfWork = unitOfWork;
        this.animalRepository = animalRepository;
        this.humanRepository = humanRepository;
    }

    public Animal Find(int? id)
    {
        return this.animalRepository.Find(id);
    }

    public void Delete(IList<Animal> animals)
    {
        foreach (var animal in animals)
        {
            this.animalRepository.Delete(animal);    
        }

        this.unitOfWork.SaveChanges();
    }

    public void Delete(Models.Animal animal)
    {
        this.Delete(new List<Animal> { animal });
    }

    public IList<Animal> GetAll()
    {
        return this.animalRepository.GetAll();
    }

    public void Save(Animal animal)
    {
        Save(new List<Animal> { animal });
    }

    public void Save(IList<Animal> animals)
    {
        foreach (var animal in animals)
        {
            if (animal.Id == default(int))
            {
                this.animalRepository.Insert(animal);
            }
            else
            {
                this.animalRepository.Update(animal);
            }
        }

        this.unitOfWork.SaveChanges();
    }

    public void SaveAll(Animal animal, Human humain)
    {
        this.animalRepository.Insert(animal);
        this.humanRepository.Insert(humain);
        this.unitOfWork.SaveChanges();
    }
}

The repository now accepts the unit of work. It can work with any set defined in the domain without problem.

public class AnimalRepository : WebsiteForUnitOfWork.DataAccessLayer.Repositories.IAnimalRepository
{
    private IUnitOfWork UnitOfWork { get; set; }

    public AnimalRepository(IUnitOfWork unitOfWork)
    {
        this.UnitOfWork = unitOfWork;
    }

    public Models.Animal Find(int? id)
    {
        return UnitOfWork.Set<Animal>().Find(id);
    }

    public void Insert(Models.Animal animal)
    {
        UnitOfWork.Set<Animal>().Add(animal);
    }

    public void Update(Models.Animal animal)
    {
        UnitOfWork.Entry(animal).State = EntityState.Modified;
    }

    public void Delete(Models.Animal animal)
    {
        UnitOfWork.Set<Animal>().Remove(animal);
    }

    public IList<Animal> GetAll()
    {
        return UnitOfWork.Set<Animal>().AsNoTracking().ToList();
    }
}

It's possible to continue to improve the unit of work and Entity Framework by going further in the use of the repository. But what has been shown here is an enterprise-grade repository design. It allows you to divide the domain and improve the performance of Entity Framework at the same time. It provides an abstraction between the Asp.Net MVC front end and Entity Framework. It's easily testable because we use interfaces, which can be mocked easily. The benefits are clear, but the price to pay is the overhead required to support this infrastructure. More classes need to be in place. Still, the version presented is light, and once the setup is done, adding a new entity is only a matter of editing the context to which it belongs and creating in the repository whatever actions are needed.

Source code

You can find the source code on GitHub for this Unit of work example.

Enterprise Asp.Net MVC Part 7: Securing action with role authorization

In the previous articles of the enterprise Asp.Net MVC series, we chose not to allow anonymous access by default and to secure most actions for logged users. This is great, but not enough if we want some actions to be available only for a specific role. In this article, I'll show you how to map a specific role to an action while keeping the security against anonymous access. We will also see how to have a custom error page for forbidden actions, instead of the login screen to which Asp.Net MVC redirects when authorization is unsuccessful.

First of all, we will need to create a new Authorize attribute. This is not because Asp.Net MVC 4 doesn't provide the attribute, but because Asp.Net MVC 4 acts the same way for an unauthorized access (401) and a forbidden access (403). For an unauthorized access (not logged in), we want to redirect to the login screen; for a forbidden access (not being in the role), we want to redirect to a view saying something meaningful, not the login form.

[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, Inherited = true, AllowMultiple = true)]
public sealed class AuthorizeAttribute : System.Web.Mvc.AuthorizeAttribute
{
	public AuthorizeAttribute()
	{
		ErrorArea = string.Empty;
		ErrorController = "Error";
		ErrorAction = "Index";
	}

	public string ErrorArea { get; set; }
	public string ErrorController { get; set; }
	public string ErrorAction { get; set; }


public override void OnAuthorization(AuthorizationContext filterContext)
{
          
    base.OnAuthorization(filterContext);
    if (AuthorizeCore(filterContext.HttpContext))
        return;
    if (filterContext.HttpContext.Request.IsAuthenticated)
    {
        if (ErrorController != null)
        {
            filterContext.Result = new RedirectToRouteResult(new RouteValueDictionary(new
            {
                action = ErrorAction,
                controller = ErrorController,
                area = ErrorArea
            }));
        }
        else
        {
            filterContext.Result = new HttpStatusCodeResult((int)HttpStatusCode.Forbidden);
        }
    }
    else
    {
        // Not authenticated: return a 401 so that Forms Authentication
        // redirects the user to the login page.
        filterContext.Result = new HttpUnauthorizedResult();
    }
}
}

This is the attribute class. It checks if the user is authenticated; if not, it falls back to the normal process and returns a 401 http status with the login form. If the user is authenticated, the status code is changed to 403 when no controller is specified; otherwise, it redirects to a specific controller/action. By default, I have set a controller and action: it's more user friendly to show a real page inside the site layout than the default 403 IIS page. Of course, it's up to you to choose what you prefer. However, I believe that not only is it more user friendly, but this approach also gives you the possibility to log forbidden accesses and to show a custom message.

To use this new AuthorizeAttribute, we need to change the default filter applied to every action. In Asp.Net MVC 4, you need to look for FilterConfig.cs.

public class FilterConfig
{
    public static void RegisterGlobalFilters(GlobalFilterCollection filters)
    {
        filters.Add(new HandleErrorAttribute());
        filters.Add(new Views.AuthorizeAttribute());
    }
}

The default AuthorizeAttribute registration has been replaced by our AuthorizeAttribute. This won't make a big difference on its own, since no role is required by default. But when an action must be protected by a specific role, this is where the custom authorize class shines.

[HttpGet]
[Views.Authorize(Roles = Models.Roles.ADMINISTRATOR)]
public ActionResult Create()
{
    var x = ServiceFactory.Exercise.New(Model);
    return View("Create",x);
}

As you can see in the code above, if the user is not an administrator, then he or she will be redirected to the default error page.

At any time, you can also specify a specific controller and action if, for a special case, you need to do something else on a forbidden access.

[Views.Authorize(Roles = Models.Roles.ADMINISTRATOR, ErrorController = "CustomerController", ErrorAction = "LogAndRedirect")]
public ActionResult Create()
{
    var x = ServiceFactory.Exercise.New(Model);
    return View("Create",x);
}

To conclude, it's possible to have distinct pages for unauthorized access and forbidden access. I strongly believe it's important to do something different, since it's counter-intuitive to display the login form when someone is already logged in but lacks the right role. It's fundamental that the user knows what's going on, and this is why a redirection to a custom error controller seems the natural solution to this problem.

Series Articles

Article #1: Asp.Net MVC Enterprise Quality Web Application
Article #2: Asp.Net MVC Enterprise Quality Web Application Model
Article #3: Asp.Net MVC Enterprise Quality Web Application Controller
Article #4: Asp.Net MVC Enterprise Quality Web Repository Layer
Article #5: Asp.Net MVC Enterprise Quality Web with Entity Framework
Article #6: Asp.Net MVC Enterprise Quality Layers
Article #7: Asp.Net MVC Enterprise Quality Web Security

Source code on GitHub

Enterprise Asp.Net MVC Part 6: The three layers of validation

Validation is definitely a serious subject. If no validation is made, then the system is compromised. Whatever the architecture, whatever the hardware setup and whatever the idea of the product, you need to implement validations to protect your system. This is why it must be taken seriously.

By default, Asp.Net MVC handles validation, and Entity Framework uses the same interface to validate entities. So why not use what is already in place instead of trying to reinvent the wheel? In fact, we follow the KISS principle.

Here is an overview of the article in a single image.

We have 3 layers of validation. The first and third layers are built into .Net with the IValidatableObject interface. I have already discussed this interface for validating entities, but I'll show you how to use it in a more "enterprise way".

Using IValidatableObject

This interface gives you a single method called Validate which lets you return error messages linked to a property name. If you want a general error, you can also specify an empty string for the property name. Simple? Yes. Powerful? Even more! The framework knows this interface and automatically uses the validation when the model is bound from an Http request to your view model by the Model Binder. The .Net framework also automatically calls this method when Entity Framework tries to save entities to the database. This means that you have nothing to do but add your business logic validation.

From here, it's interesting to force every model to have this interface, and this is why a good place to implement IValidatableObject is the BaseModel.

public abstract class BaseModel : IValidatableObject
{
    public const int NOT_INITIALIZED = -1;
    public int Id { get; set; }

    #region Implementation of IValidatableObject

    public abstract IEnumerable<ValidationResult> Validate(ValidationContext validationContext);

    #endregion

}

Every model has to define the Validate method. If no validation is required, the method is simply empty. Let's go back to the Workout entity and add some validations.

public class Workout : BaseModel, IUserOwnable
{
    public DateTime StartTime { get; set; }
    public DateTime? EndTime { get; set; }
    public string Name { get; set; }
    public string Goal { get; set; }
    public ICollection<WorkoutSession> Sessions { get; set; }

    public override IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
    {
        if (string.IsNullOrEmpty(Name))
        {
            yield return new ValidationResult("Name is mandatory", new[] {"Name"});
        }
        if (EndTime.HasValue)
        {
            if (StartTime > EndTime.Value)
            {
                yield return new ValidationResult("EndTime must be after the StartTime", new[] {"StartTime", "EndTime"});
            }
        }
    }

    #region Implementation of IUserOwnable

    public int UserId { get; set; }

    #endregion
}

Every time we have an error, we return a ValidationResult. We specify a message and an array of properties concerned by the error. In this example, the name is validated, and the EndTime property too, but only when it is specified.

The first layer of validation : Model Binding inside the controller

We have implemented IValidatableObject, and when an Http request reaches the server, the controller binds the data to the model. Since we are using the ViewModel approach, this validation is not triggered! But since we have a BaseController and have already defined the approach of having the model automapped automatically, it hooks the validation and applies it when the ViewModel is mapped to the Model. (You have to read the previous posts of the "Business" category to understand why it's automatically mapped.)

The first modification occurs in the OnActionExecuting method, which should already be overridden from the mapper modification. We simply need to check if the bound model is really an IValidatableObject and to trigger the validation mechanism.

protected override void OnActionExecuting(ActionExecutingContext filterContext)
{
	base.OnActionExecuting(filterContext);
	if(filterContext.ActionParameters.Any())
	{
		var possibleViewModel = filterContext.ActionParameters.FirstOrDefault(x => x.Value.GetType() == typeof(TViewModel));
		if (possibleViewModel.Value!=null)
		{
			var viewModel = (TViewModel) possibleViewModel.Value;
			var model = (TModel) Activator.CreateInstance(typeof (TModel));
			Model = _mapperFactory.Map(viewModel, model);
			ApplyOwnership();
			ApplyErrorsToModelState();
		}
	}
}

private void ApplyErrorsToModelState()
{
	if (Model is IValidatableObject)
	{
		var errors = (Model as IValidatableObject).Validate(new ValidationContext(this));
		foreach (var validationResult in errors)
		{
			foreach (var memberName in validationResult.MemberNames)
			{
				ModelState.AddModelError(memberName, validationResult.ErrorMessage);
			}
		}
	}
}

What we are doing is a general method that works for any entity. We verify if the Model bound from the ViewModel really implements IValidatableObject. From there, we do what the framework would do if we weren't using a view model: calling the Validate method of the interface. We then loop over all errors and assign everything to the ModelState. This gives us the possibility to act as if no view model had been used.

The "ApplyErrorsToModelState" method above could be replaced with the code below to validate the data annotations AND the IValidatableObject interface.

private void ApplyErrorsToModelState()
{
        ModelMetadata metadata = ModelMetadataProviders.Current.GetMetadataForType(() => Model, Model.GetType());

        foreach (ModelValidationResult validationResult in ModelValidator.GetModelValidator(metadata, this.ControllerContext).Validate(null))
        {
            var propertyName = validationResult.MemberName;
            ModelState.AddModelError(propertyName, validationResult.Message);
        }
}

The code above validates the data annotations and the IValidatableObject. This can be used in a scenario where you need a deeper validation process. For example, here is the same code as above with enhanced validation on the mapping. This requires splitting both validations.

private void ApplyErrorsToModelState(TModel model, TViewModel viewModel)
{
    //Data Annotation validation
    ICollection<ValidationResult> result;
    ValidateDataAnnotation(model, out result);
    foreach (ValidationResult validationResult in result)
    {
        foreach (string memberName in validationResult.MemberNames)
        {
            ModelState.AddModelError(memberName, validationResult.ErrorMessage);
        }
    }

    //IValidatableObject validation
    if (Model is IValidatableObject)
    {
        IEnumerable<ValidationResult> errors = (Model as IValidatableObject).Validate(new ValidationContext(this));
        foreach (ValidationResult validationResult in errors)
        {
            if (validationResult is EnhancedMappedValidationResult<TModel>)
            {
                var enhanced = (EnhancedMappedValidationResult<TModel>)validationResult;
                var viewModelPropertyName = _mapperFactory.GetMapper(model, viewModel).GetErrorPropertyMappedFor(enhanced.Property);
                ModelState.AddModelError(viewModelPropertyName, validationResult.ErrorMessage);
            }
            else
            {       
                        
                if (validationResult.MemberNames.Any())
                {
                    foreach (string memberName in validationResult.MemberNames)
                    {
                        ModelState.AddModelError(memberName, validationResult.ErrorMessage);
                    }
                }
                else
                {
                    ModelState.AddModelError(string.Empty, validationResult.ErrorMessage);
                }
            }
        }
    }
    /*
    //This validate underlying entity which can be not fully loaded in the case of reference
    ModelMetadata metadata = ModelMetadataProviders.Current.GetMetadataForType(() => Model, Model.GetType());

    foreach (ModelValidationResult validationResult in ModelValidator.GetModelValidator(metadata, this.ControllerContext).Validate(null))
    {
        var propertyName = validationResult.MemberName;
        ModelState.AddModelViewModelToErrorsMap(propertyName, validationResult.Message);
    }*/
}

private bool ValidateDataAnnotation(object entity, out ICollection<ValidationResult> results)
{
    var context = new ValidationContext(entity);
    results = new List<ValidationResult>();
    return Validator.TryValidateObject(entity, context, results, true);
}

[HttpPost]
public ActionResult Create(WorkoutViewModel viewModel)
{
    if (ModelState.IsValid) //This is the default Asp.Net MVC way to validate the entity
    {
        //Save the entity
    }
    //...
}

This is great because it's the default way to validate an object that has been bound in MVC. IsValid doesn't only validate our business logic, it also validates all data annotations that could have been set. It's even better because people who are used to the ModelState for validation won't have to learn a new way to interact with controllers: it's the same.

The second layer of validation : Service layer

So far, the validation works fine, but it doesn't handle the case where you need to validate across many entities. You can have a case where you need to validate an entity depending on the values of other entities. It can also be a validation against values that are inside the database. Since the Model doesn't have access to the repository, we couldn't validate that until now. To solve this problem, a second layer of validation is required, and the perfect place for it is the Service layer. The reason is that this layer has access to all entities and also to all repositories. Contrary to the first layer of validation, this one requires a manual, explicit call for validation. The concrete implementation of this second layer will be done with the Workout entity. What we want to implement is a validation that the active user cannot create more than 3 workouts per month without a premium account. That means we need to check in the database the number of workouts for a specific user for a specific month. This couldn't be validated in the Workout class because it doesn't have access to the database.

public int Create(Workout model)
{
     int amountWorkout = Repository.Workout.GetAmountWorkoutForCurrentMonth();
     if (amountWorkout>3)//More than 3 workouts done without premium account
     {
        throw new ValidationErrors(new GeneralError("You have reach the limit of 3 workouts per month, you need premium or wait the next month"));
     }
     return Repository.Workout.Insert(model);
}

This gets the number of workouts for the month and, if it's over a certain threshold, raises the error.

The error is handled by the controller, which verifies that the executed action has been completed without error. Here is the Create action of the Workout controller with the first layer of validation and with the catch for the second layer.

[HttpPost]
public ActionResult Create(WorkoutViewModel viewModel)
{
	if (ModelState.IsValid)
	{
		try
		{
			_service.Create(Model);
		}
		catch (ValidationErrors propertyErrors)
		{
			ModelState.AddValidationErrors(propertyErrors);
		}
	}
	return View("Create");
}

The exception type is ValidationErrors, which is our custom error class. The reason is that we do not want to use layer-specific exceptions across other layers. This is why cross-layer classes will be used to transport exceptions through all layers. This will be discussed after the third layer of validation.

The third layer of validation : Persistence layer

The persistence layer is where the call to the database is done. This is an automatic validation with Entity Framework, which calls the IValidatableObject interface of the entity before saving it to the database.

But since we do not want to raise a DbEntityValidationResult up to the controller (because it's a class that belongs to Entity Framework's System.Data.Entity.Validation namespace), we will use our own exception classes.

We will create an interface that will hold the property name in error and also the error message.

public interface IBaseError
{
    string PropertyName { get; }
    string PropertyExceptionMessage { get; }
}

Two classes will implement this interface. One for a property error and one for a general error.

public class PropertyError:IBaseError
{
    public string PropertyName { get; set; }
    public string PropertyExceptionMessage { get; set; }
    public PropertyError(string propertyName, string errorMessage)
    {
        this.PropertyName = propertyName;
        this.PropertyExceptionMessage = errorMessage;
    }
}

public class GeneralError:IBaseError
{
    #region Implementation of IBaseError

    public string PropertyName { get {return string.Empty; }}
    public string PropertyExceptionMessage { get; set; }

    public GeneralError(string errorMessage)
    {
        this.PropertyExceptionMessage = errorMessage;
    }

    #endregion
}

Then, we add the interface IValidationErrors, which holds all the IBaseError instances to be sent back through all layers.

public interface IValidationErrors
{
    List<IBaseError> Errors { get; set; }
}

The first implementation can be used anywhere, like in the service layer.

public class ValidationErrors : Exception, IValidationErrors
{
    public List<IBaseError> Errors { get; set; }
    public ValidationErrors()
    {
        Errors = new List<IBaseError>();
    }

    public ValidationErrors(IBaseError error): this()
    {
        Errors.Add(error);
    }

}

The second is more specific to the database.

public class DatabaseValidationErrors : ValidationErrors
{
       
    public DatabaseValidationErrors(IEnumerable<DbEntityValidationResult> errors):base()
    {
        foreach (var err in errors.SelectMany(dbEntityValidationResult => dbEntityValidationResult.ValidationErrors))
        {
            Errors.Add(new PropertyError(err.PropertyName,err.ErrorMessage));
        }
    }
}

The last one is used by the repository. In fact, when we SaveChanges() to the database, we need to validate before Entity Framework executes the SaveChanges. Of course, we could let Entity Framework validate on its own, but we would then have to catch its exception. Since there is a way to validate without having to catch an exception, I prefer to use it.

If you remember correctly, our DatabaseContext inherits from IDatabaseContext, which has a SaveChanges() method. We simply need to override this one instead of relying on the one from DbContext, and call the DbContext one only if everything is fine.

public override int SaveChanges()
{
    var errors = this.GetValidationErrors();
    if (!errors.Any())
    {
        return base.SaveChanges();
    }
    else
    {
        throw new DatabaseValidationErrors(errors);
    }
}

The exception thrown carries all the errors and is propagated to a higher level. In fact, this exception is raised to the service layer, which doesn't handle it. So the exception is raised up to the controller layer. This is the same path as an exception thrown from the service in layer 2 because of a business logic validation! We are reusing the same mechanism, and this is possible because of the exception classes we have created, which are abstracted with an interface.

Model State and custom exceptions classes

If you remember, the controller does have a catch for ValidationErrors.

//...
catch (ValidationErrors propertyErrors)
{
     ModelState.AddValidationErrors(propertyErrors);
}

By default, the model state doesn't have a method that accepts our IValidationErrors interface. This is an extension method.

public static class ControllersExtensions
{
    public static void AddValidationErrors(this ModelStateDictionary modelState, IValidationErrors propertyErrors)
    {
        foreach (var databaseValidationError in propertyErrors.Errors)
        {
            modelState.AddModelError(databaseValidationError.PropertyName, databaseValidationError.PropertyExceptionMessage);
        }
    }
}

Using IValidationErrors lets us handle errors from the service layer or from the database. In fact, at this point, the origin doesn't really matter: we want to loop through all the errors and use the model state to attach each one to the correct property (or, for a general error, to string.Empty, which becomes a global error message).

Conclusion

Validation of the model could be more complex. External classes could have been used for each validation. We could have created our own system for validation messages instead of using the IValidatableObject interface. We could have skipped the ModelState completely and created our own html helpers with a custom mechanism for validating across all layers. We could have added a layer of abstraction between Entity Framework and the service and handled validation there. But in the end, solutions that are short and efficient seem better in my point of view. The current solution gives a lot of flexibility concerning the validation and keeps the code easy to maintain. In fact, adding a validation takes two steps. First, decide where the validation should be coded. Second, add the validation. I have seen validation patterns that go so far beyond MVC, and respect the single responsibility principle so strictly, that adding a single validation takes over 30 minutes. For me, this is not acceptable. Abstraction levels should never make the development of the code harder. In theory, adding levels of abstraction doesn't cost a thing, but in real enterprise code, where people have to maintain the code base, this can lead to problems.

The solution proposed here uses the layers previously defined without adding overhead to handle validation.

Series Articles

Article #1: Asp.Net MVC Enterprise Quality Web Application
Article #2: Asp.Net MVC Enterprise Quality Web Application Model
Article #3: Asp.Net MVC Enterprise Quality Web Application Controller
Article #4: Asp.Net MVC Enterprise Quality Web Repository Layer
Article #5: Asp.Net MVC Enterprise Quality Web with Entity Framework
Article #6: Asp.Net MVC Enterprise Quality Layers
Article #7: Asp.Net MVC Enterprise Quality Web Security

Source code on GitHub

Enterprise Asp.Net MVC Part 5: Database Context and Impersonate data

The database context abstracts the connection between the entities and Entity Framework. We won't abstract every method of Entity Framework and Linq to Entity like "Where", "Select", "Find", "First", etc., but we will abstract the entry point: DbSet. In fact, the reason is to be able to add the impersonation ability later, and to keep the configuration of your entities inside this DatabaseContext. The role of the factory is neither to configure Entity Framework nor to impersonate. The database context's role is to do those tasks.

public interface IDatabaseContext   
{
	int SaveChanges();
	IDbSet<TEntity> SetOwnable<TEntity>() where TEntity : class, IUserOwnable;
	DbSet<TEntity> Set<TEntity>() where TEntity : class;
	DbEntityEntry<TEntity> Entry<TEntity>(TEntity entity) where TEntity : class;
	void InitializeDatabase();
	UserProfileImpersonate Impersonate(ICurrentUser userProfile);
}

For the moment, the IDatabaseContext interface looks like this. We have a SaveChanges method because we might want to perform operations across several repositories and commit the changes manually at a specific time; this is the role of SaveChanges. The SetOwnable<> method acts like the default Set method but automatically assigns the current user to the entity, both when loading and when saving. When loading, we won't have to specify every time that we want the workouts of user A; it is automatic. This saves us time and a source of errors, and also improves security, because by default everything is bound to the current user. The InitializeDatabase method configures extra database concerns; for example, in this project I use it to set up WebSecurity (the membership layer from WebMatrix). The last method, Impersonate, gives us impersonation for the duration of a query on behalf of another user profile.

public class DatabaseContext : DbContext, IDatabaseContext
{
	public const string DEFAULTCONNECTION = "DefaultConnection";

	public DatabaseContext(IUserProvider userProvider)
	{
		UserProvider = userProvider;

		base.Database.Connection.ConnectionString = ConfigurationManager.ConnectionStrings[DEFAULTCONNECTION].ConnectionString;
		Configuration.ProxyCreationEnabled = false;
	}

	public IUserProvider UserProvider { get; set; }

	public ICurrentUser CurrentUser
	{
		get { return UserProvider.Account; }
	}

	public new DbSet<TEntity> Set<TEntity>() where TEntity : class
	{
		if (typeof(IUserOwnable).IsAssignableFrom(typeof(TEntity)))
		{
			throw new SecurityException("You cannot bypass the ownable security");
		}
		return base.Set<TEntity>();
	}
	public IDbSet<TEntity> SetOwnable<TEntity>() where TEntity : class, IUserOwnable
	{
		return new FilteredDbSet<TEntity>(this, entity => entity.UserId == CurrentUser.UserId, entity => entity.UserId = CurrentUser.UserId);
	}

	public void InitializeDatabase()
	{
		WebSecurity.InitializeDatabaseConnection(DEFAULTCONNECTION, "UserProfile", "UserId", "UserName", autoCreateTables: true);
	}

	protected override void OnModelCreating(DbModelBuilder modelBuilder)
	{
		base.OnModelCreating(modelBuilder);
		//Call here some other classes to build the configuration of Entity Framework
	}

	public UserProfileImpersonate Impersonate(ICurrentUser userProfile)
	{
		return new UserProfileImpersonate(this, userProfile);
	}
}

This is a small example that speaks for itself. There are two interesting parts. The first is SetOwnable, which uses a FilteredDbSet whose code has been trimmed from a version you can find on the web; we will discuss it later. The other is the Impersonate method, which we will talk about now.

Let's start with the end result. For now, if you want to insert a new Workout entity into the database, you do this in the WorkoutRepository:

DatabaseContext.SetOwnable<Workout>().Add(entity);      

This automatically inserts a new workout for the currently logged-in user. If you wanted to change the user, you might reach for Set, but we override the Set method and check whether the entity inherits from the IUserOwnable interface, the interface required to use SetOwnable (this is how we get the user id). To prevent developers from bypassing this mechanism, an exception is thrown if the Set method is used with an entity that is ownable. That doesn't mean you cannot save on behalf of another user, but doing so requires more work, through impersonation. Why add this overhead instead of letting the developer use Set directly when saving an entity under someone else's authority? Simply because every entity will inherit from IUserOwnable: it is a lot easier to work with when we don't have to specify the user inside the repository every time, and repositories do not have direct access to the user id anyway. Blocking direct access to Set prevents the mistake of simply using the Set method for an ownable entity: an exception is thrown and the developer is immediately reminded to use the SetOwnable method instead. If he really means to use the Set method for another user, then the Impersonate method is the appropriate tool.

For a general entity, say a list of statuses shared across all entities or across all users, the entity won't inherit from IUserOwnable because it is not user-ownable. So it works in theory; let's check it in practice!

using (var db = DatabaseContext.Impersonate(new UserProfile { UserId = 1 }))
{
     db.SetOwnable<Workout>().Add(entity);
}

This would replace the previous piece of code in the repository. As you can see, we impersonate with a UserProfile whose Id is 1. The code is wrapped in curly brackets, giving us the scope in which the impersonation starts and ends.
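As a hedged illustration of where this scope is useful, consider seeding test data for several users, the scenario mentioned elsewhere in this series. Everything here except Impersonate, SetOwnable, and SaveChanges is hypothetical: testUsers, CreateSampleWorkout, and the databaseContext variable are illustration-only names.

```csharp
// Hypothetical seeding routine for development data.
// testUsers and CreateSampleWorkout() are illustration-only helpers;
// Impersonate/SetOwnable/SaveChanges come from IDatabaseContext above.
foreach (UserProfile user in testUsers)
{
    using (var db = databaseContext.Impersonate(user))
    {
        // Inside the using block, SetOwnable binds entities to "user",
        // not to the really logged-in account.
        db.SetOwnable<Workout>().Add(CreateSampleWorkout());
        db.SaveChanges();
    } // Dispose() restores the original user provider here.
}
```

The using block makes it hard to forget to restore the real user: even if SaveChanges throws, Dispose runs and the original provider is put back.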

The DatabaseContext implementation of Impersonate simply creates a new UserProfileImpersonate.

public UserProfileImpersonate Impersonate(ICurrentUser userProfile)
{
    return new UserProfileImpersonate(this, userProfile);
}

A new class is used because we want a scope, which is achieved by inheriting from the IDisposable interface. We create a new UserProfileImpersonate instance and dispose of it to come back to the real current user rather than the impersonated one. The class mostly mirrors the DatabaseContext but keeps a reference to the user provider that was active before the impersonation, because we want to restore it once we are done.

public class UserProfileImpersonate : IDatabaseContext, IDisposable
{
	private readonly DatabaseContext _databaseContext;

	private readonly IUserProvider _oldUserProvider;

	#region Implementation of IDisposable

	public UserProfileImpersonate(DatabaseContext dbContext, ICurrentUser userProfile)
	{
		_databaseContext = dbContext;
		_oldUserProvider = dbContext.UserProvider;
		_databaseContext.UserProvider = new ImpersonateUserProvider(userProfile);
	}

	public void Dispose()
	{
		_databaseContext.UserProvider = _oldUserProvider;
	}

	#endregion

	#region Implementation of IDatabaseContext

	public int SaveChanges()
	{
		return _databaseContext.SaveChanges();
	}

	public IDbSet<TEntity> SetOwnable<TEntity>() where TEntity : class, IUserOwnable
	{
		return _databaseContext.SetOwnable<TEntity>();
	}

	public DbSet<TEntity> Set<TEntity>() where TEntity : class
	{
		return _databaseContext.Set<TEntity>();
	}

	public DbEntityEntry<TEntity> Entry<TEntity>(TEntity entity) where TEntity : class
	{
		return _databaseContext.Entry(entity);
	}

	public void InitializeDatabase()
	{
		_databaseContext.InitializeDatabase();
	}

	public UserProfileImpersonate Impersonate(ICurrentUser userProfile)
	{
		return _databaseContext.Impersonate(userProfile);
	}

	#endregion
}

Simple, isn't it? We call the same database context methods and only change the currently logged-in user profile. A single task to perform, which respects the single responsibility principle.


Enterprise Asp.Net MVC Part 4: Repository

This is the fourth part of the series on enterprise Asp.Net MVC web sites. In this article, we will discuss how to design the repository. As you can imagine, we won't use Entity Framework (or any other ORM) directly in controllers. This article focuses on Entity Framework 5.0, but the concept is the same regardless of the ORM: the repository must be abstracted away from the controller. The main reason is that we want to respect the single responsibility principle. The controller's responsibility is not to know how to load or save entities but to know how to dispatch. This is why we follow the separation of concerns idea and handle persistence in dedicated repository classes. By separating the repository into a set of cohesive classes, the result is an application well separated by concern.

Before starting, let me make one thing clear: I won't abstract Entity Framework itself. I do not believe that abstracting the ORM is a good idea. First, it adds a lot of overhead. Second, the ORM already abstracts the database implementation. And third, it is harder to maintain, because anything specific to Entity Framework would require a lot of extra code.

Abstracting the repository: the plan

The first thing to keep in mind is that every entity will need to use the ORM. Entity Framework uses what is called a DbContext. We need to be able to share this DbContext between repositories because saving one entity may involve several other repositories. The same is true for loading: you may want to load one entity and then load a second one through a different repository. Sharing the same DbContext gives you a single transaction when saving and, when loading, lets you use joins instead of two (or more) queries. It also opens a single connection to the database instead of several.

The second thing to keep in mind is that every entity belongs to a user. If UserA creates an entity, that entity should belong to him, and UserB should only access his own information. This is not true of everything, but it is most of the time; even a Facebook message is owned by you (though shared with others). So we need a mechanism to bind data to a user account. We also need a way to impersonate, bypassing this mechanism in some cases, which gives us the leverage to save an entity for a specific user. A simple case is loading the database with test data for development purposes: we may want to create entities for several users without being logged in as those users.

The third thing to keep in mind is that we want to be able to test without caring about the database. It also means I do not want the overhead of mocking every Entity Framework method when testing. What we want is to simply mock the repository.
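To make that concrete, here is a hedged sketch of what such a test could look like with a mocking library such as Moq (my choice for the sketch; the article does not prescribe one). The WorkoutService constructor signature matches the one shown later in this series; mapperFactory stands in for whatever IMapperFactory test double you use.

```csharp
// Sketch, assuming Moq as the mocking library.
// Moq's recursive mocks let us set up the Workout property chain in one call.
var factory = new Mock<IRepositoryFactory>();
factory.Setup(f => f.Workout.GetAll())
       .Returns(new List<Workout> { new Workout { Name = "Test" } }.AsQueryable());

// Any class taking an IRepositoryFactory can now be exercised without a
// database; mapperFactory is a hypothetical IMapperFactory test double.
var service = new WorkoutService(factory.Object, mapperFactory);
var all = service.GetAll();
```

Only the factory is mocked; the repository interfaces behind it never touch Entity Framework, which is exactly the point of abstracting at this level.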

Factory Method Pattern

The factory method pattern lets you construct objects from a single point. The factory will return the repository for every entity; it is a central point. The reason for using this pattern is that it gives us lazy creation of all repositories, and also the possibility to share the DbContext between those repositories as they are created. It also gives us the possibility to mock the whole factory for testing. The factory will be the class shared between all controllers.

Repository Factory

The repository factory inherits from an interface. This interface contains one entry per entity repository, which means that when you add a new entity you need to add a new entry to this interface.

public interface IRepositoryFactory
{
    IWorkoutRepository Workout { get;  }
    IUserProfileRepository UserProfile { get;  }
    //...Other entities...
}

For example, in the code above we have two entities. One is from our domain, the Workout class, and the second is the UserProfile for the user (from the membership classes). If we wanted to add an Exercise entity, we would need to add a new property to the interface.

You can also notice that IRepositoryFactory exposes interfaces to the repositories. So a new entity means a new interface for its repository, plus a concrete implementation of that repository.
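The repository interfaces themselves can stay empty until an entity needs specific queries. A plausible sketch of the declarations referenced by IRepositoryFactory, assuming the generic IRepository<T> contract shown later in this article:

```csharp
// Plausible declarations for the interfaces used by IRepositoryFactory.
// They inherit the generic CRUD contract (IRepository<T>, defined later in
// this article) and can grow entity-specific methods over time.
public interface IWorkoutRepository : IRepository<Workout>
{
}

public interface IUserProfileRepository : IRepository<UserProfile>
{
}
```

Keeping one interface per entity, even when empty, is what lets the factory return strongly typed repositories and lets tests mock each one independently.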

Before going deeper into the repository classes, let's check a concrete implementation of IRepositoryFactory for the Entity Framework 5.0 Code First approach.

public class RepositoryFactory:IRepositoryFactory
{
	private readonly IDatabaseContext _databaseContext;
	private IWorkoutRepository _workoutRepository;
	private IUserProfileRepository _userProfileRepository;

	public RepositoryFactory(IDatabaseContext databaseContext)
	{
		_databaseContext = databaseContext;
	}

	#region Implementation of IRepositoryFactory

	public IWorkoutRepository Workout
	{
		get { return _workoutRepository ?? (_workoutRepository = new WorkoutRepository(_databaseContext)); }
	}

	public IUserProfileRepository UserProfile
	{
		get { return _userProfileRepository ?? (_userProfileRepository = new UserProfileRepository(_databaseContext)); }
	}

	#endregion
}

This class takes an IDatabaseContext in its constructor, which gives us an instance to share between repositories. Otherwise, the factory is very simple: for every repository it exposes a property that checks whether the repository has already been initialized; if not, it initializes it with the IDatabaseContext, otherwise it simply reuses it. That's it.

Repository Classes

Every repository implements IRepository<T>.

public interface IRepository<T>
{
	IQueryable<T> GetAll();
	T Get(int id);
	int Insert(T entity);
	int Update(T entity);
	int Delete(T entity);
}

This gives us 80% of the repository methods we need. Other, more specific methods, like searching with a filter, will be added directly to the concrete implementation of each repository.
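As a hypothetical example of such an entity-specific method (not part of the article's code), a filtered search could be declared on IWorkoutRepository and implemented in the concrete class:

```csharp
// Hypothetical entity-specific query, added to WorkoutRepository alongside
// the IRepository<Workout> members. GetByNameContains is illustration-only.
public IQueryable<Workout> GetByNameContains(string term)
{
    // SetOwnable already scopes the query to the current user, so this
    // method needs no user-id parameter.
    return DatabaseContext.SetOwnable<Workout>()
                          .Where(w => w.Name.Contains(term));
}
```

Returning IQueryable keeps the query composable: the service layer can still page or sort before the SQL is executed.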

Also, every repository inherits from a BaseRepository, which holds the IDatabaseContext reference. This is required because every call the repository makes goes through the DbContext. When the repository factory passes the IDatabaseContext to a repository, the repository simply passes the object to its base class in its constructor.

public class BaseRepository
{
    protected IDatabaseContext DatabaseContext { get; private set; }

    protected BaseRepository(IDatabaseContext databaseContext)
    {
        DatabaseContext = databaseContext;
    }
}

Here is the example with the Workout entity.

public class WorkoutRepository : BaseRepository, IWorkoutRepository
{
	public WorkoutRepository(IDatabaseContext databaseContext) : base(databaseContext)
	{
	}

	#region Implementation of IRepository<Workout>

	public IQueryable<Workout> GetAll()
	{
		return DatabaseContext.SetOwnable<Workout>().Include(x => x.Sessions);
	}

	public Workout Get(int id)
	{
		return DatabaseContext.SetOwnable<Workout>().Include(x => x.Sessions).Single(c => c.Id == id);
	}

	public int Insert(Workout entity)
	{
		//To-do : Other stuff with complex type here
		DatabaseContext.SetOwnable<Workout>().Add(entity);
		return DatabaseContext.SaveChanges();
	}

	public int Update(Workout entity)
	{
		Workout fromDatabase = Get(entity.Id);
		DatabaseContext.Entry(fromDatabase).CurrentValues.SetValues(entity);
		DatabaseContext.Entry(fromDatabase).State = EntityState.Modified;
		//To-do : Other stuff with complex type here
		return DatabaseContext.SaveChanges();
	}

	public int Delete(Workout entity)
	{
		DatabaseContext.SetOwnable<Workout>().Remove(entity);
		return DatabaseContext.SaveChanges();
	}

	#endregion
}

It's quite neat! You do not see any detail of the database connection; the only thing you see is the task of saving and loading entities. We have direct access to Entry and we can use Set<> and SetOwnable<>. As you can see, we do not need any access to the current user, nor do we need to specify to whom the Workout belongs, because Workout inherits from IUserOwnable. You will see the details of how this works in the next article, which covers the DbContext (see part 5).

Conclusion

So far, so good. We now have controllers that talk to the database. We use the repository factory to access the desired repository, and every repository shares the same DbContext instance, which gives us the possibility to handle multiple entities within the same context (same transaction). Every class has its own role: the controller handles the HTTP request, the service decides how the database is accessed, the repository factory manages all repositories, each repository handles how its entity is stored, and finally the database context takes care of the database connection. The next article of the series, part 5, will discuss the database context (DbContext) and its role with Entity Framework 5.0 in more detail.


Enterprise Asp.Net MVC Part 3: Controller

In this third part, we will discuss the controller. We aren't done with the model yet (more validation still needs to be added), but let's talk about the controller anyway. In Asp.Net MVC, the controller acts as the gate for the HTTP request and answers every request with an HTTP response. That's it. Its role should be limited to this task, to respect the Single Responsibility Principle.

But we need to do a lot of things when a client sends information to the server. We need to convert the input data into objects, convert that information into the domain model, go to the database to load and maybe save information, manipulate the data, and send back an answer. How can the controller stay clean and at the same time do all those things? We will need to apply the principle of separation of concerns and split every task into multiple classes.

We will start with the model binding, which is the first step of any request.

Auto-mapping

In Asp.Net MVC, the transformation of HTTP GET or POST parameters into C# objects is called model binding. By default, the model binder tries to convert any data into primitive types, or tries to instantiate your model object if the request contains a JSON object that fits the schema of your classes. That means you can simply have Asp.Net MVC send all your model's property values back to the server, and Asp.Net MVC is bright enough to build a new object for you.

[HttpPost]
public ActionResult Create(WorkoutModel model)
{
	//1)Validate model
	//2)Do manipulation
	//3)Save into the database

	return View("Create");//4)Return a response to the client
}

The problem with this approach is that it works fine if you use model objects to send information to the view, but we are using view models (an architectural decision we took in the first part of this series). A view model gives us the leverage to add additional information, like a list of exercises that could be used in the workout. So, before task 1, validating the model, we need to convert the view model back into a model object. This is where an automapper comes to the rescue.

An automapper is a library that maps properties from one object to another. In our case, we will use AutoMapper: a free, open source, and widely used automapper. It can be configured explicitly, or it can map properties by name automatically. I won't show you how to use AutoMapper in detail in this article, but you can find good examples on this blog or anywhere on the web.
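For context only, a minimal AutoMapper setup typically looks like the sketch below, assuming the classic static Mapper API of the AutoMapper versions current at the time of this series; the exact maps used in this project are not shown here.

```csharp
// Minimal AutoMapper sketch (assumption: classic static Mapper API).
// Maps are declared once at application start...
Mapper.CreateMap<Workout, WorkoutViewModel>();
Mapper.CreateMap<WorkoutViewModel, Workout>();

// ...then used anywhere a conversion is needed.
WorkoutViewModel viewModel = Mapper.Map<Workout, WorkoutViewModel>(workout);
```

By default, AutoMapper matches properties by name, so identically named properties on Workout and WorkoutViewModel need no extra configuration.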

So, once we receive the view model back from the view in the controller, we need to map the view model to the model. That means we would have to do this task in every controller action, which is repetitive and error prone. A better approach is to expose a Model property, a little bit like Microsoft did in Asp.Net MVC with the View property. We will create a Model property that holds the converted view model. To do so, we need to modify the BaseController.

AutoMapper and BaseController

We will modify the BaseController and override the OnActionExecuting method. This gives us the opportunity to run code before entering the action defined inside the controller.

This is an overview of what we are going to do. First, we have a concrete controller for every entity, in our case the WorkoutController. Each controller inherits from the BaseController, which is generic with two types: the first is the model type and the second is the view model type. The BaseController contains a reference to an IMapperFactory, a layer of abstraction over the AutoMapper implementation; we will come back to the IMapperFactory later. Finally, the BaseController contains a property of type TModel. That means that in the WorkoutController we can use this.Model to get the model built from the view model. For another entity, Model will be of that entity's type, because it uses the TModel type defined by the BaseController. Here is the code that reflects this design.

public class WorkoutController : BaseController<Workout, WorkoutViewModel>
{

	public WorkoutController(IMapperFactory mapperFactory):base(mapperFactory)
	{
	}


	public ActionResult Index()
	{
	}

	[HttpGet]
	public ActionResult Details(int id)
	{
	}

	[HttpGet]
	public ActionResult Create()
	{

	}

	[HttpPost]
	public ActionResult Create(WorkoutViewModel viewModel)
	{
	}

	[HttpGet]
	public ActionResult Edit(int id)
	{
	}

	[HttpPost]
	public ActionResult Edit(WorkoutViewModel viewModel)
	{
	}
}

public abstract class BaseController<TModel, TViewModel>:Controller
{
	private readonly IMapperFactory _mapperFactory;
	
	protected TModel Model { get; private set; }

	protected BaseController(IMapperFactory mapperFactory)
	{
		_mapperFactory = mapperFactory;
	}

	protected override void OnActionExecuting(ActionExecutingContext filterContext)
	{
		base.OnActionExecuting(filterContext);
		if(filterContext.ActionParameters.Any())
		{
			var possibleViewModel = filterContext.ActionParameters.FirstOrDefault(x => x.Value != null && x.Value.GetType() == typeof(TViewModel));
			if (possibleViewModel.Value!=null)
			{
				var viewModel = (TViewModel) possibleViewModel.Value;
				var model = (TModel) Activator.CreateInstance(typeof (TModel));
				Model = _mapperFactory.Map(viewModel, model);
			}
		}
	}
}

Anywhere inside Update or Create, instead of using the viewModel parameter of type WorkoutViewModel, you can use base.Model. This way of coding gives us a few advantages. First, the controller is clean: no mapping is done in any concrete controller. Second, we still have access to the view model if required. Third, we do not repeat the mapping work in every controller.
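The IMapperFactory used by the BaseController is not listed in this article; a minimal sketch of what it could look like, assuming AutoMapper's static API underneath (the member names match the calls used in this series, but the real code base may differ):

```csharp
// Hypothetical sketch of the abstraction over AutoMapper, so mapping can be
// mocked in tests. Both overloads mirror the calls seen in this series:
// Map(source) in the services and Map(source, destination) in BaseController.
public interface IMapperFactory
{
    TDestination Map<TSource, TDestination>(TSource source);
    TDestination Map<TSource, TDestination>(TSource source, TDestination destination);
}

// Possible AutoMapper-backed implementation (classic static Mapper API).
public class MapperFactory : IMapperFactory
{
    public TDestination Map<TSource, TDestination>(TSource source)
    {
        return AutoMapper.Mapper.Map<TSource, TDestination>(source);
    }

    public TDestination Map<TSource, TDestination>(TSource source, TDestination destination)
    {
        return AutoMapper.Mapper.Map(source, destination);
    }
}
```

Because controllers and services depend only on the interface, a unit test can swap in a stub mapper and never touch the AutoMapper configuration.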

Service layer

Now that we have the data from the view, we need to do some manipulation. We will skip the validation process because it is covered in another part of this series. Let's jump to the service layer. The service layer sits between the controller and the repository, and could be used not only by the web controller but by a Web API controller or any other application. It is the layer between the user interaction and the repository: the one that can contact the repository or the cache, or manipulate several entities to return a single one. The service layer is used by the controller to access the repository and to build the view model. For example, it will load a specific workout when the user calls the Edit action of the WorkoutController. Not only will it load the workout, it will also return the view model filled correctly with the extra properties, which could contain additional choices to select from (like a list of exercises) or additional localized text.

So, we need to modify the WorkoutController to have a service reference.

public class WorkoutController : BaseController<Workout, WorkoutViewModel>
{
	public WorkoutController(IMapperFactory mapperFactory):base(mapperFactory)
	{
	}
//...

will become:

public class WorkoutController : BaseController<Workout, WorkoutViewModel>
{
	private readonly IWorkoutService _service;
	public WorkoutController(IWorkoutService service, IMapperFactory mapperFactory):base(mapperFactory)
	{
		_service = service;
	}
//...

As you can see, IWorkoutService has been added. This gives us the possibility to inject the service into the controller. Every controller will have its own service.

Because most services will look the same, we can create a base service interface, which I'll call IService. IService contains the primitive calls for getting, saving, and deleting the model.

public interface IService<TModel, TViewModel>
{
	IEnumerable<TViewModel> GetAll();
	TViewModel Get(TModel model);
	int Create(TModel model);
	int Update(TModel model);
	int Delete(TModel model);
}

public interface IWorkoutService : IService<Workout, WorkoutViewModel>
{
}

We could add more specific methods to IWorkoutService. For example, one could require a special Get that returns an extended view model with more data, or someone might want the model rather than the view model from Get. This type of architecture leaves that flexibility.
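As a hypothetical illustration of that flexibility (none of these members exist in the article's code), IWorkoutService could grow a richer query without touching IService:

```csharp
// Hypothetical extension of the service interface.
// WorkoutExtendedViewModel and GetDetails are illustration-only names.
public interface IWorkoutService : IService<Workout, WorkoutViewModel>
{
    // A details-page query returning a view model enriched with data the
    // generic Get does not carry (e.g. the list of selectable exercises).
    WorkoutExtendedViewModel GetDetails(int id);
}
```

The generic contract stays untouched, so other services are unaffected by one entity's extra needs.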

If we look at the concrete implementation of IWorkoutService, we see all the repository access and the automapper converting the model to the view model.

public class WorkoutService : BaseService, IWorkoutService
{
	public WorkoutService(IRepositoryFactory repositoryFactory, IMapperFactory mapperFactory) : base(repositoryFactory, mapperFactory)
	{
	}

	#region Implementation of IService<Workout>

	public IEnumerable<WorkoutViewModel> GetAll()
	{
		var listModel = Repository.Workout.GetAll().ToList();
		return Mapper.Map<List<Workout>,List<WorkoutViewModel>>(listModel);
	}

	public WorkoutViewModel Get(Workout model)
	{
		var modelToBound = Repository.Workout.Get(model.Id);
		return Mapper.Map<Workout, WorkoutViewModel>(modelToBound);
	}

	public int Create(Workout model)
	{
		return Repository.Workout.Insert(model);
	}

	public int Update(Workout model)
	{
		return Repository.Workout.Update(model);
	}

	public int Delete(Workout model)
	{
		return Repository.Workout.Delete(model);
	}

	#endregion
}

As you can see, we use the IMapperFactory to map data rather than AutoMapper directly. This abstraction gives us the possibility to mock the mapping easily later. We do the same with the repository: we use IRepositoryFactory, which is not tightly bound to any repository, nor to the workout. That means the workout service could load exercises without problem. The details of the repository are covered in another article.

Conclusion

We have seen that we can have a clean controller and that using a service helps us separate the request handling from the repository. We have also seen that it is better to use interfaces instead of concrete classes, because this gives us the possibility to mock later on and adds a layer of abstraction between the concrete implementation of the controller and the repository, the mapper, and so on. In the next article of this series we will discuss the repository and Entity Framework in an enterprise Asp.Net MVC web application. We will come back to the controller in the article on model validation; indeed, the controller has its role in validation, and we will see how to implement a solution that still respects the single responsibility principle.


Enterprise Asp.Net MVC Part 2: Building The Model

This is part 2 of the enterprise Asp.Net MVC web application series. We have previously discussed the project we will develop, and now we will work on the model. We will create all the classes first.

If we recall the UML class diagram, we have to create five classes: one for the Workout, one for the WorkoutSession, one for the Exercise, one for the Muscle, and one for the MuscleGroup.

All these classes will contain the business logic and will also serve as the entities for Entity Framework 5. Since we are building in Code First mode with Entity Framework, we develop all the business logic (model) classes first, and the database is then generated by the Entity Framework ORM.

So far, translating the model diagram into classes gives:

public class BaseModel
{
	public int Id { get; set; }
}

public class Workout:BaseModel
{
	public DateTime StartTime { get; set; }
	public DateTime EndTime { get; set; }
	public string Name { get; set; }
	public string Goal{ get; set; }
	public ICollection<WorkoutSession> Sessions { get; set; }
}

public class WorkoutSession:BaseModel
{
	public string Name { get; set; }
	public ICollection<Exercise> Exercises { get; set; }
}

public class Exercise:BaseModel
{
	public string Name { get; set; }
	public string Repetitions { get; set; }
	public string Weights { get; set; }
	public string Tempo { get; set; }
	public TimeSpan RestBetweenSet{ get; set; }
	public virtual Muscle Muscle { get; set; }
	public ICollection<WorkoutSession> WorkoutSessions { get; set; }
}

public class Muscle : BaseModel
{
	public string Name { get; set; }
	public virtual MuscleGroup Group { get; set; }
	public ICollection<Exercise> Exercises { get; set; }
}

public class MuscleGroup:BaseModel
{
	public string Name { get; set; }
	public ICollection<Muscle> Muscles { get; set; }
}

Indeed, I separated each of these classes into its own file. A few modeling problems came to mind while I was writing them. First, exercises need to be ordered for the user, because exercises are always done in a specific order. We need to add an Order property, but we cannot add it to the Exercise class because the order changes depending on the workout session. For example, I may do the “Bicep Curl” exercise first on Monday and last on Friday. Also, we will later create a directory of exercises, so the “metadata” of an exercise must live somewhere other than the exercise of the workout session. In fact, if we put on our database glasses, this information would go in a junction table. Putting our developer glasses back on, we simply add a WorkoutSessionExercise class with a 1-1 relationship to an Exercise. So, let's modify the model diagram and the classes.


Modified Class Diagram for the Workout Planner Web Application

These modifications need to be reflected in the classes.

public class WorkoutSession:BaseModel
{
	public string Name { get; set; }
        public ICollection<WorkoutSessionExercise> WorkoutSessionExercises { get; set; }
        public virtual Workout Workout { get; set; }
}

public class WorkoutSessionExercise:BaseModel
{
        public int Order { get; set; }
        public string Repetitions { get; set; }
        public string Weights { get; set; }
        public string Tempo { get; set; }
        public TimeSpan RestBetweenSet { get; set; }
        public virtual Exercise Exercise { get; set; }
        public virtual WorkoutSession WorkoutSession { get; set; }
}

public class Exercise:BaseModel
{
        public string Name { get; set; }
        public virtual Muscle Muscle { get; set; }
        public ICollection<WorkoutSessionExercise> WorkoutSessionExercices { get; set; }
}

As you can see, I have also moved the repetitions, weights, tempo, and all the user- and session-specific information into a class that lets the user record his own data, while the Exercise class keeps the static information, like the name of the exercise and the muscle concerned.

The BaseModel class holds the information shared by every model, such as the primary key, which is an integer. Other information will be added later.

Validating the model

The next step is to add validation to the model. We could put it in the setter of every property we want to validate, but we can also use the IValidatableObject interface and let the model binding system of Asp.Net MVC handle validation of every model object. If you want more information, I suggest you read this blog post about the IValidatableObject interface. In short, model binding calls the Validate method of every model when it is bound back to the controller, and Entity Framework also validates the object before saving it. So we get double validation executed automatically by the .Net Framework. This is a big advantage because we cannot forget to validate the model; the framework does it for us. To make it mandatory, we will implement the interface on the BaseModel class and create an abstract method that every model must define. This way, we are sure that every model has its validation defined.

public abstract class BaseModel : IValidatableObject
{
	public int Id { get; set; }

	#region IValidatableObject Members

	public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
	{
		return ValidateModel(validationContext);
	}

	#endregion

	protected abstract IEnumerable<ValidationResult> ValidateModel(ValidationContext validationContext);
}

All our model classes are modified to define the abstract method. Here is an example of a model class with validation, and one that doesn’t have any validation logic (yet).

public class Workout : BaseModel
{
	public DateTime StartTime { get; set; }
	public DateTime? EndTime { get; set; }
	public string Name { get; set; }
	public string Goal { get; set; }
	public ICollection<WorkoutSession> Sessions { get; set; }

	protected override IEnumerable<ValidationResult> ValidateModel(ValidationContext validationContext)
	{
		if (string.IsNullOrEmpty(Name))
		{
			yield return new ValidationResult("Name is mandatory", new[] {"Name"});
		}
		if (EndTime.HasValue)
		{
			if (StartTime > EndTime.Value)
			{
				yield return new ValidationResult("EndTime must be after the StartTime", new[] {"StartTime", "EndTime"});
			}
		}
	}
}

public class WorkoutSessionExercise : BaseModel
{
	public int Order { get; set; }
	public string Repetitions { get; set; }
	public string Weights { get; set; }
	public string Tempo { get; set; }
	public TimeSpan RestBetweenSet { get; set; }
	public virtual Exercise Exercise { get; set; }
	public virtual WorkoutSession WorkoutSession { get; set; }

	protected override IEnumerable<ValidationResult> ValidateModel(ValidationContext validationContext)
	{
		return new Collection<ValidationResult>();
	}
}

The first example shows some validation: the Name must be defined, and the StartTime must be before the EndTime. As you can see, the error will be displayed on both properties when the second rule is violated.

I also defined the EndTime as nullable. This lets the user omit the ending date of the workout and, for us, gives a scenario to test with a nullable type.

The second example shows a class with no validation defined. It simply returns an empty collection (anything that implements IEnumerable<ValidationResult> will do).
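To see the mechanism outside of MVC, here is a minimal sketch that triggers the same Validate method manually with the Validator helper from System.ComponentModel.DataAnnotations; this is essentially what model binding and Entity Framework do for us. The Workout class and its rules come from the code above, the dates are made up for the example.

```csharp
using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

// Sketch: invoking IValidatableObject.Validate through the Validator helper.
var workout = new Workout
{
	Name = null,                         // invalid: Name is mandatory
	StartTime = new DateTime(2013, 1, 10),
	EndTime = new DateTime(2013, 1, 5)   // invalid: before StartTime
};

var errors = new List<ValidationResult>();
bool isValid = Validator.TryValidateObject(
	workout, new ValidationContext(workout, null, null), errors, true);
// isValid is false; errors holds "Name is mandatory"
// and "EndTime must be after the StartTime"
```

Note that Validator only calls IValidatableObject.Validate once all attribute-based property validations have passed, which is fine here since our rules live entirely in ValidateModel.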

Localized string

The last thing that bothers me with this model is that, for the moment, we take for granted that everything is in English all the time. In fact, the workout goal can stay in the user’s language, but the exercise name must be translated into the language of the user. In a previous blog post, we discussed a technique that lets Entity Framework handle multiple languages automatically, without breaking any object-oriented principles. So let’s apply the modification to the model by changing some string properties into the LocalizedString class.

To do so, we will add this class:

[ComplexType]
public class LocalizedString
{
	public string French { get; set; }
	public string English { get; set; }

	// Not mapped to a column: resolves to French or English at run time,
	// depending on the current UI culture of the thread.
	[NotMapped]
	public string Current
	{
		get { return (string) LanguageProperty().GetValue(this); }
		set { LanguageProperty().SetValue(this, value); }
	}

	public override string ToString()
	{
		return Current;
	}

	private PropertyInfo LanguageProperty()
	{
		// Assumes the culture's DisplayName ("English", "French") matches
		// one of the property names above.
		string currentLanguage = Thread.CurrentThread.CurrentUICulture.DisplayName;
		return GetType().GetProperty(currentLanguage);
	}
}

This class lets you have a French and an English value for every LocalizedString defined. It will add one column in the database for French and one for English.

As you can see, the LocalizedString class has a ComplexType attribute, which tells Entity Framework to merge the properties into the owner’s table instead of creating a relationship to a table called LocalizedString. For example, we will use LocalizedString for the name of Exercise, so the Exercise table will have a Name_French and a Name_English column.

public class Exercise : BaseModel
{
	public LocalizedString Name { get; set; }
	public virtual Muscle Muscle { get; set; }
	public ICollection<WorkoutSessionExercise> WorkoutSessionExercises { get; set; }

	protected override IEnumerable<ValidationResult> ValidateModel(ValidationContext validationContext)
	{
		if (Name == null)
		{
			yield return new ValidationResult("Name is mandatory", new[] {"Name"});
		}
	}
}

As you can see, the Name property is now of type LocalizedString, and the validation has been modified to check that the name is defined.
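As a quick sanity check, here is a hedged sketch of how Current resolves per thread. Keep in mind the assumption baked into LanguageProperty: it only works when the current UI culture’s DisplayName matches a property name, which holds for the neutral "en"/"fr" cultures on an English OS; on a localized OS, DisplayName is itself translated, so CultureInfo.EnglishName would be a more robust key.

```csharp
using System.Globalization;
using System.Threading;

// Sketch only: assumes DisplayName of the neutral "en" culture is "English"
// (true on an English OS; not guaranteed on a localized one).
Thread.CurrentThread.CurrentUICulture = CultureInfo.GetCultureInfo("en");
var name = new LocalizedString { English = "Bicep Curl", French = "Flexion du biceps" };
string current = name.Current; // resolves to the English value under that assumption
```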

Series Articles

Article #1: Asp.Net MVC Enterprise Quality Web Application
Article #2: Asp.Net MVC Enterprise Quality Web Application Model
Article #3: Asp.Net MVC Enterprise Quality Web Application Controller
Article #4: Asp.Net MVC Enterprise Quality Web Repository Layer
Article #5: Asp.Net MVC Enterprise Quality Web with Entity Framework
Article #6: Asp.Net MVC Enterprise Quality Layers
Article #7: Asp.Net MVC Enterprise Quality Web Security

Enterprise Asp.Net MVC Part 1: The Planification

As discussed before, a multi-part series will be published over the next weeks on how to develop an enterprise web application with Microsoft’s Asp.Net MVC framework.

This first part covers the project itself, the class diagram and the setup of the solution. The project will be iterative and incremental: we will establish the domain in this post and enhance it in later parts. We do this because, in real life, the model changes. We will start slowly and add features over the next weeks until we finally have something complete.

First of all, let’s define the project that we will create. Since I have been working out at the gym for a long time, I thought we could build a gym workout planner. When people go to the gym, they have a plan of exercises for every muscle group. Usually, a trainer splits the body parts across multiple sessions during the week. So, in one week, you might go to the gym 4 times and train with 4 different sessions: this is what we call a workout with 4 different exercise sessions. Each session contains different exercises. Every exercise has a name, a number of sets and repetitions. It can also have a tempo, which describes the timing of the exercise movement. A repetition is a single execution of the movement, while a set is a group of consecutive repetitions. For example, the “Leg Press” exercise can be done 5 times (5 sets) of 10 repetitions (10 reps).

The Model in UML Class Diagram

If we translate this application into a static UML diagram, such as the UML class diagram, we get something like the one below.

As we can see, a user identified by the system can have zero or many workouts. Every workout has at least one session (in case a user wants to do the same workout every training session) and can have several (in case the user splits his training into, for example, 4 different sessions per week). A workout session contains multiple exercises, each associated with a muscle. Muscles are grouped: for example, the bicep and tricep muscles could be in the arms group. For the moment, let’s keep it simple and not elaborate further. Later, this will let us give advice to the user depending on his objectives or the muscles he wants to train.
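The muscle grouping described above is not shown in code in this post; a plausible sketch, with names guessed from the diagram (MuscleGroup and all its members are assumptions, not the article’s actual code), could look like this:

```csharp
using System.Collections.Generic;

// Hypothetical sketch of the diagram's muscle classes;
// the class and member names are assumptions for illustration.
public class MuscleGroup
{
	public int Id { get; set; }
	public string Name { get; set; } // e.g. "Arms"
	public ICollection<Muscle> Muscles { get; set; }
}

public class Muscle
{
	public int Id { get; set; }
	public string Name { get; set; } // e.g. "Bicep"
	public virtual MuscleGroup Group { get; set; }
}
```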

Creating a new Asp.Net MVC 4 project with Visual Studio 2012

New project screen in Visual Studio 2012

A new project is created with the MVC 4 project type. The next step is to select the MVC 4 template with the Razor view engine.

Visual Studio 2012's project template screen.

I have selected the option to create a unit test project because, as with any serious enterprise project, we will unit test most of what we develop. Not only will this make us more confident, it will also push us to develop against interfaces and with good habits.

From there, we are set up to start. We have several possibilities. We could implement Microsoft Membership straight from the beginning, or we could start by building the system for a single user. If we look at the model diagram, 5 classes out of 6 concern everything but the user account. I suggest we start by building the application for a single user: we will remove the User class and develop everything for a single user for a while. The advantage is that we won’t have to configure Microsoft Membership with Entity Framework from the start, which removes some overhead. Also, if we run out of time to implement that part of the software, we will still quickly have something functional for at least one user.
