DTOs and why you should be using them

If you’ve worked on any modern application of a decent size, you know that the de facto standard is a layered design, where operations are grouped into layers by functionality: for example, a Data Access Layer that is nothing more than an implementation of your repositories using NHibernate, Entity Framework, etc. While that is a very good idea for most scenarios, it brings a bit of a problem with it: you need to pass lots of calls between layers, and sometimes that is not just calling a DLL inside your solution; sometimes it’s calling a service hosted somewhere over the network.

The problem

If your app calls services and receives data from them (obviously?), then you might find something like this in your service:
public Person AddPerson(string name, string lastName, string email)
Now, let’s first look at the parameters and why this is probably not a very good definition.
In this method you have three arguments: name, lastName and email. What happens if somebody needs a telephone number? Well, we just add another argument! Dead easy! Yeah, no. Suppose we make it more interesting by saying we have Workers and Customers, both inheriting from Person; we would then have something like this:
public Person AddWorker(string name, string lastName, string email)
public Person AddCustomer(string name, string lastName, string email)
If you need to add that telephone number now and go for that extra param, you have to add code in two locations, so you need to touch more code, and what happens when we touch more code? Simple: we add more bugs.

The Good

Now, what happens if you have this?
public Worker AddWorker(Worker worker)
public Customer AddCustomer(Customer customer)
DTO stands for Data Transfer Object, and that is precisely what these classes do: we use them to transfer data in our services. For one, the code is much simpler to read now! But there is another benefit: if Worker and Customer inherit from Person, as they should considering they are both a Person, then we can safely add that telephone number to Person without having to change the signature of the service. Yes, our service will now receive an extra piece of data, but we don’t have to touch the service signature in the code, just the DTO it receives.
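To make this concrete, here’s a minimal sketch of the idea (the Phone property and these exact class shapes are illustrative, not taken from a real service):

```csharp
// Hypothetical DTO hierarchy. Adding Phone to Person once makes it
// available to both Worker and Customer, and neither service
// signature (AddWorker / AddCustomer) has to change.
public class Person
{
    public string Name { get; set; }
    public string LastName { get; set; }
    public string Email { get; set; }
    public string Phone { get; set; } // the new field, added in ONE place
}

public class Worker : Person
{
    public double HourlyRate { get; set; } // illustrative extra field
}

public class Customer : Person
{
    public string AccountNumber { get; set; } // illustrative extra field
}
```

Both AddWorker(Worker worker) and AddCustomer(Customer customer) keep compiling untouched; only the DTOs grew.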
Now, more on the common use of DTOs; as Martin Fowler states, a DTO is:
An object that carries data between processes in order to reduce the number of method calls.
Now, it’s fairly obvious that using DTOs for input arguments is good, but what about output? Well, it’s a similar story, with a small twist. Considering that many people today use ORMs to access the database, it’s very likely that you already have Worker, Customer and Person classes, because they are part of your domain model, or they were created by Linq To Sql (not a huge fan, but many people still use it). So, should you be returning those entities from your services? Not a very good idea, and I have some reasons for it.

One very simple reason is that the objects generated by these frameworks usually are not serialization friendly, because they sit on top of proxy classes which are a pain to serialize for anything that outputs JSON or XML. Another potential problem is when your entity doesn’t quite fit the response you want to give. What happens if your service has something like this?

public Salary CalculateWorkerSalary(Worker worker)

You could have a very simple method just returning a double, but let’s think of a more convoluted solution to illustrate the point. Imagine Salary looks like this:

public class Salary
{
    public double FinalSalary { get; }
    public double TaxDeducted { get; }
    public double Overtime { get; }
}

So, this is our class, and Overtime means it’s coupled to a worker, because not everybody does the same amount of overtime. So, what happens now if we also need the tax code for that salary? Or the overtime rate used in the calculation? That is assuming these are not stored in the salary table. More importantly, what happens if we don’t want whoever is calling the API to see the overtime the worker is doing? Well, the entity is not fit for purpose and we need a DTO where we can put all of this, simple as that.
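A sketch of what that response DTO could look like (SalaryDTO, TaxCode and OvertimeRate are hypothetical names; the entity is given settable properties here just to keep the sample self-contained):

```csharp
// The entity as stored/calculated internally.
public class Salary
{
    public double FinalSalary { get; set; }
    public double TaxDeducted { get; set; }
    public double Overtime { get; set; }
}

// The response DTO: adds TaxCode and OvertimeRate (computed values that
// are not columns on the salary table) and deliberately has no Overtime
// property, so API callers never see it.
public class SalaryDTO
{
    public double FinalSalary { get; set; }
    public double TaxDeducted { get; set; }
    public string TaxCode { get; set; }
    public double OvertimeRate { get; set; }
}

public static class SalaryMapper
{
    public static SalaryDTO ToDto(Salary salary, string taxCode, double overtimeRate)
    {
        return new SalaryDTO
        {
            FinalSalary = salary.FinalSalary,
            TaxDeducted = salary.TaxDeducted,
            TaxCode = taxCode,      // computed elsewhere, not on the entity
            OvertimeRate = overtimeRate
        };
    }
}
```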

The Bad

However, DTOs are not all glory; there is a problem with them, and it’s the fact that they bloat your application, especially if you have a large application with many entities. If that’s the case, it’s up to you to decide when a DTO is worth it and when it’s not. Like many things in software design, there is no rule of thumb and it’s very easy to get it wrong. But for most cases where you pass complex data around, you should be using DTOs.

The Ugly

There is another problem with DTOs, and it’s the fact that you end up with a lot of code like this:

var query = _workerRepository.GetAll();
var workers = query.Select(ConvertWorkerDTO).ToList();
return workers;

Where ConvertWorkerDTO is just a method looking pretty much like this:

public WorkerDTO ConvertWorkerDTO(Worker worker)
{
    return new WorkerDTO() {
        Name = worker.Name,
        LastName = worker.LastName,
        Email = worker.Email
    };
}

Wouldn’t it be cool if you could do something without writing a mapping method for each DTO, like this:

var query = _workerRepository.GetAll();
var workers = query.Select(x => BaseDTO.BuildFromEntity<Worker, WorkerDTO>(x))
                   .ToList();
return workers;

Happily, there is a simple way to achieve a result like this one, and it’s by combining two very powerful tools: inheritance and reflection. Just have a BaseDTO class that all of your DTOs inherit from, and give it a method like that one that performs the conversion by mapping property to property. A fairly simple, yet fully working, version could be this:

public static TDTO BuildFromEntity<TEntity, TDTO>(TEntity entity)
{
    var dto = Activator.CreateInstance<TDTO>();
    var dtoProperties = typeof(TDTO).GetProperties();
    var entityProperties = typeof(TEntity).GetProperties();

    foreach (var property in dtoProperties)
    {
        if (!property.CanWrite)
            continue;

        // Match by name; the assignability check below takes care of the types
        var entityProp = entityProperties.FirstOrDefault(x => x.Name == property.Name);

        if (entityProp == null)
            continue;

        if (!property.PropertyType.IsAssignableFrom(entityProp.PropertyType))
            continue;

        var propertyValue = entityProp.GetValue(entity, null);
        property.SetValue(dto, propertyValue, null);
    }

    return dto;
}
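As a quick sanity check, here is roughly how the method behaves with a pair of illustrative classes (WorkerEntity and WorkerDTO are made up for the example; the mapper is a condensed version of the method above):

```csharp
using System;
using System.Linq;

public class WorkerEntity
{
    public string Name { get; set; }
    public string LastName { get; set; }
    public int Overtime { get; set; } // no matching DTO property, so it is skipped
}

public class WorkerDTO
{
    public string Name { get; set; }
    public string LastName { get; set; }
}

public static class BaseDTO
{
    // Condensed version of the property-by-property copy shown above.
    public static TDTO BuildFromEntity<TEntity, TDTO>(TEntity entity)
    {
        var dto = Activator.CreateInstance<TDTO>();
        foreach (var property in typeof(TDTO).GetProperties().Where(p => p.CanWrite))
        {
            var entityProp = typeof(TEntity).GetProperties()
                .FirstOrDefault(x => x.Name == property.Name
                                  && property.PropertyType.IsAssignableFrom(x.PropertyType));
            if (entityProp != null)
                property.SetValue(dto, entityProp.GetValue(entity, null), null);
        }
        return dto;
    }
}
```

Calling BaseDTO.BuildFromEntity&lt;WorkerEntity, WorkerDTO&gt;(entity) copies Name and LastName and silently ignores Overtime, since the DTO has no such property.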

And Finally…

The bottom line is, like with everything, you can over-engineer your way into adding far too many DTOs to your system, but ignoring them is not a very good solution either; and adding one or two to a project with more than 15 entities just to feel you’re using them is about as good as using a single interface to claim you build decoupled systems.

What’s your view on this? Do you agree? Disagree? Share what you think on the comments!

EDIT: As a side note, it’s worth checking this article, which talks a lot about the subject.

Testing S#arp Lite Repositories with Moq

One pending matter I’ve always had is to improve my testing skills; there, I said it. I test, but not as much as I should. When I say test, I mean unit test, not just launching the application and poking at it. One thing I found to be a really outstanding idea in S#arp Lite is that repositories eliminate many complications. If you have a repository and need to run a query against it, just call GetAll and throw some Linq at it. Granted, this assumes that the Linq provider for the underlying data model is mature, but with NHibernate and Entity Framework being my two ORMs of choice, that seems like a fair assumption.

However, this has a downside: I tried to test the repositories and had a really rough time testing a repo that was backed by an underlying IQueryable. This became quite clear with time, though, and now I can test my repos. Let’s build a fairly simple test scenario. Assume I have a User class with a few standard properties, pretty much like this one:

public class User : Entity
{
 public virtual string Password { get; set; }

 public virtual string Email { get; set; }

 public virtual bool Blocked { get; set; }

 public virtual int LoginCount { get; set; }
} 

Now, I have a class called Membership that handles my membership logic; that is, logging users in, blocking them after a couple of bad logins, etc. The class should look like this:

public class Membership
{
 IRepository<User> _usersRepository;

 public Membership( IRepository<User> usersRepository )
 {
  _usersRepository = usersRepository;
 }
 
 public bool IsValidUser( string email, string password )
 {
  //create test first!
  return false;
 }
} 

Now we need to create a test fixture. Let’s call it MembershipTests:

[TestFixture]
public class MembershipTests
{
 [TestFixtureSetUp]
 public void SetupTestEnvironment()
 {
 
 }
}

Now, I want to create a mock repository to pass along to my Membership class, but I need it to simulate the data backend without touching my actual data or getting too slow. Obviously we need a list, but not just any list: we need a list that can pose as a repository, or at least fake it. That’s why we create this sort of list, a QueryableList:

public class QueryableList<T, TId> : List<T>, IQueryable<T> where T : EntityWithTypedId<TId>
{
 #region Constructors
 public QueryableList()
 { }

 public QueryableList(IEnumerable<T> source)
  : base(source)
 { } 
 #endregion

 #region IQueryable<T> implementation
 public Expression Expression
 {
  get { return ToArray().AsQueryable().Expression; }
 }

 public Type ElementType
 {
  get { return typeof(T); }
 }

 public IQueryProvider Provider
 {
  get { return ToArray().AsQueryable().Provider; }
 }
 #endregion

 public void UpdateEntity(T entity)
 {
  var index = -1;

  for (var i = 0; i < Count; i++)
   if (this[i].Equals(entity))
    index = i;

  if (index == -1)
   Add(entity);
  else
   this[index] = entity;
 }
} 
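To see the add-or-replace semantics of UpdateEntity in isolation, here is a tiny stand-in (FakeEntity fakes S#arp Lite’s equality-by-Id; in the real thing the comparer comes from the entity base class):

```csharp
using System.Collections.Generic;

// Minimal stand-in for an S#arp Lite entity: equality is by Id only.
public class FakeEntity
{
    public int Id { get; set; }
    public string Name { get; set; }
    public override bool Equals(object obj) => obj is FakeEntity other && other.Id == Id;
    public override int GetHashCode() => Id;
}

public static class UpdateEntityDemo
{
    // Same add-or-replace logic as QueryableList.UpdateEntity above.
    public static void UpdateEntity(List<FakeEntity> list, FakeEntity entity)
    {
        var index = list.FindIndex(x => x.Equals(entity));
        if (index == -1)
            list.Add(entity);      // unknown Id: behaves like an insert
        else
            list[index] = entity;  // known Id: behaves like an update
    }
}
```

Saving an entity with an Id already in the list replaces it in place; saving one with a new Id appends it, which is exactly what SaveOrUpdate does against a real repository.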

Voila! We have a List that directly implements IQueryable<T>, which is a good thing; not hard to do, but it will help us a lot. We take both the entity type and the Id type as generic parameters to keep it as generic as possible, so when we need to test a repo of entities with typed ids we won’t have to rewrite much. The UpdateEntity method mimics the SaveOrUpdate method we have on our repo, using Equals to invoke the equality comparer provided by S#arp Lite. Now we need to set up our mocks. We go back to the test setup and configure our environment:

[TestFixture]
public class MembershipTests
{
 private Membership _membership;
 
 [TestFixtureSetUp]
 public void SetupTestEnvironment()
 {
  var usersMockedRepo = new Mock<IRepository<User>>();
  
  var users = new List<User> { new User{ Blocked = false, Email = "david@someplace.com", Password = "a password" } };
  var list = new QueryableList<User, int>(users);
  
  //Mock GetAll
  usersMockedRepo.Setup(x => x.GetAll()).Returns(list);
  
  //Mock the Get
  usersMockedRepo.Setup( x => x.Get( It.IsAny<int>() ))
      .Returns( (int id) => list.AsQueryable()
      .SingleOrDefault(x => x.Id.Equals(id)));
  
  //Mock the SaveOrUpdate using our own
  usersMockedRepo.Setup(x => x.SaveOrUpdate(It.IsAny<User>()))
      .Callback((User entity) => list.UpdateEntity(entity));
      
  //Mock the delete
  usersMockedRepo.Setup(x => x.Delete(It.IsAny<User>())).Callback((User entity) => list.Remove(entity));
  _membership = new Membership(usersMockedRepo.Object);
 }
}

We have now set up our very own mocked repository. We need a test for the IsValidUser method we left earlier. Let’s write a simple test case:

[TestCase]
public void CheckBasicAuthentication()
{
 var checkValidUser = _membership.IsValidUser("david@someplace.com", "a password");
 var checkInvalidUser = _membership.IsValidUser("david@someplace.com", "another password");

 Assert.AreEqual(checkInvalidUser, false);
 Assert.AreEqual(checkValidUser, true);
}

And that’s it! We have our test and can now create as many test cases as we want, all relying on a structure as simple as a list. There is one final thought here, which came to mind while reading this StackOverflow post: put the setup into a helper method, so we can reuse it in different test scenarios.

Please note that the following code can induce headaches 🙂

public static class MockExtensions
{
 public static void SetupIQueryableTypedRepository<T, TId>
  (this Mock<IRepositoryWithTypedId<T, TId>> mockObject, IEnumerable<T> source)
  where T : EntityWithTypedId<TId> where TId : IComparable
 {
  var list = new QueryableList<T, TId>(source);

  mockObject.Setup(x => x.GetAll()).Returns(list);
  mockObject.Setup(x => x.Get(It.IsAny<TId>())).Returns((TId id) => list.AsQueryable().SingleOrDefault(x => x.Id.Equals(id)));

  mockObject.Setup(x => x.SaveOrUpdate(It.IsAny<T>())).Callback((T entity) => list.UpdateEntity(entity));
  mockObject.Setup(x => x.Delete(It.IsAny<T>())).Callback((T entity) => list.Remove(entity));
 }

 public static void SetupIQueryableRepository<T>(this Mock<IRepository<T>> mockObject, IEnumerable<T> source)
  where T : Entity
 {
  var list = new QueryableList<T, int>(source);

  mockObject.Setup(x => x.GetAll()).Returns(list);
  mockObject.Setup(x => x.Get(It.IsAny<int>())).Returns((int id) => list.AsQueryable().SingleOrDefault(x => x.Id == id));
  
  mockObject.Setup(x => x.SaveOrUpdate(It.IsAny<T>())).Callback( (T entity) => list.UpdateEntity(entity) );
  mockObject.Setup(x => x.Delete(It.IsAny<T>())).Callback((T entity) => list.Remove(entity));
 }
}

Now, we can reduce our Setup method to this:

[TestFixtureSetUp]
public void SetupTestEnvironment()
{
 var usersMockedRepo = new Mock<IRepository<User>>();
 var users = new List<User> { new User{ Blocked = false, Email = "david@someplace.com", Password = "a password" } };
 
 usersMockedRepo.SetupIQueryableRepository(users);
 _membership = new Membership(usersMockedRepo.Object);
}

If you just want to test your S#arp Lite repositories, then just bring this extension into your code, set up your mocked repositories using this idea, and done! If you have any other thoughts, let me know in the comments!

Lucene2Objects goes public!

I’ve been asked a couple of times to release the Lucene2Objects code out into the wild, and that was the intention all along, but I didn’t have the time to do it, nor the bandwidth.
The new Lucene2Objects has a couple of new changes and is more lightweight than the first versions, since it no longer depends on Ninject. Feel free to browse the code, file issues, or just email me with any problem you have.
If you are using Lucene2Objects, let me know; I’d like to know whether it’s been useful out there.

All the best,
David

S#arpLite: “At least one ISessionFactory has not been registered with IoC”, Darn with the redirections…

Yesterday I installed ASP .NET MVC 4 beta so I could play with it for a couple of days. To be honest, I haven’t built a single app yet because I haven’t had the time, but I popped open Visual Studio 2010 today and booted an S#arp Lite project from the template. As usual, I made my move to Fluent NHibernate, but when I loaded the project I got the error in the title.

After spending a couple of hours digging through my machine and debugging, I went to the S#arp Lite discussion group and found this post from somebody having the same issue; the last answer finally hit the nail on the head:

Have you installed MVC4 on you machine? If yes, check reference in Init project, it references MVC4 whereas Web project references MVC3.

Mmmmm… I did… so I tried this…

<dependentAssembly>
 <assemblyIdentity name="System.Web.Mvc" publicKeyToken="31bf3856ad364e35" />
 <bindingRedirect oldVersion="1.0.0.0-4.0.0.0" newVersion="3.0.0.0" />
</dependentAssembly>
 

Voilà! It works now… Darn with the redirections…

Introducing Lucene2Objects

I’ve been playing with Lucene .NET for over two years now. It all started when I joined an NLP research group; my first task was to look into Lucene, since nobody was using it. I was baffled by the strength Lucene had; besides, the biggest players were using it! Now that I’ve gotten to know it a bit better, I see why so many people use it. Put simply: it’s awesome! However, Lucene does have a problem, which is the learning curve. Wrapping your head around the concepts of documents, queries and analyzers, and getting a pseudo-efficient search working, are a few of the issues with using Lucene on a project.

Enter Lucene2Objects. My basic idea is to provide a simple interface to Lucene for developers who want to add search annotations to their domain model. Now, let’s take the example of a system handling messages (of the “Hi! How do you do?” kind, not the WM_PAINT kind); it is most likely that users would want to search for something inside their messages. A (very) basic approach gives us a simple class:

public class Message
{
 public int Id { get; set; }

 public string Text { get; set; }

 public string Title { get; set; }

 public DateTime Sent { get; set; }
}

This is neat, but if I want to implement search I could use the services provided by my DB backend, such as Full Text Indexing in SQL Server (which is awesome by the way, but lacks some other cool stuff). The biggest problem with that is that we would then be fixing (or tightly coupling, for the fans of OOP/IoC/SOLID) the data store to the solution for finding text, which is almost certainly a bad thing.

Now, if we want to use Lucene, we need to do a bit of configuration and learn about indexing, tokenizers, analyzers and a long list of things that some folks (me included) find amusing, but others find really boring (not to mention those who find it daunting). But imagine a world where you could do something like this:

var iWriter = new IndexWriter(Environment.CurrentDirectory + @"\index");
var message = new Message { Id = 12, Sent = DateTime.Now, 
                            Text = "Some text on the message!", 
                            Title = "This is the title" 
              };
iWriter.AddEntity(message);
iWriter.Close();

Cool, uh? Just point at a folder and save. Nice! Well, and how would I search for stuff in that folder? Piece of cake:

var iReader = new IndexReader(Environment.CurrentDirectory + @"\index");
var messages = iReader.Search<Message>("text");

foreach (var message in messages) {
 Console.WriteLine("Message: {0}", message.Title);
}

Fine! And how does my model know where to search? What to index? What not to index? Well, validations posed a similar issue, so why not give it a similar solution? Just annotate away!

[SearchableEntity(DefaultSearchProperty = "Text")]
public class Message
{
 public int Id { get; set; }

 [Indexed]
 public string Text { get; set; }

 [Indexed]
 public string Title { get; set; }

 public DateTime Sent { get; set; }

 public DateTime? Read { get; set; }
}

If you liked that way of handling things with Lucene, you’ll love Lucene2Objects. Keep in mind, however, that I’m the only person working on this idea, so if you like it and want to put something into it, let me know! For now, I’ll leave Lucene2Objects as a package on NuGet so you can play with it. I’ll put it in my BitBucket repo this week, along with my scaffolders for SharpLite.

Using Sharp Lite with Fluent nHibernate

One thing I didn’t like too much when I first met S#arp Lite was the fact that Billy McCafferty decided to drop Fluent NHibernate as the de facto mapper for entities. There was a big discussion in a lot of places about NHibernate making its own fluent (or was it Loquacious?) API to map entities, and finally James Gregory, the man who gave us Fluent NHibernate, spoke on the matter of NHibernate making an API that looked a lot like his.

The conclusion is pretty simple: Fluent NHibernate is not dead, not even close. There are a lot of folks (me included) who like it and are not willing to let it go. Having said that, I wanted to integrate Fluent NHibernate into S#arp Lite, and just as Billy said, it is a dead simple thing to do; still, I will share it here.

The first thing you need to do is reference the Fluent NHibernate library from your NorthwindDemo.NHibernateProvider project, so you can use it there. After that, you can write your own mappings in the provider. If you don’t know what a fluent mapping is, you should read this wiki before you start here. Once we have our mappings, we need to change the NHibernate boot process found in NHibernateInitializer so we use Fluent NHibernate instead. Go to the NHibernateInitializer class and change the code to this:

public static Configuration Initialize()
{
    var cnf = Fluently.Configure()
        .Database(MySQLConfiguration.Standard.ConnectionString(
            x => x.Database("MyDb")
                  .Username("MyUserName")
                  .Password("MyPassword")
                  .Server("MyServer")))
        .Mappings(m => m.FluentMappings.AddFromAssemblyOf<CustomerMapping>());

    var configuration = cnf.BuildConfiguration()
        .Proxy(p => p.ProxyFactoryFactory<DefaultProxyFactoryFactory>())
        .CurrentSessionContext<LazySessionContext>();

    return configuration;
}
 

Notice that I’m not using SQL Express like the default install uses, I’m using MySQL instead. The magic of the thing is on the line:

.Mappings(m => m.FluentMappings.AddFromAssemblyOf<CustomerMapping>());
 

Here I’m telling FNH to load all the mappings in the assembly that contains CustomerMapping, which is one of my entity mappings. After that, we register the DefaultProxyFactoryFactory to manage lazy loading, which basically gets rid of the old NHibernate.Castle dependency so we don’t need an external library for proxy classes. With this configuration, you should be on your way to using Fluent NHibernate with your S#arp Lite project.

Finally, there is one more thing. I think (not so sure, though) that there is a version of Fluent NHibernate for NHibernate 3.2; however, if you are using an older one like me (I’m using v1.2, which looks for NHibernate 3.1), you will need to add an assembly redirect. It’s easy: just look in your web.config file for this section:

<runtime>
 <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
  <dependentAssembly>
   <assemblyIdentity name="System.Web.Mvc" publicKeyToken="31bf3856ad364e35" />
   <bindingRedirect oldVersion="1.0.0.0-2.0.0.0" newVersion="3.0.0.0" />
  </dependentAssembly>
 </assemblyBinding>
</runtime>
 

And change it for this one:

<runtime>
 <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
  <dependentAssembly>
   <assemblyIdentity name="System.Web.Mvc" publicKeyToken="31bf3856ad364e35" />
   <bindingRedirect oldVersion="1.0.0.0-2.0.0.0" newVersion="3.0.0.0" />
  </dependentAssembly>
  <dependentAssembly>
   <assemblyIdentity name="NHibernate" publicKeyToken="aa95f207798dfdb4" culture="neutral" />
   <bindingRedirect oldVersion="0.0.0.0-3.1.0.4000" newVersion="3.2.0.4000" />
  </dependentAssembly>
 </assemblyBinding>
</runtime>
 

The basic idea is to tell the system that when someone (FNH) asks for NHibernate in any version from 0.0.0.0 to 3.1.0.4000, the assembly to resolve is version 3.2.0.4000, so the binding is done properly. This should be it: you should have your own S#arp Lite project running on Fluent NHibernate!

Happy coding! And as usual, commenting is not forbidden 😉

Booting Sharp Lite and autogenerating Db with nHibernate

When facing a new project there are many things to consider, but there is no doubt that the words mentioned most, and given the most buzz, concern how to build a scalable and maintainable solution without building the next Amazon for our first version. However, making a solution that is able to grow over time and adapt to new requirements is not an easy task; it’s indeed very tricky business. That’s why there are frameworks: mainly because there are a bunch of guys willing to help those who can’t (or won’t) get into building their own foundation for a project.

And here is where S#arpArchitecture enters. S#arpArchitecture is a project started by Billy McCafferty, a developer/architect well versed in these matters, who made a super duper framework for building ASP .NET applications. The framework was consistent with several best practices he wrote about in a CodeProject article on how to accomplish a nice project setup using NHibernate. The thing with S#arpArchitecture is that it was somewhat coupled to NHibernate, although this didn’t seem to matter at that point, because NHibernate was (and IMHO is, but I won’t get into that) the best way to do ORM in a .NET project. However, times have changed: Entity Framework is now a strong project, not to mention that it’s handled by “the man”; we also have the NoSQL fever, and other things may be yet to come. So S#arpArchitecture had to evolve to remove the direct dependencies on NHibernate, but as with any old project moving away from its original idea, that has been difficult business. Most importantly, to set up an S#arpArchitecture install using something other than NHibernate, you have to be well versed in topics such as IoC, design patterns and NHibernate itself, so you know what to remove and what not to, among other dark arts.

Because of all these nasty reasons, Billy McCafferty decided to start yet another framework, called S#arp Lite. The intention here was to make a decent framework available to a broader audience of developers who didn’t have to be deeply versed in architecture matters.

So, if you want to make an ASP .NET MVC 3 app and are looking for a nice development framework, you should make a run for S#arp Lite and try it out. The first thing you’ll need is to read the blog post where Billy explains why he made S#arp Lite despite the fact that we already had S#arpArchitecture; once you’ve read that, you should also get the basics of S#arp Lite, also explained by Billy in this post. If, however, you don’t want to read any of those and just want to jumpstart into S#arp Lite, then read on:

Before anything else: the sample app is found inside the S#arp Lite file downloaded from GitHub, in the folder called Example; the project is named MyStore. Now, to set up the database for the sample app you can do one of two things: one I like, the other not so much. Before getting to that, make sure you have a SQL Server database to play with and get your connection string. If your DB is the SQL Server Express instance that comes with Visual Studio, your connection string will look like this one:

Data Source=localhost\sqlexpress;Initial Catalog=SampleDb;Integrated Security=True
 

The only thing that may change is the Initial Catalog, which is basically up to you. Now that we know the connection string, we need to tell S#arp Lite what it is, so open the web.config file found in MyStore.Web and look for the lines that say:

<connectionStrings>
   <add name="MyStoreConnectionString" connectionString="YOUR_CONNECTION_STRING" />
</connectionStrings>
 

And yes, you guessed it: change YOUR_CONNECTION_STRING to your actual connection string.

Now we just need to set up our DB, for which you have two options. The first is the simplest: use a SQL Server administration tool (the Management Studio Express included with SQL Server Express 2008 is just fine) and run the SQL file found inside the example project in the folder called “MyStore.DB”, et voilà! The project can now be run.

As a side note, the last time I downloaded S#arp Lite (v0.42), when setting up the sample project and compiling, you’d get 11 errors about tests and test classes that cannot be found. Don’t despair! Just go into MyStore.Tests, remove the reference to nunit.framework and add it again; the assembly should be in the lib folder outside of the project. Now you can run the demo without issues!

Now the other way, which I like a lot more, is to use NHibernate to generate your database, which is so cool! And since NHibernate is also very cool, this is a very simple process. Just go into the project called “MyStore.NHibernateProvider” and look for the class called NHibernateInitializer. Just like the name indicates, this fellow handles the initialization of NHibernate, and we will also generate the DB with it. Go into the class and add the method that will handle DB initialization:

public static void CheckAndBuildDb(Configuration cfg)
{
 var schemaValidator = new SchemaValidator(cfg);

 try {
  schemaValidator.Validate();
 }
 catch (Exception) {
  var schemaBuilder = new SchemaExport(cfg);
  //drop in case of old one
  schemaBuilder.Drop(false, true);
  schemaBuilder.Create(false, true);
 }
}
 

Without entering into unnecessary gory details, the idea is to check whether the existing DB is up to date and, if it is not, drop it and regenerate a new one. This does bring a bad side effect, which is the issue of migrating from one version to another. That problem can be fixed with some NHibernate-fu, but we won’t get into that for now. Build and run, and your database will be automatically generated from the mapped entities! Cool, uh?

This is pretty much it. I’ll make a follow-up post with some more on how to get going with S#arp Lite and Fluent NHibernate.

Automatically building FluentNHibernate mappings

As those who suffered through raw data access and moved to the ORM world know, one of the key benefits of using an ORM like NHibernate is productivity: having an object-oriented way to manage object persistence. However, not everything is roses and candles, and NHibernate has mappings. Those tedious XML files we used to write to map a class to its object representation were a real pain in the a**.

However, then we got Fluent NHibernate to save our days and everything got better: we had fluent mappings, which rock much more than the old way of writing mappings. For those of you who haven’t heard of Fluent NHibernate (FNH from now on), it’s a wonderful tool that allows you (among other really useful things) to map your data classes for NHibernate, with support for lazy loading and many other perks. It supports a great deal of the functionality provided by NHibernate, but through pretty classes instead of nasty XML. Ok, no more chatty stuff… Let’s see a simple example. Say you have this class:

public class Product
{
 public virtual int Id {get; set;}
 
 public virtual string Name {get; set;}
 
 public virtual float BasePrice {get; set;}
 
 public virtual Category Category {get; set;}
}
 

This class can be mapped to nHibernate using something called ClassMap provided by the most useful FNH, like this:

public class ProductMapping : ClassMap<Product>
{
 public ProductMapping()
 {
  Id( x => x.Id );
  
  Map( x => x.Name );
  Map( x => x.BasePrice );
  
  References( x => x.Category );
 }
}
 

Ok, this is pretty much it… if you want to read more on fluent mappings, be sure to check their wiki, which is a pretty good introduction, and if you are new to NHibernate I have to recommend Jason Dentler’s NHibernate 3.0 Cookbook (I know I’m missing a link here…), which is also pretty good.

Now, to the purpose of this post: in my interest in speeding up my development, I started applying the things I learned about scaffolding from an awesome series of blog posts Steven Sanderson made.

So, instructions… The first thing you need to do is install the MvcScaffolding package. Then you should create your own scaffolder. The steps are pretty much like this:

 Install-Package MvcScaffolding
 ....
 Scaffold CustomScaffolder AutoFluentMapper
 

Now you will get a PowerShell file and a T4 file. In your PowerShell file, write this:

[T4Scaffolding.Scaffolder(Description = "AutoFluentMapper. Scaffolding mappings")][CmdletBinding()]
param(       
    [parameter(Mandatory = $true, ValueFromPipelineByPropertyName = $true)]
    [string]$DomainFolder,
    [parameter(Mandatory = $false, ValueFromPipelineByPropertyName = $true)]
    [string]$BaseEntity,
    [string]$Project,
 [string]$CodeLanguage,
 [string[]]$TemplateFolders,
 [switch]$Force = $false
)

$namespace = (Get-Project $Project).Properties.Item("DefaultNamespace").Value;
$folder = Get-ProjectFolder $DomainFolder;

$entitiesNamespace = $namespace + "." + $DomainFolder;
$mappingsNamespace = $namespace + "." + "Mappings";

if (!$BaseEntity)
{
    $BaseEntity = "System.Object";
}

foreach ( $folderItem in $folder )
{
    $entityName = $folderItem.Name.Replace(".cs", "");
    $entityFullName = $entitiesNamespace + "." + $entityName;
    
    $entity = Get-ProjectType $entityFullName -Project $Project;
    
    if ( !$entity ) { Write-Host "Entity $entityFullName not found!"; return; }
    
    $mappingName = $entityName + "Mapping";
    
    
    Add-ProjectItemViaTemplate -OutputPath "Mappings\$mappingName" `
                           -Template "AutoFluentMapperTemplate" `
                           -TemplateFolders $TemplateFolders `
                           -SuccessMessage "Added Mappings output at {0}" `
                           -Model @{ 
                                    ModelType = $entity; EntityName = $entityName;
                                    Namespace = $mappingsNamespace; EntityNamespace = $entitiesNamespace;
                                    BaseEntity = $BaseEntity
                                    }`
                            -Project $Project -CodeLanguage $CodeLanguage -Force:$Force
                            
    
}
 

OK, I know you don’t toss this amount of code around without a good explanation, so I’ll try to explain myself. The first block defines the list of parameters our scaffolder accepts; compared to the PowerShell script you get when creating a new scaffolder, we only added a DomainFolder, used to search that folder for the domain entities, and a BaseEntity, used to check whether we have a normal class or model inheritance.
Then we have a small block of code to build the namespaces for the mappings, and finally, if no base entity name is provided, we assume our base model is System.Object.

After we’ve done all these small tweaks, all we need is to process each file found in the folder specified by DomainFolder. We get the name of the entity, then the full name including the namespace, and finally we add the new file, pretty much like this:

$entityName = $folderItem.Name.Replace(".cs", "");
$entityFullName = $entitiesNamespace + "." + $entityName;
$entity = Get-ProjectType $entityFullName -Project $Project;
if ( !$entity ) { Write-Host "Entity $entityFullName not found!"; return; }

Add-ProjectItemViaTemplate -OutputPath "Mappings\$mappingName" `
 -Template "AutoFluentMapperTemplate" `
 -TemplateFolders $TemplateFolders `
 -SuccessMessage "Added Mappings output at {0}" `
 -Model @{ 
  ModelType = $entity; EntityName = $entityName;
  Namespace = $mappingsNamespace; EntityNamespace = $entitiesNamespace;
  BaseEntity = $BaseEntity
 }`
 -Project $Project -CodeLanguage $CodeLanguage -Force:$Force
 

OK, now we need to work on our T4 template. If you don’t know what T4s are all about, consider checking out Oleg Sych’s posts on T4 templates, which are the way to start and, according to Scott Hanselman, a place with some good resources. Now, what will we do with our T4? We will generate our mappings by inspecting the model’s properties.

This is the part where I’ll have to apologize: when I generate my models I include an Id property, which is an int, so I didn’t inspect for that property and just hard-coded it. Feel free to make changes, and do tell me about new ideas…

The T4 template I built is based on one idea, take a model, check if it inherits from something other than the base model, inspect the properties and build the mappings accordingly:

var modelType = (CodeClass)Model.ModelType;
 
var baseEntityName = (string)Model.BaseEntity;
var parentEntities = (CodeElements)modelType.Bases;

var derivedFromBase = false;

foreach( CodeElement parentEntity in parentEntities ){
 if ( parentEntity.Name.Contains(baseEntityName))
  derivedFromBase = true;
}

var inheritable = derivedFromBase ? "ClassMap" : "SubclassMap";

var properties = new List<CodeProperty>();
var enumerables = new List<CodeProperty>();
var references = new List<CodeProperty>();
 

So, the idea was to separate the properties (or attributes of the domain entities) in 3 groups:

  • Properties: Those attributes that have a basic type, say strings, integers, DateTimes, and stuff like that.
  • Enumerables: Those attributes that define an IEnumerable collection, most commonly seen in one-to-many or many-to-many (not supported yet, sorry…) relationships
  • References: Those attributes that reference another entity of our domain
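
To make the split concrete, the classification pass inside the template might look roughly like this (a sketch only; the exact type checks in the real template may differ, and the three lists are the ones declared above):

foreach (CodeElement member in modelType.Members)
{
    var property = member as CodeProperty;
    if (property == null) continue;

    // EnvDTE exposes the property type as a string, e.g. "System.String"
    var typeName = property.Type.AsString;

    if (typeName.Contains("IEnumerable") || typeName.Contains("IList"))
        enumerables.Add(property);          // collections: one-to-many
    else if (typeName.StartsWith("System."))
        properties.Add(property);           // basic types: string, int, DateTime...
    else
        references.Add(property);           // references to other domain entities
}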

With these ideas in mind, and with a little T4 and EnvDTE magic, we can get a fully fledged template for our scaffolding needs. All we need now is to run it, much like this:

Scaffold AutoFluentMapper Entities -BaseEntity BaseDomainEntity
 

For those interested only in how it works: the Entities argument defines the name of the folder where I keep my entities, and their namespace is inferred from there (convention over configuration). The -BaseEntity parameter determines the base of your domain models, used to check for class inheritance in nHibernate models; if you omit this argument, System.Object is assumed. Besides these two arguments, this is just a regular scaffolder, and I’ve tried it already. Hope you enjoy it!

If you are using this code as part of your projects, send me an email or leave a comment so I know I’ve actually helped somebody, which is the main thing that encourages me to write more of this stuff. The source code can be downloaded here as a zip file.

Helpers vs Extenders

One of the most important skills we need in our life as developers is writing code that can be reused at some point. Reuse is a tricky word, because sometimes we find ourselves saying “it just needs a tiny modification” when it actually needs some major rewriting. Making reusable code is not just writing a snippet and then copying it around; it’s having libraries of code that don’t need any sort of modification (or, in the worst case, only small ones).

This takes me to the point of this post, which is considering extension methods when developing applications. Let’s look at the following problem: suppose that for a certain task we need to retrieve the names of the methods available in an object, and we need to make this as generic as possible, because we don’t know when we might need the same feature again.

The initial approach is to write a small static class that does the job; most people tend to call these classes helpers. Helpers are there to “help” our code by doing stuff that might be needed by several parts of the same application. For our problem, the helper code would look like this:

using System.Collections.Generic;
using System.Linq;

public static class ReflectionHelper
{
    public static List<string> GetMethods( object who )
    {
        // Return the names of the public methods exposed by the object
        return who.GetType().GetMethods().Select(m => m.Name).ToList();
    }
}

This approach works most of the time, but it is not nice from a refactoring point of view. We need to keep these helper objects to a minimum in any design we make, because by following this pattern we can end up in one of two scenarios.

The first is that we end up with several helper classes, each providing just a couple of functions, which makes for a really horrible refactoring nightmare. The second is that we end up with one monster helper holding a pile of methods, which will make the previous nightmare look like a pleasant dream.

There are two more plausible solutions to this problem. The first applies when retrieving the list of methods is part of the job of our objects; think of an ORM framework that uses reflection to retrieve the properties when mapping data onto an object. In that case we could add a base class, say ReflectableObject, which provides all the required functionality.

If we follow the ORM example, we would have a class diagram with a Model class and, to complicate things a bit, a Repository class. Both need to perform some sort of reflection on the objects they manage, so both would require the new base class. The class diagram would be like this:

This way works pretty well for the cases where we want to include this behavior into the object, but this does pose a few problems:

  • We would need to add an extra layer of classes to our design, thus complicating it
  • When only one class needs to perform the actual operation, adding another layer of complexity feels like killing a fly with a rail gun
  • If by any chance of fate we are working with sealed classes, inheritance is automatically discarded

Finally, there is one last option: extension methods. Extension methods provide a flexible and reusable way to extend objects that already exist and that we do not want to modify or inherit from.

We can think of extensions as a light inheritance, we say: “OK, we have this class and we want to add a few methods to it, but we don’t want to create a new class and inherit from it, so what do we do?”

We extend the class.

The code to extend a class is quite similar to the helper, but we mark the first parameter of the method with the this keyword, referencing the class we plan to extend, like this:

using System.Collections.Generic;
using System.Linq;

public static class ReflectionHelper
{
    public static List<string> GetMethods( this object who )
    {
        // Same implementation; the 'this' modifier is the only difference
        return who.GetType().GetMethods().Select(m => m.Name).ToList();
    }
}

It’s quite simple. Suppose our class lives in the ReflectionHelpers namespace; we would then be able to do this:

using ReflectionHelpers; 
using System; 

namespace MyNamespace 
{ 
    public class MyClass 
    { 
        public void Foo() 
        {
            object o = SomeWeirdOperation(); 
            var methods = o.GetMethods(); 
        } 
    } 
}

All objects can now call the GetMethods method! By doing this we successfully remove the extra layer from the design; we get the job done, and it doesn’t require new classes to be added. If we renamed the ReflectionHelper class to Reflexive it wouldn’t matter, because all that matters is that GetMethods extends the object class, as specified in the method signature.

Extension methods are the foundation of LINQ, and perhaps one of the most powerful tools in the .NET framework.
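
To see why, remember that every LINQ operator is just an extension method on IEnumerable<T>; a quick sketch:

using System;
using System.Collections.Generic;
using System.Linq;

class LinqExample
{
    static void Main()
    {
        var numbers = new List<int> { 1, 2, 3, 4 };

        // The familiar extension-method syntax...
        var evens = numbers.Where(n => n % 2 == 0).ToList();

        // ...is just sugar for a static call on System.Linq.Enumerable
        var sameEvens = Enumerable.Where(numbers, n => n % 2 == 0).ToList();

        Console.WriteLine(string.Join(",", evens)); // prints "2,4"
    }
}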

Simple class for data access using ADO.NET

Many data access layers we see today are meant for medium to big projects, and building the required models and loading the required DLLs in a small project sometimes feels like trying to kill a fly with a mortar. So, what do we do most of the time? Write some plain ADO.NET code to retrieve the data and that’s it; we don’t want to mess our code with nHibernate, Entity Framework or DataSets only to retrieve data from one (or two) tables. And if we change providers, all we have to do is change the class names.

This approach is simple, but with a few tweaks we can make it reusable, so we don’t have to rewrite the code as if it were any other plain database access. To accomplish this, we will mix a singleton with a dictionary, so we can talk to different databases at the same time. I’ve found this feature useful, since I use SQLite for caching my requests and MySQL to handle the big stuff that I need saved in the DB.

So, our singleton code would be like this one:

private static Dictionary<string, Database> _instances;

public static Database Get(string flavour)
{
    if ( _instances == null )
        _instances = new Dictionary<string, Database>();

    if (_instances.ContainsKey(flavour))
        return _instances[flavour];

    var dbProvider = DbProviderFactories.GetFactory(flavour);
    var connectionStrings = ConfigurationManager.ConnectionStrings;
    var cs = connectionStrings[flavour].ConnectionString
                              .Replace("|HomeDir|", Environment.CurrentDirectory)
                              .Replace("'", "\"");
    
    _instances.Add(flavour, new Database(dbProvider, cs));
    return _instances[flavour];
}

private DbConnection _connection = null;
private DbProviderFactory _factory;

private Database(DbProviderFactory factory, string connectionString)
{
    _factory = factory;
    _connection = _factory.CreateConnection();
    _connection.ConnectionString = connectionString;
}
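
Getting a connection for a given provider then becomes a one-liner. A hypothetical usage sketch, assuming both flavours are registered in the configuration file:

// The flavour strings must match the provider invariants and
// connection string names declared in the configuration file
var cache = Database.Get("System.Data.SQLite");
var store = Database.Get("MySql.Data.MySqlClient");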

As you can see, we use the DbProviderFactories class to load the required factory for our provider. A sample app.config file would look like this one (the connection strings are illustrative; point them at your own databases):

<configuration>

    <connectionStrings>
        <add name="System.Data.SQLite"
             connectionString="Data Source=|HomeDir|\cache.db" />
        <add name="MySql.Data.MySqlClient"
             connectionString="Server=localhost;Database=mydb;Uid=user;Pwd=secret" />
    </connectionStrings>

    <system.data>
        <DbProviderFactories>
            <add name="SQLite Data Provider"
                 invariant="System.Data.SQLite"
                 description=".NET Framework Data Provider for SQLite"
                 type="System.Data.SQLite.SQLiteFactory, System.Data.SQLite" />
            <add name="MySQL Data Provider"
                 invariant="MySql.Data.MySqlClient"
                 description=".NET Framework Data Provider for MySQL"
                 type="MySql.Data.MySqlClient.MySqlClientFactory, MySql.Data" />
        </DbProviderFactories>
    </system.data>

</configuration>
The basic idea is to load the factory required for our flavour from the XML configuration and forget about instantiating provider classes. This way, we can truly reuse our code. Now we need to mimic the three important methods of any ADO.NET command:

  • ExecuteScalar
  • ExecuteNonQuery
  • ExecuteReader

But we also need to support parameters, and to do so, we will simply pass them as a list of KeyValuePair objects. With that in mind, let’s create a utility method that builds the command for a given query:

private DbCommand CreateCommand(string query, 
                                params KeyValuePair<string, object>[] args)
{
    var cmd = _factory.CreateCommand();
    cmd.Connection = _connection;
    cmd.CommandText = query;

    foreach (var argument in args)
    {
        var param = _factory.CreateParameter();
        param.ParameterName = argument.Key;
        param.Value = argument.Value;

        cmd.Parameters.Add(param);
    }

    return cmd;
}

With this utility method, creating one of the proxies for ADO.NET is quite simple. The ExecuteScalar proxy would look like this one:

public object ExecuteScalar(string query, params KeyValuePair<string, object>[] args)
{
    var cmd = CreateCommand(query, args);

    try
    {
        _connection.Open();
        return cmd.ExecuteScalar();
    }
    catch (Exception)
    {
        return null;
    }
    finally
    {
        // Make sure the connection is released even if the query fails
        _connection.Close();
    }
}
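
Calling it is straightforward. A hypothetical example (the Products table and the parameter name are made up for illustration):

var db = Database.Get("System.Data.SQLite");

// Returns a single value, or null if the query fails
var count = db.ExecuteScalar(
    "SELECT COUNT(*) FROM Products WHERE BasePrice > @minPrice",
    new KeyValuePair<string, object>("@minPrice", 10.0));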

This one was quite easy, uh? Now, we won’t return a DbDataReader from ExecuteReader; it is simpler to read the values and return a list of dictionaries, where each Dictionary object represents a row.
Since I don’t return a DbDataReader, I thought the name ExecuteReader wouldn’t fit, so I changed it to ExecuteList. The basic layout of the method is like the previous one, but once we have the DbDataReader, we fill the rows like this:

while (reader.Read())
{
    var row = new Dictionary<string, object>();
    int fcount = reader.FieldCount;

    for (int i = 0; i < fcount; i++)
    {
        string fName = reader.GetName(i);
        object val = reader.GetValue(i);

        if (reader.IsDBNull(i))
            val = null;

        row.Add(fName, val);
    }

    result.Add(row);
}
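For completeness, the surrounding method might look roughly like this (a sketch of the layout described above, not the exact code):

public List<Dictionary<string, object>> ExecuteList(string query,
                                                    params KeyValuePair<string, object>[] args)
{
    var result = new List<Dictionary<string, object>>();
    var cmd = CreateCommand(query, args);

    try
    {
        _connection.Open();
        using (var reader = cmd.ExecuteReader())
        {
            // ...the row-filling loop shown above goes here...
        }
    }
    finally
    {
        _connection.Close();
    }

    return result;
}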

And we return the result, which is of type List<Dictionary<string, object>>. Cute, uh?

With these three methods we have a jumpstart for a nice (and very simple) data access class. But what would .NET be without types? We would expect a typed retrieval method, something like this:
public List<T> ExecuteList<T>(string query, params KeyValuePair<string, object>[] args)

Where T is (hopefully) a POCO class that we just want to fill. This is a lame attempt at doing some ORM, but as we said earlier, the idea is to create a small reusable class for data access in small apps, most likely for small jobs or personal projects.
Back to the problem: we need a method that receives a dictionary and returns an object of type T. Using the wondrous Reflection, we can do this:

private T ParseClass<T>(Dictionary<string, object> hash)
{
    Type t = typeof (T);
    var properties = t.GetProperties();
    var instance = Activator.CreateInstance(t);

    foreach (var property in properties)
    {
       if ( property.CanWrite && hash.ContainsKey(property.Name) )
        property.SetValue(instance, hash[property.Name], null);
    }

    return (T)instance;
}

I’m making a few assumptions here.

  1. First, I use classes to access data, not DataTables, since I find types much more convenient at development time. I think we can agree that this:

        var name = st.Name;
    

    Is much friendlier than this:

        var name = (string)dt.Rows[0]["Name"];
    

  2. Second, this magic method only works for simple classes that behave as POCOs, so we don’t have to worry about read-only properties, strange fields or highly complex structures; just some small properties. Keep in mind that for medium projects, using a small class like this could be (and most times IS) fatal to your application/design.

With that in mind, the last method for typed data retrieval would look like this:

public List<T> ExecuteList<T>(string query, params KeyValuePair<string, object>[] args)
{
    var rows = ExecuteList(query, args);
    return rows.Select(ParseClass<T>).ToList();
}
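
And that’s all it takes to get typed results back. A hypothetical example, assuming a table whose columns match the POCO’s property names:

// POCO whose property names match the column names returned by the query
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Somewhere in the app:
var db = Database.Get("System.Data.SQLite");
var products = db.ExecuteList<Product>(
    "SELECT Id, Name FROM Products WHERE Name LIKE @pattern",
    new KeyValuePair<string, object>("@pattern", "%phone%"));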

As I’ve said many times in this post, this class is only for small projects, but it does help to have something like it around: we might find several personal projects using the same data access code, and retyping it again and again is not going to make it better. Remember that any snippet of code you type over and over across projects, no matter how small, can be abstracted into a slightly more complex (and more useful) library.

I hope that this code can help someone out there, feel free to comment!