AutoMapper doesn’t actually save much time… and other reasons not to use it

I’m definitely a fan of keeping software architecture as simple and minimal as possible… KISS, YAGNI and so on. In line with this I try to keep the use of third-party tools to an absolute minimum and stick with out-of-the-box functionality as much as possible.

One tool I’ve used in the past based on its popularity and proposed benefits is AutoMapper. It’s used to automatically assign values from one object to another based on conventions such as matching property names. It’s used quite a lot to map entity classes to ViewModel/EditModel classes and, despite its author’s protestations, is often used to map ViewModel/EditModel classes back to entity classes too. Does AutoMapper save time?

AutoMapper doesn’t actually save much time

AutoMapper is supposed to save the time developers would otherwise spend typing out all the manual, so-called tedious left-right assignments in code. I’ve used AutoMapper on three projects, and each time so much configuration was required, or so much confusion about why something wasn’t working as expected was experienced by the developers, that after all was said and done it would have been quicker to type the code out manually. Besides, left-right assignment code is pretty much the easiest code any developer will ever have to write, everyone understands it (including the BAU team who did not write the app but who will support it), and it’s very rare to lose significant time on software projects because of the sheer volume of code we have to type. It’s the research, whiteboarding, collaboration and brainstorming required to figure out what code to write, not actually writing it, that costs the most time.

In the projects I worked on which used AutoMapper, tasks the team lost time on included debugging cryptic error messages, writing custom rules when something non-trivial was inevitably needed, explaining to junior developers how it all works (or is supposed to work) and, funnily enough, discussing the merits of using it or not. All these tasks added up to more than the time saved by not typing mappings manually, so I wouldn’t consider including AutoMapper in a build again unless the system was very large and the vast majority of mappings could be done without custom mappers. Other reasons I’d choose manual mappings over AutoMapper include…

Runtime errors instead of compile-time errors
Mappings are computed at runtime, so if you’re mapping object A to B and rename a property on one but not the other you won’t find out until runtime. AssertConfigurationIsValid can, however, be called at startup or from a unit test to make sure you catch these issues as early as possible.

No static analysis
Find all references won’t find references for many properties as they are not explicitly used in code.

Long term support?
This isn’t specific to AutoMapper, but AutoMapper is mainly written by a talented guy called Jimmy Bogard. Although he has a bit of help, it’s still a pretty small project which may not be around or supported in a couple of years. Given that software in the industry I’m working in now (banking) often has a lifetime of 20 years, this is a factor I needed to consider too, although again this is not specific to AutoMapper.

Recommendations for using AutoMapper

For me I no longer see the benefits of using it and I’d rather not create a dependency on someone else’s kit without it having validated benefits, but each to their own, so if you intend on using it keep your life simple by:

Not using custom mappers, as you end up writing almost as much code as you would to do things manually. Wrap the standard Mapper.Map source-to-destination call in an extension method (for example) and then do plain left-right assignment for anything that doesn’t map out of the box. This means you’re using fewer of the complex features of AutoMapper but still getting the majority of your properties mapped:

OrderDto dto = Mapper.Map<OrderDto>(order);
dto.CustomerFullName = order.FirstName + " " + order.LastName;
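
Those same two lines can be tucked away in an extension method so callers have a single call site. A minimal sketch, assuming the classic static Mapper API and a hypothetical Order/OrderDto pair:

public static class OrderMappingExtensions
{
    public static OrderDto ToDto(this Order order)
    {
        // Let AutoMapper handle the straightforward name-for-name matches...
        OrderDto dto = Mapper.Map<OrderDto>(order);

        // ...and keep any non-standard mappings as plain left-right assignments.
        dto.CustomerFullName = order.FirstName + " " + order.LastName;

        return dto;
    }
}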

Call AssertConfigurationIsValid at startup when in development mode. You can’t get compile-time errors, but you can check for valid mappings as soon as the application starts. Keep an eye on how long this takes however.
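
A minimal sketch of what that startup check might look like, assuming the static Mapper API and an MVC-style Application_Start:

protected void Application_Start()
{
    Mapper.Initialize(cfg =>
    {
        cfg.CreateMap<Order, OrderDto>();
        // register the rest of your maps here
    });

#if DEBUG
    // Throws immediately if any destination property has no matching source member,
    // so broken mappings surface at startup rather than deep inside a request.
    Mapper.AssertConfigurationIsValid();
#endif
}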

I’m guilty of doing this in the past myself, but avoid using AutoMapper to go from ViewModels to entities, as there is a lot of black-box stuff going on which quite often doesn’t sit well with Entity Framework and causes unintended consequences such as triggering inserts rather than updates. Simply put, your entities are too important to use AutoMapper on; do them manually…

FYI, not using AutoMapper is fine

Retrofitting AutoMapper has no advantages
In the not too distant past, during some refactoring time on an almost-done project, an external architect recommended to our team that we retrofit AutoMapper in after we had manually done all the left-right assignments without issue. The recommendation was based on the fact we were not using AutoMapper rather than on the need for a solution to a problem we were having (we were having none). There would have been absolutely no benefit, only cost, to using AutoMapper at that point. The point is that with AutoMapper and other third-party tools you should resist the urge to use them just because they are popular or the big books mention them. Consider that everything has a cost, and the choice to use something or not should depend on your project’s particulars, not on what is flavour of the month on StackOverflow or other developer hangouts online.

An entity object cannot be referenced by multiple instances of IEntityChangeTracker

If you see this exception, it’s likely you have created two Entity Framework db contexts and are tracking a particular entity from both of them when calling SaveChanges(). Although there may be some cases for wanting two db contexts, this is likely not what you want. There are two main approaches to solving this and ensuring you’re dealing with the one context…

Explicitly passing the same context into each of your service or repository classes. Rather than creating a new DB context in each of your service classes, create the context at a higher level (perhaps in a controller/base controller) and just pass it into your worker classes…

ContextDB _db = new ContextDB();
OrderService orderService = new OrderService(_db);
CustomerService customerService = new CustomerService(_db);
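
On the receiving end, the worker classes simply store whatever context they are given rather than newing one up themselves. A small sketch (OrderService here is hypothetical):

public class OrderService
{
    private readonly ContextDB _db;

    public OrderService(ContextDB db)
    {
        // The context is handed in from above, so every service taking part
        // in the request shares the same change tracker.
        _db = db;
    }
}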

Using an IoC container such as Ninject to manage the lifetime of the context class for you. Per-request scope is recommended for web applications, which means a single db context will be shared across the whole of a single request. Ninject is simple enough to configure; a per-request binding looks similar to the below…

kernel.Bind<ContextDB>().ToSelf().InRequestScope();
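
With that binding in place, Ninject hands the same ContextDB instance to everything it constructs during a request. A rough sketch, assuming Ninject’s MVC integration and the hypothetical service classes from above:

public class OrdersController : Controller
{
    private readonly OrderService _orderService;
    private readonly CustomerService _customerService;

    // Ninject resolves both services and injects the same per-request ContextDB
    // into each, so SaveChanges() only ever deals with a single change tracker.
    public OrdersController(OrderService orderService, CustomerService customerService)
    {
        _orderService = orderService;
        _customerService = customerService;
    }
}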

Difference between truncate and delete commands in SQL Server

Oldie but goodie interview question…

What’s the difference between the truncate and delete commands in SQL Server and when might you choose one over the other?

Let’s look at the characteristics of both…

Truncate

  • Doesn’t log each individual row deleted but rather logs page deallocations
  • Can be rolled back when in transaction
  • Can’t always be rolled back in full recovery mode without transaction
  • Resets identity seed
  • Can’t be used on tables referenced by foreign keys
  • All records removed, can’t include a where clause
  • Does not cause DELETE triggers (AFTER or INSTEAD OF) to fire
  • Locks whole table
  • Requires ALTER permission as it’s considered a DDL statement

Delete

  • Logs each individual delete so is slower and takes up more log space
  • Can be rolled back when in transaction
  • Can always be rolled back in full recovery mode with or without transaction
  • Doesn’t reset identity seed
  • Can be used with tables with FKs
  • All or some records can be removed as where clause can be included
  • Causes DELETE triggers (AFTER or INSTEAD OF) to fire
  • Locks only specific rows being deleted
  • Requires DELETE permission as it’s considered a DML statement

Which one you should choose depends completely on your use case, although unless your tables are huge, sticking with delete is the safest option due to the nature of what is logged (assuming full recovery mode) BUT…

Truncate CAN be rolled back too

If you can hit most of the points above the interviewer will be happy. Given that a common misconception is that truncate cannot be rolled back, brownie points will be available for pointing out that this is not the case. If the truncate command is issued in a transaction it can be rolled back just like the delete command can. Furthermore, even without a transaction it may be possible to ‘roll back’ a truncate, as truncate is a logged command and it doesn’t actually remove the underlying data but rather deallocates the space used so it becomes available for re-use. If you act fast and get hold of the .MDF files before the deallocated space is overwritten, then with the help of certain third-party tools you have a good chance of saving your data.
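
A quick sketch of the transactional case from C#, assuming a hypothetical dbo.Orders table and connection string; rolling back the transaction brings every truncated row back:

using (var connection = new SqlConnection("Server=.;Database=SalesDb;Trusted_Connection=True;"))
{
    connection.Open();

    using (var transaction = connection.BeginTransaction())
    {
        // TRUNCATE inside a transaction is logged (page deallocations) so it can be undone
        using (var truncate = new SqlCommand("TRUNCATE TABLE dbo.Orders", connection, transaction))
        {
            truncate.ExecuteNonQuery();
        }

        transaction.Rollback();
    }

    using (var count = new SqlCommand("SELECT COUNT(*) FROM dbo.Orders", connection))
    {
        Console.WriteLine(count.ExecuteScalar()); // original row count, nothing was lost
    }
}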

Duplicate rows returned when querying views with entity framework

If you’re querying a SQL Server view in Entity Framework it will attempt to use the first non-nullable column as the primary key. If this column is unique, all good; however, if it is not unique, Entity Framework will return duplicate rows whereby all rows with the same ‘key’ will contain the data from the first occurrence (the first row) of that ‘key’. This is most likely not the behaviour you want, but it’s easily fixed:

Preventing entity framework returning duplicate rows from SQL Server views

  1. If your underlying data has a primary key, be sure it’s selected first in the view.
  2. Add an artificial key just for this scenario using ROW_NUMBER() and return it first in the view.
  3. Use AsNoTracking() in your LINQ statement to advise EF to ignore key definitions and just return the data as is. If your data doesn’t have a primary key, AsNoTracking() is the simplest way to resolve this, and you don’t lose anything because, since you don’t update view records, you don’t need EF to track this data. Usage is simple:

dbcontext.view.AsNoTracking().Where(x => …)

What to look for when reviewing code for SOLID principle ‘violations’

If you’re tasked with doing a code review of a fellow developer’s code, or indeed are looking to improve an existing code base as a whole, one good set of principles to review the code against is the SOLID principles by Uncle Bob.

There are five SOLID principles in total: single responsibility, open/closed, Liskov substitution, interface segregation and dependency inversion.

Here’s what you should keep an eye out for if looking for violations of these principles:

Single responsibility principle – look for large classes which contain lots of functionality; chances are those classes could be broken up into smaller, more focused ones. When you’re dealing with already small classes the line is a little less clear, as really, what is the definition of ‘single’?

Open/closed principle – look for long blocks of if/else statements which check an object’s type and then do some similar action X (but differently) based on the result. When you add a new relevant type, the class with this if/else type-check code has to be modified to accommodate it, whereas the better way would be to use polymorphism, either through interfaces or abstract classes, as sketched below.
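
A hypothetical before-and-after to illustrate the smell. The first method branches on a shipping type and has to change every time a new one appears; the interface version lets new behaviour be added without touching existing code:

// Violation: adding a new shipping type means editing this method yet again
public decimal GetShippingCost(Order order)
{
    if (order.ShippingType == "Standard")
        return 4.99m;
    else if (order.ShippingType == "Express")
        return 9.99m;

    throw new ArgumentException("Unknown shipping type");
}

// Open/closed friendly: new behaviour arrives as a new implementation
public interface IShippingCostCalculator
{
    decimal GetCost(Order order);
}

public class StandardShippingCalculator : IShippingCostCalculator
{
    public decimal GetCost(Order order) { return 4.99m; }
}

public class ExpressShippingCalculator : IShippingCostCalculator
{
    public decimal GetCost(Order order) { return 9.99m; }
}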

Liskov substitution principle – this one may be a little tougher to spot, and indeed it is often the toughest principle for developers to grasp. Look for subclass methods which change the behaviour (even if only sometimes) of a base class method in such a way that consumers can’t call the derived method and have it ALWAYS behave as if it was the base class method. To find LSP violations look for things like the following (a short sketch follows the list):

Subclasses with overridden methods which throw NotImplementedException().

Subclass methods which enforce stricter rules on parameters than their base. For example… a base method accepts an integer as a parameter but the overridden method throws an exception if the integer is not positive.

Methods which appear from a signature point of view to operate on a base class (or interface) but then within the method some type checking occurs. 
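
A small hypothetical example of the first smell; anyone holding an OrderRepository reference can no longer rely on Save working, which is exactly the substitutability LSP is about:

public class OrderRepository
{
    public virtual void Save(Order order)
    {
        // persist the order
    }
}

public class ReadOnlyOrderRepository : OrderRepository
{
    // Consumers written against OrderRepository will break when handed this subclass
    public override void Save(Order order)
    {
        throw new NotImplementedException();
    }
}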

Interface segregation principle – look for large (fat) interfaces with a lot of unfocused methods; also look at interfaces whose implementations throw NotImplementedException() a lot.

Dependency inversion principle – look for a lot of new statements. New statements mean concrete instantiations, which mean coupling. It is better to use a dependency injection framework, or even (if you must) poor man’s dependency injection, and push the creation of your objects higher up the stack while simply passing down interfaces which provide a functionality contract but make no assertions about how that functionality is implemented.
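
A brief sketch of what that looks like in practice; IOrderRepository and OrderProcessor are hypothetical names:

public interface IOrderRepository
{
    Order GetById(int id);
}

public class OrderProcessor
{
    private readonly IOrderRepository _orders;

    // The caller (or an IoC container) decides which concrete repository is used;
    // OrderProcessor only knows about the contract, so there is no new statement here.
    public OrderProcessor(IOrderRepository orders)
    {
        _orders = orders;
    }
}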

Related Links

Overview of the five different principles. I always seem to be reading this article come interview time.