You can use any redirect extension you like, but the one I’ll show below is called Redirector. When you click on the error or warning links in the Error List box (above), Visual Studio opens https://bingdev.cloudapp.net/BingUrl.svc/Get?selectedText=QUERY. We can redirect this to https://stackoverflow.com/search?q=QUERY by creating a new redirect like the one below.
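A wildcard rule similar to the following should do it; the exact field names may differ slightly between Redirector versions:

    Include pattern: https://bingdev.cloudapp.net/BingUrl.svc/Get?selectedText=*
    Redirect to:     https://stackoverflow.com/search?q=$1
    Pattern type:    Wildcard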
The results will now be shown directly on stackoverflow.com, which is useful for me at least, as the vast majority of technical solutions I find online come from SO.
I’m definitely a fan of keeping software architecture as simple and as minimal as possible… KISS, YAGNI, etc. In line with this I try to keep the use of third-party tools to an absolute minimum and stick with out-of-the-box stuff as much as possible.
One tool I’ve used in the past, based on its popularity and proposed benefits, is AutoMapper. It’s used to automatically assign values from one object to another based on conventions such as matching property names. It’s used quite a lot to map entity classes to viewModel/EditModel classes and, despite its author’s protestations, is often used to map viewModel/EditModel classes back to entity classes too. Does AutoMapper save time?
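For context, basic usage looks similar to the below. This is a minimal sketch using the classic static API, with hypothetical Customer and CustomerViewModel classes whose property names match:

    // Register the conventional mapping once (typically at startup)...
    Mapper.CreateMap<Customer, CustomerViewModel>();

    // ...then matching properties are copied across automatically.
    var viewModel = Mapper.Map<Customer, CustomerViewModel>(customer);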
AutoMapper doesn’t actually save much time
AutoMapper is supposed to save the time developers would spend typing out all the manual, so-called tedious, left-right assignments in code. I’ve used AutoMapper on three projects, and each time so much configuration was required, or so much confusion about why something wasn’t working as expected was experienced by the developers, that after all was said and done it would have been quicker to type the code out manually. Besides, left-right assignment code is pretty much the easiest code any developer is ever going to have to write; everyone understands it (including the BAU team who did not write the app but who will support it), and it’s very rare to lose significant time on software projects because of the sheer volume of code we have to type. It’s always the research, whiteboarding, collaboration and brainstorming required to figure out what code to write, not actually writing it, that costs the most time.
In the projects I worked on which used AutoMapper, tasks the team lost time on included debugging cryptic error messages, writing custom rules when something non-trivial was inevitably needed, explaining to junior developers how it all works (or is supposed to work) and, funnily enough, discussing the merits of using it or not. All these tasks added up to more than the time saved by not typing mappings manually, so I wouldn’t consider including AutoMapper in a build again unless the system was very large and the vast majority of mappings could be done without custom mappers. Other reasons I’d choose manual mappings over AutoMapper include…
Runtime errors rather than compile-time errors. Mappings are computed at runtime, so if you’re mapping object A to B and rename a property on one but not the other, you won’t know until runtime. AssertConfigurationIsValid can however be called at startup or from a unit test to ensure you catch these issues as early as possible.
No static analysis. Find All References won’t find references for many properties as they are not explicitly used.
Long-term support? This isn’t specific to AutoMapper, but AutoMapper is mainly written by a talented guy called Jimmy Bogard. Although he has a bit of help, it’s still a pretty small project which may not be around or supported in a couple of years. Given that software in the industry I’m working in now (banking) often has a lifetime of 20 years, this is a factor I needed to consider too, although again this is not specific to AutoMapper.
Recommendations for using AutoMapper
For me, I no longer see the benefits of using it, and I’d rather not create a dependency on someone else’s kit without it having validated benefits, but each to their own, so if you intend on using it, keep your life simple by:
Not using custom mappers, as you end up writing almost as much code as you would to do things manually. Wrap the standard Mapper.Map source-to-destination call in an extension method (for example) and then do plain left-right assignment for anything that doesn’t map out of the box. This means you’re using fewer of the complex features of AutoMapper but still getting the majority of your properties mapped:
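A minimal sketch of this approach, assuming hypothetical Order and OrderViewModel classes and the classic static API:

    // Conventionally-named properties are mapped by AutoMapper; anything
    // non-trivial is assigned manually rather than via a custom resolver.
    public static class OrderMappingExtensions
    {
        public static OrderViewModel ToViewModel(this Order order)
        {
            var model = Mapper.Map<Order, OrderViewModel>(order);

            // Plain left-right assignment for the awkward cases.
            model.CustomerDisplayName = order.Customer.FirstName + " " + order.Customer.LastName;

            return model;
        }
    }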
Calling AssertConfigurationIsValid at startup when in development mode. You can’t get compile-time errors, but you can check for valid mappings immediately at runtime. Keep an eye on how long this takes, however.
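A minimal sketch, assuming an ASP.NET Global.asax and the classic static API (newer AutoMapper versions expose the same check on a MapperConfiguration instance):

    protected void Application_Start()
    {
    #if DEBUG
        // Throws an exception listing any destination members
        // that have no matching source member.
        Mapper.AssertConfigurationIsValid();
    #endif
    }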
In the not-too-distant past, during some refactoring time on an almost-done project, an external architect recommended to our team that we retrofit AutoMapper after we had already done all the left-right assignments manually without issue. The recommendation was based on the fact that we were not using AutoMapper rather than on the need for a solution to a problem we were having (we were having none). There would have been absolutely no benefit, only cost, to using AutoMapper at that point. The point is, with AutoMapper and other third-party tools, you should resist the urge to use them just because they are popular or the big books mention them. Consider that everything has a cost, and the choice to use something or not should depend on your project’s particulars, not what is flavour of the month on Stack Overflow or other developer hangouts online.
If you see the ‘An entity object cannot be referenced by multiple instances of IEntityChangeTracker‘ exception, it’s likely you have created two Entity Framework db contexts and are tracking a particular entity from both of them when calling SaveChanges(). Although there may be some cases for wanting two db contexts, this is likely not what you want. There are two main approaches to solving this and ensuring you’re dealing with the one context..
Explicitly passing the same context into each of your service or repository classes. Rather than creating a new db context in each of your service classes, create the context at a higher level (perhaps in a controller/base controller) and just pass it into your worker classes…
    ContextDB _db = new ContextDB();
    OrderService orderService = new OrderService(_db);
    CustomerService customerService = new CustomerService(_db);
Using an IoC container such as Ninject to manage the lifetime of the context class for you. Per-request is recommended for web applications, which means a single db context will be shared across the whole of a single request. Ninject is simple enough to configure; a per-request binding looks similar to the below…
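A minimal sketch, assuming the Ninject.Web.Common package (which provides InRequestScope() and generates a NinjectWebCommon bootstrapper containing a RegisterServices method):

    private static void RegisterServices(IKernel kernel)
    {
        // One ContextDB instance per HTTP request, shared by every
        // class that takes a ContextDB in its constructor.
        kernel.Bind<ContextDB>().ToSelf().InRequestScope();
    }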
What’s the difference between the truncate and delete commands in SQL Server and when might you choose one over the other?
Let’s look at the characteristics of both…

Truncate

Doesn’t log each individual row deleted but rather logs page deallocations
Can be rolled back when in a transaction
Can’t always be rolled back in full recovery mode without a transaction
Resets the identity seed
Can’t be used on tables referenced by foreign keys
All records are removed; a WHERE clause can’t be included
Does not cause DELETE, INSTEAD OF or AFTER triggers to fire
Locks the whole table
Requires ALTER permissions as it’s considered a DDL statement
Delete

Logs each individual row deleted, so it is slower and takes up more log space
Can be rolled back when in a transaction
Can always be rolled back in full recovery mode, with or without a transaction
Doesn’t reset the identity seed
Can be used on tables referenced by foreign keys
All or some records can be removed, as a WHERE clause can be included
Causes DELETE, INSTEAD OF or AFTER triggers to fire
Locks only the specific rows being deleted
Requires DELETE permissions as it’s considered a DML statement
Which one you should choose depends completely on your use case, although unless your tables are huge, sticking with DELETE is the safest option due to the nature of what is logged (assuming full recovery mode), BUT…
Truncate CAN be rolled back too
When asked for the difference between the TRUNCATE and DELETE commands in SQL Server, if you can hit most of the points above the interviewer will be happy. Given that a common misconception is that a truncate cannot be rolled back, brownie points will be available for pointing out that this is not the case. If the TRUNCATE command is issued in a transaction, it can be rolled back just like the DELETE command can. Furthermore, even without a transaction it may be possible to ‘roll back’ a truncate too, as truncate is a logged command and it doesn’t actually remove the underlying data but rather deallocates the space used so it becomes available for re-use. If you act fast and get hold of the .MDF files before the deallocated space is overwritten, then with the help of certain third-party tools you have a good chance of saving your data.
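A quick demonstration of the transactional case; MyTable is a hypothetical table:

    BEGIN TRANSACTION;
        TRUNCATE TABLE MyTable;
        SELECT COUNT(*) FROM MyTable; -- 0: rows are gone inside the transaction
    ROLLBACK TRANSACTION;
    SELECT COUNT(*) FROM MyTable;     -- original count: the truncate was undone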
If you’re querying a SQL Server view in Entity Framework, it will attempt to use the first non-nullable column as the primary key. If this column is unique, all good; however, if it is not unique, Entity Framework will return duplicate rows whereby all rows with the same ‘key’ will have the data from the first occurrence (the first row) of that ‘key’. This is most likely not the behaviour you want, but it’s easily fixed:
Preventing entity framework returning duplicate rows from SQL Server views
If your underlying data has a primary key, be sure it’s selected first in the view.
Add an artificial key just for this scenario using ROW_NUMBER() and return it first in the view (see the sketch after this list).
Use AsNoTracking() in your LINQ statement to advise EF to ignore key definitions and just return the data as-is. If your data doesn’t have a primary key, AsNoTracking() is the simplest way to resolve this, and you don’t lose anything; since you don’t update view records, you don’t need EF to ever track this data. Usage is simple:
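A minimal sketch, assuming a hypothetical OrderSummaries view mapped in your context:

    // AsNoTracking() skips change tracking and identity resolution, so each
    // row is materialised exactly as the view returned it.
    var summaries = db.OrderSummaries
                      .AsNoTracking()
                      .ToList();

And for option 2, a sketch of the artificial key; the view and column names are hypothetical, and the ISNULL wrapper encourages SQL Server to report the computed column as non-nullable so EF picks it up as the key:

    CREATE VIEW dbo.OrderSummaries AS
    SELECT ISNULL(ROW_NUMBER() OVER (ORDER BY o.OrderDate), 0) AS RowId, -- artificial unique key, returned first
           o.OrderDate,
           o.CustomerName
    FROM dbo.Orders o;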