A lesson in bad exception practice

Consider this stack trace from an exception thrown by Entity Framework 6.1.3:

Lucky for me, I know which tiny change I made to the code I am working with to induce the exception, and therefore I am pretty confident that I know what caused it. Had I not been so fortunate, this stack trace would have provided almost no help at all with diagnosing what is actually wrong.

Let’s try and decompile the topmost method, GetStoreTypeFromName, which is the one that apparently calls .Single(..) on a sequence without any matches. It looks like this:
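Roughly like this sketch – a reconstruction from memory of EF 6.1.3, where `PrimitiveType` is a stand-in for EF's store type metadata, so the exact signature may differ. The point is the bare `.Single(..)` call:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// stand-in for EF's store type metadata
public class PrimitiveType
{
    public string Name { get; set; }
}

public static class TypeLookup
{
    public static PrimitiveType GetStoreTypeFromName(string name, IEnumerable<PrimitiveType> storeTypes)
    {
        // when no store type matches, this throws a bare
        // "Sequence contains no matching element" - with no hint
        // of which name we were looking for
        return storeTypes.Single(t => t.Name.Equals(name, StringComparison.OrdinalIgnoreCase));
    }
}
```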

This, ladies and gentlemen, is in my humble opinion almost unforgivable: it is simply not OK to call .Single(..) on stuff without catching and re-throwing the exception along with information about what we were looking for.

A much improved version of the method would have looked like this:
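For instance something like this – again a sketch, with a hypothetical `PrimitiveType` standing in for EF's store type metadata:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// stand-in for EF's store type metadata
public class PrimitiveType
{
    public string Name { get; set; }
}

public static class TypeLookupImproved
{
    public static PrimitiveType GetStoreTypeFromName(string name, IEnumerable<PrimitiveType> storeTypes)
    {
        try
        {
            return storeTypes.Single(t => t.Name.Equals(name, StringComparison.OrdinalIgnoreCase));
        }
        catch (Exception exception)
        {
            // keep the original exception as InnerException,
            // but say what we were looking for
            throw new InvalidOperationException(
                string.Format("Could not get exactly one store type with the name '{0}'", name),
                exception);
        }
    }
}
```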

this way not losing the original exception, while still providing information from the call site about what went wrong.

An even better version of the method would provide me with a list of available “store types”, which often provides some great hints about what is wrong:
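Again a sketch with a stand-in `PrimitiveType`, but the idea carries over directly:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// stand-in for EF's store type metadata
public class PrimitiveType
{
    public string Name { get; set; }
}

public static class TypeLookupEvenBetter
{
    public static PrimitiveType GetStoreTypeFromName(string name, IEnumerable<PrimitiveType> storeTypes)
    {
        var types = storeTypes.ToList();
        try
        {
            return types.Single(t => t.Name.Equals(name, StringComparison.OrdinalIgnoreCase));
        }
        catch (Exception exception)
        {
            // tell the caller what we looked for AND what was available
            throw new InvalidOperationException(
                string.Format("Could not get exactly one store type with the name '{0}' - available store types: {1}",
                    name, string.Join(", ", types.Select(t => t.Name))),
                exception);
        }
    }
}
```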

The next method in the stack trace is called ConfigureColumn, and sadly, if you decompile it, it turns out to be just as stingy when it comes to helping us understand what went wrong, even though it has all kinds of valuable things it could tell us, like:

  • which type it was trying to map
  • which property could not be mapped
  • which column in the database it was trying to map to

etc. etc. etc.

Please don’t write code like that.

Tell, Don’t Ask

or “How big is an interface?”… Uuuh, what?

Well, consider these two interfaces:
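The interface names come from the text, but the members here are invented to illustrate the point: ISomeService1 exposes many methods on primitive types, while ISomeService2 exposes a single method that drags a whole mutable object graph along with it.

```csharp
using System.Collections.Generic;

// many members, but only primitives go in and out
public interface ISomeService1
{
    void ProcessOrder(string orderId);
    void CancelOrder(string orderId);
    int GetPendingOrderCount();
    bool IsRunning { get; }
}

// one member - but it hands out an entire mutable object graph,
// along with all the assumptions that travel with it
public interface ISomeService2
{
    OrderContext GetCurrentContext();
}

public class OrderContext
{
    public List<Order> PendingOrders { get; set; }          // may callers mutate this list?
    public IDictionary<string, Order> OrdersById { get; set; } // is it kept in sync with the list?
}

public class Order
{
    public string Id { get; set; }
    public decimal Amount { get; set; } // who is allowed to change this, and when?
}
```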

– which is bigger?

If you think that ISomeService1 is the bigger interface, then there’s a good chance you’re wrong! Why is that?

It’s because an interface is not just the signatures of the methods and properties it exposes, it also consists of the types that go in and out of its methods and properties, and the assumptions that go along with them!

This is a fact that I see people often forget, and this is one of the reasons why everything gets easier if you adhere to the Tell, Don’t Ask principle. And by “everything”, I almost literally mean “everything”, but one thing stands out in particular: Unit testing!

Consider this imaginary API:
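A guess at its shape, based on the names used later in the post (a Values class with Value1/Value2 properties, and an ISomeKindOfValueStore with a GetValues method):

```csharp
// hypothetical query-style API
public class Values
{
    public int Value1 { get; set; }
    public int Value2 { get; set; }
}

public interface ISomeKindOfValueStore
{
    Values GetValues(string name);
}
```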

Now, in our system, on various occasions we need to change the values for a particular name. We start out by doing it like this:
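Something like the following hypothetical consumer – the query-style API is repeated here so the snippet is self-contained:

```csharp
// hypothetical query-style API
public class Values
{
    public int Value1 { get; set; }
    public int Value2 { get; set; }
}

public interface ISomeKindOfValueStore
{
    Values GetValues(string name);
}

public class SomeTask
{
    readonly ISomeKindOfValueStore valueStore;

    public SomeTask(ISomeKindOfValueStore valueStore)
    {
        this.valueStore = valueStore;
    }

    public void Run()
    {
        // query the store, and then mutate the object we got back -
        // silently relying on it being the instance the store keeps
        var values = valueStore.GetValues("someName");
        values.Value1++;
        values.Value2 = 42;
    }
}
```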

Obviously, if GetValues returns a reference to a mutable object, and this object is the one kept inside the value store, this will work, and the system will chug along nicely.

The problem is that this has exposed much more than necessary from our interface, including the assumption that the returned Values reference points to the cached object, the assumption that the Value1 and Value2 properties can be set, etc.

Now, imagine how tedious it will be to test that the Run method works, because we need to stub an instance of Values as a return value from GetValues. And when multiple test cases need to exercise the Run method to test different aspects of it, we need to make sure that GetValues returns a dummy object every time – otherwise we will get a NullReferenceException.

Now, let’s improve the API and change it into this:
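A hypothetical command-style version – the method names are invented, but they capture the shape of the change:

```csharp
// hypothetical command-style API: callers tell the store what to do,
// and no mutable objects are handed out
public interface ISomeKindOfValueStore
{
    void IncrementValue1(string name);
    void SetValue2(string name, int value2);
}
```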

adhering to the “Tell, Don’t Ask” principle, allowing a potential usage scenario like this:
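For example like this, assuming a command-style interface with hypothetical IncrementValue1/SetValue2 methods (repeated here so the snippet is self-contained):

```csharp
// hypothetical command-style API
public interface ISomeKindOfValueStore
{
    void IncrementValue1(string name);
    void SetValue2(string name, int value2);
}

public class SomeTask
{
    readonly ISomeKindOfValueStore valueStore;

    public SomeTask(ISomeKindOfValueStore valueStore)
    {
        this.valueStore = valueStore;
    }

    public void Run()
    {
        // tell the store what we want done - no objects handed out,
        // nothing to mutate, nothing to stub in tests
        valueStore.IncrementValue1("someName");
        valueStore.SetValue2("someName", 42);
    }
}
```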

As you can see, I have changed the API from a combined query/command thing to a pure command thing which appears to be much cleaner to the eye – it actually reveals exactly what is going on.

And testing has suddenly become a breeze, because our mocked ISomeKindOfValueStore will just record that the call happened, allowing us to assert it in the test cases where that is relevant, ignoring it in all the other test cases.

Another benefit is that this coding style stands the test of time better, as it is more tolerant of change – the implementation of ISomeKindOfValueStore may change an in-memory object, update something in a database, send a message to an external system, etc. A command API is just easier to change.

Therefore: Tell, Don’t Ask.

How to loop good

…or “Short rant on why C-style for-loops are almost always unnecessary”…

If you’ve been with C# for the last couple of years, your coding style has probably evolved in a more functional direction since the introduction of C# 3/.NET 3.5 with all its LINQ and lambda goodness.

In my own code, I have almost completely dismissed the old C-style for-loop in favor of .Select, .Where etc., making the code way more readable and maintainable.

Readability and maintainability are improved because long runs of for/foreach loops and collection-juggling can now be replaced by one-liners of selection, mapping, and projection.

One place, however, where I often see the for-loop used, is when people need to build sequences of repeated objects, or where some variable is incremented for each iteration.

Now, here is a message to those developers: Please realize that what you actually want to do is to repeat an object or map a sequence of numbers to a sequence of objects.

Having realized that, please just implement it like that:
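For repeating an object, `Enumerable.Repeat` says exactly that. One caveat worth a comment: it repeats the *same* reference, so use `Enumerable.Range` + `Select` when each element must be a fresh instance:

```csharp
using System.Linq;

// ten references to the same string:
var tenBananas = Enumerable.Repeat("banana", 10).ToList();

// ten fresh objects (Repeat would give ten references to ONE object):
var tenValues = Enumerable.Range(0, 10).Select(_ => new object()).ToList();
```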

Same thing goes for sequences of elements where something is incremented:
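Here `Enumerable.Range` makes the intent explicit – map a sequence of numbers to a sequence of objects:

```csharp
using System.Linq;

// instead of a for-loop with a counter:
var messages = Enumerable.Range(0, 10)
    .Select(i => string.Format("Message no. {0}", i))
    .ToList();
```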

There are just too many for-loops in the world!

Not the same thing, but still somehow related to that, is when people need to collect stuff from multiple properties and ultimately subject each value to the same treatment – please just implement it like that:
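A sketch of what that can look like – the `Customer` properties and the "treatment" are made-up names, the point is collecting the values into one sequence first:

```csharp
using System;
using System.Linq;

public class Customer
{
    public string BillingAddress { get; set; }
    public string ShippingAddress { get; set; }
    public string ContactAddress { get; set; }
}

public static class AddressCollector
{
    public static string[] CollectAddresses(Customer customer)
    {
        // collect the values from several properties into one sequence,
        // then give them all the same treatment
        return new[]
            {
                customer.BillingAddress,
                customer.ShippingAddress,
                customer.ContactAddress,
            }
            .Where(address => address != null)
            .Select(address => address.Trim().ToUpperInvariant())
            .ToArray();
    }
}
```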


Big brownfield codebase, NDepend, and a Kaizen mind

I have been meaning to write a post for quite a while now on how my team and I got up and running with NDepend on a big legacy codebase. So, here goes:


I am currently employed as a developer on a mortgage bond administration system. The project was started almost 6 years ago when SOA was (apparently!!) incredibly hot, so it’s got a lot of web services, which were some architect’s interpretation of service-orientation.

The aforementioned “architect” left the project pretty early though, and after that our team has consisted of 4 to 8 people in various configurations.

This, combined with a couple of hectic deadlines along the way, has led to a big, fragmented codebase: some areas have been left almost untouched for years, some are big balls of mud that everyone currently on the team fears to touch, and others bear the marks of developers having been in a hurry when they wrote the code, etc.

A few times along the way, the idiomatic way of implementing new stuff has been radically changed to make things better. One example is that instead of orchestrating a bunch of web services RPC style, all new functionality is now being implemented in a single web service, messaging style.

Another thing is that the system was not built with an IoC container in mind, but that did not prevent us from introducing Castle Windsor. That means that services must be pulled from Windsor, service location style – and even though we try to encourage people to confine their calls to the service locator to a few well-defined places, some developers did not understand why, and went ahead and used the locator in all kinds of places.

To put it like we do in Danish when someone owes more than their mortgage is worth: We have technical debt rising above our chimney!

What to do?

When you have so many problems that you don’t know where to begin, how do you solve your problems then?

Well, the Japanese have a word, Kaizen, which beautifully captures the concept of constantly improving things.

It’s a philosophy I try to keep in mind all day long when I write code, modify code, and even when I just look at code: I constantly make small changes, remove cruft, and refactor towards better, more idiomatic ways. How can I do that without breaking the code? Simple: we have automated tests!

NDepend is a really cool tool that lets us achieve Kaizen on a higher level 🙂

Quick introduction if you don’t know anything at all about NDepend

In NDepend, you create a project and add all the assemblies and .PDBs from your build. Now NDepend can analyze your assemblies and tell you all kinds of interesting stuff about your code.

E.g. it can show you a dependency matrix, which visualizes dependencies between assemblies and namespaces. That way, you can see if your application adheres to a layered structure, or if dependencies are circular.

But the feature I want to focus on here is CQL rules. CQL is Code Query Language, which resembles SQL a bit but is used to query assemblies, types, methods, fields, etc. Using CQL, I can also generate warnings when certain conditions are not met.

What makes this extremely interesting in our case, is that we can run our CQL rules from our automated build, via CruiseControl.NET, which integrates nicely with the automatedness (is that a word?) that is required for an agile team.

A few examples

Simple stuff – people not adhering to our naming conventions

I am a firm believer that code must be clean, streamlined, and uniform, in order to reduce the perceived noise when reading it. For some reason, a couple of team members re-installed their ReSharper and got their R# naming rules reset, which resulted in numerous cases where field names were prefixed with _.

To address this annoyance, I created the following CQL warning:
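The rule went along these lines – a sketch from memory, so the exact CQL predicate names may differ between NDepend versions:

```
WARN IF Count > 0 IN SELECT FIELDS 
WHERE NameLike "^_" 
AND !IsGeneratedByCompiler
```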

– which from now on will yield a warning and the names and locations of violations in the build report in CruiseControl.NET. Nifty!

More annoying stuff – people using the service locator to instantiate stuff in tests!

Building a pretty complex system with a complex domain can lead to complicated set-ups when “unit” testing. E.g. when a payment is recorded in the system, a payment distributor will generate money transactions for interest, principal reduction, and more, in accordance with some particular customer’s strategy. So, when the SUT needs a couple of payments to have been made, some of our developers took a shortcut in their test set-ups and just pulled an IPaymentDistributor from our service locator, which – unfortunately – is fully configured and functional in our tests.

The real problem is of course that a great portion of our core modules are so strongly coupled that they cannot be tested in isolation.

But given that we have 1 MLOC, it’s not feasible to change the design of our core modules at this time. What we can do, though, is make it visible to developers when they pull stuff from the service locator during testing. That can be achieved with the following CQL:
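A rule of roughly this shape – the assembly and type names are made up here, but the idea carries over:

```
WARN IF Count > 0 IN SELECT METHODS 
FROM ASSEMBLIES "OurProduct.Tests" 
WHERE IsDirectlyUsing "OurProduct.ServiceLocator"
```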

Most annoying stuff – people not understanding IoC containers, using the service locator from within services already pulled from a container

This is something that makes me angry and sad at the same time 🙂 – some team members did not understand Castle Windsor and what IoC containers can do, so they just went ahead and made calls to ServiceLocator.Resolve<I...> from within services that were already pulled from the container.

There are too many reasons why this is just icky, so I will just show the CQL that will make sure this misconception does not live on:
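A sketch of the rule – again with made-up namespace names:

```
WARN IF Count > 0 IN SELECT TYPES 
WHERE HasAttribute "OurProduct.ServiceAttribute" 
AND IsDirectlyUsing "OurProduct.ServiceLocator"
```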

As you can see, this rule builds on the fact that all our service registrations are made by registering types decorated with the ServiceAttribute.


I have shown a few examples of how we set up automated assertions of certain properties of our architecture. NDepend has already proven to be extremely useful for this kind of stuff, and it allows us to continually add rules whenever we identify problems, which is exactly what we need.


Recently, I had the opportunity to teach an introductory course in programming in C#, including topics on dependency injection, IoC containers, and test-driven development.

When I introduced the concept of dependency injection, and I suggested that this seemingly pretty simple and trivial task be accomplished by putting an IoC container to use, some of the attendees reacted with scepticism and doubt – would this thing, configured with either endless fluent spells or – shudder! – tonnes of XML, not make everything even more complex?

At the time, I tried to convince them that an IoC container was cool, and that learning its ins and outs could be considered some kind of investment that – over time – would actually reduce complexity. This is not something new, and everybody who has ever used an IoC container will probably agree with me that this investment pays off. But still, I think I needed some words that would support my argument better.

So, here go a few random thoughts about complexity…


Right now, I want to focus on the kind of complexity that is inherent in non-trivial programs – that is, complexity that cannot be removed from the program. It can be anything, really; it is there in some form or another, and we need to deal with it.

Immediate vs. deferred complexity

Assuming we need to deal with it, how do we do it then? Which strategy do we follow? Well, acknowledging its existence is not the same as dealing with it, but it’s definitely a first step. After that, we should probably consider either A) solving the complexity on a case-by-case, ad hoc basis as it surfaces (deferring it), or B) building some kind of mechanism that abstracts the identified complexity away, thus dealing with the complexity right now.

Localized vs. dispersed complexity

Definitely coupled to the above somehow, but not inseparable, is the localized vs. dispersed complexity question: When you identify some kind of complexity, do you A) solve the problem in all the little places where the complexity surfaces, or B) build some kind of framework that makes all the other code oblivious of the complexity?

My opinion, and how it relates to IoC containers

I don’t know about you, but I prefer B & B: the abstraction and the framework. It might leak, and it might take some time for other developers to grok if they have never seen that particular problem solved like that before, but if the abstraction holds, it will probably assume some kind of pattern status that will be used, re-used, and recognized from then on.

The force of IoC containers is that they take control of object creation, thus providing hooks around whenever objects are created – and object creation is an incredibly important event in a running program. Having a hook into that proves to be (almost) infinitely useful.

The IoC container solves the complexity of deciding when and how to create (which) objects. This would otherwise be a severely dispersed complexity (spatially as well as temporally) – a cross-cutting concern, if you will: instantiating stuff and/or using existing instances of stuff is a really common thing, and the alternative would be to new up objects, maybe pulling from a cache somewhere, all over the place – thus infiltrating our program with our solution to that particular problem.


Use an IoC container. Learn to use its features. Allow the container to take responsibility. This will make you happy.

(seriously, do it! :))