Castle Windsor debugger visualization

A while ago, I noticed that Krzysztof Koźmic was tweeting regularly about adding debugger visualization to Windsor. At the time, I wasn't really paying attention, so I didn't actually understand what he was doing.

Today, I was coding some stuff, and I downloaded the latest and greatest Windsor 2.5 (from August 2010). After a while, I found myself stepping through some code in a small web application I was building from scratch – and that's when I found out what he had been rambling about… check this out:

[Screenshot: Windsor debugger visualization]

See how the container renders itself as a list of all the components it contains, followed by a list of “Potentially Misconfigured Components”…?

How cool is that? (pretty cool, that’s how cool it is!)

Not a ground-breaking earth-shaking feature on the grand scale of cool stuff, but sometimes it’s the little things…

(I’m puzzled, however, as to why the headline says “Count = 4” in the “Potentially Misconfigured Components” section when the count is clearly 2…?)

Scheduling recurring tasks in NServiceBus

A while ago, on a project I am currently involved with that is based on NServiceBus, we needed to publish certain pieces of information at fixed intervals. I was not totally clear on how this could be implemented in an NServiceBus service, so I asked for help on Twitter, which resulted in a nifty piece of advice from Andreas Öhlund: set up a timer that does a bus.SendLocal at the specified interval.

That’s exactly what we did, and I think we ended up with a pretty nifty piece of code that I want to show off 🙂

PS: bus.SendLocal(message) effectively does a bus.Send(((UnicastBus)bus).Address, message) – i.e. it puts a message, MSMQ and all, in the service’s own input queue.

First, we have an API that looks like this (looking a little funny, I know – wait and see…):
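(A sketch along these lines – an interface deliberately named without the usual I prefix, so that calls to it read like a sentence; the exact signature is an assumption:)

```csharp
using System;

// The "funny-looking" API: the name only makes sense once you see the
// call site further down. Sketch – not necessarily the original code.
public interface ScheduleRealTimeDataPublishing
{
    void Every(TimeSpan interval);
}
```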

– which is implemented like this (registered as a singleton in the container):
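(Roughly like this – the class and message names here are assumptions, but the mechanics follow the description below:)

```csharp
using System;
using System.Timers;
using NServiceBus;

// A message type like the one handled later in the post:
public class PublishRealTimeDataMessage : IMessage { }

// Sketch of an implementation: a System.Timers.Timer that does a
// bus.SendLocal at the specified interval.
public class TimerBasedScheduleRealTimeDataPublishing : ScheduleRealTimeDataPublishing
{
    readonly IBus bus;
    readonly Timer timer = new Timer();

    public TimerBasedScheduleRealTimeDataPublishing(IBus bus)
    {
        this.bus = bus;
    }

    public void Every(TimeSpan interval)
    {
        timer.Interval = interval.TotalMilliseconds;

        // put a message in our own input queue every time the timer fires
        timer.Elapsed += delegate { bus.SendLocal(new PublishRealTimeDataMessage()); };

        timer.Start();
    }
}
```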

System.Timers.Timer uses the thread pool to schedule callbacks at the specified interval. It's pretty easy to use, and it fits this scenario nicely.

Now, in combination with this nifty class of extension goodness:
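(My guess at the "extension goodness": small helpers that turn integers into TimeSpans, so the scheduling call below reads like plain English:)

```csharp
using System;

public static class TimeExtensions
{
    // 30.Seconds() == TimeSpan.FromSeconds(30), etc.
    public static TimeSpan Seconds(this int seconds)
    {
        return TimeSpan.FromSeconds(seconds);
    }

    public static TimeSpan Minutes(this int minutes)
    {
        return TimeSpan.FromMinutes(minutes);
    }
}
```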

– we can schedule our tasks like so:
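(Hypothetical usage from an IWantToRunAtStartup implementor – note how the call reads: "schedule real-time data publishing every 30 seconds":)

```csharp
public class ApplicationStartup : IWantToRunAtStartup
{
    readonly ScheduleRealTimeDataPublishing scheduleRealTimeDataPublishing;

    public ApplicationStartup(ScheduleRealTimeDataPublishing scheduleRealTimeDataPublishing)
    {
        this.scheduleRealTimeDataPublishing = scheduleRealTimeDataPublishing;
    }

    public void Run()
    {
        scheduleRealTimeDataPublishing.Every(30.Seconds());
    }

    public void Stop()
    {
    }
}
```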

Now, why is this good? It's good because the actual task will be carried out by whoever implements IHandleMessages<PublishRealTimeDataMessage> in the service, so the tasks are processed with all the benefits of the usual NServiceBus message processing pipeline.

Nifty, huh?

Looking over the simplicity and elegance of this solution, I'm kind of embarrassed to admit that my first take on this was to implement the timer almost exactly like above, except instead of doing bus.SendLocal in the Elapsed callback, we had a huge event handler that simulated most of our message processing pipeline – including NHibernateMessageModule, transactions, and whatnot…

Please note that ScheduleRealTimeDataPublishing is not re-entrant – in this form its Every method should only be used from within the Run and Stop methods of implementors of IWantToRunAtStartup, as these are run sequentially.

Code golf

The results of Kodehoved's code golf competition are in, and I got a shared 5th place.

The task was to add two large integers together by representing the integers with arrays of their digits, thus allowing them to become extremely large.
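For reference, an ungolfed solution to the task could look something like this (assuming most-significant-digit-first arrays – this is an illustration, not one of the competition entries):

```csharp
using System.Collections.Generic;

static class BigAddition
{
    // Adds two non-negative integers given as digit arrays,
    // most significant digit first.
    public static int[] Add(int[] a, int[] b)
    {
        var digits = new List<int>();
        int i = a.Length - 1, j = b.Length - 1, carry = 0;

        while (i >= 0 || j >= 0 || carry > 0)
        {
            var sum = carry + (i >= 0 ? a[i--] : 0) + (j >= 0 ? b[j--] : 0);
            digits.Add(sum % 10);
            carry = sum / 10;
        }

        digits.Reverse();
        return digits.ToArray();
    }
}
```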

My contribution looks like this (weighing in at 138 characters):

Asger's and Lars' contribution looks like this (135 characters):

And the winners’ (Mads and Peter Sandberg Brun) contribution looks like this (weighing in at an incredibly compact, but almost unreadable, 132 characters :)):

I usually don't compete in competitions like this, because I've always thought of myself as pretty lame when it comes to solving coding puzzles, but this time it was great fun – especially since I was pretty motivated by my desire to beat Asger (my little sister's boyfriend), who pushed me several iterations further than I would have gone on my own. He ended up beating me, but I am still pretty satisfied with my fairly compact and almost readable solution.

Fun with NoRM 4

This time, a short post on how to model inheritance, which (at least in a class-oriented programming language) is one of the foundations of object-oriented programming.

Let’s take an example with a person, who has a home address that can be either domestic or foreign. Consider this:
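(Something like this – the property names are made up; the point is the abstract Address in the middle:)

```csharp
using Norm;

public class Person
{
    [MongoIdentifier]
    public ObjectId Id { get; set; }

    public string Name { get; set; }

    public Address HomeAddress { get; set; }
}

public abstract class Address
{
    public string Street { get; set; }
    public string City { get; set; }
}

public class DomesticAddress : Address { }

public class ForeignAddress : Address
{
    public string Country { get; set; }
}
```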

Now, when I create a Person with a DomesticAddress and save it to my local Mongo, it looks like this:
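(A hypothetical save – values are made up:)

```csharp
using (var mongo = Mongo.Create("mongodb://localhost/test"))
{
    var people = mongo.GetCollection<Person>("People");

    people.Insert(new Person
    {
        Name = "Joe Person",
        HomeAddress = new DomesticAddress { Street = "Some Street 10", City = "Copenhagen" },
    });
}
```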

which is all fine and dandy – and in the db:
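(roughly like this in the shell – the values come from the made-up example above:)

```js
> db.People.findOne()
{
  "_id" : ObjectId("4c62fb3be865e8164477f226"),
  "Name" : "Joe Person",
  "HomeAddress" : { "Street" : "Some Street 10", "City" : "Copenhagen" }
}
```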

which looks pretty good as well. BUT when I try to load the person again by doing this:
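(a hypothetical load:)

```csharp
var person = mongo.GetCollection<Person>("People").FindOne(new { Name = "Joe Person" });
```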

I get BOOM!!: Norm.MongoException: Could not find the type to instantiate in the document, and Address is an interface or abstract type. Add a MongoDiscriminatedAttribute to the type or base type, or try to work with a concrete type next time.

Why of course! JSON (hence BSON) only specifies objects – even though we consider them to be logical instances of some class, they're actually not! – they're just objects!

So, we need to help NoRM a little bit. Actually, the exception message says it all: add a MongoDiscriminatedAttribute to the abstract base class, like so:
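```csharp
// the abstract base class, now decorated so NoRM stores type
// information along with each address:
[MongoDiscriminated]
public abstract class Address
{
    public string Street { get; set; }
    public string City { get; set; }
}
```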

That was easy. Now, if I do a db.People.drop(), followed by my people.Insert(...)-code from before, I get this in the db:
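(illustrative – the exact assembly-qualified name depends on your project:)

```js
> db.People.findOne()
{
  "_id" : ObjectId("4c62fb3be865e8164477f226"),
  "Name" : "Joe Person",
  "HomeAddress" : {
    "Street" : "Some Street 10",
    "City" : "Copenhagen",
    "__type" : "MongoTest.DomesticAddress, MongoTest"
  }
}
```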

See the __type field that NoRM added to the object? It contains the assembly-qualified name of the concrete type that produced that particular object, allowing NoRM to deserialize properly when loading from the db.

Now, this actually makes working with inheritance hierarchies and specialization pretty easy – just add [MongoDiscriminated] to a base class, resulting in concrete type information being saved along with objects of any derived type.

The only thing that would make this better is if NoRM issued a warning or an exception when saving something that could not be properly deserialized – that way, one would not so easily get away with saving stuff that could not (easily) be retrieved again.

Fun with NoRM 3

The third post in "Fun With NoRM" is about how "the dynamism" of JavaScript and JSON is bridged into the rigid and statically typed world of C#. The thing is, in principle there's no way to be certain that a JSON object returned from MongoDB will actually fit into our static object model.

Consider a situation where, for some reason, some of our orders have a field, PlacedBy, containing the name of the person who placed the order. Let’s see how things will go when adding the field and then querying all orders:
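(A hypothetical repro – given an open connection as elsewhere in this series, and with the field added to one document, e.g. from the mongo shell:)

```csharp
// first, from the shell, something like:
//   db.Order.update({ OrderNumber: "2010-42" }, { $set: { PlacedBy: "Joe" } })
// ...then query all orders from C#:
foreach (var order in mongo.GetCollection<Order>().Find(new { }))
{
    Console.WriteLine(order.OrderNumber);
}
```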

– and BOOM! – Cannot deserialize!: Norm.MongoException: Deserialization failed: type MongoTest.Order does not have a property named PlacedBy

This is actually pretty good, because this way we will never accidentally load a document with un-deserializable properties and save it back, thus truncating the document. But how can we handle this?

Well, NoRM makes it pretty easy: Make your model class inherit Expando, thus effectively becoming a dictionary. E.g. like so:
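```csharp
// Order now inherits NoRM's Expando, so fields that don't match a
// declared property end up in the dictionary instead of blowing up
// (property names are made up):
public class Order : Expando
{
    [MongoIdentifier]
    public ObjectId Id { get; set; }

    public string OrderNumber { get; set; }
}
```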

Now we can do this:
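(a sketch – the exact way to read an extra field off an Expando is an assumption here:)

```csharp
foreach (var order in mongo.GetCollection<Order>().Find(new { }))
{
    // the undeclared PlacedBy field is available dictionary-style:
    Console.WriteLine("{0} was placed by {1}", order.OrderNumber, order["PlacedBy"]);
}
```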

– which, when run with a small DB containing three orders, prints a line for each order. Nifty, huh?

If you're sad that you've given up your single opportunity to inherit something by deriving from Expando, just go ahead and implement IExpando instead. Then you need to supply a few members, but you can just redirect to an internal Expando in your class.

Next up, a post on how to model inheritance hierarchies… one of my favorites! 🙂

Fun with NoRM 2

This second post in “Fun With NoRM” will be about querying…

How to get everything

Querying collections can be done easily with the anonymous types of C# 3 – e.g. the Order collection from my previous post can be queried for all orders like so:
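(a sketch, with the connection established as in the previous post:)

```csharp
// all documents: an empty anonymous object as the query template
var allOrders = mongo.GetCollection<Order>().Find(new { });
```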

How to use criteria

If we’re looking for some particular order, we can query by field values like so:
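(field name made up:)

```csharp
var order = mongo.GetCollection<Order>().FindOne(new { OrderNumber = "2010-42" });
```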

or by using the query operators residing in the static class Q:
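(the operator name here is my best guess at Q's members:)

```csharp
// "total greater than 1000"
var bigOrders = mongo.GetCollection<Order>().Find(new { Total = Q.GreaterThan(1000) });
```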

More advanced criteria

The query operators can even be combined by combining criteria like so:
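(a sketch – field and operator names are assumptions:)

```csharp
var someOrders = mongo.GetCollection<Order>().Find(new
{
    Total = Q.GreaterThan(100),
    OrderNumber = Q.NotEqual("2010-42"),
});
```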

Now, what about the nifty dot notation? This is an example where C#'s capabilities don't cut it anymore, as the names on the left-hand side of an anonymous type need to be valid identifiers – so no dots in property names!

This is solved in NoRM by introducing Expando! (not to be confused with ExpandoObject of .NET 4, even though they have similarities…)

Expando is just a dictionary, so to query by the field of an embedded object, do it like so:
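(a sketch – the embedded field is made up, and the indexer is my assumption about Expando's API:)

```csharp
// keys in an Expando can contain dots, unlike anonymous type properties:
var criteria = new Expando();
criteria["DeliveryAddress.City"] = "Copenhagen";

var copenhagenOrders = mongo.GetCollection<Order>().Find(criteria);
```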

As you can see, querying with NoRM is pretty easy – and I think the NoRM guys have found a pretty decent workaround in the case of dot notation, where C#’s syntax could not be bent further.

Stay tuned for more posts…

Fun with NoRM 1

My previous posts on MongoDB have been pretty un-.NETty, in that I have focused almost entirely on how to work the DB through its JavaScript API. To remedy that, I shall write a few short posts on how to get rolling with MongoDB using NoRM, the coolest C# driver for MongoDB at the moment.

First post will be on how to connect and shove data into MongoDB.

Short introduction to NoRM

NoRM is "No Object-Relational Mapping". It's a .NET driver that maps your objects, their fields, and their aggregated objects into documents. I like NoRM because it has successfully preserved that low-friction MongoDB feeling, bridging C#'s capabilities nicely to those of JavaScript, and providing some extra C# goodies along the way. Please read on, you'll see…

Connect to MongoDB

Easy – can be done like so:
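(the connection string is made up:)

```csharp
using Norm;

using (var mongo = Mongo.Create("mongodb://localhost/test"))
{
    // ready to work with the "test" db
}
```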

Inserting a few documents

Inserting documents with NoRM is easy – just create a class with fields and aggregated objects, and make sure the class either has a property named something like “Id” or has a property decorated with [MongoIdentifier], e.g. like so:
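(all of the names here are made up for the example:)

```csharp
using System.Collections.Generic;
using Norm;

// a small aggregate root with a list of aggregated objects
public class Order
{
    [MongoIdentifier]
    public ObjectId Id { get; set; }

    public string OrderNumber { get; set; }

    public List<OrderLine> Lines { get; set; }
}

public class OrderLine
{
    public string Product { get; set; }
    public int Quantity { get; set; }
}
```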

– and then go ahead and pull a strongly typed collection and insert documents into it:
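```csharp
using (var mongo = Mongo.Create("mongodb://localhost/test"))
{
    var orders = mongo.GetCollection<Order>();

    orders.Insert(new Order
    {
        OrderNumber = "2010-42",
        Lines = new List<OrderLine>
        {
            new OrderLine { Product = "Beer", Quantity = 24 },
            new OrderLine { Product = "Peanuts", Quantity = 3 },
        },
    });
}
```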

Now, to make this work I need to create five 200-line XML files with mapping info, etc. </kidding> No, seriously – that's all it takes to persist an entire aggregate root!!

Pretty cool, eh? That's what I meant when I said low friction. Stay tuned for more posts, e.g. on how to query a collection…

So you think you’re domain-driven?

In a project I am currently involved with, a core part of the system involves a couple of fairly complicated (at least to me :)) computations involving time, power, energy, fuel, volumes etc.

At first, we just implemented the computations “as specified”, i.e. we went ahead and did stuff like this:
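(hypothetical, but in the spirit of what we had – bare doubles, with units implied by names and magic conversion factors:)

```csharp
// everything is a double; the /1000 silently converts KW to MW
public double CalculateDeliverableMwh(double currentProductionKw, double remainingRuntimeHours)
{
    return currentProductionKw / 1000 * remainingRuntimeHours;
}
```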

to calculate how many MWhs a given device can deliver. This is just an example – we have a lot of this stuff going on, and it will be a major extensibility point in the system in the future.

I don't know about you, but I got a tiny headache every time I looked at code like this – partly from trying to understand what was going on, partly because I knew errors could hide in there forever.

To remedy my headache (and the other team members' headaches), we started migrating all that funky double stuff to types that do a better job of representing the domain – i.e. Power for power, Energy for energy, and so on!

All of them, of course, as proper immutable value types (even though we use classes for that).

And then we utilized C#'s support for operator overloading on all our domain types, allowing stuff like this to happen:
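```csharp
var energy = currentProduction * remainingRuntime;
```

Behind the scenes, an operator along these lines makes that possible (an illustrative sketch – the names and the internal representation are made up):

```csharp
using System;

public class Power
{
    readonly double megawatts;

    public Power(double megawatts)
    {
        this.megawatts = megawatts;
    }

    // power * time = energy
    public static Energy operator *(Power power, TimeSpan duration)
    {
        return new Energy(power.megawatts * duration.TotalHours);
    }
}

public class Energy
{
    readonly double megawattHours;

    public Energy(double megawattHours)
    {
        this.megawattHours = megawattHours;
    }
}
```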

where currentProduction is an instance of Power and remainingRuntime is a TimeSpan. And whoever gets the Energy that comes out of this will never have to doubt whether it's Whs, KWhs, or MWhs – it's just pure energy!

Now, this may seem like a small change, but it has already proven to have huge ramifications for the way we implement our computations:

  1. There is no such thing as multiplying two powers by accident, or subtracting two values that cannot be subtracted and still make sense, etc.
  2. We have no errors due to wrong factors, e.g. KWs mistakenly being treated as MWs.
  3. We have gained a cleaner, more intuitive core API, which is just friggin’ sweet!

In retrospect, I have done so much clumsy work in the past that could have been avoided by introducing proper value types for core domain concepts like money, percentages, probabilities, etc.

I usually call myself "pretty domain-driven", but now I realize that there's an entire aspect of being just that which I have overlooked.

Do you think you’re domain-driven?

I will be speaking about NoSQL and MongoDB

as seen through the eyes of a .NET developer, at two events in June (in Danish).

The first event is a JAOO Geek Night at Dong Energy in Skærbæk on Tuesday June 29th at 4:30 pm. You can read more about the free JAOO Geek Nights here.

The other event is a meeting in Århus .NET User Group, which is the day after, on Wednesday June 30th at 6 pm – you can sign up via Facebook here.

I’m really looking forward to it, because I think we will have some interesting discussions. And perhaps we can widen a few people’s horizons 🙂

Hope to see a lot of engaged people at both events.

Big brownfield codebase, NDepend, and a Kaizen mind

I have meant to write a post for quite a while now, on how my team and I got up and running with NDepend on a big legacy codebase. So, here goes:

Preface

I am currently employed as a developer on a mortgage bond administration system. The project was started almost 6 years ago, when SOA was (apparently!!) incredibly hot, so it's got a lot of web services that were some architect's interpretation of service-orientation.

The aforementioned “architect” left the project pretty early though, and after that our team has consisted of 4 to 8 people in various configurations.

This, combined with a couple of hectic deadlines along the way, has led to a big, fragmented codebase: some areas have been left almost untouched for years, some are big balls of mud that everyone currently on the team fears to touch, and others bear the marks of developers having been in a hurry when they wrote the code, etc.

A few times along the way, the idiomatic way of implementing new stuff has been radically changed to make things better. One example is that instead of orchestrating a bunch of web services RPC style, all new functionality is now being implemented in a single web service, messaging style.

Another thing is that the system was not built with an IoC container in mind, but that did not prevent us from introducing Castle Windsor. This means that services must be pulled from Windsor, service-location style – and even though we try to encourage people to confine their calls to the service locator to a few well-defined places, some developers did not understand why, and went ahead and used the locator in all kinds of places.

To put it like we do in Danish when someone owes more than their mortgage is worth: We have technical debt rising above our chimney!

What to do?

When you have so many problems that you don't know where to begin, how do you solve them?

Well, the Japanese have a word, Kaizen, which beautifully captures the concept of constantly improving things.

It's a philosophy I try to keep in mind all day long, when I write code, modify code, and even when I just look at code: I constantly make small changes, remove cruft, and refactor towards better, more idiomatic ways of doing things. How can I do that without breaking things along the way? Simple: we have automated tests!

NDepend is a really cool tool that lets us achieve Kaizen on a higher level 🙂

Quick introduction if you don’t know anything at all about NDepend

In NDepend, you create a project and add all the assemblies and .PDBs from your build. Now NDepend can analyze your assemblies and tell you all kinds of interesting stuff about your code.

E.g. it can show you a dependency matrix, which visualizes the dependencies between assemblies and namespaces. That way, you can see whether your application adheres to a layered structure, or whether dependencies are circular.

But the feature I want to focus on here is CQL rules. CQL is Code Query Language, which resembles SQL a bit, but is used to query assemblies, types, methods, fields, etc. Using CQL, I can also generate warnings if certain conditions are not met.

What makes this extremely interesting in our case is that we can run our CQL rules from our automated build via CruiseControl.NET, which fits nicely with the automatedness (is that a word?) that is required on an agile team.

A few examples

Simple stuff – people not adhering to our naming conventions

I am a firm believer that code must be clean, streamlined, and uniform, in order to reduce the perceived noise when reading it. For some reason, a couple of team members re-installed their ReSharper and got their R# naming rules reset, which resulted in numerous cases where field names were prefixed with _.

To address this annoyance, I created the following CQL warning:
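(approximately like this – I'm writing the CQL predicates from memory, so treat the exact syntax as a sketch:)

```
// <Name>Fields must not be prefixed with underscore</Name>
WARN IF Count > 0 IN SELECT FIELDS WHERE NameLike "^_"
```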

– which from now on will yield a warning, along with the names and locations of the violations, in the build report in CruiseControl.NET. Nifty!

More annoying stuff – people using the service locator to instantiate stuff in tests!

Building a pretty complex system with a complex domain can lead to complicated set-ups when "unit" testing. E.g. when a payment is recorded in the system, a payment distributor will generate money transactions for interest, principal reduction, and more, in accordance with some particular customer's strategy. So if the SUT needs a couple of payments to have been made, some of our developers took a shortcut in their test set-ups and just pulled an IPaymentDistributor from our service locator, which – unfortunately – is fully configured and functional in our tests.

The real problem is of course that a great portion of our core modules are so strongly coupled that they cannot be tested in isolation.

Given that we have 1 MLOC, it's not feasible to change the design of our core modules at this time. But what we can do is make it visible to developers when they pull stuff from the service locator during testing. That can be achieved with the following CQL:
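(again a sketch – the assembly and type names below are placeholders, not our real ones:)

```
// <Name>Tests must not pull services from the service locator</Name>
WARN IF Count > 0 IN SELECT METHODS FROM ASSEMBLIES "OurSystem.Tests"
WHERE IsDirectlyUsing "OurSystem.Infrastructure.ServiceLocator"
```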

Most annoying stuff – people not understanding IoC containers, using the service locator from within services already pulled from a container

This is something that makes me angry and sad at the same time 🙂 – some team members did not understand Castle Windsor and what IoC containers can do, so they just went ahead and made calls to ServiceLocator.Resolve<I...> from within services that were already pulled from the container.

There are too many reasons why this is just icky, so I will just show the CQL that will make sure this misconception does not live on:
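(sketched from memory, with placeholder type names:)

```
// <Name>Services must not use the service locator themselves</Name>
WARN IF Count > 0 IN SELECT TYPES
WHERE HasAttribute "OurSystem.Infrastructure.ServiceAttribute"
AND IsDirectlyUsing "OurSystem.Infrastructure.ServiceLocator"
```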

As you can see, this rule builds on the fact that all our service registrations are made by registering types decorated with the ServiceAttribute.

Conclusion

I have shown a few examples of how we set up automated assertions about certain properties of our architecture. NDepend has already proven to be extremely useful for this kind of stuff, and it allows us to continually add rules whenever we identify problems – which is exactly what we need.