Online MongoDB courses

This weekend, I completed the MongoDB (Python) developer and DBA courses, and I can really recommend them if you are interested in a gentle, yet fairly challenging, introduction to MongoDB.

The courses run over roughly seven weeks, with instructional videos released every week, and each week a couple of exercises must be completed in order to qualify for the final exam. Everything is online, and it’s completely free!

Next round begins on the 1st and 29th of April for the DEV and DBA courses respectively, and you can sign up at 10gen’s education site.

(PS: I got a 90% score on the DEV course and 96% on DBA – does that mean I’m more qualified to be a DBA than a developer..? I hope not!)

Bring back the types!

When you’re working with C# and reflection, you might end up in a situation where you have “lost” the type. For example, if you’ve deserialized an incoming message of some kind, you might have code that needs to do this:
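Something along these lines – the message types and handler methods here are made up, just to show the shape of the problem:

```csharp
// after deserialization, all we have is an 'object' - the type information is "lost"
object message = Deserialize(incomingTransportMessage);

if (message is RegisterAccount)
{
    HandleRegisterAccount((RegisterAccount)message);
}
else if (message is CancelAccount)
{
    HandleCancelAccount((CancelAccount)message);
}
else
{
    throw new ArgumentException(string.Format("Unknown message type: {0}", message.GetType()));
}
```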

Usually, when you want to do different stuff depending on the type of an object, you just call that object and let it do what it deems reasonable. That’s a good object-oriented way of delegating behavior to objects, totally anti-if campaign compliant and all. But in cases where the object you need to act upon is an incoming DTO, it’s usually not a good approach to put logic on that DTO.

This is a good time to do a “handler lookup” in your IoC container for something that can handle an object of the given type. (I don’t know if there’s a better name for this pattern – I tried to ask Twitter, but I mostly got “it’s usually not a good idea to do that” – well, most patterns are usually not a good idea: all patterns should only be applied in cases where they make sense </stating-the-obvious>.) It goes like this: given a message of type T, find a handler that implements IHandle<T>, and invoke the handler with the message.

In C#, that might look like this:
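A sketch of what that could look like – here, container.Resolve(Type) stands in for whatever resolution method your IoC container of choice exposes:

```csharp
object message = Deserialize(incomingTransportMessage);

// construct the closed generic handler type, e.g. IHandle<RegisterAccount>
var handlerType = typeof(IHandle<>).MakeGenericType(message.GetType());

// pull an implementation out of the container (Resolve(Type) is container-specific)
object handler = container.Resolve(handlerType);

// ...and invoke its Handle method via reflection
handlerType.GetMethod("Handle").Invoke(handler, new[] { message });
```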

That’s not too bad, I guess, but working at this untyped level for longer than this quickly becomes cumbersome and hard to read and understand. It wouldn’t take more than a few more lines of invoking things via reflection before the dispatch of the message turned into a mess! And then add the fact that exceptions thrown in method invocations made via reflection will be wrapped in TargetInvocationExceptions.

Consider this alternative approach where I use reflection to invoke a private generic method, closing it with the type of the message:
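Roughly like this – the MessageDispatcher class and the IHandle<TMessage> interface are my stand-ins for whatever you have in your code base, and IContainer stands in for your IoC container of choice:

```csharp
using System.Reflection;

public interface IHandle<TMessage>
{
    void Handle(TMessage message);
}

public class MessageDispatcher
{
    readonly IContainer container; // stand-in for your IoC container of choice

    public MessageDispatcher(IContainer container)
    {
        this.container = container;
    }

    public void ProcessMessage(object message)
    {
        // the only reflection voodoo: close the generic Dispatch method with the
        // runtime type of the message and invoke it
        GetType()
            .GetMethod("Dispatch", BindingFlags.Instance | BindingFlags.NonPublic)
            .MakeGenericMethod(message.GetType())
            .Invoke(this, new[] { message });
    }

    void Dispatch<TMessage>(TMessage message)
    {
        // back in typed land - the bulk of the logic can live here without
        // touching the reflection API
        var handler = container.Resolve<IHandle<TMessage>>();
        handler.Handle(message);
    }
}
```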

This way, the Dispatch method can have the bulk of the logic and it does not need to bother with the reflection API. Nifty!

Crocodile report, day 2

Here’s my report from the second day of the Warm Crocodile Developer Conference.

First session

I started out by attending Jimmy Bogard‘s talk on continuous delivery. The talk was OK, but in my opinion too much time was spent fiddling with ASP.NET MVC, database scripts, and stuff like that.

But this might also be because much of the stuff required to do continuous delivery (like using a build script, having the right tests, automating database migrations, etc.) is stuff that I would never consider NOT doing.

I did, however, get a tip on how to configure our TeamCity installation by chaining the build configurations to form a pipeline.

Second session

In the second session, I went and saw my homie Jesper Lund Stocholm do a presentation on OData. Jesper is cool in many ways, especially because he’ll say stuff like “I love WS-*” and “I love looking at SOAP messages in Fiddler”, etc.

I’m not sure I’m buying into the “OData is the future of data on the web” idea, but I was pleasantly surprised that it is kind of RESTful in that it provides links and stuff between entities. The API is just so ugly, though, that I swear I shed a tear of blood while watching Jesper punch in what seemed to be an encrypted LINQ query in the browser query string…

Third session

In the third session, I wanted to see Mark do his top 10 developer mistakes when using SQL Server, but the room was packed! And I simply could not stand the thought of having to sit on a hard floor for one hour straight, so I went down to the lobby and hung out with some of the (geek) rock stars.

Fourth session

After one hour in the lobby, I went and checked out Christian‘s presentation on harmful layers. His message really resonated with me and the problems we’re currently facing at work, and it seemed the service oriented bounded context thing was a recurring theme in several presentations (including my own) at the conference.

Fifth session

After two big ice creams, I went to Ayende’s talk to get an update on the current state of RavenDB. It seems RavenDB is becoming cooler and cooler, and I really like the idea of all the stuff that will become trivial to implement as a developer because the database provides a model that is better suited for what developers mostly do (i.e. store objects and query objects in advanced ways without compromising on the way the objects are stored).

The room was packed though, so my butt was sore after sitting on the concrete floor for the duration of the talk. Generally, the rooms for the sessions were just a little bit too small.

Sixth session

The sixth and last session was Anders Ljusberg, who talked about CQRS and event sourcing, and it was really cool to see a talk about CQRS and event sourcing that had a lot of code in it!

Usually, when people talk about CQRS and event sourcing, it’s done at a slightly higher level, where there are boxes and arrows and databases and stuff like that. But this talk was really concrete and to-the-point, and I really appreciated that.

And then, Anders was the 1000000th presenter to use Twitter Bootstrap in his demos, so I guess it would have been in order to give him some kind of prize.

Conclusion

All in all, the Warm Crocodile Developer Conference has been a really nice experience. It has had a few glitches, no doubt about that, but I think Daniel handled them in a charming fashion – like the time when the organizers had forgotten to arrange the ice cream for the “double espressos and ice cream” break, so Daniel went into the Irma next door and bought all of their ice cream, which he brought back to the hotel in Irma shopping baskets.

There’s been room for a lot of talk between the sessions, and generally the speakers have been engaged and approachable, which has contributed to an awesome atmosphere. And then I had three cool colleagues with me, so it’s been two and a half days of excellent company.

Crocodile report, day 1

Even though I have been kind of occupied mentally by the fact that I was going to do a presentation, I still managed to catch a few points from attending other presentations during the day.

First session

First, I went and saw Rob Ashton talk about building and testing Javascript applications, which was mostly about Node.js and a bunch of libraries that can be used when you’re doing serious Node development.

It strikes me that project names in the Node world are way cooler than in .NET… we have NUnit, NBehave, NServiceBus, NWhatever… Node has Mocha, Zombie, Mocha-Cakes, etc.

Also, Rob was delightfully whimsical as he switched between virtual desktops with terminals and Vim at seizure-inducing speed, so I was definitely entertained, and also enlightened (although I have messed a bit with Node.js already, so I already knew most of the stuff he talked about).

Second session

I decided to stay for Rob’s second talk on game development in HTML5, which was an introduction to three ways of drawing animated graphics with HTML5. Rob gave a great overview over “happy face/sad face” facts about canvas vs. DOM manipulation vs. WebGL, so even though I don’t care about creating games in HTML5, it was nice to learn about how few limits there are in the browser.

Third session

After an elfish amount of vikingish lunch, I went and checked out Mantas Klasavičius, who talked about something he calls metric-driven development. Basically, Mantas talked about how to collect metrics from running applications and use them in various ways – as a means of knowing where to improve the application, but also – as I understood it – almost as a gamification thing that inspired development teams to take responsibility and do even better. Even though Mantas is from Lithuania and thus was a little bit challenged language-wise, he succeeded in conveying a bunch of excellent points that I intend to take home and implement right away!

Fourth session

My fourth session was on F# with Tomas Petricek, and I guess it was OK – but at this point, I had to mess a little bit with my slides, because I had suddenly realized that my last example was too complex to be explained with words alone. So I was trying to follow Tomas while drawing a sequence diagram in my notebook and editing an iPhone photo of it so it could be included in a slide.

Tomas touched very briefly on some of the cool features of F#, and I must admit that I would have loved a full session about programming with actors in F# instead of the type provider thing that Tomas talked about.

Fifth session

…was my own session on Rebus. I think it went fairly well – for once I had actually had time to prepare the talk, so I had my tongue laid out in the right way or something 🙂 at least I didn’t stutter and mumble as much as I usually do when I’m forced to speak English for an extended amount of time… Here’s the slide deck (PDF): Taking The Hippie Bus To The Enterprise. Sample code is on GitHub: Warm Crocodile Rebus demos.

Sixth session

…was Steven Singh’s talk on dealing with global performance, which I ended up attending because Steven had forgotten to bring a computer 🙂 Being a nice guy and all, I let him use my computer to download his slides and present from, even though it meant that I would miss out on Stefan’s “ServiceStack all the things!” talk. That was actually too bad, so I must see if I can get a chance to pick Stefan’s brain later.

Moreover, I was really tired at this point, and I felt like some FaceTime™ with my two kids at home, so I stepped out for about 30 minutes during the presentation. I can’t judge whether the presentation was good or bad, but I got the impression that Steven had a hard time keeping a common thread in what he wanted to say. It may just have been my inability to follow along, though, so I’m not entirely sure about this.

Conclusion

The 1st day ended with lunch together with most of the attendees, and I got to talk to Roy Osherove, Jimmy Bogard, and Derick Bailey – three really inspiring people! It seemed that Jimmy and Derick are totally buying into Udi’s service-oriented-all-the-things way of thinking that permeates all layers of the entire stack, from database to UI widgets.

Also, Derick had experience with composite UIs with Prism (which is what we’re working with in our big trading application at d60), and obviously also with Marionette which he made as a composite UI framework in Javascript.

All in all it has been a great day! Now I want a shower, and then I want to sleep!

Initial thoughts on the Warm Crocodile Developer Conference

I’m currently sitting in an F# session with Tomas Petricek, and it’s a little bit hard for me to keep my attention on the presentation, because my own presentation is on in less than two hours.

But I just want to express my very positive impression of the Warm Crocodile Developer Conference! Even though there have been a few glitches around last-minute re-arrangements of the conference program, a “viking’ish lunch” in less-than-vikingish amounts, and a conference room with a giant column in the middle of it, the conference is going great so far!

My impression is that people are very positive, and there’s a lot of communication going on in the corners. The organizers have really succeeded in creating an atmosphere where people seem to want to connect, and that’s a huge win for a conference like this.

2012 retrospective and 2013 resolutions

Just like I’ve done the previous two years, I’ll spend a blog post summing up my experiences from the past year and possibly try to think about what I’d like to do next year. Here goes…:

This is what happened to me in 2012 in random order – I:

  • Started in my new position as a software development consultant at d60 – I spent the first three months helping out in a financial company that needed to reach a hard deadline, and the next three months I went back and helped out on the PowerHub project – and then, afterwards, I began working as an architect on d60’s trading systems. Starting at d60 has been absolutely awesome, and it’s really exciting to be part of the rapid growth of what still feels like a small company.
  • Helped introduce Rebus on several projects, my own as well as other peoples’ – at the moment, Rebus moves money around, controls a couple of power plants, and hopefully makes the lives of a few software developers even more enjoyable 😉
  • Took Rebus+MongoDB to production – a match that I’ve often thought was made in heaven (even though I know that the match was made at a Hackernight, or while I was doing the dishes at home, some time in early 2012…)
  • Attended Software Passion Summit in Göteborg where I did a presentation on MongoDB.
  • Attended Miracle Open World 2012 where I presented on MongoDB and NServiceBus.
  • Gave my first presentation on Rebus at Community Day in Copenhagen.
  • Attended GOTO Aarhus where I met a lot of my awesome ex-colleagues from Trifork.
  • Gave user group talks on Rebus at ONUG, CNUG, and ANUG.
  • Did a Rebus code camp at ANUG that featured a Colombian drug lord and loads of drugs & money.
  • Was awarded the “Microsoft C# Most Valuable Professional” title on the 1st of April – I still sometimes wonder whether someone just played an April Fools’ prank on me 😉

My theme for 2012 has definitely been “Rebus” almost all the time, and I really hope to continue being able to work with Rebus – at work as well as in my spare time.

Here’s my plans for 2013 – I’d like to:

  • Do some presentations on Rebus to continue spreading the word and build even more momentum – the first one is already planned: Taking The Hippie Bus To The Enterprise.
  • Bump Rebus’ version to 1.0 (mostly some configuration options missing).
  • Gain some experience with breaking down our trading system, which is being developed by a team of more than 20 people, into a service-oriented architecture that respects the bounded contexts.
  • Get some experience operating MongoDB as it grows.
  • Build something with HTML5.
  • Hopefully get the MVP award again, although I haven’t done a single thing with this in mind – all of my activities are purely driven by me wanting to do stuff I think is fun.
  • Learn, learn, learn!

As usual, I like that the theme of my new year’s resolutions is mostly about expanding my horizon, although I realize that I’m pretty stuck in .NET country. It’s still a really interesting place to be, though, with a lot of interesting things happening, and I like that .NET developers in general seem to be really open towards getting inspiration from stuff that’s happening in the other camps.

Now, as we approach the end, please enjoy this animated gif of an extremely cute and cuddly bear that wishes you and your loved ones a happy new year!

Domain events salvation example

One of the immensely awesome things that Udi Dahan has – if not invented, then at least brought to my attention – is the “domain events – salvation” pattern.

If you’re familiar with the onion architecture, you’ve probably more than once had the feeling that you needed to pull something from your IoC container somewhere deep in your domain model. In my experience, that often happens when you have a requirement of the form “when something happens bla bla, then some other thing must happen bla bla”, where “something” and “some other thing” aren’t otherwise related.

An example could be that when a new account (a.k.a. a customer that we mean to do business with) is registered in the system, we should make sure that our credit assessment of the account is fresh enough that we dare do the business.

I’ve found that the “domain events – salvation” pattern provides a solution to that problem. Check this out:
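Here’s a sketch of the idea – the Account class and the AccountRegistered event are invented for the example:

```csharp
public class Account
{
    public string CustomerName { get; private set; }

    public void Register(string customerName)
    {
        // ordinary domain logic goes here...
        CustomerName = customerName;

        // ...and then we tell the world what just happened
        DomainEvents.Raise(new AccountRegistered(this));
    }
}

public class AccountRegistered
{
    public AccountRegistered(Account account)
    {
        Account = account;
    }

    public Account Account { get; private set; }
}
```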

“Whoa, what was that?” you might ask… that was a domain object that did some stuff, and in the end it raised a “domain event”, telling the world what just happened. And yes, that was a domain object calling out to some static nastiness – but I promise you, it isn’t as nasty as you may think.

I’d like to assure you that the Account class is fully unit testable – i.e. it can be tested in isolation, just like we want. And that’s because we can abstract the actual handling of the domain events out behind an interface, IHandleDomainEvents, which is what actually gets to take care of the domain events – something like this:
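Something like this – a tiny interface plus the static entry point that the domain calls into (the member names are of course up to you):

```csharp
public interface IHandleDomainEvents
{
    void Handle<TDomainEvent>(TDomainEvent domainEvent);
}

public static class DomainEvents
{
    // the "static nastiness": one swappable handler that all domain events go through
    static IHandleDomainEvents handler = new NullDomainEventHandler();

    public static void Install(IHandleDomainEvents handlerToUse)
    {
        handler = handlerToUse;
    }

    public static void Raise<TDomainEvent>(TDomainEvent domainEvent)
    {
        handler.Handle(domainEvent);
    }

    class NullDomainEventHandler : IHandleDomainEvents
    {
        public void Handle<TDomainEvent>(TDomainEvent domainEvent)
        {
            // ignore events when nothing has been installed
        }
    }
}
```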

and then IHandleDomainEvents can be implemented in several flavors and “injected” into the domain as a static dependency, e.g. this bad boy for testing:
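For example, a simple collecting implementation along these lines:

```csharp
using System.Collections.Generic;

public class CollectingDomainEventHandler : IHandleDomainEvents
{
    // collects all raised domain events so tests can assert on them afterwards
    public readonly List<object> CollectedEvents = new List<object>();

    public void Handle<TDomainEvent>(TDomainEvent domainEvent)
    {
        CollectedEvents.Add(domainEvent);
    }
}
```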

which can be used by installing a new instance of it in the setup phase of each test, and then running assertions against the collected domain events in the assert phase.
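With NUnit and LINQ, just as an example, a test could look something like this:

```csharp
[Test]
public void RaisesAccountRegisteredWhenAccountIsRegistered()
{
    // arrange: install a fresh collector so this test only sees its own events
    var collector = new CollectingDomainEventHandler();
    DomainEvents.Install(collector);

    var account = new Account();

    // act
    account.Register("ACME Corp.");

    // assert
    Assert.That(collector.CollectedEvents.OfType<AccountRegistered>().Count(), Is.EqualTo(1));
}
```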

The coolest part though, is the production implementation of IHandleDomainEvents – it may look like this:
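A sketch of the production version, where the actual work is handed to whatever the container can cough up – container.ResolveAll is a stand-in for your container’s way of resolving all implementations of a given interface:

```csharp
public interface ISubscribeTo<TDomainEvent>
{
    void Handle(TDomainEvent domainEvent);
}

public class ContainerDomainEventHandler : IHandleDomainEvents
{
    readonly IContainer container; // stand-in for your IoC container of choice

    public ContainerDomainEventHandler(IContainer container)
    {
        this.container = container;
    }

    public void Handle<TDomainEvent>(TDomainEvent domainEvent)
    {
        // hand the event to every registered implementor of ISubscribeTo<TDomainEvent>
        foreach (var subscriber in container.ResolveAll<ISubscribeTo<TDomainEvent>>())
        {
            subscriber.Handle(domainEvent);
        }
    }
}
```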

thus delegating the handling of domain events to all implementors of ISubscribeTo<TDomainEvent>. This way, if I have registered this guy in my container:
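…for example something like this (the credit assessment workflow dependency is invented for the example):

```csharp
public class KickOffCreditAssessment : ISubscribeTo<AccountRegistered>
{
    readonly ICreditAssessmentWorkflow creditAssessmentWorkflow; // hypothetical collaborator

    public KickOffCreditAssessment(ICreditAssessmentWorkflow creditAssessmentWorkflow)
    {
        this.creditAssessmentWorkflow = creditAssessmentWorkflow;
    }

    public void Handle(AccountRegistered domainEvent)
    {
        // make sure we have a fresh credit assessment of the newly registered account
        creditAssessmentWorkflow.StartFor(domainEvent.Account);
    }
}
```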

I can kick off a new credit assessment workflow when a new account is registered, and everything is wired together in a way that is semantically true to the domain. Also, my domain object suddenly gets to pull stuff from the container (albeit in an indirect fashion), without even knowing about it!

Thoughts on how to integrate with 3rd party libraries when you’re a library author

When building .NET systems for clients, it’s just awesome how much free open source is available at your fingertips – and with tools like NuGet around, it has become extremely easy to rapidly pull in various colors and sizes of Legos to snap together and build on. Except, sometimes those Legos don’t snap 🙁

Different types of libraries and frameworks are more or less susceptible to Lego-snap-issues, depending on their place in the stack. That is, libraries with many incoming references are inherently problematic in this regard, the ubiquitous examples being logging libraries and serialization libraries.

When I built Rebus, one of my solemn goals was to make Rebus depend on the .NET BCL ONLY. All integration with 3rd party libraries would have to be done via small, dedicated adapter projects, because this way – in the case where there’s a conflict with one of your own dependencies – only that single adapter would have to be either rebuilt against your version of the problematic dependency, or swapped for something else.

Serialization with Rebus

Rebus serializes message bodies by using an ISerializeMessages abstraction. And since Rebus’ default transport message serialization format is JSON, and I didn’t feel like re-re-re-inventing any wheels, I decided to pull in the nifty JSON.NET package in order to implement Rebus’ JsonMessageSerializer that is part of the core Rebus.dll. But that clearly violates Rebus’ goal of having no dependencies besides what’s in the .NET framework – so how did I solve that?

Simple: ILMerge JSON.NET into Rebus! Simple solution, very effective.

Maybe there’s something I’m being ignorant about here, but I don’t get why projects like e.g. RavenDB keep having issues with their JSON.NET dependency. Why didn’t Ayende just merge it from the start?

Update January 7th 2013: Well, it appears that they did just that for RavenDB 2: RavenDB-220

Logging with Rebus

And then there’s logging – in order to do logging well, all Rebus classes must be given access to a logger – and I like it when the logger is named after the class it is being used from, which always turns out to be a piece of really useful information when you’re trying to track down where stuff went wrong some time in the past.

Ideally, I wanted a syntax that resembles the Log4Net one-liner that you’ll most likely encounter if you come across code that uses Log4Net for logging – each class usually initializes its own logger like this:
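…i.e. something like the following (the class name is obviously just an example):

```csharp
using System.Reflection;
using log4net;

public class SomeClass
{
    // the classic Log4Net idiom: a logger named after the declaring class
    static readonly ILog Log = LogManager.GetLogger(MethodBase.GetCurrentMethod().DeclaringType);
}
```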

thus retrieving a static readonly logger instance named after the calling class. So, originally I started out by copying the Log4Net API, providing adapters for console logging, Log4Net logging, etc.

That worked perfectly for a while, but later on, since I spent so much time writing integration tests for Rebus, I also wanted the logging mechanism to be swappable at runtime – i.e. I wanted it so that Rebus could have its logger swapped from e.g. a ConsoleLogger to a NullLogger in a performance test, where all the console output would otherwise interfere with the measurements.

Therefore, my logging “abstraction” is a bit more involved. But it accomplishes all the goals: all classes can have a static logger instance, named after the class, swappable at runtime, without introducing dependencies on external stuff, and with very little (read: an acceptable amount of) ceremony (too much for my taste actually, but I accept this one because of the goodness it brings).

Check out what a class must do in order to get a Rebus logger:
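From memory, the ceremony looks roughly like this (don’t take the names as the verbatim Rebus API):

```csharp
class Worker
{
    static ILog log;

    static Worker()
    {
        // subscribing invokes us immediately with the current factory, and again
        // every time the factory is swapped - so 'log' is always up to date
        RebusLoggerFactory.Changed += factory => log = factory.GetCurrentClassLogger();
    }

    public void DoStuff()
    {
        log.Info("Doing stuff");
    }
}
```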

As you can see, the logger is set when the Changed event is raised – and by taking control of the add/remove operations of the event, I can ensure that the logger is set when each subscriber first subscribes. This way, the logger is immediately initialized by the currently configured static IRebusLoggingFactory, and in the event that the factory is changed, all logger instances are changed along with it.

The Changed event is implemented like this:
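Boiled down, the shape is something like this (simplified compared to the real thing, with ConsoleLoggerFactory – shown further down – as the default factory):

```csharp
using System;
using System.Collections.Generic;

public interface ILog
{
    void Info(string message, params object[] objs);
    // Debug, Warn, Error etc. omitted here
}

public interface IRebusLoggingFactory
{
    ILog GetCurrentClassLogger();
}

public static class RebusLoggerFactory
{
    static readonly List<Action<IRebusLoggingFactory>> changedHandlers = new List<Action<IRebusLoggingFactory>>();
    static IRebusLoggingFactory current = new ConsoleLoggerFactory();

    public static event Action<IRebusLoggingFactory> Changed
    {
        add
        {
            changedHandlers.Add(value);

            // invoke the new subscriber right away, so its logger gets initialized
            // with the currently configured factory
            value(current);
        }
        remove
        {
            changedHandlers.Remove(value);
        }
    }

    public static IRebusLoggingFactory Current
    {
        get { return current; }
        set
        {
            current = value;

            // when the factory is swapped, every subscriber gets a fresh logger
            changedHandlers.ForEach(handler => handler(current));
        }
    }
}
```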

The last thing is to provide a meaningful name to the logger – i.e. I wanted the name of the calling class to be used, so the call to GetCurrentClassLogger() had to do some StackFrame trickery in order to fulfill the promise made by its name. The implementation is pretty simple:
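Something along these lines:

```csharp
using System;
using System.Diagnostics;

public class ConsoleLoggerFactory : IRebusLoggingFactory
{
    public ILog GetCurrentClassLogger()
    {
        // frame 0 is this method, frame 1 is whoever called us - the type declaring
        // that method is the class we want to name the logger after
        var callingMethod = new StackFrame(1).GetMethod();
        var typeOfCaller = callingMethod.DeclaringType;

        return new ConsoleLog(typeOfCaller);
    }

    class ConsoleLog : ILog
    {
        readonly Type type;

        public ConsoleLog(Type type)
        {
            this.type = type;
        }

        public void Info(string message, params object[] objs)
        {
            Console.WriteLine("{0} INFO: {1}", type.FullName, string.Format(message, objs));
        }
    }
}
```

(One caveat: when the subscription happens inside a lambda, like in the static constructor shown earlier, the calling method may end up being a compiler-generated one, so a real implementation may need to account for that – but the basic trick is the same.)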

As you can see, it just skips one level up the call stack, i.e. it gets the calling method, and then we can use the standard reflection API to get the type that declares that method. Easy peasy!

This was an example of how two of the more troublesome dependencies can be dealt with, allowing your library or framework to be effectively dependency-free.

I’ll be speaking at Warm Crocodile Developer Conference

In Copenhagen, on the 16th and 17th of January, there’s a new conference called Warm Crocodile – pretty cool title, if you ask me!

I begged and begged, until they caved in and gave me an hour of their otherwise extremely cool schedule – so I’ll be giving a talk that I’ve dubbed “Taking the hippie bus to the enterprise”… it will most likely be about using a free .NET service bus in combination with other cool free software to rapidly solve enterprisey problems without feeling a lot of pain.

If you’re interested, you can read the full abstract here: Taking the hippie bus to the enterprise

Rebus transport: MSMQ vs. RabbitMQ

Now that Rebus officially supports RabbitMQ, I’ll just outline some of the reasons why you might want to choose one over the other… here goes:

Pros of RabbitMQ

The Rabbit is faster!

In my experience, RabbitMQ is often slightly faster than MSMQ in simple scenarios. I know that people pump high volumes of messages through Rabbit every day, and people might throw around numbers like “100,000 msg/s” or “500,000 msg/s” and stuff like that. It might be true, but I promise you that these volumes can only be achieved in cases where e.g. a bit of delivery guarantee, message durability, and/or atomicity is traded for speed.

RabbitMQ is also easier to scale out, though – where MSMQ doesn’t handle competing consumers very well, Rabbit is definitely the way to go if you have to process large message volumes concurrently.

Rabbit is (in some ways) easier to manage

RabbitMQ is pretty easy to manage because it’s a server that is installed somewhere, and then you point all the clients to the broker and they’re good. It’s easy to just run it on port 5672, and then that’s the only port that needs to be opened for access through firewalls, across VLANs, and whatnot. Most serious installations will require at least two Rabbit nodes though, so you might need to account for some configuration time.

It also comes with a fairly useful web-based management tool that allows you to inspect all queues on the Rabbit server. This centralized way of managing things just makes it feel like you’re in control.

RabbitMQ can be configured to route messages in many ways, and Rebus can leverage the multicast features of Rabbit to function as a global subscription storage. This means that you can do this:
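From memory, the configuration looks roughly like this – queue names etc. are just examples, and the exact signatures may differ slightly between Rebus versions:

```csharp
Configure.With(someContainerAdapter)
    .Transport(t => t.UseRabbitMq("amqp://localhost", "myInputQueue", "myErrorQueue")
                     .ManageSubscriptions())
    .CreateBus()
    .Start();
```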

Notice the “ManageSubscriptions” above? That will make Rebus use Rabbit to do the hard work when doing pub/sub. I’ll get back to this in a future blog post, I promise 🙂

Pros of MSMQ

More reliable

MSMQ will probably be more reliable in most scenarios, because applications are always talking to the locally hosted MSMQ service. I.e. MSMQ always does local store-and-forward of messages, which means that you can always count on being able to deliver a message – and that’s actually something!

MSMQ is inherently more distributed – i.e. it’s much more tolerant of machines rebooting, switches failing, etc. – if it can synchronize messages across the network, it will do so – if not, it will back off and wait until it can.

Works well in a homogeneous environment

And then, if you’re so lucky that your network consists of one LAN with Windows machines, everything will just work! And that includes security with user rights on queues, etc.

Easy to install

Oh, and then MSMQ frickin’ comes with Windows out of the box. That’s pretty cool 🙂

The future

I hope to see both the MSMQ transport and the RabbitMQ transport thrive and prosper. I hope to use both myself, and I’d like to help people use both as well.

The end

That was a long post! I hope even more people will find Rebus useful now that it can work with RabbitMQ. As always, just go get the bits from NuGet by doing the usual install-package rebus.rabbitmq.