Oh btw., if you’re thinking of going to Software Passion Summit 2012 in Göteborg, you should use the promotion code MOGENS when you register – you’ll get a 10% discount with this bad boy.
I judge developers by how many toolbars are showing in their IDE
Usually, I try not to be judgmental about stuff. I like to keep my mind open and to accept people as the beautifully unique snowflakes they are… or something… 🙂
There’s one thing that irritates me though, and that’s C# developers who constantly reach for the mouse to click the tiny, crappy toolbar buttons that for some reason seem to have survived in Microsoft IDEs since VB4 back in 1995. Yeah, I’m looking at you! You’re crap!
There is nothing more annoying than pair programming with someone who cannot even go to another file without having to scroll up and down in Solution Explorer, looking for that file to double-click. And then comes the time to re-run the current unit test… Sigh!!!
Now, if you have any ambition as a C# developer, I recommend you start out every new installation of Visual Studio by
- Hiding all toolbars (which, unfortunately, cannot easily be done at once – new ones pop up every time you open a new kind of file for the first time).
- Making all tool windows auto-hide (i.e. click the little pin on e.g. Solution Explorer, making it collapse – usually to the right side of the screen).
That will make your work environment resemble the picture on the right (especially if you have a 1337 dark color scheme like mine) – see: no clutter! No stinking buttons to disturb your vision while you’re swinging the code hammer! And, it will serve as an incentive to start using the keyboard some more.
Now, in order to be able to actually work like this, it’s essential that you know how to navigate using the keyboard only. Therefore, here’s a few very basic shortcuts to get you started[1. Assuming of course that you’re using Visual Studio with standard keyboard settings and R# with Visual Studio keyboard scheme]:
- Navigate to any open window in the environment: Ctrl + Tab + arrows while holding Ctrl.
- Jump to file currently being edited in the Solution Explorer: Shift + Alt + L.
- Jump to the R# test runner: Ctrl + Alt + T.
- Pop open the context menu: Shift + F10.
Now, with these in place I think it should be possible to start doing all navigation with the keyboard only. And then, when you get tired of pressing Shift + F10 and choosing stuff in the menus, you can start learning the real shortcuts to everything.
Using the keyboard for the majority of all tasks has several advantages – in addition to relieving the strain on your right wrist, arm, and shoulder, your navigation and execution of common workflows are sped up, allowing your work pace to better match the pace of your train of thought.
Also, I won’t judge you 🙂
Just a personal observation regarding the recent IoC debate on Twitter
Systems, where I’ve started out “as simple as possible”, and later wished that I had baked in an IoC container from the beginning: Several.
Systems, where I’ve baked in Castle Windsor from the beginning, and where I later wished that I hadn’t: 0.
#justsaying
(IMO, using an IoC container does not increase overall complexity – it just moves some of the complexity out of my code, and into a thing that implements a bunch of useful patterns in a slightly more opaque way than if I’d implemented those things myself.)
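Just to make the “moves some of the complexity out of my code” part concrete, here’s a minimal Castle Windsor sketch – the IOrderRepository/SqlOrderRepository names are made up purely for illustration, and in a real system the wiring would live in installers rather than in Main:

```csharp
using Castle.MicroKernel.Registration;
using Castle.Windsor;

// made-up service and implementation, purely for illustration
public interface IOrderRepository { }
public class SqlOrderRepository : IOrderRepository { }

class Program
{
    static void Main()
    {
        using (var container = new WindsorContainer())
        {
            // the wiring that would otherwise be hand-rolled factories and new's
            // lives here instead
            container.Register(
                Component.For<IOrderRepository>()
                         .ImplementedBy<SqlOrderRepository>());

            var repository = container.Resolve<IOrderRepository>();
            // ... hand the repository to whatever needs it ...
        }
    }
}
```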
I’ll be in Göteborg on my birthday
This is just to say that on the 19th of March (and the 20th as well), I’ll be in Göteborg at Software Passion Summit 2012.
On the 19th (my birthday), I’ll do my “Frictionless Persistence in .NET with MongoDB” talk, which is just awesome!
I love talking about these things, so it’s a real treat to get to do that in an exciting new conference.
The duality of request/reply vs. publish/subscribe #2
In my last post, I described how the mechanics of publish/subscribe actually mirror those of request/reply.
In this post, I’ll look at the two operations from another angle: What do they mean?
What does it mean when you bus.Send?
Sending means that the sender wants to either
- Command another service to do something, or
- Request that another service do something and yield one or more replies[1. Note that the request/reply pattern may impose unwanted temporal coupling in an architecture and should probably be used only in integration scenarios orchestrated by a saga.].
This means that the sender knows stuff about the other service, but that other service will most likely not know or care about who’s sending. In other words, the sender depends on that other service!
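In code it could look something like this – ProcessPayment and CheckoutService are made-up names, and IBus stands in for the bus interface (both NServiceBus and Rebus expose a Send operation on theirs):

```csharp
using System;
using NServiceBus; // or Rebus - both define an IBus with a Send operation

// made-up command, owned by the payment service
public class ProcessPayment
{
    public Guid OrderId { get; set; }
    public decimal Amount { get; set; }
}

// made-up sender
public class CheckoutService
{
    readonly IBus bus;

    public CheckoutService(IBus bus)
    {
        this.bus = bus;
    }

    public void CompleteCheckout(Guid orderId, decimal amount)
    {
        // the sender depends on the payment service: its endpoint mapping
        // must point messages of this type at that service's input queue
        bus.Send(new ProcessPayment { OrderId = orderId, Amount = amount });
    }
}
```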
What does it mean when you bus.Publish?
Publishing means that the publisher wants to
- Broadcast an event that contains information on something that has happened.
This means that the publisher most likely does stuff inside itself, maybe updates some internal state, and then goes on and publishes information on some aspect of what has happened. In doing this, the publisher will most likely not know or care about who’s receiving. In other words, the subscriber depends on the publisher!
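And the mirror image in code (again, all the names are made up for illustration):

```csharp
using System;
using NServiceBus; // or Rebus - both define an IBus with a Publish operation

// made-up event, owned by the ordering service
public class OrderAccepted
{
    public Guid OrderId { get; set; }
}

// made-up publisher
public class OrderService
{
    readonly IBus bus;

    public OrderService(IBus bus)
    {
        this.bus = bus;
    }

    public void AcceptOrder(Guid orderId)
    {
        // ... update internal state first ...

        // then broadcast the fact - the publisher neither knows nor cares
        // who (if anyone) subscribes; the dependency points the other way
        bus.Publish(new OrderAccepted { OrderId = orderId });
    }
}
```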
Summing it up with a picture
Consider this illustration, where service dependencies are shown with arrows:
Again, see how comparing Send to Publish is actually like comparing two mirror images when the other mirror image is upside-down?
The duality of request/reply vs. publish/subscribe #1
A question I often meet in relation to messaging frameworks like NServiceBus and Rebus is this: Where do messages go?
The confusion often comes from comparing how bus.Publish works with how bus.Send works.
In this post, I’d like to describe the two operations and show that they are mirror images of each other – except maybe not as much a mirror image as a transposition.
Sending messages
In the case where you’re doing a bus.Send(message), the answer is trivial: The message gets sent to the endpoint specified in the sender’s endpoint mapping for that message type. Let’s say our sender is equipped with this snippet of XML[1. The snippet is an endpoint mapping in NServiceBus format, which can also be understood by Rebus when it’s running with the DetermineDestinationFromNServiceBusEndpointMappings implementation of IDetermineDestination] in its app.config:
```xml
<UnicastBusConfig>
  <MessageEndpointMappings>
    <add Messages="MyService.Messages" Endpoint="my_service"/>
  </MessageEndpointMappings>
</UnicastBusConfig>
```
If we assume that message is an instance of a class from the MyService.Messages assembly, in this case a bus.Send(message) will be translated into bus.Send("my_service", message).
Publishing messages
But where do messages go when they’re published? Well, they go to whomever subscribed to that particular message type – and with NServiceBus (and, for that matter, with Rebus as well) subscribers get subscribed by sending some kind of subscription message, which is basically saying: “Hey there, mr. Publisher – I’m some_subscriber, and I’d like to subscribe to MyService.Messages.SomeParticularMessage”.
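In code, a subscriber typically establishes this by doing something like the following when it starts up (SomeParticularMessage being the message type from the example above):

```csharp
// causes a subscription message to be sent to the publisher, whose
// address the subscriber knows from its own endpoint mapping
bus.Subscribe<SomeParticularMessage>();
```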
From this point on, the publisher will store the mapping of the message type along with the subscriber’s address, allowing a bus.Publish(message) method to be implemented something along the lines of
```csharp
public void Publish(object message)
{
    foreach (var subscriberEndpoint in GetSubscribersFor(message.GetType()))
    {
        Send(subscriberEndpoint, message);
    }
}
```
So – how do we understand this on a higher level, allowing us to never ever confuse these things again? Let’s dive into…
The duality
Consider these two sequence-like diagrams:
See how request/reply and publish/subscribe are actually the same pattern? The reason these two are often confused is that the Send operation is often countered by Publish, when in fact it would be more fitting to see the subscription message (i.e. the subscription request) as the counterpart of Send. Thus, publishing is more like replying. And thus, Send is actually the transposition of Publish.
Now, when you realize this, you’re never going to confuse these things again 🙂 In the next post, I’ll touch a little bit on another difference between Send and Publish.
I will be speaking at Miracle Open World 2012
In April, I will be doing two presentations at Miracle’s Open World conference. It looks like a lot of cool people are going, and it’s my first time at MOW, so it goes without saying that I’m excited about it!
First, I’ll be doing a brand new intro to NServiceBus, which I have used extensively for the last two years. Even though I wish it were free for everyone to use, NServiceBus continues to be an awesome framework, so I’d like to continue spreading the word about it – you can read my abstract here: Ride the Bus!
After that, it seems I’ll be topping off day one with a brand new, condensed, platform-agnostic and pure MongoDB tour – this one will not do the usual “and this is NoSQL, and this is what characterizes a document DB”-intro, this will be full-on and to the point. You can read about it here: So you want to liberate your data?
I hope to see a lot of engaged people there 🙂
2011 retrospective and 2012 resolutions
In the same vein as last year, I’ll spend a post summing up on what happened this year, and then try to come up with some goals for the next year.
2011 retrospective
What did I do in 2011? Well, I
- Wrote 27 blog posts (+ this one = 28).
- Gave my “Frictionless Persistence in .NET with MongoDB” talk at Goto Copenhagen. Great experience, and Microsoft even recorded it.
- Gave the talk again as a free geek night.
- Hosted an Aarhus .NET User Group code camp on MongoDB.
- Gave the Frictionless talk again, this time at an Odense .NET User Group meeting.
- Made tiny contributions to Castle Windsor and MassTransit.
- Started building an NServiceBus-like service bus: Rebus. It already has pub-sub messaging and sagas 🙂
- Attended Udi Dahan’s “Advanced Distributed Systems Design With SOA” course. Udi was no stranger to me as I have been following his work, but the course presented some extremely interesting ideas on how to build a service-oriented architecture.
- Spent most of my time monkeying around with code and architecture on the PowerHub project, which is getting more and more serious. Oh, did I mention that the system’s regulation parts have zero downtime? With a nifty master-slave setup with automatic failover, PowerHub can continue to optimize and control local units, even in the face of system and platform upgrades… 🙂
- Got a new job!!! Yes, that’s right: The 30th of December 2011 will be my last day as a Trifork Software Pilot! On January the 2nd in the new year, I’ll join d60 as a consultant. This fact deserved a dedicated blog post. 🙂
- Had my photo of a hard-wired hairdryer included in Mark Seemann’s book about DI in .NET (see page 8 in chapter 1). Needless to say, this photo went right into my slidedeck 🙂
If I compare that to my 2011 resolutions, I think I’m only missing a “real” pet project. The closest thing is PriorityQ, which I made as an example app for my MongoDB presentations – it’s a “question collector” that can be used during presentations.
2012 resolutions
This is what I’d like to do in 2012:
- Gain a footing in my new position, and help out with some of the company’s challenges.
- Attend a couple of conferences – in passive as well as in active mode.
- Contribute some more to some of the OSS projects I like – including my own.
- Put Rebus to (some serious ab)use.
and, most importantly – just like my 2011 resolutions – I’d like to continue to be inspired by communicating with smart people.
Lastly, I will express my feelings in the form of an animated GIF that reeks of 1996: Now, let’s see what 2012 brings…
New job!!
As I’m writing this, I have spent 4 years and 9 months working at Trifork. That means a majority of my professional experience comes from working there, and I must say that it has been a fantastic time!
Throughout the years, I have been allowed to work on interesting projects, attend conferences, speak, teach, and play, and thus continually be challenged – and almost be forced to grow.
When I read Chad Fowler’s “My Job Went To India” (which later became “The Passionate Programmer”), the “Be The Worst” chapter immediately made sense to me, because I think that pretty much describes me when I started working with Trifork. If you haven’t read it, please do yourself a favor and do it – it’s available online.
As Chad puts it: “The people around you affect your own performance. Choose your crowd wisely”.
So, if you’re looking for an inspiring environment and some extremely talented colleagues, Trifork is definitely a great place to be. Especially as a .NET developer, I think Trifork can offer a healthy exposure to Java, Objective-C, Riak, Erlang, Ruby, and other non-.NETty things, which I think has helped me become more holistic in my views on technology.
After almost 5 years however, I feel it is time to seek new challenges.
So, on January 2nd 2012 I’ll join d60, which is a fast-growing Microsoft-based consultancy on the outskirts of central Aarhus. d60 is just about equally split between systems development and business intelligence, so hopefully I’ll gain some insight into BI, which I think will help me build better systems. At d60 I’ll continue working as a software development consultant, and hopefully I’ll continue to communicate with smart people about software development and help build cool solutions to real world problems.
If you can draw it like that, then just f*n code it like that!
Recently, I was asked whether I had any pet peeves. I thought about it for a couple of seconds, and since I couldn’t immediately think of anything, I just ranted a little bit about some of the minor annoyances around code quality I had experienced the same day or something like that.
But when I couldn’t sleep early this morning, I remembered one of my favorite pet peeves: Code that doesn’t model stuff right. Let me explain that with a couple of real-life scenarios…
Scenario 1
I was working on a system with my team, and our product owner(s) – a team of really really smart real-time regulation experts – came to us with some requirements regarding modeling some physical processes of some kind. They explained these things to us, and they showed us some graphic models[1. – and you’d probably think that this would trigger a few light bulbs…] of how they thought of this thing that we were supposed to start working on.
When we later got some more specific requirements, they didn’t resemble those graphic models that were originally presented to us. I mean, some of the concepts were brought over, but there was no clear mapping between the graphic models and the model proposed in the requirements.
Somehow, we ended up implementing things like they were specified to us the second time, although I did hesitate and express some concerns several times along the way.
Now, one and a half years later, we’re faced with our implementation’s limited ability to express the domain. We’re constantly forced to handle special cases in various parts of the system, whereas in other parts we have to spray logic all over the place to implement even simple features. And if you’re used to unit testing your stuff, you can probably imagine that our huge-ball-of-mud-model-with-limited-expressive-power has so many different combinations of flags and settings that they’re practically impossible to cover with tests.
WTF?!
Scenario 2
In the same system, we had a pretty complex specification of rules that some part of the system should act in accordance with. After some iterations during the specification, clarification, and breakdown of the feature, we ended up with a specification that pretty much consisted of a 5-6 levels deep decision tree, including some floating point values that needed to be considered as well.
At this point, some team members went on and implemented the thing – with a 5-6 levels deep nested if structure that was implemented 100% in accordance with the specification, including code comments with cross references to nodes in the decision tree diagram. At first glance, this seemed O.K. – I mean, it was fairly easy to verify that the tree was implemented correctly, due to the 1-to-1 mapping that could be made from the if structure to the decision tree diagram, and vice versa. So this is much better than scenario 1, right?
Well, if one if statement results in two possible passes through a piece of code, and thus two test cases that need to be written, then it follows that 5-6 levels of if statements yields 2⁵–2⁶ possible combinations. Combine this with 5 to 10 floating point values that need to be combined and limited as well, depending on their value and the path followed through the decision tree, and then we have yet another completely untestable mess!
WTF?!?!?!
What to learn from this?
This should not come as a surprise to you, especially if you still – in spite of all the functional commotion that has been going on for the last 5 years – believe in object-oriented programming… because that’s what the objects are for: Modeling stuff!
But somehow, I still often see developers – even seasoned ones – go and implement models that

- model only a subset of what’s actually required,
- somehow flatten the model and don’t respect the inherent structure of hierarchy and cardinality, or
- apply some other non-invertible transformation to what they’re shown,

and thus end up truncating the domain, which in turn leads to having to resort to hacks like flags and an ever-increasing number of if statements and special cases. The end result is that it is hard to understand the relation between the model and the stuff it is supposed to model.
In hindsight, it is pretty easy to see that the first scenario should have been modeled with classes representing each “thing” in the original diagrams presented to us. The reason we were shown those drawings was that they were a pretty good model of what’s supposed to go on. Duh!!
And the second scenario was a pretty good candidate for some kind of simple boolean rules engine, where a representation of the tree could have been built in memory with instances of some class for each node, and then the tree could just have been “invoked” – just like that.
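Just to sketch the idea – this is not the actual implementation, and all the names are made up – each node in the decision tree could be a small object that can be evaluated (and tested) on its own:

```csharp
using System;

// made-up input for the rules - in reality this would carry the
// 5 to 10 floating point values mentioned above
public class RegulationContext
{
    public double MeasuredValue { get; set; }
}

public interface IRule
{
    bool IsSatisfiedBy(RegulationContext context);
}

// a leaf wraps one simple condition
public class Leaf : IRule
{
    readonly Func<RegulationContext, bool> predicate;

    public Leaf(Func<RegulationContext, bool> predicate)
    {
        this.predicate = predicate;
    }

    public bool IsSatisfiedBy(RegulationContext context)
    {
        return predicate(context);
    }
}

// a branch picks one of two subtrees depending on a condition
public class Branch : IRule
{
    readonly IRule condition;
    readonly IRule whenTrue;
    readonly IRule whenFalse;

    public Branch(IRule condition, IRule whenTrue, IRule whenFalse)
    {
        this.condition = condition;
        this.whenTrue = whenTrue;
        this.whenFalse = whenFalse;
    }

    public bool IsSatisfiedBy(RegulationContext context)
    {
        return condition.IsSatisfiedBy(context)
            ? whenTrue.IsSatisfiedBy(context)
            : whenFalse.IsSatisfiedBy(context);
    }
}
```

The tree from the specification could then be assembled from these building blocks so that it mirrors the diagram 1-to-1, and “invoking” it would just be a matter of calling IsSatisfiedBy on the root node.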
In both cases, we would have had the ability to test each class in isolation, and then do a few integration tests to verify that they were capable of interacting like expected. And then, lastly, we could have written a verification of our code’s ability to build the model in the form that would end up being executed.
To sum it up as a one-liner: If you can draw it like that, then just f*n code it like that!