Orthogonality (again)

There’s one thing that almost always makes me want to assume the foetal position and cry: developers who are ignorant of the fact that there is a difference between application logic and application framework.

I must admit that I have only recently become (this) conscious of this difference, so I have written a buttload of code over the last few years that violates almost everything I stand for now, which really makes me sad inside. But it’s never too late to improve.

Once I realized this and started trying to keep the two things separated, I started seeing things very clearly, and other people’s ignorance of the distinction began to gnaw at and irritate me. Hence this post – I need to get this off my chest – I need to write another post in the “rant” category…

An incredibly insightful (as always) post was made by Ayende a couple of months ago: Application structure: Concepts & features. Ayende’s post explains it so well, but basically he distinguishes between concepts in a system, which are the things that require design, and features, which merely build on and/or augment existing concepts. I just want to add a personal experience to the rant in the context of Ayende’s post.

Here’s the setting

I am currently on a team that develops and maintains a mortgage deed trading and administration system. Part of this system is an extensive suite of automated nightly jobs and automatic reports.

Naturally, some of the reports are run every night, e.g. summing up the numbers recorded the previous day for automated export to accounting systems, other reports are run every week/month/year, some on bank days, some relative to bank days, etc.

Some jobs make changes to the state of the system (e.g. updating the particulars of people for whom we receive updates from the Central Office of Civil Registration, remembering that a particular batch of transactions has been exported to the accounting system, or remembering that information on interest fees for the previous year has been reported to the National Bank, etc.), and some are just (idempotent) reports and exports.

Most jobs run automatically in the night. Some jobs can also be initiated by the user of our Windows Forms frontend through a web service call. And all reports should also be accessible through the built-in reporting frontend in the Winforms app.

Here’s our current solution

All reports are run through a web service call by instantiating a ReportCommand, which is capable of getting the names of all the reports accessible to the current user. And then, given a report name, the command can get all the parameters for that report. And then, given a report name and a set of parameters, it can run the report and output a file in SpreadSheetML format. This allows our frontend to dynamically build a GUI for all reports. No GUI work is needed whenever we need to create a new report, which is great.

The majority of our nightjobs are initiated by the Windows Scheduler, which stems from an old pragmatic solution to the very first automated job we needed almost three years ago. This has not changed, so jobs are still scheduled manually through the Windows Scheduler. Our job runner is an ordinary .NET Windows .exe, which gets executed with one or more arguments. Exporting transactions to accounting could look like this:
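Something like this (the exe name here is made up, but the shape is right):

    REM hypothetical invocation of our job runner with a single task name as argument
    JobRunner.exe exportTransactions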

– which would use the string to look up a class that implements ITask, e.g. ExportTransactionsTask, which exports all non-exported transactions in a .csv file to some preconfigured location.

To be able to schedule reports to run, we have made a job named “report”, which is capable of invoking the ReportCommand directly, setting the parameters of the report from the given command line arguments. Running a report could look like this:
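Roughly like this (again, the exe name and the exact argument syntax are made up for illustration):

    REM hypothetical invocation – the "report" task plus a set of name=value arguments
    JobRunner.exe report name=transactions caseNo=10000:20000 recordDate=today-1:today [email protected]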

– which runs the ReportTask, which is fed a dictionary containing {{"name", "transactions"}, {"caseNo", "10000:20000"}, {"recordDate", "today-1:today"}, {"mail", "[email protected]"}}, which in turn runs the ReportCommand with the given report name and parameters, expanding macros like “today” into today’s date + some more simple stuff.

If this sounds complicated to you, then yes! It is actually pretty complicated! Not because it should be, but because it’s pretty legacy and implemented in a pretty messy way. And now the need has come for the users to be able to schedule jobs and reports from the Winforms frontend. Damn! This is a good time to reflect on how I wish we had done it (and how I plan to do stuff like this in the future).

How I wish we had done it

The problem above, as I see it, is that features and concepts are mixed together into a gray mass that no one can get an overview of. I wish we had thought more about separating the concepts (command, night job, report) from the features (the implementations of the different jobs and reports). I wish implementing a new report were as easy as this (#region…#endregion added as explanation):
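A sketch of what I have in mind – the Report base class and its members are made up here, just to illustrate the idea:

    using System.Collections.Generic;
    using System.IO;

    // the imagined "concept": a Report base class that the report command,
    // the scheduler, the ad hoc task runner etc. all know how to handle
    public abstract class Report
    {
        public abstract string Name { get; }
        public abstract IEnumerable<string> ParameterNames { get; }
        public abstract void Execute(IDictionary<string, string> parameters, Stream output);
    }

    // the "feature": a concrete report containing only report-specific stuff
    public class TransactionsReport : Report
    {
        #region concept stuff – metadata picked up by the report command, the scheduler, the GUI
        public override string Name
        {
            get { return "transactions"; }
        }

        public override IEnumerable<string> ParameterNames
        {
            get { return new[] { "caseNo", "recordDate", "mail" }; }
        }
        #endregion

        #region feature stuff – the actual report logic
        public override void Execute(IDictionary<string, string> parameters, Stream output)
        {
            // query the relevant transactions and write e.g. SpreadSheetML to the output stream
        }
        #endregion
    }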

and then all possible implementations of the Report class would be picked up by the various concepts in the system, like e.g. our report command, the report scheduler, and an ad hoc reporting task runner… and then, in a similar fashion, I want to subclass an IdempotentTask class for all tasks that do not change anything in the system, and a TaskWithSideEffects class for all tasks that change the state of the world.

This way, implementing the logic inside of reports and tasks will be orthogonal to implementing the capabilities of the reports and tasks and their scheduling.

An opinion on “integrated solutions” like TFS and VSTS

As a response to Ben Scheirman’s post, Benjamin Day kindly apologized and summed up why he likes Visual Studio Team System and Team Foundation Server.

I am not going into the debate on whether it was right or wrong to delete that comment, because a lot of people have already done that, and I agree with those who think that deleting the comment was kind of wrong. Calling it “unethical behaviour”, however, seems a little too harsh. Moderating news channels discussing politics in China is unethical – deleting a comment because the blog author disagrees is just weird and a little bit annoying.

Instead, I just wanted to chime in with my 2 cents on why I think Visual Studio Team System and Team Foundation Server are inferior to ALL of the free alternatives that I know of – it’s because I believe in one of the finest principles of software engineering, coined by Edsger Dijkstra: Separation Of Concerns.

Separation Of Concerns can be low level, as in Uncle Bob’s single responsibility principle, or higher level, as in service-orientation, or even higher level, as in there’s NO WAY I’m gonna buy an oven that insists on also being my washing machine and a pair of roller blades. No way!

This principle is so inherent in all the good disciplines of software engineering, heck in LIFE even, that I simply had to reply!

So I like to use CruiseControl, TeamCity, Subversion, Git, ReSharper, TestDriven, NUnit, xUnit.net, Jira, Redmine, Basecamp, MSBuild, Rake, NAnt etc. etc. because they let me swap any one of them out for any of the others whenever I feel like it. And, more importantly, whenever it fits the task better.

The fact that some of the tools are FREE and have their source code available for me to look at is just an added plus. But the primary reason to use those tools is simply that they each do one thing, and they are usually capable of doing that one thing better.

Book review: ASP.NET MVC 1.0 Quickly

This book is exactly what its title says: a quick introduction to ASP.NET MVC. A natural implication is that it cannot cover that much material, and it seems Maarten went for breadth instead of depth.

In my opinion, when a book chooses to be a “quick guide”, it should focus more on showing the preferred ways to do stuff. Instead, this book seems to have too much ViewData["stuff"] = fluff going on. Why bother wasting pages showing all the tedious, error-prone, hard-to-maintain ways to do stuff when there is so little space?


If I were to author a book on ASP.NET MVC, I would focus on explaining ASP.NET MVC from the extensibility points and out. For example, System.Web.Mvc.Controller is just one way to implement the IController interface, and so on. I think that would provide a much more holistic picture of the framework, and the extensibility points are where ASP.NET MVC shines. I don’t think Maarten’s book really shows where the framework shines.

I enjoyed the chapter on using existing ASP.NET features though, and, not being an ASP.NET guy at all, I think I learned some stuff there.

My conclusion is that this book is strictly for beginners, and that the code samples in the book should not be taken literally, because almost none of them are examples of what the community considers best practice.

Title: ASP.NET MVC 1.0 Quickly
Author: Maarten Balliauw
ISBN 10/13: 184719754X / 978-1847197542
Publisher: Packt Publishing

Task and issue tracking with Redmine on a Windows server

If you are in need of a nifty and free task and issue tracking system, you could check out Redmine! It’s really cool, and it is easy to install (especially if your Windows box already has a Ruby development environment installed).

This post will show how Redmine can be installed as a Windows service on your Windows server. You will need to go through the following steps if you have a clean Windows machine, and you have not yet fiddled with Ruby on it:

  1. Install MySql
  2. Create a Redmine database + user
  3. Install Ruby 1.8.6
  4. Include the Ruby bin directory in your PATH environment variable
  5. Install Rubygems
  6. Install Rails
  7. Install Mongrel
  8. Install Mongrel Service
  9. Check out Redmine’s source code
  10. Configure Redmine
  11. Create the database schema
  12. Start Mongrel and verify that the installation works
  13. Install Mongrel/Redmine as a Windows Service
  14. Make the service run automatically when Windows boots

So… let’s do this!

1. Install MySql

Redmine needs a database. I use MySql for this. Go to http://dev.mysql.com/downloads/ and download and install the community server edition.

2. Create a Redmine database + user

Fire up a command prompt and enter (assuming that MySql is installed locally, running on the default port):
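    mysql --user=USER --password=PASSWORD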

where USER might be root and PASSWORD is probably some secret cool password you made up during the installation of MySql.

Then, type
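    create database redmine character set utf8;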

to create a new database for Redmine. Verify that it’s there by doing this:
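    show databases;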

Then, to create a new user for Redmine to use, and to grant full access to the database, do this:
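    grant all privileges on redmine.* to 'redmine'@'localhost' identified by 'PASSWORD';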

3. Install Ruby 1.8.6

In Ruby land, 1.8.6 is still being used in many production environments around the world, even though Ruby 1.9 is well on its way. To install it, go to http://www.ruby-lang.org/en/downloads/ and choose the One-Click installer.

If you download the binaries instead, you will need to manually download a few missing DLLs: zlib.dll, ssleay32.dll, iconv.dll and possibly more. Not too hard, but it’s definitely easier to use the One-Click installer.

4. Include the Ruby bin directory in your PATH environment variable

Not too hard – just do it! Mine says blablabla;c:\ruby-1.8.6\bin (not literally, though).

5. Install Rubygems

If you are unfamiliar with Ruby, I can inform you that Rubygems is the de facto standard package manager. Grab it from the downloads section of http://rubygems.org/, and unpack it somewhere on your computer. Then open a command prompt in that directory and issue a
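    ruby setup.rb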

which will install gem.bat in your Ruby bin directory.

6. Install Rails

Redmine is built on the Rails framework, so Rails must be installed for Redmine to work. Easy! Do it like this:
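    gem install rails -v 2.1.2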

The -v 2.1.2 stuff is there because my current version of Redmine requires Rails 2.1.2. If your Redmine installation refuses to start up because it requires another version of Rails, it should be easy to install that version, inspired by the command above.

7. Install Mongrel

Mongrel is a fast web server made in Ruby. Install it like this:
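    gem install mongrel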

8. Install Mongrel Service

Mongrel Service is a Win32-specific add-on that allows Mongrel to be started via a Windows service. Install it like this:
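    gem install mongrel_service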

UPDATE April 4th 2010: When re-installing Redmine on a Windows 7 server today, I got this cryptic error message:

I Googled for a while before I thought about what Rubygems was trying to accomplish… It seemed like it was trying to run make on my Windows box, which is usually pretty hard… so I tried adding the platform parameter like so:
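Something along these lines – the exact platform string to pass depends on your Ruby build:

    gem install mongrel_service --platform=mswin32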

– which worked 🙂 I have no idea why the platform parameter is suddenly required to install successfully on a Windows box.

9. Check out Redmine’s source code

Look for instructions on how to obtain Redmine here. I have checked out the trunk on my computer – that makes it very easy to upgrade Redmine whenever I feel like it.

10. Configure Redmine

Rails apps are easily configured by editing the files contained within the config folder of the Rails application. To configure Redmine to use your database when started in production mode, copy the database.yml.example file and call it database.yml. Edit the production section and supply the database name and user credentials you created in your MySql database.

11. Create the database schema

Rails applications have built-in database migration scripts. You can migrate the schema all the way up to the current version by going to Redmine’s base directory in a command prompt and issuing a
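    rake db:migrate RAILS_ENV=production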

12. Start Mongrel and verify that the installation works

Open a command prompt in your Redmine app’s base directory and issue a:
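    mongrel_rails start -p 4000 -e production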

to start Redmine on port 4000 in the production configuration.

Now you should be able to navigate to http://localhost:4000 and see Redmine running.

13. Install Mongrel/Redmine as a Windows Service

Open a command prompt as administrator (right click the command prompt shortcut and select “Run as administrator”) and navigate to your Redmine app’s base directory and issue a:
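For example (the service name and the path to your Redmine directory are up to you):

    mongrel_rails service::install -N redmine -c C:\redmine -p 4000 -e production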

14. Make the service run automatically when Windows boots

Go to the service manager in Windows (services.msc) and double-click your service to change the startup type to “Automatic”.

My favorite ASP.NET MVC hooks #2: Action filter attributes

Another simple place to hook your stuff into the framework is action filters. An action filter is an attribute that derives from ActionFilterAttribute, which is a convenience class containing the following methods for you to override:
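    public virtual void OnActionExecuting(ActionExecutingContext filterContext);
    public virtual void OnActionExecuted(ActionExecutedContext filterContext);
    public virtual void OnResultExecuting(ResultExecutingContext filterContext);
    public virtual void OnResultExecuted(ResultExecutedContext filterContext);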

A lot of the examples around the internet show how to use action filters to restrict access to controller actions or how to apply caching. I won’t do that here, because those concerns are boring. No, we care about how to make life easy for ourselves and how to write testable, non-breakable, maintenance-and-wrist-friendly code. That is, we reserve the right to be lazy.

A fine example of how to be lazy is letting action filters fetch data for you to avoid repetitive repository gymnastics in every controller action – and I am actually going to be very lazy right now, because I wrote two posts on this topic earlier:

  1. How to avoid duplicate data fetching with ASP.NET MVC
  2. Another way to avoid duplicate data fetching with ASP.NET MVC

My favorite ASP.NET MVC hooks #1: Controller factory

My favorite hook in the ASP.NET MVC framework is the controller factory. I like it so much because it serves as a very simple entry point to the application, thus providing the perfect place to insert an IoC container. My container of choice is Castle Windsor – not because I have tried any other container – I really can’t say that I have – but because I just happened to be introduced by a colleague to Windsor, and I have yet to experience anything it can’t do for me!

An ASP.NET MVC controller factory is an implementation of the IControllerFactory interface – a simple interface with two methods: one that provides an IController given a string and some more stuff, and one that allows the controller to clean up any resources it might have allocated. The interface looks like this:
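    public interface IControllerFactory
    {
        IController CreateController(RequestContext requestContext, string controllerName);

        void ReleaseController(IController controller);
    }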

As you can see, it provides a means to return a controller, given all sorts of information about the current request and what could be resolved as the controllerName from the request URL.

ASP.NET MVC accesses the controller factory via the ControllerBuilder class – so if we want to provide our own controller factory, we must install the factory before any requests get served. An obvious place to do this is the Application_Start method inside the global.asax.cs file. This way our controller factory will be installed every time the web application starts up.

The controller factory can be set in two ways: 1) by providing the type of the factory, or 2) by providing an instance of the factory. It’s up to you whether you want to let ASP.NET MVC instantiate the factory. I usually do it like this:
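(The factory class name below is just an example.)

    protected void Application_Start()
    {
        // hand ASP.NET MVC the type – it will create and cache a single instance of the factory
        ControllerBuilder.Current.SetControllerFactory(typeof(WindsorControllerFactory));

        // (route registration etc. omitted)
    }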

– and let ASP.NET MVC instantiate and store a singleton instance of my factory.

My controller factory implementation looks like this:
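(Sketched in outline – here the container is assumed to be exposed through some application-wide ContainerProvider.)

    using System.Web.Mvc;
    using System.Web.Routing;
    using Castle.Windsor;

    public class WindsorControllerFactory : IControllerFactory
    {
        // assumption: the container is created at startup and exposed somewhere central
        static IWindsorContainer Container
        {
            get { return ContainerProvider.Container; }
        }

        public IController CreateController(RequestContext requestContext, string controllerName)
        {
            // "home", "hOme", "HOME" etc. all resolve the component registered as "home"
            return (IController) Container.Resolve(controllerName.ToLowerInvariant());
        }

        public void ReleaseController(IController controller)
        {
            Container.Release(controller);
        }
    }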

Very simple. It takes the string that was resolved as the controller name – e.g. “home”, “hOme”, “HOME” or whatever – converts it to lower case, and then uses that to look up a controller in my container. This means that all my controllers are registered in the container, and they will have all their dependencies automatically resolved when the container creates each instance.

A controller registration might look like this:
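For example, using Windsor’s fluent registration API:

    // requires: using Castle.MicroKernel.Registration;
    container.Register(Component.For<IController>()
                           .ImplementedBy<HomeController>()
                           .Named("home")
                           .LifeStyle.Transient);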

Simple – yet incredibly effective!

Creating a Windows service with Topshelf

Update: This post still gets a fair amount of traffic, so I’d like to direct your attention towards my newer post on TopShelf 2.0: On The Bus With MassTransit #1
Update: Please go to the updated guide to Topshelf, which covers how to get started with Topshelf 3

Just wanted to share my experiences regarding the super cool mini framework, Topshelf – a by-product of the development of MassTransit. When Dru and the other guys were working on MassTransit, they needed to create Windows services a lot, which led to the separation of the Windows service stuff from the other stuff – and now they call it Topshelf.

You use Topshelf by creating a console application. In the Main method, you create a configuration, which you hand over to Topshelf. Then, everything just works as expected.

Topshelf requires you to use an IoC container hidden behind the common service locator interface, which is neat, because then it’s up to you to decide which container to use. In this example, I am using Windsor.

Creating a service is as simple as this:

Everything in the snippet above works as expected. Note how I create my Windsor container and store it behind WindsorServiceLocator, which is my trivial implementation of the IServiceLocator interface. This way, Topshelf will pull the specified service type(s) (SomeService in the sample) from the container, and call the specified methods to start/stop (and possibly pause/resume) the service.

Note also how I specify that my service has a dependency on message queueing – neat!

Running and debugging the service is as easy as executing the .exe.

Installing/uninstalling can be done with

Nifty!

Respect your test code #3: Make your tests orthogonal

When two things are orthogonal, it means that the angle between them is 90 degrees – at least in spaces with 3 dimensions or less. So when two vectors are orthogonal, they satisfy the property that there is no way to use the first one to express even the tiniest bit of what the other one expresses.

That is also how we should write our application code: methods and classes should be orthogonal to one another – i.e. no class should try to express what another class already expresses either in part or whole – therefore each class and each method should have only one responsibility, and thus one reason to change.

And test code is real code.

The corollary is that our tests should have only one single responsibility as well.

That is why I hate tests that look like this:
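A sketch of the shape (the domain APIs here are made up, but the structure is the point):

    [Test]
    public void CanFindDueTermsAndRecordDebits()
    {
        // arrange (hypothetical domain classes and helpers)
        var repository = new MortgageDeedRepository();
        repository.Save(CreateMortgageDeedWithTermsDueToday());
        var finder = new DueTermsFinder(repository);

        // act
        var dueTerms = finder.FindTermsDue(DateTime.Today);

        // assert – first responsibility: the finder (and, indirectly, the repository)
        Assert.AreEqual(1, dueTerms.Count);

        // act again – second responsibility: the recorder
        var recorder = new TermDebitRecorder(repository);
        recorder.RecordDebits(dueTerms);

        // assert again… sigh
        Assert.IsTrue(dueTerms[0].HasBeenDebited);
    }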

Notice how this test is actually fairly decently structured – at least that’s what it initially looks like… but it actually tests a lot of things: it checks that the output of the DueTermsFinder is what it expects, testing the MortgageDeedRepository indirectly as well – and then it goes on to test the TermDebitRecorder… sigh!

If (when!) one of these classes changes at some point, because the requirements have changed or whatever, the test will break for no good reason. The test should break because you have introduced a bug, not because you made a change in some related functionality.

That is why I usually follow the pattern of AAA: Arrange, Act, Assert. Each test should be divided into discrete steps corresponding to 1) Arranging some data, 2) Triggering a computation or some state change, 3) Asserting that the outcome was what we expected. And if I am feeling idealistic that day, I also follow the principle of putting only one assertion at the end of each test.
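With one responsibility per test, the first half of the test above might instead look like this (same made-up domain as before):

    [Test]
    public void FindsTermsThatAreDue()
    {
        // arrange
        var repository = new MortgageDeedRepository();
        repository.Save(CreateMortgageDeedWithTermsDueToday());
        var finder = new DueTermsFinder(repository);

        // act
        var dueTerms = finder.FindTermsDue(DateTime.Today);

        // assert
        Assert.AreEqual(1, dueTerms.Count);
    }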

I try to never do AAAAA (Arrange, Act, Assert, Act, Assert) or AAAAAA or AAAAAAA, which is even worse.

Every test should have only one reason to break.

Respect your test code #2: Create base classes for your test fixtures

When writing code, I often end up introducing a layer supertype – i.e. a base class with functionality shared by all implementations in that particular layer in my application.

This also holds for my test code – and why shouldn’t it? Test code is as real as real code, so the same rules apply and it should benefit from the same pain killers as we implement in our application code.

For example when testing repositories and services that need to query the database, I can save myself a lot of writing by stuffing all the boring NHibernate push-ups in a DbTestFixture supertype – this includes building a configuration that connects to a test database, building a session factory, storing that session factory somewhere, re-creating the entire database schema in the test fixture setup, and running each test in a transaction that is automatically rolled back at the end of each test + a few convenience methods that allow me to flush the current session etc.

The DbTestFixture might look something like this (note that all my repositories take an instance of ISessionProvider in their ctor – that’s how they obtain the currently ongoing session, which is why I have a TestSessionProvider to inject into repositories under test):
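In outline it might look like this (a sketch – the configuration details will differ):

    using NHibernate;
    using NHibernate.Cfg;
    using NHibernate.Tool.hbm2ddl;
    using NUnit.Framework;

    [TestFixture]
    public abstract class DbTestFixture
    {
        protected ISessionFactory SessionFactory;
        protected TestSessionProvider SessionProvider;

        Configuration configuration;
        ISession session;
        ITransaction transaction;

        [TestFixtureSetUp]
        public void InitializeNHibernate()
        {
            // reads the test database settings from the test project's config
            configuration = new Configuration().Configure();
            SessionFactory = configuration.BuildSessionFactory();
        }

        [SetUp]
        public void SetUpSession()
        {
            // re-create the entire schema, so every test starts from a clean slate
            new SchemaExport(configuration).Create(false, true);

            session = SessionFactory.OpenSession();
            session.FlushMode = FlushMode.Always;

            // TestSessionProvider is the ISessionProvider implementation mentioned above
            SessionProvider = new TestSessionProvider(session);

            // run each test inside a transaction...
            transaction = session.BeginTransaction();
        }

        [TearDown]
        public void TearDownSession()
        {
            // ...which is rolled back at the end of the test
            transaction.Rollback();
            session.Dispose();
        }

        protected void Flush()
        {
            session.Flush();
        }
    }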

Then a fictional repository test might look as simple as this:
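    public class MortgageDeedRepositoryTest : DbTestFixture
    {
        [Test]
        public void CanSaveAndLoadMortgageDeed()
        {
            // fictional domain class and repository API
            var repository = new MortgageDeedRepository(SessionProvider);
            repository.Save(new MortgageDeed());

            var mortgageDeeds = repository.GetAll();

            Assert.AreEqual(1, mortgageDeeds.Count);
        }
    }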

Note how DbTestFixture flushes in all the right places so I don’t need to worry about that.

This test fixture supertype can be used for all my database access tests, as well as integration testing. But what about unit tests? I am using Rhino Mocks, so my unit test fixture base looks like this:
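Roughly like this (the exact shortcuts are a matter of taste):

    using NUnit.Framework;
    using Rhino.Mocks;

    public abstract class MockFixtureBase
    {
        protected MockRepository Mocks;

        [SetUp]
        public void SetUp()
        {
            Mocks = new MockRepository();
            DoSetUp();
        }

        // hook for derived fixtures
        protected virtual void DoSetUp() { }

        // shortcuts to the kinds of mocks I care for
        protected T Mock<T>() where T : class
        {
            return Mocks.DynamicMock<T>();
        }

        protected T Stub<T>() where T : class
        {
            return Mocks.Stub<T>();
        }

        protected void ReplayAll()
        {
            Mocks.ReplayAll();
        }

        protected void VerifyAll()
        {
            Mocks.VerifyAll();
        }
    }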

Real simple – it just stores my MockRepository and gives me a few shortcuts to the mocks I care for. Then I inherit this further to ease testing e.g. my ASP.NET MVC controllers like this:
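Something like this (a sketch – the generic parameter is just one way of doing it):

    using System.Web.Mvc;

    public abstract class ControllerFixtureBase<TController> : MockFixtureBase
        where TController : Controller
    {
        protected TController Controller;

        protected override void DoSetUp()
        {
            // the controller is instantiated exactly once per test, right here
            Controller = CreateController();
        }

        protected abstract TController CreateController();
    }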

As you can see, I make it a real “fixture” – the controllers I am about to test will fit into this fixture like a glove, and I will certainly never forget to instantiate my controller only once, because I start out by implementing that part in the implementation of the CreateController method.

A controller test might look like this:
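    public class HomeControllerTest : ControllerFixtureBase<HomeController>
    {
        // hypothetical domain repository interface
        IMortgageDeedRepository repository;

        protected override HomeController CreateController()
        {
            repository = Mock<IMortgageDeedRepository>();
            return new HomeController(repository);
        }

        [Test]
        public void IndexPutsMortgageDeedsIntoTheModel()
        {
            Expect.Call(repository.GetAll()).Return(new List<MortgageDeed>());
            ReplayAll();

            var result = (ViewResult) Controller.Index();

            Assert.IsNotNull(result.ViewData.Model);
        }
    }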

Respect your test code #1: Hide object instantiation behind methods with meaningful names

One thing that I find really annoying is that somehow it seems to be accepted that test code creates instances of this and that like crazy! In a system I am currently maintaining, with about 1 MLOC of which ~40% is test code, I often come across test fixtures with something like 20 individual tests, and each and every one of them creates instances of maybe 5 different entities, builds an object graph, runs some code to be tested, and does some assertions on the result.

This is a super example of how to write brittle and rigid tests, because what happens if the signature of one of the ctors changes? Or the semantics of the object graph? Or [insert way too many ways for the test to break for the wrong reasons here…].

When I come across these tests, I usually factor out the creation of all but the most simple objects into methods with meaningful names. This has two advantages: 1) it’s more DRY because the methods can be re-used, and 2) it serves as brilliant documentation. Consider this rather simple assertion that requires a few objects to be in place:
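(The domain details below are made up – the point is the amount of irrelevant setup.)

    // every detail of the deed is spelled out, even though only the fact that
    // amortization has not yet begun matters for the assertion
    var debtor = new Debtor("Joe Bloggs", "1234567890");
    var mortgageDeed = new MortgageDeed(500000m, 0.06m)
                           {
                               CaseNumber = 10001,
                               PrincipalDate = new DateTime(2009, 1, 1),
                           };
    mortgageDeed.AddDebtor(debtor);

    Assert.AreEqual(mortgageDeed.PrincipalDate, mortgageDeed.FirstTermDate);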

compared to this:
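    var mortgageDeed = CreateMortgageDeedBeforeAmortization();

    Assert.AreEqual(mortgageDeed.PrincipalDate, mortgageDeed.FirstTermDate);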

MUCH more clear! The factory method acts as brilliant documentation of which aspects of the test are relevant to the outcome – it is clear that a mortgage deed which has not yet begun its amortization must report its first term date as the actual principal date.

Go on to test another property of the mortgage deed before amortization:
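    var mortgageDeed = CreateMortgageDeedBeforeAmortization();

    // another made-up property – the point is that the setup is still one readable line
    Assert.AreEqual(mortgageDeed.Principal, mortgageDeed.RemainingPrincipal);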

– and we have already saved ourselves from writing 50 lines of brittle, rigid test code. Keep factoring out common stuff, so that the ctor of the mortgage deed is still only called in one place… e.g. create methods like this:
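    MortgageDeed CreateMortgageDeed()
    {
        // the one and only place where the ctor is called
        return new MortgageDeed(500000m, 0.06m);
    }

    MortgageDeed CreateMortgageDeedBeforeAmortization()
    {
        return CreateMortgageDeed();
    }

    MortgageDeed CreateMortgageDeedWithTwoTermsPaid()
    {
        var mortgageDeed = CreateMortgageDeed();
        mortgageDeed.PayTerm();
        mortgageDeed.PayTerm();
        return mortgageDeed;
    }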

which allows me to write cute easy-to-understand tests like this:
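    [Test]
    public void RemainingPrincipalIsReducedWhenTermsArePaid()
    {
        var mortgageDeed = CreateMortgageDeedWithTwoTermsPaid();

        Assert.Less(mortgageDeed.RemainingPrincipal, mortgageDeed.Principal);
    }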

This is also a good example of how to avoid writing // bla bla comments all over the place – it’s just not necessary when the methods have sufficiently well-describing names.