Complexities

Recently, I had the opportunity to teach an introductory course in programming in C#, including topics on dependency injection, IoC containers, and test-driven development.

When I introduced the concept of dependency injection and suggested that this seemingly simple and trivial task be accomplished by putting an IoC container to use, some of the attendees reacted with scepticism and doubt – would this thing, configured with either endless fluent spells or – shudder! – tonnes of XML, not make everything even more complex?

At the time, I tried to convince them that an IoC container was cool, and that learning its ins and outs could be considered some kind of investment that – over time – would actually reduce complexity. This is not something new, and everybody who has ever used an IoC container will probably agree with me that this investment pays off. But still, I think I needed some words that would support my argument better.

So, here are a few random thoughts about complexity…

Complexity?

Right now, I want to focus on the kind of complexity that is inherent in non-trivial programs – that is, complexity that cannot be removed from the program. It can take many forms, but it is there in one form or another, and we need to deal with it.

Immediate vs. deferred complexity

Assuming we need to deal with it, how do we do it then? Which strategy do we follow? Well, acknowledging its existence is not the same as dealing with it, but it’s definitely a first step. Thereafter, we should probably consider either A) solving the complexity on a case-by-case, ad hoc basis, or B) building some kind of mechanism that abstracts the identified complexity away, thus dealing with the complexity right now.

Localized vs. dispersed complexity

Definitely coupled to the above, though not inseparable from it, is the question of localized vs. dispersed complexity: when you identify some kind of complexity, do you A) solve the problem in all the little places where the complexity surfaces, or B) build some kind of framework that makes all the other code oblivious of the complexity?

My opinion, and how it relates to IoC containers

I don’t know about you, but I prefer B & B: the abstraction and the framework. It might leak, and it might take some time for other developers to grok if they have never seen that particular problem solved like that before, but if the abstraction holds, it will probably assume some kind of pattern status that will be used, re-used, and recognized from then on.

The strength of IoC containers is that they take control over object creation, thus providing hooks around the creation of objects – and object creation is an incredibly important event in a running program. Having a hook into that event proves to be (almost) infinitely useful.

The IoC container solves the complexity of deciding when and how to create (which) objects. This would otherwise be a severely dispersed (spatially as well as temporally) complexity – a cross-cutting concern, if you will – because instantiating things and/or reusing existing instances is an extremely common thing to do, and the alternative would be to new up objects, perhaps pulling them from a cache somewhere, all over the place – thus infiltrating our program with our solution to that particular problem.

Conclusion

Use an IoC container. Learn to use its features. Allow the container to take responsibility. This will make you happy.

(seriously, do it! :))

IoC component registration

An interesting topic that I always love to discuss is how to find a balance between building a pure domain model and being pragmatic and getting the job done. It just so happens that getting the job done, and being able to add features to a system quickly, usually conflicts with my inner purist, who really wishes to keep my domain model and services oblivious of everything infrastructure-related.

Usually, I go for pragmatism – not only because I’m lazy, but also because I think it’s fun to come up with solutions that accelerate development and generally make the sun shine brighter.

Actually, this post should have been #2 in a series called “Polluting My Domain”, and an earlier post of mine should have been #1, because there I showed how I usually use attributes to give hints to the automapper in Fluent NHibernate – e.g. [Cascade] on a relation to configure NHibernate to cascade operations across that relation, [Indexed("ix__something")] to have that column indexed in the database, or [Encrypted] to have that particular property backed by an encrypting IUserType.

This post, however, will show a pragmatic, elegant and flexible way to make component registration easy in an IoC container.

Most IoC containers that I know of can be configured either with XML or through some kind of more or less fluent API in code. I’ll spare you the XML, so here’s a small example of Castle Windsor’s fluent API:
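
Something along these lines – a minimal sketch, where IService/SomeService are just made-up names:

    // requires Castle.Windsor and Castle.MicroKernel.Registration
    var container = new WindsorContainer();

    // register SomeService as the (singleton) implementation of IService
    container.Register(
        Component.For<IService>()
            .ImplementedBy<SomeService>()
            .LifeStyle.Singleton);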

or you can perform multiple registrations like this:
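
For example like this – again a sketch with made-up type names:

    // several components in one call...
    container.Register(
        Component.For<IService>().ImplementedBy<SomeService>(),
        Component.For<IOtherService>().ImplementedBy<OtherService>());

    // ...or convention-based, picking up everything that implements IService
    container.Register(
        AllTypes.FromAssembly(typeof(SomeService).Assembly)
            .BasedOn<IService>()
            .WithService.FromInterface());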

This is all fine and dandy, but I think the fluent API becomes pretty complicated when you throw customized registrations per customer and/or environment into the mix – mostly because it will become kind of obscure which services get registered where.

Therefore, one of the first things I have put into my recent projects has been a registration routine based on attributes. Pretty simple, and yes, it does pollute my domain services with infrastructure-related stuff, but this is a great example of a place where I prefer pragmatism and simplicity over purity.

My most recent project has two attributes, ServiceAttribute and RegisterInAttribute that look something like this:
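
Something like the following sketch – the exact properties are guesses, but the idea is that [Service] marks a class for registration (optionally stating the service type to register it as), and [RegisterIn] constrains it to a given environment (the RuntimeEnvironment enum is mine):

    [AttributeUsage(AttributeTargets.Class)]
    public class ServiceAttribute : Attribute
    {
        public ServiceAttribute() { }
        public ServiceAttribute(Type serviceType) { ServiceType = serviceType; }

        public Type ServiceType { get; private set; }
    }

    [AttributeUsage(AttributeTargets.Class, AllowMultiple = true)]
    public class RegisterInAttribute : Attribute
    {
        public RegisterInAttribute(RuntimeEnvironment environment) { Environment = environment; }

        public RuntimeEnvironment Environment { get; private set; }
    }

    public enum RuntimeEnvironment { Development, Test, Production }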

and then the registration code looks like this:
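
Roughly like this – a sketch where the method and variable names are mine (requires System.Linq, System.Reflection, Castle.Windsor and Castle.MicroKernel.Registration):

    static void RegisterServicesFrom(Assembly assembly, IWindsorContainer container,
                                     RuntimeEnvironment currentEnvironment)
    {
        var candidates = assembly.GetTypes()
            .Where(t => Attribute.IsDefined(t, typeof(ServiceAttribute)));

        foreach (var type in candidates)
        {
            // skip services constrained to an environment we're not running in
            var constraints = type.GetCustomAttributes(typeof(RegisterInAttribute), false)
                .Cast<RegisterInAttribute>()
                .ToList();

            if (constraints.Any() && constraints.All(c => c.Environment != currentEnvironment))
                continue;

            var serviceAttribute = (ServiceAttribute)type
                .GetCustomAttributes(typeof(ServiceAttribute), false).Single();

            // register as the declared service type, or as the concrete type itself
            container.Register(Component.For(serviceAttribute.ServiceType ?? type)
                .ImplementedBy(type));
        }
    }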

So, having established the current value of environment, my component registration will look like this:
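
Something like this (SomeService is just a type from the assembly I want scanned):

    var container = new WindsorContainer();

    RegisterServicesFrom(typeof(SomeService).Assembly, container, currentEnvironment);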

– and that’s it! But the best of it is that adding services to the system now becomes a breeze – check this out – registering a concrete type, offering itself as a service:

– and here’s registering different stuff depending on the environment:

This way of registering components has proven to me several times to be a simple and nifty way of managing the differences between environments, and even differences between customers (which would require a few extensions to the example above, though), while still being able to add services to the system quickly.

Another benefit is that it’s pretty clear what happens, even to developers who might not be that experienced in using IoC containers. If I were the only developer on a project, I would probably prefer component registration based on conventions, but when you have a team, you sometimes need to make some things more explicit.

Even more checking out MongoDB: The coolness continues

One thing I started to think about after having looked at MongoDB was how to model things that are somehow connected – without the use of foreign keys and the ability to join stuff.

When working with documents, you generally just embed the data where it belongs. But what if I have the following documents:
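
Say, something like this (collection and field names are made up for the example):

    // users
    { _id: ObjectId("..."), username: "me",      name: "Me" }
    { _id: ObjectId("..."), username: "thedude", name: "The Dude" }

    // songs – "Factory" is the one The Dude should be able to see
    { _id: ObjectId("..."), title: "Factory" }
    { _id: ObjectId("..."), title: "Some Other Song" }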

– and I want to constrain access to the songs, allowing me to see both songs, and The Dude to see only Factory?

My first take was to simply add username in an array inside each song, like so:
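
Roughly like this, using a hypothetical accessibleBy field:

    { _id: ObjectId("..."), title: "Factory",         accessibleBy: ["me", "thedude"] }
    { _id: ObjectId("..."), title: "Some Other Song", accessibleBy: ["me"] }

    // finding the songs a given user may see is then a simple query:
    db.songs.find({ accessibleBy: "thedude" })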

– and this will work well with indexing, which can be done like this:
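
    db.songs.ensureIndex({ accessibleBy: 1 })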

But here comes the problem: What if each song should be displayed along with the name of who can access that particular song? I need to embed more stuff in the array, e.g. like so:
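
For example like this (the embedded field names are my guesses):

    { _id: ObjectId("..."),
      title: "Factory",
      accessibleBy: [ { username: "me",      name: "Me" },
                      { username: "thedude", name: "The Dude" } ] }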

There’s a challenge now in keeping this extra “denormalized” piece of information up-to-date in case a user changes his name, etc. – but let’s just assume we’ve handled that.

Now here comes the cool thing: It’s cool that MongoDB can index “into” an array, but it can actually index “into” anything! Just tell it where to go, using the Dot Notation.

That means I can perform the same search as I did above like so:
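
    // dot notation reaches into the embedded documents in the array
    db.songs.ensureIndex({ "accessibleBy.username": 1 })

    db.songs.find({ "accessibleBy.username": "thedude" })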

How cool is that?? (pretty cool, actually!)

More checking out MongoDB: Updating

In MongoDB, there’s no way to lock a database, collection, or document. The ability to work without locking is a requirement for any db that wishes to be horizontally scalable, and obviously this imposes some limitations and/or possibilities (depending on your point of view :)).

If you want all the goodness that document-orientation brings, it seems we need to cope with this non-locking database.

So how DO you update stuff in MongoDB? And, more importantly: how do you update stuff without race conditions?

In one of my previous posts on MongoDB, I mentioned that the unit of atomicity is a document – i.e., either a document gets saved/updated/deleted or it doesn’t. That must mean that we can only count on one document being updated (or not), so we should build our applications so they can stay consistent without requiring multiple documents to be updated together ([1. Which is good practice anyway! In my experience, long and wide db transactions are often used not so much to enforce strict consistency as to allow scenarios like: “when this happens, this should also happen”. But that kind of logic can often be handled by something else, e.g. by publishing events reliably to other processes (logically and/or physically) that handle the side-effects.]).

First, let’s take a look at how to actually update a document.

Naïve attempt to update a document

Well, we could do this:
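
Something along these lines, where someId is a stand-in for the post’s actual _id:

    // load the post, add the missing tag in memory, and save the whole thing back
    var post = db.posts.findOne({ _id: someId })
    post.tags.push("test")
    db.posts.save(post)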

That is, if you go and save a document that already has an ID, any existing document with that ID will be updated.

This would work if we were the only client on the db. But what if someone was editing the post in that same moment, adding another tag as well? Well, if he was unfortunate enough to save his edits when we were out for coffee, his changes would be lost.

One way to actually do it

By using the update function!

update accepts the following four arguments:

  1. criteria – a document selector that specifies which document(s) to update
  2. objNew – document to save
  3. upsert – bool to specify auto-insert if document does not exist (“update if present, insert if missing”)
  4. multi – bool to allow updating multiple documents that match the criteria (default is only first document)

Actually, as you can probably see now, save(doc) is just shorthand for update({ _id: doc._id }, doc, true, false) – an upsert keyed on the document’s own ID.

This way, we could easily add an incrementing version field to our documents to make sure that the version we’re saving is the version we retrieved.

Let’s try it out:

– good thing he didn’t get through with that one! 🙂

As you can see, we can easily implement optimistic concurrency on each document by constraining updates to the version we checked out. But, as I will show next, you can actually do a lot of things on the server.

Server-side document updates

Instead of retrieving an entire document, modifying it, and saving it back again with the risk of overwriting someone else’s edits, we can ask the server to make edits as smaller operations. E.g. our attempt to add a missing ‘test’ tag to the document could have been done like this:
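
    // ask the server to push the tag – no read-modify-write roundtrip
    db.posts.update({ _id: someId }, { $push: { tags: "test" } })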

See how the $push modifier was used to push a value into the array… this stuff is great. But here, we have a race condition again – what if someone added the ‘test’ tag at almost the same time as we did? Then two ‘test’ tags would be present in the array.

One way is to constrain the update by id and the absence of ‘test’ in the tags array:
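
    // only push "test" if the array doesn't already contain it
    db.posts.update({ _id: someId, tags: { $ne: "test" } },
                    { $push: { tags: "test" } })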

– another is to use the $addToSet modifier, which makes MongoDB treat the array as a set:
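
    db.posts.update({ _id: someId }, { $addToSet: { tags: "test" } })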

Nifty!

My conclusion (so far) is that an application can benefit hugely from using the various modifier operations – performance-wise (obviously), but probably also UX-wise… It’s a step in another direction from the usual CRUD scenarios that I compulsively associate with the word “update”, and I imagine it could be made to reflect the user’s interactions with the system.

I am thinking that the majority of the user’s interactions with the system could (and probably should) be put in the form
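
– something along these lines (my paraphrasing):

    update(criteria describing the assumed state of the document,
           modifiers describing the change the user wants to make)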

– and then one could implement a more-or-less generic mechanism on the client side to handle unsuccessful updates (e.g. by asking the user what to do then, reloading some data to allow [assumed state before] to be something else, or by handling certain situations with some kind of merging function, etc.).

How to perform updates across multiple documents

My first thought is that this situation should be avoided when working with a document-oriented db. I think most people will agree with this one.

I am pretty unsure of this, actually… The rest of this post is just a few thoughts on my first take on this, should I need to do this. Comments are greatly appreciated!

The problem in updating multiple documents is that we can perform an update on one document at a time, each time checking if the update went well or not. But there’s no way to (consistently) roll back update #1 if update #2 fails. So this means that there’s only one way: Forward! But how to proceed then, when an update failed?

How do we usually do stuff reliably across boundaries of multiple things that may or may not succeed, allowing us to handle errors as gracefully as possible and proceed thereafter?

I’m thinking that asynchronous reliable one-way messaging is the answer to this.

So if an application ever needs to update multiple documents in the most reliable way possible, it should probably perform one document update per “transaction” – in NServiceBus terminology that would be updating one document per message handler. And then the handler should throw an exception if an update unexpectedly fails.

But again: I’m thinking that this situation should be avoided at all costs with a document-oriented db. If ACID is required, the application should probably have a RDBMS on the side, or implement some kind of transactional mechanism in the document store.

Conclusion

That concludes my little learning series of MongoDB posts.

I must say that I am intrigued by all the NoSQL discussions currently going on in the communities, and I think it is always a sign of health that we question the technologies we use.

I am entirely convinced that document dbs could and should have been used for some parts of systems that I have experienced, and I am blown away by the lack of friction when starting up a project on top of a schemaless db.

As a .NET dude, I am convinced that the future will see more .NET systems built with more than one db underneath – e.g. with MongoDB for all the “soft parts” of the system, NHibernate on SQL Server for the few things that by nature require ACID, and then some NHibernate Search/Lucene/Solr.NET for full-text indexing and searching capabilities etc.

More checking out MongoDB: Querying

In my first post about MongoDB, I touched querying very lightly. Querying is of course pretty important to most systems, so it’s fair to dedicate a separate post to the subject.

Querying in MongoDB works by sending a document to the server, e.g. in the following snippet I create a document with a post ID
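
E.g. like so – the ID is of course made up:

    var id = ObjectId("4b9e8f3a2f0000000b3d1c21")   // hypothetical post id
    db.posts.findOne({ _id: id })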

– which can actually be even shorter, as the find and findOne functions accept an ObjectId directly as their argument, like so:
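
    db.posts.findOne(ObjectId("4b9e8f3a2f0000000b3d1c21"))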

But how can I find a post with a specific slug? Easy, like so:
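
    // assuming posts carry a slug field
    db.posts.findOne({ slug: "more-checking-out-mongodb-querying" })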

But how does this perform? It’s easy to examine how queries are executed with the explain() function, like so:
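
    db.posts.find({ slug: "more-checking-out-mongodb-querying" }).explain()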

– yielding some info about the execution of the query. I don’t know exactly how to interpret all this, but I think I get that "nscanned": 10000 means 10000 documents were scanned – and in a collection with 10000 documents, that’s not entirely optimal as it implies a full table scan. Now, let’s make sure that our query will execute as fast as possible by creating an index on the field ( _id is always automatically indexed):
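
    db.posts.ensureIndex({ slug: 1 })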

Now let’s explain() again:

Wow! That’s what I call an improvement!

What about posts with a specific tag? First I tried the following snippet, because I learned that the special $where field could be put in a query document to evaluate a predicate server-side:
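
Something like this – a server-side predicate checking whether the tags array contains ‘nifty’:

    db.posts.find({ $where: "this.tags.indexOf('nifty') != -1" })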

– and this actually works. This syntax is sort of clunky though. Luckily, MongoDB provides a nifty mechanism for arrays that automagically checks if something is contained in it. So my query can be rewritten to this:
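
    db.posts.find({ tags: "nifty" })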

Nifty!

Now, to take advantage of indexes, the special query document fields should be used. I showed $where above, but there are more – to name a few: $gt (greater than), $gte (greater than or equal), $lt (less than), $lte (less than or equal), $ne (not equal), and many more. For example, to count the number of non-nifty posts in February 2010:
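
Something along these lines – the field name (posted) and the date representation are guesses:

    // note that JavaScript months are zero-based, so 1 = February, 2 = March
    db.posts.find({
        posted: { $gte: new Date(2010, 1, 1), $lt: new Date(2010, 2, 1) },
        tags: { $ne: "nifty" }
    }).count()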

Check out the manual for some great documentation on available operators.

Conclusion

Querying with MongoDB actually seems pretty cool and flexible. I like the idea that it’s possible to execute ad-hoc queries, and for most usage I think the supplied operators are adequate. The ability to supply a predicate function via $where seems really cool, but it should probably only be used in conjunction with one or more of the other operators to avoid a full table scan.

More checking out MongoDB: References

This post will touch a little bit on the mechanism used for references, and then a few thoughts on how document-orientation relates to OO.

Now – if you, like me, are into OO and normalized object models – the weirdness begins….. or maybe not?! (actually, I am not sure yet :))

In an OO world (and in a normalized RDB world as well), you reference stuff, thus reducing the amount of redundant information as much as possible. E.g. the names of countries should not be put in a column in your address table – each country should have a row in the countries table, which is then referenced by a countryId in the address table.

In a document-oriented world, you generally embed objects instead of referencing them. This is done for performance reasons, and because there’s no way to join stuff – which means that a stored ID/foreign key merely remains free for the client to manually use in additional queries.

When you do need to actually reference another document, use DBRef to create a reference, supplying the collection and the ID as arguments. E.g. like so:
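
For example, a post referencing its author in a users collection (names are made up):

    var author = db.users.findOne({ username: "thedude" })

    db.posts.save({
        title: "My first post",
        author: new DBRef("users", author._id)
    })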

This way, the reference is represented in a consistent manner which may – or may not – be picked up by the driver you are using. The C# driver can create DBRefs and follow them, but you don’t get to join stuff – you still need an extra query.

Embedding objects may seem a little clunky at first, but actually this plays nicely with some common OO concepts – take aggregation, for example: a blog post has an array of comments, each of which makes no sense without an aggregating post – i.e. comments live and die with their post. That’s an obvious sign that comments should be embedded in the post. So, instead of:
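
I.e. instead of keeping comments in their own collection, referencing the post by ID (a sketch with made-up field names):

    db.posts.save({ _id: postId, title: "Checking out MongoDB" })

    db.comments.save({ postId: postId, author: "joe",  text: "Nice post!" })
    db.comments.save({ postId: postId, author: "lars", text: "Indeed!" })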

– you should do this:
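
    // comments embedded in the post they belong to
    db.posts.save({
        title: "Checking out MongoDB",
        comments: [
            { author: "joe",  text: "Nice post!" },
            { author: "lars", text: "Indeed!" }
        ]
    })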

Actually this makes me think about the concept of an aggregate root in DDD: an aggregate root “owns” the data beneath it and is responsible for maintaining its own integrity. If you were to delete an aggregate root, all the data beneath it would disappear.

This also fits kind of nicely with the fact that there are no database transactions in MongoDB – i.e. there’s no way to issue multiple statements and have them rolled back in case of an error – there are only documents, and either a document gets inserted/updated/deleted, or it doesn’t. So obviously, the document is the unit of atomicity, which fits (sort of nicely) with the aggregate root and its responsibility of keeping itself internally consistent.

Conclusion

The observations stated here pretty much make a document an aggregate root in the DDD sense – especially since only documents get an _id. There’s no obvious way to reference a particular comment inside the second post shown above.

If MongoDB’s performance is up to the task, data should probably be aggregated as much as possible into large documents. MongoDB’s limit is 4 MB per document, but I am unsure of how large documents should be before you should consider splitting them.

Maybe I am thinking too much about these things? Maybe I should just try and build something and see where my document modeling goes? Suggestions and comments are welcome 🙂

Checking out MongoDB

Having experienced a lot of pain using RDBMSs ([1. Usually because of abusing RDBMSs, actually. Storing an object model in a RDBMS is not painful as long as the tooling is right – e.g. by leveraging the amazing NHibernate. The pain comes when developers suddenly start implementing overly complex queries and doing reporting on top of a pretty entity model, modeling stuff OO style… ouch!]) as a default choice of persistence, having read a couple of blog posts about MongoDB, and being generally interested in widening my horizon, I decided to check out MongoDB.

This post is a write-as-I-go summary of the information I have gathered from the following places:

More posts may follow…. 🙂

Getting MongoDB

Piece of cake! Download MongoDB from the download center and shove the binaries away somewhere on your machine. Default is for MongoDB to store its data in /data/db which translates to c:\data\db if you are using Windows – go ahead and create this directory. The MongoDB daemon can be started by running mongod.exe, which will accept connections on localhost:27017.

It will probably look something like the screenshot shown below.

An alternative data path can be specified on the command line, e.g. like so: mongod --dbpath c:\somewhere\else.

Accessing it with JavaScript

Run mongo.exe to start the Mongo Shell. It will probably look something like this:

In the MongoDB prompt, you can use JavaScript to access the db. Here’s a sample session of some commands I have found useful:
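
Something like the following (a reconstructed session – the post contents are made up):

    > show dbs                          // list the existing databases
    > use blog                          // switch to (and implicitly create) the blog db
    > db.posts.save({ title: "Checking out MongoDB", tags: ["mongodb"] })
    > db.posts.save({ title: "Yet another post", tags: ["misc"] })
    > db.posts.find()                   // list the documents, including their generated _ids
    > db.posts.count()                  // 2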

Now, I have successfully added two documents representing blog posts in a collection named posts. As you can see, MongoDB assigns some funky IDs to the documents.

That was a brief demonstration of the JavaScript API in the Mongo Shell. Now, let’s do this from C#.

Getting started with mongodb-csharp

Now, go to the mongodb-csharp download section on GitHub and get a debug build of the driver. Create a C# project and reference the MongoDB.Driver assembly.

On my machine, punching in the following actually works:
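
Roughly like this – I’m going from memory on the driver’s API here, so the exact member names may differ slightly in the build you grab:

    // uses the MongoDB.Driver namespace from the mongodb-csharp assembly
    var mongo = new Mongo();
    mongo.Connect();

    var posts = mongo.GetDatabase("blog").GetCollection("posts");

    var post = new Document();
    post["title"] = "Hello from C#";
    post["tags"] = new[] { "mongodb", "csharp" };

    posts.Insert(post);

    mongo.Disconnect();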

Now, I can verify that the document is actually in there by going back to the console and doing this:
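
    > db.posts.find({ title: "Hello from C#" })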

Nifty! Now let’s show the posts from C#. On my machine the following snippet displays the headlines of all posts:
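
Again, going from memory on the API, something along these lines:

    var posts = mongo.GetDatabase("blog").GetCollection("posts");

    // assuming the headline is stored in a "title" field
    foreach (Document post in posts.FindAll().Documents)
    {
        Console.WriteLine(post["title"]);
    }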

– which is documented in the following screenshot:

Random nuggets of information

Document IDs

All MongoDB documents must have an ID in the _id field, either assigned by you (any object can be used), or automatically by MongoDB. IDs generated by MongoDB are virtually globally unique, as they consist of the following: 4 bytes of timestamp, 3 bytes of machine identification, 2 bytes of process identification, 3 bytes of something that gets incremented.

As a nifty consequence, the time of creation can be extracted from auto-generated IDs.

The ID type used by MongoDB can be created with ObjectId('00112233445566778899aabb') (where the input must be a string representing 12 bytes in HEX).

How are documents stored?

If you have not yet figured it out, documents are serialized to JSON – with the minor modification that it’s a BINARY version of JSON, hence it’s called BSON.

String encoding

UTF-8. No worries.

What about references?

I will research this and do a separate post on the subject. As MongoDB is non-relational, a “join” is – in principle – an unknown concept. There’s a mechanism, however, that allows for consistent representation of foreign keys that may/may not give you some extra functionality (depending on the driver you are using).

What about querying?

I will research this as well, posting as I go.

OR/M? (or OD/M?)

It is not yet clear to me how to handle Object-Document Mapping. This will require some research as well. As an OO dude, I am especially interested in finding out what a schema-less persistence mechanism will do to my design.

What else?

Other topics include applying indexes, deleting/updating, atomicity, and more – which implies additional blog posts.

Conclusion

My first impression of MongoDB is really good. It’s extremely easy to get going, and the few error messages I have received were easy to understand.

I am especially in awe with how little friction I encountered – mostly because of the schema-less nature, but also because everything just worked right away.

NHibernate is very flexible

…but it does impose limitations on your domain model.

Most of these limitations, however, like the need for public/internal/protected members to be virtual, and the requirement for a default constructor to exist with at least protected accessibility, are not that hard to adhere to and usually don’t interfere with what you would do if there were no rules at all.

One of the limitations, however, can be pretty significant – Ayende describes the problem here, using the term “ghost objects”.

But, as I am about to show, this significance only arises if you follow a certain style of coding, which you should usually avoid!

Short explanation of the problem

When NHibernate lazy-loads an entity from the db (i.e. when you call session.Load<TEntity>(id) or when an entity in your session references something through a lazy-loaded association), it does so by providing an instance of a runtime-generated type, which acts as a proxy.

The first time you access something on the proxy, it gets “hydrated”, which is just a fancy way of saying that the data will be loaded from the database.

This would be fine and dandy, if it weren’t for the fact that the proxy is a runtime-generated subclass of your entity, which – in cases where inheritance is involved – will be a sibling to the other derived classes. Consider a simple inheritance hierarchy like the one below, which in code could be something like so:
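
LegalEntity is the supertype; Person and Company are subclasses I’ve made up for the example:

    public abstract class LegalEntity
    {
        public virtual Guid Id { get; protected set; }
    }

    public class Person : LegalEntity
    {
        public virtual string FirstName { get; set; }
        public virtual string LastName { get; set; }
    }

    public class Company : LegalEntity
    {
        public virtual string CompanyName { get; set; }
    }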

– and then NHibernate will generate something along the lines of this (fake :)) class signature:
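
Something like this (fake, as said – but the point is that the proxy is, in effect, a sibling of Person and Company):

    public class LegalEntityProxy1234abcd : LegalEntity, INHibernateProxy
    {
        // intercepts member access and hydrates the entity on first use
    }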

See the problem? Here’s the problem:
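
Say we had written a piece of logic like this (an assumed example):

    public string GetName(LegalEntity entity)
    {
        if (entity is Person)
        {
            var person = (Person)entity;
            return person.FirstName + " " + person.LastName;
        }

        if (entity is Company)
        {
            return ((Company)entity).CompanyName;
        }

        throw new ArgumentException("Unknown kind of legal entity: " + entity);
    }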

This means that this kind of runtime type checking will fail in those circumstances where the entity is a lazy-loaded reference of the supertype, and the following will FAIL:
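
Assuming session is an open ISession and id points to a row that actually holds a Person:

    var entity = session.Load<LegalEntity>(id);     // an unhydrated proxy

    var isPerson = entity is Person;                // false! the proxy is a sibling of Person

    GetName(entity);                                // throws "Unknown kind of legal entity"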

One possible solution

The other day, Ayende blogged about a recent addition to NHibernate, namely lazy-loaded properties. This allows an entity to be partially hydrated, intercepting calls to certain properties to lazy-load the relevant fields on demand.

This feature is great when storing LOBs alongside the other fields on an entity, but it also laid the ground for his most recent addition, which is the ability to lazy-load an association by setting lazy="no-proxy" on it.

This way, NHibernate will not build a proxy, but instead it will intercept the property getter and load the entity at that point in time, thus being able to return the exact (sub)type of the loaded entity.

Now this seems to solve our problems, but let’s zoom out a bit … why did we have a problem in the first place? Our problem was actually that we failed to write object-oriented code, but instead we wrote a brittle piece of code that would fail at runtime whenever someone added a new subtype, thus violating the Liskov substitution principle. Moreover it just feels wrong to implement business logic that reflects on types!

What to do then?

Well, how about making your code polymorphic? The logic above could be easily rewritten as:
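
A sketch, using the same made-up Person/Company hierarchy:

    public abstract class LegalEntity
    {
        public abstract string Name { get; }
    }

    public class Person : LegalEntity
    {
        public virtual string FirstName { get; set; }
        public virtual string LastName { get; set; }

        public override string Name
        {
            get { return FirstName + " " + LastName; }
        }
    }

    public class Company : LegalEntity
    {
        public virtual string CompanyName { get; set; }

        public override string Name
        {
            get { return CompanyName; }
        }
    }

    // ...and the brittle GetName method reduces to:
    public string GetName(LegalEntity entity)
    {
        return entity.Name;
    }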

– which moves the logic of yielding a name into the class hierarchy as a one-liner, allowing us to always get a name from a LegalEntity.

What if I really really need a concrete instance?

Then you should use the nifty visitor pattern to extract what you need. In the example above, I would need to add the following additions:
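
A sketch of those additions (members shown earlier are omitted):

    public interface ILegalEntityVisitor
    {
        void Visit(Person person);
        void Visit(Company company);
    }

    public abstract class LegalEntity
    {
        public abstract void Accept(ILegalEntityVisitor visitor);
    }

    public class Person : LegalEntity
    {
        public override void Accept(ILegalEntityVisitor visitor)
        {
            visitor.Visit(this);
        }
    }

    public class Company : LegalEntity
    {
        public override void Accept(ILegalEntityVisitor visitor)
        {
            visitor.Visit(this);
        }
    }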

This way, we’re taking advantage of the fact that each subclass knows its own concrete instance, thus allowing it to pass itself to the visitor we passed in.

This is the preferred solution when the logic you’re writing doesn’t belong inside the actual entity class, like e.g. when you want to convert the entity to an editable view object, because this will make your code break at compile time if someone adds a new specialization, thus requiring each piece of logic to handle that specialization as well.

Oh, and if you’re a Java guy, you might be missing the ability to create an inline anonymous visitor within the scope of the current method, but that can be easily emulated by a generic visitor, like so:

– which would allow you to write inline typesafe code like this:
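
    legalEntity.Accept(new LegalEntityVisitor(
        person => Console.WriteLine("Person: {0} {1}", person.FirstName, person.LastName),
        company => Console.WriteLine("Company: {0}", company.CompanyName)));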

Only thing missing now is the ability to return a value in one line depending on the subclass. Well, the generic visitor can be used for that as well by adding the following:
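
For example like this – a Func-based visitor plus a small extension method to hide the plumbing (a sketch, not necessarily how I wrote it back then):

    public class LegalEntityVisitor<TResult> : ILegalEntityVisitor
    {
        readonly Func<Person, TResult> personFunc;
        readonly Func<Company, TResult> companyFunc;

        public LegalEntityVisitor(Func<Person, TResult> personFunc,
                                  Func<Company, TResult> companyFunc)
        {
            this.personFunc = personFunc;
            this.companyFunc = companyFunc;
        }

        public TResult Result { get; private set; }

        public void Visit(Person person) { Result = personFunc(person); }
        public void Visit(Company company) { Result = companyFunc(company); }
    }

    public static class LegalEntityExtensions
    {
        public static TResult Accept<TResult>(this LegalEntity entity,
            Func<Person, TResult> personFunc, Func<Company, TResult> companyFunc)
        {
            var visitor = new LegalEntityVisitor<TResult>(personFunc, companyFunc);
            entity.Accept(visitor);
            return visitor.Result;
        }
    }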

– allowing you to write code like this:
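
    var name = legalEntity.Accept(
        person => person.FirstName + " " + person.LastName,
        company => company.CompanyName);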

– and still have the benefit of compile-time safety that all specializations have been handled.

When to reflect on types?

IMO you should only reflect on types in business logic when it’s a shortcut that doesn’t break the semantics of your code. What do I mean by that? Well, e.g. the implementation of the extension method System.Linq.Enumerable.Count<T>() looks something like this:
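
Something along these lines – simplified, the real implementation has a few more special cases:

    public static int Count<TSource>(this IEnumerable<TSource> source)
    {
        if (source == null) throw new ArgumentNullException("source");

        // shortcut: if the sequence already knows its count, just ask it
        var collection = source as ICollection<TSource>;
        if (collection != null) return collection.Count;

        // otherwise there's no way around counting by enumerating
        var count = 0;
        using (var enumerator = source.GetEnumerator())
        {
            while (enumerator.MoveNext()) count++;
        }

        return count;
    }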

This way, providing the number of items is accelerated for certain implementations of IEnumerable<T> because the information is already there, and for other types there’s no way to avoid manually counting.

Conclusion

I don’t think I will be using the new lazy="no-proxy" feature, because if I need it, I think it is a sign that my design has a bad smell to it, and I should either go for polymorphism or using a visitor.

When jQuery was released…

…web development changed forever! (to me at least :))

It’s amazing how the CSS-selector-based approach has proven its effectiveness and durability, and combined with the incredible wealth of extensions, choosing jQuery is simply a no-brainer.

By the way, jQuery 1.4 has been released. Check out this post for a walkthrough of some of the new features.

C# vs. Clojure vs. Ruby & Scala

Short preface: at a job interview, Zach Cox was told to aggregate words and word counts from a bunch of files into two files, sorted alphabetically and by word count respectively, which he did in Ruby and Scala. This led Lau Bjørn Jensen to do the same thing in Clojure, which apparently sparked other people to do it in Java, Python, etc.

Inspired by the aforementioned problem, and an extended train ride home (thank you, Danish National Railways!!), I decided to see what a C# (v. 3) version could look like:
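
I can’t reproduce my exact 36 lines here, but the gist of it was a LINQ pipeline along these lines (directory and file names are made up):

    using System.IO;
    using System.Linq;
    using System.Text.RegularExpressions;

    class WordCount
    {
        static void Main()
        {
            // count all words across all files in the input directory
            var counts = Directory.GetFiles(@"c:\temp\files")
                .SelectMany(file => Regex.Split(File.ReadAllText(file).ToLower(), @"\W+"))
                .Where(word => word.Length > 0)
                .GroupBy(word => word)
                .Select(g => new { Word = g.Key, Count = g.Count() })
                .ToList();

            // one output file sorted alphabetically, one sorted by count
            File.WriteAllLines(@"c:\temp\words-alpha.txt",
                counts.OrderBy(c => c.Word)
                    .Select(c => c.Word + "\t" + c.Count).ToArray());

            File.WriteAllLines(@"c:\temp\words-by-count.txt",
                counts.OrderByDescending(c => c.Count)
                    .Select(c => c.Word + "\t" + c.Count).ToArray());
        }
    }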

Weighing in at 36 lines and executing in 10.2 seconds (on my Intel Core 2 laptop with 4 GB RAM), it is – I think – a pretty clear and performant alternative to the other languages mentioned.