Ways of scaling out with Rebus #4: SQL Server

Surprisingly enough, Rebus can also use a table in SQL Server to function as its message queue(s).

This makes for a compelling setup if you’re working with an IT department that is good at operating SQL Server, but may not be good at operating message brokers and whatnot, or if you’re interested in limiting the number of moving parts in your system to an absolute minimum.

It goes without saying that a SQL Server-based Rebus system will not scale as easily as one based on a decentralized message queueing system, but with intelligent row locking, Rebus can still achieve a pretty decent message throughput.

Scale it already!

Ok ok, but it’s trivial! In fact, I made a SQL Scaleout Demo as well that demonstrates a simple competing consumers setup. In the demo, a producer will send 10, 100 or 1000 jobs when you press a button, and any number of consumers running in parallel will get to receive the jobs.
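
To give an idea of what the sending side boils down to, here's a minimal sketch (not the demo's actual code) in the newer Rebus configuration style; the Job type, queue names and connection string are assumptions made for illustration, and the exact UseSqlServer overload differs between Rebus.SqlServer versions:

```csharp
using System.Threading.Tasks;
using Rebus.Activation;
using Rebus.Config;
using Rebus.Routing.TypeBased;

class Job
{
    public int Id { get; set; }
}

class Program
{
    static async Task Main()
    {
        using var activator = new BuiltinHandlerActivator();

        var bus = Configure.With(activator)
            // NOTE: the UseSqlServer overload varies between Rebus.SqlServer versions
            .Transport(t => t.UseSqlServer(
                "server=.; database=rebus_test; trusted_connection=true", "producer"))
            // route Job messages to the consumers' shared 'consumer' queue
            .Routing(r => r.TypeBased().Map<Job>("consumer"))
            .Start();

        // the "send 1000 jobs" button boils down to a loop like this
        for (var i = 0; i < 1000; i++)
        {
            await bus.Send(new Job { Id = i });
        }
    }
}
```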

Here’s a picture showing a producer that has produced 1000 jobs, and three competing consumers very eager to get some work:

[Screenshot: a producer that has sent 1000 jobs, and three eager competing consumers]

Please note that small batches might not always appear to be distributed evenly among the consumers, especially after a period of inactivity.

This behavior is caused by a simple backoff strategy, whereby a consumer takes longer and longer breaks between polls of the messages table when no message could be found.

As a result, a small batch might be completely consumed by the first consumer lucky enough to wake up from its slumber, stealing all the work from the other consumers.
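
Just to illustrate the idea (this is not Rebus's actual implementation), such a backoff strategy could look roughly like this, where tryReceiveNextMessage and process are hypothetical placeholders for reading from the messages table and handling a message:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

static class BackoffExample
{
    public static async Task PollWithBackoff(
        Func<object> tryReceiveNextMessage,   // "grab the next row from the messages table"
        Action<object> process,               // "handle the message"
        CancellationToken token)
    {
        var backoffSteps = new[]
        {
            TimeSpan.FromMilliseconds(200),
            TimeSpan.FromSeconds(1),
            TimeSpan.FromSeconds(5),
        };

        var idlePolls = 0;

        while (!token.IsCancellationRequested)
        {
            var message = tryReceiveNextMessage();

            if (message != null)
            {
                idlePolls = 0;      // found work, so reset the backoff
                process(message);
                continue;
            }

            // nothing in the table: wait a little longer before the next poll
            var delay = backoffSteps[Math.Min(idlePolls, backoffSteps.Length - 1)];
            idlePolls++;

            await Task.Delay(delay, token);
        }
    }
}
```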

In order to make this blog post a little longer, I'll just show off the Rebus configuration API that configures and starts a consumer, using ‘consumer’ as its logical queue name (a ‘logical queue’ because the queue is emulated in the ‘RebusMessages’ table by putting the queue name in the ‘Recipient’ column) and ‘error’ as its dead-letter queue.
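
Here's a minimal sketch of what that could look like with the SQL Server transport (shown in the newer Rebus configuration style; the Job type is made up for illustration, and the exact UseSqlServer overload differs between Rebus.SqlServer versions, so treat the specific call as an assumption):

```csharp
using System;
using System.Threading.Tasks;
using Rebus.Activation;
using Rebus.Config;

class Job
{
    public int Id { get; set; }
}

class Program
{
    static void Main()
    {
        using var activator = new BuiltinHandlerActivator();

        activator.Handle<Job>(job =>
        {
            Console.WriteLine($"Handled job {job.Id}");
            return Task.CompletedTask;
        });

        Configure.With(activator)
            // 'consumer' is the logical input queue - messages are rows in the
            // shared messages table, addressed via the 'Recipient' column;
            // failed messages end up in the 'error' dead-letter queue
            .Transport(t => t.UseSqlServer(
                "server=.; database=rebus_test; trusted_connection=true", "consumer"))
            .Start();

        Console.WriteLine("Consumer running - press ENTER to quit");
        Console.ReadLine();
    }
}
```

Starting several of these processes with the same ‘consumer’ queue name is all it takes to get the competing consumers behavior shown in the screenshot above.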

Also, please note that the database ‘rebus_test’ must be created before you run the code shown above.

Ways of scaling out with Rebus #3: RabbitMQ

With RabbitMQ, scaling out your Rebus-based workers is easy too – in fact, I’ve created a small RabbitMQ scaleout demo that shows how a producer can produce small nuggets of work, and any number of consumers can be started in order to distribute work evenly.

The principle is the same as when you’re using Azure Service Bus: Just start a number of competing consumers and you’re good to go.
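
For reference, here's a minimal sketch of such a consumer (again in the newer Rebus configuration style; handler registration happens on the activator just as with the other transports, and ‘amqp://localhost’ is an assumed connection string):

```csharp
using System;
using Rebus.Activation;
using Rebus.Config;

class Program
{
    static void Main()
    {
        using var activator = new BuiltinHandlerActivator();

        // register handlers on the activator just like in the SQL Server sketch,
        // then point the transport at RabbitMQ - every consumer instance uses the
        // same 'consumer' queue, so RabbitMQ spreads the jobs among them
        Configure.With(activator)
            .Transport(t => t.UseRabbitMq("amqp://localhost", "consumer"))
            .Start();

        Console.WriteLine("Consumer running - press ENTER to quit");
        Console.ReadLine();
    }
}
```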

An example of running the producer with two consumers can be seen below – here, I have published 100 jobs for the consumers to share.

[Screenshot: competing consumers with RabbitMQ]

If you’re interested in running the demo yourself, you can check out the code from GitHub and follow the instructions in the accompanying readme.