This is how developers might punish your database…

…or “Twisted use cases for SQL Server”, as I also like to call it 🙂

It’s the title of a presentation that I’m doing at this year’s SQL Saturday in Copenhagen on the 19th of September.

The presentation is about how development teams need not despair when faced with the classic and almost ubiquitous constraint that data must be stored in SQL Server.

I’ve created the following state diagram to illustrate what the presentation is about:

[State diagram: sql-server-yes-no]

When they said to put data in SQL Server, did they tell you HOW to put data in SQL Server? They didn’t? Cool… let’s talk about that 🙂

Ways of scaling out with Rebus #4: SQL Server

Surprisingly enough, Rebus can also use a table in SQL Server to function as its message queue(s).

This makes for a compelling setup if you’re working with an IT department that is good at operating SQL Server, but may not be good at operating message brokers and whatnot, or if you’re interested in limiting the number of moving parts in your system to an absolute minimum.

It goes without saying that a SQL Server-based Rebus system will not scale as easily as one based on a decentralized message queueing system, but with intelligent row locking, Rebus can still achieve a pretty decent message throughput.
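To give an idea of what that row locking looks like, here’s a minimal sketch of the pattern – not Rebus’ actual SQL or schema, and the text ‘Body’ column is an assumption made just for the example. The READPAST hint lets a consumer skip rows that other consumers have locked instead of blocking on them, which is what allows several competing consumers to dequeue from the same table at once:

```csharp
using System.Data.SqlClient;

// A minimal sketch of using a table as a queue for competing consumers.
// Not Rebus' actual SQL or schema: a 'RebusMessages' table with 'Recipient'
// and a text 'Body' column is assumed here purely for illustration.
class SqlQueueReceiveSketch
{
    public static string TryReceive(string connectionString, string queueName)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();

            using (var command = connection.CreateCommand())
            {
                // ROWLOCK keeps the locks narrow, READPAST skips rows that are
                // currently locked by other consumers instead of blocking on them
                command.CommandText = @"
                    DELETE TOP (1) FROM RebusMessages WITH (ROWLOCK, READPAST)
                    OUTPUT DELETED.Body
                    WHERE Recipient = @recipient";

                command.Parameters.AddWithValue("recipient", queueName);

                // returns the message body, or null if the 'queue' was empty
                return command.ExecuteScalar() as string;
            }
        }
    }
}
```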

Scale it already!

Ok ok, but it’s trivial! In fact, I made a SQL Scaleout Demo as well that demonstrates a simple competing consumers setup. In the demo, a producer will send 10, 100 or 1000 jobs when you press a button, and any number of consumers running in parallel will get to receive the jobs.
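The producer side of the demo boils down to sending a batch of messages that all get routed to the ‘consumer’ queue. A rough sketch of what that could look like – assuming a newer Rebus with the Rebus.SqlServer transport and type-based routing, and with a plain string standing in for the demo’s real job message type:

```csharp
using System.Threading.Tasks;
using Rebus.Activation;
using Rebus.Config;
using Rebus.Routing.TypeBased;

// A sketch of the producer side - connection string, queue names and the
// use of plain strings as job messages are illustrative assumptions
class ProducerSketch
{
    static async Task Main()
    {
        const string connectionString =
            "server=.; database=rebus_test; trusted_connection=true";

        using (var activator = new BuiltinHandlerActivator())
        {
            var bus = Configure.With(activator)
                .Transport(t => t.UseSqlServer(connectionString, "producer"))
                // route all string messages to the 'consumer' queue
                .Routing(r => r.TypeBased().Map<string>("consumer"))
                .Start();

            // pressing the '1000 jobs' button amounts to something like this
            for (var number = 0; number < 1000; number++)
            {
                await bus.Send(string.Format("job {0}", number));
            }
        }
    }
}
```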

Here’s a picture showing a producer that has produced 1000 jobs, and three competing consumers very eager to get some work:

[Screenshot: producer and three competing consumers]

Please note that small batches might not always appear to be distributed evenly among the consumers, especially after a period of inactivity.

This is caused by a simple backoff strategy, whereby a consumer takes longer and longer breaks between polls of the messages table whenever no message could be found.

As a result, a small batch might be consumed entirely by the first consumer lucky enough to wake up from its slumber, stealing all the work from the other consumers.
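The idea is simple enough to fit in a few lines. Here’s a sketch of it – not Rebus’ actual implementation, and the receive/handle delegates are stand-ins for polling the messages table and doing the actual work:

```csharp
using System;
using System.Threading;

// A sketch of the backoff strategy described above: each empty poll makes
// the consumer wait a little longer before polling the table again
class BackoffPollingLoop
{
    static readonly TimeSpan[] BackoffTimes =
    {
        TimeSpan.FromMilliseconds(100),
        TimeSpan.FromMilliseconds(500),
        TimeSpan.FromSeconds(1),
        TimeSpan.FromSeconds(5),
    };

    public void Run(Func<string> tryReceiveMessage, Action<string> handle, CancellationToken token)
    {
        var emptyPolls = 0;

        while (!token.IsCancellationRequested)
        {
            var message = tryReceiveMessage();

            if (message != null)
            {
                emptyPolls = 0;      // found work: reset the backoff
                handle(message);
            }
            else
            {
                // no work: wait progressively longer before the next poll
                var index = Math.Min(emptyPolls, BackoffTimes.Length - 1);
                emptyPolls++;
                token.WaitHandle.WaitOne(BackoffTimes[index]);
            }
        }
    }
}
```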

In order to make this blog post a little longer, I’ll just show off the Rebus configuration API that configures and starts a consumer, using ‘consumer’ as its logical queue name and ‘error’ as its dead-letter queue. The queue is ‘logical’ because it is emulated in the ‘RebusMessages’ table: the queue name is simply the value in the ‘Recipient’ column.
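The snippet below is a sketch in the style of the newer fluent configuration API; the exact UseSqlServer overload varies between Rebus.SqlServer versions, and the string handler is just a stand-in for the demo’s real job handler:

```csharp
using System;
using System.Threading.Tasks;
using Rebus.Activation;
using Rebus.Config;

class ConsumerSketch
{
    static void Main()
    {
        // NOTE: the 'rebus_test' database must already exist - Rebus creates
        // the 'RebusMessages' table, but not the database itself
        const string connectionString =
            "server=.; database=rebus_test; trusted_connection=true";

        using (var activator = new BuiltinHandlerActivator())
        {
            // stand-in handler for the jobs sent by the producer
            activator.Handle<string>(job =>
            {
                Console.WriteLine("Got job: {0}", job);
                return Task.CompletedTask;
            });

            Configure.With(activator)
                // 'consumer' is the logical input queue, i.e. the value that
                // ends up in the 'Recipient' column of 'RebusMessages';
                // failed messages go to 'error', Rebus' default dead-letter queue
                .Transport(t => t.UseSqlServer(connectionString, "consumer"))
                .Start();

            Console.WriteLine("Consumer running - press ENTER to quit");
            Console.ReadLine();
        }
    }
}
```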

Also, please note that the database ‘rebus_test’ must be created before you run the code shown above.

Ways of scaling out with Rebus #1

Introduction

When you’re working with messaging, and you’re in need of processing messages that take a fair amount of time to process, you’re probably in need of some kind of scaling-out strategy. An example that I’ve been working with lately is image processing: by some periodic schedule, I would have to download and render a number of SVG templates and pictures, and that number would be thousands and thousands.

Since processing each image would have no effect on the processing of the next image, the processing of images is an obvious candidate for some kind of parallelisation, which just happens to be pretty easy when you’re initiating all work with messages.

Rudimentary scaling: Increase number of threads

One way of “scaling out” your work with Rebus is to increase the number of worker threads that the bus creates internally. If you check out the documentation about the Rebus configuration section, you can see that it’s simply a matter of doing something like this:
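Depending on the Rebus version, the worker count can be set either in that configuration section or through the fluent configuration API. Here’s a rough sketch of the latter, using the in-memory transport just to keep the example self-contained; the queue name and numbers are placeholders to tune for your own workload:

```csharp
using System;
using Rebus.Activation;
using Rebus.Config;
using Rebus.Transport.InMem;

// A sketch of bumping the number of worker threads via the fluent
// configuration API - transport, queue name and numbers are placeholders
class WorkerCountSketch
{
    static void Main()
    {
        using (var activator = new BuiltinHandlerActivator())
        {
            Configure.With(activator)
                // in-memory transport just to keep the sketch self-contained
                .Transport(t => t.UseInMemoryTransport(new InMemNetwork(), "my-input-queue"))
                .Options(o =>
                {
                    o.SetNumberOfWorkers(10);  // worker threads polling the input queue
                    o.SetMaxParallelism(10);   // upper bound on concurrently handled messages
                })
                .Start();

            Console.WriteLine("Bus running with 10 workers - press ENTER to quit");
            Console.ReadLine();
        }
    }
}
```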

Increasing the number of worker threads provides a simple and easy way to parallelise work, as long as your server can handle it. Each CLR thread will have 1 MB of RAM reserved for its stack, and will most likely require additional memory to do whatever work it does, so you’ll probably have to perform a few measurements or trial runs in order to locate a sweet spot where memory consumption and CPU utilization are good.

If you’re in need of some serious processing power though, you’ll most likely hit the roof pretty quickly – but you’re in luck, because your messaging-based app lends itself well to being distributed to multiple machines, although there are a few things to consider depending on the type of transport you’re using.

In the next posts, I’ll go through examples on how you can distribute your work and scale out your application when you’re using Rebus together with Azure Service Bus, RabbitMQ, SQL Server, and finally with MSMQ. Happy scaling!