Ways of scaling out with Rebus #2: Azure Service Bus

Scaling out your application is easy with Azure Service Bus, because Azure Service Bus by design lends itself well to the competing consumers pattern as described by Gregor Hohpe and Bobby Woolf in the Enterprise Integration Patterns book.

So, in order to make this post a little longer, I’ll first tell you a bit about how Rebus makes use of Azure Service Bus, and then I’ll tell you how to scale it :)

Rebus and queue transactions

When Rebus is configured to use Azure Service Bus to transport messages like this:
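i.e. with a configuration along these lines – the `UseAzureServiceBus` transport method and the built-in container adapter are a sketch of the configuration API, and the connection string and queue names are examples:

```csharp
// Sketch of a typical Rebus + Azure Service Bus configuration.
// Queue names and the connection string are examples.
var connectionString = "Endpoint=sb://mynamespace.servicebus.windows.net/;...";

var bus = Configure.With(new BuiltinContainerAdapter())
    .Transport(t => t.UseAzureServiceBus(connectionString, "some_input_queue", "error"))
    .CreateBus()
    .Start();
```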

the bus will not use Azure Service Bus queues for its input queue and error queue, as you might think.

This is because Rebus will go to great lengths to promise you that a message can be received, and zero or more messages sent – in one single queue transaction!

This means that the underlying transport layer must somehow be capable of receiving and sending messages atomically – and in a way that can be either committed or rolled back.

And since Azure Service Bus has limited transactional capabilities that do NOT allow for sending messages to multiple queues transactionally, we had to take a different approach with Rebus.

So, how does Rebus actually use Azure Service Bus?

What Azure Service Bus DOES support though, is receiving and sending atomically within one single topic.

So when Rebus starts up with Azure Service Bus, it will ensure that a topic exists with the name “Rebus”, which will be used to publish all messages that are sent.

And then, for each logical input queue – let’s call it “some_input_queue” – there will be a subscription of the same name, and that subscription will be configured with a SqlFilter that filters received messages on a message property holding the name of the intended recipient’s input queue. The filter thus ensures that each endpoint receives only the messages intended for it.
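Conceptually, creating such a filtered subscription with the Azure Service Bus SDK of that era could look like this – note that `LogicalDestinationQueue` is an illustrative property name, not necessarily the one Rebus actually uses internally:

```csharp
// Hypothetical sketch: subscribe the "some_input_queue" endpoint to the "Rebus" topic,
// filtering on a message property that carries the destination queue name.
// Requires Microsoft.ServiceBus / Microsoft.ServiceBus.Messaging.
var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

var description = new SubscriptionDescription("Rebus", "some_input_queue");
var filter = new SqlFilter("LogicalDestinationQueue = 'some_input_queue'");

namespaceManager.CreateSubscription(description, filter);
```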

So – how to scale it?

Easy peasy – in the Azure portal, go to this section of your cloud service:

[Screenshot: the “Scale” section of the cloud service in the Azure portal]

and go crazy with this bad boy:

[Screenshot: the instance count slider]

and – there you have it! – that is how you can scale out your work with Rebus in Azure :)

One thing, though: when you’re doing some serious number crunching – depending on the granularity of your messages, of course – you may be bitten by the fact that the lease on an Azure Service Bus BrokeredMessage expires after 60 seconds. If that happens, Rebus has a fairly non-intrusive way of letting you renew the lease, which you can read more about on the “more about the Azure Service Bus transport” page on the Rebus wiki.

In the next post, I’ll delve into how to scale your Rebus workers if you’re using RabbitMQ.

Ways of scaling out with Rebus #1

Introduction

When you’re working with messaging, and you’re in need of processing messages that take a fair amount of time to process, you’re probably in need of some kind of scaling-out strategy. An example that I’ve been working with lately is image processing: on some periodic schedule, I would have to download and render a number of SVG templates and pictures – and that number would run into the thousands.

Since processing each image would have no effect on the processing of the next image, the processing of images is an obvious candidate for some kind of parallelisation, which just happens to be pretty easy when you’re initiating all work with messages.

Rudimentary scaling: Increase number of threads

One way of “scaling out” your work with Rebus is to increase the number of worker threads that the bus creates internally. If you check out the documentation about the Rebus configuration section, you can see that it’s simply a matter of doing something like this:
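i.e. bumping the `workers` attribute in the Rebus XML configuration section – something like this, where the queue names are examples and the exact attribute names should be checked against the configuration section documentation:

```xml
<rebus inputQueue="myService.input" errorQueue="myService.error" workers="10" />
```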

Increasing the number of worker threads provides a simple and easy way to parallelise work, as long as your server can handle it. Each CLR thread will have 1 MB of RAM reserved for its stack, and will most likely require additional memory to do whatever work it does, so you’ll probably have to perform a few measurements or trial runs in order to locate a sweet spot where memory consumption and CPU utilization are good.

If you’re in need of some serious processing power though, you’ll most likely hit the roof pretty quickly – but you’re in luck, because your messaging-based app lends itself well to being distributed to multiple machines, although there are a few things to consider depending on the type of transport you’re using.

In the next posts, I’ll go through examples on how you can distribute your work and scale out your application when you’re using Rebus together with Azure Service Bus, RabbitMQ, SQL Server, and finally with MSMQ. Happy scaling!

Install ElasticSearch on Ubuntu VMs in Azure

Since ElasticSearch is hot sh*# these days, and my old hacker friend Thomas Ardal wrote a nifty guide on how to install it on Windows VMs in Azure, I thought I might as well supplement with a guide on how to do the same thing, only on Ubuntu VMs in Azure.

So, in this guide I’ll take you through the steps necessary to set up three Ubuntu VMs in Azure and install an ElasticSearch node on each of them, and finally connect the nodes into a search cluster… here goes:

First, create a new virtual network

Unless you intend to add your new Ubuntu VMs to an existing virtual network, you should use the “New” button and go and create a new virtual network. You can just fill in the name and leave all other options at their defaults.

[Screenshot: creating a new virtual network]

Create virtual machines

Now, go and create a new virtual machine from the gallery.

[Screenshot: creating a new virtual machine from the gallery]

Select the latest Ubuntu from the list.

[Screenshot: selecting the Ubuntu image]

Give your virtual machine a sensible name – in this case, since this is the third machine in my ElasticSearch cluster, I’m calling it “elastica3”. For all three machines, I’ve created a user account called “mhg” on the machine so I can SSH to it.

[Screenshot: naming the virtual machine and creating the user account]

On the first machine, be sure to create a new cloud service that you can use to load balance requests among the machines. When adding the subsequent machines, remember to select the existing cloud service. In this case, since it’s balancing among “elastica1”, “elastica2”, and “elastica3”, I’m calling the cloud service “elastica”.

Moreover, it’s important that you add the machines to the same availability set! By putting the machines in different fault domains, Azure ensures that they are unlikely to crash, be disconnected, or fail at the same time.

[Screenshot: selecting the cloud service and availability set]

When the first machine was added, the public port 22 on the cloud service “elastica” got automatically mapped to port 22 on the machine. When adding the subsequent machines, select another public port to map to 22 so that you can SSH to each individual machine from the outside. I chose 23 and 24 for the two other machines.

[Screenshot: configuring the public SSH port mapping]

SSH to each machine

Open up a terminal and SSH to the first machine, logging in as “mhg”. In this example, I’m using the (default) port 22, which I will replace with 23 and 24 in order to SSH to the other two machines.
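With the cloud service DNS name from this walkthrough, that is:

```shell
# SSH to the first machine via the cloud service's public endpoint
ssh mhg@elastica.cloudapp.net -p 22
```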

Update apt-get

On each machine, I start out by running an apt-get update in order to download the most recent apt-get package lists.
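That is:

```shell
# refresh the package lists on each machine
sudo apt-get update
```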

Install Java

Now, on each machine I install Java.
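For example, with the era-appropriate OpenJDK runtime package – the exact package is a matter of preference, and a JRE is enough for ElasticSearch:

```shell
# install a Java runtime for ElasticSearch
sudo apt-get install -y openjdk-7-jre-headless
```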

and at this point I usually feel inspired to go grab myself a cup of coffee… ;)

Download and install ElasticSearch

And, finally, we’re ready to install ElasticSearch – go to the download page and copy the URL of the DEB package. At the time of writing this, the most recent DEB package is https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-0.90.5.deb which I download and install on each machine like this:
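That is:

```shell
# download the DEB package and install it
wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-0.90.5.deb
sudo dpkg -i elasticsearch-0.90.5.deb
```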

Configure ElasticSearch cluster

In order to edit the configuration file, I open it in an editor with root privileges.
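The DEB package installs the configuration under /etc/elasticsearch, so for example (use whichever editor you prefer):

```shell
# open the ElasticSearch configuration file as root
sudo nano /etc/elasticsearch/elasticsearch.yml
```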

By default, ElasticSearch will use UDP multicast to dynamically discover an existing cluster, which it will automatically join. On Azure though, multicast is not available, so we must explicitly specify which nodes go into our cluster. In order to do this, uncomment the line

to disable UDP discovery, and then add the full list of the IP addresses of your machines on the following line:
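Concretely, the two zen discovery settings end up like this – the IPs are the ones from this example cluster:

```yaml
# disable multicast discovery and list the cluster nodes explicitly
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["10.0.0.4", "10.0.0.5", "10.0.0.6"]
```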

In my case, the IPs assigned to the VMs were 10.0.0.4 through 10.0.0.6. You can use ifconfig on each machine if you’re in doubt which IP was assigned (or you can check it out via the Azure Portal).

After saving each file, remember to

for ElasticSearch to pick up the changes.
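The DEB package installs an init script, so the restart is presumably:

```shell
# restart the ElasticSearch service so the new configuration takes effect
sudo service elasticsearch restart
```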

Check it out

Now, on any of the three machines, try running the following curl command:
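e.g. hitting the cluster health endpoint:

```shell
# query the cluster health API on the local node
curl http://localhost:9200/_cluster/health?pretty=true
```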

which should yield something like this:
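For a healthy three-node cluster with no indices yet, the reply would be something along these lines (values illustrative):

```json
{
  "cluster_name" : "elasticsearch",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0
}
```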

Finally, let’s make the cluster accessible from the outside….

Set up load balancing among the three VMs

Go to the first VM on the “Endpoints” tab and add a new endpoint.

[Screenshot: the “Endpoints” tab of the first VM]

[Screenshot: adding a new load-balanced endpoint]

Remember to check the option that you want to create a new load-balanced set. Just go with the defaults when asked about how the load balancer should probe the endpoints.

Last thing is to add an endpoint to the two other VMs, selecting the existing load-balanced set.

[Screenshot: adding endpoints to the remaining VMs, selecting the existing load-balanced set]

When this step is completed, you should be able to visit your cloud service URL (in my case it was http://elastica.cloudapp.net:9200) and see something like this:
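i.e. the ElasticSearch banner response, roughly like this – node name and build details will of course differ:

```json
{
  "ok" : true,
  "status" : 200,
  "name" : "Example-Node",
  "version" : {
    "number" : "0.90.5"
  },
  "tagline" : "You Know, for Search"
}
```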

So, is it usable yet?

Not sure, actually – I haven’t had time to investigate how to properly set up an authorization mechanism so as to make my cluster accessible only to specific applications.

If anyone knows how to do that on Azure, please don’t hesitate to enlighten me :)

Why I have already stopped working on NServiceBus again

In my previous blog post I announced that I had joined the NServiceBus core team on September 1st, and at the time of writing, I have been working with them part time for two months.

I have already decided to stop working on the NServiceBus core team again though (as of November 1st), and in this blog post I’ll try to explain why.

In the beginning, I was really excited about getting to work with the awesome team that Udi Dahan has assembled to work on NServiceBus, and of course also to get to work with Udi himself, whom I admire and whose work I have followed closely during the last 5 years or so.

I knew that joining the NServiceBus team would mean that I would have to give up working on Rebus at some point, but I thought that that would not be a problem for me.

When work started though, I did have a slightly uneasy feeling, and it was hard for me to get excited about the actual work we were doing. I originally attributed that to the fact that I was working part time and thus had to do a lot of catching up every time I finally had time to work.

But I have come to realize that the reason I was not that motivated was that I was missing Rebus. And I realized that I felt odd because I was the least enthusiastic person on a team where everyone else radiated pure excitement.

I really really want to be excited about the work that I am doing, and I know that I am usually capable of mustering tremendous excitement whenever I get to focus on some project or task that makes sense to me – but I’m afraid that I would not be able to generate that level of excitement for NServiceBus.

Therefore I thought that it would be most fair to everyone that I would stop working on NServiceBus and leave that space for someone who can be truly passionate about it.

This means that I’m back doing full time Awesome Stuff at d60 again, although my role will probably be twisted slightly away from consulting towards something with Rebus and messaging, some R&D, some Windows Azure, and possibly some other stuff that I will probably get back to in future blog posts :)

Exciting times!

Preface

Back in 2011 I had worked with NServiceBus for a couple of years, and I was very happy about using it. I became sad though, when NSB version 2.5 was announced, because Udi had decided that NServiceBus would move away from the ultra-liberal Apache V2 license to requiring a paid license for production use.

I wasn’t sad because I thought that NServiceBus wasn’t worth the money. I was sad because I knew that the money aspect – just the fact that there was a money aspect – would suddenly become a barrier for me to introduce NServiceBus into future small to medium-sized projects.

I also realize that the money aspect was probably the one thing that was missing for NServiceBus to be looked at with greater seriousness by many companies worldwide, so – from a business perspective – I totally understand the move. It just didn’t fit my plans of building cool distributed stuff, made entirely from free frameworks and libraries.

Taking matters into my own hands

Therefore, on September 11th 2011, I made the initial commit to the Rebus repository and laid the groundwork for what I wanted to be a kind of “lean NServiceBus little brother” – an NServiceBus stand-in that could be used in small systems until they became serious enough to warrant the license fee.

From the outset, Rebus mimicked NServiceBus’ APIs, and would even be able to read and understand NServiceBus endpoint mappings, so as to make porting systems back and forth between Rebus and NServiceBus easy.

Rebus turned out to be pretty neat though, and it didn’t take many alpha versions before money was suddenly being moved around by Rebus. And a few versions after that, Rebus would help some people control a couple of power plants. At that point, I hadn’t even had the chance to use Rebus to build something real myself, but other people were happily using it to solve their messaging problems.

Today

Fast forward two years, and I’ve helped build several systems with Rebus, and I get to go to conferences and talk about it, and a big part of my daily work is to help my awesome colleagues at d60 build systems using messaging.

All is peace and quiet in Rebus-land, and then – BAM!1 – I get an email from Udi saying something along the lines of: “Have you considered turning to the dark side?” – inviting me to become part of the NServiceBus core team…!!

I did not see that one coming.

AND like that, I’m in a huge dilemma – because I really love working at d60, and I love my awesome colleagues, and I love the spirit that we have succeeded in building in the company – so it pains me to think that I would no longer be part of the d60 project.

On the other hand, I realize that this is an immensely awesome opportunity to

  • get to work with Udi, whose work I have admired for the last 5 years, and
  • get to work with some of the most talented .NET developers on the globe to
  • help build a messaging library, full time, as my day job – a task that I apparently think is fun enough that I will do it while doing the dishes, while staying up until the wee hours, again and again at our monthly hackernights, etc.

Therefore – a tough decision, but an obvious one. So of course I accepted Udi’s invitation!

So how’s that going to happen?

Luckily, d60 have been nice enough to make an agreement that I can start working on NServiceBus right away, ramping up my efforts over the next 6 months while I ramp down my d60 activities. I will not ramp all the way down though: I’ll continue to hang out at the d60 office and help d60 make cool distributed stuff with messaging, participate in brown-bag meetings, etc. I’ll just spend most of the time working with the NServiceBus core team.

But what about Rebus?

There’s no doubt that I will concentrate my efforts on NServiceBus now.

But Rebus still exists! No one can deny that :) And Rebus has gained traction in places, in big as well as small companies, that will most likely NOT readily replace it with something else – so I expect Rebus to continue to be refined and maintained by its community for a long, long time.

I’ve agreed with Asger Hallas, whose contributions to Rebus throughout the last two years have been invaluable, that he will be the new Rebus lead.

Asger has contributed with code in the form of all of Rebus’ RavenDB integration and several improvements, and he has contributed with excellent insights into the technical challenges of building a messaging library, and our lengthy discussions have brought much awesomeness to the table. I can think of no better person to continue the direction in which Rebus is heading.

I’m a Rebus user – should I be worried?

I don’t think so. If Rebus can make your endpoints communicate now, Rebus will still be able to make your endpoints communicate in the future. Rebus has never had an ambition to become bigger than it is right now, so if you’re satisfied now, chances are you’ll be satisfied tomorrow as well.

I’m still gonna be around!

Also: I’m not going away! I’ll still be around for helping out if you have trouble with using Rebus. I have just been given the opportunity to focus on helping a great team improve the most popular .NET service bus framework, which I hope will teach me a lot about messaging and hopefully turn out to be beneficial for NServiceBus as well.

So, please don’t hesitate to contact me if you have any questions regarding Rebus down the road. And in the future, I might be able to answer a few questions about NServiceBus as well ;)

Cheers!

Edit: If you’re interested, check out Udi’s perspective on this.

How to use Rebus’ timeout service #2

In the previous post, we took a look at how Rebus can use the external timeout service to store its timeouts. The other way of using Rebus’ timeout service is to use the internal timeout service – that’s right, every Rebus service can function as its own timeout manager.

In fact, the external timeout service is just an ordinary Rebus service that throws exceptions if you send it something that is not a TimeoutRequest.

Since the external timeout service is the default choice, you have to do something in order to enable the internal timeout service. Not much, though – check this out:

Configure it
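A configuration along these lines – note that the `Timeouts`/`StoreInSqlServer` naming is a sketch of the configuration API, and the MSMQ transport is just an example:

```csharp
// Sketch: enable the internal timeout manager, storing timeouts in SQL Server.
// Queue names, connection string, and table name are examples.
var bus = Configure.With(new BuiltinContainerAdapter())
    .Transport(t => t.UseMsmq("some_input_queue", "some_error_queue"))
    .Timeouts(t => t.StoreInSqlServer(connectionString, "timeouts"))
    .CreateBus()
    .Start();
```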

and like that, your Rebus service will store its timeouts in a table called “timeouts” in the database specified by the given connection string – and it will automatically create the table on startup if it does not exist.

Only thing left is to lean back and bus.Defer(tillMañana, yourWorkMan).

How to use Rebus’ timeout service #1

The first way of using Rebus’ timeout service is to go with the default choice, which is the external timeout service. The external timeout service is a Windows Service that you can install – preferably one on each server in your environment – and leave there, running forever.

The timeout service will receive your TimeoutRequests, store them somewhere (preferably in a database of some kind), and then send them back as a TimeoutReply when the timeout has expired.

To get up and running, you can do this:

Get the Rebus source code and build the timeout service
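Something like this – assuming a build script in the repository root:

```shell
# clone the Rebus repository and build it
git clone https://github.com/rebus-org/Rebus.git
cd Rebus
build.cmd
```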

and then go to /Rebus/deploy/NET40/Timeout where you’ll find the binaries.

Configure the timeout service

Currently, the timeout service can store its timeouts in memory, in SQL Server, or in MongoDB. The timeout service can be configured by opening up Rebus.Timeout.exe.config and editing the timeout element – e.g. you might write something like this:
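e.g. a SQL Server-backed configuration – the attribute names here are illustrative, so check the sample configuration that ships with the service:

```xml
<timeout storageType="sqlServer"
         connectionString="server=.;initial catalog=rebus_timeouts;integrated security=sspi" />
```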

or this
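a MongoDB-backed one, with the same caveat about the exact attribute names:

```xml
<timeout storageType="mongoDb"
         connectionString="mongodb://localhost/rebus_timeouts" />
```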

depending on your preferences.

Install the service

In an administrator command prompt:
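presumably something like this, assuming the executable supports the usual service-host install verb and registers itself under a service name along these lines:

```shell
Rebus.Timeout.exe install
net start rebus.timeout
```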

and then, since rebus.timeout is the default queue name for the timeout service, all other local Rebus services can now bus.Defer(toTheFuture, theirMessages).

In the next post, I’ll explain how to configure Rebus to use the internal timeout service.