Using Azure to host the node.js backend of your Xamarin-based iOS app

TL;DR: This is how to tell Azure to host your node.js web service even though your repository contains .NET solution files and/or projects: Create an app setting called Project and set its value to the directory where your node app resides, e.g. src\Server – this is the argument that will make Azure decide to host your node.js stuff and not the .NET stuff.

This is not new – it’s how you pick a .NET project to host as well when your repository contains multiple hostable .NET projects, but when it’s .NET you point to a project file instead of a directory.

Do NOT be fooled by GitHub issues mentioning setting the SCM_SCRIPT_GENERATOR_ARGS app setting to --node – doing that will not work together with the Project setting!

I wrote this blog post because it took me quite a while to figure this seemingly obvious thing out, mostly because I got fooled by various red herrings around the net referring to the aforementioned Kudu setting.

Long version :)

Lately, I’ve been tinkering a bit with Xamarin Studio, trying to get a little bit into building a simple iPhone app.

My app needs a backend, which will function as a mediator between the iPhone app and another backend, allowing me to

  • tailor the service to my iPhone app, trimming the API for my needs, and
  • not worry too much about having to update the iPhone app every time something changes on the real backend (the “backend-backend”…)

This is a sound way of putting these things together IMHO, and since the backend for my iPhone app will mostly function as a mediator/proxy it was kind of an obvious opportunity to get to play a little bit with node.js as well ;) (because JavaScript is fun, and because of its asynchronous nature)

But – alas – when I Git-deployed my Azure web site, I got the following message in the Azure portal:

[Screenshot: the failed deployment status in the Azure portal]

and clicking the log revealed that Azure was kind of confused by the fact that my repository did not contain an obvious single candidate for something to build & deploy – it had multiple (the Xamarin solution Client.sln and my initial Web API-based dummy server solution Server.sln):

[Screenshot: the deployment log, listing both Client.sln and Server.sln as build candidates]

When a Git-deployed Azure web site has multiple things that can potentially be built & deployed, you’ll usually create a Project app setting and point it to the web project that you want to host in that particular web site, e.g. setting it to src\Something.Web\Something.Web.csproj if that is a web site.

That’s also what we need to do here! – just by pointing to the directory where your node app’s package.json resides – in this case it’s src\Server2 (which was the most awesomest name I could come up with for my server no. 2….) It’s that simple ;)
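If you prefer the command line over the portal, the app setting can also be created with the (node-based, cross-platform) azure CLI – the site name here is of course made up:

```shell
# create the Project app setting, pointing Azure at the node app's directory
azure site appsetting add Project=src\Server2 my-site-name
```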

Passing user context in message headers with Rebus

When you’re building messaging systems, I bet you’ve run into a situation where you’ve had the need to pass some extra information along with your messages: Some kind of metadata that plays a role in addressing a cross-cutting concern, like e.g. bringing along contextual information about the user that initiated the current message correspondence.

To show how that can easily be achieved with Rebus, I’ve added the UserContextHeaders sample to the sample repository. This sample demonstrates how an “ambient user context” can be established in a client application which then automagically flows along with all messages sent as a consequence of the initiating message.

In this blog post, I’ll go through the key parts that make up the solution – if you’re interested in the details, I suggest you go download the source code.

First, let’s

See how Rebus is configured

As you can see, I’ve added some extra setup in an extension method called AutomaticallyTransferUserContext – it looks like this:
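The snippet itself might look something like this – the helper names match the ones discussed below, but the exact shapes of Rebus’ event delegates may differ between Rebus versions, so treat this as a sketch:

```csharp
public static class UserContextExtensions
{
    public static RebusConfigurer AutomaticallyTransferUserContext(this RebusConfigurer configurer)
    {
        return configurer.Events(e =>
        {
            // raised once for each outgoing message
            e.MessageSent += (bus, destination, message) =>
                EncodeUserContextHeadersIfPossible(bus, message);

            // raised when the context for an incoming message has been established
            e.MessageContextEstablished += (bus, messageContext) =>
                DecodeUserContextHeadersIfPossible(messageContext);
        });
    }
}
```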

It’s pretty clear that it hooks into Rebus’ MessageSent and MessageContextEstablished events, which are raised for all outgoing messages and all incoming messages respectively.

First, let’s see what we must do when we’re

Sending messages

Let’s see what’s inside that EncodeUserContextHeadersIfPossible method from the snippet above:
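A sketch of the method – AmbientUserContext and Encode are made-up names standing in for the sample’s thread-bound context implementation:

```csharp
static void EncodeUserContextHeadersIfPossible(IBus bus, object message)
{
    // look for an ambient user context (most likely bound to the current thread)
    var userContext = AmbientUserContext.Current;
    if (userContext == null) return;

    // attach the encoded context as a header on this particular outgoing message
    bus.AttachHeader(message, "user-context", userContext.Encode());
}
```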

As you can see, it checks for the presence of an ambient user context (most likely attached to the currently executing thread), encoding that context in a message header if one was found. This way, the user context will be included on all outgoing messages, no matter if they’re sent, published or replied.

Now, let’s see how we’re

Handling incoming messages

Let’s just check out DecodeUserContextHeadersIfPossible mentioned in the configuration extension above:
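Again a sketch with assumed names – the real sample may use slightly different header keys and decoding logic:

```csharp
static void DecodeUserContextHeadersIfPossible(IMessageContext context)
{
    // is there a user context header on the incoming message?
    object encodedUserContext;
    if (!context.Headers.TryGetValue("user-context", out encodedUserContext)) return;

    // make the decoded user context available to message handlers
    context.Items["current-user-context"] = UserContext.Decode((string)encodedUserContext);
}
```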

As you can see, it checks for the presence of a user context header on the incoming message, decoding it and adding it to the message context if it was there. This means that message handlers can access the context like this:
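For example via the ambient message context – a sketch with made-up message and property names (the sample goes one step further and constructor-injects a UserContext into the handlers):

```csharp
public class SomeMessageHandler : IHandleMessages<SomeMessage>
{
    public void Handle(SomeMessage message)
    {
        // pick the decoded user context out of the current message context
        var userContext = (UserContext)MessageContext.GetCurrent()
            .Items["current-user-context"];

        Console.WriteLine("Handling message on behalf of {0}", userContext.UserName);
    }
}
```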

These were the main parts that make up the automatically flowing user context – there are a few more details in the sample code, like e.g. an implementation of a thread-bound ambient user context and a constructor-injected UserContext in the message handlers – pretty sweet, if I have to say so myself :) please head over to the UserContextHeaders sample for some working code to look at.

Topshelf and IoC containers

To top off my previous post with an example on how almost any kind of application can be hosted as a Windows Service, I’ll show an example on how Topshelf can be used to host an IoC container.

As most developers probably know, an IoC container functions as a logical application container of sorts, helping with creating stuff when stuff needs to be created, and disposing stuff when stuff needs to be disposed, keeping alive whatever needs to stay alive for the entire duration of the application lifetime.

This way, when we have this Topshelf-IoC-container framework in place, we can use it to activate all kinds of applications.

I’ll show how it’s done with Castle Windsor, but I think it should be fairly easy to port this solution to any other IoC container.

Create a service class that holds the container

First, let’s create a class that implements the Start/Stop methods of ServiceControl – this class will initialize its Windsor container when it starts, and dispose it when it stops.
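A sketch of what that class might look like – the log4net-style logging calls are illustrative, swap in your logging library of choice:

```csharp
using Castle.Windsor;
using Castle.Windsor.Installer;
using log4net;
using Topshelf;

public class RobotService : ServiceControl
{
    static readonly ILog Log = LogManager.GetLogger(typeof(RobotService));

    IWindsorContainer _container;

    public bool Start(HostControl hostControl)
    {
        Log.Info("Starting robot service - initializing container");
        _container = new WindsorContainer().Install(FromAssembly.This());
        return true;
    }

    public bool Stop(HostControl hostControl)
    {
        Log.Info("Stopping robot service - disposing container");
        if (_container != null) _container.Dispose();
        _container = null;
        return true;
    }
}
```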

I’ve added some logging for good measure – when you’re a backend kind of guy, you come to appreciate a systematic and consistent approach towards logging in your Windows services, and I’m no exception.

Use Topshelf to activate the service class

Now we can have our RobotService activated as a Windows Service by adding the following C# spells:
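Something like the following – a sketch of the standard Topshelf HostFactory setup, with the log4net-style logging being illustrative:

```csharp
using System;
using log4net;
using Topshelf;

class Program
{
    static readonly ILog Log = LogManager.GetLogger(typeof(Program));

    static void Main()
    {
        // Topshelf needs a hand with logging unhandled app domain exceptions
        AppDomain.CurrentDomain.UnhandledException += (sender, args) =>
            Log.Fatal("Unhandled app domain exception", args.ExceptionObject as Exception);

        HostFactory.Run(c =>
        {
            c.Service<RobotService>(s =>
            {
                s.ConstructUsing(() => new RobotService());
                s.WhenStarted((service, host) => service.Start(host));
                s.WhenStopped((service, host) => service.Stop(host));
            });

            c.RunAsLocalSystem();
        });
    }
}
```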

Please note that Topshelf needs a little extra help with logging unhandled app domain exceptions, so I’ve added a little bit of Log.Fatal at the top.

Add components to the container to actually do something

Now, just to show some real action, I’ve added a single Windsor installer that looks like this:
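A sketch of such an installer – interval and log text are made up for the occasion:

```csharp
using System.Timers;
using Castle.MicroKernel.Registration;
using Castle.MicroKernel.SubSystems.Configuration;
using Castle.Windsor;
using log4net;

public class TimerInstaller : IWindsorInstaller
{
    static readonly ILog Log = LogManager.GetLogger(typeof(TimerInstaller));

    public void Install(IWindsorContainer container, IConfigurationStore store)
    {
        // periodically log a cheerful statement
        var timer = new Timer(5000);
        timer.Elapsed += (sender, args) => Log.Info("Hello there, I'm still alive!");
        timer.Start();

        // register the timer instance so the container disposes it on shutdown
        container.Register(Component.For<Timer>().Instance(timer));
    }
}
```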

As you can see, it will start a System.Timers.Timer, configured to periodically log a cheerful statement to the log and shove it in the container to be disposed when the application shuts down. It shouldn’t take too much imagination to come up with more useful and realistic tasks for the timer to perform.

This is actually all there is to it! Of course it can be customized if you want, but what I’ve shown is the only stuff that is necessary to create a full-blown Windows Service that manages the lifetime of application components and logs its work in a way that makes the service monitorable, e.g. by Microsoft SCOM and/or Splunk or whichever way you like to throw your logs.

As a public service, I’ve put this small example project in a GitHub repository for your cloning pleasure. Also, if you have any comments, please don’t hesitate to send them my way. Pull requests are also accepted :)

Updated guide to Topshelf

I have written about Topshelf several times before, but the API has gone through a few changes over the last couple of years – and since my Topshelf posts are, for some reason, constantly (according to my stats) being (re)visited, I thought it would be in order to show the most current version of Topshelf in action…

Oh, and if you’re reading this and you don’t know what Topshelf is – it’s a library that helps you to quickly make your C# console application into a Windows Service that can easily be F5-debugged as well as installed as a proper Windows Service.

This is how you do it:

Create a new console application

Just do it. Name it something cool, like RobotsOnFire.

Pull in Topshelf with NuGet

Either reach out for your mouse and clickety click a few dozen times… me? I prefer to go

Install-Package Topshelf

in the Package Manager Console, and this is how I avoid getting a tennis elbow even before I’ve written one line of code.

Remove unnecessary using directives and other redundant crap

This looks stupid – it makes you look like a person who doesn’t care:
[Screenshot: before – a file full of using directives that aren’t used]

This looks good – it reeks of the finicky meticulousness that is you:
[Screenshot: after – only the using directives that are actually needed]

Now, modify your Program class

into something like this:
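A minimal version could look like this – class and namespace names are just the ones I’d pick:

```csharp
using System;
using Topshelf;

namespace RobotsOnFire
{
    public class RobotsOnFireService : ServiceControl
    {
        public bool Start(HostControl hostControl)
        {
            // start a thread, a timer, or something similar here - don't block!
            Console.WriteLine("Robots on fire!");
            return true;
        }

        public bool Stop(HostControl hostControl)
        {
            // stop whatever was started in Start
            return true;
        }
    }

    class Program
    {
        static void Main()
        {
            HostFactory.Run(c => c.Service<RobotsOnFireService>());
        }
    }
}
```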

and then press F5, and then you’ll be looking at something like this:

[Screenshot: the console output you get when pressing F5]

Wow! That was easy, right?! In order to actually do something in your Windows Service, you start a thread or a timer or something similar in the Start method and make sure that what you started is stopped again in the Stop method. In other words, the Start and Stop methods must not block – otherwise, the Windows Service Control Manager will not consider the service properly started.

Since this bad boy is an ordinary Console Application on the surface, it can be F5-debugged and run by executing RobotsOnFire.exe. But in addition to that, you can do this (in an elevated command prompt):
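The command in question is Topshelf’s built-in install verb, assuming the executable is named RobotsOnFire.exe:

```shell
RobotsOnFire.exe install
```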

[Screenshot: installing the service from an elevated command prompt]

which results in Robots On Fire turning up as a Windows Service in Windows:

[Screenshot: Robots On Fire showing up in the Windows Services snap-in]

Now, let’s uninstall it again:
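Again courtesy of Topshelf’s built-in command line handling:

```shell
RobotsOnFire.exe uninstall
```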

[Screenshot: uninstalling the service again]

and then let’s customize a couple of settings:
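For example the service name, display name, and description – the texts here are just made up for the occasion, and the service class is the one from before:

```csharp
HostFactory.Run(c =>
{
    c.Service<RobotsOnFireService>();

    // these are what show up in the Services snap-in
    c.SetServiceName("RobotsOnFire");
    c.SetDisplayName("Robots On Fire");
    c.SetDescription("Keeps the robots nice and toasty");
});
```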

which, after installing it again, will show up like this:

[Screenshot: the customized name, display name, and description in the Services snap-in]

Topshelf has many additional things that can be customized – e.g. you can perform actions in connection with installing/uninstalling the service, make it install to be run under various system accounts, and – this is one of the cooler things – you can declare a dependency on e.g. a local SQL Server, a local IIS, or any other locally running Windows Service – this way, services will always be started/stopped in the right order, ensuring that e.g. your MSMQ-dependent service is not started before the Message Queueing service.

Ways of scaling out with Rebus #5: MSMQ

I deliberately waited until the last post in my series on how to scale your Rebus work to talk about how to do it with MSMQ. The thing is, MSMQ doesn’t lend itself well to the competing consumers pattern and thus requires some other mechanism in order to be able to effectively distribute work between a number of workers.

NServiceBus achieves this ability with its Distributor process, which is a sophisticated load balancer that communicates with its worker processes in order to avoid overloading them with work, etc.

The Rebus way of doing this is much simpler. I’ve long contemplated building a simple load balancer – one that blindly distributes work in round-robin fashion, which I believe would provide a sufficiently powerful and wonderfully unsophisticated way of scaling out your chunks of work.

This means that in a producer/consumer scenario, you would send all the work to the distributor, which would forward pieces of work to consumers in equal portions. It can be shown like this:

[Diagram: a producer sends all work to the load balancer, which forwards it evenly to the workers]

This is so simple, it cannot fail.

Which is why I’ve made the Rebus.MsmqLoadBalancer package, containing a LoadBalancerService.

Once you’ve executed the following PowerShell spell:
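That spell being, of course, an ordinary NuGet package installation in the Package Manager Console:

```shell
Install-Package Rebus.MsmqLoadBalancer
```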

the load balancer service can be started like this (assuming YourProject was a console application):
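Since I haven’t run the load balancer myself yet, treat the following as a sketch with assumed method names rather than the definitive LoadBalancerService API:

```csharp
using System;
using Rebus.MsmqLoadBalancer;

class Program
{
    static void Main()
    {
        // receive work in the local 'distributor' queue...
        var service = new LoadBalancerService("distributor");

        // ...and forward it, round-robin style, to the workers
        service.AddDestinationQueue("worker@machine1");
        service.AddDestinationQueue("worker@machine2");
        service.AddDestinationQueue("worker@machine3");

        service.Start();

        Console.WriteLine("Load balancer running - press ENTER to quit");
        Console.ReadLine();
    }
}
```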

In this example, the load balancer will receive messages from the local distributor queue and forward them, distributing them evenly among the three workers residing on machine1, machine2, and machine3.

If you want to put this bad boy to some serious action, you’ll most likely want to wrap it in some kind of host process – either hosting it in the same process as the producer of your work, or in a Topshelf service or something.

One thing to note though: I haven’t had the chance to try the load balancer myself yet, mainly because my own horizontal scaling experience comes from working with RabbitMQ and Azure Service Bus only.

If you’re interested in using it, please don’t hesitate to contact me with questions etc., and I’ll be happy to help you on your way!

Ways of scaling out with Rebus #4: SQL Server

Surprisingly enough, Rebus can also use a table in SQL Server to function as its message queue(s).

This makes for a compelling setup if you’re working with an IT department that is good at operating SQL Server, but may not be good at operating message brokers and whatnot, or if you’re interested in limiting the number of moving parts in your system to an absolute minimum.

It goes without saying that a SQL Server-based Rebus system will not scale as easily as one based on a decentralized message queueing system, but with intelligent row locking, Rebus can still achieve a pretty decent message throughput.

Scale it already!

Ok ok, but it’s trivial! In fact, I made a SQL Scaleout Demo as well that demonstrates a simple competing consumers setup. In the demo, a producer will send 10, 100 or 1000 jobs when you press a button, and any number of consumers running in parallel will get to receive the jobs.

Here’s a picture showing a producer that has produced 1000 jobs, and three competing consumers very eager to get some work:

[Screenshot: a producer that has produced 1000 jobs, and three competing consumers receiving them]

Please note that small batches might not always appear to be distributed evenly among the consumers, especially after a period of inactivity.

This is caused by a simple backoff strategy, whereby a consumer will take longer and longer breaks between polling the messages table when no message could be found.

This way, a small batch might be completely consumed by the first consumer that is lucky enough to wake up from its slumber, thus stealing all the work from the other consumers.

In order to make this blog post a little longer, I’ll just show off the Rebus configuration API that configures and starts a consumer (using ‘consumer’ as its logical queue¹ name, and ‘error’ as a dead-letter queue):
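Something along these lines – the exact UseSqlServer parameters may differ slightly between Rebus versions, and the Job message type is just a stand-in for whatever the demo sends:

```csharp
using (var adapter = new BuiltinContainerAdapter())
{
    // handle the jobs that the producer sends
    adapter.Handle<Job>(job => Console.WriteLine("Got job {0}", job.Id));

    Configure.With(adapter)
        .Transport(t => t.UseSqlServer("server=.;initial catalog=rebus_test;integrated security=sspi",
                                       "consumer", "error"))
        .CreateBus()
        .Start();

    Console.WriteLine("Consumer running - press ENTER to quit");
    Console.ReadLine();
}
```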

Also, please note that the database ‘rebus_test’ must be created before you run the code shown above.

  1. ‘Logical queue’ because the queue is emulated in the table ‘RebusMessages’ by specifying the name as the value in the ‘Recipient’ column.

Ways of scaling out with Rebus #3: RabbitMQ

With RabbitMQ, scaling out your Rebus-based workers is easy too – in fact, I’ve created a small RabbitMQ scaleout demo that shows how a producer can produce small nuggets of work, and any number of consumers can be started in order to distribute work evenly.

The principle is the same as when you’re using Azure Service Bus: Just start a number of competing consumers and you’re good to go.
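With the Rebus configuration API of the time, spinning up one such consumer could look something like this (hedged sketch – check the demo code for the authoritative version):

```csharp
Configure.With(new BuiltinContainerAdapter())
    .Transport(t => t.UseRabbitMq("amqp://localhost", "consumer", "error"))
    .CreateBus()
    .Start();
```

Every additional consumer process runs the exact same configuration, and since they all share the ‘consumer’ queue, RabbitMQ distributes the messages between them.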

An example on running the producer with two consumers can be seen below – here, I have published 100 jobs for the consumers to share.

[Screenshot: competing consumers with RabbitMQ – one producer, two consumers sharing 100 jobs]

If you’re interested in running the demo yourself, you can check out the code from GitHub and follow the instructions in the accompanying readme.