Messaging Software


There are several messaging protocols for interacting with messaging brokers: STOMP, OpenWire, AMQP... In the case of AMQP, its versions (0-9, 0-10, 1.0...) are very different and mutually incompatible.

For each of the protocols, depending on the programming language used (Python, Perl, Ruby, Java, C, C++, JavaScript...), you may have zero, one or more different client libraries that can be used, with different maturity levels and different sets of features supported.

The problem

Reliable and correct use of messaging is not easy: when interacting with a broker, most of the required code deals with exception handling:

  • what to do if you cannot connect to the broker? reconnect? how many times? how long do you wait between attempts?
  • what if the delivery of a message fails? retry? when to give up? what should be done with the message?
  • what if you do not get a reply/acknowledgment quickly enough? wait longer? abort?
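
To illustrate, the reconnection questions above usually end up as a retry loop with a backoff policy. Here is a minimal Python sketch; the `connect` callable and the parameter names are illustrative and not part of any of the libraries discussed here:

```python
import random
import time

def connect_with_retry(connect, max_attempts=5, base_delay=1.0, max_delay=30.0):
    """Try to connect to a broker, retrying with exponential backoff.

    `connect` is any zero-argument callable that raises OSError on failure;
    the parameters answer the "how many times / how long" questions above.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return connect()
        except OSError:
            if attempt == max_attempts:
                raise  # give up: let the caller decide what to do next
            # exponential backoff with jitter, capped at max_delay
            delay = min(max_delay, base_delay * 2 ** (attempt - 1))
            time.sleep(delay * random.uniform(0.5, 1.0))
```

Every producer and every consumer, in every language and for every protocol, needs some variant of this logic, which is exactly the duplication problem described below.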

For a given application, as soon as you want to support multiple protocols and/or multiple languages, you quickly face code duplication, in other words bug multiplication.

For instance, imagine a C++ producer talking to a Python consumer via two brokers using different protocols (e.g. an old technology in production and a new one being tested):


How many times will the same exception handling logic have to be implemented?

How can we solve it?

We can use "Lego bricks": small, robust and flexible components that can be combined into complete solutions addressing the most common use cases.

Here are the main components:


  • Message Queue (MQ)
    • file-system based message queue
    • simple and robust API supporting concurrent access


  • Messaging Transfer Agent (MTA)
    • transfer messages between a broker and a message queue (all combinations)
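
To make the Message Queue component concrete, here is a toy Python sketch of a file-system based queue. It is not the real python-dirq API, but it shows how atomic renames yield a simple API that stays safe under concurrent access:

```python
import os
import uuid

class DirQueue:
    """Toy file-system message queue (illustrative sketch only): each
    message is one file; concurrency is handled by writing to a temporary
    name and renaming atomically, and by locking an element with another
    atomic rename before consuming it."""

    def __init__(self, path):
        self.path = path
        os.makedirs(path, exist_ok=True)

    def add(self, body):
        name = uuid.uuid4().hex
        tmp = os.path.join(self.path, name + ".tmp")
        with open(tmp, "w") as fh:
            fh.write(body)
        os.rename(tmp, os.path.join(self.path, name))  # atomic publish
        return name

    def __iter__(self):
        for name in sorted(os.listdir(self.path)):
            if "." not in name:  # skip .tmp and .lck elements
                yield name

    def lock(self, name):
        try:  # atomic: only one consumer can win the rename
            os.rename(os.path.join(self.path, name),
                      os.path.join(self.path, name + ".lck"))
            return True
        except OSError:
            return False

    def get(self, name):
        with open(os.path.join(self.path, name + ".lck")) as fh:
            return fh.read()

    def remove(self, name):
        os.remove(os.path.join(self.path, name + ".lck"))
```

The add/iterate/lock/get/remove cycle is the whole API a client application has to learn, regardless of which brokers and protocols sit behind the MTAs.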

In practice

This section shows how to use the basic components in realistic use cases.

Simple producer/consumer use case

The simplest use case to consider is a producer/consumer pair.

Simplifying the producer


This figure shows how the MTA and the MQ can be combined to produce messages to one or more brokers:

  • the client application just needs to know the MQ API to produce messages
  • one or more MTAs can be configured to transfer messages from the MQ to one or more brokers
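
The forwarding direction of the MTA can be sketched as a loop that drains the queue directory and hands each message to a broker client. In this sketch `send` stands for a hypothetical broker-client callable and the one-file-per-message directory layout is the toy scheme used above, not the real MTA implementation:

```python
import os
import time

def mta_forward(queue_dir, send, poll_interval=1.0, once=False):
    """Sketch of an MTA's MQ-to-broker direction: drain the file-system
    queue and hand each message to `send` (a hypothetical broker-client
    callable). A message is deleted only after `send` returns, so a broker
    failure leaves it queued for the next pass."""
    while True:
        for name in sorted(os.listdir(queue_dir)):
            path = os.path.join(queue_dir, name)
            with open(path) as fh:
                send(fh.read())
            os.remove(path)  # acknowledge: remove only after a successful send
        if once:
            return
        time.sleep(poll_interval)
```

All the broker-specific exception handling lives in `send` and in this loop, once, instead of in every producer application.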

Simplifying the consumer


This figure shows how the MTA and the MQ can be combined to consume messages from one or more brokers:

  • one or more MTAs can be configured to receive messages from one or more brokers and put them in the MQ
  • the client application just needs to know the MQ API to consume the messages received
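
On the consuming side, the client application only iterates over the queue directory that the MTAs fill. A toy sketch (again one file per message with rename-based locking, not the real python-dirq API):

```python
import os

def consume(queue_dir, handler):
    """Consumer side sketch: iterate over the messages an MTA dropped into
    the queue directory, claim each one with an atomic rename, hand it to
    the application `handler`, then delete it. Returns how many messages
    were handled."""
    handled = 0
    for name in sorted(os.listdir(queue_dir)):
        if "." in name:  # skip temporary and locked elements
            continue
        locked = os.path.join(queue_dir, name + ".lck")
        try:
            os.rename(os.path.join(queue_dir, name), locked)
        except OSError:
            continue  # another consumer claimed it first
        with open(locked) as fh:
            handler(fh.read())
        os.remove(locked)
        handled += 1
    return handled
```

Note that the application code contains no broker, protocol, or reconnection logic at all.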

Scaling it up

What if these components become bottlenecks? How can the setup be scaled up?

The following figure illustrates a bigger configuration using the same Lego bricks as before:


This figure shows four dimensions which can be used to scale the consuming side of the use case:

  1. broker: the number of brokers can be increased to support the volume of messages
  2. MTA: one or more MTAs can be configured to consume messages from each broker; having more than one MTA per broker increases the throughput by load balancing the consumption of messages
  3. MQ: more than one MQ can be used to balance the load if the file-system is a bottleneck
  4. handler: if the (application specific) handler is a bottleneck, several instances can run in parallel

The message producing side of the use case can be scaled up using the same technique.
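
Point 4 above can be illustrated with several handler instances sharing one MQ: because an element is claimed with an atomic rename, each message is processed exactly once even when handlers run in parallel. In this toy sketch threads stand in for separate handler processes:

```python
import os
import threading

def handler_worker(queue_dir, results, results_lock):
    """One handler instance: claims queue elements by atomic rename so
    that several instances can share one MQ without any message being
    processed twice. Returns when the queue is drained."""
    while True:
        claimed = None
        for name in sorted(os.listdir(queue_dir)):
            if "." in name:  # skip temporary and locked elements
                continue
            locked = os.path.join(queue_dir, name + ".lck")
            try:
                os.rename(os.path.join(queue_dir, name), locked)
            except OSError:
                continue  # lost the race to another handler instance
            claimed = locked
            break
        if claimed is None:
            return  # nothing left to claim
        with open(claimed) as fh:
            body = fh.read()
        with results_lock:
            results.append(body)  # application-specific work goes here
        os.remove(claimed)
```

Adding or removing handler capacity is then just starting or stopping instances of this worker; no coordination service is needed beyond the file system.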

RPC use case

So far only the simple produce/consume use case has been shown; what about the RPC use case?

Here is how a combination of MTA and MQ could be used to allow application code (the RPC server) to receive messages and reply to them:


  • input: one MTA is configured to receive messages from the broker and store them in an input MQ
  • RPC server: the RPC server processes requests from the input MQ and produces responses in an output MQ
  • output: one MTA is configured to forward responses from the output MQ to the broker
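
The middle role can be sketched as follows: the RPC server drains the input MQ, computes a response, and atomically publishes it to the output MQ for the outgoing MTA to pick up. The one-file-per-message directory layout is the same hypothetical toy scheme as above:

```python
import os
import uuid

def rpc_server(in_dir, out_dir, handle):
    """RPC server sketch: take each request from the input MQ, compute a
    response with the application `handle` callable, and atomically
    publish it to the output MQ (from which an MTA would forward it to
    the broker). Returns the number of requests served."""
    os.makedirs(out_dir, exist_ok=True)
    served = 0
    for name in sorted(os.listdir(in_dir)):
        path = os.path.join(in_dir, name)
        with open(path) as fh:
            request = fh.read()
        response = handle(request)
        tmp = os.path.join(out_dir, uuid.uuid4().hex + ".tmp")
        with open(tmp, "w") as fh:
            fh.write(response)
        os.rename(tmp, tmp[:-4])  # atomic publish of the response
        os.remove(path)           # request handled, drop it from the input MQ
        served += 1
    return served
```

As before, the application code only sees two queue directories; both MTAs absorb all broker-side failure handling.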

Other use cases?

Just like genuine Lego bricks, the MTA and the MQ can be combined to produce very different results.

We believe that these components are generic enough to support all common use cases.

How can we handle an elastic service?

One can make a messaging solution evolve by adding or removing components.

However, how do we manage, in practice, an environment which grows and shrinks as needed?

Consider for instance the following use case:


There are already five processes (two MTAs and three handlers) to run and to monitor.

The best way to manage these is to use supervisors.


We have developed a daemon supervisor inspired by Erlang/OTP. As in Erlang/OTP, it is possible to declare hierarchies of supervisors and workers to build services. Hierarchies are useful and important because supervisors can be configured to handle the failures of their children in a custom way.

The use case shown in the previous figure could then be implemented with the following supervision tree:


Explaining it from the bottom:

  • mta1 and handler1 are grouped under the same supervisor; handler1 depends on mta1 because, if no messages arrive, it is pointless for it to run, and we may want to monitor their aggregated status
  • mta2 and handler2/3 are grouped under the same supervisor for a reason similar to the previous point
  • since hierarchies are supported, we can aggregate the two previous supervisors under a top-level one where the failure policies can be customized

By simply re-configuring the supervisor, we can grow/shrink the service as required.
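
To show the idea behind supervision (this is an illustrative sketch, not the actual daemon supervisor described above), here is a minimal Python loop that starts a set of worker commands and restarts any worker that exits with a non-zero status:

```python
import subprocess
import time

def supervise(commands, max_restarts=3):
    """Minimal flat supervisor sketch: start every worker command, restart
    any worker that exits with a non-zero status, and give up on a worker
    after `max_restarts` restarts. Returns the restart count per worker
    once all workers are done."""
    procs = {name: subprocess.Popen(cmd) for name, cmd in commands.items()}
    restarts = {name: 0 for name in commands}
    while procs:
        time.sleep(0.1)
        for name, proc in list(procs.items()):
            code = proc.poll()
            if code is None:
                continue  # worker still running
            if code == 0 or restarts[name] >= max_restarts:
                del procs[name]  # clean exit, or restart budget exhausted
            else:
                restarts[name] += 1
                procs[name] = subprocess.Popen(commands[name])
    return restarts
```

A real supervision tree adds what this flat sketch lacks: supervisors as children of other supervisors, dependencies between workers, and configurable per-node failure policies.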

Components availability

All the components described here (MQ, MTA and supervisor) are already available. Most of them are production ready, some are being tested.

  • Message Queue
    • Perl implementation: perl-Messaging-Message + perl-Directory-Queue
    • Python implementation: python-messaging + python-dirq

For more information, recommended versions and exact availability, see the messaging libraries page.

For complementary information, see the slides presented at the EMI TF 2012.

Topic revision: r13 - 2013-04-30 - MassimoPaladin