Case Study: MQ & Broker on z/Linux

Real world, real time: Getting from 0 to 20 million near real-time MQ messages per day on z/Linux...

We were recommended to a subsidiary of a large EU bank in connection with an ATM/POS management system built by a highly reputable international vendor.

This is what we had to start with:

  • A card authorization system by a reputable international vendor integrated by another reputable system integrator, running on HP Nonstop (Tandem).
  • A core banking system running on z/OS. (No need to qualify that the vendor was large and reputable. There are no small-time vendors of z/OS systems. 🙂)
  • File-based interface. Yes, seriously, I mean it. File-based interfaces. Both systems were working offline and exchanging files daily. Keep reading; it becomes a bit clearer why.
  • Real time constraints. Here’s the deal: A credit card authorization system deals with Visa and MasterCard networks. These networks have very tight constraints on the amount of time you can spend working on their transactions before the initiating device gets a reply. How about 4 seconds? And if that sounds like a lot — it’s not. In these 4 seconds the transaction must travel across the world, get processed, and a reply sent back, while traveling through many systems and networks in between.
  • There was no MQ interface to the Card system on the Tandem; TCP only.
  • The Card system used the ISO 8583 message format. At the time there was no Message Broker parser for ISO 8583 available.

All of which would have been sort of fine, except that this was a new client, very skeptical that we could deliver anything at all… much less a near real-time system to handle millions of mission critical messages.

On the other hand, quotes from other integrators added up to millions, or estimated millions, because those were open-ended, time and material offers without any specific promises. (Thus, alas, the file based interface.) Which was good for us at XQuadro. Skeptical as the client was, they cautiously proceeded.


Proof of Concept & v0.1

To get this POC off the ground we had to build several distinct components:

  • TCP interface
  • Custom Message Broker C Parser
  • Actual Broker flows

Custom TCP interface

We had to build a custom TCP interface (written in portable C) that allowed Message Broker to talk TCP to the Card Management System on the other end. Now, there’s nothing wrong with TCP, it works great, but TCP is low-level stuff:

  • It has no transaction coordination between a sender and a receiver.
  • It has no standard way of marking the start and the end of a message.

One can of course build all this in, but we were talking about existing software that no one was willing to change. Let’s put it mildly: we’ve seen better designed protocols.

We had to very carefully design this code and test it all the way to exhaustion. And we did.

Custom Message Broker Parser

In more recent versions, Message Broker does support parsing ISO 8583, in a way. But this was a while back and that support was not available. So we had to create a custom message parser, which wouldn’t have been a big deal except that we had to parse a very awkward format. (Anyone who has dealt with 8583 will know what this means.) It was actually a variation of 8583, and the documentation was not too clear, not too correct, and not too complete.

So we had to work around it and build safeguards to catch any deviations of the real wire data from the documentation.

And we did.

Our parser worked. In reality the wire data did not quite match the documentation, so the code had to be fixed a bit to match reality. But it worked.

Broker Flows

We had to build these Broker flows very carefully. We did an enormous amount of testing to ensure the near real-time performance that was required, even under heavy load.

POC Results

It actually worked. The project was a success. To our client’s delight and politely subdued surprise.


V1.0 was deployed and worked beautifully in production. There were a fair number of issues, to no one’s surprise, but logging saved the day:

  1. TCP connections drop, packets are lost.
  2. Card system sends messages that do not match documentation.

Keep in mind, this was still a new client. Some rapport had been established, but not that much at the time. Logging helps a lot. Since then, we have earned our client’s continued trust and secured a long-term business partnership.


Fast Forward: Today

Message Broker has been upgraded and is now IBM Integration Bus. The installation has grown quite a bit:

  • More than 100 Message Flows.
  • An average of 20 million messages per day, with peaks quite a bit higher. Naturally, these messages are not uniformly distributed during the day: the daily average alone works out to over 230 messages per second, and peaks are way above that.
  • No unplanned downtime in 4 years (touch wood :).