Modernizing a legacy IoT device: AMQP, but how?
September 01, 2016 3:54
This is a very common design: every professional IoT solution I have seen recommends it, and for good reason. When load increases (you have more producers, or each of them produces more traffic and/or bigger messages), you scale out: you add multiple, concurrent (competing) consumers.
You can do this easily if you decouple producers and consumers by inserting something that "buffers" data between them. The benefits are increased availability and load leveling.
An easy way to achieve this is to place a simple, durable data store in the middle. The design is clean: producers put data into the store, consumers take it out and process it.
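As a minimal sketch of this idea, here is the producer/consumer decoupling with SQLite standing in for the durable store (in a real deployment this would be a broker or a queue service; the table and function names below are illustrative, not from any library):

```python
import sqlite3

def make_store(path=":memory:"):
    # The "something in the middle": a durable table of pending messages.
    db = sqlite3.connect(path)
    db.execute("CREATE TABLE IF NOT EXISTS messages "
               "(id INTEGER PRIMARY KEY AUTOINCREMENT, body TEXT)")
    return db

def produce(db, body):
    # Producers only append; they never wait for a consumer.
    db.execute("INSERT INTO messages (body) VALUES (?)", (body,))
    db.commit()

def consume(db):
    # Consumers pull at their own pace. Competing consumers would each
    # claim rows; here this is simplified to delete-after-read.
    row = db.execute(
        "SELECT id, body FROM messages ORDER BY id LIMIT 1").fetchone()
    if row is None:
        return None
    db.execute("DELETE FROM messages WHERE id = ?", (row[0],))
    db.commit()
    return row[1]
```

If the consumers fall behind (or go down), messages simply accumulate in the store instead of being lost, which is exactly the availability and load-leveling benefit described above.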
You have three options:
- Insert a "stub/proxy" on the server (or cloud) side. Devices talk to this stub using their native protocol (a custom TCP protocol, HTTP + XML payload, whatever). This stub usually scales well: you can code it as a lightweight web server (an Azure Web Role, for example), throw in more machines if needed, and let a load balancer (like HAProxy) distribute the load. It is important that this layer acts only as a "collector": take data, do basic validation, log, and throw it into a queue for processing. No processing of messages here, no SQL inserts, so we do not block, and we get rate leveling, survive bursts, etc.
This is the only viable solution if you cannot touch the device code.
- Re-write all the code on the device that "calls out" using the legacy protocol/wire format, and substitute it with something that talks a standard supported by various brokers, like AMQP or MQTT. In this way, you can talk directly to the broker (Azure Event Hubs, IoT Hub, RabbitMQ, ...), without the need for a stub.
This solution is viable only if you fully control the device firmware.
- Insert a "broker" or gateway on the device, redirect all existing TCP traffic to it (using local sockets), and move the file manager/watcher to the device. Have multiple queues, based on transmission priority. Have connection-sensitive policies (transmit only the high-priority queue under GPRS, for example). Also provide a way for new code to call the broker directly, so the broker itself stores data and events to files and handles their lifetime. Then use AMQP as the message transport: to the external observer (the queue), the device talks AMQP natively.
This is an "in-the-middle" solution: you can code/add your own programs to the device, but you do not have to change the existing software.
Plus, it makes it possible to implement some advanced controls over data transmission: a "built-in" way to transmit files reliably, messages with different priorities, transmission policies based on time/location/connection status, and so on.
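To make the third option concrete, here is a toy sketch of the gateway's priority queues with a connection-sensitive policy. All names here (`Gateway`, `HIGH`/`LOW`, the `"gprs"` connection label) are invented for illustration; a real implementation would back the queues with files, as described above:

```python
from collections import deque

HIGH, LOW = 0, 1  # transmission priorities

class Gateway:
    """On-device gateway holding one queue per priority."""

    def __init__(self):
        self.queues = {HIGH: deque(), LOW: deque()}

    def enqueue(self, message, priority=LOW):
        self.queues[priority].append(message)

    def drain(self, connection):
        # Connection-sensitive policy: under GPRS, transmit only the
        # high-priority queue; on a better link, drain everything.
        allowed = [HIGH] if connection == "gprs" else [HIGH, LOW]
        sent = []
        for prio in allowed:
            while self.queues[prio]:
                sent.append(self.queues[prio].popleft())
        return sent
```

With this shape, alarms queued at `HIGH` go out even on a GPRS link, while bulk data waits in the `LOW` queue until a cheaper connection is available.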
Some libraries that can be used to implement the durable store/queue on the device:
- BigQueue (JVM): based on memory-mapped files. Tuned for size, not reliability, but claims to be persistent and reliable.
- Rhino.Queues.Storage.Disk (.NET): Rhino Queues are an experiment from the creator of the (very good) RavenDB. There is a follow-up post on persistent transactional queues, as a generic base for durable, DB-based engines.
- Apache Mnemonic (JVM): "an advanced hybrid memory storage oriented library ... a non-volatile/durable Java object model and durable computing"
- Imms (.NET): "is a powerful, high-performance library of immutable and persistent collections for the .NET Framework."
- Akka streams + Akka persistence (JVM): two Akka modules, reactive streams and persistence, make it possible to implement a durable queue with a minimal amount of code. A good article can be found here.
- Redis (any language): the famous in-memory DB persists via periodic snapshots by default. You need to forgo snapshotting and use the append-only file (AOF) alternative, which is the fully-durable persistence strategy for Redis.
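For the Redis option, switching to AOF is a configuration change; a sketch of the relevant `redis.conf` directives (tune the fsync policy to your durability needs):

```conf
# redis.conf: use the append-only file instead of snapshots
appendonly yes

# fsync policy: "always" is safest (fsync on every write),
# "everysec" loses at most one second of data but is much faster
appendfsync everysec

# disable RDB snapshotting entirely
save ""
```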