Queuing

The good thing about the queuing system in Iguana 6 is that it is very fast. What isn’t so good is that the design has some structural scaling issues.

This is the scaling issue:

  • All the queued messages and log messages go into a single log file per day.

  • This means that if you have, say, 100 channels with 100 queues, then on average less than 1% of the log file relates to any one channel.

    • We call this a sparse data structure - i.e. there is a lot of space in between the entries for any one channel.

  • In Iguana 6 this created a performance issue when reading the queue.

    • To solve this problem we used a database index implemented with SQLite (the read pattern is sketched below).
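
Here is a minimal sketch of why that read pattern hurts. It assumes a hypothetical line-oriented daily log where every entry is tagged with its channel id; none of these names or formats come from Iguana itself.

```python
# Hypothetical daily log format: "<channel_id>\t<payload>\n" per entry.
def read_channel_messages(log_path: str, channel_id: str) -> list[str]:
    """Scan the whole daily log to pull out one channel's messages.

    With ~100 channels, roughly 99% of the lines read here are thrown
    away -- that wasted I/O is what the SQLite index in Iguana 6 was
    added to work around.
    """
    wanted = []
    with open(log_path, encoding="utf-8") as log:
        for line in log:
            entry_channel, _, payload = line.partition("\t")
            if entry_channel == channel_id:
                wanted.append(payload.rstrip("\n"))
    return wanted
```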

In IguanaX we rethought this design and applied the ideas of safe by design. The new design means:

  • We replace the one monolithic per-day log/queue file with many smaller files, which are broken down:

    • By neuron

      • And by batches of messages - so for a given day we might see a few dozen files for a given neuron (see the sketch below).
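
The sketch below shows the general shape of that layout. The directory structure, file names and batch size are assumptions made up for illustration, not IguanaX’s actual on-disk format.

```python
import os
from datetime import date

BATCH_SIZE = 1000  # messages per batch file (assumed for illustration)

def batch_file_path(root: str, neuron: str, day: date, batch_no: int) -> str:
    """Each neuron gets its own directory, and each day gets a handful
    of numbered batch files instead of one monolithic log."""
    return os.path.join(root, neuron, day.isoformat(), f"batch_{batch_no:04d}.log")

def append_message(root: str, neuron: str, day: date, seq: int, message: str) -> None:
    """Reading one neuron's queue now only touches that neuron's files,
    so the data is dense and no secondary index is needed."""
    path = batch_file_path(root, neuron, day, seq // BATCH_SIZE)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "a", encoding="utf-8") as f:
        f.write(message + "\n")
```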

So by breaking things into smaller packets of data we avoid the sparse data structure of the original design. And by avoiding the sparseness problem altogether we don’t need the extra complexity of the SQLite index to solve it.

Thus we have scalability built into the design with much less complexity.

So one final question is: how do we guarantee data integrity if the power is cut, and yet still optimize the speed of the system by batching file I/O writes?

Well, we only write periodically to each neuron’s individual queue/log files, but we write frequently to a single journal file for the whole system. In the event that power is cut, we can load the journal file and recover any data which has not yet been saved to the individual neuron queue/log files.
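
Here is a minimal sketch of that journaling idea: every message is appended and fsynced to a single journal file right away, while the per-neuron files are written in batches. The record format and recovery routine are illustrative assumptions, not IguanaX’s actual implementation.

```python
import os

class Journal:
    """A single append-only journal shared by all neurons (assumed format)."""

    def __init__(self, path: str):
        self.file = open(path, "a+", encoding="utf-8")

    def record(self, neuron: str, message: str) -> None:
        # Flush and fsync every record so it survives a power cut even
        # if the batched per-neuron write has not happened yet.
        self.file.write(f"{neuron}\t{message}\n")
        self.file.flush()
        os.fsync(self.file.fileno())

    def replay(self) -> list[tuple[str, str]]:
        # On startup after a crash, re-read the journal so messages not
        # yet saved to per-neuron queue/log files can be recovered.
        self.file.seek(0)
        return [
            (neuron, payload.rstrip("\n"))
            for neuron, _, payload in (line.partition("\t") for line in self.file)
        ]
```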