Even though the Wall Of Fame might look like a fancy oversized gadget to bring along on our events, there’s a lot going on in the background that we think you’ll find interesting.
We built a high-performance, low-latency ecosystem that controls the Wall Of Fame while still allowing us to add features easily. But it took some time and effort to get to the point we’re at right now.
General idea: streaming image frames
The whole concept of the Wall Of Fame is based upon continuously sending (read: streaming) images or frames to the keyboards. This automatically made us think in a certain way when setting up our architecture. We went for implementing streams through MQTT and later on RabbitMQ. Rome wasn’t built in one day, and neither was the Wall Of Fame. It is still an evolving setup, but for now we are happy with what we’ve managed to produce.
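To make the streaming idea a bit more concrete, here is a minimal sketch of how a frame could be flattened into a byte payload before being published over MQTT. The 3-bytes-per-pixel layout and the class name are illustrative assumptions, not the actual Wall Of Fame wire format.

```java
// Sketch: packing a small RGB frame into a byte payload that could be
// sent as the body of an MQTT message. The layout (3 bytes per pixel,
// row-major) is an assumption for illustration only.
public class FramePayload {

    // Flatten a width x height RGB frame (one int per pixel, 0xRRGGBB)
    // into a byte array: 3 bytes per pixel, row-major order.
    public static byte[] pack(int[] pixels, int width, int height) {
        byte[] payload = new byte[width * height * 3];
        for (int i = 0; i < width * height; i++) {
            payload[i * 3]     = (byte) ((pixels[i] >> 16) & 0xFF); // red
            payload[i * 3 + 1] = (byte) ((pixels[i] >> 8) & 0xFF);  // green
            payload[i * 3 + 2] = (byte) (pixels[i] & 0xFF);         // blue
        }
        return payload;
    }
}
```

Whatever the exact format, the point is the same: every frame becomes one self-contained message, so subscribers can simply render whatever arrives.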
Initial setup with a controller and orchestrator
Although our current setup is pretty solid and fast, we didn’t get there without hassle and many refactors. The initial backend consisted of only two applications. The first program is called the Keyboard Controller and contains all the code written for and running on our Raspberry Pi 3s. The images and frames were sent by the Orchestrator, a Java program handling images and GIFs. The following image illustrates the very first basic setup.
At first the Keyboard Controller was a mixture of C and C++ code, receiving messages sent over MQTT and parsing them for output on the keyboards. It was the hardest piece of code to write, as I had to get back into the C++ mindset. Coming from mainly programming in Java, it took some time to get back up to speed. It was not my first time writing C++ code, but it was the most complex program I had written in C++.
The Orchestrator was the very first Java application that was added to our system. It was first created using Spring Boot 1.5.2, which goes to show how long this has been part of our setup (don’t worry, we’re getting ready to update from our current 1.5.12 to Spring Boot 2 soon!).
At first there wasn’t a lot of functionality present. It exposed a controller serving data about present keyboards and it could receive one image at a time to send to the Keyboard Controller. Apart from the controller, it also used the Thymeleaf templating engine to allow interaction through a simple UI.
The website consisted of two parts: on the left, a list of connected but not yet configured keyboards, and on the right, a grid representing the actual wall. Dragging a keyboard from left to right meant it was configured and would receive images. We started out with only two keyboards and a single Raspberry Pi, and although not amazing, it did its job.
Improvements and current state
The setup we created worked, but it was not at all elegant or flexible. We could only send one image at a time using the Thymeleaf-served UI. We had way too many crazy ideas for using the wall, so we had to think about a better setup that could handle all of them.
The first and biggest refactor was improving the Keyboard Controller. Previously we had only one Raspberry Pi, but we wanted more! More keyboards meant more Raspberry Pis, and so we hit the first obstacle: “what about synchronising?“. This was a major issue: when streaming images and animations, every keyboard needs to show the correct frame or you get distorted images.
* We tried using a USB hub made for controlling 36 USB ports so we could continue using a single Raspberry Pi. Unfortunately this was not the way to go: the hub provided too little bandwidth, causing the keyboards to get badly out of sync.
* Our next idea was using multiple Raspberry Pis with barriers to synchronise the sending of a frame to the keyboards. We were thinking about linking all the Raspberry Pis together using the GPIO pins and sending pulses when the next frame should be displayed. Although possible, it is a difficult route to implement.
* Finally, our current implementation that works just fine: don’t care about synchronisation (sort of)! Every Raspberry Pi handles images at its own pace, without any knowledge of the others. It receives messages through MQTT and stores them on an internal queue. When it is ready to process an image, it takes the last image it received and clears the queue. This proved to be the best approach: it is simple and it works! Even if a certain Pi is running behind because of a frame that takes a long time, it will skip to the last frame it received in the meantime and ignore all the other frames, which would be irrelevant anyway.
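The skip-to-latest strategy is simple enough to sketch in a few lines. The class and method names below are illustrative (the real Keyboard Controller is written in C++), but the logic is the same: buffer incoming frames, and when it’s time to render, keep only the newest one.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of the "take the newest frame, drop the rest" approach each
// Raspberry Pi uses. Names are illustrative; the real code is C++.
public class FrameBuffer<T> {
    private final Deque<T> queue = new ArrayDeque<>();

    // Called whenever a frame arrives over MQTT.
    public synchronized void offer(T frame) {
        queue.addLast(frame);
    }

    // Called when the Pi is ready to render: return the most recent
    // frame and discard everything older, so a slow Pi catches up
    // instead of falling further behind. Returns null when empty.
    public synchronized T takeLatest() {
        T latest = queue.peekLast();
        queue.clear();
        return latest;
    }
}
```

Because stale frames are irrelevant anyway, dropping them costs nothing visually while keeping every Pi close to real time.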
So we got synchronisation down pretty easily. Apart from that, we transitioned from the mixed C/C++ code to pure C++. It took a while, but it made development a bit easier. The next step was adding more features and handling network issues and disconnecting keyboards. After a long process of trial and error, we ended up with a stable, performant C++ library that has proven fast and reliable.
Handling more keyboards meant that we had to reassess the Orchestrator. For example, configured keyboards were not saved and would be reset when restarting the Orchestrator. With two keyboards that is not an issue, but with 54 keyboards it’s a lot more annoying to spend ten minutes dragging everything to its correct cell in the grid.
We developed an automatic save mechanism that preserves the state of the wall between restarts. As soon as a keyboard registers itself, it is placed back in its previous spot.
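A minimal version of such a save mechanism could map keyboard IDs to grid cells and persist that map to disk. Using `java.util.Properties` and the ID format below are assumptions for this sketch; the real Orchestrator may store its state differently.

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.Properties;

// Sketch of a wall-state save mechanism: keyboard IDs mapped to grid
// cells, persisted between restarts. The storage format is assumed.
public class WallState {
    private final Properties positions = new Properties();

    // Record that a keyboard was dragged to grid cell (row, col).
    public void place(String keyboardId, int row, int col) {
        positions.setProperty(keyboardId, row + "," + col);
    }

    // On re-registration, look up the keyboard's previous spot;
    // returns {row, col}, or null if the keyboard is unknown.
    public int[] previousSpot(String keyboardId) {
        String value = positions.getProperty(keyboardId);
        if (value == null) return null;
        String[] parts = value.split(",");
        return new int[]{Integer.parseInt(parts[0]), Integer.parseInt(parts[1])};
    }

    public void save(File file) throws IOException {
        try (OutputStream out = new FileOutputStream(file)) {
            positions.store(out, "Wall Of Fame layout");
        }
    }

    public void load(File file) throws IOException {
        try (InputStream in = new FileInputStream(file)) {
            positions.load(in);
        }
    }
}
```

With something like this in place, a restart costs nothing: each keyboard snaps back into its cell the moment it registers.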
This gave us better usability, but we still didn’t have the extensibility down. This is where RabbitMQ comes into the picture. Frames could now be sent over RabbitMQ, after which the Orchestrator would resize and distribute the correct pixels to the correct keyboards. Using this approach, we could add a lot of features simply by adding extra projects that send frames over RabbitMQ.
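The distribution step boils down to slicing one full-wall frame into per-keyboard tiles. Here is a hedged sketch of that slicing; the pixel layout, tile sizes, and class name are assumptions, and the real Orchestrator also resizes incoming frames before cutting them up.

```java
// Sketch of cutting a full-wall frame into per-keyboard tiles.
// The frame is assumed to be one int per pixel, row-major.
public class FrameSplitter {

    // Extract the tile at grid cell (row, col) from a frame that is
    // wallWidth pixels wide, where each keyboard shows a
    // tileWidth x tileHeight region.
    public static int[] tile(int[] frame, int wallWidth,
                             int tileWidth, int tileHeight,
                             int row, int col) {
        int[] tile = new int[tileWidth * tileHeight];
        for (int y = 0; y < tileHeight; y++) {
            for (int x = 0; x < tileWidth; x++) {
                int srcX = col * tileWidth + x;   // column offset into the wall
                int srcY = row * tileHeight + y;  // row offset into the wall
                tile[y * tileWidth + x] = frame[srcY * wallWidth + srcX];
            }
        }
        return tile;
    }
}
```

Each tile then goes out as its own message to the Raspberry Pi driving that keyboard, which is what lets the wall behave like one big display.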
The Animation Processor was the first application added to the backend. It is written in Scala and can load a GIF, chop it up into separate frames, and send them over RabbitMQ. This worked really well, but it no longer fits the latest version of the architecture, so it is now only used for testing purposes.
As we have multiple applications streaming images, we need an application that decides which stream takes precedence. This is why our Director came into existence. Every service has to register itself with the Director using a unique ID and a name. This name is displayed on a UI where the priority can be set. Apart from registering, applications also have to send a heartbeat at least every two minutes to remain active.
The Director is the key application that opened up a lot of opportunities for adding services, or as we call them, broadcasters. As long as you pass the Director readable images, they can be shown on the wall. Adding new functionality takes hardly any effort: you only need to register your broadcaster, send heartbeats, and stream images in the correct format.
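The Director’s bookkeeping can be sketched as a map from broadcaster ID to last-heartbeat timestamp, with a two-minute liveness window. The method names and the way time is passed in are illustrative assumptions, not the Director’s actual API.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the Director's liveness tracking: broadcasters register,
// send heartbeats, and are considered dead after two minutes of
// silence. Names and structure are illustrative.
public class Director {
    private static final long TIMEOUT_MS = 2 * 60 * 1000; // two minutes

    // broadcaster ID -> timestamp (ms) of the last heartbeat
    private final Map<String, Long> lastHeartbeat = new HashMap<>();

    public void register(String broadcasterId, long nowMs) {
        lastHeartbeat.put(broadcasterId, nowMs);
    }

    // Ignores heartbeats from broadcasters that never registered.
    public void heartbeat(String broadcasterId, long nowMs) {
        lastHeartbeat.computeIfPresent(broadcasterId, (id, t) -> nowMs);
    }

    public boolean isAlive(String broadcasterId, long nowMs) {
        Long last = lastHeartbeat.get(broadcasterId);
        return last != null && nowMs - last <= TIMEOUT_MS;
    }
}
```

On top of a structure like this, picking the active stream is just a matter of taking the highest-priority broadcaster that is still alive.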
When going to events or job fairs, we mainly use the Playlist broadcaster. This broadcaster reads images and GIFs in a specific order and loops over them, sending the result to the wall. If the Wall Of Fame is set up at the office, it functions as a giant clock showing the current time and date. And last but not least, a colleague made it possible to use the Wall Of Fame as a live Twitter feed!
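The looping behaviour of a playlist broadcaster amounts to a cyclic iterator over its items. This sketch leaves out the interesting part (decoding images and GIFs into frames) and just shows the wrap-around; the class name is illustrative.

```java
import java.util.List;

// Sketch of a playlist's looping behaviour: step through a fixed list
// of items and wrap around at the end. Decoding the items into frames
// is deliberately omitted.
public class Playlist<T> {
    private final List<T> items;
    private int index = 0;

    public Playlist(List<T> items) {
        this.items = items;
    }

    // Return the next item, restarting from the first after the last.
    public T next() {
        T item = items.get(index);
        index = (index + 1) % items.size();
        return item;
    }
}
```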
The Wall Of Fame has gone through quite some iterations to become the masterpiece it is today. We’ve overcome some obstacles, but the result is a flexible and responsive setup. I included a final image that shows the entire setup as it is today.