Meet our latest ToThePoint Machine Learning interns
Meet Lukas and Jorden, Deep Image Recognition interns at ToThePoint — describing their work so far as “not made to be pleasing to the eye, but rather to work fast.”
Sounds quite… To The Point.
So they’re fitting in rather nicely here already, if you ask me.
Pimp your Machine Learning Arcade with Deep Image Recognition
Using deep image recognition, they will add a valuable asset to our ToTheArcade fun-project in the Machine Learning department.
We are currently roaming the world sharing our Machine Learning findings where we go. So far, we’ve learned a fair amount from our fun side-projects that already proved very useful to illustrate certain case studies for our clients. But now we’re looking for ways to learn even more.
Push things further than making real-time predictions using only button presses
Just focusing on the fun side of things in this short recap: we’ve got a decent algorithm going that can predict which game is currently being played, purely based on the buttons that are being pressed. An IoT device in its purest sense – doing exactly what we want it to do: make predictions.
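To make the idea concrete, here is a minimal sketch (not the actual ToTheArcade model) of how button presses alone can identify a game: different games produce different button-usage patterns, so even a simple nearest-centroid classifier over button frequencies can make a decent guess. The button layout, game names and centroid values below are all made-up assumptions for illustration.

```python
# Illustrative sketch: guess the game from button-usage frequencies.
# GAMES, CENTROIDS and the 6-button layout are assumptions, not the
# real ToTheArcade setup.
import numpy as np

GAMES = ["pacman", "mortalkombat"]
# Assumed average fraction of time each button is pressed, per game:
# [up, down, left, right, punch, kick]
CENTROIDS = np.array([
    [0.3, 0.3, 0.2, 0.2, 0.0, 0.0],   # Pac-Man: directions only
    [0.1, 0.1, 0.2, 0.2, 0.2, 0.2],   # Mortal Kombat: lots of attacks
])

def predict_game(presses: np.ndarray) -> str:
    """presses: (n_samples, 6) 0/1 matrix of button states over time."""
    freq = presses.mean(axis=0)                       # usage profile
    dists = np.linalg.norm(CENTROIDS - freq, axis=1)  # distance per game
    return GAMES[int(np.argmin(dists))]

# A session full of punch/kick presses should look like Mortal Kombat.
session = np.zeros((100, 6))
session[:, 4:] = 1  # punch and kick held constantly
print(predict_game(session))  # mortalkombat
```

The real model is of course more sophisticated, but the principle is the same: the controller input stream by itself already carries a strong signal about which game is on screen.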
It’s like having a leprechaun between your legs
What it does, in essence: it’s like having a tiny leprechaun sitting on your lap as you drive your car. It observes only you (your eye movements, your hand-to-steering-wheel movements, your pedal operations and other biometrics) to predict where you are travelling. So, without actually looking OUTSIDE of the car, it can predict (with a tiny margin of error, of course) where you are travelling to, as it learns your favourite destinations depending on the hour of the day and so on.
We want to give the leprechaun an extra pair of eyes
Now we’re looking to push things forward.
Not only do we want to predict which game is being played based on the buttons being pressed; we also want it to learn to predict which character is being played (in Mortal Kombat, for example).
So in layman’s terms: we want to give our leprechaun, still sitting on your lap, a dashcam to strap to the back of his head, so he can use the images it generates to make even more accurate predictions.
We are, in effect, building two-factor machine learning authentication.
Current state of affairs of the deep image learning internship
Jorden and Lukas will be running their neural network (using Keras and TensorFlow) on an NVIDIA Jetson Nano.
The NVIDIA® Jetson Nano™ Developer Kit is a small, powerful computer that lets you run multiple neural networks in parallel for applications like image classification, object detection, segmentation and speech processing. All in an easy-to-use platform that runs on as little as 5 watts.
The main advantage of using the NVIDIA Jetson Nano is that it can run offline, so you don’t always need an internet connection. That means we can basically have it run (and learn) day and night to improve our neural network.
It all starts with gathering data. So they installed a (perfectly legal 👀) emulator to run the games. On top of that, they run a custom script to take screenshots, label them and visualise the data further. Pretty basic stuff so far.
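The labelling step above can be sketched in a few lines. Assuming (purely for illustration) that each screenshot is saved as “<game>_<number>.png”, the label can be recovered from the filename, and a quick count per game helps spot class imbalance before training. The filename convention and helper names are assumptions, not the interns’ actual script.

```python
# Hedged sketch of the labelling bookkeeping. The capture itself would
# come from the emulator; the "<game>_<number>.png" naming convention
# is an assumption made for this example.
from pathlib import Path
from collections import Counter

def label_from_filename(path: Path) -> str:
    """'mortalkombat_001.png' -> 'mortalkombat'"""
    return path.stem.rsplit("_", 1)[0]

def summarise(shots) -> Counter:
    """Count screenshots per game, to spot class imbalance early."""
    return Counter(label_from_filename(p) for p in shots)

shots = [Path("mortalkombat_001.png"), Path("mortalkombat_002.png"),
         Path("pacman_001.png")]
print(summarise(shots))  # Counter({'mortalkombat': 2, 'pacman': 1})
```

Keeping the label in the filename (or the folder name) is handy because Keras utilities can later build a labelled dataset straight from the directory layout.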
Next, they are building a neural network in Keras, running on TensorFlow.
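For readers curious what such a network might look like, here is a minimal Keras convolutional network for classifying emulator screenshots. The input size, layer sizes and class count are illustrative assumptions, not the interns’ actual architecture.

```python
# Minimal Keras CNN sketch for screenshot classification.
# Shapes and NUM_CLASSES are assumptions for illustration only.
from tensorflow import keras

NUM_CLASSES = 4  # assumed: number of games (or characters) to recognise

model = keras.Sequential([
    keras.layers.Input(shape=(128, 128, 3)),       # downscaled screenshot
    keras.layers.Conv2D(16, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(32, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

A compact network like this is also a realistic fit for the Jetson Nano, where memory and power budgets are tight.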
Who knows what’s next?
We all support Lukas and Jorden in their quest to knock our Machine Learning Master Kevin Smeyers off the throne – or at least make some sort of impression on us beyond: meh.
Good luck guys!