Quicker and Smarter: Neurorobot on the hunt
Though it feels like just last week that I landed here in Michigan, it's almost time for me to head back home. My work on the project is wrapping up, and while there's still so much to be done, I'm confident that I'm leaving behind a good framework.
These past few weeks have been so eventful! I was home sick for a week with the flu, which sucked, but thankfully I could just take my work home with me. The Ann Arbor Art Fair (pictured above) was incredible; half of downtown was covered in tents full of beautiful creations from all around the country. Completely unrelated, half my paycheck now seems to be missing.
If you were following my last blog post, you may remember that one of the biggest hurdles I had to overcome was video delay between the bot and my laptop; big delay meant bad reaction speed meant sad bot.
By changing libraries, adding multithreading, and doing extensive testing, we've now gotten the delay to just under 0.4 seconds, less than half of what it was with the old versions of the code! This may not seem too exciting, but it means the bot now reacts to things much more smoothly and calmly than before, as demonstrated in this video of it hunting down a ferocious bag of chips:
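For the curious, the heart of the latency fix is the multithreading: rather than letting frames pile up in the capture buffer, a background thread constantly drains the camera so the control loop always sees the freshest frame. Here's a minimal Python sketch of the idea using OpenCV; it illustrates the technique, not the bot's actual code:

```python
import threading
import cv2

class FreshestFrame:
    """Grab frames on a background thread so the control loop never
    reads a stale, buffered frame: it always gets the newest one."""

    def __init__(self, src=0):
        self.cap = cv2.VideoCapture(src)
        self.frame = None
        self.lock = threading.Lock()
        self.running = True
        threading.Thread(target=self._reader, daemon=True).start()

    def _reader(self):
        # Keep draining the camera; only the latest frame is kept.
        while self.running:
            ok, frame = self.cap.read()
            if ok:
                with self.lock:
                    self.frame = frame

    def read(self):
        with self.lock:
            return self.frame

    def stop(self):
        self.running = False
        self.cap.release()

# Usage: cam = FreshestFrame(0); frame = cam.read()
```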
Another new accomplishment is neurons! Spiking neurons, to be specific: nothing like your regular generic-brand neural-net neurons. These ones (built on the Izhikevich model) are much prettier and react to stimuli very organically, making graphs that resemble what you'd see on your lab oscilloscope:
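For anyone wanting to play along at home, the Izhikevich model is refreshingly simple: two coupled equations plus a reset rule. Here's a minimal Python sketch using the standard "regular spiking" parameters from Izhikevich's 2003 paper (the values on the bot may be tuned differently):

```python
# Izhikevich spiking neuron: v is the membrane potential (mV),
# u is a recovery variable. "Regular spiking" parameters.
a, b, c, d = 0.02, 0.2, -65.0, 8.0

dt = 0.5            # Euler integration step, ms
v = -65.0           # resting potential
u = b * v

spikes, trace = [], []
for step in range(2000):                       # simulate 1 second
    I = 10.0 if step * dt > 100 else 0.0       # inject current at t = 100 ms
    v += dt * (0.04 * v**2 + 5 * v + 140 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:                              # spike: reset and record
        trace.append(30.0)
        v, u = c, u + d
        spikes.append(step * dt)
    else:
        trace.append(v)
```

Plot `trace` against time and you get exactly those oscilloscope-style spike trains.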
More importantly, these neuron models don't just look good, they also behave really well. As an example, here are just two neurons (listen for the different clicking tones) connected to the motors of the Neurorobot, one activated and one inhibited by the ultrasonic distance sensor:
With just these two neurons, and fewer than twenty lines of code, the Neurorobot can already display cool exploration behaviours, avoiding getting stuck on walls and objects the best it can. Neurons are powerful; that's why creatures like the roundworm can survive with just over 300 of them: it doesn't take a lot to do a lot.
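To show just how little code that takes, here's a rough standalone sketch of the same idea. The hardware hooks are placeholders (the real Neurorobot API is different), but the two-neuron logic has the same shape:

```python
import time, random

# Placeholder hardware hooks so the sketch runs standalone;
# the real bot's sensor and motor API is different.
def read_distance_cm():
    return random.uniform(5.0, 100.0)    # pretend ultrasonic reading

def set_motor_speeds(left, right):
    print(f"motors: L={left} R={right}")

A, B, C, D = 0.02, 0.2, -65.0, 8.0       # Izhikevich "regular spiking"
DT = 0.5                                  # ms

def izhikevich_step(v, u, current):
    """One Euler step; returns (v, u, spiked)."""
    v += DT * (0.04 * v**2 + 5 * v + 140 - u + current)
    u += DT * A * (B * v - u)
    if v >= 30.0:
        return C, u + D, True
    return v, u, False

v_turn = v_go = -65.0
u_turn = u_go = B * -65.0

for _ in range(2000):
    dist = read_distance_cm()
    obstacle = max(0.0, 12.0 - dist / 8.0)    # nearer wall, stronger current
    # The "turn" neuron is excited by close obstacles...
    v_turn, u_turn, turn = izhikevich_step(v_turn, u_turn, obstacle)
    # ...while the "go" neuron is inhibited by them.
    v_go, u_go, go = izhikevich_step(v_go, u_go, 8.0 - obstacle)
    if turn:
        set_motor_speeds(100, -100)   # spike pulse: spin away from the wall
    elif go:
        set_motor_speeds(100, 100)    # spike pulse: cruise forward
    time.sleep(DT / 1000.0)
```

Each spike fires a little pulse of movement, which is exactly why you can hear the neurons as clicking tones in the video.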
Here's another example, in which position neurons, each more sensitive to a different area of the image (left or right), are given the task of finding something big and orange:
Notice how, when the cup is to the left of the bot's camera view, the blue neuron spikes, whereas when it drifts to the right, the green and red neurons start spiking.
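The input to those position neurons can be as simple as colour thresholding. Here's one way it could be done with OpenCV: mask the frame for orange, split the mask into vertical strips, and feed each strip's orange fraction to its neuron as input current (the HSV bounds and scaling below are guesses to illustrate the idea, not the bot's actual values):

```python
import cv2
import numpy as np

# Rough HSV bounds for "orange"; tune for your camera and lighting.
ORANGE_LO = np.array([5, 120, 120])
ORANGE_HI = np.array([20, 255, 255])

def orange_drive(frame, n_regions=3):
    """Return one input current per image region (left to right),
    proportional to how much orange that region contains."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, ORANGE_LO, ORANGE_HI)
    strips = np.array_split(mask, n_regions, axis=1)
    # mask is 0/255, so mean/255 is the fraction of orange pixels.
    return [20.0 * s.mean() / 255.0 for s in strips]
```

Each returned value then becomes the input current for one position neuron, just like the distance reading did in the exploration example above.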
There's still plenty of optimization to be done to make the Neurorobot think and react faster: eventually the entire camera code is going to be rewritten from scratch, and there will be better visualisation of what's going on under the hood. Lights, speakers, a microphone, and a good user interface are all coming soon to a Backyard Brains near you!
Christopher Harris’s Neurorobot prototypes already have a drag-and-drop interface for putting together neurons:
The real goal of this project isn't to have students write a thousand lines of code for the robot to do something interesting, but for them to learn and explore the wonderful behaviour of neurons just by playing around with them: changing parameters, connecting their synapses to different components, and seeing if they can get the bot to wiggle when it sees a shoe, or beep and back away from anything that looks like a vase. And, in keeping with Backyard Brains' tenets, seeing what they will discover.