Neurorobot on Wheels
Hey everybody, only two weeks have passed but I have so much to update you on. Earlier this month I was at CRISPRcon in the lovely city of Boston (pictured above), learning about the hopes and fears presented by emerging CRISPR technologies, and this week our Neurorobot has sprouted legs!
Or well… Wheels.
This new version of the prototype is as yet unnamed, but has been updated to roll around with the help of two DC brush motors driven by an Arduino controller. That controller has been cleverly interfaced with the WiFi chip so that I can both receive video data from the bot and send it commands on the same channel, as demonstrated in this little demo video of me manually controlling it from my laptop:
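For the curious, here’s roughly what one of those manual commands looks like from the laptop side. This is just a minimal sketch: the IP address, port, and command strings are placeholders, not the bot’s actual protocol.

```python
# Minimal sketch of sending a drive command to the bot over WiFi.
# The address, port, and command names below are placeholders.
import socket

BOT_ADDRESS = ("192.168.4.1", 8080)  # hypothetical IP/port of the WiFi chip

def send_command(command: str) -> None:
    """Open a connection to the bot and send a single text command."""
    with socket.create_connection(BOT_ADDRESS, timeout=2) as conn:
        conn.sendall(command.encode("ascii"))

# e.g. drive both motors forward for a moment, then stop
send_command("FORWARD")
send_command("STOP")
```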
And for those of you wondering about the important question of “does it fit in a box?”, the answer is “absolutely”:
Streaming video via WiFi means my laptop can do the computational heavy lifting while the little Neurorobot just has to carry out the commands it’s sent. In this way, the bot itself acts a lot like our sensory organs and muscles, sending information to and receiving instructions from the laptop, which plays the role of the central nervous system.
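The laptop-side “nervous system” loop looks roughly like the sketch below: pull frames from the bot’s video stream, do the heavy processing locally, then (eventually) send a command back. The stream URL here is hypothetical; I’m assuming the WiFi chip serves something OpenCV can read, like an MJPEG stream.

```python
# Rough sketch of the laptop-side loop: read "sensory input" frames from the
# bot's camera stream and process them here. The URL is a placeholder.
import cv2

STREAM_URL = "http://192.168.4.1:8080/stream"  # hypothetical video endpoint

cap = cv2.VideoCapture(STREAM_URL)

while True:
    ok, frame = cap.read()  # one frame of "sensory input" from the bot
    if not ok:
        break
    # ...heavy computation (tracking, recognition, etc.) happens here...
    cv2.imshow("Neurorobot eye", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```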
Here’s an example of some basic visual tracking the bot and laptop brain can do together:
It’s jerky and slow to adjust, but is definitely on its way to becoming a master object tracker. Right now the Neurorobot is going through a slow constant loop of “where is the object,” “where do I move,” “I’ve moved a little,” “where is the object”…
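In code, that loop boils down to two small steps repeated over and over: find the object in the current frame, then pick a movement command. The sketch below assumes simple HSV colour thresholding for a red target and a crude left/forward/right decision; the threshold values and command names are illustrative, not the bot’s actual settings.

```python
# Sketch of the "where is the object / where do I move" loop, assuming
# colour-threshold tracking of a red target. Values are illustrative only.
import cv2

def find_red_centroid(frame):
    """'Where is the object?' -- return the (x, y) centre of the largest red blob, or None."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # red wraps around the hue axis, so combine two ranges
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))
    # OpenCV 4 return signature: (contours, hierarchy)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None
    return (int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"]))

def decide_command(centroid, frame_width):
    """'Where do I move?' -- turn toward the object, or creep forward if it's roughly centred."""
    if centroid is None:
        return "STOP"
    x, _ = centroid
    if x < frame_width * 0.4:
        return "LEFT"
    if x > frame_width * 0.6:
        return "RIGHT"
    return "FORWARD"
```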
Through this primitive thought process, our bot can effectively (albeit slowly) wiggle itself towards objects of interest, like this red cup:
Right now I’m working on teaching it more than just its primary colours: having it use a fine-tuned Inception-V3 network to identify and locate my hands, so I can guide it around the office laptop-free. For more information on what fine-tuning means in the context of machine learning, check out: https://flyyufelix.github.io/2016/10/03/fine-tuning-in-keras-part1.html
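In the spirit of that tutorial, a rough Keras sketch of fine-tuning Inception-V3 for a small custom classifier (say, “hand” vs. “not hand”) looks something like this. The class count, learning rate, and number of frozen layers are placeholders, not the values I’m actually using.

```python
# Rough sketch of fine-tuning Inception-V3 in Keras for a small custom
# classifier. Hyperparameters and class count below are placeholders.
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import SGD

NUM_CLASSES = 2  # hypothetical: hand vs. background

# Start from ImageNet weights and drop the original 1000-class top
base = InceptionV3(weights="imagenet", include_top=False, input_shape=(299, 299, 3))

# Bolt a new, small classification head onto the pretrained base
x = GlobalAveragePooling2D()(base.output)
x = Dense(256, activation="relu")(x)
outputs = Dense(NUM_CLASSES, activation="softmax")(x)
model = Model(inputs=base.input, outputs=outputs)

# Freeze the early layers so only the new head (and the last few
# Inception blocks) get updated on the new data
for layer in base.layers[:-30]:
    layer.trainable = False

model.compile(optimizer=SGD(learning_rate=1e-3, momentum=0.9),
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# model.fit(train_images, train_labels, epochs=5)  # train on your own labelled frames
```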