
Neurorobot Video Transmission In Progress

Hey everybody, it’s your favourite Neurorobot project once again, back with more exciting updates! I went to my first knitting lesson this week at a lovely local cafe called Literati, and attended the Ann Arbor Fourth of July parade dressed as a giant eyeball with keyboards on my arms (I meant to dress as “computer vision,” but I think it ended up looking more like a strange Halloween costume).

Oh wait… Did you want updates on the Neurorobot itself?
Unfortunately, it’s been more snags and surprises than significant progress; one of the major hurdles we have yet to overcome is the video transmission itself. (I did, however, put huge googly eyes on it.)

The video from the Neurorobot first has to be captured and transmitted by the bot itself, then sent flying through the air as radio waves, received by my computer, assembled back into video, loaded into program memory, and processed, and only then can I finally give the bot a command to do something. Every part of this process adds delay, some small, some big, and so far the total comes to about 0.85 seconds.

(A demo of how I measure delay: the difference between the stopwatch visible in the bot’s recording and the one running live on my computer.)
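For a rough sense of where the laptop-side time goes, here’s a minimal sketch (not our actual measurement code, which relies on the stopwatch comparison above) that timestamps the capture/decode and processing stages of the loop; the stream source and the processing step are placeholders:

```python
import time
import cv2  # OpenCV handles decoding and frame processing

FEED_SOURCE = 0  # placeholder: the robot's stream address or a local camera index

cap = cv2.VideoCapture(FEED_SOURCE)

while True:
    t0 = time.perf_counter()
    ok, frame = cap.read()                  # receive + decode one frame
    if not ok:
        break
    t1 = time.perf_counter()

    # stand-in for the real processing (object detection, tracking, etc.)
    blurred = cv2.GaussianBlur(frame, (11, 11), 0)
    t2 = time.perf_counter()

    print(f"capture/decode: {(t1 - t0) * 1000:.1f} ms, "
          f"processing: {(t2 - t1) * 1000:.1f} ms")
```

This only covers the laptop’s share of the 0.85 seconds; the on-board capture and the trip through the air still have to be measured end to end, which is exactly what the stopwatch trick is for.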

Human perception is a finicky subject; in designing websites and applications, it has typically been found that anything up to 100 ms of delay feels “instantaneous,” meaning the user won’t send you angry emails about how slow a button is to click. At 0.85 seconds, however, even if you show the robot a cup or a shoe and tell it to follow it, the object may well leave its view before it has had a chance to react. The user then has a hard time seeing any connection between showing the object and the bot moving towards it, and starts to question whether it’s actually doing anything at all.

Unfortunately, the protocol the WiFi module on our robot uses to send video to the laptop isn’t easy to figure out, but we’ve made sizable progress. We’ve gotten the transmission delay down to 0.28 seconds, though the code that achieves this is three different applications all “duct-taped” together, so there’s still a little room for improvement.
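As a taste of what that duct tape looks like, here’s a hedged sketch of one common way to glue tools together: let ffmpeg decode the stream and pipe raw frames into a Python script. The stream address and resolution below are made up for illustration; they are not our robot’s actual settings.

```python
import subprocess
import numpy as np

# placeholders: the real robot's stream address and resolution differ
STREAM_URL = "udp://192.168.4.1:7060"
WIDTH, HEIGHT = 640, 480

# ffmpeg decodes the incoming stream and writes raw BGR frames to stdout
proc = subprocess.Popen(
    ["ffmpeg", "-i", STREAM_URL,
     "-f", "rawvideo", "-pix_fmt", "bgr24", "pipe:1"],
    stdout=subprocess.PIPE, stderr=subprocess.DEVNULL,
)

frame_bytes = WIDTH * HEIGHT * 3
while True:
    buf = proc.stdout.read(frame_bytes)
    if len(buf) < frame_bytes:
        break
    frame = np.frombuffer(buf, dtype=np.uint8).reshape(HEIGHT, WIDTH, 3)
    # hand the frame off to the tracking code here
```

Every hop like this adds buffering, which is part of why chaining several applications together costs latency.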

I hope to have much bigger updates for my next blog post, but for now here’s a video demo of my newest mug tracking software.

Previous update: https://blog.backyardbrains.com/2018/06/neurorobot-on-wheels/


The Fellows Summer Experience: Tigers Game and July Fourth Parade

One of the most attractive things about a BYB Summer Fellowship is the chance to spend a summer in colorful Ann Arbor. We changed the program name from an internship to a fellowship because of the lasting connections made throughout the summer, and these connections are made possible by the things we all do together! Before we get to some project updates, here’s a little bit about our summer together so far.

Take Me Out To The Ball Game

Last summer, we sponsored a student whose visa required participation in a “cultural appreciation” event, so we piled into a bus and headed over to Comerica Park for some of America’s favorite sport, baseball. It was such a hit, we went again this year! Luckily, Backyard Brains’ signature color (orange!) matches pretty well with the Tigers brand 😉

Fourth of July Parade

Another celebrated BYB Summer Fellowship pastime is the Jaycee’s Fourth of July Parade! Each year, the fellows design and build a costume representing their summer research and wear it as BYB walks in the annual parade! Check out some of the looks from this year:


Meet the Fellows, See the Projects

Catch up with our Fellows! Since our Fellowship started, each fellow has been hard at work on their summer research. Check out these posts introducing each Fellowship research track:

First Progress Reports:

If you’ve been dying for an update on what we’ve been researching, fret no more! Feast your eyes on our first batch of updates!

Second Progress Reports:

Science marches ever onward! The Fellows have kept plugging away on their research in between all the fun and games, and here are their newest updates!

Conclusions:

The summer is winding down, and with it our Fellowship. While scientific exploration is never really finished, here are some wrap-ups from our Fellows on the projects they have devoted their inquiry to over the past weeks.


Neurorobot on Wheels

Hey everybody, only two weeks have passed but I have so much to update you on. Earlier this month I was at CRISPRcon in the lovely city of Boston (pictured above), learning about the hopes and fears presented by emerging CRISPR technologies, and this week our Neurorobot has sprouted legs!

Or well… Wheels.

This new version of the prototype is as yet unnamed, but it has been upgraded to roll around with the help of two brushed DC motors driven by an Arduino controller. That controller has been cleverly interfaced with the WiFi chip so that I can both receive video data from the bot and send it commands over the same channel, as demonstrated by this little video of me manually controlling it from my laptop:

And for those of you wondering about the important question of “does it fit in a box?”, the answer is “absolutely”:


Streaming video over WiFi means my laptop can do the computational heavy lifting while the little Neurorobot just has to carry out the commands it’s sent. In this way, the bot itself is a lot like our sensory organs and muscles, sending information to and receiving instructions from the laptop, which acts as the central nervous system.
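To give a sense of the “muscle” side of that split, here’s a minimal sketch of how the laptop could send drive commands to the bot over a socket. The address, port, and message format are hypothetical, invented for illustration; the real WiFi chip speaks its own protocol.

```python
import socket

# hypothetical address and message format; the real protocol differs
BOT_ADDR = ("192.168.4.1", 8080)

def send_drive_command(sock, left, right):
    """Send speeds for the left and right motors (-100..100)."""
    sock.sendall(f"L{left:+04d}R{right:+04d}\n".encode())

with socket.create_connection(BOT_ADDR) as sock:
    send_drive_command(sock, 50, 50)    # both wheels forward
    send_drive_command(sock, 50, -50)   # spin in place
    send_drive_command(sock, 0, 0)      # stop
```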

Here’s an example of some basic visual tracking the bot and laptop brain can do together:

It’s jerky and slow to adjust, but it’s definitely on its way to becoming a master object tracker. Right now the Neurorobot is going through a slow, constant loop of “where is the object,” “where do I move,” “I’ve moved a little,” “where is the object”…
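That loop is easier to see in code. Here’s a hedged sketch of the idea using OpenCV colour thresholding: find the biggest red blob in the frame, turn towards it, creep forward, and look again. The threshold values and the hypothetical send_drive_command helper from the sketch above are illustrative, not our production tracker.

```python
import cv2

def find_red_blob(frame):
    """Return the x-centre of the largest red region, or None if nothing is seen."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # rough red hue range; real thresholds need tuning for the room's lighting
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    return x + w / 2

def tracking_step(frame, frame_width, sock):
    """One pass of the 'where is it / where do I move' loop."""
    cx = find_red_blob(frame)
    if cx is None:
        send_drive_command(sock, 0, 0)       # nothing seen: stop and wait
    elif cx < frame_width * 0.4:
        send_drive_command(sock, -30, 30)    # object is to the left: turn left
    elif cx > frame_width * 0.6:
        send_drive_command(sock, 30, -30)    # object is to the right: turn right
    else:
        send_drive_command(sock, 40, 40)     # roughly centred: creep forward
```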

Through this primitive thought process, our bot can effectively (albeit slowly) wiggle itself towards objects of interest, like this red cup:

Right now I’m working on teaching it more than just its primary colours, having it use a fine-tuned Inception-V3 network to identify and locate my hands, so I can guide it around the office laptop-free. For more information on what fine-tuning means in the context of machine learning, check out: https://flyyufelix.github.io/2016/10/03/fine-tuning-in-keras-part1.html
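For the curious, here’s a minimal sketch of what fine-tuning Inception-V3 looks like in Keras, roughly in the spirit of that tutorial: load the ImageNet-pretrained network without its classification head, bolt a small new head onto it for your own classes, and train only the new layers at first. The class count and the (commented-out) training data are placeholders, not our actual hand-tracking setup.

```python
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.layers import GlobalAveragePooling2D, Dense
from tensorflow.keras.models import Model

NUM_CLASSES = 2  # placeholder, e.g. "hand" vs. "no hand"

# Inception-V3 pretrained on ImageNet, minus its original classifier
base = InceptionV3(weights="imagenet", include_top=False,
                   input_shape=(299, 299, 3))

# new classification head for our own classes
x = GlobalAveragePooling2D()(base.output)
x = Dense(256, activation="relu")(x)
outputs = Dense(NUM_CLASSES, activation="softmax")(x)
model = Model(inputs=base.input, outputs=outputs)

# freeze the pretrained layers so only the new head trains at first
for layer in base.layers:
    layer.trainable = False

model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_frames, train_labels, epochs=5)  # train on your own labelled frames
```

Actually locating the hands in the frame (rather than just recognising that one is present) takes a bit more work on top of this, for example running the classifier over regions of the image.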