Cats and dogs aren’t the only pets fond of chasing things that run away from them. Aquarium fish do it too, as shown in our new peer-reviewed paper published just last week in the journal Animals! Of the 66 fish species observed, nearly 90% showed interest in, or actively chased, a moving laser-pointer dot.
What makes us especially proud is that the paper is the culmination of a year of research that began as last year’s Fellow Sofia Eisenbeiser’s summer project. As scientists know all too well, going from a summer project to a published paper in a single year is remarkably fast!
Another reason for pride is the fact that this research adds another layer of proof to what we’ve been saying all along: (neuro)science doesn’t have to cost a fortune. This particular experiment only requires a couple of things that many people already have: a fish tank with some inhabitants (the more, the merrier!) and a laser pointer or two. Incredibly easy to replicate in, say, your biology classroom!
Welcome to the final update on my TinyML Robot Hand project! After collecting sEMG (surface electromyography) data, feeding them into a neural network, and producing a machine learning model that can accurately classify different hand gestures, I can proudly say that my eagle has landed!
Deploying and integrating the model proved to be a lot more challenging than I anticipated. The offline model reached a high accuracy (~90%), but as soon as I tried to deal with real-time data, the classifier performed worse than chance. No matter how closely I tried to match my real-time processing pipeline to my offline pipeline, nothing seemed to fix it.
But just when everything seemed lost, we had a breakthrough at the last minute. I’ll explain what we did further down, but TL;DR, it worked!
In the end, it all came back to a topic I have been discussing throughout these blog updates. Offline models are great for data analysis and provide great insight into the nature of our signals and the features that can be extracted from them. Nevertheless, the good performance of an offline model is not guaranteed to translate to an online model.
In my case, the root of the problem was the difference in magnitude between the data recorded with Spike Recorder and the data recorded with the Arduino Nano. Since the waveforms remained approximately the same, you would think the neural network would focus on their shape and ignore the differences in amplitude, but that wasn’t the case.
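To see why an amplitude mismatch like this can derail a classifier, here is a minimal Python sketch with made-up numbers (not my actual recordings): the same waveform at two different gains produces very different values for a common amplitude feature such as RMS, so a network trained on one scale sees out-of-distribution inputs at the other.

```python
import numpy as np

# Toy illustration of the amplitude mismatch (hypothetical numbers):
# the same gesture waveform recorded at two different gains yields
# very different feature values, so a network trained on one scale
# can be thrown off by the other.
rng = np.random.default_rng(0)
waveform = np.sin(np.linspace(0, 8 * np.pi, 500)) * rng.uniform(0.5, 1.0, 500)

spike_recorder_like = 1.0 * waveform   # scale seen during offline training
arduino_like = 0.2 * waveform          # scale seen by the real-time system

def rms(x):
    """Root-mean-square amplitude, a common sEMG feature."""
    return np.sqrt(np.mean(x ** 2))

# Identical shape, roughly 5x difference in magnitude.
print(rms(spike_recorder_like), rms(arduino_like))
```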
To solve this issue, I ended up creating a new dataset using data recorded with the Arduino Nano, and I was finally able to get back up to 72% classification accuracy on the testing dataset. This accuracy is not excellent, but at the very least it let me control the hacker hand successfully most of the time.
As the fellowship comes to an end, I wanted to take a moment to look back at what we did:
We hypothesized that a 5-channel sEMG signal should allow us to discriminate between 5 finger gestures. This is a hard problem because the muscles controlling finger movements are located deep in the forearm, so each surface electrode picks up a mixture of their activity, and writing a classical, rule-based computer program was out of the question.
Then, we collected data to test our hypothesis and concluded that there were enough qualitative differences in the data across gestures, implying that a neural network had a chance of succeeding at this task. We then trained the neural network on data that was processed in Python (an offline model) and achieved good classification accuracy (see the toy sketch after this recap for the general idea).
Next, we tried to recreate this success in the real-time system (read: Arduino/C++), but we failed because the real-time data was different from the offline training data. Finally, we fixed this issue by retraining the network with data captured in the real-time system.
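For readers who want a feel for what an offline pipeline like this looks like, here is a minimal, hypothetical sketch using scikit-learn. It is not my actual pipeline; the data, feature counts, and network size below are placeholders meant only to show the overall shape of offline training on windowed sEMG features.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Placeholder feature matrix: one row per analysis window, with a few
# features (e.g., RMS, mean absolute value, zero crossings) per channel.
n_windows, n_channels, feats_per_channel = 1000, 5, 3
X = np.random.rand(n_windows, n_channels * feats_per_channel)
y = np.random.randint(0, 5, n_windows)  # 5 finger gestures

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Small feed-forward network as a stand-in classifier.
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=42)
clf.fit(X_train, y_train)
print("offline test accuracy:", clf.score(X_test, y_test))
```

With random placeholder data the accuracy will hover around chance (~20% for 5 classes); with real, well-separated gesture features it climbs much higher.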
In general, it seems like we have succeeded at building a proof of concept, and, as always, there are certain aspects of this project that I would like to revisit in the future:
I need an experiment to determine if the classification accuracy of the real-time system matches the testing accuracy reported by Edge Impulse.
I need to explore whether or not all channels contribute useful information across different gestures, and, if they don’t, determine how many channels I actually need to control the system (see the ablation sketch after this list for one way to check).
Finally, as a computational neuroscientist, I would like to explore if there are different neural network architectures better suited for this kind of problem.
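One simple way to probe the channel question from the list above is a leave-one-channel-out ablation: re-evaluate the classifier with each channel's features removed and compare accuracies. The sketch below is a hypothetical illustration that reuses the placeholder feature layout from the earlier offline example.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# Placeholder data laid out channel-major: channel 0 owns columns 0-2, etc.
n_windows, n_channels, feats_per_channel = 1000, 5, 3
X = np.random.rand(n_windows, n_channels * feats_per_channel)
y = np.random.randint(0, 5, n_windows)

def accuracy_without_channel(X, y, channel):
    """Cross-validated accuracy with one channel's feature columns removed."""
    keep = [c for c in range(X.shape[1]) if c // feats_per_channel != channel]
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=42)
    return cross_val_score(clf, X[:, keep], y, cv=3).mean()

for ch in range(n_channels):
    print(f"accuracy without channel {ch}: {accuracy_without_channel(X, y, ch):.2f}")
```

A channel whose removal barely changes the accuracy is probably not contributing much; a large drop points to a channel worth keeping.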
Overall, I had fun working on this project, and I hope you had fun following along too. Although there is some work left to be done, I think this project is ready to help you get started on your journey in Neuroscience and Neurotechnology. I genuinely believe that this is a great way to get comfortable with the basics of neural interfaces, digital signal processing, and all the fun stuff you have to deal with when working with electrophysiological signals and real-time systems. I invite you to reproduce this project, and then go beyond it; the applications are limited only by your creativity!
Well, folks, we made it. The last week, the final frontier, time to sink or swim. Luckily, I’ve spent the past 10 weeks with the most expert swimmers of all, our BYB fish. And, boy, have they taught me well. So, let’s dive in!
Alright, where were we? In the beginning we discussed play and how science might attempt to concretely define it in lower-level vertebrates whose minds we don’t fully understand (yet…). That was followed up with the discovery that fish like to play laser tag, and that some of them even prefer to play with differently colored lasers!
The first few weeks of my project revolved around more conceptual ideas and thinking about how exactly to quantify qualitative traits. How am I supposed to take something as abstract as the day-to-day of a fish and turn that into cold, hard data?
Well… the answer is actually quite simple: ethograms!
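For anyone unfamiliar with the term, an ethogram is essentially a catalog of defined behaviors that you tally during timed observations, which is exactly how the day-to-day of a fish becomes cold, hard data. Here is a tiny, hypothetical Python sketch of that bookkeeping; the behavior codes and observation log are made up for illustration.

```python
from collections import Counter

# Hypothetical ethogram categories and a made-up observation log:
# one behavior code noted at each sampling interval during a session.
BEHAVIORS = {
    "C": "chases laser dot",
    "O": "orients toward dot",
    "S": "swims elsewhere",
    "H": "hides",
}
observation_log = ["S", "O", "C", "C", "O", "S", "C", "H", "C", "O"]

counts = Counter(observation_log)
total = len(observation_log)

for code, description in BEHAVIORS.items():
    share = counts.get(code, 0) / total
    print(f"{description}: {counts.get(code, 0)} intervals ({share:.0%} of session)")
```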