Are you part of an organization registered as a public charity in the US or Canada?
If yes, now is the time to apply for up to $1,500 that you can put toward planning an outreach program for next year’s Brain Awareness Week (March 14 – March 20, 2022)!
If awarded, you could use this money to organize workshops, brain fairs, and interactive programs geared toward school students, undergraduates, underserved communities, or the general public. There are no formal limitations, as long as your program is free for attendees and has to do with neuroscience or brain health! Brain science doesn’t have nearly enough presence in school curricula, so any initiative that helps fill the gap is more than welcome.
If you’re worried about the pandemic, fret not: both in-person and virtual events are eligible for this grant!
How to Apply & What Kind of Programs Do They Support?
Applying is simple: just head over here to register as a Partner. If you’re already registered, just log in here and follow the link to submit your application. The Foundation provides plenty of resources to help you plan your program – check them out before applying!
Hello, I’m here again! This will be my last update on my neuromathematical project. If you don’t remember me, I’m Natalia Díaz, and I’m doing my university internship at Backyard Brains.
(If you’re wondering what I mean by neuromathematics, check out my first and second blog posts.)
In the last part of my internship, I have been working with Python. Running the experiments is great, but we must also learn to interpret them. And as every numbers geek knows, there is no better way to do that than with math and statistics!
Until now, Backyard Brains has been using Matlab for EEG analysis, but they’ve always wanted to achieve the same (or at least a similar) result using Python. So, they asked me to try to “translate” the work they had already done, which shows increased alpha wave power in the visual cortex when the eyes are closed.
At first I was a little scared: I know the Python language, but I am not an expert! As I worked on it, though, I realized that it was not difficult and that I knew more than I thought I did. I managed to write code on this new platform that did the same thing as theirs. I must say that Matlab makes the (slightly more complicated) math operations easier, but with a little effort and searching you can get a good result in Python.
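If you’re curious what that looks like, here is a minimal sketch of the kind of Python code involved (not my exact script): it computes a spectrogram of a single-channel EEG recording with SciPy so you can see the alpha band by eye. The file name and recording details are hypothetical, assuming a WAV file exported from Spike Recorder.

```python
import numpy as np
from scipy.io import wavfile
from scipy import signal
import matplotlib.pyplot as plt

# Load an EEG recording exported as a WAV file.
# "eeg_eyes_open_closed.wav" is a hypothetical file name.
fs, eeg = wavfile.read("eeg_eyes_open_closed.wav")
eeg = eeg.astype(float)

# Compute a spectrogram with ~1 s windows and 50% overlap.
f, t, Sxx = signal.spectrogram(eeg, fs=fs, nperseg=fs, noverlap=fs // 2)

# Plot only the 0-30 Hz band, where the alpha rhythm (8-12 Hz) lives.
mask = f <= 30
plt.pcolormesh(t, f[mask], 10 * np.log10(Sxx[mask] + 1e-12), shading="gouraud")
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.colorbar(label="Power (dB)")
plt.show()
```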
Below you can see the spectrogram of the visual cortex EEG as we opened and closed our eyes. Look at that alpha wave power. It worked!
Seeing that I had done a good job (I think, haha), BYB’s co-founder Tim asked me for a little more statistical analysis, with new graphs and calculations. And I did it! By comparing all the alpha power data during eyes closed versus eyes open, and using boxplots, I can now show statistically that alpha power is higher in the visual cortex EEG when the eyes are closed. I think a p-value of 0.003 is convincing, don’t you?
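Here is a minimal sketch of how such a comparison can be done in Python. It is not my exact analysis: the numbers below are random placeholders (in the real analysis they come from the spectrogram, averaged over the 8–12 Hz band and split by condition), and the choice of a two-sample t-test is just one reasonable option for illustration.

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Placeholder data: alpha-band power per analysis window for each condition.
alpha_closed = rng.normal(loc=12.0, scale=3.0, size=30)  # hypothetical values
alpha_open = rng.normal(loc=8.0, scale=3.0, size=30)     # hypothetical values

# Compare the two conditions with a two-sample t-test (an illustrative choice).
t_stat, p_value = stats.ttest_ind(alpha_closed, alpha_open)
print(f"p-value: {p_value:.4f}")

# Boxplots of alpha power: eyes closed vs. eyes open.
plt.boxplot([alpha_closed, alpha_open], labels=["Eyes closed", "Eyes open"])
plt.ylabel("Alpha power (a.u.)")
plt.show()
```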
To sum up, I am very happy to finish my internship having helped Backyard Brains with my knowledge and, above all, having combined the two things I like most: mathematics and neuroscience. My protocol is already on the website under our experiments page – “Quantifying Your EEG.” I hope they are happy with my work and will consider me for future projects where they need neuromathematical help.
See you soon!
Say Hello to Another BYB Intern & Budding Neural Engineer
— Written by Miguel Cornejo —
Hi everyone! I am Miguel Cornejo, a high school senior at Colegio Alberto Blest Gana in San Ramón, Santiago, Chile. Backyard Brains has had a relationship with my school for 5 years, and I took their Neural Engineering Course two years ago, leading a team that studied leg muscle recordings (EMGs) during soccer kicks.
I recently worked with Backyard Brains during a short, one-month internship to modernize two of their Muscle SpikerShield experiments – Controlling a Stepper Motor and Controlling an LCD Screen with your muscles. Why did they need to be modernized? Because the new controller chips have become so inexpensive that the updated protocols are a breeze. Tim and I worked together at various cafes in Ñuñoa and downtown Santiago, and after only one burned-out chip, my project was finished quickly! As a result of this internship, I now have a small neural-interfaces workshop in my house and stay in touch with BYB. In my spare time, when not learning next-gen engineering, I enjoy building gaming PCs and (of course) playing soccer.
Welcome to the final update on my TinyML Robot Hand project! After collecting sEMG (surface electromyography) data, feeding them into a neural network, and producing a machine learning model that can accurately classify different hand gestures, I can proudly say that my eagle has landed!
Deploying and integrating the model proved to be a lot more challenging than I anticipated. The offline model reached a high accuracy (~90%), but as soon as I tried to deal with real data, the classifier performed worse than chance. No matter how closely I tried to match my real-time processing pipeline to my offline pipeline, nothing seemed to fix it.
But just when everything seemed lost, we had a breakthrough at the last minute. I’ll explain what we did further down, but TL;DR, it worked!
In the end, it all came back to a topic I have been discussing throughout these blog updates. Offline models are great for data analysis and provide great insight into the nature of our signals and the features that can be extracted from them. Nevertheless, the good performance of an offline model is not guaranteed to carry over to an online model.
In my case, the root of the problem was the difference in magnitude between the data recorded with Spike Recorder™ and the data recorded with the Arduino Nano. Since the waveforms remained approximately the same, you would think that the neural network would focus on their shape and ignore the differences in amplitude, but that wasn’t the case.
To solve this issue, I ended up creating a new dataset using data recorded from the Arduino Nano, and I was finally able to get back to 72% classification accuracy on the testing dataset. This accuracy is not excellent, but at the very least, it let me control the hacker hand successfully most of the time.
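As a side note for anyone reproducing this: one common guard against this kind of amplitude mismatch is to standardize each window before feature extraction. The sketch below shows the idea. To be clear, this is not what I ended up doing (retraining on Arduino-recorded data is what fixed my system), and the window size here is a made-up example.

```python
import numpy as np

def normalize_window(window):
    """Scale one sEMG channel window to zero mean and unit variance.

    Standardizing each window before feature extraction makes a classifier
    less sensitive to overall amplitude differences between recording setups.
    """
    window = window - np.mean(window)
    std = np.std(window)
    return window / std if std > 0 else window

# Hypothetical usage: a (samples, channels) window of raw sEMG data.
raw_window = np.random.default_rng(0).normal(size=(128, 5))
clean_window = np.apply_along_axis(normalize_window, 0, raw_window)
```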
As the fellowship comes to an end, I wanted to take a moment to look back at what we did:
We hypothesized that a 5-channel signal should allow us to discriminate between 5 finger gestures. This is a hard problem because the muscles controlling finger movements are located deep in the forearm, so writing a classical, hand-coded computer program was out of the question.
Then, we collected data to test our hypothesis and concluded that there were enough qualitative differences between the gestures for a neural network to have a chance at this task. We then trained the neural network with data that was processed in Python (an offline model) and achieved good classification accuracy (a toy sketch of this step appears after the recap).
Next, we tried to recreate this success in the real-time system (read: Arduino/C++), but we failed because the real-time data was different from the offline training data. Finally, we fixed this issue by training the network with data captured in the real-time system.
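To make that offline training step a bit more concrete, here is a toy Python/Keras sketch of the idea. My actual classifier was built with Edge Impulse, so the features, shapes, and architecture below are illustrative assumptions, not my real pipeline.

```python
import numpy as np
import tensorflow as tf

# Toy stand-in for the offline training step. Assume each example is the
# RMS of each of the 5 sEMG channels over one window (hypothetical features),
# and each label is an integer 0-4 for one of the five finger gestures.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5)).astype("float32")  # placeholder features
y = rng.integers(0, 5, size=1000)                 # placeholder labels

# A small fully connected classifier: 5 inputs -> 5 gesture probabilities.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(5,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(5, activation="softmax"),  # one output per gesture
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=20, validation_split=0.2, verbose=0)
```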
In general, it seems like we have succeeded as a proof of concept, and as always, there are certain aspects of this project that I would like to revisit in the future:
I need an experiment to determine if the classification accuracy of the real-time system matches the testing accuracy reported by Edge Impulse.
I need to explore whether or not all channels contribute useful information across different gestures, and if they don’t, I need to determine how many channels I actually need to control the system.
Finally, as a computational neuroscientist, I would like to explore if there are different neural network architectures better suited for this kind of problem.
Overall, I had fun working on this project, and I hope you had fun following along too. Although there is some work left to be done, I think this project is ready to help you get started on your journey in neuroscience and neurotechnology. I genuinely believe that this is a great way to get comfortable with the basics of neural interfaces, digital signal processing, and all the fun stuff you have to deal with when working with electrophysiological signals and real-time systems. I invite you to reproduce this project and then go beyond; the applications are limited only by your creativity!