I write this on the last day of the fellowship, with a really heavy heart. Eleven weeks went by really fast. Although I shall be back in Ann Arbor for school in September, it won’t be the same. This was one of the best summers I’ve ever had, and I will surely miss everyone at Backyard Brains!
In my last post, I mentioned how I could perform post-hoc classification to determine whether a person is thinking about movement or not. Since then, most of my time went into improving the classification accuracy: tweaking parameters here and there, collecting more data and validating the results. The average classification rate I achieved was approximately 88%, which is very good. But post-hoc classification isn’t much use for a real application on its own, so I have started working on reading continuous data and classifying it with a real-time interface. Time, however, decided to fly as fast as it could, so I will definitely continue working on this through the next month. No other major updates about the project for today.
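To give a rough idea of what that real-time interface could look like, here is a minimal sketch of the sliding-window approach I have in mind. It is only an illustration: `read_latest_samples()` is a placeholder for whatever acquisition call ends up being used (not an actual Backyard Brains API), the sampling rate is a stand-in value, and `clf` is assumed to be the classifier already trained on the post-hoc data.

```python
import numpy as np
from scipy.signal import welch

FS = 250            # sampling rate in Hz (placeholder value, not the real hardware spec)
WINDOW_SEC = 2      # length of the analysis window in seconds
MU_BAND = (8, 14)   # mu-rhythm frequency band

def mu_band_power(window, fs=FS):
    """Average power in the 8-14 Hz band, estimated with Welch's method."""
    freqs, psd = welch(window, fs=fs, nperseg=fs)
    mask = (freqs >= MU_BAND[0]) & (freqs <= MU_BAND[1])
    return psd[mask].mean()

def run_realtime(read_latest_samples, clf, baseline_power):
    """Classify a sliding 2-second window as motor imagery vs. rest.

    read_latest_samples -- placeholder callable returning the newest EEG samples
    clf                 -- classifier trained offline on the post-hoc data
    baseline_power      -- mu power measured while the subject was relaxed
    """
    buffer = np.zeros(FS * WINDOW_SEC)
    while True:
        new_samples = read_latest_samples()                      # hypothetical acquisition call
        buffer = np.concatenate([buffer, new_samples])[-FS * WINDOW_SEC:]
        power = mu_band_power(buffer)
        # Feature: percentage suppression of mu power relative to the relaxed baseline
        suppression = 100 * (baseline_power - power) / baseline_power
        label = clf.predict([[suppression]])[0]
        print("motor imagery" if label == 1 else "rest")
```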
Meanwhile, we were all also preparing for our poster presentation, which was on 1st August. It was my first ever poster presentation, and it turned out to be so motivating and inspiring: looking at the amazing research by so many other students, getting feedback on our work, and getting the chance to have meaningful discussions about it, all of it was so fruitful and fun.
One more thing I realised is that I never really discussed why mu rhythms behave the way they do, that is, why these particular waves disappear with movement or even the thought of movement. This is something which should’ve been in the very first blog post, but I guess better late than never? There isn’t really one concrete explanation for the behaviour of mu rhythms, but of all the different theories, I came across one that personally made the most sense to me. Feel free to correct me if you think otherwise!

As mentioned before, mu rhythms are most prominent when a person is physically at rest, specifically when the neurons in the sensorimotor region are ‘idling’. However, with the thought of movement or with actual movement, these neurons all start processing and sharing a huge amount of information at the same time, and this very high ‘information capacity’ results in a weak signal at the scalp. This is similar to the stadium analogy that Greg often uses. When we’re outside the stadium, we can never really figure out what’s going on inside, because there are thousands of different voices talking over each other at the same time, so we can never make out what is being said. On the other hand, when everyone is singing the national anthem, we can hear it outside because everyone is saying the exact same thing. In the same way, mu rhythms are strong when all the neurons are in the same ‘idling’ state, oscillating together, and they get suppressed with the onset of movement or movement visualisation because each neuron is suddenly busy with its own piece of information, like the thousands of separate conversations in the stadium. Here’s an image to visualise all that I wrote:
Again, this explanation might not be the correct one; it’s just the one that made sense to me personally.
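If you’d like to see that intuition in numbers, here’s a tiny toy simulation (not real data, just a thousand sine waves standing in for neurons) showing how the summed signal is huge when everyone oscillates in phase and mostly cancels out when everyone is at a random phase:

```python
import numpy as np

# Toy version of the stadium analogy: 1000 "neurons", each a 10 Hz sine wave.
# When they are all in phase (idling / singing the anthem) the sum is large;
# when each has a random phase (busy / separate conversations) the sum mostly cancels.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000)   # one second of "time"
n_neurons = 1000
freq = 10                     # a mu-band frequency

in_phase = sum(np.sin(2 * np.pi * freq * t) for _ in range(n_neurons))
random_phase = sum(np.sin(2 * np.pi * freq * t + rng.uniform(0, 2 * np.pi))
                   for _ in range(n_neurons))

print("peak amplitude, synchronised  :", np.abs(in_phase).max())      # ~1000
print("peak amplitude, desynchronised:", np.abs(random_phase).max())  # roughly sqrt(1000) ≈ 30
```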
And with this I conclude. I hope to be able to write again for all of you with further advancements in my project. I would like to thank Greg and everyone else at Backyard Brains for this amazing summer! Feel free to reach out to me (anusha.joshi@backyardbrains.com) with any further questions or for discussion!
The Backyard Brains 2018 Summer Research Fellowship is coming to a close, but not before we get some real-world scientific experience in! Our research fellows are nearing the end of their residency at the Backyard Brains lab, and they are about to begin their tenure as neuroscience advocates and Backyard Brains ambassadors. The fellows dropped in on University of Michigan’s Undergraduate Research Opportunity Program (UROP) Symposium during their final week of the fellowship, and each scientist gave a quick poster presentation about the work they’d been doing this summer! The fellows synthesized their data into the time-honored poster format and gave lightning-round pitches of their work to attendees. BYB is in the business of creating citizen scientists, and this real-world application is always a highlight of our fellowship. Check out their posters below!
Hello everyone! The previous two weeks have been an emotional and professional roller-coaster for me. It was tough saying goodbye to Etienne, who was such a lovely mentor for almost five weeks, but there was also the joy of welcoming Stanislav (our new mentor!); my parents visited me from India and then left; I took part in my first ever Fourth of July parade (my first year in the US, remember?) dressed as a gigantic brain; and of course my project had its own ups and downs, which I shall explain in detail below.
As I mentioned in my last blog post, I was finally successful in both finding the mu rhythms and detecting their suppression when there was hand movement. The tricky part was seeing the suppression when a subject is asked to merely imagine hand movement. It’s tricky because one needs to focus all of their thoughts on moving their hand and absolutely block out everything else, and conversely it’s hard not to think about moving when asked not to think about it. Sounds freaky, I know, but every time I asked a subject to relax and not think about movement, they seemed to think about it even more. Very few candidates managed it well, though I believe that with a little bit of training everyone can. And hence, in search of these candidates, I spent most of my time collecting data from a lot of people.
Simultaneously, I invested a lot of time in brainstorming about what the next step should be. The main goal is to classify when a person is thinking about movement, and this classification with a machine learning approach needs features: properties of the EEG recording that differ between when a person is thinking about movement and when the person is relaxed. Currently, I am trying my luck with the percentage of power suppression, i.e. the difference between the power in the mu-rhythm (8-14 Hz) band when relaxed and the power during motor imagery. Theoretically, the power during motor imagery should be much lower, and thus the difference bigger. This works on those candidates who are able to focus their thoughts on only hand movement and have absolutely no thoughts of movement when asked not to. Here’s a plot of these features and the decision boundary that my classifier made:
I used a Support Vector Machine to classify my test data, and it successfully learned a decision boundary that separates movement from non-movement. However, this was not possible for all the candidates, as another example below shows:
As you can see, there are a bunch of misclassified states (red markers in the blue area and vice versa).
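For anyone curious what goes into those plots, here is a simplified sketch of the pipeline: mu-band power from Welch’s method, the percentage-suppression feature, and an SVM fit on top. The trial data below is completely made up for illustration, and scikit-learn’s `SVC` merely stands in for whichever SVM implementation is used; the real recordings of course need more preprocessing than this.

```python
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC

FS = 250            # sampling rate in Hz (placeholder)
MU_BAND = (8, 14)   # mu-rhythm band

def mu_power(trial, fs=FS):
    """Mean power in the mu band for one EEG trial (a 1-D array of samples)."""
    freqs, psd = welch(trial, fs=fs, nperseg=fs)
    mask = (freqs >= MU_BAND[0]) & (freqs <= MU_BAND[1])
    return psd[mask].mean()

def suppression_features(trials, baseline):
    """Percentage suppression of mu power relative to the relaxed baseline."""
    return np.array([[100 * (baseline - mu_power(t)) / baseline] for t in trials])

# Made-up trials standing in for real recordings: 'rest' keeps a strong 10 Hz
# rhythm, 'imagery' has it suppressed down to noise.
rng = np.random.default_rng(0)
time = np.arange(FS * 2) / FS
rest = [rng.normal(size=FS * 2) + 2 * np.sin(2 * np.pi * 10 * time) for _ in range(20)]
imagery = [rng.normal(size=FS * 2) for _ in range(20)]

baseline = np.mean([mu_power(t) for t in rest])
X = np.vstack([suppression_features(rest, baseline), suppression_features(imagery, baseline)])
y = np.array([0] * len(rest) + [1] * len(imagery))   # 0 = rest, 1 = motor imagery

clf = SVC(kernel="rbf").fit(X, y)   # learns a decision boundary, as in the plots above
print("training accuracy:", clf.score(X, y))
```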
My next steps are to implement a real-time detection system for the subjects I can classify with decent accuracy and, in parallel, to change my data collection protocol for the subjects where the distinction isn’t clear.
With just two weeks to go, there seems to be a lot of work to be done in a short span, but hopefully I will get it done. Fingers crossed! Lastly, one notable change that has occurred in the lab: everyone is hooked on ‘Teenage Dirtbag’ by Wheatus, thanks to Greg humming it every day for the past two weeks!