I write this on the last day of the fellowship, with a really heavy heart. Eleven weeks went by really fast. Although I shall be back in Ann Arbor for school in September, it won’t be the same. This was one of the best summers I’ve ever had. I will surely miss everyone at Backyard Brains!
In my last post, I mentioned how I could perform post-hoc classification to determine whether a person is thinking about movement or not. I spent most of my time after that improving the classification accuracy: tweaking parameters here and there, collecting more data and validating the results. The average classification rate I achieved was approximately 88%, which is very good. But post-hoc classification has little use in a real application, so I have started working on reading continuous data and classifying it with a real-time interface. Time, however, decided to just fly as fast as it could, so I will definitely continue working on it through the next month. No other major updates about the project for today.
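As a rough idea of what that real-time interface could look like, here is a sketch of a sliding-window classification loop. Everything here is hypothetical: `read_chunk`, `extract_features` and `clf` are stand-ins for the device reader, the feature extraction and a trained classifier, and the window and step sizes are my own assumptions, not the actual project code.

```python
# Sketch of a real-time loop: classify each sliding window of the
# incoming EEG stream. All names and sizes are illustrative assumptions.
import numpy as np
from collections import deque

FS = 250            # assumed sampling rate, Hz
WINDOW = 2 * FS     # classify 2-second windows
STEP = FS // 2      # slide forward by 0.5 s of new samples

def classify_stream(read_chunk, extract_features, clf):
    """Yield one prediction per STEP new samples once the buffer is full.

    read_chunk(n)     -- hypothetical blocking read of n samples (None = end)
    extract_features  -- maps a 1-D window of samples to a feature vector
    clf               -- trained classifier with a .predict() method
    """
    buffer = deque(maxlen=WINDOW)
    while True:
        chunk = read_chunk(STEP)
        if chunk is None:          # stream ended
            return
        buffer.extend(chunk)
        if len(buffer) == WINDOW:  # enough samples for one window
            window = np.asarray(buffer)
            yield clf.predict([extract_features(window)])[0]
```

The deque with `maxlen=WINDOW` keeps only the most recent two seconds, so each new half-second chunk produces one fresh, overlapping window to classify.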
Meanwhile, we were all also preparing for our poster presentation, which was on 1st August. It was my first ever poster presentation, and it turned out to be so motivating and inspiring: looking at the amazing research by so many other students, getting feedback on our research, and getting a chance to have meaningful discussions about our work was all so fruitful and fun.
One more thing I realised is that I never really discussed why mu rhythms behave the way they do; that is, why these particular waves disappear with movement or even with the thought of movement. This should have been in the very first blog post, but I guess better late than never? There isn’t really a definitive explanation for the behaviour of mu rhythms, but of all the different theories, I came across one that personally made the most sense to me. Feel free to correct me if you disagree! As mentioned before, mu rhythms are most prominent when a person is physically at rest, specifically when the neurons in the sensorimotor region are ‘idling’. With the thought of movement, or with actual movement, these neurons all start exchanging a huge amount of information at the same time, and this high ‘information content’ results in a weak, desynchronised signal. This is similar to the stadium analogy that Greg often uses: from outside the stadium, we can never figure out what’s going on inside, because there are thousands of different voices speaking at the same time, so we can never make out what is being said. On the other hand, when everyone is singing the national anthem, we can hear it outside, because everyone is saying the exact same thing. In the same way, mu rhythms are strong when all the neurons are in the exact same ‘idling’ state, and they get suppressed at the onset of movement or movement visualisation because the neurons are each firing on their own schedule, carrying their own piece of information. Here’s an image to visualise all that I wrote:
Again, this explanation might not be the correct one; it just made the most sense to me personally.
And with this I conclude. I hope to be able to write again for all of you with further advancements in my project. I would like to thank Greg and everyone else at Backyard Brains for this amazing summer! Feel free to reach out to me (anusha.joshi@backyardbrains.com) with any further questions and discussions!
Hello everyone! The previous two weeks have been an emotional and professional roller-coaster for me. It was tough saying goodbye to Etienne, who was such a lovely mentor for almost five weeks, but there was also the joy of welcoming Stanislav (our new mentor!). My parents visited me from India and then had to leave, I marched in my first ever Fourth of July parade (my first year in the US, remember?) dressed as a gigantic brain, and of course my project had its own ups and downs, which I shall explain in detail below.
As I mentioned in my last blog post, I was finally successful in both finding the mu rhythms and detecting their suppression during hand movement. The tricky part was seeing the suppression when a subject is asked to imagine hand movement. It’s tricky because one needs to focus all of their thoughts on moving their hand and absolutely block out any other thoughts, and it’s equally hard to not think about moving when asked to relax. Sounds freaky, I know, but every time I asked a subject to relax and not think about movement, they seemed to think about it more. Only a few candidates managed it well, though I believe that with a little training everyone can. And so, in search of these candidates, I spent most of my time collecting data from a lot of people.
Simultaneously, I invested a lot of time in brainstorming about the next step. The main goal is to classify whether a person is thinking about movement, and a machine learning approach to this classification needs features: properties of the EEG recording that differ between when a person is thinking about movement and when they are relaxed. Currently, I am trying my luck with the percentage of power suppression, i.e. the difference between the power in the mu-rhythm band (8-14 Hz) when relaxed and the power during motor imagery. Theoretically, the power during motor imagery should be much lower, giving a bigger difference. This works for those candidates who can successfully focus their thoughts on hand movement alone, and keep movement entirely out of their thoughts when asked to relax. Here’s a plot of these features and the decision boundary that my classifier made:
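To make the feature concrete, here is a rough sketch of how that percentage suppression could be computed with SciPy. The sampling rate, the helper names and the Welch parameters are my assumptions for illustration, not the actual analysis code.

```python
# Sketch of the mu-band suppression feature: percentage drop in 8-14 Hz
# power from rest to motor imagery. FS and helper names are assumptions.
import numpy as np
from scipy.signal import welch

FS = 250  # assumed sampling rate, Hz

def mu_band_power(signal, fs=FS, band=(8.0, 14.0)):
    """Mean power spectral density in the mu band (Welch's method)."""
    freqs, psd = welch(signal, fs=fs, nperseg=2 * fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def suppression_percent(rest_signal, imagery_signal, fs=FS):
    """Percentage drop in mu-band power during motor imagery vs rest."""
    p_rest = mu_band_power(rest_signal, fs)
    p_imagery = mu_band_power(imagery_signal, fs)
    return 100.0 * (p_rest - p_imagery) / p_rest
```

A subject with good mu suppression would produce large positive percentages during imagery trials; a subject whose rest and imagery power look alike would hover near zero, which is exactly the separation the classifier relies on.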
I used a Support Vector Machine on my data, and it successfully found a decision boundary that separates movement from non-movement. However, this was not possible for all candidates, as shown in another example below:
As you can see, there are a bunch of misclassified states (red markers in the blue area and vice versa).
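For reference, the SVM step might look something like this with scikit-learn, using toy one-dimensional data in place of the real suppression features (the numbers below are made up to mimic a well-separated subject):

```python
# Toy sketch of the SVM classification step with scikit-learn.
# Labels: 0 = relaxed, 1 = motor imagery. Feature values are invented.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(42)
# Hypothetical feature: percent mu-power suppression per trial
rest = rng.normal(loc=5, scale=8, size=(40, 1))       # little suppression
imagery = rng.normal(loc=45, scale=10, size=(40, 1))  # strong suppression
X = np.vstack([rest, imagery])
y = np.array([0] * 40 + [1] * 40)

clf = SVC(kernel="linear")
clf.fit(X, y)
print(clf.score(X, y))  # training accuracy on the toy data
```

With a well-behaved subject the two clusters barely overlap and a linear boundary is enough; the poorly separated subjects correspond to clusters that bleed into each other, which is where the misclassified markers come from.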
My next steps are to implement a real-time detection system for all those subjects on whom I can classify with a decent accuracy and simultaneously make changes to my data collection protocol for those subjects where the distinction isn’t clear.
With just two weeks to go, there seems to be a lot of work to be done in a short span. But hopefully I will get it done! Fingers crossed! Lastly, one notable change that has occurred in the lab is that everyone is hooked on ‘Teenage Dirtbag’ by Wheatus, thanks to Greg humming it every day for the past two weeks!
Hello! We are inching towards our goal of giving you a superpower! Last time, I was trying to find the mu rhythms, and I think I have finally found them. This involved two main steps: first, finding the right montage (electrode locations) that would let us see the mu rhythms, and second, once I had the locations, hunting for those distinct mu rhythms, which required a suitable data-recording protocol. I didn’t settle on either of these quickly. After multiple unsuccessful tests on my mentor, who kindly agreed to be my subject, there was a tiny ray of hope.
I decided upon using 4 channels corresponding to 4 locations on the scalp.
Here’s an image of the international 10/20 system used to specify the electrode locations.
One of them keeps the positive lead on F4 and the negative (reference) lead on C4, with the ground lead on the mastoid (the bone behind the ear); another puts the positive lead on C4 and the negative (reference) on Fz. And of course, to maintain symmetry, there are two mirrored channels over the other hemisphere: F3/C3 and C3/Fz. As we know, the left hemisphere of the brain corresponds to the right side of the body and vice versa.
And now, time for some data blitz! Hello, mu rhythms!
The waves after the breakpoint correspond to when there was movement, and there is a stark difference between the nature of the waves before and after it: the EEG desynchronises right when a movement happens. This is the mu-wave suppression I mentioned in my last post.
However, just a visual observation of these mu waves is not enough. To accomplish our end goal, we need more than visualisation: a way to quantify this suppression, which is what I am working on presently. One way to do this is to calculate the power at each frequency and plot it. Theoretically, there should be a reduction of power in the 8-14 Hz band every time there is a movement, or an imagined movement. The power plots gave me promising leads. Here’s a snippet of the power spectral density.
This plot shows the data collected from F4-C4, the location that corresponds to the left hand. As we can see, the curves for the relaxed state and for right-hand movement (both imagined and actual) have equivalent power. For left-hand movement, however, there is a significant decrease in power, with a slightly smaller reduction for imagined left-hand movement. This matches our expectations.
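For anyone curious, here is one sketch of how the 8-14 Hz power could be quantified, using a band-pass filter with SciPy. The sampling rate and filter order are my assumptions; only the band edges come from the post.

```python
# Sketch: quantify mu-band (8-14 Hz) power by band-pass filtering the
# signal and taking its mean squared amplitude. FS and the filter order
# are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250  # assumed sampling rate, Hz

def mu_power(signal, fs=FS, band=(8.0, 14.0)):
    """Mean squared amplitude of the signal after band-passing to mu."""
    nyq = fs / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, signal)  # zero-phase filtering
    return np.mean(filtered ** 2)
```

Comparing this number across conditions (relaxed vs. actual vs. imagined movement) gives the same kind of per-band comparison as reading the PSD plot, just collapsed to a single value per trial.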
Currently, I am searching for more features and processing techniques that I can use to train a model to predict imagined movements, while simultaneously collecting as much data as possible. One can never have too much data!
Apart from work, my last two weekends were quite interesting: one weekend I hosted my sister’s classical Indian dance show, and the other I visited picturesque Frankenmuth, Michigan’s Little Bavaria!
The shockingly unpredictable World Cup has kept me occupied too!
I look forward to updating all of you with hopefully some more good results! But until then, Auf Wiedersehen!