
Creating Professor X

Remember Professor Charles Francis Xavier? The founder and leader of the X-Men has phenomenal telepathic abilities. But, alas, he only exists in fiction! Or so we thought. What if we had the technology to make part of Professor X’s abilities a reality? We could channel the superpower of looking into people’s minds to know what movement they’re thinking of before they make it, and maybe translate that tech into a robotic arm or leg. I am working on making this a reality this summer.

But how do we go about predicting one’s imagined movements? One approach is to measure the “Mu Rhythms,” also known as “Mu Waves,” from the sensorimotor area of the brain by recording an EEG signal. Mu waves are associated with movement of the body – either actually moving a part of your body or “thinking” about moving that particular part. The sensorimotor area is a narrow strip that runs from one ear to the other along the top of the head. As we can see in the image below, the sensorimotor area includes the ‘Primary motor cortex’ and the ‘Primary somatosensory cortex.’

Let’s explore these mu waves a little more. They occur in the above-mentioned regions only when the body is at rest, or more precisely, when this part of the cortex is ‘resting/idling.’ When we move a part of our body, the mu waves corresponding to that region disappear, or in scientific terms, the waves are desynchronized. Desynchronization occurs when the cortex is no longer in the resting state. Interestingly, our brain isn’t idling when we’re imagining a movement, which means this desynchronization of the mu waves should be visible through mere imagination of a movement. And when I say imagining, I do not mean visual imagination, but the actual feeling of it, somewhat like imagining how it would *feel* to move your hand without actually moving it. At this point, I am still working towards finding these rhythms, but theoretically, they should look somewhat like the highlighted region below (C4 corresponds to the left hand):

The positioning of the electrodes plays an important role in detecting the mu-rhythms! I hope to see these rhythms by the end of this week! Fingers crossed!

 

What next?

Once we do find these rhythms, the next step will be to quantify the suppression of the mu waves in order to predict whether the body is relaxed or whether there is some activity going on (either actual movement or imagination of it). Once that is accomplished, we can measure the activity from different regions and predict which body part the activity is associated with.
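To give a concrete sense of what that quantification might look like, here is a minimal sketch (not my actual analysis code) that estimates mu-band (8–13 Hz) power in an EEG segment and compares a task window against a resting baseline. The sampling rate and function names are assumptions for illustration only.

```python
# Minimal sketch of quantifying mu suppression (hypothetical names;
# assumes the EEG segment is already recorded as a 1-D NumPy array).
import numpy as np
from scipy.signal import butter, filtfilt

FS = 256  # assumed sampling rate in Hz

def mu_band_power(eeg, fs=FS, band=(8.0, 13.0)):
    """Band-pass the signal to the mu band and return its mean power."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, eeg)
    return np.mean(filtered ** 2)

def mu_suppression(baseline_eeg, task_eeg):
    """Event-related desynchronization index: negative values mean the
    mu rhythm dropped during (real or imagined) movement."""
    p_rest = mu_band_power(baseline_eeg)
    p_task = mu_band_power(task_eeg)
    return (p_task - p_rest) / p_rest  # fractional change in mu power
```

A strongly negative index over electrodes like C3/C4 would be the signature of movement or imagined movement, while values near zero would indicate the resting, idling state.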

 

A little about me!

My name is Anusha, and this is my first year in the US. I am pursuing my Master’s in Electrical and Computer Engineering at the University of Michigan, Ann Arbor. Currently, I am basking in the much-awaited and short-lived summer in Ann Arbor: long walks around the sprawling, beautiful campus and the lush green arboretum, seeking solace in the sunset by the Huron River with a nice cup of coffee and a good book, losing myself in the world of fiction. Music is one of my getaways; I was trained in classical Indian music throughout my undergrad in India and take pleasure in singing from time to time. I’m also passionate about cooking, baking, and eating, of course. Here’s me with my brother.


Hacking Sleep, Memory

So memory hacking during sleep is a thing? From endless runs back and forth to Om of Medicine chasing down my subjects, to countless hours staring at the Mona Lisa of sleep (delta waves), and many other ups and downs during this summer…

I can finally tell you it is quite possible!!

Now that August is here, these are sadly my last few days at Backyard Brains! So let me come back one last time and give you a final peek at what I have been up to for the past month, along with a wrap-up of all my findings and the exciting results of my research!

Since my last blog posts (Improving Memory Formation During Sleep and Learning and Deep Sleep), I have been conducting my study on as many subjects as I could possibly find. During this process, we added many new features to our TMR app to improve our ability to collect data as efficiently as possible.

The GUI settings for the app look very nice now, with new colors and a more user-friendly environment!

The reference grid was changed to colored boxes as shown, and the image no longer appears within the confines of the boxes. We added this change after noticing that our participants’ performance was being slightly biased by the old grid. Another exciting addition: we can now save experiment sessions within the app itself and come back later to continue where we left off. Our exporting function was fully revolutionized. Take a look:

This is the pseudocode I wrote with Greg to organize our data in a better-structured form. We now export JSON files with entries that are easily identifiable and accessible in MATLAB for data analysis.
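For illustration only, here is a sketch (in Python) of the kind of per-session JSON structure described above. The field names are hypothetical, not the app’s actual schema, and a file like this can be read straight into MATLAB with jsondecode.

```python
# Sketch of a per-session JSON export (illustrative field names only).
import json

session = {
    "subject_id": "S01",
    "phase": "post_sleep_test",
    "trials": [
        {
            "image": "cat.png",
            "cued": True,
            "correct_x": 120.0, "correct_y": 310.0,  # image location, points
            "tapped_x": 133.5, "tapped_y": 305.0,    # subject's tap, points
        },
    ],
}

with open("session_S01.json", "w") as f:
    json.dump(session, f, indent=2)

# In MATLAB the same file can then be loaded with:
#   data = jsondecode(fileread('session_S01.json'));
```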

The TMR technique is a powerful tool to play with: it allows us to test the selectivity of memory consolidation in various ways, experiment with many parameters, and answer different research questions. For that, I wanted built-in controls that give the user the choice to change the parameters of sound cueing. The new implementations are:

  • Setting the percentage of sounds to be cued during treatment (a minimal sketch of this selection appears after this list). The default I have been testing with, following the published papers, is 50% of all sounds presented in the learning phase (24 out of the 48). It could be interesting to test whether cueing 0%, 25%, 75%, or 100% would hinder or enhance the effectiveness of TMR on memory consolidation.
  • Manually selecting cues (and their corresponding images) if a user would like to test with a number of targets other than the default 48. Check this out:

  • The most exciting part: the control experiment is now ready! To validate our results, we need to run control experiments where subjects do a continuous reaction task instead of sleeping. We embed the cues within this task as well and test whether TMR still has an effect on memory consolidation during wakefulness. The game consists of four rounds: a 2.5-minute training phase, then three 7.5-minute testing phases. Sound cues play in the second testing round, starting 1.5 minutes after the round begins. The user sees numbers on the screen and should tap if both numbers are either odd or even. Here is a video from the app showing how the game works:
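Going back to the first item in the list, here is a minimal sketch of how the cue-percentage setting could pick which learned sounds to replay during treatment. The function and variable names are illustrative, not the app’s actual code; the defaults (48 sound-image pairs, 50% cued) come from the description above.

```python
# Minimal sketch of choosing which learned sounds to cue during treatment.
import random

def select_cued_sounds(all_sounds, percent_cued=50, seed=None):
    """Randomly pick the subset of sounds to replay during slow-wave sleep."""
    rng = random.Random(seed)
    n_cued = round(len(all_sounds) * percent_cued / 100)
    return set(rng.sample(all_sounds, n_cued))

sounds = [f"sound_{i:02d}" for i in range(48)]
cued = select_cued_sounds(sounds, percent_cued=50)
uncued = [s for s in sounds if s not in cued]
print(len(cued), len(uncued))  # 24 24 with the default 50% setting
```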

With all these implementations in place, I was able to test on more subjects. This table includes the full database of all the participants I had over this summer. It was very hard to find people willing to spare their time for the study (which takes up to 2.5-3 hours) in the middle of the day, so most of my subjects were fellow interns and employees at BYB, along with Ann Arbor locals who volunteered during Tech Trek or signed up through my Doodle poll.

Mean start time for slow-wave sleep was 37.5 minutes +/- 5.1 SE across all the subjects we tested who could sleep fully for 90 minutes or more (the first 8 subjects). Experimental results and post-hoc analysis were based on data from the first 4 subjects, as they were the ones able to complete the study fully. The control experiment was conducted on the last two subjects.

Here comes the best part, what we have been waiting for:

Result: Images whose sounds were cued during SWS showed better recall after sleep than uncued images

This graph is pretty interesting and tells us a lot, but it might not be very intuitive at first, so let’s walk through it! The change in spatial recall is measured as the difference in distance, in points, calculated within the app itself. Points are units of position in iOS and other Apple devices, similar to pixels; the conversion ratio is 1 point = 0.0352778 cm. The app calculates the distance between where the user taps on the screen according to where they remember the image appearing (the x,y coordinates of the single tap point) and the original correct location of the image (the x,y coordinates of the bottom-left corner of the image). The larger the distance between the two, the further off the subject was from the correct location, reflecting less accurate recall and more forgetting.

This distance in points is measured for each image in both the pre- and post-sleep tests. To find the difference in performance, I subtract the before-sleep distance from the after-sleep distance. A negative number means the distance after sleep is smaller than the one before, indicating an improvement in performance and recall: the subject tapped closer to the correct image location and so remembers better.

Grouping data from all 4 subjects and separating the images into cued and uncued, we have 24 cued images per subject and 96 cued images across all 4 subjects; the same applies to uncued images. This gives us a total of 192 images on the x-axis, cued and uncued. With this in mind, the graph shows the distribution within and across subjects. We can see a higher proportion of blue columns with large positive differences in distance above the x-axis for the uncued images, which shows that subjects are forgetting more (less accurate recall) for uncued images. On the other hand, we can see a higher proportion of green columns with large negative differences in distance below the x-axis for cued images, showing less forgetting and better remembering (more accurate recall).
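As a concrete illustration of the calculation just described, here is a small sketch of the distance and recall-change computation. The helper names are hypothetical; the real computation runs inside the app itself.

```python
# Sketch of the recall-change calculation (hypothetical helper names).
import math

POINTS_TO_CM = 0.0352778  # conversion ratio quoted above

def recall_error(tap, target):
    """Euclidean distance, in points, between the tap and the image's
    correct location (both given as (x, y) in points)."""
    return math.dist(tap, target)

def recall_change(pre_tap, post_tap, target):
    """Post-sleep error minus pre-sleep error; negative = improved recall."""
    return recall_error(post_tap, target) - recall_error(pre_tap, target)

# Example: the subject tapped closer to the target after sleep,
# so the change is negative (less forgetting).
delta = recall_change(pre_tap=(150, 300), post_tap=(128, 305), target=(120, 310))
print(delta, delta * POINTS_TO_CM)
```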

This next graph is much easier to read: it takes the mean of the distance differences for the 96 cued and 96 uncued images and plots them. It shows only the final overall change in recall for all subjects grouped together. We can see a pretty interesting, significant difference in performance between the two.

Summary: Better recall for cued images (-12.95 points +/- 15.80 SE) compared to uncued images (33.09 points +/- 16.26 SE), using a two-sample independent t-test (p = 0.04).
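For reference, a comparison like the one summarized above can be reproduced in a few lines once the per-image distance differences are exported. This sketch uses SciPy rather than the MATLAB code I actually used, and the function name is mine.

```python
# Sketch of the two-sample independent t-test on per-image recall changes.
import numpy as np
from scipy import stats

def compare_cued_vs_uncued(cued_diffs, uncued_diffs):
    """cued_diffs / uncued_diffs: per-image change in distance (points),
    post-sleep minus pre-sleep, for the cued and uncued images."""
    t_stat, p_value = stats.ttest_ind(cued_diffs, uncued_diffs)
    return {
        "cued_mean": np.mean(cued_diffs), "cued_sem": stats.sem(cued_diffs),
        "uncued_mean": np.mean(uncued_diffs), "uncued_sem": stats.sem(uncued_diffs),
        "t": t_stat, "p": p_value,
    }
```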

This graph shows another analysis of the results: the percentage correct for cued and uncued images before and after sleep for all subjects, i.e. the number of images subjects got correct out of the 24 cued or uncued images before and after sleep. Correctness is decided by comparing the distance in points discussed above to a set threshold of 15% of the screen width. The percentage of screen width is simply “distance in points / (width of the screen in points)”; the width is adjusted automatically for the Apple device being used. If the distance is less than 15% of the screen width, the response is counted as correct. We can see that subjects had a higher % correct for cued images after sleep, a lower % correct for uncued images, and overall a higher % correct for cued vs. uncued images after sleep.
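Here is a minimal sketch of that correctness criterion. The screen width in the example is an assumed value, since the app adjusts it automatically per device.

```python
# Sketch of the 15%-of-screen-width correctness criterion.
def is_correct(distance_points, screen_width_points, threshold=0.15):
    """A recall counts as correct if the tap landed within 15% of the
    screen width from the image's true location."""
    return (distance_points / screen_width_points) < threshold

def percent_correct(distances, screen_width_points):
    hits = sum(is_correct(d, screen_width_points) for d in distances)
    return 100.0 * hits / len(distances)

# Example with an assumed screen width of 1024 points.
print(percent_correct([90, 200, 40, 310], screen_width_points=1024))
```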

Assembling all these puzzle pieces together, we can conclude that we are seeing a general trend so far: the DIY version of the Targeted Memory Reactivation (TMR) technique could potentially enhance memory consolidation during SWS and have useful applications in learning and teaching in the future. TMR appears to effectively bias spatial associative memory consolidation by reducing the amount of forgetting, more than by providing a pure gain in remembering cued images better. We definitely still need to test this on more subjects to draw reliable conclusions about significance.

The control experiments, in which sounds were cued without sleep, have been conducted on only two subjects so far. Results show the same trend as the sleep experiment, with slight differences.

Summary: Better recall for cued images (-23.60 points +/- 13.29 SE) compared to uncued images (46.77 points +/- 21.53 SE), using a two-sample independent t-test (p = 0.007).

It looks like performance was slightly better for the cued images and worse for the uncued ones compared to the sleep results above. We have to keep in mind that although the results from the control experiment are significant, they come from only two subjects, and more data needs to be collected. For now, though, this shows us something surprising yet reasonable: TMR appears to work both during SWS and during wakefulness. But which is better? Where does the maximum memory consolidation happen? Does SWS promote consolidation of different types of memory compared to wakefulness? All such questions are yet to be answered!

So, my research does not stop here; it will continue beyond my fellowship this summer with BYB. My goal is to keep collecting data and to explore the answers to the questions above and others that might come along the way. To do that, and to keep this research fully DIY and accessible to the public, my next step is to take the cueing of sounds during sleep to the next level: automatic cueing using machine learning! This would allow users to run the full study on themselves by buying the Heart and Brain SpikerShield and downloading our TMR app, without needing a researcher to observe their EEG during sleep and manually cue the sounds upon detecting delta waves, as I have been doing. The hope is then to provide a future cloud service for user data and to use TMR to tackle larger issues:
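To illustrate the idea (not the planned implementation), here is a minimal sketch of automatic cue triggering based on delta-band (0.5-4 Hz) power crossing a threshold. The sampling rate, window length, and threshold are all assumptions, and the eventual machine-learning detector would replace this simple threshold rule.

```python
# Minimal sketch of automatic cueing: estimate delta-band power over a
# sliding EEG window and trigger sound cues when it looks like SWS.
import numpy as np
from scipy.signal import welch

FS = 256  # assumed EEG sampling rate in Hz

def delta_power(window, fs=FS):
    """Integrated power spectral density in the 0.5-4 Hz delta band."""
    freqs, psd = welch(window, fs=fs, nperseg=fs * 2)
    band = (freqs >= 0.5) & (freqs <= 4.0)
    return np.trapz(psd[band], freqs[band])

def should_cue(window, threshold, fs=FS):
    """Return True when the current window looks like slow-wave sleep."""
    return delta_power(window, fs) > threshold
```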

Can it be used in PTSD research to help patients overcome traumatic memories? Can it be applied in educational settings to improve learning and teaching in institutions? Would it give us more insight into how our brains work when it comes to memory and potentially find a link to Alzheimer’s research?

Stay tuned! You will be hearing from me again in the near future.

Before I leave you for the summer, I would love to share some pictures from my best moments during this fellowship with my fellow interns and the BYB staff. Last week, TED visited Ann Arbor to film our projects for episodes of an internet show that will go live sometime this fall. This has by far been the best and most exciting part of this experience. We all worked so hard preparing for it and spent long days presenting and explaining our work in front of cameras and lights! You will hopefully like it and share our enjoyment soon! Yesterday, we also presented our posters at the UROP Summer Symposium at the University of Michigan; people loved my project and gave some very good feedback on future directions.

It has been a pleasure interning with BYB this summer. It was an exciting and moving journey that added valuable lessons to my academic and personal growth. I truly appreciate Greg Gage and all his support in pushing me to become a better researcher and a believer in his famous piece of advice: “skepticism is a virtue.” This summer was not only about learning how to cook, code in MATLAB, deal with my best friend – EEGs – or network and get one step closer to my professional career aspirations. It was a reassuring discovery of my love for research and my passion for literally revolutionizing neuroscience and making it available to everyone!

See if you can spot how many times I wore my favourite lucky blue blouse! It should go down in history.

With all the awesome interns! Thank you for the greatest summer 🙂 We made so many good memories and funny moments, and got to explore Michigan together!! This is not a goodbye!!


Real Time Mind-Reading

G’day mate, how are ya?

Thank you for waiting for the updates on my research on Human EEG Decoding (aka Mind Reading).

Since my last post, I have tried to decode human EEG data in real time and… guess what…I succeeded! Hooray!

I first analyzed all the data I have collected so far to verify and evaluate the different patterns of brain signals evoked by different images.

I analyzed the raw EEG signals and the ERPs (event-related potentials) in each channel and for each category of images. I could clearly see the N170 component appearing when a face was presented, while it stayed mostly absent during the presentation of other images. However, there was no significant difference between any of the other images (houses, scenery, weird pictures), which indicated that, at least with my current experimental design, I should focus on classifying face vs. non-face events.
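For readers curious how an ERP like the N170 is computed, here is a minimal sketch of epoching and averaging the EEG around stimulus onsets. The variable names and sampling rate are assumptions for illustration, not my actual analysis code.

```python
# Sketch of ERP computation: cut an epoch around each stimulus onset
# and average all epochs belonging to one image category.
import numpy as np

FS = 256  # assumed sampling rate in Hz

def erp(eeg, onsets, fs=FS, pre=0.1, post=0.5):
    """Average the EEG over all epochs from -pre to +post seconds around
    the given stimulus-onset sample indices."""
    pre_n, post_n = int(pre * fs), int(post * fs)
    epochs = [eeg[o - pre_n : o + post_n] for o in onsets
              if o - pre_n >= 0 and o + post_n <= len(eeg)]
    return np.mean(epochs, axis=0)

# Comparing erp(eeg, face_onsets) with erp(eeg, house_onsets): the face
# ERP should show a negative deflection around 170 ms (the N170)
# that the other categories lack.
```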

Then I wrote MATLAB code that collects the data from human subjects and decodes their brain signals in real time. In the training session, 240 randomized images (60 images for each of the 4 categories) are presented to subjects to train a support vector machine (a machine learning technique useful for classifying observations into categories). Then, during the testing session, we analyze the EEG response to each randomized image in real time to predict which image is actually being presented. Since it was real-time classification, the coding was complicated, and I also had to synchronize it with my Python image presentation program…
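As a rough illustration of the training/testing split (my actual pipeline is in MATLAB), here is a sketch using scikit-learn. The feature extraction that turns each EEG epoch into a vector is assumed to happen elsewhere, and the function names are mine.

```python
# Sketch of the SVM train/decode split. Each row of X_train is a feature
# vector built from one stimulus epoch; y_train holds category labels.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_decoder(X_train, y_train):
    """Train an SVM on the labeled training epochs (e.g. 240 images)."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    clf.fit(X_train, y_train)
    return clf

def decode_epoch(clf, epoch_features):
    """Predict the category of a single incoming epoch in real time."""
    return clf.predict(np.asarray(epoch_features).reshape(1, -1))[0]
```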

I also had to check the hardware and the outside environment, which could deteriorate the performance of the SpikerShields and the classifier! Some blood, sweat, tears, and a lot of electrode gel later… I had it running!

After all the hard work, I began running trials where the goal was to classify face vs. scenery. I chose natural landscapes as they seem to be the most non-face-like images. After a training session of 240 images for each subject, we tested 50 face and 50 scenery images (in randomized order) to check our real-time algorithms.

The result was very satisfying! Christy and Spencer (the coolest BYB research fellows – please see my previous blog post) scored average accuracies of 93% and 85% across 5 trials, which showed that we can successfully classify the brain signals from face vs. non-face presentations. The stimulus distinction was clear in the ERPs from the training and testing sessions, but not strong enough to observe in the raw EEG signals from each channel. However, just because it wasn’t obvious to the human eye doesn’t mean the computer couldn’t decode it! The machine learning algorithms I prepared did an excellent job classifying the raw EEG signals in real time, which suggests that a future in which we can work on more advanced, real-time EEG processing is not far away! We’re edging closer and closer to revolutionary bio-tech advancements. But for now, it’s just faces and trees.

And now, the capstone of my summer research!

We fellows worked long hours for 6 straight days filming short videos for TED, each of which focused on one of our individual projects!

It was stressful but exciting. I never would have expected I’d have the opportunity to present my research to the world through TED!

My best subject, Christy, generously agreed to appear in my video shoot.

We presented three experiments:

  1. EEG recording and ERPs
  2. Real Time decoding with no training trials
  3. Real Time decoding with training trials

The first experiment showed how we can detect differences between face and other images via ERPs through the presence of N170 responses. The second experiment demonstrated the difficulty of real-time decoding… and the third showed how we can actually decode the human brain in real time with limited information and only a few observable channels.

All the experiments were successful, thanks to Greg, TED staff, and Christy!

For the videos, I had to explain what was going on in each experiment and what its results implied. In preparation for those “chat” segments, I needed to study how best to explain and break down the research for the public, so that they could understand and replicate the experiments. The educational format was definitely good experience in preparing and presenting my research to different audiences.

Please check out my TED video when it is released someday! You’ll probably be able to see it here on the BYB website when it launches!

To wrap up, I’ve enjoyed my research these past 11 weeks. Looking back at what I have done over the summer, I see how far I’ve come. This fellowship was a valuable experience that improved my software engineering and coding skills across different programming languages and platforms. I also got a crash course in hardware design and electrical engineering! I learned how to design a new experiment from scratch using many different scientific tools. Most importantly, I learned more about the scientific mindset: how to think critically about a project, how to analyze data, and how to avoid unsubstantiated claims and biases.

Even though mind-reading was my project, I couldn’t have done it alone. I would like to say thank you to everyone at BYB who supported my project, including Greg, who continues to guide me in the scientific mindset, teach me how to conduct experiments, and help me analyze data and present the research effectively to outsiders. Thank you to Stanislav, who put forth a lot of effort to help me verify and build my software; to Zach, who helped me build and test the hardware; to Will, who was always there to help me with any matters during my time here; and to Christy and Spencer, who were the best subjects, always sparing their time for the sake of science. I am sure that my experience here was a step toward becoming a better researcher. My project is not finished; it has really just begun. I am planning to continue this mind-reading research. Single-channel decoding and classification of non-face images will be the first steps after this summer.

Thank you so much for your time and interest in my project. Stay tuned….