As part of the kick-off to the International Senior Research Fellowship, the fellows attended Soapbox Science Munich and the Munich Science Slam presentation skills workshop to begin immersing themselves in the scientific community in Germany and to learn strategies for becoming better science communicators.
Over 11 sunny Ann Arbor weeks, our research fellows worked hard to answer their research questions. They developed novel methodologies, programmed complex computer vision and data processing systems, and compiled their experimental data for poster, and perhaps even journal, publication. But, alas and alack… all good things must come to an end. Fortunately, in research, the end of one project is often the beginning of the next!
Some of the fellows intend to continue working on the research they began here while they're away, and many of these projects will be continued next summer! Definitely expect to hear updates from Nathan's EEG Visual Decoding project and Joud's Sleep Memory project. Additionally, two of the projects will continue over the next few months: Zach's Songbird Identification and Shreya's Electric Fish Detector projects will run through December!
Meet the Fellows, See the Projects
The fellows are off to a great start! Check out their blog posts introducing their projects:
- Zach – Identifying Bird Songs
- Ilya – Octopus Learning and Behavior
- Spencer – Optogenetics with the FlyPad
- Jaimie – Electrophysiology of Dragonflies
- Nathan – Decoding Images in the Brain
- Christy – Behavioral Study of Baby Squids
- Joud – Improving Memory Formation During Sleep
- Haley – Learning about the Mosquito Love Song
- Shreya – Detecting Electric Fish
The team has been working hard to bring their projects to life. Check out these blog posts on their rig construction and data collection efforts!
- Zach – When Computers Hear the Birds Sing
- Ilya – Studying Aggressive Behavior in Octopodes
- Spencer – The Taste Preferences of Fruit Flies
- Jaimie – Recording from the Visual Neurons in a Dragonfly
- Nathan – Detecting the Detection of Faces
- Christy – Recording the Behavior of Squid Hatchlings
- Joud – Learning and Deep Sleep
- Haley – Harmonics of Mosquito Mating
- Shreya – Building an Electric Fish Sensor
Our fellows experienced the peaks and valleys of research this summer, but they all came out on top! Check out their final posts for their results, posters, and other details!
- Ilya – Octopus Wrestling and Computer Vision
- Spencer – Changing Taste Perception with Optogenetics
- Jaimie – Finalizing a No-Harm, Dragonfly Visual Neuron Recording Prep
- Nathan – Real-Time Mind Reading
- Christy – Squid Hatchlings React to Changes in Light Levels
- Joud – Hacking Sleep, Memory
- Haley – Visualizing Harmonic Convergence in Mosquito Mating
A few of our fellows are staying on through this next semester for longer-term development projects! Zach will be back working with his team on the Songbird Identification Device project, and Shreya will be working through December on the Electric Fish Detector project. Expect updates on their progress soon!
So memory hacking during sleep is a thing? From endless runs back and forth to Om of Medicine chasing down my subjects, to countless hours staring at the Mona Lisa of sleep (delta waves), and many other ups and downs this summer…
I can finally tell you it is quite possible!!
Now that August is here, these are sadly my last few days at Backyard Brains! So let me come back one last time and give you a final peek at what I have been up to for the past month, along with a wrap-up of all the findings and exciting results of my research!
Since my last blog posts (Improving Memory Formation During Sleep and Learning and Deep Sleep), I have been running my study on as many subjects as I could possibly find. Along the way, we added many new features to our TMR app to help us collect data as efficiently as possible.
The GUI settings for the app look very nice now, with new colors and a more user-friendly environment!
The reference grid was changed to colored boxes as shown, and the image no longer appears within the confines of the boxes. We added this change after noticing that our participants' performance was being slightly biased by the old grid. Another exciting addition: we can now save experiment sessions within the app itself and come back later to continue from where we left off. Our exporting function was fully revolutionized. Take a look:
This is the pseudocode I wrote with Greg to organize our data in a better-structured form. We now export JSON files whose entries are easily identifiable and accessible in MATLAB for data analysis.
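To make the idea concrete, here is a minimal Python sketch of that kind of export: one flat, named record per session so every entry can be indexed by field name from MATLAB. The field names (`subject_id`, `trials`, etc.) are hypothetical, not the app's actual schema.

```python
import json

def export_session(session, path):
    """Serialize one TMR session as JSON with flat, named fields
    (hypothetical schema) so each entry is easy to index from MATLAB."""
    record = {
        "subject_id": session["subject_id"],
        "phase": session["phase"],  # e.g. "pre_sleep" or "post_sleep"
        "trials": [
            {
                "image": t["image"],
                "cued": t["cued"],
                "correct_x": t["correct_x"],  # bottom-left corner, in points
                "correct_y": t["correct_y"],
                "tap_x": t["tap_x"],
                "tap_y": t["tap_y"],
            }
            for t in session["trials"]
        ],
    }
    with open(path, "w") as f:
        json.dump(record, f, indent=2)
```

MATLAB's `jsondecode` turns a file like this into a struct array, which is why named entries beat positional columns here.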
The TMR technique is a powerful tool to play with: it lets us test the selectivity of memory consolidation in various ways, experiment with many parameters, and answer different research questions. For that, I wanted built-in controls that give the user the choice to change the parameters of sound cueing. The implementations are:
- Setting the percentage of sounds to be cued during treatment. The default I have been testing with (following all published papers) is 50% of the sounds presented in the learning phase (24 out of 48). It could be interesting to test whether cueing 0%, 25%, 75%, or 100% would hinder or enhance the effectiveness of TMR on memory consolidation.
- Manually selecting cues (and their corresponding images) if a user would like to test with a number of targets other than the default 48. Check this out:
- The most exciting part: the control experiment is now ready! To validate our results, we need to run control experiments where subjects do a continuous reaction task instead of sleeping. We embed the cues within this task as well, and test whether TMR still has an effect on memory consolidation during wakefulness. The game consists of four rounds: a 2.5-minute training phase, then three 7.5-minute testing phases. Sound cues play in the second testing round, 1.5 minutes after the start of the round. The user sees two numbers on the screen and should click only if both numbers are odd or both are even. Here is a video from the app showing how the game works:
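Two of these pieces are easy to sketch in a few lines of Python: picking a fraction of the learned sounds to replay, and the parity rule of the control-task game. This is an illustrative sketch, not the app's actual (iOS) code; the function names are mine.

```python
import random

def select_cues(sounds, cue_fraction=0.5, seed=None):
    """Randomly pick a fraction of the learned sounds to replay during
    treatment. The default 0.5 matches the published protocol (24 of 48)."""
    rng = random.Random(seed)
    n_cued = round(len(sounds) * cue_fraction)
    chosen = set(rng.sample(sounds, n_cued))
    # Preserve the original presentation order of the cued sounds.
    return [s for s in sounds if s in chosen]

def should_respond(a, b):
    """Control-task rule: respond only when both numbers share parity
    (both odd or both even)."""
    return a % 2 == b % 2
```

Exposing `cue_fraction` as a setting is what makes the 0%/25%/75%/100% comparisons above a one-line change rather than a new build of the app.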
With all these implementations in place, I was able to test the app on more subjects. This table includes the full database of all the participants I had over the summer. It was very hard finding people willing to spare the time to do the study (which takes 2.5-3 hours) in the middle of the day, so most of my subjects were fellow interns and employees at BYB, plus Ann Arbor locals who volunteered during Tech Trek or signed up through my Doodle poll.
Mean start time for slow-wave sleep was 37.5 minutes +/- 5.1 SE among all the subjects we tested who could stay asleep for 90 minutes or more (the first 8 subjects). Experimental results and post-analysis were based on data from the first 4 subjects, as they were the ones able to complete the study fully. The control experiment was conducted on the last two subjects.
Here comes the best part, what we have been waiting for:
Result: Cued sounds during SWS showed better recall after sleep than uncued sounds
This graph is pretty interesting and tells us a lot, but it might not be intuitive at first, so let's walk through it!

The change in spatial recall is measured as the difference in distance in points, calculated within the app itself. Points are units of position on iOS and other Apple devices, similar to pixels; the conversion ratio is 1 point = 0.0352778 cm. The app calculates the distance between where the user taps the screen, i.e., where they remember the image appearing (the x,y coordinates of the tap), and the original correct location of the image (the x,y coordinates of the image's bottom-left corner). The larger the distance between the two, the further the subject was from the correct location, reflecting less accurate recall and more forgetting.

This distance in points is measured for each image in both the pre- and post-sleep tests. To find the difference in performance, I subtract: after-sleep distance minus before-sleep distance. A negative number means the distance after sleep is smaller than before, indicating improved recall: the subject tapped closer to the correct image location and so remembers it better.

Grouping the data from all 4 subjects and separating the images into cued and uncued, we have 24 cued images per subject, or 96 cued images across all 4 subjects; the same applies to uncued. This gives a total of 192 images on the x-axis, cued and uncued combined. With this in mind, the graph shows the distribution within and across subjects. There is a higher concentration of blue columns with large positive differences in distance above the x-axis for the uncued images, showing that subjects forgot more of them and scored a less accurate recall.
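The two quantities described above reduce to a few lines of arithmetic. As a sketch (my own function names, not the app's), assuming tap and target positions are (x, y) tuples in points:

```python
import math

POINTS_TO_CM = 0.0352778  # 1 iOS point = 0.0352778 cm (1 in / 72 pt)

def recall_error(tap, correct):
    """Euclidean distance in points between where the subject tapped
    and the image's correct location (both as (x, y) tuples)."""
    return math.hypot(tap[0] - correct[0], tap[1] - correct[1])

def recall_change(pre_error, post_error):
    """After-sleep distance minus before-sleep distance.
    Negative = tapped closer after sleep = better recall."""
    return post_error - pre_error
```

So an image with a 10-point error before sleep and a 4-point error after sleep scores -6, the kind of value that lands below the x-axis in the cued (green) distribution.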
On the other hand, we see a higher concentration of green columns with large negative differences in distance below the x-axis for the cued images, showing less forgetting and better remembering, i.e., a more accurate recall.
This graph is much easier to read: it plots the mean of all the distance differences for the 96 cued and 96 uncued images, showing only the final overall change in recall across all subjects. We can see a pretty interesting, significant difference in performance between the two.
This graph shows another analysis of the results: the percentage correct for cued and uncued images before and after sleep, across all subjects. This is the number of images subjects placed correctly out of the 24 cued or uncued, before and after sleep. Correctness is decided by comparing the distance in points discussed above against a set threshold of 15% of the screen width, where percent of screen width is simply distance in points / width of the screen in points. The width adjusts automatically to the Apple device being used. If the distance is less than 15% of the screen width, the recall counts as correct. We can see that subjects had a higher percent correct for cued images after sleep, a lower percent correct for uncued images, and an overall higher percent correct for cued vs. uncued images after sleep.
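That thresholding rule is a one-liner; here it is as a sketch (again with hypothetical names), which also shows why the criterion scales across devices: everything is expressed as a fraction of the current screen width.

```python
def is_correct(distance_points, screen_width_points, threshold=0.15):
    """A recall counts as correct when the tap error is under 15% of
    the screen width, whatever Apple device is in use."""
    return distance_points / screen_width_points < threshold
```

On a 1024-point-wide iPad screen, for example, any tap within about 154 points of the true location counts as correct.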
Assembling all these puzzle pieces, we can conclude that we are seeing a general trend so far: the DIY version of the Targeted Memory Reactivation (TMR) technique could potentially enhance memory consolidation during SWS, with suitable applications in learning and teaching in the future. TMR appears to bias spatial associative memory consolidation by reducing the level of forgetting, more than by providing a pure gain in remembering cued images better. We definitely still need to test this on more subjects before drawing accurate conclusions about significance.
The control experiments, cueing sounds with no sleep, have been conducted on only two subjects so far. The results show the same trend as the main experiment, with slight differences.
Summary: better recall for cued images (-23.60 points +/- 13.29 SE) compared to uncued images (46.77 points +/- 21.53 SE), using a two-sample independent t-test (p = 0.007).
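For readers who want to reproduce this kind of comparison, the pooled-variance two-sample t statistic can be computed from the per-image recall changes with nothing beyond the standard library. This is a generic sketch of the test named above, not the original analysis script (which was done in MATLAB):

```python
import math
from statistics import mean, variance

def t_statistic(a, b):
    """Pooled-variance two-sample independent t statistic comparing two
    groups of recall changes (e.g. cued vs. uncued). Returns (t, df)."""
    na, nb = len(a), len(b)
    pooled = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    se = math.sqrt(pooled * (1 / na + 1 / nb))
    return (mean(a) - mean(b)) / se, na + nb - 2
```

The p-value then comes from the t distribution with `df` degrees of freedom (e.g. MATLAB's `ttest2` or `scipy.stats.ttest_ind` do both steps at once).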
It looks like performance was slightly better for the cued images and worse for the uncued ones compared to the results above. We have to keep in mind that although the results from the control experiment are significant, they come from only two subjects. More data needs to be collected; for now, though, this shows us something surprising yet reasonable! TMR appears to work both during SWS and during wakefulness. But which is better? Where does maximum memory consolidation happen? Does SWS promote consolidation of different types of memory than wakefulness does? All these questions are yet to be answered!
So, my research does not stop here; it will continue beyond my fellowship this summer with BYB. My goal is to keep collecting data and to explore the answers to the questions above, and to others that might come along the way. To do that, and to keep this research fully DIY and accessible to the public, my next step is taking the cueing of sounds during sleep to the next level: automatic cueing using machine learning! This would let users run the fully functional study on themselves by buying the Heart and Brain SpikerShield and downloading our TMR app, without needing a researcher to observe their EEG during sleep and manually cue the sounds upon detecting delta waves, as I have been doing. From there, the hope is to provide a future cloud service for participant data and to use TMR to tackle larger issues:
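To give a flavor of what "automatic cueing" could look like, here is a deliberately naive sketch: score each EEG window by the fraction of spectral power in the delta band (0.5-4 Hz) and cue when it crosses a threshold. This is purely illustrative (pure-Python DFT, made-up threshold), not the planned BYB implementation, which would likely use proper filtering and a trained classifier.

```python
import math

def delta_power(window, fs):
    """Fraction of total spectral power in the delta band (0.5-4 Hz),
    computed with a naive DFT over one EEG window sampled at fs Hz."""
    n = len(window)
    total = delta = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(window))
        im = sum(-x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(window))
        p = re * re + im * im
        total += p
        if 0.5 <= freq <= 4.0:
            delta += p
    return delta / total if total else 0.0

def should_cue(window, fs, threshold=0.6):
    """Fire a sound cue when delta activity dominates the window
    (threshold is an arbitrary placeholder, not a validated value)."""
    return delta_power(window, fs) >= threshold
```

A real system would run this (or a learned model) continuously over streaming SpikerShield data and gate the cue sounds on sustained, not momentary, delta dominance.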
Can it be used in PTSD research to help patients overcome traumatic memories? Can it be applied in educational settings to improve learning and teaching in institutions? Would it give us more insight into how our brains work when it comes to memory and potentially find a link to Alzheimer’s research?
Stay tuned! You will be hearing from me again in the near future.
Before I leave you for the summer, I would love to share some pictures from my best moments during this fellowship with my fellow interns and the BYB staff. Last week, TED visited Ann Arbor to film our projects for episodes of an internet show that will go live sometime this fall. This has by far been the best and most exciting part of this experience. We all worked so hard preparing for it, and spent long days presenting and explaining our work in front of cameras and lights! I hope you will like it and share our enjoyment soon! Yesterday, we also presented our posters at the UROP Summer Symposium at the University of Michigan; people loved my project and gave some very good feedback on future directions.
It has been a pleasure interning with BYB this summer. It was an exciting and moving journey that added valuable lessons to my academic and personal growth. I truly appreciate Greg Gage and all his support in pushing me to become a better researcher and a believer in his famous piece of advice: "skepticism is a virtue." This summer was not only about learning how to cook, code in MATLAB, deal with my best friend (EEGs), or network and get one step closer to my professional career aspirations. It was a reassuring discovery of my love for research and my passion for revolutionizing neuroscience and making it available to everyone!
See if you can spot how many times I wore my favorite lucky blue blouse! It should go down in history.
With all the awesome interns! Thank you for the greatest summer 🙂 We made great memories, shared funny moments, and got to explore Michigan together!! This is not a goodbye!!