
Finalizing a No-Harm, Dragonfly Visual Neuron Recording Prep

Welp, it’s my last day of work here at Backyard Brains! It’s been a fun 11 weeks with my fellow interns, but all things must end. Last week we wrapped up all the TED filming for our mini series episodes. I had a great time, and I’m really looking forward to seeing the final result. 

The dragonfly project ended in a good place; we have a solid amount of data from the final setup and succeeded in developing a replicable, recoverable prep. I take a dragonfly that has been in the fridge for a few hours and carefully restrain its wings back with a “helping hands” clamp covered in cloth, which prevents damage to the wings. Then I wrap the dragonfly in a cloth, leaving only its head exposed, so it can’t move and pull out the electrode wires during recording. The cloth is taped and pinned to the clamp’s cloth to hold it in place. I use silly putty to position and hold the electrode’s stick so the wires don’t come out when I prepare the recording electrodes and move the dragonfly later.

We modified one of the Backyard Brains micromanipulator electrodes so that a reference electrode takes the place of the grounding pin. On the dragonfly, I place the two electrode wires on either side of the single, exposed ventral nerve cord.

I also made a few new stimuli, all on standard-size paper. One had a fake plastic fly glued to the middle; on the other four I drew dots of various sizes in the center: 3 mm, 7 mm, 2.3 cm, and 9 cm in diameter.

I waved these papers by hand left and right, up and down, and even switched them out within the same recording to compare size preferences as well as direction. Beyond simply seeing a reaction, I was interested in the directionality of the response.

This indicates that there are certain neurons within the dragonfly’s nervous system, like the target-selective descending neurons (TSDNs), that help the dragonfly differentiate, in an almost mechanical way, what direction a target is moving. This has the advantage of removing some “post-processing” of the information, allowing the dragonfly to react more quickly and hunt its prey more efficiently. I succeeded in seeing this kind of evoked response in my trials, which was a great outcome for the project.

As you can see in the results above, as I improved my prep and experimented with new electrodes, I began to see better results. By the end, I was seeing responses in most of my preps, observing a directional bias more frequently, and finding more evidence of a size-discriminating response. By the time we presented our projects via a poster presentation on August 2nd, I had tallied my data into success rates for getting certain kinds of signals with the final prep I developed, giving students who repeat this experiment an idea of how difficult or easy it will be to see different responses.

Further, we are hoping to publish these results, but to do so, the stimuli cannot be moved by hand; the human error in timing the event markers in Spike Recorder with the movement of the stimulus is not accurate or consistent enough for a peer-reviewed journal. We built a servo-motor rig that moves the paper back and forth while simultaneously sending the event markers to the software. The rig still has a lot of problems, and I ran out of time to work on it, so if my project is continued next summer, the rig should be the focus, to really iron out the automation and precision of stimulus delivery.

That’s all from me! Thanks for reading. Dragonfly girl, signing off.


Hacking Sleep, Memory

So memory hacking during sleep is a thing? After endless runs back and forth to Om of Medicine chasing down my subjects, countless hours staring at the Mona Lisa of sleep (delta waves), and many other ups and downs this summer…

I can finally tell you it is quite possible!!

Now that August is here, these are sadly my last few days at Backyard Brains! So let me come back one last time and give you a final peek at what I have been up to for the past month, along with a wrap-up of all my findings and the exciting results of my research!

Since my last blog posts (Improving Memory Formation During Sleep and Learning and Deep Sleep), I have been running my study on as many subjects as I could possibly find. Along the way, we added many new features to our TMR app to let us collect data as efficiently as possible.

The GUI settings for the app look very nice now, with new colors and a more user-friendly environment!

The reference grid was changed to colored boxes, as shown, and the image no longer appears within the confines of the boxes. We added this change after noticing that our participants’ performance was being slightly biased by the old grid. Another exciting addition: we can now save experiment sessions within the app itself and come back later to continue where we left off. Our exporting function was also completely overhauled. Take a look:

This is the pseudocode I worked out with Greg to organize our data into a better-structured form. We now export JSON files whose entries are easily identifiable and accessible in MATLAB for data analysis.
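As an illustration of the kind of export this enables, here is a minimal Python sketch of one such JSON entry. The field names are my own placeholders rather than the app’s actual keys; files structured this way load straight into MATLAB with jsondecode.

```python
import json

# One entry per learned image (field names are illustrative placeholders).
trial = {
    "subject_id": "S01",
    "image": "owl.png",
    "cued": True,                    # was the sound replayed during SWS treatment?
    "target_x": 120.0,               # correct image location, in points
    "target_y": 340.0,
    "pre_sleep_distance": 41.2,      # tap-to-target distance before sleep (points)
    "post_sleep_distance": 28.7,     # tap-to-target distance after sleep (points)
}

# A file like this can be read in MATLAB with jsondecode(fileread('session_S01.json')).
with open("session_S01.json", "w") as f:
    json.dump({"subject": "S01", "trials": [trial]}, f, indent=2)
```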

The TMR technique is a powerful tool to play with: it lets us test the selectivity of memory consolidation in various ways, experiment with many parameters, and answer different research questions. For that, I wanted built-in controls that give the user the choice to change the parameters of sound cueing. The new options are:

  • Setting the percentage of sounds to be cued during treatment. The default I have been testing with (in line with the published papers) is 50% of all sounds presented in the learning phase (24 out of the 48). It could be interesting to test whether cueing 0%, 25%, 75%, or 100% would hinder or enhance the effectiveness of TMR on memory consolidation (a minimal sketch of this selection appears just after this list).
  • Manually selecting cues (and their corresponding images) if a user would like to test with a number of targets other than the default 48. Check this out:

  • The most exciting part: the control experiment is now ready! To validate our results, we need to run control experiments where subjects do a continuous reaction task instead of sleeping. We embed the cues within this task as well and test whether TMR still has an effect on memory consolidation during wakefulness. The game consists of 4 rounds: a 2.5-minute training phase, then three 7.5-minute testing phases. Sound cues play in the second testing round, starting 1.5 minutes after the start of the round. The user sees pairs of numbers on the screen and should tap whenever both numbers are odd or both are even. Here is a video from the app showing how the game works:
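Returning to the first option above, here is a minimal sketch of how a configurable cue percentage could be applied to the learned sound set. The function and sound IDs are illustrative only, not the app’s actual code.

```python
import random

def select_cues(sound_ids, cue_fraction=0.5, seed=None):
    """Randomly choose the subset of learned sounds to replay during treatment."""
    rng = random.Random(seed)
    n_cued = round(len(sound_ids) * cue_fraction)
    return set(rng.sample(sound_ids, n_cued))

# With the default 48 sounds and 50% cueing, 24 sounds are cued and 24 are not.
all_sounds = [f"sound_{i:02d}" for i in range(48)]
cued = select_cues(all_sounds, cue_fraction=0.5)
uncued = [s for s in all_sounds if s not in cued]
```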

With all these additions in place, I was able to test on more subjects. This table includes the full database of all the participants I had over the summer. It was very hard finding people willing to spare their time for the study (which takes up to 2.5–3 hours) in the middle of the day, so most of my subjects were fellow interns and employees at BYB, along with Ann Arbor locals who volunteered during Tech Trek or signed up through my Doodle poll.

The mean start time for slow-wave sleep was 37.5 ± 5.1 (SE) minutes across all the subjects we tested who could sleep fully for 90 minutes or more (the first 8 subjects). Experimental results and post-hoc analysis were based on data from the first 4 subjects, as they were the ones able to complete the study fully. The control experiment was conducted on the last two subjects.

Here comes the best part, what we have been waiting for:

Result: Cued sounds during SWS showed better recall after sleep than uncued sounds

This graph is pretty interesting and tells us a lot, but it might not be very intuitive at first, so let’s walk through it! The change in spatial recall is measured as a difference in distance, in points, calculated within the app itself. Points are the units of on-screen position on iOS and other Apple devices, similar to pixels; the conversion ratio is 1 point = 0.0352778 cm. The app calculates the distance between where the user taps on the screen (the x, y coordinates of the tap, i.e., where they remember the image appearing) and the original correct location of the image (the x, y coordinates of the bottom-left corner of the image). The larger that distance, the farther off the subject was from the correct location, reflecting less accurate recall and therefore more forgetting.

This distance in points is measured for each image in both the pre-sleep and post-sleep tests. To find the change in performance, I subtract the before-sleep distance from the after-sleep distance. A negative number means the distance after sleep is smaller than before, indicating improved performance and recall: the subject tapped closer to the correct image location, and so remembered it better.

Grouping the data from all 4 subjects and separating the images into cued and uncued, we have 24 cued images per subject, or 96 cued images across all 4 subjects, and likewise 96 uncued, for a total of 192 images on the x-axis. With that in mind, the graph shows the distribution across subjects. For the uncued images there is a higher concentration of blue columns with large positive differences in distance above the x-axis, meaning subjects forgot more of those images, i.e., recalled them less accurately. For the cued images, on the other hand, there is a higher concentration of green columns with large negative differences below the x-axis, showing less forgetting and more accurate recall.
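To make the measure concrete, here is a small Python sketch of the same calculation; the helper functions and coordinate values are illustrative, not the app’s actual code.

```python
import math

POINTS_TO_CM = 0.0352778   # 1 iOS point ≈ 0.0352778 cm, if you want distances in cm

def recall_distance(tap_xy, target_xy):
    """Distance in points between the subject's tap and the image's correct location."""
    return math.hypot(tap_xy[0] - target_xy[0], tap_xy[1] - target_xy[1])

def recall_change(pre_tap, post_tap, target_xy):
    """Post-sleep minus pre-sleep distance; negative means less forgetting."""
    return recall_distance(post_tap, target_xy) - recall_distance(pre_tap, target_xy)

# Example: a tap that lands closer to the target after sleep gives a negative change.
print(recall_change(pre_tap=(150, 380), post_tap=(128, 348), target_xy=(120, 340)))
```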

This graph is much easier to read: it takes the mean of all the distance differences for the 96 cued and 96 uncued images and plots them, showing only the final overall change in recall for all subjects grouped together. We can see a pretty interesting, significant difference in performance between the two.

Summary: Better recall for cued images (-12.95 points ± 15.80 SE) compared to uncued images (33.09 points ± 16.26 SE), using a two-sample independent t-test (p = 0.04).
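For reference, the comparison reported above takes only a couple of lines to run; the arrays below are random placeholders standing in for the 96 per-image differences in each group, not the real data.

```python
import numpy as np
from scipy import stats

# Placeholder arrays standing in for the 96 per-image distance differences
# (post-sleep minus pre-sleep) per group; the real values come from the app's export.
rng = np.random.default_rng(0)
cued_diffs = rng.normal(loc=-13, scale=150, size=96)
uncued_diffs = rng.normal(loc=33, scale=150, size=96)

t_stat, p_value = stats.ttest_ind(cued_diffs, uncued_diffs)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```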

This graph shows another analysis of the results: the percentage correct for cued and uncued images, before and after sleep, for all subjects. This is the number of images subjects got correct out of the 24 cued or 24 uncued, before and after sleep. Correctness is decided by comparing the distance in points discussed above against a set threshold of 15% of the screen width. The percentage of screen width is simply the distance in points divided by the width of the screen in points; the width adjusts automatically to whichever Apple device is being used. If the distance is less than 15% of the screen width, the response counts as correct. We can see that subjects had a higher percentage correct for cued images after sleep, a lower percentage correct for uncued images, and overall a higher percentage correct for cued vs. uncued images after sleep.
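In code, the criterion amounts to a simple threshold check; again, this is an illustrative sketch rather than the app’s implementation.

```python
def is_correct(distance_points, screen_width_points, threshold=0.15):
    """A tap counts as correct if it lands within 15% of the screen width of the target."""
    return (distance_points / screen_width_points) < threshold

def percent_correct(distances_points, screen_width_points):
    """Percentage of images recalled correctly under the threshold above."""
    hits = sum(is_correct(d, screen_width_points) for d in distances_points)
    return 100.0 * hits / len(distances_points)
```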

Assembling all these puzzle pieces, we can conclude that we are seeing a general trend so far indicating the following: the DIY version of the Targeted Memory Reactivation (TMR) technique could potentially enhance memory consolidation during SWS and have useful applications in learning and teaching in the future. TMR appears to bias spatial associative memory consolidation mainly by altering the level of forgetting, rather than providing a pure gain in remembering cued images better. We definitely still need to test this on more subjects before drawing firm conclusions about significance.

The control experiments, in which sounds are cued without sleep, have been conducted on only two subjects so far. The results show the same trend as the main experiment, with slight differences.

Summary: Better recall for cued images (-23.60 points ± 13.29 SE) compared to uncued images (46.77 points ± 21.53 SE), using a two-sample independent t-test (p = 0.007).

It looks like performance was slightly better for the cued images, and worse for the uncued ones, compared to the results above. We have to keep in mind that although the results from the control experiment are significant, they come from only two subjects. More data needs to be collected; for now, though, this shows us something surprising yet reasonable! TMR appears to work both during SWS and during wakefulness. But which is better? Where does maximum memory consolidation happen? Does SWS promote consolidation of different types of memory compared to wakefulness? All such questions are yet to be answered!

So my research does not stop here; it will continue beyond my fellowship this summer with BYB. My goal is to keep collecting data and to explore the answers to the questions above and others that come along the way. To do that, and to continue making this research fully DIY and accessible to the public, my next step is taking sound cueing during sleep to the next level: automatic cueing using machine learning! This would allow users to run the full study on themselves by buying the Heart and Brain SpikerShield and downloading our TMR app, without needing a researcher to watch their EEG during sleep and manually cue the sounds upon detecting delta waves, as I have been doing. With that in place, the hope is to provide a future cloud service for customer data and to use TMR to tackle larger issues:

Can it be used in PTSD research to help patients overcome traumatic memories? Can it be applied in educational settings to improve learning and teaching in institutions? Would it give us more insight into how our brains work when it comes to memory and potentially find a link to Alzheimer’s research?
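As a very rough illustration of the automatic cueing mentioned above, here is a sketch of delta-band detection on a window of EEG. The sampling rate, window length, and threshold are all assumptions, and the eventual implementation (possibly machine-learning based) may look quite different; this just mirrors what I have been doing by eye.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def delta_power(eeg_window, fs):
    """Mean power of an EEG window after band-passing to the delta band (0.5-4 Hz)."""
    b, a = butter(4, [0.5, 4.0], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, eeg_window)
    return float(np.mean(filtered ** 2))

def should_cue(eeg_window, fs, threshold):
    """Play sound cues only while slow-wave (delta) activity exceeds a per-subject threshold."""
    return delta_power(eeg_window, fs) > threshold
```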

Stay tuned! You will be hearing from me again in the near future.

Before I leave you for the summer, I would love to share some pictures from my best moments during this fellowship with my fellow interns and the BYB staff. Last week, TED visited Ann Arbor to film our projects for episodes of an internet show that will go live sometime this fall. This has been by far the best and most exciting part of the experience. We all worked so hard preparing for it and spent long days presenting and explaining our work in front of cameras and lights! I hope you will enjoy it and share our excitement soon! Yesterday, we also presented our posters at the UROP Summer Symposium at the University of Michigan; people loved my project and gave some very good feedback on future directions.

It has been a pleasure interning with BYB this summer. It was an exciting and moving journey that added valuable lessons to my academic and personal growth. I truly appreciate Greg Gage and all his support in pushing me to become a better researcher and a believer in his famous piece of advice: “skepticism is a virtue.” This summer was not only about learning how to cook, code in MATLAB, deal with my best friend (EEGs), or network and get one step closer to my professional career aspirations. It was a reassuring discovery of my love for research and my passion for, quite literally, revolutionizing neuroscience and making it available to everyone!

See if you can spot how many times I wore my favorite lucky blue blouse! It should go down in history.

With all the awesome interns! Thank you for the greatest summer 🙂 We made great memories, had plenty of funny moments, and got to explore Michigan together!! This is not a goodbye!!


Changing Taste Perception with Optogenetics

Hey everyone! My summer of research in Ann Arbor has come to an end, and it’s been an awesome experience. It’s been a busy 10 weeks of making daily improvements to my rig, resoldering the flyPAD, collecting data, and presenting what I found to others. The original goal of this project was to see whether altering taste perception is possible by activating taste neurons with light, a technique called optogenetics. To test this, I stimulated channelrhodopsin expressed in the fruit fly neurons that give a sweet taste response.

If you missed them, here are my first post, Optogenetics with the flyPAD, and my second post, The Taste Preferences of Fruit Flies.

The FlyPAD setup in its full glory

Naturally, fruit flies prefer eating sugary foods over unsweet ones, much like humans. This was the case when I offered them banana, a sweet fruit, versus avocado, broccoli, and Brussels sprouts, the unsweet alternatives: the flies always preferred banana over anything else. However, when Arduinos were programmed to pulse red light at the flies the instant they sipped the unsweet foods, their Gr5a neurons were activated, tricking them into tasting what they were eating as sweet. The data are shown below as bar graphs of the average number of sips and of sip percentage, to show how food choice preference changed.
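Here is a small sketch of how the two measures in those bar graphs can be computed from per-fly sip counts; the data structure and numbers are made up for illustration, since the real counts come from the flyPAD recordings.

```python
import numpy as np

# Hypothetical sip counts per fly for each food channel (illustrative numbers only).
sips = {"banana": [120, 95, 140], "avocado": [20, 35, 15]}

# Average number of sips per food across flies.
avg_sips = {food: float(np.mean(counts)) for food, counts in sips.items()}

# Each food's share of all sips, in percent.
total_sips = sum(sum(counts) for counts in sips.values())
sip_pct = {food: 100.0 * sum(counts) / total_sips for food, counts in sips.items()}

print(avg_sips)
print(sip_pct)
```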

As we see here, the flies naturally prefer banana over avocado

But this preference switched when stimulation of channelrhodopsin activated their sweet-tasting (Gr5a) neurons

Flies, naturally, REALLY prefer banana over broccoli

 

The stark preference we saw earlier disappeared, and the flies ate some of both foods: more of the newly sweet-tasting broccoli and less of the banana.

Again, we see that banana wins the prize naturally.

 

And again, with stimulation, we see the sweet and the non-sweet options begin to level out

 

So, changing the subjective perception of taste is possible, as we could make a fly’s least preferred food become its absolute favorite! These findings show that subjective perception is alterable, and also that optogenetics is a neuroscience technique that can be done with minimal, affordable equipment.

If I end up continuing work on this project, I am interested in seeing how long the flies’ altered preference can persist. Anecdotally, I have seen that when the LED lights stop working, some flies continue to visit the unsweet food they were tricked into tasting as sweet. This wasn’t within the scope of my summer research, but experiments on it would be interesting: they could reveal how powerful optogenetics is if a change in food choice preference persists after the stimulation trials have stopped.

After finding these results, I compiled them into a poster, which I recently presented at a UROP (Undergraduate Research Opportunity Program) symposium at the University of Michigan. It was fun explaining my summer’s work to the public and other researchers. Got a ribbon for it, too!

Call me “Blue Ribbon”

A close up of my poster!

Aside from collecting data in the lab, I also had the chance to showcase my project with TED for their upcoming series of episodes focused on the Backyard Brains research fellows’ projects. I was able to conduct experiments for them and give step-by-step walkthroughs of how they are carried out. Stay tuned for their posts this fall to catch our episodes!

Getting filmed

Huge thanks to Greg for mentoring me this summer and introducing me to the world of Neuroscience research in the coolest way possible with BYB.

Thank you so much to Backyard Brains for giving me this amazing opportunity and to all the research fellows who made it a really fun summer!