
Visualizing Harmonic Convergence in Mosquito Mating

Wow, what a summer!!! I have some exciting news to report…I didn’t get bit by ONE mosquito all summer!!! Just kidding, my project is a little more exciting than that! I did it! I successfully put together and executed a project that I was a little iffy about back in May, and developed a new-found love for mosquitoes [fake news, don’t tell them I said that!]. I now like to be referred to as the mosquito whisperer, so if you see me on the streets, I will not respond to any other name.

But now, let's get to the good stuff! Last time you heard from me, I was getting ready to start recording male/female pairs of mosquitoes. Now, I have about 7,000 audio and video recordings of these interactions, and I couldn't be happier with the data I collected! The goal for this stage of my research was to observe whether or not mosquitoes actually communicate with one another to signal their interest in mating, or basically flirt. Below are visual results of this behavior from the previous study.

For my own recordings, I was able to detect the presence of these interactions by importing my audio files into a computer program called Audacity. Within this program, I could convert each sound file into a spectrogram, which clearly showed me the frequencies produced by the mosquitoes in the recording. What the heck am I talking about, you ask?? Below is one example of a recording spectrogram that revealed a converging interaction!
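If you want to poke at your own recordings the same way outside of Audacity, here is a tiny MATLAB sketch of that spectrogram view (the filename is a placeholder, and the window settings are just reasonable defaults, not my exact ones):

    % Load a recording and plot its spectrogram (Signal Processing Toolbox).
    [y, fs] = audioread('pair_recording.wav');    % placeholder filename
    y = y(:, 1);                                  % use one channel if stereo
    spectrogram(y, 1024, 512, 1024, fs, 'yaxis')  % 1024-sample windows, 50% overlap
    title('Male/female pair recording')           % the harmonics sit below ~2 kHz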

But before I get into explaining the scary pink and blue stuff above, let's talk about how I got these recordings in the first place; that's the fun part (minus the 500 times mosquitoes got loose in the lab and attacked all of my friends…losers)! About midway through the summer, I changed some of my methods to make my procedure a little easier and reduce the number of casualties caused by pinning my little friends onto insect pins…yeah, they were not happy with me when they woke up from their nap to find themselves stuck to a wire…but you've got to do what you've got to do for science!!!!!

At the beginning of the summer, I was using insect wax (a yummy combination of beeswax and rosin) to fix these guys to their new home, but it turned out that the wax wasn't strong enough to keep the mosquitoes in place when they woke up, and more often than not, they flew right off of the pin and straight for my face. So, I decided to try pinning them with a tiny amount of super glue, and it worked magically! The trick was to touch the glued side of the pin to the mosquito's thorax (pictured below) instead of the abdomen, which is where I had been attempting to pin them when I was using the insect wax. When I tried to pin the abdomen with super glue, sometimes their wings would get stuck to the pin, making it a little bit difficult to get a good recording when their wings couldn't move… The thorax, instead, provided the perfect amount of surface area for the pin without interfering with their antennae or wings at all.

Once I adapted this method, pinning them was a breeze! I kid you not, I could probably pin 20 mosquitoes within 30 seconds. You’re impressed, I know, I was too…Below are a few examples of my mad skills.


Don’t they look so comfortable and happy!? Next, I set up my recording stands, which were actually 3D printed ‘micro-manipulators’ designed by Backyard Brains! My company is so cool… These stands were used to fix the mosquitoes, with the help of some silly putty, for the duration of the experiment. They were perfect.


Now I was ready to record!! Below is a beautiful video of one of my experiments (I'm a little proud of myself, can you tell?). Make sure you turn on your sound!!


How creepy is that??? These noises will be burned into my brain for the rest of my life! But isn’t it also super cool? You can definitely hear the difference in sound between the two sexes, but can you hear when they begin converging?? Listen again.

If you're thinking that it happens roughly 20 seconds into the video and lasts about 15 seconds, you're right!! But just to be safe and make sure that the noises we were hearing were indeed interactions, I imported both files into MATLAB for a closer look.

Here you can see the two different frequencies of the female and male (though there is a bit of noise blocking the female's fundamental frequency). The key to detecting an interaction is to look at the higher frequencies, up in the harmonics around 1200 Hz, because this is where convergence will normally occur. And lucky for us, it did! On camera! I was so excited I just about packed up and called it a day, but I really wanted to see some more interactions, so I pinned 8 million more mosquitoes and got down to business! In the end, I successfully recorded both audio and video of 49 male/female pairs, observing interactions in 33 of them! That means, in my small sample, the pairs communicated a love interest to one another 67% of the time! Gross, get a room!!!!
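For the curious, here is one simplified way to see that convergence in numbers. This is a sketch, not my actual analysis script: it tracks the strongest frequency in a band around 1200 Hz in 100 ms windows, so the male and female harmonics bounce between two values until the pair converges onto one (the filename is a placeholder):

    % Track the dominant frequency near the shared ~1200 Hz harmonic.
    [y, fs] = audioread('pair_recording.wav');    % placeholder filename
    y = y(:, 1);
    win  = round(0.1 * fs);                       % 100 ms analysis windows
    nWin = floor(numel(y) / win);
    f    = (0:win-1)' * fs / win;                 % FFT bin frequencies
    band = f >= 1000 & f <= 1400;                 % search around 1200 Hz
    fBand = f(band);
    peakFreq = zeros(nWin, 1);
    for k = 1:nWin
        seg = y((k-1)*win + (1:win));
        P   = abs(fft(seg(:) .* hann(win))).^2;   % power spectrum of this window
        [~, i] = max(P(band));
        peakFreq(k) = fBand(i);                   % strongest tone in the band
    end
    plot((1:nWin) * 0.1, peakFreq, '.')
    xlabel('Time (s)'); ylabel('Peak frequency near 1200 Hz')

During a convergence event, the plotted points settle onto a single steady frequency for a second or two, matching what the spectrogram shows.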

Nearing the end of my time in Ann Arbor, I finally finished recording, throwing in the towel on my beloved new hobby, and I was ready to start processing my data in the hope of making it a little more 'Hollywood,' as Greg would say! Little did I know, this process wasn't as appealing as I first thought, and on multiple occasions I considered playing with some more mosquitoes just to get away from the madness known as MATLAB. Lucky for me, I had a MATLAB expert living with me (hmmm…maybe that's why we became best friends: she couldn't escape me anytime I opened my computer to work!). Christy helped me create the most magical, color-coded, satisfying, and all-around perfect video of not only my little buddies interacting, but also a spectrogram underneath that played in perfect sync with the original video recording! Brace yourselves…you will never see anything more beautiful in your life…


If you caught yourself replaying it multiple times, don’t fret, as you will catch me playing it periodically throughout the day just for fun. I’m not a nerd. But look, I was successful!!!
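For anyone wondering how a video like that gets stitched together, here is a rough MATLAB sketch of the idea (my simplified reconstruction, not Christy's actual script; filenames are placeholders). The trick is to compute the spectrogram once up front, then redraw the current video frame and slide a time cursor for every frame written out:

    % Build a video: original recording on top, synced spectrogram below.
    [y, fs] = audioread('pair_recording.wav');        % placeholder filenames
    y = y(:, 1);
    vin  = VideoReader('pair_recording.mp4');
    vout = VideoWriter('synced_output.avi');
    vout.FrameRate = vin.FrameRate;
    open(vout)

    [S, F, T] = spectrogram(y, 1024, 512, 1024, fs);  % compute once up front
    fig = figure;
    ax1 = subplot(2, 1, 1);
    ax2 = subplot(2, 1, 2);
    imagesc(ax2, T, F, 10*log10(abs(S).^2))           % power in dB
    axis(ax2, 'xy'); ylim(ax2, [0 2000])              % harmonics live below 2 kHz
    xlabel(ax2, 'Time (s)'); ylabel(ax2, 'Frequency (Hz)')
    cursor = xline(ax2, 0, 'r');                      % the moving sync cursor

    t = 0;
    while hasFrame(vin)
        imshow(readFrame(vin), 'Parent', ax1)         % current video frame on top
        cursor.Value = t;                             % slide the cursor in sync
        writeVideo(vout, getframe(fig))
        t = t + 1/vin.FrameRate;
    end
    close(vout)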

We also presented our research at a poster symposium at the University of Michigan!

So now is about the time where we wrap up!!! Ah, don't make me leave!!!! But I am so happy with the work I produced this summer, and I feel so lucky that I got the chance to be part of this program. Greg Gage, you are the best boss I have ever had (don't tell that to my dad, since he's the only other boss I've had…), and I will be forever thankful for the impact you had on my life as not only a researcher but also an individual. I love you and your family to pieces, especially your little ones, who taught me all about Peppa Pig and are still convinced my name is 'Dirt'. Wonder where they got that…cough, cough, Christy. I already miss you guys, and I haven't even left Ann Arbor yet! I'd also like to thank all of the staff at Backyard Brains (Stanislav, Zorica, Will, Zach, Caty, Catherine and John), who made my time here so worthwhile and comfortable. I never felt alone, even when my MATLAB would crash, or when my fellow interns would shun me for letting some mosquitoes loose in the lab…

And last but not least, thank you to all of the BYB interns that made this summer one for the books! You will all be a part of my life forever, and I can’t wait to see where our lives take us once we leave each other this evening. You’re all such wonderful people, and I couldn’t have asked for better friends. Love you guys!!

Backyard Brains forever!!!! (Tattoo idea, interns?????)


Squid Hatchlings React to Changes in Light Levels

Hi everyone! The summer is finally coming to a close, and I am excited to share all that I have learned from my time as a Backyard Brains fellow. If you've been here since the beginning, thank you so much for following along with me and my squiddos for the past ten weeks, and if not, feel free to check out my previous two blog posts to see how we got to this point.

If you missed them… my first post Behavioral Study of Baby Squids and my second post Recording the Behavior of Squid Hatchlings

Honestly what even happened to me this summer

At the end of my last blog, I had finally gotten a recording box setup that worked well for me, and software that successfully tracked the squid over time. I was pondering what I wanted my final experiment with the squid to be and what I would test them for as they developed. I think the most interesting part of the project was being able to relate my experiments to actual conditions that the squid would experience in the ocean, so that we could reveal something about the early lives of longfin inshore squid.

With that in mind, I first asked a very simple question: do the squid prefer to be at the surface of the water or further down? To test this, I first put LEDs above the squid in the experimental tank and set them to produce very low light, around 100 lux. I found that the squid stayed right up at the top of the tank and appeared very attracted to the light. Next, I ordered a really intense flashlight off of Amazon, which, according to the product description, can be used for 'calamity search and rescue.' Super hardcore. I proceeded to blind all my intern friends and abuse the strobe function on the flashlight before getting down to experimenting.

Sunlight on a very bright day is around 120,000 lux, which is around the intensity of my flashlight. By placing it on top of the squid tank, I mimicked the experience the squid would have should they swim right at the surface of the water. What I saw was pretty impressive: the squid dropped dramatically in the water and hung around the bottom until the light was turned off. This was especially interesting to me because, as you might have read in my previous post, squid are strongly negatively geotactic. This means that they prefer not to be low in the water column if they can help it, so seeing this reaction must mean that the squid have a very strong instinct to stay away from bright sunlight.


I saw this same behavior across different ages of squid and I believe that it is a pretty general response. While this was an interesting and informative experiment, it seems a little broad. We’ve learned that squid of all ages don’t like SUPER SUPER bright light and they do like SUPER SUPER dim light, but what about the kinds of light in between?

This is where my idea to film the squid moving right to left came into play. I became very obsessive about checking my eggs every hour to see if they had hatched, so I could be sure to test the squid under different conditions in their very first hours of life. I tested each group with 100 lux, 600 lux and 1,200 lux light shining from the right. I first tested the groups within 24 hours of hatching, and tested them again when they were 48 hours old.

The first result was similar to what I learned from the surface-light experiment: the squid are attracted to 100 lux light. In both age groups, the squid moved strongly towards the LED as soon as it turned on and clustered around the light for the entire duration of the video.
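To put a number on that clustering, you can take the positions that the tracking software spits out and watch the group's average position drift toward the LED. A minimal MATLAB sketch, with an entirely made-up file name and layout (one row per video frame, one column per tracked squid, larger x meaning closer to the light):

    % Quantify attraction: the group's mean horizontal position over time.
    xs = readmatrix('tracked_positions.csv');          % hypothetical tracker export
    frameWidth = 1280;                                 % assumed frame width, pixels
    fps = 30;                                          % assumed frame rate
    attraction = mean(xs, 2, 'omitnan') / frameWidth;  % 1 = at the LED, 0 = far side
    t = (0:size(xs, 1) - 1) / fps;
    plot(t, attraction)
    xlabel('Time (s)'); ylabel('Mean position toward the LED (fraction of tank)')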

The result of the next experiment was totally different. Under 600 lux light, the newly hatched squid responded essentially at random and were not attracted to the light source at all. After two days, though, the same group of squid responded very strongly to the light and moved right towards it.


Under 1,200 lux light, the young squid also did not respond, and the older squid were somewhat attracted to the light, but less than they were to the 600 lux light.

So what does all of this mean??? We can only guess. It is apparent that squid of all ages are attracted to dimly lit conditions, probably because these are safe locations for avoiding predators. The differentiation of response is more interesting, however. To understand why younger squid are warier of brighter light, we can consider the early migration patterns of the squid. While they are born on the coastal shelf of the ocean, the squid quickly move out towards the open ocean within their first few days of life.

As you can see in the diagram, there is a significant difference between how light penetrates the water on the coastal shelf and how it penetrates the water in the open ocean. When the squid are born in the murky waters off the coast, even moderately strong light probably signals to them that they are dangerously close to the surface of the water. A few days later, when they move to the open ocean, light penetrates more deeply, so moderately strong light probably still represents a safe distance from the surface.

Although these hypotheses could be totally wrong, the behavioral development that is apparent here is an extremely interesting model of infant reflexes and their changes over time. In human babies, we see strong survival behaviors, such as the rooting reflex, that go away with time. This same general concept can be seen in the squid hatchlings, and we could perhaps work to study the mechanisms for how these instincts develop genetically in an organism. NEW PROJECT DIRECTION!??

And, if you missed it, I made and presented a poster! Check it out here if you’re interested in a more formal presentation of my results:

Anyway, this is almost the end of my last blog as a BYB fellow, and if you want to peace out before things get sappy/grateful/nostalgic, now would be the time 😉


Are we happy to be done filming or are we still arguing about the merits of subplot(‘Position’) vs subsubplot??

BUT ANYWAY I am so grateful to have had the opportunity to work at Backyard Brains this summer and for all the amazing experiences I’ve had here. Infinite thanks to Greg for taking the time to work with and teach us so much this summer and for his dedication to broadening the availability of neuroscience education. I will miss your excellent renditions of the entire soundtrack from The Book of Mormon and being your (unconfirmed) favorite intern.

Thank you also to the entire staff at Backyard Brains (looking at you, Stan, Will and Zach) for being there every time I got stuck or frustrated this summer and for providing helpful suggestions and thumbs ups. Same to my fellow interns. You're all like siblings to me, and I am so grateful to have gotten to know such a quality group of people; I know you'll all do amazing things in life. The open-apartment Friday night policy applies forever, so please come visit me.

TO CONCLUDE this summer has been incredible. It has really deepened my passion for neuroscience and experimentation, and introduced me to the satisfaction of DIY science. I can only hope that the rest of my career involves so much gratifying research and such a wonderful community of scientists and makers. TO THE NEUROREVOLUTION!!


Promising Results in Detecting the Detection of Faces in the Brain!

G’day again! I’ve got data… and it is beautiful!

More on this below… I am pleased to share an update on my BYB project: human EEG visual decoding!

If you missed it, here’s the post where I introduced my project!

Since my first blog post, I have collected data from 6 subjects with the stimulus presentation program I developed. The program presents 5 sets of 30 images from 4 categories (Face, House, Natural Scene, Weird pictures). Since the images are randomized, I place small, color-coded blocks in the corner of each image, which I use to record which stimulus is presented when.
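As a sketch of how that works (not my actual program; the filenames, colors, and timing below are placeholders), the presentation loop just shuffles the images and stamps each one with its category's color block for the light sensor to read:

    % Randomized image presentation with a color-coded sync block (MATLAB).
    files  = {'face1.jpg', 'house1.jpg', 'scene1.jpg', 'weird1.jpg'};  % placeholders
    colors = {[1 0 0], [0 1 0], [0 0 1], [1 1 0]};   % one color per category
    order  = randperm(numel(files));                 % randomize the sequence
    figure('Color', 'k')
    for i = order
        imshow(imread(files{i}))
        % Stamp the category's color block in the corner; the light sensor
        % taped over this spot records which stimulus is on screen, and when.
        rectangle('Position', [1 1 60 60], 'FaceColor', colors{i}, 'EdgeColor', 'none')
        drawnow
        pause(1)                                     % assumed 1 s presentation time
    end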

I needed to build a light sensor to read the signals from these colored blocks. I used a photoresistor at first; however, there was some delay in the signal, so I decided to use photodiodes, which have a faster response. Since I do not have an engineering background, I had to learn how to read circuit diagrams and how to solder in order to build the light sensor. This was new territory for me, but it was very interesting and motivating. After building the device, I collected data from 6 subjects from 5 brain areas (CPz, C5, C6, P7, P8) that are thought to be important in measuring brain signals related to visual stimulus interpretation.

Figure 1. Data recorded from the DIY EEG gear: 5 channels from 5 brain areas (orange, red, yellow, light green, green) and 1 channel from the photoresistor (aqua), which was later replaced by a photodiode.

Figure 2. The circuit for the photodiode (top) and the photodiodes I built (bottom).

Figure 3. Checking each channel from the Arduino. One channel (yellow), placed over the back of the head, is detecting alpha waves (10 Hz).

Figure 4. Spencer (top/middle) and Christy (bottom), our coolest interns, participating in the experiment.
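The light-sensor channel is what makes the analysis below possible: to line the EEG up with each image, you find stimulus onsets as rising edges on that channel. A minimal MATLAB sketch, assuming the photodiode trace is a column vector photo sampled at fs:

    % Find stimulus onsets as rising edges on the photodiode channel.
    thresh = 0.5 * max(photo);                  % assumed threshold: half of peak
    above  = photo > thresh;
    onsets = find(diff(above) == 1) + 1;        % samples where the light turns on
    % Ignore re-triggers within 100 ms of the previous onset (flicker guard).
    onsets = onsets([true; diff(onsets) > 0.1 * fs]);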

With the raw EEG data collected from each subject, I averaged the trials to get the ERP (Event-Related Potential) and observe what the device had detected. ERPs provide a continuous measure of processing between a stimulus and a response, making it possible to determine which stages are affected by a specific experimental manipulation. They also provide excellent temporal resolution: the speed of ERP recording is constrained only by the sampling rate that the recording equipment can feasibly support. Thus, ERPs are well suited to research questions about the speed of neural activity.
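Concretely, getting from raw EEG to an ERP is a cut-and-stack operation: slice a window of EEG around every onset, baseline-correct it, and average across trials. A sketch with assumed variable names (eeg as samples x 5 channels, plus the onsets found above):

    % Epoch the EEG around each stimulus onset and average to get the ERP.
    pre  = round(0.2 * fs);                     % 200 ms baseline before stimulus
    post = round(0.6 * fs);                     % 600 ms after (covers the N170)
    nTrials = numel(onsets);
    epochs  = zeros(pre + post + 1, size(eeg, 2), nTrials);
    for k = 1:nTrials
        ep = eeg(onsets(k) - pre : onsets(k) + post, :);
        ep = ep - mean(ep(1:pre, :), 1);        % baseline-correct each channel
        epochs(:, :, k) = ep;
    end
    erp  = mean(epochs, 3);                     % average across trials
    t_ms = (-pre:post) / fs * 1000;
    plot(t_ms, erp)
    xlabel('Time (ms)'); ylabel('Amplitude')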

Then I performed Monte Carlo simulations to verify the statistical significance of the peaks in the ERP data. Monte Carlo methods are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. With 100 random samples for each category, the analysis indicated statistically significant peaks across the graph, especially the N170 for face images, which was very meaningful for my research. The N170 is an ERP component that reflects the neural processing of faces, so this result supports that we detected faces well across subjects compared to the other categories.
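My reconstruction of that test, as a sketch rather than the exact script: build a null distribution by averaging randomly drawn trials (ignoring category) 100 times, then ask how often a random average dips as low in the N170 window as the face average does. The variable names follow the epoching sketch above and are assumptions:

    % Monte Carlo check on the face N170: is the dip bigger than chance?
    nBoot = 100;                                    % 100 random samples per category
    win   = t_ms >= 150 & t_ms <= 200;              % window around 170 ms
    nFace = size(faceEpochs, 3);                    % assumed: time x channel x trial
    observed  = min(mean(mean(faceEpochs(win, :, :), 3), 2));   % face N170 dip
    allEpochs = cat(3, faceEpochs, houseEpochs, sceneEpochs, weirdEpochs);
    null = zeros(nBoot, 1);
    for b = 1:nBoot
        pick = randsample(size(allEpochs, 3), nFace);           % random trial draw
        null(b) = min(mean(mean(allEpochs(win, :, pick), 3), 2));
    end
    p = mean(null <= observed)                      % fraction at least as extreme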


Figure 5. ERP data from 6 subjects for each category of images. A significant response at the N170 (a negative peak about 170 ms after stimulus presentation) is detected for faces.

After verifying the statistical significance of the data, I used k-means clustering, a method of vector quantization that aims to partition n observations into k clusters in which each observation belongs to the cluster with the nearest mean, which serves as a prototype of the cluster. K-means clustering indicated that the difference between subjects was more significant than the difference between trials, and that the difference between trials was more significant than the difference between categories. And, much to my excitement, it was obvious that the response to faces was distinguishable from the other categories across the averaged data sets.
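A minimal version of that clustering step, assuming each trial has been flattened into one row of X (channels concatenated over time) and labels holds the true category, 1 through 4:

    % Cluster trials with k-means and compare assignments to the labels.
    k = 4;                                               % one cluster per category
    [idx, C] = kmeans(zscore(X), k, 'Replicates', 10);   % 10 restarts for stability
    crosstab(idx, labels)                                % cluster vs. category counts
    % Repeating the crosstab against subject and trial labels is what shows
    % subject differences dominating trial differences, which in turn
    % dominate category differences.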

With the insights from k-means clustering, I finally applied the machine learning techniques I'd been studying to see how accurately I could classify which category of image people were looking at during the experiment, using the raw data. I tried the most popular pattern classifiers, such as linear, quadratic, and cubic support vector machines, complex trees, Gaussian kernels, k-NN, and so on. I used these methods on a single subject and on the set of 6 subjects, with and without averaging every 5, 10, 15, 20, 25, 30, 50, 75, or 150 vectors of EEG data. The support vector machine showed the best performance among the classifiers, with more than 50% accuracy for each class, and the averaged data performed better, as expected.
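The averaging itself is simple: within each category, replace every run of 5 raw EEG vectors with their mean, then train a multiclass SVM on the averaged rows. A sketch under the same assumed X and labels layout as before:

    % Average every 5 EEG vectors within a category, then train an SVM.
    navg = 5;
    Xavg = []; yavg = [];
    for c = 1:4
        Xi = X(labels == c, :);
        for g = 1:floor(size(Xi, 1) / navg)
            rows = (g - 1) * navg + (1:navg);
            Xavg(end + 1, :) = mean(Xi(rows, :), 1);    % one averaged vector
            yavg(end + 1, 1) = c;
        end
    end
    mdl = fitcecoc(Xavg, yavg, ...                      % multiclass SVM via ECOC
                   'Learners', templateSVM('KernelFunction', 'linear'));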

One Subject Raw

One Subject Averaged


Six Subjects Raw

Six Subjects Averaged

Figure 6. K-means clustering results with averaging every 5, 10, 20, 50, 75 vectors of the EEG data for a single subject (first 2 graphs) and 6 subjects (last 2 graphs). The y axis indicates the 4 categories of images (1: Face, 2: House, 3: Natural Scene, 4: Weird pictures), further illustrated by the red lines. The graphs from 6 subjects indicate that combining multiple subjects introduces too much variation to identify faces within the group. However, the graphs from a single subject indicate that faces can be distinguished from the other three categories.

Again, with the insights from k-means clustering and the machine learning classifiers I mentioned before, I then applied 5-fold cross-validation, with and without averaging every 5 vectors of EEG data. In 5-fold cross-validation, the data set is divided into five disjoint subsets; four subsets are used as the training set, and the remaining one is used as the test set. Once more, the SVM showed the best performance among the classifiers, with more than 50% accuracy for each class, and the averaged data performed better, as expected.
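Scripted, the cross-validation is a single call on the fitted model from the previous sketch (again a reconstruction, not my exact workflow):

    % 5-fold cross-validation of the multiclass SVM.
    cv  = crossval(mdl, 'KFold', 5);              % five disjoint folds
    acc = 1 - kfoldLoss(cv)                       % overall held-out accuracy
    confusionchart(yavg, kfoldPredict(cv))        % per-class accuracy on the diagonal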

One subject, SVM, no averaging

One subject, SVM, averaging 5

Six subjects, SVM, no averaging

Six subjects, SVM, averaging 5

Figure 7. Results from pattern classification with the SVM. Both one subject and 6 subjects achieved good results when averaging every 5 vectors of the EEG data, which produced better results than no averaging, and the single subject produced better results than the 6 subjects. (The darker the green down the diagonal, the better; that is the accuracy of predicting each class.)

So now I am working on real-time pattern classification, so that I can detect what people are looking at without averaging multiple sets of data. I will perform spectral decomposition to compute and downsample the spectral power of the re-referenced EEG around each trial. The spectral features from all of the electrodes will be concatenated and used as inputs to pattern classifiers. The classifiers will be trained to recognize when each stimulus category is processed as the target image in real time; a separate classifier will be trained for each combination of stimulus category and time bin. Next, the trained classifiers will be used to measure how strongly the prime distractor image is processed on each trial. Finally, subjects' RTs (to the probe image) on individual trials will be aligned to the classifier output from the respective trials.
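As a sketch of what those spectral features might look like (the band edges and layout are my working assumptions, not settled parameters), here is Welch power in a few standard bands for each channel, concatenated into one vector per trial, reusing the epochs array from the ERP sketch above:

    % Per-trial spectral features: band power per channel, concatenated.
    bands = [4 8; 8 12; 12 30; 30 50];              % assumed theta/alpha/beta/gamma
    nChan = size(epochs, 2);
    features = zeros(nTrials, nChan * size(bands, 1));
    for k = 1:nTrials
        [pxx, f] = pwelch(epochs(:, :, k), [], [], [], fs);   % per-channel spectra
        col = 0;
        for b = 1:size(bands, 1)
            inBand = f >= bands(b, 1) & f < bands(b, 2);
            for ch = 1:nChan
                col = col + 1;
                features(k, col) = mean(pxx(inBand, ch));     % mean band power
            end
        end
    end

These feature rows would then feed the same classifiers as before, one per stimulus category and time bin.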

A successful result from this research would make this kind of neural decoding accessible to any neuroscience researcher with an affordable EEG rig and give us an opportunity to bring state-of-the-art neurotechnology, such as brain-based authentication, to life. Please keep an eye on my project and feel free to ask any questions. Toodle-oo!