Squid Hatchlings React to Changes in Light Levels

Hi everyone! The summer is finally coming to a close and I am excited to share all that I have learned from my time as a Backyard Brains fellow with you. If you've been here since the beginning, thank you so much for following along with me and my squiddos for the past ten weeks; if not, feel free to check out my previous two blog posts to see how we got to this point.

If you missed them: my first post, Behavioral Study of Baby Squids, and my second post, Recording the Behavior of Squid Hatchlings.

Honestly what even happened to me this summer

At the end of my last blog, I had finally gotten a recording box setup that worked well for me, and software that successfully tracked the squid over time. I was pondering what I wanted my final experiment with the squid to be and what I would test them for as they developed. To me, the most interesting part of the project was being able to relate my experiments to actual conditions that the squid would experience in the ocean, so that we could reveal something about the early lives of longfin inshore squid.

With that in mind, I first asked a very simple question. Do the squid prefer to be at the surface of the water or further down? In order to test this, I first put LEDs above the squid in the experimental tank and set them to produce very low light, around 100 lux. I found that the squid stayed right up at the top of the tank, and appeared very attracted to the light. Next, I ordered a really intense flashlight off of Amazon which according to the product description can be used for ‘calamity search and rescue.’ Super hardcore. I proceeded to blind all my intern friends and abuse the strobe function on the flashlight before getting down to experimenting.

Sunlight on a very bright day is around 120,000 lux, which is around the intensity of my flashlight. By placing it on top of the squid tank, I mimicked the experience the squid would have should they swim right at the surface of the water. What I saw was pretty impressive; the squid dropped dramatically in the water and hung around the bottom until the light was turned off. This was especially interesting to me because, as you might have read in my previous post, squid are strongly negatively geotactic. This means that they prefer not to be low in the water column if they can help it, so seeing this reaction must mean that the squid have a very strong instinct to stay away from bright sunlight.


I saw this same behavior across different ages of squid and I believe that it is a pretty general response. While this was an interesting and informative experiment, it seems a little broad. We’ve learned that squid of all ages don’t like SUPER SUPER bright light and they do like SUPER SUPER dim light, but what about the kinds of light in between?

This is where my idea to film the squid moving right to left came into play. I became very obsessive about checking my eggs every hour to see if they had hatched, so I could be sure to test the squid under different conditions in their very first hours of life. I tested each group with 100 lux, 600 lux and 1,200 lux light shining from the right. I first tested the groups within 24 hours of hatching, and tested them again when they were 48 hours old.

The first result was similar to what I learned from the surface-light experiment: the squid are attracted to 100 lux light. In both age groups, the squid moved strongly toward the LED as soon as it turned on and clustered around the light for the entire duration of the video.
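If you're curious how this kind of attraction can be turned into a number, here's a rough sketch of one way to do it from tracked positions. It assumes the tracking software writes one x-coordinate per video frame to a CSV; the file name, column name, tank width, and "near the light" cutoff are all just placeholders, not my exact pipeline:

```python
import pandas as pd

# Hypothetical tracker output: one row per video frame, with the squid's
# x position in pixels (0 = far wall, tank_width = side with the LED).
frames = pd.read_csv("squid_track.csv")   # placeholder file name
x = frames["x_pixels"].to_numpy()
tank_width = 800                          # placeholder tank width in pixels

# Phototaxis index: fraction of frames spent in the third of the tank
# closest to the LED. ~0.33 means no preference, ~1.0 means strong attraction.
near_light = x > (2 / 3) * tank_width
print(f"Fraction of time near the light: {near_light.mean():.2f}")
```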

The result of the next experiment was totally different. Under 600 lux light, the newly hatched squid responded essentially at random and were not attracted to the light source at all. Two days later, the same group of squid responded very strongly to the light and moved right towards it.


Under 1,200 lux light, the young squid also did not respond, and the older squid were somewhat attracted to the light, but less so than they were to the 600 lux light.

So what does all of this mean??? We can only guess. It is apparent that in all age groups, squid are attracted to dimly lit conditions, probably because these are safe conditions for avoiding predators. The differentiation of response is more interesting, however. In order to understand why younger squid are more wary of brighter light, we can consider the early migration patterns of the squid. While they are born over the coastal shelf, the squid quickly move out towards the open ocean within their first few days of life.

As you can see in the diagram, there is a significant difference between how light penetrates water over the coastal shelf and how it penetrates water in the open ocean. When the squid are born in the murky waters off the coast, even moderately strong light probably signals to them that they are dangerously close to the surface of the water. A few days later, when they move to the open ocean, light penetrates more deeply, so moderately strong light probably still represents a safe distance from the surface.

Although these hypotheses could be totally wrong, the behavioral development that is apparent here is an extremely interesting model of infant reflexes and how they change over time. In human babies, we see strong behaviors, such as the rooting reflex, that allow infants to survive and then go away with time. The same general concept is showing up in the squid hatchlings, and we could perhaps work to study how these instincts develop genetically in an organism. NEW PROJECT DIRECTION!??

And, if you missed it, I made and presented a poster! Check it out here if you’re interested in a more formal presentation of my results:

Anyway, this is almost the end of my last blog as a BYB fellow, and if you want to peace out before things get sappy/grateful/nostalgic, now would be the time 😉

 

Are we happy to be done filming or are we still arguing about the merits of subplot(‘Position’) vs subsubplot??

BUT ANYWAY I am so grateful to have had the opportunity to work at Backyard Brains this summer and for all the amazing experiences I’ve had here. Infinite thanks to Greg for taking the time to work with and teach us so much this summer and for his dedication to broadening the availability of neuroscience education. I will miss your excellent renditions of the entire soundtrack from The Book of Mormon and being your (unconfirmed) favorite intern.

Thank you also to the entire staff at Backyard Brains (looking at you, Stan, Will and Zach) for being there every time I got stuck or frustrated this summer and providing helpful suggestions and thumbs-ups. Same to my fellow interns. You're all like siblings to me and I am so grateful to have gotten to know such a quality group of people; I know you'll all do amazing things in life. The open-apartment Friday night policy applies forever; please come visit me.

TO CONCLUDE: this summer has been incredible. It has really deepened my passion for neuroscience and experimentation, and introduced me to the satisfaction of DIY science. I can only hope that the rest of my career involves so much gratifying research and such a wonderful community of scientists and makers. TO THE NEUROREVOLUTION!!

 


Promising Results in Detecting the Detection of Faces in the Brain!

G’day again! I’ve got data… and it is beautiful!

More on this below… I am pleased to share an update on my progress with my BYB project, human EEG visual decoding!

If you missed it, here’s the post where I introduced my project!

Since my first blog post, I have collected data from 6 subjects with the stimulus presentation program I developed. The program presents 5 sets of 30 images from 4 categories (Face, House, Natural Scene, Weird pictures). Since the image order is randomized, I placed small, color-coded blocks in the corner of each image, which I use to record which stimulus is presented when.

I needed to build a light sensor to read the signals from these colored blocks. I used a photoresistor at first; however, there was some delay in the signal, so I decided to use photodiodes, which have a faster response. Since I do not have an engineering background, I had to learn how to read circuits and how to solder in order to build the light sensor. This was new territory for me, but it was very interesting and motivating. After building my device, I collected data from 6 subjects at 5 brain areas (CPz, C5, C6, P7, P8) that are thought to be important for measuring brain signals related to visual stimulus interpretation.
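As a side note, here is a rough sketch of how the photodiode channel can be turned into stimulus onset times once the recording is on the computer. It assumes the photodiode trace has been saved as a 1-D array; the file name, sampling rate, and threshold are placeholders rather than my exact code:

```python
import numpy as np

fs = 1000.0                                  # assumed sampling rate in Hz
photo = np.load("photodiode_channel.npy")    # placeholder: 1-D photodiode trace

# A stimulus onset is a sample where the photodiode crosses a brightness
# threshold from below; the threshold here is just an illustration.
threshold = 0.5 * photo.max()
above = photo > threshold
onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1   # rising edges

onset_times_s = onsets / fs
print(f"Detected {len(onsets)} stimulus onsets")
```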

Figure 1. Data recorded from the DIY EEG gear: 5 channels from 5 brain areas (orange, red, yellow, light green, green) and 1 channel from the photoresistor (aqua), which was later replaced by a photodiode.

Figure 2. The circuit for the photodiode (top) and the photodiodes I built (bottom).

Figure 3. Checking each channel from the Arduino. One channel (yellow), placed over the back of the head, is detecting alpha waves (~10 Hz).

Figure 4. Spencer (top/middle) and Christy (bottom), our coolest interns, participating in the experiment.

With the raw EEG data collected from each subject, I averaged the trials to obtain the ERP (event-related potential) and see what the device had detected. ERPs provide a continuous measure of processing between a stimulus and a response, making it possible to determine which stages are affected by a specific experimental manipulation. They also provide excellent temporal resolution, since the speed of ERP recording is limited only by the sampling rate the recording equipment can feasibly support. Thus, ERPs are well suited to research questions about the speed of neural activity.
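In code, computing an ERP really is just an average across trials. Here is a minimal sketch for one channel, assuming the EEG has already been cut into equal-length epochs around each stimulus onset (the file name, array shape, sampling rate, and baseline window are assumptions):

```python
import numpy as np

fs = 1000.0                        # assumed sampling rate in Hz
# Hypothetical epochs array: (n_trials, n_samples), one channel,
# each epoch starting 100 ms before the stimulus onset.
epochs = np.load("face_epochs.npy")

# Baseline-correct each trial using the 100 ms before the stimulus,
# then average across trials to get the ERP.
baseline_samples = int(0.1 * fs)
baseline = epochs[:, :baseline_samples].mean(axis=1, keepdims=True)
erp = (epochs - baseline).mean(axis=0)

t = np.arange(epochs.shape[1]) / fs - 0.1   # time axis in seconds
print("Most negative ERP deflection at", t[np.argmin(erp)], "s")
```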

Then I performed Monte Carlo simulations to verify the statistical significance of the peaks in the ERP data. Monte Carlo simulation is a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. With 100 random samples for each category, the analysis indicated that we had statistically significant peaks across the graph, especially the N170 for face images, which was very meaningful for my research. The N170 is a component of the event-related potential (ERP) that reflects the neural processing of faces, which supports that we have good detection of faces across subjects compared to the other categories.
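For readers who want to see what this kind of Monte Carlo test can look like in practice, here is a small sketch of the general idea: shuffle the category labels many times to build a null distribution for the N170 difference between faces and another category. The array names, time window, and two-category setup are my simplifications for illustration, not the exact analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000.0   # assumed sampling rate in Hz

# Hypothetical epoch arrays, shape (n_trials, n_samples), epochs start 100 ms pre-stimulus.
face = np.load("face_epochs.npy")
house = np.load("house_epochs.npy")

def n170_amplitude(epochs):
    """Mean ERP amplitude in a 150-200 ms post-stimulus window."""
    start, stop = int(0.25 * fs), int(0.30 * fs)   # offset by the 100 ms baseline
    return epochs.mean(axis=0)[start:stop].mean()

observed = n170_amplitude(face) - n170_amplitude(house)

# Null distribution: shuffle category labels 100 times and recompute the difference.
pooled = np.vstack([face, house])
null = []
for _ in range(100):
    idx = rng.permutation(len(pooled))
    fake_face, fake_house = pooled[idx[:len(face)]], pooled[idx[len(face):]]
    null.append(n170_amplitude(fake_face) - n170_amplitude(fake_house))

# One-sided: how often does a random split give an N170 as negative as the real one?
p = (np.sum(np.array(null) <= observed) + 1) / (len(null) + 1)
print(f"Observed N170 difference {observed:.2f}, Monte Carlo p ~ {p:.2f}")
```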

 

Figure 5. ERP data from 6 subjects for each category of images. A significant N170 response (negative peak about 170 ms after stimulus presentation) is detected for face images.

After verifying the statistical significance of the data, I used k-means clustering, a method of vector quantization that aims to partition n observations into k clusters in which each observation belongs to the cluster with the nearest mean, which serves as a prototype of the cluster. K-means clustering indicated that the difference between subjects was more significant than the difference between trials, and that the difference between trials was more significant than the difference between categories. And, much to my excitement, it was obvious that the response to faces was distinguishable from the other categories across the different amounts of averaging.

With the insights from k-means clustering, I finally applied the machine learning techniques I'd been studying to see how accurately I could classify, from the raw data, which category of image people were looking at during the experiment. I tried the most popular pattern classifiers, such as linear, quadratic, and cubic support vector machines, complex trees, Gaussian classifiers, k-nearest neighbors (kNN), and so on. I used these methods on a single subject and on a set of 6 subjects, with and without averaging every 5, 10, 15, 20, 25, 30, 50, 75, or 150 vectors of EEG data. The support vector machine (SVM) showed the best performance among the classifiers, with more than 50% accuracy for each class, and the averaged data performed better, as expected.
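Here is a rough sketch of the SVM step with scikit-learn, again on the same placeholder feature vectors, with a simple helper that averages every 5 trials of the same category (the kind of averaging that helped so much). The helper and split are illustrations, not my exact code:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X = np.load("trial_vectors.npy")          # (n_trials, n_features), placeholder
y = np.load("trial_categories.npy")       # category label per trial

def average_every(X, y, n=5):
    """Average every n trials within each category to reduce noise."""
    Xa, ya = [], []
    for c in np.unique(y):
        Xc = X[y == c]
        for i in range(0, len(Xc) - n + 1, n):
            Xa.append(Xc[i:i + n].mean(axis=0))
            ya.append(c)
    return np.array(Xa), np.array(ya)

Xa, ya = average_every(X, y, n=5)
X_train, X_test, y_train, y_test = train_test_split(
    Xa, ya, test_size=0.2, stratify=ya, random_state=0)

clf = SVC(kernel="linear").fit(X_train, y_train)
print("Linear SVM accuracy:", clf.score(X_test, y_test))
```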

One Subject Raw
One Subject Averaged
Six Subjects Raw
Six Subjects Averaged

Figure 6. K-means clustering results with averaging every 5, 10, 20, 50, and 75 vectors of the EEG data for a single subject (first 2 graphs) and 6 subjects (last 2 graphs). The y-axis indicates the 4 categories of images (1: Face, 2: House, 3: Natural Scene, 4: Weird pictures), further illustrated by the red lines. The graphs from 6 subjects indicate that combining multiple subjects introduces too much variation to identify faces within the group. However, the graphs from a single subject indicate that faces can be distinguished from the other three categories.

Then, using the insights from k-means clustering and the machine learning classifiers mentioned above, I applied 5-fold cross-validation, with and without averaging every 5 EEG vectors. In 5-fold cross-validation, each data set is divided into five disjoint subsets; four subsets are used for training and the remaining one is used as the test set. Once again, the SVM showed the best performance among the classifiers, with more than 50% accuracy for each class, and the averaged data performed better, as expected.
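The 5-fold cross-validation itself is only a few lines; here is a sketch that also builds the kind of per-class confusion matrix shown in Figure 7 below. The feature matrix is the same placeholder as above, and the linear kernel is just one of the SVM variants I tried:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

X = np.load("trial_vectors.npy")      # placeholder feature matrix
y = np.load("trial_categories.npy")   # placeholder labels (1-4)

# 5 disjoint folds: each fold takes a turn as the test set while the
# other 4 folds are used for training.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
pred = cross_val_predict(SVC(kernel="linear"), X, y, cv=cv)

cm = confusion_matrix(y, pred, normalize="true")
print("Per-class accuracy (diagonal):", np.round(np.diag(cm), 2))
```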

One subject, SVM, no averaging
One subject, SVM, averaging 5
Six subjects, SVM, no averaging
Six subjects, SVM, averaging 5

Figure 7. The results from pattern classification with the SVM. Both one subject and 6 subjects achieved good results when averaging every 5 vectors of the EEG data, which produced better results than no averaging, and data from a single subject produced better results than data from 6 subjects. (The darker the green down the diagonal, the better; that is the accuracy of predicting each class.)

So now I am working on real-time pattern classification so that I can detect what people are looking at without averaging multiple sets of data. I will perform spectral decomposition to compute and downsample the spectral power of the re-referenced EEG around each trial. The spectral features from all of the electrodes will be concatenated and used as inputs to pattern classifiers. The classifiers will be trained to recognize when each stimulus category is being processed as the target image in real time; a separate classifier will be trained for each combination of stimulus category and time bin. Next, the trained classifiers will be used to measure how strongly the prime distractor image is processed on each trial. Finally, subjects' RTs (to the probe image) on individual trials will be aligned to the classifier output from the respective trials.
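For the curious, here is a rough sketch of what the spectral-power features might look like in code, using Welch's method on each channel around a trial and concatenating the band powers. The band edges, window length, sampling rate, and array shapes are my placeholders; the real-time pipeline itself is still a work in progress:

```python
import numpy as np
from scipy.signal import welch

fs = 1000.0                                    # assumed sampling rate in Hz
bands = [(4, 8), (8, 13), (13, 30), (30, 50)]  # example frequency bands in Hz

def spectral_features(trial):
    """trial: (n_channels, n_samples) EEG around one stimulus."""
    feats = []
    for channel in trial:
        freqs, psd = welch(channel, fs=fs, nperseg=256)
        for lo, hi in bands:
            mask = (freqs >= lo) & (freqs < hi)
            feats.append(psd[mask].mean())     # mean power in this band
    return np.array(feats)                     # length: n_channels * n_bands

# Hypothetical trial: 5 channels x 1 second of EEG around one stimulus.
trial = np.load("one_trial.npy")
print("Feature vector length:", spectral_features(trial).shape)
```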

A successful result from this research will make this kind of neural decoding accessible to any neuroscience researcher with an affordable EEG rig and give us an opportunity to bring state-of-the-art neurotechnology, such as brain authentication, to life. Please keep an eye on my project and feel free to ask any questions. Toodle-oo!


Are you fast enough to catch a grasshopper? Our new experiment and publication look for answers in visual neurons!

Are you fast enough to catch a grasshopper with your bare hands? It might be tricky, because grasshoppers are quick to react to potential threats! This reaction time is thanks to a very specific visual neural circuit in the grasshopper. By recording from this circuit in a living grasshopper prep, we can record the spikes that are evoked by a visual stimulus! Move a piece of paper back and forth in front of the grasshopper's eyes and you get spikes! But so much more can be done to study this fascinating, hardwired reaction.

Dieu My worked on this exact prep last summer! And we are excited to announce that she recently had her paper published in June’s edition of JUNE (Journal of Undergraduate Neuroscience Education)! Her paper, titled “Grasshopper DCMD: An Undergraduate Electrophysiology Lab for Investigating Single-Unit Responses to Behaviorally-Relevant Stimuli,” featured in this journal’s titular month, details her summer research in Grasshopper Vision and Educational Methodology. It takes a lot of work to bring a research project to publication, and we’re proud and excited for Dieu My’s accomplishment!

See the Publication

To coincide with her publication, we are pleased to also introduce to you the resulting Backyard Brains experiment!

https://backyardbrains.com/experiments/grasshoppervision

Here’s a brief sample from the experiment’s introduction:

“It’s easy to observe that grasshoppers are able to quickly hop away to escape potential predators or quickly incoming danger, but learning just how the grasshopper can react so quickly is a research question that has interested neuroscientists for years.

In 1992, researchers Simmons and Rind helped identify a specific neural circuit in the grasshopper that is responsible for reacting to movement, called the descending contralateral movement detector (DCMD). Certain movement patterns activate the grasshopper’s DCMD, sending an alert down to its legs telling it to jump away. The DCMD underlies the grasshoppers’ ability to visually detect, discriminate, and react to an approaching object.”

 

 

This experiment guides you through the methodology that Dieu My created to take recordings from a grasshopper's DCMD (descending contralateral movement detector). Along with the methodology, Dieu My came up with a few experiments that can be performed to test the grasshopper's vision. One of the experiments has you compare different stimulation intervals, while the other has you adjust stimulus variables to see what triggers the DCMD.
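To get a feel for one of those stimulus variables, here is a tiny sketch of how the angular size of an approaching object grows over time, which is the kind of "looming" quantity such experiments manipulate. The object size and approach speed are arbitrary example values, not the numbers from Dieu My's experiments:

```python
import numpy as np

# An object of half-size l (cm) approaching the eye at speed v (cm/s).
# At time t seconds before collision, its distance is v * t and its
# angular size on the eye is 2 * arctan(l / (v * t)).
l = 2.0       # example half-size of the object, cm
v = 100.0     # example approach speed, cm/s

t = np.linspace(2.0, 0.05, 200)              # seconds remaining until collision
angle_deg = np.degrees(2 * np.arctan(l / (v * t)))

# The angular size stays small for most of the approach and then grows rapidly
# right before collision, which is the sort of expansion the DCMD responds to.
print(f"Angular size 2 s out: {angle_deg[0]:.1f} deg; "
      f"0.05 s out: {angle_deg[-1]:.1f} deg")
```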

The experiment does require some Backyard Brains tools and software, so check out the store for the Completo and SpikeRecorder.

Let us know what experiments you come up with and please do share your results with us!