
Shrimp! Heaven! Now!

As we started wrapping up our projects, the specter waiting for me was the prospect of having to answer the question, “So what?” What is the point of this research? I spent most of my time working on the “methods,” the techniques (surgeries, soldering, coding) that became the experimental setup.

Everyone knows that you can’t do good science without a solid experimental setup, but before you design your experiment, you need a question to answer, a hypothesis. This imperative is known as hypothesis-driven research, and it’s the gold standard for doing science because it forces you to do novel work that will benefit the world. Sure, penicillin was discovered by accident, i.e., without being driven by a hypothesis, and a lot of good science is done by pursuing curiosity, but scientists usually strive for the traditional hypothesis-driven approach.

Well, in this ten-week program, hypothesis-driven research wasn’t an option; instead, I had to formulate a valuable question while already running experiments. These experiments, which I was designing on the fly, would in turn limit the kinds of questions I could ask! For example, since I was making an EMG probe, I had to formulate a hypothesis related to mantis shrimp EMGs. I couldn’t suddenly decide that I actually wanted to see what neurotransmitters were involved in striking behavior, because you can’t measure neurotransmitters with EMG.

Well, “So what,” though? Mostly, it’s that no one has done this before, particularly in terms of making a backpack, comparing strike EMGs across mantis shrimp species, and, to a lesser degree, comparing power amplification across taxa.

The backpack

Pennywise rocking his backpack.

Electrophysiology-focused mantis shrimp research has been purely acute and terminal, meaning that you only get one day’s worth of data from an animal before you are done with it. No one had ever made a chronic setup for EMGs in mantis shrimp. If you have a backpack that can be left on the animal for days or weeks at a time (i.e., chronic), you spend less money on getting new specimens, there is less loss of animal life, and you can have what’s called a within-subjects experimental design. Within-subjects designs have the advantage of allowing you to compare data from the same subject (i.e., each individual mantis shrimp) on different days in addition to comparing subjects to each other, making it easier to believe whatever you find. Surprisingly, no one has spent time making something like my backpack, so the methodology of my research is actually one of my biggest findings!

Also, if all goes well, the mantis sheds the backpack when it molts. I noticed today that Featherclown did exactly that, and is looking bigger and better than ever.

Different species of mantis shrimp might not punch in the same way

The phases of extensor activity leading up to the strike

As we all know, there are hundreds of species of mantis shrimp. However, no one has tried comparing the strike EMGs of two different species, so if you’re interested in a species other than the one already characterized, you’re out of luck. The Sheila Patek paper that I keep (post #1) on (post #2) referencing (post #3) examined twelve parameters of strike EMGs in Neogonodactylus bredini. Even though I recorded from three species, I was only able to get consistent strikes from one: Pennywise, the Gonodactylus smithii. The big question on my mind was the difference between how Neogonodactylus and Gonodactylus build up energy to strike, visualized above. From those twelve parameters in the Patek paper, I decided to replicate two: the duration of the cocontraction phase, and the number of extensor spikes in the cocontraction phase. I know, that’s a lot of explanation. Here are the graphs. Individual 0 is Pennywise; individuals 1 through 6 are Patek’s Neogonodactylus.


Same number of spikes, but Pennywise is head and shoulders above the Neogonodactylus individuals in duration. At least, possibly: this is only one individual. Ideally I’d have at least as many Gonodactylus as Patek had Neogonodactylus, so I can’t say whether Pennywise’s strikes are representative of his species as a whole.
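For the curious, the two replicated parameters are straightforward to compute once you have a list of extensor spike timestamps. Here’s a minimal sketch; the numbers and the simplified definition of the cocontraction window (first extensor spike to strike) are my own, not Pennywise’s actual data:

```python
# Sketch of the two Patek parameters, computed from extensor spike times.
# All numbers are made up for illustration.

def cocontraction_params(extensor_spikes, strike_time):
    """Duration of the cocontraction phase (first extensor spike to strike)
    and the number of extensor spikes within it (simplified definition)."""
    onset = extensor_spikes[0]
    duration = strike_time - onset
    n_spikes = sum(1 for t in extensor_spikes if t < strike_time)
    return duration, n_spikes

spikes = [0.10, 0.15, 0.22, 0.30, 0.41]  # seconds, hypothetical
duration, n = cocontraction_params(spikes, strike_time=0.50)
print(duration, n)
```

With per-strike pairs like this from each individual, the cross-species comparison above is just a matter of plotting them side by side.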

Power amplification across taxa

This is easily the least hypothesis-driven part of my project. The question I’m answering is, more or less, “How does mantis shrimp power amplification compare to that of crickets?”, and I’m using cockroaches as a sort of non-power-amplifying control group. Some speculative work has already been done on the similarity of power amplification in crickets, so this part isn’t that new either. I’m not carefully measuring the behavior of the crickets or cockroaches, so I can’t say that a particular burst of EMG spiking produced a particular movement. I’m just comparing details about the bursting itself, having selected bursts in the cricket and cockroach data based solely on their shape. It turns out that the power-amplifying species show an increase in the number of spikes (i.e., the average firing rate) from the first half of each spike burst to the second half, whereas the cockroach makes a good control since it is all over the place and shows no trend.
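The first-half-versus-second-half comparison is simple enough to sketch in a few lines of Python. The spike times below are made up for illustration, not the actual recordings:

```python
# Split each burst into halves by time and compare spike counts
# (i.e., average firing rate) between the halves.

def half_burst_counts(spike_times):
    """Count spikes in the first vs. second half of a burst."""
    start, end = spike_times[0], spike_times[-1]
    midpoint = (start + end) / 2
    first = sum(1 for t in spike_times if t <= midpoint)
    second = len(spike_times) - first
    return first, second

# Made-up burst that accelerates toward the end, like a power-amplifying species.
cricket_burst = [0.00, 0.03, 0.06, 0.08, 0.09, 0.10, 0.11]
first, second = half_burst_counts(cricket_burst)
print(first, second)  # more spikes in the second half
```

A non-power-amplifying burst would give roughly equal counts, or flip direction from burst to burst, which is what makes the cockroach a useful control.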

One analysis I wish I could have done involved the overall shape of the bursts themselves. See how the cricket and mantis shrimp bursts seem to be hourglass-shaped while the cockroach’s is boxier? That is something I want to quantify eventually. Anyway, the rest of my poster can be found at

Odds and ends

Future directions

Looking back, I wish I could have done a few things differently, had I had enough time. The backpack was plagued by water infiltrating its crevices, shorting it and rendering it useless until I could wick the moisture out with a rolled-up paper towel. This is why I had to revert to the Patek restraint, where the animal is held half in, half out of the water. If I could connect a waterproof plug to the backpack and release the mantis shrimp into its home tank, I could elicit striking behavior while the animal is actively defending its burrow against an “intruder” (i.e., my hand or a pen). That would open the door to research into how EMGs figure into mantis shrimp predation, social interaction, and myriad other things I can’t speculate about. I hope that someday an intrepid marine biologist will see that chronic, modular EMG is possible and will simplify and waterproof it.


Slick graph, huh? The highlight of my week was discovering a programming tool for visualizing statistics called Seaborn. I learned that it is named after Sam Seaborn, Rob Lowe’s character on The West Wing, which, being my favorite TV show, made me very happy. The kind of idealism I mentioned briefly at the top of this post, about how the research you do must benefit the people around you, is a theme on The West Wing, transposed onto policymaking. Sam Seaborn is a gifted speechwriter for the President, and is wont to expound on the value of integrity or honesty or some other embarrassingly bushy-tailed thing, except that after hearing him you really want to go around thwacking people on the head for being less than they ought to be. In figure 2 below, we see Sam Seaborn making the case for public education.

You might see why Seaborn is an apt name for a tool that tries to turn statistics into persuasive visual arguments, clear and careful communication that enables the best in us.

It’s been a blast to be a part of BYB’s program this summer, and I am grateful to those of you who took the time to skim even one of my posts. Thank you and sorry to Toothfinger and Beastie Boy for giving your lives to my incompetence with animal care. It’s a comfort to imagine you two in shrimp heaven now, burrowing to your hearts’ content. Please Daniel we can’t keep doing this.

Take a step back and look into the future

Hello friends, this is Yifan again. As the end of the summer draws near, my summer research is also coming to a conclusion. The work I did over the summer was very different from what I expected. Since this is a wrap-up post for an ongoing project, let’s first go through what exactly I did this summer.

The above is the product flow graph for our MDP project. All the blue blocks and paths are what I worked on this summer. In previous posts I wrote about progress and accomplishments on everything except the bird detection algorithm.

In my second blog post, I talked about using a single HMM (hidden Markov model) to differentiate between a bird and a non-bird. One problem was that HMM classification takes a long time: running it on a 30-minute recording takes about 2 minutes. Considering that we need to analyze recordings much longer than that, we need to pre-process each recording and ditch the less interesting parts, so that only the interesting parts go into the HMM classifier.

This figure is the runtime profile of running the HMM on a full 30-minute recording; the classification took about 2 minutes. After splitting out the interesting parts of the recording, we only run classification on these short clips, which reduces the runtime by a very large factor (see figure below).
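The pre-processing step can be sketched as a simple energy threshold. This is a toy version of the concept, not the actual Songbird code: compute the energy in short windows and keep only the loud ones, so the slow HMM classifier only ever sees a fraction of the recording.

```python
# Toy version of the pre-processing idea: keep only high-energy windows.

def interesting_windows(samples, window_size, threshold):
    """Return (start, end) sample ranges whose mean energy exceeds threshold."""
    clips = []
    for start in range(0, len(samples), window_size):
        window = samples[start:start + window_size]
        energy = sum(s * s for s in window) / len(window)
        if energy > threshold:
            clips.append((start, start + len(window)))
    return clips

# Toy signal: silence, a loud "call", silence.
signal = [0.0] * 100 + [0.5, -0.5] * 50 + [0.0] * 100
print(interesting_windows(signal, window_size=100, threshold=0.1))
# only the middle window survives
```

A real segmenter would merge adjacent windows and pad the clip edges, but the payoff is the same: the HMM runs on short clips instead of the whole file.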

One thing you might have noticed in these two graphs is that the runtime of wav_parse is also extremely long. Since there is almost no way to get around parsing the wav file itself, the time consumed here will always be a bottleneck for our algorithm. Instead of writing a better parsing function, I did the mature thing and blamed it all on Python’s inherent performance issues. Jokes aside, I think someone will eventually need to deal with this problem, but optimization can wait for now.

This figure is the raw classification output using a model trained on 5 samples of a matching bird call. If the model thinks a window in the recording matches, it marks that window as 0; otherwise, 1. Basically, this mess tells us that of these 23 clips, only clips 9 and 10 do not contain the bird used to train the model.
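Turning those raw 0/1 window labels into a per-clip verdict could look something like this (a hypothetical aggregation rule, not the project’s actual code): call a clip a match if any of its windows matched the model.

```python
# Hypothetical aggregation of per-window labels into a per-clip verdict.
# Convention from the figure: 0 = window matched the model, 1 = no match.

def clip_contains_bird(window_labels):
    return 0 in window_labels  # any matching window counts

clips = {
    8: [1, 1, 0, 1],   # one matching window -> bird present
    9: [1, 1, 1, 1],   # no matching windows -> no bird
}
for clip_id, labels in clips.items():
    print(clip_id, clip_contains_bird(labels))
```

A stricter rule (e.g., requiring several consecutive matching windows) would trade sensitivity for fewer false positives.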

One might ask: why don’t you have a plot or graph for this result? Don’t yell at me yet; I have my reasons. I literally have more than a hundred clips from one 30-minute recording, and it’s easier for me to go through the results quickly when they are clustered together in a text file.

Although my mentor Stanislav and I had decided to try out HMMs for bird detection, the results aren’t very promising. It’s possible that an HMM is simply not a good choice for this purpose, which means I might need to do more research to find a better solution for bird detection. Luckily, since Songbird is an ongoing project, I will get my full team back in September. I believe I have made some valuable contributions to this project over the summer, and hopefully they will help us achieve our initial goals and plans for this product.

This summer has been a wonderful time for me. I would like to thank all my mentors and fellows for their help along the way; it really meant a lot to me. Looking to the future, I definitely believe this project has more potential than just classifying birds, but for now I am ready to enjoy the rest of the summer so I can work harder when I come back to Ann Arbor in the fall.

Quicker and Smarter: Neurorobot on the hunt

Though it feels like just last week that I landed here in Michigan, it’s almost time for me to go back home. My work on the project is wrapping up and there’s still so much to be done, but I’m confident that I’m leaving behind a good framework.

These past few weeks have been so eventful! I was home sick for a week with the flu, which sucked, but thankfully I could just take my work home with me. The Ann Arbor Art Fair (pictured above) was incredible; half of downtown was covered in tents full of beautiful creations from all around the country. Completely unrelated, half my paycheck now seems to be missing.

If you were following my last blog post, you may remember that one of the biggest hurdles I had to overcome was video delay between the bot and my laptop; big delay meant bad reaction speed meant sad bot.

Through changing libraries, multithreading, and extensive testing, we’ve now gotten the delay to just under 0.4 seconds, less than half of what it was with the old versions of the code!! This may not seem too exciting, but it means the bot now reacts to things much more smoothly and calmly than before, as demonstrated in this video of it hunting down a ferocious bag of chips:

Another new accomplishment is neurons! Spiking neurons to be specific, nothing like your regular generic brand neural net neurons. These ones are much prettier (using the Izhikevich model) and react to stimuli very organically, making graphs that resemble what you’d see on your lab oscilloscope:
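For anyone curious, the Izhikevich model really is only a few lines of code. The a, b, c, d values below are the standard “regular spiking” parameters from Izhikevich’s 2003 paper; the input current and timestep are my own choices for illustration:

```python
# A minimal Izhikevich neuron (Izhikevich, 2003), simulated with Euler steps.
# a, b, c, d are the classic "regular spiking" parameters.

def izhikevich(I, steps, a=0.02, b=0.2, c=-65.0, d=8.0, dt=1.0):
    """Simulate one neuron for `steps` ms; return the list of spike times."""
    v, u = c, b * c        # membrane potential (mV) and recovery variable
    spikes = []
    for t in range(steps):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:      # spike: record it and reset
            spikes.append(t)
            v, u = c, u + d
    return spikes

print(izhikevich(I=10.0, steps=500))  # steady input -> regular spiking
```

Changing a, b, c, d gives you the whole zoo of firing patterns (bursting, chattering, fast spiking), which is what makes these neurons so much prettier than generic neural-net units.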

More importantly, these neuron models don’t just look good; they also behave really well. As an example, here are just two neurons (listen for the different clicking tones) connected to the motors of the Neurorobot, one excited and one inhibited by the ultrasonic distance sensor:

With just these two neurons, and fewer than twenty lines of code, the Neurorobot can already display cool exploration behaviours, avoiding getting stuck on walls and objects the best it can. Neurons are powerful, that’s why creatures like the roundworm can survive with just over 300 of them: it doesn’t take a lot to do a lot.
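The wiring is simple enough to caricature in code. Here’s a toy version of the idea (my own simplification, not the Neurorobot’s actual code): the sensor’s proximity signal drives one motor’s neuron up and the other’s down, so the bot veers away from obstacles.

```python
# Toy sketch of the two-neuron wiring: the distance sensor excites one
# motor drive and inhibits the other. All numbers are illustrative.

def motor_drives(distance_cm, baseline=5.0, gain=0.2, max_range=50.0):
    """Closer obstacle -> more sensor drive -> left motor up, right motor down."""
    proximity = max(0.0, max_range - distance_cm)   # 0 when nothing is near
    left = baseline + gain * proximity              # excited by the sensor
    right = max(0.0, baseline - gain * proximity)   # inhibited by the sensor
    return left, right

print(motor_drives(50.0))  # nothing near: both motors at baseline
print(motor_drives(10.0))  # wall ahead: bot veers away
```

In the real bot these drives would be the input currents to two spiking neurons rather than motor speeds directly, but the excite/inhibit asymmetry is what produces the wall-avoiding exploration.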

Here’s another example, in which position neurons that are more sensitive to left and right areas of the image space are given the task of finding something big and orange:

Notice how when the cup is to the left of the bot’s camera, the blue neuron spikes; whereas when it drifts to the right, the green and red neurons start spiking.
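The position-neuron idea boils down to splitting the frame into regions and letting the amount of orange in each region drive one neuron’s input current. A toy sketch (my own, with a made-up miniature “frame”; the real bot works on camera images):

```python
# Toy version of position neurons: the count of orange pixels in each
# third of the frame becomes the input current for that region's neuron.

def region_currents(orange_mask):
    """orange_mask: 2D list of 0/1 flags marking orange pixels.
    Returns input currents for the left, center, and right neurons."""
    width = len(orange_mask[0])
    third = width // 3
    currents = []
    for lo, hi in [(0, third), (third, 2 * third), (2 * third, width)]:
        currents.append(sum(row[lo:hi].count(1) for row in orange_mask))
    return currents  # [left, center, right]

# Toy 2x6 frame with the orange cup on the left.
frame = [[1, 1, 0, 0, 0, 0],
         [1, 0, 0, 0, 0, 0]]
print(region_currents(frame))  # left neuron gets the strongest drive
```

Feed each current into its own spiking neuron and you get exactly the behaviour in the video: the blue neuron fires when the cup sits left of center, and the green and red ones take over as it drifts right.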

There’s still much optimization to be done to make the Neurorobot think and react faster: eventually the entire camera code is going to be rewritten from scratch, along with better visualisation of what’s going on under the hood. Lights, speakers, a microphone, and a good user interface are all coming soon to a Backyard Brains near you!

Christopher Harris’s Neurorobot prototypes already have a drag-and-drop interface for putting together neurons:

The real goal of this project isn’t to have students write a thousand lines of code to make the robot do something interesting, but for them to learn and explore the wonderful behaviour of neurons just by playing around with them: changing parameters, connecting synapses to different components, and seeing if they can get the bot to wiggle when it sees a shoe, or beep and back away from anything that looks like a vase. And, in keeping with Backyard Brains’ tenets, seeing what they will discover.