Octopus Wrestling and Computer Vision

 

Hello again, my faithful viewers, and thanks for tuning in for another exciting octopus-themed blog post. As always, I am your host, Ilya Chugunov, but today I’ve come with sad news: all good things must come to an end, and this marks the end of my summer research here with Backyard Brains. Now’s the time to grab a hot cocoa, reminisce on what we’ve learned, and talk about what’s still left to do.

First and foremost, if you haven’t already had the chance to look at my previous blog posts, you can see them here:

Octopus Learning and Behavior

Studying Aggressive Behavior in Octopodes

Now let’s recap and break this down into some conversational dialogue.

First, we found out, rather accidentally, that if left together, our Bimacs will wrestle each other to assert dominance. This gave us the idea of using computer vision to gather data for analysis, with the hope that we could identify some interesting features within their behaviors.

Then I built my acrylic setup to record the octopuses doing their thing, making sure to have even lighting and a stable mount for my GoPro so that the code didn’t just explode from all the variability.


The first, and most classic, behavior found in our trials with the Bimacs was the “bout”: a little sumo-wrestling fight where each octopus tried to push the other around as far as possible. Bouts were common when both octopuses were excited and lasted about five seconds each.

The second curious behavior found was the “poke”, where one octopus wanted to provoke a real fight, but the other just wasn’t feeling it. The more excited octopus would waltz up to the lazy-bones and just briefly tap him with an arm before jetting off across the chamber.

I noticed that in both the bouts and the pokes, right as the distance between the two octopuses closed and they made contact, the angle between them rapidly decreased too. They would approach each other sideways (almost backwards at times), then rapidly spin around right as they got close to poke or fight. In the poke behavior, the offending octopus would then spin back around and jet off, while in the bout behavior they’d just stay locked face to face.
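If you’re curious how those numbers come out of the video: once the tracking step described below pulls out the two octopus blobs, distance and relative angle are a few lines of OpenCV. Here’s an illustrative sketch, not my exact code; note that the ellipse fit only gives a body axis up to a 180° flip, which is still enough to tell a sideways approach from a head-on one.

```python
import math
import cv2

def blob_pose(contour):
    """Centroid and approximate body-axis angle of one octopus blob."""
    m = cv2.moments(contour)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    # fitEllipse needs at least 5 contour points; the fitted ellipse's
    # angle tracks the blob's long axis (ambiguous by 180 degrees).
    _, _, axis_angle = cv2.fitEllipse(contour)
    return (cx, cy), axis_angle

def axis_offset(axis_angle, bearing):
    """Fold the difference between a body axis and a bearing into
    0-90 degrees (0 = pointed head-on, 90 = fully sideways)."""
    d = abs(axis_angle - bearing) % 180.0
    return min(d, 180.0 - d)

def distance_and_angles(c1, c2):
    """Distance between two octopus blobs, plus each one's body-axis
    angle relative to the line joining their centroids."""
    (x1, y1), a1 = blob_pose(c1)
    (x2, y2), a2 = blob_pose(c2)
    dist = math.hypot(x2 - x1, y2 - y1)
    bearing = math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180.0
    return dist, axis_offset(a1, bearing), axis_offset(a2, bearing)
```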

Another notable thing our octopuses do in their fighting ritual is change color. As I assume you already know, these guys are covered in chromatophores, and they seem to flash jet black as they go on the offensive (can you tell who the attacker is in the picture above?).

The poke behavior elicited the same response, twice! The first bump was the attacked octopus darkening as the poking octopus approached it, and the second was the poking octopus turning a dark brown as he squirted away.
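You can measure that darkening directly from the tracked blobs: average the pixel brightness inside each octopus’s contour and watch it over time, and the two bumps fall out. A minimal sketch, again illustrative rather than my exact code:

```python
import cv2
import numpy as np

def blob_darkness(gray_frame, contour):
    """Mean brightness inside one octopus blob (lower = darker).
    Plotted per frame, the two darkening 'bumps' around a poke
    show up as dips in this value."""
    mask = np.zeros(gray_frame.shape, dtype=np.uint8)
    cv2.drawContours(mask, [contour], -1, color=255, thickness=cv2.FILLED)
    return cv2.mean(gray_frame, mask=mask)[0]
```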

“But Ilya, how in the world do you process so much video? And how do you know when the fight starts in the first place?”
Why, thanks for the question, hypothetical reader. I use a mess of MOG (Mixture of Gaussians) background subtraction, erosion, and band-pass filtering, combined with OpenCV’s convex hull functions, to find the general outlines of the octopuses, and then I check whether they form two separate blobs or one combined mega-octopus. Two blobs means they’re not in contact; one blob means they are. From there it’s easy to define first contact and to tell a bout from a poke (long contact vs. short contact).
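If you want to see roughly what that pipeline looks like, here’s a stripped-down OpenCV sketch using the MOG2 flavor of Mixture-of-Gaussians subtraction. The file name, area band, and kernel sizes are placeholders; my real code on GitHub has more plumbing:

```python
import cv2

cap = cv2.VideoCapture("octopus_trial.mp4")  # placeholder file name

# MOG2 models each pixel as a mixture of Gaussians and flags foreground.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

contact = []  # one bool per frame: are the octopuses touching?
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]  # drop shadows
    mask = cv2.erode(mask, kernel, iterations=2)   # kill speckle noise
    mask = cv2.dilate(mask, kernel, iterations=2)  # restore blob size

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Keep only octopus-sized blobs; the area band is a guess per arena.
    hulls = [cv2.convexHull(c) for c in contours
             if 500 < cv2.contourArea(c) < 50000]

    # Two blobs: separate octopuses. One blob: a combined mega-octopus,
    # i.e. they are in contact. Bout vs. poke is then just contact length.
    contact.append(len(hulls) == 1)
cap.release()
```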

 

Using a simple windowing function and a pretty boring logistic regression, we can take a bunch of our video clips of octopus fights where we’ve already marked when a fight occurs, and use them to predict the point of contact in a new video we feed to the algorithm. This is where machine learning starts to play into the project: a program learns from previous octopus video to predict what will happen in new octopus video.
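In sketch form, that looks something like the following: per-frame features get grouped into short overlapping windows, each window is labeled by whether a hand-marked contact falls inside it, and a logistic regression learns the mapping. The file names, window size, and feature set here are illustrative stand-ins, not my actual pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

WINDOW = 15  # frames per window; purely a guessed value

def make_windows(features, labels):
    """Slide a fixed-size window over per-frame features; each window is
    labeled 1 if a hand-marked contact frame falls anywhere inside it."""
    X, y = [], []
    for start in range(len(features) - WINDOW):
        X.append(features[start:start + WINDOW].ravel())
        y.append(int(labels[start:start + WINDOW].any()))
    return np.array(X), np.array(y)

# Illustrative inputs: per-frame features from the tracking step
# (e.g. distance, relative angle, blob count) plus 0/1 contact labels.
train_feats = np.load("train_features.npy")   # shape: (n_frames, 3)
train_labels = np.load("train_labels.npy")    # shape: (n_frames,)

X_train, y_train = make_windows(train_feats, train_labels)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# On a new, unlabeled video, score every window; peaks in the predicted
# probability mark the likely moments of first contact.
new_feats = np.load("new_features.npy")
X_new, _ = make_windows(new_feats, np.zeros(len(new_feats)))
contact_prob = clf.predict_proba(X_new)[:, 1]
```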

I’ve compiled my research results and created a poster which I presented at a University of Michigan symposium.

What’s next?

For me, Canada. Heading up to Montreal next week.

In general, my code is up on my GitHub and is completely open source, so anyone is welcome to make changes to it and take it in whatever direction they want; you don’t even have to use it on octopuses if you don’t want to.

Now for some musings…

I’m excited about computer vision. Historically, behavioral studies involve a lot of humans watching animals, recording specific events (like when an animal eats a certain food), or interpreting their behavior. This is not only time-consuming but also hard to keep objective. In these studies there needs to be redundancy: multiple people need to record the events, and that data then needs to be analyzed statistically to ensure that, on average, the interpretations are consistent between different observers. As you can see, it is challenging. Computer vision programs are changing this!
 
By taking humans out of the equation, you remove chances for bias, missed behaviors or interactions, and fudged results. Computer vision techniques can comb through hundreds of hours of video footage, quickly providing researchers with quantifiable results. There are certainly still behavioral studies that require human discretion (was a touch affectionate or aggressive?), but for many researchers, computer vision is the future.

I think there’s a lot still to be done with computer vision and behavioral analysis, and this summer research was just me dipping my foot into the pool. There is much more data we could draw from the same video I was working with: arm position and length, how curled the octopus’s arms were, maybe even their heart rate could be extrapolated with enough clever coding. As I continue onward in whatever field of STEM I find myself in next, I hope to keep throwing computational power at problems that don’t seem like they even need a computer, because who knows, maybe they do.

I’ll leave you with some boring philosophy. No one, not a single scientist, knows for certain what the next big thing is going to be. No one knows when or where the next technological revolution will happen; no one knows if the next world-changing invention will be made in a million-dollar Elon Musk laboratory or at 3 a.m. by a hungover student in their dorm room. So just know that when you read a blog post like this, about an 11-week undergrad project, even it has a chance to be something big. Not all scientific breakthroughs are made by bearded dudes in lab coats; they could be made by you.


BYB Fellows Present at Research Symposium

The Fellows! Missing: Ilya and Nathan, who had already started presenting!

Today our Summer Research Fellows “snuck in” and presented their summer work at a University of Michigan Undergraduate Research Opportunity Program (UROP) symposium! Over the two sessions, our fellows presented their work and rigs to judges, other students, university faculty, and community members. Some of the fellows are seasoned poster designers, but others had to learn quickly as they all rushed to get their posters printed in time! As our motto goes, we think it’s a shame that science is locked up in labs, and we pride ourselves on being able to take our DIY rigs wherever we go, so of course we encouraged the fellows to bring as much of their rigs as possible to show off in person. Science is much cooler when you can hold it and see it in person.

Poster presentations are close to our heart here at Backyard Brains… You might be surprised to learn that our company started out as a poster presentation! “The $100 Spike!” was the poster that launched a thousand ships. Our founders, Tim and Greg, developed the original SpikerBox as a passion project and presented it at a Society for Neuroscience conference poster session. They pinned up their poster, tacked a hundred-dollar bill to the board, and showed live action potentials on their first-generation SpikerBox to everyone who would listen. People expressed interest in purchasing the SpikerBox, and Backyard Brains was born!

We’re proud to see our fellows continuing the tradition of creating affordable, DIY neuroscience experiments. Check out the photos and posters below, and be on the lookout for more blog posts from our fellows as they finish their write ups!

 

 


Studying the Behavior of Baby Squids!

Hi everyone! My name is Christy and I will be a junior at Rice University this coming Fall. I am so excited and grateful to be part of the Backyard Brains fellowship program for this summer and will be studying longfin inshore squid hatchlings for my research project.

My Favorite People and my Favorite Place

At school, I am majoring in Computer Science and Cognitive Science and minoring in Neuroscience. I am specifically interested in the fields of computational neuroscience and artificial intelligence and I’ve really, really loved the classes I’ve taken in those topics so far. In my free time, I enjoy long walks on campus… as a tour guide!  I also perform with my school dance team and work at the McNair Center where I write classifier algorithms to identify startups and accelerators.

A Scenic Ann Arbor Destination

I have really enjoyed my time in Ann Arbor! It has been great meeting the other interns and exploring the city. So far, I’ve eaten a ton of excellent food (ZINGERMAN’S!), gotten lost in the Dawn Treader Book Shop, visited Lake Erie, willingly touched a cockroach for the first time in my life, and discovered “The Most Well Stocked Walgreens in the History of Time.”

By the time I arrived here at Backyard Brains, a shipment of squid eggs was already waiting for me, so I hit the ground running! I mixed and aerated some seawater and placed the eggs in their new home. I’ve never worked with any sort of marine creature before, so I was definitely worried that they were in the wrong hands! Fortunately, my brilliant advisor Eric Edsinger has been helping me remotely, and the squid babies hatched this Tuesday. I became the proud mother of a ton of squiddos (like kiddos, but squid).

Since then, I’ve been scrambling to capture quality videos of my squid swimming so that I can train computer software to track their movements! Once I have a program up and running, I will be receiving another shipment of squid and tracking them as they react to different environmental conditions like strong light, temperature fluctuations, and water movement. Very little is known about how these hatchlings behave, and I hope that by recording information about them, I can contribute to our understanding of these fascinating creatures.

My squid live in aerated buckets of saltwater, but for recording, I needed to come up with a simple, DIY transparent container. So, for now, the tiny squid are swimming around in Tic-Tac containers! (By the end of the summer, I suspect the other interns and I will be very sick of eating Tic-Tacs.) Unfortunately, the squid like to hide in the beveled edges of the container, where the camera can’t capture them properly, so today I built some blockers to stop them from entering that region. I also constructed a little lightbox to film them in, using random materials I found in our makerspace, and started filming! I got some images that look like this:

I think my squiddos are pretty cute, but I’m biased of course #momthings

The next step is turning them into images that look something like this:

Negative Image of Squid for Easier Computer Vision Tracking

Using edge detection and high-pass filters, I can track how the population reacts to different stimuli.
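For anyone who wants to play along at home, here’s a rough OpenCV sketch of that idea; the file name and thresholds are placeholders, and the real settings will depend on my footage:

```python
import cv2

frame = cv2.imread("squid_frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder
inverted = cv2.bitwise_not(frame)  # negative image: dark squid become bright

# Crude high-pass filter: subtract a heavy blur so that slow lighting
# gradients across the Tic-Tac container drop out.
background = cv2.GaussianBlur(inverted, (51, 51), 0)
highpass = cv2.subtract(inverted, background)

# Edge detection plus contour finding gives one small blob per hatchling.
edges = cv2.Canny(highpass, 30, 100)
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
centroids = []
for c in contours:
    m = cv2.moments(c)
    if m["m00"] > 0:
        centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
# Counting centroids per region of the container, frame by frame, shows
# how the population shifts in response to a stimulus.
```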

I am enjoying building my experimental setup, and I am excited to get started with my experiments! I hope to produce meaningful results this summer! By recording and analyzing the squid’s behavior in response to certain stimuli, I can begin to hypothesize about why the squid behave the way they do, prompting further scientific experiments! I feel so lucky to have the opportunity to spend my summer learning and studying at Backyard Brains and can’t wait to update the blog soon!