Hi folks! My name is Dan and I am a student at the University of Massachusetts Amherst studying neuroscience and minoring in computer science. Back at school, I work in a songbird lab where I listen to neurons fire in zebra finches, and I’m on the ballroom dance team. Outside of working, sleeping, and eating, I’ve been rock climbing with the other Fellows and scouting out dance spots around town.
So at this point in 2018, I’m guessing a bunch of the people who will read this have at least heard about the focus of my project, mantis shrimps! Radiolab did a fabulous show about them and since then they’ve spread across the internet, so I was pretty darn excited when I found out I’d get to work with them this summer.
Even though there are a million great resources on mantis shrimp, I’m going to take a shot at it here anyway. They’re prolific like ants and old like crocodiles: a few hundred species of related crustaceans found across the world’s tropical and temperate oceans, and they haven’t changed much in millions of years. One species can be found along the Atlantic coast from Maine to Suriname! They range in size from less than an inch to more than a foot long, and several species are vibrantly rainbow colored. Despite their pleasant coloring, they are extremely aggressive and will attack just about anything. There are a ton of things about mantis shrimp that make them biologically unique and really important to study, but I’m going to talk about just one aspect of their behavior and physiology: their punch.
Most species of mantis shrimp are considered either “spearers” or “smashers,” because they use an arm-like appendage called a maxilliped to either spear or smash their prey. I’m studying a smashing species called Odontodactylus scyllarus, the peacock mantis shrimp. In addition to looking like a Christmas ornament, they pack a punch like no other animal, or even robot, on Earth. Their maxillipeds have an enlarged blunt “elbow” that they swing with the acceleration of a bullet (underwater, no less! Try swinging a baseball bat underwater some time). The strike moves so fast that it vaporizes the water at the point of impact, creating what’s called a cavitation bubble. When the water crashes back in around the bubble, it produces an audible click, a flash of light, heat approaching the temperature of the sun’s surface, and an aftershock that hits the target like a second punch. This is one reason scientists are curious about these guys: we can’t engineer anything as hydrodynamic as their little rock’em sock’em maxillipeds. By the way, this kind of spring-loaded strike is called power amplification.
How does the mantis shrimp still outpace modern engineering? I’ll get into the details in my next post, but the basic idea is that they store mechanical energy in a kind of biological spring in their armor by twitching the muscles in the maxilliped. Each twitch compresses the spring further and further until a latch releases, and that elbow catapults out into a very unlucky crab, scientist’s finger, or aquarium wall (they have been known to crack the glass).
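To get a feel for why slow loading plus fast release matters, here’s a back-of-the-envelope sketch. All the numbers are invented for illustration (not measured mantis shrimp values); the point is just that releasing the same stored energy hundreds of times faster multiplies the power by the same factor.

```python
# Illustrative sketch of power amplification: the same energy, loaded
# slowly by muscle and released quickly by a spring-and-latch system.
# All numbers are made up for illustration, not measured values.

def power_watts(energy_joules, time_seconds):
    """Average power = energy / time."""
    return energy_joules / time_seconds

stored_energy = 0.1    # J of elastic energy loaded into the "spring"
loading_time = 0.5     # s of slow muscle twitching to compress it
release_time = 0.001   # s for the latch to release and the elbow to swing

muscle_power = power_watts(stored_energy, loading_time)   # about 0.2 W
strike_power = power_watts(stored_energy, release_time)   # about 100 W

amplification = strike_power / muscle_power
print(f"Power amplification: {amplification:.0f}x")       # 500x
```

The muscle never has to be fast; the latch converts half a second of slow work into a millisecond of explosive output.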
Our bodies produce a lot of electricity to do everyday things. The brain uses electricity to propagate information from one neuron to another, and we use electroencephalography, or EEG, to see how that activity changes when we perform cognitive tasks. Electromyography, or EMG, is the equivalent technique for visualizing the electrical activity of muscles. What I want to do for my project is capture the mantis shrimp EMG (“electro” = electricity, “myo” = muscle, “graphy” = visualization) that reflects the buildup of mechanical energy before its strike, and then capture slow-motion video of the strike itself. Maybe I’ll even be able to see some cavitation! The EMG trace for mantis shrimp strikes is quite well studied: a fantastic paper on strike EMGs from 2015 (see below) shows a distinctive pattern of activation in the muscles, and I’ll be looking for a similar kind of trace when I do my work.
Power-amplifying EMG trace from a mantis shrimp leading up to a strike
Before I can get started on that, I have to practice getting EMGs from other organisms that use power amplification: specifically, crickets, grasshoppers, and (for the extra ew-factor) cockroaches. That’s what I’ve been doing for the past week: cobbling together a surgery rig, anesthetizing insects in ice, and implanting EMG probes. And it’s worked!
Cockroach EMG data I acquired the old-fashioned way: with an oscilloscope.
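Once a trace like this is digitized, picking out muscle activation is mostly a matter of finding bursts of large spikes against a quiet baseline. Here’s a minimal sketch of that idea, run on a synthetic stand-in for an EMG trace rather than my real cockroach data; the function name and thresholds are my own inventions for illustration.

```python
import random

random.seed(0)

# Synthetic stand-in for a digitized EMG trace (arbitrary units):
# low-amplitude baseline noise with a burst of big spikes in the middle.
baseline = [random.gauss(0, 0.05) for _ in range(200)]
burst = [random.gauss(0, 1.0) for _ in range(50)]
trace = baseline[:100] + burst + baseline[100:]

def find_bursts(samples, threshold, min_gap=10):
    """Group supra-threshold samples into bursts.

    A burst starts when |sample| exceeds `threshold` and ends once
    `min_gap` consecutive samples stay below it. Returns a list of
    (start_index, end_index) pairs.
    """
    bursts = []
    start = None
    quiet = 0
    for i, s in enumerate(samples):
        if abs(s) > threshold:
            if start is None:
                start = i
            quiet = 0
        elif start is not None:
            quiet += 1
            if quiet >= min_gap:
                bursts.append((start, i - min_gap))
                start = None
                quiet = 0
    if start is not None:
        bursts.append((start, len(samples) - 1))
    return bursts

bursts = find_bursts(trace, threshold=0.3)
print(bursts)  # one burst, spanning roughly samples 100-150
```

Real EMG analysis usually rectifies and smooths the signal first, but even this bare threshold detector is enough to timestamp when a muscle wakes up.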
Further information on the mantis shrimp:
Hey everyone, it’s Ilya again. If you remember me from last summer, I’m the octopus guy; if not, don’t worry, I’ll introduce myself again. I’m now a third year at UC Berkeley, studying Electrical Engineering and Computer Science, and this summer I’m tackling the problem of making a fun brain-themed neurorobot! (For more information on this project, check out our collaborator Chris Harris’s blog post here.)
I’m specifically focusing on making the neurorobot see like we do: recognizing colors, shapes, and even complex objects in its surroundings. Small problem, however: while a human brain talks like this, using constantly evolving neural spike trains:
Our computer talks a little bit more like this, with a digital pre-trained neural network:
So, unfortunately, a computer is a little bit harder to teach shapes to than a little kid, since it’s lacking all those beautiful neural pathways that nature has been working on for millions of years. Instead, we have to build those pathways ourselves. I feel a little like I’m playing a digital Frankenstein, trying to give my creation a brain.
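To make that contrast concrete, here’s a toy sketch: a leaky integrate-and-fire neuron that answers over time with a spike train, next to a single artificial-network unit that answers once with a number. All the parameters are arbitrary illustrations, not anything from our actual robot brain.

```python
# Toy contrast between the two "languages" above (all numbers arbitrary).

def lif_spike_train(input_current, steps=100, dt=1.0, tau=10.0, threshold=1.0):
    """Leaky integrate-and-fire neuron: the voltage leaks toward zero,
    input charges it up, and crossing threshold emits a spike and resets.
    Returns the list of time steps at which spikes occurred."""
    v, spikes = 0.0, []
    for t in range(steps):
        v += dt * (-v / tau + input_current)   # leaky integration
        if v >= threshold:
            spikes.append(t)
            v = 0.0                            # reset after a spike
    return spikes

def static_unit(inputs, weights, bias=0.0):
    """A single artificial-network unit: one weighted sum, one output,
    no notion of time at all."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return max(0.0, total)                     # ReLU activation

print(lif_spike_train(0.15))                  # steady input -> a rhythm of spikes
print(static_unit([0.5, 0.2], [1.0, -0.3]))   # one number in, one number out
```

The spiking neuron keeps talking as long as you keep listening; the static unit says its piece once and goes quiet. Bridging those two styles is a big part of this project.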
Speaking of my creation, this is my handheld video retrieval unit, a wireless IP camera. It’s a stand-in for the final robot just so I have data to work with, and its name is Weird Duck:
So far I’m still working out the first basic problems of image recognition and localization (for the more tech-minded: I’m working with Faster R-CNN, a region-based convolutional neural network, as described at https://docs.microsoft.com/en-us/cognitive-toolkit/object-detection-using-faster-r-cnn). I hope to be able to post some cool updates soon, but in the meantime, I highly suggest checking out this video of me tracking a glue stick by analyzing the probability distribution of orange color in the live video:
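For the curious, the core trick behind that glue-stick video can be sketched without a camera at all. The toy below uses a synthetic 8x8 “image” of hue values and an assumed hue band for orange: it builds a hue histogram for the target, back-projects it into a per-pixel probability map, and tracks the centroid of the probability mass. The real pipeline does the same thing with OpenCV on live video.

```python
# Toy version of color-probability tracking, on a synthetic image of
# hue values (0-179, as OpenCV encodes hue). The hue band for "orange"
# and the frame contents are invented for illustration.

ORANGE_HUES = range(5, 25)          # assumed hue band for "orange"

# 8x8 "frame": background hue 100 (blue-ish), with a 3x3 orange blob at
# rows 2-4, cols 5-7 standing in for the glue stick.
frame = [[100] * 8 for _ in range(8)]
for r in range(2, 5):
    for c in range(5, 8):
        frame[r][c] = 15

# Reference histogram: probability of each hue given "this is the target".
hist = {h: 1.0 / len(ORANGE_HUES) for h in ORANGE_HUES}

# Back-projection: replace every pixel with the probability that its hue
# belongs to the target's hue distribution.
backproj = [[hist.get(px, 0.0) for px in row] for row in frame]

# Track the blob as the centroid of the probability mass.
total = sum(sum(row) for row in backproj)
row_c = sum(r * p for r, row in enumerate(backproj) for p in row) / total
col_c = sum(c * p for row in backproj for c, p in enumerate(row)) / total
print((row_c, col_c))   # centroid lands in the middle of the blob: (3.0, 6.0)
```

Run this on every frame and the centroid follows the glue stick around; OpenCV’s CamShift does essentially this, plus an adaptive search window.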
Can robots think and feel? Can they have minds? Can they learn to be more like us? To do any of this, robots need brains. Scientists use “neurorobots” – robots with computer models of biological brains – to understand everything from motor control and navigation to learning and problem solving. At Backyard Brains, we are working hard to take neurorobots out of the research labs and into the hands of anyone who wants one. How would you like a robot companion with life-like habits and goals? Even better, how would you like to visualize and rebuild its brain in real-time? Now that’s neuroscience made real!
I’m Christopher Harris, a neuroscientist from Sweden who for the past few years has had a bunch of neurorobots exploring my living room floor. Last year I joined Backyard Brains to turn my brain-based rugrats into a new education technology that makes it possible for high-school students to learn neuroscience by designing neurorobot brains. Our robots have cameras, wheels, microphones and speakers, and students use a drag-and-drop interface to hook them all up with neurons and neural networks into an artificial brain. Needless to say, the range of brains and behaviors you can create is limitless! Twice already we’ve had the opportunity to pilot our neurorobots with some awesome high-school students, and we’re learning a ton about how to make brain design a great learning experience.
But hang on, is this just machine learning (ML) dressed up to look like neuroscience? Not at all. Although ML algorithms and biological brains both get their power from connecting lots of neurons into networks that learn and improve over time, there are also crucial differences. Biological neurons are complex and generate spontaneous activity, while ML neurons are silent in the absence of input. Unlike ML networks, biological brain models are ideally suited for “neuromorphic” hardware, which has extraordinary properties, including (some say) the ability to support consciousness. Finally, while ML networks are organized into neat symmetrical layers with only the occasional feedback-loop, biological brains contain a huge diversity of network structures connected by tangles of criss-crossing nerve fibres. Personally I’m a big fan of the brain’s reward system – the sprawling, dopamine-driven network that generates our attention, motivation, decision-making and learning. So rest assured, fellow reward-enthusiasts, our neurorobots have a big bright “reward button” to release dopamine into the artificial brain, reinforce its synapses and shape its personality.
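Here’s a sketch of what pressing that reward button might do to one synapse: a three-factor rule where pre- and post-synaptic activity only produce lasting strengthening when dopamine is present. The function and all its numbers are invented for illustration, not taken from our actual neurorobot software.

```python
# Sketch of a dopamine-gated ("three-factor") Hebbian learning rule.
# All names and numbers are invented for illustration.

def update_synapse(weight, pre_active, post_active, dopamine,
                   learning_rate=0.1, decay=0.01):
    """A synapse only strengthens when its two neurons were co-active
    AND dopamine was released; otherwise it slowly decays."""
    if pre_active and post_active and dopamine:
        weight += learning_rate          # reward reinforces the active pathway
    else:
        weight -= decay * weight         # slow forgetting otherwise
    return weight

w = 0.5
# Press the reward button while the pathway is active: the weight grows.
for _ in range(10):
    w = update_synapse(w, pre_active=True, post_active=True, dopamine=True)
print(round(w, 2))   # 1.5

# No reward: the same activity lets the weight decay instead.
for _ in range(10):
    w = update_synapse(w, pre_active=True, post_active=True, dopamine=False)
print(round(w, 2))
```

That gating is the crucial difference from plain Hebbian learning: correlation alone isn’t enough, the brain (or the student at the reward button) has to say “yes, that was good.”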
Interested? If you’d like to take part in a workshop to learn brain design for neurorobots, or if you’re a teacher and would like Backyard Brains to come and give your students a hands-on learning experience they’ll never forget, please email me at firstname.lastname@example.org, and check back here for updates.