Arushi (age 10) with her dragonfly bot at Elle Hacks 2020
What did you do with your weekends when you were 10 years old? We bet it wasn’t participating in a women-driven hackathon where you developed your own functioning neural robot! (We bet you wish you had, though.) Arushi Nath, age 10, recently participated in Elle Hacks 2020, an all-women hackathon hosted by York University in Canada. Arushi saw our DIY Neurosci TED episode and was fascinated by our findings about dragonflies, nature’s most accurate hunters, and she decided to make a bot that mimicked dragonfly behavior.
Arushi’s Dragonfly Bot
Dragonflies are highly accurate hunters thanks to their ability to sense prey using dedicated neurons on each side of the brain, and Arushi wanted her bot to capture this sensing. Using an Arduino, mini servos, and a breadboard, Arushi devised a Dragonfly Bot that could sense light from either the left “eye” or the right “eye” and trigger the corresponding output. In nature, these outputs would steer the dragonfly left or right toward its target. In two days, Arushi built and coded a working robot that could simulate basic dragonfly visual processing!
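Arushi’s exact code isn’t published, but a bot like this can be surprisingly simple. Here’s a minimal Arduino sketch of the idea: two photoresistor “eyes” on the analog pins, two mini servos, and whichever eye sees more light wins. The pin choices, threshold, and servo angles below are our illustrative guesses, not Arushi’s actual build:

```cpp
// Sketch of a light-steered "dragonfly bot": two photoresistors act as
// left/right eyes, and whichever eye sees more light drives its servo.
// Pin assignments and thresholds are illustrative assumptions.
#include <Servo.h>

const int LEFT_EYE = A0;        // photoresistor voltage divider, left side
const int RIGHT_EYE = A1;       // photoresistor voltage divider, right side
const int DIFF_THRESHOLD = 60;  // ignore small left/right differences

Servo leftWing;
Servo rightWing;

void setup() {
  leftWing.attach(9);     // mini servo signal pins (assumed wiring)
  rightWing.attach(10);
  leftWing.write(90);     // start both servos centered
  rightWing.write(90);
}

void loop() {
  int left = analogRead(LEFT_EYE);   // 0-1023, brighter = higher
  int right = analogRead(RIGHT_EYE);

  if (left - right > DIFF_THRESHOLD) {
    leftWing.write(150);   // light on the left: flick the left servo
    rightWing.write(90);
  } else if (right - left > DIFF_THRESHOLD) {
    rightWing.write(150);  // light on the right: flick the right servo
    leftWing.write(90);
  } else {
    leftWing.write(90);    // no clear winner: rest
    rightWing.write(90);
  }
  delay(50);
}
```

Just like the dragonfly’s paired target-detecting neurons, each “eye” only needs to report which side the signal is stronger on; the steering decision falls out of the comparison.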
Arushi isn’t the only one in her family who digs science, either. The Nath family from Toronto, Canada, created their “maker-family enterprise,” HotPopRobot, in 2014 after they won a hackathon together. You can follow HotPopRobot on Twitter at @wonrobot for more details about their events and projects (we will definitely be giving them a follow)! Arushi has been attending maker events since she was 5 years old, and she is thrilled to be able to participate all on her own now.
Arushi presenting her bot to the judges
It is wonderful that more and more events like Elle Hacks have been popping up across the scientific community in recent years. Elle Hacks centers women in science, creating a place for typically underrepresented individuals to develop their interests and foster connections, and events like it make it possible for girls like Arushi to get a head start on figuring out their passions.
At Backyard Brains, we are in the business of democratizing neuroscience, and we feel incredibly validated when our past projects inspire kids to take the initiative and grab the reins of their scientific future. We hope Arushi continues to hack her way into adulthood!
Over 11 sunny Ann Arbor weeks, our research fellows worked hard to answer their research questions. They developed novel methodologies, programmed complex computer vision and data processing systems, and compiled their experimental data for poster, and perhaps even journal, publication. But, alas and alack… all good things must come to an end. Fortunately, in research, the end of one project is often the beginning of the next!
Some of the fellows intend to continue working on the research they began here while they’re away, and many of these projects will be continued next summer! Definitely expect to hear updates from Nathan’s EEG Visual Decoding project and Joud’s Sleep Memory project. Additionally, two of the projects will continue over the next few months: Zach’s Songbird Identification and Shreya’s Electric Fish Detector projects will run through December!
Meet the Fellows, See the Projects
The fellows are off to a great start! Check out their blog posts introducing their projects:
Our fellows experienced the peaks and valleys of research this summer, but they all came out on top! Check out their final posts for their results, posters, and other details!
A few of our fellows are staying on through the next semester for longer-term development projects! Zach will be back working with his team on the Songbird Identification Device project, and Shreya will be working through December on the Electric Fish Detector project. Expect updates on their progress soon!
Welp, it’s my last day of work here at Backyard Brains! It’s been a fun 11 weeks with my fellow interns, but all things must end. Last week we wrapped up all the TED filming for our miniseries episodes. I had a great time, and I’m really looking forward to seeing the final result.
The dragonfly project ended in a good place; we have a solid amount of data from the final setup and succeeded in developing a replicable, recoverable prep. I take a dragonfly that has been in the fridge for a few hours and carefully restrain its wings back with a “helping hands” clamp covered in cloth, which prevents damage to the wings. Then I wrap the dragonfly in a cloth, leaving only its head exposed, so it can’t move and pull out the electrode wires during recording. The cloth is taped and pinned to the clamp’s cloth to hold it in place. Finally, I use silly putty to hold the electrode stick in place so the wires don’t come out when I prepare the recording electrodes and move the dragonfly later.

We modified one of the Backyard Brains Micromanipulator electrodes so that instead of a grounding pin, we use a reference electrode. Then, on the dragonfly, I place the two electrode wires on either side of the single, exposed ventral nerve cord.

I also made a few new stimuli, all on generic-size paper. One had a fake plastic fly glued to the middle, and on the other four I drew dots of various sizes in the center: 3 mm, 7 mm, 2.3 cm, and 9 cm in diameter.
I waved these papers by hand left and right, up and down, and even switched them out within the same recording to compare size preferences as well as direction. Beyond just seeing a reaction, I was interested in the directionality of the response.
This indicates that certain neurons within the dragonfly’s nervous system, like the target-selective descending neurons (TSDNs), help the dragonfly determine, in an almost mechanical way, what direction a target is moving. This removes some “post-processing” of the information, allowing the dragonfly to react more quickly and hunt its prey more efficiently. Seeing this kind of evoked response in my trials was a great success for the project.
As you can see in the results above, as I improved my prep and experimented with new electrodes, I began to see better results. By the end, I was seeing responses in most of my preps, observing a directional bias more frequently, and finding more evidence of a size-discriminating response. By the time we presented our projects via a poster presentation on August 2nd, I had tallied my data into success rates for obtaining certain kinds of signals with this final prep, giving students who repeat this experiment an idea of how difficult or easy it will be to see different responses.
Further, we are hoping to publish these results, but to do so, the stimuli cannot be moved by hand; the human error in timing the event markers in Spike Recorder against the movement of the stimulus is not accurate or consistent enough for a peer-reviewed journal. So we built a servo-motor rig that moves the paper back and forth while simultaneously sending event markers to the software. The rig still has a lot of problems, and I ran out of time to work on them, so if my project is continued next summer, the rig should be the focus: really ironing out the automation and precision of stimulus delivery.
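For anyone picking this up next summer, the firmware side of the rig is roughly the sketch below. This is a hedged illustration, not the actual rig code: the pin number, sweep angles, sweep speed, and marker format are all assumptions, and how the markers actually get into Spike Recorder is its own problem to solve. The core idea is just that the same microcontroller that moves the servo also announces each sweep, so stimulus and marker can’t drift apart:

```cpp
// Sketch of an automated stimulus rig: a servo sweeps the paper stimulus
// back and forth, and at each sweep onset the Arduino emits a marker over
// USB serial. Pin number, angles, speed, and marker format are assumptions.
#include <Servo.h>

Servo stimulus;
const int SERVO_PIN = 9;
const int LEFT_ANGLE = 30;
const int RIGHT_ANGLE = 150;
const int STEP_DELAY_MS = 15;  // controls sweep speed

void sweepTo(int target, const char* marker) {
  // Emit the event marker at sweep onset, stamped with the Arduino clock,
  // so the recording software can align it with the movement.
  Serial.print("EVENT,");
  Serial.print(marker);
  Serial.print(",");
  Serial.println(millis());

  int current = stimulus.read();          // last commanded angle
  int step = (target > current) ? 1 : -1;
  for (int a = current; a != target; a += step) {
    stimulus.write(a);
    delay(STEP_DELAY_MS);
  }
}

void setup() {
  Serial.begin(115200);
  stimulus.attach(SERVO_PIN);
  stimulus.write(LEFT_ANGLE);  // start at one end of the sweep
}

void loop() {
  sweepTo(RIGHT_ANGLE, "L2R");  // stimulus moves left-to-right
  delay(500);                   // pause between sweeps
  sweepTo(LEFT_ANGLE, "R2L");   // and back, right-to-left
  delay(500);
}
```

Because the marker fires in the same code path that starts the servo moving, the timing error drops from human reaction time to effectively nothing, which is exactly what the hand-waved stimuli couldn’t deliver.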
That’s all from me! Thanks for reading. Dragonfly girl, signing off.