Start the presses! Backyard Brains has a new publication! Our Neurorobot paper is titled “Neurorobotics Workshop for High School Students Promotes Competence and Confidence in Computational Neuroscience.” You can read the article in its entirety on the Frontiers in Neurorobotics website–because we believe neuroscience knowledge is for everyone, and no one should have to pay for access! The paper details our recent work developing the methodologies essential for making neurorobotics accessible in high school classrooms.
We began the Neurorobot project in 2018, when notable neuroscientist Christopher Harris joined the team with his gaggle of “brain-based rugrats” in tow. The Neurorobot project aimed to make neurorobotics more enticing to high school learners, and we quickly started to brainstorm (pun intended!) how we would implement such experiments in schools.
The Neurorobot Workshops
Chris ran his one-week Neurorobot workshop at two high schools, reaching nearly 300 students in total. The students piloted the Neurorobot App developed for controlling the bots and provided feedback on the successes and shortcomings of the workshops.
The workshops were designed to give students a base of knowledge and to increase their confidence in the scientific topics studied. Both before and after the week-long sessions, students took a quiz, and their responses were analyzed for retention and comfort level. We found significant improvement on all content questions, showcasing the effectiveness of our learning tools.
The Neurorobot Fellowship Project
If you recall, one of our fellows spent his summer working on the Neurorobot project. Ilya worked on coding the machine learning and computer vision aspects of the bot. Throughout the summer, he made progress posts, which can be found below:
There is nothing like hands-on application to reveal room for improvement, and our Neurorobotics Workshop definitely did just that! We ran into some unexpected issues and tried to adapt on the fly, and we are so excited to keep this momentum going. Based on our successes, we hope to pilot more Neurorobotics programs in the future! Is your school interested? If you would like more information on how to get involved, email christopher@backyardbrains.com!
Hello all! The summer fellowship is officially over, but it’s not quite the end of the line for the jellies and me! In this final(?) update to my blog series I’ll be recalling the findings I’ve made over this summer, showcasing the poster I presented at the UROP Symposium, sharing my road trip back home (with the jellies in tow!!!), and planning my post-fellowship jellyfish-based research!
Final Fellowship Findings:
I’ve learned a lot about Clytia hemisphaerica over this fellowship. This ranges from their appearance and life stages (polyp, ephyrae, medusa) to their husbandry and maintenance needs (acceptable salt levels, daily and weekly water changes, feeding requirements). This newfound knowledge also includes their behaviors and abilities, like how they catch and eat prey or how they dart, zig-zag, and make circles in the water. I have collected a decent number of videos for my jellyfish dataset, and I’ve done some basic position tracking on most of that dataset, but unfortunately the fellowship ended before any rigorous analysis could be completed.
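For anyone curious what that basic position tracking looks like, here is a rough sketch of one way to do it; this is an illustration under my assumptions (OpenCV, a single jelly in frame, a placeholder file name), not the exact code I used. It uses background subtraction and takes the centroid of the largest moving blob in each frame:

```python
import cv2
import numpy as np

# Minimal single-jelly position tracker: background subtraction + largest-contour centroid.
cap = cv2.VideoCapture("jelly_clip.mp4")   # placeholder file name
bg_subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

positions = []  # (frame_index, x, y)
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    mask = bg_subtractor.apply(gray)
    # Clean up the mask so dust particles don't register as the jelly.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        jelly = max(contours, key=cv2.contourArea)
        m = cv2.moments(jelly)
        if m["m00"] > 0:
            positions.append((frame_idx, m["m10"] / m["m00"], m["m01"] / m["m00"]))
    frame_idx += 1
cap.release()
```

The list of per-frame centroids is what the speed, turning, and path-shape stats get computed from later.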
However, this is not the end! I will be dedicating time over the next few weeks to progressing my research by adding features to my jellyfish tracking/analysis software to get more usable stats on the videos, by analyzing jelly video stats using unsupervised machine learning for labeling behaviors, and by getting more raw footage of these wonderful jellies to add to the dataset! (But more on that later.)
Poster Presentation:
This first photo is of the poster I made for and presented at the UROP symposium. It gives a brief introduction to Clytia hemisphaerica, explains how I created my dataset (video recordings), and shows what observations and findings were made.
This next photo shows the poster and me in action at the symposium!
I got to meet a lot of exciting people and shared endless amounts of unusual jellyfish facts with them. [Example fun fact: Did you know it’s been confirmed that some jellyfish (like the upside down jellyfish) sleep? This finding by researchers at Caltech (http://www.sciencemag.org/news/2017/09/you-don-t-need-brain-sleep-just-ask-jellyfish) was surprising since jellyfish don’t have brains or even a central nervous system, so sleep must be a more universal activity than previously thought.]
Jelly Road Trip:
The day came much too quickly – the day I had to leave Ann Arbor and go back home to Cincinnati. I spent 7 hours straight packing and loading the car with all the things I’d brought with me or accumulated during my stay.
There was a lot of stuff and it took up a lot of (hard to find) space in my compact-size sedan, but one spot remained clear: the passenger seat.
The passenger seat was reserved for the 2 remaining jellies! I got approval to take them home with me and continue my work in Cincinnati! After 220 miles of highway, the jellies finally got to see their new home (and I got to improvise a new DIY tank setup).
Now that the jellies are here, we can start on the post-fellowship jellyfish-based research plans!
Future Work:
Over the next few weeks, I plan to make more recordings of the jellyfish in a wider variety of situations. I’ll try changing environmental variables like lighting, current direction/intensity, salinity, and water temperature.
Some of the features I plan to add to the tracking/analysis software include optical flow options (to track the water current based on the dust particles visible in the videos), ellipse fitting options (to gauge when the jellyfish is actively pulsing), and multiple jelly support (for tracking 2 or more jellyfish at once).
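To give a flavor of the ellipse-fitting idea, here is a rough sketch; it's an illustration of the concept rather than the finished feature. It fits an ellipse to the jelly's outline in each frame and tracks the ratio of the two axes, which should dip and recover as the bell contracts. The `masks` input is a hypothetical per-frame binary segmentation of the jelly (however the tracker produces it):

```python
import cv2

def pulse_signal(masks):
    """Rough pulse signal: ratio of the fitted ellipse's axes per frame.

    `masks` is an iterable of binary images (one per frame) with the jelly
    segmented in white; how those masks are produced is up to the tracker.
    """
    ratios = []
    for mask in masks:
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            ratios.append(None)
            continue
        jelly = max(contours, key=cv2.contourArea)
        if len(jelly) < 5:                    # fitEllipse needs at least 5 points
            ratios.append(None)
            continue
        (_, _), (ax1, ax2), _ = cv2.fitEllipse(jelly)
        major, minor = max(ax1, ax2), min(ax1, ax2)
        ratios.append(minor / major if major > 0 else None)
    return ratios  # bell contractions show up as dips and recoveries in this ratio
```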
Finally, the machine learning portion of this project will revolve mostly around unsupervised methods, in the hope that behaviors can be found with minimal bias and human error. Some options that were discussed include basic k-means clustering as a start, followed by neural-network methods like autoencoders, which squeeze the data through a compressed layer and force the network to find patterns that can reconstruct the original data with as little information loss as possible.
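As a first pass at the clustering step, I'm imagining something along these lines with scikit-learn's k-means; the feature file name and the choice of four clusters are placeholders for whatever per-window stats and cluster count end up making sense:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# features: one row per short video window, columns are placeholder stats
# such as mean speed, turning rate, pulse frequency, etc.
features = np.load("jelly_window_features.npy")   # hypothetical file

scaled = StandardScaler().fit_transform(features)  # put all stats on a comparable scale
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(scaled)

labels = kmeans.labels_   # candidate "behavior" label per window
for k in range(kmeans.n_clusters):
    print(f"cluster {k}: {np.sum(labels == k)} windows")
```

Whatever clusters fall out would then get checked against the videos by eye to see if they correspond to recognizable behaviors (darting, circling, drifting, and so on).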
This fellowship was a great experience and I’m very excited and grateful for the opportunity to bring my project home with me and continue my research.
Hello friends, this is Yifan again. As the end of the summer draws near, my summer research is also coming to a conclusion. The work I did over the summer was very different from what I expected. Since this is a wrap-up post for an ongoing project, let's first go through what exactly I did this summer.
The above is the product flow graph for our MDP project. All the blue blocks and paths are what I worked on this summer. In previous posts I wrote about progress and accomplishments on everything except the bird detection algorithm.
In my second blog post, I talked about using a single HMM (hidden Markov model) to differentiate between a bird and a non-bird. One problem was that HMM classification takes a long time: running HMM classification on a 30-minute recording takes about 2 minutes. Since we need to analyze data much longer than that, we need to pre-process the recording and ditch the less interesting parts. This way, we only feed the interesting parts of the recording into the HMM classifier.
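To make that pre-processing idea concrete, here is a simplified sketch: pick out the high-energy windows of the recording and only score those with the trained model. It uses hmmlearn's GaussianHMM, an energy threshold, and raw samples as stand-in features, so treat the details (file name, thresholds, feature extraction) as placeholders rather than our actual pipeline:

```python
import numpy as np
from scipy.io import wavfile
from hmmlearn import hmm

def interesting_segments(samples, rate, win_s=0.5, energy_factor=3.0):
    """Return (start, end) sample indices of windows well above the median energy."""
    win = int(win_s * rate)
    n = len(samples) // win
    energies = np.array([np.mean(samples[i * win:(i + 1) * win].astype(float) ** 2)
                         for i in range(n)])
    threshold = energy_factor * np.median(energies)
    return [(i * win, (i + 1) * win) for i in range(n) if energies[i] > threshold]

rate, samples = wavfile.read("recording_30min.wav")   # placeholder file name
if samples.ndim > 1:
    samples = samples[:, 0]                            # use one channel

# Placeholder model: in practice the HMM is trained on spectral features
# extracted from the labelled bird-call samples, not on random numbers.
training_features = np.random.randn(500, 1)
bird_model = hmm.GaussianHMM(n_components=5, n_iter=20)
bird_model.fit(training_features)

# Only the "interesting" clips get scored, which is where the speedup comes from.
for start, end in interesting_segments(samples, rate):
    feats = samples[start:end].reshape(-1, 1).astype(float)  # stand-in for real features
    score = bird_model.score(feats)                          # log-likelihood under the bird model
    print(f"{start / rate:7.1f}s - {end / rate:7.1f}s: score {score:.1f}")
```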
This figure is the runtime profile of running HMM on a full 30-minute recording. The classification took about 2 minutes. After splitting out the interesting parts of the recording, we only run classification on those short clips, which reduces the runtime by a very large factor (see figure below).
One thing you might have noticed in these two graphs is that the runtime for wav_parse is also extremely long. Since there is almost no way to get around parsing the wav file itself, the time spent there will always be a bottleneck for our algorithm. Instead of writing a better parsing function, I did the mature thing and blamed it all on Python's inherent performance issues. Jokes aside, I think someone will eventually need to deal with this problem, but optimization can wait for now.
This figure is the raw classification output using a model trained on 5 samples of a matching bird call. If the model thinks a window in the recording matches the model, it marks that window as 0, otherwise 1. Basically, this mess tells us that in these 23 clips, only clips 9 and 10 do not contain the bird used to train the model.
One might ask, why don't you have a plot or graph for this result? Don't yell at me yet, I have my reasons… I literally have more than a hundred clips from one 30-minute recording, and it's easier for me to quickly go through the results when they're grouped together in a text file.
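If this ever does need to become a plot, the window-level output is easy to collapse into a per-clip verdict first. Here is a quick sketch of that aggregation; the dictionary format and the 50% threshold are my own assumptions, and the example labels are made up:

```python
import numpy as np

def clip_verdicts(window_labels, match_fraction=0.5):
    """Collapse per-window HMM output into a per-clip decision.

    `window_labels` maps a clip id to its list of window labels, using the
    convention from above: 0 = window matches the bird model, 1 = no match.
    A clip counts as containing the bird if enough of its windows match.
    """
    verdicts = {}
    for clip_id, labels in window_labels.items():
        matched = np.mean(np.asarray(labels) == 0)
        verdicts[clip_id] = matched >= match_fraction
    return verdicts

# Example with made-up labels for three clips:
example = {9: [1, 1, 1, 1], 10: [1, 1, 0, 1], 11: [0, 0, 1, 0]}
print(clip_verdicts(example))   # {9: False, 10: False, 11: True}
```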
Although my mentor Stanislav and I had decided to try HMM for the bird detection, the results aren't very promising. There is the possibility that HMM is simply not a good choice for this purpose after all, which means I may need to do more research to find a better solution for bird detection. Luckily, since Songbird is an ongoing project, I will get my full team back in September. Over this summer, I believe I have made some valuable contributions to this project, and hopefully they will help us achieve our initial goals and plans for this product.
This summer has been a wonderful time for me. I would like to thank all my mentors and fellows for their help along the way; it really meant a lot to me. Looking to the future, I definitely believe this project has more potential than just classifying birds, but for now I am ready to enjoy the rest of the summer so I can work harder when I come back to Ann Arbor in the fall.