
Funding Opportunities and Support: Free Money for Neuroscience Ed!

We’ll be your secret sauce; it’s worked before!

In an effort to make neuroscience education accessible to everyone, we are always keeping our eyes peeled for grant funding opportunities we can recommend to teachers and admins who are excited about Neuroscience and STEM but might not have the funding right now.

Toshiba America Foundation

The Toshiba America Foundation offers several very generous grant cycles throughout the school year, and judging by a recent tweet, someone on the Toshiba judging panel is as excited about K-12 Neurosci Ed as the students are!

A deadline is approaching quickly for K-5 grants, and several other grants throughout the year (staggered every few months) offer funding for Middle and High School students.

Lowe’s Toolbox for Education

Awarding up to $5,000 per school for projects ranging from new gardens to STEM initiatives, Lowe’s Toolbox for Education offers generous funding for creative K-12 projects of all designs.

Maybe Lowe’s Toolbox for Education can help your school win a set of Backyard Brains Toolboxes for Neuroscience!

The due date for submissions is September 28th.

Need help applying to a grant?
You don’t have to go it alone

If you’re looking for help finding or applying for grants, our partners and friends at Ward’s Science have a free grant support service. From finding the grants to writing winning applications, they can help you with the whole process.

Whether you’re applying to the Toshiba America Foundation grant, starting a DonorsChoose campaign, or simply searching for potential grants, the indomitable Rusti Berent of Ward’s Grant Services is ready to guide you through the process.

And, if you need any more encouragement to reach out to Ward’s Grant Services, know that they’ve helped schools raise over $1 million in grant funding. Also, check out this recent, and very relevant, grant winner from their Success Stories page:

  • Matthew F., Billerica, MA, Toshiba America Foundation

The Claw
And other “grant bait” neuroscience kits

Is your goal to creatively meet Middle School NGSS, introduce a Neuroscience unit to your STEM class, enhance a PLTW Biomedical Sciences class, or empower your seniors to do their own Neuroscience research projects? We’ve got powerful kits that will throw the door wide open for your students, opening up a world of DIY Neuroscience, Biomedical Engineering, and hands-on Anatomy and Physiology. See a high school success story here!

Check out these kits and more in our store, create your own grant wish-list, request a quote, and get to work!


Take a step back and look into the future

Hello friends, this is Yifan again. As the end of the summer draws near, my summer research is also coming to a conclusion. The work I did over the summer turned out very different from what I expected. Since this is a wrap-up post for an ongoing project, let’s first go through what exactly I did this summer.

The above is the product flow graph for our MDP project. All the blue blocks and paths are what I worked on this summer. In previous posts I wrote about progress and accomplishments on everything except the bird detection algorithm.

In my second blog post, I talked about using a single HMM (hidden Markov model) to differentiate between a bird and a non-bird. One problem was that HMM classification takes a long time: running it on a 30-minute recording takes about 2 minutes. Considering that we need to analyze data much longer than that, we have to pre-process the recording and ditch the less interesting parts, so that only the interesting parts of the recording go into the HMM classifier.
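For the curious, here’s a minimal sketch of that kind of classification, assuming Python’s hmmlearn and scipy libraries. The feature extraction, model settings, and threshold below are illustrative stand-ins, not our actual pipeline:

```python
# Hedged sketch: one GaussianHMM trained on clips of the target call, then a
# thresholded log-likelihood marks each clip 0 (match) or 1 (no match).
import numpy as np
from hmmlearn import hmm
from scipy.io import wavfile

def band_features(signal, frame_len=512, n_bands=13):
    """Toy spectral features: log energy in coarse frequency bands, per frame."""
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    spectra = np.abs(np.fft.rfft(frames, axis=1))
    bands = np.array_split(spectra, n_bands, axis=1)
    return np.log1p(np.stack([b.mean(axis=1) for b in bands], axis=1))

# Train on features concatenated from a handful of clips of the matching call:
model = hmm.GaussianHMM(n_components=5, n_iter=50)
# model.fit(np.vstack(train_feats), lengths=[len(f) for f in train_feats])

def classify(path, threshold=-50.0):
    rate, signal = wavfile.read(path)          # assumes a mono recording
    feats = band_features(signal.astype(float))
    # Per-frame log-likelihood above threshold -> 0 (bird), else 1 (not bird)
    return 0 if model.score(feats) / len(feats) > threshold else 1
```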

This figure is the runtime profile of running the HMM on a full 30-minute recording; the classification took about 2 minutes. After splitting out the interesting parts of the recording, we only run classification on those short clips, which reduces the runtime by a very large factor (see figure below).
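As a rough illustration of that pre-processing step (the window length and threshold below are assumptions, not our real values), a simple energy gate over the recording might look like this:

```python
# Hedged sketch: keep only high-energy windows of a long recording so the
# slow HMM step runs on short clips instead of the full 30 minutes.
import numpy as np
from scipy.io import wavfile

def interesting_windows(path, win_s=0.5, k=3.0):
    rate, signal = wavfile.read(path)
    signal = signal.astype(float)
    win = int(win_s * rate)
    n = len(signal) // win
    energy = (signal[:n * win].reshape(n, win) ** 2).mean(axis=1)
    # Keep windows whose energy stands well above the recording's median
    keep = np.flatnonzero(energy > k * np.median(energy))
    return [(i * win_s, (i + 1) * win_s) for i in keep]  # (start, end) seconds
```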

One thing you might have noticed in these two graphs is that the runtime for wav_parse is also extremely long. Since there is almost no way to get around parsing the wav file itself, the time consumed here will always be a bottleneck for our algorithm. Instead of writing a better parsing function, I did the mature thing and blamed it all on Python’s inherent performance issues. Jokes aside, someone will eventually need to deal with this problem, but I think optimization can wait for now.

This figure is the raw classification output using a model trained on 5 samples of a matching bird call. If the model thinks a window in the recording matches, it marks that window as 0, otherwise 1. Basically, this mess tells us that of these 23 clips, only clips 9 and 10 do not contain the bird used to train the model.

One might ask: why don’t you have a plot or graph for this result? Don’t yell at me yet, I have my reasons… I literally have more than a hundred clips from one 30-minute recording. It’s easier for me to quickly go through the results if they are clustered together in a text file.

Although my mentor Stanislav and I had decided on trying out HMMs for the bird detection, the results aren’t very promising. There is the possibility that an HMM is not a good choice for this purpose after all, which means I might need to do more research to find a better solution for bird detection. Luckily, since songbird is an ongoing project, I will get my full team back in September. I believe I have made some valuable contributions to this project over the summer, and hopefully they will help us achieve our initial goals and plans for this product.

This summer has been a wonderful time for me. I would like to thank all my mentors and fellows for their help along the way; it really meant a lot to me. Looking to the future, I definitely believe this project has more potential than just classifying birds, but for now I am ready to enjoy the rest of the summer so I can work harder when I come back to Ann Arbor in the fall.


Quicker and Smarter: Neurorobot on the hunt

It feels like just last week that I landed here in Michigan, yet it’s almost time for me to go back home. My work on the project here is wrapping up, and while there’s still so much to be done, I’m confident that I’m leaving behind a good framework.

These past few weeks have been so eventful! I was home sick for a week with the flu, which sucked, but thankfully I could just take my work home with me. The Ann Arbor Art Fair (pictured above) was incredible; half of downtown was covered in tents full of beautiful creations from all around the country. Completely unrelated, half my paycheck now seems to be missing.

If you were following my last blog post, you may remember that one of the biggest hurdles I had to overcome was the video delay between the bot and my laptop; big delay meant bad reaction speed, which meant a sad bot.

Through changing libraries, multithreading, and extensive testing, we’ve now gotten the delay to just under 0.4 seconds, less than half of what it was with the old versions of the code!! This may not seem too exciting, but it means the bot now reacts to things much more smoothly and calmly than before, as demonstrated in this video of it hunting down a ferocious bag of chips:
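I won’t walk through our actual camera code here, but one common multithreading pattern (sketched below with OpenCV, as an assumption about the general approach rather than our exact implementation) is to keep a background thread grabbing frames, so the main loop always works on the newest frame instead of a stale, buffered one:

```python
import threading
import cv2

class LatestFrameReader:
    """Grab frames on a background thread; read() always returns the newest."""
    def __init__(self, src=0):
        self.cap = cv2.VideoCapture(src)
        self.frame = None
        self.lock = threading.Lock()
        self.running = True
        threading.Thread(target=self._grab, daemon=True).start()

    def _grab(self):
        while self.running:
            ok, frame = self.cap.read()
            if ok:
                with self.lock:
                    self.frame = frame  # overwrite: only the latest frame matters

    def read(self):
        with self.lock:
            return self.frame

    def stop(self):
        self.running = False
        self.cap.release()
```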

Another new accomplishment is neurons! Spiking neurons, to be specific; nothing like your regular generic-brand neural net neurons. These ones are much prettier (using the Izhikevich model) and react to stimuli very organically, producing graphs that resemble what you’d see on your lab oscilloscope:
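The model itself is refreshingly small. Here’s a minimal sketch of a single Izhikevich neuron; the a, b, c, d values are the standard “regular spiking” parameters, and the constant input current stands in for whatever sensor signal drives the neuron:

```python
class IzhNeuron:
    """One Izhikevich neuron, stepped 1 ms at a time."""
    def __init__(self, a=0.02, b=0.2, c=-65.0, d=8.0):
        self.a, self.b, self.c, self.d = a, b, c, d
        self.v, self.u = c, b * c  # membrane potential (mV), recovery variable

    def step(self, I):
        """Advance one step with input current I; return True on a spike."""
        self.v += 0.04 * self.v**2 + 5 * self.v + 140 - self.u + I
        self.u += self.a * (self.b * self.v - self.u)
        if self.v >= 30.0:         # spike threshold: reset and report
            self.v, self.u = self.c, self.u + self.d
            return True
        return False

# Feed a constant current and collect the voltage trace for plotting
neuron = IzhNeuron()
trace = []
for _ in range(1000):
    neuron.step(10.0)
    trace.append(neuron.v)
```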

More importantly, these neuron models don’t just look good; they also behave really well. As an example, here are just two neurons (listen for the different clicking tones) connected to the motors of the Neurorobot, one activated and one inhibited by the ultrasonic distance sensor:

With just these two neurons and fewer than twenty lines of code, the Neurorobot can already display cool exploration behaviours, avoiding getting stuck on walls and objects as best it can. Neurons are powerful; that’s why creatures like the roundworm can survive with just over 300 of them: it doesn’t take a lot to do a lot.
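To give a flavour of how little code that takes, here’s a sketch of that wiring, reusing the IzhNeuron class from the sketch above. read_distance_cm() and set_motor() are hypothetical stand-ins for the robot’s real sensor and motor APIs:

```python
import random

def read_distance_cm():              # hypothetical ultrasonic sensor stub
    return random.uniform(5.0, 150.0)

def set_motor(side, speed):          # hypothetical motor driver stub
    print(f"{side} motor -> {speed:.1f}")

left, right = IzhNeuron(), IzhNeuron()
for _ in range(200):                 # one loop iteration per 1 ms step
    proximity = max(0.0, 100.0 - read_distance_cm())
    # Proximity excites the left neuron and inhibits the right one, so the
    # bot steers away when something gets close.
    set_motor("left", 1.0 if left.step(0.2 * proximity) else 0.0)
    set_motor("right", 1.0 if right.step(10.0 - 0.2 * proximity) else 0.0)
```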

Here’s another example, in which position neurons that are more sensitive to the left and right areas of the image space are given the task of finding something big and orange:

Notice how, when the cup is to the left of the bot’s camera, the blue neuron spikes, whereas when it drifts to the right, the green and red neurons start spiking.
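Under the hood, the input to each “position neuron” can be as simple as how much orange falls in its slice of the frame. Here’s a sketch assuming OpenCV; the HSV bounds for “orange” are an illustrative guess:

```python
import cv2
import numpy as np

def orange_inputs(frame):
    """Return drive to left/centre/right position neurons from one BGR frame."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (5, 100, 100), (20, 255, 255))  # rough orange band
    regions = np.array_split(mask, 3, axis=1)               # left, centre, right
    # Each neuron's input: the fraction of its region that is orange
    return [region.mean() / 255.0 for region in regions]
```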

There’s still plenty of optimization to be done to make the Neurorobot think and react faster: eventually the entire camera code is going to be rewritten from scratch, along with better visualisation of what’s going on under the hood. Lights, speakers, a microphone, and a good user interface are all coming soon to a Backyard Brains near you!

Christopher Harris’s Neurorobot prototypes already have a drag-and-drop interface for putting together neurons:

The real goal of this project isn’t to have students write a thousand lines of code to get the robot to do something interesting, but for them to learn and explore the wonderful behaviour of neurons just by playing around with them: changing parameters, connecting their synapses to different components, and seeing if they can get the bot to wiggle when it sees a shoe, or beep and back away from anything that looks like a vase. And, in keeping with Backyard Brains’ tenets, seeing what they will discover.