
Volume Threshold for Songbird Audio

Hello again everyone! It’s Yifan here with the songbird project. Like my colleagues, I attended the 4th of July parade in Ann Arbor, which was very fun. I made a very rugged cardinal helmet that looks more like a rooster hat, but I guess a rooster also counts as a kind of bird, so that turned out just fine.

Anyways, since the last blog post, I have shifted my work emphasis to the user interface. After some discussions with my supervisors, we’ve decided to change the scheme a little. Instead of using machine learning to detect onsets in a recording, we are going to build an interface that lets users select an appropriate volume threshold to do the pre-processing. Then we will use our machine learning classifier to classify these interesting clips in more detail.

Why threshold based on volume, one might ask? Well, volume is the most straightforward property of sound for us. During Tech Trek, a kid asked me a very interesting question: when you are detecting birds in a long recording, how do you know the train sound you ruled out as noise isn’t a bird that just sounds like a train? Although that particular case should be easy to tell apart, the point stands: we should give users the freedom to decide what stays in the raw data. Hence, I’ve developed a simple mechanism that lets every user decide what they want and what they don’t want before classifying.

This figure is a quick visual representation of a 15-minute field recording after being processed by the mechanism I was talking about. As you can see, there is a red line in the first plot. That is the threshold for the user to define. Anything louder than this line is marked as “activity”; anything quieter is marked as “inactivity.” The second plot shows the activity over time. However, an activity like a bird call might have long silent stretches between individual calls. In order not to count those as multiple activities, we have a parameter called the “inactivity window,” which is the amount of silence required between two activities for them to count as separate.

In the above figure, the inactivity window is set to 0.5 seconds, which is very small. That is why you can see so many separate spikes in the activity plot. Below is the plot of the same data, but with an inactivity window of 5 seconds.

Because the inactivity window is larger now, smaller activities are now merged into longer continuous activities. This can also be customized by users. After this preprocessing procedure, we will chop up the long recording based on activities, and run smaller clips through the pre-trained classifier.
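For the curious, here is a rough Python sketch of the idea. It is not my exact code: it assumes the recording is already loaded as a NumPy array, computes a short-window RMS volume envelope, thresholds it, and then merges activities separated by less than the inactivity window.

```python
# Rough sketch of the thresholding idea, not my exact code.
# Assumes `audio` is a 1-D NumPy array of samples and `sr` is the sample rate.
import numpy as np

def find_activities(audio, sr, threshold, inactivity_window=5.0, frame_sec=0.05):
    """Return (start, end) times in seconds of segments louder than `threshold`."""
    frame_len = int(sr * frame_sec)
    n_frames = len(audio) // frame_len
    frames = audio[:n_frames * frame_len].reshape(n_frames, frame_len)

    # Volume envelope: RMS of each short frame, compared against the red line.
    rms = np.sqrt(np.mean(frames.astype(float) ** 2, axis=1))
    active = rms > threshold

    # Collect runs of loud frames as (start_time, end_time) segments.
    segments, start = [], None
    for i, is_loud in enumerate(active):
        if is_loud and start is None:
            start = i
        elif not is_loud and start is not None:
            segments.append((start * frame_sec, i * frame_sec))
            start = None
    if start is not None:
        segments.append((start * frame_sec, n_frames * frame_sec))

    # Merge segments whose silent gap is shorter than the inactivity window.
    merged = []
    for seg in segments:
        if merged and seg[0] - merged[-1][1] < inactivity_window:
            merged[-1] = (merged[-1][0], seg[1])
        else:
            merged.append(seg)
    return merged
```

With a small inactivity window you get lots of short segments, and with a large one they merge into longer ones, exactly like the two plots above; the resulting (start, end) pairs are what gets chopped out and handed to the classifier.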

Unfortunately my laptop completely gave up on me a couple of days ago, and I had to send it in for repair. I would love to show more data and graphs in this blog post, but I’m afraid I have to postpone that to my last post. Anyways, I wish the best for my laptop (as well as the data on it), and see you next time!


Neurorobot Video Transmission In Progress

Hey everybody, it’s your favourite Neurorobot project once again, back with more exciting updates! I went to my first knitting lesson this week at a lovely local cafe called Literati, and attended the Ann Arbor Fourth of July parade dressed as a giant eyeball with keyboards on my arms (I meant to dress as “computer vision,” but I think it ended up looking more like a strange Halloween costume).

Oh wait… Did you want updates on the Neurorobot itself?
Unfortunately it’s been more snags and surprises than significant progress; one of the major hurdles we have yet to overcome is the video transmission itself. (I did, however, put huge googly eyes on it.)

The video from the Neurorobot has to first be captured and transmitted by the bot itself, then sent flying through the air as radio waves, received by my computer, assembled back into video, loaded into program memory, and processed; only then can I finally give the bot a command to do something. Every part of this process incurs a delay, some small, some big, and so far they add up to about 0.85 seconds.

(A demo of how I measure the delay: the difference between the stopwatch in the bot’s recording and the one running live on my computer.)
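Besides the stopwatch trick, you can see where the time goes stage by stage. Here is a toy Python sketch of how per-stage delays could be logged on the laptop side; it is not the actual Neurorobot code, and the three helper functions are stand-in stubs for illustration.

```python
# Toy sketch of logging per-stage delay on the laptop side; not the actual
# Neurorobot code. The three helpers below are stand-in stubs for illustration.
import time

def receive_frame():
    """Stand-in for receiving and decoding one frame from the robot's WiFi stream."""
    time.sleep(0.05)  # pretend radio + decode time
    return b"frame"

def detect_objects(frame):
    """Stand-in for the computer-vision step."""
    time.sleep(0.03)
    return ["mug"]

def send_motor_command(detections):
    """Stand-in for telling the robot what to do about what it saw."""
    time.sleep(0.01)

def timed(label, fn, *args):
    """Run fn(*args), print how long it took in milliseconds, and return its result."""
    t0 = time.perf_counter()
    result = fn(*args)
    print(f"{label}: {(time.perf_counter() - t0) * 1000:.1f} ms")
    return result

for _ in range(3):
    frame = timed("receive + decode", receive_frame)
    detections = timed("process", detect_objects, frame)
    timed("send command", send_motor_command, detections)
```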

Unfortunately, human perception is a finicky subject; in designing websites and applications, it has typically been found that anything up to 100 ms of delay is perceived as “instantaneous,” meaning the user won’t send you angry emails about how slow a button is to click. At 0.85 seconds, however, even if you show the robot a cup or a shoe and tell it to follow it, the object may very well leave its view before it has had a chance to react. This makes it hard for the user to see the connection between showing the object and the bot moving towards it, leading them to question whether it’s actually doing anything at all.

Unfortunately, the protocol the WiFi module on our robot uses to send video to the laptop isn’t easy to figure out, but we’ve made sizable progress. We’ve gotten the transmission delay down to 0.28 seconds, though the code that does it is three different applications “duct-taped” together, so there’s still a little bit of room for improvement.

I hope to have much bigger updates for my next blog post, but for now here’s a video demo of my newest mug tracking software.
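My tracker is more involved than this, but for a flavor of what mug tracking can look like, here is a bare-bones sketch using OpenCV’s off-the-shelf CSRT tracker on a regular webcam. The camera index is a placeholder (the real bot’s video arrives over WiFi), this is not my actual code, and it needs the opencv-contrib-python package.

```python
# Bare-bones tracking sketch (not my actual mug tracker), using OpenCV's
# built-in CSRT tracker; requires the opencv-contrib-python package.
import cv2

cap = cv2.VideoCapture(0)  # 0 = default webcam; the robot's stream arrives differently
ok, frame = cap.read()

# Draw a box around the mug in the first frame, then let the tracker follow it.
box = cv2.selectROI("select the mug", frame, showCrosshair=False)
tracker = cv2.TrackerCSRT_create()
tracker.init(frame, box)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, box = tracker.update(frame)
    if found:
        x, y, w, h = (int(v) for v in box)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # press Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```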

{Previous update: http://blog.backyardbrains.com/2018/06/neurorobot-on-wheels/}


A Peagrim’s Progress, or, “Let’s get down to pea-zness”

Hello! There has been some trial and error since my last update. I started my experiment with Monica Gagliano’s protocol (overly simplified!):

  1. Grow seedlings, 48 of them:
  2. Get them used to 8 hours of light, 16 hours dark (circadian rhythm):
  3. Train them under decision covers for 3 days:
  4. Test them

She had 48 of them. Unfortunately, those PVC pipes are $16 each:

PVC (above) x 48 = $768

Well, that’s not very practical for a classroom experiment.

So I tried my DIY version.

  1. Take a plant cell box:
  2. Make cardboard covers for each of those 48 cells

Fit the covers over the plant cell box:

Make fan/light circuits for them:

Hook them up:

Here is the schematic:

So in this way, 48 plants are being trained with two circuit boards.

That was in theory.

In reality:

  1. The cardboard was way too flimsy to stay on the appropriate columns.
  2. The 5V fans (needed to work with the LittleBits circuits) were way too weak.

Everything kept sliding around and falling apart while I was supposed to be training them. I was Chi-Fu trying to keep soldiers in line when I needed to be Captain Shang.

The results?

Pea-tiful 🙁 

They grew as straight as sticks, when I was looking for this result:

Plants that grew towards where I presented the fan last, towards the middle of each of the two rows.

The plants were also tall and spindly, meaning they had tried to get to the top as quickly as possible.

So back to the drawing board. We decided to do everything PROPERLY this time. Stick to the protocol. Exactly.

One problem I’d had was everything slipping all over the place, so I bolted things down:

We’ll start with a small n of 4. I have another experimental design coming up, but you’ll have to wait until the last post for that, and hopefully for some more exciting data!