Decoding Images in the Brain
I can see that you’re seeing a weird picture! :0
G’day, I am Nathan. I am a senior majoring in mathematical science and psychology at the State University of New York at Binghamton. Before continuing my studies in the US, I worked for six years in IT and finance after studying management and economics in South Korea. I was born in Seoul, Korea, and studied in Australia and China before going back to Korea for college and my service in the Republic of Korea Air Force.
My research interest is “intelligence” (both human intelligence and artificial intelligence). I am interested in how the brain derives strategies, through inference, for innovative solutions to problems. Long term, this information could be used to help build an AI architecture that more closely reflects how the brain works. More specifically, I build computational models to explore how the brain gives rise to learning and the phenomenon of “knowledge representation.” Then I test the predictions of these models using behavioral/neuroimaging studies in which I decode people’s behaviors and thoughts as they learn and reason.
This summer I’m conducting research on human electroencephalogram (EEG) visual decoding with the Backyard Brains fellowship. There has been much progress in neuroimaging research: studies have successfully decoded the structure, or semantic content, of static and dynamic visual stimuli from human brain activity using EEG, functional magnetic resonance imaging (fMRI), and other techniques. My goal is to detect what people are looking at using DIY EEG gear, quantitative data analysis, and pattern classification techniques. An EEG is a device that detects and records electrical activity in the human brain using small, flat metal discs (electrodes) attached to the scalp. Our brain cells communicate via electrical impulses and are active all the time, even when we are asleep. This activity shows up as wavy lines on an EEG recording, and these activity patterns can be measured to decode the content of mental processes, especially visual and auditory content. (You can learn more about EEG here: https://backyardbrains.com/experiments/EEG)
I will work with and modify BYB’s DIY EEG gear, which is affordable and efficient. This way anyone can replicate my experiment later and record raw EEG data for their own projects. I will analyze my data using spectral decomposition (see above image), also called time–frequency analysis, which quantifies the power and phase of the EEG signal at each time point and frequency. I will build and test different types of pattern classifier architectures using TensorFlow, an open-source software library for numerical computation developed by Google, along with Google Cloud’s Tensor Processing Units (TPUs), to accurately and efficiently analyze and classify categories of the preprocessed EEG data. Once I can analyze the data, I hope to decode detailed information about how the brain processes different kinds of images, and I hope to be able to discriminate, based on the EEG data alone, what kind of image a subject is looking at… It’s looking like a busy summer!
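To give a flavor of what spectral decomposition does, here is a minimal sketch in Python using SciPy’s short-time Fourier transform on a synthetic “EEG” trace. The signal, sampling rate, and window parameters are all made up for illustration; real recordings from the BYB gear would replace the simulated array.

```python
import numpy as np
from scipy import signal

# Hypothetical example: a 2-second synthetic "EEG" trace sampled at 250 Hz,
# containing a 10 Hz (alpha-band) oscillation buried in noise.
fs = 250
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

# Time-frequency analysis: power at each (frequency, time) bin.
freqs, times, power = signal.spectrogram(eeg, fs=fs, nperseg=128, noverlap=64)

# The 10 Hz rhythm should dominate the time-averaged spectrum.
peak_freq = freqs[np.argmax(power.mean(axis=1))]
print(f"Peak frequency: {peak_freq:.1f} Hz")
```

The resulting `power` array is exactly the kind of time–frequency map shown in the image above: one axis is time, the other frequency, and each cell holds signal power.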
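As for the pattern classification step, the actual classifiers will be built in TensorFlow, but the core idea can be sketched with plain NumPy: extract a feature vector per trial (e.g., band power in a few frequency bands) and assign each trial to the nearest class prototype. Everything here, including the two image categories and their band-power signatures, is invented for illustration.

```python
import numpy as np

# Toy stand-in for a trained classifier: nearest-centroid over band-power
# features. Each "trial" is a small feature vector.
rng = np.random.default_rng(1)

def make_trials(center, n=50):
    """Simulate n feature vectors scattered around a class-specific center."""
    return center + 0.3 * rng.standard_normal((n, len(center)))

# Two hypothetical image categories with different band-power signatures.
faces = make_trials(np.array([1.0, 0.2, 0.5]))
houses = make_trials(np.array([0.2, 1.0, 0.5]))

X = np.vstack([faces, houses])
y = np.array([0] * 50 + [1] * 50)

# "Training": store one centroid (mean feature vector) per class.
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

# "Testing": assign each trial to the class with the nearest centroid.
dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
pred = dists.argmin(axis=1)
accuracy = (pred == y).mean()
print(f"Accuracy on simulated trials: {accuracy:.2f}")
```

A deep network in TensorFlow replaces the hand-picked features and centroids with learned ones, but the decoding question is the same: given this pattern of brain activity, which image category was the subject looking at?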