If I were to show you a photo of a Rolex watch and the face of an unknown person, you’d probably be more interested in the prestigious, shiny object than in the random stranger, right? However — and this may come as a surprise — your brain is much more modest and much more of a social being than you might think! 🙂
Evidence indicates that our brain reacts differently when it sees a face than when it sees any other object. This likely has to do with the way human brains evolved, coming to treat faces as something of greater importance than random stuff.
And so we arrive at the N170 from the title of this blog post. What exactly does it mean? It’s a very peculiar spike in EEG recordings: a negative deflection observed approximately 170 milliseconds after a person has been exposed to a stimulus (hence the name: “N” for negative, 170 for the latency in milliseconds). Multiple papers report that this N170 has a higher amplitude when the stimulus is a human face rather than anything else. And that’s exactly what I set out to demonstrate using the Human SpikerBox and several other electronic devices.
The general idea of the experiment was to record the subject’s EEG while they watched a presentation consisting of photos of human faces and wristwatches. In the presentation, photos of faces and watches flash in random order, with grey screens in between. As soon as a face or a watch popped up, an event marker had to be sent to the SpikerBox and fed into the EEG recording.
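As a rough illustration, a randomized presentation order like the one described could be generated along these lines. This is a minimal Python sketch; the trial counts, the function name, and the category labels are my own assumptions, not the actual experiment script:

```python
import random

def build_trial_sequence(n_faces=20, n_watches=20, seed=None):
    """Return a shuffled list of stimulus trials, with a grey screen
    after every photo (hypothetical counts, for illustration only)."""
    rng = random.Random(seed)
    stimuli = ["face"] * n_faces + ["watch"] * n_watches
    rng.shuffle(stimuli)  # faces and watches appear in random order
    sequence = []
    for stim in stimuli:
        sequence.append(stim)    # photo flashes on screen
        sequence.append("grey")  # grey screen shown in between
    return sequence

trials = build_trial_sequence(seed=42)
print(trials[:6])
```

In a real run, each entry would drive the on-screen stimulus and trigger the corresponding event marker at the moment the photo appears.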
With the event markers embedded in the EEG recordings, I was able to extract the data of interest, such as when exactly the spike appears and what its amplitude is. The results were close to what I’d expected, with higher amplitudes for faces, but occurring a bit earlier than 170 milliseconds. This may have had to do with the delay of the sensor as it sends event markers to the SpikerBox, but we’ll come to that shortly. All of this data is so cool, but it wasn’t easy getting there. Per aspera ad astra! I came across so many problems during the project that I can hardly recall all of them. But, oh well, that’s science: you fail 95% of the time and succeed only in the remaining 5%.
Anyway, to get started, I had to build a sensor reliable and sensitive enough to detect changes in screen brightness. The goal was for it to distinguish 6 different levels of brightness, with as small a delay as possible. First, I tried a basic photoresistor, which changes its resistance with light intensity, wired into a voltage divider. However, as it turned out, it had quite a delay. So, after spending several days on the resistor, I had to switch to a phototransistor. How does a phototransistor work? It ‘generates’ a current proportional to the light intensity. Fortunately, the phototransistor was good enough for the further examinations. But why do we even need that sensor?
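As a side note, once the phototransistor’s output is read by an analog-to-digital converter, mapping the reading to one of the 6 brightness levels can be as simple as binning the value. This is a hedged sketch: the 10-bit ADC range and the evenly spaced thresholds are my assumptions, and a real setup would calibrate the bins against the actual screen patches:

```python
def brightness_level(adc_value, adc_max=1023, n_levels=6):
    """Map a raw ADC reading (0..adc_max) to a discrete level 0..n_levels-1
    using evenly sized bins (hypothetical calibration)."""
    if not 0 <= adc_value <= adc_max:
        raise ValueError("reading outside ADC range")
    # Integer binning; min() folds the top edge into the last level.
    return min(adc_value * n_levels // (adc_max + 1), n_levels - 1)

print(brightness_level(0))     # darkest patch  -> level 0
print(brightness_level(1023))  # brightest patch -> level 5
```

With six distinguishable levels, the screen itself can encode which stimulus just appeared, and the sensor turns that into an event marker.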
So, just a quick recap: we were hoping to gather as much data as possible on the way we process the Pinocchio illusion, by measuring different behavioral outcomes as well as the EMG at three time points. As far as the behavioral measures are concerned, we questioned participants on the vividness of the illusion (see my introductory blog post!) and the extent of the nose elongation, and administered a recently published Pinocchio questionnaire (Prucell et al., 2021). On the neural side, we compared the EMG activity of the biceps and triceps between the resting state, the state of the illusion, and the situation where the participants were actually instructed to contract the muscles.
First, we found that all the participants reported experiencing the illusion and described it as moderately vivid: the average score was 2.6 on a five-point Likert-type scale. They felt their nose extending by at least 50%, and the questionnaire data suggest that sensations of arm tingling and of nose and arm elongation are the best predictors of the illusion’s vividness, whereas nose widening, pulsation in the arm/nose/fingers, and tingling in the nose and fingers turned out to be less relevant.
When we think about science and technology, we often think of something intricate and hard to comprehend, such as genetic engineering or aerospace technology.
However, science and technology are a pivotal part of our mundane lives. From turning off the lights and going to bed at night to curing cancers or genetic disorders, it is all science and technology. Conveniences that might appear trivial, as well as feats that were once deemed miracles, came from the countless works and questions of scientists and engineers.
My ceaseless passion for science came from my ignorance of underlying principles: how my body functions, how we get diseases, how we cure them, how we optimize human efficiency, and how we increase the accuracy of data collection. And that is how my endless love for biology and computer science started. My project started from a similar question about a rudimentary yet elusive and intangible concept: attention schema theory. (See my introductory note here.)
Since the brain is an information-processing device, it is limited in how many sources of information it can process at once. In this project, we investigated attention (primarily visual) and awareness.
The significance of understanding human consciousness also extends to treatment research and to AI research (on machine consciousness). Linked internal models, cognitive machinery, and the self taking mental possession of outside objects may be critical components of awareness. Is it hard to understand? Don’t worry. There are games for it.