Welcome to my final post on Project Lie Detector! Previously, I discussed how I moved from strict galvanic skin response research to P300 signals. Now I'm here to update you on this “I saw it” response!
As a quick recap: the P300 signal is an EEG (brain-wave) measurement taken along the midline of the skull. It reflects interactions between the parietal and frontal lobes, and it is regarded as a recognition-based response. It might seem strange that this has implications for lie detection, as it deviates from our typical expectations of a lie detector, such as the infamous polygraph. However, this response can be paired with retooled versions of the traditional “knowledge assessment tests” from the polygraph to form a stronger model. The reason P300 signals are so useful is that they convey recognition, which can reveal whether a subject recognizes details unique to a crime. There’s no beating that!
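To make the recognition idea concrete, here is a minimal Python sketch (with synthetic data, not our actual recordings) of the classic ERP-averaging technique behind P300 detection: averaging many epochs time-locked to the stimulus cancels the random background EEG, so a positive bump around 300 ms survives only for stimuli the subject recognizes. The sampling rate, amplitudes, and trial counts below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 250                      # assumed sampling rate (Hz)
n_trials = 40                 # trials per condition
n_samp = int(0.7 * fs)        # 700 ms epochs: -100 ms to +600 ms
t = np.arange(n_samp) / fs - 0.1   # time axis relative to stimulus onset (s)

def make_trial(has_p300):
    """One synthetic epoch: background noise, plus a bump near 300 ms if recognized."""
    trial = rng.normal(0.0, 5.0, n_samp)  # background EEG noise (µV)
    if has_p300:
        trial += 8.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
    return trial

# "Probe" stimuli are crime-relevant details the subject recognizes;
# "irrelevant" stimuli are matched details they have never seen.
probe = np.stack([make_trial(True) for _ in range(n_trials)])
irrelevant = np.stack([make_trial(False) for _ in range(n_trials)])

# ERP averaging: noise cancels across trials, the time-locked P300 survives
probe_erp = probe.mean(axis=0)
irrelevant_erp = irrelevant.mean(axis=0)

peak_time = t[np.argmax(probe_erp)]
print(f"probe ERP peaks at {peak_time * 1000:.0f} ms")
```

In a real knowledge-assessment test, a clear probe-vs-irrelevant amplitude difference is the signature of recognition.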
Hello again! Since the last time we met, I have been working toward a machine learning model that can accurately predict the different gestures/finger movements we are classifying, and (spoiler alert) it seems like we are almost there!
If we are to have any chance of success, we must work on the project incrementally. For this reason, I decided to classify only 5 gestures, moving only one finger at a time, using the electrode placement described in a previous experiment by BYB. If we get something this basic to work, then we can later reduce the number of channels needed to produce good classification results and/or add more complex gestures.
With that in mind, I started by making a high-level block diagram of our system (see image above).
From the diagram, we can identify three main aspects of the project:
The green blocks require that we understand the physiological aspects of the sEMG signal and the kind of information we can extract from it. Then, based on that background knowledge, we need to identify the relevant digital signal processing (DSP) techniques to generate a dataset.
The yellow block will require us to be familiar with neural network concepts such as architectures, training parameters, and performance metrics.
The blue block requires us to be familiar with hardware control and communication protocols to control peripherals using the Arduino.
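To give a feel for the yellow block, here is a hedged sketch using scikit-learn. The feature vectors are synthetic stand-ins for real windowed sEMG features, and the architecture and training parameters are illustrative choices, not our final model:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Hypothetical dataset: 4 time-domain features per window, 5 gesture classes.
# Each class gets its own cluster of feature values so the toy problem is learnable.
n_per_class, n_feat = 60, 4
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(n_per_class, n_feat))
               for c in range(5)])
y = np.repeat(np.arange(5), n_per_class)

# Hold out a test split to measure generalization, not memorization
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)

# A small multilayer perceptron: one hidden layer of 16 units (illustrative)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)

acc = clf.score(X_te, y_te)   # accuracy is one of several useful metrics
print(f"test accuracy: {acc:.2f}")
```

With real sEMG features the classes overlap far more than in this toy setup, which is exactly why the choice of architecture, training parameters, and evaluation metrics matters.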
As of this update, I have managed to set up a DSP pipeline to generate features for the neural network and observe its performance on training and testing data.
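As an illustration of what such a pipeline can look like (a simplified sketch with synthetic data, not our exact implementation), a common approach is to slide a window over the sEMG recording and compute classic time-domain features for each window:

```python
import numpy as np

fs = 1000               # assumed sEMG sampling rate (Hz)
win, step = 200, 100    # 200 ms windows with 50% overlap (in samples)

def features(window):
    """Classic time-domain sEMG features for one analysis window."""
    mav = np.mean(np.abs(window))                       # mean absolute value
    rms = np.sqrt(np.mean(window ** 2))                 # root mean square
    zc = np.count_nonzero(                              # zero crossings
        np.diff(np.signbit(window).astype(np.int8)))
    wl = np.sum(np.abs(np.diff(window)))                # waveform length
    return np.array([mav, rms, zc, wl])

def emg_to_dataset(signal):
    """Slide a window over the recording and stack per-window feature vectors."""
    starts = range(0, len(signal) - win + 1, step)
    return np.stack([features(signal[s:s + win]) for s in starts])

# Synthetic one-channel recording standing in for a real gesture trial
rng = np.random.default_rng(1)
sig = rng.normal(0, 1, 2 * fs)   # 2 s of noise-like "EMG"
X = emg_to_dataset(sig)
print(X.shape)                   # one 4-feature row per window
```

Each row of `X` then becomes one training example for the neural network, labeled with the gesture performed during that window.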
Welcome back! If you’ve been following along with my FOMO glasses journey, then you know I’m trying to build a pair of glasses that capture a photo every time you blink. In my last post, we discussed the idea behind my project and the implications its success could have! So check it out here if you haven’t had the chance yet!
The next steps in my project include implementing the AI model and figuring out the best way to classify eye blinks in real time. Then comes the hardware question: how to actually build the device. However, before any of that, we first need to understand the data.
The image above comes from a dataset I recorded on myself: I placed four electrodes around one of my eyes and filmed myself doing normal things like talking, eating, working, and watching TV. It’s important to have a natural dataset, because your unconscious blinks are much lighter and more subtle than blinks made deliberately for a recording.
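For a sense of where real-time classification might start, here is a deliberately simple baseline (synthetic signal and hypothetical threshold values, not my actual model): mark a blink whenever the electrode trace crosses an amplitude threshold, with a refractory period so one blink isn’t counted twice. Subtle unconscious blinks are exactly where a fixed threshold like this struggles, which is the motivation for a learned model.

```python
import numpy as np

fs = 250   # assumed sampling rate (Hz)

def detect_blinks(eog, threshold, refractory=0.3):
    """Mark a blink at each upward threshold crossing, ignoring
    re-crossings within a refractory period (in seconds)."""
    above = eog > threshold
    onsets = np.flatnonzero(~above[:-1] & above[1:]) + 1  # rising edges
    blinks, last = [], -np.inf
    for s in onsets:
        if s - last > refractory * fs:
            blinks.append(s)
            last = s
    return blinks

# Synthetic trace: two blink-like bumps riding on light noise
rng = np.random.default_rng(2)
t = np.arange(0, 4, 1 / fs)
sig = rng.normal(0, 0.05, t.size)
for center in (1.0, 2.5):
    sig += np.exp(-((t - center) ** 2) / (2 * 0.05 ** 2))

blinks = detect_blinks(sig, threshold=0.5)
print(len(blinks))   # expect 2
```

On a natural dataset like the one above, the threshold that catches light unconscious blinks also fires on chewing and talking artifacts, so a classifier trained on labeled windows is the more promising path.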