The magic of transforming brain data into digital commands

How cognitive neuroscience and machine learning got us to a real-time BCI almost anyone can use.

“How does it work?” 

This is the first question we hear after someone realizes that a real-time Brain-Computer Interface (BCI) not only exists, but works. Today we’re going to explain some of the tech used in the NextMind BCI.

Design cognitive shortcuts 

The brain is the most complex object we know of in the universe.

Our understanding of it does not yet allow us to decode complex thoughts or visual concepts. Yet despite its complexity, we have a good understanding of how the visual cortex works. Located in the back of the head, this part of the brain is responsible for processing your vision.

Using our understanding of the visual cortex, we took an alternative approach; as a proxy for decoding more complex images, we designed a system of cognitive shortcuts. This crossover of neuroscience and visual design takes the form of NeuroTags: flickering pattern overlays that can be applied to any object.

A NeuroTag in a video game demo using the real-time BCI
A NeuroTag is a patterned overlay that can be applied to any graphic you want to make mind-interactable. Here, a NeuroTag has been added to the rock.

We design NeuroTags based on two main criteria: optimal visual cortex processing and aesthetics.

To optimize the design for the visual cortex, we create specific patterns, contrasts, and light pulses. When you focus on these designs, they produce brain waves we can recognize more easily.
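
To give a feel for the idea (an illustrative Python sketch, not our actual NeuroTag design; the frequencies and names are made up), each target can pulse its overlay at its own known rate, so the brain response it evokes can be told apart from every other target on screen:

```python
import math
import time

def neurotag_opacity(t: float, freq_hz: float = 12.0, depth: float = 0.4) -> float:
    """Opacity of a hypothetical NeuroTag overlay at time t (seconds).

    The overlay pulses at freq_hz so that the brain of a user focusing on it
    produces a response locked to that same rate; `depth` keeps the modulation
    subtle enough to stay visually discreet.
    """
    return 0.5 + 0.5 * depth * math.sin(2 * math.pi * freq_hz * t)

# Each on-screen NeuroTag gets its own rate, so the decoder can later tell
# which one the user is focusing on.
tags = {"rock": 11.0, "door": 12.0, "lamp": 13.0}
t = time.monotonic()
frame = {name: neurotag_opacity(t, f) for name, f in tags.items()}
print(frame)  # per-tag overlay opacity for the current frame
```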

Our main aesthetic criterion for NeuroTag design is discreetness. Since we believe mind interactions will become a part of daily life, their design should integrate easily into any environment. As we continue research and development, NeuroTag designs will keep evolving. They also play a key role in helping us start to decode more complex visual imagery and imagination.

We’ll take you behind the scenes of the design process of our NeuroTags in a future article.

Identify active focus 

Your brain’s neurons are firing any time you see something, move a muscle, or even have a thought. Because your brain is always active, it generates huge amounts of data that can be detected by our EEG-based Sensor. To sift through this information, we created machine-learning algorithms.

These algorithms recognize the brain waves generated when you focus on a NeuroTag. This is because active visual focus creates a stronger response in your brain [1, 2, 3]. You can think of active focus as the difference between seeing and looking.
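
As a rough picture of what such a decoder has to do (a simplified, textbook-style sketch, not our production algorithms; every name and threshold here is illustrative), it can average the EEG spectrum over the Sensor’s channels and check which tag’s flicker rate stands out above the noise floor, reporting active focus only when that peak is strong enough:

```python
import numpy as np

def detect_focus(eeg: np.ndarray, fs: float, tag_freqs: dict[str, float],
                 threshold: float = 3.0) -> str | None:
    """Guess which flickering target the user is actively focusing on.

    eeg: (channels, samples) array recorded over the visual cortex.
    fs: sampling rate in Hz.
    tag_freqs: mapping of tag name -> flicker rate in Hz.
    Returns a tag name only if its spectral peak clearly stands out.
    """
    spectrum = np.abs(np.fft.rfft(eeg, axis=1)).mean(axis=0)   # average channels
    freqs = np.fft.rfftfreq(eeg.shape[1], d=1.0 / fs)
    noise_floor = np.median(spectrum)

    scores = {}
    for name, f in tag_freqs.items():
        idx = int(np.argmin(np.abs(freqs - f)))        # bin closest to the tag's rate
        scores[name] = spectrum[idx] / noise_floor     # signal-to-noise style score

    best = max(scores, key=scores.get)
    return best if scores[best] > threshold else None  # only report *active* focus

# Example with a synthetic one-second epoch at 250 Hz containing a 12 Hz response:
fs, t = 250.0, np.arange(250) / 250.0
eeg = 0.5 * np.random.randn(4, 250) + np.sin(2 * np.pi * 12.0 * t)
print(detect_focus(eeg, fs, {"rock": 11.0, "door": 12.0, "lamp": 13.0}))  # -> "door"
```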

Our technology often draws comparisons to eye tracking, but the ability to recognize intentional focus is a key difference [4]. In eye tracking, actions are triggered by letting the eyes rest on one target. With our visual BCI, the action is triggered from the moment active focus is detected. This creates a fluid experience and increases the feeling of control over a digital interface.

This feeling of control will improve as we further our research on understanding and decoding intentionality. 

Translate data into action 

Once your active focus on a NeuroTag is detected, it triggers whatever action the developer linked to it. These actions serve two purposes:

They can act as buttons for a direct, instant output: exploding an object, pressing a play button, turning on a light, and so on.

Or they can react to your focus in real time, giving you feedback as the algorithm recognizes it (a sketch of both patterns follows the examples below).

Typing with NeuroTags: here, each NeuroTag acts as a button so you can type a code.

Neurofeedback loop: when the algorithm recognizes active focus, it reacts in real time with visual effects.
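
As an illustration of the wiring a developer sets up (hypothetical Python, not our actual SDK API; the class and threshold are invented for this sketch), each NeuroTag can expose two hooks: a one-shot trigger for button-style output, and a per-frame confidence callback for real-time feedback:

```python
from collections.abc import Callable

class NeuroTagBinding:
    """Hypothetical link between a NeuroTag and the actions a developer attaches."""

    def __init__(self, name: str):
        self.name = name
        self.on_triggered: Callable[[], None] | None = None        # button-style output
        self.on_confidence: Callable[[float], None] | None = None  # real-time feedback
        self._armed = True                                         # fire once per focus

    def push_decoder_output(self, confidence: float, threshold: float = 0.8) -> None:
        """Called by the decoder every frame with this tag's focus confidence."""
        if self.on_confidence:
            self.on_confidence(confidence)        # e.g. drive a visual effect
        if confidence >= threshold:
            if self._armed and self.on_triggered:
                self.on_triggered()               # e.g. type a digit, open a door
            self._armed = False
        else:
            self._armed = True                    # re-arm once focus is released

# Example: one binding per key of the code pad shown above.
typed: list[str] = []
key_5 = NeuroTagBinding("key_5")
key_5.on_triggered = lambda: typed.append("5")
key_5.push_decoder_output(0.92)                   # decoder reports strong focus
print(typed)  # -> ['5']
```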

We usually represent your focus with a green glow and triangle, although this can be customized with our SDK. This visual indication of your focus is called a neurofeedback loop. Seeing your brain in action enhances your feeling of control over the NeuroTag and helps you hone your focus skills. Feedback loops that build self-control over brain functions are used in gaming and even in meditation practices [5].
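
To show what closes that loop visually (a simplified sketch, not the SDK’s actual rendering code), the decoder’s raw confidence can be smoothed into a glow intensity every frame, so the highlight grows as your focus builds and fades when it drops:

```python
def update_glow(current_glow: float, confidence: float, smoothing: float = 0.15) -> float:
    """Exponentially smooth decoder confidence into a stable 0-1 glow intensity."""
    target = max(0.0, min(1.0, confidence))            # clamp to a displayable range
    return current_glow + smoothing * (target - current_glow)

glow = 0.0
for confidence in [0.1, 0.3, 0.6, 0.8, 0.9, 0.9]:      # focus strengthening over time
    glow = update_glow(glow, confidence)
    print(f"glow intensity: {glow:.2f}")               # drives the green glow and triangle
```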

By combining our expertise in cognitive neuroscience and machine learning, we have created a non-invasive BCI “…that offers a rare ‘wow’ factor in tech” according to TechCrunch. But most importantly, we’ve made this technology available to the public.

Today, anyone can test direct mind-machine control with the NextMind Dev Kit. And if you’re a developer, you can build your own mind-enabled applications with our SDK.

We’re happy to have you here for the journey.

References 

 “Brain responses are enhanced for attended stimuli” 

[1] Handy, T. C., Hopfinger, J. B., & Mangun, G. R. (2001). Functional neuroimaging of attention. Handbook of functional neuroimaging of cognition, 75-108. 

[2] Müller, M. M., Picton, T. W., Valdes-Sosa, P., Riera, J., Teder-Sälejärvi, W. A., & Hillyard, S. A. (1998). Effects of spatial selective attention on the steady-state visual evoked potential in the 20–28 Hz range. Cognitive Brain Research, 6(4), 249–261. 

[3] Morgan, S. T., Hansen, J. C., & Hillyard, S. A. (1996). Selective attention to stimulus location modulates the steady-state visual evoked potential. Proceedings of the National Academy of Sciences, 93(10), 4770-4774. 

 “EEG is different from eye-tracking because it is possible to ‘split’ visual attention between spatially distinct objects” 

[4] Müller, M. M., Malinowski, P., Gruber, T., & Hillyard, S. A. (2003). Sustained division of the attentional spotlight. Nature, 424(6946), 309-312. 

“Meditation and neurofeedback” 

[5] Brandmeyer, T., & Delorme, A. (2013). Meditation and neurofeedback. Frontiers in Psychology, 4, 688. https://doi.org/10.3389/fpsyg.2013.00688 
