Why BCI technology ISN’T as revolutionary as we think… yet
We use our arms and legs every single day of our lives, making us incredibly dependent on them for our day-to-day tasks. Yet over 2 million people in the United States live with limb loss or the loss of function in a limb. Sure, we have prosthetics, but they don’t replace the functions of our limbs, since we can’t control them with the electrical signals in our brains the way we control our actual limbs… right?
Brain-Computer Interfaces, also known as BCIs, are an emerging technology that bridges the gap between the biology of our brains and the science of our tech. They allow our devices to read the electrical signals within our brains and translate them into something technology can act on.
With this technology, amputees could potentially have prosthetic limbs ENTIRELY controlled by their brains the same way our actual limbs are. Or, those suffering from lost vision or hearing could potentially see or hear again through the transfer of visual or auditory data straight into the brain!
That’s absolutely insane! It wouldn’t even be that much of a stretch to say that Brain-Computer Interfaces can give you superpowers. Okay… maybe that is a stretch, but BCIs are an incredibly valuable and versatile technology with the potential to create a really interesting future.
Yes, there’s a but…
BCIs have a LOT of limitations, at least at the current stage of their development. As much as I wish we were ready to start turning ourselves into robots and replacing our normal human limbs with robotic, superpowered ones, there’s a lot that BCIs can’t do… yet.
Does that mean there’s no hope?
Not exactly! What this means is that there are certain gaps at the current stage of BCI technology that prevent it from being as revolutionary as it has the power to be. However, just because there are gaps doesn’t mean those gaps can’t be filled.
The intention behind this article is to clearly outline: what the current gaps in BCI technology are, what needs to be true for this technology to reach the potential we all hope it reaches, and what a future with BCIs can look like if problem solvers like you and me succeed in filling those gaps. Before talking about the gaps, though, let’s talk about how the technology works.
Fundamentals of the BCI
The Science of the Brain
To understand how BCIs work, we have to understand how our brain works. Our brains are full of neurons — nerve cells connected to other nerve cells through dendrites and axons. Whenever we think, act, react to something, or remember something, electrical signals travel through our brain from neuron to neuron. Differences in ion concentrations across the membranes of these neurons create an electric potential, and rapid changes in that potential are the signals that get sent through the brain.
Myelin sheaths are insulating layers that keep electrical signals from degrading as they travel along axons. Even so, some of this electrical activity leaks out and can be detected and interpreted by technology. Signals can also be artificially fired, which allows for some groundbreaking possibilities. For example, if we can look into our brains and see which signals fire along our optic nerves when we see different things, we could artificially fire those same signals in somebody who’s blind, allowing them to see.
BCIs leverage electrodes that measure tiny differences in voltage between neurons. These let us see which neurons are firing and trace the path of these signals. The voltage differences are amplified and filtered before being interpreted by a computer program. Sensory-input BCIs work a little differently, however. These are the BCIs that can send information into our brains: a computer sends them signals, and an implant within the brain fires the corresponding neurons.
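To make the measure → amplify → filter → interpret pipeline concrete, here’s a toy sketch in Python. Everything here is made up for illustration — the fake 10 Hz “brain rhythm”, the noise level, the moving-average filter, and the crude zero-crossing “interpreter” are all stand-ins for the far more sophisticated hardware and signal processing a real BCI uses.

```python
import math
import random

random.seed(0)

# Simulate a raw electrode trace: a 10 Hz rhythm buried in noise,
# sampled at 250 Hz for one second. (Toy numbers, not real EEG.)
fs = 250
raw = [math.sin(2 * math.pi * 10 * t / fs) * 20e-6   # ~20 microvolt "signal"
       + random.gauss(0, 10e-6)                      # background noise
       for t in range(fs)]

# Step 1: amplify -- scale the microvolt-level trace up to a workable range.
gain = 1e6
amplified = [v * gain for v in raw]

# Step 2: filter -- a simple moving average to suppress high-frequency noise.
def moving_average(x, width):
    half = width // 2
    out = []
    for i in range(len(x)):
        window = x[max(0, i - half):i + half + 1]
        out.append(sum(window) / len(window))
    return out

filtered = moving_average(amplified, 11)

# Step 3: interpret -- here, just estimate the dominant rhythm by counting
# upward zero crossings (a real decoder would be far more sophisticated).
crossings = sum(1 for a, b in zip(filtered, filtered[1:]) if a < 0 <= b)
print(f"estimated rhythm: ~{crossings} Hz")
```

The takeaway isn’t the specific numbers — it’s that a raw electrode reading is tiny and noisy, so it has to be amplified and cleaned up before any program can make sense of it.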
One of the coolest applications of BCIs is letting people who have lost a limb control a prosthetic one with only their brains. This requires learning which brain signals fire for which movements and using that data to drive a robotic prosthetic limb. A user can wear an EEG-based BCI (we’ll talk about what this is later) while a computer learns which signals fire when they attempt different actions. This means users have to train to use these BCIs, since the computer needs to be taught which signals correspond to which movements. Once this training period is over, the decoded signals can be sent to a prosthetic limb in real time, allowing somebody to control it with only their brain!
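That train-then-decode loop can be sketched in a few lines. This is a deliberately simplified illustration: the two imagined movements, the two-number “feature vectors”, and the nearest-template classifier are all assumptions I’ve made for the sketch — real systems extract many features per electrode and use far more capable decoders.

```python
import random

random.seed(1)

# Toy "calibration session": the user imagines each movement repeatedly
# while we record a feature vector per trial. Real EEG features (e.g. band
# power per channel) are far noisier; these numbers are purely illustrative.
MOVEMENTS = {"open_hand": [1.0, 0.2], "close_hand": [0.2, 1.0]}

def record_trial(movement):
    """Pretend to record one EEG feature vector for an imagined movement."""
    base = MOVEMENTS[movement]
    return [x + random.gauss(0, 0.1) for x in base]

# Training: average each movement's trials into a template (centroid).
templates = {}
for movement in MOVEMENTS:
    trials = [record_trial(movement) for _ in range(20)]
    templates[movement] = [sum(col) / len(col) for col in zip(*trials)]

# Real-time decoding: classify a new trial by its nearest template, then
# forward the decoded command to the prosthetic controller.
def decode(features):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(templates, key=lambda m: dist(templates[m], features))

command = decode(record_trial("close_hand"))
print(command)
```

The calibration loop is exactly the “training period” described above: the computer can’t know in advance what your brain’s “close hand” looks like, so it has to see examples first.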
BCIs are invasive!
To begin, let’s take a look at one of the largest issues with BCIs. This technology is incredibly invasive — after all, we’re working directly with the brain. A sought-after future of BCI technology is commercializing and scaling this tech to a large scale. That being said, BCIs in their current stages are much too invasive to be scaled the way we want them to be.
However, not all BCIs are invasive. In fact, there are three types of BCIs: invasive, partially invasive, and non-invasive. While BCIs aren’t always invasive, non-invasive BCIs can’t do nearly as much as invasive ones, because the information they get from the brain is far less accurate.
Invasive BCIs
Invasive BCIs are Brain-Computer Interfaces implanted directly into your brain’s gray matter through neurosurgery. These surgically implanted chips detect signals from the brain that can be decoded into data our computers can work with. They normally have hundreds of incredibly thin pins, thinner than a human hair, that penetrate the cerebral cortex and are incredibly prone to creating scar tissue within the brain.
Partially Invasive BCIs
Partially invasive BCIs don’t go directly into the gray matter of your brain. Instead, they rest outside the brain but inside the skull. These BCIs use electrocorticography (ECoG) to measure brain activity with significantly higher quality than non-invasive BCIs. They still require surgery for implantation, but they show relatively promising results.
Non-Invasive BCIs
Non-invasive BCIs are the most common and accessible type of Brain-Computer Interface. These are the ones you’d see in movies, with electrodes attached to your scalp — a technique known as electroencephalography (EEG). They measure incredibly tiny voltage fluctuations produced by the neurons underneath. That being said, these BCIs can’t be used for many of the major applications of this technology, since the skull distorts the electrical signals that reach the electrodes and outright blocks many others. It’s not surprising that accuracy drops as BCIs become less invasive but more accessible.
So what’s the issue?
Each type of BCI has major drawbacks. Invasive and partially invasive BCIs require surgery, making them incredibly difficult to scale, both physically and financially. On top of that, both can create scar tissue in the brain, which doesn’t sound super appealing if you ask me.
On the other hand, EEG BCIs currently struggle with a lot of noise in their results. EEG has incredibly poor spatial resolution, which makes it hard to tell what part of the brain a given signal was produced in. While EEG BCIs might be useful for things such as diagnosing brain disorders, their limitations create large barriers to utilizing this tech.
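One standard trick EEG researchers use against this noise is repetition: if you present the same stimulus many times and average the recordings, the random noise cancels while the real response survives (shrinking roughly with the square root of the number of trials). Here’s a toy demonstration with made-up numbers — the “true signal” and noise level are assumptions chosen only to make the effect visible.

```python
import random

random.seed(2)

TRUE_SIGNAL = 1.0   # the "real" brain response we want to measure
NOISE_STD = 5.0     # background noise dwarfs the signal, as in raw EEG

def one_trial():
    """One noisy measurement of the same underlying response."""
    return TRUE_SIGNAL + random.gauss(0, NOISE_STD)

# Averaging n trials shrinks the noise by a factor of sqrt(n),
# which is why EEG experiments repeat the same stimulus many times.
for n in (1, 100, 10000):
    estimate = sum(one_trial() for _ in range(n)) / n
    print(f"{n:>5} trials -> estimate {estimate:+.2f} (true value {TRUE_SIGNAL})")
```

This also hints at why EEG struggles in real-time applications like prosthetic control: you can’t average ten thousand repetitions of a movement you want to make right now.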
Scalability vs Quality Dilemma
As previously outlined, the only type of BCI that’s currently scalable is the EEG-based non-invasive BCI. In fact, if you wanted to, you could get a BCI headset today! They’re not necessarily cheap, ranging from around $300-$1,500 for headsets from companies like OpenBCI or Muse, but they are easily obtainable. Other companies like Emotiv, Neurosity, and NeuroSky also sell headsets for developers to work with.
The fact that there are BCIs accessible to developers today is already a massive feat! However, between the noise in the data we get from EEG-based non-invasive BCIs and the prices that keep them out of many consumers’ hands, it’s evident that the current stage of BCIs has some pretty clear gaps that need to be filled before this technology can truly live up to its hype.
The Complexity of the Brain
We’ve already seen how EEG is prone to distortion and noise, rendering its data much less effective than it needs to be for major BCI applications. This problem only grows when you take into account how complex our brains are. The human brain has almost 100 billion neurons — it needs that many to do all of the crazy things it can do. All of these neurons sit in an incredibly complex web of connections within one massive, interconnected system, and they fire constantly as signals travel throughout our brains. This doesn’t even account for the fact that our brains rely on chemical processes as well, something that can’t be picked up by EEG.
Let’s suppose we’re watching a movie. Signals are sent from our optic nerves into our brains to create the image of the movie. To give a blind person the same visual, all we need to do is grab those signals and fire the same ones in their brain, right? It turns out it’s not that easy. These optic signals aren’t the only signals reaching our brains. We’re also receiving signals for the sounds we hear, the air we feel, the thoughts we think, the scents we smell, and so on. The brain’s enormous complexity makes it incredibly difficult to truly understand. While it’s easy to grasp the basic principles of BCIs and why they generate so much hype, there are several reasons why BCIs just don’t work the way some might think they do.
What Am I Getting At?
Brain-Computer Interfaces have a long way to go before they can truly have the impact we all hope they’ll have on our world. However, we now know the gaps. We now know what we need to do to get where we want to be — a future where non-invasive BCIs are financially accessible to all consumers, where prosthetics can be controlled by simply wearing something like a pair of headphones or a wig, where audio and visual data can be sent directly to your brain without surgery, and where BCI technology is just as integrated into our lives as other emerging technologies like AI.