Facebook Developing Wearable Brain to Machine Interface
Facebook Brain Typing Project

By Sean Jackson

Facebook has entered the race to build the next generation of brain-computer interfaces, or BCIs. The social media company announced a wearable that would allow individuals to communicate directly with smartphones using only their brains. The announcement comes only weeks after Elon Musk took to the stage to reveal Neuralink, an invasive BCI that could potentially give users more brain bandwidth.

The project, headed by Facebook Reality Labs, is currently investigating how BCIs can detect what an individual hears and says, then decode those signals into messages. For Facebook-funded researchers at the University of California, San Francisco, the goal is a wearable device that can decode words directly from brain activity, offering both the privacy of typed text and faster communication.

Facebook researchers touted the new interface in a blog post: “Rather than looking down at a phone screen or breaking out a laptop, we can maintain eye contact and retrieve useful information and context without ever missing a beat. It’s a tantalizing vision, but one that will require an enterprising spirit, hefty amounts of determination, and an open mind.”

Facebook researchers believe a wearable is more viable and accessible than invasive approaches such as Elon Musk’s Neuralink, which would require electrodes to interface directly with neurons in the brain.

Facebook conducted experiments with volunteers who were asked to listen to questions and respond out loud with answers, allowing the system to decode the heard and spoken speech from neural activity in real time. The participants were asked to answer basic questions, such as, “How is your room currently?” They had a set of five valid answers: “Bright”, “Dark”, “Hot”, “Cold”, and “Fine”.

Researchers at Reality Labs explained their findings, stating, “After training, participants performed a task in which, during each trial, they listened to a question and responded aloud with an answer of their choice. Using only neural signals, we detect when participants are listening or speaking and predict the identity of each detected utterance using phone-level Viterbi decoding.”
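The “phone-level Viterbi decoding” the researchers mention is a standard dynamic-programming method for finding the most likely sequence of phones (speech sounds) given per-frame probabilities. The sketch below shows the idea on toy data; the phone inventory, probabilities, and the stand-in “neural” evidence are illustrative, not values from the study.

```python
import numpy as np

PHONES = ["b", "r", "ay", "t", "sil"]  # toy inventory: "bright" plus silence

def viterbi(log_emissions, log_trans, log_init):
    """Most likely phone sequence given per-frame log-probabilities.

    log_emissions: (T, S) array, log P(frame t | phone state s)
    log_trans:     (S, S) array, log P(state j | state i)
    log_init:      (S,)   array, log P(initial state)
    """
    T, S = log_emissions.shape
    score = log_init + log_emissions[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_trans      # cand[i, j]: reach j from i
        back[t] = np.argmax(cand, axis=0)      # best predecessor of each j
        score = cand[back[t], np.arange(S)] + log_emissions[t]
    path = [int(np.argmax(score))]             # trace the best path backwards
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return [PHONES[s] for s in reversed(path)]

# Toy "neural" evidence: each of 5 frames strongly favors one phone in turn.
log_em = np.full((5, 5), np.log(0.025))
for t in range(5):
    log_em[t, t] = np.log(0.9)
uniform = np.log(1.0 / 5)
decoded = viterbi(log_em, np.full((5, 5), uniform), np.full(5, uniform))
print(decoded)  # ['b', 'r', 'ay', 't', 'sil']
```

In the actual study the emission probabilities come from a model trained on neural recordings rather than being hand-set, but the decoding step works the same way.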

The researchers outlined in their findings that, “Recent investigations of the underlying mechanisms of these speech representations have shown that acoustic and phonemic speech content can be decoded directly from neural activity in superior temporal gyrus [STG] and surrounding secondary auditory regions.”

Researchers hope they can use speech content to develop algorithms that understand a basic lexicon of computer navigation commands, such as “back”, “home”, “select”, and “delete”, without an invasive interface.

The test was intended to see if the BCI could interpret a set of innocuous questions, and Facebook noted the researchers were able to “decode a small set of full, spoken words and phrases from brain activity in real time — a first in the field of BCI research.”

Facebook’s current goal is to decode speech at 100 words per minute from a 1,000-word vocabulary, with an error rate of less than 17%. Facebook has indicated that they are still a long way from achieving results comparable to more invasive technology, but they believe their work will help in the development of decoding algorithms that can bridge the gap to more advanced BCIs in the future.
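Targets like the sub-17% figure are conventionally measured as word error rate: the word-level Levenshtein (edit) distance between the reference sentence and the decoded hypothesis, divided by the reference length. A minimal sketch, with illustrative sentences rather than transcripts from the study:

```python
def word_error_rate(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j]: edit distance between the first i reference words
    # and the first j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

# One substituted word in a seven-word reference: ~14.3%, under the 17% target.
wer = word_error_rate("select the home screen and go back",
                      "select the home screen and go forward")
print(round(wer, 3))  # 0.143
```

Counting insertions and deletions as well as substitutions matters at this scale: at 100 words per minute, a 17% rate still means roughly one misdecoded word every few seconds.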