Now the answer is in, and it's not close at all. Four years after announcing its "crazy and amazing" project to build a "silent speech" interface that would use optical technology to read thoughts, Facebook has shelved the effort, saying consumer brain-reading is still a long way off.
In a blog post, Facebook said it is discontinuing the project and will instead focus on an experimental wrist controller for virtual reality that reads muscle signals in the arm. "While we still believe in the long-term potential of head-mounted optical [brain-computer interface] technologies, we've decided to focus our immediate efforts on a different neural interface approach that has a nearer-term path to market," the company said.
Facebook's brain-typing project took it into uncharted territory, including funding brain surgeries at a California hospital and building prototype helmets that could shoot light through the skull, and it sparked a heated debate over whether technology companies should have access to private brain information. In the end, though, the company appears to have decided the research won't produce a product anytime soon.
"We got a lot of hands-on experience with these technologies," said Mark Chevillet, the physicist and neuroscientist who headed the silent-speech project until last year, when he shifted to studying how Facebook handles elections. "That's why we can say with confidence that, as a consumer interface, a head-mounted optical silent-speech device is still a very long way out. Possibly longer than we anticipated."
The craze around brain-computer interfaces stems from companies' belief that mind-controlled software could be a breakthrough as important as the computer mouse, the graphical user interface, or the swipe screen. What's more, researchers have already demonstrated that the results are remarkable when electrodes are placed directly in the brain to tap individual neurons: paralyzed patients with such "implants" can deftly move robotic arms, play video games, or type, all via mind control.
Facebook's goal was to translate those findings into a consumer technology anyone could use, which meant a helmet or headset you could put on and take off. "We never had an intention to make a brain-surgery product," Chevillet said. Given the many regulatory issues already facing the social giant, CEO Mark Zuckerberg had once said that the last thing the company should be doing is cracking open skulls. "I don't want to see the congressional hearings," he joked.
In fact, as brain-computer interfaces advance, serious new concerns are emerging. What would happen if large technology companies could know what people are thinking? In Chile, legislators are even considering a human-rights bill to protect brain data, free will, and mental privacy from technology companies. Given Facebook's poor record on privacy, the decision to halt this research may have a collateral benefit: it puts some distance between the company and growing concerns about "neurorights."
Facebook's project aimed specifically at a brain controller that could mesh with its ambitions in virtual reality; it bought Oculus VR for $2 billion in 2014. To get there, Chevillet said, the company took a two-pronged approach. First, it needed to determine whether a thought-to-speech interface was even possible. To that end, it sponsored research at the University of California, San Francisco, where a researcher named Edward Chang has placed electrode pads on the surface of people's brains.
While implanted electrodes read data from single neurons, this technique, called electrocorticography or ECoG, measures from fairly large groups of neurons at once. Chevillet said Facebook hoped it would also prove possible to detect equivalent signals from outside the head.
The UCSF team has made surprising progress, and today it reported in the New England Journal of Medicine that it used those electrode pads to decode speech in real time. The subject was a 36-year-old man the researchers refer to as "Bravo-1," who after a severe stroke had lost his ability to form intelligible words and could only grunt or moan. In their report, Chang's team says that with the electrodes on the surface of his brain, Bravo-1 has been able to form sentences on a computer at a rate of about 15 words per minute. The technology works by measuring neural signals in the part of the motor cortex associated with Bravo-1's efforts to move his tongue and vocal tract as he imagines speaking.
To reach that result, Chang's team asked Bravo-1 to imagine saying one of 50 common words nearly 10,000 times, feeding the patient's neural signals into a deep-learning model. After training the model to match words with neural signals, the team could correctly determine the word Bravo-1 was trying to say 40% of the time (chance performance would be about 2%). Even so, his sentences were full of errors: "How are you?" might come out as "I'm hungry, how are you."
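The chance level cited in that parenthetical follows directly from the size of the vocabulary. A toy arithmetic check (these are just the figures quoted above, not anything from the study's code):

```python
vocab_size = 50                    # Bravo-1's restricted vocabulary
chance_accuracy = 1 / vocab_size   # random guessing: 0.02, i.e. ~2%
observed_accuracy = 0.40           # the classifier alone, per the report

# The classifier is ~20x better than chance, yet still wrong most of the time.
print(chance_accuracy, observed_accuracy / chance_accuracy)
```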
But the scientists improved performance by adding a language model, a program that judges which word sequences are most likely in English. That raised the accuracy to 75%. With this autocorrect-style approach, the system could predict that Bravo-1's decoded sentence "I am correct to my nurse" actually meant "I like my nurse."
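The pairing described here, a neural-signal classifier whose word guesses are rescored by a language model, amounts to a noisy-channel decode. Below is a minimal sketch of that idea; the tiny vocabulary, the probability tables, and the function names are all invented for illustration, and the actual UCSF system uses a deep network over ECoG signals and a far larger language model.

```python
import math

VOCAB = ["i", "like", "am", "my", "nurse", "correct"]

# Per-position word probabilities from the (hypothetical) neural classifier:
# stand-in values for four imagined words in a sentence.
p_classifier = [
    {"i": 0.90, "like": 0.02, "am": 0.02, "my": 0.02, "nurse": 0.02, "correct": 0.02},
    {"i": 0.05, "like": 0.30, "am": 0.35, "my": 0.10, "nurse": 0.10, "correct": 0.10},
    {"i": 0.05, "like": 0.05, "am": 0.05, "my": 0.60, "nurse": 0.05, "correct": 0.20},
    {"i": 0.02, "like": 0.02, "am": 0.02, "my": 0.02, "nurse": 0.90, "correct": 0.02},
]

def bigram(prev, w):
    """Toy language model: probability of w following prev in English."""
    table = {
        ("<s>", "i"): 0.5,
        ("i", "like"): 0.3, ("i", "am"): 0.3,
        ("like", "my"): 0.4, ("am", "correct"): 0.1,
        ("my", "nurse"): 0.5, ("correct", "my"): 0.05,
    }
    return table.get((prev, w), 0.01)  # small floor for unseen pairs

def decode(step_probs):
    """Viterbi search maximizing sum of log P_classifier(w) + log P_lm(w | prev)."""
    best = {"<s>": (0.0, [])}  # best (log-prob, path) ending in each word
    for probs in step_probs:
        nxt = {}
        for w in VOCAB:
            prev = max(best, key=lambda p: best[p][0] + math.log(bigram(p, w)))
            score = best[prev][0] + math.log(bigram(prev, w)) + math.log(probs[w])
            nxt[w] = (score, best[prev][1] + [w])
        best = nxt
    final = max(best, key=lambda w: best[w][0])
    return best[final][1]

print(" ".join(decode(p_classifier)))  # → i like my nurse
```

Here the classifier alone slightly prefers "am" over "like" at the second position, but the language model overrules it because "i like my nurse" is a far more plausible English sequence, which is the same kind of correction described for Bravo-1's sentences.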
As striking as the result is, there are more than 170,000 words in English, so performance would plummet outside Bravo-1's restricted vocabulary. That means the technique, while potentially useful as a medical aid, isn't close to what Facebook had in mind. "We see applications in the foreseeable future in clinical assistive technology, but that is not where our business is," Chevillet said. "We are focused on consumer applications, and there is a very long way to go for that."
Facebook's decision to drop out of brain reading is no shock to researchers who study these techniques. "I can't say I am surprised, because they had hinted that they were looking at a short time frame and were going to reevaluate things," said Marc Slutzky, a professor at Northwestern University whose former student Emily Mugler is a key Facebook employee. "Just speaking from experience, the goal of decoding speech is a big challenge. We're still a long way from a practical, all-encompassing solution."
Nonetheless, Slutzky said the UCSF project is an "impressive next step" that demonstrates both the remarkable possibilities and the limits of brain-reading science. "It remains to be seen whether you can decode free-form speech," he said. "A patient saying 'I want a drink of water' versus 'I want my medicine', those are different." He added that if the AI models could be trained for longer, and on more than one person's brain, they could improve rapidly.
While the UCSF research was under way, Facebook was also paying other centers, such as the Applied Physics Laboratory at Johns Hopkins University, to figure out how to beam light through the skull to read neurons noninvasively. Much like MRI, these techniques rely on sensing reflected light to measure blood flow to regions of the brain.
It is those optical techniques that remain the bigger stumbling block. Even with recent improvements, including some by Facebook, they cannot pick up neural signals with enough resolution. Another problem, Chevillet said, is that the blood flow these methods detect occurs five seconds after a group of neurons fires, making it too slow to control a computer.