Short description
Scientists at the University of Texas have developed a language decoder that can convert human thoughts into text using functional magnetic resonance imaging (fMRI) and artificial intelligence. The research, published in the journal Nature Neuroscience, is the first example of non-invasive reconstruction of human thoughts and could help people with neurological diseases that affect speech communicate with the outside world. The device is currently bulky and requires an fMRI machine, but future versions could use more portable sensors that attach to participants’ heads. The creators also warn that the technology could be used unethically.
Scientists have created a language decoder based on GPT-1 that reads human thoughts without neurosurgery
Scientists at the University of Texas at Austin have developed a language decoder that can translate human thoughts into text using artificial intelligence and functional magnetic resonance imaging (fMRI). The results of their work are described in an article published in the journal Nature Neuroscience.
Vice points out that this is the first example of non-invasive reconstruction of human thoughts: brain activity is read by fMRI and analyzed with a language model, which then predicts the words and sentences the subject heard, spoke or thought.
The scientists recruited three volunteers, each of whom spent 16 hours inside a working scanner listening to specially selected stories that made up the training data set. In the first stage, the researchers trained the GPT-1 AI model to associate particular words and phrases with the subjects’ neural activity recorded by fMRI.
After this phase of the experiment was completed, the participants’ brains were scanned again with fMRI while they listened to new stories that were not part of the training data. The decoder rendered the unfamiliar stories into text that was very accurate in content, though often not in the same words or with the same sentence structure as the original. For example, the phrase “I don’t have a driver’s license yet” was decoded from the fMRI data as “She hasn’t even started learning to drive yet.” And when a subject heard the text “Walked down a dirt road through a wheat field, across a stream and past some buildings made of logs,” the decoder produced “He had to cross a bridge to the other side and a very large building in the distance.”
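The decoding approach described above can be illustrated with a toy sketch: a language model proposes candidate word sequences, and an “encoding model” trained on the recorded sessions predicts the brain activity each candidate should evoke; the candidate whose prediction best matches the measured fMRI signal wins. Everything below (the vocabulary, the linear encoding model, the embeddings and sizes) is an illustrative assumption, not the published code.

```python
# Hypothetical sketch of the decoding idea: candidates are scored by how
# well a learned encoding model predicts the measured fMRI activity.
# All names, sizes, and data here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

VOCAB = ["she", "hasn't", "started", "learning", "to", "drive", "yet"]
EMBED_DIM = 8
N_VOXELS = 16

# Toy word embeddings standing in for language-model features.
EMBED = {w: rng.normal(size=EMBED_DIM) for w in VOCAB}

# Toy linear encoding model: brain response ~ W @ (mean word embedding).
W = rng.normal(size=(N_VOXELS, EMBED_DIM))

def predict_response(words):
    """Encoding model: predicted fMRI voxel pattern for a word sequence."""
    feats = np.mean([EMBED[w] for w in words], axis=0)
    return W @ feats

def score(words, measured):
    """Negative squared error between predicted and measured activity."""
    diff = predict_response(words) - measured
    return -float(diff @ diff)

def decode(measured, length=3, beam_width=2):
    """Beam search: extend candidates word by word, keeping the sequences
    whose predicted activity best matches the measured scan."""
    beams = [[w] for w in VOCAB]
    for _ in range(length - 1):
        candidates = [b + [w] for b in beams for w in VOCAB]
        candidates.sort(key=lambda c: score(c, measured), reverse=True)
        beams = candidates[:beam_width]
    return beams[0]

# Simulate a noisy scan evoked by a "true" phrase, then decode from it.
true_phrase = ["learning", "to", "drive"]
measured = predict_response(true_phrase) + rng.normal(scale=0.01, size=N_VOXELS)
print(decode(measured))
```

Because the toy encoding model averages embeddings, it only recovers the gist (which words were likely present), not exact wording or order; this mirrors the paraphrase-like outputs reported in the article.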
“Our system works at a completely different level,” Alexander Huth, head of the research group and a professor of neuroscience and computer science at the University of Texas, said at a press briefing on the work. “Instead of looking at low-level motor skills, our system really works at the level of ideas, semantics and meaning.”
An obvious drawback of the invention is its bulkiness: the decoder cannot function without an fMRI machine. According to the scientists, their development should help people with neurological diseases affecting speech to communicate with the outside world. They suggest that future versions of the device could be adapted to more convenient, portable platforms, such as functional near-infrared spectroscopy (fNIRS) sensors attached to the patient’s head.
The developers realize that, as it improves, their decoder could be used for less than ethical purposes, such as surveillance and espionage. They therefore especially emphasize that “brain-computer interfaces must respect mental privacy.”