ChatGPT Mind Reading: Sure, Plus It Is a Force for Good

May 15, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

The potential of artificial intelligence, for both good and evil, just got bumped up another notch. Surprised? Neither are we. The Guardian reveals, “AI Makes Non-Invasive Mind-Reading Possible by Turning Thoughts into Text.” For 15 years, researchers at the University of Texas at Austin have been working on a way to help patients whose stroke, motor neuron disease, or other conditions have made it hard to communicate. While impressive, previous systems could translate brain activity into text only with the help of surgical implants. More recently, researchers found a way to do the same thing with data from fMRI scans. But the process was so slow as to make it nearly useless as a communication tool. Until now. Correspondent Hannah Devlin writes:

“However, the advent of large language models – the kind of AI underpinning OpenAI’s ChatGPT – provided a new way in. These models are able to represent, in numbers, the semantic meaning of speech, allowing the scientists to look at which patterns of neuronal activity corresponded to strings of words with a particular meaning rather than attempting to read out activity word by word. The learning process was intensive: three volunteers were required to lie in a scanner for 16 hours each, listening to podcasts. The decoder was trained to match brain activity to meaning using a large language model, GPT-1, a precursor to ChatGPT. Later, the same participants were scanned listening to a new story or imagining telling a story and the decoder was used to generate text from brain activity alone. About half the time, the text closely – and sometimes precisely – matched the intended meanings of the original words. ‘Our system works at the level of ideas, semantics, meaning,’ said Huth. ‘This is the reason why what we get out is not the exact words, it’s the gist.’ For instance, when a participant was played the words ‘I don’t have my driver’s license yet,’ the decoder translated them as ‘She has not even started to learn to drive yet’.”

That is a pretty good gist. See the write-up for more examples, as well as a few limitations the researchers found. Naturally, refinement continues. The study's co-author Jerry Tang acknowledges this technology could be dangerous in the hands of bad actors, but says the team has "worked to avoid that." He does not reveal exactly how. That is probably for the best.
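For readers curious about the mechanics, the recipe the quote describes can be boiled down to: learn a mapping between brain activity and LLM semantic embeddings, then decode a new scan by finding text whose embedding best matches the predicted one. The sketch below is purely illustrative and uses synthetic stand-ins; the actual study used GPT-1 features, an fMRI encoding model, and a beam search over candidate word sequences, none of which are reproduced here. All numbers, arrays, and the ridge-regression decoder are invented for the toy.

```python
# Toy sketch of gist-level brain decoding, on synthetic data.
# Assumptions (not from the study): random vectors stand in for LLM
# sentence embeddings and fMRI voxel readings; a ridge regression
# stands in for the paper's encoding-model-plus-beam-search pipeline.
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_dims, n_train = 200, 64, 500

# Stand-in for LLM embeddings of the training podcasts' sentences.
train_meaning = rng.standard_normal((n_train, n_dims))

# Simulated brain activity: an unknown linear response to meaning, plus noise.
true_response = rng.standard_normal((n_dims, n_voxels))
train_brain = train_meaning @ true_response \
    + 0.5 * rng.standard_normal((n_train, n_voxels))

# Fit ridge regression mapping brain activity -> semantic embedding,
# via the closed form W = (X^T X + lambda * I)^-1 X^T Y.
lam = 10.0
X, Y = train_brain, train_meaning
W = np.linalg.solve(X.T @ X + lam * np.eye(n_voxels), X.T @ Y)

# Decode a new scan: predict its embedding, then pick whichever candidate
# sentence's embedding lies closest -- recovering "the gist", not the words.
target = rng.standard_normal(n_dims)
new_scan = target @ true_response + 0.5 * rng.standard_normal(n_voxels)
predicted = new_scan @ W

candidates = {
    "paraphrase": target + 0.1 * rng.standard_normal(n_dims),  # same gist
    "unrelated": rng.standard_normal(n_dims),                  # different idea
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

best = max(candidates, key=lambda k: cosine(predicted, candidates[k]))
print("decoder picks:", best)  # expected: paraphrase
```

The point the toy makes is the one Huth makes in the quote: because the decoder works in embedding space, a close paraphrase scores nearly as well as the exact sentence, which is why the system recovers "She has not even started to learn to drive yet" rather than the literal words about a driver's license.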

Cynthia Murrell, May 15, 2023

