CNN –
On a recent Sunday morning, I found myself lying on my back, in a pair of mismatched scrubs, in the claustrophobic confines of an fMRI machine at a research facility in Austin, Texas. “The things I do for television,” I thought.
Anyone who’s had an MRI or fMRI scan will tell you how noisy it is: pulses of electrical current create a powerful magnetic field that scans your brain in detail. On this occasion, however, I was given a pair of specialized headphones that played chapters of The Wizard of Oz audiobook, drowning out the loud mechanical hum of the magnets.
Why?
Neuroscientists at the University of Texas at Austin have invented a way to translate brain activity into words, using the same artificial intelligence technology that powers ChatGPT.
The development could revolutionize how people with speech impairments communicate. It is also one of several pioneering AI applications to emerge in recent months, as the technology advances and touches ever more parts of our lives and society.
“We don’t like to use the term mind reading,” Alexander Huth, an assistant professor of neuroscience and computer science at the University of Texas at Austin, told me. “We feel like it evokes things that we actually can’t do.”
Huth volunteered to be a research subject for this study, spending more than 20 hours in an fMRI machine, listening to audio clips while the machine took detailed images of his brain.
An artificial intelligence model analyzed his brain activity alongside the audio he was hearing, and was eventually able to predict the words he heard just by looking at his brain activity.
The researchers used GPT-1, the first language model from San Francisco-based startup OpenAI, which was trained on a huge database of books and websites. By analyzing all that text, the model learned how sentences are constructed: in essence, how people speak and think.
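To make that idea concrete, here is a toy sketch of what “learning how sentences are constructed” from text can look like. It is an illustration only, not how GPT-1 actually works (GPT-1 learns these patterns with a neural network, not simple counts): tally which words tend to follow which, then use the tallies to guess likely next words.

```python
# Toy next-word predictor built from word-pair counts (a bigram model).
# A loose illustration only; GPT-1 learns such patterns with a neural
# network trained on books and websites, not by counting pairs.
from collections import Counter, defaultdict

corpus = (
    "dorothy walked down the yellow brick road . "
    "the lion walked down the road . "
    "dorothy followed the yellow brick road ."
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("yellow"))   # -> "brick"
print(predict_next("walked"))   # -> "down"
```

A model with a good sense of which words plausibly come next is what lets a decoder narrow down the possibilities for what a listener is hearing.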
The researchers then trained an AI to analyze the brain activity of Huth and other volunteers as they listened to specific words. Eventually, the AI learned enough to predict what Huth and the others were hearing just by monitoring their brain activity.
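Roughly speaking, and as a simplified, hypothetical sketch using synthetic data rather than the UT Austin team’s actual code: training builds an “encoding model” that predicts brain responses from word features, and decoding then works backwards, scoring candidate words by how well their predicted brain responses match the real scan.

```python
# Simplified, hypothetical sketch of encoding-model decoding (not the
# researchers' actual code). Synthetic data stands in for real fMRI scans.
import numpy as np

rng = np.random.default_rng(0)

N_VOXELS = 200   # simulated fMRI measurement points
EMBED_DIM = 16   # size of the (hypothetical) word-feature vectors

# Hypothetical word features a language model might supply.
vocab = ["dorothy", "walked", "down", "the", "yellow", "brick", "road"]
embeddings = {w: rng.normal(size=EMBED_DIM) for w in vocab}

# Unknown "true" mapping from word features to brain response,
# used here only to fabricate training data.
true_map = rng.normal(size=(EMBED_DIM, N_VOXELS))

def simulate_brain(word):
    """Fake fMRI response to hearing `word` (signal plus noise)."""
    return embeddings[word] @ true_map + 0.1 * rng.normal(size=N_VOXELS)

# --- Training: many hours in the scanner, in miniature -----------------
# Pair each heard word's features with the measured brain response.
story = [rng.choice(vocab) for _ in range(500)]
X = np.array([embeddings[w] for w in story])      # word features
Y = np.array([simulate_brain(w) for w in story])  # brain responses

# Ridge regression: the learned encoding model, features -> brain.
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(EMBED_DIM), X.T @ Y)

# --- Decoding: which word best explains a new brain scan? --------------
heard = "yellow"
scan = simulate_brain(heard)

def score(word):
    """Error between predicted and observed brain response."""
    return np.sum((embeddings[word] @ W - scan) ** 2)

guess = min(vocab, key=score)
print(f"heard: {heard!r}, decoded: {guess!r}")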
I spent less than half an hour in the machine and, as expected, the AI could not decipher that I had been listening to the part of The Wizard of Oz audiobook in which Dorothy walks down the yellow brick road.
Before going into the fMRI machine, CNN reporter Donie O’Sullivan was given special headphones so he could listen to an audiobook during his brain scan.
Huth listened to the same audio, but because the AI model had been trained on his brain, it was able to accurately predict which parts of the audio he was hearing.
While the technology is still in its infancy and shows great promise, its limitations may come as a relief to some: AI still cannot easily read our minds.
“The real application of this is helping people who can’t communicate,” Huth explained.
He and other UT Austin researchers believe the innovative technology could one day be used by people with “locked-in” syndrome, stroke victims and others whose brains are still functioning but who are unable to speak.
“Ours is the first demonstration that we can achieve this level of accuracy without brain surgery,” Huth said. “So we think this is a first step toward helping people who can’t speak, without them needing to undergo neurosurgery.”
While medical advances like this are undoubtedly good news, and potentially life-changing for patients with debilitating illnesses, they also raise questions about how the technology might be applied in more controversial settings.
Can it be used to extract a confession from a prisoner? Or to reveal our deepest, darkest secrets?
The short answer, say Huth and his colleagues, is no, at least not at the moment.
For starters, the scans must be performed inside an fMRI machine, the AI must be trained on an individual’s brain for many hours and, according to the Texas researchers, subjects must give their consent. If a person actively resists listening to the audio, or thinks about something else, the decoding will not succeed.
“We think that everyone’s mental information should be private,” said Jerry Tang, lead author of a paper published earlier this month outlining his team’s findings. “Our mind is one of the last frontiers of our privacy.”
“There is certainly a concern that brain decoding technology could be used in dangerous ways,” Tang explained. (Brain decoding is the term the researchers prefer to use instead of mind reading.)
“The idea of having a little thought that you don’t want read out, that doesn’t seem like something we can access. I don’t think there’s any way we can do that with this approach,” Huth explained. “We get the big ideas that you’re thinking about. If you’re trying to tell a story in your head, or a story that someone is telling you, we can get at that as well.”
Last week, developers of generative AI systems, including OpenAI CEO Sam Altman, descended on Capitol Hill to testify before a Senate committee about lawmakers’ concerns over the dangers posed by the powerful technology. Altman warned that developing AI without guardrails “could do a lot of damage to the world” and urged lawmakers to implement regulations to address those concerns.
Echoing those AI warnings, Tang told CNN that lawmakers need to take “brain privacy” seriously in order to protect “brain data” (our thoughts), two of the more dystopian terms I’ve heard in the age of AI.
While the technology currently only works in very limited cases, that may not always be the case.
“It’s important not to get a false sense of security and to think things will be like this forever,” Tang cautioned. “Technology can improve, and that could change how much we can decode and whether decoding requires a person’s cooperation.”