An attendant gives a signal, and Cathy focuses on a monitor sitting on the table in front of her as a series of single words emanates in a female monotone from a pair of nearby speakers.
Schalk insists he and his team can solve the puzzle and, using modern computing power, extract from that mass of data the words that Cathy has imagined.
As part of a project originally funded by the Army Research Office, Schalk and others found evidence that when we “imagine” speaking, the auditory cortex, perhaps as an error-correction reference, receives a copy of how every word we speak should sound.
Rather than attempt to push those numbers up toward 100 percent, Schalk has focused on showing he can differentiate between vowels and consonants embedded in words.

From Cathy’s bedside, I follow Schalk to his office.
Over the course of many months, Schalk explains, he carried speakers into hospital rooms and played the same segment of a Pink Floyd song for about a dozen brain surgery patients like Cathy.
By training pattern recognition algorithms, Schalk and his collaborators have taught computers to “translate” the neural firing patterns in the auditory cortex back into sound.