Can technology now read minds?


In what could prove to be a massive breakthrough in science and technology, MIT researchers say they have developed a computer interface that can almost “read minds.” The technology transcribes words that the user verbalizes internally but does not actually speak aloud.


Intelligence amplification

Intelligence amplification (IA) is often compared to artificial intelligence. However, rather than completely replacing human intelligence, IA is designed to work alongside human beings: the idea is that technologies can assist human intelligence, rather than constitute an independent artificial intelligence. Intelligence amplification systems enhance a human's own intelligence, improving a human decision-maker's function or capability in some way.

Intelligence amplification is also known as assistive intelligence, augmented intelligence, cognitive augmentation or machine-augmented intelligence. Augmented intelligence tools can be used for many purposes. Some are valuable in electronic discovery, or in developing a knowledge base. Natural language tools and image-processing tools can enhance human perception. The key is that all of them are built on the idea of “intelligence amplification” – augmenting human cognition rather than replacing it.


How the device works

The device, called AlterEgo, does not read the user's mind. Instead, it reads something called subvocalization: tiny, almost imperceptible neuromuscular movements that occur when human beings think of words. Sixteen electrodes on the prototype AlterEgo headset sense these movements, match the resulting signals against patterns learned by a neural network, and eventually trigger whatever task was requested.


The signals are fed to a machine-learning system that has been trained to correlate particular signals with particular words. The device also includes a pair of bone-conduction headphones, which transmit vibrations through the bones of the face to the inner ear. According to MIT, it is part of a complete silent-computing system that lets the user undetectably pose and receive answers to difficult computational problems.
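The article does not describe AlterEgo's actual model, but the core step it outlines — matching a vector of electrode readings to the closest known word — can be sketched in miniature. Everything below is illustrative: the vocabulary, the 16-dimensional feature vectors, and the nearest-template classifier are assumptions standing in for MIT's trained neural network, not the real system.

```python
import math
import random

random.seed(0)

# Hypothetical vocabulary; the real system is trained on actual
# neuromuscular recordings rather than synthetic vectors.
WORDS = ["add", "call", "reply", "multiply"]

# Pretend each word produces a characteristic 16-dimensional feature
# vector (one value per electrode channel on the prototype headset).
templates = {w: [random.gauss(0, 1) for _ in range(16)] for w in WORDS}

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(signal):
    """Return the vocabulary word whose template lies nearest to the signal."""
    return min(templates, key=lambda w: distance(signal, templates[w]))

# A slightly noisy reading of "reply" should still map back to "reply".
noisy = [x + random.gauss(0, 0.1) for x in templates["reply"]]
print(classify(noisy))
```

A real system would replace the fixed templates with a classifier trained per user — which is consistent with the roughly 15 minutes of per-person customization the researchers report below.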

“The motivation for this was to build an IA device — an intelligence-augmentation device,” says Arnav Kapur, a graduate student at the MIT Media Lab, who led the development of the new system. “Our idea was: Could we have a computing platform that’s more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?”

“We basically can’t live without our cellphones, our digital devices,” says Pattie Maes, a professor of media arts and sciences and Kapur’s thesis advisor. “But at the moment, the use of those devices is very disruptive. If I want to look something up that’s relevant to a conversation I’m having, I have to find my phone and type in the passcode and open an app and type in some search keyword, and the whole thing requires that I completely shift attention from my environment and the people that I’m with, to the phone itself. So, my students and I have for a very long time been experimenting with new form factors and new types of experience that enable people to still benefit from all the wonderful knowledge and services that these devices give us but do it in a way that lets them remain in the present.”

According to researchers involved in the project, the device achieved an average transcription accuracy of 92% in a 10-person trial, after about 15 minutes of customization for each person. The team is working on expanding the device's capabilities and widening the vocabulary it can detect. Some believe that it could eventually replace virtual assistants like Google's Assistant or Apple's Siri, with one key difference: no one would be able to hear the commands given by the user, as they would exist only in the person's mind.

“Wouldn’t it be great to communicate with voice in an environment where you normally wouldn’t be able to?” said Thad Starner, a computing professor at Georgia Tech. “You can imagine all these situations where you have a high-noise environment, like the flight deck of an aircraft carrier, or even places with a lot of machinery, like a power plant or a printing press.”


Our assessment is that AlterEgo could either become the next breakthrough in technology, changing the way human beings interact with one another, or fail the way Google Glass did. One of the obstacles it faces (and shares with Google Glass) is that the device is a headset worn by the user in public, which could make interactions awkward. It also raises a number of concerns. For instance, what if such a device were used by malicious actors? What happens when a human being is forced to wear such a device against their will? Is this yet another infringement on privacy?