The Neural Network That Reads Our Thoughts

Targeted ads, friend suggestions, search recommendations: sometimes we are amazed by the ability of computers and smartphones to “read our thoughts.” In the future, however, this could become the norm: we may have to live with machines that can actually probe our minds, understand what we are looking at (or imagining) and respond accordingly.

An interesting new study in this direction comes from Japan: four researchers at Kyoto University have developed a neural network capable of deciphering the images we see by “reading” our brain activity, and then displaying them in turn.

A step forward. This is not the first time this path has been attempted, but previous systems could only handle narrow categories of images (for example, “faces”). Their reconstructions relied on a set of semantic categories stored in advance while the subject viewed sample images: prior data against which each new input was compared. For this reason, their ability to “read thoughts” was limited to a few basic labels (“bird,” “cake,” “person”).

The images shown to volunteers (top row) and, below, the neural network's reconstructions; each row corresponds to one of the three subjects. © Kyoto University

Complex reading. The new technique, called deep image reconstruction, uses algorithms capable of interpreting and reproducing complex images on multiple levels of reading (e.g. shape, color, light contrast). In other words, it handles and processes visual stimuli hierarchically, much as the human brain does.
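To get a feel for what “multiple reading levels” means in practice, here is a minimal Python sketch (not the authors' code; the network, layer indices and labels are illustrative assumptions) that captures features from the early, middle and late layers of a standard convolutional network, which respond roughly to edges and colors, textures and parts, and whole objects.

```python
# Illustrative sketch only: a convolutional network "reads" an image at
# multiple levels, from edges and colour in early layers to object-level
# information in deeper layers. Layer indices and labels are assumptions.
import torch
from torchvision import models

# weights=None keeps the example self-contained; in practice a network
# pretrained on natural images would be used.
cnn = models.vgg19(weights=None).features.eval()

layers_of_interest = {3: "early (edges, colour)",
                      17: "middle (textures, parts)",
                      35: "late (object-level)"}

activations = {}

def make_hook(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

for idx, name in layers_of_interest.items():
    cnn[idx].register_forward_hook(make_hook(name))

image = torch.rand(1, 3, 224, 224)  # stand-in for a stimulus image
with torch.no_grad():
    cnn(image)

for name, feat in activations.items():
    print(f"{name}: feature map of shape {tuple(feat.shape)}")
```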

The study lasted 10 months and involved three people, who were asked to view a thousand images of three types: natural subjects (such as animals or people), artificial geometric shapes, and letters of the alphabet.

The participants’ brain activity was recorded with functional magnetic resonance imaging (fMRI), both while they were looking at the images and later, while they were only imagining them. The visual-cortex activity data were then fed to the neural network, which decoded them and used them to build its own interpretation of each image, working through successive, hierarchical levels of interpretation.
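Based only on the description above, the pipeline can be sketched in two stages: linear decoders that map fMRI voxel patterns to network features, and an optimization loop that adjusts an image until its own features match the decoded ones. The sketch below uses toy data; the ridge decoder, the single feature layer and all variable names are assumptions, not the study's actual implementation.

```python
# Hedged two-stage sketch: (1) decode fMRI voxel patterns into feature
# vectors, (2) optimise an image so its features match the decoded ones.
import numpy as np
import torch
from sklearn.linear_model import Ridge

# --- Stage 1: train a decoder from voxels to features (toy random data) ---
n_trials, n_voxels, n_features = 200, 500, 1000
fmri_train = np.random.randn(n_trials, n_voxels)        # voxel activity per image
features_train = np.random.randn(n_trials, n_features)  # network features of those images

decoder = Ridge(alpha=1.0).fit(fmri_train, features_train)

# Decode the features "seen" (or imagined) from a new brain scan.
fmri_test = np.random.randn(1, n_voxels)
decoded = torch.tensor(decoder.predict(fmri_test), dtype=torch.float32)

# --- Stage 2: optimise pixels so a feature extractor reproduces the features ---
def feature_extractor(img):
    # Stand-in for a CNN layer; in practice this would be a pretrained network.
    return img.flatten(1)[:, :n_features]

image = torch.rand(1, 3, 64, 64, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(feature_extractor(image), decoded)
    loss.backward()
    optimizer.step()
# `image` now approximates a picture whose features match the decoded activity.
```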

Autonomous. The model was trained only on natural images (people or animals), but it was then able to reconstruct letters and geometric shapes as well (as seen in the video below): evidence that it had learned a general technique and could apply it to entirely new material.

For the images the volunteers only imagined (rather than looked at), the process was only partially successful: the reconstructions retained some similarity to the originals, but were blurrier than those of the images actually viewed (much as our mental images are blurry).

The accuracy of the reconstructions still needs to improve: the images recreated by the model are recognizable but imprecise. In the future, however, an interface that can use these “mind-reading” techniques, and then report what it finds or act on it directly, could have numerous applications: whether useful or disturbing depends on your degree of optimism.
