Mind-reading through brain imaging technology is a common sci-fi theme (UC Berkeley)

Scientists say they have made a breakthrough in brain imaging that may one day allow people affected by coma, stroke or neurodegenerative disease to communicate what's on their minds.

As subjects watched Hollywood movie trailers, researchers at the University of California, Berkeley, were able to peer into their brain activity and reconstruct what the viewers saw.

The reconstructed movie trailers are not carbon copies; they resemble watercolors, with smeared but vibrant colors and shapes that echo the original films.

At the moment, researchers can only reconstruct movie clips that people have already seen. But they say the work brings them closer to reproducing dreams, memories and other mind movies that have never been seen by anyone else.

"This is a major leap toward reconstructing internal imagery," Jack Gallant, a neuroscientist at UC Berkeley, said in a press statement.

Gallant is a coauthor of the study, published online Thursday in the journal Current Biology. Other coauthors include Thomas Naselaris with UC Berkeley's Helen Wills Neuroscience Institute; An T. Vu with UC Berkeley's Joint Graduate Group in Bioengineering; and Yuval Benjamini and Professor Bin Yu with the UC Berkeley Department of Statistics.

"We are opening a window into the movies in our minds," Gallant said.

Researchers say practical applications of this technology could not only lead to a better understanding of what's happening in the minds of those who cannot communicate verbally, but could also lay the foundation for brain-machine interfaces. That could benefit people who suffer from cerebral palsy or paralysis, allowing them to guide computers with their minds.

For the latest experiment, Gallant and his team used brain imaging technology and computational models to decode the brain signals generated by moving pictures.

"Our natural visual experience is like watching a movie," said lead author Shinji Nishimoto, a postdoctoral researcher in Gallant's lab and one of the three research team members who served as subjects for the experiment.

"In order for this technology to have wide applicability, we must understand how the brain processes these dynamic visual experiences," Nishimoto added.

The subjects watched two separate sets of Hollywood movie trailers while fMRI measured blood flow through the visual cortex, the part of the brain that processes visual information.

As the subjects watched the movie trailers, their recorded brain activity was fed into a computer program that learned, second by second, to associate visual patterns in the movie with the corresponding brain activity.
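A minimal sketch of this learning step, assuming the general approach described in the paper: fit a regularized linear (ridge) regression that maps the visual features of each one-second movie segment to the fMRI response of each voxel. The feature extraction itself (the study used motion-energy filters) is stubbed out, and all function names, array shapes and the `alpha` parameter here are illustrative, not the authors' actual code.

```python
import numpy as np

def fit_encoding_model(features, voxel_responses, alpha=1.0):
    """Learn weights mapping stimulus features to voxel activity.

    features:        (n_seconds, n_features) visual features per second
    voxel_responses: (n_seconds, n_voxels) fMRI activity per second
    Returns a (n_features, n_voxels) weight matrix.
    """
    X, Y = features, voxel_responses
    n_features = X.shape[1]
    # Closed-form ridge solution: W = (X^T X + alpha*I)^-1 X^T Y
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ Y)

def predict_activity(features, W):
    """Predict the brain activity a new clip would evoke."""
    return features @ W
```

Once fit, the model runs "forward": given any new clip's features, it predicts the brain activity that clip should evoke, which is what makes the decoding step below possible.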

Researchers then used the computer models to predict the brain activity that each of 18 million seconds of random YouTube video would evoke. The program picked the 100 YouTube clips whose predicted brain activity was most similar to the activity actually recorded from the subjects, and merged those 100 clips into a blurry reconstruction of the original movie.
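A hedged sketch of that decoding step, under the same assumptions: score every library clip by how well its predicted brain activity correlates with the activity actually measured, keep the 100 best matches, and average their frames into a blurry reconstruction. The names and shapes are again illustrative.

```python
import numpy as np

def reconstruct_second(observed_activity, library_predictions,
                       library_frames, top_k=100):
    """Reconstruct one second of viewed video from brain activity.

    observed_activity:   (n_voxels,) measured fMRI response
    library_predictions: (n_clips, n_voxels) model-predicted responses
                         for each one-second library clip
    library_frames:      (n_clips, H, W, 3) a representative frame per clip
    """
    # Pearson correlation between observed and each predicted response.
    obs = observed_activity - observed_activity.mean()
    preds = library_predictions - library_predictions.mean(axis=1, keepdims=True)
    scores = (preds @ obs) / (
        np.linalg.norm(preds, axis=1) * np.linalg.norm(obs) + 1e-12)

    # Average the frames of the top-k best-matching clips.
    best = np.argsort(scores)[-top_k:]
    return library_frames[best].mean(axis=0)
```

Averaging many near-matches rather than picking a single winner is what gives the reconstructions their characteristic watercolor blur: shapes that many top clips share survive the average, while details that vary wash out.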

Most previous attempts to decode brain activity have focused on photographs instead of moving pictures.

"We need to know how the brain works in naturalistic conditions," Nishimoto said. "For that, we need to first understand how the brain works while we are watching movies."

Learn more about the experiment from the YouTube video below.
The left clip is a segment of the film trailer subjects viewed while their brain activity was being recorded. The right clip shows the reconstruction of this movie from brain activity. The reconstruction was obtained using only each subject's brain activity and a library of 18 million seconds of random YouTube video that did not include the movies used as stimuli. Brain activity was sampled every one second, and each one-second section of the viewed movie was reconstructed separately.