This is one of the first steps toward a computer being able to tap directly into what our brains see, imagine, and even dream.
(Link to video) Each image we see activates photoreceptors in the retina of the eye. The information is fed through the optic nerve to the back of the brain, where it is assembled and interpreted by increasingly higher-level processes.
In this experiment, subjects watched clips of movie trailers while an fMRI machine scanned their brains in real time. The computer mapped activity throughout millions of "voxels" (3D pixels).
The computer gradually learned to associate the formal qualities of the film, such as edges and movement, with corresponding patterns of brain activity.
The researchers then built "dictionaries" by matching the video images with patterns of brain activity, and used models to predict which patterns new videos would evoke, drawing on a palette of 18 million seconds of random videos taken from the Internet. Over time, the computer could crunch all that data into a set of images that plays alongside the original video.
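The learning step described here, fitting a model that maps visual features of the film to voxel responses and then predicting the activity a new clip should evoke, can be sketched roughly as follows. This is a toy illustration only: the feature counts, array sizes, and the simple ridge regression are my assumptions, not the study's actual method.

```python
# Toy sketch (assumed, not the study's code): learn a linear mapping from
# film features (edges, motion energy) to per-voxel brain responses.
import numpy as np

rng = np.random.default_rng(1)
n_frames, n_features, n_voxels = 200, 10, 50
features = rng.normal(size=(n_frames, n_features))       # film features per frame
true_weights = rng.normal(size=(n_features, n_voxels))   # hidden "ground truth"
voxels = features @ true_weights + 0.1 * rng.normal(size=(n_frames, n_voxels))

# Ridge regression via the normal equations: W = (X^T X + lam*I)^-1 X^T Y
lam = 1.0
W = np.linalg.solve(features.T @ features + lam * np.eye(n_features),
                    features.T @ voxels)

# The fitted model can now predict the activity a brand-new clip should evoke.
new_features = rng.normal(size=(1, n_features))
predicted_activity = new_features @ W
```

With enough training footage, the learned weights `W` recover the underlying feature-to-voxel mapping closely, which is what lets the model score unseen clips in the next step.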
If I understand the process properly, the images we see on the right side ("clips rebuilt from brain activity") are actually averages, created by mixing the hundred random YouTube clips that best matched the computer's predictions of which pictures would produce the brain activity it was monitoring.
In other words, the right-hand image is generated from existing clips, not from scratch. In this video (link), you can see the original video that gave rise to the brain activity in the upper left part of the screen, and some of the sample clips (strung in a line) that the computer guessed could be the cause of that brain activity.
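The averaging step, picking the hundred library clips whose predicted activity best matches the measurement and blending their frames, might look something like this toy sketch. All names, array shapes, and the correlation-based scoring are assumptions on my part, not the study's implementation.

```python
# Toy sketch (assumed): reconstruct a frame by averaging the top-100 clips
# whose predicted voxel activity best matches the measured activity.
import numpy as np

def reconstruct_frame(measured_activity, predicted_activities, clip_frames,
                      top_k=100):
    """Pixel-wise average of the frames of the top_k best-matching clips."""
    # Score each candidate clip by correlating its predicted activity
    # pattern with the measured pattern.
    scores = np.array([np.corrcoef(measured_activity, p)[0, 1]
                       for p in predicted_activities])
    best = np.argsort(scores)[::-1][:top_k]   # indices of best-matching clips
    return clip_frames[best].mean(axis=0)     # blended "reconstruction"

# Toy demo: 500 candidate clips, 64 voxels, 16x16 grayscale frames.
rng = np.random.default_rng(0)
predicted = rng.normal(size=(500, 64))
frames = rng.uniform(0.0, 1.0, size=(500, 16, 16))
measured = predicted[3] + 0.1 * rng.normal(size=64)  # clip 3 is the true cause
recon = reconstruct_frame(measured, predicted, frames, top_k=100)
```

Because the output is a mean over a hundred unrelated clips, it is inherently blurry, which matches the ghostly, washed-out look of the reconstructions in the video.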
This would explain the momentary fragments of ghostly words that appear in the images, and the strange distortions of the original's colors and shapes.
The result is a moving image that looks a bit like a blurred version of the original video, but is really a generalization assembled from the available range of averaged images. Clearly, the perception of faces activates the brain in a very distinctive way, judging by the relative clarity of the computer-generated images of faces compared to other kinds of images.
I wonder what would happen if you set this system up in a biofeedback loop, so that the brain activity and the image generation could play off each other? It might act as a kind of computer-assisted hallucination.
Article on Gizmodo
Thank you, Christian Schlierkamp