The co-authors of the study, which appears in the journal Neuron, also include Niccolo Pescetelli, a doctoral student at the University of Oxford, and Stanislas Dehaene, a professor at the Collège de France.
In their study, human subjects viewed a series of quickly flashed images and reported which ones they saw and which they did not, while their brain activity was monitored using magnetoencephalography (MEG)—a non-invasive neuroimaging technique that measures, every millisecond, the tiny magnetic fields generated by neuronal activity. Critically, the authors developed machine learning algorithms to decode the content of these images directly from these large and complex neuroimaging data.
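To give a sense of what such decoding involves, the sketch below illustrates the general idea of time-resolved decoding with standard open-source tools (NumPy and scikit-learn). It is not the authors' actual pipeline: the data are simulated, and the trial counts, sensor counts, and classifier choice are all hypothetical assumptions made for illustration.

```python
# Minimal sketch of time-resolved MEG decoding (illustrative only, not the study's code).
# Assumes preprocessed, epoched data shaped (n_trials, n_sensors, n_times) plus one
# label per trial indicating which image was flashed; here both are simulated.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 306, 120))   # 200 trials, 306 MEG sensors, 120 time samples
y = rng.integers(0, 2, 200)                # binary label: which of two images was presented

# Standardize sensor amplitudes, then fit a linear classifier.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# Train and cross-validate a separate decoder at each time sample, yielding a
# time course of decoding accuracy that tracks when stimulus information is present.
scores = [cross_val_score(clf, X[:, :, t], y, cv=5).mean() for t in range(X.shape[2])]
print(f"Peak decoding accuracy across time: {max(scores):.2f}")
```

With real recordings, the resulting accuracy curve shows when, and for how long, information about the flashed image can be read out from brain activity.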
These new algorithms allowed the authors to confirm a series of theoretical predictions. In particular, the results reveal a striking dissociation between the dynamics of "objective" neural representations (i.e., the visual information presented to the eyes) and "subjective" ones (i.e., what subjects report having seen). However, and contrary to theoretical predictions, the authors also showed that invisible images can be partially maintained within high-level regions of the brain.
“Undoubtedly, these results suggest that our current understanding of the neural mechanisms of conscious perception may need to be revised,” notes King, who also holds an appointment at the Frankfurt Institute for Advanced Studies (FIAS). “However, beyond our empirical findings, this study demonstrates that machine learning tools can be remarkably powerful at decoding neuronal activity from MEG recordings—a preview of what we can uncover about the workings of the brain.”
This project received funding from the European Union’s Horizon 2020 research and innovation program (grant agreement No. 660086), INSERM, CEA, Collège de France, the Direction Générale de l’Armement, the Bettencourt Schueller Foundation, the Fondation Roger de Spoelberch, and the Philippe Foundation.
# # #