Eye-Brain-Computer Interface

Brain-computer interfaces (BCIs) and gaze-based control are being developed by many research groups around the world to help people with motor and speech disorders. About ten years ago, it was proposed to combine the two technologies so that each compensates for the other's main drawback: the low accuracy of BCIs, and the tendency of gaze-based control to trigger commands in the absence of any intention to issue them. The expected result is a technology that transforms intention into action as quickly as possible. If successful, it will serve not only paralyzed people but also healthy users who wish to pass commands from brain to computer without using their hands, for example during demanding cognitive work.
However, serious limitations of existing methods for "decoding" human intentions stand in the way of creating such a hybrid technology. Moreover, the brain processes by which intentions are formed and implemented, as well as the "markers" of these processes in brain signals available for noninvasive recording, remain very poorly studied. The project aimed at removing these barriers was organized through cooperation between the MEG Center and the Laboratory of Neurocognitive Technologies of the Kurchatov Institute, with the participation of colleagues from other organizations (A.E. Ossadtchi, Higher School of Economics; I.P. Zubarev, Aalto University, Finland). The project is supported by the Russian Science Foundation (grant 18-19-00593, "Fluent Human-Machine Interaction Based on Expectation and Intention Markers: Neurophysiological and Neuroengineering Foundations"; previously grant 14-28-00234, "In Search of the 'I': Interdisciplinary Research of Initiation of Voluntary Action").
The project is developing a methodological toolkit that extends the capabilities of MEG technology, primarily methods for classifying short single-trial fragments of the MEG signal with deep artificial neural networks, including in near-real-time mode. We study human-machine interaction in situations where the machine is given heightened sensitivity to human intention and, in particular, executes a command even before the intention to issue it has fully formed, or helps perform an action faster than normal human capabilities allow.
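For illustration, here is a minimal sketch of single-trial classification of short MEG epochs with a compact convolutional network. PyTorch, the tensor shapes, and the architecture (a 1x1 "spatial" convolution followed by a temporal convolution, a common pattern in compact EEG/MEG decoders) are all assumptions made for this sketch, not the project's published classifiers (for those, see Ovchinnikova et al., 2021, ref. 2 below).

    import torch
    import torch.nn as nn

    N_CHANNELS, N_TIMES = 204, 100  # e.g., planar gradiometers, 100 samples per epoch

    class CompactMEGNet(nn.Module):
        """Spatial filtering followed by temporal convolution (an illustrative
        compact decoder, not the project's actual network)."""
        def __init__(self, n_channels=N_CHANNELS, n_times=N_TIMES, n_classes=2):
            super().__init__()
            self.spatial = nn.Conv1d(n_channels, 8, kernel_size=1)  # 8 learned "virtual sensors"
            self.temporal = nn.Conv1d(8, 16, kernel_size=9, padding=4)
            self.pool = nn.AvgPool1d(4)
            self.head = nn.Linear(16 * (n_times // 4), n_classes)

        def forward(self, x):  # x: (batch, channels, times)
            x = torch.relu(self.spatial(x))
            x = torch.relu(self.temporal(x))
            x = self.pool(x).flatten(1)
            return self.head(x)  # logits: e.g., voluntary vs. spontaneous

    # Toy usage with random data standing in for real fixation-locked epochs.
    model = CompactMEGNet()
    epochs = torch.randn(32, N_CHANNELS, N_TIMES)
    print(model(epochs).shape)  # torch.Size([32, 2])

Because the whole forward pass is a handful of small convolutions, a model of this kind can also score incoming epochs one at a time, which is what near-real-time operation requires.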
The main studies use MEG and other brain signals recorded while subjects employ gaze-based control to make moves in specially designed games; these data are compared with recordings of similar eye movements and fixations performed spontaneously. This experimental paradigm for contrasting voluntary and spontaneous oculomotor behavior, developed in the project (Shishkin et al., 2016), makes it possible to align the data on factors not directly related to intentionality (primarily on the purely motor features of the prepared or performed oculomotor behavior).
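Two ingredients of this paradigm lend themselves to a compact illustration: cutting fixation-locked epochs from the continuous recording, and pairing each voluntary dwell with a spontaneous dwell of similar duration so that the two classes are aligned on a basic oculomotor feature. The following numpy-only sketch uses hypothetical function names and a simple greedy matching rule; it is not the project's published pipeline.

    import numpy as np

    def fixation_epochs(meg, fix_onsets, sfreq, tmin=-0.3, tmax=0.5):
        """Cut epochs around fixation onsets. meg: (n_channels, n_samples)."""
        lo, hi = int(tmin * sfreq), int(tmax * sfreq)
        keep = [s for s in fix_onsets if s + lo >= 0 and s + hi <= meg.shape[1]]
        return np.stack([meg[:, s + lo:s + hi] for s in keep])

    def match_by_duration(vol_dur, spo_dur, tol=0.05):
        """Greedily pair each voluntary dwell with an unused spontaneous dwell
        whose duration differs by at most `tol` seconds."""
        used, pairs = set(), []
        for i, d in enumerate(vol_dur):
            diffs = [(abs(d - s), j) for j, s in enumerate(spo_dur) if j not in used]
            if diffs and min(diffs)[0] <= tol:
                j = min(diffs)[1]
                used.add(j)
                pairs.append((i, j))
        return pairs

    # Toy data: 2 channels, 10 s at 1000 Hz; fixation onsets given in samples.
    rng = np.random.default_rng(0)
    meg = rng.standard_normal((2, 10_000))
    print(fixation_epochs(meg, [1200, 4500, 8000], sfreq=1000).shape)  # (3, 2, 800)
    print(match_by_duration([0.52, 0.61], [0.50, 0.90, 0.60]))         # [(0, 0), (1, 2)]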
The material obtained is used in two directions in parallel: (1) to refine machine classification of voluntary versus spontaneous oculomotor activity, in order to build a highly efficient hybrid human-machine interface; (2) to investigate the brain mechanisms of voluntary action: the availability of a "control" condition closely matched to the voluntary mode in movement characteristics distinguishes our paradigm from the well-known experimental paradigms of Libet and Haynes. Once completed and tuned, the hybrid interface itself will serve both as an experimental model for exploring the practical possibilities of highly sensitive interfaces, primarily in the rehabilitation of people with severe motor disorders, and as an experimental testbed for studying how intention is generated and how the sense of authorship of an action is formed, phenomena whose understanding is crucial both for practical purposes and for broader theoretical and philosophical questions about free will and the boundaries of consciousness.
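The gating logic behind direction (1) fits in a few lines: a gaze dwell on an interface element proposes a command, and the command is executed only if a brain-signal classifier rates the dwell as intentional. A minimal sketch, in which the thresholds, names, and the probability input are illustrative assumptions rather than the project's implementation:

    from dataclasses import dataclass

    @dataclass
    class Dwell:
        target: str        # interface element under gaze (e.g., a cell on the game board)
        duration_s: float  # how long gaze has stayed on it

    def should_execute(dwell: Dwell, p_intentional: float,
                       min_dwell_s: float = 0.5, p_threshold: float = 0.8) -> bool:
        """Fire the command only when the dwell is long enough AND the classifier
        is sufficiently confident that the dwell is voluntary."""
        return dwell.duration_s >= min_dwell_s and p_intentional >= p_threshold

    print(should_execute(Dwell("cell_17", 0.55), p_intentional=0.91))  # True
    print(should_execute(Dwell("cell_17", 0.55), p_intentional=0.40))  # False

The point of such a gate is that the dwell criterion can be made shorter (faster action) without a flood of unintended selections, the classic "Midas touch" problem of pure gaze control.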

Examples of the time course of the fixation-related magnetic field of the brain for gaze dwells made in order to give a command (red lines) and gaze dwells occurring spontaneously (gray lines). Left: signal from a magnetometer; right: signal from a gradiometer (sensor positions are marked by dots on the head schematic). Group average (n = 29), mean (solid lines) ± 95% CI (shading); signals normalized to the individual baseline [-300, -100] ms.
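One plausible way to compute such curves (per-subject baseline correction on [-300, -100] ms, then a group mean with an approximate 95% confidence interval across subjects) is sketched below; the array shapes, sampling rate, and normal-approximation CI are assumptions for illustration.

    import numpy as np

    SFREQ, TMIN = 1000, -0.3  # Hz; epochs assumed to start 300 ms before fixation onset

    def group_curve(per_subject):
        """per_subject: (n_subjects, n_times) fixation-locked averages for one sensor."""
        t = TMIN + np.arange(per_subject.shape[1]) / SFREQ
        bl = (t >= -0.3) & (t <= -0.1)  # individual baseline window [-300, -100] ms
        x = per_subject - per_subject[:, bl].mean(axis=1, keepdims=True)
        m = x.mean(axis=0)
        sem = x.std(axis=0, ddof=1) / np.sqrt(x.shape[0])
        return t, m, m - 1.96 * sem, m + 1.96 * sem  # time, mean, ~95% CI bounds

    rng = np.random.default_rng(1)
    subj = rng.standard_normal((29, 800))  # 29 subjects, 800 samples (-300..+500 ms)
    t, m, lo, hi = group_curve(subj)
    print(t[0], round(t[-1], 3), m.shape)  # -0.3 0.499 (800,)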

1. Dubynin I.A., Yashin A.S., Velichkovsky B.M., Shishkin S.L. An experimental paradigm for studying sense of agency in joint human-machine motor actions (under review).
2. Ovchinnikova A.O., Vasilyev A.N., Zubarev I.P., Kozyrskiy B.L., Shishkin S.L. MEG-based detection of voluntary eye fixations used to control a computer. Frontiers in Neuroscience. 2021. doi: 10.3389/fnins.2021.619591
3. Zhao D.G., Vasilyev A.N., Kozyrskiy B.L., Melnichuk E.V., Isachenko A.V., Velichkovsky B.M., Shishkin S.L. A passive BCI for monitoring the intentionality of the gaze-based moving object selection. Journal of Neural Engineering (in press). doi: 10.1088/1741-2552/abda09
4. Zhao D.G., Karikov N.D., Melnichuk E.V., Velichkovsky B.M., Shishkin S.L. Voice as a mouse click: Usability and effectiveness of simplified hands-free gaze-voice selection. Applied Sciences. 2020. 10(24): 8791. doi: 10.3390/app10248791
5. Zhao D.G., Vasilyev A.N., Kozyrskiy B.L., Isachenko A.V., Melnichuk E.V., Velichkovsky B.M., Shishkin S.L. An expectation-based EEG marker for the selection of moving objects with gaze. Proc. 8th Graz Brain-Computer Interface Conference. 2019. Verlag der Technischen Universität Graz. Pp. 291-296.
6. Shishkin S.L., Zhao D.G., Isachenko A.V., Velichkovsky B.M. Gaze-and-brain-controlled interfaces for human-computer and human-robot interaction. Psychology in Russia: State of the Art. 2017. 10(3): 120-137.
7. Dubynin I.A., Shishkin S.L. Feeling of agency versus judgment of agency in passive movements with various delays from the stimulus. Psychology in Russia: State of the Art. 2017. 10(3): 40-56.
8. Nuzhdin Y.O., Shishkin S.L., Fedorova A.A., Kozyrskiy B.L., Medyntsev A.A., Svirin E.P., Korsun O.V., Dubynin I.A., Trofimov A.G., Velichkovsky B.M. Passive detection of feedback expectation: Towards fluent hybrid eye-brain-computer interfaces. Proc. 7th Graz Brain-Computer Interface Conference. 2017. Pp. 361-366.
9. Shishkin S.L., Nuzhdin Y.O., Svirin E.P., Trofimov A.G., Fedorova A.A., Kozyrskiy B.L., Velichkovsky B.M. EEG negativity in fixations used for gaze-based control: Toward converting intentions into actions with an eye-brain-computer interface. Frontiers in Neuroscience. 2016. 10: article 528.
10. Velichkovsky B.M., Nuzhdin Yu.O., Svirin E.P., Stroganova T.A., Fedorova A.A., Shishkin S.L. Control by the "power of thought": On the way to new forms of human interaction with technical devices. Voprosy Psikhologii. 2016. 62(1): 79-88. (In Russian)

Project Leader - Sergei L. Shishkin.