REVIEW
Brain-computer interface: the future in the present
1 Cyber Myonics, Moscow, Russia
2 Department of Neurobiology, Duke University, Durham, North Carolina, USA
Correspondence should be addressed to Olga Levitskaya
ul. Marshala Biryuzova, d. 30, kv. 45, Moscow, Russia, 123060; olia_levits@mail.ru
Ultimately, any mental activity is expressed as muscle contractions and relaxations that allow us to interact with the external world and each other: muscles control limb and eye movements, facial expression, and speech production. Muscle contractions are involved in practically any sensation. For example, we scan visual scenes with eye movements and move our hands to obtain tactile sensations.
The movements of our body are monitored by a large number of sensory receptors. The continuous streams of incoming (sensory) and outgoing (motor) signals are processed at multiple levels of the nervous system, from the lowest to the highest. This immense sensory and motor processing is largely subconscious, and we take it for granted that we can effortlessly perform very complex tasks, such as walking upright, maintaining balance, moving fingers and toes, speaking, etc.
Unfortunately, the ability to move and sense can be severely impaired if the nervous system is damaged. Millions of people around the world suffer from sensory and motor deficits caused by spinal cord injuries, stroke, Parkinson’s disease, amyotrophic lateral sclerosis and other pathological conditions. Even in cases of devastating deficits, the higher brain regions often retain their functionality but become isolated from the muscles, leaving the patient paralyzed and unable to speak or feel.
Currently, there is no effective treatment for many motor and sensory disorders, and patients remain bedridden or wheelchair-bound for the rest of their lives. Developing effective rehabilitation methods and devices that compensate for the lost functions is therefore one of the most pressing challenges facing modern medicine.
Artificial components for the nervous system
A brain-computer interface (BCI) is a promising tool for treating various neurological disorders. BCIs connect intact areas of the brain to assistive devices that can restore motor and sensory functions [1, 2, 3, 4, 5]. For example, patients paralyzed after a spinal cord injury could potentially regain mobility using a BCI that connects their intact motor cortex to robotic arms, exoskeletons or devices that apply functional electrical stimulation (FES) to the muscles. So far, there has been some success in the development of such motor BCIs [6, 7, 8, 9]. Moreover, patients can hope to regain sensitivity of paralyzed body parts with sensory BCIs that connect somatosensory areas of the nervous system to prostheses equipped with touch and position sensors. Such BCIs induce sensations by electrical stimulation of the somatosensory cortex.
Although BCIs are primarily intended to assist patients, they can also be used by healthy individuals, for example, in computer games [10] or as a drowsiness alarm for long-haul truck drivers [11]. In the latter case, drowsiness is detected from the electroencephalogram (EEG).
BCIs are often called brain-machine interfaces (BMIs). In general, these terms can be used interchangeably, although conventionally noninvasive interfaces have been termed BCIs and invasive interfaces BMIs. “Neuroprosthesis” and “neuroimplant” are used as synonyms. In this article the term BCI is used throughout.
Brain-computer interfaces belong to that area of knowledge where the gap between science fiction and its practical implementation does not exceed 50 years. However, although the number of publications on this subject has increased over the past few years, many BCI technologies are still at the experimental stage and are neither used in clinical practice nor available commercially. The exceptions are some FES-based systems [12] and cochlear implants [13, 14] that are successfully used for rehabilitation.
In this article we will cover motor and sensory BCIs. The division of functions into sensory and motor is, however, an oversimplification: the brain of any organism does not have areas solely responsible for movements or sensations [15, 16]. That is why the recently developed sensorimotor interfaces are the most ergonomic ones [17].
The history of BCI research and development
The initial experiments in monkeys date back to the mid-1960s. The monkeys were implanted with multi-electrode arrays for electrical stimulation and recording of cortical potentials [15, 18]. It was shown that the sensorimotor cortex was activated when monkeys performed movements; the electrical stimulation of the sensorimotor cortex, in turn, caused muscle contractions.
In 1963 Walter carried out an experiment that implemented the first BCI in the modern sense [19]. Patients who had been implanted with electrodes in different cortical areas for clinical diagnostic purposes were asked to advance carousel projector slides by pressing a button. After identifying the cortical area whose activity accompanied the button press, the researcher connected it directly to the projector. The button was then disconnected, but the slides kept advancing: the brain controlled the slide changes, and did so even before the subject pressed the now-disconnected button.
An idea similar to the concept of modern BCIs was formulated by American researchers from the National Institutes of Health in the late 1960s. They announced that they would focus on the development of principles and methods for controlling external devices by brain signals [20]. The researchers implanted electrodes into the motor cortex of monkeys. The electrodes recorded the action potentials of a few neurons while the animals were moving their hands [21]. The recorded neuronal discharges were transformed into hand movement trajectories using linear regression. It took another 10 years of effort to implement this transformation in real time: monkeys learned to control a cursor on an LED display by activating their motor cortex neurons [22].
Around the same time, a similar study was carried out under Fetz’s supervision [23], but its focus was on biological feedback: could a monkey control its neuronal discharges volitionally? It was found that volitional control of neurons responsible for movement was possible without performing the actual movement. This result is important for understanding the mechanisms of mirror neurons and even of neurons involved in empathy.
Parallel to the development of motor BCIs, sensory interfaces were emerging [14]. In 1957 the French scientists Djourno and Eyriès succeeded in inducing auditory sensations in deaf individuals using a single-channel electrode that stimulated the auditory nerve. In 1964 Simmons proposed a multi-channel upgrade for the device. In the 1970s House and Urban developed a device that consisted of an acoustic signal converter and a multi-channel cochlear implant. The device was approved by the US Food and Drug Administration and, after further improvements, was introduced into clinical practice.
In the 1980s the possibility of restoring vision with BCIs became a subject of research. An electrode array was implanted over the visual cortex of totally blind individuals. The visual sensations induced during these experiments were termed phosphenes. People who had never seen light (or had not seen it for a long time) learned to identify simple phosphene patterns [24, 25]. At present, electrically stimulated vision continues to be tested in clinical trials, where a complex image from a video camera is transmitted to stimulating implants located in the eye or the visual cortex.
A tremendous advance in BCI research took place in the 1990s–2000s. Nicolelis and Chapin constructed the first BCI for controlling a robotic device [26]. The activity of cortical and basal ganglia neurons recorded in awake rats was transmitted to a robot that delivered water to the animals. Nicolelis then continued this research in primates, which were used in a number of projects: a robotic arm controlled by cortical neuronal ensembles [27, 28, 29], a BCI establishing artificial tactile feedback [17], a BCI for decoding leg movements [30], a BMI for bimanual movements [31], and others.
Also in the 1990s, experiments on implanting electrodes into the human brain were launched. Kennedy, who would later (in 2015) implant electrodes into his own brain, worked with a patient with amyotrophic lateral sclerosis. The patient received an electrode whose tip contained a growth factor that promoted the ingrowth of myelinated fibers. As a result, the patient was able to issue a binary neural command [32].
In the early 2000s several laboratories began to compete in the area of invasive BCI development. A group headed by Donoghue worked with monkeys and humans; the researchers implanted multi-electrode arrays into the human motor cortex, which allowed paralyzed individuals to control a cursor [8] and robotic manipulators [9]. Schwartz et al. studied movement control in three-dimensional space [33]. Eventually, success was achieved in experiments in which people controlled an anthropomorphic robotic arm [7]; this is currently one of the most impressive achievements of BCI technology.
In the course of BCI development, many laboratories, including those of Andersen, Shenoy and Vaadia, studied various cortical areas as signal sources for BCIs and created original algorithms for decoding brain signals.
In parallel, studies on noninvasive neurointerfaces were carried out, based on EEG recording, near-infrared brain imaging and FES. Birbaumer, Pfurtscheller, Wolpaw, Müller, Schalk, Neuper, Kübler, Millan, and other researchers offered a number of practical solutions for wheelchair operation and limb mobility restoration after traumas and strokes [12].
Neuronal decoding and neuronal tuning
How do motor BCIs manage to decode motor parameters from neuronal recordings? Many neurophysiological studies have shown that the discharge rates of single cortical neurons are correlated with behavior. For example, the discharge rates of motor cortical neurons are correlated with the position, acceleration and joint torques of the arm. Developers use such correlations for decoding neuronal signals. Reproducibility and recognizability of neural patterns, the so-called neuronal tuning, are a key factor for successful decoding. Neurons can be poorly tuned or contaminated by noise, which impedes decoding.
Investigations of encoding of various parameters by single neurons began in the 1950–1960s. Those studies utilized a single sharp-tipped electrode to record the extracellular activity of neurons in different brain areas. Somatosensory [34], motor [16] and visual [35] systems were studied using this approach. It became clear that even single neurons demonstrate repeatable activity patterns that encode a number of sensory and motor phenomena.
Extracellular recording from single neurons in awake, behaving animals continued in many laboratories around the world. Wise et al. discovered that cortical neurons modulate their rates several seconds before the actual movement: in their experiments, the monkeys knew what movement they had to make, but were trained not to make it before the trigger stimulus [36]. To study how visual stimuli are transformed into movement direction, Kalaska et al. recorded single-neuron activity in a task in which a movement had to be executed after a delay [37]. These experiments demonstrated that neuronal discharges contain information both about movements being executed and about movements that are planned by the brain but not yet initiated.
Georgopoulos and his colleagues recorded the activity of single motor cortical neurons while monkeys made arm movements in different directions [38]. The researchers found that the dependence of discharge rate on movement direction could be described by a cosine function: a neuron’s discharge frequency is maximal for a certain direction, called the preferred direction, and decreases gradually as the movement deviates from it. To explain how neuronal discharges are transformed into an arm movement in a given direction, Georgopoulos proposed the concept of the population vector: a vector sum of contributions from multiple neurons that has been shown to match the movement direction. Interestingly, even imagined arm movements without execution, such as a mental 90° rotation of the movement direction, are well described by a population vector [39].
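As an illustration of this tuning model (our notation, not taken from the cited papers), the cosine dependence and the resulting population vector can be written as

$$ f_i(\theta) = b_i + m_i \cos\left(\theta - \theta_i^{\mathrm{PD}}\right), \qquad \mathbf{P} = \sum_i \left(f_i - b_i\right)\,\mathbf{u}_i , $$

where $\theta$ is the movement direction, $\theta_i^{\mathrm{PD}}$ is the preferred direction of neuron $i$, $\mathbf{u}_i$ is the unit vector pointing along that direction, $b_i$ is the baseline rate and $m_i$ is the modulation depth; the population vector $\mathbf{P}$ then points approximately in the direction of the actual movement.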
Owing to these studies, it became clear that the activity of individual neurons carries information about behavioral parameters and that these parameters can be decoded. Neurophysiologists often use an audio speaker to monitor the discharges of single neurons, and an experienced neurophysiologist can often tell what the monkey is doing just by listening to the sound of the discharges. Similarly, a BCI decoder “listens” to neurons and tries to infer what movement or intent underlies this “neuronal sound”. The more neurons are “heard” by the decoder, the more accurate the decoding.
What do neuron ensembles sing about?
The more “musicians” a neuronal ensemble consists of, the higher the decoding accuracy: a larger neuronal sample makes it possible to average out the occasional noisy fluctuations of single neurons [1, 2]. This does not mean that small neuronal populations are useless for BCIs. Sometimes a few neurons are enough for the interface to work [33, 40], particularly if those neurons are highly tuned to the parameter of interest. Highly tuned neurons are sometimes called grandmother cells or Jennifer Aniston neurons, because they are selectively activated by specific stimuli: photographs of one’s grandmother or of Jennifer Aniston [41]. If the BCI task is to detect the presence of a grandmother or of Jennifer Aniston, such neurons come in handy. However, they are quite rare, and in real life the brain processes information using highly distributed neuronal representations. The melody of single neurons gives the main idea of a behavior pattern, but its symphony is played by many instruments. The more neurons are recorded simultaneously, the more accurate the decoding [2]. For this reason, multielectrode recording from a large number of neurons is most effective for BCI decoding. Recording from large neuronal ensembles is especially important if the task is to decode several behavioral parameters simultaneously [30]. Such ensemble recording improves decoding and maintains its stability [1].
Decoding algorithms
BCI decoders use statistical and machine-learning methods to reconstruct behaviors from neuronal activity. Initial decoder settings are based on a training set. In experiments with monkeys, a 5–10-minute recording is needed to obtain the training set. During this interval the animal performs the task manually, for example, moves a joystick with its hand [17, 28, 29], and the decoder “learns” to extract movement parameters (position, acceleration, force). Then the mode is switched to brain control, and the monkey performs the task (moving the cursor onto the target) using the decoder rather than its own hands.
A training set can also be obtained without moving the hand: the subject observes cursor movements or, in experiments with humans, is asked to imagine the movement. The latter approach is especially important when the participant is paralyzed.
The choice of a decoding algorithm is dictated by the behavioral parameters that need to be extracted from neuronal activity, the neural signal features used for decoding (single-neuron activity, field potentials, etc.), the number of recording channels, and the specifics of the behavioral task (for example, continuous control of cursor position versus discrete decision making).
If decoding is based on population vectors, the training set often consists of center-out movements in different directions. A population vector is then computed as a weighted vector sum of contributions from single neurons: each neuron contributes a vector pointing in that neuron’s preferred direction, with a length proportional to the neuron’s discharge frequency [39]. Despite some advantages, such as a clear conceptual framework, this method is not optimal because it does not rely on statistical procedures that optimize decoding accuracy.
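The following minimal sketch illustrates the population-vector idea on simulated cosine-tuned neurons; all parameter values and the noise model are invented for illustration and do not reproduce any of the cited studies.

```python
import numpy as np

# Minimal population-vector decoding sketch (illustrative values only).
# Each neuron is assumed to be cosine-tuned: rate = baseline + depth * cos(theta - preferred).
rng = np.random.default_rng(0)
n_neurons = 50
preferred = rng.uniform(0, 2 * np.pi, n_neurons)   # preferred directions (rad)
baseline = rng.uniform(5, 15, n_neurons)            # baseline firing rates (Hz)
depth = rng.uniform(2, 8, n_neurons)                # modulation depths (Hz)

def firing_rates(theta):
    """Simulated ensemble response to a movement in direction theta."""
    clean = baseline + depth * np.cos(theta - preferred)
    return clean + rng.normal(0, 1.0, n_neurons)    # add measurement noise

def population_vector(rates):
    """Weighted sum of unit vectors along each neuron's preferred direction."""
    weights = rates - baseline                       # contribution above baseline
    vx = np.sum(weights * np.cos(preferred))
    vy = np.sum(weights * np.sin(preferred))
    return np.arctan2(vy, vx)                        # decoded movement direction

true_theta = np.deg2rad(135)
decoded = population_vector(firing_rates(true_theta))
print(f"true {np.rad2deg(true_theta):.0f} deg, decoded {np.rad2deg(decoded):.1f} deg")
```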
The Wiener filter is a linear decoder quite similar to the population vector, but it is considerably more accurate because it minimizes the mean square error. The Wiener filter output at time t is a weighted sum of neuronal rates measured at several time points in the past (usually 5–10 time points within a 1-second window preceding t) [42]. The weights are computed for each neuron using standard linear regression methods based on matrix algebra.
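A minimal sketch of a Wiener-filter-style decoder is shown below: lagged firing rates are regressed onto a behavioral variable by ordinary least squares. The data are synthetic, and the lag settings (10 lags spanning a 1-second history) are assumptions for illustration only.

```python
import numpy as np

# Wiener-filter-style linear decoder: the behavioral variable at time t is predicted from
# firing rates at several preceding time points (here 10 lags of 100 ms = 1 s of history).
rng = np.random.default_rng(1)
n_neurons, n_samples, n_lags = 30, 2000, 10

rates = rng.poisson(5, size=(n_samples, n_neurons)).astype(float)   # synthetic spike counts
true_w = rng.normal(0, 0.1, size=n_neurons * n_lags)                # hidden "true" weights

def lagged(r, lags):
    """Stack rates at t-1 ... t-lags into one design row per time point."""
    rows = [r[lags - k : len(r) - k] for k in range(1, lags + 1)]
    return np.hstack(rows)

X = lagged(rates, n_lags)                        # (n_samples - n_lags, n_neurons * n_lags)
X = np.hstack([X, np.ones((len(X), 1))])         # bias term
y = X[:, :-1] @ true_w + rng.normal(0, 0.5, len(X))   # synthetic "hand position"

# Standard least-squares solution for the filter weights (training set).
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# During brain control, new rate vectors would be pushed through the same weights.
y_hat = X @ w
print("training correlation:", np.corrcoef(y, y_hat)[0, 1].round(3))
```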
In many cases, for example in the presence of stereotyped movement patterns, another filter, the Kalman filter, performs better. The Kalman filter separates variables into state variables (e.g., limb position and velocity) and observed variables (neuronal rates, related to the state through an observation model). During decoding, the state vector is updated at discrete time steps (usually 50–100 ms). At each update, two computations are performed: a prediction of the next state and its correction based on neuronal activity. The correction uses a model that compares the expected neuronal rates with the actually observed rates.
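The predict/correct cycle can be sketched as follows for a one-dimensional position/velocity state; the transition and observation matrices are made up for illustration and are not taken from any published decoder.

```python
import numpy as np

# Kalman-filter decoding sketch: state x = [position, velocity], observations z = firing rates.
dt = 0.1                                       # 100 ms update step
A = np.array([[1, dt], [0, 1]])                # state transition (constant-velocity model)
W = np.eye(2) * 0.01                           # state noise covariance
n_neurons = 20
rng = np.random.default_rng(2)
H = rng.normal(0, 1, (n_neurons, 2))           # how each neuron's rate depends on the state
Q = np.eye(n_neurons) * 0.5                    # observation noise covariance

x = np.zeros(2)                                # state estimate
P = np.eye(2)                                  # estimate covariance

def kalman_step(x, P, z):
    # Prediction of the next state from the movement model.
    x_pred = A @ x
    P_pred = A @ P @ A.T + W
    # Correction: compare expected rates (H @ x_pred) with the observed rates z.
    S = H @ P_pred @ H.T + Q
    K = P_pred @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

# Feed one simulated observation per 100 ms step.
true_x = np.array([0.0, 1.0])
for _ in range(50):
    true_x = A @ true_x
    z = H @ true_x + rng.normal(0, 0.7, n_neurons)
    x, P = kalman_step(x, P, z)
print("true position %.2f, decoded %.2f" % (true_x[0], x[0]))
```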
The unscented Kalman filter improves on the classic Kalman filter by taking into account nonlinear dependencies between neuronal activity and movements.
Interestingly, research on neuronal decoding also stimulates the development of new mathematical methods for analyzing physiological interactions between neurons. For example, artificial neural networks were inspired by the organization of the nervous system and, in turn, can be used to interpret the activity of brain circuits. Some laboratories use recurrent neural networks for decoding [43].
For tasks that involve a set of discrete decisions, discrete classifiers are used; EEG decoding of letters and numbers from cortical potentials is one example [44, 45]. In BCI decoding, the following machine-learning methods have also found application: the Gaussian classifier, probabilistic graphical models (Bayesian networks), hidden Markov models, the k-nearest neighbors algorithm, artificial neural networks, the multilayer perceptron, and elements of fuzzy logic.
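As a toy example of such discrete classification, the sketch below separates simulated “attended” and “ignored” stimulus epochs with a nearest-class-mean rule (a simple Gaussian classifier); the simulated waveform and all parameters are our assumptions, not a reproduction of the spellers in refs. [44, 45].

```python
import numpy as np

# Toy discrete classification in the spirit of a P300 speller: decide whether an
# EEG epoch follows an attended (target) or an ignored (non-target) stimulus.
rng = np.random.default_rng(3)
n_features = 40                                  # e.g. time samples from a post-stimulus window
target_mean = np.concatenate([np.zeros(20), np.full(20, 1.0)])   # late positive deflection
nontarget_mean = np.zeros(n_features)

def make_epochs(mean, n):
    return mean + rng.normal(0, 1.0, (n, n_features))

X_train = np.vstack([make_epochs(target_mean, 100), make_epochs(nontarget_mean, 100)])
y_train = np.array([1] * 100 + [0] * 100)

# Nearest-class-mean rule (a Gaussian classifier with identity covariance).
mu_t = X_train[y_train == 1].mean(axis=0)
mu_n = X_train[y_train == 0].mean(axis=0)

def classify(epoch):
    return int(np.linalg.norm(epoch - mu_t) < np.linalg.norm(epoch - mu_n))

test = make_epochs(target_mean, 50)
accuracy = np.mean([classify(e) for e in test])
print("fraction of target epochs detected:", accuracy)
```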
Theories of movement control and motor BCIs
Several theories of movement control have been elaborated to explain the neuronal mechanisms of movement; they also influence BCI design.
A classical scheme of movement control includes a set of hierarchically organized regions of the nervous system. According to this scheme, cortical structures are at the top of the hierarchy and control the most complex movements, such as finger movements, while the brain stem and spinal cord supervise simpler functions: postural automatisms and spinal reflexes [46]. The spinal cord of quadrupeds is known to contain central pattern generators that control the rhythmic movements of the limbs during walking [47].
Historically, motor control was long described as a set of reflexes; the concept of the reflex arc was proposed by Sherrington [46]. Reflexes are still acknowledged, but the emphasis has shifted to the top-down control exerted by the brain's higher centers during volitional movements. Typical motor activity contains both voluntary and reflex components [48]. Some BCIs, called shared-control BCIs, imitate these two components: they give the subject control over the high-level components (the onset and end of movements, target choice) and delegate low-level tasks, such as maintaining balance, to a robotic controller.
Many modern theories of motor control are based on the idea that the brain forms an internal model of the body that is used both for perceiving the body configuration and for planning and executing movements. Such an internal model was first described by Head and Holmes as the “body schema”, which the brain uses to monitor and integrate the multiple signals arriving from the body's sensory receptors [49]. BCI developers currently strive to construct a neurally controlled limb that can eventually be incorporated into the brain's body schema [1]. It is important to distinguish between the body schema and the body image: the body schema is a model constructed by the brain that reflects the structural and dynamic organization of the body, whereas the body image is the conscious, esthetic and sexual perception of one's own body.
From the concept of the body schema, researchers moved on to the modern internal model theory [50]. This theory describes two parts of the control loop: the controlled object (for example, an arm with its muscles and joints) and the controller (a neuronal network that controls arm movements). The controller uses an internal model to generate an expectation of the object's position, as well as an expectation of the sensory feedback. The controller then compares these expectations with the actual sensory feedback and, if a discrepancy is found, corrects the object's state. The equilibrium point hypothesis describes one implementation of this view [51]: higher motor centers set an equilibrium point for the controlled object, and spinal servo-mechanisms move the object toward it.
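A toy sketch of this control loop is given below: a forward model predicts the outcome of each motor command, the prediction is corrected by noisy sensory feedback, and the commands drive the limb toward an equilibrium point. All dynamics and gains are invented for illustration.

```python
import numpy as np

# Toy internal-model control loop: predict, compare with feedback, correct.
rng = np.random.default_rng(4)
target = 1.0            # desired arm position ("equilibrium point" set by higher centers)
position = 0.0          # actual arm position
estimate = 0.0          # controller's internal estimate of the arm position

for step in range(20):
    command = 0.3 * (target - estimate)                   # drive toward the equilibrium point
    position += command + rng.normal(0, 0.02)             # the real arm moves, imperfectly
    predicted = estimate + command                         # forward model: expected outcome
    feedback = position + rng.normal(0, 0.05)              # noisy sensory feedback
    estimate = predicted + 0.5 * (feedback - predicted)    # correct the internal estimate
print(f"final position {position:.2f} (target {target})")
```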
Arm BCI
Arm movements constitute the major part of the motor repertoire of our everyday lives, which is why many BCI developers focus on the task of arm control. Moreover, arm movements have a substantial cortical component, which is convenient for developers because cortical signals are easier to record than those of subcortical structures.
Fig. 1 shows an interface that reproduced arm movements: an invasive BCI that monkeys used to control a robotic arm performing reaching and grasping movements. Decoding was performed by multiple Wiener filters running in parallel.
In another experiment with monkeys, stereoscopic glasses were used to enable BCI control in three-dimensional space [33]. Motor cortical activity was translated into cursor position in space. Decoding was initially performed using the above-mentioned population vector method; in further experiments, accuracy was improved by an adaptive algorithm that minimized trajectory errors. Later, the same group demonstrated a BCI that monkeys used to feed themselves with a robotic arm [52]. Similar robotic-arm technologies are currently used to improve the quality of life of paralyzed patients [7, 9].
Virtual technologies have also been developed, such as a pair of virtual arms moving on a computer screen and a BCI for their control [31]. In those experiments, several hundred electrodes recorded neuronal activity in both cortical hemispheres, which enabled monkeys to control two arms simultaneously.
Functional electrical stimulation
Robotic BCIs are necessary when a limb is lost, but if limbs are paralyzed rather than lost, FES can be used. This technology utilizes electrode arrays for electrical stimulation of muscles with trains of impulses that imitate nervous system signals. Muscle activation by such stimulation, in turn, produces limb movements. For surface stimulation, a multi-electrode array is placed on the patient's skin. Such contact electrodes can be sewn into clothes, turning them into wearable electronic devices (gloves, trousers, etc.) [53]. BCI control can be exerted through EEG beta oscillations; in this way the movements of a paralyzed hand have been reproduced [54].
Using invasive BCIs, a paralyzed monkey hand was moved by FES, and the movements were quite precise [40]. In experiments involving FES of a larger number of muscles and decoding of over a hundred neurons, monkeys with paralyzed arms could perform grasping [55, 56]. Recently, such invasive BCI-based control has been demonstrated by a paralyzed human [6].
Experimental data indicate that some lower-level functions, such as adjusting limb position in an external force field, can be handed over to local automatic control; in this case feedback systems, such as position sensors, are used [57]. FES-based BCIs can take into account the specifics of muscle contractile properties. Vision can be used for feedback [53], as can sensory substitution with vibrostimulation.
BCIs for bipedal locomotion
The possibility of reproducing the kinematics of bipedal walking from recordings of cortical activity was first tested by Fitzsimmons, Lebedev and their colleagues [30]. The schematics of this experiment are presented in fig. 2. Monkeys were trained to walk on a treadmill while neuronal activity was recorded from the lower-limb representation of the sensorimotor cortex and the movements of the monkeys' legs were video-tracked. The BCI decoder was trained to extract the kinematics of the lower limbs and performed well for both forward and backward walking.
Based on those results, the Walk Again Project was founded, an international consortium whose goal is to develop a brain-driven exoskeleton [2]. Nicolelis demonstrated the EEG-controlled exoskeleton built by Gordon Cheng at the opening ceremony of the 2014 FIFA World Cup. A similar project, Mindwalker, emerged in Europe [58]. In parallel, Contreras-Vidal and his colleagues proposed the idea of a leg exoskeleton controlled by slow EEG rhythms; in 2012 they decoded the gait kinematics of a human walking on a treadmill [59]. In Russia, ExoAtlet, a very practical leg exoskeleton, was developed [60].
As an alternative to EEG-based control, the possibility of reactivating the spinal central pattern generator is being studied. Experiments on rat models of complete spinal cord injury demonstrated that locomotion can be restored by epidural electrical stimulation combined with the administration of serotonergic agonists [61].
Neuroplasticity and BCIs
Many studies have convincingly demonstrated that learning to use a BCI boosts the plasticity of the subject's brain. It has been speculated that, owing to this phenomenon, artificial limbs could become incorporated into the brain's representation of the body and eventually feel and act like normal limbs [1, 62].
Controlling external devices through BCIs has a lot in common with tool use. In a famous experiment with monkeys trained to use rakes to retrieve distant objects [63], posterior parietal cortex neurons that normally respond to objects in the vicinity of the hand started to respond to objects in the vicinity of the rake. In other words, the brain incorporated the rake into the body schema.
Long-term use of BCIs can lead to similar changes in the brain. Indeed, neurons participating in BCI control change their activity patterns [64]; correlations between pairs of neurons change as well [28, 31], as does neuronal tuning to movement direction [29].
Noninvasive BCIs
An important requirement for BCIs is safety. Noninvasive BCIs are the safest, as they do not penetrate biological tissues to record neuronal activity. Numerous types of noninvasive BCIs have been developed so far, mainly for operating wheelchairs and for restoring communication by means of spelling systems [44, 45, 65, 66, 67, 68].
EEG recording is the most popular method used for the development of noninvasive BCIs. EEG-based BCIs can be independent (based on endogenous activation by motor imagery) or dependent (based on exogenous activation by external stimuli). In the former case, slow cortical potentials and the mu (8–12 Hz), beta (18–30 Hz) and gamma (30–70 Hz) rhythms are used for control [4]. The effectiveness of this approach can be improved by adaptive decoding algorithms [69]. With exogenous activation, attention is focused on an external visual stimulus, which produces a conspicuous cortical response compared with the response to an ignored stimulus; the patient's intentions are decoded from the previously characterized difference between the responses to attended and ignored stimuli. Thus, in BCI control based on steady-state visually evoked potentials, the response to frequently repeated stimuli is recorded [70]: the subject is presented with several objects on the screen, each appearing and disappearing at its own frequency, and focuses on the objects one by one. P300 potentials can be used in a similar way [71].
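A minimal sketch of how such steady-state responses might be detected is shown below: the spectral power of a simulated EEG segment is compared at each object's flicker frequency, and the frequency with the largest power is taken as the attended one. The sampling rate, flicker frequencies and noise level are assumptions for illustration.

```python
import numpy as np

# SSVEP-style detection sketch: each on-screen object flickers at its own frequency;
# the decoder picks the frequency with the largest spectral power in the EEG segment.
fs = 250                                   # sampling rate, Hz (assumed)
t = np.arange(0, 4, 1 / fs)                # 4-second analysis window
object_freqs = [8.0, 10.0, 12.0, 15.0]     # flicker frequencies of four objects (Hz)

rng = np.random.default_rng(5)
attended = 10.0                            # the subject looks at the 10 Hz object
eeg = 2.0 * np.sin(2 * np.pi * attended * t) + rng.normal(0, 3.0, t.size)

spectrum = np.abs(np.fft.rfft(eeg)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def band_power(f, half_width=0.5):
    band = (freqs >= f - half_width) & (freqs <= f + half_width)
    return spectrum[band].sum()

powers = [band_power(f) for f in object_freqs]
print("decoded attended frequency:", object_freqs[int(np.argmax(powers))], "Hz")
```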
Artifacts of the EEG recording process present a considerable problem: they can be mistaken for neural activity and may even act as control signals. Dependent BCIs are less sensitive to artifacts. Electrocorticographic BCIs offer better signal quality than EEG, higher spatial and temporal resolution and lower sensitivity to artifacts, but they are invasive.
Apart from EEG, magnetoencephalography (MEG) is used [72]. Registering the weak magnetic fields generated by the brain requires highly sensitive instruments, such as superconducting quantum interference magnetometers. As a result, MEG recording requires special equipment and special conditions, above all magnetic shielding. Still, MEG provides better temporal and spatial resolution than EEG.
Another method of brain activity recording detects the levels of oxyhemoglobin and deoxyhemoglobin in the cerebral circulation using near-infrared spectroscopy (NIRS), with a temporal resolution of about 100 ms and a spatial resolution of about 1 cm. The major disadvantage of this technology is a considerable signal delay (up to several seconds). Nevertheless, NIRS-based BCIs are becoming popular [73].
Functional magnetic resonance imaging is a powerful tool for recording changes in cerebral circulation. Its temporal resolution is limited to 1–2 s and its signal is delayed by several seconds, but among noninvasive methods it stands out for its unsurpassed spatial resolution, which makes it possible to detect activity in any brain area [73].
Sensory BCIs
Sensory BCIs can be used for restoring vision, hearing, the senses of taste, smell and balance, and tactile and proprioceptive sensitivity. Sensory functions can be impaired by damage to the peripheral nervous system, leading to a complete loss of a sense (deafness, blindness), or by damage to higher-level structures that process sensory information (thalamus, cerebellum, basal ganglia, cerebral cortex); the latter, however, does not cause a complete loss of sensitivity. An interesting example is blindsight in patients with a damaged visual cortex: they are blind but can still sense and process visual stimuli subconsciously [74].
At present, sensory BCIs cannot replace the high-level components of a sensory system; blindsight, for example, cannot be repaired. Researchers currently focus on devices that repair low-level damage associated with dysfunction of peripheral structures and receptors. Such systems replace physiological sensors with artificial ones connected to undamaged sensory areas [17, 75, 76]. Signal transmission from artificial sensors to the nerve tissue is usually mediated by electrical stimulation, but recently optogenetic methods have gained popularity [77].
We should also mention sensory substitution, a method in which the signal flow from an artificial sensor is redirected to undamaged receptors of other body parts or to another sensory organ. Sensory substitution makes it possible to switch from one sensory modality to another. For example, artificial vision can be implemented by transmitting the signal from a video camera to a tactile matrix that stimulates the back [78].
Cochlear implants
Cochlear implants are the most successful sensory BCIs [13, 14]. Patients with such implants can understand speech, distinguish female from male voices and even perceive melodies. Bilateral implantation restores spatial hearing. The implant consists of six components: (1) an external microphone, (2) a speech processor that transforms the signal from the microphone into a stimulation sequence, (3) a transmitter placed on the skin, (4) a receiver and stimulator implanted into the bone under the skin, (5) a cable connecting the stimulator with the electrodes, and (6) an array of stimulation electrodes implanted into the cochlea.
A sequence of impulses is applied to undamaged areas of the auditory nerve. The use of several electrodes makes it possible to stimulate different areas of the nerve; the number of electrodes usually varies from 4 to 22. Several methods of forming the multichannel stimulation signal have been developed. In continuous interleaved sampling, the signal from the microphone is decomposed into frequency bands, and the intensity of the signal in each band determines the intensity of the stimulus on the corresponding electrode. The wide dynamic range of the acoustic signal is compressed into the narrow range of stimulation intensities using a nonlinear transform. There are also systems based on continuous analysis of the microphone signal, in which the electrode used for signal transmission is selected in a recurrent cycle.
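A minimal sketch of the continuous interleaved sampling idea is given below: the signal is split into frequency bands, each band's envelope is extracted and compressed, and the compressed envelopes set the amplitudes of stimulation pulses on the corresponding electrodes (in a real processor the pulses are interleaved in time across channels). The filter design, channel count and compression law are simplified assumptions, not a description of any commercial device.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

# Continuous interleaved sampling (CIS) sketch: band-split -> envelope -> compress -> pulses.
fs = 16000
t = np.arange(0, 0.5, 1 / fs)
audio = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 1800 * t)   # toy "speech"

bands = [(200, 500), (500, 1200), (1200, 3000), (3000, 7000)]              # 4 channels
pulse_rate = 900                                                            # pulses/s per channel

stim_amplitudes = []
for lo, hi in bands:
    sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
    band_signal = sosfilt(sos, audio)
    envelope = np.abs(hilbert(band_signal))                 # slow amplitude envelope
    compressed = np.log1p(20 * envelope)                    # compress wide dynamic range
    # Sample the compressed envelope at the pulse rate; pulses on different electrodes
    # would be interleaved in time so that no two channels stimulate simultaneously.
    idx = np.arange(0, len(t), fs // pulse_rate)
    stim_amplitudes.append(compressed[idx])

print("pulse amplitudes per channel:", [a.shape[0] for a in stim_amplitudes])
```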
For patients with a severely damaged cochlea, brainstem implants have been developed [13]. These devices stimulate the cochlear nucleus of the brainstem by means of surface or penetrating electrodes. Some patients who tested such implants reported low-quality sound recognition, while in others the device performed comparably to a cochlear implant.
Visual prosthesis
Visual prostheses are currently capable of restoring simple visual sensations [79]. They can be divided into two groups: retinal prostheses and brain prostheses. Retinal prostheses are used for treating pathologies that do not affect the optic nerve, while brain prostheses are used when the optic nerve is damaged and visual structures of the brain, such as the visual cortex, have to be stimulated to evoke visual sensations.
Depending on the severity of retinal damage, several types of retinal prostheses can be used. Epiretinal implants stimulate the nerve fibers of retinal ganglion cells with intraocular electrode arrays (up to 60 channels) that receive frames from a video camera; it is expected that in the future all components of such prostheses will be implanted inside the eye. Patients with such implants can perceive the shape of objects, the brightness of colors and the direction of movement.
Subretinal prostheses stimulate ganglion and bipolar cells by electrical signals. They consist of thousands of microphotodiodes that respond to the level of illumination and transmit this information to the electrode array. The studies of these devices are currently at an early experimental stage.
In a transchoroidal prosthesis, several dozen stimulating electrodes are implanted under the choroid. Compared with other designs, this device can be implanted by a relatively simple surgical procedure. Patients perceive the stimuli as phosphenes and can detect simple objects.
As a rule, non-retinal prostheses use electrical stimulation of the visual cortex. In 1974, simple visual perception was restored by implanting 64 electrodes on the surface of the visual cortex [25]. Intracortical microelectrode arrays may yield better results.
Bidirectional BCIs (brain-computer-brain interfaces)
Bidirectional, or sensor-connected, BCIs decode brain activity and simultaneously transmit artificial sensory signals to the brain, thus creating a feedback loop. Fig. 3 shows the schematics of the first brain-computer-brain interface (BCBI), designed in the Nicolelis laboratory by O’Doherty, Lebedev and their colleagues [80]. Microelectrode arrays were implanted into the motor and somatosensory cortex of monkeys: the first array recorded motor intentions, while the second transmitted artificial tactile sensations back to the brain through intracortical microstimulation. The BCBI allowed monkeys to explore virtual objects using a cursor or a realistic image (avatar) of a monkey’s arm. The virtual objects looked alike but differed in texture; the texture information was conveyed to the brain through microstimulation.
Brain-net
Networks that connect separate nervous systems have recently become a popular subject of research. In general, the task is to create a network that combines the knowledge and effort of several individuals for more effective problem solving. Such distributed networks include the neuron-net (a community of people and technologies that use neuronal signals for communication), the body-net (a network in which the movements of one individual can be transmitted to another through FES) and the brain-net (an integration of several brains by BCI technologies [81], fig. 4).
CONCLUSIONS
We are witnessing a rapid growth of BCI technologies. Researchers keep reporting new achievements and are making further progress in developing methods and devices that will help restore lost functions of the human body. With long-term use of a BCI, an artificial limb can become incorporated into the body schema formed by the brain. Many BCI projects are currently at the stage of laboratory experiments, but a few devices have already been successfully introduced into clinical practice. We envision a future in which a blind, deaf or paralyzed patient can live the life of a healthy person, assisted by neural implants and functional electrical stimulation. By using BCIs for network communication, mankind may rise to a new level; the most recent projects on creating an “internet of bodies and minds” are the first attempts toward that goal.