In focus: The challenges of neurotechnology

By Mr Federico Mantellassi, Research and Project Officer in Global and Emerging Risks, Geneva Centre for Security Policy

Neurotechnology can be defined as a field of science and engineering that seeks to connect technical components to a nervous system. Neurotechnologies allow us to record signals from the brain (for example, to diagnose mental disorders), to translate those signals into technical commands (for example, to operate a robotic prosthesis or control software), or to modify brain activity by stimulating targeted parts of the brain. A key advance in neurotechnology is the Brain-Computer Interface (BCI): a system that allows bi-directional interaction between the brain and machines. Neurotechnologies such as BCIs have significant medical applications and can enable some paralysed patients to regain a degree of autonomy by controlling prosthetic arms or using a computer with just the power of their minds. Furthermore, research into neurotechnologies can contribute to unlocking some of the major mysteries of the brain and pave the way to curing neurological disorders.
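
To make the record-decode-command loop described above more concrete, the following is a minimal sketch in Python of how a recorded brain signal might be turned into a discrete command. It is a toy illustration only, not how any real system is implemented: the single-channel signal is simulated, and the mu-band heuristic, the threshold and all names are hypothetical assumptions chosen for clarity; real BCIs rely on multi-channel recordings and trained machine-learning decoders.

```python
import numpy as np

# Illustrative sketch of the "record -> decode -> command" loop that BCIs
# implement. Everything below (sampling rate, frequency bands, threshold)
# is a hypothetical assumption for illustration, not a real device's API.

SAMPLING_RATE_HZ = 256  # assumed sampling rate of the recording device


def band_power(signal: np.ndarray, low_hz: float, high_hz: float) -> float:
    """Average spectral power of `signal` within a frequency band."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / SAMPLING_RATE_HZ)
    mask = (freqs >= low_hz) & (freqs < high_hz)
    return float(spectrum[mask].mean())


def decode_command(eeg_window: np.ndarray) -> str:
    """Map one second of (simulated) EEG to a discrete command.

    Toy heuristic: imagined movement suppresses mu-band (8-12 Hz) power
    over the motor cortex, so strong suppression relative to broadband
    power is decoded as a 'move' command. The 10% threshold is arbitrary.
    """
    mu = band_power(eeg_window, 8.0, 12.0)
    baseline = band_power(eeg_window, 1.0, 40.0)
    return "move_prosthesis" if mu < 0.1 * baseline else "idle"


# A simulated one-second window standing in for a real brain recording.
rng = np.random.default_rng(0)
window = rng.normal(size=SAMPLING_RATE_HZ)
print(decode_command(window))  # prints "idle" for this random signal
```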

Currently, most advanced neurotechnology remains quite invasive, requiring small surgical procedures to insert implants, but the development of less invasive, and therefore more commercially viable, neurotechnology is advancing rapidly. However, the development and use of neurotechnologies is also fraught with ethical, privacy and security concerns. As Marcello Ienca, a leading researcher in the field, notes, a fine line must be trodden between stifling the innovation of a technology that can alleviate human suffering and failing to mitigate its misuse. The following is a short overview of the major challenges associated with the development of neurotechnologies.

Governance of brain data and “neuro-rights”

The capacity to decode our cognitive processes gives us an unprecedented ability to collect “brain data” and thus to “datafy” what makes us, us. This has led to calls to treat brain data as different from other kinds of data. Because the brain is not just another organ but the centrepiece of what governs our behaviour, identity, emotions and decisions, the data that flows from it can be considered among the most private and sensitive kinds, as it provides insights into a person’s very being. It follows that this data should be subject to explicit and rigorous systems of governance to ensure that it is protected from abuse and misuse, and that practices such as coerced collection and brain-data-based discrimination are prevented.

A growing number of voices have advocated expanding the scope of human rights to more appropriately safeguard the human mind by establishing new rights known as neuro-rights. These would, for example, enshrine the rights to mental privacy and cognitive liberty. Discussions around the governance of brain data are extremely timely: as consumer-oriented neurotechnologies become increasingly prevalent, such data will be in the hands of private corporations that already relentlessly collect consumer data to maximise engagement, fine-tune customer profiling and increase advertising revenue. The brain is the last frontier of privacy; access to it should come with adequate governance frameworks.

Human enhancement, inequalities, and societal impacts

While the focus of neurotechnology is overwhelmingly on restorative, medical applications, there is only a short and slippery slope between restoration and “enhancement”. In other words, the technology will not stop at restoring the abilities of people who are ill or impaired; it will also be used to enhance, or “augment”, the healthy. This presents its own unique ethical and moral challenges. For example, what will enhancement mean for human character and personhood? Will it be safe? And will people be able to make free, educated choices about enhancement?

The use of neurotechnologies for enhancement also raises the issue of unequal access to these technologies. If only the wealthiest few have access to cognitive or physical augmentation, these technologies risk exacerbating social inequalities, or even creating new types of inequality. Indeed, if human augmentation becomes feasible, ensuring equal access to it will be key to mitigating its potential negative societal consequences. Moreover, the convergence of neurotechnologies with other emerging technologies, such as artificial intelligence, is making their impact more unpredictable, disruptive and complex. As technologies intertwine and affect one another, they become harder to regulate, and comprehending and anticipating their long-term effects becomes ever more elusive.

“Brain hacking” and thought control

Our growing capacity to read from and write into the brain is accompanied by the very real possibility that we will one day be able to “hack” the brain and effectively control human minds or behaviour. This capability has already been demonstrated in a rudimentary way in experiments on mice, and similar capabilities may eventually extend to humans. While state-of-the-art neurotechnology does not currently allow for this, anticipatory policies are necessary to safeguard against possible forms of misuse. As we better understand the mind and find ways to suppress the processes that lead to anxiety or depression, this growing knowledge of how to alter mental states could be repurposed and weaponised, for example to induce those emotions in people. Such a capability could enable new levels of manipulation: far more direct, personal and effective. With cognitive independence and individual integrity at risk, it is vital that we build safeguards into the uses of these technologies.

Military uses of neurotechnologies

Neurotechnology is a prime example of a dual-use technology, i.e. a technology with both civilian and military applications. At the most basic level, militaries are interested in neurotechnology research for its medical benefits, e.g. advanced brain-connected prosthetics for soldiers who have lost limbs, or treatments for disorders such as post-traumatic stress disorder (PTSD). However, these same technologies can be deployed on the battlefield in offensive ways, generating additional ethical, moral, political and security concerns. Advanced BCIs could augment soldiers’ combat abilities in many ways, whether physically, through the use of exoskeletons, or cognitively, through heightened awareness and control of their emotions. BCIs could also enhance warfighters by enabling seamless human-machine teaming on the battlefield, e.g. mind-controlled swarms of drones. As the tempo of war increases with the use of AI and the growing robotisation of warfare, military neurotechnologies may come to be seen as the only way for soldiers to remain relevant on the battlefield. Some fear that this may lead to the coerced use of neurotechnologies in the military, raising serious ethical concerns. Furthermore, these technologies could be used against enemy combatants, for example during interrogations. With direct access to soldiers’ minds, these technologies could also unlock the next frontier of surveillance.

Looking ahead…

Neurotechnology is still an emerging field, and while it is developing rapidly, most of the technologies discussed here remain confined to laboratory experiments. While some ethical questions need to be asked today, and regulatory efforts must begin now, thinking about the security, societal, ethical and political implications of these technologies requires looking over the horizon. This does not mean we should wait for the technologies to mature before discussing their potential misuse; by then it would be far too late.

 

About this blog series

The 21st century has ushered in an age of unprecedented technological innovation, much of it for the better. However, as digital technologies take up more and more space in our lives, recent years have shown that they can have unintended security and societal impacts. There is an urgent need to ensure the safe and globally beneficial development of emerging technologies and to anticipate their potential misuse, malicious use and unforeseen risks. Fortunately, technological risk can be addressed early on, and the unintended negative consequences of technologies can be identified and mitigated by putting ethics and security at the core of technological development. This series of blogs provides insights into the key challenges related to three emerging technologies: artificial intelligence, synthetic biology and neurotechnology. Each blog promotes an “ethics and security by design” approach and is part of the Polymath Initiative: an effort to create a community of scientists who can bridge the gap between the scientific and technological community and the world of policymaking.

 

Disclaimer: The views, information and opinions expressed in the written publications are the authors’ own and do not necessarily reflect those shared by the Geneva Centre for Security Policy or its employees. The GCSP is not responsible for and may not always verify the accuracy of the information contained in the written publications submitted by a writer.

Federico Mantellassi is a Research and Project Officer for the Global and Emerging Risks cluster at the GCSP. He is also the project coordinator of the GCSP’s Polymath Initiative.