
Neurotechnologies through the lens of human rights law


Authored by: Ben Howkins and Julie Vinders
Reviewed by: Corinna Pannofino and Anaïs Resseguier

Article | 06 October 2022

Technological innovation can both enhance and disrupt society in various ways. It often raises complex legal questions regarding the suitability of existing laws to maximise the benefits to society, whilst also mitigating potentially harmful consequences. Some emerging technologies even challenge us to rethink the ways in which our fundamental rights as human beings are protected by law. What happens, for example, to the right not to self-incriminate if advanced neurotechnologies in the courtroom can provide insights into a defendant’s mental state?

As a means of directly accessing, analysing, and manipulating the neural system, neurotechnologies have a range of applications, presenting the potential for both enhancements to and interferences with protected human rights. In an educational context, insights into how the brain works during the learning process, gained from neuroscience research and the use of neurotechnologies, may lead to more effective teaching methods and improved outcomes linked to the right to education. This may enhance the rights of children, particularly those with disabilities, yet more research is required to assess the potential for long-term impacts on brain development. The (mis)use of neurotechnologies in the workplace, meanwhile, may negatively affect an individual’s enjoyment of the right to rest and leisure by increasing workload, or instead positively enhance this right by improving efficiency and creating more time and varied opportunities for rest and leisure.

Neurotechnologies and medical treatment

The primary application of neurotechnologies is in the clinical context, both as a means of improving understanding of patients’ health and as a means of administering clinical treatment, the effects of which have the potential to enhance various protected human rights in conjunction with the right to health. For example, neurotechnologies may facilitate communication in persons whose verbal communication skills are impaired, a benefit directly linked to the right to freedom of expression. Additionally, neurotechnologies may be used to diagnose and treat the symptoms of movement disorders such as Parkinson’s disease, thereby potentially enhancing the rights to dignity and autonomy of persons with disabilities.

However, the clinical use of neurotechnologies requires compliance with legal and bioethical principles such as consent and the right to refuse treatment, without which the protected rights of users may be interfered with. A particular concern is that the clinical use of neurotechnologies may lead to infringements of the right to non-discrimination, specifically in the form of neurodiscrimination, whereby the insights from brain data processed by neurotechnologies form the basis of differential treatment between individuals, for instance in insurance and employment contexts. From this a key consideration emerges, namely whether brain data is adequately protected by the existing right to privacy, or whether there is a need for a putative right to mental privacy, amongst a range of novel human rights protections, including a right to cognitive liberty, a right to mental integrity and a right to psychological continuity. The essential premise behind these proposed ‘neurorights’ is that the existing human rights framework needs revising to ensure individuals are adequately protected against certain neuro-specific interferences, including the proposed ‘neurocrime’ of brain-hacking.

Neurotechnologies and the legal system

Neurotechnologies are also increasingly being used in the justice system, wherein they may enhance an individual’s right to a fair trial, for instance by ‘establishing competency of individuals to stand trial’ and informing rules on the appropriate ‘age of criminal responsibility’. However, the use of neurotechnologies may also interfere with access to justice and the right to a fair trial. For example, advanced neurotechnologies capable of gathering data on mental states consistent with one’s thoughts and emotions risk interfering with the right to presumption of innocence or the privilege against self-incrimination. An additional consideration in this context is the right of individuals to choose whether or not to benefit from scientific progress, the relevance of which is that individuals cannot be compelled by States to use neurotechnologies, except in certain limited circumstances determined by the law. The enforced use of neurotechnologies in justice systems could therefore interfere with the right to opt against “benefitting” from scientific progress, as well as the right to a fair trial and access to justice.

Neurotechnologies and future human rights challenges

Finally, whilst this study has highlighted the ways in which neurotechnologies may already affect the enjoyment of fundamental human rights, the potential for enhancements to and interferences with these protected rights may increase as the technological state of the art progresses. For example, although primarily contemplated within the realm of science fiction, in a future reality the use of neurotechnologies may challenge the strictness of the dichotomy between ‘life’ and ‘death’ by enabling ‘neurological functioning’ to be sustained independently of other bodily functions. This may affect States’ obligations to ensure the full enjoyment of the right to life, while also raising questions around the appropriate regulation of commercial actors seeking to trade on the promise of supposed immortality.

Next in TechEthos – is there a need to expand human rights law?

The study highlights the importance of bringing a human rights law perspective into the development of neurotechnologies. The human rights impact assessment is a mechanism designed to help ensure that new and emerging technologies, including neurotechnologies, develop in a manner that respects human rights, while also enabling the identification of potential gaps and legal uncertainties early in the development stage. The analysis also raises the question of whether further legislation may be required to address these gaps. Crucial to this question is the need to strike a balance between ensuring that technological development does not interfere with fundamental human rights protections and avoiding overregulation of emerging technologies at an early stage, which could stifle further development.

Read more about the human rights law implications of climate engineering and digital extended reality.

Read the report


Digital extended reality through the lens of human rights law


Authored by: Ben Howkins and Julie Vinders
Reviewed by: Corinna Pannofino and Anaïs Resseguier

Article | 06 October 2022

Technological innovation can both enhance and disrupt society in various ways. It often raises complex legal questions regarding the suitability of existing laws to maximise the benefits to society, whilst also mitigating potentially harmful consequences. Some emerging technologies even challenge us to rethink the ways in which our fundamental rights as human beings are protected by the law. For instance, how might digital extended reality (XR) affect online safety and the emerging rights to be online and to disconnect?

XR technologies have an assortment of uses and applications, from gaming and filmmaking to healthcare and education. Each use case of XR creates the potential for enhancements to and interferences with various human rights, including new and emerging rights, such as the right to a healthy environment, a right to disconnect and a right to be online.

The use of XR gaming applications, for instance, is consistent with the right to benefit from scientific progress and may enhance the right to rest and leisure of all users. It may benefit persons with disabilities in particular, whose right to autonomy, for instance, may be enhanced by being able to access leisure experiences perhaps otherwise unattainable in the physical world. However, the use of XR gaming applications may also lead to increased incidences of cyberbullying, harassment, and virtual sexual assault, which may interfere with the realisation of the rights of women and children in particular.

XR and possible use cases

In addition to particular use cases, there are also a variety of contexts in which the use of XR technologies may lead to both positive and negative impacts on the realisation of fundamental rights. In the clinical context, for instance, XR may enhance the right of healthcare professionals to just and favourable conditions of work when used to provide low-risk, hyper-realistic training experiences designed to improve overall healthcare provision. For patients, meanwhile, the clinical use of XR may lead to benefits linked to the right to health. Such applications may also enhance other protected rights, with the use of XR technologies to treat psychological trauma, for instance, potentially enhancing the right to dignity of victims of criminal offences. There is a risk, however, that the use of XR in a clinical setting could interfere with these protected rights, for instance if patients experience short or long-term health-related harms through the use of XR, such as motion sickness and depersonalisation / derealisation disorder.

Developing XR in accordance with human rights

In an educational context, the use of XR technologies may lead to improved learning outcomes linked to the right to education, including by accommodating specific educational needs, the benefits of which relate to the enjoyment of the rights of persons with disabilities on the basis of non-discrimination. Similarly, the incorporation of XR into the judicial system may enhance an individual’s right to a fair trial by improving the accessibility of legal proceedings, enabling evidential recreation of events, and helping to provide legal professionals with anti-bias training in order to maintain fairness. In both contexts, however, there is also a risk that the use of XR may lead to interferences with these rights, particularly if adopted without consideration of the potential drawbacks.

As such, the use of XR as an educational tool, for instance, should be informed by research on information overload and the possible effects on brain and neurological development in children, without which potentially safer and more effective teaching measures may be deprioritised or underfunded. Likewise, the use of XR in legal proceedings should be guided by factors including the suitability of virtual participation and the accessibility of XR technology. The right of access to justice may otherwise be undermined, for instance by promoting a form of participation that is inferior in type or quality to in-person participation, or by exacerbating existing accessibility issues faced by disadvantaged parties.

XR and future human rights challenges

There are certain human rights considered in this study for which XR technologies may enhance enjoyment while also raising challenging issues which fall short of constituting interferences. In relation to the right to freedom of expression, for instance, XR applications may facilitate new forms of creative expression in a variety of mediums, including music, narrative storytelling, and art. Yet there are also concerns related to the appropriate treatment of content in XR depicting violence, pornography, hate speech and mis/disinformation. This creates a tension between the right of everyone to freedom of expression and the obligation on States to protect users of XR from potentially harmful content and interferences with other fundamental rights. In seeking to resolve this conflict, States are required to strike a balance between unrestricted freedom and legitimate limitations.

Next in TechEthos – is there a need to expand human rights law?

The study highlights the importance of bringing a human rights law perspective into the development of new and emerging technologies, including XR. The human rights impact assessment is a mechanism designed to help ensure that such technologies develop in a manner that respects human rights, while also enabling the identification of potential gaps and legal uncertainties early in the development stage. The analysis also raises the question of whether further legislation may be required to address these gaps. Crucial to this question is the need to strike a balance between ensuring that technological development does not interfere with fundamental human rights protections and avoiding overregulation of emerging technologies at an early stage, which could stifle further development.

Read more about the human rights law implications of climate engineering and neurotechnologies.

Read the report


Climate engineering through the lens of human rights law


Authored by: Ben Howkins and Julie Vinders
Reviewed by: Corinna Pannofino and Anaïs Resseguier

Article | 05 September 2022

Technological innovation can both enhance and disrupt society in various ways. It often raises complex legal questions regarding the suitability of existing laws to maximise the benefits to society, whilst mitigating potential negative consequences. Some emerging technologies even challenge us to rethink the ways in which our fundamental rights as human beings are protected by law. For example, how do your rights as an individual stack up if your local environment is affected by climate engineering activities aimed at addressing global climate change?

Whilst aimed at tackling climate change, climate engineering may in itself impact human rights in a variety of ways, typically either by way of enhancement or interference. Human rights relevant to climate engineering can be split into substantive rights – freestanding rights possessed by individuals (such as the right to life or to health) – and procedural rights, which relate to administrative procedure and the enforcement of substantive rights (such as the right to be informed or to have access to legal remedies).

Substantive human rights

The substantive rights most relevant in the context of climate engineering include the right to life, the right to a healthy environment, the right to health, the right to access food, and the right to water. Climate engineering is intended to mitigate the harms of climate change and may therefore enhance some of these substantive rights. However, it could also in itself result in serious environmental harms affecting human lives and the environment. The right to life encompasses threats to the quality and dignity of life, including those related to human health and access to food and water. Climate engineering activities have the potential to adversely, and potentially irreversibly, affect the climate in some locations, albeit unintentionally. This may affect precipitation patterns, possibly inducing drought conditions and reducing food and water security, which can directly or indirectly affect the rights to life, a healthy environment, health, food and water.

Rights related to scientific research

A subset of substantive rights relevant to climate engineering pertains to scientific research. States are required to respect the freedom indispensable for scientific research, and everyone has the right to benefit from scientific progress. Additionally, research participants are protected by various rights, including the general prohibition on torture and cruel, inhuman or degrading treatment. Thus, whilst researchers are free to develop climate engineering technologies, any testing is subject to obtaining free and informed consent from all impacted individuals. In the context of real-world climate engineering testing, however, any resultant effect on the Earth’s climate system is unlikely to be contained within a specific area, due to its global scale. This means that communities worldwide may essentially become research participants. It follows that the practical difficulties of obtaining consent from prospective research participants, as well as the adequate protection of intellectual property rights related to innovation and research, are two of the main legal challenges in relation to climate engineering technologies.

Public participation and procedural rights

On the issue of obtaining consent for climate engineering activities from all impacted individuals, procedural human rights offer a set of participation rights, which include the right to information, the right to participate in public affairs, and the right to access legal remedies. Information about climate engineering activities, particularly from public bodies, falls within the remit of the right to information. The European Court of Human Rights has assessed the right to environmental information in relation to the right to respect for private and family life, as well as the right to freedom of expression, and observed that these rights may be violated if a State fails or refuses to provide information. Furthermore, everyone has the right to engage in public affairs, including public debate and decision-making in relation to climate engineering. States should give citizens the opportunity to participate in public affairs and to exert influence through public debate and dialogue. In addition, the right to access legal remedies seeks to ensure individuals have access to legal recourse in the event of alleged human rights violations. This means that, according to international and EU law, individuals should have legal recourse if they were not adequately informed or involved in public dialogue, or if their informed consent was not obtained in relation to climate engineering activities. Individuals also have a right to recourse if their substantive rights are violated by climate engineering.

Next in TechEthos – is there a need to expand human rights law?

The study highlights the importance of bringing a human rights law perspective into the development of new and emerging technologies, including climate engineering. The human rights impact assessment helps to ensure that such technologies develop in a manner that respects human rights, while also enabling the identification of potential gaps and legal uncertainties early in the development stage. The analysis also raises the question of whether further legislation may be required to address these gaps. Crucial to this question is the need to strike a balance between ensuring that technological development does not interfere with fundamental human rights protections and avoiding overregulation of emerging technologies at an early stage, which could stifle further development.

Read more about the human rights law implications of neurotechnologies and digital extended reality.

Read the report


Exploring emerging technologies through the lens of human rights law


Authored by: Ben Howkins and Julie Vinders
Reviewed by: Corinna Pannofino and Anaïs Resseguier

News | 05 September 2022

Technological innovation can both enhance and disrupt society in various ways. It often raises complex legal questions regarding the suitability of existing laws to maximise the benefits to society, whilst mitigating potential negative consequences. Some emerging technologies even challenge us to rethink the ways in which our fundamental rights as human beings are protected by law.

For example, how do your rights as an individual stack up if your local environment is affected by climate engineering activities aimed at addressing global climate change? Or what would happen to the right not to self-incriminate if advanced neurotechnologies in the courtroom could provide insights into a defendant’s mental state? And how might digital extended reality (XR) affect online safety and the emerging rights to be online and to disconnect?

A human rights impact assessment

A recent study by the TechEthos project analysed international and European Union (EU) human rights law, including the International Bill of Human Rights, the European Convention on Human Rights (ECHR), and the Charter of Fundamental Rights of the EU (CFREU), in relation to climate engineering, neurotechnologies and digital extended reality (XR). While such legal frameworks do not explicitly mention these technologies, many of the provisions contained therein are nonetheless likely to be directly applicable. By highlighting the potential for enhancements to and interferences with various human rights, the study essentially provides a human rights impact assessment of the three technology families. It identifies some gaps and legal uncertainties, which may give rise to the need for further legislation, or at least further legal clarification in the future.

Read more about the human rights law implications of Climate Engineering, Neurotechnologies and Digital Extended Reality.


Key findings highlight implications of new and emerging technologies


Authored by: Cristina Paca
Reviewed by: Michael J. Bernstein and Anaïs Resseguier

News | 22 July 2022

New and emerging technologies may generate a range of socio-economic benefits and new opportunities. Nevertheless, their transformative potential also means that these technologies are likely to pose a number of challenging ethical and societal issues.

TechEthos chose as its focus three technology families that concern fundamental relationships between technology and the planet, the digital world, and the human body: Climate Engineering, Digital Extended Reality and Neurotechnologies.

Now at the halfway point of this three-year project, the TechEthos consortium is delighted to publish three sets of key findings that enhance our understanding of the implications of our three technology families. These results lay a strong foundation for our future efforts: to develop operational ethical guidelines or codes that bring ethical and societal values directly into the early stages of technology development.

Arriving at ethical values and principles

‘Ethics by design’ is at the core of the TechEthos approach to the ethics of new and emerging technologies. This approach involves the inclusion of a broad array of human and environmental values from the very beginning of the process of research and development of new technologies by designers, entrepreneurs, researchers, users and policy makers. A new TechEthos report identifies a range of fundamental values and principles associated with the technology families to inform the project’s ‘ethics by design’ approach and key stakeholders.

Our research team took several different roads to arrive at the values and principles of each technology family:

  • Looking at the specific techniques and devices that are characteristic of each technology family and the questions they raise – for instance, Deep Brain Stimulation and Brain Computer Interfaces are both examples of neurotechnology techniques,
  • Considering the key applications of the technologies, in areas such as training and education, social relations, medicine and diagnostics, among others, and
  • Unravelling the arguments behind the core ethical dilemmas that have marked each technology family.


Three roads to arrive at values and principles. Illustration from the report ‘Analysis of Ethical Issues’.

Finally, for each value and principle identified and explained, the report outlines possible mitigating strategies and provides a set of questions that designers, policy makers and technology users might consider to reflect said values and principles.

Read the report

In the media

Media both reflect and shape public perceptions of technologies and, as such, give important indications of these perceptions. A media analysis was carried out by the TechEthos project team in 13 countries, focusing on news stories from key online news outlets published in 2020 and 2021 and using state-of-the-art computational tools.

The media analysis allowed us to take the pulse of the media landscape in these countries and understand which technology families, specific technologies and ethical, legal and social issues received more widespread media coverage, and the nature of that coverage. In a majority of countries, Digital Extended Reality was the most discussed in news stories, especially through the prism of Virtual Reality. There were exceptions, such as Germany and Austria, where Climate Engineering was a topical issue.

Word cloud of climate engineering news stories mentioning ethical, legal and social issue keywords, for Sweden. The top words in the cloud are ‘vätgas’ (hydrogen), ‘koldioxid’ (carbon dioxide), ‘EU’, ‘utsläpp’ (emissions), ‘klimatkrisen’ (climate crisis) and ‘minska’ (reduce). Illustration from the report ‘Results of media analysis’.
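
As a purely illustrative aside, the sketch below shows one way a keyword-filtered word cloud like the one above might be produced. It is not the project’s actual analysis pipeline: the toy corpus, the keyword list and the choice of the open-source Python wordcloud library are assumptions made only for the example.

```python
# Illustrative sketch only – not the TechEthos pipeline. The corpus, keyword
# list and library choice (the open-source "wordcloud" package) are assumptions.
import re
from collections import Counter

from wordcloud import WordCloud  # pip install wordcloud

# Hypothetical inputs: a handful of news-story texts and a small set of
# ethical/legal/social issue keywords (Swedish terms taken from the figure above).
stories = [
    "EU vill minska utsläpp genom vätgas och lagring av koldioxid ...",
    "Klimatkrisen driver debatten om utsläpp och koldioxid ...",
]
issue_keywords = {"vätgas", "koldioxid", "eu", "utsläpp", "klimatkrisen", "minska"}

# Count keyword occurrences across stories that mention at least one keyword.
counts = Counter()
for story in stories:
    tokens = re.findall(r"\w+", story.lower())
    if issue_keywords & set(tokens):
        counts.update(t for t in tokens if t in issue_keywords)

# Render the keyword frequencies as a word cloud image.
cloud = WordCloud(width=800, height=400, background_color="white")
cloud.generate_from_frequencies(counts)
cloud.to_file("climate_engineering_keywords_sweden.png")
```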

The report also revealed that media representations of technologies were often linked, for better or for worse, to notable individuals and their initiatives. This was, for example, the case for Neurotechnologies, wherein 35% of the stories collected referenced Elon Musk and his company, Neuralink.

Read the report

Legal issues in international & EU law

TechEthos has reviewed international and EU laws and policies for their relevance to our three technology families. While no comprehensive or dedicated laws were found to govern them, a number of legal frameworks do set out relevant obligations for nation states, as members of the international community or of the European Union, and give certain rights to private individuals.

To begin to identify the relevant legal issues, our research partners looked into a set of key questions – ‘What are the relevant objects?’, ‘What actions are done or not done?’, ‘Who is involved or impacted by the action?’ and ‘Where does the action take place?’.

Given the broad range of answers that each technology family implies, our recently published report touches on human rights law, rules on state responsibility, environmental law, climate law, space law, law of the seas, privacy and data protection law, consumer rights law, and the law related to artificial intelligence, digital services and data governance.

Private individuals and entities also face obligations from national legal frameworks in areas related to our three technology families. This is the subject of an upcoming report due later in 2022. The gaps and challenges in existing legal frameworks identified by this work will form the basis for our legal and policy recommendations which are expected in the final year of the project.

Read the report

What’s next?

The three sets of results, complemented by further insights from our ongoing societal engagement actions, lay the foundation for the second half of the project when TechEthos will work on enhancing ethical guidelines and codes for people working in research and development in the area of our three technology families. They provide not just the technical building blocks of this work but also widen our perspectives on the role and framing of those guidelines.
