
TechEthos named among World-Leading Ethical Practices


News | 17 June 2024

TechEthos has been named one of the world’s leading ethical practices in the newest CORDIS Results Pack on ethics and integrity in research, released by the European Commission. This recognition highlights our commitment to ensuring that scientific and technological progress aligns with societal values and ethical standards.

The publication, titled CORDIS Results Pack on Ethics and Integrity: Building Bridges for Trust and Excellence in Research and Innovation, was published by the European Research Executive Agency (REA) in cooperation with the European Commission’s DG RTD. It highlights the results of eight projects funded under Horizon Europe and the Horizon 2020 Science with and for Society (SwafS) programme. These projects aim to rethink research governance systems to ensure that scientific and technological advancements are in harmony with ethical values.

The featured projects, including TechEthos, illustrate how the EU is actively promoting training, education, and capacity-building regarding research integrity principles. They also support initiatives that analyse the ethical dimensions and implications of emerging technologies.

The EU-funded TechEthos project provides guidance for the development and deployment of critical new technologies. While emerging technologies often bring significant social, economic, and environmental benefits, their development and use can also raise substantial ethical concerns and questions. For example, the rapid adoption of new technologies might lead to widespread job losses, necessitating worker reskilling, or it could open up new vulnerabilities and data breaches for cybercriminals to exploit.

To address these concerns, prioritising ethics and societal values in the design, development, and deployment of new technologies is essential. The TechEthos project focuses on providing guidance on achieving this balance.

“For the first six months, we analysed and identified new and emerging technologies with high economic and ethical relevance,” explains Project Coordinator Eva Buchinger, AIT Austrian Institute of Technology. “We ended up focusing on three areas of innovation that interact with the planet, with the digital world, and with the body.”

TechEthos is proud to be part of this vital initiative and looks forward to continuing its work promoting ethical practices in technology development. For more details, read the full CORDIS Results Pack on ethics and integrity in research and discover the projects’ resources.


ALLEA publishes 2023 revised edition of The European Code of Conduct for Research Integrity


Authored by: Mathijs Vleugel (ALLEA)
Reviewed by: Greta Alliaj (Ecsite)

News | 13 September 2023

On 23 June 2023, ALLEA released the 2023 revised edition of “The European Code of Conduct for Research Integrity”, which takes account of the latest social, political, and technological developments, as well as trends emerging in the research landscape. These revisions took place in the context of the EU-funded TechEthos project, with the aim of also identifying gaps and necessary additions related to the integration of ethics in research protocols and the possible implications of new technologies and their applications.

Together, these changes help ensure that the European Code of Conduct remains fit for purpose and relevant to all disciplines, emerging areas of research, and new research practices. As such, the European Code of Conduct can continue to provide a framework for research integrity to support researchers, the institutions in which they work, the agencies that fund them, and the journals that publish their work.

The Chair of the dedicated Code of Conduct Drafting Group, Prof. Krista Varantola, launched the new edition under the auspices of ALLEA’s 2023 General Assembly in London, presenting the revised European Code of Conduct to delegates of ALLEA Member Academies in parallel with its online release to the wider research community.

The 2023 revised edition

The revisions in the 2023 edition of the European Code of Conduct echo an increased awareness of the importance of research culture in enabling research integrity and implementing good research practices and place a greater responsibility on all stakeholders for observing and promoting these practices and the principles that underpin them. It likewise accommodates heightened sensibilities in the research community to mechanisms of discrimination and exclusion and the responsibility of all actors to promote equity, diversity, and inclusion.

The revised European Code of Conduct also takes account of changes in data management practices, the General Data Protection Regulation (GDPR), as well as recent developments in Open Science and research assessment. Meanwhile, Artificial Intelligence tools and social media are radically changing how research results are produced and communicated, and the revised European Code of Conduct reflects the challenges these technologies pose to upholding the highest standards of research integrity.

The revisions process

From early 2022, the Drafting Group, consisting of members of the ALLEA Permanent Working Group on Science and Ethics, set about exploring what changes would be needed to update the 2017 edition of the European Code of Conduct to ensure it reflects the current views on what are considered good research practices. Their work culminated in October 2022 in a draft revised document being sent for consultation to leading stakeholder organisations and projects across Europe, including representative associations and organisations for academia, publishers, industry, policymaking, and broader societal engagement.

The response to this stakeholder consultation was exceptional, indicating a sense of ownership and engagement with the European Code of Conduct amongst the research community. As part of this stakeholder consultation process, the views of the TechEthos consortium partners were collected both in writing and during an online workshop.

All feedback was captured and discussed in detail in February 2023 by the Drafting Group. A summary of the stakeholder feedback process and how this informed the 2023 revision can be found at: https://allea.org/code-of-conduct/.


TechEthos Policy Event: Ethics for the Green and Digital transition

Event | 14 November 2023

The TechEthos project presents its final one-day policy event in Brussels on the ethical governance of emerging technologies for the green and digital transition.

Interested in attending the event?

Event description

TechEthos will hold an in-person policy event in Brussels, Belgium, on 14 November 2023, co-hosted by Barbara Thaler, Member of the European Parliament and of the STOA Panel. The event will bring together high-level experts, including EU policymakers, academic researchers, and industry representatives, to discuss the ethical governance of emerging technologies in the digital transformation and green transition.

The morning session will focus on ethics for the digital transformation, whilst the afternoon programme is dedicated to ethics for the green transition (see full programme below). Both the morning and afternoon programmes feature keynote speeches and expert panel discussions on the ethical governance of emerging technologies in the digital and green transition. This conference will tap into ongoing ethical debates as well as current and anticipated EU policy developments such as the proposed AI Act, the implementation of the Digital Services Act and Digital Markets Act, the European Green Deal, and the EC proposal for a Carbon Removal Certification Framework.

Whether you’re a policymaker, industry professional, researcher, or simply interested in the ethical implications of the green and digital transition, this event offers a unique opportunity to learn from experts, engage in meaningful discussions, and network with like-minded individuals.

Don’t miss out on this exciting event! Mark your calendars and join us for a day of learning, collaboration, and exploration.

Details

Event date: Tuesday, 14 November 2023

Location: Sparks meeting centre, 60 rue Ravenstein – 1000 Brussels, Belgium and online

Event facilitator: Vivienne Parry

Draft programme


10:00–10:30 | Registration & networking coffee
10:30–10:50 | Welcome. Opening remark: Barbara Thaler, MEP & STOA member. Introductory statement: Joanna Drake, Deputy Director General at DG for Research and Innovation
10:50–11:00 | TechEthos in a nutshell: Eva Buchinger, TechEthos Coordinator

Ethics for the digital transformation

11:00–11:45 | Keynote: Laura Weidinger, Senior Research Scientist at DeepMind
11:45–12:15 | Coffee break
12:15–13:15 | Panel discussion on key ethical, social and regulatory challenges of Digital Extended Reality
13:15–14:15 | Networking lunch

Ethics for the green transition

14:15–15:00 | Keynote: Behnam Taebi, Full Professor of Energy & Climate Ethics at Delft University of Technology
15:00–15:15 | Coffee break
15:15–16:15 | Panel discussion on key ethical, social and regulatory challenges of Climate Engineering

Highlights & outlook for the ethical governance of emerging technologies

16:15–16:45 | TechEthos in the larger context of the ALLEA Code of Conduct: Maura Hiney (UCD Institute for Discovery). Legacies, foundation and continuation: Eva Buchinger (AIT), Laurence Brooks (University of Sheffield), Renate Klar (EUREC)
16:45 | Adjourn


Questions?

Get in touch

Greta Alliaj
Ecsite – European Network of Science Centres and Museums

galliaj@ecsite.eu


Can you change the world with 12.5 euros a day?


Authored by: Ivan Yamshchikov
Reviewed by: Greta Alliaj and Cristina Paca

Opinion Piece | 07 April 2023

On 18 January 2023, Time Magazine published a story that put ChatGPT back in the news. Anybody interested in Artificial and Natural Intelligence couldn’t miss a headline like that: “Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic”. If you have not read this piece, please do. Here’s the link. You need to know and think about it, since you use Artificial Intelligence daily, and that will hardly change in the foreseeable future. For example, the text you are reading right now might have appeared in your newsfeed because a “recommendation algorithm” found it for you, or because you found it while searching for something online. Somebody developed and trained these algorithms, while somebody else labelled the data for that training.

This is not the first Time Magazine publication about the people who tend to remain invisible whenever AI is mentioned: data labellers. In February 2022, Time published another great piece, “Inside Facebook’s African Sweatshop”. You can probably get the main message from the headlines, but I encourage you to read both articles before returning to this text. I know it’s a lot to ask – we live with constantly divided attention, worsened by a permanent time deficit. Nevertheless, it’s time well spent.

I hope you followed my advice, but in case you did not, here are several vital facts. Big Tech outsources data labelling to countries with lower levels of income. Data labellers have to deal with horrible content. I cannot put it mildly. The articles mention “sexual abuse, bestiality, rape, sexual slavery, graphic detail of death, violence, or serious physical injury”. People who label this content get around 1.5 dollars an hour, working at least nine hours daily. Their mental health suffers, and they do not always get proper counselling. Let these facts sting, because they should. If you are reading this from the comfort of your home or office desk, such things take time to sink in.

One of my favourite books of all time is “Factfulness” by the late Hans Rosling. I remember seeing his talk “The best stats you’ve ever seen” and feeling an incredible surge of hope. I read “Factfulness”, published after he passed away, and learned one important lesson: context matters. We are used to the context we live in. It includes every little detail of our daily routine, from the price of coffee to our vacation plans. It defines what we find funny and what we find offensive. If you want to understand something, you have to put it into context. If you want to understand something far from your daily experience, you must try to reconstruct the context relevant to the issue at hand. We, as a species, are terrible at this task. Yet we make swift moral judgments that might affect our decisions. Moreover, we make moral judgments predicated on our daily experience and rarely consider their consequences for people who live lives very different from ours. Since both articles mentioned Kenya, let’s talk about it.

You have probably heard about the Big Mac index, one of the ways to estimate purchasing power in different regions of the world. I could not find the Big Mac index for Kenya, but thanks to McDonald’s eternal nemesis, Burger King, I managed to find a Whopper index, which is fine with me. A Whopper costs 590 Kenyan shillings, approximately 4.35 euros. A Whopper in the local Burger King in Leipzig will cost me 8.69 euros. Nine hours a day at 1.5 dollars per hour makes something like 12 euros and 50 cents daily for Kenyan data labellers. If we adjust that salary for Kenyan purchasing power (assuming that the same amount of money buys you twice as many Whoppers in Nairobi as in Berlin), we get 25 euros a day. You can tell me that 25 euros is still not a lot of money, especially if a person has to earn it at the price of their mental health. What difference does it make? The difference is that now you might better understand the data labellers’ actual condition. What if we add one more data point to this context?
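The back-of-the-envelope adjustment above can be sketched in a few lines. The figures are the ones quoted in this piece; the dollar-to-euro conversion is the article's own rough approximation, not an exchange-rate calculation:

```python
# Rough purchasing-power adjustment using the article's "Whopper index".
# All figures come from the text; the EUR/USD conversion is approximate.

whopper_eur_leipzig = 8.69  # Whopper price in Leipzig, in euros
whopper_eur_nairobi = 4.35  # Whopper price in Nairobi (590 KES), in euros

daily_wage_eur = 12.50      # ~9 hours at ~$1.50/hour, roughly 12.50 euros

# Ratio of local prices: how much further the same money goes in Nairobi.
ppp_factor = whopper_eur_leipzig / whopper_eur_nairobi  # ~2.0

adjusted_wage = daily_wage_eur * ppp_factor  # ~25 euros a day
print(f"PPP-adjusted daily wage: {adjusted_wage:.2f} euros")
```

Swapping in other local prices gives the same kind of rough comparison; this is an illustration of the reasoning, not an economic model.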

The United Nations’ World Food Programme estimates that between October and December 2022, almost three and a half million people in Kenya were facing emergency levels of food scarcity. Five per cent of the country’s population is “in urgent need of food assistance”. 25 euros buys you more than eight kilos of rice in Germany. When a human being has to choose between starving and watching harmful content nine hours a day, the choice is a no-brainer.

How about some more facts to put the news into perspective? At 1.5 dollars an hour, a nine-hour day and a five-day work week pay a salary comparable to the one that members of the “most trained and highly skilled with tactics” police get in Kenya. According to data by Payscale, it is approximately what an administrative assistant or office administrator makes in Nairobi. It is also approximately a quarter of what a Member of a County Assembly makes.

Yes, 1.5 dollars an hour is a meagre wage. Yes, data labellers should be entitled to help from mental health practitioners. Yes, we can do better, and we need high-quality journalists to tell these stories. We need to know about them. However, to do better globally, we also need to put these stories into perspective and remember that the only thing that makes people choose between mental health and starvation is poverty. And nothing in known human history has lifted the global population out of poverty faster than an alliance of science, technology, and free-market competition. How do we balance these aspects? Can we ensure that the benefits of AI improve the human condition globally and eventually change the economic situation for data labellers as well?

One possible path forward is to use regulation on the developed markets to encourage global collaboration but under two major conditions. First, we need to advocate for fair wages and mental health support. Companies should be held accountable for the working conditions of their employees, regardless of where they are based. Second, we can invest a part of AI’s productivity surplus globally into education and training. Developing local talent is crucial for economic growth. This investment in education should be understood in a broader sense. Companies need global talent, but we also need global founders. Modern education should encourage innovation and entrepreneurship. By investing in education and vocational training, countries can create a skilled workforce that attracts higher-paying jobs and reduces poverty. If we do it right, we can change the world with twelve euros and fifty cents a day. It will be a long and bumpy ride, but it’s worth trying. 


TechEthos game workshops: exploring public awareness & attitudes


Authored by: Greta Alliaj
Reviewed by: Cristina Paca

Article | 30 January 2023

What might a world in which technologies like the metaverse or neuroimaging have reached their full potential look like? What would it be like to live in a reality where such technologies are deployed in the most diverse fields, from education to justice, passing through marketing and entertainment? Imagine a world where neuroimaging is used to diagnose predispositions to certain neurological diseases. Such diagnosis could allow health professionals to better prevent a disease or decrease its impact on the patient, but at the same time, it could take a toll on people’s personal and professional relationships. Would you be in favour of implementing this technology?

Last autumn, hundreds of citizens across Europe took part in the TechEthos Science Cafés and engaged with scientists, innovators and civil society representatives to learn more about our three families of technologies. Now, the six science engagement organisations involved in the project are ready to build on this experience and invite all technology enthusiasts out there to play our new board game: the TechEthos game, Ages of Technology Impacts.

The TechEthos game is part of a longer workshop aimed at exploring the public’s attitudes towards Digital Extended Reality, Neurotechnologies and Climate Engineering. Participants will be invited to sit on their regional delegation to the Citizen World Council and decide what may be best for future generations and the Planet. Participants will forge the future starting from a set of technologies whose potential is not yet fully realised and each of their choices will have unforeseeable consequences.

Each round, players will be asked to discuss and agree on which technologies they would like to see further developed in their ideal future and, to do so, they will be confronted with the ethical implications of these choices. What will be the values and principles that will guide their decisions?

Throughout the different activities of the workshop, participants will have the opportunity to listen and learn from each other, express their concerns and defend their beliefs. This exchange will provide the project with insights into public attitudes and views on new and emerging technologies.

Eighteen game workshops will take place in Austria, the Czech Republic, Romania, Serbia, Spain, and Sweden. To capture a broader and richer perspective, the six science engagement centres will collaborate with associations supporting groups whose access to such activities is often hindered by economic and social factors.

Developed in co-creation with science engagement and game experts, the TechEthos game is essential in capturing ethical and societal values. This moves us closer to the project’s end goal: producing ethics guidelines that consider such values in the earliest phases of technology design and development.

Would you be interested in taking part in the conversation and shaping your ideal world? Have a look at our game resource page, keep an eye on the activities of TechEthos science engagement centres and follow us on Twitter and LinkedIn.

More about the game


Moral Equivalence in the Metaverse


Publication | 17 November 2022

In short

This scientific paper dives into the question “Are digital subjects in virtual reality morally equivalent to human subjects?”, from the perspective of cognitive and emotional equivalence. It builds on TechEthos’ analysis of ethical issues concerning Digital Extended Reality and expands significantly on the question of moral transfer, including themes of identity, action, responsibility, and imitating human language and appearance.

Authors

Alexei Grinbaum, Commissariat à l’Énergie Atomique et aux Énergies Alternatives (CEA), and Laurynas Adomaitis, CEA.

Date of publication

11 October 2022

Cite this paper

Grinbaum, A., & Adomaitis, L. (2022). Moral Equivalence in the Metaverse. NanoEthics, 16, 257–270. https://doi.org/10.1007/s11569-022-00426-x


Neurotechnologies through the lens of human rights law


Authored by: Ben Howkins and Julie Vinders
Reviewed by: Corinna Pannofino and Anaïs Resseguier

Article | 06 October 2022

Technological innovation can both enhance and disrupt society in various ways. It often raises complex legal questions regarding the suitability of existing laws to maximise the benefits to society, whilst also mitigating potentially harmful consequences. Some emerging technologies even challenge us to rethink the ways in which our fundamental rights as human beings are protected by law. What happens, for example, to the right to not self-incriminate if advanced neurotechnologies in the courtroom can provide insights into a defendant’s mental state?

As a means of directly accessing, analysing, and manipulating the neural system, neurotechnologies have a wide range of applications, presenting the potential for both enhancements to and interferences with protected human rights. In an educational context, insights into how the brain works during the learning process, gained from neuroscience research and the use of neurotechnologies, may lead to more effective teaching methods and improved outcomes linked to the right to education. This may enhance the rights of children, particularly those with disabilities, yet more research is required to assess the potential for long-term impacts on brain development. The (mis)use of neurotechnologies in the workplace, meanwhile, may negatively impact an individual’s right to rest and leisure by increasing workload, or instead positively enhance this right by improving efficiency and creating more time and varied opportunities for rest and leisure.

Neurotechnologies and medical treatment

The primary application of neurotechnologies is in the clinical context, both as a means of improving understanding of patients’ health and as a means of administering clinical treatment, the effects of which have the potential to enhance various protected human rights in conjunction with the right to health. For example, neurotechnologies may facilitate communication in persons whose verbal communication skills are impaired, the benefits of which are directly linked to the right to freedom of expression. Additionally, neurotechnologies may be used to diagnose and treat the symptoms of movement disorders such as Parkinson’s disease, thereby potentially enhancing the rights to dignity and autonomy of persons with disabilities.

However, the clinical use of neurotechnologies requires compliance with legal and bioethical principles such as consent and the right to refuse treatment, without which the protected rights of users may be interfered with. A particular concern is that the clinical use of neurotechnologies may lead to infringements of the right to non-discrimination, specifically in the form of neurodiscrimination, whereby insights from brain data processed by neurotechnologies form the basis of differential treatment between individuals, for instance in insurance and employment contexts. From this a key consideration emerges: whether brain data is adequately protected by the existing right to privacy, or whether there is a need for a putative right to mental privacy, amongst a range of novel human rights protections, including a right to cognitive liberty, a right to mental integrity and a right to psychological continuity. The essential premise behind these proposed ‘neurorights’ is that the existing human rights framework needs revising to ensure individuals are adequately protected against certain neuro-specific interferences, including the proposed ‘neurocrime’ of brain-hacking.

Neurotechnologies and the legal system

Neurotechnologies are also increasingly being used in the justice system, where they may enhance an individual’s right to a fair trial, for instance by ‘establishing competency of individuals to stand trial’ and informing rules on the appropriate ‘age of criminal responsibility’. However, the use of neurotechnologies may also interfere with access to justice and the right to a fair trial. For example, advanced neurotechnologies capable of gathering data on mental states consistent with one’s thoughts and emotions risk interfering with the right to the presumption of innocence or the privilege against self-incrimination. An additional consideration in this context is the right of individuals to choose, or opt against, benefitting from scientific progress; its relevance is that individuals cannot be compelled by States to use neurotechnologies, except in certain limited circumstances determined by law. The enforced use of neurotechnologies in justice systems could therefore interfere with the right to opt against “benefitting” from scientific progress, as well as the right to a fair trial and access to justice.

Neurotechnologies and future human rights challenges

Finally, whilst this study has highlighted the ways in which neurotechnologies may already affect the enjoyment of fundamental human rights, the potential for enhancements to and interferences with these protected rights may increase as the technological state of the art progresses. For example, although primarily contemplated within the realm of science fiction, the future use of neurotechnologies may challenge the strict dichotomy between ‘life’ and ‘death’ by enabling ‘neurological functioning’ to be sustained independently of other bodily functions. This may affect States’ obligations to ensure the full enjoyment of the right to life, while also raising questions around the appropriate regulation of commercial actors seeking to trade on the promise of supposed immortality.

Next in TechEthos – is there a need to expand human rights law?

The study highlights the importance of bringing a human rights law perspective into the development of neurotechnologies. The human rights impact assessment is a mechanism designed to help ensure that new and emerging technologies, including neurotechnologies, develop in a manner that respects human rights, while also enabling the identification of potential gaps and legal uncertainties early on in the development stage. The analysis also raises the question as to whether further legislation may be required to address these gaps. Crucial to this question is the need to strike a balance between ensuring technological development does not interfere with fundamental human rights protections and avoiding overregulating emerging technologies at an early stage and thereby stifling further development.

Read more about the human rights law implications of climate engineering and digital extended reality.

Read the report


Digital extended reality through the lens of human rights law


Authored by: Ben Howkins and Julie Vinders
Reviewed by: Corinna Pannofino and Anaïs Resseguier

Article | 06 October 2022

Technological innovation can both enhance and disrupt society in various ways. It often raises complex legal questions regarding the suitability of existing laws to maximise the benefits to society, whilst also mitigating potentially harmful consequences. Some emerging technologies even challenge us to rethink the ways in which our fundamental rights as human beings are protected by the law. For instance, how might digital extended reality (XR) affect online safety and the emerging rights to be online and to disconnect?

XR technologies have an assortment of uses and applications, from gaming and filmmaking to healthcare and education. Each use case of XR creates the potential for enhancements to and interferences with various human rights, including new and emerging rights, such as the right to a healthy environment, a right to disconnect and a right to be online.

The use of XR gaming applications, for instance, is consistent with the right to benefit from scientific progress and may enhance the right to rest and leisure of all users. It may benefit persons with disabilities in particular, whose right to autonomy, for instance, may be enhanced by being able to access leisure experiences perhaps otherwise unattainable in the physical world. However, the use of XR gaming applications may also lead to increased incidences of cyberbullying, harassment, and virtual sexual assault, the experiencing of which may interfere with the realisation of the rights of women and children, in particular.

XR and possible use cases

In addition to particular use cases, there are also a variety of contexts in which the use of XR technologies may lead to both positive and negative impacts on the realisation of fundamental rights. In the clinical context, for instance, XR may enhance the right of healthcare professionals to just and favourable conditions of work when used to provide low-risk, hyper-realistic training experiences designed to improve overall healthcare provision. For patients, meanwhile, the clinical use of XR may lead to benefits linked to the right to health. Such applications may also enhance other protected rights, with the use of XR technologies to treat psychological trauma, for instance, potentially enhancing the right to dignity of victims of criminal offences. There is a risk, however, that the use of XR in a clinical setting could interfere with these protected rights, for instance if patients experience short or long-term health-related harms through the use of XR, such as motion sickness and depersonalisation / derealisation disorder.

Developing XR in accordance with human rights

In an educational context, the use of XR technologies may lead to improved learning outcomes linked to the right to education, including by accommodating specific educational needs, the benefits of which relate to the enjoyment of the rights of persons with disabilities on the basis of non-discrimination. Similarly, the incorporation of XR into the judicial system may enhance an individual’s right to a fair trial by improving the accessibility of legal proceedings, enabling evidential recreation of events, and helping to provide legal professionals with anti-bias training in order to maintain fairness. In both contexts, however, there is also a risk that the use of XR may lead to interferences with these rights, particularly if adopted without consideration of the potential drawbacks.

As such, the use of XR as an educational tool, for instance, should be informed by research on information overload and the possible effects on brain and neurological development in children, without which potentially safer and more effective teaching measures may be deprioritised or underfunded. Likewise, the use of XR in legal proceedings should be guided by factors including the suitability of virtual participation and the accessibility of XR technology. The right of access to justice may otherwise be undermined, for instance by serving to promote a form of participation of inferior type or quality in comparison to in-person participation, or by exacerbating existing accessibility issues faced by disadvantaged parties.

XR and future human rights challenges

There are certain human rights considered in this study for which XR technologies may enhance enjoyment while also raising challenging issues which fall short of constituting interferences. In relation to the right to freedom of expression, for instance, XR applications may facilitate new forms of creative expression in a variety of mediums, including music, narrative storytelling, and art. Yet there are also concerns related to the appropriate treatment of content in XR depicting violence, pornography, hate speech and mis/disinformation. This creates a tension between the right of everyone to freedom of expression and the obligation on States to protect users of XR from potentially harmful content and interferences with other fundamental rights. In seeking to resolve this conflict, States are required to strike a balance between unrestricted freedom and legitimate limitations.

Next in TechEthos – is there a need to expand human rights law?

The study highlights the importance of bringing a human rights law perspective into the development of new and emerging technologies, including XR. The human rights impact assessment is a mechanism designed to help ensure that such technologies develop in a manner that respects human rights, while also enabling the identification of potential gaps and legal uncertainties early on in the development stage. The analysis also raises the question as to whether further legislation may be required to address these gaps. Crucial to this question is the need to strike a balance between ensuring technological development does not interfere with fundamental human rights protections and avoiding overregulating emerging technologies at an early stage and thereby stifling further development.

Read more about the human rights law implications of climate engineering and neurotechnologies.

Read the report



Climate engineering through the lens of human rights law

05 September 2022

Authored by: Ben Howkins and Julie Vinders
Reviewed by: Corinna Pannofino and Anaïs Resseguier

Article | 05 September 2022

Technological innovation can both enhance and disrupt society in various ways. It often raises complex legal questions regarding the suitability of existing laws to maximise the benefits to society, whilst mitigating potential negative consequences. Some emerging technologies even challenge us to rethink the ways in which our fundamental rights as human beings are protected by law. For example, how do your rights as an individual stack up if your local environment is affected by climate engineering activities aimed at addressing global climate change?

Whilst aimed at tackling climate change, climate engineering may in itself impact human rights in a variety of ways, typically either by way of enhancement or interference. Human rights relevant to climate engineering can be split between substantive rights – which are freestanding rights possessed by individuals (such as the right to life, or health) – and procedural rights, which relate to administrative procedure and enforcement of substantive rights (such as the right to be informed or have access to legal remedies).

Substantive human rights

The substantive rights most relevant in the context of climate engineering include the right to life, the right to a healthy environment, the right to health, the right to access food, and the right to water. Climate engineering is intended to mitigate the harms of climate change and may therefore enhance some of these substantive rights. However, it could also in itself result in serious environmental harms affecting human lives and their environment. The right to life encompasses threats to the quality and dignity of life, including those related to human health and access to food and water. Climate engineering activities have the potential to adversely, and perhaps irreversibly, affect the climate in some locations, albeit unintentionally. This may affect precipitation patterns, possibly inducing drought conditions and reducing food and water security, which can either directly or indirectly affect the right to life, a healthy environment, health, food and water.

Rights related to scientific research

A subset of substantive rights relevant to climate engineering pertains to scientific research. States are required to respect the freedom indispensable for scientific research and everyone has the right to benefit from scientific research. Additionally, research participants are protected by various rights, including the general prohibition on torture and cruel, inhuman or degrading treatment. Thus, whilst researchers are freely able to develop climate engineering technologies, any testing is subject to obtaining free and informed consent from all impacted individuals. In the context of real-world climate engineering testing, however, any resultant effect to the Earth’s climate system is unlikely to be contained within a specific area, due to its global scale. This means that communities worldwide may essentially become research participants. It follows that the practical difficulties of obtaining consent from prospective research participants, as well as the adequate protection of intellectual property rights related to innovation and research, are two of the main legal challenges in relation to climate engineering technologies.

Public participation and procedural rights

On the issue of obtaining consent for climate engineering activities from all impacted individuals, procedural human rights offer a set of participation rights, which include the right to information, the right to participate in public affairs, and the right to access legal remedies. Information about climate engineering activities, particularly from public bodies, falls within the remit of the right to information. The European Court of Human Rights has assessed the right to environmental information in relation to the right to respect for private and family life, as well as the right to freedom of expression, and observed that these rights may be violated if a State fails or refuses to provide information. Furthermore, everyone has the right to engage in public affairs, including public debate and decision-making in relation to climate engineering. States should give citizens the possibility to participate in public affairs and exert influence through public debate and dialogue. In addition, the right to access legal remedies seeks to ensure individuals have access to legal recourse in the event of alleged human rights violations. This means that, according to international and EU law, individuals should have legal recourse if they were not adequately informed, involved in public dialogue, or their informed consent was not obtained in relation to climate engineering activities. Individuals also have a right to recourse if their substantive rights are violated by climate engineering.

Next in TechEthos – is there a need to expand human rights law?

The study highlights the importance of bringing a human rights law perspective into the development of new and emerging technologies, including climate engineering. The human rights impact assessment helps to ensure that such technologies develop in a manner that respects human rights, while also enabling the identification of potential gaps and legal uncertainties early on in the development stage. The analysis also raises the question as to whether further legislation may be required to address these gaps. Crucial to this question is the need to strike a balance between ensuring technological development does not interfere with fundamental human rights protections and avoiding overregulating emerging technologies at an early stage, thereby stifling further development.

Read more about the human rights law implications of neurotechnologies and digital extended reality.

Read the report



Exploring emerging technologies through the lens of human rights law

05 September 2022

Authored by: Ben Howkins and Julie Vinders
Reviewed by: Corinna Pannofino and Anaïs Resseguier

News | 5 September 2022

Technological innovation can both enhance and disrupt society in various ways. It often raises complex legal questions regarding the suitability of existing laws to maximise the benefits to society, whilst mitigating potential negative consequences. Some emerging technologies even challenge us to rethink the ways in which our fundamental rights as human beings are protected by law.

For example, how do your rights as an individual stack up if your local environment is affected by climate engineering activities aimed at addressing global climate change? Or what would happen to the right to not self-incriminate if advanced neurotechnologies in the courtroom can provide insights into a defendant’s mental state? And how might digital extended reality (XR) affect online safety and the emerging rights to be online and to disconnect?

A human rights impact assessment

A recent study by the TechEthos project analysed international and European Union (EU) human rights law, including the International Bill of Human Rights, the European Convention on Human Rights (ECHR), and the Charter of Fundamental Rights of the EU (CFREU), in relation to climate engineering, neurotechnologies and digital extended reality (XR). While such legal frameworks do not explicitly mention these technologies, many of the provisions contained therein are nonetheless likely to be directly applicable. By highlighting the potential for enhancements to and interferences with various human rights, the study essentially provides a human rights impact assessment of the three technology families. It identifies some gaps and legal uncertainties, which may give rise to the need for further legislation, or at least further legal clarification in the future.

Read more about the human rights law implications of Climate Engineering, Neurotechnologies and Digital Extended Reality.
