
ALLEA publishes 2023 revised edition of The European Code of Conduct for Research Integrity


Authored by: Mathijs Vleugel (ALLEA)
Reviewed by: Greta Alliaj (Ecsite)

News | 13 September 2023

On 23 June 2023, ALLEA released the 2023 revised edition of “The European Code of Conduct for Research Integrity”, which takes account of the latest social, political, and technological developments, as well as emerging trends in the research landscape. The revisions took place in the context of the EU-funded TechEthos project, which also aimed to identify gaps and necessary additions related to the integration of ethics in research protocols and the possible implications of new technologies and their applications.

Together, these changes help ensure that the European Code of Conduct remains fit for purpose and relevant to all disciplines, emerging areas of research, and new research practices. As such, the European Code of Conduct can continue to provide a framework for research integrity to support researchers, the institutions in which they work, the agencies that fund them, and the journals that publish their work.

The Chair of the dedicated Code of Conduct Drafting Group, Prof. Krista Varantola, launched the new edition under the auspices of ALLEA’s 2023 General Assembly in London, presenting the revised European Code of Conduct to delegates of ALLEA Member Academies in parallel with its online release to the wider research community.

The 2023 revised edition

The revisions in the 2023 edition of the European Code of Conduct reflect an increased awareness of the importance of research culture in enabling research integrity and implementing good research practices, and they place greater responsibility on all stakeholders for observing and promoting these practices and the principles that underpin them. The new edition likewise accommodates heightened sensitivity in the research community to mechanisms of discrimination and exclusion, and the responsibility of all actors to promote equity, diversity, and inclusion.

The revised European Code of Conduct also takes account of changes in data management practices, the General Data Protection Regulation (GDPR), and recent developments in Open Science and research assessment. Meanwhile, artificial intelligence tools and social media are radically changing how research results are produced and communicated, and the revised European Code of Conduct reflects the challenges these technologies pose to upholding the highest standards of research integrity.

The revisions process

From early 2022, the Drafting Group, consisting of members of the ALLEA Permanent Working Group on Science and Ethics, set about exploring what changes would be needed to update the 2017 edition of the European Code of Conduct so that it reflects current views on what are considered good research practices. Their work culminated in October 2022, when a revised draft was sent for consultation to leading stakeholder organisations and projects across Europe, including representative associations and organisations for academia, publishers, industry, policymaking, and broader societal engagement.

The response to this stakeholder consultation was exceptional, indicating a sense of ownership and engagement with the European Code of Conduct amongst the research community. As part of this stakeholder consultation process, the views of the TechEthos consortium partners were collected both in writing and during an online workshop.

All feedback was captured and discussed in detail in February 2023 by the Drafting Group. A summary of the stakeholder feedback process and how it informed the 2023 revision can be found at: https://allea.org/code-of-conduct/.


TechEthos game workshops: exploring public awareness & attitudes

Authored by: Greta Alliaj
Reviewed by: Cristina Paca

Article | 30 January 2023

What might a world in which technologies like the metaverse or neuroimaging have reached their full potential look like? What would it be like to live in a reality where such technologies are deployed in the most diverse fields, from education and justice to marketing and entertainment? Imagine a world where neuroimaging is used to diagnose predispositions to certain neurological diseases. Such a diagnosis could allow health professionals to better prevent a disease or lessen its impact on the patient, but at the same time, it could take a toll on people’s personal and professional relationships. Would you be in favour of implementing this technology?

Last autumn, hundreds of citizens across Europe took part in the TechEthos Science Cafés and engaged with scientists, innovators, and civil society representatives to learn more about our three families of technologies. Now, the six science engagement organisations involved in the project are ready to build on this experience and invite all technology enthusiasts out there to play our new board game: The TechEthos Game: Ages of Technology Impacts.

The TechEthos game is part of a longer workshop aimed at exploring the public’s attitudes towards Digital Extended Reality, Neurotechnologies and Climate Engineering. Participants will be invited to join their regional delegation to the Citizen World Council and decide what may be best for future generations and the planet. They will forge the future starting from a set of technologies whose potential is not yet fully realised, and each of their choices will have unforeseeable consequences.

Each round, players will be asked to discuss and agree on which technologies they would like to see further developed in their ideal future and, to do so, they will be confronted with the ethical implications of these choices. What will be the values and principles that will guide their decisions?

Throughout the different activities of the workshop, participants will have the opportunity to listen and learn from each other, express their concerns and defend their beliefs. This exchange will provide the project with insights into public attitudes and views on new and emerging technologies.

Eighteen game workshops will take place in Austria, the Czech Republic, Romania, Serbia, Spain, and Sweden. To capture a broader and richer perspective, the six science engagement centres will collaborate with associations supporting groups whose access to such activities is often hindered by economic and social factors.

Developed in co-creation with science engagement and game experts, the TechEthos game plays an essential role in capturing ethical and societal values. This moves us closer to the project’s end goal: producing ethics guidelines that consider such values in the earliest phases of technology design and development.

Would you be interested in taking part in the conversation and shaping your ideal world? Have a look at our game resource page, keep an eye on the activities of TechEthos science engagement centres and follow us on Twitter and LinkedIn.

More about the game


Moral Equivalence in the Metaverse

Publication | 17 November 2022

In short

This scientific paper dives into the question “Are digital subjects in virtual reality morally equivalent to human subjects?”, from the perspective of cognitive and emotional equivalence. It builds on TechEthos’ analysis of ethical issues concerning Digital Extended Reality and expands significantly on the question of moral transfer, including themes of identity, action, responsibility, and imitating human language and appearance.

Authors

Alexei Grinbaum, Commissariat à l’Énergie Atomique et aux Énergies Alternatives (CEA), and Laurynas Adomaitis, CEA.

Date of publication

11 October 2022

Cite this paper

Grinbaum, A., & Adomaitis, L. (2022). Moral Equivalence in the Metaverse. NanoEthics, 16, 257–270. https://doi.org/10.1007/s11569-022-00426-x


Neurotechnologies through the lens of human rights law


Authored by: Ben Howkins and Julie Vinders
Reviewed by: Corinna Pannofino and Anaïs Resseguier

Article | 06 October 2022

Technological innovation can both enhance and disrupt society in various ways. It often raises complex legal questions regarding the suitability of existing laws to maximise the benefits to society, whilst also mitigating potentially harmful consequences. Some emerging technologies even challenge us to rethink the ways in which our fundamental rights as human beings are protected by law. What happens, for example, to the right to not self-incriminate if advanced neurotechnologies in the courtroom can provide insights into a defendant’s mental state?

As a means of directly accessing, analysing, and manipulating the neural system, neurotechnologies have a range of applications, presenting the potential for both enhancements to and interferences with protected human rights. In an educational context, insights into how the brain works during learning, gained from neuroscience research and the use of neurotechnologies, may lead to more effective teaching methods and improved outcomes linked to the right to education. This may enhance the rights of children, particularly those with disabilities, yet more research is required to assess the potential for long-term impacts on brain development. The (mis)use of neurotechnologies in the workplace, meanwhile, may negatively impact an individual’s right to rest and leisure by increasing workload, or instead enhance this right by improving efficiency and creating more time and varied opportunities for rest and leisure.

Neurotechnologies and medical treatment

The primary use application of neurotechnologies is in a clinical context, both as a means of improving understanding of patients’ health and as a means of administering clinical treatment, the effects of which have the potential to enhance various protected human rights in conjunction with the right to health. For example, neurotechnologies may facilitate communication in persons whose verbal communication skills are impaired, the benefits of which are directly linked to the right to freedom of expression. Additionally, neurotechnologies may be used to diagnose and treat the symptoms of movement disorders such as Parkinson’s disease, thereby potentially enhancing the rights to dignity and autonomy of persons with disabilities.

However, the clinical use of neurotechnologies requires compliance with legal and bioethical principles such as consent and the right to refuse treatment, without which the protected rights of users may be interfered with. A particular concern is that the clinical use of neurotechnologies may lead to infringements of the right to non-discrimination, specifically in the form of ‘neurodiscrimination’, whereby insights from brain data processed by neurotechnologies form the basis of differential treatment between individuals, for instance in insurance and employment contexts. From this a key consideration emerges: whether brain data is adequately protected by the existing right to privacy, or whether there is a need for a putative right to mental privacy, amongst a range of novel human rights protections, including a right to cognitive liberty, a right to mental integrity, and a right to psychological continuity. The essential premise behind these proposed ‘neurorights’ is that the existing human rights framework needs revising to ensure individuals are adequately protected against certain neuro-specific interferences, including the proposed ‘neurocrime’ of brain-hacking.

Neurotechnologies and the legal system

Neurotechnologies are also increasingly being used in the justice system, wherein they may enhance an individual’s right to a fair trial, for instance by ‘establishing competency of individuals to stand trial’ and informing rules on the appropriate ‘age of criminal responsibility’. However, the use of neurotechnologies may also interfere with access to justice and the right to a fair trial. For example, advanced neurotechnologies capable of gathering data on mental states consistent with one’s thoughts and emotions risk interfering with the presumption of innocence or the privilege against self-incrimination. An additional consideration in this context is the right of individuals to choose whether to benefit from scientific progress: individuals cannot be compelled by States to use neurotechnologies, except in certain limited circumstances determined by law. The enforced use of neurotechnologies in justice systems could therefore interfere with the right to opt against “benefitting” from scientific progress, as well as the right to a fair trial and access to justice.

Neurotechnologies and future human rights challenges

Finally, whilst this study has highlighted the ways in which neurotechnologies may already affect the enjoyment of fundamental human rights, the potential for enhancements to and interferences with these protected rights may increase as the technological state of the art progresses. For example, although primarily contemplated within the realm of science fiction, the future use of neurotechnologies may challenge the strictness of the dichotomy between ‘life’ and ‘death’ by enabling ‘neurological functioning’ to be sustained independently of other bodily functions. This may affect States’ obligations to ensure the full enjoyment of the right to life, while also raising questions around the appropriate regulation of commercial actors seeking to trade on the promise of supposed immortality.

Next in TechEthos – is there a need to expand human rights law?

The study highlights the importance of bringing a human rights law perspective into the development of neurotechnologies. The human rights impact assessment is a mechanism designed to help ensure that new and emerging technologies, including neurotechnologies, develop in a manner that respects human rights, while also enabling the identification of potential gaps and legal uncertainties early on in the development stage. The analysis also raises the question as to whether further legislation may be required to address these gaps. Crucial to this question is the need to strike a balance between ensuring technological development does not interfere with fundamental human rights protections and avoiding overregulating emerging technologies at an early stage and thereby stifling further development.

Read more about the human rights law implications of climate engineering and digital extended reality.

Read the report


Digital extended reality through the lens of human rights law


Authored by: Ben Howkins and Julie Vinders
Reviewed by: Corinna Pannofino and Anaïs Resseguier

Article | 06 October 2022

Technological innovation can both enhance and disrupt society in various ways. It often raises complex legal questions regarding the suitability of existing laws to maximise the benefits to society, whilst also mitigating potentially harmful consequences. Some emerging technologies even challenge us to rethink the ways in which our fundamental rights as human beings are protected by the law. For instance, how might digital extended reality (XR) affect online safety and the emerging rights to be online and to disconnect?

XR technologies have an assortment of uses and applications, from gaming and filmmaking to healthcare and education. Each use case of XR creates the potential for enhancements to and interferences with various human rights, including new and emerging rights, such as the right to a healthy environment, a right to disconnect and a right to be online.

The use of XR gaming applications, for instance, is consistent with the right to benefit from scientific progress and may enhance the right to rest and leisure of all users. It may benefit persons with disabilities in particular, whose right to autonomy, for instance, may be enhanced by being able to access leisure experiences perhaps otherwise unattainable in the physical world. However, the use of XR gaming applications may also lead to increased incidences of cyberbullying, harassment, and virtual sexual assault, the experiencing of which may interfere with the realisation of the rights of women and children, in particular.

XR and possible use cases

In addition to particular use cases, there are also a variety of contexts in which the use of XR technologies may lead to both positive and negative impacts on the realisation of fundamental rights. In the clinical context, for instance, XR may enhance the right of healthcare professionals to just and favourable conditions of work when used to provide low-risk, hyper-realistic training experiences designed to improve overall healthcare provision. For patients, meanwhile, the clinical use of XR may lead to benefits linked to the right to health. Such applications may also enhance other protected rights, with the use of XR technologies to treat psychological trauma, for instance, potentially enhancing the right to dignity of victims of criminal offences. There is a risk, however, that the use of XR in a clinical setting could interfere with these protected rights, for instance if patients experience short- or long-term health-related harms through the use of XR, such as motion sickness and depersonalisation/derealisation disorder.

Developing XR in accordance with human rights

In an educational context, the use of XR technologies may lead to improved learning outcomes linked to the right to education, including by accommodating specific educational needs, the benefits of which relate to the enjoyment of the rights of persons with disabilities on the basis of non-discrimination. Similarly, the incorporation of XR into the judicial system may enhance an individual’s right to a fair trial by improving the accessibility of legal proceedings, enabling evidential recreation of events, and helping to provide legal professionals with anti-bias training in order to maintain fairness. In both contexts, however, there is also a risk that the use of XR may lead to interferences with these rights, particularly if adopted without consideration of the potential drawbacks.

As such, the use of XR as an educational tool, for instance, should be informed by research on information overload and the possible effects on brain and neurological development in children, without which potentially safer and more effective teaching measures may be deprioritised or underfunded. Likewise, the use of XR in legal proceedings should be guided by factors including the suitability of virtual participation and the accessibility of XR technology. The right of access to justice may otherwise be undermined, for instance by serving to promote a form of participation of inferior type or quality in comparison to in-person participation, or by exacerbating existing accessibility issues faced by disadvantaged parties.

XR and future human rights challenges

There are certain human rights considered in this study for which XR technologies may enhance enjoyment while also raising challenging issues which fall short of constituting interferences. In relation to the right to freedom of expression, for instance, XR applications may facilitate new forms of creative expression in a variety of mediums, including music, narrative storytelling, and art. Yet there are also concerns related to the appropriate treatment of content in XR depicting violence, pornography, hate speech and mis/disinformation. This creates a tension between the right of everyone to freedom of expression and the obligation on States to protect users of XR from potentially harmful content and interferences with other fundamental rights. In seeking to resolve this conflict, States are required to strike a balance between unrestricted freedom and legitimate limitations.

Next in TechEthos – is there a need to expand human rights law?

The study highlights the importance of bringing a human rights law perspective into the development of new and emerging technologies, including XR. The human rights impact assessment is a mechanism designed to help ensure that such technologies develop in a manner that respects human rights, while also enabling the identification of potential gaps and legal uncertainties early on in the development stage. The analysis also raises the question as to whether further legislation may be required to address these gaps. Crucial to this question is the need to strike a balance between ensuring technological development does not interfere with fundamental human rights protections and avoiding overregulating emerging technologies at an early stage and thereby stifling further development.

Read more about the human rights law implications of climate engineering and neurotechnologies.

Read the report


Climate engineering through the lens of human rights law


Authored by: Ben Howkins and Julie Vinders
Reviewed by: Corinna Pannofino and Anaïs Resseguier

Article | 05 September 2022

Technological innovation can both enhance and disrupt society in various ways. It often raises complex legal questions regarding the suitability of existing laws to maximise the benefits to society, whilst mitigating potential negative consequences. Some emerging technologies even challenge us to rethink the ways in which our fundamental rights as human beings are protected by law. For example, how do your rights as an individual stack up if your local environment is affected by climate engineering activities aimed at addressing global climate change?

Whilst aimed at tackling climate change, climate engineering may in itself impact human rights in a variety of ways, typically either by way of enhancement or interference. Human rights relevant to climate engineering can be split between substantive rights – which are freestanding rights possessed by individuals (such as the right to life, or health) – and procedural rights, which relate to administrative procedure and enforcement of substantive rights (such as the right to be informed or have access to legal remedies).

Substantive human rights

The substantive rights most relevant in the context of climate engineering include the right to life, the right to a healthy environment, the right to health, the right to access food, and the right to water. Climate engineering is intended to mitigate the harms of climate change and may therefore enhance some of these substantive rights. However, it could also in itself result in serious environmental harms affecting human lives and their environment. The right to life encompasses threats to the quality and dignity of life, including those related to human health and access to food and water. Climate engineering activities have the potential to adversely, and potentially irreversibly, affect the climate in some locations, albeit unintentionally. This may alter precipitation patterns, possibly inducing drought conditions and reducing food and water security, which can directly or indirectly affect the rights to life, a healthy environment, health, food, and water.

Rights related to scientific research

A subset of substantive rights relevant to climate engineering pertains to scientific research. States are required to respect the freedom indispensable for scientific research, and everyone has the right to benefit from scientific research. Additionally, research participants are protected by various rights, including the general prohibition on torture and cruel, inhuman or degrading treatment. Thus, whilst researchers are free to develop climate engineering technologies, any testing is subject to obtaining free and informed consent from all impacted individuals. In the context of real-world climate engineering testing, however, any resultant effect on the Earth’s climate system is unlikely to be contained within a specific area, due to its global scale. This means that communities worldwide may essentially become research participants. It follows that the practical difficulties of obtaining consent from prospective research participants, as well as the adequate protection of intellectual property rights related to innovation and research, are two of the main legal challenges in relation to climate engineering technologies.

Public participation and procedural rights

On the issue of obtaining consent for climate engineering activities from all impacted individuals, procedural human rights offer a set of participation rights, which include the right to information, the right to participate in public affairs, and the right to access legal remedies. Information about climate engineering activities, particularly from public bodies, falls within the remit of the right to information. The European Court of Human Rights has assessed the right to environmental information in relation to the right to respect for private and family life, as well as the right to freedom of expression, and observed that these rights may be violated if a State fails or refuses to provide information. Furthermore, everyone has the right to engage in public affairs, including public debate and decision-making in relation to climate engineering. States should give citizens the possibility to participate in public affairs and exert influence through public debate and dialogue. In addition, the right to access legal remedies seeks to ensure individuals have access to legal recourse in the event of alleged human rights violations. This means that, according to international and EU law, individuals should have legal recourse if they were not adequately informed, involved in public dialogue, or their informed consent was not obtained in relation to climate engineering activities. Individuals also have a right to recourse if their substantive rights are violated by climate engineering.

Next in TechEthos – is there a need to expand human rights law?

The study highlights the importance of bringing a human rights law perspective into the development of new and emerging technologies, including climate engineering. The human rights impact assessment helps to ensure that such technologies develop in a manner that respects human rights, while also enabling the identification of potential gaps and legal uncertainties early on in the development stage. The analysis also raises the question as to whether further legislation may be required to address these gaps. Crucial to this question is the need to strike a balance between ensuring technological development does not interfere with fundamental human rights protections, and avoiding overregulating emerging technologies at an early stage, and thereby stifling further development.

Read more about the human rights law implications of neurotechnologies and digital extended reality.

Read the report


Exploring emerging technologies through the lens of human rights law


Authored by: Ben Howkins and Julie Vinders
Reviewed by: Corinna Pannofino and Anaïs Resseguier

News | 5 September 2022

Technological innovation can both enhance and disrupt society in various ways. It often raises complex legal questions regarding the suitability of existing laws to maximise the benefits to society, whilst mitigating potential negative consequences. Some emerging technologies even challenge us to rethink the ways in which our fundamental rights as human beings are protected by law.

For example, how do your rights as an individual stack up if your local environment is affected by climate engineering activities aimed at addressing global climate change? Or what would happen to the right to not self-incriminate if advanced neurotechnologies in the courtroom can provide insights into a defendant’s mental state? And how might digital extended reality (XR) affect online safety and the emerging rights to be online and to disconnect?

A human rights impact assessment

A recent study by the TechEthos project analysed international and European Union (EU) human rights law, including the International Bill of Human Rights, the European Convention on Human Rights (ECHR), and the Charter of Fundamental Rights of the EU (CFREU), in relation to climate engineering, neurotechnologies and digital extended reality (XR). While such legal frameworks do not explicitly mention these technologies, many of the provisions contained therein are nonetheless likely to be directly applicable. By highlighting the potential for enhancements to and interferences with various human rights, the study essentially provides a human rights impact assessment of the three technology families. It identifies some gaps and legal uncertainties, which may give rise to the need for further legislation, or at least further legal clarification in the future.

Read more about the human rights law implications of Climate Engineering, Neurotechnologies and Digital Extended Reality.


Key findings highlight implications of new and emerging technologies


Authored by: Cristina Paca
Reviewed by: Michael J. Bernstein and Anais Resseguier

News | 22 July 2022

New and emerging technologies may generate a range of socio-economic benefits and new opportunities. Nevertheless, their transformative potential also means that these technologies are likely to pose a number of challenging ethical and societal issues.

TechEthos chose as its focus three technology families that concern fundamental relationships between technology and the planet, the digital world, and the human body: Climate Engineering, Digital Extended Reality and Neurotechnologies.

Now at the halfway point of this three-year project, the TechEthos consortium is delighted to publish three sets of key findings that enhance our understanding of the implications of our three technology families. These results lay a strong foundation for our future efforts: to develop operational ethical guidelines or codes that bring ethical and societal values directly into the early stages of technology development.

Arriving at ethical values and principles

‘Ethics by design’ is at the core of the TechEthos approach to the ethics of new and emerging technologies. This approach involves including a broad array of human and environmental values from the very beginning of the research and development of new technologies by designers, entrepreneurs, researchers, users and policy makers. A new TechEthos report identifies a range of fundamental values and principles associated with the technology families to inform the project’s ‘ethics by design’ approach and key stakeholders.

Several different roads were taken by our research team to arrive at the values and principles of each technology family:

  • Looking at the specific techniques and devices that are characteristic of each technology family and the questions they raise – for instance, Deep Brain Stimulation and Brain-Computer Interfaces are both neurotechnology techniques,
  • Considering the key applications of the technologies, in areas such as training and education, social relations, medicine and diagnostics, among others, and
  • Unravelling the arguments behind the core ethical dilemmas that have marked each technology family.

Three roads to arrive at values and principles. Illustration from the report ‘Analysis of Ethical Issues’.

Finally, for each value and principle identified and explained, the report outlines possible mitigating strategies and provides a set of questions that designers, policy makers, and technology users might consider in order to reflect these values and principles.

Read the report

In the media

Media both reflect and shape public perceptions of technologies and, as such, offer important indications of these perceptions. The TechEthos project team carried out a media analysis in 13 countries, using state-of-the-art computational tools to examine news stories published in 2020 and 2021 by key online news outlets.

The media analysis allowed us to take the pulse of the media landscape in these countries and to understand which technology families, specific technologies, and ethical, legal, and social issues received the most widespread media coverage, as well as the nature of that coverage. In a majority of countries, Digital Extended Reality was the most discussed technology family in news stories, especially through the prism of Virtual Reality. There were exceptions, such as Germany and Austria, where Climate Engineering was a topical issue.

Word cloud of climate engineering news stories mentioning ethical, legal and social issues keywords, for Sweden. The top words in the cloud are ‘vätgas’ (hydrogen), ‘koldioxid’ (carbon dioxide), ‘EU’, ‘utsläpp’ (emissions), ‘klimatkrisen’ (climate crisis), and ‘minska’ (reduce). Illustration from the report ‘Results of media analysis’.
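As an illustration, the keyword-frequency computation behind such a word cloud can be sketched in a few lines. The mini-corpus, keyword list, and tokenisation below are hypothetical stand-ins; the actual TechEthos analysis worked on full multilingual news corpora with curated keyword sets:

```python
from collections import Counter
import re

# Hypothetical mini-corpus standing in for the 2020-2021 news stories.
stories = [
    "EU plans to cut carbon dioxide emissions amid the climate crisis.",
    "New carbon capture pilot raises questions about emissions and risk.",
    "Hydrogen projects expand as the EU debates the climate crisis.",
]

# Illustrative ethical, legal and social issues keyword list.
keywords = {"eu", "carbon", "dioxide", "emissions", "hydrogen",
            "climate", "crisis", "risk"}

# Count how often each keyword appears across all stories.
counts = Counter(
    token
    for story in stories
    for token in re.findall(r"[a-z]+", story.lower())
    if token in keywords
)

# In a word cloud, these frequencies would be scaled to font sizes.
print(counts.most_common(3))
```

The most frequent keywords then dominate the cloud, just as ‘vätgas’ and ‘koldioxid’ do in the Swedish example above.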

The report also revealed that media representations of technologies were often linked, for better or for worse, to notable individuals and their initiatives. This was, for example, the case for Neurotechnologies, where 35% of the stories collected referenced Elon Musk and his company, Neuralink.

Read the report

Legal issues in international & EU law

TechEthos has reviewed international and EU laws and policies for their relevance to our three technology families. While no comprehensive or dedicated laws were found to govern them, a number of legal frameworks do impose relevant obligations on nation states, as members of the international community or of the European Union, and grant certain rights to private individuals.

To begin to identify the relevant legal issues, our research partners looked into a set of key questions – ‘What are the relevant objects?’, ‘What actions are done or not done?’, ‘Who is involved or impacted by the action?’ and ‘Where does the action take place?’.

Given the broad range of answers that each technology family implies, our recently published report touches on human rights law, rules on state responsibility, environmental law, climate law, space law, law of the seas, privacy and data protection law, consumer rights law, and the law related to artificial intelligence, digital services and data governance.

Private individuals and entities also face obligations from national legal frameworks in areas related to our three technology families. This is the subject of an upcoming report due later in 2022. The gaps and challenges in existing legal frameworks identified by this work will form the basis for our legal and policy recommendations which are expected in the final year of the project.

Read the report

What’s next?

The three sets of results, complemented by further insights from our ongoing societal engagement actions, lay the foundation for the second half of the project, when TechEthos will work on enhancing ethical guidelines and codes for people working in research and development in the area of our three technology families. They not only provide the technical building blocks of this work but also widen our perspective on the role and framing of those guidelines.

Joining forces with like-minded projects to address ethical and societal issues of new technologies
06 April 2022

Authored by: Lisa Tambornino (EUREC) and Mathijs Vleugel (ALLEA)
Reviewed by: Andrew Whittington-Davis and Corinna Pannofino

News | 25 June 2021

TechEthos has established a cluster of 16 EU-funded projects, creating a platform to exchange, collaborate, and create synergies. Some of the projects, like TechEthos, directly address ethical and societal challenges related to new and emerging technologies; others are primarily technical projects that also engage with such challenges. On 4 March 2022, these 16 EU-funded projects came together for an online kick-off meeting. This first meeting revealed many overlaps, in particular that almost all projects aim to identify ethical and societal challenges, find legal gaps, and develop strategies to close these gaps. Many of the projects want to improve the ethical and legal framework through recommendations, tools, and guidelines for users, researchers, ethics bodies, policymakers, and other stakeholders. The cluster will continue to intensify its collaboration and work together more concretely to avoid duplicating efforts and to ensure the best work is produced across all projects.

Which projects are involved in the cluster?

TechEthos is a Horizon 2020-funded project that addresses how to prioritise ethics and societal values in the development of new and emerging technologies, with a particular focus on three technology areas, namely Neurotechnologies, Climate Engineering and Digital Extended Reality (for more information click here).  

For the cluster, we invited projects that are funded by the EU and work either in the field of research ethics or responsible research and innovation (RRI) or work in some way on ethical and/or societal challenges present in one of the three TechEthos technologies. 

From the resulting cluster of 16 projects, five have a focus on research ethics and/or RRI in general, three carry out research in the field of neurotechnology, four in the field of digital extended reality, and four in the field of climate engineering (see figure).

To find out more about the projects involved click here. 

What is the future plan for the cluster?

At the kick-off meeting, representatives of the 16 projects engaged in lively discussions, which will continue during an in-person meeting in Vienna on 23 May 2022. After that, the cluster aims to regularly exchange progress and ideas in online meetings and work on joint webinars and position papers. 

The cluster remains open to further projects. If your project is interested in exchanging and collaborating with TechEthos and other EU-funded projects, please contact the Horizontal Coordination WP leader Lisa Tambornino (tambornino@eurecnet.eu).

Reviewing the horizon scan: Selecting the TechEthos technology portfolio
14 February 2022

Authored by: Eva Buchinger, Manuela Kienegger, Michael J. Bernstein, Austrian Institute of Technology; Andrea Porcari, Airi – Italian Association for Industrial Research
Reviewed by: Andrew Whittington-Davis and Nualo Polo

News | 14 February 2022

How we arrived at our portfolio of TechEthos technology families

In our recent article, we were excited to share the three potentially disruptive technologies with high socio-economic and ethical implications that have been chosen as the focus for the TechEthos project. Going forward, the TechEthos project will explore ways to develop ethics-by-design guidelines for these technology families in conversation with expert stakeholders, researchers, innovators, and members of the public. In this post, we present more detail on the methodology used to select these technologies – the horizon scan.

A horizon scan refers to the act of seeking out diverse sources of information about short-, medium-, and long-term research, innovation, social, political, and economic developments. Our TechEthos horizon scan set out with the clear goal of developing a portfolio of three technology families. Specifically, we sought to identify technologies with potentially high socio-economic and ethical impact according to the following five impact assessment criteria selected by the project team:

  1.  Industrial and economic impact: the extent to which technologies enable novel applications with potentially significant impact on industry/economy.
  2.  Ethics impact: the extent to which technologies enhance or undermine fundamental societal principles and values (e.g., human rights).
  3.  Public impact: the extent to which technologies enable novel applications with potentially significant impact on the life of people and on broader societal trends.
  4.  Policy impact: the extent to which technologies are prioritised by policymakers at the regional, national, and international level.
  5.  Legal impact: the extent to which technologies challenge existing legal frameworks.
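To make the prioritisation concrete, aggregating ratings against these five criteria into a ranking could be sketched as follows. All ratings here are invented for illustration, and ‘Quantum Sensing’ is a hypothetical family, not one of the 16 short-listed by the project:

```python
from statistics import mean

# Hypothetical 1-5 expert ratings for each of the five impact criteria;
# the real horizon scan combined survey responses, patent data,
# and internal deliberation rather than a single numeric score.
ratings = {
    "Climate Engineering":      {"industrial": 4, "ethics": 5, "public": 5, "policy": 4, "legal": 4},
    "Digital Extended Reality": {"industrial": 5, "ethics": 4, "public": 5, "policy": 3, "legal": 4},
    "Quantum Sensing":          {"industrial": 3, "ethics": 2, "public": 2, "policy": 3, "legal": 2},
}

# Rank technology families by their mean score across the five criteria.
ranked = sorted(ratings, key=lambda fam: mean(ratings[fam].values()), reverse=True)
print(ranked)
```

Families with consistently high scores across all five criteria rise to the top of such a ranking, which is the intuition behind the short-listing described below.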

The Process

Using these criteria as a guide, our horizon scan involved three selection steps to successively locate and refine technology families to form the basis of our TechEthos portfolio (see image below).

First, we cast a wide net in a desk-based document analysis to identify approximately 150 promising technology families. Our research team reviewed over 100 documents, including authoritative technology assessments by governments, research organisations, think tanks, business organisations, and other key research and innovation actors. Through a series of iterative internal deliberations with our team of technical and social scientific experts, guided by the impact assessment criteria above, we created a short-list of 16 technology families with high socio-economic impact.

In the second stage of the horizon scan, we reached out to external technical and social scientific experts through a survey and conducted an analysis of patent and industry involvement in technology family developments as indicators of industrial and economic impact. This analysis drew upon research on the industrial research and development (R&D) strength of each technology (total number and year-over-year growth of patents) and its prominence in EU R&D policy (number of EU projects funded and year-over-year growth in the number of projects funded). The online survey was designed to gather the opinions of external experts representing various stakeholder groups (such as academic and industrial researchers, ethics bodies, policymakers, and funding organisations). Feedback from the 77 survey respondents largely confirmed the 16 technology families as having a high to very high social, economic, and ethical impact and was used to prioritise them.

In the third and final stage of selection, we combined internal and external expertise to discuss a pre-selected set of five technology families. Our pre-selection resulted from a review of all the data acquired in light of our project goal: to generate ethical guidelines for technology families with high social and economic impact.

Taking the resulting five pre-selected technology families (Environmental and climate, Data processing, Cognitive technologies, Artificial human & neurotechnologies, and Mobility technologies), we conducted a participatory workshop with project members, advisory board members, and external experts to arrive at our final selection. During the workshop, through a series of ‘World Café’ rounds, we refined the pre-selected technology families by discussing:

  1.  The granularity of the technology family (is it specific enough? too broad?), and
  2.  The reasons why a particular refinement of the technology family would be suitable for the TechEthos project (i.e., whether the revised composition or definition of the technology family would enhance its viability as a subject of subsequent research by the project).

In the final portion of the selection workshop, participants identified three technology families. Each participant was first asked to come up with a combination of technology families of their own, and then to add to, combine, and later vote on other possible portfolios. Participants justified their selections and votes according to the following criteria:

  1.  European Commission interest/political priorities;
  2.  Potential TechEthos added value;
  3.  Scientific/intellectual interest;
  4.  Time horizon (short to long term with at least one specific technology within the family close to the market).

In the days following the workshop, the project team met to synthesise the workshop outcomes and to propose the set of high socio-economic impact technology families.

The result

Going forward, TechEthos will focus on and explore technologies interacting with the planet, the digital world, and the human body. Specifically, TechEthos will develop ethics guidelines and supportive materials for climate engineering technologies, digital extended reality technologies, and neurotechnologies.

For further reading on the Horizon Scan methodology, please visit our resource on the assessment and final selection of technologies.

To learn more about TechEthos, follow the project on Twitter and LinkedIn, and sign up to the project newsletter. By joining the online community, you will be first in line to discover the technologies the project selects as the focus of its work and to contribute to shaping the technologies of the future.
