Ether-AI: Equitable and TrustwortHy mEthods for Responsible Human-AI interactions


Abstract

Research and technological advancements in the field of Human-Intelligent Systems (IS) interaction, especially in the context of assistive environments, have traditionally integrated requirements related to ethical and trustworthy interaction. Today, guaranteeing such properties in a system's behavior is imperative for compliance with current EU and US legal frameworks. Ethical and responsible AI spans many dimensions, including algorithmic bias, transparency, and accountability. These dimensions ought to be considered by design and maintained across the life cycle of an IS. Moreover, they concern every aspect of a system, from data and models to human interaction, as well as all involved stakeholders.

Machine Learning (ML), a subfield of Artificial Intelligence (AI), plays a significant role in driving technological advancements across various sectors. In critical domains, where ML models may contain errors or suffer from biases, it is important to ensure their transparency. It is vital for secure, fair, and trustworthy intelligent systems to be able to explain how they work and why they predict a particular outcome. To address these issues, the research area of eXplainable Artificial Intelligence (XAI) has gained visibility in the scientific community.

In addition to explaining how an ML model works and why it behaves in a certain way, research in the field of XAI also aims to understand how explanations should be communicated to humans. Depending on the level of Human-IS interaction, this ‘how’ can affect a wide range of interaction factors, from usability and user experience all the way to transparency in the authorship of actions. The related ethical implications include impediment of human agency, over- or under-reliance on AI, and the attribution of accountability and responsibility. To account for all of these, models and algorithmic methods need to be evaluated ‘in action’, in real time and in the real world.


Goals

Ether-AI aspires to engage researchers active in the field of responsible and trustworthy human-AI interaction. The workshop targets high-quality works that promote interdisciplinary dialogue and support participants' future research through constructive peer feedback. As part of the larger theme of the PETRA conference, participants can also interact with top scientists working on pervasive assistive technologies and exchange valuable ideas that could advance the state of the art in the field.


List of Topics

Within the scope of this workshop, we are interested in exploring concepts, application scenarios, and tools for responsible and trustworthy human-AI interaction. While our own focus at LIST is on personalised health recommendations and upskilling, we welcome perspectives from other domains to jointly develop a research roadmap.

Possible topics for contributions are:

  • Data-centric approaches for Trustworthy and Responsible AI
  • Data Equity Systems
  • Fairness, accountability, and transparency in Human-AI interaction
  • Ethical aspects in Human-AI/robot interaction and collaboration in assistive environments
  • Explainability of complex data systems
  • New methods or attributes for explainability
  • New approaches to structure explanations
  • New forms of explainability or new ML tasks for explainability
  • New evaluation methods for explainability
  • Reports on systems that employ explainability
  • Evaluation of human-AI/robot collaboration in real time and in the real world: studies, methods, and materials
  • Human factors in the design, development, and evaluation of responsible and explainable AI methods


Workshop Organizers

Prof. Abolfazl Asudeh
Department of Computer Science, University of Illinois Chicago
asudeh@uic.edu

Prof. Nick Bassiliades
School of Informatics, Aristotle University of Thessaloniki
nbassili@csd.auth.gr

Dr. Maria Dagioglou
Institute of Informatics and Telecommunications (IIT), National Center for Scientific Research “Demokritos”
mdagiogl@iit.demokritos.gr