HORIZON-CL4-2024-HUMAN-03-02: Explainable and Robust AI (AI Data and Robotics Partnership) (RIA)

10 January 2025
Expected Outcome:

Projects are expected to contribute to one of the following outcomes:

  • Enhanced robustness, performance and reliability of AI systems, including generative AI models, with awareness of the limits of operational robustness of the system.
  • Improved explainability, accountability, transparency and autonomy of AI systems, including generative AI models, along with an awareness of the operating conditions of the system.
Scope:

Trustworthy AI solutions need to be robust, safe and reliable when operating in real-world conditions. They need to provide adequate, meaningful and complete explanations when relevant, or insights into causality, account for concerns about fairness, remain robust when dealing with such issues in real-world conditions, and stay aligned with the rights and obligations around the use of AI systems in Europe. Advances across these areas can help create human-centric AI[1], which reflects the needs and values of European citizens and contributes to the effective governance of AI technologies.

The need for transparent and robust AI systems has become more pressing with the rapid growth and commercialisation of generative AI systems based on foundation models. Despite the impressive capabilities of these systems, their trustworthiness remains an unresolved, fundamental scientific challenge. Due to the intricate nature of generative AI systems, understanding or explaining the rationale behind their

...