HORIZON-CL4-2024-HUMAN-03-02

Explainable and Robust AI (AI Data and Robotics Partnership) (RIA)


About the connections

The connections between this topic and related topics were generated based on the following links

  • HORIZON-CL4-2024-HUMAN-01-06
    Explainable and Robust AI (AI Data and Robotics Partnership) (RIA)

    MOTIVATION The call HORIZON-CL4-2024-HUMAN-01 was cancelled on short notice, and the topic HORIZON-CL4-2024-HUMAN-01-06 was re-issued later that same year as HORIZON-CL4-2024-HUMAN-03-02.

  • HORIZON-CL4-2024-DATA-01-01
    AI-driven data operations and compliance technologies (AI, data and robotics partnership) (IA)

    MOTIVATION Explainable and robust AI is crucial for AI Act compliance.

  • HORIZON-CL4-2024-DIGITAL-EMERGING-01-04
    Industrial leadership in AI, Data and Robotics boosting competitiveness and the green transition (AI Data and Robotics Partnership) (IA) [https://www.europarl.europa.eu/RegData/etudes/STUD/2021/662906/IPOL_STU(2021)662906_EN.pdf]

    MOTIVATION Build on robust and trustworthy AI.

  • HORIZON-CL4-2023-HUMAN-01-04
    Open innovation: Addressing Grand challenges in AI (AI Data and Robotics Partnership) (CSA)

    MOTIVATION According to the call text, "Proposals are expected to dedicate tasks and resources to collaborate with and provide input to the open innovation challenge under HORIZON-CL4-2023-HUMAN-01-04 addressing explainability and robustness. Research teams involved in the proposals are expected to participate in the respective Innovation Challenges."

  • HORIZON-CL4-2023-HUMAN-01-03
    Natural Language Understanding and Interaction in Advanced Language Technologies (AI Data and Robotics Partnership) (RIA)

    MOTIVATION Mentioned in the topic text.

  • HORIZON-CL4-2023-HUMAN-01-02
    Large Scale pilots on trustworthy AI data and robotics addressing key societal challenges (AI Data and Robotics Partnership) (IA)

    MOTIVATION Trustworthy and robust AI is a shared theme.

  • DIGITAL-2022-CLOUD-AI-02-SEC-LAW
    Data space for security and law enforcement

    MOTIVATION Topic asks for "contribution to data spaces".

  • DIGITAL-2023-CLOUD-AI-04-ICU-DATA
    Federated European Infrastructure for intensive care units' (ICU) data

    MOTIVATION Topic asks for "contribution to data spaces".

  • DIGITAL-2024-AI-ACT-06-INNOV
    EU AI Innovation Accelerator preparatory action

    MOTIVATION This Digital Europe Programme (DEP) topic addresses the AI Act.

  • DIGITAL-2022-CLOUD-AI-03-DS-SMART
    Data space for smart communities (deployment)

    MOTIVATION Topic asks for "contribution to data spaces".

  • DIGITAL-2021-CLOUD-AI-01-PREP-DS-GREEN-DEAL
    Preparatory actions for the Green Deal Data Space 

    MOTIVATION Topic asks for "contribution to data spaces".

  • DIGITAL-2024-CLOUD-AI-06-ENERSPACE
    Energy Data Space

    MOTIVATION Topic asks for "contribution to data spaces".

Call text (as on F&T portal)

View on F&T portal
Expected Outcome:

Projects are expected to contribute to one of the following outcomes:

  • Enhanced robustness, performance and reliability of AI systems, including generative AI models, with awareness of the limits of operational robustness of the system.
  • Improved explainability and accountability, transparency and autonomy of AI systems, including generative AI models, along with an awareness of the working conditions of the system.
Scope:

Trustworthy AI solutions need to be robust, safe and reliable when operating in real-world conditions, and need to be able to provide adequate, meaningful and complete explanations when relevant, or insights into causality. They must account for concerns about fairness, remain robust when dealing with such issues in real-world conditions, and be aligned with the rights and obligations around the use of AI systems in Europe. Advances across these areas can help create human-centric AI[1], which reflects the needs and values of European citizens and contributes to an effective governance of AI technologies.

The need for transparent and robust AI systems has become more pressing with the rapid growth and commercialisation of generative AI systems based on foundation models. Despite the impressive capabilities of these systems, their trustworthiness remains an unresolved, fundamental scientific challenge. Due to the intricate nature of generative AI systems, understanding or explaining the rationale behind their outputs is normally not possible with current explainable AI methods. Moreover, these models tend to 'hallucinate', occasionally generating non-factual or inaccurate information that further compromises their reliability.

To achieve robust and reliable AI, novel approaches are needed to develop methods and solutions that work under other than model-ideal circumstances, while also being aware of when these conditions break down. To achieve trustworthiness, AI systems should be sufficiently transparent and capable of explaining how they have reached a conclusion in a way that is meaningful to the user, enabling safe and secure human-machine interaction, while also indicating when the limits of operation have been reached.
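
The requirement to indicate "when the limits of operation have been reached" is often operationalised as selective prediction: the system abstains instead of answering when its confidence is too low. The following is a minimal, hedged sketch of that idea, not part of the call text; it assumes a generic scikit-learn classifier and a purely illustrative toy dataset.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Toy data standing in for a real-world task (illustrative assumption only).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

def predict_or_abstain(model, X, threshold=0.8):
    """Return class predictions, or -1 where the model's confidence is below
    `threshold`, i.e. where the system signals it may be outside its reliable
    operating regime."""
    proba = model.predict_proba(X)
    confidence = proba.max(axis=1)
    predictions = proba.argmax(axis=1)
    return np.where(confidence >= threshold, predictions, -1)

preds = predict_or_abstain(clf, X_test)
answered = preds != -1
coverage = answered.mean()                                     # fraction of inputs answered
selective_acc = (preds[answered] == y_test[answered]).mean()   # accuracy on answered inputs
print(f"coverage={coverage:.2f}, selective accuracy={selective_acc:.2f}")
```

The confidence threshold trades coverage against reliability and would normally be calibrated on held-out data; richer approaches (for example conformal prediction or out-of-distribution detection) pursue the same goal of knowing when the operating conditions are no longer valid.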

The purpose is to advance AI algorithms, and innovations based on them, that can perform safely under a common variety of circumstances, operate reliably in real-world conditions, and predict when these operational circumstances are no longer valid. The research should aim at advancing robustness and explainability for a broad range of solutions, with only an acceptable loss in accuracy and efficiency, and with known verifiability and reproducibility. The focus is on extending the general applicability of explainability and robustness of AI systems through foundational AI and machine learning research. To this end, the following methods may be considered, but proposals are not restricted to them:

  • data-efficient learning, transformers and alternative architectures, self-supervised learning, fine-tuning of foundation models, reinforcement learning, federated and edge-learning, automated machine learning, or any combination thereof for improved robustness and explainability.
  • hybrid approaches integrating learning, knowledge and reasoning, neurosymbolic methods, model-based approaches, neuromorphic computing, or other nature-inspired approaches and other forms of hybrid combinations which are generically applicable to robustness and explainability.
  • continual learning, active learning, long-term learning and how they can help improve robustness and explainability.
  • multi-modal learning, natural language processing, speech recognition and text understanding, taking multicultural aspects into account, for the purpose of increased operational robustness and the capability to explain alternative formulations[2].
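
The methods listed above are options, not prescriptions. As one hedged illustration of the kind of model-agnostic explanation such work builds on, the sketch below uses permutation feature importance: each input feature is shuffled in turn, and the resulting drop in accuracy indicates how much the model relies on it. The dataset and model here are illustrative assumptions, not part of the call.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy data; in a real project the features would be domain-specific.
X, y = make_classification(n_samples=1000, n_features=8, n_informative=3, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Shuffle each feature in turn and record the drop in test accuracy:
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=1)

for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f} "
          f"(std {result.importances_std[i]:.3f})")
```

Feature-level attributions of this kind are only a starting point: the call asks for explanations that are "meaningful to the user", which typically requires combining such techniques with domain knowledge and user-facing evaluation.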

Multidisciplinary research activities should address all of the following:

  • Proposals should involve appropriate expertise in all the relevant sector-specific use cases and disciplines and, where appropriate, the Social Sciences and Humanities (SSH), including gender and intersectional knowledge to address concerns around gender, racial or other biases.
  • Proposals are expected to dedicate tasks and resources to collaborate with and provide input to the open innovation challenge under HORIZON-CL4-2023-HUMAN-01-04 addressing explainability and robustness. Research teams involved in the proposals are expected to participate in the respective Innovation Challenges.
  • Contribute to making AI and robotics solutions meet the requirements of Trustworthy AI, based on respect for ethical principles and fundamental rights, including critical aspects such as robustness, safety and reliability, in line with the European approach to AI. Ethical principles need to be adopted from the early stages of development and design.

All proposals are expected to embed mechanisms to assess and demonstrate progress (with qualitative and quantitative KPIs, benchmarking and progress monitoring) and to share communicable results with the European R&D community through the AI-on-demand platform or the Digital Industrial Platform for Robotics, as public community resources. This is intended to maximise re-use of results, whether by developers or for uptake, to optimise the efficiency of funding, and to enhance the European AI, Data and Robotics ecosystem and possible sector-specific forums through the sharing of results and best practice.

In order to achieve the expected outcomes, international cooperation is encouraged, in particular with Canada and India.

[1] A European approach to artificial intelligence | Shaping Europe’s digital future (europa.eu)

[2] Research should complement, build upon and collaborate with projects funded under topic HORIZON-CL4-2023-HUMAN-01-03: Natural Language Understanding and Interaction in Advanced Language Technologies.

News flashes

2025-05-06

EVALUATION RESULTS

Published: 18.04.2024

Deadline: 18.09.2024

Available budget: EUR 72,500,000

The results of the evaluation for each topic (HUMAN-03-01 / HUMAN-03-02 / HUMAN-03-03 / HUMAN-03-04) are as follows:

  • Number of proposals submitted (including proposals transferred from or to other calls): 27 / 131 / 23 / 2
  • Number of inadmissible proposals: 0 / 0 / 0 / 0
  • Number of ineligible proposals: 1 / 5 / 1 / 0
  • Number of above-threshold proposals: 19 / 92 / 10 / 1
  • Total budget requested for above-threshold proposals: 444,267,401.95 € / 673,046,895.38 € / 14,727,623.27 € / 6,000,000 €
  • Number of proposals retained for funding: 2 / 3 / 1 / 1
  • Number of proposals in the reserve list: 1 / 1 / 1 / 0
  • Funding threshold: 14.5 / 15 / 14.5 / 12.5
  • Number of proposals with scores ≤ 15 and ≥ 14: 3 / 14 / 1 / 0
  • Number of proposals with scores < 14 and ≥ 13: 4 / 20 / 1 / 0
  • Number of proposals with scores < 13 and ≥ 10: 12 / 58 / 8 / 1

Summary of observer reports:

Observer report for topics HUMAN-03-01, 02 and 04:

Based on the achieved results, the overall quality of the evaluation is rated as “very good”. The topics in this report were monitored by a team of two Independent Observers (IOs). The entire observation process was conducted remotely, through analysis of documentation in the SEP system and through consensus and panel meetings held in the video-conferencing system (Cisco Webex). The IOs verified that the procedures set out or referred to in the EU Funding & Tenders Online Manual were followed, drew the attention of Commission staff to any potential deficiencies, and compiled a report with findings and recommendations aimed at improving the overall efficiency and effectiveness of the evaluation process.

The scale and complexity of the evaluation task were challenging but within the professional and personal capacities of the experts invited to evaluate the proposals received in response to this call. The exercise was very well prepared and managed excellently by the Call and Topic Coordinators and their teams. The Commission staff is to be commended for their professionalism during the exercise. The organisation and scheduling of evaluator briefings and consensus meetings were carried out efficiently and effectively. The Independent Observers are satisfied that the evaluation process conformed to the applicable rules and required standards. The evaluation process was fair, efficient and effective, and the throughput time of the evaluation was good. The Commission staff is also to be commended for the support provided to the observers during their task. Procedures and tools were efficient, reliable and user-friendly. All evaluation procedures monitored by the observers were implemented in conformity with the applicable and agreed rules. All experts and involved actors adhered strictly to the guiding principles of independence, objectivity, accuracy and consistency. No significant deviations were observed or reported to the observers. The observers have given careful consideration to the recommendations discussed during the checkpoint meeting with EU staff. Based on these observations, the following recommendations can be derived:

  • The gender balance in the Experts pool still has room for improvement
  • Despite the time pressure, more regular breaks should be foreseen and planned for online meetings, especially for the second half or full day panel meetings.
  • Text highlighting should be a future feature of the SEP editor.
  • Although the scoring process has considerably improved by focussing on the wording first, applied procedures should be further strengthened and harmonised.
  • The efficiency of the consensus phase could be improved by returning to physical presence meetings to discuss proposals on site.
Observer report for topic HUMAN-03-03:

The IO finds that the evaluation followed the applicable rules for the call, that the proposals were competently evaluated in a fair and equitable manner by the experts, and that the process was continuously monitored by the Agency staff. The IO did not observe any event or activity that gave rise to specific concern or that might have jeopardised the fairness of the evaluation. For HORIZON-CL4-2024-HUMAN-03-03, 23 proposals were submitted and 1 proposal was accepted for funding. The expert team evaluating the proposals was perfectly gender balanced and drawn from the broadest possible national representation.

    We recently informed the applicants about the evaluation results for their proposals.

    For questions, please contact the Research Enquiry Service.

    2025-05-06

    PROPOSAL NUMBERS

    Call HORIZON-CL4-2024-HUMAN-03 closed on 18/09/2024.

    183 proposals have been submitted.

    The breakdown per topic is:

  • HORIZON-CL4-2024-HUMAN-03-01: 27 proposals
  • HORIZON-CL4-2024-HUMAN-03-02: 131 proposals
  • HORIZON-CL4-2024-HUMAN-03-03: 23 proposals
  • HORIZON-CL4-2024-HUMAN-03-04: 2 proposals
    Evaluation results are expected to be communicated in December 2024.

    2024-05-14

    Dear applicant,

    Please note that there was an error in the Part B template available for download for this topic.

    The correct version is entitled “Standard Application Form (HE RIA and IA)” and indicates a page limit of 45 pages.

    The correct version is the one now available in the submission system. Please make sure that you use the correct version before proceeding further in the drafting of your proposal.

    We apologise for the inconvenience.

    2024-04-24
    The submission session is now available for: HORIZON-CL4-2024-HUMAN-03-02 (HORIZON-RIA), HORIZON-CL4-2024-HUMAN-03-03 (HORIZON-CSA), HORIZON-CL4-2024-HUMAN-03-01 (HORIZON-RIA), HORIZON-CL4-2024-HUMAN-03-04 (HORIZON-CSA)
    Call topic details
    Call status: Closed
    Publication date: 2024-04-17 (1 year ago)
    Opening date: 2024-04-23 (1 year ago)
    Closing date: 2024-09-18 (7 months ago)
    Procedure: single-stage

    Budget: EUR 15,000,000
    Expected grants: 2
    Contribution: EUR 7,500,000 - 7,500,000
    Call

    HORIZON-CL4-2024-HUMAN-03

    Call topics are often grouped together in a call. Sometimes this is for a thematic reason, but often it is also for practical reasons.

    There are 3 other topics in this call:

    • HORIZON-CL4-2024-HUMAN-03-01
    • HORIZON-CL4-2024-HUMAN-03-03
    • HORIZON-CL4-2024-HUMAN-03-04

    Source information

    Showing the latest information. Found 5 versions of this call topic in the F&T portal.

    Information from

    • 2025-01-11_03-30-11
    • 2024-11-23_03-30-15
    • 2024-11-04_17-25-09
    • 2024-09-30_21-20-58
    • 2024-07-11_18-29-29

    Check the differences between the versions.


    Events


    Events are added by the ideal-ist NCP community and are hand-picked. If you would like to suggest an event, please contact idealist@ffg.at.

    Call topic timeline

    What phase of the topic timeline are we in? This timeline offers some suggestions on realistic actions you could take at this moment. The timeline is based on the information provided by the call topic.
    1. Work programme available

      - 1 year ago

      The call topics are published first in the Work Programme, which is available a while before the call opens. By following the Work Programme publications, you can get a head start.

    2. Publication date

      - 1 year ago

      The call was published on the Funding & Tenders Portal.

    3. Opening date

      - 1 year ago

      The call opened for submissions.

    4. Closing date

      - 7 months ago

      Deadline for submitting a project.

    5. Time to inform applicants (estimate)

      - 2 months ago

      The maximum time to inform applicants (TTI) of the outcome of the evaluation is five months from the call closure date.

    6. Today

    7. Sign grant agreement (estimate)

      - 1 week from now

      The maximum time to sign grant agreements (TTG) is three months from the date of informing applicants.

    Funded Projects


    Project information comes from CORDIS (for Horizon 2020 and Horizon Europe) and will be sourced from F&T Portal (for Digital Europe projects)

    Bubbles

    This call topic is part of: