Providing high-quality explanations for AI predictions based on machine learning requires combining several interrelated aspects, including, among others: selecting an appropriate level of generality or specificity for the explanation; considering assumptions about the beneficiary's familiarity with the AI task under consideration; referring to the specific elements that contributed to the decision; making use of additional knowledge (e.g. metadata) that might not be part of the prediction process; selecting appropriate examples; providing evidence that supports negative hypotheses; and formulating the explanation in a clearly interpretable, and possibly convincing, way. In line with these considerations, ANTIDOTE fosters an integrated vision of explainable AI, where low-level characteristics of the deep learning process are combined with the higher-level schemas characteristic of human argumentation.
ACTA is a tool that supports the decision-making process in evidence-based medicine through the automatic analysis of arguments in clinical trials.
A multilingual BERT model fine-tuned for Argument Component detection on the multilingual AbstRCT corpus.
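As a hedged illustration of this component, the sketch below frames argument component detection as BIO token classification with bert-base-multilingual-cased. The label set (claims and premises, loosely following the AbstRCT annotation scheme) is an assumption, and the classification head shown here is freshly initialised, so real use requires fine-tuning on the annotated corpus first.

```python
# Minimal sketch: argument component detection as BIO token classification.
# The label set is an assumption; the head is untrained until fine-tuned.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

LABELS = ["O", "B-Claim", "I-Claim", "B-Premise", "I-Premise"]  # assumed scheme

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=len(LABELS)
)

text = "Treatment A reduced mortality, therefore it should be preferred."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, num_labels)
pred = logits.argmax(-1)[0]

# Print one predicted label per subword token.
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, label_id in zip(tokens, pred):
    print(f"{token:>12}  {LABELS[label_id]}")
```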
DMON is a tool for Argument Structure Learning (ASL) on medical data.
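As a rough, hedged sketch of the task DMON addresses, argument structure learning can be cast as pairwise relation classification between argument components. The cross-encoder baseline below is a generic illustration, not DMON's actual architecture, and the relation label set is an assumption.

```python
# Generic baseline sketch for argument structure learning: classify the
# relation between a pair of components. Not DMON's actual model.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

RELATIONS = ["NoRelation", "Support", "Attack"]  # assumed label set

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=len(RELATIONS)
)

premise = "The treatment group showed a significant drop in blood pressure."
claim = "The drug is effective for hypertension."

# Encode the component pair jointly; the head must be fine-tuned on
# annotated argument graphs before the prediction is meaningful.
inputs = tokenizer(premise, claim, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)[0]
print({rel: round(p.item(), 3) for rel, p in zip(RELATIONS, probs)})
```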
A multilingual approach to Correct Answer Explanation extraction in medical exams.
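One simple, hedged way to picture this task: rank the sentences of an explanatory comment by semantic similarity to the gold answer and extract the best match. The checkpoint and the sentence-selection framing below are illustrative assumptions, not the project's actual system.

```python
# Sketch: explanation extraction as sentence selection by similarity.
# Model name and framing are assumptions for illustration only.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

answer = "Beta blockers are contraindicated in acute asthma."
candidates = [
    "The patient presents with wheezing and dyspnoea.",
    "Beta blockers can trigger bronchospasm, so they must be avoided here.",
    "Follow-up is scheduled in two weeks.",
]

# Score each candidate sentence against the gold answer and keep the best.
scores = util.cos_sim(model.encode(answer), model.encode(candidates))[0]
best = int(scores.argmax())
print(candidates[best], float(scores[best]))
```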
An open-source multilingual text-to-text LLM for the medical domain, fine-tuned for multi-task and multilingual sequence labelling.
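To make the text-to-text framing concrete, the hedged sketch below casts sequence labelling as generation, where the model rewrites the input with inline tags around target spans. The stand-in checkpoint google/mt5-small and the tagging format are assumptions; the project's fine-tuned medical model and its exact output format may differ.

```python
# Sketch: sequence labelling as text-to-text generation. The base mT5
# checkpoint here is a stand-in and produces meaningful tags only after
# fine-tuning on labelled data; the inline tag format is an assumption.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google/mt5-small")

# The model is trained to copy the input while wrapping target spans in
# tags, e.g. "... <claim> the drug is effective </claim> ..."
prompt = "Label argument components: The trial shows the drug is effective."
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

A practical appeal of this formulation is that one generative model can serve several labelling tasks and languages at once, with the task signalled in the prompt.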
A multilingual benchmarking of LLMs for medical QA, including a fine-tuned Mistral-7B model combined with retrieval-augmented generation (RAG).
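A minimal, hedged sketch of such a retrieval-augmented QA pipeline: embed a small corpus, retrieve the passages closest to the question, and prepend them to the prompt before generation. The checkpoint names, the toy corpus, and the prompt template are illustrative assumptions, not the benchmark's exact configuration.

```python
# Sketch of retrieval-augmented medical QA: retrieve, then generate.
# Checkpoints, corpus, and prompt template are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util
from transformers import pipeline

corpus = [
    "Metformin is first-line therapy for type 2 diabetes.",
    "ACE inhibitors are contraindicated in pregnancy.",
    "Amoxicillin is commonly used for community-acquired pneumonia.",
]
retriever = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
doc_emb = retriever.encode(corpus)

question = "What is the first-line drug for type 2 diabetes?"
scores = util.cos_sim(retriever.encode(question), doc_emb)[0]
context = "\n".join(corpus[int(i)] for i in scores.topk(2).indices)

# Large download; the checkpoint may require accepting its licence on the Hub.
generator = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.2")
prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
print(generator(prompt, max_new_tokens=64)[0]["generated_text"])
```

Grounding the answer in retrieved passages is the usual motivation for RAG in this setting: the model can cite evidence rather than rely solely on parametric knowledge.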
This work is supported by the CHIST-ERA grant of the XAI 2019 call, funded by the ANR under grant number Project-ANR-21-CHR4-0002.