ANTIDOTE

ArgumeNtaTIon-Driven explainable artificial intelligence fOr digiTal mEdicine

Providing high-quality explanations for AI predictions based on machine learning requires combining several interrelated aspects, including, among others: selecting a proper level of generality or specificity for the explanation, accounting for how familiar the explanation's recipient is with the AI task under consideration, referring to the specific elements that contributed to the decision, making use of additional knowledge (e.g. metadata) that might not be part of the prediction process, selecting appropriate examples, providing evidence that supports negative hypotheses, and formulating the explanation in a clearly interpretable, and possibly convincing, way. Based on these considerations, ANTIDOTE fosters an integrated vision of explainable AI, in which low-level characteristics of the deep learning process are combined with the higher-level schemas characteristic of human argumentation. The ANTIDOTE integrated vision is supported by three considerations:

Accordingly, ANTIDOTE will exploit cross-disciplinary competences in three areas, namely deep learning, argumentation and interactivity, to support a broader and innovative view of explainable AI. Although we envision a general integrated approach to explainable AI, we will focus on a number of deep learning tasks in the medical domain, where the need for high-quality explanations, for both clinicians and patients, is perhaps more critical than in other domains.

For more information, please visit the Official Website of the Project.

Services

Argument Mining for Medical Domain

ACTA

ACTA is a tool developed to support the decision-making process in evidence-based medicine by automatically analysing clinical trials for their argumentative components and PICO elements.

Multilingual Argument Component Detection

A multilingual BERT model fine-tuned for argument component detection in the medical domain. It achieves state-of-the-art results on the original English AbstRCT dataset and on its French, Italian and Spanish parallel versions.
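A minimal usage sketch of this kind of model, framed as token classification with Hugging Face transformers. The checkpoint path is a placeholder, not the project's published model id, and the label names are illustrative.

```python
# Sketch: tagging argument components (claims/premises) in a trial abstract with a
# multilingual BERT token-classification model. The checkpoint name is a placeholder.
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="path/to/multilingual-bert-argument-components",  # placeholder checkpoint
    aggregation_strategy="simple",  # merge word pieces into component spans
)

abstract = (
    "The treatment reduced pain scores significantly. "
    "Therefore, it should be considered as a first-line option."
)

for span in tagger(abstract):
    # Each span carries the predicted label (e.g. CLAIM or PREMISE), its text and a score.
    print(span["entity_group"], span["word"], round(span["score"], 3))
```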

Dual-tower Multi-scale cOnvolution neural Network (DMON)

DMON is a model for Argument Structure Learning (ASL), i.e., the classification of relations between arguments. Users can select arguments from the examples and specify which are the head arguments and which are the tail arguments. The model then determines whether the relation between a head and a tail argument is one of support or attack.
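To illustrate the task interface only, the sketch below classifies the relation between a head and a tail argument with a plain cross-encoder; it is not DMON's dual-tower multi-scale architecture, and the checkpoint path and label set are assumptions.

```python
# Sketch: pairwise relation classification between a head and a tail argument
# (support / attack / no relation). Generic cross-encoder, not the DMON architecture.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

checkpoint = "path/to/argument-relation-classifier"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
labels = ["support", "attack", "no-relation"]  # assumed label order

head = "The drug should be adopted as standard therapy."
tail = "Patients in the treatment arm showed a 30% reduction in mortality."

inputs = tokenizer(head, tail, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(labels[int(logits.argmax(dim=-1))])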

Explanation Extraction and Generation

SYMEXP

SYMEXP is a tool developed to provide natural language explanations for already known diagnoses from clinical cases.

Correct Answer Explanation Extraction

An mdeberta-v3-base model fine-tuned for extracting the explanation of the correct answer in medical exams. It has been fine-tuned on English, Spanish, French and Italian data from the casimedicos-exp dataset.
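A minimal sketch of how such an extractive model can be queried, here framed as extractive question answering over the commented exam case. The checkpoint name, the question wording and the context are placeholders, not the project's exact setup.

```python
# Sketch: extracting the explanation span for the correct answer from a commented
# exam case, treated as extractive QA. The checkpoint name is a placeholder.
from transformers import pipeline

extractor = pipeline(
    "question-answering",
    model="path/to/mdeberta-v3-base-explanation-extraction",  # placeholder checkpoint
)

context = (
    "Commented clinical case from a residency exam, including the author's "
    "discussion of why option 3 is correct and the other options are not."
)
question = "Why is option 3 the correct answer?"

result = extractor(question=question, context=context)
print(result["answer"], round(result["score"], 3))
```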

Large Language Models and Benchmarks

Medical mT5

Medical mT5-large-multitask is a version of Medical mT5 fine-tuned for sequence labelling. It obtains state-of-the-art results in labelling a wide range of medical entities (such as Disease, Disability, ClinicalEntity and Chemical) and argument components (premise, claim) in unstructured text. Medical mT5-large-multitask has been fine-tuned for English, Spanish, French and Italian, although it may work with a wider range of languages. Full details can be found in the Medical mT5 paper.
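Because mT5 is a text-to-text model, the multitask variant performs sequence labelling by generating a labelled string. The sketch below assumes the model is published on the Hugging Face Hub under the id shown; both the id and the exact output format are assumptions to verify against the Medical mT5 paper and model card.

```python
# Sketch: sequence labelling with the multitask Medical mT5 model, cast as
# text-to-text generation. Model id and output format are assumptions.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "HiTZ/Medical-mT5-large-multitask"  # assumed Hugging Face model id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "El paciente presenta diabetes tipo 2 y retinopatía."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
# The generated string encodes the predicted labels for the input sequence.
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```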

Medical Question Answering

A Mistral 7B model fine-tuned for medical QA on English MedExpQA, with external knowledge obtained automatically by applying MedRAG using Reciprocal Rank Fusion (RRF-2) of two retrieval algorithms, BM25 and MedCPT, over the MedCorp corpus. We use the entire clinical case, question and multiple-choice options to build the query, which retrieves the k=32 most relevant documents.
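For reference, a minimal sketch of Reciprocal Rank Fusion over two ranked document lists, as used here to combine the BM25 and MedCPT rankings before keeping the top 32 documents. The document ids are illustrative, and the smoothing constant of 60 is the value commonly used in the RRF literature, not necessarily the one used in this setup.

```python
# Sketch: Reciprocal Rank Fusion (RRF) of two retriever rankings (e.g. BM25 and
# MedCPT), keeping the top_n fused documents. Doc ids and constants are illustrative.
from collections import defaultdict

def rrf(rankings, k=60, top_n=32):
    """Fuse ranked lists of doc ids; a higher fused score means more relevant."""
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

bm25_ranking = ["doc42", "doc7", "doc13"]    # ids returned by BM25
medcpt_ranking = ["doc7", "doc99", "doc42"]  # ids returned by MedCPT
print(rrf([bm25_ranking, medcpt_ranking]))
```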