Towards Interactive and Social Explainable Artificial Intelligence for Digital History
Albrecht R., Hulstijn J., Tchappi I., Najjar A.
Lecture Notes in Computer Science, vol. 14847 LNAI, pp. 189–202, 2024
Due to recent developments and improvements, methods from the field of machine learning (ML) are increasingly being adopted in various domains, including historical research. However, state-of-the-art ML models are usually black boxes that lack transparency and interpretability. Explainable AI (XAI) methods therefore try to make black-box models more transparent in order to inspire user trust. Despite numerous opportunities to apply XAI in digital history, such methods have not been widely adopted, and most of the XAI methods applied to generate historical insights are static rather than user-centric. In this paper, we propose an architecture for applying XAI in digital history that can be used for various tasks such as optical character recognition (OCR), text embedding, or ink detection. Instead of providing one-shot explanations, the proposed architecture produces interactive explanations that incrementally co-construct the user's understanding of the AI system's output. Because many tasks in digital history research lack ground truth, verifying model outputs is difficult for historical researchers. We therefore propose a user-centric framework to enhance user trust in the system, which is also crucial for verifying the outputs of a black-box model.
doi:10.1007/978-3-031-70074-3_11
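To make the contrast between one-shot and interactive explanations concrete, here is a minimal, purely illustrative Python sketch of a dialogue-style explanation loop. It is not the architecture described in the paper: the class InteractiveExplainer, its explain/refine methods, and the toy attribution scores are assumptions introduced for illustration only. The point it demonstrates is that each follow-up question reveals additional evidence, so understanding is built up incrementally across dialogue turns rather than delivered all at once.

```python
"""Hypothetical sketch of an interactive explanation loop.

All names (InteractiveExplainer, explain, refine) are illustrative
assumptions, not the authors' actual architecture.
"""

from dataclasses import dataclass, field


@dataclass
class Explanation:
    """One step in an incrementally co-constructed explanation."""
    summary: str
    evidence: list[str] = field(default_factory=list)


class InteractiveExplainer:
    """Keeps dialogue state so each follow-up refines, rather than
    replaces, the explanation (contrast: a one-shot explanation)."""

    def __init__(self, prediction: str, feature_scores: dict[str, float]):
        self.prediction = prediction
        # e.g. attribution scores produced by any post-hoc XAI method
        self.feature_scores = feature_scores
        self.history: list[Explanation] = []

    def explain(self) -> Explanation:
        """Initial, coarse explanation: the single top contributing feature."""
        top = max(self.feature_scores, key=self.feature_scores.get)
        exp = Explanation(
            summary=f"Predicted '{self.prediction}' mainly because of '{top}'.",
            evidence=[top],
        )
        self.history.append(exp)
        return exp

    def refine(self, question: str) -> Explanation:
        """Follow-up turn: reveal the strongest feature not yet shown."""
        shown = {e for exp in self.history for e in exp.evidence}
        remaining = {f: s for f, s in self.feature_scores.items()
                     if f not in shown}
        if not remaining:
            exp = Explanation(summary="No further contributing features.")
        else:
            nxt = max(remaining, key=remaining.get)
            exp = Explanation(
                summary=(f"In answer to {question!r}: '{nxt}' also "
                         f"contributed (score {remaining[nxt]:.2f})."),
                evidence=[nxt],
            )
        self.history.append(exp)
        return exp


if __name__ == "__main__":
    # Toy OCR-style example: why was a glyph transcribed as 'e'?
    explainer = InteractiveExplainer(
        prediction="e",
        feature_scores={"closed upper loop": 0.52,
                        "horizontal bar": 0.31,
                        "stroke width": 0.11},
    )
    print(explainer.explain().summary)
    print(explainer.refine("What else mattered?").summary)
```

In this sketch, the dialogue history is the mechanism of co-construction: what the system says next depends on what the user has already seen and asked, which is the behavior the abstract attributes to interactive explanations.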