The summary that follows contains excerpts from the open-access volume of the MUHAI Deliverable 1.1:
Steels, Luc (ed.). (2022). Foundations for Meaning and Understanding in Human-centric AI (Version 1-6-2022, 152 pp.). Venice International University. https://doi.org/10.5281/zenodo.6666820
With Foundations for Meaning and Understanding in Human-centric AI, the MUHAI project offers an in-depth, integrated overview of narratives and understanding across different disciplines and research fields.
The volume builds on recent insights and findings from the social and cognitive sciences, the humanities, and other fields in which narratives have been found to play a relevant role in human understanding and decision-making. Its aim is to map the state of the art of narrative-centric studies and to identify the most promising research streams for tomorrow’s AI. In particular:
As the conclusion of this volume highlights, narratives are ubiquitous and essential cognitive goods used in key spheres of our professional and social life, including the socio-economic, artistic, and scientific domains.
Through this work, the MUHAI consortium has taken a first (but far-reaching) step towards meaningful AI, proposing a variety of R&D paths to be explored, implemented, and tested in the next phases of the project. This kind of AI integrates and goes beyond machine learning and statistical methods for pattern recognition, completion, and prediction. It explores how narrative-centric methods can inform the next generation of AI researchers and help them build into their systems the narrative-related aspects of individual and collective human understanding, which have yet to be fully acknowledged in AI research.
Our explorations have yielded a wealth of insights and possible applications of meaningful AI in a diverse set of fields, ranging from the analysis of debates about social inequality to hypothesis generation in scientific research.
By acknowledging the prominent role of narratives in understanding, the volume aims to facilitate the development of new methods that can better complement human understanding processes, eventually helping us identify and mitigate some of the cognitive biases related to narrative fallacies. Meaningful AI methods should be designed to fit real-world situations, where inputs are typically sparse, fragmentary, ambiguous, underspecified, uncertain, vague, occasionally contradictory, and possibly deliberately biased, for example because the producer of the inputs is trying to deceive or manipulate. To understand inputs with these characteristics, different solution paths may have to be considered, at the risk of combinatorial explosion.
Understanding may be hard for AI systems because of the hermeneutic paradox: to understand the whole, you need to understand the parts, but to understand the parts, you need to understand the whole. This calls for integrated control (and meta-control) structures rather than a simple linear flow of processes organised in an automated pipeline.
As this volume suggests, meaningful AI models should be inspired by recent work in knowledge representation, knowledge-based systems, fine-grained language processing, and semantic web technologies, as well as by the other, non-AI research streams explored in this volume, which we recommend reading.
Foundations for Meaning and Understanding in Human-centric AI can be downloaded at this link: https://doi.org/10.5281/zenodo.6666820