Title: The Multimodal Learning Analytics Handbook
Author: Michail Giannakos, Daniel Spikol, Daniele Di Mitri
Publisher: Springer
Year: 2022
Pages: 362
Language: English
Format: pdf (true), epub
Size: 25.9 MB
This handbook is the first book to cover the area of Multimodal Learning Analytics (MMLA). MMLA is an emerging domain within Learning Analytics and plays an important role in expanding the Learning Analytics goal of understanding and improving learning in all the environments where it occurs. The challenge for research and practice in this field is to develop theories about the analysis of human behaviors during diverse learning processes and to create useful tools that augment the capabilities of learners and instructors in a way that is ethical and sustainable. Behind this area stands the CrossMMLA research community, which exchanges ideas on how to analyze evidence from multimodal and multisystem data, how to extract meaning from the increasingly fluid and complex data coming from different kinds of transformative learning situations, and how best to feed the results of these analyses back to achieve positive transformative actions on those learning processes.
The goal of this book is to introduce the reader to the field of MMLA and to provide a comprehensive overview of contemporary MMLA research. The contributions come from diverse contexts and support different objectives and stakeholders (e.g., learning scientists, policymakers, technologists). In this first introductory chapter, we present the history of MMLA and its ongoing challenges, give a brief overview of the book's contributions, and conclude by highlighting emerging technologies and practices connected with MMLA.
The intersection of data coming from different modalities (multimodal data) and advanced computational analyses can not only improve our understanding of how humans learn, but also provide novel affordances that enhance our learning capacities. In recent years there has been growing research interest in the collection and analysis of rich data that leverages multiple data channels from various sources and in different modalities, i.e., multimodal learning analytics (MMLA). MMLA maintains Learning Analytics' overarching goal of understanding and improving learning in all the environments where it occurs, by exploiting the new opportunities that arise once we capture new forms of digital data from students' learning activity and apply computational analysis techniques from data science and Artificial Intelligence (AI). At the same time, MMLA expands Learning Analytics methodologies, tools, and potential implications by leveraging advances in Machine Learning (ML) and affordable sensor technologies to act as a virtual observer and analyst of learning activities.
Multimodal data can complement our understanding of how humans learn by providing more information on the (meta)cognitive, affective, and behavioural aspects of learning, and can enrich the digital representation of the learner in the computer. Recent work highlights how MMLA research enables us to extract insights from text, speech, gesture, affect, or gaze analysis, and to use those insights and indicators to provide automated feedback that stimulates learners' awareness and reflection. With today's increased availability and complexity of data, and with advanced data analysis techniques, new challenges and opportunities also arise. As in the traditional learning sciences, the goal of MMLA is to understand and explain how humans learn. At the same time, MMLA research holds significant potential for advancing the learning sciences and supporting both empirical and theoretical research, thanks to its capacity to observe learning activity at the micro-level and to “sense” cognitive, affective, and psychomotor factors.
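To make the kind of pipeline described above more concrete, here is a minimal, purely illustrative Python sketch of how per-modality indicators (speech, gaze, gesture, affect) for one time window of a learning activity might be fused into a single engagement score and mapped to automated feedback. All class names, fields, thresholds, and the unweighted-average fusion are assumptions for illustration only; they are not taken from the handbook, which would typically use trained ML models rather than a fixed formula.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical, simplified multimodal sample: one time window of a learning
# activity described by a few per-modality indicators. All names and scales
# are illustrative assumptions, not from the handbook.
@dataclass
class MultimodalWindow:
    speech_activity: float   # fraction of the window the learner spoke (0..1)
    gaze_on_task: float      # fraction of gaze samples on the task area (0..1)
    gesture_rate: float      # gesture frequency, normalised to 0..1
    arousal: float           # affect estimate from a wearable, normalised to 0..1

def engagement_index(window: MultimodalWindow) -> float:
    """Fuse the per-modality indicators into a single engagement score.

    A naive unweighted average, standing in for the model-based fusion
    (e.g., a trained ML classifier) that a real MMLA pipeline would use.
    """
    return mean([
        window.speech_activity,
        window.gaze_on_task,
        window.gesture_rate,
        window.arousal,
    ])

def feedback_message(score: float) -> str:
    """Map the fused score to a simple reflective prompt for the learner."""
    if score < 0.4:
        return "Low engagement in this window: consider a short check-in."
    if score < 0.7:
        return "Moderate engagement: keep an eye on gaze and participation."
    return "High engagement: no action suggested."

if __name__ == "__main__":
    window = MultimodalWindow(speech_activity=0.35, gaze_on_task=0.8,
                              gesture_rate=0.5, arousal=0.45)
    score = engagement_index(window)
    print(f"engagement={score:.2f} -> {feedback_message(score)}")
```

The sketch only shows the general shape of such a pipeline: sensor streams are reduced to window-level indicators, the indicators are fused, and the result is fed back to the learner to support awareness and reflection.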
Download The Multimodal Learning Analytics Handbook