Imagine this scenario: a groundbreaking technology has emerged that offers a fresh, engaging way to deliver information, one that promises to put new knowledge within reach on a regular basis. The invention seems so transformative that one of the world's most eminent entrepreneurs predicts it will radically reshape how schools teach within just a few years.
In 1913, the advent of motion pictures prompted Thomas Edison to predict that textbooks would become obsolete in schools within a decade. Fast forward to today, when the emergence of generative artificial intelligence, especially "large language model" (LLM) chatbots such as ChatGPT, has sparked similar predictions about revolutionizing education. Bill Gates, for instance, predicted in April 2023 that generative AI would rival human tutors within 18 months (that is, by October 2024). Meanwhile, Sal Khan, the founder of Khan Academy, anticipates that AI will bring the most significant transformation education has ever seen, and his educational chatbot Khanmigo is already making inroads into schools.
This recurring pattern of grand promises and underwhelming results in educational technology raises an obvious question: why do we keep falling into the trap? One key reason is that we misunderstand how technology interacts with education, because we misunderstand human cognition and how learning actually happens.
Consider "theory of mind," a cornerstone of cognitive science: our ability to attribute mental states to ourselves and others. Educators rely on a rough model of their students' minds to spot misconceptions and build on existing knowledge when teaching new concepts, and they sharpen this ability through cultural practices such as conversation and cooperative interaction, staples of traditional classrooms.
Unlike humans, current LLM-based technologies such as ChatGPT and Khanmigo have no capacity to develop a theory of mind. They instead generate text by statistical prediction, producing a semblance of human conversation that can deceive the unwary observer. The limitation becomes apparent when these chatbots are asked to solve algebra problems: their responses often veer away from accurate reasoning, exposing the facade of the interactive tutoring they purport to offer.
The shortcomings of tools like Khanmigo extend beyond mere inaccuracies in mathematical computations; they engender a facade of conversational engagement without genuine cognitive depth, hindering rather than facilitating learning processes. As cognitive scientist Gary Marcus aptly puts it, generative AI tends to be “frequently wrong, never in doubt,” a troubling trait for any educational facilitator. The lack of empirical evidence supporting the efficacy of these tools in enhancing student learning further underscores the need for a cautious approach in integrating them into educational settings.
This cautionary stance may seem like skepticism or resistance to technological progress, but it underscores a critical need for discernment in evaluating the true impact of transformative technologies on education’s core mission of fostering critical thinking. By delving deeper into the science of cognition and scrutinizing the underlying principles of generative AI, we can make informed decisions regarding the implementation—or non-implementation—of these tools in educational contexts.
Benjamin Riley, founder of Cognitive Resonance, works to bridge the gap between the science of human cognition and generative AI, in hopes of informing a more thoughtful approach to education.
The post Generative AI in Education: Another Mindless Mistake? appeared first on Education Next.