The world needs a ‘premortem’ on generative AI and its use in education
In 2007, Steve Jobs introduced the iPhone, a revolutionary device that captured global attention and changed the landscape of both technology and education. This pocket-sized computer offered students unprecedented access to information through its Safari browser, making multimedia learning more accessible—especially for those with disabilities. It also provided a more affordable alternative to laptops, helping democratize access to learning opportunities for those who could afford an iPhone, and soon, Android alternatives.
While some educators and parents initially voiced trepidation about potential distractions, the excitement surrounding this portable, multifunctional device drowned out most concerns. Over time, however, enthusiasm began to wane as troubling issues surfaced. Students, along with their teachers and parents, increasingly found themselves glued to these digital distraction devices, with a host of negative consequences: declining student well-being and rising rates of depression, anxiety, and even suicidal thoughts. Teachers struggled to capture the attention of students who were often distracted by their smartphones. In much of the world today, the dominant discourse is not about how helpful phones are, but rather how harmful.
This cautionary tale is all too familiar today. The story of mobile phones reflects a broader theme in technological advancement: every innovation—from television to social media—carries both benefits and drawbacks. As Melvin Kranzberg, a technology historian, observed, “technology is neither good nor bad; nor is it neutral.” While we can predict some of these effects, others take years to manifest. Thomas Edison, the inventor of the light bulb, famously predicted that electricity would liberate women from housework, but did not anticipate that the electrification of homes would add new duties to housekeeping, like vacuuming.
Given technology’s dual nature, its unanticipated consequences, and the difficulty of predicting the trajectory of its impact—even by its developers—we must exercise caution in the claims we make about technology and anticipate and address potential negative impacts as new tools continue to be widely adopted.
A new Brookings global task force on AI and education
Rather than waiting five to ten years to discover the negative impacts of artificial intelligence (AI), we at the Center for Universal Education (CUE) have embarked on a two-year initiative to conduct a “premortem” on generative AI in the context of global education. This proactive approach aims to identify potential first- and second-order negative impacts; explore actions to mitigate these negative impacts; and identify strategies so that AI can help educators address the most pressing educational problems while also supporting teachers and students. The task force will explore answers to two central questions:
- What are the potential risks that generative AI poses to children’s education, learning, and development from early childhood through secondary school?
- Knowing these risks, what can we do differently today to harness the opportunities that AI offers for children’s learning and development?
AI is not new. For years, Intelligent Tutoring Systems have harnessed AI elements to provide students with personalized feedback and guidance. The release of a free version of ChatGPT in November 2022 transformed both our understanding of AI and conversations about this new tool. As with many technologies, rapid technological developments in generative AI have far outpaced the debates, policies, and regulatory frameworks governing its role in education. According to UNESCO, as of three years ago, only seven countries had AI frameworks or programs for teachers and only 22 had AI competency frameworks for students. School systems around the world are grappling with what generative AI capabilities mean for the daily practice of teaching and learning, alongside what they mean for the very grammar of schooling itself. Some organizations are helping chart the way with evolving resources to guide schools, such as Teach AI’s toolkit, which outlines seven principles for using AI in education, including maintaining human decisionmaking when using AI.
Of central concern to the education community is ensuring teachers, and educators at all levels, are not only participating in but driving the dialogue on AI use in education. “The fire is at the teachers’ feet, the environment in which they are teaching is changing and they are having the rug pulled out from under them without support,” says Armand Doucet, senior advisor for artificial intelligence in education in the government of New Brunswick, Canada. “The support they need goes way beyond training on using particular tools,” he argues.
Indeed, one of the major questions with which the task force will grapple is the potential cost of ceding our intellectual labor to AI. The proliferation of more powerful and sophisticated AI-in-education tools raises fundamental questions about the roles of teachers and students. Take teachers as a case in point: AI tools increasingly automate teachers’ work, such as lesson planning, instructional differentiation, and student grading and feedback, potentially saving teachers hours of work and improving their ability to support students. But at what point do these efficiency gains erode the deep, personalized knowledge of students and the human insight that are at the professional core of teaching? This question is equally relevant for children’s own learning and development. Thus, educators are faced with numerous questions: Which tasks should generative AI take over, and which tasks must remain human-driven? Education systems must carefully consider which skills to preserve in this rapidly evolving landscape, balancing the benefits of AI automation with those of human-centered instruction.
A vision of positive human-AI collaboration
By fostering open dialogue, reflection, and critical analysis, we can hopefully anticipate challenges, identify opportunities, and develop ethical frameworks to guide AI’s integration into education. It is our hope that this inclusive approach will help us harness AI’s benefits while mitigating its risks, ensuring that technology enhances rather than degrades teaching and learning. Though we cannot predict every impact of this still rapidly evolving technology, through collective reflection we can become more aware, informed, and prepared to address potential ill effects proactively, steering AI’s integration toward a more positive and equitable educational future where human-AI collaboration thrives. Ultimately, we hope these insights will help us reconnect with the true purpose of education and reexamine our fundamental beliefs about what education should be in order to foster engaged, agentic learners who have the skills needed to co-create a more just, peaceful, and sustainable world.
We would love to hear your thoughts! We invite you or your organization to share your insights with us as we embark on this journey to conduct a premortem on AI in education by emailing [email protected].