
    Is AI in Schools Promising or Overhyped? Potentially Both, New Reports Suggest

    By Greg Toppo

    2 days ago

    Are U.S. public schools lagging behind other countries like Singapore and South Korea in preparing teachers and students for the boom of generative artificial intelligence? Or are our educators bumbling into AI half-blind, putting students’ learning at risk?

    Or is it, perhaps, both?

    Two new reports, coincidentally released on the same day last week, offer markedly different visions of the emerging field: One argues that schools need forward-thinking policies for equitable distribution of AI across urban, suburban and rural communities. The other suggests they need something more basic: a bracing primer on what AI is and isn’t, what it’s good for and how it can all go horribly wrong.

    A new report by the Center on Reinventing Public Education, a non-partisan think tank at Arizona State University, advises educators to take a more active role in how AI evolves, saying they must articulate to ed tech companies in a clear, united voice what they want AI to do for students.

    The report recommends that a single organization work with school districts to tell ed tech providers what AI tools they want, warning that if 18,000 school districts send “diffuse signals” about their needs, the result will be “crap.”

    It also says educators must work more closely with researchers and ed tech companies in an age of quickly evolving AI technologies.

    “If districts won’t share data with researchers — ed tech developers are saying they’re having trouble — then we have a big problem in figuring out what works,” CRPE Director Robin Lake said in an interview.

    The report urges everyone, from teachers to governors, to treat AI as a disruptive but possibly constructive force in classrooms. It warns of already-troubling inequities in how AI is employed in schools, with suburban school districts more than twice as likely as their urban and rural counterparts to train teachers about AI.

    The findings, which grew out of an April convening of more than 60 public and private officials, paint AI as a development akin to extreme weather and increasing political extremism, one that will almost certainly have wide-ranging effects on schools. The authors urge educators to explore how other school districts, states and even other nations are tackling their huge post-pandemic educational challenges with “novel” AI solutions.

    For instance, in Gwinnett County, Ga., educators started looking at AI-enabled learning as far back as 2017. They’ve since created an AI Learning Framework that aligns with the district’s “portrait of a graduate,” designed a three-course AI curriculum pathway in career and technical education with the state, and launched a new school that integrates AI across disciplines.

    Lake pointed to models in states like Indiana, which is offering “incentives for experimentation,” such as a recent invitation to develop AI-enabled tutoring. “It allows a structure for districts to say, ‘Yes, here’s what I want to do.’”

    But she also said states need to put guardrails on the experimentation to avoid situations such as that of Los Angeles Unified School District, which in June took its heavily hyped, $6 million AI chatbot offline after the tech firm that built it lost its CEO and shed most of its employees.

    “You can’t eliminate all risk — that’s just impossible,” Lake said. “But we can do a much better job of creating an environment where districts can experiment and hold student interests.”

    Related

    Was Los Angeles Schools’ $6 Million AI Venture a Disaster Waiting to Happen?

    AI ‘automates cognition’

    By contrast, the report by Cognitive Resonance, a newly formed Austin, Texas-based think tank, starts with a startling assertion: Generative AI in education is not inevitable and may actually be a passing phase.

    “We shouldn’t assume that it will be ubiquitous,” said the group’s founder, Benjamin Riley. “We should question whether we want it to be ubiquitous.”

    The report warns of the inherent hazards of using AI for bedrock tasks like lesson planning and tutoring — and questions whether it even has a place in instruction at all, given its ability to hallucinate, mislead and basically outsource student thinking.

    Riley is a longtime advocate for the role of cognitive science in K-12 education — he founded Deans for Impact, which sought to raise awareness of learning science among teachers college deans. He said that what he and his colleagues have seen of AI in education makes them skeptical it’s going to be as groundbreaking and disruptive as the participants in CRPE’s convening believe.

    “I profoundly question the premise, which is that we actually know that this technology is improving learning outcomes or other important student outcomes at this point,” he said in an interview. “I don’t think [Lake] has the evidence for that. I don’t think anybody has any evidence for that, for no other reason than this technology is hardly old enough to be able to make that determination.”

    By its very nature, Riley said, generative AI is a tool that “automates cognition” for those who use it. “It makes it so you don’t have to think as much. If you don’t have to think as much, you don’t have to learn as much.”

    Riley recently ruffled feathers in the ed tech world by suggesting that schools should slow down their adoption of generative AI. He took Khan Academy to task for promoting its AI-powered Khanmigo chatbot, which has been known to get math facts wrong. It also engages students in what he terms “an illusion of a conversation.”

    Technology like AI displays “just about the worst quality I can imagine” for an educator, he said, invoking the cognitive scientist Gary Marcus, who has said generative AI is “frequently wrong, never in doubt.”

    Related

    Benjamin Riley: AI is Another Ed Tech Promise Destined to Fail

    Co-authored by Riley and University of Illinois education policy scholar Paul Bruno, the report urges educators to, in a sense, take a deep breath and more carefully consider the capabilities of LLMs specifically and AI more generally. Its four sections are set off by four question-and-answer headings that seek to put the technology in its place:

    • Do large language models learn the way that humans do? No.
    • Can large language models reason? Not like humans.
    • Does AI make the content we teach in schools obsolete? No.
    • Will large language models become smarter than humans? No one knows.

    Actually, Riley said, AI may well be inevitable in schools, but not in the way most people believe.

    “Will everybody use it for something?” he said. “Probably. But I just don’t know that those ‘somethings’ are going to be all that relevant to what matters at the core of education.” Instead, he suggested, such uses could help with more mundane tasks like scheduling and grading.

    Notably, Riley and Bruno confront what they say is a real danger in trusting AI for tasks such as tutoring and lesson planning. In lesson planning, for instance, large language models may not correctly predict what sequence of lessons will effectively build student knowledge.

    Related

    AI ‘Companions’ are Patient, Funny, Upbeat — and Probably Rewiring Kids’ Brains

    And because much of the online instructional material that developers likely train their models on is of poor quality, the lesson plans those models generate can be equally weak. “The more complex the topic, the more risk there is that LLMs will produce plausible but factually incorrect materials,” they say.

    To head that possibility off, they say, educators should feed the models examples of high-quality content to emulate.
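
    That advice maps onto what practitioners call few-shot prompting: placing one or more high-quality examples directly in the prompt for the model to imitate. The sketch below is a loose illustration, not anything the report prescribes; it assumes an OpenAI-style chat API, and the model name and exemplar lesson text are hypothetical placeholders.

```python
# A rough sketch of "feeding the model high-quality content to emulate."
# Assumes the openai Python client (v1+); the model name and the exemplar
# lesson are hypothetical placeholders, not recommendations from the report.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

EXEMPLAR_LESSON = """\
Objective: Students will model linear relationships using real-world data.
Warm-up (5 min): Estimate answers to a data-based question and discuss.
Direct instruction (15 min): Define slope and intercept in context.
Guided practice (20 min): Fit a line to class-collected data.
Check for understanding: Exit ticket with one interpretation question.
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": "You are a curriculum writer. Match the structure, "
                       "specificity and rigor of the exemplar lesson plan.",
        },
        {
            "role": "user",
            "content": f"Exemplar lesson plan:\n{EXEMPLAR_LESSON}\n"
                       "Draft a comparable lesson plan introducing "
                       "proportional reasoning to 7th graders.",
        },
    ],
)
print(response.choices[0].message.content)
```

    Even with strong exemplars, the authors’ caveat stands: the output is a draft for expert review, and the risk of plausible but factually incorrect material grows with the complexity of the topic.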

    When it comes to tutoring, educators should know, quite simply, that LLMs “do not learn from their interactions with students,” but from training data, the report notes. That means LLMs may not adapt to the specific needs of the students they’re tutoring.
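
    The limitation is architectural: a deployed model’s weights are frozen at inference time, so any apparent adaptation comes from the conversation transcript the application resends with each request, and it disappears when the session ends. The sketch below illustrates that statelessness under the same assumptions as above; the function and variable names are invented for illustration.

```python
# Sketch: an LLM tutor is stateless across API calls, and the model's
# weights never update from a session. Any per-student "memory" is just
# the transcript the application chooses to resend each turn.
from openai import OpenAI

client = OpenAI()
history = [{"role": "system", "content": "You are a patient math tutor."}]

def tutor_turn(student_message: str) -> str:
    """One tutoring exchange: append to, resend, and extend the transcript."""
    history.append({"role": "user", "content": student_message})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=history,     # the only student context the model ever sees
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(tutor_turn("I keep mixing up 3/4 and 4/3. Which one is bigger?"))
# Discard `history` and the next session starts from scratch: nothing the
# student said feeds back into the model's weights or training data.
```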

    The two reports come as Lake and Riley emerge as key figures in the AI-in-education debate. Already this summer they’ve engaged in an open discussion about the best way to approach the topic, disagreeing politely in their newsletters.

    In a way, CRPE’s report can be seen as both a response to the hazards that Riley and Bruno point out and a call to action for educators and policymakers who want to exert more control over how AI develops. Riley and Bruno offer short-term advice and guidance for those who want to dig into how generative AI actually works, while CRPE lays out a larger strategic vision.

    A key takeaway from CRPE’s April convening, Lake said, was that the 60 or so experts gathered there didn’t represent all the views needed to make coherent policy. “There was a really strong feeling that we need to broaden this conversation out into communities: to civil rights leaders, to parents, to students.”

    The lone student who attended, Irhum Shafkat, a Minerva University senior, told the group that growing up in Bangladesh, his educational experiences were limited. But access to Khan Academy, which has since invested heavily in AI, helped bolster his skills and develop an interest in math. “It changed my life,” he said. “The promise of technology is that we can make learning not a chance event. We could create a world where everybody can rise up as high as their skills should have been.”

    Related

    A Cautionary AI Tale: Why IBM’s Dazzling Watson Supercomputer Made a Lousy Tutor

    Lake said Shafkat’s perspective was important. “I think it really struck all of us how essential it is to let young people lead right now: Have them tell us what they need. Have them tell us what they’re learning, what they want.”

    The CRPE report urges everyone from teachers to philanthropists and governors to focus on emerging problem-solving tools that work well enough to be adopted widely: better translation and text-to-voice support for English learners, better feedback for students and summaries of research for educators. In other words, practical applications.

    Or as one convening participant advised, “Don’t use it for sexy things.”
