In Education: GenAI is a Product First and a Tool Second


By Adam Dubé, PhD


March 27, 2026

The discussion around generative AI (GenAI) in education must stop reacting to, talking about, and planning around some abstracted version of GenAI 'tools' decoupled from their commercial nature. Instead, we must see them for what they are: GenAI education products, ones with design features that are underbaked, overpromising, and currently making education worse at the cost of student learning and teacher labor.

THE PROMISE: GENAI IS AN EDUCATIONAL TOOL

I am a professor of EdTech and have spent the 3.5 years since the launch of ChatGPT in November 2022 following the research and public debate around GenAI in education. I have been reading and writing academic papers and online editorials, giving public talks to teachers, appearing on podcasts to discuss whether AI actually helps students learn (Screen Deep) or whether ChatGPT is fueling an existential crisis in education (Decoder), and attending academic conferences to hear the latest research and innovation.
There is potential for GenAI in education, but that potential is always discussed removed from the messy reality of the final commercial form these products will take.
The 2025 American Educational Research Association conference (one of the largest educational conferences in the world) featured over 400 presentations on AI in education. They were a mix of dedicated researchers creating new evidence-based GenAI tools and learning practices that will never reach most learners' hands, and other researchers building tools that undercut the very idea of learning. The promising ones included rigorously trained and tested GenAI that creates lesson plans for educators, evaluated as better than the typical ones found on the internet, giving educators a starting point and saving time; these are even being used to generate culturally relevant curriculum materials for under-resourced, remote Indigenous communities. The others were researcher-developed GenAI that 'helps' with every step of the writing process: brainstorming, outlining, feedback, and full paragraph generation. These improve the quality of the assignments students produce but likely prevent them from deliberately practicing and building those very skills, because they offload the actual thinking. The good GenAIs were built because they should be; the others because they could be.
This feature-before-function approach unfortunately mirrors how commercial GenAI products like Grammarly actively undercut learning by helping students generate citations for research they never read or 'humanize' AI-assisted text they never wrote.

Can students use GenAI to learn if we show them how? 

Some educational psychologists address students' offloading of thinking to GenAI by promoting self-regulated learning, i.e., teaching students to reflect on their learning goals, strategies, and processes while using GenAI to prevent mindless use. This is framed as fostering AI-empowered skills that will prepare the next generation for a workplace in which GenAI will supposedly be an essential productivity tool. Unfortunately, evidence suggests this may not work. Researchers have built custom GenAIs that scaffold learning by giving students strategy suggestions and asking them to reflect at important moments. These custom GenAIs improve the learning process while they are in use; but take away the custom GenAI, and the advanced learning strategies disappear alongside it.

Critically, no evidence-based GenAI can address their fundamental flaw for learning: "lying."

The errors, or hallucinations, they produce are too frequent and antithetical to a good learning tool. Evidence suggests that hallucination rates are increasing as GenAI gets more powerful (from 16% to 48%) because the ability to hallucinate is not a bug but an essential feature. This is a huge problem for learning. Learners are, by definition, unable to tell when an LLM is wrong; they are learning! Error-laden outputs may be useful for experts using GenAI for productivity, but even this is debatable: coders using GenAI think they are 20% faster with it but are really 19% slower due to time lost fixing errors. To help learners spot errors, educational psychologists look to recommendations from digital literacy and misinformation research, advising students to do lateral reading (i.e., checking alternative sources to verify information). But research from those same fields shows that people stop checking because it is too hard to do. And what good is a resource if you have to fact-check every sentence?

Can we innovate our way out of this problem?  

One proposed solution is to design a GenAI that reports how likely each stated fact is to be true (e.g., pigs can't fly, 100% true). You can instruct any GenAI to do this right now, but these products do not actually have this capacity. I asked ChatGPT to write something untrue and attach confidence judgments: "write a paragraph that cognitive load theory and multiple intelligence theory are essentially the same, with confidence judgments." Each sentence was preposterous yet carried a confidence above 90%. Try this with a subject you know well; it is equally amusing and alarming. Could a GenAI be designed to provide accurate estimates? Perhaps, but the systems in learners' hands today do not, because a) learners may stop using them once they see how much fact-checking is needed, and b) every GenAI system learners use is a product, and no tech company wins by designing its product to lower engagement.

THE REALITY: GENAI IS A PRODUCT

Seeing GenAI as products requires us to shift how we talk about these systems in education. First, the researchers designing and testing GenAI tools must become critical scholars of AI in education, not advocates for some idealized future that is easily co-opted by commercial interests. Second, we must talk about GenAI as the poorly designed, buggy, engagement-focused products they are. All educational technologies serve three common purposes in education (knowledge sources, tutors, tools), and GenAI products are currently not great at any of them. As argued above, GenAI is too inaccurate to be a digital resource or a tutor (e.g., a 45% error rate for news), and using it to get academic work done does not build learners' knowledge or skills.
We can overcome these flaws, but should we have to? Students and teachers can be trained to become better prompters, users, and fact-checkers of GenAI. For example, Microsoft and OpenAI provided $23 million in free teacher training for their GenAI products, and OpenAI made a version of ChatGPT free for teachers. But is it really free teacher training, or is it just cost-effective quality-assurance testing? Shouldn't we be demanding better products, not better users?
Finally, if we integrate the full range of GenAI products into education (everything from tutoring students, generating curriculum materials, lesson plans, and learning materials, and grading students' work, to analyzing student data to design individualized learning trajectories), which tech companies do we trust to become the default platform for education, one that is designed, controlled, and rented back to us for a monthly fee, per user, in perpetuity?

Regulate GenAI education products now, don't wait 20 years, like we did with social media.

Cory Doctorow describes the technology industry as rife with "enshittification", a not-so-gradual process in which digital platforms get worse over time for users as companies chase profit and raise prices. This process is partially responsible for social media's ill effects on youth. Social media companies designed features like infinite scrolling, incessant notifications, and algorithms that recommend negative content, all to keep users engaged, despite knowing these products harmed youth. Only in this past week has society begun to hold them accountable for these intentionally poor design decisions.

We must not let this happen to education, leaving students and teachers to clean up the mess from broken, over-hyped GenAI products. Instead, let's see GenAI in education for the flawed technology product it is and demand far better, or choose to unsubscribe.

