A college student, bleary-eyed and over-caffeinated, slouches over her desk. Racing to finish her essay, she turns to ChatGPT, which instantly flags typos and formatting issues that might escape even the sharpest eyes.
But consider this scenario unfolding against a darker backdrop. As generative AI expands its scope, our collective critical thinking skills begin to atrophy. Plagiarism and misinformation spread unchecked. The very tools designed to enhance learning may, ironically, undermine it.
Welcome to the AI debate echoing through the halls of higher education. As classes resume this fall, Canadian universities are wrestling with a pressing challenge: balancing AI’s vast potential against such concerns as academic misconduct and skill erosion.
Robert Clapperton, a professor at Toronto Metropolitan University who studies AI in education, says that the technology’s growing reach is compelling more people to engage with it. “We have this behemoth of a tool and we need to figure out a way to corral and harness it,” he says.
A co-founder of the AI-powered education platform Ametros Learning, Clapperton uses generative AI to teach a course on persuasive communication. He says the technology has made him more productive as a professor. “The question is,” he adds, “how do we teach with AI so it can become a collaborative agent for students?”
AI on campus
A 2024 poll by KPMG found that 59 per cent of Canadian post-secondary students reported using generative AI for schoolwork, up from 53 per cent a year earlier. The most common uses included brainstorming, research and summarizing information. And students aren’t the only ones leveraging the technology. A recent Conference Board of Canada survey showed widespread support among educators for using AI in school.
Nearly three years after ChatGPT’s debut, many Canadian universities have developed detailed guidelines on AI use. The University of Toronto aims to become an “AI-ready institution” with a “robust, flexible and responsive” technological ecosystem. McGill permits AI use in certain instances but warns against overreliance, likening it to trying to learn to drive by watching from the sidewalk. Meanwhile, the University of Waterloo will stop using an AI detection tool this September, citing inaccuracy and potential bias against non-native English speakers.
The push to refine classroom policies reflects both the transformative power of this technology and its inherent challenges. Rapid advances in AI are upending traditional student assessments, from exams to essay writing. Some Canadian universities have reinstated in-person written exams amid concerns about AI-assisted cheating. At the same time, certain courses focused on career readiness risk losing their appeal. Just ask recent computer science graduates, many of whom have struggled to land entry-level coding jobs increasingly automated by AI.
Even law schools — long considered cloistered institutions slow to change — are adapting. At the University of British Columbia, law students working at legal clinics can now use an AI-powered tool to help develop sharper legal advice for people who cannot afford counsel. And in a sign that AI is gaining traction in legal education, York University’s Osgoode Hall Law School launched a search for a professor of artificial intelligence and the future of law earlier this year.
Mark Doble, the CEO of Toronto-based legal software firm Alexi, sees generative AI as a transformative tutor for students. Help with interpretation of case law? Check. Drafting and translating documents? Check. In his view, the technology has the potential to allow students to focus on higher-order skills.
Take articling, a long-standing rite of passage for aspiring lawyers. Doble believes AI will reshape the experience but won’t make it obsolete. “Law school is very academic. Articling is less so but still focuses on fundamental principles with more traditional pedagogy,” he says. “With AI, articling students will be able to focus more directly on client outcomes and learn about relationship building and strategy.”
Still, he cautions against “throwing AI at everything.” Ultimately, AI should complement — not replace — learning. Doble recalls a recent conversation with a Canadian law school dean who saw parallels in how calculators shaped math education. Students may no longer need to do long division by hand, but they still need to understand how it works conceptually.
Efficiency booster or gateway to passive learning?
For academics like Alex Davis, a post-doctoral researcher at the University of Toronto, AI remains a powerful tool for boosting efficiency. A specialist in the electronic properties of industrial materials, Davis often writes code to run statistical analyses and generate data-packed charts and diagrams. When his code hits a snag, he usually consults ChatGPT. “Half the time, it can figure it out,” he says.
Davis, who has a paid subscription to ChatGPT, also uses generative AI to find academic papers, a task he once left to Google Scholar. In his experience, the tool handles vague queries better, surfacing studies Google might miss. Still, Davis is no blind adherent. “When I get a research link, I’ll check the original paper,” he says. “And I don’t have to trust the code ChatGPT produces. I can just run it and see what happens.”
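The article doesn’t reproduce any of Davis’s code, but his “just run it” habit is easy to illustrate. The sketch below, in Python with invented data and variable names (nothing here comes from Davis’s actual work), shows how a researcher might verify an AI-suggested analysis empirically: run the suggestion on an input where the correct answer is already known.

```python
# Hypothetical sketch (not Davis's actual code): checking an AI-suggested
# statistical analysis by running it, rather than trusting it on faith.
import numpy as np
from scipy import stats

# Synthetic stand-ins for real materials measurements.
rng = np.random.default_rng(seed=42)
sample_a = rng.normal(loc=5.0, scale=0.5, size=200)
sample_b = rng.normal(loc=5.2, scale=0.5, size=200)

# Suppose a chatbot suggested a two-sample t-test to compare the groups.
t_stat, p_value = stats.ttest_ind(sample_a, sample_b)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")

# The "just run it and see" check: feed the suggestion an input with a
# known answer. Comparing a sample against itself should give p = 1.
_, p_identical = stats.ttest_ind(sample_a, sample_a)
assert p_identical > 0.99, "suggested test fails a basic sanity check"
print("Sanity check passed: identical samples yield p close to 1")
```

The point is the workflow, not the statistics: because code produces observable output, a wrong suggestion tends to reveal itself the moment it runs.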
There are clear benefits to capitalizing on the technology’s strengths; as Davis notes, outsourcing onerous tasks such as troubleshooting and data-sourcing has freed up more time for research. But some experts worry that AI may undermine our ability to learn. Bonnie Stewart, a professor who specializes in digital learning at the University of Windsor, warns that generative AI may encourage “passive” and “transactional” learning. The technology should not be mistaken for a tool that opens new horizons in human knowledge, she stresses. Rather, it can be thought of as a fishing trawler scouring the vast ocean of “what has already been thought and said.”
When we use this framework “to steer us from the processes and hard work of meaning-making, there will be cognitive impacts,” Stewart adds. “AI can help us do things, but getting things done is not necessarily learning.”
Research backs that up. In a 2025 study involving more than 4,500 participants, researchers at the University of Pennsylvania found that those who used large language models to research topics demonstrated shallower understanding of those subjects and generated fewer original insights than people who relied on traditional web search.
Tailoring learning to students
As Clapperton notes, given the staying power of this technology, attempts to make classrooms “AI-proof” would be futile. The question is no longer whether to use AI, but how. Brad Cohen, chief academic officer at Top Hat, a Toronto-based educational software company, regularly meets with educators across North America to discuss their concerns about the technology.
“Many universities have invested resources to help faculty come to terms with AI and to help them understand what AI is and what it can do for them,” says Cohen. He believes generative AI’s greatest promise lies in its potential to create learning environments closely attuned to individual student needs, making learning more accessible, immersive and deeply personal.
With AI, he notes, educators can better identify knowledge gaps among students and tailor course design to accommodate diverse learning styles.
“Most faculty see AI as an essential tool for the future and want to help students learn how to use it ethically and responsibly,” he says. “Ultimately, we are all aiming for a learning environment where every student has the best possible chance to succeed.”
Owen Guo writes about technology for MaRS. Torstar, the parent company of the Toronto Star, has partnered with MaRS to highlight innovation in Canadian companies.