Does that online restaurant, service or product review seem too good to be true?
You could be right.
As the use of artificial intelligence grows, and its output gets slicker, fake reviews are getting harder to spot and, according to new research, are infiltrating trusted platforms.
Last month, Collingwood-based AI-detection company Originality.ai analyzed various Canadian industries, from banking to health care, and found that fake reviews were likely rampant across established review sites.
Originality.ai found that at five of Canada’s Big 6 banks, the share of reviews likely generated by AI approached or exceeded 10 per cent on sites like Google, Trustpilot, and ConsumerAffairs.
For dental clinics across Canada, Originality.ai found that about 20 per cent of reviews were likely AI-generated; for plastic surgery clinics in Toronto, the figure was 35 per cent.
Madeleine Lambert, director of marketing and sales at Originality.ai, says businesses enlisting automated review services may be doing so to stay competitive with search engine algorithms.
“You’re going to get better rankings and more eyeballs on your site if you have reviews,” says Lambert, adding that mass-producing fake reviews is faster and easier now with large language models like ChatGPT.
That also makes fake reviews more difficult for government watchdogs to catch.
“There’s more risk than in the past,” says Steve Szentesi, a competition and advertising lawyer based in Toronto, “because there are more ways to make false claims.”
Szentesi says that Canada’s Competition Bureau, the federal agency responsible for policing false or misleading product claims, may have a gap in its ability to enforce the rules in the current Wild West that is AI.
And some experts are doubtful it’s even possible to discern man from machine.
In March, the bureau released a report in which a third of survey respondents expressed concern over AI’s ability to accelerate deceptive marketing.
Szentesi says bureau enforcement of rules prohibiting false or misleading endorsements has been low for more than a decade despite the proliferation of such ads.
The bureau’s U.S. counterpart, the Federal Trade Commission, is much more active, he says.
A year ago, the FTC took a major enforcement step by banning the sale and purchase of fake reviews and testimonials.
Szentesi says without more guidance from Canada’s bureau, determining liable parties when it comes to AI could be tricky.
“What happens if you have a company that’s using AI, but the AI is making the decisions?” he says. “The company essentially has extremely little or no control over what the AI is saying.”
There have been some notable rulings, however.
In February 2024, British Columbia’s Civil Resolution Tribunal found Air Canada responsible after its AI chatbot wrongly advised a passenger that they could seek compensation for the difference between the airline’s full fare and its bereavement rate after purchasing a ticket.
That advice ran counter to company policy, but the tribunal ruled the passenger was entitled to the amount promised by Air Canada’s chatbot.
When AI is used to write product reviews, many parties can be involved, including review platforms, third-party firms that generate fake reviews, and companies that hire social media engagement farms.
Research released last year by the Transparency Company, a U.S. firm that helps expose and eliminate fake reviews, found that some 10 million of the 73 million reviews it analyzed were likely fake, while another 2.3 million were likely partly or entirely AI-generated.
“I think fundamentally, this is going to undermine a valuable system that consumers rely on and leave us with very few options,” said Martin Pyle, a professor in the marketing management department at Toronto Metropolitan University who studies customer reviews.
The profitability of gaming the review system is clear: a single extra star in a company’s rating can translate into as much as 10 per cent more in sales, Pyle said. The process of filtering out the fakes, however, is murkier.
“The reality is, these AI detectors out there don’t work.”
Pyle said that AI detection software can produce both false positives and false negatives, misidentifying human writing as AI and failing to catch AI-generated content.
The inability to know what’s real and what’s fake is, to Pyle, a sign of the times.
“It speaks to where we’re at, where we can’t trust in the systems around us, and that’s just sad.”
In an emailed statement to the Star, a spokesperson for ConsumerAffairs said that while AI detection software isn’t always reliable, using AI tools to help with phrasing doesn’t violate the site’s policy as long as the review was initiated by an actual person. The site tries to verify that by requiring users’ contact details.
Shreyas Sekar, a professor in management studies at the University of Toronto, predicts that the rise of AI-generated fake reviews might drive companies to tie customers’ accounts to their real-world identities, much like a verified purchase system for online products.
Some platforms, like Amazon, already offer purchase verification, though it’s not required and, according to Sekar, has been abused in the past by sellers and fake reviewers.
Sekar recognizes that most modes of extra assurance for the consumer will also involve more effort.
“You cannot stop fraud 100 per cent without inconveniencing real people,” he says, pointing to the difficulty of some CAPTCHAs, the online Turing tests that ask users to select pictures of cars or traffic lights to prove they aren’t robots.
“Increasingly, I guess we’re seeing things come full circle,” said Sekar. “I’m going to start relying on reviews less and maybe relying on my friends, who actually use the product, more.”