When Kristen Myers read that a vaccine for peanut allergies was available, she thought it would mean freedom for her seven-year-old daughter, Hailey.
“She has other disabilities that she struggles with and this was just one thing on the list,” Myers said. “I was like, ‘Maybe we’re going to be able to say this is no longer a problem.’ ”
But the Facebook post claiming the vaccine existed turned out to be AI-generated misinformation, and it has garnered thousands of likes, shares and comments filled with relief. The Facebook page behind it, which describes itself as a “media/news company,” has a slew of similar-looking posts featuring AI-generated images.
“People once terrified by microscopic traces of peanuts can now enjoy food freely, even eating peanut butter again without fear,” the post reads. It claims a clinical trial of 120 patients took place at McMaster University, and that 89 per cent of participants “developed full tolerance” after three months of treatment.
The publication the post cites also does not exist.
AI misinformation has spread around the internet but can be particularly harmful when it comes to health and medicine, creating false hope or leading to dangerous practices. It can be difficult to spot, but experts have some suggestions for how to avoid falling into its traps.
For Myers, who lives in Calgary, the realization was “devastating.” Her daughter relies on an EpiPen if she’s exposed to peanuts and has been to the hospital several times. This purported vaccine would have been life-changing for her.
“We should have known this was too good to be true.”
Here’s what you need to know about AI-generated medical misinformation and how to catch it.
How did AI misinformation become so prevalent?
Dr. Alon Vaisman, an infection control and infectious disease physician at the University Health Network, said the recent loss of trust in medical institutions “leaves the gap open” for people to get information from other, possibly unreliable sources.
“There are many vaccine-preventable infections that can cause a lot of harm if people start to fall prey to the disinformation and the campaigns using AI,” he said.
The more false information spreads around the internet, the harder it will become for people to fact-check, Vaisman believes, pointing to Google’s AI Overviews as an example.
“(It) feeds off various other social media platforms, various other posts,” he said. “If there’s enough disinformation out there, there’s going to be more doubt cast upon value.”
More than 47 per cent of Canadians have tried AI tools for work, school or personal reasons, up from 25 per cent in 2023, according to a Leger report released earlier this year.
In spaces where vaccines are a controversial topic, “bad actors will intentionally use AI and bots” to amplify false information, said Matthew Miller, director of the DeGroote Institute for Infectious Disease Research at McMaster University.
“It’s intentionally weaponized (and) can be really dangerous because it can be amplified in large circles very, very quickly,” he said.
Why is AI so hard to spot?
Clifton van der Linden, director of the Digital Society Lab at McMaster, said people don’t take the extra time to find the sources of an AI post, nor are they naturally exposed to the “factual claim” that refutes the original false post.
Sometimes, people also purposely turn to AI to summarize and explain complicated medical research.
“There is a danger to using artificial intelligence in this way,” van der Linden said. “It will not always provide an accurate representation or summary of the work that it is investigating.”
It’s also unreasonable to expect people to have the time or training to assess the scientific information they’re reading and determine whether or not it’s factual, he said. “This is a difficult burden to download to the mass public, who are already overburdened.”
How can we identify misinformation?
“Once misinformation is out there, it is extremely hard to correct,” Miller said.
One way to prevent people from falling prey to false information is pre-bunking: warning them in advance about misleading claims they might encounter.
“Context is everything,” Miller said. “It’s something that we can’t just rely on AI for.”
AI-generated text generally lacks personal opinions and has a “dry, robotic” tone, according to the federal government’s Get Cyber Safe awareness campaign website. “Some AI tools are trained on old data, giving false information about current events,” the website reads, urging Canadians to fact-check what they read if it seems “exaggerated” or comes from fake sources.
AI-generated images often appear as “smooth, crisp images with heavily blurred backgrounds,” and people in them tend to look “glossy,” with unusually smooth skin or clothing, according to the website.
These images can also feature incorrect shadows and lighting, or text that doesn’t form real words or letters. The Facebook page that claimed a peanut allergy vaccine exists features these types of images.
Videos generated by AI typically have inconsistent lighting, “people who don’t blink” and irregular audio, with unnatural tones and choppy sentences.
The Canadian government website also says not to rely on AI detection tools because “they are often unreliable.”
So, is there a vaccine for peanut allergies?
Earlier in November, four food allergy researchers from McMaster University released a statement saying the post “incorrectly suggested” a vaccine had been developed at the school.
“Generative AI may be mixing our long history of advancing allergy research with other publicly available scientific concepts,” the statement read.
Susan Waserman, one of the co-authors of the statement, said she wasn’t aware of the Facebook post until Food Allergy Canada contacted the university about the vaccine claim.
Posts like the AI-generated one overshadow legitimate food allergy research, Waserman said.
Her colleagues also had not heard about the story.
“If there was a major discovery, we all would have been aware of it.”