This week, the BC Wildfire Service posted two aerial-view images on Facebook showing wildfires raging across densely forested terrain as aircraft attempted to douse the flames. The images were dramatic, compelling and, the service warned, completely fake.
It was an example of a trend that has emerged with the increased availability of online tools that use artificial intelligence to generate images. Whenever disaster strikes, fake pictures are soon to follow.
“Whether well-intentioned or intentionally misleading, misinformation is the last thing any of us need during emergencies,” the wildfire service said in its post.
Here are some tips on how to spot AI-generated images of natural disasters:
CONSIDER THE CONTENT
Are the events in the image likely to have happened in real life?
Freelance photojournalists Jesse Winter and Amber Bracken, who have both covered wildfires in B.C., note that several details in the images shared by the wildfire service point to them being inauthentic.
“The angle of the photographs is aerial. Airspace is restricted to emergency responders during a wildfire, and so a real photo from this vantage would have to come either from the fire service directly, or from a journalist embedded with a crew,” Bracken said in an email. “Both cases require resources and are not the most common image you see from real wildfire.”
In its post, the wildfire service noted the images “do not accurately represent the terrain, fire size or fire behaviour” of the fires they purported to show.
The Drought Hill fire near Peachland, B.C., was considered out of control on July 30, but that status improved significantly several hours before the AI image and misleading post were made on Facebook.
“If this were real fire behaviour, it would be the most extreme fire or some of the most extreme fire behaviour that’s possible in the sort of standard wildfire behaviour scale class,” Winter said.
LOOK FOR INCONSISTENCIES
The wildfire images have a sort of unreal, almost painterly quality that sets them apart from authentic images of wildfires.
“They look similar in style to many AI-generated images in the sense they are slightly cartoony and they look generic in a certain way that’s indicative of AI-generated work,” said Fenwick McKelvey, associate professor with the department of communication studies at Concordia University and co-director of the university’s Applied AI Institute.
McKelvey says AI images often contain logical errors, such as inconsistent lighting and misplaced or deformed hands. But McKelvey cautions that the quality of the images varies based on how much effort is put into making them.
Winter notes one of the aircraft seen in the wildfire images looked unlike any he’d seen before.
“That looks to me like not a real helicopter at all,” Winter said, noting the blades of a helicopter in one of the images don’t look aligned correctly. “It’s what a computer thinks a helicopter looks like.”
Text in AI-generated images often appears blurry or nonsensical.
“The highway has a label, similar to Google Maps,” Bracken pointed out. One of the images has the numbers “97” and “97C” apparently painted across the road on a highway overpass. Highways 97 and 97C are real roadways in B.C., but those numbers don’t appear painted on the roads that way.
“This would only be possible with Photoshop or an AI-generated image.”
CONSIDER THE SOURCE
Many social media users who generate content using AI say so in their bios.
The Facebook bio for the account that shared the wildfire images says it shares “AI art” in its profile. A disclaimer was later added to one of the posts after commenters noticed the image was AI-generated, but the other image is still on Facebook without the disclaimer.
The account also frequently posts images, both real and fake, of natural disasters from around the world.
“What we’re seeing is people post information that matches their world view, and that’s as much a tell as anything about provenance of the image,” McKelvey said.
If you’re not sure where the image is from, you can run a reverse image search through Google or other search engines to try to find the earliest version, which might tell you more about how it was created.
Many images and videos shared on social media during disasters aren’t AI-generated, but are instead pulled from previous events at different locations. A reverse image search will help spot these, as well.
Look for watermarks and read the captions. If the poster can’t – or won’t – say where it’s from, it’s a sign you should view it with skepticism.
SEEK RELIABLE SOURCES
Both McKelvey and Winter point out AI is improving so rapidly that advice on how to tell the difference between real and fake images will soon be outdated. This makes having access to trustworthy sources vital during emergencies – something Winter says can be difficult as journalists face restrictions in accessing fire sites in Canada.
Traditional media outlets have strict policies against sharing AI-generated images as if they are real.
If there’s a dramatic image of a major news event spreading on social media, check reliable news sources to see whether they’re reporting it as fact. Look for other images or video of the event and see whether they match.
In its Facebook post, the BC Wildfire Service urged residents to turn to trusted sources for updates, and pointed to the BC Wildfire Service App, the emergency alert system and mainstream news sites.
“The best time to identify your own trusted sources is before you need them,” the wildfire service said.
This report by The Canadian Press was first published Aug. 8, 2025.
Colleen Hale-Hodgson, The Canadian Press