TORONTO – If you jumped on the recent social media trend of posting photos or videos of memories from 2016, experts say artificial intelligence likely thanks you.
That’s because the high volume of publicly available images is a gold mine for anyone needing data to train AI models, and their clear date labels make it even easier to teach the technology how people, places and things change over time.
“These data sets are super rare, they’re very, very expensive to replicate, and they’re actually really difficult to collect because of ethical constraints and all sorts of things, so for me, I guess the bells went off,” said Sarah Saska, CEO of consultancy firm Feminuity, describing her reaction to people posting about 2016.
The origins of the trend of posting content from 10 years ago are hard to pinpoint, but since early January, people have been posting that “2026 is the new 2016.” That’s meant a lot of posts featuring nostalgic touchstones like skinny jeans, Snapchat’s once-popular dog face filter and Drake’s “One Dance.”
Participants ranging from ordinary users to celebrities have mostly treated the throwback as harmless fun, but some technology experts see it as a reminder that once something is posted online, how it’s used is beyond your control.
“Something that is benign now could be very sensitive in a couple of years because we don’t necessarily have a perfect way of anticipating what technology will be available and what will be the applications of that technology,” said Nicolas Papernot, an associate professor of computer engineering and computer science at the University of Toronto.
What most people who have posted about 2016 didn’t realize is that the photos and videos they shared are ideal for AI firms, which typically have to buy or collect data, or in this case images, to train the models that underpin their software, he said.
It’s a costly affair because every image or video needs to be labelled, usually by a human who can correctly identify what it depicts and when it was taken.
However, that time-consuming task becomes far more efficient and affordable when people dump their images online themselves, automatically labelling them and, in essence, confirming their authenticity, he said.
And the value only grows when people post their 2016 image beside one from this year because it can teach an AI model how things change over a long period of time.
“There’s a decade of real biological aging, not just cosmetic change, that’s sort of reflected across the photos,” said Saska.
Once models use the photos to learn how our identities persist or what aspects of them change over time, Saska said they become better at recognizing us years later, even if our appearances have changed.
“It also helps with things like matching old photos of people to brand new surveillance footage,” she said.
“It can help with identifying people despite changes like hair or weight gain, clothing, or even things like plastic surgery, and they can link our historical images to present day records like ID or government documentation.”
Since it’s unclear how the trend originated, it’s possible it began organically and with no intention of training artificial intelligence models.
However, the experts interviewed for this story all agreed that once such data is public, it holds real value for AI companies.
Combined with location and other tracking data or systems, it could even be used to predict our movements, making it harder for us to stay anonymous, Saska said.
Or enter it into image or video generation software and you could become the victim of a deepfake, said Samantha Bradshaw, a research fellow at the Centre for International Governance Innovation.
Deepfakes are digitally manipulated images or videos that depict someone doing or saying something they haven’t.
Some people may assume that because they’re not public figures, they’re at less risk of their data being used in ways they didn’t intend, but Bradshaw said they should think again.
“The more individual data points they have, the better their predictive models can become, so your individual data does really matter at the end of the day, even though you can sort of feel small and insignificant,” she said.
“It’s part of making a larger system work, function and be as powerful as it actually can be.”
Because social media platforms seldom offer users a way to opt out of their data being used to train AI, she said the best thing people can do is think more carefully about how they post and, where possible, limit how public their accounts are so their data can’t easily be scraped by outside companies.
This report by The Canadian Press was first published Jan. 27, 2026.