Viral winter storm photos of ‘insulated’ wild horses are AI-generated
The images provide another example of how quickly false information can spread during a severe weather event.
A viral Facebook post with more than 10,000 ‘likes’, featuring images of people bundling up wild horses in duct tape and insulation in North Carolina ahead of a winter storm in late January, is an artificial intelligence (AI) fake.
The fake images were supposedly taken at North Carolina’s Corolla Wild Horse Beach.
“Today, the nonprofit organization Outer Banks People helped prepare the wild horses for low temperatures and possible snow,” reads the January 29th Facebook post.
“To keep them warm during extreme cold, we carefully wrapped them in recycled insulation materials.
At this time, we are accepting donations of insulation and duct tape to continue supporting our efforts. Every contribution helps keep our local wildlife warm and safe during the winter conditions.”
The post then links to an ‘unofficial’ beach page for Corolla Wild Horse Beach, which contains several AI-generated images and videos of the horses.
The page has over 16,000 'likes.'
(Facebook/Alex Lex)
Popular Science looked into it and couldn’t find any registered charity bearing the name 'Outer Banks People' — and neither could we here at The Weather Network.
Popular Science did, however, track down Chris Winter, Chief Executive Officer of the Corolla Wild Horse Fund, which operates in the same area.
“It is entirely fake; the pictures are AI generated,” he told Popular Science.
“It is unfortunate that these posts continue to be made, as it creates considerable and widespread concern for the well-being of the horses.”
(Facebook/Alex Lex)
It’s just a joke! (or is it?)
While most people would (likely) assume it isn’t practical or safe to wrap animals in housing insulation — which can cause skin irritation, respiratory issues, and severe illness or death if ingested — there are a few problems here.
For starters, the images are incredibly realistic. Many AI images have “clues” in them that give them away (we’ll post some of those below), but most of those visual cues are absent here.
Second, some members of the public are quick to accept what they see online as real, regardless of how preposterous it may look.
We scrolled through the first 50 comments under this Facebook post. While a few people asked if the images were a “joke,” not a single one that we were served asked if the images were AI-generated. Many people were outraged and concerned for the safety of the horses, while we saw at least one comment asking how to donate — although to be fair, some of those comments could have been written by AI bot accounts, which is an enormous problem across all social media platforms.
Third, some social media platforms, like X, offer a “community notes” feature, which allows users to submit corrections or challenge false information in a public post. While Facebook and Instagram have begun testing community notes, the feature remains in its testing phase on those platforms, and this post does not contain any.
Social media users who are “trained” to look for community notes to determine a post’s validity could see the absence of them as an indication that the image is real.
And this all becomes even more problematic when an AI-generated post poses as charity work and asks for donations, as is the case here. While money is not explicitly requested, the episode demonstrates how quickly AI-generated images can circulate unchecked.
(Facebook/Alex Lex)
AI-generated images a huge problem during Hurricane Melissa
In late October 2025, AI-generated images of Hurricane Melissa bearing down on Jamaica inundated social media, overshadowing crucial storm reporting and safety information as the Category 5 storm threatened the nation with heavy rainfall and intense winds.
Many of the videos were clearly watermarked with the logo of Sora, OpenAI’s video generation tool. TikTok removed several fake videos, many of them depicting floods, destruction, and violent winds that were not consistent with verifiable, on-the-ground reports.
Other problematic AI-generated videos included supposed footage of people swimming and playing in the streets as the storm approached, downplaying its significance. Many videos and images that were not taken down were later marked with community notes and labels identifying their AI origins.
But the AI images left some people questioning the authenticity of real videos, including one shared by the U.S. Air Force.
The danger of using AI-generated images during a disaster
Experts say even seemingly innocent AI images can clog up social media channels, drowning out updates from verified sources.
“During emergencies, when people are stressed and need reliable information, such digital disinformation can cause significant harm by spreading confusion and panic,” reads a statement on York University’s website released in August 2025 in response to AI-generated images clogging up social media during B.C.’s wildfire season.
“This vulnerability to disinformation stems from people’s reliance on mental shortcuts during stressful times; this facilitates the spread and acceptance of disinformation. Content that is emotionally charged and sensational often captures more attention and is more frequently shared on social media.
Based on our research and experience on emergency response and management, AI-generated misinformation during emergencies can cause real damage by disrupting disaster response efforts.”

AI-generated images shared by BC Wildfire Service in August 2025. The agency says these images were widely shared by other accounts, but they do not reflect the true nature or size of the fires that were burning at the time. (BC Wildfire Service/Facebook)
How to detect if an image or video is AI-generated
AI technology is improving at a dizzying pace, but if you look closely, there are still clues to look for when determining whether a piece of content is real or computer-generated.
The Government of Canada's Get Cyber Safe program has the following recommendations:
For images:
Look for strange, out-of-context details, like gibberish in place of writing, or building architecture that doesn't make sense.
Pay attention to missing body parts, especially hands.
Look for misplaced or "floating" objects.
Look for impossible perspectives, lighting, and shadows.
Overly smooth texture is another telltale sign of AI.
Pay attention to the background. Is it overly smooth or blurred? This is another clue you could be looking at AI.
For video:
Look for language that is flat and monotone.
Speaking in short, choppy sentences is another tell.
Body language that is jerky or unnatural is a common sign of AI.
People often don't blink in AI videos.
You may also see shadows or sparks of light in places that don't make sense.
Some AI videos and images will contain a watermark or small logo indicating they are computer-generated.
