AI-generated images are everywhere. Here's how to spot them
By Robert Brown
Date: 2025-04-07
Amid debates about how artificial intelligence will affect jobs, the economy, politics and our shared reality, one thing is clear: AI-generated content is here.
Chances are you've already encountered content created by generative AI software, which can produce realistic-seeming text, images, audio and video.
So what do you need to know about sorting fact from AI fiction? And how can we think about using AI responsibly?
How to spot AI manipulation
Thanks to image generators like OpenAI's DALL-E 2, Midjourney and Stable Diffusion, AI-generated images are more realistic and more available than ever. And technology to create videos out of whole cloth is rapidly improving, too.
The current wave of fake images isn't perfect, however, especially when it comes to depicting people. Generators can struggle with creating realistic hands, teeth and accessories like glasses and jewelry. If an image includes multiple people, there may be even more irregularities.
Take the synthetic image of the Pope wearing a stylish puffy coat that recently went viral. If you look closer, his fingers don't seem to actually be grasping the coffee cup he appears to be holding. The rim of his eyeglasses is distorted.
Another set of viral fake photos purportedly showed former President Donald Trump getting arrested. In some images, hands were bizarre and faces in the background were strangely blurred.
Synthetic videos have their own oddities, like slight mismatches between sound and motion and distorted mouths. They often lack facial expressions or subtle body movements that real people make.
Some tools try to detect AI-generated content, but they are not always reliable.
Experts caution against relying too heavily on these kinds of tells. The newest version of Midjourney, for example, is much better at rendering hands. The absence of blinking used to be a signal a video might be computer-generated, but that is no longer the case.
"The problem is we've started to cultivate an idea that you can spot these AI-generated images by these little clues. And the clues don't last," says Sam Gregory of the nonprofit Witness, which helps people use video and technology to protect human rights.
Gregory says it can be counterproductive to spend too long trying to analyze an image unless you're trained in digital forensics. And too much skepticism can backfire — giving bad actors the opportunity to discredit real images and video as fake.
Use S-I-F-T to assess what you're looking at
Instead of going down a rabbit hole of trying to examine images pixel-by-pixel, experts recommend zooming out, using tried-and-true techniques of media literacy.
One model, created by research scientist Mike Caulfield, is called SIFT. That stands for four steps: Stop. Investigate the source. Find better coverage. Trace the original context.
The overall idea is to slow down and consider what you're looking at — especially pictures, posts, or claims that trigger your emotions.
"Something seems too good to be true or too funny to believe or too confirming of your existing biases," says Gregory. "People want to lean into their belief that something is real, that their belief is confirmed about a particular piece of media."
A good first step is to look for other coverage of the same topic. If it's an image or video of an event — say a politician speaking — are there other photos from the same event?
Does the location look accurate? Fake photos of a non-existent explosion at the Pentagon went viral and sparked a brief dip in the stock market. But the building depicted didn't actually resemble the Pentagon.
Google recently announced it's making it easier to see when a photo first appeared online, which could help identify AI-generated pictures as well as photos that are shared with misleading or false context — like that viral image of a shark swimming on a flooded highway that often appears after hurricanes.
Pause and think in other situations, too. Scammers have begun using spoofed audio to impersonate family members in distress and trick victims into sending money. The Federal Trade Commission has issued a consumer alert and urged vigilance. It suggests that if you get a call from a friend or relative asking for money, you call the person back at a known number to verify it's really them.
Check your sources
AI images aren't the only way you might be fooled by a computer. Chatbots like OpenAI's ChatGPT, Microsoft's Bing and Google's Bard are really good at producing text that sounds highly plausible. But that doesn't mean what they tell you is true or accurate.
That's because they're trained on massive amounts of text to find statistical relationships between words. They use that information to create everything from recipes to political speeches to computer code.
While the text chatbots spit out may sound convincingly human, they do not learn, think, or create in the ways we do, says Gary Marcus, a cognitive scientist and professor emeritus at New York University.
"They don't have models of the world. They don't reason. They don't know what facts are. They're not built for that," he says. "They're basically autocomplete on steroids. They predict what words would be plausible in some context, and plausible is not the same as true."
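The "autocomplete on steroids" idea can be made concrete with a toy sketch. This is not how a real large language model works internally (modern systems use neural networks trained on vastly more data), but a simple next-word frequency count, with a made-up corpus and function names for illustration, shows the core point: the model picks what is statistically plausible, which is not the same as what is true.

```python
from collections import Counter, defaultdict

# Toy corpus -- a stand-in for the massive text a real model trains on.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram frequency table).
successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word: str) -> str:
    # Return the most frequent successor: the statistically "plausible"
    # next word, with no notion of whether the result is factual.
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" -- it follows "the" most often here
```

Scaled up enormously and refined with neural networks, this kind of statistical prediction produces the fluent but sometimes fabricated text the article describes.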
ChatGPT fabricated a damaging allegation of sexual harassment against a law professor. It's made up a story my colleague Geoff Brumfiel, an editor and correspondent on NPR's science desk, never wrote. Bing invented quotes from a Pentagon spokesman. Bard made a factual error during its high-profile launch that sent Google's parent company's shares plummeting.
That means you should double-check anything a chatbot tells you — even if it comes footnoted with sources, as Google's Bard and Microsoft's Bing do. Make sure the links they cite are real and actually support the information the chatbot provides.
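One quick sanity check when a chatbot hands you citations is to confirm each link is even a well-formed web address before you bother visiting it, since fabricated citations sometimes aren't. Below is a minimal sketch using Python's standard library; the function name is made up for illustration, and a real check would also fetch the page and confirm it actually supports the claim.

```python
from urllib.parse import urlparse

def looks_like_valid_url(url: str) -> bool:
    # A plausible citation link should at least have an http(s) scheme
    # and a host. This catches garbled or obviously fake links only;
    # a syntactically valid URL can still point to a page that doesn't
    # exist or doesn't say what the chatbot claims.
    parts = urlparse(url)
    return parts.scheme in ("http", "https") and bool(parts.netloc)

print(looks_like_valid_url("https://www.npr.org/sections/science/"))  # True
print(looks_like_valid_url("not a url"))  # False
```

Passing this check is necessary but nowhere near sufficient: you still need to open the link and read whether the source really backs up what the chatbot told you.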
Use generative AI tools responsibly
In its early phase, AI can be unreliable and even risky. But it's also fun and interesting to experiment with. And like it or not, generative AI tools are being integrated into all kinds of software, from email and search to Google Docs, Microsoft Office, Zoom, Expedia, and Snapchat.
Playing around with chatbots and image generators is a good way to learn more about how the technology works and what it can and can't do.
"My main piece of advice to everybody is, do use this stuff," says Ethan Mollick, a professor at the University of Pennsylvania's Wharton School. "You absolutely should be making things. You should absolutely spend an hour on ChatGPT...You should try and automate your job."
Mollick requires his students to use AI. And while he's an enthusiastic user of chatbots and other forms of AI, he's also wary of the ways they can be misused.
"You've got to figure this thing out because we're in a world where there's nobody with great advice right now. There isn't like a manual out there that you can read," Mollick says.
If you are going to experiment with generative AI, here are a few things to keep in mind.
- Privacy: Be smart about sharing personal information with AI software. Systems may use your input for training, and the companies behind them may retain access to what you enter.
- Ethics: What are you using the software to create? Are you asking an image generator to copy the style of a living artist, for example? Or using it in a class without your teacher's knowledge?
- Consent: If you're creating an image, who are you depicting? Is it parody? Could they be harmed by the portrayal?
- Disclosure: If you're sharing your AI creations on social media, have you made it clear they are computer-generated? What would happen if they were shared further without that disclosure?
- Fact check: As explained above, chatbots get things wrong. So double-check any important information before you post or share it.
"You can think of it as like an infinitely helpful intern with access to all of human knowledge who makes stuff up every once in a while," Mollick says.
The audio portion of this episode was produced by Thomas Lu and edited by Brett Neely and Meghan Keane.
We'd love to hear from you. Email us at [email protected]. Listen to Life Kit on Apple Podcasts and Spotify, or sign up for our newsletter.