AI expert says Princess Kate photo scandal shows our "sense of shared reality" being eroded
London — The European Parliament passed the world's first comprehensive law regulating the use of artificial intelligence on Wednesday, as controversy swirled around an edited photo of Catherine, the Princess of Wales, that experts say illustrates how even the awareness of new AI technologies is affecting society.
"The reaction to this image, if it were released before, pre-the big AI boom we've seen over the last few years, probably would be: 'This is a really bad job with editing or Photoshop,'" Henry Ajder, an expert on AI and deepfakes, told CBS News. "But because of the conversation about Kate Middleton being absent in the public eye and the kind of conspiratorial thinking that that's encouraged, when that combines with this new, broader awareness of AI-generated images… the conversation is very, very different."
Princess Kate, as she's most often known, admitted to "editing" the photo of herself and her three children that was posted to her official social media accounts on Sunday. Neither she nor Kensington Palace provided any details of what she had altered in the photo, but one royal watcher told CBS News it could have been a composite image created from a number of photographs.
Ajder said AI technology, and the rapid increase in public awareness of what it can do, means people's "sense of shared reality, I think, is being eroded further or more quickly than it was before."
Countering this, he said, will require work on the part of companies and individuals.
What's in the EU's new AI Act?
The European Union's new AI Act takes a risk-based approach to the technology. For lower risk AI systems such as spam filters, companies can choose to follow voluntary codes of conduct.
For technologies considered higher risk, such as AI used in electricity networks or medical devices, there will be tougher requirements under the new law. Some uses of AI, such as police using facial recognition to scan people in public places, will be banned outright except in exceptional circumstances.
The EU says the law, which is expected to come into effect by early summer, "will guarantee the safety and fundamental rights of people and businesses when it comes to AI."
Losing "our trust in content"?
Millions of people view dozens of images every day on their smartphones and other devices. Especially on small screens, it can be very difficult to detect inconsistencies that might indicate tampering or the use of AI, if it's possible to detect them at all.
"It shows our vulnerability towards content and towards how we make up our realities," Ramak Molavi Vasse'i, a digital rights lawyer and senior researcher at the Mozilla Foundation, told CBS News. "If we cannot trust what we see, this is really bad. Not only do we have, already, a decrease in trust in institutions. We have a decrease in trust and media, we have a decrease in trust, even for big tech… and for politicians. So this part is really bad for democracies and can be destabilizing."
Vasse'i co-authored a recent report looking at how effective different methods are at marking and detecting whether a piece of content has been generated using AI. She said there were a number of possible approaches, including educating consumers and technologists and watermarking and labeling images, but that none of them is perfect.
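Labeling can take several forms; one common form is provenance metadata embedded in the image file itself, which records the software that produced or edited it. As a minimal sketch only, not drawn from Vasse'i's report, the snippet below uses Python's Pillow library to read an image's EXIF "Software" tag; the file name and function name are hypothetical. Because such metadata can be stripped or rewritten when an image is shared, the check also illustrates why no single method is considered perfect.

```python
# Minimal illustration (not from the report): inspect an image's EXIF
# metadata for the "Software" tag that editing tools often record.
# Requires the Pillow library; "family_photo.jpg" is a hypothetical path.
from PIL import Image
from PIL.ExifTags import TAGS


def editing_software(path: str) -> str | None:
    """Return the EXIF 'Software' value if the file carries one, else None."""
    exif = Image.open(path).getexif()
    for tag_id, value in exif.items():
        if TAGS.get(tag_id) == "Software":
            return str(value)
    return None


if __name__ == "__main__":
    # Prints an editor name if the tag survived, or None if the metadata
    # was never written or was stripped on upload.
    print(editing_software("family_photo.jpg"))
```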
"I fear that the speed in which the development happens is too quick. We cannot grasp and really govern and control the technology that is kind of, not creating the problem in the first place, but accelerating the speed and distributing the problem," Vasse'i told CBS News.
"I think that we have to rethink the whole informational ecosystem that we have," she said. "Societies are built on trust on a private level, on a democratic level. We need to recreate our trust in content."
How can I know what I'm looking at is real?
Ajder said that, beyond the wider aim of working toward ways to bake transparency around AI into our technologies and information ecosystems, it's difficult on the individual level to tell whether AI has been used to change or create a piece of media.
That, he said, makes it vitally important for media consumers to identify sources that have clear quality standards.
"In this landscape where there is increasing distrust and dismissal of this kind of legacy media, this is a time when actually traditional media is your friend, or at least it is more likely to be your friend than getting your news from random people tweeting out stuff or, you know, Tiktok videos where you've got some guy in his bedroom giving you analysis of why this video is fake," Adjer said. "This is where trained, rigorous investigative journalism will be better resourced, and it will be more reliable in general."
He said tips about how to identify AI in imagery, such as watching to see how many times someone blinks in a video, can quickly become outdated as technologies are developing at lightning speed.
His advice: "Try to recognize the limitations of your own knowledge and your own ability. I think some humility around information is important in general right now."
- In:
- Deepfake
- Artificial Intelligence
- AI
- Catherine Princess of Wales
Haley Ott is the CBS News Digital international reporter, based in the CBS News London bureau.