The rapid advance of Generative Artificial Intelligence (AI) in our day-to-day lives has set alarm bells ringing in global policy corridors, as experts worry that rebuilding trust in online media will be a steep challenge.
The rise of AI-generated media, especially deepfake or synthetic pictures and videos, has raised concerns about privacy, mental health, and cyber safety.
At a recent policy-focused session at the AI Impact Summit 2026 in India, experts from tech giants and the Indian government discussed how content provenance and authenticity standards can help audiences understand the history of digital information.
In a panel discussion organized on Tuesday, Gail Kent from Google emphasized that understanding the evolution of digital content has become critical in an era where AI can enhance and manipulate images, audio, and video.
“Generative AI introduces new complexities, making it harder for audiences to distinguish between original, edited, and synthetic content. There is a need for ‘immutability tools’ that can identify whether content was AI-generated or modified,” said Kent.
She highlighted two tools, SynthID and Content Credentials, which let users trace how, where, and by whom content such as images was created.
Sameer Boray from the Information Technology Industry Council referenced a policy guide that situates AI-generated content within existing legal frameworks while identifying gaps that may require new regulatory tools. He stressed that industry responsibility is essential as AI-generated content proliferates, and described authenticity standards as a meaningful step forward while cautioning that they are not the only solution.
Andy Parsons from Adobe explained that C2PA standards rely on cryptographic evidence embedded in digital files, including images, videos, documents, and audio. These signals can reveal provenance details such as how content was produced and what modifications it underwent. Parsons cautioned that trust itself is a complex concept, arguing that provenance should not label AI as inherently good or bad, but rather empower individuals to assess risk and make informed judgments.
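To illustrate the idea behind cryptographically bound provenance, here is a minimal conceptual sketch in Python. It is not the real C2PA format (actual C2PA manifests use certificate-based public-key signatures embedded in the file itself); it simply shows how signing a content hash together with an edit history makes any later tampering detectable. The key and history labels are hypothetical.

```python
# Conceptual sketch only, NOT the C2PA specification: bind a provenance
# manifest (content hash + edit history) to a file with an HMAC signature,
# so any change to the bytes or the claimed history is detectable.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # hypothetical shared key; C2PA uses X.509 certificates


def sign_manifest(content: bytes, history: list) -> dict:
    """Build a provenance manifest covering the content hash and its history."""
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "history": history,  # e.g. ["captured:camera", "edited:crop"]
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest


def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check the signature and that the content still matches its recorded hash."""
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    sig_ok = hmac.compare_digest(
        signature, hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    )
    hash_ok = claimed["content_sha256"] == hashlib.sha256(content).hexdigest()
    return sig_ok and hash_ok


image = b"original pixel data"
manifest = sign_manifest(image, ["captured:camera", "edited:crop"])
```

With this scheme, `verify_manifest(image, manifest)` succeeds on untouched content, while verification fails if either the bytes or the recorded history are altered after signing, which is the property provenance standards rely on.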
Together, speakers agreed that while no single tool can solve the trust crisis, integrating provenance standards into AI governance offers a practical pathway toward more trustworthy digital content ecosystems.


