At a time when the world is racing to build smarter machines, a new question is increasingly shaping global policy discussions: who will hold artificial intelligence accountable when things go wrong? That question took centre stage in New Delhi this week as legal experts, policymakers and technology leaders gathered for the launch of a new international initiative focused on AI accountability and governance.
The International AI Accountability Forum (IAAF), which describes itself as a global platform for AI accountability, governance and liability frameworks, was formally launched on May 14 at the India International Centre in New Delhi. At the launch, organizers unveiled three founding documents: the IAAF Charter, the New Delhi Compact on AI Accountability and the Universal Declaration of AI Accountability Rights.
The Forum will operate from New Delhi under cyberlaw expert Dr. Pavan Duggal, whom the Charter designates as Founder, Founding Chairman and Chief Architect. Delegates from the United States, the European Union, Ghana, Lebanon, the Netherlands and the wider Indo-Pacific region attended the founding, organizers said.
In his keynote address, Dr. Duggal said the rapid rise of agentic AI systems has outpaced existing legal systems globally. He argued that current laws on product liability, contracts and criminal responsibility are inadequate for autonomous AI technologies, adding that “autonomy without accountability is tyranny encoded.”
AI Governance and Law Expert Saakshar Duggal said the Forum aims to create a space where policymakers, industry leaders, legal experts and innovators can discuss how AI systems should remain aligned with human values, ethics and accountability.
Manoj Chugh, Chairman of Chugh Advisory LLP and described by organizers as a leading voice in the global technology industry, said the launch reflects growing global concerns around AI governance, liability and regulatory frameworks. India, he added, is well-positioned to contribute to global AI governance discussions.
The founding comes amid wider international debates around AI regulation and accountability, as governments and policymakers grapple with the legal implications of increasingly autonomous AI systems. According to a World Economic Forum report titled AI Agents in Action: Foundations for Evaluation and Governance, AI agents are moving from prototypes to real-world deployment while most organizations remain unsure how to evaluate, manage and govern them responsibly. The report warns that the widening gap between accelerating experimentation and mature oversight is creating new risks around autonomy, safety and trust.