Seeking tighter safety regulations on teenagers' use of Artificial Intelligence tools, the Australian government said it may ask search engines and app stores to block AI services that do not comply with age-verification requirements.
The aggressive stance by Australia's regulator, the eSafety Commissioner, comes after reports indicated that major AI products available in the country had not disclosed what steps they take to prevent teenagers from accessing adult content.
Under guidelines previously issued by the eSafety Commissioner, from March 9 internet services in Australia, including AI tools and chatbots such as ChatGPT, must restrict users under the age of 18 from receiving violent, adult, self-harm or eating-disorder content.
Failure to comply with the regulation carries a fine of up to A$49.5 million (approximately US$35 million), Reuters reported.
The move comes after Australia last year became the first country in the world to ban social media for teenagers.
According to the Reuters report, of the 50 most popular text-based AI products, only nine had announced plans for age-verification systems just days before the March 9 deadline.
There have been global calls for stricter regulation of teenagers' use of AI tools, especially after major AI companies such as OpenAI faced wrongful-death lawsuits filed by parents.