Over the past decade, social media has become an integral part of people's lives, with statistics indicating over 5.86 billion accounts across these platforms, where users regularly consume and share content in a continuous cycle of reels and tweets.
With the advent of AI, many instances of nefarious use of AI-assisted tools have been reported on social media platforms such as X, Meta, and YouTube. Deepfake videos, AI-assisted scams, and explicit AI-generated images of minors have surfaced on these platforms, ringing alarm bells in the policy corridors of several nations.
Be it deepfake videos used in extortion and investment scams in Brazil and India, deepfake videos of celebrities such as Taylor Swift, or AI personas impersonating real people, the challenge of telling what's real from what's not has only grown harder.
All leading social media platforms have recognized the harm deepfakes and misinformation can cause and are actively working on countermeasures: verified accounts and sources can be clearly identified, suspicious content can be easily reported, and moderators take prompt action. Despite all this, it remains imperative that people stay vigilant and not blindly trust everything they see or hear in the age of AI.