From the UPSC perspective, the following points are important:
Prelims level: AI and deepfakes
Mains level: Paper 3- Deepfakes and threats associated with them
Deepfakes create media that challenge our ability to tell real from fake, blurring the line between the two. This article explains the threats associated with them.
What are deepfakes and the threats associated with them
- Deepfakes are synthetic media (including images, audio and video) that are either manipulated or wholly generated by Artificial Intelligence.
- AI is used to fabricate audio, video and text showing real people saying and doing things they never did, or to create entirely new images and videos.
- This is done so convincingly that it is hard to tell what is fake and what is real.
- They are used to tarnish reputations, create mistrust, question facts, and spread propaganda.
- Deepfakes even have the power to threaten electoral outcomes.
Legal provisions in India
- So far, India has not enacted any specific legislation to deal with deepfakes.
- However, there are some provisions in the Indian Penal Code that criminalise certain forms of online/social media content manipulation.
- The Information Technology Act, 2000 covers certain cybercrimes.
- But this law and the Information Technology Intermediary Guidelines (Amendment) Rules, 2018 are inadequate to deal with content manipulation on digital platforms.
- The guidelines stipulate that intermediary companies must observe due diligence in removing illegal content.
- In 2018, the government proposed rules to curtail the misuse of social networks.
- Social media companies voluntarily agreed to take action to prevent violations during the 2019 general election.
- The Election Commission issued instructions on social media use during election campaigns.
How to deal with the problem of deepfakes
- Only AI-based tools can be effective in detecting deepfakes.
- Blockchains are robust against many security threats and can be used to digitally sign and affirm the validity of a video or document.
- Educating media users about the capabilities of AI algorithms could help.
- A workshop convened by the University of Washington and Microsoft identified six themes for dealing with deepfakes:
- 1) Deepfakes must be contextualised within the broader framework of malicious manipulated media, computational propaganda and disinformation campaigns.
- 2) Deepfakes cause multidimensional issues that require a collaborative, multi-stakeholder response, with experts from every sector working on solutions.
- 3) Detecting deepfakes is hard.
- 4) Journalists need tools, training and resources to scrutinise images, video and audio recordings.
- 5) Policymakers must understand how deepfakes can threaten polity, society, economy, culture, individuals and communities.
- 6) The possibility that genuine evidence can be dismissed as fake is a major concern that needs to be addressed.
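The idea of digitally signing a video so its validity can later be affirmed (whether or not the signature is anchored on a blockchain) rests on ordinary cryptographic authentication. The following minimal Python sketch illustrates the principle with a symmetric HMAC key; the key name and media bytes are hypothetical, and a real system would use public-key signatures and publish the hash for independent verification.

```python
import hashlib
import hmac

def sign_media(data: bytes, key: bytes) -> str:
    """Produce an HMAC-SHA256 tag over the media bytes."""
    return hmac.new(key, data, hashlib.sha256).hexdigest()

def verify_media(data: bytes, key: bytes, tag: str) -> bool:
    """Check that the media bytes still match the signed tag."""
    expected = sign_media(data, key)
    return hmac.compare_digest(expected, tag)

if __name__ == "__main__":
    key = b"publisher-secret-key"      # hypothetical publisher key
    video = b"...raw video bytes..."   # stands in for a real file
    tag = sign_media(video, key)
    print(verify_media(video, key, tag))         # untampered: True
    print(verify_media(video + b"x", key, tag))  # any edit: False
```

Even a one-byte change to the media invalidates the tag, which is what lets a verifier detect manipulation after publication.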
Consider the question: “What are deepfakes and the threats associated with them? How can these threats be tackled?”
In today’s world, disinformation comes in varied forms, so no single technology can resolve the problem. As deepfakes evolve, AI-backed technological tools to detect and prevent them must also evolve.