Emerging Threat of Deepfakes

The rapid advancement of artificial intelligence has brought with it unprecedented benefits but also equally alarming threats. One of the most disturbing manifestations of this double-edged technology is the rise of deepfakes: AI-generated synthetic media in which a person's likeness or voice is convincingly altered to create false narratives. In an age where social media is a dominant source of information, deepfakes have emerged as a potent tool for misinformation, manipulation, and character assassination. Union IT Minister Ashwini Vaishnaw's recent assurance that India will soon introduce regulations to counter deepfakes comes at a critical juncture, when the line between truth and fabrication is rapidly disappearing.
India's social fabric, already sensitive along communal, political, and religious lines, is particularly vulnerable to the misuse of deepfake technology. A single doctored video or audio clip can inflame passions, spread falsehoods, and trigger mass unrest before verification is even possible. During elections, deepfakes can be weaponised to malign political figures, distort public opinion, or sow distrust in institutions. A largely untrained public often struggles to differentiate between authentic content and AI-generated falsehoods. This makes regulation not merely desirable but essential for maintaining public order and trust in democratic processes. Deepfakes also violate personal autonomy, privacy, and dignity, all of which are constitutionally protected rights.
Around the world, countries have already begun taking stringent steps to counter deepfakes. The European Union, under its AI Act, has imposed clear accountability on developers and deployers of AI tools. The Act mandates watermarking or labelling of AI-generated content and holds companies liable for misuse. Similarly, the United States has initiated legislative discussions such as the DEEPFAKES Accountability Act, which calls for the disclosure of synthetic media and penalties for malicious creators. China, meanwhile, has taken a regulatory-heavy approach by enforcing rules that require deepfake creators to label their work and register their real identities before publishing AI-generated content online.
However, these frameworks, while comprehensive, are heavily legalistic and often struggle to keep pace with technological innovation. Laws alone, as India's IT Minister rightly noted, cannot effectively control a rapidly evolving digital threat like deepfakes. A hybrid approach, in which law and technology complement each other, is far more sustainable. Minister Vaishnaw's announcement that India will adopt a techno-legal framework is a forward-looking strategy. Rather than merely penalising offenders after the damage is done, this model envisions a system in which technology itself acts as a preventive and detection mechanism. India's advantage lies in its strong IT workforce and growing AI ecosystem. With over 38,000 GPUs being made available for AI development and global tech giants like Google committing USD 15 billion to build AI infrastructure in India, the country is well positioned to become a leader in developing counter-deepfake technologies. The Government's focus on indigenous AI models, including those with over 120 billion parameters, also underscores India's intent to create technology that reflects local realities and cultural sensitivities, free from the biases often embedded in Western models.
Deepfakes, born out of sophisticated AI, can only be effectively countered through equally powerful technological innovation. Just as "diamond cuts diamond", the antidote to deepfakes lies in advanced detection systems powered by AI itself: tools capable of identifying synthetic manipulations in real time. Several Indian startups, research institutions, and tech companies are already exploring watermarking, facial mapping, and blockchain-based authentication techniques to detect forgeries.
Yet innovation cannot work in isolation. There must be a strong ecosystem of collaboration between the Government, academia, private industry, and media platforms. The development of public awareness programmes is equally vital, enabling citizens to critically assess digital content before accepting it as truth. History has shown that every major technological leap comes with unintended consequences. AI was meant to empower, but it also has the potential to deceive. This is where scientists, researchers, and tech companies must step forward, not just to innovate but to innovate responsibly. There is a moral obligation to ensure that technology remains a force for good, not a weapon of chaos. The deepfake menace will not disappear overnight, but through coordinated, intelligent, and ethical action, the country can build a digital future where truth is protected and trust restored.

The post Emerging Threat of Deepfakes appeared first on Daily Excelsior.