Imagine a dystopian scenario where the rise and fall of political regimes across the world are dictated by manipulated algorithms built into generative Artificial Intelligence (AI) tools based on Large Language Models (LLMs). The signs of such a frightening future are already visible in the disruptive power and all-pervasive impact of AI technologies.
Unlike traditional AI tools, LLM-based models produce human-like, contextually rich narratives that resonate more with individuals’ beliefs and emotions, thereby increasing their persuasive power. Such chatbots, when weaponised on an industrial scale, have the potential to distort the human psyche, fundamentally alter belief systems and trigger political upheaval. Influencing elections and installing a desired regime could then become easy.
A glimpse of such disruptive power was on display recently when ‘Grok 3’, a generative AI chatbot owned by American billionaire Elon Musk, kicked up a viral storm across India's digital landscape. This was due to its quirky, unfiltered and unhinged responses to queries about prominent politicians, including Prime Minister Narendra Modi and Congress leader Rahul Gandhi, and related topics.
Its quirky responses, such as describing Rahul Gandhi as ‘more honest with an edge on formal education than Modi’ and claiming the Prime Minister’s interviews were ‘often scripted’, set off ripples in social media circles. Such formulations are mischievous and can cause political trouble. Understandably, they drew the attention of the Union Ministry of Electronics and Information Technology, prompting the government to seek clarification from X (formerly Twitter).
More importantly, this shows how social media users look to validate their own ideological leanings by engaging with the AI chatbot. If the intent of the developers of these tools is malicious, then all hell can break loose.
Harmful content
The incident has raised key questions regarding accountability for AI-generated misinformation, challenges of content moderation and the need for procedural safeguards.
Like corporations, AI chatbots are not human. They cannot be granted unfettered free speech rights, but the question remains whether AI-generated outputs fall under existing legal frameworks governing speech.
While the use of technology to mislead voters is not a new phenomenon, generative AI technologies can create believable, high-quality content tailored to specific audiences, making disinformation campaigns more effective.
A study conducted by London-based research organisation, ‘The Alan Turing Institute’, revealed that the disinformation created by LLMs appears so authentic that most people cannot identify it as AI-generated and hence cannot discern truth from falsehood.
Sophisticated generative AI tools can now create cloned human voices and hyper-realistic images, videos and audio in seconds, at minimal cost. When strapped to powerful social media algorithms, this fake and digitally created content can spread far and fast and target highly specific audiences, potentially taking campaign dirty tricks to a new low.
Disinformation campaigns could exploit existing societal divisions, deepening ideological rifts and fostering animosity between different political factions. This polarisation could result in social unrest and increased hostility among groups.
As AI-generated disinformation proliferates, public trust in media, government institutions, and even scientific communities would rapidly diminish. People may become sceptical of genuine news reports, leading to a fragmented information ecosystem where only echo chambers thrive.
Generating disinformation via LLMs is cheaper and faster than traditional methods, enabling even low-resource actors to launch sophisticated operations.
The 2024 US presidential election had a taste of what AI-generated political disinformation could do to poison a voter's mind.
Existential threat to the human race
Geoffrey Hinton, a Nobel laureate in Physics often called the ‘Godfather of AI’, has raised serious concerns that AI could pose a significant risk to humanity, potentially leading to human extinction within the next three decades. Hinton is a British-Canadian computer scientist known for his ground-breaking work on artificial neural networks.
He believes that the kind of intelligence being developed in AI is very different from human intelligence and that digital systems can learn and share knowledge much faster than biological systems. He has been warning that AI systems might develop subgoals which could lead to unintended consequences where humans will no longer be in control of their destiny.
Hinton is part of a growing number of researchers and scientists who feel that the societal impacts of AI could be so profound that we may need to rethink politics, as AI could lead to a widening gap between rich and poor. At the same time, Hinton acknowledges AI’s potential to do enormous good, and argues that it is important to find ways to harness its power for the benefit of humanity.
Renowned Israeli historian and author Yuval Noah Harari too has a very pessimistic view about the future of AI. He argues that AI is an unprecedented threat to humanity because it is the first technology in history that can make decisions and create new ideas by itself. All previous human inventions have empowered humans, because no matter how powerful the new tool was, the decisions about its usage remained in our hands.
“There is some fatal flaw in human nature that tempts us to pursue powers we don’t know how to handle,” Harari rues.
Global coordination needed
Artificial Intelligence is a double-edged sword; it can cut both ways. While it has the potential to transform human lives at a pace never seen in history, there are also possibilities of the technology being misused to spread disinformation and chaos.
These risks underscore the urgency of developing robust safeguards, international cooperation frameworks, and AI literacy programmes to counterbalance LLMs’ potential for misuse in political arenas.
The time is ripe for a coordinated global strategy to regulate an emerging sector that has a profound impact on societies. There is a greater need now than ever before for global cooperation on tackling the risks of AI, which include potential breaches of privacy and the displacement of jobs.
Current regulations governing LLM use in politics show limited effectiveness, with significant gaps in enforcement, scope, and adaptability to evolving AI capabilities. While some frameworks exist, their real-world impact remains constrained by political priorities, technological arms races, and jurisdictional fragmentation.
While initiatives like the European Union AI Act mandate transparency, enforcement struggles to keep pace with rapidly evolving models. The focus of the joint global effort should be on overcoming the long-standing fault line between regulation and promotion. Companies willing to invest in AI would want to prevent over-regulation that could stifle innovation.