In a significant move ahead of the upcoming Lok Sabha polls, the Ministry of Electronics and Information Technology (MeitY) has issued an advisory to generative AI companies operating in India, such as Google and OpenAI. The advisory mandates that companies offering platforms with "under-testing/unreliable" AI systems or large language models (LLMs) obtain explicit permission from the government before making their services available to Indian users.
Minister of State for Electronics and IT, Rajeev Chandrasekhar, emphasized the imperative of safeguarding the integrity of the electoral process, citing it as the primary reason for the advisory.
Chandrasekhar elaborated on the need for regulatory measures to address potential threats posed by generative AI platforms, signaling forthcoming legislative action.
The directive comes on the heels of concerns raised over Google's AI platform Gemini, particularly the responses it generated to queries related to Prime Minister Narendra Modi.
MeitY had expressed reservations about the accuracy and legality of the platform's responses. Similarly, hallucinations by Krutrim, Ola's beta generative AI offering, had also come under scrutiny.
Chandrasekhar clarified that the advisory is a precursor to future legislation aimed at regulating generative AI platforms. He outlined the requirement for companies to seek government approval, which would effectively function as a regulatory sandbox. Companies have been instructed to demonstrate their platforms, including details of the consent architecture they adhere to.
The advisory extends not only to generative AI platforms but also to platforms facilitating the creation of deepfakes, including Adobe. Companies have been given a 15-day deadline to submit an action taken report in response to the advisory.
The advisory underscores the importance of ensuring that AI systems deployed in India do not propagate misinformation, bias, or discrimination, particularly in the context of the upcoming elections. Chandrasekhar highlighted the potential misuse of deepfakes and misinformation to influence electoral outcomes, emphasizing the need for preemptive measures to counter such threats.
While some observers question the scope of the advisory vis-à-vis the existing IT Rules, Chandrasekhar reiterated the necessity of addressing potential threats to the electoral process, underscoring the gravity of the situation in light of the forthcoming elections.