Government withdraws mandate requiring AI models to seek approval before deployment

The government has withdrawn the requirement that large language models (LLMs), algorithms, and artificial intelligence (AI) models obtain its explicit approval before being made available to Indian users.

The Ministry of Electronics and Information Technology issued a fresh advisory on Friday, stating that under-tested or unreliable AI foundational models, LLMs, generative AI software, algorithms, or any other such models should be made available to Indian users only after "appropriately labelling the possible inherent fallibility or unreliability of the output generated." ET has seen a copy of the updated advisory.

The IT ministry noted that intermediaries offering AI models, LLMs, and generative AI software, among others, should employ such labelling mechanisms to notify users when the output may be inaccurate or unreliable, while it did away with the requirement for explicit government authorisation.

The IT ministry had released an advisory on March 1 requiring the "explicit permission of the government of India" before any AI models, LLMs, generative AI software, or algorithms that were under testing, in beta, or unreliable in any way could be made available to users on the Indian internet.

In the advisory released on Friday, the ministry stated that platforms and intermediaries had frequently been "negligent" in fulfilling their due-diligence obligations. Citing Rule 3(1)(b) of the Information Technology (IT) Rules, the ministry further said that all intermediaries and platforms must ensure that the use of AI models, LLMs, generative AI software, or algorithms on their platforms does not permit users to share any unlawful content.

Under Rule 3(1)(b) of the IT Rules, it is illegal to host, display, transmit, or create any content that is child sexual abuse material, pornographic, obscene, grossly defamatory, or otherwise unlawful.