India’s advisory on LLM usage causes consternation

Email queries sent to Microsoft, AWS, Oracle and other model providers concerning the advisory went unanswered.

[…]


Making AI-generated content easier to detect

The advisory’s recommendation that LLM providers watermark all generated content that could be used for deception may also prove problematic.

Meta is developing tools to identify images produced by generative AI at scale across its social media platforms — Facebook, Instagram, and Threads — but has no such tools for detecting generated audio and video. Google, too, has its own algorithms for detecting AI-generated content but has not made any announcements on this front.

What’s missing is a common standard for all technology providers to follow, experts said.

Such a standard would be useful elsewhere too: if the European Union’s AI Act is approved in April, it will introduce strict transparency obligations on providers and deployers of AI to label deepfakes and watermark AI-generated content.

Impact of the advisory on LLM providers and enterprises


Experts and analysts said the advisory, if not clarified further, could lead to significant loss of business for LLM providers and their customers, while stifling innovation.

“The advisory will put the brakes on the progress in releasing these models in India. It will have a significant impact on the overall environment as a lot of businesses are counting on this technology,” Gartner’s Mishra said.

IDC’s Giri said the advisory might lead early adopters of the technology to rush to upgrade their applications to ensure compliance.

“Adjustments to release processes, increased transparency, and ongoing monitoring to meet regulatory standards could cause delays and increase operational costs. A stricter examination of AI models may limit innovation and market expansion, potentially resulting in missed opportunities,” Giri said.

Tejasvi Addagada, an IT leader, believes that prioritizing compliance and ethical AI use can build trust with customers and regulators, offering long-term benefits such as enhanced reputation and market differentiation.

Startup exclusion creates room for confusion

The Minister of State for IT’s tweet excluding startups from the new requirements has caused further controversy, with some wondering whether it could result in lawsuits from larger companies alleging anticompetitive practices.

“The exemption of startups from the advisory might raise concerns about competition laws if it gives them an unfair advantage over established companies,” Natarajan said.

While model providers such as OpenAI, Stability AI, Anthropic, Midjourney, and Groq are widely considered to be startups, these companies do not fit the Indian government’s definition of a startup as set by the Department for Promotion of Industry and Internal Trade (DPIIT), which requires incorporation in India under the Companies Act, 2013.

The tweak in policy to exclude startups seems to be an afterthought, Mishra said, noting that many smaller, innovative startups are also under significant threat because their entire business revolves around AI and LLMs.

Experts expect further clarification from the government after the expiry of the 15-day period the advisory gives LLM providers to file reports on their actions and the status of their models.
