‘Sorry, we are unreliable’: Google apologised to government on Gemini’s results on PM Modi – Times of India

NEW DELHI: Unable to offer any explanation for the unsubstantiated comments on PM Modi made by its AI platform Gemini, American tech giant Google said ‘sorry’ to the govt and called the platform ‘unreliable’, Minister of State for IT & Electronics Rajeev Chandrasekhar has told TOI.
“We had sent them a notice, seeking an explanation on the unsubstantiated results thrown up by Gemini regarding a particular query on PM Modi. They replied and said, ‘Sorry, the platform is unreliable’,” the minister said, as the govt announced that AI platforms will now need a ‘permit’ from the state to operate in the country. “That is not a defence you can take,” the minister said of the Google response, as he criticised a section of AI platforms that have been offering ‘consumer solutions’ even while they are in a trial phase.
The minister said that India cannot be used as a test ground by AI platforms, especially when they are increasingly facing flak across the world for giving out unsubstantiated, biased, misinformed or unverified results to users. “The AI data is coming out straight from the lab on to the public internet, without any testing, and without any guardrails. And then, when they are caught flat-footed, they say sorry, it is unreliable.”
Giving an example of Google’s Gemini, he said, “Google Gemini is a classic example. They have gone from the lab to the public internet without fear of any consequences of violating the law. And when they are caught out on that, they say sorry, the information is unreliable.”
The minister made it clear that the Indian government will not allow under-developed platforms to launch full-fledged services, especially when they do not make proper disclosures to users that the information they throw up can be misleading, false or unlawful.
“We have said that the Indian internet is not your lab. If you are moving from the lab and it is still under testing and it is unreliable, you have to put out a disclaimer on the platform saying that this is under testing. Also, you have to explicitly inform the user of your platform — in the consent form and through the terms of use — that this is an error-prone platform and this could produce unlawful content… that this is an undertrial platform and may output things that are unlawful and are incorrect.”
The minister said that AI platforms can’t use India as an extension of their lab. “You cannot consider the Indian internet and Indian consumers as an extension of your R&D. You have to respect the Indian consumers, and our digital nagriks. You have to make it very clear to the Indian digital nagriks that your platform may be error-prone, may be unreliable, may hallucinate. Therefore, tell them in advance so that they are dealing with the platform with full awareness and knowledge.”
The minister said AI platforms can be prosecuted under Indian IT and criminal laws for violations and wrong information. “We have the criminal law, and Indian IT Act and IT Rules. These state very clearly that there are 12 types of unlawful content you cannot create. Now if you go ahead and generate unlawful content violative of either the IT Act and rules or the criminal law, you cannot take a defence saying that I’m unreliable. You will be prosecuted.”
The minister said that Gemini’s unverified and unsubstantiated results have been “reckless, irresponsible and certainly disrespecting” of Indian consumers. “They are taking the Indian digital nagriks for granted, which we will put a stop to.”
The govt had on Saturday issued an advisory for AI-led startups that spoke about the need to label any unverified information as potentially false and error-prone. The fresh notice came almost two-and-a-half months after the govt had issued an advisory on the matter of deepfakes, following several incidents of synthetically-made content flowing into social media and internet channels.


