This Move From Google Could Make Gemini AI Less Reliable Than ChatGPT; Here's Why
Google’s recent internal guideline change for contractors working on its Gemini AI has raised concerns about the model’s reliability, especially on sensitive topics like healthcare, TechCrunch reported. The change could lead to more mistakes in the information shown to users, potentially making Gemini less trustworthy than OpenAI’s ChatGPT.
Until now, contractors evaluating Gemini’s responses could skip prompts that fell outside their expertise. A contractor with no scientific background, for example, could skip a prompt asking a niche question about cardiology.
But last week, GlobalLogic, the outsourcing firm whose contractors work on Gemini, announced a change from Google: contractors are no longer allowed to skip such prompts, regardless of their own expertise, the report said.
This has led to direct concerns about Gemini’s accuracy on certain topics, as contractors are now tasked with evaluating highly technical AI responses about issues like rare diseases in which they have no background.
Internal messages seen by TechCrunch revealed that the guidelines used to say: “If you do not have critical expertise (e.g. coding, math) to rate this prompt, please skip this task.”
The guidelines now say: “You should not skip prompts that require specialized domain knowledge.” Contractors are instead advised to “rate the parts of the prompt you understand” and to add a note that they lack the necessary domain knowledge.
“I thought the point of skipping was to increase accuracy by giving it to someone better?” one contractor noted in internal correspondence seen by TechCrunch.
Under the new guidelines, contractors can skip prompts in only two cases: if they are “completely missing information,” such as the full prompt or response, or if they contain harmful content that requires special consent forms to evaluate.
(with inputs from TechCrunch)