Google is reportedly using Anthropic’s Claude to improve the responses generated by its own AI model, Gemini.
Contractors hired by the tech giant are shown answers generated by Gemini and Claude in response to user prompts. They then have up to 30 minutes to rate each model’s output on criteria such as truthfulness and verbosity, according to a report by TechCrunch.
Google’s contractors use an internal platform to compare Gemini’s outputs against those of other AI models. Recently, they began noticing references such as “I am Claude, created by Anthropic” in some of the outputs shown to them.
Based on their assessments, the contractors noted in internal discussions that “Claude’s safety settings are the strictest” among AI models, including Gemini. When given certain unsafe prompts, Claude refused to respond, while Gemini’s output was flagged as a serious safety violation for including “nudity and bondage”, the report said.
Typically, tech companies evaluate the performance of their AI models against industry benchmarks. Under Anthropic’s terms of service, customers are not allowed to access Claude to “build a competing product or service” or “train competing AI models” without the Google-backed startup’s approval.
While it is unclear whether that restriction extends to investors, Google DeepMind spokeswoman Shira McNamara said comparing model outputs for evaluation is in line with standard industry practice. “However, any suggestion that we used Anthropic models to train Gemini is incorrect,” McNamara said.