According to a report by The Information, Google DeepMind has assembled a dedicated team of researchers and engineers to strengthen the coding abilities of its Gemini models. The effort is led by DeepMind engineer Sebastian Borgeaud, who previously oversaw pretraining for DeepMind’s models. The team is focused on complex, long-horizon programming tasks such as writing new software, where models need to read files, understand user intent, and work across larger codebases.
One reason for the push is that some Google researchers reportedly view Anthropic’s coding tools as stronger than Google’s current offerings. Coding has become a major battleground for leading AI labs this year, with Google and OpenAI both trying to catch up to Anthropic in this area. In parallel, OpenAI is reportedly reallocating resources away from side projects such as Sora, which is being discontinued, as the company shifts its focus toward coding and enterprise products.
Brin is still directly involved
Google co-founder Sergey Brin and DeepMind CTO Koray Kavukcuoglu are said to be personally involved in the effort. According to the report, Brin wrote in an internal memo that Google must “urgently bridge the gap” in AI agent execution. He also reportedly argued that every Gemini engineer should be using internal agents for complex, multi-step tasks.
Brin also told employees that stronger coding capabilities could be an important step toward AI that can improve itself. The idea is that an advanced coding agent, combined with AI systems for mathematics and experimentation, could one day automate much of the work currently done by AI researchers and engineers.
Internally, Google is reportedly tracking usage of its coding tool “Jetski” through team rankings, an approach that echoes reported internal AI usage metrics at Meta. Some teams outside DeepMind are also said to require engineers to attend AI training sessions.
According to The Information, Google is increasingly emphasizing models trained on internal code, rather than relying only on coding models built for external customers. Because Google’s internal codebase differs significantly from the public code typically used to train general coding agents, those internally trained systems would not be suitable for public release. They could, however, help Google build better public models later on.