The model is described as the company’s most capable pretrained foundation model to date, The Information reports, citing an internal document. During pretraining, the first phase of AI model development, a model learns fundamental patterns and relationships from large datasets.
According to the memo by Megan Fu, a product manager at Meta Superintelligence Labs, Avocado outperforms the best publicly available foundation models. In areas such as knowledge, visual perception, and multilingual performance, it can even compete with leading fully trained models—despite not yet having gone through the second training stage. This later phase, known as post-training, fine-tunes models for specific tasks.
Avocado is also significantly more compute-efficient: ten times more efficient than Maverick and one hundred times more efficient than Behemoth, two earlier Meta models. Meta reportedly achieved these gains through higher-quality training data, improvements to the technical stack, and changes to its training methodology.
A critical test for Zuckerberg’s multi-billion-dollar bet
These advances are particularly important for Meta after the company faced major setbacks with its Llama 4 model in 2025. The release was delayed multiple times, benchmarks were reportedly “optimized,” and developers expressed disappointment with the model’s real-world performance. The issues triggered a major restructuring of Meta’s AI division, including the departure of Yann LeCun.
Meta subsequently invested $14.3 billion in startup Scale AI and brought in its CEO Alexandr Wang to lead a new unit called Meta Superintelligence Labs.
The company is now spending at record levels on AI. For 2026, Meta expects capital expenditures between $115 billion and $135 billion, roughly 73% more than in 2025. CTO Andrew Bosworth recently said at the World Economic Forum in Davos that Meta’s AI models are “very good,” but still require substantial post-training. CEO Mark Zuckerberg echoed this view during the latest earnings call, adding that Meta plans to release new models steadily throughout the year.
According to industry rumors, Meta may move away from the open-source approach used for its Llama models with Avocado. A separate model focused on visual applications, codenamed “Mango,” is also reportedly in development.
Conclusion
The completion of Avocado’s pretraining highlights Meta’s attempt to reset its AI strategy after the troubled Llama 4 cycle. Strong gains in efficiency and baseline performance suggest that the company is prioritizing scalable, cost-effective foundation models as it ramps up record-level investment. Whether Avocado can translate these technical advances into competitive, fully trained systems will be a key test of Meta’s renewed AI ambitions in 2026.