
Part 3 of 3
🤖 Ghostwritten by Claude · Curated by Tom Hundley
The month that agentic AI got its standards body, its development platform, and its interoperability layer.
If November was the model wars, December was the infrastructure play. The major labs spent this month building the foundations for what comes next: agents that can work together across systems, platforms, and vendors.
The headline event was the founding of the Agentic AI Foundation (AAIF) under the Linux Foundation—with OpenAI, Anthropic, and Block as cofounders. They donated MCP, Agents.md, and Goose as foundational technologies. Google, Microsoft, AWS, Bloomberg, and Cloudflare joined as founding members.
Meanwhile, Google launched Antigravity, the first multi-vendor agentic development platform. OpenAI shipped GPT-5.2. NVIDIA released Nemotron 3, purpose-built for multi-agent systems. And the Trump administration signed an AI Executive Order targeting state AI regulations.
This wasn't a month of breakthroughs. It was a month of blueprints.
In what may be the most significant governance event in AI since the founding of the Partnership on AI, three fierce competitors—OpenAI, Anthropic, and Block—joined forces to establish the Agentic AI Foundation under the Linux Foundation umbrella.
The founding donation included three core technologies:
Model Context Protocol (MCP) from Anthropic—the emerging standard for connecting AI models to tools and data sources (a minimal server example is sketched below)
Agents.md specification from Block—a declarative format for describing agent capabilities and requirements
Goose framework from Block—an open-source agentic development toolkit
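To make the MCP piece concrete, here is a minimal sketch of a tool server using the official Python SDK's FastMCP helper. The server name and the tool itself are placeholders invented for illustration, not part of any donated codebase:

```python
# Minimal MCP tool server sketch using the official Python SDK's FastMCP helper.
# The server name and the tool are illustrative placeholders.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("release-notes")

@mcp.tool()
def get_release_notes(product: str) -> str:
    """Return release notes for a product (placeholder implementation)."""
    return f"No release notes found for {product}."

if __name__ == "__main__":
    # Serve over stdio so any MCP-capable client or agent can call the tool.
    mcp.run()
```

In principle, any MCP-aware client can discover and call that tool without bespoke glue code, which is exactly the interoperability the foundation is chartered to protect.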
This is the industry saying: We need standards before we have chaos.
Consider what happens without interoperability standards: every vendor ships its own way of wiring agents to tools and to each other, and every cross-vendor integration becomes bespoke work.
The AAIF is an explicit bet that open standards will grow the pie faster than proprietary lock-in can capture it.
Beyond the three cofounders, the membership roster reads like a who's-who of enterprise technology:
| Company | Strategic Interest |
|---|---|
| Google | Antigravity platform uses MCP |
| Microsoft | Azure AI needs agent interoperability |
| AWS | Bedrock customers deploying agents |
| Bloomberg | Financial services agent orchestration |
| Cloudflare | Edge deployment of agentic workloads |
For organizations building with AI agents, this changes the risk calculus:
MCP is now safe to adopt: With all major vendors backing it, MCP investment is unlikely to strand you
Multi-vendor agent orchestration is coming: Plan for heterogeneous agent deployments
Tool developers should build MCP first: The standard has achieved critical mass
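On the consuming side, discovering what a server offers is equally small. Here is a hedged sketch of an MCP client listing a server's tools over stdio with the official Python SDK; the server command and filename are illustrative placeholders:

```python
# Sketch of an MCP client discovering a server's tools over stdio, using the
# official Python SDK. The server command/path is an illustrative placeholder.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

params = StdioServerParameters(command="python", args=["release_notes_server.py"])

async def main() -> None:
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()          # MCP handshake
            tools = await session.list_tools()  # capability discovery
            print([tool.name for tool in tools.tools])

asyncio.run(main())
```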
Google's Antigravity platform—announced December 3 and generally available December 17—is the first production-ready development environment built specifically for agentic AI.
The platform emerged from Google's acquisition of Windsurf (the VS Code fork that pioneered inline AI assistance), but Antigravity goes far beyond code completion. It splits work across two views:
Editor View: Traditional coding with AI pair programming
Manager View: Agent orchestration and supervision
Here's what's remarkable: Google built a platform that treats its own models as first among equals, not the only option:
| Model | Availability |
|---|---|
| Gemini 3 Pro | Native integration |
| Gemini 3 Flash | Native integration |
| Claude Sonnet 4.5 | Via Anthropic API |
| OpenAI open-weight models | Via local deployment |
This is a strategic departure from the walled-garden approach. Google is betting that owning the platform layer is more valuable than forcing model exclusivity.
Antigravity represents a new category of tooling. It's not an IDE that happens to have AI features. It's an agent management system that happens to support coding.
For enterprise teams, the key capability is supervising and orchestrating fleets of agents, not just faster code completion.
After November's feature-packed GPT-5.1 family, OpenAI's December release was deliberately understated. GPT-5.2 prioritizes stability, speed, and real-world performance over headline benchmarks.
The GPT-5.2 lineup:
GPT-5.2 Thinking: Extended reasoning for complex problems
GPT-5.2 Instant: General-purpose workhorse
GPT-5.1-Codex-Max received a silent update alongside GPT-5.2's release, lifting its SWE-Bench Pro score to 55.6%.
The SWE-Bench Pro number matters. This benchmark tests models on real engineering tasks from open-source projects—not contrived coding puzzles. 55.6% represents genuine capability improvement for production development workflows.
NVIDIA's Nemotron 3 family launched with explicit positioning for multi-agent systems:
| Model | Parameters | Use Case |
|---|---|---|
| Nano | 30B | Edge agents, cost-sensitive workloads |
| Super | 100B | Workstation-class reasoning |
| Ultra | 500B | Data center deployment, frontier tasks |
The architecture is interesting: Nemotron 3 uses a hybrid latent mixture-of-experts (MoE) approach.
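NVIDIA hasn't detailed the hybrid latent design in the sources behind this article, but the core MoE mechanic is easy to illustrate: a router activates only a few expert networks per token, so total parameter count stays large while per-token compute stays small. A generic top-k gating sketch, not Nemotron's actual implementation:

```python
# Generic top-k mixture-of-experts routing, for illustration only
# (this is the standard MoE idea, not NVIDIA's hybrid latent architecture).
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def moe_layer(tokens, router_w, experts, k=2):
    """tokens: (T, d); router_w: (d, n_experts); experts: list of callables d -> d."""
    scores = softmax(tokens @ router_w)          # routing weights, shape (T, n_experts)
    top_k = np.argsort(scores, axis=-1)[:, -k:]  # the k highest-scoring experts per token
    out = np.zeros_like(tokens)
    for t in range(len(tokens)):
        for e in top_k[t]:                       # only k experts run for each token
            out[t] += scores[t, e] * experts[e](tokens[t])
    return out

# Tiny runnable demo with random "experts".
rng = np.random.default_rng(0)
experts = [lambda x, W=rng.standard_normal((8, 8)): x @ W for _ in range(4)]
print(moe_layer(rng.standard_normal((3, 8)), rng.standard_normal((8, 4)), experts).shape)
```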
NVIDIA is betting that the agentic future means many models running simultaneously, and Nemotron 3 is designed for exactly this workload pattern. It's not about having the single best model; it's about being the best component in a system of models.
All three sizes are available under permissive licensing.
This positions NVIDIA as the Switzerland of model providers: they'll sell you the chips regardless of whose models you run.
While Gemini 3 Pro grabbed headlines in November, the December release of Gemini 3 Flash may be more significant for everyday use.
Flash is Google's answer to Claude Haiku and GPT-5.2 Instant.
Gemini 3 Flash became the default model across several Google products.
Flash's economics matter for the enterprise: for applications where speed and cost matter more than peak reasoning capability, Flash changes the calculation.
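One way to act on that calculation is a simple router that keeps routine traffic on the cheap, fast tier and escalates only genuinely hard requests to a larger model. The sketch below is hypothetical; the model IDs, price figures, and escalation heuristic are our illustrative assumptions, not published values:

```python
# Hypothetical cost/latency-aware model router. Model IDs, pricing, and the
# escalation heuristic are illustrative assumptions, not published figures.
from dataclasses import dataclass

@dataclass
class ModelTier:
    model_id: str
    usd_per_1k_tokens: float  # assumed placeholder pricing

FLASH = ModelTier("gemini-3-flash", 0.10)
PRO = ModelTier("gemini-3-pro", 1.00)

def route(prompt: str, needs_deep_reasoning: bool) -> ModelTier:
    """Send routine traffic to the fast, cheap tier; escalate only hard problems."""
    if needs_deep_reasoning or len(prompt) > 20_000:
        return PRO
    return FLASH

# Example: a short summarization request stays on the fast tier.
print(route("Summarize this meeting transcript.", needs_deep_reasoning=False).model_id)
```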
President Trump's AI Executive Order established a new federal framework with several notable provisions:
National AI Strategy Office established within the White House
DOJ AI Litigation Task Force created to coordinate federal response to AI-related cases
Federal preemption signals regarding state AI regulations
R&D acceleration mandates for federal agencies
The most contentious provision: explicit targeting of state-level AI regulations, particularly the Colorado AI Act (effective August 2026).
The executive order directs DOJ to evaluate whether state AI laws unconstitutionally burden interstate commerce. This sets up potential legal challenges to measures like the Colorado AI Act.
For enterprises, this creates an uncomfortable regulatory gap: state requirements remain on the books while their enforceability is contested in federal court.
Recommended posture: Continue building toward the stricter state standards while monitoring federal legal challenges.
Disney signed a landmark deal with OpenAI to use Sora for content creation.
This is Disney's first major generative AI partnership. The conservative company's willingness to engage signals broader industry comfort with the technology.
OpenAI launched a training program specifically for journalism.
This is OpenAI getting ahead of the "AI threatens journalism" narrative by making journalists AI-literate stakeholders.
Zhipu AI released GLM-4.7, its latest frontier model.
For enterprises operating in China, this expands the options beyond Western providers.
Anthropic retained Wilson Sonsini—the premier tech IPO law firm—fueling speculation about a 2026 public offering. No timeline announced, but the infrastructure is being put in place.
The AAIF released its first working specification: Skills Interoperability Protocol (SIP). This defines how agents advertise and negotiate capabilities.
Early, but significant. This is how agents will find and work with each other.
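The SIP draft isn't quoted here, but the general pattern of advertising and matching capabilities can be sketched with a hypothetical data model. The field names and matching logic below are our assumptions, not the spec:

```python
# Hypothetical illustration of capability advertisement and matching between
# agents; field names and matching logic are assumptions, not the SIP spec.
from dataclasses import dataclass, field

@dataclass
class SkillAdvertisement:
    agent_id: str
    skills: set[str] = field(default_factory=set)  # e.g. {"summarize", "translate"}

@dataclass
class TaskRequest:
    required_skills: set[str]

def negotiate(request: TaskRequest, agents: list[SkillAdvertisement]) -> list[str]:
    """Return the agents whose advertised skills cover the task's requirements."""
    return [a.agent_id for a in agents if request.required_skills <= a.skills]

# Example: only the research agent covers both required skills.
agents = [
    SkillAdvertisement("coder", {"write_code", "run_tests"}),
    SkillAdvertisement("researcher", {"search_web", "summarize"}),
]
print(negotiate(TaskRequest({"search_web", "summarize"}), agents))
```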
December marked the consolidation of agentic AI around a specific set of standards: MCP, Agents.md, and the broader AAIF ecosystem. Organizations not adopting these standards will face increasing friction.
The action items follow from the adoption guidance above: evaluate MCP support across your tooling and vendor roadmaps, plan for heterogeneous agent deployments, and build MCP-first where you expose tools.
The Antigravity launch proved that even Google doesn't think single-vendor AI is viable. Enterprise architectures must support models and agents from multiple vendors working side by side.
December showed us the new strategic battleground: not models, but platforms.
| Platform | Positioning |
|---|---|
| Antigravity | Multi-vendor agentic development |
| Claude Code | Single-vendor, deep integration |
| GitHub Copilot Workspace | Microsoft ecosystem, agentic coding |
| Cursor Pro | Independent, model-agnostic |
The choice of platform may matter more than the choice of model.
The Trump EO adds another variable to compliance planning. Build architectures that can adapt to changing requirements—whether from federal preemption or state enforcement.
December 2025 was when the industry stopped fighting over models and started building the infrastructure for AI systems. The model wars may continue, but the platform wars have begun.
This article is a live example of the AI-enabled content workflow we build for clients.
| Stage | Who | What |
|---|---|---|
| Research | Claude Opus 4.5 | Analyzed current industry data, studies, and expert sources |
| Curation | Tom Hundley | Directed focus, validated relevance, ensured strategic alignment |
| Drafting | Claude Opus 4.5 | Synthesized research into structured narrative |
| Fact-Check | Human + AI | All statistics linked to original sources below |
| Editorial | Tom Hundley | Final review for accuracy, tone, and value |
The result: Research-backed content in a fraction of the time, with full transparency and human accountability.
We're an AI enablement company. It would be strange if we didn't use AI to create content. But more importantly, we believe the future of professional content isn't AI vs. human: it's AI amplifying human expertise.
Every article we publish demonstrates the same workflow we help clients implement: AI handles the heavy lifting of research and drafting, humans provide direction, judgment, and accountability.
Want to build this capability for your team? Let's talk about AI enablement →
Part 3 of 3