A Broadcom filing with the Securities and Exchange Commission (SEC) disclosing the Broadcom–Google Tensor Processing Unit (TPU) deal and related AI infrastructure plans offers a useful window into how AI infrastructure is being secured ahead of fully proven demand.
What stands out is the use of linked long-term agreements to structure chips, networking, and compute capacity, rather than building them out incrementally. This suggests a shift away from sequential deployment toward upfront coordination across the stack.
In its Form 8-K filed with the SEC on April 6, 2026, Broadcom disclosed a long-term agreement with Google to develop and supply custom TPUs, alongside a separate supply assurance agreement covering networking and related infrastructure through 2031. This separates compute development from infrastructure supply, while still tying them together contractually. The inclusion of a supply assurance agreement is particularly notable, as it points to an emphasis on securing availability, not just designing components.
The filing itself is brief and leaves out financial terms, but the structure is telling. Chip development and infrastructure delivery are being coordinated through formal agreements rather than phased procurement. That kind of coordination typically reflects planning over longer time horizons, where capacity and supply continuity become constraints.
A second part of the disclosure involves Anthropic. Starting in 2027, Anthropic is expected to access around 3.5 gigawatts (GW) of TPU-based AI compute capacity through Broadcom. That is a significant figure, but the filing makes clear that actual usage will depend on Anthropic's commercial performance. "In connection with this deployment, the parties are in discussions with certain operational and financial partners," the filing states.
That condition is easy to miss, but it matters. The capacity is described as available rather than fully committed, with usage dependent on future demand. This indicates that provisioning is being planned ahead of confirmed utilization, rather than strictly tied to contracted demand. The reference to ongoing partner discussions also suggests that elements of deployment and financing are still being worked out.
There are still gaps. The filing mentions ongoing partner discussions but doesn’t say who they are or how the deployment will ultimately be structured. It also leaves out specifics on volumes, timelines, and costs. Those omissions make it difficult to assess how much of the capacity is firmly allocated versus still contingent.
Even so, the broader direction is fairly clear. Chips, networking, and compute access are being coordinated through longer-term agreements instead of being treated as separate decisions. This points to a more integrated approach to infrastructure planning, where different layers are arranged in parallel rather than sequentially.
Taken together, the disclosures point to an approach in which parts of the infrastructure stack are arranged in advance while actual usage remains tied to future demand. That combination of early coordination and conditional utilization highlights how infrastructure is increasingly being positioned ahead of fully validated demand.