Today we're announcing a strategic partnership between GPU.ai and NovaCore — India's first GPU cloud to deploy NVIDIA Blackwell. Starting now, every team on GPU.ai can reserve NovaCore Blackwell capacity from the same console they already use for A100, H100, and H200 instances on the U.S. east coast. And NovaCore tenants get a one-click bridge into GPU.ai's aggregated U.S. inventory.
It's a deliberately small announcement with deliberately large implications: serious AI compute should not require serious procurement.
What's actually shipping
The partnership is real product, not a press release. Three things are live today:
- Unified inventory. NovaCore Blackwell (GB200 NVL72 racks, Hyderabad) appears in GPU.ai's availability feed alongside our existing supplier graph. You can compare it against U.S. inventory in one query and provision either with the same gpuai CLI.
- Cross-region tunneling. GPU.ai's WireGuard layer extends to NovaCore nodes, so a Blackwell pod in Hyderabad and an H100 cluster in Virginia can talk to each other without leaving private network space.
- One bill, two regions. Per-second billing, normalized to per-GPU-hour, settled through GPU.ai. No second contract. No second invoice.
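To make the billing model above concrete, here's a minimal sketch of how per-second metering rolls up to a per-GPU-hour rate on a single cross-region invoice. The rates, deployment shapes, and field names are illustrative assumptions, not actual GPU.ai billing code or API schema.

```python
# Illustrative only: per-second metering normalized to per-GPU-hour,
# settled as one invoice across two regions. All numbers are made up.

def line_item(supplier, region, gpus, seconds, rate_per_gpu_hour):
    """Cost for one deployment: metered per second, priced per GPU-hour."""
    gpu_hours = gpus * seconds / 3600
    return {
        "supplier": supplier,
        "region": region,
        "gpu_hours": gpu_hours,
        "cost": round(gpu_hours * rate_per_gpu_hour, 2),
    }

invoice = [
    # 8 Blackwell GPUs in Hyderabad for 90 minutes
    line_item("NovaCore", "hyd-1", gpus=8, seconds=5400, rate_per_gpu_hour=4.89),
    # 16 H100s in Virginia for 2 hours (hypothetical rate)
    line_item("gpu.ai/us-east", "us-east-1", gpus=16, seconds=7200, rate_per_gpu_hour=2.49),
]

total = sum(item["cost"] for item in invoice)
print(f"one bill, two regions: ${total:.2f}")  # one bill, two regions: $138.36
```

The point of the normalization is that a 90-minute burst and a multi-hour run land on the same invoice in the same unit, regardless of which supplier metered the seconds.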
Why this matters
The dominant AI cloud thesis for the past two years has been "one provider, one region, one big contract." That works until your context window grows and your inference workload needs to live where your users are. It works until compliance counsel reads the cross-border clause. It works until your training run finishes and the same nodes are now ten times overspecced for serving.
The aggregation thesis is different: the right GPU for your workload changes constantly, and your infrastructure layer should reflect that. A Blackwell rack in Hyderabad is the right answer for a long-context inference workload serving APAC users. An H100 SXM in us-east-1 is the right answer for a fine-tune on a regulated dataset. You should not have to pick a religion to use both.
NovaCore solved a hard problem on the supply side — they built a hyperscaler-grade Blackwell deployment outside the U.S. hyperscaler footprint, with bare metal, with InfiniBand, with the kind of operational discipline that lets you run sustained 512-GPU training jobs without surprises. We solved a hard problem on the demand side — pricing, availability, and deploy time across a fragmented supplier base. Putting the two together is the obvious move.
What changes for GPU.ai customers
Nothing about your existing workflow changes. The CLI is the same. The dashboard is the same. The pricing model is the same.
What's new is an additional row in availability:
gpuai availability --gpu b200

SUPPLIER         REGION      PRICE/GPU-HR   AVAILABLE
NovaCore         hyd-1       $4.89          88
gpu.ai/us-east   us-east-1   $5.29          12
gpu.ai/us-west   us-west-2   $5.49          8
Pick the row. Deploy. The tunnel, the keys, the billing — handled.
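"Pick the row" amounts to a simple selection over the availability feed: filter to rows with enough free GPUs, then take the cheapest. A hypothetical sketch; the dict field names mirror the table's columns and are assumptions, not the real API schema.

```python
# Hypothetical sketch of choosing a supplier row from the availability
# feed. Field names mirror the table's columns; this is not the real API.

rows = [
    {"supplier": "NovaCore",       "region": "hyd-1",     "price": 4.89, "available": 88},
    {"supplier": "gpu.ai/us-east", "region": "us-east-1", "price": 5.29, "available": 12},
    {"supplier": "gpu.ai/us-west", "region": "us-west-2", "price": 5.49, "available": 8},
]

def cheapest_with_capacity(rows, gpus_needed):
    """Cheapest row that can satisfy the requested GPU count, or None."""
    candidates = [r for r in rows if r["available"] >= gpus_needed]
    return min(candidates, key=lambda r: r["price"]) if candidates else None

choice = cheapest_with_capacity(rows, gpus_needed=16)
print(choice["supplier"], choice["region"])  # NovaCore hyd-1
```

For a 16-GPU job, only the Hyderabad row has capacity, and it also happens to be the cheapest; for a smaller job the selection might land in either U.S. region instead.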
What changes for NovaCore tenants
You get a single API into U.S. capacity without setting up a second cloud account. Same SSH key. Same network. Same monitoring.
Builder credits
To kick this off, we're seeding the partnership with $250,000 in joint compute credits for early-stage teams building open models, agents, and inference services. If you're shipping something we'd want to use, apply for credits — we read every application.
What's next
A few things we're actively working on with the NovaCore team:
- Cross-region NCCL benchmarks. Real numbers, published, on what you can and can't do with multi-region training today. No marketing decks.
- Reserved Blackwell tiers. Monthly and quarterly reservations on the Hyderabad cluster, priced and provisioned through GPU.ai.
- A second region. We're not announcing where yet. We are saying it's not in the U.S. and not in India.
The compute layer is being built right now. We're glad to be building it with them.
— Ranbir