AI Accelerators Analysis

The US dominates AI accelerator production through NVIDIA's near-monopoly on training GPUs. Export controls have severely limited China's access to cutting-edge chips, though domestic alternatives are emerging.

Key Metrics

Effective compute produced

Effective FLOPs ≈ (Units shipped) × (Peak FLOPs) × (Realized MFU)
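The identity above can be written as a one-line function; all numbers below are illustrative placeholders, not real shipment or utilization data.

```python
def effective_flops(units_shipped: int, peak_flops: float, realized_mfu: float) -> float:
    """Effective FLOPs ≈ units shipped × peak FLOPs per unit × realized MFU."""
    return units_shipped * peak_flops * realized_mfu

# Hypothetical inputs: 100k accelerators, 1e15 peak FLOP/s each, 40% realized MFU.
print(f"{effective_flops(100_000, 1e15, 0.40):.2e}")  # ≈ 4.00e+19
```

Realized MFU (model FLOPs utilization) is the fraction of peak that training workloads actually sustain, which is why the software stack matters as much as the silicon.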

System throughput

Throughput is limited by min(Compute, Memory bandwidth, Interconnect, Power)
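One way to read the min() bound: express each constraint as the throughput the system could sustain if that resource alone were the limit, all in the same units, then take the tightest one. A minimal sketch, with all limits made up for illustration:

```python
def system_throughput(compute: float, mem_bw: float,
                      interconnect: float, power: float) -> float:
    """Deliverable throughput is capped by the tightest bottleneck.

    Each argument is the throughput sustainable if only that resource were
    the limit, all in the same units (e.g. effective FLOP/s).
    """
    return min(compute, mem_bw, interconnect, power)

# Illustrative (made-up) limits: this configuration is memory-bandwidth bound.
limits = {"compute": 1000.0, "mem_bw": 400.0, "interconnect": 600.0, "power": 800.0}
print(system_throughput(**limits))  # 400.0
```

In this sketch, adding more raw compute would not help until memory bandwidth improves, which mirrors the supply-chain point below: the binding constraint can sit anywhere in the stack.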

What matters in this layer

Dominance comes from the full stack: architectures, compilers, supply contracts, and the ability to scale systems. Export controls and allocation policies can redirect global compute flows.

Supply chain coupling

Accelerators are the intersection of leading fabs, advanced packaging, and HBM. A constraint in any upstream layer appears here as shipping delays.

Software moat

Compilers, kernels, and libraries determine real MFU. Software ecosystems are sticky and convert hardware advantage into durable platform power.

Below is a first‑principles embedded module that turns “Effective FLOPs produced” into a compact, scroll‑driven comparison.

Effective FLOPs produced (indexed)

[Scroll-driven comparison module: bars for USA & Allies, China, and Rest of world, indexed so the maximum value = 100. Dataset values are placeholders pending final numbers.]
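The max = 100 indexing used by the module can be sketched in Python as follows; the raw values here are placeholders, not real data, to be replaced once the dataset is locked.

```python
def index_to_max(values: dict[str, float]) -> dict[str, float]:
    """Rescale so the largest entry maps to 100 and the rest are proportional."""
    top = max(values.values())
    return {name: 100.0 * v / top for name, v in values.items()}

# Placeholder values only; swap in the final dataset when it is locked.
raw = {"USA & Allies": 80.0, "China": 20.0, "Rest of world": 5.0}
print(index_to_max(raw))  # {'USA & Allies': 100.0, 'China': 25.0, 'Rest of world': 6.25}
```

Indexing keeps the bar layout stable while the underlying numbers change, since only ratios are displayed.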

NVIDIA's H100 Dominates AI Training

NVIDIA's H100 GPU continues to be the gold standard for large-scale AI model training, with US-based hyperscalers deploying hundreds of thousands of units in their data centers. The company controls over 80% of the AI training chip market.

2 days ago Hardware

Export Controls Limit Chinese Access to Advanced GPUs

US export controls have effectively blocked Chinese entities from acquiring the most advanced AI accelerators, forcing a shift to domestic alternatives that lag two to three generations behind in performance and efficiency.

5 days ago Policy

Huawei's Ascend 910C Gains Traction

Despite sanctions, Huawei has developed the Ascend 910C AI accelerator using older process nodes. While performance remains below cutting-edge US chips, domestic adoption is growing among Chinese AI labs.

1 week ago Competition

NVIDIA Blackwell Architecture Ships

NVIDIA has begun shipping its next-generation Blackwell architecture GPUs, offering 4x the training performance of H100 for large language models. Demand far exceeds supply.

2 weeks ago Product Launch

Google TPU v5 Powers Gemini Models

Google has deployed its fifth-generation Tensor Processing Units (TPUs) across its data centers, optimized specifically for training and serving large language models like Gemini.

3 weeks ago Technology