NVIDIA CEO Admits China AI Accelerator Market Share Hits Zero, Micron Says AI Consumes Over Half Global Memory


Bottom Line

Three data points from the semiconductor industry converged on the same day, outlining the structural changes underway in the AI hardware market:

  1. NVIDIA's CEO publicly acknowledged that, due to US export controls, the company's share of China's high-end AI processor market has dropped to zero
  2. Huawei expects 2026 Ascend AI chip revenue of $12 billion, up 60% from 2025
  3. Micron's CEO stated on the earnings call that AI demand now consumes over half of global DRAM and NAND bit capacity

Together, these three points lead to one conclusion: the global AI compute market is splitting into two independent supply chain systems, and memory capacity is becoming the next bottleneck.

Ripple Effects of NVIDIA’s Exit from China

Market Share Changes

| Time | NVIDIA China Share | Main Competitor | Key Event |
| --- | --- | --- | --- |
| 2023 Q4 | ~90% | Huawei Ascend 910B | First round of A100/H100 export bans |
| 2024 | ~50% | Huawei Ascend 910B/C | H20 special edition launched |
| 2025 | ~20% | Huawei Ascend + Biren | H20 also restricted |
| 2026 Q2 | 0% | Huawei Ascend dominant | CEO publicly confirms zero share |

Who’s Filling the Void?

Huawei Ascend is the biggest beneficiary:

  • Expected 2026 AI chip revenue: $12B (+60% YoY)
  • DeepSeek V4 was specifically optimized for Ascend chips — that single decision redirected billions in orders from NVIDIA to Huawei
  • Domestic AI chip companies (Biren, Moore Threads, etc.) are also accelerating their catch-up
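The revenue figures above also pin down the 2025 base. As a quick sanity check on the numbers cited in the article:

```python
# Sanity check on the Huawei Ascend revenue figures cited above:
# $12B expected in 2026 at +60% YoY implies a 2025 base of 12 / 1.6 = $7.5B.
revenue_2026_busd = 12.0  # expected 2026 Ascend AI chip revenue, $B (from the article)
yoy_growth = 0.60         # stated year-over-year growth

implied_2025_busd = revenue_2026_busd / (1 + yoy_growth)
print(f"Implied 2025 revenue: ${implied_2025_busd:.1f}B")  # → $7.5B
```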

Impact scope:

  • NVIDIA’s data center revenue from China will continue declining in financial reports
  • Chinese AI companies will increasingly rely on domestic hardware + open-source software stacks
  • Global AI training infrastructure is developing a “dual-track” system

Micron: AI Is Eating Global Memory Capacity

Key statement from Micron’s FY2026 Q2 earnings call:

"AI demand is driving sharp growth in DRAM and NAND bit requirements in data centers… AI now consumes more than half of total global memory capacity."

What this means:

1. Memory Prices May Continue Rising

If AI consumes over 50% of memory capacity, the remaining capacity for consumer electronics (phones, PCs, automobiles) will be squeezed. This could lead to:

  • Server memory (DDR5, HBM) prices staying high
  • Price increases spilling over into consumer-grade memory
  • Accelerated investment cycles for memory manufacturers to expand capacity

2. HBM Supply Tightness Will Continue

AI training and inference demand for HBM (High Bandwidth Memory) is particularly strong. NVIDIA GPUs, Huawei Ascend, and various custom AI chips all need large amounts of HBM. Currently, HBM capacity is controlled by three companies: SK Hynix, Samsung, and Micron, with expansion cycles taking 18-24 months.

3. Relationship with AI Capex

BofA’s latest forecast: global hyperscale AI capex will exceed $800 billion in 2026, potentially crossing $1 trillion in 2027. A significant portion of this spending will flow to:

  • GPU/AI accelerator procurement (NVIDIA, Huawei, AMD, custom chips)
  • Memory procurement (DRAM, HBM, NAND)
  • Data center infrastructure
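The forecast above also implies a growth rate. Taking the two headline figures at face value (the exact 2027 number is only "potentially" $1 trillion):

```python
# Implied growth in the BofA hyperscale AI capex forecast cited above:
# $800B in 2026, potentially crossing $1T in 2027.
capex_2026_busd = 800.0   # 2026 forecast, $B (from the article)
capex_2027_busd = 1000.0  # 2027 threshold, $B (from the article)

implied_growth = capex_2027_busd / capex_2026_busd - 1
print(f"Implied 2026→2027 capex growth: {implied_growth:.0%}")  # → 25%
```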

Impact on Developers and Enterprises

If You’re in China

  • Hardware procurement: NVIDIA high-end GPUs are no longer available — need to switch to Ascend or other domestic solutions
  • Software adaptation: Need to pay attention to domestic chip software ecosystems (CANN, MindSpore, etc.)
  • Cloud services: Domestic cloud providers are switching their AI instances wholesale to domestic chips

If You’re Outside China

  • Memory costs: Prices for the memory behind AI inference and training may not come down quickly
  • Supply chain risk: If your products rely on Chinese supply chains, watch for transmission effects from US-China tech decoupling
  • Opportunity window: The rise of domestic AI chip ecosystems means new toolchain, framework, and service demands

Global Assessment

The AI hardware market is transitioning from “one NVIDIA-dominated global market” to “two parallel regional markets.” This is not a short-term fluctuation but a structural change driven by export controls, geopolitics, and industrial self-sufficiency.

For AI application developers, this means:

  • Model deployment needs to consider hardware availability in target markets
  • Cross-platform compatibility (CUDA vs CANN vs ROCm) will become increasingly important
  • Hardware neutrality of open-source models will become a competitive advantage
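The cross-platform concern above can be made concrete. Below is a minimal, framework-agnostic sketch of backend selection; the backend names, probe functions, and preference order are illustrative assumptions, not any framework's official API (real deployments would probe e.g. `torch.cuda` for CUDA/ROCm builds or Huawei's `torch_npu` plugin for CANN, whose exact APIs vary by version):

```python
# Minimal sketch of hardware-neutral backend selection for model deployment.
# All backend names and probes here are illustrative stand-ins.
from typing import Callable, Dict, List

def pick_backend(probes: Dict[str, Callable[[], bool]],
                 preference: List[str]) -> str:
    """Return the first available backend in preference order, else 'cpu'."""
    for name in preference:
        probe = probes.get(name)
        if probe is not None and probe():
            return name
    return "cpu"

# Example: a China-market deployment might prefer CANN over CUDA.
probes = {
    "cuda": lambda: False,  # stand-in: no NVIDIA GPU available
    "cann": lambda: True,   # stand-in: Ascend NPU present
    "rocm": lambda: False,  # stand-in: no AMD GPU
}
print(pick_backend(probes, ["cann", "cuda", "rocm"]))  # → cann
```

Keeping the preference list as data, rather than hard-coding one vendor's device string, is what lets the same deployment code target either side of the dual-track market.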