DeepSeek V4 Confirmed on Huawei Ascend Chips — Late April Launch Expected
Reuters confirms DeepSeek V4 runs on Huawei's Ascend 950PR processors, not NVIDIA. The 1-trillion-parameter MoE model is expected in late April with an Apache 2.0 release.
DeepSeek V4 is now expected in the final two weeks of April 2026, with Reuters confirming on April 4 that the model runs on Huawei's Ascend 950PR chips — not NVIDIA hardware. This makes V4 the first frontier-class model built entirely on Chinese silicon, a development with significant implications for AI export control policy.
We previously reported on V4's expected specifications in March. Since then, key details have solidified.
What's New Since March
Three developments have changed the picture:
1. Huawei Ascend confirmation. Reuters reported that DeepSeek optimized V4 specifically for Huawei's Ascend 950PR chips and gave Chinese chipmakers early optimization access while "deliberately denying that window to Western silicon suppliers." This isn't just a compatibility decision — it's a geopolitical statement.
2. Chip market impact. Chinese tech giants including Alibaba, ByteDance, and Tencent have placed advance orders for Huawei's next-generation AI chips to run V4 via their cloud services, according to BigGo Finance. The surge in demand has driven chip prices up approximately 20%.
3. Two prior delays. V4 has missed two earlier launch windows, so the current "late April" timeline should be treated as an estimate rather than a firm date. But the Reuters confirmation of Huawei chip integration suggests the model is in final testing, not still in training.
Specifications (Unchanged)
The technical specs remain consistent with our March report:
| Spec | Detail |
|---|---|
| Total parameters | ~1 trillion |
| Active parameters per token | 32–37 billion (MoE) |
| Context window | 1 million tokens |
| Multimodal | Native image and video support (reported) |
| License | Apache 2.0 (expected) |
| Training hardware | Huawei Ascend 950PR |
The Mixture-of-Experts architecture keeps inference costs manageable despite the massive parameter count: only a small subset of experts runs for each token, so per-token compute tracks the active parameters, not the full trillion. It is a strategy already proven by Mistral's Mixtral models and Meta's Llama 4 Maverick.
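To make the total-versus-active distinction concrete, here is a minimal top-k expert-routing sketch in PyTorch. The layer sizes, expert count, and top-k value are illustrative placeholders; DeepSeek has not published V4's routing configuration, so this shows the general technique, not V4's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoELayer(nn.Module):
    """Toy top-k Mixture-of-Experts layer: each token runs through only k experts."""

    def __init__(self, d_model=512, d_ff=2048, n_experts=16, k=2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):                           # x: (tokens, d_model)
        scores = self.router(x)                     # (tokens, n_experts)
        weights, idx = scores.topk(self.k, dim=-1)  # each token picks its k best experts
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e            # tokens whose slot-th choice is expert e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

layer = TopKMoELayer()
tokens = torch.randn(8, 512)
print(layer(tokens).shape)  # torch.Size([8, 512])

# Total parameters vs. parameters a single token actually exercises (router + k experts):
total = sum(p.numel() for p in layer.parameters())
per_expert = sum(p.numel() for p in layer.experts[0].parameters())
active = sum(p.numel() for p in layer.router.parameters()) + layer.k * per_expert
print(f"total: {total:,}  active per token: {active:,}")
```

Scaled up, the same logic is what lets a ~1T-parameter model activate only 32–37B parameters per token.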
Why the Huawei Angle Matters
US export controls were designed to slow China's AI progress by restricting access to NVIDIA's most advanced chips. DeepSeek V4 running at frontier level on Huawei silicon would be a direct counter-argument to that thesis.
TrendForce's analysis frames it as "China's push to break CUDA dependence." If V4 performs competitively with GPT-5.4 and Claude Opus 4.6, the leverage of chip sanctions diminishes significantly.
For Western AI companies, the practical impact is competition. An open-weight, trillion-parameter model under Apache 2.0 gives every developer on earth access to frontier capabilities without paying OpenAI or Anthropic API fees.
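As an illustration of what an Apache 2.0 open-weight release means in practice, the sketch below follows the standard Hugging Face transformers loading pattern used for DeepSeek's earlier open releases. The "deepseek-ai/DeepSeek-V4" repository name is a placeholder (no V4 weights exist yet), and a trillion-parameter model would realistically need a multi-GPU or multi-node setup rather than a single card.

```python
# Hypothetical loading sketch. "deepseek-ai/DeepSeek-V4" is a placeholder repo id;
# swap in the actual repository once (and if) the weights are published.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-V4"  # placeholder, not a real repository yet

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # half precision to reduce memory footprint
    device_map="auto",            # shard layers across all visible GPUs
    trust_remote_code=True,       # DeepSeek releases ship custom modeling code
)

prompt = "Explain mixture-of-experts routing in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The point is the access model: no API key and no per-token fee, only the cost of the hardware the weights run on.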
Our Take
DeepSeek V4 is the most consequential model launch of 2026 so far — not because of its benchmarks (we haven't seen them yet) but because of what it proves about supply chain independence.
Two caveats. First, "optimized for Huawei" doesn't mean "as efficient as NVIDIA." Training on Ascend likely took more compute-hours than the same model would have taken on H100s. Second, leaked benchmarks aren't published benchmarks — wait for the actual numbers before drawing competitive conclusions.
But if V4 delivers anywhere near its leaked performance targets, DeepSeek will have built the first credible open-weight alternative to the GPT-5 and Claude Opus tier. That changes the market for everyone.
FAQ
When exactly does DeepSeek V4 launch? The most reliable estimates point to the last two weeks of April 2026. The model has missed two earlier windows, so an early May slip remains possible.
Will V4 be open-source? DeepSeek is expected to release model weights under the Apache 2.0 license, following the same approach as V3 and prior releases. This has not been officially confirmed for V4 yet.
Can I run V4 on NVIDIA GPUs? Almost certainly yes — DeepSeek's open-weight models have historically supported both NVIDIA and alternative hardware. The Huawei optimization means V4 was trained on Ascend chips, but inference should work across GPU platforms.
How does V4 compare to Llama 4 Maverick? Both use MoE architectures. V4 is reportedly much larger (~1T total parameters vs. Llama 4 Maverick's roughly 400B total and 17B active) and natively multimodal. Head-to-head benchmarks don't exist yet; check back after launch.