AI News

Independent coverage of the latest AI tool updates, releases, and comparisons.



DeepSeek V4 Confirmed on Huawei Ascend Chips — Late April Launch Expected

Reuters confirms DeepSeek V4 runs on Huawei's Ascend 950PR processors, not NVIDIA. The 1-trillion-parameter MoE model is expected in late April with an Apache 2.0 release.

Lisa Thoma
Tuesday, April 14, 2026·4 min read

DeepSeek V4 is now expected in the final two weeks of April 2026, with Reuters confirming on April 4 that the model runs on Huawei's Ascend 950PR chips — not NVIDIA hardware. This makes V4 the first frontier-class model built entirely on Chinese silicon, a development with significant implications for AI export control policy.

We previously reported on V4's expected specifications in March. Since then, key details have solidified.

What's New Since March

Three developments have changed the picture:

1. Huawei Ascend confirmation. Reuters reported that DeepSeek optimized V4 specifically for Huawei's Ascend 950PR chips and gave Chinese chipmakers early optimization access while "deliberately denying that window to Western silicon suppliers." This isn't just a compatibility decision — it's a geopolitical statement.

2. Chip market impact. Chinese tech giants including Alibaba, ByteDance, and Tencent have placed advance orders for Huawei's next-generation AI chips to run V4 via their cloud services, according to BigGo Finance. The surge in demand has driven chip prices up approximately 20%.

3. Two prior delays. V4 has missed two earlier launch windows. The current "late April" timeline comes with less certainty than DeepSeek's track record would suggest. But the Reuters confirmation of Huawei chip integration indicates the model is in final testing, not still in training.

Specifications (Unchanged)

The technical specs remain consistent with our March report:

Spec                         Detail
Total parameters             ~1 trillion
Active parameters per token  32–37 billion (MoE)
Context window               1 million tokens
Multimodal                   Native image and video support (reported)
License                      Apache 2.0 (expected)
Training hardware            Huawei Ascend 950PR

The Mixture-of-Experts architecture keeps inference costs manageable despite the massive parameter count — a strategy proven by Mistral and Meta's Llama 4 Maverick.
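The economics behind that claim can be put in rough numbers. The sketch below uses the reported specs (1T total, 32–37B active per token) and the common rule of thumb of ~2 FLOPs per active parameter per generated token; the 35B midpoint is our assumption, not a published figure.

```python
# Back-of-envelope sketch: why MoE routing keeps per-token inference cost
# manageable despite ~1T total parameters. All figures are the reported
# specs; the 35B "active" value is a midpoint assumption for the 32-37B range.

TOTAL_PARAMS = 1e12    # ~1 trillion total parameters (reported)
ACTIVE_PARAMS = 35e9   # midpoint of the reported 32-37B active per token

active_fraction = ACTIVE_PARAMS / TOTAL_PARAMS
flops_moe = 2 * ACTIVE_PARAMS    # per-token forward FLOPs with MoE routing
flops_dense = 2 * TOTAL_PARAMS   # a hypothetical dense 1T model, for contrast

print(f"Active fraction per token:  {active_fraction:.1%}")        # 3.5%
print(f"Per-token FLOPs (MoE):      {flops_moe:.2e}")
print(f"Per-token FLOPs (dense 1T): {flops_dense:.2e}")
print(f"Dense/MoE compute ratio:    {flops_dense / flops_moe:.1f}x")
```

By this rough measure, V4 would cost roughly as much per token to serve as a ~35B dense model, which is what makes a trillion-parameter open release practical at all.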

Why the Huawei Angle Matters

US export controls were designed to slow China's AI progress by restricting access to NVIDIA's most advanced chips. DeepSeek V4 running at frontier level on Huawei silicon would be a direct counter-argument to that thesis.

TrendForce's analysis frames it as "China's push to break CUDA dependence" — if V4 performs competitively with GPT-5.4 and Claude Opus 4.6, the leverage of chip sanctions diminishes significantly.

For Western AI companies, the practical impact is competition. An open-weight, trillion-parameter model under Apache 2.0 gives every developer on earth access to frontier capabilities without paying OpenAI or Anthropic API fees.

Our Take

DeepSeek V4 is the most consequential model launch of 2026 so far — not because of its benchmarks (we haven't seen them yet) but because of what it proves about supply chain independence.

Two caveats. First, "optimized for Huawei" doesn't mean "as efficient as NVIDIA." Training on Ascend likely took more compute-hours than the same model would have taken on H100s. Second, leaked benchmarks aren't published benchmarks — wait for the actual numbers before drawing competitive conclusions.

But if V4 delivers anywhere near its leaked performance targets, DeepSeek will have built the first credible open-weight alternative to the GPT-5 and Claude Opus tier. That changes the market for everyone.

FAQ

When exactly does DeepSeek V4 launch? The most reliable estimates point to the last two weeks of April 2026. The model has missed two earlier windows, so an early May slip remains possible.

Will V4 be open-source? DeepSeek is expected to release model weights under the Apache 2.0 license, following the same approach as V3 and prior releases. This has not been officially confirmed for V4 yet.

Can I run V4 on NVIDIA GPUs? Almost certainly yes — DeepSeek's open-weight models have historically supported both NVIDIA and alternative hardware. The Huawei optimization means V4 was trained on Ascend chips, but inference should work across GPU platforms.

How does V4 compare to Llama 4 Maverick? Both use MoE architectures. V4 is reportedly much larger (1T total params vs. Llama 4 Maverick's undisclosed but smaller scale) and natively multimodal. Head-to-head benchmarks don't exist yet — check back after launch.
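The hardware question above also has a memory dimension worth quantifying: with MoE, only ~32–37B parameters are active per token, but all expert weights must still be resident (or offloaded). This hedged sketch estimates weight storage at a few common precisions; it is simple arithmetic, not a deployment guide, and ignores KV cache and activation memory.

```python
# Rough weight-memory math for hosting an open-weight ~1T-parameter MoE
# model. Every expert's weights must be stored even though only ~32-37B
# parameters fire per token. Ignores KV cache and activations.

def weight_memory_gb(n_params: float, bytes_per_param: float) -> float:
    """Approximate weight storage in GB (1 GB = 1e9 bytes)."""
    return n_params * bytes_per_param / 1e9

TOTAL_PARAMS = 1e12  # reported total parameter count

for label, bpp in [("FP16", 2.0), ("FP8", 1.0), ("4-bit", 0.5)]:
    print(f"{label}: ~{weight_memory_gb(TOTAL_PARAMS, bpp):,.0f} GB of weights")
```

Even at aggressive 4-bit quantization, that is on the order of 500 GB of weights, so self-hosting would mean a multi-GPU node or CPU/NVMe offload regardless of which vendor's accelerators you use.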

Tools Mentioned

  • DeepSeek: High-performance open-source LLMs with efficient training. Free (open source); API from $0.14/1M tokens.
  • GPT (OpenAI): Industry-leading large language models powering ChatGPT. $20/mo (ChatGPT Plus).
  • Claude (Anthropic): Safe, helpful AI assistant with extended context and reasoning. $20/mo (Pro).
  • Llama (Meta): Open-source large language models from Meta. Free (open source).
