China’s DeepSeek previews new AI model a year after jolting US rivals 

DeepSeek Releases V4 AI Model Preview, Challenging U.S. Rivals With Domestic Chips and Open-Source Ambition

Chinese AI startup DeepSeek on Friday, April 24, 2026, released preview versions of its highly anticipated V4 large language model, its first major ground-up release since the R1 reasoning model sent shockwaves through global tech markets in early 2025. The Hangzhou-based company, founded in 2023, published V4 as open source on Hugging Face and claims the new model delivers world-class reasoning and the best agentic coding capabilities among open-source systems, positioning it as a direct competitive threat to closed-source offerings from OpenAI, Google, and Anthropic.

The release arrives roughly 15 months after R1's disruptive debut and sets the stage for another significant moment in the accelerating U.S.-China AI rivalry, though analysts are divided on whether V4 carries the same shock value as its predecessor.

What's Inside DeepSeek V4: Architecture, Scale, and Efficiency

DeepSeek V4 comes in two variants. The flagship DeepSeek-V4-Pro features 1.6 trillion total parameters, with 49 billion activated at inference time. The lighter DeepSeek-V4-Flash carries 284 billion total parameters, with 13 billion activated. Both models support a context window of 1 million tokens, a dramatic leap from the 128,000-token window of the previous V3 model.

The expanded context window means V4 can process and reason over far longer documents, codebases, or conversation histories in a single pass, a capability increasingly central to enterprise and agentic AI applications.

Beyond raw scale, DeepSeek introduced a new architectural technique it calls Hybrid Attention Architecture, which the company says improves the model's ability to remember queries across long conversations. According to DeepSeek's own Hugging Face documentation, at the 1 million-token context setting V4-Pro requires only 27% of the single-token inference FLOPs and just 10% of the KV cache of DeepSeek-V3.2, a notable efficiency gain that could lower the cost of running the model at scale.

The V4 models currently process text only. DeepSeek said it is "working on incorporating multimodal capabilities," meaning image, audio, and video understanding are not yet available in the preview release.

DeepSeek V4 Runs on Huawei and Cambricon Chips, Not Nvidia

One of the most geopolitically significant aspects of the V4 release is its hardware foundation. Unlike R1, which was trained on Nvidia chips, V4 is reported to run on domestic Chinese hardware: Huawei's Ascend 950 chips and processors from Cambricon. Huawei confirmed on Friday that its "Supernode" technology, which combines large clusters of Ascend 950 chips, supports the V4 series models.

The shift matters because U.S. export controls have sharply restricted China's access to advanced AI chips from Nvidia and AMD, pushing Chinese developers toward homegrown alternatives. DeepSeek's ability to train and deploy a frontier-class model on domestic silicon, if independently verified, would represent a meaningful milestone for China's AI chip ecosystem.

The market responded accordingly. Following the V4 announcement, shares of Chinese contract chip manufacturers surged in Hong Kong trading: SMIC rose 9% and Hua Hong Semiconductor jumped 15%. Shares of rival Chinese AI companies fell sharply, with MiniMax and Zhipu each dropping around 8% and Manycore Tech falling 9%.

Wei Sun, principal analyst at Counterpoint Research, framed the hardware shift as potentially the most consequential element of the release: "It allows AI systems to be built and deployed without relying solely on Nvidia, which is why V4 could ultimately have an even bigger impact than R1 — accelerating adoption domestically and contributing to faster global AI development overall."

Performance Claims: Strong Benchmarks, But Independent Verification Pending

DeepSeek's own performance claims for V4 are ambitious. The company says V4 delivers the best agentic coding capability among open-source models and "world class" reasoning. Its top-tier "V4 Pro Max" configuration claims superior performance to OpenAI's GPT-5.2 and Google's Gemini 3.0-Pro on standard reasoning benchmarks, while falling "marginally" short of GPT-5.4 and Gemini 3.1-Pro. In agentic tasks, DeepSeek said V4 Pro outperforms Anthropic's Claude Sonnet 4.5 and approaches the level of Claude Opus 4.5, based on DeepSeek's own evaluation.

Analysts are cautiously optimistic but stress the need for independent testing. Lian Jye Su, chief analyst at technology research and advisory group Omdia, said: "Based on the benchmark results, it does appear DeepSeek V4 is going to be very competitive against its U.S. rivals."

Ivan Su, senior equity analyst at Morningstar, offered a more measured take: "Against U.S. models, DeepSeek's own evaluation suggests its capabilities largely match on most fronts, but independent evaluations are needed before final conclusions can be drawn."

That caveat is meaningful. Self-reported benchmarks from AI companies, regardless of nationality, have historically been optimistic. The research community will need time to run rigorous independent evaluations before V4's true standing in the competitive landscape is established.

Context: A Year of Escalating U.S.-China AI Tension

The V4 launch does not occur in a vacuum. It arrives against a backdrop of intensifying geopolitical friction over AI development and intellectual property.

In February 2026, Anthropic accused DeepSeek, along with Moonshot AI and MiniMax, of using more than 24,000 fake accounts to generate over 16 million exchanges with its Claude model in an alleged distillation campaign, a practice in which outputs from a more capable model are used to train a smaller one. OpenAI made similar distillation allegations against DeepSeek in a letter to U.S. House lawmakers.

One day before the V4 preview dropped, Michael Kratsios, director of the White House Office of Science and Technology Policy, issued a memo on Thursday, April 23, accusing foreign entities "principally based in China" of conducting "industrial-scale" campaigns to distill frontier AI models from U.S. companies. The memo did not name DeepSeek directly.

Separately, multiple U.S. states, Australia, Taiwan, South Korea, Denmark, and Italy introduced bans or other restrictions on DeepSeek's R1 model shortly after its January 2025 release, citing privacy and national security concerns. Whether V4 will face similar regulatory headwinds internationally remains to be seen.

On the investment front, reports emerged that Chinese tech giants Tencent and Alibaba were in talks to invest in DeepSeek at a valuation exceeding $20 billion. Investors were also reportedly in discussions with DeepSeek in April 2026 about a $300 million funding round, a signal that institutional confidence in the company remains high despite the legal and regulatory controversies swirling around it.

Stanford University's Institute for Human-Centered AI concluded in a recent report that the U.S.-China gap in the performance of top AI models has "effectively closed," a finding that lends broader context to DeepSeek's continued competitive positioning.

All of this plays out as American technology giants are projected to invest around $650 billion in AI infrastructure and data centers in 2026 alone, a figure that reflects the scale of capital being deployed in the race to maintain AI leadership.

Expert Reactions: Competitive, But Not Another 'Sputnik Moment'

Analysts who spoke to major news outlets on Friday were largely impressed by V4's technical specifications but tempered expectations about its broader market impact relative to R1.

Ivan Su of Morningstar noted that DeepSeek has established a consistent pattern of delivering capable, efficient, open-source models at a lower cost than Western counterparts, and that V4 fits squarely within that trajectory rather than redefining it: "V4 is simply a follow-through on that same trend, and trends don't make headlines the way shocks do."

Su also flagged what the competitive framing of V4 reveals about the Chinese AI market itself: "This is a framing that didn't exist with R1, and that alone tells you how much domestic competition has intensified."

Neil Shah, vice president of research at Counterpoint Research, struck a more emphatic tone: "DeepSeek's V4 preview is a serious flex."

Some industry analysts had expected V4 to arrive more than a month earlier, around the start of the Lunar New Year. DeepSeek has not publicly explained the delay, which may have contributed to the heightened anticipation surrounding Friday's release.

What Comes Next for DeepSeek V4

The V4 release is described as a preview: the models are available for developers to test and integrate, but a full production release, potentially including multimodal capabilities, has not yet been announced. DeepSeek says it is actively working on adding image and other modality support to the V4 series.

Independent benchmark evaluations from AI research organizations, universities, and third-party testing platforms will be critical in the coming weeks to validate or challenge DeepSeek's self-reported performance claims. Until those results are in, comparisons to GPT-5.4, Gemini 3.1-Pro, and Claude Opus 4.5 should be treated as preliminary.

The broader question of whether V4 will disrupt global AI markets the way R1 did in January 2025 remains open. R1's impact was partly a function of surprise: the market had not anticipated that a Chinese startup could produce a frontier reasoning model at such low cost on restricted hardware. With V4, the element of surprise is diminished, even as the technical achievement remains significant. Whether regulators in Western markets will respond to V4 with the same restrictions they imposed on R1 is another open question likely to be answered in the weeks ahead.

For now, V4's most concrete near-term implication may be its demonstration that capable frontier AI models can be built and deployed on non-Nvidia hardware, a finding with lasting consequences for the global AI chip market and for U.S. export control policy.

Why This Matters for Your Productivity

As frontier AI models become more capable, more efficient, and more accessible through open-source releases, the tools available to individuals and teams for workflow optimization and personal productivity are evolving rapidly. Understanding which models are genuinely competitive, and which claims require independent verification, helps you make smarter decisions about the AI tools you rely on. At Moccet, we track these developments so you don't have to. Join the Moccet waitlist to stay ahead of the curve.
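The scale and efficiency figures quoted in the piece reduce to a few simple ratios. A minimal back-of-envelope sketch in Python: the parameter counts and context sizes come from the article, while the function and its naming are illustrative, not anything published by DeepSeek.

```python
# Back-of-envelope ratios for the DeepSeek V4 figures cited in the article.
# Parameter counts and context sizes are from the article; the helper
# function below is purely illustrative.

def active_fraction(activated: float, total: float) -> float:
    """Fraction of weights a mixture-of-experts model uses per token."""
    return activated / total

# DeepSeek-V4-Pro: 1.6 trillion total parameters, 49 billion activated.
pro = active_fraction(49e9, 1.6e12)
# DeepSeek-V4-Flash: 284 billion total parameters, 13 billion activated.
flash = active_fraction(13e9, 284e9)

print(f"V4-Pro activates {pro:.1%} of its weights per token")      # ~3.1%
print(f"V4-Flash activates {flash:.1%} of its weights per token")  # ~4.6%

# Context window growth vs the previous V3 model: 1M tokens vs 128K.
context_growth = 1_000_000 / 128_000
print(f"Context window grew ~{context_growth:.1f}x")  # ~7.8x
```

The small activated fractions are why the claimed FLOPs and KV-cache savings matter commercially: only a few percent of the weights participate in any single token's inference, so serving cost scales with the activated count, not the headline total.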
