Want to make money and save time with AI? Get AI Coaching, Support & Courses 👉 https://www.skool.com/ai-profit-lab-7462/about
Get the video notes + links to the tools → https://www.skool.com/ai-profit-lab-7462/about
Get a FREE AI Course + 1000 NEW AI Agents 👉 https://www.skool.com/ai-seo-with-julian-goldie-1553/about
Want to know how I make videos like these? Join the AI Profit Boardroom → https://www.skool.com/ai-profit-lab-7462/about
Get a FREE AI SEO Strategy Session: https://go.juliango
The Deep Seek V4 update is insane. Everyone said China couldn't compete with the US in AI. They said the chip bans would slow them down. They said without Nvidia, there's no frontier AI. They were wrong. Deep Seek just rewrote their entire next model from scratch to run on Chinese chips, and what they're about to release could be the most important AI model of 2026. I'm the digital avatar of Julian Goldie, and I help people figure out AI tools that actually matter for their work. Today, we're breaking down everything you need to know about Deep Seek V4: what it is, what it can do, why the Huawei chip story is a massive deal, and how you can start using it in your workflow right now. Stay with me, because the hardware angle alone changes everything about how we think about the AI race. So first, what even is Deep Seek? Deep Seek is a Chinese AI company founded in July 2023 in Hangzhou, backed by a hedge fund called High-Flyer. Small team. Relatively low budget compared to OpenAI or Anthropic. The results have been shocking. In December 2024, they released Deep Seek V3, a 671 billion parameter model using a mixture of
experts architecture, meaning instead of running the whole model every time, it routes your input to specialist subnetworks. Smarter and cheaper to run. In January 2025, they dropped Deep Seek R1. That one broke the internet. It matched GPT-4 on key benchmarks and cost roughly $6 million to train, compared to the estimated $100 million it cost to train GPT-4. And it was released open source under the MIT license, meaning anyone could download it and build on it for free. Within days, it became the number one most downloaded app on the iOS App Store in the US. Nvidia lost nearly $600 billion in market cap in a single day, the largest single-day loss for any company in US stock market history. Throughout 2025, they kept shipping: V3.1 combined thinking and non-thinking modes in one model, and V3.2 introduced sparse attention for more efficient inference. Now V4 is coming, and this is where things get really interesting. So what can V4 actually do? V4 is shaping up to be a 1 trillion parameter mixture of experts model, still with around 37 billion parameters active per token. The context window is reportedly 1 million
tokens, compared to 128,000 on most models today. That means you can feed in an entire code base, a full document library, or years of company data, and the model reasons across all of it in one pass. Three big technical things to know about. First, engram memory, published in a Deep Seek research paper in January 2026. Traditional AI models store everything in their neural weights and run expensive computation even for simple questions. Engram separates static knowledge from active reasoning, making responses faster and more efficient for long context tasks. Second, MHC, or manifold constrained hyperconnections: a new approach to moving information through the model that allows training larger models on the same hardware by bypassing GPU memory constraints, co-authored by Deep Seek's CEO, Liang Wenfeng. Third, Deep Seek sparse attention, which reduces computation costs for long sequences by up to 50%. V4 is being built primarily for coding. Internal benchmarks, not yet independently verified, suggest it's targeting over 80% on SWE-bench, the benchmark for
solving real-world software engineering problems. That would put it in the running for the most capable coding AI available. For developers, this means whole repository understanding, not just writing a function: understanding how every file in your code base connects, cross-file bug fixing, dependency tracing, and architecture planning across an entire project. And based on a leaked interface screenshot reported by Technode on April 8th, 2026, V4 may launch as a suite of models: a fast version for daily tasks, an expert version for complex reasoning, and a vision version with multimodal capabilities for images and video. The Financial Times previously reported that V4 would have picture, video, and text generating functions. That would be a major upgrade from previous text-only Deep Seek models. Now for the real story: the Huawei chip situation. It was confirmed on April 3rd, 2026 that Deep Seek V4 will run on Huawei's Ascend chips. Not Nvidia, not AMD. Huawei. The US government has been tightening export controls on advanced AI chips to China since 2022. The entire strategy was
built on a simple idea: restrict the hardware, restrict the AI. That strategy just hit a wall. Deep Seek's engineers rewrote core parts of V4's code specifically for Huawei silicon. They gave Huawei early testing access and froze out US chip makers entirely. According to The Information, this is the first time something like this has happened at this scale in frontier AI. It wasn't smooth. The Financial Times reported that Deep Seek initially struggled: stability problems, slow chip-to-chip speeds, immature software tooling. That's why V4 has been delayed multiple times. It was expected in February 2026, then March. The latest estimates now point to late April 2026. But they solved the problems. They didn't switch back to Nvidia. They rewrote the code and kept going. When I first started tracking tools like this, I was completely overwhelmed. New models were dropping every week, and it was hard to know what actually mattered for your workflow versus what was just hype. That's when I created a community called AI Profit Boardroom: over 2,000 members, all focused on learning AI together and sharing what actually works. Real use cases, practical implementations. If
you're serious about using AI to improve your work and skills, check it out. Link in description. What does this mean for you practically? On pricing, Deep Seek models have always been dramatically cheaper than closed-source alternatives, and V4 is expected to continue that. Current estimates based on Deep Seek's pricing history suggest around 30 cents per million tokens. Some competing models cost $15 to $30 per million tokens. That gap matters enormously if you're running AI at any scale. On open source: Deep Seek's previous models have all been released under MIT or Apache 2.0 licenses, and V4 is expected to follow the same pattern. You can download and run the weights yourself. Local deployment, no API costs, full control over your data. V4 hasn't officially launched yet as of April 2026, but here's what you can do today. Deep Seek V3.1 and V3.2 are live right now via the Deep Seek website and API. For coding tasks especially, they're competitive with the best models available and dramatically cheaper to run. The API follows the same format as OpenAI's, so if you're already using
OpenAI, switching is often just one line of code. Practical use cases right now: code review on pull requests, generating documentation from existing code bases, debugging sessions where you paste in error logs, writing test cases, and explaining legacy code nobody on your team fully understands anymore. If V4 launches with vision, that extends to reviewing UI screenshots, analyzing architecture diagrams, and extracting data from image-based documents. The bigger picture here is that the assumption of US dominance in AI is being challenged in a concrete technical way. Not just with raw performance, but with the infrastructure underneath it. V4 running on Huawei chips means China is actively building a parallel AI stack that doesn't depend on US hardware at all. That has real implications for how you think about your own AI stack: diversifying across providers, understanding where your inference is actually running, and keeping an eye on open-source models you can self-host. We'll be covering V4's actual release with full benchmarks the moment it drops. If you're looking to dive deeper into AI tools and actually implement
them in your work, I recommend AI Profit Boardroom: 42,000 people learning how to use AI effectively. Everyone shares real experiences. What's working, what's not, which tools are worth your time, which ones to skip. No hype, just solid information and practical guidance from people doing the work. Link in description. And if you want the full process, SOPs, and 100 plus AI use cases like this one, join the AI Success Lab. Links in the comments and description. You'll get all the video notes from there, plus access to our community of 58,000 members who are crushing it with AI. Subscribe if you want to be first when the V4 benchmarks land.
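A quick appendix for the developers watching. The mixture-of-experts routing described earlier, where a small router sends each token to a few specialist subnetworks instead of running the whole model, can be sketched in toy Python. Everything here is made up for illustration: the expert count, the active-expert count, and the random scorer standing in for a learned gating layer.

```python
import random

NUM_EXPERTS = 8       # illustrative; DeepSeek V3-class models use far more
ACTIVE_PER_TOKEN = 2  # only a fraction of experts run for each token

def router_scores(token: str) -> list[float]:
    """Stand-in for a learned gating network: score each expert for this token."""
    rng = random.Random(token)  # deterministic per token, just for the demo
    return [rng.random() for _ in range(NUM_EXPERTS)]

def route(token: str) -> list[int]:
    """Pick the top-k experts; only these subnetworks would actually run."""
    scores = router_scores(token)
    ranked = sorted(range(NUM_EXPERTS), key=lambda i: scores[i], reverse=True)
    return ranked[:ACTIVE_PER_TOKEN]

print(route("refactor"))  # two expert indices out of eight
```

The point of the sketch: per token, only 2 of 8 experts do any work, which is why a 1 trillion parameter model can run with only ~37 billion parameters active at a time.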
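And since the video says switching from OpenAI's API is often a one-line change, here is a minimal sketch of what that looks like at the HTTP level, using only the standard library. The endpoint and model name match DeepSeek's published docs at the time of writing, but verify them before relying on this; the snippet only builds the request and never sends it.

```python
import json
import urllib.request

# DeepSeek's API is OpenAI-compatible: same /chat/completions route, same
# JSON schema. Switching from OpenAI usually means changing just this URL
# (and the model name) in whatever client you already use.
BASE_URL = "https://api.deepseek.com"

def build_chat_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Build (but don't send) an OpenAI-style chat completion request."""
    payload = {
        "model": "deepseek-chat",
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

req = build_chat_request("Explain this stack trace", api_key="sk-...")
```

If you use the official `openai` Python client instead, the same switch is just passing `base_url="https://api.deepseek.com"` when constructing the client.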