WorldofAI
Qwen 3.6 Plus: GREATEST Opensource AI Model EVER! Beats Opus 4.5 and Gemini 3 (Fully Tested)
Channel: WorldofAI
Date: 2026-04-03
Duration: 14min
Views: 46,405
URL: https://www.youtube.com/watch?v=FuUISGqIC3k

Download Wispr Flow on Android - https://ref.wisprflow.ai/worldofai

Qwen 3.6 Plus just dropped—and it might be the BEST open-source AI model we’ve ever seen.

🔗 My Links:

Sponsor a Video or Do a Demo of Your Product, Contact me: [email protected]

🔥 Become a Patron (Private Discord): https://patreon.com/WorldofAi

🧠 Follow me on Twitter: https://twitter.com/intheworldofai

🚨 Subscribe To The SECOND Channel: https://www.youtube.com/@UCYwLV1gDwzGbg7jXQ52bVnQ

👩🏻‍🏫 Learn to code with Scrimba

Looks like we have a new soon-to-be open-source Qwen model, and I've got to say, they did an amazing job with this release. Allow me to introduce Qwen 3.6 Plus, a new agentic coding model with a 1-million-token context window, focused on becoming a highly capable agentic AI model for real-world tasks. This model brings stronger agentic coding, meaning it can handle full repo-level project problems, terminal tasks, and automation workflows, while also improving multimodal reasoning with a better understanding of images, documents, videos, and real-world scenarios. Overall, it's designed to be an all-in-one agent model, combining reasoning, memory, and tool use into a single system. And coding-wise, this model is exceptional, and I'm definitely adding it to my workflow, since it excels at SWE-style tasks, debugging, automation, and especially long-horizon planning and tool use. The front-end capabilities are also very strong, in certain cases even comparable to Opus, which you can

see in these demos. Now, I've got to say one thing, though. When it comes to coding with this model, it can be kind of sluggish when generating long projects or lengthy code, because this is a model that reasons for a while to get the output you're looking for, so it can be quite slow in certain cases. Benchmark-wise, it's competing at a very high level, either surpassing or coming very close to models like Kimi K2.5, Claude Opus 4.5, and even Gemini 3 Pro across major benchmarks like SWE-Bench and Terminal-Bench, where it actually outperforms other models, along with MMMU and other benchmarks. In advanced multimodal reasoning, 3.6 Plus shows some real progress, delivering breakthroughs in complex document understanding, visual analysis, video reasoning, and visual coding, while also improving real-world capabilities on almost all of these benchmarks. If you use your phone to message, email, or write

every day, your limiting factor isn't what you want to say, it's how fast you can get it out. Most people end up typing a short or rushed message. They leave out context, nuance, or full thoughts, not because they don't have them, but because typing slows everything down. That's where Wispr Flow on Android comes in, which is today's video sponsor. It's a voice-first productivity tool that floats over any app: WhatsApp, Slack, Gmail, ChatGPT, anywhere you normally type. Speak naturally, filler words and all, and Flow basically turns it into clean, polished text automatically. Millions of users worldwide rely on it, including teams at OpenAI, Vercel, and Clay, with over 100 languages supported, and they recently made it free and unlimited on Android at launch. It's faster, easier, and more flexible than typing. Speak naturally, send without fixing, and get your ideas across effortlessly. So, use the link in the

description below to download Wispr Flow on Android or any operating system you want. Start talking, start sending, no keyboard required. It delivers strong performance in more complex projects like 3D scenes, even games, while still maintaining high quality in web design. And like I'd stated, the front-end capability of this model is quite comparable to something like Opus. Here, it had created a first-person-perspective flight HTML game, and this is something that is quite impressive, as most models tend to fail at this task. The visual agents do exceptionally well with reasoning about and depicting different sorts of things within an image, and it's able to essentially extract all of the content from it. Not just that, it does quite well with visual coding, where it can even create different sorts of pages, PowerPoints, and spreadsheets. You can even interact with Excel, and it does quite well with agentic tasks, visual understanding, anything you name it. This model is a great all-in-one model. Now, in regards to pricing, it is priced

at 50 cents per 1 million input tokens and $3 per 1 million output tokens, which is honestly pretty reasonable for what you're getting in terms of quality, especially considering the level of agentic coding as well as its multimodal capabilities. And on top of that, there are smaller open-source versions of this model expected to drop later this week, which makes this even more interesting. To get started with this model, you have a lot of different options. You can use their chatbot for free, where you can experiment with this model, or use their API. OpenRouter also provides a free API, as does Kilo Code. So I would highly recommend that you actually use these different methods to get started with this model. Kilo Code also provides a free API that lets you use it completely for free through their AI agent, which is open source. So I highly recommend that you do so. To start off, we're going to be using the Kilo CLI to create a browser-based OS that clones macOS. It's going to add many apps and features, and you can see that I'm using the free model right now within the CLI,
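Since those rates are quoted per million tokens, a quick back-of-the-envelope helper makes it easy to estimate what a request would cost. This is just a sketch based on the rates mentioned above ($0.50 per 1M input tokens, $3 per 1M output tokens); actual provider billing may differ.

```python
# Rough cost estimator for the quoted Qwen 3.6 Plus rates:
# $0.50 per 1M input tokens, $3.00 per 1M output tokens (assumed from the video).
INPUT_RATE = 0.50 / 1_000_000   # dollars per input token
OUTPUT_RATE = 3.00 / 1_000_000  # dollars per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in dollars for one request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a repo-level task that stuffs 800k tokens of context into
# the 1M window and gets back 20k tokens of generated code.
print(round(estimate_cost(800_000, 20_000), 2))  # → 0.46
```

Even a near-full context window comes in under a dollar per call at these rates, which is what makes the long-horizon agentic use cases below affordable to experiment with.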

and this is going to be able to use that free model to generate anything within the CLI. It's agentic, and it has a lot of different capabilities to get the best out of your model, which is why I'm using the Kilo CLI for these tests. We are starting off with a bang. This is something that I didn't really expect Qwen 3.6 to generate, and I've got to say, this is one of the best generations I've gotten from any model. This is where it did a great job in replicating macOS as a browser-based OS. And you can see that there are multiple functions. You have the Finder app that has been coded out. You have the ability to open up different apps, which is incredible. And the SVG icons of all of these different apps are truly amazing. We have a Finder app, we have Safari, you have a messaging app, which is great because it mimics exactly how all of the apps on macOS look. You have a mail app, you have a photos app, which is truly incredible. I can actually click on the photos. You also have a music app. This is something that might not look the best, but still, the fact that it's able to generate all these

components is great. Calendar, the terminal, you have the calculator. You even have a system settings app where you have the ability to change the light or dark theme appearance of our application, or our browser OS. You have the ability to change the appearance as well. These are things that I haven't seen any model actually generate, which is incredible. You can even change the display. You have the ability to actually tweak all of these settings, which is incredible. I haven't seen any model go in depth with certain generations like this. And I personally believe the reason it's doing this is the 1-million-token context window that it's utilizing. And this way, you can see the effort that is being put into this generation, from SVG icons to functionality. It did a remarkable job with this clone. This has got to be the best F1 drift-donut simulation I've seen a model actually generate. And this is the output of Qwen 3.6 Plus. We have the ability to actually change the direction as well as the RPM of our car. You also have the

ability to change camera angles, and that is something that I haven't seen multiple models generate. You can reset it as well. And this is what Claude Opus 4.5 had generated: nothing. That is just surprising. I've got to say, I am truly impressed by this model's ability to generate structured code in SVG especially, because it's able to translate it into coherent visual output. Where I told it to create a painting in SVG code, you can see the gradient of the water based on the moonlight, and there are slight animations. Obviously, it could be improved a lot, but this is still crazy to see that this model is able to output this type of quality. Here, I was comparing Kimi K2.5 and Qwen 3.6 Plus on creating a butterfly in SVG code. Now, if I have to actually showcase the first animation, the butterfly was actually broken. So I'll be honest, the first generation wasn't as good as what it is right now, because the wings weren't actually animated properly. Now it looks like it fixed it, and it actually did a

better job than what Kimi K2.5 did, because you can see that this looks like a butterfly, which is definitely great, but it is not at the same sort of quality as what Qwen 3.6 had generated. Using the Kilo CLI, I had generated three different landing pages, and I want you to take a look at the quality of front end that this model is able to output. This is the first landing page, and I've got to say, everything about it is perfect: the typography, the different dynamic movements, every part of it. It did a great job overall in generating all the components, except this one part of the landing page. But regardless, everything else looks pretty great with this generation, especially the pricing section. Here is the second landing page that it generated. You have small animations, small attributes that have been added to it. And I've got to say, this is a lot better than what I personally saw with the first generation. But there are clunky things within the main page, like this generation over here. Whereas when I prompt something like Gemini to generate it, it does a better job with

it. This is the third landing page that it had generated. And this might be my favorite one, because everything about it looks perfect. There are no faulty generations. Everything looks like it has been generated perfectly in comparison to the other two landing pages. If you like this video and would love to support the channel, you can consider donating to my channel through the Super Thanks option below. Or you can consider joining our private Discord, where you can access multiple subscriptions to different AI tools for free on a monthly basis, plus daily AI news and exclusive content, plus a lot more. Here is another reason why this model is great at front end. I had told it to create a TikTok clone for mobile, and you can see that Qwen 3.6 had done a great job on all the components. You have almost everything looking exactly like what TikTok does. You have the ability to like the actual TikTok. You have the ability to scroll, and overall it did a great job with the functionality as well. This model excels at video understanding. It has the

capability to actually transform large videos into lectures, for example, or the capability of even video editing, where you can simply provide a 29-minute video and it's capable of essentially creating a full-on edit condensed into 23 seconds, which is just insane. It's also able to help you with visual understanding, like the computer-use agent, which can essentially automate practically any computer-based task. I am speechless with this, because this is where Qwen 3.6 Plus, within their chatbot, had created a slide deck, and this is what I mean by insane visual coding capabilities, where it's able to create slide decks for you, and it's really quite accurate to what Lord of the Rings is. The logo is perfect, and it did a great job in generating a good rough understanding of what the novel is. It talks about the story, talks about the key locations, what had actually happened, all of the different scenes and locations as well. So, this is truly

incredible as to what it had generated. So, this is something that you can potentially use to create slide decks for work, for recreational purposes, or even for note-taking. If you want the best AI tools, workflows, and drops before everyone else, join my free newsletter with the link in the description below; it's completely free. I am not kidding. This is what Qwen 3.6 had generated. And this is truly remarkable, guys, because this is something that adds in all the functionality of a Minecraft clone. The only thing that hasn't been generated is the infinite terrain, which is a feature that most models actually generate. But the fact that there are animations when you're breaking different blocks is incredible. You also have the ability to place different blocks, which is nice. And there are different textures as well, so that is also great, which mimics exactly what Minecraft looks like. You also have the ability to see that there is water. Usually, most generations don't generate a functional texture for water in any output. And this is something that I didn't really expect. But the fact that

if you go into the water, it doesn't look exactly right, which is the only downside. But if I leave it, you can see that there are different terrains. You have the ability to break different blocks, which is cool. And it actually takes a while for me to break them. But let's see if I can find out whether there's a cave system, because I did say in my prompt to generate a cave system. And guys, it actually generated a cave system. It took me hours to get here, but the fact that it generated ores, even added lava, and what's funny is, if I even step in the lava, you can see my health bar goes down, which is crazy. But that is just incredible, guys. I didn't really expect this type of generation from this model. The fact that it's able to add in all these components without me even stating it is truly incredible. So, this is definitely going to be a model that I'm going to be using, especially for front-end tasks, as well as using its visual and multimodal capabilities to generate various sorts of plugins and applications to help me improve my workflow. So, this is truly

something that I highly recommend you take a look at. Overall, this is a serious step forward for fully autonomous AI agents, combining strong coding, reasoning, and multimodal capabilities into one system with this model. I highly recommend that you take a look at it, especially with its upcoming open-source variants. I am really proud of what the Qwen team has accomplished with this model, and this is truly something that is affordable and worthwhile due to its capabilities. So, I'll leave all the links in the description below so that you can easily get started. But with that thought, guys, thank you so much for watching. Make sure you go ahead and subscribe to the second channel, join the newsletter, join the Discord, follow me on Twitter, and lastly, make sure you subscribe, turn on the notification bell, like this video, and please take a look at our previous videos so that you can stay up to date with the latest AI news. But with that thought, guys, have an amazing day. Spread positivity, and I'll see you guys fairly shortly.