AI roadmaps converge on desktop superapps and general-purpose agents that combine coding, multimodal models, and persistent integrations. Vibe coding and code-first agents are turning software engineering into universal knowledge-work automation across design, analytics, and marketing. Market dynamics show intensifying competition, collapsing moats, and a split between platform consolidation and extensible channel-based ecosystems.
The AI Daily Brief helps you understand the most important news and discussions in AI.
As every company seems to launch the everything app, does it show a lack of focus, a vicious AI competition, or does it say something about AI's path to doing everything? Welcome back to the AI Daily Brief. Over the last couple days, we have a bunch of stories which, on the face of them, are unrelated. It's different companies announcing new products or updates to their old products, all trying to jockey for position in the ever-changing AI landscape. And yet, when you look at all the announcements, there is clearly a convergence happening. The products are starting to mirror one another. We've discussed a version of this trend as the clawfication of AI, but it feels like there's something even more going on. Here's how Buko Capital summed it up: "OpenAI is building a super app, bro. It can do everything. And Lovable can do general tasks now. It also does everything. Airtable pivoted, you can vibe code there now. I sent all my agents to my Mac mini to fight to the death, and I'll use the strongest one. Bro, AGI is here." So, let's talk about what OpenAI's plans to launch a desktop super app, Google's
release of their new vibe coding experience in Google AI Studio, Lovable's announcement of Lovable General Tasks, and Claude Code's announcement that you can use it from Telegram all have to do with one another. The temptation, I think, is for people to view these companies, and maybe the AI product industry more broadly, as failing: throwing everything against the wall and releasing kitchen-sink products that don't really make any sense. I think, though, what we're actually seeing is a recognition that the capability to code does not just unlock new approaches to software engineering and vibe coding, but basically everything else in knowledge work. But let's go back and start with what was announced from Google AI Studio. Google AI Studio themselves tweeted, "Vibe coding in AI Studio just got a major upgrade. Multiplayer, build real-time games and tools, real services, connect live data, persistent builds, close the tab and it keeps working, pro UI, shadcn, Framer Motion, and npm support." Logan Kilpatrick adds one-click database support, sign in with Google support, a new coding agent powered by Antigravity, multiplayer and back-end support, and so much more
coming soon. So, a couple of things going on here. First of all, Google is integrating Antigravity directly into Google AI Studio, rather than these things being totally separate experiences. Along with that, they are trying to build a more end-to-end experience where you can actually get all the way to applications that can be deployed. As they put it, going from prototypes to production apps. So, a lot of the parts of the announcement are just the boring guts required for that sort of move: integrated databases and authentication, access to modern web tools like Framer Motion, and connections to external services like payment processors. And yet, there are also some very Googly parts of this announcement. One of the things that we've been tracking, especially as OpenAI and Anthropic go tit-for-tat with coding capabilities around Codex and Claude Code, is that while Google certainly hasn't withdrawn from the AI coding fight (this announcement is a proof point of that), they are also clearly trying to compete in areas where they are just in a class of their own, specifically around everything having to do with multimodal. Anything that benefits from having access to the entire corpus of YouTube, for example. We see that in things like
the Genie 3 model, and we even see it in the specific ways that they're pushing this new vibe coding experience in Google AI Studio, specifically around this idea of pushing real-time multiplayer games. This is the first use case that they highlight in their announcement post, and I don't think that that's because they think that there are so many people out there right now who want to build massively multiplayer first-person laser tag games. I think they're trying to show off a capability set that they believe is very different. I started playing around with this a little bit, prototyping a game where you take a design from Leonardo da Vinci's notebooks and can actually interact with it in 3D space, trying to turn it into a working machine, almost as a sort of 3D exploratory sandbox type of Myst game. Now, when the first iterations of this game experience weren't as visually appealing as I wanted, I fired up a different new Google tool that had been updated just the day before. That tool is their updated creative canvas called Stitch. On Wednesday, Google Labs tweeted, "Meet the new Stitch, your vibe design partner." Now, the upgrades that they promised as part of this new version included an AI-native canvas, a smarter design agent, native voice
integration so you can design by talking, instant prototypes, and transportable design systems. It's really a massive expansion, in some ways, of what people think of as design. And of course, what's going on behind the scenes is that Google is leveraging these new models' capabilities to code to make a better design experience. A couple days later, they dropped a set of new starter ideas that show how blurry a lot of these knowledge work tasks are getting. Their starter idea number one was to take a messy document and turn it into a fully styled portfolio. And what's clear is that Google has ambition to be integrating and expanding these experiences in very short order. Logan Kilpatrick again writes, "Our AI Studio vibe coding roadmap for the next few weeks includes design mode, Figma integration, Google Workspace integration, better GitHub support, planning mode, immersive UI, agents, multiple chats per app, simplified deploys, G1 support, and more." Easy AI CMO Mustafa writes, "Google rebuilt AI Studio from scratch just to add vibe coding. Four months of work for one feature. That tells you everything about where the industry is headed. Vibe
coding isn't a trend anymore, it's the default interface." And that, of course, is what I think is the broader point in all of these announcements. So, what's the next one? The next one is Lovable for General Tasks. Lovable CEO Anton Osika writes, "Lovable has always been for building apps. Today, it also becomes your data scientist, your business analyst, your deck builder, and your marketing assistant. This is a big step towards what Lovable is becoming, a general-purpose co-founder that can do anything." Some of the examples they use to show off the new tools include dropping in a CSV file of health industry data to find a startup idea, taking an application that you've built in Lovable and then creating marketing assets to help launch it, or creating a pitch deck for that app. Now, what's interesting is that this is actually quite similar to what Replit announced with Replit Agent 4 a couple weeks ago. In his announcement tweet, Replit CEO Amjad Masad wrote, "Software isn't merely technical work anymore, it's creative. Introducing Replit Agent 4. Design on an infinite canvas, work with your team, run parallel agents, and ship working apps, sites, slides, and
more." So, let me show you an example of how these things are all blending. What you're looking at right now, or hearing me describe if you're just listening, is effectively a slides-as-a-webpage view of our February AIDB Usage Pulse Survey. Even though the information is still conveyed in slides, you can interact with it like it's a website. Basically, I built the website version and the downloadable slides version at the same time using Replit's Agent 4. And it turns out that this pattern of the blurring of information output is not something brand new just being explored by these companies for the first time. For example, when you're working in Gamma and you start something new, you have the option to create a document, a presentation, a mobile experience, or a webpage, or you can do it all at the same time. When you're using Genspark or Manus to build slides, what's happening behind the scenes is that their general agent is using code to deliver against anything that you're actually looking for as an output. In other words, the Genspark general agent is a coding agent with the coding part abstracted and the output format placed front and center.
Which is why I think people are a little off with one of the common responses that I've seen to Lovable's announcement: that this is a move of some type of desperation. Adam Barta writes, "First sign that Lovable is dead. Pivoting to general assistant is the most investor-pleasing move you could do. Their app-building business is obviously going nowhere, and investor money is drying up. Why should anyone use Lovable instead of the already established ecosystems?" Now, for what it's worth, just a week ago, Lovable reported that its ARR jumped from $300 million to $400 million in a single month. So, I'm not sure that it's fair to say that its app coding business is going nowhere, but Adam's hardly alone in this sentiment. Tyler Angert writes, "This is the founder equivalent to becoming a paperclip maximizer. Increase shareholder value, they said. We must increase our TAM to 8 billion, therefore we will literally make our core product a kitchen sink for general-purpose work. Why? Just make separate products if you were so inclined. What a completely dilutive move. Going as horizontal as possible with no opinion." Hardik Pandya writes, "Complete strategic dilution. May not go well. It's a huge reach to go from building apps to doing anything a business needs."
Now, of course, not everyone agrees. Prajwal Tomar writes, "People say Lovable is spreading too thin by going beyond code, but think about it. You need to build the MVP, analyze user data, pitch investors, and run marketing. It just became the tool that does all of that in one place. No more jumping between five different AI tools. This saves so much time." And while that's a totally reasonable argument about the product value here, I think Peter Yang has the right of it when he writes (this was after the Replit Agent 4 launch), "Code is the foundation of all knowledge work. If an agent can write code, it can also generate apps, presentations, animations, and more." Indeed, he resurfaced that same sentiment around the Lovable announcement, writing, "Code is the foundation of all knowledge work. Another proof point right here." Now, this is, of course, something that we've talked about on this show before. In January, I did an episode called "Code AGI Is Functional AGI," about why the advances in coding capability mattered, not just because of the way they would impact software engineering, or even vibe coding tools, but because of the other capabilities they unlocked. And providing a little evidence that
even thinking about vibe coding as its own category might be increasingly reductive, consider one interesting finding from our AI Usage Pulse Survey for February. Admittedly, this survey captures the very vanguard of users, given that it's all of you answering, and listeners to a daily AI show are not going to represent the average human being, let's just put it that way. Still, 71.3% of respondents were vibe coding in February, and 62% had some use case that went beyond just assistant into the realm of automated or agentic AI. And while we saw coding use cases continue to be the most common and highest reported value use cases, we also saw a real diversification from coding into other strategic knowledge work areas like data analysis and strategic planning. For some, what's happening is just completely inevitable. Wabi creator Eugenia Kuyda writes, "2026 will be the year when every AI product converges into some version of OpenClaw." Ben Vinegar puts it more poetically: "You either die a code gen tool or live long enough to become the everything app." Which brings us to OpenAI. On Thursday night, The Wall Street
Journal released an exclusive report about OpenAI's plans to launch a desktop super app that would combine ChatGPT, Codex, and their browser into a single experience. The WSJ points out that the strategy marks a shift from OpenAI's previous approach of launching lots of standalone products that all had to stand on their own two feet. Now, this of course gets back to those comments from CEO of Applications Fidji Simo, where she told the company that they were going to stop focusing on side quests and spreading their efforts across too many different areas. Peter Yang again writes, "I think OpenAI's strategy is pretty clear. One, more people have ChatGPT installed than any other AI product. Two, make ChatGPT great for coding and knowledge work. Three, make it a personal assistant like OpenClaw that knows you and can do whatever you want. They just need to get to two and three faster before people switch to Claude or Gemini for the same use cases." swyx, aka Shawn Wang from Latent Space, pointed out meanwhile that a very long time ago he had written a blog post with the line, "Attempts at building super apps have repeatedly failed outside China," but it's clear that both ChatGPT and Claude Cowork are well on their way
to being AI super apps. Except instead of every app having its own app, they make themselves legible to the AI overlords with MCP, UI, and skills, and OpenClaw markdown files. Speaking of OpenClaw, one of the other things that we've been watching is the way that Anthropic has been slowly going one by one through the features of OpenClaw that people like and adding them into the core Claude Code or Claude Cowork experience. The most recent announcement on that front comes from Tariq from Claude Code, who writes, "We just released Claude Code channels, which allows you to control your Claude Code session through select MCPs, starting with Telegram and Discord." Basically, you can now message Claude Code directly from your phone, which was of course a huge draw for the OpenClaw experience. Now, Gokul Rajaram thinks that this shows OpenAI and Anthropic heading in slightly different directions. He writes, "OpenAI merging ChatGPT, Codex, and Atlas into one super app, while Anthropic ships features like channels, persistent memory, and 10K skills in the same month. Two very different strategies playing out in real time. One is consolidating everything under one roof, the other is making the core tool so extensible that the
ecosystem builds itself around it." And while he may be right that there is a slight difference in strategy, I think that might have to do more with the starting point of where each company is. In other words, it may be OpenAI having to deal with product sprawl, rather than it actually being a different strategy. It feels a little bit like both ends are working towards the middle here, towards a very similar type of experience. Indeed, Fidji Simo herself certainly seems to suggest that this is more about having a Codex-plus experience than it is about having Codex sit alongside a bunch of other experiences. She writes, "Companies go through phases of exploration and phases of refocus, both are critical. But when new bets start to work, like we're seeing now with Codex, it's very important to double down on them and avoid distractions. Really glad we're seizing this moment." Put differently, it may not be that OpenAI is trying to create a super app; it's that they believe that inherently Codex is their super app, and they're organizing everything around it. Now, even if I'm right and this convergence does not show flailing and a lack of product vision, but instead a natural path from coding capabilities to broader knowledge work capabilities,
that still doesn't mean that the everything app approach will actually work from a product standpoint. And Will writes, "On one hand, I will be happy to have GPT Pro and Codex, but on the other, I've really come to appreciate all the focus and attention they've placed on making a purely software engineering focused product." And I think it is worth noting that the other thing that's going on here is just the first large-scale startup competitions in an era where there are officially no moats. Ed Sim writes, "When shipping new features costs near zero, every company becomes every company. And when switching costs are also near zero, who wins? The next few months are going to be interesting." I think it's more than the next few months. I think that we are in a totally different type of company-building paradigm that we have barely wrapped our heads around. On the one hand, there are no barriers to entry. People can build and spin things up faster than ever before. Non-technical founders can build the early versions of their products. And yet on the other hand, basically all the traditional moats have fallen. No barriers to entry, but also no moats, is a very strange and kind of viciously competitive environment that makes
continual pivots feel like the only operational strategy. In AI land, nothing is going to sit still for long. For now, if nothing else, we have a lot of fun new toys to play around with, and for that alone, I am grateful and excited. That is going to do it for today's AI Daily Brief. Appreciate you listening or watching, as always, and until next time, peace.