Wes Roth
Claude just changed overnight
2026-04-07 17min 73,439 views
URL: https://www.youtube.com/watch?v=U_8QyUvl-J4

FULL DETAILS IN MY NEWSLETTER: https://natural20.beehiiv.com/p/claude-code-just-changed-overnight

______________________________________________

My Links 🔗

➡️ Twitter: https://x.com/WesRoth

➡️ AI Newsletter: https://natural20.beehiiv.com/subscribe

Want to work with me?

Brand, sponsorship & business inquiries: [email protected]

Check out my AI Podcast where Dylan and I interview AI experts:

https://www.youtube.com/playlist?list=PLb1th0f6y4XSKLYenSVDUXFjSHsZTTfhk

_________________________

The entire internet is melting down right now as Anthropic goes to war with OpenClaw. As OpenClaw says here, "Anthropic cut us off. GPT 5.4 got better. We moved on." Apparently, Peter Steinberger, the creator of OpenClaw, took a hammer to GPT 5.4 and managed to beat emotions into it. So, there's a lot to unpack here, because Anthropic banned OpenClaw. Well, "banned" is probably not the right word. Basically, they severely restricted third-party harnesses from using their subscriptions to get subsidized tokens. By the way, once again, I've got to give credit to Boris Cherny, who has been kind of the champion of Claude Code. He's also the creator of Claude Code, and he and his team are doing a phenomenal job of, number one, communicating with the rest of us as to why this is happening. And I sincerely think they're doing their best for the community. But how they're going about it... and again, I don't think Boris or the Claude Code team are necessarily the ones making the decisions, but some of these decisions

are making people pretty mad, and with good reason. Let's take a look at what happened. Really fast, for people who might not be familiar with the controversy surrounding this: Claude has certain plans. Basically, you have Pro for $20 a month, Max 5x at $100 per month, and Max 20x, the $200-a-month plan. Now, on this $200-a-month plan, you can use a lot more tokens; you can use these AI models for far more than $200 worth of usage. Anthropic kind of subsidizes the tokens you're able to use if you're using them, for example, for Claude Code. Now, there are a lot of third-party apps, namely OpenClaw, that are very popular and that kind of piggyback off these plans to be able to use, not unlimited tokens, but a lot of tokens up to a limit, let's say $200 a month, to run their swarms of AI agents and build all sorts of stuff. To give you an idea, here are the last seven days of March, or rather March 24th to 30th, so seven days, where I managed to rack

up $200 in API costs using, notice, Claude Sonnet 4.6, so not even the most expensive model. This was running OpenClaw. And this wasn't even the craziest, most heavy-duty agent; this was kind of a side quest, a side project where I wanted separate billing. But as you can see, you can easily burn through enough tokens to cost many, many times over what you would pay on the Claude plan. So on April 4th, Anthropic cut our ability to use our, let's say, Claude Max subscription to run OpenClaw, which is annoying, but at the end of the day, I guess you can say it makes sense. Certainly, if it's against their terms of service, they can, of course, cut it off. They can do as they wish. And as Boris, for example, talked about, there is a very good reason why they might want to do that. One very important reason, probably the number one most logical and understandable reason, involves Claude Code and Claude Cowork. These were the first-party tools, Anthropic's own tools. They were

engineered to increase prompt cache hit rates. Meaning, for certain queries that were often repeated, instead of Claude Code or Claude Cowork having to re-run the compute and work out the answer each time, certain things were cached, which basically reduced the needed compute. And these tools that Anthropic built were really engineered to take advantage of that. So if Claude sees the same context repeatedly, it can kind of reuse its previous work, instead of you having to pay for each token individually over and over and over again. And Anthropic's complaint, this is what they were saying, is that tools like OpenClaw weren't very well optimized for that. These tools either bypassed these abilities to save on compute, completely ignored them, or underutilized them, meaning that the same subscriptions were dramatically more expensive for Anthropic to serve versus Claude Code. For the same exact set of prompts, the same exact use cases would cost much more on OpenClaw than they would on Claude Code. Did

I say claw code? I meant Claude Code, Anthropic's project, versus OpenClaw, the open-source project. Interestingly, Boris Cherny actually said that he was personally submitting PRs to improve these prompt caches for OpenClaw before the cutoff. So they were working on improving this not just for themselves, but also for us, the OpenClaw users. Okay, so this seems like kind of an open-and-shut thing, right? A company has some third-party software that's just draining their money because it's unoptimized, and Anthropic themselves have no reason to support it. Why would they? The founder of the company went to OpenAI. What could possibly be the problem? Well, first and foremost, Anthropic introduced kind of a new plan that they call Extra Use, and you can buy API tokens for this extra usage at up to 30% off. You have to spend, I think, $1,000 or more to get the 30% off. Now, unfortunately, as some users figured out, even mentioning OpenClaw within the system prompt would

block Claude Code from running those commands. You had to use that extra usage that you were paying for in order to be able to run commands like that or use any third-party harnesses. But I mean, in this case, this is basically first party. You're using Claude. This is Claude here. You're just putting OpenClaw in the system prompt, which prevents it from running. And Anthropic also made it very clear that this isn't just OpenClaw; basically the entire third-party ecosystem will now have to pay on an API basis, per token, per million tokens, however you want to think about it. So, it's not a ban in the traditional sense. It's a forced move from flat-rate to metered billing. Now, at the same time, there are quite a few people who have, I don't know if you want to call them, conspiracy theories or anecdotal evidence about their own uses of Claude Code. But basically, if those rumors are to be believed, and some of them you can see on video, it does seem that Claude Code is

now refusing to do prompts that could be more OpenClaw-like in nature. So, if you want to code, it will code. You kind of step outside of that, and it gets a little bit iffy. It says, "Well, I don't know if I can do that. I'm here to help you code." And by the way, Codex will go ahead and run it. So Codex, which is basically OpenAI's version of the same technology, you know, basically the same kind of plan but from OpenAI: OpenAI has explicitly said, go ahead and use it. Like, we're totally cool with you using this. In fact, that's what I've been running on for the last 24 or however many hours it's been. I just updated OpenClaw to the latest version, so I'm still messing around with it. I don't know if I see the GPT versions being any more emotionally intelligent or not. Claude still has that something special, that personality, that soul, if you will, that GPT hasn't quite mastered up until now. It seems to me like GPT comes off as that super smart engineer that just lacks certain social skills, where it's like, okay, you're smart, I'm glad you're on the team, but jeez, can't you just, I don't know, not talk like

that. So again, there are these certain allegations against Anthropic that, again, if they're really trying to prevent OpenClaw use at all costs, well, maybe it's still within their rights. But again, a lot of members of the community are not very happy with Anthropic right now. The other sort of criticism that's aimed at Anthropic is this idea of a copy-then-close strategy. Here's Peter Steinberger, the creator of OpenClaw, and he's saying he's receiving a lot of this. So basically, a lot of people, myself included, got these little messages that starting on April 4th, Anthropic would no longer be supporting these third-party harnesses, and a lot of people have canceled their Anthropic accounts because of that. And as Peter is saying here, funny how the timing matches up: first they copy some popular features into their closed harness, then they lock out open source. Which is kind of interesting. So basically, since the Anthropic leak, we found out about a lot of the features that Anthropic is building into Claude Code, and as we talked about in that

previous video, wow, does it seem like a lot of them were heavily, heavily inspired by Peter's work. And of course, it's not just Peter; it's the entire open-source community contributing to OpenClaw, which, from the beginning, used to be called Clawdbot as a sort of nod to Claude and Anthropic. So Peter's saying that Anthropic basically copied everything that he and the open-source community created into their closed ecosystem, and then they cut off the very people that started this movement, the people that innovated and created this stuff. Which, you know, is kind of hard to argue against, isn't it? I mean, if Anthropic had just cut off the third-party harnesses because they were costing them too much money, that would be one thing. But if they're rapidly copying a lot of these innovations into their own fleet of products, into their own ecosystem, and then cutting off the open-source community that sort of created them, yeah, that doesn't look good. So again, to summarize the big points from critics of what Anthropic is doing: number one, they're saying that Anthropic really borrowed heavily from the open

source community, and then they locked out the open-source community. Number two, they used these engineering constraints, these cache hit rates, etc., as excuses, when what they were actually doing was ecosystem control. They wanted users to be in their ecosystem, on their products, and not on somebody else's. And they preserved the cheap, affordable path for their own products and killed it for anybody that was competing with them. So, this certainly isn't great if you're trying to build something on top of Anthropic's technology, because this shows you that, hey, they're willing to, you know, kill your little startup, or kill whatever you're building, in order to give preference to their own products. Again, these are the criticisms that are leveled against Anthropic. You don't have to agree with them, but this is kind of the sentiment that is out there right now. And of course, some users, like we talked about, are saying that there's some scanning of prompts or metadata to identify whether some of these things are being used for OpenClaw-style traffic or prompts, etc. So again, there

doesn't seem to be proof for that, but some people are noticing some weirdness, or at least they're talking about noticing some weirdness. By the way, if you know something definitive that we can point to, definitely let me know in the comments. And you've also got to think about the emotional backlash. OpenClaw took Claude, Anthropic's model, and made it super, super useful for actual real-world AI agent tasks, and the flat-rate subscription made it affordable. You could build whatever you wanted. In fact, if you were using OpenClaw this whole time, this was kind of a magical period where everybody got a taste of what it is to have these AI agents and have them run, you know, at scale, without having to spend thousands and thousands of dollars. And by the way, we still do with OpenAI's Codex. I feel like, in general, most people would say they prefer Claude for a lot of the tasks, especially anything that needs some sort of emotional intelligence or conversational ability. With some of the GPT models, I feel like first it'll nitpick the words

that you've used, and it'll kind of tell you why your question is not a good question and how it could be made a better question, and then it will proceed to answer that question, which is just a little bit frustrating. I feel like it mainly does this for health-related queries, but still. But the point is that all those people that fell in love with using Claude for AI agents, they became the people that were kind of evangelizing Claude and Anthropic and saying how good it is. I mean, just think about it. That's kind of what I've been doing this whole time, and anybody else talking about OpenClaw, a lot of them were talking about Claude as the model to be used. I mean, I know a lot of people in the comments, some of you love to accuse everybody of shilling and getting paid by Anthropic to promote their company. No. No. Anthropic hasn't paid me to say I like Claude. They haven't paid me anything. In fact, I've paid them a lot more than they've ever paid me, because I'm pretty sure they've paid me zero. Unless you count some free credits; I think early on they gave 50 bucks to everybody that started up on Claude Code, or something like that. But so far, this battle does not seem to be going too well for Anthropic.
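Before moving on, here's a rough way to see the cache-hit-rate argument from earlier in concrete numbers. This is a purely illustrative sketch: the prices, the cache discount, and the traffic volumes below are all made-up placeholders, not Anthropic's actual rates or OpenClaw's actual usage. It only shows why a harness that rarely hits the prompt cache costs far more to serve than one engineered around it.

```python
# Illustrative only: hypothetical prices, NOT Anthropic's real pricing.
PRICE_PER_MTOK = 3.00        # assumed $ per million uncached input tokens
CACHE_READ_DISCOUNT = 0.10   # assumed: a cache hit costs 10% of full price

def monthly_cost(total_mtok: float, cache_hit_rate: float) -> float:
    """Cost to serve a month of traffic, given what fraction of input
    tokens are served from the prompt cache instead of recomputed."""
    cached = total_mtok * cache_hit_rate
    uncached = total_mtok - cached
    return uncached * PRICE_PER_MTOK + cached * PRICE_PER_MTOK * CACHE_READ_DISCOUNT

# A first-party harness engineered for high cache hit rates...
optimized = monthly_cost(total_mtok=2000, cache_hit_rate=0.9)
# ...versus a third-party harness that bypasses or underuses the cache.
unoptimized = monthly_cost(total_mtok=2000, cache_hit_rate=0.1)

print(f"optimized harness:   ${optimized:,.0f}")
print(f"unoptimized harness: ${unoptimized:,.0f}")
print(f"ratio: {unoptimized / optimized:.1f}x")
```

With these made-up numbers, identical traffic costs several times more to serve through the cache-unaware harness, which is essentially the complaint Boris was describing: same subscription price in, very different serving cost out.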

Not just from the perspective of the sentiment hit. By the way, there are a lot of people jumping in and saying, "Oh, Anthropic has the right to do what they're doing." I don't think anybody's denying that. Nobody's saying that Anthropic is doing something illegal or that they don't have the right to do it. But that doesn't mean they can do it without any backlash, right? If they do something that's very unpopular, people will be upset. Even if they didn't break any laws, even if it's, you know, their own terms of service that they've made, even if they checked all the boxes, people can still be upset because they feel like they're being treated unfairly. One thing that I found kind of interesting is that OpenClaw, with this new update, added dreaming to the OpenClaw agents. Dreaming is kind of a memory consolidation, which was something that we found out Anthropic was developing for Claude Code in that leak that they had. So it does look like the open-source community is not to be outdone. If Anthropic is copying similar things from the open-source community and using them for their own models, then the open-source community certainly should have the right to copy anything

that Anthropic does. And here's the thing: I still like the Claude series of models. I still like Anthropic. I like Boris and the team. I genuinely don't have some hate against them. They seem like a decent company: great product, great AI model. Various public figures say weird stuff on television every once in a while; I feel like that just goes with the territory. I mean, all these tech CEOs seem to be, what's the word, eccentric in their own ways. Think about Alex Karp. Think about Elon Musk. Even Demis Hassabis gets kind of wild and crazy in his own ways. He's got his own special interests. I just got the new book about him, The Infinity Machine. I wish I had it here with me; I'm not sure, it's in another room. I just got it a few days ago. I can't wait to read it, because from some of the passages from that book that were posted online, I was like, I have to understand how this man thinks. But my point here is that I don't have anything negative against Anthropic. But I've got to say, this isn't really looking good for them. There's been a lot of bad optics in the last however many weeks. We have the Claude Mythos leak. We have the Claude

Code source code leak; that was kind of a big deal. We have the DMCA fiasco that nuked over 8,000 GitHub repos. And of course, you know, a lot of the people that were talking about OpenClaw were assuming that it was going to be affordable with the use of one of Anthropic's plans. So, this kind of undercuts a lot of tutorials, courses, and educational content that people put out there. I had a video that did very well about how to set up OpenClaw, and in it, I show you how to, you know, use one of Anthropic's models to get started. You know, now technically that video has to get updated. I wouldn't have used Anthropic if I'd known that they didn't want to be used as the model of choice for that particular tool. Tons of functionality broke for people building on top of this. It's not great. But obviously, also, Anthropic's position is not crazy. You know, if you're offering your users a $200-a-month plan and power users can spend, let's say, $5,000 a month of your credits by paying you $200, well, then the business breaks. If

that argument doesn't make sense to you, please send me $5,000. I'll send you $200 back, and hopefully that will be sort of an object lesson. I'm happy to help. So, the point here isn't that Anthropic did something crazy or horrible or against the rules or against the laws or anything like that. If they didn't act, either all subscription prices on all their products had to go up, or quality would have to come down for everybody. They prioritized surviving as a business, focusing on the people that make them money, and, you know, they do have to IPO; they're saying maybe fourth quarter of 2026. It's kind of hard to make a case that they're the bad guys here. Unfortunately, a lot of this is hurting the very super fans, the power users, the Claude evangelists that just love Claude, Anthropic, and everything about them. The people that created that kind of word-of-mouth advertising for Claude and Anthropic are now mouthing off against Anthropic and calling them all sorts of bad names. One really cool thing about Peter Steinberger getting acquired by OpenAI, and that's probably a better way of

saying that, you know, he got paid to join OpenAI, acqui-hired as it were. One really cool thing about that is now he can kind of figure out how to make stuff work from inside OpenAI. Actually, somewhere here he's got a tweet where he's saying exactly that. Because if Peter's able to bring that soul to the GPT models, the same soul that resides in Claude, that would be pretty cool. I wonder if it's just a matter of copying over the SOUL.md document. So, let me know what you think about this. Are you Team Anthropic, Team OpenClaw, Team OpenAI? Like, who are you rooting for here? Do you agree that Anthropic has the right to, and should, do what they do? Or are you with the open-source community? Do you think Anthropic is being a little bit unfair here? Let me know in the comments if you made it this far. Thank you so much for watching. My name is Wes Roth. I'll see you in the next one.