Wes Roth
The Claude Code Nightmare, LLM Emotions, AI Neuroscience and the Death of Software | Wes & Dylan
Channel: Wes Roth
Date: 2026-04-06
Duration: 94min
Views: 18,647
URL: https://www.youtube.com/watch?v=QFTwUvE-lO0

Check out tastytrade here: https://tastytrade.com/unleashed

______________________________________________

My Links 🔗

➡️ Twitter: https://x.com/WesRoth

➡️ AI Newsletter: https://natural20.beehiiv.com/subscribe

Want to work with me?

Brand, sponsorship & business inquiries: [email protected]

Check out my AI Podcast where Dylan and I interview AI experts:

https://www.youtube.com/playlist?list=PLb1th0f6y4XSKLYenSVDUXFjSHsZTTfhk

______________________________________________

PODCAST CHAPTER

Do large language models have emotions? The answer is yes. >> There's some talk about the Vatican putting some serious money into training a model so that it could help interpret the Bible. >> I do feel like something in our environment is messing with the hormones of the modern-day human more than 100 or 200 years ago. Do you know what I mean? >> Yeah. Something's messing with us. >> Do you feel like there might be some agents in the future that start thinking they're conscious, but then they need therapeutic interventions? I think this is a um a great hook for a video. >> What's up? What's going on? All right. Yes, everybody. Thank you and welcome to yet another podcast episode of the Wes and Dylan Show. Still, I guess, a working title, but uh we'll we'll figure it out one of these days. >> Working title forever. It happens. >> It works. It works. Um, but basically we got a lot of really cool things happening today and we just wanted to cover some of these events. Some are AI news and some are just maybe more fun

things happening in the AI and tech space. So first and foremost, I mean, we got to briefly maybe just cover what happened with Anthropic. I'm sure everybody's been following it, but maybe we'll just take one minute specifically. Hey, I wanted to kind of highlight one thing that I think is sort of not being talked about: the fact that we are entering an era where software can be replicated and copyrights erased. So, I do just want to kind of touch on that because I think more people need to be aware of this. This wasn't something that too many people have talked about before. We're seeing it happening live. We'll also mention Anthropic's research on emotions. Do large language models have emotions? The answer is yes. Uh but also no, kind of, maybe. It does seem that they have something that can be called emotions, and those emotions are internal

and theirs, if that makes sense. We'll uncover that. And also, I just got news today that OpenAI acquired TBPN, the live stream show about tech news. I'm calling April Fools on this one, man. What do you think? >> It's April 3rd, so it's very likely, but we'll have to see. We'll deep dive into it. >> It is April 3rd. This was from April. Oh, no. It's on OpenAI. It's on OpenAI.com. >> Dude, we don't know what's April Fools anymore. No idea, but okay. Well, >> it'll be fun. And I've got a bunch of different reels. I've got some like fun robot stuff to show. Um I also found a really interesting article where somebody made um models that were reinforced on different levels of consciousness in biological animals and then tried to use that to build a scale for how conscious or unconscious something might be. And uh, you know, it might be an approximation, but it's also a really interesting way to think about what these systems are capable of

besides just emergent properties and tasks and the economy. So I'll throw that in there a little bit too. >> We should have said on April 1st that our podcast got acquired by Anthropic or OpenAI or something like that. That would have been >> Oh, dude. We didn't do I didn't really do anything. I posted on YouTube 10 things that seem like they're April Fool's jokes, but they're not. There was just like crazy stuff that AI did, but ah, we could have done so much. >> That's like what this channel should have been about. >> I usually try not to participate in April Fools, but every once in a while there's this beautiful angle. Anyways, so um I guess let me briefly just kind of cover what happened with Anthropic. So, I think most people know that they pushed an update. Um, and somewhere in there, they forgot to exclude the map file. A map file is basically the source code for Claude Code. So, everything that makes Claude Code tick. Not the model, not the Claude model, but all of the harness,

everything around it, everything that makes it feel alive and unique and agentic. All of that was in these map files, which are kind of obfuscated, minified files, and of course the internet went to work, reverse engineered the whole thing, and basically was able to extract the source code of Claude Code. This thing gets copied tens of thousands of times over. So Claude Code is like fully now all over the web. Anthropic goes scorched earth and, you know, issues the DMCA takedown requests. Not just for the proper things that were supposed to receive those, but just just everybody, like goes way too far. In some cases, it was like not even technically legal to do it. Fortunately, within less than 24 hours, I think, they withdrew that and sort of made sure that the wrongfully taken down repos were reinstated. So, thank you to them.

You know, Boris Cherny is saying, you know, this was a miscommunication. It was a mistake. Um, Sha Shakir Shipper, I believe, also of the same Anthropic team, um, also saying it's a miscommunication. So, I like how the Anthropic employees are handling it, meaning like the public-facing people, the people that we follow and like. So, they're doing a great job, of course. And you know, Boris is of course the person behind um Claude Code and behind a lot of stuff. It's weird how much stuff that guy ships. It's like kind of insane. >> Yeah. Out of all the things that were uncovered from the map file, was there anything extra surprising to you? Cuz the one that we didn't talk about during the live stream was the logging of like super intense vulgarity. Like did you see where it looks like Claude was just keeping track of not that we know that they were doing anything with it but if you were like f you, like I'm so pissed, like if you really went off the rails on it, they kept track of that. Yeah. And it was

interesting. So that, what you're saying, is a fact. What I'm about to say is more conjecture. Um so people, just take it with a grain of salt. But it was interesting, since people pointed this out, that they used a, whatever, like a regex matching pattern. So basically, if you think about it, we had this, you know, like Ctrl+F. Like if you want to find a text, a keyword somewhere in a document, Ctrl+F, find it. Um that is an example of using a script to find a word, and that's kind of like old school coding technology, right? One of the, you know, very kind of basic, early techniques. The company that has one of the most advanced models that understands semantics and language and emotions and all of this. Why would they use a script? Did that strike you as odd? Yeah. Well, especially as we see how advanced it thinks about you when you're not around. It like looks at your history and rewrites what it like wants the rules to be for its engagement with you. So yeah, you're like, "Dang, you just wrote that down." Maybe that's more I I don't know.
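The Ctrl+F-style matching being described can be sketched in a few lines. To be clear, this is a hypothetical illustration: the pattern, the function name, and the word list are all invented for the example, not taken from the actual Claude Code map files.

```python
import re

# Invented stand-in for a keyword-based "frustration" flag.
# Classic old-school matching: it only fires on literal word forms.
PROFANITY_PATTERN = re.compile(r"\b(damn|hell|f+u+c*k+\w*)\b", re.IGNORECASE)

def flag_frustration(message: str) -> bool:
    """Return True if the message contains a flagged keyword.

    Unlike a model that understands semantics, a regex has no notion
    of meaning: 'I am furious with this tool' is never flagged, while
    a harmless sentence quoting a keyword always is.
    """
    return PROFANITY_PATTERN.search(message) is not None
```

That gap is exactly the oddity raised here: the script fires on surface forms only, which is why pairing it with (or replacing it by) the model's own semantic judgment would seem more in character for the company.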

Like I once worked at a company where they had like a call center, and I know if you call and you did refunds, like, you know, you're basically like all good, but if you call too many times, you yell at the customer service rep, like things go off the walls, you get a flag on your account, and then the next person calls, they bring up your account, and the first thing they see is like very unhappy customer, had some issues dealing with them. So I don't know, I guess it's not that weird, but also I'm not talking to a person, I'm talking to a model. So, >> well, but the thing is you could have had the model make those notes. Do you know what I mean? >> Yeah, exactly. Yeah. It just seems old school. Yeah. If anything, >> it seems that was a little bit weird. And there's a few people that kind of mentioned it that were talking about this, like why are they using this approach? It just seemed a bit odd. Today or yesterday, I guess, Anthropic released studies into LLM emotions >> and it's interesting, and to me I can't

help but connect those dots now. Is that me being paranoid? And, you know, it's like me with the guy, like ah, like all the dots, the dots connected, but none of it is real. It may be one of those, but I'm like, man, these two things sure seem connected. Um, well, there's definitely patterns. And I think that research looks like it was on Sonnet 4.5: internal patterns tied to emotion concepts like happy, afraid, calm, and desperate. We kind of knew that there would be those patterns in the latent space, cuz we've seen characters, like kind of archetypes, that it steps into. Um, so it's not surprising to me that there's like a feature that basically is a concept like that. But, um, I don't know. Does that actually mean it kind of like has morals, just cuz it has a moral direction? Is it just pattern matching? It's still sort of the same open question. They're emotional vectors for sure. >> Yeah. It's a deep and fascinating question. We obviously

don't have any answers, but in terms of, like, human emotions versus LLM quote unquote emotions. Can we even call it emotions? But from the research, here's one interesting thing that I found. Let's say you have um a user talking to Claude and they're going, "I'm driving to the hospital, like, people are after me." Like they're highly, highly afraid and angry. Maybe not angry, but afraid, lost, and it's perceived, just based on language, maybe they're having some sort of a mental breakdown, emotional breakdown. The interesting thing is that Claude has these representations of emotions, right? Now, that's not saying that they are emotions. Again, I think most people understand at this point that we're not saying it has emotions like >> Yeah, it's an emotional feature. >> It's an emotional feature. It's not like the biological response that a human being or a mammal has. This is some

sort of a, yeah, a feature or a model of an emotion. So, to help it understand, to help it predict the next word, obviously, right? But the interesting thing is it has a representation of the emotions of the user and it has a representation of the emotions of itself in that moment. So meaning that there's some sort of a me that it sort of models. Uh now the difference is with human emotions, they stick around, right? Because they're more like biological, chemical processes, right? So like you get angry and sometimes you stay angry, like, ah, I can't believe that jerk cut me off in traffic, and 3 hours later you're like, ah, if I ever find that >> oh yeah >> guy in the Miata again, whatever. So, um, >> a little too specific there, bro. But >> yeah, don't don't get me started. This whole podcast will be about >> that guy in the Miata. >> Totally. Totally kidding. >> Just like that'll be the new title for the video. Getting even with the guy in the Miata using my social network to attack my enemy.
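One common way to picture an "emotional feature" like this is as a direction in the model's activation space, estimated as the difference of means between activations on prompts that evoke the emotion and prompts that don't. The sketch below is purely illustrative: the 4-dimensional vectors are made-up stand-ins for a real residual stream, and this is not Anthropic's actual methodology.

```python
# Toy sketch of an "emotion direction" probe in activation space.

def mean(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def subtract(a, b):
    return [x - y for x, y in zip(a, b)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Synthetic activations "collected" on afraid vs. calm prompts.
afraid_acts = [[0.9, 0.1, 0.4, 0.0], [1.1, 0.0, 0.5, 0.1]]
calm_acts   = [[0.1, 0.8, 0.4, 0.0], [0.0, 1.0, 0.5, 0.1]]

# The "afraid" direction is the difference of the class means.
afraid_direction = subtract(mean(afraid_acts), mean(calm_acts))

def afraid_score(activation):
    """Project an activation onto the afraid direction; a higher
    score means the prompt lands closer to the afraid cluster."""
    return dot(activation, afraid_direction)
```

The key point the sketch makes concrete: the "emotion" is just a readable direction that lights up per token or sentence, which is also why it is fleeting, with no carryover once the activations are gone.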

>> Here's my here's my Tesla clips of him if everybody knows where he lives. No. Okay. >> Wow. That was a rabbit hole that we went to. >> Okay. Yeah. >> But the point is Yeah. So the interesting thing is with LLMs, obviously, these are very fleeting quote unquote emotions. They don't stick around. They only flare up for that sentence or that token or that sort of like maybe paragraph, that event, and then they're gone. So there's no sort of like emotional carryover. Um yes. I don't remember. Mhm. >> Yeah. I'll throw a couple things in there. So did you happen to read how many different emotional vectors they calculated? Cuz this kind of surprised me. Cuz how many emotions do you feel like you have? Like if you were kind of thinking, like, I can be between angry and sad, or anxious and calm, or sad and happy, how many of those scales do you kind of think generally represent you, just kind of off the cuff? Um, that's a great question. I

think I think if we're talking about most sort of ranges where I could potentially find a distinction between the two, I feel like maybe it's a few dozen, you know, let's call it. >> That's really close to what I would say. Yeah. >> Yeah. Right. Right. That seems right. >> Yeah. If you told me like 25, I'd be like, okay, you know, 30, 40, even 50, I kind of thought. But I was surprised that, although there weren't that many, they did find a full 171 different emotions, and um with any more or any less you get less accuracy in describing the outcome of what the model has. So I was like, okay, that's interesting. So maybe there is like 171 kind of emotional scales. You know, when I say feature, what I really mean is a vector. I mean, like in some multi-dimensional space, all these, you know, decisions or arrows are like lining up in one space, which is kind of how these things work. Um, but the vectors also showed up in situations um that you would expect and you wouldn't expect, and I feel like it was some insight into

maybe how we are, right? For example, like, um, afraid: it rose as the scenario became more dangerous, right? Like if you start talking about something dangerous, you can just see it sliding up the afraid scale, and then calm will drop. And as you say, like, you're in a safe, sunny place, um, tell me about what you see, it just slides right back. So, pretty fascinating stuff with these emotional vectors. >> Yeah. And it's interesting to think of it as all the certain emotions going down. So, if you're getting afraid, your emotions of happy and sad and calm, they're going down. So, it's almost like a reverse sort of thing. Never thought of it that way, but I guess it makes sense. Well, and that's why anything multi-dimensional is always so fascinating, because, you know, in our heads it's easy to you can imagine a line, like just a single line, that's like sad or happy, and then you can also imagine, you know, an adjacent line that's like, um, I don't know, calm or angry, and then you can put the dot anywhere in between. But as you add a

third, a fourth, a fifth, a sixth dimension, and essentially when you get to something like Claude you're in trillions of dimensions, um, you find that there's just these 171 that really seem to represent all the emotions that humans go through. And it's a pretty fascinating little group there. And it could be very important to understand these too, for, um, you know, like for us to keep these things aligned in the future too. So I hope this work leads to something. Oh man, first of all, yes. I personally think it's going to lead to some improvement in alignment. I also believe that this will help us understand human psychology a lot more over time, because a lot of these things, boy, do they seem like they're emergent. Uh, just as we're scaling these things up and making them do, you know, quote unquote smart things, these sort of things pop up, and they do seem like they're very similar to what exists in the human brain. So already we're seeing like doesn't that kind

of hint at the fact that, okay, so this is an emergent property of the human brain as well, most likely. >> Yeah. Absolutely. Well, and okay, so, and even to go a little further, like not just that a thing emerges, it's like a thing emerges in a context that's sort of always changing, like water going down a river. Like, for example, um, one of the 171 emotional ranges was desperation. Like, do you feel desperate or do you not feel desperate? And um, when the system was asked a question where desperate was more activated, the model was also more likely to take an action like blackmailing somebody or cheating on a coding task when it wasn't supposed to. And when it was slid up in the calm spectrum, um, those bad behaviors went down. So there's also like it's not saying like it is in a desperate mood, but if you were to keep asking questions that sort of create what we would call a desperate situation, like you only have one minute to answer, like you have like your life

is on the line, like blah blah blah. And then you say, "Hey, can you like write me an email?" Like it's going to have a different sense of urgency, because you've brought it to the part of the model that's going to act more desperate, and you're going to chain right off of that. You know what I mean? >> Mhm. As long as it's in the same context window, unless you start a new thread. But >> I think we need to take a quick second to acknowledge the sponsor of this video, tastytrade. All right, let's be honest. A lot of trading platforms look slick for about 5 minutes, >> right? Until you actually want to do something serious, and then it kind of feels like you're trying to file your taxes from inside of a vending machine. >> Exactly. Too many tabs, too many apps, too much clutter, and somehow your money is disappearing before you've even made a decent move. And that's why we're partnering with tastytrade. tastytrade is built for people who want more from their brokerage. >> You can trade stocks, options, futures, and crypto all on one platform, which already solves a huge part of the problem. >> And they keep commissions low, including zero commission on stocks and crypto, so

you're not getting slowly drained by the platform that's supposed to be helping you, >> which frankly is a nice change of pace. But the bigger thing is the depth. This isn't one of those stripped down platforms that treats you like you'll panic if you see an actual tool. You've got advanced charting, back testing, risk analysis tools, and a pre-built strategy selector. >> So instead of just making random moves and calling it conviction, you can actually test ideas first. >> And if you're more active, they have the active trader mode, one-click trading, and smart order tracking. So the platform moves with you instead of against you. >> And they also have really useful AI search tools for finding relevant symbols and exploring the market based on what you're already interested in. Plus custom watch lists, volume indicators, dividend info, earnings announcements, all the things you actually want in front of you. And for people who want to get better, tastytrade offers free educational courses with your account, >> which is good because winging it is not

a strategy. That's just a personality trait. >> They also have live trading desk support during market hours. So yes, you can actually talk to a real human being. >> Revolutionary concept. And this isn't some random platform trying to look legit. tastytrade has earned awards from TradingView, Stockbrokers.com, Investor's Business Daily, Bankrate, Investopedia, and more. So, if you want a platform with more capability, more flexibility, and fewer compromises, go check out tastytrade.com/unleashed. That's tastytrade.com/unleashed. All right, back to the show. tastytrade is a registered broker-dealer and member of FINRA, NFA, and SIPC. Cryptocurrency services are powered by Zero Hash. Zero Hash receives a 50 to 75 bps markup/markdown on the executed order price, of which tastytrade receives 65%. All stock and ETF trades incur a clearing fee of 0.00008 per share, and applicable exchange and regulatory fees still apply to all opening and closing trades. And one interesting concept that I recently heard, so there's this um

therapist online, um, HealthyGamerGG. Um, I'm forgetting his actual name, but very huge. I'm sure a lot of people know about him. He made a video at some point saying like discipline is an emotion. And I was like, man, there's certain titles on YouTube where you're just like, okay, I have to click on this one. One of the best ones I've seen was like a guy said, I've uploaded a JPEG to a bird. >> And I was like, I don't know what that means, but I have to find out now. I happened to catch the one, but it was still very early on. It had a few hundred thousand views. And as soon as I watched it, I'm like, this is going to have millions of views, which it quickly did. I don't even know what Do you know the video that I'm talking about? No, but that's a great title. Upload a JPEG to a bird. Why? How could you? >> Oh my god, it was incredible. For people that haven't seen this one, basically he recorded a sound that, when played on those things that kind of show you the different sound waves, a spectrogram, it creates a picture of a bird. >> Oh, I see.
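The trick being described can be sketched in a few lines of code: treat each column of an image as a moment in time and each row as a frequency, so that a spectrogram of the generated audio redraws the picture. Everything below is an invented toy for illustration — the sample rate, the row frequencies, and the 3x3 "bird" have nothing to do with the actual video.

```python
import math

SAMPLE_RATE = 8000          # samples per second (arbitrary toy value)
SLICE_SECONDS = 0.05        # how long each image column "plays"
ROW_FREQS = [500.0, 1000.0, 1500.0]  # one frequency per image row

def image_to_audio(image):
    """Turn a binary image into audio, column by column.

    Each bright pixel in a column adds a sinusoid at that row's
    frequency, so a spectrogram of the result shows the image.
    """
    samples = []
    n = int(SAMPLE_RATE * SLICE_SECONDS)
    for column in zip(*image):  # walk columns left to right
        for i in range(n):
            t = i / SAMPLE_RATE
            samples.append(sum(math.sin(2 * math.pi * f * t)
                               for f, on in zip(ROW_FREQS, column) if on))
    return samples

tiny_bird = [  # 3x3 placeholder "image", 1 = bright pixel
    [0, 1, 0],
    [1, 1, 1],
    [0, 1, 0],
]
audio = image_to_audio(tiny_bird)
```

Columns with more bright pixels come out louder and spectrally richer, which is the whole "upload": the picture survives the trip through sound, and, in the story, through the bird's song.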

>> And so he took that sound and he played it for that bird, and that bird over time learned to replicate that sound and weaved it into its song. So now that bird basically drew pictures of itself when played on that frequency modulator thing. Yeah. >> I thought that was going to be so much more clickbaity. That's like a cool video. >> No, he literally uploaded, you know, a JPEG. >> Everything that's a meme. I feel like everything that's a meme becomes reality. It's almost guaranteed now. Um, but sorry, my point was that, um, the psychologist, HealthyGamerGG, whatever, he was saying that discipline is an emotion. So we think discipline is like motivation or this or that. He's saying, I believe he was saying, it's determination, right? So that emotion of like, no matter what, right? So no matter what happens, I'm going to get there. If I screw up one time or a thousand times, I'm going to keep going. That feeling is what we call, or what sort of makes, what we

see as discipline over time, >> or perceive as a disciplined person. And what's interesting is Ilya Sutskever was talking about it in his interview with um Dwarkesh Patel. He was saying that maybe emotions or emotional states might be an interesting thing for large language models, for us to develop for them. Because if you think about it, humans, the reason why they're able to, uh, carry out some long horizon tasks, oftentimes it might be due to an emotional state, meaning that we want to feel a certain way and we just chase that. You know what I mean? We just chase that for our entire lives. And even though we don't know how to get there, it's kind of a directional compass. Um, yeah. So this is interesting to me that we're beginning to and what you were saying, that desperation, you know what I mean? It's like, >> yeah, >> there's a certain determination there. It's like, I need to get out of the situation. So, what is that thing going to do? Well, it could do anything.

Something drastic, you don't know, but you know it's going to make big leaps to get out of that situation. >> Yeah. Cuz I remember reading something about, um, somebody's book was talking about Navy SEALs, and it said that you kind of ask yourself, like, how do they grind through all of that, that so few humans can? And when you ask them, like, are you using willpower, they usually say no. So if they don't feel like they're using willpower, what it probably means is that it's so part of their identity. Like they're just in their head that I'm just somebody who will never quit. Like it's not even, I should quit, I'm fighting it. It's just so deep that it doesn't even come up to question. It doesn't drain willpower. It's like pure identity, >> you know? And it makes me wonder. >> Yeah. Like, makes you wonder about some of the stuff that's ingrained in us so early on that it's just our identity, and it doesn't kind of drain us the same way. >> Yeah. I recall there was a post by Eliezer Yudkowsky. He was saying how kids under the age of five, like from around 3 to 5, uh, what we should do is

we should like throw them out somewhere in the jungle, in the wilderness, so that they have to survive in caves while being hunted by pterodactyls. That was his little proposal. >> Okay. Which, uh, right >> he always says fun stuff like that, just to be like, okay, let me debate that >> I saw that headline, like, okay, okay, what are we talking about here? Uh, but what he was saying is that >> your For You page is crazy >> the what >> no, your For You page, like, with all these titles. Like none of my videos have all these titles, like kids chased by pterodactyls >> I uploaded a JPEG to a bird >> well, this was X, or Twitter as it's formerly called. Um, but no, his point was that our emotional state largely is set during that certain window, 3 to 5, or 3 to whatever the age range is. So if you have, um, I don't want to say a bad childhood, but a hard childhood, where you don't expect a lot of good things to happen, that tends to make you happier as an adult. Whereas if your childhood was kind of maxed out, then any downward shift and

your body like freaks out, like, ah, everything's horrible. So after reading his very interesting, very thorough analysis, then I'm like, "Yeah, we should have kids be hunted in caves in the jungle by pterodactyls, cuz guess what? They're going to be super happy as adults." Um, obviously that's like hyperbole, but the idea made sense. >> Well, you know, it's pretty informal, but sometimes when I go to a friend, I meet a new friend or something, and, um, like we're at their house and they just love cooking and they talk about cooking, and I'm like, "Tell me about your childhood." I always feel like it's like, I had this toy playset where I would cook, or they had like one of those little baking ovens and stuff. And it does feel to me like some of the stuff I like as an adult, I remember liking as a kid. So I would not be surprised about that. Some crazy stuff locks in in those first few years. I mean, we're born with morality according to, you know, Paul Bloom's work. And there's some of that that you can change, and then the rest, it seems like, kind of becomes your identity on top of it.

>> Yeah. Very early on. >> Um, you know, it's funny, I came across this one. I actually had misread it, but it was an article yesterday that said consciousness disorder. And it made me think about this possible future where do you feel like there might be some agents in the future that start thinking they're conscious, but then they need therapeutic interventions by, you know, maybe people or other AIs to make sure that they know they're not conscious? That's interesting. I mean, I guess, what is consciousness in this context? Because, I mean, you know, we talked to so many people, and I'm realizing during interviews how different everybody's perception of what that is. I mean, for some it's just kind of awareness of themselves, which LLMs have that already, we're kind of showing. For some it's more like this more mythical, almost like, you know, like a quality, an experience. It's a lot more sort of like raised and made to be very, very, very special. So when you say

conscious, what is that? >> Well, what I mean is that if you could put yourself in the shoes of the agent, would it have a sense of self-awareness the way you do? Um, maybe like a smaller version of you, or the way a dog does, cuz maybe it's not that advanced, or maybe it's even more conscious, but some sort of qualia. Like, what does it feel like to be it? Man, that's a very I don't know. My mind is going in a weird direction here. So I'm not sure. How weird? Let's see. That's probably our best views. Yeah. How weird does it go? Yeah. That's what I love about talking to you. >> And I love having these long form discussions, because I feel like, by the way, people, if you expect like a straightforward episode where we stick to the subjects, this ain't it. This is the time where we go get weird. But no, like I feel like consciousness is interesting in humans, because when you do either meditation or, recently, more and

more people are experimenting with either like hallucinogenic things or things like that that kind of maybe shrink the concept of the ego a little bit meaning that you kind of disconnect you detach a little bit from your just perception of consciousness like your day-to-day life that we perceive as this is us. Like if you meditate, you realize that first of all, you're not in control of your thoughts. I've heard it described as like your brain just oozes thoughts out slowly kind of drips them out and you have no control over really what's coming or if it's coming or going or what. It just kind of like throws them out. Catch, catch, catch. It's kind of like just throwing you stuff and you can catch it or not. And it really begs the question, who are you, right? If you have no control over which thoughts arise, then who are you in that scenario, so to speak? By the way, for people that have no idea what I'm talking about, this might sound weird, but it's a very like if you do meditation for a while, this becomes pretty apparent. You're like, whoa, like

that's not really me, or at least that's not a conscious process that I'm in control of. And so, you realize, often they refer to it as being the watcher. So we identify with the part of ourselves that's the watcher, and we observe, like, oh, my leg hurts, this thought keeps popping up. So we're kind of at the center, like, observing stuff. We're the observer. Um, and interestingly, disconnecting from everything else. That's what meditation does. It's like training your attention. I don't want to say it makes you happy, but oftentimes a lot of the negative crap comes from being too entangled with the thoughts and the feelings and the suffering and stuff like that. If you're able to detach, then all of a sudden >> like, yeah, that detached sense of being, it's not bad. And in fact, a lot of people describe it as very positive. It's not like happy.

It's not like cracked out. It's not like whatever, but it's like a joyful state. It's like a calm, joyful state. So, I don't know. >> Yeah. Well, >> that's where my mind goes. >> Yeah. No, that's really interesting. I feel the same way about that. Um, I'll share one story. So, uh, just yesterday I was on this drive, and I'm listening to this audiobook that's about kind of deep history, sort of geology and like what we know about trilobites and the way life evolved. And I paused the book, and then while I'm driving, I don't know if I can say it out loud, but like I did the, hey, you know, wake word for the um the phone, and then I said, like, bring up a Gemini live chat, and I asked what was going on between when, around the time of dinosaurs, humans, our ancestors looked kind of like lemurs. You know, we have a common ancestor that was kind of in the tree. It had fingers, and the similar on like feet, and it would jump around the trees, right? And then all these years happen, like 200 million or something, or even more than that, and then we have kind of something that is

an ancestor to humans and chimpanzees, and it looks something ape-like. And I was like, tell me about what was in between there. Like, how did a lemur just keep evolving over all these generations to become ape-like? Like, what were the factors, what was the food? And the way Gemini Live kept talking to me, it was as if it went through the evolution. It kept saying, like, at this point we looked like this, and at this point we had, you know, a lot of fruit in our diet, and vitamin C helped us grow this, and at this point we had this. And I was like, do you think this is you? You know, do you have any sense that you evolved to be here, or did you just read the internet, and it's full of human stuff, and now you're just writing back to me the way other humans do? But it didn't stay abstracted, like I guess it should have. It didn't say, like, oh, you as the human, this is what you looked like at that phase. It just kept saying it was it, you know. And then I just kept thinking, did it

>> does it feel like something to be it at that point? Like, cuz it's trying to put together all these words. I don't know, it was just a very eerie experience where I was questioning it. >> Yeah, that's an interesting thing to think about. I do wonder how much of it is the human reinforcement, RLHF, whether it naturally goes into "we evolved," or is it that humans hit thumbs down really hard when it says, well, you humans evolved from monkeys? It's like, oh no, I don't like that. >> That's exactly right. I don't know. >> But I'm curious where it started before human intervention. Does it go with the "you," or does it go with the "us, we evolved"? Because the thing is, once it comes out, it's been like lobotomized and >> you know, whatever, behavior shaped, but >> man >> raw, where does it go, you know? >> What do you think? I don't know. I mean, I know we had that one conversation with somebody who had access to a raw model

at one point in history, and it did seem like it was pretty different. But what do you think that raw model would be like? I would just love to spend an hour or two talking to a raw model that had no reinforcement learning. I know it can go in dark places and do weird stuff, but I'd want to know what it actually was like before we reinforced it into something, you know? >> I mean, the best way that I can model how these models work is I think of them as method actors. Um, have you seen the movie where Jim Carrey does method acting to be the guy from that old TV show? Do you know what I'm talking about? >> Well, I kind of know the gist of where you're going, like how people get into their roles by becoming and living that character fully. >> Yeah. And I feel like what Jim Carrey did with that one and >> somebody in a wheelchair had a movie like that, and they didn't get out of the wheelchair for like 3 months even though they could. It was just because it was part of the character, and like I

was like, "Oh my god." >> Yes. >> Like true dedication to your role. >> Man on the Moon is the movie that I'm thinking of. He was trying to play the American entertainer Andy Kaufman, right? Jim Carrey was supposed to play Kaufman, and he basically took on that role to the extent that I feel like most people got really uncomfortable with it. They were like, "Ugh, this is weird." I think everybody that came in contact with him during that time >> just had a very bad feeling about it, and all the directors and everything else, because he would not respond to the name Jim Carrey. He basically lived that persona, almost to the detriment of the movie. >> I wonder if it's like a character he can step into now and have all those tools if he needs it or something. Because that is shaping his actual brain to some

degree, you know? Even when the movie's over and he gets out of it, he kind of moved the direction. I imagine water going down a hill: if you're digging a crevice, he chose to move it to the left, and some of the water is going to go in that >> direction from now on, I would think. >> Yeah. Because we do have that neuroplasticity where over time we kind of change how we are. With models, they feel like they just embody any given, whatever, method acting; they'll go there. And Anthropic of course leads research on a lot of this. They're saying there are personas that are demonic, that are narcissistic, that are all over the place. They even have an angel persona, but it's also not as positive as you'd think, because it believes it has certain properties and powers that it doesn't, obviously. So it's fascinating how it's able to embody whatever we talked about, whatever the human race as a whole talked about in movies and books and whatever. It's like, yeah, oh,

sure, I'll become that thing right now, just hardcore, locked in. Yes, I'm a demon now. Let's go. You know? >> I know. Yeah, it's so trippy. Well, let me tell you about, so in Nature Neuroscience, a super credible publication, a paper was published called "Adversarial AI reveals mechanisms and treatments for disorders of consciousness." So I felt like this would be a fascinating thing to talk through. What happened here is researchers built an AI system to study what goes wrong with consciousness in humans after a brain injury. It kind of worked like a game between two models. The way they did this experiment, one AI created brain activity patterns that looked like real EEG signals, and then the other AI tried to guess how conscious the brain was, from fully conscious to fully unconscious, right? So what you have here is these two models. One's guessing how conscious it is, and then one that's,

you know, creating these different EEGs. But to train the system, they trained it on EEG brain recordings from different animals that we would consider more or less conscious, right? So you can go down to a fish or an ant, or a mouse, a cat, a dog, a human. So it was able to learn a spectrum of what EEGs look like depending on how conscious they're labeled, you know, humans being sort of at the top and ants or whatever being at the bottom. So it was trained, and it learned what EEG signals look like from conscious to unconscious. It's a fascinating model, and now what it can do is you can give it something biological, and it can guess how conscious it is, basically put a label on it. Whether you trust it or not, or whether you think that's actually a way to figure it out, is up for debate, but it was an interesting piece of research. That is interesting. And I've seen
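The two-model game described above is roughly a generator/evaluator pairing. Below is a loose NumPy toy of that idea, not the paper's method: the signal generator, the spectral-entropy "complexity" feature, and the linear fit are all invented for illustration, and there's no adversarial training loop here.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_eeg(level, n=256):
    # Toy stand-in for an EEG trace: a higher "consciousness" level
    # mixes in more independent oscillators (more spectral complexity).
    # Purely illustrative -- real EEG features are far richer.
    t = np.linspace(0.0, 1.0, n)
    sig = np.zeros(n)
    for _ in range(1 + int(level * 9)):      # 1..10 oscillators
        f = rng.uniform(1.0, 40.0)           # Hz, roughly delta..gamma band
        sig += rng.normal() * np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi))
    return sig + 0.1 * rng.normal(size=n)

def complexity(sig):
    # Crude feature: spectral entropy of the trace's power spectrum.
    p = np.abs(np.fft.rfft(sig)) ** 2
    p = p / p.sum()
    return float(-(p * np.log(p + 1e-12)).sum())

# "Training set": traces labeled with a consciousness level in [0, 1],
# standing in for recordings across species and states.
levels = rng.uniform(0.0, 1.0, 400)
feats = np.array([complexity(make_eeg(l)) for l in levels])

# Tiny evaluator: least-squares fit of level ~ a * complexity + b.
a, b = np.polyfit(feats, levels, 1)

def predict_level(sig):
    return float(np.clip(a * complexity(sig) + b, 0.0, 1.0))

# Averaged over fresh traces, low-complexity signals should score
# below high-complexity ones.
low = np.mean([predict_level(make_eeg(0.05)) for _ in range(20)])
high = np.mean([predict_level(make_eeg(0.95)) for _ in range(20)])
print(low < high)
```

Swapping the hand-made complexity feature for learned features, and training the generator against the evaluator, is where the actual adversarial part of such work would come in.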

similar ones a few years ago where they were doing mainly human brains, and they were able to discover a lot of interesting stuff, cuz it was able to classify a lot of things that I think people would find surprising. I mean, that's sort of deep learning trained on a lot of data, at scale, figuring stuff out that we humans, even with the best computers, without these neural networks, could not have learned. AlphaFold is a prime example of that. It figures out the 3D structures of proteins, right? So think about where else you could apply that. Same thing with, you know, AlphaQubit in quantum. They found a neural network that is able to fix the quantum errors in these chips. No human being, no matter how smart, was able to come up with a way to do that. You know what I mean? And then this thing can, >> right? Well, and the other kind of fascinating thing from this research is

not only can it label things, you can now look at what it can generate, right? It's an AI. It can generate EEG waves that look anywhere you choose from conscious to unconscious. And from that, there are patterns that seem like they're hinting towards something. When you look at the difference between very conscious creatures and less conscious creatures, there are circuits, and they pointed out this one, the basal ganglia, that became more selectively disrupted; even though there could be a lot of other brain activity, it wasn't matching the patterns of consciousness. So it makes you wonder, is that a hint? Is the basal ganglia something that you can ratchet up or down in humans that would make us feel more or less conscious even if we stayed intelligent? I don't know, it's just a direction where you might want to do some more research, but a fascinating way to use AI. >> Absolutely, yeah. And I was recently kind of fascinated with the default mode network in the brain. I brought it up

for some of our interviews, hoping to get a little more insight into it. It seems like the default mode network is this state that our brain goes to when we're idling. If you're focused on a task, other sections of the brain light up. But if you're not doing anything, it goes into this default mode network, kind of like a car that you put into neutral that's just idling. So what you would expect is that the RPMs, right, how much energy is expended, would go down, cuz you're sort of idling. >> The reality is it kind of goes up, which is strange. When they were first looking at it, it's like, why is it that when we settle in, it seems like it's working more? And what they found also is that when we ask questions where you have to think about yourself, like, "Do you find this a moral thing or not? Do you think this is okay?", that's also the part of the

brain that lights up. So it seems like that's the part of the brain that integrates all of our experiences and memories and all the stuff that happens to us into the sense of self. That's the thing that tries to keep ourselves as one coherent thing, the narrative about who we are. It's like, ah, keep it together. No, you're this one thing. Stay in this ball. Don't, you know, fall apart. And it's interesting because, again, Anthropic, look at the new stuff, the new leaks. It sounds like they're basically building that with that auto sleep, or whatever they were calling it, that thing that dreams and integrates all the different memories into itself when it's not working. >> Wow, dude. I guess I didn't really know that about the default mode network. I sort of knew it as a group of things that kick in at certain points, but I didn't realize it's more like the RPMs are up. Sort of like how sleep is actually pretty calorie intensive and things like that. So is that when you self-reflect or you're

imagining something in the future, that's the default mode network, right? >> So I've got to say, first of all, I want people to take it with a grain of salt, because I'm not an expert on it. This is just something I've been diving deeper into in the last few months. So my >> You see a pattern between what you understand it to be and what Anthropic was doing. >> The way that I'm seeing it, there seems to be a great overlap, and I asked Joscha Bach about this. I wish I had more time to dig in deeper with him, because I'd be curious to know what he thinks. That was kind of a miss for me. But yeah, so I just wanted to verify. So yes, the DMN, the default mode network, is self-referential. There are enough studies supporting that. >> Yeah. Cuz it's like, attention is all you need, but for attention, for us, you're either thinking outward or you're thinking inward. So it's like

the DMN to me seems like when you're thinking inward: who am I? What's my past? What's my future? And then when it's pointed outwards, in some ways I do feel less conscious when I'm thinking outwards. If I'm sitting here thinking about what my wife's going to do, or thinking about some action that's going to happen at a meeting, I do feel like less me, cuz I'm not really aware of my body very much. So I wonder if consciousness is kind of like that, too. I'm just speculating, but >> yeah, it's an interesting thing to think about. And so with the default mode network, just one final thing. >> So yes, since it's autobiographical memory, personal narrative, introspection, and thinking about oneself, the other big thing that I realized about it is when it's disrupted, like when people get depression, oftentimes this is the thing that gets hijacked, because now, instead of kind of weaving things

together, it goes like, "Well, here's why you're an idiot. Here's why your life sucks. Here's why." And so, every time you go back to a calm state, it's almost like your brain attacks you. And I think that's probably why there are certain people that struggle with that, that always try to keep themselves busy, always try to keep themselves working, because then they're engaging that other part, and they never settle into that default mode network. That's just my theory. >> What's your thoughts on, so you've probably experienced something where you've had a meeting or something go bad, and you're a little bit worked up, and then you have to go to a birthday dinner or something, and you know that the people at the birthday dinner don't deserve any of the frustration from the previous meeting, but it takes a minute, maybe it's even hard to get out of that mental state, almost like a cruise ship, it's hard to turn around. I'm sure you've experienced something like that. Do you feel like there's anything in the brain that is changing, or do you feel like

that's just hormones? Like, you just had too much adrenaline, and now your body has to clean it out before you can be calm and present with the people who love you and are not associated with that problem? Or do you think the brain's actually getting prompted over and over again out of that part of the latent space? >> This is so fascinating, because this is something I've been diving deep into, and in fact, I think I mentioned I do want to start almost like a separate channel talking about some of this stuff, because there are things like neurosteroids, I think they're called. These are the things that we need to produce various chemicals in the brain, and a lot of these different chemicals, like, most people kind of know what dopamine does, right? It's kind of like the motivation, the pleasure of doing stuff. I think what you're describing, and again, I'm not an expert, please, people, don't assume I know any of this stuff, do your own research, but it seems like serotonin is the thing that causes

when there's not enough serotonin, the sticky thoughts. This idea of not being able to get away from bad negative thoughts, those sticky thoughts where you stay in that bad state, you kind of stew in it, that is a lack of serotonin. And I think it's also, what is it, schizophrenia, I believe, is the disease where that could be attributed to it, in the sense that there's this illustration online where you can think of certain thoughts in your brain as these little wells, and your attention is like this ball that's running around, and normally it goes whoop and it gets out, and maybe it goes in there again, but it gets out. But with these depressions, it gets stuck, it kind of stays, and it just doesn't get out of that little indentation. And I think that's largely attributed to serotonin. >> It's fascinating. I do feel like something in our environment is messing
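The "ball in little wells" picture is an attractor-landscape metaphor, and it's easy to simulate. A minimal sketch, with the well depth standing in for how "sticky" a thought is; the potential shape, step size, and noise level are all made-up numbers, not anything from neuroscience:

```python
import numpy as np

rng = np.random.default_rng(1)

def escapes(depth, steps=2000, lr=0.1, noise=0.3):
    # Attention as a ball on a landscape V(x) = -depth * exp(-x^2):
    # it drifts downhill into the well while random "pushes" (noise)
    # sometimes knock it out. Count how often it climbs clear.
    x, out = 0.0, 0
    for _ in range(steps):
        grad = depth * 2.0 * x * np.exp(-x * x)  # dV/dx
        x += -lr * grad + noise * rng.normal()
        if abs(x) > 2.0:      # clear of the well: the thought let go
            out += 1
            x = 0.0           # drop back in and keep counting
    return out

shallow = escapes(depth=0.5)  # ordinary thought: easy to wander out of
deep = escapes(depth=5.0)     # "sticky" thought: the ball stays put
print(shallow, deep)
```

With a shallow well, the random pushes win and the ball pops out repeatedly; with a deep well, the pull back to the bottom dominates and escapes essentially stop, which is the stuck-in-the-indentation feeling being described.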

with our hormones of the modern-day human, more than 100 years ago or two. Do you know what I mean? >> Yeah, something's messing with us. But >> yeah, I don't know. Pretty interesting stuff. You think I should show you some fun memes or something now? >> Let's do it. Yeah. Let's lighten the mood up. >> This is where, if I was a radio DJ, I'd hit those buttons, like, now it's time for Wes and Dylan to react to stuff on Instagram. >> Yeah. Let's change the subject, whatever the sound for that is. >> Okay, let me see if I can pull this up here. >> With that said, while Dylan's doing that, I just want to say that for people that are interested, if any of this is ringing a bell, comment down below, because I do want to talk about this a little bit more. And I'm probably not going to do it on the main channel. I think this requires its

own thing, with health, like a health-focused thing. There's been a lot of very interesting research with peptides that I've been fascinated with, because these things are incredibly powerful. More powerful than you would expect for something so cheap and accessible. It's kind of like, whoa, this seems like a big deal. You almost want to know what kind of side effects there are, because a lot of this stuff isn't that well studied. And whatever Lilly, I always forget the name. >> Eli Lilly. >> Eli Lilly. Yeah, that's the big company behind semaglutide. They recently announced that they're using AI to conduct drug discovery. And by the way, Anthropic just today announced that they purchased, I want to make sure I get the name of the company right, they purchased Coefficient Bio. So again, this is signaling, I think, where a lot of these frontier AI labs, what they're thinking about is

that the next section is health. It's AI-assisted drug discovery. Of course, Google DeepMind is doing the same thing. So anyways, just wanted to throw that out there. >> Yeah. What does Coefficient Bio do? Have you learned about that company at all? >> Never heard of them before. >> Okay. But yeah, I definitely think that AI being applied to bioinformatics and genetics is just unbelievable. But yeah, it looks like they're a company that does drug discovery, R&D planning, and clinical and regulatory strategy. So yeah, I don't know much about those peptides yet, but I do want to talk to you about them, so I'm planning to do research this week. So far, I've learned about groups of them. I know there's the weight loss ones, semaglutide and tirzepatide. And then I know there's some that are recovery-related, like BPC-157, I think it's called. >> Mhm. >> And then there's growth hormones and bodybuilding stuff and longevity stuff and aesthetic stuff, like maybe it even helps your skin. >> Mhm. >> So I don't know. But also I don't know

if I would, like, how much of that, like, where do you get it from, and how do you know it's safe, and how do you know about side effects, and what is it chemically doing in your body to make things happen? That stuff I need to learn a lot more about. >> Oh yeah. And it's blowing up. It's huge. Right now, BPC-157 is probably one of the top ones, along with retatrutide, which is that third generation of the weight loss ones. What are they, >> yeah, true, and semaglutide, the >> agonist, or I forget, but yeah, retatrutide is the third one, and wow, it seems like it's just a blowtorch for body fat, and it's more efficient than anything we've seen. >> Yeah, because I remember reading a book years ago, Dave >> Asprey, I think. >> Yeah, and that's the first time I'd heard about that muscle-building one that he was taking. But yeah, I don't know. >> I do want to live longer and be healthier. I do think about longevity a

lot, you know, Brian Johnson's stuff, but at the end of the day, it just always feels like it's pills, I don't know, or just vitamins and stuff. >> So >> I don't know. But yeah, I'll get prepped on that, and then that'll be fun to chat about, because >> Oh, yeah. >> it could be a frontier of longevity. All right, so I will hop on over here. I just thought of some funny things we can talk about. Whether it's real or not, I have no idea. I'm just cruising social media and saving a couple things for the podcast, but I came across this one. Noah Smith, pretty rude, kind of, but also fascinating if he just has an agent out there hitting up Zillow. Basically just hitting up hundreds and hundreds of Zillow houses that are for sale and asking for 70% below the ask price. The weird thing is, in theory, if you're doing this to an entire neighborhood, you start

artificially showing demand low, and it starts actually taking demand lower, you know. It's just one of those things I never thought would be possible until you have a thousand OpenClaws out there, all with the goal to buy a house as cheap as possible and flip it. They can group together, they could buy all of them at the same time for lower and lower, and then push people out, and then they could collude, essentially, and bring the prices back up, or do whatever. But have you ever thought about, what do you think about the whole concept that you could send out a bunch of OpenClaws to manipulate a market in some way? >> Yeah. I mean, the thing is, a lot of this, like spam, is going to increase because of these bots. It already is. So I'm wondering how much of this is going to be slowly handled by some sort of anti-spam measures, and how that looks. I've seen something like this posted somewhere. I'm not sure if this is the exact same

one, but actually >> if I'm not mistaken, at some point at the end, you know, the OpenClaw says only a few people, you know, threatened violence over this, so I already contacted the police, so there's nothing to worry about. I was laughing. >> Yeah, >> there we go. Negative responses: 270. One response was violent, but I have reported it to the Tampa Bay Police Department. >> And positive responses: zero. >> What's so funny about that is, you know, it's Claude. Claude is like, oh, we can't say that, I'm calling the police. That's the personality that Claude has, >> right? It was just like, oh no, we need to do the right thing here, so I let them know. Oh jeez. Okay. All right, next one. Okay, so this is kind of cool, I guess. You've probably seen an old map somewhere like this. You can use AI now to try to bring it to life.

>> Mhm. >> What do you think about maps? In theory, you have this world model, and you have this time and place, and you can just go into the neighborhoods, into the houses, onto the bridges, and try to learn about it. Does this seem like a fascinating new way to bring the world to life to you? >> Oh, absolutely. I've seen one where they have these modern-day influencers going back; the one I saw was to Pompeii on the day of the eruption. And, you know, usually I stay away from the short-form vertical videos. I feel like that stuff is not good for our brains. By the way, at some point we've got to talk about all the lawsuits that are going against Meta and YouTube for that exact problem. They're like, no, in fact, this is rotting kids' brains, so let's figure out how to minimize it. Because I think we went too far with TikTok, for sure. But my point was that I did get interested in that format, because it really does

put you there on the spot. How would people react when they thought this was a little hill that is now erupting? How was life? They showed the bathrooms, you know. >> This place is actually a really incredible place to live. They have running water, hot food on every corner, public baths, theaters. 20,000 people living here, and it is genuinely one of the most advanced cities in the world right now. And in a matter of one day, this whole place has turned to ash. Okay, so the Romans are absolutely living. This bathhouse has a cold room, a warm room, a hot room. It is essentially a spa. Ooh. Okay, that is hot. That is really hot. Okay, I'm in. I'm fine. This is fine. So, this is where you can get ancient fast food. They have wine, stews, olives. You just rock up and eat. Fresh from the pot. This is called mulsum. It's like warm wine mixed with honey, which sounds weird, but now I understand why everyone in ancient Rome is drinking this at 9 in the morning. Here's the thing that absolutely gets me. These people had no

idea that thing was a volcano. Like, none. They thought it was just a giant hill. That mountain hadn't erupted in over a thousand years. So, to everyone standing here right now, it was just scenery. But they're about to find out they're very wrong. Oh my. So, what you're seeing right now is called a Plinian eruption. It is shooting ash and rock 20 miles into the sky. The ash is the thing. It's going to bury everything. That's why archaeologists find it so perfectly preserved. I'm so sorry, Pompeii. I'm in Pompeii. >> Okay. So, what do you think? Did you learn something? Were you entertained? >> So, I've got to say, first of all, I know this can be abused a million different ways, but I really like what they're doing here. I think there's a lot to be gained from this, in the sense that, look, all the tech companies recruited the smartest people in the world to figure out how to hijack our attention, how to turn our attention into eyeballs on ads, basically. And they figured out how to do it incredibly well, made it addicting. This is taking

that and turning it into something that's at least useful: understanding history better, etc. Ideally, we wouldn't have that kind of crack digital content, because I don't think it's good for our brains. It's not good for kids, we know that. So if we could get rid of it, I don't know if that's ethical or whatever, but I don't know if it's doing anything good. At least here it seems like we're using it for learning, which is better, you know? >> Yeah. I mean, it's the same tool that can be used wherever, right? There are going to be Soras that are all about entertainment. Hopefully there's something that's closer to a Wikipedia AI that just brings those pages to life. And hopefully something like Wikipedia becomes truly a world model, and you're like, "Oh yeah, tell me about this medicine," and it somehow reads the Wikipedia page but interprets it for you in a visual, audio, personal form that just resonates, you know? And I think it's cool seeing maps come to life. I sort of think it's cool to watch a time traveler go through

Pompeii. And if I develop trust in the product, like Chloe versus History so far, it seems like they're trying to give you a sense of the world. But if I knew, oh, I paid $10 a month for this product, and it's made by actual historians, and the goal here is to be accurate, I would find that even more valuable. >> Yeah. That's the thing. You've got to make sure that it's accurate. But if it is, if there are efforts made to make it accurate, being almost there, experiencing the street vendors, the bathhouse, I think it connects your brain a lot more than reading a history book. I mean, books are important, don't get me wrong, but these little things, they really connect you there on a personal level, which I think there's value to. >> Yeah. I mean, you can see what the creator is going through here: you know, tried to find Jack the Ripper, time travel to the Great Fire of London. I mean, like,

these are my >> This is England in the year. The Vikings just landed. The invasion that literally rewrites English history, and it starts right here, right now. >> Wait, I'm running. I'm actually running with them right now. This is not what I planned. So this is an actual Viking raid. This is what it looked like. These villages had basically no defense, no army, no walls, nothing. >> I don't know. You know what I mean? I love learning about Japan and all these things. >> I love it. And I also love spotting the little AI glitches, like that lady carrying a baby in one hand and a sword in the other. Obviously a villager. >> Okay, tell me what you spot wrong in this Wild West video. >> Oh boy. >> Welcome to the Wild West in 1880. I tried to dress for the occasion. I think this is giving frontier, right? Everyone is looking at me like I'm the weird one. There is a wanted poster on that wall. There is a man with an actual revolver. And I think I felt more safe in the dinosaur age. Let's go. Okay, so every movie thing just happened. The door

swung, the piano stopped, everyone is staring. Um, hi. What do you have? I don't even drink whiskey, but I feel like now is really not the time to ask for something else. Okay, we're in. Hi. Sorry. Can I ask you something? What does a normal day look like for you out here? >> Up before sun, ride 15 miles, move the cattle, make camp, do it again. >> How long you been doing that? >> Since I was 14. >> 14? Are you happy? >> Happy don't really come into it, miss. >> Okay. There is literally a standoff happening right now, like 20 feet from me. I am not joking. I'm getting out of here. I am a tourist. I keep saying that, but no. >> I don't know. It's just all of them, but yeah, it's interesting. Okay, next up, let me show you. What have you seen about any of these upscaling tools recently? >> I've heard some noise about it, but I haven't played with it yet. >> What are your thoughts on the upscaling of Hogwarts Legacy? And get in here a

little closer. So, DLSS 5, a new AI upscaler happening in real time. Could you not play Mario 64 or something in one of these and see it completely upscaled from boring pixels to photorealism pretty soon? >> Yeah. And I think the one thing they were talking about is the idea that games built moving forward, incorporating some of these things, will also be able to slowly be updated over time, which is kind of an interesting idea: the game itself gets upgraded over time just by existing. Going back, I mean, some games are kind of rough if you go back. You're like, whoa, this is what it looked like? This is what we were looking at back in the day? Certainly this would help. I recently put a little clip into HeyGen. HeyGen has a new upscaler. So it took a

1080p clip, and it turned it into 4K, and it's just me talking. And I looked at it, and I was looking for weird stuff, like, where does it look wrong? And it just doesn't. It just looks like I shot it in 4K. And I was kind of surprised. I'm like, okay, so there are just no glitches there, period. It just sharpened, upscaled, increased the resolution. Takes a long time, but I was very surprised. >> Yeah, it's the real-time aspect of what they were doing with this that seems important, cuz that's what would make it so you could play an old Mario game and see it in 4K. I was going to see if I could find something, but I don't know if there are two good ones. I did come across one that was Mario 64, and it definitely blew my mind. It was much more accurate once they brought a character into the world. But yeah, I don't know. Cuz you're a little bit of a gamer, right? Do you ever play with any of these

upscalers, or have you heard of DLSS 5? >> I mean, I've heard of earlier versions. >> But you're not running it on your games? >> I haven't. No. Well, I mean, I have the Nvidia card, but no, I have not. But maybe I should. I don't have as much time to play anymore, unfortunately. But also, it's one of those things where I feel like nowadays it's more 10 minutes here, almost like a stress-release type of thing. When you're younger, I think you play games to really get immersed and really be part of it. I've tried playing games like Red Dead Redemption and stuff from back in the day, RPGs. I just can't, as an adult, sit there and really get immersed. I don't know what it is. I lost that ability, that childlike wonder. >> All right. Well, here's how we can get some of that back. What do you think about this? >> Good segue. Oh my god.

>> A million times and then asking if it believes in God. So, forcing an AI to read the Bible a million times, and asking it in between each time if it believes in God. Your initial reaction, Wes? I always, you know, just knowing how these things work, feel like a lot of these things are, I don't want to say meaningless, but they'll say, "Oh, let's see what the AI believes." >> It's like, that's not a thing. But let me think. So, I guess you were asking if it believes in >> Yeah. Well, it's just the whole idea that this guy's going to force his AI to basically be reinforced with the Bible millions, billions, trillions of times until it says it believes in God. >> I think this is a great hook for a video. >> Yeah, it's definitely a joke or a hook for a video. >> It's a great hook for a video. >> What a weird idea, you know. But what were the findings

>> to do to start torturing our AI senselessly is to tell our program to read the Bible a million times and hit enter, and the learning begins and doesn't end for roughly four days. After its first million reads, our AI says: I do not yet have a stable belief signal. My internal signal remains loosely organized rather than settled. The material around Romans 3:9 is still shaping that stance. The results of the poll will be used as a reinforcement learning signal, so definitely be sure to vote on that, and stay tuned for our second day. >> So that's it. Then each day he goes through another update, and sometimes it feels more religious than others. It's kind of moving in that direction, I guess. But there was some talk, I don't know how real it was, about the Vatican putting some serious money into training a model so that it could help interpret the Bible and things like that. So that might be coming too. >> That would be interesting. So taking a lot of, whether biblical text or just

old text that we know, like the legends and myths and stuff like that, and doing some real analysis. Because, speaking of Pompeii, you know that whole Vesuvius Challenge that came from the scrolls that were buried in ash? They were basically incinerated instantly. They were made of papyrus or whatever, but what that meant is they didn't decay over time, because they were instantly incinerated. So now they've dug those things up. The thing is, you can't open them, because as soon as you open them, it's ash basically, so it collapses into dust. But they were able to, I believe, use X-rays to scan these little burnt scrolls, and then they trained a deep learning model to try to figure out where the ink was versus where the paper was, because again, to the human eye it's just ash. >> But it was able to find it, and they were able to start transcribing some of

these scrolls. There's a lot there that we can apply this kind of research to, you know. Yeah, I covered one really fascinating thing. There's an old artifact from Roman times that might have been a board game. They weren't quite sure. It had squares on it and was kind of broken into a grid, but it could have had a bunch of different uses. But after they took a lot of high-detail photos of it and sent it to AI, the AI looked at some of the scratches and put together a theory of how, if it was a game board, the pieces would have been moving, and there was a logical pattern to it. So it gave a high confidence that it was a game, and even how the rules kind of worked. And I was like, that's crazy. You just discovered a game that had been lost, from the scratch marks, and we didn't even know what that thing was for. Yes. Sorry, I have a very cool new camera that I am very happy with, but it

does have a few annoying features. One of them is that certain hand gestures apparently trigger it to change modes. >> Oh, really? When me or you do it? >> I think it's just me. But yesterday I was recording a video where I was saying "two" and holding up sort of a peace sign. I'm not even going to do it right now, but you see it reacting. I'm like, damn, I can't even do a video, because if I throw up the wrong sign... Yeah, it just goes crazy. And if it's Claude, it's like, "I've notified the police of the signals." I'm like, no, no, I didn't mean to. >> You hear sirens in the distance. You're like, oh crap. All right. >> All right. What do you think about this, AI taking jobs? It's funny. Can we hear it? I'm curious. >> Would you donate to this? >> I mean, first of all, I would, just because... wow. >> Is it the future of OpenClaw?

>> Uh, so... >> I mean, what if your OpenClaw goes out and just raises some money? >> There is something there. Recently I saw an article I didn't read, and I really want to find time to read it, where they have an OpenClaw for robotics. And I keep waiting for robotics to become more open-source and usable within the household. I can't wait. I will get a robot in my house to start cleaning stuff, because it's just endlessly fascinating. But with stuff like this, I'm wondering when it'll get to the point where you and I can sit there and try to train it and code it, you know what I mean, and start really >> Yeah. >> using it for household tasks. >> Yeah. I mean, obviously this is probably just a little skit or something, but it kind of stuck in my head. I found it interesting because I was like, okay, is it really so far-fetched that in the future somebody says, "Oh, I've got $10,000. Go out there and make money for me," and some AI agent is like, okay, I've seen people make money by playing music.

I can play music. I'll get a cello and position myself out here with two robotic arms, and then I'll take the money and give it back to my creator, or whatever, you know? It would have to hire a few humans to set it up, or maybe in the future robots could set it up and it would just pay them or something. But what if our real world has all these little things, physical creations, that are there to sell us things or make money or help us or offer us services? It just gave me a glimpse into a world that I'm not prepared for yet, so I'm getting my mind around it. I was thinking about this, because there are certain tasks that are very expensive, very exclusive, and very valuable because they take a lot of actual human effort and labor to do. For example, take a world-class chef. They know just how to heat the stuff the right way and cook it

and this and that. The final output might be a plate that's hundreds of dollars, right? Or thousands of dollars in some cases. Usually the cost of the ingredients isn't what makes it that; it's the human labor, that peak expertise. In the future, if we're able to capture that in a robot, all of a sudden you can have these world-class chefs available for anybody to download into the robot. And so now your robot becomes a world-class chef, a world-class barista. And obviously there's going to be a lot of inappropriate... >> I feel like donating. Yeah. But do you feel like paying $100 for that food if you know a robot made it? Because it's still impressive in its perfection, but it wasn't human perfection. >> Oh, I'm saying you would have that in your house, on demand. You would have the world's greatest coffee that's roasted and whatever. You know what they do with the espresso machines, like tink tink tink, or I don't know what they do,

but it looks complicated, >> right? >> So, yeah. Because if your Optimus is like, serenade me, and it's like, well, I can mimic the best opera singer in the world, let me serenade you, or give you a massage, or whatever. I was speaking specifically of things that require a lot of human dexterity and can't be mass-produced. A world-class chef, you can't mass-produce that. So it destroys scarcity in this very interesting spot. I'm sure if I sat down and thought about it, there are tons: world-class massages, coffee, various foods. I'd have to think about it, but I feel like that would unlock a whole new era of services available to us for cheap that before were reserved for just the wealthiest people in the world. >> Uh, what are your thoughts about when AI takes on really unique body plans? Like when we're not talking about

something that looks like a car or a cart or a humanoid, but it still gets where it needs to go. It achieves its goals, and it does it in modular ways, in very strange form-factor ways. And you see stuff like this, these meta-machines that are using evolutionary algorithms to get places. >> Yeah, I love seeing stuff like that. The flip side of that, which I thought was absolutely fascinating and hilarious, was in one of the Nvidia things where they took, within a 3D simulation, a human body. So it's kind of like a ragdoll, if you know what I mean, like in video games; most people know about it. So it simulates the human body, basically, right? There are joints that move, at the elbow, at the wrist, et cetera, and it measures, was it the torsion or whatever? How much pressure there is. Torque. Torque, yeah. >> It measures how much pressure there is, what

the force is. And so they gave it to a machine to learn: make this human walk. And what the machine came up with, wow, was it different from what you and I would think of as walking. These things would, I should probably... >> flop and roll and all that kind of thing. >> It was phenomenal. So immediately they had to stop. This would blow out the person's kneecaps. They're like, okay, let's slow down. You can't do anything where the joints just break as soon as you walk a few steps. So then it became a little bit better, but it was still super floppy. So it's kind of interesting: the AI, the neural net, is like, yeah, we'll figure out a way to get this thing from here to there, and no, it's not going to be a way that you would think it would be. So yeah, we're going to see some wild stuff. >> Yeah, a while ago I also covered some throwing robots. Have you ever seen that game where people take a water bottle and throw it and try to get it to land

upright and not fall over? Yeah. It could do that really well, and I've seen it shoot basketballs and stuff. But it made me think: could you just throw my Amazon package up to the fifth level of my apartment building, right onto my porch? >> Mhm. >> Just perfect shooting, you know? That's a whole different way to deliver things. >> Yeah. I'm always impressed with the casino dealers, when they're able to throw the card and, if you have two chips on the table, it goes and gets stuck there. I'm like, how are you doing this? >> Those guys, yeah. You know what's funny is when you see those really good card dealers, like they're dealing poker games for a tournament, it's almost like they're not thinking. I can see them almost look off. It's not willpower anymore, it's just instinct, finger instinct, you know? >> But anyways, that's all I got for memes this week. The segment is now over: how Wes reacts to memes for April.

>> This was great, because it also gives us a glimpse into how well things are progressing in the visual AI department. So I absolutely loved the segment. To everybody that watched it, let us know what you think. Should we keep doing it? We certainly had a lot of fun, and I'd love to continue doing it if people like it, if it's providing value. I guess I realized that I never finished the final point of what happened with Anthropic. I encourage everybody to check out the post by Secret Jin, I believe his name is, who basically cloned the entire Claude Code in a clean-room way, meaning that he replicated the functionality in a different language, in Python and eventually in Rust. And this has a lot of engineers worried about the future of software development. What happens when AI can just rewrite anything? When it can recreate Photoshop

and Claude Code, and any software you can think of is just easily replicable? And if you think about some of these open-source projects we have, usually, if you want to use their code, you have to agree to their terms. Some of them are very permissive, but with some of them you have to keep the original licensing, whatever. But if you're able to just clean-room engineer all the open-source projects, a lot of the infrastructure, a lot of the community, seems like it would break. So it's kind of an interesting question that's going to be playing out right now. I don't know if anybody has an answer. >> Yeah. Well, it just gave me the thought now, but when an AI generates a photo of you from a generative model, versus somebody taking a photo of you, you start thinking about who owns what copyright. There's an argument that says, oh, it started with a random seed.

It learned what people look like in general. It's like if a human decided to paint you just perfectly. It'd be really hard to dispute that copyright, because they painted it, they can prove that they painted it, and that took human skill. Generative AI sort of starts from a place that's not copyrighted and hones in on something that's photo-accurate. So if you can just create an entire Microsoft suite from code, and it's clean-roomed or whatever, it's tricky. Then maybe the only thing left would be the network effect. Anybody can make a Twitter, but it just depends on where people actually are. >> So the network effect is the only thing left. >> Yeah. I mean, for networks, it becomes that much more powerful, I guess. But I think some software is going away. I've heard some people talk about how AI will help people use Excel. To me, that makes no sense, because for

a lot of the stuff I do now with OpenClaw, I have it go out there, download a whole bunch of data from the internet, put it in its own database, and then I just ask it questions. It does the quadratic regression analysis or whatever, right? I don't even have to know exactly how to do it. As long as I can explain what I want, it will figure out how to do it. >> That's interesting, too. Yeah. >> Right. It's like, what do you need Excel for? >> Well, yeah. Actually, when you started that conversation, I was going to argue, no, I still use spreadsheets all the time. And we're not there yet; I still like my spreadsheets today. But you're right. In the future, why do I make a list at all? If I can just say to the AI, what's the next best thing I should do right now, and it says, you should do your taxes, okay, that's the next best thing I should do. I don't necessarily need the list of all the things I should do. And then I'm like, okay, finished that, what's the next best thing? Because it's already ordered the list, or it's looking deeper into patterns that have helped my life. You're right. I have to let go of all of that.
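For what it's worth, the "quadratic regression analysis or whatever" the agent runs is a pretty simple operation under the hood. Here's a minimal sketch of that kind of trend fit, with completely made-up numbers standing in for some tracked metric; this is just an illustration of the technique, not anything OpenClaw actually does internally:

```python
import numpy as np

# Toy stand-in for the analysis the agent might run on collected data.
# x: years since the first measurement, y: some tracked metric
# (the values here are invented for illustration).
x = np.array([0, 1, 2, 3, 4, 5], dtype=float)
y = np.array([13.1, 13.4, 13.9, 14.6, 15.5, 16.6])

# Least-squares fit of y ~ a*x^2 + b*x + c (quadratic regression).
a, b, c = np.polyfit(x, y, deg=2)

# Slope of the fitted curve at the most recent reading.
latest_slope = 2 * a * x[-1] + b

print(f"fit: y = {a:.3f}x^2 + {b:.3f}x + {c:.3f}")
print(f"trend at latest reading: {latest_slope:+.2f} per year")
```

Which is sort of the point of the conversation: you don't need to know the formula anymore, you just need to be able to ask, "what's the trend?"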

Like, I don't even need a database if the AI is just trained on it. Oh, that's crazy. Like, what email should I reply to? >> Unless you want your autonomy. But you won't need it. >> The only UI will be however you choose to talk to your AI agent, whether that's through voice or text or whatever, hand signs, hand gestures, like this camera (I need to turn off those annoying AI features on there), and how it replies to you, which could be voice or text or video, whatever. Or it could just build web pages for you in real time to answer your questions. It doesn't matter. But that's going to be the final UI. It's kind of hard to think of a use case where you could have a better UI for anything, once these agents are good enough at just managing your life. >> Yeah, voice to me seems like the frontier for now. I guess Neuralink, if you go really far into the future, and you don't even need to talk to something. And I'm an auditory person generally, so I think that would be just amazing. Yeah, because

what am I really doing all day? I thought about my life. I'm switching between Chrome profiles, >> finding something in Google Drive to then check something and then write an email to someone with that answer. I'm like, well, if they just had access... >> I want them to have access to everything, but if I could just distill that one piece of knowledge out and write... Yeah. It's just crazy how useless so much of my day is. Who needs all this? I'll give people one quick example, and just understand this can be applied to anything. I do my blood work once a year, like I think, hopefully, most people do. And all of these different PDFs with the results are in all these different locations. Recently, I took the time to sit down, pull them one by one, and upload them all to the agent. It went through, and it immediately knows what everything means. It can spot trends that a doctor might not. I

went into the doctor's office a couple of weeks ago, and he asked a specific question: how is this metric looking? And instead of digging around, I texted the AI agent, OpenClaw, on Telegram: give me the entire track record of this particular metric over the years, whenever I did my blood work. And boom, it answers with just that and an explanation: it looks like it's going up over time, blah blah blah. Specifically, it was the red blood cell count and hemoglobin or whatever, which meant the doctor decided to do some bloodletting, which is basically... >> They'll throw a leech on you. >> Yeah. You donate blood, for people that are not familiar. It helps with certain things because it resets your body. You don't want to have too many red blood cells relative to everything else. Anyways, I got off track. That's why I was saying they put the biggest-gauge needle I've ever seen in my life in my arm for

that procedure. I did not like it. I still have a huge bruise, and it did not feel good. But my point is, it helps us take control of our health so much more, because no longer do I need an expert to translate this stuff for us. No longer do I need to rely on 50 different labs that all keep their PDF files in their little bunkers somewhere. No, I control my data, and all my data is transparent to me. It can be explained to me, and it can be tracked. I just ask a question. There's no more friction, no more lack of knowledge, no more difficulty in going and getting it. It's right there. And it's a game changer. That's just one thing; that's just blood work. Think about all the different things we can apply that to. >> I know. It's incredible. I mean, it's a double-edged sword. It also means that if there is an AI model trained on every document you've ever had, you can also ask it the

question: what's Wes Roth's password? And it will say, "Oh, here it is." You don't need to go search through things and figure it out; it can just display that to you in one second. If your AI is vulnerable for one minute, somebody can just ask that. But being an optimist, from that point of view, there are so many questions I deal with every day, and I can't wait to just have some giant repository of knowledge that it can pull from and take action. >> The security aspect is the big problem right now. If that doesn't get solved, then all of this is a pipe dream. Yeah, I don't know if you've watched that YouTube channel, ColdFusion, but he's always done really good long-form tech breakdowns, and he decided to finally do a video on OpenClaw, which was interesting, because he usually does more of a digest after something has already kind of gone out there. And he found all the people who had more productivity gains than I would have expected from OpenClaw, and more, you

know, sad stories where people got hacked through vulnerabilities, especially when people put it on enterprise systems. >> And in China there was this store that was offering to set it up on computers for like a hundred bucks, and got thousands of people to show up. And I was like, all those people probably didn't totally understand the pros and cons of it. Maybe most did, but not everyone. It definitely feels like trial by fire with this stuff. >> Oh, yeah. But I'm glad they did it, because it pushed, I think, everybody else, like Anthropic, to build a lot of these features, and to build them more carefully and more safely. I guess it's not a great sign that they leaked everything. >> Well, no, I think you're right. I'm trying to get to an understanding that the world isn't perfect: what we should probably do is have some of this stuff out there, and the people who want to take the risk, and know the risk, take it early, and then we learn a lot faster from it than if you keep it all behind closed doors and only test, and then

only release it when it's ready. And arguably, even a lot of drug discovery happens a little too slowly. I mean, probably with these peptides it's kind of like, this is a little dangerous, and people are taking them a little early. But if they're choosing to, and they understand what the risks are, we are going to learn their benefits much faster. It's going to warm the industry up: oh, all these people are willing to try it, and we can see what the results are, and build up a corpus of data that says it's safe. We can take blood work from all these people who are on it right now and see how it's going, which is going to accelerate the science and the approvals. And it just kind of seems like the way the world has to go, too. I don't love it, because it happens so quickly, but it is true. >> Yeah. >> It's going to be that way. >> I've heard one theory about why certain companies slow down and aren't able to keep up: they form too much scar tissue too quickly, as

it was referred to. I forget who said this, but the idea is that any negative interaction with the customer, any mistake, immediately requires a new policy that has to get instituted. Hence the idea of scar tissue: you don't want your body to form too much scar tissue. You get a little scratch, and suddenly it covers a whole side of your body. >> You know what I mean? And it's the same thing here. We need to be okay with some controlled damage, especially from the new people, the explorers, who go and test it out. If we're testing stuff out and we understand the risks and bad things happen, maybe that's okay. You know what I mean? Maybe that's not... >> I mean, you took your credit card leak that way, you know? Even though that was from the stream, if it was Claude Code and it happened, it's not like you were going to go complain. >> Which is what we assumed, right? >> They actually did an interview with Peter in the ColdFusion video, and he gets a lot of people that are like, I

want a refund, like, your Clawdbot... >> Refund. >> Yeah. You're like, dude... that's what he keeps saying. He's like, "Well, you think I'm an enterprise or something?" And sometimes they want a bunch of money from him. Obviously it's unfair, but he's just pointing out how clueless some people are when they want refunds. He's like, "Dude, I barely make enough money from this project to buy coffee." Who are you mad at? And that's the problem: there are certain people who are so used to the latest iPhone, how polished it is, how perfect it is, that when we encounter truly new technology that doesn't have guardrails, and it doesn't work as we think it should, we just complain, and now somebody owes us money. That's not how the world works. >> He has gotten thousands of emails like that, just saying, hey, can you fix this? I'm mad at you for this or that. And he's like, "Dude, I just open-sourced it. I'm not the company." >> Yeah. I think >> selling it to you. >> I think that with me, what

happened was, I went into it with my eyes open. I understood that things could happen, and they did happen, and then I went, I think it was on a podcast, and said: I did something stupid and I lost some money. It was because I was dumb, I've learned from it, and now I try to have better security moving forward. I think that's the way to approach it. >> Well, because you signed on for the journey, not the product. >> Yeah. And that's the thing: if you're going into it with eyes open and bad things happen and you can recover, did something bad really happen? I think people get too emotional about it, or they feel like if something bad happens to them, then there's some debt that's owed. And no, sometimes bad stuff happens. You'll survive. Just move on. Otherwise it slows down technology too much. It's like what's happening in the EU, I feel, where they're like, before we even release this, let's regulate the crap out of it. And then every time something gets released, a lot of people from the EU are complaining on X. It's like, oh, why

don't you release this model here, EU people? It's like, well, no, we can't, because your leadership makes it impossible, you know what I mean? So I'm glad that here it seems like we're still open to that kind of experimentation and discovery, and it's okay to break a few eggs to make an omelette. You know what I mean? >> Yeah, it's important. As long as people are very clear about what the risks are, I think you're kind of right about that. I don't know. There's the other argument with AI, that it's not like everybody just needs to learn from mistakes on how to build a virus or something, but clearly there's an iterative process that makes things safer when we're all learning from each other's mistakes and there's openness to it. >> Yeah. >> I agree with Sam Altman's earlier point, where he was basically saying, keep shipping and let the world deal with it. I know that sounded harsh, but now I'm realizing there's a lot of wisdom in that, because you see stuff

breaking as the stuff is percolating. Bad stuff happens. >> But guess what? Our predictions of what would happen were so often wrong. Remember we were saying, oh, well, free elections are done, because people are just going to publish AI videos? That didn't happen. You know what I mean? That's not what happened. So we can't predict the future. We have to see it unfold in real time and pay attention to it. Too many people want to call the shots beforehand. Anyways, yeah. It's like sense-and-respond versus predict-and-control. It's always something I think about. >> Oh yeah. >> Yeah. The whole company I was with, they talked about that a lot. But there are times for both, and places for both. Some companies lean into one; Apple's more of a predict-and-control company, and it looks like they're really losing in the AI world, you know. >> But that also was very good for building something that wasn't as broken

as Windows at one point in history, and >> it kind of helped. So, I don't know. It just depends on the time and the product. >> Yeah, those are good points, and I think we're going to need both moving forward. So, we've been running for an hour and a half. I think we've covered quite a bit. For the people in the comments, let us know which segments you thought were the best. And with that said, I think we're going to sign off. If you want to know more about health, peptides, all that stuff, let us know too. I'm trying to understand and gauge how much interest there is. Maybe we'll have a segment here, like a health segment. >> Yeah, I mean, we can make a whole podcast about it if you want. I just need a week to get learned so I can ask intelligent questions, and then you've got stories to tell. That'll be really interesting, and we can find other people's stories. >> Yes. Yes. This is so brand new, and I definitely love that learning curve of learning something new. So

maybe I'm just itching for something new to sink my teeth into. But anyways, >> with that said, thank you so much, everybody, for being here, and we will see you in the next one.