FireTail
Modern Cyber: Episode 102 - This Week in AI Security 9 Apr 26
Channel: FireTail
Date: 2026-04-09
Duration: 11min
Views: 11
URL: https://www.youtube.com/watch?v=d-qMcNf5vEQ

In this episode for April 9, 2026, Jeremy covers a week dominated by highly sophisticated supply chain attacks and the emergence of "Project Glasswing", an internal Anthropic project revealing that next-gen AI models may be "too good" at finding zero-day vulnerabilities.

Key Stories & Developments:

The FBI's IC3 Report: For the first time in the report's 25-year history, the FBI has specifically categorized AI-enabled fraud, which accounted for $893 million in losses across BEC, romance, employment, and investment scams.

All right, welcome back to another episode of This Week in AI Security for the week of the 9th of April, 2026. We've got a number of interesting stories to get into this week, including a couple that were still breaking and developing last week, which we chose not to cover until there was a little more information out in the world. So we're going to get into those, along with things that have come out over the past seven days. Let's get into it. First story is out of the FBI: its annual report on fraud and financial crime specifically affecting people in the United States, produced by the Internet Crime Complaint Center (IC3). They fielded over 450,000 cyber-related fraud complaints, representing more than $17 billion in losses. That's one way to think about the scale of consumer-level cyber fraud that is happening, and cyber-related fraud accounts for 85% of those losses. Now, what was interesting to us for this

week in AI security is that, for the first time in the report's 25-year history, there's a specific callout for AI-enabled fraud, which accounted for about $893 million. That's roughly 5% of the losses, across 22,000 of the 452,000 complaints. So the complaint share is actually a bit lower than the loss share, which suggests a slightly overweighted impact per AI-powered fraud campaign or AI-powered scam. They also mention that AI is now being used across business email compromise, romance scams, employment scams, and investment scams, so there's a pretty big diversity in the types of scams where AI is being used. Something to keep an eye on. It really underscores something we've talked about many times on the program: all the tools that you have access to, threat actors and cyber criminals have access to as well. All right, moving on to the next story: a report that came out from Cisco Talos and other authors.
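For context, the shares quoted above can be checked with quick arithmetic. A minimal sketch using the episode's own figures (the rounded IC3 totals are taken straight from the episode, not from the report itself):

```python
# Figures quoted in the episode (FBI IC3 annual report, as rounded on air)
total_losses = 17_000_000_000   # ~$17B in total reported losses
ai_losses = 893_000_000         # losses attributed to AI-enabled fraud
total_complaints = 452_000      # total complaints fielded
ai_complaints = 22_000          # complaints flagged as AI-enabled

loss_share = ai_losses / total_losses                # share of dollar losses
complaint_share = ai_complaints / total_complaints   # share of complaint volume

# AI-enabled fraud's share of dollar losses (~5.3%) exceeds its share of
# complaint volume (~4.9%) -- the "slightly overweighted" impact noted above.
print(f"{loss_share:.1%} of losses vs {complaint_share:.1%} of complaints")
```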

Cisco Talos ran a Shodan scan for publicly exposed Ollama instances around the world and found over 1,100. That original scan was back in September of 2025; when it was rerun in April of 2026, more than 25,000 instances were exposed. The interesting thing here is that the author (and as always, we have the link in the show notes) coupled this with a survey of a thousand European executives about their sense of security readiness in adopting AI: less than 20% felt they had adequate visibility and controls, while 35.7% are already operating AI workloads. So you've already got an initial mismatch. Couple that with the fact that the exposure rate is already really high. This is one of the topics we've hit a number of times here: we're adopting AI tools at a rapid pace, not thinking in

advance or planning in advance what our controls are going to be. So, adoption outpacing security, a common theme we've talked about. This particular author and report were very focused on Europe, and talk about how the fragmentation of different standards across the EU is actually causing some confusion as well as a lack of action. I think that's going to change later this year: the EU AI Act is already, quote unquote, in effect, but enforcement and fines only start in August of this year. So I'll be curious to see how that develops. We'll cover it here on This Week in AI Security, of course. All right, moving on. We've actually got two CVEs, one in MLflow and one in PraisonAI. Both score the maximum CVSS score of 10, so these are maximum severity. One allows code execution with no authentication in an attacker-controlled Python environment, and the other is a critical command injection flaw in model-serving container initialization code. So these are again

on the theme of the security of the infrastructure around these systems. Just two CVEs to be aware of if you're using either of those packages: look for the latest versions and move on. All right, next story, along the same lines of securing the infrastructure where you're building your AI systems. This is research from Palo Alto Networks Unit 42. They discovered a flaw in the Vertex AI permission model that can be misused to allow AI agents to gain unauthorized access to sensitive data and other environments within Google Cloud's Vertex AI service. That is, again, the same topic of moving too quickly and not anticipating all of the security boundaries around the different infrastructure components where we're building our AI systems. Moving on: a major breach at a company called Mercor. This is a $10 billion-valued AI startup that provides AI training data to OpenAI, Anthropic, and Meta. It confirmed that it was a victim of a supply chain attack, part of the broader team PCP campaign that hit a lot of open

source packages in rapid succession in late March. We've talked before about a couple of those. This time it appears to stem from the Axios package, and Axios is used in a lot of places; I cannot stress how many environments this is being used in. This is a really big one. In fact, on the same topic, moving on to our next story, the SANS Institute held an emergency briefing about the Axios npm supply chain compromise. This happened on March 31st. This is one of the stories that was just starting to get a little bit of notice when we recorded last week; we wanted to wait for a little more information to be out there, so we're covering it this week. Basically, a remote access Trojan was injected into the Axios package around midnight on March 31st, in versions 1.14.1 and 0.30.4. And that was then

potentially installed up to 600,000 times across Windows, Mac, and Linux environments. So the interesting thing here is to think about a couple of things. One is: how did this attack happen? How was the repository compromised? How was the package compromised? Think about the theme of supply chain risk that we've covered any number of times here. But the other thing I find interesting is something the SANS Institute researchers pointed out: the real risk is the credentials that were harvested by the remote access Trojan once it was installed. That's probably the most valuable data. On this topic, this is one of the most widely used open source projects, and the reporting from TechCrunch shows this was probably weeks in the making. It appears to have been a long-running campaign with a lot of sophistication and very precise targeting, where the hackers spent weeks building rapport with the project's primary maintainer. They posed as a real company, created a

convincing Slack workspace, used fake employee profiles, and shared data in Slack channels that looked very convincing, as if it really were that organization. You know, if you were this type of organization, what are the types of Slack channels you would have? What are the types of messages that would be in them? All of which, by the way, is easily generated using LLMs. I can go to an LLM today and say, "Hey, talk to me about a company like FireTail. What are the likely Slack channels this company is going to have? What would be the structure? What would be stories and topics shared inside that organization?" The maintainer was then invited into that Slack workspace and lured into downloading malware. That malware appears to be how the threat actors gained the credentials to take over the package and then compromise it. One more CVE; sorry, we should have had this earlier in the show. This is in the Flowise agent builder. Active exploitation has been reported. Something to keep an eye on if you are

using that: again, patch and update. Then the next topic. This is one of the other stories that was really emerging last week, around the Anthropic Claude Mythos model family. There were a couple of leaks around this. One was the leak of the existence of the Claude Mythos model family, along with some internal documentation that appears to show serious concerns about Mythos's capabilities in discovering vulnerabilities, including the discovery of a vulnerability in a BSD package that goes back 20-plus years and predates GitHub. And this is one of those things where a lot of the concern is that this thing is so good at finding vulnerabilities, there's a real danger that if you put it out into the wild and it's usable by threat actors, they will find every publicly exposed vulnerability across a good chunk of the internet at a very rapid pace. And so that's one of the

big concerns. One of the other leaks was some Anthropic internal tooling and methodology for how they build Claude. That includes things like the dreaming mode and a virtual pet that the agent has. Those exposures have been covered a number of times already, so we're not going to go into them deeply here; they're also not very specific to the AI security theme we cover, which is why we're really focusing on the cybersecurity angle. On that same topic, one of the things that is particularly concerning is that this is a general-purpose model, but as part of this so-called Project Glasswing, it appears to have very specific cybersecurity capabilities. For instance, over 99% of the zero-day vulnerabilities that Mythos discovered have not yet been patched. Even the 1% that Anthropic can discuss gives a clearer picture of the substantial leap in capabilities. We covered the 27-year-old bug. This was something that

one of the researchers behind the publication we're referencing here presented at the Unprompted conference: with very minimal prompting, beyond the baseline training inherent in the model, the model can be directed to go find vulnerabilities quickly and extensively. That includes logical approaches to untangling business logic, or untangling things like concatenated serialization of different parameters that go into a URL or an API argument, approaches that are more along the lines of a professional human pen tester, and that sometimes show levels of creativity and lateral thinking that, I would argue, a lot of pen testers may not have on their own. So there are real concerns that this thing may be too good, and there may need to be a controlled rollout program that goes along with this model. So again, that was one

of the stories that was still developing last week; we've covered it here for you this week. Hope you find that helpful. That's all for today's episode, a little bit shorter than usual. As always: rate, review, share, like, subscribe, all that good stuff. We'll talk to you next week. Thanks so much. Bye-bye.
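One practical follow-up to the Axios story from this episode: it's worth checking your lockfiles for the affected releases. A minimal sketch below, where the compromised version numbers are the ones quoted in the episode and the lockfile layout assumed is the standard npm v2/v3 `package-lock.json` format (entries keyed by install path under `"packages"`); adapt the version list to whatever advisory data you actually trust:

```python
import json

# Versions reported as compromised in this episode (illustrative list)
COMPROMISED_AXIOS_VERSIONS = {"1.14.1", "0.30.4"}

def find_compromised(lockfile_text: str) -> list:
    """Scan an npm package-lock.json (v2/v3 format) for compromised axios installs."""
    lock = json.loads(lockfile_text)
    hits = []
    for path, meta in lock.get("packages", {}).items():
        # v2/v3 lockfiles key entries by install path, e.g. "node_modules/axios",
        # including nested copies like "node_modules/foo/node_modules/axios".
        if path.endswith("node_modules/axios"):
            if meta.get("version") in COMPROMISED_AXIOS_VERSIONS:
                hits.append(f"{path}@{meta['version']}")
    return hits

# Example: a minimal synthetic lockfile with one compromised install
sample = json.dumps({
    "lockfileVersion": 3,
    "packages": {
        "": {"name": "demo"},
        "node_modules/axios": {"version": "1.14.1"},
        "node_modules/left-pad": {"version": "1.3.0"},
    },
})
print(find_compromised(sample))  # → ['node_modules/axios@1.14.1']
```

This only covers npm-style lockfiles; yarn and pnpm use different formats, and none of it substitutes for rotating any credentials that may have been harvested while a compromised version was installed.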