Smashing Security podcast #463: This AI company leaked its own code. It’s also built something terrifying

A hacking group claims to have broken into the flood defence system protecting Venice’s Piazza San Marco – and is offering to sell access to whoever wants it. The asking price? A frankly insulting $600.
Meanwhile, Anthropic accidentally leaked the source code for Claude Code via a basic packaging mistake. Oh, and by the way, they’ve also just revealed they’ve built an AI model called Mythos that can find and chain together software vulnerabilities faster than any human. Sleep well.
All this and more in episode 463 of the “Smashing Security” podcast with cybersecurity expert and keynote speaker Graham Cluley, joined this week by special guest Tanya Janca.
This transcript was generated automatically, probably contains mistakes, and has not been manually verified.
Hello, hello, and welcome to Smashing Security episode 463. My name’s Graham Cluley.
You’re a pretty big deal in the world of cybersecurity. So if people haven’t heard of you, how would you describe what you do and what you’re all about?
I really like to speak, so I speak at conferences, and right now I’m giving secure coding training to large organizations and then kind of just doing contracts here and there, helping people change their application security program so it’s more AI aware.
You know, you go into an emergency room, a hospital, all of those places, the security is usually much worse than it is on the internet, and it’s not great on the internet.
Is that correct?
So, this month I’m covering the supply chain, how to secure the supply chain, and how software developers are a target now.
Malicious actors are now targeting the actual developer, the human, and developers need to know.
But if we look at maybe half of those, it was actually the software developer that was compromised.
And then as a result, multiple parts of the supply chain were breached, because they have superpowers: they can control the CI, and they control their IDE, and they control the repo, and they can go to prod, and, and, and.
And so, you get the developer’s credentials and suddenly you have everything.
And then on top of that, what some of the malicious actors have been doing, Graham, is then they rob the developer as well.
And I remember from way back then that the programmers are also the kind of people who would demand to have admin privileges on their computers because they feel they have godlike capabilities anyway.
And so they would be arguing with the IT team, well, I need all of these rights. And that could be a security threat in itself, couldn’t it?
And on top of having admin rights and being the lord of their workstation, I think a lot of people, when we think of the CI/CD, we think of it as a thing that publishes code and we don’t think about how it’s a thing that talks to the outside, does downloads, tells us if everything’s okay or not, decides to log or not log certain security things.
And very few organizations are currently logging or alerting, for instance, if a new admin gets added or if a new workflow gets added.
I worked at a place, I was contracting there, and we’re playing around with their CI because I’m going to add some stuff and—
You’re giving me some acronyms here. No, no, no, it’s all right. But what is that? What is that that you are talking about?
It will go and get things off the internet for them.
It’ll add some updates, it can log things, it can send alerts, and then it will put a copy of whatever the thing is they’re building onto maybe a development server so they can play with it and look at it and do more tests.
And then if all those tests pass, it’s, hmm, that seemed pretty good. Let’s put it on another server and let another team see it.
And it goes from environment to environment automatically, automagically even.
And then by the end, assuming it passes all the tests and the humans approve it, it goes out into production, which is where you and I and most of us humans live.
So, if you’re a customer and you’re using software, you don’t know, but that’s called production. That’s the place where the magic happens, where the users are.
But there’s all these other environments below that where we’re playing around with things of making sure things are okay and making sure they’re safe.
And so this system is usually the most powerful software system in an organization. It can go to the internet and download things. It can install things. It can delete things.
It can decide this code’s not good enough and it’s not going anywhere on my watch. And it does most of this quite automatically without human intervention.
And now imagine a malicious actor takes that over.
They could literally put code in that’s bad and put it out into your product and release it to all your customers without you knowing.
And it’s happened a bunch of times and we’re not protecting these systems very well. And so, I’m talking about it.
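The pipeline flow described here, build, test, promote from environment to environment, then out to production, can be sketched in a few lines of Python. This is a toy illustration only, not any real CI system; the environment names and the "test" check are invented:

```python
# Toy sketch of the environment-to-environment promotion a CI/CD pipeline
# automates. Environment names and the "test" are invented for illustration.

ENVIRONMENTS = ["development", "staging", "production"]

def run_tests(artifact: str, env: str) -> bool:
    """Stand-in for the pipeline's automated test suite."""
    return artifact.endswith(".tar.gz")  # toy check: is it a packaged build?

def promote(artifact: str):
    """Deploy the artifact environment by environment, stopping on failure."""
    deployed_to = None
    for env in ENVIRONMENTS:
        if not run_tests(artifact, env):
            print(f"{artifact}: tests failed in {env}, promotion stopped")
            break
        deployed_to = env
        print(f"{artifact}: deployed to {env}")
    return deployed_to

promote("release-1.2.3.tar.gz")  # passes each stage and reaches production
```

Whatever account controls this loop can put anything into production, which is why the build server deserves the same protection as production itself.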
Anyway, great to have you here, Tanya. Before we kick off, let’s thank this week’s wonderful sponsors: Meter, CoreView, and Vanta.
We’ll be hearing more about them later on in the podcast.
This week on Smashing Security, we won’t be talking about how hackers have breached travel site Booking.com, stealing names, addresses, phone numbers, and information shared with hotels.
You’ll hear no discussion of how Rockstar Games, the makers of Grand Theft Auto, have been hacked for the second time in 3 years.
And we won’t even mention how Meta is blocking lawyers from running ads on Facebook and Instagram to recruit clients who say that they’ve been harmed by social media.
So, Tanya, what are you going to be talking about this week?
Well, we’ve got time now to talk about one of today’s sponsors, Vanta. So, what keeps you up at 2 o’clock in the morning?
It automates all of that tedious manual compliance work so you can stop drowning in spreadsheets, chasing audit evidence, and filling out questionnaire after questionnaire.
It also uses AI to streamline evidence collection and flag risks. It automates compliance for SOC 2, ISO 27001, HIPAA, GDPR, and more.
It’s a warm, Spring morning, you’ve just paid €12 for a cappuccino, and you’re standing in Piazza San Marco watching the pigeons do their thing.
And what you don’t realise is, while you’re there in that beautiful setting, that somewhere on a dark Telegram channel, a hacking group is claiming that they could, at the press of a button, send water flooding across the very stones that you are standing on.
Which would of course solve the pigeon problem in Venice, at least temporarily. Now, Tanya, have you ever been to Venice? Does this make you want to go?
Well, a hacking group called the Infrastructure Destruction Squad announced in early April that they had broken into the hydraulic pump system that protects Piazza San Marco in Venice from its notorious high tides.
They said that they accessed the system’s control interface on the 26th of March.
They spent about 10 days quietly poking around, having a little rummage, and then on the 7th of April, they began what they called the disclosure phase.
And the disclosure phase, that’s hacker speak for bragging about it on Telegram. Right?
They were sharing screenshots of control panels and valve states and system layouts, and then they offered to sell full root access to one of Italy’s most iconic pieces of critical infrastructure.
Or if you’re in Venice, round about 50 cappuccinos.
When you hear $600 to access flood defense infrastructure, is that a surprising number to you, or is it just depressingly familiar for critical systems security?
What’s your feeling?
So I was like, well, I mean, maybe what they’re paying for is the convenience of it being in an Excel spreadsheet instead of having to scrape it.
But I feel like $600 seems like they don’t actually have access and they’re just a kid in a basement being like, whoa, $600, that would be amazing. We could have 50 cappuccinos.
And their Telegram post, which was written in Chinese— I don’t speak Chinese, I don’t read Chinese, but thankfully the internet can do all that for me.
This is what it was saying in English. It said, yes, you conducted new checks after the attack in late March. Yes, equipment tests came back positive after Easter.
In other words, they were tracking the remediation efforts being made by the organisation trying to clean up afterwards.
They were doing this in real time while Telegram posts were being written about it.
And they continued, but what you haven’t understood is that we have refused to completely shut down the flood defense system.
So they’re trying to make Venice basically say, oh, thank you very much. That’s very good of you. We’re very grateful. They said, we are not here to destroy you.
We are simply here to deliver a message. We can do it and we are still inside your network. ‘No tests conducted by your security teams can drive us away.
No system updates can expel us. We’ve been here for months and will remain here for months to come.’ Which is fairly aggressive, kind of spooky talk, isn’t it?
They said, ‘Any newspaper that disseminates this news without understanding the truth, prepare for a devastating attack.’
I mean, to be honest, at this point, I’m beginning to think this is most likely a 14-year-old. Yeah, there’s a lot of bravado going on here, isn’t there?
But to recap, these hackers broke in, are refusing to leave, and are threatening journalists, yet they’re only charging $600 for the privilege of having access yourself.
So you could imagine if someone had a problem with Venice. I don’t know, maybe you were in charge of IT at a rival European tourist destination.
Maybe if you thought, “Oh, Venice has beaten us once again with all of their gondoliers and cornettos.
If only we could access their flood defence system, and basically when that next high tide comes, we could ensure that they get flooded.” So you could imagine there would be people who’d think, “Oh yeah, $600, that’s gonna be worth it for us.” I can’t really see someone being that interested in it.
But I mean, but there are— now, this may come as a shock to you as a Canadian, but there are countries— I’m not going to name any countries, particularly to you, a Canadian— but there are countries which are perhaps a little bit more interested sometimes, some elements of them, in destruction.
I’m just saying it’s possible. But of course, lots of hacktivist groups may be interested. And look, a lot of the early malware which we saw was purely destructive.
It would wipe drives or delete files. You know, there was no point to it. There was no financial incentive. It was about just being mindless, really, in a way.
I need people to think I’m powerful.
And then hopefully that sort of just wears off when we mature and we’re like, actually, I could just achieve things and be awesome and I could prove I’m amazing by actually doing positive, good contributions to the world rather than negative ones.
But I feel like sometimes people get lost, and maybe they don’t see that there are good things that they could do to prove how awesome they are rather than bad things.
Maybe there’s something missing in their lives.
Think about it though, they’re not finding this purpose in their life, this thing that brings them joy, and they’re angry. And so they’re taking it out on people.
And I feel like if we could find a way— when we do the Pick of the Week, we’re gonna talk a little bit about maybe this, but I feel like you’re really onto something there, Graham.
I’ve said things like this before where I’m just like, you know, why are people doing this?
Maybe we need to find a focus to give them where they could show their brilliance, show their determination and be successful, but in a positive way.
In your world, when someone says they’ve got that kind of persistent access, do you take that seriously? Is that a technical claim, do you think, or is that just bravado?
There was an incident a few years ago where I remember the malicious actor was posting images of the Slack channel that the incident responders and security team were using.
So they could actually see the Slack channel and the discussions of the security incident, and then they were posting it to Twitter, mocking them, which made me feel so bad for that team.
And this is why we need to have a way to talk to each other that I call out-of-band: a different, separate way.
So maybe there’s a Signal chat where you talk or Telegram if that’s your jam and you have this separate space where you can discuss things and where you can double-check things.
I think it was the LulzSec hacking gang. The police in the States and the police in the UK set up a conference call to discuss this particular hacking group.
And one of the participants in that call, a British police officer, was accessing the call from his private email account, or he had forwarded the login details because he had to connect late in the evening.
What he didn’t know was that a member of that particular hacking group had hacked his personal email, and they were actually able to tune in to the conference call and hear the police discussing the investigation into them.
So, these things can really badly backfire.
When I teach software developers, I have this little section about what a security incident is, what it looks like, how you should call the security team, and what not to do.
Because I’ve had so many software developers attempt to help me, and always from a good place, just to be clear, then ruining the chain of custody, effing up all my evidence.
You know, “Don’t worry, I erased it.” I was like, oh my God.
Yeah, I feel like the security team needs to communicate better to the entire rest of the organization, the processes that they should follow so that if there is an emergency, everyone knows what to do because a helpful person can sometimes completely ruin everything.
This is all about the physical world of pumps and valves and sensors. This means that when it goes wrong, it’s not your data that’s being leaked.
It could mean water’s going everywhere.
I know your world is very much the software side of things, Tanya, but OT security and application security, they are converging in some ways, aren’t they?
And I would say in this case, it sounds like it’s critical infrastructure because at first when you were describing it, you’re like, oh, you’ll get your feet wet.
And I was like, whatever, I’m British Columbian, we’re always wet. It would actually flood, people could be harmed and stuff. It becomes critical infrastructure, if that makes sense.
And so software runs literally everything.
You know, the important thing was that they need to always work. And this was long before people were thinking about connecting them to anything.
But once they were networked for convenience, maybe, or remote maintenance, suddenly this decades-old infrastructure is perhaps accessible via the public internet and may have very weak security.
In December I was working with this company that does embedded medical devices and then they do operating systems and emergency room systems, all of the devices that are in there, they write the software for that.
And obviously, the security is pretty important. Safety and security and privacy, pretty darn important, right? And we worked together, and it was a really cool project.
But I feel like a lot of organizations, they’re like, oh, well, we’re not on the internet, so it’s not that important.
So when we did a threat model of all the things that could happen and how easy it would be, they’re really shocked.
And hospitals get hit with ransomware all the time, but if you— it’d be so easy to hit a hospital physically.
If you’re lucky, they have that part of the conversation. But do you think the software world is actually learning that lesson to integrate security earlier on in the process?
So everyone right now is using Claude, which we’re going to talk about in a bit, and Copilot, et cetera, to write code for them.
And the quality of code coming out of those is not very good right now. And I am seeing it improve, but not the speed that I dream of.
Graham, it sounds weird, but I want to be put out of a job, right? Like, I would like to not need to teach secure coding anymore because we’ve got this. That’s what I want.
And the AI is not doing it for us.
So what’s happening now is that we have developers with varying levels of how to create secure software and varying level of prioritization on that.
And then now they’re being told develop software at 10 times the speed or we’re going to fire you and hire someone else.
So, they’re using the AI, the AI is changing tons and tons of things they don’t fully understand. They don’t have time to review it. They’re just pressing the commit button.
And that is my fear for new software. For old software, it’s that, oh, it’s always worked. Why would we update it? We’d have to re-architect it to fix that.
We don’t have money for that. We’ll just leave it. A lot of legacy is in a bad shape.
And by legacy, I mean software that’s already out in production that’s been out one or more years.
They might just want to open the valves and cause mayhem that way.
You hand them a physical address, a floor plan, they handle everything.
They sort out the ISP, they design and deploy the network, they turn up on the site, they rack their own hardware.
Kits that they’ve actually designed themselves, not just rebranded someone else’s gubbins.
Full control without any of the soul-destroying groundwork.
Tanya, what story have you got for us this week?
And then we also usually have something called an ignore file, which means don’t put all of those files up there. These are the just-for-us files.
And neither of those things happened. And so then they published this file, it’s called a source map file, and it can be opened like a present, and inside was the code.
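For the curious: a source map is just a JSON file, and when a build embeds the original code in its `sourcesContent` field, “opening the present” takes only a few lines. A minimal sketch, using an invented sample map rather than Anthropic’s actual file:

```python
import json

# A source map is plain JSON. If the build embedded the original code in
# "sourcesContent", anyone holding the .map file can read it straight out.
# The sample below is invented for illustration.
sample_map = json.dumps({
    "version": 3,
    "sources": ["src/app.ts"],                     # original file names
    "sourcesContent": ["const secret = 'oops';"],  # original source, verbatim
    "mappings": "AAAA",
})

def extract_sources(map_text: str) -> dict:
    """Pair each original file name with its embedded source, if present."""
    source_map = json.loads(map_text)
    names = source_map.get("sources", [])
    contents = source_map.get("sourcesContent") or []
    return dict(zip(names, contents))

for path, code in extract_sources(sample_map).items():
    print(path, "->", code)  # src/app.ts -> const secret = 'oops';
```

This is why build pipelines should either strip `.map` files from releases or omit `sourcesContent` entirely.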
I can’t imagine being the software developer that did that because they’re probably pretty upset with themselves. So it wasn’t a hack, it was human error.
And the reason why this is a really big deal is, so first of all, they spilled their intellectual property.
And as a person who has made most of her income off of her intellectual property her whole life, ’cause when I was younger, I was a professional musician, then I was a software developer writing code, then I wrote books.
I did all of these things, right? All of that’s intellectual property. So that’s one thing.
But the other thing is that then the internet got ahold of it and analyzed it for vulnerabilities and started writing exploits for it so that they could take advantage of Claude.
And so people can dissect all of its defenses and come up with better attacks. And all of the other AI companies now are stealing it.
And basically, so someone, rather than seeing that and reporting it immediately to Anthropic, the person’s like, “you know what I’m gonna do?
I’m gonna copy it to my own GitHub repo and start distributing it.” Which makes me sad. And I know that it’s a cool thing to find. I would be really excited too, but—
But let’s not forget what Anthropic and the other AI companies have been doing for years, which is they’ve been stealing everyone else’s content without permission in order to train their AI models, right?
So isn’t this just actually a case of them getting their just deserts? They have spilt their code and now it’s in the hands of everybody.
And the theory is that it’s because Claude and all the other AIs just give you all the answers.
When you go and you Google something now, it’ll just tell you the smart thing that Tanya said, but it doesn’t say Tanya said it.
And I used to write articles for them and they’d get a couple hundred thousand reads, and now they’re getting 2,000 reads.
It’s that different because the AI reads it and then now it knows everything Tanya just spent weeks researching to write that article.
And so this is a huge problem for those of us that do research and release research because immediately it’s taken from us. It sucks.
And what Mythos does, it’s quite dangerous. So it finds vulnerabilities in applications and chains them together into exploits.
And it has been finding novel kinds of vulnerabilities that humans haven’t been able to find before. And it’s been finding them so terribly fast. It’s absolutely terrifying.
So for instance, they found, I can’t even remember just how many bugs in OpenSSL, but Heartbleed level terrifying bugs.
For those of you that don’t know, Heartbleed was a bug found in OpenSSL where you could just send a specially crafted heartbeat request and it would just tell you all the secret sauce.
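The flaw boiled down to trusting a length field supplied by the client. The real bug was a missing bounds check in OpenSSL’s C heartbeat code; the Python below only mimics the effect with a simulated memory buffer, as a toy illustration with invented "secrets":

```python
# Toy re-creation of the Heartbleed logic: the server echoes back
# "claimed_length" bytes without checking it against the actual payload,
# so it reads past the payload into adjacent (simulated) memory.

ADJACENT_MEMORY = b"secret-key=hunter2;session=abc123"  # invented secrets

def vulnerable_heartbeat(payload: bytes, claimed_length: int) -> bytes:
    buffer = payload + ADJACENT_MEMORY       # payload sits next to other data
    return buffer[:claimed_length]           # BUG: no bounds check on length

def fixed_heartbeat(payload: bytes, claimed_length: int) -> bytes:
    if claimed_length > len(payload):        # the fix: validate the length
        return b""                           # drop malformed requests
    return payload[:claimed_length]

# The attacker sends 4 bytes but claims 30; the reply leaks server memory.
print(vulnerable_heartbeat(b"PING", 30))  # b'PINGsecret-key=hunter2;session'
print(fixed_heartbeat(b"PING", 30))       # b''
```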
But they’ve openly admitted that they can’t fully control it or understand it. And I would really not want to see Mythos on the internet.
They said it was a release packaging issue rather than a security breach. And they’re saying, oh, it doesn’t matter because no customer data or credentials were involved.
And technically that’s right. It’s their code. It’s not somebody else’s. But, you know, they were leaking their source code. They were careless.
And meanwhile, they’ve just publicized this new technology they’ve built called Mythos, which can do something which could be very useful for many people in terms of securing their systems, because it can find vulnerabilities and you could find flaws in software and you could hopefully patch them and fix those bugs.
But if that fell into the wrong hands, if they had a release packaging issue and they spilt it out like they’ve just spilt out something, that’s horrendous because anybody could use something like Mythos to hack all kinds of systems and software, couldn’t they?
So it needs to be webby. And it will go and try to exploit a list of known CVEs, Common Vulnerabilities and Exposures.
So vulnerabilities that are publicly known in software that you can buy.
So not custom software, but, you know, I have version XYZ of Apache web server and it’s known to have that vulnerability.
And so you point Metasploit at it, and if it has that vulnerability, it’ll go and it’ll open up a hole there and exploit it.
And in the wrong hands, you can use that to hurt people just the same as if you give a scalpel to someone, they can cut themselves, they can cut someone else.
But this tool, it’s kind of handing someone an atomic bomb.
And so I feel, you know, for instance, let’s say a big company like Microsoft or Netflix or whatever, some big software company, they get a license to use it internally.
They find all their own bugs. They have time because they’re not publicly exposing, you know, no one else knows but them and they’re fixing it.
It would be the ultimate pen test, right? That could be great, except for what if one of those employees then sells those vulnerabilities to a malicious actor? You know what I mean?
Or they take it and then they point it at something they’re not supposed to, right?
Because it’s so powerful and it’s so fast and it’s finding apparently very novel, unique things that humans haven’t been able to see before. It’s quite disconcerting, or I think so.
I mean, I believe if you look at the HackerOne league table right now, the number one bug hunter is an AI-powered bug hunting solution at the moment.
The thing which I think changes the story a bit: this isn’t even the first time Anthropic has had a leak like this.
I mean, earlier versions of the same package in 2025 also shipped with full source maps before being pulled. So this isn’t a one-off slip.
It seems to almost be a pattern which has happened. And who’s to say it couldn’t happen again? And maybe it could happen with Mythos.
I don’t mean to sound insulting, but I can’t believe that they could make the same mistake again, right? Because that would be so painful the first time.
Do we just have to sort of shrug and say, oh well, that’s life, these things happen?
There’s this file you can create called .gitignore, where you list all of these files to say, basically, no matter what I say, don’t upload these.
So that’s step one is that we want to have the ignore file things set up properly. And then we always know we’re not supposed to have debug mode in production, right?
So, we know that we should have on the build server these settings turned off.
And so basically this is like security misconfiguration happening twice, which is on the new OWASP Top 10 2025, as a top risk to web apps.
Basically, they didn’t configure the build server correctly and then they didn’t configure Git correctly. And then they don’t have a process or a checklist to check that.
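That process or checklist can start as small as a script run before anything is published, refusing to ship if source maps or debug settings slipped into the release directory. A minimal sketch; the forbidden suffixes, settings, and file names are invented, not any vendor’s real layout:

```python
import tempfile
from pathlib import Path

FORBIDDEN_SUFFIXES = {".map"}          # source maps must never ship
FORBIDDEN_SETTINGS = {"DEBUG=true"}    # debug mode must be off in production

def check_release(release_dir: Path) -> list:
    """Return a list of problems; an empty list means the release is clean."""
    problems = []
    for path in release_dir.rglob("*"):
        if path.suffix in FORBIDDEN_SUFFIXES:
            problems.append(f"source map in release: {path.name}")
        if path.name == ".env" and any(
            setting in path.read_text() for setting in FORBIDDEN_SETTINGS
        ):
            problems.append(f"debug setting enabled in {path.name}")
    return problems

# Demo: a release directory with a leaked source map gets flagged.
with tempfile.TemporaryDirectory() as tmp:
    dist = Path(tmp)
    (dist / "app.js").write_text("console.log('hi')")
    (dist / "app.js.map").write_text("{}")
    print(check_release(dist))  # ['source map in release: app.js.map']
```

Wiring a check like this into the pipeline as a required step is exactly the safeguard that would have caught the leak before publication.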
So I would love to see those three things. I teach supply chain security.
I’m expanding and expanding that class all the time because there’s more and more that we’re doing wrong there.
And I feel like if organizations had a checklist and they had, you know, a hardening of these things that they’re using that are part of their supply chain, like we talked about earlier, if we properly hardened our build server.
So, the CI/CD and build server, those are usually synonymous. They’re usually the same thing.
Or you have a build server and then you have a pipeline and you connect the two, but usually, it’s all one big thing.
And so, if we were properly hardening that, if we’re checking it at least once a year, if we analyzed who, you know, there’s an alert. Oh my gosh, there’s a new administrator.
So, it is a human error, but the human error happened because we didn’t have processes to protect that human from making that error. And I don’t like to blame Alice or Bob.
I like to look at, no, but did we train Alice or Bob on this? Did we? Right? Did we have a safeguard to stop them from making this error? Did we have a policy?
Or do we just assume they knew? Because when we assume, we’re let down a lot.
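The “oh my gosh, there’s a new administrator” alert described above can be sketched as a simple baseline comparison: keep an approved list, and flag anything that appears outside it. The account names here are invented:

```python
# Sketch of an alert for unexpected build-server administrators: compare
# the current admin list against a known-good baseline. Names are invented.

BASELINE_ADMINS = {"alice", "bob"}

def audit_admins(current_admins: set) -> set:
    """Return any admin accounts not in the approved baseline."""
    return current_admins - BASELINE_ADMINS

unexpected = audit_admins({"alice", "bob", "mallory"})
if unexpected:
    # prints: ALERT: unapproved admin(s) on build server: ['mallory']
    print(f"ALERT: unapproved admin(s) on build server: {sorted(unexpected)}")
```

Run on a schedule, and on every configuration change, this is the kind of minimal monitoring very few organisations currently have on their CI.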
I’m going to give you a little bit of silver lining on the cloud, right? Because this has all been a bit depressing.
Do you get any comfort at all from the thought that the people building these tools are still fundamentally human and therefore fundamentally fallible?
Thank goodness it’s not the AI, right? It’s human error. Hey, yes, us humans, haven’t we done great? Because we’ve really cocked up on this occasion by leaking the source code.
I think we should feel good about that rather than it being an AI which screwed up, which surely is only a short way away.
But there’s software dark factories being built now where you don’t have a single software developer anymore, and literally every single part is only written by the AI.
And wouldn’t you think the AI company might be most likely to do something like that? I don’t know.
If someone broke into your Microsoft 365 tenant right now and quietly disabled your conditional access policies, grabbed global admin rights, turned off Defender, would you even notice?
One compromised account and an attacker can quietly reshape your entire tenant.
No alerts, no noise, just someone systematically dismantling your defenses while you’re none the wiser.
You could be rebuilding your tenant settings from scratch for weeks.
It’s actually a really practical read.
It covers how these attacks unfold step by step, where your existing tools are leaving gaps, and what it actually takes to recover control once it’s been lost.
You can learn more at smashingsecurity.com/coreview and maybe do it before someone else does something bad to your organization.
Could be a funny story, a book that they’ve read, a TV show, a movie, a record, a podcast, a website, or an app, whatever they wish.
It doesn’t have to be security related necessarily. Well, my Pick of the Week this week is actually security related.
In fact, my Pick of the Week this week, and this is gonna get very, very meta, not in a Mark Zuckerberg kind of way, because my pick of the week this week is actually about the Smashing Security podcast, because I’ve been busy doing a bit of vibe coding.
I know, very dangerous. I’ve been exploring the world of podcast transcripts, ladies and gentlemen.
I think it must have been about 9 years ago when I first got an email from a listener saying, why don’t you have a transcript? I’d much rather read than listen to you.
And I said, well, you know, it’s very hard putting together a transcript. I’d be up all hours typing my nonsensical words into a word processor.
Or I’d get some computer system to try and transcribe me into written English. And, you know, the quality is going to be diabolical anyway.
After quite a lot of work involving largely pipe cleaners, pots of treacle, and bicycle chains, I have put together a Heath Robinson-type solution which now has, I believe, acceptable transcripts for this show.
Now, my podcast host does create automated transcripts.
So if you go into your favorite podcast app at the moment and look at transcripts, if it supports that, you will see a very, very bad transcript of the show.
My intention is to replace all of those. And if you go to my website or to the Smashing Security website right now, you will find a much better transcript.
And in fact, it will even display the words as they are being said, so you can read as you are listening. I think it works reasonably well most of the time.
Sometimes it makes a mistake, for goodness’ sake. Yes, I know. Sometimes it will mix up my name with someone else’s or something will go wrong.
But most of the time, I think it’s pretty darn impressive. So my pick of the week, rather self-referentially, is the new transcripts on the Smashing Security podcast.
Go to smashingsecurity.com or go and check out my articles on grahamcluley.com.
And you will be able to see the transcripts in all of their glory there, and tell me if it doesn’t work.
And then I’ll have to try and work out what the code’s doing and try and fix it. Cool. That is my pick of the week.
And it is about three psychologists that are friends that are all grieving because one of the psychologists, his wife died.
And it shows how he grieves, how his daughter grieves, how the two other psychologists grieve. And they teach all these different psychology lessons essentially in the show.
And last year I did a talk about the psychology of bad code and applying economic behavior types of concepts to our security programs.
And how if we do that, we can get better results. ‘Cause just yelling at software developers actually doesn’t improve code quality at all, as it turns out.
Just being mean to them doesn’t work. We’ve tried that for two decades. So, I was like, what if instead we did something different?
And so also so that I could get better results, right? If someone blows up at me, it’s like, why did they blow up at me? And often it’s not because of something I did.
It’s because they feel insecure or afraid or whatever.
And so most shows aren’t very educational, Graham. Most of them are kind of garbage.
I think if people are curious about, you know, why people do the things they do, they might like this.
And so I think they call it a drama comedy, a “dramedy”, which they literally put on Apple TV.
I’m sure lots of our listeners would love to find out what you’re up to and follow you online or listen to your podcast, of course. What’s the best way to do that?
You’ll get the episode of the podcast and you’ll get at least one meme. And memes are important, Graham.
You can find me, Graham Cluley, on LinkedIn, or you can follow Smashing Security on Reddit or Bluesky or Mastodon. And don’t forget to ensure you never miss another episode.
Follow Smashing Security in your favorite podcast app, such as Apple Podcasts, Spotify, and Pocket Casts. For episode show notes, sponsorship info, guest lists, and the entire back catalog of 463 episodes, check out smashingsecurity.com.
Until next time, cheerio. Bye-bye.
And of course, to all of our fabulous supporters via Patreon. This week, we’re pulling out of the hat Watson Burney.
Sounds like a 19th century detective who probably solves crimes exclusively by monocle. Excellent name.
Now, I strongly suspect he’s a Patreon onboarding form that’s gained sentience and signed up.
Kenneth Ingham, Dan H, just the letter H because apparently everything after H is classified. Yuri Taraday, who has tremendous energy. We’re very glad that he’s on our side.
Ragnar Carlsen, of course, always arriving by longship. We’re not going to argue with them. J, just the letter J, not to be confused with Matt H or John W or Dan H.
This is just the letter J on its own, unadorned, magnificent. Ted Wilkinson sounds like a cricketer.
Govinda Charya, Travis West, who sounds like he should be presenting a true crime podcast of his own. Thank you all so much. You are wonderful, every single one of you.
And those are just a few of the folks who are supporting us via Smashing Security Plus, which means that they get episodes ad-free earlier than the general public and can be pulled out at random to have their names mocked at the end of the show.
If you would like that, all you got to do is join us at Smashing Security Plus. Head over to smashingsecurity.com/plus for all of the details and you too can become a patron.
But there are also ways you can support the show which don’t involve spending a penny.
You can like, subscribe, leave a 5-star review wherever you listen and tell your friends about the show. Spread the word. Every little bit helps.
And it really does make all the effort worthwhile. Well, thank you so much for listening. Well done for lasting this long into the show. Not everyone manages this. There’s too much.
You deserve a little badge or a pat on the back. But until next time, take it easy. Take care. Stay secure, my friends. Toodaloo and bye bye.
Host: Graham Cluley
Guest: Tanya Janca
Sponsored by:
- Meter – Network infrastructure for the enterprise. Get a free personalised demo.
- Vanta – Expand the scope of your security program with market-leading compliance automation… while saving time and money. Smashing Security listeners get $1000 off!
- Coreview – Download “Total Tenant Takeover”, a white paper about the Microsoft 365 Disaster No One Is Ready For.
Support the show:
You can help the podcast by telling your friends and colleagues about “Smashing Security”, and leaving us a review on Apple Podcasts or Podchaser.
Join Smashing Security PLUS for ad-free episodes and our early-release feed!
Follow us:
Follow the show on Bluesky, or join us on the Smashing Security subreddit, or visit our website for more episodes.
Thanks:
Theme tune: “Vinyl Memories” by Mikael Manvelyan.
Assorted sound effects: AudioBlocks.

