Episode 14: AI Risks, Threat Modeling, and The Future of Vibe Coding

Justin:

Welcome to Distilled Security Podcast episode number 14. My name is Justin. I'm here with Rick and Joe, and we have a new person on here, Jon Zeolla. Welcome.

Jon:

Thanks for having me.

Justin:

Thanks for coming on. Yeah, of course. So, you're around the Pittsburgh area. A lot of people know you. I heard you probably have the most certifications out of anybody in the Pittsburgh cybersecurity scene.

Jon:

Yeah, more than I'd like to admit.

Justin:

But if you wouldn't mind, why don't you give a little background to yourself? You know, how'd you get into cybersecurity? What are you working on right now?

Jon:

Yeah, yeah, for sure. So I got interested in cybersecurity as a kid, I think most people do, with video games and hacking video games and things like that, trying to abuse them. I learned how to program, I applied that for a number of years, went to college.

Justin:

What was your first programming language?

Jon:

What was my first, like the language?

Justin:

Yeah yeah.

Jon:

QBasic, probably QBasic. I did a bunch of C++, Java, VB6, .NET. Yeah, yeah, exactly. Then lately I do more Go and Python and things like that. So yeah, I went to school, got out, started working at a couple of big local companies, PNC Bank, American Eagle Outfitters, doing cybersecurity stuff.

Jon:

And most recently, I've got a consulting company I work with Joe called Seiso, and then I'm building a bunch of really cool AI tools.

Justin:

Nice. One of the things we want to talk about today is AI.

Jon:

I love AI. Yep. I'm super excited about it. Since about 2015, I've been working on machine learning and AI.

Jon:

I really saw the potential with analyzing large piles of data. So I got into that with a project called Apache Metron. I don't know if you've ever heard of that. It's kind of like a SOC machine learning platform that was open source. And I built that for a number of years.

Jon:

And then fast forward to recently, I'm working more on AI-generated code outputs. So how do you make sure, how do you reduce the slop problem, where it creates outputs that look convincing but really aren't very good. That's mostly what I'm working on now.

Justin:

But it's basically all of AI right now.

Jon:

There's a lot of techniques that you can do to make it better, but the default case is pretty bad.

Joe:

Absolutely. Hey, go ahead and plug the website people can go to to see what you're working on.

Jon:

Oh yeah, yeah, so Zenable. So zenable.io is the website. We've got a free PR review bot if you use GitHub. We have a free MCP server if you're into that. And then we've got a number of premium offerings you can go to the pricing page to check out if you're interested.

Justin:

Yeah.

Joe:

So they say it takes, what, ten years to become expert level at stuff, and you just said like fifteen. Well beyond there.

Rick:

Yeah, right. Exactly. And, you

Justin:

know, I've known Jon for years, and one of the things is that no matter what the hot topic is, he gets himself to be an expert at it. Like, we did a blockchain conversation.

Jon:

I forgot about cloud native. I still love cloud native. I'm a CNCF ambassador as well. So I work with the Kubernetes ecosystem a lot and I really do enjoy that. I see AI as really like an add-on to that existing ecosystem.

Joe:

So you're saying Jon gets his ten years of experience in about two and a half years?

Justin:

He locks himself in his basement away from everybody. He's like, all right, I'm going to learn this.

Jon:

Ten thousand hours. Many shiny things.

Joe:

That's why with his business cards, he needs two of them together to get all the certifications on.

Jon:

Oh, yeah.

Justin:

It should be one of those, like, fold-out ones where you have the pictures.

Rick:

And the back of it would just be the whole ATT&CK matrix. Yeah.

Justin:

Here's my phone number if you need me.

Rick:

That's funny. Yeah.

Justin:

Great. Well, yeah, we have a good topic here. I mean, we wanted to bring you on specifically for this because you have direct experience with it. The big topic we wanted to talk about is AI. But to start off, I think it'd be good for us to cover: why is AI unique?

Justin:

What are the unique threats or risks for an organization adopting AI, from generative AI with chat prompts to recording call summaries and things of that nature? I mean, it's all over the place. It'll probably catch on like the internet did at some point. But yeah, there's just a whole bunch. I sent you guys that thing from NTT.

Justin:

They did a survey on how the business side is so excited to get AI into everything. And from a security perspective, we're all like,

Joe:

Oh, can we slow down? Well, here's some metrics from that.

Justin:

Yeah, yeah.

Joe:

67% of CEOs plan significant spend, yet only 45% of CISOs feel like they're on board with that.

Justin:

The other 55% don't know what they're spending it on.

Rick:

I'm sure their sample size is better than my anecdotal information, but I feel like with the CISOs that I interact with, the percentage that are comfortable is lower.

Joe:

Really? Yeah. So the overall takeaway is that leaders are running full steam into it and saying, hey, we're going to do this. And CISOs are not sure.

Jon:

Yeah. Yeah. So I guess maybe can we talk about that a little bit? What do we mean by not sure? Like, they don't even know how to manage the risks, or is it just the unknown unknowns?

Justin:

I take it as they don't even know the tooling offering and what the vendor is bringing, you know, as far as risk. They haven't done an assessment on it to even know. When you say, here's this big shiny tool, Salesforce is now incorporating AI into everything, it's like, what does that mean?

Rick:

I don't know if the article defines it this way or not, but I think the answer ties into your question, which is: what do we mean by uncomfortable? For the folks that I talk to, part of the discomfort is part of why AI is unique, in that it plays in so many spaces, right? So the finance team says, hey, I wanna do this with AI, and then the marketing team says, hey, we're thinking about this and this and that. And so it hits you from like 30 different places, and they're all kind of spawned from the business and often don't rely on much, if any, IT underpinnings to get going, right? So the speed can be super fast, great from a business perspective.

Rick:

Hey, it has this promise of helping things, great from a business perspective. Hey, what are the real impediments to get going? Well, you need a P card, right? Or an X or whatever and you can get going. And then you have all these CISOs going, wait, wait, wait.

Rick:

How do we all of a sudden have 80 projects from the business? We just got our manifesto for AI, our policy for AI, figured out.

Joe:

Not only that. You've got me thinking, all right, Jon, in the mindset of your nervous CISO and GRC team: they just spent all this time doing all these reviews for third-party vendors. And now it's not even shadow IT. It's the actual IT they already approved.

Justin:

Yes.

Joe:

They're integrating this. And the leaders are like, yes, we want this integrated into the tools we use. Maybe a year ago, you would find companies saying, no, tell us if you're putting AI in. Now we're hearing companies say, we don't even want your product unless AI is in it. So the CISOs, maybe they should be nervous, because it's not even shadow IT, it's legit IT, and they have no control.

Joe:

Yeah. And they don't know what's happening with it.

Justin:

Well, and I just saw Slack. They came out last week, this week, something like that, saying, oh yeah, we're offering more AI services on this paid tier. They're now shoving AI more into it. You could already be paying for it, and it's like, here's more AI.

Justin:

You know, same thing.

Rick:

I love that you talked about the vendor issue, though, because there's probably a pattern there that is super similar: if you have an approved vendor from a GRC perspective, and all of a sudden the business starts using them for this totally different use case, you go, oh wait, no, not like that. They were approved, but not like that. And I think for the IT infrastructure or the tooling that you have, it's kind of the same thing, right?

Rick:

Potentially all these different use cases or patterns that are gonna be put in place that haven't had the weight of proactive thought applied to them to say, hey, how scary is this? So anyway, I think that's one of the unique things is it comes from so many places so fast.

Jon:

Yeah. I think the other interesting thing is that the outputs that you get from AI tools are just non-deterministic. If it's code, you kind of can put controls around it; you know you've done static analysis, dynamic analysis, whatever, and you mitigate your risks that way.

Joe:

Yeah, it's no worse than an entry-level employee doing the coding, right?

Jon:

Right, yeah, exactly. But it's just so stochastic that you don't always know if it's right. It's very confident, and it's trained to be confident, right? But you don't always know if it's right. So correctness becomes this really important characteristic of these systems.

Jon:

But the guardrails, I call them guardrails, the controls that you would put around these systems, are very different than the ones you'd put around traditional systems. If you're doing third-party risk management and you're asking a vendor how they're securing their AI systems, you just have to remember that it's going to be a different dynamic and a different set of controls than everything we've used before.

Joe:

Walk us through that a little bit. So let's use like a scenario. You're a company. You're using a third party. They integrate AI.

Joe:

But the AI they're integrating, they're maybe just doing API calls to ChatGPT or OpenAI. Would that be a case?

Jon:

Yeah, that's probably the most common case. The ChatGPT wrapper, it's very common.

Joe:

So what are the things to worry about there? Because they have no control over what OpenAI is doing, right?

Jon:

Right, exactly. I mean, you get contracts that you negotiate.

Justin:

Yeah, and you can not allow your data to be part of the training model and everything.

Jon:

But that's contractual.

Rick:

I was going to say, the commercial terms are going to be super specific to the vendor, though, too.

Jon:

Yeah, and that's just contractual, right? They're just saying that that's the case. It's a little bit tricky. I think one of the big problems with AI is the data sprawl. We're just taking data and putting it in all these different places. And so for me as a vendor, to contract with a company and say, for sure, I'm not going to do X, Y, and Z with your data, I'm going to have to put those controls in all the places that your data goes. And so from the perspective of a vendor like ChatGPT, I think it's very easy for them to, not intentionally, but accidentally, fail to follow a control that they contractually told you they would follow, like not training on your data.

Joe:

And how would you know?

Rick:

Well, how would you know is one. But two, also, what's the remedy? So fast forward three years and there's this big event and some vendor you're using, Oh, sorry, we've been using your data for three years to do this thing that we promised we wouldn't.

Justin:

Well, the FTC goes after people like that, not necessarily on the AI front, but when companies start marketing to people based on, like, medications or something like that, they've gone after them.

Jon:

Yeah. And how do they fix it? How do you even find out?

Joe:

Rick, you're asking about it a little bit differently. You're saying, sure, somebody will go after them after the fact. But now...

Jon:

Maybe not.

Rick:

What do you do now? Like, one, if you're so embedded in this tool already, are you really gonna throw them out?

Justin:

This happens all the time with security issues, right?

Rick:

Stickiness.

Jon:

What can you actually do?

Rick:

And the data thing is so insidious. It's almost like detecting a breach in some ways, but it's a super long tail. Someone's been using your data in a way that they shouldn't. What do you do with that?

Jon:

I mean, on the defensive side, a lot of companies are approaching it like supply chain security. So we make these attestations. We have provenance about where, say, this library came from. That's what we would do in supply chain security. They're applying the same thing to data.

Joe:

So define provenance for us?

Jon:

Yeah, provenance is where it came from. Did we get this directly from a person? They entered it at this point in time. We purchased this data. We found it online, we scraped it from this website at this timestamp, and here's a hash of it, whatever it is.

Jon:

You have some origin of it existing and of you consuming it into your stack. And so a lot of the AI vendors are starting to work on AI provenance so that they can say, here's where all the data came from. Right. So especially in those cases, first of all, they could prove maybe that they aren't training on your data, but they could also identify if they are accidentally training on your data, and then they can remove it. Sometimes you can remove it from the model.

Jon:

Usually you would train a new model that doesn't have that in the training set, which is extremely expensive. So really it would be like, the next version of this model won't have it, but this version is stuck. It's very difficult to actually remove data from a trained model because it's not just subtracting something.
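
A minimal sketch of the kind of provenance record Jon is describing, in Python; the field names and schema are illustrative assumptions, not any vendor's actual format:

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(content: bytes, source: str, acquisition: str) -> dict:
    """Build one provenance entry for a piece of training data."""
    return {
        "sha256": hashlib.sha256(content).hexdigest(),  # hash of the content itself
        "source": source,                                # e.g. a URL or "direct upload"
        "acquisition": acquisition,                      # "scraped", "purchased", "user-entered"
        "collected_at": datetime.now(timezone.utc).isoformat(),
    }

# Every item entering the training set gets an entry, so "did we ever ingest
# this customer's data?" becomes a hash lookup instead of guesswork.
record = provenance_record(b"some document text", "https://example.com/doc", "scraped")
print(json.dumps(record, indent=2))
```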

Rick:

Yeah. An interesting thought I just had based on the provenance conversation, and I doubt there's any law around this: public data is typically public data. Do whatever you want with it. Right?

Rick:

Breach data is in the public. Right? So how many AI models are gonna train off of breach data? And if you have a data breach, I think you might need to start to think about that.

Joe:

You mean they'd go to the dark web to get the data to train off of?

Justin:

Yes. Why wouldn't they?

Rick:

Are they? This is a question.

Jon:

I wouldn't be surprised, because one of the biggest problems with training new models is that we're out of data. Right? They've been trained on the whole Internet, essentially. So getting even more data, not just any data, but especially labeled data, is extremely valuable. So I wouldn't be surprised if they're going to a lot of unconventional locations. I don't know that specifically, but probably a lot of unconventional locations to get more information to augment what they have. And then the other thing that they're doing is making more synthetic data.

Jon:

So they're taking data they have, generating new data from it, which is fake but based on the schema of whatever it came from, and then training on that, which has all kinds of issues.

Joe:

Is that like that Michael Keaton movie, Multiplicity, where the copies of copies keep getting dumber?

Jon:

Yeah, yeah.

Joe:

Does that cause a real problem?

Jon:

Yeah, I mean, it could be. I think that's one of the things that's been debated.

Rick:

It ties back a little bit in terms of AI solutions, because you mentioned we're out of data, like the general models have been trained on everything. But the very specific models are intentionally trained on very specific, high-quality data, because you get much better outputs out of it. And so I would think that if there was a data breach, and it was public or pseudo-public data that's intended to be private, then theoretically you get a competitive advantage by consuming that.

Justin:

I mean, at that point, it doesn't really matter anyway. Whether AI is consuming it or not, it's already public.

Rick:

Well, it depends on, I suppose, perspective or the risk models or what you have to deal with and manage towards.

Justin:

Yeah.

Rick:

Because...

Joe:

What I would worry... go ahead.

Justin:

No, no.

Joe:

What I would worry about is, that's somebody's IP. They still have a claim to that IP; just because it was breached or made available doesn't mean they don't. And then some model goes and trains off of it. And then I write a prompt and I say, help me design XYZ.

Joe:

And then I end up with information that's somebody else's IP, and I'm thinking, well, that looks really good. I'm going to use that and build out my company's processes.

Jon:

Right.

Rick:

Yeah. And by the way, I have a license to do it, because I bought the license from a vendor to use the thing.

Jon:

So those companies, Google, ChatGPT, Microsoft, etcetera, they all indemnify their users. So if you get something that's generated from one of their models and it is copyrighted information, they are indemnifying you. They will protect you in a copyright claim.

Justin:

Right. But they're getting sued for that right now. Disney is suing ChatGPT for training off all their IP for their models and everything. I was reading some of the lawyer claims. It was basically just like, you're taking our IP and using it.

Jon:

Yeah, right.

Justin:

Type of thing. And yeah, I mean, how do you prevent that? That's really hard when you've set your sights on, you know, getting all the data you can.

Jon:

Yeah. Especially because we have a lot of improperly tagged information. I don't know. Like, I use images from online for business purposes sometimes.

Jon:

And so I go to Google, I do the little dropdown, and it's like, allowed for public use. And I look at that sort of stuff. But who's to say it's not improperly tagged, and it actually is copyrighted, and I shouldn't be using it that way, and they just messed up?

Jon:

It's hard for me to know, because I don't actually get to see the information. They gave it to me in a page, right? I'm assuming that they have it correct, but they've got the same issue on the back end.

Rick:

Then getting back to whoever, assuming a copyrighted image was used, how would the content creator ever know? Unless you're consistently evaluating your own IP to be like, is this allowed for use in the public domain?

Joe:

And if I use AI to write a prompt that says, generate this image, and it generates an image, and it looks very much like this thing I never even knew was anywhere else...

Rick:

Right.

Joe:

How would I know? I thought this was original.

Rick:

Yeah. Well, and I think that comes to another thing that might be a unique risk of AI, which is speed, like speed itself, right? Things going faster, content being generated more quickly, typically a good thing. And again, this ties to the business use cases, right? People want to move faster, they want to do things faster.

Rick:

And it gets to a conversation we were having a little bit earlier in terms of replacing people, or displacing some element of people, or whatever. And you go, okay, well, if I can just get this system to write these documents for me as opposed to these people, well, that sounds great on paper. Let's just get rid of the people, have the system do it for much cheaper, and it's gonna be perfect, right? Doesn't need to be reviewed. And I guess my point is, I'm sure you can just rely on all the marketing language that tells you it's perfect all the time, right?

Jon:

Right, yeah.

Joe:

Because that's AI-generated too.

Jon:

Yeah, there's so many cases where that's an issue today. If you think about code generation, that's a perfect example. Historically we had developers: they write code, they work on a feature for maybe a couple of days, and then they open a pull request. And that gets peer reviewed by somebody else before it goes to production. So it's like two days of work, one hour of peer review and conversation, and then it ships.

Jon:

But when you're using AI and agents to generate the code, and you speed things up so much, that one hour becomes more than the time it took to write it in the first place. Now that becomes your biggest bottleneck. Where do you think the pressure goes? How do we speed up the peer review process? Right?

Jon:

And there are some things you can do: automated peer review, LLM peer review. But you still should, today, definitely today, have a human in the loop. But there's gonna be a lot of pressure to rip that out, because it's going to become the biggest bottleneck. Writing the code is so much faster. Let's get rid of the bottlenecks, and that's the bottleneck.

Jon:

Well, that's the human in the loop. That's one of your biggest controls at this point to catch business logic flaws, like, should I be able to move money anywhere in this banking system? Probably not.

Rick:

Right, problems at scale.

Justin:

Yeah, right, exactly. You might say it's easier that way.

Jon:

Right, exactly. Convenience is harder. Exactly.

Rick:

What else? How else is it unique?

Justin:

So one of the things from a threat perspective, and there are a few people actually working on this, is how do you know the data you're being handed is for the right role and everything? With all the connectors, like the MCPs, you're given access to your Google Drive or SharePoint or Teams or Slack, all these connectors so they can get more context. But what if we're prepping for the next board meeting, and it improperly had access to that, and now it's available to the entire company? It's like, oh yeah, I can just ask about it. And oh, look, we've lost money this quarter. That's going to be a negative thing for the stock.

Rick:

You will find it, man.

Joe:

It finds these issues so fast. Salaries.

Justin:

Yeah. I mean, a big problem when you're looking at a lot of that data and doing some of the RAG stuff is that you need to tie permissions and identity to some of that data. Out of the gate, especially six months ago, that really wasn't a thing. It was just consuming data and trying to go as fast as it can to give the context.

Rick:

Identifying previously unknown access issues, but identifying them the wrong way, through incidents, is a real problem.

Justin:

Before, you had to go searching through the shared drive to see what had improper security. Now it just hands it to you. Here's this Excel file out of HR that's available to the entire company.

Rick:

It's a real issue.

Justin:

Obviously, it's available.

Jon:

Yeah. That's where workload identities come into play as well, which started to become popular in the cloud native space because we had all of these different containers talking to each other, these giant meshes, microservices, all this fun stuff. And they needed to prove that they were authorized to make calls to each other. So a lot of people are picking up that workload identity idea from microservices and bringing it into AI agents.

Jon:

I will say, at least from my perspective, the least mature part of the whole stack is least privilege, real least privilege. Because again, you just connect it to the Google Drive. You don't say these files but not those files, with these tags, and DLP, and all this stuff. You just say, plug it in and go.

Rick:

But it takes so much work to get it right, and so many AI solutions are trying to be first to market. So, for instance, a use case: hey, I want to be able to just talk to you about the type of meeting I want to have scheduled, and you're gonna schedule it for me. Right? You're going to identify all the people with these roles and all this stuff. Absolutely.

Rick:

And then I go, okay, well, what if I just query the back end directly? What specifically is the CEO doing right now? People go so fast to be first to market: well, you need to be able to schedule with everyone, so it's gonna consume all the data from all the meeting tools so it has all the context, or whatever.

Rick:

But then all of a sudden you have a business analyst with full visibility into anyone's calendar. I haven't seen this in practice; it's just something I've thought about. But it's a super easy-to-see scenario as people try to be first to market, consuming a bunch of data. Actual least privilege, and deciding what use cases are okay and not okay, takes a lot of work to get right.

Jon:

And very few people pay extra for least privilege. Right? So that's part of the pressure.

Rick:

They pay extra when...

Justin:

...they don't have it.

Rick:

Yeah, exactly.

Jon:

That's the problem: there's always pressure for features. Features, features, features, and people just skip a lot of the security controls.

Justin:

Yeah. It is interesting. I was just reading a blog going over basically this whole problem, and they posed the premise: do you do it where you inherit the identity of the person asking, or do you do it more role-based on who's actually asking? Because a certain person, like a high-level admin, can have many, many roles. Does that mean that even though I have access to the HR drive because I'm a full admin, I should still have all that data at my disposal?

Joe:

Should the query actually return that data as part of a question?

Jon:

Yeah.

Justin:

But I'm not acting in an HR capacity.

Joe:

Right. Or does it go

Rick:

into some potential phish-like queue, where it's a potential bad query or whatever, and someone has to review it?

Jon:

Now you're just adding friction to everything.

Rick:

But you are. You have to.

Jon:

I mean, there's a balance. Everything's a balance. Exactly. Yeah. It's all a balance.

Jon:

Totally agree.

Rick:

But it's the expectations problem, right? Hey, we're paying all this money for this AI solution so we can do these things fast. What do you mean we can't?

Jon:

So wait, you're telling me I had permission to do it, but you didn't let me do it? Yeah. That's a tough one.

Justin:

Until you say, like, you can get the board materials ahead of time.

Jon:

Yeah. Well, you shouldn't have permission to.

Joe:

Well, here's a real scenario. Imagine a sysadmin manager who has superuser access. They're able to see the HR drive. And they're just using their ChatGPT or their Copilot or whatever's built in, because they're analyzing, well, what are average salaries for people I might want to hire for my team?

Justin:

Right. And next thing you know...

Joe:

You start to get...

Justin:

Compared to your peers, you're at the bottom.

Joe:

And it gives you the list of real internal data. Right. But then again, some of this isn't much different from before. You're not supposed to have access, but how many times was there a shared drive back in the day where people put the HR salaries file in the wrong place?

Justin:

But it's that chicken-and-egg thing, you know. You're giving AI context and all that stuff, and if you give it permission, how can you fault the AI for that? Those are just permission problems. It just makes them more visible, you know, identifies the problems.

Jon:

More likely to cause an issue.

Rick:

Yeah, exactly. I mean, I typically recommend to clients in those situations, for certain highly sensitive drives or legal holds or things like that: look, you need to understand that there are these service accounts and system admins that are gonna have access to this thing. And then they always say to me, well, how can we prove that they're not doing anything with it? I go, okay, we'll set up a tripwire.

Rick:

Right? We'll just monitor it, and then we'll set up a tripwire on the tripwire, in case someone tries to modify it. It's a self-healing process. And so I do think tripwires are one way to, maybe not solve, but start to approach those issues. It's like, hey, this query that someone ran actually did evaluate actual salaries.

Rick:

And then you tune and tweak, because I don't know how else you get through it.
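
A minimal sketch of the tripwire Rick describes, assuming you have audit-log events to read; the log fields, paths, and alert hook are hypothetical:

```python
SENSITIVE_PREFIX = "/shares/hr/compensation/"          # the drive being watched
ALLOWED_READERS = {"hr-analyst-1", "hr-analyst-2"}     # identities expected to touch it

def alert(message: str) -> None:
    print(message)  # stand-in for paging or SIEM integration

def check_event(event: dict) -> None:
    """Fire on any read of the sensitive path by an unexpected identity,
    including service accounts and admins who technically have access."""
    if event["action"] == "read" and event["path"].startswith(SENSITIVE_PREFIX):
        if event["actor"] not in ALLOWED_READERS:
            alert(f"tripwire: {event['actor']} read {event['path']} at {event['ts']}")

# Example: an AI connector's service account touching salary files trips the wire.
check_event({
    "action": "read",
    "path": "/shares/hr/compensation/salaries.xlsx",
    "actor": "copilot-connector",
    "ts": "2025-06-01T12:00:00Z",
})
```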

Justin:

Well, one of the things I really like in some of the newer models in ChatGPT, I think it's o3 on the deep research stuff, is that they cite sources now. They're like, oh yeah, I got this information, here's the website I got it from. I think they'll do more along those lines when they start to do summary-type things.

Justin:

They'll put a little tag like, I got it here.

Jon:

That's brilliant.

Rick:

So then it gets into this who's-watching-the-watchers thing. Because I remember this legal case, and this is several years ago now, where the lawyer was getting disbarred because he asked ChatGPT to give him precedent but also cite the cases. But he didn't then go verify them.

Justin:

Did he get disbarred? I know he got fined. He got fined.

Rick:

I don't know what the outcome was, but the proceeding was...

Joe:

It was a bad day that day. It was a

Justin:

bad day. The judge ripped into him because the cases didn't exist. But he said,

Rick:

I thought I did my diligence by forcing it to cite its sources, and it attached the sources, while some of those sources were hallucinated.

Justin:

Yeah, who's gonna actually click on the link?

Rick:

This is my point. So even if it's citing its sources, what do you do?

Joe:

I just had that happen this last week, where I asked ChatGPT to give me some information and then give me the sources, and it created a website that didn't exist, a URL. And I said, this doesn't exist. It goes, oh, no, that would be the kind of website you might look for. That's very interesting.

Jon:

There's actually a technique called LLM as a judge for that specific scenario. These AI tools are trained to be helpful. So it helps you, it hallucinates some websites, whatever, and then you pass that output to a separate LLM whose job is to be helpful in a different way: to validate that those links are all correct.

Rick:

Oh, so different agents do different things? Exactly.

Justin:

So basically, agent checking agent. Basically a unit test.

Jon:

Yeah, it's like a unit test. Exactly. So you take each one of these sentences and the corresponding link. Maybe you crawl the page, and you pass it to the LLM as a judge, and you say: here's a sentence, here is some content.

Jon:

Is this sentence effectively in this content? And again, it could be wrong. It can hallucinate. But we're adding these layers to make it less likely. Yeah. Exactly.

Jon:

Yeah. That's actually pretty much state of the art right now: using an LLM to check the LLM. I call it fighting fire with fire.
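
A minimal sketch of the LLM-as-a-judge pattern Jon describes, applied to Joe's hallucinated-citation example. It assumes the OpenAI Python SDK; the model name and prompts are illustrative:

```python
import requests
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def judge_citation(claim: str, url: str) -> str:
    """Ask a second model whether a cited page actually supports a claim."""
    try:
        page_text = requests.get(url, timeout=10).text[:8000]  # crude crawl, truncated
    except requests.RequestException:
        return "broken link"  # the cheapest hallucination to catch: the URL doesn't resolve

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You verify citations. Answer only 'supported' or 'unsupported'."},
            {"role": "user",
             "content": f"Claim: {claim}\n\nPage content: {page_text}\n\n"
                        "Is the claim effectively supported by this content?"},
        ],
    )
    return resp.choices[0].message.content.strip().lower()

# The judge can itself be wrong; it's a layer that lowers the error rate, not a proof.
print(judge_citation("The company was fined in 2023.", "https://example.com/article"))
```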

Joe:

So should CISOs be creating friction at this point? What are our thoughts? Or should they be lubricating the process and just making it go?

Justin:

You know, accept it all.

Jon:

Turn off the Internet.

Rick:

Yeah, you have to create some friction. I think you're not doing your job if you're not creating some level of friction. At a minimum, start with understanding, really understanding, how the business is using or trying to use or wanting to use AI. Because there's two sides to this, right? There's people that'll come to you and say, I have this whiz-bang use case, I wanna do X. Or they won't come to you, they'll just wanna do it. But they have a specific use case in mind that's gonna accelerate them X percent or Y percent, or they got sold something, or whatever it is. Then there's this other path that AI takes into organizations, from what I've seen, which is the executive: hey, AI is good, go do it.

Jon:

The mandate?

Rick:

I've told all of my teams to go find ways or explore ways to use AI to get better. And I actually think that's, in some ways, the more insidious or harder case, because you get people stretching for use cases that aren't necessarily particularly business-valuable, but they want them to be perceived as valuable, and they want to achieve the mission. And there's still going to be security issues there too.

Justin:

This happens all the time when there's big hype. We've seen the same thing with blockchain, the move to cloud. They have a solution, then they go looking for a problem, type of thing. And so they reverse into it.

Rick:

But I think all security is a risk-reward balance, right? And the reason I have a bad taste in my mouth about the mandated AI without a supporting use case is because you're consuming relatively the same amount of risk, it all depends on how the data is set up, but AI being AI, for significantly less reward. And everyone's going to approach it like, well, no, it's good for my business unit, or I need the thing, because if they don't sell it that way internally, they're going to get a slap on the wrist, right? That's why I hate it.

Justin:

I haven't seen that. You've seen that where they mandate just AI being incorporated?

Jon:

Oh, I've definitely seen that. You've seen that a lot.

Rick:

Mandate's a weird word. So it's not like, you must offload 10% of your workload to AI.

Joe:

They want to optimize.

Jon:

Yeah. No, I've seen a lot of board decisions lately where it's: you're going to lose 10% of your budget, and use AI to fill the gap.

Rick:

Absolutely. Or I've seen: every executive leader comes back to us with their five best uses of AI. And then the term I use is shark tank. We'll shark-tank it at the board level or whatever, to say which ones we're picking and you have to go do.

Rick:

That's what I'm saying. And they don't always pick the ones that are valuable because everyone's coming up with five.

Justin:

Yeah. And those five are going to be all over the board. Like common tools just incorporating it, versus something corporate like Gemini or Copilot, just incorporating it into your existing stack.

Rick:

And it depends on executive experience and vision and direction and funding and all sorts of other things. I mean, people come up with some pretty bad ones. If they're mandated to give five, some of them will be bad.

Jon:

And there's a lot of gaps in the understanding of what's realistic today, what's realistic in six months, and what's not realistic for ten or twenty years. There's all this talk about AGI and ASI and things like that. And sure, there's not zero possibility of that, but have some realism about what you could do today, what you could do tomorrow, and what's going to be a while.

Rick:

So that might be one thing. You say, what should we be doing? There's probably an education thing first and foremost: educate yourself on what's possible, what's not possible, what's good, what's bad. And then you gotta figure out how to educate upstream in turn, because I don't know that you'll be super successful being a gate, but you probably can be more successful making, well, guardrails. You need guardrails.

Rick:

But I think if you can make the board feel really, really smart or the ELT feel really, really smart about the fact that they know more because their CISO equipped them to know more. Right? So educate. I think educate is a huge one.

Joe:

Okay. So we don't even need AI in this conversation. We're coming back to some of the basics.

Rick:

That we talk about every single episode.

Joe:

And I'm almost hesitant to say, let's put a managed risk process in place again, because every month we talk about managing risk. But I would say, if you're an IT or security leader, imagine that you're going to go do something now because you heard this. What are you going to do? I would start like anything else: inventory. What do you even have?

Joe:

What apps do you already have that have AI built in, or are promising it? We were just talking about this. Every time I open up a different Slack channel, it pops up a window: you can add AI to this, just sign up. It's not my Slack channel, but somebody who owns it could buy it if they want it.

Joe:

But I'm... yeah. And so it's always there. So now, not only do you educate, but how do you start to create your risk scenarios? Just add a column to your CMDB that says AI, with a box to check: is it included?

Rick:

Well, I think there's actually different AI risk profiles, because an app installed on a laptop that's using some AI thing to do its thing better, spell check via AI, right, Grammarly? Yeah, sure. I think that's different than, hey, I can buy this tool where I throw all my financial statements at it, internal and external, and it comes back with insights; it's like having an extra CFO. The risk of those two things is inherently, significantly different.

Joe:

Or how about the chatbot for customer support?

Rick:

Right.

Joe:

That may not be vetted out and gives bad advice. Did you hear about the one where somebody was working with a chatbot when they were buying a car? They got it for $1. Yeah, they social engineered it into giving them a car at a super discount. And the person went to the dealer to get it, and then it didn't work out.

Rick:

That's hilarious. Did not hear about that.

Jon:

That's funny.

Joe:

You gotta look that one up.

Jon:

Yeah. I think it comes back to doing threat modeling, right? How do we feed our risk management process? How do we do more threat modeling, and do it from this different lens of the potential attack vectors and the concerns with AI?

Rick:

I think that's right.

Justin:

Yeah. On chatbots, we haven't started using the AI piece yet, but I use a product called Intercom. You just point it at all your help articles, and if someone's asking a question, it'll try to take their question and say, oh, you're looking for this, maybe this article really helps you. Obviously we're a small company, but at scale that can save hours and hours on customer support.

Rick:

But what's the risk to your company in using that AI tool?

Justin:

It's probably pretty minimal if that's the only thing it sees, all public stuff.

Rick:

I think the security risk is pretty minimal if your access is good. Maybe some reputational risk.

Jon:

And if the impact were higher, you could choose to do something different: instead of having the AI summarize it, you just have the AI give them links to things it thinks are relevant. So now the AI is just facilitating finding links. It's a glorified search bot. But you've reduced the risk, because it doesn't have the potential to hallucinate anything other than a broken link.
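
A minimal sketch of the "glorified search bot" Jon describes: the model only ranks articles you already wrote, so the worst failure is a wrong link rather than invented advice. Again assuming the OpenAI SDK; the article catalog is made up:

```python
from openai import OpenAI

client = OpenAI()

ARTICLES = {  # hypothetical help-center catalog
    "How to reset your password": "https://help.example.com/reset-password",
    "Exporting your data": "https://help.example.com/export",
    "Billing and invoices": "https://help.example.com/billing",
}

def suggest_links(question: str, k: int = 2) -> list[str]:
    titles = "\n".join(ARTICLES)
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=[
            {"role": "system",
             "content": "Pick the most relevant article titles from the list, one per line. Output titles only."},
            {"role": "user", "content": f"Question: {question}\n\nArticles:\n{titles}"},
        ],
    )
    picks = [t.strip() for t in resp.choices[0].message.content.splitlines()]
    # Only titles that exist in the catalog become links: no hallucinated URLs.
    return [ARTICLES[t] for t in picks if t in ARTICLES][:k]

print(suggest_links("I can't log in anymore"))
```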

Joe:

Can we go back to threat modeling?

Rick:

Oh, I was gonna say, exactly right.

Joe:

So STRIDE is a common threat modeling methodology. Will that still apply, or do you think it needs to change?

Jon:

No, I think it still applies, just like it always has. There are different mechanisms; STRIDE is a great one for threat modeling. I actually made a custom GPT a couple of years ago.

Jon:

I open sourced it. If you go to Zenable... I think

Justin:

it's still on my LLM set.

Jon:

Oh, LLM, yeah. It's zenable.io/threats. And it'll take you to a custom GPT in ChatGPT. I took the OWASP Top 10 and a bunch of helper articles.

Jon:

We have a thing in the CNCF on how we do threat modeling for Kubernetes and related projects. I fed that into it. And I wrote a whole bunch of instructions to help people threat model. This is a scenario where I think AI tools are extremely helpful. Use AI tools, feed them these bits of information, like what the frameworks are, STRIDE and things like that, and other ancillary information, and have it broaden your horizons, right?

Jon:

Because there are things that you can think of during a threat modeling exercise, but you know what AI is really good at? Thinking about similar but different things. Brainstorming, exactly. So: here's what I've got so far, what other potentials are there?

Jon:

And it's going to give you 10 things and seven will be garbage but three are gold. Right? And that's what I love about it.

Justin:

I do it all the time for upcoming customer calls or something like that. I'll take a topic, I'll feed it in, and most of the stuff is what I'd already planned, but there'll be one or two that'll be like, oh yeah, that's an interesting one.

Rick:

The other trick that I love is brainstorming in context. So say: all right, I'm a security engineer or architect, what are the top 10 things I should be thinking about? And you get a couple of good ones. Okay, now I'm the CISO.

Rick:

What are the top 10 things? Some of them overlap or whatever, but you might get a couple of different things.

Joe:

I like taking that and then saying, now I'm that person. I just presented it to the CFO. What questions are they gonna ask me?

Rick:

Oh, yeah. Perfect. Back to threat modeling really quickly. What is the most distilled version of threat modeling? Like what's the question you're answering or the couple questions?

Rick:

Like what are the key elements that you need to pull out?

Jon:

You mean like doing a threat model specific to AI?

Rick:

So give me threat modeling for dummies, 101, right? Yeah. Specific to AI, yeah.

Jon:

Yeah, that's a good question.

Joe:

Well, let's go through it. Can we use STRIDE? Like, S is spoofing.

Rick:

Yeah. Okay.

Jon:

So, I mean, how's that different from hallucinating, right?

Rick:

It is. Or I guess my question is, is it? Data spoofing, is that different than role spoofing? I mean, are there other elements of it?

Jon:

Yeah. So, does the AI have the potential to spoof a role? That would be a good path to go down. Given what you know about the system, are the workload identities a deterministic part of the system? If a person is talking to a chatbot and it's querying data, do you literally take their token and pass it along in a deterministic way? Or do you extract their role from the HR system and infer what data they should have access to?

Jon:

And with that, it's much more likely that you're going to do something wrong, right?

Rick:

You're going

Jon:

make mistakes. You can identify parts of the stack that have the potential for non-deterministic outputs, and that's going to be a higher potential for errors

Rick:

or mistakes.
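
A minimal sketch of the deterministic option Jon prefers here: the chatbot passes the asking user's own token to the data layer, so existing ACLs decide what the LLM ever sees. The search endpoint and its behavior are hypothetical:

```python
import requests

SEARCH_URL = "https://search.internal.example.com/query"  # hypothetical internal API

def retrieve_as_user(question: str, user_token: str) -> list[dict]:
    """Run retrieval as the asking user, not as a privileged bot account.

    If the user can't read a document, the backend never returns it, so the
    LLM can't leak it; there is no role-inference step to get wrong."""
    resp = requests.get(
        SEARCH_URL,
        params={"q": question},
        headers={"Authorization": f"Bearer {user_token}"},  # caller's token, passed through
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["results"]

# The fragile alternative: query with a privileged service account, look up the
# user's role in the HR system, and filter results yourself; every inference
# step there is a chance to spoof or mis-derive a role.
```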

Joe:

The other parts of STRIDE, I'll just run through these: tampering, repudiation, information disclosure, denial of service, and elevation of privilege. So that's what STRIDE stands for.

Joe:

So you can think of very practical items for each one.

Jon:

Yeah. I mean, there are always little things that you can do to change or augment, like denial of service in AI systems. There's also something from the serverless ecosystem we'd call denial of wallet, where you're paying for usage, and it's expensive usage. So with a chatbot, I can send in a bunch of tokens and get a bunch of tokens out, and that's going to cost more on their side.

Jon:

It's denial-of-service-ish, but usually denial of service means downtime, availability being degraded, something like that. I'm talking about spending their money. So if I open up a chatbot and people are just using it to do non-chatbot things, they're just abusing it to do other sorts of work.

Justin:

Around resources.

Jon:

Yeah, yeah. I'm gonna pay the bill as the person hosting the chatbot.

Rick:

But I think of the pattern you noted before, with denial of service being the thing I'm thinking about: LLM as a judge. I could definitely see, oh, there's a bug in the LLM-as-a-judge setup that's causing a denial of service issue with the generative element, the thing that's creating or analyzing what's being passed to it.

Jon:

Yeah. So for a lot of those, I use a framework called LangGraph. And LangGraph is great because, to simplify it, you have different LLMs and you have relationships between those LLMs. You can say one generates stuff and one checks stuff, right? So this one makes an output and this one is the judge, but the judge can actually feed back: try again, do it again.

Jon:

And so in your scenario, you could have an infinite loop. Yeah, exactly. It's never able to complete, and you don't have a TTL or something.

Rick:

Right, so then what happens?

Jon:

And it just burns; you're just burning tons of money, paying Anthropic or ChatGPT or these companies ridiculous amounts of money. You have to have some way to cut it off. So if you're threat modeling that, you're like, okay, an attacker could do these sorts of things, craft these inputs to put it in an infinite loop so that it never makes a satisfactory output, which will just burn a lot of our money.

Jon:

How do we mitigate that? Okay, if we're talking about risk management, let's just put in a loop limit of 10. You can only do 10. And how about an input token limit and an output token limit? You can put all of these different things in place, and then you're whittling the attack tree down to: what lines do I care about cutting now by injecting a new control?
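
A minimal sketch of the generate-and-judge loop with the controls Jon lists, in plain Python rather than LangGraph itself; generate() and judge() are stand-ins for real LLM calls:

```python
import random

def generate(prompt: str, max_tokens: int) -> tuple[str, int]:
    """Stand-in for the generator LLM; returns (text, tokens_used)."""
    return f"draft answer for: {prompt[:40]}", random.randint(500, max_tokens)

def judge(draft: str) -> tuple[str, int]:
    """Stand-in for the judge LLM; returns (verdict, tokens_used)."""
    return random.choice(["ok", "citations missing"]), random.randint(100, 500)

MAX_ITERATIONS = 10          # loop limit: the judge can only demand 10 retries
MAX_OUTPUT_TOKENS = 4000     # per-call output cap
TOTAL_TOKEN_BUDGET = 50_000  # hard spend ceiling for the whole request

def run_with_guardrails(prompt: str) -> str | None:
    spent = 0
    for _ in range(MAX_ITERATIONS):
        draft, used = generate(prompt, MAX_OUTPUT_TOKENS)
        spent += used
        verdict, used = judge(draft)
        spent += used
        if spent > TOTAL_TOKEN_BUDGET:
            raise RuntimeError("token budget exceeded; aborting instead of burning money")
        if verdict == "ok":
            return draft
        prompt = f"{prompt}\n\nPrevious attempt rejected: {verdict}"
    return None  # give up after the loop limit rather than spinning forever

print(run_with_guardrails("Summarize our incident response policy"))
```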

Rick:

Yeah. I've always thought, from a threat modeling perspective in terms of AI, the data poisoning argument is such an interesting one.

Jon:

It's interesting.

Rick:

At least for me, it's such an interesting one.

Jon:

It's more fun than real-world, I think.

Rick:

I think that's probably true.

Justin:

Yeah.

Jon:

It just seems so...

Justin:

Isn't the entire internet poisoned? Yeah. Well, no, but I think about it like...

Rick:

But a specific AI agent working... okay, so let's pretend, and this is not a thing that I've done recently, but let's pretend we're gonna go all in on a SOC with agentic AI and all that. I can definitely see a world where you say, okay, SOC analyst agents, think about all the security incidents we've had in the past, and what went right and what went wrong. And I can definitely see a world where, if an adversary knows that that's the architecture or the way things are going, and there's an injection point where they can dump a bunch of data, they get access, for whatever reason, to the folder where all the lessons-learned documentation is.

Rick:

I can absolutely see them pulling all that out, using AI to write a bunch of new fake ones that have bad lessons learned or indicate certain things, and dumping them all back in. And depending on the level of automation that exists, now all of a sudden you may have just told the robot to open the gates.

Jon:

So with data poisoning, there's a couple of different ways that can apply. What I was replying to earlier was this assumption that we were talking about data poisoning to train the model itself. I think that's very academically interesting and fun, but not super likely. What you're talking about is a much more likely scenario, which would be data poisoning of some sort of retrieval augmented generation, like a RAG-based system.

Rick:

To mess with the baseline.

Jon:

Where there's a database of, like, help articles, like you were talking about earlier. If you can get things into that database, and then on the fly the LLM is going to retrieve information from it, oh yeah, now you're definitely data poisoning, and that is much more likely.

Rick:

You just absolutely unlocked a new fear, where all I need to do as an attacker is change the help desk article that's automatically getting served to everyone, in terms of how you actually request a password reset, and you just click a different link.

Jon:

If you've got help articles and you allow comments, you're turning all of those off right now. Same thing with blogs. If you have a blog and you're pointing people to it, but the whole page goes into context for the LLM, the comments could be in context too, and you could put all kinds of poisonous, bad stuff there. I feel like all of those systems are doomed right now. They're all going to get turned off, because there's so much poisoning, so much abuse that you could do.

Rick:

Well, it can't necessarily be, or I should say it shouldn't be, real-time updates as the data updates. There should probably be some sort of gate, or an LLM as a judge, or whatever that is.

Jon:

Yeah, or ignore the comments, but it seems so prone to...

Justin:

I'm not sure you can get the right context.

Jon:

Yeah, I could say exclude things that are in this comment class, but then they rename the comments and the thing doesn't work anymore. It just seems brittle. Having it there but excluding it always seems brittle to me. You can do the allow-list approach: only include the things that are named this.

Jon:

That's probably a little bit better. But if I right-click, inspect, I could put that in my comment too and maybe mess up the LLM; it doesn't know what's being parsed, things like that. Yeah, there's a lot of abuse. I've actually been seeing a lot of LinkedIn pages lately where people are putting "delete my profile from your database" in their taglines, because they're getting scraped, and they're essentially poisoning the recruiters that are using LLMs to scrape these things. The LLMs will follow those instructions. I don't know if it works, but you could see things like "forget about me" or "delete my information" or "only consider me for jobs that pay seven figures and up," something like that.

Jon:

Yeah. I mean, that's data poisoning too, right? Just from a different angle.
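
A minimal sketch of the allow-list ingestion idea from this exchange: when indexing pages for RAG, keep only elements you authored instead of trying to strip known-bad ones like comment sections. Uses BeautifulSoup; the selector is illustrative:

```python
from bs4 import BeautifulSoup  # pip install beautifulsoup4

ALLOWED_SELECTORS = ["article.post-body"]  # only content we authored

def extract_trusted_text(html: str) -> str:
    """Pull allow-listed content out of a page before it enters the RAG index."""
    soup = BeautifulSoup(html, "html.parser")
    chunks = [
        node.get_text(" ", strip=True)
        for selector in ALLOWED_SELECTORS
        for node in soup.select(selector)
    ]
    # Anything outside the allow-list (comments, sidebars, user content) never
    # reaches the index, so a poisoned comment can't land in the LLM's context.
    return "\n\n".join(chunks)
```

As Jon notes, even the allow-list stays brittle if user-supplied content can end up inside the allow-listed markup, so this is a reduction in exposure, not a guarantee.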

Joe:

So what kind of company, or what kind of CISO, might actually need to think about that? If we have a listener who's wondering, well, do I have this, should I be worried about this, what kinds of companies need to think about it?

Justin:

Every company.

Jon:

I've got lots of opinions here. Yeah. I mean, you could just extrapolate from how much security you need today. How much do you invest in security, maybe as a percent of your IT budget or something like that?

Rick:

No, how much should you invest today?

Jon:

Should, yeah. Well, I think it's important that you put your money where your mouth is, right? You, as a company, have made the decision to invest this much. If you're investing on the higher end of the spectrum, you're probably regulated, right? You're probably a bank.

Jon:

You're probably in health care.

Jon:

Maybe you work with government. And if you're something else, maybe a law firm, you're in the middle of the road. Maybe you're in education; you're going to be on the lower percentages. That's just a common thing.

Jon:

So extrapolate from your security budget or your security decisions, and that's probably who needs to worry about this the most, because it means either you're regulated or you have higher risk. It's just inherent, because you have more sensitive data or whatever it is.

Justin:

And what about the common e-commerce SaaS? Are you saying they don't really need to worry about this stuff? Just use Gemini, ChatGPT, Copilot, and you're good to go?

Jon:

I mean, there's impact and likelihood, right? Impact is high; that's everything I just said. But then there's likelihood, and those tech companies are going to use this stuff like crazy. They have a higher potential for reward, so they have a higher likelihood of the negative sides showing up as well.

Jon:

So I think that, just generally, if you're a tech company adopting AI quickly, that's another knob turning up that says you might want to consider more controls, more threat modeling, risk assessments, risk management, whatever it is.

Rick:

Well, you need to understand what could go wrong, at a reasonable level, through threat modeling. And I don't think a

Justin:

lot of people have the full context on that, right?

Rick:

You just need to think through it. You need to understand: with what we're using or what we want to use, what are the types of things that could go wrong? You need to understand that, or at least think about it, before you figure out how much you want to invest in preventing those things from going wrong.

Justin:

I'm just curious how many general counsels have gotten the question: should we be recording all of our meetings and transcribing them? You know?

Jon:

Sure, yeah.

Justin:

And giving all the summaries. How many have gone into that conversation and said, oh yeah, now it's discoverable? All this stuff is lying around, and we haven't thought about retention or anything else. That's a lot of work.

Joe:

And it's probably recording it on a system that you don't control. Correct. It's a third party.

Justin:

Multiple systems. Notion just came out with their recording thing. And it just records the audio off your computer. So there's no Zoom-style announcement, like, hey everybody, just FYI, this is being recorded. You just hit record.

Justin:

There is an option to actually announce it, but you don't have to do it.

Jon:

I have a similar scenario.

Justin:

You just hit record, and you're just on a call, and it transcribes.

Jon:

So I use a tool called Krisp AI. What it does is filter out background noise, right? It's meant to sit on your microphone and speakers and filter out background noise. And it has this built-in, default-on feature. It's a great tool, by the way.

Jon:

But it records the calls. If I'm on Zoom and the call is not being recorded, my Krisp AI could still be recording, with no notification of it. I turn all this stuff off, but, man, does it like to be turned back on. I keep having to turn it back off. I do an update, and it goes back to the default, and I'm like, it's a little frustrating. But at the same time, who else has that? How many other people just have this filter on their microphone and their speaker, and it's all going through this software?

Justin:

Well, you have it now.

Rick:

Who else has that that thinks about it from a pure media or professionalism perspective that isn't a technologist, right? So, oh, I'm maybe in the media business or I'm working with radio hosts or whatever. So we're gonna get this thing. Oh, by the way, now all your offline things are recorded. There's just so many things like that.

Joe:

Yeah. Well, I'm on a lot of meetings, and every time I go into a meeting, the notetaker pops in. I tell it to pop in to most of my meetings, not all of them. When that happens, the first thing I do is ask everybody if they're okay with it. Yeah.

Joe:

And usually I get, Yeah, we're good. No worries. And then I do have some where some attorneys are also joining those.

Rick:

All right.

Joe:

And they're like, whose notetaker is this? Can we kill that? Yeah, we're not doing that on these. And then we turn them off.

Jon:

Yeah. So I had a really interesting scenario like that. I use a transcriber as well. And it sends everybody that's invited to the meeting an email ahead of time saying, this is set to be transcribed. Are you okay with this?

Jon:

And so it goes to this one company, a big, big, highly regulated institution. And they went to say no. But their proxy blocked it. So they couldn't say no. They couldn't say no to the transcription ahead of time.

Jon:

So they had to text me and be like, dude, we can't have this thing transcribed, but I also can't say no. Because they couldn't say no because they're not allowed. They're proxy blocked.

Joe:

They're not allowed to get to

Jon:

That page. They can't get to the transcription service, which is where they'd say yes or no. Yeah, it was hilarious.

Justin:

That's crazy.

Rick:

You can't use this, and therefore you can't say no to it. Exactly.

Joe:

So, yeah, kind of to sum this up: if you're not thinking about these things, you really need to sit down and start enumerating where this is happening. Start documenting it so you can have those meaningful conversations, and then figure out how to adjust your risk register to account for these things.

Rick:

If you think it's not happening, still have those conversations, because you'll likely be surprised, I would think.

Jon:

The one thing I would say is, it also sounds like a lot of work, and it sounds painful. It's actually a lot of fun. I really enjoy it, because you're getting to use AI tools and to think and experiment. It's very different from the traditional work we've been doing for years. I get really excited about it.

Justin:

Well, it almost seems like magic.

Rick:

When you

Justin:

get done with a call and it just goes, here's the summary and here are your to-do points. My gosh. And they're

Jon:

so accurate.

Joe:

This is awesome. Don't be the person who causes the friction but rather let's lubricate the process. Education.

Rick:

Speaking of lubrication. Yeah.

Justin:

We got something special here today: Elijah Craig. Very common, it's a popular bottle. You'll find it anywhere in the state stores and everything like that. But first off, we got our first sponsor here.

Justin:

Thank you to Liberty Liquors in Maryland for sponsoring this. We really appreciate it. Number two, this is a private barrel selection. For those that don't know, a lot of distilleries will let you come in and sample a number of different barrels, and barrels age differently. Whether they're a little bit hotter, stored up at the top of the rickhouse or down below closer to the ground, the different temperatures and the way the wood expands and contracts, moving that bourbon in and out of the barrel, vary throughout the house.

Justin:

With big companies, when they go to select, it's usually a multi-barrel pick, where they'll take, like, 20% from this section,

Rick:

10%.

Justin:

And that's what the master distillers do, right? Exactly. They try to keep the taste the same across multiple years, because otherwise you can't get the same taste. When you do a barrel pick, it's the unique taste of that one barrel. It's the same mash bill, same process, same barrels that they use, but because the temperature is different as it matures and ages, you'll get a very unique taste.

Justin:

You know, that's different from other barrels.

Rick:

Or even the nature of the wood that's in the barrel itself. There's a million different things. Like all those environmentals. Yes.

Joe:

What's the proof of this one?

Justin:

So, this is, what, 47 percent? So, 94 proof.

Joe:

Okay. And what's a bottle like this go for?

Justin:

This is, what, $50-ish?

Jon:

For a barrel pick?

Justin:

Oh, I don't know about the barrel pick. Elijah Craig specifically.

Rick:

Yeah, their standard.

Justin:

Yeah, their standard one and everything like that.

Joe:

I'm enjoying this one. In fact, I will typically throw a chip of ice in, but I didn't do it on this one. It's so smooth.

Justin:

So, I love Elijah Craig. This is almost my number one go-to for mixing drinks. It's good enough that you can sip it, but cheap enough that you can mix with it. It's not too crazy. It doesn't have the craziness of something finished in, say, a port wine barrel that would throw off your cocktail. This is a standard, very good bourbon that I think you can either sip or make a cocktail with.

Justin:

And it's excellent. I mean, I'd have to taste it right next to their standard one. I don't know that I could tell you exactly what the difference would be, you know.

Rick:

I think I get a little more caramel from this than like their standard fare.

Justin:

That's

Rick:

just it. It is very nice.

Justin:

Well, cheers. Cheers. Thank you, Liberty for sponsoring this. Cheers. Cheers.

Justin:

With that

Joe:

Can I tell you how excited I am for BSides Pittsburgh? I know by the next episode we won't be talking about it as much, but everyone, it's getting so close. We're just a few weeks away.

Justin:

Well, the next one will be the recap.

Joe:

Yeah, it will. It

Jon:

will. So

Joe:

a couple of quick things. I took a quick glance at the agenda, and at least six of the 21 talks were clearly AI related in their titles. So I'm really excited to look at those.

Justin:

So you're saying it might catch on?

Jon:

Yeah, might catch on.

Joe:

You have a talk, right? You're doing

Jon:

a talk? Yeah, yeah. I'm talking in track three at 1 p.m.: How I Learned to Stop Worrying and Love Vibe Coding.

Jon:

Yeah. So it's a talk about how, even if you're in security and you care about safety and correctness and conformance and all this sort of stuff, you can still use these AI tools to work quickly and do this new thing called vibe coding

Joe:

where

Jon:

you're...

Joe:

You're defining vibe coding.

Jon:

Yeah. So you're not really reading the code as much. I still do, but you can run the code and experience it and kind of feel it and understand the vibes. Maybe skim it, but don't read it line by line. Is it giving me the right vibes? Does it have a good code smell, as a developer would say? As a human, you're taking a much higher-level, lighter review of the code, and then you're leveraging other tools like LLMs to do that more intense oversight and get a lot more done.
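
To make that oversight idea concrete, here's a minimal sketch of an LLM-assisted review pass over a branch diff. It's an illustration, not any particular product: call_llm is a hypothetical stand-in for whatever model API you use.

```python
# Sketch: LLM-driven "intense oversight" while the human skims for vibes.
# call_llm() is a hypothetical stand-in for your model provider's API.
import subprocess

REVIEW_PROMPT = """You are a strict code reviewer. For the diff below, list:
1. Security issues (injection, secrets, missing authorization)
2. Duplicated logic that probably exists elsewhere in the repo
3. Missing tests, logging, or metrics

Diff:
{diff}
"""

def call_llm(prompt: str) -> str:
    """Hypothetical: wire this up to your LLM provider of choice."""
    raise NotImplementedError

def review_branch(base: str = "main") -> str:
    # Gather the same diff a PR review bot would see.
    diff = subprocess.run(
        ["git", "diff", base],
        capture_output=True, text=True, check=True,
    ).stdout
    return call_llm(REVIEW_PROMPT.format(diff=diff))
```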

Joe:

Oh, that's awesome.

Justin:

Eliminating code slop at the same time.

Jon:

Yeah, yeah. Getting rid of a lot of that slop that comes out by default from coding tools. I'm going to talk about MCP servers and automated PR reviews and all kinds of cool stuff. Maybe even a little threat modeling. We'll see if I add that in there.

Joe:

So, if you haven't got your tickets, you can still get them. I've warned over and over again, and the price has finally gone up. People are still buying them. Yeah.

Justin:

But it's still worth it.

Joe:

Yeah. Yeah. I mean, I'll tell you, even at $75 a ticket at this point, take advantage of the drink tickets. You're gonna break even on

Rick:

the meal alone at 75. Then you still get all the talks and the villages and everything.

Jon:

Well, there's lunch, there's the after-party meal, there's the drink tickets. And there's still a surprise. I don't think we've announced it.

Joe:

Justin doesn't know what it is.

Jon:

Oh, I'm looking forward to that. Yeah, there's the cookie table in the afternoon. True. There's all kinds of fun, and I don't know if we said t-shirts, and they

Justin:

have good ones. How many tracks?

Joe:

Three tracks, 21 talks right now. And I checked right before we started recording: 990 tickets sold.

Joe:

So, it'll be...

Jon:

Yesterday it was like nine fifty.

Justin:

Should I just announce it as a thousand? By the time this comes out, it'll definitely be over.

Rick:

Yeah. We'll all buy three more. Yeah. Right. Exactly.

Joe:

And something else, Distilled Security is going to be there.

Jon:

Oh, yes. Yeah.

Justin:

Else.

Joe:

Go ahead, Justin. Tell us about what

Justin:

we're going to do. Yeah. So we're going to be toward the front where you check in and everything, and we're going to be doing a number of interviews with a lot of the speakers. I told you, Joe, that I definitely want to get you or somebody else from the conference in there. We're going to get little clips of what you're going to be talking about, like five minutes, a little interview with a couple of chairs and mics, and then we're going to be launching it out on the web the same day, within an hour.

Justin:

So

Joe:

if you're a participant, and everybody at BSides is considered a participant, come on and just stop over, and whether it's Justin, Rick, myself, or somebody else, we'll interview you and get your stuff out there.

Justin:

Yeah, absolutely. It'll be fun. I think we got a good placement, and we've been coordinating a lot of stuff around it. I think it will be a good time.

Jon:

Yeah. So, wait, where are you guys going to be positioned again?

Justin:

Close to where the registration is.

Jon:

Top of

Justin:

the escalators, yeah.

Jon:

So, you're right between the two main areas.

Justin:

Yeah. We're going to have a couple of chairs with a banner kind of at the back and then a couple of mics to sit down and talk with everybody.

Joe:

And even by the time

Justin:

this recording

Joe:

comes out, I think we'll be past the due date for getting logos on t-shirts and stuff like that. We can still get logos on certain signage and on the website. So if you were looking to sponsor, it's not too late. In the last couple of days, we've had two or three new sponsors come through. So super excited about that.

Joe:

And there are still gold sponsorships to get your logos and shout-outs out there. Yeah, very excited.

Justin:

Come join us.

Rick:

It'll be fun.

Jon:

Yeah, it's gonna be a good time. Always fun time. Yep.

Rick:

And hang out for the after party?

Justin:

After after party?

Rick:

After after party is always

Jon:

good too. Those are fun.

Joe:

Those are definitely good. Yep. Get your steps in on the after after party because we basically sometimes walk all the way

Rick:

We wander.

Joe:

We wander downtown Pittsburgh, have some drinks, get some pizza, and then wander back. So if you want to join, just find us. Ask us. We'll tell you where to meet us.

Justin:

Yeah. Yeah. Absolutely. It's a good time.

Jon:

Sounds good.

Joe:

So, is there anything else we wanted to cover today?

Justin:

So, as far as AI goes, I guess I'm thinking, when lawyers get involved in this, they're going to say no, aren't they? Who's going to be the sounding voice on this? I think they may want to be sometimes. They're always the ones like, I don't want to deal with this, so let's not do it.

Jon:

You gotta have the conversation, yeah. But

Rick:

lawyers are typically really good about not throwing themselves in front of buses. And I think most of the

Justin:

I don't know about that. The

Jon:

general counsels

Rick:

that I know are pretty

Justin:

good at... I've met some good ones and I've met some terrible ones.

Rick:

Okay, let's err on the side of the good ones.

Justin:

And sometimes there's a culture in the workplace where whatever the lawyers say goes. So they're like, hey, no call recording. And everyone's like, oh, okay.

Rick:

Well, yeah. But my point is, a lot of the counsels... not that I speak to counsels every single day, but enough... a lot of the ones that I've talked to understand that this is a thing the business is going to do one way or the other. And so how are they going to appropriately manage the legal risks associated with doing the thing?

Joe:

I didn't even think about attorneys being in our audience. But if there are, I would say: be the ones positioning yourselves to sit with the CEO, so that when they say go fast, you're properly helping the organization put the guardrails in place to manage that risk without slowing it down.

Rick:

And their mandate is going to be risk beyond cyber, right? There's going to be privacy stuff. There's going to be litigious stuff. There's going to be a ton of discovery stuff. We were talking in the last podcast about what's discoverable and what's not.

Justin:

We were just talking about that. You guys were probably CC'd on that, or somebody just putting contract language into some of this stuff. The lawyers being able to say in the Ts and Cs, hey, we have your customer data, and it may go into some training data at some point. Because people have been sued for much less, you know, the whole "I didn't give you authorization" thing.

Joe:

So another takeaway I'm hearing is, if you're the CISO, director of security, somebody like that

Rick:

Partner with.

Joe:

Go and... you should already be doing this. Go talk to your counsel. Yeah. Find out what they think everybody should be doing and come up with some alignment.

Rick:

Can I throw another one in there? No? Never mind then. Sorry, never mind.

Rick:

I would also say the CFO because honestly, everyone

Justin:

For context, yeah.

Rick:

Everywhere I've ever been, my two best friends are going to be the general counsel and the CFO. That comes from doing good security. But the CFO... look, everyone wants to buy more AI tools, right? If you're in an enterprise organization, it happens a thousand times. Some of these are inherently going to be redundant. Some of these use cases could be solved with one thing or three things or whatever, and you don't need a thousand things. Make friends with the people whose job it is to reduce legal risk and to reduce overall spend, all those sorts of things.

Joe:

Well, now you just made me think about the enterprise architect, whoever's in charge of that, and tools rationalization. Think about how the firewall vendor became the security vendor, became the one pane of glass, and then you ended up with three of those. You could eliminate two and still be good, because you only ever used 10 or 20% of each. This just sounds like the same thing. We need AI tools rationalization: how many AI tools are you paying for every month, how many do exactly the same thing, and how can you think about that?

Justin:

Yeah, and it's so interesting. I use Slack, which has a whole bunch of integrations: Linear, GitHub, Drive, all that stuff. And I use Notion too. They have a whole bunch of integrations.

Jon:

It's all this stuff.

Justin:

You know, Linear, GitHub, Google Drive.

Jon:

But you're

Rick:

not using both of them simultaneously.

Jon:

And I really use

Rick:

it for the same stuff.

Justin:

Personally, I'm less on the Slack AI stuff and more on the Notion side. But from there, I'm like, I guess I'll integrate it, sure, better context, okay, that type of thing. It might be good, but you should be rationalizing before connecting it to the big data shares of your organization. Any last words?

Justin:

Well, we have a little bit of time. We haven't really discussed the development side of this too much. We've talked about general organization stuff. But development, I feel, is a big, big open AI use case. It's almost treated totally differently.

Justin:

I mean, I'd be curious on your opinion of this, Jon. Who's paying for Copilot? Most development companies are

Rick:

paying for Copilot.

Jon:

If you're on GitHub, pretty much everybody is paying for Copilot at this point, because it has so many different uses. You can use it for peer reviewing things, you can use it for generating things. And it's a very good way to get started. That's what's great about it: it's so easy to start. And there are other tools too.

Jon:

Some people will buy Copilot, they'll also buy Cursor or Windsurf or Roo or

Justin:

Yeah, yeah,

Jon:

There are other things, yeah. There are all these different options. And it's like, where do you want it? Guardrails? Yeah, right, right.

Jon:

Exactly. Yeah. But yeah, so where do you want it? Do you want it as the developer's writing the code? Do you want them to be able to chat with it?

Jon:

I know lots of people that actually buy ChatGPT for their developers. And the way they use it is they have a ChatGPT tab open. They ask some questions, and they copy and paste things over. And that little bit of additional friction is on purpose. They don't want the IDE to have the AI built in, because they know the developer will tab, tab, tab, accept things, and just kind of vibe code it more.

Jon:

But if they force a little friction, like: write me a function that does this, copy, paste, it doesn't work, okay, here's an error message... that little slowdown makes the developers think about it a bit.

Jon:

And sometimes that's good. Sometimes that's unreasonable; it's too much friction. It depends on exactly what you're building. I would say even within one company, it's probably okay in some places and not okay in others, depending on the type of tool you're working on.

Jon:

I mean, there's a ton you can talk about with software development and AI tools. But generally, it is sloppy. We've talked about AI slop in general, and the default outputs are pretty bad.

Justin:

I think it's good on focused functions. Yeah. Like if I ask it very specifically, iterate through this array and turn this input into this output as JSON or something like that, it does that very well.
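
That kind of tightly specified transform is the sweet spot. Something on this order, where the input and output shapes are spelled out up front, is what the tools tend to get right on the first try (the names here are just illustrative):

```python
import json

def users_to_json(users: list[dict]) -> str:
    """Map a list of user records to a JSON array of {id, name} objects,
    skipping any record that's missing an id."""
    out = [{"id": u["id"], "name": u.get("name", "")} for u in users if "id" in u]
    return json.dumps(out, indent=2)

print(users_to_json([{"id": 1, "name": "Ada"}, {"name": "no-id"}]))
```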

Jon:

If you know exactly what to tell it to do, yeah, I agree.

Rick:

I have a very dumb non-developer question, which is: why is slop bad? Like, in general, I kind of get it, right? But what are the actual real

Justin:

ramifications? It comes down to fitting into exactly what you're looking for. Say your architecture has a whole bunch of functions everywhere, and you might already have some common utility functions and all that stuff. It will just recreate functions on the fly. Like, oh yeah, you need a date formatting function; you might already have that in your code base, or there might be an open source library you're already pulling in. It won't use it. It will create its own.

Justin:

It'll put it in, and all of a sudden you're having to rewrite that.

Rick:

But why

Justin:

is that bad? Because it's just more code you have to...

Jon:

Well, it's reinventing the wheel. So if you are reinventing the wheel in 10 different places in your code base and you find there was a mistake you have to go fix, now instead of fixing it in the one place that's used everywhere, you have to fix it in all 10 places, and you have to figure out how to find them. And they might be written slightly differently.
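
A concrete version of the date-formatting example, with illustrative names: the slop pattern reinvents the helper per feature, each copy subtly different, so one bug becomes N bugs.

```python
from datetime import datetime

# Slop pattern: each AI-generated feature reinvents the same helper.
def format_invoice_date(d: datetime) -> str:   # generated in invoices.py
    return d.strftime("%m/%d/%Y")

def render_report_date(d: datetime) -> str:    # generated in reports.py
    return d.strftime("%m/%d/%y")              # subtly different: 2-digit year

# What you wanted: one shared utility everything imports, so a format
# change or bug fix happens in exactly one place.
def format_date(d: datetime) -> str:
    return d.strftime("%m/%d/%Y")
```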

Rick:

So if AI built it, can AI fix it?

Jon:

It might be able to fix it, but it's not going to be particularly good at finding it, necessarily. I mean, you could do lots of different LLM calls, one per file: does it contain a function that does this? Is it wrong in this way? Whatever. And it can hallucinate, but...
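
A rough sketch of that per-file sweep. As before, call_llm is a hypothetical stand-in, and you'd still verify every hit by hand, since the model can hallucinate in either direction:

```python
from pathlib import Path

QUESTION = (
    "Does this file define its own date-formatting helper instead of "
    "importing the shared format_date utility? Answer YES or NO, then explain.\n\n"
)

def call_llm(prompt: str) -> str:
    """Hypothetical: wire this up to your LLM provider of choice."""
    raise NotImplementedError

def scan_repo(root: str = ".") -> dict[str, str]:
    """One LLM call per file; collect the files flagged YES for human review."""
    findings = {}
    for path in Path(root).rglob("*.py"):
        source = path.read_text(encoding="utf-8", errors="ignore")
        answer = call_llm(QUESTION + source)
        if answer.strip().upper().startswith("YES"):
            findings[str(path)] = answer
    return findings
```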

Rick:

I promise there's a healthy dose of devil's advocate here. But listen, there are paths I can see organizations going down when they're trying to do things fast: eliminate oversight, eliminate gates. They go, well, okay, just build it and get it out.

Jon:

Yeah. For prototyping, it's amazing. Don't get me wrong, for prototyping and experiments, it's really good. It reduces the friction to seeing it work and knowing if it'll work.

Jon:

But there's a different level when you want to productize it or sell it or depend on it, when it becomes a critical process. That's a different level of scrutiny you need to put it under. I think it's great for shipping small, little things; absolutely do that as much as you can. But then you have to know when it steps up. I have what I call a service definition or a component definition in my repo.

Jon:

And for different places in my code base, I give it maturity scores. As the maturity score goes up, I can rely on it more, but I also increase the security controls I have on that piece of the code base.
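
What a component definition like that might look like; the fields and the control ladder here are an illustrative guess at the shape, not Jon's actual schema:

```python
from dataclasses import dataclass

# Controls ratchet up with maturity: low scores get speed, high scores scrutiny.
CONTROL_LADDER = [
    [],                                                      # 0: throwaway
    ["secret-scanning"],                                     # 1
    ["secret-scanning", "sast"],                             # 2
    ["secret-scanning", "sast", "dependency-review"],        # 3
    ["secret-scanning", "sast", "dependency-review",
     "human-approval", "threat-model"],                      # 4: critical
]

@dataclass
class ComponentDefinition:
    name: str
    maturity: int  # 0 (experiment) through 4 (business-critical)

    def required_controls(self) -> list[str]:
        return CONTROL_LADDER[self.maturity]

print(ComponentDefinition("billing-api", maturity=4).required_controls())
```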

Rick:

I love it. So you sort of pre-thought the risk and threat modeling elements of it. You say, hey, look, if it's in this part of the code base, it's fine. We can do roughly whatever. It doesn't need that much scrutiny.

Rick:

If it's this part, I'm going to pay a lot of

Jon:

attention to it. I'll give you an example. So I have a deployment lifecycle, and I label each microservice with the lifecycle stage it's in. If I tag it as POC, I have essentially no security controls there, very, very little. However, I also have controls that say it cannot be deployed to production. It literally can't even be in my production account.

Jon:

In some cases, it can't even deploy to test environments; it can only run locally. So if I tag it as POC, it can't run anywhere other than on my computer. Then when I tag it as alpha, I can deploy it to a sandbox environment and I have a first level of security controls. When I go to beta, it can go to production with a higher level of security controls, and then GA. Then I have deprecated, where I shouldn't be adding new features to it anymore, but it's still allowed to go to production.

Jon:

And then I have pending deletion, which means I can't push updates. It's flagged for deletion, and I have some cron job that's going to clean it up. So we've got this lifecycle, and with AI tools, I think it's even more important to think about the full lifecycle. Usually with the SDLC, we're thinking about making code, shipping it, iterating on it, improving it.
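
That lifecycle maps naturally to a deploy-time gate. A minimal sketch, using Jon's stage names but with invented enforcement mechanics:

```python
# Which environments each lifecycle stage may deploy to.
ALLOWED_ENVS = {
    "poc":              {"local"},                            # my machine only
    "alpha":            {"local", "sandbox"},
    "beta":             {"local", "sandbox", "production"},
    "ga":               {"local", "sandbox", "production"},
    "deprecated":       {"local", "sandbox", "production"},   # no new features
    "pending-deletion": set(),                                # cron job removes it
}

def check_deploy(service: str, stage: str, target_env: str) -> None:
    allowed = ALLOWED_ENVS.get(stage)
    if allowed is None:
        raise ValueError(f"{service}: unknown lifecycle stage {stage!r}")
    if target_env not in allowed:
        raise PermissionError(
            f"{service} is tagged {stage!r}; deploying to {target_env!r} is blocked"
        )

check_deploy("report-generator", "poc", "local")  # passes
# check_deploy("report-generator", "poc", "production")  -> PermissionError
```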

Jon:

We don't think about deleting it. We don't think about destroying it. But with AI tools, code is more expendable, so we need to think about deleting it when we're creating it, right up front. And I'm going to say this in my talk, so I'm giving away a little bit of it.

Jon:

Do we optimize our code for being deleted in the future? Oh, I love that. That's a good way to start when you're using an AI tool, because it's kind of like TDD. If you know TDD, Test-Driven Development, you're writing your code to be testable. That's good. Well, let's go to the next level and write it to be deletable.

Jon:

Because if it's deletable, it's also testable, and you're managing the full lifecycle. So now it's very easy in my code base to make a proof of concept, escalate it to prod, fail that experiment, decide I need a different thing, get that to prod, and delete the old one. It's very easy to completely swap out a service for a new service that does a similar thing.
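
One common way to "optimize for deletion" is to route every service through a single registration point, so removing one means deleting its module plus one line. A sketch, not a prescription:

```python
from typing import Callable

# The one place services are wired in. Deleting a service = delete its
# module and its one @register line; nothing else references it directly.
SERVICES: dict[str, Callable[[], None]] = {}

def register(name: str) -> Callable[[Callable[[], None]], Callable[[], None]]:
    def wrap(start_fn: Callable[[], None]) -> Callable[[], None]:
        SERVICES[name] = start_fn
        return start_fn
    return wrap

@register("report-generator")   # would live in services/report_generator.py
def start_report_generator() -> None:
    print("report-generator running")

def start_all() -> None:
    for start in SERVICES.values():
        start()

start_all()
```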

Justin:

Yeah. But better.

Jon:

But better. Exactly. Yeah. It's better designed. We chose a different architecture.

Jon:

We were running it on Kubernetes. We think Lambdas are better and we don't want

Justin:

to... A different library, yeah, or a different language. I'll just say, that example where we did some commits where we deleted 5,000 lines and added like 500. Optimization.

Justin:

Wow, exactly. Those are great.

Jon:

Those are wonderful.

Rick:

But, like, so hyper-modularized, basically?

Jon:

Yeah, exactly. And you have to force the AI to do it, because it won't really... well, there are different trains of thought here. You can tell it to make modular code, and what will happen is it will look beautiful and not work. Or you can have it just generate code, and it'll be, like, single big files. Sometimes it'll chop it up a little bit.

Jon:

It'll work better. Enterprise software, yeah. But it can't do the intersection of both very well. Interesting.

Justin:

And it's also terrible at anything HTML, for the most part. It'll do prototypes, like you said. But there are many times I'm like, I'm using this UI framework, use this framework. And it'll be like, okay, yep.

Justin:

Here's Tailwind, you know, or the CSS. Or it'll just throw a whole bunch of CSS into it. Use this new document... and all right, this isn't what I wanted, you know, that type of thing.

Jon:

Yeah. Completely missing the mark.

Joe:

So let's take it back to businesses. What should they be doing, the people who might be in a normal organization? Well, first, do you find that there are a lot of organizations that have somebody who can just riff on it like you just did, as that head of software engineering?

Jon:

So usually there is some sort of, I'd say, primary advocate for AI inside of a development team. There's someone who's just in love with it, doing vibe coding on their personal time and trying to bring it to work. Usually younger? I don't know. It's a mix.

Jon:

There's pressure to apply AI, and then I think there's a reasonable counterbalance from security or a CTO, or even a QA team is sometimes that counterbalance, because they're like, that's great, but... In fact, there was a great GitHub study that said something along the lines of: AI is increasing the speed of writing the code by between 10 and 55%. However, it's adding 46% additional friction to downstream processes like QA and testing, and the code is more than 50% less maintainable when it's in production, because it doesn't have good observability, metrics, logging, etcetera. So great, you've got a 10 to 55% improvement here writing the code, and you're just pushing this external cost to other teams downstream. I think you have to look at the whole picture: what is my net improvement?

Jon:

Am I actually net improving? Okay, maybe it's okay that we have the small micro-outages, because we've got 50% improvement over here and 20% friction over here. But really, we look at the

Rick:

whole thing and you kind

Jon:

of get all of that friction down, and then you can get the speeds up to those thirty, forty, 50% improvements.
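
It's worth making that "net improvement" math concrete. A toy back-of-the-envelope model; all the weights and percentages here are made-up assumptions, not the study's numbers:

```python
# Toy model: relative cycle time = authoring + downstream (QA, review, ops).
authoring_share = 0.3    # assume 30% of cycle time is writing code
downstream_share = 0.7   # the other 70% is QA, review, and operations

authoring_speedup = 0.30  # say AI makes authoring 30% faster
downstream_drag = 0.20    # but adds 20% friction downstream

after = (authoring_share * (1 - authoring_speedup)
         + downstream_share * (1 + downstream_drag))

print(f"relative cycle time: {after:.2f}")  # 0.21 + 0.84 = 1.05 -> net slower
```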

Justin:

Who owns that? Who owns the whole vision? Would that be the product owner?

Joe:

Would that be the CTO or

Justin:

the head of engineering? Because if you're looking at a software architect, they're not looking at that full vision.

Jon:

You

Justin:

know, that type of thing.

Jon:

I've seen a lot of new AI-focused teams or boards, kind of like a CAB but for AI, pop up at companies, and it's like, what's our AI vision? And again, going back to earlier, it's: we have this mandate from the board to do more AI things, so we're going to have lots more ideas. Let's have a way to submit an idea to the board and get representative review from legal and security and engineering architecture.

Joe:

So let

Justin:

me bring it back

Joe:

to... Maybe cloud, SRE, platform teams as well. To bring it back to what a normal organization should do: we're a security podcast, so our audience is probably primarily security practitioners. Who should they be seeking out in the organization, in the greater software development teams, in order to start the conversation? And how do they start it?

Jon:

Start the conversation to do what?

Joe:

Oh, so the security team can feel comfortable that they understand and can help support this vision.

Jon:

I mean, it's a good question, because sometimes the AI is used for troubleshooting production, and then you'd be talking to maybe your SRE or platform team. Sometimes it's writing the code; you talk to your software engineering stack, and maybe you're starting at the CTO. Maybe it's literally doing QA.

Jon:

Maybe it's automated tests, and that team is using it more than other teams. And sometimes it's AI to write the product, and sometimes it's AI in the product. Maybe it is your product owner, because it's going in the product. But if it's AI to write the product, it's probably not your product owner.

Jon:

Maybe it's your CTO or engineering leadership. So it's going to depend a little bit on how you're using it. And back to inventory... I wouldn't say necessarily inventory, but knowing what you're doing. What are the contracts we're signing? What vendors are we using? Where does it show up?

Jon:

Where is it actually being used?

Justin:

I don't think it's just one. And I think everybody needs to almost adopt that kind of entrepreneurship, you know? Like everybody's kind of owning this as a whole and their part of it, you know, type

Joe:

of thing. So we just talked about who to pull together to start the very first AI change advisory board. But I think you just rattled off all those.

Rick:

I think there's another one.

Jon:

Why would

Justin:

you separate it instead of just using the regular one?

Rick:

Oh, you're saying like, why is it AI cab and not?

Justin:

Yeah. Because I hate more

Joe:

Oh yeah, not to create duplication.

Jon:

I would say maybe specialties. To go back to the education thing earlier, a change advisory board might be staffed with people who understand incident management and things like that, but they might not actually understand the potential of these tools, or the downfalls and the pitfalls, and they haven't been trained on it. I would say whoever's on the board, if you're gonna reuse the same one, might need to get training or education and build an... Exactly.

Rick:

What.

Jon:

Right, exactly. It's like a new responsibility for an existing body. Or you could make a new one, staffed with people that have that experience.

Justin:

Then how are the two CABs

Jon:

I would go either

Justin:

way.

Joe:

I would say two CABs for a short period of time, but maybe not even two CABs. It's more of an innovation team for how we're going

Justin:

to use AI. All the people integrated, absolutely. Take all the people you just mentioned, plus some sort of improvement plan or something like that to get there.

Jon:

Also, it might be an intensity or focus thing. This might take a lot of time; you might spend more than forty hours a week on it. And so you might just need a separate body from the similar existing team for the AI-focused initiative, because it's all-encompassing. It's gonna be all that they do.

Justin:

And it's changing so much.

Jon:

Oh my god, this is all that I do. It's all I've done for years now, and I don't feel like I'm always on top of it. There's always a new white paper, new research, a new whatever. Like, there's a new library that has 30,000 stars on GitHub, I've never heard of this thing, I look at the star chart, and it's three weeks old, but I'm just now seeing it. That's happening a lot. That is happening a lot for sure.

Rick:

You asked a question that I think is super interesting which is who's responsible for some of those trade offs that you were talking about in terms of going fast versus shifting issues downstream or whatever. I actually think

Justin:

And I said the product owner, maybe, you know, but...

Rick:

But there might be another one that I don't think we've talked about yet at the C level. I think a COO, if there's a chief operating officer at your organization... oftentimes one of the functions they have is balancing trade-offs that one department's making for the other. Like, what happens if HR needs to do this? What's its impact on manufacturing, or this or that or the other? I actually think there might be a role here, and this is a half-baked thought, so.

Rick:

But I think there might be a role for the COO at that table in terms of trade-offs, depending on, again, how you're using AI: where is the speed coming from, and where is the trade-off hitting? Because if it's all within the remit of one C level, like it's all within IT completely, that's one thing. But if you're vibe coding super fast and it's causing all these customer service issues, that's a totally different department. I actually do think there's probably a COO role to play there in terms of balancing risk and reward cross-departmentally.

Justin:

So that's hard to do interdepartmentally, though, when somebody's regulating the other departments. I think that's a function

Rick:

of a COO in general. Right? It's like to think through As

Justin:

long as it's well defined.

Jon:

Yeah. Yeah, and if you even go back to the people who are using the AI tools and you think about DevOps, we have the two-pizza teams, right? The idea of a two-pizza team is there's a team of people who can be fed by two pizzas, so maybe it's us four people, but hopefully a little bit bigger than that. And they own the whole thing. They write the code, ship it to production, and support it in production.

Jon:

They're going to have the same mentality for AI tools, right? You own the whole thing. If we get increased customer support calls, that has to be owned by the team behind the chatbot handling those calls, and you get measured on increases in calls and on whether people are actually saying it was a good experience and they got their answer. How do you get them to own the whole thing, just like we did with DevOps, but with these AI tools and AI features?

Justin:

Yes. Yeah, but they're not usually the ones taking the calls. Right. They're not. That's the challenge.

Justin:

Exactly.

Jon:

That's what's tough. Especially the bigger the company you

Justin:

are. Right.

Jon:

Exactly. If you're a ten, twenty, fifty-person shop, sure. If you're 5,000 developers and 50,000 employees, it's going to be hard. How do you do all that segmentation? But again, people have pulled it off.

Jon:

AWS has pulled this off. They have two pizza teams and it's AWS. They have tens of billions of dollars in revenue per quarter. It's not easy.

Rick:

There are ways, but you have to think about it.

Jon:

Yeah, right, exactly.

Justin:

Great. Any other final thoughts before we wrap this conversation up? I think we solved AI.

Joe:

I think this is the last podcast you'll ever need to listen to.

Rick:

Yeah, exactly. Perfect. Yeah.

Joe:

My only takeaway is go start sitting with other people, figure out what everybody's doing, figure out if you need other teams and pull a small working group together of just people who are going to go to lunch and talk about how you're getting ahead of this.

Jon:

At a very high level, the way I think about it is: what do we want to accomplish, and what rules do we need to follow? And then, are we doing that? I like it if you can kind of boil things up to this really high level, because what you'll find is there are new rules that pop out of the woodwork that you didn't even know existed until you started asking. So make sure you really understand those rules. And then: are we following them, and are we actually accomplishing the goal, like why we used AI in the first place?

Jon:

If you can kind of juggle those three things, that's a very high-level summary, but it can put you in the right direction. It's like a north star.

Justin:

Yeah, I

Rick:

love that.

Joe:

Sounds good. Hey, cheers, everybody.

Justin:

Yeah, this is a

Jon:

lot of fun.

Justin:

Thanks for coming. Yeah. Right. Thank you, everyone, for joining us. Don't forget to like, comment, and subscribe on YouTube.

Justin:

We'll be releasing this shortly before BSides. So if you see this, definitely get your ticket. Absolutely be sure to show up. It's one of the best conferences in Pittsburgh, if not

Jon:

the best. Yeah. Don't forget my talk at 1 p.m. Yeah. So

Justin:

thank you everyone and we'll see you next time.

Creators and Guests

Joe Wynn
Host
Joe Wynn
Founder & CEO @ Seiso | IANS Faculty Member | Co-founder of BSidesPGH
Justin Leapline
Host
Justin Leapline
Founder of episki | IANS Faculty Member
Rick Yocum
Host
Rick Yocum
Optimize IT Founder | Managing Director, TrustedSec
Jon Zeolla
Guest
Jon Zeolla
Cybersecurity leader passionate about simplifying complex problems and reducing toil in large enterprises. He's an active contributor to the open-source community through the CNCF, OpenSSF, and formerly the Apache Software Foundation. As the founder of Steel City InfoSec, PittSec, and BSidesPGH's parent company, Jon champions collaboration, learning, and community-driven security.