Episode 21: AI Notetakers Are Illegal, GRC Tools Are Lying, and ISO 42001 Changes Everything

Justin Leapline:

Today on the episode: an AI notetaker is sued for collecting voiceprints without consent. What happens when GRC engineering meets AI? A deep dive into ISO 42001, and Salt Typhoon breaches Congress. This is the Distilled Security Podcast.

Speaker 2:

Somewhere right now, someone is digging through folders trying to find the right version of an evidence file for the third time this week. Controls scattered across a dozen systems, owners who left the company months ago, due dates that already passed and nobody noticed. It's how it's always been done, and somehow we just accept it. There is a modern, better way: Episki.

Speaker 2:

Visit us at episki.com/dsp for a special offer.

Speaker 3:

If you're leading a company today, you're not short on ideas. You're buried under them. More tools, more frameworks, more initiatives that were supposed to help and somehow made everything harder. Minus Partners exists for leaders who don't need another thing added. Minus Partners helps leadership teams remove friction that's been normalized over time so execution gets easier, not harder.

Speaker 3:

Less noise, clear

Justin Leapline:

Alright. Welcome back to the Distilled Security Podcast. If you noticed, we have a couple of new things going on in the podcast. We're starting to run some ads. To start off with, we had some of our own companies sponsoring this podcast, which we're truly grateful for.

Justin Leapline:

Thank you, guys.

Speaker 4:

You know? I appreciate it.

Justin Leapline:

You know? But we are open to other sponsors. We're actually looking for some. So if you wanna reach a good GRC and cybersecurity leadership podcast with hundreds of people listening each episode, please feel free to reach out to us. We'd be happy to talk to you and get your ads up on the podcast here.

Justin Leapline:

To dive into the topics here, we're looking at the first topic, BIPA with AI notetakers. And I think, Joe, correct me if I'm wrong, you use this notetaker. Right?

Joseph Wyn:

I use several notetakers. Okay. But, yeah, the one that they're talking about in this article, so I was reading one of the JD Supra articles. They're always great for what's going on.

Justin Leapline:

We just talked about them last episode.

Joseph Wyn:

Yeah. Yeah. I thought so. And BIPA is the Biometric Information Privacy Act, and that's from Illinois. So what's happening is these AI tools are moving faster than the law and policy, and they're starting to do things.

Joseph Wyn:

And what's happening here, it's not just that you're recording somebody, but the recording system can actually turn your voice into a voiceprint Mhmm. And understand who it is and then be able to store that information. All of a sudden, they're saying that's biometric information.

Rick Yocum:

Sure.

Joseph Wyn:

So that's, like, the crux of it. So Illinois is saying if you're doing that, it's more than just consent that you're being recorded, more than just disclosure that you're being recorded; you probably need to be getting consent

Rick Yocum:

Right.

Joseph Wyn:

From the people being recorded, and it needs to be explicit and written, a whole lot of extra things that you may not think to do. Mhmm.

Justin Leapline:

Yeah. I found it really interesting because, me looking at this, and it's always confusing with the law, but, like, who's responsible for it from an agency standpoint? Is it the actual tool responsible to collect that consent, which is who's getting sued in this? Or is it the actual company that employed the tool? And, you know, I don't know if they're getting sued and everything.

Rick Yocum:

I actually looked this up a little bit Okay. Because I saw

Joseph Wyn:

What'd you find out?

Rick Yocum:

So I saw a note about indemnity and specifically some analysis that suggested based on case precedent and based on how liability shields work in practice, it's you as a company on the line.

Justin Leapline:

Ultimately. And that's what I figured. Yeah.

Joseph Wyn:

You mean the company that's pushing the button to say, let the AI notetaker into the call.

Rick Yocum:

Yeah. Because there's two things that can go wrong. Right? There can be an abuse of, say, the AI model itself. Right?

Rick Yocum:

Data that's your data used to train other models, like those sorts of things. The vendor is liable for abuses on that front. But how you use their tools is basically always on you. Every SaaS platform basically says, if you break the law, like, with our tools,

Justin Leapline:

that's on you. Within the terms and conditions, the AI company says, like, okay, we're responsible for this. We're treating your data like this. Yeah.

Justin Leapline:

But how you use the tool, make sure you follow applicable laws and, you know, regulations. Yeah. And type of

Rick Yocum:

thing. And there's, like, a library of precedent already, because it's all tech stuff. There have been tech lawsuits forever that are just being applied to this. And one of the notes in particular I saw, I was like, oh, that makes sense, was around how, you know, companies have errors and omissions insurance for, like, issues if they misuse their clients' data, basically, in a bad way. But

Justin Leapline:

Or use in a way they don't disclose. Right. Because the FTC has gone after people like that that, you know Yeah. Yeah. Yeah.

Justin Leapline:

For whatever data that they're collecting, you know, they're like, oh, you didn't say that.

Rick Yocum:

You know? But if you're the company that runs the notetaker app and one of your clients doesn't get the appropriate, you know, attestations that it's okay to record people or whatever, like, that's their use of your tool. Right. Typically, your insurance won't pay out for that, because it was the client's use of your tool. And then you, as a vendor, are put in a position where you have to fight your client over who owns that liability.

Rick Yocum:

Yeah. So most of the T's and C's just have it written in out of the gate. And, like, all your Microsofts and Googles and Zooms Yeah. Basically, that's how it works.

Joseph Wyn:

Yeah. And so it's just, like, thinking of the craziness of it and some of the stuff you wouldn't expect. So as you think about what you need to do, you know, speaker identity, cadence, emotion, these are things the AI can actually analyze. Do you ever, you know, take a transcript and throw it into AI and say, give me my notes, but at the end, also give me the sentiment of

Speaker 4:

the customer. Yeah.

Joseph Wyn:

Mhmm. Of how they felt about us after that. Yeah. And it's like analyzing Mhmm. All the things, and it's trying to take those pieces.

Joseph Wyn:

Now, if you're just taking a transcript with none of the other, like, cadence and all that kind of stuff, it's just the words, then it's not really getting that. It's just gonna take it from the words. But I never tried this. You've tried analyzing audio and having it do that? That's something maybe to test out.

Justin Leapline:

So I've worked with it. I mean, so, yes, I've seen that from a customer service perspective. They'll actually see if customers are irate or not and escalate it up to, like, people that are trained to deal with irate customers, you know Okay. Type of thing. So I've seen that in play, but I've also seen where banks have used some type of, like, voice authentication where they'll actually try to determine whether this is the user that's called in, before talking about a claim or whatever it is, especially when you're dealing with, like, large money.

Justin Leapline:

Like, I've worked with, like, retirement places and all that stuff, where you're dealing with, you know, six- and seven-figure accounts, and they're like, I'd like to cash out. They use a lot of tools to make sure that

Joseph Wyn:

So every time you call,

Speaker 4:

it's Basically. Big money. Gotcha.

Justin Leapline:

But yeah. So, I mean, I'm not sure they get consent from the end user every time with that, because, you know, they put it into, like, we're allowed to do this per our terms of service

Rick Yocum:

I imagine they would, to prevent fraud, you know. I imagine they would do it upfront. Like, hey, you're opening the account. Oh, do you want extra security? Oh, okay.

Rick Yocum:

Agree to these things.

Joseph Wyn:

Yeah. Those 15 pages of things you gotta sign.

Justin Leapline:

Well, so maybe in some of the contracts, but if you're buying from a merchant or something like that, I mean, you think about it, you call up a call center, they might say, this call is recorded for quality control and all that stuff. And

Rick Yocum:

that's the whole point of the BIPA stuff, because they're like, well, now it's a biometric. It's not just a recording. Right.

Justin Leapline:

Yeah. Yeah. But you can turn any recording into a biometric.

Rick Yocum:

We can now.

Justin Leapline:

That was the joke when we were talking about this topic before. It's like, I got a Sony Walkman. I'm gonna record you. And you're like, well, no no no. It's a voiceprint.

Justin Leapline:

But if I have a recording of you, I can get a voiceprint.

Speaker 4:

You can

Joseph Wyn:

then take it and have AI. You can do the AI later.

Rick Yocum:

Yeah. It's almost like, and I think we might have talked about this on the podcast, how all the encrypted data breaches of the past, when, like, quantum hits, they all become unencrypted. All those problems that had kind of been noted as, oh, it was encrypted when we lost the drives, who cares?

Joseph Wyn:

Yeah. Right.

Rick Yocum:

This is kind of the same thing. Oh, all these recordings that you have, oh, well, you couldn't do large scale biometric analysis on the phone.

Justin Leapline:

On them. You know?

Rick Yocum:

But now you can.

Joseph Wyn:

Now if

Speaker 4:

you know the name of the

Joseph Wyn:

person, you can associate it with the recording.

Speaker 4:

Yeah. Absolutely.

Joseph Wyn:

Yeah. Most companies are assuming, hey, we disclosed this recording, or we disclosed that we are recording. And they think, like, oh, the vendor is gonna handle all the compliance stuff. I don't have to worry about that. And it's just a transcript.

Joseph Wyn:

But what it turns into is, what are they doing with that in the back end that turns it into this extra stuff? And now what's happening is that, you know, it's going from theoretical to real. Plaintiffs' attorneys are actually targeting companies who are using AI transcription and notetaking tools. And so, you know, I'm starting to see companies putting together an extra set of internal policies, putting it into some training, and putting into their employee handbook a little blurb that says AI notetakers may be used, and, whenever we use it, here's what we want you to do.

Joseph Wyn:

And one of the things I'm personally getting in the habit of now is, every time I join a call, the recording is already set up. In Teams, you can say, oh, I want that call recorded.

Rick Yocum:

You just

Joseph Wyn:

hit it in advance. It'll just start recording.

Speaker 4:

Mhmm.

Joseph Wyn:

So now when I jump on, it's just becoming a habit.

Justin Leapline:

Right.

Joseph Wyn:

I'm just asking. I'm like, hey. Everybody good? We're recording.

Justin Leapline:

And at least for the normal person, because I do this exact same thing. I don't care about the, like, the recording or really the transcript. I like the notes, you know. Like, take notes for me. Give me the bullet points.

Justin Leapline:

Give me the summary. Give me what we talked about, and give me action plans.

Joseph Wyn:

Yeah. So the problem is that every time I do that with Microsoft Teams, there's, like, 35 tasks on a thing that should have had three.

Speaker 4:

And I'm like, oh, that's gotta get better. Clean it up anyways. But we're not

Joseph Wyn:

complaining about that right now.

Speaker 4:

We're complaining about a different problem.

Joseph Wyn:

Like, I know you've commented on this. There's the no-harm rule for this too.

Justin Leapline:

Yeah. What damages were actually incurred here? And I think, and you can correct me, it's the way BIPA is set up; they put it in so that it's all basically civil Yeah. Lawsuits and everything.

Justin Leapline:

They've turned it, it's not criminal, you know, it's civil. So now every lawyer and their brother wants to basically sue for any wrongful step. And was there any remediation that actually happened? Because if this participant was totally miffed that they were getting a, you know, a voiceprint taken Mhmm. Wouldn't the first step be, like, call the company, like, can you delete that?

Justin Leapline:

Right. And then if they delete it, it'd be like, okay.

Joseph Wyn:

Yeah. Or that I'm being recorded. Right. And one of the things I see Fireflies doing, and the reason I like the Fireflies tool, although I'll probably just get everything in Microsoft Teams and stop using it. But the Fireflies tool does several things.

Joseph Wyn:

One is they have some good attestations and some good third-party audits of their practices. So I looked into those, and I liked them. They also are doing this thing I kinda find a little annoying, because if you're in a Teams call, it writes to the chat, hey, I'm Fireflies. I'm here.

Joseph Wyn:

Here's what you can do. But it makes it really easy. You can do, like, some keystroke combination, like slash f f for Fireflies and then leave, and it'll just leave. Anybody can do that.

Rick Yocum:

Oh, nice.

Joseph Wyn:

You don't have to be the meeting owner to do that.

Rick Yocum:

That is nice.

Joseph Wyn:

In fact, when we're on some of our IANS faculty calls, some folks, their Fireflies are set to automatically join all their calls. Mhmm. Well, when they do the faculty internal calls, nobody from IANS is on it, so all the faculty can chat. So if you're an IANS faculty member, and I think some of them listen, two of us are in that right now, and we get on those calls, and then you'll see the recording on there.

Joseph Wyn:

And it makes it really easy for somebody to control it. Anybody can say, oh, let's get it to leave. They don't have to be a meeting

Justin Leapline:

Yeah. Host. I didn't know that. That's cool. That's nice.

Joseph Wyn:

Yeah. Same thing with transcripts from Microsoft Teams now; the facilitator seems to have the ability to kick the recording out if you disagree, I think.

Rick Yocum:

One of the practical notes that I thought was interesting when I was looking into some of this stuff was a suggestion, almost like at the bottom of these emails, like, hey, if you receive this in error, you gotta delete

Justin Leapline:

all that crap. Don't tell me the banner thing. Yeah. Come on.

Rick Yocum:

Well, but from a legal perspective

Justin Leapline:

The most worthless security recommendation ever.

Rick Yocum:

Well, because it's not security. It's liability.

Justin Leapline:

Yeah. It still is. Right?

Rick Yocum:

But so it was interesting, because it said, okay, well, you might want to, if you're at a company, have a note that basically says no AI agents are allowed to be used unless they're our AI agents, and those have to follow policy and all that stuff. And so, because some of the things

Justin Leapline:

That's gotta be corporate policy. Yeah. Exactly. I mean, that's tool sprawl that you're, like

Rick Yocum:

It is. But, I mean, companies have meetings all the time with people that aren't employees of that company, and they might bring their own AI agents.

Justin Leapline:

That's just bringing in, you know, your own software, like anything else.

Joseph Wyn:

Right. Or if you're like my company and I'm joining a customer call, sometimes my customer will have their own AI agent or their notetaker join.

Rick Yocum:

Exactly right. And then it's your call. You're the host, but it's their AI agent. Who's responsible for the disclosures, like, what policies

Justin Leapline:

It's whoever brings that AI agent to bear.

Rick Yocum:

I don't know what the legal standing is, but the recommendation

Justin Leapline:

whoever's controlling that software that's being brought in to record.

Rick Yocum:

Well, they're the ones responsible for doing certain things. They own some of the liability.

Justin Leapline:

Like, if they're having Fireflies, as an example, dial in and actually do it, you know, then they're responsible for that. Like, I use Notion a lot. You know, Notion, I think a year ago, started to do Sure. Recording and transcripts and all that stuff. And, you know, I like it.

Justin Leapline:

I can record on any call and nobody knows, you know Sure. Sort of thing. So I don't know if I'm disclosing something. Yeah. Yeah.

Joseph Wyn:

We might need to hey, can we edit this part? Yeah. No.

Speaker 4:

I was gonna

Justin Leapline:

But yeah. It's nice. And the nice thing about it, they integrate with your calendar and everything. They're like, hey, you have a meeting in one minute. Do you wanna start up a note and start the recording and launch into Zoom or Teams, all at the same time, with one button?

Joseph Wyn:

Oh, wow. That's cool.

Justin Leapline:

And it's nice. And like I said, I've never listened to one recording. I don't even, like, they delete the recordings. It's the output. It's the output I'm looking for.

Justin Leapline:

Great. I got all the meeting notes and all that stuff. Yeah.

Rick Yocum:

But back to the disclosure about no agents that aren't ours in a meeting that you set up. If you're in a third-party call or something like that, I agree with you as a matter of practicality, like, who should own that. Mhmm. But legal disputes get messy. Yeah.

Rick Yocum:

And so the point is, it's kind of a fail-safe just to demonstrate your intent to follow the rules more than anything else. Right.

Joseph Wyn:

No. I totally agree with you there. And the real situation is, if I'm on a call with, you know, maybe two of my team members and we have a prospect on, and the prospect has their notetaker join, it's my responsibility in my mind since I'm

Justin Leapline:

You're the host of the call.

Joseph Wyn:

Yeah. Yeah. Senior person on the call. Maybe I'm the host of it. I'm gonna say, oh, okay.

Joseph Wyn:

But I also have the ability in Teams to not allow their notetaker to join.

Justin Leapline:

Yeah. Yeah.

Joseph Wyn:

Yeah. So I leave that sit in the waiting room Mhmm. Along with mine, and I say, hey, is everybody good with

Justin Leapline:

Right.

Joseph Wyn:

The notetaker joining?

Rick Yocum:

That's a good way of

Speaker 4:

doing it.

Joseph Wyn:

And then I just let them all in at once. Yeah. And what I've done with my team is say, well, a lot of times, some of our customers or some of our prospects have their own notetakers join.

Rick Yocum:

Mhmm.

Joseph Wyn:

So be aware of that, and be careful what you say, because we don't have control over the recording. And also be careful because we don't know if they're recording, like Justin does. And maybe they are capturing their own. Yeah. Right.

Joseph Wyn:

So

Justin Leapline:

And you should always, I mean, from a consultant perspective, you have to be okay with it and cautious at the same time.

Speaker 4:

Yep. Yeah.

Justin Leapline:

You know, with both of those. Yeah. You know, from a company perspective, I agree with you. I mean, it's only polite to ask about recording. Like, I like how Zoom does it where, you know, as soon as somebody joins

Speaker 4:

Yeah.

Justin Leapline:

It's like recording in progress. Yeah. You know, type of thing. Like, there's no doubt. Like, it's like, okay, I know right now before I even start, there's a recording going on, you know.

Justin Leapline:

Yeah. And I can be okay with it or not and deal with it, you know, type of thing. Yeah.

Joseph Wyn:

So that actually just reminded me. I'm not gonna say the name of this person or this company, but it's somebody that I've talked to and respect a lot, and they are an attorney who is an expert in privacy. They're also, like, the most senior person in charge of maybe not all of InfoSec, but they cover all the GRC stuff. Okay. And they make sure that all of the external audits they

Justin Leapline:

have. Yeah.

Joseph Wyn:

So they really know what they're doing. And some of the customers that they work with will bring their own recording.

Rick Yocum:

Mhmm.

Joseph Wyn:

And that recording might hear things the wrong way and transcribe Yeah. The wrong amount. So imagine that you're talking about the fee for this service being $50,000, but it records it and hears $30,000.

Rick Yocum:

$5,000 or $500,000 are both bad.

Joseph Wyn:

Yeah. Yeah. Right. And it hears the wrong number. And when you go to a lawsuit, whoever has the best notes Yeah.

Joseph Wyn:

Usually has a better case.

Justin Leapline:

Right.

Joseph Wyn:

And so if you had the best notes but they're wrong, and the other person doesn't have any notes, then you're gonna have less of an argument.

Justin Leapline:

We talked about this before, didn't we? And yeah, I think we were talking about the accuracy and whether you'd be held liable for what some of these notetakers were taking. Right.

Justin Leapline:

Oh, yeah. Yeah. Yeah. I'd like to see the court case for that because, I mean, it's really gonna be circumstantial, you know, with that. Because the next thing after that 50 or 500,000 comment is gonna be, well, show me the contract you signed.

Joseph Wyn:

Right.

Justin Leapline:

And it's like, well, you signed it for 50,000, not 500,000. It's like

Joseph Wyn:

Right. The paper will

Speaker 4:

outweigh all that.

Justin Leapline:

Yeah. Exactly. It's like, I don't care what they said on the phone. You signed an agreement.

Joseph Wyn:

Yeah. But how many contracts do you have that say, here's the price, but then travel expenses, some of the consulting firms do it this way, won't exceed 10% of the entire engagement. Yeah. And so now you don't have a real number.

Joseph Wyn:

And Right.

Justin Leapline:

But that, again, should be in the contract, you know, type of thing. Yeah. I guess if that wasn't in the contract and it came down to what they told me, you know, that's where it would get a little interesting. It's like, well, you also promised to do this in the engagement. We didn't put it in the engagement, but you said you would also look at these IP addresses

Rick Yocum:

Right. Even just things that require verbal approval. Like, I'm working with a bunch of lawyers on a bunch of different things at any given time. And sometimes it's like, oh, okay. So do you want me to take this action?

Rick Yocum:

And that action might be notifying someone in a very formal capacity or whatever. And the bill comes later. Right? Because these people, like, you know, individuals are on retainers or whatever. And I'm absolutely sure there are gonna be issues in the future where the notetaker mishears something and the person you're talking to gets sick, and then someone covering for them reviews the notes and goes, oh, yeah.

Rick Yocum:

You directed me to do x y z. He's like, no. I directed you not to.

Joseph Wyn:

Oh, right.

Rick Yocum:

Like that stuff like that's gonna happen.

Joseph Wyn:

Yeah. Yeah. Well, that's one of the reasons every time I get on a call that has attorneys, the first thing they do is make me kill all the notetakers. Yeah. Yeah.

Joseph Wyn:

They won't let them work

Speaker 4:

for this.

Rick Yocum:

Absolutely. Yeah.

Joseph Wyn:

And so I'm like, alright. Hey, team.

Justin Leapline:

Like physically kill all the notetakers?

Speaker 4:

Yeah. Yeah. Yeah. No. Take them out.

Speaker 4:

Yeah. No.

Joseph Wyn:

I'm like, hey, team. You're back to manual notetaking. Right. Who's typing? Yeah.

Joseph Wyn:

Get this stuff down.

Rick Yocum:

But even if not lawyers, like, I mean, even, like, oh, well, you told this DBA to delete this table and start over. Like, whatever it is. Right? There's just gonna be things that typically require verbals that aren't, like

Speaker 4:

Yeah.

Rick Yocum:

contractual, and that's gonna get messy.

Joseph Wyn:

Yeah. Yeah. So, you know, I think that people just need to start being aware of this. That's why I thought this was pretty interesting. I don't know. It's not cybersecurity related.

Joseph Wyn:

It's more just knowing what's happening. And I found it fascinating just the way it was presented. But you gotta assume that, you know, this AI voice analysis is just gonna trigger biometric laws and

Justin Leapline:

Oh, yeah. You know? Lawsuits.

Joseph Wyn:

Laws yeah.

Justin Leapline:

Yeah. It's it's gonna be a mess. I I really don't like BIPA because of the civil lawsuit aspect of it. It's everybody hungry for a dime, you know, for

Speaker 4:

the lawsuits.

Justin Leapline:

There's a

Joseph Wyn:

whole other topic with the cookies. What's that?

Justin Leapline:

Cookies. Again, another worthless thing. Yeah. But it's like the age verification for, like, alcohol websites. Yeah.

Justin Leapline:

Are you 21? Everyone says yes. Right.

Joseph Wyn:

Well, I'm so glad they stopped asking me to, scroll all the way down to my year of birth.

Justin Leapline:

Oh, those are the worst. I mean, I should actually put it in the meeting notes. There's, like, a worst-consents test that you have to make it through in a minute and a half, and you have to answer everything affirmatively Oh. Based on the questions. And they're all different, and they're all, like, real-world examples of all this stuff, you know, to answer, and it's phenomenal.

Justin Leapline:

I'll put it in the show notes. I forget what it was. It is a phenomenal website that basically just runs you through, like, trying to figure out how to say yes to whatever. That's funny. Yeah.

Justin Leapline:

Funny. Oh, that is great.

Rick Yocum:

One other thing on this, though. Yeah. It's kinda related to this, but not specific. I was thinking there's this interesting outcome that I suspect is gonna start to happen from all this geographically based legislation. Because this is one of those things, like, because it's Illinois, right?

Rick Yocum:

This one?

Joseph Wyn:

Yeah. Yes.

Rick Yocum:

So it just, and it doesn't matter where you are as a company. Right? And it doesn't matter where the servers are or the data is or anything like that. It's, is one of the people that's being recorded, right, in that location? Well, one of the things that's naturally gonna happen is you have things like GDPR that require you to know someone's citizenship, or BIPA requires you to know where someone is.

Rick Yocum:

Like, you compound these on top of each other. You're gonna end up with a bunch of technologists that have a need to collect all sorts of information from a person before they know what rules can and can't be applied. So you almost have to do more monitoring to know what rules legally apply to a person in a given situation. And if this keeps happening with, like, legislation, it's gonna get real messy real fast.
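Show notes: a minimal sketch of Rick's point that you have to collect facts about a person before you even know which rules apply. The jurisdictions, data kinds, and rule text below are illustrative placeholders, not legal advice or a real rules engine.

```python
# Illustrative only: a tiny lookup that decides what consent posture to take
# once you know a participant's jurisdiction and what you're capturing.
REQUIREMENTS = {
    # (jurisdiction, data_kind) -> consent posture (placeholder rule text)
    ("IL", "voiceprint"): "explicit written consent (BIPA-style)",
    ("EU", "personal_data"): "lawful basis plus disclosures (GDPR-style)",
}

def consent_posture(jurisdiction: str, data_kind: str) -> str:
    # Safe default when you can't classify someone: assume the strictest
    # requirement applies -- Rick's "more monitoring just to know the
    # rules" problem in miniature.
    return REQUIREMENTS.get((jurisdiction, data_kind),
                            "unknown -- apply the strictest requirement")

print(consent_posture("IL", "voiceprint"))
```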

Joseph Wyn:

Oh, yeah. So you're gonna end up having, at your company, a notification that says, if you're from any of these 49 states, I can't have a meeting with you.

Rick Yocum:

Right. There's gonna be all sorts of stuff like that. It's like, what's your citizenship and where are you physically located? And what's this and what's this? Like

Justin Leapline:

Well, and then, I mean, we haven't even talked about it, but there's the, you know, either single-party or dual-party consent Mhmm. For recordings and everything, that nobody really follows, you know. Right. But it is technically a real thing. If one party is in a dual-party state, then technically both people have to consent, you know, into it.

Justin Leapline:

So Right. Not just single-party consent.

Joseph Wyn:

And I think the big takeaway for me is figure out if you need to have a policy, figure out if

Rick Yocum:

you need

Joseph Wyn:

to have training

Justin Leapline:

Well, you do need to have a policy.

Joseph Wyn:

And then get your acknowledgments in place. Yeah.

Justin Leapline:

Yeah. And I would say, like, get the authorized tools in place, a tool or tools or whatever it is. And like any other tool, you don't wanna do tool sprawl, you don't wanna support unsupported tooling, you know. It's a lot easier if you have your single stack and deal with it that way, you know. So, I mean, meeting notes are important, and people should be able to do that.

Justin Leapline:

So have one that meets the company's needs, you know, and that's it. You know? Definitely.

Rick Yocum:

I'll drink to avoiding tool sprawl.

Justin Leapline:

Yeah. Yeah. Yeah. Easier said than done.

Rick Yocum:

Right. Agreed.

Justin Leapline:

Alright. Moving on to the next topic here: GRC engineering meets AI. And I think, Rick, why did you bring this topic up

Rick Yocum:

here? Yeah. So the reason I started thinking about this, and this could probably go in a million different directions

Justin Leapline:

Okay.

Rick Yocum:

But the main thing I was thinking about recently was, if I think about the standard CIA triad, right, confidentiality, integrity, availability Mhmm. And I think about where, I'll just lovingly call them this, including myself, security and compliance nerds spend their time. Right?

Justin Leapline:

It's probably the early two thousands, maybe, for the

Speaker 4:

CIA triad. Well, yeah. But I'm not kidding. The fundamentals. Yeah.

Speaker 4:

Yeah. But,

Rick Yocum:

like, where do you spend your time? It's, like, 95%, you know, confidentiality and availability. That's where people spend it. Integrity very rarely gets a lot of attention. Right? And I was trying to think about, like, well, why is that?

Rick Yocum:

It's like, oh, well, was integrity never a big deal? Was it a bigger deal back in the days when, like, nightly batch processing was a huge thing and you could have a whole bunch of data go wrong overnight and the business might not catch it? But now things happen in a bit more real time, and they've moved on. And regardless, now, with the concept of AI and agents more and more being let loose either directly or indirectly, right, indirectly being the business can basically batch process data at will Mhmm. Via certain agents, or you can have autonomous agents doing things, you do run the risk of, basically, data drift in ways that you didn't before.

Joseph Wyn:

Well, yeah, my take on the whole thing is that, you know, when you had systems and you're putting in code and you're going through a good QA process Yeah. Companies have a dedicated QA team. What they're looking for is to make sure that the data integrity that's coming out of the system is right.

Rick Yocum:

Yeah. Rigorous SDLC protects against

Justin Leapline:

that stuff. Especially in financial systems. Oh, yeah.

Rick Yocum:

Or health care or engineering. Like, there's a bunch of places where precision is important.

Joseph Wyn:

Yeah. But with AI, you know, it won't crash the system when integrity fails, but it will confidently give you the wrong answer.

Speaker 4:

Right.

Joseph Wyn:

And so, yeah, what's happening there? So now you're using a tool that has a little bit of flexibility Mhmm. In the output it might give. And a little prompt engineering, a little confusion, a little something might cause it to give you different answers. So you're no longer dealing with that static-processing black box of code that takes data in and out, where you get the expected output at the end.

Joseph Wyn:

Now you're having this AI thing do its work, and you don't know that you're always gonna get the same answer. In fact, even prepping for this, I said, give me a quick summary of this. Yeah. And I said, oh, I want you to include this, and then give me that summary back. Well, it changed the whole thing up when it gave it to me the second time.

Rick Yocum:

Right. Well, and I think an important point is a lot of users in the business see IT as one big monolithic thing, and they see AI as an IT thing that they can leverage.

Joseph Wyn:

Yes. And count on and rely on it. It should

Rick Yocum:

be accurate. And that's the point, because they're used to relying on IT stuff that's gone through rigorous SDLC processes and doesn't have, you know, fuzzy math underpinning a whole bunch of stuff. So now if it gets 80% of the stuff right, well, okay. I always use AP, accounts payable, as an example here. Nothing against AP people, but it's just an easy example.

Rick Yocum:

If you have a bunch of AP analysts that use AI agents to run their stuff over and over and over, like, eventually you could get this concept of data drift that actually has a material impact on financial statements. But it could

Speaker 4:

be this

Rick Yocum:

insidious change slowly over time. So anyway, my whole point in thinking about this was, I think there's gonna need to be a resurgence or rebalancing of the integrity controls amongst the stack as AI stuff becomes more prevalent. Because you can fight some of it with, like, guardian agents and this concept of, like, defense in depth using agents, essentially. But even that, like, how much do you rely on that? And frankly, how much of that do you have in place by default if you just have a bunch of AP analysts able to write their own prompts and basically batch process live?
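Show notes: one way to catch the data drift Rick describes, sketched in Python. The idea is a deterministic guardian check that recomputes what the agent produced from the raw inputs and flags disagreements; process_with_agent is a hypothetical stand-in for your AI step, not any specific product.

```python
from decimal import Decimal

def process_with_agent(record: dict) -> Decimal:
    """Hypothetical AI step: e.g., the invoice total an AP agent extracted."""
    raise NotImplementedError  # wire up your agent call here

def deterministic_total(record: dict) -> Decimal:
    """Guardian check: recompute the total straight from the line items."""
    return sum(Decimal(str(li["qty"])) * Decimal(str(li["unit_price"]))
               for li in record["line_items"])

def audit_batch(records: list[dict]) -> list[dict]:
    """Flag every record where the agent's answer drifts from the recompute."""
    drifted = []
    for rec in records:
        expected = deterministic_total(rec)
        actual = process_with_agent(rec)
        if actual != expected:
            drifted.append({"id": rec["id"], "agent": actual,
                            "expected": expected})
    return drifted  # route these to a human before anything gets posted
```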

Justin Leapline:

Yeah. Yeah. Yeah. And I think I was gonna mention Yeah. One quick story.

Justin Leapline:

I was just recently doing a sale, and I had AI do a number of things, including, like, some of the work breakdowns and pricing and everything. And it gave me a nice little grid, you know, as AI does; it makes it look nice and pretty and broken out. It's like, here are the tables and the columns, format it like this, and here's how I want the hours broken down versus the hourly rate, and sum it up, you know. Yeah. And it did all that.

Justin Leapline:

And then I'm like, I'm looking at the number. I'm like, the total doesn't seem right.

Rick Yocum:

Wait a minute.

Justin Leapline:

I copied and pasted it, put it in a spreadsheet. Yeah. Absolutely. Like, the math was way off, you know, type of thing.

Justin Leapline:

And I'm like, yep. Alright.
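Show notes: the spreadsheet check Justin did, as a few lines of Python. Given an AI-generated work breakdown, verify every line (hours times rate) and the grand total before the numbers go into a proposal. The rows below are made up for illustration.

```python
from decimal import Decimal

rows = [  # illustrative data, not numbers from the episode
    {"task": "Gap assessment", "hours": 40, "rate": 250, "line_total": 10000},
    {"task": "Remediation plan", "hours": 24, "rate": 250, "line_total": 6000},
]
claimed_grand_total = 16000  # whatever the AI printed at the bottom

errors = []
for r in rows:
    expected = Decimal(r["hours"]) * Decimal(r["rate"])
    if expected != Decimal(r["line_total"]):
        errors.append(f"{r['task']}: {r['line_total']} should be {expected}")

if sum(Decimal(r["line_total"]) for r in rows) != Decimal(claimed_grand_total):
    errors.append("grand total does not match the sum of the lines")

print(errors or "all totals check out")
```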

Rick Yocum:

There was a thing we had recently where a team was using some AI to smash together a bunch of data really quickly so it could be presented to some executives. Mhmm. And it got pretty far along, like, almost presented, not presented, but almost presented, before someone realized, oh, this pie chart adds up to 120%, not 100%. Right?

Speaker 4:

Oh, wow.

Rick Yocum:

And if you really wanna kill your

Justin Leapline:

That was revenue. Right? Yeah.

Rick Yocum:

If you really wanna kill your confidence with an executive team Yeah. Right. Like, it's stuff like that. So anyway Yeah. Dangerous.

Justin Leapline:

Yeah. And that's why I think, I mean, I'm not in the development business. I'm more on the consumption side of it, you know, both for my tooling and personally. But I see more, like, Claude and a few other AI tools starting to incorporate using tools more Yeah.

Justin Leapline:

To augment what they're doing. You know? So in this case, if they're adding up something, they might get a calculator, you know, and actually say, add up these numbers and give me the result, and then I'll take that, you know, type of thing, instead of throwing it into the AI model and trying to figure it out. You know? So, yeah, I was just doing something recently with Claude.

Justin Leapline:

They have something called Cowork, which is basically more interaction with your environment and desktop, and a whole bunch of MCP connectors and all that stuff. So you can connect your Salesforce with your Notion, with your Intercom, with your HubSpot.

Rick Yocum:

Oh, like an integration middleware type thing, basically.

Justin Leapline:

And then you have one chat prompt and say, do this.

Rick Yocum:

Oh, neat.

Justin Leapline:

And it will actually execute stuff for you Yeah.

Joseph Wyn:

If you

Justin Leapline:

want, which is phenomenal. And it will actually do stuff locally on the desktop too. So you can say, clean up my desktop, because I have files and folders and all that stuff. Anyways, it's going a little off track. Like, it does a whole bunch of stuff.

Justin Leapline:

I was really impressed. I was using the coding piece of that. Yeah. And it was debugging an API for me, and it's like, well, like, I don't know what's wrong with your code, so let me try this curl command. And it actually reached out to the API directly Wow.

Justin Leapline:

To query it, to say, like, okay, I expect this to come back. Okay. That's solved. Now let me try it differently with this.

Justin Leapline:

Oh, yeah. I see the error here. And now let me adjust your code. And it did this, like, recursive thing where it kept testing

Speaker 4:

Yeah.

Justin Leapline:

It tested my code, and then when there were faults, it adjusted and said, okay, I see what it's looking for now. Let me now go back and redo your code and everything. So, like, things of that nature, that's where the agent base is gonna be phenomenal. You know?

Justin Leapline:

But also dangerous.

Rick Yocum:

Like, I always say, like Yeah. Yeah. The ability to work at scale gives you the ability to screw up at scale.

Justin Leapline:

Absolutely. Yeah. And that's where it's like, yeah, there should be some gates. And Claude, you know, like, I was just playing with them a lot this week, so I'll say their name many times on the podcast here. But they've done a good job of making things kind of safe to do. Like, they put a lot of emphasis on being safe, and there's, like, different modes where it's like, ask every time you run something Mhmm.

Justin Leapline:

Or these commands are okay, but don't, you know, don't run these commands. Or there's like dangerous mode where it's like, just do it. You know? Right. Right.

Justin Leapline:

You know, which has checked in code for me. You know? Sure thing. So, yeah, there's a lot of ways of, you know, kinda doing that. I think yeah.
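Show notes: the permission modes Justin describes, sketched as a concept. This is not Claude's actual implementation, just the general shape of an agent command gate with ask, allowlist, and dangerous modes; the commands listed are placeholders.

```python
ALLOWLIST = {"ls", "git status", "pytest"}   # illustrative safe commands
DENYLIST = ("rm -rf", "drop table")          # never run, even in dangerous mode

def may_run(command: str, mode: str) -> bool:
    """Gate an agent's shell command based on the configured mode."""
    if any(bad in command.lower() for bad in DENYLIST):
        return False                          # hard stop regardless of mode
    if mode == "dangerous":
        return True                           # the "just do it" mode
    if mode == "allowlist":
        return command in ALLOWLIST           # pre-approved commands only
    # default "ask" mode: a human confirms every single run
    return input(f"Agent wants to run {command!r}. Allow? [y/N] ").lower() == "y"
```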

Justin Leapline:

I think it's good. Another story I have on the integrity thing Yeah. You know, with that. It was one of the programs I ran. We had database encryption software on the database. And we were coming to the end of our ROC.

Justin Leapline:

And one of the things that we discovered was, we had rolled out this platform, like, three or four months ago, this database encryption, you know, for, I think it was, like, MariaDB and MySQL Mhmm. Separation there. It came to be that it was causing data integrity issues Oh, yeah. The encryption software. And once we discovered it, you know, they brought it to me.

Justin Leapline:

It was like, okay. We either pass PCI with data integrity issues, or we pull it and Right. Have to find another solution. And, you know, I made the hard call. I was like, you gotta pull it. Like, you know, in business, you can't deal with, like, bad data.

Justin Leapline:

And and it was a back end database and all that

Rick Yocum:

stuff Right.

Justin Leapline:

Right. You know, type of thing. So, yeah, I weighed the cost of it, but I couldn't sign off on our ROC to say we encrypt data at rest, you know, for this point in time Right. You know, type of thing. And it only took about a month.

Justin Leapline:

We got another solution in place and everything, and it worked. But, you know, that was one of those cases. Like, yeah, integrity is pretty important when you're dealing with, like, you know, financial data and all that stuff. And we had to restore the database multiple times while they tried to debug it. They didn't know why they were getting data integrity issues, you know, and then finally narrowed it down to, like, yep, it's the encryption software.

Joseph Wyn:

Yeah. So how long did it take? Do you have any idea how long they were running before they actually realized it? Yeah. It was months.

Joseph Wyn:

And was that creating operational issues?

Justin Leapline:

Like Yes.

Joseph Wyn:

Like, what sort of operational issues could a company think would happen when

Justin Leapline:

Yeah. We were seeing errors coming back on, like, select statements and everything, or it was pulling back, like, garbage, because the data couldn't be decrypted. You know?

Joseph Wyn:

I mean Okay.

Justin Leapline:

You know, essentially, it's trying to pull back encrypted data.

Joseph Wyn:

Did it go as far as,

Justin Leapline:

like, customers were receiving errored receipts or amounts? I don't know about that. Not that I was aware of, at least, but it could've. You know? I mean, this was one of our primary databases, both front end and back end, that we used for our ecommerce Yeah.

Justin Leapline:

You know, into this. So it was front end bringing in customers, and then it also served as kind of a back end to manage those customers as well type of thing.

Rick Yocum:

Finance is one of those places where it's important to have precision. But when it comes to customer-facing stuff, they're usually a pretty critical audience. So Yeah. If you're having issues, they'll probably give feedback. The places that I really worry about are, like, engineering applications, healthcare applications, where maybe there are customers downstream, but it's removed far enough that real damage could occur.

Joseph Wyn:

Well, think about, like, environmental. Yeah. So say you have a water supply system.

Speaker 4:

Yep.

Joseph Wyn:

And part of the process is that you're treating the water with a certain amount of chemical mix.

Rick Yocum:

Absolutely. Yeah.

Joseph Wyn:

And if the AI is supposed to be doing something, or somebody intentionally is able to make something happen

Justin Leapline:

Yeah.

Joseph Wyn:

So that it's no longer safe drinking water that's getting into the reservoir Yeah. Then those are the kind of problems I think we worry about. Absolutely.

Rick Yocum:

And in the integration of systems as well. So, like, one example: I was working a lot with some oil and gas companies a bunch of years ago, and, you know, part of their job is to poke holes in the ground to extract stuff. Right? Well, you wouldn't believe how complicated it is to actually poke holes in the ground to extract stuff. Right?

Rick Yocum:

So there's all these topographical surveys and this, that, and the other, and land surveys, and someone owns the land, and you might be able to pull it up on one side of the property line. But if you pull it up on the other side of the property line, it's a real problem, because someone else owns the material that's being extracted through there. And by the way, it costs, like, a million dollars or some crazy amount to, like, poke a hole sometimes, depending on various factors. So, like

Justin Leapline:

It might not be worth it at that point.

Rick Yocum:

Right. So if the AI makes an error of half a foot, in some cases it's potentially catastrophic from a profitability perspective and things like that. Like, you could have major commercial projects, or even, and I'm not suggesting this is happening anywhere that I know of today, but, like, structural integrity of buildings or civil engineering or things like that. Like, as people use AI more and more, it's a fantastic tool to help people do better. But you need to be able to trust it very thoroughly at the end

Joseph Wyn:

of the day for some of these. Your risk really changes. It's decision-quality risk you're worried about now, over other kinds.

Speaker 4:

And I

Rick Yocum:

say, like, would you let an intern do that themselves? And if not, maybe take a look yourself.

Justin Leapline:

Yeah. And I think, I mean, we're basically talking about guardrails. Yeah. That's right. That's the point here.

Justin Leapline:

You know? So, yeah, I don't think AI should ever like, it will never be the Jarvis running, you know, Stark Tower, you know, like, you know, it's like, oh, yeah. It runs all the day to day operations

Rick Yocum:

Right.

Justin Leapline:

That. There have to be guardrails. Yeah. So you were mentioning, like, the drilling or what's mixing in the water. There are upper and lower bounds, like, a good range that certain elements have to be within.

Justin Leapline:

Absolutely. And those should be vital systems that have nothing to do with AI, you know? Right. Mhmm. In fact, when I first started out of high school, I went into the transit industry, you know, like railways and Yeah.

Justin Leapline:

Transit cars and all that stuff. And we literally had vital systems and non-vital systems. Yeah. Vital systems were on the train. So if something exceeded limits or went wrong on the train, only locally the train brakes went on, you know, kind of thing.

Justin Leapline:

Right. And then we had a whole bunch of things outside, like switching and routing Yeah. Absolutely. And all that stuff that were non-vital, you know. Yep.

Justin Leapline:

But if things went wrong on the train, we were slowing down to a stop, and that's it until we correct it, you know, type of thing. Yeah. So

Rick Yocum:

There's almost like a data category, that's the analogy there. It's like, is this data that must be precise and correct, or is this data that, yeah, you can have, like, a bit more Right. You know, flex with it.

Justin Leapline:

And I think that's where, yeah, you need to have those guardrails too. And it's like, okay, now you two agents go add up the same row, and give me your conclusions blindly, you know. This is where I always

Rick Yocum:

come back to, like, you know, I always, like, treat them as interns. Intern defense in depth is good. Right? Because if two interns argue, you know, one of them is probably wrong. 10 interns is even better.

Rick Yocum:

But at the same

Justin Leapline:

Or both of them are probably wrong.

Speaker 4:

But at the same time,

Rick Yocum:

you know, you as the intern manager, like, determine what they're doing and how they're going about it, and the context they have, and thinking about it. So it's weird that there's even this, like, human error factor that can seriously impact the agentic, like, execution of things. Right?
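Show notes: a minimal sketch of the intern defense in depth idea, with the caveat Justin raises baked in as a comment. ask_model is a hypothetical stand-in for whatever LLM call you use, not a real API.

```python
def ask_model(model: str, question: str) -> str:
    """Hypothetical stand-in: one independent model run."""
    raise NotImplementedError  # wire up your provider here

def cross_checked_answer(question: str) -> str | None:
    """Only accept an answer when two independent runs agree."""
    a = ask_model("model-a", question)
    b = ask_model("model-b", question)
    if a == b:
        # Agreement isn't proof; as Justin says, both interns can be wrong.
        # Treat this as a screen that shifts the odds, not a sign-off.
        return a
    return None  # disagreement: escalate to a human
```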

Joseph Wyn:

Did you guys watch The Pit?

Justin Leapline:

Yeah. No. Yeah. It's fantastic.

Joseph Wyn:

That's the show that takes place in what's supposed to be Allegheny General Yeah. Just North Side of Pittsburgh. And Noah, I can't remember his last name, was on ER. Yeah. Now he's the doctor on this.

Joseph Wyn:

And this last episode I won't give any spoilers, but this last episode

Rick Yocum:

I've not seen the last episode.

Joseph Wyn:

They had a different doctor who's coming in to, you know, hang out and be in charge of the ER for a little while. Yeah. And she brought along some new things. One of the new things was the AI system you talk to instead of writing all your charting. And as they're going through it, this is, like, what's happening during the episode.

Joseph Wyn:

And they're saying, watch this. And then they went through and talked about, you know, all the stuff for the patient, and as they were saying it, it was just writing it down and transcribing it, and then it reorganized it and made it whatever. Mhmm. But it got one number wrong, and it actually had a problem. Oh. And she made a really good point to say, well, it's not 100% accurate.

Joseph Wyn:

You have to review it. And I'm thinking, alright. Well, you get a busy ER, and people are gonna start relying on this stuff.

Justin Leapline:

That's

Rick Yocum:

the problem. Yes.

Joseph Wyn:

Yeah. You do it 20 times in a row or a thousand times in a row, and it never makes a mistake. At what point do you just stop, like, reading it closely?

Justin Leapline:

Yeah. But I guess my counterargument to that would be, how often do the nurses or whoever's taking the notes actually make a mistake? And is it better than that? You know?

Joseph Wyn:

But the difference is that the person taking it, you know, they have personal insurance for that kind of stuff, malpractice insurance. And, you know, you'll still get human error. At what point

Rick Yocum:

does it

Joseph Wyn:

feel more wrong when the AI took the notes and you didn't proofread them and you missed the thing, versus you wrote it wrong because you misheard it? And is one worse than the other? I think the point they were making is it's gonna be so much time savings and

Justin Leapline:

Right.

Joseph Wyn:

So accurate that it'll be less error prone than a person.

Justin Leapline:

Yeah. And I think that's probably right. You know? I mean, studies will show and time will tell Yeah. You know, type of thing.

Justin Leapline:

But we look at that with, you know, self-driving cars now. I mean, you have a Tesla. You know? Like, they are statistically way better in self-driving mode from a wreck standpoint. You know, they would avoid wrecks more often than a human would in the same situation.

Justin Leapline:

In fact, I just saw, last week or the week before, there's a popular insurance company that is now saying, like, if you have a self-driving car, we're gonna give you a discount.

Joseph Wyn:

I need to find that.

Speaker 4:

I need

Joseph Wyn:

to find that because my insurance is so high.

Rick Yocum:

You know what? I've been predicting this for a while. You know what's gonna happen next, though. I try not to be in the habit of predictions, but I can absolutely see this in the future. Fast forward ten, fifteen, twenty years, and I absolutely bet that it will cost a premium in your

Justin Leapline:

insurance To actually drive the car yourself?

Rick Yocum:

Want to drive yourself.

Justin Leapline:

Yeah. Yeah.

Rick Yocum:

Like, once it becomes

Joseph Wyn:

Did you ever get, some of the insurance companies will give you this module you plug into the Oh, yeah. Thing, and it will monitor and then send the insurance company things like, are you going over the speed limit?

Justin Leapline:

Those are crazy. Yeah.

Rick Yocum:

Yeah. Well, I actually

Justin Leapline:

And did you see what Congress just passed? No. I didn't read into it too much, but, basically, car manufacturers are now allowed to put a kill code, like a kill signal, into your car.

Rick Yocum:

I hate everything about that.

Justin Leapline:

Yeah. I just saw that it kinda snuck through and everything. So, yeah, I don't know what the ramifications are, because they already had a little bit of that.

Justin Leapline:

Like, if you had, like, OnStar on it, they could remotely shut down your car.

Rick Yocum:

Right. Yeah. But you choose whether or not you have that network. Yeah. Like, I mean, there's, I don't

Joseph Wyn:

I choose to have a Tesla, and I know they can do that. Because if I take it for service, I'm like, oh, I didn't do whatever, and he's like, don't worry, I got it. And he just pushes some button on his computer behind the thing, and now he owns my car.

Rick Yocum:

Yeah. The option to have the manual thing, I think, is useful. It's crazy. Well, but about the chips

Justin Leapline:

And all the privacy people buy, like, old Chevelles and Yeah. Just go old school. Yeah. No electronics. No chips.

Speaker 4:

I want no chips.

Rick Yocum:

Straight mechanical. Back to cassette players. I had an employee several years ago who put one of those chips in her vehicle, and I remember so vividly, like, fast forward a month after that. She said, oh, yeah, the discounts are over.

Rick Yocum:

And then we're talking, and she's like, yeah, I never should have done this, because apparently I make too many left turns on the way to work. And from an actuarial perspective, my rates are now going up because of the number of left turns I have to make.

Joseph Wyn:

That's dangerous cutting in front of those other cars. Woah.

Justin Leapline:

I was like, so now she just needs to make like, three right turns and go straight. Right? And I was like Right. Right. Right.

Justin Leapline:

It was nuts. So That is nuts. Yeah.

Rick Yocum:

Anyway Yeah. Well, so also, I was thinking about the integrity thing. The other thing that popped into my head on this topic, just really quickly, was as agents do more and more from a GRC perspective, right, there's gonna be automated testing and evidence collection and this, that, and the other. Right? And there could be hallucinations.

Rick Yocum:

And I saw one quote somewhere. It was super funny. It was about vendor questionnaires. Like, oh, if something's automatically filling out your vendor questionnaires for you, you absolutely have to check it. And the quote was, hallucinated compliance is just fraud with extra steps.

Speaker 4:

With extra steps.

Justin Leapline:

Which I thought was pretty funny. Yeah.

Rick Yocum:

But so, one thing, if you have a decent program in place and you have some time and energy, you should start thinking about your reliance and reperformance posture with respect to agentic testing. Right? Like, how much of that stuff do you have to look at yourself? How much of that stuff can you rely upon? How much do you have to reperform?

Justin Leapline:

All that. I've said it before, and I'll say it again. Like, we're turning into an environment where it's AI talking to AI, you know, type of thing.

Joseph Wyn:

Right. Oh, yeah.

Justin Leapline:

So it's like, oh, you send me a questionnaire. I'm gonna have AI just write it, you know, type of thing. And then I'm gonna send it to you, and then I'm gonna ask my AI to summarize what your AI, you know, wrote into it. Yeah. You know?

Justin Leapline:

And it's just gonna be like AI just writing

Joseph Wyn:

Oh, yeah. The major GRC systems are already doing that. Yeah.

Rick Yocum:

Right. Well, I think that happens with, like, insurance claims too, between, like, brokers and underwriters and stuff like that. Like, there are systems in place already that negotiate amongst each other, like, a lot of the things just sort of happen automatically. And then they're, like, looked at by the people at the organizations and all that. But, yeah,

Speaker 4:

that happens.

Joseph Wyn:

At what point are AI governance questions gonna start showing up in your SOC 2 audit? Absolutely. That kind of stuff.

Justin Leapline:

Yeah. SOC 2 will take a while, I think. I don't know. What was the last update? 2017?

Joseph Wyn:

Well, it doesn't mean the auditor won't start deciding they wanna ask you about what kind of risks you're

Speaker 4:

Right.

Joseph Wyn:

Now needing to deal with.

Justin Leapline:

And depending on the system and what they're doing, I guess it could fall into play.

Rick Yocum:

Yeah. The scope and the operations.

Joseph Wyn:

I mean, you take the integrity part, like that Right. Trust services criteria.

Justin Leapline:

Yeah. And confidentiality and all that stuff.

Joseph Wyn:

Yeah. And then start finding out, have you considered AI risk? Yeah. We might get that a little bit later in

Speaker 4:

the show.

Justin Leapline:

Curious. If you're listening to the podcast and you're a CPA firm doing SOC 2s, please let us know if you've done anything with AI. Oh, yeah. I have not talked to any CPA firms that do anything with AI.

Joseph Wyn:

I haven't yet either. Yeah. But one of the things that, you know, was as I was reading through this stuff, I started thinking about, well, if we're really concerned with it being in the GRC realm, then at what point does it become policy, and at what point does it need to be audited as part of your program?

Speaker 4:

Oh,

Joseph Wyn:

yeah. And at some point, you're gonna say, well, this is my design. This is my control. And I think you said one of the topics later is ISO 42001 Yep. Which is all about

Justin Leapline:

And I'll get into that.

Joseph Wyn:

Protecting. But yes. But we'll get into it later. Yeah. But if it's gonna start showing up in those audits, then Yes.

Joseph Wyn:

It's a matter of time.

Justin Leapline:

Yeah. And it's interesting. I'm not gonna talk about it now. Yeah. Yeah.

Justin Leapline:

Let's hold on. Yeah. So I guess, you know, solutions out of this topic here. What should we, like, be focusing on to tell our audience out of this?

Rick Yocum:

I would think if you haven't looked at your overall control set and you're responsible for that stuff, take another look at it and really think about the integrity controls that you have in place, and if they need to be bolstered, changed, or applied slightly differently as AI stuff Yeah. You know, continues to accelerate.

Joseph Wyn:

I'll expand on that and say go through your policies, go through the controls, and see which ones can translate into actual technical guardrails. Yeah. Because there's gonna be things and then I was thinking, like, how is this different from administrative controls versus technical controls?

Justin Leapline:

Mhmm.

Joseph Wyn:

Like, thinking from maybe, like, the HIPAA realm. You have your administrative controls. These are the ones like creating policies, procedures. Yeah. Yeah.

Joseph Wyn:

Yeah. And but administrative control is something a person kinda does, where a technical control is kind of like a preventative control.

Rick Yocum:

And so AI stuff this is a thing that I've thought about for a long time. AI is effectively, like, code at the identity layer. Right? You can sort of treat it that way. So it's this weird blend between again, if I treat it like an intern, it's like an intern robot.

Rick Yocum:

So Okay. Are your controls administrative or are your controls technical? It's like, well, it's a little of both. It's actually this new blended middle ground because I can't fully trust it, but at the same time, it processes way faster than a person could.

Justin Leapline:

Really hard.

Rick Yocum:

And you can point out a lot of stuff. So but to your point, Joe, I think that's a really interesting thing to think through from a practical matter. Yeah. Like, think about your administrative and technical controls and which ones need to blend into AI from both sides of that. And I'd also say from, like, an audit and straight GRC perspective, if you're thinking about these tools, your audit teams are thinking about these tools or, you know, all that sort of stuff, if you're responsible for them, think about in advance, like, how much you need to and want to double check the work, and codify what that looks like upfront.

Joseph Wyn:

Yeah. Yeah. Maybe there's a standard that will help you Get that right. To

Rick Yocum:

that. Right.

Justin Leapline:

Yeah. Yeah. And I'll just add my 2¢ before we go into the drinking bit here. From a GRC engineering standpoint, I think we've only scratched the surface. Like, a lot of it has been automation and pulling controls and validating and simplifying, you know, the biggest pain points, which is what AI is great for.

Justin Leapline:

You know? Things that are repetitive tasks, and simplifying them down to only a few seconds or minutes. So, like, in Episki, we've been putting in a number of AI things over the last few months. In fact, just this week, I put in our own Model Context Protocol server.

Rick Yocum:

Oh, nice.

Justin Leapline:

So Yeah. Now within your AI tool, you can actually hook in Episki and say, give me all the tasks that I have in the tool, or issues, or you know? I have it read-only right now just to test it out, but it will soon be able to, like, recommend good test procedures for these controls and put them in. Yeah. You know?

Justin Leapline:

Or, I want good solid tests based on this, this, and this, throw it into the tool. And all of a sudden, I'm not even coding anymore. You know? I'm just giving it access to another tool to do the task.
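
For the curious, a minimal sketch of what a read-only MCP tool like that can look like with the official Python SDK (pip install "mcp[cli]"). The endpoint, token, and response shape are placeholders, not Episki's real API.

```python
# A read-only MCP tool: the model can list tasks but cannot write anything.
import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("grc-tasks")

@mcp.tool()
def list_open_tasks(limit: int = 20) -> list:
    """Fetch open GRC tasks from the platform's REST API (read-only)."""
    resp = httpx.get(
        "https://grc.example.invalid/api/tasks",   # placeholder endpoint
        params={"status": "open", "limit": limit},
        headers={"Authorization": "Bearer <read-only token>"},  # placeholder
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    mcp.run()  # serves over stdio so an AI client can attach to it
```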

Joseph Wyn:

That wasn't any of the demo stuff you gave me, though.

Justin Leapline:

Oh, no.

Joseph Wyn:

No. No.

Justin Leapline:

Yeah. This is brand new this week. So you'll be able to open up OpenAI or Claude or whatever it may be and basically have a conversation with the Episki stack on, what should I do next? Where should I go?

Justin Leapline:

And eventually, we'll actually integrate more of the chat stuff right into the web tooling of it. But this is kind of a nice shortcut to give that availability. I see that with all the context protocols that are coming out for all these tools. It's gonna be the new API integration. Like, nobody's gonna try to integrate their tool directly.

Justin Leapline:

It's gonna be like, we're gonna integrate with AI through the MCP. And then all of a sudden, I'm gonna be sitting down and saying, GRC wise, why don't I just ask if our AWS instances are doing this? Do our AWS instances have multifactor access on all regions and endpoints? And it's going through. It'll query AWS.

Justin Leapline:

It'll pull back all the stuff, and it's like, you're missing three, you know, and here are the systems and all that. Or, I got a questionnaire from a client. Are they an active customer? Okay.

Justin Leapline:

Let me reach into HubSpot. No. They left six months ago, you know, type of thing. Or, you know, they're not an actual customer.

Joseph Wyn:

And then instead of having to waste my time asking that question, I would hope that Episki gets that built in so it just knows to go and say, oh, I have a control. I have to have MFA everywhere. Well, I'm gonna proactively just go and query all the things you already told me I have access to. Right. And I'm gonna come back and tell you what it is.

Joseph Wyn:

And you wake up in the morning, make your coffee, and there's your list.
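
That MFA check maps to a few lines of boto3, the AWS SDK for Python. This sketch only covers IAM users with console MFA, so treat it as a starting point for the control rather than the whole thing.

```python
# Rough sketch of the "MFA everywhere" check: flag IAM users with no MFA device.
import boto3

def users_missing_mfa():
    iam = boto3.client("iam")
    missing = []
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            devices = iam.list_mfa_devices(UserName=user["UserName"])
            if not devices["MFADevices"]:
                missing.append(user["UserName"])
    return missing

if __name__ == "__main__":
    # "You're missing three, and here are the systems."
    for name in users_missing_mfa():
        print("missing MFA:", name)
```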

Justin Leapline:

Yeah. And it's great for that, but it's not cheap. You know? So every time you're doing that, you're eating up tokens to whatever service. Yeah.

Justin Leapline:

You know? If you're doing that long term, a direct API connection, having actual context and, you know, drilling down to say, query AWS for exactly this, is gonna be better in the long term. But, like, I was noticing when I was doing some cloud stuff, it was actually looking up some tickets in other repos. Yeah. I'm like, maybe this is a bug somewhere in the software stack that we're using.

Justin Leapline:

And it was using GitHub's API access. So it's actually a command line, gh space api, and then it was querying the repos with search parameters. And it returned a JSON string that it then digested and looked through to see if there's other stuff related to the problems we were having.
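
That flow looks roughly like the sketch below, driving the GitHub CLI from Python. The repo name and search term are placeholders.

```python
# Query GitHub's issue search the way the agent did: shell out to `gh api`,
# get JSON back, and digest it.
import json
import subprocess

def search_issues(repo: str, term: str) -> list:
    """Search a repo's issues/PRs via the GitHub CLI and return their titles."""
    out = subprocess.run(
        ["gh", "api", "-X", "GET", "search/issues",
         "-f", f"q=repo:{repo} {term}"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [item["title"] for item in json.loads(out)["items"]]

print(search_issues("example-org/example-repo", "timeout bug"))
```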

Rick Yocum:

Dude, there's so much first of all, I fully agree

Speaker 4:

with you.

Rick Yocum:

Yeah. I think, like, that that this hyper integrated context through MCP is gonna be like a thing. Yeah. But now that I'm seeing that vision that you've shared, I am terrified of like the AI spaghetti diagram.

Joseph Wyn:

Oh my

Rick Yocum:

And the thing I mentioned before is the ability to, like, screw up at scale. Yeah. And that's where, like,

Justin Leapline:

the access to, like, identity access for agents and all that stuff comes in. Like, that's where it's gonna be a whole platform built just for agents, you know, because look at this: if I'm a GRC person performing a GRC role, and let's say that scenario where I'm asking, are they a real customer, I should have some access to basically say yes or no Sure. You know, type of thing. Do I need to know how much they paid over time, or, you know, other information in there?

Justin Leapline:

Or should I have write access to the CRM, you know? Like

Joseph Wyn:

Yeah. I would say no.

Justin Leapline:

Well, I would say no as well. You know? But if I'm in customer service, maybe I would just wanna update a record. Like, hey, I got a new address. Update it, you know, for this customer.

Justin Leapline:

I should have write access, but maybe not full blown. You know? Like, there's so much stuff that it's like, okay. Now I have to do identity, and it's almost on a per task or per agent basis. I read a good paper about it, that the access shouldn't necessarily always follow the person.

Justin Leapline:

The person is a good indicator, but sometimes being the person doesn't mean the agent should have the access. So if I'm a global admin, does that mean I should see everything in HR's or finance's share drive?

Rick Yocum:

Yeah. You might need to see it, but maybe, like, that's an emergency access Yeah. Only because you're a steward and you're a trusted party.

Justin Leapline:

Exactly.

Rick Yocum:

Yeah. And your query shouldn't look at that by default.

Justin Leapline:

Right? Somebody in HR has a full salary spreadsheet, you know, and even though I have access, doesn't mean I should theoretically grab everything back into context.

Joseph Wyn:

Well, that's where you know, so this is getting back to, like, zero trust. Identity

Justin Leapline:

Yeah.

Joseph Wyn:

Is the new boundary. Right? And so thinking about this, instead of having one MCP that has all the access, you have multiple that are just like users: you give them the access they need, and they can only do that job. And so I was starting to think about, alright. Well, yeah, the whole thing, AI won't take your job.

Joseph Wyn:

Someone who knows how to use AI will take your job.

Justin Leapline:

Well, now

Joseph Wyn:

I'm starting to reconsider that because now you create all these things and you think about the scenario. You have your GRC system

Speaker 4:

Mhmm.

Joseph Wyn:

With all that has all your policies and the AI knows what all your rules are.

Speaker 4:

Mhmm.

Joseph Wyn:

And it says, oh, I have a rule to make sure MFA is everywhere, and I'm gonna go and check all this stuff. And as soon as it finds something, instead of you waking up and saying I have to go deal with this report it found, you now pass that off to a ticket system Yeah. That creates a ticket for an engineer to go and adjust that AWS system to turn on MFA. But really, the engineer it goes to is some other department's agent who has the access to do that. And so what's happening, in almost real time, is you just tell it, we need to have these controls in place.

Joseph Wyn:

Here's my, you know, my config profile. Go

Justin Leapline:

do it. Right.

Joseph Wyn:

And it just starts interacting and checking Yeah. Opening tickets, logging the fact that it's happening to create that audit trail and allowing the other system to receive that, make the change, and then report back that it's done, and then it can go recheck and then it can close the ticket.
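
Stubbed way down, that closed loop is something like this. Every function here is a stand-in for a real integration, the compliance check, the ticketing system, the other department's remediation agent, so it's a shape, not an implementation.

```python
# Toy version of the loop: detect -> ticket -> wait for remediation -> recheck -> close.
import time

def find_violations():           # stub: e.g. the boto3 MFA check sketched earlier
    return ["user:jsmith"]

def open_ticket(violation):      # stub: hand off to the remediation agent's queue
    print("ticket opened for", violation)
    return {"id": 101, "violation": violation}

def remediated(ticket):          # stub: recheck the system of record
    return True

def close_ticket(ticket):        # stub: close with an audit-trail comment
    print("ticket", ticket["id"], "closed and logged")

for violation in find_violations():
    ticket = open_ticket(violation)
    while not remediated(ticket):   # in practice, poll on a schedule, not a busy loop
        time.sleep(60)
    close_ticket(ticket)
```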

Rick Yocum:

So this is one of the topics

Justin Leapline:

Wouldn't that be an awesome environment?

Rick Yocum:

topics I talk about a little bit.

Justin Leapline:

It's It's Yeah.

Rick Yocum:

Well, it's agent granularity. Right? So how broad or how narrow are the agents that you have? Is your engineering agent one agent that can do 13 different tasks that all come into that ticket queue, or are there 13 different agents where each one performs only one specific task?

Joseph Wyn:

Yeah. And is it less expensive?

Rick Yocum:

Serious pros and cons

Justin Leapline:

to each.

Joseph Wyn:

And, Justin, when you're saying, like, you're gonna use all your tokens, it's gonna be this expensive. But at the end of the day, is it a $150,000 a year?

Justin Leapline:

No. Absolutely. It's cheaper than a person.

Joseph Wyn:

Yeah.

Justin Leapline:

You know, doing the test almost every single day, you know, type of thing. But I'm saying for measuring from an API call to going through

Rick Yocum:

AI Yeah.

Justin Leapline:

An API call is gonna be a thousand times cheaper, you know, per day. So, yeah. There's places for it, and where you're interacting with context is where AI is gonna play well. If it's just, do this thing, and you know exactly what it is, you can make API calls for that. But if you don't know exactly how to do it, AI could probably figure it out and then execute it for you.

Rick Yocum:

But this is where orchestration layers get

Speaker 4:

Yeah.

Rick Yocum:

Super important. Because that's where, like, you need guardrails on agents themselves to not go crazy in their specific focused task, and then you need orchestration layer guardrails so certain things don't get chained together. It's just like attack chains. Right? It's like, oh, wait. No.

Rick Yocum:

You shouldn't like, this agent shouldn't talk to this agent, shouldn't talk to this agent because that indicates a bad thing's happening. But you look at, like,

Justin Leapline:

the MFA example, like, I think that would be a phenomenal thing if you gave an agent or agents enough power for that. And even if it was just read only from a GRC perspective

Rick Yocum:

Oh, for free.

Justin Leapline:

Say, like, okay. Why isn't MFA turned on on this endpoint here? Is it a Terraform misconfiguration that is in GitHub? You know, or did somebody turn it off? Like, does the Terraform say it should be on, that we're rolling it out and it should be Yeah.

Justin Leapline:

On all the time, but somebody turned it off, you know, and that's where you get to learn what the root cause is here, you know, type of thing. Or maybe it's all manual setup and it just goes off and on per people setting it. Or you can even ask the AI, was it on and it turned off?

Joseph Wyn:

Yeah. Go do a pull request. Yeah. Adjust the Terraform and

Justin Leapline:

Or that. Yeah. And then do a pull request if that's the actual thing. Like, somebody screwed up the config file and it got through. It's like, okay.

Justin Leapline:

Well, you know, that should be it. And then write a new rule for Terraform so that before it rolls out, it makes sure that setting is set. You know? So now you're linting, updating the lints, you know, to actually make sure that when configs go through, this is checked.
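
A pre-rollout guardrail like that can be a small check over the plan JSON that terraform plan -out=plan.out and terraform show -json plan.out produce. The resource type and attribute in the policy below are illustrative stand-ins, since the real rule depends on what that setting is in your stack.

```python
# Fail the pipeline if a planned resource violates a required setting.
import json
import sys

REQUIRED = {"aws_db_instance": {"storage_encrypted": True}}  # assumed policy

plan = json.load(open("plan.json"))
failures = []
for rc in plan.get("resource_changes", []):
    after = (rc.get("change") or {}).get("after") or {}
    for attr, want in REQUIRED.get(rc["type"], {}).items():
        if after.get(attr) != want:
            failures.append(f'{rc["address"]}: {attr} should be {want}')

if failures:
    print("\n".join(failures))
    sys.exit(1)  # stop the misconfig before it rolls out
```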

Rick Yocum:

But this is like the AI That's

Justin Leapline:

where they're talking about oh, sorry. No. You're good. Of, like, compound engineering. You know?

Justin Leapline:

Yes. Exactly right. If you're doing something and you're like, oh, that's a problem, go write it into the file so next time it'll catch it, you know, type of thing.

Rick Yocum:

But so one of the things, though, and I talked about, like, the AI spaghetti diagram before. Right? So a thing that I think is gonna be critically important as, like, these worlds begin to develop and you end up with hyper granular agents assigned to specific controls or tasks that are all orchestrated together is gonna be this concept of, well, how are you building this such that changing one agent or one tool, like, doesn't require changes to the thousand things that everyone in your business has built over the

Speaker 4:

course of

Rick Yocum:

the past six months once you've turned this on. Like, you need, like, those abstraction layers and the concept of source control or, like, whatever the various bits and bobs are.

Justin Leapline:

A big problem actually with AI right now Absolutely. Is that, you know, everybody's teaching it context and memory. But the biggest problem that it has with memory is what happens when things change. Yep. You know?

Justin Leapline:

Like and I just had that the other day. Actually, when I was coding my MCP, I was doing some AI coding with it, and I changed the endpoint. I said, don't make it mcp-server. Just make it /mcp, you know, halfway through me coding. And then ten minutes later, it was trying to go to mcp-server.

Justin Leapline:

Because it remembered that's how I did it at the start, you know, type of thing. I'm like, no. It's /mcp.

Justin Leapline:

Remember we changed it? Oh, yeah. That's right. That's actually what it said.

Speaker 4:

Oh, yeah. Yeah. That cracks me up when it happens.

Rick Yocum:

And I'm sure we'll see some clever things like, oh, just have really good enterprise change control and make sure that all of your individual agents can also watch for change control to see things that might impact them.

Joseph Wyn:

Right.

Rick Yocum:

But, like, you're starting to build these ecosystems that are so fast. And it gets back to a thing that you hit on before, Joe, which is, when everyone starts to rely on the speed that all this automation can provide, it's just as dangerous as being in a car going a 100 miles an hour. If it stops immediately, boy, that's pretty painful. Right? So if you then make a change and it breaks a thousand different agents in a bunch of different ways, that all needs to be part of your DR plan essentially.

Rick Yocum:

Right? Depending on what you have these things doing.

Joseph Wyn:

Yeah. Well, I'm gonna change my takeaway from figuring out which policies need to be

Justin Leapline:

Recorded. Yeah.

Joseph Wyn:

Yeah. Well, you can have both. But my new improved takeaway is like, look, if you're gonna learn, if you're gonna train on stuff, maybe we can get a guest in here who could take us through it. Like, how long did it take you to figure out how you wanted to do this stuff? You've been coding Episki stuff for years, so you've kinda come up with all this. And so your learning curve to be able to get to the point of doing this

Justin Leapline:

Oh, yeah.

Joseph Wyn:

Was, you know, quicker than others'. But if you have a person out there who's not really in the middle of it all and needs to start figuring out, what is the curriculum I follow? What is the way I figure this out? Right. I would love to have somebody come in and just sit down and tell people. Like, say they're in the industry.

Joseph Wyn:

They're here for a couple years. They're like, you know what? I don't want AI to take my job. It won't. Somebody who knows how to do AI will take my job.

Joseph Wyn:

Well, maybe not. It's somebody who knows how to create MCPs and create all the AI. They're the ones that are going to displace people. And once those bots are in place and they displace the people doing these normal things, it's the person who's gonna have the oversight of it all. And this is where, like, the auditor role isn't always the most glamorous role. But all of a sudden, if you're the person who can make sure that the system and all of this stuff is running right, you have to understand so much.

Justin Leapline:

Absolutely. Yep.

Joseph Wyn:

In order to do

Rick Yocum:

that. Absolutely.

Joseph Wyn:

And how do you get into that path? That's what I'd love to have a whole episode on career advice for people that

Justin Leapline:

want That'd be an interesting

Rick Yocum:

That'd be good. We should do

Joseph Wyn:

So, hey, reach out to us if you are the person who could come in and speak to that.

Justin Leapline:

Yeah. I saw an interesting study over the last week, I think it was, on the perception of how much AI is saving your company. Down at the grunt level, it was single digits, like 5%, 7%, whatever it was. Up at the executive level, it was like 35, 40%, like AI is driving efficiencies across the company.

Rick Yocum:

The the gap between expectations and reality.

Justin Leapline:

Oh, yeah. Absolutely. It was an entertaining kinda survey.

Joseph Wyn:

Yeah. Real example. I will instinctively know what I'm gonna do. And then I wanna just validate it. So I'll ask a question, and then I'm like, if I would've just done what I was gonna do, I would have been done.

Joseph Wyn:

But instead, now I'm reading like five pages of AI generated advice. Oh, yeah. And I'm like, half of this doesn't even make sense. This is not I would never do that.

Justin Leapline:

I did something today where I generated something like that, and I think my next three prompts were just: shorter. Shorter. Shorter. Yeah.

Rick Yocum:

It's like the new analysis paralysis is, like, chewing on all the data that AI might give you. That's very interesting.

Joseph Wyn:

One of my favorite takeaways, John Ziola says when he writes prompts, he's like, favor brevity.

Justin Leapline:

Yeah.

Joseph Wyn:

So just write those two words at the end of your prompt. Mhmm. It's so much better.

Justin Leapline:

Yeah. Alright. Well, why don't we go into our alcohol section here? Anybody need a refresh?

Rick Yocum:

I would love more.

Justin Leapline:

So we have for this episode Redbreast, which, if anybody's a Scotch drinker, you know Redbreast very well. They're very popular, and we got the twelve year cask strength. It's dried fruit and lively spice. I think they said apricots and butterscotch and a barley finish. And it is triple distilled, matured in the finest oak casks.

Justin Leapline:

Product of Ireland here. It is delicious.

Rick Yocum:

It is So good.

Justin Leapline:

It is if you know Scotch, it's not as peaty. It has a very subtle

Joseph Wyn:

So you can have a it doesn't have to be from Scotland?

Rick Yocum:

I think this is a this is a whiskey and not a scotch. Right?

Justin Leapline:

Oh, is it? I think

Rick Yocum:

it's an Irish whiskey. I'm

Justin Leapline:

Oh, you're right. Okay. Good. I might have got angry comments after that. Irish whiskey, not Scotch. Yeah.

Justin Leapline:

Could've been all bad.

Joseph Wyn:

Hey. We we could just bleep over all those

Rick Yocum:

those things.

Joseph Wyn:

Justin was just swearing. That's what it was. He wasn't saying anything wrong about this.

Rick Yocum:

This is a really beep.

Speaker 4:

Yeah.

Justin Leapline:

Whenever I say it, it'll be, Irish whiskey. Yes, delicious Irish whiskey.

Rick Yocum:

It's so good.

Justin Leapline:

You know? And yeah. Redbreast, I mean, honestly, anything they do. I even had the Redbreast, the Red Label one Yeah. Which is like twenty five years or something. It's

Speaker 4:

I don't think I've had that.

Joseph Wyn:

Was it better than this?

Justin Leapline:

You know, it's probably, but this is excellent.

Rick Yocum:

You know? This is so good. I don't think I've had anything from them. Not that I've had, like, hundreds of bottles. I've had a decent amount.

Justin Leapline:

Never have I.

Rick Yocum:

I've never had anything from them I don't like. Yeah. Like, it's all very good, in my opinion.

Speaker 4:

So

Justin Leapline:

They're a good one. From an Irish whiskey perspective, if you ever wanna venture into that, definitely Redbreast. I think their standard is the eight, if I'm not mistaken. Somebody will probably correct me on the tubes here. But, yeah, this is a little bit more of their kinda mid tier, I would say.

Justin Leapline:

And, yeah, it's excellent. You get a little bit of smoky flavor, I get Mhmm. On the kinda middle of the tasting palate. Finishes really smooth. What's the ABV on it?

Rick Yocum:

It's, like, 58 and change?

Justin Leapline:

Yeah. 58. So yeah.

Rick Yocum:

Which gives you a little bit of the heat, but it's not overbearing.

Joseph Wyn:

No. It's really Actually, for one

Justin Leapline:

sixteen proof, it doesn't seem like a one sixteen proof. No.

Joseph Wyn:

It doesn't feel that hot.

Rick Yocum:

Yeah. I'm a sucker for vanilla, and I get a ton of vanilla from

Justin Leapline:

that. So Gotcha. Alright, guys. Well, Cheers. Cheers.

Rick Yocum:

Clink. Hope we're doing it.

Justin Leapline:

Need do it.

Speaker 7:

Quick break to hear from one of our sponsors. If you own security, compliance, or risk, and it feels like you're always pushing a boulder uphill, I want you to know about CISO. CISO helps growing companies get audit ready, reduce risk, and stay resilient without drowning in tools, endless checklists, or one time reports that quietly rot the moment the audit ends. This isn't shelfware. It's not drive by consulting.

Speaker 7:

With CISO, you don't just get advice. You get hands on support from real security engineers, GRC specialists, and former CISOs who help you build, operate, and continuously improve your security program over time. Whether you're chasing SOC two, ISO 27001, CMMC, HIPAA, or you're simply trying to get security under control so the business can move faster, CISO meets you where you are. Their managed vGRC model gives you enterprise level expertise without hiring a full internal team or reinventing the wheel. The focus is simple.

Speaker 7:

Clear priorities, practical controls, and measurable progress leadership can actually understand. Visit cisollc.com and start the conversation. Security you can trust, compliance you can prove, and people you can depend on.

Justin Leapline:

Alright. Welcome back to Distilled Security Podcast. Why

Speaker 4:

don't

Justin Leapline:

we finish up here, gentlemen, with talking about we kinda segued into it with a little break.

Joseph Wyn:

This is the AI episode.

Justin Leapline:

Yeah. Exactly. So ISO 42001. For those that are not familiar with it, this is ISO's answer to AI governance. Basically, if you're familiar with 27001, they have their ISMS.

Justin Leapline:

This is their AIMS. AI management system. Yeah. Yeah. Yeah.

Rick Yocum:

And it's really tied to the EU AI Act. Right?

Justin Leapline:

No, it's tighter

Rick Yocum:

It's really close.

Justin Leapline:

Yeah. So it has a lot of those attributes, but I think it would be arguably tighter to 27001. You know?

Rick Yocum:

Okay. Yeah. I I think it's, yeah, I think it bridges the gap pretty well.

Justin Leapline:

Yeah. So it does align to the EU AI Act Yeah. With that. And I think it's even accepted for that. Have either of you guys worked much with this yet?

Justin Leapline:

Yeah. Not worked with it.

Joseph Wyn:

Okay. Yeah. I picked it up and read right through it. Okay. I mean, the main thing is, like, think about how we've been talking about AI risks this whole time.

Joseph Wyn:

Mhmm. And AI introduces risk that goes beyond your traditional cybersecurity. So whereas, like, CIA, confidentiality, integrity, availability, are what you're worried about there. Yep. With 42001, you're really worried about harm.

Joseph Wyn:

You're really worried about AI doing things it shouldn't. And you wanna think about, like, what are the consequences of it? Where 27001 is asking, is information protected, are the CIA risks managed, 42001 is thinking more about what decisions AI influences or automates, who is affected by those decisions, and what harm could occur if they're wrong.

Rick Yocum:

Yeah. Really really tied to, like, reasonability and fairness in a lot of ways. Like, in that way, I see it a lot closer to a lot of the privacy legislation that exists than some of the security legislation. Yeah.

Joseph Wyn:

Yeah. Yeah. Because 27001 is protecting information

Speaker 4:

Yep.

Joseph Wyn:

Where 42001 is looking to govern behavior.

Justin Leapline:

Yeah. Yep. Yeah. And I think for a lot of people, you can extend out your 27001 to also get 42001.

Joseph Wyn:

Yeah. You need 27001, and then you can come along and add on the controls.

Justin Leapline:

That's not the case anymore. I thought you can actually now get a standalone 42001.

Joseph Wyn:

I know you can do that with the 27701 for privacy, but I haven't really looked

Justin Leapline:

into the one for privacy. I thought it was 42001. Maybe. I'm gonna ask AI about AI governance here.

Rick Yocum:

But, yeah, I thought one of the things that, you know, was interesting is it's hyper focused on the risk assessments. And like you said well, I shouldn't say hyper focused, but a major component of it is risk assessments. And like you said, Joe, it's really about, you know, fairness and ethics and those sorts of things. And so I guess one of the takeaways right out of the gate, given that framing, is if you are a cybersecurity practitioner, certainly there are cybersecurity elements in there, because the security part underpins, the same way as privacy, a lot of being able to do things in a safe and fair fashion. But if you're a security team that doesn't typically interact all that much with your privacy teams or legal teams and stuff like that, and you feel like you need to get this or start to walk down this road, you're gonna need to start interacting with your privacy and legal teams, from my perspective.

Joseph Wyn:

Oh, yeah. Yeah. Absolutely.

Rick Yocum:

Like, this is this is gonna be a joint effort. This is not like a sole focus, just cybersecurity person thing.

Joseph Wyn:

Yeah. And I'll blame it on the cold medicine. I think you're absolutely right. The 42001 is a standalone

Rick Yocum:

Yes.

Joseph Wyn:

Cert you can get.

Justin Leapline:

Yes. Yeah. Yeah. AI said so.

Speaker 4:

Yeah. And I told it in

Justin Leapline:

the prompt to agree with me. So it's absolutely

Joseph Wyn:

right. A 100% that's right.

Rick Yocum:

What a good intern.

Justin Leapline:

Yeah. Yeah. Yeah.

Joseph Wyn:

So yeah. With the 42001, you wanna really look at the governance across design, training, deployment, monitoring, change management, decommissioning. Like, these are things that are different than maybe ISO 27001, which is a little bit more control centric.

Justin Leapline:

Yep. Yeah. Yeah. 27001, we have a breakdown in the notes. It's 93 controls, four themes: organizational, people, physical, technological.

Justin Leapline:

42001 has 38 controls across nine domains, and it's AI life cycle focused.

Rick Yocum:

So one of the things, when I was reading through it, that I think is gonna be the hardest for organizations, again, is the concept of AI, just the definition of AI. Right? We know it's not monolithic. It's not just one thing.

Rick Yocum:

Yeah. Right? It's a bunch of different things. So how do you define and scope AI? And obviously, there's some baked in ones that you can use.

Rick Yocum:

But Joe, I'd rely on your ISO expertise here. I'd imagine, like many ISO things, you can kind of define your scope specifically. If you wanna veer from the definitions, you probably could. Is that true to some extent? Like, if you wanna define AI a certain way, could you scope and certify to, well, we're just gonna do AI that, you know, works in this way, or this specific product? Well,

Joseph Wyn:

you do need to pick your scope Yeah. And you need to define, like, what those are and this is very similar across a lot of the ISO standards. And you need to look at what risks you're gonna manage.

Rick Yocum:

Right.

Joseph Wyn:

And here, you're looking at an AI impact assessment. So all those things

Justin Leapline:

To Rick's point, you don't have to include all the AI you're doing. No. No. You can do a subset.

Joseph Wyn:

You gotta include the scope of what you want your certification to cover. Yeah. And it may be, you know, the processes for AI that are happening out of this business unit and not the other one.

Justin Leapline:

Yeah. Or it could be like if I have a tool that has an LLM or AI interactions, maybe I just do the tool because that's what the customer cares about. Right. And that's why I'm getting the certifications. The customer's gonna ask

Joseph Wyn:

Right. Your customer's gonna go to your website and say, I wanna buy this product. And then you're gonna get a certification not for that product being AI secured, but a certification around your AI governance, your AI management system

Justin Leapline:

Within your product.

Joseph Wyn:

That that Yeah. Governs that product.

Rick Yocum:

Yeah. Yeah. But I think that's such a great distinction. Governs that product is potentially different or likely different than governs the organization as a whole. Right.

Rick Yocum:

And and with this stuff, I think it's gonna

Justin Leapline:

be because they could have AI meeting notes that they're not considering. Exactly. Oh, right. Yeah. Might not be part of it.

Justin Leapline:

Hey.

Rick Yocum:

We just got Episki, and it has a bunch of AI stuff in it. But I think it's such a critical distinction because even the definition of AI and then there's another element in here that I think is hard. So once AI is defined, you have to do an AI inventory. Yep. And so, again, if your scope's tight and you know what you're, like, attacking, that's not the end of the world.

Rick Yocum:

But if you're gonna be like, oh, we want our organization to be certified to this, oh, you're gonna be opening some closets and turning over some stones if, in many cases, you have marketing teams doing various AI bits and bobs and finance teams and all these different things.

Joseph Wyn:

You probably wouldn't scope those in because your customer may not be they're not buying

Rick Yocum:

the product. And so, yeah, I think as a practical matter, I guess, like anything, sort of start small. Start with a very specific scope and then potentially expand that.

Justin Leapline:

Interesting. I haven't thought about this. But then, like, if you're integrating a vendor into your product or something like that, and they claim AI, but it's not really AI, you know, type of thing. All of a sudden, you now have to flush out the marketing-only AI versus This

Rick Yocum:

is why the definition matters so much.

Justin Leapline:

Yep. Yeah. Because I think the first thing is defining AI systems in scope. So what counts as AI? I mean, do you count, you know, just something crunching some numbers and doing kind of natural language processing?

Justin Leapline:

That's not making kind of deterministic or nondeterministic choices. Mhmm. It's just kind of parsing what the sentence is mostly that's not AI technically. Sure. You know,

Joseph Wyn:

but let's use your example from earlier. Yeah. You talked about building a quote, building a statement of work, and the AI did a lot of good stuff. I use it for this too. It generates a list of tasks, and then you can estimate I would estimate hours. Yep.

Joseph Wyn:

And then you hope it adds it all up. And you think about it, and you're like, how are you using that? So maybe you're a large company, and you wanna use the AI to, at scale, build SOWs to make all your salespeople a whole lot faster. Yep. So it can happen at a one person company or it can happen at, you know, a 100 person company, and you're gonna use AI to do that.

Joseph Wyn:

So you think about this. What is the harm? The harm that could've happened is you could've just relied on it. Your intuition didn't jump out and say, oh, I'm gonna double check these numbers because that doesn't look right. And the harm then becomes that.

Joseph Wyn:

So if you do your AI impact assessment, you might say, well, I'm gonna build an SOW. I'm gonna let the AI build the SOW. Well, the impact could be, the tasks could be wrong compared to what I need to do to implement this for the customer.

Justin Leapline:

Yeah.

Joseph Wyn:

Or the hours could be wrong, or the math just taking your billable rate times the hours could be wrong, or it's just totaling it wrong. So these are all your risks. These are what could go wrong. Right. And then, what is the harm?

Joseph Wyn:

Well, at this point, the harm is you could've had reputational damage, because you sold them something and you sent them this and they're like, Justin, this doesn't add up. Yeah. What, are you crazy? Right.

Justin Leapline:

And if it's less, they're gonna sign immediately.

Joseph Wyn:

Right. They'll be like, right. And then, well, now you're getting the other harm. Yeah. Maybe you've just committed in this contract to spend the next three months doing this thing and you're gonna get four days worth of

Speaker 2:

pay. Right.

Joseph Wyn:

Well, who in society is being harmed? Your family, because now you're not bringing home the same amount of money. Yeah. And so now you're thinking about all these things.
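
One way to capture that chain of reasoning as a record is sketched below. The fields are our reading of a 42001-style impact assessment, risks, harms, affected parties, mitigations, not the standard's normative template.

```python
# A hypothetical structure for an AI impact assessment record.
from dataclasses import dataclass, field

@dataclass
class AIImpactAssessment:
    system: str                    # the AI use case in scope
    decision_influenced: str       # what the AI influences or automates
    what_could_go_wrong: list = field(default_factory=list)
    harms: list = field(default_factory=list)
    affected_parties: list = field(default_factory=list)
    mitigations: list = field(default_factory=list)

sow_bot = AIImpactAssessment(
    system="LLM-assisted SOW generation",
    decision_influenced="task list, hour estimates, pricing math",
    what_could_go_wrong=["wrong tasks", "wrong hours", "rate x hours totaled wrong"],
    harms=["reputational damage", "underpriced three-month commitment"],
    affected_parties=["customer", "company", "employees' families"],
    mitigations=["human review of totals before anything is sent"],
)
print(sow_bot)
```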

Rick Yocum:

Well, and from an ethics and fairness perspective too, like, what context are you feeding it? Right? If you're feeding it your history of SOWs from the past, maybe your strategy has changed significantly in the past couple years or whatever.

Joseph Wyn:

Maybe you raised your prices.

Rick Yocum:

You raised your prices.

Justin Leapline:

We added a PITA charge onto a customer and then it goes Yeah.

Rick Yocum:

Yeah. The PITA tax.

Speaker 4:

Yeah. But

Justin Leapline:

And then all of a sudden, it actually puts it as a line item, and the customer is like, what's that?

Speaker 4:

Well, it it could it could be that.

Rick Yocum:

But it could also be like

Justin Leapline:

For pita chips.

Rick Yocum:

It could also be kind of insidious, where it's like, well, for various legitimate reasons, contracts in this part of the world seem to cost, like, this much more. Well, now it starts applying that as a surcharge. Oh, well, from an ethics and fairness perspective, now there is, like, weird potential

Justin Leapline:

Maybe that's even a brand thing. You just start losing business because you're, you know, over quoting in other regions that you're not appropriately

Rick Yocum:

Or there could be regulatory issues or all sorts of various things Or discrimination. That's exactly right. Discrimination.

Joseph Wyn:

Well, imagine that you are selling business to consumer, b to c

Justin Leapline:

Yeah.

Joseph Wyn:

And you now are adjusting your price based on a certain model and maybe that the person puts in their zip code and it's a particular area where it

Justin Leapline:

is With mortgages and all that stuff.

Rick Yocum:

Yeah. It gets back to, like, the buyer's context there.

Joseph Wyn:

Yeah. Yeah.

Justin Leapline:

That's a great impact example and all that stuff. Even though yeah. I mean, it gets a little bit into political stuff of that nature, because, you know, you see from the bank side, they're like, well, it's basically in the slums. So anybody in the slums in that ZIP code, like Is higher risk or what? Risk.

Justin Leapline:

Yeah. Exactly. It's like, it's historically cost more. There's more claims coming from that zip code. I can prove it.

Justin Leapline:

But then you get, you know, whoever is on the DEI side of it. They're like, well, but that's 80% Black. So are you looking at that, you know, and that's why you're charging more in that area? Well, you know And

Rick Yocum:

I think that's just the thing that it you you have it the defensibility of it matters. Right. Right? Are you making this are are you doing a calculation and the calculation is fair and ethical and all that stuff? Or are you just saying, well, this is an easy and and I would suggest, like, AI in this case.

Rick Yocum:

Right? Does AI then take this context and say, oh, well, an easy shortcut is if someone lives in this area or if someone looks like this thing or whatever, I can just make this assumption? Then there's a difference, obviously.

Justin Leapline:

I mean, it's interesting. Like, this is how boring of a life I lead. I actually saw on C-SPAN some of those congressional hearings where they brought in some of the bank executives. They're like, we don't even collect race as a determining factor. Like, we don't know what their race is.

Justin Leapline:

So, like, well, you could tell by their name. You know? It's like, okay. Like like like, come on. You know?

Justin Leapline:

And again, you know, they're trying to make a point. The banks are bad, blah blah blah

Rick Yocum:

blah. Right.

Justin Leapline:

Everybody's greedy. You know? I get it. The political game, all

Rick Yocum:

But from an ethics and fairness risk perspective back to 42001.

Justin Leapline:

It's just it's justified. Like, if I could show claims in a certain geographical area, and this is why my rates are what they are and I don't care what your race is. Right. It's like, anybody in that zone gets this,

Speaker 4:

you know.

Rick Yocum:

But yeah. But I'm I guess I'm saying from like an AI perspective

Justin Leapline:

Oh, yeah.

Rick Yocum:

Yeah. Like if the risks are ethics and fairness, like, okay, well, how are you addressing those risks? And then also, even just the concept of legal defensibility.

Justin Leapline:

Yeah. I don't know.

Rick Yocum:

I throw it into the AI black box, I put in all this context, and it spits something back out, is definitely different than, well, here's specifically how we've instructed the AI, and here's how we're mitigating risks that you could point out, and here's how we do

Justin Leapline:

human review processes. You know, for positive or negative, you know, type of thing. Absolutely. In fact, there was a big thing where, like, they were trying to make it more diverse on some of the image generations. You know?

Speaker 4:

Oh,

Justin Leapline:

yeah. And I was seeing, like, some of the people were like, well, give me a picture of, like, the French Revolution and some of the soldiers. And they'd have, like, a Chinese person in there and a black person. I was like, that's not historical.

Rick Yocum:

But That's such an interesting

Justin Leapline:

They were trying to put in more diversity, but obviously, it wasn't coming out as historically, you know, accurate.

Rick Yocum:

That's such an interesting point too, like injected model bias

Justin Leapline:

Right. Exactly.

Rick Yocum:

Which might have unintended consequences in any in any kind of thing. So anyway, any decision making capacity. Right? Like, again, ethics and fairness and defensibility and

Justin Leapline:

all I think there's arguments on both sides, because I really like Elon taking over Twitter and doing Grok, if nothing else, as a counterpoint to the rest of the AI industry. You know? Because he was big on OpenAI is kinda ruining it, because they were putting a lot of that bias in, you know, to try to make it kinda cleaned up. But then you look at, you know, Grok was, you know, like, pretty open with a lot of the stuff.

Justin Leapline:

But then it turned into an angry teenage Nazi girl, you know Right. Within, like, a few weeks on the internet. Oops. You're like, alright. Yeah. We need more guardrails.

Justin Leapline:

You know? Type of thing. So, like, there's arguments on both sides, and you can go too crazy on either, you know, on where you put the right guardrails and where you put biases on either side. It's all biases at the end of the day.

Rick Yocum:

Yeah. It's just yeah. Managing bias. And making sure that you have a consistent and defensible approach. Yeah.

Rick Yocum:

Right? But anyway, I think that defining what is AI and then the inventory of AI stuff in broad scopes is gonna be hyper painful. So, like, narrowing that scope will be a useful thing if you're actually going after this. I also thought one of the things that was interesting was this concept that, you know, you're supposed to redo these AI risk assessments periodically, and that periodically can be time based, but also, like, when major things change. Right.

Rick Yocum:

It just put me in the mind of, like, pen tests and, like, let's be real from a security perspective. Like, how many people actually redo their pen testing when something material changes? It's I mean, that, like, practically never happens.

Justin Leapline:

I can probably count on one hand the amount of times I've seen that through PCI. Right. Because they have the same thing in there. Right. Do this at least annually or upon Or

Rick Yocum:

when a major thing changes.

Justin Leapline:

A significant change and everything.

Rick Yocum:

It's like, well, we define significant as, and then, like, you never have to do it.

Justin Leapline:

Or the QSA never gets it or never determines that it was a significant change.

Joseph Wyn:

And how much control do you have over if the model that you're relying on, if they're making updates?

Rick Yocum:

Yeah. Right. I think that's a fantastic point. And then you're obligated to re-risk-assess because I mean, I understand the chain of logic. Yeah.

Rick Yocum:

I mean, when something could materially impact the decision making processes related to the

Justin Leapline:

risk You're saying any new model that comes out, you need to redo the impact assessment?

Rick Yocum:

I think it depends on your scoping and the risks that you've defined. But if there's the potential for the new model to impact those risks in a material way, you my understanding is you would be obligated to rerun.

Joseph Wyn:

So what I think you

Justin Leapline:

need I think that's kind of crazy on the timing.

Joseph Wyn:

So what I think you need to have is your own procedure. Mhmm. Your own governance that says how you make that decision. Yeah. That's what you have to document.

Joseph Wyn:

Yeah.

Rick Yocum:

And it can be small yeah, to your point, like, medium, large. Like, you know, is this such a major change that we need a full on net new risk review across the board? Or is this small enough potentially that a senior enough person can look at it, think about it, maybe get a second opinion as well, and go, oh, yeah. We're good.

Justin Leapline:

Yeah.

Joseph Wyn:

Yeah. So, Justin, when you're writing some Episki MCPs, are you able to peg that to a particular Yes. Version of Claude that it's gonna use?

Justin Leapline:

Yeah. Actually, we use Vercel's AI gateway. Mhmm. So we have, like, over a 100 models that we can, within twenty seconds, switch to.
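
Since a gateway like that speaks an OpenAI-compatible API, pinning looks roughly like the sketch below. The base URL and model slug are assumptions standing in for the gateway's real conventions; the point is that the model is an explicit config value you change deliberately, not something that drifts underneath you.

```python
# Pin the model behind one config value; switching it is a governed change.
from openai import OpenAI

PINNED_MODEL = "anthropic/claude-sonnet-4"  # assumed slug format

client = OpenAI(
    base_url="https://ai-gateway.example.invalid/v1",  # placeholder gateway URL
    api_key="<gateway key>",
)

resp = client.chat.completions.create(
    model=PINNED_MODEL,  # change this only through your 42001 change procedure
    messages=[{"role": "user", "content": "Summarize my open GRC tasks."}],
)
print(resp.choices[0].message.content)
```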

Joseph Wyn:

So if Claude does a new rev, you can make the decision. So your 42001 governance can say, well, if we switch from a major rev of Claude, or like OpenAI Right. From four to five

Justin Leapline:

Yep.

Joseph Wyn:

And as soon as it went, like, my results were just going crazy for a little bit and, using your words, bonkers. Right?

Speaker 4:

Yeah.

Joseph Wyn:

Yeah. But if you were doing that, you could say, oh, you know what? We're gonna evaluate and you might just do this naturally. Knowing you, you would probably be like, oh, I'm not gonna just switch it to that new one. I'm gonna do some testing first.

Joseph Wyn:

And then to get 42001 around it, you're going to codify some Yeah. Rules or some governance. This is my procedure to decide how I switch from this to that.

Rick Yocum:

Yeah. This is standard change control and best practice stuff. Right?

Justin Leapline:

Yeah. And I think so I guess there should be a process. A full blown impact assessment is where I'm like, okay, that's not warranted. Like, if the process is exactly the same, you're just testing input and output at that point.

Justin Leapline:

You know? Oh, but that that could be

Rick Yocum:

your risk assessment process, like ensuring that input output remains consistent.

Justin Leapline:

Right. Exactly. But that's not the full blown impact assessment. Like, you already know the potential ramifications

Joseph Wyn:

Yeah. You're doing the delta.

Justin Leapline:

Yeah. Exactly. Like, is the same inputs getting the same or better outputs? So, you know, type of thing is really what you're

Joseph Wyn:

looking for. You're building that process, you're gonna

Justin Leapline:

And it should be for any change. Like, it's less of a concern for minor model updates, like 5.1 to 5.2 on OpenAI. Mhmm. But if it's to a whole new model like Opus 4.5, which is Claude or Gemini or whatever, you know, Copilot or something like that, you're switching that, you're doing full blown regression testing on how
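
A bare-bones version of that golden-set regression check might look like this. run_model and grade are stubs you'd wire to your gateway and your own eval criteria; the harness just insists the new model scores the same or better on the same inputs.

```python
# Same inputs through old and new models; fail if the new one scores worse.
GOLDEN_SET = [
    "Draft a test procedure for quarterly access reviews.",
    "Summarize this control's evidence requirements.",
]

def run_model(model: str, prompt: str) -> str:
    return f"[{model}] answer to: {prompt}"   # stub: call the gateway here

def grade(prompt: str, answer: str) -> float:
    return 1.0   # stub: rubric checks, string asserts, or LLM-as-judge

def regression_ok(old: str, new: str, tolerance: float = 0.0) -> bool:
    for prompt in GOLDEN_SET:
        old_score = grade(prompt, run_model(old, prompt))
        new_score = grade(prompt, run_model(new, prompt))
        if new_score + tolerance < old_score:   # require same or better
            return False
    return True

print(regression_ok("old-model-5.1", "new-model-5.2"))
```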

Joseph Wyn:

No. Absolutely. And so, is it gonna affect people, the organization, or society? Right. And so if you're taking those things into account, and your governance is based on the risk that your company can tolerate, you make a decision and you're evaluating that.

Joseph Wyn:

And as top leadership, you're like, you know what? I did evaluate the risk of that. And so, you know, with your GRC tool, you're not going to negatively affect teenagers who are Right. Online Mhmm. Doing stuff that AI might be generating whatever for.

Joseph Wyn:

Mhmm. So, like, these tools that are out there, these AI friends that you can get, or these toys now that are coming with, like, AI responses and that kind of stuff. And they're Yeah. They're storing and they're reaching out and that kind of stuff. Those companies who are building these toys and these products that are gonna use AI

Rick Yocum:

Like Meta has, like, AI people, right, that interact or, like, I think Instagram and all that. Like, I think there's legit Yeah. Accounts that are fully AI run, like, intentionally from

Joseph Wyn:

company that's funny. There was a toy actually out there that was recording stuff and then sending it out, and then the results would come back and it would speak these results, and it was an AI toy for kids.

Rick Yocum:

Oh, I remember that. Oh, what's what what are the BIPA implications? No.

Joseph Wyn:

No. Never mind. We're

Speaker 4:

past that. Yeah.

Rick Yocum:

What were

Joseph Wyn:

you gonna say?

Justin Leapline:

I was gonna say, like, in the mid two thousands, I was setting up fake bots on AIM and all that stuff that responded to people. Oh, yeah. But those were just taking bits and pieces of the sentence and turning it around, like, understanding what the subject and the verb and all that was, and then turning it around into a question. Or, you know, it would actually interact weirdly with people.

Rick Yocum:

If you're a full time marketing exec, you had AI back then. Exactly. But yeah. No. But now, yeah, there's full on

Justin Leapline:

Conversations, yeah. And all that stuff. And it's good and bad. Again, it depends on the guardrails and everything. So we have a little bit of AI.

Justin Leapline:

We use a tool called Intercom for our main kinda customer interaction and support and all that stuff.

Rick Yocum:

You showed me that a little bit. That is a very cool

Justin Leapline:

tool. Very powerful tool, and we don't use it to its full potential. But some of the stuff that's really nice is, like, we can put interactions into, like, Stripe for your payments. So, like, hey, I'd like to make a payment for this even though you can go through the website. If you're interacting and, like, oh, yeah.

Justin Leapline:

I'd like to upgrade. You're like, oh, okay. Here. You know, right in the tool, we'll give you, like, click on this and go pay for it.

Rick Yocum:

That's like an agent that can just help you.

Justin Leapline:

Yeah. That's what it is. You know? It basically has connectors that understand your context, and then it makes the decisions and offers up the tooling that may help or may not. Yeah.

Justin Leapline:

So yeah.

Rick Yocum:

That's very cool.

Justin Leapline:

Yeah. It's pretty good. And then a lot of the AI in fact, I just had it this past week here. There were some potential customers that had a little bit of trouble getting in.

Rick Yocum:

Yeah.

Justin Leapline:

And communicating over the AI. I was like, oh, yeah, let me take a look at it. And I had Intercom create a Linear ticket, which is our development ticketing. Yeah.

Justin Leapline:

It wrote up the ticket based on what they said, filed it. We worked on it. We closed it same day, you know, and it communicated back to the customer that this ticket on their issue was closed Yeah. Yeah.

Justin Leapline:

And all of a sudden, you can actually look in your Intercom and see, oh, yeah, I opened up something and it was closed, and it should be all fixed now. And now you've got this communication path with the customer. They're happy that their problem got solved and fixed. You're happy that you've got basically a workflow to do that and not another, like, email after email.

Justin Leapline:

How much time

Joseph Wyn:

did that save you?

Justin Leapline:

Oh, a ton. Yeah. Yeah. I didn't have to hop from one tool to another tool to write something, to file it, all that stuff. It was just in it.

Justin Leapline:

You did the coding

Speaker 4:

Yeah.

Justin Leapline:

Which I think was AI too, you know, that type of thing. Yeah. And then, finish it up. That's cool. Same day, you know, I've got it fixed.

Rick Yocum:

Well, Joe, so say there's an organization, maybe they have a 27001, maybe they don't. But, like, if an organization thinks they should start to wade into the AI, you know, ISO certification, what should they start thinking about right now?

Joseph Wyn:

Well, they probably already have so if you have a 27001, you already have a lot of things in place. You have your policies. You're already doing your normal processes for risk assessment, management review, internal audit. Like, all these things are very consistent across almost all of the standardized ISO standards now.

Speaker 4:

Mhmm.

Joseph Wyn:

And so, you know, you wanna think about what your governance is. So the nine areas that you mentioned, I believe it was nine or 10 areas. And there are things like your AI governance and accountability. So, clear ownership. You wanna define these things.

Joseph Wyn:

Defining the roles and responsibilities is very important. AI risk management is really talking about the identification of the AI specific risks.

Justin Leapline:

Yep.

Joseph Wyn:

And then that's what leads you into an impact assessment. So if this risk came to fruition, what's that gonna be? What's the harm? What's happening there? You need to understand your AI life cycle management.

Joseph Wyn:

So that's a whole area. You have things like data governance, and then human oversight, transparency Yeah. And explainability. So you wanna have all that stuff documented. And then some of the normal stuff that's the same on a lot of ISO standards: competence, training, awareness, incident management and response, like, what do you do?

Joseph Wyn:

Continuous monitoring and improvement. These are all the same. So if you're already building a program and you already have an ISO certification in some areas Yeah. And you wanted to extend it, you likely can go find an auditor that can do both. So your certification auditor probably can do 42001 on top of the ISO 27001 and 27701.

Rick Yocum:

I was gonna say, that's interesting.

Justin Leapline:

They have to get certified.

Rick Yocum:

Do they?

Rick Yocum:

I was gonna ask that. Yeah.

Justin Leapline:

What's that? They have to be certified.

Joseph Wyn:

Oh, yeah. There are a lot of them out there. I was just talking to... there are a couple of different accreditation bodies, ANAB is one of them, that accredit the certifying auditors. And I was talking to an auditor who actually was the first, I think, anywhere, or at least in the country, to have the 42001 ability.

Justin Leapline:

So I'm curious, because I don't think they can. With 27001, I know this because I was a provisional auditor at one point. One of the requirements to become a full-blown lead auditor was that you needed three audits under your belt.

Joseph Wyn:

Oh, that's a very good question. I don't know the answer to that.

Rick Yocum:

For new certs, I think they, like, waive that.

Justin Leapline:

I mean, they have to, at least for a while. Otherwise there'd be no lead auditors out there that could do it; the existence of them would be nil. Or maybe they'd say you have to be a 27001 lead auditor first. I don't know. I never looked into it.

Rick Yocum:

I think, and this is based on some certifications that I had a long time ago, so my knowledge could be lapsed at this point, but I think you can get your stripes on a different cert. It could be.

Rick Yocum:

To your point, like, you've done enough 27001 audits

Justin Leapline:

you can audit over here

Rick Yocum:

You can as long

Justin Leapline:

as you pass the test. Like they grandfather in the experiential clause over there. That could be. I don't know ISO well enough, because all I did was the provisional audit on 27001.

Rick Yocum:

I was a lead auditor at one point, but it's definitely lapsed.

Justin Leapline:

Like, 27001? Or provisional lead auditor?

Rick Yocum:

I'm full. I did enough.

Justin Leapline:

You did three audits?

Rick Yocum:

Yeah.

Justin Leapline:

Okay.

Justin Leapline:

So anyway, it's just hard to do that. We did that as a security company in the mid-two-thousands. A bunch of us got trained, but I don't think anybody read the fine print on it. It was like, oh, we can't be lead auditors. And we never did pull the trigger to actually hire a lead auditor.

Justin Leapline:

Yeah. So we were just basically a bunch of provisional auditors, you know, provisional lead auditors.

Justin Leapline:

But

Rick Yocum:

Yeah. The training, but not the

Justin Leapline:

We couldn't do anything.

Speaker 4:

Yeah. Yeah.

Justin Leapline:

Like, you can't sign that. We passed the test and all that, but we're provisional. Oh, okay.

Justin Leapline:

Great. Yeah. You can't do anything. Yeah. Like

Joseph Wyn:

Yeah. So I guess the big takeaway here is that if you're going to be running systems... and again, for any audit you're gonna get... let me back up a little bit. Somebody comes and says, should I get a SOC 2 audit? Should I get an ISO certification? I'm like, well, let's talk about why. How's that gonna help you?

Joseph Wyn:

How's that gonna actually make your company generate more money?

Rick Yocum:

Yep.

Joseph Wyn:

How's that gonna be a benefit to your organization? So we figure it out. Like, somebody comes to me and says, oh, I wanna get ISO certified. Alright, tell me, what are all your customers asking for? Every day they ask for a SOC 2.

Joseph Wyn:

I'm like, why are we talking about ISO? Let's talk about SOC 2. Right? So let's just get the right thing figured out first, whatever it is.

Joseph Wyn:

So say you're a company that's looking to build a product that's gonna rely heavily on AI, and you know your customers are gonna be asking you for some kind of third-party vendor risk management, because if they're talking to any of us, we're probably working both sides of that conversation.

Justin Leapline:

I got asked at episki if I had a SOC 2. Yeah. Or ISO.

Joseph Wyn:

Yep. I was on that call.

Speaker 4:

Oh, yeah. And and

Joseph Wyn:

So if I'm guiding a customer who's gonna go buy stuff, I'm telling them: look, you need to have a solid third-party risk management program. You need to go ask all these questions.

Rick Yocum:

Well, I think there's a bucket of questions. Not just a product, but also, like, a service.

Justin Leapline:

Well, it could

Rick Yocum:

be a product that's AI-enabled or a service that's supported by AI. So either way, the clients could come calling.

Joseph Wyn:

And then at the same time, I'll have a conversation with a company who's building these tools, a software tool, whatever it is. And they're gonna say, I keep getting asked all these questions. I keep getting asked if I have a SOC 2, or what kind of external audits I have. And so I can help both sides. Yeah.

Joseph Wyn:

And as we do that, we're saying, well, what do you need? What risks do you need to manage that, if you explain them, are gonna make the customer sign on the line?

Rick Yocum:

Right.

Joseph Wyn:

And if you're an AI company and your customer is really worried about that... at what point do we start telling our own customers: hey, when you do your TPRM, your third-party vendor risk management reviews, you might wanna start asking if your vendors are 42001 certified? Like, has that come up in any of the advice that you've given anybody yet?

Rick Yocum:

It hasn't yet. Although, in one of the programs that I'm particularly close to at a client, we're gonna start doing more vendor risk management work in the AI space specifically. And so I imagine that'll be one of the shortcuts that can be used. Right? Hey.

Rick Yocum:

Do you have this, or do we have to ask you this bucket of questions?

Joseph Wyn:

Right. And if they're actually writing their own AI tools for their customers, and they're relying on somebody else's... Now

Rick Yocum:

the fourth-party risk, all that stuff.

Joseph Wyn:

Yeah. So you wanna understand how it works both ways on that. My advice is, as soon as you think you're gonna need to get ahead of this, start looking at something. I mean, NIST has the AI Risk Management Framework.

Joseph Wyn:

You could use that. You can't get certified on it, because NIST doesn't do certifications.

Justin Leapline:

Has an AI framework. You could

Joseph Wyn:

do that. And you can get some certifications from other places. CSA, that's the Cloud Security Alliance, if

Justin Leapline:

everybody's wondering. And their certification, I think, is coming later. Yeah. They're just rolling out their AI stuff now.

Joseph Wyn:

Like, they have STAR and other certs now.

Justin Leapline:

And I think it's planned that this will eventually be certifiable.

Joseph Wyn:

Yeah. Okay. And so you have these, but then you could always ask a CPA firm, similar to a SOC 2. You can have a CPA firm come in, follow the AICPA standards, and write an attestation, their opinion. It's not a certification.

Joseph Wyn:

It's somebody's opinion of whether your design is good and the controls are effective.

Rick Yocum:

That's a fantastic note. You're absolutely right.

Joseph Wyn:

And you can do that for anything. You can make your own framework. You can have the Rick and Justin framework for how to drink whiskey on a podcast, and you could have an auditor come in. You say, here's all of my design and here is our evidence.

Rick Yocum:

If you're a third-party auditor, we'll do

Joseph Wyn:

some evidence

Rick Yocum:

testing. Supporting this.

Justin Leapline:

Yeah. And so, a certification once a month?

Speaker 4:

Is that what?

Joseph Wyn:

Yeah. No, not a certification.

Justin Leapline:

Not a certification. An attestation. Yeah. So

Joseph Wyn:

yeah. People get all hung up on, like, a SOC 2, all this stuff. At the end of the day,

Rick Yocum:

they just want a third-party opinion.

Joseph Wyn:

A third-party opinion usually gets you to where you need to be. So whatever it is you're doing, if you're doing it the right way and your design's solid, you have a framework you can pull from that somebody can look at and say, that makes sense. A SOC 2 is nothing more than some specs for what that design ought to look like. It's just a structure. Same thing.

Joseph Wyn:

Yeah. NIST CSF is somebody's opinion of what a good security framework is, and there's no reason the Distilled Security Podcast couldn't go out and audit some company, and we could write an opinion. Joe, Justin: no deviations noted. Yeah.

Joseph Wyn:

And so, like, the three of us could just go and write that, give them the paper, and they could say, hey, we got this. Well, great. It's the Distilled Security logo right there on it. And if that closes the deal with 20 customers, that's legit.

Justin Leapline:

Yeah. It's good enough. I had a customer

Rick Yocum:

So if you're interested in sponsoring... Yeah.

Justin Leapline:

I had a customer one time that got a SOC 2 from a data center or something like that, and we were reviewing it for, I think, PCI or something. And I opened it up and started going through: executive summary, blah blah blah, and I'm going down to the controls, and nothing's there. They deleted the entire bottom section of the SOC 2.

Rick Yocum:

I love the SOC 2 that should have been a SOC 3.

Speaker 4:

Yeah. And I

Justin Leapline:

And I'm looking at this. I'm like, where are the controls? They're like, what are you talking about? I'm like, there should be controls here. And we reached out to the data center company.

Justin Leapline:

They're like, we consider that confidential. I'm like, just be honest. You had a deviation. It got listed in there.

Justin Leapline:

You deleted it. Come on. Like... if those were

Rick Yocum:

all green checks, would you have deleted it?

Justin Leapline:

Yeah. Exactly. Yeah. Yeah.

Joseph Wyn:

Yeah. And that's another thing. You know what? We've been talking about this for a while. Stay tuned for a future episode.

Joseph Wyn:

We're gonna do one where we talk about SOC 2.

Rick Yocum:

Is it true? Yeah.

Joseph Wyn:

How to evaluate it, what's important, what doesn't matter, when to use it.

Justin Leapline:

So, yeah, I might have to go through my hit list of awful SOC 2 reports and other references. I think that would be fun. I won't name names, you know, or I might get

Speaker 4:

in trouble.

Joseph Wyn:

Well, the best thing is, like, in some of the notes that we started to put together already, the first line was "the SOC 2 certification."

Justin Leapline:

Oh, no. Yeah.

Speaker 4:

I'm like

Joseph Wyn:

and that was obviously AI-generated.

Speaker 4:

So we gotta fix that AI.

Rick Yocum:

Well, just one quick takeaway for me on the 42001: because the risk assessments are so closely aligned with ethics and fairness, as opposed to security, I super strongly recommend, as I noted before, leveraging the processes and/or tools that already exist in your privacy or legal teams. These risk assessments are gonna mirror those a lot more closely than they're gonna mirror your typical cybersecurity assessments. So, strongly recommend

Justin Leapline:

It's basically risk. You're identifying risk in the AI process.

Rick Yocum:

And it's very human-centric, risk to people. Like, it's AI, but it's really targeted towards

Justin Leapline:

controls and putting guardrails in, for testing, for validation, whatever it is. Yeah.

Joseph Wyn:

Yeah. 27001 secures your data, and 42001 governs your AI decisions.

Rick Yocum:

Yeah. So anyway, I highly recommend reusing the tools and processes on the legal and privacy side; that's probably gonna be an easier route than trying to reuse a separate risk assessment process from the security side. You'd kinda be shoehorning it in.

Justin Leapline:

Yeah. Any final thoughts?

Joseph Wyn:

No. I think we hit on a lot.

Rick Yocum:

Yeah. This is great.

Justin Leapline:

It was a good time and everything. Alright, everyone, thank you for joining us. Don't forget to like, comment, and subscribe. We'd love to hear from you.

Justin Leapline:

So don't forget to comment on all that. Don't beat me up on the Irish whiskey versus

Rick Yocum:

We caught

Justin Leapline:

it. Yeah. Yeah. We caught it.

Joseph Wyn:

Self correct. We'll bleep it out. Bleep it all out. All those are swear words. He was totally swearing.

Justin Leapline:

We'll redo everybody with AI voice stuff. It will all be correct and probably funnier that way. So

Rick Yocum:

AI, remake this podcast.

Justin Leapline:

Yeah. Exactly. Alright, everyone. Thank you so much, and we'll see you next time. Bye.

Creators and Guests

Joe Wynn, Host
Founder & CEO @ Seiso | IANS Faculty Member | Co-founder of BSidesPGH

Justin Leapline, Host
Founder of episki | IANS Faculty Member

Rick Yocum, Host
Optimize IT Founder | Managing Director, TrustedSec