Episode 2: Tailoring Security Frameworks & Leveraging AI

Justin:

Alright. Welcome to Distilled Security Podcast episode 2. My name is Justin Leblin. I'm here with Joe Wynn and Rick Yoakam. Thank you for joining us.

Justin:

Don't forget to like, comment, and subscribe to our podcast, where we do a monthly episode for everyone. Today, we got a good lineup here. We're gonna be talking a little bit about frameworks, how to implement them, a little bit of AI because it's popular, and a few other topics that are of interest, and we're gonna be reviewing the bourbon that we have today. So first thing I wanted to start off with, I think is of interest to all of us and everything, is frameworks. A lot of conversations about frameworks, how they're implemented into organizations. People like to pick 1 and say, is this the best 1 for me and everything?

Justin:

So curious to get your guys' thoughts on this. Joe, I know you had something recently with this. You wanna start us off? Yeah.

Joe:

The conversation I had, it was actually just today, was about picking the right KPIs in order to drive the security program forward in the business. And while that was part of the topic, it was also, well, the framework they wanted to align to was NIST CSF. And because the 6 functions are so clean, I could keep everybody's attention. But if they start going into, like, ISO or all the other things, they start to lose focus. So they wanna keep to NIST.

Joe:

Now the idea

Justin:

was like the nice clean buckets and everything. Identify, protect, detect, respond, recover. And

Joe:

1 of the questions was, well, if you're using this, where do you find, like, the right kind of KPIs to drive the measurement of this so you can keep everything moving forward?

Rick:

Right.

Joe:

And well, the thing is NIST CSF wasn't built as an assessment tool, nor was it built with, like, deliberate KPIs. Mhmm. And so the conversation went on for a bit, and we finally got to other things you can use. So you can reverse engineer some of the other NIST guidance that would give examples of what's going on, and that might help. But also, like, CIS. CIS is out with version 8 now, but they haven't published their metrics.

Joe:

But CIS 7 has a metrics file that you can open in Excel and in PDF, and it gives you a lot of different ways to go about measuring at different target levels.

Rick:

Right.

Joe:

And so the nice thing with NIST CSF is it has the informative references. You can connect it to the CIS and then connect it to the parts that you wanna use. So that's a long way of getting to a way to use KPIs from other sources, if you're doing that. So that's the example I had from the day.

Justin:

Gotcha. So when you're doing that, you're essentially profiling. Like CSF has a profiling component to it as well, where you're doing that. But that's essentially what you're doing with CIS is profiling, seeing where you're hitting and what controls. And CIS has that tiering as well, you know, with that, which CSF does not.

Rick:

So Yeah. They have the 1 to 4, too.

Justin:

1 to 3. CSF? CSF

Joe:

has those

Justin:

oh, CIS. Yeah. CIS has implementation This episode is gonna be acronym

Rick:

deep here. CIS has, like, implementation groups 1, 2, and 3. Right. I believe CSF comes built in with, like, 4 tiers

Justin:

Tiers. Right. Yeah. And those are more target levels, and they're

Joe:

also very confusing because you gotta read the document closely. Yeah. But what they did nicely in the latest CSF that they didn't do in 1.1 is they wrote these, like, quick start guides. Mhmm. And they give a lot of detail written in a whole lot easier to read language Yeah.

Joe:

So that you don't have to be a data scientist to understand it.

Rick:

CIS has some really good ones too. They have 1 for, like, mobile. They have 1 for cloud. They have very specific ones beyond just, like, the configuration baseline stuff. They actually have governance type guidance associated with the controls, which can be super helpful if you're in a specific niche or trying to figure something out.

Justin:

So which 1 should I pick? Tongue in cheek.

Joe:

No. That's great. And what I always like to do is say, well, why don't you use your controls and pick whichever control framework works best for you in order to manage the risk that you've got. Now how do you get to that point? Well, I always go back to, and I'll probably repeat this because I keep going back to it, ISO's methodology as 1 of my favorites, because it starts with who cares, your interested parties, and then you go on to what are your assets in the scope that they care about.

Joe:

And now let's take those assets into a risk assessment, and let's apply controls. And what controls do you apply? Well, you could pick the ISO controls, but you don't have

Justin:

to. 27000. Right?

Joe:

27001, and Annex A controls. Or you could pick controls from 800-53 or any of the other 800 series or CIS. Then the next thing you do is you memorialize those control decisions inside your policies.

Justin:

Mhmm.

Joe:

And then from there, you start measuring the effectiveness of your policies. And that's how we loop back around to the best way to figure out the right KPIs to use.

Rick:

Right.

Joe:

The ones that handle the policy enforcement the best.

Rick:

Right. Right.
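The loop Joe describes, controls memorialized in policy and then measured for effectiveness, can be sketched as a toy KPI computation. Everything below is hypothetical (the 30-day patching SLA, the asset names, the numbers); it's only meant to show the shape of a policy-derived KPI, not any specific framework's metric:

```python
# Hypothetical policy statement: "Critical patches are applied within 30 days."
SLA_DAYS = 30

# Made-up asset data: (hostname, days taken to apply the last critical patch).
patch_lag = [("web-01", 12), ("web-02", 45), ("db-01", 9), ("jump-01", 31)]

def patch_sla_kpi(records, sla_days):
    """KPI: percent of assets whose last critical patch met the policy SLA."""
    met = sum(1 for _, days in records if days <= sla_days)
    return round(100 * met / len(records), 1)

print(f"Patch SLA compliance: {patch_sla_kpi(patch_lag, SLA_DAYS)}%")
# 2 of the 4 made-up assets met the 30-day SLA, so this prints 50.0%.
```

The point is the direction of derivation: the KPI falls out of the policy statement, rather than being bolted on from the framework.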

Justin:

But once you decide on that, are you stuck with that framework then? Then I can't cheat.

Rick:

Yeah. Yeah.

Joe:

What do you think?

Justin:

So, I just had a call actually today with this, and I was talking to the customer. And if you're anywhere near approaching, like, a maturity level, you're gonna be having multiple interest groups, you know, outside and potentially inside, you know, for this. So privacy, compliance, you know, regulatory, contractual compliance, whatever it may be, you're quickly gonna get out of a contained framework, you know, with that. And as you mentioned, there's advantages to, you know, ISO 27000, where it's more process focused, you know. Make sure you have the continual engine running, whereas a lot of other ones are control focused, you know, with that.

Justin:

So that gives you an advantage looking at the program holistically and continuing to evaluate it. Other ones have controls that address that, you know, but it's not as formalized, you know, into that perspective. But ISO is very heavy on documentation, and plus or minus, do

Rick:

you need all that documentation? Not unless you're getting audited.

Justin:

Right. Exactly. Yeah, if you're getting the certification.

Justin:

You know, so, you know, there's pros and cons to that. But I usually recommend, if I was designing from scratch, come up with a base of ISO and then blend your own controls into that, because there are gonna be a lot of interest groups into that. And actually, 1 of the things I commonly point to is, like, UCF, the Unified Compliance Framework. They're the paid-for blending of everything, all the laws. You know, that obviously costs some money.

Justin:

But SCF, the Security Controls Framework, is a free version of that. And it's often a great start for people. Like, if you need multiple authoritative sources from an interest, I'll basically filter on, tell me what you need to comply with. This, this, privacy, this, this, this. Okay.

Justin:

Let's sort it down to what that control base is, and then we can utilize that to upload it into a GRC tool. And there's your start, and you already got the blending. And for the future, you can just add to it, and maybe you have 95% of the controls already there, and here's 10 more controls that you're missing because Australia, you know, popped up something, you know, or something like that. Done.

Rick:

Well, and I love that point because it's actually 1 of the things that I see people get wrong all the time. It's like they take a controls framework, and then they're afraid to touch it. Right? It's like they're afraid to Right. Modify a control, or shift a control, or remove a control, right?

Justin:

It's not that they'd rather not, but they don't

Rick:

even think about it. They think it's this immutable thing. It's like, well, no, you're never gonna do that control and there's no risk to not doing the control, or you've mitigated it some other way, just get rid of it. You don't need to give yourself a failing grade every time. You need to memorialize it, to your point.

Rick:

Have some documents somewhere that says why you're not doing it. Right?

Justin:

The same

Rick:

way that you want some documents that say stuff that you're adding or stuff that you're changing. But, like, I think a lot of people, even ones that are high up decision makers in organizations, right? They see, oh, well, this is the CSF, so we're gonna audit to the CSF, or want a third party assessment of the CSF. Where frankly it would probably be more effective oftentimes if they were like, well, we took the CSF. We made these changes to it.

Rick:

Can you review us against this set that we've made our own? Right? And along the way, if you know, review our design decisions in terms of how we've adjusted the controls framework as well. And if you think we've made a poor choice in terms of modifying a control or we're missing something, like, yeah. Give us opinions on that too.

Rick:

But a lot of organizations that I talk to have this really strong reluctance to modify a control framework. And honestly, they should. It would make things so much better. And to your point, like, unify a control set amongst multiple groups.

Justin:

Right.

Rick:

Now you have a lot more interest and a lot more potential resources going after making this thing right.

Justin:

Right. But the thing they're worried about is the mapping component of it. I think a lot of the times, they're like, how am I going to pick CSF and comply with PCI? Because CSF says, in 1 control, do baselines. You know, and PCI is like, you need NTP, you need logging, you need this, this, this, this, this, you know, type of thing, which could all wrap into 1 control in this, you know.

Justin:

Right.

Rick:

Right. I think that's part of it. I think the other thing that I see as a cause of the reluctance is, like, oh, but if something goes wrong, now someone's gonna be testifying to someone else in terms of, like, why we didn't do the control that would have stopped this, and, like, when you're explaining, you're losing. But I think to a large extent people's programs are gonna be much more effective if they tailor the stuff in a meaningful way, and if they do the diligence upfront, the design diligence, and they do just a little bit of memorialization and documentation to say, look, this is a starting point. We used CSF or ISO or a combination of things as a starting point.

Rick:

We've moved it just slightly to here because this is what aligns with us. They're gonna be much better situated to have a good security program. But then also, like, if things really do go sideways at some point, as they often do on a long enough time frame, they're gonna be in a place to defend that really well, because they will say, hey, look, we thought about our risks. We actually tailored our framework to align to our risks. We actually had a third party opinion that says that we did a pretty good job at that, and they made some additional adjustments.

Rick:

Like, that actually puts

Justin:

Or note that we corrected it or something like that.

Rick:

Yeah. I actually think that puts you in a much stronger situation if you need to, like, explain to some regulator or something, you know, why you're doing something. It's like, well, because we thought about the risks and we aligned it, as opposed to, no, we just did the baseline starting point thing forever and we never touched it.

Joe:

You hit on, like, 2 really good things I think are worth repeating. 1 is the purpose of the CSF is to make sure that you're considering Right. All of these areas, all of these functions and the categories and subcategories.

Rick:

Yep.

Joe:

And then the other good point you made is how do you justify your decisions through a risk assessment? And once you figure out your risk, it's justifying why you did pick something, but also why you didn't pick something. Right. And if you can go back and say we did that for this reason Yeah. And it was justifiable, your defensibility is gonna go a whole lot further.

Rick:

Yeah. Way up. And, you know, update that once every couple years or once every year or whatever, right? So it's not, oh yeah, we made this decision 10 years ago, never thought about it again.

Joe:

Yeah. That won't work.

Justin:

Yeah, right.

Rick:

But no. But if that is like a part of your, you know, you you just have a schedule. Hey. We're gonna think about this annually or whatever. Yep.

Rick:

And you

Joe:

put that in your policy and your standards. Yeah. And now you're following that too. Yeah. Yeah.

Joe:

Yeah. So something you said that I think, if everybody's not familiar with it, might be useful to see. The framework, the 1 that you mentioned, not the Oh,

Justin:

SCF, Security Controls Framework.

Joe:

Yeah. Can you go into that 1 and also select and filter on the various areas

Justin:

just so we can? It's a big Excel sheet, and essentially, on a column basis, they have all the different authoritative controls. And then in those columns, if it's blank, there's nothing mapped. If there's something in there, they reference the actual control out of that standard. So essentially you have to filter on what's in this column, like, non blank and non blank and non blank, and then you essentially have your control on the very left. You know, like, all the controls are lined up in the rows, you know, into that.

Justin:

So that's how I end up filtering it. It's not like, just give me this, this, and this. Like UCF, you can just say, give me these 5 standards, you know, type of thing. There's some Excel filtering magic, and then you delete everything else, and then you're left with that. And then you clean up, obviously, a whole bunch. Like, when I upload it into a GRC tool, you have to delete, like, a 100 columns.

Justin:

You know, they just span, you know, columns. In fact, as we were talking about, they also have threats and risks mapped to each 1 of their controls as well. And they do it kind of funky, where they actually have the risks in there, and it just repeats all the risks in the columns, and if a risk fills its own name in, it maps to that control. So it's a little weird, you know, but you can get there.

Joe:

But if you figure it out, you don't have to spend the money on the compliance framework.
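The Excel filtering Justin describes, keep a control if any of the authoritative-source columns you care about is non-blank, can be sketched in a few lines. The spreadsheet below is a toy stand-in: the column headers, control names, and cell values are made up for illustration, not the real SCF layout:

```python
import csv
import io

# Toy stand-in for the SCF workbook: one row per control, one column per
# authoritative source. A non-blank cell means the control maps to that source.
SCF_CSV = """\
SCF Control,PCI DSS,HIPAA,ISO 27001
Asset Inventory,1.2.1,,A.5.9
Encryption at Rest,3.5.1,164.312(a)(2)(iv),A.8.24
Visitor Logs,,164.310(a)(2)(iii),
Tabletop Exercises,,,
"""

def scope_controls(csv_text, required_sources):
    """Keep any control mapped to at least one of the required sources."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return [r["SCF Control"] for r in rows
            if any(r[src].strip() for src in required_sources)]

print(scope_controls(SCF_CSV, ["PCI DSS", "HIPAA"]))
# "Tabletop Exercises" drops out: it maps to neither selected source.
```

Whether you do this in Excel or in code, the design choice is the same: the union of everything your interest groups require becomes your scoped control base.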

Justin:

And the nice thing about it, like we talked about, like, when you upgrade and everything, a lot of people have a hard time, like, I'm on CSF 1.1 and 2.0 is out now, or PCI 3.2.1 and now 4.0 is out. The nice thing about this is once they have it mapped, you basically say, okay, I want to go to this. And then you can see whether it's the same controls or different, or, you know, like, it will add more controls, you know, into that, typically, if you're adjusting that. And so you're not adjusting your base like a lot of times when they're doing a whole bunch of remapping, and PCI, you know, adds controls at the beginning, which shifts the numbers all around and everything. Like, that's not a thing.

Justin:

It's these controls we're mapping to. And by the way, they map to these authoritative sources, you know, and Correct.
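The upgrade scenario Justin describes, where a stable hub control set turns a framework version bump into a diff of mappings rather than a renumbering exercise, might look like this sketch. All internal control IDs and framework references below are invented for illustration:

```python
# Your own control base stays stable; only the mapped framework references
# change between versions. IDs and references here are hypothetical.
v1_map = {  # internal control ID -> framework v1 reference
    "CTL-01": "v1 PR.AC-1",
    "CTL-02": "v1 PR.DS-1",
}
v2_map = {  # same internal IDs -> framework v2 reference
    "CTL-01": "v2 PR.AA-01",
    "CTL-02": "v2 PR.DS-01",
    "CTL-03": "v2 GV.OC-01",  # appears only in v2: a gap to close
}

carried = sorted(v1_map.keys() & v2_map.keys())   # remap references in place
new_gaps = sorted(v2_map.keys() - v1_map.keys())  # new controls to implement
retired = sorted(v1_map.keys() - v2_map.keys())   # dropped in the new version

print("carried over:", carried)
print("new in v2:", new_gaps)
print("retired:", retired)
```

Because the hub IDs never move, the renumbering that a framework does at its edges (like PCI inserting controls at the beginning) never touches your base.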

Joe:

And the authoritative source, so for example, maybe it's 800-53.

Justin:

Mhmm.

Joe:

And when you're doing control selection, have you ever selected controls from the NIST CSF, or are you using that to get yourself to an 800-53 control that is actually a control statement?

Justin:

No. So, like UCF, they have all their own controls that they basically have.

Rick:

You know, like a hub that

Justin:

sits in the middle, and it maps all the ways. Yeah. So they have their own written controls, just like the CCF, the common control framework, for UCF. So they have basically their own base, and that's what maps out to everything.

Joe:

So if you're struggling for how to write a control statement, they've written them in a way that's not duplicative of the other frameworks. You can use these. And then you can justify the use through the mapping.

Rick:

Yep. Yeah. And pick them, and if there's something that's not in there that you want, make it your own by adding it.

Justin:

Yeah. Yeah. I mean, there's a lot of stuff. And, you know, it takes a little bit of expertise to do some of the stuff. Like, there's controls maybe around AI. Like, that's kind of, you know, there's papers being generated like that, but it's pretty unknown for most people.

Justin:

Like, you can add your own. You know, if you're adopting something into the organization, you can add controls right into that, map it back in a policy. I'm a proponent of not having an AI policy, but it's more third party vendors, you know, and data management, which should already be a base. It's just different, you know, kind of thing. But, you know, however you wanna do that and reflect it, you know, to what your control base is.

Rick:

So, yeah.

Joe:

No. It's a great overview for tailoring. So what's the main takeaway?

Rick:

Stop not tailoring frameworks.

Joe:

Stop not tailoring.

Justin:

What if you're really simple though?

Rick:

What if you're really simple?

Justin:

So you have a small organization that might be a corner store, you know, restaurant, and all you care about is PCI?

Rick:

Yeah. I mean, I

Justin:

I don't know. Like I'd

Rick:

argue you should still

Justin:

They probably don't care about PCI at that point.

Rick:

Well, but I would say still take a swing at tailoring it because there's probably a bunch of stuff in a framework. You know, they're built to be utilized by Everybody. As many people, as many organizations as possible. Yep. So there's inherently gonna be stuff that probably doesn't apply to a small mom and pops Mhmm.

Rick:

Shop or whatever. And so again, you you know, walk through it, make sure you're not forgetting anything, carve out the stuff that doesn't apply anymore, and you don't really need to assess against that.

Joe:

Can you tailor PCI, or aren't you supposed to do all the things?

Justin:

You're supposed to do well, I mean, there's not-applicable for controls in that.

Rick:

Ask your QSA. Yeah.

Justin:

But, yeah, I mean, like any other framework, it has its own interest, and you can carve off segmentation. I mean, the bad thing about authoritative sources like that is it has a very specific interest point, just like HIPAA. You know, it only cares about 1 type of data, and it doesn't apply to your holistic program, you know, into that. So, yeah, it could have all the controls, but 1 system is in scope. So, like, the rest of your environment could be not patched for, you know, 10 years.

Justin:

And they're like, okay. That's fine.

Rick:

And this is where that Venn diagram of, like, security and compliance comes into play. Right? Like, if you're trying to be secure, absolutely tailor your maturity framework. If you're trying to be compliant and there is a compliance framework you need to adhere to, you're probably gonna have to do all those things, or at least follow their guidance to tailoring accordingly. Yep.

Rick:

So I guess just keep that in mind as you go.

Justin:

Yeah. When I run people through, kind of my little process of doing the SCF, they're like, oh, that makes a lot of sense. Like, yeah, it's pretty easy. You know, like, to do this, you don't have to worry about the mapping, you know, it's the price is right, you know, type of thing. It's time.

Justin:

Yeah. I mean, it takes a little bit of time, but less time than mapping multiple, you know, disparate frameworks together. Like, that takes time.

Rick:

Well, it's kinda like any other investment too. You sort of do it upfront, and then it'll pay dividends as you go, as opposed to not doing it up front. Now you're assessing against a bunch of stuff that doesn't necessarily make sense, and maybe missing a couple things that should make sense, and talking about all that on the back end. So, yeah. And if you have

Justin:

a good GRC tool, that also kind of compounds. Like, you can then start reporting on different aspects and business units and, you know, sources of that, you know, type of thing, which then really helps.

Rick:

A good GRC tool like a pesky? Yeah. I'm like that.

Justin:

If I ever get it out, stay tuned. Yeah. Yeah.

Joe:

So, well, what do you think about giving some shout outs to some upcoming conferences?

Rick:

Sure.

Joe:

And so the first 1 coming up, and a couple of us are helping to organize it, is BSides Pittsburgh, July 12th, Rivers Casino. Tickets still available.

Justin:

Mhmm.

Joe:

You may not be guaranteed a t-shirt at this point because they were sent off for ordering.

Rick:

Blinky badge is still available too. Oh, good. Yep.

Joe:

It's less than

Justin:

a month, isn't it?

Joe:

Yeah. Yeah.

Rick:

It's coming up. Yeah.

Joe:

Exciting. And then, and I think there's still a couple sponsorship slots if anybody's interested. Yeah. And then another 1 is TRISS, the 3 Rivers Information Security Symposium. I'm surprised you

Justin:

just looked it up, didn't you?

Rick:

I always wanna say summit, and I always get it wrong, and I feel bad.

Joe:

Exactly. And, that's coming up on October 3rd. But, Justin, you just had mentioned they just opened their call

Rick:

for papers.

Justin:

Yeah. Call for papers, presentations, and everything. So Oh, yes, I know. Yeah. They're a good conference in and around.

Justin:

Have you guys noticed, like, I've been getting an onslaught of random security conferences around Pittsburgh.

Rick:

We've seen a couple.

Joe:

There's some good moneymakers out there. Moneymakers for people who throw them

Justin:

Yeah.

Joe:

Because of that, and there's some things coming up. So I don't wanna mention the for-profit ones. The ones we're talking about are the ones that don't really keep any of the money for themselves. Yeah.

Justin:

And they're community driven. Like, we know basically all the, I mean, you guys are on the, obviously, BSides, you know, organizing group. But we know all the, essentially, the organizers of TRISS as well. Right? And everything.

Justin:

So it's community driven, which is great, you know, into that. But the last, you know, month or so, it's like, I've been going to every conference in Pittsburgh, you know, for 15, 20 years, it feels like. And now it's like, where is this conference coming from? Yeah.

Rick:

Yeah. There's a couple new ones. See how they turn out. Maybe there'll be something that way.

Joe:

So hopefully we'll see you there and hopefully see everybody there. Yeah.

Justin:

Yeah. So this, onto the booze. Scotch is making me thirsty. No, I'm kidding. The Kramer thing. That's really good.

Justin:

Yeah, I know. I was kidding. It was a Seinfeld reference.

Rick:

Or

Justin:

pretzels and then yeah. Yeah. Yeah.

Rick:

Yeah. So this

Justin:

is real good. This is, Door Knocker. You said this was what was it? Thief?

Rick:

Door knocker. Whiskey thief.

Justin:

Whiskey thief.

Rick:

Stealing.

Justin:

And I read it closely.

Joe:

I think Whiskey Thief actually puts this together for them.

Rick:

Oh, but there are a couple of guys. I think that's true.

Justin:

Distilled, matured, and bottled by Whiskey Thief Distilling Company. Frankfort, Kentucky. Frankfort.

Rick:

That's it.

Justin:

Yeah. And so, yeah, tasting it, it's really good. I mean, the color is great. And it's very caramel-y right at the front there. And then it settles into kind of, like, a nice, like, sweet vanilla,

Rick:

little tobacco, little like the leathers and tobaccos. And what's

Justin:

the proof on this?

Rick:

It's high. It's way smooth. 109. It's way smoother than it should be.

Joe:

I thought it was gonna bite a whole lot more.

Justin:

Yeah. Hey.

Rick:

Cheers, guys.

Justin:

Cheers. Yeah. Thanks for bringing this, Rick.

Rick:

Yeah. Of course.

Justin:

And you weren't able to get this in PA, right? Mhmm. Because it's not in PA.

Rick:

No. This was something that I have no idea how it made it to my house from Louisville.

Justin:

Yeah. Just showed up. Just showed up. Just like I do, like, come on, little buddy. I don't

Rick:

know what. I don't

Justin:

know what.

Joe:

With all my customers during an audit, I'm like, stop talking.

Justin:

Yeah. I can either I

Rick:

can neither confirm nor deny that I brought this bottle of booze.

Justin:

I got a story for you. So we were doing an audit at a company I used to work with. I won't name it and everything, but we had a regulator in. And I gave this big spiel to, like, 50 people who were involved with this. And 1 of the things I said, I was like, hey, you know, with this, answer their questions.

Justin:

I want you to answer questions truthfully, but then stop, you know, type of thing. I was like, I don't want you to expand on it. If there's dead silence, let it ride. You know, like, they are going to be the ones driving the question. So just answer the question and then be done and let them ask a follow-up.

Justin:

So there was this 1 gentleman. We were doing, I think it was Identity Access Management with this, and we're talking about, like, on the Windows side, what do they do and everything like that. And the auditor asked, he's like, and is this the same on the Linux side? And he's like, yes. Crickets.

Justin:

I just Normally,

Joe:

silence is the auditor's friend because somebody will keep talking.

Rick:

Well, that Yeah.

Justin:

It was hilarious. And this is, like, dead silence for, like, 10 seconds. You know, like, 10 seconds of dead silence, like, an eternity.

Rick:

Like, forever.

Justin:

And then the auditor just, like, Okay.

Rick:

And then they're done. It's actually it's funny.

Justin:

It's hilarious. Well done.

Rick:

That's good. The silence being the auditor's friend thing, it's funny. I remember the Deloitte 101 training. Like, that was a thing that they told you to do. It's like, yeah, sometimes, if you ask a question and you feel like there might be some additional detail you didn't get, just stop talking.

Rick:

And just see what they say. Don't be the next 1 to speak. And, yeah, it's interesting. Yeah. But it's definitely a tool that's used.

Justin:

Yeah. It it was just It was hilarious. We talked about it afterwards. It was just 1 of those things.

Joe:

So the takeaway there is make sure whoever is helping you get audit ready teaches you when to and when not to continue to answer questions.

Rick:

You know, I heard a thing from a highly mature organization I was working with on a couple things, and they were talking about their process when they get audited by some known-to-be extremely thorough regulatory individuals. And they said, you know, the approach they take is essentially, the experts in the room never answer a question until it has been restated by general counsel. So if the resources are there to have this, if you're at a large organization, I thought it was a very clever way of doing it because Did

Justin:

the lawyers make up this rule?

Rick:

Well They

Justin:

get paid by the hour. Well, it's it's

Rick:

internal. They're internal, so

Justin:

they're getting paid anyway.

Rick:

But, basically, so then the lawyer can clarify the question as much as they want or don't want, you know, with the regulator, and then they'll restate the question. And then once they rephrase the question, then the subject matter expert answers it, and nothing outside of that. So basically, I just thought it was a very

Justin:

clever 1. What's that? Was this for any audit?

Rick:

It's how they manage specific regulatory audits that they go

Justin:

through without trying to give up too much. Yeah. Yeah. Yeah.

Rick:

But, yeah. I thought it was a really actually a very clever way of doing it if you have internal counsel. And they have a, again, a fairly mature organization where their counsel is involved in these audits by default anyway. Okay. So it was a really interesting thing to see.

Rick:

You know, that was the that was the give and take that they do. I thought it was kinda smart.

Justin:

Yeah. That's interesting.

Joe:

Yeah. Oh, I love that.

Justin:

Yeah. That's an expensive

Rick:

list. It. Yeah. Well, yeah. I mean, yeah.

Rick:

It is. Well But it also allows them to continue doing business.

Joe:

So Not having that regulator let them keep doing business is more expensive. Right.

Justin:

For sure. For sure. All right. Next topic here that I want to dive into is AI and security. So AI is very popular.

Justin:

We're plugging it here just so we get the clicks and everything. But really curious, with the adoption of AI, have you seen any tooling that you use from a security perspective? Where do you think the most advantage is coming from, from an AI perspective, you know, into that? Any general thoughts?

Rick:

So I have yet to see a tool, like, an actual security tool, that has convinced me that they are using legitimate artificial intelligence, that it's something more than, like, a typical expert system. Okay. Right? And maybe they're out there, but what I've seen either hasn't impressed me that much or they're not really using it the right way. Honestly, what I see the most use from, from a real AI perspective, is, like, help writing governance documents and stuff like that.

Rick:

I mean, to be perfectly honest, like, there's a bunch of tedium that goes into that, or, like, summarizing complex topics and stuff like that. I mean, that stuff is super useful. Like, help prepare me for a board intro. Like, there are LLMs that can do, like, an intro to a specific topic or something, right?

Rick:

Like, type into ChatGPT, how would you explain artificial intelligence and security to a board of directors, and it's probably gonna give you some stuff that's usable. Yeah. Right? That'll help shortcut your thinking, almost like we're talking about with frameworks earlier. Help make sure that you're not forgetting something that, you know, could be kind of obvious, but to augment your thinking on that.

Rick:

That's frankly what I've seen the most legit tactical benefits from.

Justin:

So how many policies do you think nowadays are just generated by AI?

Rick:

That's the other side. Stop not tailoring your AI policies. It's gonna be a year from now Yeah.

Joe:

Topic. Stop not editing them after AI generated it because it's probably wrong. Right.

Rick:

Well, to your point, like, how many emails have I gotten. Again, we'll change the names to protect the innocent. 1 of my friends in an IT role was working with the security guy, and he was telling me, for about the last year, every question he gets, I think he just plugs into ChatGPT and responds. And I think, ultimately, there was some screenshot evidence that sort of backed up that assumption at some point. So Do you

Justin:

guys watch, South Park at all? I don't watch it regularly. But I saw the clip where, was it, Stan was asking Clyde, like, how are you, you know, so good with your girlfriend and everything? He's like, anytime she texts me something, I just type it into ChatGPT for an appropriate response and just spit it back out to her. And it's like, what, do you think I'm ugly or something?

Justin:

Like, no, you're the most beautiful blah blah blah blah. And it's like,

Rick:

oh, you're so kind. Yeah. Well, but so I feel like everyone in the comments should scream and yell at me about tools that actually use AI that are good for security because I'd love to be educated on that. But from my perspective, I haven't seen anything that's been, like, just really wowed me. But have you seen

Joe:

I haven't either. But how is it going to, and what I'm looking to learn is how are people actually really saving time in their jobs from a security perspective? And, of course, on the governance side, you have that. Well, what about the technical side? What about on the watching the logs and responding to potential incidents?

Rick:

Yeah. Well, and you I think that the challenge is, you know, and and we talk about, you know, false positive rate and false negative rate in security all the time. Right? Like the challenge is the false negative rate and the false positive rate. When you're using like AI to like change firewall rules automatically, or to, like, you know, kick off an incident investigation, or things like that.

Rick:

Like, it's super expensive when it gets it wrong, and it's it's I think, there's a lot of feel associated with that that I just don't know, maybe it'll get there. I have a tough time seeing

Joe:

that 1.

Justin:

I don't see anything yet from an AI perspective taking essentially control of decision making. It's more information augmentation. You know, like just basically providing context or aligning things to bring attention to.

Joe:

To be considered.

Justin:

Yeah. Exactly. Like, you mentioned changing stuff, I'm not aware of that. Well, I mean, there's some access controls.

Rick:

It's in, like, the hype cycles for, like, zero trust. And, like, Gartner's talking about, what, data security posture management, and it talks about, like, oh yeah, from an access perspective, we'll automatically ramp up or ramp down file access based on, you know, whatever. But at the heart of it, the things that I see that actually work and that are implemented are typically, like, rules that people consider and then put into place. Right. And, you know, using AI to be like, hey, what should I be considering?

Rick:

I think that's super fair to bring things that exceed normal benchmarks to humans' attentions for investigation. I think that's great. But, yeah, I haven't seen much that has really been like, oh yeah, this is gonna save an engineer or a technician, you know, tons and tons and tons of time.

Justin:

I go ahead. I was

Joe:

gonna say whenever I think of security, the other piece I bring up but I don't wanna dive away from security if you have something else, but I wanna maybe hit on privacy for a moment.

Rick:

Yeah. Yeah. Go ahead.

Joe:

Yeah. And and 1 of the things, I was, watching the, or listening to the Hustle Daily Show podcast the other day. And in it, they brought up this toy that I haven't heard about called Moxie.

Rick:

So I

Joe:

looked it up. Moxie is this little robot about so big, costs a little under $800, or I just found that today you can rent it for $99 a month if you wanna try it. And they even have a refurbished for a discount out there now. But some of the main points that they brought up, and I didn't validate this stuff. I just took it from the the hustle, daily show podcast, is that they say they don't share information, and they don't store information, yet they're sending information back in order to be processed.

Joe:

So Yeah. They gotta be doing something with it. But the part that really made me, like, iffy on the whole thing is they don't really have a feature for parents to know what conversations the kids are having with it, and more importantly, how it's answering.

Rick:

Right.

Joe:

And that would be something important to me from a privacy perspective.

Justin:

They'd have to send it back. Right?

Joe:

They they well, they would have to if they were doing that. There's no way for a parent Yeah.

Rick:

There's not, like, parental controls on this toy potentially.

Joe:

What this toy is telling you. So imagine the security implication if somebody were to compromise the system and then start having it answer in inappropriate ways to

Justin:

To your baby monitor scenario all over again. Right. Right. Right.

Rick:

Yeah. Well, I to to your point, even if the way that the, like, language model that it was programmed on runs counter to something that you and your household believes or holds dear or whatever. Right? Like and it could be, you know, it could be a religion thing. It could be a political thing, a million different things.

Rick:

Yeah. Not having control. It's funny, I was thinking about this a little bit, because I also looked up that toy once we were talking about, hey, let's talk about AI and stuff. And one of the things that popped into my head, which is a bit of a devil's advocate argument, because I am made very uneasy by the nature of this toy for a bunch of reasons, naturally, and I haven't thought through all the reasons why yet. But the devil's advocate argument might be something along the lines of, okay, but if your kids are online anyway, they're subject to all these potential influences, right?

Rick:

They're subject to conversations in school that are unregulated potentially and all these things. And so I don't know I'm just I guess I'm just kind of cross checking my thinking a little bit, but I am also made very uneasy by this toy.

Joe:

Yeah. Well, I get a sense that the toy is maybe targeting children a bit younger, rather than older kids. So I'm not quite sure what the age range is.

Rick:

Yeah. I think you're probably right. But it says, like, oh, yeah. And then they can talk about their feelings, and they can, like, interpret their drawings. Right.

Rick:

Look, I think there's probably a lot of things that are well intentioned. But like many things, the execution is, you know, critical. So, yeah, it made me very uneasy. And I also just wonder about, like, even if they say they're not collecting data and stuff like that, I mean, we're security people, or trust-but-verify type people.

Justin:

Right.

Rick:

So, like, boy oh boy, is this a landmine of marketing data in the future by being like basically a child's trusted journal robot.

Justin:

Yeah. That's wild. How many times has the FTC come out and actually sued somebody for violating their own privacy policy? Yeah. Actually.

Justin:

You know, going against consumer interest and all that stuff and everything.

Rick:

A lot. And we're gonna charge them a ton of money. However, you know, after the lawyers take their share Yeah. And after this and after that, you know, the people that actually felt the impact get privacy monitoring for

Justin:

a year

Joe:

or whatever.

Rick:

I mean, whatever it is. Right?

Justin:

But what what about all the, cell phone, companies and everything? They were selling all that data out to Yeah. 3rd parties, and the 3rd parties were selling that data again to other third parties and all that stuff. And they finally like nixed that after, you know, a little bit of government, you know, intervention. But it was like, you you can do what to what?

Justin:

You know, like, there was that 1 website that disclosed that, essentially, you could just go to it, no authentication, type anybody's number in, and see where their location is. I was like, that's creepy.

Rick:

I was thinking about this today. I don't know if this is too far from AI, but we'll go where the conversation takes us, to a point. I was thinking about this earlier today too. At what point have we had so many data breaches and privacy breaches that you just say, look, everyone in America or everyone in the world or whatever gets free privacy monitoring. Like, just no more, like, lawsuits about it.

Rick:

Just, like, subsidize the whole thing at some point because everyone has it anyway.

Justin:

Yeah. But, I mean, does that actually do anything at the end of the day?

Rick:

I don't know that it actually does.

Justin:

I paid for it, you know, that. But

Rick:

But I guess my point is, I I what I was really thinking about was based on the large scale breaches that have occurred in

Justin:

the past,

Rick:

I'd be interested in the percentage of humans in the US that have been impacted by some security or privacy breach. Over 90%, I'm sure. And if that's the case, you know, how much of the legal system or whatever is being taken up by all this stuff, and where's the benefit in giving them another set of privacy monitoring on top of the 3 that they've already chosen to use or not use? Like, at some point, I mean, I get that there needs to be, like, a stick, but I'd almost rather just be like, okay, everybody in the US gets privacy monitoring for free, and from here on out, all fines that are levied to organizations just go to something else. I don't know.

Rick:

It's just the but again it depends on a million factors but it's just I was just curious about like I wonder how many humans in the US, like, if you deduplicated the populations have actually been impacted at this point because I bet it's a lot.

Justin:

Yeah. I'm sure it's a lot. I mean, just Change Healthcare, you know, was what, a quarter of all Americans, you know, involved in that 1 alone.

Joe:

Yeah. I think the number is actually higher, but I'm not sure.

Justin:

It might be. Yeah. Bonkers. So, yeah, that's crazy. So 1 thing before we get off this topic from an AI perspective, I've been seeing some GRC tools implement some stuff like this.

Justin:

So there was 1, from a vendor questionnaire perspective, where they integrated AI to basically look over some of the context of their governance and then help craft answers for a questionnaire that they would upload, which, I mean, a lot of GRC stuff is grunt work, you know, at the end of the day. So that can help a little bit from an automation standpoint. I'm working on, shameless plug, John Ziola did an AI hackathon the other day, and I worked on Apiskey. And 1 of the things I was building into it was a vector kind of database along with the control context and what your response is, and then using AI to actually search your own data and give you context information. So you could ask specific questions about your controls, about certain stuff.

Justin:

Again, it's not revolutionary, you know, type of thing, like, it's out there.
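
A minimal sketch of the retrieval idea Justin describes, assuming a toy bag-of-words similarity in place of a real embedding model and vector database (the control IDs are real NIST CSF subcategory names, but the control texts, responses, and queries are invented examples):

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy embedding: bag-of-words term counts. A real system would use a
    # learned embedding model and store vectors in a vector database.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Invented control context + prior responses, indexed once.
controls = [
    ("PR.AC-1", "Identities and credentials are managed; MFA required for remote access."),
    ("PR.DS-1", "Data at rest is protected; databases encrypted at rest."),
    ("DE.CM-1", "The network is monitored; IDS alerts reviewed daily by the SOC."),
]
index = [(cid, text, embed(text)) for cid, text in controls]

def search(question, top_k=1):
    # Rank the stored control context by similarity to the question.
    q = embed(question)
    ranked = sorted(index, key=lambda row: cosine(q, row[2]), reverse=True)
    return [(cid, text) for cid, text, _ in ranked[:top_k]]

search("Do you require MFA for remote access?")  # top hit is the MFA control
```

A production version would swap `embed` for a real embedding model and feed the top matches to an LLM as grounding context rather than returning them raw.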

Joe:

But it's a great use case today where, somebody was talking about creating their own internal model, so that they can keep the data Mhmm. Inside their own database, but then make it available so that when their sales people were responding to RFIs

Rick:

It's so funny. I just That's exactly what I was gonna say. Keep going though.

Joe:

Oh, yeah. You finish it because I think we're on the same page.

Rick:

Yeah. Yeah. So, like, I've been thinking about this with chat bots forever. Right? You have these customer facing chat bots.

Rick:

It's like, well, just put 1 behind, like, a client portal, or for RFIs or whatever. I think of it from, like, if we have customers that are, like, hey, tell us about your security program. Right? Alright. Put it behind an authentication portal, right, and just have this chatbot. Honestly, the earliest use of actual AI that I know of is, like, customer service chatbots.

Rick:

Yeah. Right. So leverage that, have it read all your policy documentation, right, all your procedures, all your whatever.

Justin:

Previous responses.

Rick:

Previous responses, all that stuff. And then just let the vendor assessors type in, okay, how do you do password management? Because, oh, we do password management this way, and here's the excerpt from the policy if you want it. Blah blah blah. But to your point, in terms of responding to RFIs, I think that's another, like, almost forward facing

Joe:

Yeah. Well, this is internal. This is to save the security team time Yeah. So that when the salesperson is trying to get the deal and the questionnaire comes back

Rick:

Yeah. Same.

Joe:

They can start to self serve this so that they don't just ship the security question over to the security team.

Rick:

Yeah. Yep. Exactly. It's it's the sales focused version of the of the assessment as opposed to the repeat client version of the assessment. But yeah.

Rick:

Absolutely.

Justin:

Do you trust the sales team with that, though?

Joe:

Well, I think you'd have to have checks and

Justin:

balances because,

Rick:

of course What

Justin:

if it said, yeah, we don't do this. I'm like, nope. Nope. That's unacceptable. We're doing it.

Rick:

Well, it's like I think the again, the the the challenge that I think about is the danger of, like, false negatives and The hallucinations. The hallucinations. Like, how do you do passwords? Oh, yeah. We have 300 character passwords or whatever.

Rick:

And it's like because you know how this works too. It's like, well, we got a response that's clearly untrue or questionable or weird. We can no longer rely on anything we've done for the past 5 years, and we're gonna, like, just all that kind of chaos that gets generated that typically wouldn't be generated by a human. Yeah. But but I love the use case.

Rick:

So I've actually been thinking about this for a while, because we'll help organizations stand up some of their security response programs in many cases, and we've tinkered with this internally a little bit, in terms of, like, hey, could this work for us, da da da da. So there is a there there, and I think it's super cool, but we haven't gotten wheels under it yet. So, yeah, it's neat though. Did I

Justin:

tell you about my, policy mapping, thing for a customer? No. So Oh, yeah. Yeah. I was going for it.

Justin:

And it was a simple, you know, exercise. Essentially, they wanted their policy statements mapped to CSF. I was like, let me see what, chat GPT can do and everything. So I upload it, and all of a sudden, it's like, this maps to this requirement. This might I'm like, I'm gonna be done in 30 minutes.

Justin:

This is gonna be awesome. Yeah. And then I start going through and validating, and it was making up requirement numbers. It was just like, that requirement doesn't exist. I don't even know where it got that.

Rick:

I always think about, like, alright.

Justin:

Back to the old fashioned way.

Rick:

Like, a couple lawyers that are, like, effectively facing disbarment. Maybe have been disbarred already.

Justin:

I think that was up in New York and everything. Yeah. It made up a case law. There are no instances of it. Yeah.

Justin:

That's not Yeah.

Rick:

And you can't just submit it to the judge without, like Yep. validating it. Right.

Justin:

You gotta

Joe:

validate what's come out of these things. Mhmm.

Rick:

So and I do think that's that maybe that's the core takeaway, right? It can accelerate a lot of things in security, but at the heart of it you still need to do the trust but verify thing. Like you don't just, like, turn it loose just yet.
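
The "trust but verify" step can be partially automated for the mapping problem Justin ran into. A hedged sketch, assuming you maintain a known-good list of control IDs to check the AI's output against (the sample set below is a small, incomplete subset of NIST CSF subcategory IDs, and `PR.IP-99` is a deliberately fake ID of the kind ChatGPT invented):

```python
# Known-good control IDs. A real list would be loaded from the published
# framework; this sample is intentionally incomplete.
VALID_CSF_IDS = {
    "ID.AM-1", "ID.AM-2", "PR.AC-1", "PR.AC-3",
    "PR.DS-1", "DE.CM-1", "RS.RP-1", "RC.RP-1",
}

def verify_mappings(ai_mappings):
    # Split an AI's {policy statement: control ID} output into mappings
    # whose IDs actually exist and hallucinated IDs needing human review.
    accepted, flagged = {}, {}
    for statement, control_id in ai_mappings.items():
        if control_id in VALID_CSF_IDS:
            accepted[statement] = control_id
        else:
            flagged[statement] = control_id
    return accepted, flagged

accepted, flagged = verify_mappings({
    "Passwords must be rotated on compromise": "PR.AC-1",
    "Backups are tested quarterly": "PR.IP-99",  # hallucinated-looking ID
})
```

This only catches invented IDs, not wrong-but-real ones, so a human still has to review whether each accepted mapping actually fits.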

Joe:

Well, that leads me to something else I wanted to know about. A little shift on the topic, but it's deep fakes. Mhmm. And I have not had a chance to figure out how to play with that kind of system yet and make 1. Have you seen, a good 1 or tried it?

Rick:

So I have tried it a little bit, but this was probably about a year, year and a half ago, so like that stuff's evolving so quickly. Yeah, it's probably crazy now. I didn't have great success, although I didn't throw that much time at it, but honestly the reason I was doing it is because we have a very heavy meme culture where I work, and so we wanted to, lightly troll some people, but I couldn't make it work in an hour and a half and then I was like okay that's enough. But I'll tell you we have used some deep fakes in some of our incident response testing recently. Right?

Rick:

So both in terms of pretext, like, hey, you know, the CEO is on all these calls, so I'm gonna have the CEO call you and say these things that we type in, and see if I can

Justin:

Gotcha. So it mimics his tone.

Rick:

Yeah. Mimic the voice, or whatever. Yeah. And that's been, I think, moderately effective, but sometimes you get some rather hilarious, you know, misses, right? The other thing that we've used it for a little bit, I guess it's less deep fake maliciously, but it's also in incident response: you know, mocking up a newscast, as a for instance, to say, like, oh, this company, you know, had this thing happen to them and da da da da, and we're gonna call their person right now for comment, and then we, you know, dial their number with, you know, a scripted thing.

Rick:

So to make some of that look and feel a little more real, and those have been pretty good. I've been surprised at the quality that the team has turned out on those things, and I don't think they had to spend a ton of time.

Joe:

Really? That's awesome.

Rick:

Yeah. Yeah. I'm impressed and scared by it.

Joe:

So let's talk about the defensive side of that. Yeah. What, should people be considering when deep fakes are a real threat? The folks who know how to use it can really do it, and they can actually make it look like a real video talking to you of the person. They make a request.

Joe:

What should go through the minds of the people getting that request before they go

Justin:

do it? I mean, nobody's gonna trust anybody at that point.

Rick:

I think we've had this conversation before, but I actually think, like, we're gonna at some point move from, like, a service based economy to a trust based economy, where the ability to trust swaths of people, because they're pre-vetted in specific ways and stuff like that, has a ton of value. But to your specific question, I don't know that there's a way to solve this without additional process checkpoints

Justin:

Mhmm.

Rick:

When specific high risk things happen. So, oh, you wanna change a password? Right? There's stuff like this that has existed forever. Oh, you wanna change a password?

Rick:

Oh, well, I'm gonna call the cell phone we have on file for you back right now, and I'm not gonna change your password unless you pick up, or you can answer these security questions

Joe:

or whatever. So that's the basics. That's what I was thinking. Yeah. And the same thing happens, like, how many times a month do we get notes from somebody that says, I wanna change my direct deposit.

Joe:

I want to change the, ACH for the payments. Yeah. And it all comes back to the same thing. You've got to set up processes

Rick:

Yep.

Joe:

For verification. And they need to be on a different channel, a different method than the 1 that it came in on. Yep.

Rick:

And it's proactive and it's reactive. Right? So, like, on the financial side, like, there's there's like the 2 key solution where like you need multiple people to agree. Right? Or I'm gonna contact you back on a different channel.

Rick:

And then there's the the post occurrence or or sometimes the the batching process where you do something like positive pay. Like, hey, we're not going to process any of these checks. We're gonna batch them all up and at the end of the day, we're gonna do them all at once. Right? But we're gonna make sure that someone validates them 1 last time before we actually do them.

Rick:

So I think there's probably a lot of stuff, because people have been trying to steal other people's money for a lot longer than they've been trying to do bad things digitally. Right. There are actually a lot of process oriented controls from the financial teams that I think we'll start to rely on more heavily in the technological world as deepfakes become more and more of a thing.
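
The out-of-band verification and positive-pay batching Joe and Rick describe can be sketched as a simple workflow. This is an invented illustration, not anyone's actual process; the names, channels, and field names are all assumptions:

```python
from dataclasses import dataclass

@dataclass
class ChangeRequest:
    # A high-risk request (e.g. an ACH / direct-deposit change) that is held
    # until it is verified on a channel other than the one it arrived on.
    requester: str
    detail: str
    received_via: str        # e.g. "email"
    verified_via: str = ""   # set only by an out-of-band callback

    def verify(self, channel):
        # Out-of-band rule: the confirmation must come over a different
        # channel than the request itself (call the number already on file).
        if channel == self.received_via:
            raise ValueError("verification must use a different channel")
        self.verified_via = channel

def release_batch(requests):
    # Positive-pay style batching: at end of day, process only the requests
    # that were verified out of band; hold everything else for human review.
    approved = [r for r in requests if r.verified_via]
    held = [r for r in requests if not r.verified_via]
    return approved, held

reqs = [
    ChangeRequest("vendor-a", "new ACH account", "email"),
    ChangeRequest("vendor-b", "new ACH account", "email"),
]
reqs[0].verify("phone-callback")   # callback to the number on file succeeds
approved, held = release_batch(reqs)
```

The design point is that the verification channel and the batch release are separate gates, so a single convincing email (or deepfaked voicemail) can't move money on its own.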

Justin:

I had a client last year. Somebody got a hold of, like, some of their vendors and everything and sent some fake emails saying, hey, I'd like to switch my direct deposit over in a few weeks. And 500 k essentially went out the door to those new bank accounts, you know, and everything. And luckily enough, like, I called up my Secret Service buddy, and he was able to, like, stop 2 of the payments. So it wasn't 500 k.

Justin:

They only lost like 100 or something like that. You know, it's great.

Rick:

I mean

Joe:

you get it early enough.

Justin:

Yeah, exactly. But, yeah, that was 1 thing. And, you know, essentially they didn't have a process set up for that, and that's 1 of the things. I actually just did a call on ACH fraud a couple of weeks ago, and we went through some of the positive pay, you know, approval, like how you validate new users, changing users, and getting a self-service portal to let them basically do it, you know, if you can hook them up to that. If not, have a process where you're vetting, not just trusting an email coming in and stuff like that.

Justin:

There's a lot of kind of process and technology that goes into some of that stuff.

Rick:

But I think small businesses are gonna be hugely at risk for that stuff, because they're the organizations where, well, there's only 3 people, or there's only 1 person that does this, so who's the second person that authorizes this thing, or whatever. And there are no natural triggers, so when they grow from 1 person to 3 people, or 3 people to 5 people, or 5 people to 50 people, oftentimes some of those processes just sort of stay the same, because they're not necessarily broken. They just haven't realized it or hit a thing that makes them revisit. So, anyway, I think the deep fake stuff is gonna, like most things, target small organizations the most quickly, because they're the most vulnerable in terms of process maturity.

Justin:

I mean, do they even need to do that stuff and everything?

Rick:

Deepfakes to target small business.

Justin:

Yeah. Deepfakes. Yeah. Exactly. I mean, they usually just fake an email to that, I'm the CEO, transfer this money now, and

Rick:

I actually think it's not necessarily do they need to. I think it turns into, once doing that is just as easy, based on the tools that are available, once doing that is just as easy as crafting a spam email to a bunch of people, well, you might as well do the spam voicemail. Yeah. Right? You might as well do it a bunch of ways.

Justin:

Who's leaving voicemails anymore?

Rick:

Well, I will tell you I will tell you as a pretext if someone's trying to get you to deposit a bunch of money, right, somewhere,

Joe:

if

Rick:

they send you an email and then automatically leave you a voicemail with the right voice and stuff like that, that's gonna feel a lot more legitimate.

Joe:

Or better yet, use, a video a video voicemail.

Justin:

Right.

Rick:

Right. All sorts of things like that. So I think you can use it to enhance pretexts a lot. I think so I don't think people are gonna necessarily go to the trouble just yet. It's still kind of a pain in the butt to build some of these things.

Rick:

But I mean, there's gonna be push button solutions for a lot of this stuff, and, I mean, don't quote me, but I wouldn't be surprised if it was 2 years or 5 years. Yeah.

Joe:

Well, since you're giving advice for small businesses and we started with I take this all the way back to the beginning, the NIST Cybersecurity Framework has a small business quick start guide.

Justin:

Oh, yeah.

Joe:

And it is only a few pages long, and it lays out the complexity of that thing in a very simple, easy-to-understand way. So if you're a small business and you want to get started, you can just get started with the small business quick start guide.

Justin:

I'll see about putting that in our show notes and everything. Alright. We can reference that.

Rick:

Do the CIS 1 too.

Justin:

I think they have

Rick:

a small business 1 as well. And again, these things are complementary. Right? They have slightly different focuses, and they'll cover some of the same ground, but in a different way that I think is useful.

Joe:

And to round out AI, Microsoft Recall is related to AI. Yeah. What do you think about Microsoft Recall?

Justin:

So it was interesting looking at it. I thought it was more concerning than it is. Obviously, you know, they're taking screenshots every few moments, but it's stored locally; I thought it was uploading, you know, to the cloud and everything like that. Honestly, I just thought, isn't this, like, Time Machine for the Mac? Like, you know, they didn't take screenshots, but they were taking snapshots

Rick:

of all

Justin:

the files and, you know, doing the recall with that. So I don't even know why you're taking screenshots. Like, is there a context? I guess you can OCR some information from that, but what's the purpose of it?

Joe:

Well, the legit purpose is so that you can actually go and search, ask an AI question to Copilot, to find out, you know, that thing you were looking up, that topic that

Justin:

Like browser history. But it could be, like, the web page

Rick:

itself. Yeah.

Joe:

It could. It could. It could all be there. But the bad guy scenario: so it's stored locally, but what do we most worry about?

Joe:

Somebody getting access to your machine because, you fell for a phishing attack. Yep. And then they can exfiltrate the, the data.

Justin:

Right.

Joe:

And so what do, like, big technology companies worry about? Well, now they have their secret sauce, their information, being snapshotted onto all these machines. And now it's not just in wherever they're storing it. The crown jewels have now just moved to

Justin:

Every single endpoint.

Rick:

Yeah. Yeah.

Joe:

That that's that's working good.

Justin:

I guess I just don't understand the context of why they would even want the screenshot.

Rick:

It does feel like the risks outweigh the reward. The other sort of risk scenario I thought about with that stuff is if someone got access to it. You know, people are investing a lot of time and money, particularly mature organizations, in user behavior analysis. Right? Say, what does normal look like? Well, if I have the screenshots over a certain period of time, I know precisely what normal looks like.

Rick:

I know how this specific person works.

Justin:

How long you've been on ESPN or Facebook or

Rick:

Right. So so if you have a motivated attacker that ends up with access to some of these things, they could in theory get away with a lot more without detection.

Justin:

This is big brother, you know, type of thing.

Rick:

Oh, well, I guess some of that too. I'm thinking more of the malicious actor angle than big brother.

Justin:

Has John actually worked today, or has he been surfing the Internet for

Rick:

I will soapbox for another hour on this. I feel passionately that those are not security questions. Those are management questions.

Justin:

Oh, yeah. Absolutely. Sure.

Rick:

And they need to be dealt with by management.

Joe:

How many times have you been called when you were in industry, not consulting, to, go and grab the snapshot or the history or whatever even from the, the system logs because somebody wants to figure out if their person's working.

Rick:

This is why it's a specific pain point for me. Oh, yeah. I experienced

Joe:

that 1.

Rick:

Yeah. It's it is you're asking the wrong questions of the wrong people. But anyway. Yeah. Yeah.

Rick:

So the big brother stuff, yeah. But I do think, like, access to that data could allow people to evade UBA more.

Justin:

Is it just the screenshots or all the rest of the data you're thinking of?

Rick:

I mean, really all of it collectively, but even just the screenshots could Yeah. I was just thinking about how frequent

Justin:

it was like Microsoft got hit over the head, and they pulled it back, you know, at least temporarily, on some of this stuff to reevaluate it. But, I mean, I look at that: outside of the screenshots alone, how long has Apple had Time Machine, where it takes a snapshot of every file and you can go back and look at all that stuff?

Rick:

That's true. I do think the time Why

Justin:

don't they get hit over the head, you know, for essentially the same feature?

Rick:

Yeah, it's

Joe:

a little bit different. It's more of a backup.

Justin:

A little bit.

Rick:

Yeah, exactly. I think the difference to me, though, and this might be a very human way of thinking about it, is that it doesn't necessarily bother me too much if people know what's happening to my system over time, because the system is the thing under observation. It bothers me a lot more if people know exactly where I was clicking and what I was doing at the time, because I am the 1 under observation. It's probably a subtle difference, because ultimately, if you're looking at all my system files, you kind of know what I'm doing anyway. But it feels like a distinct difference when I analyze it that way.

Rick:

And I bet that's why some of that backlash occurred because, like,

Justin:

oh, you're

Rick:

you're taking screenshots of what a person is doing, as opposed to taking snapshots of what's happening on a system. Yeah.

Joe:

And I

Justin:

think, I mean, there's always that balance between privacy and value. So you sacrifice a little privacy to get value out of it, you know. So, like, the Alexas, you know, or something like that, you're sacrificing a little privacy to get value back out of it, you know?

Rick:

That's true, but does that only ever move in 1 direction? Do you ever sacrifice value I think

Justin:

it's a simple, you know, into that.

Rick:

When when do people sacrifice value to get more privacy?

Justin:

Well, every, Alexa or Siri device that's sitting in their house. No.

Joe:

It's

Rick:

No. The other way.

Justin:

The opposite way.

Rick:

When does it go the other way? I what I see the reason I ask

Justin:

Sacrifices value for privacy.

Rick:

I see this as a slippery slope where When

Justin:

they throw those devices away.

Rick:

I suppose. I see it as it's an interesting thing, but it's a slippery slope where,

Joe:

to your

Rick:

point, people will sacrifice a little bit of privacy for some value, and then enough people do that, and then the societal expectation is everybody's doing that, everybody has free email from Google, what are you talking about, right? And now all of a sudden nobody has that privacy anymore, and then what's the next incremental change, and then what's the next incremental change? And then what's the next incremental change? And I just don't know where, or if, or how, there are probably several doctoral theses on this, about how, like, that gets pulled back at all.

Justin:

Right? Like a societal regression on resetting that.

Rick:

It really feels like it doesn't seem

Justin:

to think there has to be a big impact

Joe:

on someone. It's not gonna ever pull back. It's just like this other thing I just heard somebody rant about, and this is nothing new with security. It's all about your phone.

Joe:

Like, 20 years ago, 30 years ago, would you ever have expected that somebody would message you and almost expect and demand that you're

Rick:

gonna respond

Joe:

within minutes. Because you used to leave the house with no phone, and you'd go for however long, and you would be away, and people would call you, and they leave a message.

Justin:

Right. And you

Joe:

may not even return their call.

Rick:

Absolutely right.

Joe:

And now if you don't respond to a text in a few minutes, people are just gonna get irritated.

Rick:

Yeah. Yeah. People get irritated with you so much. Especially with your wife. Yeah.

Rick:

Especially That's true.

Joe:

That's why she gets so special

Justin:

on her own.

Rick:

Can confirm. Yeah. Yeah. No. I it's absolutely right.

Rick:

I think, again, the societal expectations move, and then all of a sudden you're on call all the time. And I suspect it might be related. I don't know this for sure, but, like, and then, like, mental health declines or, like, there are all these, I think, unintended consequences of always being on call, sacrificing privacy for value, all these things that technology, like, at a micro scale enable in a really positive way. But on a macro scale, are probably worth some consideration. Not that I

Justin:

don't even care about it. And then you have to be more purposeful about shielding yourself from that. You know, like when I first started at the consulting company that we were both part of, I had to do these big long reports, 100 pages plus, you know, with that. It was the last thing I wanted to do. So any email that came in, anything along that line, I was like, oh, I'll

Rick:

go do that. You know?

Justin:

Like, oh, let me answer that real quick. You know? So I was looking for excuses not to do this, and I had to eventually reset. And I'm like, alright. I need to carve off.

Justin:

So, Do Not Disturb, the same thing. Yeah. Yeah. Exactly. So now I practice inbox 0.

Justin:

I turn off all notifications, except for Slack. I have Slack still on. But like email, I have to physically go into my email to see if I have unread email into that. And I try to get to 0 as often as I can. And the ones that are in there are things I have to do, you know, with that.

Justin:

Or I'm waiting on a response. And same with my phone, like I try to eliminate, like, okay, nothing is coming up to get me distracted from doing some of that stuff. But

Rick:

It ends up being like context switching. Right? Like Oh, yeah. So you I'm like, okay. Am I in respond to everything mode right now?

Rick:

Or am I in focus on this 1 thing right now mode or whatever? But again, to your point, and it's so true, like, society in many ways, like, or many, many people expect a response on certain modes of communication immediately. Yeah. And when they don't get it, there can be consequences. Maybe there's soft consequences, right?

Rick:

Like, you know, well, I just made whatever I wanted for dinner as opposed to asking you your opinion because you weren't around when I needed you, but I assumed I could ask you, you know, while I was at the grocery store.

Justin:

Right.

Rick:

Right? Because that's the expectation, because that's how it's worked 95% of the time. And I don't know that anyone's necessarily wrong in that scenario, but it has these soft consequences. It does.

Joe:

It does. It does. Yeah. And, so this reminds me of 1 more thing. Still nothing to do with security, but to do with your everyday life of working as a manager or a maker.

Joe:

Mhmm. Alex Hormozi just released a white paper, and he always releases a companion video with it too. And he talks you through what happens if you're a manager and how your day is structured and how you fill your day versus if you're a maker and the psychological differences. Whereas a manager, you're successful if every minute of your day is filled with a time block because you are basically being successful when you're in manager mode.

Joe:

You can be both. But

Rick:

when you're I see

Joe:

where you're going. I see your mind thinking of, yeah. And when you're in a manager mode and a manager is like, yeah, I just had 15 meetings a day, and in each meeting, I got these things accomplished. It was a great day. That sounds awful.

Joe:

It does. But sometimes you have to do those. And so you should focus on what day you're gonna do those. And as a maker, your calendar might be blank because that's your opportunity to get things done and figured out. And the minute the manager sees your blank calendar, they figure they could just reach out to you.

Joe:

Yeah.

Justin:

You're not doing anything.

Joe:

Or you're not doing anything, or if I interrupt you, it's not doing anything. And he talks on this, and maybe we should put a link to this too on the show notes. He talks about how to think about this from both sides. Mhmm. So if you're a maker, you understand the manager and what they're trying to do, and then how you can focus your response to actually get your structured day back, your blank calendar so you can actually make things

Rick:

Yeah.

Joe:

And work with the managers to understand them as well. And it helps, like, cross the chasm with both sides.

Justin:

I love that. That's 1 of the things I do as well is, I block off my mornings and block off my Fridays from external meetings and everything. So the afternoons pile up, you know. But I realized early on that I'm most productive in the morning, so I can focus then. Because you guys know, like right now, I'm straddling between being a consultant and trying to build a product. And if I let 1 take over the other, you know, if I'm only building the product, which doesn't have revenue yet, I'm going to be poor.

Justin:

You know? Or if I let consulting take over, you know, which I'm guilty of, I'm not going to get any progress on that either. So

Rick:

It it reminds me of a thing that a very wise mentor said to me a long time ago, and I didn't even recognize the value of it till several years later. He's like, don't underestimate the value of white space.

Joe:

Mhmm.

Rick:

And it's like a design thing to a large extent when you're thinking about, oh, I'm making a flyer for something or whatever, or an email. But it's 1 of the things he said to me that applies to scheduling as well. Right? Oh, wow. Yeah.

Rick:

And and now I see it so clearly. Right? You need some white space. You need some stuff in the boundaries before things happen and after things happen because they improve the quality of the thing in the middle so much or allow for other things to happen in the margins. Absolutely.

Rick:

I really liked just the the phrase has always stuck with me, like, don't underestimate the value of white space.

Justin:

The 1 calendar app that I started using and everything, I think I talked to you about that. 1 of the features it has, I don't really use it for this, but you can actually set a limit to say, like, if I don't have at least 10 hours, you know, of outside meetings, block the rest off, you know, or something like that.

Joe:

That's the free Rise calendar.

Justin:

Yeah, Rise calendar, yeah.

Rick:

That's a super cool idea.

Justin:

I don't use it for that because I block off large slots already.

Rick:

You intentionally take

Justin:

the free time. Yeah, take that, you know, as in like large blocks with that. But they also do kind of inter-calendar syncing, where I have now, with my own email, 3 other clients that I'm on their Outlook with, and, you know, they expect to look at your free/busy to schedule meetings and everything. And the thing it'll do, like, as soon as 1 thing goes up on 1 calendar, it blocks off the others.

Rick:

Privacy for convenience and everything. Yeah.

Justin:

So well, the nice thing about privacy, it doesn't copy over the details. No. I know. It's not giving them details.

Rick:

Busy. Yeah.

Justin:

You know? No. I know. I know.

Joe:

You just give Rise

Justin:

access to all the details. Exactly. Yeah. They have access to all the details. But it's making my life easier because then 1 meeting invite goes up and 3 others

Rick:

get blocked.

Justin:

You know, have a day, which is awesome. I see

Rick:

the value for sure. Yeah. That's cool.

Joe:

So what do you think?

Rick:

I think this is great. I think this was great. I think this was great. Yeah. I think episode 2 is great.

Justin:

Yeah.

Rick:

Hopefully everyone else shoots up

Justin:

and keeps doing this. Cool. Cheers. And

Joe:

take us out. Cheers.

Justin:

Yeah. So thank you, everyone. Thank you for joining us. Don't forget to like, comment, and subscribe. Please let us know if you want any topics for us to address.

Justin:

We're having fun debating all these and everything, and come back later. Thank you, everyone.

Creators and Guests

Joe Wynn
Host
Joe Wynn
Founder & CEO @ Seiso | IANS Faculty Member | Co-founder of BSidesPGH
Justin Leapline
Host
Justin Leapline
Founder of episki | IANS Faculty Member
Rick Yocum
Host
Rick Yocum
Optimize IT Founder | Managing Director, TrustedSec