Exploring the Powerful Synergy of AI and Cybersecurity, Interview with Martin Stanley, Cybersecurity and Infrastructure Security Agency (CISA) [GovFuture Podcast]

Martin Stanley, CISA

In this episode of the GovFuture podcast we interview Martin Stanley, who is Strategic Technology Branch Chief, Office of Strategy, Policy, and Plans at the Cybersecurity and Infrastructure Security Agency (CISA). We discuss how emerging technologies, such as artificial intelligence and quantum computing, are impacting cybersecurity at CISA, how CISA collaborates with other federal agencies and industry partners to address cybersecurity challenges on a national scale, as well as how CISA stays ahead of the curve in terms of developing and implementing effective defense strategies.

If you enjoy listening to this podcast, please rate us on Apple Podcasts, Google, Spotify, or your favorite podcast platform. Also, if you’re not already, consider becoming a GovFuture member to take advantage of all the community has to offer, including access to a diverse network of government innovators, opportunities to collaborate with government agencies, exclusive access to events and resources, and a platform to have a voice in shaping the future of government innovation. To sign up, go to govfuture.com/join.

Show Notes:

Trimmed Episode Transcript: (note there may be transcription errors or mis-attributions, so please consult the audio file for any potential errors)

On today’s podcast, we’re so excited to have with us Martin Stanley, who is Strategic Technology Branch Chief, Office of Strategy, Policy, and Plans at Cybersecurity and Infrastructure Security Agency, also known as CISA. So welcome, Martin. Thanks so much for joining us today.

[Martin Stanley] Thank you for having me. This is a great concept here.

Yeah, we’re really looking forward to this discussion. We’d like to start by having you introduce yourself to our listeners and tell them a little bit about your background and what you do at CISA.

[Martin Stanley] Sure. Thank you, Kathleen and Ron. Thank you for having me.

Again, my name is Martin Stanley. I’m the Strategic Technology Branch Chief at CISA. In that role, I basically run our agency R&D program in conjunction with our Science and Technology Directorate. What that means practically is we work with our mission side to identify technology requirements related to capability gaps we have in meeting our mission, and we try to identify technology solutions that we can then transition back into our mission side, to increase our ability to serve our stakeholders, increase our capacity, or add new capabilities, all that kind of thing. It’s a really interesting job because we get to work with all the different mission-side folks, and we get to work with emerging technology experts and leaders.

And then we get to transition those capabilities back in, and it’s very, very rewarding. The other part of my job is serving as a subject matter expert on cybersecurity for artificial intelligence and also artificial intelligence for cybersecurity. I know there’s a lot of talk these days about artificial intelligence, but we’ve been working on AI as it relates to our cyber mission space for quite some time. In particular, we’ve long felt that cybersecurity is going to be important for AI systems, and that AI systems are also going to help fill some of those gaps that I talked about earlier. For context, as we talk about some of the current concerns, everyone’s focused on the large language models, but we think of cybersecurity traditionally in terms of three threat areas: confidentiality, integrity, and availability. At the very highest level, that’s the triad we always talk about.

And I think we’ve got great and extensive experience with confidentiality and availability kinds of concerns. Those are ransomware attacks, those are data breaches, and things like that.

And we’ve been working on those, denial of service and so on. Those have been the kinds of cyber threats we’ve been dealing with for quite some time. I think with artificial intelligence, we’re starting to get more of a flavor of what integrity attacks would look like: not necessarily being able to trust the output of the system. Up until today, we’ve been very trusting of the outputs of systems.

And I think now we’re going to be thinking a little bit more about: do we know, and how can we be sure, that this output is trustworthy?
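The integrity leg of the triad discussed above can be made concrete with a small sketch: a message authentication code lets a receiver detect whether a system’s output was tampered with after it was produced. This is a minimal illustration, not anything CISA-specific; the key and messages here are invented for the example.

```python
import hmac
import hashlib

# Hypothetical shared key, for illustration only; real systems would
# use a properly provisioned secret.
SECRET_KEY = b"example-shared-key"

def sign(message: bytes) -> str:
    """Produce an HMAC-SHA256 tag so a receiver can verify integrity."""
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Recompute the tag; compare_digest avoids timing side channels."""
    return hmac.compare_digest(sign(message), tag)

original = b"system output: approve request"
tag = sign(original)

assert verify(original, tag)                     # untampered output passes
assert not verify(b"system output: deny", tag)   # any modification is detected
```

The point of the sketch is that integrity is checkable: rather than implicitly trusting output, the receiver verifies it against something the producer committed to.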

[GovFuture] Yeah, that’s a really big and fundamental idea that we’re talking about. As you know, one of the great ideas in cybersecurity was zero trust, which really has more to do with the philosophical approach and how we deal with data and trusted systems. But now we’re realizing that maybe zero trust should apply to people too.

You know, we can’t necessarily trust everything we see, hear, or read. All of that is now so easily manipulated. And it’s really becoming harder and harder to tell, because we’re training these systems, of course, increasingly on the real outputs and real interactions of humans.

So it becomes harder. But we could definitely dive into that. One of the things we’ve also noticed is that traditionally, when we talk about technology and IT, we tend to group our conversations by technology area. We’re talking about databases, so let’s have that conversation over here. We’re talking about AI, so let’s have that conversation over there.

We’re talking about cybersecurity, so let’s have that conversation somewhere else. Sometimes almost literally: we’ll go to events where there are different rooms having these different conversations.

But we’re seeing these are actually merged conversations now. This idea of stovepiped technology increasingly makes less and less sense, because we’re using AI for cyber, we’re using big data for AI.

We’re using analytics for this and automation for that. So we’re really starting to see less in the way of differentiation from that perspective. So maybe from your perspective: how are you seeing emerging technologies, AI, even quantum, which we like to speculate about a little bit from time to time?

How are they impacting cyber at CISA? Or just in general, your thoughts on all these different technology areas. Maybe it’s making it even harder to deal with everything all at once.

[Martin Stanley] Well, Ron, that’s an awesome question, and it’s really interesting. We’re seeing two things. The first is a convergence amongst the requirements. I talked about looking across our entire mission space for technology gaps and how we fill those gaps.

Initially, five, six, seven years ago when we were working on this, they were very different requirements. CISA has multiple mission spaces: we obviously have our cyber mission space, but we also have critical infrastructure protection and emergency communications. One of the more rewarding things we’ve been seeing over the years has been this convergence of the actual long-term technology requirements that we have. So now we’re seeing analytic needs out of all of our different areas. The second aspect of what you mentioned, which is really interesting as well, is this concept of adjacent technologies. Everyone’s focused on AI, but there are all these adjacent technologies, whether it’s data protection, automation, robotics, sensors, all that kind of thing. All the focus is on AI, but it’s all those other components that are also going to be necessary in order to get the full promise of that technology.

So how do we look at this? Well, we look at all of these technologies, whether it’s artificial intelligence, quantum, and on and on, from three perspectives. We look at how we can further our mission, how we can meet those technology gaps that we have. We’re very concerned about how our stakeholders are going to adopt and use these kinds of technologies, in the sense that it will change the attack surface we need to assist them in protecting. And lastly, we’re very concerned as well with how our adversaries can leverage emerging technologies to change that threat landscape.

So I’ll kind of pause there on that aspect of it. But it’s those multiple perspectives that inform how we want to think about moving forward with these capabilities. Within these adjacent technologies, and I always say this, this is my standard disclaimer, and we’re going to get into it as we talk about trustworthy AI: as an agency and as a department, I’ve been very impressed since I’ve been here over the last 10 years by how focused we are on the appropriate, lawful, authorized use of the data that we collect, for the purposes for which we collect it. We’ve got a lot of controls around that, and we want to make sure we continue to instantiate those controls so that we can preserve our reputation and the good relationships that we have with our stakeholders. On the AI front, we’ve been working at the interagency level since the beginning, from the trustworthy AI executive order to preserving American leadership in AI. We’ve been working with the foundation of all of those kinds of concepts.

And it’s really important: NIST just released the AI Risk Management Framework. I encourage listeners to check it out if they haven’t. It’s an easy read, it’s approachable, and they’ve done an incredibly nice job of breaking down what trustworthiness means. I think it really comes back to the fact that these systems are about people.

And people need to trust these systems. Then on the quantum side, I think you’re probably familiar with all the work that’s going on, but for the listeners who are not aware, there are multiple quantum initiatives. They’re, I think, a lot easier to understand than the AI initiatives, just because they’re simpler and more focused. NIST has the lead on developing quantum-secure algorithms.

Obviously, the biggest threat quantum computing presents is the ability to break some of our traditional encryption algorithms, and we’re, I think, in the final phases of testing and validating some of the finalists. Then on the CISA side, our function under FISMA is to work with federal agencies to help them implement their cybersecurity programs and develop capabilities. We provide services, all of that kind of thing, to bolster and improve the federal cybersecurity posture. And we’re working at the interagency level to identify and quantify the threat space by assisting agencies with their cryptographic system inventories.

[GovFuture] Yeah, there’s so much going on and unpacked in that. What we always say, and I like that you’re reiterating it, is that you shouldn’t just stay in a silo or stovepipe: okay, this is cybersecurity, so I don’t need to worry about anything else; I’m just focused on automation; I’m just focused on AI. Because everything comes down to data, right? And some of these conversations that we’re having about trustworthiness are things that, even at a high level, people should know and at least kind of understand.

I think the past decade or so has really brought more awareness of data and your data footprint in general: how your data is being used, how quickly data is being generated. And these systems are here for people, right? We’re not here for the systems. I hope. I don’t know. To your point, the people can get lost in all of that, and I think that leads to Kathleen’s next question.

[GovFuture] Yeah, and I think people do sometimes get lost in this too, right? If we talk about data, you have to ask: at the end of the day, what is that data representing? It’s representing a real human being.

And don’t forget that, because when you approach the topic saying this isn’t just “data,” this is a human being, or this is something I’m actually representing, it helps you think about it in a better light. So, CISA is America’s Cyber Defense Agency. How do you assist in dealing with some of these cybersecurity incidents? Maybe talk about how you approach that, and also how you collaborate and work with other agencies.

[Martin Stanley] Sure. So CISA has long been, with its primary roots going all the way back to the initiation of our division, the National Cyber Security Division at DHS, an incident response shop. So we have a long, long history of working with stakeholders in a trusted way to assist them in responding to the evolution of the cyber threat. As America’s Cyber Defense Agency and the National Coordinator for Critical Infrastructure Security and Resilience, CISA leads the national effort to understand, manage, and reduce risk to the cyber and physical infrastructure that Americans rely on every hour of every day.

I’m obligated to say that because I think that’s our tagline. But specifically, what we do is respond to particular incidents that are reported to us. Congress just passed CIRCIA, the incident reporting legislation, which requires certain entities to report cyber incidents to CISA for three purposes. One is for us to render aid. The second is for us to understand how others could potentially be at risk. And the third is to develop responses to these campaigns. In that regard, we led, for example, the nationwide response to Log4j and the nationwide response to SolarWinds, right?

All of that was led by our agency. So we provide resources, and we do a lot of trend analysis. That gets back to the AI discussion: we can potentially do a lot better and a lot faster with the data we have coming in from different sources to identify potential campaigns, and then quickly share that with the nation’s cyber defenders in an effective way. So primarily that is our function, to lead those responses. But we can do it at the tactical level and also at the broadly strategic level.

[GovFuture] Yeah. I was spending some time looking at CISA’s website, especially on the critical infrastructure and resilience side, and it’s both impressive and maybe even overwhelming how much infrastructure we’re talking about here. For those who aren’t familiar, it’s the chemical sector; commercial facilities, including things like stadiums; the communications sector; critical manufacturing; dams; the defense industrial base; emergency services; energy; financial services; food and agriculture; government facilities; healthcare and public health; IT; nuclear reactors, materials, and waste; transportation systems; and water and wastewater systems.

In all of these, CISA has a role in fundamentally coordinating and managing the risk, because all of those are so critical. We almost forget how fragile society could be if any of those critical bits of infrastructure were removed. We can see what happens in other parts of the world where those things are damaged, destroyed, or made more difficult. And of course, when I talk about these things, I’m thinking there are lots of other agencies with a role to play: the Department of Energy, DOT for transportation, HHS in healthcare, Treasury in financial services, and so many more. So the big question is, given the interconnected nature of not just the cyber threats but these physical infrastructure threats as well, how do you collaborate with these other agencies, and perhaps industry partners, solution providers, technology providers? I think of electricity infrastructure and the big companies involved there. How do you deal with that?

[Martin Stanley] How do you coordinate? Well, this is a really great question, and I think it’s one of our core roles and one of our core strengths. We have a unique view as a result of our engagement with all these various stakeholders.

It’s a very important position as well. Being able to identify and coordinate not only within a particular critical infrastructure sector but across all of them is something that is in the DNA of our organization. We recently created the Joint Cyber Defense Collaborative, or JCDC. It’s an emerging organization within CISA, which was established as an agency in 2018. The JCDC is comprised of organizations across the public and private sectors that are focused on reducing national cyber risk.

It’s not just a CISA thing. This is an open and operational forum and planning group where federal agencies and critical infrastructure partners engage as partners. So this is a new thing, but in many ways it’s an old thing that we’ve been doing. It’s very coordinated, broadly supported, well funded, all of that, to encourage collaboration. In the last year, it’s become operational, built upon existing relationships, to maintain persistent collaboration between industry and government.

And this is for the purpose of developing joint cyber defense plans and improving real-time information sharing, planning, and exercising on national threats to reduce risk. I think this is really where we should pause to talk about why that’s so important. It’s because everything is interconnected, as we just discussed. These requirements used to be unique and focused on one particular vertical, maybe of no concern to others; now these vulnerabilities are broadly shared. As I mentioned before, this was the group that coordinated the global response to the Log4Shell vulnerability. And we’re also working to help prepare for and respond to malicious cyber activity related to, obviously, the unfortunate Russian invasion of Ukraine.

[GovFuture] Yeah, I mean, that wasn’t great, but we like to hear about how you collaborate and work with people both in the U.S. and outside the U.S. as well. And unfortunately, there’s just the ever-growing sophistication of cyber attacks. So how is CISA staying ahead of the curve in terms of developing and implementing effective defense strategies?

[Martin Stanley] Boy, what a great question. I think it comes down to something we’ve actually just announced: the Secure by Design, Secure by Default initiative, which is part of the new national cyber strategy.

Thank you. As part of that effort, we recently released principles and approaches for secure by design and default with our international partner nations. This is available on our website: search for “secure by design, secure by default” and CISA, and you’ll be directed to a page with all these different resources. But to sum it up, the design tactics include things like memory-safe programming languages, a secure hardware foundation, secure software components, code review, and SBOM, which is a big thing for us. We have a huge initiative with our Silicon Valley partners to enhance SBOM capability for the industry, plus vulnerability disclosure and lots of additional considerations around the design of technology. Secure-by-default tactics are, I think, a little more focused on your out-of-the-box configuration: not having default passwords, enabling single sign-on by default, secure logging, secure authorization profiles, and making things easier by reducing hardening guide complexity.

And there’s much more in there. Those are approaches we’re going to be working on very closely with industry partners, who are best resourced to manage and implement those capabilities and take the burden off the highly vulnerable, resource-challenged folks out there, who have a very large attack surface and are generally not very well funded, right, while using this kind of technology. Those are typically what represents the soft underbelly of our attack surface.
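A few of the secure-by-default tactics listed above (no default passwords, single sign-on on by default, logging enabled) lend themselves to an automated check. Here is a hypothetical sketch of what such a check might look like; the configuration keys and finding messages are invented for illustration, not drawn from any CISA tooling.

```python
# Passwords a product should never ship with enabled.
DEFAULT_PASSWORDS = {"admin", "password", "changeme", ""}

def check_secure_defaults(config: dict) -> list:
    """Return a list of findings; an empty list means the
    configuration follows these secure-by-default tactics."""
    findings = []
    if config.get("password") in DEFAULT_PASSWORDS:
        findings.append("default or empty password still set")
    if not config.get("sso_enabled", False):
        findings.append("single sign-on not enabled by default")
    if not config.get("audit_logging", False):
        findings.append("security logging disabled")
    return findings

# A product shipped with a default admin password and SSO off
# would produce two findings out of the box.
shipped = {"password": "admin", "sso_enabled": False, "audit_logging": True}
print(check_secure_defaults(shipped))
```

The idea behind secure by default is that a check like this should pass on the factory configuration, before the customer touches a hardening guide at all.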

[GovFuture] Yeah, I think so. And just like in the, I’d say traditional, well, nothing’s traditional anymore, but the traditional world of defense, we learned that it’s not always about large government actors with large armies causing problems. It’s asymmetric warfare and asymmetric issues. You can have small actors, but when they have access to advanced technology, they can cause all sorts of problems. And I think that’s one of the biggest things we’re discussing: everything now is a cross-domain, multi-technology, no-stovepipes-anymore issue, where someone can take a little bit of automation, a little bit of AI, a little bit of big data, some analytics, this and that, and do something that previously only somebody with significant resources could do. That means people now have to become much more aware of all these things, which is a little overwhelming. It’s hard to become an expert in everything. That’s probably literally impossible, but there has to be organizational knowledge.

It’s like, well, an individual may not need to, but somehow the organization needs to have that awareness, which is really difficult. So as part of that, we have not only these new technology areas, and we’re techies, we love technology, but not everything is a technology solution, right? We have people, process, and technology, as we always say, and the people are incredibly important. And of course, the other little extra bit of magic, perhaps, is process: new approaches to doing things, new ways of doing things, maybe getting rid of some older ways. In cyber, one of those is zero trust, which is really more of an approach. It’s an approach to thinking about, designing, and managing systems so that we have an expectation, or perhaps a lack of expectation, of trust, and then we build systems in a particular way. So maybe you could talk a little bit about some of those new approaches that have really started impacting the way you’re thinking about and applying everything in the organization’s mission around critical infrastructure resiliency and security.

[Martin Stanley] Yeah, sure. So zero trust is a very important part of our overall cybersecurity approach. The CISA Zero Trust Maturity Model was recently updated and released, and I think there was a request for comments.

So that’s been in the last couple of months. And Ron, I really liked your point that zero trust is not a product that you go buy, right? It’s a design approach, and it’s a trust model across your users, your systems, and your data. It’s an approach that generally should leverage a lot of the capabilities you already have. I’ve been asked in other forums what folks should do if they’re just starting out on zero trust. The first thing they should do is talk to their existing vendors and look at their existing tools and capabilities, and find out how much of that they can use, because most of these controls are just repurposing existing security controls that folks already have. Back in one of my old positions, when I ran the cybersecurity assurance branch in our Federal Network Resilience division, we were working very closely with OMB on executing the High Value Asset program.

You’ve probably talked about that here, and if you haven’t, it’s probably worth having one of our HVA experts on. What we found during the analysis of all these really critical, important, and high-notoriety federal systems, and I won’t name names here, but the ones that you would all know, and certainly agency missions, was that while there was a lot of effort to secure these systems and apply security controls, it was the way in which the security controls were applied, not the lack of security controls, that contributed to vulnerabilities and weaknesses. So it’s not just having those security controls, it’s applying them in a prescribed manner. I think our Zero Trust Maturity Model is a great resource for helping folks look at the application of their existing security controls and identify what additional controls they need to meet their security objectives. From my perspective, there’s a lot out there now on zero trust, but most organizations are still beginning their journey, and the fact that we have a lot of resources out there is really great. I would just encourage folks to be careful about getting led down the wrong path, like being told they have to throw out all their existing investments.

That shouldn’t be the case. Our CDM program was designed to evolve to accommodate new security architectures, and most of those tools are reusable for that.

[GovFuture] Yeah, perfect. And we found a link to that; I will make sure to link to it in the show notes for anybody who wants to check it out. It looks like a really good read. Also, I like to hear you say “learn from others.”

That’s something we say all the time. This is probably not a unique problem, definitely not specific and unique to you. Sometimes you just need to look one or two steps out and say, okay, maybe they don’t have my exact problem, but what is very similar, and what can I learn from it? Because we always say: learn from others, right? Don’t reinvent the wheel. You don’t need to. Someone else has probably done it, gone through all the pain points, and solved this, or is at least on their way to solving it.

So see what they’ve done. We also touched on a lot of topics on today’s podcast, a lot of those big focus areas: AI, cybersecurity, zero trust, IT modernization, all of this. This was such a wonderful podcast. I’m sure our listeners got a lot out of it. I know that we did. We always like to wrap up the podcast by asking the same question.

And we always get varied answers, because you’re able to bring your own knowledge, information, and background into it and really answer it as your own. So what do you see, or hope to see, as the future of technology and innovation in government?

[Martin Stanley] Well, it has been a whirlwind tour of technology topics.

But to kind of sum it all up: the future has got to be about how this technology integrates to support people. And specifically, as someone who works for the government, whose agencies are responsible for serving the American people, it should make accessing government and interacting with government easier. I think that’s probably the number one focus. If you talk with CIOs from other agencies, I think they’re doing a lot of work towards that, and we certainly have a long way to go. But we can’t lose sight of human-machine teaming. It’s very important to consider that as we bring in automation, we bring in new capabilities.

It creates other opportunities for the humans. We’re not looking to replace people. We’re not looking to have everyone interact with machines. I think we’re going to be freeing up people to do a lot more of the work that people are good at.

I think we need to go back and think about what those things are. But one of the things that people are really good at is working with other people. Generally speaking, that’s probably a great vision for the future: we’re going to be enabling better interaction for people with these broader systems, meaning the various kinds of programs the government runs for the public’s benefit. I think we’re going to see a lot more automation. And I think we’re going to have to be very careful not to send the message that we want to slow down or restrict innovation, simply because that’s not going to happen.

The reality is, if you try to get in the way of innovation, it’s just going to happen anyway. Then we’ll be creating alternate shadow IT; we’ll be creating vulnerabilities that we’re not aware of. So we have to embrace it. We have to challenge ourselves to get smarter and better at it. And there’s a lot of that kind of attitude among the folks I work with on a day-to-day basis, so I’m very optimistic about that.

So not losing sight of the fact that we’re doing this to actually serve people is probably the best way to leave this conversation.

[GovFuture] Fantastic. It’s been a great conversation, and I’m keeping an eye on the clock here because I know that we could probably go on for hours. We probably will.

So we might have you back, if you’re open to it, on a future podcast to show off more of what you do. In addition to this, we have all these opportunities to get together, both online and in person. We run our GovFuture Forum events in DC at George Mason University. For those in the region, you can come join us. And we have demos.

We actually do a lot of show and tell, and we also have hot-topic panels where we talk about hot topics. So you’re welcome to join us there. For those of you listening who are not in the DC region, we haven’t forgotten about you. We know you.

We love you guys. Many of you have been with us since our AI in Government days, which is where GovFuture evolved from.

For those of you who don’t know, AI in Government was the start of all this back in 2019. We were in person at George Washington University, and then of course the pandemic happened, as it did everywhere, so we moved the series online. Then we realized, hey, there’s this global government community, and they want to talk about more than AI.

And that’s the evolution of GovFuture. So we have opportunities both in person and online. Stay tuned, everybody.

As Kathleen told you, if you’re interested in all those things, just go to GovFuture.com, and go to /join if you want to take part in our member activities, which we are going to do more and more of. So without further ado, I just want to thank you so much, Martin. You’ve been fantastic sharing your insights with our audience, and we’re grateful to have you on our podcast.

[Martin Stanley] Thank you so much. I will check out your GovFuture site.

[GovFuture] Wonderful.

Yeah, and thank you so much. This was such a great conversation. As I mentioned, we talked about so many different topics, so hopefully our listeners got a lot from this. And we have a lot more interviews lined up.
