Caveat Podcast: The Startup Leading AI Security in the UK with Mindgard's CEO
Discover the latest insights on AI security with Dr. Peter Garraghan, CEO of Mindgard, in this podcast episode. Learn about threats, solutions, and how Mindgard can secure your AI systems.
Our co-founder and CEO, Peter Garraghan, recently joined Dave Bittner on the Caveat Podcast on The Cyber Wire. They discussed the UK's recently published AI security guidelines and the recommendations Peter made for addressing cybersecurity risks in AI.
About Peter Garraghan
Dr. Peter Garraghan is CEO & CTO of Mindgard, Professor in Computer Science at Lancaster University, and a fellow of the UK Engineering and Physical Sciences Research Council (EPSRC). An internationally recognised expert in AI security, Peter has dedicated years of scientific and engineering expertise to creating bleeding-edge technology to understand and overcome growing threats against AI. He has raised over €11.6 million in research funding and published over 60 scientific papers.
About Mindgard
Mindgard is a cybersecurity company specializing in security for AI. Founded in 2022 at world-renowned Lancaster University and now based in London, Mindgard empowers enterprise security teams to deploy AI and GenAI securely. Mindgard’s core product – born from ten years of rigorous R&D in AI security – offers an automated platform for continuous security testing and red teaming of AI.
In 2023, Mindgard secured $4 million in funding, backed by leading investors such as IQ Capital and Lakestar.
Next Steps
Thank you for taking the time to listen to the podcast episode.
Test Our Free Platform: Experience how our Automated Red Teaming platform swiftly identifies and remediates AI security vulnerabilities. Start for free today!
Follow Mindgard: Stay updated by following us on LinkedIn and X, or join our AI Security community on Discord.
Get in Touch: Have questions or want to explore collaboration opportunities? Reach out to us, and let's secure your AI together.
Please, feel free to request a demo to learn about the full benefits of Mindgard Enterprise.
Audio Transcript
Title: The startup leading AI security in the UK
Speakers:
Dave Bittner
Ben Yelin
Dr. Peter Garraghan
START OF AUDIO
Dr. Garraghan:
People understand, or policy people understand, leaking data or losing data is bad. They understand that if someone can steal my AI, that's also bad. So there's less emphasis on the technical mechanisms to achieve this, more about emphasizing how it works, and more the recommendations required to mitigate that.
Dave:
Hello everyone and welcome to Caveat, N2K Cyber Wire's privacy, surveillance, law, and policy podcast. I'm Dave Bittner, and joining me is my co-host Ben Yelin from the University of Maryland Center for Health and Homeland Security. Hey Ben.
Ben:
Hello, Dave.
Dave:
On today's show, I examine the troubling challenges of regulating deepfake porn. Ben looks at a brand new appeals court decision on geofencing, and later in the show, Dr. Peter Garraghan, CEO of Mindgard, discusses the UK's recently published AI security guidelines and the recommendations he made for addressing cybersecurity risks in AI. While this show covers legal topics, and Ben is a lawyer, the views expressed do not constitute legal advice. For official legal advice on any of the topics we cover, please contact your attorney. Ben, before we dig in here, we got a kind note from one of our listeners. This is a listener named Kevin who wrote in and said, dear Dave and Ben, most importantly, Ben is one of the few lawyers I respect. There you go, Ben.
Ben:
I'll take it.
Dave:
It's pretty good.
Ben:
That's honestly one of the best compliments I've ever heard for myself.
Dave:
There you go.
Ben:
I'm already a fan of yours, Kevin.
Dave:
Faint praise. It's a low bar.
Ben:
Well, literally right. There's no shortage of lawyer jokes. But as another good friend of mine who's a lawyer says, everybody jokes about lawyers until they need one.
Dave:
Totally true. Kevin goes on and says, not being a lawyer, my contact with the law has been as a trial consultant, and he says, I know what you're thinking. He says, having said that, I listened to Caveat where the Chevron deference was discussed. I'm for the change. I think there's a natural bias on the part of the experts to lean in favor of the agency. It has always appeared to me to be a conflict of interest. It would be interesting to know how many times the experts have sided with the agency. Now, could not the court, as it does in many cases, call its own experts who would have a more objective view of the matter? I have seen and been called to testify as an expert witness on a case in which I was not involved. This just seems like a better solution to balance the regulatory system. What do you think here, Ben?
Ben:
I think Kevin makes an interesting point. It's a really interesting point. I think it's a widely shared perspective that Kevin is putting forth here. I'm certainly glad that he wrote in. I think I was probably more critical of overturning Chevron than perhaps our median listener, or at least our median listener who knows what the heck Chevron is. As a case, I guess I don't see it the way Kevin does. I don't think agency personnel have any inherent bias. I mean, they are civil servants. They are there in the public interest. Certainly there are instances where there are probably financial incentives in terms of getting grants or getting on the right side of certain private enterprises, that sort of thing. But I think for the most part, just by the nature of being public servants and working within federal agencies, they are interested in what they see as the optimal policy.
Dave:
But don't you think… Trying to take maybe Kevin's side here—someone who has taken a lifetime position with the EPA is not likely to be a drill-baby-drill person. I mean, do you think there's an inherent-
Ben:
I don't think that's necessarily true.
Dave:
Don't you think there's an inherent bias in who would be attracted to work for particular agencies?
Ben:
There's a natural bias toward that. I think people who work for agencies who are in civil service… Granted, there are some political appointees, and that's different. But if you are a civil servant, you generally believe in the agency's mission. The agency's mission, I mean, I guess it depends. Everything is partisan in a sense. But for a lot of these agencies, the mission is quite non-partisan. I mean, a lot of the ones we never talk about, it's basically just processing old people's healthcare paperwork, and decisions about what Medicare should and should not cover based on some statute that was enacted 50 years ago. I mean, I certainly understand the perspective. Just, to my knowledge, having agency expertise cut out of the process entirely is an unwise decision, and one that I don't think reflects what my view is of both the Administrative Procedure Act and the proper role of the executive branch in developing regulatory policy. But I do think it's a totally valid disagreement. I do think courts are going to start to call in more outside experts. In a sense, those outside experts will be more unbiased because they don't have a direct stake in the outcome. But I still think it's giving away the leverage and the power that agencies have to make decisions without going through arduous, perhaps years long litigation, which is going to tie up what could be necessary regulatory policy. That's my view on it. I think we have a respectful disagreement there. Certainly Kevin's view is shared by many of our listeners and many people who are waiting 40 years to see Chevron overturned.
Dave:
Well, thank you, Kevin, for writing in. We do appreciate it. Of course we would love to hear from you. If you have something you'd like us to consider for the show, you can email us. It's caveat@n2k.com. Well, let's jump into our stories here. Ben, why don't you start things off for us?
Ben:
Even though the Supreme Court takes a break over the summer, our appeals courts over the many federal circuits do not take a break. We got a very high profile landmark decision on geofence warrants from the Fourth Circuit Court of Appeals, which is located right here in the Mid-Atlantic. This is actually a case that we had discussed previously when it went up for oral arguments. It's always good to come back to these and see what the decision was. This is the case of Okello Chatrie v. The United States. Mr. Chatrie pled guilty in May 2022 to robbing a bank after a district court refused to suppress evidence on his location obtained from Google. Law enforcement went to Google and asked for a geofence warrant: all of the cell phones that were in the area of this bank at the time that the robbery occurred. They were able to do some investigative work. They figured out that his device was there and that he was the one who likely committed this crime. He sought to suppress that evidence saying it violated his Fourth Amendment rights, and the District Court refused to suppress that evidence. Mr. Chatrie appealed this to the Fourth Circuit Court of Appeals, and the Fourth Circuit agreed with the District Court, not necessarily for the same reasons, which I'll get into, but they agreed with the District Court that this evidence should not be suppressed, that Mr. Chatrie does not have a reasonable expectation of privacy as it relates to geofence warrant data obtained from Google, and therefore, this is not a Fourth Amendment search that entitles him to the protections of the Fourth Amendment. In the view of this three-judge panel on the Fourth Circuit, a geofence warrant is unlike the information obtained in Carpenter. In the Carpenter Supreme Court decision back in 2018, that was about historical cell site location information of one individual spanning a period of seven days. That's a lot of information in the view of the court. You can put together a mosaic of a person's life pretty easily and understand their private affiliations and associations by looking at seven days of historical location information. The other factor in Carpenter that I think distinguishes it from this case in the view of the majority here, is that in Carpenter, there was no opt-in. Mr. Carpenter merely turned on his cell phone and law enforcement obtained data from his cellular provider, which collects data on his location without him having to opt into anything. Chatrie—and this is an apocryphal tale—agreed to opt in to Google, I believe, through Google Maps collecting his data. He pressed the I Agree button. He opted into sharing his location data with Google, and I think in the view of the court, that not only was a forfeiture of his reasonable expectation of privacy, but there's not enough information in these geofence warrants to create that type of broad mosaic view. A relevant precedent case in the Fourth Circuit is a case I believe we've also talked about on this podcast: Leaders of a Beautiful Struggle versus the Baltimore Police Department. I don't know if you remember those spy planes they used to fly over Baltimore, and they were taking real time pictures? It was a crime fighting tool. The Fourth Circuit held in that case that the data obtained through those low-lying planes did violate the Fourth Amendment because it could create kind of a dossier on a person's private movements. It did create that mosaic that led to Fourth Amendment protection.
In this case, they're distinguishing the geofence warrant from the data collected in the spy plane case. This is a two to one decision. The majority was written by Judge Richardson, an appointee of President Trump. He was joined by a long time Fourth Circuit Judge J. Harvie Wilkinson, who was a Ronald Reagan appointee. The dissent was written by an Obama appointee. I mention this because there could be a move on the part of Chatrie to get this reheard en banc, so in front of the whole Fourth Circuit Court of Appeals. The Fourth Circuit Court of Appeals is slightly Democratic-leaning in terms of which judges were appointed. It's possible that this decision could get reversed if it went to the whole Fourth Circuit. Then the last thing I'll say is, one of my favorite scholars, who I've talked about a million times on Fourth Amendment technology stuff, Professor Orin Kerr of the University of California, Berkeley, thinks that the judges came to the right conclusion here, that this was not a search for Fourth Amendment purposes, but that they did so for the wrong reasons. I think many other scholars are very critical of the so-called mosaic theory, and they don't see how it's a workable standard. At what point does a series of pictures become a mosaic? I think that's very unclear. There's no bright line in terms of forming that mosaic. How do we distinguish the information collected here, the quality and nature of that information, from that collected by the Baltimore spy planes? I think that's a really difficult question. Long story short, in the jurisdiction of the Fourth Circuit, which includes me and you here in Maryland, law enforcement does not need to obtain a warrant to get geofence data. I will say that Google itself claims to have voluntarily discontinued collecting the data that it would submit in response to a request for a geofence warrant, so this seemingly no longer applies to Google. Google has taken a step to protect their own users from geofence data requests. But there are a lot of other companies that collect your location information that have not made that promise that Google has made, including a lot of the apps that you would never think collect your location, but they clearly do. When I want to order my Dunkin' Donuts, it finds the nearest Dunkin' Donuts to me, and it does that by collecting my location data. I do think this will have a broader application, even though Google, which probably has the largest market share on location data, is no longer going to be sharing that information with law enforcement.
Dave:
My perception of this—and help me understand if I'm on the right track here—is that we're kind of making a distinction between coming at this from two different directions. As you say, in Carpenter, it is identifying an individual and tracking that individual over a period of time to see where that person went. In this case, it seems like we are interested in a location, a snapshot of a location, a robbed bank, and we're saying, “Who was in this one place at this time,” to see if our suspect happened to pass through this area?
Ben:
That's exactly right, and that is a major difference. The problem is, Carpenter, the decision itself didn't really come up with a workable standard for circumstances that are not exactly identical to Carpenter. Some people have interpreted Carpenter as creating a multifactor test. You look at the nature and quality of the information, the length of the collection, whether the defendant voluntarily opted into the collection. I had a student write a brilliant paper on this arguing that implicit in the Carpenter decision is a multifactor test. Professor Kerr, and I think other scholars, have said there is no multi-factor test. If they wanted to create a multi-factor test, they could have done so in that case, or in a future case. The court basically has not taken any additional Fourth Amendment cases on any topic since Carpenter. It's amazing for us. We've still had five-plus years of podcast content without the Supreme Court taking any Fourth Amendment decisions. But I think the reason that it's so hard to distinguish all of these cases is that the ruling in Carpenter has kind of created a Wild West in this field of jurisprudence where courts are just kind of doing their best to analogize the circumstances of the cases in front of them to Carpenter and it's really hard to try and create some type of generalized rule based on exactly what was said in that majority opinion. I think this is the Fourth Circuit's interpretation of Carpenter and how it would distinguish Carpenter based on the facts in the case at hand.
Dave:
Have we just not seen enough disagreement among the circuit courts for some Fourth Amendment issue to make its way to the Supreme Court?
Ben:
I think there, frankly, has been enough disagreement, not only on this issue, but on some of the other Fourth Amendment issues we've covered. For whatever reason, we're on a long streak of the Supreme Court just not taking up the issue. They don't have to explain why they don't grant certiorari in a case. Sometimes they decide to explain it, but for whatever reason, we've just been on a long drought of having any Fourth Amendment cases in front of the Supreme Court. They seem content to let lower courts argue and wrestle over what exactly Carpenter means without having to clarify it. I suspect that as long as chaos doesn't break out in the streets, that might continue for some time.
Dave:
Interesting. Well, we will have a link to that story in the show notes. My story comes from IEEE Spectrum, which is kind of the journal of the IEEE, the Institute of Electrical and Electronics Engineers. They are, I believe, the world's largest technical professional organization that deals with issues in technology and so on and so forth.
Ben:
Frequent source for us, actually.
Dave:
Yeah. They’ve certainly been around for a long time. I think they're generally well respected. Their journal is called Spectrum, and they have a story here looking at some of the challenges that we're facing when it comes to dealing with deepfake porn as a society. Of course, with deepfake porn, we have this technology to generate deepfakes where we can basically paste someone's face onto someone else's body or, with only a handful of images of someone, recreate them doing things that they never actually did. This naturally, humans being humans, leads to people making apps and technology that can make pornography this way. These apps are not new. They've been around for several years now. There was a study back in 2023 from an organization called Home Security Heroes who found that if you have one clear image of a face, it takes less than half an hour to create a 60-second deepfake porn video, all for free. Obviously, this has all sorts of implications. We've seen stories of this trickling down to kids in school making videos of their classmates, so on and so forth, which of course has all kinds of additional implications of people being underage. There was a high profile incident where someone created some deepfake images of Taylor Swift, and one of those got 47 million views before it was removed. Interestingly, and I suppose not surprisingly, this article points out that 99% of the victims are women or girls; I guess there's no surprise there. That's kind of where we stand right now. These apps are readily available. Many of them are free. We're faced with the challenge of what to do about this. Before I dig into some of the other details here, Ben, what do you think of what I've laid out here so far?
Ben:
It's such a vexing problem to try and solve. There are so many different issues at play. From a legal perspective, you don't want to unnecessarily suppress First Amendment speech, if deepfakes have some type of satirical value or are part of a political message. There's a question of whether to allow deepfake images, if there is some watermark or some warning that tells people that this was created through the use of deepfakes. Then there's the question of, is there a proper policy solution when it's so easy to create the deepfakes in the first place? Once they're out there, they're out there. The ability of our legal system to respond lags behind the capability of the smartest and brightest minds who are creating these deepfakes and posting them on the internet. Even if somebody is successful in getting a deepfake video taken down, there's a time lag, and 47 million people will have seen the video. I just see this as a very vexing issue. We've been dealing with it here in Maryland after an incident we talked about on this podcast where a principal in a high school in Pikesville, which is a suburb of Baltimore, was suspended from his job because a video going around purported to show him saying racist, anti-Semitic things. Turns out it was a deepfake. The prosecutor who's trying to go after the person who created and distributed that deepfake kind of said he's doing his best. He did levy some charges, but his hands are tied. There's not a legislative solution to that particular problem. A lot of states have started to take action to criminalize deepfakes, but usually they're siloed within certain subjects like political deepfakes or deepfakes of a sexual nature. I think we're a long way from coming up with an all-encompassing solution to this problem.
Dave:
This article points out there's a woman named Susanna Gibson, who started an organization called MyOwn after she was victimized by a deepfake ordeal during a political campaign. She lives in Virginia and she was able to successfully push for expanding Virginia's revenge porn laws. This article points out that while there hasn't been much activity on the federal level, 49 states and the District of Columbia have some form of legislation against the non-consensual distribution of intimate images. Where we stand here in Maryland, Ben, we have laws against this, right?
Ben:
Yeah. We have enacted statutes through the Maryland General Assembly. There are a couple that we were pushing last session which didn't get enacted, one dealing specifically with misinformation in the political context. I testified at the hearing for that one. But I do think, despite the laws that have already been enacted criminalizing the distribution of sexually based deepfakes, it still leaves a lot of gaps. It still doesn't provide a great level of recourse for the people who've been victimized by the creation of these deepfakes.
Dave:
Let me ask, granting and acknowledging and having tremendous empathy for the people who are victims of this, is there a First Amendment issue here? Specifically, I'm thinking of—and this is not a perfect analogy, and this took place before the era of deepfakes, but do you remember probably 10 or 15 years ago the Daily Show put out a book called America. Do you remember that?
Ben:
I had that book, as probably most 18 to 25 year olds did at that time.
Dave:
Well, what I'm reminded of is, in that book, they had artistic images of the Supreme Court in the nude. They said something like, “Here's the Supreme Court, we have stripped their dignity.” Obviously done for comedic purposes, but protected speech?
Ben:
I think it is.
Dave:
Is that because it's the Supreme Court?
Ben:
I think that is a huge part of it. It does carry some type of satirical political value in making a statement.
Dave:
It was not photorealistic. It was artwork.
Ben:
It was artwork, and it was clearly presented as artwork. It wasn't created with the intent to trick people into thinking that these were actual naked pictures. I think that makes a huge difference as well. We have to tread very carefully because the default is that we don't want to criminalize the creation of any images or artwork or photographs. That's the default value that we have. There are exceptions to that and those are pretty well founded in our legal system, but we have to just tread very carefully. I think when you start to get into things like political satire… I saw a video going around, a deepfake of Fred Trump criticizing his son for how he's running his campaign. Actually, they did a very good job in that video of stating at the beginning that this was created as a deepfake, which I think also makes a difference. But there are more videos like that going around. I do think, especially if it's the type of thing that cannot be expressed in any other way (I believe that Fred Trump example qualifies, considering he's been dead for 20 years), that it has some First Amendment value, and courts and legislatures have to wrestle with those conflicting values.
Dave:
One of the other things that this article points out is the discussion over whether the legal recourse for these should be criminal or civil. Whether the victims can have the right to either sue the person who made the deepfake, or the platforms that hosted them, either knowingly or otherwise. Do you have any thoughts there?
Ben:
Oftentimes the most controversial provisions of these laws are the private right of action. That can be the distinguishing factor that either makes a law succeed in getting enacted or makes it fail. I think a lot of people think that there would be spurious lawsuits, or lawsuits that are not well founded; that because we've created this private right of action, people would use lawsuits as a tool to take down content that should be protected by the First Amendment. I think that's the concern there. There's certainly been an expression from industry, from some of the big tech companies, that they should be immune from suits and the private right of action. I think whether to make this a civil penalty or a criminal penalty is kind of a secondary question in my opinion. I do think it's a question we have to answer at some point, but really we have to first wrestle with what types of deepfakes are acceptable and what types are illegal, what is the dividing line, and what recourse does a victim of a deepfake have to get it removed from the internet? I think once we resolve those problems, then I feel more comfortable having a broader discussion about the civil versus criminal distinction.
Dave:
Yeah, it's just another example of how, I guess by design, the legislation lags behind the technology and the things that society deals with.
Ben:
Yeah, although, I'll say, it's pretty impressive that almost all of the states and DC have at least proposed legislation on deepfakes. It's still a relatively new problem.
Dave:
I guess it does have bipartisan appeal to tamp down on this thing, right?
Ben:
Yeah, I know there are proposals for anti-deepfake legislation making their way through Congress, and all of those have bipartisan sponsors. I think it doesn't fall neatly along partisan lines. We know that everybody, no matter their political affiliation, could be affected by these, particularly if you're in the demographics you talked about, where it's largely young women who are victims of the distribution of these images and videos. I actually do think there's been more action on this than on a lot of the other issues we cover, where Congress and state legislatures have been comparatively slow to act. I'm kind of hopeful, especially as we've seen action in the EU and in other countries, that we can get our act together and come up with sensible regulations on this stuff.
Dave:
Well, we will have a link to that story. Again, it's from the IEEE’s Spectrum publication. Interesting read, if this is a topic that is of interest to you. Ben, I recently had the pleasure of speaking with Dr. Peter Garraghan. He is the CEO of an organization called Mindgard, and we discussed the UK's recently published AI security guidelines and some of the recommendations that he made for addressing cybersecurity risks in AI. Here's my conversation with Dr. Peter Garraghan.
Dr. Garraghan:
The reason we completed the exercise was that, in the last 12 to 18 months, there have been a lot of discussions about the cybersecurity risks of artificial intelligence. Every government and every organization is now defining governance structures and frameworks explaining what should be done: we need to assess risk, we need to do red teaming, we need to fix security issues. However, there's very little in terms of the recommendations needed to reduce the cybersecurity risk of AI. The purpose of this research project and report was to actually give empirical evidence on the type of recommendations organizations can use today to minimize the cybersecurity risks within AI.
Dave:
Well, take me through that process. As you say, it seems as though this has certainly captured the imagination of the public and also governments around the world. How do you approach this? How do you get started with something that is such an active topic?
Dr. Garraghan:
It is true that AI captures the human imagination, but that's also a double-edged sword. Ultimately, AI is still software. It does software activities, it uses data, it runs on hardware. To begin with, we started with having a very empirical view of what AI is today and how it's used, and then looking at a whole set of different reports, news articles, technical blogs, and my expertise as a professor at Lancaster University to really understand the current state of the art of the recommendations to reduce cybersecurity risk, as well as the existing knowledge gaps. This entailed quite a few weeks of literature review and reading many papers, which is normal as a scientist, and then trying to map the current state of the art: what recommendations are known to work, which recommendations have been suggested to be effective at minimizing cybersecurity risk, but also highlighting the current gaps in the space as well.
Dave:
What are some of the conclusions that you all came up with here?
Dr. Garraghan:
There were quite a few conclusions, but I think the main highlight is, if you look carefully at the recommendations, a lot of the recommendations given—and I talk about things both technical and organizational—have very strong analogies to current cybersecurity practice recommendations. Saying make sure you have strict access controls on data also applies to other types of software. Having user training on the security of AI is very similar to user training on securing applications and software. That should be reassuring to readers, because a lot of the suggestions align with what we already understand. However, there have also been some difficulties. If you really look at the evidence given for the recommendations and go back to the original source, it either comes from very few sources, or the sources themselves are derived from laboratory experimentation and are therefore inferred to be effective. In some ways it's also speculation based off expertise. Given how quickly the AI space changes and the type of cybersecurity risks that exist, there's very limited empirical information about their actual effectiveness within production systems. That comes from a lack of scientific activity in this space, the difficulty of doing so, and the nascent nature of AI: if people do find these problems, they're not obligated to actually report them.
Dave:
It really is a fascinating issue. I'm trying to think of another example in society throughout history where something of this magnitude was kind of unleashed on the public and captured their imagination, but also had such a big potential for both good and bad.
Dr. Garraghan:
It is, and it isn't. AI is still software, and a more recent technology that maybe didn't capture people's imagination in the same way is virtualization and cloud computing, which rose in prominence about 10, 15, 20 years ago. At that time, people were talking about, “I am spinning up resources and putting my data in places I can't see, outside my home, outside my office. This seems incredibly insecure, but also really powerful. What do I do?” The cloud is a nebulous concept. There was a lot of fear and a lot of hype in the space, but what ended up happening is people saying, “Okay, empirically it is a computer that's hosted by somebody else, and there are implications and problems with that type of setup.” People went from being very pessimistic about the technology, which has been around for 50, 60 years, same as AI, to it becoming used and overhyped. They figured out some of the pain points, and now, in virtualization, a lot of people use it and it's become much more understood in terms of the risks and how it's used. AI falls into exactly the same scenario. AI is 50 years old. It's not a new concept. It's only in the last few years that it's got into the public limelight. There are new types of AI, and now there's lots of excitement and there are some generally good use cases, but people also tend to perhaps oversell the power of AI and use it for things it's not designed to do. And that's fine. That's true for any technology hype. What's going to happen is that that brings cybersecurity risks and problems, and separating “Is there going to be an AI uprising?” from “Actually, the problem is my AI model is leaking my confidential data” will come in time, and the public will have a much better understanding of how it works.
Dave:
How do you go about preparing your information for policymakers, translating it, putting it in a way that they can both understand it and then use it in their own work to better serve the public?
Dr. Garraghan:
I think with policymakers, they have various different levels of expertise, and their job is to communicate to the public, but also to politicians, actionable insights so they can actually make legislation or suggest best practice to different organizations and other governments. Going about this as a professor at the university and also working with a lot of businesses now, we have a lot of experience catering towards very technical individuals, but also writing it in layperson's terminology, so that you don't need to be an expert in AI or cybersecurity to understand. That requires reading some very technical pieces of literature and work and code, and then translating that into a form that a typical person will understand. The great thing about AI security is, conceptually a lot of it makes sense. People understand, or policy people understand, leaking data or losing data is bad. They understand that if someone can steal my AI, that's also bad. So there's less emphasis on the technical mechanisms to achieve this, more about emphasizing how it works, and more the recommendations required to mitigate that.
Dave:
What has the reaction been so far when you've presented this to the various stakeholders?
Dr. Garraghan:
The stakeholders involved have been rather happy, because it's not typical or common that they get someone who’s both a professor and a CEO of a tech company to try and give both views. I give a very technical, scientific, academic perspective on the problems objectively, but also try to tie this to the business problems that we face at Mindgard on a day-to-day basis. The reactions have been very complimentary. To my knowledge, it has been circulated with various agencies within the UK, but also across other countries, for the UK to actually explain what they are doing within AI security and cybersecurity alongside all the other great work. And the report was also released at the same time.
Dave:
Where do you suppose this is headed next? Is this a first round? Do you expect there to be updates along the way?
Dr. Garraghan:
Yes, I think there will be updates. In the report, at the very end, I do mention that the security of AI is not a solved topic. It's actively changing on a week-to-week basis. No one can claim that this space is completely known. Therefore, it's best seen as a point-in-time snapshot: what is the state of the art, from the year 2020 all the way up to the beginning of 2024? What is the snapshot of using AI and the cybersecurity recommendations in use? I expect, come a few years later, conceptually a lot of the recommendation advice will still apply. What will be updated, though, are the actual primary sources for which recommendations have been tried and tested to minimize risk within AI.
Dave:
I'm curious, as you were taking part in this research and making your way through all of that literature, was there anything that you came across that surprised you or was unexpected?
Dr. Garraghan:
I don't think anything was necessarily unexpected, because I've spent a lot of my career looking at very different types of primary sources, secondary sources, and literature. But one thing that was quite surprising is that a lot of the innovation in AI security, given it's so new, a lot of the really interesting recommendations and descriptions of attacks, has not come from academic research papers or technical company frameworks. It's come from blog posts—people who are super technical, who are passionate and interested in hacking AI systems and how to fix them. The real meat of the evidence to empirically demonstrate their effectiveness comes from non-peer-reviewed sources. Obviously those have to be scrutinized quite carefully, and as scientists, we can correlate what they mention with our laboratory experimentation. But that's been quite surprising. I suspect this is going to continue, because within the AI security space there isn't a formal database of vulnerabilities yet. Therefore, there are lots of people trying different things. I think in the recommendation space, people can apply existing frameworks and tools, which is great. It's the new things, in terms of the technical techniques they need to recommend, where it's still unknown how they're going to work.
Dave:
What's your own outlook here? Looking forward, are you optimistic that this is something that we're going to get a handle on and it'll become a regular part of our day-to-day lives?
Dr. Garraghan:
I think it will. It comes back to the mantra that AI is software. Replace AI, machine learning, ChatGPT with an application. Applications and software have problems. Yes, we know this. We've spent the last few decades with hacks against systems, data being leaked, or just poor performance, or people throttling my network by communicating over botnets. AI is no exception. Therefore, I do expect there's going to be a lot of great progress coming in terms of how to secure AI, and recommendations. I also do envision there will be problems, like with any type of software we could expect. The difference now is that people were burnt quite badly with the rise of cybersecurity as a concept. I think now a lot of the governments and technical companies are getting slightly ahead of themselves, in the sense that they know this is coming as a problem, and they're putting down the governance frameworks and recommendations now, ahead of this actually becoming massively adopted at huge scale. That's very different from previously, where it was a lot of trial and error: let's build this thing and we can then respond to the types of threats and risks we encounter.
Dave:
Ben, what do you think?
Ben:
I thought it was really interesting, talking about security hygiene, things that can be done at the organizational, company level. I think that's a good frame of reference for this, because we often talk about what can be done at a policy level, but I think it is going to be incumbent on individual organizations to protect their own security in the AI era by hiring people who can manage the legal and regulatory requirements that are increasing by the day, engaging with stakeholders, things like that. Really interesting interview with Dr. Garraghan.
Dave:
It is great to have somebody who's been so close to the inner circles talking about this, in his case, in the UK. To get his perspective, I think is definitely valuable. Again, our thanks to Dr. Peter Garraghan from Mindgard for joining us. That is our show. We want to thank all of you for listening. We'd love to know what you think of this podcast. Your feedback ensures we deliver the insights that keep you a step ahead in the rapidly changing world of cybersecurity. If you like our show, please share a rating and review in your podcast app. Please also fill out the survey in the show notes or send an email to caveat@n2k.com. We're privileged that N2K Cyber Wire is part of the daily routine of the most influential leaders and operators in the public and private sector, from the Fortune 500 to many of the world's preeminent intelligence and law enforcement agencies. N2K makes it easy for companies to optimize your biggest investment, your people. We make you smarter about your teams while making your teams smarter. Learn how, at n2k.com. This episode is produced by Liz Stokes. Our executive producer is Jennifer Eiben. The show is mixed by Trey Hester. Our executive editor is Brandon Karpf. Peter Kilpe is our publisher. I'm Dave Bittner.