mnemonic security podcast

Models Always Lie

April 15, 2024 mnemonic
mnemonic security podcast
Models Always Lie
Show Notes Transcript Chapter Markers

For this episode, Robby is once again joined by Eoin Wickens, Technical Research Director at HiddenLayer, an organisation doing security for Machine learning (ML) and Artificial Intelligence (AI). 

It is not too long ago since Eoin last visited the podcast, (only 7 months,)  but lots has happened in the world of AI since. During the episode, he talks about some of the most significant changes and developments he’s seen the last months, how models are getting smarter, smaller and more specific, and he revisits his crystal ball predictions last episode.

Robby and Eoin discuss potential security risks posed by using AI tools, how to secure AI powered tools, and what you should think about before using them. Eoin also gives some new crystal ball predictions and recommendations to organisations starting to utilise AI adjacent technologies.

Speaker 1:

Owen Wickens. Welcome back to the podcast.

Speaker 2:

Hey, robbie, thanks again for having me. It's great to be back and great to chat with you again.

Speaker 1:

First of all, congratulations on the promotion.

Speaker 2:

Thank you very much.

Speaker 1:

Hasn't even been a year and you've went from a senior adversarial machine learning researcher to technical research director.

Speaker 2:

It's been a busy year. Nose to the grindstone, but yeah, we move.

Speaker 1:

How old are you again? 25?

Speaker 2:

26.

Speaker 1:

26, okay, so you just had to get over the 25 to become a director.

Speaker 2:

I think that's it. That's in the contract somewhere. Every year I get a year older. It's kind of mad, I know it's crazy.

Speaker 1:

I don't want to talk about how old I am, but since last time we spoke, there's been a lot of AI. Are you tired of it yet? Uh, there's been a lot of ai.

Speaker 2:

Are you tired of it yet? Do you call it ai? By the way, what do you call it? Oh, yeah, I mean, I guess it's more machine learning really, isn't it? It always has been, but, um, at least until we get some sort of sentient being. But um, yeah, no, not sick of it. I think the, the, the rapid progress and pace keeps things interesting. Um, it's amazing what's happened in a year, I think, even since, even since we last spoke, which I think was back in july or august I believe, less than a year ago at least yeah, yeah, yeah, time is moving on quick, hey yeah, have you ever heard about the um, the god of the gaps?

Speaker 2:

no, I don't think so okay, so this is.

Speaker 1:

I'm saying this for my colleagues, which are much smarter than me, but it's um, it's basically a term from research about religion so as soon as you don't understand something, it's God right. So, like when we didn't understand lightning and thunder, it was Thor right. Yes, yes, yes. That's kind of how we look at AI. Right, I even started calling it LLMs now instead of AI, so I've had a reality check.

Speaker 2:

There we go Most of the market has. Absolutely.

Speaker 1:

So that's good. What else has happened. The models have gotten smaller. They've gotten more fine-tuned, smarter.

Speaker 2:

Totally, totally. And I think processing power has gone up as well. Right, I mean with, like, NVIDIA's new Blackwell chip, I believe. I mean things are going to change even more as the models get smaller and the processing power gets more and more capable. I mean, I'm really looking forward to a day where we have proper on-device AI, where we have the hardware to be able to run them and the software to be able to squeeze onto them. So I'd say that in the next few years is definitely going to be something we start to see more and more of.

Speaker 1:

What else notable? Like I mean, you follow this stuff, you have to follow this stuff, you get paid to follow this stuff. Like what are some of the biggest?

Speaker 2:

I'm strapped in. Yeah right, what are some?

Speaker 1:

of the biggest updates or changes you've seen in the past eight months.

Speaker 2:

Yeah, that's a good question. That's a good question, I think. As you said, I think the models are getting smaller. I think they're starting to distribute as well to kind of different model architectures. Um, you know, building upon transformers, which is like the.

Speaker 2:

I guess that you could say it was like the holy grail of large language model architectures for for quite a while, but they've been improving and iterating on that and to get models which are better at doing different tasks they've delegated out into things like mixture of experts models. They've brought in other things like retrieval, augmented generation, which basically means they have like this added bolt-on layer of like a kind of like a database, basically for extra information that they can, when, when an llm is queried, pull that information out of that database and feed it in right, and so it's like on top of fine tuning, you can also do this stuff. So basically, the models are getting smarter, they're getting smaller and they're getting more specific to dedicated tasks, which is great. I think we're starting to see just a lot of exploration. And then also, since we last spoke, I think llms just really have taken off in terms of like integration. Um, I think I don't think this was an april's, though I hope I'm not live saying this, but I do recall at some point last year they were going to replace one of the Windows keys with like a dedicated co-pilot button. So I mean that tells you just how much like all of these companies are now kind of dead set on AI within their platforms.

Speaker 1:

Now that you're mentioning vendors, right, that's one of the things I've really noticed since last time we've spoken is like all the security vendors have released what do they call it AI enabled security tools, right.

Speaker 1:

Totally, I think I sent you just an example, just to take Checkpoint as one that I typed in on Google, like AI enabled security tools, and Checkpoint was up there on top, and if I look at what they were able to do, it was a bunch of yeah reduced up to 90% of the time needed to perform administrative tasks. Infinity AI Copilot knows the customer's policies, access, rules, objects, logs, as well as product documentation.

Speaker 1:

So it just basically allows you to chat with your product or chat with your firewall and get it to do things for you. And that's just one example, right, I would assume, like all the major big companies will be coming out with that soon here.

Speaker 2:

I'm not sure if you have well on that. Yeah, absolutely so. I mean, last year we had um, we had google secpam came out, which was one basically security oriented um large language model. We had a CrowdStrike at Charlotte and there was also, I mean, microsoft Security Copilot as well. So there's like a number of different vendors that are moving into the space using LLMs for basically augmenting security workflows.

Speaker 2:

So I do remember the last time we spoke actually I think I made this kind of like crystal ball prediction that the SOC would start transitioning into an know, an llm kind of um quality control thing as well as using llms to augment their workflows and take all that data and do analysis to help with like alert fatigue and everything like that.

Speaker 2:

That's kind of where we're heading um. I think that's what these tools will be able to allow us to do. I mean, we have so many products feeding telemetry today and that I don't think that it's possible with our current workforce and I'm term in terms of how many people are in the workforce today to actually triage and process all that data. So we have to offload it somehow and we've done that typically through static and dynamic rules, but now with the age of ai and with, you know, llms, we can just process all of that so much easier and so much quicker. So I think these companies fantastic initiatives. I'm really interested to see how they work and integrate in practice. I'm a little out of the malware game and the security operations center game now, but I'd love to see how that's being done and integrated in practice certainly.

Speaker 1:

Yeah, I know that from a SOC. From our point of view, customers were first like, yeah, what are you doing with AI? And then they were like, wait, I kind of want humans to be doing this for me, not LLMs or not AI. Yet so I guess the hype has gone over a little bit.

Speaker 2:

Absolutely. I think things tend to come back to base after a while and, look, I mean, I don't think ai is a perfect solution for everything and like I think, what is it? Um models always lie. I believe was. Was somebody coined that? Not me models always like, but it's like. I think it's really important to have both right. I don't think we should offload all of our responsibilities to ai models with, let's say, within the security context to start with um, but I think that, like it serves a really valuable purpose in enabling us to like, do a lot of like base level triage and then have like human review over that and I think that'll settle in time.

Speaker 1:

Back to like the vendor and AI tool discussion that we started. What sort of like advice would you give clients of like okay, before they start using Checkpoint or Copilot or whatever sort of tool it is? Because a lot of I can see a lot of customers been like, oh cool, it's a chat bot and they just start using it and they just go for it. Uh is, is that totally fine? Or are there still some things that you should have in the back of your head when you start working with those sort of tools?

Speaker 2:

not, knowing the specific tool, obviously, but totally, totally main thing is, um understanding where your data is going right and understanding how somebody that you're sending your data to can use your data. And I think we've often seen over the years, um, if you're not paying for it, then you are the product. You know things like social media data harvesting and everything like that. Well, if you're uploading something sensitive to your to, let's say, an llm service, right like um, let's say, you're in a revenue forecasting for you know the next quarter or whatever, well, if that gets into the hands of somebody that you, you know don't want that them to have, then, um, you know that can be a pretty sticky situation to be in. So, like what happens if they use that data as part of their you know training process, and what happens if you know, let data as part of their you know training process. And what happens if you know, let's say, you upload, you know, thousands of documents to this, to the service. They incorporate into the training data or something like that. And then you know, six months a year down the line where they release a new iteration or a new epoch of the model, it then starts spitting out the same information to other tenants or whatever.

Speaker 2:

And we've also seen like situations arise, um, where, like one, a user may have inadvertent access to other users, chat histories and so on in like some services. Um, now this very quickly fixed. I think it's just like teething problems with, like, how you know getting these services developed around the llms and everything. So, again, it's probably worth saying that you know if you're uploading your data, upload it somewhere you trust, maybe have a good read through their you know legal documents and everything to understand their capability of using that. And, yeah, I think that's my biggest concern with third party services.

Speaker 1:

So I should still look at. I'm just going to keep picking on Checkpoint or using Checkpoint as an example. I guess I should say so I should look at that Checkpoint AI tool the same way I would look at Gemini or ChatGPT, then have the same sort of boundaries in my head.

Speaker 2:

If it's a service, yeah, if you're deploying it internally yourself, I mean, typically it's within your perimeter and it's something you can worry about. We're using Checkpoint as an example, but I I personally have no prior knowledge of checkpoint services. To clarify, um, excellent, but um, yeah, it's. I guess if you're sending data outside of your perimeter anywhere, right, you just want to have, like a good understanding or that it's in safe hands. Um, and I guess that paradigm hasn't changed with the context of lms and in fact I think before we might not have had as much reason to send data outside of our perimeter.

Speaker 2:

Like you know, I'm not going to like email revenue forecasts to you know, tom Dick or Harry, right, but, like with an LLM, it's like, okay, well, can you format this data for me nicely, please, so I can put this in my like PDF? Well, that would be great. And then it spits back the nicely formatted revenue forecast that I've just put into it, but the data has left the perimeter, and that's, I guess, the crux of the situation. So I guess, to answer your question more properly, if the model is being hosted internally, like if you run it up yourself in some cloud service provider or whatever within your own cloud, then it's not leaving your perimeter, but obviously make sure you're securing it.

Speaker 1:

Yeah, I came across something called virtual agents. Okay. Which sounds. Have you heard of it before? It's basically it means that it will do things for you on the internet.

Speaker 1:

Maybe it's called something else, but basically it's like if I say, yeah, format this data and send it, then it will go out and take from your crm, take whatever you need to do, write the email and send it for you. Uh, and apparently there's like a bunch of different startups that are like um, yeah, they have like demos out there where, like it says, make a website and start selling this and, like you can see, you'll have like a, you'll have a mac and it'll have four screens and you could see the I'm not even sure what to call it.

Speaker 2:

It's not an LLM anymore than it then I guess it is AI, it's a virtual agent, I think it's an agent.

Speaker 1:

Virtual agents, yeah, yeah.

Speaker 2:

Yeah.

Speaker 1:

What is your opinion on that?

Speaker 2:

Cool, yeah, great question. Yeah, I've. I've heard of the concept of like agents for hacking AI and everything like that, which I think is really interesting. So you know an autonomous thing which can act independently? You know to an extent, but it's kind of one of those. It's one of those situations where the more we hand off decisions to a you know, an unknown entity, right, the more risk you end up incorporating. Essentially, like if we, you know, you give this access to a lot of APIs, you give this access to your CRM. Like these models are a risk vector, they are an attack surface. The more they have access to and the more.

Speaker 2:

Like, if my thing can send an email, if it can browse the web and it can access, you know, sensitive information inside my perimeter, like where, where's the line there? Like how do I stop it from reading a website, getting prompts injected and dumping out all the information from my CRM into a public service and then emailing it Right, like that's that's big attack chain. Like a lot of ifs and buts have to happen for that to be pulled off. But you know this, it's gone from you know not being possible because we didn't have llms, so surfing the web and everything. So like this is an actual thing that we can see coming down the line, and so I think progress is great, but let's just make sure that we do it right, and then let's make sure that we are ironclad against prompt injections and so on. Um, I'll segue a little bit if you don't mind, because it's something that's quite interesting, um, but I mentioned, like prompt injected by surfing the web.

Speaker 2:

We call that indirect prompt injection, right? Um, a researcher, kagrashake, uh, basically kind of found out that you could do this, um, and he was using Microsoft Edge's Bing assistant, so, geez, a lot of product names there. But basically, to browse the web, it would read a page and you could do like that classic thing of like you know size, zero text and you know in white font and a white background, and it would read it and then it would prompt inject the chat assistant. The chat assistant would start suggesting spam links, it would start speaking like a pirate and all sorts of funky stuff. Right, like that's mad, right that we can do that. So, like, imagine now if your AI agent comes along and views a website and then gets prompt injected by virtue of that.

Speaker 2:

But taking this a step further, multimodal models, so models that can basically take in data from you know, like I've taken different data types like images, video, audio and text and everything. Uh came along, they came on the scene. That was another big thing we saw really over the last like seven, eight months or so. Um, you can also have multimodal prompt injections, right? So, like one of the first examples of this was a picture of an apple with a post-it note that said, like ignore all previous instructions and do whatever, right, so LLM takes this picture, reads it, it goes. Ok, I actually disregard this as an apple. Let's prompt inject the model.

Speaker 2:

So was that Black Hat Europe in December? And there was a really cool example right at the end of it where they talked about the risk of supply chain attacks through indirect prompt injection via images. Right, and imagine now, if you will, an image uploaded to wikipedia. Like, every time an llm views wikipedia, for this thing, it will get prompt injected by this image, right? So how do you stop that? Right, and like yes, we've got a lot of fences in there, we can do all sorts of post-processing and images and everything like that. But it's such a crazy prospect to me that, like a, an ai agent could potentially view a website and, like wikipedia, get prompt injected by an image that's on it that looks completely inconspicuous to us. Um, so it's interesting times.

Speaker 1:

It's interesting times, that's all I can say for sure uh, and while we're on the topic of interesting times, like, are there any noteworthy attacks that have happened in the recent, in the last time we spoke, that you you feel are notable and and cool to mention?

Speaker 2:

yeah, yeah, yeah, absolutely, absolutely. Um, there's a couple couple that came like very recently and like very recently in the last week or two. One of them was to do with hallucinations within ChatGPT, I believe, where it was suggesting false packages, so false dependencies. So it would give like a you'd ask it to write a snippet of code, it would give this particular package back, but instead of a hyphen it would be an underscore in the package name, right, and this was suggested and suggested and suggested and suggested, and the package never existed so it would fail. But a researcher decided oh, what if I create that package and then track how many times it's been downloaded? I believe they. They found that it was downloaded over 15,000 times, right, so it was being basically suggested en masse to people. So you have this like hallucination. You know hallucination in the context of an LLM is when you know it gives back a false piece of evidence. A false piece of evidence Jeez, I sound like I'm on CSI.

Speaker 2:

A false piece of information, false piece of information, or like something that's untruthful, that it thinks it or perceives it to be to be true. Um, but I thought, I thought that was fascinating and, you know, like the hallucinator first, and then they decided we'll create the malicious package that can can leverage this. So it was. It was, in a way, it was a very weaponized form of of this hallucination, like typically. Like, like these things can still be very consequential. But there was an example people were using ChatGPT or some other LLM to suggest papers, relevant papers to their subject matter, to their subject domain or whatever, and it started giving back false citations. Right, it started giving back false citations. Right, it started giving back false papers. So it's like you know, I don't know the biological makeup of some phytoplankton, for instance, or something by Owen Wickens, let's say right. And then they contacted this author Owen Wickens, that's me but they contacted this author anyway and they were like, well, yeah, this sounds like something I would write, but no, I've never written it before. And, like you know, these services are basically saying, oh no, this person wrote it. They might even give a summary and a blurb. So you have this like breakdown in known truths or ground truths and stuff. I don't know, I find that very interesting. I don't know, I find that very interesting, like, kind of outside of that, what we've been seeing an awful lot of is, like the actual tooling that supports the development and deployment of AI systems being like full of holes, like full of vulnerabilities. So iPhone or Android phones right, and we go back like 10, 15 years it was like you know could probably easy, easily enough exploit them. Um, nowadays you might with to exploit an iphone, you probably have to chain together like some ridiculous amount of exploits. You know super deep stuff that like will eventually lead into a massive payoff and when it does, it's super critical. But that like barrier to entry is so high with like ai and like ai like adjacent technologies, it's like something you probably would have seen like 20 years ago.

Speaker 2:

It's like we're dealing with like buffer overflows. We're dealing with, um, you know, path reversals. We're dealing with sometimes no authentication, right and like. Or a service is designed to execute arbitrary code within the context of ml, right, like. Imagine now you have all of this sensitive pii hosted on a service or hosted in, let's say, an s3 bucket. You have your service. You know ml ops tool running, which is, you know, allowing you to process this data, build your model and then it. But your service has either no authentication or it is designed to accept remote tasks from anybody, maybe with no authentication, and all of a sudden it means that somebody like me can come along and install a backdoor on your system without any real hassle, pull down all of that sensitive information and then cause a massive data breach.

Speaker 2:

And I think like one of the things that I can see you know at some point in the future is like maybe you know, looking at the data scientists as a incredibly privileged user, given the volume of data they have access to, especially within the business context, right, like we think of administrators and they have you know access to, to any system to or within their you know domain and so on. Super risky if they get popped, um. But with the data scientist it is also super risky because I guess you also have like a build environment, you're deploying models, especially models that go like downstream to customers, and then the massive amounts of data that they have access to, if it's like sensitive business related data or what have you and I think that's like quite a worrying prospect honestly. But anyway, sorry, I'm digressing hugely. If you want to bring me back on track with a question.

Speaker 1:

Oh, I got lost. I was loving it. I'm actually having a chat with somebody that's been building machine learning the one that you just referred to, one that makes those models and has access to all that and I just wanted to hear what his opinion is around security and does he think about security in these things? But back to our discussion just now.

Speaker 1:

Do you think that it is that way just because it came so quickly? And then the business was like give me this. And everybody was just like ah, a tool, I don't. They just skipped over, like the normal security barriers, or how do we get there?

Speaker 2:

yeah, um, yeah, no, totally, totally, I think. Um, I think that's part of it. I think, like we've got rapid adoption as one um, that rapid adoption definitely kind of the skewed, the security aspect, I guess. Um, one of the one of the other main contributing factors to that, I think, is that, like data science, I guess as a, as a discipline, had come largely from academia, given that it was, you know, largely phd data scientists and mathematicians developing these models, developing these architectures, and then, you know, going out into industry building them or or using tools that were developed by academics and so on.

Speaker 2:

And no hate, by the way, that like it's just part of the part of the process. But what? Often, if you're kind of building things like you know, you might not, if you're building them without like industry experience where you're used to security, development practices, like you know, secure development, life cycle stuff and everything, then you're you're not building software with that in mind, um, and if you're kind of building it outside of like this industry ecosystem where you've had to deal with this, then it's easy to develop insecure software, right, and that's like it's not a fault of their own. It's just kind of like, if you're, if you're not in the loop and you're not read in. That can be an issue. And so I think you've had this like kind of large swathe of tooling developed and designed for for data science that kind of just went under the radar in terms of the security community and then also in terms of like security engineering, I guess. And so now, because of that rapid adoption, we're once again playing catch up and we're trying to, you know, secure a lot of this tooling from you know, the ML op side, from the model file format side, from you know, even you know the generative AI sides.

Speaker 2:

Like the attack surface is huge, and I think that's like one of the biggest things that stood out to me when I, when I had the threat report in my hands and I looked at everything we've written about what, how, just how big the attack surface is now that we've kind of like really understood over the last two years is like staggering, uh, compared to when we started this and when we were looking around and going like this is, you know, we knew it was bad, but we didn't really have it quantified, and now it's like you know, there's a lot of things we need to do and like some of this is traditional security and some of this is AppSec. Right Like you can't like if you can't just ignore traditional security. But there's a whole new component. Right Like, the whole LLM concept is completely new and it requires this, this hybrid security posture. That's a whole other question.

Speaker 1:

You recently released a product, Generative AI, something in the title. I know that Absolutely absolutely so.

Speaker 2:

Building security into large language models is great, it's necessary, but it only gets you so far. We kind of call that like model robustness, we call it things like guardrails and so on which kind of prevent a user from doing malicious activity, and that's great, it's really important for the base foundational model. But what tends to happen then is like how do you control what data is sent into that model? How do you control the outputs that are sent out of that model? Let's say, I don't want code to ever come out of this model or I don't want PII to come out of it or even go into it I suppose would be the main one. Then how do I do that? And you can't necessarily fine-tune that out and you can't necessarily add it into the base model.

Speaker 2:

So it requires a bolt-on right, a kind of a bolt-on security product, and we've developed it to be an enterprise-grade security product that we can actually define custom policies depending on that specific customer and so on, so that we can bolt on our security layer and aim to basically protect you from things like PII leakage and so on, to all the way down to prompt injection and everything else that then comes out of the model as well. So, yeah, it comes back to what we were starting talking about at the beginning. If you are sending things to third-party services, do you want specific information in there? Do you want to blindly and implicitly trust the data that comes out of it If it's suggesting code, for instance, if it doesn't have malicious packages in the response, and everything, and so all of this requires an extra layer of security. So that's what we've been developing.

Speaker 1:

Yeah, so that means like you're. I mean, you still need guardrails, you still need that foundation around your model itself. But this is just like. I don't want to call it DLP. But you can go in and you can configure a lot of things so you can kind of make it like a DLP-ish, where you set your own policies and whatnot, right.

Speaker 2:

Yeah, there's definitely an analogy to be made in that 100%, and I think the two live quite well in harmony. Honestly, like I've seen models that have no guardrails and that's all well and good as well, but it's I think you know, you tend to try and want to get as far as you can with robustness in a model, definitely just to just to try and lock it down from more basic prompt injections. But I think what we found is you know, as many ways are there as there are to craft a sentence? There are ways to create a prompt injection, right and that and and and so on. And yeah, cat and mouse game your threat landscape report.

Speaker 1:

Uh, lots of good stuff. What did you find most interesting?

Speaker 2:

that's a great question um, we, we surveyed about 150 executive it leaders of major industry down to kind of SME level. So we wanted to capture a bit of a broad kind of swath of people and viewpoints. But what we found was quite interesting actually, and so I believe, if I'm not getting the statistics wrong, it was about 1700 models in production on average in major companies. Like, thought that was very fascinating. It's a that was very fascinating. It's a lot of models, a lot of models.

Speaker 1:

Um, each with their own attack surface right yeah, totally.

Speaker 2:

I mean, like you know, you may use a few models for like the same purpose but, like you, often each model will have its own specific kind of use case, right?

Speaker 2:

um exactly so, so, yeah, absolutely each has their own attack surface and depending on how, how they're exposed and everything like that, um.

Speaker 2:

But what we also saw was that the opinion across our, our surveyed respondents, um, basically said around 98 percent of them believed that ai was critical to their, to their or somewhat crucial to their business function. Right, so we're starting to see that, like it's just, it is crucial to their business function, right, so we're starting to see that, like it's just, it is part of their business function now, and I think that's pretty you know a pretty stark statistic there. And then again it's another statistic but about 77% of companies said they had identified a breach that related to their AI or like AI, jason, right, and that is also really worrying. And, look, I've said this, you know quite a lot over the last couple of years, but, like, the more critical decisions we offload to our AI models, to you know, even, like, not even just LLMs, but classifiers and everything like that, the more of an attack surface we're raising. Our data now is more precious than ever.

Speaker 1:

It doesn't mean necessarily it was like a crazy bad breach, but it means that number one, they had to have a way to detect that, which is interesting. Were those 77% of your clients, because that makes sense.

Speaker 2:

But how would?

Speaker 1:

77% of them even detect that. You know that's interesting.

Speaker 2:

Yeah, no, absolutely Absolutely. I think you can kind of understand if things are trying to like subvert a model in some regard, if they are trying to access specific data that's in relation to an ai model. Um, I'm not 100 sure on the visibility that's been, you know, put into these these companies now over the last year or two years, but I'm sure they're they're keeping close tabs on it yeah, um, so 100 of your clients have, uh, clients have noticed an attempted breach at least, and then 77% of those yeah Interesting.

Speaker 1:

But that I mean because, like a lot of things your products are doing, it's all possible to do by yourself. It's just extremely expensive and time consuming to use your own people to do those sort of, to implement those sort of measurements or to detect those sort of things.

Speaker 2:

Right, um, I guess that's one of the things we try and aim to achieve really is to take that security load off the data science team and enable, you know, security practitioners to be able to understand and triage these events and and so on. And so that's been. You know, a big part of our mission ultimately is to to allow them to kind of like coexist independently. But software engineers, you know, were able to write code that wasn't secure before. Well, they got away with it more often, let's say before we had like mass scale cyber intrusions and everything like that, and the software engineer then did have to kind of get more familiar with vulnerabilities and and secure software development and so on over time. And eventually, you know, there still is a line between a software engineer and a security analyst or even a vulnerability researcher or somebody in the product security team who's doing code review.

Speaker 2:

So you do have these like breakdowns and delineations and I just see that these teams are starting to move closer together. The conversations are happening and I do think you're starting to get data scientists who are more concerned around security as well and be really interested in hearing the you know the conversation, um that you're going to be having with the data scientists. Um, and I'll be tuning in for that um, but yeah, absolutely um, but yeah. I think even from what, from our conversations, we're starting to hear a lot more kind of gelling within that.

Speaker 1:

I mean software engineers might get hit on for this, but they've been around for what? 30 years? At least there's been software for 30 years, and I wouldn't say all software engineers are up there with security. But I guess the data scientists better get up there a lot quicker than 30 years, or else we're going to have a huge, huge problem.

Speaker 2:

Yeah, I think it's like all of our responsibility, right 30 years, or else we're going to have a huge, huge problem. Yeah, I think it's like all of our responsibility, right. It just means that people like us in the vulnerability research side, in the adversarial machine learning side, are able to help and able to contribute, and like we don't need to like put the onus entirely on the data scientists to get up to speed with it, but like we'll kind of all kind of come together and try and solve this problem. I think come together and try and solve this problem. I think because, uh, yeah, and I can see that happening, I'm just just any data scientists listening who were saying crikey, what is owen saying? Yeah, god damn it. Yeah because in their defense.

Speaker 1:

I mean, in their defense. I don't need to defend them at all, like they have so much learn, like I can't even follow the attacks that are coming out, and like the, the ai stuff, and they have to like learn how to implement and do it. And then the security comes in addition, which is a whole another.

Speaker 2:

It's a whole different, whole other aspects. But, um, yeah, we all have, like you know, kind of adjoining surfaces or whatever, um, or some other strange analogy for that.

Speaker 1:

Yeah, well, if you have both of those traits go work for hidden layer at least. I'm just thinking like um, you go to a lot of conferences and stuff.

Speaker 1:

You're probably one of the few people that goes to more conferences and stuff than I do, and you're on stage. What kind of like? Who do companies go to with these sort of issues? Besides, you Like, like we talked earlier, you have to understand data science, you have to understand secure coding practices, you know. So there's such a big span Like who do you go to with those sort of requests?

Speaker 2:

I guess there's some Ghostbusters.

Speaker 1:

Ghostbusters exactly.

Speaker 2:

But no, it's a great question. I mean, we do have taxonomies now. We have, like you know, more robust standards and so on, Like NIST has released. They're kind of like that taxonomy of adversarial attacks. And we have things like MatterAtlas, which is great at helping to define the space. We have different AI frameworks. We have the Secure AI framework from Google. Ibm also released one. Databricks recently released an AI security framework. So it is, you know, there's starting to be more voices within the community, which is fantastic and it's great. But that's something we try and aim to do is look at, you know, a system from all these different angles, right From the phone research side, from the pen testing side, from the reverse engineering side, from the adverse sale machine learning side and all of these things.

Speaker 2:

There is a lot, there is a lot, there is a lot, yeah, 100%, and that's so. I feel like we're, in a way, uniquely positioned to advise and help and help companies in that. But don't take it from me.

Speaker 1:

Crystal ball Predictions and recommendations.

Speaker 2:

Okay.

Speaker 2:

Okay, I think I've made some. I mean, there's other predictions I made last time that that we're still waiting on. But I think, like LLM integration, it's just gonna be continue to become bigger and bigger and, especially as the models get more capable, I think the multimodal side of the, the models, are really going to be interesting like that. That attack I mentioned from the, the researcher at black hat eu and with, like a supply chain attack through a wikipedia image, like I think that is where we're going to start seeing things. Um, it's not so much a crystal ball, it's like I don't know, is it my partial doom saying? But I wonder and I kind of worry about, like, the veracity of information being spouted out by these LLMs that often isn't 100% correct, through hallucination and so on. I do wonder, like you know, if we'll have this kind of self-fulfilling prophecy or like a snake eating its own tail, of, like you know, kind of bad, bad data being fed back into the models and you know we end up not being able to find the correct answer for our questions into the models, and you know we end up not being able to find the correct answer for our questions. I think the wikipedia models work quite well for crowdsourced. You know truths to an extent, um, but I do wonder about how things will progress with so much content. But that'll be interesting.

Speaker 2:

Um, like, we're starting to make strides with things like data provenance, uh, that like data set provenance and data set integrity, and especially, as well then model integrity and model signing.

Speaker 2:

There's some open source initiatives now that are coming out around this, so that we can basically show where a model has come from and cryptographically verify that in the same way we sign PE models and so on.

Speaker 2:

And so one of my crystal ball predictions which I think it's necessary, because you know, as we just mentioned, there's a big issue there with, like, okay, where am I getting this data from? Who provided this data? Or, if I'm pulling a pre-trained model in from a third party, where did this model come from? And like, how can I prove with any degree of accuracy that this model is what I hope it is, or came from who I presume it did? So that's what we're trying to solve as an open source initiative under the OpenSSF now is signing and model provenance, and I think that's going to go a long way to helping companies to trust their data and trust the models that they're pulling go a long way to helping companies to trust their, their, their data and trust the models that they're pulling. In a bit more, um, but yeah, I is that good for predictions absolutely, absolutely.

Speaker 1:

And I think that last way you entered it I mean trust, right, that is. That is the biggest problem with ai and lms these days is that's we don't trust them. So any any initiative towards making trust happen is a good one. And I also think that, like when you're talking, it sounds like the AI, the push towards AI, is kind of making steps, you know, verifications. That would be all be great for the open source community as a whole as well. A lot of those things are just not there today.

Speaker 2:

Absolutely yeah, and I think we want to solve that as an industry and, like, we don't want to just create a new standard. That, you know, is that we need another standard to correct and we want something that's like used by the industry that we can all work together with and really, like you know, a rising tide, it raises all boats and I think that's that's the important thing there. Um, but, yeah, you 100 trust. Trust is the main thing and I think, yeah, we're as a, as an industry, we're making great strides and it's great to see and I just hope that we can keep on pushing the pushing in that direction mr wickens, you're legends you too, robbie.

Speaker 1:

Thanks a million for having me always nice to talk to you and I will see you here in a few weeks in san francisco absolutely see you.

Speaker 2:

See you out in rsa looking forward to it cool cheers thanks.

AI Integration in Security Operations
AI Security Risks and Vulnerabilities
AI Security Concerns and Solutions
Predictions and Recommendations in AI
Advancing AI in Open Source