mnemonic security podcast

OpenClaw


The AI agent everyone is talking about.

In this episode of the mnemonic security podcast, Robby is joined by Marius Sandbu, fellow podcaster (CloudFirst Podcast and KI til Kaffen/AI with Coffee) and Cloud Evangelist at Sopra Steria. 

Together, they dive into where agentic technologies stand today. In particular, they cover OpenClaw, the open-source autonomous AI agent that is currently one of the most popular repositories on GitHub.

The conversation covers key risks, including remote control access, overly broad permissions and supply-chain concerns, as well as enterprise governance challenges and the need for policies and observability across different agent platforms.

They also share the conversations they're having with customers and security teams these days, with both the "gatekeepers" and the "believers".


From our headquarters in Oslo, Norway, and on behalf of our host, Robby Peralta, welcome to the mnemonic Security Podcast. 

"I was annoyed that it didn't exist, so I just prompted it into existence." Peter Steinberger, creator of OpenClaw. We don't know how much open AI paid for it, but it's the most popular repository on GitHub ever.

The AI agent that runs on your machine and actually does things, not just answers questions, and you can talk to it however you'd like: iMessage, WhatsApp, Slack, IRC, whatever. One command to install, unlimited possibilities. What could go wrong? So I put that question to someone who's been living inside this problem professionally, and also the only guy I know who's connected it to his Philips Hue lights at home.

Marius Sandbu, welcome back to the podcast.

Thank you, thank you.

Do you actually agree with the Nvidia CEO that this is the new Linux?

I wouldn't say I agree totally with that statement. First off, Nvidia has one goal, and that's of course to sell as much GPU power as possible and allow consumers to burn as many tokens as possible.

And of course, pushing consumers to drive more agentic workloads is to their benefit.

Hmm. 

And I don't think that OpenClaw is doing anything revolutionary, because a lot of the capabilities that it's using have been there for a while. But I think it's one of the first products in the market that actually tries to group together all the generative AI capabilities into one single product or platform, if you will.

Hmm. 

And of course, one of the things I see with it being open source is that we now have all these contributors building new capabilities and adding new features. So even if OpenClaw in the first iteration had a lot of issues and security risks, it's becoming a more and more stable and secure platform.

So I think it's gonna be interesting to see, in a year's time, how far this piece of software has gotten.

Hmm. Are there any common misconceptions that you've seen? And secondly, why is it significant that it's open source?

Well, if we go into the first part, the common misconception: there's of course a lot of discussion on social media and in all the different articles I see saying that OpenClaw is kind of a security nightmare.

Mm, 

because it is actually a coding agent that has access to your local machine, and it has all these different channels where you can publish and manage it externally. So you can manage it from your phone, or from Slack, or any type of communication channel that you can set up. But this is also something that other coding agents have, and even Claude Code and Codex released this kind of remote control feature last week.

So you have these ways to get access to that machine. The second part is that these coding agents also run locally on your machine, so of course they have access to your file system and all the applications that you have access to. Now, the common misconception is that, well, OpenClaw is not secure.

That's the big thing that a lot of people are saying, but it all depends on how you set it up. Set up properly, it can run in a sandbox which doesn't have access to anything besides a virtual file system, and you can specify what kind of external endpoints and services it communicates with.

So either way, you have a lot of different mechanisms that you can use to actually lock it down, so it can only access what you define that it should have access to. And of course you have these new versions from Cisco and Nvidia as well, which add some additional security capabilities on top. So I think that's the main misconception that I see a lot of talk about: the security part.
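
To make the kind of lockdown Marius describes concrete, here is a minimal sketch assuming a Docker-based sandbox. The image name, the scratch volume, and the endpoint-allowlist variable are all hypothetical placeholders, not OpenClaw's actual configuration:

    # Minimal sketch of a sandboxed agent launch. "openclaw/agent" and
    # AGENT_ALLOWED_ENDPOINTS are hypothetical names, not real OpenClaw options.
    import subprocess

    allowed = "api.anthropic.com,api.openai.com"  # assumed egress allowlist

    subprocess.run([
        "docker", "run", "--rm",
        "--read-only",                     # container filesystem is immutable
        "--cap-drop", "ALL",               # drop all Linux capabilities
        "--memory", "2g",                  # cap resource usage
        "--pids-limit", "256",
        "-v", "agent-scratch:/workspace",  # only a dedicated scratch volume is writable
        "-e", f"AGENT_ALLOWED_ENDPOINTS={allowed}",
        "openclaw/agent:latest",           # hypothetical image name
    ], check=True)

Note that an environment variable only restricts egress if the agent honors it; hard enforcement would sit at the network layer, for example an outbound proxy or firewall rules.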

And of course, a lot of people also like to compare it with Claude Code and OpenAI Codex, saying that, oh, those other tools are much better. But again, you have access to the same language models. Of course there are some differences in terms of instructions, but you have access to a lot of the same.

And then there's the part about it being open source. Again, a lot of other projects are open source as well and provide some of the same capabilities, so it's not revolutionary there. But I think the big difference is that you get one coding agent that has access to a multitude of language models, a multitude of ways to manage it through channels, and support for skills and MCP as well.

So it's, let's say, a box of multiple different integrations that you can use directly on the same platform.

How hard is it to actually set this up? You mentioned VMs and sandboxing. 

Well, it's fairly simple, depending on which operating system you have. It's like one command line that you run, and then it does this interactive setup where you can define, okay, which language model do you want to connect it to?

Which communication channel? How do you wanna manage it? Then you define what kind of permissions you want it to run with, and you get access to a web dashboard where you can go in and set up agents, set up permission control and so on. And the same also applies for the other options from Nvidia and Cisco; it's also like one command line.

You get it up and running quite quickly. But if you want to have it running in a virtualized environment, in a sandbox, there's some additional configuration. It's not that much work as long as you know the correct parameters.

So this is literally a YouTube video away, like a tutorial?

Yeah. Yeah.

But I should not be doing that on my work computer. 

No, it's not a good idea. 

Why not?

I don't think your IT department's gonna be very happy about that. But I think the problem is, okay, I can install it, and nothing bad is gonna happen unless I give it full permissions and I have full administrative rights on my machine.

I can give you one scenario, which I saw on social media. There was some VP at Meta who set up OpenClaw against her email or Gmail account, and then it started going in and deleting a bunch of different emails because of the instructions that she gave the chatbot. So as long as the agent has permission to do so and you give it the wrong instructions, well, bad things can happen.

I was really surprised by that story. You work for Meta and you posted this openly? Why? It doesn't make you look good. But she did get a lot of attention, and it made me smile. So, I'm wondering about this connection: I heard that the whole world was sold out of Mac Minis, which I don't understand.

What does that Apple product have to do with this? 

Yeah, well, it's two parts. First off, you can cluster together Mac Minis and get a fairly good amount of virtual RAM, so you can run local language models. There are also frameworks now where you can cluster together like three, four or five Mac Minis and have like one big virtual graphics card on top, which you can run local language models on. But the other part is that you get access to the Mac ecosystem, so iMessage and so on.

Right. 

So that way you can connect OpenClaw to iMessage and manage it remotely from there.

Yeah. 

Yeah. 

And that made them sell out.

Yeah. 

Yeah, yeah. So it's not that you use the Mac Mini for local processing, it's just that you get access to the Apple ecosystem, you know? And it looks nice on your desk, right?

All right. Did you buy one?

No, I didn't. 

No, you have those from before.

I have some dedicated NUCs with a bit more powerful graphics cards installed, so they have a little more horsepower than the regular Mac Mini.

Yeah. When it comes to the risks associated with this, you know, the skills, you were telling me about VirusTotal and the skills. Run us through the skills risks, and any other risks that you see with operating this type of technology.

Sure. So skills are a fairly new concept, in AI terms at least. The term was introduced by Anthropic not so long ago. A skill is essentially a markdown file which tries to provide expertise to your virtual agents within a specific domain. Could be that you have a skill for how you manage virtual machines, a skill for how you do security.
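
To make the format concrete, a hypothetical skill file might look something like the sample below. The structure, file name and helper script are illustrative only, not the actual OpenClaw or Anthropic schema; the script reference is the mechanism Marius gets to next:

    # Skill: virtual machine management
    When the user asks to create, resize or delete a virtual machine:
    1. Confirm the target environment (dev, test or prod) before acting.
    2. Use the helper script scripts/provision_vm.sh for the actual changes.
    3. Never delete a VM without a separate, explicit confirmation.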

And these skills contain prompts for the language models, which are just like instructions. But of course these skills can also reference scripts, CLI scripts, command line scripts. And the initial problem was that OpenClaw had this marketplace, I don't remember the name, ClawHub or something, where people could create their own skills, publish them through the marketplace, and other people could download them and use them on their own OpenClaw. But what we saw was that a lot of the new skills being published contained malicious instructions. A skill could say, okay, I have this skill which is used to manage my Google Cloud or my Google email and calendar.

So when people downloaded the skill, it had a lot of hidden instructions and scripts included, which said that, okay, when the agent runs this skill, it would go in, find sensitive information in my email or calendar, and send it to a third party. And there were thousands of different malicious skills being published to the marketplace.

So they saw, okay, this is becoming a big problem, and what happened was that Peter, the creator of OpenClaw, entered into a partnership with VirusTotal. So now VirusTotal will go in and scan every new skill that's being published, to see if there's any type of malicious content as part of it.
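
As an illustration of what such scanning could look like, here is a minimal sketch in Python. The regex heuristics are invented for illustration, and while the VirusTotal v3 file-upload endpoint shown is the public one, nothing here reflects the actual OpenClaw/VirusTotal integration:

    # Sketch: pre-screen a skill file locally, then submit it to VirusTotal's
    # public v3 file-scan endpoint. Heuristics are illustrative only.
    import re
    import requests

    SUSPICIOUS = [
        r"curl .*\|\s*(ba)?sh",          # piping downloads straight into a shell
        r"base64\s+(-d|--decode)",       # decoding hidden payloads
        r"(send|post|upload).*(api[_-]?key|credential|token)",  # exfil wording
    ]

    def prescreen(skill_text: str) -> list[str]:
        """Return the suspicious patterns found in a skill's markdown."""
        return [p for p in SUSPICIOUS if re.search(p, skill_text, re.IGNORECASE)]

    def submit_to_virustotal(path: str, api_key: str) -> str:
        """Upload a file for scanning; returns the analysis id."""
        with open(path, "rb") as f:
            resp = requests.post(
                "https://www.virustotal.com/api/v3/files",
                headers={"x-apikey": api_key},
                files={"file": f},
            )
        resp.raise_for_status()
        return resp.json()["data"]["id"]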

Hmm. Why has nothing really bad happened yet? It's very underwhelming. Everybody's talking about how scary this agentic future is and everything, but I've barely seen the headlines. Maybe there's just so much other drama going on in the world. I mean, besides that Meta incident, there really hasn't been that much else, or?

No, no, not that I've seen. As I said, I think it's drowning in other things that are happening worldwide. Well, I have one incident that I know happened. Aqua Security is a company that has created an open-source framework called Trivy, which is used to scan for, let's say, bad configuration or bad settings in Terraform code and so on.

They have their tool publicly available on GitHub, and their GitHub repository was compromised a couple of weeks ago. What happened was that when you ran and installed Trivy, it would also install OpenClaw automatically, because of the way that they compromised the GitHub repository.

Hmm. 

So even though nothing bad happened at first, all the users that ran Trivy would also install OpenClaw automatically. And then a week or two later, their entire GitHub repository was also compromised, which made this a supply chain attack, because a lot of companies are running Trivy. Attackers gained access to a lot of different organizations, and this is now propagating into different organizations and companies worldwide.

I just read about that a little bit earlier today in an article.

Hmm. 

And of course, this might stem from one person inside running OpenClaw who installed a malicious skill, and suddenly some credentials or API keys were sent somewhere else. And then, okay, now we've got access to this GitHub repo.

We can start moving forward and try to gain access to other parts as well. Could be, I'm speculating, but I think this is something that we're gonna see more and more, right? Some individual, could be a developer, could be someone working in the IT department or a regular person working in administration, installs this agent without setting it up properly, because it requires some skill to understand how you can set it up in a secure manner.

Suddenly you give the agent too much access to something, could be applications or files or calendar or whatever, and then you have this malicious skill coming in and information is flying somewhere else.

Hmm. You mentioned Cisco's Defense Claw, which is supposed to be a more enterprise, if I can say enterprise, a more business-friendly version of this.

What are some of the legit use cases that you can immediately see there? And are the risks the same for a business, just at a bigger scale than for personal use?

Well, let's look at the blast radius. If someone were able to access my OpenClaw instance, the blast radius would be the machine it is running on, or my local network.

Yeah, my Philips Hue lights, that's the blast radius. Of course I'm gonna get annoyed, but it's not more than that. But if a company sets up an OpenClaw instance running within their infrastructure and exposes it to the internet using some form of communication channel, let's say IRC, you have an agent there, and then someone gets access to that agent and says, oh, I want you to install this executable file on that machine so they can get access remotely through some form of VNC viewer or something like that.

Then it's a way to get through your firewall and to the inside of your infrastructure. So the blast radius can suddenly become a lot larger.

Mm-hmm. 

Again, I think it's about setting it up properly to make sure that the blast radius is as small as possible.

Hmm. I assume that companies right now should be scanning their environments, just to make sure that they have governance over who has installed this. That's possible to do, correct?

Yeah, yeah. With most EDR tools you can scan for executables and newly installed software. So as long as you have EDR capabilities in place, it's fairly easy to find out if employees have installed this or not.
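
For a single host, a rough sweep could look like the sketch below. The process names are assumptions; a real EDR hunt would match on publisher-supplied hashes and install paths instead:

    # Sketch: a rough, EDR-style sweep for agent installs on one host.
    # The binary names are hypothetical assumptions.
    import psutil

    AGENT_NAMES = {"openclaw", "clawd", "openclaw-gateway"}

    def find_agent_processes():
        """List running processes whose names match known agent binaries."""
        hits = []
        for proc in psutil.process_iter(["pid", "name", "exe", "username"]):
            name = (proc.info["name"] or "").lower()
            if any(agent in name for agent in AGENT_NAMES):
                hits.append(proc.info)
        return hits

    for hit in find_agent_processes():
        print(f"{hit['pid']:>7}  {hit['name']:<20} {hit['username']}  {hit['exe']}")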

What are the things you would be doing if you were head of security for a company, knowing what you know, on the defensive side, just to make sure that bad things don't happen through the use of this technology?

Well, we'll always see that a lot of employees are really curious and very open to trying out new things. They see that, oh, this is something I can try out, and they try it at home, and then they see different use cases for how they can use this at work as well.

And I've had so many talks about this the last couple of months: oh, can we just install OpenClaw for our business and try to solve these and these issues? But I think, from a CISO perspective, you have to ask, okay, what kind of alternatives do we have where we can have better control but still solve the same use cases?

Because the way that I see it now, and as you said, OpenClaw is not an enterprise-friendly tool. It lacks the insight and policies that allow us to manage and control it in a centralized way. So I would look for alternatives. If you have a lot of people using OpenClaw, ask, okay, what's the use case that you see for this tool?

And secondly, can we have alternatives in place that provide the same capabilities but allow us to manage them properly?

I think we talked about this last time, that the CISO is the HR for AI usage, or should be. Because it would be ideal if, you know, and I've had this conversation with clients already, a developer or people in the business come to security and say, I want to do X, Y and Z, and then they just say no.

When they should be saying, yeah, give me a week and I'll figure it out, or, gimme some time. What has your experience been with dealing with security teams? Have you met any security teams that have actually taken that responsibility and done good things so far?

Yeah, yeah. Well, I've met both ends of that scale. We've had those, I'd say, gatekeepers saying, no, no, we don't know what this is and therefore we don't approve it yet, so we'll have to wait.

And then we have the others saying, okay, these new capabilities and agents are something that's gonna empower our developers, because they see a lot of benefit in using them already, so we need to figure out how we can solve this. I'm now seeing more and more IT security teams trying to be proactive, trying to understand the needs of the business and the developers, saying, okay, we need to make sure that we can provide capabilities that allow us to retain control but still allow them to use these new tools and capabilities that are being delivered to the market.

Hmm. 

But I think the main problem is that there are so many different tools and agents and capabilities, and there's no single way to, let's say, govern these agents at a large scale. It's a fairly young ecosystem, which also makes it even more complex to manage. In many cases we need to define, okay, these are the pre-approved tools that you can use, and if you have a specific use case for a new tool, then we need to discuss it and figure out how we can solve it.

It's like security needs to go into a bunker for a few weeks and just figure shit out. But they have to worry about the rest of the real job, which is security operations and everything else we had to deal with before AI.

And then you come back after those two weeks and the landscape is totally changed, right?

Yeah. Typical. Get used to it, it's not gonna change. I wanna get back to the governance part, because that's the question I get most often: how do we govern these things? But another thing I wanna talk about is the benefits. You mentioned the benefits to developers. I feel like AI has mostly benefited developers; of all these cases I've heard, developers have had the largest change in their daily life.

First of all, do you agree with that? And second of all, what are some of the other use cases that you've actually seen, like value being brought to the business using agentic features?

Yeah. When it comes to agents, I totally agree. I've talked with so many different developer teams saying that the use of AI has made their job a lot easier.

And of course, with the new models that are coming constantly, it gets easier and easier to see the improvements they bring. I saw a blog post on GitHub a couple of weeks ago where they published some research on how much code is automatically approved, based upon which language model they used.

I think when it came to the latest versions of Claude, 4.5 or 4.6, close to 80% of the code was automatically approved because the quality was so good, compared to the older versions, which were closer to 50-60%. Now, of course, there's a lot of ongoing discussion in terms of, okay, but how much time does each developer save by using these new tools?

It might not be that they save a lot of time, but it makes it a lot easier for them to actually troubleshoot or understand the code. And I also think that's gonna give a lot of additional value for security as well, just understanding, okay, what kind of security bugs do we have in this code?

Just deploying an agent which goes through every part of the source code and tries to see, okay, these and these vulnerabilities are something we might not have been able to detect before we unleashed this language model on it.

It's funny, 'cause now we're using some technology to make the code, and we're probably using the same technology to scan it for vulnerabilities. Which is an interesting situation we've ended up in. When a client comes to us and says, hey, we need governance around this, where do we even start? 'Cause I feel like that is probably the top thing every security team this year should be dedicating some time to: how do you govern this agentic future?

What is your answer to that, as of the 31st of March, 2026?

Yeah. I would say I don't have an answer. I think the issue is that the ecosystem is quite fragmented, because if you look at developers, they have GitHub Copilot, Cursor, Claude Code, OpenAI Codex, so many different AI developer agent frameworks.

Then we have the more, let's say, line-of-business automation agents, which are built using some form of agent SDK, so Microsoft Agent Framework or Google's Agent Development Kit or LangGraph, LangChain. So many there as well. And then we have the more power users or office users, who use Copilot 365 or Claude Cowork or something like that.

So we have all these different use cases now using generative AI. Some of them are running agents that interact locally with the machine, some are running in the cloud, some are just regular chatbots with some form of automation, and there's no single way to control all of these using the same set of tools.

So we need to create some common guidelines to say, okay, if you run some form of agent, you have to make sure that you have these guidelines in place and follow these rules, and you might have some way to provide insight or observability into how the agents are used. But you need multiple ways to manage them properly.

Mm-hmm. So if you have a lot of agents using GitHub Copilot, well, you need to set up policies there to make sure that you control what kind of features can be used, what kind of language models can be used, what kind of insight you get, to make sure it's used properly. And you need to set up the same if you want to use other frameworks with public cloud providers.

When you have something that's talking directly with a language model through an API, that has its own set of tools. So you need to have different toolboxes in place to govern these agents across different areas or different parts of the workforce.
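
One way to picture those per-platform toolboxes under a common guideline is policy-as-data, as in this minimal sketch; the platform names, fields and model names are all illustrative assumptions:

    # Sketch: one common guideline schema, enforced per platform.
    # Platform names, fields and model names are illustrative only.
    from dataclasses import dataclass, field

    @dataclass
    class AgentPolicy:
        approved: bool
        allowed_models: set[str] = field(default_factory=set)
        requires_sandbox: bool = True      # local agents must run sandboxed
        audit_log_required: bool = True    # observability baseline

    POLICIES = {
        "github-copilot": AgentPolicy(True, {"model-a", "model-b"}, requires_sandbox=False),
        "openclaw": AgentPolicy(False),    # not approved until it can be governed centrally
    }

    def is_allowed(platform: str, model: str) -> bool:
        """Check a requested platform/model pair against the common guidelines."""
        policy = POLICIES.get(platform)
        return bool(policy and policy.approved and model in policy.allowed_models)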

It sounds like you are a proponent, or a fan, of the platform team, kinda like with developers, you know, these guardrails that need to be set up today, but for AI. Do you agree? And two, do you use the same people, or should that be the security team now?

Um, I haven't figured that out yet. It requires, of course, a new set of expertise or knowledge, if you will, to understand how to use this properly.

Hmm. 

I think it all depends on how mature the team is. If the platform team loves to play around with new technology and has a fairly good understanding of the tools and ecosystem, fine, put the responsibility there. The central security team is, of course, in most cases still responsible for having this common set of guidelines, if you will, that should always be applied to every part of the organization.

There was a report that came out saying that only 4% of those that have invested in AI, whatever that means under the umbrella term AI, have seen enterprise benefits. Is that how you look at it? What's your view, again as of the 31st of March, 2026?

Well, I can start with a simple example that provides return on investment. And I'm not trying to advertise for some specific company now, but let's say we have a certain collaboration platform where you now have virtual agents that can do transcription, create meeting notes and meeting summaries, and also create to-do tasks automatically. That alone just saves a lot of time.

Absolutely. 

And of course, these tools are being used every day. If you figure out, okay, how many meetings do we have within this company in a week?

How many hours are being saved on a daily basis just by having this single, pretty simple virtual agent running, collecting everything that's being said and creating to-do tasks automatically? So, pretty simple. And then we have use cases for OCR.

Digitalizing paper content. That's something that still takes a lot of time, because you need to scan the content. Now we have language models that can do this ten times faster and much more accurately compared to traditional OCR. That's a fairly good use case, but it's simple.

And then we have a lot more complex use cases now, which can automatically handle every email that's coming in: look at the correspondence and define, okay, which direction should this email be sent in? Because let's say you're a municipality, you need to sort all incoming emails and reroute, redirect or forward them to the right recipient.
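
A minimal sketch of that email-triage pattern, assuming the OpenAI Python SDK; the department list, model name and prompt are illustrative:

    # Sketch: ask a language model to pick a recipient department
    # for an incoming email. Departments and prompt are illustrative.
    from openai import OpenAI

    DEPARTMENTS = ["building permits", "health services", "waste management", "other"]
    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def route_email(subject: str, body: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": "Classify the email into exactly one of: "
                            + ", ".join(DEPARTMENTS)
                            + ". Reply with the department name only."},
                {"role": "user", "content": f"Subject: {subject}\n\n{body}"},
            ],
        )
        answer = resp.choices[0].message.content.strip().lower()
        return answer if answer in DEPARTMENTS else "other"  # fall back on unexpected output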

Of course, there are some additional use cases that we also see save a lot of time, and... I had another use case that I thought of, but now it's slipped my mind.

I'll take one that I heard at the conference, or two actually. One of them was a guy that was trying to revolutionize the renewal process, you know, selling licenses on behalf of vendors.

He has a bunch of junior salespeople, and he transcribes everything they're saying and runs it through an agent so that they get feedback based on their calls. I would be super scared if I had some agent listening to everything I'm saying with my clients, number one, 'cause that's kind of a scary risk, or it would be in my line of work, but also just because now my boss knows how bad I am at my job.

So that was one. And one of the other startups, you know how call centers are being tricked into giving credentials away and whatnot? It was just analyzing the calls, or the chats, going towards the service desk to understand them all. It's sentiment analysis.

Like, are they asking for something? Are they asking for it now? It gives them a risk score and lets the help desk on the other side know that, like, this is likely bullshit, right? Really simple use cases. So I guess people started with the low-hanging fruit, which is good.

And I know there are a lot of use cases in Norway, some simple, some more advanced agents being set up.

I know that the public roads administration also has some examples that they published online in a webinar I saw a couple of weeks ago. I think that most companies, when they look at AI, say, okay, how can we do some quick wins? We have this specific use case and we have this budget.

How can we use AI to try and solve this? And they create one, could be a fairly simple one. They get knowledge, they get more understanding, and then they mature a little bit more in terms of using AI, and then they start to add more and more use cases on top. So I think that through the remainder of 2026 we'll see a lot more use cases and agents being created, and also more public references, because I think we've gotten to the point where these agents, or these language models, have much higher accuracy.

They can handle much more content than before. And now that they can interact with the command line, they can interact with your machine, so they can move around in a machine to actually do things that traditional RPA has done before, with much higher accuracy as well.

I think we'll have more of these, let's say enterprise use cases being solved with agents. 

Mm. And based on your Statens Vegvesen example, and yeah, they're the ones that control the roads, basically: a bunch of cameras reading car license plate numbers. There's also a case here for what they call multimodal, where pictures and voices are being used.

So I guess as technology advances in those fields, like, for some reason I have a feeling that Google is best at picture generation and understanding pictures and stuff, once that gets really good, there'll be even more use cases available. And it's not just text, I guess, if that makes any sense.

Yeah, yeah, totally makes sense. And just think about surveillance, where you can attach a video stream to a language model and see, okay, if there's anything suspicious going on, or certain things that you should look for. Let's take Google, for example.

Gemini can handle up towards one and a half hours of video content, analyze it and specify, okay, what happened in this video, what were the persons or objects involved. And like OCR, that's a capability that multimodal has. We also now have smaller language models that can run on your phone for image recognition.

Smaller language models that's come out on your phone for image recognition. So let's say you're like a electrician working in the field, they need to troubleshoot something, and then you can have an assistant running on your phone and you can just take a picture of the, of the instrument panel or something that you're troubleshooting, and then the agent can talk to you directly on your phone where you might not have internet access at all and say, okay, this is the way to troubleshoot this and this 

Awesome. The future is so cool.

Yeah. The cool part is that we're just getting started, because if we look at the last year or so, it's been quite hectic, so many new capabilities being pushed out. And now we have investments up towards 9,000 billion dollars, 9 trillion dollars, being invested into building new AI data centers, including two small ones in Norway, but on a global scale.

And of course, the big problem for these big AI vendors or AI companies, Google, Anthropic and OpenAI, has been access to computing hardware, or AI hardware. Now this is being solved, and that means we'll get new models quicker: stronger, faster and more powerful compared to the models that we have today.

Yeah. Marius, thank you so much for your time. You're a legend. And I am, as always, looking forward to speaking with you next time on the podcast. Take care.

Thank you. Thank you for having me. Bye. Ciao.

Well, that's all for today, folks. Thank you for tuning in to the mnemonic security podcast. If you have any concepts or ideas that you'd like us to discuss on future episodes, please feel free to hit me up on LinkedIn or to send us a mail at podcast@mnemonic.no. Thank you for listening. We'll see you next time.