Engineering-led security and the future of agentic protection with Raghu Sethuraman

Summary
In this episode of SecOps Confidential, host James Berthoty sits down with Raghu Sethuraman, VP of Engineering at Automation Anywhere, to discuss how security organization structures are evolving and why engineering leaders are increasingly responsible for product security. Raghu breaks down the three dimensions of AI security, including code generation security, system prompt protection, and runtime monitoring, and explains why teams need to start preparing for agent-to-agent (A2A) communication now, even if it feels far away. They discuss how security is becoming everyone's responsibility across the SDLC, why data permissioning and governance can't be afterthoughts in an agentic world, and the practical first steps for building AI red teaming and ethics frameworks. Raghu shares lessons from being on the front lines of agentic automation, including how Automation Anywhere approaches layered security, agent identity management, and the rapid shift from first agent adoption to agent proliferation.
Show Notes
- Why product security is moving under engineering leadership while InfoSec stays with CIO orgs
- How security becomes a shared responsibility across developers, DevOps, and security teams
- The three dimensions of AI security: code generation, system prompts, and runtime monitoring
- Why AI red teaming, ethics, and governance must be parallel tracks, not sequential
- Agent-to-agent (A2A) security protocols and the evolution from MCP to agentic swarms
- Layered data security approaches: public, organizational, departmental, and user-specific permissioning
- How to threat model agent communication, similar to dependency chain analysis in traditional software
- The rapid snowball effect when teams discover agent value and why early preparation matters
- Practical first steps include starting with AI red teaming and governance before agent proliferation hits
Links
James Berthoty (00:01)
Hello everyone, welcome back to SecOps Confidential. I am super excited to have Raghu here with me today, VP of Engineering at Automation Anywhere. But please go ahead, Raghu, introduce yourself, let the people know a little bit of your background and why I would be so excited to have you here.
Raghu Sethuraman (00:17)
Thanks for having me here, James. My name is Raghu Sethuraman. I'm Vice President of Engineering at Automation Anywhere. Automation Anywhere is one of the leaders in the agentic process automation space. Very upcoming, very fast growing company. I'm looking forward to talk here, James.
James Berthoty (00:35)
Yeah, and I think there's a few things I really want to talk on. One of those things is obviously going to be AI security. I think you guys are pretty clearly on the front lines of that based on the nature of the company and how you operate. But the first is just talking about, I'm super interested in your organization structure, being a VP of engineering that's handling security. That's something that I'm seeing more and more commonly is like the CIO or VP of engineering handling security for the organization. But when it comes to how you guys even just conceptualize building a security program.
like that, what made you decide that security best sat under engineering directly, as opposed to the many other org structures that people take when it comes to security?
Raghu Sethuraman (01:15)
See security again has got multiple phases to it. You've got info security, product security, infrastructure security, right? There are different functions and they have got different specific set of roles. Again, red teaming, blue teams, all of them are there as well. So some companies go with this traditional route of having a CISO, being there where all these functions unit under the particular function. Some new entities, there is need to understand the product really, really well.
work very closely with the product there. So product security becomes very closer into the product engineering role itself. Again, goes into infrastructure security, infrastructure and operations part of it there. So an InfoSec for the IT side of it that resides with the CIO organization there. I think that is a more common trend I'm seeing there based on my experience and the connections in the industry.
James Berthoty (02:11)
Yep, and I totally agree as far as I've always had that separation in my head between like IT security that's really focused on like your permissions management, your SaaS security, your endpoint management, and then the product security side, which is like your cloud application, DevSecOps, the red team, blue team stuff. Is that sort of how you guys separate it? And then where does like security operations, I've already, I've always found security operations to sit in a weird place within that structure as well.
Raghu Sethuraman (02:39)
See, security operations right now for us, everything is the product side of it. Product and infrastructure security are all aligned into a single entity, whereas IT related InfoSec is residing on a separate entity there. But even though they are part of different functions, we act as a single entity. There's extremely strong collaboration there, exchanging nodes, periodic sync ups.
James Berthoty (02:54)
And then is it.
Raghu Sethuraman (03:08)
We brainstorm how to learn best of InfoSec into product security set of it and infrastructure security on that one. Similarly, we exchange the new tools that we are using it goes back and forth on that one. We build our tools in-house, we share our notes on that also.
James Berthoty (03:23)
And then when it comes to the operations piece of it, do those teams own their own incident response processes, or do you have a separate organization that's doing the front lines for incident response and real-time alerting pieces?
Raghu Sethuraman (03:41)
As I said earlier, respective teams, the product security teams and the IT InfoSec teams, have their own InfoSec incident management process there. But we exchange notes, we unify that, we review both the policies, we add to it, we standardize those templates to a larger degree there. Even though the respective functions are reporting to different leaders,
in terms of operational maturity, terms of how we respond back to it, how we build our roadmap, we act as a unified function.
James Berthoty (04:15)
Yeah. And then when it comes to even how organizations think about like how to scale their own programs here, obviously you guys have these functionalities across a lot of different teams. When would you recommend someone start like how should someone try to balance the prioritization between investing in like the IT side versus the product security side? Do you think it really has to do with like every business is unique in this way? Or is there some template people can follow for where to invest and when?
Raghu Sethuraman (04:51)
You mean investment in Infosec, whereas product security side of it?
James Berthoty (04:55)
Yeah.
Raghu Sethuraman (04:56)
The requirements are very different between InfoSec and ProductSec and infrastructure security side of things. So endpoint monitoring, laptop monitoring, device monitoring, everything comes under the InfoSec point of view. So VPNs, everything, they look into those things over there. At the same time, cloud access, even that we keep it under our CIO part of the InfoSec team. They manage our cloud identity and security part of it.
they do the auditing side of it. They work very closely with us on all those things there. So when we bring in tools like SIM and agentics talk and everything on that point of time, we work with them as well and we come back and say that, can you utilize this as part of your CIO, InfoSec organization there? Whenever they bring in their tools, they bring it to us and come back and this is what we're evaluating, what do you think about it? it's easier, responsibilities are different, but both of us have different set of directions.
and wherever we could be part of the exchange or nodes and bring the to be shared the tools as well there.
James Berthoty (05:59)
Yeah, so it sounds like the prioritization is more of like an organic experience between the teams as opposed to trying to focus overly on one versus the other.
Raghu Sethuraman (06:08)
That's absolutely right. So the prioritization for me is about how do I make my cloud alerts from a SIM side of it, what if it's getting fired, how do I optimize it, how do I get a result or the false positive very quickly, how do I get to this genuine issues in the order of minutes, how do I act on those things over there, do I have enough alerts and audits in place or not.
So my focus is more towards ensuring my product in the cloud. We operate in all the clouds, all the multi-clouds, right from AWS, Azure, GCP, everywhere. So when we have that presence, we have got our point of presence all over the world. So my focus is about how do we scale, how do we secure, how do we ensure that compliance is being adhered to. For the AI side of it, agent side of it, that security is also getting stronger on.
James Berthoty (07:04)
Yeah, think something I'm always very curious about for very product driven organizations like yours is when it comes to trying to address a lot of those like ongoing security incidents and responding to real time alerts as far as like go investigate this or that function of it. A lot of times you can end up so deep in the application or
the infrastructure side that you have to call in like DevOps teams, platform engineering teams, development teams directly. And so how do you like coordinate the responses between like the product security team and then like the engineering team as far as helping them like work together when it's necessary or if there's any times in which just the dev team works with it directly as well.
Raghu Sethuraman (07:52)
See, security is not one team's responsibility. So security is everyone's responsibility. Right from design to deployment and operations, everyone has a role to play. That has been deeply enforced into every engineer's mind. So we don't look at it like security is like one team's or one person's responsibility. That is a cultural shift that has happened across the entire industry. and it is very deeply built into our engineers mind here as well. So whenever, whether we write from conceiving a product, threat modeling, design, everything happens at a very early stage.
Raghu Sethuraman (09:54)
Security is a shared responsibility, right? Again, the industry has completely moved from security alone is being responsible to developers, testers, devs, the cops, everybody is collectively responsible to ensure the product is secure. So we do the same thing over here, right from a product is being designed, once it is conceived and designed, security team works hand in hand with them. So. product security side of it, how do you scan the code, what is the threat modeling for that, and how do you deploy the code in a secure way, what is the access restrictions that is required for that one, all of them are being discussed. So security is everyone's responsibility, whether in an incident scenario or non-incident scenario, that culture is completely embedded across all the developer mindset, right?
James Berthoty (10:44)
And then I think where a lot of organizations struggle is it requires a lot of intentional effort to get there. As far as people tend not to wake up one day and threat modeling's happening and security developers are working together. What are some of the decisions you've made to help drive towards, or how can teams help drive towards that collaboration where new projects are getting threat modeled ahead of time, there's conscious design around the CI-CD systems from developers and building up that relationship.
Raghu Sethuraman (11:15)
We made it very clear, we have got checklist of sorts. A feature or a product cannot see the light in production until it has gone through the security review process. So, compliance and security, both are important and we have made it a very strong policy that it has to, see, right from the product is conceived, we know that this is the timeline a product is gonna hit the market, this is when the feature is gonna be released in a particular release. So security team is aware it is coming in their way. We work with our legal team as well this is coming. So we come back and say this is how the product is being designed. These are the capabilities. This is the geo which is going to get released. This is the architecture for that. Both software and infrastructure architecture. So we get that review happens. The feedback comes in place. We work on those feedback.
And then after, once we think that the product is ready to deploy, of course, there'll be some changes in the design based on how the software matures. We go for revise the changes on that one. They get reviewed, they get analyzed, feedbacks are being provided, tools are run to ensure it meets all our standards. No vulnerabilities are there in that one. And then we have our own exit review.
Once in that exit review, we make sure that all our compliance requirements, security process are all followed on that one. And then we decide to release the software. So a software cannot even go to production if it has not gone through security review. So which means that as part of that, right from threat modeling to red teaming, everything has to happen.
James Berthoty (13:00)
Yeah, I think to go into some of the AI stuff, since you guys are obviously on the front lines of deploying the stuff safely for customers, like how are you even tackling that from a, or how has it even evolved, like trying to threat model, like the earliest versions of AI threat modeling into what I'm sure now is even a more robust processes. There's more attack vectors that are getting discovered in an ongoing basis. Like how are you... Yeah, how has that process changed and how have you been able to develop it?
Raghu Sethuraman (13:33)
For AI and agents, the process has not changed, attack vector has changed. Correct? So we still, you again, it is part of the sort of development process there. So SCA, OSS, CVEs, all the S-bombs, all of them are still gonna be there. You are gonna still run your SAS, DAS, all those tools in place. So in terms of tooling, in terms of process that has not changed, we're still gonna continue the same process on that. But whereas,
very focused effort on AI and agentic. For example, prompt scanning, prompt fuzzing, all of them we are going to introduce for that generative AI related, LLM related security. And for agentic, again, agentic security guidelines are available. So agentic security tools are available. Those comes into play. On top of that, whatever we are doing on existing SDLC cycle itself, we are converting them into an agentic security behavior that will enhance our productivity, that will enhance how we do our security DevSecOps process as well.
James Berthoty (14:37)
I think this is probably one of the transition areas that's like the most challenging for a lot of leaders to address because on the one hand, it kind of feels like, everything's changing, right? Like I can do security reviews and pipeline with indeterministic systems. I can do a lot of this LLM Red teaming capabilities like AI pen testing on a separate like evolution of DAST. But on the other hand, from a pure like code generation standpoint, like code is code and so there's an element in which things haven't changed. How are you trying to balance like adopting new technologies? Like what are some of the ones that you think are the most important? Like you mentioned the AI red teaming piece versus what are some of the things that you think are more the same?
Raghu Sethuraman (15:24)
As I said earlier, the SDLC process, developer SSDLC process is the same. We are going to, it will follow your standard security best practices, security checklists and everything will be there in place there. The tools will change, the process will change a little bit to ensure that we integrating AI and agent-specific security over there. The way I look at it is in three dimensions. One is developer-led security which is the Curors and clouds of the world where the code is being written, the code level security for that one is one angle to it. SDLC related, for example, we are putting system prompts and everything on that. So the ASPM angle to it is changing a little bit on that one. Then you've got agent-tg-spm side of it. That is slightly different on that. So the tooling and how you are going to ensure the security is going to change, but the process part does not change is what I'm trying to get.
James Berthoty (16:21)
Yep. And then, Raghu Sethuraman (16:23)
And as an extension to it, we will talk for example, let's take that infrastructure side of it. We have got SOC, SIM, SOAR was a standard flow in terms of how do you detect and how do you respond back to it. Now, agent XSOC is coming into play where you are able to detect and you are able to remove your false positive in a much quicker manner there. For example, ExoForce is one of the tools that we have recently introduced in our network which reduce our detection time and resolution time to a larger degree. So wherever we could integrate agentic tools, we'll definitely integrate it, but process is not changing the tools and the detection mechanism are going to change, attack vectors are going to change.
James Berthoty (17:05)
And then when it comes to people who are in similar industries of like building AI automation tools and building AI tools, do you think there's unique security challenges or gaps that like just come from being in your industry that like you need to invest in earlier than people who aren't in the same industry?
Raghu Sethuraman (17:26)
I think it's disturbing everybody who's generating code. Whether you are having generating code for your internal application or you're building it for your own customers, moment you start writing the code, everybody has moved or integrated coding assistance. If you have coding assistance, the amount of code that gets generated is really large. And along with that, it also creates additional threat vectors. So when that itself fundamentally changing how you're approaching security. Engineers used to the code, it has to go through standard security process is one part, whereas because of coding assistant, the code that gets generated, that has to adhere to your company's standards. So not all the, every engineer writes a code differently there with coding assistants coming into play with features that are getting developed, that changes a lot. So everybody has to change. that security approach based on the coding behaviors, coding assistance.
James Berthoty (18:28)
Yeah, and then just to hop back to the security operations side of it, are there certain types of exploits like prompt injection type things? Are you guys monitoring for those alongside like historical attack vectors? And then how has that even changed like the way security operations teams have to deal with like incident response?
Raghu Sethuraman (18:50)
Yes, definitely the theme. We have to make sure that two dimensions to it. One is the performance and your AI response are consistent. We have to focus on bias, ethics side of it as well, along with your attack vectors based on malicious intents as well. So you have to ensure your AI security is consistent and delivers it release over release, build over build. Not only your internal build changes, but your endpoint changes. You keep on changing, you continue to change your, it could be OpenAI, could be Gemini, whichever your endpoint you are using, evolve, the versions evolve on that one, build numbers evolve on that one. You have to ensure the responses are consistent. with it, your system prompts, the new features keep on evolving. So you have to make sure the system prompts that are getting released, release over release, or optimized for best performance along with it, they are consistent and they are not hallucinating. They are not creating bias or additional risk vectors there.
James Berthoty (20:06)
Yeah, I think what's so challenging about this transition for people is really we're just touching on like pretty different sets of capabilities that are all pretty new to be even talking about. So like one is the vibe coding security or the AI code assistant generation process and how to secure that by default. And then you've got the AI red teaming piece of like testing the system prompts before they go into production and looking for biases and things once they're in production.
And then you've got like the runtime monitoring piece to know, am I, is someone currently trying to exploit the service or is it being actively attacked? Do you think those are like the, is that a right way to summarize like the different attack vectors or is there additional pieces there even?
Raghu Sethuraman (20:51)
No, those three, would say as a developer side of it, and wipe coding, coding assistant, related security, and then you have got your system prompt, a prompt injection related security, and runtime security. Those are the way I would look at it in the three dimensions.
James Berthoty (21:04)
And then how have you, first of all, how have you tried to adopt the balance between like, I think there's a lot of, I obviously work with a lot of security vendors and they just sort of hope that every time one of these come up, they're like, oh, someone's gonna look for a tool to go do it, which isn't always the case. Like there's a lot of DIY, there's a lot of existing tools and like reworking them. How have you experienced and how would you advise other people to go about like, trying to find these sets of capabilities out there because there's a lot of like AI security specific providers. There's a lot of like, we do code security, but now we do AI code security. Like there's so many different ways to go about getting protection across these different areas.
Raghu Sethuraman (21:48)
I don't think there is a unified solution that is available. People say that we have solved end to end which is not because this is evolving so fast. So you have to treat them atomically at this point of time. You have to find the right tool that solves these individual challenges there. But to a larger degree, the unified platforms are all coming up. People are trying to focus on all those things there. The problem setting is huge. People are still figuring out which one to go and... James Berthoty (21:55)
Yeah.
Raghu Sethuraman (22:18)
streamline first. So there are lot of startups focusing on all these three areas there and consolidation is also, you can see the number of acquisitions that are happening in the industry. Consolidation play is happening right now at this point of time. But one thing we need to realize about agents is also this. When you talk about agents, you have to look at it like you've got internal agents, the agents that are very specific to a particular department. And they have got inter-agents which are across the company, across the departments.
And then you have got external agents, internal agents, that hybrid agents that are there. So the communication between them also has to be secure. Roles, identities has to be evolved. Roles and identities between the agents, roles and identities for the user who's accessing those agents, all of them are evolving. So, go ahead.
James Berthoty (23:09)
And that's, when you think about those internal agents, are you referencing more like workforce style agents like Microsoft, Copilot and Gemini type agents? Are you talking about like homegrown applications that are doing more like internal type work and information?
Raghu Sethuraman (23:29)
Every team will have their own set of agents. It's not tool specific. Every tool you use, they have their own set of agents. On top of that, every department, every function will have their own set of agents. example, HR will have a bunch of agents for their department. Finance will have their own agents. Engineering will have a whole bunch of agents with respect to, for example, security will have a whole bunch of agents. DevOps will have their own set of agents there. Every team will have their own agents for a specific purpose there.
And those agents, internal agents have to be secure and they'll have user permissions and access controls that are required for that one. You're talking about agentic identity and security and you're talking about the users who are accessing their identity and security. And agents to agents communication are also, again as you can see that ATA protocol, everything is there right now. And how you're going to secure that. See for these things, data is primary. It's extremely very important part of it for all these agents to deliver what they are supposed to deliver, which means that how do you restrict and access, how do you provide right level of access to your data? That is a bigger challenge as well. Agents cannot perform by themselves. It's not just about a bunch of tools coming together. They need right set of data that they can refer to and users have different set of security for that.
James Berthoty (24:49)
Yeah, that's when it comes to like that agentic identity piece. I know there's a lot of different opinions out there as far as like if OAuth is good enough or it's not good enough or what should really be done as far as like tracking and trying to do permissioning. Like how have you guys tried to tackle that whole space of trying to figure out how to build a permissions model for internal agents and what they should have access to and all of that.
Raghu Sethuraman (25:14)
So some places we definitely have gone to build our own tools. We have leveraged industry standard tools on that one. Some of the existing tools or vendors that we have been using have evolved their capabilities to support this. It's a combination of both internal existing vendor and new tools from the industry that we using to manage all those.
James Berthoty (25:38)
And then if someone is making the painful transition from like, my data set up is a total mess, nightmare, I don't know what has access to what, into wanting to try to wrangle some of this stuff, like what are some of the first steps people should take to try to transition into that?
Raghu Sethuraman (25:56)
data is key, how the data is being structured, how the data is being organized. Again, rags of the world are evolving faster. You are able to bring in different data sources to streamline that. And again, we acquired a company called Isera, which is also one of the important company. If you could look into it, you can bring in different set of data sources.
As a platform and you can interact with all of them as an agent there. So working on the data Creating an agents on top of it is the first step and then you want to look at it Security right at the data level and at the agent take level So which data you want to expose you you want to create an agentic framework as a company in terms of Look at this way for example, you have a corporate network. How do you what do you do? You you have virtual lands you put people in different groups and provide the access to that in the past. Similarly, now your agents will come in. Agents will be structured in such a way for that based on the functions and the intended end users. So you create logical groups, logical access controls, and then agents reside inside that parameters.
James Berthoty (27:17)
Yeah, I'm curious your thoughts along. You know, there's some companies that have invested heavily into like semantic permissioning models where it's more like providing context to the agent of saying like, here's what this user is, they're an HR person, they're allowed, they should ask these kinds of questions, but not these kinds of questions and having the AI make its own sort of judgment calls about if the access is allowed or not based on what it knows about the user, as opposed to a more strict like here's the role associated to that user and therefore it doesn't have access to this database and this database like more from like an infrastructure level. I'm just curious if you have like a approach that you think is more promising or if it's both doing both things.
Raghu Sethuraman (28:01)
So you are going to have layered approach. First and foremost, you're going to have the data layer, the data layer which is like, again, public available internet data, and then you're going to have organization-specific data, department-specific data, user-specific data, right? And then you're going to have agents which have right set of permission to access each of this, organization-specific, department-specific, team-specific, and user-specific data sets, and the user who's asking for this particular data which has got the right level of access for all these things over here. If the request is not coming to you in the form of an agent, agent is talking to another agent, the secondary agent should have the right level of permissions for accessing all these data. It should also receive the user who's asking for this data so it can make educated decision there. So there'll be a cascading effect in terms of if there's a failure in one place, it'll have a cascading effect all the way to the entire part. It has to have a layered approach. The permission has to be also in a hierarchical manner.
James Berthoty (29:07)
I think I want to bring this back to that threat modeling piece because it gets at how difficult I think it is to threat model some of these systems compared to more traditional infrastructure where the threat model was, here's this sandbox environment. And so it doesn't have access to any of this data. Or it's splitting up data tables into what's the most high risk data table or database assets. Whereas this has layers of a combination of semantic permissioning as well as more infrastructure-based permissioning.
And so that's the part that, yeah, how are you seeing threat models become much more complicated as a result of trying to follow this data? Or is this just an evolution of like data security as a whole?
Raghu Sethuraman (29:52)
It is evolution of data security as a whole. Look at it this way, when you are a developer and you're gonna use, let's say you're gonna use an NPM package, that package can inherently call another package. It can call another package over there, which is already solved the problem right now, correct? So you are able to look at all these packages, what is the risk in each of those package dependencies, and then you are able to find the CVs related to that and you're able to address that.
Similarly, you are going to deal with this problem in a fundamental way, talk about what agent can do, what kind of data set it is going to have, be very focused on that one and treat the communication layer as a separate layer and focus on what is getting exchanged, how do you restrict it. How you're approaching is changing, whereas the SDLC best practice that we have today still should hold good with additional adjustments to
James Berthoty (30:47)
Yeah, think, mean, honestly, I think you guys are more on the front lines with like 8a protocol than most people are in terms of thinking about this evolution from, we had the first evolution from like chat bot to MCP tool user thing, but now getting more to this idea of like agentic swarms or like agents communicating with agents to like thousands of degrees of capabilities. And I think that evolution is just really hard to conceptualize in real time? Like how should people even go about tracking this? First of all, do you think that transition's happening to like from this MCP thing to the A2A piece? And then how can people, how should people conceptualize that change that's happening?
Raghu Sethuraman (31:37)
See, first of all, people have to realize that this change is happening, right? People are talking about, I seeing the right value for my investment, for some of those AI and agentic investments there? Some people are seeing it very quickly, if done and implemented really well, some are taking longer, but it is here to stay. That is not going away. So if you are not thinking about it, I really request everyone to start thinking about that immediately. So as a first step, start looking at the data, start looking at your process. Are you, whatever tools that you have today, are you future proof? Because the shift, when it happens, it happens in very fast way. And at that point of time, you may not have enough time to go and react to it, right? Your team may just be looking at coding assistant at this point of time. Once they realize the power of AI, the power of agents, it snowballs very quickly.
So at that point of time, we don't want to be caught off guard. So my request to everyone from the security community is to start looking at it. Which tools do you have today? What process you have today? What framework you want to apply for that today? So that when the first agent hits you, and one to hundred will be very fast. Once people realize that first agent is easy to build, taking it to production, and people are able to see value on that one, other team would like to follow suit on.
So at that point of time, trying to train the teams and then making them adapt to a newer behavior will be very difficult. So start fundamentally today and start experimenting at this point.
James Berthoty (33:19)
Yeah, I think when it comes to even just trying to make that more tangible for people as far as like making that first step towards trying to think in a more like A2A style of communication and doing guardrails, like what's the right thing to sort of start at? Is it the AI red teaming? Is it the data permissioning? Is it the runtime monitoring of what your agent's doing?
Is it creating like a governance, a mapping of like what the agents are and how they work? Yeah, what are some of those first couple steps?
Raghu Sethuraman (33:53)
We are talking about the entire flow. Everything is required. But if you are going to start with something fundamental, start with the AI and red teaming first. And ethics and governance angles is going to be very, important. Governance cannot be an afterthought. So all these things are parallel tracks for your AI and agentic journey. It's not one or the other. But if you are going to start something, of course, your development cycle starts first. Before it goes into production. So start focusing on your developmental best practices there. How do you ensure that the current system, the current process are not getting broken, it's adapting to this newer chain?
James Berthoty (34:35)
Cool. You've definitely given me a lot to think about as far as, I just don't think as many people as you, I don't think many people have had the same experience you've had in being able to be on like the front lines of this stuff as it's developing. And so it's just interesting to see like the lag behind things, right? Where like we're starting to get an idea of what AI code generation governance looks like.
And a lot of security teams are starting to implement like their tools, MCP tool in their AI coding assistant. And I think for a lot of people, seems conceptually pretty far ahead to be at this like autonomous AI fleet that's happening piece. But I think you're right that the pace at which we're seeing this develop is pretty unprecedented. And I think, yeah, that...
Trying to get ahead of it with some of these capabilities is super important for people.
Raghu Sethuraman (35:38)
Thank you. I agree with you on that.
James Berthoty (35:40)
Cool. Well, thank you so much for coming on and sharing your experience and advice on all of this with us. Where should people go to follow you, learn more about automation anywhere, all of those sorts of things.
Raghu Sethuraman (35:52)
Again, please definitely follow automation anywhere. com and I'll share my LinkedIn to this podcast. You should be able to follow.
James Berthoty (36:01)
Sounds good. Well, thank you so much for coming on. I appreciate it.
Raghu Sethuraman (36:03)
Thanks James, have a wonderful day
Explore how Exaforce can help transform your security operations
See what Exabots + humans can do for you
