My name is Alan Radford. I'm a strategist here at One Identity. When you consider how many calculations a second AI can perform, and how it integrates with that wider identity fabric, AI is in a unique position to preempt certain activities and enact them ahead of the need.
There's a great deal of intelligence that customers can expect from AI-powered PAM features, not just in trends of activity, but in triggers of activity. When a privileged user makes a request for access to do something, there are hoops to jump through. Typically it might be auto-approved, but they may put a justification in there. Those justifications can now be much richer.
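As a minimal sketch of the idea, the snippet below scores a free-text justification on a privileged access request and routes it to auto-approval or human review. All names here (`score_justification`, `route_request`, `REQUIRED_CONTEXT`) are hypothetical illustrations, not part of any One Identity API, and the scoring is a crude stand-in for what an AI model would actually do.

```python
# Hypothetical sketch: deciding whether a privileged access request can be
# auto-approved based on how rich its justification is. The heuristic below
# stands in for an AI model; names and thresholds are illustrative only.

REQUIRED_CONTEXT = {"ticket", "change", "incident", "maintenance"}

def score_justification(text: str) -> float:
    """Reward longer justifications and references to a ticket, change,
    incident, or maintenance record, which richer justifications tend
    to include. Returns a score between 0.0 and 1.0."""
    words = text.lower().split()
    context_hits = sum(1 for w in words if w.strip(".,#:;") in REQUIRED_CONTEXT)
    length_score = min(len(words) / 30, 1.0)  # saturates at ~30 words
    return 0.5 * length_score + 0.5 * (min(context_hits, 2) / 2)

def route_request(justification: str, threshold: float = 0.6) -> str:
    """Auto-approve only when the justification is rich enough;
    otherwise escalate to a human approver."""
    if score_justification(justification) >= threshold:
        return "auto-approve"
    return "escalate"
```

In practice the scoring function would be a trained model with access to the wider identity fabric, but the routing shape (score, threshold, escalate) stays the same.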
We've also seen use cases in the past where customers would take session recordings of completed privileged tasks and use those for education. In their helpdesk environment they would go, OK, well, this was how we fixed this problem. If the problem happens again, this is the procedure you can follow, for example.
Or, at a deeper troubleshooting level: well, this is the activity we went through when we tried to fix this problem. It didn't work. So we can look at what we tried and adjust accordingly. AI can process all of that activity in flight, in real time.
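The in-flight idea above can be sketched as a stream processor that annotates privileged session events against steps already known to have failed, so an operator can adjust course mid-session. Everything here (`annotate_session`, `KNOWN_FAILED_STEPS`, the event shape) is an assumption for illustration, not a real product feature.

```python
# Illustrative sketch: tagging privileged session events as they stream in,
# against remediation steps that past session recordings showed did not work.
# Names and data shapes are hypothetical, not any One Identity API.

from typing import Iterator

# In reality this set would be learned from analyzed session recordings.
KNOWN_FAILED_STEPS = {"systemctl restart appd"}

def annotate_session(events: Iterator[dict]) -> Iterator[dict]:
    """Yield each session event with a flag: did this command already
    fail to fix this class of problem in a recorded session?"""
    for event in events:
        event = dict(event)  # don't mutate the caller's event
        if event.get("cmd") in KNOWN_FAILED_STEPS:
            event["flag"] = "tried-before-did-not-work"
        else:
            event["flag"] = "ok"
        yield event

# Usage: feed a live session's command events through the annotator.
session = [{"cmd": "systemctl restart appd"}, {"cmd": "journalctl -u appd"}]
annotated = list(annotate_session(iter(session)))
```

The point is the shape, not the lookup: because the events are processed as a stream, the flag is available while the session is still running, not after the recording is reviewed.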
And so when you start doing that, you get new emergent capabilities that are very, very predictive and prescriptive based on what's actually happening. If I'm using an AI to perform a task, the owner of the AI would be accountable for the AI's activities. High-risk activity is no longer being conducted by humans. It's being conducted by AI instead.
That, to me, is a double-edged sword. Because on the one hand, I've got greater control, and I've also removed something we call human error. But I'm introducing something new that we don't fully understand at the moment-- artificial error.
When it comes to the proliferation of AI through the identity fabric, we're losing the human element. On the one hand, that can be quite useful in terms of removing human error, but you're also removing human insight. And so, left unchecked, letting the AI make the determination as to, A, what I mean, and, B, what I'm inferring, carries danger with it. In the world of cybersecurity, allowing AI to make those inferences and those assumptions on our behalf is dangerous. It's not about making the decision. It's about the knock-on effects.
If I have 10 people trained in the One Identity portfolio, I can do more with that team and that vendor than I can do with twice the number of people across three different vendors, for example. In the world of AI, in that sort of fabric world, you're further squeezing that ROI. You're further sweating that same concept. Because an AI can learn a fabric at a far higher pace, and it can also be much more insightful.