What if your pricing team had a few more brilliant analysts who never sleep, process data instantly, and speak your language?
That’s the promise of Agentic AI.
In this session, we’ll explore how agentic systems built on the same foundations as ChatGPT can help you respond to market volatility with agility, accuracy, and confidence. These AI-powered agents act like intelligent teammates, streamlining workflows, flagging risks, and guiding pricing decisions in real time.
We will cover:
• What Agentic AI is, and how it differs from traditional chatbots or automation
• How B2B pricing teams can apply agents to detect margin risks and act on live market data
• Why this shift matters now, in a climate of rapid market swings and economic pressure
• How agentic AI can give every pricing analyst the speed, insight, and support to thrive in volatile markets
Speakers:
• Kaavya Muralidhar, Sr. Product Manager, PROS
• Eldho Kuriakose, Senior Director, Product Management, PROS
Full Transcript
Good afternoon, everyone, and welcome to another PPS webinar. We are so excited to have you with us today for this webinar on moving from ChatGPT to agentic AI and equipping your B2B team to outsmart market volatility. As pricing professionals, you know how fast markets can shift and how critical it is to stay agile, accurate, and confident in your decisions. That’s where agentic AI comes in. Built on the same foundations as ChatGPT, these AI-powered systems are redefining what’s possible for B2B pricing teams by acting as intelligent, always-on teammates.
…
Today, we’re joined by two incredible experts from PROS who will walk us through the what, why, and how of Agentic AI: Kaavya Muralidhar and Eldho Kuriakose.
How are the two of you today?
Very good.
We’re doing great. Very excited.
Thank you so much for being here. Kaavya is a senior product manager at PROS, leading the strategy and implementation of AI-based price optimization software used by some of the largest enterprises in the world. Leveraging her background at the intersection of AI, design, and business, Kaavya’s expertise lies in creating pricing software experiences that are seamless, delightful, and empower customers to excel financially through volatile market conditions. Joining her is Eldho, Senior Director of Product Management, who brings over twenty years of experience across SaaS, high-tech manufacturing, and retail.
He currently leads a dynamic team delivering cutting-edge solutions in rebates, price optimization, and CPQ. Eldho is known for his passion for solving the right problems, asking the right questions, and empowering winning teams. We’re so excited to have them here today and to hear what they have to share. So with that, Kaavya and Eldho, the stage is yours.
Alright. Thanks. Thanks, Alexis.
So, good morning or good afternoon, everyone. Really excited to be here today and talk about AI agents with my colleague, Eldho. Like Alexis said, over the last few years, I’ve had the opportunity to create AI-based price optimization products and, most recently, to actually build an AI agent that can support users in creating their own rebates, including choosing attainment criteria, incentives, etcetera. So I really enjoy diving deep into agentic AI and looking at how it can be powerful for B2B teams, especially in the current market.
So I’m joined here today with, Eldho, if you wanna share a little bit about yourself.
Sure. Thank you, everyone, and thank you, Kaavya.
Alexis shared a lot about me, so I won’t extend too much into that. I do wanna invite everyone here to join in. Even though this is a webinar, we are talking about it and engaging with each other as if it’s a podcast, and we’re gonna keep it fluid. So you’re all welcome to join the conversation, because this is an evolving space and we wanna hear your voice, just as I’m sure you wanna hear what we’re about to say. So with that, let’s go ahead and dive in.
Alright. So we’re gonna be covering quite a few topics today.
What we really wanna put forth, and we’ll talk a bit more about this later, is this notion Eldho and I have been talking about of AI agents as teammates: redefining how we actually work with AI, moving on from the more tool- or task-based approaches that we may have seen thus far, and looking at what it would mean if we could have additional teammates that are able to supplement and augment the work that we’re doing.
We’ll talk a little bit about what an AI agent is, just so we’re sharing some of the same vocabulary. And then, Eldho, do you wanna share more about the maturity model?
Of course. Yeah. You know, this is an evolving space, as I mentioned earlier, and it’s sometimes challenging to orient where you are relative to where everyone else is and where the technology is going. So we’re gonna provide some maturity model thinking.
We’d love to hear where you all feel like it’s going and where you think you fit into that. And then, as much as things are changing, certain things are also staying pretty stable as the right ingredients to be successful: having your data right, having the right people, having the right governance structure. So we’ll reflect on that a little bit.
And throughout this engagement, we welcome your questions, we welcome your comments, and we’ll actually be actively asking for some of your participation as well. So do engage that way, but we’ll also have a Q&A session at the very end.
Alright.
So we’re gonna actually kick things off today with a poll.
So we have this QR code. If you would like to use your phone, it’s gonna lead you to a Microsoft form that’s just gonna take a minute to fill out. But you can also use the link, which will be shared in the chat.
Great. And I’ll be monitoring the responses here. So let’s give ourselves maybe a minute or two. We’re at the six-minute mark, so let’s go to the eight-minute mark and see how the responses are shaping up.
K.
Got one response.
Number two.
K. Looks like it’s taking folks about two minutes to fill out the response, so that’s about the right time.
We got six responses so far.
Let’s give ourselves maybe another forty-five seconds, and then we’ll share what you all have put together.
Okay. I think it’s probably a good time to share. Let me go ahead and share what I’m seeing.
K.
There we go.
Great. So we have nine responses that came in.
And the first question was: what best describes your current usage? We forced one choice. So it looks like most folks on here are using it as a chat experience, either Claude or ChatGPT.
One person’s not using it very much at all. And then we’ve got some folks who are, further up along the chain of oh, we got one more coming in. Right?
That’s regularly using AI for content and analysis.
So good mix there, about thirty percent.
And then, we’ve got some advanced users here as well who are using AI tools, custom AI tools.
But nobody is using AI agents to complete routine tasks yet.
That’s good. I think that’s pretty representative of what we’ve seen with our customers.
The second question allowed multiple entries, so we expect many answers here.
What are the use cases that you expect to use AI for, or already use it for? So, a lot of content creation use cases here. That’s expected.
Second is micro-segmentation and price optimization. Glad to see that.
Then you have some of the monitoring. So roughly one person’s doing the monitoring.
And, this one is still very early, it seems.
And then setting up pricing strategy.
Good.
And then barriers.
What are the barriers? We allowed multiple choices here. So, strong concerns around safety and reliability are holding things back. Next is data quality.
This is super... yeah, go ahead, Kaavya.
I was gonna say, I think the interesting thing here is that this form was pretty open in terms of what kind of AI is being used.
I know we’ve had many pricing organizations using predictive models, neural networks, segmentation models, regression analysis, those kinds of AI tools, for things like price optimization, and then more recently, we’ve had LLMs being used a lot. So I’m really curious as a follow-up, or if you wanna enter it in the chat: for things like monitoring day-to-day market pricing, or designing long-term agreements, what forms of AI have you been using today?
Great.
Yeah. This is super insightful. It’s a good number of responses as well.
Yeah. So with that, let’s dive into what we mean by agentic AI and the various maturity models and so on. So, Kaavya, I’ll let you pick it back up on the share.
Yep. K.
So this is an interesting slide, you know. As Kaavya just mentioned, when people say AI, there are n different ways it can be interpreted. And especially with everything that’s happening, and the speed at which it’s happening, it’s important to be aligned on what the shared mental model is when we say AI.
So you have some cases, and I was very much in this camp about a year ago, where AI is considered a tool, you know. It does what I tell it. I need to be very specific with it. It’s good for very targeted tasks, maybe not basic, but targeted tasks.
And I’ve changed my posture on that over the last year or so.
Yeah. Kaavya, what are your thoughts on some of the other ones here?
Yeah.
I mean, just an additional thought on AI as a tool. I think something that we’ve seen a lot, and this is something I’ve seen myself do as well, is that I might use AI to try and perform a specific task, and maybe I’m seeing that it’s not performing the task as well or as efficiently as I would expect it to. Maybe something in the tone is not quite right, or some of the context is missing. And it’s pretty easy to just give up and say, okay, AI just doesn’t work for this.
And in some cases that’s true. I think being able to evaluate that as a human is really important and is a very fundamental part of the governance process. But at the same time, the mindset of thinking of AI as a teammate, and we’ll talk a lot more about this when we come to the maturity model, is actually thinking about: if your AI isn’t able to do the task to the quality that you expect, is it enabled to do so? And can you train it?
Can you enable it? Can you upskill it? And I think that’s one of the mindset shifts in thinking about AI as a teammate: there’s this notion of investment, enablement, and onboarding that comes into play, just as when you’re building a successful team or a company. A lot of it is about the native capability of who you’re hiring, but a lot of whether they can be successful or not depends on the ways that you invest in them and in making them successful.
So I think, for me, and for a lot of our internal teams as well as the companies that we work with, that’s been a pretty big mindset shift: thinking about what tools we can now equip AI with so that it’s able to do some of the things that maybe the same model might not have been able to do a year ago.
Yep. That’s a very good point, and I love the way you’re thinking about it, because it truly is a mind shift where you’re seeing the AI as a true peer. It has strengths it comes with, but there are enhancements, and bringing-it-into-the-fold kinds of activities that you would do with a regular employee, that you need to start thinking about when you start bringing AI to work.
Let’s also look at some of the more out-there mindsets that can be there. On the extreme end of this is AI as the overlord.
Admittedly, that’s a bit extreme, but there’s a lot of concern about AI replacing jobs, with the recent talk by the Amazon CEO. Those kinds of things can instill a fear-based, maybe defensive position.
And the truth here is that this is not magic. It’s not something that’s gonna solve every problem, and it’s not gonna replace human beings by far. That’s gonna be a big theme in our talk and in our conversation: there are a lot of human strengths that we need to bring, and only when you bring those together can you actually maximize differentiation.
And on the other end, yeah, Kaavya, do you wanna talk a little bit about some of the wizardry kinds of views of AI?
Yeah. We’ve seen that.
I don’t think I’ve seen a lot of people using AI as though it’s wizardry, but maybe more of a philosophical notion, or a sense that in the next year AI can be used in a way that has no oversight and can solve all these problems that we haven’t been able to solve as people. And I think that, to some extent, is a risk to the governance and safety structures that we’ll talk about towards the end.
I think like Eldho said, there is a lot that as people we bring to this equation of using AI.
We bring what I like to call pointiness, a strength of opinion that may be creative or may be skewed in one direction, and that’s actually really important to the success of companies, you know, to have a strong and perhaps unexpected point of view. And that’s something that AI doesn’t always bring to the table, given its inclination towards average behavior, averaging all the different opinions, words, and content that it has access to. So again, I think a teammate is where we really want to steer this: somebody who may have strengths that you don’t have, but also weaknesses that you don’t have, and that you would need to work together with to really get the benefit of the team as a whole.
Yep. Very well said.
So let’s talk a bit more about what an AI agent is, and then we’ll dive into the maturity model. We’ve been using this word quite a bit, and one of the things we wanted to do is differentiate: when we say AI agents, are we just talking about LLMs? Let’s try to make that a bit more clear. So, Eldho, what do you see as the difference?
Yeah. I think, you know, when you think about your familiar Claude or ChatGPT prompt that you engage with, that’s a very point-in-time engagement. Even if you build a beautiful prompt and it’s very deep, it doesn’t really connect to the wider set of tools and capabilities. It may only have the context that you give it in that prompt.
It doesn’t understand the wider context of the business, or of you as a person. So these are roughly six things that we see as differentiating what you’re familiar with as ChatGPT or Claude from what an agent really is. And there’s a rough organization where the items on the left are more near-term things that agents can do, and the things on the right are emergent things that we see in the next six months, maybe twelve months, that are gonna be very, very important.
Yeah. So some of the terms are, you know, continuous autonomy. So without having to prompt it each time, you’ve given it this kind of oversight responsibility to look for conditions that are loosely defined. And that’s a key sense here: we had capabilities to define rules very rigorously in the past and create alerts off of that, but here you can loosely define things and allow its own reasoning to create the right kind of scope for what to look for. That’s one aspect of this autonomy.
Yeah. Let’s talk about goals there, Kaavya.
Yeah. I think goals and autonomy are pretty related. The way that I like to think about it is that you’re moving on from providing step-by-step sequential instructions or a perfect decision tree, which are some of the things that you would do if you were trying to code a solution or do something in Excel, into providing a set of tools to act with, but also providing a goal to work towards. So for example, the AI agent that I built is an agent to support creating a rebate and optimizing some of its terms. There are a lot of tools that this agent has access to, including the ability to analyze data, the ability to go ahead and submit the rebate for approval, the ability to look up prior rebates, and the ability to look up something specific on the Internet.
But there isn’t a sequential instruction for exactly what to always look up, because the reality is that what to look up is gonna depend on the goals of the person creating the rebate. So this is where we had a space to actually talk to the agent about what the goal was, and the agent had access to all these tools that enabled it to access data, to act without prompting, to submit for approval, but it could choose when to use those tools, which tools to use, and when not to. So along with autonomy, I think goals are a really important aspect of making sure that your agent is using its autonomy in the direction that you want.
Very good. Yep. And then those goals really need tools to achieve them. Right? And so that brings in the third point: it might be a brilliant AI prodigy, but if you cannot give it access to the context of your business, your data, the functions you have, the APIs you have, or the APIs around the world, then it’s limited in its ability. There’s also a component of it knowing that these tools exist and how it should use them. And we’ll talk a little bit about how we set all of that up correctly in some subsequent slides.
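As a rough illustration of the goal-plus-tools idea described above, here is a minimal sketch of agent wiring in Python. All names and behaviors here are hypothetical placeholders, not the PROS implementation; in a real agent an LLM would decide which tools to call and in what order, while this sketch hard-codes one plausible sequence purely to show the shape.

```python
# A minimal sketch of "goal + tools" agent wiring (hypothetical names).
from dataclasses import dataclass
from typing import Callable, Dict


def analyze_sales_data(customer: str) -> str:
    # Placeholder: in practice this would query your transaction store.
    return f"12-month purchase history for {customer}"


def look_up_prior_rebates(customer: str) -> str:
    # Placeholder: fetch earlier rebate agreements for context.
    return f"2 prior rebate agreements found for {customer}"


def submit_for_approval(rebate_draft: str) -> str:
    # Placeholder: route the drafted rebate into the approval workflow.
    return f"Submitted for approval: {rebate_draft}"


@dataclass
class RebateAgent:
    goal: str                                  # e.g. "protect margin while growing volume"
    tools: Dict[str, Callable[[str], str]]     # tools the agent may choose to call

    def run(self, customer: str) -> str:
        # In a real agent, the model's reasoning would pick tools and ordering.
        context = self.tools["analyze_sales_data"](customer)
        history = self.tools["look_up_prior_rebates"](customer)
        draft = f"Rebate draft for {customer}, goal '{self.goal}'; {context}; {history}"
        return self.tools["submit_for_approval"](draft)


agent = RebateAgent(
    goal="maximize attainment while protecting margin",
    tools={
        "analyze_sales_data": analyze_sales_data,
        "look_up_prior_rebates": look_up_prior_rebates,
        "submit_for_approval": submit_for_approval,
    },
)
print(agent.run("Acme Distribution"))
```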
There are some components on the right which are more forward-looking, where not only can it loosely define conditions, but it can learn and adapt over time. And this is an area that’s, you know, evolving.
There are really interesting areas of AI kind of judging itself on whether it met those goals.
That’s an evolving space. There’s also the ability for it to explore new avenues. And this is a huge power: we call it hallucinating at times, and truly it is hallucinating in some cases, but when you wanna try and explore areas that haven’t been touched, AI is really good at that, especially when you push it to try some other things as well. Okay?
Oh, Kaavya, you’re on mute.
Sorry.
Yeah. So we’ve talked about what an AI agent is. We’ve talked about how it has the power to act autonomously and the power to use tools.
But there’s a pretty big range in how you can actually use AI within the context of a company, and specifically as a pricing team. And so, as we’ve been working with a lot of different pricing teams as well as our own teams, and looking at what this progression looks like, we came up with this maturity model. And we really based our maturity model on how you would upskill and think about a teammate as well. So there were two key axes that we wanted to focus on for today.
The first, on the y-axis, is enablement. Think of this as training, access to tools, access to data, availability of contextual information.
That’s what’s shown on the y-axis. So the higher up you are on the y-axis, the more enablement there is. And on the x-axis, you have role complexity, which is basically: what is the role that you’ve given this agent? What is its job? And how difficult or complex is that job?
So if we think about what actually falls on these axes, you can see that on enablement, you might start out with no enablement, and then as you move further up, you have tools that enable autonomous actions, so the ability to actually do things in your systems, or send an email, submit something for approval, etcetera.
And then for role complexity, you can think of the simplest role as something somebody might do when they’re just getting onboarded, or when they are still very new to a company or a field, where they’re executing on simple tasks that somebody else came up with. And then as the role gets more complex and the agent gets more mature, there’s the ability to do things like make recommendations based on what it’s observing, and then finally to actually coordinate outcomes across workflows and be, in a sense, a leader. So, yeah. Go ahead, Eldho.
No, that’s awesome. This is very intuitive, and I’m curious to talk about what the build-out of it looks like.
Yeah. So we’ll go through these levels in some of the next slides, but one of the interesting things here that we’ve talked about is that you can really see there’s this strong correlation between the x- and y-axes, where as your enablement increases, your role complexity also has the opportunity to increase. Yeah.
Just like in the case of a normal employee, so that that model continues to work really well.
Alright. So we’ll go through some of these levels, and definitely feel free to share in the chat if you have examples of how you’ve been using AI at any of them. And I think the important thing to note is that being at a particular level doesn’t mean the agent, or the AI that you’re using, is not valuable. For example, if you’re using it as an assistant, you could maybe be using it as an operator eventually, but there are valuable ways to use it at the assistant level too, and there are important things that need to be done at every level. So as you’re looking through this, try to understand, for the different ways that you’re using it, where it falls on the scale and how you can use it well at that level, and then we’ll talk about what it means to promote your agent, just like a career ladder for promoting a person.
Awesome.
Alright. So the first level is like a one-day intern. This is how I’ve often seen it used, and maybe how I started using AI tools as well, specifically LLMs and chatbots when they became more popular in the last couple of years, which is to make an ask like this: draft a customer-facing email that communicates a pricing adjustment effective July first, maintaining a tone of transparency. So essentially, this is equivalent to hiring somebody really smart, really prodigious, and capable, but you just hired them off the street, you didn’t give them any context beyond what’s in this text, into your business, into what you do, what’s valuable or important to you, what prior emails look like, and then you just ask them to complete the task.
Yeah. Go ahead.
Yeah. This is a very valuable piece even though it sounds simple, because I’ve used it a lot of times just for brainstorming. It doesn’t have any context, and sometimes not having context is good, because it provides a very fresh perspective and brings up things I wouldn’t have thought of, because maybe they wouldn’t make sense in our current organizational context or whatnot. But it really forces us to think outside the box, and we can have long conversations with it to just think through a space, or even just learn a specific piece of technology or whatnot. So it’s a very powerful thing, as simple as it is.
Yeah. Your point on conversations and brainstorming is really good, Eldho. I know, even in our work, we often send each other transcripts of our Claude conversations where we’ve thought through complex topics and strategies. And so I think that’s really valuable. If someone is capable and is able to reflect back to you what you may be thinking of in unique ways, that’s something someone could do without a lot of context.
Yeah. And we really need to build UX experiences. We’re product managers, so we wanna develop a good, intuitive product. And instead of working in PowerPoint or Axure or some of those tools, you just talk to Claude and say, here’s the use case, come up with the UI, and then we just iterate and bring in our context. So, yeah, very powerful.
Yeah. But the challenge here really is that this isn’t just the intern, but the one-day intern. One of the challenges is that there isn’t a lot of opportunity, when you’re using AI like this, for it to learn over time. If you’re starting a new chat, you might be just starting from scratch each time.
There are ways that you can have it learn over time, even in the context of using Claude or ChatGPT.
But there’s just very low context and few tools provided. It’s not gonna initiate without you initiating first, and it’s primarily used to complete a task that you’ve decided you need to complete. And so if you wanted to upskill an intern agent like that, I think a couple of ways to start out are: one, providing contextual information, so things like information documents.
Definitely don’t do this on the public versions of AI tools, because of data security. If you have an enterprise connection with an AI tool, that’s probably the most secure way to do it. But once you have access to something like that, you can provide more specific company information, things like internal policies, and have that data be available as tools so that your AI is able to always pull from it, as in the sketch below.
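Here is a minimal sketch of that first enablement step: wrapping a request with internal context before it goes to the model. The policy text, document names, and message format are assumptions for illustration, and, as noted above, anything sensitive should only go through an enterprise-licensed endpoint rather than a public chat tool.

```python
from typing import Dict, List

# Internal context the agent is "enabled" with. In practice this would come from
# your policy documents or a knowledge base; the text here is made up.
COMPANY_CONTEXT: Dict[str, str] = {
    "pricing_policy": "Price adjustments require 30 days' customer notice.",
    "tone_guide": "Be transparent, specific about effective dates, and thank the customer.",
}


def build_messages(task: str, context: Dict[str, str]) -> List[dict]:
    # Prepend the shared company context as a system message so every request
    # starts from the same background instead of a blank slate.
    context_text = "\n".join(f"{name}: {text}" for name, text in context.items())
    return [
        {"role": "system", "content": f"Company context:\n{context_text}"},
        {"role": "user", "content": task},
    ]


messages = build_messages(
    task=("Draft a customer-facing email that communicates a pricing adjustment "
          "effective July 1, maintaining a tone of transparency."),
    context=COMPANY_CONTEXT,
)
print(messages[0]["content"])
```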
And the second way is actually to upload data. So instead of just asking it to create content from scratch, you might be able to upload existing files. And that brings us to the second level, which is the assistant.
This level is similar to the intern, but there are pretty big differences in the amount of value it’s able to provide. A really good example of this is data analysis: uploading, for example, a spreadsheet, and then asking it to clean the data and highlight any discounts over thirty percent.
So this is something that LLMs are able to do pretty well. They’re able to execute on the data analysis that you ask of them, even write Python code that you can run separately, but they’re still waiting for you to ask each time, and waiting for you to provide that kind of input on what to do and how to do it, rather than really initiating on their own.
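Below is a sketch of the kind of Python an assistant-level LLM might hand back for that ask. The table and the column names (customer, list_price, net_price) are assumptions standing in for an uploaded spreadsheet.

```python
import pandas as pd

# Stand-in for the uploaded spreadsheet; columns are hypothetical.
df = pd.DataFrame({
    "customer":   [" acme corp ", "Globex", None, "Initech"],
    "list_price": [120.0, 200.0, None, 90.0],
    "net_price":  [75.0, 150.0, None, 85.0],
})

# Basic cleanup: drop fully empty rows and tidy up customer names.
df = df.dropna(how="all")
df["customer"] = df["customer"].str.strip().str.title()

# Flag discounts over 30% relative to list price.
df["discount_pct"] = (df["list_price"] - df["net_price"]) / df["list_price"] * 100
flagged = df[df["discount_pct"] > 30].sort_values("discount_pct", ascending=False)

print(flagged[["customer", "list_price", "net_price", "discount_pct"]])
```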
Yeah. There are some really powerful use cases here as well. I’ve built some data generation tools. Again, in our line of work, we’re creating products, but we need to test them with many, many vectors of scenarios.
And in the past, I’ve spent months building data generation tools. But recently, I was able to do something over a weekend where, with an Excel plug-in, it can build out x number of scenarios. And again, those are things where it’s scaling my capability a lot, but it’s not doing it on its own. It still needs to be asked, it still needs to be coded and explicitly stated to some degree, but it definitely frees you up to do a lot more than you were able to do in a given unit of time.
Yeah. I would say this one’s really valuable, and I’ve used AI quite a bit in this way. I think especially for pricing professionals, where we work a lot with data and data at scale, being able to do things like concatenate rows, or take something a little bit unstructured, like unclean data of company names, and figure out, hey, can you identify things that might be duplicates, where they’re spelled a little bit differently but they’re probably referencing the same customer, the same company. Those kinds of tasks that aren’t deterministic enough to easily program, but that an LLM can really pick up in the data and clean easily, add a huge amount of value at this level.
Yeah. I’d love for the group here listening to also chime in with your experiences, and the successes or failures that you’ve had. We’ve had some failures for sure as well. Maybe we’ll keep going and then talk about some of those if we have time, but please share yours in the chat so we can all learn as well.
Yeah. So we’ve talked about how this is valuable. One of the biggest ways to upskill at this level is to really address this part, the "if you ask" every time. How do we go from an agent, or an assistant, that only executes tasks when prompted to something that runs automatically?
And so the upskilling plan here would be to automate workflows and standardize inputs so the agent can handle tasks on a schedule or trigger. For example, it might run every night.
And this is what leads us to level three, the scout. You can see that the image we’ve used here is a smoke detector, because one of the really valuable additions at this level is something that is able to run automatically in the background without being asked or requested, and is able to monitor the conditions that you ask it to monitor and alert you in case there is something for you to look at.
So we’ve seen a few different examples of this. For example, Eldho, I know you’ve talked about the notion of a morning dashboard, or a coffee dashboard.
Mhmm. Yeah. That’s right. I think, you know, when things are changing very much in the wider world, it’s sometimes hard to specifically dictate what is important to look for.
Well, you could have done that before AI or agents came along, but now you’re able to give it a flavor of what it should consider interesting: what are the steady-state patterns, and what are deviations from those patterns? So you can be a little bit more loose in how you specify that.
You can specify it with natural language. And then the beauty of it also is that you can iterate your prompts over time without really having to bug your development team extensively. So there are some operational benefits to continually working with your AI to make it stronger, to Kaavya’s point about training and upskilling. That really starts to pay dividends here: things that you thought of last week or a couple of months ago come up again, and now you can keep building a repository of things to look for.
And then the AI also then can expand that scope over time.
Yeah. And on the things that it’s monitoring: I think one of the benefits of LLMs specifically, although AI agents aren’t necessarily only using LLMs, is that they can monitor unstructured data as well. So something like the news for a set of companies. If there are deals being negotiated, but you wanna be alerted in case there’s something going on with those organizations that may affect their willingness to pay, or may affect what’s going on with the deal, that’s a really good example of something that can be monitored.
Of course, there can also be monitors on things like transaction data, quote data, and pricing data, to notice whether there’s any unusual activity: is something getting approved that’s very different from the past, and so on.
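As a loose sketch of the plumbing side of a scout-style nightly monitor, here is a small example. The margin floor, the data, and the columns are all assumptions; in practice a scheduler would run this every night, and the LLM’s contribution would be the looser, natural-language judgment of what counts as unusual.

```python
import pandas as pd

MARGIN_FLOOR_PCT = 15.0  # assumed alert threshold agreed with the team

# Stand-in for last night's export of approved quotes; in practice this would
# come from your quoting system or a file dropped by a scheduled job.
quotes = pd.DataFrame({
    "quote_id": ["Q-101", "Q-102"],
    "customer": ["Acme Distribution", "Globex"],
    "net_price": [100.0, 80.0],
    "cost": [70.0, 74.0],
})


def check_overnight(df: pd.DataFrame) -> list[str]:
    # Plain-code monitoring: compute margins and collect alerts.
    alerts = []
    for _, q in df.iterrows():
        margin_pct = (q["net_price"] - q["cost"]) / q["net_price"] * 100
        if margin_pct < MARGIN_FLOOR_PCT:
            alerts.append(
                f"Quote {q['quote_id']} for {q['customer']} approved at "
                f"{margin_pct:.1f}% margin (floor {MARGIN_FLOOR_PCT}%)."
            )
    return alerts


# A nightly scheduler (e.g. cron) would call this and route alerts to a morning
# dashboard, email, or chat channel.
for alert in check_overnight(quotes):
    print(alert)
```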
Kaavya, we have a question from Steve Wilkins about what tools we suggest for level two that would comply with IT requirements.
I’ll say that at PROS, we use Microsoft Copilot, and we have a license with Microsoft where we can feed it what we need and it stays within that domain. That’s typically the approach our customers also use when they wanna use LLMs: they contract with somebody like Microsoft to ensure that it stays within the right boundaries.
Yeah. And I think most of the AI tools that have a public version also have an enterprise version. I know Microsoft Copilot uses the same underlying models as ChatGPT, but OpenAI has their own enterprise licenses as well, same with Anthropic for Claude.
And you should definitely talk to your IT teams about this, but a couple of the key differences between using an LLM publicly versus through an enterprise license come down to data security. Especially if you’re uploading sensitive data, you really wanna make sure you have some kind of license such that that data isn’t going to go back into training the model that’s being used.
And so that’s the reason at PROS we use the enterprise license, even for just handling internal company strategy, things like that.
Okay. Thank you.
Alright.
When you think about the scout, the smoke detector is a good example. Like a smoke detector, at this level it may alert you to the problem, but it cannot fix the problem. It can maybe recommend an action, but it may not have the power to take any actions.
And this is the area where you can start upskilling: start defining when the agent should escalate versus act. Are there some actions you feel comfortable having it take safely, given certain conditions?
And are you able to give it the ability to take those actions? Again, this doesn’t necessarily have to leave the human out of the loop completely. It could be something that still needs an approval process, but at least the change has gone through. And so this is where we get to the fourth level, the operator: a trusted executor that takes defined actions inside your systems. So, again, here there’s no prompt needed.
It might be triggered by some kind of condition, or it might just be triggered on a schedule.
And so in this example, if there’s a deal approval system, when all pricing conditions are met, the output might be that all thresholds are met, and then an auto-response is sent to sales, because these thresholds are ones that were agreed upon by the person and the agent when setting it up. So it doesn’t need the person to look it over if all the conditions are indeed being met. And then there might be another action of logging to a pricing journal or some kind of tracker.
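Here is a minimal sketch of that operator pattern. The thresholds are hypothetical, and the notification and journaling functions are placeholders standing in for real CRM and audit integrations; the key point is that the agreed conditions gate whether the agent acts or escalates to a human.

```python
from dataclasses import dataclass


@dataclass
class Deal:
    deal_id: str
    margin_pct: float
    discount_pct: float
    volume: int


# Thresholds agreed up front between the analyst and the agent (assumed values).
THRESHOLDS = {"min_margin_pct": 18.0, "max_discount_pct": 25.0, "min_volume": 100}


def notify_sales(deal: Deal, message: str) -> str:
    return f"[to sales, re {deal.deal_id}] {message}"      # placeholder for email/CRM


def log_to_pricing_journal(deal: Deal, decision: str) -> None:
    print(f"journal: {deal.deal_id} -> {decision}")         # placeholder for audit log


def handle_deal(deal: Deal) -> str:
    checks = [
        deal.margin_pct >= THRESHOLDS["min_margin_pct"],
        deal.discount_pct <= THRESHOLDS["max_discount_pct"],
        deal.volume >= THRESHOLDS["min_volume"],
    ]
    if all(checks):
        log_to_pricing_journal(deal, decision="auto-approved")
        return notify_sales(deal, "All pricing thresholds met; deal auto-approved.")
    log_to_pricing_journal(deal, decision="escalated")
    return notify_sales(deal, "Deal outside agreed thresholds; escalated for review.")


print(handle_deal(Deal("D-1042", margin_pct=21.3, discount_pct=19.0, volume=250)))
```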
So yeah, Eldho, do you have any other use cases of using AI as an operator?
Yeah.
I mean, this is where how you marry it with your governance and your approval workflows really starts to be critical.
You know, on the scout side, PROS, for example, has a price quality agent that runs on a nightly basis and will identify those kinds of margin issues or out-of-portfolio pricing situations. It can highlight them, and then at some point we can say certain conditions are auto-approved to be corrected, and the corrective action is this. So that’s an area we’ll be evolving that particular capability into.
And that’s similarly what we see people doing: once you have the scout working, you can start deciding what the corrective action is in each case, and then letting that happen on its own.
So moving up from here, we’re still talking about an AI agent functioning largely independently. Right? This is kind of like a really valuable individual contributor who works very independently. They may talk to you, but they’re not talking to anybody other than you.
If we were thinking of the AI as a person, that’s kind of what it would embody at this stage.
So the upskilling plan here is really to expand its visibility and start giving it access to other systems, other people, and even other agents.
And this is when we really start seeing agentic workflows act as an orchestrator, which is, in a sense, not just directing its own work, but able to ask other agents questions, interact with other agents, or delegate work to other agents when it sees that another agent is an expert on a certain topic.
So for example, if there’s a deal approved with unusually low margin, this agent has picked it up.
It may be able to talk to other agents, saying, hey, we need to update the forecast.
There’s another agent that’s an expert and whose job it is to take care of forecasting.
So let’s let that agent know there’s been this unusual activity and it may need to look into it. And then you may also need to notify sales. So again, there may be some collaboration, coordination, and communication happening here.
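Below is a rough sketch of that kind of agent-to-agent hand-off. The agent names, messages, and orchestrator wiring are illustrative assumptions, not a specific product’s implementation; the idea is simply that the orchestrator delegates to specialists rather than owning every task itself.

```python
from typing import Protocol


class Agent(Protocol):
    name: str
    def handle(self, message: dict) -> str: ...


class ForecastAgent:
    name = "forecasting"
    def handle(self, message: dict) -> str:
        return f"Forecast refresh queued for {message['customer']} ({message['reason']})."


class SalesNotifierAgent:
    name = "sales_notifier"
    def handle(self, message: dict) -> str:
        return f"Sales notified: deal {message['deal_id']} approved below usual margin."


class Orchestrator:
    def __init__(self, specialists: dict[str, Agent]):
        self.specialists = specialists

    def on_low_margin_deal(self, deal_id: str, customer: str) -> list[str]:
        # Delegate the follow-ups to whichever specialists the event concerns;
        # the orchestrator coordinates outcomes across workflows.
        event = {"deal_id": deal_id, "customer": customer, "reason": "unusually low margin"}
        return [agent.handle(event) for agent in self.specialists.values()]


orchestrator = Orchestrator({"forecasting": ForecastAgent(), "sales": SalesNotifierAgent()})
for line in orchestrator.on_low_margin_deal("D-2210", "Acme Distribution"):
    print(line)
```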
So this is a really big shift that we are seeing in the agentic world today: agent-to-agent collaboration. And it’s a very natural and valuable next step, because, and you’ll see this if you ever have a chance to build your own AI agents, it is much more valuable to build something that has a specific area of expertise and a specific goal, rather than one agent that’s trying to take care of everything all by itself. So just like on a human team, delegation, and being able to talk to others about their area of expertise while you work on yours, applies to how you build agents as well.
And so a pretty advanced and mature part of this workflow, which is why it’s at level five, is having that agent-to-agent collaboration.
Absolutely. Yeah. I think we can talk about some interesting examples. Like, if you’re involved in rebates or in the supply chain world, where you have ship-and-debit situations.
Maybe you’re a distributor and you’re being pushed on price, and you’re having to price well below what you bought the product at.
And therefore, you need to open a negotiation with your manufacturer to recover some of that. Those things happen today, but they all happen very slowly over the course of weeks: the negotiation is happening with the salesperson, the salesperson then talks to the rebate team to go create a rebate and negotiate with the manufacturer. There’s a whole chain that stands up, and it can take a week or two to get all of that figured out.
You can handle all of that where the sales agent actually detects the margin issue, then talks to the rebate agent and says, go create these rebates to recover the lost margin, and then make that part of this negotiation agreement, such that the agreement will not get approved unless the manufacturer also signs up to make the distributor whole. So there are some really interesting chains of workflows that can come together, and something that took weeks can happen in maybe a day or two, bringing all that visibility and connectivity together.
It’s a very powerful thing, and it’s already starting to happen with these kinds of systems.
And so, like you said, the power of the orchestrator is really immense, and taking advantage of it can be a huge differentiator.
At the same time, I think it’s important not to slip into AI as wizardry or overlord here, by being real about what it can and cannot do. So we talked about what it can do: coordinate across agent systems and drive outcomes, which is huge.
What an orchestrator, at least at the levels we’re seeing today, still cannot do is have a personal and unique strong opinion, drive overarching strategy and vision, or have a point of view that moves things in a certain direction or changes your philosophy for how you want to respond to certain things. So, again, this is where we really see that the orchestrator might be a really valuable member of your team to drive outcomes, but it’s not your whole team.
And this, I think, leads pretty well into the next part. We’ve talked about these five levels, and we’ve talked about how you can use them to get value today.
We still welcome your comments in the chat on how you’re using any of these levels, what you’ve been doing, and any interesting use cases you’d like to share.
But we also wanna talk about what it looks like to use it successfully. What are the key ingredients for success at any of these levels? So, Eldho, do you wanna share a bit more about that?
Yeah. Thank you, Kaavya. So in the next two slides, we’re gonna talk about what really makes this successful.
So first, like any project, data and documentation are important, but they’re probably ten times more important in what we’re trying to achieve with agents. The more descriptive you are with your column names and your metadata, how well you’ve documented your business context, how well you’ve laid out the synonyms of different things, and how well you’ve documented your processes, the quicker that agent will return value. The same is true for bringing a new person onto your team. Right? The better your onboarding and your documentation, the quicker they can learn, find their way around the place, and add value.
So if you do those things, then you’re able to really create that shift where you’re not just focusing on creating reports, but creating those reports for AI consumption and for AI to do something with.
You’re shifting focus away from the technical minutiae of how exactly that SQL table is set up, toward making sure the table is described correctly and the metadata is correct, so that the business needs can be converted into AI requirements by an agent.
And then you’re shifting away from just reading the news about what happened, toward looking at what the pattern changes are. So you’re operating at the level of the second derivative, seeing what the changes are, rather than looking at every little bit of noise.
And then lastly, an interesting point here is that instead of having to focus on one hundred percent clean data, which was the case before we had some of these fuzzy processing capabilities, we can have a graduated set of data hygiene that maps to certain use cases, and make sure the AI agent can work with not-so-perfect data in places where the decisions it’s making are not very surgical decisions.
And so thinking about data hygiene, and what use cases you’re gonna use each level of hygiene for, is gonna be very valuable.
Yeah.
So, you know, as a pricing team, this is one of the things that we really recommend doing proactively, even if your organization isn’t ready to use an AI agent, or to use AI on your pricing data, today.
This definitely is a shift that’s happening, and you will get to a point when you’re ready.
But to be successful when that happens, something that’s gonna be really valuable to have done proactively is this data and documentation. So, for example, if you have spreadsheets with hundreds of obscure column names, having somewhere that describes what those columns are and what they mean is gonna mean that ultimately, when a person or an LLM looks at this document, even a year from now or further in the future, it has the information needed to make successful use of it. It doesn’t have to guess.
And you’re not relying on knowledge that lives purely inside the head of one of your experts.
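One lightweight way to do this proactively is a machine-readable data dictionary kept alongside the data. The column names, descriptions, and structure below are hypothetical; the point is that a person, or an LLM, encountering the data later does not have to guess what an obscure column means.

```python
# A minimal sketch of a data dictionary (hypothetical columns and descriptions).
DATA_DICTIONARY = {
    "CUST_ID": {"description": "Internal customer identifier", "synonyms": ["account id"]},
    "NSR_LC":  {"description": "Net sales revenue, local currency", "unit": "local currency"},
    "GM_PCT":  {"description": "Gross margin as a percentage of net price", "unit": "%"},
    "EFF_DT":  {"description": "Date the quoted price takes effect", "format": "YYYY-MM-DD"},
}


def describe(column: str) -> str:
    # Look a column up; anything undocumented is flagged rather than guessed at.
    meta = DATA_DICTIONARY.get(column)
    return f"{column}: {meta['description']}" if meta else f"{column}: undocumented"


for col in ("NSR_LC", "MYSTERY_COL"):
    print(describe(col))
```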
Very good.
Yeah.
The next point here is that data is great, but at the end of the day, this is a huge cultural shift. So you need to understand how people and AI are gonna work together.
And it’s also important to realize the skill sets that are needed for people to work with AI. I’ve tried to lay out what I personally believe are the strong skill sets. There’s some sense that maybe detail orientation is not so critical because AI will take care of the details, but it’s interesting that we now have people counting the number of fingers in a picture to make sure it’s not an AI-generated picture. So attention to detail, I think, becomes a very valuable asset for checking the work, and personally I think there’s a higher premium on it now.
Creativity, of course, and driving for results: as Kaavya said, AI doesn’t have that instinct to perform, that sense of self-worth associated with the work, maybe. So that drive for results is still a human characteristic, along with the growth mindset and, of course, vision and purpose. Having the right human characteristics is more important now than ever, so let’s make sure we focus on that.
When we shift to the governance part, there’s soft and hard governance. There’s the soft side, which you can build with your prompts, with your nudges, maybe frameworks for how to think about a problem, making sure checklists are covered, feedback loops, etcetera.
Those are the kinds of things you can manage. But then there’s, of course, hard governance, which is approval workflows and audit practices. Those are things you already have in the system today, and if not, you should have them anyway before agents come in. And there’s some interesting overlap between the current world and the agentic world.
When you look at accountability and culture, those are evolving spaces. What do you do when an agent makes a mistake? What do you do when a self-driving car makes a mistake? Who owns that accountability?
How does the broader culture evolve around that? These are conversations to engage in with confidence and honesty, and let them evolve, with humans being good participants in that conversation.
Yeah.
Alright. So we’ve come to the end.
Some next steps here. Number one, learning and education is really valuable, so we definitely recommend doing an AI agent course.
And, Eldho, you actually, I think, shared a Forbes article with multiple courses that people can take.
Right?
Mhmm. Yep.
Yeah. And second, and Eldho, I’m curious to hear more about this, but I absolutely agree with it as a learning step: build something basic to understand the capabilities and limitations.
I think, for us, actually going through and building an AI agent was the most valuable learning experience, being able to envision what’s possible, what’s not possible quite yet, and where this adds the most value. So, Eldho, if someone listening wanted to go ahead and build something, where can you build an AI agent? Where should they start?
There’s a whole array of them. There are open frameworks like LangFlow and LangChain that you could build with. And of course, it’s all blurring. Right? You can use something like Cursor or Windsurf, which has its own agent, or at least a chat experience, to create an app.
So maybe one thing to take away is that none of these categories are particularly long-standing; they’re all blurring into each other. So trying to build an app for yourself is maybe a good exercise in itself.
But also look at all of the processes you go through. And Microsoft has some really good options with the Power Platform; if you’re in the Microsoft world, you can start to build some agents that work with Copilot.
Some very basic things are very available right there.
K.
Alright. And then we have number three, which we’ve already talked about a little bit: proactively get your metadata organized, and explore a combination of off-the-shelf and in-house capabilities, depending on what’s available to you. Like we said at number two, building an AI agent might be the learning step you wanna take, or you might just use an AI agent or an agentic workflow that exists, like Windsurf, and see what you can do with it. Either of those paths, I think, is really valuable for learning what’s possible.
We have just a few minutes left, but we’ll stay on if anyone has questions you want to drop in the chat. And Alexis will, I think, also drop our LinkedIn profiles in the chat. So if you think of any questions afterwards, or wanna reach out and talk to us about this topic, we’re always really happy to.
And, yeah. Thanks everyone for attending.
Thank you, Natalie. Thank you, everyone. Thanks, Alex.
Of course. Thank you both so much for joining us today. This was an incredibly forward thinking presentation. I think it’ll be so beneficial to everyone who’s had the opportunity to join and anyone who watches the replay later. So truly appreciate the time and effort you put into this.
I have dropped your LinkedIn profiles into the chat. So, like Kaavya said, if you have any other questions, if any comments come up, or if you’d just like to talk more about this content or connect with them, please copy and paste those links and connect with them on LinkedIn.
I don’t see any other questions in the chat right now, but, certainly, the floor is always open. If questions come to me, I will pass them over to the two of you.
But I think this was great today. Is there anything else either of you would like to share?
No. That’s it.
We’re good.
Thanks, everyone.
Thank you.
Alright. Thank you all. As always, we appreciate everybody for being here today. Be sure to stay connected with the Professional Pricing Society to keep up with any forthcoming webinars, other events, anything like that. If you’ve got any questions for us, please be sure to send us a message. Thank you both. Everyone have a great day, and we will speak with you all soon.