Jaya Gupta and Ashu Garg from Foundation Capital won the enterprise technology internet in December with their article AI’s Trillion-Dollar Opportunity: Context Graphs. It argued that:
Traditional enterprise applications (and systems of record) only capture decisions, not the rationale for decisions
Agentic applications will benefit from decision traces -- “the exceptions, overrides, precedents, and cross-system context that currently live in Slack threads, deal desk conversations, escalation calls, and people’s heads” -- that drive decisions
Giving agents access to decision traces, stored in a context graph, allows them to make better decisions on a broader set of topics by drawing on the implicit knowledge locked up in your organization
Agents can build the context graph dynamically, without a pre-defined ontology
Kellblog predicted that context graphs would storm the market in 2026. STAC Research said that 2026 would be the year of the context graph. Flexrule countered that context graphs confuse memory with meaning.
Jaya and I sat down for a discussion about the article. Here were my takeaways:
The concept of a context graph (and especially the decision trace) is an important contribution to the idea that graphs provide more flexibility than other mechanisms in modeling a business domain
There may be more impact in B2B markets than B2C markets -- there’s more tacit knowledge and less rule-based decision-making
Ideally you’d ingest data from email and Slack to capture decision traces, but that could make employees uncomfortable; privacy will be essential here
Instrumenting agents to capture their deliberations may be an important source of insight
Ontology creation will be a dialectic. Experts will provide a starting point. Capturing data in real time will evolve the ontology so that it can accommodate new data -- and experts will tune it further
Traditional enterprise software may not be dead yet -- some of them have large customer success and implementation teams that have insight into decision traces themselves
James Kaplan: Hi there. This is James Kaplan. Welcome to our second video podcast for Prosaic Times. We are here with Jaya Gupta. Did I pronounce that okay?
Jaya Gupta: Close.
James Kaplan: Jaya Gupta, excuse me, from Foundation Capital, and we’re going to talk about context graphs and decision traces and what that means for the future of the software business and the future of the enterprise application business.
How does that sound?
Jaya Gupta: I’m excited. Let’s do it.
James Kaplan: Do you want to just briefly introduce yourself?
Jaya Gupta: Yeah, so my name’s Jaya. I’m a partner at Foundation Capital. A little bit on Foundation Capital: we partner with technical founders from day zero to build enduring companies, and we believe that context graphs are a trillion-dollar opportunity in AI -- the layer that captures how decisions get made and turns reasoning into competitive advantage. Prior to Foundation I spent some time at McKinsey, and I’m super excited to be doing this podcast.
James Kaplan: And you’re talking to us from the Bay Area today.
Jaya Gupta: Talking from the Bay Area. I’m based in San Francisco.
James Kaplan: Okay. No snow on the ground,
Jaya Gupta: No snow on the ground.
James Kaplan: no snow on the ground, no ice. Everyone’s able to get to the office.
Jaya Gupta: Exactly.
James Kaplan: Exciting. Very different experience than we’ve had here on the East Coast the past couple of days. So, congratulations. I think you won a significant part of the internet in December with your article about context graphs and decision traces.
I certainly found it super interesting, always being interested in graphs, but I was wondering if you could just describe a little bit of that article for us. How’s that sound?
Jaya Gupta: Absolutely. So I think if you take a step back: 2025 was supposed to be the year of AI agents, and I think the models did actually get a lot better.
But in enterprises, agents still don’t act the way you’d expect them to. And I think the reason’s actually pretty simple.
Agents can read data and take action, but they don’t know why decisions get made. And that reasoning is what we call decision traces. They are scattered across tools, buried in Slack or other communication platforms.
And sometimes it’s never even recorded. So we think that the winners will be the companies that capture those traces and turn them into context graphs.
James Kaplan: Give me an example. What is a decision trace?
Jaya Gupta: It’s a great question. Take Player Zero: they do incident debugging and solve technical support tickets. A decision trace is really an example of how a specific decision was made: what inputs were considered, what rules were applied, what exceptions were granted, who approved, and what the outcome was. In Player Zero’s case, they’re creating a decision trace by taking examples of the incidents coming in from PagerDuty, the open escalations in Zendesk, examples of prior renewals, and stitching that data together to understand what a person or employee decided with different pieces of data, and why.
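The anatomy Jaya describes -- inputs, rules, exceptions, approver, outcome -- can be sketched as a simple record. This is a minimal illustration in Python; the field names and example values are hypothetical, not Player Zero’s actual schema:

```python
from dataclasses import dataclass

@dataclass
class DecisionTrace:
    """One recorded decision: what was considered, what was done, and why."""
    decision: str             # e.g. "roll back last deploy"
    inputs: list[str]         # evidence consulted (tickets, dashboards, logs)
    rules_applied: list[str]  # policies or runbooks invoked
    exceptions: list[str]     # deviations from the usual rule
    approver: str             # who signed off
    outcome: str              # what happened as a result

# A trace stitched together from PagerDuty and Zendesk, as in the
# Player Zero example (source systems from the conversation; values invented):
trace = DecisionTrace(
    decision="roll back last deploy",
    inputs=["PagerDuty incident #4412", "Zendesk escalation #889"],
    rules_applied=["runbook: roll back on error-rate spike"],
    exceptions=["skipped canary step, approved by on-call lead"],
    approver="on-call lead",
    outcome="error rate returned to baseline",
)
```

Stitching records like these across PagerDuty, Zendesk, and the CRM is what turns isolated system states into a replayable history of how decisions actually got made.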
James Kaplan: So in this case we’re talking about, to use an old-fashioned term, IT service management -- specifically incident management. You have all sorts of decisions there. Why is something a severity one versus a severity two? Why do you take down a particular system?
Why do you roll back a change? What’s some of the data you would look at in making a change, and what are some of the decision traces, to your thinking, that lead to decisions like those?
Jaya Gupta: Yeah, that’s a good question. I think in every single application it’s going to look a little bit different.
James Kaplan: Let’s, let’s use this example. Let’s use the incident management example, right?
Jaya Gupta: Let’s do that. On Player Zero, for an incident ticket, it would be: what did the human decide when there’s an incident, and what are the next actions that they took?
Or, based on prior incidents, what did the human decide to change? Or what did they Slack to someone, what did they communicate? They might have communicated an exception for a certain incident. So it’s kind of replaying that past history.
James Kaplan: Right. So the decision could be where an incident gets routed. A decision could be whether to reboot a server. It could be one of probably hundreds of different things. And if I’m understanding the logic here, you would have an agent look at a combination of contextual data, which would be environmental data about the system, about the application.
Logs, what have you, as well as some of the Slack information. Because a particular person handling an incident may very well have Slacked the expert on this particular technology or domain saying, Hey, I’m thinking about whether to reboot this server. Do you think that’s a good idea or not a good idea?
Is that, is that the gist of it?
Jaya Gupta: Yes, exactly. And it’s a little bit of figuring out what was actually true at that time, exactly as you mentioned. What was the evidence that was consulted? Was it tickets? Was it dashboards? Was it different links? What were the constraints invoked -- run books, in this example? The rationale, which might be, as you said, in Slack -- in the future it could be an agent-generated justification, or a human note. And the action taken, as well as the approver chain.
So I think there’s even a lot to learn from who you communicated with -- in certain incidents you talk to certain people who have that tribal knowledge.
James Kaplan: You said something very interesting. It could be the deliberations of the agent. So one of the things I read you saying is this might not just be the Slack messages between humans. This might also be the deliberations of various agents, or the communications among various agents as they’ve made decisions in the past.
Is that, is that correct?
Jaya Gupta: Yes, that’s correct. I think that’s going to be the future state. Enterprises are probably far away from that world, but I do think it will become reality eventually.
James Kaplan: Well, I’ve been building something in Cursor, and you watch the deliberations of the agent in the chat window there, and there’s a lot of information that could potentially be captured and used to determine the course of future events.
Is that a fair way of thinking about it?
Jaya Gupta: Exactly. Have you been using Cursor?
James Kaplan: Yes, I have. I’ve gotten addicted to Cursor.
Jaya Gupta: I love that thing.
James Kaplan: Yes. Exactly. It’s very odd for those of us who you know, haven’t been doing frontline software development in many, many years. But the world has indeed changed.
Jaya Gupta: Exactly. We’re all software engineers now.
James Kaplan: There were two things I thought were really interesting. A number of people have talked about making decisions based on environmental information. The idea of the decision trace is one I certainly had not seen before.
And there’s two parts of it, I suppose. One is your point about all the Slack messages and email messages and instant messages. And the second is the agent deliberations. How much of a tension do you think there will be between questions of privacy and the desire to really get at how people make decisions?
Jaya Gupta: That’s a great question. I think that’s actually going to be one of the biggest roadblocks, because decision traces can leak judgment patterns. The example I like to use: let’s just say I work at some random law firm and I’m
James Kaplan: right?
Jaya Gupta: working across other clients. And let’s just say my client was some big manufacturer of some beverage. I’m a first-year associate at this law firm, I’m on a different case for a different client, and I want to query: what did that client do?
What did another associate, or another lawyer, do on another project? Is there a way? I think people are going to have to solve permissions
at inference time as well as at retrieval time. And so I think that’s going to be one of the hardest technical challenges,
and why I think context graphs won’t come alive until we see solutions that can solve the security issue. It’s going to be a really hard one, and in industries like legal and consulting, anything with client services, it’s going to be tough to implement.
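The two checkpoints Jaya names -- retrieval time and inference time -- can be pictured with a small sketch. This is an illustrative toy under an assumed client-matter permission model, not any real product’s design:

```python
def retrieve(traces: list[dict], user: dict) -> list[dict]:
    # Retrieval-time check: only surface traces from matters the user is staffed on
    return [t for t in traces if t["matter"] in user["matters"]]

def build_context(retrieved: list[dict], user: dict) -> str:
    # Inference-time check: re-verify just before anything enters the model's
    # context window, so a trace that slipped past retrieval still can't leak
    safe = [t for t in retrieved if t["matter"] in user["matters"]]
    return "\n".join(t["rationale"] for t in safe)

# Hypothetical decision traces from two different client matters
traces = [
    {"matter": "client-A", "rationale": "Settled early, citing precedent X."},
    {"matter": "client-B", "rationale": "Litigated; client tolerated high risk."},
]
first_year_associate = {"matters": {"client-A"}}
context = build_context(retrieve(traces, first_year_associate), first_year_associate)
# Only the client-A rationale survives both checks
```

The point of enforcing the check twice is defense in depth: even if a retrieval index is stale or misconfigured, the inference-time gate keeps another client’s judgment patterns out of the model’s context.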
James Kaplan: Yeah. Well, actually, it’s a great point. There are a couple of different aspects of this. There’s, for want of a better term, a data segregation issue, right? Or a conflict issue, which isn’t just in professional services. Contract manufacturing has tremendous issues around that.
I was thinking a little bit in terms of employee privacy or employee trepidations. Okay, let me give you an example. Let’s go back to incident management. Say I send a Slack message to Joe saying, you know, Frank suggested I reboot the server and do X, Y, or Z, and Joe Slacks back to me saying, don’t listen to Frank. Frank’s an idiot on these issues. Listen to Sally.
That’s actually valuable information, or it may indeed be valuable information, because there may be people on the team who are very smart in domains A, B, and C, but tend to get over their skis in domain D. That’s part of working in an organization: knowing which people are good at what.
But that can be very uncomfortable if we’re capturing that in any sort of systematic way.
Jaya Gupta: Absolutely. And I think you’re spot on, and I think this is probably one of the big reasons why decision traces are going to take a while to work.
Because, as you’re saying, your best employees want to be the best employees.
And I think that’s also why it really, really hurts when your best employee leaves and you can’t find a replacement. Nor can anyone else fill their shoes, because they have all this rationale.
James Kaplan: Tacit knowledge, right?
Jaya Gupta: Exactly, spot on. But I think one of the key differences here is that context graphs, over time, are kind of a byproduct of how the agents work.
So when the AI agent solves a problem, it’s naturally traversing the organizational state: pulling data from the CRM, checking the incident history in PagerDuty, looking at the support thread in Zendesk, or whatever people use these days.
That traversal -- what it’s touching, in what order, to solve what problem -- is a sample of how an organization actually works.
So I think they emerge from these agent trajectories over time, and that’s maybe the difference. I think you’ll still have to model the entities upfront to some degree, but there will be applications that emerge where you’ll have to do less of that -- maybe a behavioral ontology.
James Kaplan: Well, it seemed to me that the abstraction here becomes incredibly important. You don’t want to be telling people, Joe said to reboot the server, don’t listen to Joe, he doesn’t know what he’s talking about on this issue. What you want to say is: you don’t reboot the server in these circumstances.
Right? The first version is not going to work. And that’s at least an analytically solvable problem, if one of a certain complexity when you’re doing it at scale. Right?
Jaya Gupta: Yeah.
James Kaplan: And does this work just as well outside of technology domains? Does this work for things like pricing and discounting and so forth?
Jaya Gupta: It’s a good question, and one that we are actually thinking through as well.
So I think there are two things it’ll depend on, and you’re kind of also hinting at my next post: what are the companies, and what are the features of the problems?
But I think some of it is going to be a data-maturity question of how ready organizations are. There are many enterprises that reached out, that I talked to, that are far behind this because they don’t even have something like Slack, potentially.
James Kaplan: Right. Yeah.
Jaya Gupta: Most of their decisions are made at steak dinners.
James Kaplan: Mm-hmm.
Jaya Gupta: In some of those industries, I think it will look a little bit different.
And the second piece is pricing, which is a great question.
I think it depends on the pricing and the industry, but in software, at least, I think it can be done.
And the reason being: let’s just say your CRM has a deal that closed at a 20 percent discount. That’s the state.
But why did that discount get approved? You can track those actions: someone pulled maybe three recent service outages from an incident log, they found a reference to a similar exception the VP made last quarter. So I think you can actually track back the reasoning chain.
James Kaplan: And you want to tease apart the circumstances where you give the discount because the sales guy plays golf with the appropriate person at the customer, versus you gave the discount because there were a few service outages in the last quarter and you need to buy back some goodwill.
One hypothesis I have is this will be more relevant in B2B markets than in B2C markets, right? Because in B2C markets, it’s simpler. Traditional machine learning analytics have progressed further. More of the data is structured. There are fewer complicated decision traces around who gets a discount for consumer auto insurance.
Right? In consumer auto insurance, they look at me and say: I’m a 55-year-old man who lives in New York City and drives a station wagon. Boom. Here’s the appropriate price, right?
Jaya Gupta: Yeah, and what you’re getting at here is actually super interesting, because I think decision traces in B2C have always existed. Think about Netflix.
James Kaplan: Yeah.
Jaya Gupta: Amazon, TikTok, Google know when we click anything, or what made us pause.
James Kaplan: Yeah, but the decision traces are all structured information.
Jaya Gupta: Exactly, exactly. And I think the highest-value judgment in B2B actually lives in what we like to call dark surfaces.
James Kaplan: Yep.
Jaya Gupta: Inboxes, side debates, unwritten precedent. And so there wasn’t anything to compound. But when you have agents, you have decision surfaces, I guess, that you can now instrument.
James Kaplan: The way I put it is: now, for the first time, we can convert unstructured data to structured data economically and at scale, and therefore analyze it.
Jaya Gupta: Exactly, exactly. And I think that more of enterprise work now also lives on maybe what you would call instrumentable surfaces too.
James Kaplan: I assume there is an incredibly high end of B2B where there are fewer decision traces because, as you pointed out, it happens over dinner. Right?
Where maybe there’s notes shared after the dinner, maybe there aren’t. As opposed to, say, buying software, where a lot of that is going to happen over Slack and email and text and via RFP responses -- communications you can potentially interrogate.
Jaya Gupta: Exactly, exactly. And I think that’s another reason. Some of the examples I’ve heard are, on the factory floor, sometimes people need to write some rationale into something, but they never do it because there’s no use for it.
James Kaplan: Right.
Jaya Gupta: So there will still be some organizational change required -- some human change.
James Kaplan: So what is a context graph? And in particular, is it different from a knowledge graph? Is it a form of knowledge graph? What does a context graph look like when you implement it?
Jaya Gupta: Yeah, that’s a good question. I would say it’s definitely different than a knowledge graph.
And I would say the key difference is that you’re not building them intentionally; they’re more of a byproduct of how the agents actually work.
So you’re delivering some form of value at the application layer,
and that application is generating thousands of agent trajectories.
Cursor is a great example. They’re generating thousands of agent trajectories every single time we use it. Every time we accept or reject one of its suggestions, we’re learning something we couldn’t have learned upfront: which entities matter, which relationships are real, and which patterns recur in successful decisions.
And so the accumulation of those trajectories becomes the context graph.
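That accumulation step can be sketched as counting the hops agents actually make between systems. A toy model, with made-up trajectories; real context graphs would carry far richer node and edge types:

```python
from collections import Counter

def accumulate(trajectories: list[list[str]]) -> Counter:
    """Fold agent trajectories into edge weights: each observed hop between
    two systems strengthens that implicitly learned relationship."""
    edges = Counter()
    for steps in trajectories:
        for src, dst in zip(steps, steps[1:]):
            edges[(src, dst)] += 1
    return edges

# Each trajectory is the ordered list of touch points from one agent run
# (systems borrowed from the conversation: CRM, PagerDuty, Zendesk, Slack)
graph = accumulate([
    ["CRM", "PagerDuty", "Zendesk"],
    ["CRM", "PagerDuty", "Slack"],
    ["CRM", "PagerDuty", "Zendesk"],
])
# The CRM -> PagerDuty edge ends up strongest: a relationship no one
# declared upfront, learned purely from observed agent behavior
```

Nothing here required a pre-defined ontology; the entities and relationships fall out of the trajectories themselves, which is the distinction Jaya draws from a classic knowledge graph.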
James Kaplan: And what, when you say trajectory, what is a trajectory to your thinking?
Jaya Gupta: Yeah, so an agent trajectory is really: what touch points is the agent making?
And I would say it has five dimensions.
One is the timeline of that agent: what are the events the agent is going through?
You’re also baking semantics into the agent -- what certain things mean in the organization -- because what the term “risk” means at one company could be very different than at another.
James Kaplan: Mm-hmm.
Jaya Gupta: And then, I think one of the most important things is that you’re trying to figure out outcomes in this trajectory. In Cursor it’s super fast, because you’re generating something every single time. But when you’re looking at things like pricing decisions and sales workflows, it’s very hard to figure out why a deal closed, or what actions...
James Kaplan: Of course. Yeah.
Jaya Gupta: ...to close a deal.
James Kaplan: So many people are working on that question at the moment, I’m sure.
Jaya Gupta: Thousands.
James Kaplan: Yeah. So how much does this relate to -- I mean, there was a lot of debate going on after your article about RDF triples versus labeled property graphs, and about whether you do or don’t have a formal ontology with a graph.
It sounds like you’re inclined toward the side of the debate that says you have an emergent ontology rather than a predefined ontology. Is that a correct way to hear what you’re saying?
Jaya Gupta: Yes. And I think this debate will continue, but I would say that’s the key distinction, and why this isn’t just another metadata layer, or, you know, metadata 3.0.
It’s more that you don’t need to hand-specify the ontology of the enterprise upfront. You can learn it through agent behavior.
Now, in some cases -- if you’re doing a migration of any software, for example -- I think to some degree you are going to have to specify a part of the enterprise upfront.
But instead of doing the entire ontology upfront, you can do a smaller portion. So I think it will actually end up being a combination.
James Kaplan: Please continue.
Jaya Gupta: I was going to say the more physical the world gets the more you will have to model upfront.
James Kaplan: Yeah, I tend to have what I call, for want of a better term, the dialectical view on ontology development. You start with a structure created by an expert. You gather data; the structure created by the expert is wrong. You evolve the ontology in an emergent way.
Then you reshape the ontology based on expert input. It will be, in effect, a dialectic between the individual and the artificial intelligence. It sounds like you may not be in a terribly different place.
Jaya Gupta: Yes, I would agree. I think it’ll be a combination of both. And the thing that’s interesting about agents is that they let you, for the first time, infer some of the behavioral ontology.
James Kaplan: And I think that’s incredibly powerful, because you could argue that developing software, by whatever mechanism, is always a form of abstraction. You have to model a business process, and some degree of abstraction is necessarily required to model anything, or else it would be practically impossible.
And the question is how much abstraction can you engage in -- what’s the trade-off between the level of granularity you try to capture versus the cost and complexity you create? I think what you’re suggesting is that using agents to capture emergent structures pushes out that frontier, in terms of what level of granularity and precision you can capture in the model you build and manage.
Is that, is that fair?
Jaya Gupta: Yeah that’s spot on
James Kaplan: Okay, sure. Now, I think one of the reasons your article drew so much attention is that it has some pretty profound implications for the enterprise software space. Do you mind telling us about that a little bit?
Jaya Gupta: Yeah, of course. So for what we like to call incumbents, I think one of the reasons it’ll be a struggle for them to build context graphs is that they usually capture the what. Most systems of record today are structured to capture the what, not what led to it and why. Examples of certain categories of software could be, you know, CRMs, ERPs.
James Kaplan: Supply chain management. You can imagine the list, right?
Jaya Gupta: Yeah.
James Kaplan: Product lifecycle management, what have you, right?
Jaya Gupta: So the example I like to use is that there are all these functions that have been created, and, funny enough, they all have the word “ops” after them: RevOps, SecOps, DevOps.
James Kaplan: Now DocOps. DocOps is my new favorite.
Jaya Gupta: Say that again.
James Kaplan: DocOps. The process of using a docs-as-code mindset to create documents in a structured way.
Jaya Gupta: I like that one. And I think these really exist because none of these systems of record owns the cross-functional workflow. And a lot of this unstructured data that you’re synthesizing exists in what we used to call the systems-of-engagement layer.
And what I like to say is that most human reasoning actually happens over voice, or over video. It’s multimodal in nature, versus happening in text. It’s the way we make eye contact; it’s the way the tone changes. So I think those are some of the subtle differences in capturing those decision traces as well.
James Kaplan: Yeah, it’s interesting. I think in many cases decision traces aren’t in systems of engagement. They’re in spreadsheets and word-processing documents and emails and what have you. And for any number of enterprise applications, there’s a lot of work that gets done in spreadsheets before the data gets entered into whichever system of record.
Now let me maybe just play devil’s advocate. Let me turn this around in terms of the enterprise application providers. You could argue at least some of them have a ton of tacit knowledge about decision traces. A lot of that goes into how their platforms have been implemented in individual companies.
And many of them have significant professional services or customer success or implementation arms. Is there a case you could make that, if these companies were sufficiently motivated or sufficiently determined, they could combine the information they capture about the end result with the tacit knowledge that may already exist in their organizations about how people tend to make the decisions that lead to the what?
Does that make sense at all?
Jaya Gupta: It does, it does. And I actually do agree with you. I think there will be some incumbents that will be strategic. Some of it will be the professional services arm.
James Kaplan: How much of that knowledge do they have within the four walls of the institution?
Jaya Gupta: Exactly, exactly. And it’s also maybe a function of what sorts of data they’re sitting on, and whether they own any systems of engagement, too.
James Kaplan: Yeah. The other way I think about it -- and I’m not sure if it’s orthogonal to what you’re saying or not, so I’m curious about your view --
is that the amount of business domain content in the platform will matter a lot. There are certain types of software that have domain content about tax laws and tax treatments, or inventory rules in various parts of the world. It would seem to me that will be much harder to replace than something which is just a database, some workflow, and a UI.
Jaya Gupta: Yeah, so I think this one depends on how private that data is to the enterprise.
I think the reason you don’t have that many companies today automating migrations, for example, is because a lot of that information is private to the enterprise.
James Kaplan: Yep.
Jaya Gupta: And so you can’t really just build an agent wrapper to do that quickly. You’re going to need a lot of fine-tuning and a lot that’s specific to the company.
James Kaplan: Well, it’s inherently complicated information, right?
Jaya Gupta: That’s right.
James Kaplan: There’s a lot of nuance there. There’s a lot of implicit stuff that’s not written down.
Jaya Gupta: Exactly. And where you’re maybe retrieving information on different tax laws in different countries, to the extent that it’s public and on the internet, I think that will actually get easier and easier.
James Kaplan: As it becomes easier to ingest data, there’ll be less of a dominant advantage in capturing that type of information. More people will be able to access it because, given GenAI, it will now be easier to ingest.
Jaya Gupta: Exactly.
James Kaplan: Alright, so where are you taking this next? It sounds like this has become a major theme in your research and your thinking. Where do you go from here? What are the open questions, and what are you thinking about going forward?
Jaya Gupta: That’s a great question. I think some of it is: where’s the opportunity for startups?
We have a point of view on that, and we’re already starting to get, I feel, thousands of pitches per hour, so our view is forming on the fly as well.
I think some of it is: who’s going to own it? Is there going to be a universal context graph, or are context graphs going to be verticalized? Some of it is
James Kaplan: And horizontalized. Right, because you can imagine one for it service management and one for marketing ops and what have you, right?
Jaya Gupta: Exactly, exactly. So either by function or by industry. Will the opportunity go to the app layer? What new infrastructure will have to be built? And, coming back to our security question, who will build the security layer that governs inference and not just retrieval?
James Kaplan: Right. Yeah.
Jaya Gupta: And so, who will capture the opportunity? The thesis for us here is that decision traces are sort of the trillion-dollar layer that B2B never had and B2C has always had. So how do you figure out creative ways to capture them? And I think we’re seeing very different approaches in many different sorts of applications across industries.
James Kaplan: I mean, sometimes I talk about the semantic layer, and I may be coming at a similar idea from a different direction. Especially for businesses that have incredibly complicated data with lots of tacit data, there’s been a hell of a lot of challenge that’s limited digitization and automation over the past couple of decades.
And we may now have the opportunity to go address some of that in a way that would’ve been impossible even two or three years ago.
Jaya Gupta: Exactly, exactly. And with the semantic layers of the world, there were months and years of workshops and stakeholder alignment. I think this actually makes a lot of that much faster, too.
James Kaplan: Exactly, because getting people to agree on semantics, I will entirely acknowledge, is incredibly tough.
Jaya Gupta: Yeah.
James Kaplan: Any final thoughts?
Jaya Gupta: I think, I mean, you know, I’m biased, but it’s going to be a great time for startups.
And I think, you know, many enterprises have been a little bit slow to adopt AI.
What I’ve seen from early reactions is that it’s gotten many enterprises early in their adoption cycle to move a lot faster.
So I think it’s going to be super exciting: one, for all the large companies and enterprises in the world, maybe moving up their adoption cycles; and two, for startups to capture some of the opportunity as well.
James Kaplan: Thank you so much.
Jaya Gupta: Thank you.
James Kaplan: That was terrific.
Jaya Gupta: Awesome.





