At Tredence, we see the same frustrating cycle play out across almost every AI-driven enterprise we work with. A data team spends a month building an internal AI agent. In the sandbox, it performs flawlessly: it pulls the right numbers, summarizes trends fluently, and gets the executive team excited. Then it goes live in production.
Usually, the blame lands in one of two places: business stakeholders complain that the LLM is hallucinating, or the data engineers blame underlying data quality. But having looked under the hood of dozens of these failed deployments, we've found the reality is usually different. The source tables are fine. The model is functioning exactly as designed. The actual point of failure is a total lack of context.
When a senior analyst gets asked a question about "active users," they don't just write a SQL query against the first table they find. They know that the users_v2 table has a bug from last month's migration. They know that "active" means something completely different to the product team than it does to marketing. They bring years of institutional knowledge to every answer.
An AI agent doesn't have any of that. It just sees column headers. It doesn't know your undocumented business rules, the historical quirks of your CRM, or the agreed-upon corporate definitions of specific metrics. It operates in a vacuum. So, when an agent grabs the net revenue column instead of recognized revenue, the output isn't just slightly off—it's fundamentally wrong. And in an enterprise setting, delivering a confident but wrong answer destroys trust in the AI initiative entirely.
Fixing this is notoriously difficult. The standard workaround is to cram all of this business logic into the system prompt. But business rules change weekly and schemas evolve; trying to hardcode an entire enterprise's operating context into an LLM prompt is a maintenance nightmare that simply doesn't scale.
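To make the antipattern concrete, here is a minimal, entirely hypothetical sketch of what that workaround looks like in practice. Every rule is a string literal frozen at deploy time, which is exactly why it rots:

```python
# Hypothetical example: business context hardcoded into a system prompt.
# Each rule below is a string literal that someone must hand-edit whenever
# a schema changes, a definition shifts, or a known data issue is fixed.
SYSTEM_PROMPT = """You are a data analyst agent.
Rules (last updated... by whom?):
- 'Active user' for the product team = logged in within the last 7 days.
- 'Active user' for marketing = opened an email within the last 30 days.
- Do NOT query users_v2; it has a migration bug. Use users_v3 instead.
- 'Revenue' questions mean recognized_revenue, never net_revenue.
"""

def build_prompt(question: str) -> str:
    # The context is frozen at deploy time; the warehouse is not.
    return SYSTEM_PROMPT + "\nQuestion: " + question
```

The day users_v3 is superseded or finance changes its revenue policy, this prompt is silently wrong, and nothing in the pipeline flags it.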
For years, we’ve treated context as a passive exercise—a wiki where humans go to look up what a column means. But if we want AI to actually work autonomously in production, that context has to become active infrastructure.
This is exactly why Tredence is stepping up as a Context Layer Partner for Atlan Activate. Our focus has always been on closing the gap between raw data and real business value, and Atlan’s Enterprise Context Layer is the architectural missing link we’ve been waiting for.
Specifically, Atlan's new Context Engineering Studio addresses this problem directly. It takes the semantic definitions, lineage, and governance rules that organizations have already built, and makes them machine-readable. Instead of an AI agent blindly guessing which table to use, Context Engineering Studio intercepts the request and arms the agent with the exact definitions, rules, and relationships it needs to generate a trustworthy answer.
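The interception pattern described above can be sketched in a few lines. To be clear, this is not Atlan's API; the registry, names, and rules below are all hypothetical stand-ins for a governed catalog, shown only to illustrate the shape of the idea, with definitions maintained in one place and injected at request time:

```python
# Hypothetical sketch of a context-layer lookup. CONTEXT_REGISTRY stands in
# for a governed catalog (definitions, lineage, rules); the names are
# illustrative, not Atlan's actual interface.
CONTEXT_REGISTRY = {
    "active users": {
        "definition": "Users with a login event in the trailing 7 days (product definition).",
        "table": "analytics.users_v3",  # users_v2 deprecated: migration bug
        "rules": ["Exclude internal test accounts (is_internal = false)."],
    },
    "revenue": {
        "definition": "Recognized revenue, not net invoiced revenue.",
        "table": "finance.recognized_revenue_daily",
        "rules": ["Report in USD using the month-end FX rate."],
    },
}

def enrich_question(question: str) -> str:
    """Intercept the request and attach governed context before the LLM sees it."""
    context_lines = []
    for term, ctx in CONTEXT_REGISTRY.items():
        if term in question.lower():
            context_lines.append(
                f"- '{term}': {ctx['definition']} Source table: {ctx['table']}."
            )
            context_lines.extend(f"  Rule: {r}" for r in ctx["rules"])
    if not context_lines:
        context_lines.append("(no governed context matched)")
    return "Governed context:\n" + "\n".join(context_lines) + "\n\nQuestion: " + question
```

The point of the pattern is the single source of truth: when the catalog updates a definition or deprecates a table, every agent request picks up the change on the next lookup, with no prompt rewrites required.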
An LLM without governed business context isn’t an analyst; it’s just a fast, confident guesser. If we want to get out of the endless cycle of failed pilots, we have to stop focusing solely on the models and start building the infrastructure that gives them context.