Why Your Enterprise Needs Something New to Manage AI
12/3/25 · Rehgan Bleile
About a decade ago…
During the rise of machine learning in the enterprise, the question was asked again and again: "Why do I need a new tool or a new process for this? Isn't it just software?" It quickly became clear that the technical teams building models had to manage complexity that went beyond anything software development already handled. The reason is probability.
When we write software, we encode concrete rules and workflows: when the user clicks this button, it takes them to this page. Easy enough. But predictive systems work with statistics, so we are now dealing with probabilities. Based on historical information, what is the likelihood that the user wants to see a specific item? That is the basis of recommendation engines. There is a chance the system is wrong, and we do not know for sure what will be displayed until the model produces an answer.
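To make the contrast concrete, here is a minimal Python sketch. The function names are hypothetical, and the random scoring is only a stand-in for a trained model's probability output.

```python
import random

def handle_click(button: str) -> str:
    """Deterministic software: the same input always yields the same page."""
    routes = {"checkout": "/cart", "profile": "/account"}
    return routes[button]

def recommend(user_history: list[str], catalog: list[str]) -> str:
    """Probabilistic system: each item only gets a likelihood, and we do not
    know which one wins until the model produces an answer."""
    # Stand-in for a trained model's scores; in practice this would come from
    # something like a fitted recommender's predicted probabilities.
    scores = {item: random.random() for item in catalog}
    return max(scores, key=scores.get)

print(handle_click("checkout"))                         # always "/cart"
print(recommend(["shoes"], ["hat", "socks", "laces"]))  # may vary run to run
```

The first function is fully specified by its code; the second can be wrong, and its answer can change from run to run even with identical inputs.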
This might seem like a small difference, but its impact is significant.
It changes how we test the system, how we monitor and track outputs and outcomes, how we improve it over time, how much data we need to make it work well, what users expect and experience, how we troubleshoot what went wrong, and how we manage uncertainty. The system can now handle far more pathways than hard-coded logic ever could, but in exchange we take on more complexity and more "unhappy" paths to manage.
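As one illustration of how testing shifts, here is a hedged sketch in Python. The `model` and `holdout` arguments are hypothetical stand-ins for your own artifacts, and the 0.90 threshold is arbitrary.

```python
def test_exact_route():
    # Traditional software test: one input, one expected output, every time.
    routes = {"checkout": "/cart"}
    assert routes["checkout"] == "/cart"

def test_model_quality(model, holdout):
    # ML test: there is no single correct answer to assert, so we check an
    # aggregate property over a holdout set and tolerate individual misses.
    correct = sum(model.predict(x) == y for x, y in holdout)
    accuracy = correct / len(holdout)
    assert accuracy >= 0.90, f"accuracy regressed to {accuracy:.2%}"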
AI takes this concept to a new level. We are now introducing "agency" into these systems, allowing them to act on our behalf and to reason in ways we don't fully understand. The complexity has increased, and managing unwanted outcomes has become harder. Some of these systems now improve how they work over time (reinforcement learning), and the way they do so makes exact reproducibility nearly impossible. So why do this at all? In some cases, the benefits outweigh the downsides.
It makes sense now why we need new technical tools to develop and manage AI systems.
We also need new evaluation tools to ensure these AI systems are doing what we want. But by focusing only on the technical side, we miss the hardest part: managing change and risk across an entire enterprise. These powerful AI solutions will change behaviors, systems, and ways of operating. AI is not just automating existing systems; it is uprooting and redefining them. The hardest part about change is people, not technology. Ask anyone in charge of AI at an enterprise. This is an all-encompassing technology, and it requires a whole new level of collaboration. The bespoke enterprise systems that exist today in isolation are not going to cut it.
Companies need a way to capture the complex needs of the parts of the business that AI will impact. They need a way to understand their risk exposure, from potential operational disruption to improper use of data to the unintended consequences of embedding AI. They need a way to navigate the ocean of data these systems need in order to work well. And they need to translate all of these requirements and constraints into a language that each stakeholder can understand and approve. This is why the change is so hard. Several groups (data, AI, risk, operations, leadership) are involved in these conversations, and none of them speak the same language. Yet they all must understand the critical information about the AI system that will impact them.
Navigating this complexity requires a new way of collaborating.
It demands a new space to work together within each step of the AI life cycle, from idea to production. This isn't a form or an Excel spreadsheet. This isn't a SharePoint site or a requirements document. It must be a space that is context aware, gives the right information to the right person in the right language, helps evaluate risk, and can generate a blueprint for every AI system. Whether you are buying an AI product from a vendor or building one yourself, you need to set and manage expectations for the AI system, and you need a place to store and manage them.
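As a rough illustration only, here is one way such a blueprint might be represented; every field name and value below is hypothetical, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class AIBlueprint:
    system_name: str
    business_owner: str                    # who the system answers to
    intended_use: str                      # what it may decide or recommend
    data_sources: list[str] = field(default_factory=list)
    known_risks: list[str] = field(default_factory=list)      # e.g. bias, disruption, data misuse
    approvals: dict[str, bool] = field(default_factory=dict)  # sign-off per stakeholder group

blueprint = AIBlueprint(
    system_name="claims-triage-assistant",
    business_owner="Claims Operations",
    intended_use="Rank incoming claims by urgency; humans make the final call",
    data_sources=["claims_history", "policy_records"],
    known_risks=["uneven performance across regions", "over-reliance by adjusters"],
    approvals={"risk": True, "data": True, "operations": False, "leadership": False},
)
```

The point is not the exact fields but that every stakeholder group can see, in its own terms, what the system is for, what it touches, and what has been approved.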
Whether people like it or not, this new way of working is not just a phase.
We are now designing and managing new sources of intelligence that will eventually transform how work is done. This requires a shift in how we think about collaborating. And if you want to move fast without breaking things, you cannot keep letting AI die in committee meetings. You are stuck in forever meetings because you don’t yet have a productive place to work together. Where are your AI blueprints?
Curious what this could look like inside your org?
Get in touch for a free consultation — we’ll walk you through it.