The New Prize - The OT Ecosystem
OT is being affected by a series of powerful IT trends that are enabling new and enhanced capabilities, resulting in the convergence of IT with OT.
Can you trust an AI agent to make the right decision in your supply network?
AI agents are being actively investigated in industrial operations, from scheduling and forecasting to predictive maintenance and autonomous quality checks. These systems promise unprecedented efficiency and responsiveness. But with great autonomy comes great risk.
In a recent LNS Research survey on Intelligent Supply Networks, 33% of respondents said they want AI agents responsible for taking actions without human intervention, and 17% said they want agents that are both responsible and accountable for those actions. Only 4% said they want no agents at all.
This pattern is even stronger when respondents are split into Intelligent Supply Network (ISN) Leaders and Followers. Figure 1 illustrates that Leaders are significantly more likely to prefer AI agents that are both responsible and accountable: nearly two and a half times more likely than Followers, at 32% versus 13%.
The power of autonomous AI is also its Achilles’ heel: it acts independently. When designed without principles or operated without boundaries, AI can lead to poor decisions, system failures, or even safety incidents. Autonomy without accountability is a recipe for failure.
To unlock AI's benefits in manufacturing and supply chains, we must design agents with clear principles and guardrails. Only then can we ensure that AI serves business goals safely and effectively.
The scope of an AI agent’s behavior can be visualized across two dimensions: safety and training. This yields four behavioral domains:
Safe & Trained: The sweet spot. We are within the safe operating envelope, and the agent can confidently operate as we are within the scope of the training data. Closed-loop probabilistic agents can be used.
Safe & Untrained: A zone of untapped potential. We are within the safe operating envelope, so the risk is low, but the agent is not familiar with this domain. Closed-loop probabilistic agents are not recommended.
Unsafe & Trained: The most deceptive zone. We are outside the safe operating envelope, so we need to apply deterministic systems. Even though the agent has been trained in this domain, it is too risky to use a probabilistic system, as we cannot guarantee the outcome. We can record the agent’s recommendations, but not close the loop.
Unsafe & Untrained: The danger zone. Here, the AI is out of its depth in a high-stakes environment. Mistakes are not only likely but potentially catastrophic. Closed-loop probabilistic systems shall never be used.
The takeaway: training data doesn’t define safety. AI systems need guardrails to understand when they are outside their capabilities.
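The two-by-two classification above can be sketched in code. The sketch below is illustrative rather than a prescribed implementation; the names (`Mode`, `select_mode`) are assumptions introduced here:

```python
from enum import Enum

class Mode(Enum):
    CLOSED_LOOP = "closed-loop probabilistic agent"
    ADVISORY = "record recommendations only, do not close the loop"
    DETERMINISTIC = "deterministic fallback control"

def select_mode(within_envelope: bool, within_training: bool) -> Mode:
    """Map the two dimensions (safety, training) to an operating strategy.

    Safe & Trained   -> closed-loop probabilistic agent (the sweet spot)
    Safe & Untrained -> advisory only (low risk, but unfamiliar domain)
    Unsafe           -> deterministic systems take over, trained or not
    """
    if not within_envelope:
        # Guardrails override: recommendations may still be logged if the
        # agent is trained here, but the loop must never be closed.
        return Mode.DETERMINISTIC
    if within_training:
        return Mode.CLOSED_LOOP
    return Mode.ADVISORY
```

Note that the safety check comes first: training status is only consulted once the system is known to be inside the envelope, which encodes the takeaway that training data does not define safety.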
In industrial systems, the safe operating envelope is a “defined range of operating conditions within which a system or process can be safely operated without causing damage or failure”. These are the conditions under which a probabilistic system can function with acceptable risk. Guardrails must be implemented at the boundary of the safe operating envelope so that deterministic systems can take over for the agent and ensure safe operations.
Generative AI and AI systems based on random sampling or inputs are inherently probabilistic, not deterministic. A deterministic system is characterized by predictable behavior, where a specific input always leads to the same output. In contrast, a probabilistic system incorporates randomness and provides a range of possible outcomes with associated probabilities.
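The distinction can be made concrete with a minimal sketch. Both functions below are hypothetical examples of a temperature-triggered control decision; only the second, which draws a random sample, is probabilistic:

```python
import random

def deterministic_controller(temp_c: float) -> str:
    # A specific input always leads to the same output.
    return "open_valve" if temp_c > 80.0 else "hold"

def probabilistic_agent(temp_c: float) -> str:
    # Random sampling: the same input can yield different outputs,
    # each with an associated probability.
    p_open = min(1.0, max(0.0, (temp_c - 70.0) / 20.0))
    return "open_valve" if random.random() < p_open else "hold"
```

Running the deterministic controller twice on the same reading always gives the same action; the probabilistic agent, given a borderline reading, will sometimes open the valve and sometimes hold, which is exactly why its outputs cannot be guaranteed outside the safe operating envelope.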
Understanding where probabilistic systems can be used is especially important in a distributed, collaborative architecture with many interdependencies that are difficult to test and where autonomous systems must safely interoperate.
AI agents, like intelligent business processes, must operate on principles rather than just rigid rules. These principles are based on strategies and business objectives that consider the desired outcome for both the task and the higher-level business.
For example, an AI agent optimizing operations must consider not just speed and cost but compliance and product integrity.
However, when the boundary of the safe operating envelope is approached or breached, deterministic rules-based systems must take over to ensure safe operations.
Understanding AI agent safety and performance begins with acknowledging the unknown. Here’s a stepwise model for managing and classifying AI agent behavior:
Start with All Possible States: Assume these states are unsafe until proven otherwise. This is a conservative approach that many forget.

Define the Safe Operating Envelope: This is where the world is safe, even if everything does not go according to plan. It is OK to take risks inside this envelope.

Train the Model and Define the Scope of Training Data: AI models are trained, not programmed. They learn from the data that you provide. Remember, the model cannot extrapolate; it is not intelligent, so don’t expect it to do something you have not trained it to do.

Classify the Regions of Operation: You can now give the regions names and choose the operating strategy for each, as described in the four behavioral domains above.
This framework reframes AI deployment as a risk-informed decision, not a leap of faith.
LNS has created the Intelligent Supply Network architecture as a recommendation for building intelligent systems inside the plant and across the supply chain. In this architecture, AI agents sit in the Industrial Applications & Autonomous Agents layer, which provides an abstraction between the physical assets and the standardized business system.
AI agents must interconnect with business systems (ERP, SCM, PLM), plant assets (equipment, sensors, actuators), human operators (Connected Frontline Workforce), and subject matter experts in the Virtual Operations Center. The connection is done through data platforms and infrastructure, enabling applications and AI agents to communicate with other systems.
The Platforms & Infrastructure layer within the Intelligent Supply Network enables AI agents through several capabilities:
Standard interfaces such as MCP (Model Context Protocol), A2A (Agent-to-Agent), and JSON-based REST APIs over HTTP
Contextualization and metadata that give the data meaning
Guardrails: Role-based permissions, authentication, and authorization that control who can do what within each domain of safe operations
These capabilities ensure AI agents act not in isolation but as responsible members of a coordinated system.
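One of these guardrail capabilities, role-based permissions, can be modeled as a lookup from (role, zone of operation) to allowed actions. The table and names below are hypothetical illustrations, not part of any real platform API:

```python
# Hypothetical permission table: which agent roles may take which
# actions in which zone of operation. "act" (closing the loop) is
# only granted inside the safe-and-trained zone.
PERMISSIONS = {
    ("scheduler_agent", "safe_trained"): {"read", "recommend", "act"},
    ("scheduler_agent", "safe_untrained"): {"read", "recommend"},
    ("scheduler_agent", "unsafe"): {"read"},
}

def authorize(role: str, zone: str, action: str) -> bool:
    """Return True only if the role holds that permission in that zone."""
    return action in PERMISSIONS.get((role, zone), set())
```

An unknown role or zone defaults to an empty permission set, so the guardrail fails closed rather than open.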
As AI agents become embedded across operations, the nature of decision-making and management will change. Latency (the time to make a good decision) is a productivity killer.
AI agents could become the new middle managers, trained to execute decisions, simulate scenarios, and recommend actions. This shift has deep implications:
Strategic leadership increases in importance as it defines the guiding principles.
Middle management may shrink or evolve, while the importance of the frontline workforce will increase.
New organizational structures must be formed that focus more on value streams and communication paths than on hierarchy.
AI agents provide the opportunity to move decisions closer to where the action is happening, which can shorten the latency and allow for experimentation and learning.
Despite the promise of automation, people remain essential. Humans bring critical thinking, ethics, and adaptability. The frontline workforce provides contextual intelligence. They can hear, feel, see, and smell things and identify patterns that many AI systems miss.
Autonomy is not a substitute for human wisdom. It is a tool for augmenting it.
Safety First: Just as machines have emergency stop buttons, AI agents must be designed with the equivalent: a way for humans to intervene immediately when outcomes veer off course. These agents must have an E-stop capability; not just automation, but interruptible automation.
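A minimal sketch of interruptible automation, assuming a Python agent loop; the class and method names are hypothetical. The E-stop flag is checked before every action, so a human can halt the agent at any point and force it back to a safe state:

```python
import threading

class InterruptibleAgent:
    """An agent loop with an E-stop: a human can halt it at any time."""

    def __init__(self):
        self._estop = threading.Event()  # thread-safe so an operator
        self.actions = []                # can trip it from outside the loop

    def emergency_stop(self):
        # The human's E-stop button.
        self._estop.set()

    def run(self, steps):
        for step in steps:
            if self._estop.is_set():  # checked before every single action
                self.actions.append("E-STOP: reverting to safe state")
                return
            self.actions.append(f"executed {step}")
```

The key design choice is that the stop check sits inside the loop, before each action is committed: the automation is not merely stoppable after the fact, it is interruptible between every step.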
The future of intelligent supply networks isn’t just smarter machines; it’s smarter systems and structures.
AI is showing tremendous promise, but it comes with significant challenges. Organizations that master the use of AI will significantly outperform those that do not. This is not the time to put your head in the sand; we must lean forward and identify the opportunities and pitfalls:
Understand your business objectives and measure everything you do against them.
Take a risk-based approach: Understand and document your safe operating envelope.
Educate yourself about the opportunities and limitations of AI agents. Be realistic; don’t get fooled by one-off promising results.
Be clear on your operating principles and guardrails. Gather good training data. Create a decision hierarchy that ensures guardrails always have the highest priority.
Build an architecture that supports AI agents. If you have a spaghetti structure today, it will just become more chaotic once AI systems start interacting with it.
Build an operating model and organization that leverages AI. AI alone will not produce the results you want.
Remember, even with AI agents, you are still accountable for all actions and outcomes. Make sure that you have the right safeguards in place.
This is not just a technological shift - it’s an organizational one. Leadership, structure, and oversight must evolve alongside autonomy.
| Term | Meaning |
| --- | --- |
| AI agent | An artificial intelligence application that can act on behalf of someone or another agent. AI agents can be connected and have orchestrated behaviors. |
| Agent AI | The capability of artificial intelligence to act on behalf of someone else. |
| Deterministic | A program or algorithm that, given the same inputs, will always produce the same output, with the same sequence of intermediate states. |
| Probabilistic | Based on or adapted to a theory of probability; subject to or involving chance variation. |