Last month, I attended ProveIt! 2026 in Dallas — a technology showcase where industrial technology vendors demonstrated their capabilities on a set of shared virtual factory datasets. Each vendor who took the stage had a clear mandate, centered around four questions: What problem did you solve? How did you solve it? How long did it take? What did it cost?
The event drew hundreds of attendees, including tech vendors (most of whom were also sponsors), end users, and system integrators. Here are my takeaways from the event…
Industrial DataOps > Unified Namespace
Not surprisingly, the Unified Namespace (UNS) was the cornerstone on which the entire ProveIt! ecosystem was built. The UNS, a concept championed by the ProveIt! community, organizes operational data by asset hierarchy into a single, shared broker that any application can publish to or consume from. LNS Research's recently defined Industrial DataOps framework includes the UNS as a foundational layer, but it does not stop there.
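To make the idea concrete, here is a minimal sketch of how a UNS keys each tag to an ISA-95-style asset hierarchy in its broker topic path. All names below (the `uns_topic` helper, the enterprise/site/area values) are illustrative assumptions, not from any specific vendor's demo at the event.

```python
# Hypothetical sketch: a UNS addresses every tag by its place in the
# asset hierarchy, so any application can find it at a predictable path.

def uns_topic(enterprise, site, area, line, cell, tag):
    """Build the shared broker topic path for one tag (illustrative levels)."""
    return "/".join([enterprise, site, area, line, cell, tag])

topic = uns_topic("acme", "dallas", "packaging", "line-2", "filler-1", "temperature")
# Any application can publish to, or subscribe on, this shared path.
```

The point is not the string-joining; it is that producers and consumers agree on the hierarchy once, so no application needs point-to-point integrations.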

Industrial DataOps is not an entirely new concept (Wonderware has been working on this for 25 years with ArchestrA, now part of System Platform), but it is one that is still evolving. While there was reasonable agreement on the core capabilities, there were also meaningful differences in approach and scope. What was clear to me during my time at ProveIt! is that it is high time to move past the UNS and MQTT debate. The shared MQTT broker connecting 50+ vendors was a baseline expectation, not a differentiator. What vendors are competing on now is what happens after the data lands in the broker.
Conditioning and contextualizing data, folding metadata from material flow, shift schedules, machine states, work orders, quality codes, and domain-specific business rules directly into the data model, is where the value proposition and differentiation lie today. The goal is to push domain knowledge as close to the edge as possible so that every downstream application or AI model works with actionable information, not just tags and raw data.
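A minimal sketch of what that contextualization step looks like at the edge, assuming raw tag readings arrive as dicts and that shift and work-order lookups are available; the `contextualize` function and all field names are hypothetical, shown only to illustrate the pattern.

```python
# Hypothetical edge contextualization: enrich a raw tag reading with
# shift and work-order metadata before it lands in the broker.

def contextualize(raw, shift_schedule, work_orders):
    """Fold business context into a raw reading (illustrative lookups)."""
    enriched = dict(raw)
    machine = raw["machine"]
    enriched["shift"] = shift_schedule.get(machine, "unassigned")
    enriched["work_order"] = work_orders.get(machine)
    return enriched

reading = {"machine": "filler-1", "tag": "temperature", "value": 71.3}
shifts = {"filler-1": "B"}
orders = {"filler-1": "WO-1001"}
result = contextualize(reading, shifts, orders)
```

Downstream consumers now see a reading that already knows which shift and work order it belongs to, rather than a bare tag value they must join against other systems themselves.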
However, my biggest surprise at ProveIt! was the momentum behind graph-based data models. While the approach has been around for many years, it has never had the traction it received at the event; several vendors showed graph modeling and node-and-edge visualization as part of their demonstrations. The appeal is legitimate: graphs are an effective way to model many-to-many relationships in ways that flat hierarchies cannot. However, to ensure graphs are implemented for the right reasons and in the right way, vendors need to lead with solving business problems, not with the technology itself.
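A toy example of the many-to-many point: a material lot can feed several machines, and several machines can serve one work order, relationships a single parent-child hierarchy cannot express. The node names, relation labels, and `neighbors` helper below are all illustrative, not taken from any vendor's demo.

```python
# Hypothetical graph model as plain (source, relation, target) edges.
# One lot feeds many machines; many machines serve one work order.
edges = [
    ("lot-42",  "FEEDS",    "press-1"),
    ("lot-42",  "FEEDS",    "press-2"),
    ("press-1", "PRODUCES", "wo-1001"),
    ("press-2", "PRODUCES", "wo-1001"),
]

def neighbors(node, relation):
    """Follow outgoing edges of one relation type from a node."""
    return [dst for src, rel, dst in edges if src == node and rel == relation]

machines = neighbors("lot-42", "FEEDS")
```

In a strict asset hierarchy, `lot-42` would need a single parent; in the graph it simply has two `FEEDS` edges, which is exactly the traceability question ("which machines touched this lot?") that graph demos at the event were answering.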
Redefining Interoperability in an Industrial AI World
One of the more striking dynamics at ProveIt! was that the interoperability conversation is now being led by a community of tech vendors, systems integrators, and industry associations, as opposed to just the major automation vendors. While it can be argued that interoperability was solved with OPC servers decades ago, current needs have expanded what interoperability means. For instance, Model Context Protocol (MCP) came up in nearly every vendor conversation as the emerging interface for exposing industrial data to AI agents.
However, the most significant interoperability development at the show was the live demonstration of CESMII’s I3x, the Industrial Information Interoperability Exchange. Led by a coalition of tech vendors, I3x is a common API standard built for the OT world. If it achieves widespread adoption, I believe it could do for industrial applications what REST did for web APIs.
Agentic AI Accelerates Product Development
While Industrial DataOps for AI is relatively well understood, it was encouraging to see how Agentic AI capabilities are accelerating DataOps development itself. I lost count of how many vendors mentioned using Claude Code to build and deploy software on-site. Several vendors used agentic coding tools to build and deploy working connectors, data flows, and application logic live at the event — in some cases, in under an hour, by engineers with no prior domain experience in the use case they were solving.
The harder question, one I didn't hear a good answer to from any vendor, is what happens when vibe-coded apps mature enough to clear the governance, scalability, and auditability bar that industrial software demands. If end users can build good-enough applications with an AI agent and a weekend, how much does that change the build-vs-buy economics?
Summary & Recommendations
Industrial DataOps is quickly moving from exploration to must-have, and Industrial AI is the forcing function. Manufacturers that have invested in AI pilots are discovering the hard way that model quality is downstream of data quality: to do powerful things with Industrial AI, you need Industrial DataOps.
ProveIt! 2026 was a one-of-a-kind event: not just demos and booths, but a structured showcase that pushed vendors to demonstrate software on shared virtual factory datasets against a consistent evaluation framework: problem, approach, time, and cost. The format produced a reasonable basis for comparison across a diverse vendor landscape, which I group under three categories:
- Pureplay Industrial DataOps Providers: These vendors' primary business is Industrial DataOps — connectivity, conditioning, contextualization, and orchestration. Companies like Flow Software, FlowFuse, HighByte, HiveMQ, Litmus, MaestroHub, and Thred all fall into this category, each competing on a different architectural approach but sharing the same core mandate. Some also do persistence and data storage, but their Industrial DataOps features are the core functionalities. Their USP is a neutral data layer with no preference for where the data ends up, which makes them the most architecture-agnostic option in the market. These vendors are well-positioned to lead the charge on Industrial DataOps as a category over the next few years.
- DataOps as Part of a Larger Platform: These vendors do DataOps, but in service of an MES, an operations platform, or an analytics / AI platform. MachineMetrics, Fuuz, and AVEVA were among those at the event taking this approach, and they are not alone; an increasing number of vendors are embedding DataOps into broader platforms. Their DataOps capabilities are similar to the previous category, but they are designed to feed their own applications first, which means less architectural flexibility in exchange for a more complete out-of-the-box solution. Most claim openness and interoperability, and many deliver on it, but history has shown a preference for these vendors to be the hub in the middle rather than a spoke that connects to a competitor's hub. The recommendation for manufacturers here: be deliberate about not trading one set of silos for another.
- Application Providers: These vendors consume DataOps rather than produce it — connecting to the shared broker, pulling contextualized data, and using it to power their own applications. Tulip, MaintainX, and Inductive Automation are examples: operator workflows, work orders, and SCADA visualizations built on top of data that others have already conditioned. Their value lies in how effectively their applications act on that data — the speed of building, the quality of the user experience, and the ability to scale across sites. They depend on the layers below to do the heavy lifting, and their job is to close the loop between clean data and operational action.
As noted above, Industrial DataOps is quickly becoming a core capability for any modern industrial technology stack, and it will not be long before it is simply the baseline for anyone serious about Industrial AI. The manufacturers who invest now will build a compounding advantage; the ones who wait are not standing still, they are falling behind. Three actionable recommendations for manufacturers evaluating this space:
- Don't get distracted by the means: Manufacturers often face complex, tangled operational architectures, and it is easy to get sidetracked by flashy new technologies, closed platforms, or endless protocol debates. These are means to an end, not the goal.
- There is no single right path: The vendors at ProveIt! represented meaningfully different architectural approaches — graph-first, historian-first, namespace-first, and flow-based. The right answer is often a combination of them, and it depends on the end user's existing stack, their team's technical depth, and the specific operational questions they need to answer. Starting with the business problem at hand, then working backward to the architecture, is a proven best practice here.
- Do not wait for the perfect data model: The leaders in this space are not the manufacturers who spent 18 months designing an enterprise ontology before connecting a single machine. They picked a high-value use case, wired it up, validated the result, and expanded. The cost of starting has never been lower. The cost of waiting — as AI-enabled competitors close the productivity gap — is growing every quarter.
