For nearly 15 years, we’ve been told that the fourth industrial revolution is all about data. Buzzwords like “Data is the New Oil” became commonplace, fueling a hype cycle further accelerated by the boom in cloud computing and big data. By the mid-2010s, manufacturing leaders were urged to collect as much data as possible and push it to the cloud, where data scientists would deliver unprecedented insights.
Spoiler alert: that didn’t happen.
What we got instead was petabytes of raw unstructured data stuck in data lakes with little to no governance, technology solutions searching for problems, and endless pilots that never scaled. It’s no surprise that we still don’t have the equivalent of a single industrial company leading the charge like Toyota did with TPS and Motorola (later GE) with Six Sigma.
Enter Industrial DataOps.
LNS Research recently defined Industrial DataOps as both a concept and a technology category that sits at the heart of solving these core challenges. The category spans several key functionalities: connectivity, data quality, conditioning and contextualization, modeling, governance, and workflow orchestration. In this blog post, we will break down three common lies around Industrial DataOps and how these misconceptions have stalled industrial productivity growth so far.
Fake News #1: A Full-stack platform with Apps & Analytics is the only way.
One of the earliest misconceptions in the Industry 4.0 journey came from the rise of Industrial IoT platforms. At the time, these platforms felt revolutionary, as for the first time, manufacturers could unlock access to operational data and make information visible in ways that simply weren’t possible before.
While many of these platforms introduced useful DataOps-like capabilities — connectivity, basic contextualization, and app-building toolkits — they were, for the most part, locked inside proprietary environments. If you wanted to use data outside the platform’s boundaries, it was an ad hoc exercise at best, or more often, incurred significant custom development.
Fast forward to today, and Industrial DataOps has evolved along two distinct paths:

- Vertically integrated platforms: The next generation of full-stack solutions goes well beyond those early IIoT platforms, often including connectivity drivers, MQTT brokers, historians, digital twins, storage layers, and even analytics.
- Best-in-class providers: On the other side are vendors that focus deeply on one aspect of Industrial DataOps, such as interoperability, data reliability, or contextualization. With this approach, end users can assemble a DataOps stack from multiple providers, selecting the strongest tool in each category. It requires more integration work but can yield a more flexible and future-proof architecture.

Each approach has its pros and cons; buyers and end users can decide how to approach Industrial DataOps based on their existing architecture, time-to-value requirements, tolerance for vendor lock-in, in-house expertise, IT/OT alignment, or any other criteria. It is worth noting that while some vertically integrated platforms have taken a headless approach to their architecture (no analytics or apps), even the ones with embedded analytics are more open than previous IIoT platforms.
Fake News #2: A Unified Namespace is the Holy Grail.
In recent years, few concepts in industrial data have gained as much traction as the Unified Namespace (UNS). The idea is simple but powerful: organize industrial data in a structured, consistent, and standardized way.
For many companies, this was an aspirational goal: even within a single site, assets and materials are often named inconsistently, at the discretion of the controls engineers who wrote the PLC code. The result was a messy, inconsistent namespace that made it difficult to find, share, or use data effectively.
UNS was proposed as the fix for this problem, and thanks to strong advocacy led by the system integrator community, it gained serious momentum in the industry. But the perception quickly took hold that a UNS was the end goal. It also became tightly coupled with the MQTT protocol, leading to endless MQTT vs OPC UA vs Kafka debates.
The Unified Namespace is an important component of Industrial DataOps, but it’s only one piece of the puzzle. Before pursuing a UNS, you need to address interoperability, abstraction, and data quality and reliability. After a UNS, you still need to apply contextualization, modeling, and governance to make the data truly usable. In other words, a UNS is a necessary and critical part of Industrial DataOps, but it is not sufficient by itself.
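To make the idea concrete, here is a minimal sketch of what a UNS-style address can look like in practice. It assumes an ISA-95-style Enterprise/Site/Area/Line/Cell hierarchy; all names and the payload shape are illustrative, not a standard:

```python
import json
from dataclasses import dataclass

@dataclass(frozen=True)
class UnsNode:
    """One level of an ISA-95-style hierarchy (illustrative, not normative)."""
    enterprise: str
    site: str
    area: str
    line: str
    cell: str

    def topic(self, metric: str) -> str:
        # In a UNS, the topic path itself is the semantic address of the data,
        # so any consumer can find "Press7 temperature" without tribal knowledge.
        return "/".join([self.enterprise, self.site, self.area,
                         self.line, self.cell, metric])

def payload(value, unit, ts):
    # A self-describing JSON payload: consumers need no side-channel metadata.
    return json.dumps({"value": value, "unit": unit, "timestamp": ts})

press = UnsNode("Acme", "Plant1", "Stamping", "Line3", "Press7")
print(press.topic("temperature"))
# -> Acme/Plant1/Stamping/Line3/Press7/temperature
```

In a real deployment these topic strings would typically be published to an MQTT broker, but the point of the sketch is the addressing convention, not the transport.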
Fake News #3: You Can Do It Yourself.
The Build vs Buy dilemma has been debated at length across several industrial technology categories, and Industrial DataOps is no exception. I’ve been asked countless times why a dedicated technology is needed for DataOps, and why it can’t be done with existing automation and execution systems.
While it is technically possible, I wouldn’t recommend it. Yes, you could rename every single PLC tag, enforce consistent naming conventions, and create models on top of SCADA systems. With enough effort and rigorously disciplined control engineering, you can make it work. But at what cost?
To begin with, it’s enormously resource-intensive, demanding substantial time and custom code. You might get something to work at a single site, but replicating it across multiple plants quickly becomes unmanageable. At best, you end up with a static system that works reasonably well under current conditions. The moment objectives shift, priorities change, or mergers and acquisitions happen, it quickly falls apart.
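To see why the DIY route gets brittle, consider that even a simple normalization layer boils down to a hand-maintained mapping from raw PLC tags to canonical names, one that must be updated for every new tag, naming convention, or acquired plant. A minimal sketch (all tag names below are made up for illustration):

```python
# A hand-rolled tag-normalization layer: the kind of mapping a DIY approach
# forces you to maintain, per site, forever. All tag names are illustrative.
RAW_TO_CANONICAL = {
    "PLC1_TT_101_PV": "Plant1/Line1/Mixer/temperature",
    "plc1.pt101.val": "Plant1/Line1/Mixer/pressure",
    "LINE1-MIX-SPEED": "Plant1/Line1/Mixer/agitator_speed",
}

def normalize(raw_tag: str) -> str:
    """Translate a raw controls-layer tag into a canonical model path."""
    try:
        return RAW_TO_CANONICAL[raw_tag]
    except KeyError:
        # Every renamed tag, new line, or newly acquired plant lands here
        # until someone hand-edits the mapping again.
        raise KeyError(f"No canonical name for tag {raw_tag!r}; update the mapping")

print(normalize("PLC1_TT_101_PV"))
# -> Plant1/Line1/Mixer/temperature
```

Multiply this table by tens of thousands of tags and dozens of sites, each with its own conventions, and the maintenance burden is exactly the fragility described above.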
This is why purpose-built Industrial DataOps solutions exist. Modern vendors provide no-code or low-code platforms designed specifically to handle abstraction, interoperability, and modeling at scale. They don’t eliminate the need for thoughtful strategy, but they drastically reduce the time, cost, and fragility of doing it yourself. The problem here isn’t that it’s impossible to build it; it’s the lack of robustness and flexibility. In most cases, do-it-yourself Industrial DataOps is a short-term fix to a long-term problem.
Industrial DataOps: A Prerequisite & Catalyst for Industrial AI
We’re at a pivotal moment in manufacturing. Artificial Intelligence is already disrupting industries from finance to healthcare, and it is entering manufacturing faster than cloud or mobile technologies ever did. The potential is enormous, but its success depends on high-quality, reliable, contextual data.
Industrial DataOps is the discipline and technology framework that enables manufacturers to do this by turning raw industrial data into reliable, usable intelligence. It’s emerging through two main approaches: vertically integrated full-stack platforms that provide end-to-end capabilities, and best-in-class modular solutions that let companies assemble a flexible, future-proof technology stack tailored to their specific needs.
Either way, these Industrial DataOps providers offer a range of solutions to connect diverse plant-floor data sources, including real-time telemetry and sensor streams, historians, files, and enterprise databases. Yet most still fall short in handling unstructured data such as images, audio, and video, which will be essential for the next wave of Industrial AI. The vendors that crack this challenge (whether full-stack or best-of-breed) will be the ones to catapult industrial productivity into its next stage of growth.
