Integration into systems that already work on the factory floor will be the next leap forward.
You often hear people say that large language models (LLMs) are going to change everything in business, but as LLMs continue to evolve, it’s clear that any lasting impact will depend on the business itself.
The reality is that many of these models were not built for plug-and-play use in business environments like manufacturing. While LLMs represent a historic leap in technology, they have to contend with manufacturing processes that were built on historical evidence and have taken years, if not decades, to refine. LLMs will eventually reach most aspects of daily life, but integration into systems that already work on the factory floor will be the next leap forward.
The challenge with integrating LLMs into legacy systems like IBM AS/400, AIX, other UNIX variants, or SAP ECC comes down to risk and cost. Manufacturing systems can be delicate: one small hiccup can have cascading effects, causing shutdowns or delays in operation. When integrating an LLM, manufacturers may find themselves combating several new challenges that cause disruptions.
Downtime risk
As we’ve seen recently, outages at companies like Amazon and Cloudflare knocked major parts of the internet offline, while manufacturers running fully local systems kept operating without interruption. In environments where every minute of downtime carries a real financial cost, any software outage or lag can mean heavy losses.
Data concerns
In an increasingly connected world, a data breach that exposes proprietary processes or specifications could be devastating. The recent indexing bug that exposed private ChatGPT session content in Google search only heightened that skepticism, compounded by the fear that cloud-based models might train on manufacturers’ data, potentially giving competitors an edge down the road.
Technical ability
As LLMs continue to advance and evolve, the skillset needed to manage, train, and prompt them will need to grow just as quickly. Information Technology teams at small- to mid-sized manufacturers are already carrying a significant workload just to keep existing systems running at the capacity demand requires. Manufacturers need to spend time vetting systems and software on top of focusing on tangibles like employees, machine parts, and supply chain timelines. Adding the wrong AI system, and then having to train it, eats into profit and crucial time that a plant would otherwise spend in operation. On top of this, hiring AI engineers at current market rates is out of reach for most of these companies. This is a major skills gap that will need to be addressed as plants begin to integrate AI into their systems.
The writing is on the wall
Amazon recently reported that it can double its U.S. sales by 2033 without adding any new employees. Its former CEO has also taken on a new role focused on building AI that manufactures computers, automobiles, and spacecraft. Innovations like this will reach large manufacturers first, and private equity groups buying smaller shops will not be far behind. The pricing pressure created by those competitors may be more than many relationship-driven businesses can absorb.
All of this points to the fact that AI is here to stay, but it has also led to real confusion about which kind of AI actually makes sense for manufacturing today.
Looking past the headlines
Headlines focus on trillion-dollar compute deals, photorealistic models, and other high-profile achievements. This makes it difficult for manufacturers to know where the real opportunities lie. Manufacturing does not need quantum simulations or high-end image generation. It needs consistency, reliability, and security.
Fortunately, when you look at the research, a much more practical path is emerging. Smaller language models are now capable of performing nearly as well as foundation models on many operational tasks. They require far less compute power, which makes the hardware needed to run them much more accessible. The broader AI trend is moving toward smaller, more efficient models that will eventually run on personal devices.
The bridge to AI
This shift creates a realistic pathway for manufacturers to adopt AI without replacing their legacy systems. Today, it is both practical and affordable to purchase on-premise hardware that can run models capable of powering agents for manufacturing tasks. This keeps data local and greatly reduces security risk.
While credibility is still a concern when evaluating unknown vendors, there is some good news. IBM recently open-sourced one of its Granite models, which is currently the only open-source LLM with ISO certification. For manufacturers that don’t require that level of assurance, there are hundreds of other open-source models that can run fully on-premises and perform well.
Widespread adoption will likely depend on system integrators. They already understand existing factory systems and will be the ones to partner with AI companies to bring this technology to the floor. The advantage of this approach is that it can run in parallel with current systems instead of replacing them.
When you put all of this together, it’s clear that manufacturers don’t need to rebuild their entire infrastructure for AI. They need solutions that fit the realities of the floor: local, reliable, secure, and able to work alongside what has already proven to run successfully. Smaller, on-premise models finally make that possible. The next phase of AI in manufacturing won’t be defined by the flashiest breakthroughs, but by the practical systems that help plants adopt new technology without disrupting the old.
Dan Steele is the CEO of Listening Post.