Language gaps might threaten the U.S. manufacturing revival https://www.engineering.com/language-gaps-might-threaten-the-u-s-manufacturing-revival/ Mon, 08 Dec 2025 20:00:02 +0000 https://www.engineering.com/?p=145063 AI is one answer to the hidden costs language barriers take from the average industrial business.

The data is undeniable: American manufacturing is coming home. According to the Reshoring Initiative Annual Report published in 2025, companies announced over 244,000 jobs via reshoring and foreign direct investment in 2024 alone, continuing a historic trend to shorten fragile supply chains.

However, as engineers and plant managers race to stand up new facilities and expand domestic capacity, they are colliding with a stubborn reality: the labor market is tight, and the workforce is changing rapidly. The Deloitte & The Manufacturing Institute 2024 Talent Study warns that the U.S. could face a need for 3.8 million new manufacturing employees by 2033—and without significant changes, 1.9 million of those jobs could go unfilled.

To bridge this gap, companies are tapping into a broader, more diverse labor pool. New data from the Bureau of Labor Statistics reveals that foreign-born workers now make up 19.2% of the U.S. labor force. Crucially, this concentration is significantly higher in the industrial sector, with foreign-born workers accounting for over 20% of production and material moving occupations and nearly 30% of construction and extraction roles.

This demographic shift creates a distinct systems engineering challenge on the shop floor. The industry is building the “Factory of the Future” with the most advanced robotics and IIoT sensors available, yet running them with a linguistic fragmentation that legacy communication tools simply cannot solve.

When a frantic safety alert in Spanish is met with a confused stare from an English-speaking supervisor, the gap becomes dangerous. And when a process improvement idea in Vietnamese never makes it to the plant manager, operational intelligence is lost. For U.S. manufacturing to truly scale, language barriers must no longer be treated as an HR issue, but as an operational constraint requiring a technological solution.

The hidden tax of the language gap

For the practicing engineer, efficiency is a formula, and variable human communication is often the error term in that equation. This operational drag is usually invisible until it triggers a catastrophe, creating what might be called the “hidden tax” of the modern diverse factory.

It compounds the already staggering cost of inefficiency. A 2024 report by Siemens found that unplanned downtime now costs Fortune Global 500 companies approximately 11% of their annual turnover, totaling nearly $1.5 trillion. In a linguistically fragmented workforce, this “downtime” isn’t always mechanical. Often, it is conversational.

Research from Relay indicates this tax is far higher than most executives realize: hidden labor costs due to language barriers likely run the average industrial business $500,000 or more annually. Much of this stems from bilingual employees serving as unofficial translators, spending an average of 4 hours per week translating for colleagues instead of performing their primary roles. That diversion alone costs businesses an average of $7,500 annually per bilingual worker in lost productivity.

The tax manifests in three critical areas:

Safety Latency

In an emergency, seconds matter. OSHA estimates that language barriers contribute to 25% of workplace incidents. This aligns with Relay’s findings, where 64% of respondents believe language barriers negatively impact employee safety at their facility. If a warning has to be mentally translated before it’s shouted, the accident has likely already happened. On a noisy floor, where auditory cues are already compromised, adding a linguistic filter can be fatal.

The Stalled Digital Thread

The industry has invested millions in digital transformation, yet the “last mile” of that data is often human. You cannot fully digitize a workflow if the worker cannot read the screen or understand the voice command. In fact, 86% of manufacturing and warehousing professionals believe their workplace loses productivity due to language barriers, with 42% estimating those losses exceed 5% of total output.

The “Knowledge Trap”

Experienced workers who speak English as a second language often hold deep tribal knowledge about machine quirks and material handling. Without a seamless way to share that, the knowledge remains trapped. It retires when they do, or worse, it creates an artificial ceiling for talent. Relay’s data shows that 48% of respondents agree language barriers reduce promotion opportunities for affected workers, fueling dissatisfaction and turnover.

The universal translator is no longer science fiction

For decades, the standard “solution” was a bilingual shift lead or a game of telephone. Today, Artificial Intelligence has altered the physics of this problem.

As highlighted in Deloitte’s 2025 Manufacturing Industry Outlook, 80% of manufacturing executives plan to invest significantly in smart manufacturing initiatives. The most impactful of these investments will be those that empower the human worker. The sector is moving past the era of static, one-way radios into the age of voice-first operational intelligence.

New advancements in Large Language Models (LLMs) allow for near-instantaneous translation of voice communications on the shop floor. This is distinct from consumer-grade translation apps, which often fail to grasp industrial context. Modern industrial AI is being trained to understand that “the line is down” refers to a production stoppage, not a geometric shape.

Imagine a scenario: A line operator speaks a maintenance request in Spanish into their device regarding a specific hydraulic failure. The floor supervisor hears it in English. The understanding is instant. Instead of searching for a translator, the supervisor dispatches a repair technician immediately, preventing a minor stall from becoming a major downtime event.
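To make the mechanics concrete, here is a minimal sketch of such a translation relay, assuming an OpenAI-style chat-completion endpoint; the model name, system prompt, and glossary line are illustrative assumptions rather than a description of any vendor's shipping product.

```python
from openai import OpenAI  # assumes an OpenAI-compatible chat-completion endpoint

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

# A small glossary keeps industrial shorthand from being translated literally.
GLOSSARY = "'la línea está caída' means 'the line is down' (a production stoppage, not geometry)"

def translate_shop_floor(message: str, source: str = "Spanish", target: str = "English") -> str:
    """Translate a transcribed shop-floor message while preserving industrial context."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        temperature=0,
        messages=[
            {"role": "system",
             "content": (f"You translate factory-floor radio traffic from {source} to {target}. "
                         f"Keep machine names and part numbers unchanged. Glossary: {GLOSSARY}")},
            {"role": "user", "content": message},
        ],
    )
    return response.choices[0].message.content

# Example: a maintenance request about a hydraulic failure on press 3
print(translate_shop_floor("La prensa 3 tiene una fuga hidráulica. Necesito mantenimiento urgente."))
```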

This creates a truly interchangeable workforce. It allows a plant manager to balance shifts based on skill set rather than language compatibility. It removes the structural barrier for talented workers who may be technically proficient but linguistically isolated, allowing them to rise to leadership roles.

Signal-to-noise: filtering for danger

This technology can also address the “signal-to-noise” ratio that plagues busy engineers. On a standard radio channel, a supervisor hears everything—every request for a pallet, every break check. Eventually, ear fatigue sets in, and critical information is missed.

Modern two-way radio platforms can now use AI for active cross-channel monitoring, listening for context rather than just rigid keywords. This allows a supervisor to filter out the chatter of a thousand daily radio transmissions and be alerted only when specific, high-risk topics (like “leak,” “break,” “injury,” or “lockout”) are mentioned.

This moves communication from a passive stream of noise to an active safety monitoring system. It empowers every worker on the floor with a direct line to safety protocols without the friction of complex workflows. Instead of hesitating to find the correct channel or recalling a specific alert code, a worker can simply state the issue naturally, trusting the AI to detect the urgency and notify the right team immediately.
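A minimal sketch of that kind of cross-channel filter, assuming radio traffic has already been transcribed to text; the topic list, model choice, and prompt are illustrative assumptions, not the workings of any particular platform.

```python
from openai import OpenAI  # assumes an OpenAI-compatible chat-completion endpoint

client = OpenAI()
ALERT_TOPICS = ["leak", "break", "injury", "lockout"]  # topics of interest, not literal keywords

def is_high_risk(transcript: str) -> bool:
    """Ask the model whether a transmission concerns a safety-critical topic, so a phrase like
    'hydraulic fluid is pooling under the press' is caught even if the word 'leak' never appears."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        temperature=0,
        messages=[
            {"role": "system",
             "content": ("You monitor factory radio traffic. Answer YES if the message concerns any of "
                         f"these safety topics: {', '.join(ALERT_TOPICS)}. Otherwise answer NO.")},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content.strip().upper().startswith("YES")

def route_transmission(channel: str, transcript: str, notify) -> None:
    """Surface only high-risk traffic to the supervisor; routine pallet requests stay on the channel."""
    if is_high_risk(transcript):
        notify(f"[{channel}] Possible safety issue: {transcript}")

route_transmission("line-7", "Hydraulic fluid is pooling under the press, shutting it down", print)
```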

Reliability—the prerequisite for inclusion

While AI translation is the software solution, software does not exist in a vacuum. It relies entirely on a hardware reality that is often ignored. You cannot have an inclusive, AI-enabled workforce if the device they are holding is a brick.

The current “Buy, Break, Replace” cycle of legacy hardware disproportionately affects the frontline. When batteries die mid-shift or devices shatter on concrete floors, the worker is silenced. For a non-native speaker, the psychological barrier to communication is already high. If their digital connection fails, they may be unlikely to walk across the factory floor to struggle through a face-to-face conversation in a second language. They may simply guess, or remain silent.

To support a diverse workforce, engineers must demand an “outcome-based” approach to hardware. The mindset needs to shift from buying radios to optimizing for uptime. If facilities guarantee machine uptime, why accept downtime for the human workers operating them?

A diverse workforce requires communication devices that prioritize continuity, ensuring that the tool will honor the work regardless of who is holding it. This means durability standards that match the environment, battery life that outlasts the longest shift, and connectivity that penetrates the deepest parts of the facility. Inclusion is impossible without reliability.

Engineering the human-centric future

The resurgence of American manufacturing will not be defined solely by how many chips companies can produce or how much steel can be poured. It will be defined by how quickly a new generation of workers can be integrated into a cohesive, safe, and efficient unit.

The diversity of the workforce is not a temporary condition; it is the new permanent state of the American industrial base. As the “Silver Tsunami” of retiring boomers recedes, the void is being filled by a dynamic, multicultural coalition of workers.

Reshoring is an invitation to innovate, not just in production methods, but in how the workforce connects. By leveraging AI-enabled, voice-first technologies, the industry can dismantle language barriers and cut through the operational noise. This turns a diverse workforce from a logistical challenge into a safer, more efficient, and fully synchronized asset.

The engineer’s job is to solve problems. The language gap is a big one. Fortunately, for the first time in history, we don’t need a specialized gadget to overcome it. We can now seamlessly embed intelligence directly into the tools the team is already using on the shop floor.

DHL continues enterprise-wide AI rollout https://www.engineering.com/dhl-continues-enterprise-wide-ai-rollout/ Wed, 26 Nov 2025 20:53:23 +0000 https://www.engineering.com/?p=144863 This next phase of DHL's AI strategy focuses on agentic AI for its contract logistics division.

Logistics and shipping juggernaut DHL Group is continuing to roll out its enterprise-wide AI strategy, this time leveraging a new partnership between its contract logistics division DHL Supply Chain and AI startup HappyRobot.

This phase will deploy agentic AI to streamline operational communication, customer experience and employee engagement.

DHL Supply Chain has already successfully deployed HappyRobot’s AI agents across several regions and use cases, including appointment scheduling, driver follow-up calls, and high-priority warehouse coordination.

These agents autonomously handle phone and email interactions, enabling faster, more consistent, and scalable communication.

DHL Supply Chain says it has been identifying and validating operational use cases for generative and agentic AI technologies for the last 18 months.

“Building on our extensive operational experience with data analytics, robotic process automation, and self-learning software tools, we are now integrating AI agents to drive greater process efficiency for customers while making operational roles more engaging and rewarding for employees by automating repetitive and time-consuming tasks such as manual data entry, routine scheduling, and standardized communications,” said Sally Miller, CIO DHL Supply Chain.

Deployments already in use across DHL Supply Chain target hundreds of thousands of emails and millions of voice minutes annually. AI agents support key workflows such as appointment scheduling, transport status calls, and high-priority warehouse coordination.

Close collaboration between the product and engineering teams and DHL Supply Chain’s technology departments has been crucial to designing new agentic capabilities that reflect the nuances of DHL’s operations, according to Yamil Mateo, HappyRobot’s Head of Product.

“The DHL team understood very early the scale of enablement our platform brings to their organization. They were clear that they wanted a partner with state-of-the-art technology and infrastructure.”

To enable this rollout, the team created a unified AI worker orchestration layer across email, WhatsApp, and SMS, enabling omnichannel capabilities with built-in fault tolerance and recovery. Major reliability improvements in the infrastructure are underway to support the scale and criticality of the operational processes running on the HappyRobot deployment for DHL, said Danny Luo, a senior engineer on the team.

AI agents are the new operating model

These implementations have already shown measurable impact, significantly reducing manual effort, increasing responsiveness, and enabling teams to focus on more strategic tasks and exception handling.

“AI agents help us relieve our teams from repetitive, time consuming tasks and give them space to focus on meaningful, high-value work. That’s not just operational progress—it’s also a win for our people,” said Lindsay Bridges, EVP Human Resources at DHL Supply Chain.

HappyRobot’s platform enables fully autonomous AI agents to interact via phone, email, and messaging, while integrating seamlessly with DHL’s internal systems. And DHL Group continues to expand its AI strategy across all divisions. Beyond current pilots, further use cases are also being tested.

“At HappyRobot, we envision AI workers coordinating global supply chain operations—not just moving data, but actively managing workflows,” said Pablo Palafox, CEO of HappyRobot. “Too often, people are stuck maintaining systems and inboxes, with little time to solve exceptions or improve processes. DHL recognized early on the potential of AI agents as a new operating layer—one that brings speed, visibility, and consistency to logistics.”

Fixing AI application project stalls – part 2 https://www.engineering.com/fixing-ai-application-project-stalls-part-2/ Mon, 24 Nov 2025 19:44:34 +0000 https://www.engineering.com/?p=144815 A few more thoughts on how engineers can prevent or rapidly fix stalls in AI application projects.

Artificial intelligence (AI) is rapidly reshaping the engineering profession. From predictive maintenance in manufacturing plants to design in civil and mechanical projects, AI applications promise to increase efficiency, enhance innovation, shorten cycle times and improve safety.

Yet, despite widespread awareness of AI’s potential, many engineering organizations struggle to progress beyond the pilot stage. AI implementations often stall for various reasons—organizational, technical, cultural, and ethical. Understanding these barriers and fixing them is crucial for leaders who aim to advance AI from an intriguing, much-hyped new concept into a practical engineering capability.

To read the first article in this two-part series, click here.

Skills shortages and training deficiencies

The successful application of AI in engineering demands multidisciplinary collaboration. Data scientists understand the algorithms, but not necessarily the physical principles governing structures, materials, or thermodynamics. Conversely, engineers possess domain expertise but may lack proficiency in machine learning, statistics, or data visualization. These gaps create communication barriers and implementation bottlenecks, stalling progress.

To bridge the divide, organizations promote “AI literacy” among engineers and “engineering literacy” among data professionals. Cross-functional teams, comprising engineers, data scientists, and IT specialists, are often the key to translating technical concepts into practical outcomes. Continuous professional development programs, partnerships with universities, and in-house training initiatives all contribute to building expertise. The future of engineering will belong to those professionals who can interpret both finite element analysis and neural network outputs with equal fluency.

Gaps in the computing infrastructure

With the explosion in data volumes and the high resource demands of AI applications, the computing infrastructure that supports engineering groups can be overwhelmed. That leads to storage shortages, poor performance, and unplanned outages. These issues stall engineers’ productivity.

Engineering organizations can add to the capacity of their computing infrastructure by:

  • Investing in additional on-premises capacity.
  • Moving some AI applications from servers to high-performance workstations.
  • Moving some applications to a cloud operated by a cloud service provider (CSP).
  • Moving some applications from on-premises systems to the computing infrastructure of a Software-as-a-Service (SaaS) vendor.

Organizational resistance and cultural barriers

Engineering culture is grounded in precision, accountability, and safety. These values, while essential, can also foster skepticism toward new technologies. Some engineers question the validity of AI-generated recommendations if they cannot trace their underlying logic. Project managers may hesitate to delegate decisions to AI systems they perceive as “black boxes.” Such caution and resistance stall progress.

Overcoming this resistance requires transparency and inclusion. AI models used in engineering should emphasize explainability—demonstrating how inputs lead to outputs. Involving engineers in AI model development builds trust and ensures the results align with physical realities. IT leadership must communicate that AI is a decision-support tool, not a replacement for engineering judgment and expertise. By framing AI as an enabler of better engineering rather than an external disruptor, organizations can foster acceptance and enthusiasm.

Lack of ethical, legal, and safety considerations

Engineering operates within strict regulatory and ethical frameworks designed to protect public safety. AI introduces new dimensions of risk, such as:

  • Discrimination and toxicity
  • Privacy and security
  • Misinformation
  • Malicious actors and misuse
  • Human-computer interaction
  • Socioeconomic and environmental
  • AI system safety, failures and limitations

If an AI-driven application predicts incorrect stress tolerances or misjudges maintenance intervals due to one or more of these risks, the consequences can be catastrophic. Because AI project teams are new to these risks, they may become overly cautious, which can stall progress.

To mitigate these risks, engineering firms establish rigorous model validation procedures, document decision processes, and ensure compliance with industry standards and safety regulations. An AI ethics and safety review committee should evaluate new AI applications before they are deployed. Transparency and accountability are not optional in engineering—they are fundamental responsibilities.

Underestimating the complexity of change management

Introducing AI is not simply an IT upgrade—it is a transformation of the organization. Engineering workflows, approval hierarchies, and performance metrics often require reconfiguration to incorporate AI insights effectively. AI projects stall when leadership underestimates the organizational change necessary to operationalize AI results.

A deliberate, people-focused change management approach is essential. It includes stakeholder engagement, pilot demonstrations, and training for every rollout. By delivering multiple tangible improvements—such as reduced maintenance costs or faster design cycles—AI projects build momentum for broader adoption and ultimately achieve a lasting impact.

AI has immense potential to revolutionize engineering practice. It can enhance design optimization, improve maintenance predictability, and elevate overall manufacturing efficiency. However, realizing that potential requires more than algorithms—it requires alignment, trust, and integration.

AI projects stall when skills are in short supply, the organization is resistant, or workflows resist adaptation. Success demands high-quality data, transparent governance, and an organizational culture that embraces continuous learning. Engineering has always been about solving complex problems through disciplined innovation. Implementing AI effectively is the next evolution of that successful tradition.

The organizations that combine engineering rigor with AI insights will not only overcome today’s barriers but also define their own future.

A 4-level framework for AI in PLM, part 2 https://www.engineering.com/a-4-level-framework-for-ai-in-plm-part-2/ Tue, 18 Nov 2025 21:24:21 +0000 https://www.engineering.com/?p=144685 Orchestration, custom models, and strategic guidance.

Welcome back to the second half of our series on how AI is making its way into PLM. In Part 1, we kicked things off by introducing a four-level framework that shows how organizations can move up the AI maturity ladder, depending on their capabilities:

  • Level 1: Tool-Native AI
  • Level 2: AI Across Enterprise Systems
  • Level 3: Orchestrating Work with AI
  • Level 4: Building Custom AI Models

Part 1 covered Levels 1 and 2, establishing that while most organizations progress sequentially, the sequence isn’t mandatory. Part 2 explores Levels 3 and 4 and concludes the series by providing realistic adoption timelines through 2027 and offering strategic guidance for navigating this transformation.

Level 3: Orchestrating work with AI                                  

Goal-oriented AI that plans and orchestrates workflows.

Level 3 represents a fundamental change. Instead of responding when asked questions, AI becomes goal oriented. You give it an objective, and the AI generates its own plan to achieve it, orchestrating workflows and adapting as conditions change. This can be user-initiated (“prepare this assembly for manufacturing review and route it appropriately”) or fully autonomous (“when a supplier’s delivery slips, identify impacts and coordinate the response”).

The boundary between Level 2 and Level 3 is often blurred in solution provider positioning. Many capabilities marketed as “agentic AI” are sophisticated AI assistants that synthesize information and recommend actions, even proactively. The distinction is whether AI is generating multi-step workflows to achieve goals (Level 3) or providing information and recommendations that humans use to decide next steps (Level 2). The line isn’t always clear, which is why setting realistic expectations matters.

Level 3 is where the gap between what’s technically possible and what organizations can deploy at scale becomes most apparent. You’ll see impressive demonstrations at conferences and hear about pilot projects, but production-scale deployments with measurable ROI are rare.

Consider this example: an engineer proposes a material change in a critical component. AI detects the change request, identifies all affected assemblies and downstream dependencies, checks regulatory requirements and certifications that need updating, queries suppliers for material availability and lead times, generates a risk assessment by analyzing historical failure data, and creates a coordinated review workflow, routing the change to appropriate stakeholders based on impact analysis.
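To make the pattern concrete, here is a minimal sketch of goal-oriented orchestration with an explicit approval boundary; the step names, data values, and autonomy flags are hypothetical placeholders, not any solution provider's API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    run: Callable[[dict], dict]   # takes the shared context, returns updates to it
    needs_approval: bool = False  # governance boundary: pause for a human before acting

def execute_plan(goal: dict, plan: list[Step], ask_human: Callable[[str], bool]) -> dict:
    """Work through a goal-oriented plan, stopping at any step the governance policy
    says requires human sign-off."""
    context = dict(goal)
    for step in plan:
        if step.needs_approval and not ask_human(f"Approve step '{step.name}'?"):
            print(f"Stopped at '{step.name}'; routed to a human owner.")
            return context
        context.update(step.run(context))
        print(f"Completed: {step.name}")
    return context

# Hypothetical plan for the material-change scenario described above; the data is invented.
plan = [
    Step("identify affected assemblies", lambda ctx: {"assemblies": ["A-102", "A-118"]}),
    Step("check regulatory impact",      lambda ctx: {"certifications_to_update": 2}),
    Step("query supplier lead times",    lambda ctx: {"aluminum_lead_time_weeks": 8}),
    Step("dispatch review workflow",     lambda ctx: {"review_started": True}, needs_approval=True),
]

execute_plan({"part": "FAST-3321", "from_material": "steel", "to_material": "aluminum"},
             plan, ask_human=lambda question: True)
```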

Solution providers demonstrate these concepts in controlled settings. Siemens Industrial Copilot (co-developed with Microsoft for broader manufacturing applications, separate from the Teamcenter Copilot discussed in Part 1) and SAP Joule are examples, but most implementations remain pilots as of this writing.

The challenges are primarily organizational and legal. Governance frameworks must define AI decision authority boundaries for every workflow. When can AI act autonomously? When does it need approval? These boundaries vary by industry and company, often taking years to develop. Liability questions follow: “Who’s responsible when AI autonomously approves a material substitution that later fails?”

In regulated industries like aerospace, automotive, and medical devices, every decision must be auditable. Current AI capabilities often lack the transparency regulators need, and no specific regulatory frameworks for agentic AI exist yet. Organizations don’t grant autonomy to systems they don’t trust, and that trust develops gradually, typically requiring 18 or more months of seeing AI work correctly at Level 2. Change management and cultural acceptance often determine success or failure more than technical capability.

Level 3 requires everything from Level 2, operating reliably. Without this foundation, you can’t troubleshoot autonomous systems. You also need enterprise-grade integration for action coordination, workflow engines with rollback capabilities, robust error handling, and real-time monitoring. Equally important are governance frameworks that define autonomy boundaries, liability frameworks, and regulatory-compliant audit trails.

For 2026 and 2027, expect limited controlled production deployments among well-resourced organizations that have invested years in building the necessary governance frameworks. Most organizations will still be working on Level 2. Broader Level 3 adoption is likely beyond 2029.

Level 4: building custom AI models

Custom models tailored to proprietary needs and constraints.

Most companies will succeed at Levels 1 through 3 by leveraging the foundation models provided by solution providers. However, for certain organizations, it makes sense to train or fine-tune their own models—either to address specialized requirements or to capitalize on proprietary data that serves as their intellectual property and competitive differentiator. This is where Level 4 comes into play.

Level 4 examples span multiple AI types, including custom vision models for composite inspection, fine-tuned language models for regulatory documentation, physics-informed neural networks for specialized simulations, and AI surrogate models replacing expensive computational analyses with your proprietary test data. These custom models don’t operate in isolation.

A custom AI surrogate model accelerates generative design exploration by replacing expensive simulations (Level 1). A custom language model trained on your technical documentation enables better information synthesis across your digital thread (Level 2). Custom supply chain models enable autonomous decision-making about supplier selection and inventory management (Level 3).

For Level 4, you need the right tools, significant computational resources, expertise, and clear justification. Perhaps 10 to 15% of organizations will pursue Level 4, typically those with clear competitive needs in regulated industries. Level 4 is optional, not required, but for those with the right needs and capabilities, it will provide a genuine competitive advantage.

Realistic adoption timeline

Let’s look at where this is heading based on current trajectories, grounded in what solution providers are shipping and current industry adoption patterns.

Late 2025 (Current State): Level 1 has become widespread with early adopters proving value. Smart money went into Level 2 foundation work: digital thread establishment, data quality improvement, and integration infrastructure. Industry has learned that data quality matters more than algorithm sophistication.

2026: Level 1 becomes near universal by year end, table stakes rather than competitive advantage. Level 2 will continue to emerge, and this is where differentiation will happen. Digital thread maturity will separate leaders from followers. The skills gap will become even more apparent. Companies that tried jumping to Level 3 or 4 without a solid foundation will be course-correcting.

2027: Level 2 will be common among efficient organizations, and this is where competitive advantage will live. Organizations with a mature digital thread and strong data governance will clearly separate themselves. Data quality will remain the top challenge; executing well on this work is what will matter. Level 3 will remain a niche; production deployments will primarily exist in industries that have already invested heavily in connected manufacturing systems and can benefit from high-volume, repetitive decisions, such as automotive and semiconductor manufacturing. Organizational and governance barriers will prove higher than current enthusiasm suggests. Broader Level 3 adoption is likely beyond 2029. Level 4 will be utilized by under 15% of companies, those with specialized competitive needs.

By 2027, three distinct groups will emerge:

  • Early adopters (likely under 20%): Level 2 complete and operating smoothly, Level 3 operational in controlled pilots, some pursuing Level 4.
  • Fast followers (perhaps 30%): Level 1 complete and Level 2 fully deployed or underway.
  • The rest (the majority): Still completing Level 1 and 2 fundamentals, struggling with data quality, and learning expensive lessons.

Strategic Guidance

This guidance from CIMdata is structured along three core pillars: the Foundation (Data & Time), Technology (Openness & Governance), and Action (Skills & Leadership).

  1. The Foundation: Data Integration and Time Investment

The digital thread is your foundation for Level 2 and beyond, connecting PLM, ERP, MES, quality, supply chain, service, and other key data-centric systems with consistent data models and real-time synchronization. As noted in Part 1, establishing this typically takes 18 to 24 months, often extending to 24 to 36 months. There is no shortcut.

2. Technology: Openness and Governance

Platform openness increasingly matters. Open platforms with robust APIs enable Level 2 and 3 success, as well as Level 4 capabilities. Closed platforms limit you to whatever AI capabilities solution providers choose to offer.

3. Action: Skills and Leadership

A. Skills Development:

Skill requirements and implementation challenges evolve significantly across the four levels.

B. By Role:

Here’s what the next few years actually look like.

Level 1 is expected to be near-universal by the end of 2026; it’s just table stakes.

Level 2 is where competitive advantage lives through 2026 and 2027.

Level 3 remains a niche, concentrated in regulated industries, with broader adoption more likely after 2029. Currently, most solution provider Level 3 roadmaps are aspirational.

Across all these levels, we’re talking about what CIMdata refers to as “Augmented Intelligence.” That is, AI-enabled capabilities that enhance human decision-making rather than attempt to replace it. Whether AI is helping you find information faster (Level 1), synthesizing data across systems (Level 2), or executing workflows within boundaries you’ve defined (Level 3), the goal is the same: making people more effective, not making them obsolete.

Most organizations will progress sequentially because each level builds capabilities for the next. What determines success is understanding and meeting the prerequisites for your target level.

The data foundation work matters more than which AI algorithm you choose. Level 2 is where competitive success gets defined for the next 3-5 years. And remember, plan for 45% overruns on every timeline. This isn’t pessimism, just reality.

AI may not bring a sudden revolution, but its true power lies in becoming an unseen force that transforms how we work—much as email did decades ago. Organizations that commit now to building a robust data foundation and mastering each step will be the ones leading the way, setting new standards for excellence and innovation. Instead of learning hard lessons after the fact, they will be shaping the future, empowered by AI that amplifies their strengths and unlocks new possibilities.

 The journey starts today—those who embrace it boldly will not just adapt, but inspire, thriving in a world where technology and human ingenuity drive progress together.

Fixing AI application project stalls – Part 1 https://www.engineering.com/fixing-ai-application-project-stalls-part-1/ Fri, 14 Nov 2025 21:35:30 +0000 https://www.engineering.com/?p=144631 AI application projects stall easily due to these common issues.

Artificial intelligence (AI) is rapidly reshaping the engineering profession. From predictive maintenance in manufacturing plants to design in civil and mechanical projects, AI applications promise to increase efficiency, enhance innovation, shorten cycle times and improve safety.

Yet, despite widespread awareness of AI’s potential, many engineering organizations struggle to progress beyond the pilot stage. AI implementations often stall for various reasons—organizational, technical, cultural, and ethical. Understanding these barriers and fixing them is crucial for leaders who aim to advance AI from an intriguing and much hyped new concept into a practical engineering capability.

Lack of clear AI application objectives

Engineering organizations often approach AI from a technology-first perspective rather than a business or technical problem-first mindset. Senior management may task a team to “use AI to improve productivity” without specifying a more focused aspect of productivity; even seemingly narrower mandates, such as reducing design rework, optimizing supply chain logistics, or forecasting equipment failures, are still too broad on their own. This ambiguity stalls progress by diffusing effort, yielding uneven results, and wasting resources.

Practical AI projects in engineering will be more successful if they begin with tightly defined business and technical objectives. For example:

  • A civil engineering firm might aim to reduce project delays caused by material shortages by using enhanced AI-driven demand forecasting.
  • A mechanical engineering team could target reduced downtime through more sophisticated AI-driven predictive maintenance analytics.
  • An electrical engineering work group could utilize AI to simplify circuit board designs, thereby improving manufacturing quality.

Aligning AI projects with measurable engineering outcomes—such as higher throughput, improved energy efficiency, or longer asset lifespan—creates both focus and accountability. Without such clarity, AI projects remain academic exercises rather than operational solutions.

Insufficient data quality

Engineering operations generate immense quantities of data—from versioned design drawings and manufacturing sensor readings to field inspection reports. However, this data is rarely standardized or integrated. Legacy systems store data in incompatible formats, while field-collected data is often incomplete or inconsistent. Moreover, in many engineering environments, critical data resides in siloed applications, on isolated local servers or externally with partners. Sometimes, the impediment is that digital data transformation has not advanced sufficiently. Poor data quality and incomplete data lead to unreliable models, eroding confidence among engineers who depend on sustained data accuracy. These data issues stall progress until the data quality is improved.

AI models require reliable, high-quality data to produce accurate insights. Addressing this issue demands robust data governance — defining ownership, standardizing data formats and values, simplifying accessibility, and ensuring traceability. For large-scale engineering enterprises, implementing centralized data lakehouses or data warehouses can provide a unified data foundation. Without disciplined data management, even the most advanced AI applications cannot deliver actionable results.

Unrealistic expectations

Sometimes, engineering teams, in their enthusiasm, over-promise what they can deliver. Examples include:

  • More functionality than their AI model and the available data can achieve.
  • Overly aggressive AI project timelines.
  • Underestimated required resources and related project budget.

These issues lead to management disappointments and a reluctance to commit to additional AI application projects, which stall progress.

Setting and managing expectations is never easy. Promising too little will not create enthusiasm and support. Promising too much is guaranteed to lead to disappointment. Here are some techniques that have proven successful for engineers:

  • Mock up expected results with visualizations using PowerPoint slides.
  • Start with an exploratory prototype.
  • Conduct an AI pilot project with sufficient scope to enable the follow-on project to deliver a production-quality AI application.
  • Conduct an AI risk assessment, share the results with management and incorporate mitigations into your project plan.

Inadequate integration with existing engineering workflows

Unlike software-driven processes, engineering workflows are deeply intertwined with physical processes, regulatory compliance, and long-established methodologies. Introducing AI into these workflows often exposes integration challenges. For example:

  • An AI model that predicts equipment failure may not easily link to existing maintenance scheduling systems or supervisory control and data acquisition (SCADA) platforms.
  • AI-generated design recommendations might not align with CAD software data standards or quality assurance protocols.
  • AI-generated recommendations that alter supply chain vendors or order quantities may be challenging to implement within the existing suite of applications.

These integration issues frequently stall progress. Engineers may see AI as disruptive or unreliable if it requires substantial changes to established processes or applications.

The solution to AI integration issues lies in collaborative systems engineering. This concept, which facilitates smoother integration, consists of:

  • Designing AI applications that complement, rather than replace, existing systems.
  • Building application programming interfaces (APIs) that integrate new AI applications with existing systems.
  • Adopting a more modular system architecture that creates simpler integration points.
  • Integrating AI incrementally because it allows for easier absorption by the organization compared to a sweeping replacement.
  • Ensuring backward compatibility where feasible.

AI has immense potential to revolutionize engineering practice. It can enhance design optimization, improve maintenance predictability, and elevate overall manufacturing efficiency. However, realizing that potential requires more than algorithms—it requires alignment, trust, and integration.

AI projects often stall when data is fragmented, objectives are unclear, or integration is overly complex. Success demands clear business objectives, high-quality data, and determined leadership. Engineering has always been about solving complex problems through disciplined innovation. Implementing AI effectively is the next evolution of that tradition.

Organizations that combine engineering rigor with AI insights will not only overcome today’s barriers but also define their own future.

From seats to outcomes: rethinking engineering software licensing https://www.engineering.com/from-seats-to-outcomes-rethinking-engineering-software-licensing/ Mon, 10 Nov 2025 20:35:48 +0000 https://www.engineering.com/?p=144499 From named users to token meters to possible outcome pricing, what does a fair price for design, make, and operate really look like?

As the industry speculates about outcome-based pricing, the central question is whether the value created by digital engineering software can truly be measured, tracked, and monetized. Enterprise platforms that underpin product development—from PDM and PLM to BIM—have moved through distinct licensing eras. Perpetual ownership gave way to named-user subscriptions, which were later augmented by tokenized, consumption-based access. Tokenization complements full-time seats rather than replacing them, providing flexibility for SMEs, project peaks, or intermittent users. Today, vendors are asking whether the next stage should tie price to tangible outcomes.

Autodesk’s roadmap, highlighted during its 2025 Investor Day, exemplifies this evolution. The company has traded static seat counts for finer-grained visibility and elasticity across design, BIM, and manufacturing toolsets. This trajectory is instructive, but it is not proof that outcome-based pricing is broadly deployable. Moving from counting access or usage to measuring value raises practical challenges around attribution, auditability, and the preservation of experimentation and innovation.

As AI increasingly reshapes how engineers, designers, and manufacturers work, it opens the door to new pricing paradigms. Software that can measure or assist in the generation of design, simulation, and operational outcomes could eventually enable pricing aligned with real business value, rather than mere access.

From seats to consumption

Autodesk’s licensing journey can be read in three pragmatic phases:

Phase 1: Named-user subscriptions (post-2020)

Autodesk moved away from perpetual and shared network seats toward identity-tied subscriptions. That change shifted entitlement from pooled licenses to named individuals, forcing procurement and IT teams to redesign entitlement, single-sign-on, and audit processes. It also delivered behavioral telemetry that vendors and customers can now use to understand real engagement patterns.

Phase 2: Token-based consumption (2021+)

With the introduction of Autodesk Flex, the company made consumption explicit: organizations can buy token pools and redeem them for product access when needed. Token-based access complements, rather than replaces, full-time subscriptions—providing flexibility for occasional users, project peaks, or smaller teams that cannot justify full-time seats. Tokenization also delivers metered usage data across PDM, BIM, and manufacturing toolsets, bridging traditional subscriptions and more variable pricing approaches.
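To illustrate the mechanics, here is a toy sketch of token-pool metering; the token rates and product names are invented for illustration and do not reflect Autodesk Flex pricing.

```python
# Toy model of token-based consumption: a shared pool that occasional users draw from.
TOKENS_PER_DAY = {"cad": 6, "cfd_simulation": 9}   # invented daily rates, for illustration only

class TokenPool:
    def __init__(self, tokens: int):
        self.balance = tokens
        self.ledger = []                             # auditable record of who used what

    def redeem(self, user: str, product: str) -> bool:
        """Charge one day of access for a product; refuse if the pool is exhausted."""
        cost = TOKENS_PER_DAY[product]
        if cost > self.balance:
            return False                             # procurement must top up the pool
        self.balance -= cost
        self.ledger.append((user, product, cost))
        return True

pool = TokenPool(500)
pool.redeem("contractor_17", "cfd_simulation")       # an occasional user needs no full-time seat
print(pool.balance, pool.ledger)                     # 491 [('contractor_17', 'cfd_simulation', 9)]
```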

Phase 3: Outcome orientation (future, TBC)

Autodesk has signaled interest in moving toward pricing aligned with measurable results as AI and automation become routine in workflows. AI could facilitate this transition by generating or tracking deliverables—design iterations, simulations, or digital twin scenarios—and attributing results to specific workflow steps. Yet a critical question remains: are end-user OEMs asking for outcome-based pricing, and are they ready to accept the complexity, transparency, and governance it demands? Standardizing KPIs, audit processes, and governance frameworks is essential before outcome pricing can move from concept to practice.

In short, Autodesk and peers are assembling the scaffolding—identity, metering, telemetry, and cloud orchestration—but defining what is legitimately billable and provably attributable as an “outcome” remains the hardest challenge.

True outcome-based pricing remains rare

Across major enterprise platforms, the dominant pattern mirrors Autodesk: retire perpetual licenses, adopt subscriptions, transition to SaaS, and layer in consumption mechanics where flexibility is needed. True outcome-based pricing remains virtually non-existent for enterprise software licensing—except in highly bespoke arrangements with custom triggers—or is generally limited to service contracts or narrowly scoped AI-enabled deliverables.

Dassault Systèmes offers outcome-oriented services through BIOVIA contract research, while core 3DEXPERIENCE licenses remain subscription-based. Siemens employs token-based or value-linked licensing for NX and Simcenter advanced capabilities, illustrating hybrid consumption. PTC, AVEVA, and SAP similarly combine subscriptions with consumption-based billing, but outcome-based monetization is largely service-bound.

Consumption models are attractive because they are auditable, scalable, and predictable. Outcome pricing introduces complexity and risk, which many vendors prefer to contain within services or pilot programs. Traditional per-seat licensing—perpetual or subscription-based—is predictable but poorly aligned with actual value creation. Projects expand, teams flex, and simulations run thousands of iterations, yet costs remain static. Vendors are now moving toward usage- and, potentially, outcome-based models where spend is linked to measurable performance, including compute hours, connected devices, or digital twin assets. Cloud and SaaS transitions give vendors greater visibility into both process and data usage. Importantly, outcome-based pricing must not constrain creative exploration or experimentation by imposing prohibitive costs on usage.

Defining and measuring outcomes

Outcome-based pricing depends on measurable, attributable KPIs. Potential candidates include:

  • Validated design deliverables or approved configuration releases.
  • Reduced engineering change turnaround or rework rates.
  • Reuse rates of certified parts and modules.
  • Clash-free coordinated BIM models and verified constructability.
  • Measurable sustainability improvements from design decisions.
  • AI-specific metrics such as AI-generated preliminary designs, automated simulation throughput, issue resolution, recommendation adoption, time saved per iteration, and sustainability impact.

The challenge is not simply counting events but reliably attributing results across humans/teams, AI agents, and workflow systems. Historically, most vendors have linked outcomes to services rather than software licensing because services provide controlled delivery environments, allow shared risk and bespoke contracts, scale via project teams, and reduce buyer uncertainty while preserving flexibility for exploratory work. This explains why consumption-based and tokenized licensing remain the practical reality, while outcome-based billing continues to be niche and largely unproven at scale.

A pragmatic transition path

A realistic migration toward outcome pricing would require incremental steps:

  • Standardizing telemetry and consumption metrics so usage can be linked to workflow stages.
  • Piloting event-based KPIs with capped financial exposure (e.g., paying per validated design or per simulation run).
  • Building hybrid contracts that blend subscriptions with outcome bonuses, preserving budget predictability while aligning incentives (a toy illustration follows this list).

AI could accelerate this path by tracking measurable outputs, attributing value, and automating data capture, making outcome-based pricing more credible and auditable.
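As a toy illustration of the hybrid structure in the last point, the sketch below computes an invoice from a subscription base plus a capped outcome bonus; every figure and KPI here is invented for illustration, not drawn from any vendor's contract.

```python
def hybrid_invoice(base_subscription: float, validated_designs: int,
                   bonus_per_design: float, bonus_cap: float) -> float:
    """Subscription base plus an outcome bonus, capped so budgets stay predictable."""
    outcome_bonus = min(validated_designs * bonus_per_design, bonus_cap)
    return base_subscription + outcome_bonus

# Invented figures: $100,000/year base, $400 per validated design release, bonus capped at $40,000.
print(hybrid_invoice(100_000, 130, 400, 40_000))   # 100000 + min(52000, 40000) = 140000
```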

Autodesk’s move from seats to token meters aligns price with usage across the product engineering stack, while the broader industry experiments with outcome-linked models supported by AI, telemetry, and governance. ERP, CRM, and MES software have similarly shifted from perpetual licenses to subscription and consumption-based approaches, though outcome-based pricing remains equally rare. The question remains: as technology and measurement tools advance, will outcome-based licensing become a practical reality?

Jabil acquires Hanley in $725M data center play https://www.engineering.com/jabil-acquires-hanley-in-725m-data-center-play/ Wed, 05 Nov 2025 14:54:13 +0000 https://www.engineering.com/?p=144407 This is the latest of several acquisitions by Jabil focused on data center operations and services.

Florida-based engineering and manufacturing services company Jabil Inc. has agreed to acquire Hanley Energy Group for $725 million in an all-cash transaction.

Headquartered in Ashburn, Virginia, Hanley designs and manufactures energy management and critical power solutions serving the data center infrastructure market.

The deal is expected to close in the first quarter of 2026, subject to customary closing conditions and regulatory approvals.

In June 2025, Jabil announced plans to invest approximately $500 million over the next several years to expand its footprint in the Southeast United States to support cloud and AI data center infrastructure customers.

This deal adds Hanley Energy Group’s extensive expertise in power systems and energy optimization to Jabil’s existing power management solutions for data centers, including low and medium voltage switch gear, PDUs, and UPS systems.

Ed Bailey, Jabil SVP and Chief Technology Officer, Intelligent Infrastructure, said Hanley brings expertise in designing, building, and commissioning turnkey mission-critical power solutions from the grid all the way to the hyperscale data center.

“[This] complements Jabil’s growing capabilities in AI data center infrastructure. With Hanley’s deep technical know-how and comprehensive lifecycle services, including design, consulting, deployment, commissioning, and field support services, we will be even better positioned to deliver secure, reliable, and energy-efficient power solutions to our global customers,” said Bailey.

Matt Crowley, Jabil’s Executive Vice President, Global Business Units, Intelligent Infrastructure, said the deal gives Jabil the capability to not only design and manufacture these solutions, but also to deploy and service them in the data center.

In Jabil’s 2025 year-end report, CEO Mike Dastoor said the company expects revenue of $31.3 billion, core operating margins of 5.6%, and adjusted free cash flow of more than $1.3 billion. Dastoor said the company sees “significant opportunities” ahead in areas such as AI data center infrastructure, healthcare, and advanced warehouse and retail automation, and said Jabil will be “deploying capital in ways that strengthen our capabilities and enhance shareholder returns.”

Based in St. Petersburg, Florida, Jabil operates 30 sites across the United States, combining automation, robotics, and process optimization.

Beyond the hype: a 4 level framework for AI in PLM https://www.engineering.com/beyond-the-hype-a-four-level-framework-for-ai-in-plm/ Thu, 30 Oct 2025 19:26:40 +0000 https://www.engineering.com/?p=144258 Part 1: Tool-native AI and enterprise integration.

For decades, CIMdata has tracked Artificial Intelligence’s (AI) role in PLM, particularly within various computer-aided technologies like CAD and CAE. More recently, and especially with the advent of generative AI and now Agentic AI, the proposed uses and expected applications have exploded.

CIMdata recognized AI’s greater potential back in 2019, when we introduced discussions related to AI’s role in augmenting human intelligence in support of the product lifecycle and its various enabling technologies. And in 2024, we further recognized its role by making ‘augmented intelligence’ one of CIMdata’s Critical Dozen—CIMdata’s twelve enabling elements of a digital transformation. Our emphasis then, and still today, is that AI enhances human decision-making, rather than attempting to replace it. That distinction matters now more than ever.

As solution providers accelerate AI capabilities across product lifecycle management (PLM) solutions, the conversation has shifted from whether to adopt AI to how to adopt it effectively. What we’re seeing is companies jumping to advanced capabilities without understanding the prerequisites, with costly results. Additionally, the gap between solution provider roadmaps and organizational readiness is leading to expensive failures.

In this two-part article, CIMdata introduces a Four-Level Framework for AI in PLM (Figure 1), developed as a practical tool to navigate this gap and help companies move from basic use to strategic advantage. Each level has distinct prerequisites and capabilities. Understanding these levels will help you make realistic decisions about where to invest and what success requires.

Part 1 covers the foundational levels: using AI within individual tools (Level 1) and extending AI across your enterprise systems (Level 2). Part 2 will tackle orchestrating work with autonomous AI agents (Level 3) and building custom AI models (Level 4), complete with realistic timelines through 2027 and actionable strategic guidance for navigating this transformation.

Figure 1: The four levels of AI maturity in PLM. (Image: CIMdata)

One clarification: this article addresses the full spectrum of AI in PLM.

This includes traditional machine learning (ML), such as physics-informed neural networks, AI surrogate models for accelerating simulation, predictive maintenance algorithms, and computer vision for quality inspection. It also encompasses AI-assisted topology optimization and generative design, as well as generative AI, including large language models (LLMs) that enable conversational interfaces.  While our examples focus primarily on LLM-based AI, since it’s driving current investment decisions, the framework applies equally to all AI types, whether conversational or computational. The prerequisites remain the same: clean data, integrated systems, skilled teams, and governance frameworks.

The Framework: Four Levels of AI Maturity

AI maturity in PLM can be understood through four levels, each with distinct capabilities, prerequisites, and organizational requirements.

Level 1: AI built into individual tools (e.g., CAD, PDM, and simulation). You ask AI for answers using data from that single system. Prerequisites are minimal: tool adoption, basic prompt literacy, and output validation.

Level 2: AI accesses information across your entire enterprise, which is connected through a digital thread. AI synthesizes information from PLM, ERP, manufacturing, procurement, quality, and other enterprise databases. Prerequisites are substantial: digital thread infrastructure, data governance, and integration expertise.

Level 3: Introduces AI agents. Instead of just answering questions like the AI assistants from Levels 1 and 2 do, AI agents monitor systems, detect events, and execute multi-step workflows with varying levels of autonomy. Prerequisites are extensive: everything from Level 2 plus data governance frameworks, years of operational experience, and cultural acceptance of AI acting without human intervention.

Level 4: Organizations build custom AI models for competitive advantage or specialized requirements. This can happen alongside any other level. It requires AI/ML expertise, data science capabilities, compute power, and clear justification.

Most organizations progress sequentially because each level builds capabilities for the next. However, organizations with existing capabilities can start at higher levels. What matters is having properly enabled the prerequisites for your target level.

Level 1: Tool-Native AI

Single-tool AI that answers your questions.

At Level 1, you’re working with AI assistants that solution providers have integrated directly into their solutions. The pattern is straightforward: you ask a question, and the AI searches data that is stored, indexed, and managed within that specific tool to compose an answer. This represents the simplest form of AI adoption; you purchase the solution and start using the AI.

Level 1 capabilities include:

  • Conversational search where engineers ask questions in plain language and get synthesized answers. Tools like Siemens Teamcenter Copilot (their PLM-specific assistant, not to be confused with Siemens Industrial Copilot that will be discussed in Part 2), AnsysGPT, and PTC Creo+ AI Assistant are examples of solutions with this capability.
  • Natural language command discovery where engineers ask, “How do I create a circular pattern?” and AI suggests the relevant commands and menu locations, eliminating the need to search through documentation or complex menu structures.
  • Natural language queries against tool data, where "show me all fasteners with corrosion resistance rating greater than eight used in marine applications" returns relevant results (a minimal sketch of this pattern follows the list).
  • AI-driven design optimization where generative design algorithms explore design alternatives based on constraints and requirements. For example, Autodesk Fusion 360’s generative design creates optimized geometries from performance requirements and boundary conditions. For specialized domains like ship design or aerospace structures, some solution providers build custom models (Level 4 capabilities) and embed these into their tools, making domain-specific AI acceleration available to Level 1 users without requiring them to develop custom models themselves.
  • Proactive recommendations based on patterns in tool data, like flagging “this part is used in 12 assemblies.” Example: Siemens NX’s Design for Manufacture Advisor analyzes geometry and identifies manufacturing challenges.
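
To make the natural-language query pattern above concrete, here is a minimal sketch of how a question like the fastener example could be reduced to a structured filter over a single tool's indexed data. The part records, field names, and the keyword-based intent extraction are all hypothetical stand-ins; a commercial assistant would use an LLM and the tool's real metadata model rather than hard-coded rules.

```python
# Minimal sketch of the Level 1 pattern: a question answered only from data
# that lives inside one tool's own database. Records, fields, and the keyword
# "parser" are hypothetical; a real assistant would translate the question
# with an LLM into an equivalent structured query.
from dataclasses import dataclass

@dataclass
class Part:
    number: str
    category: str
    corrosion_rating: int  # hypothetical 1-10 scale
    application: str

CATALOG = [  # stand-in for the tool's indexed part data
    Part("F-1001", "fastener", 9, "marine"),
    Part("F-1002", "fastener", 6, "marine"),
    Part("B-2001", "bracket", 9, "marine"),
    Part("F-1003", "fastener", 9, "aerospace"),
]

def answer(question: str) -> list:
    """Toy intent extraction: keyword checks stand in for LLM query translation."""
    q = question.lower()
    return [
        p for p in CATALOG
        if ("fastener" not in q or p.category == "fastener")
        and ("marine" not in q or p.application == "marine")
        and ("greater than eight" not in q or p.corrosion_rating > 8)
    ]

print(answer("show me all fasteners with corrosion resistance rating "
             "greater than eight used in marine applications"))
```

The point is the boundary, not the parsing: everything the assistant can report comes from CATALOG, which is exactly the single-tool constraint that defines Level 1.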

Prerequisites are minimal, which is why Level 1 adoption is spreading quickly. Teams need basic training, prompt literacy through practice, and the ability to validate outputs—plan for 3 to 6 months of proper adoption and skill development.

The boundaries of Level 1 matter. At this level, reasoning is constrained to what’s in the solution. AI can’t access information from other systems or predict impacts across the enterprise. When your engineers say, “This would be better if it knew about supplier capabilities” or “I need cost data from ERP,” you’re ready for Level 2.

By the end of 2026, Level 1 capabilities are expected to be near-universal. Organizations without solution-native AI capabilities will be at a disadvantage. This is becoming table stakes, no longer a competitive advantage.

Level 2: AI Across Enterprise Systems

At Level 2, AI extends beyond individual solutions to access information across an enterprise. The interaction pattern remains the same (the user asks, AI answers), but now AI synthesizes data from multiple connected solution domains (e.g., PLM, ERP, manufacturing, procurement, and quality). This integration enables more comprehensive responses, supporting better-informed engineering decisions and cross-functional collaboration.

Here’s why this matters. An engineer asks, “What’s the impact if I switch this fastener from steel to aluminum?” With Level 1, the PLM solution responds based only on its data—”This part is used in 12 assemblies.” With Level 2, AI reasons across systems, “This part is used in 12 assemblies (i.e., a PLM where-used). Material cost drops from $2.40 to $0.65 (ERP), but supplier MOQ increases from 100 to 1000 units (procurement). Current inventory shows 450 steel fasteners (inventory management). Lead time for aluminum is 8 weeks vs 2 weeks for steel (supply chain). Note: Engineering flagged potential galvanic corrosion issues mixing aluminum fasteners with steel assemblies in marine environments (shared drive). Recommend review before proceeding.” The engineer reviews this synthesis and makes a decision.

This is “buy with some build and maintain” AI adoption. You’ll buy enterprise solutions with integration capabilities, but you’ll need to build the infrastructure connecting them into a digital thread and implement AI capabilities that query across this infrastructure. You also need to maintain the data quality that makes it work.

Several solution providers offer capabilities spanning multiple solution domains, though these generally work within a single solution provider's ecosystem. Oracle AI for Fusion and SAP Joule, for example, offer capabilities across their integrated suites. However, when integrating across multiple solution providers, which most organizations need to do, you must build your own integration infrastructure. Platforms like Databricks, Snowflake, or similar data lakehouse solutions can serve as the integration layer, but implementing and maintaining these requires significant internal expertise and ongoing effort.

Level 2 demands substantial investment, separating leaders from followers. The data foundation takes time. You need clean, validated data with consistent metadata across PLM, ERP, MES, quality, and supply chain solutions. The digital thread must actually work, preserving cross-solution relationships and synchronizing data in real time. Plan for an 18- to 24-month minimum pilot and implementation timeline, realistically 24 to 36 months given typical overruns.

On the technical side, you need deep integration with robust APIs. Retrieval Augmented Generation (RAG) is becoming standard practice, connecting AI models to your actual enterprise data. However, effective RAG requires properly indexed data and specialized skills. If your data isn’t clean and well-indexed, RAG can’t retrieve the right information regardless of which AI solution provider you choose.
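
To show why indexing and data quality dominate RAG outcomes, here is a deliberately minimal retrieval sketch. Word-overlap scoring stands in for a real embedding model and vector index, and the three documents are invented; the point is the pipeline shape: index enterprise content, retrieve the closest chunks for a question, and ground the LLM's answer in only those chunks.

```python
# Minimal, illustrative RAG retrieval loop. Word-overlap scoring stands in for
# a real embedding model and vector index; the document set is hypothetical.
from collections import Counter
from math import sqrt

DOCUMENTS = {
    "ECO-1042": "Engineering change order: aluminum fasteners rejected for marine assemblies due to galvanic corrosion.",
    "SPEC-88":  "Corrosion resistance rating must exceed 8 for all externally exposed marine fasteners.",
    "MOM-17":   "Meeting notes: cafeteria menu and parking assignments for Q3.",
}

def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question: str, k: int = 2) -> list:
    q = vectorize(question)
    ranked = sorted(DOCUMENTS.items(), key=lambda kv: cosine(q, vectorize(kv[1])), reverse=True)
    return [f"[{doc_id}] {text}" for doc_id, text in ranked[:k]]

if __name__ == "__main__":
    question = "Can we use aluminum fasteners in marine assemblies?"
    print("Context passed to the LLM:\n" + "\n".join(retrieve(question)))
    # An LLM call would follow, grounded in (and only in) the retrieved context.
```

If the underlying documents were stale, mislabeled, or never indexed, the retrieval step would surface the wrong context and the model would answer confidently from it, which is the failure mode the data-quality prerequisite guards against.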

The organizational challenges often exceed the technical ones. You need data governance frameworks, integration expertise across teams, and serious change management. Research published in 2021 by the MIT Center for Information Systems Research found that while organizations are rolling out data literacy programs (i.e., training employees to read, interpret, and use data), training alone is insufficient: organizations must ensure people actually engage with and experience data to build true capability. Cultural resistance is real, and organizational change management (OCM) is critical to success.

When evaluating solution providers, ask directly, “How open is your platform to third-party AI? What APIs enable digital thread integration?” Integrated suites like Oracle Fusion or SAP have advantages within their ecosystems. Open platforms like Aras Innovator, Siemens Teamcenter, PTC Windchill, and others enable third-party AI integration and flexibility.

Level 1 becomes universal quickly because solution providers ship it as built-in features. Level 2 requires sustained investment in digital thread infrastructure and data governance. Success depends on execution discipline, not on buying software.

Organizations progress to Level 3 when Level 2 operates reliably, data governance is established and followed consistently, and the organization is ready to delegate decision-making to AI within defined boundaries. This typically requires years beyond Level 2.

Understanding AI maturity levels helps you make realistic decisions about where to invest and what success requires. Level 1 provides immediate value with minimal prerequisites and is expected to become universal by the end of 2026. Level 2 delivers a competitive advantage, but, as previously stated, it demands substantial investment in digital thread infrastructure and data governance.

Organizations investing in Level 1 now, while building Level 2 foundation work in parallel, are positioning themselves well. The question isn't whether your organization will adopt these capabilities. The question is whether you'll understand what's needed to succeed, including what to adopt and how fast.

In Part 2, we’ll explore Level 3 (orchestrating work with AI) and Level 4 (building custom models), provide realistic adoption timelines through 2027, and offer strategic guidance based on what’s actually happening in manufacturing companies implementing AI-enabled PLM.

AI layoffs and the workforce paradox: who builds the future when machines take over? https://www.engineering.com/ai-layoffs-and-the-workforce-paradox-who-builds-the-future-when-machines-take-over/ Mon, 27 Oct 2025 17:24:01 +0000 https://www.engineering.com/?p=144147 Automation is accelerating faster than reskilling. Engineers sit at the fault line between capability evolution and displacement.

When professional services firm Accenture announced it had laid off over 11,000 employees in 2025, citing an inability to reskill certain roles for an AI-first model, it was easy to read the move as just another corporate restructuring. It was not. It was a clear signal of a deeper shift in how organizations balance human and machine capabilities.

According to the World Economic Forum’s Future of Jobs Report 2025, approximately 92 million roles will be displaced globally by automation by the end of the decade, even as 170 million new jobs are created. On paper, that looks like net growth—but the underlying story is one of structural tension. Routine, coordination-heavy work is disappearing faster than new analytical, interdisciplinary, and AI-augmented roles can emerge. The WEF estimates that nearly four in ten workers worldwide will require reskilling within five years, yet only half of companies have actionable transition plans.

Accenture’s decision reflects this reality. Consultancies are often early barometers of industrial transformation. By exiting low-margin, labor-intensive functions and reinvesting in higher-value AI and analytics capabilities, the firm is effectively reshaping its own operating DNA. The challenge it faces is the same that engineering and manufacturing organizations will soon encounter: reskilling at scale, and at speed, while preserving the domain expertise that defines their competitive edge.

Julie Sweet, Accenture’s CEO, summed it up: “Every new wave of technology has a time where you have to train and retool.”

That time is now.

From cost management to capability economics

The logic driving Accenture’s move is now visible across sectors—from cloud computing and semiconductors to consumer goods and retail. Organizations are restructuring around “capability economics,” shedding legacy roles while doubling down on AI-centric functions.

The trendline is unmistakable: automation is expanding faster than reskilling capacity. The practical question for industrial leaders is: who builds and maintains the systems that replace humans?

The shifting anatomy of work

In engineering and manufacturing, the shift is not just about headcount—it is about redefining the anatomy of work.

Design engineers who once spent days iterating CAD models now use AI-assisted systems to generate hundreds of validated options in minutes. The skill no longer lies in modeling itself, but in selecting the most contextually appropriate design.

Test and validation engineers are moving from script execution to managing predictive simulation models that detect design failures before prototypes exist. Manufacturing engineers are training algorithms to anticipate yield deviations, while R&D scientists use AI agents to simulate complex chemical or material interactions.

These are not incremental changes—they are structural. The competitive advantage no longer comes from having more engineers, but from enabling engineers to do more with AI.

In this new model, knowledge orchestration replaces manual coordination. The future engineer is part technologist, part strategist, capable of interpreting AI outputs and aligning them to product, cost, and compliance objectives. Those unable to adapt to this integrated workflow risk becoming obsolete—not by replacement, but by irrelevance.

A window for small and mid-sized manufacturers

The World Economic Forum warns that 39% of workers globally will need reskilling by 2030, yet most companies have not operationalized how that will happen. This gap creates a new kind of divide—not between humans and machines, but between those who can adapt to AI-augmented work and those whose roles remain fixed in time.

For small and mid-sized manufacturers, the risk is sharper. They cannot afford mass redundancies or multi-year academies, yet they face the same technology curve. The solution lies in micro-credentialing, shadowing programs, and cross-functional rotations that embed AI literacy into product and process teams.

Protecting domain knowledge and critical capabilities must also become a design principle. When experienced engineers leave before their know-how is codified into PLM systems or cognitive data threads, organizations create knowledge vacuums that no machine can fill.

AI can replicate reasoning patterns—but not contextual judgment. That still belongs to humans.

The future of work is not about replacing people with AI. It is about designing a productive coexistence where human creativity, ethics, and contextual awareness guide machine execution.

The leaders who get that right will define the next era of industrial transformation. Those who do not may find themselves automated out of relevance.

AI a “kick in the pants” for infrastructure sector, says Bentley CEO https://www.engineering.com/ai-a-kick-in-the-pants-for-infrastructure-sector-says-bentley-ceo/ Fri, 24 Oct 2025 20:18:35 +0000 https://www.engineering.com/?p=144113 “The digital thread is broken,” Nicholas Cumins told Engineering.com at Bentley Systems’ Year in Infrastructure 2025.

Amsterdam is a city with a unique history and an unavoidable focus on infrastructure. You could say the same about Bentley Systems, the software developer that began as a family business and is now a major provider of infrastructure engineering applications.

Fitting, then, that Bentley chose Amsterdam as the site of its 2025 Year in Infrastructure event, an annual gathering of Bentley personnel, press, and prestige customers dedicated to infrastructure engineering.

Engineering.com was in Amsterdam last week to report on Bentley’s many software announcements. You can read the news and analysis in Bentley bets big on AI for infrastructure.

We also had the chance to sit down with Bentley CEO Nicholas Cumins, the first leader of the company who doesn’t share its name (in 2024 he succeeded longtime CEO Greg Bentley, the eldest of the five Bentley brothers and current executive chair of the board).

Bentley Systems CEO Nicholas Cumins on stage at Year in Infrastructure 2025 in Amsterdam, the Netherlands. (Image: Bentley Systems.)

We spoke with Cumins about his vision for Bentley Systems, the challenges facing the infrastructure industry, how AI could be transformative for design and engineering, and the news he’s most excited about from YII 2025 (hint: it’s not any of the software announcements).

The following transcript has been edited for clarity and brevity.

Engineering.com: What is your vision for Bentley Systems?

Nicholas Cumins: Bentley is the infrastructure engineering software company. We offer software for pretty much all the engineering disciplines that are involved in designing and engineering infrastructure assets across industries, from transportation to electric utilities to the water network. We have the deepest, broadest portfolio of applications for infrastructure engineering, and we cover the full lifecycle from design to construction to operations.

Our greater vision here is how do you actually connect these phases of the lifecycle, from design to construction to operations and back to design? It’s very rare that you’re going to develop infrastructure in a vacuum. There are always existing assets. You want to keep all of that as context when you design potential new infrastructure or repurpose existing infrastructure.

So we want to make sure we can connect the loop from design to construction to operations back to design, so that we understand how designs have been performing over time to influence future designs going forward.

If that’s the vision, what’s the reality today?

The full feedback loop is still something that the industry is working towards. The truth of the industry is that the digital thread is broken.

When it goes from design to construction, it's very often the case that if a different firm is in charge of construction, it's going to recreate its own plans, its own models, to then move forward with the construction process. And it's very often the case that infrastructure assets are delivered with a lot of files, a lot of data, lots of simulation and analysis. But all of that data goes completely stale: it reflects how the infrastructure asset was when it was delivered, not how the infrastructure is at any point in time. And there is very rarely a feedback loop from the performance of the assets back to the original design.

So creating this digital thread is very much an endeavor for the entire industry. Organizations that do design-build will already make sure that there is a great digital thread that goes from planning all the way to the handover of the asset, so you will see pockets of that. But as an industry overall, I think it’s fair to say this is very much still ahead of us.

What do you see as the solution to that problem? Is it technological?

It’s technological potentially. I will say this is probably the easiest part, because we can do that already. It’s primarily the business models that need to evolve.

Engineering firms very often are charging time and material. They’re not charging based on the performance of the asset. So they don’t really have an incentive to be able to maintain that digital thread all the way to performance and back. So those are more fundamental things that need to be tackled.

Engineering firms have been talking about this for 100 years. There was one CEO who told me he found a whitepaper from the 1920s that was talking about moving to value-based pricing for engineering services.

But interestingly, AI, because it’s changing the fundamental dynamics, it’s changing how value is being created. It might be the kick in the pants which is needed for the infrastructure sector overall to move to such business models.

How do you see AI impacting the infrastructure sector today?

We’re actually at the beginning of AI, and already we see amazing productivity gains. Some of the ones I quoted yesterday in the keynote were 60%, 80% productivity gains. So this is huge. It could be even more.

On the question of whether we replace engineers, we really don’t see that happening anytime soon in our space, because of what’s at stake here. There needs to be an organization that guarantees the designs that have been created, that can vouch for how reliable these designs are. And when they do that, they engage their liability. So we don’t see AI completely replacing engineers anytime soon.

What we do see is AI automating big parts of what the engineers are doing. And we see AI also making recommendations—not decisions, recommendations—to engineers. Things are moving so fast with AI, but a recent development we’ve seen is a clear acceleration of engineering firms who are using our applications to give feedback to the AI agents that they have created. So they will tap into our structural analysis software to see whether this is reliable from a structural standpoint. They will tap into our geotechnical software. They will tap into our hydraulics and hydrology software in order to get that kind of feedback.

We call that the engineering context. This is what AI needs in order to come up with appropriate recommendations. So we see our engineering applications providing engineering context to AI to make sure that AI is going to come up with appropriate recommendations.

How is Bentley planning to equip users with AI?

It’s a priority across our entire product organization. You will see AI in our core engineering applications. You will see AI coming up in Connect, a new set of capabilities as part of Bentley Infrastructure Cloud. And this is all about information management, data management and collaboration. You see AI also being developed into our offering for subsurface analysis called Seequent. So you see AI throughout our portfolio.

Bentley CEO Nicholas Cumins delivering the keynote at YII 2025. (Image: Bentley Systems.)

And then there’s something we haven’t really discussed so far, which is not just design and construction, but operations and maintenance. We have an entire offering dedicated to creating analytics on existing assets, which is all powered by AI. So it’s using computer vision, for example, to understand the exact physical condition of an existing asset and its full context. It’s using AI to detect if there’s something like some vegetation growing, or some electric poles which could be a danger. Is there a crack? Is there spalling on a bridge? And leveraging our own engineering applications to understand what that means from an engineering standpoint. Is there going to be a fire hazard here, a structural integrity issue? Do we need to do some remediation work? So this offering is called Bentley Asset Analytics, and it’s all leveraging AI.

Could you tell us more about the Bentley Copilot?

Bentley Copilot is an AI assistant that we have created in the context of our very first AI powered application for site engineering. And now we’re deploying it across our portfolio of engineering applications, as well as Bentley Infrastructure Cloud, the capabilities we offer for data management and collaboration.

So it’s the same assistant, and there is an interesting abstraction layer underneath it where we can swap from one LLM to another. We’re not dependent on any particular LLM, and I think we’ve swapped it already a couple of times.

Can users pick the LLM that they want to use?

No, this is transparent to them. Users sometimes create their own AI assistants, and what we do is offer them the interface needed so that their AI assistant can interact with our applications directly, or can interact with Bentley Infrastructure Cloud directly. Whenever they do this, obviously they pick whatever LLM they want.

How are your customers using their own AI tools with Bentley software?

The big engineering firms are creating AI assistants to help their engineers get information fast. And they could use that instead of our own copilot. We welcome that, even for simple things like our own documentation.

If you go on docs.bentley.com, you can interact with a chatbot. The same chatbot actually offers an MCP interface, the model context protocol, which is becoming a bit of a de facto standard for interfaces for AI. It was created by Anthropic and everybody is adopting it. We did this because we have a lot of engineering firms who wanted their assistants to be able to tap into the documentation. That’s a simple example.

Besides creating those MCP interfaces, there are two ways we’re fundamentally helping these engineering firms. One is helping them tap into their past project data that they’ve put into Bentley Infrastructure Cloud. In whatever file format that data is, whether it’s Bentley or not, we help them map that data into schemas. It’s called the base infrastructure schema. Suddenly that data is query-able, it’s searchable, and it’s ready to be picked up by AI. Basically, we give them technology that they can then use in order to access data, which can be decades old.

And the other way we’re helping them, which I think is the most profound, is our own engineering applications providing feedback to their AI. They’ve been trusting these applications for decades, they’re very established. STAAD, our application for structural analysis, is the gold standard application for structural engineering. PLS [Power Line Systems] is the same for transmission towers, and so on and so forth. So we have this deep and broad portfolio of engineering applications, and the big engineering firms now want not just infrastructure professionals to interact with those applications, they want AI also to interact with those applications.

So yesterday we announced that we have a co-innovation initiative to discuss very openly with those firms what APIs we have right now and how we need to evolve those APIs to help solve their use cases. And to what extent we also need to continue to evolve the architecture of our applications so that they can be queried by AI as cloud services as opposed to desktop software. Because right now, those are primarily desktop applications.

And also, how do we need to evolve our own commercial models to make sure that, with all the value that will be created, all these productivity gains that are going to be generated for the engineering firms leveraging AI, we can capture our fair share of the value. Because everything, whether it’s our architecture, our commercial models, everything right now is designed for individual infrastructure professionals to use our applications. And now we’re talking about something completely different.
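
For readers unfamiliar with the model context protocol Cumins mentions above: MCP standardizes how an AI assistant discovers and calls tools exposed by a server, using JSON-RPC 2.0 messages. The sketch below shows only the general shape of those messages; the tool name, arguments, and query are invented for illustration and do not represent Bentley's actual interface.

```python
# Illustrative shape of MCP (Model Context Protocol) messages, which are
# JSON-RPC 2.0. The tool name and arguments are hypothetical; real MCP servers
# are reached over stdio or HTTP transports via an MCP client library.
import json

list_tools = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",          # ask the server what tools it exposes
}

call_tool = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",          # invoke one of those tools
    "params": {
        "name": "search_docs",                                   # hypothetical tool
        "arguments": {"query": "OpenRoads corridor modeling workflow"},
    },
}

if __name__ == "__main__":
    print(json.dumps(list_tools, indent=2))
    print(json.dumps(call_tool, indent=2))
```

In practice an MCP client library handles the transport and handshake; the appeal for engineering firms is that any assistant speaking the protocol can discover and call the same tools without custom integration code.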

Will Bentley Copilot be available across the Bentley portfolio?

Yes, you see it in all the next generation applications. So OpenSite+, Substation+ and Synchro+. The plus indicates it’s a new generation. There is a previous generation that still exists, but at some point, once it’s fully replaced, we’ll just drop the plus.

Bentley demonstrating the new OpenSite+ with Bentley Copilot at YII 2025. (Image: Bentley Systems.)

But then we also showed how we’re bringing the same copilot capabilities within established applications such as OpenRoads or OpenRail. We’ve shown how we’re embedding it into Bentley Infrastructure Cloud for the search capabilities, for example. Say you’re in ProjectWise as part of Bentley Infrastructure Cloud, for example, and you want to search past project data. You will actually interact with the Bentley Copilot.

What makes the next generation applications next gen? Is it just the addition of AI?

They are powered by AI, but their fundamental architecture is also quite different, because they all organize around a digital twin. And by that, what we mean is the data that is being created, instead of being created into an engineering file, whatever file format you pick, the data is being created into a digital twin. It’s persisted as a digital twin, which is cloud based, so it’s in the cloud.

And that allows multiple engineers to work together at the same time on the same project. So take Substation, for example. You will see engineers representing different disciplines who can work concurrently on the design of a new substation.

Do you have a timeline on transitioning the rest of the portfolio to the next generation?

No, we pace it because we don’t want to create too much disruption. For civil engineering we started with site engineering, because that’s typically how a project starts. Design the site first, and then you can go and design the data center, for example.

And then we went after substation, because we thought this is where there is the most acute need for concurrent engineering. There’s such big demand for electricity. The electric grid has to be upgraded. There’s a need for way more substations than we have right now, and there’s a lot of work to be done. Substation engineers cannot afford to be just waiting. So that’s why we went after that.

Probably the last applications that are going to be re-platformed are those very established applications such as OpenRoads and OpenRail. That’s why, instead of waiting, we decided to bring some AI capabilities into those, even though they’re still fundamentally file based applications.

You said a few times at this event that you won’t use your customers’ data to train AI. Why do you feel it’s important to emphasize that point?

It’s a matter of principle. It’s wrong to use the IP of one engineering firm to train AI that will benefit other engineering firms. It’s just wrong.

The same way that we are making really sure that when we use AI internally to help our developers, that our code doesn’t start to train AI that can benefit other software vendors. So it’s really a matter of principle.

I can tell you, however, that it is top of mind for engineering firms themselves. They are very concerned about their IP, their data, being used to train AI of other software vendors. And sometimes there’s also unclarity on who owns the IP. Sometimes it’s them, sometimes it’s their clients. So sometimes they’re not even allowed for that data to be used to train AI even if they wanted it to.

So we made it very clear that we will not use the data of our users, whether they are engineering firms or owner-operators, to train our own AI, unless they explicitly permit us to do so.

Which announcement are you most excited about from this year’s YII?

In a funny way, it might not be any of the software announcements. It would be this AI co-innovation initiative. I’m just in awe when I look at what engineering services firms, owner-operators are doing right now with AI and the possibilities that it opens up. I am just in awe. So I’m very much looking forward to those conversations about how we can help even more by evolving, especially our engineering applications.
