Digital Transformation - Engineering.com https://www.engineering.com/category/technology/digital-transformation/

Fixing AI application project stalls – part 2 https://www.engineering.com/fixing-ai-application-project-stalls-part-2/ Mon, 24 Nov 2025 19:44:34 +0000 https://www.engineering.com/?p=144815 A few more thoughts on how engineers can prevent or rapidly fix stalls in AI application projects.

Artificial intelligence (AI) is rapidly reshaping the engineering profession. From predictive maintenance in manufacturing plants to design in civil and mechanical projects, AI applications promise to increase efficiency, enhance innovation, shorten cycle times and improve safety.

Yet, despite widespread awareness of AI’s potential, many engineering organizations struggle to progress beyond the pilot stage. AI implementations often stall for various reasons—organizational, technical, cultural, and ethical. Understanding these barriers and fixing them is crucial for leaders who aim to advance AI from an intriguing, much-hyped new concept into a practical engineering capability.

To read the first article in this two-part series, click here.

Skills shortages and training deficiencies

The successful application of AI in engineering demands multidisciplinary collaboration. Data scientists understand the algorithms, but not necessarily the physical principles governing structures, materials, or thermodynamics. Conversely, engineers possess domain expertise but may lack proficiency in machine learning, statistics, or data visualization. These gaps create communication barriers and implementation bottlenecks, stalling progress.

To bridge the divide, organizations promote “AI literacy” among engineers and “engineering literacy” among data professionals. Cross-functional teams, comprising engineers, data scientists, and IT specialists, are often the key to translating technical concepts into practical outcomes. Continuous professional development programs, partnerships with universities, and in-house training initiatives all contribute to building expertise. The future of engineering will belong to those professionals who can interpret both finite element analysis and neural network outputs with equal fluency.

Gaps in the computing infrastructure

With the explosion in data volumes and the high resource demands of AI applications, the computing infrastructure that supports engineering groups can be overwhelmed. That leads to storage shortages, poor performance, and unplanned outages. These issues stall engineers’ productivity.

Engineering organizations can add to the capacity of their computing infrastructure by:

  • Investing in additional on-premise capacity.
  • Moving some AI applications from servers to high-performance workstations.
  • Moving some applications to a cloud operated by a cloud service provider (CSP).
  • Moving some applications from on-premises to the computing infrastructure of a Software as a Service (SaaS) vendor.

Organizational resistance and cultural barriers

Engineering culture is grounded in precision, accountability, and safety. These values, while essential, can also foster skepticism toward new technologies. Some engineers question the validity of AI-generated recommendations if they cannot trace their underlying logic. Project managers may hesitate to delegate decisions to AI systems they perceive as “black boxes.” Such caution and resistance stall progress.

Overcoming this resistance requires transparency and inclusion. AI models used in engineering should emphasize explainability—demonstrating how inputs lead to outputs. Involving engineers in AI model development builds trust and ensures the results align with physical realities. IT leadership must communicate that AI is a decision-support tool, not a replacement for engineering judgment and expertise. By framing AI as an enabler of better engineering rather than an external disruptor, organizations can foster acceptance and enthusiasm.

Lack of ethical, legal, and safety considerations

Engineering operates within strict regulatory and ethical frameworks designed to protect public safety. AI introduces new dimensions of risk, such as:

  • Discrimination and toxicity
  • Privacy and security
  • Misinformation
  • Malicious actors and misuse
  • Human-computer interaction
  • Socioeconomic and environmental harms
  • AI system safety, failures and limitations

If an AI-driven application predicts incorrect stress tolerances or misjudges maintenance intervals due to one or more of these risks, the consequences can be catastrophic. Because AI project teams are new to these risks, they may become overly cautious, which can stall progress.

To mitigate these risks, engineering firms establish rigorous model validation procedures, document decision processes, and ensure compliance with industry standards and safety regulations. An AI ethics and safety review committee should evaluate new AI applications before they are deployed. Transparency and accountability are not optional in engineering—they are fundamental responsibilities.

Underestimating the complexity of change management

Introducing AI is not simply an IT upgrade—it is a transformation of the organization. Engineering workflows, approval hierarchies, and performance metrics often require reconfiguration to incorporate AI insights effectively. AI projects stall when leadership underestimates the organizational change necessary to operationalize AI results.

A deliberate, people-focused change management approach is essential. It includes stakeholder engagement, pilot demonstrations, and training for every rollout. By delivering multiple tangible improvements—such as reduced maintenance costs or faster design cycles—AI projects build momentum for broader adoption and ultimately achieve a lasting impact.

AI has immense potential to revolutionize engineering practice. It can enhance design optimization, improve maintenance predictability, and elevate overall manufacturing efficiency. However, realizing that potential requires more than algorithms—it requires alignment, trust, and integration.

AI projects stall when skills are in short supply, the organization is resistant, or workflows resist adaptation. Success demands high-quality data, transparent governance, and an organizational culture that embraces continuous learning. Engineering has always been about solving complex problems through disciplined innovation. Implementing AI effectively is the next evolution of that successful tradition.

The organizations that combine engineering rigor with AI insights will not only overcome today’s barriers but also define their own future.

PTC’s pivot: why shedding IoT makes its PLM vision stronger https://www.engineering.com/ptcs-pivot-why-shedding-iot-makes-its-plm-vision-stronger/ Mon, 17 Nov 2025 15:26:12 +0000 https://www.engineering.com/?p=144633 The $600 million sale of ThingWorx and Kepware to TPG signals a decisive refocus on lifecycle intelligence.

PTC’s divestiture of ThingWorx and Kepware to TPG is more than a portfolio adjustment—it is a 180° strategic shift from the platform-expansion vision under former CEO Jim Heppelmann. Where Heppelmann chased broad connectivity and IoT platform reach, the Barua era is defined by focus, operational precision, and lifecycle intelligence.

This is a defining inflection point for the PLM sector. The question PTC is now answering for customers and shareholders is no longer “How many devices can you connect?” It is now “How effectively do you convert connected signals into product lifecycle decisions?”

That reframing moves the competitive discussion from technical IoT plumbing to product insight. Under Neil Barua, PTC is deliberately simplifying its portfolio, scaling SaaS via the Atlas platform, embedding AI-driven intelligence across the lifecycle, and monetizing product data continuity. In short, it is moving from connectivity to intelligence—a shift that signals the next chapter in PLM leadership.

The reset we saw coming

PTC’s strategy has long been under scrutiny. Investors, partners, and competitors—including Autodesk—have been asking:

“Where is PTC headed? Is it a multi-SaaS platform company, an IoT player, or a PLM-first innovator?”

The company’s portfolio, spanning CAD, PDM, ALM, and IoT, made its priorities hard to discern. Market speculation peaked earlier this year when Autodesk explored a potential acquisition. Though discussions ended abruptly, the episode highlighted the industry’s central question: was PTC pursuing scale for its own sake, or focusing on clear differentiation in lifecycle intelligence?

The sale of ThingWorx and Kepware resolves that uncertainty. By divesting industrial connectivity, PTC signals a pivot away from platform breadth toward focused, product-centric value creation. This reset was predictable: the maturity of cloud connectivity stacks, combined with private equity interest in ThingWorx and Kepware, created ideal conditions for a separation. One company can scale connectivity independently, while PTC doubles down on lifecycle intelligence.

From connectivity to lifecycle intelligence

Industrial IoT was the first act of Industry 4.0 digital transformation: it proved that machines, sensors, and systems could communicate. That era, once groundbreaking, is now commoditized; connectivity is abundant. The differentiator today is how organizations translate connected data into actionable intelligence across the product lifecycle.

ThingWorx and Kepware gave PTC credibility in bridging IT and OT. But as open protocols and standardized cloud stacks proliferated, maintaining proprietary infrastructure became costly and offered diminishing differentiation. PTC’s pivot shifts the focus to orchestrating connected data into lifecycle insight, enabling smarter design, manufacturing, service, and sustainability decisions.

The Atlas platform exemplifies this shift. It underpins Windchill+, Creo+, and Codebeamer+, enabling scalable SaaS delivery, version control, and collaboration, while freeing PTC from maintaining a proprietary IoT stack. The company’s competitive edge is now measured by how product data drives decisions, reduces time to market, and improves outcomes, rather than how many devices it connects.

Notes:

  • The “+” products indicate cloud-first, SaaS-enabled delivery, aligned with Barua’s lifecycle intelligence focus.
  • Divestiture of IoT assets (ThingWorx/Kepware) refocuses PTC on insights and intelligence, not device connectivity.
  • PTC acts as a surgical consolidator, acquiring assets that reinforce lifecycle cohesion without diluting the platform.

The Barua blueprint

Neil Barua’s appointment as CEO in early 2024 marked a transition from broad ambition to operational precision. The Barua Blueprint crystallizes this strategy into four imperatives:

  • Simplify the portfolio – Focus on PDM, CAD, PLM, and digital thread as the pillars of product innovation.
  • Accelerate SaaS transformation – Scale predictable, cloud-first revenue through Windchill+, Creo+, and Codebeamer+.
  • Embed intelligence and AI – Apply predictive design, simulation, and decision automation across the lifecycle.
  • Monetize lifecycle depth – Expand ecosystem value through tighter integration rather than platform sprawl.

Investor clarity: By separating ThingWorx and Kepware, PTC also simplifies its investor narrative: performance is now measured on operational excellence and SaaS adoption, rather than on the speculative growth of a fragmented IoT market. This discipline mirrors the most successful industrial software transformations of the past five years: focus, clarity, and a clear ROI story for customers.

Industry implications, signals to watch

PTC’s pivot signals a broader maturation in industrial software. Connectivity is foundational but no longer sufficient; lifecycle orchestration—linking engineering, manufacturing, and service intelligence—defines digital maturity. Depth of integration now outweighs breadth of platform ownership.

For manufacturers, the benefits are tangible: faster SaaS delivery, simpler integration, and measurable ROI on PLM investments. For the ecosystem, TPG’s acquisition will accelerate industrial connectivity innovation, while PTC concentrates on intelligence-driven lifecycle solutions.

Over the next 12 months, the pivot will be judged by five signals:

  • SaaS adoption – growth in Windchill+ and Creo+ subscriptions.
  • PLM portfolio focus – continued divestitures or acquisitions aligned with lifecycle intelligence.
  • Embedded AI – deployment of predictive and decision-making capabilities across the lifecycle.
  • Strategic partnerships – leveraging hyperscalers, analytics, and automation platforms rather than building proprietary IoT stacks.
  • Execution clarity – consistent investor and market messaging emphasizing lifecycle value.

PTC’s divestiture demonstrates that focus and execution now define industrial software leadership. By orchestrating insights rather than owning devices, PTC is charting a new chapter—moving from the promise of “connected everything” under Heppelmann to “lifecycle intelligence” value under Barua.

Fixing AI application project stalls – Part 1 https://www.engineering.com/fixing-ai-application-project-stalls-part-1/ Fri, 14 Nov 2025 21:35:30 +0000 https://www.engineering.com/?p=144631 AI application projects stall easily due to these common issues.

Artificial intelligence (AI) is rapidly reshaping the engineering profession. From predictive maintenance in manufacturing plants to design in civil and mechanical projects, AI applications promise to increase efficiency, enhance innovation, shorten cycle times and improve safety.

Yet, despite widespread awareness of AI’s potential, many engineering organizations struggle to progress beyond the pilot stage. AI implementations often stall for various reasons—organizational, technical, cultural, and ethical. Understanding these barriers and fixing them is crucial for leaders who aim to advance AI from an intriguing and much hyped new concept into a practical engineering capability.

Lack of clear AI application objectives

Engineering organizations often approach AI from a technology-first perspective rather than a business or technical problem-first mindset. Senior management may task a team to “use AI to improve productivity” without specifying a more focused aspect of productivity, such as reducing design rework, optimizing supply chain logistics, or forecasting equipment failures. This ambiguity stalls progress by diffusing effort, yielding uneven results, and wasting resources.

Practical AI projects in engineering will be more successful if they begin with tightly defined business and technical objectives. For example:

  • A civil engineering firm might aim to reduce project delays caused by material shortages by using enhanced AI-driven demand forecasting.
  • A mechanical engineering team could target reduced downtime through more sophisticated AI-driven predictive maintenance analytics.
  • An electrical engineering work group could utilize AI to simplify circuit board designs, thereby improving manufacturing quality.

Aligning AI projects with measurable engineering outcomes—such as higher throughput, improved energy efficiency, or longer asset lifespan—creates both focus and accountability. Without such clarity, AI projects remain academic exercises rather than operational solutions.

Insufficient data quality

Engineering operations generate immense quantities of data—from design drawing revisions and manufacturing sensor readings to field inspection reports. However, this data is rarely standardized or integrated. Legacy systems store data in incompatible formats, while field-collected data is often incomplete or inconsistent. Moreover, in many engineering environments, critical data resides in siloed applications, on isolated local servers, or externally with partners. Sometimes, the impediment is simply that the digital transformation of data has not advanced sufficiently. Poor data quality and incomplete data lead to unreliable models, eroding confidence among engineers who depend on sustained data accuracy. These data issues stall progress until the data quality is improved.

AI models require reliable, high-quality data to produce accurate insights. Addressing this issue demands robust data governance — defining ownership, standardizing data formats and values, simplifying accessibility, and ensuring traceability. For large-scale engineering enterprises, implementing centralized data lakehouses or data warehouses can provide a unified data foundation. Without disciplined data management, even the most advanced AI applications cannot deliver actionable results.

Unrealistic expectations

Sometimes, engineering teams, in their enthusiasm, over-promise what they can deliver. Examples include:

  • More functionality than their AI model and the available data can achieve.
  • Overly aggressive AI project timelines.
  • Underestimated required resources and related project budget.

These issues lead to management disappointments and a reluctance to commit to additional AI application projects, which stall progress.

Setting and managing expectations is never easy. Promising too little will not create enthusiasm and support. Promising too much is guaranteed to lead to disappointment. Here are some techniques that have proven successful for engineers:

  • Mockup expected results with visualizations using PowerPoint slides.
  • Start with an exploratory prototype.
  • Conduct an AI pilot project with sufficient scope to enable the follow-on project to deliver a production-quality AI application.
  • Conduct an AI risk assessment, share the results with management and incorporate mitigations into your project plan.

Inadequate integration with existing engineering workflows

Unlike software-driven processes, engineering workflows are deeply intertwined with physical processes, regulatory compliance, and long-established methodologies. Introducing AI into these workflows often exposes integration challenges. For example:

  • An AI model that predicts equipment failure may not easily link to existing maintenance scheduling systems or supervisory control and data acquisition (SCADA) platforms.
  • AI-generated design recommendations might not align with CAD software data standards or quality assurance protocols.
  • AI-generated recommendations that alter supply chain vendors or order quantities may be challenging to implement within the existing suite of applications.

These integration issues frequently stall progress. Engineers may see AI as disruptive or unreliable if it requires substantial changes to established processes or applications.

The solution to AI integration issues lies in collaborative systems engineering. This concept, which facilitates smoother integration, consists of:

  • Designing AI applications that complement, rather than replace, existing systems.
  • Building application programming interfaces (APIs) that integrate new AI applications with existing systems (a minimal sketch follows this list).
  • Adopting a more modular system architecture that creates simpler integration points.
  • Integrating AI incrementally because it allows for easier absorption by the organization compared to a sweeping replacement.
  • Ensuring backward compatibility where feasible.
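As one illustration of the API point above, the sketch below wraps a hypothetical failure-prediction model behind a small REST endpoint (using Flask) so an existing maintenance scheduler can call it without being rewritten. The endpoint path, input fields, and scoring rule are invented for illustration and are not taken from any particular product.

```python
# A minimal sketch of exposing a hypothetical failure-prediction model through a
# small REST API so an existing maintenance scheduler can consume it.
# Endpoint name, fields, and the scoring rule are illustrative only.
from flask import Flask, request, jsonify

app = Flask(__name__)

def predict_failure_risk(vibration_rms: float, bearing_temp_c: float) -> float:
    """Placeholder for a trained model; returns a risk score between 0 and 1."""
    score = 0.02 * max(vibration_rms - 4.0, 0.0) + 0.01 * max(bearing_temp_c - 70.0, 0.0)
    return min(score, 1.0)

@app.route("/api/v1/failure-risk", methods=["POST"])
def failure_risk():
    # Example payload: {"asset_id": "PUMP-12", "vibration_rms": 6.1, "bearing_temp_c": 82}
    reading = request.get_json(force=True)
    risk = predict_failure_risk(
        float(reading["vibration_rms"]),
        float(reading["bearing_temp_c"]),
    )
    # The maintenance scheduler only needs a stable, versioned contract like this.
    return jsonify({"asset_id": reading["asset_id"], "risk": risk,
                    "recommendation": "inspect" if risk > 0.5 else "monitor"})

if __name__ == "__main__":
    app.run(port=8080)
```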

AI has immense potential to revolutionize engineering practice. It can enhance design optimization, improve maintenance predictability, and elevate overall manufacturing efficiency. However, realizing that potential requires more than algorithms—it requires alignment, trust, and integration.

AI projects often stall when data is fragmented, objectives are unclear, or integration is overly complex. Success demands clear business objectives, high-quality data, and determined leadership. Engineering has always been about solving complex problems through disciplined innovation. Implementing AI effectively is the next evolution of that tradition.

Organizations that combine engineering rigor with AI insights will not only overcome today’s barriers but also define their own future.

PTC’s IoT dream ends with sale of ThingWorx and Kepware https://www.engineering.com/ptcs-iot-dream-ends-with-sale-of-thingworx-and-kepware/ Tue, 11 Nov 2025 17:11:52 +0000 https://www.engineering.com/?p=144526 They gave it their best shot, but now it’s time to focus on another tech trend.

You’re reading Engineering Paper, and here’s the latest design and simulation software news.

PTC has announced that it’s selling Kepware and ThingWorx, its brands for industrial connectivity and Internet of Things (IoT) software, to asset management firm TPG.

You might find this news surprising. PTC has long been bullish on those brands, particularly ThingWorx, which it bought in 2013 for approximately $112 million. The company even adapted the brand name for its erstwhile annual user conference, PTC LiveWorx, the last of which was in 2023.

ThingWorx was the first of several acquisitions for an IoT strategy that would cost PTC half a billion dollars over the next few years. PTC bought Kepware in 2015 for over $100M, machine connectivity provider Axeda in 2014 for $170M, and big data platform Coldlight in 2015 for $105M.

Industry watchers at the time noted the gamble PTC was taking on IoT. In 2015, Engineering.com contributor Verdi Ogewell asked, “Is PTC’s CEO Jim Heppelmann Playing with Fire?” to which the then-CEO replied: “[W]e are confident, and our customers agree, that not only is IoT an exciting new opportunity, but it will also reset expectations in the arenas of CAD, PLM, ALM [applications lifecycle management] and SLM [service lifecycle management].”

Ten years later, however, Heppelmann was gone (replaced by Neil Barua) and the IoT shine was starting to wear off. This summer a rumor was swirling that PTC might be acquired by Autodesk, and writing about that possibility, Engineering.com contributor Lionel Grealou noted that “ThingWorx and Kepware, once central to PTC’s digital transformation narrative, now appear most vulnerable to divestment.”

The Autodesk rumor went nowhere, but ThingWorx and Kepware have indeed gone somewhere. TPG and PTC didn’t disclose the terms of the acquisition, but they expect the transaction to close in the first half of 2026.

“We’re pleased to reach this agreement with TPG as we increase our focus on delivering our Intelligent Product Lifecycle vision for customers through our core CAD, PLM, ALM, and SLM offerings and the ongoing adoption of AI and SaaS,” Barua said in the joint press release.

Let’s check back in ten years to see how the AI play pays off. Speaking of…

Tech Soft 3D launches HOOPS AI for CAD machine learning

Engineering software development kit (SDK) provider Tech Soft 3D has launched HOOPS AI, a new tool that it says is “purpose-built to unlock AI and machine learning for CAD data.”

According to Tech Soft 3D, HOOPS AI is an end-to-end solution for CAD-based machine learning. It ingests and prepares CAD data, provides pre-built neural architectures for CAD tasks like feature recognition, and has built-in visualization tools, among other features. It’s a standalone product that incorporates features from Tech Soft 3D’s HOOPS Exchange (for CAD data translation) and HOOPS Visualize (for CAD rendering).

“HOOPS AI represents a major leap forward for organizations looking to finally harness artificial intelligence for 3D CAD,” said Gavin Bridgeman, CTO of Tech Soft 3D, in the company’s press release. “It provides a complete, reproducible pipeline that makes machine learning workflows with CAD data both practical and scalable.”

Quick hits

  • Chaos has released Vantage 3, the latest update to its real-time visualization platform for AEC. The update adds support for USD, MaterialX, and Gaussian splatting, as well as new camera tracking features, a new material editor, extended texture support, and more.
  • Siemens has introduced Electrical Designer for its TIA Selection Tool Cloud. The new feature aims to simplify main circuit design by automatically selecting components, verifying short circuits, sizing cables, and creating documentation, all in accordance with IEC standards, according to Siemens.
  • Celus, a developer of AI-based electronics design software, and NextPCB, a PCB manufacturer, have announced a strategic partnership that will allow NextPCB customers access to the Celus Design Platform.

One last link

Who figured software licensing could be such a dynamic topic? Here’s Lionel Grealou again with From seats to outcomes: rethinking engineering software licensing.

Engineering Paper will be off for the next two weeks. See you in December.

Got news, tips, comments, or complaints? Send them my way: malba@wtwhmedia.com.

From seats to outcomes: rethinking engineering software licensing https://www.engineering.com/from-seats-to-outcomes-rethinking-engineering-software-licensing/ Mon, 10 Nov 2025 20:35:48 +0000 https://www.engineering.com/?p=144499 From named users to token meters to possible outcome pricing, what does a fair price for design, make, and operate really look like?

As the industry speculates about outcome-based pricing, the central question is whether the value created by digital engineering software can truly be measured, tracked, and monetized. Enterprise platforms that underpin product development—from PDM and PLM to BIM—have moved through distinct licensing eras. Perpetual ownership gave way to named-user subscriptions, which were later augmented by tokenized, consumption-based access. Tokenization complements full-time seats rather than replacing them, providing flexibility for SMEs, project peaks, or intermittent users. Today, vendors are asking whether the next stage should tie price to tangible outcomes.

Autodesk’s roadmap, highlighted during its 2025 Investor Day, exemplifies this evolution. The company has traded static seat counts for finer-grained visibility and elasticity across design, BIM, and manufacturing toolsets. That trajectory is instructive, but it is not proof that outcome-based pricing is broadly deployable. Moving from counting access or usage to measuring value raises practical challenges around attribution, auditability, and the preservation of experimentation and innovation.

As AI increasingly reshapes how engineers, designers, and manufacturers work, it opens the door to new pricing paradigms. Software that can measure or assist in the generation of design, simulation, and operational outcomes could eventually enable pricing aligned with real business value, rather than mere access.

From seats to consumption

Autodesk’s licensing journey can be read in three pragmatic phases:

Phase 1: Named-user subscriptions (post-2020)

Autodesk moved away from perpetual and shared network seats toward identity-tied subscriptions. That change shifted entitlement from pooled licenses to named individuals, forcing procurement and IT teams to redesign entitlement, single-sign-on, and audit processes. It also delivered behavioral telemetry that vendors and customers can now use to understand real engagement patterns.

Phase 2: Token-based consumption (2021+)

With the introduction of Autodesk Flex, the company made consumption explicit: organizations can buy token pools and redeem them for product access when needed. Token-based access complements, rather than replaces, full-time subscriptions—providing flexibility for occasional users, project peaks, or smaller teams that cannot justify full-time seats. Tokenization also delivers metered usage data across PDM, BIM, and manufacturing toolsets, bridging traditional subscriptions and more variable pricing approaches.
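To make the mechanics concrete, the sketch below shows how a token pool might be metered. The products, per-day token rates, and pool size are illustrative assumptions, not Autodesk Flex’s actual rate card; the point is that redemptions both gate access and generate the usage telemetry described above.

```python
# A minimal sketch of token-based consumption metering.
# Products, daily token rates, and pool size are hypothetical assumptions.
from dataclasses import dataclass, field

TOKENS_PER_DAY = {"cad_pro": 7, "simulation": 6, "bim_review": 3}   # illustrative rates

@dataclass
class TokenPool:
    balance: int
    ledger: list = field(default_factory=list)

    def redeem(self, user: str, product: str, days: int = 1) -> bool:
        cost = TOKENS_PER_DAY[product] * days
        if cost > self.balance:
            return False                     # occasional user blocked until the pool is topped up
        self.balance -= cost
        self.ledger.append((user, product, days, cost))   # metered usage data for telemetry
        return True

pool = TokenPool(balance=100)
pool.redeem("j.smith", "simulation", days=2)             # project-peak usage
pool.redeem("a.lee", "bim_review")                       # intermittent user
print(pool.balance, pool.ledger)
```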

Phase 3: Outcome orientation (future, TBC)

Autodesk has signaled interest in moving toward pricing aligned with measurable results as AI and automation become routine in workflows. AI could facilitate this transition by generating or tracking deliverables—design iterations, simulations, or digital twin scenarios—and attributing results to specific workflow steps. Yet a critical question remains: are end-user OEMs asking for outcome-based pricing, and are they ready to accept the complexity, transparency, and governance it demands? Standardizing KPIs, audit processes, and governance frameworks is essential before outcome pricing can move from concept to practice.

In short, Autodesk and peers are assembling the scaffolding—identity, metering, telemetry, and cloud orchestration—but defining what is legitimately billable and provably attributable as an “outcome” remains the hardest challenge.

True outcome-based pricing remains rare

Across major enterprise platforms, the dominant pattern mirrors Autodesk: retire perpetual licenses, adopt subscriptions, transition to SaaS, and layer in consumption mechanics where flexibility is needed. True outcome-based pricing remains virtually non-existent for enterprise software licensing—except in highly bespoke arrangements with custom triggers—or is generally limited to service contracts or narrowly scoped AI-enabled deliverables.

Dassault Systèmes offers outcome-oriented services through BIOVIA contract research, while core 3DEXPERIENCE licenses remain subscription-based. Siemens employs token-based or value-linked licensing for NX and Simcenter advanced capabilities, illustrating hybrid consumption. PTC, AVEVA, and SAP similarly combine subscriptions with consumption-based billing, but outcome-based monetization is largely service-bound.

Consumption models are attractive because they are auditable, scalable, and predictable. Outcome pricing introduces complexity and risk, which many vendors prefer to contain within services or pilot programs. Traditional per-seat licensing—perpetual or subscription-based—is predictable but poorly aligned with actual value creation. Projects expand, teams flex, and simulations run thousands of iterations, yet costs remain static. Vendors are now moving toward usage- and, potentially, outcome-based models where spend is linked to measurable performance, including compute hours, connected devices, or digital twin assets. Cloud and SaaS transitions give vendors greater visibility into both process and data usage. Importantly, outcome-based pricing must not constrain creative exploration or experimentation by imposing prohibitive costs on usage.

Defining and measuring outcomes

Outcome-based pricing depends on measurable, attributable KPIs. Potential candidates include:

  • Validated design deliverables or approved configuration releases.
  • Reduced engineering change turnaround or rework rates.
  • Reuse rates of certified parts and modules.
  • Clash-free coordinated BIM models and verified constructability.
  • Measurable sustainability improvements from design decisions.
  • AI-specific metrics such as AI-generated preliminary designs, automated simulation throughput, issue resolution, recommendation adoption, time saved per iteration, and sustainability impact.

The challenge is not simply counting events but reliably attributing results across humans/teams, AI agents, and workflow systems. Historically, most vendors have linked outcomes to services rather than software licensing because services provide controlled delivery environments, allow shared risk and bespoke contracts, scale via project teams, and reduce buyer uncertainty while preserving flexibility for exploratory work. This explains why consumption-based and tokenized licensing remain the practical reality, while outcome-based billing continues to be niche and largely unproven at scale.

A pragmatic transition path

A realistic migration toward outcome pricing would require incremental steps: standardizing telemetry and consumption metrics to link usage to workflow stages, piloting event-based KPIs with capped financial exposure (e.g., pay-per-validated design created or simulation run), and building hybrid contracts blending subscriptions with outcome bonuses, preserving budget predictability while aligning incentives. AI could accelerate this path by tracking measurable outputs, attributing value, and automating data capture, making outcome-based pricing more credible and auditable.
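As a concrete illustration of the capped-exposure and hybrid-contract steps, the sketch below computes a hybrid invoice: a fixed subscription base plus a per-outcome fee that is capped. The amounts and the “validated design” trigger are hypothetical, not any vendor’s published pricing.

```python
# A minimal sketch of a hybrid contract: fixed subscription base plus a
# per-outcome fee (e.g. per validated design released), with the outcome
# component capped to limit financial exposure. All amounts are hypothetical.
def hybrid_invoice(base_subscription: float,
                   validated_designs: int,
                   fee_per_design: float,
                   outcome_cap: float) -> dict:
    outcome_fee = min(validated_designs * fee_per_design, outcome_cap)
    return {
        "base": base_subscription,
        "outcome_fee": outcome_fee,
        "capped": validated_designs * fee_per_design > outcome_cap,
        "total": base_subscription + outcome_fee,
    }

# A quarter with 42 validated design releases under an assumed rate card.
print(hybrid_invoice(base_subscription=25_000, validated_designs=42,
                     fee_per_design=300, outcome_cap=10_000))
```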

Autodesk’s move from seats to token meters aligns price with usage across the product engineering stack, while the broader industry experiments with outcome-linked models supported by AI, telemetry, and governance. ERP, CRM, and MES software have similarly shifted from perpetual licenses to subscription and consumption-based approaches, though outcome-based pricing remains equally rare. The question remains: as technology and measurement tools advance, will outcome-based licensing become a practical reality?

AI layoffs and the workforce paradox: who builds the future when machines take over? https://www.engineering.com/ai-layoffs-and-the-workforce-paradox-who-builds-the-future-when-machines-take-over/ Mon, 27 Oct 2025 17:24:01 +0000 https://www.engineering.com/?p=144147 Automation is accelerating faster than reskilling. Engineers sit at the fault line between capability evolution and displacement.

When professional services firm Accenture announced it laid off over 11,000 employees in 2025, citing an inability to reskill certain roles for an AI-first model, it was easy to interpret the move as just another corporate restructuring. It was not. It was a clear signal of a deeper shift in how organizations balance human and machine capabilities.

According to the World Economic Forum’s Future of Jobs Report 2025, approximately 92 million roles will be displaced globally by automation by the end of the decade, even as 170 million new jobs are created. On paper, that looks like net growth—but the underlying story is one of structural tension. Routine, coordination-heavy work is disappearing faster than new analytical, interdisciplinary, and AI-augmented roles can emerge. The WEF estimates that nearly four in ten workers worldwide will require reskilling within five years, yet only half of companies have actionable transition plans.

Accenture’s decision reflects this reality. Consultancies are often early barometers of industrial transformation. By exiting low-margin, labor-intensive functions and reinvesting in higher-value AI and analytics capabilities, the firm is effectively reshaping its own operating DNA. The challenge it faces is the same that engineering and manufacturing organizations will soon encounter: reskilling at scale, and at speed, while preserving the domain expertise that defines their competitive edge.

Julie Sweet, Accenture’s CEO, summed it up: “Every new wave of technology has a time where you have to train and retool.”

That time is now.

From cost management to capability economics

The logic driving Accenture’s move is now visible across sectors—from cloud computing and semiconductors to consumer goods and retail. Organizations are restructuring around “capability economics,” shedding legacy roles while doubling down on AI-centric functions.

The trendline is unmistakable: automation is expanding faster than reskilling capacity. The practical question for industrial leaders is: who builds and maintains the systems that replace humans?

The shifting anatomy of work

In engineering and manufacturing, the shift is not just about headcount—it is about redefining the anatomy of work.

Design engineers who once spent days iterating CAD models now use AI-assisted systems to generate hundreds of validated options in minutes. The skill no longer lies in modeling itself, but in selecting the most contextually appropriate design.

Test and validation engineers are moving from script execution to managing predictive simulation models that detect design failures before prototypes exist. Manufacturing engineers are training algorithms to anticipate yield deviations, while R&D scientists use AI agents to simulate complex chemical or material interactions.

These are not incremental changes—they are structural. The competitive advantage no longer comes from having more engineers, but from enabling engineers to do more with AI.

In this new model, knowledge orchestration replaces manual coordination. The future engineer is part technologist, part strategist, capable of interpreting AI outputs and aligning them to product, cost, and compliance objectives. Those unable to adapt to this integrated workflow risk becoming obsolete—not by replacement, but by irrelevance.

A window for small and mid-sized manufacturers

The World Economic Forum warns that 39% of workers globally will need reskilling by 2030, yet most companies have not operationalized how that will happen. This gap creates a new kind of divide—not between humans and machines, but between those who can adapt to AI-augmented work and those whose roles remain fixed in time.

For small and mid-sized manufacturers, the risk is sharper. They cannot afford mass redundancies or multi-year academies, yet they face the same technology curve. The solution lies in micro-credentialing, shadowing programs, and cross-functional rotations that embed AI literacy into product and process teams.

Protecting domain knowledge and critical capabilities must also become a design principle. When experienced engineers leave before their know-how is codified into PLM systems or cognitive data threads, organizations create knowledge vacuums that no machine can fill.

AI can replicate reasoning patterns—but not contextual judgment. That still belongs to humans.

The future of work is not about replacing people with AI. It is about designing a productive coexistence where human creativity, ethics, and contextual awareness guide machine execution.

The leaders who get that right will define the next era of industrial transformation. Those who do not may find themselves automated out of relevance.

A beginner’s guide to data pipelines for manufacturers https://www.engineering.com/a-beginners-guide-to-data-pipelines-for-manufacturers/ Thu, 23 Oct 2025 18:21:04 +0000 https://www.engineering.com/?p=144081 Data pipelines enable manufacturing engineers to simplify complex data management in support of their work.

As manufacturing engineers grapple with more and more data from diverse sources, they implement data pipelines to simplify their increasingly complex data management processes.

What are data pipelines?

Data pipelines are automated systems that manufacturing engineers use to read data from multiple data sources, transform the data and then write it to a destination database.

Examples of transforming data include the following (a minimal sketch appears after this list):

  • Changing key values such as customer code, part number or vendor number to a single set of values.
  • Revising dates to a standard format.
  • Aligning codes and related descriptions to a single set of values.
  • Denormalizing data for improved application performance.
  • Aggregating data to a uniform level of summarization.
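The sketch below illustrates a few of these transformations with pandas (assuming pandas 2.x for mixed-format date parsing); the column names, code mappings, and sample records are illustrative only.

```python
# A minimal sketch of the transformation step in a pipeline, using pandas.
# Column names, code mappings, and sample data are illustrative.
import pandas as pd

VENDOR_CODE_MAP = {"ACME-01": "V1001", "ACME_1": "V1001", "BOLTCO": "V2040"}  # unify key values

raw = pd.DataFrame({
    "vendor": ["ACME-01", "ACME_1", "BOLTCO"],
    "received": ["2025-03-07", "March 7, 2025", "2025-3-9"],
    "qty": [120, 80, 45],
})

transformed = (
    raw.assign(
        vendor=raw["vendor"].map(VENDOR_CODE_MAP),                          # single set of key values
        received=pd.to_datetime(raw["received"], format="mixed").dt.date,   # standard date format
    )
    .groupby(["vendor", "received"], as_index=False)["qty"].sum()           # uniform level of aggregation
)
print(transformed)
```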

Often, the destination is a data lakehouse or data warehouse. From there, the data is used for one or more of the following purposes:

  • Operational applications such as manufacturing planning or control.
  • Data analysis and visualization, including dashboards.
  • AI applications for insights into manufacturing troubleshooting, optimization or forecasting.

Increasing importance of data pipelines

Data pipelines have taken on increasing importance for engineers as a result of the following application trends:

  • Manufacturers are employing more advanced simulation and AI applications. These software trends require access to large volumes of high-quality data. Upgrading data quality, often by comparing values across datastores, is dependent on data pipelines.
  • Manufacturers are integrating their various systems more tightly. This integration trend requires data pipelines to copy selected data from one system to another, improving data sharing.
  • Manufacturing groups see value in data analytics and visualization. This analytics trend requires concurrent access to multiple data sources, which is often dependent on data pipelines.

Business benefits of data pipelines

The key benefit of data pipelines is to make data available in a timely and integrated manner for business processes across many parts of the organization, including engineering. That data availability can:

  • Accelerate product development through more detailed simulation and faster iteration.
  • Improve operations by improving decision quality.
  • Enable AI and machine learning initiatives by ensuring trustworthy data sufficiency.
  • Improve customer experience by reducing the time it takes to complete transactions.
  • Reduce time to market for new products and services.
  • Enhance risk management through more comprehensive risk identification.

More broadly, data pipelines enable the shift-left approach, which moves work earlier in the engineering process. Shift-left focuses on improving digital data availability during the initial stages of product or service planning, design, and development. The benefits of this approach include faster delivery, better quality and lower costs.

Types of data pipelines

Data pipelines operate differently depending on the characteristics of the application:

  • Batch processing data pipelines – Load large batches of data from multiple data sources into a destination database at set time intervals. Organizations typically schedule batch pipelines during off-peak business hours. A good example is aggregating daily production quantities by product from multiple manufacturing facilities overnight for data analysis the following morning (a minimal sketch of such a batch job follows this list).
  • Streaming data pipelines – Continuously process new or revised data generated in real-time by sensors or end-user interactions with an application into a destination database. Most streaming data pipelines operate continuously. A good example is streaming Industrial Internet of Things (IIoT) data from the manufacturing floor to monitoring applications used by engineers.
  • Data integration pipelines – Merge data from multiple data sources into a single unified view in a destination database. Data integration pipelines can operate either as batch or streaming data pipelines. A good example is merging data from various Enterprise Resource Planning (ERP) modules with Customer Relationship Management (CRM) data and external data to build an integrated view of industry production trends.
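The sketch below outlines the nightly batch example above: reading per-facility extracts, aggregating daily production quantities by product, and loading the result into a destination database. File locations, column names, and the SQLite destination are illustrative assumptions.

```python
# A minimal sketch of a nightly batch pipeline: read per-facility extracts,
# aggregate daily production quantities by product, and load the result into a
# destination database. File names, columns, and the table are illustrative.
import glob
import sqlite3
import pandas as pd

def run_nightly_batch(extract_glob: str = "extracts/*_production.csv",
                      destination: str = "analytics.db") -> None:
    frames = [pd.read_csv(path) for path in glob.glob(extract_glob)]   # one extract per facility
    if not frames:
        return                                                         # nothing staged tonight
    daily = (
        pd.concat(frames, ignore_index=True)
          .groupby(["production_date", "product_code"], as_index=False)["quantity"].sum()
    )
    with sqlite3.connect(destination) as conn:                         # destination database
        daily.to_sql("daily_production", conn, if_exists="replace", index=False)

if __name__ == "__main__":
    run_nightly_batch()   # typically scheduled off-peak, e.g. by cron or an orchestrator
```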

Selecting data pipeline software

The capability of available data pipeline software varies widely. The following criteria will help engineers select software that fits the application requirements:

  • Ease of use features that increase developer productivity and thereby reduce development cost.
  • Features that minimize effort to respond to changes in data source schemas.
  • Scalability to handle the estimated current and future data volumes.
  • Ability to connect easily to the required diversity of data access technologies used by data sources.
  • Security features for data encryption and authentication.
  • Automation features that simplify operations.
  • Operational monitoring features that quickly identify problems that require intervention.
  • Vendor track record.
  • Acceptable acquisition and operating costs.

Frequent risks associated with data pipeline implementations

Manufacturing engineers should consider whether the following potential risks affect their data pipeline project and implement suitable mitigations:

  • Data quality shortcomings that are expensive and time-consuming to address.
  • Data integration complexities that require sophisticated and expensive software development.
  • Data loss or accuracy degradation caused by poorly designed data integration software.
  • Ambitious goals or target states with large scopes that are beyond the capacity of the organization.
  • Excessive processing latencies that create data anomalies for streaming data pipelines.
  • Data pipeline performance issues that may occur when large data volumes are involved.
  • Security vulnerabilities that may be introduced when a significant number of data sources are involved.

As manufacturing engineers grapple with increasing data volumes from diverse sources, they implement data pipelines to achieve faster delivery, better quality and lower costs.

PLM leadership and the switch from vision to operations https://www.engineering.com/plm-leadership-and-the-switch-from-vision-to-operations/ Tue, 14 Oct 2025 16:57:21 +0000 https://www.engineering.com/?p=143823 Five PLM developers with new CEOs in 5 years tells us a bit about where PLM is headed.

When Andover, Massachusetts-based PLM developer Aras announced in September 2025 that Leon Lauritsen would step up as Chief Executive Officer, the news barely made a ripple outside PLM circles. Yet, this was not just another internal promotion. It was another piece in a pattern that has reshaped the leadership of nearly every major PDM/PLM vendor in the past five years.

Dassault Systèmes, PTC, Siemens, SAP—and now Aras—all transitioned to new CEOs since 2020. Some were founder successions, others generational handovers. All carry strategic implications for customers, partners, and the industry at large.

This wave of executive change is no coincidence. It reflects a structural shift: from an era dominated by founder-visionaries to one driven by operators. From leaders who inspired long-term visions of digital continuity and Industry 4.0, to leaders mandated to execute, scale, and monetize.

Closing one chapter, opening another

For decades, the PLM landscape has been shaped by charismatic figures with deep roots in R&D.

  • Bernard Charlès personified Dassault Systèmes’ 3DEXPERIENCE narrative, embedding PLM into a vision of virtual twins and the “Industry Renaissance.” His influence stretched beyond technology—he reframed how entire industries approached innovation.
  • Jim Heppelmann became synonymous with PTC’s pivot from CAD to IoT, AR, and eventually SaaS. His tenure was defined by bold bets and transformative acquisitions, pushing PTC into new competitive arenas.
  • Peter Schroer, founder of Aras, brought a disruptor’s mindset—challenging incumbents with an open, low-code PLM model. His legacy set the tone for Aras’s eventual profitability and SaaS transition.
  • Bill McDermott (SAP) and Joe Kaeser (Siemens) were not PLM founders per se, but they embodied larger corporate transformations, with PLM nested in their broader enterprise and industrial visions.

But leadership is not infinite. Succession is inevitable. Charlès and Heppelmann both moved upstairs in 2024, retaining Chairmanship roles while handing the operational CEO mantle to successors with very different profiles. Dassault elevated CFO/COO Pascal Daloz. PTC appointed Neil Barua, a ServiceMax operator with SaaS credentials.

In both cases, boards sent a clear message: the next era is about disciplined execution. Here is a consolidated view of recent CEO changes across the PLM vendor landscape:

  • Dassault Systèmes – Pascal Daloz succeeded Bernard Charlès as CEO in 2024, with Charlès retaining the Chairmanship.
  • PTC – Neil Barua succeeded Jim Heppelmann as CEO in 2024, with Heppelmann remaining Chairman.
  • Siemens – Roland Busch succeeded Joe Kaeser as CEO in 2021.
  • SAP – Christian Klein succeeded Bill McDermott as CEO in 2020.
  • Aras – Leon Lauritsen was named CEO in September 2025, succeeding Roque Martin.

Vendor strategies in transition

The CEO profiles alone reveal much about industry priorities:

  • Dassault Systèmes: Daloz inherits a company rooted in vision. His mandate is to industrialize execution. Where Charlès focused on expanding the conceptual boundaries of PLM and the Industry Renaissance, Daloz must prove that 3DEXPERIENCE SaaS can deliver predictable, recurring returns. Dassault has publicly tied his leadership to doubling EPS by 2029—a clear financial anchor to complement the trillion-euro market aspiration.
  • PTC: Barua arrives as an operator after a decade of Heppelmann’s bold acquisitions and technology bets. Atlas—the SaaS backbone—needs consolidation. Investors expect subscription growth, not new moonshots. Barua’s playbook will likely emphasize integration, efficiency, and monetization, turning PTC into a SaaS cash machine.
  • Aras: Lauritsen takes the helm of a profitable company that successfully navigated its SaaS transition under Roque Martin. His sales pedigree suggests a more strategic commercial approach. Expect Aras to leverage digital thread, low-code PLM, and AI-enabled solutions as competitive differentiators against larger incumbents. This is about scaling adoption and positioning Aras as a disruptive challenger.
  • Siemens: Under Busch, PLM is no longer a standalone product line but a component of Siemens’ Xcelerator platform. The emphasis is on openness, subscription, and ecosystem integration. Busch’s CTO background positions him to blend engineering software with AI, IoT, and cloud-native architecture, steering Siemens toward a holistic digital-industrial stack.
  • SAP: Klein’s leadership has solidified SAP’s pivot to the cloud and AI, with PLM functions integrated into its ERP core. PLM becomes a process capability rather than a discrete solution—bundled into supply chain, manufacturing, and sustainability workflows. This strategy aligns with SAP’s broader cloud transformation and its ambition to make PLM an integral part of enterprise operations.

From vision to execution

The common denominator: a pivot from visionaries to operators. Boards now demand predictable performance, not just inspiring narratives. That explains the backgrounds of the new CEOs: CFOs, COOs, sales leaders, SaaS operators. They represent continuity in vision but new accountability in execution.

For the industry, this signals:

  • Acceleration of SaaS and subscription models: Vendors will push customers harder toward recurring revenue models. Expect subscription-only offerings to dominate.
  • Integration into broader platforms: PLM technology will increasingly be sold as part of ecosystems—such as Xcelerator, 3DEXPERIENCE, or S/4HANA—not as isolated deployments; most vendors do not even refer to “PLM” as it is perceived as a “niche” subject and a topic of recurring philosophical debate among pundits.
  • M&A and portfolio expansion: As I argued in my previous article, operator-CEOs are more likely to rationalize portfolios and pursue acquisitions. PTC’s ServiceMax deal was a harbinger; more consolidation is inevitable.

For engineering and manufacturing leaders, these transitions will shape investment choices. A few points stand out:

  1. Platform lock-in risk: As PLM becomes integrated into broader platforms, customers will need to weigh the benefits of the ecosystem against the risks of vendor dependency.
  2. Shift in value propositions: Vendors will emphasize outcomes—sustainability, AI enablement, digital thread continuity—rather than technical features alone.
  3. Financial rigor: Pricing models, adoption roadmaps, and customer success metrics will align more tightly with vendors’ recurring revenue goals.

The end of one era, the beginning of another

The last two decades of PLM have been defined by visionaries who expanded the scope of what PLM could be—digital twin, Industry 4.0, and the industrial metaverse. The next decade will be defined by operators who deliver those visions at scale.

This transition should not be underestimated. It is more than a personnel change. It is the pivot from one era of PLM to another: from promise to performance, from vision to execution, from founder-led innovation to board-driven predictability.

The open question is whether this shift will unlock the full potential of the Industry Renaissance, or whether operational discipline will come at the expense of disruptive ambition.

Either way, the baton has been passed. A new era of PLM leadership has begun.

How small language models can advance digital transformation – part 2 https://www.engineering.com/how-small-language-models-can-advance-digital-transformation-part-2/ Thu, 09 Oct 2025 17:51:36 +0000 https://www.engineering.com/?p=143700 Let’s compare small language models to large language models for digital transformation projects.

Small language models (SLMs) can perform better than large language models (LLMs). This counterintuitive idea applies to many engineering applications because many artificial intelligence (AI) applications don’t require an LLM. We often assume that more information technology capacity is better for search, data analytics and digital transformation.

SLMs offer numerous advantages for small, specialized AI applications, such as digital transformation. LLMs are more effective for large, general-purpose AI applications.

Let’s compare SLMs to LLMs for digital transformation projects. To read the first article in this series, click here.

Language model construction and operation

SLMs are much cheaper to construct than LLMs because they build a model from much less data. This lower cost of SLMs makes them particularly attractive for digital transformation projects.

SLMs are much cheaper to operate and perform faster than LLMs because they need to process much less data volume to create inferences.
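For teams exploring this in practice, the sketch below shows what hosting a small model locally can look like, assuming the Hugging Face transformers library is installed. The model name is purely illustrative and the output quality of such a tiny model is modest; the point is that inference runs on local hardware, so prompts and proprietary context need not leave the organization.

```python
# A minimal sketch of running a small language model locally, assuming the
# Hugging Face transformers library is installed. The model name is only an
# illustrative small model, not a recommendation for a specific project.
from transformers import pipeline

# A small model like this downloads once and then runs on local hardware.
generator = pipeline("text-generation", model="distilgpt2")

prompt = "Summarize the key data-quality checks before loading supplier records:"
result = generator(prompt, max_new_tokens=60, do_sample=False)
print(result[0]["generated_text"])
```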

Impediments to implementing an SLM

What’s keeping organizations from implementing an SLM for a digital transformation project?

  1. Poor internal digital data quality and accessibility.
  2. Insufficient subject matter expertise to curate the specialized data required for the contemplated SLM.
  3. Too much unstructured paper data.
  4. Insufficient AI technical skills.
  5. Uncertain business case for the digital transformation project.
  6. Immature AI tools and vendor solutions.
  7. Immature project management practices.

How will SLMs and LLMs evolve?

The most likely trends for the foreseeable future of SLMs and LLMs include:

  1. Increasing numbers of organizations will use both SLMs and LLMs as the benefits of AI applications become clearer and more organizations acquire the skills to implement and operate the applications.
  2. Both SLMs and LLMs will grow in size and sophistication as software improves and data quality increases.
  3. Both SLMs and LLMs will improve in performance as software for inference processing improves and incorporates reasoning.
  4. The training costs for SLMs and LLMs will decrease as training algorithms are optimized.
  5. The limits on the number of words in a prompt will increase.
  6. Integration of AI models with enterprise applications will become more widespread.
  7. Hosting SLM-based AI applications internally will appeal to more organizations as the price point is achievable and because it mitigates the risk of losing control over proprietary information.
  8. Hosting an LLM internally will remain too costly and unnecessary when the organization has published and is enforcing an AI usage policy, as described in this article: Why You Need a Generative AI Policy.
  9. The clear distinction between SLMs and LLMs will blur as medium language models (MLMs) or small Large Language Models (sLLMs) are built and deployed.
  10. LLMs will reduce hallucinations by fact-checking against external sources and providing references for their inferences.
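
Picking up trend 7, below is a minimal sketch of what internal hosting can look like with the open-source Hugging Face transformers library. The model name is a placeholder for whatever suitably licensed small model an organization selects, and the prompt is illustrative; the point is that prompts and outputs never leave the organization's own infrastructure.

```python
# Minimal sketch of hosting a small model locally with Hugging Face transformers.
# "distilgpt2" is only an example small model; substitute whatever the organization
# has licensed and validated. Everything runs on local hardware.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")

prompt = "Summarize the key data-quality steps before a digital transformation project:"
result = generator(prompt, max_new_tokens=80, do_sample=False)
print(result[0]["generated_text"])
```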

SLMs offer numerous advantages for digital transformation projects, as these projects often utilize domain-specific data. LLMs are more effective for large, general-purpose AI applications that require vast data volumes.

Footnotes

  1. This high consumption of resources is driving the many announcements about building large, new data centers and related electricity generation capacity. Training an LLM can cost more than $10 million each time.
  2. GPU stands for Graphics Processing Unit. GPU chips are particularly well-suited for the types of calculations that AI models perform in great quantity.
  3. Fine-tuning is a largely manual process, with automated support, in which a trained AI model is further refined to improve its accuracy and performance.
  4. Selection criteria for choosing an AI model are described in this article: Making smarter AI choices.
  5. Performance is also called latency. In either case, it refers to the elapsed time from when the end-user submits the prompt until the AI application's output appears on the monitor.
  6. Inference is the term that refers to the process that the AI model performs to generate text in response to the prompt it receives.
  7. Sometimes called edge devices or edge computing.

The post How small language models can advance digital transformation – part 2 appeared first on Engineering.com.

Optimizing logistics, supply chains, and local manufacturing https://www.engineering.com/optimizing-logistics-supply-chains-and-local-manufacturing/ Thu, 02 Oct 2025 18:39:07 +0000 https://www.engineering.com/?p=143521 How digital transformation can turn supply chains into a strategic advantage.


It seems like the manufacturing sector is forever in the midst of a structural shift. Competitive pressures, supply chain disruptions, and evolving customer expectations are constants, driving companies to continuously rethink how they produce goods and where they produce them. Digital transformation systems—a convergence of advanced analytics, IoT, AI, and cloud-based platforms—are at the center of this current shift.

For engineers and executives alike, these systems are more than IT upgrades. They are tools—sometimes very simple, sometimes quite complex—that reconfigure logistics, streamline supply chains, and make localized manufacturing practical and profitable.

Digital transformation and logistics optimization

Historically, logistics in manufacturing has been reactive to disruption—responding to bottlenecks, freight delays, or warehouse shortages as they arise. Digital transformation turns this reactive model into a predictive and adaptive one.

IoT sensors and connected devices provide real-time visibility, tracking goods in transit, raw material consumption, and production progress. By following the data from these devices, engineers gain line of sight from the factory floor to customer delivery for their raw materials and products. AI-driven routing and scheduling algorithms forecast delays and dynamically reroute shipments or adjust production schedules to maintain throughput.
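
As a minimal sketch of the rerouting idea, the following Python example re-weights a delayed lane in a small transport graph and recomputes the shortest path with the networkx library. The node names, transit times, and the size of the delay are illustrative assumptions.

```python
# Sketch: reroute a shipment around a delayed lane by re-weighting the affected
# edge and recomputing the shortest path. Nodes and transit times (hours) are
# illustrative assumptions.
import networkx as nx

G = nx.DiGraph()
G.add_weighted_edges_from([
    ("Factory", "Hub A", 6), ("Hub A", "Port", 4),
    ("Factory", "Hub B", 8), ("Hub B", "Port", 5),
    ("Port", "Customer DC", 12),
])

baseline = nx.shortest_path(G, "Factory", "Customer DC", weight="weight")

# A sensor or carrier feed reports congestion on the Hub A -> Port leg:
G["Hub A"]["Port"]["weight"] = 30  # the delay pushes this lane's cost up

rerouted = nx.shortest_path(G, "Factory", "Customer DC", weight="weight")
print("Baseline route:", " -> ".join(baseline))
print("Rerouted:      ", " -> ".join(rerouted))
```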

And then there are digital twins, which aren't just for product design. By creating a digital twin of logistics networks, engineers can simulate different transportation strategies, warehouse configurations, or production-distribution trade-offs before making capital commitments.

The result is lower transportation costs, higher on-time delivery rates, and fewer emergency interventions to solve unexpected problems.

Digital transformation and supply chain resilience

In the last five years, supply chain fragility has become more than just a boardroom issue. Digital transformation systems can help bring resilience by unifying fragmented data and enabling proactive decision-making.

Instead of relying on the siloed ERP and supplier systems of the previous decade, companies can use integrated supplier data platforms to build digital ecosystems where quality, lead times, and pricing data are visible in one place. The advanced analytics produced by these digital ecosystems can help users flag single-source dependencies or regions exposed to external risks, such as natural disasters, geopolitical snafus or disease outbreaks.
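
A minimal sketch of one such analytic, flagging single-source dependencies with the pandas library, is shown below. The column names and sample rows are illustrative assumptions about what an integrated supplier data platform might expose.

```python
# Sketch: flag parts that depend on a single supplier. The DataFrame below is a
# stand-in for data pulled from an integrated supplier platform.
import pandas as pd

supply = pd.DataFrame({
    "part":     ["bearing", "bearing",  "housing", "controller", "controller"],
    "supplier": ["Acme",    "Borealis", "Acme",    "Cortex",     "Cortex"],
    "region":   ["US",      "EU",       "US",      "APAC",       "APAC"],
})

suppliers_per_part = supply.groupby("part")["supplier"].nunique()
single_sourced = suppliers_per_part[suppliers_per_part == 1]

print("Single-sourced parts:")
print(single_sourced)
```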

At this stage, either humans or AI systems can find and recommend shifts to alternate suppliers, suggest redistribution of inventory, or gameplan adjustments to current stock levels. For manufacturers, this means fewer surprises on the production line and the bottom line—it’s a little bit of assurance that production won’t grind to a halt due to a single point of failure.

Digital transformation and enabling local manufacturing

Local manufacturing, sometimes called near-market manufacturing, regional manufacturing or nearshoring, simply means manufacturing your products closer to end customers. The strategy reduces transportation costs, shortens lead times, and lowers emissions. The downside is that it introduces complexity through operating multiple regional plants, relying on varied supplier networks, and adjusting to different regulatory environments. Digital transformation systems provide the infrastructure to make this a viable strategy for a larger cross section of businesses.

With a digital infrastructure, standardized production data models help engineers replicate validated processes across sites, ensuring consistency in quality while tailoring to local market needs. Linking and integrating cloud-based manufacturing execution systems (MES) and enterprise resource planning (ERP) software allow plant managers to coordinate production planning across geographies while everyone works from the same set of data. Real-time sales and consumption data flow directly into local production schedules, aligning output with regional market demand.
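
As a minimal sketch of that demand-to-schedule link, the following example derives next-period production targets per region from forecast demand and on-hand inventory. The regions, figures, and the flat 20% safety-stock rule are illustrative assumptions rather than a recommended planning policy.

```python
# Sketch: turn regional demand forecasts and ERP inventory figures into
# next-period production targets. All numbers and the safety-stock rule
# are illustrative assumptions.
def production_target(forecast_units: int, on_hand: int, safety_factor: float = 0.2) -> int:
    """Produce enough to cover forecast plus a safety buffer, net of inventory."""
    required = forecast_units * (1 + safety_factor)
    return max(0, round(required - on_hand))

regions = {
    "EMEA": {"forecast": 1200, "on_hand": 300},
    "NA":   {"forecast": 900,  "on_hand": 950},
    "APAC": {"forecast": 600,  "on_hand": 100},
}

for name, data in regions.items():
    target = production_target(data["forecast"], data["on_hand"])
    print(f"{name}: schedule {target} units")
```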

The outcome is that companies gain the agility to serve markets faster while maintaining engineering rigor and cost control.

Digital transformation and engineering leadership

Digital transformation systems don't come cheap. Aside from the initial cost, they require significant time and staff resources during start-up. However, the next-level strategic and tactical functionality enabled by digital transformation has also never been more accessible. Rapid advancements in technology, compute power and the availability of cloud storage make it attainable to any organization willing to invest the time and money.

These capabilities offset the initial costs by reducing "firefighting" and manual reporting. They provide tools to model, test, and optimize logistics and supply chain variables virtually before committing to a plan. Indeed, for executives and manufacturing engineers, these systems turn supply chains into a strategic advantage, enabling informed investment in new sites, supplier diversification, and sustainable practices. Digital transformation is a lever for both resilience and growth, improving reliability today while positioning the enterprise to compete globally tomorrow.

The post Optimizing logistics, supply chains, and local manufacturing appeared first on Engineering.com.
