PLM/ERP - Engineering.com (https://www.engineering.com/category/technology/plm-erp/)

PLM leadership and the switch from vision to operations
https://www.engineering.com/plm-leadership-and-the-switch-from-vision-to-operations/ | Tue, 14 Oct 2025
Five PLM developers with new CEOs in five years tell us a bit about where PLM is headed.

When Andover, Massachusetts-based PLM developer Aras announced in September 2025 that Leon Lauritsen would step up as Chief Executive Officer, the news barely made a ripple outside PLM circles. Yet, this was not just another internal promotion. It was another piece in a pattern that has reshaped the leadership of nearly every major PDM/PLM vendor in the past five years.

Dassault Systèmes, PTC, Siemens, SAP—and now Aras—have all transitioned to new CEOs since 2020. Some were founder successions, others generational handovers. All carry strategic implications for customers, partners, and the industry at large.

This wave of executive change is no coincidence. It reflects a structural shift: from an era dominated by founder-visionaries to one driven by operators. From leaders who inspired long-term visions of digital continuity and Industry 4.0, to leaders mandated to execute, scale, and monetize.

Closing one chapter, opening another

For decades, the PLM landscape has been shaped by charismatic figures with deep roots in R&D.

  • Bernard Charlès personified Dassault Systèmes’ 3DEXPERIENCE narrative, embedding PLM into a vision of virtual twins and the “Industry Renaissance.” His influence stretched beyond technology—he reframed how entire industries approached innovation.
  • Jim Heppelmann became synonymous with PTC’s pivot from CAD to IoT, AR, and eventually SaaS. His tenure was defined by bold bets and transformative acquisitions, pushing PTC into new competitive arenas.
  • Peter Schroer, founder of Aras, brought a disruptor’s mindset—challenging incumbents with an open, low-code PLM model. His legacy set the tone for Aras’s eventual profitability and SaaS transition.
  • Bill McDermott (SAP) and Joe Kaeser (Siemens) were not PLM founders per se, but they embodied larger corporate transformations, with PLM nested in their broader enterprise and industrial visions.

But leadership is not infinite. Succession is inevitable. Charlès and Heppelmann both moved upstairs in 2024, retaining Chairmanship roles while handing the operational CEO mantle to successors with very different profiles. Dassault elevated CFO/COO Pascal Daloz. PTC appointed Neil Barua, a ServiceMax operator with SaaS credentials.

In both cases, boards sent a clear message: the next era is about disciplined execution. Here is a consolidated view of recent CEO changes across the PLM vendor landscape:

Vendor strategies in transition

The CEO profiles alone reveal much about industry priorities:

  • Dassault Systèmes: Daloz inherits a company rooted in vision. His mandate is to industrialize execution. Where Charlès focused on expanding the conceptual boundaries of PLM and the Industry Renaissance, Daloz must prove that 3DEXPERIENCE SaaS can deliver predictable, recurring returns. Dassault has publicly tied his leadership to doubling EPS by 2029—a clear financial anchor to complement the trillion-euro market aspiration.
  • PTC: Barua arrives as an operator after a decade of Heppelmann’s bold acquisitions and technology bets. Atlas—the SaaS backbone—needs consolidation. Investors expect subscription growth, not new moonshots. Barua’s playbook will likely emphasize integration, efficiency, and monetization, turning PTC into a SaaS cash machine.
  • Aras: Lauritsen takes the helm of a profitable company that successfully navigated its SaaS transition under Roque Martin. His sales pedigree suggests a more strategic commercial approach. Expect Aras to leverage digital thread, low-code PLM, and AI-enabled solutions as competitive differentiators against larger incumbents. This is about scaling adoption and positioning Aras as a disruptive challenger.
  • Siemens: Under Busch, PLM is no longer a standalone product line but a component of Siemens’ Xcelerator platform. The emphasis is on openness, subscription, and ecosystem integration. Busch’s CTO background positions him to blend engineering software with AI, IoT, and cloud-native architecture, steering Siemens toward a holistic digital-industrial stack.
  • SAP: Klein’s leadership has solidified SAP’s pivot to the cloud and AI, with PLM functions integrated into its ERP core. PLM becomes a process capability rather than a discrete solution—bundled into supply chain, manufacturing, and sustainability workflows. This strategy aligns with SAP’s broader cloud transformation and its ambition to make PLM an integral part of enterprise operations.

From vision to execution

The common denominator: a pivot from visionaries to operators. Boards now demand predictable performance, not just inspiring narratives. That explains the backgrounds of the new CEOs: CFOs, COOs, sales leaders, SaaS operators. They represent continuity in vision but new accountability in execution.

For the industry, this signals:

  • Acceleration of SaaS and subscription models: Vendors will push customers harder toward recurring revenue models. Expect subscription-only offerings to dominate.
  • Integration into broader platforms: PLM technology will increasingly be sold as part of ecosystems—such as Xcelerator, 3DEXPERIENCE, or S/4HANA—rather than as isolated deployments. Most vendors no longer even lead with the term “PLM,” which is widely perceived as a niche subject and remains a topic of recurring philosophical debate among pundits.
  • M&A and portfolio expansion: As I argued in my previous article, operator-CEOs are more likely to rationalize portfolios and pursue acquisitions. PTC’s ServiceMax deal was a harbinger; more consolidation is inevitable.

For engineering and manufacturing leaders, these transitions will shape investment choices. A few points stand out:

  1. Platform lock-in risk: As PLM becomes integrated into broader platforms, customers will need to weigh the benefits of the ecosystem against the risks of vendor dependency.
  2. Shift in value propositions: Vendors will emphasize outcomes—sustainability, AI enablement, digital thread continuity—rather than technical features alone.
  3. Financial rigor: Pricing models, adoption roadmaps, and customer success metrics will align more tightly with vendors’ recurring revenue goals.

The end of one era, the beginning of another

The last two decades of PLM have been defined by visionaries who expanded the scope of what PLM could be—digital twin, Industry 4.0, and the industrial metaverse. The next decade will be defined by operators who deliver those visions at scale.

This transition should not be underestimated. It is more than a personnel change. It is the pivot from one era of PLM to another: from promise to performance, from vision to execution, from founder-led innovation to board-driven predictability.

The open question is whether this shift will unlock the full potential of the Industry Renaissance, or whether operational discipline will come at the expense of disruptive ambition.

Either way, the baton has been passed. A new era of PLM leadership has begun.

The data gold rush: how to uncover hidden value in your data
https://www.engineering.com/the-data-gold-rush-how-to-uncover-hidden-value-in-your-data/ | Mon, 29 Sep 2025

The fundamental challenge of an organization’s data is transforming it efficiently and effectively into physical products and services that maximize revenue and maintain competitiveness. Same old, same old, yes? But a host of new tools are helping product developers dig deeper into their terabytes of data to uncover value and actionable insights that result in better products and organizational performance.

In turn, these tools, and the techniques needed to use them cost-effectively, are spurring product developers to find and maximize real-world value and to incorporate it into their products and services.

The resulting gold being mined by these new tools and techniques is not by itself some long-sought breakthrough, but an intermediate step in end-to-end product lifecycle management (PLM). The counterpart step is just as critical: before gold is forged into something of value, it is meticulously assayed to determine its purity.

The valuable insights from data gold mining are often suspected, or even known to exist, yet they remain buried. Managers are unwilling or unprepared to sift through this data and put it to use. Amid resource scarcities and time pressures, few organizations know how to do this effectively or affordably—the perennial headaches of data mining.

Mining for the data gold is often seen as too iffy to promise an ROI. And yet this gold is often related to the biggest challenges facing every product and system: obsolescence and failure, ever-evolving user wants and needs, marketplace disruptions, competition, sourcing, service, and rising costs.

The challenge in data gold mining is no longer just about getting data; it’s about the strategic application of new technologies to transform that data into actionable insights and value. The following areas are key to this challenge:

  • Tightly integrating the most common forms of generative AI and agentic AI into PLM platforms
  • Better techniques for generating AI queries
  • Better analytical tools (descriptive, predictive, and prescriptive) to make sense of the returns from AI queries.
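To make the three analytics layers concrete, here is a minimal sketch in Python. The monthly warranty-claim counts, the threshold, and the recommended action are all invented for illustration; real pipelines would draw on far larger datasets and richer models.

```python
from statistics import mean

# Hypothetical monthly warranty-claim counts for one product line
claims = [12, 15, 14, 19, 22, 25, 28, 31]

# Descriptive: what happened?
print("average claims/month:", mean(claims))

# Predictive: least-squares linear trend to project next month
n = len(claims)
xs = range(n)
x_bar, y_bar = mean(xs), mean(claims)
slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, claims)) / \
        sum((x - x_bar) ** 2 for x in xs)
forecast = y_bar + slope * (n - x_bar)
print("projected next month:", round(forecast, 1))

# Prescriptive: translate the projection into a recommended action
if forecast > 1.5 * mean(claims[:4]):
    print("action: trigger root-cause review before claims escalate")
```

The point is the progression: the same raw numbers are summarized, extrapolated, and finally converted into a decision rule.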

These tools and techniques seek to find gold in the form of hidden similarities among seemingly unconnected and unrelated phenomena. This includes feedback from customers who have little or nothing in common and are using dissimilar products and services.

Gold is also buried in the incongruities in sales orders and rejections, field service, warranty claims, manufacturing stoppages, and supply chain disruptions. Whether the data is structured or unstructured no longer matters. Ditto for whether the data is internal or external. One example is incorporating data from Industrial Internet of Things (IIoT) devices and other connected data-generating sources into the Large Language Models (LLMs) on which AI is trained.
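As a toy illustration of hunting for hidden similarities among seemingly unrelated feedback, the sketch below scores pairwise token overlap across hypothetical customer comments. Production systems would use embeddings or an LLM rather than raw token overlap, but the principle, surfacing the most similar pair for a human to investigate, is the same.

```python
# Hypothetical feedback snippets from customers of unrelated products
feedback = [
    "hinge cracked after repeated opening in cold weather",
    "latch failed in cold storage, plastic became brittle",
    "battery drains quickly when screen brightness is high",
]

def tokens(text):
    return set(text.lower().replace(",", "").split())

def jaccard(a, b):
    """Overlap of two token sets: |A & B| / |A | B|."""
    sa, sb = tokens(a), tokens(b)
    return len(sa & sb) / len(sa | sb)

# Score every pair; high-overlap pairs hint at a shared failure mode
pairs = [(i, j, jaccard(feedback[i], feedback[j]))
         for i in range(len(feedback)) for j in range(i + 1, len(feedback))]
best = max(pairs, key=lambda p: p[2])
print(best)  # the two cold-related complaints score highest
```

Here the hinge and latch complaints, about different products, cluster together, hinting at a common cold-temperature material weakness.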

What is also new is the size and depth of databases searched, as well as how these new tools and techniques overcome the disconnects that plague every organization’s data. These disconnects include bad and useless data formats; errors, ambiguities, and inaccuracies; data buried in departmental silos; legacy data with unsuspected value; and data that is misplaced or lost.
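A minimal profiling pass can surface several of these disconnects before any mining begins. The records, field names, and checks below are invented for illustration; a real tool would apply far more rules across far more sources.

```python
# Hypothetical part records pulled from two departmental silos
records = [
    {"part_no": "A-100", "weight_kg": "2.5", "source": "engineering"},
    {"part_no": "a100",  "weight_kg": "",    "source": "procurement"},    # missing weight
    {"part_no": "A-100", "weight_kg": "2,5", "source": "manufacturing"},  # locale-format number
]

def profile(rows):
    """Flag duplicates, missing values, and bad formats in one pass."""
    issues = []
    seen = {}
    for i, r in enumerate(rows):
        key = r["part_no"].replace("-", "").upper()  # normalize identifier styles
        if key in seen:
            issues.append((i, "possible duplicate of row %d" % seen[key]))
        else:
            seen[key] = i
        w = r["weight_kg"]
        if not w:
            issues.append((i, "missing weight_kg"))
        elif "," in w:
            issues.append((i, "non-standard number format"))
    return issues

for row, problem in profile(records):
    print(row, problem)
```

Even this tiny pass catches the silo-specific identifier styles, the missing value, and the locale-formatted number, exactly the kinds of disconnects that make buried gold unusable.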

All this is aided by digital transformation in all its myriad forms. Digital transformation is increasingly vital to gold mining because it helps users gather terabytes of data into more complete, more accurate, more manageable, and more focused LLMs. Digital transformation can also help users pinpoint what is (still) needed for timely/effective decision making (e.g., data that did not get into a given LLM and should be for subsequent queries).

CIMdata itself is adapting by:

  • Broadening its PLM focus to work with clients’ AI projects, creating an AI Practice with Diego Tamburini as Practice Director and Executive Consultant. He has held key positions at Microsoft, Autodesk, SDRC (now part of Siemens Digital Industries Software), and the Georgia Tech Manufacturing Research Center.   
  • Expanding its work and capabilities in digital transformation that enables PLM—and is enabled by PLM in turn. The ultimate goal is to bring together engineering technology (ET), information technology (IT), and operations technology (OT) at the top of the enterprise.

Managers and staffers who receive these AI and analytics findings have a similarly daunting agenda. They must learn to discern what the gold is telling them, to weed out what has already been simulated, designed, or engineered for production, and to choose and manage the most viable of these tools and techniques.

Effective data governance is crucial to gold mining. I strongly recommend a review of an AI Governance webinar prepared by Janie Gurley, CIMdata’s Data Governance Director, and me, posted July 8, 2025, by Engineering.com: https://www.engineering.com/ai-governance-the-unavoidable-imperative-of-responsibility/

Further insight is available in my most recent Engineering.com posting, available at https://www.engineering.com/in-the-rush-to-digital-transformation-it-might-be-time-for-a-rethink/

Getting the gold into product development offers many potential benefits by uncovering:

  • Causes of product and system failures
  • Unexpected obsolescence in products and systems
  • Users’ new wants and changing needs
  • Early indications of marketplace disruptions
  • Insights into competitors’ strategies
  • Pending shortfalls in sourcing, and alternative sources
  • Likely cost increases, and options for containing them
  • Unrecognized service needs
  • Better methods for production and operations, thanks to AI’s new ability to handle streaming data in LLMs.

Over time, diligent searchers may turn up dozens of connections and correspondences in these nine bullet points. Some will be simple, random coincidences. But many will turn into gold that reveals both opportunities and challenges.

Managing resistance

I urge readers to push the envelope, to find new approaches to everyday tasks, and try new things. While there is always pushback from staff and managers who are already overworked, without encouragement, change will never happen.

The usual response is, “Yes, you’re right, we’ll get to it later.” We caution that they are letting routine tasks get in the way of potential game changers. Ignoring these issues will not make them go away. Yes, obsolescence and failure, user wants and needs, marketplace disruptions, and all the rest (see above) will eventually surface, gain urgency, and become tasks that everyone must address.

Inevitably, this new gold will have to be dug out from data we assumed was useless and then painstakingly engineered into products, services, and forecasts. These values will mandate changes to the organization’s facilities, systems, processes, business partners, and suppliers. And they will have to be communicated to the sales force and distributors.

And there will be resistance to change and its huge costs, which are the underlying themes of this article. Those costs are part of addressing newly uncovered digital gold in a mad scramble amid fears that errors and oversights will place profitability, competitiveness, and job security at risk.

In summary, the quest for digital gold is our modern-day equivalent of the myth of Jason and the Argonauts and their search for the Golden Fleece. Like Jason, we must embark on a perilous journey and overcome countless challenges to seize a prize that promises untold value—transforming raw data into profitable products and services.

The PLM M&A race—from lifecycle to enterprise digital everything
https://www.engineering.com/the-plm-ma-race-from-lifecycle-to-enterprise-digital-everything/ | Fri, 19 Sep 2025
PLM vendors chase everything—but risk losing coherence in the process.

Over the past five years, PLM has evolved from CAD, BOM, PDM, and engineering change management into cross-functional digital-everything platforms. Siemens, Dassault Systèmes, PTC, SAP, and Aras now extend into simulation, ALM, AR, R&D informatics, field service, supply chain procurement, operations, and custom application frameworks.

The question is no longer, “Who controls my product data?” It is, “Which platform can orchestrate my entire digital enterprise—coherently, intelligently, at scale?”

The Autodesk/PTC leak (July 2025) and recent Forrester Wave PLM report for discrete manufacturing highlight just how fluid—and contested—this market has become. Success is no longer about feature breadth alone; it is about ecosystem reach, strategic alliances, integration maturity, and disciplined execution.

Siemens: industrial scale integration challenge

On paper, Siemens has been the boldest acquirer, investing more than $16 billion since 2021. Key moves include Supplyframe ($700M, 2021), Altair ($10.6B, 2024), and Dotmatics ($5.1B, 2025). Together these acquisitions build what Siemens positions as a cognitive industrial platform—a stack that connects R&D informatics, laboratory knowledge, simulation, predictive modelling, supply chain intelligence, and design engineering.

The strategy is clear: customers want cross-domain insights that link product design with sourcing and operations—building from ready-made industrial data scenarios. If Siemens can deliver this continuum, it secures a powerful differentiator. Yet, history warns that breadth can become fragmentation. A platform with too many moving parts risks slipping into silos unless Siemens maintains relentless focus on integration discipline.

Ambition is unmatched; integration discipline is the test.

Dassault Systèmes: deepening the virtual twin

Dassault Systèmes has pursued a more measured but no less ambitious path. Its acquisitions—NuoDB (2020), Proxem (2020), Bloom (2021), Diota (2022), and Contentserv (2025)—aim to enrich the 3DEXPERIENCE platform with cloud scalability, semantic AI, AR, and omnichannel content.

The strategy is to extend the virtual twin beyond products into the broader enterprise, contextualizing insights and enabling simulation across design, operations, and customer experience. This resonates strongly in consumer-driven and regulated industries where brand, compliance, and collaboration drive value.

The risk? Coherence. Specialized acquisitions bring great capabilities but can easily create friction in workflows and user experience. Dassault Systèmes’ competitive edge depends on delivering a seamless platform, not a collection of clever but disjointed modules.

PTC: threading the lifecycle

PTC has played the role of surgical consolidator, strengthening the digital thread rather than overextending the platform perimeter. Its acquisitions—Arena ($715M, 2021), Codebeamer ($280M, 2022), ServiceMax ($1.46B, 2023), and IncQuery Group (2025)—are tightly aligned to SaaS-native lifecycle cohesion.

This approach works particularly well in regulated industries like medtech, aerospace, and automotive, where traceability and compliance are non-negotiable. PTC’s portfolio now spans ALM, PDM, service lifecycle, and SaaS-native collaboration, creating a compelling end-to-end vision.

But here too, the test is execution. Customers will only see value if Windchill, Arena, Onshape, Codebeamer, and ServiceMax operate as one coherent digital thread. Without that, the promise of end-to-end traceability dissolves into tool-switching and integration debt.

SAP: owning the enterprise backbone

SAP has taken a different path. Rather than buying PLM capabilities outright, SAP has doubled down on being the enterprise orchestrator. Its S/4HANA Clean Core strategy (2024) and deepened partnerships with Siemens and PTC reflect a philosophy, perhaps coupled with a marketing strategy: lifecycle data must flow seamlessly across finance, supply chain, and operations.

Broadly speaking, this makes SAP unavoidable for large enterprises seeking enterprise-scale integration, at least on the core ERP side of things. The value proposition lies in connecting the PDM backbone to the entire enterprise nervous system. The risk, however, is that customers see SAP’s model as too ERP-centric or rigid. If SAP fails to demonstrate lifecycle depth alongside enterprise breadth, it risks ceding ground to vendors who blend both.

Aras: the quiet surprise

Aras, often underestimated, emerged as a surprise leader in the recent Forrester Wave for discrete manufacturing PLM. Its Minerva Group acquisition (2022) boosted delivery strength in medtech and electronics, providing domain-specific solutions that accelerate compliance and reduce customization overhead.

Where Siemens and Dassault Systèmes chase scale and vision, Aras delivers agility and configurability. For customers who need compliance-ready, adaptable solutions without the overhead of a massive enterprise platform, Aras is increasingly credible—often adopted as an overlay strategy to extend or modernize existing PLM backbones or fill legacy gaps.

The open question is sustainability: can Aras scale its positioning beyond niche and overlay deployments without diluting the very flexibility that defines its edge?

Coherence vs. complexity

The past five years show that ambition alone is not enough. Siemens bets on industrial breadth, Dassault Systèmes on experiential twins, PTC on a cohesive SaaS thread, SAP on ERP orchestration, and Aras on niche agility.

The market’s winners will be those who deliver platforms that feel seamless and purpose-built, not stitched together from acquisitions. Execution, integration discipline, and adoption matter more than the size of an acquisition pipeline.

This brings us to the key insight: PLM is no longer a lifecycle tool. It is the product innovation backbone of enterprise digital orchestration. The metric of success has shifted. Customers now evaluate platforms based on:

  • Integration maturity: Does the platform deliver real continuity across R&D, engineering, operations, and service?
  • Execution discipline: Can acquisitions, modules, and partner technologies function as one coherent system?
  • Platform coherence: Does the user experience feel unified, or is it fragmented across silos and workflows?
  • Resilience and adaptability: Can the platform respond to emerging AI-native tools, regulatory change, or market disruptions without losing coherence?

Siemens’ industrial breadth may create a cognitive platform unmatched in scale—but only if complexity does not erode usability. Dassault Systèmes’ virtual twin strategy offers immersive insight, yet its value will be judged by workflow consistency and cross-domain intelligence. PTC’s SaaS-native digital thread emphasizes lifecycle discipline—but its promise exists only if all modules operate as one. SAP must show that ERP-centric orchestration adds value beyond lifecycle coverage. Aras must balance growth with its core promise of agile, domain-specific solutions.

The Autodesk/PTC leak is a reminder that disruption in PLM is ongoing, unpredictable, and fiercely contested. M&A headlines are attention-grabbing, but true differentiation lies in execution, coherence, and adoption.

The next wave—AI-native platforms, further consolidation, or deeper PDM/ERP/MES convergence—will test whether vendors can balance ambition with disciplined integration (end-to-end PLM scope). OEMs and other user organizations must resist evaluating vendors on acquisition size or feature count alone. The real question is:

Which platform can deliver a coherent, intelligent, and resilient digital enterprise at scale—backed by a true, ready-made transformation path?

PLM has evolved—and will continue to mature. It is no longer a set of engineering tools. It is the orchestration engine of the enterprise, connecting people, data, and processes across design, development, operations, and service. The vendors who master coherence over complexity will define the next era of digital enterprise transformation. Those who fail will see their platforms fragment, their promises collapse, and their leadership erode.

The race is not about owning product data structure. It is about owning the digital enterprise, with precision, discipline, and foresight.

Register for Digital Transformation Week 2025
https://www.engineering.com/register-for-digital-transformation-week-2025/ | Tue, 09 Sep 2025
Engineering.com’s September webinar series will focus on how to make the best strategic decisions during your digital transformation journey.

Digital transformation remains one of the hottest conversations in manufacturing in 2025. A few years ago, most companies approached digital transformation as a hardware issue. But those days are gone. Now the conversation is a strategic one, centered on data management and creating value from the data all the latest technology generates. The onrush of AI-based technologies only clouds the matter further.

This is why the editors at Engineering.com designed our Digital Transformation Week event—to help engineers unpack all the choices in front of them, and to help them do it at the speed and scale required to compete.

Join us for this series of lunch hour webinars to gain insights and ideas from people who have seen some best-in-class digital transformations take shape.

Registrations are open and spots are filling up fast. Here’s what we have planned for the week:

September 22: Building the Digital Thread Across the Product Lifecycle

12:00 PM Eastern Daylight Time

This webinar is the opening session for our inaugural Digital Transformation Week. We will address the real challenges of implementing digital transformation at any scale, focusing on when, why and how to leverage manufacturing data. We will discuss freeing data from its silos and using your bill of materials as a single source of truth. Finally, we will help you understand how data can fill in the gaps between design and manufacturing to create true end-to-end digital mastery.

September 23: Demystifying Digital Transformation: Scalable Strategies for Small & Mid-Sized Manufacturers

12:00 PM Eastern Daylight Time

Whether your organization is just beginning its digital journey or seeking to expand successful initiatives across multiple departments, understanding the unique challenges and opportunities faced by smaller enterprises is crucial. Tailored strategies, realistic resource planning, and clear objectives empower SMBs to move beyond theory and pilot phases, transforming digital ambitions into scalable reality. By examining proven frameworks and real-world case studies, this session will demystify the process and equip you with actionable insights designed for organizations of every size and level of digital maturity.

September 24, 2025: Scaling AI in Engineering: A Practical Blueprint for Companies of Every Size

12:00 PM Eastern Daylight Time

You can’t talk about digital transformation without covering artificial intelligence. Across industries, engineering leaders are experimenting with AI pilots — but many remain uncertain about how to move from experiments to production-scale adoption. The challenge is not primarily about what algorithms or tools to select but about creating the right blueprint: where to start, how to integrate with existing workflows, and how to scale in a way that engineers trust and the business can see immediate value. We will explore how companies are combining foundation models, predictive physics AI, agentic workflow automation, and open infrastructure into a stepped roadmap that works whether you are a small team seeking efficiency gains or a global enterprise aiming to digitally transform at scale.

September 25: How to Manage Expectations for Digital Transformation

12:00 PM Eastern Daylight Time

The digital transformation trend is going strong and manufacturers of all sizes are exploring what could be potentially game-changing investments for their companies. With so much promise and so much hype, it’s hard to know what is truly possible. Special guest Brian Zakrajsek, Smart Manufacturing Leader at Deloitte Consulting LLP, will discuss what digital transformation really is and what it looks like on the ground floor of a manufacturer trying to find its way. He will chat about some common unrealistic expectations, what the realistic expectation might be for each, and how to get there.

Data landscapes and the product lifecycle
https://www.engineering.com/data-landscapes-and-the-product-lifecycle/ | Tue, 12 Aug 2025
The hidden life of data clutter in half-forgotten digital closets is coming to an end.

The torrent of data and information flowing through organizations is relentless. Onward in variety, outward in reach and spread, and upward in quantity—all at higher velocity. All organizations have petabytes of data in countless forms and formats flowing ceaselessly into and through a complex ecosystem of applications, systems, and platforms.

A comprehensive new vision for this is emerging: the Data Landscape, a graphical ecosystem built with or extracted from all of the enterprise’s databases with state-of-the-art data mapping tools and solutions.

These tools and solutions can unlock, gather, track, manage, analyze, and use anything and everything digital in the enterprise, regardless of size and structure: images, text files, videos, CAD files, CAM toolpaths, analyses of many kinds, inspection specifics, and so on.

The best of these tools can access virtually all of the enterprise’s data and metadata, even if it is buried in obsolete legacy formats and systems, or stashed in the “silos” maintained by nearly every department and business unit.

In short, the hidden life of data clutter in half-forgotten digital closets is coming to an end as it is gathered and mapped into Data Landscapes. What this can potentially do for your enterprise’s product development and product lifecycle management activities is significant, although it won’t be as easy as a finger snap.

I am always cautious about using the term “revolution,” but for now, it is the best description for what is happening in the world of data, information, and knowledge. We at CIMdata now see that Data Landscape mapping tools and solutions offer dramatically better ways of managing, securing, searching, finding, accessing, and extracting value from the extended enterprise’s mountain of data.

Fortunately, these Data Landscape tools and solutions are rapidly becoming more effective and less difficult to use. And they foreshadow major implications for product lifecycle management (PLM) activities from concept through life, by:

  • Enabling an organization’s digital threads to bring more data into the digital twins to which they are connected.
  • Supporting more comprehensive and more detailed digital twins.
  • Extending end-to-end lifecycle connectivity to the entire Data Landscape of the extended enterprise, not just product lifecycles.    

These new Data Landscape tools are also beginning to upend time-tested processes for developing bills of materials (BOMs), which are the roadmaps of every new product’s creation, development, production, and service. Soon BOMs will be extended “back” to ideation and what I like to call “the voice of the customer” and forward through field service and on to recycling, remanufacturing, repurposing, or disposal.

There are also major product-development implications in Data Landscape mapping for digital transformation, as we now see that so much more has to be transformed and made usable from the earliest stages of the product lifecycle.

The very name “Data Landscape” points us towards a new approach to mapping, one that far exceeds the traditional conceptual, logical, and physical models that we use to align data with business strategies and goals. Mapping in this new form must cover all of the extended enterprise’s data and its sources, transformations, and destinations … profiling and cataloguing a digital graphical geography of data, applications, tools, systems, and so on.
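The mapping described above, covering data sources, transformations, and destinations, can be pictured as a lineage graph. The sketch below is purely illustrative (all system names, such as `cad_vault` and `plm_db`, are hypothetical), showing how cataloguing each hop lets you trace where data flows downstream:

```python
# Illustrative sketch (hypothetical system names): a minimal Data Landscape map
# as a lineage graph of sources, transformations, and destinations.
from collections import defaultdict

class LandscapeMap:
    def __init__(self):
        # source -> list of (transformation, destination) hops
        self.edges = defaultdict(list)

    def register(self, source, transformation, destination):
        """Catalogue one hop: data flows from source, through a
        transformation, into a destination system."""
        self.edges[source].append((transformation, destination))

    def trace(self, source):
        """Return every downstream destination reachable from a source,
        with the chain of transformations applied along the way."""
        paths, stack = [], [(source, [])]
        while stack:
            node, chain = stack.pop()
            for transform, dest in self.edges.get(node, []):
                paths.append((dest, chain + [transform]))
                stack.append((dest, chain + [transform]))
        return paths

landscape = LandscapeMap()
landscape.register("cad_vault", "extract_metadata", "plm_db")
landscape.register("plm_db", "flatten_bom", "erp")
landscape.register("erp", "aggregate_costs", "analytics_mart")

for dest, chain in landscape.trace("cad_vault"):
    print(dest, "via", " -> ".join(chain))
```

Real mapping tools do far more (profiling, cataloguing, format conversion), but the core artifact is the same: a queryable graph of where data lives and how it moves.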

Without Data Landscape mapping, most of the enterprise’s data remains inaccessible and otherwise useless, denying the enterprise the opportunity for more informed decisions, comprehensive analyses, and competitive products and services.

Defining the data landscape

What is in a Data Landscape? Anything and everything digital, structured or unstructured, formatted data, raw data, and anything in between. Data Landscapes contain sources, transformations, models, transactional databases, analyses (and analytical tools), programming languages, and everything else that anyone has saved. It’s far easier to say what’s not in a Data Landscape: in theory, nothing.

Unlocking new value

Data Landscapes are coming into focus as being capable of creating significant new value in product lifecycle management activities. Thanks to mapping and digital transformation, the possibilities include vastly expanded analyses for more profound product insights; savvier and speedier decision-making about products, features, and capabilities; better and higher quality production; and better end-to-end lifecycle support.

And all of this is constantly evolving, growing, and undergoing change, with updates, transformations, and sometimes even replacement—a powerful justification for implementing effective and ongoing data governance, as well as for frequent remapping.

For everyday use, these tools must be integrated with the enterprise’s dozens of technology stacks, which are used to create, collect, model, transform, store, and analyze data and information for specific purposes or business processes. The fact that the term “Data Landscape” is itself gaining attention shows that these tools are working.

Dozens of tools for various mapping processes can be downloaded from solution providers in the Data Landscape marketplace—IBM and Microsoft, of course, as well as Orion Governance, FanRuan Software, Hyve Solutions, AtScale, Zendata, and many others.

In terms of PLM, what tools are we talking about?

In terms of PLM activities, however, what we connect Data Landscape mapping tools to is highly important. Aside from PLM and the toolsets in use for digital transformation, this means whatever is used to generate BOMs (with or without ERP), manufacturing execution systems (MES), purchasing (including interfaces to component suppliers, contractors, and partners), supply chain maintenance, and service-oriented solutions (e.g., those focused on maintenance, repair, and overhaul).

These connections foreshadow many alterations (and even a few upheavals) in the established processes of developing new products. As with any new technology, users and managers inevitably have to address some tough challenges, such as how to:

  • Choose which of the many mapping tools to implement
  • Connect and integrate those tools
  • Use them effectively
  • Evaluate those tools’ outputs, whether graphical or in some other form
  • Integrate these new tools’ outputs with the enterprise’s data already in use throughout the product lifecycle, both upstream and downstream.

The three pillars of a successful data landscape

To ensure that a Data Landscape and its mapping tools enhance PLM activities, three essential elements must be in place:

  • Training, so that users understand the Data Landscape and its many subsystems and components, their uses, and their differences. Data Landscape websites list data warehouses, lakes and lakehouses, platforms, meshes, stacks, marts, and even swamps. A Data Landscape can be seen as the ultimate representation of data throughout the connected environment of an Internet of Things.

  • Adoption and use of artificial intelligence (AI), which is now part of everything digital, including Data Landscape mapping tools and systems—especially those using the enterprise’s Data Stacks. A well-mapped Data Landscape can be seen as the ultimate corpus for grounding the Large Language Models (LLMs) behind AI. The enormous amount and variety of digital data (and computer clutter) make it essential to know how to find and fix AI’s errors and its occasional hallucinations.

  • Data governance, which is essential for any worthwhile use of Data Landscapes. Data governance policies, standards, rules, and their supervision always do the same few vital things. Amid complexity and constant, rapid change, data governance ensures the security, quality, and integrity of data access and use. Data governance is crucial for avoiding regulatory compliance failures, which carry the risk of often hefty fines and reputational damage that may take years to overcome.
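The governance pillar above comes down to a small set of enforceable, auditable rules. A minimal sketch (roles, classifications, and asset names are all hypothetical) of what such a policy check might look like in code:

```python
# Illustrative sketch (all names hypothetical): a minimal data-governance gate
# that controls access to Data Landscape assets by role and data classification,
# and records every decision for later audit.
POLICY = {
    # role: classifications that role may read
    "engineer":  {"public", "internal", "design"},
    "supplier":  {"public"},
    "regulator": {"public", "internal", "design", "restricted"},
}

def can_access(role, classification):
    """Return True if the policy allows this role to read data
    of the given classification."""
    return classification in POLICY.get(role, set())

def audit_access(role, asset, classification, log):
    """Grant or deny access, appending the decision to an audit log."""
    allowed = can_access(role, classification)
    log.append((role, asset, classification,
                "granted" if allowed else "denied"))
    return allowed

log = []
audit_access("engineer", "bom_rev_b", "design", log)
audit_access("supplier", "bom_rev_b", "design", log)
print(log)
```

Production governance layers are policy engines with far richer rules, but the principle is the same: every access decision is explicit, centrally defined, and leaves a trace.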

Impacts of data landscape tools on PLM

As we observed at the outset, the maturation of digital tools to work with virtually all of the enterprise’s data presents what might be a once-in-a-generation opportunity to create more complete and more accurate BOMs and use them to overhaul the entire product lifecycle.

In effect, we will be gaining useful and reliable access to virtually all of the enterprise’s data and information, no matter what it is or where it is.

This is why we see that the new and improved Data Landscape mapping tools and solutions offer exponentially better ways of searching, finding, and accessing whatever is needed from the extended enterprise’s data, even if most of it is never actually used. PLM processes will never be the same.

The post Data landscapes and the product lifecycle appeared first on Engineering.com.

Nvidia Omniverse coming to PTC Creo and Windchill https://www.engineering.com/nvidia-omniverse-coming-to-ptc-creo-and-windchill/ Tue, 05 Aug 2025 15:34:07 +0000 https://www.engineering.com/?p=141889 Plus PTC pledged itself to the Alliance for OpenUSD, and more design and simulation software news.

This is Engineering Paper, and here’s the latest design and simulation software news.

PTC has expanded its partnership with Nvidia. The Boston-based developer, which not long ago was rumored to be up for sale, says it will integrate Nvidia Omniverse technologies into Creo and Windchill.

“By connecting Windchill with Omniverse’s real-time, photorealistic simulation development platform, teams will be able to visualize and interact with the most current Creo design data in a shared, immersive environment,” reads PTC’s press release.

PTC has also joined the Alliance for OpenUSD (AOUSD), a group working to advance the Pixar-created OpenUSD file framework used in Nvidia Omniverse. Nvidia was one of the five founding members of the AOUSD alongside Pixar, Adobe, Apple, and Autodesk. In June, engineering software developer Tech Soft 3D also announced a collaboration with Nvidia and joined the AOUSD.

“By deepening our collaboration with Nvidia and joining the Alliance for OpenUSD, we’re giving our customers the ability to incorporate design and configuration data in a real-time, immersive simulation environment,” said Neil Barua, president and CEO of PTC, in the press release. “The integration of Omniverse technologies within Creo and Windchill will enable teams to accelerate development, improve product quality, and collaborate more effectively across the entire product lifecycle.”

Desktop Metal files for Chapter 11

The story of 3D printing company Desktop Metal has reached Chapter 11.

“Barely more than two years after Stratasys made a $1.8B bid for it and just a few weeks after Nano Dimension acquired it for a fraction of that price, Desktop Metal has filed for bankruptcy protection under Chapter 11 of the U.S. Bankruptcy Code,” wrote Engineering.com 3D printing editor Ian Wright in his coverage of the news.

“After much speculation about the fate of the beleaguered metal AM company… this looks like the end of what was once the darling of investors and 3D printing enthusiasts alike,” Ian wrote.

For more details, read the full article on Engineering.com: Desktop Metal files for Chapter 11.

ITC goes dark with IntelliCAD 14.0

The IntelliCAD Technology Consortium announced the release of IntelliCAD 14.0, the latest version of the member-funded CAD development platform.

IntelliCAD 14.0 introduces a dark mode, which in my opinion is an accessibility setting that belongs in every software package (I’m baffled by extremely popular applications that still lack the option—I’m looking at you, Google Docs).

“While dark is now the default, you can also choose from light or gray themes,” according to ITC’s video overview of IntelliCAD 14.0.

Screenshot of IntelliCAD 14.0. (Image: IntelliCAD Technology Consortium.)

The new release also adds faster performance for common functions including copy, break, move, and union, as well as detachable drawing windows, support for Autodesk Revit 2025 files, API enhancements, and more.

“IntelliCAD 14.0 reflects our commitment to listening to real-world feedback from our members and delivering the tools they need most,” said Shawn Lindsay, president of the IntelliCAD Technology Consortium, in the release announcement. “We remain focused on providing an open, dependable platform that developers can build on—and on offering a powerful alternative in the CAD software market.”

One last link

Engineering.com executive editor Jim Anderton’s latest episode of End of the Line discusses the rapidly changing technology of warfare: The war in Ukraine: The end of armor as we know it.

Got news, tips, comments, or complaints? Send them my way: malba@wtwhmedia.com.

From software 3.0 to PLM that thinks https://www.engineering.com/from-software-3-0-to-plm-that-thinks/ Tue, 29 Jul 2025 20:51:41 +0000 https://www.engineering.com/?p=141728 PLM is no longer just a system of record—it’s an ecosystem that learns with engineers to create “conversational” product innovation.

As Andrej Karpathy—former Director of AI at Tesla and a leading voice in applied deep learning—explains in his influential Software 3.0 talk, we are entering a new era in how software is created: not programmed line-by-line, but trained on data, shaped by prompts, and guided by intent.

This shift replaces traditional rule-based logic with inferred reasoning. Large Language Models (LLMs) no longer act as tools that execute commands—they behave more like collaborators that understand, interpret, and suggest. This is not just a software evolution—it’s a new operating paradigm for digital systems across industries.

This evolution challenges how we think about enterprise systems designed to support and enable product innovation—particularly PLM, which must now move beyond static data foundations and governance to embrace adaptive reasoning and continuous collaboration.

Legacy PLM: governance without understanding

PDM/PLM and similar systems have long played a foundational role in industrial digitalization. Built to manage complex product data, enforce compliance, and track design evolution, they act as structured systems of record. But while they govern well, they do not reason.

Most PLM platforms remain bound by rigid schemas and predefined workflows. They are transactional by design—built to secure approvals, ensure traceability, and document history. As such, PLM has often been seen as a brake pedal, not an accelerator, in the innovation process.

In today’s increasingly adaptive R&D and manufacturing environments, that model is no longer sufficient. Software 3.0 introduces a cognitive layer that can elevate PLM from reactive gatekeeping to proactive orchestration—but “only if we keep AI firmly on a leash” as Karpathy put it.

PLM that thinks

Imagine a PLM ecosystem that does not simply route change requests for approval—but asks why the change is needed, how it will impact downstream functions, and what the best alternatives might be.

This is the promise of LLM-powered PLM:

  • Conversational interfaces replace rigid forms. Engineers interact with the ecosystem through natural language, clarifying design intent and constraints.
  • Reasoning engines interpret the implications of product changes in real time—spanning design, sourcing, compliance, and sustainability.
  • Agentic capabilities emerge: AI can suggest design modifications, simulate risks, and even initiate cross-functional coordination.

PLM becomes an intelligent co-pilot—responding to prompts, adapting to context, and surfacing insight when and where it matters most. The shift is from enforcing compliance to guiding innovation—while maintaining strict guardrails to prevent runaway AI decisions.
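Keeping that co-pilot on a leash means AI suggestions never reach an engineer, let alone a workflow, without passing deterministic checks. The following toy sketch (the "LLM" is stubbed out, and all part names and limits are hypothetical) illustrates the pattern:

```python
# Illustrative sketch (hypothetical, with the model call stubbed out): an
# AI-suggested engineering change must pass a deterministic guardrail drawn
# from the released spec before it is routed to a human.
def llm_suggest_change(prompt):
    """Stand-in for a real model call; returns a structured suggestion."""
    return {"part": "bracket-201", "action": "reduce_thickness", "new_mm": 1.2}

GUARDRAILS = {
    # hard constraints taken from the released specification
    "bracket-201": {"min_thickness_mm": 1.5},
}

def validate(suggestion):
    """Reject suggestions that violate hard engineering constraints."""
    limits = GUARDRAILS.get(suggestion["part"], {})
    if suggestion["action"] == "reduce_thickness":
        floor = limits.get("min_thickness_mm", 0.0)
        if suggestion["new_mm"] < floor:
            return False, (f"thickness {suggestion['new_mm']}mm "
                           f"below floor {floor}mm")
    return True, "ok"

suggestion = llm_suggest_change("lighten the mounting bracket")
ok, reason = validate(suggestion)
print("route to engineer" if ok else f"blocked: {reason}")
```

The key design choice is that the guardrail is rule-based and auditable, not another model: the AI proposes, the spec disposes.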

The cognitive thread

Software 3.0 does more than enable conversational PLM—it rewires how digital continuity is managed across the lifecycle.

Beyond the digital thread, we now see the rise of a cognitive thread: a persistent, adaptive logic that connects design intent, regulatory constraints, manufacturing realities, and in-market feedback.

  • Decisions are traced not just by timestamp, but by reasoning path.
  • Data is interpreted based on role, context, and business objective.
  • AI learns from past projects to anticipate outcomes, not just report on them.

This transforms PLM into a system of systems thinking—an orchestration layer where data, knowledge, and human expertise converge into continuous learning cycles. It reshapes how products are developed, iterated, and sustained—with AI kept in check through rigorous validation.
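Tracing decisions "by reasoning path" implies a record that stores not just the outcome and timestamp but the ordered evidence behind it. A minimal sketch of such a record (the schema and all identifiers are hypothetical, for illustration only):

```python
# Illustrative sketch (hypothetical schema): a cognitive-thread record that
# captures *why* a decision was made, so the reasoning can be audited later.
from dataclasses import dataclass, field

@dataclass
class Decision:
    item: str                     # e.g. a requirement or part id
    outcome: str                  # what was decided
    reasoning_path: list = field(default_factory=list)  # ordered evidence
    role: str = "unknown"         # who (or which agent) decided

thread = [
    Decision("REQ-42", "relax tolerance to 0.05mm",
             ["field returns show no failures at 0.05",
              "supplier yield +8%"],
             role="quality_engineer"),
    Decision("REQ-42", "approved",
             ["prior decision traced",
              "no compliance rule violated"],
             role="ai_agent"),
]

def why(item, log):
    """Reconstruct the full reasoning path for an item across the thread."""
    return [step for d in log if d.item == item for step in d.reasoning_path]

print(why("REQ-42", thread))
```

A real implementation would persist these records alongside the digital thread; the point is that the reasoning path becomes first-class, queryable data rather than something buried in meeting notes.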

Preventing PLM hallucination and entropy

With intelligence comes risk. Reasoning systems can misinterpret context, hallucinate outputs, or apply flawed logic. In safety-critical or highly regulated sectors, this is not a theoretical concern—it is a business and ethical imperative.

We must now ask:

  • How do we validate AI-generated recommendations in engineering workflows?
  • How do we trace the logic behind autonomous decisions?
  • How do we ensure adaptive systems do not drift from controlled baselines?

As PDM/PLM/ERP/MES and other enterprise systems begin to think, new governance models must emerge—combining ethical AI frameworks with domain-specific validation processes. This is not just about technology. It is about trust, accountability, and responsible transformation.

Software 3.0 marks a turning point—not just for software developers, but for product innovators, engineers, and digital transformation leaders. It redefines what enterprise systems can be. In this new landscape, PLM is no longer the place where innovation is recorded after the fact. It becomes the place where innovation is shaped in real time—through intelligent dialogue, adaptive reasoning, and guided exploration—all while keeping AI safely on a leash.

Are we ready to collaborate with a PLM ecosystem that learns with products—but only within trusted boundaries? Because the next generation of product innovation will not be built on forms and workflows. It is very likely that it will be built on conversation, interpretation, and co-creation with validated AI assistance.

Autodesk mulling PTC takeover to create industrial software juggernaut https://www.engineering.com/autodesk-mulling-ptc-takeover-to-create-industrial-software-juggernaut/ Fri, 11 Jul 2025 19:07:32 +0000 https://www.engineering.com/?p=141287 The $20B bet could reshape the future of engineering software. We analyze the product mix, strategic fit and how it will affect engineers and end users.

Autodesk is reportedly considering the acquisition of PTC in what could be its largest-ever deal, rumored to be valued at more than $20 billion. Although it is still in early stages and may not materialize, the potential impact is already generating significant market and industry attention. Reports from Bloomberg, Reuters and others suggest the transaction could be structured as a mix of cash and stock, reflecting both the ambition and complexity of such a transformative move.

This is not just a transaction between two legacy software firms. It could represent a redefinition of the industrial software landscape: Autodesk, long focused on democratizing design via the cloud, meeting PTC, grounded in enterprise-scale digital transformation for manufacturers. The overlap is clear. The complementarity? Still to be proven.

Strategy, scale, and ambition

While both companies are respected in their domains, they differ significantly in size, culture, and strategic posture:

  • Autodesk reported more than $6.1 billion in FY2025 revenue (fiscal year ending January 2025), with a market cap of approximately $66.6 billion.
  • PTC reported $2.3 billion in FY2024 revenue (fiscal year ending September 2024), with a current market cap around $17 billion following the takeover speculation bump.

Autodesk is more than twice PTC’s size in revenue and has traditionally focused on AEC, creative design, and mid-market engineering. PTC, in contrast, is deeply rooted in industrial manufacturing, PLM, and IoT.

This is not a merger of equals. It reflects Autodesk’s strategic ambition to move deeper into the enterprise market. With PTC, Autodesk would gain credibility and capability in core enterprise workflows. This would mark a step change for Autodesk’s portfolio maturity—from cloud-native tools for SMBs to enterprise-scale digital thread and product lifecycle platforms.

Yet, the companies have very different go-to-market approaches. Autodesk has built its SaaS business around high-volume channels, while PTC’s sales motion is enterprise direct. That contrast creates opportunity, but also serious integration risk.

Market reactions and community feedback

PTC shares surged over 17% on July 9 after Bloomberg reported Autodesk was exploring a bid. They fell 7.5% the next day. Autodesk’s stock declined nearly 8% as investors assessed the strategic rationale and integration risks. These market movements highlight the scale and sensitivity of such a transformative bet.

In professional forums and industry circles, the deal has sparked debate. Many experts have expressed skepticism about strategic alignment. They point out potential redundancy between core CAD offerings (Creo vs. Inventor/Fusion 360) and PLM solutions (Windchill vs. Fusion Manage). Others note Autodesk’s limited experience in large, complex integrations, and voice concerns about its ability to manage an enterprise-scale acquisition.

One clear thread: this would be a high-risk, high-reward move. Autodesk has never made a deal of this magnitude. It could unlock new verticals—but also strain its operating model and alienate parts of its existing base.

Analysts also speculate on regulatory hurdles. The CAD and PLM market is already concentrated. A deal of this scale may face antitrust scrutiny, particularly in the US and Europe. Financing would also be a stretch, and shareholders will expect a well-articulated synergy plan. The rumored price tag of about $20 billion raises the stakes further.

Product portfolio and strategic fit

Autodesk has invested heavily in Autodesk Platform Services (APS), with Fusion 360 acting as its design collaboration anchor. PTC’s portfolio is broader in manufacturing and enterprise engineering, with Windchill+, Arena (PLM), Onshape (cloud CAD), and ThingWorx/Kepware (IoT/edge connectivity).

While the combination would offer end-to-end coverage from SMB to enterprise, the breadth creates duplication. Customers may worry about future roadmap clarity. Will Autodesk continue Fusion Manage or prioritize Windchill+? Can Creo and Inventor coexist? And does Autodesk have a plan for ThingWorx and Kepware, which do not align with its core portfolio?

Most experts believe those IoT assets will be divested. That opens new opportunities for companies like Rockwell, Schneider Electric, or Emerson—firms more focused on industrial automation and edge connectivity. These decisions will send strong signals to the market about Autodesk’s long-term intent.

Beyond the technology, there is a broader question: is Autodesk acquiring products, a platform, or an extended customer base? The answer is likely all three. It will determine how much integration effort is required—and how much customer disruption it might cause.

Execution and leadership will define the outcome

The true test will be execution. Autodesk has evolved into a cloud-first player over the past decade, but it has little experience with large-scale enterprise integrations. PTC, though smaller, brings a strong industrial culture and a distinct go-to-market strategy that may not align with Autodesk’s creative, SMB-rooted DNA.

Cultural integration, pricing model alignment, and partner ecosystem rationalization will be complex. If poorly managed, these differences could erode customer trust and delay value realization.

Leadership will play a pivotal role. PTC’s new CEO, Neil Barua, took over in February 2024 from long-time chief Jim Heppelmann. Barua, formerly CEO of ServiceMax (acquired by PTC in 2022), brings a sharper focus on customer-driven innovation and return on investment. His strategic priorities—and openness to integration—could influence how the two companies align.

ThingWorx and Kepware, once central to PTC’s digital transformation narrative, now appear most vulnerable to divestment. Their fate may define Autodesk’s long-term industrial strategy. Rockwell Automation’s recent exit from its $1B stake in PTC in August 2023 further suggests shifting alliances and possible competitive realignments in the broader industrial software ecosystem.

This deal, if it proceeds, will not go unnoticed. Siemens, Dassault Systèmes, and other PLM leaders are likely already reassessing their positions. A successful integration would escalate the digital thread race. A failed one could reinforce the limits of M&A in an already saturated market.

In the end, the acquisition is just the beginning. The real transformation will be defined by what Autodesk chooses to keep, integrate or let go.

Editor’s update, July 14, 2025: In the days after this story was published, Autodesk declared in a regulatory filing that this deal is no longer on the table and that it will instead focus on more strategic priorities, as reported by Reuters.

Siemens Realize LIVE 2025: AI-powered digital transformation is the path forward https://www.engineering.com/siemens-realize-live-2025-ai-powered-digital-transformation-is-the-path-forward/ Wed, 09 Jul 2025 20:15:56 +0000 https://www.engineering.com/?p=141233 Complexity is not a problem to solve, it’s an advantage to harness.

At Siemens Realize LIVE Europe 2025, the message was clear: complexity is not a problem to solve—it’s an advantage to harness. AI, digital threads, and domain-specific PLM are no longer future concepts; they are converging into operational realities.

This year’s event illustrated how Siemens is doubling down on the strategy it has long articulated: enabling faster innovation by embedding intelligence, integration, and collaboration into the digital backbone of manufacturing and product development.

Record attendance at Siemens Realize LIVE 2025 in Amsterdam, with Jones discussing the key to mastering complexity; and of course, that includes the use of AI. (Image: Siemens.)

AI as a strategic accelerator

AI at Siemens has moved beyond pilots—now, the race is on for scale. As Bob Jones, Chief Revenue Officer and EVP of Global Sales and Customer Success, put it: “It’s not just about adopting AI—it’s about being the fastest to adopt it.” Speed matters.

Jones emphasized mastering complexity through AI. Siemens is embedding intelligence across the Xcelerator portfolio to boost speed, clarity, and confidence in decision-making:

  • Ask, find, act: Teamcenter Copilot and AI Chat allow users to query data in natural language, surfacing insights instantly.
  • Fix faster: RapidMiner spots quality issues and recommends improvements.
  • Make documents dynamic: AI extracts procedures from static PDFs to accelerate training and compliance.
  • Automate handoffs: Teamcenter, Rulestream, and Mendix streamline design-to-order workflows.

Joe Bohman, EVP of PLM Products, summed it up: Siemens is “training AI in the language of engineering and manufacturing.” This is not about generic automation—it is about embedding domain-specific intelligence aligned with physics, lifecycle context, and operational constraints.

Reinforcing that intent, Siemens appointed Vasi Philomin—former AWS VP of Generative AI—as EVP of Data and AI, reporting to CTO Peter Koerte. At Amazon, Philomin launched Amazon Bedrock and led foundation model development. His arrival signals Siemens’ commitment to scaling industrial AI as a foundational capability—not a feature—across the Xcelerator suite.

From static data to dynamic digital threads

A deeper shift is underway: from managing Bills of Materials to orchestrating Bills of Information. Is this just new language or real change? Either way, it reflects a move from static data capture to dynamic, role-based information delivery across the product lifecycle, enabling faster, more informed decisions at every stage.

Siemens is championing a PLM architecture that supports this shift, built around:

  • Secure, object-level data access tailored to specific roles and responsibilities.
  • Microservices and large language models (LLMs) delivering contextual guidance across engineering, manufacturing, and service domains.
  • A digital thread backbone connecting design, production, quality, and support in real time.

As advertised, this approach goes well beyond traditional traceability. It unlocks the ability to deliver the right data, in the right format, to the right person—when and where it is needed. It transforms engineering knowledge from static documentation into living, operational intelligence.

For globally distributed and regulated industries, this kind of digital continuity is no longer optional—it is a minimum requirement. Engineering is being redefined not just by tools, but by how data is structured, shared, and transformed into actionable insights that drive innovation and execution at scale.

Rethinking CPG

Siemens is reimagining PLM for the CPG industry, extending Teamcenter beyond packaging to support end-to-end collaboration across R&D, regulatory, marketing, and supply chain. By integrating formulation and specification management—built on Opcenter RD&L—Siemens is positioning Teamcenter to compete with SAP PLM in process-heavy, compliance-driven sectors. The solution is promising but still maturing.

A recent partnership with FoodChain ID boosts this trajectory by embedding global regulatory intelligence into the digital thread, helping CPG companies design for compliance from the start.

Key focus areas include:

  • Formulation and specification support, bridging science-led R&D with enterprise PLM.
  • Cross-functional collaboration across R&D, regulatory, marketing, and sourcing in a shared digital workspace.
  • Recipe reuse across global sites, increasing agility and compliance—as demonstrated by Unilever.
  • Scenario modeling and digital twins, enabling design-for-supply-chain strategies.
  • Regulatory intelligence integration, to guide compliant product development from the outset.

While Siemens’ CPG capabilities are evolving, the strategy requires further clarity. The long-term goal is ambitious: to build a robust PLM backbone that accelerates product innovation while addressing regulatory compliance and supply chain complexity from day one.

Xcelerator-as-a-service and agent-driven automation

Building on this momentum, Siemens’ Xcelerator-as-a-Service approach follows a clear goal: keep things simple, flexible, and always up to date.

Key enablers include:

  • Lifecycle data management, with built-in traceability and change control.
  • Low-code tools, via Mendix, embedded across Teamcenter and Opcenter.
  • AI agents, reducing manual effort, streamlining workflows, and reinforcing governance.

The transition toward software-defined products is accelerating. Siemens is doubling down on:

  • SysML v2, enabling next-generation model-based systems engineering
  • Polarion, aligning software and hardware requirements in unified backlogs
  • Supplier frameworks, integrating BOMs and compliance for cross-domain coordination

This is more than a technical evolution—it is a strategic upgrade toward future-ready operations, where complexity is coordinated, traceable, and compatible by design.

Regulatory-ready digital twins and Industry 4.0 interoperability

Regulation is not lagging behind innovation anymore—it is driving it. The upcoming EU Digital Product Passport (DPP 4.0) marks a turning point. Siemens is preparing its customers to meet these mandates not as a constraint—but as a catalyst for trustworthy digital systems.

Their approach includes:

  • Asset Administration Shell (AAS): machine-readable digital twins that maintain continuity from design through operation.
  • OPC UA-based interoperability: enabling secure, standards-based data exchange across partners and platforms.
  • Embedded sustainability and compliance tracking: making ESG and traceability data a native part of the engineering model.

This is digital transformation built for permanence. With regulations requiring traceable, reusable digital records, AI can only accelerate what is built on the right data foundation.

Make no mistake. Complexity is not just managed; it is mined for advantage. The metaphor echoed at the event is apt: like bamboo, digital transformation takes time to root—but when it does, it grows fast and strong. For industrial leaders, the question is no longer why transform—but rather: How fast can intelligence be embedded into the product and value chain?

AI governance—the unavoidable imperative of responsibility https://www.engineering.com/ai-governance-the-unavoidable-imperative-of-responsibility/ Tue, 08 Jul 2025 18:03:42 +0000 https://www.engineering.com/?p=141188 Examining key pillars an organization should consider when developing AI governance policies.

In a recent CIMdata Leadership Webinar, my colleague Peter Bilello and I presented our thoughts on the important and emerging topic of Artificial Intelligence (AI) Governance. More specifically, we brought into focus a new term in the overheated discussions surrounding this technology, now entering general use and, inevitably, misuse. That term is “responsibility.”

For this discussion, responsibility means accepting that one will be held personally accountable for AI-related problems and outcomes—good or bad—while acting with that knowledge always in mind.

Janie Gurley, Data Governance Director, CIMdata Inc.

Every new digital technology presents opportunities for misuse, particularly in its early days, when its capabilities are not fully understood and its reach is underestimated. AI, however, is unique, and its governance is especially challenging for three reasons:

  • A huge proportion of AI users in product development are untrained and inexperienced, lacking the caution and self-discipline of engineers, who were the early users of nearly all other information technologies.
  • With little or no oversight, AI users can reach into data without regard to accuracy, completeness, or even relevance. This causes many shortcomings, including AI’s “hallucinations.”
  • AI carries many poorly understood risks, a consequence of its power and depth that many new users fail to recognize.

While both AI and PLM are critical business strategies, they are hugely different. Today, PLM implementations have matured to the point where they incorporate ‘guardrails,’ mechanisms common in engineering and product development that keep organizational decisions in sync with goals and strategic objectives while holding down risks. AI often lacks such guardrails and is used in ways that solution providers cannot always anticipate.

And that’s where the AI governance challenges discussed in our recent webinar, AI Governance: Ensuring Responsible AI Development and Use, come in.

The scope of the AI problem

AI is not new; in various forms, it has been used for decades. What is new is its sudden widespread adoption, coinciding with the explosion of AI toolkits and AI-enhanced applications, solutions, systems, and platforms. A key problem is the poor quality of data fed into the Large Language Models (LLMs) that underpin genAI tools such as ChatGPT.

During the webinar, one attendee asked if executives understand the value of data. Bilello candidly responded, “No. And they don’t understand the value of governance, either.” And why should they? Nearly all postings and articles about AI mention governance as an afterthought, if at all.

So, it is time to establish AI governance … and the task is far more than simply tracking down errors and identifying users who can be held accountable for them. CIMdata has learned from experience that even minor oversights and loopholes can undermine effective governance.

AI Governance is not just a technical issue, nor is it just a collection of policies on paper. Everyone using AI must be on the same page, so we laid out four elements in AI governance that must be understood and adopted:

Ethical AI, adhering to principles of fairness, transparency, and accountability.

AI Accountability, assigning responsibility for AI decisions and ensuring human oversight.

Human-in-the-Loop (HITL), the integration of human oversight into AI decision-making to ensure sound judgments, verifiable accountability, and authority to intercede and override when needed.

AI Compliance, aligning AI initiatives with legal requirements such as GDPR, CCPA, and the AI Act.

Bilello noted, “Augmented intelligence—the use of AI technologies that extend and/or enhance human intelligence—always has a human in the loop to some extent and, despite appearances, AI is human-created.”
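The HITL element above can be made concrete with a short sketch. This is an illustrative example of our own, not something presented in the webinar; the decision gate, the confidence threshold, and all names are hypothetical. It routes low-confidence AI outputs to a human reviewer and logs every routing choice so that outcomes stay auditable:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    item_id: str       # the item being decided (e.g., a part release)
    ai_output: str     # the model's recommendation
    confidence: float  # model-reported confidence, from 0 to 1

def hitl_gate(decision: Decision, threshold: float, audit_log: list) -> str:
    """Auto-approve only high-confidence outputs; route the rest to a
    human reviewer with authority to override. Every choice is logged."""
    route = "auto-approved" if decision.confidence >= threshold else "human-review"
    audit_log.append((decision.item_id, decision.ai_output, route))
    return route

audit_log = []
hitl_gate(Decision("PN-1001", "release", 0.97), 0.9, audit_log)  # auto-approved
hitl_gate(Decision("PN-1002", "release", 0.55), 0.9, audit_log)  # human-review
```

The threshold itself becomes a governance artifact: raising it widens human oversight, lowering it widens automation, and the audit log makes either choice reviewable after the fact.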

Next, we presented the key pillars of AI governance, namely:

  • Transparency: making AI models explainable, clarifying how decisions are made, and making the results auditable.
  • Fairness: proactively detecting and mitigating biases.
  • Privacy and Security: protecting personal data as well as the integrity of the model.
  • Risk Management: continuous monitoring across the AI lifecycle.
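As one illustration of how the fairness and monitoring pillars can be operationalized (a minimal sketch of our own, not a tool discussed in the webinar; group names and the tolerance are hypothetical), a continuous-monitoring job might compute a demographic-parity gap over recent AI decisions and alert when it drifts past a set tolerance:

```python
def demographic_parity_gap(outcomes):
    """outcomes: dict mapping group name -> list of 0/1 decisions.
    Returns the largest difference in positive-outcome rates between
    any two groups; values near 0 indicate parity."""
    rates = {group: sum(v) / len(v) for group, v in outcomes.items()}
    return max(rates.values()) - min(rates.values())

gap = demographic_parity_gap({
    "group_a": [1, 1, 0, 1],  # 75% positive rate
    "group_b": [1, 0, 0, 1],  # 50% positive rate
})
# gap == 0.25; a monitoring job would alert when gap exceeds its tolerance
```

A metric this simple is not a complete fairness program, but running it continuously, and treating a breach as a governance event rather than a debugging detail, is what distinguishes monitoring from an afterthought.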

The solution provider’s perspective

Now let’s consider this from the perspective of a solution provider, specifically the Hexagon Manufacturing Intelligence unit of Hexagon Metrology GmbH.

AI Governance “provides the guardrails for deploying production-ready AI solutions. It’s not just about complying with regulations—it’s about proving to our customers that we build safe, reliable systems,” according to Dr. René Cabos, Hexagon Senior Product Manager for AI.

The biggest challenge, according to Cabos, is “a lack of clear legal definitions of what is legally considered to be AI. Whether it’s a linear regression model or the now widely used Generative AI [genAI], we need traceability, explainability, and structured monitoring.”

Explainability lets users look inside AI algorithms and their underlying LLMs and renders decisions and outcomes visible, traceable, and comprehensible; explainability ensures that AI users and everyone who depends on their work can interpret and verify outcomes. This is vital for enhancing how AI users work and for establishing trust in AI; more on trust below.

Organizations are starting to make changes to generate future value from genAI, with large companies leading the way.

Industry data further supports our discussion on the necessity for robust AI governance, as seen in McKinsey & Company’s Global Survey on AI, titled The state of AI – How organizations are rewiring to capture value, published in March 2025.

The study by Alex Singla et al. found that “Organizations are beginning to create the structures and processes that lead to meaningful value from gen AI,” including putting senior leaders in critical roles overseeing AI governance, even though the technology is already in wide use.

The findings also show that organizations are working to mitigate a growing set of gen-AI-related risks. Overall, the use of AI—gen AI, as well as Analytical AI—continues to build momentum: more than three-quarters of respondents now say that their organizations use AI in at least one business function. The use of genAI in particular is rapidly increasing.

“Unfortunately, governance practices have not kept pace with this rewiring of work processes,” the McKinsey report noted. “This reinforces the critical need for structured, responsible AI governance. Concerns about bias, security breaches, and regulatory gaps are rising. This makes core governance principles like fairness and explainability non-negotiable.”

More recently, McKinsey observed that the implications of AI “are profound, especially Agentic AI. Agentic AI represents not just a new technology layer but also a new operating model,” Federico Burruti and four co-authors wrote in a June 4, 2025, report titled When can AI make good decisions? The rise of AI corporate citizens.

“And while the upside is massive, so is the risk. Without deliberate governance, transparency, and accountability, these systems could reinforce bias, obscure accountability, or trigger compliance failures,” the report says.

The McKinsey report points out that companies should “Treat AI agents as corporate citizens. That means more than building robust tech. It means rethinking how decisions are made from an end-to-end perspective. It means developing a new understanding of which decisions AI can make. And, most important, it means creating new management (and cost) structures to ensure that both AI and human agents thrive.”

In our webinar, we characterized this rewiring as a tipping point because the integration of AI into the product lifecycle is poised to dramatically reshape engineering and design practices. AI is expected to augment, not replace, human ingenuity in engineering and design; this means humans must assume the role of curator of content and decisions generated with the support of AI.

Why governance has lagged

With AI causing so much heartburn, one might assume that governance is well-established. But no, there are many challenges:

  • The difficulty of validating AI model outputs when systems evolve from advisor-based recommendations to fully autonomous agents.
  • The lack of rigorous model validation, ill-defined ownership of AI-generated intellectual property, and data privacy concerns.
  • Evolving regulatory guidance, certification, and approval of all the automated processes being advanced by AI tools, coupled with regulatory uncertainty in a changing global landscape of compliance challenges and a poor understanding of legal restrictions.
  • Bias, as shown in many unsettling case studies, and the impacts of biased AI systems on communities.
  • The lack of transparency (and “explainability”), with which to challenge black-box AI models.
  • Weak cybersecurity measures and iffy safety and security in the face of cyber threats and risks of adversarial attacks.
  • Public confidence in AI-enabled systems, not just “trust” by users.
  • Ethics and trust themes that reinforce ROI discussions.

Trust in AI is hindered by widespread skepticism, including fears of disinformation, instability, unknown unknowns, job losses, industry concentration, and regulatory conflicts/overreach.

James Markwalder, U.S. Federal Sales and Industry Manager at Prostep i.v.i.p., a product data governance association based in Germany, characterized AI development “as a runaway train—hundreds of models hatch every day—so policing the [AI] labs is a fool’s errand. In digital engineering, the smarter play is to govern use.”

AI’s fast evolution requires that we “set clear guardrails, mandate explainability and live monitoring, and anchor every decision to…values of safety, fairness, and accountability,” Markwalder added. “And if the urge to cut corners can be tamed, AI shifts from black-box risk to a trust engine that shields both ROI and reputation.”

AI is also driving a transformation in product development amid compliance challenges to business, explained by Dr. Henrik Weimer, Director of Digital Engineering at Airbus. In his presentation at CIMdata’s PLM Road Map & PDT North America in May 2025, Weimer spelled out four AI business compliance challenges:

Data Privacy, meaning the protection “of personal information collected, used, processed, and stored by AI systems,” which is a key issue “for ethical and responsible AI development and deployment.”

Intellectual Property, that is, “creations of the mind”; he listed “inventions, algorithms, data, patents and copyrights, trade secrets, data ownership, usage rights, and licensing agreements.”

Data Security, ensuring confidentiality, integrity, and availability, as well as protecting data in AI systems throughout the lifecycle.

Discrimination and Bias, addressing the unsettling fact that AI systems “can perpetuate and amplify biases present in the data on which they are trained,” leading to “unfair or discriminatory outcomes, disproportionately affecting certain groups or individuals.”

Add to these issues the environmental impact of AI’s tremendous power demands. In the April 2025 issue of the McKinsey Quarterly, the consulting firm calculated that “Data centers equipped to handle AI processing loads are projected to require $5.2 trillion in capital expenditures by 2030…” (The article is titled The cost of compute: A $7 trillion race to scale data centers.)

Establishing governance

So, how is governance created amid this chaos? In our webinar, we pointed out that the answer is a governance framework that:

  • Establishes governance policies aligned with organizational goals, plus an AI ethics committee or oversight board.
  • Develops and implements risk assessment methodologies for AI projects that monitor AI processes and results for transparency and fairness.
  • Ensures continuous auditing and feedback loops for AI decision-making.

To show how this approach is effective, we offered case studies from Allied Irish Bank, IBM’s AI Ethics Governance framework, and Amazon’s AI recruiting tool (which was biased against women).

Despite all these issues, AI governance across the lifecycle is cost-effective, and guidance was offered on measuring the ROI impact of responsible AI practices:

  • Quantifying AI governance value in cost savings, risk reduction, and reputation management.
  • Developing and implementing metrics for compliance adherence, bias reduction, and transparency.
  • Justifying investment with business case examples and alignment with stakeholders’ priorities.
  • Focusing continuous improvement efforts on the many ways in which AI governance drives innovation and operational efficiency.

These four points require establishing ownership and accountability through continuous monitoring and risk management, as well as prioritizing ethical design. Ethical design is the creation of products, systems, and services that prioritize benefits to society and the environment while minimizing the risks of harmful outcomes.

The meaning of ‘responsibility’ always seems obvious until one probes into it. Who is responsible? To whom? Responsible for what? Why? And when? Before the arrival of AI, the answers to these questions were usually self-evident. In AI, however, responsibility is unclear without comprehensive governance.

Also required is the implementation and fostering of a culture of responsible AI use through collaboration within the organization as well as with suppliers and field service. Effective collaboration, we pointed out, leads to diversity of expertise and cross-functional teams that strengthen accountability and reduce blind spots.

By broadening the responsibilities of AI users, collaboration adds foresight into potential problems and helps ensure practical, usable governance while building trust in AI processes and their outcomes. Governance succeeds when AI “becomes everyone’s responsibility.”

Our conclusion was summed up as: Govern Smart, Govern Early, and Govern Always.

In AI, human oversight is essential. In his concluding call to action, Bilello emphatically stated, “It’s not if we’re going to do this but when…and when is now.” Undoubtedly, professionals who proactively embrace AI and adapt to the changing landscape will be well-positioned to thrive in the years to come.

Peter Bilello, President and CEO, CIMdata and frequent Engineering.com contributor, contributed to this article.

The post AI governance—the unavoidable imperative of responsibility appeared first on Engineering.com.

]]>