A beginner’s guide to data pipelines for manufacturers https://www.engineering.com/a-beginners-guide-to-data-pipelines-for-manufacturers/ Thu, 23 Oct 2025 18:21:04 +0000 https://www.engineering.com/?p=144081 Data pipelines enable manufacturing engineers to simplify complex data management in support of their work.

As manufacturing engineers grapple with more and more data from diverse sources, they implement data pipelines to simplify their increasingly complex data management processes.

What are data pipelines?

Data pipelines are automated systems that manufacturing engineers use to read data from multiple data sources, transform the data and then write it to a destination database.

Examples of transforming data include the following:

  • Changing key values such as customer code, part number or vendor number to a single set of values.
  • Revising dates to a standard format.
  • Aligning codes and related descriptions to a single set of values.
  • Denormalizing data for improved application performance.
  • Aggregating data to a uniform level of summarization.
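
To make two of these transformations concrete, here is a minimal Python sketch; the column names and vendor-code mapping are hypothetical, and mixed-format date parsing assumes pandas 2.x.

```python
import pandas as pd

# Hypothetical extract from one source system (all names are assumptions)
orders = pd.DataFrame({
    "vendor_code": ["AC01", "ACME-01", "AC-1"],
    "order_date": ["10/23/2025", "2025-10-22", "23-Oct-2025"],
    "qty": [100, 250, 75],
})

# Align vendor codes to a single canonical set of values
VENDOR_MAP = {"AC01": "ACME-01", "AC-1": "ACME-01"}  # assumed mapping table
orders["vendor_code"] = orders["vendor_code"].replace(VENDOR_MAP)

# Revise dates to a standard ISO format (format="mixed" requires pandas >= 2.0)
orders["order_date"] = (
    pd.to_datetime(orders["order_date"], format="mixed").dt.strftime("%Y-%m-%d")
)
print(orders)
```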

Often, the destination is a data lakehouse or data warehouse. From there, the data is used for one or more of the following purposes:

  • Operational applications such as manufacturing planning or control.
  • Data analysis and visualization, including dashboards.
  • AI applications for insights into manufacturing troubleshooting, optimization or forecasting.

Increasing importance of data pipelines

Data pipelines have taken on increasing importance for engineers as a result of the following application trends:

  • Manufacturers are employing more advanced simulation and AI applications. These software trends require access to large volumes of high-quality data. Upgrading data quality, often by comparing values across datastores, is dependent on data pipelines.
  • Manufacturers are integrating their various systems more tightly. This integration trend requires data pipelines to copy selected data from one system to another, improving data sharing.
  • Manufacturing groups see value in data analytics and visualization. This analytics trend requires concurrent access to multiple data sources, which is often dependent on data pipelines.

Business benefits of data pipelines

The key benefit of data pipelines is to make data available in a timely and integrated manner for business processes across many parts of the organization, including engineering. That data availability can:

  • Accelerate product development through more detailed simulation and faster iteration.
  • Improve operations by improving decision quality.
  • Enable AI and machine learning initiatives by ensuring a sufficient supply of trustworthy data.
  • Improve customer experience by reducing the time it takes to complete transactions.
  • Reduce time to market for new products and services.
  • Enhance risk management through more comprehensive risk identification.

More broadly, data pipelines enable the shift-left approach, which moves work earlier in the engineering process. Shift-left focuses on improving the availability of digital data during the initial stages of product or service planning, design, and development. The benefits of this approach include faster delivery, better quality and lower costs.

Types of data pipelines

Data pipelines operate differently depending on the characteristics of the application:

  • Batch processing data pipelines – Load large batches of data from multiple data sources into a destination database at set time intervals. Organizations typically schedule batch pipelines during off-peak business hours. A good example is aggregating daily production quantities by product from multiple manufacturing facilities overnight for data analysis the following morning.
  • Streaming data pipelines – Continuously process new or revised data generated in real-time by sensors or end-user interactions with an application into a destination database. Most streaming data pipelines operate continuously. A good example is streaming Industrial Internet of Things (IIoT) data from the manufacturing floor to monitoring applications used by engineers.
  • Data integration pipelines – Merge data from multiple data sources into a single unified view in a destination database. Data integration pipelines can operate either as batch or streaming data pipelines. A good example is merging data from various Enterprise Resource Planning (ERP) modules with Customer Relationship Management (CRM) data and external data to build an integrated view of industry production trends.
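
As a hedged sketch of the batch-processing example above, the following Python fragment aggregates daily production quantities by product from two plants and loads the result into a destination database; SQLite stands in for a real warehouse, and all table and column names are assumptions.

```python
import sqlite3

import pandas as pd

# Extract: overnight pulls from two plant systems (stand-in data)
plant_a = pd.DataFrame({"product": ["P1", "P2"], "qty": [120, 80]})
plant_b = pd.DataFrame({"product": ["P1", "P2"], "qty": [60, 200]})

# Transform: aggregate to a uniform level of summarization
daily = (
    pd.concat([plant_a, plant_b])
      .groupby("product", as_index=False)["qty"]
      .sum()
)

# Load: write to the destination database for analysis the next morning
with sqlite3.connect("warehouse.db") as conn:
    daily.to_sql("daily_production", conn, if_exists="replace", index=False)
```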

Selecting data pipeline software

The capability of available data pipeline software varies widely. The following criteria will help engineers select software that fits the application requirements:

  • Ease-of-use features that increase developer productivity and thereby reduce development cost.
  • Features that minimize effort to respond to changes in data source schemas.
  • Scalability to handle the estimated current and future data volumes.
  • Ability to connect easily to the required diversity of data access technologies used by data sources.
  • Security features for data encryption and authentication.
  • Automation features that simplify operations.
  • Operational monitoring features that quickly identify problems that require intervention.
  • Vendor track record.
  • Acceptable acquisition and operating costs.

Frequent risks associated with data pipeline implementations

Manufacturing engineers should consider whether the following potential risks affect their data pipeline project and implement suitable mitigations:

  • Data quality shortcomings that are expensive and time-consuming to address.
  • Data integration complexities that require sophisticated and expensive software development.
  • Data loss and accuracy problems caused by poorly designed data integration software.
  • Ambitious goals or target states with large scopes that are beyond the capacity of the organization.
  • Excessive processing latencies that create data anomalies for streaming data pipelines.
  • Data pipeline performance issues that may occur when large data volumes are involved.
  • Security vulnerabilities that may be introduced when a significant number of data sources are involved.

As manufacturing engineers grapple with increasing data volumes from diverse sources, they implement data pipelines to achieve faster delivery, better quality and lower costs.

The interoperability revolution in manufacturing https://www.engineering.com/the-interoperability-revolution-in-manufacturing/ Mon, 20 Oct 2025 19:33:16 +0000 https://www.engineering.com/?p=143967 Rockwell Automation's Dennis Wylie talks about how open standards are starting to shatter automation silos.

Here’s a fun scenario: You’re a plant manager walking across your plant, and you notice that half your equipment is from the 1990s, a quarter is from the early 2000s, and the rest is a hodgepodge of newer systems that somehow need to get along. Ring a bell? If you’ve ever been in manufacturing at the operational level, you’ve seen this system integrator’s nightmare.

For years, factory floors have been closer to digital archipelagos – isolated islands of automation that hardly spoke to each other, let alone cooperated. Each vendor’s system came with its own unique language, resulting in a confusion of languages that can drive plant engineers to despair.

It’s not all doom and gloom though. In the past few years there has been a fundamental shift: the interoperability revolution industry insiders have been talking up since the beginning of Industry 4.0 is finally arriving. And it’s coming quicker than most of us imagined.

The vendor lock-in prison break

Do you remember when you bought your first smartphone? You probably went back and forth between iOS and Android because you knew that when you made your ecosystem choice, you were, in effect, married to it. Your apps, accessories, and information would be tied to that platform. Industrial automation has worked the same way, only with more at stake and an exponentially more expensive divorce.

Legacy automation vendors have based their business models on this lock-in. Once you’d committed to their platform, switching was so painful and costly that you were wedded to them for life, even if better choices existed elsewhere. Need a new safety system? Better pray your vendor has one on the shelf, or be prepared to write some very expensive integration checks.

I spoke recently to a plant engineer who said they’d had the same vendor for fifteen years – not because they particularly liked them, mind you, but because switching would have required rebuilding their entire control architecture. That’s not collaboration; that’s being held hostage by your own infrastructure.

This dynamic is at last collapsing, and it’s all thanks to something that appears mundane but is revolutionary: the maturity of open communications standards. They’re not new concepts – protocols like Open Platform Communications Unified Architecture (OPC UA) have been around for over a decade. But they’ve finally matured from test-lab experiments into production-quality products that can live up to the demanding, no-excuses requirements of modern manufacturing.

What’s different now is that newer controller architectures don’t just support these standards as afterthoughts – they’re built around them from the ground up. Instead of requiring costly add-on modules or expensive gateway devices, these systems speak directly to equipment from different vendors using standardized protocols.

Think of it like this: imagine if every device in your house spoke the same language and could work together seamlessly. That’s what’s happening on the factory floor right now.

Solving the Great Security Paradox

“But Dennis,” I hear you say, “open standards are great, but what about security? Won’t proprietary systems be safer?” This has been the elephant in the room for years, and it’s the biggest reason that many manufacturers have been reluctant to embrace open standards.

Conventional wisdom held that proprietary, closed systems were inherently more secure because “security through obscurity” provided additional protection. Recent high-profile attacks on supposedly “secure” proprietary industrial systems have thoroughly debunked this notion.

Meanwhile, open standards have gotten incredibly good at security. We’re talking military-grade encryption, zero-trust architectures, and security frameworks battle-tested across every industry imaginable. The latest industrial controllers implement the same zero-trust principles protecting banks and government agencies. Instead of relying on proprietary protocols, they use proven cryptographic methods and authentication systems that have survived real-world attacks.

Here’s the kicker: this approach makes systems more secure. When everything speaks the same security “language,” you can apply enterprise-wide security policies consistently, monitor everything from a single dashboard, and audit security across all systems simultaneously.

As one cybersecurity consultant told me: “I’d rather defend ten systems using the same well-understood security protocols than try to secure ten different proprietary systems, each with unique vulnerabilities.”

Why your CFO will love this

Let’s talk about money. The financial benefits of true interoperability extend far beyond avoiding vendor lock-in premiums.

Research shows compelling figures: consolidated integration technology can reduce operating costs by up to 30% (Gartner), while platform consolidation delivers 25% productivity improvements (Forrester). Integrated systems reduce data errors by up to 40% (IDC), and organizations report average cost savings of 32% through intelligent automation consolidation (Deloitte).

The real savings come from integration complexity. Today, adding new equipment often requires custom programming, protocol converters, extensive testing, and sleepless nights for your automation team. Each integration becomes a bespoke engineering project.

With standardized protocols, connecting new devices can be as straightforward as plugging in a network cable and configuring some settings. What used to take weeks now takes days; what took days now takes hours.

Maintenance costs plummet when you’re not dependent on specialized expertise for each vendor’s unique systems. Training costs decrease, too – instead of multiple vendor-specific programs, technicians learn one set of standards applicable across all equipment.

But the biggest benefit is strategic flexibility. You can genuinely select best-of-breed solutions without integration penalties. Want the best motor drives from Company A and advanced vision systems from Company B? No problem, as long as they support open standards.

Edge computing finally delivers

Industry 4.0, digital transformation, smart manufacturing, IoT, edge computing – these buzzwords have dominated trade shows for years, sounding more like marketing concepts than practical factory solutions. Edge computing seemed like another overhyped IT cliché destined to fizzle out.

But the convergence of open standards and serious edge computing power is creating genuine game changers. Innovation in interoperability standards is finally turning these visions into reality. Advanced controllers now execute analytics and AI applications locally with the deterministic, real-time responsiveness manufacturing demands. It’s not just about more computational power – it’s about building a new methodology where operational technology (OT) and information technology (IT) finally interact productively.

The real-world effects are impressive. Data is processed in microseconds rather than traveling to distant servers. Quality issues get caught and repaired in real time instead of being discovered next shift. Predictive maintenance runs continuously, catching problems before expensive downtime occurs.

The key difference: focus has shifted from revolutionary change to evolutionary improvement. This approach doesn’t require ripping out existing systems – something that was never going to happen at scale. Instead, it allows gradual modernization that preserves investments while adding new capabilities. Edge intelligence layers onto existing processes, adding functionality without disrupting production.

This pragmatic approach accelerates adoption by reducing risk and enabling phased rollouts. Start with a pilot in one plant section, prove value, learn from mistakes, then spread winning tactics across operations.

What this means for you

If you’re a plant manager or automation engineer, the opportunity is significant – greater flexibility, reduced costs, and access to best-of-breed solutions. The challenge is navigating the transition thoughtfully.

When evaluating new equipment, prioritize support for open standards. The long-term benefits far outweigh the initial costs; it’s really an investment in future flexibility.

Start developing internal expertise in open standards. Having team members who understand protocols like OPC UA, MQ Telemetry Transport (MQTT), and Representational State Transfer (REST) APIs will pay dividends. Don’t try to transform everything at once – look for natural upgrade cycles where you can implement open standards without disrupting production.
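
To give a flavor of what speaking these protocols looks like in code, here is a minimal Python sketch that subscribes to machine telemetry over MQTT using the paho-mqtt 2.x client; the broker address and topic are hypothetical, and an OPC UA client would follow a similar connect-and-subscribe pattern.

```python
import paho.mqtt.client as mqtt

BROKER = "broker.plant.example"   # hypothetical plant broker
TOPIC = "line3/press1/telemetry"  # hypothetical equipment topic

def on_connect(client, userdata, flags, reason_code, properties=None):
    # The same subscribe call works regardless of equipment vendor
    client.subscribe(TOPIC)

def on_message(client, userdata, msg):
    print(f"{msg.topic}: {msg.payload.decode()}")

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)  # paho-mqtt 2.x API
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883)
client.loop_forever()
```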

The bottom line: this train is leaving the station

The interoperability revolution isn’t imminent – it’s already upon us. The question isn’t whether open standards will dominate industrial automation; it’s how quickly manufacturers can adapt to take advantage of the possibilities they enable.

The companies that implement interoperability early gain competitive advantages that grow over time. They’ll have more responsive systems, lower integration costs, and access to a more extensive ecosystem of solutions. Those who cling to proprietary systems will become more isolated and rigid in an increasingly dynamic market.

I’ve been around manufacturing long enough to see a lot of technology fads come and go. Some paid off on their promise; a lot didn’t. This one is different: it’s not asking you to bet your business on unproven technology – it’s offering a practical path to upgrade your systems incrementally, reduce risk, and safeguard the investments you’ve already made.

The choice is clear, and the window of opportunity is wide open. The only question is: are you going to be leading this change, or playing catch-up?

Optimizing logistics, supply chains, and local manufacturing https://www.engineering.com/optimizing-logistics-supply-chains-and-local-manufacturing/ Thu, 02 Oct 2025 18:39:07 +0000 https://www.engineering.com/?p=143521 How digital transformation can turn supply chains into a strategic advantage.


It seems like the manufacturing sector is forever in the midst of a structural shift. Competitive pressures, supply chain disruptions, and evolving customer expectations are constants, driving companies to continuously rethink how they produce goods and where they produce them. Digital transformation systems—a convergence of advanced analytics, IoT, AI, and cloud-based platforms—are at the center of this current shift.

For engineers and executives alike, these systems are more than IT upgrades. They are tools—sometimes very simple, sometimes quite complex—that reconfigure logistics, streamline supply chains, and make localized manufacturing practical and profitable.

Digital transformation and logistics optimization

Historically, logistics in manufacturing has been reactive to disruption—responding to bottlenecks, freight delays, or warehouse shortages as they arise. Digital transformation turns this reactive model into a predictive and adaptive one.

IoT sensors and connected devices provide real-time visibility, tracking goods in transit, raw material consumption, and production progress. By tracking the data from these devices, engineers gain line of sight over their raw materials and products from factory floor to customer delivery. AI-driven routing and scheduling algorithms forecast delays and dynamically reroute shipments or adjust production schedules to maintain throughput.

And then there are digital twins, which aren’t just for product design. By creating a digital twin of logistics networks, engineers can simulate different transportation strategies, warehouse configurations, or production-distribution trade-offs before making capital commitments.

The result is lower transportation costs, higher on-time delivery rates, and fewer emergency interventions to solve unexpected problems.

Digital transformation and supply chain resilience

In the last five years, supply chain fragility has become more than just a boardroom issue. Digital transformation systems can help bring resilience by unifying fragmented data and enabling proactive decision-making.

Instead of relying on the siloed ERP and supplier systems of the previous decade, companies can use integrated supplier data platforms to build digital ecosystems where quality, lead times, and pricing data are visible in one place. The advanced analytics produced by these digital ecosystems can help users flag single-source dependencies or regions exposed to external risks, such as natural disasters, geopolitical snafus or disease outbreaks.

At this stage, either humans or AI systems can find and recommend shifts to alternate suppliers, suggest redistribution of inventory, or game out adjustments to current stock levels. For manufacturers, this means fewer surprises on the production line and the bottom line—it’s a little bit of assurance that production won’t grind to a halt due to a single point of failure.
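
As a small illustration of the single-source check described above, the sketch below (with an assumed schema) flags parts that depend on exactly one qualified supplier:

```python
import pandas as pd

# Unified supplier data from the integrated platform (assumed schema)
suppliers = pd.DataFrame({
    "part": ["P100", "P100", "P200", "P300"],
    "supplier": ["S1", "S2", "S3", "S3"],
    "region": ["US", "MX", "TW", "TW"],
})

# Flag single-source dependencies: parts with exactly one qualified supplier
counts = suppliers.groupby("part")["supplier"].nunique()
single_source = counts[counts == 1].index.tolist()
print("Single-source parts:", single_source)  # ['P200', 'P300']
```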

Digital transformation and enabling local manufacturing

Local manufacturing—sometimes called near-market manufacturing, regional manufacturing or nearshoring—simply means manufacturing your products closer to end customers. The strategy reduces transportation costs, shortens lead times, and lowers emissions. The downside is that it introduces complexity through operating multiple regional plants, relying on varied supplier networks, and adjusting to different regulatory environments. Digital transformation systems provide the infrastructure to make this a viable strategy for a larger cross section of businesses.

With a digital infrastructure, standardized production data models help engineers replicate validated processes across sites, ensuring consistency in quality while tailoring to local market needs. Linking and integrating cloud-based manufacturing execution systems (MES) and enterprise resource planning (ERP) software allow plant managers to coordinate production planning across geographies while everyone works from the same set of data. Real-time sales and consumption data flow directly into local production schedules, aligning output with regional market demand.

The outcome is that companies gain the agility to serve markets faster while maintaining engineering rigor and cost control.

Digital transformation and engineering leadership

Digital transformation systems don’t come cheap. Aside from the initial cost, they require significant time and staff resources during start-up. However, the next-level strategic and tactical functionality enabled by digital transformation has also never been more accessible. Rapid advancements in technology, compute power and the availability of cloud storage make it attainable to anyone willing to invest the time and money.

These capabilities mitigate the initial costs by reducing “firefighting” and manual reporting. They provide tools to model, test, and optimize logistics and supply chain variables virtually before committing to a plan. Indeed, for executives and manufacturing engineers, these systems turn supply chains into a strategic advantage, enabling informed investment in new sites, supplier diversification, and sustainable practices. Digital transformation is a lever for both resilience and growth, improving reliability today while positioning the enterprise to compete globally tomorrow.

The data gold rush: how to uncover hidden value in your data https://www.engineering.com/the-data-gold-rush-how-to-uncover-hidden-value-in-your-data/ Mon, 29 Sep 2025 11:49:00 +0000 https://www.engineering.com/?p=143374 The valuable insights from data gold mining are often suspected, or even known to exist, yet they remain buried.

The fundamental challenge of an organization’s data is transforming it efficiently and effectively into physical products and services that maximize revenue and maintain competitiveness. Same old, same old, yes? But a host of new tools are helping product developers dig deeper into their terabytes of data to uncover value and actionable insights that result in better products and organizational performance.

In turn, these tools and the techniques needed to use them cost-effectively are spurring product developers to find and maximize real-world value and to incorporate it into their products and services.

The resulting gold mined by these new tools and techniques is not by itself some long-sought breakthrough but an intermediate step in end-to-end product lifecycle management (PLM). The assaying counterpart is just as critical: before gold is forged into something of value, it’s meticulously assayed to determine its purity.

The valuable insights from data gold mining are often suspected, or even known to exist, yet they remain buried. Managers are unwilling or unprepared to sift through this data and put it to use. Amid resource scarcities and time pressures, few organizations know how to do this effectively or affordably—the perennial headaches of data mining.

Mining for the data gold is often seen as too iffy to promise an ROI. And yet this gold is often related to the biggest challenges facing every product and system: obsolescence and failure, ever-evolving user wants and needs, marketplace disruptions, competition, sourcing, service, and rising costs.

The challenge in data gold mining is no longer just about getting data; it’s about the strategic application of new technologies to transform that data into actionable insights and value. The following areas are key to this challenge:

  • Tightly integrating the most common forms of generative AI and agentic AI into PLM platforms
  • Better techniques for generating AI queries
  • Better analytical tools (descriptive, predictive, and prescriptive) to make sense of the returns from AI queries.

These tools and techniques seek to find gold in the form of hidden similarities among seemingly unconnected and unrelated phenomena. This includes feedback from customers who have little or nothing in common and are using dissimilar products and services.
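
As a hedged sketch of what hunting for hidden similarities can look like, the fragment below uses TF-IDF vectors and cosine similarity from scikit-learn to relate complaints from customers of dissimilar products; production systems would use richer embeddings and far larger corpora.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical feedback from unrelated product lines
feedback = [
    "hydraulic seal leaks near the inlet fitting under heavy vibration",  # tractor
    "coolant seal leaks at the inlet fitting when vibration is high",     # CNC mill
    "the mobile app layout is confusing",                                 # software
]

vectors = TfidfVectorizer().fit_transform(feedback)
sims = cosine_similarity(vectors)

# The first two complaints score far more similar to each other than to the
# third, hinting at a shared failure mode across product lines.
print(sims.round(2))
```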

Gold is also buried in the incongruities in sales orders and rejections, field service, warranty claims, manufacturing stoppages, and supply chain disruptions. Whether the data is structured or unstructured no longer matters. Ditto for whether the data is internal or external. One example is incorporating parts of the Industrial Internet of Things (IIoT) and their connected data-generating devices into the Large Language Models (LLMs) on which AI is trained.

What is also new is the size and depth of databases searched, as well as how these new tools and techniques overcome the disconnects that plague every organization’s data. These disconnects include bad and useless data formats; errors, ambiguities, and inaccuracies; data buried in departmental silos; legacy data with unsuspected value; and data that is misplaced or lost.

All this is aided by digital transformation in all its myriad forms. Digital transformation is increasingly vital to gold mining because it helps users gather terabytes of data into more complete, more accurate, more manageable, and more focused LLMs. Digital transformation can also help users pinpoint what is (still) needed for timely, effective decision-making (e.g., data that did not get into a given LLM and should be included in subsequent queries).

CIMdata itself is adapting by:

  • Broadening its PLM focus to work with clients’ AI projects, creating an AI Practice with Diego Tamburini as Practice Director and Executive Consultant. He has held key positions at Microsoft, Autodesk, SDRC (now part of Siemens Digital Industries Software), and the Georgia Tech Manufacturing Research Center.   
  • Expanding its work and capabilities in digital transformation that enables PLM—and is enabled by PLM in turn. The ultimate goal is to bring together engineering technology (ET), information technology (IT), and operations technology (OT) at the top of the enterprise.

Managers and staffers who receive these AI and analytics findings have a similarly daunting agenda. They must learn how to discern and understand what the gold is telling them, how to weed out what has already been simulated, designed, or engineered for production, and how to choose the most viable of these tools and techniques and manage them.

Effective data governance is crucial to gold mining. I strongly recommend a review of an AI Governance webinar written by Janie Gurley, CIMdata’s Data Governance Director, and me. Posted July 8, 2025, by Engineering.com, available at https://www.engineering.com/ai-governance-the-unavoidable-imperative-of-responsibility/

Further insight is available in my most recent Engineering.com posting, available at https://www.engineering.com/in-the-rush-to-digital-transformation-it-might-be-time-for-a-rethink/

Getting the gold into product development offers many potential benefits by uncovering:

  • Causes of product and system failures
  • Unexpected obsolescence in products and systems
  • Users’ new wants and changing needs
  • Early indications of marketplace disruptions
  • Insights into competitors’ strategies
  • Pending shortfalls in sourcing, and alternatives to them
  • Likely cost increases, and options for addressing them
  • Unrecognized service needs
  • Better methods for production and operations, thanks to AI’s new ability to handle streaming data in LLMs.

Over time, diligent searchers may turn up dozens of connections and correspondences in these nine bullet points. Some will be simple, random coincidences. But many will turn into gold that reveals both opportunities and challenges.

Managing resistance

I urge readers to push the envelope, to find new approaches to everyday tasks, and try new things. While there is always pushback from staff and managers who are already overworked, without encouragement, change will never happen.

The usual response is, “Yes, you’re right, we’ll get to it later.” We caution that they are letting routine tasks get in the way of potential game changers. Ignoring these issues will not make them go away. Yes, obsolescence and failure, user wants and needs, marketplace disruptions, and all the rest (see above) will eventually surface, gain urgency, and become tasks that everyone must address.

Inevitably, this new gold will have to be dug out from data we assumed was useless and then painstakingly engineered into products, services, and forecasts. These values will mandate changes to the organization’s facilities, systems, processes, business partners, and suppliers. And they will have to be communicated to the sales force and distributors.

And there will be resistance to change and its huge costs, which are the underlying themes of this article. Those costs are part of addressing newly uncovered digital gold in a mad scramble amid fears that errors and oversights will place profitability, competitiveness, and job security at risk.

In summary, the quest for digital gold is our modern-day equivalent of the myth of Jason and the Argonauts and their search for the Golden Fleece. Like Jason, we must embark on a perilous journey and overcome countless challenges to seize a prize that promises untold value—transforming raw data into profitable products and services.

This low-tech tool sharpens your digital transformation strategy https://www.engineering.com/this-low-tech-tool-sharpens-your-digital-transformation-strategy/ Thu, 18 Sep 2025 11:12:00 +0000 https://www.engineering.com/?p=143021 It's ironic that the Engineering Services DX Assessment Tool, a simple instrument developed at the University of Waterloo, is low-tech as it gets.

Charlie Patel’s family had been providing engineering services to manufacturing companies in Ontario for the past 75 years. Over that period, technological advances had improved many aspects of their clients’ activities, and they had occurred at a pace that Charlie’s company could adapt to without too much difficulty. Today was different.

Charlie was considering what his company’s response should be to today’s rapid technological change, including what to do about artificial intelligence. He was hearing about the revolutionary impact of AI on what seemed like a daily basis. He knew that new companies were being established to provide technology-based manufacturing engineering services, and he wanted to ensure this did not reduce the work done by his own company. Charlie needed to understand the impact these new technologies would have on his business and what his strategy should be for dealing with them.

Digital transformation is the response that organisations are making to the Fourth Industrial Revolution (the world created by the rapid technological advance taking place today). This can mean changes in products, services and processes throughout the organisation. It can be small or large in scale, radically changing business models, and it means new technologies are being introduced throughout organisations, with significant implications for engineering services organisations.

Engineering services are impacted by the new environment they need to support and the new tools available to them to do this. A wide range of technologies may be adopted in the organisation in areas that are within the scope of the work normally done by engineering services. These might include the development, implementation and support of new technology-enabled processes, new automation and new decision-making systems that may or may not utilise artificial intelligence.

At the same time, technology is changing the support engineering services provide. More data is being collected, and better tools exist for predictive maintenance services. New design tools enable faster, better design; some aspects of service delivery can be automated; artificial intelligence can be used for analytics; digital twins and simulation support analytics, design and management; and big data can provide valuable insights.

These developments in the environment engineers support and in the tools available to them are the main factors influencing the digital transformation of engineering services. They make it essential that engineering services organisations carefully review their current situation and develop their own strategy for dealing with it. Otherwise, they will be vulnerable to other providers who emerge better prepared for the new environment.

The Engineering Services DX Assessment Tool is a simple instrument that we have developed at the University of Waterloo to help engineering services organisations consider and plan their own digital transformation. It is intended to be facilitative – prepared on a white board or flip chart by a group of engineers. Here is a blank version, with one row per client or department:

Client/Dept | Change Elements | Main Tech | Implement Support | Operate Support | Impact Services Now | Action Needed

The tool is suitable for engineering services companies and engineering services within an existing organisation. It asks you to consider the following elements:

Client/Dept: The main organisations or units the engineering services are provided to. For an engineering services company this may be their main clients or client types if they are larger. For internal engineering services this would be the units they provide services to.

Change Elements: The changes in the Client/Dept that are or will be impacted by new technology. This may be specific performance improvements, equipment changes, process changes etc.

Main Tech: The main technologies being used in the change elements. This may include internet of things, artificial intelligence, digital twins, automation etc.

Implement Support: The support needed to implement the change described in the Change Elements column, such as design work, project management, impact assessment, etc.

Operate Support: The support needed to operate the change described in the Change Elements column, such as maintenance, education and training, and performance improvement.

Impact Services Now: Can your existing services provide the Implement and Operate Support that the Change Elements need now or are changes required to do this? Include here any areas of your services that may have been used by the client in the past but are not now needed due to the Change Element.

Action Needed: Review the information you have entered in this row of the chart and determine the actions that you need to take to deliver the support that your Client/Dept will require.

Charlie then completed the tool for his company, filling in a row for each of his main clients.

The Engineering Services DX Assessment Tool allows you to consider the actions you might wish to take to ensure your organisation is able to continue to effectively provide engineering support. Once the chart is completed you can then consider the areas that will be your priority and become the main elements in your digital transformation strategy.

This strategy must include the impact that the Client/Dept changes will have on the skills of the members of your engineering team, along with any personnel changes you may need to make. The Client/Dept changes will require engagement and collaboration with stakeholders by engineers, utilising social skills more frequently than in the past due to the more rapid pace of change.

It should also include consideration of the technology-based tools that your team uses today (for example data analytics, simulation etc.) and investments in any new tools that may be appropriate here.

Developing and implementing your digital transformation strategy for Engineering Services is essential today. As Client/Dept organisations plan and implement their own digital transformation strategies, they will consider the role existing engineering services providers can play. Be prepared with your own digital transformation strategy.

Register for Digital Transformation Week 2025 https://www.engineering.com/register-for-digital-transformation-week-2025/ Tue, 09 Sep 2025 00:54:14 +0000 https://www.engineering.com/?p=142714 Engineering.com’s September webinar series will focus on how to make the best strategic decisions during your digital transformation journey.

Digital transformation remains one of the hottest conversations in manufacturing in 2025. A few years ago, most companies approached digital transformation as a hardware issue. But those days are gone. Now the conversation is a strategic one, centered on data management and creating value from the data all the latest technology generates. The onrush of AI-based technologies only clouds the matter further.

This is why the editors at Engineering.com designed our Digital Transformation Week event—to help engineers unpack all the choices in front of them, and to help them do it at the speed and scale required to compete.

Join us for this series of lunch hour webinars to gain insights and ideas from people who have seen some best-in-class digital transformations take shape.

Registrations are open and spots are filling up fast. Here’s what we have planned for the week:

September 22: Building the Digital Thread Across the Product Lifecycle

12:00 PM Eastern Daylight Time

This webinar is the opening session for our inaugural Digital Transformation Week. We will address the real challenges of implementing digital transformation at any scale, focusing on when, why and how to leverage manufacturing data. We will discuss freeing data from its silos and using your bill of materials as a single source of truth. Finally, we will help you understand how data can fill in the gaps between design and manufacturing to create true end-to-end digital mastery.

September 23: Demystifying Digital Transformation: Scalable Strategies for Small & Mid-Sized Manufacturers

12:00 PM Eastern Daylight Time

Whether your organization is just beginning its digital journey or seeking to expand successful initiatives across multiple departments, understanding the unique challenges and opportunities faced by smaller enterprises is crucial. Tailored strategies, realistic resource planning, and clear objectives empower SMBs to move beyond theory and pilot phases, transforming digital ambitions into scalable reality. By examining proven frameworks and real-world case studies, this session will demystify the process and equip you with actionable insights designed for organizations of every size and level of digital maturity.

September 24: Scaling AI in Engineering: A Practical Blueprint for Companies of Every Size

12:00 PM Eastern Daylight Time

You can’t talk about digital transformation without covering artificial intelligence. Across industries, engineering leaders are experimenting with AI pilots — but many remain uncertain about how to move from experiments to production-scale adoption. The challenge is not primarily about what algorithms or tools to select but about creating the right blueprint: where to start, how to integrate with existing workflows, and how to scale in a way that engineers trust and the business can see immediate value. We will explore how companies are combining foundation models, predictive physics AI, agentic workflow automation, and open infrastructure into a stepped roadmap that works whether you are a small team seeking efficiency gains or a global enterprise aiming to digitally transform at scale.

September 25: How to Manage Expectations for Digital Transformation

12:00 PM Eastern Daylight Time

The digital transformation trend is going strong and manufacturers of all sizes are exploring what could be potentially game-changing investments for their companies. With so much promise and so much hype, it’s hard to know what is truly possible. Special guest Brian Zakrajsek, Smart Manufacturing Leader at Deloitte Consulting LLP, will discuss what digital transformation really is and what it looks like on the ground floor of a manufacturer trying to find its way. He will chat about some common unrealistic expectations, what the realistic expectation might be for each, and how to get there.

Is Nvidia’s Jetson Thor the robot brain we’ve been waiting for? https://www.engineering.com/is-nvidias-jetson-thor-the-robot-brain-weve-been-waiting-for/ Wed, 03 Sep 2025 15:39:58 +0000 https://www.engineering.com/?p=142562


Last month Nvidia launched its powerful new AI and robotics developer kit, the Nvidia Jetson AGX Thor. The chipmaker says it delivers supercomputer-level AI performance in a compact, power-efficient module that enables robots and machines to run advanced “physical AI” tasks—like perception, decision-making, and control—in real time, directly on the device without relying on the cloud.

It’s powered by the full-stack Nvidia Jetson software platform, which supports any popular AI framework and generative AI model. It is also fully compatible with Nvidia’s software stack from cloud to edge, including Nvidia Isaac for robotics simulation and development, Nvidia Metropolis for vision AI and Holoscan for real-time sensor processing.

Nvidia says it’s a big deal because it solves one of the most significant challenges in robotics: running multi-AI workflows to enable robots to have real-time, intelligent interactions with people and the physical world. Jetson Thor unlocks real-time inference, critical for highly performant physical AI applications spanning humanoid robotics, agriculture and surgical assistance.

Jetson AGX Thor delivers up to 2,070 FP4 TFLOPS of AI compute, includes 128 GB memory, and runs within a 40–130 W power envelope. Built on the Blackwell GPU architecture, the Jetson Thor incorporates 2,560 CUDA cores and 96 fifth-gen Tensor Cores, enabled with technologies like Multi-Instance GPU. The system includes a 14-core Arm Neoverse-V3AE CPU (1 MB L2 cache per core, 16 MB shared L3 cache), paired with 128 GB LPDDR5X memory offering ~273 GB/s bandwidth.

There’s a lot of hype around this particular piece of kit, but Jetson Thor isn’t the only game in town. Other players like Intel’s Habana Gaudi, Qualcomm RB5 platform, or AMD/Xilinx adaptive SoCs also target edge AI, robotics, and autonomous systems.

Here’s a comparison of what’s available currently and where it shines:

Edge AI robotics platform shootout

Nvidia Jetson AGX Thor

Specs & Strengths: Built on Nvidia Blackwell GPU, delivers up to 2,070 FP4 TFLOPS and includes 128 GB LPDDR5X memory—all within a 130 W envelope. That’s a 7.5 times AI compute leap and 3 times better efficiency compared to the previous Jetson Orin line. Equipped with 2,560 CUDA cores, 96 Tensor cores, and a 14-core Arm Neoverse CPU. Features 1 TB onboard NVMe, robust I/O including 100 GbE, and optimized for real-time robotics workloads with support for LLMs and generative physical AI.

Use Cases & Reception: Early pilots and evaluations are taking place at several companies, including Amazon Robotics, Boston Dynamics, Meta and Caterpillar, along with pilots from John Deere and OpenAI.

Qualcomm Robotics RB5 Platform

Specs & Strengths: Powered by the QRB5165 SoC, combines Octa-core Kryo 585 CPU, Adreno 650 GPU, Hexagon Tensor Accelerator delivering 15 TOPS, along with multiple DSPs and an advanced Spectra 480 ISP capable of handling up to seven concurrent cameras and 8K video. Connectivity is a standout—integrated 5G, Wi-Fi 6, and Bluetooth 5.1 for remote, low-latency operations. Built for security with Secure Processing Unit, cryptographic support, secure boot, and FIPS certification.

Use Cases & Development Support: Ideal for robotics use cases like SLAM, autonomy, and AI inferencing in robotics and drones. Supports Linux, Ubuntu, and ROS 2.0 with rich SDKs for vision, AI, and robotics development.

(Read more about the Qualcomm Robotics RB5 platform on Robot Report)

AMD Adaptive SoCs and FPGA Accelerators

Key Capabilities: AMD’s AI Engine ML (AIE-ML) architecture provides significantly higher TOPS per watt by optimizing for INT8 and bfloat16 workloads.

Innovation Highlight: Academic projects like EdgeLLM showcase CPU–FPGA architectures (using AMD/Xilinx VCU128) outperforming GPUs in LLM tasks—achieving 1.7 times higher throughput and 7.4 times better energy efficiency than Nvidia’s A100.

Drawbacks: Powerful but requires specialized development and lacks an integrated robotics platform and ecosystem.

The Intel Habana Gaudi is more common in data centers for training and is less prevalent in embedded robotics due to form factor limitations.

How digital transformation and remote monitoring drive sustainability in manufacturing https://www.engineering.com/how-digital-transformation-and-remote-monitoring-drive-sustainability-in-manufacturing/ Wed, 13 Aug 2025 17:55:34 +0000 https://www.engineering.com/?p=142081 Sustainability is no longer a peripheral concern; it’s a strategic and financial imperative.

Regulatory pressure, stakeholder expectations, and rising energy costs have made environmental stewardship critical to long-term success. For manufacturing engineers, this presents both a challenge and an opportunity: How can operations become more resource-efficient without compromising productivity?

The answer increasingly lies in digital transformation systems — specifically, in the deployment of remote monitoring technologies that turn real-time data into actionable sustainability improvements. From energy and water efficiency to predictive maintenance and emissions tracking, these technologies are reshaping how manufacturers optimize resource use and reduce their environmental footprint.

Indeed, the path to sustainable manufacturing runs through data — and remote monitoring is the bridge.

What is remote monitoring?

Remote monitoring involves using Internet of Things (IoT) sensors, embedded systems, and cloud platforms to continuously collect and analyze data from equipment, utilities, and environmental systems across a facility. This data is centralized through manufacturing execution systems (MES), enterprise resource planning (ERP) software, or dedicated building management systems (BMS).

Instead of relying on manual checks, logbooks, or periodic audits, engineers and facility managers get real-time visibility into performance metrics — allowing them to make faster, more informed decisions that directly impact sustainability.

Energy efficiency through real-time monitoring

Energy use is one of the biggest drivers of cost and carbon emissions in manufacturing. Remote monitoring enables a granular view of energy consumption across assets and zones, revealing exactly where, when, and how energy is being used or wasted.

Smart meters and sub-meters connected to a centralized dashboard can identify all sorts of conditions on the shop floor and beyond, including:

  • Idle equipment that’s consuming power during off-hours
  • HVAC systems operating outside of optimal temperature ranges
  • Lighting systems left on in unoccupied zones
  • Peak load times where demand charges can be minimized

By linking this data with control systems, manufacturers can automate load balancing, schedule equipment operations, and even initiate demand-response actions in coordination with utility providers. This reduces both energy costs and greenhouse gas emissions.
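
As a hedged sketch of the idle-equipment case above, the following Python fragment scans sub-meter readings for machines drawing power during off-hours; asset names, the off-shift window, and the standby threshold are all assumptions.

```python
import pandas as pd

# Hypothetical 15-minute sub-meter readings from the plant historian
readings = pd.DataFrame({
    "asset": ["press_1"] * 4,
    "timestamp": pd.to_datetime([
        "2025-08-13 02:00", "2025-08-13 02:15",
        "2025-08-13 10:00", "2025-08-13 10:15",
    ]),
    "kw": [18.0, 17.5, 95.0, 92.0],
})

OFF_HOURS = range(0, 6)   # assumed off-shift window, midnight to 6 a.m.
IDLE_DRAW_KW = 5.0        # assumed acceptable standby draw

off_hours = readings[readings["timestamp"].dt.hour.isin(OFF_HOURS)]
waste = off_hours[off_hours["kw"] > IDLE_DRAW_KW]
print(waste)  # flags press_1 drawing ~18 kW at 2:00 and 2:15 a.m.
```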

Water conservation and waste reduction

Water plays a crucial role in many manufacturing processes — from cooling and cleaning to production itself. However, leaks, inefficiencies, and overuse are common and costly. Remote monitoring helps tackle this by using flow sensors, pressure gauges, and smart valves to track water use in real time. Cooling systems can be optimized to reduce unnecessary water cycling and smart alerts can be triggered by unexpected consumption spikes, pointing to leaks or process failures. Usage trends can be analyzed to adjust cleaning cycles or reuse treated wastewater.
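
A minimal sketch of the spike-alert idea (sensor name and threshold assumed): compare each flow reading against a rolling baseline and alert on large deviations.

```python
import pandas as pd

# Hypothetical flow readings (liters per minute) from a cooling-loop sensor
flow = pd.Series([42, 41, 43, 42, 44, 43, 90, 42], name="cooling_loop_lpm")

baseline = flow.rolling(window=4, min_periods=1).median()
deviation = (flow - baseline).abs()

ALERT_LPM = 20  # assumed deviation threshold suggesting a leak or failure
for i in deviation[deviation > ALERT_LPM].index:
    print(f"Alert: {flow[i]} L/min at reading {i} deviates from baseline")
```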

In plants with on-site wastewater treatment, remote monitoring can ensure compliance with discharge limits and optimize treatment operations, minimizing environmental impact while reducing chemical and energy usage.

Predictive maintenance and asset efficiency

We’ve covered this a lot in this series—for a reason. One of the most effective ways to reduce waste and energy consumption is to keep machinery operating at peak efficiency. With remote condition monitoring, engineers can track vibration, temperature, current draw, and operational hours of key equipment in real time.

Environmental monitoring and emissions tracking

Modern manufacturing operations are under pressure to reduce air emissions, particulate output, and volatile organic compounds (VOCs). Remote monitoring plays a vital role in tracking these metrics through ambient sensors, gas analyzers, and stack monitors connected to cloud systems.

These systems provide continuous emissions reporting for regulatory compliance and early warnings when emissions approach critical thresholds. They also maintain historical data used for environmental, social, and governance reporting.

This not only keeps operations within legal bounds but also supports a proactive approach to pollution prevention, enabling facilities to fine-tune combustion systems or ventilation processes based on real-time feedback.

As more facilities adopt on-site renewable energy—be it solar, wind, or combined heat and power (CHP)—managing the variability and integration of these sources becomes essential. Remote monitoring allows for dynamic balancing of solar generation output against real-time load, battery storage availability, and grid draw during peak versus off-peak hours.

This maximizes the use of clean energy, reduces fossil fuel dependency, and lowers emissions associated with energy use. In some cases, surplus energy can be fed back into the grid or redirected to storage systems, enhancing sustainability while reducing operating costs.

Digital twins and process optimization

Beyond monitoring individual systems, the technology and processes involved in digitization and digitalization (which combine to form the basis of digital transformation) enable the creation and application of digital twins of production lines or facilities. By integrating real-time monitoring data into these simulations, engineers can model energy and resource usage under different production scenarios and test process changes before implementing them physically. This can identify optimal production settings that also reduce scrap, cycle time, or energy used per unit produced.
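
As a toy illustration of that what-if loop (every coefficient here is hypothetical), a digital-twin-style model can be as simple as a function swept over candidate settings before anything changes on the physical line:

```python
# Toy energy model of one production step (coefficients are hypothetical)
def energy_per_unit(speed_pct, batch_size):
    base_kwh = 1.8
    speed_penalty = 0.012 * max(0, speed_pct - 80)  # running hot costs energy
    changeover = 6.0 / batch_size                   # amortized setup energy
    return base_kwh + speed_penalty + changeover

# Sweep candidate settings virtually before committing to a plan
for speed, batch in [(70, 50), (85, 50), (85, 200), (100, 200)]:
    kwh = energy_per_unit(speed, batch)
    print(f"speed={speed}% batch={batch}: {kwh:.2f} kWh/unit")
```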

This capability is powerful for continuous improvement and sustainability planning, allowing facilities to adapt quickly to new customers or product mixes.

The engineering advantage

For manufacturing engineers, the integration of remote monitoring technologies into digital transformation strategies isn’t just a sustainability move — it’s a smarter way to run a business. These systems deliver granular, real-time insights that enable better decisions, faster response, and long-term efficiency.

As sustainability becomes more tightly linked to profitability, risk management, and brand reputation, engineers who understand and embrace these technologies will be best positioned to lead their organizations into a more resource-efficient and environmentally responsible future.

From chain-of-thought to agentic AI: the next inflection point https://www.engineering.com/from-chain-of-thought-to-agentic-ai-the-next-inflection-point/ Fri, 08 Aug 2025 15:11:08 +0000 https://www.engineering.com/?p=141980 AI that thinks versus AI that acts. Autonomously. Systemically. At scale.

We have learned to prompt AI. We have trained it to explain its reasoning. And we have begun to integrate it as a co-pilot or ‘co-assistant’ in science, product design, engineering, manufacturing and beyond—to facilitate enterprise-wide decision-making.

But even as chain-of-thought (CoT) prompting reshaped how we engage with machines, it also exposed a clear limitation: AI still waits for us to tell it what to do.

Often in engineering, the hardest part is not finding the right answer—it is knowing what the right question is in the first place. This highlights a critical truth: even advanced AI tools depend on human curiosity, perspective, and framing.

CoT helps bridge that gap, but it is still a people-centered evolution. As AI begins to reason more like humans, it raises a deeper question: Can it also begin to ask the right questions, not just answer them? Can the machine help engineers make product development or manufacturing decisions?

As complexity escalates and time-to-decision contracts, reactive monolithic enterprise systems alone will no longer suffice. We are entering a new era—where AI stops assisting and starts orchestrating.

Welcome to the age of agentic AI.

Chain-of-thought: transformational but not autonomous

CoT reasoning is a breakthrough in human-AI collaboration. By enabling AI to verbalize intermediate steps and reveal transparent reasoning, CoT has reshaped AI from an opaque black box into a more interpretable partner. This evolution has bolstered trust, enabling domain experts to validate AI outputs with greater confidence. Across sectors such as engineering, R&D, and supply chain management, CoT is accelerating adoption by enhancing human cognition.

Yet CoT remains fundamentally reactive. It requires human prompts and structured queries to function, lacking autonomy or initiative. In environments rife with complexity—thousands of interdependent variables influencing product development, manufacturing, and supply chains—waiting for human direction slows response and restricts scale.

Consider a product design review with multiple engineering teams navigating dynamic regulatory demands, supplier constraints, and shifting market trends. CoT can clarify reasoning or suggest alternatives, but it cannot autonomously prioritize design changes or coordinate cross-functional decisions in real time.

CoT is just the visible tip of the iceberg. While it connects to the underlying data plumbing, the real shift lies in how AI can interrogate these relationships meaningfully—and potentially uncover new ones. That is where things start to tip from reasoning to autonomy, and the door opens to agentic AI.

From logic to autonomous action

Agentic AI represents a fundamental leap from the prompt-response paradigm. These systems initiate, prioritize, and adapt. They fuse reasoning with goal-driven autonomy—capable of assessing context, navigating uncertainty, and taking independent action.

Self-directed, proactive, and context-aware, agentic AI embodies a new class of intelligent software—no longer answering queries alone but orchestrating workflows, resolving issues, and closing loops across complex value chains.

As Steven Bartlett noted in a recent DOAC podcast: “AI agents are the most disruptive technology of our lifetime.” They will not just change how we work—they will change what it means to work, reshaping roles, decisions, and entire industries in their wake.

The 2025 Trends in Artificial Intelligence report from Bond Capital highlights this transition, describing autonomous agents as evolving beyond manual interfaces into core enablers of digital workflows. The speed and scope of this transformation evoke the early days of the internet—only this time, the implications promise to be even more profound.

Redefining the digital thread

Agentic AI rewires the digital thread—from passive connectivity to proactive intelligence across the product lifecycle. No longer static, the thread becomes adaptive and autonomous. Industry applications are wide-ranging:

  • In quality, AI monitors sensor streams, predicts anomalies, and triggers resolution—preventing defects before they occur.
  • In configuration management, agents detect part-software-supplier conflicts and self-initiate change coordination.
  • In supply chain orchestration, disruptions prompt real-time replanning, compliance updates, and automated documentation.

The result: reduced cycle times, faster iteration, and proactive risk mitigation. Can the digital thread become a thinking, learning, acting ecosystem—bridging data, context, and decisions?
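
To make the quality pattern above concrete, the core of such an agent is a sense-decide-act loop that runs without waiting for a human prompt. The Python sketch below is hypothetical: the sensor feed, scoring model, and resolution action are placeholder functions standing in for real integrations.

    import time

    def read_sensor_window():
        """Placeholder: return the latest window of sensor readings."""
        return [0.0]  # hypothetical feed

    def anomaly_score(window) -> float:
        """Placeholder: a trained model would score the window here."""
        return 0.0  # hypothetical model

    def open_ticket_and_adjust(score: float) -> None:
        """Placeholder: create a resolution task and apply a safe setpoint change."""
        print(f"anomaly score {score:.2f}: ticket opened, setpoint adjusted")

    THRESHOLD = 0.8

    def agent_loop(poll_seconds: float = 5.0) -> None:
        while True:
            score = anomaly_score(read_sensor_window())
            if score > THRESHOLD:
                # The agent acts on its own initiative rather than waiting
                # for a prompt, which is the defining trait of agentic behavior.
                open_ticket_and_adjust(score)
            time.sleep(poll_seconds)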

Nevertheless, the transformation is not just technical:

  • Trust and traceability: Autonomous decisions must be explainable, especially in regulated spaces.
  • Data readiness: Structured, accessible data is the backbone of agentic performance.
  • Integration: Agents must interface with PLM, ERP, digital twins, and legacy systems.
  • Leadership and workforce evolution: Engineers become orchestrators and interpreters. Leaders must foster new models of human-AI engagement.

This shift is from thinking better to acting faster, smarter, and more autonomously. Agentic AI will redraw the boundaries between systems, workflows, and organizational silos.

For those ready to lead, this is not just automation; it is acceleration. If digital transformation was a journey, this is the moment the wheels leave the ground.

Building trustworthy autonomy

The road ahead is not about AI replacing humans—but about shaping new hybrid ecosystems where software agents and people collaborate in real time.

  • We will see AI agents assigned persistent roles across product lifecycles—managing variants, orchestrating compliance, or continuously optimizing supply chains.
  • These agents will not just “assist” engineers. They will augment system performance, suggesting better configurations, reducing rework, and flagging design risks before they materialize.
  • Organizations will create AI observability frameworks—dashboards for tracking, auditing, and tuning the behavior of autonomous agents over time.
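
As a minimal sketch of what such an observability layer might record, the example below appends each autonomous decision, with its inputs and rationale, to a JSON-lines file so it can be audited and tuned later. The event fields and storage choice are assumptions, not a standard.

    import json
    import time
    import uuid

    def record_decision(agent: str, inputs: dict, action: str, rationale: str,
                        log_path: str = "agent_decisions.jsonl") -> str:
        """Append one auditable decision event and return its id."""
        event = {
            "id": str(uuid.uuid4()),
            "timestamp": time.time(),
            "agent": agent,
            "inputs": inputs,
            "action": action,
            "rationale": rationale,
        }
        with open(log_path, "a") as f:
            f.write(json.dumps(event) + "\n")
        return event["id"]

    # Usage: a supply-chain agent logs why it expedited a purchase order.
    record_decision(
        agent="supply-chain-agent",
        inputs={"part": "P-1234", "days_of_cover": 2.1},
        action="expedite_po",
        rationale="projected stockout before next scheduled delivery",
    )

Dashboards for tracking, auditing, and tuning agents can then be built on top of exactly this kind of event stream.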

Going forward, we might not just review dashboards—we might be briefed by agents that curate insights, explain trade-offs, and propose resolutions. To succeed, the next wave of adoption will hinge on governance, skill development, and cultural readiness:

  • Governance that sets transparent bounds for agent behavior and allows for continuous, purposeful adjustment.
  • Skills that blend domain expertise with human-AI fluency.
  • Cultures that treat agents not as black boxes, but as emerging teammates or human extensions.

Crucially, managing AI hallucination—where systems generate plausible but inaccurate outputs—alongside the rising entropy of increasingly complex autonomous interactions, will be essential to maintain trust, ensure auditable reasoning, and prevent system drift or unintended behaviors.

Ultimately, the goal is not to lose control—but to gain new control levers. Agentic AI will demand a rethink not just of tools—but of who decides, how, and when. The future is not man versus machine. It should be machine-empowered humanity—faster, more adaptive, and infinitely more scalable.

The post From chain-of-thought to agentic AI: the next inflection point appeared first on Engineering.com.

]]>
5 more project deliverables that drive digital transformation success https://www.engineering.com/5-more-project-deliverables-that-drive-digital-transformation-success/ Wed, 06 Aug 2025 14:04:08 +0000 https://www.engineering.com/?p=141919 Some significant project deliverables that are essential to every digital transformation project plan.

The post 5 more project deliverables that drive digital transformation success appeared first on Engineering.com.

]]>
Digital transformation often changes the way organizations create value. Technology adoption and the growing reliance on digital systems significantly benefit engineers.

From predictive analytics, robotic process automation and artificial intelligence to software that enhances collaboration, organizations are leveraging digital technology to create, capture, and deliver value in new and innovative ways.

Well-defined digital transformation project deliverables lead to project success. Vaguely defined or missing project deliverables create a risk of missed expectations, delays and cost overruns.

Here are the significant project deliverables essential to every digital transformation project plan:

  1. Project charter
  2. Data analytics and visualization strategy
  3. Generative AI strategy
  4. Data conversion strategy
  5. Data profiling strategy
  6. Data integration strategy
  7. Data conversion testing strategy
  8. Data quality strategy
  9. Risk assessment
  10. Change management plan
  11. Data conversion reports

These deliverables merit more attention for digital transformation projects than routine systems projects. We’ll examine deliverables six through eleven in this article. To read the first article, click here.

Data integration strategy

Organizations achieve most of the value of digital transformation from new data-based insights and previously impossible process efficiencies. The insights are revealed through sophisticated data analytics and visualization, which rely on data integration. The process efficiencies likewise depend on integrated data.

The data integration strategy deliverable describes how data from every data source will be integrated with data from other data sources. The primary integration strategies are:

  • Populate a lakehouse or a data warehouse with data from multiple data sources. This strategy persists all the integrated data. It is more expensive to implement and operate but delivers the most integration opportunities and the best query performance.
  • Keep data sources where they are and populate cross-reference tables for key values. This strategy persists only key values to create the illusion of integration. It’s appealing because it’s the easiest and cheapest to implement and operate, but it sacrifices query performance.
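
To make the second strategy concrete, here is a minimal Python sketch of a cross-reference table that maps each source system’s customer key to a single canonical key. The system names, keys, and table layout are hypothetical.

    # Canonical customer keys mapped from each source system.
    customer_xref = {
        ("erp", "C-000482"): "CUST-482",
        ("mes", "48200"): "CUST-482",
        ("crm", "acme-mfg"): "CUST-482",
    }

    def canonical_customer(source: str, source_key: str) -> str | None:
        """Resolve a source-system key to the shared canonical key."""
        return customer_xref.get((source, source_key))

    # Records from different systems can now be stitched together on the fly,
    # at the cost of resolving keys and joining data at query time.
    assert canonical_customer("mes", "48200") == canonical_customer("crm", "acme-mfg")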

The value of the data integration strategy lies in informing the software development effort for each of the likely tools, such as extract, transform, and load (ETL), data pipelines, and custom software.

Data conversion testing strategy

It’s impossible for engineers to overestimate the effort required for data conversion testing. Every digital transformation project will involve some risk of data degradation as a result of the conversion process.

The data conversion testing strategy is a deliverable that describes who will test the data conversion and how, considering the following:

  • Automation opportunities.
  • Statistical analysis techniques.
  • Mismatches in joins of keys and foreign keys.
  • Strategies for minimizing manual inspection.
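
As one example of automating the join-mismatch check above, an anti-join flags converted rows whose foreign keys no longer match any primary key. The sketch below uses an in-memory SQLite database with hypothetical table and column names.

    import sqlite3

    conn = sqlite3.connect(":memory:")  # stand-in for the converted datastore
    conn.executescript("""
        CREATE TABLE customers (customer_id TEXT PRIMARY KEY);
        CREATE TABLE orders (order_id TEXT, customer_id TEXT);
        INSERT INTO customers VALUES ('CUST-482');
        INSERT INTO orders VALUES ('O-1', 'CUST-482'), ('O-2', 'CUST-999');
    """)

    # Anti-join: orders whose customer key has no match after conversion.
    orphans = conn.execute("""
        SELECT o.order_id, o.customer_id
        FROM orders AS o
        LEFT JOIN customers AS c ON c.customer_id = o.customer_id
        WHERE c.customer_id IS NULL
    """).fetchall()

    print(f"{len(orphans)} order(s) reference a missing customer: {orphans}")

Run over every key relationship, checks like this replace much of the manual inspection the strategy aims to minimize.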

The value of the data conversion testing strategy lies in describing the skills and estimating the effort needed to confirm the adequacy of the data conversion.

Data quality strategy

The current data quality in the data sources will almost always disappoint. That reality leads to a data quality strategy deliverable for digital transformation that describes the following:

  • Acceptable data quality level for at least the most critical data elements.
  • Data enhancement actions that will augment available data to more fully meet end-user expectations.
  • Data quality maintenance processes to maintain data quality for the various data sources.
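
As a sketch of the first item, acceptable quality levels can be encoded as thresholds and checked automatically. The field names and targets below are assumptions for illustration, not recommendations.

    def completeness(rows: list[dict], field: str) -> float:
        """Fraction of rows where the field is present and non-empty."""
        if not rows:
            return 0.0
        filled = sum(1 for r in rows if r.get(field) not in (None, ""))
        return filled / len(rows)

    # Hypothetical critical elements and their agreed completeness targets.
    CRITICAL_FIELDS = {"part_number": 0.99, "vendor_number": 0.99, "unit_cost": 0.95}

    def quality_report(rows: list[dict]) -> dict[str, bool]:
        """Pass/fail per critical field against its target."""
        return {
            field: completeness(rows, field) >= target
            for field, target in CRITICAL_FIELDS.items()
        }

    sample = [
        {"part_number": "P-1", "vendor_number": "V-9", "unit_cost": 1.25},
        {"part_number": "P-2", "vendor_number": "", "unit_cost": None},
    ]
    print(quality_report(sample))  # {'part_number': True, 'vendor_number': False, 'unit_cost': False}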

The data quality strategy enables the organization to achieve a consensus on what constitutes sufficient data quality and which data elements are most critical.

Risk assessment

Digital transformation projects experience various risks. This deliverable reminds management that these projects are not simple or easy.

The specific risks that digital transformation projects experience include:

  • Risk of data issues, such as insufficient quality and complex integration. These issues add cost and extend the schedule of digital transformation projects.
  • The risk of viewing AI as a silver bullet. The article How can engineers reduce AI model hallucinations? explains how to alleviate this risk.
  • The risk of data analytics and visualization software complexity. This complexity adds to the cost of training and creates a risk of misleading results.
  • The risk of an overly ambitious scope overwhelming the team and budget. This unrealizable scope creates disappointment and reduces commitment to the project.

Identifying and assessing potential risks associated with the transformation will help the project team mitigate risks and minimize their impact if they become a reality. The article How engineers can mitigate AI risks in digital transformation discusses the most common ones.

Change management plan

Digital transformation projects almost always introduce significant changes to business processes. Successful projects define a comprehensive people change management plan that includes:

  • Stating the change management goal, approach and supporting objectives.
  • Engaging project sponsors and stakeholders.
  • Description of the current situation.
  • Description of change management strategies, such as training, change agents and end-user support.
  • Recommended change management roles and resources.
  • Description of the approximate timeline.

This deliverable, when executed as planned, ensures that end-users such as engineers adopt the new processes and digital tools. Without that adoption, the organization will not realize the benefits of the contemplated project.

Data conversion reports

The data conversion test report contains the results of the data conversion design and testing work for each datastore. It includes:

  • The number of software modules developed to perform data conversion.
  • A summary of the number of rows that couldn’t be converted, with the cause for each.
  • A summary of the data quality improvements required.

The final data conversion report contains the results of the data conversion work by datastore. It includes a summary of the:

  • Number of rows converted.
  • Data quality improvements that were made.
  • Data issues that were not addressed.
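
A minimal sketch of how these summaries might be assembled from per-row conversion outcomes; the outcome records and field names are hypothetical.

    from collections import Counter

    # One record per source row, emitted by the conversion modules.
    outcomes = [
        {"datastore": "items", "converted": True, "fix_applied": "date reformat"},
        {"datastore": "items", "converted": True, "fix_applied": None},
        {"datastore": "items", "converted": False, "cause": "duplicate key"},
    ]

    converted = sum(1 for o in outcomes if o["converted"])
    fixes = Counter(o["fix_applied"] for o in outcomes if o.get("fix_applied"))
    open_issues = Counter(o["cause"] for o in outcomes if not o["converted"])

    print(f"rows converted: {converted} of {len(outcomes)}")
    print("quality improvements made:", dict(fixes))
    print("issues not addressed:", dict(open_issues))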

These deliverables often frustrate stakeholders because engineers invariably recommend more effort to:

  • Improve historical data quality.
  • Strengthen data quality processes to maintain future data quality.

Paying detailed attention to these digital transformation project deliverables will ensure the success of your project. This list of deliverables is a subset of the overall set of deliverables that all projects typically complete.

The post 5 more project deliverables that drive digital transformation success appeared first on Engineering.com.

]]>