Industry - Engineering.com https://www.engineering.com/category/industry/ Thu, 08 Jan 2026 12:44:13 +0000

AMD introduces Ryzen AI Embedded processor portfolio https://www.engineering.com/amd-introduces-ryzen-ai-embedded-processor-portfolio/ Thu, 08 Jan 2026 10:48:00 +0000
New AMD Ryzen AI Embedded P100 and X100 Series processors combine high-performance “Zen 5” CPU cores with RDNA 3.5 graphics and an XDNA 2 NPU.

AMD introduced the AMD Ryzen AI Embedded processors, a new portfolio of embedded x86 processors designed to power AI-driven applications at the edge. From automotive digital cockpits and smart healthcare to physical AI for autonomous systems, including humanoid robotics, the new P100 and X100 Series processors provide OEMs, tier-1 suppliers and system and software developers in automotive and industrial markets with high performance, efficient AI compute in a compact BGA (ball grid array) package for the most constrained embedded systems.

The processors integrate the high-performance “Zen 5” core architecture for scalable x86 performance and deterministic control, an RDNA 3.5 GPU for real-time visualization and graphics, and an XDNA 2 NPU for low-latency, low-power AI acceleration – all in a single chip.

The portfolio includes the P100 Series processors, targeting in-vehicle experiences and industrial automation, and the X100 Series processors featuring higher CPU core counts and AI TOPS performance for more demanding physical AI and autonomous systems.

Purpose-built for in-vehicle experiences

The initial P100 Series processors, featuring 4-6 cores, are optimized for next-generation digital cockpits and HMIs (human-machine interfaces), enabling real-time graphics for in-vehicle infotainment displays, AI-driven interactions, and multi-domain responsiveness. They deliver up to a 2.2X multi-thread and single-thread performance boost over the previous generation while ensuring deterministic control in a compact 25×40 mm BGA package. With a 15–54-watt operating range and support for –40°C to +105°C environments, they are built for harsh, power- and space-constrained edge systems and 10-year lifecycles.

Immersive graphics and on-device AI acceleration

The P100 Series processors integrate an RDNA 3.5 GPU, delivering an estimated 35% faster rendering to power up to four 4K (or two 8K) digital displays simultaneously at 120 frames per second. The AMD video codec engine enables high-fidelity, low-latency streaming and responsive playback without burdening the CPU.

The next-generation AMD XDNA 2 NPU delivers up to 50 TOPS, for up to 3X higher AI inference performance. The XDNA 2 architecture combines understanding of voice, gesture and environmental cues using supported AI models, including vision transformers, compact LLMs and CNNs.

Open, safe software stack for faster system design

Ryzen AI Embedded processors provide a consistent development environment with a unified software stack that spans the CPU, GPU, and NPU. At the runtime layer, developers benefit from optimized CPU libraries, open-standard GPU APIs, and a native XDNA architecture AI runtime enabled through Ryzen AI Software.

The entire software stack is built on the open-source, Xen hypervisor-based virtualization framework that securely isolates multiple operating system domains. This enables Yocto or Ubuntu to power the HMI, FreeRTOS to manage real-time control, and Android or Windows to support richer applications, all running safely in parallel. With an open-source foundation, long-term OS support, and an ASIL-B capable architecture, they help customers reduce costs, simplify customization, and accelerate the path to production for automotive and industrial systems.

Product availability

AMD Ryzen AI Embedded P100 processors featuring 4-6 cores are sampling with early access customers. Tools and documentation are available with production shipments expected in the second quarter. P100 Series processors featuring 8-12 cores targeting industrial automation applications are expected to begin sampling in the first quarter. Sampling of X100 Series processors, which offer up to 16 cores, is expected to begin in the first half of this year.

For more information, visit amd.com.

NVIDIA introduces Rubin platform for large-scale AI systems https://www.engineering.com/nvidia-introduces-rubin-platform-for-large-scale-ai-systems/ Thu, 08 Jan 2026 09:51:00 +0000
The platform combines six new chips and rack-scale systems, with early adoption expected across cloud providers, AI labs and system manufacturers.

NVIDIA launched the Rubin platform, which includes six chips intended to be used together in a rack-scale AI system. NVIDIA said Rubin is designed to support building, deploying and securing large AI systems while reducing training time and inference cost.

The Rubin platform uses extreme codesign across the six chips — the NVIDIA Vera CPU, NVIDIA Rubin GPU, NVIDIA NVLink 6 Switch, NVIDIA ConnectX-9 SuperNIC, NVIDIA BlueField-4 DPU and NVIDIA Spectrum-6 Ethernet Switch — to slash training time and inference token costs.

Named for Vera Florence Cooper Rubin — the trailblazing American astronomer whose discoveries transformed humanity’s understanding of the universe — the Rubin platform features the NVIDIA Vera Rubin NVL72 rack-scale solution and the NVIDIA HGX Rubin NVL8 system.

The Rubin platform introduces five innovations, including the latest generations of NVIDIA NVLink interconnect technology, Transformer Engine, Confidential Computing and RAS Engine, as well as the NVIDIA Vera CPU. These breakthroughs will accelerate agentic AI, advanced reasoning and massive-scale mixture-of-experts (MoE) model inference at up to 10x lower cost per token than the NVIDIA Blackwell platform. Compared with its predecessor, the NVIDIA Rubin platform trains MoE models with 4x fewer GPUs to accelerate AI adoption.

Broad ecosystem support

Among the world’s leading AI labs, cloud service providers, computer makers and startups expected to adopt Rubin are Amazon Web Services (AWS), Anthropic, Black Forest Labs, Cisco, Cohere, CoreWeave, Cursor, Dell Technologies, Google, Harvey, HPE, Lambda, Lenovo, Meta, Microsoft, Mistral AI, Nebius, Nscale, OpenAI, OpenEvidence, Oracle Cloud Infrastructure (OCI), Perplexity, Runway, Supermicro, Thinking Machines Lab and xAI.

Engineered to scale intelligence

Agentic AI and reasoning models, along with state-of-the-art video generation workloads, are redefining the limits of computation. Multistep problem-solving requires models to process, reason and act across long sequences of tokens. Designed to serve the demands of complex AI workloads, the Rubin platform’s five groundbreaking technologies include:

  • Sixth-Generation NVIDIA NVLink: Delivers the fast, seamless GPU-to-GPU communication required for today’s massive MoE models. Each GPU offers 3.6TB/s of bandwidth, while the Vera Rubin NVL72 rack provides 260TB/s — more bandwidth than the entire internet. With built-in, in-network compute to speed collective operations, as well as new features for enhanced serviceability and resiliency, NVIDIA NVLink 6 switch enables faster, more efficient AI training and inference at scale.
  • NVIDIA Vera CPU: Designed for agentic reasoning, NVIDIA Vera is the most power‑efficient CPU for large-scale AI factories. The NVIDIA CPU is built with 88 NVIDIA custom Olympus cores, full Armv9.2 compatibility and ultrafast NVLink-C2C connectivity. Vera delivers exceptional performance, bandwidth and industry‑leading efficiency to support a full range of modern data center workloads.
  • NVIDIA Rubin GPU: Featuring a third-generation Transformer Engine with hardware-accelerated adaptive compression, Rubin GPU delivers 50 petaflops of NVFP4 compute for AI inference.
  • Third-Generation NVIDIA Confidential Computing: Vera Rubin NVL72 is the first rack-scale platform to deliver NVIDIA Confidential Computing — which maintains data security across CPU, GPU and NVLink domains — protecting the world’s largest proprietary models, training and inference workloads.
  • Second-Generation RAS Engine: The Rubin platform — spanning GPU, CPU and NVLink — features real-time health checks, fault tolerance and proactive maintenance to maximize system productivity. The rack’s modular, cable-free tray design enables up to 18x faster assembly and servicing than Blackwell.
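
As a sanity check on the NVLink item above, the per-GPU and rack-level bandwidth figures are mutually consistent:

```python
# Cross-check of the quoted NVLink 6 numbers: 72 Rubin GPUs at 3.6 TB/s
# each should account for the rack-level figure cited for the NVL72.
per_gpu_tb_s = 3.6        # NVLink 6 bandwidth per GPU, from the article
gpus_per_rack = 72        # Vera Rubin NVL72 rack
rack_tb_s = per_gpu_tb_s * gpus_per_rack
print(f"{rack_tb_s:.1f} TB/s")   # 259.2 TB/s, rounded to ~260 TB/s above
```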

AI-native storage and secure, software-defined infrastructure

NVIDIA Rubin introduces NVIDIA Inference Context Memory Storage Platform, a new class of AI-native storage infrastructure designed to scale inference context at gigascale.

Powered by NVIDIA BlueField-4, the platform enables efficient sharing and reuse of key-value cache data across AI infrastructure, improving responsiveness and throughput while enabling predictable, power-efficient scaling of agentic AI.
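
The key-value cache reuse described above can be illustrated with a minimal prefix-caching sketch. This shows the general technique only, not NVIDIA's implementation; every name below is invented for illustration.

```python
# Minimal sketch of KV-cache sharing: if two requests start with the same
# prompt prefix, the attention key-value blocks computed for that prefix
# can be stored once and served from the store instead of recomputed.

def expensive_prefill(prefix: tuple) -> str:
    """Stand-in for computing attention KV blocks for a token prefix."""
    return f"kv-blocks-for-{len(prefix)}-tokens"

class KVCacheStore:
    def __init__(self):
        self.store = {}   # token prefix -> cached KV blocks
        self.hits = 0     # reuses that skipped recomputation

    def get_or_compute(self, tokens):
        key = tuple(tokens)
        if key in self.store:
            self.hits += 1                        # reuse across sessions
        else:
            self.store[key] = expensive_prefill(key)
        return self.store[key]

cache = KVCacheStore()
system_prompt = ["you", "are", "a", "helpful", "agent"]
cache.get_or_compute(system_prompt)   # first session: prefill computed
cache.get_or_compute(system_prompt)   # second session: served from cache
print(cache.hits)                     # 1
```

In production systems the stored blocks are GPU tensors and the store spans network-attached memory, but the responsiveness gain comes from the same trade: storage in exchange for skipped prefill compute.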

As AI factories increasingly adopt bare-metal and multi-tenant deployment models, maintaining strong infrastructure control and isolation becomes essential.

BlueField-4 also introduces Advanced Secure Trusted Resource Architecture, or ASTRA, a system-level trust architecture that gives AI infrastructure builders a single, trusted control point to securely provision, isolate and operate large-scale AI environments without compromising performance.

With AI applications evolving toward multi-turn agentic reasoning, AI-native organizations must manage and share far larger volumes of inference context across users, sessions and services.

Different forms for different workloads

NVIDIA Vera Rubin NVL72 offers a unified, secure system that combines 72 NVIDIA Rubin GPUs, 36 NVIDIA Vera CPUs, NVIDIA NVLink 6, NVIDIA ConnectX-9 SuperNICs and NVIDIA BlueField-4 DPUs.

NVIDIA will also offer the NVIDIA HGX Rubin NVL8 platform, a server board that links eight Rubin GPUs through NVLink to support x86-based generative AI platforms. The HGX Rubin NVL8 platform accelerates training, inference and scientific computing for AI and high-performance computing workloads.

NVIDIA DGX SuperPOD serves as a reference for deploying Rubin-based systems at scale, integrating either NVIDIA DGX Vera Rubin NVL72 or DGX Rubin NVL8 systems with NVIDIA BlueField-4 DPUs, NVIDIA ConnectX-9 SuperNICs, NVIDIA InfiniBand networking and NVIDIA Mission Control software.

Next-generation Ethernet networking

Advanced Ethernet networking and storage are components of AI infrastructure critical to keeping data centers running at full speed, improving performance and efficiency, and lowering costs.

NVIDIA Spectrum-6 Ethernet is the next generation of Ethernet for AI networking, built to scale Rubin-based AI factories with higher efficiency and greater resilience, and enabled by 200G SerDes communication circuitry, co-packaged optics and AI-optimized fabrics.

Built on the Spectrum-6 architecture, Spectrum-X Ethernet Photonics co-packaged optical switch systems deliver 10x greater reliability and 5x longer uptime for AI applications while achieving 5x better power efficiency, maximizing performance per watt compared with traditional methods. Spectrum-XGS Ethernet technology, part of the Spectrum-X Ethernet platform, enables facilities separated by hundreds of kilometers and more to function as a single AI environment.

Together, these innovations define the next generation of the NVIDIA Spectrum-X Ethernet platform, engineered with extreme codesign for Rubin to enable massive-scale AI factories and pave the way for future million-GPU environments.

Rubin readiness

NVIDIA Rubin is in full production, and Rubin-based products will be available from partners in the second half of 2026.

Among the first cloud providers to deploy Vera Rubin-based instances in 2026 will be AWS, Google Cloud, Microsoft and OCI, as well as NVIDIA Cloud Partners CoreWeave, Lambda, Nebius and Nscale.

Microsoft will deploy NVIDIA Vera Rubin NVL72 rack-scale systems as part of next-generation AI data centers, including future Fairwater AI superfactory sites.

Designed to deliver unprecedented efficiency and performance for training and inference workloads, the Rubin platform will provide the foundation for Microsoft’s next-generation cloud AI capabilities. Microsoft Azure will offer a tightly optimized platform enabling customers to accelerate innovation across enterprise, research and consumer applications.

CoreWeave will integrate NVIDIA Rubin-based systems into its AI cloud platform beginning in the second half of 2026. CoreWeave is built to operate multiple architectures side by side, enabling customers to bring Rubin into their environments, where it will deliver the greatest impact across training, inference and agentic workloads.

Together with NVIDIA, CoreWeave will help AI pioneers take advantage of Rubin’s advancements in reasoning and MoE models, while continuing to deliver the performance, operational reliability and scale required for production AI across the full lifecycle with CoreWeave Mission Control.

In addition, Cisco, Dell, HPE, Lenovo and Supermicro are expected to deliver a wide range of servers based on Rubin products.

AI labs including Anthropic, Black Forest, Cohere, Cursor, Harvey, Meta, Mistral AI, OpenAI, OpenEvidence, Perplexity, Runway, Thinking Machines Lab and xAI are looking to the NVIDIA Rubin platform to train larger, more capable models and to serve long-context, multimodal systems at lower latency and cost than with prior GPU generations.

Infrastructure software and storage partners AIC, Canonical, Cloudian, DDN, Dell, HPE, Hitachi Vantara, IBM, NetApp, Nutanix, Pure Storage, Supermicro, SUSE, VAST Data and WEKA are working with NVIDIA to design next-generation platforms for Rubin infrastructure.

The Rubin platform marks NVIDIA’s third-generation rack-scale architecture, with more than 80 NVIDIA MGX ecosystem partners.

To help customers unlock this density, Red Hat announced an expanded collaboration with NVIDIA to deliver a complete AI stack optimized for the NVIDIA Rubin platform through Red Hat’s hybrid cloud portfolio, including Red Hat Enterprise Linux, Red Hat OpenShift and Red Hat AI. These solutions are used by the vast majority of Fortune Global 500 companies.

For more information, visit nvidia.com.

Hexagon Robotics partners with Microsoft on humanoid robots https://www.engineering.com/hexagon-robotics-partners-with-microsoft-on-humanoid-robots/ Thu, 08 Jan 2026 09:46:21 +0000
Collaboration uses Azure cloud and AI frameworks to support manipulation and inspection use cases in automotive, aerospace and manufacturing.

Hexagon Robotics announced a partnership with Microsoft to advance humanoid robots. The companies plan to collaborate on data-driven manufacturing workflows; expand physical AI frameworks, including imitation learning, reinforcement learning and multimodal vision-language-action models; and work with customers to deploy Azure-based robotics systems from development through factory deployment.

Combining Hexagon Robotics’ expertise in sensor fusion, robotics, and spatial intelligence with Microsoft’s strengths in cloud computing and scalable platforms, including Fabric Real-Time Intelligence in Microsoft Fabric, Azure IoT Operations, and Azure App Service, the two companies will work together to provide production-ready humanoid solutions for manipulation and inspection use cases, first targeting the automotive, aerospace, manufacturing and logistics industries.

Humanoid robots are expected to reshape many industries by enabling new levels of autonomy and efficiency while keeping humans in the loop. The partnership will tackle several existing deployment challenges, such as data management, one-shot imitation learning, and training for multimodal AI models. Through this collaboration, Hexagon’s industrial humanoid robot, AEON, has already demonstrated real-time defect detection and operational intelligence.

Together, Hexagon Robotics and Microsoft are committed to addressing workforce challenges and enhancing operational efficiency across a breadth of industries with intelligent, scalable, and autonomous solutions.

For more information, visit hexagon.com.

Humanoid robots show autonomous sorting and mobility at CES 2026 https://www.engineering.com/humanoid-robots-show-autonomous-sorting-and-mobility-at-ces-2026/ Thu, 08 Jan 2026 08:57:00 +0000
X-Humanoid demos featured a bimanual parts-sorting workflow and running tests, and highlighted deployments in factories, power grid inspection and labs.

At CES 2026, Humanoid Robotics (X-Humanoid) presented its advanced humanoid robots, including Embodied Tien Kung 2.0 and Embodied Tien Kung Ultra, reflecting significant progress toward robots that are truly capable and skilled at real-world tasks. Through live, fully autonomous demonstrations, X-Humanoid showcased the capabilities of China’s applied, industry-focused robotics sector.

The Embodied Tien Kung 2.0 robot performing sorting tasks at CES.

Founded in November 2023, Beijing Innovation Center of Humanoid Robotics Co., Ltd. is a technology company specializing in embodied intelligence and humanoid robotics. The company focuses on the research, development, and application of next-generation intelligent robots that integrate artificial intelligence, motion control, and human–robot interaction.

X-Humanoid, steadfast in its mission to make embodied intelligence fully autonomous and more practical, presented its robots’ self-directed operational capabilities through live on-site demos, showcasing to a global audience how its robotics development is increasingly oriented toward deployable, practical systems.

Embodied Tien Kung Ultra running at CES.

To ensure robots that are capable and skilled at real-world tasks, X-Humanoid has developed two core platforms:

  • Embodied Tien Kung: a universal robotics platform, which provides an industrial-grade robot body engineered for long endurance, high payload capacity, and coordinated bimanual operation.
  • Wise KaiWu: a universal embodied AI platform, which integrates cognitive and physical systems to form a closed-loop system encompassing perception, decision-making, and execution, paving the way for embodied robots to achieve full autonomy and broader real-world applicability.

During the live demonstrations, Embodied Tien Kung 2.0 performed fully autonomous parts sorting and interacted with visitors, offering a direct, hands-on view of its fast, accurate, and resilient operation.

Powered by X-Humanoid’s proprietary cross-ontology Base Model VLA XR-1, the robot autonomously and smoothly performed a complete workflow of grasping, sorting, and placing components. It consistently adapted to variables such as changing object positions, environmental changes outside the conveyor zone, and spatial adjustments, demonstrating strong generalization. Its performance highlights three key strengths:

  • Fast: Using the proprietary UVMC (Unified Vision-Motion Codes) technology, the robot builds a direct bridge between visual perception and physical action, transforming what it sees into instinctive physical responses—analogous to human reflexes—to handle unexpected situations with minimal latency in sorting scenarios.
  • Accurate: With a high-frequency control capability exceeding 60 Hz, it converts visual data into smooth, precise motion commands in real time, enabling high-speed dynamic grasping and closing the gap between seeing and doing.
  • Resilient: It demonstrates robust bimanual coordination. If the right arm misses a part, the left arm immediately steps in to complete the grasp, ensuring continuous operation and operational consistency in sorting tasks.
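
The "fast" and "accurate" claims above both describe a fixed-rate perception-to-action loop: every cycle, a perception result is turned into a motion command within a roughly 16.7 ms budget (60 Hz). The loop shape can be sketched as below; all function names and the command format are illustrative inventions, not X-Humanoid's API.

```python
# Hedged sketch of a fixed-rate "see -> act" control loop at 60 Hz.
# A real controller runs indefinitely; this runs a few cycles for clarity.
import time

HZ = 60
PERIOD = 1.0 / HZ                 # ~16.7 ms budget per control cycle

def perceive(step: int):
    """Stand-in for the vision pipeline: returns a target position."""
    return (step * 0.01, 0.0)     # dummy, slowly drifting target

def command_from(target):
    """Stand-in for mapping a perceived target to a motor command."""
    return {"arm": "right", "goto": target}

commands = []
t0 = time.perf_counter()
for step in range(6):
    target = perceive(step)
    commands.append(command_from(target))
    # Sleep off the remainder of the cycle to hold the 60 Hz rate;
    # deadlines are computed from t0 so timing error doesn't accumulate.
    next_deadline = t0 + (step + 1) * PERIOD
    time.sleep(max(0.0, next_deadline - time.perf_counter()))

print(len(commands))              # 6 commands, one per cycle
```

Scheduling each deadline from the loop start (rather than sleeping a fixed interval each pass) is what keeps the command rate steady even when perception time varies cycle to cycle.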

X-Humanoid has deployed its solutions with partners across sectors, focusing on high-risk, high-intensity, and repetitive labor scenarios. For example, Embodied Tien Kung 2.0 and Tian Yi 2.0 are now operating on the unmanned production line at a Foton Cummins engine plant. There, they autonomously handle bin pickup, transport, and placement, adapting to various shelf heights and container types—marking a successful transition from controlled testing environments to live industrial production.

In addition, the company has implemented humanoid robots for high-risk power grid inspections in collaboration with the China Electric Power Research Institute, and partnered with the Li-Ning Sports Science Laboratory to conduct long-duration, high-intensity athletic shoe testing using humanoid robots. Recently, X-Humanoid also entered into an agreement with Bayer to jointly further the development of humanoid robotics and embodied intelligence technologies for applications in solid pharmaceutical manufacturing, packaging, quality control, warehousing, and logistics.

Also on display was the Embodied Tien Kung Ultra, which showcased exceptional stability and mobility during extended running demonstrations. It was the first humanoid robot to complete a half-marathon (21.0975 km) fully autonomously, without remote control, finishing in 2:40:42. It was also the first humanoid robot to run 100 meters autonomously, in 21.50 seconds, winning the first-ever humanoid robot games. These extreme endurance and sprint tests validate the platform’s stability, durability, and autonomous capability, laying a foundation for long-term, stable, and independent operation in real-world environments.

The Embodied Tien Kung 2.0 further demonstrates embodied intelligence’s interactive capabilities, including serving as a host at the World Robot Conference 2025 and deploying one of the earliest fully autonomous guided tour solutions at scale in unmanned exhibition halls. Enhanced by the Wise Kaiwu platform, it recognizes and responds in multiple languages and can coordinate a fleet of robots through an intelligent multi-agent dispatch system. This progress supports future applications in consultation, tour guiding, and other service and operational scenarios.

X-Humanoid’s participation at CES 2026 represents a focused presentation of its core mission—building robots that are truly capable and skilled at real-world tasks. From extreme environment testing to industrial validation, and from key technological breakthroughs to open ecosystem development, X-Humanoid remains dedicated to creating measurable, application-driven value across industries.

As 2026 marks a pivotal year for embodied intelligence moving from demonstration to scaled adoption, the company used the global stage of CES to interpret the Smarter AI for All theme, articulate its philosophy of embodied intelligence empowering all industries to an international audience, and drive the entire industry forward.

For more information, visit x-humanoid.com.

Hyundai Mobis to supply actuators for humanoid robot https://www.engineering.com/hyundai-mobis-to-supply-actuators-for-humanoid-robot/ Thu, 08 Jan 2026 08:53:00 +0000
Deal with Boston Dynamics, announced at CES 2026, marks Hyundai Mobis’ first customer in the robotics components market.

Hyundai Mobis announced that it has formed a strategic collaboration with Boston Dynamics, a global leader in robotics, at CES 2026 in Las Vegas.

The company stated on Jan. 7 that it will supply actuators for Atlas, Boston Dynamics’ next-generation humanoid robot. As part of its new $26 billion U.S. investment, Hyundai Motor Group (HMG) plans to build and deploy tens of thousands of robots over the next few years, and the collaboration with Hyundai Mobis will help accelerate Boston Dynamics and HMG’s mass production plans for Atlas.

Through this agreement, Hyundai Mobis secures its first official customer in the robotics sector, marking a significant milestone as it enters the global robot components market.

Over the past several years, Hyundai Mobis has steadily expanded beyond the automotive industry and reshaped its business portfolio to include high–value-added fields such as robotics. This transformation aligns with the company’s mid- to long-term strategy to proactively respond to the rapidly evolving mobility landscape and establish a sustainable business foundation.

Hyundai Mobis is leveraging its accumulated expertise in automotive component development and large-scale manufacturing to enter the robot actuator market, a field closely aligned with its core capabilities. Actuators, which convert control signals into physical movement, are a critical subsystem for humanoid robots and represent more than 60% of their material cost.

The strategic cooperation between Hyundai Mobis and Boston Dynamics, along with the agreement on actuator supply, is expected to benefit both companies. Boston Dynamics is said to have highly valued Hyundai Mobis’s engineering expertise, reliability-based evaluation systems, and global-standard mass production capabilities.

For Hyundai Mobis, securing Boston Dynamics, a global leader in robotics, as its first customer in the field provides a stable long-term demand partner. The company plans to use this momentum to establish a large-scale actuator manufacturing system and strengthen its design capabilities for high-performance robotics components.

The collaboration is expected to help both companies carve out leadership positions in the emerging robotics components industry. As the robot component market currently lacks a dominant global player, establishing a reliable, cost-competitive large-scale supply system could allow both companies to achieve economies of scale early.

In August of last year, Hyundai Motor Group announced plans to build a robot factory capable of producing 30,000 units annually and develop the site into a robotics production hub for the North American region. With growing demand expected for various components needed for robot manufacturing, Hyundai Mobis is projected to play an increasingly significant role in the robotics ecosystem.

For more information, visit mobis.com.

Bentley expands asset analytics with Talon, Pointivo deals https://www.engineering.com/bentley-expands-asset-analytics-with-talon-pointivo-deals/ Wed, 07 Jan 2026 14:36:26 +0000
Closed in December, the acquisitions add drone data processing and inspection tools for telecom and utility infrastructure owners.

Bentley Systems, an infrastructure engineering software company, announced it has acquired Talon Aerolytics and the technology and technical expertise of Pointivo. The acquisitions, which closed in December, expand Bentley’s Asset Analytics portfolio, which uses digital twins and AI to support owner-operators in monitoring and managing infrastructure assets.

Bentley Asset Analytics includes OpenTower iQ for telecommunication towers and Blyncsy for road networks. The acquisitions expand Bentley’s offerings in telecommunications and electric utilities, supporting integrated digital workflows for 5G deployments and grid modernization. As next-generation networks and electrification increase demand, the added capabilities help infrastructure owners digitize, analyze and optimize assets at scale.

Talon provides solutions for site surveys, inspections and asset digitization for wireless telecom, broadband and electric utilities. Its platform combines workflow automation, digital twins and AI to help organizations manage recurring tasks and inspections and assess asset conditions over time.

Bentley’s acquisition of Pointivo’s technologies, including its intellectual property and technical expertise, adds capabilities to Bentley Asset Analytics in drone data processing, AI-based damage detection and geolocation. Bentley said the additions will support its platform for AI-driven asset insights.

Dechert LLP acted as legal advisor to Bentley in the transactions, for which financial details were not disclosed.

For more information, visit bentley.com.

PTC adds AI tech pack generation to FlexPLM https://www.engineering.com/ptc-adds-ai-tech-pack-generation-to-flexplm/ Wed, 07 Jan 2026 13:51:18 +0000
Debuts at NRF Retail’s Big Show Jan. 11-13 in NYC; demos at Booth 3346 highlight supply chain and sustainability tools.

PTC announced new artificial intelligence (AI) capabilities for its FlexPLM retail product lifecycle management (PLM) solution aimed at automating tech pack creation, a step in retail product development that often requires significant manual effort and can introduce errors. The capabilities will debut at the National Retail Federation’s (NRF) Retail’s Big Show, Jan. 11-13, in New York City.

Converting design sketches into production specifications typically involves multiple handoffs between design and development teams and substantial manual data entry. With FlexPLM’s AI-driven tech pack generation, teams can extract information from design drawings and use it to populate bills of materials (BOMs), measurements, construction details, attributes and colorways.
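
The data flow described above, extracted drawing information populating the sections of a tech pack, amounts to mapping extraction output into a structured document. A minimal illustration, with every field name hypothetical rather than FlexPLM's actual schema:

```python
# Illustrative sketch (not PTC's API): fields pulled from a design
# drawing populate the tech pack sections that would otherwise be
# entered by hand -- BOM, measurements and colorways.
extracted = {  # hypothetical output of the AI extraction step
    "materials": [{"part": "body fabric", "content": "100% cotton"}],
    "measurements": {"chest_cm": 54.0, "length_cm": 71.0},
    "colorways": ["indigo", "charcoal"],
}

tech_pack = {"bom": [], "measurements": {}, "colorways": []}
tech_pack["bom"].extend(extracted["materials"])
tech_pack["measurements"].update(extracted["measurements"])
tech_pack["colorways"].extend(extracted["colorways"])

print(len(tech_pack["bom"]), len(tech_pack["colorways"]))   # 1 2
```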

PTC said the update supports its Intelligent Product Lifecycle approach, which combines structured product data with AI to improve workflow efficiency and decision-making across the product lifecycle.

At NRF, PTC will demonstrate FlexPLM features related to product development, supply chain visibility, and sustainability and regulatory requirements. Attendees can visit Booth #3346 for demonstrations and to speak with PTC retail specialists.

For more information, please visit ptc.com.

3D Systems expands aerospace and defense manufacturing capacity https://www.engineering.com/3d-systems-expands-aerospace-and-defense-manufacturing-capacity/ Wed, 07 Jan 2026 12:56:55 +0000
Littleton facility growth, new metal R&D programs and certification efforts support additive manufacturing for U.S. defense use.

3D Systems reported increased activity in its aerospace and defense business, including planned facility expansion and ongoing technology development. The company cited provisions in the National Defense Authorization Act for fiscal year 2026 that restrict foreign-sourced 3D printing systems for certain Department of Defense programs, which could shift demand toward U.S.-based suppliers.

Key highlights:

  • Revenue outlook: The aerospace and defense business is estimated to have grown more than 15% in 2025 and is forecast to grow more than 20% in 2026. Revenue from production printing systems and custom metal parts is expected to exceed $35 million in 2026.
  • Capacity expansion: 3D Systems plans to add up to 80,000 square feet to its Littleton, Colorado, facility to increase capacity for application development, process qualification, validation and production-scale manufacturing.
  • Qualification and certification: The Littleton facility has been selected for certification under the America Makes JAQS-SQ framework to support qualified additive manufacturing production for defense applications.
  • Metal printing R&D: The company is continuing an $18.5 million U.S. Air Force-sponsored program to develop next-generation laser powder-bed fusion technologies for large-format metal part production, with milestones scheduled through 2027.
  • U.S. manufacturing footprint: 3D Systems said its U.S. operations span system design in San Diego, California, printer manufacturing in Rock Hill, South Carolina, and metal parts production and application development in Littleton, Colorado, with plans extending to 2027.
  • International operations: The company said it supports non-U.S. aerospace and defense customers through operations in Leuven, Belgium, and Riom, France, and through a joint venture in Saudi Arabia that is pursuing localized additive manufacturing for aerospace and defense applications.

The Littleton expansion supports 3D Systems’ application-focused strategy that combines hardware, materials, software and engineering support across four areas:

  • Supply chain resilience: Regional manufacturing is intended to reduce lead times and supply risk for time-sensitive programs. The company cited work with Huntington Ingalls Industries on copper-nickel (CuNi30) alloy solutions for naval components to shorten production timelines.
  • New application development: Through its Application Innovation Group, 3D Systems works with customers on lightweight and consolidated part designs. The expanded Littleton Center is expected to support qualification and scale-up through engineering collaboration, pilot production and technology transfer.
  • Printing solutions: The company’s low-oxygen direct metal printing process is intended to support consistent output for aerospace and defense applications. In collaboration with NIAR and the America Makes Joint Metal Additive Database Definition effort, 3D Systems is working to develop material allowables on the DMP 350 platform for additional end uses, including flight applications.
  • Propulsion and casting applications: QuickCast Air and additive casting workflows are used to produce complex geometries and iterate designs for aviation, space and energy applications. The company is also participating in the Penn State-led IMPACT 3.0 program focused on integrating additive manufacturing into casting and forging workflows.

For more information, visit 3dsystems.com.

]]>
Siemens and NVIDIA expand partnership on industrial AI https://www.engineering.com/siemens-and-nvidia-expand-partnership-on-industrial-ai/ Wed, 07 Jan 2026 11:09:00 +0000 https://www.engineering.com/?p=145470 Companies plan AI-accelerated manufacturing, digital twins and EDA advances, with pilot factories and GPU-based simulation starting in 2026.

The post Siemens and NVIDIA expand partnership on industrial AI appeared first on Engineering.com.

]]>
Siemens and NVIDIA announced an expansion of their strategic partnership to bring artificial intelligence into the real world. Together, the companies aim to develop industrial and physical AI solutions that bring AI-driven innovation to every industry and industrial workflow, as well as accelerate each other’s operations.

To support development, NVIDIA will provide AI infrastructure, simulation libraries, models, frameworks and blueprints, while Siemens will commit hundreds of industrial AI experts and leading hardware and software.

Accelerating the entire industrial lifecycle

Siemens and NVIDIA will work together to build AI-accelerated industrial solutions across the full lifecycle of products and production, enabling faster innovation, continuous optimization and more resilient, sustainable manufacturing. The companies aim to build the world’s first fully AI-driven, adaptive manufacturing sites globally, starting in 2026 with the Siemens Electronics Factory in Erlangen, Germany, as the first blueprint.

Using an “AI Brain” powered by software-defined automation and industrial operations software, combined with NVIDIA Omniverse libraries and NVIDIA AI infrastructure, factories can continuously analyze their digital twins, test improvements virtually and turn validated insights into operational changes on the shop floor.

This results in faster, more reliable decision-making from design to deployment, raising productivity while reducing commissioning time and risk. The companies aim to scale these capabilities across key verticals, and several customers, including Foxconn, HD Hyundai, KION Group and PepsiCo, are already evaluating some of the capabilities.
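The analyze-test-apply loop described above can be sketched with a toy digital twin standing in for the real factory model (everything here is hypothetical and vastly simplified; the actual Siemens/NVIDIA stack is not a single function):

```python
import random

def digital_twin_throughput(speed):
    """Toy digital twin: simulated throughput peaks at a line speed of 5.0."""
    return -(speed - 5.0) ** 2 + 100.0

def optimize_on_twin(current_speed, trials=50, seed=0):
    """Test candidate changes virtually; keep only validated improvements."""
    rng = random.Random(seed)
    best_speed = current_speed
    best_score = digital_twin_throughput(current_speed)
    for _ in range(trials):
        candidate = current_speed + rng.uniform(-2.0, 2.0)
        score = digital_twin_throughput(candidate)
        if score > best_score:  # the twin validates this change
            best_speed, best_score = candidate, score
    return best_speed

# Only the twin-validated setting becomes an operational change
# on the shop floor; unvalidated candidates never leave simulation.
new_speed = optimize_on_twin(current_speed=3.0)
```

The design choice the sketch illustrates is that every candidate change is scored against the digital twin first, so the physical line only ever receives settings the simulation has already shown to improve on the current state.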

With the partnership expansion, Siemens will complete GPU acceleration across its entire simulation portfolio and expand support for NVIDIA CUDA-X libraries and AI physics models, enabling customers to run larger, more accurate simulations faster. Building on that foundation, the companies will advance toward generative simulation by using NVIDIA PhysicsNeMo and open models to provide autonomous digital twins that deliver real-time engineering design and autonomous optimization. 

Advancing electronic design automation for accelerated computing

By applying industrial AI operating logic to semiconductors and AI factories, Siemens and NVIDIA will accelerate the engines of the AI revolution. Starting with semiconductor design and building on NVIDIA’s extensive use of Siemens’ tools, Siemens will integrate NVIDIA CUDA-X libraries, PhysicsNeMo and GPU acceleration across its EDA portfolio, focusing on verification, layout and process optimization and targeting 2-10x speedups in key workflows.

The partnership will also add AI-assisted capabilities such as layout guidance, debug support and circuit optimization to boost engineering productivity while meeting strict manufacturability requirements. Together, these capabilities will advance AI-native engines for design, verification, manufacturability and digital-twin approaches to shorten design cycles, improve yield and deliver more reliable outcomes.

Designing the next generation of AI factories

Siemens and NVIDIA will also jointly develop a repeatable blueprint for next-generation AI factories — accelerating the industrial AI revolution and providing the high-performance foundation for their AI-accelerated industrial portfolios.

This blueprint will balance the next-generation high-density computing demands for power, cooling and automation while ensuring technologies are well positioned for both speed and efficiency — optimizing the full lifecycle, from planning and design to deployment and operations.

The combined effort bridges NVIDIA’s AI platform roadmap, AI infrastructure expertise, partner ecosystem and the accelerated power of NVIDIA Omniverse library-based simulation with Siemens’ strengths in power infrastructure, electrification, grid integration, automation and digital twins. Together, the companies aim to accelerate deployment, increase energy efficiency and improve resilience for industrial-scale AI infrastructure worldwide.

Optimizing operations through shared innovation

Siemens and NVIDIA aim to accelerate each other’s operations and portfolios by implementing technologies on their own systems before scaling them across industries. NVIDIA will assess Siemens offerings to streamline and optimize its own operations, and Siemens will assess its own workloads, collaborate with NVIDIA to accelerate them and integrate AI into Siemens’ customer portfolio. By accelerating one another and improving their own systems, Siemens and NVIDIA are creating concrete proof points of value and scalability for customers.

For more information, visit nvidia.com.

]]>
NVIDIA announces Alpamayo family of open-source AI models and tools https://www.engineering.com/nvidia-announces-alpamayo-family-of-open-source-ai-models-and-tools/ Wed, 07 Jan 2026 10:36:00 +0000 https://www.engineering.com/?p=145465 With Alpamayo, mobility leaders such as JLR, Lucid and Uber, along with the AV research community can fast-track safe, reasoning‑based level 4 deployment roadmaps.

The post NVIDIA announces Alpamayo family of open-source AI models and tools appeared first on Engineering.com.

]]>
NVIDIA unveiled the NVIDIA Alpamayo family of open AI models, simulation tools and datasets designed to accelerate the next era of safe, reasoning‑based autonomous vehicle (AV) development.

AVs must operate safely across an enormous range of driving conditions. Rare, complex scenarios, often called the “long tail,” remain some of the toughest for autonomous systems to master. Traditional AV architectures separate perception and planning, which can limit scalability when new or unusual situations arise. End-to-end learning has made significant progress, but overcoming these long-tail edge cases requires models that can reason safely about cause and effect, especially when situations fall outside a model’s training experience.

The Alpamayo family introduces chain-of-thought, reasoning-based vision language action (VLA) models that bring humanlike thinking to AV decision-making. These systems can think through novel or rare scenarios step by step, improving driving capability and explainability — which is critical to scaling trust and safety in intelligent vehicles — and are underpinned by the NVIDIA Halos safety system.
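Conceptually, such a model's output pairs a planned trajectory with the explanation behind it. A toy record type makes the idea concrete (the field names and values are hypothetical, not NVIDIA's actual Alpamayo output schema):

```python
from dataclasses import dataclass

@dataclass
class VlaDecision:
    """One reasoning-based driving decision: what to do, and why."""
    trajectory: list       # planned (x, y) waypoints in metres, ego frame
    reasoning_trace: list  # ordered chain-of-thought steps behind the plan

decision = VlaDecision(
    trajectory=[(0.0, 0.0), (0.0, 4.8), (0.3, 9.5)],
    reasoning_trace=[
        "Cyclist detected ahead in the right lane.",
        "Oncoming lane clear for the next 60 m.",
        "Shift left within lane and hold reduced speed while passing.",
    ],
)

# Explainability: every waypoint plan ships with the logic behind it,
# so a downstream evaluator (or a human) can audit the decision.
assert len(decision.reasoning_trace) > 0
```

This pairing is what makes such outputs usable beyond driving itself, for example as training signal for the reasoning-based evaluators and auto-labeling systems mentioned below.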

A complete, open ecosystem for reasoning‑based autonomy

Alpamayo integrates three foundational pillars — open models, simulation frameworks and datasets — into a cohesive, open ecosystem that any automotive developer or research team can build upon.

Rather than running directly in-vehicle, Alpamayo models serve as large-scale teacher models that developers can fine-tune and distill into the backbones of their complete AV stacks.

At CES, NVIDIA is releasing:

  • Alpamayo 1: The industry’s first chain-of-thought reasoning VLA model designed for the AV research community, now on Hugging Face. With a 10-billion-parameter architecture, Alpamayo 1 uses video input to generate trajectories alongside reasoning traces, showing the logic behind each decision. Developers can adapt Alpamayo 1 into smaller runtime models for vehicle development, or use it as a foundation for AV development tools such as reasoning-based evaluators and auto-labeling systems. Alpamayo 1 provides open model weights and open-source inferencing scripts. Future models in the family will feature larger parameter counts, more detailed reasoning capabilities, more input and output flexibility, and options for commercial usage.
  • AlpaSim: A fully open‑source, end-to-end simulation framework for high‑fidelity AV development, available on GitHub. It provides realistic sensor modeling, configurable traffic dynamics and scalable closed‑loop testing environments, enabling rapid validation and policy refinement.
  • Physical AI Open Datasets: NVIDIA describes these as the most diverse large-scale open datasets for AVs, containing 1,700+ hours of driving data collected across a wide range of geographies and conditions and covering the rare, complex real-world edge cases essential for advancing reasoning architectures. The datasets are available on Hugging Face.

Together, these tools enable a self-reinforcing development loop for reasoning-based AV stacks.

Broad AV industry supports Alpamayo

Mobility leaders and industry experts, including Lucid, JLR, Uber and Berkeley DeepDrive, are showing interest in Alpamayo to develop reasoning-based AV stacks that will enable level 4 autonomy.

Beyond Alpamayo, developers can tap into NVIDIA’s rich library of tools and models, including from the NVIDIA Cosmos and NVIDIA Omniverse platforms. Developers can fine-tune model releases on proprietary fleet data, integrate them into the NVIDIA DRIVE Hyperion architecture built with NVIDIA DRIVE AGX Thor accelerated compute, and validate performance in simulation before commercial deployment.

For more information, visit nvidia.com.

]]>