Engineer’s Toolbox: Spatial Computing for Engineers https://www.engineering.com/resources/engineers-toolbox-spatial-computing-for-engineers/ Wed, 18 Jun 2025 17:00:37 +0000

The post Engineer’s Toolbox: Spatial Computing for Engineers appeared first on Engineering.com.

The era of spatial computing is underway. Augmented, virtual, and mixed reality are increasingly being used for design, collaboration, simulation, factory layout, training, maintenance and other enterprise applications. This Engineering Toolbox explains the fundamentals of spatial computing—how it works, the different forms it can take, its hardware requirements, and how engineers are using the technology.

Download the PDF by filling out the form.

Your download is sponsored by Hawk Ridge Systems.

3DLive on the Apple Vision Pro: Q&A with Tom Acland https://www.engineering.com/3dlive-on-the-apple-vision-pro-qa-with-tom-acland/ Thu, 06 Mar 2025 18:52:44 +0000 https://www.engineering.com/?p=137401 3DExcite’s CEO explains how Dassault Systèmes’ visionOS app works and why it’s a crucial part of the next-generation 3DExperience platform.

Dassault Systèmes recently announced 3DLive, an upcoming app for the Apple Vision Pro headset that will bring spatial computing to users of the 3DExperience platform.

Scheduled for release this summer, 3DLive is part of Dassault’s next-generation concept of “3D UNIV+RSES”, a strategy that leans heavily on the merging of physical and virtual reality.

To learn more about 3DLive, Engineering.com sat down with Tom Acland, CEO of 3DExcite at Dassault Systèmes. He explained how the visionOS app works, why Dassault chose to collaborate with Apple, and how 3DLive fits into the 3D UNIV+RSES strategy.

Tom Acland, CEO of 3DExcite. (Image: Tom Acland via LinkedIn.)

The following transcript has been edited for brevity and clarity.

Engineering.com: What’s 3DLive all about?

Tom Acland: The release that we’re making in the summer of this year really consists, from a product perspective, of two components. There’s the 3DLive app, which is going to be available on the Apple Vision Pro. It’s the way that people access the information which is published from the 3DExperience platform.

The other half is the ability to create use-case-focused scenarios to help people in business collaborate with each other. And that tool chain is resident on the 3DExperience platform. So using the components which are on the 3DExperience platform, you can aggregate different pieces of the virtual twin which are relevant in the context of a particular use case.

Is that a new app within 3DExperience?

We’re leveraging technology which was already there on the 3DExperience platform, but we’ve been able to extend it to make the experiences that you publish spatially accessible.

Specifically, there’s an app called Creative Experience which is part of the Experience Creator role. And that app has been available for many years already. It’s typically used by engineering teams who need to explain the value of what it is that they’re doing. It’s also available in the 3DExperience Works portfolio as Product Communicator.

[Related: How to use 3DExcite’s Creative Experience]

So Solidworks users will be able to use this tool?

Yeah, and they already use it today.

For what?

You can craft experiences for use in a 2D context. You can also generate portable content from the experience that you’ve created. So for example, if you need to create specific content which you’re going to use on your website, or animations, videos, those things, those can also be generated from the same application and using the same tool chain.

Could you tell me more about 3DExcite?

3DExcite is one of the Dassault Systèmes brands. It helps manufacturers take their products to market. So we help our manufacturing clients express the value of the inventions that they’re coming up with on the 3DExperience platform.

Obviously a big part of that is the storytelling. So how does this particular innovation help the people that it’s designed to serve? In a Solidworks world, for example, where you have people making machine tools, you have a similar challenge. How do I show my customer what it is that I’m developing?

So is 3DLive a marketing tool?

Well, if you think about traditional marketing, that’s often tied up with advertising. But as these products become more sophisticated, for example, more software defined, the way that you show the value of the product to a customer is not just through advertising. You have to be able to illustrate and explain new features.

For example, you’ve just released a product over-the-air. You might need some content which appears in the app which goes with the product, so that users can understand this new feature. So you can create advertising content, but you can also create content which is useful for end users, and that’s really the key.

What we’re seeing because of software definition and the speed of change is that it’s increasingly important that you define what the value is for the customer as early as possible. So you could look at this as a way of capturing requirements from a customer-centric perspective. So you’re not just writing things down, you’re modeling what the outcome of that experience is going to be so that you can show it to someone: “Is this what you want?” You can engineer it and then make sure that your engineering matches what you’re aiming for.

So being customer-centric is not just about communication outwards, it’s about communication inwards to everyone who’s building that product, so that everyone understands what it is we’re trying to make.

Dassault Systèmes’ promo video for 3DLive.

How closely did you work with Apple to develop the new app?

The idea goes well back before the collaboration with Apple. But what is special about the technology that Apple has developed for spatial computing is that you have a very powerful set of capabilities on the Apple Vision Pro, in terms of processing, in terms of sensors, in terms of the OS, which allows you to deliver those experiences in a very true-to-life fashion. And they’re easy to use.

The collaboration with Apple goes back over a year. They were actually at 3DExperience World last year. They came to visit us. We’d already started conversations. And it’s been a journey that’s been going on for over a year to work out exactly how 3DExperience can interact with and work with the Apple Vision Pro.

I think people sometimes talk about these things generically as a headset, right? But we see the Apple Vision Pro as not just another headset. It’s a different type of capability, which is a function of the hardware, but also the software which is powering those kinds of experiences. So we don’t really see this as a case of just swapping out one headset for another. The VR thing’s been done before, but this is a next-generation capability for putting people inside the model.

How so? How does this Vision Pro app compare to VR experiences on other headsets?

There are a whole lot of specifics about the Apple Vision Pro capabilities which I’m not going to go into myself, but I’ll tell you about the benefits in terms of what the difference is. If we’re talking about the use cases which are typically addressed in VR today in conjunction with the 3DExperience platform, you’re often talking about design type situations where you’re looking at the exterior shell, the physical design of the product. And that’s typically a function of configuration, materials and geometry.

[Related: Should engineers buy the Apple Vision Pro?]

What we’re doing with the Apple Vision Pro is radically different, because you’re actually looking at all of the facets of the interaction with that thing, including kinematics, including systems information, and putting that in the context of an end user benefit. So it’s a much richer experience that you can create, and you can really get a sense of how the thing that you’re building is going to help the people it’s designed to serve. It’s not just a tool for designers. It’s a tool for everybody who needs to understand the benefit of a particular process or a particular product itself.

So this isn’t an existing capability being ported to a new headset?

No, it’s an entirely new thing. And it’s just the start. The whole idea that we’re trying to address in working with the Apple Vision Pro on the 3DExperience platform is a pillar of gen seven.

[Gen seven refers to 3D UNIV+RSES, “the seventh generation of representation of the world introduced by Dassault Systèmes”.]

So it’s a strategic aspect to the next-generation of the 3DExperience platform, which is designed to help people design better products to deliver more value to their customers, but also help customers understand what it is that they’re getting. If you’re selling a robot, for example, the customer may not understand how the robot’s made, but they want to understand how the robot’s going to fit their specific use case. So it’s as much to help the customers understand the value of the product that’s being engineered as it is a tool for the engineer to make a better product.

How will users access the 3DLive app, and what will it cost?

In an enterprise context, if you’re deploying Apple technology, you typically have an enterprise app store. Your devices themselves are often managed through device management, so you have a very similar experience to what you would have as a consumer, but the applications available to you as a user of an enterprise are curated by your IT department. And that’s using standard Apple technology for making iPhones, iPads, etc. part of the enterprise ecosystem.

So the app is going to be available to people by those means, on the enterprise app store for companies who’ve deployed this process. And there is no additional charge expected for having that app available in that way. Sign in through your 3DExperience ID and it’s up and running.

How you then discover those experiences, how they’re organized, is part of the value of the process. It’s not just the experience itself, it’s how you access it in context, so that people who are part of that work group can look at the things that they need to see together.

Do you plan to bring this technology to other XR headsets akin to the Apple Vision Pro, like Samsung’s Project Moohan?

The idea of spatial computing—or sense computing, as we call it, because we think it could become broader in the next 20 years—is still an emerging field. So there may be other technologies by Apple or by other people which are relevant. And of course we want to embrace the best of the market to be able to execute on the Dassault Systèmes vision for sense computing.

That said, there’s something unique about the level of integration in the Apple stack. This is my personal view. If you are able to combine that very, very sophisticated hardware with the OS, with the experiences that are deployed to that device, you can achieve completely different things than when you have, let’s say, an ecosystem where the OS is separate from the device.

The ability to create that sense of stability, where everything is locked in place, is what you need if you want to, say, walk up to a machine and press a button and the virtual system responds in the right way. That’s very, very hard to achieve if you have dozens of different devices all nominally conforming to a spec. So we see that the technology that Apple has brought to market is at the moment leading not just because of the hardware that’s inside, but because of the approach. It’s because of the fact that you’ve got that close integration between the software and the hardware on the device. It allows you to do completely different things. And we don’t really see too many other companies at the moment with that level of capability.

So we’ll see what happens with the space. It’s likely to evolve, and there’ll be new types of devices, but obviously we want to work with the ones that actually achieve the objectives of Dassault Systèmes and 3DExperience.

You gave the example of walking up to a machine and pressing a button. Is that a capability of this app?

Yes. In one of the demos there’s a training scenario that’s an example of how a maintenance engineer who’s designing maintenance procedures would create a little boot camp for an operator to run through that procedure virtually. And you can imagine that if you’re trying to get a new line stood up, or you’re trying to turn over a line, or even an entire factory, there’s going to be hundreds of those specific use cases. And in that environment, it’s very important that you have a sense of being in the place and things behave the way that they’re going to behave.

So yeah, if there is an actuator in the context of a particular instruction that you’ve got to work through, that will be active, and you’ll be able to interact with it like in the real world. Likewise, if you have a screen in there that’s going to show you your work instruction, for example, it’ll have the actual work instruction that you’re going to encounter in the real world. So you’re really trying to give people a sense of proximity to the real world so that they really understand what it is they’re going to do.

If I had something like a TV stand in the app, could I go up to it and move it up and down?

Yes, absolutely. Kinematics is one of the things that makes a big difference between traditional digital content creation and the approach we’re taking here. Because the way that you make the experience is derived straight from the CAD, it has all of the kinematics and so on available to it, to make sure that the way those things are represented is true to the engineering.

And it’s quite possible that there’s a bit of back and forth between the engineering team and the customer. Things change. You don’t want to have to go back to the start again, export all the CAD again, go through that loop, which typically takes a long time for every single engineering change. You want to be able to just update that specific aspect, like the kinematics, and then it’ll be available to you within minutes to be able to show that update to the customer.

If I’m in the headset and my colleague next to me updates the model, will that change propagate to 3DLive?

One of the other aspects of gen seven is the virtual companion. Virtual companion is about giving people superpowers through the use of generative AI or AI in general, but also about being able to automate processes that previously were done manually.

So the objective is to do exactly as you described. That those processes which are already repeatable and manageable can also be automated, so that you can essentially run those processes in the cloud fully automatically.

I can’t tell you that’s all going to be there in the summer, but that’s exactly the intent. Once you’ve created those scenarios and you’ve created the relationships between those scenarios and the CAD, you don’t need to have to come in every time and run it again manually.

What about collaboration? Could both of us be in a headset and work on the same thing at the same time?

That will be available at the release in the summer. There are still a few kinks being worked out there, but that is absolutely the idea. It’s one of the things that we see as being most in demand in those immersive environments, the ability to be colocated in a virtual space with somebody else.

You use the term sense computing. How do you see different senses being incorporated into spatial computing?

We don’t know 100% yet. I think touch haptics is probably next, in terms of being able to get the idea of surface texture. Smell, I’m not so sure. That’d be kind of cool, but we’ll see how long it takes us to get there.

What else excites you about 3DLive?

I think it’s the direction of where spatial computing is going and why it’s important to see spatial computing as a function of virtual twins.

At the moment we’re talking mostly about creating virtual representations of something which is going to arrive in the future. But you can reverse the polarity of that. In the future, you’ll be able to superimpose on the real world things which are coming from the virtual, so you’ll be able to actually explain to people how devices or how products are composed, how they work, by reverse engineering the real world and getting back to where the information came from.

I think that is a super exciting outlook, because you’re not just talking about going from virtual to real. You’re talking about going from real to virtual as well. And to do that you need to be able to create continuity from the virtual to the real. The devices are able to recognize things precisely because they’ve been trained on the information which is inherent in the virtual twin.

In order to be able to do that recognition, you need to be able to have a well-defined model, which will allow these spatial computing devices generally to identify objects and then associate them with information that isn’t necessarily immediately visible.

So you’re going to see the virtual and the physical worlds kind of blend together, not just in terms of engineering and design, but in terms of use, and maybe in terms of circularity. Like, what else could that thing be if I were to deconstruct it? What elements of that could I take out? How could I recycle and how could I use them some other way? That’s part of what our purpose is, to make sure that there’s more value out of less resources that get consumed.

How does 3DLive fit into the concept of 3D UNIV+RSES?

I think the underpinning construct is the idea of moving from data up to representation of knowledge. CAD or IoT information, for example, unless it’s contextualized in a scenario which is meaningful to somebody, remains a little bit abstract, which makes it difficult to leverage. What does it mean semantically? Not just as a number, but what does it mean? And also, how is that knowledge used by people to create something new? And that’s the know-how element that occurs when people work together around a set of known concepts.

So you’re not modeling just the product. You’re talking about how the product interacts with other products and people in the context of its use. What happens to it in the real world? What can we learn from its actual interactions with the real world to make the design better? And that means we have to model the context to a certain extent at the same level of fidelity as we would have typically modeled the product in the past. And that’s quite an exciting new era, because we’re going to be modeling factories, we’re going to be modeling hospitals, we’re going to be modeling any place where these products add value to people’s lives, not just the products themselves. And I think that’s a sort of a step change in how we think about designing things for the real world.

So there’s a lot going into gen seven, which is about elevating what’s been done so far on the 3DExperience platform into the era of AI by adding meaning to data through experiences like we’ve been talking about with sense computing. And I think this is going to be quite an exciting journey as these things evolve all around us.

6 Best Practices When Developing XR for Industrial Applications https://www.engineering.com/resources/6-best-practices-when-developing-xr-for-industrial-applications/ Mon, 21 Oct 2024 13:44:35 +0000 https://www.engineering.com/?post_type=resources&p=133025 Through Industry 4.0 and the industrial internet of things (IIoT), developers have brought industry into the digital realm. Industry experts can learn, control and share anything about a process with a few clicks. But these experts are still limited by their physical connections.


Developers, however, can start to blend the physical and digital realms via technologies like virtual reality (VR), augmented reality (AR) and mixed reality (MR) — collectively referred to as extended reality (XR). But this dream is still in its infancy. As a result, developers need guidelines to ensure they are going down the correct path when creating XR experiences.

In this 7-page ebook, developers will learn:

  • How XR is bound to change industry.
  • Which challenges exist when making XR experiences for industry.
  • Six best practices to keep the development of industrial XR experiences on track.
  • How Unity can help make industrial XR experiences a reality.

To download your free ebook, fill out the form on this page. Your download is sponsored by Unity Technologies.

Spatial computing extends reality in design, AEC and manufacturing https://www.engineering.com/spatial-computing-extends-reality-in-design-aec-and-manufacturing/ Mon, 30 Sep 2024 18:23:00 +0000 https://www.engineering.com/?p=132260 VR, AR and more offer an enhancement to engineers in industry, rather than a replacement of their expertise.

Lenovo has sponsored this post.

Hyundai and Kia are two automakers using VR for virtual design reviews. (Image: Hyundai.)

The latest trend in the computing world is “spatial computing” – popularly understood as the extension of computer interactions into the third dimension, and encompassing technologies including virtual reality (VR), augmented reality (AR), mixed reality (MR), extended reality (XR), the metaverse and more.

The core concept of spatial computing is that the user’s physical movements control the software and the models or environments displayed within it. Virtual reality headsets are one example of hardware for this purpose. With these, users can interact with the virtual environment by physically moving within a room, or via hand gestures or controllers. But AR, MR, and XR experiences can also run on smaller devices such as smart glasses and smartphones, overlaying virtual information on images or video of the physical world.

But spatial computing is far more than a gimmick, and advances in the hardware technology have enabled applications in industry and enterprise to rise to the forefront.

Spatial computing applications in AEC and manufacturing

Under the spatial computing banner, XR technologies are seeing new applications in the architecture, engineering and construction (AEC) and manufacturing industries to improve designs, increase efficiency and reduce costs. 

In the AEC industry, spatial computing and XR mean being able to visualize designs at scale and immersively explore designed environments. Virtual prototypes of AEC projects mean designers can view and update elements in real time, or walk clients and decision-makers through the virtual model of a building project.

On the automotive side, virtual models, design reviews, and digital twins are on the rise, and spatial computing applications bring a whole new level to the design and production process. Using XR and spatial computing, users can collaborate on design reviews both in person and remotely, and manipulate design elements within the virtual environment. Assets derived from CAD models can be easily changed or reused, and finished designs can be easily shared with sales and marketing teams.

Digital twins of factories and processes can also be built and explored virtually, enabling optimized lines and floor plans, XR-augmented plant tours and enhanced training for plant operators. 

While there are many consumer-focused VR headsets available, engineering and industrial applications require workstation-grade hardware. Some headsets, such as Lenovo’s ThinkReality VRX, combine convenient mobile processing with high-performance workstation hardware to provide the highest processing and graphics performance. The bigger the models you intend to create, the more processing power you’ll need from the GPU. According to Mike Leach, senior manager for Lenovo workstations, an Nvidia RTX A3000 mobile GPU or better is recommended, though all GPUs from the RTX A1000 and up are considered to be “VR Ready” by Nvidia.

And while GPU power might be the key consideration, it’s not the only one. Enterprise users should assess their needs to find a balance between performance, cost, physical comfort and graphics resolution.

Future-forward engineering

Spatial computing offers an enhancement to engineers in industry, rather than a replacement of their expertise. Leach pictures spatial computing and headset hardware as one more tool in the engineer’s toolbox to streamline workflows and improve the design and development process.

“You can jump into spatial computing to do design collaborations with colleagues at a moment’s notice,” Leach says. “We see that for the engineer of the future.”

For an in-depth look at spatial computing hardware, check out Lenovo’s white paper The Workstations Behind Spatial Computing.

How hardware fits into your AI journey https://www.engineering.com/how-hardware-fits-into-your-ai-journey/ Mon, 30 Sep 2024 14:26:31 +0000 https://www.engineering.com/?p=132255 Engineers have always expected high performance from their workstations, and that’s good news for AI adoption.

Lenovo has sponsored this post.

(Image: Technology Innovation Institute.)

Artificial intelligence is growing rapidly within the engineering space, and AI functionality is quickly becoming ubiquitous in applications from design to manufacturing and more. The utility and potential of AI means that companies of all sizes need to invest in this technology in order to stay ahead of the curve. Since AI relies heavily on computing power, investment in computing hardware is key. Luckily, the familiar engineering workstations used for CAD, BIM and CAE are evolving to accommodate the computing needs of AI.

Engineering software for CAD, CAE and more is already seeing AI enhancements. In CAD, machine learning can predict designers’ needs and suggest tools or features to use, while generative design tools help streamline the design iteration process. With CAE and simulation, AI models can augment—or one day replace—traditional solvers, accelerating solve times as well as tedious tasks such as data preparation and meshing.

Large language model (LLM) chatbots are also appearing in engineering software of all stripes, offering assistance to users with technical questions or acting as “AI copilots” for tasks like writing G-code or designing schematics. 

Some engineering companies are developing their own AI tools and workflows that leverage their proprietary data to provide results tailored to their needs. But many companies don’t have the budget or hardware to develop their own machine learning models, which can involve billions of parameters and intensive processing and compute power. 

“You should never train a model just once,” says Mike Leach, senior manager for Lenovo workstations. “You need to constantly fine-tune or train to make sure it’s accurate, that it’s up to date and learns as it goes.”

The Lenovo ThinkStation P7, pictured here, is “the world’s fastest and most powerful workstation for AI workloads,” according to Leach. (Image: Lenovo.)

The solution might be a pre-trained model, such as the Llama 3 LLM from Meta or the Falcon open source LLM from the Technology Innovation Institute, which can then be customized.

All of these AI applications need powerful hardware. The GPU is the key, with today’s top-of-the-line GPUs from providers such as NVIDIA and AMD offering dedicated AI processing cores as well as compute power for 3D modeling and rendering. Many engineering workstations can combine multiple GPUs for increased compute power, and offer advantages over cloud resources because desktop hardware can be configured with the latest generation technologies and the fastest processor clock speeds.  

While workstations are critical themselves, part of their core value comes from being part of a larger hardware ecosystem. Leach points to the hybrid AI concept, where workstation hardware, on-premises servers and cloud infrastructure work together to deliver enterprise AI solutions. Lenovo offers a wide portfolio of ThinkSystem servers and ThinkStation workstations that can be deployed as part of an organization’s hybrid AI ecosystem. These AI-optimized platforms are certified for NVIDIA’s AI Enterprise software and will enable organizations to develop custom LLMs and GenAI applications, and to deploy production AI across their systems.

Though industry is still learning how AI technology will impact the engineering space, it is clear that AI is here to stay and grow. AI is a computing revolution, and companies that begin to invest in AI-capable hardware now will stand the best chance of success—and with workstations more powerful and versatile than ever before, there are plenty of options to suit every business’s needs.

“AI is a journey, not a destination,” Leach says.

For an in-depth look at hardware for AI in engineering, check out Lenovo’s white paper Workstations for AI in the Modern Engineering Workflow.

Intel’s photonics breakthrough, Asus’s new ultracompact AI PC and more computing news https://www.engineering.com/intels-photonics-breakthrough-asuss-new-ultracompact-ai-pc-and-more-computing-news/ Wed, 10 Jul 2024 18:51:47 +0000 https://www.engineering.com/?p=52224 Engineering.com’s roundup of recent computing news.

Intel achieves photonics milestone

Intel says it’s reached a “revolutionary milestone” in photonics by demonstrating what it describes as the first-ever fully integrated optical I/O chiplet. Unveiled at the Optical Fiber Communication Conference (OFC) 2024, the optical compute interconnect (OCI) chiplet was co-packaged with an Intel CPU and ran live data, according to Intel. The chipmaker said in its press release that it expects the OCI chiplet to “revolutionize high-speed data processing for AI infrastructure.”

Intel’s recently demonstrated optical compute interconnect chiplet. (Image: Intel.)

Asus NUC 14 Pro+ supports local generative AI

Asus announced a new ultracompact PC, the NUC 14 Pro+, that it says is the first of its NUC lineup to offer Intel Core Ultra 9 CPUs. These processors include a neural processing unit (NPU) that allows the NUC 14 Pro+ to run local generative AI workloads. Housed in a 5-inch by 4-inch anodized aluminum chassis, the NUC 14 Pro+ can support up to four 4K displays through a combination of HDMI, DisplayPort and Thunderbolt outputs.

The Asus NUC 14 Pro+. (Image: Asus.)

Lenovo appoints chief security and AI officer

Lenovo has expanded the role of its chief security officer, Doug Fisher, to encompass artificial intelligence (AI). In his new role as chief security and AI officer, Fisher will lead AI governance and “champion” Lenovo’s AI policy alongside the company’s Responsible AI Committee.

In a press release announcing Fisher’s expanded role, Lenovo listed the three pillars of its AI policy as not using AI “in ways that harm people or put them or their rights at risk,” ensuring that its AI solutions are “fair, transparent, explainable, and efficient,” and committing to “protect people’s privacy at all stages of the AI Lifecycle.”

HP hires Karen Parkhill as CFO

HP Inc. announced that it’s hired a new chief financial officer, Karen Parkhill. Parkhill most recently served as CFO of healthcare technology company Medtronic and will join HP on August 5, 2024. She will replace interim CFO Tim Brown, who will return to his role as head of Print Finance, according to HP.

“HP’s transformation over the past eight years has been extraordinary to watch and I look forward to working with a stellar team of professionals to advance the shared goal of creating long-term sustainable growth,” Parkhill said in HP’s press release.

Logitech’s new Quest stylus, Nvidia and HPE partner on AI, and more computing news https://www.engineering.com/logitechs-new-quest-stylus-nvidia-and-hpe-partner-on-ai-and-more-computing-news/ Wed, 19 Jun 2024 18:25:09 +0000 https://www.engineering.com/?p=51893 Engineering.com's roundup of recent computing news.

The post Logitech’s new Quest stylus, Nvidia and HPE partner on AI, and more computing news appeared first on Engineering.com.

Logitech’s Quest stylus

Logitech revealed that it’s developing a new mixed reality input device. The MX Ink is a stylus made for the Meta Quest 3 that allows users to draw, annotate and interact with virtual objects. Logitech says the stylus offers haptic feedback, pressure sensitivity, low latency and a long battery life, and can be paired alongside Meta Quest controllers. The MX Ink and accessories, including a charging dock and drawing mat, will be available in September 2024.

A Meta Quest 3 user with the Logitech MX Ink stylus. (Image: Logitech.)

Nvidia AI Computing by HPE

Hewlett Packard Enterprise (HPE) and Nvidia announced a new portfolio of co-developed AI solutions called Nvidia AI Computing by HPE. HPE Private Cloud AI is one new solution that the companies say will give enterprises of all sizes a way to develop and deploy generative AI applications. It’s expected to be available this fall.

“To unleash the immense potential of generative AI in the enterprise, HPE and NVIDIA co-developed a turnkey private cloud for AI that will enable enterprises to focus their resources on developing new AI use cases that can boost productivity and unlock new revenue streams,” said HPE president and CEO Antonio Neri during his keynote at HPE Discover in Las Vegas, Nevada.

Campfire coming to Apple Vision Pro

Campfire, a developer of enterprise augmented reality (AR) software for CAD collaboration, announced its plans to support the Apple Vision Pro. Campfire for Vision Pro will be available in Apple’s App Store this fall, according to the company, which says it has grown its customer base more than fiftyfold since it began supporting Meta’s Quest 3 headset in November last year.

Apple debuts Math Notes calculator

Apple announced updates to many of its products during its annual Worldwide Developers Conference (WWDC) last week. For engineers, the most interesting may be the new calculator app coming to the iPad in iPadOS 18. Called the Math Notes calculator, the app will solve equations that have been handwritten with the Apple Pencil. It can also accept variables and plot equations, according to Apple. iPadOS 18 will be available this fall.

The Math Notes calculator app is coming to iPadOS 18 this fall. (Image: Apple.)

Supermicro’s plug-and-play AI data center

Supermicro, Inc. unveiled a new liquid-cooled data center solution in its “AI Supercluster” portfolio. Optimized for Nvidia AI Enterprise software and Nvidia’s latest Blackwell compute platforms, Supermicro says its plug-and-play AI SuperCluster hardware can significantly reduce data center power usage.

“From cold plates to CDUs to cooling towers, our rack-scale total liquid cooling solutions can reduce ongoing data center power usage by up to 40%,” Charles Liang, president and CEO of Supermicro, said in a press release.
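Claims like “up to 40%” are easiest to put in perspective with a quick back-of-envelope calculation. The sketch below estimates annual electricity savings for a hypothetical facility; the power draw, reduction fraction and electricity price are illustrative assumptions, not Supermicro figures.

```python
# Back-of-envelope estimate of the savings behind a claim like
# "up to 40% lower data center power usage". All inputs here are
# hypothetical examples, not vendor figures.

HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def annual_savings_usd(total_power_kw, reduction, price_per_kwh):
    """Annual cost saved if total facility power drops by `reduction` (0..1)."""
    saved_kw = total_power_kw * reduction
    return saved_kw * HOURS_PER_YEAR * price_per_kwh

# Example: a small 1 MW facility, a 40% reduction, $0.10 per kWh.
savings = annual_savings_usd(1000, 0.40, 0.10)
print(f"Estimated annual savings: ${savings:,.0f}")  # → $350,400
```

Even at this modest scale the savings are substantial, which is why cooling efficiency figures feature so prominently in data center marketing.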

AI drives growth for hardware OEMs, plus new AMD Ryzen processors and a new Asus laptop https://www.engineering.com/ai-drives-growth-for-hardware-oems-plus-new-amd-ryzen-processors-and-a-new-asus-laptop/ Wed, 05 Jun 2024 13:41:00 +0000 https://www.engineering.com/ai-drives-growth-for-hardware-oems-plus-new-amd-ryzen-processors-and-a-new-asus-laptop/ Engineering.com’s roundup of recent computing news.

The post AI drives growth for hardware OEMs, plus new AMD Ryzen processors and a new Asus laptop appeared first on Engineering.com.

AMD announces new Ryzen processors

Chipmaker AMD unveiled its next generation of processors at Computex 2024. The new AMD Ryzen AI 300 Series processors are built on AMD’s new XDNA 2 architecture and feature what AMD claims is the “world’s most powerful Neural Processing Unit (NPU)” to power AI PC laptops. For desktop computers, AMD also announced the new Ryzen 9000 Series processors built on the company’s latest Zen 5 architecture.

(Image: AMD.)

Nvidia’s profits soar

Nvidia reported that its first-quarter revenue was up 17.8% from the previous quarter, amounting to $26 billion with $15 billion in profit. CEO Jensen Huang said in a statement that the company’s data center growth was due to “strong and accelerating demand for generative AI training and inference on the Hopper platform.” With Hopper making way for Nvidia’s recently announced Blackwell platform, Huang added that the company is “poised for our next wave of growth.”
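For readers who like to sanity-check reported figures, the implied prior-quarter revenue and the quarter's net margin can be derived from the numbers above; rounding in the reported percentages makes the results approximate.

```python
# Sanity-check the reported quarter-over-quarter figures.
revenue = 26.0   # reported Q1 revenue, $ billions
profit = 15.0    # reported Q1 profit, $ billions
growth = 0.178   # 17.8% growth over the previous quarter

prior_revenue = revenue / (1 + growth)  # implied prior-quarter revenue
margin = profit / revenue               # net margin for the quarter

print(f"Implied prior-quarter revenue: ${prior_revenue:.1f}B")  # ≈ $22.1B
print(f"Net margin: {margin:.1%}")                              # ≈ 57.7%
```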

Asus ExpertBook P5 to bring AI to business laptops

Asus announced the ExpertBook P5, the flagship laptop of the company’s new Expert P series of business-focused AI PCs. The ExpertBook P5 will include an Intel Core Ultra processor and feature a 14-inch, 2.5K display. Full specs will be available at the laptop’s launch in Q3 of this year.

The Asus ExpertBook P5. (Image: Asus.)

Dell revenue reflects AI growth

Dell detailed the results of its first 2025 fiscal quarter, which saw a 6% year-over-year increase to $22.2 billion in revenue. In the company’s press release, Dell CFO Yvonne McGill pointed to AI as a driver of new growth for the company.

Lenovo and Cisco partner for digital transformation

Lenovo and Cisco announced a strategic partnership to accelerate digital transformation that aims to “deliver fully integrated infrastructure and networking solutions.” The companies say they’ll work together to design and engineer turnkey solutions that will help customers advance their AI capabilities from edge to cloud.

HP looks to grow profits in AI and hybrid era

HP announced that its fiscal 2024 Q2 revenue was $12.8 billion, down 0.8% year-over-year. HP president and CEO Enrique Lores said in a statement that the company is “well positioned” for profitable growth, thanks in part to new solutions “designed for the AI and hybrid era.”

Should engineers buy the Apple Vision Pro? https://www.engineering.com/should-engineers-buy-the-apple-vision-pro/ Thu, 30 May 2024 10:41:00 +0000 https://www.engineering.com/should-engineers-buy-the-apple-vision-pro/ It’s early days, but the new spatial computing headset is already making headway into engineering workflows.

The post Should engineers buy the Apple Vision Pro? appeared first on Engineering.com.

Virtual reality (VR) and augmented reality (AR) have been slowly seeping into engineering offices over the past decade. The technology is being used to interact with CAD models, to help assemble spacecraft and to host virtual design reviews.

But spatial computing, as the technology is often called, has yet to make the widespread impact that proponents believe it can.

Could a new consumer headset speed up progress? The Apple Vision Pro, which launched earlier this year, has spurred newfound interest in spatial computing and its potential as an enterprise tool. And with several engineering software developers already committed to the headset, it may win over more than a few engineers.

Apple Vision Pro for designers

The starting $3,499 price tag for the Apple Vision Pro, plus the cost of enterprise software for the headset, is a roadblock to putting the headset on the desk of every designer. But it gives engineering companies a new platform to explore the value of spatial computing.

An Apple Vision Pro user testing out Onshape. (Image: PTC.)

Engineering software companies including PTC and Nvidia have embraced the Vision Pro within the first few months of its release. PTC’s Onshape was one of the first engineering apps to launch on the Vision Pro, giving users the ability to connect directly to the Onshape database (eliminating the intermediate file formats required on previous headsets), pull up 3D models, change design materials, leave comments and more.

“We were excited to get an early look of the Apple Vision Pro,” Greg Brown, vice president of product management for Onshape at PTC, told Engineering.com. “We knew that because of the way that it was going to come out—the functionality, the ease of use and all of these things—the barriers that were previously there in doing this type of visualization would be addressed in a major way.”

Brown views the biggest current engineering uses for the Vision Pro as ideation, collaboration, and evaluation, and says these are the areas where the Onshape app has focused first. The current app allows multiple people to interact with a model in the same virtual space. The following video from PTC shows the app in action:

PTC has hinted at plans to bring 3D design tools to the Vision Pro as well, but for now its software is focused on display, rendering and commenting.

Hungarian CAD provider Shapr3D also offers its 3D design software on the Vision Pro, and its immersive version aims to go well beyond viewing. The company is offering demos of its Vision Pro software, which advertises the ability to design fully in the headset.

Other engineering software available for the Apple Vision Pro includes Nvidia Omniverse, which can stream data and applications on the headset; Vectary, an interactive product visualizer; Graphisoft’s BIMx presentation app; and JigSpace, a 3D presentation app.

Apple Vision Pro beyond design

Engineering work extends far beyond 3D models, and companies are testing out VR and AR software for use on the manufacturing floor, construction sites and beyond.

View from within the Apple Vision Pro Resolve app showing a comment left within a building. (Image: Resolve.)

Resolve is a BIM application offering immersive tools for the design and construction industry, with a focus on helping all stakeholders “walk” through buildings before they are built. Resolve developed a rendering engine that can take massive engineering models and load them wirelessly onto a headset without a computer or cloud streaming. The company has previously worked with other VR headsets and now has a demo available for the Apple Vision Pro that CEO Angel Say calls “the tip of the iceberg.”

“There’s still so many more features and a general platform that we want to continue building. But that’s going to take time, both from developers like ourselves to keep adding functionality to our app, but then also from the industry to embrace these applications that we’re building and apply it to the areas that make the most sense,” Say told Engineering.com.

Resolve had to reimagine its software for the Vision Pro, as the headset presents a new paradigm of user navigation in virtual reality.

“There are no controllers with Apple Vision Pro, and so it’s not like you can use a joystick to fly around. It’s all eye tracking and hand tracking input. So that has been one thing that we’ve had to rethink,” Say said.

Other companies have also developed their own custom Vision Pro software. Porsche’s race engineering team uses the headsets to track car data in real time alongside live video from their race cars. KLM Royal Dutch Airlines is bringing Apple Vision Pros into the machine shop for training technicians.

However, the direct impact of the Vision Pro on engineering work will vary from user to user. Many engineers do not rely on Mac computers, presenting a hurdle to integrations, but Say has seen that shifting.

“You’ve got programs like Revit, Navisworks, all these things that are only run on Windows,” Say said. “But in the last decade, I would say the industry has also had a shift out in the field to using Apple devices.”

Should engineers buy an Apple Vision Pro?

Most engineers should wait before buying an expensive new Vision Pro. These headsets are only going to get cheaper, lighter and better, with more engineering apps available. But if the cost is palatable and you’re eager to explore spatial computing workflows, you’ll likely be impressed with the Vision Pro.

“It’s one of those things that you can read about, you can watch videos, but nothing beats actually going out and getting a demo and really experiencing it for yourself,” Say said.

View from within the Apple Vision Pro headset of a tank system displayed in Onshape. (Image: PTC.)

During a recent quarterly earnings call, Apple CEO Tim Cook announced that more than half of Fortune 100 companies have purchased Apple Vision Pro headsets, showing that big businesses are seeing value in exploring the enterprise impacts of this technology.

“Real customers and real prospects have been excited to be able to get their hands on these early. They have seen benefits early and a number of them have gone out and bought them the very same day,” Brown said of his experience demoing Onshape on the Vision Pro. “That speaks volumes to me that it’s finally reached a point where it can be more than a curiosity.”

As prices come down and apps get more sophisticated, the Apple Vision Pro and other headsets that follow it could find a permanent home on every engineer’s desk.

“If people haven’t tried VR/AR in the last five years, it’s time to revisit it,” Say said. “I think it’s important to understand how much the technology has evolved and get into one of the more recent headsets.”

Bluetooth SpaceMouse, NX Immersive Designer and a lot of new Copilot+ AI PCs https://www.engineering.com/bluetooth-spacemouse-nx-immersive-designer-and-a-lot-of-new-copilot-ai-pcs/ Wed, 22 May 2024 11:56:00 +0000 https://www.engineering.com/bluetooth-spacemouse-nx-immersive-designer-and-a-lot-of-new-copilot-ai-pcs/ Engineering.com’s roundup of recent computing news.

The post Bluetooth SpaceMouse, NX Immersive Designer and a lot of new Copilot+ AI PCs appeared first on Engineering.com.

3Dconnexion releases SpaceMouse Wireless – Bluetooth Edition

3Dconnexion introduced a new version of its most popular product, the SpaceMouse Wireless, that now supports Bluetooth connectivity. Made for CAD users, the SpaceMouse Wireless is a joystick-like input device that allows users to precisely navigate 3D models with six degrees of freedom. The previous version of the SpaceMouse Wireless connected via a 2.4 GHz USB dongle called the 3Dconnexion Universal Receiver, an option which is still available for the SpaceMouse Wireless – Bluetooth Edition. The device can also be connected and charged with a USB-C cable.

The 3Dconnexion SpaceMouse Wireless – Bluetooth Edition. (Image: 3Dconnexion.)
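To illustrate what six-degrees-of-freedom navigation involves, here is a minimal sketch of how a 6DoF report (three translation axes plus three rotation axes) might be applied to a camera pose each frame. The axis values, scale factors and axis-to-angle mapping are hypothetical; real applications read device data through 3Dconnexion’s driver and SDK rather than this simplified interface.

```python
# Minimal sketch: applying a 6DoF input report to a camera pose.
# Axis values and scale factors are hypothetical illustrations,
# not the actual 3Dconnexion device protocol.

def apply_6dof(pos, yaw_pitch_roll, report, t_scale=0.01, r_scale=0.001):
    """Advance a camera pose by one 6DoF device report.

    pos:            [x, y, z] camera position
    yaw_pitch_roll: [yaw, pitch, roll] in radians
    report:         (tx, ty, tz, rx, ry, rz) raw axis deltas
    """
    tx, ty, tz, rx, ry, rz = report
    new_pos = [pos[0] + tx * t_scale,
               pos[1] + ty * t_scale,
               pos[2] + tz * t_scale]
    new_ang = [yaw_pitch_roll[0] + ry * r_scale,   # yaw from twist about Y
               yaw_pitch_roll[1] + rx * r_scale,   # pitch from tilt about X
               yaw_pitch_roll[2] + rz * r_scale]   # roll from twist about Z
    return new_pos, new_ang

# Example: push the cap forward (negative Z) while twisting about Y.
pos, ang = apply_6dof([0.0, 0.0, 5.0], [0.0, 0.0, 0.0], (0, 0, -100, 0, 50, 0))
print(pos, ang)
```

The key point is that all six axes update the pose simultaneously, which is what makes a 6DoF controller feel fundamentally different from a mouse-and-keyboard orbit.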

Siemens and Sony team up on NX Immersive Designer

Sony and Siemens have announced new details of their collaboration to develop an XR headset for CAD designers. The Sony-designed SRH-S1 headset, which was on display at Siemens’ recent Realize Live user conference in Las Vegas, Nevada, includes two 4K OLED microdisplays and two controllers designed for precisely manipulating 3D objects. The SRH-S1 will be available exclusively with Siemens’ NX CAD software through a bundle called NX Immersive Designer, which Siemens expects to launch at the end of 2024.

Designing in Siemens NX with a virtual monitor using Sony’s SRH-S1 XR headset. (Image: Sony.)

New Copilot+ AI PCs from Dell, HP, Lenovo, Acer and Asus

All major PC makers launched new laptops in a category that Microsoft refers to as Copilot+ PCs. Powered by the Snapdragon X Elite and Snapdragon X Plus processors, the new computers include a neural processing unit (NPU) capable of performing 45 trillion operations per second (TOPS), a feature that PC maker Dell says will allow users to run AI tasks more efficiently.

Dell announced five Copilot+ PCs: the Dell XPS 13, Inspiron 14 Plus, Inspiron 14, Latitude 7455 and Latitude 5455. HP announced two: the HP OmniBook X and EliteBook Ultra. Lenovo also announced two: the Lenovo Yoga Slim 7x and ThinkPad T14s Gen 6. Acer unveiled the Acer Swift 14 AI and Asus the Asus Vivobook S 15.

The Dell XPS 13 Copilot+ AI PC with a Snapdragon X Elite processor. (Image: Dell.)
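A rating like 45 TOPS is a peak throughput figure, and a rough back-of-envelope estimate shows what it could mean for local inference. The model size and ops-per-token factor below are idealized assumptions, not vendor benchmarks, and real performance is usually limited by memory bandwidth rather than raw compute.

```python
# Back-of-envelope: what a 45 TOPS NPU could mean for local inference.
# Assumes roughly 2 operations per parameter per generated token and
# perfect utilization -- both are idealized simplifications.

def ms_per_token(params_billion, tops, ops_per_param=2):
    """Theoretical minimum milliseconds per generated token."""
    ops_per_token = params_billion * 1e9 * ops_per_param
    seconds = ops_per_token / (tops * 1e12)
    return seconds * 1e3

# A hypothetical 7B-parameter model on a 45 TOPS NPU:
t = ms_per_token(7, 45)
print(f"Theoretical floor: {t:.2f} ms/token")  # → 0.31 ms/token
```

In practice, tokens per second will land far below this compute-bound floor, but the estimate explains why vendors treat the ~40+ TOPS threshold as the entry point for useful on-device generative AI.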

AMD launches EPYC 4004 Series server CPUs

AMD announced the AMD EPYC 4004 Series, a set of eight new server CPUs that the company says deliver strong performance at an affordable price for small to medium businesses. The single-socket processors use AMD’s Zen 4 architecture. AMD says that the new EPYC 4564P provides a 1.8x increase in performance per CPU dollar compared to the Intel Xeon E-2488. Hardware provider MSI has already announced server platforms supporting the new processors.
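Performance per CPU dollar is simply a benchmark score divided by price, so a claimed uplift can be reproduced from two such ratios. The scores and prices below are made-up placeholders, not AMD or Intel figures.

```python
# How a performance-per-dollar comparison is computed.
# Scores and prices here are hypothetical placeholders.

def perf_per_dollar(score, price_usd):
    return score / price_usd

def relative_uplift(a_score, a_price, b_score, b_price):
    """Ratio of CPU A's perf-per-dollar to CPU B's."""
    return perf_per_dollar(a_score, a_price) / perf_per_dollar(b_score, b_price)

# Hypothetical example: CPU A scores 900 at $500; CPU B scores 600 at $600.
uplift = relative_uplift(900, 500, 600, 600)
print(f"{uplift:.1f}x performance per dollar")  # → 1.8x
```

Because the metric depends on which benchmark and which price (list vs. street) are used, vendor-published ratios like this are best treated as directional rather than exact.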

Boxx AI

Workstation maker Boxx Technologies unveiled Boxx AI, a new line of multi-GPU desktop and data center workstations for AI design and development. The desktop systems include up to four Nvidia RTX 6000 Ada Gen GPUs with up to 192 GB of GPU memory.
