With the surge in popularity of artificial intelligence (AI) and machine learning (ML), developers are constantly seeking innovative ways to integrate these technologies into their projects.
One widely used open-source library for building and training ML models is PyTorch, which provides a simple and efficient way to develop neural networks. Microsoft has announced Arm-native builds of PyTorch as part of its Copilot Runtime, aiming to bring AI development closer to home.
This enables developers to build, train, and run ML models on their local devices rather than relying on cloud-hosted services. The Arm-native builds of the PyTorch and LibTorch libraries are designed to work seamlessly with Microsoft's endpoint AI development strategy.
This integration gives developers access to the full range of tools provided by Copilot+ PCs, including ONNX model runtimes for the Hexagon NPU and support for DirectML. Installing PyTorch on Windows on Arm is relatively straightforward and requires only a few steps.
First, developers need to install the Visual Studio Build Tools with C++ support and Rust. Then, they can install Python from the official website and use pip to install the latest version of PyTorch. Microsoft has also provided sample code for experimenting with PyTorch on Arm, including downloading a pretrained Stable Diffusion model from Hugging Face.
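Once those steps are complete, a quick sanity check confirms the installation works end to end. The snippet below is a minimal sketch, not part of Microsoft's sample code; the exact version and architecture strings will vary by release and machine.

```python
import platform
import torch

# Report the installed PyTorch version and the machine architecture.
# On an Arm-native build under Windows on Arm, platform.machine()
# should report an Arm value (e.g. "ARM64") rather than an x64 one.
print("PyTorch version:", torch.__version__)
print("Architecture:", platform.machine())

# Run a tiny tensor computation to confirm the install is functional.
x = torch.arange(6, dtype=torch.float32).reshape(2, 3)
print("Row sums:", x.sum(dim=1).tolist())
```

If the import succeeds and the tensor math runs, the build is ready for the larger samples, such as the pretrained Stable Diffusion model mentioned above.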
This sample code demonstrates how easy it is to get started with PyTorch on Arm, even without extensive experience in AI development. One key benefit of using PyTorch on Arm is the ability to run AI models locally, which can significantly reduce latency and improve performance compared to cloud-hosted services.
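To illustrate what running a model locally looks like in practice, here is a minimal sketch that defines a small network and performs inference entirely on-device. The architecture and tensor sizes are placeholder assumptions for illustration, not drawn from Microsoft's samples.

```python
import torch
import torch.nn as nn

# A toy two-layer classifier standing in for any locally hosted model.
model = nn.Sequential(
    nn.Linear(8, 16),
    nn.ReLU(),
    nn.Linear(16, 2),
)
model.eval()  # inference mode: disables training-only behavior

# Inference happens entirely on the local device: there is no network
# round trip, so latency is bounded by local compute, not a cloud API.
with torch.no_grad():
    batch = torch.randn(4, 8)          # four dummy input samples
    logits = model(batch)              # shape: (4, 2)
    predictions = logits.argmax(dim=1) # class index per sample

print("Logits shape:", tuple(logits.shape))
print("Predictions:", predictions.tolist())
```

The same pattern scales to real workloads: load a pretrained model, move data through it locally, and the round-trip cost of a hosted endpoint disappears.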
The Arm-native version of PyTorch also lets developers use the full performance of their hardware, without the overhead of Windows' Prism x64 emulation. The release marks an important milestone for AI at the edge, enabling developers to build and train ML models locally.
By providing a simple, efficient way to build, train, and run ML models locally, the release frees developers from reliance on cloud-hosted services, and it is expected to accelerate the adoption of AI across industries, from healthcare and finance to retail and transportation. An Arm-native PyTorch has significant implications for AI at the edge, enabling faster innovation and improved performance, and we can expect to see it in a growing range of applications driven by the power of on-device processing.