Apple just dropped a bombshell for developers: its in-house MLX machine learning framework now supports NVIDIA CUDA, marking a massive shift in the company’s AI strategy.
This is the first time Apple’s AI tools can natively run on NVIDIA GPUs, unlocking the ability to train and scale models outside the Apple ecosystem. For developers working across Mac and Linux, this move dramatically expands what’s possible with MLX.
Previously tied to Apple Silicon, MLX is now making its debut in Linux environments using CUDA 11.6, as confirmed by Apple’s official GitHub repository. The framework now supports matrix multiplication, sorting, indexing, and other key functions on NVIDIA’s industry-dominant GPU architecture.
Apple has long favored vertical integration, limiting its AI tooling to Macs and its proprietary hardware. But with this new CUDA support, it’s clear Apple wants MLX to become a cross-platform powerhouse.
Here’s what changes:
Build and test models locally on Apple Silicon (M-series) Macs
Scale to NVIDIA-powered clusters for high-performance training
Reduced vendor lock-in, giving developers far more flexibility
For AI engineers, this means seamless workflows between local development and cloud deployment, something that was previously difficult inside Apple's ecosystem.
This strategic pivot places Apple directly into the heart of mainstream AI development, where NVIDIA dominates. Most large AI models are trained on CUDA-based GPUs, including those behind ChatGPT, Stable Diffusion, and Gemini.
With NVIDIA CUDA integration, Apple’s MLX can now plug into that same infrastructure, opening doors to:
Distributed training on powerful NVIDIA data center GPUs
Integration with platforms like Google Cloud and AWS
Better tooling for hybrid AI development
The takeaways:
The MLX update brings support for industry-standard NVIDIA GPUs
Developers gain the freedom to scale AI projects from MacBooks to GPU clusters
MLX could become the most developer-friendly AI tool Apple has ever built
For the first time, Apple seems ready to play in the open in the world of AI, and the timing couldn't be better.