Apple introduces MLX, a machine-learning framework for Apple Silicon

Apple’s machine learning (ML) teams have released a new ML framework for Apple Silicon: MLX, or ML Explore. It arrives after being tested over the summer and is now available through GitHub.

Machine Learning for Apple Silicon

In a post on X, Awni Hannun of Apple’s ML team calls the software “…an efficient machine learning framework specifically designed for Apple silicon (i.e. your laptop!)”

The idea is to streamline training and deployment of ML models for researchers who use Apple hardware. MLX is a NumPy-like array framework designed for efficient and flexible machine learning on Apple’s processors.

This isn’t a consumer-facing tool; it equips developers with what appears to be a powerful environment in which to build ML models. The company also seems to have worked to embrace the languages developers want to use, rather than force a language on them – and it apparently built powerful LLM tools in the process.

Familiar to developers

MLX’s design is inspired by existing frameworks such as PyTorch, Jax, and ArrayFire. However, MLX adds support for a unified memory model, which means arrays live in shared memory and operations can be performed on any of the supported device types without performing data copies.

The team explains: “The Python API closely follows NumPy with a few exceptions. MLX also has a fully featured C++ API which closely follows the Python API.”
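To get a rough sense of what “closely follows NumPy” means in practice, here is the kind of array idiom in question, shown in plain NumPy itself (the analogous MLX calls live in its `mlx.core` module):

```python
import numpy as np

# NumPy-style array creation, broadcasting, and reductions --
# the programming model MLX's Python API is described as following.
a = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([10.0, 20.0])

c = a + b           # broadcasting: b is added to each row of a
m = c.mean(axis=0)  # reduction along an axis
```

In MLX the same expressions would be written against `mlx.core` arrays rather than NumPy ones.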

Notes accompanying the release also say:

“The framework is intended to be user-friendly, but still efficient to train and deploy models… We intend to make it easy for researchers to extend and improve MLX with the goal of quickly exploring new ideas.”

Pretty good at first glance

At first glance, MLX looks relatively good and (as explained on GitHub) is equipped with several features that set it apart. Beyond the familiar APIs, these include:

Composable function transformations: MLX has composable function transformations for automatic differentiation, automatic vectorization, and computation graph optimization.
Lazy computation: Computations in MLX are lazy. Arrays are only materialized when needed.
Dynamic graph construction: Computation graphs in MLX are built dynamically. Changing the shapes of function arguments does not trigger slow compilations, and debugging is simple and intuitive.
Multi-device: Operations can run on any of the supported devices (currently, the CPU and GPU).
Unified memory: Under the unified memory model, arrays in MLX live in shared memory. Operations on MLX arrays can be performed on any of the supported device types without moving data.

What it can already achieve

Apple has provided a collection of examples of what MLX can do. These appear to confirm the company now has a highly efficient language model, powerful tools for image generation using Stable Diffusion, and highly accurate speech recognition. This tallies with claims earlier this year, and some speculation concerning…

2023-12-12 18:00:04
Link from www.computerworld.com
