⚠️🚧 This site is currently under construction. Documentation is actively being written and expected to be released along with the next GoMLX release v0.28.0. 🚧⚠️

What is GoMLX?

GoMLX, an Accelerated ML and Math Framework


📖 About GoMLX


GoMLX is an easy-to-use set of Machine Learning and generic math libraries and tools. It can be seen as a PyTorch/Jax/TensorFlow for Go.

It can be used to train, fine-tune, modify, and combine machine learning models. It provides all the tools to make that work easy: from a complete set of differentiable operators all the way to UI tools that plot metrics while training in a notebook.

It runs almost everywhere Go runs, using a pure Go backend. It even runs in the browser with WASM (see demo created with GoMLX), and it will likely work on embedded devices as well (see Tamago).

It also supports a highly optimized backend engine based on OpenXLA that uses just-in-time compilation to CPUs, GPUs (Nvidia, and likely AMD ROCm, Intel, and Macs), and Google's TPUs. It also supports modern distributed execution (new, still being actively improved) for multi-TPU or multi-GPU setups, using XLA Shardy, an evolution of GSPMD distribution.

It's the same engine that powers Google's Jax, TensorFlow, and PyTorch/XLA, and it matches their speed in many cases. Use this backend to train large models or on large datasets.

> [!TIP]
> GoMLX was developed to be a full-featured ML platform for Go: productionizable, and one that makes it easy to experiment with ML ideas (see Long-Term Goals below).

It strives to be simple to read and reason about, leading the user to a correct and transparent mental model of what is going on (no surprises), in line with the Go philosophy, even at the cost of more typing (more verbosity) at times.

It is also very flexible and easy to extend, which makes it well-suited for trying non-conventional ideas: use it to experiment with new optimizers, complex regularizers, funky multitasking, etc.

Documentation is kept up to date (if it is not well-documented, it is as if the code is not there), and error messages are useful (always with a stack-trace) and try to make it easy to solve issues.

πŸ—ΊοΈ Overview

GoMLX is a full-featured ML framework, supporting various well-known ML components
from the bottom to the top of the stack. But it is still only a slice of what major ML libraries/frameworks like TensorFlow, Jax, or PyTorch provide.

Examples developed using GoMLX

Backends

GoMLX is a friendly “intermediary ML API” that hosts a common API and a library of ML layers and related tools. But per se it doesn't execute any computation: it relies on different backends to compile and execute the computation on very different hardware.

There is a common backend interface (currently in github.com/gomlx/gomlx/backends, but it will soon move to its own repository), and three different implementations:

  1. xla: OpenXLA backend for CPUs, GPUs, and TPUs. State-of-the-art as these things go, but it supports only static shapes. Available for linux/amd64, linux/arm64 (CPU), and darwin/arm64 (CPU) for now, using the go-xla Go version of the APIs.
  2. go: a pure Go backend (no C/C++ dependencies): slower, but very portable (compiles to WASM, Windows, etc.).
  3. 🚀 NEW 🚀 go-darwinml: Go bindings to Apple's CoreML, supporting Metal acceleration, MLX, and anything Darwin-related.
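To make the pluggable-backend idea above concrete, here is a minimal sketch of a backend interface with a registry, in the spirit of what the text describes. The names (`Backend`, `Register`, `New`, `goBackend`) and the `Compile` signature are simplifications invented for this sketch; the real interface in github.com/gomlx/gomlx/backends is much richer.

```go
package main

import "fmt"

// Backend is a simplified stand-in for the common backend interface
// described above; the real one handles graph building, device buffers,
// and execution.
type Backend interface {
	Name() string
	// Compile would lower a computation graph to an executable; here it
	// just returns a label to keep the sketch self-contained.
	Compile(program string) string
}

// registry maps backend names to constructors, so each implementation
// can register itself from its own package (e.g. in an init function).
var registry = map[string]func() Backend{}

func Register(name string, ctor func() Backend) { registry[name] = ctor }

// New creates the backend registered under the given name.
func New(name string) (Backend, error) {
	ctor, ok := registry[name]
	if !ok {
		return nil, fmt.Errorf("unknown backend %q", name)
	}
	return ctor(), nil
}

// goBackend is a toy stand-in for a pure-Go implementation.
type goBackend struct{}

func (goBackend) Name() string                  { return "go" }
func (goBackend) Compile(program string) string { return "go-compiled: " + program }

func main() {
	Register("go", func() Backend { return goBackend{} })
	b, err := New("go")
	if err != nil {
		panic(err)
	}
	fmt.Println(b.Compile("x*x")) // go-compiled: x*x
}
```

The registry pattern lets client code select a backend by name at runtime while each implementation lives in its own package with its own build constraints (useful when some backends need cgo and others must stay pure Go).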

Highlights

  • Converting ONNX models to GoMLX with onnx-gomlx: both as an alternative to onnxruntime (leveraging XLA) and to further fine-tune models. See also go-huggingface to easily download ONNX model files from HuggingFace.
  • Docker “gomlx_jupyterlab” with integrated JupyterLab and GoNB (a Go kernel for Jupyter notebooks)
  • Autodiff: automatic differentiation; only gradients for now, no Jacobians.
  • Context: automatic variable management for ML models.
  • ML layers library with some of the most popular machine learning “layers”: FFN layers,
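To illustrate what the Autodiff bullet above means, here is a tiny forward-mode automatic-differentiation sketch using dual numbers in plain Go. This is only a didactic example: GoMLX itself computes gradients by reverse-mode autodiff over its computation graph, and none of these names (`Dual`, `Grad`) come from its API.

```go
package main

import "fmt"

// Dual carries a value and its derivative; arithmetic on Duals applies
// the chain rule automatically. This is forward-mode autodiff, shown
// only to illustrate the idea; GoMLX does reverse-mode on its graph.
type Dual struct{ Val, Deriv float64 }

func Add(a, b Dual) Dual { return Dual{a.Val + b.Val, a.Deriv + b.Deriv} }
func Mul(a, b Dual) Dual { return Dual{a.Val * b.Val, a.Val*b.Deriv + a.Deriv*b.Val} }

// Grad evaluates f at x with the derivative seeded to 1 and returns df/dx.
func Grad(f func(Dual) Dual, x float64) float64 {
	return f(Dual{x, 1}).Deriv
}

func main() {
	// f(x) = x*x + 3x, so f'(x) = 2x + 3 and f'(2) = 7.
	f := func(x Dual) Dual {
		return Add(Mul(x, x), Mul(Dual{3, 0}, x))
	}
	fmt.Println(Grad(f, 2)) // 7
}
```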

This page is excerpted from the full README. For complete documentation, browse the sections in the sidebar.

Last updated April 25, 2026