by Optimal Intellect

Moreau
GPU-native convex optimization for AI

Embed differentiable optimization layers directly into PyTorch or JAX—fully batched, entirely on GPU.

What's your application?

The Optimization Layer

Differentiable by Design

Problem data flows in, optimal solutions flow out—with gradients propagating back through the entire solve.

Input
cost P, q · constraints A, b · bounds l, u
cone K: Zero · Nonneg · SOC · Exp · Power

moreau.solve()
Solves the convex optimization problem.

Output
x* primal solution · s* slack variable · z* dual variable

Backward Pass
∂L/∂x*, ∂L/∂s*, ∂L/∂z*

moreau.backward()
Implicit differentiation returns ∂L/∂P, ∂L/∂q, ∂L/∂A, ∂L/∂b, ∂L/∂l, ∂L/∂u.
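
To make the flow above concrete, here is a minimal, self-contained PyTorch sketch of the same pattern for the special case of an equality-constrained quadratic program (no bounds l, u and no cone K). It does not use Moreau's API or solver; it only shows how a batched solve can sit inside autograd so that gradients reach the problem data.

```python
# Illustrative only: a differentiable layer for the equality-constrained QP
#   minimize (1/2) x'Px + q'x   subject to   Ax = b.
# The optimal (x*, nu*) solves the KKT linear system, and torch.linalg.solve is
# differentiable, so gradients flow back to q, A, b (and P if it requires grad).
import torch

def qp_layer(P, q, A, b):
    """Batched equality-constrained QP solve; differentiable end to end."""
    B, n = q.shape
    m = b.shape[1]
    # KKT matrix [[P, A^T], [A, 0]] and right-hand side [-q; b], per batch item.
    top = torch.cat([P, A.transpose(1, 2)], dim=2)
    bot = torch.cat([A, torch.zeros(B, m, m, dtype=P.dtype, device=P.device)], dim=2)
    kkt = torch.cat([top, bot], dim=1)
    rhs = torch.cat([-q, b], dim=1).unsqueeze(-1)
    sol = torch.linalg.solve(kkt, rhs).squeeze(-1)
    return sol[:, :n], sol[:, n:]            # x* and the dual of Ax = b

# Toy batch: 4 problems, 3 variables, 1 equality constraint.
B, n, m = 4, 3, 1
L = torch.randn(B, n, n)
P = L @ L.transpose(1, 2) + 0.1 * torch.eye(n)   # positive-definite costs
q = torch.randn(B, n, requires_grad=True)
A = torch.randn(B, m, n, requires_grad=True)
b = torch.randn(B, m, requires_grad=True)

x_star, _ = qp_layer(P, q, A, b)
x_star.pow(2).sum().backward()                   # dL/dq, dL/dA, dL/db populated
```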

Capabilities

Built for ML training workflows

Moreau provides a Python API with PyTorch and JAX bindings. Problem data stays on GPU throughout training—no CPU round-trips.

01

GPU-Native

All computation stays in VRAM. No CPU round-trips during training loops. Compatible with PyTorch and JAX tensor workflows.

02

Batched

Solve 128–1024 problem instances in parallel on a single GPU. Designed for the batch sizes used in modern ML training.

03

Differentiable

Computes gradients of the solution with respect to problem data via implicit differentiation. Enables backpropagation through optimization layers.
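
For concreteness, this is the standard implicit-function-theorem argument behind that phrase (a generic sketch; the page does not spell out Moreau's particular residual map). Write θ = (P, q, A, b, l, u) and let R be the optimality-condition residual that the returned primal-dual point zeroes out:

```latex
\[
R\bigl(x^{\star}, s^{\star}, z^{\star};\, \theta\bigr) = 0,
\qquad \theta = (P, q, A, b, l, u)
\]
\[
\frac{\partial (x^{\star}, s^{\star}, z^{\star})}{\partial \theta}
  = -\Bigl(\partial_{(x,s,z)} R\Bigr)^{-1} \partial_{\theta} R
\]
\[
\frac{\partial L}{\partial \theta}
  = -\bigl(\partial_{\theta} R\bigr)^{\top}
    \Bigl(\partial_{(x,s,z)} R\Bigr)^{-\top}
    \frac{\partial L}{\partial (x^{\star}, s^{\star}, z^{\star})}
\]
```

The backward pass therefore reduces to one adjoint linear solve with the incoming gradient, rather than forming any Jacobian explicitly.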

Who It's For

Teams embedding optimization in training

For researchers and engineers who need hard constraints respected during training—not as a post-processing step. Constraints as first-class citizens, not soft penalties.

Robotics & Embodied AI

Train policies where actions come from solving constrained optimization. Backpropagate through dynamics, contact constraints, and joint limits.
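
A sketch of what such a policy layer can look like, built with CvxpyLayers (the team's earlier library, named below) purely to illustrate the pattern. The action dimension, limits, and norm budget here are made up for the example, and Moreau's own interface may differ; the point is that the projection is a layer the policy trains through.

```python
# Hypothetical action-projection layer: the policy network emits a raw action,
# and a differentiable solve returns the nearest action satisfying box limits
# and a norm budget. Gradients pass through the solve back into the policy.
import cvxpy as cp
import torch
from cvxpylayers.torch import CvxpyLayer

n_act = 7                                  # e.g. a 7-DoF arm (illustrative)
a_raw = cp.Parameter(n_act)                # unconstrained output of the policy
a = cp.Variable(n_act)
# ||a - a_raw||^2 written in DPP form (the constant ||a_raw||^2 term is dropped).
objective = cp.Minimize(cp.sum_squares(a) - 2 * a_raw @ a)
constraints = [a >= -1.0, a <= 1.0, cp.norm(a, 2) <= 2.0]
layer = CvxpyLayer(cp.Problem(objective, constraints),
                   parameters=[a_raw], variables=[a])

raw = torch.randn(32, n_act, requires_grad=True)   # a batch of raw actions
(safe,) = layer(raw)                               # batched, differentiable solve
safe.pow(2).sum().backward()                       # gradients reach the policy
```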

Quantitative Finance

Differentiate through portfolio optimization with risk, leverage, and regulatory constraints during training.
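
One concrete shape such a layer can take (notation illustrative, not from this page): a mean-variance problem whose expected returns μ and covariance Σ are produced by an upstream forecasting model, with leverage and position limits enforced by the solver rather than by soft penalties, so that gradients of the training loss flow back into μ and Σ.

```latex
\[
\begin{aligned}
\underset{w}{\text{minimize}} \quad & -\mu^{\top} w + \gamma\, w^{\top} \Sigma\, w \\
\text{subject to} \quad & \mathbf{1}^{\top} w = 1 && \text{(fully invested)} \\
& \lVert w \rVert_{1} \le L_{\max} && \text{(leverage limit)} \\
& w_{\min} \le w \le w_{\max} && \text{(position limits)}
\end{aligned}
\]
```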

Power Systems

Train models that respect network constraints, capacity limits, and operational safety requirements.

Supply Chain & Logistics

Differentiate through routing, scheduling, and inventory decisions with real-world constraints.

Team

Built by optimization experts

The team behind CVXPY, CVXPYlayers, and 50+ research papers on convex optimization.

Shane Barratt
CEO

Parth Nobel
CTO

Steven Diamond
COO

Creators of CVXPY (3M+ downloads/mo)

Creators of CVXPYlayers (900+ citations)

Stanford PhDs from Stephen Boyd's lab

Authors of 50+ papers on optimization