A Linux Foundation Project
Open Initiative

OpenCAPI: The Open NVLink the Industry Left Behind

NVIDIA owns the only mainstream coherent accelerator fabric. OpenCAPI was the open answer — built into POWER silicon, specified publicly, and proven in production. The specs are archived. The reference designs are on GitHub. What's missing is an IP license grant from the original consortium founders to make it buildable again.

Background

What OpenCAPI Was

A direct, cache-coherent, high-bandwidth interface between the POWER processor and accelerators — bypassing PCIe entirely. Exactly what NVLink does, but open, auditable, and implementable by anyone.

The NVLink problem: NVIDIA's NVLink gives H100 and B200 GPUs a direct coherent connection to each other and to CPUs at hundreds of GB/s. But NVLink is proprietary — you can only use it with NVIDIA hardware, you cannot audit it, and you cannot build a compatible alternative. OpenCAPI was designed from the start as the open answer, and it ran in production on POWER9 and POWER10 systems at scale.

NVIDIA — Proprietary

NVLink / NVSwitch

  • 900 GB/s bidirectional bandwidth (NVLink 4.0)
  • Cache-coherent GPU↔CPU and GPU↔GPU
  • Bypasses PCIe — direct memory fabric access
  • Sub-microsecond latency
  • Closed specification — NVIDIA only
  • Cannot be audited or independently implemented
  • No third-party accelerators supported
OpenPOWER — Open Standard

OpenCAPI 3.0

  • Up to 400 GB/s aggregate; 50 GB/s bidirectional per x8 link (25 Gbps × 8 lanes × 2 directions)
  • Cache-coherent CPU↔accelerator, FPGA, HSM, SmartNIC
  • Dedicated SerDes on POWER die — no PCIe path
  • Sub-microsecond latency, lower than PCIe by design
  • Public specification — anyone can implement
  • Open-source FPGA reference designs on GitHub
  • Any vendor's accelerator can attach — fully auditable

Technical Specification

Signaling 25 Gbps NRZ SerDes, dedicated lanes on POWER9/POWER10 die
Coherency model Full cache coherency — accelerator participates in the processor memory domain
Latency Sub-microsecond; deterministic, not subject to PCIe switch hops
OMI (Open Memory Interface) OpenCAPI extension for coherent memory expansion — 8 controllers on POWER10, 410 GB/s per socket
Specification status Publicly archived at CXL Consortium; OpenCAPI 3.0 Transaction Layer and PHY specs available for download
Open-source IP OC-Accel FPGA framework, OpenCAPI3.0_Client_RefDesign, and OMI device designs on GitHub (OpenCAPI org)
Deployed in IBM POWER9 (2018), IBM POWER10 (2021) — production systems at banks, hyperscalers, national labs
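The headline bandwidth numbers in the table follow from simple lane arithmetic. A minimal sketch (line rate and lane count are taken from the table above; it ignores line encoding and protocol overhead, so sustained figures are somewhat lower):

```python
# OpenCAPI 3.0 link arithmetic: 25 Gbps per lane, x8 lanes per link.
GBPS_PER_LANE = 25   # raw line rate per lane, gigabits/s
LANES_PER_LINK = 8   # one OpenCAPI x8 link

# Convert gigabits/s to gigabytes/s (8 bits per byte).
per_direction_gbs = GBPS_PER_LANE * LANES_PER_LINK / 8
bidirectional_gbs = per_direction_gbs * 2

# Links needed to reach the 400 GB/s aggregate figure quoted above.
links_for_aggregate = 400 / bidirectional_gbs

print(f"x8 link: {per_direction_gbs:.0f} GB/s per direction, "
      f"{bidirectional_gbs:.0f} GB/s bidirectional")
print(f"{links_for_aggregate:.0f} x8 links reach 400 GB/s aggregate")
```

For context, a PCIe 4.0 x16 slot tops out near 32 GB/s per direction before overhead, which is why a handful of dedicated OpenCAPI links outruns the PCIe path.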
History

What Happened to the Consortium

In August 2022, the OpenCAPI Consortium transferred its trademarks and specifications to the CXL Consortium. The technology did not disappear — but the stewardship structure that enabled open implementation did.

2016 — Consortium Founded

IBM, Xilinx, Mellanox, Micron, Google, Toshiba Memory, and HPE form the OpenCAPI Consortium. Specification released publicly. First silicon ships on POWER9.

Complete

2021 — POWER10 Ships

POWER10 ships with 8 OpenCAPI / OMI controllers per socket, delivering 410 GB/s memory bandwidth. IBM publicly states: "CXL is a few years behind OpenCAPI."

Complete

2022 — Consortium Dissolves

OpenCAPI Consortium signs letter of intent to transfer all assets — trademarks, specifications, OMI spec — to the CXL Consortium. Open-source GitHub IP remains available.

IP transferred

Today — IP Gap

Specs are archived and downloadable. FPGA reference designs are on GitHub. But without a formal patent license grant, new implementers cannot build OpenCAPI-compatible silicon without risk.

Action needed

What Unlocks Next

Founders grant RF (royalty-free) or FRAND patent coverage to OpenPOWER Foundation (OPF) members → FPGA implementations formally sanctioned → third-party POWER licensees add OpenCAPI to ASIC designs → the open NVLink becomes real.

Unlocked by this initiative

Why CXL does not replace OpenCAPI: CXL is a PCIe protocol — it runs on the PCIe physical layer and through PCIe root complexes. OpenCAPI used dedicated high-speed SerDes wired directly to the POWER memory fabric. They solve different problems. CXL excels at memory pooling and disaggregation. OpenCAPI was designed for the tightly coupled, cache-coherent, accelerator-attached-at-processor-speed use case — the same one NVIDIA solved with NVLink. No open standard currently fills that gap.

Strategic Case

Why This Matters for Sovereign AI

The AI accelerator market is consolidating around NVLink-connected NVIDIA hardware. For organisations that cannot accept proprietary lock-in, there is currently no open alternative for coherent accelerator fabric.

Vendor Lock-In Is Accelerating

Every hyperscaler and national lab building AI infrastructure today is buying into NVIDIA's NVLink fabric. Once that infrastructure is in place, switching costs are prohibitive. Governments and regulated enterprises that need auditable, sovereign AI compute have no open coherent accelerator option.

Auditability Is a Security Requirement

For financial regulators, intelligence agencies, and healthcare systems, the interconnect between processor and accelerator is inside the trust boundary. A closed-source fabric you cannot audit is a compliance risk. OpenCAPI's specification was fully public — every layer inspectable.

The Pieces Already Exist

The specifications are archived and downloadable. The FPGA reference designs are live on GitHub. Microwatt (OPF's open POWER soft core) runs on commodity FPGAs. The only missing piece is a clear IP license that allows the community to build without legal uncertainty.

The Ask

What We're Requesting

The original OpenCAPI Consortium founding members hold the patents that cover OpenCAPI implementation. We are asking each of them to formally grant royalty-free (or FRAND) patent coverage to OpenPOWER Foundation members — the same model that made the POWER ISA open.

IBM
Primary inventor. Holds core POWER-specific SerDes and transaction layer patents. Already granted POWER ISA to OPF — this is the natural extension.
IP Grant Requested
AMD / Xilinx
Xilinx contributed FPGA-side implementation IP and PHY patents. Now AMD. Open-source reference designs already exist on GitHub under their stewardship.
IP Grant Requested
NVIDIA / Mellanox
Mellanox contributed network adapter and SmartNIC implementation patents before its acquisition by NVIDIA. NVIDIA now holds those rights.
IP Grant Requested
Micron Technology
Contributed memory-side OMI interface IP and DDIMM implementation patents. OMI is foundational to OpenCAPI's memory expansion capability.
IP Grant Requested
Google
Contributed accelerator endpoint and coherency protocol patents as a founding data-centre consumer of the standard.
IP Grant Requested
Kioxia / Toshiba Memory
Contributed storage-class memory interface IP in the original consortium.
IP Grant Requested

Precedent: In 2019, IBM transferred the POWER ISA to the OpenPOWER Foundation under an open licence. That single act unlocked an entire ecosystem of open processors, compilers, and firmware. An OpenCAPI IP grant would do the same for open coherent accelerator fabric — completing the open hardware stack from ISA to interconnect.

If This Succeeds

What Becomes Buildable

A formal IP grant doesn't require anyone to build new silicon immediately. It unblocks a community that already has the specs, the reference designs, and the processor ISA — and is waiting for legal clarity.

Near Term — FPGA

Wire the existing OC-Accel FPGA framework into Microwatt (OPF's open POWER soft core). Both already run on commodity Xilinx/AMD FPGAs. An IP grant makes this a formally sanctioned OPF reference implementation rather than a legal grey area.

Medium Term — Third-Party ASICs

POWER ISA licensees building their own chips — including emerging fabs in India, Africa, and Southeast Asia — can add OpenCAPI ports to their designs. The foundry SerDes IP is available from TSMC and Samsung on standard process nodes.

Medium Term — Open Accelerator Cards

Vendors building inference accelerators, HSMs, and SmartNICs for the POWER ecosystem can implement OpenCAPI endpoints rather than being limited to PCIe. Direct memory-coherent AI inference offload without NVIDIA's fabric.

Long Term — Complete Open Stack

Open ISA (POWER) + open coherent fabric (OpenCAPI) + open accelerator designs + open firmware (OpenBMC/OpenFSP) = a fully auditable, sovereign AI compute stack with no closed components in the critical path.

Resources

Start Building Now

While the IP grant process is underway, the existing open-source implementations are available for development and evaluation.

GitHub
OC-Accel Framework
Full FPGA accelerator development framework with OpenCAPI interface. The fastest path to a working implementation today.
GitHub
Client Reference Design
OpenCAPI 3.0 FPGA accelerator endpoint reference design. Implements the full transaction layer in synthesisable RTL.
GitHub
OMI Device (ICE)
Complete FPGA implementation of an OMI DDIMM device with DDR4 memory — the memory-expansion half of the OpenCAPI stack.
CXL Consortium
Specification Archive
OpenCAPI 3.0 Transaction Layer and 25 Gbps PHY specifications, archived and publicly downloadable.

Support the Open Source NVLink

If your organisation depends on open, auditable compute infrastructure — or if you represent one of the founding consortium members — we want to hear from you. The path from archived spec to working open ecosystem starts with this conversation.