NVIDIA owns the only mainstream coherent accelerator fabric. OpenCAPI was the open answer — built into POWER silicon, specified publicly, and proven in production. The specs are archived. The reference designs are on GitHub. What's missing is an IP license grant from the original consortium founders to make it buildable again.
OpenCAPI is a direct, cache-coherent, high-bandwidth interface between the POWER processor and accelerators, bypassing PCIe entirely. Exactly what NVLink does, but open, auditable, and implementable by anyone.
The NVLink problem: NVIDIA's NVLink gives H100 and B200 GPUs a direct coherent connection to each other and to CPUs at hundreds of GB/s. But NVLink is proprietary: you can use it only with NVIDIA hardware, you cannot audit it, and you cannot build a compatible alternative. OpenCAPI was designed from the start as the open alternative, and it ran in production on POWER9 and POWER10 systems at scale.
In August 2022, the OpenCAPI Consortium transferred its trademarks and specifications to the CXL Consortium. The technology did not disappear — but the stewardship structure that enabled open implementation did.
[Complete] IBM, Xilinx, Mellanox, Micron, Google, Toshiba Memory, and HPE form the OpenCAPI Consortium. The specification is released publicly. First silicon ships on POWER9.

[Complete] POWER10 ships with 8 OpenCAPI / OMI controllers per socket, delivering 410 GB/s of memory bandwidth. IBM publicly states: "CXL is a few years behind OpenCAPI."

[IP transferred] The OpenCAPI Consortium signs a letter of intent to transfer all assets (trademarks, specifications, the OMI spec) to the CXL Consortium. The open-source GitHub IP remains available.

[Action needed] The specs are archived and downloadable. The FPGA reference designs are on GitHub. But without a formal patent license grant, new implementers cannot build OpenCAPI-compatible silicon without legal risk.

[Unlocked by this initiative] Founders grant RF (royalty-free) or FRAND patent coverage to OPF members → FPGA implementations are blessed → third-party POWER licensees add OpenCAPI to their ASIC designs → an open NVLink becomes real.

Why CXL does not replace OpenCAPI: CXL is a PCIe protocol — it runs on the PCIe physical layer and through PCIe root complexes. OpenCAPI used dedicated high-speed SerDes wired directly into the POWER memory fabric. They solve different problems: CXL excels at memory pooling and disaggregation, while OpenCAPI was designed for the tightly coupled, cache-coherent, accelerator-attached-at-processor-speed use case — the same one NVIDIA solved with NVLink. No open standard currently fills that gap.
The AI accelerator market is consolidating around NVLink-connected NVIDIA hardware. For organisations that cannot accept proprietary lock-in, there is currently no open alternative for a coherent accelerator fabric.
Every hyperscaler and national lab building AI infrastructure today is buying into NVIDIA's NVLink fabric. Once that infrastructure is in place, switching costs are prohibitive. Governments and regulated enterprises that need auditable, sovereign AI compute have no open coherent accelerator option.
For financial regulators, intelligence agencies, and healthcare systems, the interconnect between processor and accelerator is inside the trust boundary. A closed-source fabric you cannot audit is a compliance risk. OpenCAPI's specification was fully public — every layer inspectable.
The specifications are archived and downloadable. The FPGA reference designs are live on GitHub. Microwatt (OPF's open POWER soft core) runs on commodity FPGAs. The only missing piece is a clear IP license that allows the community to build without legal uncertainty.
The original OpenCAPI Consortium founding members hold the patents that cover OpenCAPI implementation. We are asking each of them to formally grant royalty-free (or FRAND) patent coverage to OpenPOWER Foundation members — the same model that made the POWER ISA open.
Precedent: In 2019, IBM transferred the POWER ISA to the OpenPOWER Foundation under an open licence. That single act unlocked an entire ecosystem of open processors, compilers, and firmware. An OpenCAPI IP grant would do the same for open coherent accelerator fabric — completing the open hardware stack from ISA to interconnect.
A formal IP grant doesn't require anyone to build new silicon immediately. It unblocks a community that already has the specs, the reference designs, and the processor ISA — and is waiting for legal clarity.
Wire the existing OC-Accel FPGA framework into Microwatt (OPF's open POWER soft core). Both run today on commodity Xilinx/AMD FPGAs. An IP grant makes this a formally sanctioned OPF reference implementation rather than a legal grey area.
POWER ISA licensees building their own chips — including emerging fabs in India, Africa, and Southeast Asia — can add OpenCAPI ports to their designs. The foundry SerDes IP is available from TSMC and Samsung on standard process nodes.
Vendors building inference accelerators, HSMs, and SmartNICs for the POWER ecosystem can implement OpenCAPI endpoints rather than being limited to PCIe. Direct memory-coherent AI inference offload without NVIDIA's fabric.
Open ISA (POWER) + open coherent fabric (OpenCAPI) + open accelerator designs + open firmware (OpenBMC/OpenFSP) = a fully auditable, sovereign AI compute stack with no closed components in the critical path.
While the IP grant process is underway, the existing open-source implementations are available for development and evaluation.
If your organisation depends on open, auditable compute infrastructure — or if you represent one of the founding consortium members — we want to hear from you. The path from archived spec to working open ecosystem starts with this conversation.