Feature
Description
Overlay Networks
To better scale their networks, data center operators often create overlay
networks that carry traffic from individual virtual machines over logical tunnels in
encapsulated formats such as NVGRE and VXLAN. While this solves network scalability
issues, it hides the TCP packet from the hardware offloading engines, placing higher
loads on the host CPU. ConnectX®-6 Dx effectively addresses this by providing
advanced NVGRE and VXLAN hardware offloading engines that encapsulate and
decapsulate the overlay protocol.
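As a point of reference, the following C sketch (not taken from the product documentation) shows the 8-byte VXLAN header defined in RFC 7348; this is the outer encapsulation that the offload engines add on transmit and strip on receive, together with the outer Ethernet/IP/UDP headers.

    #include <stdint.h>

    /* Illustrative VXLAN header layout per RFC 7348. */
    struct vxlan_hdr {
        uint8_t flags;          /* bit 0x08 set when a valid VNI is carried */
        uint8_t reserved1[3];
        uint8_t vni[3];         /* 24-bit VXLAN Network Identifier */
        uint8_t reserved2;
    };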
RDMA and RDMA over Converged Ethernet (RoCE)
ConnectX®-6 Dx, utilizing RDMA (Remote Direct Memory Access) and RoCE (RDMA over
Converged Ethernet) technology, delivers low-latency and high-performance over
InfiniBand and Ethernet networks. Leveraging data center bridging (DCB) capabilities as
well as ConnectX®-6 Dx advanced congestion control hardware mechanisms, RoCE
provides efficient low-latency RDMA services over Layer 2 and Layer 3 networks.
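For illustration, the hedged C sketch below uses the standard libibverbs API (not a ConnectX-specific interface) to post a one-sided RDMA WRITE over RoCE. It assumes a connected RC queue pair, a registered local buffer, and that the peer's virtual address and rkey were exchanged out of band.

    #include <infiniband/verbs.h>
    #include <stdint.h>
    #include <string.h>

    /* Post an RDMA WRITE: data moves directly into remote memory
       with no involvement of the remote CPU. */
    int rdma_write(struct ibv_qp *qp, struct ibv_mr *mr,
                   uint64_t remote_addr, uint32_t rkey)
    {
        struct ibv_sge sge = {
            .addr   = (uintptr_t)mr->addr,
            .length = (uint32_t)mr->length,
            .lkey   = mr->lkey,
        };
        struct ibv_send_wr wr, *bad_wr = NULL;

        memset(&wr, 0, sizeof(wr));
        wr.opcode              = IBV_WR_RDMA_WRITE;
        wr.sg_list             = &sge;
        wr.num_sge             = 1;
        wr.send_flags          = IBV_SEND_SIGNALED;
        wr.wr.rdma.remote_addr = remote_addr;
        wr.wr.rdma.rkey        = rkey;

        return ibv_post_send(qp, &wr, &bad_wr);
    }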
NVIDIA PeerDirect®
PeerDirect® communication provides high-efficiency RDMA access by eliminating
unnecessary internal data copies between components on the PCIe bus (for example,
from GPU to CPU), and therefore significantly reduces application run time.
ConnectX®-6 Dx advanced acceleration technology enables higher cluster efficiency
and scalability to tens of thousands of nodes.
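As an illustration of the usage pattern (assuming a CUDA-capable GPU and a loaded peer-memory kernel module such as nvidia-peermem; the function name is illustrative, not from this manual), the C sketch below registers a GPU buffer directly with the RDMA device so that transfers bypass host memory.

    #include <cuda_runtime.h>
    #include <infiniband/verbs.h>

    /* Register device (GPU) memory for direct RDMA access.
       Assumes 'pd' is an open protection domain and peer-memory
       support is present in the kernel. */
    struct ibv_mr *register_gpu_buffer(struct ibv_pd *pd, size_t len)
    {
        void *gpu_buf = NULL;
        if (cudaMalloc(&gpu_buf, len) != cudaSuccess)
            return NULL;
        /* The adapter can then DMA to and from GPU memory without
           staging copies through host RAM. */
        return ibv_reg_mr(pd, gpu_buf, len,
                          IBV_ACCESS_LOCAL_WRITE |
                          IBV_ACCESS_REMOTE_WRITE |
                          IBV_ACCESS_REMOTE_READ);
    }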
CPU Offload
Adapter functionality enables reduced CPU overhead, leaving more CPU resources
available for computation tasks.
Open vSwitch (OVS) offload using ASAP²™:
• Flexible match-action flow tables
• Tunneling encapsulation/decapsulation
Quality of Service (QoS)
Support for port-based Quality of Service, enabling the adapter to meet a range of
application requirements for latency and SLA.
Hardware-based I/O Virtualization
ConnectX®-6 Dx provides dedicated adapter resources and guaranteed isolation and
protection for virtual machines within the server.
Storage Acceleration
A consolidated compute and storage network achieves significant cost-performance
advantages over multi-fabric networks. Standard block and file access protocols can
leverage RDMA for high-performance storage access.
• NVMe over Fabric offloads for the target machine
SR-IOV
ConnectX®-6 Dx SR-IOV technology provides dedicated adapter resources and
guaranteed isolation and protection for virtual machines (VM) within the server.
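A common way to provision VFs from the host is the generic Linux sysfs interface, sketched below in C (not adapter-specific; the PCI address passed in is illustrative).

    #include <stdio.h>

    /* Write the desired VF count to the device's sriov_numvfs attribute,
       e.g. enable_vfs("0000:3b:00.0", 8). */
    int enable_vfs(const char *pci_addr, int num_vfs)
    {
        char path[256];
        snprintf(path, sizeof(path),
                 "/sys/bus/pci/devices/%s/sriov_numvfs", pci_addr);

        FILE *f = fopen(path, "w");
        if (!f)
            return -1;
        fprintf(f, "%d\n", num_vfs);
        return fclose(f);
    }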
NC-SI
The adapter supports a Network Controller Sideband Interface (NC-SI), MCTP over
SMBus, and MCTP over PCIe to the Baseboard Management Controller (BMC) interface.
High-Performance Accelerations
• Tag Matching and Rendezvous Offloads
• Adaptive Routing on Reliable Transport
• Burst Buffer Offloads for Background Checkpointing
Host Management
NVIDIA host management sideband implementations enable remote monitoring and
control capabilities using RBT, MCTP over SMBus, and MCTP over PCIe to the Baseboard
Management Controller (BMC) interface, supporting both NC-SI and PLDM management
protocols using these interfaces. NVIDIA OCP 3.0 adapters support these protocols to
offer numerous Host Management features such as PLDM for Firmware Update,
network boot in UEFI driver, UEFI secure boot, and more.
Secure Boot
Hardware Root-of-Trust (RoT) Secure Boot and secure firmware update using RSA
cryptography, with cloning protection via a device-unique secret key.
Crypto
IPsec and TLS data-in-motion inline encryption and decryption offload, and
AES-XTS block-level data-at-rest encryption and decryption offload.
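As a software analogue of the AES-XTS data-at-rest operation that the adapter can perform inline, the hedged C sketch below encrypts one logical block with OpenSSL; the 64-byte key (two concatenated AES-256 keys) and the use of the block number as the 16-byte tweak are conventions of the example, not adapter configuration details.

    #include <openssl/evp.h>

    /* Encrypt one block with AES-256-XTS (illustrative only). */
    int xts_encrypt_block(const unsigned char key[64],
                          const unsigned char tweak[16],
                          const unsigned char *in, unsigned char *out, int len)
    {
        EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
        int outl = 0, rc = -1;

        if (ctx &&
            EVP_EncryptInit_ex(ctx, EVP_aes_256_xts(), NULL, key, tweak) == 1 &&
            EVP_EncryptUpdate(ctx, out, &outl, in, len) == 1)
            rc = 0;

        EVP_CIPHER_CTX_free(ctx);
        return rc;
    }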