CPU Offload
Adapter functionality that offloads processing from the host CPU, reducing CPU overhead and leaving more CPU cycles available for computation tasks.
Open vSwitch (OVS) offload using ASAP²™ (an example sketch follows this entry):
• Flexible match-action flow tables
• Tunneling encapsulation/decapsulation
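The following Linux commands are an illustrative sketch only (they are not part of this table) of how OVS hardware offload is commonly enabled on ConnectX-class adapters; the PCI address 0000:03:00.0, the interface name enp3s0f0, and the service name are placeholders that depend on the system and installed driver package.
    # Put the embedded switch into switchdev mode (PCI address is an example)
    devlink dev eswitch set pci/0000:03:00.0 mode switchdev
    # Enable TC flower hardware offload on the uplink (interface name is an example)
    ethtool -K enp3s0f0 hw-tc-offload on
    # Instruct Open vSwitch to offload eligible flows to the NIC, then restart the service
    ovs-vsctl set Open_vSwitch . other_config:hw-offload=true
    systemctl restart openvswitch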
Quality of Service (QoS)
Support for port-based Quality of Service, addressing varied application requirements for latency and SLA.
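As an illustrative sketch only, assuming the mlnx_qos utility shipped with the NVIDIA/Mellanox driver package is installed (the interface name enp3s0f0 is a placeholder), the port QoS state can be inspected and the trust mode changed as follows:
    # Display the current per-port QoS configuration (priorities, ETS, trust mode)
    mlnx_qos -i enp3s0f0
    # Map incoming traffic to priorities based on DSCP instead of PCP
    mlnx_qos -i enp3s0f0 --trust dscp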
Hardware-based I/O Virtualization
ConnectX-5 provides dedicated adapter resources and guaranteed isolation and protection for virtual machines within the server.
Storage Acceleration
A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks. Standard block and file access protocols can leverage RDMA for high-performance storage access (a target-side configuration sketch follows this entry).
• NVMe over Fabrics offloads for the target machine
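For illustration only, and independent of any hardware offload, the sketch below sets up a standard Linux NVMe over Fabrics target over RDMA using the upstream nvmet configfs interface; the subsystem name testnqn, the backing device /dev/nvme0n1, and the address 192.168.1.10 are placeholders, and enabling the ConnectX-5 target offload itself is driver-specific and not shown here.
    # Load the NVMe target modules with RDMA transport support
    modprobe nvmet nvmet-rdma
    # Create a subsystem and allow any host to connect (for the sake of the example)
    mkdir /sys/kernel/config/nvmet/subsystems/testnqn
    echo 1 > /sys/kernel/config/nvmet/subsystems/testnqn/attr_allow_any_host
    # Expose a namespace backed by a local NVMe device (device path is an example)
    mkdir /sys/kernel/config/nvmet/subsystems/testnqn/namespaces/1
    echo -n /dev/nvme0n1 > /sys/kernel/config/nvmet/subsystems/testnqn/namespaces/1/device_path
    echo 1 > /sys/kernel/config/nvmet/subsystems/testnqn/namespaces/1/enable
    # Create an RDMA port on the adapter's IP address and bind the subsystem to it
    mkdir /sys/kernel/config/nvmet/ports/1
    echo rdma > /sys/kernel/config/nvmet/ports/1/addr_trtype
    echo ipv4 > /sys/kernel/config/nvmet/ports/1/addr_adrfam
    echo 192.168.1.10 > /sys/kernel/config/nvmet/ports/1/addr_traddr
    echo 4420 > /sys/kernel/config/nvmet/ports/1/addr_trsvcid
    ln -s /sys/kernel/config/nvmet/subsystems/testnqn /sys/kernel/config/nvmet/ports/1/subsystems/testnqn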
SR-IOV
ConnectX-5 SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server.
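The commands below are an illustrative sketch only of enabling SR-IOV virtual functions on Linux; the MST device path /dev/mst/mt4119_pciconf0, the interface name enp3s0f0, and the VF counts are placeholders, and the mlxconfig change requires a firmware reset or reboot to take effect.
    # Enable SR-IOV and set the maximum number of VFs in firmware (device path is an example)
    mlxconfig -d /dev/mst/mt4119_pciconf0 set SRIOV_EN=1 NUM_OF_VFS=8
    # After a reboot, instantiate four virtual functions on the physical port
    echo 4 > /sys/class/net/enp3s0f0/device/sriov_numvfs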
NC-SI over RMII
The adapter supports a slave Network Controller Sideband Interface (NC-SI) that can be connected to a BMC.
High-Performance Accelerations
• Tag Matching and Rendezvous Offloads
• Adaptive Routing on Reliable Transport
• Burst Buffer Offloads for Background Checkpointing
Host Management Technology
NVIDIA’s host management technology for standard and multi-host platforms optimizes board management and power, performance, and firmware update management via NC-SI, MCTP over SMBus, and MCTP over PCIe, as well as PLDM for Platform Monitoring and Control (DSP0248) and PLDM for Firmware Update (DSP0267).
Multi-Host Technology
NVIDIA Multi-Host™ technology, when enabled, allows multiple hosts to be connected to a single adapter by separating the PCIe interface into multiple independent interfaces. By using NVIDIA Multi-Host™, ConnectX-5 lowers the total cost of ownership (TCO) in the data center by reducing CAPEX (cables, NICs, and switch port expenses) and by reducing OPEX through fewer switch ports to manage and lower overall power usage. With NVIDIA Multi-Host™ technology, powered by a shared buffer architecture, connection-tracking offloads, and RoCE enhancements, ConnectX-5 offers an extremely flexible solution for today’s demanding data center and cloud applications.
NVIDIA Socket Direct™
NVIDIA Socket Direct™ technology brings improved performance to multi-socket servers by enabling direct access from each CPU in a multi-socket server to the network through its dedicated PCIe interface. With this type of configuration, each CPU connects directly to the network; this allows traffic to bypass the inter-CPU interconnect (QPI/UPI) and the other CPU, optimizing performance and improving latency. CPU utilization improves as each CPU handles only its own traffic and not the traffic from the other CPU. NVIDIA’s OCP 3.0 cards include native support for Socket Direct technology for multi-socket servers and can support up to 4 CPU sockets.
Wake-on-LAN (WoL)
Supported
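For illustration only, on Linux the Wake-on-LAN state can typically be queried and enabled with ethtool (the interface name enp3s0f0 is a placeholder; WoL support also depends on the specific card model and system firmware settings):
    # Show the supported and currently enabled Wake-on modes
    ethtool enp3s0f0 | grep Wake-on
    # Enable wake on magic packet
    ethtool -s enp3s0f0 wol g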