Rev 1.1
Mellanox Technologies Confidential
CPU Offload
Adapter functionality that reduces CPU overhead, leaving more CPU cycles available for computation tasks.
• Flexible match-action flow tables
• Open vSwitch (OVS) offload using ASAP² - Accelerated Switch and Packet Processing®
• Tunneling encapsulation/decapsulation
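The OVS offload above relies on the NIC's embedded switch. As a rough sketch of how it is typically enabled on Linux, assuming a hypothetical PCI address and a host with Open vSwitch installed (the exact steps vary by driver and distribution):

```shell
# Illustrative sketch only; the PCI address 0000:03:00.0 is a placeholder (assumption).
# Put the NIC's embedded switch into switchdev mode so flows can be offloaded to hardware
devlink dev eswitch set pci/0000:03:00.0 mode switchdev

# Tell Open vSwitch to push matching flows down to the hardware flow tables
ovs-vsctl set Open_vSwitch . other_config:hw-offload=true

# Restart the OVS daemon so the setting takes effect
systemctl restart openvswitch
```

Once enabled, OVS programs the hardware match-action tables, and matching packets are forwarded by the NIC without traversing the host CPU.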
Quality of Service (QoS)
Support for port-based Quality of Service, enabling applications with varying latency and SLA requirements to share the port.
Hardware-based I/O Virtualization
ConnectX-6 provides dedicated adapter resources and guaranteed isolation and protection
for virtual machines within the server.
Storage Acceleration
A consolidated compute and storage network achieves significant cost-performance
advantages over multi-fabric networks. Standard block and file access protocols can leverage:
• RDMA for high-performance storage access
• NVMe over Fabrics offloads for the target machine
• Erasure Coding
• T10-DIF Signature Handover
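On the host side, RDMA-based storage access is commonly exercised through NVMe over Fabrics with the standard nvme-cli tool. The commands below are a hedged illustration; the target address, port, and subsystem NQN are placeholders, not values from this document:

```shell
# Illustrative configuration fragment; 192.168.1.10 and the NQN are assumptions.
# Load the RDMA transport for NVMe over Fabrics
modprobe nvme-rdma

# Discover NVMe-oF subsystems exposed by a target
nvme discover -t rdma -a 192.168.1.10 -s 4420

# Connect to a discovered subsystem by its NQN; it then appears as a local /dev/nvmeXnY device
nvme connect -t rdma -n nqn.2016-06.io.example:subsys1 -a 192.168.1.10 -s 4420
```

With the NVMe-oF target offload listed above, the target-side data path for such connections can be handled by the adapter rather than the target's CPU.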
SR-IOV
ConnectX-6 SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server.
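On Linux, SR-IOV virtual functions are typically created through the kernel's standard sysfs interface. A minimal sketch, assuming a hypothetical interface name (`enp3s0f0`) and that SR-IOV is enabled in firmware and BIOS:

```shell
# Illustrative sketch; the interface name and VF count are assumptions.
# Check how many virtual functions the physical function supports
cat /sys/class/net/enp3s0f0/device/sriov_totalvfs

# Create 4 virtual functions on the physical function
echo 4 > /sys/class/net/enp3s0f0/device/sriov_numvfs

# The new VFs appear as additional PCI functions that can be passed through to VMs
lspci | grep -i mellanox
```

Each VF can then be assigned to a VM (e.g., via PCI passthrough), giving it dedicated, isolated adapter resources as described above.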
High-Performance Accelerations
• Tag Matching and Rendezvous Offloads
• Adaptive Routing on Reliable Transport
• Burst Buffer Offloads for Background Checkpointing
1.3.1 Operating Systems/Distributions
Note: ConnectX-6 Socket Direct cards 2x PCIe x16 (OPNs 1GK7G and CY7GD) are not supported in Windows and WinOF-2.
• RHEL/CentOS
• Windows
• SLES
• OpenFabrics Enterprise Distribution (OFED)
• OpenFabrics Windows Distribution (WinOF-2)
1.3.2 Connectivity
• Interoperable with 1/10/25/40/50/100/200 Gb/s Ethernet switches
• Passive copper cable with ESD protection
• Powered connectors for optical and active cable support
1.3.3 Manageability
ConnectX-6 technology maintains support for manageability through a BMC. The ConnectX-6 adapter card can be connected to a BMC using the MCTP over SMBus or MCTP over PCIe protocols, as with any standard Mellanox adapter card.