Feature: Up to 200 Gigabit Ethernet
Description: Mellanox adapters comply with the following IEEE 802.3 standards:
• 200GbE / 100GbE / 50GbE / 40GbE / 25GbE / 10GbE / 1GbE
• IEEE 802.3bj, 802.3bm 100 Gigabit Ethernet
• IEEE 802.3by, Ethernet Consortium 25, 50 Gigabit Ethernet, supporting all FEC modes
• IEEE 802.3ba 40 Gigabit Ethernet
• IEEE 802.3by 25 Gigabit Ethernet
• IEEE 802.3ae 10 Gigabit Ethernet
• IEEE 802.3ap based auto-negotiation and KR startup
• Proprietary Ethernet protocols (20/40GBASE-R2, 50GBASE-R4)
• IEEE 802.3ad, 802.1AX Link Aggregation
• IEEE 802.1Q, 802.1P VLAN tags and priority
• IEEE 802.1Qau (QCN) – Congestion Notification
• IEEE 802.1Qaz (ETS)
• IEEE 802.1Qbb (PFC)
• IEEE 802.1Qbg
• IEEE 1588v2
• Jumbo frame support (9.6KB)
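The jumbo-frame capability above is enabled by raising the interface MTU at the operating-system level. The following minimal C sketch, assuming a Linux host and an illustrative interface name "eth0" (not taken from this manual), reads the current MTU and requests a 9000-byte jumbo MTU through the standard SIOCGIFMTU/SIOCSIFMTU ioctls; setting the MTU normally requires root privileges and a driver that supports the requested size.

/* Hedged example: read and set the interface MTU on Linux.
 * "eth0" and the 9000-byte value are illustrative choices. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct ifreq ifr;
    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);

    /* Read the current MTU. */
    if (ioctl(fd, SIOCGIFMTU, &ifr) == 0)
        printf("current MTU: %d\n", ifr.ifr_mtu);

    /* Request a jumbo MTU (must not exceed what the NIC and driver support). */
    ifr.ifr_mtu = 9000;
    if (ioctl(fd, SIOCSIFMTU, &ifr) != 0)
        perror("SIOCSIFMTU (root privileges and driver support required)");
    else
        printf("MTU set to %d\n", ifr.ifr_mtu);

    close(fd);
    return 0;
}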
Feature: Memory Components
Description:
• EEPROM – The EEPROM capacity is 32Kbit.
• SPI Quad Flash – Includes a 256Mbit SPI Quad Flash device (MX25L25645GXDI-08G device by Macronix).
Feature: Overlay Networks
Description: In order to better scale their networks, data center operators often create overlay networks that carry traffic from individual virtual machines over logical tunnels in encapsulated formats such as NVGRE and VXLAN. While this solves network scalability issues, it hides the TCP packet from the hardware offloading engines, placing higher loads on the host CPU. ConnectX®-6 Dx effectively addresses this by providing advanced NVGRE and VXLAN hardware offloading engines that encapsulate and decapsulate the overlay protocol.
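As a hedged illustration of the encapsulation described above, the C sketch below decodes the 8-byte VXLAN header that sits between the outer UDP payload and the inner Ethernet frame (RFC 7348, UDP destination port 4789). It operates on a caller-supplied buffer with illustrative bytes and makes no assumptions about the adapter's offload engines, which perform this work transparently in hardware.

/* Hedged sketch: decode a VXLAN header (RFC 7348) from an outer UDP payload. */
#include <stdint.h>
#include <stdio.h>

/* 8-byte VXLAN header: flags (bit 0x08 = VNI valid), 24-bit VNI, reserved fields. */
struct vxlan_hdr {
    uint8_t flags;
    uint8_t reserved1[3];
    uint8_t vni[3];
    uint8_t reserved2;
};

/* Returns the VNI, or -1 if the buffer does not look like a VXLAN header. */
static long parse_vxlan(const uint8_t *udp_payload)
{
    const struct vxlan_hdr *h = (const struct vxlan_hdr *)udp_payload;
    if (!(h->flags & 0x08))   /* "I" flag: VNI field is valid */
        return -1;
    /* The inner Ethernet frame (with its own IP/TCP headers) starts right
     * after this header, at udp_payload + sizeof(struct vxlan_hdr). */
    return ((long)h->vni[0] << 16) | ((long)h->vni[1] << 8) | h->vni[2];
}

int main(void)
{
    /* Illustrative payload: flags=0x08, VNI=0x0012AB. */
    uint8_t sample[8] = { 0x08, 0, 0, 0, 0x00, 0x12, 0xAB, 0x00 };
    printf("VNI = %ld\n", parse_vxlan(sample));
    return 0;
}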
Feature: RDMA and RDMA over Converged Ethernet (RoCE)
Description: ConnectX®-6 Dx, utilizing RDMA (Remote Direct Memory Access) and RoCE (RDMA over Converged Ethernet) technology, delivers low-latency and high-performance over Ethernet networks. Leveraging Data Center Bridging (DCB) capabilities as well as ConnectX®-6 Dx advanced congestion control hardware mechanisms, RoCE provides efficient low-latency RDMA services over Layer 2 and Layer 3 networks.
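RoCE traffic is driven from applications through the RDMA verbs API. The minimal C sketch below, assuming a Linux host with the rdma-core (libibverbs) library installed and linked with -libverbs, only enumerates the RDMA devices exposed by the adapter and queries port 1 of each; it is a starting point for a verbs application, not a complete RoCE data path.

/* Hedged sketch: list RDMA devices and query port 1 via libibverbs. */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num = 0;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs) { perror("ibv_get_device_list"); return 1; }

    for (int i = 0; i < num; i++) {
        struct ibv_context *ctx = ibv_open_device(devs[i]);
        if (!ctx)
            continue;
        struct ibv_port_attr port;
        if (ibv_query_port(ctx, 1, &port) == 0)
            printf("%s: port 1 state=%d active_mtu=%d\n",
                   ibv_get_device_name(devs[i]),
                   (int)port.state, (int)port.active_mtu);
        ibv_close_device(ctx);
    }
    ibv_free_device_list(devs);
    return 0;
}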
Feature: Mellanox PeerDirect®
Description: PeerDirect® communication provides high-efficiency RDMA access by eliminating unnecessary internal data copies between components on the PCIe bus (for example, from GPU to CPU), and therefore significantly reduces application run time. ConnectX®-6 Dx advanced acceleration technology enables higher cluster efficiency and scalability to tens of thousands of nodes.
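As a hedged sketch of the zero-copy idea described above, the C program below registers a GPU buffer directly with the adapter using ibv_reg_mr(), so RDMA transfers can target GPU memory without staging through host memory. It assumes CUDA and rdma-core are installed, that a peer-memory kernel module (for example nvidia-peermem) is loaded, and it picks the first RDMA device and GPU 0 purely for illustration; it is not the manual's prescribed procedure.

/* Hedged sketch: register GPU memory for RDMA (PeerDirect/GPUDirect-style).
 * Build/link with: -libverbs -lcudart. Requires a loaded peer-memory module. */
#include <stdio.h>
#include <infiniband/verbs.h>
#include <cuda_runtime_api.h>

int main(void)
{
    int n = 0;
    struct ibv_device **devs = ibv_get_device_list(&n);
    if (!devs || n == 0) { fprintf(stderr, "no RDMA devices found\n"); return 1; }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ctx ? ibv_alloc_pd(ctx) : NULL;
    if (!pd) { fprintf(stderr, "failed to open device / allocate PD\n"); return 1; }

    /* Allocate the buffer in GPU memory instead of host memory. */
    void *gpu_buf = NULL;
    size_t len = 1 << 20;                       /* 1 MiB, illustrative */
    cudaSetDevice(0);
    if (cudaMalloc(&gpu_buf, len) != cudaSuccess) {
        fprintf(stderr, "cudaMalloc failed\n");
        return 1;
    }

    /* Register the GPU buffer with the adapter; with peer-memory support the
     * NIC can then DMA to/from GPU memory with no intermediate CPU copy. */
    struct ibv_mr *mr = ibv_reg_mr(pd, gpu_buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr)
        perror("ibv_reg_mr on GPU memory (is a peer-memory module loaded?)");
    else
        printf("registered GPU buffer: lkey=0x%x rkey=0x%x\n", mr->lkey, mr->rkey);

    if (mr) ibv_dereg_mr(mr);
    cudaFree(gpu_buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}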
Feature: CPU Offload
Description: Adapter functionality that reduces CPU overhead, leaving more CPU cycles available for computation tasks.
Open vSwitch (OVS) offload using ASAP² – Accelerated Switch and Packet Processing®:
• Flexible match-action flow tables (see the conceptual sketch below)
• Tunneling encapsulation/decapsulation
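The flow tables named above reside in adapter hardware and are normally programmed through OVS and kernel offload interfaces rather than directly by applications. Purely as a conceptual sketch, and not the adapter's actual programming interface, the C fragment below models a match-action entry keyed on a 5-tuple so the "match-action" idea is concrete; all field names and values are illustrative.

/* Conceptual model of a match-action flow table; the real tables live in the
 * adapter and are programmed via OVS/kernel offload paths, not this code. */
#include <stdint.h>
#include <stdio.h>

enum action { ACT_FORWARD, ACT_DROP, ACT_DECAP };

struct flow_key {                 /* fields matched on (a simplified 5-tuple) */
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
    uint8_t  proto;
};

struct flow_entry {
    struct flow_key key;
    enum action act;
    uint16_t out_port;            /* used when act == ACT_FORWARD */
};

/* Linear lookup for clarity; hardware performs this match in parallel. */
static const struct flow_entry *lookup(const struct flow_entry *tbl, int n,
                                       const struct flow_key *k)
{
    for (int i = 0; i < n; i++)
        if (tbl[i].key.src_ip == k->src_ip && tbl[i].key.dst_ip == k->dst_ip &&
            tbl[i].key.src_port == k->src_port &&
            tbl[i].key.dst_port == k->dst_port && tbl[i].key.proto == k->proto)
            return &tbl[i];
    return NULL;                  /* miss: the packet falls back to the CPU slow path */
}

int main(void)
{
    struct flow_entry tbl[] = {
        { { 0x0a000001, 0x0a000002, 12345, 80, 6 }, ACT_FORWARD, 1 },
    };
    struct flow_key pkt = { 0x0a000001, 0x0a000002, 12345, 80, 6 };
    const struct flow_entry *e = lookup(tbl, 1, &pkt);
    if (e)
        printf("hit: action=%d out_port=%u\n", (int)e->act, e->out_port);
    else
        printf("miss: handled by the host CPU slow path\n");
    return 0;
}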
Feature: Quality of Service (QoS)
Description: Support for port-based Quality of Service, enabling various application requirements for latency and SLA.
Feature: Hardware-based I/O Virtualization
Description: ConnectX®-6 Dx provides dedicated adapter resources and guaranteed isolation and protection for virtual machines within the server.
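Hardware-based I/O virtualization on this class of adapter is typically exposed to the host as SR-IOV virtual functions. As a hedged sketch, assuming a Linux host and an illustrative interface name "eth0", the C program below writes the requested VF count to the standard sriov_numvfs sysfs attribute; the actual interface name and the maximum VF count depend on the system and on the adapter's firmware configuration, and root privileges are required.

/* Hedged sketch: request SR-IOV virtual functions through Linux sysfs.
 * "eth0" and the VF count of 4 are illustrative values. */
#include <stdio.h>

int main(void)
{
    const char *path = "/sys/class/net/eth0/device/sriov_numvfs";
    FILE *f = fopen(path, "w");
    if (!f) { perror(path); return 1; }

    /* Ask the driver to create 4 virtual functions. If VFs already exist,
     * write 0 first, since the count cannot be changed while it is non-zero. */
    if (fprintf(f, "4\n") < 0)
        perror("write sriov_numvfs");

    fclose(f);
    return 0;
}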