1.2 Features and Benefits
Table 4 - Features
100Gb/s Virtual Protocol Interconnect (VPI) Adapter
ConnectX-4 is the highest-throughput VPI adapter, supporting EDR 100Gb/s InfiniBand and 100Gb/s Ethernet and enabling any standard networking, clustering, or storage protocol to operate seamlessly over any converged network leveraging a consolidated software stack.
InfiniBand Architecture Specification v1.3 compliant
ConnectX-4 delivers low latency, high bandwidth, and computing efficiency for performance-driven server and storage clustering applications, and is compliant with the InfiniBand Architecture Specification v1.3.
PCI Express (PCIe)
Uses PCIe Gen 3.0 (1.1 and 2.0 compatible) through an x16 edge connector, at up to 8GT/s per lane.
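As a point of reference, a minimal Python sketch (an illustration, not part of this manual) of the theoretical x16 throughput these figures imply; the 128b/130b line encoding is a property of the PCIe Gen 3.0 specification assumed here rather than stated in this table:

    # Theoretical PCIe Gen 3.0 x16 bandwidth implied by the figures above.
    lanes = 16
    rate = 8e9               # 8GT/s per lane (Gen 3.0)
    encoding = 128 / 130     # 128b/130b line code (PCIe Gen 3.0 spec, assumed)

    bytes_per_s = lanes * rate * encoding / 8
    print(f"{bytes_per_s / 1e9:.2f} GB/s per direction")  # ~15.75 GB/s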
Up to 100 Gigabit Ethernet
Mellanox adapters comply with the following IEEE 802.3 standards:
– 100GbE / 50GbE / 40GbE / 25GbE / 10GbE / 1GbE
– IEEE 802.3bj, 802.3bm 100 Gigabit Ethernet
– 25G Ethernet Consortium 25/50 Gigabit Ethernet
– IEEE 802.3ba 40 Gigabit Ethernet
– IEEE 802.3by 25 Gigabit Ethernet
– IEEE 802.3ae 10 Gigabit Ethernet
– IEEE 802.3az Energy Efficient Ethernet
– IEEE 802.3ap based auto-negotiation and KR startup
– IEEE 802.3ad, 802.1AX Link Aggregation
– IEEE 802.1Q, 802.1P VLAN tags and priority
– IEEE 802.1Qau (QCN) – Congestion Notification
– IEEE 802.1Qaz (ETS)
– IEEE 802.1Qbb (PFC)
– IEEE 802.1Qbg
– IEEE 1588v2
– Jumbo frame support (9.6KB)
InfiniBand EDR
A standard InfiniBand data rate, where each lane of a 4X port runs at a bit rate of 25.78125Gb/s with 64b/66b encoding, resulting in an effective bandwidth of 100Gb/s.
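The quoted 100Gb/s follows directly from these per-lane figures; as a quick check, a minimal Python sketch (an illustration, not part of this manual):

    # Effective InfiniBand EDR 4X bandwidth from the per-lane figures above.
    lanes = 4
    lane_rate = 25.78125e9   # bit rate per lane, b/s
    encoding = 64 / 66       # 64b/66b line code

    effective = lanes * lane_rate * encoding
    print(f"{effective / 1e9:.0f} Gb/s")  # 100 Gb/s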
Memory
PCI Express - stores and accesses InfiniBand and/or Ethernet fabric connection
information and packet data.
SPI - includes one 16Mb SPI Flash device (M25PX16-VMN6P by STMicroelectronics)
EEPROM capacity is 128Kb.
Overlay Networks
In order to better scale their networks, data center operators often create overlay
networks that carry traffic from individual virtual machines over logical tunnels
in encapsulated formats such as NVGRE and VXLAN. While this solves network
scalability issues, it hides the TCP packet from the hardware offloading engines,
placing higher loads on the host CPU. ConnectX-4 effectively addresses this by
providing advanced NVGRE and VXLAN hardware offloading engines that
encapsulate and decapsulate the overlay protocol.
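For illustration, a minimal Python sketch of the VXLAN encapsulation step that these engines perform in hardware; the 8-byte header layout comes from RFC 7348, and the inner frame below is a placeholder, not anything specific to ConnectX-4:

    import struct

    def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
        """Prepend the 8-byte VXLAN header (RFC 7348) to an inner L2 frame.

        Layout: flags byte (0x08 = valid-VNI bit), 3 reserved bytes,
        24-bit VNI, 1 reserved byte. In practice this payload is then
        wrapped in an outer UDP/IP/Ethernet envelope; ConnectX-4 offloads
        that wrapping and unwrapping to hardware.
        """
        assert 0 <= vni < 2**24
        header = struct.pack("!B3xI", 0x08, vni << 8)
        return header + inner_frame

    # Placeholder inner Ethernet frame, for illustration only.
    inner = b"\x00" * 60
    print(vxlan_encapsulate(inner, vni=42)[:8].hex())  # 0800000000002a00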