1.2 Features and Benefits
Table 4 - Features
100Gb/s Virtual Protocol Interconnect (VPI) Adapter
ConnectX-5 delivers the highest throughput of any VPI adapter, supporting EDR 100Gb/s InfiniBand and 100Gb/s Ethernet, and enables any standard networking, clustering, or storage protocol to operate seamlessly over any converged network leveraging a consolidated software stack.
InfiniBand Architecture Specification v1.3 compliant
ConnectX-5 delivers low latency, high bandwidth, and computing efficiency for performance-driven server and storage clustering applications. ConnectX-5 is compliant with InfiniBand Architecture Specification v1.3.
PCI Express (PCIe)
Uses PCIe Gen 3.0 (8 GT/s) and Gen 4.0 (16 GT/s) through an x16 edge connector; backward compatible with Gen 1.1 and Gen 2.0.
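As an illustration beyond the table itself, the link actually negotiated by an installed adapter can be read from Linux sysfs. A minimal sketch, assuming the standard current_link_speed/current_link_width attributes are present and using a placeholder PCI address:

```python
# Minimal sketch (illustrative, not from the manual): read the negotiated PCIe
# link parameters of an adapter from Linux sysfs.
from pathlib import Path

BDF = "0000:5e:00.0"  # placeholder PCI address; use the one reported by lspci
dev = Path("/sys/bus/pci/devices") / BDF

# Gen 3.0 negotiates "8.0 GT/s", Gen 4.0 negotiates "16.0 GT/s"; a full-width
# x16 edge connector should report a link width of 16.
speed = (dev / "current_link_speed").read_text().strip()
width = (dev / "current_link_width").read_text().strip()
print(f"PCIe link: {speed}, x{width}")
```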
Up to 100 Gigabit Ethernet
Mellanox adapters comply with the following IEEE 802.3 standards:
– 100GbE / 50GbE / 40GbE / 25GbE / 10GbE / 1GbE
– IEEE 802.3bj, 802.3bm 100 Gigabit Ethernet
– IEEE 802.3by, Ethernet Consortium 25/50 Gigabit Ethernet, supporting all FEC modes
– IEEE 802.3ba 40 Gigabit Ethernet
– IEEE 802.3ae 10 Gigabit Ethernet
– IEEE 802.3ap based auto-negotiation and KR startup
– Proprietary Ethernet protocols (20/40GBASE-R2, 50GBASE-R4)
– IEEE 802.3ad, 802.1AX Link Aggregation
– IEEE 802.1Q, 802.1P VLAN tags and priority
– IEEE 802.1Qau (QCN) – Congestion Notification
– IEEE 802.1Qaz (ETS)
– IEEE 802.1Qbb (PFC)
– IEEE 802.1Qbg
– IEEE 1588v2
– Jumbo frame support (9.6KB)
InfiniBand EDR
A standard InfiniBand data rate, where each lane of a 4X port runs a bit rate of 25.78125Gb/s with a 64b/66b encoding, resulting in an effective bandwidth of 100Gb/s.
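As a worked check of this figure (not part of the original table): 4 lanes × 25.78125 Gb/s × 64/66 = 100 Gb/s. The short sketch below simply reproduces that arithmetic.

```python
# Worked example: effective EDR bandwidth of a 4X port.
# Each lane signals at 25.78125 Gb/s; 64b/66b encoding carries 64 payload bits
# for every 66 bits on the wire.
lanes = 4
lane_rate_gbps = 25.78125
encoding_efficiency = 64 / 66

effective_gbps = lanes * lane_rate_gbps * encoding_efficiency
print(f"Effective bandwidth: {effective_gbps:.0f} Gb/s")  # -> 100 Gb/s
```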
Memory
– PCI Express - stores and accesses InfiniBand and/or Ethernet fabric connection information and packet data.
– SPI Quad - includes a 128Mbit SPI Quad Flash device (W25Q128FVSIG device by Winbond).
– VPD EEPROM - the EEPROM capacity is 128Kbit.
Overlay Networks
In order to better scale their networks, data center operators often create overlay networks that carry traffic from individual virtual machines over logical tunnels in encapsulated formats such as NVGRE and VXLAN. While this solves network scalability issues, it hides the TCP packet from the hardware offloading engines, placing higher loads on the host CPU. ConnectX-5 effectively addresses this by providing advanced NVGRE and VXLAN hardware offloading engines that encapsulate and de-capsulate the overlay protocol.
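To make the encapsulation concrete, here is an illustrative sketch (not taken from the manual) that builds the 8-byte VXLAN header defined in RFC 7348 and prepends it to an already-serialized inner Ethernet frame. In a real packet this payload sits inside an outer Ethernet/IP/UDP envelope (UDP destination port 4789), which is exactly what hides the inner TCP segment from NICs that lack overlay offloads.

```python
import struct

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend an RFC 7348 VXLAN header to an inner Ethernet frame.

    Header layout (8 bytes): flags (8 bits, 'I' bit set), 24 reserved bits,
    VNI (24 bits), 8 reserved bits.
    """
    flags = 0x08  # 'I' flag: the VNI field is valid
    header = struct.pack("!B3xI", flags, vni << 8)  # VNI in the upper 24 bits
    return header + inner_frame

# Hypothetical inner frame (would normally be a full Ethernet/IP/TCP packet).
inner = b"\x00" * 64
encapsulated = vxlan_encapsulate(inner, vni=5000)
# The outer UDP/IP headers (UDP dst port 4789) would wrap this payload, so a
# NIC without VXLAN offload only sees the outer UDP flow, not the inner TCP.
print(len(encapsulated))  # 8-byte header + inner frame
```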