Rev 1.1
Mellanox Technologies

1 Overview
The TX6100 MetroX™ series extends InfiniBand from a single-location data center network to a high-performance interconnect technology for local, campus, and even metro applications.
Data centers, compute clusters and supercomputers are overwhelmed by unprecedented growth
in data volume, fueled by strong application and technology trends.
While InfiniBand products have traditionally been deployed for their high-performance interconnect benefits within the data center, Mellanox MetroX switches implement long-haul InfiniBand, connecting data centers deployed across multiple geographically distributed sites and extending the world-leading interconnect benefits of InfiniBand beyond local data centers and storage clusters.
Mellanox’s MetroX is an ideal cost-effective, low-power, easily managed, and scalable solution that enables today’s data centers and storage to run over local and distributed InfiniBand fabrics, managed as a single unified network infrastructure.
MetroX switches, which implement long-haul InfiniBand, can transfer data over distances of up to 10km. The switches enable aggregated data and storage networking over a single, consolidated InfiniBand fabric. Long-haul InfiniBand technology guarantees high-performance, high-volume data sharing between distant sites, enabling expansion of existing data centers, disaster recovery, data mirroring, and campus connectivity.
MetroX enables a campus network to assemble large aggregate clusters, all connected and easily managed by an InfiniBand Subnet Manager: an embedded manager, OpenSM, or Mellanox’s Unified Fabric Manager™ (UFM™).
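As context for the management options above, the following is a minimal sketch of running a host-based OpenSM instance and inspecting a fabric that spans a long-haul link. The command names come from the standard opensm and infiniband-diags packages; the exact flags and output depend on the installed OFED stack, and none of the invocations below are taken from this manual.

```shell
# Start OpenSM in background (daemon) mode on one host, so a single
# subnet manager covers the whole distributed fabric:
opensm -B

# Show which subnet manager is currently active on the subnet:
sminfo

# List the hosts and switch-to-switch links visible across the fabric,
# including nodes on the far side of the long-haul connection:
ibhosts
iblinkinfo
```

In a UFM-managed deployment the same discovery and routing role is taken by UFM instead of a standalone OpenSM process.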
MetroX extends the RDMA capabilities of the InfiniBand protocol beyond the local cluster, delivering RDMA benefits to data centers and storage clusters alike.
The switch comes pre-installed with all necessary firmware and is configured for standard operation within an InfiniBand fabric. See Section 9 on page 49 for more information.
Installation, hot-swapping of components, and hardware maintenance are covered in “Basic Operation” on page 12.
Figure 1: Connector Side View of the Switch
1.1 Features

1.1.1 Full Feature List
• 6 FDR10 (40Gb/s) QSFP long-haul ports (aggregate data throughput up to 240Gb/s)
• 6 FDR (56Gb/s) QSFP downlink ports (aggregate data throughput up to 336Gb/s)
(Figure 1 panel labels: MT6100; CONSOLE and MGT ports; ports 1–12; long-haul ports EXT 1–6; RST and UID buttons; power supplies PS1 and PS2.)