High-Performance Computing Cluster
Gigabit Ethernet is typically used for the following three purposes in
high-performance computing cluster (HPCC) applications:

- Inter-Process Communication (IPC): For applications that do not require a
  low-latency, high-bandwidth interconnect (such as Myrinet or InfiniBand),
  Gigabit Ethernet can be used for communication between the compute nodes
  (a socket-level sketch follows this list).
- I/O: Ethernet can be used for file sharing and for serving data to the
  compute nodes, either through a simple NFS server or through parallel file
  systems such as PVFS.
- Management and Administration: Ethernet is used for out-of-band (ERA) and
  in-band (OMSA) management of the nodes in the cluster. It can also be
  used for job scheduling and monitoring.
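
To make the IPC role concrete, the following is a minimal sketch of
point-to-point communication over TCP between two compute nodes. The peer
hostname (node01) and port number are hypothetical placeholders; production
HPCC applications would normally use an MPI library over the interconnect
rather than raw sockets.

```python
# Minimal sketch of point-to-point IPC over TCP/Gigabit Ethernet.
# The hostname and port below are hypothetical; real HPCC codes would
# typically use MPI rather than hand-written sockets.
import socket

PEER_HOST = "node01"   # hypothetical compute-node hostname
PORT = 5555            # hypothetical unprivileged TCP port

def send_message(payload: bytes) -> None:
    """Open a TCP connection to the peer node and send one message."""
    with socket.create_connection((PEER_HOST, PORT)) as conn:
        conn.sendall(payload)

def receive_message() -> bytes:
    """Accept one TCP connection and return the data it delivers."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("", PORT))   # listen on all interfaces
        srv.listen(1)
        conn, _addr = srv.accept()
        with conn:
            chunks = []
            while True:
                data = conn.recv(4096)
                if not data:   # peer closed the connection
                    break
                chunks.append(data)
            return b"".join(chunks)
```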
In our current HPCC offerings, only one of the on-board adapters is used. If
Myrinet or InfiniBand is present, this adapter serves I/O and administration
purposes; otherwise, it is also responsible for IPC. In the case of an adapter
failure, the administrator can use the Felix package to easily configure
adapter 2. Adapter teaming on the host side is neither tested nor supported in
HPCC.
Advanced Features
PXE is used extensively for deployment of the cluster (installation and
recovery of compute nodes). Teaming is typically not used on the host side,
and it is not part of our standard offering. Link aggregation is commonly used
between switches, especially for large configurations. Jumbo frames, although
not part of our standard offering, may improve performance for some
applications because of reduced CPU overhead (a configuration sketch follows).
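
As an illustration of enabling jumbo frames on a Linux compute node, the
following is a minimal sketch. The interface name (eth0) and the 9000-byte
MTU are assumptions; every switch and adapter in the path must also be
configured for jumbo frames, or oversized frames will be dropped.

```python
# Sketch: enable jumbo frames on a Linux node and verify the setting.
# The interface name and MTU value are assumptions; check the adapter's
# documented MTU limits and the switch configuration first.
import subprocess
from pathlib import Path

IFACE = "eth0"        # hypothetical interface name
JUMBO_MTU = 9000      # common jumbo-frame MTU

def set_jumbo_mtu(iface: str = IFACE, mtu: int = JUMBO_MTU) -> None:
    """Raise the interface MTU (requires root privileges)."""
    subprocess.run(
        ["ip", "link", "set", "dev", iface, "mtu", str(mtu)],
        check=True,
    )

def current_mtu(iface: str = IFACE) -> int:
    """Read the active MTU back from sysfs."""
    return int(Path(f"/sys/class/net/{iface}/mtu").read_text())

if __name__ == "__main__":
    set_jumbo_mtu()
    print(f"{IFACE} MTU is now {current_mtu()}")
```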