
Datasheet

© 2006 QLogic Corporation. All rights reserved. QLogic, the QLogic logo, and InfiniPath are trademarks or registered trademarks of QLogic Corporation. Other trademarks are the property of their respective owners.

SN0058044-00 Rev E 11/06

Corporate Headquarters

    QLogic Corporation    26650 Aliso Viejo Parkway    Aliso Viejo, CA 92656    949.389.6000

Europe Headquarters

    QLogic (UK) LTD.    Surrey Technology Centre    40 Occam Road Guildford    Surrey GU2 7YG UK    +44 (0)1483 295825

InfiniPath QLE7140

 

PCI Express Interface

•  PCIe v1.1 x8 compliant

•  PCIe slot compliant (fits into x8 or x16 slot)

 

Connectivity

•  Single InfiniBand 4X port (10+10 Gbps), copper

•  External fiber optic media adapter module support

•  Compatible with InfiniBand switches from Cisco®, SilverStorm™, Mellanox®, Microway, and Voltaire®

•  Interoperable with host channel adapters (HCAs) from Cisco, SilverStorm, Mellanox, and Voltaire running the OpenIB software stack

 

Host Driver/Upper Level Protocol (ULP) Support

•  MPICH version 1.2.6 with MPI 2.0 ROMIO I/O (a ping-pong latency sketch follows this list)

•  TCP, NFS, UDP, SOCKETS through Ethernet driver emulation

•  Optimized MPI protocol stack supplied

•  32- and 64-bit application ready

•  IPoIB, SDP, UDP using OpenIB stack
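The MPI figures referenced in footnote 1 come from a ping-pong style measurement. As a rough illustration only, not the OSU benchmark itself, and with an arbitrary message size and iteration count chosen for this sketch, a minimal two-rank ping-pong timer written against the standard MPI API might look like this:

    /* Minimal MPI ping-pong latency sketch (illustrative only; not the OSU
     * benchmark). Compile with an MPI compiler wrapper such as mpicc and
     * run with two ranks on two nodes. Message size and iteration count
     * are arbitrary choices for this example. */
    #include <mpi.h>
    #include <stdio.h>

    #define ITERATIONS 1000
    #define MSG_BYTES  8   /* small message to expose latency */

    int main(int argc, char **argv)
    {
        int rank, size, i;
        char buf[MSG_BYTES] = {0};
        double start, elapsed;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (size < 2) {
            if (rank == 0) fprintf(stderr, "run with at least 2 ranks\n");
            MPI_Finalize();
            return 1;
        }

        MPI_Barrier(MPI_COMM_WORLD);
        start = MPI_Wtime();

        for (i = 0; i < ITERATIONS; i++) {
            if (rank == 0) {
                MPI_Send(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &status);
            } else if (rank == 1) {
                MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &status);
                MPI_Send(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }

        elapsed = MPI_Wtime() - start;
        if (rank == 0) {
            /* one-way latency = half the average round-trip time */
            printf("average one-way latency: %.2f us\n",
                   elapsed / ITERATIONS / 2.0 * 1e6);
        }

        MPI_Finalize();
        return 0;
    }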

 

InfiniPath Interfaces and Specifications

•  4X speed (10+10 Gbps)

•  Uses standard IBTA 1.2 compliant fabric and cables; link layer compatible

•  Configurable MTU size (4096 maximum)

•  Integrated SerDes

 

Management Support

•  Includes InfiniBand 1.1 compliant SMA (Subnet Management Agent)

•  Interoperable with management solutions from Cisco, SilverStorm, and Voltaire

•  OpenSM

 

Regulatory Compliance

•  FCC Part 15, Subpart B, Class A

•  ICES-003, Class A

•  EN 55022, Class A

•  VCCI V-3/2004.4, Class A

 

Operating Environments

•  Red Hat Enterprise Linux 4.x

•  SUSE Linux 9.3 & 10.0

•  Fedora Core 3 & Fedora Core 4

 

InfiniPath Adapter Specifications

•  Typical power consumption: 5 Watts

•  Available in PCI half-height, short form factors

•  Operating temperature: 10 to 45°C at 0-3 km; -30 to 60°C (non-operating)

•  Humidity: 20% to 80% (non-condensing, operating); 5% to 90% (non-operating)

 

InfiniPath PCIe ASIC Specifications

•  HSBGA package, 484 pin, 23.0 mm x 23.0 mm, 1 mm ball pitch

•  2.6 Watts (typical)

•  Requires 1.0V and 3.3V supplies, plus InfiniBand interface reference voltages

1  Ping-pong latency and uni-directional bandwidth are based on the Ohio State University ping-pong latency test.

2  The n1/2 measurement was done with a single processor node communicating with a single processor node through a single level of switch.

3  TCP/IP bandwidth and latency are based on using Netperf and a standard Linux TCP/IP software stack.

Note: Actual performance measurements may differ from the data published in this document. All current performance data is available at www.pathscale.com/infinipath.php.

The InfiniPath adapter's latency, as measured by the HPC Challenge Benchmark Suite, is nearly identical to its ping-pong latency, even as you increase the number of nodes.

The InfiniPath adapter, using a standard Linux distribution, also achieves the lowest TCP/IP latency and outstanding bandwidth.³ Eliminating the excess latency found in traditional interconnects reduces communications wait time and allows processors to spend more time computing, which results in applications that run faster and scale higher.
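Footnote 3 attributes the TCP/IP figures to Netperf over a standard Linux stack. As a rough sketch of that style of measurement (not the Netperf tool itself; the peer address, port, and iteration count below are placeholders, not values from this datasheet), a small round-trip timer over ordinary sockets on the adapter's Ethernet driver emulation could look like this:

    /* Rough TCP round-trip timer over ordinary sockets (illustrative only;
     * the published figures were measured with Netperf). A companion echo
     * loop must run on the peer node; the address and port are placeholders. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/time.h>
    #include <unistd.h>

    #define ITERATIONS 1000

    int main(void)
    {
        char byte = 'x';
        struct sockaddr_in peer;
        struct timeval start, end;
        double usec;
        int i, one = 1, fd;

        fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        /* disable Nagle so each 1-byte message is sent immediately */
        setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));

        memset(&peer, 0, sizeof(peer));
        peer.sin_family = AF_INET;
        peer.sin_port = htons(5001);                       /* placeholder port */
        inet_pton(AF_INET, "192.168.1.2", &peer.sin_addr); /* placeholder peer */

        if (connect(fd, (struct sockaddr *)&peer, sizeof(peer)) < 0) {
            perror("connect");
            return 1;
        }

        gettimeofday(&start, NULL);
        for (i = 0; i < ITERATIONS; i++) {
            if (write(fd, &byte, 1) != 1) { perror("write"); return 1; }
            if (read(fd, &byte, 1) != 1)  { perror("read");  return 1; }
        }
        gettimeofday(&end, NULL);

        usec = (end.tv_sec - start.tv_sec) * 1e6 +
               (end.tv_usec - start.tv_usec);
        printf("average round-trip time: %.2f us\n", usec / ITERATIONS);

        close(fd);
        return 0;
    }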

Lowest CPU Utilization. The InfiniPath connectionless environment eliminates overhead that wastes valuable CPU cycles. It provides reliable data transmission without the vast resources required by connection-oriented adapters, thus increasing the efficiency of your clustered systems.

Built On Industry Standards. The InfiniPath adapter supports a rich combination of open standards to achieve industry-leading performance. The InfiniPath OpenIB software stack has been proven to be the highest performance implementation of the OpenIB Verbs layer, which yields both superior latency and bandwidth compared to other InfiniBand alternatives.

• InfiniBand 1.1 4X compliant

• Standard InfiniBand fabric management

• MPI 1.2 with MPICH 1.2.6

• OpenIB supporting IPoIB, SDP, UDP and SRP

• PCI Express x8 expansion slot compatible

• Supports SUSE, Red Hat, and Fedora Core Linux
