Chelsio Communications www.chelsio.com [email protected] +1-408-962-3600
High Performance 10GbE iWARP Adapter
RDMA • TCP/IP • Cluster Computing
R310E-CXA
Chelsio's R310E 10GbE iWARP Adapter is a protocol-offloading 10 Gigabit Ethernet adapter with a PCI Express host bus interface for servers and storage systems. The third-generation technology from Chelsio provides the highest 10GbE performance available and dramatically lowers host-system CPU communications overhead.
With on-board hardware that offloads iWARP RDMA processing from its host system, the R310E frees host CPU cycles for application work. The result is increased bandwidth, improved overall performance, and reduced message latency across all applications.
This combination makes it practical to converge other networks that traditionally used niche technologies onto 10GbE. High bandwidth and extremely low latency make 10GbE the best technology for high-performance cluster computing (HPCC) fabrics.
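As an illustration (not taken from this datasheet): applications typically reach an iWARP RNIC through the OpenFabrics verbs interface listed under Highlights. The following minimal C sketch assumes a queue pair that is already connected (for example via librdmacm) and a peer buffer address and rkey exchanged out of band; function and buffer names are illustrative, not Chelsio-specific. It posts a one-sided RDMA WRITE, which the adapter completes with direct data placement and no per-byte host CPU work.

    /*
     * Minimal sketch, assuming an OpenFabrics verbs environment: post a
     * one-sided RDMA WRITE over an already-connected queue pair. The
     * remote address and rkey are assumed to have been exchanged out of
     * band. Illustrative only; not a Chelsio-specific API.
     */
    #include <stdint.h>
    #include <string.h>
    #include <infiniband/verbs.h>

    int post_rdma_write(struct ibv_pd *pd, struct ibv_qp *qp,
                        uint64_t remote_addr, uint32_t rkey)
    {
        char buf[4096];
        strcpy(buf, "example payload");

        /* Register the local buffer so the adapter can DMA it directly. */
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, sizeof(buf),
                                       IBV_ACCESS_LOCAL_WRITE);
        if (!mr)
            return -1;

        struct ibv_sge sge = {
            .addr   = (uintptr_t)buf,
            .length = sizeof(buf),
            .lkey   = mr->lkey,
        };
        struct ibv_send_wr wr = {
            .wr_id      = 1,
            .sg_list    = &sge,
            .num_sge    = 1,
            .opcode     = IBV_WR_RDMA_WRITE,   /* one-sided, zero-copy placement */
            .send_flags = IBV_SEND_SIGNALED,
        };
        wr.wr.rdma.remote_addr = remote_addr;
        wr.wr.rdma.rkey        = rkey;

        struct ibv_send_wr *bad_wr = NULL;
        int rc = ibv_post_send(qp, &wr, &bad_wr);  /* hand the transfer to the RNIC */

        if (rc == 0) {
            /* Wait for the work completion; the host CPU never copies the data. */
            struct ibv_wc wc;
            int n;
            do {
                n = ibv_poll_cq(qp->send_cq, 1, &wc);
            } while (n == 0);
            rc = (n < 0 || wc.status != IBV_WC_SUCCESS) ? -1 : 0;
        }
        ibv_dereg_mr(mr);
        return rc;
    }

Because the adapter runs the TCP/IP and iWARP protocol processing in hardware, transfers posted this way complete without per-byte host CPU involvement, which is the basis for the CPU-utilization and latency claims above.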
By using Chelsio's R310E, enterprises can cost-effectively connect servers and storage systems directly to the 10GbE backbone over lower-cost CX4 copper or active optical cabling and switching infrastructure.
As an upgrade or alternative to aggregated Gigabit Ethernet links, 10GbE boosts connection bandwidth and simplifies cabling, installation, and maintenance. The R310E also provides the additional bandwidth needed to consolidate server functions onto fewer, more powerful systems, simplifying management and reducing costs for servers, rack space, power consumption, and maintenance.
Applications with large data sets benefit from a high-speed distributed platform. Examples include video rendering and distribution, data visualization such as remote medical imaging and climate modeling, and bioinformatics applications such as DNA sequencing.
Highlights
+ RDMA-enabled NIC (RNIC) specifically optimized for cluster computing
+ Reduces host CPU utilization by up to 90% compared to NICs without full offload capabilities
+ PCI Express x8 host bus interface
+ Line-rate 10Gbps full-duplex performance
+ Integrated traffic manager, QoS, and virtualization capabilities
+ RNIC-PI, kDAPL, and OpenFabrics 1.2 software interfaces
+ Powerful per-connection, per-server, and per-interface configuration and control

Benefits
+ Routable infrastructure
+ Standards-compliant iWARP RDMA plus direct data placement (DDP)
+ Seamlessly runs existing InfiniBand RDMA applications
+ Very low latency Ethernet

Applications
Data-Center Networking
+ Scale up servers and NAS systems
+ Link servers in multiple facilities to synchronize data centers
+ Consolidate LAN, SAN, and cluster networks
High Performance Cluster Computing
+ Deploy Ethernet-only networking for cluster fabric, LAN, and SAN
+ Increase cluster fabric bandwidth
Active cable capable