are built with either a full-bandwidth 64-port interconnect or a reduced-bandwidth 128-port
interconnect as follows:
— Reduced Bandwidth — Reduced bandwidth clusters are available in configurations of
129 to 256 nodes or 257 to 512 nodes. A reduced bandwidth cluster uses a switch
configuration that has fewer switch cards and interconnect network routes, trading off
hardware cost against system performance.
— Full Bandwidth — Full bandwidth clusters are available in configurations of 129 to 256
nodes or 257 to 512 nodes. A full bandwidth cluster uses a fully configured interconnect
where all possible node links are available, providing the maximum high-speed network
performance.
The configuration of a cluster is very flexible, depending on the number of nodes selected and
any options (such as storage shelves) installed in the racks. Table 2-1 provides a guide to the
possible configurations of dense clusters, with the columns showing the number of nodes in the
cluster and the size (port count) of the interconnect that is required for this number of nodes.
Table 2-1 Dense Clusters: Ratio of Nodes to Racks and Interconnects

  Single 128-Port Interconnect | Single 64-Port Interconnect | Single 32-Port Interconnect | Racks
  6                            | 6                           | 15                          | 1
  24                           | 24                          | 32¹                         | 2
  42                           | 42                          | —                           | 3
  60                           | 60                          | —                           | 4
  78                           | —                           | —                           | 5
  96                           | —                           | —                           | 6
  114                          | —                           | —                           | 7
  128¹                         | —                           | —                           | 8

1. In this configuration, you cannot connect the control node to the interconnect.
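Read as a lookup, Table 2-1 maps a node count to the smallest single interconnect that can serve it and the number of racks that configuration occupies. The following Python sketch encodes the table data for illustration only; the DENSE_LIMITS structure and dense_configuration function are hypothetical names, not part of any HP tool, and the table itself remains the authoritative source.

```python
# Illustrative sketch of the Table 2-1 lookup for dense clusters.
# The (racks: max_nodes) pairs below are copied from Table 2-1,
# keyed by interconnect port count.
DENSE_LIMITS = {
    32:  {1: 15, 2: 32},
    64:  {1: 6, 2: 24, 3: 42, 4: 60},
    128: {1: 6, 2: 24, 3: 42, 4: 60, 5: 78, 6: 96, 7: 114, 8: 128},
}

def dense_configuration(nodes):
    """Return (interconnect_ports, racks) for the smallest single
    interconnect that can serve `nodes` nodes, per Table 2-1.

    Note: at 32 nodes on a 32-port interconnect, and at 128 nodes on
    a 128-port interconnect, no port remains for the control node
    (table footnote 1)."""
    for ports in sorted(DENSE_LIMITS):
        for racks, max_nodes in sorted(DENSE_LIMITS[ports].items()):
            if nodes <= max_nodes:
                return ports, racks
    raise ValueError("More than 128 nodes requires a federated "
                     "configuration, not a single interconnect.")

print(dense_configuration(30))   # (32, 2)
print(dense_configuration(100))  # (128, 7)
```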
Modular cluster configurations are even more flexible because the number of nodes can be
anywhere up to the maximum number of nodes supported for the current release, and a larger
set of options is supported. Clusters that have 128 nodes (or fewer) require only a single
128-port interconnect to provide links for all the nodes, and are referred to as bounded clusters.
Clusters exceeding 128 nodes require that the cluster interconnects are organized into a hierarchy
of node-level interconnects and top-level interconnects, called a federated configuration. In such
configurations, up to 10 interconnect chassis are required, which means that there will be five
IBB racks (two interconnect chassis per rack).
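As a rough sketch of the bounded/federated distinction described above, the helper below classifies a node count and, for a federated cluster, derives the IBB rack count from a given interconnect chassis count (two chassis per rack, as stated above). The function name and its parameters are illustrative assumptions; the actual chassis count for a given node count comes from HP configuration rules, which are not reproduced here, so it is treated as an input.

```python
import math

def classify_cluster(nodes, interconnect_chassis=None):
    """Hypothetical helper: classify a modular cluster as bounded or
    federated and, for federated clusters, derive the IBB rack count
    from a given interconnect chassis count (two chassis per rack)."""
    if nodes <= 128:
        # A single 128-port interconnect links all nodes.
        return "bounded: one 128-port interconnect"
    if interconnect_chassis is None or not 1 <= interconnect_chassis <= 10:
        raise ValueError("federated clusters use up to 10 interconnect chassis")
    ibb_racks = math.ceil(interconnect_chassis / 2)
    return f"federated: {interconnect_chassis} chassis in {ibb_racks} IBB rack(s)"

print(classify_cluster(96))       # bounded: one 128-port interconnect
print(classify_cluster(300, 10))  # federated: 10 chassis in 5 IBB rack(s)
```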
Cluster configurations are flexible and depend on criteria such as the following:
• The number of nodes — Clusters with more than 128 nodes require a federated hierarchy
of interconnects and a larger number of IBB racks.
• The format of the nodes — Nodes have a height ranging from 1U to 4U. A cluster assembled
from application nodes having a 4U height will require a much larger number of CBB racks
than a cluster constructed from nodes having a 2U height.
• The number of CBBs — Clusters with up to 12 CBBs can support up to 23 utility nodes. As
the number of racks increases up to a maximum of 27 CBBs, the number of supported utility
nodes decreases to 7 (see the sketch after this list).
• Storage options — Up to two storage controllers and disk shelves are supported in the UBB.
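The CBB/utility-node trade-off can be illustrated with a small sketch. Only the two endpoints stated in the list above come from the source (up to 23 utility nodes at 12 or fewer CBBs, and 7 utility nodes at 27 CBBs); the values in between are a linear assumption made purely for illustration, and the function name is hypothetical.

```python
def max_utility_nodes(cbb_count):
    """Illustrative only: the source gives two data points (<= 12 CBBs
    supports up to 23 utility nodes; 27 CBBs supports 7). Values
    between 12 and 27 CBBs are a linear assumption, not from the
    source."""
    if not 1 <= cbb_count <= 27:
        raise ValueError("CBB count out of range for this sketch")
    if cbb_count <= 12:
        return 23
    # Assumed linear decrease from 23 at 12 CBBs to 7 at 27 CBBs.
    return round(23 - (cbb_count - 12) * (23 - 7) / (27 - 12))

print(max_utility_nodes(12))  # 23
print(max_utility_nodes(27))  # 7
```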
Because of these interactions, it is not possible to define a simple set of rules that accurately
determines the ratio of nodes to racks in a cluster. This configuration flexibility also has
implications for certain tasks when planning a cluster installation. To plan cabling routes and
determine the total footprint of the installed cluster, you must work with HP sales and service
representatives to understand the precise rack configuration of the cluster that you order. By
the time the cluster