Understanding the Software Configurations
A single data address is used to access these two physical ports. That data address allows traffic to continue flowing to the ports in the IPMP group even if one of the two 10-GbE NICs fails.
Note - If you are limited in the number of connections that you can make to your 10-GbE network, you can connect just one port in each IPMP group rather than both. However, you will not have redundancy or increased bandwidth in that case.
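You can verify the state of an IPMP group from within the domain using the standard Oracle Solaris 11 ipmpstat utility. The following is a minimal verification sketch; the group and interface names that these commands report are assigned during SuperCluster installation and vary per system.

List the IPMP groups, their state, and their member interfaces:

# ipmpstat -g

Show the data addresses and which underlying interface currently hosts each one:

# ipmpstat -a

Show the active or standby role and the health of each interface in the group:

# ipmpstat -i

If one 10-GbE NIC fails, ipmpstat -g reports the group state as degraded while the data address continues to operate over the surviving interface.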
InfiniBand Network

The connections to the InfiniBand network vary, depending on the type of domain (a sketch of commands for verifying these connections follows this list):

■ Database Domain:
  ■ Storage private network: Connections through P1 (active) on the InfiniBand HCA associated with the first CPU in the first processor module (PM0) in the domain, and P0 (standby) on the InfiniBand HCA associated with the last CPU in the last processor module (PM3) in the domain.
    So, for a Giant Domain in a Full Rack, these connections would be through P1 on the InfiniBand HCA installed in slot 3 (active) and P0 on the InfiniBand HCA installed in slot 16 (standby).
  ■ Exadata private network: Connections through P0 (active) and P1 (standby) on all InfiniBand HCAs associated with the domain.
    So, for a Giant Domain in a Full Rack, connections are made through all eight InfiniBand HCAs, with P0 on each as the active connection and P1 on each as the standby connection.
■ Application Domain:
  ■ Storage private network: Connections through P1 (active) on the InfiniBand HCA associated with the first CPU in the first processor module (PM0) in the domain, and P0 (standby) on the InfiniBand HCA associated with the last CPU in the last processor module (PM3) in the domain.
    So, for a Giant Domain in a Full Rack, these connections would be through P1 on the InfiniBand HCA installed in slot 3 (active) and P0 on the InfiniBand HCA installed in slot 16 (standby).
  ■ Oracle Solaris Cluster private network: Connections through P0 (active) on the InfiniBand HCA associated with the first CPU in the second processor module (PM1) in the domain, and P1 (standby) on the InfiniBand HCA associated with the first CPU in the third processor module (PM2) in the domain.
    So, for a Giant Domain in a Full Rack, these connections would be through P0 on the InfiniBand HCA installed in slot 4 (active) and P1 on the InfiniBand HCA installed in slot 7 (standby).
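As referenced above, you can verify these InfiniBand port assignments from within a domain using the standard Oracle Solaris 11 dladm and ipmpstat utilities. The following is a minimal sketch; the datalink names that these commands report are assigned during SuperCluster installation and vary per system.

Show each InfiniBand HCA port, its GUIDs, and its link state:

# dladm show-ib

List the InfiniBand partition datalinks configured over those ports:

# dladm show-part

Confirm the IPMP groups that pair the active and standby partition datalinks:

# ipmpstat -g

The PORT column in the dladm show-ib output corresponds to the P0 and P1 port designations used in the list above.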