Understanding the Software Configurations
The following 10-GbE NICs and ports are used for connection to the client access network for
this configuration:
■ PCIe slot 1, port 0 (active)
■ PCIe slot 14, port 1 (standby)
A single data address is used to access these two physical ports. That data address allows traffic to continue flowing to the ports in the IPMP group, even if one of the two 10-GbE NICs fails.
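As an illustration only, the following minimal sketch shows how an active/standby IPMP group with a single data address is typically built on Oracle Solaris 11. The datalink names (net0, net5) and the address (192.0.2.10/24) are placeholder assumptions, not the values used on an actual SuperCluster, where the installation tooling performs this configuration:

# Create IP interfaces on the two 10-GbE datalinks (placeholder names).
ipadm create-ip net0
ipadm create-ip net5

# Mark the second port as a standby interface; it carries traffic only
# on failover.
ipadm set-ifprop -p standby=on -m ip net5

# Create the IPMP group and add both interfaces to it.
ipadm create-ipmp ipmp0
ipadm add-ipmp -i net0 -i net5 ipmp0

# Assign the single data address to the group rather than to either port,
# so traffic moves between the ports transparently if a NIC fails.
ipadm create-addr -T static -a 192.0.2.10/24 ipmp0/data1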
Note - You can also connect just one port in each IPMP group to the 10-GbE network rather than both ports if you are limited in the number of 10-GbE connections that you can make to your 10-GbE network. However, you will not have redundancy or increased bandwidth in this case.
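In either configuration, you can check the health of the group with the standard Oracle Solaris ipmpstat utility (a generic illustration; the group and interface names on a real system will differ):

# Show IPMP group status: group name, state, failure-detection time,
# and the interfaces in the group.
ipmpstat -g

# Show per-interface state, including which interface is active and
# which is flagged as standby.
ipmpstat -i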
InfiniBand Network
The connections to the InfiniBand network vary, depending on the type of domain; a short verification sketch follows the list:
■ Database Domain:
    ■ Storage private network: Connections through P1 (active) on the InfiniBand HCA associated with the first CPU in the domain and P0 (standby) on the InfiniBand HCA associated with the last CPU in the domain.
      So, for a Large Domain in a Half Rack, these connections would be through P1 on the InfiniBand HCA installed in slot 3 (active) and P0 on the InfiniBand HCA installed in slot 16 (standby).
    ■ Exadata private network: Connections through P0 (active) and P1 (standby) on all InfiniBand HCAs associated with the domain.
      So, for a Large Domain in a Half Rack, connections will be made through all four InfiniBand HCAs, with P0 on each as the active connection and P1 on each as the standby connection.
■ Application Domain:
    ■ Storage private network: Connections through P1 (active) on the InfiniBand HCA associated with the first CPU in the domain and P0 (standby) on the InfiniBand HCA associated with the last CPU in the domain.
      So, for a Large Domain in a Half Rack, these connections would be through P1 on the InfiniBand HCA installed in slot 3 (active) and P0 on the InfiniBand HCA installed in slot 16 (standby).
    ■ Oracle Solaris Cluster private network: Connections through P0 (active) on the InfiniBand HCA associated with the second CPU in the domain and P1 (standby) on the InfiniBand HCA associated with the third CPU in the domain.
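To confirm how the HCA ports (P0 and P1) map to datalinks inside a running domain, you can use the standard Oracle Solaris dladm utility. This is a generic sketch; the link names reported on an actual system will differ:

# List all physical datalinks, including the InfiniBand HCA ports.
dladm show-phys

# Show InfiniBand-specific details for each IB datalink: HCA GUID,
# port GUID, port number (distinguishing P0 from P1), link state,
# and the partition keys the port can use.
dladm show-ib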