• Management network: This required network connects to your existing management network and is used for administrative work on all components of the Exalogic machine. It connects the ILOM interfaces, the compute nodes, the server heads in the storage appliance, and the switches to the Ethernet switch in the Exalogic machine rack. This management network is in a single subnet. ILOM connectivity uses the NET0 (on Oracle Solaris, igb0) sideband interface.
For multirack configurations, you may have any of the following:
– A single subnet per configuration
– A single subnet per rack in the multirack configuration
– Multiple subnets per configuration
Oracle recommends that you configure a single subnet per configuration.
With sideband management, only the NET0 (on Oracle Solaris, igb0) interface of each compute node is physically connected to the Ethernet switch on the rack. For the server heads in the storage appliance, the NET0 and NET1 interfaces (on Oracle Solaris, igb0 and igb1) are physically connected to support active-passive clustering.
Note:
Do not use the management network interface (NET0 on Oracle Linux, and igb0 on Oracle Solaris) on compute nodes for client or application network traffic. Cabling or configuration changes to these interfaces on Exalogic compute nodes are not permitted.
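For example, the following commands show how you might confirm the state of the management interface from the operating system. This is a minimal sketch; on Oracle Linux, eth0 is assumed here to be the device backing NET0, so confirm the actual device name on your system before relying on it.

# Oracle Linux: show the address and state of the management interface
# (eth0 is assumed to correspond to NET0)
ip addr show eth0

# Oracle Solaris: show the igb0 datalink and any addresses configured on it
dladm show-link igb0
ipadm show-addr | grep igb0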
• InfiniBand private network: This required network connects the compute nodes and the storage appliance through the BOND0 interface to the InfiniBand switches/gateways on the Exalogic rack. It is the default IP over InfiniBand (IPoIB) subnet created automatically during the initial configuration of the Exalogic machine.
Note:
This network is either based on the default InfiniBand partition or based on a partition allocated for the Exalogic machine. A single default partition is defined at the rack level. For more information, see Rack-Level InfiniBand Partition.
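For example, the IPoIB bond can be inspected from a compute node as follows. This is a minimal sketch for Oracle Linux, assuming the default bond0 device name and the conventional ib0 and ib1 slave names; verify the names on your system.

# Show the IPoIB bonded interface and its private-network address
ip addr show bond0

# Show the bonding mode, the slave interfaces (ib0/ib1), and the active slave
cat /proc/net/bonding/bond0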
• Client access network: This required network connects the compute nodes to your existing client network through the BOND1 interface and is used for client access to the compute nodes (this is related primarily to a physical Exalogic deployment). Each Exalogic compute node has a single default client access (edge network) connection to an external 10 Gb Ethernet network through a Sun Network QDR InfiniBand Gateway Switch.
The logical network interface of each compute node for client access network connectivity is bonded. BOND1 consists of two Ethernet over InfiniBand (EoIB) vNICs. Each vNIC is mapped to a separate Sun Network QDR InfiniBand Gateway Switch for high availability (HA), and each host EoIB vNIC is associated with a different HCA IB port (on Oracle Linux, vNIC0 -> ib0 and vNIC1 -> ib1; on Oracle Solaris, vNIC0 -> ibp0 and vNIC1 -> ibp1).
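As a quick host-side check of this bond on Oracle Linux, you might run the following. This is a minimal sketch, assuming the default bond1 device name; the names of the EoIB vNIC slave devices depend on your configuration.

# Show the client access bond and its client-network address
ip addr show bond1

# Confirm that both EoIB vNIC slaves are enslaved and see which one is active,
# reflecting the HA pairing across the two gateway switches
cat /proc/net/bonding/bond1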