4. To confirm the addition, type mmlsnode -C set1.
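For a node set named set1, the response might look like the following; the node names here are hypothetical and the exact layout depends on your GPFS level:

   GPFS nodeset    Node list
   -------------   ---------------------------------
      set1         node001 node002 node003 node004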
Rules to follow when adding nodes
You must follow these rules when adding nodes to a GPFS node set:
v A node can belong to only one node set at a time.
v The nodes being added to the node set must belong to a GPFS cluster (issue the mmlscluster command to display available nodes).
v The existing node set must meet quorum for the nodes to be added. For example, if GPFS is currently configured on eight nodes, all of which are up and running, the quorum value is met and the new nodes join the node set.
v Conversely, if GPFS is currently configured on eight nodes and only four are up and running, a quorum of five does not exist and the new nodes cannot join the node set. When five of the original eight nodes are up and running, the new nodes are added. (The sketch after this list shows how this quorum value is derived.)
v After the nodes have been added and GPFS is started on the new nodes, the quorum value for the node set is adjusted accordingly. This enables new nodes to join a running node set without causing quorum to be lost.
v Issue the mmstartup command to start GPFS on the new nodes.
v When adding nodes to a node set that uses the single-node quorum algorithm, the GPFS daemon must be stopped on all of the nodes. If, after the nodes are added, the number of nodes in the node set exceeds two, the quorum algorithm is automatically changed to the multinode quorum algorithm.
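The multinode quorum value used in the examples above is one plus half the number of nodes in the node set (using integer division), which is why an eight-node node set requires five nodes up and running. The following shell sketch illustrates the arithmetic; the node count is hypothetical:

   # Multinode quorum: one plus half the nodes in the node set.
   NODES=8                       # hypothetical node set size
   QUORUM=$(( NODES / 2 + 1 ))   # 8 / 2 + 1 = 5
   echo "A $NODES-node node set needs $QUORUM nodes up to meet quorum."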
Distributing the system image to all nodes in the cluster
Because of the way Red Hat Linux version 9.0 loads SCSI drivers and assigns them to the /dev/sda and /dev/sdb partitions, problems can result if more than one SCSI host adapter (an Adaptec or LSI SCSI controller for local drives and a QLogic HBA for the Triton connection) is installed on the system. The QLogic HBA will typically be detected first by the installation process.
To make sure the local SCSI controller is detected first, modify the order of the contents of the /etc/modules.conf file, as sketched below.
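For illustration only, a hypothetical /etc/modules.conf fragment that loads the local controller ahead of the HBA follows; the module names are assumptions (aic7xxx for an Adaptec controller, qla2300 for a QLogic HBA) and depend on the adapters actually installed:

   # Load the local SCSI controller first so it is assigned /dev/sda
   alias scsi_hostadapter aic7xxx
   # Load the QLogic HBA second
   alias scsi_hostadapter1 qla2300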
Attempting to distribute the system image out to the nodes while a FAStT controller is still turned on and connected might cause data damage on the first logical disk device in the FAStT subsystem.
Make sure that the FAStT controllers are turned off, or that all fiber cables for the FAStT controllers are disconnected from the back of each controller, before starting the install process.
To distribute the system image to all nodes in the cluster, complete the following steps:
1. Open an rconsole window for each node being installed so you can monitor the installation process:
   rconsole -n {node_list}
2. Run the installnode command for each node being installed:
   installnode {node_list}
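As a hypothetical illustration, installing four nodes named node001 through node004 (invented host names; the exact node list syntax depends on your CSM configuration) might look like this:

   rconsole -n node001,node002,node003,node004
   installnode node001,node002,node003,node004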
After the operating system is installed on the storage nodes, reconnect the fiber cables to the FAStT controllers. Restart the storage nodes to see any configured LUNs.
Verifying the configuration
1. Start the management node and log on as user root.