For SuSE Linux 8 or 8.1, edit the /etc/sysconfig/kernel file to contain the following line:
INITRD_MODULES="ips qla2300 reiserfs"
For SLES 7.2 or 8, edit the /etc/rc.config file to contain the following line:
INITRD_MODULES="ips qla2300"
3. For Red Hat 7.3 or 8.0, rebuild the two initrd images (mkinitrd will not allow you to make a ramdisk image if it detects one already present with the same name, so the first two commands rename the old images):
mv /boot/initrd-2.4.2-2.img /boot/initrd-2.4.2-2_orig.img
mv /boot/initrd-2.4.2-2smp.img /boot/initrd-2.4.2-2smp_orig.img
mkinitrd /boot/initrd-2.4.2-2.img 2.4.2-2
mkinitrd /boot/initrd-2.4.2-2smp.img 2.4.2-2smp
For SuSE 8 or 8.1 and SLES 7.2 or 8, run the mkinitrd command to create a new /boot/initrd image, and then run lilo.
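A minimal sketch of that step, assuming the default kernel and initrd locations (on some SuSE releases the script is named mk_initrd rather than mkinitrd):
mkinitrd   # reads INITRD_MODULES and rebuilds the /boot/initrd image
lilo       # reinstalls the boot loader so it uses the new initrd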
4. If an RSA adapter was installed, reboot and load the setup floppy or CD-ROM to configure the network. Assign the RSA adapter the same configuration information (name, IP address, hostname) that was used before.
ATTENTION!
Refer to this site to download the RSA and ASM Processor Firmware Update Diskette utility:
http://www.pc.ibm.com/qtechinfo/MIGR-4JTS2T.html
5. Configure the kernel (if you have custom modifications).
6. Reboot the node.
Copy the system image out to all nodes in the cluster
Because of the way Red Hat 7.3 and 8.0 load SCSI drivers and assign them to /dev/sda, /dev/sdb, and so on, problems can result if more than one SCSI host adapter board (an Adaptec or LSI SCSI controller for the local drives and a QLogic HBA for the Triton connection) is installed in the system. The QLogic HBA will typically be seen first by the install process, which causes the FAStT LUNs, rather than the local drives, to receive the first /dev/sd names. Follow the “Installation procedure” on page 47 and modify the order of the contents of the /etc/modules.conf file so that the local SCSI controller is loaded first.
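For illustration only, here is a sketch of the resulting ordering in /etc/modules.conf; the aic7xxx driver name is an assumption for an Adaptec controller, so substitute the driver that matches the local SCSI controller actually installed:
# local SCSI controller first, so the local drives keep /dev/sda
alias scsi_hostadapter aic7xxx
# QLogic HBA for the Triton (FAStT) connection second
alias scsi_hostadapter1 qla2300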
Attempting to copy the system image out to the nodes while a FAStT controller is still powered up and connected may cause data corruption on the first logical disk device in the FAStT subsystem. Ensure that the FAStT controllers are powered down, or that all fibre cables for the FAStT controllers are disconnected from the back of each controller, before starting the install process.
To copy the system image out to all nodes in the cluster, take the following steps:
1. Open an rconsole window for each node being installed so you can monitor the install process:
rconsole -n {node list}
2. Run the installnode command for each node being installed:
installnode {node list}
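For example, for a single hypothetical node named storage001 (the node name is a placeholder; supply the actual node list defined for your cluster in the format your CSM release accepts):
rconsole -n storage001
installnode storage001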
Once the operating system is installed on the storage nodes, reconnect the fibre cables to the FAStT controllers. Reboot the storage nodes to see any configured LUNs.
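To confirm that the LUNs are visible after the reboot, a quick check on these 2.4-based kernels is to list the SCSI devices the kernel has discovered; the FAStT LUNs should appear alongside the local drives:
cat /proc/scsi/scsi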
Test the configuration
1. Boot and log on to the Management Node as user root.