Table 12. Network troubleshooting for a cluster with one network (continued)
Symptom: Cannot ping a node or nodes on the cluster network from the Management Node, yet the rconsole command and access from the KVM work correctly.

Action:
1. Use the ifconfig command to verify that the IP settings are correct (see the sample commands after this list).
2. Verify that the cables are fully plugged into the switch and node, and that everything is plugged into the correct port. Refer to the cabling information printed on each cable label and "VLAN options" on page 17 if you are unsure where a cable belongs. Verify that the link lights are on.
3. Swap ports on the Ethernet switch with a Cluster Node port that you know is working.
4. Verify that the Ethernet switch port is configured for the Management VLAN.
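The first check can be run directly from the Management Node. This is a minimal sketch that assumes the factory default addressing shown in Table 13; eth0 and node001 are only examples, so substitute the interface and host names used in your installation.

   # Verify the Management Node cluster-network interface settings
   ifconfig eth0

   # Confirm the reported inet addr matches the expected value
   # (172.20.0.1 by default), then try to reach a Cluster Node by
   # host name and by IP address
   ping -c 3 node001
   ping -c 3 172.20.3.1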
Cluster with two networks
Ping failure on one or some nodes: If one or more nodes experience a ping failure, it indicates a problem with the node hardware or software.
1. Attempt to telnet to the node via the serial console or KVM and verify that the node is operational (see the sample commands after this procedure).
   a. If telnet succeeds, check the syslog for errors.
      1) If there are errors, go to "Isolating software problems" on page 69 for software problem resolution.
      2) If there are no errors, it indicates a network problem. Go to Table 14 on page 63 and continue with the steps shown there.
   b. If telnet fails, it indicates a node hardware problem. Go to "Isolating hardware problems" on page 63 for problem resolution.
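A minimal sketch of step 1, assuming the node answers to the hypothetical host name node001 and writes its system log to /var/log/messages; substitute the names and log location used in your installation.

   # Open a session on the node; the rconsole command (see Table 12)
   # is an alternative if you need the serial console instead of telnet
   telnet node001

   # After logging in, scan the system log for recent errors
   grep -i error /var/log/messages | tail -20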
Ping failure on only one network: If ping failures occur on one network but not on the second network, it indicates a problem with the network adapter in the Management Node that serves the network where the failure occurred.
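A quick way to compare the two adapters on the Management Node is shown below; eth0 and eth1 follow the defaults listed in Table 13, so substitute the interface names your installation actually uses.

   # The adapter for the failing network typically shows no link,
   # a wrong inet addr, or RX/TX counters that never increase
   ifconfig eth0
   ifconfig eth1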
Ping failure on one or both networks:
1. Verify that all communication devices on the network are powered on and that each device has a green status light on both ends of the connection.
2. Verify with support that the IP address, net mask, and gateway settings are correct for each device that fails to function in the network.
3. Use the ifconfig command to determine the IP address scheme of each node and compare it to the factory defaults shown in Table 13 (see the sample commands after the table).
Table 13. Factory default IP addresses and host names

Device                               IP address                Host name
Management Node                      172.20.0.1 (eth0)         mgtnode.cluster.net (eth0)
                                     172.30.0.1 (eth1)         mgtnode-eth1 (eth1)
Storage Node                         172.20.1.1                storage001
First FAStT Storage                  172.20.2.1
Second FAStT Storage                 172.20.2.2
x335 Cluster Nodes                   172.20.3.1                node001...nodexxx
BladeCenter Ethernet Switch Module   172.20.90.1
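A minimal sketch of the comparison in step 3 above, run on any individual node; the addresses mentioned are the factory defaults from Table 13.

   # List all interface settings on the node
   ifconfig -a

   # Compare the "inet addr" of the cluster interface with the factory
   # default for that device in Table 13 (for example, 172.20.3.1 on the
   # first x335 Cluster Node or 172.20.1.1 on the Storage Node)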