1) Check the memory of each canister.
2) If there are any issues, go back to the main page of essutils -> Advanced Tasks -> Check the memory.
This option dumps the complete list of DIMMs in each slot.
7. Configure GPFS page pool size to the 60% target (customer task).
Find the node class name to use, and list the current pagepool settings by issuing the following
commands from either one of the server canisters:
# mmvdisk nc list
a. Identify the node class name that is associated with the system that is going through the MES upgrade.
Example
[root@ess3k5a ~]# mmvdisk nc list
node class recovery group
-------------------- ---------------
ess_x86_64_mmvdisk ess3k
ess_x86_64_mmvdisk_5 ess3k5
gssio1_ibgssio2_ib -
b. Gather the current pagepool configuration by issuing the following command:
# mmvdisk server list --nc <node class name> --config
Example
[root@ess3k5a ~]# mmvdisk server list --nc ess_x86_64_mmvdisk_5 --config
node
number server active memory pagepool nsdRAIDTracks
------ -------------------------------- ------- -------- -------- -------------
21 ess3k5a-ib.example.net no 754 GiB 75 GiB 131072
22 ess3k5b-ib.example.net no 754 GiB 75 GiB 131072
Here you can see that the pagepool is less than 25% of physical memory.
c. To change the pagepool percentage, GPFS must be running on the node class. Check whether GPFS is currently active.
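One way to check the daemon state is the mmgetstate command (a standard IBM Spectrum Scale command, shown here as an illustrative example using the same node class name placeholder as the other steps):
# mmgetstate -N <node class name>
If the state is reported as down, start GPFS as described in the next step.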
d. If GPFS is not running, start it by issuing the following command:
# mmstartup -N <node class name>
Example
[root@ess3k5b ~]# mmstartup -N ess_x86_64_mmvdisk_5
Wed Feb 19 16:37:02 EST 2020: mmstartup: Starting GPFS ...
e. Change the pagepool to 60% of the physical memory, which is 460G in this example, by issuing the following command:
# mmchconfig pagepool=460G -N <node class name>
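For example, using the node class name from the earlier output (illustrative invocation only; command output is not shown):
[root@ess3k5a ~]# mmchconfig pagepool=460G -N ess_x86_64_mmvdisk_5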
f. Ensure that the 460G pagepool setting is listed for the target node class by issuing the following
command:
# mmlsconfig -Y | grep -i pagepool
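Alternatively, because mmlsconfig also accepts an attribute name, you can query the pagepool value directly (illustrative example):
# mmlsconfig pagepool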
8. Restore GPFS normal operational mode and confirm pagepool configuration setting (customer task).
Do the following steps on both canisters:
a. Restart the server by issuing the following command:
# systemctl reboot
b. When the server is up again, do a basic ping test between the canisters over the high-speed
interface.
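Example (illustrative only; this uses the high-speed host name of the peer canister from the earlier listing, so substitute the host names or IP addresses that apply to your system):
[root@ess3k5a ~]# ping -c 3 ess3k5b-ib.example.net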
c. If the ping is successful, start GPFS again by issuing the following command:
# mmstartup -N <node class name>
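To confirm the pagepool configuration setting once GPFS is active, you can list the server configuration again (illustrative example reusing the command from step 7b):
# mmvdisk server list --nc <node class name> --config
The pagepool column should now show the 460 GiB value that was set in step 7e.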