Configure the nodes
After you rack and cable the hardware, you are ready to configure your new storage resource.
Steps
1. Attach a keyboard and monitor to the node.
2. In the terminal user interface (TUI) that is displayed, configure the network and cluster settings for the node by using the on-screen navigation.
You should get the IP address of the node from the TUI; you need it when you add the node to a cluster. After you save the settings, the node is in a pending state and can be added to a cluster. See the <insert link to Setup section>.
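Before entering values in the TUI, it can help to sanity-check that the management IP, netmask, and gateway you plan to assign are consistent with each other. A minimal sketch using Python's standard ipaddress module; the addresses shown are placeholders for illustration, not values from this guide:

```python
import ipaddress

def check_node_network(ip: str, netmask: str, gateway: str) -> bool:
    """Return True if the node IP and its gateway fall in the same subnet."""
    network = ipaddress.ip_network(f"{ip}/{netmask}", strict=False)
    return ipaddress.ip_address(gateway) in network

# Placeholder values for illustration only
print(check_node_network("10.1.1.5", "255.255.255.0", "10.1.1.1"))  # True
print(check_node_network("10.1.1.5", "255.255.255.0", "10.2.1.1"))  # False
```

Catching a gateway outside the node's subnet here is cheaper than re-entering settings in the TUI after a failed configuration.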
3. Configure out-of-band management using the Baseboard Management Controller (BMC). These steps apply only to H610S nodes.
a. Use a web browser and navigate to the default BMC IP address: 192.168.0.120
b. Log in using root as the username and calvin as the password.
c. From the node management screen, navigate to Settings > Network Settings, and configure the network parameters for the out-of-band management port.
See this KB article (login required) for more information.
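To browse to the factory-default BMC address, your workstation needs an address on the same subnet. A small sketch, again using only the standard ipaddress module, that checks this precondition; the local addresses are placeholder assumptions:

```python
import ipaddress

BMC_DEFAULT_IP = "192.168.0.120"  # factory default from the steps above

def can_reach_bmc_directly(local_ip: str, local_netmask: str) -> bool:
    """True if the workstation's address is on the BMC's default subnet."""
    local_net = ipaddress.ip_network(f"{local_ip}/{local_netmask}", strict=False)
    return ipaddress.ip_address(BMC_DEFAULT_IP) in local_net

# Placeholder workstation addresses for illustration only
print(can_reach_bmc_directly("192.168.0.10", "255.255.255.0"))  # True
print(can_reach_bmc_directly("10.0.0.10", "255.255.255.0"))     # False
```

If the check fails, temporarily assign your workstation an address in the 192.168.0.0/24 range before opening the BMC web UI.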
Create a cluster
After you add the storage node to your installation and configure the new storage resource, you are ready to create a new storage cluster.
Steps
1. From a client on the same network as the newly configured node, access the NetApp Element software UI
by entering the node’s IP address.
2. Enter the required information in the Create a New Cluster window. For details, see the links under Find more information.
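The Create a New Cluster window collects the same parameters the cluster needs at creation time (management and storage virtual IPs, an admin credential, and the node management IPs gathered from each node's TUI). As a hedged illustration, the sketch below assembles a JSON-RPC request body in the shape of the Element API's CreateCluster method; the field names follow the Element API but should be verified against the API reference for your Element version, and all addresses and credentials are placeholders:

```python
import json

def build_create_cluster_request(mvip, svip, username, password, node_ips):
    """Assemble a JSON-RPC payload resembling the Element CreateCluster call.

    Field names follow the NetApp Element API; verify them against the API
    reference for your Element version before use.
    """
    return {
        "method": "CreateCluster",
        "params": {
            "acceptEula": True,
            "mvip": mvip,          # management virtual IP of the new cluster
            "svip": svip,          # storage virtual IP of the new cluster
            "username": username,  # cluster admin account to create
            "password": password,
            "nodes": node_ips,     # management IPs noted from each node's TUI
        },
        "id": 1,
    }

# Placeholder values for illustration only
payload = build_create_cluster_request(
    "10.1.1.100", "10.2.1.100", "admin", "example-password",
    ["10.1.1.5", "10.1.1.6", "10.1.1.7", "10.1.1.8"],
)
print(json.dumps(payload, indent=2))
```

Whether you use the UI or the API, the inputs are the same, so collecting them in one place before starting avoids abandoning the wizard partway through.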
Find more information
• NetApp SolidFire Resources Page
• Documentation for earlier versions of NetApp SolidFire and Element products
Replace an H410S node
You should replace a storage node in the event of a CPU failure, Radian card problems, or other motherboard issues, or if the node does not power on. These instructions apply to H410S storage nodes.
Alarms in the NetApp Element software UI alert you when a storage node fails. You should use the Element UI
to get the serial number (service tag) of the failed node. You need this information to locate the failed node in
the cluster.
Here is the back of a two rack unit (2U), four-node chassis with four storage nodes: