
and thus the controller status cannot be determined or reported until the core services complete
the initialization phase.
For more information, see “Error log for team configuration” (page 184), “Failover behavior within a region” (page 189), and “Failback behavior within a region” (page 191).
The controller states are:
• initializing
The sequencer completed the startup sequence through the team stage and the controller is part of the team quorum (connected to at least one other active controller in the team), but has not yet deployed and initialized the operational group services. At this point, the OpenFlow port is not open.
• active
The sequencer completed all stages of the startup sequence and the OpenFlow port is open. If the controller is a member of a team, it is part of the team quorum (connected to at least one other team member that has started its teaming services).
• suspended
The sequencer completed all stages in the suspend sequence. The sequencer initiates the suspend sequence when a monitored core service reports an unhealthy status or a teamed controller loses its membership in the team quorum. Core services are started. Teaming services are started but are waiting until the controller can become a member of a team quorum. The OpenFlow port is closed.
• unreachable
A controller sees a remote controller as unreachable if the connection to the remote controller is broken. A controller never sees itself as unreachable.
If an application reports an unhealthy status, an alert is generated but the controller remains in the active state.
If two controllers in a team fail, the third controller does not operate as a standalone controller.
Instead, the third controller loses its membership in the team quorum, and the sequencer initiates
the suspend sequence.
You can view your controller status from the top section of the Team screen in the UI; see “team configuration and controller status” (page 105).
Manually synchronizing Cassandra database nodes using the nodetool repair utility
The Cassandra nodetool repair utility corrects inconsistencies among instances of the Cassandra database so that all nodes have the same, current data.
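For reference, the basic form of the repair operation is sketched below. This is a sketch only: it assumes the nodetool binary is on the PATH of the Linux system hosting the controller; use the procedure later in this section for the exact steps for your installation.

    # Repair the Cassandra data on this node (run on one team member at a time).
    nodetool repair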
Guidelines for running the nodetool repair utility
• Run the utility on each server in the controller team.
• Schedule regular repair operations for one server in the controller team at a time.
• Schedule regular repair operations once every 10 days.
• Disk activity increases during repair operations, so schedule repair operations during low-usage hours (see the example schedule after this list).
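One way to follow these guidelines is to schedule the repair from cron on each server, staggering the days so that only one team member runs a repair at a time. The crontab entry below is a sketch only; the path to nodetool and the chosen hour and days are assumptions that depend on your installation.

    # Run nodetool repair at 02:00 (a low-usage hour) on roughly a 10-day cycle.
    # cron cannot express "every 10 days" directly, so the 1st, 11th, and 21st
    # of the month are used as an approximation. Use different days (for example
    # 2, 12, 22) on the other team servers so only one repairs at a time.
    0 2 1,11,21 * * /usr/bin/nodetool repair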
Running the Cassandra nodetool repair command
The commands in this procedure are run from the command prompt on the Linux system on
which the controller is installed: