12 node status enhancement


Identify the key checkpoints for the following kinds of node operations:

  • Provisioning
  • Discovery
  • Node reboot (Diskful)

The goal is to standardize the xCAT node statuses and give the Admin a better understanding of the progress of xCAT provisioning on the nodes, through the state changes reflected in each node's status attribute. If a node is stuck at some status, the Admin can easily determine where the error occurred.

Although a fine-grained status would make it easier for the Admin to follow the whole progress, only key statuses are reported in order to limit the impact on the xCAT daemon when provisioning nodes at large scale at the same time. Detailed progress can be obtained from the log system.

Note: site.nodestatus can be set to N to disable the node status reporting mechanism during these operations.
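For reference, a small sketch of how this setting could be checked and disabled with the standard xCAT object commands (clustersite is the usual object name for the site table):

# Show whether node status reporting is currently disabled
lsdef -t site clustersite -i nodestatus

# Disable node status updates during a large-scale operation
chdef -t site clustersite nodestatus=n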

Provisioning

  • Diskful installation: In this kind of OS installation, xCAT generates the corresponding boot configuration files and leverages the operating system's native installer to finish the whole OS installation and configuration. The key stages are:
Stage | Status | Set by | Notes
Booting from net | powering-on | rpower | Server is powering on and tries to boot via `bootp` on the provision NIC
OS Provisioning | installing | pre-script in kickstart | Operating system is installing
OS Provisioning | booting | post-script in kickstart | All xCAT postscripts have finished and the operating system is rebooting into its first boot
xCAT Postboot Processing | postbooting | xcatpostinit1->xcatinstallpost | Post boot scripts are running
xCAT Postboot Processing | booted / failed | xcatpostinit1->xcatinstallpost | xCAT has completed all defined configuration and the OS is deployed (if any postbootscript fails, the status is set to failed)
  • Diskless installation: In this kind of OS installation, xCAT provisions the node with a pre-built image. The key stages are:
Stage | Status | Set by | Notes
Booting from net | powering-on | rpower | Server is powering on and tries to boot via `bootp` on the provision NIC
OS Provisioning | netbooting | script in dracut | Operating system is installing
xCAT Postboot Processing | postbooting | xcatpostinit1->xcatdlspost | Post boot scripts are running
xCAT Postboot Processing | booted / failed | xcatpostinit1->xcatdlspost | xCAT has completed all defined configuration and the OS is deployed (if any postbootscript fails, the status is set to failed)
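During either kind of provisioning, the progress shown in the tables above can be followed from the management node. A minimal sketch, assuming the standard lsdef/nodels commands and a hypothetical node compute01 in group compute:

# Show the current status and the time it was last set for one node
lsdef compute01 -i status,statustime

# Or query the nodelist table directly for a whole noderange
nodels compute nodelist.status nodelist.statustime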

Node reboot

For a diskless node, a reboot means a new installation, so this section applies only to diskful compute nodes.

Stage | Status | Set by | Notes
Shut-down | powering-off | rpower | Server is shutting down
Rebooting | powering-on | rpower | Server is booting
xCAT Postboot Processing | postbooting | xcatpostinit1->xcatdlspost | Post boot scripts are running (only available when `site.runbootscripts=yes`)
xCAT Postboot Processing | booted / failed | xcatpostinit1->xcatdlspost | xCAT has completed all defined configuration and the OS is deployed (if any postbootscript fails, the status is set to failed)
Note: if the reboot is triggered from the compute node itself via the 'reboot' command, there is no rpower-driven 'powering-on' stage; instead, 'powering-on' is set by 'xcatpostinit1'.
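A hedged end-to-end illustration of the reboot flow above, driven from the management node for a hypothetical diskful node cn1:

# Shut-down stage: rpower sets the node status to powering-off
rpower cn1 off

# Rebooting stage: rpower sets the node status to powering-on
rpower cn1 on

# Follow the remaining transitions (postbooting, then booted or failed)
watch -n 10 "lsdef cn1 -i status,statustime"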

A convenient tool to check the node status (after discussion, this feature is deferred)

Currently the statuses are only available in the log file; a new xcatprobe sub-command is planned to show the history of status changes.

nodecheck config [-n noderange] [-V|--verbose]
nodecheck status [-n noderange] [-V|--verbose]

Example output:
boston02:
07-07-2017 17:01:27 powering-on
07-07-2017 17:10:26 installing
07-07-2017 17:31:08 booting
07-07-2017 17:38:08 postbooting
07-07-2017 17:39:47 booted

The corresponding entries in /var/log/xcat/cluster.log look like this:

cluster.log-20170709:Jul  7 17:01:27 c910f02c05p03 xcat: boston02 status: powering-on statustime: 07-07-2017 17:01:27
cluster.log-20170709:Jul  7 17:10:26 c910f02c05p03 xcat: boston02 status: installing statustime: 07-07-2017 17:10:26
cluster.log-20170709:Jul  7 17:31:08 c910f02c05p03 xcat: boston02 status: booting statustime: 07-07-2017 17:31:08
cluster.log-20170709:Jul  7 17:39:47 c910f02c05p03 xcat: boston02 status: booted statustime: 07-07-2017 17:39:47
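Since the sub-command is deferred, roughly the same per-node history shown above can be recovered from the log with ordinary text tools. A sketch, assuming the log format above and the node name boston02:

# Print "date time status" lines for one node from the cluster log (and rotated copies)
grep -h "xcat: boston02 status:" /var/log/xcat/cluster.log* | sed 's/.*status: //' | awk '{print $3, $4, $1}'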

Q&A document on how to debug issues

If a node stays in an intermediate status for a long time, it usually indicates a problem on that node. A Q&A document is needed for this case: when a node hangs at a given status, how to debug it and find the root cause.
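As a starting point for that Q&A, a hedged checklist of commands an Admin could run from the management node when a node hangs at one status (the node name cn1 is hypothetical, and computes.log assumes compute-node syslog is forwarded to the management node):

# What status does xCAT currently record, and when was it last updated?
lsdef cn1 -i status,statustime

# Is the node actually powered on according to its BMC?
rpower cn1 stat

# Open the remote console to see where the installer or boot sequence is stuck
rcons cn1

# Check the management node logs for the node's last reports and errors
grep cn1 /var/log/xcat/cluster.log | tail
grep cn1 /var/log/xcat/computes.log | tail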

Other Design Considerations

  • N/A

Out of Scope

Since the status is reported by the compute nodes themselves, there is a risk that a status update cannot be delivered to the management node successfully (for example, because of a networking issue). Handling such reporting failures is out of scope.