Service_node_take_over


{{:Design Warning}}

Overview

In a hierarchical cluster where many service nodes are used to manage the compute nodes, a service node may occasionally fail or need to be shut down for maintenance. The admin may also decide to move some nodes from one service node to another. To minimize the impact on the compute nodes, the responsibilities of the source service node, say sn1, need to be moved smoothly to the destination service node, say sn2. This design assumes that:

  • both sn1 and sn2 are installed as service nodes
  • the nodes managed by sn1 are reachable by sn2 via Ethernet
  • sn2 is currently running
  • sn1 may or may not be running

A utility will be developed for both AIX and Linux to perform the takeover.

Implementation

Exploring the service node pool concept

In this design, the service node pool concept will be leveraged. To define a service node pool for a node, noderes.servicenode is set to a comma-separated list of service nodes, and noderes.xcatmaster is usually left blank. However, this design adds support for a non-blank noderes.xcatmaster: it can be the hostname of the adapter of a service node that faces the compute nodes. In this case, the first service node in the list specified in noderes.servicenode must point to THE SAME SERVICE NODE defined in noderes.xcatmaster. Here is how it works:

  • If noderes.xcatmaster is blank, the MASTER environment variable for the postscripts will be the DHCP server that responded to the node's DHCP request during node deployment.
  • If noderes.xcatmaster is not blank, the MASTER environment variable for the postscripts will be this value, regardless of the DHCP server.
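
For example, such a pool could be defined as follows (a minimal sketch; the node name compute1 and the compute-facing hostname sn1-c are hypothetical):

    # define a pool of two service nodes for compute1; since sn1 is first in
    # the servicenode list, xcatmaster must name sn1's compute-facing adapter
    chdef compute1 servicenode=sn1,sn2 xcatmaster=sn1-c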

The MASTER value will be used by the postscripts to set up the server for some services, for example the syslog server. In fact, the current code already works this way; we need to go through the code and fix the places that assume noderes.xcatmaster is blank when a service node pool is used. The service node takeover only works when:

  • there is a service node pool defined for the nodes
  • both sn1 and sn2 are in the pool and sn1 is the first one in the servicenode list
  • noderes.xcatmaster is not blank
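
These preconditions can be checked before the move (a sketch; compute1 is a hypothetical node name):

    # verify that the node has a service node pool and a non-blank xcatmaster
    lsdef compute1 -i servicenode,xcatmaster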

snmove command

A command called snmove will be implemented. It will take two service node names as input and transfer the responsibilities from the source service node to the destination service node.

   **snmove -v|-h**
   **snmove noderange -d sn2 -D sn2n [-i]**  
        Move management responsibilities for the given nodes to sn2. 
   **snmove -s sn1 [-S sn1n] -d sn2 -D sn2n [-i]**  
        Move management responsibilities for all the nodes managed by sn1 to sn2.
                sn1 is the hostname of the source service node adapter facing the mn.
                sn1n is the hostname of the source service node adapter facing the nodes.
                sn2 is the hostname of the destination service node adapter facing the mn.
                sn2n is the hostname of the destination service node adapter facing the nodes.
                -i: No action will be done on the nodes. 
                If -i is not specified, the syslog and setupntp postscripts will be rerun on the nodes to switch the syslog and NTP servers.
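
For example, to move all of the nodes managed by sn1 over to sn2 (an instance of the syntax above; sn1n and sn2n stand for the compute-facing hostnames):

    snmove -s sn1 -S sn1n -d sn2 -D sn2n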

The snmove command must be invoked on the mn. It will perform the following functions:

For Linux (a command-level sketch follows this list):

  • Top down
    • change the noderes.servicenode setting so that sn2 is the first one in the list
  • Bottom up
    • change the noderes.xcatmaster setting for the nodes from sn1 to sn2.
    • on the nodes, make the syslog and NTP services use sn2 as the MASTER (using updatenode)
  • DHCP
    • add the nodes to the .leases file on sn2 (this step can be skipped because makedhcp already did it).
    • change the DHCP server on the nodes to point to sn2
  • TFTP and NFS
    • if noderes.tftpserver and noderes.nfsserver are pointing to sn1, change them to sn2 for the nodes. Rerun nodeset for the nodes so that the next time the nodes boot, they will use the latest settings.
  • Name Server
    • Change /etc/resolv.conf on the nodes to point to sn2 (what puts the value there? dhcp?)
  • Conserver
    • change the nodehm.conserver setting for the nodes from sn1 to sn2, and rerun makeconservercf for the nodes.
  • Monitoring??
    • change the noderes.monserver setting for the nodes from sn1 to sn2.
    • How about monitoring software like RMC, SNMP, or Ganglia? No, this will be done by the user running the updatenode command.
  • Hardware control point?
    • What if sn1 is the service node for the hardware control points of nodes that need to be moved over to sn2? (Not handled yet)
  • Applications?
    • LL config file? (No, the user needs to run updatenode manually)
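
For illustration, the Linux steps above roughly correspond to the following existing xCAT commands, which snmove would wrap (a sketch only, assuming a hypothetical node group compute, destination service node sn2, and compute-facing hostname sn2n):

    # put sn2 first in the pool and point xcatmaster at its compute-facing adapter
    chdef compute servicenode=sn2,sn1 xcatmaster=sn2n
    # move the console server to sn2 and regenerate the conserver configuration
    chdef compute conserver=sn2
    makeconservercf compute
    # refresh the DHCP entries so the nodes are served by sn2
    makedhcp compute
    # regenerate the boot configuration so the next boot picks up the new settings
    nodeset compute netboot
    # rerun the syslog and setupntp postscripts on the nodes
    updatenode compute -P syslog,setupntp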

For AIX:

TBD.

Direct bootp or broadcast bootp

On System p, we can request either a direct bootp or a broadcast bootp during node deployment. A flag (-d) will be added to the rnetboot command to indicate that a direct bootp is requested. If -d is set, noderes.xcatmaster, which defaults to site.master if blank, will be used as the bootp server for the nodes to be deployed.
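
With the proposed flag, a direct bootp deployment might be started as follows (a sketch; -d is the flag proposed above, not an existing rnetboot option, and compute is a hypothetical node group):

    # request a direct bootp from noderes.xcatmaster instead of a broadcast
    rnetboot compute -d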