
Added sections for

- service node 101
- defining service nodes
This commit is contained in:
Victor Hu
2015-08-11 17:35:37 -04:00
parent fd2b0aa4b0
commit e765d12566
4 changed files with 101 additions and 23 deletions

View File

@ -1,27 +1,16 @@
Configure a Database
====================
xCAT uses the SQLite database (https://www.sqlite.org/) as the default database; it is initialized during xCAT installation on the Management Node. If using Service Nodes, SQLite **cannot** be used because Service Nodes require remote access to the xCAT database. One of the following databases should be used instead:
* :ref:`mysql_reference_label`
* :ref:`postgresql_reference_label`
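If you are unsure which database your Management Node is currently using, the ``lsxcatd`` command can report it; a fresh installation will show the default SQLite configuration: ::

# display the database configuration currently used by xcatd
lsxcatd -d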
.. _mysql_reference_label:
MySQL/MariaDB
-------------
.. toctree::
:maxdepth: 2
@ -31,9 +20,10 @@ MySQL/MariaDB
mysql_remove.rst
.. _postgresql_reference_label:
PostgreSQL
----------
.. toctree::
:maxdepth: 2
@ -41,3 +31,4 @@ PostgreSQL
postgres_configure.rst
postgres_using.rst
postgres_remove.rst

View File

@ -1,14 +1,16 @@
Large Cluster Support
=====================
xCAT supports the management of very large clusters through the use of **xCAT Hierarchy** or **xCAT Service Nodes**.
When dealing with large clusters, to balance the load, it is recommended to have more than one node in addition to the Management Node ("MN") handling the installation and management of the compute nodes. These additional *helper* nodes are referred to as **xCAT Service Nodes** ("SN"). The Management Node can delegate all management operations for a set of compute nodes to the Service Node responsible for them.
The following configurations are supported:
* Each service node installs/manages a specific set of compute nodes
* Having a pool of service nodes, any of which can respond to an installation request from a compute node (*requires service nodes to be aligned with the network broadcast domains; a compute node chooses its service node based on which one responds to its DHCP request first; see the sketch after this list*)
* A hybrid of the above, where each specific set of compute nodes has 2 or more service nodes in a pool
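As a quick illustration of the pooled configuration above, a pool is expressed by listing more than one service node in the ``servicenode`` attribute of the compute nodes (a minimal sketch using hypothetical node names ``sn1`` and ``sn2``; the detailed setup steps are covered in the documents below): ::

# either sn1 or sn2 may install/manage the compute nodes in group rack1
chdef -t group -o rack1 servicenode=sn1,sn2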
The following documentation assumes an xCAT cluster has already been configured and covers the additional steps needed to support xCAT Hierarchy via Service Nodes.
.. toctree::
:maxdepth: 2

View File

@ -1,2 +1,81 @@
Define Service Nodes
====================
This section shows how to configure an xCAT Hierarchy and provision xCAT Service Nodes from an existing xCAT cluster.
*This document assumes that the compute nodes in your cluster have already been defined in the xCAT database and that you have successfully provisioned them using xCAT.*
The following table illustrates the cluster being used in this example:
+----------------------+----------------------+
| Operating System     | rhels7.1             |
+----------------------+----------------------+
| Architecture         | ppc64le              |
+----------------------+----------------------+
| xCAT Management Node | xcat01               |
+----------------------+----------------------+
| Compute Nodes        | r1n01                |
| (group=rack1)        | r1n02                |
|                      | r1n03                |
|                      | ...                  |
|                      | r1n10                |
+----------------------+----------------------+
| Compute Nodes        | r2n01                |
| (group=rack2)        | r2n02                |
|                      | r2n03                |
|                      | ...                  |
|                      | r2n10                |
+----------------------+----------------------+
#. Select the compute nodes that will become service nodes
The first node in each rack, ``r1n01`` and ``r2n01``, is selected to become an xCAT Service Node and manage the compute nodes in its rack.
#. Change the attributes of these nodes to make them part of the **service** group: ::
chdef -t node -o r1n01,r2n01 groups=service,all
#. When ``copycds`` was run against the ISO image, several osimage definitions were created in the ``osimage`` table. The ones containing "service" are provided to help easily provision xCAT Service Nodes. ::
# lsdef -t osimage | grep rhels7.1
rhels7.1-ppc64le-install-compute (osimage)
rhels7.1-ppc64le-install-service (osimage) <======
rhels7.1-ppc64le-netboot-compute (osimage)
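If you want to see what makes the service image different from the compute image, the definition can be inspected with ``lsdef``; for example, the package list attributes that pull in the Service Node software: ::

# show the package list attributes of the service osimage
lsdef -t osimage rhels7.1-ppc64le-install-service -i pkglist,otherpkglist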
#. Add the service nodes to the ``servicenode`` table: ::
chdef -t group -o service setupnfs=1 setupdhcp=1 setuptftp=1 setupnameserver=1 setupconserver=1
**Tips/Hints**
* Even if you do not want xCAT to configure any services, you must define the service nodes in the ``servicenode`` table with at least one attribute (it can be set to 0); otherwise xCAT will not recognize the node as a service node
* See the ``setup*`` attributes in the node definition man page for the list of available services: ``man node``
* For clusters with subnetted management networks, you might want to set ``setupipforward=1``
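After running ``chdef``, it is worth confirming that the attributes were stored as expected: ::

# verify the setup* attributes on the service group
lsdef -t group -o service -i setupnfs,setupdhcp,setuptftp,setupnameserver,setupconserver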
#. Add additional postscripts for Service Nodes (optional)
By default, xCAT will execute the ``servicenode`` postscript when a service node is installed or diskless booted. This postscript sets up the necessary credentials and installs the xCAT software on the Service Nodes. If you have additional postscripts that you want to execute on the service nodes, copy them to ``/install/postscripts`` and run the following: ::
chdef -t group -o service -p postscripts=<mypostscript>
#. Assign Compute Nodes to their Service Nodes
The node attributes ``servicenode`` and ``xcatmaster`` define which Service Node serves a particular compute node.
* ``servicenode`` - defines which Service Node the **Management Node** should send commands to (e.g. ``xdsh``); it should be set to the hostname or IP address by which the management node can contact the service node.
* ``xcatmaster`` - defines which Service Node the **Compute Node** should boot from; it should be set to the hostname or IP address by which the compute node can contact the service node.
You must set both ``servicenode`` and ``xcatmaster`` regardless of whether or not you are using service node pools; in most scenarios, the values will be identical. ::
chdef -t group -o rack1 servicenode=r1n01 xcatmaster=r1n01
chdef -t group -o rack2 servicenode=r2n01 xcatmaster=r2n01
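Because the attributes are set at the group level, each member node inherits them; this can be spot-checked against any compute node (``r1n05`` is simply one of the example rack1 nodes): ::

# confirm that a rack1 compute node resolves its service node attributes
lsdef r1n05 -i servicenode,xcatmaster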
#. Set the conserver and monserver attributes
Set which service node should run the conserver (console) and monserver (monitoring) daemons for the nodes in the group. The most typical setup is to have the service node also act as its own conserver and monserver. ::
chdef -t group -o rack1 conserver=r1n01 monserver=r1n01
chdef -t group -o rack2 conserver=r2n01 monserver=r2n01

View File

@ -1,3 +1,9 @@
Service Nodes 101
=================
Service Nodes are similar to the xCAT Management Node in that each Service Node runs an instance of the xCAT daemon, ``xcatd``. The ``xcatd`` daemons communicate with each other using the same XML/SSL protocol that the xCAT client uses to communicate with ``xcatd`` on the Management Node.
The Service Nodes need to communicate with the xCAT database running on the Management Node. This is done using the remote client capabilities of the database. This is why the default SQLite database cannot be used.
xCAT Service Nodes are installed with a special xCAT package, ``xCATsn``, which tells ``xcatd`` running on the node to behave as a Service Node rather than as the Management Node.
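As a quick sanity check after a Service Node has been provisioned, you can confirm from the Management Node that the package is installed (a sketch assuming an RPM-based distribution, as in the examples in these documents): ::

# query the xCATsn package on all nodes in the service group
xdsh service "rpm -q xCATsn"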