mirror of https://github.com/xcat2/xcat-core.git synced 2025-09-07 18:58:14 +00:00

Merge pull request #13 from whowutwut/large_cluster

Merge in documentation for Large Cluster Support
This commit is contained in:
Victor Hu
2015-08-13 10:58:39 -04:00
12 changed files with 331 additions and 26 deletions


@@ -1,42 +1,34 @@
Databases
=========
Configure a Database
====================
xCAT Supports the following databases to be used by xCAT on the Management node
xCAT uses the SQLite database (https://www.sqlite.org/) as the default database and it is initialized during xCAT installation of the Management Node. If using Service Nodes, SQLite **cannot** be used because Service Nodes require remote access to the xCAT database. One of the following databases should be used:
* SQLite
* MySQL/MariaDB
* PostgreSQL
* DB2
* :ref:`mysql_reference_label`
* :ref:`postgresql_reference_label`
SQLite
------
SQLite database is the default database used by xCAT and is initialized when xCAT is installed on the management node.
SQLite is a small, light-weight, daemon-less database that requires no configuration or maintenance. This database is sufficient for small to moderate sized systems (< 1000 nodes).
**The SQLite database can NOT be used for xCAT hierarchy support because service nodes require remote access to the database and SQLite does NOT support remote access.**
For xCAT hierarchy, you will need to use one of the following alternate databases:
.. _mysql_reference_label:
MySQL/MariaDB
-------------
.. toctree::
   :maxdepth: 2

   mysql_install.rst
   mysql_configure.rst
   mysql_using.rst
   mysql_remove.rst
.. _postgresql_reference_label:
PostgreSQL
----------
.. toctree::
   :maxdepth: 2

   postgres_install.rst
   postgres_configure.rst
   postgres_tips.rst
   postgres_using.rst
   postgres_remove.rst


@@ -1,2 +1,24 @@
Configure MySQL
===============
Configure MySQL/MariaDB
=======================
Migrate xCAT to use MySQL/MariaDB
---------------------------------
The following utility is provided to migrate an existing xCAT database from SQLite to MySQL/MariaDB. ::

   mysqlsetup -i

If you need to update the database at a later time to give access to your service nodes, you can use the ``mysqlsetup -u -f`` command. A file needs to be provided with all the hostnames and/or IP addresses of the servers that need to access the database on the Management Node; wildcards can be used. ::

   TODO: Show an example here of file1
   mysqlsetup -u -f /path/to/file1
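As a rough sketch (the hostnames and path are invented for illustration), such a file simply lists one host per line, with MySQL-style ``%`` wildcards where needed:

```shell
# hypothetical access list for `mysqlsetup -u -f` (names are invented)
cat > /tmp/file1 <<'EOF'
sn01.cluster.example.com
sn02.cluster.example.com
10.1.%.%
EOF
# then, on the management node: mysqlsetup -u -f /tmp/file1
```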
**While not recommended**, if you wish to manually migrate your xCAT database, see the following documentation:
`Manually set up MySQL <https://sourceforge.net/p/xcat/wiki/Setting_Up_MySQL_as_the_xCAT_DB/#configure-mysql-manually>`_
Granting/Revoking access to the database for Service Node Clients
-----------------------------------------------------------------
https://sourceforge.net/p/xcat/wiki/Setting_Up_MySQL_as_the_xCAT_DB/#granting-or-revoking-access-to-the-mysql-database-to-service-node-clients
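The wiki page above has the authoritative details; as a hypothetical sketch (the user name, hosts, and password are invented, and the pre-MySQL-8 ``GRANT ... IDENTIFIED BY`` form is assumed), the underlying statements look roughly like this. The block only prints them; on a real Management Node they would be fed to ``mysql -u root -p``:

```shell
# hypothetical GRANT statements for service node access (names invented);
# '%' is the MySQL host wildcard
sql="GRANT ALL ON xcatdb.* TO 'xcatadmin'@'sn01.cluster.example.com' IDENTIFIED BY 'cluster';
GRANT ALL ON xcatdb.* TO 'xcatadmin'@'10.1.%.%' IDENTIFIED BY 'cluster';
FLUSH PRIVILEGES;"
echo "$sql"
```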


@@ -1,2 +1,85 @@
Install MySQL
=============
Install MySQL/MariaDB
=====================
The MySQL database has been supported by xCAT since xCAT 2.1. MariaDB is a fork of the MySQL project, released around 2009, and is a drop-in replacement for MySQL. MariaDB support within xCAT started in version 2.8.5 and is fully supported moving forward.
+------------+------------+------------+
| Database   | MySQL      | MariaDB    |
+============+============+============+
| xCAT 2.1+  | Yes        | No         |
+------------+------------+------------+
| xCAT 2.8.5 | Yes        | RHEL 7     |
+------------+------------+------------+
| xCAT 2.9   | Yes        | SLES 12    |
+------------+------------+------------+
| xCAT 2.10+ | Yes        | Yes        |
+------------+------------+------------+
MySQL/MariaDB packages are shipped as part of most Linux Distributions.
Redhat Enterprise Linux
-----------------------
* MySQL - Using ``yum``, ensure that the following packages are installed on the management node: ::

     perl-DBD-MySQL*
     mysql-server-5.*
     mysql-5.*
     mysql-devel-5.*
     mysql-bench-5.*
     mysql-connector-odbc-*

* MariaDB - Using ``yum``, ensure that the following packages are installed on the management node: ::

     mariadb-devel-5.*
     mariadb-libs-5.*
     mariadb-server-5.*
     mariadb-bench-5.*
     mariadb-5.*
     perl-DBD-MySQL*
     mysql-connector-odbc-*
     unixODBC*
Suse Linux Enterprise Server
----------------------------
* MySQL - Using ``zypper``, ensure that the following packages are installed on the management node: ::

     mysql-client-5*
     libmysqlclient_r15*
     libqt4-sql-mysql-4*
     libmysqlclient15-5*
     perl-DBD-mysql-4*
     mysql-5*

* MariaDB - Using ``zypper``, ensure that the following packages are installed on the management node: ::

     mariadb-client-10.*
     mariadb-10.*
     mariadb-errormessages-10.*
     libqt4-sql-mysql-*
     libmysqlclient18-*
     perl-DBD-mysql-*
Debian/Ubuntu
-------------
* MySQL - Using ``apt-get``, ensure that the following packages are installed on the management node: ::

     mysql-server
     mysql-common
     libdbd-mysql-perl
     libmysqlclient18
     mysql-client-5*
     mysql-client-core-5*
     mysql-server-5*
     mysql-server-core-5*

* MariaDB - Using ``apt-get``, ensure that the following packages are installed on the management node: ::

     libmariadbclient18
     mariadb-client
     mariadb-common
     mariadb-server


@@ -0,0 +1,2 @@
Removing ``xcatdb`` from MySQL/MariaDB
======================================


@@ -0,0 +1,2 @@
Using MySQL/MariaDB
===================


@@ -0,0 +1,44 @@
Removing ``xcatdb`` from PostgreSQL
===================================
To remove ``xcatdb`` completely from the PostgreSQL database:

#. Run a backup of the database to save any information that is needed: ::

      mkdir -p ~/xcat-dbback
      dumpxCATdb -p ~/xcat-dbback

#. Stop the ``xcatd`` daemon on the management node.

   **Note:** If you are using *xCAT Hierarchy (service nodes)* and removing ``xcatdb`` from PostgreSQL, hierarchy will no longer work. You will need to configure another database which supports remote database access to continue using the hierarchy feature. ::

      service xcatd stop

#. Remove the ``xcatdb`` from PostgreSQL: ::

      su - postgres

   Drop the ``xcatdb``: ::

      dropdb xcatdb

   Remove the ``xcatadm`` database owner: ::

      dropuser xcatadm

   Clean up the PostgreSQL files (necessary if you want to re-create the database): ::

      cd /var/lib/pgsql/data
      rm -rf *

#. Move, or remove, the ``/etc/xcat/cfgloc`` file as it points xCAT to PostgreSQL (without this file, xCAT defaults to SQLite): ::

      mv /etc/xcat/cfgloc /etc/xcat/cfgloc.postgres

#. Restore the database backup into SQLite: ::

      XCATBYPASS=1 restorexCATdb -p ~/xcat-dbback

#. Restart ``xcatd``: ::

      service xcatd start
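Since step 3 is destructive, a quick sanity check of the backup from step 1 is worthwhile before dropping anything; ``dumpxCATdb`` writes one ``.csv`` file per xCAT table into the target directory. A minimal sketch:

```shell
# verify the dumpxCATdb backup directory actually contains table dumps
# before dropping the database (directory from the steps above)
backup_dir=~/xcat-dbback
if ls "$backup_dir"/*.csv >/dev/null 2>&1; then
  status="backup looks ok"
else
  status="no table dumps found - do not proceed"
fi
echo "$status"
```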


@@ -0,0 +1,39 @@
Using PostgreSQL
================
Refer to `<http://www.postgresql.org/>`_ for the latest documentation.

Using ``psql``, connect to the xcat database: ::

   su - postgres
   psql -h <hostname> -U xcatadm -d xcatdb (default pw: cluster)

List the xCAT tables: ::

   xcatdb=> \dt

Show the entries in the ``nodelist`` table: ::

   xcatdb=> select * from nodelist;

Quit ``psql``: ::

   xcatdb=> \q
Useful Commands
---------------
Show the SQL ``CREATE`` statement for a table: ::

   /usr/bin/pg_dump xcatdb -U xcatadm -t <table_name>

   # example, for the prescripts table:
   /usr/bin/pg_dump xcatdb -U xcatadm -t prescripts

List all databases in PostgreSQL: ::

   su - postgres
   psql -l


@@ -1,7 +1,22 @@
Managing Large Clusters
=======================
Large Cluster Support
=====================
xCAT supports the management of very large clusters through the use of **xCAT Hierarchy** or **xCAT Service Nodes**.

When dealing with large clusters, it is recommended to balance the load by having more than one node, in addition to the Management Node ("MN"), handle the installation and management of the compute nodes. These additional *helper* nodes are referred to as **xCAT Service Nodes** ("SN"). The Management Node can delegate all management operational needs to the Service Node responsible for a set of compute nodes.

The following configurations are supported:

* Each service node installs/manages a specific set of compute nodes
* A pool of service nodes, any of which can respond to an installation request from a compute node (*requires the service nodes to be aligned with the network broadcast domains; a compute node chooses its service node based on which one responds to its DHCP request first*)
* A hybrid of the above, where each specific set of compute nodes has two or more service nodes in a pool

The following documentation assumes an xCAT cluster has already been configured and covers the additional steps needed to support xCAT Hierarchy via Service Nodes.
.. toctree::
   :maxdepth: 2

   service_nodes/service_nodes101.rst
   databases/index.rst
   service_nodes/define_service_nodes.rst
   service_nodes/provision_service_nodes.rst
   tips.rst


@@ -0,0 +1,81 @@
Define Service Nodes
====================
This section shows how to configure an xCAT Hierarchy and provision xCAT service nodes from an existing xCAT cluster.

*The document assumes that the compute nodes in your cluster have already been defined in the xCAT database and that you have successfully provisioned them using xCAT.*

The following table illustrates the cluster used in this example:
+----------------------+----------------------+
| Operating System     | rhels7.1             |
+----------------------+----------------------+
| Architecture         | ppc64le              |
+----------------------+----------------------+
| xCAT Management Node | xcat01               |
+----------------------+----------------------+
| Compute Nodes        | r1n01                |
| (group=rack1)        | r1n02                |
|                      | r1n03                |
|                      | ...                  |
|                      | r1n10                |
+----------------------+----------------------+
| Compute Nodes        | r2n01                |
| (group=rack2)        | r2n02                |
|                      | r2n03                |
|                      | ...                  |
|                      | r2n10                |
+----------------------+----------------------+
#. Select the compute nodes that will become service nodes

   The first node in each rack, ``r1n01`` and ``r2n01``, is selected to become an xCAT service node and manage the compute nodes in its rack.

#. Change the attributes of these nodes to make them part of the **service** group: ::

      chdef -t node -o r1n01,r2n01 groups=service,all
#. When ``copycds`` was run against the ISO image, several osimage definitions were created in the ``osimage`` table. The ones containing "service" are provided to help easily provision xCAT service nodes. ::

      # lsdef -t osimage | grep rhels7.1
      rhels7.1-ppc64le-install-compute  (osimage)
      rhels7.1-ppc64le-install-service  (osimage)  <======
      rhels7.1-ppc64le-netboot-compute  (osimage)
#. Add the service nodes to the ``servicenode`` table: ::

      chdef -t group -o service setupnfs=1 setupdhcp=1 setuptftp=1 setupnameserver=1 setupconserver=1

   **Tips/Hints**

   * Even if you do not want xCAT to configure any services, you must define the service nodes in the ``servicenode`` table with at least one attribute (it can be set to 0); otherwise xCAT will not recognize the node as a service node
   * See the ``setup*`` attributes in the node definition man page for the list of available services: ``man node``
   * For clusters with subnetted management networks, you might want to set ``setupipforward=1``
#. Add additional postscripts for Service Nodes (optional)

   By default, xCAT will execute the ``servicenode`` postscript when the service node is installed or diskless booted. This postscript sets up the necessary credentials and installs the xCAT software on the Service Nodes. If you have additional postscripts that you want to execute on the service nodes, copy them to ``/install/postscripts`` and run the following: ::

      chdef -t group -o service -p postscripts=<mypostscript>
#. Assign Compute Nodes to their Service Nodes

   The node attributes ``servicenode`` and ``xcatmaster`` define which Service Node serves a particular compute node.

   * ``servicenode`` - defines which Service Node the **Management Node** should send commands to (e.g. ``xdsh``); set it to the hostname or IP address by which the management node can contact the service node.
   * ``xcatmaster`` - defines which Service Node the **Compute Node** should boot from; set it to the hostname or IP address by which the compute node can contact the service node.

   You must set both ``servicenode`` and ``xcatmaster`` regardless of whether or not you are using service node pools; for most scenarios, the values will be identical. ::

      chdef -t group -o rack1 servicenode=r1n01 xcatmaster=r1n01
      chdef -t group -o rack2 servicenode=r2n01 xcatmaster=r2n01
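With many racks, the per-rack assignments follow a simple pattern. A hedged sketch (rack and node naming assumed from the example table above) that only prints the ``chdef`` commands, so they can be reviewed before piping to ``sh`` on a real management node:

```shell
# generate the per-rack servicenode/xcatmaster assignments
# (prints the commands rather than running them)
cmds=""
for i in 1 2; do
  cmds="${cmds}chdef -t group -o rack$i servicenode=r${i}n01 xcatmaster=r${i}n01
"
done
printf '%s' "$cmds"
```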
#. Set the conserver and monserver attributes

   Set which service node should run the conserver (console) and monserver (monitoring) daemon for the nodes in each group. The most typical setup is to have the service node also act as the conserver and monserver for its nodes. ::

      chdef -t group -o rack1 conserver=r1n01 monserver=r1n01
      chdef -t group -o rack2 conserver=r2n01 monserver=r2n01


@@ -0,0 +1,11 @@
Provision Service Nodes
=======================
Diskful
-------
Diskless
--------
Verification
------------


@@ -0,0 +1,9 @@
Service Nodes 101
=================
Service Nodes are similar to the xCAT Management Node in that each Service Node runs an instance of the xCAT daemon, ``xcatd``. The ``xcatd`` daemons communicate with each other using the same XML/SSL protocol that the xCAT client uses to communicate with ``xcatd`` on the Management Node.

The Service Nodes need to communicate with the xCAT database running on the Management Node. This is done using the remote client capabilities of the database, which is why the default SQLite database cannot be used.

The xCAT Service Nodes are installed with a special xCAT package, ``xCATsn``, which tells ``xcatd`` running on the node to behave as a Service Node rather than the Management Node.
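A quick way to tell which role a given node is playing is to check for the ``xCATsn`` package; a minimal sketch, assuming an RPM-based distribution:

```shell
# on RPM-based systems, the presence of the xCATsn package marks a service node
if rpm -q xCATsn >/dev/null 2>&1; then
  role="service node"
else
  role="not a service node (management or compute node)"
fi
echo "$role"
```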


@@ -0,0 +1,5 @@
Tips/Tuning/Suggestions
=======================
TODO: Content from: https://sourceforge.net/p/xcat/wiki/Hints_and_Tips_for_Large_Scale_Clusters/