
Merge pull request #589 from hu-weihua/zone

Add details for "zones" document
Xiaopeng Wang
2016-01-18 03:17:30 -05:00
5 changed files with 160 additions and 1 deletion


@@ -1,2 +1,62 @@
Change Zones
============
After you create a zone, you can use the :doc:`chzone </guides/admin-guides/references/man1/chzone.1>` command to make changes. Some of the things you can do are the following:
* Add nodes to the zone
* Remove nodes from the zone
* Regenerate the keys
* Change the ``sshbetweennodes`` setting
* Make it the default zone
The following command will add node1-node10 to zone1 and create a group called zone1 on each of the nodes. ::
chzone zone1 -a node1-node10 -g
The following command will remove node20-node30 from zone1 and remove the group zone1 from those nodes. ::
chzone zone1 -r node20-node30 -g
The following command will change zone1 such that root cannot ssh between the nodes without entering a password. ::
# chzone zone1 -s no
# lsdef -t zone zone1
Object name: zone1
    defaultzone=no
    sshbetweennodes=no
    sshkeydir=/etc/xcat/sshkeys/zone1/.ssh
The following command will change zone1 to the default zone.
**Note**: you must use the ``-f`` flag to force the change. There can only be one default zone in the ``zone`` table. ::
# chzone zone1 -f --defaultzone
# lsdef -t zone -l
Object name: xcatdefault
    defaultzone=no
    sshbetweennodes=yes
    sshkeydir=/root/.ssh
Object name: zone1
    defaultzone=yes
    sshbetweennodes=no
    sshkeydir=/etc/xcat/sshkeys/zone1/.ssh
Finally, if the zone's root ssh keys become corrupted or compromised, you can regenerate them. ::
chzone zone1 -K
or ::
chzone zone1 -k <path to SSH RSA private key>
As with the :doc:`mkzone </guides/admin-guides/references/man1/mkzone.1>` command, these commands only change the definitions in the database; you must run one of the following commands to distribute the keys to the nodes. ::
updatenode mycompute -k
or ::
xdsh mycompute -K
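After redistributing the keys, you can sanity-check that the zone's key directory was regenerated. This is only a quick, read-only sketch; the exact files present in the directory may vary by release. ::
# list the regenerated root ssh keys for zone1
ls -l /etc/xcat/sshkeys/zone1/.ssh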


@@ -1,2 +1,17 @@
Configure Zones
===============
Setting up zones only applies to nodes. The MN root ssh keys will still be used on any devices, switches, and hardware control points; all ssh access to these devices is done from the MN or SN. The commands that distribute keys to these entities do not recognize zones (e.g. ``rspconfig``, ``xdsh -K --devicetype``). You should never define the Management Node in a zone; the zone commands will not allow this.
The ssh keys will be generated and stored in the ``/etc/xcat/sshkeys/<zonename>/.ssh`` directory. You must not change this path. xCAT will manage this directory and sync it to the service nodes as needed for hierarchy.
When using zones, the **site** table **sshbetweennodes** attribute is no longer used. If it is set, you will get a warning that it is no longer used; you can simply remove the setting to get rid of the warning. The **zone** table **sshbetweennodes** attribute is used instead, so the setting can be assigned per zone. When using zones, the attribute can only be set to yes or no; lists of nodegroups are not supported as they were in the **site** table **sshbetweennodes** attribute. With the ability to create zones, you should be able to set up your node groups to allow or disallow passwordless root ssh as before.
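If you see the warning, one way to locate and clear the obsolete **site** table setting is sketched below, assuming it was stored as an ``sshbetweennodes`` row in the ``site`` table; adjust this to how your site was actually configured. ::
# show the obsolete site-level setting, if any
tabdump site | grep sshbetweennodes
# delete the row from the site table
chtab -d key=sshbetweennodes site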
There are three commands to support zones:
* :doc:`mkzone </guides/admin-guides/references/man1/mkzone.1>` - creates the zones
* :doc:`chzone </guides/admin-guides/references/man1/chzone.1>` - changes a previously created zone
* :doc:`rmzone </guides/admin-guides/references/man1/rmzone.1>` - removes a zone
**Note**: It is highly recommended that you only use the zone commands for creating and maintaining your zones. While they run, they maintain a number of tables and directories for the zones.
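If you want to see what the zone commands are maintaining, you can inspect the zone table and the key directories read-only; this is only a verification sketch, not a substitute for the zone commands themselves. ::
# show the zone definitions stored in the database
tabdump zone
# show the per-zone ssh key directories
ls /etc/xcat/sshkeys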


@@ -1,2 +1,47 @@
Create Zones
============
The first time you run :doc:`mkzone </guides/admin-guides/references/man1/mkzone.1>`, it creates two zones: the zone you request, plus the xCAT default zone, which is added automatically. This command creates the two zones, but does not assign them to any nodes. There is a new attribute on the nodes called **zonename**. As long as it is not defined for a node, the node will use whatever is currently defined in the database as the defaultzone.
**Note**: if zones are defined in the zone table, there must be one and only one default zone. If a node does not have a zonename defined and there is no defaultzone in the zone table, you will get an error and no keys will be distributed.
For example: ::
# mkzone zone1
# lsdef -t zone -l
Object name: xcatdefault
    defaultzone=yes
    sshbetweennodes=yes
    sshkeydir=/root/.ssh
Object name: zone1
    defaultzone=no
    sshbetweennodes=yes
    sshkeydir=/etc/xcat/sshkeys/zone1/.ssh
The following example creates zone2, assigns the nodes in the mycompute group to the zone, and automatically creates a group named after the zone on each node. ::
# mkzone zone2 -a mycompute -g
# lsdef mycompute
Object name: node1
    groups=zone2,mycompute
    postbootscripts=otherpkgs
    postscripts=syslog,remoteshell,syncfiles
    zonename=zone2
At this time we have only created the zone, assigned the nodes, and generated the SSH RSA keys to be distributed to the nodes. To set up the ssh keys on the nodes in the zone, run the following ``updatenode`` command. It will distribute the new keys to the nodes, automatically sync the zone key directory to any service nodes, and regenerate your ``mypostscript.<nodename>`` files to include the zonename, if you have ``precreatemypostscripts`` enabled. ::
updatenode mycompute -k
You can also use the following command, but it will not regenerate the ``mypostscript.<nodename>`` file. ::
xdsh mycompute -K
If you need to install the nodes, then run the following commands. They will do everything during the install that ``updatenode`` did. Running ``nodeset`` is very important, because it regenerates the ``mypostscript.<nodename>`` file to include the zonename attribute. ::
nodeset mycompute osimage=<mycomputeimage>
rsetboot mycompute net
rpower mycompute boot
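Once the nodes are back up with the new keys, a quick spot check of passwordless root ssh within the zone can be done as sketched below; ``node1`` and ``node2`` are hypothetical members of the zone, and ``BatchMode=yes`` makes ssh fail instead of prompting if the keys are not in place. ::
# from the MN, hop from one zone member to another
xdsh node1 "ssh -o BatchMode=yes node2 hostname"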


@@ -1,4 +1,21 @@
Overview
========
xCAT supports the concept of zones within a single xCAT cluster managed by one Management Node. The nodes in the cluster can be divided up into multiple zones that have different ssh keys managed separately.
Each defined zone has its own root ssh RSA keys, so that any node can ssh without a password to any other node in the same zone, but cannot ssh to nodes in another zone without being prompted for a password.
Currently xCAT changes the root ssh keys on the service nodes (SN) and compute nodes (CN), which are generated at install time, to the root ssh keys from the Management Node. It also changes the ssh **hostkeys** on the SN and CN to a set of pre-generated hostkeys from the MN. Putting the RSA public key in the **authorized_keys** file on the service nodes and compute nodes allows passwordless ssh from the Management Node (MN) to the service nodes and the compute nodes. Today, by default, all nodes in the xCAT cluster are set up to be able to ssh without a password to the other nodes, except when the site **sshbetweennodes** attribute is used (more on that later). The pre-generated hostkeys make all nodes look the same to ssh, so you are never prompted for updates to ``known_hosts``.
The new support only addresses the way root's ssh RSA keys are generated and distributed; hostkey generation and distribution are not affected. It only supports setting up zones for the root userid; non-root users are not affected. The Management Node (MN) and Service Nodes (SN) are still set up so that root can ssh without a password from the MN and SNs to the nodes, which is required for xCAT commands to work. Also, the SNs are still able to ssh to each other without a password. Compute nodes and service nodes are not set up by xCAT to be able to ssh to the Management Node without being prompted for a password; this protects the Management Node.
In the past, the setup allowed compute nodes to ssh to the SNs without a password. With zones, this is no longer the case: zones only allow compute nodes to ssh without a password to other compute nodes, unless you add the service node into the zone. Adding a service node into a zone is not considered a good idea, because:
* If you put a service node in a zone, it will no longer be able to ssh to the other service nodes without being prompted for a password.
* Allowing the compute nodes to ssh to the service node could allow the service node to be compromised by anyone who gained access to a compute node.
It is recommended not to put the service nodes in any zone; they will then use the default zone, which today assigns root's home directory ssh keys as in previous releases. More on the default zone later.
If you do not wish to use zones, your cluster will continue to work as before. The root ssh keys for the nodes will be taken from the Management Node's root home directory ssh keys, or from the service node's root home directory ssh keys in the hierarchical case, and put on the nodes when installing or when running ``xdsh -K`` or ``updatenode -k``. To continue to operate this way, do not define a zone. The moment you define a zone in the database, you will begin using zones in xCAT.
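To check whether any zones are currently defined on your cluster (and therefore whether zone behavior is in effect), you can list the zone objects or dump the zone table; this is only a quick sketch using standard xCAT query commands, and no output or an empty table means no zones are defined. ::
# list all defined zone objects, if any
lsdef -t zone
# or look at the raw zone table
tabdump zone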


@@ -1,2 +1,24 @@
Remove Zones
============
The :doc:`rmzone </guides/admin-guides/references/man1/rmzone.1>` command will remove a zone from the database. It will also remove the zone name from the **zonename** attribute on all nodes currently defined in the zone, and with the ``-g`` option it will also remove the group named after the zone from those nodes. The **zonename** attribute will be left undefined, which means that the next time the keys are distributed, they will be picked up from the defaultzone. It will also remove the ``/etc/xcat/sshkeys/<zonename>`` directory.
**Note**: :doc:`rmzone </guides/admin-guides/references/man1/rmzone.1>` will always remove the zonename defined on the nodes in the zone. If you use other xCAT commands and end up with a zonename defined on a node that is not defined in the **zone** table, you will get errors when you try to distribute the keys and the keys will not be distributed. ::
rmzone zone1 -g
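To spot nodes that still carry a **zonename** pointing at a zone that no longer exists, a quick check of the attribute can help; ``mycompute`` is the example node group used elsewhere in this document. ::
# show the zonename attribute; empty or absent means the default zone will be used
lsdef mycompute -i zonename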
If you want to remove the default zone, you must use the ``-f`` flag. You will probably only need this when removing all the zones from the zone table. If you want to change which zone is the default, use the :doc:`chzone </guides/admin-guides/references/man1/chzone.1>` command instead.
**Note**: if you remove the default zone and nodes have the ``zonename`` attribute undefined, you will get errors when you try to distribute keys. ::
rmzone zone1 -g -f
As with the other zone commands, after the location of a node's root ssh keys has changed, you should use one of the following commands to update the keys on the nodes. ::
updatenode mycompute -k
or ::
xdsh mycompute -K