
Merge pull request #2026 from gurevichmark/spell_check2

Doc spelling and grammar fixes
Victor Hu 2016-10-25 21:37:55 -04:00 committed by GitHub
commit d0e0c90fa6
109 changed files with 205 additions and 218 deletions


@ -7,7 +7,7 @@ The chain table (``tabdump chain``) is an xCAT database table that holds the cha
* currchain
* chain
-To know how are those three attributes used, pls reference the picture:
+To know how these three attributes are used, reference the picture:
.. image:: chain_tasks_logic.png


@ -25,7 +25,7 @@ The ``image.tgz`` **must** have the following properties:
* Created using the ``tar zcvf`` command
* The tarball must include a ``runme.sh`` script to initiate the execution of the runimage
-To create your own image, please reference :ref:`creating image for runimage <create_image_for_runimage>`.
+To create your own image, reference :ref:`creating image for runimage <create_image_for_runimage>`.
**Tip**: You could try to run ``wget http://<IP of xCAT Management Node>/<dir>/image.tgz`` manually to make sure the path has been set correctly.


@ -36,11 +36,11 @@ Database Connection Changes
Granting or revoking access privilege in the database for the service node.
-* For mysql, please refer to :ref:`grante_revoke_mysql_access_label`.
+* For mysql, refer to :ref:`grante_revoke_mysql_access_label`.
.. There is no procedure in old document on sourceforge for postgress to
grant or revoke the access privilege for service node.
-* For postgress, please refer to `TODO <https://localhost/todo>`_.
+* For PostgreSQL, refer to `TODO <https://localhost/todo>`_.
Update Provision Environment on Service Node
--------------------------------------------


@ -56,8 +56,7 @@ The following example describes the steps for **rhels7.1** on **ppc64le**::
cd confluent-dep-rh7-ppc64le/
./mklocalrepo.sh
-**Note:** If the OS/architecture you are looking for is not provided under confluent-dep,
-please send an email to the xcat-user mailing list: xcat-user@lists.sourceforge.net
+**Note:** If the OS/architecture you are looking for is not provided under confluent-dep, send an email to the xcat-user mailing list: xcat-user@lists.sourceforge.net
Install


@ -69,7 +69,7 @@ Now run the xCAT Docker container with the Docker image "xcat/xcat-ubuntu-x86_64
* use ``--privileged=true`` to give extended privileges to this container
* use ``--hostname`` to specify the hostname of the container, which is available inside the container
* use ``--name`` to assign a name to the container, this name can be used to manipulate the container on Docker host
-* use ``--add-host="xcatmn.clusers.com xcatmn:10.5.107.101"`` to write the ``/etc/hosts`` entries of Docker container inside container. Since xCAT use the FQDN(Fully Qualified Domain Name) to determine the cluster domain on startup, please make sure the format to be "<FQDN> <hostname>: <IP Address>", otherwise, you need to set the cluster domain with ``chdef -t site -o clustersite domain="clusters.com"`` inside the container manually
+* use ``--add-host="xcatmn.clusters.com xcatmn:10.5.107.101"`` to write the ``/etc/hosts`` entries of the Docker container inside the container. Since xCAT uses the FQDN (Fully Qualified Domain Name) to determine the cluster domain on startup, make sure the format is "<FQDN> <hostname>: <IP Address>"; otherwise, you need to set the cluster domain with ``chdef -t site -o clustersite domain="clusters.com"`` inside the container manually
* use ``--volume /docker/xcatdata/:/install`` to mount a pre-created "/docker/xcatdata" directory on Docker host to "/install" directory inside container as a data volume. This is optional, it is mandatory if you want to backup and restore xCAT data.
* use ``--net=mgtnet`` to connect the container to the Docker network "mgtnet"
* use ``--ip=10.5.107.101`` to specify the IP address of the xCAT Docker container
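
Putting these options together, a complete invocation might look like the following sketch (the image name and addresses are taken from the examples above; adjust them to your environment)::

    docker run -d --privileged=true \
        --hostname=xcatmn.clusters.com --name=xcatmn \
        --add-host="xcatmn.clusters.com xcatmn:10.5.107.101" \
        --volume /docker/xcatdata/:/install \
        --net=mgtnet --ip=10.5.107.101 \
        xcat/xcat-ubuntu-x86_64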


@ -4,7 +4,7 @@ Setup Docker host
Install Docker Engine
---------------------
-The Docker host to run xCAT Docker image should be a baremental or virtual server with Docker v1.10 or above installed. For the details on system requirements and Docker installation, please refer to `Docker Installation Docs <https://docs.docker.com/engine/installation/>`_.
+The Docker host to run the xCAT Docker image should be a bare-metal or virtual server with Docker v1.10 or above installed. For the details on system requirements and Docker installation, refer to `Docker Installation Docs <https://docs.docker.com/engine/installation/>`_.
**Note:**


@ -3,7 +3,7 @@ Docker life-cycle management in xCAT
The Docker linux container technology is currently very popular. xCAT can help managing Docker containers. xCAT, as a system management tool has the natural advantage for supporting multiple operating systems, multiple architectures and large scale clusters.
-This document describes how to use xCAT for docker management, from Docker Host setup to docker container operationis.
+This document describes how to use xCAT for docker management, from Docker Host setup to docker container operations.
**Note:** The document was verified with **Docker Version 1.10, 1.11** and **Docker API version 1.22.** The Docker Host was verified on **ubuntu14.04.3 x86_64**, **ubuntu15.10 x86_64**, **ubuntu16.04 x86_64** and **ubuntu16.04 ppc64el**.


@ -17,7 +17,7 @@ There are a lot of ways for data synchronization, but considering the specific x
**1\. Move physical disks between the two management nodes**: if we could physically move the hard disks from the failed management node to the backup management node, and bring up the backup management node, then both the operating system and xCAT data will be identical between the new management node and the failed management node. RAID1 or disk mirroring could be used to avoid the disk be a single point of failure.
-**2\. Shared data**: the two management nodes use the single copy of xCAT data, no matter which management node is the primary MN, the cluster management capability is running on top of the single data copy. The access to the data could be done through various ways like shared storage, NAS, NFS, samba etc. Based on the protocol being used, the data might be accessible only on one management node at a time or be accessible on both management nodes in parellel. If the data could only be accessed from one management node, the failover process need to take care of the data access transition; if the data could be accessed on both management nodes, the failover does not need to consider the data access transition, it usually means the failover process could be faster.
+**2\. Shared data**: the two management nodes use a single copy of xCAT data; no matter which management node is the primary MN, the cluster management capability runs on top of the single data copy. The data could be accessed in various ways, like shared storage, NAS, NFS, samba, etc. Based on the protocol being used, the data might be accessible only on one management node at a time or be accessible on both management nodes in parallel. If the data could only be accessed from one management node, the failover process needs to take care of the data access transition; if the data could be accessed on both management nodes, the failover does not need to consider the data access transition, which usually means the failover process could be faster.
Warning: Running database through network file system has a lot of potential problems and is not practical, however, most of the database system provides database replication feature that can be used to synchronize the database between the two management nodes.


@ -149,7 +149,7 @@ So, in this documentation, we will setup xCAT on both management nodes before we
chdef -t site nameservers=10.1.0.1
chdef -t network 10_1_0_0-255_255_255_0 tftpserver=10.1.0.1
-#. Install and configure MySQL. MySQL will be used as the xCAT database system, please refer to the doc [ **todo** Setting_Up_MySQL_as_the_xCAT_DB].
+#. Install and configure MySQL. MySQL will be used as the xCAT database system, refer to the doc [ **todo** Setting_Up_MySQL_as_the_xCAT_DB].
Verify xcat is running on MySQL by running: ::
@ -219,7 +219,7 @@ Setup xCAT on the Standby Management Node
#. Install xCAT. The procedure described in :doc:`xCAT Install Guide <../../guides/install-guides/index>` should be used for the xCAT setup on the standby management node.
-#. Install and configure MySQL. MySQL will be used as the xCAT database system, please refer to the doc [Setting_Up_MySQL_as_the_xCAT_DB].
+#. Install and configure MySQL. MySQL will be used as the xCAT database system, refer to the doc [Setting_Up_MySQL_as_the_xCAT_DB].
Verify xcat is running on MySQL by running: ::
@ -689,7 +689,7 @@ Configure Pacemaker
All the cluster resources are managed by Pacemaker, here is an example ``pacemaker`` configuration that has been used by different HA MN customers. You might need to do some minor modifications based on your cluster configuration.
-Please be aware that you need to apply ALL the configuration at once. You cannot pick and choose which pieces to put in, and you cannot put some in now, and some later. Don't execute individual commands, but use crm configure edit instead. ::
+Be aware that you need to apply ALL the configuration at once. You cannot pick and choose which pieces to put in, and you cannot put some in now, and some later. Don't execute individual commands, but use crm configure edit instead. ::
node x3550m4n01
node x3550m4n02
@ -1043,7 +1043,7 @@ Add a crontab entry to check the differences
0 6 * * * /sbin/drbdadm verify all
-Please note that this process will take a few hours. You could schedule it at a time when it can be expected to run when things are relatively idle. You might choose to only run it once a week, but nightly seems to be a nice choice as well. You should only put this cron job on one side or the other of the DRBD mirror . not both.
+Note that this process will take a few hours. You could schedule it at a time when things are relatively idle. You might choose to only run it once a week, but nightly seems to be a nice choice as well. You should only put this cron job on one side or the other of the DRBD mirror, not both.
Correcting the differences automatically
----------------------------------------


@ -408,7 +408,7 @@ The operating system is installed on the internal disks.
#. Connect the shared disk to both management nodes
-To verify the shared disks are connected correctly, run the sginfo command on both management nodes and look for the same serial number in the output. Please be aware that the sginfo command may not be installed by default on Linux, the sginfo command is shipped with package sg3_utils, you can manually install the package sg3_utils on both management nodes.
+To verify the shared disks are connected correctly, run the sginfo command on both management nodes and look for the same serial number in the output. Be aware that the sginfo command may not be installed by default on Linux; it is shipped with the package sg3_utils, which you can manually install on both management nodes.
Once the sginfo command is installed, run sginfo -l command on both management nodes to list all the known SCSI disks, for example, enter: ::
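A sketch of what this check might look like (the device name and serial number here are purely illustrative)::

    sginfo -l               # list all known SCSI disks
    sginfo -s /dev/sdb      # print the serial number of one disk
    # Serial Number 'xxxxxxxxxxxx' -- compare this value on both nodes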


@ -329,7 +329,7 @@ Install corosync and pacemaker on both rhmn2 and rhmn1
Customize corosync/pacemaker configuration for xCAT
------------------------------------------------------
-Please be aware that you need to apply ALL the configuration at once. You cannot pick and choose which pieces to put in, and you cannot put some in now, and some later. Don't execute individual commands, but use crm configure edit instead.
+Be aware that you need to apply ALL the configuration at once. You cannot pick and choose which pieces to put in, and you cannot put some in now, and some later. Don't execute individual commands, but use crm configure edit instead.
Check that both rhmn2 and chetha are standby state now: ::


@ -16,7 +16,7 @@ Appendix B: Diagnostics
* **otherpkgs(including xCAT rpms) installation failed on the SN** --The OS
repository is not created on the SN. When the "yum" command is processing
the dependency, the rpm packages (including expect, nmap, and httpd, etc)
-required by xCATsn can't be found. In this case, please check whether the
+required by xCATsn can't be found. In this case, check whether the
``/install/postscripts/repos/<osver>/<arch>/`` directory exists on the MN.
If it is not on the MN, you need to re-run the "copycds" command, and there
will be some file created under the


@ -12,7 +12,7 @@ If you no longer want to use MySQL/MariaDB to maintain ``xcatdb``, and like to s
XCATBYPASS=1 restorexCATdb -p ~/xcat-dbback
-* Change to PostgreSQL, please following documentation: :doc:`/advanced/hierarchy/databases/postgres_install`
+* Change to PostgreSQL, following the documentation: :doc:`/advanced/hierarchy/databases/postgres_install`
* Change back to default xCAT database, SQLite (**Note**: xCAT Hierarchy cluster will no longer work)


@ -1,7 +1,7 @@
Diskless (Stateless) Installation
=================================
-**Note: The stateless Service Node is not supported in ubuntu hierarchy cluster. For ubuntu, please skip this section.**
+**Note: The stateless Service Node is not supported in ubuntu hierarchy cluster. For ubuntu, skip this section.**
If you want, your Service Nodes can be stateless (diskless). The Service Node
must contain not only the OS, but also the xCAT software and its dependencies.


@ -63,4 +63,4 @@ If the kit does contain a deployment parameter file, the contents of the file wi
addkitcomp -i <image> <kitcomponent name>
vi /install/osimages/<image>/kits/KIT_DEPLOY_PARAMS.otherpkgs.pkglist
-NOTE: Please be sure to know how changing any kit deployment parameters will impact the install of the product into the OS image. Many parameters include settings for automatic license acceptance and other controls to ensure proper unattended installs into a diskless image or remote installs into a diskful node. Changing these values will cause problems with genimage, updatenode, and other xCAT deployment commands.
+NOTE: Be sure to know how changing any kit deployment parameters will impact the install of the product into the OS image. Many parameters include settings for automatic license acceptance and other controls to ensure proper unattended installs into a diskless image or remote installs into a diskful node. Changing these values will cause problems with genimage, updatenode, and other xCAT deployment commands.


@ -101,7 +101,7 @@ The following software kits will be used to install the IBM HPC software stack o
addkitcomp -a -i rhels7.2-ppc64le-install-compute \
essl-computenode-3264rtecuda-5.4.0-0-rhels-7.2-ppc64le
-If the system doesn't have GPU and the CUDA toolkit is not needed, the adminstrator should not add the following kit components that requires the CUDA packages: ``essl-loginnode-5.4.0-0-rhels-7.2-ppc64le``, ``essl-computenode-3264rte-5.4.0-0-rhels-7.2-ppc64le`` and ``essl-computenode-3264rtecuda-5.4.0-0-rhels-7.2-ppc64le``. Please check the ESSL installation guide: http://www.ibm.com/support/knowledgecenter/SSFHY8_5.4.0/com.ibm.cluster.essl.v5r4.essl300.doc/am5il_xcatinstall.htm
+If the system doesn't have a GPU and the CUDA toolkit is not needed, the administrator should not add the following kit components that require the CUDA packages: ``essl-loginnode-5.4.0-0-rhels-7.2-ppc64le``, ``essl-computenode-3264rte-5.4.0-0-rhels-7.2-ppc64le`` and ``essl-computenode-3264rtecuda-5.4.0-0-rhels-7.2-ppc64le``. Check the ESSL installation guide: http://www.ibm.com/support/knowledgecenter/SSFHY8_5.4.0/com.ibm.cluster.essl.v5r4.essl300.doc/am5il_xcatinstall.htm
#. Add the **Parallel ESSL** kitcomponents to osimage.


@ -5,9 +5,9 @@ A **stateless**, or **diskless**, provisioned nodes is one where the operating s
To deploy stateless compute nodes, you must first create a stateless image. The "netboot" osimages created from ``copycds`` in the **osimage** table are sample osimage definitions that can be used for deploying stateless nodes.
-In a homogenous cluster, the management node is the same hardware architecture and running the same Operating System (OS) as the compute nodes, so ``genimage`` can directly be executed from the management node.
+In a homogeneous cluster, the management node is the same hardware architecture and running the same Operating System (OS) as the compute nodes, so ``genimage`` can directly be executed from the management node.
-The issues arises in a heterogenous cluster, where the management node is running a different level operating system *or* hardware architecture as the compute nodes in which to deploy the image. The ``genimage`` command that builds stateless images depends on various utilities provided by the base operating system and needs to be run on a node with the same hardware architecture and *major* Operating System release as the nodes that will be booted from the image.
+The issue arises in a heterogeneous cluster, where the management node is running a different level operating system *or* hardware architecture from the compute nodes on which to deploy the image. The ``genimage`` command that builds stateless images depends on various utilities provided by the base operating system and needs to be run on a node with the same hardware architecture and *major* Operating System release as the nodes that will be booted from the image.
Same Operating System, Different Architecture
---------------------------------------------
@ -27,7 +27,7 @@ The following describes creating stateless images of the same Operating System,
lsdef -t osimage -z rhels6.3-x86_64-netboot-compute | sed 's/^[^ ]\+:/mycomputeimage:/' | mkdef -z
-#. To obtain the ``genimage`` command to execte on ``n01``, execute the ``genimage`` command with the ``--dryrun`` option: ::
+#. To obtain the ``genimage`` command to execute on ``n01``, execute the ``genimage`` command with the ``--dryrun`` option: ::
genimage --dryrun mycomputeimage


@ -1,7 +1,7 @@
Configure Ethernet Switches
---------------------------
-It is recommended that spanning tree be set in the switches to portfast or edge-port for faster boot performance. Please see the relevant switch documentation as to how to configure this item.
+It is recommended that spanning tree be set in the switches to portfast or edge-port for faster boot performance. See the relevant switch documentation as to how to configure this item.
It is recommended that lldp protocol in the switches is enabled to collect the switch and port information for compute node during discovery process.
@ -71,9 +71,9 @@ Running Remote Commands in Parallel
You can use xdsh to run parallel commands on Ethernet switches. The following shows how to configure xCAT to run xdsh on the switches:
-**[Note]**:Configure the switch to allow **ssh** or **telnet**. This varies for switch to switch. Please refer to the switch command references to find out how to do it.
+**[Note]**: Configure the switch to allow **ssh** or **telnet**. This varies from switch to switch. Refer to the switch command references to find out how to do it.
-Add the switch in xCAT DB. Please refer to the "Discovering Switches" section if you want xCAT to discover and define the switches for you. ::
+Add the switch in xCAT DB. Refer to the "Discovering Switches" section if you want xCAT to discover and define the switches for you. ::
mkdef bntc125 groups=switch mgt=switch ip=10.4.25.1 nodetype=switch switchtype=BNT
@ -97,9 +97,9 @@ Set the ssh or telnet username and password. ::
xdsh bntc125 --devicetype EthSwitch::BNT "enable;configure terminal;vlan 3;end;show vlan"
-Please note that you can run multiple switch commands, they are separated by comma.
+Note that you can run multiple switch commands; they are separated by commas.
-Please also note that --devicetype is used here. xCAT supports the following switch types out of the box: ::
+Also note that --devicetype is used here. xCAT supports the following switch types out of the box: ::
* BNT
* Cisco
@ -178,7 +178,7 @@ The new configuration file will look like this: ::
For **BNT** switches, the **command-to-set-term-length-to-0** is **terminal-length 0**.
-Please make sure to add a semi-colon at the end of the "pre-command" line.
+Make sure to add a semi-colon at the end of the "pre-command" line.
Then you can run the xdsh like this: ::
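Following the pattern shown earlier on this page, such an xdsh invocation might look like this sketch (the switch name and command are illustrative)::

    xdsh bntc125 --devicetype EthSwitch::BNT "show running-config"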


@ -5,7 +5,7 @@ Firmware Updates
Adapter Firmware Update
-----------------------
-Please download the OFED IB adapter firmware from the Mellanox site `http://www.mellanox.com/page/firmware_table_IBM <http://www.mellanox.com/page/firmware_table_IBM>`_ .
+Download the OFED IB adapter firmware from the Mellanox site `http://www.mellanox.com/page/firmware_table_IBM <http://www.mellanox.com/page/firmware_table_IBM>`_ .
Obtain device id: ::
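One common way to find the adapter's device id, assuming ``lspci`` is available on the node, is::

    lspci | grep -i mellanox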


@ -72,7 +72,7 @@ Use the following command to consolidate the syslog to the Management Node or Se
Configure xdsh for Mellanox Switch
----------------------------------
-To run xdsh commands to the Mellanox Switch, you must use the --devicetype input flag to xdsh. In addition, for xCAT versions less than 2.8, you must add a configuration file, please see `Setup ssh connection to the Mellanox Switch`_ section.
+To run xdsh commands to the Mellanox Switch, you must use the --devicetype input flag to xdsh. In addition, for xCAT versions less than 2.8, you must add a configuration file, see `Setup ssh connection to the Mellanox Switch`_ section.
For the Mellanox Switch the ``--devicetype`` is ``IBSwitch::Mellanox``. See :doc:`xdsh man page </guides/admin-guides/references/man1/xdsh.1>` for details.
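A sketch of such an invocation, assuming a switch defined in xCAT as ``mswitch``, could be::

    xdsh mswitch --devicetype IBSwitch::Mellanox "show vlan"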


@ -8,7 +8,7 @@ Pre-requirement
In order to do switch-based switch discovery, the admin
-1. Needs to manually setup and configure core-switch, SNMP v3 needs to be enabled in order for xCAT access to it. **username** and **userpassword** attributes are for the remote login. It can be for **ssh** or **telnet**. If it is for **telnet**, please set protocol to “telnet”. If the **username** is blank, the **username** and **password** will be retrieved from the passwd table with “switch” as the key. SNMP attributes will used for SNMPv3 communication. **nodetype** has to be set to "switch" to differentiate between switch-based node discovery or switch-based switch discovery. Refer to switches table attributes. Example of core-switch definition:
+1. Needs to manually setup and configure the core-switch; SNMP v3 needs to be enabled in order for xCAT to access it. **username** and **userpassword** attributes are for the remote login. It can be for **ssh** or **telnet**. If it is for **telnet**, set protocol to “telnet”. If the **username** is blank, the **username** and **password** will be retrieved from the passwd table with “switch” as the key. SNMP attributes will be used for SNMPv3 communication. **nodetype** has to be set to "switch" to differentiate between switch-based node discovery and switch-based switch discovery. Refer to switches table attributes. Example of core-switch definition:
::


@ -37,5 +37,5 @@ The discovery process works with the following four kind of switches: ::
BNT
Juniper
-The ``switchdiscover`` command can display the output in xml format, stanza forma and normal list format. Please see the man pages for this command for details.
+The ``switchdiscover`` command can display the output in xml format, stanza format and normal list format. See the man pages for this command for details.
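As a hypothetical example of driving those output formats (the address range and flags here are assumptions; check the man page for the exact options)::

    switchdiscover --range 10.4.25.0/24 -z    # stanza format output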


@ -53,7 +53,7 @@ For example: ::
This means port 42 of switch1 is connected to port 50 of switch2. And switch1 can be accessed using SNMP version 3 and switch 2 can be accessed using SNMP version 2.
-Note: The **username** and the **password** on the switches table are NOT the same as SSH user name and password. You have to configure SNMP on the switch for these parameters and then fill up this table. Please use **tabdump switches -d** command to find out the meaning of each column.
+Note: The **username** and the **password** on the switches table are NOT the same as SSH user name and password. You have to configure SNMP on the switch for these parameters and then fill up this table. Use **tabdump switches -d** command to find out the meaning of each column.
**2. Populate the switch table**
@ -80,7 +80,7 @@ The interface eth1 is for the application network on node1, node2 and node3. Not
**3. Configure the switch for SNMP access**
-Please make sure that the MN can access the switch using SNMP and the switch is configured such that it has SNMP read and write permissions.
+Make sure that the MN can access the switch using SNMP and the switch is configured such that it has SNMP read and write permissions.
You can use **snmpwalk/snmpget** and **snmpset** commands on the mn to check. These commands are from **net-snmp-utils** rpm.
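For example, a quick read check against the switch might look like this sketch (the community string and IP address are illustrative)::

    snmpwalk -v 2c -c public 10.4.25.1 system
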
@ -215,7 +215,7 @@ For example: ::
VLAN Security
-------------
-To make the vlan more secure, the root guard and the bpdu guard are enabled for each ports within the vlan by **mkvlan** and **chvlan** commands. This way it guards the topology changes on the switch by the hackers who hack the STP. However, when the vlan is removed by the **rmvlan** and the **chvlan (-d)** commands, the root guard and the bpdu guard are not disabled because the code cannot tell if the guards were enabled by the admin or not. If you want to remove the gurads after the vlan is removed, you need to use the switch command line interface to do so. Please refer to the documents for the switch command line interfaces for details.
+To make the vlan more secure, the root guard and the bpdu guard are enabled for each port within the vlan by the **mkvlan** and **chvlan** commands. This guards against topology changes on the switch by hackers who attack the STP. However, when the vlan is removed by the **rmvlan** and the **chvlan (-d)** commands, the root guard and the bpdu guard are not disabled because the code cannot tell if the guards were enabled by the admin or not. If you want to remove the guards after the vlan is removed, you need to use the switch command line interface to do so. Refer to the documents for the switch command line interfaces for details.
Limitation
----------


@ -49,7 +49,7 @@ Output will be similar to: ::
======================do summary=====================
[MN]: Check on MN PASS. [ OK ]
-**[MN]** means that the verfication is performerd on the Management Node. Overall status of ``PASS`` or ``FAILED`` will be displayed after all items are verified..
+**[MN]** means that the verification is performed on the Management Node. Overall status of ``PASS`` or ``FAILED`` will be displayed after all items are verified.
Service Nodes are checked automatically for hierarchical clusters.


@ -18,7 +18,7 @@ Following sections show how to use ``diskdiscover`` and ``configraid``, we assum
Discovering disk devices
------------------------
-Command ``diskdiscover`` scans disk devices, it can get the overview of disks and RAID arrays information from compute node; The outputs contain useful information for ``configraid`` to configure RAID arrays, user can get ``pci_id``, ``pci_slot_name``, ``disk names``, ``RAID arrays`` and other informations from the outputs. It should be ran in xcat genesis system. It can be executed without input parameter or with pci_id, pci_id includes PCI vender and device ID. For example, power8 SAS adapter pci_id is ``1014:034a``, ``1014`` is vender info, ``034a`` is PCI-E IPR SAS Adapter, more info about pci_id refer to ``http://pci-ids.ucw.cz/read/PC/1014/``.
+Command ``diskdiscover`` scans disk devices and gets an overview of disks and RAID arrays information from the compute node. The outputs contain useful information for ``configraid`` to configure RAID arrays; the user can get ``pci_id``, ``pci_slot_name``, ``disk names``, ``RAID arrays`` and other information from the outputs. It should be run in the xcat genesis system. It can be executed without an input parameter or with pci_id; pci_id includes PCI vendor and device ID. For example, the power8 SAS adapter pci_id is ``1014:034a``, where ``1014`` is vendor info and ``034a`` is PCI-E IPR SAS Adapter; for more info about pci_id refer to ``http://pci-ids.ucw.cz/read/PC/1014/``.
Here are steps to use ``diskdiscover``:
@ -70,19 +70,19 @@ Here are the input parameters introduction:
#. **delete_raid** : List raid arrays which should be removed.
* If its value is all, all raid arrays detected should be deleted.
-* If its value is a list of raid array names, these raid arrays will be deleted. Raid array names should be seperated by ``#``.
+* If its value is a list of raid array names, these raid arrays will be deleted. Raid array names should be separated by ``#``.
* If its value is null or there is no delete_raid, no raid array will be deleted.
* If there is no delete_raid, the default value is null.
#. **stripe_size** : It is optional used when creating RAID arrays. If stripe size is not specified, it will default to the recommended stripe size for the selected RAID level.
-#. **create_raid** : To create a raid array, add a line beginning with create_raid, all attributes keys and values are seperated by ``#``. The formats are as followings:
+#. **create_raid** : To create a raid array, add a line beginning with create_raid; all attribute keys and values are separated by ``#``. The formats are as follows:
-* ``rl`` means RAID level, RAID level can be any supported RAID level for the given adapter, such as 0, 10, 5, 6. ``rl`` is a mandatory attribute for every create_raid. Supported RAID level is depend on pysical server's RAID adapter.
+* ``rl`` means RAID level; RAID level can be any supported RAID level for the given adapter, such as 0, 10, 5, 6. ``rl`` is a mandatory attribute for every create_raid. The supported RAID level depends on the physical server's RAID adapter.
* User can select disks based on following attributes value. User can find these value based on ``diskdiscover`` outputs as above section described.
-a. ``pci_id`` is PCI vender and device ID.
+a. ``pci_id`` is PCI vendor and device ID.
b. ``pci_slot_name`` is the specified PCI location. If using ``pci_slot_name``, this RAID array will be created using disks from it.
c. ``disk_names`` is a list of advanced format disk names. If using ``disk_names``, this RAID array will be created using these disks.
@ -139,7 +139,7 @@ Configuring RAID manually in xcat genesis system shell
xdsh cn1 'configraid delete_raid=all create_raid="rl#0|pci_id#1014:034a|disk_num#2"'
-Monitoring and debuging RAID configration process
+Monitoring and debugging RAID configuration process
''''''''''''''''''''''''''''''''''''''''''''''''''
#. Creating some RAID level arrays take very long time, for example, If user creates RAID 10, it will cost tens of minutes or hours. During this period, you can use xCAT xdsh command to monitor the progress of raid configuration. ::
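A sketch of such monitoring, assuming the node is in the genesis shell and the IPR tooling is present, might be::

    xdsh cn1 'iprconfig -c show-config'    # check adapter and array status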


@ -44,7 +44,7 @@ Enabling the certificate functionality of https server is useful for the Rest AP
The certificate for xcatd has already been generated when installing xCAT, it can be reused by the https server. To enable the server certificate authentication, the hostname of xCAT MN must be a fully qualified domain name (FQDN). The REST API client also must use this FQDN when accessing the https server. If the hostname of the xCAT MN is not a FQDN, you need to change the hostname first.
-Typically the hostname of the xCAT MN is initially set to the NIC which faces to the cluster (usually an internal/private NIC). If you want to enable the REST API for public client, please set the hostname of xCAT MN to one of the public NIC.
+Typically the hostname of the xCAT MN is initially set to the NIC which faces the cluster (usually an internal/private NIC). If you want to enable the REST API for public clients, set the hostname of the xCAT MN to one of the public NICs.
To change the hostname, edit /etc/sysconfig/network (RHEL) or /etc/HOSTNAME (SLES) and run: ::
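The command itself falls outside this hunk; as an illustrative sketch with a hypothetical FQDN, it could be::

    hostname xcatmn.example.com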


@ -1,7 +1,7 @@
Transmission Channel
--------------------
-The xCAT daemon uses SSL to only allow authorized users to run xCAT commands. All xCAT commands are initiated as an xCAT **client**, even when run commands from the xCAT management node. This **client** opens an SSL socket to the xCAT daemon, sends the command and receives responses through this one socket. xCAT has configured the certificate for root, if you nee to authorize other users, please refer to below section.
+The xCAT daemon uses SSL to only allow authorized users to run xCAT commands. All xCAT commands are initiated as an xCAT **client**, even when running commands from the xCAT management node. This **client** opens an SSL socket to the xCAT daemon, sends the command and receives responses through this one socket. xCAT has configured the certificate for root; if you need to authorize other users, refer to the section below.
Create SSL Certificate So That User Can Be Authenticated By xCAT
@ -25,7 +25,7 @@ This will create the following files in the <username> 's ``$HOME/.xcat`` direct
Commands Access Control
-----------------------
-Except SSL channel, xCAT only authorize root on the management node to run **xCAT** commands by default. But xCAT can be configured to allow both **non-root users** and **remote users** to run limited xCAT commands. For remote users, we mean the users who triggers the xCAT commands from other nodes and not have to login to the management node. xCAT uses the **policy** table to control who has authority to run specific xCAT commands. For a full explanation of the **policy** table, please refer to :doc:`policy </guides/admin-guides/references/man5/policy.5>` man page.
+Besides the SSL channel, xCAT only authorizes root on the management node to run **xCAT** commands by default. But xCAT can be configured to allow both **non-root users** and **remote users** to run limited xCAT commands. By remote users, we mean users who trigger xCAT commands from other nodes and do not have to log in to the management node. xCAT uses the **policy** table to control who has authority to run specific xCAT commands. For a full explanation of the **policy** table, refer to the :doc:`policy </guides/admin-guides/references/man5/policy.5>` man page.
Granting Users xCAT Privileges
@ -74,7 +74,7 @@ Below are the steps of how to set up a login node.
1. Install the xCAT client
-In order to avoid stucking in dependence problem in different distro. We recommand to create repository first by referring to below links.
+In order to avoid dependency problems on different distros, we recommend creating a repository first by referring to the links below.
* :doc:`Configure xCAT Software Repository in RHEL</guides/install-guides/yum/configure_xcat>`
@ -111,11 +111,11 @@ Below are the steps of how to set up a login node.
The remote not-root user still needs to set up the credentials for communication with management node. By running the ``/opt/xcat/share/xcat/scripts/setup-local-client.sh <username>`` command as root in management node, the credentials are generated in <username>'s ``$HOME/.xcat`` directory in management node. These credential files must be copied to the <username>'s ``$HOME/.xcat`` directory on the login node. **Note**: After ``scp``, in the login node, you must make sure the owner of the credentials is <username>.
-Setup your ``policy`` table on the managment node with the permissions that you would like the non-root id to have.
+Setup your ``policy`` table on the management node with the permissions that you would like the non-root id to have.
At this time, the non-root id should be able to execute any commands that have been set in the ``policy`` table from the Login Node.
-If any remote shell commmands (psh,xdsh) are needed, then you need to follow `Extra Setup For Remote Commands`_.
+If any remote shell commands (psh,xdsh) are needed, then you need to follow `Extra Setup For Remote Commands`_.
Auditing
@ -142,7 +142,7 @@ Password Management
xCAT is required to store passwords for various logons so that the application can login to the devices without having to prompt for a password. The issue is how to securely store these passwords.
-Currently xCAT stores passwords in ``passwd`` table. You can store them as plaintext, you also can store them as MD5 ciphertext.
+Currently xCAT stores passwords in the ``passwd`` table. You can store them as plain text; you can also store them as MD5 ciphertext.
Here is an example about how to store a MD5 encrypted password for root in ``passwd`` table. ::
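The example itself lies outside this hunk; a sketch of what it might look like (the generated hash below is illustrative) is::

    openssl passwd -1 cluster
    # $1$0dbNQPk1$4P1D0... (sample MD5 crypt output)
    chtab key=system passwd.username=root passwd.password='$1$0dbNQPk1$4P1D0...'
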
@ -178,5 +178,5 @@ This setting of site.sshbetweennodes will only enable root ssh between nodes of
Secure Zones
````````````
-You can set up multiple zones in an xCAT cluster. A node in the zone can ssh without password to any other node in the zone, but not to nodes in other zones. Please refer :doc:`Zones </advanced/zones/index>` for more information.
+You can set up multiple zones in an xCAT cluster. A node in the zone can ssh without password to any other node in the zone, but not to nodes in other zones. Refer to :doc:`Zones </advanced/zones/index>` for more information.


@ -40,7 +40,7 @@ This document describes how to install and configure a template node (called gol
Prepare the xCAT Management Node for Support Sysclone
`````````````````````````````````````````````````````
-How to configure xCAT management node please refer to section :ref:`install_guides`
+To configure the xCAT management node, refer to section :ref:`install_guides`
For support Sysclone, we need to install some extra rpms on management node and the golden client.
@ -93,7 +93,7 @@ Install and Configure the Golden Client
The Golden Client acts as a regular node for xCAT, just have some extra rpms to support clone. When you deploy golden client with xCAT, you just need to add a few additional definitions to the image which will be used to deploy golden client.
-For information of how to install a regular node, please refer to section :ref:`Diskful Installation <diskful_installation>`
+For information on how to install a regular node, refer to section :ref:`Diskful Installation <diskful_installation>`
For support clone, add 'otherpkglist' and 'otherpkgdir' attributes to the image definition which will be used to deploy golden client, then deploy golden client as normal. then the golden client will have extra rpms to support clone. If you have deployed your golden client already, using 'updatenode' command to push these extra rpms to golden client. CentOS share the same pkglist file with RHEL. For example:
@ -121,7 +121,7 @@ For support clone, add 'otherpkglist' and 'otherpkgdir' attributes to the image
chdef -t osimage -o <osimage-name> -p otherpkgdir=/install/post/otherpkgs/rhels6.3/ppc64
updatenode <golden-cilent> -S
-*[Note]: If you install systemimager RPMs on CentOS 6.5 node by above steps, you maybe hit failure. this is a known issue because some defect of CentOS6.5 itself. Please refer to known issue section for help.*
+*[Note]: If you install systemimager RPMs on a CentOS 6.5 node by the above steps, you may hit a failure. This is a known issue caused by a defect in CentOS 6.5 itself. Refer to the known issue section for help.*
Capture Image from Golden Client
````````````````````````````````
@ -159,7 +159,7 @@ If, at a later time, you need to make changes to the golden client (install new
**[Limitation]**: In xcat2.8.5, this feature has limitation in RHEL and CentOS. when your delta changes related bootloader, it would encounter error. This issue will be fixed in xcat higher version. So up to now, in RHEL and CentOS, this feature just update files not related bootloader.
-Update delta changes please follow below steps:
+To update delta changes, follow the steps below:
1. Make changes to your golden node (install new rpms, change config files, etc.).
@ -199,7 +199,7 @@ Known Issue
Can not install systemimager RPMs in CentOS6.5 by yum
``````````````````````````````````````````````````````
-If you install systemimager RPMs on CentOS 6.5 node by yum, you maybe hit failure because some defect of CentOS6.5 itself. So please copy related RPMs to CentOS 6.5 node and install them by hand.
+If you install systemimager RPMs on CentOS 6.5 node using yum, you may experience some problems due to CentOS6.5 itself. If that happens, copy related RPMs to CentOS 6.5 node and install them by hand.
* **On management node**::


@ -12,7 +12,7 @@ Clone the xCAT project from `GitHub <https://github.com/xcat2/xcat-core>`_::
xcat-deps
---------
-The ``xcat-deps`` package is currently owned and maintained by the core development on our internal servers. Please use the packages created at: http://xcat.org/download.html#xcat-dep
+The ``xcat-deps`` package is currently owned and maintained by the core development on our internal servers. Use the packages created at: http://xcat.org/download.html#xcat-dep
man pages


@ -161,7 +161,7 @@ Add links to refer other web page is a very common way in writting document, it
Add OS or ARCH Specific Contents
--------------------------------
-When writing a common xCAT doc, we always encounter the case that certain small part of content needs to be OS or ARCH specific. In this case, please use the following format to add specific branches.
+When writing a common xCAT doc, we always encounter the case that certain small part of content needs to be OS or ARCH specific. In this case, use the following format to add specific branches.
The keyword in the **[]** can be an OS name or ARCH name, or any name which can distinguish the content from other part.


@ -3,7 +3,7 @@ Contributor and Maintainer Agreements
We welcome developers willing to contribute to the xCAT project to help make it better.
-Please follow the guidelines below.
+Follow the guidelines below.
.. toctree::
:maxdepth: 1


@ -7,7 +7,7 @@ In order to clarify the intellectual property license granted with Contributions
This version of the Agreement allows an entity (the "Corporation") to submit Contributions to the xCAT Community, to authorize Contributions submitted by its designated employees to the xCAT Community, and to grant copyright and patent licenses thereto.
-If you have not already done so, please complete and sign, then scan and email a PDF file of this Agreement to: **xcat-legal@lists.sourceforge.net**. Please read this document carefully before signing and keep a copy for your records.
+If you have not already done so, complete and sign, then scan and email a PDF file of this Agreement to: **xcat-legal@lists.sourceforge.net**. Read this document carefully before signing and keep a copy for your records.
Corporation name: ___________________________________________________


@ -5,7 +5,7 @@ The xCAT Community Individual Contributor License Agreement ("Agreement")
In order to clarify the intellectual property license granted with Contributions from any person or entity made for the benefit of the xCAT Community, a Contributor License Agreement ("CLA") must be on file that has been signed by each Contributor, indicating agreement to the license terms below. This license is for your protection as a Contributor as well as the protection of the xCAT Community and its users; it does not change your rights to use your own Contributions for any other purpose.
-If you have not already done so, please complete and sign, then scan and email a PDF file of this Agreement to: **xcat-legal@lists.sourceforge.net**.
+If you have not already done so, complete and sign, then scan and email a PDF file of this Agreement to: **xcat-legal@lists.sourceforge.net**.


@ -54,7 +54,7 @@ It is important to note that some HA-related software like DRDB, Pacemaker, and
HA Service Nodes
````````````````
-When you have NFS-based diskless (statelite) nodes, there is sometimes the motivation make the NFS serving highly available among all of the service nodes. This is not recommended because it is a very complex configuration. In our opinion, the complexity of this setup can nullify much of the availibility you hope to gain. If you need your compute nodes to be highly available, you should strongly consider stateful or stateless nodes.
+When you have NFS-based diskless (statelite) nodes, there is sometimes the motivation to make the NFS serving highly available among all of the service nodes. This is not recommended because it is a very complex configuration. In our opinion, the complexity of this setup can nullify much of the availability you hope to gain. If you need your compute nodes to be highly available, you should strongly consider stateful or stateless nodes.
If you still have reasons to pursue HA service nodes:


@ -26,7 +26,7 @@ Another example is if "node1" is assigned the IP address "10.0.0.1", node2 is as
#node,ip,hostnames,otherinterfaces,comments,disable
"compute","|node(\d+)|10.0.0.($1+0)|",,,,
-In this example, the regular expression in the ``ip`` attribute uses ``|`` to separate the 1st and 2nd part. This means that xCAT will allow arithmetic operations in the 2nd part. In the 1st part, ``(\d+)``, will match the number part of the node name and put that in a variable called ``$1``. The 2nd part is what value to give the ``ip`` attribute. In this case it will set it to the string "10.0.0." and the number that is in ``$1``. (Zero is added to ``$1`` just to remove any leading zeroes.)
+In this example, the regular expression in the ``ip`` attribute uses ``|`` to separate the 1st and 2nd part. This means that xCAT will allow arithmetic operations in the 2nd part. In the 1st part, ``(\d+)``, will match the number part of the node name and put that in a variable called ``$1``. The 2nd part is what value to give the ``ip`` attribute. In this case it will set it to the string "10.0.0." and the number that is in ``$1``. (Zero is added to ``$1`` just to remove any leading zeros.)
A more involved example is with the ``vm`` table. If your kvm nodes have node names c01f01x01v01, c01f02x03v04, etc., and the kvm host names are c01f01x01, c01f02x03, etc., then you might have an ``vm`` table like ::
@ -45,14 +45,14 @@ Before you panic, let me explain each column:
``|\D+(\d+)\D+(\d+)\D+(\d+)\D+(\d+)|dir:///install/vms/vm($4+0)|``
-This item is similar to the one above. This substituion pattern will produce the value for the 5th column (a list of storage files or devices to be used). Because this row was the match for "c01f02x03v04", the produced value is "dir:///install/vms/vm4".
+This item is similar to the one above. This substitution pattern will produce the value for the 5th column (a list of storage files or devices to be used). Because this row was the match for "c01f02x03v04", the produced value is "dir:///install/vms/vm4".
Just as the explained above, when the node definition "c01f02x03v04" is created with ::
# mkdef -t node -o c01f02x03v04 groups=kvms
1 object definitions have been created or modified.
-The generated node deinition is ::
+The generated node definition is ::
# lsdef c01f02x03v04
Object name: c01f02x03v04


@ -18,15 +18,15 @@ The paralell compression tool ``pigz`` can be enabled by installing ``pigz`` pac
EPEL has an ``epel-release`` package that includes gpg keys for package signing and repository information. Installing this package for your Enterprise Linux version should allow you to use normal tools such as ``yum`` to install packages and their dependencies.
-Please refer to the http://fedoraproject.org/wiki/EPEL for more details on EPEL
+Refer to http://fedoraproject.org/wiki/EPEL for more details on EPEL
1) Enabling the ``pigz`` in ``genimage`` (only supported in RHELS6 or above)
-``pigz`` should be installed in the diskless rootimg. Please download ``pigz`` package from https://dl.fedoraproject.org/pub/epel/ , then customize the diskless osimage to install ``pigz`` as the additional packages, see :doc:`Install Additional Other Packages</guides/admin-guides/manage_clusters/ppc64le/diskless/customize_image/additional_pkg>` for more details.
+``pigz`` should be installed in the diskless rootimg. Download ``pigz`` package from https://dl.fedoraproject.org/pub/epel/ , then customize the diskless osimage to install ``pigz`` as the additional packages, see :doc:`Install Additional Other Packages</guides/admin-guides/manage_clusters/ppc64le/diskless/customize_image/additional_pkg>` for more details.
2) Enabeling the ``pigz`` in ``packimage``
-``pigz`` should be installed on the management server. Please download ``pigz`` package from https://dl.fedoraproject.org/pub/epel/ , then install the ``pigz`` with ``yum`` or ``rpm``.
+``pigz`` should be installed on the management server. Download ``pigz`` package from https://dl.fedoraproject.org/pub/epel/ , then install the ``pigz`` with ``yum`` or ``rpm``.
* **[UBUNTU]**


@ -33,7 +33,7 @@ If you have newer updates to some of your operating system packages that you wou
createrepo .
chdef -t osimage <os>-<arch>-<inst_type>-<profile> pkgdir=/install/<os>/<arch>,/install/osupdates/<os>/<arch>
-Note:If the objective node is not installed by xCAT,please make sure the correct osimage pkgdir attribute so that you could get the correct repository data.
+Note: If the objective node is not installed by xCAT, make sure the osimage pkgdir attribute is correct so that you get the correct repository data.
.. _File-Format-for-pkglist-label:


@ -72,7 +72,7 @@ Here is partition definition file example for RedHat LVM partition in IBM Power
.. BEGIN_partition_definition_file_example_RedHat_RAID1_for_IBM_Power_machines
-Partition definition file example for RedHat RAID1 please refer to :doc:`Configure RAID before Deploy OS </guides/admin-guides/manage_clusters/ppc64le/diskful/customize_image/raid_cfg>`
+For a partition definition file example for RedHat RAID1, refer to :doc:`Configure RAID before Deploy OS </guides/admin-guides/manage_clusters/ppc64le/diskful/customize_image/raid_cfg>`
.. END_partition_definition_file_example_RedHat_RAID1_for_IBM_Power_machines
@ -287,7 +287,7 @@ Here is partition definition file example for SLES standard partition in ppc64 m
.. BEGIN_partition_definition_file_example_SLES_RAID1
-Partition definition file example for SLES RAID1 please refer to `Configure RAID before Deploy OS <http://xcat-docs.readthedocs.org/en/latest/guides/admin-guides/manage_clusters/ppc64le/diskful/customize_image/raid_cfg.html>`_
+For a partition definition file example for SLES RAID1, refer to `Configure RAID before Deploy OS <http://xcat-docs.readthedocs.org/en/latest/guides/admin-guides/manage_clusters/ppc64le/diskful/customize_image/raid_cfg.html>`_
.. END_partition_definition_file_example_SLES_RAID1


@ -150,7 +150,7 @@ Currently, only NFS is supported for the setup of kdump.
If the dump attribute is not set, the kdump service will not be enabled.
-Please make sure the NFS remote path(nfs://<nfs_server_ip>/<kdump_path>) is exported and it is read-writeable to the node where kdump service is enabled.
+Make sure the NFS remote path (nfs://<nfs_server_ip>/<kdump_path>) is exported and is read-writable to the node where the kdump service is enabled.
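For reference, the dump attribute mentioned above is set on the osimage; a sketch with an illustrative server IP and path would be::

    chdef -t osimage <osimage_name> dump=nfs://10.1.0.1/var/crash
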
How to trigger kernel panic on Linux
------------------------------------


@ -104,7 +104,7 @@ Skip this section if you want to use the image as is.
* Modify .pkglist file to add or remove packges that are from the os distro
-* Modify .otherpkgs.pkglist to add or remove packages from other sources. Please refer to ``Using_Updatenode`` for details
+* Modify .otherpkgs.pkglist to add or remove packages from other sources. Refer to ``Using_Updatenode`` for details
* For diskful, modify the .tmpl file to change the kickstart/autoyast configuration


@ -109,7 +109,7 @@ To create the virtual machine "vm1" with 20G hard disk on a hypervisor directory
mkvm vm1 -s 20G
When "vm1" is created successfully, a VM hard disk file with a name like "vm1.sda.qcow2" will be found in the location specified by **vmstorage**. What's more, the **mac** attribute of "vm1" is set automatically, please check it with: ::
When "vm1" is created successfully, a VM hard disk file with a name like "vm1.sda.qcow2" will be found in the location specified by **vmstorage**. What's more, the **mac** attribute of "vm1" is set automatically, check it with: ::
lsdef vm1 -i mac
@ -132,7 +132,7 @@ or running the following command on the kvm hypervisor "kvmhost1" ::
Monitoring the Virtual Machine
``````````````````````````````
-When the VM has been created and powered on, please choose one of the following methods to monitor and access it.
+When the VM has been created and powered on, choose one of the following methods to monitor and access it.
* Open the console on kvm hypervisor: ::
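The console command itself lies outside this hunk; a typical sketch, assuming libvirt's ``virsh`` is available on the hypervisor, would be::

    virsh console vm1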


@ -29,7 +29,7 @@ Installing Additional OS Distro Packages
For packages from the OS distro, add the new package names (without the version number) in the .pkglist file. If you have newer updates to some of your operating system packages that you would like to apply to your OS image, you can place them in another directory, and add that directory to your osimage pkgdir attribute. How to add additional OS distro packages, go to :ref:`Install-Additional-OS-Packages-label`
-Note:If the objective node is not installed by xCAT, please make sure the correct osimage pkgdir attribute so that you could get the correct repository data.
+Note: If the objective node is not installed by xCAT, make sure the osimage pkgdir attribute is correct so that you get the correct repository data.
Install Additional non-OS Packages
``````````````````````````````````
@ -132,8 +132,8 @@ Linux: xdsh <noderange> -e /install/postscripts/xcatdsklspost -m <server> <scrip
where <scripts> is a comma separated postscript like ospkgs,otherpkgs etc.
* wget is used in xcatdsklspost/xcataixpost to get all the postscripts from the <server> to the node. You can check /tmp/wget.log file on the node to see if wget was successful or not. You need to make sure the /xcatpost directory has enough space to hold the postscripts.
-* A file called /xcatpost/mypostscript (Linux) is created on the node which contains the environmental variables and scripts to be run. Please make sure this file exists and it contains correct info. You can also run this file on the node manually to debug.
-* For ospkgs/otherpkgs, if /install is not mounted on the <server>, it will download all the rpms from the <server> to the node using wget. Please make sure /tmp and /xcatpost have enough space to hold the rpms and please check /tmp/wget.log for errors.
-* For ospkgs/otherpkgs, If zypper or yum is installed on the node, it will be used the command to install the rpms. Please make sure to run createrepo on the source direcory on the <server> every time a rpm is added or removed. Otherwise, the rpm command will be used, in this case, please make sure all the necessary depended rpms are copied in the same source directory.
+* A file called /xcatpost/mypostscript (Linux) is created on the node which contains the environmental variables and scripts to be run. Make sure this file exists and contains correct info. You can also run this file on the node manually to debug.
+* For ospkgs/otherpkgs, if /install is not mounted on the <server>, it will download all the rpms from the <server> to the node using wget. Make sure /tmp and /xcatpost have enough space to hold the rpms and check /tmp/wget.log for errors.
+* For ospkgs/otherpkgs, if zypper or yum is installed on the node, it will be used to install the rpms. Make sure to run createrepo on the source directory on the <server> every time an rpm is added or removed. Otherwise, the rpm command will be used; in this case, make sure all the necessary dependent rpms are copied into the same source directory.
* You can append -x on the first line of ospkgs/otherpkgs to get more debug info.


@ -22,7 +22,7 @@ The discovered PBMC node will be like this::
postscripts=syslog,remoteshell,syncfiles
serial=10112CA
-**Note**: Pls note that the PBMC node is just used to control the physical during hardware discovery process, it will be deleted after the correct server node object is found.
+**Note**: The PBMC node is just used to control the physical server during the hardware discovery process; it will be deleted after the correct server node object is found.
Start discovery process
-----------------------


@ -26,7 +26,7 @@ Set the target `osimage` into the chain table to automatically provision the ope
chdef cn1 -p chain="osimage=<osimage_name>"
-For more information about chain, please refer to :doc:`Chain <../../../../../advanced/chain/index>`
+For more information about chain, refer to :doc:`Chain <../../../../../advanced/chain/index>`
Initialize the discovery process
````````````````````````````````

View File

@ -62,7 +62,7 @@ Set the target `osimage` into the chain table to automatically provision the ope
chdef cn1 -p chain="osimage=<osimage_name>"
For more information about chain, please refer to :doc:`Chain <../../../../../advanced/chain/index>`
For more information about chain, refer to :doc:`Chain <../../../../../advanced/chain/index>`
Add cn1 into DNS::

View File

@ -14,7 +14,7 @@ With xCAT, the end user can turn the beacon light on or off with the commands sh
rbeacon cn1 on
rbeacon cn1 off
Please notice, the current state of the beacon light can not be inquery remotely. As a workaround, one can always use the ``rbeacon`` command to turn all the beacon lights in one frame off, and then turn a particular beancon light on. ::
The current state of the beacon light can not be queried remotely. As a workaround, one can always use the ``rbeacon`` command to turn all the beacon lights in one frame off, and then turn a particular beacon light on. ::
rbeacon a_group_of_cn off
rbeacon cn5 on
@ -35,7 +35,7 @@ Or do a hardware reset, run ::
rpower cn1 reset
Get the current rpower state of a machine, please refer to the example below. ::
To get the current rpower state of a machine, refer to the example below. ::
# rpower cn1 state
cn1: Running
@ -80,16 +80,16 @@ To get all the hardware information, which including the model type, serial numb
rinv cn1 all
As an example, in order to get only the information of firmware version, the follwing command can be used. ::
As an example, in order to get only the information of firmware version, the following command can be used. ::
rinv cn1 firm
Remote Hardware Vitals
``````````````````````
Collect runtime information from running physical machine is also a big requirement for real life system administrators. This kind of information includes, temperature of CPU, internal voltage of paricular socket, wattage with workload, speed of cooling fan, et al.
Collecting runtime information from a running physical machine is also a common requirement for system administrators. This kind of information includes CPU temperature, internal voltage of a particular socket, wattage under workload, cooling fan speed, etc.
In order to get such information, please use ``rvitals`` command. Please also notice, this kind of information various among different model types of the machine. Thus, please check the actual output of the ``rvitals`` command against your machine, to verify which kinds of information can be get. The information may change due to the firmware updating of the machine. ::
To get such information, use the ``rvitals`` command. This kind of information varies among different machine model types. Thus, check the actual output of the ``rvitals`` command against your machine to verify which kinds of information can be extracted. The output may change after a firmware update of the machine. ::
rvitals cn1 all
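As a sketch, a narrower query can usually be issued by replacing ``all`` with a single category, for example ``temp`` for temperatures (the exact set of supported subcommands depends on the machine type, as noted above). ::

    rvitals cn1 temp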
@ -115,7 +115,7 @@ Update node firmware to the version of the HPM file
Configures Nodes' Service Processors
````````````````````````````````````
Here comes the command, ``rspconfig``. It is used to configure the service processor of a physical machine. On a OpenPower system, the service processor is the BMC, Baseboard Management Controller. Various variables can be set through the command. But, please also notice, the actual configuration may change among different machine-model types.
The ``rspconfig`` command is used to configure the service processor of a physical machine. On an OpenPower system, the service processor is the BMC (Baseboard Management Controller). Various variables can be set through the command. Also note that the actual configuration may vary among different machine model types.
Examples

View File

@ -28,7 +28,7 @@ rpower fails with "Error: internal error Process exited while reading console lo
Then restart the NFS services and try to power on the VM again...
**Note**: For stateless hypervisor, please purge the VM by ``rmvm -p vm1``, reboot the hypervisor and then create the VM.
**Note**: For stateless hypervisor, purge the VM by ``rmvm -p vm1``, reboot the hypervisor and then create the VM.
rpower fails with "Error: internal error: process exited while connecting to monitor qemu: Permission denied"
-------------------------------------------------------------------------------------------------------------
@ -77,7 +77,7 @@ Error: Cannot communicate via libvirt to kvmhost1
The kvm related commands complain "Error: Cannot communicate via libvirt to kvmhost1"
**Solution**:
Usually caused by incorrect ssh configuration between xCAT management node and hypervisor. Please make sure it is possible to access the hypervisor from management node via ssh without password.
Usually caused by incorrect ssh configuration between the xCAT management node and the hypervisor. Make sure it is possible to access the hypervisor from the management node via ssh without a password.
Fail to ping the installed VM
@ -89,7 +89,7 @@ Fail to ping the installed VM
ADDRCONF(NETDEV_UP): eth0 link is not ready.
**Solution**:
Usually caused by the incorrect VM NIC model. Please try the following steps to specify "virtio": ::
Usually caused by the incorrect VM NIC model. Try the following steps to specify "virtio": ::
rmvm vm1
chdef vm1 vmnicnicmodel=virtio
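# Hypothetical continuation (assumes vm1's storage can be recreated;
# verify these commands against your environment before use):
mkvm vm1
rpower vm1 on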

View File

@ -3,4 +3,4 @@ x86_64
This section is not available at this time.
Please refer to `xCAT Documentation <https://sourceforge.net/p/xcat/wiki/XCAT_Documentation/>`_ on SourceForge for information on System X servers.
Refer to `xCAT Documentation <https://sourceforge.net/p/xcat/wiki/XCAT_Documentation/>`_ on SourceForge for information on System X servers.

View File

@ -47,7 +47,7 @@ OPTIONS
\ **-n|-**\ **-nodes**\ The nodes or groups to be added or removed. It takes the noderange format. Please check the man page for noderange for details.
\ **-n|-**\ **-nodes**\ The nodes or groups to be added or removed. It takes the noderange format. Check the man page for noderange for details.

View File

@ -336,7 +336,7 @@ VMware/KVM specific:
\ **-**\ **-resize**\ \ *disk*\ =\ *size*\
Change the size of the Hard disk. The disk in \ *qcow2*\ format can not be set to less than it's current size. The disk in \ *raw*\ format can be resized smaller, please use caution. Multiple disks can be resized by using comma separated \ *disk*\ \ **=**\ \ *size*\ pairs. The disks are specified by SCSI id. Size defaults to GB.
Change the size of the Hard disk. The disk in \ *qcow2*\ format cannot be set to less than its current size. The disk in \ *raw*\ format can be resized smaller, use caution. Multiple disks can be resized by using comma separated \ *disk*\ \ **=**\ \ *size*\ pairs. The disks are specified by SCSI id. Size defaults to GB.
@ -838,7 +838,7 @@ The resource information after modification is similar to:
lpar1: 128.
Note: The physical I/O resources specified with \ *add_physlots*\ will be appended to the specified partition. The physical I/O resources which are not specified but belonged to the partition will not be removed. For more information about \ *add_physlots*\ , please refer to lsvm(1)|lsvm.1.
Note: The physical I/O resources specified with \ *add_physlots*\ will be appended to the specified partition. The physical I/O resources which are not specified but belong to the partition will not be removed. For more information about \ *add_physlots*\ , refer to lsvm(1)|lsvm.1.
VMware/KVM specific:

View File

@ -46,7 +46,7 @@ for stateless: \ **packimage**\
for statelite: \ **liteimg**\
Besides prompting for some paramter values, the \ **genimage**\ command takes default guesses for the parameters not specified or not defined in the \ *osimage*\ and \ *linuximage*\ tables. It also assumes default answers for questions from the yum/zypper command when installing rpms into the image. Please use \ **-**\ **-interactive**\ flag if you want the yum/zypper command to prompt you for the answers.
Besides prompting for some parameter values, the \ **genimage**\ command takes default guesses for the parameters not specified or not defined in the \ *osimage*\ and \ *linuximage*\ tables. It also assumes default answers for questions from the yum/zypper command when installing rpms into the image. Use the \ **-**\ **-interactive**\ flag if you want the yum/zypper command to prompt you for the answers.
If \ **-**\ **-onlyinitrd**\ is specified, genimage only regenerates the initrd for a stateless image to be used for a diskless install.

View File

@ -86,7 +86,7 @@ Display MAC only. The default is to write the first valid adapter MAC to the xCA
\ **-D**\
Perform discovery for mac address. By default, it will run ping test to test the connection between adapter and xCAT management node. Use '--noping' can skip the ping test to save time. Please be aware that in this way, the lpars will be reset.
Perform discovery for the mac address. By default, it runs a ping test to check the connection between the adapter and the xCAT management node. Use '--noping' to skip the ping test and save time. Be aware that in this way, the lpars will be reset.
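As a hedged example, discovery for a hypothetical lpar node cn1 with the ping test skipped could be invoked as follows. ::

    getmacs cn1 -D --noping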
\ **-f**\

View File

@ -39,7 +39,7 @@ The \ **diskless**\ type:
The attributes of osimage will be used to capture and prepare the root image. The \ **osver**\ , \ **arch**\ and \ **profile**\ attributes for the stateless/statelite image to be created are duplicated from the \ **node**\ 's attribute. If the \ **-p|-**\ **-profile**\ \ *profile*\ option is specified, the image will be created under "/<\ *installroot*\ >/netboot/<osver>/<arch>/<\ *profile*\ >/rootimg".
The default files/directories excluded in the image are specified by /opt/xcat/share/xcat/netboot/<os>/<\ *profile*\ >.<osver>.<arch>.imgcapture.exlist; also, you can put your customized file (<\ *profile*\ >.<osver>.<arch>.imgcapture.exlist) to /install/custom/netboot/<osplatform>. The directories in the default \ *.imgcapture.exlist*\ file are necessary to capture image from the diskful Linux node managed by xCAT, please don't remove it.
The default files/directories excluded in the image are specified by /opt/xcat/share/xcat/netboot/<os>/<\ *profile*\ >.<osver>.<arch>.imgcapture.exlist; also, you can put your customized file (<\ *profile*\ >.<osver>.<arch>.imgcapture.exlist) to /install/custom/netboot/<osplatform>. The directories in the default \ *.imgcapture.exlist*\ file are necessary to capture the image from the diskful Linux node managed by xCAT; do not remove them.
The image captured will be extracted into the /<\ *installroot*\ >/netboot/<\ **osver**\ >/<\ **arch**\ >/<\ **profile**\ >/rootimg directory.

View File

@ -31,7 +31,7 @@ DESCRIPTION
***********
The lsslp command discovers selected service types using the -s flag. All service types are returned if the -s flag is not specified. If a specific IP address is not specified using the -i flag, the request is sent out all available network adapters. The optional -r, -x, -z and --vpdtable flags format the output. If you can't receive all the hardware, please use -T to increase the waiting time.
The lsslp command discovers selected service types using the -s flag. All service types are returned if the -s flag is not specified. If a specific IP address is not specified using the -i flag, the request is sent out on all available network adapters. The optional -r, -x, -z and --vpdtable flags format the output. If you can't receive responses from all the hardware, use -T to increase the waiting time.
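For instance, combining the flags above, a hedged invocation that discovers only the FSP service type on a specific adapter and waits longer for slow responders might look like this (the IP and timeout are illustrative). ::

    lsslp -s FSP -i 10.1.0.1 -T 10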
NOTE: SLP broadcast requests will propagate only within the subnet of the network adapter broadcast IPs specified by the -i flag.
@ -41,7 +41,7 @@ OPTIONS
*******
\ **noderange**\ The nodes which the user want to discover. If the user specify the noderange, lsslp will just return the nodes in the node range. Which means it will help to add the new nodes to the xCAT database without modifying the existed definitions. But the nodes' name specified in noderange should be defined in database in advance. The specified nodes' type can be frame/cec/hmc/fsp/bpa. If the it is frame or cec, lsslp will list the bpa or fsp nodes within the nodes(bap for frame, fsp for cec). Please do not use noderange with the flag -s.
\ **noderange**\ The nodes which the user wants to discover. If the user specifies the noderange, lsslp will just return the nodes in the node range. This means it will help to add the new nodes to the xCAT database without modifying the existing definitions. But the node names specified in noderange should be defined in the database in advance. The specified nodes' type can be frame/cec/hmc/fsp/bpa. If it is frame or cec, lsslp will list the bpa or fsp nodes within the nodes (bpa for frame, fsp for cec). Do not use noderange with the flag -s.
\ **-i**\ IP(s) the command will send out (defaults to all available adapters).
@ -75,7 +75,7 @@ OPTIONS
\ **-z**\ Stanza formated output.
\ **-I**\ Give the warning message for the nodes in database which have no SLP responses. Please note that this flag noly can be used after the database migration finished successfully.
\ **-I**\ Give a warning message for the nodes in the database which have no SLP responses. Note that this flag can only be used after the database migration has finished successfully.
************
@ -298,7 +298,7 @@ Output is similar to:
bpa 9458-100 BPCF017 B-0 40.17.0.2 f17c00bpcb_a
8. To find the nodes within the user specified. Please make sure the noderange input have been defined in xCAT database.
8. To find the nodes within the noderange the user specified. Make sure the noderange input has been defined in the xCAT database.
.. code-block:: perl

View File

@ -32,7 +32,7 @@ By default, it sets up the NTP server for xCAT management node. If -a flag is sp
\ *site.ntpservers*\ -- the NTP servers for the service node and compute node to sync with. The keyword <xcatmaster> means that the node's NTP server is the node that is managing it (either its service node or the management node).
To setup NTP on the compute node, please add \ **setupntp**\ postscript to the \ *postscripts*\ table and run \ *updatenode node -P setupntp*\ command.
To set up NTP on the compute node, add the \ **setupntp**\ postscript to the \ *postscripts*\ table and run the \ *updatenode node -P setupntp*\ command.
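Put together, the two steps just described might look like this for a hypothetical compute group. ::

    chdef -t group -o compute -p postscripts=setupntp
    updatenode compute -P setupntp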
*******

View File

@ -54,7 +54,7 @@ OPTIONS
\ **dockerflag**\
A JSON string which will be used as parameters to create a docker. Please reference https://docs.docker.com/engine/reference/api/docker_remote_api_v1.22/ for more information about which parameters can be specified.
A JSON string which will be used as parameters to create a docker. Reference https://docs.docker.com/engine/reference/api/docker_remote_api_v1.22/ for more information about which parameters can be specified.
Some useful flags are:

View File

@ -61,7 +61,7 @@ You can use the force option to reinitialize a node if it already has resources
After the mkdsklsnode command completes you can use the \ **lsnim**\ command to check the NIM node definition to see if it is ready for booting the node. ("lsnim -l <nim_node_name>").
You can supply your own scripts to be run on the management node or on the service node (if their is hierarchy) for a node during the \ **mkdsklsnode**\ command. Such scripts are called \ **prescripts**\ . They should be copied to /install/prescripts dirctory. A table called \ *prescripts*\ is used to specify the scripts and their associated actions. The scripts to be run at the beginning of the \ **mkdsklsnode**\ command are stored in the 'begin' column of \ *prescripts*\ table. The scripts to be run at the end of the \ **mkdsklsnode**\ command are stored in the 'end' column of \ *prescripts*\ table. Please run 'tabdump prescripts -d' command for details. An example for the 'begin' or the 'end' column is: \ *diskless:myscript1,myscript2*\ . The following two environment variables will be passed to each script: NODES contains all the names of the nodes that need to run the script for and ACTION contains the current current nodeset action, in this case "diskless". If \ *#xCAT setting:MAX_INSTANCE=number*\ is specified in the script, the script will get invoked for each node in parallel, but no more than \ *number*\ of instances will be invoked at at a time. If it is not specified, the script will be invoked once for all the nodes.
You can supply your own scripts to be run on the management node or on the service node (if there is hierarchy) for a node during the \ **mkdsklsnode**\ command. Such scripts are called \ **prescripts**\ . They should be copied to the /install/prescripts directory. A table called \ *prescripts*\ is used to specify the scripts and their associated actions. The scripts to be run at the beginning of the \ **mkdsklsnode**\ command are stored in the 'begin' column of the \ *prescripts*\ table. The scripts to be run at the end of the \ **mkdsklsnode**\ command are stored in the 'end' column of the \ *prescripts*\ table. Run the 'tabdump prescripts -d' command for details. An example for the 'begin' or the 'end' column is: \ *diskless:myscript1,myscript2*\ . The following two environment variables will be passed to each script: NODES contains all the names of the nodes that the script needs to be run for, and ACTION contains the current nodeset action, in this case "diskless". If \ *#xCAT setting:MAX_INSTANCE=number*\ is specified in the script, the script will get invoked for each node in parallel, but no more than \ *number*\ of instances will be invoked at a time. If it is not specified, the script will be invoked once for all the nodes.
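As a sketch, registering the example scripts above to run at the beginning of the diskless action for a hypothetical compute group, and then verifying the table, could look like this (the group and script names are illustrative). ::

    chtab node=compute prescripts.begin="diskless:myscript1,myscript2"
    tabdump prescripts -d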
*******

View File

@ -66,7 +66,7 @@ For example:
This command will automatically configure the cross-over ports if the given nodes are on different switches.
For added security, the root guard and bpdu guard will be enabled for the ports in this vlan. However, the guards will not be disabled if the ports are removed from the vlan using chvlan or rmvlan commands. To disable them, you need to use the switch command line interface. Please refer to the switch command line interface manual to see how to disable the root guard and bpdu guard for a port.
For added security, the root guard and bpdu guard will be enabled for the ports in this vlan. However, the guards will not be disabled if the ports are removed from the vlan using chvlan or rmvlan commands. To disable them, you need to use the switch command line interface. Refer to the switch command line interface manual to see how to disable the root guard and bpdu guard for a port.
**********
@ -83,7 +83,7 @@ OPTIONS
\ **-n|-**\ **-nodes**\ The nodes or groups to be included in the vlan. It can be stand alone nodes or KVM guests. It takes the noderange format. Please check the man page for noderange for details.
\ **-n|-**\ **-nodes**\ The nodes or groups to be included in the vlan. It can be stand alone nodes or KVM guests. It takes the noderange format. Check the man page for noderange for details.
@ -137,7 +137,7 @@ To start, the xCAT switches and switches table needs to be filled with switch an
"node3","switch1","12",,"primary:eth0",,
"node3","switch2","3",,"eth1",,
Please note that the interface value for the management (primary) network can be empty, the word "primary" or "primary:ethx". For other networks, the interface attribute must be specified.
Note that the interface value for the management (primary) network can be empty, the word "primary" or "primary:ethx". For other networks, the interface attribute must be specified.
The following is an example of the switches table

View File

@ -85,9 +85,9 @@ The first form of \ **mkvm**\ command creates new partition(s) with the same pr
The second form of this command duplicates all the partitions from the source specified by \ *profile*\ to the destination specified by \ *destcec*\ . The source and destination CECs can be managed by different HMCs.
Please make sure the nodes in the \ *noderange*\ is defined in the \ *nodelist*\ table and the \ *mgt*\ is set to 'hmc' in the \ *nodehm*\ table before running this command.
Make sure the nodes in the \ *noderange*\ are defined in the \ *nodelist*\ table and the \ *mgt*\ attribute is set to 'hmc' in the \ *nodehm*\ table before running this command.
Please note that the \ **mkvm**\ command currently only supports creating standard LPARs, not virtual LPARs working with VIOS server.
Note that the \ **mkvm**\ command currently only supports creating standard LPARs, not virtual LPARs working with VIOS server.
For PPC (using Direct FSP Management) specific:

View File

@ -41,7 +41,7 @@ Parameters
\ *name*\ is the name of the monitoring plug-in module. For example, if the \ *name*\ is called \ *xxx*\ , then the actual file name that the xcatd looks for is \ */opt/xcat/lib/perl/xCAT_monitoring/xxx.pm*\ . Use the \ *monls -a*\ command to list all the monitoring plug-in modules that can be used.
\ *settings*\ is the monitoring plug-in specific settings. It is used to customize the behavior of the plug-in or configure the 3rd party software. Format: \ *-s key-value -s key=value ...*\ Please note that the square brackets are needed here. Use \ *monls name -d*\ command to look for the possbile setting keys for a plug-in module.
\ *settings*\ is the monitoring plug-in specific settings. It is used to customize the behavior of the plug-in or configure the 3rd party software. Format: \ *-s key-value -s key=value ...*\ Note that the square brackets are needed here. Use the \ *monls name -d*\ command to look for the possible setting keys for a plug-in module.
*******

View File

@ -75,7 +75,7 @@ EXAMPLES
monrm gangliamon
Please note that gangliamon must have been registered in the xCAT \ *monitoring*\ table. For a list of registered plug-in modules, use command \ **monls**\ .
Note that gangliamon must have been registered in the xCAT \ *monitoring*\ table. For a list of registered plug-in modules, use command \ **monls**\ .
*****

View File

@ -79,7 +79,7 @@ EXAMPLES
monstop gangliamon
Please note that gangliamon must have been registered in the xCAT \ *monitoring*\ table. For a list of registered plug-in modules, use command \ *monls*\ .
Note that gangliamon must have been registered in the xCAT \ *monitoring*\ table. For a list of registered plug-in modules, use command \ *monls*\ .
*****

View File

@ -64,7 +64,7 @@ This command will also create a NIM script resource to enable the xCAT support f
After the \ **nimnodeset**\ command completes you can use the \ **lsnim**\ command to check the NIM node definition to see if it is ready for booting the node. ("lsnim -l <nim_node_name>").
You can supply your own scripts to be run on the management node or on the service node (if their is hierarchy) for a node during the \ **nimnodeset**\ command. Such scripts are called \ **prescripts**\ . They should be copied to /install/prescripts dirctory. A table called \ *prescripts*\ is used to specify the scripts and their associated actions. The scripts to be run at the beginning of the \ **nimnodeset**\ command are stored in the 'begin' column of \ *prescripts*\ table. The scripts to be run at the end of the \ **nimnodeset**\ command are stored in the 'end' column of \ *prescripts*\ table. Please run 'tabdump prescripts -d' command for details. An example for the 'begin' or the 'end' column is: \ *standalone:myscript1,myscript2*\ . The following two environment variables will be passed to each script: NODES contains all the names of the nodes that need to run the script for and ACTION contains the current nodeset action, in this case "standalone". If \ *#xCAT setting:MAX_INSTANCE=number*\ is specified in the script, the script will get invoked for each node in parallel, but no more than \ *number*\ of instances will be invoked at at a time. If it is not specified, the script will be invoked once for all the nodes.
You can supply your own scripts to be run on the management node or on the service node (if there is hierarchy) for a node during the \ **nimnodeset**\ command. Such scripts are called \ **prescripts**\ . They should be copied to the /install/prescripts directory. A table called \ *prescripts*\ is used to specify the scripts and their associated actions. The scripts to be run at the beginning of the \ **nimnodeset**\ command are stored in the 'begin' column of the \ *prescripts*\ table. The scripts to be run at the end of the \ **nimnodeset**\ command are stored in the 'end' column of the \ *prescripts*\ table. Run the 'tabdump prescripts -d' command for details. An example for the 'begin' or the 'end' column is: \ *standalone:myscript1,myscript2*\ . The following two environment variables will be passed to each script: NODES contains all the names of the nodes that the script needs to be run for, and ACTION contains the current nodeset action, in this case "standalone". If \ *#xCAT setting:MAX_INSTANCE=number*\ is specified in the script, the script will get invoked for each node in parallel, but no more than \ *number*\ of instances will be invoked at a time. If it is not specified, the script will be invoked once for all the nodes.
*******

View File

@ -73,7 +73,7 @@ PPC (with HMC) specific:
========================
The \ **rflash**\ command uses the \ **xdsh**\ command to connect to the HMC controlling the given managed system and perform the updates. Before run \ **rflash**\ , please use \ **rspconfig**\ to check if the related HMC ssh is enabled. If enable a HMC ssh connection, please use \ **rspconfig**\ comamnd.
The \ **rflash**\ command uses the \ **xdsh**\ command to connect to the HMC controlling the given managed system and perform the updates. Before running \ **rflash**\ , use \ **rspconfig**\ to check if the related HMC ssh is enabled. To enable an HMC ssh connection, use the \ **rspconfig**\ command.
\ **Warning!**\ This command may take considerable time to complete, depending on the number of systems being updated and the workload on the target HMC. In particular, power subsystem updates may take an hour or more if there are many attached managed systems.
@ -91,7 +91,7 @@ Any previously activated code on the affected systems will be automatically acce
\ **IMPORTANT!**\ If the power subsystem is recycled, all of its attached managed systems will be recycled.
If it outputs \ **"Timeout waiting for prompt"**\ during the upgrade, please set the \ **"ppctimeout"**\ larger in the \ **site**\ table. After the upgrade, remeber to change it back. If run the \ **"rflash"**\ command on an AIX management node, need to make sure the value of \ **"useSSHonAIX"**\ is \ **"yes"**\ in the site table.
If it outputs \ **"Timeout waiting for prompt"**\ during the upgrade, set the \ **"ppctimeout"**\ larger in the \ **site**\ table. After the upgrade, remeber to change it back. If run the \ **"rflash"**\ command on an AIX management node, need to make sure the value of \ **"useSSHonAIX"**\ is \ **"yes"**\ in the site table.
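A minimal sketch of the preparation step mentioned above, assuming a hypothetical HMC node named hmc1: first check the current ssh configuration, then enable it. ::

    rspconfig hmc1 sshcfg
    rspconfig hmc1 sshcfg=enable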
PPC (using Direct FSP Management) specific:

View File

@ -33,7 +33,7 @@ DESCRIPTION
The \ **rmvlan**\ command removes the given vlan ID from the cluster. It removes the vlan id from all the switches involved, deconfigures the nodes so that the vlan adaptor (tag) will be removed, and cleans up /etc/hosts, DNS and database tables for the given vlan.
For added security, the root guard and bpdu guard were enabled for the ports in this vlan by mkvlan and chvlan commands. However, the guards will not be disabled by this command. To disable them, you need to use the switch command line interface. Please refer to the switch command line interface manual to see how to disable the root guard and bpdu guard for a port.
For added security, the root guard and bpdu guard were enabled for the ports in this vlan by mkvlan and chvlan commands. However, the guards will not be disabled by this command. To disable them, you need to use the switch command line interface. Refer to the switch command line interface manual to see how to disable the root guard and bpdu guard for a port.
**********

View File

@ -200,7 +200,7 @@ OPTIONS
Don't try to run \ **wake**\ against a node in the 'on' state; it would cause the node to go to the 'off' state.
For some of xCAT hardware such as NeXtScale, it may need to enable S3 before using \ **wake**\ . The following steps can be used to enable S3. Please reference pasu(1)|pasu.1 for "pasu" usage.
For some xCAT hardware, such as NeXtScale, S3 may need to be enabled before using \ **wake**\ . The following steps can be used to enable S3. Reference pasu(1)|pasu.1 for "pasu" usage.
.. code-block:: perl

View File

@ -167,8 +167,8 @@ Command Protocol can be used. See man \ **xdsh**\ for more details.
xCAT ships some default configuration files
for Ethernet switches and IB switches under
\ */opt/xcat/share/xcat/devicetype*\ directory. If you want to overwrite
any of the configuration files, please copy it to \ */var/opt/xcat/*\
directory and cutomize it.
any of the configuration files, copy them to the \ */var/opt/xcat/*\
directory and customize them.
For example, \ *base/IBSwitch/Qlogic/config*\ is the configuration
file location if devicetype is specified as IBSwitch::Qlogic.
xCAT will first search config file using \ */var/opt/xcat/*\ as the base.

View File

@ -29,9 +29,9 @@ The switchdiscover command scans the subnets and discovers all the swithches on
To view all the switches defined in the xCAT database, use the \ **lsdef -w "nodetype=switch"**\ command.
For lldp method, please make sure that lldpd package is installed and lldpd is running on the xCAT management node. lldpd comes from xcat-dep packge or you can get it from http://vincentbernat.github.io/lldpd/installation.html.
For the lldp method, make sure that the lldpd package is installed and lldpd is running on the xCAT management node. lldpd comes from the xcat-dep package, or you can get it from http://vincentbernat.github.io/lldpd/installation.html.
For snmp method, please make sure that snmpwalk command is installed and snmp is enabled for switches. To install snmpwalk, "yum install net-snmp-utils" for redhat and sles, "apt-get install snmp" for Ubuntu.
For the snmp method, make sure that the snmpwalk command is installed and snmp is enabled for the switches. To install snmpwalk, use "yum install net-snmp-utils" for redhat and sles, or "apt-get install snmp" for Ubuntu.
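For example, a hedged invocation that scans a hypothetical subnet with the snmp method might look like this (the IP range is illustrative). ::

    switchdiscover --range 10.4.25.0/24 -s snmp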
*******

View File

@ -234,8 +234,8 @@ running commands, are terminated (SIGTERM).
xCAT ships some default configuration files
for Ethernet switches and IB switches under
\ */opt/xcat/share/xcat/devicetype*\ directory. If you want to overwrite
any of the configuration files, please copy it to \ */var/opt/xcat/*\
directory and cutomize it.
any of the configuration files, copy them to the \ */var/opt/xcat/*\
directory and customize them.
For example, \ *base/IBSwitch/Qlogic/config*\ is the configuration
file location if devicetype is specified as IBSwitch::Qlogic.
xCAT will first search config file using \ */var/opt/xcat/*\ as the base.

View File

@ -191,28 +191,20 @@ In plain English, a node or group name is in \ **xCAT Node Name Format**\ i
from the beginning there are:
\*
one or more alpha characters of any case and any number of "-" in any combination
\* one or more alpha characters of any case and any number of "-" in any combination
\*
followed by one or more numbers
\* followed by one or more numbers
\*
then optionally followed by one alpha character of any case or "-"
\* then optionally followed by one alpha character of any case or "-"
\*
followed by any combination of case mixed alphanumerics and "-"
\* followed by any combination of case mixed alphanumerics and "-"
\ **noderange**\ supports node/group names in \ *any*\ format. \ **xCAT Node Name Format**\ is

View File

@ -57,11 +57,11 @@ litefile Attributes:
tmpfs - It is the default option if you leave the options column blank. It provides a file or directory for the node to use when booting; its permission will be the same as the original version on the server. In most cases, it is read-write; however, on the next statelite boot, the original version of the file or directory on the server will be used, meaning it is non-persistent. This option can be performed on files and directories.
rw - Same as Above.Its name "rw" does NOT mean it always be read-write, even in most cases it is read-write. Please do not confuse it with the "rw" permission in the file system.
rw - Same as above. Its name "rw" does NOT mean it is always read-write, even though in most cases it is read-write. Do not confuse it with the "rw" permission in the file system.
persistent - It provides a mounted file or directory that is copied to the xCAT persistent location and then over-mounted on the local file or directory. Anything written to that file or directory is preserved. This means that if the file/directory does not exist at first, it will be copied to the persistent location; from then on, the file/directory in the persistent location will be used. The file/directory will be persistent across reboots. Its permission will be the same as the original one in the statelite location. It requires the statelite table to be filled out with a spot for persistent statelite. This option can be performed on files and directories.
con - The contents of the pathname are concatenated to the contents of the existing file. For this directive the searching in the litetree hierarchy does not stop when the first match is found. All files found in the hierarchy will be concatenated to the file when found. The permission of the file will be "-rw-r--r--", which means it is read-write for the root user, but readonly for the others. It is non-persistent, when the node reboots, all changes to the file will be lost. It can only be performed on files. Please do not use it for one directory.
con - The contents of the pathname are concatenated to the contents of the existing file. For this directive the searching in the litetree hierarchy does not stop when the first match is found. All files found in the hierarchy will be concatenated to the file when found. The permission of the file will be "-rw-r--r--", which means it is read-write for the root user, but readonly for the others. It is non-persistent; when the node reboots, all changes to the file will be lost. It can only be performed on files. Do not use it for a directory.
ro - The file/directory will be overmounted read-only on the local file/directory. It will be located in the directory hierarchy specified in the litetree table. Changes made to this file or directory on the server will be immediately seen in this file/directory on the node. This option requires that the file/directory to be mounted must be available in one of the entries in the litetree table. This option can be performed on files and directories.
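To illustrate how these options are used together, a hypothetical litefile table stanza (the file choices are illustrative only) might look like this. ::

    #image,file,options,comments,disable
    "ALL","/etc/resolv.conf","tmpfs",,
    "ALL","/var/log/","persistent",,
    "ALL","/etc/motd","ro",,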

View File

@ -50,7 +50,7 @@ nodelist Attributes:
\ **status**\
The current status of this node. This attribute will be set by xCAT software. Valid values: defined, booting, netbooting, booted, discovering, configuring, installing, alive, standingby, powering-off, unreachable. If blank, defined is assumed. The possible status change sequences are: For installation: defined->[discovering]->[configuring]->[standingby]->installing->booting->booted->[alive], For diskless deployment: defined->[discovering]->[configuring]->[standingby]->netbooting->booted->[alive], For booting: [alive/unreachable]->booting->[alive], For powering off: [alive]->powering-off->[unreachable], For monitoring: alive->unreachable. Discovering and configuring are for x Series discovery process. Alive and unreachable are set only when there is a monitoring plug-in start monitor the node status for xCAT. Please note that the status values will not reflect the real node status if you change the state of the node from outside of xCAT (i.e. power off the node using HMC GUI).
The current status of this node. This attribute will be set by xCAT software. Valid values: defined, booting, netbooting, booted, discovering, configuring, installing, alive, standingby, powering-off, unreachable. If blank, defined is assumed. The possible status change sequences are: For installation: defined->[discovering]->[configuring]->[standingby]->installing->booting->booted->[alive], For diskless deployment: defined->[discovering]->[configuring]->[standingby]->netbooting->booted->[alive], For booting: [alive/unreachable]->booting->[alive], For powering off: [alive]->powering-off->[unreachable], For monitoring: alive->unreachable. Discovering and configuring are for x Series discovery process. Alive and unreachable are set only when a monitoring plug-in is started to monitor the node status for xCAT. Note that the status values will not reflect the real node status if you change the state of the node from outside of xCAT (i.e. power off the node using HMC GUI).

View File

@ -274,7 +274,7 @@ site Attributes:
runbootscripts: If set to 'yes' the scripts listed in the postbootscripts
attribute in the osimage and postscripts tables will be run during
each reboot of stateful (diskful) nodes. This attribute has no
effect on stateless and statelite nodes. Please run the following
effect on stateless and statelite nodes. Run the following
command after you change the value of this attribute:
'updatenode <nodes> -P setuppostbootscripts'
@ -309,7 +309,7 @@ site Attributes:
'1': enable basic debug mode
'2': enable expert debug mode
For the details on 'basic debug mode' and 'expert debug mode',
please refer to xCAT documentation.
refer to xCAT documentation.
--------------------
REMOTESHELL ATTRIBUTES

View File

@ -74,19 +74,19 @@ switches Attributes:
\ **linkports**\
The ports that connect to other switches. Currently, this column is only used by vlan configuration. The format is: "port_number:switch,port_number:switch...". Please refer to the switch table for details on how to specify the port numbers.
The ports that connect to other switches. Currently, this column is only used by vlan configuration. The format is: "port_number:switch,port_number:switch...". Refer to the switch table for details on how to specify the port numbers.
\ **sshusername**\
The remote login user name. It can be for ssh or telnet. If it is for telnet, please set protocol to "telnet". If the sshusername is blank, the username, password and protocol will be retrieved from the passwd table with "switch" as the key.
The remote login user name. It can be for ssh or telnet. If it is for telnet, set protocol to "telnet". If the sshusername is blank, the username, password and protocol will be retrieved from the passwd table with "switch" as the key.
\ **sshpassword**\
The remote login password. It can be for ssh or telnet. If it is for telnet, please set protocol to "telnet". If the sshusername is blank, the username, password and protocol will be retrieved from the passwd table with "switch" as the key.
The remote login password. It can be for ssh or telnet. If it is for telnet, set protocol to "telnet". If the sshusername is blank, the username, password and protocol will be retrieved from the passwd table with "switch" as the key.

View File

@ -197,7 +197,7 @@ vm Attributes:
\ **physlots**\
Specify the physical slots drc index that will assigned to the partition, the delimiter is ',', and the drc index must started with '0x'. For more details, please reference to manpage of 'lsvm'.
Specify the physical slots drc index that will be assigned to the partition. The delimiter is ',', and the drc index must start with '0x'. For more details, reference the manpage for 'lsvm'.

View File

@ -425,7 +425,7 @@ group Attributes:
\ **linkports**\ (switches.linkports)
The ports that connect to other switches. Currently, this column is only used by vlan configuration. The format is: "port_number:switch,port_number:switch...". Please refer to the switch table for details on how to specify the port numbers.
The ports that connect to other switches. Currently, this column is only used by vlan configuration. The format is: "port_number:switch,port_number:switch...". Refer to the switch table for details on how to specify the port numbers.
@ -745,7 +745,7 @@ group Attributes:
or
The remote login password. It can be for ssh or telnet. If it is for telnet, please set protocol to "telnet". If the sshusername is blank, the username, password and protocol will be retrieved from the passwd table with "switch" as the key.
The remote login password. It can be for ssh or telnet. If it is for telnet, set protocol to "telnet". If the sshusername is blank, the username, password and protocol will be retrieved from the passwd table with "switch" as the key.
@ -1148,7 +1148,7 @@ group Attributes:
or
The remote login user name. It can be for ssh or telnet. If it is for telnet, please set protocol to "telnet". If the sshusername is blank, the username, password and protocol will be retrieved from the passwd table with "switch" as the key.
The remote login user name. It can be for ssh or telnet. If it is for telnet, set protocol to "telnet". If the sshusername is blank, the username, password and protocol will be retrieved from the passwd table with "switch" as the key.
@ -1226,7 +1226,7 @@ group Attributes:
\ **vmphyslots**\ (vm.physlots)
Specify the physical slots drc index that will assigned to the partition, the delimiter is ',', and the drc index must started with '0x'. For more details, please reference to manpage of 'lsvm'.
Specify the physical slots drc index that will be assigned to the partition. The delimiter is ',', and the drc index must start with '0x'. For more details, reference the manpage for 'lsvm'.

View File

@ -437,7 +437,7 @@ node Attributes:
\ **linkports**\ (switches.linkports)
The ports that connect to other switches. Currently, this column is only used by vlan configuration. The format is: "port_number:switch,port_number:switch...". Please refer to the switch table for details on how to specify the port numbers.
The ports that connect to other switches. Currently, this column is only used by vlan configuration. The format is: "port_number:switch,port_number:switch...". Refer to the switch table for details on how to specify the port numbers.
@ -751,7 +751,7 @@ node Attributes:
or
The remote login password. It can be for ssh or telnet. If it is for telnet, please set protocol to "telnet". If the sshusername is blank, the username, password and protocol will be retrieved from the passwd table with "switch" as the key.
The remote login password. It can be for ssh or telnet. If it is for telnet, set protocol to "telnet". If the sshusername is blank, the username, password and protocol will be retrieved from the passwd table with "switch" as the key.
@ -1032,7 +1032,7 @@ node Attributes:
\ **status**\ (nodelist.status)
The current status of this node. This attribute will be set by xCAT software. Valid values: defined, booting, netbooting, booted, discovering, configuring, installing, alive, standingby, powering-off, unreachable. If blank, defined is assumed. The possible status change sequences are: For installation: defined->[discovering]->[configuring]->[standingby]->installing->booting->booted->[alive], For diskless deployment: defined->[discovering]->[configuring]->[standingby]->netbooting->booted->[alive], For booting: [alive/unreachable]->booting->[alive], For powering off: [alive]->powering-off->[unreachable], For monitoring: alive->unreachable. Discovering and configuring are for x Series discovery process. Alive and unreachable are set only when there is a monitoring plug-in start monitor the node status for xCAT. Please note that the status values will not reflect the real node status if you change the state of the node from outside of xCAT (i.e. power off the node using HMC GUI).
The current status of this node. This attribute will be set by xCAT software. Valid values: defined, booting, netbooting, booted, discovering, configuring, installing, alive, standingby, powering-off, unreachable. If blank, defined is assumed. The possible status change sequences are: For installation: defined->[discovering]->[configuring]->[standingby]->installing->booting->booted->[alive], For diskless deployment: defined->[discovering]->[configuring]->[standingby]->netbooting->booted->[alive], For booting: [alive/unreachable]->booting->[alive], For powering off: [alive]->powering-off->[unreachable], For monitoring: alive->unreachable. Discovering and configuring are for x Series discovery process. Alive and unreachable are set only when a monitoring plug-in is started to monitor the node status for xCAT. Note that the status values will not reflect the real node status if you change the state of the node from outside of xCAT (i.e. power off the node using HMC GUI).
@ -1184,7 +1184,7 @@ node Attributes:
or
The remote login user name. It can be for ssh or telnet. If it is for telnet, please set protocol to "telnet". If the sshusername is blank, the username, password and protocol will be retrieved from the passwd table with "switch" as the key.
The remote login user name. It can be for ssh or telnet. If it is for telnet, set protocol to "telnet". If the sshusername is blank, the username, password and protocol will be retrieved from the passwd table with "switch" as the key.
@ -1262,7 +1262,7 @@ node Attributes:
\ **vmphyslots**\ (vm.physlots)
Specify the physical slots drc index that will assigned to the partition, the delimiter is ',', and the drc index must started with '0x'. For more details, please reference to manpage of 'lsvm'.
Specify the physical slots drc index that will be assigned to the partition. The delimiter is ',', and the drc index must start with '0x'. For more details, reference the manpage for 'lsvm'.

View File

@ -8,7 +8,7 @@ reorgtbls
Usage:
--V - Verbose mode
--h - usage
--t -comma delimitated list of tables.
--t - comma delimited list of tables.
Without this flag, it reorgs all tables in the xcatdb database.
Author: Lissa Valletta

View File

@ -26,7 +26,7 @@ test_hca_state
Having consistent OFED settings, and even HCA firmware, can be very
important for a properly functioning InfiniBand fabric. This tool
can help you confirm that your nodes are using the settings you
want, and if any nodes have settings descrepancies.
want, and if any nodes have settings discrepancies.
Example output:

View File

@ -58,7 +58,7 @@ Remove xCAT Files
dpkg -l | awk '/xcat/ { print $2 }'
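A hedged extension of the listing command above pipes the package names into dpkg for removal; verify the list first, since this purges every matching package. ::

    dpkg -l | awk '/xcat/ { print $2 }' | xargs dpkg -P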
If you want to remove more cleanly. below list maybe helpful for you. They are the packages list of xcat installation tarball. These list are the whole RPMs list, it's possible for some RPMs not to be installed due to them are not suitable for your environment. Please do judgment by yourself.
If you want to remove more cleanly, the list below may be helpful. Listed are the packages of the xCAT installation tarball. Some RPMs may not be installed in a specific environment.
* XCAT Core Packages list (xcat-core):

View File

@ -8,7 +8,7 @@ Differentiators
* Open Source
Eclipse Public License. Support contracts are also available, please contact IBM.
Eclipse Public License. Support contracts are also available, contact IBM.
* Supports Multiple Operating Systems

View File

@ -65,7 +65,7 @@ When managing a cluster with hundreds or thousands of nodes, operating on many n
#. Contribute to xCAT (Optional)
While using xCAT, if you find something (code, documentation, ...) that can be improved and you want to contribute that to xCAT, please do that for your and other xCAT users benefit. And welcome to xCAT community!
While using xCAT, if you find something (code, documentation, ...) that can be improved and you want to contribute that to xCAT, do that for your own and other xCAT users' benefit. And welcome to the xCAT community!
Refer to the :doc:`/developers/index` to learn how to contribute to xCAT community.

View File

@ -15,6 +15,6 @@ Action
xCAT does not use RSA_EXPORT ciphers for ssl communication by default. However, xCAT does allow user to choose the ciphers from the site.xcatsslciphers attribute.
Please make sure you do not put RSA_EXPORT related ciphers in this attribute.
Make sure you do not put RSA_EXPORT related ciphers in this attribute.
It is recommended that you upgrade openssl to 1.0.1L and upper version for the fix of this problem. Please go to the os distribution to get the latest openssl package.
It is recommended that you upgrade openssl to 1.0.1L or a later version to fix this problem. Go to the os distribution to get the latest openssl package.

View File

@ -9,4 +9,4 @@ This issue affects OpenSSL version: 1.0.2
Action
------
xCAT uses OpenSSL for client-server communication but **does not** ship it. Please upgrade OpenSSL to 1.0.2a or higher.
xCAT uses OpenSSL for client-server communication but **does not** ship it. Upgrade OpenSSL to 1.0.2a or higher.

View File

@ -22,6 +22,6 @@ The VENOM bug (CVE-2015-3456) exists in the virtual Floppy Disk Controller for t
Action
------
xCAT does not ship any rpms that have QEMU component directly. However xCAT does make system calls to QEMU when doing KVM/Xen visualization. If you are using xCAT to manage KVM or Xen hosts and quests, please get the latest rpms that have QEMU component from the os distro and do a upgrade on both xCAT management node and the KVM/Xen hosts.
xCAT does not ship any rpms that have the QEMU component directly. However, xCAT does make system calls to QEMU when doing KVM/Xen virtualization. If you are using xCAT to manage KVM or Xen hosts and guests, get the latest rpms that have the QEMU component from the os distro and do an upgrade on both the xCAT management node and the KVM/Xen hosts.

View File

@ -1,13 +1,13 @@
2015-05-20 - OpenSSL Vulnerabilities (LOGJAM)
=============================================
A Logjam vulnerability attacks openssl and web services on weak (512-bit) Diffie-Hellman key groups. Please refer to the following documents for details.
A Logjam vulnerability attacks openssl and web services on weak (512-bit) Diffie-Hellman key groups. Refer to the following documents for details.
Main site: https://weakdh.org/
Server test: https://weakdh.org/sysadmin.html
Please refer to the following openssl link for more details regarding the fix: https://www.openssl.org/blog/blog/2015/05/20/logjam-freak-upcoming-changes/
Refer to the following openssl link for more details regarding the fix: https://www.openssl.org/blog/blog/2015/05/20/logjam-freak-upcoming-changes/
OpenSSL 1.0.2 users should upgrade to 1.0.2b
OpenSSL 1.0.1 users should upgrade to 1.0.1n
@ -15,6 +15,6 @@ Please refer to the following openssl link for more details regarding the fix: h
Action
------
xCAT uses OpenSSL for client-server communication but **does not** ship it. It uses the default ciphers from openssl. It also allows the user to customize it through site.xcatsslversion and site.xcatsslciphers. Please make sure you do not enable DH or DHE ciphers.
xCAT uses OpenSSL for client-server communication but **does not** ship it. It uses the default ciphers from openssl. It also allows the user to customize it through site.xcatsslversion and site.xcatsslciphers. Make sure you do not enable DH or DHE ciphers.
Please get the latest openssl package from the os distros and upgrade it on all the xCAT management nodes, the service nodes and xCAT client nodes.
Get the latest openssl package from the os distros and upgrade it on all the xCAT management nodes, the service nodes and xCAT client nodes.

View File

@ -10,7 +10,7 @@ use xCAT::ExtTab;
# !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
#
# When making additions or deletions to this file please be sure to
# When making additions or deletions to this file be sure to
# modify BOTH the tabspec and defspec definitions. This includes
# adding descriptions for any new attributes.
#
@ -154,9 +154,9 @@ use xCAT::ExtTab;
file => "The full pathname of the file. e.g: /etc/hosts. If the path is a directory, then it should be terminated with a '/'. ",
options => "Options for the file:\n\n" .
qq{ tmpfs - It is the default option if you leave the options column blank. It provides a file or directory for the node to use when booting; its permission will be the same as the original version on the server. In most cases, it is read-write; however, on the next statelite boot, the original version of the file or directory on the server will be used, meaning it is non-persistent. This option can be performed on files and directories.\n\n} .
qq{ rw - Same as Above.Its name "rw" does NOT mean it always be read-write, even in most cases it is read-write. Please do not confuse it with the "rw" permission in the file system. \n\n} .
qq{ rw - Same as above. Its name "rw" does NOT mean it is always read-write, even though in most cases it is read-write. Do not confuse it with the "rw" permission in the file system. \n\n} .
qq{ persistent - It provides a mounted file or directory that is copied to the xCAT persistent location and then over-mounted on the local file or directory. Anything written to that file or directory is preserved. This means that if the file/directory does not exist at first, it will be copied to the persistent location; from then on, the file/directory in the persistent location will be used. The file/directory will be persistent across reboots. Its permission will be the same as the original one in the statelite location. It requires the statelite table to be filled out with a spot for persistent statelite. This option can be performed on files and directories. \n\n} .
qq{ con - The contents of the pathname are concatenated to the contents of the existing file. For this directive the searching in the litetree hierarchy does not stop when the first match is found. All files found in the hierarchy will be concatenated to the file when found. The permission of the file will be "-rw-r--r--", which means it is read-write for the root user, but readonly for the others. It is non-persistent, when the node reboots, all changes to the file will be lost. It can only be performed on files. Please do not use it for one directory.\n\n} .
qq{ con - The contents of the pathname are concatenated to the contents of the existing file. For this directive the searching in the litetree hierarchy does not stop when the first match is found. All files found in the hierarchy will be concatenated to the file when found. The permission of the file will be "-rw-r--r--", which means it is read-write for the root user, but readonly for the others. It is non-persistent; when the node reboots, all changes to the file will be lost. It can only be performed on files. Do not use it for a directory.\n\n} .
qq{ ro - The file/directory will be overmounted read-only on the local file/directory. It will be located in the directory hierarchy specified in the litetree table. Changes made to this file or directory on the server will be immediately seen in this file/directory on the node. This option requires that the file/directory to be mounted must be available in one of the entries in the litetree table. This option can be performed on files and directories.\n\n} .
qq{ link - It provides one file/directory for the node to use when booting, it is copied from the server, and will be placed in tmpfs on the booted node. In the local file system of the booted node, it is one symbolic link to one file/directory in tmpfs. And the permission of the symbolic link is "lrwxrwxrwx", which is not the real permission of the file/directory on the node. So for some applications sensitive to file permissions, using "link" as the option can be an issue; for example, "/root/.ssh/", which is used for SSH, should NOT use "link" as its option. It is non-persistent; when the node is rebooted, all changes to the file/directory will be lost. This option can be performed on files and directories. \n\n} .
qq{ link,con - It works similar to the "con" option. All the files found in the litetree hierarchy will be concatenated to the file when found. The final file will be put to the tmpfs on the booted node. In the local file system of the booted node, it is one symbolic link to the file/directory in tmpfs. It is non-persistent, when the node is rebooted, all changes to the file will be lost. The option can only be performed on files. \n\n} .
@ -232,7 +232,7 @@ qq{ link,ro - The file is readonly, and will be placed in tmpfs on the booted no
'datacenter' => "Optionally specify a datacenter for the VM to exist in (only applicable to VMWare)",
'cluster' => 'Specify to the underlying virtualization infrastructure a cluster membership for the hypervisor.',
'vidproto' => "Request a specific protocol for remote video access be set up. For example, spice in KVM.",
'physlots' => "Specify the physical slots drc index that will assigned to the partition, the delimiter is ',', and the drc index must started with '0x'. For more details, please reference to manpage of 'lsvm'.",
'physlots' => "Specify the physical slots drc index that will assigned to the partition, the delimiter is ',', and the drc index must started with '0x'. For more details, reference manpage for 'lsvm'.",
'vidmodel' => "Model of video adapter to provide to guest. For example, qxl in KVM",
'vidpassword' => "Password to use instead of temporary random tokens for VNC and SPICE access",
'storagecache' => "Select caching scheme to employ. E.g. KVM understands 'none', 'writethrough' and 'writeback'",
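As a hedged example, these vm table attributes are typically set through the node definition with chdef; the node names and drc index values below are hypothetical:

    chdef -t node -o lpar1 physlots=0x21010201,0x21010202
    chdef -t node -o kvm1 vidproto=spice storagecache=writethrough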
@@ -609,7 +609,7 @@ passed as argument rather than by table value',
descriptions => {
node => 'The hostname of a node in the cluster.',
groups => "A comma-delimited list of groups this node is a member of. Group names are arbitrary, except all nodes should be part of the 'all' group. Internal group names are designated by using __<groupname>. For example, __Unmanaged, could be the internal name for a group of nodes that is not managed by xCAT. Admins should avoid using the __ characters when defining their groups.",
status => 'The current status of this node. This attribute will be set by xCAT software. Valid values: defined, booting, netbooting, booted, discovering, configuring, installing, alive, standingby, powering-off, unreachable. If blank, defined is assumed. The possible status change sequences are: For installation: defined->[discovering]->[configuring]->[standingby]->installing->booting->booted->[alive], For diskless deployment: defined->[discovering]->[configuring]->[standingby]->netbooting->booted->[alive], For booting: [alive/unreachable]->booting->[alive], For powering off: [alive]->powering-off->[unreachable], For monitoring: alive->unreachable. Discovering and configuring are for the x Series discovery process. Alive and unreachable are set only when a monitoring plug-in is started to monitor the node status for xCAT. Note that the status values will not reflect the real node status if you change the state of the node from outside of xCAT (i.e. power off the node using the HMC GUI).',
statustime => "The date and time when the status was updated.",
appstatus => "A comma-delimited list of application status. For example: 'sshd=up,ftp=down,ll=down'",
appstatustime => 'The date and time when appstatus was updated.',
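For instance, the status attributes above can be inspected per node with lsdef (the node name is illustrative):

    lsdef node1 -i status,statustime,appstatus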
@@ -690,9 +690,9 @@ passed as argument rather than by table value',
password => 'The password string for SNMPv3 or community string for SNMPv1/SNMPv2. Falls back to passwd table, and site snmpc value if using SNMPv1/SNMPv2.',
privacy => 'The privacy protocol to use for v3. xCAT will use authNoPriv if this is unspecified. DES is recommended if v3 is enabled, as it is the most readily available.',
auth => 'The authentication protocol to use for SNMPv3. SHA is assumed if v3 is enabled and this is unspecified',
linkports => 'The ports that connect to other switches. Currently, this column is only used by vlan configuration. The format is: "port_number:switch,port_number:switch...". Refer to the switch table for details on how to specify the port numbers.',
sshusername => 'The remote login user name. It can be for ssh or telnet. If it is for telnet, set protocol to "telnet". If the sshusername is blank, the username, password and protocol will be retrieved from the passwd table with "switch" as the key.',
sshpassword => 'The remote login password. It can be for ssh or telnet. If it is for telnet, set protocol to "telnet". If the sshusername is blank, the username, password and protocol will be retrieved from the passwd table with "switch" as the key.',
protocol => 'Protocol for running remote commands for the switch. The valid values are: ssh, telnet. ssh is the default. If the sshusername is blank, the username, password and protocol will be retrieved from the passwd table with "switch" as the key. The passwd.comments attribute is used for protocol.',
switchtype => 'The type of switch. It is used to identify the file name that implements the functions for this switch. The valid values are: Mellanox, Cisco, BNT and Juniper.',
},
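A minimal sketch of populating these login attributes with chtab; the switch name and credentials are placeholders:

    chtab switch=switch1 switches.sshusername=admin switches.sshpassword=secret switches.protocol=telnet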
@@ -1115,7 +1115,7 @@ passed as argument rather than by table value',
" runbootscripts: If set to 'yes' the scripts listed in the postbootscripts\n" .
" attribute in the osimage and postscripts tables will be run during\n" .
" each reboot of stateful (diskful) nodes. This attribute has no\n" .
" effect on stateless and statelite nodes. Please run the following\n" .
" effect on stateless and statelite nodes. Run the following\n" .
" command after you change the value of this attribute: \n" .
" 'updatenode <nodes> -P setuppostbootscripts'\n\n" .
" precreatemypostscripts: (yes/1 or no/0). Default is no. If yes, it will \n" .
@@ -1145,7 +1145,7 @@ passed as argument rather than by table value',
" '1': enable basic debug mode\n" .
" '2': enable expert debug mode\n" .
" For the details on 'basic debug mode' and 'expert debug mode',\n" .
" please refer to xCAT documentation.\n\n" .
" refer to xCAT documentation.\n\n" .
" --------------------\n" .
"REMOTESHELL ATTRIBUTES\n" .
" --------------------\n" .


@@ -248,7 +248,7 @@ Purge the Hard disk. Deregisters and deletes the files. Multiple can be done w
=item B<--resize> I<disk>=I<size>
Change the size of the Hard disk. The disk in I<qcow2> format cannot be set to less than its current size. The disk in I<raw> format can be resized smaller; use caution. Multiple disks can be resized by using comma separated I<disk>B<=>I<size> pairs. The disks are specified by SCSI id. Size defaults to GB. See the hedged example after this list.
=back
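Assuming these options belong to B<chvm>, a hedged invocation that grows the disk with SCSI id sdb to 20 (GB by default) on a hypothetical VM vm1:

    chvm vm1 --resize sdb=20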
@@ -572,7 +572,7 @@ The resource information after modification is similar to:
lpar1: 1/2/2
lpar1: 128.
Note: The physical I/O resources specified with I<add_physlots> will be appended to the specified partition. The physical I/O resources which are not specified but belong to the partition will not be removed. For more information about I<add_physlots>, refer to L<lsvm(1)|lsvm.1>.
=head2 VMware/KVM specific:


@@ -29,7 +29,7 @@ for stateless: B<packimage>
for statelite: B<liteimg>
Besides prompting for some parameter values, the B<genimage> command takes default guesses for the parameters not specified or not defined in the I<osimage> and I<linuximage> tables. It also assumes default answers for questions from the yum/zypper command when installing rpms into the image. Use the B<--interactive> flag if you want the yum/zypper command to prompt you for the answers.
If B<--onlyinitrd> is specified, genimage only regenerates the initrd for a stateless image to be used for a diskless install.
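A sketch of typical invocations; the osimage name is illustrative:

    genimage rhels7.1-ppc64le-netboot-compute
    genimage --onlyinitrd rhels7.1-ppc64le-netboot-compute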


@@ -54,7 +54,7 @@ Display MAC only. The default is to write the first valid adapter MAC to the xCA
B<-D>
Perform discovery for the mac address. By default, it will run a ping test to test the connection between the adapter and the xCAT management node. Use '--noping' to skip the ping test to save time. Be aware that in this way, the lpars will be reset.
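Assuming this is the B<getmacs> man page, a hedged example that performs discovery while skipping the ping test (the node name is hypothetical):

    getmacs lpar1 -D --noping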
B<-f>


@@ -20,7 +20,7 @@ The B<diskless> type:
The attributes of osimage will be used to capture and prepare the root image. The B<osver>, B<arch> and B<profile> attributes for the stateless/statelite image to be created are duplicated from the B<node>'s attributes. If the B<-p|--profile> I<profile> option is specified, the image will be created under "/<I<installroot>>/netboot/<osver>/<arch>/<I<profile>>/rootimg".
The default files/directories excluded in the image are specified by /opt/xcat/share/xcat/netboot/<os>/<I<profile>>.<osver>.<arch>.imgcapture.exlist; also, you can put your customized file (<I<profile>>.<osver>.<arch>.imgcapture.exlist) to /install/custom/netboot/<osplatform>. The directories in the default I<.imgcapture.exlist> file are necessary to capture the image from the diskful Linux node managed by xCAT; do not remove them.
The image captured will be extracted into the /<I<installroot>>/netboot/<B<osver>>/<B<arch>>/<B<profile>>/rootimg directory.
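A hedged example of capturing such an image, assuming the B<imgcapture> command with a type flag; the node and profile names are illustrative:

    imgcapture node1 -t diskless -p compute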


@@ -13,13 +13,13 @@ B<lsslp> [I<noderange>] [B<-V>] [B<-i> I<ip[,ip..]>] B<[-w] [-r|-x|-z] [-n] [-s
=head1 DESCRIPTION
The lsslp command discovers selected service types using the -s flag. All service types are returned if the -s flag is not specified. If a specific IP address is not specified using the -i flag, the request is sent out on all available network adapters. The optional -r, -x, -z and --vpdtable flags format the output. If responses are not received from all the hardware, use -T to increase the waiting time.
NOTE: SLP broadcast requests will propagate only within the subnet of the network adapter broadcast IPs specified by the -i flag.
=head1 OPTIONS
B<noderange> The nodes which the user wants to discover. If a noderange is specified, lsslp will return only the nodes in that range, which helps to add new nodes to the xCAT database without modifying the existing definitions. The node names specified in the noderange must be defined in the database in advance. The specified nodes' type can be frame/cec/hmc/fsp/bpa. If the type is frame or cec, lsslp will list the bpa or fsp nodes within those nodes (bpa for frame, fsp for cec). Do not use noderange with the flag -s.
B<-i> IP(s) the command will send out (defaults to all available adapters).
@@ -54,7 +54,7 @@ B<-x> XML format.
B<-z> Stanza formatted output.
B<-I> Give the warning message for the nodes in the database which have no SLP responses. Note that this flag can only be used after the database migration finished successfully.
=head1 RETURN VALUE
@@ -227,7 +227,7 @@ Output is similar to:
bpa 9458-100 BPCF017 B-0 40.17.0.2 f17c00bpcb_a
8. To find the nodes within the noderange specified by the user. Make sure the noderange input has been defined in the xCAT database.
lsslp CEC1-CEC3
or lsslp CEC1,CEC2,CEC3
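As a further hedged example combining the -s and -T flags described above (service type and wait time are illustrative):

    lsslp -s HMC -T 30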


@@ -28,7 +28,7 @@ I<site.ntpservers> -- the NTP servers for the service node and compute node to s
=back
To set up NTP on the compute node, add the B<setupntp> postscript to the I<postscripts> table and run the I<updatenode node -P setupntp> command, as sketched below.
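A minimal sketch, assuming the compute nodes are in a group named compute:

    chdef -t group -o compute -p postscripts=setupntp
    updatenode compute -P setupntp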
=head1 OPTIONS


@@ -29,7 +29,7 @@ The command that the instance will run based on the B<image> specified. The B<im
=item B<dockerflag>
A JSON string which will be used as parameters to create a Docker container. Reference https://docs.docker.com/engine/reference/api/docker_remote_api_v1.22/ for more information about which parameters can be specified.
Some useful flags are:
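The list of useful flags is truncated in this diff. Purely as an illustration, a dockerflag value built from interactive-terminal parameters of the v1.22 create API might look like:

    dockerflag='{"AttachStdin":true,"AttachStdout":true,"AttachStderr":true,"Tty":true}'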


@@ -45,7 +45,7 @@ You can use the force option to reinitialize a node if it already has resources
After the mkdsklsnode command completes you can use the B<lsnim> command to check the NIM node definition to see if it is ready for booting the node. ("lsnim -l <nim_node_name>").
You can supply your own scripts to be run on the management node or on the service node (if there is hierarchy) for a node during the B<mkdsklsnode> command. Such scripts are called B<prescripts>. They should be copied to the /install/prescripts directory. A table called I<prescripts> is used to specify the scripts and their associated actions. The scripts to be run at the beginning of the B<mkdsklsnode> command are stored in the 'begin' column of the I<prescripts> table. The scripts to be run at the end of the B<mkdsklsnode> command are stored in the 'end' column of the I<prescripts> table. Run the 'tabdump prescripts -d' command for details. An example for the 'begin' or the 'end' column is: I<diskless:myscript1,myscript2>. The following two environment variables will be passed to each script: NODES contains all the names of the nodes that need to run the script and ACTION contains the current nodeset action, in this case "diskless". If I<#xCAT setting:MAX_INSTANCE=number> is specified in the script, the script will get invoked for each node in parallel, but no more than I<number> instances will be invoked at a time. If it is not specified, the script will be invoked once for all the nodes. A hedged prescript sketch follows.
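A minimal prescript sketch under the rules above; the MAX_INSTANCE value and log destination are arbitrary:

    #!/bin/sh
    #xCAT setting:MAX_INSTANCE=5
    # NODES and ACTION are set in the environment by xCAT before this script runs
    logger "prescript for $NODES, action=$ACTION"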
=head1 OPTIONS


@@ -46,9 +46,9 @@ The first form of B<mkvm> command creates new partition(s) with the same profile
The second form of this command duplicates all the partitions from the source specified by I<profile> to the destination specified by I<destcec>. The source and destination CECs can be managed by different HMCs.
Make sure the nodes in the I<noderange> are defined in the I<nodelist> table and I<mgt> is set to 'hmc' in the I<nodehm> table before running this command.
Note that the B<mkvm> command currently only supports creating standard LPARs, not virtual LPARs working with a VIOS server.
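A hedged sketch of the first form; the partition name, id, and source node are illustrative:

    mkvm lpar5 -i 5 -l lpar4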
=head2 For PPC (using Direct FSP Management) specific:


@@ -20,7 +20,7 @@ This command is used to register a monitoring plug-in module to monitor the xCAT
I<name> is the name of the monitoring plug-in module. For example, if the I<name> is called I<xxx>, then the actual file name that the xcatd looks for is I</opt/xcat/lib/perl/xCAT_monitoring/xxx.pm>. Use the I<monls -a> command to list all the monitoring plug-in modules that can be used.
I<settings> is the monitoring plug-in specific settings. It is used to customize the behavior of the plug-in or configure the 3rd party software. Format: I<-s key-value -s key=value ...> Note that the square brackets are needed here. Use the I<monls name -d> command to look for the possible setting keys for a plug-in module.
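For example, to register the xcatmon plug-in with one setting (ping-interval is a setting reported by I<monls xcatmon -d>; the value is illustrative):

    monadd xcatmon -s ping-interval=2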
=head1 OPTIONS


@@ -39,7 +39,7 @@ B<-v | --version> Command Version.
monrm gangliamon
Note that gangliamon must have been registered in the xCAT I<monitoring> table. For a list of registered plug-in modules, use the command B<monls>.


@@ -46,7 +46,7 @@ B<-v | -version> Command Version.
monstop gangliamon
Note that gangliamon must have been registered in the xCAT I<monitoring> table. For a list of registered plug-in modules, use the command I<monls>.


@@ -41,7 +41,7 @@ This command will also create a NIM script resource to enable the xCAT support f
After the B<nimnodeset> command completes you can use the B<lsnim> command to check the NIM node definition to see if it is ready for booting the node. ("lsnim -l <nim_node_name>").
You can supply your own scripts to be run on the management node or on the service node (if there is hierarchy) for a node during the B<nimnodeset> command. Such scripts are called B<prescripts>. They should be copied to the /install/prescripts directory. A table called I<prescripts> is used to specify the scripts and their associated actions. The scripts to be run at the beginning of the B<nimnodeset> command are stored in the 'begin' column of the I<prescripts> table. The scripts to be run at the end of the B<nimnodeset> command are stored in the 'end' column of the I<prescripts> table. Run the 'tabdump prescripts -d' command for details. An example for the 'begin' or the 'end' column is: I<standalone:myscript1,myscript2>. The following two environment variables will be passed to each script: NODES contains all the names of the nodes that need to run the script and ACTION contains the current nodeset action, in this case "standalone". If I<#xCAT setting:MAX_INSTANCE=number> is specified in the script, the script will get invoked for each node in parallel, but no more than I<number> instances will be invoked at a time. If it is not specified, the script will be invoked once for all the nodes.
=head1 OPTIONS

Some files were not shown because too many files have changed in this diff.