mirror of https://github.com/xcat2/xcat-core.git synced 2025-05-29 09:13:08 +00:00

Documentation wording, grammar and spelling fixes

This commit is contained in:
Mark Gurevich gurevich@us.ibm.com 2015-12-15 14:44:03 -05:00
parent 2f6c7529e0
commit 5fb66a68e4
12 changed files with 32 additions and 32 deletions


@ -10,7 +10,7 @@ Database Attributes
-------------------
* excludenodes:
A set of comma separated nodes and/or groups that would automatically be subtracted from any noderange, it can be used for excluding some failed nodes for any xCAT commands. See :doc:`noderange </guides/admin-guides/references/man3/noderange>` for details on supported formats.
A set of comma separated nodes and/or groups that would automatically be subtracted from any noderange, it can be used for excluding some failed nodes from any xCAT command. See :doc:`noderange </guides/admin-guides/references/man3/noderange>` for details on supported formats.
* nodestatus:
If set to ``n``, the ``nodelist.status`` column will not be updated during the node deployment, node discovery and power operations. The default is to update.
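The ``excludenodes`` behavior described above is essentially set subtraction applied to every expanded noderange. A minimal sketch of the idea (hypothetical helper name, not xCAT's implementation; real xCAT also expands group names before subtracting):

```python
def apply_excludenodes(noderange, excludenodes):
    """Subtract the site-wide excludenodes set from an expanded noderange.

    Both arguments are lists of node names; in real xCAT either list may
    also contain group names that are expanded to nodes first.
    """
    excluded = set(excludenodes)
    return [n for n in noderange if n not in excluded]

# A failed node listed in excludenodes disappears from every command's target list.
print(apply_excludenodes(["cn1", "cn2", "cn3", "cn4"], ["cn3"]))  # ['cn1', 'cn2', 'cn4']
```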
@ -93,7 +93,7 @@ Services Attributes
-------------------
* consoleondemand:
When set to ``yes``, conserver connects and creates the console output only when the user opens the console. Default is ``no`` on Linux, yes on AIX.
When set to ``yes``, conserver connects and creates the console output only when the user opens the console. Default is ``no`` on Linux, ``yes`` on AIX.
* timezone:
The timezone for all the nodes in the cluster (e.g. ``America/New_York``).
@ -102,7 +102,7 @@ Services Attributes
The tftp directory path. Default is ``/tftpboot``.
* tftpflags:
The flags that used to start tftpd. Default is ``-v -l -s /tftpboot -m /etc/tftpmapfile4xcat.conf`` if ``tftplfags`` is not set.
The flags used to start tftpd. Default is ``-v -l -s /tftpboot -m /etc/tftpmapfile4xcat.conf`` if ``tftpflags`` is not set.
Virtualization Attributes
@ -116,7 +116,7 @@ xCAT Daemon attributes
----------------------
* xcatdport:
The port used by the xcatd daemon for client/server communication.
The port used by the xcatd daemon for client/server communication.
* xcatiport:
The port used by xcatd to receive installation status updates from nodes.


@ -8,7 +8,7 @@ Do You Need Hierarchy in Your Cluster?
Service Nodes
`````````````
For very large clusters, xCAT has the ability to distribute the management operations to service nodes. This allows the management node to delegate all management responsibilities for a set of compute or storage nodes to a service node so that the management node doesn't get overloaded. Although xCAT automates a lot of the aspects of deploying and configuring the services, it still adds complexity to your cluster. So the question is: at what size cluster do you need to start using service nodes?? The exact answer depends on a lot of factors (mgmt node size, network speed, node type, OS, frequency of node deployment, etc.), but here are some general guidelines for how many nodes a single mgmt node (or single service node) can handle:
For very large clusters, xCAT has the ability to distribute the management operations to service nodes. This allows the management node to delegate all management responsibilities for a set of compute or storage nodes to a service node so that the management node doesn't get overloaded. Although xCAT automates a lot of the aspects of deploying and configuring the services, it still adds complexity to your cluster. So the question is: at what size cluster do you need to start using service nodes? The exact answer depends on a lot of factors (mgmt node size, network speed, node type, OS, frequency of node deployment, etc.), but here are some general guidelines for how many nodes a single management node (or single service node) can handle:
* **[Linux]:**
* Stateful or Stateless: 500 nodes


@ -1,7 +1,7 @@
xCAT Cluster OS Running Type
============================
Whether a pyhsical server or a virtual machine, it needs to run an Operating System to support user applications. Generally, the OS is installed in the hard disk of the compute node. But xCAT also support the type that running OS in the RAM.
Whether a node is a physical server or a virtual machine, it needs to run an Operating System to support user applications. Generally, the OS is installed on the hard disk of the compute node. But xCAT also supports running the OS in RAM.
This section gives the pros and cons of each OS running type, and describes the cluster characteristics that each will impact.
@ -30,7 +30,7 @@ Nodes boot from a RAMdisk OS image downloaded from the xCAT mgmt node or service
* Main disadvantage
You can't use a large image with many different applications all in the image for varied users, because it uses too much of the node's memory to store the ramdisk. (To mitigate this disadvantage, you can put your large application binaries and libraries in gpfs to reduce the ramdisk size. This requires some manual configuration of the image).
You can't use a large image with many different applications in the image for varied users, because it uses too much of the node's memory to store the ramdisk. (To mitigate this disadvantage, you can put your large application binaries and libraries in shared storage to reduce the ramdisk size. This requires some manual configuration of the image).
Each node can also have a local "scratch" disk for ``swap``, ``/tmp``, ``/var``, ``log`` files, dumps, etc. The purpose of the scratch disk is to provide a location for files that are written to by the node that can become quite large or for files that you don't want to disappear when the node reboots. There should be nothing put on the scratch disk that represents the node's "state", so that if the disk fails you can simply replace it and reboot the node. A scratch disk would typically be used for situations such as: job scheduling preemption (which needs a lot of swap space), applications that write large temp files, or keeping GPFS log or trace files persistently. (As a partial alternative to using the scratch disk, customers can choose to put ``/tmp``, ``/var/tmp``, and log files (except GPFS log files) in GPFS, but must be willing to accept the dependency on GPFS). This can be done by enabling the 'localdisk' support. For the details, please refer to the section [TODO Enabling the localdisk Option].


@ -5,7 +5,7 @@ All of the xCAT Objects and Configuration data are stored in xCAT database. By d
xCAT defines about 70 tables to store different data. You can get the xCAT database definition from file ``/opt/xcat/lib/perl/xCAT/Schema.pm``.
You can run ``tabdump`` command to get all the xCAT database tables. Or executing ``tabdump -d <tablename>`` or ``man <tablename>`` to get the detail columns of table definition. ::
You can run the ``tabdump`` command to get all the xCAT database tables, or run ``tabdump -d <tablename>`` or ``man <tablename>`` to get detailed information on the columns and table definitions. ::
$ tabdump
$ tabdump site
@ -26,7 +26,7 @@ For a complete reference, see the man page for xcatdb: ``man xcatdb``.
* **passwd table**
Contains default userids and passwords for xCAT to access cluster components. In most cases, xCAT will also actually set the userid/password in the relevant component (Generally for SP like bmc, fsp.) when it is being configured or installed. The default userids/passwords in passwd table for specific cluster components can be overridden by the columns in other tables, e.g. ``mpa`` , ``ipmi`` , ``ppchcp`` , etc.
Contains default userids and passwords for xCAT to access cluster components. In most cases, xCAT will also set the userid/password in the relevant component (generally for service processors such as BMC and FSP) when it is being configured or installed. The default userids/passwords in the passwd table for specific cluster components can be overridden by the columns in other tables, e.g. ``mpa`` , ``ipmi`` , ``ppchcp`` , etc.
* **networks table**
@ -56,11 +56,11 @@ xCAT offers 5 commands to manipulate the database tables:
* ``dumpxCATdb``
Dumps all the xCAT db tables to CSV files under the specified directory, often used to backup the xCAT database in xCAT reinstallation or management node migration.
Dumps all the xCAT db tables to CSV files under the specified directory, often used to backup the xCAT database for xCAT reinstallation or management node migration.
* ``restorexCATdb``
Restore the xCAT db tables with the CSV files under the specified directory.
Restore the xCAT db tables from the CSV files under the specified directory.
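The ``dumpxCATdb``/``restorexCATdb`` pair is essentially a CSV round trip per table. A minimal sketch of that idea using Python's ``csv`` module (hypothetical table data, not xCAT's code):

```python
import csv
import io

def dump_table(rows, fieldnames):
    """Serialize one table's rows to CSV text (one file per table, like dumpxCATdb)."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

def restore_table(text):
    """Read rows back from CSV text (like restorexCATdb does per table)."""
    return list(csv.DictReader(io.StringIO(text)))

# Round trip: dumping then restoring yields the original rows.
site = [{"key": "domain", "value": "cluster.example.com"}]
print(restore_table(dump_table(site, ["key", "value"])) == site)  # True
```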
**Advanced Topic: How to use Regular Expression in xCAT tables:**


@ -41,7 +41,7 @@ These two options will result in exactly the same definitions and attribute valu
Creating a dynamic node group
-----------------------------
The selection criteria for a dynamic node group is specified by providing a list of ``attr<operator>val`` pairs that can be used to determine the members of a group. The valid operators include: ``==``, ``!=``, ``=~`` and ``!~``. The ``attr`` field can be any node definition attribute returned by the ``lsdef`` command. The ``val`` field in selection criteria can be a simple sting or a regular expression. A regular expression can only be specified when using the ``=~`` or ``!~`` operators. See <TODO http://www.perl.com/doc/manual/html/pod/perlre.html> for information on the format and syntax of regular expressions.
The selection criteria for a dynamic node group is specified by providing a list of ``attr<operator>val`` pairs that can be used to determine the members of a group. The valid operators include: ``==``, ``!=``, ``=~`` and ``!~``. The ``attr`` field can be any node definition attribute returned by the ``lsdef`` command. The ``val`` field in selection criteria can be a simple string or a regular expression. A regular expression can only be specified when using the ``=~`` or ``!~`` operators. See http://www.perl.com/doc/manual/html/pod/perlre.html for information on the format and syntax of regular expressions.
Operator descriptions ::
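The four operators can be illustrated with a small evaluator; this is an illustrative Python sketch only (xCAT's own parser is Perl and uses Perl regex syntax):

```python
import re

def matches(node_attrs, criteria):
    """Check a node's attributes against attr<operator>val selection criteria.

    == and != compare plain strings; =~ and !~ match regular expressions,
    mirroring the four documented dynamic-group operators.
    """
    for attr, op, val in criteria:
        actual = node_attrs.get(attr, "")
        if op == "==" and actual != val:
            return False
        if op == "!=" and actual == val:
            return False
        if op == "=~" and not re.search(val, actual):
            return False
        if op == "!~" and re.search(val, actual):
            return False
    return True

# Select nodes whose arch is ppc64 and whose name looks like cn<number>.
criteria = [("arch", "==", "ppc64"), ("node", "=~", r"^cn\d+$")]
print(matches({"node": "cn12", "arch": "ppc64"}, criteria))    # True
print(matches({"node": "login1", "arch": "ppc64"}, criteria))  # False
```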


@ -8,7 +8,7 @@ Basically, xCAT has 20 types of objects. They are: ::
node notification osdistro osdistroupdate osimage
policy rack route site zone
This section will introduce you several important types of object to give you an overview of how the object looks like and how to manipulate them.
This section will introduce you to several important types of objects and give you an overview of how to view and manipulate them.
You can get a detailed description of each object by ``man <object type>``, e.g. ``man node``.
@ -115,11 +115,11 @@ You can get the detail description of each object by ``man <object type>`` e.g.
postscripts=syslog,remoteshell,syncfiles
provmethod=rhels7.1-x86_64-install-compute
This is useful that define common attributes in **group object** so that new added node will inherits them automatically. Since the attributes are defined in the **group object**, it will make the change of attributes easier that you don't need to touch the individual nodes.
It is useful to define common attributes in the **group object** so that newly added nodes will inherit them automatically. Since the attributes are defined in the **group object**, you don't need to touch the individual nodes' attributes.
* **Use Regular Expression to generate value for node attributes**
This is powerful feature in xCAT that you can generate individual attribute value from node name instead of sign them one by one. Refer to :doc:`Use Regular Expression in xCAT Database Table <../xcat_db/regexp_db>`.
This is a powerful feature of xCAT: you can generate individual attribute values from the node name instead of assigning them one by one. Refer to :doc:`Use Regular Expression in xCAT Database Table <../xcat_db/regexp_db>`.
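The idea behind regex-generated attributes can be sketched in Python (xCAT itself stores these rules as Perl-style regular-expression rows in its tables; the node naming scheme and IP layout below are hypothetical examples):

```python
import re

def derive_ip(nodename):
    """Derive a per-node attribute value from the node name.

    Captures the numeric suffix of names like cn1, cn42 and computes an
    IP address from it, so one rule covers a whole group of nodes.
    """
    m = re.match(r"cn(\d+)$", nodename)
    if not m:
        raise ValueError(f"unexpected node name: {nodename}")
    index = int(m.group(1))
    return f"10.0.0.{index}"

print(derive_ip("cn1"))   # 10.0.0.1
print(derive_ip("cn42"))  # 10.0.0.42
```

One rule like this replaces dozens of per-node table rows, which is why the docs recommend it for large, regularly named clusters.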
* **osimage Object**
@ -154,7 +154,7 @@ You can get the detail description of each object by ``man <object type>`` e.g.
Then in the next network boot, the node **cn1** will start to deploy **rhels7.1**.
* **Manipulate Object**
* **Manipulating Objects**
You have already seen the commands ``mkdef``, ``lsdef``, ``chdef`` used to manipulate objects. xCAT has 4 object management commands to manage all the xCAT objects.


@ -4,19 +4,19 @@ node
Description
-----------
The definition of physical units in the cluster, such as lpar,virtual machine, frame, cec, hmc, switch.
The definition of physical units in the cluster, such as lpar, virtual machine, frame, cec, hmc, switch.
Key Attributes
--------------
* os:
The operating system deployed on this node. Valid values: AIX, rhels*,rhelc*, rhas*,centos*,SL*, fedora*, sles* (where * is the version #)
The operating system deployed on this node. Valid values: AIX, rhels*, rhelc*, rhas*, centos*, SL*, fedora*, sles* (where * is the version #)
* arch:
The hardware architecture of this node. Valid values: x86_64, ppc64, x86, ia64.
* groups:
Usually, there are a set of nodes with some attributes in common, xCAT admin can define a node group containing these nodes, so that the management task can be issued against the group instead of individual nodes. A node can be a memeber of different groups, so the value of this attributes is a comma-delimited list of groups. At least one group is required to create a node. The new created group names should not be prefixed with "__" as this token has been preserverd as the intrnal group name.
Usually there are a set of nodes with some attributes in common; the xCAT admin can define a node group containing these nodes, so that management tasks can be issued against the group instead of individual nodes. A node can be a member of multiple groups, so the value of this attribute is a comma-delimited list of groups. At least one group is required to create a node. Newly created group names should not be prefixed with "__" as this token is reserved for internal group names.
* mgt:
The method to do general hardware management of the node. This attribute can be determined by the machine type of the node. Valid values: ipmi, blade, hmc, ivm, fsp, bpa, kvm, esx, rhevm.
@ -54,7 +54,7 @@ Key Attrubutes
The provisioning method for node deployment. Usually, this attribute is an ``osimage`` object name.
* status:
The current status of the node, which is updated by xCAT. This value can be used to monitor the provision process. Valid values: powering-off,installing,booting/netbooting,booted.
The current status of the node, which is updated by xCAT. This value can be used to monitor the provision process. Valid values: powering-off, installing, booting/netbooting, booted.
Use Cases
---------


@ -4,28 +4,28 @@ osimage
Description
-----------
A logic definition of image which can be used to provision the node.
A logical definition of an image that can be used to provision nodes.
Key Attributes
--------------
* imagetype:
The type of operating system this definition represents (linux,AIX).
The type of operating system this definition represents (linux, AIX).
* osarch:
The hardware architecture of the nodes this image supports. Valid values: x86_64, ppc64, ppc64le.
* osvers:
The Linux distribution name and release number of the image. Valid values: rhels*,rhelc*, rhas*,centos*,SL*, fedora*, sles* (where * is the version #).
The Linux distribution name and release number of the image. Valid values: rhels*, rhelc*, rhas*, centos*, SL*, fedora*, sles* (where * is the version #).
* pkgdir:
The name of the directory where the copied in OS distro content are stored.
The name of the directory where the copied-in OS distro content is stored.
* pkglist:
The fully qualified name of a file containing the list of packages shipped in the Linux distribution ISO that will be installed on the node.
* otherpkgdir
When xCAT user need to install some additional packages not shipped in Linux distribution ISO, these packages can be placed in the directory specified in this attribute. xCAT user should take care the dependency problem themselves, put all the dependency packages not shipped in Linux distribution ISO in this directory and create repository in this directory.
When the xCAT user needs to install additional packages not shipped in the Linux distribution ISO, those packages can be placed in the directory specified in this attribute. The xCAT user should take care of dependency problems themselves, by putting all the dependency packages not shipped in the Linux distribution ISO in this directory and creating a repository in this directory.
* otherpkglist:
@ -46,7 +46,7 @@ List all the osimage objects ::
* Case 2:
Create a osimage definition "customized-rhels7-ppc64-install-compute" based on an existed osimage "rhels7-ppc64-install-compute", the osimage "customized-rhels7-ppc64-install-compute" will inherit all the attributes of "rhels7-ppc64-install-compute" except installing the additional packages specified in the file "/tmp/otherpkg.list":
Create an osimage definition "customized-rhels7-ppc64-install-compute" based on an existing osimage "rhels7-ppc64-install-compute". The new osimage will inherit all the attributes of "rhels7-ppc64-install-compute", except that it will also install the additional packages specified in the file "/tmp/otherpkg.list":
*step 1* : write the osimage definition "rhels7-ppc64-install-compute" to a stanza file "osimage.stanza" ::
@ -89,6 +89,6 @@ The content will look like ::
provmethod=install
template=/opt/xcat/share/xcat/install/rh/compute.rhels7.tmpl
*step 3* : create the osimage "customized-rhels7-ppc64-install-compute" with the stanza file ::
*step 3* : create the osimage "customized-rhels7-ppc64-install-compute" from the stanza file ::
cat /tmp/osimage.stanza | mkdef -z
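The stanza format consumed by ``mkdef -z`` is a simple "name:" header followed by indented ``attr=value`` lines. A minimal Python sketch of a parser for it (illustrative only; real stanza files also support comments, multiple objects, and quoting):

```python
def parse_stanza(text):
    """Parse a minimal xCAT stanza: 'name:' header plus indented attr=value lines.

    Returns a dict mapping object names to their attribute dicts.
    """
    objects = {}
    current = None
    for line in text.splitlines():
        if not line.strip():
            continue
        if not line.startswith((" ", "\t")) and line.rstrip().endswith(":"):
            # Unindented line ending in ':' starts a new object definition.
            current = line.strip().rstrip(":")
            objects[current] = {}
        elif current and "=" in line:
            attr, _, value = line.strip().partition("=")
            objects[current][attr] = value
    return objects

stanza = """customized-rhels7-ppc64-install-compute:
    objtype=osimage
    provmethod=install
    otherpkglist=/tmp/otherpkg.list
"""
objs = parse_stanza(stanza)
print(objs["customized-rhels7-ppc64-install-compute"]["provmethod"])  # install
```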


@ -2,7 +2,7 @@
.. BEGIN_install_os_mgmt_node
The system requirements for your xCAT management node largely depends on the size of the cluster you plan to manage and the type of provisioning used (diskful, diskless, system clones, etc). The majority of system load comes during cluster provisioning time.
The system requirements for your xCAT management node largely depend on the size of the cluster you plan to manage and the type of provisioning used (diskful, diskless, system clones, etc). The majority of system load comes during cluster provisioning time.
**Memory Requirements:**


@ -1,7 +1,7 @@
Prepare the Management Node
===========================
These steps prepare the Management Node or xCAT Installation
These steps prepare the Management Node for xCAT Installation
Install an OS on the Management Node
------------------------------------


@ -1,7 +1,7 @@
Prepare the Management Node
===========================
These steps prepare the Management Node or xCAT Installation
These steps prepare the Management Node for xCAT Installation
Install an OS on the Management Node
------------------------------------


@ -1,9 +1,9 @@
xCAT2 Release Information
=========================
The following table is a summary of the New Operating System, New Hardware and New features that are supported in certain xCAT release.
The following table is a summary of the New Operating System, New Hardware and New features that are supported in each xCAT release.
The New OS and New Hardware which listed in the table have been fully tested. The OS which comes with the same source code or Hardware comes with the same CPU should also work, but you need to try it by yourself.
The New OS and New Hardware listed in the table have been fully tested. An OS built from the same source code, or hardware with the same CPU, should also work, but you need to verify it yourself.
For a complete list of new functions, bug fixes, restrictions, and known problems, refer to the individual release notes.