
Merge pull request #407 from mfoliveira/patch-1

xcat-core/docs: minor documentation fixes
Victor Hu 2015-11-14 21:42:30 -05:00
commit 61229fc417
5 changed files with 10 additions and 10 deletions


@@ -61,10 +61,10 @@ DNS Attributes
In this example xcatmn is the name of the xCAT MN, and DNS there should listen on eth1 and eth2. On all of the nodes in group ``service`` DNS should listen on the bond0 nic.
-**NOTE**: if using this attribute to block certain interfaces, make sure the ip maps to your hostname of xCAT MN is not blocked since xCAT needs to use this ip to communicate with the local NDS server on MN.
+**NOTE**: if using this attribute to block certain interfaces, make sure the ip that maps to your hostname of xCAT MN is not blocked since xCAT needs to use this ip to communicate with the local DNS server on MN.
-Install/Deployment Attrubutes
+Install/Deployment Attributes
-----------------------------
* installdir:
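
For reference, the ``dnsinterfaces`` example this hunk's context describes (xcatmn listening on eth1/eth2, group ``service`` on bond0) could be set with a one-line ``chdef`` against the site table; this is only a sketch, assuming the standard ``clustersite`` site object: ::

    $ chdef -t site clustersite dnsinterfaces="xcatmn|eth1,eth2;service|bond0"
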
@@ -86,7 +86,7 @@ Remoteshell Attributes
----------------------
* sshbetweennodes:
-Comma separated list of groups of compute nodes to enable passwordless root ssh during install, or ``xdsh -K``. Default is ``ALLGROUPS``. Set to ``NOGROUPS``,if you do not wish to enabled any group of compute nodes.If using the ``zone`` table, this attribute in not used.
+Comma separated list of groups of compute nodes to enable passwordless root ssh during install, or ``xdsh -K``. Default is ``ALLGROUPS``. Set to ``NOGROUPS`` if you do not wish to enable it for any group of compute nodes. If using the ``zone`` table, this attribute in not used.
Services Attributes
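
Likewise, the ``sshbetweennodes`` setting discussed above is a single site-table change; a sketch, again assuming the ``clustersite`` object, that disables passwordless root ssh between compute-node groups: ::

    $ chdef -t site clustersite sshbetweennodes=NOGROUPS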


@@ -32,7 +32,7 @@ Nodes boot from a RAMdisk OS image downloaded from the xCAT mgmt node or service
You can't use a large image with many different applications all in the image for varied users, because it uses too much of the node's memory to store the ramdisk. (To mitigate this disadvantage, you can put your large application binaries and libraries in gpfs to reduce the ramdisk size. This requires some manual configuration of the image).
-Each node can also have a local "scratch" disk for ``swap``, ``/tmp``, ``/var``, ``log`` files, dumps, etc. The purpose of the scratch disk is to provide a location for files that are written to by the node that can become quite large or for files that you don't want to have disappear when the node reboots. There should be nothing put on the scratch disk that represents the node's "state", so that if the disk fails you can simply replace it and reboot the node. A scratch disk would typically be used for situations like: job scheduling preemption is required (which needs a lot of swap space), the applications write large temp files, or you want to keep gpfs log or trace files persistently. (As a partial alternative to using the scratch disk, customers can choose to put ``/tmp`` ``/var/tmp``, and log files (except GPFS logs files) in GPFS, but must be willing to accept the dependency on GPFS). This can be done by enabling the 'localdisk' support. For the details, please refer to the section [TODO Enabling the localdisk Option].
+Each node can also have a local "scratch" disk for ``swap``, ``/tmp``, ``/var``, ``log`` files, dumps, etc. The purpose of the scratch disk is to provide a location for files that are written to by the node that can become quite large or for files that you don't want to disappear when the node reboots. There should be nothing put on the scratch disk that represents the node's "state", so that if the disk fails you can simply replace it and reboot the node. A scratch disk would typically be used for situations like: job scheduling preemption is required (which needs a lot of swap space), the applications write large temp files, or you want to keep gpfs log or trace files persistently. (As a partial alternative to using the scratch disk, customers can choose to put ``/tmp`` ``/var/tmp``, and log files (except GPFS logs files) in GPFS, but must be willing to accept the dependency on GPFS). This can be done by enabling the 'localdisk' support. For the details, please refer to the section [TODO Enabling the localdisk Option].
OSimage Definition
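
To gauge how much node memory a RAMdisk image will actually consume, the packed image on the management node can be inspected; the path below is only illustrative and assumes the conventional ``/install/netboot/<os>/<arch>/<profile>/`` layout produced by ``packimage`` (the exact file name can differ with the compression method used): ::

    $ ls -lh /install/netboot/rhels7.1/x86_64/compute/rootimg.gz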


@@ -129,7 +129,7 @@ You can get the detail description of each object by ``man <object type>`` e.g.
$ lsdef -t osimage
-Display the detail attirbutes of one **osimage** named **rhels7.1-x86_64-install-compute**: ::
+Display the detail attributes of one **osimage** named **rhels7.1-x86_64-install-compute**: ::
$ lsdef -t osimage rhels7.1-x86_64-install-compute
Object name: rhels7.1-x86_64-install-compute
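
When only a few attributes are of interest, ``lsdef`` can also restrict its output with ``-i``; the attribute names below are just examples: ::

    $ lsdef -t osimage rhels7.1-x86_64-install-compute -i provmethod,profile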


@@ -7,7 +7,7 @@ In the chapter :doc:`xCAT Object <../../../basic_concepts/xcat_object/index>`, i
xCAT offers several powerful **Automatic Hardware Discovery** methods to simplify the procedure of SP configuration and server information collection. If your managed cluster has more than 10 servers, the automatic discovery is worth to take a try. If your cluster has more than 50 servers, the automatic discovery is recommended.
-Following are the brief characters and adaptability of each method, you can select a proper one according to your cluster size and other consideration.
+Following are the brief characteristics and adaptability of each method, you can select a proper one according to your cluster size and other consideration.
* **Manually Define Nodes**
@@ -23,13 +23,13 @@ Following are the brief characters and adaptability of each method, you can sele
It will take additional time to configure the SP (Management Modules like: BMC, FSP) and collect the server information like MTMS (Machine Type and Machine Serial) and Host MAC address for OS deployment ...
-This method is inefficiency and error-prone for a large number of servers.
+This method is inefficient and error-prone for a large number of servers.
* **MTMS-based Discovery**
**Step1**: **Automatically** search all the servers and collect server MTMS information.
-**Step2**: Define the searched server to a **Node Object** automatically. In this case, the node name will be generate base on the **MTMS** string. Or admin can rename the **Node Object** to a reasonable name like **r1u1 (It means the physical location is in Rack1 and Unit1)** base on the **MTMS**.
+**Step2**: Define the searched server to a **Node Object** automatically. In this case, the node name will be generated based on the **MTMS** string. The admin can rename the **Node Object** to a reasonable name like **r1u1** (It means the physical location is in Rack1 and Unit1).
**Step3**: Power on the nodes, xCAT discovery engine will update additional information like the **MAC for deployment** for the nodes.
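
The optional rename in **Step2** is a single ``chdef`` call using its ``-n`` (new name) option; the auto-generated MTMS-based node name shown below is hypothetical: ::

    $ chdef -t node -o node-8247-22l-212e4ba -n r1u1
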
@@ -41,7 +41,7 @@ Following are the brief characters and adaptability of each method, you can sele
* cons
-Compare to **Switch-based Discovery**, admin needs to be involved to rename the auto discovered node if wanting to give node a reasonable name. It's hard to rename the node to a location awared name for a large number of server.
+Compared to **Switch-based Discovery**, the admin needs to be involved to rename the automatically discovered node to a reasonable name (optional). It's hard to rename the node to a location-based name for a large number of server.
* **Switch-based Discovery**


@@ -18,7 +18,7 @@ Network Services (dhcp, tftp, http,etc):
The various network services necessary to perform Operating System deployment over the network. xCAT will bring up and configure the network services automatically without any intervention from the System Administrator.
Service Processor (SP):
-A module embedded in the hardware server used to perform the out-of-band hardware control. (e.g. Integrated Management Module (IMM), Flexible Service Processor (FSP), etc)
+A module embedded in the hardware server used to perform the out-of-band hardware control. (e.g. Integrated Management Module (IMM), Flexible Service Processor (FSP), Baseboard Management Controller (BMC), etc)
Management network:
The network used by the Management Node (or Service Node) to install operating systems and manage the nodes. The Management Node and in-band Network Interface Card (NIC) of the nodes are connected to this network. If you have a large cluster utilizing Service Nodes, sometimes this network is segregated into separate VLANs for each Service Node.
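
Once a node's SP is defined to xCAT, out-of-band control of that node flows over the management network through the SP, e.g. a power status query (``r1u1`` is the sample node name used earlier in this pull request): ::

    $ rpower r1u1 stat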