Mirror of https://github.com/xcat2/xcat-core.git
Synced 2025-06-17 20:00:19 +00:00

Commit: correct words and syntax error about xcat-docs
@@ -30,7 +30,7 @@ The following example downloads the confluent tar package and creates a local re
 source
 ~~~~~~
 
-To build from source, ensure your machine has the correct development packages to build rpms, then execute hte following:
+To build from source, ensure your machine has the correct development packages to build rpms, then execute the following:
 
 * Clone the git repo: ::
 
@@ -68,8 +68,8 @@ The following examples will create diskless images for ``cudafull`` and ``cudaru
 
 xCAT provides a sample package list files for CUDA. You can find them at:
 
 * ``/opt/xcat/share/xcat/netboot/rh/cudafull.rhels7.ppc64le.otherpkgs.pkglist``
 * ``/opt/xcat/share/xcat/netboot/rh/cudaruntime.rhels7.ppc64le.otherpkgs.pkglist``
 * ``/opt/xcat/share/xcat/netboot/ubuntu/cudafull.ubuntu14.04.3.ppc64el.pkglist``
 * ``/opt/xcat/share/xcat/netboot/ubuntu/cudaruntime.ubuntu14.04.3.ppc64el.pkglist``
 
 **[diskless note]**: For diskless images, the requirement for rebooting the machine is not applicable because the images is loaded on each reboot. The install of the CUDA packages is required to be done in the ``otherpkglist`` **NOT** the ``pkglist``.
 
@@ -15,9 +15,9 @@ The data synchronization is important for any high availability configuration. W
 
 There are a lot of ways for data syncronization, but considering the specific xCAT HAMN requirements, only several of the data syncronziation options are practical for xCAT HAMN.
 
-**1\. Move physical disks between the two management nodes**: if we could physically move the hard disks from the failed management node to the backup management node, and bring up the backup management node, then both the operating system and xCAT data will be identical between the new management node the failed management node. RAID1 or disk mirroring could be used to avoid the disk be a single point of failure.
+**1\. Move physical disks between the two management nodes**: if we could physically move the hard disks from the failed management node to the backup management node, and bring up the backup management node, then both the operating system and xCAT data will be identical between the new management node and the failed management node. RAID1 or disk mirroring could be used to avoid the disk be a single point of failure.
 
-**2\. Shared data**: the two management nodes use the single copy of xCAT data, no matter which management node is the primary MN, the cluster management capability is running on top of the single data copy. The acess to the data could be done through various ways like shared storage, NAS, NFS, samba etc. Based on the protocol being used, the data might be accessable only on one management node at a time or be accessable on both management nodes in parellel. If the data could only be accessed from one management node, the failover process need to take care of the data access transition; if the data could be accessed on both management nodes, the failover does not need to consider the data access transition, it usually means the failover process could be faster.
+**2\. Shared data**: the two management nodes use the single copy of xCAT data, no matter which management node is the primary MN, the cluster management capability is running on top of the single data copy. The access to the data could be done through various ways like shared storage, NAS, NFS, samba etc. Based on the protocol being used, the data might be accessable only on one management node at a time or be accessable on both management nodes in parellel. If the data could only be accessed from one management node, the failover process need to take care of the data access transition; if the data could be accessed on both management nodes, the failover does not need to consider the data access transition, it usually means the failover process could be faster.
 
 Warning: Running database through network file system has a lot of potential problems and is not practical, however, most of the database system provides database replication feature that can be used to synronize the database between the two management nodes.
 
@@ -67,7 +67,7 @@ The following table illustrates the cluster being used in this example:
 
 The node attributes ``servicenode`` and ``xcatmaster``, define which Service node will serve the particular compute node.
 
-* ``servicenode`` - defines which Service Node the **Management Node** should send commands to (e.g ``xdsh``) and should be set to the hostname or IP address of the service node that the management node can conttact it by.
+* ``servicenode`` - defines which Service Node the **Management Node** should send commands to (e.g ``xdsh``) and should be set to the hostname or IP address of the service node that the management node can contact it by.
 * ``xcatmaster`` - defines which Service Node the **Compute Node** should boot from and should be set to the hostname or IP address of the service node that the compute node can contact it by.
 
 You must set both ``servicenode`` and ``xcatmaster`` regardless of whether or not you are using service node pools, for most scenarios, the value will be identical. ::
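As a hedged illustration of the attributes discussed in the hunk above (this example is not part of the commit; the node name ``cn1`` and service-node hostnames ``sn1``/``sn1-c`` are hypothetical):

```shell
# Point compute node cn1 at its service node: sn1 as seen from the
# management node, sn1-c as seen from the compute-side network.
chdef cn1 servicenode=sn1 xcatmaster=sn1-c

# Verify the two attributes were set.
lsdef cn1 -i servicenode,xcatmaster
```

These commands require a configured xCAT management node; when the management node and compute nodes share one network, both attributes would hold the same hostname.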
@@ -11,7 +11,7 @@ node, and ssh access to the nodes that the Service Nodes services.
 The following sections explain how to accomplish this.
 
 
-Build the Service Node Diksless Image
+Build the Service Node Diskless Image
 -------------------------------------
 
 This section assumes you can build the stateless image on the management node because the Service Nodes are the same OS and architecture as the management node. If this is not the case, you need to build the image on a machine that matches the Service Node's OS architecture.
@@ -52,7 +52,7 @@ Configuration for Diskless Installation
 -p /install/<path>/<MLNX_OFED_LINUX.iso> -i $1 -n genimage
 
 
-**[Note]** If you want ot customized kernel version (i.e the kernel version of the diskless image you want to generate is different with the kernel version of you management node), you need to pass ``--add-kernel-support`` attribute to Mellanox. the line added into ``<profile>.postinstall`` should like below ::
+**[Note]** If you want to customized kernel version (i.e the kernel version of the diskless image you want to generate is different with the kernel version of you management node), you need to pass ``--add-kernel-support`` attribute to Mellanox. the line added into ``<profile>.postinstall`` should like below ::
 
 /install/postscripts/mlnxofed_ib_install \
 -p /install/<path>/<MLNX_OFED_LINUX.iso> -m --add-kernel-support -end- -i $1 -n genimage
@@ -127,4 +127,4 @@ Configuration for Diskless Installation
 SM lid: 0
 Capability mask: 0x02594868
 Port GUID: 0x5cf3fc000004ec04
-Link layer: InfiniBand
+Link layer: InfiniBand
@@ -98,7 +98,7 @@ Define the read only community for snmp version 1 and 2. ::
 
 rspconfig <switch> community=<string>
 
-Enable/disable snmp function on the swithc. ::
+Enable/disable snmp function on the switch. ::
 
 rspconfig <switch> snmpcfg=enable/disable
 
@@ -106,7 +106,7 @@ More examples of input parameters:
 
 create_raid="rl#0|pci_id#1014:034a|disk_num#1" create_raid="rl#0|pci_slot_name#0001:08:00.0|disk_num#2"
 
-#. Create two RAID arrays, RAID level is 0, one array uses disks sg0 and sg1, the other array uses diskS sg2 and sg3: ::
+#. Create two RAID arrays, RAID level is 0, one array uses disks sg0 and sg1, the other array uses disks sg2 and sg3: ::
 
 create_raid="rl#0|disk_names#sg0#sg1" create_raid="rl#0|disk_names#sg2#sg3"
 
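The ``create_raid`` values in the hunk above are ``|``-delimited fields whose keyword and values are joined with ``#``. A minimal, runnable sketch of how such a spec breaks apart (this helper is illustrative only, not part of xCAT or the commit):

```shell
#!/bin/sh
# Illustrative only: split a create_raid spec on "|" to expose its fields.
# Each field is a keyword#value[#value...] group, e.g. "disk_names#sg0#sg1".
spec='rl#0|disk_names#sg0#sg1'

IFS='|'
for field in $spec; do
    printf '%s\n' "$field"
done
```

Running this prints ``rl#0`` and ``disk_names#sg0#sg1`` on separate lines, matching the RAID-level and disk-name groups in the examples above.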
@@ -126,14 +126,12 @@ For support clone, add 'otherpkglist' and 'otherpkgdir' attributes to the image
 Capture Image from Golden Client
 ````````````````````````````````
 
-On Management node, use xCAT command 'imgcapture' to capture an image from the golden-client.
+On Management node, use xCAT command 'imgcapture' to capture an image from the golden-client.::
 
-::
 imgcapture <golden-client> -t sysclone -o <mycomputeimage>
 
-When imgcapture is running, it pulls the image from the golden-client, and creates a image files system and a corresponding osimage definition on the xcat management node. You can use below command to check the osimage attributes.
+When imgcapture is running, it pulls the image from the golden-client, and creates a image files system and a corresponding osimage definition on the xcat management node. You can use below command to check the osimage attributes.::
 
-::
 lsdef -t osimage <mycomputeimage>
 
 Install the target nodes with the image from the golden-client
@@ -11,7 +11,7 @@ There are more attributes of nodeset used for some specific purpose or specific
 
 * **runimage**: If you would like to run a task after deployment, you can define that task with this attribute.
 * **runcmd**: This instructs the node to boot to the xCAT nbfs environment and proceed to configure BMC for basic remote access. This causes the IP, netmask, gateway, username, and password to be programmed according to the configuration table.
-* **shell**: This instructs tho node to boot to the xCAT genesis environment, and present a shell prompt on console. The node will also be able to be sshed into and have utilities such as wget, tftp, scp, nfs, and cifs. It will have storage drivers available for many common systems.
+* **shell**: This instructs the node to boot to the xCAT genesis environment, and present a shell prompt on console. The node will also be able to be sshed into and have utilities such as wget, tftp, scp, nfs, and cifs. It will have storage drivers available for many common systems.
 
 Choose such additional attribute of nodeset according to your requirement, if want to get more informantion about nodeset, refer to nodeset's man page.
 
@@ -12,7 +12,7 @@ The output should be similar to the following: ::
 "rhels7.1-ppc64le-stateful-mgmtnode",,"compute","linux",,"install",,"rhels7.1-ppc64le",,,"Linux","rhels7.1","ppc64le",,,,,,,,
 "rhels7.1-ppc64le-netboot-compute",,"compute","linux",,"netboot",,"rhels7.1-ppc64le",,,"Linux","rhels7.1","ppc64le",,,,,,,,
 
-The ``netboot-compute`` is the default **diskless** osimage created rhels7.1 ppc64le. Run ``genimage`` to generatea diskless image based on the "rhels7.1-ppc64le-netboot-compute" definition: ::
+The ``netboot-compute`` is the default **diskless** osimage created rhels7.1 ppc64le. Run ``genimage`` to generate a diskless image based on the "rhels7.1-ppc64le-netboot-compute" definition: ::
 
 genimage rhels7.1-ppc64le-netboot-compute
 
@@ -12,7 +12,7 @@ The following commands are provided:
 * ``pscp`` - parallel remote copy ( supports scp and not hierarchy)
 * ``psh`` - parallel remote shell ( supports ssh and not hierarchy)
 * ``pasu`` - parallel ASU utility
-* ``xdcp`` - concurrently copies files too and from multiple nodes. ( scp/rcp and hierarchy)
+* ``xdcp`` - concurrently copies files to and from multiple nodes. ( scp/rcp and hierarchy)
 * ``xdsh`` - concurrently runs commands on multiple nodes. ( supports ssh/rsh and hierarchy)
 * ``xdshbak`` - formats the output of the xdsh command
 * ``xcoll`` - Formats command output of the psh, xdsh, rinv command
@@ -25,7 +25,7 @@ Set the correct NIC from which DHCP server provide service::
 
 chdef -t site dhcpinterfaces=eth1,eth2
 
-Add dynamic range in purpose of assigning temporary IP adddress for FSP/BMCs and hosts::
+Add dynamic range in purpose of assigning temporary IP address for FSP/BMCs and hosts::
 
 chdef -t network 10_0_0_0-255_255_0_0 dynamicrange="10.0.100.1-10.0.100.100"
 chdef -t network 50_0_0_0-255_255_0_0 dynamicrange="50.0.100.1-50.0.100.100"
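A hedged follow-up to the hunk above (not part of the commit): changes to ``dhcpinterfaces`` and ``dynamicrange`` only take effect once the DHCP configuration is regenerated, which xCAT does with ``makedhcp``.

```shell
# Rewrite dhcpd.conf from the xCAT site/network tables so the new
# dynamic ranges defined above are served; requires an xCAT management node.
makedhcp -n
```

This step assumes a standard xCAT management-node setup; on such a node ``makedhcp -n`` creates a new DHCP configuration file for the defined networks.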
@@ -1,7 +1,7 @@
 Customize osimage (Optional)
 ============================
 
-Optional means all the subitems in this page are not necessary to finish an OS deployment. If you are new to xCAT, you can just jump to `Initialize the Compute for Deployment`.
+Optional means all the subitems in this page are not necessary to finish an OS deployment. If you are new to xCAT, you can just jump to :ref:`Initialize the Compute for Deployment<deploy_os>`.
 
 .. toctree::
    :maxdepth: 2