diff --git a/docs/source/advanced/confluent/server/confluent_server.rst b/docs/source/advanced/confluent/server/confluent_server.rst index 31a42d727..ea9bf1ebc 100644 --- a/docs/source/advanced/confluent/server/confluent_server.rst +++ b/docs/source/advanced/confluent/server/confluent_server.rst @@ -30,7 +30,7 @@ The following example downloads the confluent tar package and creates a local re source ~~~~~~ -To build from source, ensure your machine has the correct development packages to build rpms, then execute hte following: +To build from source, ensure your machine has the correct development packages to build rpms, then execute the following: * Clone the git repo: :: diff --git a/docs/source/advanced/gpu/nvidia/osimage/ubuntu.rst b/docs/source/advanced/gpu/nvidia/osimage/ubuntu.rst index f0b63d908..f56f8b9ed 100644 --- a/docs/source/advanced/gpu/nvidia/osimage/ubuntu.rst +++ b/docs/source/advanced/gpu/nvidia/osimage/ubuntu.rst @@ -68,8 +68,8 @@ The following examples will create diskless images for ``cudafull`` and ``cudaru xCAT provides a sample package list files for CUDA. You can find them at: - * ``/opt/xcat/share/xcat/netboot/rh/cudafull.rhels7.ppc64le.otherpkgs.pkglist`` - * ``/opt/xcat/share/xcat/netboot/rh/cudaruntime.rhels7.ppc64le.otherpkgs.pkglist`` + * ``/opt/xcat/share/xcat/netboot/ubuntu/cudafull.ubuntu14.04.3.ppc64el.pkglist`` + * ``/opt/xcat/share/xcat/netboot/ubuntu/cudaruntime.ubuntu14.04.3.ppc64el.pkglist`` **[diskless note]**: For diskless images, the requirement for rebooting the machine is not applicable because the images is loaded on each reboot. The install of the CUDA packages is required to be done in the ``otherpkglist`` **NOT** the ``pkglist``. 
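The diskless note in the hunk above implies a typical workflow: copy the sample Ubuntu pkglist to a custom location and point the osimage's ``otherpkglist``/``otherpkgdir`` at it. A minimal sketch (the osimage name ``cudafull-image`` and the ``/install/cuda-repo`` directory are hypothetical, not taken from the diff):

```shell
# Sketch: wire the sample CUDA package list into a diskless osimage.
# "cudafull-image" and /install/cuda-repo are hypothetical names.
mkdir -p /install/custom/netboot/ubuntu
cp /opt/xcat/share/xcat/netboot/ubuntu/cudafull.ubuntu14.04.3.ppc64el.pkglist \
   /install/custom/netboot/ubuntu/cudafull.pkglist
# Per the diskless note, CUDA packages belong in otherpkglist, NOT pkglist:
chdef -t osimage cudafull-image \
    otherpkglist=/install/custom/netboot/ubuntu/cudafull.pkglist \
    otherpkgdir=/install/cuda-repo
```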
diff --git a/docs/source/advanced/hamn/high_available_management_node.rst b/docs/source/advanced/hamn/high_available_management_node.rst index 7da41db12..e6bb3c383 100644 --- a/docs/source/advanced/hamn/high_available_management_node.rst +++ b/docs/source/advanced/hamn/high_available_management_node.rst @@ -15,9 +15,9 @@ The data synchronization is important for any high availability configuration. W There are a lot of ways for data syncronization, but considering the specific xCAT HAMN requirements, only several of the data syncronziation options are practical for xCAT HAMN. -**1\. Move physical disks between the two management nodes**: if we could physically move the hard disks from the failed management node to the backup management node, and bring up the backup management node, then both the operating system and xCAT data will be identical between the new management node the failed management node. RAID1 or disk mirroring could be used to avoid the disk be a single point of failure. +**1\. Move physical disks between the two management nodes**: if we could physically move the hard disks from the failed management node to the backup management node, and bring up the backup management node, then both the operating system and xCAT data will be identical between the new management node and the failed management node. RAID1 or disk mirroring could be used to avoid the disk being a single point of failure. -**2\. Shared data**: the two management nodes use the single copy of xCAT data, no matter which management node is the primary MN, the cluster management capability is running on top of the single data copy. The acess to the data could be done through various ways like shared storage, NAS, NFS, samba etc. Based on the protocol being used, the data might be accessable only on one management node at a time or be accessable on both management nodes in parellel.
If the data could only be accessed from one management node, the failover process need to take care of the data access transition; if the data could be accessed on both management nodes, the failover does not need to consider the data access transition, it usually means the failover process could be faster. +**2\. Shared data**: the two management nodes use the single copy of xCAT data, no matter which management node is the primary MN, the cluster management capability is running on top of the single data copy. Access to the data could be provided in various ways, such as shared storage, NAS, NFS, Samba, etc. Based on the protocol being used, the data might be accessible only on one management node at a time or be accessible on both management nodes in parallel. If the data can only be accessed from one management node, the failover process needs to take care of the data access transition; if the data can be accessed on both management nodes, the failover does not need to consider the data access transition, which usually means the failover process could be faster. Warning: Running database through network file system has a lot of potential problems and is not practical, however, most of the database system provides database replication feature that can be used to synronize the database between the two management nodes. diff --git a/docs/source/advanced/hierarchy/define_service_nodes.rst b/docs/source/advanced/hierarchy/define_service_nodes.rst index 5614ac399..e031b56c9 100644 --- a/docs/source/advanced/hierarchy/define_service_nodes.rst +++ b/docs/source/advanced/hierarchy/define_service_nodes.rst @@ -67,7 +67,7 @@ The following table illustrates the cluster being used in this example: The node attributes ``servicenode`` and ``xcatmaster``, define which Service node will serve the particular compute node.
- * ``servicenode`` - defines which Service Node the **Management Node** should send commands to (e.g ``xdsh``) and should be set to the hostname or IP address of the service node that the management node can conttact it by. + * ``servicenode`` - defines which Service Node the **Management Node** should send commands to (e.g. ``xdsh``) and should be set to the hostname or IP address of the service node that the management node can contact it by. * ``xcatmaster`` - defines which Service Node the **Compute Node** should boot from and should be set to the hostname or IP address of the service node that the compute node can contact it by. You must set both ``servicenode`` and ``xcatmaster`` regardless of whether or not you are using service node pools, for most scenarios, the value will be identical. :: diff --git a/docs/source/advanced/hierarchy/provision/diskless_sn.rst b/docs/source/advanced/hierarchy/provision/diskless_sn.rst index ddc87af74..1413c246e 100644 --- a/docs/source/advanced/hierarchy/provision/diskless_sn.rst +++ b/docs/source/advanced/hierarchy/provision/diskless_sn.rst @@ -11,7 +11,7 @@ node, and ssh access to the nodes that the Service Nodes services. The following sections explain how to accomplish this. -Build the Service Node Diksless Image +Build the Service Node Diskless Image ------------------------------------- This section assumes you can build the stateless image on the management node because the Service Nodes are the same OS and architecture as the management node. If this is not the case, you need to build the image on a machine that matches the Service Node's OS architecture.
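The ``servicenode``/``xcatmaster`` pairing described in the define_service_nodes.rst hunk above is normally set with ``chdef``. A hedged sketch, where the node range ``r1n01-r1n10`` and the hostnames ``sn1``/``sn1-c`` are made-up illustrations:

```shell
# Hypothetical layout: compute nodes r1n01-r1n10 are served by service node
# sn1, which the compute nodes reach as sn1-c on the compute-side network.
chdef -t node r1n01-r1n10 servicenode=sn1 xcatmaster=sn1-c
# Verify that both attributes were set on one of the nodes:
lsdef r1n01 -i servicenode,xcatmaster
```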
diff --git a/docs/source/advanced/networks/infiniband/mlnxofed_ib_install_v2_diskless.rst b/docs/source/advanced/networks/infiniband/mlnxofed_ib_install_v2_diskless.rst index 3f10a8298..96f793732 100644 --- a/docs/source/advanced/networks/infiniband/mlnxofed_ib_install_v2_diskless.rst +++ b/docs/source/advanced/networks/infiniband/mlnxofed_ib_install_v2_diskless.rst @@ -52,7 +52,7 @@ Configuration for Diskless Installation -p /install// -i $1 -n genimage - **[Note]** If you want ot customized kernel version (i.e the kernel version of the diskless image you want to generate is different with the kernel version of you management node), you need to pass ``--add-kernel-support`` attribute to Mellanox. the line added into ``.postinstall`` should like below :: + **[Note]** If you want to customize the kernel version (i.e. the kernel version of the diskless image you want to generate is different from the kernel version of your management node), you need to pass the ``--add-kernel-support`` attribute to Mellanox. The line added into ``.postinstall`` should look like below :: /install/postscripts/mlnxofed_ib_install \ -p /install// -m --add-kernel-support -end- -i $1 -n genimage @@ -127,4 +127,4 @@ Configuration for Diskless Installation SM lid: 0 Capability mask: 0x02594868 Port GUID: 0x5cf3fc000004ec04 - Link layer: InfiniBand \ No newline at end of file + Link layer: InfiniBand diff --git a/docs/source/advanced/networks/infiniband/switch_configuration.rst b/docs/source/advanced/networks/infiniband/switch_configuration.rst index 96c8cd0a7..b84d4a138 100644 --- a/docs/source/advanced/networks/infiniband/switch_configuration.rst +++ b/docs/source/advanced/networks/infiniband/switch_configuration.rst @@ -98,7 +98,7 @@ Define the read only community for snmp version 1 and 2. :: rspconfig community= -Enable/disable snmp function on the swithc. :: +Enable/disable snmp function on the switch.
:: rspconfig snmpcfg=enable/disable diff --git a/docs/source/advanced/raid/hardware_raid.rst b/docs/source/advanced/raid/hardware_raid.rst index cc042c198..ca616790a 100644 --- a/docs/source/advanced/raid/hardware_raid.rst +++ b/docs/source/advanced/raid/hardware_raid.rst @@ -106,7 +106,7 @@ More examples of input parameters: create_raid="rl#0|pci_id#1014:034a|disk_num#1" create_raid="rl#0|pci_slot_name#0001:08:00.0|disk_num#2" - #. Create two RAID arrays, RAID level is 0, one array uses disks sg0 and sg1, the other array uses diskS sg2 and sg3: :: + #. Create two RAID arrays, RAID level is 0, one array uses disks sg0 and sg1, the other array uses disks sg2 and sg3: :: create_raid="rl#0|disk_names#sg0#sg1" create_raid="rl#0|disk_names#sg2#sg3" diff --git a/docs/source/advanced/sysclone/sysclone.rst b/docs/source/advanced/sysclone/sysclone.rst index 16bc2b12e..8801fd7a1 100644 --- a/docs/source/advanced/sysclone/sysclone.rst +++ b/docs/source/advanced/sysclone/sysclone.rst @@ -126,14 +126,12 @@ For support clone, add 'otherpkglist' and 'otherpkgdir' attributes to the image Capture Image from Golden Client ```````````````````````````````` -On Management node, use xCAT command 'imgcapture' to capture an image from the golden-client. +On the Management node, use the xCAT command 'imgcapture' to capture an image from the golden-client. :: -:: imgcapture -t sysclone -o -When imgcapture is running, it pulls the image from the golden-client, and creates a image files system and a corresponding osimage definition on the xcat management node. +When imgcapture is running, it pulls the image from the golden-client, and creates an image file system and a corresponding osimage definition on the xCAT management node.
You can use the following command to check the osimage attributes. :: -:: lsdef -t osimage Install the target nodes with the image from the golden-client diff --git a/docs/source/guides/admin-guides/manage_clusters/common/deployment/deploy_os.rst b/docs/source/guides/admin-guides/manage_clusters/common/deployment/deploy_os.rst index 909dc70ca..2fca4b8de 100644 --- a/docs/source/guides/admin-guides/manage_clusters/common/deployment/deploy_os.rst +++ b/docs/source/guides/admin-guides/manage_clusters/common/deployment/deploy_os.rst @@ -11,7 +11,7 @@ There are more attributes of nodeset used for some specific purpose or specific * **runimage**: If you would like to run a task after deployment, you can define that task with this attribute. * **runcmd**: This instructs the node to boot to the xCAT nbfs environment and proceed to configure BMC for basic remote access. This causes the IP, netmask, gateway, username, and password to be programmed according to the configuration table. -* **shell**: This instructs tho node to boot to the xCAT genesis environment, and present a shell prompt on console. The node will also be able to be sshed into and have utilities such as wget, tftp, scp, nfs, and cifs. It will have storage drivers available for many common systems. +* **shell**: This instructs the node to boot to the xCAT genesis environment and present a shell prompt on the console. The node will also be able to be sshed into and will have utilities such as wget, tftp, scp, nfs, and cifs. It will have storage drivers available for many common systems. Choose such additional attribute of nodeset according to your requirement, if want to get more informantion about nodeset, refer to nodeset's man page.
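The ``shell`` state described in the deploy_os.rst hunk above is usually exercised in two steps: set the boot state, then power-cycle the node. A sketch, where the node name ``cn1`` is hypothetical:

```shell
# Point the node at the xCAT genesis environment with a console shell
# (hypothetical node name cn1), then reboot so the new target takes effect.
nodeset cn1 shell
rpower cn1 boot
# Once genesis is up, the node can be reached over ssh for ad-hoc debugging:
ssh cn1
```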
diff --git a/docs/source/guides/admin-guides/manage_clusters/common/deployment/generate_img.rst b/docs/source/guides/admin-guides/manage_clusters/common/deployment/generate_img.rst index 1cfab4ba7..910ebf581 100644 --- a/docs/source/guides/admin-guides/manage_clusters/common/deployment/generate_img.rst +++ b/docs/source/guides/admin-guides/manage_clusters/common/deployment/generate_img.rst @@ -12,7 +12,7 @@ The output should be similar to the following: :: "rhels7.1-ppc64le-stateful-mgmtnode",,"compute","linux",,"install",,"rhels7.1-ppc64le",,,"Linux","rhels7.1","ppc64le",,,,,,,, "rhels7.1-ppc64le-netboot-compute",,"compute","linux",,"netboot",,"rhels7.1-ppc64le",,,"Linux","rhels7.1","ppc64le",,,,,,,, -The ``netboot-compute`` is the default **diskless** osimage created rhels7.1 ppc64le. Run ``genimage`` to generatea diskless image based on the "rhels7.1-ppc64le-netboot-compute" definition: :: +The ``netboot-compute`` is the default **diskless** osimage created for rhels7.1 ppc64le. Run ``genimage`` to generate a diskless image based on the "rhels7.1-ppc64le-netboot-compute" definition: :: genimage rhels7.1-ppc64le-netboot-compute diff --git a/docs/source/guides/admin-guides/manage_clusters/common/parallel_cmd.rst b/docs/source/guides/admin-guides/manage_clusters/common/parallel_cmd.rst index 166bba9fc..3c9a59790 100644 --- a/docs/source/guides/admin-guides/manage_clusters/common/parallel_cmd.rst +++ b/docs/source/guides/admin-guides/manage_clusters/common/parallel_cmd.rst @@ -12,7 +12,7 @@ The following commands are provided: * ``pscp`` - parallel remote copy ( supports scp and not hierarchy) * ``psh`` - parallel remote shell ( supports ssh and not hierarchy) * ``pasu`` - parallel ASU utility - * ``xdcp`` - concurrently copies files too and from multiple nodes. ( scp/rcp and hierarchy) + * ``xdcp`` - concurrently copies files to and from multiple nodes. ( scp/rcp and hierarchy) * ``xdsh`` - concurrently runs commands on multiple nodes.
( supports ssh/rsh and hierarchy) * ``xdshbak`` - formats the output of the xdsh command * ``xcoll`` - Formats command output of the psh, xdsh, rinv command diff --git a/docs/source/guides/admin-guides/manage_clusters/ppc64le/discovery/config_environment.rst b/docs/source/guides/admin-guides/manage_clusters/ppc64le/discovery/config_environment.rst index 4873692a8..028e73d61 100644 --- a/docs/source/guides/admin-guides/manage_clusters/ppc64le/discovery/config_environment.rst +++ b/docs/source/guides/admin-guides/manage_clusters/ppc64le/discovery/config_environment.rst @@ -25,7 +25,7 @@ Set the correct NIC from which DHCP server provide service:: chdef -t site dhcpinterfaces=eth1,eth2 -Add dynamic range in purpose of assigning temporary IP adddress for FSP/BMCs and hosts:: +Add a dynamic range for assigning temporary IP addresses to FSP/BMCs and hosts:: chdef -t network 10_0_0_0-255_255_0_0 dynamicrange="10.0.100.1-10.0.100.100" chdef -t network 50_0_0_0-255_255_0_0 dynamicrange="50.0.100.1-50.0.100.100" diff --git a/docs/source/guides/admin-guides/manage_clusters/ppc64le/diskless/customize_image/index.rst b/docs/source/guides/admin-guides/manage_clusters/ppc64le/diskless/customize_image/index.rst index 1e076c7eb..c9bd53ae8 100644 --- a/docs/source/guides/admin-guides/manage_clusters/ppc64le/diskless/customize_image/index.rst +++ b/docs/source/guides/admin-guides/manage_clusters/ppc64le/diskless/customize_image/index.rst @@ -1,7 +1,7 @@ Customize osimage (Optional) ============================ -Optional means all the subitems in this page are not necessary to finish an OS deployment. If you are new to xCAT, you can just jump to `Initialize the Compute for Deployment`. +Optional means all the subitems in this page are not necessary to finish an OS deployment. If you are new to xCAT, you can just jump to :ref:`Initialize the Compute for Deployment`. .. toctree:: :maxdepth: 2
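The ``chdef`` calls in the config_environment.rst hunk above only update xCAT's tables; the DHCP configuration typically has to be regenerated before the dynamic ranges are served. A sketch of the usual follow-up:

```shell
# Rebuild dhcpd.conf from the xCAT network/site tables so the new
# dhcpinterfaces and dynamicrange settings take effect:
makedhcp -n
# Add the currently defined node entries to the DHCP server:
makedhcp -a
```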