From 76b69ad05db674ab9ca747204623c0cb9c5f816f Mon Sep 17 00:00:00 2001
From: Mark Gurevich
Date: Fri, 14 Oct 2016 15:58:09 -0400
Subject: [PATCH] Few spelling fixes in docs

---
 docs/source/advanced/docker/docker_registry.rst | 2 +-
 .../docker/dockerized_xcat/dockerized_xcat.rst | 4 ++--
 .../domain_name_resolution/domain_name_resolution.rst | 8 ++++----
 .../advanced/hamn/high_available_management_node.rst | 10 +++++-----
 docs/source/advanced/hamn/index.rst | 10 +++++-----
 docs/source/advanced/kit/index.rst | 2 +-
 .../admin-guides/basic_concepts/global_cfg/index.rst | 4 ++--
 .../basic_concepts/network_planning/index.rst | 2 +-
 .../guides/admin-guides/basic_concepts/node_type.rst | 2 +-
 .../admin-guides/basic_concepts/xcat_db/index.rst | 2 +-
 .../admin-guides/basic_concepts/xcat_object/index.rst | 2 +-
 docs/source/guides/install-guides/yum/install.rst | 2 +-
 docs/source/guides/install-guides/zypper/install.rst | 2 +-
 docs/source/help.rst | 2 +-
 docs/source/index.rst | 2 +-
 docs/source/overview/architecture.rst | 2 +-
 16 files changed, 29 insertions(+), 29 deletions(-)

diff --git a/docs/source/advanced/docker/docker_registry.rst b/docs/source/advanced/docker/docker_registry.rst
index 57553e096..e4b7db3ff 100644
--- a/docs/source/advanced/docker/docker_registry.rst
+++ b/docs/source/advanced/docker/docker_registry.rst
@@ -17,7 +17,7 @@ Setting Up Docker Registry Manually
 
 Docker registry needed to be set up on xCAT's MN.
 
-This section describes two methods of setting up docker registry manullay.
+This section describes two methods of setting up docker registry manually.
 
 First, create some folders where files for this tutorial will live. ::
 
diff --git a/docs/source/advanced/docker/dockerized_xcat/dockerized_xcat.rst b/docs/source/advanced/docker/dockerized_xcat/dockerized_xcat.rst
index db49faa1b..7f87f3e8d 100644
--- a/docs/source/advanced/docker/dockerized_xcat/dockerized_xcat.rst
+++ b/docs/source/advanced/docker/dockerized_xcat/dockerized_xcat.rst
@@ -24,7 +24,7 @@ By pulling xCAT Docker image and running xCAT Docker image in a container, you g
 xCAT Docker images
 ------------------
 
-xCAT shippes 2 Docker images for Docker host with different architecture:
+xCAT ships 2 Docker images for Docker host with different architecture:
 
 * "xcat/xcat-ubuntu-x86_64": run on x86_64 Docker host
 * "xcat/xcat-ubuntu-ppc64le": run on ppc64le Docker host
@@ -86,7 +86,7 @@ If you start up the xCAT Docker container by following the steps described in se
 Save and Restore xCAT data
 ----------------------------
 
-According to the policy of Docker, Docker image should only be the service deployment unit, it is not recommended to save data in Docker image. Docker uses "Data Volume" to save persisent data inside container, which can be simply taken as a shared directory between Docker host and Docker container.
+According to the policy of Docker, Docker image should only be the service deployment unit, it is not recommended to save data in Docker image. Docker uses "Data Volume" to save persistent data inside container, which can be simply taken as a shared directory between Docker host and Docker container.
 
 For dockerized xCAT, there are 3 volumes recommended to save and restore xCAT user data.
 
diff --git a/docs/source/advanced/domain_name_resolution/domain_name_resolution.rst b/docs/source/advanced/domain_name_resolution/domain_name_resolution.rst
index acf30517d..d8745b6ee 100644
--- a/docs/source/advanced/domain_name_resolution/domain_name_resolution.rst
+++ b/docs/source/advanced/domain_name_resolution/domain_name_resolution.rst
@@ -101,7 +101,7 @@ You must set the **forwarders** attribute in the xCAT cluster **site** definitio
 
 An xCAT **network** definition must be defined for each management network used in the cluster. The **net** and **mask** attributes will be used by the ``makedns`` command.
 
-A network **domain** and **nameservers** value must be provided either in the network definiton corresponding to the nodes or in the site definition.
+A network **domain** and **nameservers** value must be provided either in the network definition corresponding to the nodes or in the site definition.
 
 For example, if the cluster domain is **mycluster.com**, the IP address of the management node, (as known by the cluster nodes), is **100.0.0.41** and the site DNS servers are **50.1.2.254,50.1.3.254** then you would run the following command. ::
 
@@ -249,7 +249,7 @@ To use this support you must set one or more of the following node definition at
 The additional NIC information may be set by directly editing the xCAT **nics** table or by using the
 **xCAT *defs** commands to modify the node definitions. The details for how to add the additional
 information is described below. As you will see, entering this
-information manually can be tedious and error prone. This support is primarily targetted to be used in
+information manually can be tedious and error prone. This support is primarily targeted to be used in
 conjunction with other IBM products that have tools to fill in this information in an automated way.
 
 Managing additional interface information using the **xCAT *defs** commands
@@ -269,7 +269,7 @@ For example, the expanded format for the **nicips** and **nichostnamesuffixes**
     nicips.eth1=10.1.1.6
     nichostnamesuffixes.eth1=-eth1
 
-If we assume that your xCAT node name is **compute02** then this would mean that you have an additonal interface **("eth1")** and that the hostname and IP address are **compute02-eth1** and **10.1.1.6**.
+If we assume that your xCAT node name is **compute02** then this would mean that you have an additional interface **("eth1")** and that the hostname and IP address are **compute02-eth1** and **10.1.1.6**.
 
 A "|" delimiter is used to specify multiple values for an interface. For example: ::
 
@@ -285,7 +285,7 @@ For the **nicaliases** attribute a list of additional aliases may be provided. :
 
 This indicates that the **compute02-eth1** hostname would get the additional two aliases, alias1 alias2, included in the **/etc/hosts** file, (when using the ``makehosts`` command).
 
-The second line indicates that **compute02-eth2** would get the additonal alias **alias3** and that **compute02-eth-lab** would get **alias4**
+The second line indicates that **compute02-eth2** would get the additional alias **alias3** and that **compute02-eth-lab** would get **alias4**
 
 Setting individual nic attribute values
 '''''''''''''''''''''''''''''''''''''''
diff --git a/docs/source/advanced/hamn/high_available_management_node.rst b/docs/source/advanced/hamn/high_available_management_node.rst
index 4375d553b..6e0e8c434 100644
--- a/docs/source/advanced/hamn/high_available_management_node.rst
+++ b/docs/source/advanced/hamn/high_available_management_node.rst
@@ -13,15 +13,15 @@ The data synchronization is important for any high availability configuration. W
 * The configuration files for the services that are required by xCAT, like named, DHCP, apache, nfs, ssh, etc.
 * The operating systems images repository and users customization data repository, the ``/install`` directory contains these repositories in most cases.
 
-There are a lot of ways for data syncronization, but considering the specific xCAT HAMN requirements, only several of the data syncronziation options are practical for xCAT HAMN.
+There are a lot of ways for data synchronization, but considering the specific xCAT HAMN requirements, only several of the data synchronziation options are practical for xCAT HAMN.
 
 **1\. Move physical disks between the two management nodes**: if we could physically move the hard disks from the failed management node to the backup management node, and bring up the backup management node, then both the operating system and xCAT data will be identical between the new management node and the failed management node. RAID1 or disk mirroring could be used to avoid the disk be a single point of failure.
 
-**2\. Shared data**: the two management nodes use the single copy of xCAT data, no matter which management node is the primary MN, the cluster management capability is running on top of the single data copy. The access to the data could be done through various ways like shared storage, NAS, NFS, samba etc. Based on the protocol being used, the data might be accessable only on one management node at a time or be accessable on both management nodes in parellel. If the data could only be accessed from one management node, the failover process need to take care of the data access transition; if the data could be accessed on both management nodes, the failover does not need to consider the data access transition, it usually means the failover process could be faster.
+**2\. Shared data**: the two management nodes use the single copy of xCAT data, no matter which management node is the primary MN, the cluster management capability is running on top of the single data copy. The access to the data could be done through various ways like shared storage, NAS, NFS, samba etc. Based on the protocol being used, the data might be accessible only on one management node at a time or be accessible on both management nodes in parellel. If the data could only be accessed from one management node, the failover process need to take care of the data access transition; if the data could be accessed on both management nodes, the failover does not need to consider the data access transition, it usually means the failover process could be faster.
 
-Warning: Running database through network file system has a lot of potential problems and is not practical, however, most of the database system provides database replication feature that can be used to synronize the database between the two management nodes.
+Warning: Running database through network file system has a lot of potential problems and is not practical, however, most of the database system provides database replication feature that can be used to synchronize the database between the two management nodes.
 
-**3\. Mirroring**: each of the management node has its own copy of the xCAT data, and the two copies of data are syncronized through mirroring mechanism. DRBD is used widely in the high availability configuration scenarios, to provide data replication by mirroring a whole block device via network. If we put all the important data for xCAT onto the DRBD devices, then it could assure the data is synchronized between the two management nodes. Some parallel file system also provides capability to mirror data through network.
+**3\. Mirroring**: each of the management node has its own copy of the xCAT data, and the two copies of data are synchronized through mirroring mechanism. DRBD is used widely in the high availability configuration scenarios, to provide data replication by mirroring a whole block device via network. If we put all the important data for xCAT onto the DRBD devices, then it could assure the data is synchronized between the two management nodes. Some parallel file system also provides capability to mirror data through network.
 
 Manual vs. Automatic Failover
 -----------------------------
@@ -36,7 +36,7 @@ From xCAT perspective, if the management node needs to provide network services
 
 **2\. Configuration complexity**
 
-The configuration for the high availability applications is usually complex, it may take a long time to configure, debug and stablize the high availability configuration.
+The configuration for the high availability applications is usually complex, it may take a long time to configure, debug and stabilize the high availability configuration.
 
 **3\. Maintenance effort**
 
diff --git a/docs/source/advanced/hamn/index.rst b/docs/source/advanced/hamn/index.rst
index 649ac74e0..1d7bcecab 100644
--- a/docs/source/advanced/hamn/index.rst
+++ b/docs/source/advanced/hamn/index.rst
@@ -1,12 +1,12 @@
-High Avaiability
-================
+High Availability
+=================
 
-The xCAT management node plays an important role in the cluster, if the management node is down for whatever reason, the administrators will lose the management capability for the whole cluster, until the management node is back up and running. In some configuration, like the Linux nfs-based statelite in a non-hierarchy cluster, the compute nodes may not be able to run at all without the management node. So, it is important to consider the high availability for the management node.
+The xCAT management node plays an important role in the cluster, if the management node is down for whatever reason, the administrators will lose the management capability for the whole cluster, until the management node is back up and running. In some configurations, like the Linux NFSROOT-based statelite in a non-hierarchy cluster, the compute nodes may not be able to run at all without the management node. So, it is important to consider the high availability for the management node.
 
-The goal of the HAMN(High Availability Management Node) configuration is, when the primary xCAT management node fails, the standby management node can take over the role of the management node, either through automatic failover or through manual procedure performed by the administrator, and thus avoid long periods of time during which your cluster does not have active cluster management function available.
+The goal of the HAMN (High Availability Management Node) configuration is, when the primary xCAT management node fails, the standby management node can take over the role of the management node, either through automatic failover or through manual procedure performed by the administrator, and thus avoid long periods of time during which your cluster does not have active cluster management function available.
 
-The following pages describes ways to configure the xCAT Management Node for High Availbility.
+The following pages describes ways to configure the xCAT Management Node for High Availability.
 
 .. toctree::
    :maxdepth: 2
 
diff --git a/docs/source/advanced/kit/index.rst b/docs/source/advanced/kit/index.rst
index 8f1325f5a..83f4d44c1 100644
--- a/docs/source/advanced/kit/index.rst
+++ b/docs/source/advanced/kit/index.rst
@@ -5,7 +5,7 @@ xCAT supports a unique software bundling concept called **software kits**. Soft
 
 Prebuilt software kits are available as a tar file which can be downloaded and then added to the xCAT installation. After the kits are added to xCAT, kit components are then added to specific xCAT osimages to automatically install the software bundled with the kit during OS deployment. In some instances, software kits may be provided as partial kits. Partial kits need additional effort to complete the kit before it can be used by xCAT.
 
-Software kits are supported for both diskful and diskless image provisionig.
+Software kits are supported for both diskful and diskless image provisioning.
 
 .. toctree::
    :maxdepth: 2
diff --git a/docs/source/guides/admin-guides/basic_concepts/global_cfg/index.rst b/docs/source/guides/admin-guides/basic_concepts/global_cfg/index.rst
index 052836083..63cf09419 100644
--- a/docs/source/guides/admin-guides/basic_concepts/global_cfg/index.rst
+++ b/docs/source/guides/admin-guides/basic_concepts/global_cfg/index.rst
@@ -1,7 +1,7 @@
 Global Configuration
 ====================
 
-All the xCAT global configurations are stored in site table, xCAT Admin can adjust the configuration by modifing the site attibute with ``tabedit``.
+All the xCAT global configurations are stored in site table, xCAT Admin can adjust the configuration by modifying the site attribute with ``tabedit``.
 
 This section only presents some key global configurations, for the complete reference on the xCAT global configurations, please refer to the ``tabdump -d site``.
 
@@ -83,7 +83,7 @@ Install/Deployment Attributes
 
     '0': disable debug mode
     '1': enable basic debug mode
-    '2': enalbe expert debug mode
+    '2': enable expert debug mode
 
     For the details on 'basic debug mode' and 'expert debug mode', please refer to xCAT documentation.
 
diff --git a/docs/source/guides/admin-guides/basic_concepts/network_planning/index.rst b/docs/source/guides/admin-guides/basic_concepts/network_planning/index.rst
index ef87c28d2..f88967131 100644
--- a/docs/source/guides/admin-guides/basic_concepts/network_planning/index.rst
+++ b/docs/source/guides/admin-guides/basic_concepts/network_planning/index.rst
@@ -19,7 +19,7 @@ For a cluster, several networks are necessary to enable the cluster management a
 
 * DHCP(Dynamic Host Configuration Protocol)
 
-  The dhcp server, usually the management node or service node, privides the dhcp service for the entire cluster.
+  The dhcp server, usually the management node or service node, provides the dhcp service for the entire cluster.
 
 * TFTP(Trivial File Transfer Protocol)
 
diff --git a/docs/source/guides/admin-guides/basic_concepts/node_type.rst b/docs/source/guides/admin-guides/basic_concepts/node_type.rst
index 4f7a84169..5cae9ac85 100644
--- a/docs/source/guides/admin-guides/basic_concepts/node_type.rst
+++ b/docs/source/guides/admin-guides/basic_concepts/node_type.rst
@@ -1,7 +1,7 @@
 xCAT Cluster OS Running Type
 ============================
 
-Whether a node is a pyhsical server or a virtual machine, it needs to run an Operating System to support user applications. Generally, the OS is installed in the hard disk of the compute node. But xCAT also support the type that running OS in the RAM.
+Whether a node is a physical server or a virtual machine, it needs to run an Operating System to support user applications. Generally, the OS is installed in the hard disk of the compute node. But xCAT also support the type that running OS in the RAM.
 
 This section gives the pros and cons of each OS running type, and describes the cluster characteristics that will impact from each.
 
diff --git a/docs/source/guides/admin-guides/basic_concepts/xcat_db/index.rst b/docs/source/guides/admin-guides/basic_concepts/xcat_db/index.rst
index 29cfa8eb4..b6351d6ca 100644
--- a/docs/source/guides/admin-guides/basic_concepts/xcat_db/index.rst
+++ b/docs/source/guides/admin-guides/basic_concepts/xcat_db/index.rst
@@ -40,7 +40,7 @@ For a complete reference, see the man page for xcatdb: ``man xcatdb``.
 
 **Manipulate xCAT Database Tables**
 
-xCAT offers 5 commands to manipulate the databse tables:
+xCAT offers 5 commands to manipulate the database tables:
 
 * ``tabdump``
 
diff --git a/docs/source/guides/admin-guides/basic_concepts/xcat_object/index.rst b/docs/source/guides/admin-guides/basic_concepts/xcat_object/index.rst
index ea1e715da..7c11a9e0a 100644
--- a/docs/source/guides/admin-guides/basic_concepts/xcat_object/index.rst
+++ b/docs/source/guides/admin-guides/basic_concepts/xcat_object/index.rst
@@ -67,7 +67,7 @@ You can get the detail description of each object by ``man `` e.g.
 
 * **group Object**
 
-  **group** is an object which includes multiple **node object**. When you set **group** attribute for a **node object** to a group name like **x86_64**, the group **x86_64** is automatically genereated and the node is assigned to the group.
+  **group** is an object which includes multiple **node object**. When you set **group** attribute for a **node object** to a group name like **x86_64**, the group **x86_64** is automatically generated and the node is assigned to the group.
 
   The benefits of using **group object**:
 
diff --git a/docs/source/guides/install-guides/yum/install.rst b/docs/source/guides/install-guides/yum/install.rst
index f21dddef4..e5d1fab50 100644
--- a/docs/source/guides/install-guides/yum/install.rst
+++ b/docs/source/guides/install-guides/yum/install.rst
@@ -5,7 +5,7 @@ Installing xCAT
    :start-after: BEGIN_install_xcat_introduction
    :end-before: END_install_xcat_introduction
 
-xCAT is installed by configuring software repositories for ``xcat-core`` and ``xcat-dep`` and using yum package manager. The repositories can be publically hosted or locally hosted.
+xCAT is installed by configuring software repositories for ``xcat-core`` and ``xcat-dep`` and using yum package manager. The repositories can be publicly hosted or locally hosted.
 
 .. toctree::
    :maxdepth: 2
diff --git a/docs/source/guides/install-guides/zypper/install.rst b/docs/source/guides/install-guides/zypper/install.rst
index fc8a46235..2bea4105f 100644
--- a/docs/source/guides/install-guides/zypper/install.rst
+++ b/docs/source/guides/install-guides/zypper/install.rst
@@ -5,7 +5,7 @@ Installing xCAT
    :start-after: BEGIN_install_xcat_introduction
    :end-before: END_install_xcat_introduction
 
-xCAT is installed by configuring software repositories for ``xcat-core`` and ``xcat-dep`` and using zypper package manager. The repositories can be publically hosted or locally hosted.
+xCAT is installed by configuring software repositories for ``xcat-core`` and ``xcat-dep`` and using zypper package manager. The repositories can be publicly hosted or locally hosted.
 
 .. toctree::
 
diff --git a/docs/source/help.rst b/docs/source/help.rst
index abbac5ecf..15e0dde1e 100644
--- a/docs/source/help.rst
+++ b/docs/source/help.rst
@@ -9,4 +9,4 @@ For support, we encourage the use of the GitHub issues system.
 
 * `documentation `_ issues can be opened against xcat-core project
 
-The older email list is still availble: xcat-user@list.sourceforge.net
+The older email list is still available: xcat-user@list.sourceforge.net
diff --git a/docs/source/index.rst b/docs/source/index.rst
index fb662bd60..d507a22b7 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -18,7 +18,7 @@ renderfarms, online gaming infrastructure, and whatever tomorrows next buzzword
 You've reached xCAT documentation site, The main page product page is http://xcat.org
 
 **xCAT** is an open source project hosted on `GitHub `_.
-Go to GitHub to view the source, open issues, ask questions, and particpate in the project.
+Go to GitHub to view the source, open issues, ask questions, and participate in the project.
 
 Enjoy!
 
diff --git a/docs/source/overview/architecture.rst b/docs/source/overview/architecture.rst
index a0c7863a9..07004e672 100644
--- a/docs/source/overview/architecture.rst
+++ b/docs/source/overview/architecture.rst
@@ -9,7 +9,7 @@ xCAT Management Node (xCAT Mgmt Node):
   The server where xCAT software is installed and used as the single point to perform system management over the entire cluster. On this node, a database is configured to store the xCAT node definitions. Network services (dhcp, tftp, http, etc) are enabled to respond in Operating system deployment.
 
 Service Node:
-  One or more defined "slave" servers operating under the Management Node to assist in system management to reduce the load (cpu, network badnwidth) when using a single Management Node. This concept is necessary when managing very large clusters.
+  One or more defined "slave" servers operating under the Management Node to assist in system management to reduce the load (cpu, network bandwidth) when using a single Management Node. This concept is necessary when managing very large clusters.
 
 Compute Node:
   The compute nodes are the target servers which xCAT is managing.