
Merge pull request #425 from mfoliveira/patch-2

More minor documentation changes
Victor Hu 2015-11-18 13:45:13 -05:00
commit 8d47ca5097
9 changed files with 26 additions and 26 deletions

View File

@ -11,8 +11,8 @@ Service Nodes
For very large clusters, xCAT has the ability to distribute the management operations to service nodes. This allows the management node to delegate all management responsibilities for a set of compute or storage nodes to a service node so that the management node doesn't get overloaded. Although xCAT automates a lot of the aspects of deploying and configuring the services, it still adds complexity to your cluster. So the question is: at what size cluster do you need to start using service nodes? The exact answer depends on a lot of factors (mgmt node size, network speed, node type, OS, frequency of node deployment, etc.), but here are some general guidelines for how many nodes a single mgmt node (or single service node) can handle:
* **[Linux]:**
Stateful or Stateless: 500 nodes
Statelite: 250 nodes
* Stateful or Stateless: 500 nodes
* Statelite: 250 nodes
* **[AIX]:**
150 nodes

View File

@ -19,7 +19,7 @@ Traditional cluster with OS on each node's local disk.
The admin has to manage all of the individual OS copies and deal with hard disk failures. For certain applications which require all of the compute nodes to have exactly the same state, this is also challenging for the admin.
Stateless(diskless)
Stateless (diskless)
--------------------
Nodes boot from a RAMdisk OS image downloaded from the xCAT mgmt node or service node at boot time.

View File

@ -1,9 +1,9 @@
Select or Create an osimage Definition
======================================
Before creating image by xCAT, distro media should be prepared ahead. That can be ISOs or DVDs.
Before creating an image with xCAT, the distro media should be prepared in advance. That can be ISOs or DVDs.
XCAT use 'copycds' command to create image which will be available to install nodes. "copycds" will copy all contents of Distribution DVDs/ISOs or Service Pack DVDs/ISOs to a destination directory, and create several relevant osimage definitions by default.
xCAT uses the 'copycds' command to create an image which will be available to install nodes. "copycds" will copy all contents of Distribution DVDs/ISOs or Service Pack DVDs/ISOs to a destination directory, and create several relevant osimage definitions by default.
If using an ISO, copy it to (or NFS mount it on) the management node, and then run: ::
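# a sketch, assuming a hypothetical ISO path on the management node:
copycds /iso/RHEL-7.1-Server-ppc64le-dvd1.iso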
@ -21,7 +21,7 @@ To see the attributes of a particular osimage: ::
lsdef -t osimage <osimage-name>
Initially, some attributes of osimage is assigned to default value by xCAT, they all can work correctly, cause the files or templates invoked by those attributes are shipped with xCAT by default. If need to customize those attribute, refer to next section :doc:`Customize osimage </guides/admin-guides/manage_clusters/ppc64le/diskful/customize_image/index>`
Initially, some attributes of osimage are assigned default values by xCAT - they all can work correctly because the files or templates invoked by those attributes are shipped with xCAT by default. If you need to customize those attributes, refer to the next section :doc:`Customize osimage </guides/admin-guides/manage_clusters/ppc64le/diskful/customize_image/index>`
Below is an example of osimage definitions created by ``copycds``: ::
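# a sketch for a hypothetical rhels7.1 ppc64le ISO; names will vary with your distro:
# lsdef -t osimage
rhels7.1-ppc64le-install-compute  (osimage)
rhels7.1-ppc64le-netboot-compute  (osimage)
rhels7.1-ppc64le-stateful-mgmtnode  (osimage)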
@ -39,7 +39,7 @@ In these osimage definitions shown above
**Note**: Additional steps are needed for **ubuntu ppc64le** osimages:
For ubuntu ppc64le, the shipped initrd.gz within ISO is not supported to do network booting. In order to install ubuntu with xCAT, you need to follow the steps below to complete the osimage definition.
For ubuntu ppc64le, the initrd.gz shipped with the ISO does not support network booting. In order to install ubuntu with xCAT, you need to follow the steps below to complete the osimage definition.
* Download mini.iso from
@ -63,7 +63,7 @@ For ubuntu ppc64le, the shipped initrd.gz within ISO is not supported to do netw
**[Tips 1]**
If this is the same distro version as what your management node used, create a .repo file in /etc/yum.repos.d with content similar to: ::
If this is the same distro version as what your management node uses, create a .repo file in /etc/yum.repos.d with contents similar to: ::
[local-<os>-<arch>]
name=xCAT local <os> <version>
@ -71,17 +71,17 @@ If this is the same distro version as what your management node used, create a .
enabled=1
gpgcheck=0
In this way, if you need install some additional RPMs into your MN later, you can simply install them by yum. Or if you are installing a software on your MN that depends some RPMs from the this disto, those RPMs will be found and installed automatically.
In this way, if you need to install some additional RPMs into your MN later, you can simply install them with ``yum``. Or if you are installing software on your MN that depends on some RPMs from this distro, those RPMs will be found and installed automatically.
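For example, assuming a hypothetical package name, an additional RPM could then be installed straight from this local repository: ::

yum install <some-additional-rpm>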
**[Tips 2]**
Sometime you can create/modify a osimage definition easily based on the default osimage definition. the general steps can be:
You can create/modify an osimage definition easily based on the default osimage definition. The general steps are:
* lsdef -t osimage -z <os>-<arch>-install-compute > <filename>.stanza
* modify <filename>.stanza depending on your requirement
* modify <filename>.stanza according to your requirements
* cat <filename>.stanza | mkdef -z
For example, if need to change osimage name to your favorite name, below statement maybe helpful: ::
For example, if you need to change the osimage name to your favorite name, this command may be helpful: ::
lsdef -t osimage -z rhels6.2-x86_64-install-compute | sed 's/^[^ ]\+:/mycomputeimage:/' | mkdef -z

View File

@ -12,7 +12,7 @@ Normally, there will be at least two entries for the two subnet on MN in ``netwo
"10_0_0_0-255_255_0_0","10.0.0.0","255.255.0.0","eth1","<xcatmaster>",,"10.0.1.1",,,,,,,,,,,,
"50_0_0_0-255_255_0_0","50.0.0.0","255.255.0.0","eth2","<xcatmaster>",,"50.0.1.1",,,,,,,,,,,,
Pls run the following command to add networks in ``networks`` table if no entry in ``networks`` table::
Run the following command to add networks to the ``networks`` table if there are no entries in it::
makenetworks
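To verify the result, the ``networks`` table can be displayed with ``tabdump``: ::

tabdump networks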
@ -47,10 +47,10 @@ For hardware management with ipmi, add the following line::
"ipmi","ADMIN","admin",,,,
Verify the genesis pkg
Verify the genesis packages
```````````````````````````
Genesis pkg is used to **create the root image for network boot** and it **MUST** be installed before doing hardware discovery.
Genesis packages are used to **create the root image for network boot** and **MUST** be installed before doing hardware discovery.
* **[RH]**::
@ -64,4 +64,4 @@ Genesis pkg is used to **create the root image for network boot** and it **MUST*
ii xcat-genesis-base-ppc64 2.10-snap201505172314 all xCAT Genesis netboot image
ii xcat-genesis-scripts 2.10-snap201507240105 ppc64el xCAT genesis
**Note:** If the two pkgs are not installed, pls installed them first and then run ``mknb ppc64`` to create the network boot root image.
**Note:** If the two packages are not installed, install them first and then run ``mknb ppc64`` to create the network boot root image.
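For example, on RH, a sketch of the recovery steps (the RPM names here are an assumption, analogous to the Debian package names listed above): ::

yum install xCAT-genesis-base-ppc64 xCAT-genesis-scripts
mknb ppc64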

View File

@ -1,6 +1,6 @@
.. include:: ../../common/discover/manually_discovery.rst
If you have a few nodes which were not discovered by automated hardware discovery process, you could find them in ``discoverydata`` table using the nodediscoverls. The undiscovered nodes are those that have a discovery method value of 'undef' in the ``discoverydata`` table.
If you have a few nodes which were not discovered by the automated hardware discovery process, you can find them in the ``discoverydata`` table using the ``nodediscoverls`` command. The undiscovered nodes are those that have a discovery method value of 'undef' in the ``discoverydata`` table.
Display the undefined nodes with the ``nodediscoverls`` command::
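# a sketch; '-t undef' limits the listing to nodes not yet discovered:
nodediscoverls -t undef -l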

View File

@ -11,7 +11,7 @@ Discover server and define
After the environment is ready and the server is powered, we can start the server discovery process. The first thing to do is discover the FSP/BMC of the server. It is automatically powered on when the physical server is powered.
The following command can be used to discovery BMC within an IP range and write the discovered node definition into a stanza file::
The following command can be used to discover BMC(s) within an IP range and write the discovered node definition(s) into a stanza file::
bmcdiscover -s nmap --range 50.0.100.1-100 -z > ./bmc.stanza
@ -19,7 +19,7 @@ The following command can be used to discovery BMC within an IP range and write
bmcdiscover -s nmap --range 50.0.100.1-100 -z -u <username> -p <password> > ./bmc.stanza
You need to modify the node definition in stanza file before using them, the stanza file will be like this::
You need to modify the node definition(s) in the stanza file before using them; the stanza file will look like this::
# cat pbmc.stanza
cn1:

View File

@ -1,6 +1,6 @@
.. include:: ../../common/discover/seq_discovery.rst
When the physical location of the server is not so important, sequential base hardware discovery can be used to simplify the discovery work. The idea is: providing a node pool, each node in the pool will be assigned an IP address for host and an IP address for FSP/BMC, then match first came physical server discovery request to the first free node in the node pool and configure the assigned IP address for host and FSP/BMC onto that pysical server.
When the physical location of the server is not so important, sequential-based hardware discovery can be used to simplify the discovery work. The idea is: given a node pool, each node in the pool is assigned an IP address for the host and an IP address for the FSP/BMC; the first physical server discovery request is then matched to the first free node in the pool, and that node's host and FSP/BMC IP addresses are configured on the physical server.
.. include:: schedule_environment.rst
.. include:: config_environment.rst
@ -13,7 +13,7 @@ To prepare the node pool, shall predefine nodes first, then initialize the disco
Predefine nodes
```````````````
Predefine a group of node with desired IP address for host and IP address for FSP/BMC::
Predefine a group of nodes with desired IP address for host and IP address for FSP/BMC::
nodeadd cn1 groups=powerLE,all
chdef cn1 mgt=ipmi cons=ipmi ip=10.0.101.1 bmc=50.0.101.1 netboot=petitboot installnic=mac primarynic=mac
@ -25,7 +25,7 @@ Specify the predefined nodes to the nodediscoverstart command to initialize the
nodediscoverstart noderange=cn1
Pls see "nodediscoverstart man page<TBD>" for more details.
See "nodediscoverstart man page<TBD>" for more details.
Display information about the discovery process
```````````````````````````````````````````````
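As a sketch, the overall status of the discovery process can be queried with: ::

nodediscoverstatus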
@ -50,6 +50,6 @@ Note: The sequential discovery process will be stopped automatically when all of
Start discovery process
-----------------------
To start discovery process, the system administrator need to power on the servers one by one manually. Then the hardware discovery process will start automatically.
To start the discovery process, the system administrator needs to power on the servers one by one manually. Then the hardware discovery process will start automatically.
.. include:: standard_cn_definition.rst

View File

@ -1,6 +1,6 @@
.. include:: ../../common/discover/switch_discovery.rst
For switch based hardware discovery, the server are identified though the switches and switchposts they directly connect to.
For switch based hardware discovery, the servers are identified through the switches and switch ports they are directly connected to.
.. include:: schedule_environment.rst

View File

@ -99,12 +99,12 @@ As an example, get only the temperature information of a particular machine. ::
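# a sketch, assuming a hypothetical node named cn1:
rvitals cn1 temp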
Firmware Updating
`````````````````
**TODO**: For OpenPower machine, the firmware updating feature is not implement in the ``rflash`` command. The section should be updated after this feature get implemented.
**TODO**: For OpenPower machines, the firmware updating feature is not implemented in the ``rflash`` command. The section should be updated after this feature is implemented.
Configures Nodes' Service Processors
````````````````````````````````````
Here comes the command, ``rspconfig``. It is used to configure the service processor of a phyisical machine. On a OpenPower system, the service processor is the BMC, Base Motherboard Controller. Various variables can be set through the command. But, please also notice, the actual configuration may change among difference machine model type.
The ``rspconfig`` command is used to configure the service processor of a physical machine. On an OpenPower system, the service processor is the BMC (Baseboard Management Controller). Various variables can be set through the command, but note that the available configuration options may vary among different machine models.
Examples
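For instance, a sketch of querying the BMC IP address of a hypothetical node cn1: ::

rspconfig cn1 ip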