
Re-commit the man1 man pages

This commit is contained in:
Victor Hu 2015-10-23 22:41:28 -04:00
parent b6b8a5d50c
commit 5ea80e13dc
143 changed files with 28681 additions and 0 deletions


@ -0,0 +1,139 @@
########
addkit.1
########
.. highlight:: perl
****
NAME
****
\ **addkit**\ - Install a kit on the xCAT management node
********
SYNOPSIS
********
\ **addkit**\ [\ **-?**\ |\ **-h**\ |\ **--help**\ ] [\ **-v**\ |\ **--version**\ ]
\ **addkit**\ [\ **-i**\ |\ **--inspection**\ ] \ *kitlist*\
\ **addkit**\ [\ **-V**\ |\ **--verbose**\ ] [\ **-p**\ |\ **--path**\ \ *path*\ ] \ *kitlist*\
***********
DESCRIPTION
***********
The \ **addkit**\ command installs a kit on the xCAT management node from a kit tarfile or directory, creating xCAT database definitions for the kit, kitrepo, and kitcomponent objects.
Note: The xCAT support for Kits is only available for Linux operating systems.
*******
OPTIONS
*******
\ **-h|--help**\
Display usage message.
\ **-V|--verbose**\
Verbose mode.
\ **-v|--version**\
Command version.
\ **-i|--inspection**\
Show the summary of the given kits.
\ **-p|--path**\ \ *path*\
The destination directory to which the contents of the kit tarfiles and/or kit deploy directories will be copied. When this option is not specified, the default destination directory will be formed from the \ **installdir**\ site attribute with a ./kits subdirectory.
\ **kitlist**\
A comma-delimited list of kit_tarball_files and kit_deploy_dirs to be added to the xCAT cluster. Each entry can be an absolute or relative path. Each kit_tarball_file must be a valid kit tarfile. Each kit_deploy_dir must be a fully populated directory structure that is identical to the contents of an expanded kit_tarball_file.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
1. To add two kits from tarball files.
addkit kit-test1.tar.bz2,kit-test2.tar.bz2
Output is similar to:
Kit /kit/kit-test1.tar.bz2,/kit/kit-test2.tar.bz2 was successfully added.
2. To add two kits from directories.
addkit kit-test1,kit-test2
Output is similar to:
Kit /kit/kit-test1,/kit/kit-test2 was successfully added.
3. To add a kit from tarball file to /install/test directory.
addkit -p /install/test kit-test1.tar.bz2
Output is similar to:
Kit /kit/kit-test1.tar.bz2 was successfully added.
4. To read the general information of the kit, without adding the kit to the xCAT DB
addkit -i kit-test1.tar.bz2
Output is similar to:
kitname=xlc-12.1.0.0-Linux
description=XLC12 for Linux
version=12.1.0.0
ostype=Linux
********
SEE ALSO
********
lskit(1)|lskit.1, rmkit(1)|rmkit.1, addkitcomp(1)|addkitcomp.1, rmkitcomp(1)|rmkitcomp.1, chkkitcomp(1)|chkkitcomp.1


@ -0,0 +1,163 @@
############
addkitcomp.1
############
.. highlight:: perl
****
NAME
****
\ **addkitcomp**\ - Assign Kit components to an xCAT osimage.
********
SYNOPSIS
********
\ **addkitcomp**\ [\ **-?**\ |\ **-h**\ |\ **--help**\ ] [\ **-v**\ |\ **--version**\ ]
\ **addkitcomp**\ [\ **-V**\ |\ **--verbose**\ ] [\ **-a**\ |\ **--adddeps**\ ] [\ **-f**\ |\ **--force**\ ] [\ **-n**\ |\ **--noupgrade**\ ] [\ **--noscripts**\ ] \ **-i**\ \ *osimage*\ \ *kitcompname_list*\
***********
DESCRIPTION
***********
The \ **addkitcomp**\ command will assign kit components to an xCAT osimage. The kit component meta rpm, package rpm and deploy parameters will be added to the osimage's otherpkg.pkglist, and postbootscripts will be added to the osimage's postbootscripts attribute.
Note: The xCAT support for Kits is only available for Linux operating systems.
*******
OPTIONS
*******
\ **-a|--adddeps**\
Assign kitcomponent dependencies to the osimage.
\ **-h|--help**\
Display usage message.
\ **-V|--verbose**\
Verbose mode.
\ **-v|--version**\
Command version.
\ **-f|--force**\
Add the kit component to the osimage even if there is a mismatch in OS, version, arch, serverrole, or kitcompdeps.
\ **-i**\ \ *osimage*\
The name of the osimage that the kit component is to be assigned to.
\ **-n|--noupgrade**\
Allow multiple versions of the kitcomponent to be installed into the osimage, instead of upgrading the kitcomponent.
\ **--noscripts**\
Do not add the kitcomponent's postbootscripts to the osimage.
\ **kitcompname_list**\
A comma-delimited list of valid full kit component names or kit component basenames that are to be added to the osimage.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
1. To assign a kit component to osimage
addkitcomp -i rhels6.2-ppc64-netboot-compute comp-test1-1.0-1-rhels-6.2-ppc64
Output is similar to:
Assigning kit component comp-test1-1.0-1-rhels-6.2-ppc64 to osimage rhels6.2-ppc64-netboot-compute
Kit components comp-test1-1.0-1-rhels-6.2-ppc64 were added to osimage rhels6.2-ppc64-netboot-compute successfully
2. To assign a kit component to osimage with its dependency.
addkitcomp -a -i rhels6.2-ppc64-netboot-compute comp-test2-1.0-1-rhels-6.2-ppc64
Output is similar to:
Assigning kit component comp-test1-1.0-1-rhels-6.0-ppc64 to osimage rhels6.2-ppc64-netboot-compute
Assigning kit component comp-test2-1.0-1-rhels-6.2-ppc64 to osimage rhels6.2-ppc64-netboot-compute
Kit components comp-test1-1.0-1-rhels-6.0-ppc64,comp-test2-1.0-1-rhels-6.2-ppc64 were added to osimage rhels6.2-ppc64-netboot-compute successfully
3. To assign a kit component to an osimage with an incompatible osarch, osversion or ostype.
addkitcomp -f -i rhels6.2-ppc64-netboot-compute comp-test1-1.0-1-rhels-6.2-ppc64
Output is similar to:
Assigning kit component comp-test1-1.0-1-rhels-6.2-ppc64 to osimage rhels6.2-ppc64-netboot-compute
Kit components comp-test1-1.0-1-rhels-6.2-ppc64 were added to osimage rhels6.2-ppc64-netboot-compute successfully
4. To assign a new version of kit component to osimage without upgrade.
addkitcomp -n -i rhels6.2-ppc64-netboot-compute comp-test2-1.0-1-rhels-6.2-ppc64
Output is similar to:
Assigning kit component comp-test1-1.0-1-rhels-6.0-ppc64 to osimage rhels6.2-ppc64-netboot-compute
Assigning kit component comp-test2-1.0-1-rhels-6.2-ppc64 to osimage rhels6.2-ppc64-netboot-compute
Kit components comp-test2-1.0-1-rhels-6.2-ppc64 were added to osimage rhels6.2-ppc64-netboot-compute successfully
The result will be:
lsdef -t osimage rhels6.2-ppc64-netboot-compute -i kitcomponents
Object name: rhels6.2-ppc64-netboot-compute
kitcomponents=comp-test2-1.0-0-rhels-6.2-ppc64,comp-test2-1.0-1-rhels-6.2-ppc64
********
SEE ALSO
********
lskit(1)|lskit.1, addkit(1)|addkit.1, rmkit(1)|rmkit.1, rmkitcomp(1)|rmkitcomp.1, chkkitcomp(1)|chkkitcomp.1


@ -0,0 +1,225 @@
#############
bmcdiscover.1
#############
.. highlight:: perl
****
NAME
****
\ **bmcdiscover**\ - Discover BMCs using a scan method; currently the only supported scan method is nmap.
********
SYNOPSIS
********
\ **bmcdiscover**\ [\ **-h**\ |\ **--help**\ ] [\ **-v**\ |\ **--version**\ ]
\ **bmcdiscover**\ [\ **-s**\ \ *scan_method*\ ] \ **--range**\ \ *ip_ranges*\ [\ **-z**\ ] [\ **-w**\ ]
\ **bmcdiscover**\ \ **-i**\ |\ **--bmcip**\ \ *bmc_ip*\ [\ **-u**\ |\ **--bmcuser**\ \ *bmcusername*\ ] \ **-p**\ |\ **--bmcpwd**\ \ *bmcpassword*\ \ **-c**\ |\ **--check**\
\ **bmcdiscover**\ \ **-i**\ |\ **--bmcip**\ \ *bmc_ip*\ [\ **-u**\ |\ **--bmcuser**\ \ *bmcusername*\ ] \ **-p**\ |\ **--bmcpwd**\ \ *bmcpassword*\ \ **--ipsource**\
***********
DESCRIPTION
***********
The \ **bmcdiscover**\ command discovers BMCs using a scan method.
The command uses nmap to scan for active nodes; the IP range format should be the same as the format used by nmap.
Note: the scan method can only be nmap for now, and nmap is the default scan method.
This command can also check whether a BMC username or password is correct, and it can report the BMC IP address source, DHCP address or static address.
*******
OPTIONS
*******
\ **--range**\
Specify one or more IP ranges. An IP range is given as a string and can contain hostnames, IP addresses, networks, etc. Each entry can be an IP address (10.1.2.3) or an IP range (10.1.2.0/24). If the range is huge, for example 192.168.1.1/8, bmcdiscover may take a very long time to scan, so the range should be specified as precisely as possible. The nmap scan method accepts multiple formats, for example 192.168.1.1/24, 40-41.1-2.3-4.1-100, scanme.nmap.org, microsoft.com/24.
\ **-s**\
Scan method. Currently only nmap is supported, and it is the default.
\ **-z**\
List the stanza format data.
\ **-w**\
Write to the database.
\ **-i|--bmcip**\
BMC IP address.
\ **-u|--bmcuser**\
BMC user name.
\ **-p|--bmcpwd**\
BMC user password.
\ **-c|--check**\
Check whether the BMC user name and password are correct.
\ **--ipsource**\
Display the BMC IP address source.
\ **-h|--help**\
Display usage message.
\ **-v|--version**\
Command version.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
1. To discover all BMCs in an IP range
bmcdiscover -s nmap --range "10.4.23.100-254 50.3.15.1-2"
Output is similar to:
10.4.23.254
50.3.15.1
Note: the input for an IP range can also be of the form scanme.nmap.org, microsoft.com/24, 192.168.0.1, or 10.0.0-255.1-254.
2. After discovering BMCs, list the stanza format data
bmcdiscover -s nmap --range "10.4.22-23.100-254" -z
Output is similar to:
node10422254:
objtype=node
groups=all
bmc=10.4.22.254
cons=ipmi
mgt=ipmi
node10423254:
objtype=node
groups=all
bmc=10.4.23.254
cons=ipmi
mgt=ipmi
3. After discovering BMCs, write the host node definitions into the database and, at the same time, print the stanza format data
bmcdiscover -s nmap --range "10.4.22-23.100-254" -w
Output is similar to:
node10422254:
objtype=node
groups=all
bmc=10.4.22.254
cons=ipmi
mgt=ipmi
node10423254:
objtype=node
groups=all
bmc=10.4.23.254
cons=ipmi
mgt=ipmi
4. To check whether the user name or password is correct for a BMC
bmcdiscover -i 10.4.23.254 -u USERID -p PASSW0RD -c
Output is similar to:
Correct ADMINISTRATOR
bmcdiscover -i 10.4.23.254 -u USERID -p PASSW0RD1 -c
Output is similar to:
Error: Wrong bmc password
bmcdiscover -i 10.4.23.254 -u USERID1 -p PASSW0RD1 -c
Output is similar to:
Error: Wrong bmc user
bmcdiscover -i 10.4.23.2541234 -u USERID -p PASSW0RD -c
Output is similar to:
Error: Not bmc
5. To get the BMC IP address source, DHCP address or static address
bmcdiscover -i 10.4.23.254 -u USERID -p PASSW0RD --ipsource
Output is similar to:
Static Address
********
SEE ALSO
********
lsslp(1)|lsslp.1


@ -0,0 +1,288 @@
#######
cfgve.1
#######
.. highlight:: perl
****
NAME
****
\ **cfgve**\ - Configure the elements for a virtual environment.
********
SYNOPSIS
********
\ **cfgve**\ \ **-t**\ dc \ **-m**\ manager \ **-o**\ object [\ **-c**\ \ **-k**\ nfs|localfs | \ **-r**\ ]
\ **cfgve**\ \ **-t**\ cl \ **-m**\ manager \ **-o**\ object [\ **-c**\ \ **-p**\ cpu type| \ **-r**\ \ **-f**\ ]
\ **cfgve**\ \ **-t**\ sd \ **-m**\ manager \ **-o**\ object [\ **-c**\ | \ **-g**\ | \ **-s**\
| \ **-a**\ | \ **-b**\ | \ **-r**\ \ **-f**\ ]
\ **cfgve**\ \ **-t**\ nw \ **-m**\ manager \ **-o**\ object [\ **-c**\ \ **-d**\ data center \ **-n**\
vlan ID | \ **-a**\ \ **-l**\ cluster | \ **-b**\ | \ **-r**\ ]
\ **cfgve**\ \ **-t**\ tpl \ **-m**\ manager \ **-o**\ object [\ **-r**\ ]
***********
DESCRIPTION
***********
The \ **cfgve**\ command can be used to configure a virtual environment for
'Storage Domain', 'Network' and 'Template' objects.
The mandatory parameter \ **-m manager**\ is used to specify the address of the
manager of virtual environment. xCAT needs it to access the RHEV manager.
The mandatory parameter \ **-t type**\ is used to specify the type of the target
object.
Basically, \ **cfgve**\ command supports five types of object: \ **dc**\ , \ **cl**\ ,
\ **sd**\ , \ **nw**\ and \ **tpl**\ .
\ **dc**\ - The \ **create**\ and \ **remove**\ operations are supported.
\ **cl**\ - The \ **create**\ and \ **remove**\ operations are supported.
\ **sd**\ - The \ **create**\ , \ **attach**\ , \ **detach**\ , \ **activate**\ ,
\ **deactivate**\ and \ **remove**\ operations are supported.
\ **nw**\ - The \ **create**\ , \ **attach**\ , \ **detach**\ and \ **remove**\ operations are supported.
\ **tpl**\ - The \ **remove**\ operation is supported.
The mandatory parameter \ **-o object**\ is used to specify which object to configure.
*******
OPTIONS
*******
\ **-a**\ To attach the target object.
\ **-b**\ To detach the target object.
\ **-c**\ To create the target object.
When creating a \ **Storage Domain**\ , the target storage domain will be created
first, then attached to the data center and activated.
The parameters used to create the storage domain are taken
from the 'virtsd' table. The relevant columns of the virtsd table are:
\ **virtsd.node**\ - The name of the storage domain.
\ **virtsd.sdtype**\ - The type of storage domain. Valid value: data, iso, export.
Default value is 'data'.
\ **virtsd.stype**\ - The storage type. "nfs" or "localfs".
\ **virtsd.location**\ - The location of the storage.
\ **nfs**\ : Format: [nfsserver:nfspath].
The NFS export directory must be configured for read write access and must
be owned by vdsm:kvm.
\ **localfs**\ : "/data/images/rhev" is set by default.
\ **virtsd.host**\ - A host must be specified for a storage domain as the SPM
(Storage Pool Manager) when initializing the storage domain. The SPM role
may be migrated to another host by rhev-m while the data center is running
(for example, when the current SPM encounters a problem or is moved to maintenance
status).
\ **virtsd.datacenter**\ - The data center that the storage will be attached to. The 'Default' data center
is the default value.
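For illustration, a minimal sketch of populating these virtsd columns for a storage domain named sd1 (the server, path, and host names below are hypothetical; chtab is just one way to set the table columns):
.. code-block:: perl
chtab node=sd1 virtsd.sdtype=data virtsd.stype=nfs virtsd.location=nfssvr1:/export/rhev
chtab node=sd1 virtsd.host=host1 virtsd.datacenter=Default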
\ **-d**\ \ *data center*\
The name of data center.
Specify the 'Data Center' that will be used for the object to be attached to.
It is used by <nw> type.
\ **-f**\ It can be used with \ **-r**\ to remove the target object by force.
When removing a \ **Storage Domain**\ , if \ **-f**\ is specified, the storage domain will be deactivated and detached from the data center before the removal.
\ **-g**\ To activate the target object.
\ **-h**\ Display usage message.
\ **-k**\ \ *storage type*\
To specify the storage type when creating the data center.
Supported types: nfs, localfs.
\ **-l**\ \ *cluster*\
Specify the cluster for the network to attach to.
\ **-m**\ \ *manager*\
Specify the manager of the virtual environment.
For RHEV, the FQDN (Fully Qualified Domain Name) of the rhev manager has
to be specified.
\ **-n**\ \ *vlan ID*\
To specify the vlan number when creating a network.
\ **-o**\ \ *object*\
The name of the target object.
\ **-p**\ \ *cpu type*\
To specify the cpu type when creating the cluster.
\ **Intel Penryn Family**\ is default type.
Supported type: \ **Intel Conroe Family**\ , \ **Intel Penryn Family**\ ,
\ **Intel Nehalem Family**\ , \ **Intel Westmere Family**\ , \ **AMD Opteron G1**\ ,
\ **AMD Opteron G2**\ , \ **AMD Opteron G3**\
\ **-r**\ To remove the target object.
When removing a \ **Storage Domain**\ , the storage space will be formatted after the removal.
\ **-s**\ To deactivate the target object.
\ **-t**\ \ *type*\
Specify the \ **type**\ of the target object.
Supported types:
\ **dc**\ - Data Center
\ **cl**\ - Cluster
\ **sd**\ - Storage Domain
\ **nw**\ - Network
\ **tpl**\ - Template
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
1. To create the Storage Domain 'sd1', enter:
.. code-block:: perl
cfgve -t sd -m <FQDN of rhev manager> -o sd1 -c
2. To deactivate the Storage Domain 'sd1' from data center, enter:
.. code-block:: perl
cfgve -t sd -m <FQDN of rhev manager> -o sd1 -s
3. To remove the Storage Domain 'sd1', enter:
.. code-block:: perl
cfgve -t sd -m <FQDN of rhev manager> -o sd1 -r
4. To create the network 'nw1', enter:
.. code-block:: perl
cfgve -t nw -m <FQDN of rhev manager> -o nw1 -c
5. To remove the template 'tpl01', enter:
.. code-block:: perl
cfgve -t tpl -m <FQDN of rhev manager> -o tpl01 -r
*****
FILES
*****
/opt/xcat/bin/cfgve
********
SEE ALSO
********
lsve(1)|lsve.1


@ -0,0 +1,81 @@
##########
cfm2xcat.1
##########
.. highlight:: perl
****
NAME
****
\ **cfm2xcat**\ - Migrates the CFM setup in CSM to the xdcp rsync setup in xCAT.
****************
\ **SYNOPSIS**\
****************
\ **cfm2xcat**\ [\ **-i**\ \ *path of the CFM distribution files generated *\ ] [\ **-o**\ \ *path of the xdcp rsync files generated from the CFM distribution files *\ ]
\ **cfm2xcat**\ [\ **-h**\ ]
*******************
\ **DESCRIPTION**\
*******************
Copy the cfm2xcat command to the CSM Management Server. Run the command, indicating where you want your files saved with the -i and -o flags. They can be in the same directory.
The cfm2xcat command will run cfmupdatenode -a, saving the generated CFM distribution files in the directory indicated by -i. From those distribution files, it will generate xdcp rsync input files (the -F option on xdcp) in the directory indicated by -o.
Check the generated rsync files. One file (rsyncfiles) will be generated from the -o option on the command, and the same file with a (.nr) extension will be generated for each different noderange that will be used to sync files based on your CFM setup in CSM. The rsyncfiles file contains the rsync file list; the rsyncfiles.nr file contains the noderange. If there are multiple noderanges, the file name (rsyncfiles) will be appended with a number.
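For illustration only, a generated rsync file holds entries in the xdcp synclist format (source file on the left, destination path on the right), and the matching .nr file holds the noderange those entries apply to; the paths and noderange below are hypothetical:
.. code-block:: perl
# rsyncfiles (hypothetical entries in xdcp synclist format)
/cfmroot/etc/hosts -> /etc/hosts
/cfmroot/etc/passwd -> /etc/passwd
# rsyncfiles.nr (hypothetical noderange)
compute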
*******
OPTIONS
*******
\ **-h**\ Display usage message.
\ **-i**\ Path of the CFM distribution files generated from the cfmupdatenode -a command.
\ **-o**\ Path of the xdcp rsync input file generated from the CFM distribution files.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
1. To build xCAT rsync files to use with xdcp -F, enter on the CSM Management Server (make sure the path exists):
\ **cfm2xcat -i /tmp/cfm/cfmdistfiles -o /tmp/cfm/rsyncfiles**\
2. To use the files on the xCAT Management Node, copy them to /tmp/cfm on the xCAT MN and run:
\ **xdcp ^/tmp/cfm/rsyncfiles.nr -F /tmp/cfm/rsyncfiles**\
\ **xdcp ^/tmp/cfm/rsyncfiles.nr1 -F /tmp/cfm/rsyncfiles1**\
\ **xdcp ^/tmp/cfm/rsyncfiles.nr2 -F /tmp/cfm/rsyncfiles2**\
*****
FILES
*****
/opt/xcat/share/xcat/tools/cfm2xcat


@ -0,0 +1,339 @@
#######
chdef.1
#######
.. highlight:: perl
****
NAME
****
\ **chdef**\ - Change xCAT data object definitions.
********
SYNOPSIS
********
\ **chdef**\ [\ **-h**\ |\ **--help**\ ] [\ **-t**\ \ *object-types*\ ]
\ **chdef**\ [\ **-t**\ \ *object-types*\ ] [\ **-o**\ \ *object-names*\ ] [\ **-n**\ \ *new-name*\ ] [\ *node*\ ]
\ **chdef**\ [\ **-V**\ |\ **--verbose**\ ] [\ **-t**\ \ *object-types*\ ] [\ **-o**\ \ *object-names*\ ]
[\ **-d**\ |\ **--dynamic**\ ] [\ **-p**\ |\ **--plus**\ ] [\ **-m**\ |\ **--minus**\ ] [\ **-z**\ |\ **--stanza**\ ]
[[\ **-w**\ \ *attr*\ ==\ *val*\ ] [\ **-w**\ \ *attr*\ =~\ *val*\ ] ...] [\ *noderange*\ ] [\ *attr*\ =\ *val*\ [\ *attr*\ =\ *val...*\ ]]
[\ **-u**\ [\ *provmethod*\ =<\ *install*\ |\ *netboot*\ |\ *statelite*\ >] [\ *profile*\ =<xxx>] [\ *osvers*\ =\ *value*\ ] [\ *osarch*\ =\ *value*\ ]]
***********
DESCRIPTION
***********
This command is used to change xCAT object definitions which are stored in the xCAT database. The default is to replace any existing attribute value with the one specified on the command line. The command will also create a new definition if one doesn't exist.
This command can also be used to change an xCAT object name to a new name. Note: objects of type site and monitoring can NOT be renamed.
*******
OPTIONS
*******
\ *attr=val [attr=val ...]*\
Specifies one or more "attribute equals value" pairs, separated by spaces. Attr=val pairs must be specified last on the command line. Use the help option to get a list of valid attributes for each object type.
\ **-d|--dynamic**\
Use the dynamic option to change a dynamic node group definition. This option must be used with the -w option.
\ **-h|--help**\
Display usage message.
\ **-m|--minus**\
If the value of the attribute is a list then this option may be used to remove one or more items from the list.
\ **-n**\ \ *new-name*\
Change the current object name to the new-name which is specified by the -n option.
Objects of type site and monitoring cannot be renamed with the -n option.
Note: For the \ **-n**\ option, only one node can be specified. For some special nodes such as fsp, bpa, frame, cec, etc., their name is referenced in their own hcp attribute, or in the hcp attribute of other nodes. If you use the \ **-n**\ option, you must manually change all hcp attributes that refer to this name.
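For instance, a minimal sketch of that manual follow-up (the old and new names below are hypothetical): list the nodes whose hcp attribute still references the old name, then update them:
.. code-block:: perl
lsdef -t node -w hcp==oldcec -i hcp
chdef -t node -o lpar1-lpar8 hcp=newcec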
\ *noderange*\
A set of comma delimited node names and/or group names. (must be the first parameter) See the "noderange" man page for details on supported formats.
\ **-o**\ \ *object-names*\
A set of comma delimited object names.
\ **-p|--plus**\
This option will add the specified values to the existing value of the attribute. It will create a comma-separated list of values.
\ **-t**\ \ *object-types*\
A set of comma delimited object types. Use the help option to get a list of valid object types.
\ **-V|--verbose**\
Verbose mode.
\ **-w**\ \ *attr==val*\ \ **-w**\ \ *attr=~val*\ ...
Use one or multiple -w flags to specify the selection string that can be used to select objects. The operators ==, !=, =~ and !~ are available. Use the help option to get a list of valid attributes for each object type.
Operator descriptions:
== Select nodes where the attribute value is exactly this value.
!= Select nodes where the attribute value is not this specific value.
=~ Select nodes where the attribute value matches this regular expression.
!~ Select nodes where the attribute value does not match this regular expression.
Note: the operator !~ will be interpreted by the shell; if you want to use !~ in the selection string, enclose it in single quotes. For example: -w 'mgt!~ipmi'.
\ **-z|--stanza**\
Indicates that the file being piped to the command is in stanza format. See the xcatstanzafile man page for details on using xCAT stanza files.
\ **-u**\
Fill in attributes such as the template file, pkglist file and otherpkglist file of an osimage object based on the specified parameters. It will search the "/install/custom/" directory first, and then "/opt/xcat/share/".
Note: this option only works for objtype \ **osimage**\ .
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
1.
To change a site definition.
.. code-block:: perl
chdef -t site -o clustersite installdir=/xcatinstall
2.
To change a basic node definition.
.. code-block:: perl
chdef -t node -o node01 groups="all,aix"
(The group definitions are also created if they don't already exist.)
3.
To add another group to the "groups" attribute in the previous example.
.. code-block:: perl
chdef -p -t node -o node01 groups="compute"
4.
To remove the "all" group from the "groups" attribute in the previous example.
.. code-block:: perl
chdef -m -t node -o node01 groups="all"
5.
To replace the current "groups" attribute value of "node01".
.. code-block:: perl
chdef -t node -o node01 groups="linux"
6.
To add "node01" to the "members" attribute of a group definition called "LinuxNodes".
.. code-block:: perl
chdef -p -t group -o LinuxNodes members="node01"
7.
To update a set of definitions based on information contained in the stanza file mystanzafile.
.. code-block:: perl
cat mystanzafile | chdef -z
8.
To update a dynamic node group definition to add the cons=hmc wherevals pair.
.. code-block:: perl
chdef -t group -o dyngrp -d -p -w cons==hmc
9.
To change the node object name from node1 to node2.
.. code-block:: perl
chdef -t node -o node1 -n node2
10.
To change the node hwtype, this command will change the value of ppc.nodetype.
.. code-block:: perl
chdef -t node -o node1 hwtype=lpar
11.
To change the policy table for policy number 7.0 for admin1
.. code-block:: perl
chdef -t policy -o 7.0 name=admin1 rule=allow
12.
To change the node nic attributes
.. code-block:: perl
chdef -t node -o cn1 nicips.eth0="1.1.1.1|1.2.1.1" nicnetworks.eth0="net1|net2" nictypes.eth0="Ethernet"
13.
To update an osimage definition.
.. code-block:: perl
chdef redhat6img -u provmethod=install
*****
FILES
*****
$XCATROOT/bin/chdef
(The XCATROOT environment variable is set when xCAT is installed. The
default value is "/opt/xcat".)
*****
NOTES
*****
This command is part of the xCAT software product.
********
SEE ALSO
********
mkdef(1)|mkdef.1, lsdef(1)|lsdef.1, rmdef(1)|rmdef.1, xcatstanzafile(5)|xcatstanzafile.5


@ -0,0 +1,386 @@
##############
chhypervisor.1
##############
.. highlight:: perl
****
NAME
****
\ **chhypervisor**\ - Configure the virtualization hosts.
********
SYNOPSIS
********
\ **RHEV specific :**\
\ **chhypervisor**\ \ *noderange*\ [\ **-a**\ ]
\ **chhypervisor**\ \ *noderange*\ [\ **-n**\ ]
\ **chhypervisor**\ \ *noderange*\ [\ **-p**\ ]
\ **chhypervisor**\ \ *noderange*\ [\ **-e**\ ]
\ **chhypervisor**\ \ *noderange*\ [\ **-d**\ ]
\ **zVM specific :**\
\ **chhypervisor**\ \ *noderange*\ [\ **--adddisk2pool**\ \ *function*\ \ *region*\ \ *volume*\ \ *group*\ ]
\ **chhypervisor**\ \ *noderange*\ [\ **--addscsi**\ \ *device_number*\ \ *device_path*\ \ *option*\ \ *persist*\ ]
\ **chhypervisor**\ \ *noderange*\ [\ **--addvlan**\ \ *name*\ \ *owner*\ \ *type*\ \ *transport*\ ]
\ **chhypervisor**\ \ *noderange*\ [\ **--addvswitch**\ \ *name*\ \ *osa_dev_addr*\ \ *osa_exp_adapter*\ \ *controller*\ \ *connect (0, 1, or 2)*\ \ *memory_queue*\ \ *router*\ \ *transport*\ \ *vlan_id*\ \ *port_type*\ \ *update*\ \ *gvrp*\ \ *native_vlan*\ ]
\ **chhypervisor**\ \ *noderange*\ [\ **--addzfcp2pool**\ \ *pool*\ \ *status*\ \ *wwpn*\ \ *lun*\ \ *size*\ \ *owner*\ ]
\ **chhypervisor**\ \ *noderange*\ [\ **--removediskfrompool**\ \ *function*\ \ *region*\ \ *group*\ ]
\ **chhypervisor**\ \ *noderange*\ [\ **--removescsi**\ \ *device_number*\ \ *persist (YES or NO)*\ ]
\ **chhypervisor**\ \ *noderange*\ [\ **--removevlan**\ \ *name*\ \ *owner*\ ]
\ **chhypervisor**\ \ *noderange*\ [\ **--removevswitch**\ \ *name*\ ]
\ **chhypervisor**\ \ *noderange*\ [\ **--removezfcpfrompool**\ \ *pool*\ \ *lun*\ \ *wwpn*\ ]
\ **chhypervisor**\ \ *noderange*\ [\ **--smcli**\ \ *function*\ \ *arguments*\ ]
***********
DESCRIPTION
***********
The \ **chhypervisor**\ command can be used to configure the RHEV-h.
The rhev-h host will register with the rhev-m automatically, but the admin needs to
approve the host before it can be added to the 'cluster', using the \ **-a**\ flag.
After registering, the network interfaces of the host need to be added to the 'network' of
RHEV, and the power management for the host should be configured so that
rhev-m can make the proper decision when a host encounters an error.
The \ **chhypervisor**\ command can also be used to configure the zVM host.
For each host, an entry should be added to the hypervisor table:
The columns of hypervisor table:
\ **hypervisor.node**\ - rhev-h host name or zVM host name (lower-case).
\ **hypervisor.type**\ - Must be set to 'rhevh' or 'zvm'.
\ **hypervisor.mgr**\ - The rhev manager (The FQDN of rhev-m server) for the host.
\ **hypervisor.interface**\ - The configuration for the nics. Refer to \ **-n**\ .
\ **hypervisor.cluster**\ - The cluster that the host will be added to. The
default is 'Default' cluster if not specified.
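For illustration, a minimal sketch of filling in these columns for one rhev-h host (the host name, manager FQDN, and interface values below are hypothetical; chtab is just one way to set the table columns):
.. code-block:: perl
chtab node=host1 hypervisor.type=rhevh hypervisor.mgr=rhevm.example.com hypervisor.cluster=Default
chtab node=host1 hypervisor.interface="[rhevm:eth0:static:10.1.0.236:255.255.255.0:10.1.0.1]"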
*******
OPTIONS
*******
RHEV specific :
===============
\ **-a**\ Approve the host to be added to the cluster.
Before approving, the status of the host must be 'pending_approval'.
\ **-n**\ Configure the network interfaces for the host.
Note: This operation can only be run when the host is in 'maintenance mode'.
Use \ **-d**\ to switch the host to 'maintenance' mode.
The interfaces configured in hypervisor.interface will be added
to the network of RHEV.
The format of hypervisor.interface is multiple [network:interfacename:
protocol:IP:netmask:gateway] sections separated with '|'. For example:
[rhevm2:eth0:static:10.1.0.236:255.255.255.0:0.0.0.0].
\ **network**\ - The logic network which has been created by 'cfgve -t nw'
or the default management network 'rhevm'.
\ **interfacename**\ - Physical network name: 'eth0','eth1'...
\ **protocol**\ - To identify which boot protocol to use for the interface: dhcp
or static.
\ **IP**\ - The IP address for the interface.
\ **netmask**\ - The network mask for the interface.
\ **gateway**\ - The gateway for the interface. This field can only be set when
the interface is added to the 'rhevm' network.
\ **-p**\ Configure the power management for the host.
The power management must be configured for the rhev-h host so that
rhev-m can monitor the power status of the host, and so that when a host
fails to function, rhev-m can fail over roles such as SPM to another active host.
For a rack mounted server, the bmc IP and user:password need to be set for the
power management (these parameters are taken from the ipmi table). rhev-m uses the
ipmi protocol to get the power status of the host.
\ **-e**\ To activate the host.
\ **-d**\ To deactivate the host to maintenance mode.
\ **-h**\ Display usage message.
zVM specific :
==============
\ **--adddisk2pool**\ \ *function*\ \ *region*\ \ *volume*\ \ *group*\
Add a disk to a disk pool defined in the EXTENT CONTROL. Function type can be
either: (4) Define region as full volume and add to group OR (5) Add existing
region to group. If the volume already exists in the EXTENT CONTROL, use
function 5. If the volume does not exist in the EXTENT CONTROL, but is attached
to SYSTEM, use function 4.
\ **--addscsi**\ \ *device_number*\ \ *device_path*\ \ *option*\ \ *persist*\
Dynamically add a SCSI disk to a running z/VM system.
\ **--addvlan**\ \ *name*\ \ *owner*\ \ *type*\ \ *transport*\
Create a virtual network LAN.
\ **--addvswitch**\ \ *name*\ \ *osa_dev_addr*\ \ *osa_exp_adapter*\ \ *controller*\ \ *connect (0, 1, or 2)*\ \ *memory_queue*\ \ *router*\ \ *transport*\ \ *vlan_id*\ \ *port_type*\ \ *update*\ \ *gvrp*\ \ *native_vlan*\
Create a virtual switch.
\ **--addzfcp2pool**\ \ *pool*\ \ *status*\ \ *wwpn*\ \ *lun*\ \ *size*\ \ *owner*\
Add a zFCP device to a device pool defined in xCAT. The device must have been
carved up in the storage controller and configured with a WWPN/LUN before it
can be added to the xCAT storage pool. z/VM does not have the ability to
communicate directly with the storage controller to carve up disks dynamically.
\ **--removediskfrompool**\ \ *function*\ \ *region*\ \ *group*\
Remove a disk from a disk pool defined in the EXTENT CONTROL. Function type can
be either: (1) Remove region, (2) Remove region from group, (3) Remove region
from all groups, OR (7) Remove the entire group.
\ **--removescsi**\ \ *device_number*\ \ *persist (YES or NO)*\
Delete a real SCSI disk.
\ **--removevlan**\ \ *name*\ \ *owner*\
Delete a virtual network LAN.
\ **--removevswitch**\ \ *name*\
Delete a virtual switch.
\ **--removezfcpfrompool**\ \ *pool*\ \ *lun*\
Remove a zFCP device from a device pool defined in xCAT.
\ **--smcli**\ \ *function*\ \ *arguments*\
Execute a SMAPI function. A list of APIs supported can be found by using the
help flag, e.g. chhypervisor pokdev61 --smcli -h. Specific arguments associated
with a SMAPI function can be found by using the help flag for the function,
e.g. chhypervisor pokdev61 --smcli Image_Query_DM -h. Only z/VM 6.2 and older
SMAPI functions are supported at this time. Additional SMAPI functions will be
added in subsequent zHCP versions.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
RHEV specific :
===============
1. To approve the host 'host1', enter:
.. code-block:: perl
chhypervisor host1 -a
2. To configure the network interface for the host 'host1', enter:
.. code-block:: perl
chhypervisor host1 -n
3. To configure the power management for the host 'host1', enter:
.. code-block:: perl
chhypervisor host1 -p
4. To activate the host 'host1', enter:
.. code-block:: perl
chhypervisor host1 -e
5. To deactivate the host 'host1', enter:
.. code-block:: perl
chhypervisor host1 -d
zVM specific :
==============
1. To add a disk to a disk pool defined in the EXTENT CONTROL, enter:
.. code-block:: perl
chhypervisor pokdev61 --adddisk2pool 4 DM1234 DM1234 POOL1
2. To add a zFCP device to a device pool defined in xCAT, enter:
.. code-block:: perl
chhypervisor pokdev61 --addzfcp2pool zfcp1 free 500501234567C890 4012345600000000 8G
3. To remove a region from a group in the EXTENT CONTROL, enter:
.. code-block:: perl
chhypervisor pokdev61 --removediskfrompool 2 DM1234 POOL1
4. To remove a zFCP device from a device pool defined in xCAT, enter:
.. code-block:: perl
chhypervisor pokdev61 --removezfcpfrompool zfcp1 4012345600000000 500501234567C890
5. To execute a SMAPI function (Image_Query_DM), enter:
.. code-block:: perl
chhypervisor pokdev61 --smcli Image_Query_DM -T LNX3
*****
FILES
*****
/opt/xcat/bin/chhypervisor


@ -0,0 +1,106 @@
############
chkkitcomp.1
############
.. highlight:: perl
****
NAME
****
\ **chkkitcomp**\ - Check if Kit components are compatible with an xCAT osimage.
********
SYNOPSIS
********
\ **chkkitcomp**\ [\ **-?**\ |\ **-h**\ |\ **--help**\ ] [\ **-v**\ |\ **--version**\ ]
\ **chkkitcomp**\ [\ **-V**\ |\ **--verbose**\ ] \ **-i**\ \ *osimage*\ \ *kitcompname_list*\
***********
DESCRIPTION
***********
The \ **chkkitcomp**\ command will check if the kit components are compatible with the xCAT osimage.
This command ignores the current osimage.kitcomponents setting and just checks whether the kitcompname list given on the command line is compatible with the osimage, based on osversion/ostype/osarch and kit component dependencies.
Note: The xCAT support for Kits is only available for Linux operating systems.
*******
OPTIONS
*******
\ **-h|--help**\
Display usage message.
\ **-V|--verbose**\
Verbose mode.
\ **-v|--version**\
Command version.
\ **-i**\ \ *osimage*\
The name of the osimage to check against.
\ **kitcompname_list**\
A comma-delimited list of valid full kit component names or kit component basenames that are to be checked against the osimage.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
1. To check if a kit component fits an osimage
chkkitcomp -i rhels6.2-ppc64-netboot-compute comp-test1-1.0-1-rhels-6.2-ppc64
Output is similar to:
Kit components comp-test1-1.0-1-rhels-6.2-ppc64 fit to osimage rhels6.2-ppc64-netboot-compute
********
SEE ALSO
********
lskit(1)|lskit.1, addkit(1)|addkit.1, rmkit(1)|rmkit.1, addkitcomp(1)|addkitcomp.1, rmkitcomp(1)|rmkitcomp.1


@ -0,0 +1,164 @@
############
chkosimage.1
############
.. highlight:: perl
****
NAME
****
\ **chkosimage**\ - Use this xCAT command to check an xCAT osimage.
********
SYNOPSIS
********
\ **chkosimage [-h | --help ]**\
\ **chkosimage [-V] [-c|--clean] osimage_name**\
***********
DESCRIPTION
***********
This command is currently supported for AIX osimages only.
Use this command to verify if the NIM lpp_source directories contain the
correct software. The lpp_source directory must contain all the software
that is specified in the "installp_bundle" and "otherpkgs"
attributes of the osimage definition.
The command gets the name of the lpp_source resource from the xCAT osimage
definition and the location of the lpp_source directory from the NIM resource
definition.
It will check for installp, rpm and emgr type packages.
Note: Remember to use the prefixes, "I:", "R:", and "E:", respectively,
when specifying package names in an installp_bundle file or an otherpkgs list.
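For illustration, a hypothetical otherpkgs list showing the three prefixes (the package names below are made up):
.. code-block:: perl
I:openssh.base.server
R:expect-5.42.1-3.aix6.1.ppc.rpm
E:IZ38930.120304.epkg.Z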
In addition to checking for missing software the chkosimage command will
also check to see if there are multiple matches. This could happen
when you use wildcards in the software file names. For example, if you
have perl-xCAT\* in a bundle file it could match multiple versions of the xCAT
rpm package saved in your lpp_source directory.
If this happens you must remove the unwanted versions of the rpms. If the
extra rpms are not removed you will get install errors.
To help with this process you can use the "-c|--clean" option. This
option will keep the rpm package with the most recent timestamp and
remove the others.
The chkosimage command should always be used to verify the lpp_source content
before using the osimage to install any AIX cluster nodes.
*******
OPTIONS
*******
\ **-c |--clean**\
Remove any older versions of the rpms. Keep the version with the latest
timestamp.
\ **-h |--help**\
Display usage message.
\ **osimage_name**\
The name of the xCAT for AIX osimage definition.
\ **-V |--verbose**\
Verbose mode.
************
RETURN VALUE
************
0
The command completed successfully.
1
An error has occurred.
********
EXAMPLES
********
1
Check the xCAT osimage called "61image" to verify that the lpp_source
directories contain all the software that is specified in the
"installp_bundle" and "otherpkgs" attributes.
\ **chkosimage -V 61image**\
2
Clean up the lpp_source directory for the osimage named "61img" by removing
any older rpms with the same names but different versions.
\ **chkosimage -c 61img**\
*****
FILES
*****
/opt/xcat/bin/chkosimage
*****
NOTES
*****
This command is part of the xCAT software product.
********
SEE ALSO
********
mknimimage(1)|mknimimage.1


@ -0,0 +1,177 @@
########
chvlan.1
########
.. highlight:: perl
****
NAME
****
\ **chvlan**\ - Adds or removes nodes for a VLAN.
********
SYNOPSIS
********
\ **chvlan**\ \ *vlanid*\ \ **-n**\ |\ **--nodes**\ \ *noderange*\ [\ **-i**\ |\ **--interface**\ \ *nic*\ ]
\ **chvlan**\ \ *vlanid*\ \ **-n**\ |\ **--nodes**\ \ *noderange*\ \ **-d**\ |\ **--delete**\
\ **chvlan**\ [\ **-h**\ |\ **--help**\ ]
\ **chvlan**\ [\ **-v**\ |\ **--version**\ ]
***********
DESCRIPTION
***********
The \ **chvlan**\ command adds nodes to the given vlan. If -d is specified, the nodes will be removed from the vlan.
For added security, the root guard and bpdu guard will be enabled for the ports added to this vlan. However, the guards will not be disabled if the ports are removed from the vlan using chvlan (-d) or rmvlan commands. To disable them, you need to use the switch command line interface. Please refer to the switch command line interface manual to see how to disable the root guard and bpdu guard for a port.
**********
Parameters
**********
\ *vlanid*\ is a unique vlan number.
*******
OPTIONS
*******
\ **-n|--nodes**\ The nodes or groups to be added or removed. They can be stand-alone nodes or KVM guests. It takes the noderange format. Please check the noderange man page for details.
\ **-i|--interface**\ (For adding only.) The interface on which the vlan will be tagged. If omitted, the xCAT management network will be assumed. For KVM, it is the interface name on the host.
\ **-h|--help**\ Display usage message.
\ **-v|--version**\ The Command Version.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
1.
To add node1, node2 and node3 to vlan 3.
.. code-block:: perl
chvlan 3 -n node1,node2,node3
2.
To add node1, node2 and node3 to vlan 3 using eth1 interface.
.. code-block:: perl
chvlan 3 -n node1,node2,node3 -i eth1
3.
To remove node1, node2 and node3 from vlan 3.
.. code-block:: perl
chvlan 3 -n node1,node2,node3 -d
4.
To add KVM guests node1 and node2 to vlan 3
.. code-block:: perl
mkdef node1 arch=x86_64 groups=kvm,all installnic=mac primarynic=mac mgt=kvm netboot=pxe nfsserver=10.1.0.204 os=rhels6 profile=compute provmethod=install serialport=0 serialspeed=115200 vmcpus=1 vmhost=x3650n01 vmmemory=512 vmnics=br0 vmstorage=nfs://10.1.0.203/vms
mkdef node2 arch=x86_64 groups=kvm,all installnic=mac primarynic=mac mgt=kvm netboot=pxe nfsserver=10.1.0.204 os=rhels6 profile=compute provmethod=install serialport=0 serialspeed=115200 vmcpus=1 vmhost=x3650n01 vmmemory=512 vmnics=br0 vmstorage=nfs://10.1.0.203/vms
chvlan 3 -n node1,node2
mkvm node1,node2 -s 20G
rpower node1,node2 on
rinstall node1,node2
5.
To remove KVM guests node1 and node2 from vlan 3
.. code-block:: perl
chvlan 3 -n node1,node2 -d
rpower node1,node2 off
rmvm node1,node2
*****
FILES
*****
/opt/xcat/bin/chvlan
********
SEE ALSO
********
mkvlan(1)|mkvlan.1, rmvlan(1)|rmvlan.1, lsvlan(1)|lsvlan.1


@ -0,0 +1,122 @@
#############
chvlanports.1
#############
.. highlight:: perl
****
NAME
****
\ **chvlanports**\ - Adds or removes nodes' switch interfaces for a VLAN.
********
SYNOPSIS
********
\ **chvlanports**\ \ *vlanid*\ \ **-n**\ |\ **--nodes**\ \ *noderange*\ \ **-i**\ |\ **--interface**\ \ *nic*\
\ **chvlanports**\ \ *vlanid*\ \ **-n**\ |\ **--nodes**\ \ *noderange*\ \ **-i**\ |\ **--interface**\ \ *nic*\ \ **-d**\ |\ **--delete**\
\ **chvlanports**\ [\ **-h**\ |\ **--help**\ ]
\ **chvlanports**\ [\ **-v**\ |\ **--version**\ ]
***********
DESCRIPTION
***********
The \ **chvlanports**\ command adds nodes switch interfaces to the given vlan. If -d is specified, the nodes switch interfaces will be removed from the vlan.
This command won't create or remove vlans on switches; it just adds nodes' switch ports into an existing vlan, or removes them from an existing vlan, on the switch. Before calling chvlanports, the nodes' switch interfaces should be configured in the switch table, and the vlan must already exist on the switches.
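For illustration, a minimal sketch of the prerequisite switch table entry for one node (the node, switch, port, and interface names below are hypothetical; chtab is just one way to set the table columns):
.. code-block:: perl
chtab node=node1 switch.switch=switch1 switch.port=10 switch.interface=eth1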
**********
Parameters
**********
\ *vlanid*\ is a unique vlan number.
*******
OPTIONS
*******
\ **-n|--nodes**\ The nodes or groups to be added or removed. It takes the noderange format. Please check the man page for noderange for details.
\ **-i|--interface**\ The interface name where the vlan will be tagged on.
\ **-h|--help**\ Display usage message.
\ **-v|--version**\ The Command Version.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
1.
To add node1, node2 and node3 to vlan 3 using eth1 interface.
.. code-block:: perl
chvlanports 3 -n node1,node2,node3 -i eth1
2.
2. To remove the eth1 interface of node1, node2 and node3 from vlan 3.
.. code-block:: perl
chvlanports 3 -n node1,node2,node3 -i eth1 -d
*****
FILES
*****
/opt/xcat/bin/chvlanports
********
SEE ALSO
********
mkvlan(1)|mkvlan.1, rmvlan(1)|rmvlan.1, lsvlan(1)|lsvlan.1, chvlan(1)|chvlan.1


@ -0,0 +1,949 @@
######
chvm.1
######
.. highlight:: perl
****
NAME
****
\ **chvm**\ - Changes HMC-, DFM-, IVM-, and zVM-managed partition profiles or virtual machines. For Power 775, chvm could be used to change the octant configuration values for generating LPARs; change the I/O slots assignment to LPARs within the same CEC.
********
SYNOPSIS
********
\ **chvm**\ [\ **-h**\ | \ **--help**\ ]
\ **chvm**\ [\ **-v**\ | \ **--version**\ ]
PPC (with HMC) specific:
========================
\ **chvm**\ [\ **-V**\ | \ **--verbose**\ ] \ *noderange*\ [\ **-p**\ \ *profile*\ ]
\ **chvm**\ [\ **-V**\ | \ **--verbose**\ ] \ *noderange*\ \ *attr*\ =\ *val*\ [\ *attr*\ =\ *val*\ ...]
PPC (using Direct FSP Management) specific:
===========================================
\ **chvm**\ \ *noderange*\ \ *--p775*\ [\ **-p**\ \ *profile*\ ]
\ **chvm**\ \ *noderange*\ \ *--p775*\ \ **-i id**\ [\ **-m**\ \ *memory_interleaving*\ ] \ **-r**\ \ *partition_rule*\
\ **chvm**\ \ *noderange*\ [\ **lparname**\ ={\ **\\***\ |\ **name**\ }]
\ **chvm**\ \ *noderange*\ [\ **vmcpus=min/req/max**\ ] [\ **vmmemory=min/req/max**\ ]
[\ **vmothersetting=hugepage:N,bsr:N**\ ]
[\ **add_physlots=drc_index1,drc_index2...**\ ]
[\ **add_vmnics=vlan1[,vlan2..]]**\ [\ **add_vmstorage=<N|viosnode:slotid**\ >] [\ **--vios**\ ]
[\ **del_physlots=drc_index1,drc_index2...**\ ]
[\ **del_vadapter=slotid**\ ]
VMware/KVM specific:
====================
\ **chvm**\ \ *noderange*\ [\ **-a**\ \ *size*\ ] [\ **-d**\ \ *disk*\ ] [\ **-p**\ \ *disk*\ ] [\ **--resize**\ \ **disk**\ =\ *size*\ ] [\ **--cpus**\ \ *count*\ ] [\ **--mem**\ \ *memory*\ ]
zVM specific:
=============
\ **chvm**\ \ *noderange*\ [\ **--add3390**\ \ *disk_pool*\ \ *device_address*\ \ *size*\ \ *mode*\ \ *read_password*\ \ *write_password*\ \ *multi_password*\ ]
\ **chvm**\ \ *noderange*\ [\ **--add3390active**\ \ *device_address*\ \ *mode*\ ]
\ **chvm**\ \ *noderange*\ [\ **--add9336**\ \ *disk_pool*\ \ *device_address*\ \ *size*\ \ *mode*\ \ *read_password*\ \ *write_password*\ \ *multi_password*\ ]
\ **chvm**\ \ *noderange*\ [\ **--adddisk2pool**\ \ *function*\ \ *region*\ \ *volume*\ \ *group*\ ]
\ **chvm**\ \ *noderange*\ [\ **--addnic**\ \ *device_address*\ \ *type*\ \ *device_count*\ ]
\ **chvm**\ \ *noderange*\ [\ **--addpagespool**\ \ *volume_address*\ \ *volume_label*\ \ *volume_use*\ \ *system_config_name*\ \ *system_config_type*\ \ *parm_disk_owner*\ \ *parm_disk_number*\ \ *parm_disk_password*\ ]
\ **chvm**\ \ *noderange*\ [\ **--addprocessor**\ \ *device_address*\ ]
\ **chvm**\ \ *noderange*\ [\ **--addprocessoractive**\ \ *device_address*\ \ *type*\ ]
\ **chvm**\ \ *noderange*\ [\ **--addvdisk**\ \ *device_address*\ \ *size*\ ]
\ **chvm**\ \ *noderange*\ [\ **--addzfcp**\ \ *pool*\ \ *device_address*\ \ *loaddev*\ \ *size*\ \ *tag*\ \ *wwpn*\ \ *lun*\ ]
\ **chvm**\ \ *noderange*\ [\ **--connectnic2guestlan**\ \ *device_address*\ \ *lan*\ \ *owner*\ ]
\ **chvm**\ \ *noderange*\ [\ **--connectnic2vswitch**\ \ *device_address*\ \ *vswitch*\ ]
\ **chvm**\ \ *noderange*\ [\ **--copydisk**\ \ *target_address*\ \ *source_node*\ \ *source_address*\ ]
\ **chvm**\ \ *noderange*\ [\ **--dedicatedevice**\ \ *virtual_device*\ \ *real_device*\ \ *mode*\ ]
\ **chvm**\ \ *noderange*\ [\ **--deleteipl**\ ]
\ **chvm**\ \ *noderange*\ [\ **--disconnectnic**\ \ *device_address*\ ]
\ **chvm**\ \ *noderange*\ [\ **--formatdisk**\ \ *device_address*\ \ *multi_password*\ ]
\ **chvm**\ \ *noderange*\ [\ **--grantvswitch**\ \ *vswitch*\ ]
\ **chvm**\ \ *noderange*\ [\ **--purgerdr**\ ]
\ **chvm**\ \ *noderange*\ [\ **--removedisk**\ \ *device_address*\ ]
\ **chvm**\ \ *noderange*\ [\ **--removenic**\ \ *device_address*\ ]
\ **chvm**\ \ *noderange*\ [\ **--removeprocessor**\ \ *device_address*\ ]
\ **chvm**\ \ *noderange*\ [\ **--removeloaddev**\ \ *wwpn*\ \ *lun*\ ]
\ **chvm**\ \ *noderange*\ [\ **--removezfcp**\ \ *device_address*\ \ *wwpn*\ \ *lun*\ ]
\ **chvm**\ \ *noderange*\ [\ **--replacevs**\ \ *directory_entry*\ ]
\ **chvm**\ \ *noderange*\ [\ **--setipl**\ \ *ipl_target*\ \ *load_parms*\ \ *parms*\ ]
\ **chvm**\ \ *noderange*\ [\ **--setpassword**\ \ *password*\ ]
\ **chvm**\ \ *noderange*\ [\ **--setloaddev**\ \ *wwpn*\ \ *lun*\ ]
\ **chvm**\ \ *noderange*\ [\ **--sharevolume**\ \ *volume_address*\ \ *share_enable*\ ]
\ **chvm**\ \ *noderange*\ [\ **--undedicatedevice**\ \ *device_address*\ ]
***********
DESCRIPTION
***********
PPC (with HMC) specific:
========================
The chvm command modifies the partition profile for the partitions specified in noderange. A partition's current profile can be read using lsvm, modified, and piped into the chvm command, or changed with the -p flag.
This command also supports changing specific partition attributes by specifying one or more "attribute equals value" pairs directly on the command line, without a whole partition profile.
PPC (using Direct FSP Management) specific:
===========================================
For Power 775 (use option \ *--p775*\ to specify):
chvm can be used to change the octant configuration values for generating LPARs. chvm is designed to set the octant configuration value to split the CPU and memory for partitions, and to set the octant memory interleaving value. chvm will only set the pending attribute values. After chvm, the CEC needs to be rebooted manually for the pending values to take effect. Before rebooting the CEC, the administrator can use chvm to change the partition plan. If the partition needs I/O slots, the administrator should use chvm to assign the I/O slots.
chvm is also designed to assign the I/O slots to the new LPAR. Both the current I/O owning LPAR and the new I/O owning LPAR must be powered off before an I/O assignment. Otherwise, if the I/O slot belongs to an LPAR and that LPAR is powered on, the command will return an error when trying to assign that slot to a different LPAR.
The administrator should use lsvm to get the profile content, then edit the content and manually add the node name with ":" before the I/O that will be assigned to the node. The profile can then be piped into the chvm command, or changed with the -p flag.
For normal power machine:
chvm can be used to modify the resources assigned to partitions. The admin shall specify the attributes with the options \ *vmcpus*\ , \ *vmmemory*\ , \ *add_physlots*\ , \ *vmothersetting*\ , \ *add_vmnics*\ and/or \ *add_vmstorage*\ . If nothing is specified, nothing will be returned.
VMware/KVM specific:
====================
The chvm command modifies the VM specified in noderange. Calling with the deregister or purge options at the same time as the resize option is not recommended.
zVM specific:
=============
The chvm command modifies the virtual machine's configuration specified in noderange.
*******
OPTIONS
*******
Common:
=======
\ **-h**\
Display usage message.
\ **-v**\
Command Version.
PPC (with HMC) specific:
========================
\ **-p**\ \ *profile*\
Name of an existing partition profile.
\ *attr*\ =\ *val*\
Specifies one or more "attribute equals value" pairs, separated by spaces.
\ **-V**\
Verbose output.
PPC (using Direct FSP Management) specific:
===========================================
\ **--p775**\
Specify the operation is for Power 775 machines.
\ **-i**\
Starting numeric id of the newly created partitions. For Power 775 using Direct FSP Management, the id value only could be \ **1**\ , \ **5**\ , \ **9**\ , \ **13**\ , \ **17**\ , \ **21**\ , \ **25**\ and \ **29**\ . Shall work with option \ **--p775**\ .
\ **-m**\
Memory interleaving. The setting value can only be \ **1**\ or \ **2**\ . \ **2**\ means \ **non-interleaved**\ mode (also 2MC mode); the memory cannot be shared across the processors in an octant. \ **1**\ means \ **interleaved**\ mode (also 8MC mode); the memory can be shared. The default value is \ **1**\ . Shall work with option \ **--p775**\ .
\ **-r**\
Partition rule. Shall work with option \ **--p775**\ .
If all the octants' configuration values are the same in one CEC, it will be " \ **-r**\ \ **0-7**\ :\ *value*\ " .
If the octants use different configuration values in one CEC, it will be "\ **-r**\ \ **0**\ :\ *value1*\ ,\ **1**\ :\ *value2*\ ,...\ **7**\ :\ *value7*\ ", or "\ **-r**\ \ **0**\ :\ *value1*\ ,\ **1-7**\ :\ *value2*\ " and so on.
The octant configuration value for one octant can be \ **1**\ , \ **2**\ , \ **3**\ , \ **4**\ , or \ **5**\ . The meanings of the octant configuration values are as follows:
.. code-block:: perl
1 -- 1 partition with all cpus and memory of the octant
2 -- 2 partitions with a 50/50 split of cpus and memory
3 -- 3 partitions with a 25/25/50 split of cpus and memory
4 -- 4 partitions with a 25/25/25/25 split of cpus and memory
5 -- 2 partitions with a 25/75 split of cpus and memory
\ **-p**\ \ *profile*\
Name of I/O slots assignment profile. Shall work with option \ **--p775**\ .
\ **lparname**\ ={\ **\\***\ |\ **name**\ }
Set the LPAR name for the specified LPARs. If '\*' is specified, it means to get names from the xCAT database and then set them for the specified LPARs. If a string is specified, only a single node is supported and the string will be set as the name of the specified LPAR. The user can use lsvm to check the lparnames of LPARs.
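For instance (the LPAR node names and the new name below are hypothetical):
.. code-block:: perl
chvm lpar1-lpar8 lparname='*'
chvm lpar1 lparname=gpu_lpar1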
\ **vmcpus=value**\ \ **vmmemory=value**\ \ **add_physlots=value**\ \ **vmothersetting=value**\
To specify the parameters that will be modified.
\ **add_vmnics=value**\ \ **add_vmstorage=value**\ [\ **--vios**\ ]
To create new virtual adapter for the specified node.
\ **del_physlots=drc_index1,drc_index2...**\
To delete physical slots which are specified by the \ *drc_index1,drc_index2...*\ .
\ **del_vadapter=slotid**\
To delete a virtual adapter specified by the \ *slotid*\ .
VMware/KVM specific:
====================
\ **-a**\ \ *size*\
Add a new Hard disk with size defaulting to GB. Multiple can be added with comma separated values.
\ **--cpus**\ \ *count*\
Set the number of CPUs.
\ **-d**\ \ *disk*\
Deregister the Hard disk but leave the backing files. Multiple can be done with comma separated values. The disks are specified by SCSI id. Size defaults to GB.
\ **--mem**\ \ *memory*\
Set the memory, defaults to MB.
\ **-p**\ \ *disk*\
Purge the Hard disk. Deregisters and deletes the files. Multiple can be done with comma separated values. The disks are specified by SCSI id. Size defaults to GB.
\ **--resize**\ \ **disk**\ =\ *size*\
Change the size of the Hard disk. The disk can never be set to less than its current size. Multiple disks can be resized to \ *size*\ by using comma separated values on the left side of \ **=**\ . The disks are specified by SCSI id. Size defaults to GB.
zVM specific:
=============
\ **--add3390**\ \ *disk_pool*\ \ *device_address*\ \ *size*\ \ *mode*\ \ *read_password*\ \ *write_password*\ \ *multi_password*\
Adds a 3390 (ECKD) disk to a virtual machine's directory entry. The device address can be automatically assigned by specifying 'auto'. The size of the disk can be specified in GB, MB, or the number of cylinders.
\ **--add3390active**\ \ *device_address*\ \ *mode*\
Adds a 3390 (ECKD) disk that is defined in a virtual machine's directory entry to that virtual server's active configuration.
\ **--add9336**\ \ *disk_pool*\ \ *device_address*\ \ *size*\ \ *mode*\ \ *read_password*\ \ *write_password*\ \ *multi_password*\
Adds a 9336 (FBA) disk to a virtual machine's directory entry. The device address can be automatically assigned by specifying 'auto'. The size of the disk can be specified in GB, MB, or the number of blocks.
\ **--adddisk2pool**\ \ *function*\ \ *region*\ \ *volume*\ \ *group*\
Add a disk to a disk pool defined in the EXTENT CONTROL. Function type can be either: (4) Define region as full volume and add to group OR (5) Add existing region to group. The disk has to already be attached to SYSTEM.
\ **--addnic**\ \ *device_address*\ \ *type*\ \ *device_count*\
Adds a network adapter to a virtual machine's directory entry (case sensitive).
\ **--addpagespool**\ \ *volume_addr*\ \ *volume_label*\ \ *volume_use*\ \ *system_config_name*\ \ *system_config_type*\ \ *parm_disk_owner*\ \ *parm_disk_number*\ \ *parm_disk_password*\
Add a full volume page or spool disk to the virtual machine.
\ **--addprocessor**\ \ *device_address*\
Adds a virtual processor to a virtual machine's directory entry.
\ **--addprocessoractive**\ \ *device_address*\ \ *type*\
Adds a virtual processor to a virtual machine's active configuration (case sensitive).
\ **--addvdisk**\ \ *device_address*\ \ *size*\
Adds a v-disk to a virtual machine's directory entry.
\ **--addzfcp**\ \ *pool*\ \ *device_address*\ \ *loaddev*\ \ *size*\ \ *tag*\ \ *wwpn*\ \ *lun*\
Add a zFCP device to a device pool defined in xCAT. The device must have been
carved up in the storage controller and configured with a WWPN/LUN before it can
be added to the xCAT storage pool. z/VM does not have the ability to communicate
directly with the storage controller to carve up disks dynamically. xCAT will
find a zFCP device in the specified pool that meets the size required, if
the WWPN and LUN are not given. The device address can be automatically assigned
by specifying 'auto'. The WWPN/LUN can be set as the LOADDEV in the directory
entry if (1) is specified as the 'loaddev'.
\ **--connectnic2guestlan**\ \ *device_address*\ \ *lan*\ \ *owner*\
Connects a given network adapter to a GuestLAN.
\ **--connectnic2vswitch**\ \ *device_address*\ \ *vswitch*\
Connects a given network adapter to a VSwitch.
\ **--copydisk**\ \ *target_address*\ \ *source_node*\ \ *source_address*\
Copy a disk attached to a given virtual server.
\ **--dedicatedevice**\ \ *virtual_device*\ \ *real_device*\ \ *mode*\
Adds a dedicated device to a virtual machine's directory entry.
\ **--deleteipl**\
Deletes the IPL statement from the virtual machine's directory entry.
\ **--disconnectnic**\ \ *device_address*\
Disconnects a given network adapter.
\ **--formatdisk**\ \ *disk_address*\ \ *multi_password*\
Formats a disk attached to a given virtual server (only ECKD disks supported). The disk should not be linked to any other virtual server. This command is best used after add3390().
\ **--grantvswitch**\ \ *vswitch*\
Grant vSwitch access for given virtual machine.
\ **--purgerdr**\
Purge the reader belonging to the virtual machine
\ **--removedisk**\ \ *device_address*\
Removes a minidisk from a virtual machine's directory entry.
\ **--removenic**\ \ *device_address*\
Removes a network adapter from a virtual machine's directory entry.
\ **--removeprocessor**\ \ *device_address*\
Removes a processor from an active virtual machine's configuration.
\ **--removeloaddev**\ \ *wwpn*\ \ *lun*\
Removes the LOADDEV statement from a virtual machines's directory entry.
\ **--removezfcp**\ \ *device_address*\ \ *wwpn*\ \ *lun*\
Removes a given SCSI/FCP device belonging to the virtual machine.
\ **--replacevs**\ \ *directory_entry*\
Replaces a virtual machine's directory entry. The directory entry can be echoed into stdin or a text file.
\ **--setipl**\ \ *ipl_target*\ \ *load_parms*\ \ *parms*\
Sets the IPL statement for a given virtual machine.
\ **--setpassword**\ \ *password*\
Sets the password for a given virtual machine.
\ **--setloaddev**\ \ *wwpn*\ \ *lun*\
Sets the LOADDEV statement in the virtual machine's directory entry.
\ **--undedicatedevice**\ \ *device_address*\
Delete a dedicated device from a virtual machine's active configuration and directory entry.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
PPC (with HMC) specific:
========================
1. To change the partition profile for lpar4 using the configuration data in the file /tmp/lparfile, enter:
.. code-block:: perl
cat /tmp/lparfile | chvm lpar4
Output is similar to:
.. code-block:: perl
lpar4: Success
2. To change the partition profile for lpar4 to the existing profile 'prof1', enter:
.. code-block:: perl
chvm lpar4 -p prof1
Output is similar to:
.. code-block:: perl
lpar4: Success
3. To change partition attributes for lpar4 by specifying attribute value pairs in command line, enter:
.. code-block:: perl
chvm lpar4 max_mem=4096
Output is similar to:
.. code-block:: perl
lpar4: Success
PPC (using Direct FSP Management) specific:
===========================================
1. For Power 775, to create a new partition lpar1 on the first octant of the cec cec01, where lpar1 will use all the cpu and memory of octant 0, enter:
.. code-block:: perl
mkdef -t node -o lpar1 mgt=fsp groups=all parent=cec01 nodetype=lpar hcp=cec01
then:
.. code-block:: perl
chvm lpar1 --p775 -i 1 -m 1 -r 0:1
Output is similar to:
.. code-block:: perl
lpar1: Success
cec01: Please reboot the CEC cec1 firstly, and then use chvm to assign the I/O slots to the LPARs
2. For Power 775, to create new partitions lpar1-lpar8 on the whole cec cec01, where each LPAR will use all the cpu and memory of its own octant, enter:
.. code-block:: perl
mkdef -t node -o lpar1-lpar8 nodetype=lpar mgt=fsp groups=all parent=cec01 hcp=cec01
then:
.. code-block:: perl
chvm lpar1-lpar8 --p775 -i 1 -m 1 -r 0-7:1
Output is similar to:
.. code-block:: perl
lpar1: Success
lpar2: Success
lpar3: Success
lpar4: Success
lpar5: Success
lpar6: Success
lpar7: Success
lpar8: Success
cec01: Please reboot the CEC cec1 firstly, and then use chvm to assign the I/O slots to the LPARs
3. For Power 775 cec1, to create new partitions lpar1-lpar9, where lpar1 will use 25% of the CPU and memory of the first octant, lpar2 will use the remaining CPU and memory of the first octant, and lpar3-lpar9 will each use all the cpu and memory of their own octant, enter:
.. code-block:: perl
mkdef -t node -o lpar1-lpar9 mgt=fsp groups=all parent=cec1 nodetype=lpar hcp=cec1
then:
.. code-block:: perl
chvm lpar1-lpar9 --p775 -i 1 -m 1 -r 0:5,1-7:1
Output is similar to:
.. code-block:: perl
lpar1: Success
lpar2: Success
lpar3: Success
lpar4: Success
lpar5: Success
lpar6: Success
lpar7: Success
lpar8: Success
lpar9: Success
cec1: Please reboot the CEC cec1 firstly, and then use chvm to assign the I/O slots to the LPARs
4. To change the I/O slot profile for lpar4 using the configuration data in the file /tmp/lparfile, the I/O slots information is similar to:
.. code-block:: perl
4: 514/U78A9.001.0123456-P1-C17/0x21010202/2/1
4: 513/U78A9.001.0123456-P1-C15/0x21010201/2/1
4: 512/U78A9.001.0123456-P1-C16/0x21010200/2/1
then run the command:
.. code-block:: perl
cat /tmp/lparfile | chvm lpar4 --p775
5. To change the I/O slot profile for lpar1-lpar8 using the configuration data in the file /tmp/lparfile: users can use the output of lsvm, remove the cec information, modify the lpar id before each I/O slot entry, and then run the command as follows:
.. code-block:: perl
chvm lpar1-lpar8 --p775 -p /tmp/lparfile
6. To change the LPAR name, enter:
.. code-block:: perl
chvm lpar1 lparname=test_lpar01
Output is similar to:
.. code-block:: perl
lpar1: Success
7. For a normal Power machine, to modify the resources assigned to a partition:
Before modify, the resource assigned to node 'lpar1' can be shown with:
lsvm lpar1
The output is similar to:
.. code-block:: perl
lpar1: Lpar Processor Info:
Curr Processor Min: 1.
Curr Processor Req: 4.
Curr Processor Max: 16.
lpar1: Lpar Memory Info:
Curr Memory Min: 1.00 GB(4 regions).
Curr Memory Req: 4.00 GB(16 regions).
Curr Memory Max: 32.00 GB(128 regions).
lpar1: 1,513,U78AA.001.WZSGVU7-P1-T7,0x21010201,0xc03(USB Controller)
lpar1: 1,512,U78AA.001.WZSGVU7-P1-T9,0x21010200,0x104(RAID Controller)
lpar1: 1/2/2
lpar1: 128.
To modify the resource assignment:
.. code-block:: perl
chvm lpar1 vmcpus=1/2/16 vmmemory=1G/8G/32G add_physlots=0x21010202
The output is similar to:
.. code-block:: perl
lpar1: Success
The resource information after modification is similar to:
.. code-block:: perl
lpar1: Lpar Processor Info:
Curr Processor Min: 1.
Curr Processor Req: 2.
Curr Processor Max: 16.
lpar1: Lpar Memory Info:
Curr Memory Min: 1.00 GB(4 regions).
Curr Memory Req: 8.00 GB(32 regions).
Curr Memory Max: 32.00 GB(128 regions).
lpar1: 1,514,U78AA.001.WZSGVU7-P1-C19,0x21010202,0xffff(Empty Slot)
lpar1: 1,513,U78AA.001.WZSGVU7-P1-T7,0x21010201,0xc03(USB Controller)
lpar1: 1,512,U78AA.001.WZSGVU7-P1-T9,0x21010200,0x104(RAID Controller)
lpar1: 1/2/2
lpar1: 128.
Note: The physical I/O resources specified with \ *add_physlots*\  will be appended to the specified partition. The physical I/O resources that are not specified but already belong to the partition will not be removed. For more information about \ *add_physlots*\ , please refer to lsvm(1)|lsvm.1.
VMware/KVM specific:
====================
.. code-block:: perl
chvm vm1 -a 8,16 --mem 512 --cpus 2
Output is similar to:
.. code-block:: perl
vm1: node successfully changed
zVM specific:
=============
1. To add a 3390 (ECKD) disk to a virtual machine's directory entry:
.. code-block:: perl
chvm gpok3 --add3390 POOL1 0101 2G MR
Output is similar to:
.. code-block:: perl
gpok3: Adding disk 0101 to LNX3... Done
2. To add a network adapter to a virtual machine's directory entry:
.. code-block:: perl
chvm gpok3 --addnic 0600 QDIO 3
Output is similar to:
.. code-block:: perl
gpok3: Adding NIC 0900 to LNX3... Done
3. To connect a given network adapter to a GuestLAN:
.. code-block:: perl
chvm gpok3 --connectnic2guestlan 0600 GLAN1 LN1OWNR
Output is similar to:
.. code-block:: perl
gpok3: Connecting NIC 0600 to GuestLan GLAN1 on LN1OWNR... Done
4. To connect a given network adapter to a vSwitch:
.. code-block:: perl
chvm gpok3 --connectnic2vswitch 0600 VSW1
Output is similar to:
.. code-block:: perl
gpok3: Connecting NIC 0600 to vSwitch VSW1 on LNX3... Done
5. To remove a minidisk from a virtual machine's directory entry:
.. code-block:: perl
chvm gpok3 --removedisk 0101
Output is similar to:
.. code-block:: perl
gpok3: Removing disk 0101 on LNX3... Done
6. To remove a network adapter from a virtual machine's directory entry:
.. code-block:: perl
chvm gpok3 --removenic 0700
Output is similar to:
.. code-block:: perl
gpok3: Removing NIC 0700 on LNX3... Done
7. To replace a virtual machine's directory entry:
.. code-block:: perl
cat /tmp/dirEntry.txt | chvm gpok3 --replacevs
Output is similar to:
.. code-block:: perl
gpok3: Replacing user entry of LNX3... Done
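8. To add a zFCP device from a zFCP device pool to a virtual machine, letting xCAT pick a free device that meets the requested size and assign the virtual device address automatically (a sketch; the pool name zfcp1 and the 10G size are illustrative assumptions, not values taken from this page):

.. code-block:: perl

 chvm gpok3 --addzfcp zfcp1 auto 0 10G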
*****
FILES
*****
/opt/xcat/bin/chvm
********
SEE ALSO
********
mkvm(1)|mkvm.1, lsvm(1)|lsvm.1, rmvm(1)|rmvm.1

########
chzone.1
########
.. highlight:: perl
************
\ **NAME**\
************
\ **chzone**\ - Changes a zone defined in the cluster.
****************
\ **SYNOPSIS**\
****************
\ **chzone**\ <zonename> [\ **--defaultzone**\ ] [-K] [\ **-k**\ \ *full path to the ssh RSA private key*\ ] [\ **-a**\ \ *noderange*\ | \ **-r**\ \ *noderange*\ ] [\ **-g**\ ] [\ **-f**\ ] [\ **-s**\ \ *yes|no*\ ] [-V]
\ **chzone**\ [\ **-h**\ | \ **-v**\ ]
*******************
\ **DESCRIPTION**\
*******************
The \ **chzone**\  command is designed to change the definition of a zone previously defined in the cluster.
The chzone command is only supported on Linux (no AIX support).
The nodes are not updated with the new root ssh keys by chzone. You must run updatenode -k or xdsh -K to the nodes to update the root ssh keys to the newly generated zone keys. This will also sync any service nodes with the zone keys, if you have a hierarchical cluster.
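For example, to regenerate the zone keys and then push them out to the zone's nodes, one might run the following (a sketch that assumes the zone's nodes are all in a group named zone2A, as created with the -g flag):

.. code-block:: perl

 chzone zone2A -K
 updatenode zone2A -k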
Note: if there are any zones in the zone table, there must be one and only one defaultzone. Otherwise, errors will occur.
***************
\ **OPTIONS**\
***************
\ **-h**\ |\ **--help**\
Displays usage information.
\ **-v**\ |\ **--version**\
Displays command version and build date.
\ **-k | --sshkeypath**\ \ *full path to the ssh RSA private key*\
This is the path to the id_rsa key that will be used to build new root's ssh keys for the zone. If -k is used, it will generate the ssh public key from the input ssh RSA private key, and store both in /etc/xcat/sshkeys/<zonename>/.ssh directory.
\ **-K | --genkeys**\
Using this flag, will generate new ssh RSA private and public keys for the zone into the /etc/xcat/sshkeys/<zonename>/.ssh directory.
The nodes are not automatically updated with the new root ssh keys by chzone. You must run updatenode -k or xdsh -K to the nodes to update the root ssh keys to the new generated zone keys. This will also sync any service nodes with the zone keys, if you have a hierarchical cluster.
\ **--defaultzone**\ 
if --defaultzone is input, then it will set the zone defaultzone attribute to yes.
if --defaultzone is input and another zone is currently the default,
then the -f flag must be used to force a change to the new defaultzone.
If the -f flag is not used, an error will be returned and no change will be made.
Note: if there are any zones in the zone table, there must be one and only one defaultzone. Otherwise, errors will occur.
\ **-a | --addnoderange**\ \ *noderange*\
For each node in the noderange, it will set the zonename attribute for that node to the input zonename.
If the -g flag is also on the command, then
it will add the group name "zonename" to each node in the noderange.
\ **-r | --rmnoderange**\ \ *noderange*\
For each node in the noderange, if the node is a member of the input zone, it will remove the zonename attribute for that node.
If any of the nodes in the noderange is not a member of the zone, you will get an error and nothing will be changed.
If the -g flag is also on the command, then
it will remove the group name "zonename" from each node in the noderange.
\ **-s| --sshbetweennodes**\ \ **yes|no**\
If -s is entered, the zone sshbetweennodes attribute will be set to yes or no based on the input. When this is set to yes, ssh will be set up to allow passwordless root access between nodes. If no, root will be prompted for a password when running ssh between the nodes in the zone.
\ **-f | --force**\
Used with the (--defaultzone) flag to override the current default zone.
\ **-g | --assigngroup**\
Used with the (-a or -r ) flag to add or remove the group zonename for all nodes in the input noderange.
\ **-V**\ |\ **--Verbose**\
Verbose mode.
****************
\ **Examples**\
****************
\*
To chzone zone1 to the default zone, enter:
\ **chzone**\ \ *zone1*\ --default -f
\*
To generate new root ssh keys for zone2A using the ssh id_rsa private key in /root/.ssh:
\ **chzone**\ \ *zone2A*\ -k /root/.ssh
Note: you must use xdsh -K or updatenode -k to update the nodes with the new keys
\*
To generate new root ssh keys for zone2A, enter:
\ **chzone**\ \ *zone2A*\ -K
Note: you must use xdsh -K or updatenode -k to update the nodes with the new keys
\*
To add a new group of nodes (compute3) to zone3 and add zone3 group to the nodes, enter:
\ **chzone**\ \ *zone3*\ -a compute3 -g
\*
To remove a group of nodes (compute4) from zone4 and remove zone4 group from the nodes, enter:
\ **chzone**\ \ *zone4*\ -r compute4 -g
\*
To change the sshbetweennodes setting on the zone to not allow passwordless ssh between nodes, enter:
\ **chzone**\ \ *zone5*\ -s no
Note: you must use xdsh -K or updatenode -k to update the nodes with this new setting.
**************
\ **Files**\ 
**************
\ **/opt/xcat/bin/chzone/**\
Location of the chzone command.
****************
\ **SEE ALSO**\
****************
mkzone(1)|mkzone.1, rmzone(1)|rmzone.1, xdsh(1)|xdsh.1, updatenode(1)|updatenode.1

#########
clonevm.1
#########
.. highlight:: perl
****
NAME
****
\ **clonevm**\ - Create masters from virtual machines and virtual machines from masters.
********
SYNOPSIS
********
\ **clonevm**\  \ *noderange*\  [\ **-t**\  \ *mastertobemade*\  | \ **-b**\  \ *master to base vms upon*\ ] [\ **-d**\ |\ **--detached**\ ] [\ **-f**\ |\ **--force**\ ]
***********
DESCRIPTION
***********
Command to promote a VM's current configuration and storage to a master as well as
performing the converse operation of creating VMs based on a master.
By default, attempting to create a master from a running VM will produce an error.
The force argument will request that a master be made of the VM anyway.
Also, by default a VM that is used to create a master will be rebased as a thin
clone of that master. If the force argument is used to create a master of a powered
on vm, this will not be done. Additionally, the detached option can be used to
explicitly request that a clone not be tethered to a master image, allowing the
clones to not be tied to the health of a master, at the cost of additional storage.
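For example, a set of detached clones that will not depend on the ongoing health of the master could be requested as follows (a sketch; the node range and master name are illustrative):

.. code-block:: perl

 clonevm vm31-vm40 -b appserver -d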
When promoting a VM's current state to master, all related virtual disks will be
copied and merged with any prerequisite images. A master will not be tethered to
other masters.
*******
OPTIONS
*******
\ **-h|--help**\ Display usage message.
\ **-b**\ The master to base the clones upon
\ **-t**\ The target master to copy a single VM's state to
\ **-d**\ Explicitly request that the noderange be untethered from any masters.
\ **-f**\ Force cloning of a powered on VM. Implies -d if the VM is on.
\ **-v|--version**\ Command Version.
\ **-V|--verbose**\ Verbose output.
************
RETURN VALUE
************
0: The command completed successfully.
Any other value: An error has occurred.
********
EXAMPLES
********
Creating a master named appserver from a node called vm1:
\ *clonevm vm1 -t appserver*\
Creating 30 VMs from a master named appserver:
\ *clonevm vm1-vm30 -b appserver*\
*****
FILES
*****
/opt/xcat/bin/clonevm
********
SEE ALSO
********
chvm(1)|chvm.1, lsvm(1)|lsvm.1, rmvm(1)|rmvm.1, mkvm(1)|mkvm.1, vmmaster(5)|vmmaster.5

###########
configfpc.1
###########
.. highlight:: perl
****
NAME
****
\ **configfpc**\ - discover the Fan Power Controllers (FPCs) and configure the FPC interface
********
SYNOPSIS
********
\ **configfpc**\ \ **-i**\ \ *interface*\
\ **configfpc**\ \ **-i**\ \ *interface*\ \ **--ip**\ \ *default ip address*\
\ **configfpc**\ [\ **-V**\ |\ **--verbose**\ ]
\ **configfpc**\ [\ **-h**\ |\ **--help**\ |\ **-?**\ ]
***********
DESCRIPTION
***********
\ **configfpc**\  will discover and configure all FPCs that are set to the default IP address. If not supplied, the default IP address is 192.168.0.100.
The \ **-i**\ \ **interface**\ is required to direct \ **configfpc**\ to the xCAT MN interface which is on the same VLAN as the FPCs.
There are several bits of information that must be included in the xCAT database before running this command.
You must create the FPC node definitions for all FPCs being discovered including the IP address and switch port information.
The \ **configfpc**\ command discovers the FPCs and collects the MAC address. The MAC address is used to relate the FPC to a FPC node using the switch information for this MAC. Once the relationship is discovered the FPC is configured with the FPC node IP settings.
This process is repeated until no more FPCs are discovered.
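As a sketch of the required pre-definition (the node name, IP address, switch name and switch port below are illustrative assumptions), an FPC node with its IP and switch port information might be defined before running configfpc:

.. code-block:: perl

 mkdef -t node -o fpc01 groups=fpc,all ip=192.168.0.101 switch=switch1 switchport=10
 makehosts fpc01
 configfpc -i eth0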
For more information on xCAT support of NeXtScale and configfpc see the following doc:
XCAT_NeXtScale_Clusters
*******
OPTIONS
*******
\ **-i**\ \ *interface*\
Use this flag to specify which xCAT MN interface (example: eth4) that is connected to the NeXtScale FPCs. This option is required.
\ **--ip**\ \ *default ip address*\
Use this flag to override the default ip address of 192.168.0.100 with a new address.
\ **-V**\ |\ **--verbose**\
Verbose mode
*******
Example
*******
1
To discover and configure all NeXtScale Fan Power Controllers (FPCs) connected on eth0 interface.
\ **configfpc**\ \ **-i**\ \ *eth0*\
2
To override the default ip address and run in Verbose mode.
\ **configfpc**\ \ **-i**\ \ *eth0*\ \ **--ip**\ \ *196.68.0.100*\ \ **-V**\

##########
csm2xcat.1
##########
.. highlight:: perl
****
NAME
****
\ **csm2xcat**\ - Allows the migration of a CSM database to an xCAT database.
********
SYNOPSIS
********
\ **csm2xcat**\ [\ **--dir**\ \ *path*\ ]
\ **csm2xcat**\ [\ **-h**\ ]
***********
DESCRIPTION
***********
The csm2xcat command must be run on the Management Server of the CSM system that you want to migrate to xCAT. The command will build two xCAT stanza files that can update the xCAT database with the chdef command.
Copy the csm2xcat command to the CSM Management Server. Run the command, indicating where you want your stanza files saved with the --dir parameter. Check the stanza files to see if the information is what you want put in the xCAT database. Copy the two stanza files, node.stanza and device.stanza, back to your xCAT Management Node, and run the chdef command to load them into the xCAT database.
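Put together, the migration might look like the following sketch (the /tmp/mydir path is only an example location):

.. code-block:: perl

 # on the CSM Management Server
 csm2xcat --dir /tmp/mydir
 # copy node.stanza and device.stanza to the xCAT Management Node, then:
 cat node.stanza | chdef -z
 cat device.stanza | chdef -z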
*******
OPTIONS
*******
\ **-h**\ Display usage message.
\ **--dir**\ Path to the directory containing the stanza files.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
1. To build xCAT stanza files, enter on the CSM Management Server:
\ **csm2xcat --dir /tmp/mydir**\
2. To put the data in the xCAT database on the xCAT Management Node:
\ **cat node.stanza | chdef -z**\
\ **cat device.stanza | chdef -z**\
*****
FILES
*****
/opt/xcat/share/xcat/tools/csm2xcat
$dir/conversion.log
********
SEE ALSO
********
chdef(1)|chdef.1

#############
db2sqlsetup.1
#############
.. highlight:: perl
****
NAME
****
\ **db2sqlsetup**\ - Sets up the IBM DB2 for xCAT to use.
********
SYNOPSIS
********
\ **db2sqlsetup**\ {\ **-h**\ |\ **--help**\ }
\ **db2sqlsetup**\ {\ **-v**\ |\ **--version**\ }
\ **db2sqlsetup**\ {\ **-i**\ |\ **--init**\ }{<-S> | <-C>} [-o|--setupODBC] [\ **-V**\ |\ **--verbose**\ ]
\ **db2sqlsetup**\ {\ **-i**\ |\ **--init**\ }{<-S>} [-N|--nostart] [-o|--setupODBC] [\ **-V**\ |\ **--verbose**\ ]
\ **db2sqlsetup**\ {\ **-o**\ |\ **--setupODBC**\ } {<-S> | <-C>} [-V|--verbose]
\ **db2sqlsetup**\ {\ **-p**\ |\ **--passwd**\ } [<-S> | <-C>]
***********
DESCRIPTION
***********
\ **db2sqlsetup**\ - Sets up the IBM DB2 database for xCAT to use. The db2sqlsetup script is run on the Management Node, after the DB2 Server code has been installed, to setup the DB2 Server (-S).
The xcatd daemon will be stopped during migration on the MN. No xCAT commands should be run during the init process, because we will be migrating the xCAT database to DB2 and restarting the xcatd daemon.
The db2sqlsetup script must be run on each Service Node, after the DB2 Client code has been installed, to set up the DB2 Client (-C). There are two postscripts provided (db2install and odbcsetup) that will automatically set up your Service Node as a DB2 client.
For full information on the setup of DB2, see Setting_Up_DB2_as_the_xCAT_DB.
When running of db2sqlsetup on the MN:
One password must be supplied for the setup, a password for the xcatdb unix id which will be used as the DB2 instance id and database name. The password will be prompted for interactively or can be input with the XCATDB2PW environment variable.
The script will create the xcat database instance (xcatdb) in the /var/lib/db2 directory unless overridden by setting the site.databaseloc attribute. This attribute should not be set to the directory that is defined in the installloc attribute, and it is recommended that the databaseloc be a new filesystem dedicated to the DB2 database, especially in very large clusters.
When running db2sqlsetup on the SN:
Not only will the password for the DB2 instance id be prompted for (and it must match the one on the Management Node), but the hostname or IP address of the Management Node as known by the Service Node must also be supplied, unless the XCATDB2SERVER environment variable is set.
You can automatically install and set up DB2 on the SN using the db2install and odbcsetup postscripts, without needing to run the command manually. See the full documentation.
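For example, the client setup can be driven by those postscripts instead of running the command by hand; a sketch (the service node name sn1 is an assumption):

.. code-block:: perl

 chdef -t node -o sn1 -p postscripts=db2install,odbcsetup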
Note: On AIX , root must be running ksh and on Linux, bash shell.
*******
OPTIONS
*******
\ **-h|--help**\
Displays the usage message.
\ **-v|--version**\
Displays the release version of the code.
\ **-V|--verbose**\
Displays verbose messages.
\ **-i|--init**\
The init option is used to setup an installed DB2 database on AIX or Linux (p-Series) so that xCAT can use the database. This must be combined with either the -S or -C flag to indicate whether we are setting up the Server or the Client. With the -S flag, it involves creating the xcatdb database, the xcatdb instance id, allowing access to the xcatdb database by the Management Node. It also backs up the current xCAT database and restores it into the newly setup xcatdb DB2 database. It creates the /etc/xcat/cfgloc file to point the xcatd daemon to the DB2 database and restarts the xcatd daemon using the database.
\ **-p|--passwd**\
The password change option is to change the database access password for the DB2 xcatdb database. If -S is input then it will only change the password on the DB2 Server (MN). If -C is input it will only change on the DB2 clients (SN). If neither -S or -C are input with this flag, then it will change both the DB2 Server and Clients. When changing the password the xcatd daemon will be stopped and restarted. Any other tools accessing the database should also be stopped before changing and restarted after changing.
\ **-S|-C**\
This options says whether to setup the Server (-S) on the Management Node, or the Client (-C) on the Service Nodes.
\ **-N|--nostart**\
This option with the -S flag will create the database, but will not backup and restore xCAT tables into the database. It will create the cfgloc file such that the next start of xcatd will try and contact the database. This can be used to setup the xCAT DB2 database during or before install.
\ **-o|--setupODBC**\
This option sets up ODBC: the /etc/../odbcinst.ini, /etc/../odbc.ini, and the .odbc.ini file in root's home directory will be created and initialized to run off the xcatdb DB2 database.
*********************
ENVIRONMENT VARIABLES
*********************
\*
XCATDB2INSPATH overrides the default install path for DB2 which is /opt/ibm/db2/V9.7 for Linux and /opt/IBM/db2/V9.7 for AIX.
\*
DATABASELOC overrides where the xCAT DB2 database is created, which is /var/lib/db2 by default or taken from the site.databaseloc attribute.
\*
XCATDB2PW can be set to the password for the xcatdb DB2 instance id so that there will be no prompting for a password when the script is run.
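For instance, the server setup can be run without a password prompt by exporting the variable first (a sketch; the password value is a placeholder):

.. code-block:: perl

 export XCATDB2PW=mydb2password
 db2sqlsetup -i -S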
********
EXAMPLES
********
\*
To setup DB2 Server for xCAT to run on the DB2 xcatdb database, on the MN:
\ **db2sqlsetup**\ \ *-i*\ \ *-S*\
\*
To setup DB2 Client for xCAT to run on the DB2 xcatdb database, on the SN:
\ **db2sqlsetup**\ \ *-i*\ \ *-C*\
\*
To setup the ODBC for DB2 xcatdb database access, on the MN :
\ **db2sqlsetup**\ \ *-o*\ \ *-S*\
\*
To setup the ODBC for DB2 xcatdb database access, on the SN :
\ **db2sqlsetup**\ \ *-o*\ \ *-C*\
\*
To setup the DB2 database but not start xcat running with it:
\ **db2sqlsetup**\ \ *-i*\ \ *-S*\ \ *-N*\
\*
To change the DB2 xcatdb password on both the Management and Service Nodes:
\ **db2sqlsetup**\ \ *-p*\

############
dumpxCATdb.1
############
.. highlight:: perl
****
NAME
****
\ **dumpxCATdb**\  - Dumps the xCAT database tables.
********
SYNOPSIS
********
\ **dumpxCATdb**\ [\ **-a**\ ] [\ **-V**\ ] [{\ **-p**\ |\ **--path**\ } \ *path*\ ]
\ **dumpxCATdb**\ [\ **-b**\ ] [\ **-V**\ ] [{\ **-p**\ |\ **--path**\ } \ *path*\ ]
\ **dumpxCATdb**\ [\ **-h**\ |\ **--help**\ ] [\ **-v**\ |\ **--version**\ ]
***********
DESCRIPTION
***********
If not using the binary dump option (-b), then the dumpxCATdb command creates .csv files for xCAT database tables and puts them in the directory given by the -p flag. These files can be used by the restorexCATdb command to restore the database. The command will read the list of tables in the site.skiptables attribute and not backup those tables.
Supports using XCAT_SKIPTABLES env variable to provide a list of skip tables.
The command will never back up the TEAL or ISNM tables, except isnm_config. To dump TEAL tables, use the documented process for TEAL. For ISNM, use tabdump, after using tabprune to prune unnecessary records.
If using the binary dump option for the DB2 or PostgreSQL database, then the routine will use the database-provided utilities to back up the entire database.
*******
OPTIONS
*******
\ **-h**\ Display usage message.
\ **-v**\ Command Version.
\ **-V**\ Verbose.
\ **-a**\  All; without this flag the eventlog and auditlog will be skipped.
\ **-b**\  This flag is only used for the DB2 or PostgreSQL database. The routine will use the database backup utilities to create a binary backup of the entire database. Note: to use this backup on DB2, you must first have modified the logging of the database and taken an initial offline backup. Refer to the xCAT DB2 documentation for more instructions.
\ **-p**\ Path to the directory to dump the database. It will be created, if it does not exist.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
1. To dump the xCAT database into the /tmp/db directory, enter:
\ **dumpxCATdb -p /tmp/db**\
2. To dump the xCAT database into the /tmp/db directory, including the auditlog and eventlog enter:
\ **dumpxCATdb -a -p /tmp/db**\
3. To have dumpxCATdb not backup the hosts or passwd table:
\ **chtab key=skiptables site.value="hosts,passwd"**\
\ **dumpxCATdb -p /tmp/db**\
4. To have dumpxCATdb not backup the hosts or passwd table:
\ **export XCAT_SKIPTABLES="hosts,passwd"**\
\ **dumpxCATdb -p /tmp/db**\
5. To have dumpxCATdb use DB2 utilities to backup the DB2 database:
\ **dumpxCATdb -b -p /install/db2backup**\
*****
FILES
*****
/opt/xcat/sbin/dumpxCATdb
********
SEE ALSO
********
restorexCATdb(1)|restorexCATdb.1

##########
genimage.1
##########
.. highlight:: perl
****
NAME
****
\ **genimage**\ - Generates a stateless image to be used for a diskless install.
********
SYNOPSIS
********
\ **genimage**\
\ **genimage**\ [\ **-o**\ \ *osver*\ ] [\ **-a**\ \ *arch*\ ] [\ **-p**\ \ *profile*\ ] [\ **-i**\ \ *nodebootif*\ ] [\ **-n**\ \ *nodenetdrivers*\ ] [\ **--onlyinitrd**\ ] [\ **-r**\ \ *otherifaces*\ ] [\ **-k**\ \ *kernelver*\ ] [\ **-g**\ \ *krpmver*\ ] [\ **-m**\ \ *statelite*\ ] [\ **-l**\ \ *rootlimitsize*\ ] [\ **--permission**\ \ *permission*\ ] [\ **--interactive**\ ] [\ **--dryrun**\ ] [\ **--ignorekernelchk**\ ] [\ **--noupdate**\ ] \ *imagename*\
\ **genimage**\ [\ **-h**\ | \ **--help**\ | \ **-v**\ | \ **--version**\ ]
***********
DESCRIPTION
***********
Generates a stateless and a statelite image that can be used to boot xCAT nodes in a diskless mode.
genimage will use the osimage definition for information to generate this image. Additional options specified on the command line will override any corresponding previous osimage settings, and will be written back to the osimage definition.
If \ **genimage**\ runs on the management node, both the \ *osimage*\ table and \ *linuximage*\ table will be updated with the given values from the options.
The \ **genimage**\ command will generate two initial ramdisks for \ **stateless**\ and \ **statelite**\ , one is \ **initrd-stateless.gz**\ , the other one is \ **initrd-statelite.gz**\ .
After your image is generated, you can chroot to the
image, install any additional software you would like, or make modifications to files, and then run the following command to prepare the image for deployment.
for stateless: \ **packimage**\
for statelite: \ **liteimg**\
Besides prompting for some parameter values, the \ **genimage**\  command takes default guesses for the parameters not specified or not defined in the \ *osimage*\  and \ *linuximage*\  tables. It also assumes default answers for questions from the yum/zypper command when installing rpms into the image. Please use the --interactive flag if you want the yum/zypper command to prompt you for the answers.
If \ **--onlyinitrd**\ is specified, genimage only regenerates the initrd for a stateless image to be used for a diskless install.
The \ **genimage**\ command must be run on a system that is the same architecture and same distro with same major release version as the nodes it will be
used on. If the management node is not the same architecture or same distro level, copy the contents of
/opt/xcat/share/xcat/netboot/<os> to a system that is the proper architecture, and mount /install from
the management node to that system. Then change directory to /opt/xcat/share/xcat/netboot/<os> and run ./genimage.
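As a sketch of that procedure (the management node name, the <os> directory name 'rh' and the image name are illustrative assumptions):

.. code-block:: perl

 # on a build system of the same architecture and distro level as the target nodes,
 # after copying /opt/xcat/share/xcat/netboot/rh from the management node
 mount mgmtnode:/install /install
 cd /opt/xcat/share/xcat/netboot/rh
 ./genimage myimagename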
**********
Parameters
**********
\ *imagename*\ specifies the name of an os image definition to be used. The specification for the image is stored in the \ *osimage*\ table and \ *linuximage*\ table.
*******
OPTIONS
*******
\ **-a**\ \ *arch*\
The hardware architecture of this node: x86_64, ppc64, x86, ia64, etc. If omitted, the current hardware architecture will be used.
\ **-o**\ \ *osver*\
The operating system for the image: fedora8, rhel5, sles10, etc. The OS packages must be in
/install/<osver>/<arch> (use copycds(8)|copycds.8).
\ **-p**\ \ *profile*\
The profile (e.g. compute, service) to use to create the image. This determines what package lists are
used from /opt/xcat/share/xcat/netboot/<os> to create the image with. When deploying nodes with this image,
the nodes' nodetype.profile attribute must be set to this same value.
\ **-i**\ \ *nodebootif*\
This argument is now optional, and allows you to specify the network boot interface to be configured in the image (e.g. eth0). If not specified, the interface will be determined and configured during the network boot process.
\ **-n**\ \ *nodenetdrivers*\
This argument is now optional, and allows you to specify the driver
modules needed for the network interface(s) on your stateless nodes. If
you do not specify this option, the default is to include all recent IBM
xSeries network drivers.
If specified, \ *nodenetdrivers*\ should be a comma separated list of
network drivers to be used by the stateless nodes (Ie.: -n tg3,e1000).
Note that the drivers will be loaded in the order that you list them,
which may prove important in some cases.
\ **-l**\ \ *rootlimit*\
The maximum size allowed for the root file system in the image. Specify in bytes, or can append k, m, or g.
\ **--onlyinitrd**\
Regenerates the initrd for a stateless image to be used for a diskless install.
Regenerates the initrd that is part of a stateless/statelite image that is used to boot xCAT nodes in a stateless/statelite mode.
The \ **genimage --onlyinitrd**\ command will generate two initial ramdisks, one is \ **initrd-statelite.gz**\ for \ **statelite**\ mode, the other one is \ **initrd-stateless.gz**\ for \ **stateless**\ mode.
\ **--permission**\ \ *permission*\
The mount permission of \ **/.statelite**\ directory for \ **statelite**\ mode, which is only used for \ **statelite**\ mode, and the default permission is 755.
\ **-r**\ \ *otherifaces*\
Other network interfaces (e.g. eth1) in the image that should be configured via DHCP.
\ **-k**\ \ *kernelver*\
Use this flag if you want to use a specific version of the kernel in the image. Defaults to the first kernel found
in the install image.
\ **-g**\ \ *krpmver*\
Use this flag to specify the rpm version for kernel packages in the image. It must be present if -k flag is specified in the command for SLES. Generally, the value of -g is the part after \ **linux-**\ and before \ **.rpm**\ in a kernel rpm name.
\ **-m**\ statelite
This flag is for Ubuntu, Debian and Fedora12 only. Use this flag to specify if you want to generate statelite image. The default is to generate stateless image for these three operating systems. For others, this flag is invalid because both stateless and statelite images will be generated with this command.
\ **--interactive**\
This flag allows the user to answer questions from yum/zypper command when installing rpms into the image. If it is not specified, '-y' will be passed to the yum command and '--non-interactive --no-gpg-checks' will be passed to the zypper command as default answers.
\ **--dryrun**\
This flag shows the underlying call to the os specific genimage function. The user can copy and the paste the output to run the command on another machine that does not have xCAT installed.
\ **-t**\ \ *tmplimit*\
(Deprecated) This flag allows the user to setup the /tmp and the /var/tmp file system sizes. This flag is no longer supported. You can overwrite any file system size using the .postinstall script where you can create a new /etc/fstab file.
\ **--ignorekernelchk**\
Skip the kernel version checking when injecting drivers from osimage.driverupdatesrc. That means all drivers from osimage.driverupdatesrc will be injected to initrd for the specific target kernel.
\ **--noupdate**\
This flag allows the user to bypass automatic package updating when installing other packages.
\ **-v|--version**\
Display version.
\ **-h|--help**\
Display usage message.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
1 To prompt the user for inputs:
.. code-block:: perl
genimage
2 To generate an image using information from an osimage definition:
.. code-block:: perl
genimage myimagename
3 To run genimage in test mode without actually generating an image:
.. code-block:: perl
genimage --dryrun myimagename
4 To generate an image and have yum/zypper prompt for responses:
.. code-block:: perl
genimage myimagename --interactive
5 To generate an image, replacing some values in the osimage definition:
.. code-block:: perl
genimage -i eth0 -n tg3 myimagename
*****
FILES
*****
/opt/xcat/bin/genimage
/opt/xcat/share/xcat/netboot/<OS>/genimage
********
SEE ALSO
********
packimage(1)|packimage.1, liteimg(1)|liteimg.1

###########
geninitrd.1
###########
.. highlight:: perl
****
NAME
****
\ **geninitrd**\  - Generate an initrd (initial ramfs) to be used for stateful install or stateless netboot.
********
SYNOPSIS
********
\ **geninitrd**\ \ *imagename*\ [\ **--ignorekernelchk**\ ]
\ **geninitrd**\ [\ **-h**\ | \ **--help**\ ]
***********
DESCRIPTION
***********
Generate the initrd for the osimage: \ **imagename**\ which is an xCAT object of \ *osimage*\ type.
\ **Diskfull Osimage**\
If the \ **imagename**\  is a stateful one (the provmethod attribute for the osimage is 'install'),
this command is used to rebuild the initrd to inject the new drivers from driver rpms or
'update distro' and copy the rebuilt initrd and new kernel (if there is a new kernel in 'update
distro') to the directory \ */tftpboot/xcat/<imagename>*\ .
If the initrd has been rebuilt by geninitrd, then when running nodeset, the \ *--noupdateinitrd*\  option
should be used to skip the rebuilding of the initrd to improve performance.
Three attributes of the osimage object can be used to specify the Driver RPM location and Driver names
for injecting new drivers into the initrd.
\ **netdrivers**\ - comma separated driver names that need to be injected to the initrd.
The postfix '.ko' can be ignored. The netdrivers attribute must be set to specify the new driver list.
If you want to load all the drivers from the driver rpms, use the keyword allupdate.
\ **driverupdatesrc**\ - comma separated driver rpm packages (full path should be specified)
\ **osupdatename**\ - comma separated 'osdistroupdate' object. Each 'osdistroupdate' object specifies a
Linux distro update. When geninitrd is run, 'kernel-\*.rpm' packages will be searched for in osdistroupdate.dirpath
to get all the rpm packages, and the drivers will then be extracted from those rpm packages.
Refer to the doc: Using_Linux_Driver_Update_Disk
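As a sketch of setting these attributes and then rebuilding the initrd (the osimage name and driver rpm path are assumptions):

.. code-block:: perl

 chdef -t osimage -o rhels7.2-x86_64-install-compute netdrivers=igb,e1000e driverupdatesrc=/install/drivers/igb-update.x86_64.rpm
 geninitrd rhels7.2-x86_64-install-compute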
\ **Stateless Osimage**\
If the \ **imagename**\  is a stateless one (the provmethod attribute for the osimage is 'netboot'),
this command is used to generate the initrd from the rootimg which was generated by the 'genimage' command.
So 'genimage' must be run once before running the geninitrd command.
Two attributes of the osimage object can be used to specify the Driver RPM location and Driver names
for injecting new drivers into the initrd.
\ **netdrivers**\ - comma separated driver names that need to be injected to the initrd.
The postfix '.ko' can be ignored. The netdrivers attribute must be set to specify the new driver list.
If you want to load all the drivers from the driver rpms, use the keyword allupdate.
\ **driverupdatesrc**\ - comma separated driver rpm packages (full path should be specified)
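A minimal sketch for a stateless osimage (the osimage name is an assumption); note that genimage must have been run at least once before geninitrd:

.. code-block:: perl

 genimage rhels7.2-x86_64-netboot-compute
 geninitrd rhels7.2-x86_64-netboot-compute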
**********
Parameters
**********
\ *imagename*\  specifies the name of an os image definition to be used. The specification for the image is stored in the \ *osimage*\  table and \ *linuximage*\  table.
\ **--ignorekernelchk**\
Skip the kernel version checking when injecting drivers from osimage.driverupdatesrc. That means all drivers from osimage.driverupdatesrc will be injected to initrd for the specific target kernel.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
1 To generate initrd for the osimage \ **myimagename**\ :
.. code-block:: perl
geninitrd myimagename
*****
FILES
*****
/opt/xcat/bin/geninitrd
/opt/xcat/bin/genimage
/opt/xcat/share/xcat/netboot/<OS>/genimage
********
SEE ALSO
********
geninitrd(1)|geninitrd.1, genimage(1)|genimage.1

#########
getmacs.1
#########
.. highlight:: perl
****
NAME
****
\ **getmacs**\ - Collects node MAC address.
********
SYNOPSIS
********
Common:
=======
\ **getmacs**\ [\ **-h**\ | \ **--help**\ | \ **-v**\ | \ **--version**\ ]
PPC specific:
=============
\ **getmacs**\ \ *noderange*\ [\ **-F**\ \ *filter*\ ]
\ **getmacs**\ \ *noderange*\ [\ **-M**\ ]
\ **getmacs**\ \ *noderange*\ [\ **-V**\ | \ **--verbose**\ ] [\ **-f**\ ] [\ **-d**\ ] [\ **--arp**\ ] | [\ **-D**\ {[\ **-S**\ \ *server*\ ] [\ **-G**\ \ *gateway*\ ] [\ **-C**\ \ *client*\ ] [\ **-o**\ ] | [\ **--noping**\ ]}]
blade specific:
===============
\ **getmacs**\ \ *noderange*\ [\ **-V**\ | \ **--verbose**\ ] [\ **-d**\ ] [\ **--arp**\ ] [\ **-i**\ \ *ethN*\ |\ *enN*\ ]
***********
DESCRIPTION
***********
The getmacs command collects the MAC address from a single node or a range of nodes.
Note that on AIX systems, the returned MAC address is not colon-separated (for example 8ee2245cf004), while on Linux systems the MAC address is colon-separated (for example 8e:e2:24:5c:f0:04).
If no ping test is performed, getmacs writes the first adapter MAC to the xCAT database. If a ping test is performed, getmacs will write the first successfully pinged MAC to the xCAT database.
For PPC (using Direct FSP Management) specific:
Note: If network adapters are physically assigned to LPARs, getmacs cannot read the MAC addresses unless \ **Discovery**\  is performed with the "\ **-D**\ " option, since there is no HMC command to read them and getmacs has to log in to Open Firmware. And if the LPARs have never been activated before, getmacs needs to be performed with the "\ **-D**\ " option to get their MAC addresses.
For PPC (using HMC) specific:
Note: The option "\ **-D**\ " \ **must**\ be used to get MAC addresses of LPARs.
For IBM Flex Compute Node (Compute Node for short) specific:
Note: If "\ **-d**\ " is specified, all the MAC addresses of the blade will be displayed. If no option is specified, the first MAC address of the blade will be written to the mac table.
*******
OPTIONS
*******
\ **--arp**\
Read the MAC address with the ARP protocol.
\ **-C**\
Specify the IP address of the partition for ping test. The default is to read from xCAT database if no \ **-C**\ specified.
\ **-d**\
Display MAC only. The default is to write the first valid adapter MAC to the xCAT database.
\ **-D**\
Perform discovery for the MAC address. By default, it will run a ping test to check the connection between the adapter and the xCAT management node. Using '--noping' can skip the ping test to save time. Please be aware that in this way, the lpars will be reset.
\ **-f**\
Force immediate shutdown of the partition. This flag must be used with the -D flag.
\ **-F**\
Specify filters to select the correct adapter. Acceptable filters are Type, MAC_Address, Phys_Port_Loc, Adapter, Port_Group, Phys_Port, Logical_Port, VLan, VSwitch, Curr_Conn_Speed.
\ **-G**\
Gateway IP address of the partition. The default is to read from xCAT database if no \ **-G**\ specified.
\ **-h**\
Display usage message.
\ **-M**\
Return multiple MAC addresses for the same adapter or port, if available from the hardware. For some network adapters (e.g. HFI) the MAC can change when there are some recoverable internal errors. In this case, the hardware can return several MACs that the adapter can potentially have, so that xCAT can put all of them in DHCP. This allows successful booting, even after a MAC change, but on Linux at this time, it can also cause duplicate IP addresses, so it is currently not recommended on Linux. By default (without this flag), only a single MAC address is returned for each adapter.
\ **--noping**\
Can only be used with '-D' to display all the available adapters with their MAC addresses but NOT run the ping test.
\ **-o**\
Read the MAC address when the lpar is in the openfirmware state. This option must be used with the [\ **-D**\ ] option to perform the ping test. Before using \ **-o**\ , the lpar must be in the openfirmware state.
\ **-S**\
The IP address of the machine to ping. The default is to read from the xCAT database if no \ **-S**\  is specified.
\ **-v**\
Command Version.
\ **-V**\
Verbose output.
\ **-i**\
Specify the interface whose MAC address will be collected and written into the mac table. If 4 MAC addresses are returned by the '-d' option, they are all MAC addresses of the blade. N can range from 0 (mapping to eth0 of the blade) to 3. If 5 MAC addresses are returned, the first MAC address must be that of the blade's FSP, so N will range from 1 (mapping to eth0 of the blade) to 4.
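For example, to collect the MAC address of a specific blade interface and write it to the mac table (a sketch; the node name blade01 is an assumption):

.. code-block:: perl

 getmacs blade01 -i eth1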
************
RETURN VALUE
************
.. code-block:: perl
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
1. To retrieve the MAC address for the HMC-managed partition lpar4 and write the first valid adapter MAC to the xCAT database, enter:
.. code-block:: perl
getmacs lpar4
Output is similar to:
.. code-block:: perl
lpar4:
#Type MAC_Address Phys_Port_Loc Adapter Port_Group Phys_Port Logical_Port VLan VSwitch Curr_Conn_Speed
hea 7607DFB07F02 N/A N/A N/A N/A N/A 1 ETHERNET0 N/A
ent U78A1.001.99203B5-P1-T6 00145eb55788 /lhea@23c00614/ethernet@23e00514 unsuccessful physical
2. To retrieve the MAC address with the ARP protocol:
.. code-block:: perl
getmacs lpar4 --arp
Output is similar to:
.. code-block:: perl
lpar4:
#IP MAC_Address
192.168.0.10 00145eb55788
3. To retrieve the MAC address for the HMC-managed partition lpar4 and display the result only, enter:
.. code-block:: perl
getmacs lpar4 -d
Output is similar to:
.. code-block:: perl
lpar4:
#Type MAC_Address Phys_Port_Loc Adapter Port_Group Phys_Port Logical_Port VLan VSwitch Curr_Conn_Speed
hea 7607DFB07F02 N/A N/A N/A N/A N/A 1 ETHERNET0 N/A
ent U78A1.001.99203B5-P1-T6 00145eb55788 /lhea@23c00614/ethernet@23e00514 unsuccessful physical
4. To retrieve the MAC address for the HMC-managed partition lpar4 with filter Type=hea,VSwitch=ETHERNET0.
.. code-block:: perl
getmacs lpar4 -F Type=hea,VSwitch=ETHERNET0
Output is similar to:
.. code-block:: perl
lpar4:
#Type MAC_Address Phys_Port_Loc Adapter Port_Group Phys_Port Logical_Port VLan VSwitch Curr_Conn_Speed
hea 7607DFB07F02 N/A N/A N/A N/A N/A 1 ETHERNET0 N/A
5. To retrieve the MAC address while performing a ping test for the HMC-managed partition lpar4 and display the result only, enter:
.. code-block:: perl
getmacs lpar4 -d -D -S 9.3.6.49 -G 9.3.6.1 -C 9.3.6.234
Output is similar to:
.. code-block:: perl
lpar4:
#Type Location Code MAC Address Full Path Name Ping Result
ent U9133.55A.10B7D1G-V12-C4-T1 8e:e2:24:5c:f0:04 /vdevice/l-lan@30000004 successful virtual
6. To retrieve the MAC address for Power 775 LPAR using Direct FSP Management without ping test and display the result only, enter:
.. code-block:: perl
getmacs lpar4 -d
Output is similar to:
.. code-block:: perl
lpar4:
#Type Phys_Port_Loc MAC_Address Adapter Port_Group Phys_Port Logical_Port VLan VSwitch Curr_Conn_Speed
HFI N/A 02:00:02:00:00:04 N/A N/A N/A N/A N/A N/A N/A
7. To retrieve multiple MAC addresses from Power 775 HFI network adapter using Direct FSP Management, enter:
.. code-block:: perl
getmacs lpar4 -M
Output is similar to:
.. code-block:: perl
lpar4:
#Type Phys_Port_Loc MAC_Address Adapter Port_Group Phys_Port Logical_Port VLan VSwitch Curr_Conn_Speed
HFI N/A 02:00:02:00:00:04|02:00:02:00:00:05|02:00:02:00:00:06 N/A N/A N/A N/A N/A N/A N/A
8. To retrieve the MAC address for Power Lpar by '-D' but without ping test.
.. code-block:: perl
getmacs lpar4 -D --noping
Output is similar to:
.. code-block:: perl
lpar4:
# Type Location Code MAC Address Full Path Name Device Type
ent U8233.E8B.103A4DP-V3-C3-T1 da:08:4c:4d:d5:03 /vdevice/l-lan@30000003 virtual
ent U8233.E8B.103A4DP-V3-C4-T1 da:08:4c:4d:d5:04 /vdevice/l-lan@30000004 virtual
ent U78A0.001.DNWHYT2-P1-C6-T1 00:21:5e:a9:50:42 /lhea@200000000000000/ethernet@200000000000003 physical
*****
FILES
*****
/opt/xcat/bin/getmacs
********
SEE ALSO
********
makedhcp(8)|makedhcp.8

############
getslnodes.1
############
.. highlight:: perl
****
NAME
****
\ **getslnodes**\ - queries your SoftLayer account and gets attributes for each server.
********
SYNOPSIS
********
\ **getslnodes**\ [\ **-v**\ |\ **--verbose**\ ] [\ *hostname-match*\ ]
\ **getslnodes**\ [\ **-?**\ | \ **-h**\ | \ **--help**\ ]
***********
DESCRIPTION
***********
The \ **getslnodes**\ command queries your SoftLayer account and gets attributes for each
server. The attributes can be piped to 'mkdef -z' to define the nodes
in the xCAT DB so that xCAT can manage them.
Before using this command, you must download and install the SoftLayer API perl module.
For example:
.. code-block:: perl
cd /usr/local/lib
git clone https://github.com/softlayer/softlayer-api-perl-client.git
You also need to follow these directions to get your SoftLayer API key: http://knowledgelayer.softlayer.com/procedure/retrieve-your-api-key
\ **getslnodes**\ requires a .slconfig file in your home directory that contains your
SoftLayer userid, API key, and location of the SoftLayer API perl module, in attr=val format.
For example:
.. code-block:: perl
# Config file used by the xcat cmd getslnodes
userid = joe_smith
apikey = 1234567890abcdef1234567890abcdef1234567890abcdef
apidir = /usr/local/lib/softlayer-api-perl-client
*******
OPTIONS
*******
\ **-?|-h|--help**\
Display usage message.
\ **-v|--version**\
Command Version.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
1.
Display information about all of the nodes in your SoftLayer account:
.. code-block:: perl
getslnodes
2.
Display information about all of the nodes whose hostname starts with foo:
.. code-block:: perl
getslnodes foo
3.
Create xCAT node definitions in the xCAT DB for all of the nodes in your SoftLayer account:
.. code-block:: perl
getslnodes | mkdef -z
*****
FILES
*****
/opt/xcat/bin/getslnodes
********
SEE ALSO
********
pushinitrd(1)|pushinitrd.1

########
gettab.1
########
.. highlight:: perl
****
NAME
****
\ **gettab**\ - select table rows, based on attribute criteria, and display specific attributes.
********
SYNOPSIS
********
\ **gettab**\ [\ **-H**\ | \ **--with-fieldname**\ ] \ *key=value,... table.attribute ...*\
\ **gettab**\ [\ **-?**\ | \ **-h**\ | \ **--help**\ ]
***********
DESCRIPTION
***********
The \ **gettab**\ command uses the specified key values to select a row in each of the tables requested.
For each selected row, the specified attributes are displayed. The \ **gettab**\ command can be used instead
of \ **nodels**\ for tables that are not keyed by nodename (e.g. the \ **site**\ table), or to select rows based
on an attribute value other than nodename.
*******
OPTIONS
*******
\ **-H|--with-fieldname**\
Always display table.attribute name next to result. By default, this is done only if more than
one table.attribute is requested.
\ **-?|-h|--help**\
Display usage message.
************
RETURN VALUE
************
0
The command completed successfully.
1
An error has occurred.
********
EXAMPLES
********
\*
To display setting for \ **master**\ (management node) in the site table:
\ **gettab -H**\ \ *key=master site.value*\
The output would be similar to:
.. code-block:: perl
site.value: mgmtnode.cluster.com
\*
To display the first node or group name that has \ **mgt**\ set to \ **blade**\ in the nodehm table:
\ **gettab**\ \ *mgt=blade nodehm.node*\
The output would be similar to:
.. code-block:: perl
blades
*****
FILES
*****
/opt/xcat/bin/gettab
********
SEE ALSO
********
nodels(1)|nodels.1, chtab(8)|chtab.8, tabdump(8)|tabdump.8

#############
getxcatdocs.1
#############
.. highlight:: perl
****
NAME
****
\ **getxcatdocs**\ - downloads the xCAT documentation and converts to HTML and PDF
********
SYNOPSIS
********
\ **getxcatdocs**\ [\ **-?**\ | \ **-h**\ | \ **--help**\ ]
\ **getxcatdocs**\ [\ **-v**\ | \ **--verbose**\ ] [\ *destination-dir*\ ]
\ **getxcatdocs**\ [\ **-v**\ | \ **--verbose**\ ] [\ **-c**\ | \ **--continue**\ ] [\ **-d**\ | \ **--doc**\ \ *single_doc*\ ] [\ *destination-dir*\ ]
***********
DESCRIPTION
***********
The \ **getxcatdocs**\ command downloads the xCAT documentation from the wiki and converts it to both HTML and PDF.
This enables reading the documentation when you do not have internet access. Note that this command does not
download/convert the entire xCAT wiki - only the "official" xCAT documentation linked from http://sourceforge.net/p/xcat/wiki/XCAT_Documentation.
If \ *destination-dir*\ is specified, \ **getxcatdocs**\ will put the converted documentation in that directory, in 3 sub-directories: html, pdf, images.
Otherwise, it will put it in the current directory (in the same three sub-directories).
If \ **--doc**\ \ *single_doc*\ is specified, only that one wiki page will be downloaded and converted.
\ **getxcatdocs**\  uses curl to run the Allura wiki API to download the document markdown text, and Pandoc with LaTeX to convert it to HTML and PDF. You must have all of these tools installed to run \ **getxcatdocs**\ . See:
http://sourceforge.net/p/xcat/wiki/Editing_and_Downloading_xCAT_Documentation/#converting-wiki-pages-to-html-and-pdfs
Limitations:
============
\*
This command does not run on AIX or Windows.
*******
OPTIONS
*******
\ **-?|-h|--help**\
Display usage message.
\ **-v|--verbose**\
Run the command in verbose mode.
\ **-c|--continue**\
If a previous run of this command failed (which often happens if you lose your network connection), continue processing using files already downloaded to your markdown directory.
\ **-d|--doc**\ \ *single_doc*\
Run this command for a single document only. If you get errors about Official-xcat-doc.png not found, either download this image directly from http://sourceforge.net/p/xcat/wiki/XCAT_Documentation/attachment/Official-xcat-doc.png or run \ **getxcatdocs -d XCAT_Documentation**\ first.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
1.
To download/convert the documentation and put it in ~/tmp:
.. code-block:: perl
getxcatdocs ~/tmp
*****
FILES
*****
/opt/xcat/bin/getxcatdocs

################
groupfiles4dsh.1
################
.. highlight:: perl
****
NAME
****
\ **groupfiles4dsh**\ - Builds a directory of files for each defined nodegroup in xCAT.
********
SYNOPSIS
********
\ **groupfiles4dsh**\ [{\ **-p**\ |\ **--path**\ } \ *path*\ ]
\ **groupfiles4dsh**\ [\ **-h**\ |\ **--help**\ ] [\ **-v**\ |\ **--version**\ ]
***********
DESCRIPTION
***********
This tool will build a directory of files, one for each defined
nodegroup in xCAT. The file will be named the nodegroup name and
contain a list of nodes that belong to the nodegroup.
The file can be used as input to the AIX dsh command.
The purpose of this tool is to allow backward compatibility with scripts
that were created using the AIX or CSM dsh command.
Reference: man dsh.
*******
OPTIONS
*******
\ **-h**\ Display usage message.
\ **-v**\ Command Version.
\ **-p**\ Path to the directory to create the nodegroup files (must exist).
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
1. To create the nodegroup files in directory /tmp/nodegroupfiles, enter:
\ **groupfiles4dsh -p /tmp/nodegroupfiles**\
To use with dsh:
.. code-block:: perl
export DSH_CONTEXT=DSH ( default unless CSM is installed)
export DSH_NODE_RSH=/bin/ssh (default is rsh)
export DSH_NODEGROUP_PATH= /tmp/nodegroupfiles
dsh -N all date (where all is a group defined in xCAT)
dsh -a date (will look in all nodegroupfiles and build a list of all nodes)
*****
FILES
*****
/opt/xcat/share/xcat/tools/groupfiles4dsh
********
SEE ALSO
********
xdsh(1)|xdsh.1

############
imgcapture.1
############
.. highlight:: perl
****
NAME
****
\ **imgcapture**\  - Captures an image from a Linux diskful node and creates a diskless or diskful image on the management node.
********
SYNOPSIS
********
\ **imgcapture**\ node \ **-t**\ |\ **--type**\ {diskless|sysclone} \ **-o**\ |\ **--osimage**\ \ *osimage*\ [\ **-V**\ |\ **--verbose**\ ]
\ **imgcapture**\ [\ **-h**\ | \ **--help**\ ] | [\ **-v**\ | \ **--version**\ ]
***********
DESCRIPTION
***********
The \ **imgcapture**\ command will capture an image from one running diskful Linux node and create a diskless or diskful image for later use.
The \ **node**\ should be one diskful Linux node, managed by the xCAT MN, and the remote shell between MN and the \ **node**\ should have been configured. AIX is not supported.
The \ **imgcapture**\  command supports two image types: \ **diskless**\  and \ **sysclone**\ . For the \ **diskless**\  type, it will capture an image from one running diskful Linux node, and prepare the rootimg directory, kernel and initial ramdisks for the \ **liteimg**\ /\ **packimage**\  command to generate the statelite/stateless rootimg. For the \ **sysclone**\  type, it will capture an image from one running diskful Linux node and create an osimage which can be used to clone other diskful Linux nodes.
The \ **diskless**\ type:
The attributes of osimage will be used to capture and prepare the root image. The \ **osver**\ , \ **arch**\ and \ **profile**\ attributes for the stateless/statelite image to be created are duplicated from the \ **node**\ 's attribute. If the \ **-p|--profile**\ \ *profile*\ option is specified, the image will be created under "/<\ *installroot*\ >/netboot/<osver>/<arch>/<\ *profile*\ >/rootimg".
The default files/directories excluded in the image are specified by /opt/xcat/share/xcat/netboot/<os>/<\ *profile*\ >.<osver>.<arch>.imgcapture.exlist; also, you can put your customized file (<\ *profile*\ >.<osver>.<arch>.imgcapture.exlist) to /install/custom/netboot/<osplatform>. The directories in the default \ *.imgcapture.exlist*\ file are necessary to capture image from the diskful Linux node managed by xCAT, please don't remove it.
The image captured will be extracted into the /<\ *installroot*\ >/netboot/<\ **osver**\ >/<\ **arch**\ >/<\ **profile**\ >/rootimg directory.
After the \ **imgcapture**\ command returns without any errors, you can customize the rootimg and run the \ **liteimg**\ /\ **packimage**\ command with the options you want.
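For instance, capturing a node and then packing the resulting stateless image might look like this sketch (the osimage name is an assumption):

.. code-block:: perl

 imgcapture node1 -t diskless -o rhels7.2-x86_64-netboot-compute
 packimage rhels7.2-x86_64-netboot-compute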
The \ **sysclone**\ type:
xCAT leverages the Open Source Tool - Systemimager to capture the osimage from the \ **node**\ , and put it into /<\ *installroot*\ >/\ **sysclone**\ /\ **images**\ directory.
The \ **imgcapture**\ command will create the \ *osimage*\ definition after the image is captured successfully, you can use this osimage and \ **nodeset**\ command to clone diskful nodes.
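A sketch of the subsequent cloning step (node2 as the clone target is an assumption):

.. code-block:: perl

 imgcapture node1 -t sysclone -o img1
 nodeset node2 osimage=img1
 rpower node2 boot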
*******
OPTIONS
*******
\ **-t**\ |\ **--type**\
Specify the osimage type you want to capture, two types are supported: diskless and sysclone.
\ **-p|--profile**\ \ *profile*\
Assign \ *profile*\ as the profile of the image to be created.
\ **-o|--osimage**\ \ *osimage*\
The osimage name.
\ **-i**\ \ *nodebootif*\
The network interface the diskless node will boot over (e.g. eth0), which is used by the \ **genimage**\ command to generate initial ramdisks.
This is optional.
\ **-n**\ \ *nodenetdrivers*\
The driver modules needed for the network interface, which is used by the \ **genimage**\ command to generate initial ramdisks.
This is optional. By default, the \ **genimage**\ command can provide drivers for the following network interfaces:
For x86 or x86_64 platform:
.. code-block:: perl
tg3 bnx2 bnx2x e1000 e1000e igb m1x_en
For ppc64 platform:
.. code-block:: perl
e1000 e1000e igb ibmveth ehea
For S390x:
.. code-block:: perl
qdio ccwgroup
If the network interface is not in the above list, you should specify the driver modules with this option.
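For example (the node, osimage, interface and driver module names here are only illustrative), the boot interface and extra drivers could be passed as:
.. code-block:: perl
  imgcapture cn1 -t diskless -o rhels7-x86_64-netboot-compute -i eth0 -n ixgbe,igbvf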
\ **-h|--help**\
Display the usage message.
\ **-v|--version**\
Display the version.
\ **-V|--verbose**\
Verbose output.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
\ **node1**\ is one diskful Linux node, which is managed by xCAT.
1. There's one pre-defined \ *osimage*\ . In order to capture and prepare the diskless root image for \ *osimage*\ , run the command:
imgcapture node1 -t diskless -o osimage
2. In order to capture the diskful image from \ **node1**\ and create the \ *osimage*\ \ **img1**\ , run the command:
imgcapture node1 -t sysclone -o img1
*****
FILES
*****
/opt/xcat/bin/imgcapture
********
SEE ALSO
********
genimage(1)|genimage.1, imgimport(1)|imgimport.1, imgexport(1)|imgexport.1, packimage(1)|packimage.1, liteimg(1)|liteimg.1, nodeset(8)|nodeset.8

View File

@ -0,0 +1,135 @@
###########
imgexport.1
###########
.. highlight:: perl
****
NAME
****
\ **imgexport**\ - Exports an xCAT image.
********
SYNOPSIS
********
\ **imgexport [-h| --help]**\
\ **imgexport image_name [destination] [[-e|--extra file:dir] ... ] [-p|--postscripts node_name] [-v|--verbose]**\
***********
DESCRIPTION
***********
The imgexport command will export an image that is being used by xCAT. To export images, you must have the images defined in the \ *osimage*\ table. All the columns in the \ *osimage*\ and \ *linuximage*\ tables will be exported. If kits are used in stateful or stateless images, the \ *kit*\ , \ *kitcomponent*\ and \ *kitrepo*\ tables will be exported. In addition, the following files will also be exported.
For stateful:
x.pkglist
x.otherpkgs.pkglist
x.tmpl
x.synclist
kits related files
For stateless:
kernel
initrd.gz
rootimg.gz
x.pkglist
x.otherpkgs.pkglist
x.synclist
x.postinstall
x.exlist
kits related files
For statelite:
kernel
initrd.gz
root image tree
x.pkglist
x.synclist
x.otherpkgs.pkglist
x.postinstall
x.exlist
where x is the name of the profile.
Any files specified by the -e flag will also be exported. If the -p flag is specified, the names of the postscripts and the postbootscripts for the given node will be exported. The postscripts themselves need to be manually exported using the -e flag.
For statelite, the litefile table settings for the image will also be exported. The litetree and statelite tables are not exported.
*******
OPTIONS
*******
\ **-e|--extra**\ \ *srcfile:destdir*\ Pack up extra files. If \ *destdir*\ is omitted, the destination directory will be the same as the source directory.
\ **-h|--help**\ Display usage message.
\ **-p|--postscripts**\ \ *node_name*\ Get the names of the postscripts and postbootscripts for the given node and pack them into the image.
\ **-v|--verbose**\ Verbose output.
\ *image_name*\ The name of the image. Use \ *lsdef -t*\ osimage to find out all the image names.
\ *destination*\ The output bundle file name.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
1. Simplest way to export an image. If there is an image in the osimage table named 'foo', then run:
\ **imgexport foo**\
foo.tgz will be built in the current working directory. Make sure that you have enough space in the directory that you are in to run imgexport if you have a big image to tar up.
2. To include extra files with your image:
\ **imgexport Default_Stateless_1265981465 foo.tgz -e /install/postscripts/myscript1 -e /tmp/mydir:/usr/mydir**\
In addition to all the default files, this will export \ */install/postscripts/myscript1*\ and the whole directory \ */tmp/mydir*\ into the file called foo.tgz. When imgimport is called, \ */install/postscripts/myscript1*\ will be copied into the same directory and \ */tmp/mydir*\ will be copied to \ */usr/mydir*\ .
3. To include postscript with your image:
\ **imgexport Default_Stateless_1265981465 foo.tgz -p node1 -e /install/postscripts/myscript1**\
The \ *postscripts*\ and the \ *postbootscripts*\ names specified in the \ *postscripts*\ table for node1 will be exported into the image. The postscript \ *myscript1*\ will also be exported.
*****
FILES
*****
/opt/xcat/bin/imgexport
********
SEE ALSO
********
imgimport(1)|imgimport.1

View File

@ -0,0 +1,146 @@
###########
imgimport.1
###########
.. highlight:: perl
****
NAME
****
\ **imgimport**\ - Imports an xCAT image or configuration file into the xCAT tables so that you can immediately begin deploying with it.
********
SYNOPSIS
********
\ **imgimport [-h|--help]**\
\ **imgimport**\ bundle_file_name [-p|--postscripts nodelist] [-f|--profile new_profile] [-v|--verbose]>
***********
DESCRIPTION
***********
The imgimport command will import an image that has been exported by \ *imgexport*\ from xCAT. This is the easiest way to transfer, back up, change or share images created by xCAT, whether they are stateless or stateful. The bundle file will be unpacked in the current working directory. The xCAT configuration such as the \ *osimage*\ and \ *linuximage*\ tables will then be updated.
For stateful, the following files will be copied to the appropriate directories.
x.pkglist
x.otherpkgs.pkglist
x.tmpl
x.synclist
kits related files
For stateless, the following files will be copied to the appropriate directories.
kernel
initrd.gz
rootimg.gz
x.pkglist
x.otherpkgs.pkglist
x.synclist
x.postinstall
x.exlist
kits related files
For statelite, the following files will be copied to the appropriate directories.
kernel
initrd.gz
root image tree
x.pkglist
x.synclist
x.otherpkgs.pkglist
x.postinstall
x.exlist
where x is the profile name.
Any extra files, included by --extra flag in the imgexport command, will also be copied to the appropriate directories.
For statelite, the litefile table will be updated for the image. The litetree and statelite tables are not imported.
If -p flag is specified, the \ *postscripts*\ table will be updated with the postscripts and the postbootscripts names from the image for the nodes given by this flag.
If the -f flag is not specified, all the files will be copied to the same directories as the source. If it is specified, the old profile name x will be changed to the new one and the files will be copied to the appropriate directories for the new profile. For example, \ */opt/xcat/share/xcat/netboot/sles/x.pkglist*\ will be copied to \ */install/custom/netboot/sles/compute_new.pkglist*\ and \ */install/netboot/sles11/ppc64/x/kernel*\ will be copied to \ */install/netboot/sles11/ppc64/compute_new/kernel*\ . This flag is commonly used when you want to copy the image on the same xCAT MN so you can make modifications to the new one.
After this command, you can run the \ *nodeset*\ command and then start deploying the nodes. You can also choose to modify the files and run the following commands before the node deployment, as sketched after these lists.
For stateful:
nodeset
For stateless:
genimage
packimage
nodeset
For statelite
genimage
liteimg
nodeset
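As an illustrative sketch (the bundle, osimage and node group names are placeholders), a stateless image imported from a bundle could be rebuilt and deployed like this:
.. code-block:: perl
  imgimport foo.gz
  genimage rhels7-x86_64-netboot-compute
  packimage rhels7-x86_64-netboot-compute
  nodeset compute osimage=rhels7-x86_64-netboot-compute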
*******
OPTIONS
*******
\ **-f|--profile**\ \ *new_prof*\ Import the image with a new profile name.
\ **-h|--help**\ Display usage message.
\ **-p|--postscripts**\ \ *nodelist*\ Import the postscripts. The postscripts contained in the image will be set in the postscripts table for \ *nodelist*\ .
\ **-v|--verbose**\ Verbose output.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
1. Simplest way to import an image. If there is a bundle file named 'foo.gz', then run:
\ *imgimport foo.gz*\
2. Import the image with postscript names.
\ *imgimport foo.gz -p node1,node2*\
The \ *postscripts*\ table will be updated with the name of the \ *postscripts*\ and the \ *postbootscripts*\ for node1 and node2.
3. Import the image with a new profile name
\ *imgimport foo.gz -f compute_test*\
*****
FILES
*****
/opt/xcat/bin/imgimport
********
SEE ALSO
********
imgexport(1)|imgexport.1

View File

@ -0,0 +1,121 @@
#########
liteimg.1
#########
.. highlight:: perl
****
NAME
****
\ **liteimg**\ - Modify statelite image by creating a series of links.
********
SYNOPSIS
********
\ *liteimg [-h| --help]*\
\ *liteimg [-v| --version]*\
\ *liteimg imagename*\
***********
DESCRIPTION
***********
This command modifies the statelite image by creating a series of links.
It creates 2 levels of indirection so that files can be modified while in
their image state as well as during runtime. For example, a file like
<$imgroot>/etc/ntp.conf will have the following operations done to it:
\ * mkdir -p $imgroot/.default/etc*\
\ * mkdir -p $imgroot/.statelite/tmpfs/etc*\
\ * mv $imgroot/etc/ntp.conf $imgroot/.default/etc*\
\ * cd $imgroot/.statelite/tmpfs/etc*\
\ * ln -sf ../../../.default/etc/ntp.conf .*\
\ * cd $imgroot/etc*\
\ * ln -sf ../.statelite/tmpfs/etc/ntp.conf .*\
When finished, the original file will reside in
\ *$imgroot/.default/etc/ntp.conf*\ . \ *$imgroot/etc/ntp.conf*\ will link to
\ *$imgroot/.statelite/tmpfs/etc/ntp.conf*\ which will in turn link to
\ *$imgroot/.default/etc/ntp.conf*\
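For illustration (a hedged sketch, assuming \ *$imgroot*\ points at the root of the statelite image), the resulting link chain can be inspected with readlink:
.. code-block:: perl
  readlink $imgroot/etc/ntp.conf
  # ../.statelite/tmpfs/etc/ntp.conf
  readlink $imgroot/.statelite/tmpfs/etc/ntp.conf
  # ../../../.default/etc/ntp.conf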
Note: If you make any changes to your litefile table after running liteimg then you will need to rerun liteimg again.
**********
Parameters
**********
\ *imagename*\ specifies the name of an OS image definition to be used. The specification for the image is stored in the \ *osimage*\ table and the \ *linuximage*\ table.
*******
OPTIONS
*******
\ **-h**\ Display usage message.
\ **-v**\ Command Version.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
1. To lite a RHEL 6.6 statelite image for a compute node with architecture x86_64, enter:
\ *liteimg rhels6.6-x86_64-statelite-compute*\
*****
FILES
*****
/opt/xcat/bin/
*****
NOTES
*****
This command is part of the xCAT software product.
********
SEE ALSO
********
genimage(1)|genimage.1

View File

@ -0,0 +1,402 @@
#######
lsdef.1
#######
.. highlight:: perl
****
NAME
****
\ **lsdef**\ - Use this command to list xCAT data object definitions.
********
SYNOPSIS
********
\ **lsdef**\ [\ **-h**\ |\ **--help**\ ] [\ **-t**\ \ *object-types*\ ] [\ **-i**\ \ *attr-list*\ ]
\ **lsdef**\ [\ **-V**\ |\ **--verbose**\ ] [\ **-l**\ |\ **--long**\ ] [\ **-s**\ |\ **--short**\ ] [\ **-a**\ |\ **--all**\ ] [\ **-S**\ ]
[\ **-t**\ \ *object-types*\ ] [\ **-o**\ \ *object-names*\ ] [\ **-z**\ |\ **--stanza**\ ] [\ **-i**\ \ *attr-list*\ ]
[\ **-c**\ |\ **--compress**\ ] [\ **--osimage**\ ] [\ **--nics**\ ] [[\ **-w**\ \ *attr*\ ==\ *val*\ ]
[\ **-w**\ \ *attr*\ =~\ *val*\ ] ...] [\ *noderange*\ ]
***********
DESCRIPTION
***********
This command is used to display xCAT object definitions which are stored
in the xCAT database.
*******
OPTIONS
*******
\ **-a|--all**\
Display all definitions.
For performance consideration, the auditlog and eventlog objects will not be listed.
To list auditlog or eventlog objects, use lsdef -t auditlog or lsdef -t eventlog instead.
\ **-c|--compress**\
Display information in compressed mode, each output line has format "<object name>: <data>".
The output can be passed to command xcoll or xdshbak for formatted output.
The -c flag must be used with -i flag.
\ **-h|--help**\
Display usage message.
\ **-i**\ \ *attr-list*\
Comma separated list of attribute names to display.
\ **-l|--long**\
List the complete object definition.
\ **-s|--short**\
Only list the object names.
\ **-S**\
List all the hidden nodes (FSP/BPA nodes) along with the other nodes.
\ *noderange*\
A set of comma delimited node names and/or group names.
See the "noderange" man page for details on supported formats.
\ **-o**\ \ *object-names*\
A set of comma delimited object names.
\ **--osimage**\
Show all the osimage information for the node.
\ **--nics**\
Show the nics configuration information for the node.
\ **-t**\ \ *object-types*\
A set of comma delimited object types. Use the help option to get a list of valid objects.
\ **-V|--verbose**\
Verbose mode.
\ **-w**\ \ *attr==val*\ \ **-w**\ \ *attr=~val*\ ...
Use one or multiple -w flags to specify the selection string that can be used to select objects. The operators ==, !=, =~ and !~ are available. Use the help option to get a list of valid attributes for each object type.
Operator descriptions:
== Select nodes where the attribute value is exactly this value.
!= Select nodes where the attribute value is not this specific value.
=~ Select nodes where the attribute value matches this regular expression.
!~ Select nodes where the attribute value does not match this regular expression.
Note: if the "val" fields includes spaces or any other characters that will be parsed by shell, the "attr<operator>val" needs to be quoted. If the operator is "!~", the "attr<operator>val" needs to be quoted using single quote.
\ **-z|--stanza**\
Display output in stanza format. See the xcatstanzafile man page for details on using xCAT stanza files.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
1.
To display a description of all the valid attributes that could be used
when defining an xCAT node.
.. code-block:: perl
lsdef -t node -h
2.
To get a list of all the objects that have been defined.
.. code-block:: perl
lsdef
OR
lsdef -a
3.
To get all the attributes of the node1
.. code-block:: perl
lsdef node1
OR
lsdef -t node node1
OR
lsdef -t node -o node1
4.
To get the object name of node1 instead of all the attributes
.. code-block:: perl
lsdef -s node1
5.
To get a list of all the network definitions.
.. code-block:: perl
lsdef -t network
6.
To get a complete listing of all network definitions.
.. code-block:: perl
lsdef -l -t network
7.
To list the whole xCAT database and write it to a stanza file. (backup database)
.. code-block:: perl
lsdef -a -l -z > mydbstanzafile
8.
To list the MAC and install adapter name for each node.
.. code-block:: perl
lsdef -t node -i mac,installnic
9.
To list an osimage definition named "aix53J".
.. code-block:: perl
lsdef -t osimage -l -o aix53J
10.
To list all node definitions that have a status value of "booting".
.. code-block:: perl
lsdef -t node -w status==booting
11.
To list all the attributes of the group "service".
.. code-block:: perl
lsdef -l -t group -o service
12.
To list all the attributes of the nodes that are members of the group "service".
.. code-block:: perl
lsdef -t node -l service
13.
To get a listing of object definitions that includes information about
what xCAT database tables are used to store the data.
.. code-block:: perl
lsdef -V -l -t node -o node01
14.
To list the hidden nodes that can't be seen with other flags.
The hidden nodes are FSP/BPAs.
.. code-block:: perl
lsdef -S
15.
To list the nodes status and use xcoll to format the output.
.. code-block:: perl
lsdef -t node -i status -c | xcoll
16.
To display the description for some specific attributes that could be used
when defining an xCAT node.
.. code-block:: perl
lsdef -t node -h -i profile,pprofile
17.
To display the nics configuration information for node cn1.
.. code-block:: perl
lsdef cn1 --nics
*****
FILES
*****
/opt/xcat/bin/lsdef
*****
NOTES
*****
This command is part of the xCAT software product.
********
SEE ALSO
********
mkdef(1)|mkdef.1, chdef(1)|chdef.1, rmdef(1)|rmdef.1, xcatstanzafile(5)|xcatstanzafile.5

View File

@ -0,0 +1,265 @@
############
lsflexnode.1
############
.. highlight:: perl
****
NAME
****
\ **lsflexnode**\ - Display the information of flexible node
********
SYNOPSIS
********
\ **lsflexnode**\ [-h | --help]
\ **lsflexnode**\ [-v | --version]
\ **lsflexnode**\ \ *noderange*\
***********
DESCRIPTION
***********
The IBM BladeCenter HX5 offers the flexibility to combine multiple blades together for scalability.
There are several concepts to support the HX5 multiple-blade combination:
\ **Complex**\ : Multiple blades combined by a scalability card form a complex.
\ **Partition**\ : A logical concept containing some of the \ **Blade slot nodes**\ in a complex. Each partition maps to a system on which an operating system can be installed. Each partition can have 1 HX5, 1 HX5 + 1 MD or 2 HX5 + 2 MD. (MD is the Memory Drawer.)
\ **Blade slot node**\ : The physical blade installed in a slot of a chassis. It can be an HX5 or an MD.
A \ **Complex**\ is created automatically when a multiple-blade combination is installed. Every blade belonging to the \ **Complex**\ works as a \ **Blade slot node**\ .
A \ **Partition**\ can be created based on the \ **Complex**\ ; each \ **Partition**\ can have one or multiple \ **Blade slot nodes**\ .
The \ *noderange*\ in the \ **SYNOPSIS**\ can be an AMM node or a blade node.
*******
OPTIONS
*******
\ **-h | --help**\
Display the usage message.
\ **-v | --version**\
Display the version information.
**********
ATTRIBUTES
**********
The meaning of the attributes displayed by \ **lsflexnode**\ . The word 'node' in this section means \ **Blade slot node**\ .
\ **Complex**\
The unique numeric identifier for a complex installed in the chassis.
\ **Partition number**\
The number of partitions currently defined for this complex.
\ **Complex node number**\
The number of nodes existing in this complex, regardless of their assignment to any given partition.
\ **Partition**\
The unique numeric identifier for a partition defined within a complex installed in the chassis.
\ **Partition Mode**\
The currently configured mode of this partition. It can be 'partition' or 'standalone'.
\ **Partition node number**\
The number of nodes currently defined for this partition.
\ **Partition status**\
The current power status of this partition when the partition has a valid partition configuration. It can be 'poweredoff', 'poweredon', 'resetting' or 'invalid'.
\ **Node**\
The unique numeric identifier for this node, unique within the partition. If this node does not belong to a partition, the slot number will be displayed.
\ **Node state**\
The physical power state of this node. It can be 'poweredoff', 'poweredon' or 'resetting'.
\ **Node slot**\
The base slot number where the node exists in the chassis.
\ **Node resource**\
A string providing a summary overview of the resources provided by this node. It includes the CPU number, CPU frequency and Memory size.
\ **Node type**\
The general categorization of the node. It can be 'processor', 'memory' or 'io'.
\ **Node role**\
Indicates if the node is assigned to a partition, and if so, provides an indication of whether the node is the primary node of the partition or not.
\ **Flexnode state**\
The state of a flexible node. It is the state of the partition which this node belongs to. If this node does NOT belong to a partition, the value should be 'invalid'.
It can be 'poweredoff', 'poweredon', 'resetting' or 'invalid'.
\ **Complex id**\
The identifier of the complex this node belongs to.
\ **Partition id**\
The identifier of the partition this node belongs to.
********
EXAMPLES
********
1
Display all the \ **Complex**\ , \ **Partition**\ and \ **Blade slot node**\ objects which are managed by an AMM.
.. code-block:: perl
lsflexnode amm1
The output:
.. code-block:: perl
amm1: Complex - 24068
amm1: ..Partition number - 1
amm1: ..Complex node number - 2
amm1: ..Partition = 1
amm1: ....Partition Mode - partition
amm1: ....Partition node number - 1
amm1: ....Partition status - poweredoff
amm1: ....Node - 0 (logic id)
amm1: ......Node state - poweredoff
amm1: ......Node slot - 14
amm1: ......Node type - processor
amm1: ......Node resource - 2 (1866 MHz) / 8 (2 GB)
amm1: ......Node role - secondary
amm1: ..Partition = unassigned
amm1: ....Node - 13 (logic id)
amm1: ......Node state - poweredoff
amm1: ......Node slot - 13
amm1: ......Node type - processor
amm1: ......Node resource - 2 (1866 MHz) / 8 (2 GB)
amm1: ......Node role - unassigned
2
Display a flexible node.
.. code-block:: perl
lsflexnode blade1
The output:
.. code-block:: perl
blade1: Flexnode state - poweredoff
blade1: Complex id - 24068
blade1: Partition id - 1
blade1: Slot14: Node state - poweredoff
blade1: Slot14: Node slot - 14
blade1: Slot14: Node type - processor
blade1: Slot14: Node resource - 2 (1866 MHz) / 8 (2 GB)
blade1: Slot14: Node role - secondary
*****
FILES
*****
/opt/xcat/bin/lsflexnode
********
SEE ALSO
********
mkflexnode(1)|mkflexnode.1, rmflexnode(1)|rmflexnode.1

View File

@ -0,0 +1,179 @@
##########
lshwconn.1
##########
.. highlight:: perl
****
NAME
****
\ **lshwconn**\ - Use this command to display the connection status for CEC and Frame nodes.
********
SYNOPSIS
********
\ **lshwconn**\ [\ **-h**\ | \ **--help**\ ]
\ **lshwconn**\ [\ **-v**\ | \ **--version**\ ]
PPC (with HMC) specific:
========================
\ **lshwconn**\ [\ **-V**\ | \ **--verbose**\ ] \ *noderange*\
PPC (without HMC, using FSPAPI) specific:
=========================================
\ **lshwconn**\ \ *noderange*\ \ **-T**\ \ *tooltype*\
***********
DESCRIPTION
***********
This command is used to display the connection status for CEC and Frame nodes.
*******
OPTIONS
*******
\ **-h|--help**\
Display usage message.
\ **-V|--verbose**\
Verbose output.
\ **-T**\
The tooltype used to communicate with the CEC/Frame. The value can be lpar or fnm. The tooltype value lpar is for xCAT and fnm is for CNM.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
1.
To display connection status for all CEC nodes in node group CEC:
.. code-block:: perl
lshwconn cec
Output is similar to:
.. code-block:: perl
cec1: ipaddr=192.168.200.245,alt_ipaddr=unavailable,state=Connected
cec2: Connection not found
2.
To display connection status for Frame node frame1:
.. code-block:: perl
lshwconn frame1
Output is similar to:
.. code-block:: perl
frame1: side=a,ipaddr=192.168.200.247,alt_ipaddr=unavailable,state=Connected
frame1: side=b,ipaddr=192.168.200.248,alt_ipaddr=unavailable,state=Connected
3.
To display connection status for all CEC nodes in node group CEC to the hardware server, using the lpar tooltype:
.. code-block:: perl
lshwconn cec -T lpar
Output is similar to:
.. code-block:: perl
cec1: sp=primary,ipadd=40.3.7.1,alt_ipadd=unavailable,state=LINE UP
cec2: Connection not found
*****
FILES
*****
$XCATROOT/bin/lshwconn
(The XCATROOT environment variable is set when xCAT is installed. The
default value is "/opt/xcat".)
*****
NOTES
*****
This command is part of the xCAT software product.
********
SEE ALSO
********
rmhwconn(1)|rmhwconn.1, mkhwconn(1)|mkhwconn.1

View File

@ -0,0 +1,253 @@
#######
lskit.1
#######
.. highlight:: perl
****
NAME
****
\ **lskit**\ - Lists information for one or more Kits.
********
SYNOPSIS
********
\ **lskit**\ [\ **-V**\ | \ **--verbose**\ ]
[\ **-F**\ | \ **--framework**\ \ *kitattr_names*\ ]
[\ **-x**\ | \ **--xml**\ | \ **--XML**\ ]
[\ **-K**\ | \ **--kitattr**\ \ *kitattr_names*\ ]
[\ **-R**\ | \ **--repoattr**\ \ *repoattr_names*\ ]
[\ **-C**\ | \ **--compattr**\ \ *compattr_names*\ ]
[kit_names]
\ **lskit**\ [\ **-?**\ | \ **-h**\ | \ **--help**\ | \ **-v**\ | \ **--version**\ ]
\ **lskit**\ [\ **-F**\ | \ **--framework**\ \ *kit_path_name*\ ]
***********
DESCRIPTION
***********
The \ **lskit**\ command is used to list information for one or more kits. A kit is a special kind of package that is used to install a software product on one or more nodes in an xCAT cluster.
Note: The xCAT support for Kits is only available for Linux operating systems.
The \ **lskit**\ command outputs the following info for each kit: the kit's basic info, the kit's repositories, and the kit's components. The command outputs the info in two formats: human-readable format (default), and XML format. Use the -x option to view the info in XML format.
Input to the command can specify any number or combination of the input options.
*******
OPTIONS
*******
\ **-F|--framework**\ \ *kit_path_name*\
Use this option to display the framework values of the specified Kit tarfile. This information is retrieved directly from the tarfile and can be obtained before the Kit has been defined in the xCAT database. This option cannot be combined with other options.
\ **-K|--kitattr**\ \ *kitattr_names*\
Where \ *kitattr_names*\ is a comma-delimited list of kit attribute names. The names correspond to attribute names in the \ **kit**\ table. The \ **lskit**\ command will only display the specified kit attributes.
\ **-R|--repoattr**\ \ *repoattr_names*\
Where \ *repoattr_names*\ is a comma-delimited list of kit repository attribute names. The names correspond to attribute names in the \ **kitrepo**\ table. The \ **lskit**\ command will only display the specified kit repository attributes.
\ **-C|--compattr**\ \ *compattr_names*\
where \ *compattr_names*\ is a comma-delimited list of kit component attribute names. The names correspond to attribute names in the \ **kitcomponent**\ table. The \ **lskit**\ command will only display the specified kit component attributes.
\ *kit_names*\
is a comma-delimited list of kit names. The \ **lskit**\ command will only display the kits matching these names.
\ **-x|--xml|--XML**\
The XCATXMLTRACE=1 environment variable is needed when using -x|--xml|--XML, for example: XCATXMLTRACE=1 lskit -x testkit-1.0.0
Return the output with XML tags. The data is returned as:
<data>
<kitinfo>
...
</kitinfo>
</data>
...
<data>
<kitinfo>
...
</kitinfo>
</data>
Each <kitinfo> tag contains info for one kit. The info inside <kitinfo> is structured as follows:
The <kit> sub-tag contains the kit's basic info.
The <kitrepo> sub-tags store info about the kit's repositories.
The <kitcomponent> sub-tags store info about the kit's components.
The data inside <kitinfo> is returned as:
<kitinfo>
<kit>
...
</kit>
.. code-block:: perl
<kitrepo>
...
</kitrepo>
...
<kitcomponent>
...
</kitcomponent>
...
</kitinfo>
\ **-V|--verbose**\
Display additional progress and error messages.
\ **-v|--version**\
Command Version.
\ **-?|-h|--help**\
Display usage message.
************
RETURN VALUE
************
0
The command completed successfully.
1
An error has occurred.
********
EXAMPLES
********
1.
To list all kits, enter:
.. code-block:: perl
lskit
2.
To list the kit "kit-test1-1.0-Linux", enter:
.. code-block:: perl
lskit kit-test1-1.0-Linux
3.
To list the kit "kit-test1-1.0-Linux" for selected attributes, enter:
.. code-block:: perl
lskit -K basename,description -R kitreponame -C kitcompname kit-test1-1.0-Linux
4.
To list the framework value of a Kit tarfile.
.. code-block:: perl
lskit -F /myhome/mykits/pperte-1.3.0.2-0-x86_64.tar.bz2
Extracting the kit.conf file from /myhome/mykits/pperte-1.3.0.2-0-x86_64.tar.bz2. Please wait.
kitframework=2
compatible_kitframeworks=0,1,2
5.
To list kit "testkit-1.0-1" with XML tags, enter:
.. code-block:: perl
XCATXMLTRACE=1 lskit -x testkit-1.0-1
*****
FILES
*****
/opt/xcat/bin/lskit
********
SEE ALSO
********
lskitcomp(1)|lskitcomp.1, lskitdeployparam(1)|lskitdeployparam.1, addkit(1)|addkit.1, rmkit(1)|rmkit.1, addkitcomp(1)|addkitcomp.1, rmkitcomp(1)|rmkitcomp.1

View File

@ -0,0 +1,245 @@
###########
lskitcomp.1
###########
.. highlight:: perl
****
NAME
****
\ **lskitcomp**\ - Used to list information for one or more kit components.
********
SYNOPSIS
********
\ **lskitcomp**\ [\ **-V**\ | \ **--verbose**\ ]
[\ **-x**\ | \ **--xml**\ | \ **--XML**\ ]
[\ **-C**\ | \ **--compattr**\ \ *compattr_names*\ ]
[\ **-O**\ | \ **--osdistro**\ \ *os_distro*\ ]
[\ **-S**\ | \ **--serverrole**\ \ *server_role*\ ]
[kitcomp_names]
\ **lskitcomp**\ [\ **-?**\ | \ **-h**\ | \ **--help**\ | \ **-v**\ | \ **--version**\ ]
***********
DESCRIPTION
***********
The \ **lskitcomp**\ command is used to list information for one or more kit components. A kit is made up of one or more kit components. Each kit component is a meta package used to install a software product component on one or more nodes in an xCAT cluster.
The \ **lskitcomp**\ command outputs the kit component info in two formats: human-readable format (default), and XML format. Use the -x option to view the info in XML format.
Input to the command can specify any number or combination of the input options.
Note: The xCAT support for Kits is only available for Linux operating systems.
*******
OPTIONS
*******
\ **-C|--compattr**\ \ *compattr_names*\
where \ *compattr_names*\ is a comma-delimited list of kit component attribute names. The names correspond to attribute names in the \ **kitcomponent**\ table. The \ **lskitcomp**\ command will only display the specified kit component attributes.
\ **-O|--osdistro**\ \ *os_distro*\
where \ *os_distro*\ is the name of an osdistro in \ **osdistro**\ table. The \ **lskitcomp**\ command will only display the kit components matching the specified osdistro.
\ **-S|--serverrole**\ \ *server_role*\
where \ *server_role*\ is the name of a server role. The typical server roles are: mgtnode, servicenode, computenode, loginnode, storagennode. The \ **lskitcomp**\ command will only display the kit components matching the specified server role.
\ *kitcomp_names*\
is a comma-delimited list of kit component names. The \ **lskitcomp**\ command will only display the kit components matching the specified names.
\ **-x|--xml|--XML**\
The XCATXMLTRACE=1 environment variable is needed when using -x|--xml|--XML.
Return the output with XML tags. The data is returned as:
<data>
<kitinfo>
...
</kitinfo>
</data>
...
<data>
<kitinfo>
...
</kitinfo>
</data>
Each <kitinfo> tag contains info for a group of kit components belonging to the same kit. The info inside <kitinfo> is structured as follows:
The <kit> sub-tag contains the kit's name.
The <kitcomponent> sub-tags store info about the kit's components.
The data inside <kitinfo> is returned as:
<kitinfo>
<kit>
...
</kit>
.. code-block:: perl
<kitcomponent>
...
</kitcomponent>
...
</kitinfo>
\ **-V|--verbose**\
Display additional progress and error messages.
\ **-v|--version**\
Command Version.
\ **-?|-h|--help**\
Display usage message.
************
RETURN VALUE
************
0
The command completed successfully.
1
An error has occurred.
********
EXAMPLES
********
1.
To list all kit components, enter:
.. code-block:: perl
lskitcomp
2.
To list the kit component "comp-server-1.0-1-rhels-6-x86_64", enter:
.. code-block:: perl
lskitcomp comp-server-1.0-1-rhels-6-x86_64
3.
To list the kit component "comp-server-1.0-1-rhels-6-x86_64" for selected kit component attributes, enter:
.. code-block:: perl
lskitcomp -C kitcompname,desc comp-server-1.0-1-rhels-6-x86_64
4.
To list kit components compatible with "rhels-6.2-x86_64" osdistro, enter:
.. code-block:: perl
lskitcomp -O rhels-6.2-x86_64
5.
To list kit components compatible with "rhels-6.2-x86_64" osdistro and "computenode" server role, enter:
.. code-block:: perl
lskitcomp -O rhels-6.2-x86_64 -S computenode
6.
To list the kit component "testkit-compute-1.0-1-ubuntu-14.04-ppc64el" with XML tags, enter:
.. code-block:: perl
XCATXMLTRACE=1 lskitcomp -x testkit-compute-1.0-1-ubuntu-14.04-ppc64el
*****
FILES
*****
/opt/xcat/bin/lskitcomp
********
SEE ALSO
********
lskit(1)|lskit.1, lskitdeployparam(1)|lskitdeployparam.1, addkit(1)|addkit.1, rmkit(1)|rmkit.1, addkitcomp(1)|addkitcomp.1, rmkitcomp(1)|rmkitcomp.1

View File

@ -0,0 +1,164 @@
##################
lskitdeployparam.1
##################
.. highlight:: perl
****
NAME
****
\ **lskitdeployparam**\ - Lists the deployment parameters for one or more Kits or Kit components
********
SYNOPSIS
********
\ **lskitdeployparam**\ [\ **-V**\ | \ **--verbose**\ ]
[\ **-x**\ | \ **--xml**\ | \ **--XML**\ ]
[\ **-k**\ | \ **--kitname**\ \ *kit_names*\ ]
[\ **-c**\ | \ **--compname**\ \ *comp_names*\ ]
\ **lskitdeployparam**\ [\ **-?**\ | \ **-h**\ | \ **--help**\ | \ **-v**\ | \ **--version**\ ]
***********
DESCRIPTION
***********
The \ **lskitdeployparam**\ command is used to list the kit deployment parameters for one or more kits, or one or more kit components. Kit deployment parameters are used to customize the installation or upgrade of kit components.
The \ **lskitdeployparam**\ command outputs the kit component information in two formats: human-readable format (default), and XML format. Use the -x option to view the information in XML format.
Input to the command can specify any combination of the input options.
Note: The xCAT support for Kits is only available for Linux operating systems.
*******
OPTIONS
*******
\ **-k|--kitname**\ \ *kit_names*\
Where \ *kit_names*\ is a comma-delimited list of kit names. The \ **lskitdeployparam**\ command will only display the deployment parameters for the kits with the matching names.
\ **-c|--compname**\ \ *comp_names*\
Where \ *comp_names*\ is a comma-delimited list of kit component names. The \ **lskitdeployparam**\ command will only display the deployment parameters for the kit components with the matching names.
\ **-x|--xml|--XML**\
Return the output with XML tags. The data is returned as:
<data>
<kitdeployparam>
<name>KIT_KIT1_PARAM1</name>
<value>value11</value>
</kitdeployparam>
</data>
<data>
<kitdeployparam>
<name>KIT_KIT1_PARAM2</name>
<value>value12</value>
</kitdeployparam>
</data>
...
\ **-V|--verbose**\
Display additional progress and error messages.
\ **-v|--version**\
Command Version.
\ **-?|-h|--help**\
Display usage message.
************
RETURN VALUE
************
0
The command completed successfully.
1
An error has occurred.
********
EXAMPLES
********
1.
To list kit deployment parameters for kit "kit-test1-1.0-Linux", enter:
.. code-block:: perl
lskitdeployparam -k kit-test1-1.0-Linux
2.
To list kit deployment parameters for kit component "comp-server-1.0-1-rhels-6-x86_64", enter:
.. code-block:: perl
lskitdeployparam -c comp-server-1.0-1-rhels-6-x86_64
*****
FILES
*****
/opt/xcat/bin/lskitdeployparam
********
SEE ALSO
********
lskit(1)|lskit.1, lskitcomp(1)|lskitcomp.1, addkit(1)|addkit.1, rmkit(1)|rmkit.1, addkitcomp(1)|addkitcomp.1, rmkitcomp(1)|rmkitcomp.1

View File

@ -0,0 +1,151 @@
############
lskmodules.1
############
.. highlight:: perl
****
NAME
****
\ **lskmodules**\ - list kernel driver modules in rpms or driver disk image files
********
SYNOPSIS
********
\ **lskmodules**\ [\ **-V**\ | \ **--verbose**\ ]
[\ **-i**\ | \ **--osimage**\ \ *osimage_names*\ ]
[\ **-c**\ | \ **--kitcomponent**\ \ *kitcomp_names*\ ]
[\ **-o**\ | \ **--osdistro**\ \ *osdistro_names*\ ]
[\ **-u**\ | \ **--osdistroupdate**\ \ *osdistroupdate_names*\ ]
[\ **-x**\ | \ **--xml**\ | \ **--XML**\ ]
\ **lskmodules**\ [\ **-?**\ | \ **-h**\ | \ **--help**\ | \ **-v**\ | \ **--version**\ ]
***********
DESCRIPTION
***********
The \ **lskmodules**\ command finds the kernel driver module files (\*.ko) in the specified input locations, runs the modinfo command against each file, and returns the driver name and description. If -x is specified, the output is returned with XML tags.
Input to the command can specify any number or combination of the input options.
*******
OPTIONS
*******
\ **-i|--osimage**\ \ *osimage_names*\
where \ *osimage_names*\ is a comma-delimited list of xCAT database osimage object names. For each \ *osimage_name*\ , lskmodules will use the entries in osimage.driverupdatesrc for the rpms and driver disk image files to search.
\ **-c|--kitcomponent**\ \ *kitcomponent_names*\
where \ *kitcomponent_names*\ is a comma-delimited list of xCAT database kitcomponent object names. For each \ *kitcomponent_name*\ , lskmodules will use the entries in kitcomponent.driverpacks for the rpm list and the repodir of the kitcomponent.kitreponame for the location of the rpm files to search.
\ **-o|--osdistro**\ \ *osdistro_names*\
where \ *osdistro_names*\ is a comma-delimited list of xCAT database osdistro object names. For each \ *osdistro_name*\ , lskmodules will search each <osdistro.dirpaths>/Packages/kernel-<kernelversion>.rpm file.
\ **-u|--osdistroupdate**\ \ *osdistroupdate_names*\
where \ *osdistroupdate_names*\ is a comma-delimited list of xCAT database osdistroupdate table entries. For each \ *osdistroupdate_name*\ , lskmodules will search the <osdistroupdate.dirpath>/kernel-<kernelversion>.rpm file.
\ **-x|--xml|--XML**\
Return the output with XML tags. The data is returned as:
<module>
<name> xxx.ko </name>
<description> this is module xxx </description>
</module>
This option is intended for use by other programs. The XML will not be displayed. To view the returned XML, set the XCATSHOWXML=yes environment variable before running this command.
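For example (a hedged sketch reusing the kit component name from the example below), the returned XML can be viewed as follows:
.. code-block:: perl
  XCATSHOWXML=yes lskmodules -c kit1_comp1-x86_64 -x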
\ **-V|--verbose**\
Display additional progress and error messages.
\ **-v|--version**\
Command Version.
\ **-?|-h|--help**\
Display usage message.
************
RETURN VALUE
************
0
The command completed successfully.
1
An error has occurred.
********
EXAMPLES
********
1.
To list the kernel modules included in the driverpacks shipped with kitcomponent kit1_comp1-x86_64,
enter:
.. code-block:: perl
lskmodules -c kit1_comp1-x86_64
*****
FILES
*****
********
SEE ALSO
********

View File

@ -0,0 +1,143 @@
########
lslite.1
########
.. highlight:: perl
****
NAME
****
\ **lslite**\ - Display a summary of the statelite information.
********
SYNOPSIS
********
\ **lslite**\ [-h | --help]
\ **lslite**\ [-V | --verbose] [-i imagename] | [noderange]
***********
DESCRIPTION
***********
The \ **lslite**\ command displays a summary of the statelite information that has been defined for a noderange or an image.
*******
OPTIONS
*******
\ **-h|--help**\
Display usage message.
\ **-V|--verbose**\
Verbose mode.
\ **-i imagename**\
The name of an existing xCAT osimage definition.
\ **noderange**\
A set of comma delimited node names and/or group names. See the "noderange" man page for details on additional supported formats.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
1.
To list the statelite information for an xCAT node named "node01".
\ **lslite node01**\
Output is similar to:
>>>Node: node01
Osimage: 61img
Persistent directory (statelite table):
xcatmn1:/statelite
Litefiles (litefile table):
tmpfs,rw /etc/adjtime
tmpfs,rw /etc/lvm/.cache
tmpfs,rw /etc/mtab
........
Litetree path (litetree table):
1,MN:/etc
2,server1:/etc
2.
To list the statelite information for an xCAT osimage named "osimage01".
\ **lslite -i osimage01**\
Output is similar to:
.. code-block:: perl
tmpfs,rw /etc/adjtime
tmpfs,rw /etc/lvm/.cache
tmpfs,rw /etc/mtab
........
*****
FILES
*****
/opt/xcat/bin/lslite
********
SEE ALSO
********
noderange(3)|noderange.3, tabdump(8)|tabdump.8

View File

@ -0,0 +1,367 @@
#######
lsslp.1
#######
.. highlight:: perl
****
NAME
****
\ **lsslp**\ - Discovers selected networked services information within the same subnet.
********
SYNOPSIS
********
\ *lsslp [-h| --help]*\
\ *lsslp [-v| --version]*\
\ *lsslp [noderange] [-V] [-i ip[,ip..]][-w][-r|-x|-z][-n][-s CEC|FRAME|MM|IVM|RSA|HMC|CMM|IMM2|FSP][-t tries][-I][-C counts][-T timeout][--vpdtable]*\
***********
DESCRIPTION
***********
The lsslp command discovers selected service types using the -s flag. All service types are returned if the -s flag is not specified. If a specific IP address is not specified using the -i flag, the request is sent out on all available network adapters. The optional -r, -x, -z and --vpdtable flags format the output. If responses are not received from all the hardware, use -T to increase the waiting time.
NOTE: SLP broadcast requests will propagate only within the subnet of the network adapter broadcast IPs specified by the -i flag.
*******
OPTIONS
*******
\ **noderange**\ The nodes which the user wants to discover.
If a noderange is specified, lsslp will return only the nodes in that
range, which helps to add new nodes to the xCAT database without modifying
the existing definitions. However, the node names specified in noderange must
already be defined in the database. The specified nodes' type can be
frame/cec/hmc/fsp/bpa. If the type is frame or cec, lsslp will list the bpa
or fsp nodes within those nodes (bpa for frame, fsp for cec).
Do not use noderange with the flag -s.
\ **-i**\ IP(s) the command will send out (defaults to all available adapters).
\ **-h**\ Display usage message.
\ **-n**\ Only display and write the newly discovered hardware.
\ **-u**\ Do unicast to a specified IP range. Must be used with -s and --range.
The -u flag is not supported on AIX.
\ **--range**\ Specify one or more IP ranges. Must be used in unicast mode.
It accepts multiple formats. For example, 192.168.1.1/24, 40-41.1-2.3-4.1-100.
If the range is huge, for example, 192.168.1.1/8, lsslp may take a very long time to scan the nodes,
so the range should be specified as precisely as possible.
\ **-r**\ Display Raw SLP response.
\ **-C**\ The number of expected responses specified by the user.
When using this flag, lsslp will not return until it has found all the nodes or timed out.
The default maximum time is 3 seconds. The user can use the -T flag to specify the time to wait.
A short time limits the time cost, while a long time helps to find all the nodes.
\ **-T**\ The number in seconds to limit the time cost of lsslp.
\ **-s**\ Service type interested in discovering.
\ **-t**\ Number of service-request attempts.
\ **--vpdtable**\ Output the SLP response in vpdtable format, which makes it easy to write the data to the vpd table.
\ **-v**\ Command Version.
\ **-V**\ Verbose output.
\ **-w**\ Writes output to xCAT database.
\ **-x**\ XML format.
\ **-z**\ Stanza formatted output.
\ **-I**\ Give a warning message for the nodes defined in the database which have no SLP response.
Note that this flag can only be used after the database migration has finished successfully.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
1. To list all discovered HMC service types in tabular format, enter:
.. code-block:: perl
lsslp -s HMC
Output is similar to:
.. code-block:: perl
device type-model serial-number ip-addresses hostname
HMC 7310CR2 103F55A 1.1.1.115 hmc01
HMC 7310CR2 105369A 3.3.3.103 hmc02
HMC 7310CR3 KPHHK24 3.3.3.154 hmc03
2. To list all discovered FSP service types in raw response format on subnet 3.0.0.255, enter:
.. code-block:: perl
lsslp -i 3.0.0.255 -s CEC -r
Output is similar to:
.. code-block:: perl
(type=cec-service-processor),(serial-number=10A3AEB),(machinetype-model=9117-570),(fru-serial-number=YL11C5338102),(hostname=),(frame-number=0),(cage-number=0),(ip-address=3.0.0.94,1.1.1.147),(web-url=https://3.0.0.94:473 ), (slot=1),(bpc-machinetype-model=0),(bpc-serial-number=0),(Image=fips240/b0630a_0623.240)
(type=cec-service-processor),(serial-number=10A3E2B),(machinetype-model=9117-570),(fru-serial- number=YL11C5338250),(hostname=),(frame-number=0),(cage-number=0),(ip-address=3.0.0.95,1.1.1.147), (web-url=https://3.0.0.95:473 ),(slot=1),(bpc-machinetype-model=0),(bpc-serial-number=0),(Image=fips240/b0630a_0623.240)
3. To list all discovered MM service types in XML format and write the output to the xCAT database, enter:
.. code-block:: perl
lsslp -s MM -x -w
Output is similar to:
.. code-block:: perl
<Node>
<groups>mm,all</groups>
<id>00:14:5E:E0:CB:1E</id>
<mgt>blade</mgt>
<mtm>029310C</mtm>
<node>Server-029310C-SN100485A-A</node>
<nodetype>mm</nodetype>
<otherinterfaces>9.114.47.229</otherinterfaces>
<serial>100485A</serial>
</Node>
4. To list all discovered service types in stanza format and write the output to the xCAT database, enter:
.. code-block:: perl
lsslp -z -w
Output is similar to:
c76v1hmc02:
objtype=node
hcp=c76v1hmc02
nodetype=hmc
mtm=7315CR2
serial=10407DA
ip=192.168.200.125
groups=hmc,all
mgt=hmc
mac=00:1a:64:fb:7d:50
hidden=0
192.168.200.244:
objtype=node
hcp=192.168.200.244
nodetype=fsp
mtm=9125-F2A
serial=0262662
side=A-0
otherinterfaces=192.168.200.244
groups=fsp,all
mgt=fsp
id=4
parent=Server-9125-F2A-SN0262662
mac=00:1a:64:fa:01:fe
hidden=1
Server-8205-E6B-SN1074CDP:
objtype=node
hcp=Server-8205-E6B-SN1074CDP
nodetype=cec
mtm=8205-E6B
serial=1074CDP
groups=cec,all
mgt=fsp
id=0
hidden=0
192.168.200.33:
objtype=node
hcp=192.168.200.33
nodetype=bpa
mtm=9458-100
serial=99201WM
side=B-0
otherinterfaces=192.168.200.33
groups=bpa,all
mgt=bpa
id=0
mac=00:09:6b:ad:19:90
hidden=1
Server-9125-F2A-SN0262652:
objtype=node
hcp=Server-9125-F2A-SN0262652
nodetype=frame
mtm=9125-F2A
serial=0262652
groups=frame,all
mgt=fsp
id=5
hidden=0
5. To list all discovered service types in stanza format and display the IP address, enter:
.. code-block:: perl
lsslp -w
Output is similar to:
.. code-block:: perl
mm01:
objtype=node
nodetype=fsp
mtm=8233-E8B
serial=1000ECP
side=A-0
groups=fsp,all
mgt=fsp
id=0
mac=00:14:5E:F0:5C:FD
otherinterfaces=50.0.0.5
bpa01:
objtype=node
nodetype=bpa
mtm=9A01-100
serial=0P1N746
side=A-1
groups=bpa,all
mgt=bpa
id=0
mac=00:1A:64:54:8C:A5
otherinterfaces=50.0.0.1
6. To list all the CECs, enter:
.. code-block:: perl
lsslp -s CEC
device type-model serial-number side ip-addresses hostname
FSP 9117-MMB 105EBEP A-1 20.0.0.138 20.0.0.138
FSP 9117-MMB 105EBEP B-1 20.0.0.139 20.0.0.139
CEC 9117-MMB 105EBEP Server-9117-MMB-SN105EBEP
7. To list all the nodes defined in database which have no SLP response.
.. code-block:: perl
lsslp -I
Output is similar to:
These nodes defined in database but can't be discovered: f17c00bpcb_b,f17c01bpcb_a,f17c01bpcb_b,f17c02bpcb_a,
device type-model serial-number side ip-addresses hostname
bpa 9458-100 BPCF017 A-0 40.17.0.1 f17c00bpca_a
bpa 9458-100 BPCF017 B-0 40.17.0.2 f17c00bpcb_a
8. To find the nodes within the user specified. Please make sure the noderange input have been defined in xCAT database.
.. code-block:: perl
lsslp CEC1-CEC3
or lsslp CEC1,CEC2,CEC3
device type-model serial-number side ip-addresses hostname
FSP 9A01-100 0P1P336 A-0 192.168.200.34 192.168.200.34
FSP 9A01-100 0P1P336 B-0 192.168.200.35 192.168.200.35
FSP 9A01-100 0P1P336 A-1 50.0.0.27 50.0.0.27
FSP 9A01-100 0P1P336 B-1 50.0.0.28 50.0.0.28
CEC 9A01-100 0P1P336 CEC1
FSP 8233-E8B 1040C7P A-0 192.168.200.36 192.168.200.36
FSP 8233-E8B 1040C7P B-0 192.168.200.37 192.168.200.37
FSP 8233-E8B 1040C7P A-1 50.0.0.29 50.0.0.29
FSP 8233-E8B 1040C7P B-1 50.0.0.30 50.0.0.30
CEC 8233-E8B 1040C7P CEC2
FSP 8205-E6B 1000ECP A-0 192.168.200.38 192.168.200.38
FSP 8205-E6B 1000ECP B-0 192.168.200.39 192.168.200.39
FSP 8205-E6B 1000ECP A-1 50.0.0.31 50.0.0.27
FSP 8205-E6B 1000ECP B-1 50.0.0.32 50.0.0.28
CEC 8205-E6B 1000ECP CEC3
9. To list all discovered CMM in stanza format, enter:
lsslp -s CMM -m -z
e114ngmm1:
objtype=node
mpa=e114ngmm1
nodetype=cmm
mtm=98939AX
serial=102537A
groups=cmm,all
mgt=blade
hidden=0
otherinterfaces=70.0.0.30
hwtype=cmm
10. To use lsslp unicast, enter:
lsslp -u -s CEC --range 40-41.1-2.1-2.1-2
*****
FILES
*****
/opt/xcat/bin/lsslp
********
SEE ALSO
********
rscan(1)|rscan.1

View File

@ -0,0 +1,269 @@
########
lstree.1
########
.. highlight:: perl
****
NAME
****
\ **lstree**\ - Display the tree of service node hierarchy, hardware hierarchy, or VM hierarchy.
********
SYNOPSIS
********
\ **lstree**\ [-h | --help]
\ **lstree**\ [-s | --servicenode] [-H | --hardwaremgmt] [-v | --virtualmachine] [noderange]
***********
DESCRIPTION
***********
The \ **lstree**\ command can display the tree of service node hierarchy for the xCAT nodes which have a service node defined or which are service nodes, display the tree of hardware hierarchy only for the physical objects, and display the tree of VM hierarchy for the xCAT nodes which are virtual machines or which are the hosts of virtual machines. If a noderange is specified, only the part of the hierarchy that involves those nodes is shown. For zVM, only displaying the VM hierarchy is supported. By default, lstree will show both the hardware hierarchy and the VM hierarchy for all the nodes.
*******
OPTIONS
*******
\ **-h|--help**\
Display usage message.
\ **-s|--servicenode**\
Show the tree of service node hierarchy.
\ **-H|--hardwaremgmt**\
Show the tree of hardware hierarchy.
\ **-v|--virtualmachine**\
Show the tree of VM hierarchy.
\ **noderange**\
A set of comma delimited node names and/or group names. See the "noderange" man page for details on additional supported formats.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
1.
To display the tree of service node hierarchy for all the nodes.
\ **lstree -s**\
Output is similar to:
Service Node: mysn01
|__mycn01
|__mycn02
|__mycn03
Service Node: mysn02
|__mycn11
|__mycn12
|__mycn13
......
2.
To display the tree of service node hierarchy for service node "mysn01".
\ **lstree -s mysn01**\
Output is similar to:
Service Node: mysn01
|__mycn01
|__mycn02
|__mycn03
3.
To display the tree of hardware hierarchy for all the nodes.
\ **lstree -H**\
Output is similar to:
HMC: myhmc01
|__Frame: myframe01
|__CEC: mycec01
|__CEC: mycec02
......
Service Focal Point: myhmc02
|__Frame: myframe01
|__CEC: mycec01
|__CEC: mycec02
|__CEC: mycec03
......
Management Module: mymm01
|__Blade 1: js22n01
|__Blade 2: js22n02
|__Blade 3: js22n03
......
BMC: 192.168.0.1
|__Server: x3650n01
4.
To display the tree of hardware hierarchy for HMC "myhmc01".
\ **lstree -H myhmc01**\
Output is similar to:
HMC: myhmc01
|__Frame: myframe01
|__CEC: mycec01
|__CEC: mycec02
......
5.
To display the tree of VM hierarchy for all the nodes.
\ **lstree -v**\
Output is similar to:
Server: hs22n01
|__ hs22vm1
Server: x3650n01
|__ x3650n01kvm1
|__ x3650n01kvm2
6.
To display the tree of VM hierarchy for the node "x3650n01".
\ **lstree -v x3650n01**\
Output is similar to:
Server: x3650n01
|__ x3650n01kvm1
|__ x3650n01kvm2
7.
To display both the hardware tree and VM tree for all nodes.
\ **lstree**\
Output is similar to:
HMC: myhmc01
|__Frame: myframe01
|__CEC: mycec01
|__LPAR 1: node01
|__LPAR 2: node02
|__LPAR 3: node03
......
|__CEC: mycec02
|__LPAR 1: node11
|__LPAR 2: node12
|__LPAR 3: node13
......
Service Focal Point: myhmc02
|__Frame: myframe01
|__CEC: mycec01
|__LPAR 1: node01
|__LPAR 2: node02
|__LPAR 3: node03
......
|__Frame: myframe02
|__CEC: mycec02
|__LPAR 1: node21
|__LPAR 2: node22
|__LPAR 3: node23
......
Management Module: mymm01
|__Blade 1: hs22n01
|__hs22n01vm1
|__hs22n01vm2
|__Blade 2: hs22n02
|__hs22n02vm1
|__hs22n02vm2
......
BMC: 192.168.0.1
|__Server: x3650n01
|__ x3650n01kvm1
|__ x3650n01kvm2
*****
FILES
*****
/opt/xcat/bin/lstree
********
SEE ALSO
********
noderange(3)|noderange.3, tabdump(8)|tabdump.8

View File

@ -0,0 +1,259 @@
######
lsve.1
######
.. highlight:: perl
****
NAME
****
\ **lsve**\ - Lists detail attributes for a virtual environment.
********
SYNOPSIS
********
\ **lsve**\ [\ **-t**\ type] [\ **-m**\ manager] [\ **-o**\ object]
***********
DESCRIPTION
***********
The \ **lsve**\ command can be used to list a virtual environment for
'Data Center', 'Cluster', 'Storage Domain', 'Network' and 'Template' objects.
The mandatory parameter \ **-m manager**\ is used to specify the address of the
manager of virtual environment. xCAT needs it to access the RHEV manager.
The mandatory parameter \ **-t type**\ is used to specify the type of the target
object.
The \ **lsve**\ command supports five types of object: \ **dc**\ , \ **cl**\ , \ **sd**\ , \ **nw**\
and \ **tpl**\ .
The parameter \ **-o object**\ is used to specify which object to list. If no \ **-o**\ is specified,
all the objects with the \ **-t**\ type will be displayed.
*******
OPTIONS
*******
\ **-h**\ Display usage message.
\ **-m**\ Specify the manager of the virtual environment.
For RHEV, the FQDN (Fully Qualified Domain Name) of the rhev manager has to be specified.
\ **-o**\ The target object to display.
\ **-t**\ Specify the \ **type**\ of the target object.
Supported types:
\ **dc**\ - Data Center (For type of 'dc', all the elements belongs to the data
center will be listed.)
\ **cl**\ - Cluster
\ **sd**\ - Storage Domain (To get the status of a Storage Domain, show it from the
\ *data center*\ it is attached to.)
\ **nw**\ - Network
\ **tpl**\ - Template
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
1. To list the data center 'Default', enter:
.. code-block:: perl
lsve -t dc -m <FQDN of rhev manager> -o Default
Output is similar to:
.. code-block:: perl
datacenters: [Default]
description: The default Data Center
state: up
storageformat: v1
storagetype: nfs
clusters: [Default]
cpu: Intel Westmere Family
description: The default server cluster
memory_hugepage: true
memory_overcommit: 100
storagedomains: [image]
available: 55834574848
committed: 13958643712
ismaster: true
status: active
storage_add: <Address of storage domain>
storage_format: v1
storage_path: /vfsimg
storage_type: nfs
type: data
used: 9663676416
networks: [rhevm2]
description:
state: operational
stp: false
networks: [rhevm]
description: Management Network
state: operational
stp: false
templates: [Blank]
bootorder: hd
cpucore: 1
cpusocket: 1
creation_time: 2008-04-01T00:00:00.000-04:00
display: spice
memory: 536870912
state: ok
stateless: false
type: desktop
2. To list the cluster 'Default', enter:
.. code-block:: perl
lsve -t cl -m <FQDN of rhev manager> -o Default
Output is similar to:
.. code-block:: perl
cpu: Intel Westmere Family
description: The default server cluster
memory_hugepage: true
memory_overcommit: 10
3. To list the Storage Domain 'image', enter:
.. code-block:: perl
lsve -t sd -m <FQDN of rhev manager> -o image
Output is similar to:
storagedomains: [image]
available: 55834574848
committed: 13958643712
ismaster: true
status:
storage_add: <Address of storage domain>
storage_format: v1
storage_path: /vfsimg
storage_type: nfs
type: data
used: 9663676416
4. To list the network 'rhevm', enter:
.. code-block:: perl
lsve -t nw -m <FQDN of rhev manager> -o rhevm
Output is similar to:
.. code-block:: perl
networks: [rhevm]
description: Management Network
state: operational
stp: false
5. To list the template 'tpl01', enter:
.. code-block:: perl
lsve -t tpl -m <FQDN of rhev manager> -o tpl01
Output is similar to:
.. code-block:: perl
templates: [tpl01]
bootorder: network
cpucore: 2
cpusocket: 2
creation_time: 2012-08-22T23:52:35.953-04:00
display: vnc
memory: 1999634432
state: ok
stateless: false
type: server
*****
FILES
*****
/opt/xcat/bin/lsve
********
SEE ALSO
********
cfgve(1)|cfgve.1

View File

@ -0,0 +1,146 @@
########
lsvlan.1
########
.. highlight:: perl
****
NAME
****
\ **lsvlan**\ - It lists the existing vlans for the cluster.
********
SYNOPSIS
********
\ **lsvlan**\
\ **lsvlan**\ [\ *vlanid*\ ]
\ **lsvlan**\ [\ **-h**\ |\ **--help**\ ]
\ **lsvlan**\ [\ **-v**\ |\ **--version**\ ]
***********
DESCRIPTION
***********
The \ **lsvlan**\ command lists all the vlans for the cluster. If \ *vlanid*\ is specified it will list more details about this vlan including the nodes in the vlan.
**********
Parameters
**********
\ *vlanid*\ is a unique vlan number. If it is omitted, all vlans will be listed.
*******
OPTIONS
*******
\ **-h|--help**\ Display usage message.
\ **-v|--version**\ Command Version.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
1.
To list all the vlans in the cluster
.. code-block:: perl
lsvlan
Output is similar to:
vlan 3:
subnet 10.3.0.0
netmask 255.255.0.0
.. code-block:: perl
vlan 4:
subnet 10.4.0.0
netmask 255.255.0.0
2.
To list the details for vlan 3
.. code-block:: perl
lsvlan 3
Output is similar to:
vlan 3
subnet 10.3.0.0
netmask 255.255.0.0
.. code-block:: perl
hostname ip address node vm host
v3n1 10.3.0.1 c68m4hsp06
v3n2 10.3.0.2 x3455n01
v3n3 10.3.0.3 x3650n01
v3n4 10.3.0.4 x3650n01kvm1 x3650n01
v3n5 10.3.0.5 x3650n01kvm2 x3650n01
*****
FILES
*****
/opt/xcat/bin/lsvlan
********
SEE ALSO
********
mkvlan(1)|mkvlan.1, rmvlan(1)|rmvlan.1, chvlan(1)|chvlan.1

View File

@ -0,0 +1,400 @@
######
lsvm.1
######
.. highlight:: perl
****
NAME
****
\ **lsvm**\ - Lists partition profile information for HMC-, DFM-, IVM-, KVM-, Vmware- and zVM-managed nodes. For Power 775, it lists the LPARs' I/O slots information and CEC configuration.
********
SYNOPSIS
********
\ **lsvm**\ [\ **-h**\ | \ **--help**\ ]
\ **lsvm**\ [\ **-v**\ | \ **--version**\ ]
\ **lsvm**\ [\ **-V**\ | \ **--verbose**\ ] \ *noderange*\
\ **lsvm**\ [\ **-a**\ | \ **--all**\ ] \ *noderange*\
For PPC (using Direct FSP Management):
======================================
\ **lsvm**\ [\ **-l**\ | \ **--long**\ ] \ **--p775**\ \ *noderange*\
\ **lsvm**\ \ *noderange*\
For zVM:
========
\ **lsvm**\ \ *noderange*\
***********
DESCRIPTION
***********
The lsvm command lists all partition profiles defined for the partitions specified in noderange. If noderange is a CEC, all the partitions associated with that CEC are displayed.
For PPC (using Direct FSP Management):
======================================
For Power 775 (use option \ *--p775*\ to specify), lsvm lists all partition I/O slot information for the partitions specified in noderange. If noderange is a CEC, it gets the CEC's pump mode value, the octants' memory interleaving values, all the octant configuration values, and all the I/O slot information.
For DFM-managed (short for Direct FSP Management mode) normal power machine, lsvm lists the processor, memory, physical I/O slots, hugepage and BSR info for the specified partitions or CEC.
The pump mode value has the valid options:
1 - Node Pump Mode
2 - Chip Pump Mode
The Memory Interleaving Mode has 3 valid options:
0 - not Applicable
1 - interleaved
2 - non-interleaved
For more information about this part, refer to the section "Using the \*vm commands to define partitions in xCAT DFM" in the document below.
XCAT_Power_775_Hardware_Management
For KVM and Vmware
==================
The virtual machines defined on the hypervisor \ *noderange*\ will be displayed. \ *noderange*\ can only be a hypervisor. The type of the hypervisor should be set in \ *hypervisor.type*\ before running lsvm.
Note: Only virtual machines in the power-on state can be listed by the lsvm command.
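For example, assuming a KVM hypervisor node named kvmhost1 (the node name is hypothetical), the type could be set with chtab before running lsvm:
.. code-block:: perl
  chtab node=kvmhost1 hypervisor.type=kvm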
For zVM:
========
Show the directory entry for a given virtual machine.
*******
OPTIONS
*******
\ **-h**\
Display usage message.
\ **-v**\
Command Version.
\ **-V**\
Verbose output.
\ **-a**\
List all the profiles for one partition
\ **--p775**\
Specify the operation is for Power 775 machines.
\ **-l**\
Show lparnames for lpars. It must be used with the \ **--p775**\ option.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
1. To list all partition profiles defined for HMC-managed partition lpar3, enter:
.. code-block:: perl
lsvm lpar3
Output is similar to:
.. code-block:: perl
lpar3: name=lpar3,lpar_name=lpar3,lpar_id=4,lpar_env=aixlinux,all_resources=0,min_mem=512, desired_mem=2048, max_mem=3072,min_num_huge_pages=0,desired_num_huge_pages=0,max_num_huge_pages=0,proc_mode=shared, min_proc_units=0.5,desired_proc_units=0.5,max_proc_units=0.5,min_procs=1,desired_procs=1,max_procs=1, sharing_mode=uncap,uncap_weight=128,shared_proc_pool_id=0,shared_proc_pool_name=DefaultPool,io_slots=none, lpar_io_pool_ids=none,max_virtual_slots=10, "virtual_serial_adapters=1/server/1/any//any/1,0/server/1/any//any/1", virtual_scsi_adapters=2/client/1/p6vios/4/1,virtual_eth_adapters=3/0/1//0/1,hca_adapters=none,boot_mode=norm,conn_monitoring=0,auto_start=0,power_ctrl_lpar_ids=none,work_group_id=none,redundant_err_path_reporting=0, bsr_arrays=0,lhea_logical_ports=none,lhea_capabilities=none,lpar_proc_compat_mode=default,electronic_err_reporting=null
2. To list all IVM-managed partitions associated with CEC cec01, enter:
.. code-block:: perl
lsvm cec01
Output is similar to:
.. code-block:: perl
cec01: name=10-B7D1G,lpar_name=10-B7D1G,lpar_id=1,os_type=vioserver,all_resources=0,min_mem=512, desired_mem=2048,max_mem=2048,proc_mode=shared,min_proc_units=0.10,desired_proc_units=0.40, max_proc_units=4.00,min_procs=1,desired_procs=4,max_procs=4,sharing_mode=uncap,uncap_weight=128, "io_slots=21010002/none/0,21010003/none/0,21010004/none/0,21020003/none/0,21020004/none/0,21030003/none/0,21030004/none/0,21040003/none/0,21040004/none/0",lpar_io_pool_ids=none,max_virtual_slots=48, "virtual_serial_adapters=0/server/1/any//any/1,1/server/1/any//any/1,10/client/0/2/lp2/0/0,12/client/0/3/lp3/0/0,14/client/0/4/lp4/0/0","virtual_scsi_adapters=11/server/2/lp2/2/0,13/server/3/lp3/2/0,15/server/4/lp4/2/0","virtual_eth_adapters=3/0/1//1/0,4/0/2//1/0,5/0/3//1/0,6/0/4//1/0",boot_mode=norm,conn_monitoring=0,auto_start=0,power_ctrl_lpar_ids=none
name=lp2,lpar_name=lp2,lpar_id=2,os_type=aixlinux,all_resources=0,min_mem=128,desired_mem=1024,max_mem=1024,proc_mode=shared,min_proc_units=0.10,desired_proc_units=0.10,max_proc_units=4.00,min_procs=1,desired_procs=1,max_procs=4,sharing_mode=uncap,uncap_weight=128,io_slots=none,lpar_io_pool_ids=none,max_virtual_slots=6, "virtual_serial_adapters=0/server/1/any//any/1,1/server/1/any//any/1",virtual_scsi_adapters=2/client/1/10-7D1G/11/1,virtual_eth_adapters=4/0/1//0/0,boot_mode=norm,conn_monitoring=0,auto_start=0,power_ctrl_lpar_ids=none
name=lp3,lpar_name=lp3,lpar_id=3,os_type=aixlinux,all_resources=0,min_mem=128,desired_mem=128,max_mem=128,proc_mode=shared,min_proc_units=0.10,desired_proc_units=0.10,max_proc_units=4.00,min_procs=1,desired_procs=1,max_procs=4,sharing_mode=uncap,uncap_weight=128,io_slots=none,lpar_io_pool_ids=none,max_virtual_slots=6, "virtual_serial_adapters=0/server/1/any//any/1,1/server/1/any//any/1",virtual_scsi_adapters=2/client/1/10-B7D1G/13/1,virtual_eth_adapters=4/0/1//0/0,boot_mode=of,conn_monitoring=0,auto_start=1, power_ctrl_lpar_ids=none
3. For Power 775, to list the I/O slot information of lpar1, enter:
.. code-block:: perl
lsvm lpar1 --p775
Output is similar to:
.. code-block:: perl
1: 514/U78A9.001.0123456-P1-C17/0x21010202/2/1
1: 513/U78A9.001.0123456-P1-C15/0x21010201/2/1
1: 512/U78A9.001.0123456-P1-C16/0x21010200/2/1
To list the lparname of lpars, enter:
.. code-block:: perl
lsvm lpar1 -l --p775
Output is similar to:
lpar1: 1: 514/U78A9.001.0123456-P1-C17/0x21010202/2/1
lpar1: 1: 513/U78A9.001.0123456-P1-C15/0x21010201/2/1
lpar1: 1: 512/U78A9.001.0123456-P1-C16/0x21010200/2/1
4. For Power 775, to list the I/O slot information and octant configuration of cec1, enter:
.. code-block:: perl
lsvm cec1 --p775
Output is similar to:
.. code-block:: perl
1: 514/U78A9.001.0123456-P1-C17/0x21010202/2/1
1: 513/U78A9.001.0123456-P1-C15/0x21010201/2/1
1: 512/U78A9.001.0123456-P1-C16/0x21010200/2/1
13: 537/U78A9.001.0123456-P1-C9/0x21010219/2/13
13: 536/U78A9.001.0123456-P1-C10/0x21010218/2/13
17: 545/U78A9.001.0123456-P1-C7/0x21010221/2/17
17: 544/U78A9.001.0123456-P1-C8/0x21010220/2/17
21: 553/U78A9.001.0123456-P1-C5/0x21010229/2/21
21: 552/U78A9.001.0123456-P1-C6/0x21010228/2/21
25: 569/U78A9.001.0123456-P1-C1/0x21010239/2/25
25: 561/U78A9.001.0123456-P1-C3/0x21010231/2/25
25: 560/U78A9.001.0123456-P1-C4/0x21010230/2/25
29: 568/U78A9.001.0123456-P1-C2/0x21010238/2/29
5: 521/U78A9.001.0123456-P1-C13/0x21010209/2/5
5: 520/U78A9.001.0123456-P1-C14/0x21010208/2/5
9: 529/U78A9.001.0123456-P1-C11/0x21010211/2/9
9: 528/U78A9.001.0123456-P1-C12/0x21010210/2/9
cec1: PendingPumpMode=1,CurrentPumpMode=1,OctantCount=8:
OctantID=0,PendingOctCfg=5,CurrentOctCfg=1,PendingMemoryInterleaveMode=2,CurrentMemoryInterleaveMode=2;
OctantID=1,PendingOctCfg=1,CurrentOctCfg=1,PendingMemoryInterleaveMode=2,CurrentMemoryInterleaveMode=2;
OctantID=2,PendingOctCfg=1,CurrentOctCfg=1,PendingMemoryInterleaveMode=2,CurrentMemoryInterleaveMode=2;
OctantID=3,PendingOctCfg=1,CurrentOctCfg=1,PendingMemoryInterleaveMode=2,CurrentMemoryInterleaveMode=2;
OctantID=4,PendingOctCfg=1,CurrentOctCfg=1,PendingMemoryInterleaveMode=2,CurrentMemoryInterleaveMode=2;
OctantID=5,PendingOctCfg=1,CurrentOctCfg=1,PendingMemoryInterleaveMode=2,CurrentMemoryInterleaveMode=2;
OctantID=6,PendingOctCfg=1,CurrentOctCfg=1,PendingMemoryInterleaveMode=2,CurrentMemoryInterleaveMode=2;
OctantID=7,PendingOctCfg=1,CurrentOctCfg=1,PendingMemoryInterleaveMode=2,CurrentMemoryInterleaveMode=2;
To list the lparname of lpars, enter:
.. code-block:: perl
lsvm cec1 -l --p775
Output is similar to:
.. code-block:: perl
lpar1: 1: 514/U78A9.001.0123456-P1-C17/0x21010202/2/1: 32: 0/3/3
lpar1: 1: 513/U78A9.001.0123456-P1-C15/0x21010201/2/1: 32: 0/3/3
lpar1: 1: 512/U78A9.001.0123456-P1-C16/0x21010200/2/1: 32: 0/3/3
lpar13: 13: 537/U78A9.001.0123456-P1-C9/0x21010219/2/13: 32: 0/3/3
lpar13: 13: 536/U78A9.001.0123456-P1-C10/0x21010218/2/13: 32: 0/3/3
lpar17: 17: 545/U78A9.001.0123456-P1-C7/0x21010221/2/17: 32: 0/0/0
lpar17: 17: 544/U78A9.001.0123456-P1-C8/0x21010220/2/17: 32: 0/0/0
lpar21: 21: 553/U78A9.001.0123456-P1-C5/0x21010229/2/21: 32: 0/0/0
lpar21: 21: 552/U78A9.001.0123456-P1-C6/0x21010228/2/21: 32: 0/0/0
lpar24: 25: 569/U78A9.001.0123456-P1-C1/0x21010239/2/25: 32: 0/0/0
lpar25: 25: 561/U78A9.001.0123456-P1-C3/0x21010231/2/25: 32: 0/0/0
lpar25: 25: 560/U78A9.001.0123456-P1-C4/0x21010230/2/25: 32: 0/0/0
lpar29: 29: 568/U78A9.001.0123456-P1-C2/0x21010238/2/29: 32: 0/0/0
lpar5: 5: 521/U78A9.001.0123456-P1-C13/0x21010209/2/5: 32: 0/3/3
lpar5: 5: 520/U78A9.001.0123456-P1-C14/0x21010208/2/5: 32: 0/3/3
lpar9: 9: 529/U78A9.001.0123456-P1-C11/0x21010211/2/9: 32: 0/3/3
lpar9: 9: 528/U78A9.001.0123456-P1-C12/0x21010210/2/9: 32: 0/3/3
cec1: PendingPumpMode=1,CurrentPumpMode=1,OctantCount=8:
OctantID=0,PendingOctCfg=5,CurrentOctCfg=1,PendingMemoryInterleaveMode=2,CurrentMemoryInterleaveMode=2;
OctantID=1,PendingOctCfg=1,CurrentOctCfg=1,PendingMemoryInterleaveMode=2,CurrentMemoryInterleaveMode=2;
OctantID=2,PendingOctCfg=1,CurrentOctCfg=1,PendingMemoryInterleaveMode=2,CurrentMemoryInterleaveMode=2;
OctantID=3,PendingOctCfg=1,CurrentOctCfg=1,PendingMemoryInterleaveMode=2,CurrentMemoryInterleaveMode=2;
OctantID=4,PendingOctCfg=1,CurrentOctCfg=1,PendingMemoryInterleaveMode=2,CurrentMemoryInterleaveMode=2;
OctantID=5,PendingOctCfg=1,CurrentOctCfg=1,PendingMemoryInterleaveMode=2,CurrentMemoryInterleaveMode=2;
OctantID=6,PendingOctCfg=1,CurrentOctCfg=1,PendingMemoryInterleaveMode=2,CurrentMemoryInterleaveMode=2;
OctantID=7,PendingOctCfg=1,CurrentOctCfg=1,PendingMemoryInterleaveMode=2,CurrentMemoryInterleaveMode=2;
Number of BSR arrays: 256,Bytes per BSR array: 4096,Available BSR array: 0;
Available huge page memory(in pages): 0
Configurable huge page memory(in pages): 12
Page Size(in GB): 16
Maximum huge page memory(in pages): 24
Requested huge page memory(in pages): 15
Number of BSR arrays: 256,Bytes per BSR array: 4096,Available BSR array: 0;
Available huge page memory(in pages): 0
Configurable huge page memory(in pages): 12
Page Size(in GB): 16
Maximum huge page memory(in pages): 24
Requested huge page memory(in pages): 15
5. To list the virtual machine's directory entry:
.. code-block:: perl
lsvm gpok3
Output is similar to:
.. code-block:: perl
gpok3: USER LNX3 PWD 512M 1G G
gpok3: INCLUDE LNXDFLT
gpok3: COMMAND SET VSWITCH VSW2 GRANT LNX3
6. For a DFM-managed normal power machine, list the detailed resource information:
.. code-block:: perl
lsvm cec
Output is similar to:
.. code-block:: perl
cec: HYP Configurable Processors: 16, Avail Processors: 16.
HYP Configurable Memory:32.00 GB(128 regions).
HYP Available Memory: 31.25 GB(125 regions).
HYP Memory Region Size: 0.25 GB(256 MB).
cec: All Physical I/O info:
65535,519,U78AA.001.WZSGVU7-P1-C7,0x21010207,0xffff(Empty Slot)
65535,518,U78AA.001.WZSGVU7-P1-C6,0x21010206,0xffff(Empty Slot)
65535,517,U78AA.001.WZSGVU7-P1-C5,0x21010205,0xffff(Empty Slot)
65535,516,U78AA.001.WZSGVU7-P1-C4,0x21010204,0xffff(Empty Slot)
65535,514,U78AA.001.WZSGVU7-P1-C19,0x21010202,0xffff(Empty Slot)
65535,513,U78AA.001.WZSGVU7-P1-T7,0x21010201,0xc03(USB Controller)
65535,512,U78AA.001.WZSGVU7-P1-T9,0x21010200,0x104(RAID Controller)
cec: Huge Page Memory
Available huge page memory(in pages): 2
Configurable huge page memory(in pages): 2
Page Size(in GB): 16
Maximum huge page memory(in pages): 4
Requested huge page memory(in pages): 2
cec: Barrier Synchronization Register(BSR)
Number of BSR arrays: 256
Bytes per BSR array: 4096
Available BSR array: 256
Note: The lines listed in the "All Physical I/O info" section represent all the physical I/O resource information. The format is like "owner_lparid,slot_id,physical resource name,drc_index,slot_class_code(class description)". The 'drc index' is short for Dynamic Resource Configuration Index; it uniquely identifies a physical I/O resource in a normal power machine.
For a DFM-managed partition on a normal power machine, list the detailed information:
.. code-block:: perl
lsvm lpar1
Output is similar to:
.. code-block:: perl
lpar1: Lpar Processor Info:
Curr Processor Min: 1.
Curr Processor Req: 16.
Curr Processor Max: 16.
lpar1: Lpar Memory Info:
Curr Memory Min: 0.25 GB(1 regions).
Curr Memory Req: 30.75 GB(123 regions).
Curr Memory Max: 32.00 GB(128 regions).
lpar1: 1,519,U78AA.001.WZSGVU7-P1-C7,0x21010207,0xffff(Empty Slot)
lpar1: 1,518,U78AA.001.WZSGVU7-P1-C6,0x21010206,0xffff(Empty Slot)
lpar1: 1,517,U78AA.001.WZSGVU7-P1-C5,0x21010205,0xffff(Empty Slot)
lpar1: 1,516,U78AA.001.WZSGVU7-P1-C4,0x21010204,0xffff(Empty Slot)
lpar1: 1,514,U78AA.001.WZSGVU7-P1-C19,0x21010202,0xffff(Empty Slot)
lpar1: 1,513,U78AA.001.WZSGVU7-P1-T7,0x21010201,0xc03(USB Controller)
lpar1: 1,512,U78AA.001.WZSGVU7-P1-T9,0x21010200,0x104(RAID Controller)
lpar1: 1/2/2
lpar1: 256.
*****
FILES
*****
/opt/xcat/bin/lsvm
********
SEE ALSO
********
mkvm(1)|mkvm.1, chvm(1)|chvm.1, rmvm(1)|rmvm.1

View File

@ -0,0 +1,145 @@
#########
lsxcatd.1
#########
.. highlight:: perl
****
NAME
****
\ **lsxcatd**\ - lists xCAT daemon information.
********
SYNOPSIS
********
\ **lsxcatd**\ [\ **-h**\ | \ **--help**\ | \ **-v**\ | \ **--version**\ | \ **-d**\ | \ **--database**\ |\ **-t**\ | \ **--nodetype**\ | \ **-a**\ | \ **--all**\ ]
***********
DESCRIPTION
***********
The \ **lsxcatd**\ command lists important xCAT daemon (xcatd) information.
*******
OPTIONS
*******
\ **-v|--version**\
Command Version.
\ **-h|--help**\
Display usage message.
\ **-d|--database**\
Displays information about the current database being used by xCAT.
\ **-t|--nodetype**\
Displays whether the node is a Management Node or a Service Node.
\ **-a|--all**\
Displays all information about the daemon supported by the command.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
1.
To display information about the current database:
.. code-block:: perl
lsxcatd -d
Output is similar to:
.. code-block:: perl
cfgloc=Pg:dbname=xcatdb;host=7.777.47.250|xcatadm
dbengine=Pg
dbname=xcatdb
dbhost=7.777.47.250
dbadmin=xcatadm
2.
To display all information:
.. code-block:: perl
lsxcatd -a
Output is similar to:
Version 2.8.5 (git commit 0d4888af5a7a96ed521cb0e32e2c918a9d13d7cc, built Tue Jul 29 02:22:47 EDT 2014)
This is a Management Node
cfgloc=mysql:dbname=xcatdb;host=9.114.34.44|xcatadmin
dbengine=mysql
dbname=xcatdb
dbhost=9.114.34.44
dbadmin=xcatadmin
*****
FILES
*****
/opt/xcat/bin/lsxcatd
********
SEE ALSO
********

View File

@ -0,0 +1,114 @@
#########
makentp.1
#########
.. highlight:: perl
****
NAME
****
\ **makentp**\ - Sets up the NTP server on the xCAT management node and the service node.
********
SYNOPSIS
********
\ *makentp [-h|--help]*\
\ *makentp [-v|--version]*\
\ *makentp [-a|--all] [-V|--verbose]*\
***********
DESCRIPTION
***********
The \ *makentp*\ command sets up the NTP server on the xCAT management node and the service node.
By default, it sets up the NTP server for the xCAT management node. If the -a flag is specified, the command will set up the NTP servers for the management node as well as all the service nodes that have \ *servicenode.ntpserver*\ set. It honors the site table attributes \ *extntpservers*\ and \ *ntpservers*\ described below:
\ *site.extntpservers*\ -- the NTP servers for the management node to sync with. If it is empty then the NTP server will use the management node's own hardware clock to calculate the system date and time.
\ *site.ntpservers*\ -- the NTP servers for the service node and compute node to sync with. The keyword <xcatmaster> means that the node's NTP server is the node that is managing it (either its service node or the management node).
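For example, a hedged sketch of setting these attributes with chdef (the external server names are placeholders):
.. code-block:: perl
  chdef -t site clustersite extntpservers="0.pool.ntp.org,1.pool.ntp.org"
  chdef -t site clustersite ntpservers="<xcatmaster>"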
To set up NTP on the compute nodes, add the \ **setupntp**\ postscript to the \ *postscripts*\ table and run the \ *updatenode node -P setupntp*\ command, as shown in the sketch below.
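For example (the node group name "compute" is a placeholder):
.. code-block:: perl
  chdef -t node -o compute -p postscripts=setupntp
  updatenode compute -P setupntp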
*******
OPTIONS
*******
\ **-a|--all**\
Setup NTP servers for both management node and the service node.
\ **-h|--help**\
Display usage message.
\ **-v|--version**\
Command Version.
\ **-V|--verbose**\
Verbose output.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
\*
To setup NTP server on the management node:
\ **makentp**\
\*
To setup NTP servers on both management node and the service node:
\ **makentp**\ \ *-a*\
*****
FILES
*****
/opt/xcat/bin/makentp
********
SEE ALSO
********

View File

@ -0,0 +1,312 @@
#######
mkdef.1
#######
.. highlight:: perl
****
NAME
****
\ **mkdef**\ - Use this command to create xCAT data object definitions.
********
SYNOPSIS
********
\ **mkdef**\ [\ **-h**\ |\ **--help**\ ] [\ **-t**\ \ *object-types*\ ]
\ **mkdef**\ [\ **-V**\ |\ **--verbose**\ ] [\ **-t**\ \ *object-types*\ ] [\ **-o**\ \ *object-names*\ ]
[\ **-z**\ |\ **--stanza**\ ] [\ **-d**\ |\ **--dynamic**\ ] [\ **-f**\ |\ **--force**\ ]
[[\ **-w**\ \ *attr*\ ==\ *val*\ ] [\ **-w**\ \ *attr*\ =~\ *val*\ ] ...] [\ *noderange*\ ] [\ *attr*\ =\ *val*\ [\ *attr*\ =\ *val...*\ ]]
[\ **-u**\ \ **provmethod**\ =<\ *install*\ |\ *netboot*\ |\ *statelite*\ > \ **profile**\ =<xxx> [\ *osvers*\ =\ *value*\ ] [\ *osarch*\ =\ *value*\ ]]
***********
DESCRIPTION
***********
This command is used to create xCAT object definitions which are stored in the xCAT database. If the definition already exists it will return an error message. The force option may be used to re-create a definition. In this case the old definition will be removed and the new definition will be created.
*******
OPTIONS
*******
\ *attr=val [attr=val ...]*\
Specifies one or more "attribute equals value" pairs, separated by spaces. Attr=val pairs must be specified last on the command line. Use the help option to get a list of valid attributes for each object type.
Note: when creating node object definitions, the 'groups' attribute is required.
\ **-d|--dynamic**\
Use the dynamic option to create dynamic node groups. This option must be used with -w option.
\ **-f|--force**\
Use the force option to re-create object definitions. This option removes the old definition before creating the new one.
\ **-h|--help**\
Display usage message.
\ *noderange*\
A set of comma delimited node names and/or group names. (must be the first parameter) See the "noderange" man page for details on supported formats.
\ **-o**\ \ *object-names*\
A set of comma delimited object names.
\ **-t**\ \ *object-types*\
A set of comma delimited object types. Use the help option to get a list of valid object types.
\ **-V|--verbose**\
Verbose mode.
\ **-w**\ \ *attr==val*\ \ **-w**\ \ *attr=~val*\ ...
Use one or multiple -w flags to specify the selection string that can be used to select objects. The operators ==, !=, =~ and !~ are available. For the mkdef command, the -w flag only makes sense for creating a dynamic node group. Use the help option to get a list of valid attributes for each object type.
Operator descriptions:
== Select nodes where the attribute value is exactly this value.
!= Select nodes where the attribute value is not this specific value.
=~ Select nodes where the attribute value matches this regular expression.
!~ Select nodes where the attribute value does not match this regular expression.
Note: if the "val" fields includes spaces or any other characters that will be parsed by shell, the "attr<operator>val" needs to be quoted. If the operator is "!~", the "attr<operator>val" needs to be quoted using single quote.
\ **-z|--stanza**\
Indicates that the file being piped to the command is in stanza format. See the xcatstanzafile man page for details on using xCAT stanza files.
\ **-u**\
Fill in the attributes such as template file, pkglist file and otherpkglist file of osimage object based on the specified parameters. It will search "/install/custom/" directory first, and then "/opt/xcat/share/".
The \ *provmethod*\ and \ *profile*\ must be specified. If \ *osvers*\ or \ *osarch*\ is not specified, the corresponding value of the management node will be used.
Note: this option only works for objtype \ **osimage**\ .
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
1.
To create a site definition.
.. code-block:: perl
mkdef -t site -o clustersite installdir=/xcatinstall
2.
To create a basic node definition.
.. code-block:: perl
mkdef -t node -o node01 groups="all,aix"
3.
To re-create the current definition of "node01".
.. code-block:: perl
mkdef -f -t node -o node01 nodetype=osi groups="linux"
(The group definitions are also created if they don't already exist.)
4.
To create a set of different types of definitions based on information contained in a stanza file.
.. code-block:: perl
cat defstanzafile | mkdef -z
5.
To create a group definition called LinuxNodes containing the nodes clstrn01 and clstrn02.
.. code-block:: perl
mkdef -t group -o LinuxNodes members="clstrn01,clstrn02"
6.
To create a node definition for an FSP node using the attributes provided by the group fspnodes.
.. code-block:: perl
mkdef -t node fspn1 groups=fspnodes nodetype=fsp
7.
To create node definitions for a set of node host names contained in the node range "node1,node2,node3"
.. code-block:: perl
mkdef -t node node1,node2,node3 power=hmc groups="all,aix"
8.
To create a dynamic node group definition called HMCMgtNodes containing all the HMC-managed nodes
.. code-block:: perl
mkdef -t group -o HMCMgtNodes -d -w mgt==hmc -w cons==hmc
9.
To create a dynamic node group definition called SLESNodes containing all the SLES nodes
.. code-block:: perl
mkdef -t group -o SLESNodes -d -w "os=~^sles[0-9]+$"
10.
To create an entry (7.0) in the policy table for user admin1
.. code-block:: perl
mkdef -t policy -o 7.0 name=admin1 rule=allow
11.
To create a node definition with nic attributes
.. code-block:: perl
mkdef -t node cn1 groups=all nicips.eth0="1.1.1.1|1.2.1.1" nicnetworks.eth0="net1|net2" nictypes.eth0="Ethernet"
12.
To create an osimage definition and fill in attributes automatically.
.. code-block:: perl
mkdef redhat6img -u profile=compute provmethod=statelite
*****
FILES
*****
$XCATROOT/bin/mkdef
(The XCATROOT environment variable is set when xCAT is installed. The
default value is "/opt/xcat".)
*****
NOTES
*****
This command is part of the xCAT software product.
********
SEE ALSO
********
chdef(1)|chdef.1, lsdef(1)|lsdef.1, rmdef(1)|rmdef.1, xcatstanzafile(5)|xcatstanzafile.5

View File

@ -0,0 +1,279 @@
#############
mkdsklsnode.1
#############
.. highlight:: perl
****
NAME
****
\ **mkdsklsnode**\ - Use this xCAT command to define and initialize AIX/NIM diskless machines.
********
SYNOPSIS
********
\ **mkdsklsnode [-h|--help ]**\
\ **mkdsklsnode [-V|--verbose] [-f|--force] [-n|--newname] [-i osimage_name] [-l location] [-u|--updateSN] [-k|--skipsync] [-p|--primarySN] [-b|--backupSN] [-S|--setuphanfs] noderange [attr=val [attr=val ...]]**\
***********
DESCRIPTION
***********
This xCAT command can be used to define and/or initialize AIX/NIM diskless machines. Once this step is completed you can use either the xCAT \ **rnetboot**\ command or the \ **rbootseq/rpower**\ commands to initiate a network boot of the nodes.
The command can be used to define and initialize a new NIM machine object or it can be used to reinitialize an existing machine to use a different operating system image.
This command will also create a NIM resolv_conf resource to be used when installing the node. If a resolv_conf resource is not already included in the xCAT osimage definition and if the "domain" and "nameservers" values are set then a new NIM resolv_conf resource will be created and allocated to the nodes.
The "domain" and "nameservers" attributes can be set in either the xCAT "network" definition used by the nodes or in the xCAT cluster "site" definition. The setting in the "network" definition will take priority.
The "search" field of the resolv.conf file will contain a list all the domains
listed in the xCAT network definitions and the xCAT site definiton.
The "nameservers" value can either be set to a specific IP address or the "<xcatmaster>" key word. The "<xcatmaster>" key word means that the value of the "xcatmaster" attribute of the node definition will be used in the /etc/resolv.conf file. (I.e. The name of the install server as known by the node.)
You can set the "domain" and "nameservers" attributes by using the \ **chdef**\ command. For example:
chdef -t network -o clstr_net domain=cluster.com nameservers=<xcatmaster>
If the "domain" and "nameservers" attributes are not set in either the nodes "network" definition or the "site" definition then no new NIM resolv_conf resource will be created.
If you are using xCAT service nodes the \ **mkdsklsnode**\ command will automatically determine the correct server(s) for the node and create the NIM definitions on that server(s).
When creating a new NIM machine definition the default is to use the same name as the xCAT node name that is provided.
You can use the "-n" option of the mkdsklsnode command to create and initialize an alternate NIM machine definition for the same physical nodes. This option allows you to set up a new image to use when a node is next rebooted while the node is currently running. This is possible because the NIM name for a machine definition does not have to be the hostname of the node. This allows you to have multiple NIM machine definitions for the same physical node. The naming convention for the new NIM machine name is "<xcat_node_name>_<image_name>", (Ex. "node01_61spot"). Since all the NIM initialization can be done while the node is running the downtime for for the node is reduced to the time it takes to reboot.
\ **Note:**\ When using the "-n" option make sure that the new osimage you specify and all the NIM resources that are used are different than what are currently being used on the nodes. The NIM resources should not be shared between the old osimage and the new osimage.
You can use the force option to reinitialize a node if it already has resources allocated or it is in the wrong NIM state. This option will reset the NIM node and deallocate resources before reinitializing. Use this option with caution since reinitializing a node will stop the node if it is currently running.
After the mkdsklsnode command completes you can use the \ **lsnim**\ command to check the NIM node definition to see if it is ready for booting the node. ("lsnim -l <nim_node_name>").
You can supply your own scripts to be run on the management node or on the service node (if there is hierarchy) for a node during the \ **mkdsklsnode**\ command. Such scripts are called \ **prescripts**\ . They should be copied to the /install/prescripts directory. A table called \ *prescripts*\ is used to specify the scripts and their associated actions. The scripts to be run at the beginning of the \ **mkdsklsnode**\ command are stored in the 'begin' column of the \ *prescripts*\ table. The scripts to be run at the end of the \ **mkdsklsnode**\ command are stored in the 'end' column of the \ *prescripts*\ table. Run the 'tabdump prescripts -d' command for details. An example for the 'begin' or the 'end' column is: \ *diskless:myscript1,myscript2*\ . The following two environment variables will be passed to each script: NODES contains the names of all the nodes that the script needs to run for, and ACTION contains the current nodeset action, in this case "diskless". If \ *#xCAT setting:MAX_INSTANCE=number*\ is specified in the script, the script will be invoked for each node in parallel, but no more than \ *number*\ instances will be invoked at a time. If it is not specified, the script will be invoked once for all the nodes.
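For example, a hedged sketch using chtab (the node group and script names are hypothetical):
.. code-block:: perl
  chtab node=aixnodes prescripts.begin="diskless:myscript1,myscript2"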
*******
OPTIONS
*******
\ **attr=val [attr=val ...]**\
Specifies one or more "attribute equals value" pairs, separated by spaces. Attr=
val pairs must be specified last on the command line. These are used to specify additional values that can be passed to the underlying NIM commands.
Valid values:
\ **duplex**\
Specifies the duplex setting (optional). Used when defining the NIM machine. Use this setting to configure the client's network interface. This value can be full or half. The default is full. (ex. "duplex=full")
\ **speed**\
Specifies the speed setting (optional). Used when defining the NIM machine. This is the communication speed to use when configuring the client's network interface. This value can be 10, 100, or 1000. The default is 100. (ex. "speed=100")
\ **psize**\
Specifies the size in Megabytes of the paging space for the diskless node.(optional) Used when initializing the NIM machine. The minimum and default size is 64 MB of paging space. (ex. "psize=256")
\ **sparse_paging**\
Specifies that the paging file should be created as an AIX sparse file, (ex. "sparse_paging=yes"). The default is "no".
\ **dump_iscsi_port**\
The tcpip port number to use to communicate dump images from the client to the dump resource server. Normally set by default. This port number is used by a dump resource server.
\ **configdump**\
Specifies the type of dump to be collected from the client. The values are
"selective", "full", and "none". If the configdump attribute is set to "full"
or "selective" the client will automatically be configured to dump to an iSCSI
target device. The "selective" memory dump will avoid dumping user data. The
"full" memory dump will dump all the memory of the client partition. Selective
and full memory dumps will be stored in subdirectory of the dump resource
allocated to the client. This attribute is saved in the xCAT osimage
definition.
\ **-b |--backupSN**\
When using backup service nodes only update the backup. The default is to update both the primary and backup service nodes.
\ **-f |--force**\
Use the force option to reinitialize the NIM machines.
\ **-h |--help**\
Display usage message.
\ **-i image_name**\
The name of an existing xCAT osimage definition. If this information is not provided on the command line the code checks the node definition for the value of the "provmethod" attribute. If the "-i" value is provided on the command line then that value will be used to set the "provmethod" attribute of the node definitions.
\ **-k|--skipsync**\
Use this option to have the mkdsklsnode command skip the NIM sync_roots operation. This option should only be used if you are certain that the shared_root resource does not have to be updated from the SPOT. Normally, when the SPOT is updated, you should do a sync_roots on the shared_root resource.
\ **-l|--location**\
The directory location to use when creating new NIM resolv_conf resources. The default location is /install/nim.
\ **-n|--newname**\
Create a new NIM machine object name for the xCAT node. Use the naming convention "<xcat_node_name>_<image_name>" for the new NIM machine definition.
\ **-p|--primarySN**\
When using backup service nodes only update the primary. The default is to update both the primary and backup service nodes.
\ **-S|--setuphanfs**\
Setup NFSv4 replication between the primary service nodes and backup service nodes to provide high availability NFS for the compute nodes. This option only exports the /install directory with NFSv4 replication settings, the data synchronization between the primary service nodes and backup service nodes needs to be taken care of through some mechanism.
\ **-u|--updateSN**\
Use this option if you wish to update the osimages but do not want to define or initialize the NIM client definitions. This option is only valid when the xCAT "site" definition attribute "sharedinstall" is set to either "sns" or "all".
\ **noderange**\
A set of comma delimited node names and/or group names. See the "noderange" man page for details on additional supported formats.
\ **-V |--verbose**\
Verbose mode.
************
RETURN VALUE
************
0
The command completed successfully.
1
An error has occurred.
********
EXAMPLES
********
1
Initialize an xCAT node named "node01" as an AIX diskless machine. The xCAT osimage named "61spot" should be used to boot the node.
\ **mkdsklsnode -i 61spot node01**\
2
Initialize all AIX diskless nodes contained in the xCAT node group called "aixnodes" using the image definitions pointed to by the "provmethod" attribute of the xCAT node definitions.
\ **mkdsklsnode aixnodes**\
3
Initialize diskless node "clstrn29" using the xCAT osimage called "61dskls". Also set the paging size to be 128M and specify the paging file be an AIX sparse file.
\ **mkdsklsnode -i 61dskls clstrn29 psize=128 sparse_paging=yes**\
4
Initialize an xCAT node called "node02" as an AIX diskless node. Create a new NIM machine definition name with the osimage as an extension to the xCAT node name.
\ **mkdsklsnode -n -i 61spot node02**\
*****
FILES
*****
/opt/xcat/bin/mkdsklsnode
*****
NOTES
*****
This command is part of the xCAT software product.
********
SEE ALSO
********
rmdsklsnode(1)|rmdsklsnode.1

View File

@ -0,0 +1,99 @@
############
mkflexnode.1
############
.. highlight:: perl
****
NAME
****
\ **mkflexnode**\ - Create a flexible node.
********
SYNOPSIS
********
\ **mkflexnode**\ [-h | --help]
\ **mkflexnode**\ [-v | --version]
\ **mkflexnode**\ \ *noderange*\
***********
DESCRIPTION
***********
A flexible node is a \ **Partition**\ in a complex. Creating a flexible node means creating a partition which includes all the slots defined in the xCAT blade node.
Before creating a flexible node, a general xCAT blade node should be defined. The \ *id*\ attribute of this node should be a range like 'a-b', which means the blades installed in slots 'a-b' need to be assigned to the partition. 'a' is the start slot, 'b' is the end slot. If the partition only has one slot, the slot range can be 'a'.
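For example, a minimal sketch (the node name and slot numbers are hypothetical) of setting the \ *id*\ attribute on an existing blade node definition:
.. code-block:: perl
  chdef -t node -o blade1 id="1-2"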
The action of creating a flexible node will impact the hardware status. Before creating it, the blades in the slot range should be in the \ **power off**\ state.
After the creation, use \ **lsflexnode**\ to check the status of the node.
The \ *noderange*\ can only be a blade node.
*******
OPTIONS
*******
\ **-h | --help**\
Display the usage message.
\ **-v | --version**\
Display the version information.
********
EXAMPLES
********
1
Create a flexible node based on the xCAT node blade1.
The node blade1 should belong to a complex, its \ *id*\ attribute should be set correctly, and all the slots should be in the \ **power off**\ state.
.. code-block:: perl
mkflexnode blade1
*****
FILES
*****
/opt/xcat/bin/mkflexnode
********
SEE ALSO
********
lsflexnode(1)|lsflexnode.1, rmflexnode(1)|rmflexnode.1

View File

@ -0,0 +1,232 @@
##########
mkhwconn.1
##########
.. highlight:: perl
****
NAME
****
\ **mkhwconn**\ - Sets up connections for CEC and Frame nodes to HMC nodes or hardware server.
********
SYNOPSIS
********
\ **mkhwconn**\ [\ **-h**\ | \ **--help**\ ]
\ **mkhwconn**\ [\ **-v**\ | \ **--version**\ ]
PPC (with HMC) specific:
========================
\ **mkhwconn**\ [\ **-V**\ | \ **--verbose**\ ] \ *noderange*\ \ **-t**\ [\ **--port**\ \ *port_value*\ ]
\ **mkhwconn**\ [\ **-V**\ | \ **--verbose**\ ] \ *noderange*\ \ **-s**\ [\ *hmcnode*\ \ **--port**\ \ *port_value*\ ]
\ **mkhwconn**\ [\ **-V**\ | \ **--verbose**\ ] \ *noderange*\ \ **-p**\ \ *hmc*\ [\ **-P**\ \ *passwd*\ ] [\ **--port**\ \ *port_value*\ ]
PPC (using Direct FSP Management) specific:
===========================================
\ **mkhwconn**\ \ *noderange*\ \ **-t**\ [\ **-T tooltype**\ ] [\ **--port**\ \ *port_value*\ ]
***********
DESCRIPTION
***********
For PPC (with HMC) specific:
This command is used to set up connections for CEC and Frame nodes to HMC nodes. (If the connection already exists, it will not break it.)
This command is useful when you have multiple HMCs, each of which will manage a subset of the CECs/Frames. Use \ **mkhwconn**\ to tell
each HMC which CECs/Frames it should manage. When using this, you should turn off the self-discovery on each HMC. You also need
to put all the HMCs and all the Frames on a single flat service network.
When \ **-t**\ is specified, this command reads the connection information from the xCAT ppc table (e.g. the parent attribute), and reads the user/password from the ppcdirect table. Then this command will assign CEC nodes and Frame nodes to HMC nodes.
When \ **-p**\ is specified, this command gets the connection information from command line arguments. If \ **-P**\ is not specified, the default password for CEC and Frame nodes is used.
The flag \ **-s**\ is used to make the connection between the frame and its Service Focal Point (HMC). mkhwconn will also set the connections between the CECs within this Frame and the HMC. The sfp of the frame/CEC can either be defined in the ppc table beforehand or specified on the command line after the flag -s. If the user runs mkhwconn noderange -s HMC_name, it will not only make the connections but also set the sfp attributes for these nodes in the ppc table.
In any case, before running this command, the CEC and Frame nodes need to be defined with the correct nodetype.nodetype value (cec or frame) and nodehm.mgt value (hmc).
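For example, a minimal sketch of setting these attributes on already-defined frame and CEC nodes with chdef (the node names are hypothetical):
.. code-block:: perl
  chdef -t node -o frame01 nodetype=frame mgt=hmc
  chdef -t node -o cec01 nodetype=cec mgt=hmc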
Note: If a CEC belongs to a frame, which has a BPA installed, this CEC should not be assigned to an HMC individually. Instead, the whole frame should be assigned to the HMC.
For PPC (using Direct FSP Management) specific:
It is used to set up connections for CEC and Frame nodes to the hardware server on the management node (or service node). It can only be done according to the node definitions in the xCAT DB. This command will try to read the user/password from the ppcdirect table first. If that fails, it reads them from the passwd table. Commonly, the username is \ **HMC**\ . If using the \ **ppcdirect**\ table, each CEC/Frame and user/password should be stored in the \ **ppcdirect**\ table. If using the \ **passwd**\ table, the key should be "\ **cec**\ " or "\ **frame**\ ", and the related user/password are stored in the \ **passwd**\ table.
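For example, a hedged sketch of storing a default CEC credential in the passwd table with chtab (the password value is a placeholder):
.. code-block:: perl
  chtab key=cec passwd.username=HMC passwd.password=abc123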
When \ **--port**\ is specified, this command will create the connections for CECs/Frames whose side in \ **vpd**\ table is equal to port value.
*******
OPTIONS
*******
\ **-h|--help**\
Display usage message.
\ **-t**\
Read connection information from xCAT DB (ppc and ppcdirect tables). Use this option if you need to connect multiple CECs/Frames
to multiple HMCs in a single command.
\ **-p**\
The HMC node name. Only one HMC nodes can be specified by this flag. To setup connection for multiple HMC nodes, use flag \ **-t**\ .
\ **-P**\
The password of HMC based CEC/Frame login user(Default user name is 'HMC'). This flag is optional.
\ **-T**\
The tooltype is used to communicate to the CEC/Frame. The value could be \ **lpar**\ or \ **fnm**\ . The tooltype value \ **lpar**\ is for xCAT and \ **fnm**\ is for CNM. The default value is "\ **lpar**\ ".
\ **--port**\
The port value specifies which special side will be used to create the connection to the CEC/Frame. The value could only be specified as "\ **0**\ " or "\ **1**\ " and the default value is "\ **0,1**\ ". If the user wants to use all ports to create the connection, he should not specify this value. If the port value is specified as "\ **0**\ ", in the vpd table, the side column should be \ **A-0**\ and \ **B-0**\ ; If the port value is specified as "\ **1**\ ", the side column should be \ **A-1**\ and \ **B-1**\ . When making the hardware connection between CEC/Frame and HMC, the value is used to specify the fsp/bpa port of the cec/frame and will be organized in order of "\ **A-0,A-1,B-0,B-1**\ ". If any side does not exist, the side will simply be ignored. Generally, only one port of an fsp/bpa is connected while the other port is used as backup.
\ **-s**\
The flag -s is used to make the connection between the frame and its Service Focal Point(HMC). -s flag is not supposed to work with other functional flags.
\ **-V|--verbose**\
Verbose output.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
1.
To setup the connection for all CEC nodes in node group cec to HMC node, according to the definition in xCAT DB:
.. code-block:: perl
mkhwconn cec -t
2.
To setup the connection for Frame nodes in node group frame to HMC node hmc1, with password 'abc123':
.. code-block:: perl
mkhwconn frame -p hmc1 -P abc123
3.
To setup the connections for all CEC nodes in node group cec to hardware server, and the tooltype value is lpar:
.. code-block:: perl
mkhwconn cec -t -T lpar
4.
To setup the connections for all cecs nodes in node group cec to hardware server, and the tooltype value is lpar, and the port value is 1:
.. code-block:: perl
mkhwconn cec -t -T lpar --port 1
5.
To set up the connection between the frame and its SFP node. This command will also set the connections between the CECs within this frame and their SFP node. The user needs to define HMC_name in the database in advance, but there is no need to set the sfp attribute for these nodes; xCAT will set HMC_name as ppc.sfp for these nodes. The CECs within this frame should have the same sfp attribute as the frame.
.. code-block:: perl
mkhwconn cec -s HMC_name -P HMC_passwd
*****
FILES
*****
$XCATROOT/bin/mkhwconn
(The XCATROOT environment variable is set when xCAT is installed. The
default value is "/opt/xcat".)
*****
NOTES
*****
This command is part of the xCAT software product.
********
SEE ALSO
********
lshwconn(1)|lshwconn.1, rmhwconn(1)|rmhwconn.1

View File

@ -0,0 +1,476 @@
############
mknimimage.1
############
.. highlight:: perl
****
NAME
****
\ **mknimimage**\ - Use this xCAT command to create xCAT osimage definitions and related AIX/NIM resources. The command can also be used to update an existing AIX diskless image(SPOT).
********
SYNOPSIS
********
\ **mknimimage [-h | --help ]**\
\ **mknimimage [-V] -u osimage_name [attr=val [attr=val ...]]**\
\ **mknimimage [-V] [-f|--force] [-r|--sharedroot] [-D|--mkdumpres] [-l location] [-c|--completeosimage] [-s image_source] [-i current_image] [-p|--cplpp] [-t nimtype] [-m nimmethod] [-n mksysbnode] [-b mksysbfile] osimage_name [attr=val [attr=val ...]]**\
***********
DESCRIPTION
***********
This command will create both an xCAT osimage definition and the corresponding NIM resource definitions. The command can also be used to update an existing AIX diskless image(SPOT).
The command will also install the NIM master software and configure NIM if needed.
The naming convention for the NIM SPOT resource definition is to use the same name as the xCAT osimage. The naming convention for any other NIM resources that are created is "<osimage_name>_<resource_type>". (ex. "61image_lpp_source" )
When creating a mksysb image definition you must specify either the "-n" or the "-b" option. The "-n" option can be used to create a mksysb image from an existing NIM client machine. The "-b" option can be used to specify an existing mksysb backup file.
\ **Adding software and configuration files to the osimage.**\
When creating a diskless osimage definition you also have the option of automatically updating the NIM SPOT resource. You can have additional software installed or you can have configuration files added or updated. To have software installed you must provide either the names of NIM installp_bundle resources or fileset names on the command line using the "attr=val" option. You may also supply the installp flags, RPM flags, emgr flags to use when installing the software.
To have configuration files updated you must provide the full path name of a "synclists" file which contains the list of actual files to update. The xCAT osimage definition that is created will contain the installp_bundle, otherpkgs, and synclists files that are provided on the command line.
\ **Updating an existing xCAT osimage**\
If you wish to update an existing diskless image after it has already been created you can use the "-u" (update) option. In this case the xCAT osimage definition will not be updated.
There are two ways to use the update feature.
You can update the osimage definition and run the \ **mknimimage**\ command with no "installp_bundle", "otherpkgs", or "synclists" command line values. The information for updating the SPOT will come from the osimage definition only. This has the advantage of keeping a record of any changes that were made to the SPOT.
Or, you could do a more ad hoc update by providing one or more of the "installp_bundle", "otherpkgs", or "synclists" values on the command line. If any of these values are provided the \ **mknimimage**\ command will use those values only. The osimage definition will not be used or updated.
WARNING: Installing random RPM packages in a SPOT may have unpredictable consequences. The SPOT is a very restricted environment and some RPM packages may corrupt the SPOT or even hang your management system. Try to be very careful about the packages you install. When installing RPMs, if the mknimimage command hangs or if there are file systems left mounted after the command completes you may need to reboot your management node to recover. This is a limitation of the current AIX support for diskless systems.
\ **Copying an xCAT osimage.**\
You can use the "-i" and "-p" options to copy an existing diskless osimage. To do this you must supply the name of an existing xCAT osimage definition and the name of the new osimage you wish to create. The \ **mknimimage**\ command will do the following:
- create a new xCAT osimage definition using the new name that was specified.
- copy the NIM SPOT resource to a new location and define it to NIM using a new name.
- if the original osimage included a NIM "shared_root" resource then a new shared_root resource will be created for the new SPOT.
- any other resources (or attributes) included in the original osimage will be included in the new osimage definition.
- if the "-p" option is specified then the original NIM lpp_source resource will be copied to a new location and redfined to NIM. (The default would be to use the original lpp_source - to save file system space.)
\ **Additional information**\
IMPORTANT: The NIM lpp_source and SPOT resources can get quite large. Always make sure that you have sufficient file system space available before running the \ **mknimimage**\ command.
To list the contents of the xCAT osimage definition use the xCAT \ **lsdef**\ command ("lsdef -t osimage -l -o <osimage_name>").
To remove an xCAT osimage definition along with the associated NIM resource definitions use the \ **rmnimimage**\ command. Be careful not to accidentally remove NIM resources if they are still needed.
To list a NIM resource definition use the AIX \ **lsnim**\ command ("lsnim -l <resource_name>").
To check the validity of a SPOT or lpp_source resource use the AIX \ **nim**\ command ("nim -o check <resource-name>").
To remove specific NIM resource definitions use the AIX \ **nim**\ command ("nim -o remove <resource-name>").
*******
OPTIONS
*******
\ **attr=val [attr=val ...]**\
Specifies one or more "attribute equals value" pairs, separated by spaces. Attr=val pairs must be specified last on the command line.
Currently supported attributes:
\ **bosinst_data**\
The name of a NIM bosinst_data resource.
\ **dump**\
The name of the NIM dump resource.
\ **fb_script**\
The name of a NIM fb_script resource.
\ **home**\
The name of the NIM home resource.
\ **installp_bundle**\
One or more comma separated NIM installp_bundle resources.
\ **lpp_source**\
The name of the NIM lpp_source resource.
\ **mksysb**\
The name of a NIM mksysb resource.
\ **otherpkgs**\
One or more comma separated installp, emgr, or rpm packages. The packages must
have prefixes of 'I:', 'E:', or 'R:', respectively. (ex. R:foo.rpm)
\ **paging**\
The name of the NIM paging resource.
\ **resolv_conf**\
The name of the NIM resolv_conf resource.
\ **root**\
The name of the NIM root resource.
\ **script**\
The name of a NIM script resource.
\ **shared_home**\
The name of the NIM shared_home resource.
\ **shared_root**\
A shared_root resource represents a directory that can be used as a / (root) directory by one or more diskless clients.
\ **spot**\
The name of the NIM SPOT resource.
\ **synclists**\
The fully qualified name of a file containing a list of files to synchronize on the nodes.
\ **tmp**\
The name of the NIM tmp resource.
\ **installp_flags**\
The alternate flags to be passed along to the AIX installp command. (The default for installp_flags is "-abgQXY".)
\ **rpm_flags**\
The alternate flags to be passed along to the AIX rpm command. (The default for
rpm_flags is "-Uvh ".) The mknimimage command will check each rpm to see if
it is installed. It will not be reinstalled unless you specify the appropriate
rpm option, such as '--replacepkgs'.
\ **emgr_flags**\
The alternate flags to be passed along to the AIX emgr command. (There is no default flags for the emgr command.)
\ **dumpsize**\
The maximum size for a single dump image the dump resource will accept. Space is not allocated until a client starts to dump. The default size is 50GB. The dump resource should be large enough to hold the expected AIX dump and snap data.
\ **max_dumps**\
The maximum number of archived dumps for an individual client. The default is one.
\ **snapcollect**\
Indicates that after a dump is collected then snap data should be collected. The snap data will be collected in the clients dump resource directory. Values are "yes" or "no". The default is "no".
\ **nfs_vers**\
Value Specifies the NFS protocol version required for NFS access.
\ **nfs_sec**\
Value Specifies the security method required for NFS access.
Note that you may specify multiple "script", "otherpkgs", and "installp_bundle" resources by using a comma separated list. (ex. "script=ascript,bscript"). RPM names may be included in the "otherpkgs" list by using an "R:" prefix (ex. "R:whatever.rpm"). epkg (AIX interim fix package) file names may be included in the "otherpkgs" using the 'E:' prefix. (ex. "otherpkgs=E:IZ38930TL0.120304.epkg.Z").
\ **-b mksysbfile**\
Used to specify the path name of a mksysb file to use when defining a NIM mksysb resource.
\ **-c|--completeosimage**\
Complete the creation of the osimage definition passed in on the command line. This option will use any additional values passed in on the command line and/or it will attempt to create required resources in order to complete the definition of the xCAT osimage. For example, if the osimage definition is missing a spot or shared_root resource the command will create those resources and add them to the osimage definition.
\ **-f|--force**\
Use the force option to re-create xCAT osimage definition. This option removes the old definition before creating the new one. It does not remove any of the NIM resource definitions named in the osimage definition. Use the \ **rmnimimage**\ command to remove the NIM resources associated with an xCAT osimage definition.
\ **-h |--help**\
Display usage message.
\ **osimage_name**\
The name of the xCAT osimage definition. This will be used as the name of the xCAT osimage definition as well as the name of the NIM SPOT resource.
\ **-D|--mkdumpres**\
Create a diskless dump resource.
\ **-i current_image**\
The name of an existing xCAT osimage that should be copied to make a new xCAT osimage definition. Only valid when defining a "diskless" or "dataless" type image.
\ **-l location**\
The directory location to use when creating new NIM resources. The default location is /install/nim.
\ **-m nimmethod**\
Used to specify the NIM installation method to use. The possible values are "rte" and "mksysb". The default is "rte".
\ **-n mksysbnode**\
The xCAT node to use to create a mksysb image. The node must be a defined as a NIM client machine.
\ **-p|--cplpp**\
Use this option when copying existing diskless osimages to indicate that you also wish to have the lpp_resource copied. This option is only valid when using the "-i" option.
\ **-r|--sharedroot**\
Use this option to specify that a NIM "shared_root" resource be created for the AIX diskless nodes. The default is to create a NIM "root" resource. This feature is only available when using AIX version 6.1.4 or beyond. See the AIX/NIM documentation for a description of the "root" and "shared_root" resources.
\ **-s image_source**\
The source of software to use when creating the new NIM lpp_source resource. This could be a source directory or a previously defined NIM lpp_source resource name.
\ **-t nimtype**\
Used to specify the NIM machine type. The possible values are "standalone", "diskless" or "dataless". The default is "standalone".
\ **-u**\
Used to update an AIX/NIM SPOT resource with additional software and configuration files. This option is only valid for xCAT diskless osimage objects. The SPOT resource associated with the xCAT osimage definition will be updated. This option can also be used to update the nfs_vers attribute from NFSv3 to NFSv4 for the NIM resources associated with diskful or diskless image.
\ **-V |--verbose**\
Verbose mode.
************
RETURN VALUE
************
0
The command completed successfully.
1
An error has occurred.
********
EXAMPLES
********
1) Create an osimage definition and the basic NIM resources needed to do a NIM "standalone" "rte" installation of node "node01". Assume the software contained on the AIX product media has been copied to the /AIX/instimages directory.
\ **mknimimage -s /AIX/instimages 61image**\
2) Create an osimage definition that includes some additional NIM resources.
\ **mknimimage -s /AIX/instimages 61image installp_bundle=mybndlres,addswbnd**\
This command will create lpp_source, spot, and bosinst_data resources using the source specified by the "-s" option. The installp_bundle information will also be included in the osimage definition. The mybndlres and addswbnd resources must be created before using this osimage definition to install a node.
3) Create an osimage definition that includes a mksysb image and related resources.
\ **mknimimage -m mksysb -n node27 newsysb spot=myspot bosinst_data=mybdata**\
This command will use node27 to create a mksysb backup image and use that to define a NIM mksysb resource. The osimage definition will contain the name of the mksysb resource as well as the spot and bosinst_data resource.
4) Create an osimage definition using a mksysb image provided on the command line.
\ **mknimimage -m mksysb -b /tmp/backups/mysysbimage newsysb spot=myspot bosinst_data=mybdata**\
This command defines a NIM mksysb resource using mysysbimage.
5) Create an osimage definition and create the required spot definition using the mksysb backup file provided on the command line.
\ **mknimimage -m mksysb -b /tmp/backups/mysysbimage newsysb bosinst_data=mybdata**\
This command defines a NIM mksysb resource and a spot definition using mysysbimage.
6) Create a diskless image called 61dskls using the AIX source files provided in the /AIX/instimages directory.
\ **mknimimage -t diskless -s /AIX/instimages 61dskls**\
7) Create a diskless image called "614dskls" that includes a NIM "shared_root" and a "dump" resource. Use the existing NIM lpp_resource called "614_lpp_source". Also specify verbose output.
\ **mknimimage -V -r -D -t diskless -s 614_lpp_source 614dskls snapcollect=yes**\
The "snapcollect" attribute specifies that AIX "snap" data should be include when a system dump is initiated.
8) Create a new diskless image by copying an existing image.
\ **mknimimage -t diskless -i 61cosi 61cosi_updt1**\
Note: If you also wish to have the original lpp_source copied and defined use the -p option.
\ **mknimimage -t diskless -i 61cosi -p 61cosi_updt1**\
9) Create a diskless image using an existing lpp_source resource named "61cosi_lpp_source" and include NIM tmp and home resources. This assumes that the "mytmp" and "myhome" NIM resources have already been created by using NIM commands.
\ **mknimimage -t diskless -s 61cosi_lpp_source 611cosi tmp=mytmp home=myhome**\
10) Create a diskless image and update it with additional software using rpm flags and configuration files.
\ **mknimimage -t diskless -s 61cosi_lpp_source 61dskls otherpkgs=I:fset1,R:foo.rpm,E:IZ38930TL0.120304.epkg.Z synclists=/install/mysyncfile rpm_flags="-i --nodeps"**\
The xCAT osimage definition created by this command will include the "otherpkgs" and "synclists" values. The NIM SPOT resource associated with this osimage will be updated with the additional software using rpm flags "-i --nodeps" and configuration files.
11) Update an existing diskless image (AIX/NIM SPOT) using the information saved in the xCAT "61dskls" osimage definition. Also specify verbose messages.
\ **mknimimage -V -u 61dskls**\
12) Update an existing diskless image called "61dskls". Install the additional software specified in the NIM "bndres1" and "bndres2" installp_bundle resources using the installp flags "-agcQX". (The NIM "bndres1" and "bndres2" definitions must be created before using them in this command.)
\ **mknimimage -u 61dskls installp_bundle=bndres1,bndres2 installp_flags="-agcQX"**\
Note that when "installp_bundle", "otherpkgs", or "synclists" values are specified with the "-u" option then the xCAT osimage definiton is not used or updated.
13) Update an existing image to support NFSv4. Also specify verbose messages.
\ **mknimimage -V -u 61dskls nfs_vers=4**\
*****
FILES
*****
/opt/xcat/bin/mknimimage
*****
NOTES
*****
This command is part of the xCAT software product.
********
SEE ALSO
********
rmnimimage(1)|rmnimimage.1

View File

@ -0,0 +1,226 @@
########
mkvlan.1
########
.. highlight:: perl
****
NAME
****
\ **mkvlan**\ - Takes a list of nodes and creates a private tagged vlan for them.
********
SYNOPSIS
********
\ **mkvlan**\ [\ *vlanid*\ ] \ **-n**\ |\ **--nodes**\ \ *noderange*\ [\ **-t**\ |\ **--net**\ \ *subnet*\ ] [\ **-m**\ |\ **--mask**\ \ *netmask*\ ] [\ **-p**\ |\ **--prefix**\ \ *hostname_prefix*\ ] [\ **-i**\ |\ **--interface**\ \ *nic*\ ]
\ **mkvlan**\ [\ **-h**\ |\ **--help**\ ]
\ **mkvlan**\ [\ **-v**\ |\ **--version**\ ]
***********
DESCRIPTION
***********
The \ **mkvlan**\ command takes a list of nodes and moves them to a private vlan.
This command will configure the switch to create a new tagged vlan on the given nic. The primary nic will be used if the nic is not specified. The new vlan ID is given by the command. However, if it is omitted, xCAT will automatically generate the new vlan ID by querying all the switches involved and finding out the smallest common number that is not used by any existing vlans. The subnet and the netmask for the vlan will be derived from the value of "vlannets" and "vlanmasks" from the \ *site*\  table if -t and -m are not specified. The following are the default site table entries:
.. code-block:: perl
vlannets="|(\d+)|10.($1+0).0.0|";
vlanmask="255.255.0.0";
The vlan network will be entered in the \ *networks*\  table. The nodes will be added to the vlan using the vlan tagging technique, and new IP addresses and new hostnames will be assigned to the nodes. The -p flag specifies the node hostname prefix for the nodes. If it is not specified, by default the hostnames for the nodes have the following format:
v<vlanid>nY where Y is the node number. For example, the hostname for node 5 on vlan 10 is v10n5.
The \ *switch.vlan*\  attribute will be updated with the new vlan id for standalone nodes. For KVM guests, the \ *vm.nics*\  attribute identifies which vlan this node belongs to. For example: vl3 means this node is in vlan 3.
If more than one switch is involved in the vlan, the ports that connect the switches need to be entered in \ *switches.linkports*\  with the following format:
.. code-block:: perl
<port number>:switch,<port number>:switch....
For example:
.. code-block:: perl
"42:switch1,43:switch2"
This command will automatically configure the cross-over ports if the given nodes are on different switches.
For added security, the root guard and bpdu guard will be enabled for the ports in this vlan. However, the guards will not be disabled if the ports are removed from the vlan using chvlan or rmvlan commands. To disable them, you need to use the switch command line interface. Please refer to the switch command line interface manual to see how to disable the root guard and bpdu guard for a port.
**********
Parameters
**********
\ *vlanid*\ is a unique vlan number. If it is omitted, xCAT will automatically generate the new vlan ID by querying all the switches involved and finding out the smallest common number that is not used by any existing vlans. Use \ **lsvlan**\ to find out the existing vlan ids used by xCAT.
*******
OPTIONS
*******
\ **-n|--nodes**\  The nodes or groups to be included in the vlan. They can be standalone nodes or KVM guests. It takes the noderange format. Please check the man page for noderange for details.
\ **-t|--net**\ The subnet for the vlan.
\ **-m|--mask**\ The netmask for the vlan
\ **-p|--prefix**\  The prefix for the new hostnames of the nodes in the vlan.
\ **-i|--interface**\  The interface name where the vlan will be tagged on. If omitted, the xCAT management network will be assumed. For KVM, this is the interface name on the host.
\ **-h|--help**\ Display usage message.
\ **-v|--version**\ The Command Version.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
To start, the xCAT \ *switch*\  and \ *switches*\  tables need to be filled with switch and port info for the nodes. For example, the \ *switch*\  table will look like this:
#node,switch,port,vlan,interface,comments,disable
"node1","switch1","10",,,,
"node1","switch2","1",,"eth1",,
"node2","switch1","11",,"primary",,
"node2","switch2","2",,"eth1",,
"node3","switch1","12",,"primary:eth0",,
"node3","switch2","3",,"eth1",,
Please note that the interface value for the management (primary) network can be empty, the word "primary" or "primary:ethx". For other networks, the interface attribute must be specified.
The following is an example of the switches table
#switch,snmpversion,username,password,privacy,auth,linkports,sshusername,sshpassword,switchtype,comments,disable
"switch1","3","username","passw0rd",,"sha","48:switch2",,,,,
"switch2","2",,,,,"43:switch1",,,,,
1.
To make a private vlan for node1, node2 and node3
.. code-block:: perl
mkvlan -n node1,node2,node3
The vlan will be created on eth0 for the nodes.
2.
To make a private vlan for node1, node2 and node3 on eth1,
.. code-block:: perl
mkvlan -n node1,node2,node3 -i eth1
3.
To make a private vlan for node1, node2 and node3 with a given subnet and netmask.
.. code-block:: perl
mkvlan -n node1,node2,node3 -t 10.3.2.0 -m 255.255.255.0
4.
To make a private vlan for KVM guests node1 and node2
.. code-block:: perl
 chtab key=usexhrm site.value=1
mkdef node1 arch=x86_64 groups=kvm,all installnic=mac primarynic=mac mgt=kvm netboot=pxe nfsserver=10.1.0.204 os=rhels6 profile=compute provmethod=install serialport=0 serialspeed=115200 vmcpus=1 vmhost=x3650n01 vmmemory=512 vmnics=br0 vmstorage=nfs://10.1.0.203/vms
mkdef node2 arch=x86_64 groups=kvm,all installnic=mac primarynic=mac mgt=kvm netboot=pxe nfsserver=10.1.0.204 os=rhels6 profile=compute provmethod=install serialport=0 serialspeed=115200 vmcpus=1 vmhost=x3650n01 vmmemory=512 vmnics=br0 vmstorage=nfs://10.1.0.203/vms
mkvlan -n node1,node2
mkvm node1,node2 -s 20G
rpower node1,node2 on
rinstall node1,node2
*****
FILES
*****
/opt/xcat/bin/mkvlan
********
SEE ALSO
********
chvlan(1)|chvlan.1, rmvlan(1)|rmvlan.1, lsvlan(1)|lsvlan.1

View File

@ -0,0 +1,564 @@
######
mkvm.1
######
.. highlight:: perl
****
NAME
****
\ **mkvm**\ - Creates HMC-, DFM-, IVM-, and zVM-managed partitions or other virtual machines.
********
SYNOPSIS
********
Common:
=======
\ **mkvm**\ [\ **-h**\ | \ **--help**\ ]
\ **mkvm**\ [\ **-v**\ | \ **--version**\ ]
For PPC (with HMC) specific:
============================
\ **mkvm**\ [\ **-V**\ | \ **--verbose**\ ] \ *noderange*\ \ **-i**\ \ *id*\ \ **-l**\ \ *singlenode*\
\ **mkvm**\ [\ **-V**\ | \ **--verbose**\ ] \ *noderange*\ \ **-c**\ \ *destcec*\ \ **-p**\ \ *profile*\
\ **mkvm**\ [\ **-V**\ | \ **--verbose**\ ] \ *noderange*\ \ **--full**\
For PPC (using Direct FSP Management) specific:
===============================================
\ **mkvm**\ \ *noderange*\ [\ **--full**\ ]
\ **mkvm**\ \ *noderange*\ [\ **vmcpus=min/req/max**\ ] [\ **vmmemory=min/req/max**\ ]
[\ **vmphyslots=drc_index1,drc_index2...**\ ] [\ **vmothersetting=hugepage:N,bsr:N**\ ]
[\ **vmnics=vlan1[,vlan2..]**\ ] [\ **vmstorage=<N|viosnode:slotid>**\ ] [\ **--vios**\ ]
For KVM:
========
\ **mkvm**\ \ *noderange*\ [\ **-m|--master**\ \ *mastername*\ ] [\ **-s|--size**\ \ *disksize*\ ] [\ **--mem**\ \ *memsize*\ ] [\ **--cpus**\ \ *cpucount*\ ] [\ **-f|--force**\ ]
For Vmware:
===========
\ **mkvm**\ \ *noderange*\ [\ **-s**\ |\ **--size**\ \ *disksize*\ ] [\ **--mem**\ \ *memsize*\ ] [\ **--cpus**\ \ *cpucount*\ ]
For zVM:
========
\ **mkvm**\ \ *noderange*\ [\ *directory_entry_file_path*\ ]
\ **mkvm**\ \ *noderange*\ [\ *source_virtual_machine*\ ] [\ **pool=**\ \ *disk_pool*\ ]
***********
DESCRIPTION
***********
For PPC (with HMC) specific:
============================
The first form of mkvm command creates new partition(s) with the same profile/resources as the partition specified by \ *singlenode*\ . The -i and \ *noderange*\ specify the starting numeric partition number and the \ *noderange*\ for the newly created partitions, respectively. The LHEA port numbers and the HCA index numbers will be automatically increased if they are defined in the source partition.
The second form of this command duplicates all the partitions from the source specified by \ *profile*\ to the destination specified by \ *destcec*\ . The source and destination CECs can be managed by different HMCs.
Please make sure the nodes in the \ *noderange*\  are defined in the \ *nodelist*\  table and the \ *mgt*\  attribute is set to 'hmc' in the \ *nodehm*\  table before running this command.
Please note that the mkvm command currently only supports creating standard LPARs, not virtual LPARs working with VIOS server.
For PPC (using Direct FSP Management) specific:
===============================================
With option \ *full*\ , a partition using all the resources on a normal power machine will be created.
If no option is specified, a partition will be created using the parameters specified with attributes such as 'vmcpus', 'vmmemory', 'vmphyslots', 'vmothersetting', 'vmnics', 'vmstorage'. Those attributes can either be specified beforehand with the '\*def' commands or be specified with this command.
For KVM and Vmware:
===================
The mkvm command creates new virtual machine(s) with a hard disk of size \ *disksize*\ , \ *memsize*\  of memory and \ *cpucount*\  cpus.
For KVM: If \ **-f**\ |\ **--force**\ is specified, the storage will be destroyed first if it existed.
For zVM:
========
The first form of mkvm creates a new virtual machine based on a directory entry.
The second form of this command creates a new virtual machine with the same profile/resources as the specified node (cloning).
*******
OPTIONS
*******
\ **-h|--help**\
Display usage message.
\ **-c**\
The cec (fsp) name for the destination.
\ **--cpus**\
The cpu count which will be created for the kvm/vmware virtual machine.
\ **--full**\
Request to create a new full system partition for each CEC.
\ **vmcpus=value**\ \ **vmmemory=value**\ \ **vmphyslots=value**\ \ **vmothersetting=value**\ \ **vmnics=value**\ \ **vmstorage=value**\ [\ **--vios**\ ]
Specifies the parameters used to create a partition. The \ *vmcpus*\  and \ *vmmemory*\  attributes are necessary, and values specified with this command take higher priority. If the value of any of the three options is not specified, the corresponding value specified for the node object will be used. If any of the three attributes is neither specified with this command nor specified with the node object, an error will be returned. Refer to lsvm(1)|lsvm.1 for more information about 'drc_index' for \ *vmphyslots*\ .
The \ *--vios*\  option specifies that the partition to be created is a VIOS partition. If it is specified, the value for \ *vmstorage*\  shall be a number indicating how many vSCSI server adapters will be created, and if no value is specified for \ *vmphyslots*\ , all the physical slots of the power machine will be assigned to the VIOS partition. If it is not specified, \ *vmstorage*\  shall be in the form \ *vios_name:server_slotid*\  to specify the vios and the virtual slot id of the vSCSI server adapter that the logical partition will connect to.
\ **-f|--force**\
If \ **-f|--force**\ is specified, the storage will be destroyed first if it existed.
\ **-i**\
Starting numeric id of the newly created partitions.
\ **-l**\
The partition name of the source.
\ **--mem**\
The memory size which will be used for the new created kvm/vmware virtual machine. Unit is Megabyte.
\ **-p**\
The file that contains the profiles for the source partitions.
\ **-s|--size**\
The size of storage which will be created for the kvm/vmware virtual machine.
\ **-v|--version**\
Command Version.
\ **-V|--verbose**\
Verbose output.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
1. To create a new HMC-managed partition lpar5 based on the profile/resources of lpar4, enter:
.. code-block:: perl
mkdef -t node -o lpar5 mgt=hmc groups=all
then:
.. code-block:: perl
mkvm lpar5 -i 5 -l lpar4
Output is similar to:
.. code-block:: perl
lpar5: Success
2. To create new HMC-managed partitions lpar5-lpar8 based on the profile/resources of lpar4, enter:
.. code-block:: perl
mkdef -t node -o lpar5-lpar8 mgt=hmc groups=all
then:
.. code-block:: perl
mkvm lpar5-lpar8 -i 5 -l lpar4
Output is similar to:
.. code-block:: perl
lpar5: Success
lpar6: Success
lpar7: Success
lpar8: Success
3. To duplicate all the HMC-managed partitions associated with cec01 on cec02, first save the lpars from cec01 to a file:
.. code-block:: perl
lsvm lpar01-lpar04 > /tmp/myprofile
then create lpars on cec02:
.. code-block:: perl
mkvm lpar05-lpar08 -c cec02 -p /tmp/myprofile
Output is similar to:
.. code-block:: perl
lpar5: Success
lpar6: Success
lpar7: Success
lpar8: Success
4. To duplicate all the HMC-managed partitions associated with cec01 on cec02, one is for cec01, the other is for cec02:
.. code-block:: perl
mkdef -t node -o lpar5,lpar6 mgt=hmc groups=all
chtab node=lpar5 ppc.parent=cec01
chtab node=lpar6 ppc.parent=cec02
then create lpars on cec01 and cec02:
.. code-block:: perl
mkvm lpar5,lpar6 --full
Output is similar to:
.. code-block:: perl
lpar5: Success
lpar6: Success
5. To create a new zVM virtual machine (gpok3) based on a directory entry:
.. code-block:: perl
mkvm gpok3 /tmp/dirEntry.txt
Output is similar to:
.. code-block:: perl
gpok3: Creating user directory entry for LNX3... Done
6. To clone a new zVM virtual machine with the same profile/resources as the specified node:
.. code-block:: perl
mkvm gpok4 gpok3 pool=POOL1
Output is similar to:
.. code-block:: perl
gpok4: Cloning gpok3
gpok4: Linking source disk (0100) as (1100)
gpok4: Linking source disk (0101) as (1101)
gpok4: Stopping LNX3... Done
gpok4: Creating user directory entry
gpok4: Granting VSwitch (VSW1) access for gpok3
gpok4: Granting VSwitch (VSW2) access for gpok3
gpok4: Adding minidisk (0100)
gpok4: Adding minidisk (0101)
gpok4: Disks added (2). Disks in user entry (2)
gpok4: Linking target disk (0100) as (2100)
gpok4: Copying source disk (1100) to target disk (2100) using FLASHCOPY
gpok4: Mounting /dev/dasdg1 to /mnt/LNX3
gpok4: Setting network configuration
gpok4: Linking target disk (0101) as (2101)
gpok4: Copying source disk (1101) to target disk (2101) using FLASHCOPY
gpok4: Powering on
gpok4: Detatching source disk (0101) at (1101)
gpok4: Detatching source disk (0100) at (1100)
gpok4: Starting LNX3... Done
7. To create a new kvm/vmware virtual machine with 10G storage, 2048M memory and 2 cpus.
.. code-block:: perl
mkvm vm1 -s 10G --mem 2048 --cpus 2
8. To create a full partition on normal power machine.
First, define a node object:
.. code-block:: perl
mkdef -t node -o lpar1 mgt=fsp cons=fsp nodetype=ppc,osi id=1 hcp=cec parent=cec hwtype=lpar groups=lpar,all
Then, create the partion on the specified cec.
.. code-block:: perl
mkvm lpar1 --full
The output is similar to:
.. code-block:: perl
lpar1: Done
To query the resources allocated to node 'lpar1'
.. code-block:: perl
lsvm lpar1
The output is similar to:
.. code-block:: perl
lpar1: Lpar Processor Info:
Curr Processor Min: 1.
Curr Processor Req: 16.
Curr Processor Max: 16.
lpar1: Lpar Memory Info:
Curr Memory Min: 0.25 GB(1 regions).
Curr Memory Req: 30.75 GB(123 regions).
Curr Memory Max: 32.00 GB(128 regions).
lpar1: 1,519,U78AA.001.WZSGVU7-P1-C7,0x21010207,0xffff(Empty Slot)
lpar1: 1,518,U78AA.001.WZSGVU7-P1-C6,0x21010206,0xffff(Empty Slot)
lpar1: 1,517,U78AA.001.WZSGVU7-P1-C5,0x21010205,0xffff(Empty Slot)
lpar1: 1,516,U78AA.001.WZSGVU7-P1-C4,0x21010204,0xffff(Empty Slot)
lpar1: 1,514,U78AA.001.WZSGVU7-P1-C19,0x21010202,0xffff(Empty Slot)
lpar1: 1,513,U78AA.001.WZSGVU7-P1-T7,0x21010201,0xc03(USB Controller)
lpar1: 1,512,U78AA.001.WZSGVU7-P1-T9,0x21010200,0x104(RAID Controller)
lpar1: 1/2/2
lpar1: 256.
Note: The 'parent' attribute for node 'lpar1' is the object name of the physical power machine that the full partition will be created on.
9. To create a partition using some of the resources on normal power machine.
Option 1:
After a node object is defined, the resources that will be used for the partition shall be specified like this:
.. code-block:: perl
chdef lpar1 vmcpus=1/4/16 vmmemory=1G/4G/32G vmphyslots=0x21010201,0x21010200 vmothersetting=bsr:128,hugepage:2
Then, create the partion on the specified cec.
.. code-block:: perl
mkvm lpar1
Option 2:
.. code-block:: perl
mkvm lpar1 vmcpus=1/4/16 vmmemory=1G/4G/32G vmphyslots=0x21010201,0x21010200 vmothersetting=bsr:128,hugepage:2
The output is similar to:
.. code-block:: perl
lpar1: Done
Note: The 'vmphyslots' attribute specifies the drc indexes of the physical slot devices, delimited with ','. The 'vmothersetting' attribute specifies two kinds of resources: bsr (Barrier Synchronization Register) specifies the number of BSR arrays, and hugepage (Huge Page Memory) specifies the number of huge pages.
To query the resources allocated to node 'lpar1'
.. code-block:: perl
lsvm lpar1
The output is similar to:
.. code-block:: perl
lpar1: Lpar Processor Info:
Curr Processor Min: 1.
Curr Processor Req: 4.
Curr Processor Max: 16.
lpar1: Lpar Memory Info:
Curr Memory Min: 1.00 GB(4 regions).
Curr Memory Req: 4.00 GB(16 regions).
Curr Memory Max: 32.00 GB(128 regions).
lpar1: 1,513,U78AA.001.WZSGVU7-P1-T7,0x21010201,0xc03(USB Controller)
lpar1: 1,512,U78AA.001.WZSGVU7-P1-T9,0x21010200,0x104(RAID Controller)
lpar1: 1/2/2
lpar1: 128.
10. To create a vios partition using some of the resources on normal power machine.
.. code-block:: perl
mkvm viosnode vmcpus=1/4/16 vmmemory=1G/4G/32G vmphyslots=0x21010201,0x21010200 vmnics=vlan1 vmstorage=5 --vios
The resources for the node are similar to:
.. code-block:: perl
viosnode: Lpar Processor Info:
Curr Processor Min: 1.
Curr Processor Req: 4.
Curr Processor Max: 16.
viosnode: Lpar Memory Info:
Curr Memory Min: 1.00 GB(4 regions).
Curr Memory Req: 4.00 GB(16 regions).
Curr Memory Max: 32.00 GB(128 regions).
viosnode: 1,513,U78AA.001.WZSGVU7-P1-T7,0x21010201,0xc03(USB Controller)
viosnode: 1,512,U78AA.001.WZSGVU7-P1-T9,0x21010200,0x104(RAID Controller)
viosnode: 1,0,U8205.E6B.0612BAR-V1-C,0x30000000,vSerial Server
viosnode: 1,1,U8205.E6B.0612BAR-V1-C1,0x30000001,vSerial Server
viosnode: 1,3,U8205.E6B.0612BAR-V1-C3,0x30000003,vEth (port_vlanid=1,mac_addr=4211509276a7)
viosnode: 1,5,U8205.E6B.0612BAR-V1-C5,0x30000005,vSCSI Server
viosnode: 1,6,U8205.E6B.0612BAR-V1-C6,0x30000006,vSCSI Server
viosnode: 1,7,U8205.E6B.0612BAR-V1-C7,0x30000007,vSCSI Server
viosnode: 1,8,U8205.E6B.0612BAR-V1-C8,0x30000008,vSCSI Server
viosnode: 1,9,U8205.E6B.0612BAR-V1-C9,0x30000009,vSCSI Server
viosnode: 0/0/0
viosnode: 0.
*****
FILES
*****
/opt/xcat/bin/mkvm
********
SEE ALSO
********
chvm(1)|chvm.1, lsvm(1)|lsvm.1, rmvm(1)|rmvm.1

View File

@ -0,0 +1,177 @@
########
mkzone.1
########
.. highlight:: perl
************
\ **NAME**\
************
\ **mkzone**\ - Defines a new zone in the cluster.
****************
\ **SYNOPSIS**\
****************
\ **mkzone**\ <zonename> [\ **--defaultzone**\ ] [\ **-k**\ \ *full path to the ssh RSA private key*\ ] [\ **-a**\ \ *noderange*\ ] [\ **-g**\ ] [\ **-f**\ ] [\ **-s**\ \ *yes|no*\ ] [-V]
\ **mkzone**\ [\ **-h**\ | \ **-v**\ ]
*******************
\ **DESCRIPTION**\
*******************
The \ **mkzone**\  command is designed to divide the xCAT cluster into multiple zones. The nodes in each zone will share common root ssh keys. This allows the nodes in a zone to ssh to each other as root without a password, but not to nodes in another zone. All zones share a common xCAT Management Node and database including the site table, which defines the attributes of the entire cluster.
The mkzone command is only supported on Linux (no AIX support).
The nodes are not updated with the new root ssh keys by mkzone. You must run updatenode -k or xdsh -K to the nodes to update the root ssh keys to the newly generated zone keys. This will also sync any service nodes with the zone keys, if you have a hierarchical cluster.
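For example, a minimal sequence might look like the following (the zone and node group names are only illustrative):

.. code-block:: perl

  # create a zone and assign an existing node group to it
  mkzone zoneA -a computeA -g

  # push the newly generated zone ssh keys to those nodes
  updatenode computeA -k
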
Note: if there are any zones in the zone table, there must be one and only one defaultzone. Otherwise, errors will occur.
***************
\ **OPTIONS**\
***************
\ **-h**\ |\ **--help**\
Displays usage information.
\ **-v**\ |\ **--version**\
Displays command version and build date.
\ **-k | --sshkeypath**\ \ *full path to the ssh RSA private key*\
This is the path to the id_rsa key that will be used to build root's ssh keys for the zone. If -k is used, it will generate the ssh public key from the input ssh RSA private key and store both in /etc/xcat/sshkeys/<zonename>/.ssh directory.
If -f is not used, then it will generate a set of root ssh keys for the zone and store them in /etc/xcat/sshkeys/<zonename>/.ssh.
\ **--defaultzone**\
If --defaultzone is specified, the zone defaultzone attribute will be set to yes; otherwise it will be set to no.
If --defaultzone is specified and another zone is currently the default,
then the -f flag must be used to force a change to the new defaultzone.
If the -f flag is not used, an error will be returned and no change will be made.
Note: if there are any zones in the zone table, there must be one and only one defaultzone. Otherwise, errors will occur.
\ **-a | --addnoderange**\ \ *noderange*\
For each node in the noderange, it will set the zonename attribute for that node to the input zonename.
If the -g flag is also on the command, then
it will add the group name "zonename" to each node in the noderange.
\ **-s| --sshbetweennodes**\ \ **yes|no**\
If -s is entered, the zone sshbetweennodes attribute will be set to yes or no. It defaults to yes. When this is set to yes, ssh will be set up
to allow passwordless root access between nodes. If no, then root will be prompted for a password when running ssh between the nodes in the zone.
\ **-f | --force**\
Used with the (--defaultzone) flag to override the current default zone.
\ **-g | --assigngroup**\
Used with the (-a) flag to create the group zonename for all nodes in the input noderange.
\ **-V**\ |\ **--Verbose**\
Verbose mode.
****************
\ **Examples**\
****************
\*
To make a new zone1 using defaults, enter:
\ **mkzone**\ \ *zone1*\
Note: with the first mkzone, you will automatically get the xcatdefault zone created as the default zone. This zone uses ssh keys from
<roothome>/.ssh directory.
\*
To make a new zone2 using defaults and make it the default zone enter:
\ **mkzone**\ \ *zone2*\ --defaultzone -f
\*
To make a new zone2A using the ssh id_rsa private key in /root/.ssh:
\ **mkzone**\ \ *zone2A*\ -k /root/.ssh
\*
To make a new zone3 and assign the noderange compute3 to the zone enter:
\ **mkzone**\ \ *zone3*\ -a compute3
\*
To make a new zone4 and assign the noderange compute4 to the zone and add zone4 as a group to each node enter:
\ **mkzone**\ \ *zone4*\ -a compute4 -g
\*
To make a new zone5 and assign the noderange compute5 to the zone and add zone5 as a group to each node but not allow passwordless ssh between the nodes enter:
\ **mkzone**\ \ *zone5*\ -a compute5 -g -s no
\ **Files**\
\ **/opt/xcat/bin/mkzone/**\
Location of the mkzone command.
****************
\ **SEE ALSO**\
****************
chzone(1)|chzone.1, rmzone(1)|rmzone.1, xdsh(1)|xdsh.1, updatenode(1)|updatenode.1

View File

@ -0,0 +1,158 @@
########
monadd.1
########
.. highlight:: perl
****
NAME
****
\ **monadd**\ - Registers a monitoring plug-in to the xCAT cluster.
********
SYNOPSIS
********
\ *monadd [-h| --help]*\
\ *monadd [-v| --version]*\
\ *monadd name [-n|--nodestatmon] [-s|--settings settings]*\
***********
DESCRIPTION
***********
This command is used to register a monitoring plug-in module to monitor the xCAT cluster. The plug-in module will be added to the xCAT \ *monitoring*\  database table and the configuration scripts for the monitoring plug-in, if any, will be added to the \ *postscripts*\  table. A monitoring plug-in module acts as a bridge that connects 3rd party monitoring software and the xCAT cluster. A configuration script is used to configure the 3rd party software. Once added to the \ *postscripts*\  table, it will be invoked on the nodes during the node deployment stage.
**********
Parameters
**********
\ *name*\  is the name of the monitoring plug-in module. For example, if the \ *name*\  is \ *xxx*\ , then the actual file name that the xcatd looks for is \ */opt/xcat/lib/perl/xCAT_monitoring/xxx.pm*\ . Use the \ *monls -a*\  command to list all the monitoring plug-in modules that can be used.
\ *settings*\  is the monitoring plug-in specific settings. It is used to customize the behavior of the plug-in or configure the 3rd party software. Format: \ *-s key=value -s key=value ...*\  Use the \ *monls name -d*\  command to look for the possible setting keys for a plug-in module.
*******
OPTIONS
*******
\ **-h | --help**\
Display usage message.
\ **-n | --nodestatmon**\
Indicate that this monitoring plug-in will be used for feeding the node liveness status to the xCAT \ *nodelist*\ table.
\ **-s | --settings**\
Specifies the plug-in specific settings. These settings will be used by the plug-in to customize certain entities for the plug-in or the third party monitoring software. e.g. -s mon_interval=10 -s toggle=1.
\ **-v | --version**\
Command Version.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
1.
To register gangliamon plug-in module (which interacts with Ganglia monitoring software) to monitor the xCAT cluster, enter:
.. code-block:: perl
monadd gangliamon
2.
To register rmcmon plug-in module (which interacts with IBM's RSCT monitoring software) to monitor the xCAT cluster and have it feed the node liveness status to xCAT's \ *nodelist*\ table, enter:
.. code-block:: perl
monadd rmcmon -n
This will also add the \ *configrmcnode*\ to the \ *postscripts*\ table. To view the content of the \ *postscripts*\ table, enter:
.. code-block:: perl
 tabdump postscripts
#node,postscripts,comments,disable
"service","servicenode",,
"xcatdefaults","syslog,remoteshell,configrmcnode",,
3.
To register xcatmon plug-in module to feed the node liveness status to xCAT's \ *nodelist*\ table, enter:
.. code-block:: perl
monadd xcatmon -n -s ping-interval=2
where 2 is the number of minutes between the pings.
*****
FILES
*****
/opt/xcat/bin/monadd
********
SEE ALSO
********
monls(1)|monls.1, monrm(1)|monrm.1, monstart(1)|monstart.1, monstop(1)|monstop.1, moncfg(1)|moncfg.1, mondecfg(1)|mondecfg.1

View File

@ -0,0 +1,105 @@
########
moncfg.1
########
.. highlight:: perl
****
NAME
****
\ **moncfg**\ - Configures a 3rd party monitoring software to monitor the xCAT cluster.
********
SYNOPSIS
********
\ *moncfg [-h| --help]*\
\ *moncfg [-v| --version]*\
\ *moncfg name [noderange] [-r|--remote]*\
***********
DESCRIPTION
***********
This command is used to configure a 3rd party monitoring software to monitor the xCAT cluster. For example, it modifies the configuration file for the monitoring software so that the nodes can be included in the monitoring domain. The operation is performed on the management node and the service nodes of the given nodes. The operation will also be performed on the nodes if the \ *-r*\  option is specified, though the configuration of the nodes is usually performed during the node deployment stage.
**********
Parameters
**********
\ *name*\  is the name of the monitoring plug-in module. For example, if the \ *name*\  is \ *xxx*\ , then the actual file name that the xcatd looks for is \ */opt/xcat/lib/perl/xCAT_monitoring/xxx.pm*\ . Use the \ *monls -a*\  command to list all the monitoring plug-in modules that can be used.
\ *noderange*\ specifies the nodes to be monitored. If omitted, all nodes will be monitored.
*******
OPTIONS
*******
\ **-h | --help**\ Display usage message.
\ **-r | --remote**\ Specifies that the operation will also be performed on the nodes.
\ **-v | --version**\  Command Version.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
1. To configure the management node and the service nodes for ganglia monitoring, enter:
.. code-block:: perl
moncfg gangliamon
2. To configure the management node, the nodes and their service nodes for ganglia monitoring, enter:
.. code-block:: perl
moncfg gangliamon -r
*****
FILES
*****
/opt/xcat/bin/moncfg
********
SEE ALSO
********
monls(1)|monls.1, mondecfg(1)|mondecfg.1, monadd(1)|monadd.1, monrm(1)|monrm.1, monstart(1)|monstart.1, monstop(1)|monstop.1

View File

@ -0,0 +1,105 @@
##########
mondecfg.1
##########
.. highlight:: perl
****
NAME
****
\ **mondecfg**\ - Deconfigures a 3rd party monitoring software from monitoring the xCAT cluster.
********
SYNOPSIS
********
\ *mondecfg [-h| --help]*\
\ *mondecfg [-v| --version]*\
\ *mondecfg name [noderange] [-r|--remote]*\
***********
DESCRIPTION
***********
This command is used to deconfigure a 3rd party monitoring software from monitoring the xCAT cluster. The operation is performed on the management node and the service nodes of the given nodes. The operation will also be performed on the nodes if the \ *-r*\  option is specified. The deconfiguration operation will remove the nodes from the 3rd party software's monitoring domain.
**********
Parameters
**********
\ *name*\ is the name of the monitoring plug-in module. Use \ *monls*\ command to list all the monitoring plug-in modules that can be used.
\ *noderange*\  specifies the nodes to be deconfigured. If omitted, all nodes will be deconfigured.
*******
OPTIONS
*******
\ **-h | --help**\ Display usage message.
\ **-r | --remote**\ Specifies that the operation will also be performed on the nodes.
\ **-v | --version**\  Command Version.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
1. To deconfigure the management node and the service nodes from the ganglia monitoring, enter:
.. code-block:: perl
mondecfg gangliamon
2. To deconfigure the management node, the nodes and their service nodes from ganglia monitoring, enter:
.. code-block:: perl
mondecfg gangliamon -r
*****
FILES
*****
/opt/xcat/bin/mondecfg
********
SEE ALSO
********
monls(1)|monls.1, moncfg(1)|moncfg.1, monadd(1)|monadd.1, monrm(1)|monrm.1, monstart(1)|monstart.1, monstop(1)|monstop.1

View File

@ -0,0 +1,136 @@
#######
monls.1
#######
.. highlight:: perl
****
NAME
****
\ **monls**\ - Lists monitoring plug-in modules that can be used to monitor the xCAT cluster.
********
SYNOPSIS
********
\ *monls [-h| --help]*\
\ *monls [-v| --version]*\
\ *monls name [-d|--description]*\
\ *monls [-a|--all] [-d|--description]*\
***********
DESCRIPTION
***********
This command is used to list the status, description, the configuration scripts and the settings of one or all of the monitoring plug-in modules.
**********
Parameters
**********
\ *name*\ is the name of the monitoring plug-in module.
*******
OPTIONS
*******
\ **-a | --all**\  Searches the \ *XCATROOT/lib/perl/xCAT_monitoring*\  directory and reports all the monitoring plug-in modules. If this option is not specified, the list is read from the \ *monitoring*\  table.
\ **-d | --description**\  Displays the description of the plug-in modules. The description usually contains the possible settings.
\ **-h | --help**\ Display usage message.
\ **-v | --version**\  Command Version.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
1. To list the status of all the monitoring plug-in modules from the \ *monitoring*\ table, enter:
.. code-block:: perl
monls
The output looks like this:
.. code-block:: perl
xcatmon monitored node-status-monitored
snmpmon not-monitored
2. To list the status of all the monitoring plug-in modules including the ones that are not in the monitoring table, enter
.. code-block:: perl
monls -a
The output looks like this:
.. code-block:: perl
xcatmon monitored node-status-monitored
snmpmon not-monitored
gangliamon not-monitored
rmcmon monitored
nagiosmon not-monitored
3. To list the status and the description for the \ *snmpmon*\  module, enter:
.. code-block:: perl
monls snmpmon -d
*****
FILES
*****
/opt/xcat/bin/monls
********
SEE ALSO
********
monadd(1)|monadd.1, monrm(1)|monrm.1, monstart(1)|monstart.1, monstop(1)|monstop.1, moncfg(1)|moncfg.1, mondecfg(1)|mondecfg.1

View File

@ -0,0 +1,95 @@
#######
monrm.1
#######
.. highlight:: perl
****
NAME
****
\ **monrm**\ - Unregisters a monitoring plug-in module from the xCAT cluster.
********
SYNOPSIS
********
\ *monrm [-h| --help]*\
\ *monrm [-v| --version]*\
\ *monrm name*\
***********
DESCRIPTION
***********
This command is used to unregister a monitoring plug-in module from the \ *monitoring*\ table. It also removes any configuration scripts associated with the monitoring plug-in from the \ *postscripts*\ table. A monitoring plug-in module acts as a bridge that connects a 3rd party monitoring software and the xCAT cluster. A configuration script is used to configure the 3rd party software. Once added to the \ *postscripts*\ table, it will be invoked on the nodes during node deployment stage.
**********
Parameters
**********
\ *name*\ is the name of the monitoring plug-in module in the \ *monitoring*\ table. Use \ *monls*\ command to list all the monitoring plug-in modules that can be used.
*******
OPTIONS
*******
\ **-h | --help**\ Display usage message.
\ **-v | --version**\  Command Version.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
1. To unregister the gangliamon plug-in module (which interacts with the Ganglia monitoring software) from the xCAT cluster, enter:
.. code-block:: perl
monrm gangliamon
Please note that gangliamon must have been registered in the xCAT \ *monitoring*\ table. For a list of registered plug-in modules, use command \ *monls*\ .
*****
FILES
*****
/opt/xcat/bin/monrm
********
SEE ALSO
********
monls(1)|monls.1, monadd(1)|monadd.1, monstart(1)|monstart.1, monstop(1)|monstop.1, moncfg(1)|moncfg.1, mondecfg(1)|mondecfg.1

View File

@ -0,0 +1,140 @@
#########
monshow.1
#########
.. highlight:: perl
****
NAME
****
\ **monshow**\ - Shows event data for monitoring.
********
SYNOPSIS
********
\ *monshow [-h| --help]*\
\ *monshow [-v| --version]*\
\ *monshow name [noderange] [-s] [-t time] [-a attributes] [-w attr<operator>val [-w attr<operator>val] ...] [-o {p|e}]*\
***********
DESCRIPTION
***********
This command displays the events that happened on the given nodes or the monitoring data that is collected from the given nodes for a monitoring plugin.
**********
Parameters
**********
\ *name*\ is the name of the monitoring plug-in module to be invoked.
\ *noderange*\  is a list of nodes to show the data for. If omitted, the data for all the nodes will be displayed.
*******
OPTIONS
*******
\ **-h | --help**\ Display usage message.
\ **-v | --version**\  Command Version.
\ **-s**\ shows the summary data.
\ **-t**\  specifies a range of time for the data. The default is the last 60 minutes. For example, -t 6-4 will display the data from 6 minutes ago to 4 minutes ago; -t 6 will display the data from 6 minutes ago until now.
\ **-a**\ specifies a comma-separated list of attributes or metrics names. The default is all.
\ **-w**\  specifies one or more selection strings that can be used to select events. The operators ==, !=, =~, !~, >, <, >=, <= are available. Wildcards % and _ are supported in the pattern string. % allows you to match any string of any length (including zero length) and _ allows you to match on a single character. The valid attributes are eventtype, monitor, monnode, application, component, id, severity, message, rawdata, comments. Valid severity values are: Informational, Warning, Critical.
Operator descriptions:
== Select event where the attribute value is exactly this value.
!= Select event where the attribute value is not this specific value.
=~ Select event where the attribute value matches this pattern string. Does not work with severity.
!~ Select event where the attribute value does not match this pattern string. Does not work with severity.
> Select event where the severity is higher than this value. Only works with severity.
< Select event where the severity is lower than this value. Only works with severity.
>= Select event where the severity is higher than or equal to this value. Only works with severity.
<= Select event where the severity is lower than or equal to this value. Only works with severity.
Note: if the "val" or "operator" fields includes spaces or any other characters that will be parsed by shell, the "attr<operator>val" needs to be quoted. If the operator is "!~", the "attr<operator>val" needs to be quoted using single quote.
\ **-o**\  specifies the monitor type; it can be p or e. p means performance and e means events.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
1. To show summary data about PctRealMemFree and PctTotalTimeIdle of cluster in last 6 minutes, enter:
.. code-block:: perl
monshow rmcmon -s -a PctRealMemFree,PctTotalTimeIdle -t 6
2. To show all data of node1 and node2, enter:
.. code-block:: perl
monshow rmcmon node1,node2
3. To show summary data of nodes which managed by servicenode1, enter:
.. code-block:: perl
monshow rmcmon servicenode1 -s
4. To show RMC event with severity Critical, enter:
.. code-block:: perl
monshow rmcmon -w severity==Critical
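5. To show RMC events whose message matches a pattern (the "!~" operator requires single quotes; the attribute value shown is only illustrative), enter:

.. code-block:: perl

  monshow rmcmon -w 'message!~%timeout%'
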
*****
FILES
*****
/opt/xcat/bin/monshow
********
SEE ALSO
********
monls(1)|monls.1, monstart(1)|monstart.1, monstop(1)|monstop.1, monadd(1)|monadd.1, monrm(1)|monrm.1, moncfg(1)|moncfg.1, mondecfg(1)|mondecfg.1

View File

@ -0,0 +1,105 @@
##########
monstart.1
##########
.. highlight:: perl
****
NAME
****
\ **monstart**\ - Starts a plug-in module to monitor the xCAT cluster.
********
SYNOPSIS
********
\ *monstart [-h| --help]*\
\ *monstart [-v| --version]*\
\ *monstart name [noderange] [-r|--remote]*\
***********
DESCRIPTION
***********
This command is used to start 3rd party software (for example, start the daemons) to monitor the xCAT cluster. The operation is performed on the management node and the service nodes of the given nodes. The operation will also be performed on the nodes if the \ *-r*\  option is specified.
**********
Parameters
**********
\ *name*\  is the name of the monitoring plug-in module. For example, if the \ *name*\  is \ *xxx*\ , then the actual file name that the xcatd looks for is \ */opt/xcat/lib/perl/xCAT_monitoring/xxx.pm*\ . Use the \ *monls -a*\  command to list all the monitoring plug-in modules that can be used.
\ *noderange*\  specifies the nodes to be monitored. If omitted, all nodes will be monitored.
*******
OPTIONS
*******
\ **-h | --help**\ Display usage message.
\ **-r | --remote**\  Specifies that the operation will also be performed on the nodes. For example, the 3rd party monitoring software daemons on the nodes will also be started.
\ **-v | --version**\  Command Version.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
1. To start gangliamon plug-in module (which interacts with Ganglia monitoring software) to monitor the xCAT cluster, enter:
.. code-block:: perl
monstart gangliamon -r
2. To start the rmcmon plug-in module to feed the node liveness status to xCAT's \ *nodelist*\  table (assuming it was registered with the -n option), enter:
.. code-block:: perl
monstart rmcmon
*****
FILES
*****
/opt/xcat/bin/monstart
********
SEE ALSO
********
monls(1)|monls.1, monstop(1)|monstop.1, monadd(1)|monadd.1, monrm(1)|monrm.1, moncfg(1)|moncfg.1, mondecfg(1)|mondecfg.1

View File

@ -0,0 +1,99 @@
#########
monstop.1
#########
.. highlight:: perl
****
NAME
****
\ **monstop**\ - Stops a monitoring plug-in module from monitoring the xCAT cluster.
********
SYNOPSIS
********
\ *monstop [-h| --help]*\
\ *monstop [-v| --version]*\
\ *monstop name [noderange] [-r|--remote]*\
***********
DESCRIPTION
***********
This command is used to stop 3rd party software (for example, stop the daemons) from monitoring the xCAT cluster. The operation is performed on the management node and the service nodes of the given nodes. The operation will also be performed on the nodes if the \ *-r*\  option is specified.
**********
Parameters
**********
\ *name*\ is the name of the monitoring plug-in module in the \ *monitoring*\ table. Use \ *monls*\ command to list all the monitoring plug-in modules that can be used.
\ *noderange*\  specifies the nodes for which monitoring will be stopped. If omitted, monitoring will be stopped on all nodes.
*******
OPTIONS
*******
\ **-h | --help**\  Display usage message.
\ **-r | --remote**\  Specifies that the operation will also be performed on the nodes. For example, the 3rd party monitoring software daemons on the nodes will also be stopped.
\ **-v | --version**\  Command Version.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
1. To stop the gangliamon plug-in module (which interacts with the Ganglia monitoring software) from monitoring the xCAT cluster, enter:
.. code-block:: perl
monstop gangliamon
Please note that gangliamon must have been registered in the xCAT \ *monitoring*\ table. For a list of registered plug-in modules, use command \ *monls*\ .
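2. To stop the gangliamon plug-in module and also stop the Ganglia daemons on the nodes, enter:

.. code-block:: perl

  monstop gangliamon -r
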
*****
FILES
*****
/opt/xcat/bin/monstop
********
SEE ALSO
********
monls(1)|monls.1, monstart(1)|monstart.1, monadd(1)|monadd.1, monrm(1)|monrm.1, moncfg(1)|moncfg.1, mondecfg(1)|mondecfg.1

View File

@ -0,0 +1,173 @@
############
mysqlsetup.1
############
.. highlight:: perl
****
NAME
****
\ **mysqlsetup**\ - Sets up the MySQL or MariaDB database for xCAT to use.
********
SYNOPSIS
********
\ **mysqlsetup**\ {\ **-h**\ |\ **--help**\ }
\ **mysqlsetup**\ {\ **-v**\ |\ **--version**\ }
\ **mysqlsetup**\ {\ **-i**\ |\ **--init**\ } [\ **-f**\ |\ **--hostfile**\ ] [-o|--odbc] [-L|--LL] [\ **-V**\ |\ **--verbose**\ ]
\ **mysqlsetup**\ {\ **-u**\ |\ **--update**\ } [\ **-f**\ |\ **--hostfile**\ ] [-o|--odbc] [-L|--LL] [\ **-V**\ |\ **--verbose**\ ]
\ **mysqlsetup**\ {\ **-o**\ |\ **--odbc**\ } [-V|--verbose]
\ **mysqlsetup**\ {\ **-L**\ |\ **--LL**\ } [-V|--verbose]
***********
DESCRIPTION
***********
\ **mysqlsetup**\  - Sets up the MySQL or MariaDB database (MariaDB on Linux only) for xCAT to use. The mysqlsetup script is run on the Management Node as root after the MySQL code or MariaDB code has been installed. Before running the init option, the MySQL server should be stopped, if it is running. The xCAT daemon, xcatd, must be running; do not stop it. No xCAT commands should be run during the init process, because we will be migrating the xCAT database to MySQL or MariaDB and restarting the xcatd daemon as well as the MySQL daemon. For full information on all the steps that will be done, read the "Configure MySQL and Migrate xCAT Data to MySQL" sections in
Setting_Up_MySQL_as_the_xCAT_DB
Two passwords must be supplied for the setup, a password for the xcatadmin id and a password for the root id in the MySQL database. These will be prompted for interactively, unless the environment variables XCATMYSQLADMIN_PW and XCATMYSQLROOT_PW are set to the passwords for the xcatadmin id and root id in the database, respectively.
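For example, to run the init step without being prompted, the two variables can be exported first (the password values shown are placeholders):

.. code-block:: perl

  export XCATMYSQLADMIN_PW='xcatadminpw'
  export XCATMYSQLROOT_PW='mysqlrootpw'
  mysqlsetup -i
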
Note below we refer to MySQL but it works the same for MariaDB.
*******
OPTIONS
*******
\ **-h|--help**\
Displays the usage message.
\ **-v|--version**\
Displays the release version of the code.
\ **-V|--verbose**\
Displays verbose messages.
\ **-i|--init**\
The init option is used to set up an xCAT database on an installed MySQL or MariaDB server for xCAT to use. The mysqlsetup script will check for the installed MariaDB server rpm first and will use MariaDB if it is installed. This involves creating the xcatdb database and the xcatadmin id, and allowing access to the xcatdb database by the Management Node. It customizes the my.cnf configuration file for xcat and starts the MySQL server. It also backs up the current xCAT database and restores it into the newly set up xcatdb MySQL database. It creates the /etc/xcat/cfgloc file to point the xcatd daemon to the MySQL database and restarts the xcatd daemon using the database.
On AIX, it additionally sets up the mysql id and group and corrects the permissions in the MySQL install directories. For AIX, you should be using the MySQL rpms available from the xCAT website. For Linux, you should use the MySQL or MariaDB rpms shipped with the OS. You can choose the -f and/or the -o option to run after the init.
\ **-u|--update**\
To run the update option, you must first have run the -i option and have xcat successfully running on the MySQL database. You can choose the -f and/or the -o option to update.
\ **-f|--hostfile**\
This option runs during update; it will take all the hosts from the input file (provide a full path) and give them database access to the xcatdb in MySQL for the xcatadmin id. Wildcards and IP addresses may be used. xCAT must have been previously successfully set up to use MySQL. The xcatadmin and MySQL root passwords are required.
\ **-o|--odbc**\
This option sets up ODBC: the /etc/../odbcinst.ini, /etc/../odbc.ini and the .odbc.ini file in root's home directory will be created and initialized to run off the xcatdb MySQL database.
See "Add ODBC Support" in
Setting_Up_MySQL_as_the_xCAT_DB
\ **-L|--LL**\
Additional database configuration specifically for the LoadLeveler product.
See "Add ODBC Support" in
Setting_Up_MySQL_as_the_xCAT_DB
*********************
ENVIRONMENT VARIABLES
*********************
\*
\ **XCATMYSQLADMIN_PW**\ - the password for the xcatadmin id that will be assigned in the MySQL database.
\*
\ **XCATMYSQLROOT_PW**\ - the password for the root id that will be assigned to the MySQL root id, if the script creates it. The password to use to run MySQL command to the database as the MySQL root id. This password may be different than the unix root password on the Management Node.
********
EXAMPLES
********
\*
To setup MySQL for xCAT to run on the MySQL xcatdb database :
\ **mysqlsetup**\ \ *-i*\
\*
Add hosts from /tmp/xcat/hostlist that can access the xcatdb database in MySQL:
\ **mysqlsetup**\ \ *-u*\ \ *-f /tmp/xcat/hostlist*\
Where the file contains a host per line, for example:
.. code-block:: perl
node1
1.115.85.2
10.%.%.%
nodex.cluster.net
\*
To setup the ODBC for MySQL xcatdb database access :
\ **mysqlsetup**\ \ *-o*\
\*
To setup MySQL for xCAT and add hosts from /tmp/xcat/hostlist and setup the ODBC in Verbose mode:
\ **mysqlsetup**\ \ *-i*\ \ *-f /tmp/xcat/hostlist*\ \ *-o*\ \ *-V*\

View File

@ -0,0 +1,159 @@
#############
nimnodecust.1
#############
.. highlight:: perl
****
NAME
****
\ **nimnodecust**\ - Use this xCAT command to customize AIX/NIM standalone machines.
********
SYNOPSIS
********
\ **nimnodecust [-h|--help ]**\
\ **nimnodecust [-V] -s lpp_source_name [-p packages] [-b installp_bundles] noderange [attr=val [attr=val ...]]**\
***********
DESCRIPTION
***********
This xCAT command can be used to customize AIX/NIM standalone machines.
The software packages that you wish to install on the nodes must be copied to the appropriate directory locations in the NIM lpp_source resource provided by the "-s" option. For example, if the location of your lpp_source resource is "/install/nim/lpp_source/61lpp/" then you would copy RPM packages to "/install/nim/lpp_source/61lpp/RPMS/ppc" and you would copy your installp packages to "/install/nim/lpp_source/61lpp/installp/ppc". Typically you would want to copy the packages to the same lpp_source that was used to install the node. You can find the location for an lpp_source with the AIX lsnim command. (Ex. "lsnim -l <lpp_source_name>")
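As a sketch of this preparation, reusing the example location above (the package file names are only illustrative):

.. code-block:: perl

  # confirm the location of the lpp_source resource
  lsnim -l 61lppsource | grep location

  # copy the RPM and installp packages into the lpp_source
  cp /tmp/pkgs/expect-5.42.1-3.aix5.1.ppc.rpm /install/nim/lpp_source/61lpp/RPMS/ppc
  cp /tmp/pkgs/openssh.base /install/nim/lpp_source/61lpp/installp/ppc
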
The packages you wish to install on the nodes may be specified with either a comma-separated list of package names or by a comma-separated list of installp_bundle names. The installp_bundle names are what were used when creating the corresponding NIM installp_bundle definitions. The installp_bundle definitions may also be used when installing the nodes.
A bundle file contains a list of package names. The RPMs must have a prefix of "R:" and the installp packages must have a prefix of "I:". For example, the contents of a simple bundle file might look like the following.
.. code-block:: perl
# RPM
R:expect-5.42.1-3.aix5.1.ppc.rpm
R:ping-2.4b2_to-1.aix5.3.ppc.rpm
#installp
I:openssh.base
I:openssh.license
To create a NIM installp_bundle definition you can use the "nim -o define" operation. For example, to create a definition called "mypackages" for a bundle file located at "/install/nim/mypkgs.bnd" you could issue the following command.
"nim -o define -t installp_bundle -a server=master -a location=/install/nim/mypkgs.bnd mypackages".
See the AIX documentation for more information on using installp_bundle files.
The xCAT nimnodecust command will automatically handle the distribution of the packages to AIX service nodes when using an xCAT hierarchical environment.
*******
OPTIONS
*******
\ **attr=val [attr=val ...]**\
Specifies one or more "attribute equals value" pairs, separated by spaces. Attr=val pairs must be specified last on the command line. These are used to specify
additional values that can be passed to the underlying NIM commands, ("nim -o cust..."). See the NIM documentation for valid "nim" command line options.
\ **-b installp_bundle_names**\
A comma-separated list of NIM installp_bundle names.
\ **-h |--help**\
Display usage message.
\ **-p package_names**\
A comma-separated list of software packages to install. Packages may be RPM or installp.
\ **noderange**\
A set of comma delimited node names and/or group names. See the "noderange" man page for details on additional supported formats.
\ **-V |--verbose**\
Verbose mode.
************
RETURN VALUE
************
0
The command completed successfully.
1
An error has occurred.
********
EXAMPLES
********
1) Install the installp package "openssh.base.server" on an xCAT node named "node01". Assume that the package has been copied to the NIM lpp_source resource called "61lppsource".
\ **nimnodecust -s 61lppsource -p openssh.base.server node01**\
2) Install the product software contained in the two bundles called "llbnd" and "pebnd" on all AIX nodes contained in the xCAT node group called "aixnodes". Assume that all the software packages have been copied to the NIM lpp_source resource called "61lppsource".
\ **nimnodecust -s 61lppsource -b llbnd,pebnd aixnodes**\
*****
FILES
*****
/opt/xcat/bin/nimnodecust
*****
NOTES
*****
This command is part of the xCAT software product.

View File

@ -0,0 +1,192 @@
############
nimnodeset.1
############
.. highlight:: perl
****
NAME
****
\ **nimnodeset**\ - Use this xCAT command to initialize AIX/NIM standalone machines.
********
SYNOPSIS
********
\ **nimnodeset [-h|--help ]**\
\ **nimnodeset [-V|--verbose] [-f|--force] [-i osimage_name] [-l location] [-p|--primarySN] [-b|--backupSN] noderange [attr=val [attr=val ...]]**\
***********
DESCRIPTION
***********
This xCAT command can be used to initialize AIX/NIM standalone machines. Once this step is completed, use either the xCAT \ **rnetboot**\  command or the \ **rbootseq/rpower**\  commands to initiate a network boot of the nodes.
If you are using xCAT service nodes the \ **nimnodeset**\ command will automatically determine the correct server(s) for the node and do the initialization on that server(s).
The osimage_name is the name of an xCAT osimage definition that contains the list of NIM resources to use when initializing the nodes. If the osimage_name is not provided on the command line the code checks the node definition for the value of the "provmethod" attribute (which is the name of an osimage definition). If the osimage_name is provided on the command line then the code will also set the "provmethod" attribute of the node definitions.
This command will also create a NIM resolv_conf resource to be used when installing the node. If a resolv_conf resource is not already included in the xCAT osimage definition and if the "domain" and "nameservers" values are set then a new
NIM resolv_conf resource will be created and allocated to the nodes.
The "domain" and "nameservers" attributes can be set in either the xCAT "network" definition used by the nodes or in the xCAT cluster "site" definition. The setting in the "network" definition will take priority.
The "search" field of the resolv.conf file will contain a list all the domains
listed in the xCAT network definitions and the xCAT site definiton.
The "nameservers" value can either be set to a specific IP address or the "<xcatmaster>" key word. The "<xcatmaster>" key word means that the value of the "xcatmaster" attribute of the node definition will be used in the /etc/resolv.conf file. (I.e. The name of the install server as known by the node.)
You can set the "domain" and "nameservers" attributes by using the \ **chdef**\ command. For example:
chdef -t network -o clstr_net domain=cluster.com nameservers=<xcatmaster>
If the "domain" and "nameservers" attributes are not set in either the nodes "network" definition or the "site" definition then no new NIM resolv_conf resource
will be created.
You can specify additional attributes and values using the "attr=val" command line option. This information will be passed on to the underlying call to the NIM "nim -o bos_inst" command. See the NIM documentation for information on valid command line options for the nim command. The "attr" must correspond to a NIM attribute supported for the NIM "bos_inst" operation. Information provided by the "attr=val" option will take precedence over the information provided in the osimage definition.
The force option can be used to reinitialize a node if it already has resources allocated or it is in the wrong NIM state. This option will reset the NIM node and deallocate resources before reinitializing.
This command will also create a NIM script resource to enable the xCAT support for user-provided customization scripts.
After the \ **nimnodeset**\ command completes you can use the \ **lsnim**\ command to check the NIM node definition to see if it is ready for booting the node. ("lsnim -l <nim_node_name>").
You can supply your own scripts to be run on the management node or on the service node (if there is hierarchy) for a node during the \ **nimnodeset**\  command. Such scripts are called \ **prescripts**\ . They should be copied to the /install/prescripts directory. A table called \ *prescripts*\  is used to specify the scripts and their associated actions. The scripts to be run at the beginning of the \ **nimnodeset**\  command are stored in the 'begin' column of the \ *prescripts*\  table. The scripts to be run at the end of the \ **nimnodeset**\  command are stored in the 'end' column of the \ *prescripts*\  table. Run the 'tabdump prescripts -d' command for details. An example for the 'begin' or the 'end' column is: \ *standalone:myscript1,myscript2*\ . The following two environment variables will be passed to each script: NODES contains all the names of the nodes that need to run the script, and ACTION contains the current nodeset action, in this case "standalone". If \ *#xCAT setting:MAX_INSTANCE=number*\  is specified in the script, the script will be invoked for each node in parallel, but no more than \ *number*\  instances will be invoked at a time. If it is not specified, the script will be invoked once for all the nodes.
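For example, a 'begin' prescript entry for a node group could be set up like this (the group and script names are only illustrative):

.. code-block:: perl

  chtab node=aixnodes prescripts.begin="standalone:myscript1,myscript2"
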
*******
OPTIONS
*******
\ **attr=val [attr=val ...]**\
Specifies one or more "attribute equals value" pairs, separated by spaces. Attr=
val pairs must be specified last on the command line. These are used to specify additional values that can be passed to the underlying NIM commands, ("nim -o bos_inst ..."). See the NIM documentation for valid "nim" command line options. Note that you may specify multiple "script" and "installp_bundle" values by using a comma seperated list. (ex. "script=ascript,bscript").
\ **-b|--backupSN**\
When using backup service nodes only update the backup. The default is to update both the primary and backup service nodes
\ **-f |--force**\
Use the force option to reinitialize the NIM machines.
\ **-h |--help**\
Display usage message.
\ **-i image_name**\
The name of an existing xCAT osimage definition.
\ **-l|--location**\
The directory location to use when creating new NIM resolv_conf resources. The default location is /install/nim.
\ **-p|--primarySN**\
When using backup service nodes only update the primary. The default is to update both the primary and backup service nodes.
\ **noderange**\
A set of comma delimited node names and/or group names. See the "noderange" man page for details on additional supported formats.
\ **-V |--verbose**\
Verbose mode.
************
RETURN VALUE
************
0
The command completed successfully.
1
An error has occurred.
********
EXAMPLES
********
1) Initialize an xCAT node named "node01". Use the xCAT osimage named "61gold" to install the node.
\ **nimnodeset -i 61gold node01**\
2) Initialize all AIX nodes contained in the xCAT node group called "aixnodes" using the image definitions pointed to by the "provmethod" attribute of the xCAT node definitions.
\ **nimnodeset aixnodes**\
3) Initialize an xCAT node called "node02". Include installp_bundle resources that are not included in the osimage definition. This assumes the NIM installp_bundle resources have already been created.
\ **nimnodeset -i 611image node02 installp_bundle=sshbundle,addswbundle**\
*****
FILES
*****
/opt/xcat/bin/nimnodeset
*****
NOTES
*****
This command is part of the xCAT software product.
********
SEE ALSO
********
mknimimage(1)|mknimimage.1, rnetboot(1)|rnetboot.1

View File

@ -0,0 +1,80 @@
###############
nodeaddunmged.1
###############
.. highlight:: perl
****
NAME
****
\ **nodeaddunmged**\ - Create an unmanaged node.
********
SYNOPSIS
********
\ **nodeaddunmged**\ [-h| --help | -v | --version]
\ **nodeaddunmged**\ hostname=<node-name> ip=<ip-address>
***********
DESCRIPTION
***********
The \ **nodeaddunmged**\ command adds an unmanaged node to the __Unmanaged group. You can specify the node name and IP address of the node.
*******
OPTIONS
*******
\ **-h|--help**\
Display usage message.
\ **-v|--version**\
Command Version.
\ **hostname=<node-name**\ >
Sets the name of the new unmanaged node, where <node-name> is the name of the node.
\ **ip=<ip-address**\ >
Sets the IP address of the unmanaged node, where <ip-address> is the IP address of the new node in the form xxx.xxx.xxx.xxx
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
To add an unmanaged node, use the following command:
nodeaddunmged hostname=unmanaged01 ip=192.168.1.100
********
SEE ALSO
********

View File

@ -0,0 +1,160 @@
########
nodech.1
########
.. highlight:: perl
****
NAME
****
\ **nodech**\ - Changes nodes' attributes in the xCAT cluster database.
********
SYNOPSIS
********
\ **nodech**\ \ *noderange*\ \ *table.column=value*\ [\ *...*\ ]
\ **nodech**\ {\ **-d**\ | \ **--delete**\ } \ *noderange*\ \ *table*\ [\ *...*\ ]
\ **nodech**\ {\ **-v**\ | \ **--version**\ }
\ **nodech**\ [\ **-?**\ | \ **-h**\ | \ **--help**\ ]
***********
DESCRIPTION
***********
The nodech command changes the specified attributes for the given nodes. Normally, the given
value will completely replace the current attribute value. But if ",=" is used instead of "=",
the specified value will be prepended to the attribute's comma separated list, if it is not already
there. If "^=" is used, the specified value will be removed from the attribute's comma separated list,
if it is there. You can also use "^=" and ",=" in the same command to essentially replace one item
in the list with another. (See the Examples section.)
Additionally, as in nodels, boolean expressions can be used to further limit the scope of nodech from
the given noderange. The operators supported are the same as nodels (=~, !~, ==, and !=).
With these operators in mind, the unambiguous assignment operator is '=@'. If you need, for example, to set
the nodelist.comments to =foo, you would have to do \ *nodech n1 nodelist.comments=@=foo*\ .
See the \ **xcatdb**\ man page for an overview of each table.
The nodech command also supports some short cut names as aliases to common attributes. See the
\ **nodels**\ man page for details.
*******
OPTIONS
*******
\ **-d|--delete**\
Delete the nodes' row in the specified tables.
\ **-v|--version**\
Command Version.
\ **-?|-h|--help**\
Display usage message.
************
RETURN VALUE
************
0
The command completed successfully.
1
An error has occurred.
********
EXAMPLES
********
\*
To update nodes in noderange node1-node4 to be in only group all:
\ **nodech**\ \ *node1-node4 groups=all*\
\*
To put all nodes with nodepos.rack value of 2 into a group called rack2:
\ **nodech**\ \ *all*\ nodepos.rack==2 groups,=rack2
\*
To add nodes in noderange node1-node4 to the nodetype table with os=rhel5:
\ **nodech**\ \ *node1-node4 groups=all,rhel5 nodetype.os=rhel5*\
\*
To add node1-node4 to group1 in addition to the groups they are already in:
\ **nodech**\ \ *node1-node4 groups,=group1*\
\*
To put node1-node4 in group2, instead of group1:
\ **nodech**\ \ *node1-node4 groups^=group1 groups,=group2*\
*****
FILES
*****
/opt/xcat/bin/nodech
********
SEE ALSO
********
nodels(1)|nodels.1, nodeadd(8)|nodeadd.8, noderange(3)|noderange.3

View File

@ -0,0 +1,82 @@
###########
nodechmac.1
###########
.. highlight:: perl
****
NAME
****
\ **nodechmac**\ - Updates the MAC address for a node.
********
SYNOPSIS
********
\ **nodechmac**\ [-h| --help | -v | --version]
\ **nodechmac**\ <node-name> mac=<mac-address>
***********
DESCRIPTION
***********
The \ **nodechmac**\  command changes the MAC address for a provisioned node's network interface.
You can use this command to keep an existing node configuration. For example, if an existing node has hardware problems, the replacement node can use the old configurations. By using the nodechmac command, the node name and network settings of the old node can be used by the new node.
*******
OPTIONS
*******
\ **-h|--help**\
Display usage message.
\ **-v|--version**\
Command Version.
\ **node-name**\
Specifies the name of the node you want to update, where <node-name> is the node that is updated.
\ **mac=<mac-address**\ >
Sets the new MAC address for the NIC used by the provisioning node, where <mac-address> is the NIC's new MAC address.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
You can update the MAC address for a node, by using the following command:
nodechmac compute-000 mac=2F:3C:88:98:7E:01
********
SEE ALSO
********

View File

@ -0,0 +1,105 @@
###############
nodechprofile.1
###############
.. highlight:: perl
****
NAME
****
\ **nodechprofile**\ - updates a profile used by a node
********
SYNOPSIS
********
\ **nodechprofile**\ [-h| --help | -v | --version]
\ **nodechprofile**\ <noderange> [imageprofile=<image-profile>] [networkprofile=<network-profile>] [hardwareprofile=<hardware-profile>]
***********
DESCRIPTION
***********
The \ **nodechprofile**\ command updates the profiles used by a node, including: the image profile, network profile, and hardware management profile.
If you update the image profile for a node, the operating system and provisioning settings for the node are updated.
If you update the network profile, the IP address and network settings for the node are updated.
If you update the hardware management profile, the hardware settings for the node are updated.
After a node's hardware profile or image profile is updated, the status for the node is changed to "defined". A node with a "defined" status must be reinstalled.
After a node's network profile is updated, the node's status is not changed. You will need to run \ **noderegenips**\  to re-generate the node's IP addresses, and the node's status may also be updated at this stage.
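For example, an illustrative sequence to move a node to a different network profile and then regenerate its IP addresses (the profile name is an example only) could be:
.. code-block:: perl
 nodechprofile compute-000 networkprofile=default_cn
 noderegenips compute-000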
*******
OPTIONS
*******
\ **-h|--help**\
Display usage message.
\ **-v|--version**\
Command Version.
\ **noderange**\
The nodes whose profiles will be updated.
\ **imageprofile=<image-profile**\ >
Sets the new image profile name used by the node, where <image-profile> is the new image profile. An image profile defines the provisioning method, OS information, kit information, and provisioning parameters for a node. If the "__ImageProfile_imgprofile" group already exists in the nodehm table, then "imgprofile" is used as the image profile name.
\ **networkprofile=<network-profile**\ >
Sets the new network profile name used by the node, where <network-profile> is the new network profile. A network profile defines the network, NIC, and routes for a node. If the "__NetworkProfile_netprofile" group already exists in the nodehm table, then "netprofile" is used as the network profile name.
\ **hardwareprofile=<hardware-profile**\ >
Sets the new hardware profile name used by the node, where <hardware-profile> is the new hardware management profile used by the node. If a "__HardwareProfile_hwprofile" group exists, then "hwprofile" is the hardware profile name. A hardware profile defines hardware management related information for imported nodes, including: IPMI, HMC, CEC, CMM.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
To change the image profile to rhels6.3_packaged for compute nodes compute-000 and compute-001, use the following command:
nodechprofile compute-000,compute-001 imageprofile=rhels6.3_packaged
To change all of the profiles for compute node compute-000, enter the following command:
nodechprofile compute-000 imageprofile=rhels6.3_packaged networkprofile=default_cn hardwareprofile=default_ipmi
********
SEE ALSO
********
nodepurge(1)|nodepurge.1, noderefresh(1)|noderefresh.1, nodeimport(1)|nodeimport.1, noderange(3)|noderange.3

View File

@ -0,0 +1,204 @@
#################
nodediscoverdef.1
#################
.. highlight:: perl
****
NAME
****
\ **nodediscoverdef**\ - Define the undefined discovery request to a predefined xCAT node,
or clean up the discovery entries from the discoverydata table
(which can be displayed by nodediscoverls command)
********
SYNOPSIS
********
\ **nodediscoverdef**\ \ **-u uuid**\ \ **-n node**\
\ **nodediscoverdef**\ \ **-r**\ \ **-u uuid**\
\ **nodediscoverdef**\ \ **-r**\ \ **-t**\ {\ **seq**\ |\ **profile**\ |\ **switch**\ |\ **blade**\ |\ **manual**\ |\ **undef**\ |\ **all**\ }
\ **nodediscoverdef**\ [\ **-h**\ |\ **--help**\ |\ **-v**\ |\ **--version**\ ]
***********
DESCRIPTION
***********
The \ **nodediscoverdef**\ command defines the discovery entry from the discoverydata table to a predefined
xCAT node. The discovery entry can be displayed by \ **nodediscoverls**\ command.
The options \ **-u**\ and \ **-n**\ have to be used together to define a discovery request to a node.
The \ **nodediscoverdef**\ command also can be used to clean up the discovery entries from the
discoverydata table.
The option \ **-r**\  is used to remove discovery entries. When used with \ **-u**\ , the specific entry whose uuid is specified by \ **-u**\  will be removed.
You can also use the \ **-r**\  \ **-t**\  option to remove only the entries for nodes that were discovered by a particular discovery method.
*******
OPTIONS
*******
\ **-t seq|profile|switch|blade|manual|undef|all**\
Specify the nodes that have been discovered by the specified discovery method:
\*
\ **seq**\ - Sequential discovery (started via nodediscoverstart noderange=<noderange> ...).
\*
\ **profile**\ - Profile discovery (started via nodediscoverstart networkprofile=<network-profile> ...).
\*
\ **switch**\ - Switch-based discovery (used when the switch and switches tables are filled in).
\*
\ **blade**\ - Blade discovery (used for IBM Flex blades).
\*
\ **manual**\  - Manual discovery (used when defining a node with the nodediscoverdef command).
\*
\ **undef**\ - Display the nodes that were in the discovery pool, but for which xCAT has not yet received a discovery request.
\*
\ **all**\ - All discovered nodes.
\ **-n node**\
The xCAT node that the discovery entry will be defined to.
\ **-r**\
Remove the discovery entries from discoverydata table.
\ **-u uuid**\
The uuid of the discovered entry.
\ **-h|--help**\
Display usage message.
\ **-v|--version**\
Command version.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
1
Define the discovery entry whose uuid is 51E5F2D7-0D59-11E2-A7BC-3440B5BEDBB4 to node node1
\ **nodediscoverdef**\ -u 51E5F2D7-0D59-11E2-A7BC-3440B5BEDBB4 -n node1
.. code-block:: perl
Defined [51E5F2D7-0D59-11E2-A7BC-3440B5BEDBB4] to node node1.
2
Remove the discovery entry whose uuid is 51E5F2D7-0D59-11E2-A7BC-3440B5BEDBB4 from the discoverydata table
\ **nodediscoverdef**\ -r -u 51E5F2D7-0D59-11E2-A7BC-3440B5BEDBB4
.. code-block:: perl
Removing discovery entries finished.
3
Remove the discovery entries whose discovery type is \ **seq**\  from the discoverydata table
\ **nodediscoverdef**\ -r -t seq
.. code-block:: perl
Removing discovery entries finished.
********
SEE ALSO
********
nodediscoverstart(1)|nodediscoverstart.1, nodediscoverstatus(1)|nodediscoverstatus.1, nodediscoverstop(1)|nodediscoverstop.1, nodediscoverls(1)|nodediscoverls.1

View File

@ -0,0 +1,231 @@
################
nodediscoverls.1
################
.. highlight:: perl
****
NAME
****
\ **nodediscoverls**\ - List the discovered nodes
********
SYNOPSIS
********
\ **nodediscoverls**\ [\ **-t seq**\ |\ **profile**\ |\ **switch**\ |\ **blade**\ |\ **manual**\ |\ **undef**\ |\ **all**\ ] [\ **-l**\ ]
\ **nodediscoverls**\ [\ **-u uuid**\ ] [\ **-l**\ ]
\ **nodediscoverls**\ [\ **-h**\ |\ **--help**\ |\ **-v**\ |\ **--version**\ ]
***********
DESCRIPTION
***********
The \ **nodediscoverls**\ command lists nodes that have recently been discovered. If discovery
is currently in progress (i.e. \ **nodediscoverstart**\ has been run, but \ **nodediscoverstop**\ has not been),
then \ **nodediscoverls**\ will list the nodes that have been discovered so far in this session.
If discovery is not currently in progress, \ **nodediscoverls**\ will list all of the nodes that were
discovered in the last discovery session.
You can use the \ **-t**\ option to limit the output to just the nodes that were discovered in a
particular method of discovery.
*******
OPTIONS
*******
\ **-t seq|profile|switch|blade|manual|undef|all**\
Display the nodes that have been discovered by the specified discovery method:
\*
\ **seq**\ - Sequential discovery (started via nodediscoverstart noderange=<noderange> ...).
\*
\ **profile**\ - Profile discovery (started via nodediscoverstart networkprofile=<network-profile> ...).
\*
\ **switch**\ - Switch-based discovery (used when the switch and switches tables are filled in).
\*
\ **blade**\ - Blade discovery (used for IBM Flex blades).
\*
\ **manual**\  - Manual discovery (used when defining a node with the nodediscoverdef command).
\*
\ **undef**\ - Display the nodes that were in the discovery pool, but for which xCAT has not yet received a discovery request.
\*
\ **all**\ - All discovered nodes.
\ **-l**\
Display more detailed information about the discovered nodes.
\ **-u uuid**\
Display the discovered node that has this uuid.
\ **-h|--help**\
Display usage message.
\ **-v|--version**\
Command version.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
1
Display the discovered nodes when sequential discovery is running:
\ **nodediscoverls**\
.. code-block:: perl
UUID NODE METHOD MTM SERIAL
51E5F2D7-0D59-11E2-A7BC-3440B5BEDBB2 distest1 sequential 786310X 1052EF2
51E5F2D7-0D59-11E2-A7BC-3440B5BEDBB3 distest2 sequential 786310X 1052EF3
2
Display the nodes that were in the discovery pool, but for which xCAT has not yet received a discovery request:
\ **nodediscoverls**\ -t undef
.. code-block:: perl
UUID NODE METHOD MTM SERIAL
51E5F2D7-0D59-11E2-A7BC-3440B5BEDBB0 undef undef 786310X 1052EF0
51E5F2D7-0D59-11E2-A7BC-3440B5BEDBB1 undef undef 786310X 1052EF1
3
Display all the discovered nodes:
\ **nodediscoverls**\ -t all
.. code-block:: perl
UUID NODE METHOD MTM SERIAL
51E5F2D7-0D59-11E2-A7BC-3440B5BEDBB0 undef undef 786310X 1052EF0
51E5F2D7-0D59-11E2-A7BC-3440B5BEDBB1 undef undef 786310X 1052EF1
51E5F2D7-0D59-11E2-A7BC-3440B5BEDBB2 distest1 sequential 786310X 1052EF2
51E5F2D7-0D59-11E2-A7BC-3440B5BEDBB3 distest2 sequential 786310X 1052EF3
4
Display the discovered node whose uuid is \ **51E5F2D7-0D59-11E2-A7BC-3440B5BEDBB2**\ , with detailed information:
\ **nodediscoverls**\ -u 51E5F2D7-0D59-11E2-A7BC-3440B5BEDBB2 -l
.. code-block:: perl
Object uuid: 51E5F2D7-0D59-11E2-A7BC-3440B5BEDBB2
node=distest1
method=sequential
discoverytime=03-31-2013 17:05:12
arch=x86_64
cpucount=32
cputype=Intel(R) Xeon(R) CPU E5-2690 0 @ 2.90GHz
memory=198460852
mtm=786310X
serial=1052EF2
nicdriver=eth0!be2net,eth1!be2net
nicipv4=eth0!10.0.0.212/8
nichwaddr=eth0!34:40:B5:BE:DB:B0,eth1!34:40:B5:BE:DB:B4
nicpci=eth0!0000:0c:00.0,eth1!0000:0c:00.1
nicloc=eth0!Onboard Ethernet 1,eth1!Onboard Ethernet 2
niconboard=eth0!1,eth1!2
nicfirm=eth0!ServerEngines BE3 Controller,eth1!ServerEngines BE3 Controller
switchname=eth0!c909f06sw01
switchaddr=eth0!192.168.70.120
switchdesc=eth0!IBM Flex System Fabric EN4093 10Gb Scalable Switch, flash image: version 7.2.6, boot image: version 7.2.6
switchport=eth0!INTA2
********
SEE ALSO
********
nodediscoverstart(1)|nodediscoverstart.1, nodediscoverstatus(1)|nodediscoverstatus.1, nodediscoverstop(1)|nodediscoverstop.1, nodediscoverdef(1)|nodediscoverdef.1

View File

@ -0,0 +1,245 @@
###################
nodediscoverstart.1
###################
.. highlight:: perl
****
NAME
****
\ **nodediscoverstart**\ - starts the node discovery process
********
SYNOPSIS
********
\ **nodediscoverstart**\ [\ **-h**\ |\ **--help**\ |\ **-v**\ |\ **--version**\ ]
\ **Sequential Discovery Specific:**\
\ **nodediscoverstart**\ \ **noderange=**\ \ *noderange*\ [\ **hostiprange=**\ \ *hostiprange*\ ] [\ **bmciprange=**\ \ *bmciprange*\ ] [\ **groups=**\ \ *groups*\ ] [\ **rack=**\ \ *rack*\ ] [\ **chassis=**\ \ *chassis*\ ] [\ **height=**\ \ *height*\ ] [\ **unit=**\ \ *unit*\ ] [osimage=<osimagename>] [-n|--dns] [-s|--skipbmcsetup] [\ **-V|--verbose**\ ]
\ **Profile Discovery Specific:**\
\ **nodediscoverstart**\ \ **networkprofile=**\ \ *network-profile*\ \ **imageprofile=**\ \ *image-profile*\ \ **hostnameformat=**\ \ *host-name-format*\ [\ **hardwareprofile=**\ \ *hardware-profile*\ ] [\ **groups=**\ \ *node-groups*\ ] [\ **rack=**\ \ *rack-name*\ ] [\ **chassis=**\ \ *chassis-name*\ ] [\ **height=**\ \ *rack-server-height*\ ] [\ **unit=**\ \ *rack-server-unit-location*\ ] [\ **rank=**\ \ *rank-num*\ ]
***********
DESCRIPTION
***********
The \ **nodediscoverstart**\ command starts either the \ **Sequential Discovery**\ or \ **Profile Discovery**\ process. They can not both be
running at the same time.
\ **Sequential Discovery Specific:**\
This is the simplest discovery approach. You only need to specify the \ **noderange**\ , \ **hostiprange**\ and \ **bmciprange**\ that should be
given to nodes that are discovered. (If you pre-define the nodes (via nodeadd or mkdef) and specify their host and BMC IP addresses,
then you only need to specify the \ **noderange**\ to the \ **nodediscoverstart**\ command.) Once you have run \ **nodediscoverstart**\ , then
physically power on the nodes in the sequence that you want them to receive the node names and IPs, waiting a short time (e.g. 30 seconds)
between each node.
\ **Profile Discovery Specific:**\
This is the PCM discovery approach. \ *networkprofile*\ , \ *imageprofile*\ , \ *hostnameformat*\ arguments must be specified to start the \ **Profile Discovery**\ .
All nodes discovered by this process will be associated with specified profiles and rack/chassis/unit locations.
When the nodes are discovered, PCM updates the affected configuration files on the management node automatically. Configuration files include the /etc/hosts service file, DNS configuration, and DHCP configuration. Kit plug-ins are automatically triggered to update kit related configurations and services.
When you power on the nodes, they PXE boot, and the DHCP/TFTP/HTTP services on the management node give each node the xCAT genesis boot image, which inventories the node hardware and sends the data to the management node. There, either the sequential discovery process or the profile discovery process assigns node attributes and defines the node in the database.
*******
OPTIONS
*******
\ **noderange=**\ \ *noderange*\
The set of node names that should be given to nodes that are discovered via the \ **Sequential Discovery**\ method.
This argument is required for \ **Sequential Discovery**\ . Any valid xCAT \ **noderange**\  is allowed, e.g. node[01-10].
\ **hostiprange=**\ \ *ip range*\
The ip range which will be assigned to the host of new discovered nodes in the \ **Sequential Discovery**\ method. The format can be: \ *start_ip*\ \ **-**\ \ *end_ip*\ or \ *noderange*\ , e.g. 192.168.0.1-192.168.0.10 or 192.168.0.[1-10].
\ **bmciprange=**\ \ *ip range*\
The ip range which will be assigned to the bmc of new discovered nodes in the \ **Sequential Discovery**\ method. The format can be: \ *start_ip*\ \ **-**\ \ *end_ip*\ or \ *noderange*\ , e.g. 192.168.1.1-192.168.1.10 or 192.168.1.[1-10].
\ **imageprofile=**\ \ *image-profile*\
Sets the new image profile name used by the discovered nodes in the \ **Profile Discovery**\ method. An image profile defines the provisioning method, OS information, kit information, and provisioning parameters for a node. If the "__ImageProfile_imgprofile" group already exists in the nodehm table, then "imgprofile" is used as the image profile name.
\ **networkprofile=**\ \ *network-profile*\
Sets the new network profile name used by the discovered nodes in the \ **Profile Discovery**\ method. A network profile defines the network, NIC, and routes for a node. If the "__NetworkProfile_netprofile" group already exists in the nodehm table, then "netprofile" is used as the network profile name.
\ **hardwareprofile=**\ \ *hardware-profile*\
Sets the new hardware profile name used by the discovered nodes in the \ **Profile Discovery**\ method. If a "__HardwareProfile_hwprofile" group exists, then "hwprofile" is the hardware profile name. A hardware profile defines hardware management related information for imported nodes, including: IPMI, HMC, CEC, CMM.
\ **hostnameformat=**\ \ *host-name-format*\
Sets the node name format for all discovered nodes in the \ **Profile Discovery**\ method. The two types of formats supported are prefix#NNNappendix and prefix#RRand#NNappendix, where wildcard #NNN and #NN are replaced by a system generated number that is based on the provisioning order. Wildcard #RR represents the rack number and stays constant.
For example, if the node name format is compute-#NN, the node name is generated as: compute-00, compute-01, ..., compute-99. If the node name format is blade#NNN-x64, the node name is generated as: blade001-x64, blade002-x64, ..., blade999-x64
For example, if the node name format is compute-#RR-#NN and the rack number is 2, the node name is generated as: compute-02-00, compute-02-01, ..., compute-02-99. If node name format is node-#NN-in-#RR and rack number is 1, the node name is generated as: node-00-in-01, node-01-in-01, ..., node-99-in-01
\ **groups=**\ \ *node-groups*\
Sets the node groups that the discovered nodes should be put in for either the Sequential Discovery or Profile Discovery methods, where \ *node-group*\ is a comma-separated list of node groups.
\ **rack=**\ \ *rack-name*\
Sets the rack name where the node is located for either the Sequential Discovery or Profile Discovery methods.
\ **chassis=**\ \ *chassis-name*\
Sets the chassis name that the Blade server or PureFlex blade is located in, for either the Sequential Discovery or Profile Discovery methods. This option is used for the Blade server and PureFlex system only. You cannot specify this option with the rack option.
\ **height=**\ \ *rack-server-height*\
Sets the height of a rack-mounted server in U units for either the Sequential Discovery or Profile Discovery methods. If the rack option is not specified, the default value is 1.
\ **unit=**\ \ *rack-server-unit-location*\
Sets the start unit value for the node in the rack, for either the Sequential Discovery or Profile Discovery methods. This option is for a rack server only. If the unit option is not specified, the default value is 1.
\ **rank=**\ \ *rank-num*\
Specifies the starting rank number that is used in the node name format, for the Profile Discovery method. The rank number must be a valid integer between 0 and 254. This option must be specified with the hostnameformat option. For example, if your node name format is compute-#RR-#NN, the rack number is 2, and rank is specified as 5, the node names are generated as follows: compute-02-05, compute-02-06, ..., compute-02-99.
\ **osimage=**\ \ *osimagename*\
Specifies the osimage name that will be associated with the new discovered node, the os provisioning will be started automatically at the end of the discovery process.
\ **-n|--dns**\
Specifies to run makedns <nodename> for any newly discovered node. This is useful mainly for non-predefined configurations. Before running "nodediscoverstart -n", the user needs to run makedns -n to initialize the named setup on the management node.
\ **-s|--skipbmcsetup**\
Specifies to skip bmcsetup during the sequential discovery process. If bmciprange is specified with the nodediscoverstart command, the BMC will be set up automatically during the discovery process; if the user does not want to run bmcsetup, specify "-s|--skipbmcsetup" with the nodediscoverstart command to skip it.
\ **-V|--verbose**\
Enumerates the free node names and host/bmc ips that are being specified in the ranges given. Use this option
with Sequential Discovery to ensure that you are specifying the ranges you intend.
\ **-h|--help**\
Display usage message.
\ **-v|--version**\
Command Version.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
1
\ **Sequential Discovery**\ : To discover nodes with noderange and host/bmc ip range:
\ **nodediscoverstart noderange=n[1-10] hostiprange='172.20.101.1-172.20.101.10' bmciprange='172.20.102.1-172.20.102.10' -V**\
.. code-block:: perl
Sequential Discovery: Started:
Number of free node names: 10
Number of free host ips: 10
Number of free bmc ips: 10
------------------------------------Free Nodes------------------------------------
NODE HOST IP BMC IP
n01 172.20.101.1 172.20.102.1
n02 172.20.101.2 172.20.102.2
... ... ...
2
\ **Profile Discovery**\ : To discover nodes using the default_cn network profile and the rhels6.3_packaged image profile, use the following command:
\ **nodediscoverstart networkprofile=default_cn imageprofile=rhels6.3_packaged hostnameformat=compute#NNN**\
********
SEE ALSO
********
nodediscoverstop(1)|nodediscoverstop.1, nodediscoverls(1)|nodediscoverls.1, nodediscoverstatus(1)|nodediscoverstatus.1

View File

@ -0,0 +1,74 @@
####################
nodediscoverstatus.1
####################
.. highlight:: perl
****
NAME
****
\ **nodediscoverstatus**\ - gets the node discovery process status
********
SYNOPSIS
********
\ **nodediscoverstatus**\ [\ **-h**\ |\ **--help**\ |\ **-v**\ |\ **--version**\ ]
***********
DESCRIPTION
***********
The \ **nodediscoverstatus**\ command detects if the sequential or profile node discovery process is currently running, i.e. \ **nodediscoverstart**\
has been run, but \ **nodediscoverstop**\ has not.
*******
OPTIONS
*******
\ **-h|--help**\
Display usage message.
\ **-v|--version**\
Command Version.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
To determine whether the node discovery process is currently running, enter the following command:
nodediscoverstatus
********
SEE ALSO
********
nodediscoverstart(1)|nodediscoverstart.1, nodediscoverls(1)|nodediscoverls.1, nodediscoverstop(1)|nodediscoverstop.1

View File

@ -0,0 +1,73 @@
##################
nodediscoverstop.1
##################
.. highlight:: perl
****
NAME
****
\ **nodediscoverstop**\ - stops the node discovery process.
********
SYNOPSIS
********
\ **nodediscoverstop**\ [\ **-h**\ |\ **--help**\ |\ **-v**\ |\ **--version**\ ]
***********
DESCRIPTION
***********
The \ **nodediscoverstop**\ command stops the sequential or profile node discovery process.
Once this command has been run, newly discovered nodes will not be assigned node names
and attributes automatically via the sequential or profile discovery process.
*******
OPTIONS
*******
\ **-h|--help**\
Display usage message.
\ **-v|--version**\
Command Version.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
nodediscoverstop
********
SEE ALSO
********
nodediscoverstart(1)|nodediscoverstart.1, nodediscoverls(1)|nodediscoverls.1, nodediscoverstatus(1)|nodediscoverstatus.1

View File

@ -0,0 +1,119 @@
###########
nodegrpch.1
###########
.. highlight:: perl
****
NAME
****
\ **nodegrpch**\ - Changes attributes at the group level in the xCAT cluster database.
********
SYNOPSIS
********
\ **nodegrpch**\ \ *group1,group2,...*\ \ *table.column=value*\ [\ *...*\ ]
\ **nodegrpch**\ {\ **-v**\ | \ **--version**\ }
\ **nodegrpch**\ [\ **-?**\ | \ **-h**\ | \ **--help**\ ]
***********
DESCRIPTION
***********
The nodegrpch command is similar to the nodech command, but ensures that the parameters are
declared at the group level rather than the node specific level, and clears conflicting node
specific overrides of the specified groups. Using table.column=value will do a
verbatim assignment. If ",=" is used instead of "=", the specified value will be prepended to the
attribute's comma separated list, if it is not already there. If "^=" is used, the specified
value will be removed from the attribute's comma separated list, if it is there. You can also
use "^=" and ",=" in the same command to essentially replace one item
in the list with another. (See the Examples section.)
With these operators in mind, the unambiguous assignment operator is '=@'. If you need, for example, to set
the nodehm.comments to =foo, you would have to do \ *nodegrpch group1 nodehm.comments=@=foo*\ .
See the \ **xcatdb**\ man page for an overview of each table.
The nodegrpch command also supports some short cut names as aliases to common attributes. See the
\ **nodels**\ man page for details.
*******
OPTIONS
*******
\ **-v|--version**\
Command Version.
\ **-?|-h|--help**\
Display usage message.
************
RETURN VALUE
************
0
The command completed successfully.
1
An error has occurred.
********
EXAMPLES
********
\*
To declare all members of ipmi group to have nodehm.mgt be ipmi
\ **nodegrpch**\ \ *ipmi nodehm.mgt=ipmi*\
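\*
To replace one item in a group's comma-separated list with another, as described above. The attribute and script names here are illustrative only:
\ **nodegrpch**\ \ *compute postscripts.postscripts^=oldscript postscripts.postscripts,=newscript*\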
*****
FILES
*****
/opt/xcat/bin/nodegrpch
********
SEE ALSO
********
nodech(1)|nodech.1, nodels(1)|nodels.1, nodeadd(8)|nodeadd.8, noderange(3)|noderange.3

View File

@ -0,0 +1,254 @@
############
nodeimport.1
############
.. highlight:: perl
****
NAME
****
\ **nodeimport**\ - Create profiled nodes by importing hostinfo file.
********
SYNOPSIS
********
\ **nodeimport**\ [-h| --help | -v | --version]
\ **nodeimport**\ file=<hostinfo-filename> networkprofile=<network-profile> imageprofile=<image-profile> hostnameformat=<node-name-format> [hardwareprofile=<hardware-profile>] [groups=<node-groups>]
***********
DESCRIPTION
***********
The \ **nodeimport**\  command creates nodes by importing a hostinfo file, which follows the stanza format. In this hostinfo file, you can define the node's hostname, ip, mac, switch name, switch port, and host location information such as rack, chassis, start unit, and server height.
After the nodes are imported, the configuration files related to these nodes will be updated automatically, for example: /etc/hosts, DNS configuration, and DHCP configuration. The kit node plug-ins will also be triggered automatically to update kit-related configuration and services.
*******
OPTIONS
*******
\ **-h|--help**\
Display usage message.
\ **-v|--version**\
Command Version.
\ **file=<nodeinfo-filename**\ >
Specifies the node information file, where <nodeinfo-filename> is the full path and file name of the node information file.
\ **imageprofile=<image-profile**\ >
Sets the new image profile name used by the node, where <image-profile> is the new image profile. An image profile defines the provisioning method, OS information, kit information, and provisioning parameters for a node. If the "__ImageProfile_imgprofile" group already exists in the nodehm table, then "imgprofile" is used as the image profile name.
\ **networkprofile=<network-profile**\ >
Sets the new network profile name used by the node, where <network-profile> is the new network profile. A network profile defines the network, NIC, and routes for a node. If the "__NetworkProfile_netprofile" group already exists in the nodehm table, then "netprofile" is used as the network profile name.
\ **hardwareprofile=<hardware-profile**\ >
Sets the new hardware profile name used by the node, where <hardware-profile> is the new hardware management profile used by the node. If a "__HardwareProfile_hwprofile" group exists, then "hwprofile" is the hardware profile name. A hardware profile defines hardware management related information for imported nodes, including: IPMI, HMC, CEC, CMM.
\ **hostnameformat=<host-name-format**\ >
Sets the node name format for all nodes discovered, where <node-name-format> is a supported format. The two types of formats supported are prefix#NNNappendix and prefix#RRand#NNappendix, where wildcard #NNN and #NN are replaced by a system generated number that is based on the provisioning order. Wildcard #RR represents the rack number and stays constant.
For example, if the node name format is compute-#NN, the node name is generated as: compute-00, compute-01, ... , compute-99. If the node name format is blade#NNN-x64, the node name is generated as: blade001-x64, blade002-x64, ... , blade999-x64
For example, if the node name format is compute-#RR-#NN and the rack number is 2, the node name is generated as: compute-02-00, compute-02-01, ..., compute-02-99. If node name format is node-#NN-in-#RR and rack number is 1, the node name is generated as: node-00-in-01, node-01-in-01, ... , node-99-in-01
\ **groups=<node-groups**\ >
Sets the node groups that the imported node belongs to, where <node-group> is a comma-separated list of node groups.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred while validating parameters.
2 An error has occurred while parsing the hostinfo file.
********
EXAMPLES
********
To import nodes using a profile, complete the following steps:
1. Find all node groups and profiles by running the command "tabdump nodegroup". For detailed profile information run "lsdef -t group <groupname>". Example of detailed profile information:
.. code-block:: perl
# tabdump nodegroup
#groupname,grouptype,members,membergroups,wherevals,comments,disable
"compute","static",,,,,
"__HardwareProfile_default_ipmi","static","static",,,,
"__NetworkProfile_default_mn","static","static",,,,
"__NetworkProfile_default_cn","static",,,,,
"__ImageProfile_rhels6.2-x86_64-install-compute","static","static",,,,
# lsdef -t group __NetworkProfile_default_cn
Object name: __NetworkProfile_default_cn
grouptype=static
installnic=eth0
members=compute-000,compute-001
netboot=xnba
nichostnamesuffixes=eth0:-eth0
nicnetworks=eth0:provision
nictypes=eth0:Ethernet
primarynic=eth0
2. Prepare a node information file.
.. code-block:: perl
Example of a node information file, a blade and a rack server defined:
# hostinfo begin
# This entry defines a blade.
__hostname__:
mac=b8:ac:6f:37:59:24
ip=192.168.1.20
chassis=chassis01
# This entry defines a rack server.
__hostname__:
mac=b8:ac:6f:37:59:25
ip=192.168.1.20
rack=rack01
height=1
unit=2
# hostinfo end.
Another example of a node information file, a PureFlex X/P node defined:
# hostinfo begin
# To define a PureFlex P/X node, chassis and slot id must be specified.
# The chassis must be a PureFlex chassis.
__hostname__:
mac=b8:ac:6f:37:59:25
chassis=cmm01
slotid=1
# hostinfo end.
Example of a node information file, a switch auto discovery node defined:
# hostinfo begin
# This entry defines a blade.
__hostname__:
switches=eth0!switch1!1,eth0!switch2!1!eth1
Example of a node information file that specifies a CEC-based rack-mounted Power node that uses direct FSP management:
# Node information file begins
# This entry defines a Power rack-mount node.
__hostname__:
mac=b8:ac:6f:37:59:28
cec=mycec
__hostname__:
mac=b8:ac:6f:37:59:28
cec=mycec
lparid=2
# Node information file ends.
Example of a node information file that specifies a PowerKVM Guest node that uses KVM management:
# Node information file begins
# This entry defines a PowerKVM Guest node.
# Make sure the node 'vm01' is already created on Hypervisor
vm01:
mac=b8:ef:3f:28:31:15
vmhost=pkvm1
# Node information file ends.
The node information file includes the following items:
\ **__hostname__:**\ This is a mandatory item.
Description: The name of the node, where __hostname__ is automatically generated by the node name format. You can also input a fixed node name, for example "compute-node".
\ **mac=<mac-address**\ > This is a mandatory item.
Description: Specify the MAC address for the NIC used by the provisioning node, where <mac-address> is the NIC's MAC address.
\ **switches=<nic-name!switch-name!switch-port**\ > This is a mandatory item when defining the switch, switch port, and node NIC name relationship.
Description: Specify the NIC name, switch name, and switch port to define the node and switch relationship. Multiple nic-switch-port relations can be defined here, for example: switches=eth0!switch1!1,eth1!switch1!2
\ **slotid=<slot-id**\ > This is a mandatory item when defining a PureFlex node.
Description: The node position in the PureFlex Chassis.
\ **cec=<cec-name**\ > This is a mandatory option for defining Power rack-mounted nodes.
Description: Specifies the name of a Power rack-mount central electronic complex (CEC).
\ **lparid=<lpar-id**\ > This is an optional item for defining Power rack-mounted nodes.
Description: Specifies the LPAR ID of a Power rack-mounted node, where <lpar-id> is the ID number. The default value is 1 if it is not defined.
\ **ip=<ip-address**\ > This is an optional item.
Description: Specify the IP address used for provisioning a node, where <ip-address> is in the form xxx.xxx.xxx.xxx. If this item is not included, the IP address used to provision the node is generated automatically according to the Network Profile used by the node.
\ **nicips=<nics-ip**\ > This is an optional item.
Description: Lists the IP address for each network interface configuration (NIC) used by the node, excluding the provisioning network, where <nics-ip> is in the form <nic1>!<nic-ip1>,<nic2>!<nic-ip2>,.... For example, if you have 2 network interfaces configured, the nicips attribute should list both network interfaces: nicips=eth1!10.10.10.11,bmc!192.168.10.3. If the nicips attribute is not specified, the IP addresses are generated automatically according to the network profile.
\ **rack=<rack-name**\ > This is an optional item.
Description: Node location info. Specify the rack name into which this node will be placed. If this item is not specified, no node location info is set for this node. This item must be specified together with height and unit.
\ **chassis=<chassis-name**\ > This is an optional item.
Description: Node location info, for blade (or PureFlex) only. Specify the chassis name into which this blade will be placed. This item cannot be specified together with rack.
\ **height=<chassis-height**\ > This is an optional item.
Description: Node location info, for rack server only. Specify the server height, in U. This item must be specified together with rack and unit.
\ **unit=<rack-server-unit-location**\ > This is an optional item.
Description: Node location info, for rack server only. Specify the node's start unit number in the rack, in U. This item must be specified together with rack and height.
\ **vmhost=<PowerKVM Hypervisor Host Name**\ > This is a mandatory option for defining PowerKVM Guest nodes.
Description: Specifies the vmhost of a PowerKVM Guest node, where <vmhost> is the host name of the PowerKVM Hypervisor.
3. Import the nodes by using the following command. Note: to import PureFlex X/P nodes, the hardware profile must be set to a PureFlex hardware type.
nodeimport file=/root/hostinfo.txt networkprofile=default_cn imageprofile=rhels6.3_packaged hostnameformat=compute-#NNN
4. After importing the nodes, the nodes are created and all configuration files used by the nodes are updated, including: /etc/hosts, DNS, DHCP.
5. Reboot the nodes. After the nodes are booted they are provisioned automatically.
********
SEE ALSO
********
nodepurge(1)|nodepurge.1, nodechprofile(1)|nodechprofile.1, noderefresh(1)|noderefresh.1

View File

@ -0,0 +1,402 @@
########
nodels.1
########
.. highlight:: perl
****
NAME
****
\ **nodels**\ - lists the nodes, and their attributes, from the xCAT database.
********
SYNOPSIS
********
\ **nodels**\ [\ *noderange*\ ] [\ **-b**\ | \ **--blame**\ ] [\ **-H**\ | \ **--with-fieldname**\ ] [\ **-S**\ ] [\ *table.column*\ | \ *shortname*\ ] [\ *...*\ ]
\ **nodels**\ [\ *noderange*\ ] [\ **-H**\ | \ **--with-fieldname**\ ] [\ *table*\ ]
\ **nodels**\ [\ **-?**\ | \ **-h**\ | \ **--help**\ | \ **-v**\ | \ **--version**\ ]
***********
DESCRIPTION
***********
The \ **nodels**\ command lists the nodes specified in the node range. If no noderange is provided, then all nodes are listed.
Additional attributes of the nodes will also be displayed if the table names and attribute names
are specified after the noderange in the form: \ *table.column*\ . A few shortcut names can
also be used as aliases to common attributes:
\ **groups**\
nodelist.groups
\ **tags**\
nodelist.groups
\ **mgt**\
nodehm.mgt
nodels can also select based on table value criteria. The following operators are available:
\ **==**\
Select nodes where the table.column value is exactly a certain value.
\ **!=**\
Select nodes where the table.column value is not a given specific value.
\ **=~**\
Select nodes where the table.column value matches a given regular expression.
\ **!~**\
Select nodes where the table.column value does not match a given regular expression.
The \ **nodels**\ command with a specific node and one or more table.attribute parameters is a good substitute
for grep'ing through the tab files, as was typically done in xCAT 1.x. This is because nodels will translate
any regular expression rows in the tables into their meaning for the specified node. The tab\* commands
will not do this, instead they will just display the regular expression row verbatim.
*******
OPTIONS
*******
\ **-v|--version**\
Command Version.
\ **-H|--with-fieldname**\
Force display of table name and column name context for each result
\ **-b|--blame**\
For values inherited from groups, display which groups provided the inheritance
\ **-S**\
List all the hidden nodes (FSP/BPA nodes) with other ones.
\ **-?|-h|--help**\
Display usage message.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
1.
To list all defined nodes, enter:
.. code-block:: perl
nodels
Output is similar to:
.. code-block:: perl
node1
node2
node3
2.
To list all defined attributes in a table for a node or noderange, enter:
.. code-block:: perl
nodels rra001a noderes
Output is similar to:
.. code-block:: perl
rra001a: noderes.primarynic: eth0
rra001a: noderes.xcatmaster: rra000
rra001a: noderes.installnic: eth0
rra001a: noderes.netboot: pxe
rra001a: noderes.servicenode: rra000
rra001a: noderes.node: rra001a
3.
To list nodes in node group ppc, enter:
.. code-block:: perl
nodels ppc
Output is similar to:
.. code-block:: perl
ppcnode1
ppcnode2
ppcnode3
4.
To list the groups each node is part of:
.. code-block:: perl
nodels all groups
Output is similar to:
.. code-block:: perl
node1: groups: all
node2: groups: all,storage
node3: groups: all,blade
5.
To list the power management method for each node:
.. code-block:: perl
nodels all nodehm.power
Output is similar to:
.. code-block:: perl
node1: nodehm.power: blade
node2: nodehm.power: ipmi
node3: nodehm.power: ipmi
6.
To list the out-of-band mgt method for blade1:
.. code-block:: perl
nodels blade1 nodehm.mgt
Output is similar to:
.. code-block:: perl
blade1: blade
7.
Listing blades managed through an AMM named 'amm1'
.. code-block:: perl
nodels all mp.mpa==amm1
Output is similar to:
.. code-block:: perl
blade1
blade10
blade11
blade12
blade13
blade2
blade3
blade4
blade5
blade6
blade7
blade8
blade9
8.
Listing the switch.switch value for nodes in the second rack:
.. code-block:: perl
nodels all nodepos.rack==2 switch.switch
Output is similar to:
.. code-block:: perl
n41: switch.switch: switch2
n42: switch.switch: switch2
n43: switch.switch: switch2
n44: switch.switch: switch2
n45: switch.switch: switch2
n46: switch.switch: switch2
n47: switch.switch: switch2
n55: switch.switch: switch2
n56: switch.switch: switch2
n57: switch.switch: switch2
n58: switch.switch: switch2
n59: switch.switch: switch2
n60: switch.switch: switch2
9.
Listing the blade slot number for anything managed through a device with a name beginning with amm:
.. code-block:: perl
nodels all mp.mpa=~/^amm.*/ mp.id
Output looks like:
.. code-block:: perl
blade1: mp.id: 1
blade10: mp.id: 10
blade11: mp.id: 11
blade12: mp.id: 12
blade13: mp.id: 13
blade2: mp.id: 2
blade3: mp.id: 3
blade4: mp.id: 4
blade5: mp.id: 5
blade6: mp.id: 6
blade7: mp.id: 7
blade8: mp.id: 8
blade9: mp.id: 9
10.
To list the hidden nodes that can't be seen with other flags.
The hidden nodes are FSP/BPAs.
.. code-block:: perl
nodels all -S
*****
FILES
*****
/opt/xcat/bin/nodels
********
SEE ALSO
********
noderange(3)|noderange.3, tabdump(8)|tabdump.8, lsdef(1)|lsdef.1

View File

@ -0,0 +1,81 @@
###########
nodepurge.1
###########
.. highlight:: perl
****
NAME
****
\ **nodepurge**\ - Removes nodes.
********
SYNOPSIS
********
\ **nodepurge**\ [-h| --help | -v | --version]
\ **nodepurge**\ <noderange>
***********
DESCRIPTION
***********
The \ **nodepurge**\  command automatically removes all specified nodes from the database and any related configurations used by the nodes.
After the nodes are removed, the configuration files related to these nodes are automatically updated, including the following files: /etc/hosts, DNS, DHCP. Any kits that are used by the nodes are triggered to automatically update kit configuration and services.
*******
OPTIONS
*******
\ **-h|--help**\
Display usage message.
\ **-v|--version**\
Command Version
\ **noderange**\
The nodes to be removed.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
To remove nodes compute-000 and compute-001, use the following command:
nodepurge compute-000,compute-001
********
SEE ALSO
********
nodeimport(1)|nodeimport.1, nodechprofile(1)|nodechprofile.1, noderefresh(1)|noderefresh.1, noderange(3)|noderange.3

View File

@ -0,0 +1,77 @@
#############
noderefresh.1
#############
.. highlight:: perl
****
NAME
****
\ **noderefresh**\ - Update nodes configurations by running associated kit plugins.
********
SYNOPSIS
********
\ **noderefresh**\ [-h| --help | -v | --version]
\ **noderefresh**\ <noderange>
***********
DESCRIPTION
***********
The \ **noderefresh**\  command will update a node's settings; it runs all associated kit plug-ins to update kit-related configurations and services.
*******
OPTIONS
*******
\ **-h|--help**\
Display usage message.
\ **-v|--version**\
Command Version.
\ **noderange**\
The nodes to be updated.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
noderefresh compute-000,compute-001
********
SEE ALSO
********
nodeimport(1)|nodeimport.1, nodechprofile(1)|nodechprofile.1, nodepurge(1)|nodepurge.1, noderange(3)|noderange.3

View File

@ -0,0 +1,81 @@
########
noderm.1
########
.. highlight:: perl
****
NAME
****
\ **noderm**\  - Removes the nodes in the noderange from all database tables.
********
SYNOPSIS
********
\ *noderm [-h| --help]*\
\ *noderm noderange*\
***********
DESCRIPTION
***********
The noderm command removes the nodes in the input node range.
*******
OPTIONS
*******
\ **-h**\ Display usage message.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
1. To remove the nodes in noderange node1-node4, enter:
\ *noderm node1-node4*\
*****
FILES
*****
/opt/xcat/bin/noderm
********
SEE ALSO
********
nodels(1)|nodels.1, nodeadd(8)|nodeadd.8, noderange(3)|noderange.3

View File

@ -0,0 +1,175 @@
##########
nodestat.1
##########
.. highlight:: perl
****
Name
****
\ **nodestat**\ - display the running status of each node in a noderange
****************
\ **Synopsis**\
****************
\ **nodestat**\ [\ *noderange*\ ] [\ *-m*\ |\ *--usemon*\ ] [\ *-p*\ |\ *--powerstat*\ ] [\ *-f*\ ] [\ *-u*\ |\ *--updatedb*\ ]
\ **nodestat**\ [\ *-h*\ |\ *--help*\ |\ *-v*\ |\ *--version*\ ]
*******************
\ **Description**\
*******************
\ **nodestat**\  displays, and optionally updates in the database, the running status of a single node or a range of nodes or groups. See noderange(3)|noderange.3.
By default, it works as follows:
1. gets the sshd,pbs_mom,xend port status;
2. if none of them are open, it gets the fping status;
3. for pingable nodes that are in the middle of deployment, it gets the deployment status;
4. for non-pingable nodes, it shows 'noping'.
When -m is specified and there are settings in the monsetting table, it displays the status of the applications specified in the monsetting table. When -p is specified it shows the power status for the nodes that are not pingable. When -u is specified it saves the status info into the xCAT database. A node's pingable status and deployment status are saved in the nodelist.status column. A node's application status is saved in the nodelist.appstatus column.
To specify settings in the \ **monsetting**\  table, use 'xcatmon' as the name, 'apps' as the key, and a comma separated list of application names as the value. For each application, you can specify the port number that can be queried on the nodes to get the running status, or you can specify a command that can be called to get the node status. The command can be a command that is run locally on the management node or the service node for a hierarchical cluster, or a command that is run remotely on the nodes.
The following is an example of the settings in the \ **monsetting**\ table:
.. code-block:: perl
name key value
xcatmon apps ssh,ll,gpfs,someapp
xcatmon gpfs cmd=/tmp/mycmd,group=compute,group=service
xcatmon ll port=9616,group=compute
xcatmon someapp dcmd=/tmp/somecmd
Keywords to use:
.. code-block:: perl
apps -- a list of comma separated application names whose status will be queried. For how to get the status of each app, look for the app name in the key field in a different row.
port -- the application daemon port number, if not specified, use internal list, then /etc/services.
group -- the name of a node group that needs to get the application status from. If not specified, assume all the nodes in the nodelist table. To specify more than one groups, use group=a,group=b format.
cmd -- the command that will be run locally on mn or sn.
lcmd -- the command that will be run on the mn only.
dcmd -- the command that will be run distributed on the nodes using xdsh <nodes> ....
For commands specified by 'cmd' and 'lcmd', the input is a list of comma separated node names; the output must be in the following format:
.. code-block:: perl
node1:string1
node2:string2
...
For the command specified by 'dcmd', no input is needed; the output can be a string.
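The following is a minimal sketch of such a 'cmd' helper, assuming the comma separated node list is passed as the first argument and the script is saved as /tmp/mycmd (the path and status strings are illustrative only):
.. code-block:: perl
 #!/bin/sh
 # Hypothetical helper for the monsetting 'cmd=' keyword.
 # Input: a comma separated node list as the first argument.
 # Output: one "node:status" line per node, as nodestat expects.
 for node in $(echo "$1" | tr ',' ' '); do
     if ping -c 1 "$node" >/dev/null 2>&1; then
         echo "$node:ok"
     else
         echo "$node:not ok"
     fi
 done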
***************
\ **Options**\
***************
\ **-f**\
Uses fping instead of nmap even if nmap is available. If you seem to be having a problem with false negatives, fping can be more forgiving, but slower.
\ **-m**\ |\ **--usemon**\
Uses the settings from the \ **monsetting**\  table to determine the list of applications whose status should be queried.
\ **-p**\ |\ **--powerstat**\
Gets the power status for the nodes that are 'noping'.
\ **-u**\ |\ **--updatedb**\
Updates the status and appstatus columns of the nodelist table with the returned running status from the given nodes.
\ **-v**\ |\ **--version**\
Print version.
\ **-h**\ |\ **--help**\
Print help.
****************
\ **Examples**\
****************
1. nodestat compute
.. code-block:: perl
node1 sshd
node2 sshd
node3 ping
node4 pbs
node5 noping
2. nodestat compute -p
.. code-block:: perl
node1 sshd
node2 sshd
node3 ping
node4 pbs
node5 noping(Shutting down)
3. nodestat compute -u
node1 sshd
node2 sshd
node3 ping
node4 netboot
node5 noping
4. nodestat compute -m
node1 ping,sshd,ll,gpfs=ok
node2 ping,sshd,ll,gpfs=not ok,someapp=something is wrong
node3 netboot
node4 noping
************************
\ **See**\ \ **Also**\
************************
noderange(3)|noderange.3, nodels(1)|nodels.1, nodeset(8)|nodeset.8

View File

@ -0,0 +1,110 @@
###########
packimage.1
###########
.. highlight:: perl
****
NAME
****
\ **packimage**\ - Packs the stateless image from the chroot file system.
********
SYNOPSIS
********
\ *packimage [-h| --help]*\
\ *packimage [-v| --version]*\
\ *packimage imagename*\
***********
DESCRIPTION
***********
Packs the stateless image from the chroot file system into a file system to be
sent to the node for a diskless install.
The install directory is set up by using the "installdir" attribute set in the site table.
The nodetype table "profile" attribute for the node should reflect the profile of the install image.
This command will get all the necessary os image definition files from the \ *osimage*\ and \ *linuximage*\ tables.
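Before packing, you can review the osimage definition that will be used; an illustrative check (the image name is an example):
.. code-block:: perl
 lsdef -t osimage -o rhels7.1-x86_64-netboot-compute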
**********
Parameters
**********
\ *imagename*\ specifies the name of a os image definition to be used. The specification for the image is stored in the \ *osimage*\ table and \ *linuximage*\ table.
*******
OPTIONS
*******
\ **-h**\ Display usage message.
\ **-v**\ Command Version.
\ **-o**\ Operating system (fedora8, rhel5, sles10,etc)
\ **-p**\ Profile (compute,service)
\ **-a**\ Architecture (ppc64,x86_64,etc)
\ **-m**\ Method (default cpio)
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
1. To pack the osimage rhels7.1-x86_64-netboot-compute:
\ *packimage rhels7.1-x86_64-netboot-compute*\
*****
FILES
*****
/opt/xcat/sbin/packimage
*****
NOTES
*****
This command is part of the xCAT software product.
********
SEE ALSO
********
genimage(1)|genimage.1

View File

@ -0,0 +1,295 @@
######
pasu.1
######
.. highlight:: perl
****
NAME
****
\ **pasu**\  - run the ASU command on many nodes in parallel
********
SYNOPSIS
********
\ **pasu**\ [\ **-V**\ ] [\ **-d**\ ] [\ **-n**\ ] [\ **-l**\ \ *user*\ ] [\ **-p**\ \ *passwd*\ ] [\ **-f**\ \ *fanout*\ ] [\ **-i**\ \ *hostname-suffix*\ ] \ *noderange*\ \ *command*\
\ **pasu**\ [\ **-V**\ ] [\ **-d**\ ] [\ **-n**\ ] [\ **-l**\ \ *user*\ ] [\ **-p**\ \ *passwd*\ ] [\ **-f**\ \ *fanout*\ ] [\ **-i**\ \ *hostname-suffix*\ ] \ **-b**\ \ *batchfile*\ \ *noderange*\
\ **pasu**\ [\ **-h**\ | \ **--help**\ ]
***********
DESCRIPTION
***********
The \ **pasu**\ command runs the ASU command in out-of-band mode in parallel to multiple nodes. Out-of-band mode means
that ASU connects from the xCAT management node to the IMM (BMC) of each node to set or query the ASU settings. To
see all of the ASU settings available on the node, use the "show all" command. To query or set multiple values,
use the \ **-b**\ (batch) option. To group similar output from multiple nodes, use xcoll(1)|xcoll.1.
Before running \ **pasu**\ , you must install the ASU RPM from IBM. You can download it from the IBM Fix Central site.
You also must configure the IMMs properly according to xCAT documentation. Run "\ **rpower**\ \ *noderange*\ \ **stat**\ "
to confirm that the IMMs are configured properly.
*******
OPTIONS
*******
\ **-n|--nonodecheck**\
Do not send the noderange to xcatd to expand it into a list of nodes. Use the noderange exactly as it is specified
to pasu. In this case, the noderange must be a simple list of comma-separated hostnames of the IMMs.
\ **-l|--loginname**\ \ *username*\
The username to use to connect to the IMMs. If not specified, the row in the xCAT \ **passwd**\ table with key "ipmi"
will be used to get the username.
\ **-p|--passwd**\ \ *passwd*\
The password to use to connect to the IMMs. If not specified, the row in the xCAT passwd table with key "ipmi"
will be used to get the password.
\ **-f|--fanout**\
How many processes to run in parallel simultaneously. The default is 64. You can also set the XCATPSHFANOUT
environment variable.
\ **-b|--batch**\ \ *batchfile*\
A simple text file that contains multiple ASU commands, each on its own line.
\ **-d|--donotfilter**\
By default, pasu filters out (i.e. does not display) the standard initial output from ASU:
.. code-block:: perl
IBM Advanced Settings Utility version 9.30.79N
Licensed Materials - Property of IBM
(C) Copyright IBM Corp. 2007-2012 All Rights Reserved
Connected to IMM at IP address node2-imm
If you want this output to be displayed, use this flag.
\ **-i|--interface**\ \ *hostname-suffix*\
The hostname suffix to be appended to the node names.
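For example, if the IMM of each node resolves by the name <nodename>-imm, you could point \ **pasu**\ at those interfaces (a minimal sketch; the suffix value is only illustrative):
.. code-block:: perl
pasu -i -imm compute show DevicesandIOPorts.Com1ActiveAfterBoot | xcoll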
\ **-V|--verbose**\
Display verbose messages.
\ **-h|--help**\
Display usage message.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
1.
To display the Com1ActiveAfterBoot setting on 2 nodes:
.. code-block:: perl
pasu node1,node2 show DevicesandIOPorts.Com1ActiveAfterBoot
Output is similar to:
.. code-block:: perl
node1: DevicesandIOPorts.Com1ActiveAfterBoot=Enable
node2: DevicesandIOPorts.Com1ActiveAfterBoot=Enable
2.
To display the Com1ActiveAfterBoot setting on all compute nodes:
.. code-block:: perl
pasu compute show DevicesandIOPorts.Com1ActiveAfterBoot | xcoll
Output is similar to:
.. code-block:: perl
====================================
compute
====================================
DevicesandIOPorts.Com1ActiveAfterBoot=Enable
3.
To set several settings on all compute nodes, create a batch file
called (for example) asu-settings with contents:
.. code-block:: perl
set DevicesandIOPorts.Com1ActiveAfterBoot Enable
set DevicesandIOPorts.SerialPortSharing Enable
set DevicesandIOPorts.SerialPortAccessMode Dedicated
set DevicesandIOPorts.RemoteConsole Enable
Then run:
.. code-block:: perl
pasu -b asu-settings compute | xcoll
Output is similar to:
.. code-block:: perl
====================================
compute
====================================
Batch mode start.
[set DevicesandIOPorts.Com1ActiveAfterBoot Enable]
DevicesandIOPorts.Com1ActiveAfterBoot=Enable
[set DevicesandIOPorts.SerialPortSharing Enable]
DevicesandIOPorts.SerialPortSharing=Enable
[set DevicesandIOPorts.SerialPortAccessMode Dedicated]
DevicesandIOPorts.SerialPortAccessMode=Dedicated
[set DevicesandIOPorts.RemoteConsole Enable]
DevicesandIOPorts.RemoteConsole=Enable
Beginning intermediate batch update.
Waiting for command completion status.
Command completed successfully.
Completed intermediate batch update.
Batch mode completed successfully.
4.
To confirm that all the settings were made on all compute nodes, create a batch file
called (for example) asu-show with contents:
.. code-block:: perl
show DevicesandIOPorts.Com1ActiveAfterBoot
show DevicesandIOPorts.SerialPortSharing
show DevicesandIOPorts.SerialPortAccessMode
show DevicesandIOPorts.RemoteConsole
Then run:
.. code-block:: perl
pasu -b asu-show compute | xcoll
Output is similar to:
.. code-block:: perl
====================================
compute
====================================
Batch mode start.
[show DevicesandIOPorts.Com1ActiveAfterBoot]
DevicesandIOPorts.Com1ActiveAfterBoot=Enable
[show DevicesandIOPorts.SerialPortSharing]
DevicesandIOPorts.SerialPortSharing=Enable
[show DevicesandIOPorts.SerialPortAccessMode]
DevicesandIOPorts.SerialPortAccessMode=Dedicated
[show DevicesandIOPorts.RemoteConsole]
DevicesandIOPorts.RemoteConsole=Enable
Batch mode completed successfully.
*****
FILES
*****
/opt/xcat/bin/pasu
********
SEE ALSO
********
noderange(3)|noderange.3, rpower(1)|rpower.1, xcoll(1)|xcoll.1

View File

@ -0,0 +1,50 @@
#######
pcons.1
#######
.. highlight:: perl
********
SYNOPSIS
********
\ **pcons**\ \ *noderange*\ \ *command*\
\ **pcons**\
[\ **-h**\ |\ **--help**\ ]
\ **pcons**\
[\ **-v**\ |\ **--version**\ ]
***********
DESCRIPTION
***********
Runs a command on the nodes in the noderange using the console.
********
EXAMPLES
********
.. code-block:: perl
pcons 1,3 stat
pcons all,-129-256 stat
********
SEE ALSO
********
psh(1)|psh.1

View File

@ -0,0 +1,127 @@
############
pgsqlsetup.1
############
.. highlight:: perl
****
NAME
****
\ **pgsqlsetup**\ - Sets up the PostgreSQL database for xCAT to use.
********
SYNOPSIS
********
\ **pgsqlsetup**\ {\ **-h**\ |\ **--help**\ }
\ **pgsqlsetup**\ {\ **-v**\ |\ **--version**\ }
\ **pgsqlsetup**\ {\ **-i**\ |\ **--init**\ } [\ **-N**\ |\ **--nostart**\ ] [\ **-P**\ |\ **--PCM**\ ] [\ **-o**\ |\ **--setupODBC**\ ] [\ **-V**\ |\ **--verbose**\ ]
\ **pgsqlsetup**\ {\ **-o**\ |\ **--setupODBC**\ } [\ **-V**\ |\ **--verbose**\ ]
***********
DESCRIPTION
***********
\ **pgsqlsetup**\ - Sets up the PostgreSQL database for xCAT to use. The pgsqlsetup script is run on the Management Node as root after the PostgreSQL code has been installed. The xcatd daemon will be stopped during migration. No xCAT commands should be run during the init process, because we will be migrating the xCAT database to PostgreSQL and restarting the xcatd daemon as well as the PostgreSQL daemon. For full information on all the steps that will be done, reference Setting_Up_PostgreSQL_as_the_xCAT_DB.
One password must be supplied for the setup: it is used for both the xcatadm unix id and the xcatadm database id. The password will be prompted for interactively, or you can set the XCATPGPW environment variable to the password so that there will be no prompt.
*******
OPTIONS
*******
\ **-h|--help**\
Displays the usage message.
\ **-v|--version**\
Displays the release version of the code.
\ **-V|--verbose**\
Displays verbose messages.
\ **-i|--init**\
The init option is used to setup an installed PostgreSQL database so that xCAT can use the database. This involves creating the xcat database, the xcat admin id, allowing access to the xcatdb database by the Management Node. It customizes the postgresql.conf configuration file, adds the management server to the pg_hba.conf and starts the PostgreSQL server. It also backs up the current xCAT database and restores it into the newly setup xcatdb PostgreSQL database. It creates the /etc/xcat/cfgloc file to point the xcatd daemon to the PostgreSQL database and restarts the xcatd daemon using the database.
On AIX, it additionally sets up the xcatadm unix id and the postgres id and group. For AIX, you should be using the PostgreSQL rpms available from the xCAT website. For Linux, you should use the PostgreSQL rpms shipped with the OS. You can also choose the -o option, to run the ODBC setup after the init.
To add additional nodes to access the PostgreSQL server, setup on the Management Node, edit the pg_hba.conf file.
For more documentation see: Setting_Up_PostgreSQL_as_the_xCAT_DB
\ **-N|--nostart**\
This option with the -i flag will create the database, but will not backup and restore xCAT tables into the database. It will create the cfgloc file such that the next start of xcatd will try and contact the database. This can be used to setup the xCAT PostgreSQL database during or before install.
\ **-P|--PCM**\
This option sets up PostgreSQL database to be used with xCAT running with PCM.
\ **-o|--odbc**\
This option sets up the ODBC configuration: the /etc/../odbcinst.ini and /etc/../odbc.ini files and the .odbc.ini file in root's home directory will be created and initialized to run off the xcatdb PostgreSQL database.
*********************
ENVIRONMENT VARIABLES
*********************
\ **XCATPGPW**\
The password to be used to setup the xCAT admin id for the database.
********
EXAMPLES
********
\*
To setup PostgreSQL for xCAT to run on the PostgreSQL xcatdb database :
\ **pgsqlsetup**\ \ *-i*\
\*
To setup the ODBC for PostgreSQL xcatdb database access :
\ **pgsqlsetup**\ \ *-o*\
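\*
To run the setup without an interactive password prompt, set the XCATPGPW environment variable first (a minimal sketch; the password value is only an example):
.. code-block:: perl
export XCATPGPW=mypassword
pgsqlsetup -i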

View File

@ -0,0 +1,112 @@
#######
pping.1
#######
.. highlight:: perl
********
SYNOPSIS
********
\ **pping**\ [\ **-i**\ |\ **--interface**\ \ *interfaces*\ ] [\ **-f**\ |\ **--use_fping**\ ] \ *noderange*\
\ **pping**\ [\ **-h**\ |\ **--help**\ ]
\ **pping**\ {\ **-v**\ |\ **--version**\ }
***********
DESCRIPTION
***********
\ **pping**\ is a utility used to ping a list of nodes in parallel.
\ **pping**\ will return an unsorted list of nodes with a ping or noping status.
\ **pping**\ front-ends nmap or fping if available.
This command does not support the xcatd client/server communication. It must be run on the management node.
*******
OPTIONS
*******
\ **-i**\ |\ **--interface**\ \ *interfaces*\
A comma separated list of network interface names that should be pinged instead of the interface represented by the nodename/hostname.
The following name resolution convention is assumed: an interface is reachable by the hostname <nodename>-<interface>. For example,
the ib2 interface on node3 has a hostname of node3-ib2.
If more than one interface is specified, each interface will be combined with the nodenames as described above and will be pinged in turn.
\ **-f**\ |\ **--use_fping**\
Use fping instead of nmap
\ **-h**\ |\ **--help**\
Show usage information.
\ **-v**\ |\ **--version**\
Display the installed version of xCAT.
********
EXAMPLES
********
1.
pping all
.. code-block:: perl
node1: ping
node2: ping
node3: noping
2.
pping all -i ib0,ib1
.. code-block:: perl
node1-ib0: ping
node2-ib0: ping
node3-ib0: noping
node1-ib1: ping
node2-ib1: ping
node3-ib1: noping
********
SEE ALSO
********
psh(1)|psh.1, noderange(3)|noderange.3

View File

@ -0,0 +1,135 @@
########
ppping.1
########
.. highlight:: perl
********
SYNOPSIS
********
\ **ppping**\ [\ **-i**\ |\ **--interface**\ \ *interfaces*\ ] [\ **-d**\ |\ **--debug**\ ] [\ **-V**\ |\ **--verbose**\ ] [\ **-q**\ |\ **--quiet**\ ] [\ **-s**\ |\ **--serial**\ ] \ *noderange*\
\ **ppping**\ [\ **-h**\ |\ **--help**\ ]
\ **ppping**\ {\ **-v**\ |\ **--version**\ }
***********
DESCRIPTION
***********
\ **ppping**\ is a utility used to test the connectivity between nodes in the noderange using ping.
By default, \ **ppping**\ will return an unsorted list of the node pairs that are not able to ping each other, or a message that all nodes are pingable.
More or less output can be controlled by the -V and -q options.
\ **ppping**\ front-ends \ **pping**\ and \ **xdsh**\ .
*******
OPTIONS
*******
\ **-s**\
Ping serially instead of in parallel.
\ **-i**\ |\ **--interface**\ \ *interfaces*\
A comma separated list of network interface names that should be pinged instead of the interface represented by the nodename/hostname.
The following name resolution convention is assumed: an interface is reachable by the hostname <nodename>-<interface>. For example,
the ib2 interface on node3 has a hostname of node3-ib2.
If more than one interface is specified, each interface will be combined with the nodenames as described above and will be pinged in turn.
\ **-V**\ |\ **--verbose**\
Display verbose output. The result of every ping attempt from every node will be displayed. Without this option, just a summary
of the successful pings is displayed, along with all of the unsuccessful pings.
\ **-q**\ |\ **--quiet**\
Display minimum output: just the unsuccessful pings. This option has the effect that if all pings are successful, nothing is displayed.
But it also has the performance benefit that each node does not have to send successful ping info back to the management node.
\ **-d**\ |\ **--debug**\
Print debug information.
\ **-h**\ |\ **--help**\
Show usage information.
\ **-v**\ |\ **--version**\
Display the installed version of xCAT.
********
EXAMPLES
********
1.
ppping all -q
.. code-block:: perl
blade7: node2: noping
blade8: node2: noping
blade9: node2: noping
devmaster: node2: noping
node2: noping
2.
ppping node1,node2 -i ib0,ib1,ib2,ib3
.. code-block:: perl
node1: pinged all nodes successfully on interface ib0
node1: pinged all nodes successfully on interface ib1
node1: pinged all nodes successfully on interface ib2
node1: pinged all nodes successfully on interface ib3
node2: pinged all nodes successfully on interface ib0
node2: pinged all nodes successfully on interface ib1
node2: pinged all nodes successfully on interface ib2
node2: pinged all nodes successfully on interface ib3
********
SEE ALSO
********
psh(1)|psh.1, pping(1)|pping.1

View File

@ -0,0 +1,127 @@
########
prsync.1
########
.. highlight:: perl
****
Name
****
prsync - parallel rsync
****************
\ **Synopsis**\
****************
\ **prsync**\ \ *filename*\ [\ *filename*\ \ *...*\ ] \ *noderange:destinationdirectory*\
\ **prsync**\ [\ *-o rsync options*\ ] [\ **-f**\ \ *fanout*\ ] [\ *filename*\ \ *filename*\ \ *...*\ ] [\ *directory*\ \ *directory*\ \ *...*\ ]
\ *noderange:destinationdirectory*\
\ **prsync**\ {\ **-h**\ |\ **--help**\ |\ **-v**\ |\ **--version**\ }
*******************
\ **Description**\
*******************
\ **prsync**\ is a front-end to rsync for a single or range of nodes and/or
groups in parallel.
Note: this command does not support the xcatd client/server communication and therefore must be run on the management node. It does not support hierarchy; use xdcp -F to run rsync from the
management node to the compute nodes via a service node.
\ **prsync**\ is NOT multicast, but is parallel unicasts.
***************
\ **Options**\
***************
\ **rsyncopts**\
rsync options. See \ **rsync(1)**\ .
\ **-f**\ \ *fanout*\
Specifies a fanout value for the maximum number of concurrently executing remote shell processes.
\ **filename**\
A space delimited list of files to rsync.
\ **directory**\
A space delimited list of directories to rsync.
\ **noderange:destination**\
A noderange(3)|noderange.3 and destination directory. The : is required.
\ **-h**\ |\ **--help**\
Print help.
\ **-v**\ |\ **--version**\
Print version.
\ **XCATPSHFANOUT**\
Specifies the fanout value. This variable is overridden by
the \ **-f**\ flag. Default is 64.
****************
\ **Examples**\
****************
\*
\ **cd**\ \ */install;*\ \ **prsync**\ \ **-o "crz"**\ \ *post*\ \ *stage:/install*\
\*
\ **prsync**\ \ *passwd*\ \ *group*\ \ *rack01:/etc*\
************************
\ **See**\ \ **Also**\
************************
noderange(3)|noderange.3, pscp(1)|pscp.1, pping(1)|pping.1, psh(1)|psh.1

View File

@ -0,0 +1,116 @@
######
pscp.1
######
.. highlight:: perl
****
Name
****
\ **pscp**\ - parallel remote copy
****************
\ **Synopsis**\
****************
\ **pscp**\ [-i \ *suffix*\ ] [\ *scp options*\ \ *...*\ ] [\ **-f**\ \ *fanout*\ ] \ *filename*\ [\ *filename*\ \ *...*\ ] \ *noderange:destinationdirectory*\
\ **pscp**\ {\ **-h**\ |\ **--help**\ |\ **-v**\ |\ **--version**\ }
*******************
\ **Description**\
*******************
\ **pscp**\ is a utility used to copy a single or multiple set of files and/or
directories to a single or range of nodes and/or groups in parallel.
\ **pscp**\ is a front-end to the remote copy \ **scp**\ .
Note: this command does not support the xcatd client/server communication and therefore must be run on the management node. It does not support hierarchy; use xdcp to run the remote copy command from the
management node to the compute nodes via a service node.
\ **pscp**\ is NOT multicast, but is parallel unicasts.
***************
\ **Options**\
***************
\ **-f**\ \ *fanout*\
Specifies a fanout value for the maximum number of concurrently executing remote shell processes.
\ **-i**\ \ *suffix*\
Interfaces to be used.
\ **scp options**\
See \ **scp(1)**\
\ **filename**\
A space delimited list of files to copy. If \ **-r**\ is passed as an scp option, directories may be specified as well.
\ **noderange:destination**\
A noderange(3)|noderange.3 and destination directory. The : is required.
\ **-h**\ |\ **--help**\
Print help.
\ **-v**\ |\ **--version**\
Print version.
\ **XCATPSHFANOUT**\
Specifies the fanout value. This variable is overridden by
the \ **-f**\ flag. Default is 64.
****************
\ **Examples**\
****************
\ **pscp**\ \ **-r**\ \ */usr/local*\ \ *node1,node3:/usr/local*\
\ **pscp**\ \ *passwd*\ \ *group*\ \ *rack01:/etc*\
************************
\ **See**\ \ **Also**\
************************
noderange(3)|noderange.3, pping(1)|pping.1, prsync(1)|prsync.1, psh(1)|psh.1

View File

@ -0,0 +1,162 @@
#####
psh.1
#####
.. highlight:: perl
****
Name
****
psh - parallel remote shell
****************
\ **Synopsis**\
****************
\ **psh**\ [\ **-i**\ \ *interface*\ ] [\ **-f**\ \ *fanout*\ ] [\ **-l**\ \ *user*\ ] \ *noderange*\ \ *command*\
\ **psh**\ {\ **-h**\ |\ **--help**\ |\ **-v**\ |\ **--version**\ }
*******************
\ **Description**\
*******************
\ **psh**\ is a utility used to run a command across a list of nodes in parallel.
\ **ssh**\ must be set up to allow no prompting for \ **psh**\ to work.
Note:
This command does not run through xcatd like most xCAT commands do.
This means you must either run it on the management node, or have a network connection between
your machine and the nodes. It does not support hierarchy; use xdsh to run remote commands from the
management node to the compute nodes via a service node.
\ **psh**\ arguments need to precede noderange, otherwise, you will get unexpected errors.
***************
\ **Options**\
***************
\ **-i**\ \ *interface*\
The NIC on the node that psh should communicate with. For example, if \ *interface*\ is \ **eth1**\ ,
then psh will concatenate \ **-eth1**\ to the end of every node name before ssh'ing to it. This
assumes those host names have been set up to resolve to the IP address of each of the eth1 NICs.
\ **-f**\ \ *fanout*\
Specifies a fanout value for the maximum number of concurrently executing remote shell processes.
\ **-l**\ \ *user*\
Log into the nodes as the specified username. The default is to use the same username as you
are running the psh command as.
\ **-n|--nonodecheck**\
Do not send the noderange to xcatd to expand it into a list of nodes. Instead, use the noderange exactly as it is specified.
In this case, the noderange must be a simple list of comma-separated hostnames of the nodes.
This allows you to run \ **psh**\ even when xcatd is not running.
\ **noderange**\
See noderange(3)|noderange.3.
\ **command**\
Command to be run in parallel. If no command is given then \ **psh**\
enters interactive mode. In interactive mode a ">" prompt is
displayed. Any command entered is executed in parallel to the
nodes in the noderange. Use "exit" or "Ctrl-D" to end the interactive session.
\ **-h**\ |\ **--help**\
Print help.
*************************************
\ **Environment**\ \ **Variables**\
*************************************
\ **XCATPSHFANOUT**\
Specifies the fanout value. This variable is overridden by
the \ **-f**\ flag. Default is 64.
****************
\ **Examples**\
****************
\*
Run uptime on 3 nodes:
\ **psh**\ \ *node4-node6*\ \ *uptime*\
node4: Sun Aug 5 17:42:06 MDT 2001
node5: Sun Aug 5 17:42:06 MDT 2001
node6: Sun Aug 5 17:42:06 MDT 2001
\*
Run a command on some BladeCenter management modules:
\ **psh**\ \ *amm1-amm5*\ \ *'info -T mm[1]'*\
\*
Remove the tmp files on the nodes in the 1st frame:
\ **psh**\ \ *rack01*\ \ *'rm -f /tmp/\\*'*\
Notice the use of '' to defer shell expansion so that it happens on the nodes rather than locally. This is not necessary
in interactive mode.
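\*
Run commands interactively (a minimal sketch; the node group name is illustrative). With no command given, \ **psh**\ shows a ">" prompt and runs each typed line in parallel on the nodes until "exit" is entered:
.. code-block:: perl
psh compute
> uptime
> exit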
************************
\ **See**\ \ **Also**\
************************
noderange(3)|noderange.3, pscp(1)|pscp.1, pping(1)|pping.1, prsync(1)|prsync.1

View File

@ -0,0 +1,116 @@
############
pushinitrd.1
############
.. highlight:: perl
****
NAME
****
\ **pushinitrd**\ - copies the initrd, kernel, params, and static IP info to nodes so they can be net installed, even across vlans.
********
SYNOPSIS
********
\ **pushinitrd**\ [\ **-v**\ |\ **--verbose**\ ] [\ **-w**\ \ *waittime*\ ] [\ *noderange*\ ]
\ **pushinitrd**\ [\ **-?**\ | \ **-h**\ | \ **--help**\ ]
***********
DESCRIPTION
***********
The \ **pushinitrd**\ command copies the initrd, kernel, params, and static IP info to nodes, so they can be net installed
even across vlans (w/o setting up pxe/dhcp broadcast relay). This assumes a working
OS is on the nodes. Before running this command, you must run nodeset for these nodes.
All of the nodes given to one invocation of \ **pushinitrd**\ must be using the same osimage.
Before using this command, it will be most convenient if you exchange the ssh keys using:
.. code-block:: perl
xdsh <noderange> -K
*******
OPTIONS
*******
\ **-w**\ \ *waittime*\
The number of seconds the initrd should wait before trying to communicate over the network.
The default is 75. This translates into the netwait kernel parameter and is usually needed
in a SoftLayer environment because it can take a while for a NIC to be active after changing state.
\ **-?|-h|--help**\
Display usage message.
\ **-v|--version**\
Command Version.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
1.
Configure nodes for net installing in a SoftLayer environment:
.. code-block:: perl
pushinitrd <noderange>
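2.
Same as above, but have the initrd wait 90 seconds before trying to use the network (a sketch; the waittime value is only illustrative):
.. code-block:: perl
pushinitrd -w 90 <noderange>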
*****
FILES
*****
/opt/xcat/bin/pushinitrd
********
SEE ALSO
********
getslnodes(1)|getslnodes.1

View File

@ -0,0 +1,50 @@
#########
rbeacon.1
#########
.. highlight:: perl
********
SYNOPSIS
********
\ **rbeacon**\ \ *noderange*\ {\ **on**\ |\ **blink**\ |\ **off**\ |\ **stat**\ }
\ **rbeacon**\ [\ **-h**\ |\ **--help**\ ]
\ **rbeacon**\ {\ **-v**\ |\ **--version**\ }
***********
DESCRIPTION
***********
\ **rbeacon**\ turns the beacon (a light on the front of the physical server) on, off, or blinking, or gives the beacon status of a node or noderange.
********
EXAMPLES
********
.. code-block:: perl
rbeacon 1,3 off
rbeacon 14-56,70-203 on
rbeacon 1,3,14-56,70-203 blink
rbeacon all,-129-256 stat
********
SEE ALSO
********
noderange(3)|noderange.3, rpower(1)|rpower.1

View File

@ -0,0 +1,190 @@
##########
rbootseq.1
##########
.. highlight:: perl
********
SYNOPSIS
********
\ **rbootseq**\ [\ **-h**\ |\ **--help**\ |\ **-v**\ |\ **--version**\ ]
Blade specific:
===============
\ **rbootseq**\ \ *noderange*\ {\ **hd0**\ |\ **hd1**\ |\ **hd2**\ |\ **hd3**\ |\ **net**\ |\ **iscsi**\ |\ **iscsicrit**\ |\ **cdrom**\ |\ **usbflash**\ |\ **floppy**\ |\ **none**\ |\ **list**\ |\ **stat**\ }\ **,**\ \ *...*\
HP Blade specific:
==================
\ **rbootseq**\ \ *noderange*\ {\ **hd**\ |\ **net1**\ |\ **net2**\ |\ **net3**\ |\ **net4**\ |\ **cdrom**\ |\ **usbflash**\ |\ **floppy**\ |\ **none**\ }\ **,**\ \ *...*\
PPC (using Direct FSP Management) specific:
===========================================
\ **rbootseq**\ \ *noderange*\ \ **[hfi|net]**\
***********
DESCRIPTION
***********
For Blade specific:
\ **rbootseq**\ sets the boot sequence (the order in which boot devices should be tried) for the specified blades.
Up to four different medium/devices can be listed, separated by commas. The boot sequence will remain
in effect for these blades until set differently.
For PPC (using Direct FSP Management) specific:
\ **rbootseq**\ sets the ethernet (net) or hfi device as the first boot device for the specified PPC LPARs.
The \ **rbootseq**\ command requires that the ethernet or hfi mac address is stored in the mac table, and that the network information is correct in the networks table.
*******
OPTIONS
*******
\ **hd0**\ |\ **harddisk0**\ |\ **hd**\ |\ **harddisk**\
The first hard disk.
\ **hd1**\ |\ **harddisk1**\
The second hard disk.
\ **hd2**\ |\ **harddisk2**\
The third hard disk.
\ **hd3**\ |\ **harddisk3**\
The fourth hard disk.
\ **n**\ |\ **net**\ |\ **network**\
Boot over the ethernet network, using a PXE or BOOTP broadcast.
\ **n**\ |\ **net**\ |\ **network**\ |\ **net1**\ |\ **nic1**\ (HP Blade Only)
Boot over the first ethernet network, using a PXE or BOOTP broadcast.
\ **net2**\ |\ **nic2**\ (HP Blade Only)
Boot over the second ethernet network, using a PXE or BOOTP broadcast.
\ **net3**\ |\ **nic3**\ (HP Blade Only)
Boot over the third ethernet network, using a PXE or BOOTP broadcast.
\ **net4**\ |\ **nic4**\ (HP Blade Only)
Boot over the fourth ethernet network, using a PXE or BOOTP broadcast.
\ **hfi**\
Boot p775 nodes over the HFI network, using BOOTP broadcast.
\ **iscsi**\
Boot to an iSCSI disk over the network.
\ **iscsicrit**\
??
\ **cd**\ |\ **cdrom**\
The CD or DVD drive.
\ **usbflash**\ |\ **usb**\ |\ **flash**\
A USB flash drive.
\ **floppy**\
The floppy drive.
\ **none**\
If the boot sequence gets to this entry, do not boot. Cannot be specified first, or before any real boot devices.
\ **list**\ |\ **stat**\
Display the current boot sequence.
********
EXAMPLES
********
1.
Set blades 14-56 and 70-203 to try to boot first from the CD drive, then the floppy drive, then
the network, and finally from the 1st hard disk:
.. code-block:: perl
rbootseq blade[14-56],blade[70-203] c,f,n,hd0
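2.
For a PPC LPAR managed with Direct FSP Management, set the network device as the first boot device (a minimal sketch; the node name is illustrative):
.. code-block:: perl
rbootseq lpar01 net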
********
SEE ALSO
********
rsetboot(1)|rsetboot.1

View File

@ -0,0 +1,100 @@
#######
rcons.1
#######
.. highlight:: perl
****
Name
****
\ **rcons**\ - remotely accesses the serial console of a node
****************
\ **Synopsis**\
****************
\ **rcons**\ \ *singlenode*\ [\ *conserver-host*\ ] [\ **-f**\ ] [\ **-s**\ ]
\ **rcons**\ [\ **-h**\ |\ **--help**\ |\ **-v**\ |\ **--version**\ ]
*******************
\ **Description**\
*******************
\ **rcons**\ provides access to a single remote node serial console, using the out-of-band infrastructure for the node
(e.g. BMC, Management Module, HMC, KVM, etc.). It uses the conserver open source package to provide one read-write and
multiple read-only instances of the console, plus console logging.
If \ *conserver-host*\ is specified, the conserver daemon on that host will be contacted, instead of on the local host.
To exit the console session, enter: <ctrl><shift>e c .
***************
\ **Options**\
***************
\ **-f**\
If another console for this node is already open in read-write mode, force that console into read-only (spy) mode, and
open this console in read-write mode. If -f is not specified, this console will be put in spy mode if another console
is already open in read-write mode. The -f flag can not be used with the -s flag.
\ **-s**\
Open the console in read-only (spy) mode, in this mode all the escape sequences work, but all other keyboard input is
discarded. The -s flag can not be used with the -f flag.
\ **-h**\ |\ **--help**\
Print help.
\ **-v**\ |\ **--version**\
Print version.
*************
\ **Files**\
*************
\ **nodehm**\ table -
xCAT node hardware management table. See nodehm(5)|nodehm.5 for
further details. This is used to determine the console access
method.
****************
\ **Examples**\
****************
\ **rcons**\ \ *node5*\
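To watch the same console from another session in read-only (spy) mode, a minimal sketch:
\ **rcons**\ \ *node5*\ \ **-s**\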
************************
\ **See**\ \ **Also**\
************************
wcons(1)|wcons.1

View File

@ -0,0 +1,102 @@
##########
regnotif.1
##########
.. highlight:: perl
****
NAME
****
\ **regnotif**\ - Registers a Perl module or a command that will get called when changes occur in the desired xCAT database tables.
********
SYNOPSIS
********
\ *regnotif [-h| --help]*\
\ *regnotif [-v| --version]*\
\ *regnotif filename tablename[,tablename]... [-o|--operation actions]*\
***********
DESCRIPTION
***********
This command is used to register a Perl module or a command to the xCAT notification table. Once registered, the module or the command will get called when changes occur in the xCAT database tables indicated by tablename. The changes can be row addition, deletion and update which are specified by actions.
**********
Parameters
**********
\ *filename*\ is the path name of the Perl module or command to be registered.
\ *tablename*\ is the name of the table that the user is interested in.
*******
OPTIONS
*******
\ **-h | -help**\ Display usage message.
\ **-v | -version**\ Command Version.
\ **-V | -verbose**\ Verbose output.
\ **-o | -operation**\ specifies the database table actions that the user is interested in. It is a comma separated list. 'a' for row addition, 'd' for row deletion and 'u' for row update.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
1. To register a Perl module that gets invoked when rows get added or deleted in the nodelist and nodehm tables, enter:
.. code-block:: perl
regnotif /opt/xcat/lib/perl/xCAT_monitoring/mycode.pm nodelist,nodehm -o a,d
2. To register a command that gets invoked when rows get updated in the switch table, enter:
regnotif /usr/bin/mycmd switch -o u
*****
FILES
*****
/opt/xcat/bin/regnotif
********
SEE ALSO
********
unregnotif(1)|unregnotif.1

File diff suppressed because it is too large

View File

@ -0,0 +1,123 @@
############
replaycons.1
############
.. highlight:: perl
****
NAME
****
\ **replaycons**\ - replay the console output for a node
********
SYNOPSIS
********
\ **replaycons**\ [\ *node*\ ] [\ *bps*\ ] [\ *tail_amount*\ ]
\ **replaycons**\ [\ **-h**\ | \ **--help**\ | \ **-v**\ | \ **--version**\ ]
***********
DESCRIPTION
***********
The \ **replaycons**\ command reads the console log stored by conserver for this node, and displays it
in a way that simulates the original output of the console. Using the \ *bps*\ value, it will throttle
the speed of the output play back. (The conserver logs are stored in /var/log/consoles.)
For now, replaycons must be run locally on the system on which the console log is stored. This is normally
the management node, but in a hierarchical cluster it will usually be the service node.
*******
OPTIONS
*******
\ *bps*\
The display rate to use to play back the console output. Default is 19200.
\ *tail_amount*\
The place in the console log file to start play back, specified as the # of lines from the end.
\ **-v|--version**\
Command Version.
\ **-h|--help**\
Display usage message.
************
RETURN VALUE
************
0
The command completed successfully.
1
An error has occurred.
********
EXAMPLES
********
1.
To replay the console for node1 at the default rate, starting 2000 lines from the end:
.. code-block:: perl
replaycons node1 19200 2000
*****
FILES
*****
/opt/xcat/bin/replaycons
********
SEE ALSO
********
rcons(1)|rcons.1

View File

@ -0,0 +1,91 @@
##############
restartxcatd.1
##############
.. highlight:: perl
****
NAME
****
\ **restartxcatd**\ - Restart the xCAT daemon (xcatd).
********
SYNOPSIS
********
\ **restartxcatd**\ [[\ **-h**\ |\ **--help**\ ] | [\ **-v**\ |\ **--version**\ ] | [\ **-r**\ |\ **--reload**\ ]] [\ **-V**\ |\ **--verbose**\ ]
***********
DESCRIPTION
***********
The \ **restartxcatd**\ command restarts the xCAT daemon (xcatd).
\ **Linux Specific**\
It will perform the xcatd \ *fast restart*\ . The xcatd \ *fast restart*\ is a specific restart which has two advantages compared to the \ *stop*\ and then \ *start*\ .
1. The interval of xcatd out of service is very short.
2. Requests that are already being processed by the old xcatd will not be stopped by force. The old xcatd hands over its sockets to the new xcatd, but waits for the in-flight requests to finish before exiting.
It does the same thing as 'service xcatd restart' on non-systemd enabled Operating Systems like rh6.x and sles11.x. But on systemd enabled Operating Systems like rh7 and sles12, 'service xcatd restart' just does the \ *stop*\ and \ *start*\ instead of the xcatd \ *fast restart*\ .
It is recommended to use the \ **restartxcatd**\ command to restart xcatd on systemd enabled systems like rh7 and sles12 instead of 'service xcatd restart' or 'systemctl restart xcatd'.
\ **AIX Specific**\
It runs 'stopsrc -s xcatd' to stop xcatd first if xcatd is active, then runs 'startsrc -s xcatd' to start xcatd.
If the xcatd subsystem was not created, \ **restartxcatd**\ will create it automatically.
*******
OPTIONS
*******
\ **-h**\ Display usage message.
\ **-v**\ Command Version.
\ **-r**\ On a Service Node, services will not be restarted.
\ **-V**\ Display the verbose messages.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
1. To restart the xCAT daemon, enter:
\ **restartxcatd**\
*****
FILES
*****
/opt/xcat/sbin/restartxcatd

View File

@ -0,0 +1,108 @@
###############
restorexCATdb.1
###############
.. highlight:: perl
****
NAME
****
\ **restorexCATdb**\ - restores the xCAT db tables .
********
SYNOPSIS
********
\ **restorexCATdb**\ [\ **-a**\ ] [\ **-V**\ ] [{\ **-p**\ |\ **--path**\ } \ *path*\ ]
\ **restorexCATdb**\ [\ **-b**\ ] [\ **-V**\ ] [{\ **-t**\ |\ **--timestamp**\ } \ *timestamp*\ ] [{\ **-p**\ |\ **--path**\ } \ *path*\ ]
\ **restorexCATdb**\ [\ **-h**\ |\ **--help**\ ] [\ **-v**\ |\ **--version**\ ]
***********
DESCRIPTION
***********
If not using binary restore (-b), the restorexCATdb command restores the xCAT database tables from the \*.csv files in the directory given by the -p flag. The site table skiptables attribute can be set to a list of tables not to restore. It will not restore isnm_perf\* tables. See man dumpxCATdb.
If using the binary restore option for DB2 or postgreSQL, the entire database is restored from the binary backup made with dumpxCATdb. The database will be restored using the database Utilities. For DB2, the timestamp of the correct DB2 backup file (-t) must be provided.
All applications accessing the DB2 database must be stopped before you can use the binary restore options. See the xCAT DB2 document for more information.
For postgreSQL, you do not have to stop the applications accessing the database, and the complete path to the backup file must be supplied with the -p flag.
*******
OPTIONS
*******
\ **-h**\ Display usage message.
\ **-v**\ Command Version.
\ **-V**\ Verbose.
\ **-a**\ All; without this flag the eventlog and auditlog will be skipped.
These tables are skipped by default because restoring them will generate new indexes.
\ **-b**\ Restore from the binary image.
\ **-p**\ Path to the directory containing the database restore files. If restoring from the binary image (-b) and using postgreSQL, then this is the complete path to the restore file that was created with dumpxCATdb -b.
\ **-t**\ Use with the -b flag to designate the timestamp of the binary image to use to restore for DB2.
************
RETURN VALUE
************
0 The command completed successfully.
1 An error has occurred.
********
EXAMPLES
********
1. To restore the xCAT database from the /dbbackup/db directory, enter:
\ **restorexCATdb -p /dbbackup/db**\
2. To restore the xCAT database including auditlog and eventlog from the /dbbackup/db directory, enter:
\ **restorexCATdb -a -p /dbbackup/db**\
3. To restore the xCAT DB2 database from the binary image with timestamp 20111130130239 enter:
\ **restorexCATdb -b -t 20111130130239 -p /dbbackup/db**\
4. To restore the xCAT postgreSQL database from the binary image file pgbackup.20553 created by dumpxCATdb enter:
\ **restorexCATdb -b -p /dbbackup/db/pgbackup.20553**\
*****
FILES
*****
/opt/xcat/sbin/restorexCATdb
********
SEE ALSO
********
dumpxCATdb(1)|dumpxCATdb.1

View File

@ -0,0 +1,118 @@
###########
reventlog.1
###########
.. highlight:: perl
****
Name
****
\ **reventlog**\ - retrieve or clear remote hardware event logs
****************
\ **Synopsis**\
****************
\ **reventlog**\ \ *noderange*\ {\ *number-of-entries [-s]*\ |\ **all [-s]**\ |\ **clear**\ }
\ **reventlog**\ [\ **-h**\ |\ **--help**\ |\ **-v**\ |\ **--version**\ ]
*******************
\ **Description**\
*******************
\ **reventlog**\ can display any number of remote hardware event log entries
or clear them for a range of nodes. Hardware event
logs are stored on each server's service processor.
***************
\ **Options**\
***************
\ *number-of-entries*\
Retrieve the specified number of entries from the nodes' service processors.
\ **all**\
Retrieve all entries.
\ **-s**\
Sort the entries from latest (always the last entry in the event DB) to oldest (always the first entry in the event DB). If \ **number-of-entries**\ is specified, the latest \ **number-of-entries**\ events will be output in order from latest to oldest.
\ **clear**\
Clear event logs.
\ **-h**\ |\ **--help**\
Print help.
\ **-v**\ |\ **--version**\
Print version.
****************
\ **Examples**\
****************
\ **reventlog**\ \ *node4,node5*\ \ *5*\
.. code-block:: perl
node4: SERVPROC I 09/06/00 15:23:33 Remote Login Successful User ID = USERID[00]
node4: SERVPROC I 09/06/00 15:23:32 System spn1 started a RS485 connection with us[00]
node4: SERVPROC I 09/06/00 15:22:35 RS485 connection to system spn1 has ended[00]
node4: SERVPROC I 09/06/00 15:22:32 Remote Login Successful User ID = USERID[00]
node4: SERVPROC I 09/06/00 15:22:31 System spn1 started a RS485 connection with us[00]
node5: SERVPROC I 09/06/00 15:22:32 Remote Login Successful User ID = USERID[00]
node5: SERVPROC I 09/06/00 15:22:31 System spn1 started a RS485 connection with us[00]
node5: SERVPROC I 09/06/00 15:21:34 RS485 connection to system spn1 has ended[00]
node5: SERVPROC I 09/06/00 15:21:30 Remote Login Successful User ID = USERID[00]
node5: SERVPROC I 09/06/00 15:21:29 System spn1 started a RS485 connection with us[00]
\ **reventlog**\ \ *node4,node5*\ \ *clear*\
.. code-block:: perl
node4: clear
node5: clear
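To retrieve the 5 latest entries sorted from latest to oldest, a minimal sketch of the \ **-s**\ flag:
.. code-block:: perl
reventlog node4,node5 5 -s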
********
SEE ALSO
********
rpower(1)|rpower.1, monstart(1)|monstart.1

View File

@ -0,0 +1,261 @@
########
rflash.1
########
.. highlight:: perl
****
Name
****
\ **rflash**\ - Performs Licensed Internal Code (LIC) update support for HMC-attached POWER5 and POWER6 Systems, and POWER7 systems using Direct FSP management. rflash is also able to update firmware for NextScale Fan Power Controllers (FPC).
****************
\ **Synopsis**\
****************
\ **rflash**\ [\ **-h**\ |\ **--help**\ | \ **-v**\ |\ **--version**\ ]
PPC (with HMC) specific:
========================
\ **rflash**\ \ *noderange*\ \ **-p**\ \ *directory*\ {\ **--activate**\ \ **concurrent**\ |\ **disruptive**\ } [\ **-V**\ |\ **--verbose**\ ]
\ **rflash**\ \ *noderange*\ {\ **--commit**\ |\ **--recover**\ } [\ **-V**\ |\ **--verbose**\ ]
PPC (without HMC, using Direct FSP Management) specific:
========================================================
\ **rflash**\ \ *noderange*\ \ **-p**\ \ *directory*\ \ **--activate**\ \ **disruptive**\ |\ **deferred**\ [\ **-d**\ \ *data_directory*\ ]
\ **rflash**\ \ *noderange*\ {\ **--commit**\ |\ **--recover**\ }
NeXtScale FPC specific:
=======================
\ **rflash**\ \ *noderange*\ \ *http directory*\
*******************
\ **Description**\
*******************
The \ **rflash**\ command initiates firmware updates on supported xCAT nodes. Licensed Internal Code (also known as microcode) updates are performed on supported HMC-attached POWER5 and POWER6 pSeries nodes, and POWER7 systems using Direct FSP management.
The command scans the specified directory structure for firmware update package files applicable to the given nodes and components, and then it will \ **automatically**\ select the \ **latest**\ version for the upgrade. The firmware update files include the Microcode update package and associated XML file. They can be downloaded from the IBM Web site: \ *http://www-933.ibm.com/support/fixcentral/*\ .
The POWER5 and POWER6 systems contain several components that use Licensed Internal Code. The \ **rflash**\ command supports two of these components: the managed system (also known as the Central Electronics Complex, or CEC) and the power subsystem (also known as the Bulk Power Assembly (BPA) or Bulk Power Controller (BPC)). Some POWER5 managed systems can be attached to a power subsystem. These power subsystems can support multiple managed systems. When the \ **rflash**\ command is invoked, xCAT will determine the managed system or power subsystem associated with that CEC and perform the update.
The \ **noderange**\ can be a CEC or CEC list, an Lpar or Lpar list, or a Frame or Frame list, but CEC (or Lpar) and Frame \ **cannot**\ be used at the same time. When the \ *noderange*\ is a CEC or CEC list, \ **rflash**\ will upgrade the firmware of the CEC or CECs in the cec list. If \ *noderange*\ is an Lpar or Lpar list, \ **rflash**\ will update Licensed Internal Code (LIC) on HMC-attached POWER5 and POWER6 pSeries nodes, and POWER7 systems using Direct FSP management. If \ *noderange*\ is a Frame or Frame list, \ **rflash**\ will update Licensed Internal Code (LIC) of the power subsystem on HMC-attached POWER5 and POWER6 pSeries nodes. The \ *noderange*\ can also be the specified node groups. You can specify a comma or space-separated list of node group ranges. See the \ *noderange*\ man page for detailed usage information.
The command will update firmware for NeXtScale FPC when given an FPC node and the http information needed to access the firmware.
PPC (with HMC) specific:
========================
The \ **rflash**\ command uses the \ **xdsh**\ command to connect to the HMC controlling the given managed system and perform the updates. Before running \ **rflash**\ , use \ **rspconfig**\ to check whether ssh access to the related HMC is enabled. To enable an HMC ssh connection, use the \ **rspconfig**\ command.
\ **Warning!**\ This command may take considerable time to complete, depending on the number of systems being updated and the workload on the target HMC. In particular, power subsystem updates may take an hour or more if there are many attached managed systems.
Depending on the Licensed Internal Code update that is installed, the affected HMC-attached POWER5 and POWER6 systems may need to be recycled. The \ **--activate**\ flag determines how the affected systems activate the new code. The concurrent option activates code updates that do not require a system recycle (known as a "concurrent update"). If this option is given with an update that requires a system recycle (known as a "disruptive update"), a message will be returned, and no activation will be performed. The disruptive option will cause any affected systems that are powered on to be powered down before installing and activating the update. Once the update is complete, the command will attempt to power on any affected systems that it powered down. Those systems that were powered down when the command was issued will remain powered down when the update is complete.
The flash chip of a POWER5 and POWER6 managed system or power subsystem stores firmware in two locations, referred to as the temporary side and the permanent side. By default, most POWER5 and POWER6 systems boot from the temporary side of the flash. When the \ **rflash**\ command updates code, the current contents of the temporary side are written to the permanent side, and the new code is written to the temporary side. The new code is then activated. Therefore, the two sides of the flash will contain different levels of code when the update has completed.
The \ **--commit**\ flag is used to write the contents of the temporary side of the flash to the permanent side. This flag should be used after updating code and verifying correct system operation. The \ **--recover**\ flag is used to write the permanent side of the flash chip back to the temporary side. This flag should be used to recover from a corrupt flash operation, so that the previously running code can be restored.
\ **NOTE:**\ When the \ **--commit**\ or \ **--recover**\ flag is used, the noderange \ **cannot**\ be a BPA. It can only be a CEC or LPAR, and will take effect for \ **both**\ managed systems and power subsystems.
xCAT recommends that you shutdown your Operating System images and power off your managed systems before applying disruptive updates to managed systems or power subsystems.
Any previously activated code on the affected systems will be automatically accepted into permanent flash by this procedure.
\ **IMPORTANT!**\ If the power subsystem is recycled, all of its attached managed systems will be recycled.
If it outputs \ **"Timeout waiting for prompt"**\ during the upgrade, set the \ **"ppctimeout"**\ attribute to a larger value in the \ **site**\ table. After the upgrade, remember to change it back. If running the \ **"rflash"**\ command on an AIX management node, make sure the value of \ **"useSSHonAIX"**\ is \ **"yes"**\ in the site table.
PPC (using Direct FSP Management) specific:
===========================================
Currently, with Direct FSP/BPA Management, \ **rflash**\ does not support the \ **concurrent**\ value of the \ **--activate**\ flag; it supports \ **disruptive**\ and \ **deferred**\ . The \ **disruptive**\ option will cause any affected systems that are powered on to be powered down before installing and activating the update, so the systems should be powered off before doing the firmware update.
The \ **deferred**\ option will load the new firmware into the T (temp) side, but will not activate it like the disruptive firmware. The customer will continue to run the Frames and CECs working with the P (perm) side and can wait for a maintenance window where they can activate and boot the Frame/CECs with new firmware levels. Refer to the doc to get more details:
XCAT_Power_775_Hardware_Management
In Direct FSP/BPA Management, there is a -d <data_directory> option. The default value is /tmp. When doing a firmware update, rflash will put some related data from the rpm packages in the <data_directory> directory, so the execution of rflash requires available disk space in <data_directory> for the command to properly execute:
For one GFW rpm package and one power code rpm package, if the GFW rpm package size is gfw_rpmsize and the Power code rpm package size is power_rpmsize, the available disk space must be more than:
1.5\*gfw_rpmsize + 1.5\*power_rpmsize
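As an illustration only (the package sizes below are assumed example values, not taken from any actual firmware package):
.. code-block:: perl
# gfw_rpmsize   = 40 MB  (example)
# power_rpmsize = 30 MB  (example)
# required free space in <data_directory> > 1.5*40 + 1.5*30 = 105 MB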
For Power 775, the rflash command takes effect on the primary and secondary FSPs or BPAs almost in parallel.
For more details about the Firmware Update using Direct FSP/BPA Management, refer to:
XCAT_Power_775_Hardware_Management#Updating_the_BPA_and_FSP_firmware_using_xCAT_DFM
NeXtScale FPC specific:
=======================
The command will update firmware for NeXtScale FPC when given an FPC node and the http information needed to access the firmware. The http information required includes both the MN IP address and the directory containing the firmware. It is recommended that the firmware be downloaded and placed in the /install directory structure, as the xCAT MN /install directory is configured with the correct permissions for http. Refer to the doc to get more details:
XCAT_NeXtScale_Clusters
***************
\ **Options**\
***************
\ **-h|--help**\
Writes the command's usage statement to standard output.
\ **-p directory**\
Specifies the directory where the packages are located.
\ **-d data_directory**\
Specifies the directory where the raw data from rpm packages for each CEC/Frame are located. The default directory is /tmp. The option is only used in Direct FSP/BPA Management.
\ **--activate**\ \ **concurrent**\ | \ **disruptive**\
Must be specified to activate the new Licensed Internal Code. The "disruptive" option will cause the target systems to be recycled. Without this flag, LIC updates will be installed only, not activated.
\ **--commit**\
Used to commit the flash image in the temporary side of the chip to the permanent side for both managed systems and power subsystems.
\ **--recover**\
Used to recover the flash image in the permanent side of the chip to the temporary side for both managed systems and power subsystems.
\ **-v|--version**\
Displays the command's version.
\ **-V|--verbose**\
Verbose output.
*******************
\ **Exit Status**\
*******************
0 The command completed successfully.
1 An error has occurred.
****************
\ **Examples**\
****************
1
To update only the power subsystem attached to a single HMC-attached pSeries CEC(cec_name), and recycle the power subsystem and all attached managed systems when the update is complete, and the Microcode update package and associated XML file are in /tmp/fw, enter:
.. code-block:: perl
rflash cec_name -p /tmp/fw --activate disruptive
2
To update only the power subsystem attached to a single HMC-attached pSeries node, and recycle the power subsystem and all attached managed systems when the update is complete, and the Microcode update package and associated XML file are in /tmp/fw, enter:
.. code-block:: perl
rflash bpa_name -p /tmp/fw --activate disruptive
3
To commit a firmware update to permanent flash for both managed system and the related power subsystems, enter:
.. code-block:: perl
rflash cec_name --commit
4
To update the firmware on a NeXtScale FPC specify the FPC node name and the HTTP location of the file including the xCAT MN IP address and the directory on the xCAT MN containing the firmware as follows:
.. code-block:: perl
rflash fpc01 http://10.1.147.169/install/firmware/fhet17a/ibm_fw_fpc_fhet17a-2.02_anyos_noarch.rom
****************
\ **Location**\
****************
\ **/opt/xcat/bin/rflash**\
*****
NOTES
*****
This command is part of the xCAT software product.
********
SEE ALSO
********
rinv(1)|rinv.1, rspconfig(1)|rspconfig.1

View File

@ -0,0 +1,531 @@
######
rinv.1
######
.. highlight:: perl
****
Name
****
\ **rinv**\ - Remote hardware inventory
****************
\ **Synopsis**\
****************
\ **rinv**\ [\ **-h**\ |\ **--help**\ |\ **-v**\ |\ **--version**\ ]
BMC/MPA specific:
=================
\ **rinv**\ \ *noderange*\ {\ **pci**\ |\ **model**\ |\ **serial**\ |\ **asset**\ |\ **vpd**\ |\ **mprom**\ |\ **deviceid**\ |\ **guid**\ |\ **firm**\ |\ **diag**\ |\ **bios**\ |\ **mparom**\ |\ **mac**\ |\ **all**\ }
PPC (with HMC) specific:
========================
\ **rinv**\ \ *noderange*\ {\ **bus**\ |\ **config**\ |\ **serial**\ |\ **model**\ |\ **firm**\ |\ **all**\ }
PPC (using Direct FSP Management) specific:
===========================================
\ **rinv**\ \ *noderange*\ {\ **firm**\ }
\ **rinv**\ \ *noderange*\ {\ **deconfig**\ [\ **-x**\ ]}
Blade specific:
===============
\ **rinv**\ \ *noderange*\ {\ **mtm**\ |\ **serial**\ |\ **mac**\ |\ **bios**\ |\ **diag**\ |\ **mprom**\ |\ **mparom**\ |\ **firm**\ |\ **all**\ }
VMware specific:
================
\ **rinv**\ \ *noderange*\ [\ **-t**\ ]
zVM specific:
=============
\ **rinv**\ \ *noderange*\ [\ **config**\ |\ **all**\ ]
\ **rinv**\ \ *noderange*\ [\ **--diskpoolspace**\ ]
\ **rinv**\ \ *noderange*\ [\ **--diskpool**\ \ *pool*\ \ *space*\ ]
\ **rinv**\ \ *noderange*\ [\ **--fcpdevices**\ \ *state*\ \ *details*\ ]
\ **rinv**\ \ *noderange*\ [\ **--diskpoolnames**\ ]
\ **rinv**\ \ *noderange*\ [\ **--networknames**\ ]
\ **rinv**\ \ *noderange*\ [\ **--network**\ \ *name*\ ]
\ **rinv**\ \ *noderange*\ [\ **--ssi**\ ]
\ **rinv**\ \ *noderange*\ [\ **--smapilevel**\ ]
\ **rinv**\ \ *noderange*\ [\ **--wwpns**\ \ *fcp_channel*\ ]
\ **rinv**\ \ *noderange*\ [\ **--zfcppool**\ \ *pool*\ \ *space*\ ]
\ **rinv**\ \ *noderange*\ [\ **--zfcppoolnames**\ ]
*******************
\ **Description**\
*******************
\ **rinv**\ retrieves hardware configuration information from the on-board
Service Processor for a single or range of nodes and groups.
Calling \ **rinv**\ for VMware will display the UUID/GUID, number of CPUs, amount of memory, the MAC address and a list of Hard disks. The output for each Hard disk includes the label, size and backing file location.
***************
\ **Options**\
***************
\ **pci**\
Retrieves PCI bus information.
\ **bus**\
List all buses for each I/O slot.
\ **config**\
Retrieves number of processors, speed, total memory, and DIMM
locations.
\ **model**\
Retrieves model number.
\ **serial**\
Retrieves serial number.
\ **firm**\
Retrieves firmware versions.
\ **deconfig**\
Retrieves deconfigured resources. Deconfigured resources are hw components (cpus, memory, etc.) that have failed so the firmware has automatically turned those components off. This option is only capable of listing some of the deconfigured resources and should not be the only method used to check the hardware status.
\ **-x**\
To output the raw information of deconfigured resources for CEC.
\ **asset**\
Retrieves asset tag. Usually it's the MAC address of eth0.
\ **vpd**\
Same as specifying model, serial, deviceid, and mprom.
\ **diag**\
Diagnostics information of firmware.
\ **mprom**\
Retrieves mprom firmware level
\ **deviceid**\
Retrieves device identification. Usually device, manufacturing and product ids.
\ **guid**\
Retrieves the global unique identifier
\ **all**\
All of the above.
\ **-h**\ |\ **--help**\
Print help.
\ **-v**\ |\ **--version**\
Print version.
\ **-t**\
Set the values in the vm table to what vCenter has for the indicated nodes.
\ **zVM specific :**\
\ **--diskpoolspace**\
Calculates the total size of every known storage pool.
\ **--diskpool**\ \ *pool*\ \ *space*\
Lists the storage devices (ECKD and FBA) contained in a disk pool. Space can be: all, free, or used.
\ **--fcpdevices**\ \ *state*\ \ *details*\
Lists the FCP device channels that are active, free, or offline. State can be: active, free, or offline.
\ **--diskpoolnames**\
Lists the known disk pool names.
\ **--networknames**\
Lists the known network names.
\ **--network**\ \ *name*\
Shows the configuration of a given network device.
\ **--ssi**\
Obtain the SSI and system status.
\ **--smapilevel**\
Obtain the SMAPI level installed on the z/VM system.
\ **--wwpns**\ \ *fcp_channel*\
Query a given FCP device channel on a z/VM system and return a list of WWPNs.
\ **--zfcppool**\ \ *pool*\ \ *space*\
List the SCSI/FCP devices contained in a zFCP pool. Space can be: free or used.
\ **--zfcppoolnames**\
List the known zFCP pool names.
****************
\ **Examples**\
****************
\*
To retrieve all information available from blade node4, enter:
.. code-block:: perl
rinv node5 all
node5: Machine Type/Model 865431Z
node5: Serial Number 23C5030
node5: Asset Tag 00:06:29:1F:01:1A
node5: PCI Information
node5: Bus VendID DevID RevID Description Slot Pass/Fail
node5: 0 1166 0009 06 Host Bridge 0 PASS
node5: 0 1166 0009 06 Host Bridge 0 PASS
node5: 0 5333 8A22 04 VGA Compatible Controller0 PASS
node5: 0 8086 1229 08 Ethernet Controller 0 PASS
node5: 0 8086 1229 08 Ethernet Controller 0 PASS
node5: 0 1166 0200 50 ISA Bridge 0 PASS
node5: 0 1166 0211 00 IDE Controller 0 PASS
node5: 0 1166 0220 04 Universal Serial Bus 0 PASS
node5: 1 9005 008F 02 SCSI Bus Controller 0 PASS
node5: 1 14C1 8043 03 Unknown Device Type 2 PASS
node5: Machine Configuration Info
node5: Number of Processors:
node5: Processor Speed: 866 MHz
node5: Total Memory: 512 MB
node5: Memory DIMM locations: Slot(s) 3 4
\*
To output the raw information of deconfigured resources for CEC cec01, enter:
.. code-block:: perl
rinv cec01 deconfig -x
cec01:
<SYSTEM>
<System_type>IH</System_type>
<NODE>
<Location_code>U78A9.001.0123456-P1</Location_code>
<RID>800</RID>
</NODE>
</SYSTEM>
\*
To retrieve 'config' information from the HMC-managed LPAR node3, enter:
.. code-block:: perl
rinv node3 config
node5: Machine Configuration Info
node5: Number of Processors: 1
node5: Total Memory (MB): 1024
\*
To retrieve information about a VMware node vm1, enter:
.. code-block:: perl
rinv vm1
vm1: UUID/GUID: 42198f65-d579-fb26-8de7-3ae49e1790a7
vm1: CPUs: 1
vm1: Memory: 1536 MB
vm1: Network adapter 1: 36:1b:c2:6e:04:02
vm1: Hard disk 1 (d0): 9000 MB @ [nfs_192.168.68.21_vol_rc1storage_vmware] vm1_3/vm1.vmdk
vm1: Hard disk 2 (d4): 64000 MB @ [nfs_192.168.68.21_vol_rc1storage_vmware] vm1_3/vm1_5.vmdk
\ **zVM specific :**\
\*
To list the defined network names available for a given node:
.. code-block:: perl
rinv pokdev61 --getnetworknames
Output is similar to:
.. code-block:: perl
pokdev61: LAN:QDIO SYSTEM GLAN1
pokdev61: LAN:HIPERS SYSTEM GLAN2
pokdev61: LAN:QDIO SYSTEM GLAN3
pokdev61: VSWITCH SYSTEM VLANTST1
pokdev61: VSWITCH SYSTEM VLANTST2
pokdev61: VSWITCH SYSTEM VSW1
pokdev61: VSWITCH SYSTEM VSW2
pokdev61: VSWITCH SYSTEM VSW3
\*
To list the configuration for a given network:
.. code-block:: perl
rinv pokdev61 --getnetwork GLAN1
Output is similar to:
.. code-block:: perl
pokdev61: LAN SYSTEM GLAN1 Type: QDIO Connected: 1 Maxconn: INFINITE
pokdev61: PERSISTENT UNRESTRICTED IP Accounting: OFF
pokdev61: IPTimeout: 5 MAC Protection: Unspecified
pokdev61: Isolation Status: OFF
\*
To list the disk pool names available:
.. code-block:: perl
rinv pokdev61 --diskpoolnames
Output is similar to:
.. code-block:: perl
pokdev61: POOL1
pokdev61: POOL2
pokdev61: POOL3
\*
List the configuration for a given disk pool:
.. code-block:: perl
rinv pokdev61 --diskpool POOL1 free
Output is similar to:
.. code-block:: perl
pokdev61: #VolID DevType StartAddr Size
pokdev61: EMC2C4 3390-09 0001 10016
pokdev61: EMC2C5 3390-09 0001 10016
\*
List the known zFCP pool names.
.. code-block:: perl
rinv pokdev61 --zfcppoolnames
Output is similar to:
.. code-block:: perl
pokdev61: zfcp1
pokdev61: zfcp2
pokdev61: zfcp3
\*
List the SCSI/FCP devices contained in a given zFCP pool:
.. code-block:: perl
rinv pokdev61 --zfcppool zfcp1
Output is similar to:
.. code-block:: perl
pokdev61: #status,wwpn,lun,size,range,owner,channel,tag
pokdev61: used,500512345678c411,4014412100000000,2g,3B40-3B7F,ihost13,3b77,
pokdev61: used,500512345678c411,4014412200000000,8192M,3B40-3B7F,ihost13,3b77,replace_root_device
pokdev61: free,500512345678c411,4014412300000000,8g,3B40-3B7F,,,
pokdev61: free,5005123456789411,4014412400000000,2g,3B40-3B7F,,,
pokdev61: free,5005123456789411;5005123456789411,4014412600000000,2G,3B40-3B7F,,,
********
SEE ALSO
********
rpower(1)|rpower.1

Some files were not shown because too many files have changed in this diff