Mirror of https://github.com/xcat2/xcat-core.git (synced 2025-05-30 09:36:41 +00:00)

Merge pull request #717 from gurevichmark/man1_p-z_Fixes

man1 changes for commands p-z

Commit f65a71b310
@@ -19,11 +19,11 @@ SYNOPSIS
 ********
-\ *packimage [-h| --help]*\
+\ **packimage [-h| -**\ **-help]**\
-\ *packimage [-v| --version]*\
+\ **packimage [-v| -**\ **-version]**\
-\ *packimage imagename*\
+\ **packimage**\ \ *imagename*\
 ***********

@@ -40,7 +40,7 @@ This command will get all the necessary os image definition files from the \ *os
 **********
-Parameters
+PARAMETERS
 **********

@@ -82,7 +82,11 @@ EXAMPLES
 1. To pack the osimage rhels7.1-x86_64-netboot-compute:
-\ *packimage rhels7.1-x86_64-netboot-compute*\
+   .. code-block:: perl
+
+      packimage rhels7.1-x86_64-netboot-compute
 *****
@@ -19,13 +19,13 @@ SYNOPSIS
 ********
-\ **pgsqlsetup**\ {\ **-h | -**\ **-help**\ }
+\ **pgsqlsetup**\ {\ **-h**\ | \ **-**\ **-help**\ }
-\ **pgsqlsetup**\ {\ **-v | -**\ **-version**\ }
+\ **pgsqlsetup**\ {\ **-v**\ | \ **-**\ **-version**\ }
-\ **pgsqlsetup**\ {\ **-i | -**\ **-init**\ } [-N|nostart] [-P|-**\ **-PCM] [-o|-**\ **-setupODBC] [\ **-V | -**\ **-verbose**\ ]
+\ **pgsqlsetup**\ {\ **-i**\ | \ **-**\ **-init**\ } [\ **-N**\ | \ **-**\ **-nostart**\ ] [\ **-P**\ | \ **-**\ **-PCM**\ ] [\ **-o**\ | \ **-**\ **-odbc**\ ] [\ **-V**\ | \ **-**\ **-verbose**\ ]
-\ **pgsqlsetup**\ {\ **-o | -**\ **-setupODBC**\ } [-V|-**\ **-verbose]
+\ **pgsqlsetup**\ {\ **-o**\ | \ **-**\ **-setupODBC**\ } [\ **-V**\ | \ **-**\ **-verbose**\ ]
 ***********

@@ -109,19 +109,23 @@ EXAMPLES
-\*
-  To setup PostgreSQL for xCAT to run on the PostgreSQL xcatdb database :
-  \ **pgsqlsetup**\ \ *-i*\
+1. To setup PostgreSQL for xCAT to run on the PostgreSQL xcatdb database :
+
+   .. code-block:: perl
+
+      pgsqlsetup -i
-\*
-  To setup the ODBC for PostgreSQL xcatdb database access :
-  \ **pgsqlsetup**\ \ *-o*\
+2. To setup the ODBC for PostgreSQL xcatdb database access :
+
+   .. code-block:: perl
+
+      pgsqlsetup -o
@@ -73,7 +73,13 @@ EXAMPLES
 1.
-   pping all
+   .. code-block:: perl
+
+      pping all
 Output is similar to:
 .. code-block:: perl

@@ -87,7 +93,13 @@ EXAMPLES
 2.
-   pping all -i ib0,ib1
+   .. code-block:: perl
+
+      pping all -i ib0,ib1
 Output is similar to:
 .. code-block:: perl
@@ -92,7 +92,13 @@ EXAMPLES
 1.
-   ppping all -q
+   .. code-block:: perl
+
+      ppping all -q
 Output is similar to:
 .. code-block:: perl

@@ -108,7 +114,13 @@ EXAMPLES
 2.
-   ppping node1,node2 -i ib0,ib1,ib2,ib3
+   .. code-block:: perl
+
+      ppping node1,node2 -i ib0,ib1,ib2,ib3
 Output is similar to:
 .. code-block:: perl
@@ -21,7 +21,7 @@ prsync - parallel rsync
 \ **prsync**\ \ *filename*\ [\ *filename*\ \ *...*\ ] \ *noderange:destinationdirectory*\
-\ **prsync**\ [\ *-o rsync options*\ ] [\ **-f**\ \ *fanout*\ ] [\ *filename*\ \ *filename*\ \ *...*\ ] [\ *directory*\ \ *directory*\ \ *...*\ ]
+\ **prsync**\ [\ **-o**\ \ *rsync options*\ ] [\ **-f**\ \ *fanout*\ ] [\ *filename*\ \ *filename*\ \ *...*\ ] [\ *directory*\ \ *directory*\ \ *...*\ ]
 \ *noderange:destinationdirectory*\
 \ **prsync**\ {\ **-h | -**\ **-help | -v | -**\ **-version**\ }

@@ -47,7 +47,7 @@ management node to the compute node via a service node
-\ **rsyncopts**\
+\ *rsyncopts*\
 rsync options. See \ **rsync(1)**\ .

@@ -60,19 +60,19 @@ management node to the compute node via a service node
-\ **filename**\
+\ *filename*\
 A space delimited list of files to rsync.
-\ **directory**\
+\ *directory*\
 A space delimited list of directories to rsync.
-\ **noderange:destination**\
+\ *noderange:destination*\
 A noderange(3)|noderange.3 and destination directory. The : is required.

@@ -105,15 +105,23 @@ management node to the compute node via a service node
-\*
-  \ **cd**\ \ */install;*\ \ **prsync**\ \ **-o "crz"**\ \ *post*\ \ *stage:/install*\
+1.
+
+   .. code-block:: perl
+
+      cd /install; prsync -o "crz" post stage:/install
-\*
-  \ **prsync**\ \ *passwd*\ \ *group*\ \ *rack01:/etc*\
+2.
+
+   .. code-block:: perl
+
+      prsync passwd group rack01:/etc
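Reading the synopsis above, the fanout limit and the directory form can be combined in a single call. A purely illustrative sketch (the directory, fanout value, destination and node group are hypothetical, not taken from the man page):

.. code-block:: perl

   # sync the postscripts directory to at most 64 nodes at a time
   prsync -f 64 -o "crz" /install/postscripts compute:/tmp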
@@ -19,7 +19,7 @@ Name
 ****************
-\ **pscp**\ [-i \ *suffix*\ ] [\ *scp options*\ \ *...*\ ] [\ **-f**\ \ *fanout*\ ] \ *filename*\ [\ *filename*\ \ *...*\ ] \ *noderange:destinationdirectory*\
+\ **pscp**\ [\ **-i**\ \ *suffix*\ ] [\ *scp options*\ \ *...*\ ] [\ **-f**\ \ *fanout*\ ] \ *filename*\ [\ *filename*\ \ *...*\ ] \ *noderange:destinationdirectory*\
 \ **pscp**\ {\ **-h | -**\ **-help | -v | -**\ **-version**\ }

@@ -59,19 +59,19 @@ management node to the compute node via a service node.
-\ **scp options**\
+\ *scp options*\
 See \ **scp(1)**\
-\ **filename**\
+\ *filename*\
 A space delimited list of files to copy. If \ **-r**\ is passed as an scp option, directories may be specified as well.
-\ **noderange:destination**\
+\ *noderange:destination*\
 A noderange(3)|noderange.3 and destination directory. The : is required.

@@ -103,8 +103,26 @@ management node to the compute node via a service node.
 ****************
-\ **pscp**\ \ **-r**\ \ */usr/local*\ \ *node1,node3:/usr/local*\
-\ **pscp**\ \ *passwd*\ \ *group*\ \ *rack01:/etc*\
+1.
+
+   .. code-block:: perl
+
+      pscp -r /usr/local node1,node3:/usr/local
+
+2.
+
+   .. code-block:: perl
+
+      pscp passwd group rack01:/etc
 ************************
@@ -79,13 +79,13 @@ management node to the compute node via a service node.
-\ **noderange**\
+\ *noderange*\
 See noderange(3)|noderange.3.
-\ **command**\
+\ *command*\
 Command to be run in parallel. If no command is give then \ **psh**\
 enters interactive mode. In interactive mode a ">" prompt is

@@ -121,31 +121,43 @@ management node to the compute node via a service node.
-\*
-  Run uptime on 3 nodes:
-  \ **psh**\ \ *node4-node6*\ \ *uptime*\
+1. Run uptime on 3 nodes:
+
+   .. code-block:: perl
+
+      psh node4-node6 uptime
 Output is similar to:
 .. code-block:: perl
    node4: Sun Aug 5 17:42:06 MDT 2001
    node5: Sun Aug 5 17:42:06 MDT 2001
    node6: Sun Aug 5 17:42:06 MDT 2001
-\*
-  Run a command on some BladeCenter management modules:
-  \ **psh**\ \ *amm1-amm5*\ \ *'info -T mm[1]'*\
+2. Run a command on some BladeCenter management modules:
+
+   .. code-block:: perl
+
+      psh amm1-amm5 'info -T mm[1]'
-\*
-  Remove the tmp files on the nodes in the 1st frame:
-  \ **psh**\ \ *rack01*\ \ *'rm -f /tmp/\\*'*\
+3. Remove the tmp files on the nodes in the 1st frame:
+
+   .. code-block:: perl
+
+      psh rack01 'rm -f /tmp/*'
 Notice the use of '' to forward shell expansion. This is not necessary
 in interactive mode.
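As a small illustration of that quoting note (a sketch based on example 3 above; it assumes ordinary shell wildcard expansion, nothing beyond what the man page states):

.. code-block:: perl

   # quoted: the wildcard is carried to each node and expanded there
   psh rack01 'rm -f /tmp/*'

   # unquoted: the local shell on the management node would expand /tmp/*
   # first, so the nodes may receive a very different command
   psh rack01 rm -f /tmp/*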
@@ -88,7 +88,11 @@ method.
 ****************
-\ **rcons**\ \ *node5*\
+.. code-block:: perl
+
+   rcons node5
 ************************
@@ -19,11 +19,11 @@ SYNOPSIS
 ********
-\ *regnotif [-h| --help]*\
+\ **regnotif [-h| -**\ **-help]**\
-\ *regnotif [-v| --version]*\
+\ **regnotif [-v| -**\ **-version]**\
-\ *regnotif \ \*filename tablename\*\ [,tablename]... [-o|--operation actions]*\
+\ **regnotif**\ \ *filename tablename[,tablename]...*\ [\ **-o | -**\ **-operation**\ \ *actions*\ ]
 ***********

@@ -35,7 +35,7 @@ This command is used to register a Perl module or a command to the xCAT notifica
 **********
-Parameters
+PARAMETERS
 **********

@@ -48,13 +48,13 @@ OPTIONS
 *******
-\ **-h | -help**\ Display usage message.
+\ **-h | -**\ **-help**\ Display usage message.
-\ **-v | -version **\ Command Version.
+\ **-v | -**\ **-version**\ Command Version.
-\ **-V | -verbose**\ Verbose output.
+\ **-V | -**\ **-verbose**\ Verbose output.
-\ **-o | -operation**\ specifies the database table actions that the user is interested in. It is a comma separated list. 'a' for row addition, 'd' for row deletion and 'u' for row update.
+\ **-o | -**\ **-operation**\ specifies the database table actions that the user is interested in. It is a comma separated list. 'a' for row addition, 'd' for row deletion and 'u' for row update.
 ************

@@ -82,7 +82,11 @@ EXAMPLES
 2. To register a command that gets invoked when rows get updated in the switch table, enter:
-   regnotif /usr/bin/mycmd switch -o u
+   .. code-block:: perl
+
+      regnotif /usr/bin/mycmd switch -o u
 *****
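Building on that example, a hypothetical registration that watches both row additions and deletions (the module path and table name below are illustrative only; the a/d/u action letters are the ones listed under the -o option above):

.. code-block:: perl

   # invoke /opt/xcat/myplugin.pm whenever rows are added to or deleted from the nodelist table
   regnotif /opt/xcat/myplugin.pm nodelist -o a,d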
@@ -19,50 +19,30 @@ renergy.1
 ****************
-\ **renergy**\ [-h | -**\ **-help]
+\ **renergy**\ [\ **-h**\ | \ **-**\ **-help**\ ]
-\ **renergy**\ [-v | -**\ **-version]
+\ **renergy**\ [\ **-v**\ | \ **-**\ **-version**\ ]
 \ **Power 6 server specific :**\
-\ **renergy**\ \ *noderange*\ [-V] { all | [savingstatus] [cappingstatus]
-[cappingmaxmin] [cappingvalue] [cappingsoftmin] [averageAC]
-[averageDC] [ambienttemp] [exhausttemp] [CPUspeed]
-[syssbpower] [sysIPLtime]}
+\ **renergy**\ \ *noderange*\ [\ **-V**\ ] {\ **all | [savingstatus] [cappingstatus] [cappingmaxmin] [cappingvalue] [cappingsoftmin] [averageAC] [averageDC] [ambienttemp] [exhausttemp] [CPUspeed] [syssbpower] [sysIPLtime]**\ }
-\ **renergy**\ \ *noderange*\ [-V] { savingstatus={on | off}
-| cappingstatus={on | off} | cappingwatt=watt
-| cappingperc=percentage }
+\ **renergy**\ \ *noderange*\ [\ **-V**\ ] {\ **savingstatus={on | off} | cappingstatus={on | off} | cappingwatt=watt | cappingperc=percentage**\ }
 \ **Power 7 server specific :**\
-\ **renergy**\ \ *noderange*\ [-V] { all | [savingstatus] [dsavingstatus]
-[cappingstatus] [cappingmaxmin] [cappingvalue] [cappingsoftmin]
-[averageAC] [averageDC] [ambienttemp] [exhausttemp] [CPUspeed]
-[syssbpower] [sysIPLtime] [fsavingstatus] [ffoMin] [ffoVmin]
-[ffoTurbo] [ffoNorm] [ffovalue]}
+\ **renergy**\ \ *noderange*\ [\ **-V**\ ] {\ **all | [savingstatus] [dsavingstatus] [cappingstatus] [cappingmaxmin] [cappingvalue] [cappingsoftmin] [averageAC] [averageDC] [ambienttemp] [exhausttemp] [CPUspeed] [syssbpower] [sysIPLtime] [fsavingstatus] [ffoMin] [ffoVmin] [ffoTurbo] [ffoNorm] [ffovalue]**\ }
-\ **renergy**\ \ *noderange*\ [-V] { savingstatus={on | off}
-| dsavingstatus={on-norm | on-maxp | off}
-| fsavingstatus={on | off} | ffovalue=MHZ
-| cappingstatus={on | off} | cappingwatt=watt
-| cappingperc=percentage }
+\ **renergy**\ \ *noderange*\ [\ **-V**\ ] {\ **savingstatus={on | off} | dsavingstatus={on-norm | on-maxp | off} | fsavingstatus={on | off} | ffovalue=MHZ | cappingstatus={on | off} | cappingwatt=watt | cappingperc=percentage**\ }
 \ **Power 8 server specific :**\
-\ **renergy**\ \ *noderange*\ [-V] { all | [savingstatus] [dsavingstatus]
-[averageAC] [averageAChistory] [averageDC] [averageDChistory]
-[ambienttemp] [ambienttemphistory] [exhausttemp] [exhausttemphistory]
-[fanspeed] [fanspeedhistory] [CPUspeed] [CPUspeedhistory]
-[syssbpower] [sysIPLtime] [fsavingstatus] [ffoMin] [ffoVmin]
-[ffoTurbo] [ffoNorm] [ffovalue]}
+\ **renergy**\ \ *noderange*\ [\ **-V**\ ] {\ **all | [savingstatus] [dsavingstatus] [averageAC] [averageAChistory] [averageDC] [averageDChistory] [ambienttemp] [ambienttemphistory] [exhausttemp] [exhausttemphistory] [fanspeed] [fanspeedhistory] [CPUspeed] [CPUspeedhistory] [syssbpower] [sysIPLtime] [fsavingstatus] [ffoMin] [ffoVmin] [ffoTurbo] [ffoNorm] [ffovalue]**\ }
-\ **renergy**\ \ *noderange*\ [-V] { savingstatus={on | off}
-| dsavingstatus={on-norm | on-maxp | off}
-| fsavingstatus={on | off} | ffovalue=MHZ }
+\ **renergy**\ \ *noderange*\ \ **[-V] {savingstatus={on | off} | dsavingstatus={on-norm | on-maxp | off} | fsavingstatus={on | off} | ffovalue=MHZ }**\
 \ *NOTE:*\ The setting operation for \ **Power 8**\ server is only supported
 for the server which is running in PowerVM mode. Do NOT run the setting

@@ -74,23 +54,14 @@ for the server which is running in OPAL mode.
 \ **For Management Modules:**\
-\ **renergy**\ \ *noderange*\ [-V] { all | pd1all | pd2all | [pd1status]
-[pd2status] [pd1policy] [pd2policy] [pd1powermodule1]
-[pd1powermodule2] [pd2powermodule1] [pd2powermodule2]
-[pd1avaiablepower] [pd2avaiablepower] [pd1reservedpower]
-[pd2reservedpower] [pd1remainpower] [pd2remainpower]
-[pd1inusedpower] [pd2inusedpower] [availableDC] [averageAC]
-[thermaloutput] [ambienttemp] [mmtemp] }
+\ **renergy**\ \ *noderange*\ [\ **-V**\ ] {\ **all | pd1all | pd2all | [pd1status] [pd2status] [pd1policy] [pd2policy] [pd1powermodule1] [pd1powermodule2] [pd2powermodule1] [pd2powermodule2] [pd1avaiablepower] [pd2avaiablepower] [pd1reservedpower] [pd2reservedpower] [pd1remainpower] [pd2remainpower] [pd1inusedpower] [pd2inusedpower] [availableDC] [averageAC] [thermaloutput] [ambienttemp] [mmtemp]**\ }
 \ **For a blade server nodes:**\
-\ **renergy**\ \ *noderange*\ [-V] { all | [averageDC]
-[capability] [cappingvalue] [CPUspeed] [maxCPUspeed]
-[savingstatus] [dsavingstatus] }
+\ **renergy**\ \ *noderange*\ [\ **-V**\ ] {\ **all | [averageDC] [capability] [cappingvalue] [CPUspeed] [maxCPUspeed] [savingstatus] [dsavingstatus]**\ }
-\ **renergy**\ \ *noderange*\ [-V] { savingstatus={on | off}
-| dsavingstatus={on-norm | on-maxp | off} }
+\ **renergy**\ \ *noderange*\ [\ **-V**\ ] {\ **savingstatus={on | off} | dsavingstatus={on-norm | on-maxp | off}**\ }
 \ **Flex specific :**\

@@ -98,36 +69,26 @@ for the server which is running in OPAL mode.
 \ **For Flex Management Modules:**\
-\ **renergy**\ \ *noderange*\ [-V] { all | [powerstatus]
-[powerpolicy] [powermodule] [avaiablepower] [reservedpower]
-[remainpower] [inusedpower] [availableDC] [averageAC]
-[thermaloutput] [ambienttemp] [mmtemp] }
+\ **renergy**\ \ *noderange*\ [\ **-V**\ ] {\ **all | [powerstatus] [powerpolicy] [powermodule] [avaiablepower] [reservedpower] [remainpower] [inusedpower] [availableDC] [averageAC] [thermaloutput] [ambienttemp] [mmtemp]**\ }
 \ **For Flex node (power and x86):**\
-\ **renergy**\ \ *noderange*\ [-V] { all | [averageDC]
-[capability] [cappingvalue] [cappingmaxmin] [cappingmax]
-[cappingmin] [cappingGmin] [CPUspeed] [maxCPUspeed]
-[savingstatus] [dsavingstatus] }
+\ **renergy**\ \ *noderange*\ [\ **-V**\ ] {\ **all | [averageDC] [capability] [cappingvalue] [cappingmaxmin] [cappingmax] [cappingmin] [cappingGmin] [CPUspeed] [maxCPUspeed] [savingstatus] [dsavingstatus]**\ }
-\ **renergy**\ \ *noderange*\ [-V] { cappingstatus={on | off}
-| cappingwatt=watt | cappingperc=percentage
-| savingstatus={on | off} | dsavingstatus={on-norm | on-maxp | off} }
+\ **renergy**\ \ *noderange*\ [\ **-V**\ ] {\ **cappingstatus={on | off} | cappingwatt=watt | cappingperc=percentage | savingstatus={on | off} | dsavingstatus={on-norm | on-maxp | off}**\ }
 \ **iDataPlex specific :**\
-\ **renergy**\ \ *noderange*\ [-V] [ { cappingmaxmin | cappingmax | cappingmin } ]
-[cappingstatus] [cappingvalue] [relhistogram]
+\ **renergy**\ \ *noderange*\ [\ **-V**\ ] [{\ **cappingmaxmin | cappingmax | cappingmin}] [cappingstatus] [cappingvalue] [relhistogram]**\ }
-\ **renergy**\ \ *noderange*\ [-V] { cappingstatus={on | enable | off | disable}
-| {cappingwatt|cappingvalue}=watt }
+\ **renergy**\ \ *noderange*\ [\ **-V**\ ] {\ **cappingstatus={on | enable | off | disable} | {cappingwatt|cappingvalue}=watt**\ }
 \ **OpenPOWER server specific :**\
-\ **renergy**\ \ *noderange*\ { powerusage | temperature }
+\ **renergy**\ \ *noderange*\ {\ **powerusage | temperature**\ }
 *******************
@@ -327,7 +288,7 @@ so no additional plugins are needed for BladeCenter.)
 Note: For Blade Center, the value of attribute
 averageAC is the total AC power being consumed by all modules
 in the chassis. It also includes power consumed by the Chassis
 Cooling Devices for BCH chassis.

@@ -780,11 +741,13 @@ so no additional plugins are needed for BladeCenter.)
-1
-  Query all attributes which CEC1,CEC2 supported.
-  \ **renergy**\ CEC1,CEC2 all
+1. Query all attributes which CEC1,CEC2 supported.
+
+   .. code-block:: perl
+
+      renergy CEC1,CEC2 all
 The output of the query operation:

@@ -820,11 +783,13 @@ so no additional plugins are needed for BladeCenter.)
-2
-  Query the \ **fanspeed**\ attribute for Power8 CEC.
-  \ **renergy**\ CEC1 fanspeed
+2. Query the \ **fanspeed**\ attribute for Power8 CEC.
+
+   .. code-block:: perl
+
+      renergy CEC1 fanspeed
 The output of the query operation:

@@ -843,9 +808,7 @@ so no additional plugins are needed for BladeCenter.)
-3
-  Query the historical records for the \ **CPUspeed**\ attribute. (Power8 CEC)
+3. Query the historical records for the \ **CPUspeed**\ attribute. (Power8 CEC)
 \ **renergy**\ CEC1 CPUspeedhistory

@@ -873,7 +836,11 @@ so no additional plugins are needed for BladeCenter.)
 Query all the attirbutes for management module node MM1. (For chassis)
-\ **renergy**\ MM1 all
+.. code-block:: perl
+
+   renergy MM1 all
 The output of the query operation:

@@ -899,18 +866,19 @@ so no additional plugins are needed for BladeCenter.)
 mm1: pd2powermodule2: Bay 4: 2940W
 mm1: pd2remainpower: 51W
 mm1: pd2reservedpower: 2889W
-mm1: pd2status: 2 - Warning: Power redundancy does not exist
-in this power domain.
+mm1: pd2status: 2 - Warning: Power redundancy does not exist in this power domain.
 mm1: thermaloutput: 9717.376000 BTU/hour
-5
-  Query all the attirbutes for blade server node blade1.
-  \ **renergy**\ blade1 all
+5. Query all the attirbutes for blade server node blade1.
+
+   .. code-block:: perl
+
+      renergy blade1 all
 The output of the query operation:

@@ -928,12 +896,14 @@ so no additional plugins are needed for BladeCenter.)
-6
-  Query the attributes savingstatus, cappingstatus
-  and CPUspeed for server CEC1.
-  \ **renergy**\ CEC1 savingstatus cappingstatus CPUspeed
+6. Query the attributes savingstatus, cappingstatus
+   and CPUspeed for server CEC1.
+
+   .. code-block:: perl
+
+      renergy CEC1 savingstatus cappingstatus CPUspeed
 The output of the query operation:

@@ -947,11 +917,13 @@ so no additional plugins are needed for BladeCenter.)
-7
-  Turn on the power saving function of CEC1.
-  \ **renergy**\ CEC1 savingstatus=on
+7. Turn on the power saving function of CEC1.
+
+   .. code-block:: perl
+
+      renergy CEC1 savingstatus=on
 The output of the setting operation:

@@ -964,12 +936,14 @@ so no additional plugins are needed for BladeCenter.)
-8
-  Set the power capping value base on the percentage of the
-  max-min capping value. Here, set it to 50%.
-  \ **renergy**\ CEC1 cappingperc=50
+8. Set the power capping value base on the percentage of the
+   max-min capping value. Here, set it to 50%.
+
+   .. code-block:: perl
+
+      renergy CEC1 cappingperc=50
 If the maximum capping value of the CEC1 is 850w, and the
 minimum capping value of the CEC1 is 782w, the Power Capping

@@ -986,11 +960,13 @@ so no additional plugins are needed for BladeCenter.)
-9
-  Query powerusage and temperature for OpenPOWER servers.
-  \ **renergy**\ ops01 powerusage temperature
+9. Query powerusage and temperature for OpenPOWER servers.
+
+   .. code-block:: perl
+
+      renergy ops01 powerusage temperature
 The output will be like this:

@@ -1017,39 +993,21 @@ so no additional plugins are needed for BladeCenter.)
-1
-  For more information on 'Power System Energy Management':
-  .. code-block:: perl
-     http://www-03.ibm.com/systems/power/software/energy/index.html
+1. For more information on 'Power System Energy Management':
+
+   http://www-03.ibm.com/systems/power/software/energy/index.html
-2
-  EnergyScale white paper for Power6:
-  .. code-block:: perl
-     http://www-03.ibm.com/systems/power/hardware/whitepapers/energyscale.html
+2. EnergyScale white paper for Power6:
+
+   http://www-03.ibm.com/systems/power/hardware/whitepapers/energyscale.html
-3
-  EnergyScale white paper for Power7:
-  .. code-block:: perl
-     http://www-03.ibm.com/systems/power/hardware/whitepapers/energyscale7.html
+3. EnergyScale white paper for Power7:
+
+   http://www-03.ibm.com/systems/power/hardware/whitepapers/energyscale7.html
@@ -43,7 +43,7 @@ OPTIONS
-\ *bps*\ ]
+\ *bps*\
 The display rate to use to play back the console output. Default is 19200.

@@ -74,16 +74,12 @@ RETURN VALUE
-0
-  The command completed successfully.
+0 The command completed successfully.
-1
-  An error has occurred.
+1 An error has occurred.
@@ -53,13 +53,13 @@ OPTIONS
 *******
-\ **-h**\ Display usage message.
+\ **-h|-**\ **-help**\ Display usage message.
-\ **-v**\ Command Version.
+\ **-v|-**\ **-version**\ Command Version.
-\ **-r**\ On a Service Node, services will not be restarted.
+\ **-r|-**\ **-reload**\ On a Service Node, services will not be restarted.
-\ **-V**\ Display the verbose messages.
+\ **-V|-**\ **-verbose**\ Display the verbose messages.
 ************

@@ -79,7 +79,11 @@ EXAMPLES
 1. To restart the xCAT daemon, enter:
-\ **restartxcatd**\
+   .. code-block:: perl
+
+      restartxcatd
 *****
@@ -43,20 +43,19 @@ OPTIONS
 *******
-\ **-h**\ Display usage message.
+\ **-h|-**\ **-help**\ Display usage message.
-\ **-v**\ Command Version.
+\ **-v|-**\ **-version**\ Command Version.
-\ **-V**\ Verbose.
+\ **-V|-**\ **-verbose**\ Verbose.
-\ **-a**\ All,without this flag the eventlog and auditlog will be skipped.
-These tables are skipped by default because restoring will generate new indexes
+\ **-a**\ All,without this flag the eventlog and auditlog will be skipped. These tables are skipped by default because restoring will generate new indexes
 \ **-b**\ Restore from the binary image.
-\ **-p**\ Path to the directory containing the database restore files. If restoring from the binary image (-b) and using postgeSQL, then this is the complete path to the restore file that was created with dumpxCATdb -b.
+\ **-p|-**\ **-path**\ Path to the directory containing the database restore files. If restoring from the binary image (-b) and using postgeSQL, then this is the complete path to the restore file that was created with dumpxCATdb -b.
-\ **-t**\ Use with the -b flag to designate the timestamp of the binary image to use to restore for DB2.
+\ **-t|-**\ **-timestamp**\ Use with the -b flag to designate the timestamp of the binary image to use to restore for DB2.
 ************

@@ -76,19 +75,35 @@ EXAMPLES
 1. To restore the xCAT database from the /dbbackup/db directory, enter:
-\ **restorexCATdb -p /dbbackup/db**\
+   .. code-block:: perl
+
+      restorexCATdb -p /dbbackup/db
 2. To restore the xCAT database including auditlog and eventlog from the /dbbackup/db directory, enter:
-\ **restorexCATdb -a -p /dbbackup/db**\
+   .. code-block:: perl
+
+      restorexCATdb -a -p /dbbackup/db
 3. To restore the xCAT DB2 database from the binary image with timestamp 20111130130239 enter:
-\ **restorexCATdb -b -t 20111130130239 -p /dbbackup/db**\
+   .. code-block:: perl
+
+      restorexCATdb -b -t 20111130130239 -p /dbbackup/db
 4. To restore the xCAT postgreSQL database from the binary image file pgbackup.20553 created by dumpxCATdb enter:
-\ **restorexCATdb -b -p /dbbackup/db/pgbackup.20553**\
+   .. code-block:: perl
+
+      restorexCATdb -b -p /dbbackup/db/pgbackup.20553
 *****
@@ -19,7 +19,7 @@ Name
 ****************
-\ **reventlog**\ \ *noderange*\ {\ *number-of-entries [-s]*\ |\ **all [-s] | clear**\ }
+\ **reventlog**\ \ *noderange*\ {\ *number-of-entries*\ [\ **-s**\ ]|\ **all [-s] | clear**\ }
 \ **reventlog**\ [\ **-h | -**\ **-help | -v | -**\ **-version**\ ]

@@ -82,30 +82,51 @@ logs are stored on each servers service processor.
 ****************
-\ **reventlog**\ \ *node4,node5*\ \ *5*\
+1.
+
+   .. code-block:: perl
+
+      reventlog node4,node5 5
 Output is similar to:
 .. code-block:: perl
    node4: SERVPROC I 09/06/00 15:23:33 Remote Login Successful User ID = USERID[00]
    node4: SERVPROC I 09/06/00 15:23:32 System spn1 started a RS485 connection with us[00]
    node4: SERVPROC I 09/06/00 15:22:35 RS485 connection to system spn1 has ended[00]
    node4: SERVPROC I 09/06/00 15:22:32 Remote Login Successful User ID = USERID[00]
    node4: SERVPROC I 09/06/00 15:22:31 System spn1 started a RS485 connection with us[00]
    node5: SERVPROC I 09/06/00 15:22:32 Remote Login Successful User ID = USERID[00]
    node5: SERVPROC I 09/06/00 15:22:31 System spn1 started a RS485 connection with us[00]
    node5: SERVPROC I 09/06/00 15:21:34 RS485 connection to system spn1 has ended[00]
    node5: SERVPROC I 09/06/00 15:21:30 Remote Login Successful User ID = USERID[00]
    node5: SERVPROC I 09/06/00 15:21:29 System spn1 started a RS485 connection with us[00]
-\ **reventlog**\ \ *node4,node5*\ \ *clear*\
-.. code-block:: perl
-   node4: clear
-   node5: clear
+2.
+
+   .. code-block:: perl
+
+      reventlog node4,node5 clear
+
+   Output is similar to:
+
+   .. code-block:: perl
+
+      node4: clear
+      node5: clear
@@ -100,26 +100,22 @@ PPC (using Direct FSP Management) specific:
 In currently Direct FSP/BPA Management, our \ **rflash**\ doesn't support \ **concurrent**\ value of \ **-**\ **-activate**\ flag, and supports \ **disruptive**\ and \ **deferred**\ . The \ **disruptive**\ option will cause any affected systems that are powered on to be powered down before installing and activating the update. So we require that the systems should be powered off before do the firmware update.
-The \ **deferred**\ option will load the new firmware into the T (temp) side, but will not activate it like the disruptive firmware. The customer will continue to run the Frames and CECs working with the P (perm) side and can wait for a maintenance window where they can activate and boot the Frame/CECs with new firmware levels. Refer to the doc to get more details:
-XCAT_Power_775_Hardware_Management
+The \ **deferred**\ option will load the new firmware into the T (temp) side, but will not activate it like the disruptive firmware. The customer will continue to run the Frames and CECs working with the P (perm) side and can wait for a maintenance window where they can activate and boot the Frame/CECs with new firmware levels. Refer to the doc to get more details: XCAT_Power_775_Hardware_Management
 In Direct FSP/BPA Management, there is -d <data_directory> option. The default value is /tmp. When do firmware update, rflash will put some related data from rpm packages in <data_directory> directory, so the execution of rflash will require available disk space in <data_directory> for the command to properly execute:
-For one GFW rpm package and one power code rpm package , if the GFW rpm package size is gfw_rpmsize, and the Power code rpm package size is power_rpmsize, it requires that the available disk space should be more than:
-1.5\*gfw_rpmsize + 1.5\*power_rpmsize
+For one GFW rpm package and one power code rpm package , if the GFW rpm package size is gfw_rpmsize, and the Power code rpm package size is power_rpmsize, it requires that the available disk space should be more than: 1.5\*gfw_rpmsize + 1.5\*power_rpmsize
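A rough way to sanity-check that space requirement before running the update (a sketch only; the rpm file names are placeholders, and the 1.5x factor is simply the sizing rule quoted above):

.. code-block:: perl

   # sizes of the GFW and power code rpm packages, in KB
   gfw_kb=$(du -k gfw_package.rpm | cut -f1)
   power_kb=$(du -k power_package.rpm | cut -f1)

   # need at least 1.5 * (gfw + power) KB free in the -d data directory (default /tmp)
   echo "need $(( (gfw_kb + power_kb) * 3 / 2 )) KB free"
   df -k /tmp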
For Power 775, the rflash command takes effect on the primary and secondary FSPs or BPAs almost in parallel.
-For more details about the Firmware Update using Direct FSP/BPA Management, refer to:
-XCAT_Power_775_Hardware_Management#Updating_the_BPA_and_FSP_firmware_using_xCAT_DFM
+For more details about the Firmware Update using Direct FSP/BPA Management, refer to: XCAT_Power_775_Hardware_Management#Updating_the_BPA_and_FSP_firmware_using_xCAT_DFM

NeXtScale FPC specific:
=======================

-The command will update firmware for NeXtScale FPC when given an FPC node and the http information needed to access the firmware. The http imformation required includes both the MN IP address as well as the directory containing the firmware. It is recommended that the firmware be downloaded and placed in the /install directory structure as the xCAT MN /install directory is configured with the correct permissions for http. Refer to the doc to get more details:
-XCAT_NeXtScale_Clusters
+The command will update firmware for NeXtScale FPC when given an FPC node and the http information needed to access the firmware. The http imformation required includes both the MN IP address as well as the directory containing the firmware. It is recommended that the firmware be downloaded and placed in the /install directory structure as the xCAT MN /install directory is configured with the correct permissions for http. Refer to the doc to get more details: XCAT_NeXtScale_Clusters

OpenPOWER specific:

@@ -148,13 +144,13 @@ The command will update firmware for OpenPOWER BMC when given an OpenPOWER node
-\ **-p directory**\
+\ **-p**\ \ *directory*\
 Specifies the directory where the packages are located.
-\ **-d data_directory**\
+\ **-d**\ \ *data_directory*\
 Specifies the directory where the raw data from rpm packages for each CEC/Frame are located. The default directory is /tmp. The option is only used in Direct FSP/BPA Management.

@@ -207,9 +203,7 @@ The command will update firmware for OpenPOWER BMC when given an OpenPOWER node
-1
-  To update only the power subsystem attached to a single HMC-attached pSeries CEC(cec_name), and recycle the power subsystem and all attached managed systems when the update is complete, and the Microcode update package and associated XML file are in /tmp/fw, enter:
+1. To update only the power subsystem attached to a single HMC-attached pSeries CEC(cec_name), and recycle the power subsystem and all attached managed systems when the update is complete, and the Microcode update package and associated XML file are in /tmp/fw, enter:
 .. code-block:: perl

@@ -219,9 +213,7 @@ The command will update firmware for OpenPOWER BMC when given an OpenPOWER node
-2
-  To update only the power subsystem attached to a single HMC-attached pSeries node, and recycle the power subsystem and all attached managed systems when the update is complete, and the Microcode update package and associated XML file are in /tmp/fw, enter:
+2. To update only the power subsystem attached to a single HMC-attached pSeries node, and recycle the power subsystem and all attached managed systems when the update is complete, and the Microcode update package and associated XML file are in /tmp/fw, enter:
 .. code-block:: perl

@@ -231,9 +223,7 @@ The command will update firmware for OpenPOWER BMC when given an OpenPOWER node
-3
-  To commit a firmware update to permanent flash for both managed system and the related power subsystems, enter:
+3. To commit a firmware update to permanent flash for both managed system and the related power subsystems, enter:
 .. code-block:: perl

@@ -243,9 +233,7 @@ The command will update firmware for OpenPOWER BMC when given an OpenPOWER node
-4
-  To update the firmware on a NeXtScale FPC specify the FPC node name and the HTTP location of the file including the xCAT MN IP address and the directory on the xCAT MN containing the firmware as follows:
+4. To update the firmware on a NeXtScale FPC specify the FPC node name and the HTTP location of the file including the xCAT MN IP address and the directory on the xCAT MN containing the firmware as follows:
 .. code-block:: perl

@@ -255,9 +243,7 @@ The command will update firmware for OpenPOWER BMC when given an OpenPOWER node
-5
-  To update the firmware on OpenPOWER machine specify the node name and the file path of the HPM firmware file as follows:
+5. To update the firmware on OpenPOWER machine specify the node name and the file path of the HPM firmware file as follows:
 .. code-block:: perl
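For the OpenPOWER case in example 5, the description amounts to passing a node name plus the path of an HPM file; a hypothetical invocation (the node name and file path below are made up for illustration and are not from the man page) could be:

.. code-block:: perl

   rflash node1 /install/firmware/mybmc_update.hpm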
@@ -212,74 +212,144 @@ Calling \ **rinv**\ for VMware will display the UUID/GUID, nuumber of CPUs, amo
 Set the values in the vm table to what vCenter has for the indicated nodes.

 \ **zVM specific :**\

 \ **-**\ **-diskpoolspace**\

   Calculates the total size of every known storage pool.

 \ **-**\ **-diskpool**\ \ *pool*\ \ *space*\

   Lists the storage devices (ECKD and FBA) contained in a disk pool. Space can be: all, free, or used.

 \ **-**\ **-fcpdevices**\ \ *state*\ \ *details*\

   Lists the FCP device channels that are active, free, or offline. State can be: active, free, or offline.

 \ **-**\ **-diskpoolnames**\

   Lists the known disk pool names.

 \ **-**\ **-networknames**\

   Lists the known network names.

 \ **-**\ **-network**\ \ *name*\

   Shows the configuration of a given network device.

 \ **-**\ **-ssi**\

   Obtain the SSI and system status.

 \ **-**\ **-smapilevel**\

   Obtain the SMAPI level installed on the z/VM system.

 \ **-**\ **-wwpns**\ \ *fcp_channel*\

   Query a given FCP device channel on a z/VM system and return a list of WWPNs.

 \ **-**\ **-zfcppool**\ \ *pool*\ \ *space*\

   List the SCSI/FCP devices contained in a zFCP pool. Space can be: free or used.

 \ **-**\ **-zfcppoolnames**\

   List the known zFCP pool names.
@@ -290,15 +360,19 @@ Calling \ **rinv**\ for VMware will display the UUID/GUID, nuumber of CPUs, amo
-\*
-  To retrieve all information available from blade node4, enter:
+1. To retrieve all information available from blade node4, enter:
+
+   .. code-block:: perl
+
+      rinv node5 all
 Output is similar to:
 .. code-block:: perl
    node5: Machine Type/Model 865431Z
    node5: Serial Number 23C5030
    node5: Asset Tag 00:06:29:1F:01:1A

@@ -323,15 +397,19 @@ Calling \ **rinv**\ for VMware will display the UUID/GUID, nuumber of CPUs, amo
-\*
-  To output the raw information of deconfigured resources for CEC cec01, enter:
+2. To output the raw information of deconfigured resources for CEC cec01, enter:
+
+   .. code-block:: perl
+
+      rinv cec01 deconfig -x
 Output is similar to:
 .. code-block:: perl
    cec01:
    <SYSTEM>
    <System_type>IH</System_type>

@@ -344,7 +422,7 @@ Calling \ **rinv**\ for VMware will display the UUID/GUID, nuumber of CPUs, amo
-\*
+3.
 To retrieve 'config' information from the HMC-managed LPAR node3, enter:

@@ -352,7 +430,13 @@ Calling \ **rinv**\ for VMware will display the UUID/GUID, nuumber of CPUs, amo
 .. code-block:: perl
+   rinv node3 config
+
+Output is similar to:
+
+.. code-block:: perl
    node5: Machine Configuration Info
    node5: Number of Processors: 1
    node5: Total Memory (MB): 1024

@@ -360,7 +444,7 @@ Calling \ **rinv**\ for VMware will display the UUID/GUID, nuumber of CPUs, amo
-\*
+4.
 To retrieve information about a VMware node vm1, enter:

@@ -368,6 +452,13 @@ Calling \ **rinv**\ for VMware will display the UUID/GUID, nuumber of CPUs, amo
 .. code-block:: perl
+   rinv vm1
+
+Output is similar to:
+
+.. code-block:: perl
    vm1: UUID/GUID: 42198f65-d579-fb26-8de7-3ae49e1790a7
    vm1: CPUs: 1
    vm1: Memory: 1536 MB

@@ -380,7 +471,7 @@ Calling \ **rinv**\ for VMware will display the UUID/GUID, nuumber of CPUs, amo
-\*
+5.
 To list the defined network names available for a given node:

@@ -407,7 +498,7 @@ Calling \ **rinv**\ for VMware will display the UUID/GUID, nuumber of CPUs, amo
-\*
+6.
 To list the configuration for a given network:

@@ -430,7 +521,7 @@ Calling \ **rinv**\ for VMware will display the UUID/GUID, nuumber of CPUs, amo
-\*
+7.
 To list the disk pool names available:

@@ -452,7 +543,7 @@ Calling \ **rinv**\ for VMware will display the UUID/GUID, nuumber of CPUs, amo
-\*
+8.
 List the configuration for a given disk pool:

@@ -474,7 +565,7 @@ Calling \ **rinv**\ for VMware will display the UUID/GUID, nuumber of CPUs, amo
-\*
+9.
 List the known zFCP pool names.

@@ -496,7 +587,7 @@ Calling \ **rinv**\ for VMware will display the UUID/GUID, nuumber of CPUs, amo
-\*
+10.
 List the SCSI/FCP devices contained in a given zFCP pool:
@@ -21,7 +21,7 @@ SYNOPSIS
 \ **rmdsklsnode [-h | -**\ **-help ]**\
-\ **rmdsklsnode [-V|-**\ **-verbose] [-f|-**\ **-force] [-r|-**\ **-remdef] [-i image_name] [-p|-**\ **-primarySN] [-b|-**\ **-backupSN] noderange**\
+\ **rmdsklsnode [-V|-**\ **-verbose] [-f|-**\ **-force] [-r|-**\ **-remdef] [-i**\ \ *image_name*\ ] \ **[-p|-**\ **-primarySN] [-b|-**\ **-backupSN]**\ \ *noderange*\
 ***********

@@ -71,13 +71,13 @@ OPTIONS
-\ **-i image_name**\
+\ **-i**\ \ *image_name*\
 The name of an xCAT image definition.
-\ **noderange**\
+\ *noderange*\
 A set of comma delimited node names and/or group names. See the "noderange" man page for details on additional supported formats.

@@ -109,16 +109,12 @@ RETURN VALUE
-0
-  The command completed successfully.
+0 The command completed successfully.
-1
-  An error has occurred.
+1 An error has occurred.

@@ -129,22 +125,37 @@ EXAMPLES
 1) Remove the NIM client definition for the xCAT node named "node01". Give verbose output.
-\ **rmdsklsnode -V node01**\
-2) Remove the NIM client definitions for all the xCAT nodes in the group "aixnod
-es". Attempt to shut down the nodes if they are running.
+   .. code-block:: perl
+
+      rmdsklsnode -V node01
+
+2) Remove the NIM client definitions for all the xCAT nodes in the group "aixnodes". Attempt to shut down the nodes if they are running.
+
+   .. code-block:: perl
+
+      rmdsklsnode -f aixnodes
-\ **rmdsklsnode -f aixnodes**\
 3) Remove the NIM client machine definition for xCAT node "node02" that was created with the \ **mkdsklsnode -n**\ option and the image "AIXdskls". (i.e. NIM client machine name "node02_AIXdskls".)
-\ **rmdsklsnode -i AIXdskls node02**\
+   .. code-block:: perl
+
+      rmdsklsnode -i AIXdskls node02
 This assume that node02 is not currently running.
 4) Remove the old alternate client definition "node27_olddskls".
-\ **rmdsklsnode -r -i olddskls node27**\
+   .. code-block:: perl
+
+      rmdsklsnode -r -i olddskls node27
 Assuming the node was booted using an new alternate NIM client definition then this will leave the node running.
@@ -19,9 +19,9 @@ SYNOPSIS
 ********
-\ **rmflexnode**\ [-h | -**\ **-help]
+\ **rmflexnode**\ [\ **-h**\ | \ **-**\ **-help**\ ]
-\ **rmflexnode**\ [-v | -**\ **-version]
+\ **rmflexnode**\ [\ **-v**\ | \ **-**\ **-version**\ ]
 \ **rmflexnode**\ \ *noderange*\

@@ -67,9 +67,7 @@ EXAMPLES
-1
-  Delete a flexible node base on the xCAT node blade1.
+1 Delete a flexible node base on the xCAT node blade1.
 The blade1 should belong to a complex, the \ *id*\ attribute should be set correctly and all the slots should be in \ **power off**\ state.
@@ -95,7 +95,11 @@ This is used to determine the current host to migrate from.
 ****************
-\ **rmigrate**\ \ *v1*\ \ *n2*\
+.. code-block:: perl
+
+   rmigrate v1 n2
 zVM specific:
 =============
@@ -19,9 +19,9 @@ SYNOPSIS
 ********
-\ *rmimage [-h | --help]*\
+\ **rmimage [-h | -**\ **-help]**\
-\ *rmimage [-V | --verbose] imagename [--xcatdef]*\
+\ **rmimage [-V | -**\ **-verbose]**\ \ *imagename*\ \ **[-**\ **-xcatdef]**\
 ***********

@@ -37,7 +37,7 @@ to calculate the image root directory; otherwise, this command uses the operatin
 architecture and profile name to calculate the image root directory.
 The osimage definition will not be removed from the xCAT tables by default,
-specifying the flag --xcatdef will remove the osimage definition,
+specifying the flag \ **-**\ **-xcatdef**\ will remove the osimage definition,
 or you can use rmdef -t osimage to remove the osimage definition.
 The statelite image files on the diskful service nodes will not be removed,

@@ -83,11 +83,19 @@ EXAMPLES
 1. To remove a RHEL 7.1 stateless image for a compute node architecture x86_64, enter:
-\ *rmimage rhels7.1-x86_64-netboot-compute*\
+   .. code-block:: perl
+
+      rmimage rhels7.1-x86_64-netboot-compute
 2. To remove a rhels5.5 statelite image for a compute node architecture ppc64 and the osimage definition, enter:
-\ *rmimage rhels5.5-ppc64-statelite-compute --xcatdef*\
+   .. code-block:: perl
+
+      rmimage rhels5.5-ppc64-statelite-compute --xcatdef
 *****
@@ -70,7 +70,7 @@ OPTIONS
-\ **kitlist**\
+\ *kitlist*\
 A comma delimited list of kits that are to be removed from the xCAT cluster. Each entry can be a kitname or kit basename. For kit basename, rmkit command will remove all the kits that have that kit basename.

@@ -94,28 +94,52 @@ EXAMPLES
 1. To remove two kits from tarball files.
-   rmkit kit-test1,kit-test2
+   .. code-block:: perl
+
+      rmkit kit-test1,kit-test2
 Output is similar to:
-   Kit kit-test1-1.0-Linux,kit-test2-1.0-Linux was successfully removed.
+   .. code-block:: perl
+
+      Kit kit-test1-1.0-Linux,kit-test2-1.0-Linux was successfully removed.
 2. To remove two kits from tarball files even the kit components in them are still being used by osimages.
-   rmkit kit-test1,kit-test2 --force
+   .. code-block:: perl
+
+      rmkit kit-test1,kit-test2 --force
 Output is similar to:
-   Kit kit-test1-1.0-Linux,kit-test2-1.0-Linux was successfully removed.
+   .. code-block:: perl
+
+      Kit kit-test1-1.0-Linux,kit-test2-1.0-Linux was successfully removed.
 3. To list kitcomponents in this kit used by osimage
-   rmkit kit-test1,kit-test2 -t
+   .. code-block:: perl
+
+      rmkit kit-test1,kit-test2 -t
 Output is similar to:
-   kit-test1-kitcomp-1.0-Linux is being used by osimage osimage-test
-   Following kitcomponents are in use: kit-test1-kitcomp-1.0-Linux
+   .. code-block:: perl
+
+      kit-test1-kitcomp-1.0-Linux is being used by osimage osimage-test
+      Following kitcomponents are in use: kit-test1-kitcomp-1.0-Linux
 ********

@@ -125,5 +149,3 @@ SEE ALSO
 lskit(1)|lskit.1, addkit(1)|addkit.1, addkitcomp(1)|addkitcomp.1, rmkitcomp(1)|rmkitcomp.1, chkkitcomp(1)|chkkitcomp.1
-~
@@ -82,7 +82,7 @@ OPTIONS
-\ **kitcompname_list**\
+\ *kitcompname_list*\
 A comma-delimited list of valid full kit component names or kit component basenames that are to be removed from the osimage. If a basename is specified, all kitcomponents matching that basename will be removed from the osimage.

@@ -106,27 +106,51 @@ EXAMPLES
 1. To remove a kit component from osimage
-   rmkitcomp -i rhels6.2-ppc64-netboot-compute comp-test1-1.0-1-rhels-6.2-ppc64
+   .. code-block:: perl
+
+      rmkitcomp -i rhels6.2-ppc64-netboot-compute comp-test1-1.0-1-rhels-6.2-ppc64
 Output is similar to:
-   kitcomponents comp-test1-1.0-1-rhels-6.2-ppc64 were removed from osimage rhels6.2-ppc64-netboot-compute successfully
+   .. code-block:: perl
+
+      kitcomponents comp-test1-1.0-1-rhels-6.2-ppc64 were removed from osimage rhels6.2-ppc64-netboot-compute successfully
 2. To remove a kit component even it is still used as a dependency of other kit component.
-   rmkitcomp -f -i rhels6.2-ppc64-netboot-compute comp-test1-1.0-1-rhels-6.2-ppc64
+   .. code-block:: perl
+
+      rmkitcomp -f -i rhels6.2-ppc64-netboot-compute comp-test1-1.0-1-rhels-6.2-ppc64
 Output is similar to:
-   kitcomponents comp-test1-1.0-1-rhels-6.2-ppc64 were removed from osimage rhels6.2-ppc64-netboot-compute successfully
+   .. code-block:: perl
+
+      kitcomponents comp-test1-1.0-1-rhels-6.2-ppc64 were removed from osimage rhels6.2-ppc64-netboot-compute successfully
 3. To remove a kit component from osimage and also remove the kit component meta RPM and package RPM. So in next genimage for statelss image and updatenode for stateful nodes, the kit component meta RPM and package RPM will be uninstalled.
-   rmkitcomp -u -i rhels6.2-ppc64-netboot-compute comp-test1-1.0-1-rhels-6.2-ppc64
+   .. code-block:: perl
+
+      rmkitcomp -u -i rhels6.2-ppc64-netboot-compute comp-test1-1.0-1-rhels-6.2-ppc64
 Output is similar to:
-   kitcomponents comp-test1-1.0-1-rhels-6.2-ppc64 were removed from osimage rhels6.2-ppc64-netboot-compute successfully
+   .. code-block:: perl
+
+      kitcomponents comp-test1-1.0-1-rhels-6.2-ppc64 were removed from osimage rhels6.2-ppc64-netboot-compute successfully
 ********
@@ -21,7 +21,7 @@ SYNOPSIS
 \ **rmnimimage [-h|-**\ **-help]**\
-\ **rmnimimage [-V|-**\ **-verbose] [-f|-**\ **-force] [-d|-**\ **-delete] [-x|-**\ **-xcatdef] [-M|-**\ **-managementnode] [-s servicenoderange] osimage_name**\
+\ **rmnimimage [-V|-**\ **-verbose] [-f|-**\ **-force] [-d|-**\ **-delete] [-x|-**\ **-xcatdef] [-M|-**\ **-managementnode] [-s**\ \ *servicenoderange*\ ] \ *osimage_name*\
 ***********

@@ -82,13 +82,13 @@ OPTIONS
-\ **-s servicenoderange**\
+\ **-s**\ \ *servicenoderange*\
 Remove the NIM resources on these xCAT service nodes only. Do not remove the NIM resources from the xCAT management node.
-\ **osimage_name**\
+\ *osimage_name*\
 The name of the xCAT osimage definition.

@@ -113,16 +113,12 @@ RETURN VALUE
-0
-  The command completed successfully.
+0 The command completed successfully.
-1
-  An error has occurred.
+1 An error has occurred.

@@ -133,27 +129,47 @@ EXAMPLES
 1) Remove all NIM resources specified in the xCAT "61image" definition.
-\ **rmnimimage 61image**\
+   .. code-block:: perl
+
+      rmnimimage 61image
 The "nim -o remove" operation will be used to remove the NIM resource definitions on the management node as well as any service nodes where the resource has been replicated. This NIM operation does not completely remove all files and directories associated with the NIM resources.
 2) Remove all the NIM resources specified by the xCAT "61rte" osimage definition. Delete ALL files and directories associated with the NIM resources. This will also remove the lpp_source resource.
-\ **rmnimimage -d 61rte**\
+   .. code-block:: perl
+
+      rmnimimage -d 61rte
 3) Remove all the NIM resources specified by the xCAT "614img" osimage definition and also remove the xCAT definition.
-\ **rmnimimage -x -d 614img**\
+   .. code-block:: perl
+
+      rmnimimage -x -d 614img
 Note: When this command completes all definitions and files will be completely erased, so use with caution!
 4) Remove the NIM resources specified in the "614dskls" osimage definition on the xcatsn1 and xcatsn2 service nodes. Delete all files or directories associated with the NIM resources.
-\ **rmnimimage -d -s xcatsn1,xcatsn2 614dskls**\
+   .. code-block:: perl
+
+      rmnimimage -d -s xcatsn1,xcatsn2 614dskls
 5) Remove the NIM resources specified in the "614old" osimage definition on the xCAT management node only.
-\ **rmnimimage -M -d 614old**\
+   .. code-block:: perl
+
+      rmnimimage -M -d 614old
 *****
@ -19,24 +19,24 @@ SYNOPSIS
|
||||
********
|
||||
|
||||
|
||||
\ *rmvm [-h| --help]*\
|
||||
\ **rmvm [-h| -**\ **-help]**\
|
||||
|
||||
\ *rmvm [-v| --version]*\
|
||||
\ **rmvm [-v| -**\ **-version]**\
|
||||
|
||||
\ *rmvm [-V| --verbose] noderange [-r] [--service]*\
|
||||
\ **rmvm [-V| -**\ **-verbose]**\ \ *noderange*\ \ **[-r] [-**\ **-service]**\
|
||||
|
||||
For KVM and Vmware:
|
||||
===================
|
||||
|
||||
|
||||
\ *rmvm [-p] [-f]*\
|
||||
\ **rmvm [-p] [-f]**\
|
||||
|
||||
|
||||
PPC (using Direct FSP Management) specific:
|
||||
===========================================
|
||||
|
||||
|
||||
\ *rmvm noderange*\
|
||||
\ **rmvm**\ \ *noderange*\
|
||||
|
||||
|
||||
|
||||
@ -55,11 +55,11 @@ OPTIONS
|
||||
*******
|
||||
|
||||
|
||||
\ **-h**\ Display usage message.
|
||||
\ **-h|-**\ **-help**\ Display usage message.
|
||||
|
||||
\ **-v**\ Command Version.
|
||||
\ **-v|-**\ **-version**\ Command Version.
|
||||
|
||||
\ **-V**\ Verbose output.
|
||||
\ **-V|-**\ **-verbose**\ Verbose output.
|
||||
|
||||
\ **-r**\ Retain the data object definitions of the nodes.
|
||||
|
||||
@ -89,15 +89,27 @@ EXAMPLES
|
||||
|
||||
1. To remove the HMC-managed partition lpar3, enter:
|
||||
|
||||
\ *rmvm lpar3*\
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
rmvm lpar3
|
||||
|
||||
|
||||
Output is similar to:
|
||||
|
||||
lpar3: Success
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
lpar3: Success
|
||||
|
||||
|
||||
2. To remove all the HMC-managed partitions associated with CEC cec01, enter:
|
||||
|
||||
\ *rmvm cec01*\
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
rmvm cec01
|
||||
|
||||
|
||||
Output is similar to:
|
||||
|
||||
@ -111,7 +123,11 @@ Output is similar to:
|
||||
|
||||
3. To remove the HMC-managed service partitions of the specified CEC cec01 and cec02, enter:
|
||||
|
||||
\ *rmvm cec01,cec02 --service*\
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
rmvm cec01,cec02 --service
|
||||
|
||||
|
||||
Output is similar to:
|
||||
|
||||
@ -124,15 +140,27 @@ Output is similar to:
|
||||
|
||||
4. To remove the HMC-managed partition lpar1, but retain its definition, enter:
|
||||
|
||||
\ *rmvm lpar1 -r*\
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
rmvm lpar1 -r
|
||||
|
||||
|
||||
Output is similar to:
|
||||
|
||||
lpar1: Success
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
lpar1: Success
|
||||
|
||||
|
||||
5. To remove a zVM virtual machine:
|
||||
|
||||
\ *rmvm gpok4*\
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
rmvm gpok4
|
||||
|
||||
|
||||
Output is similar to:
|
||||
|
||||
@ -144,7 +172,11 @@ Output is similar to:
|
||||
|
||||
6. To remove a DFM-managed partition on normal power machine:
|
||||
|
||||
\ *rmvm lpar1*\
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
rmvm lpar1
|
||||
|
||||
|
||||
Output is similar to:
|
||||
|
||||
|
@ -19,7 +19,7 @@ rmzone.1
|
||||
****************
|
||||
|
||||
|
||||
\ **rmzone**\ <zonename> [\ **-g**\ ] [\ **-f**\ ]
|
||||
\ **rmzone**\ \ *zonename*\ [\ **-g**\ ] [\ **-f**\ ]
|
||||
|
||||
\ **rmzone**\ [\ **-h**\ | \ **-v**\ ]
|
||||
|
||||
@ -81,27 +81,37 @@ Note: if any zones in the zone table, there must be one and only one defaultzone
|
||||
|
||||
|
||||
|
||||
\*
|
||||
1. To remove zone1 from the zone table and the zonename attribute on all its assigned nodes, enter:
|
||||
|
||||
To remove zone1 from the zone table and the zonename attribute on all it's assigned nodes , enter:
|
||||
|
||||
\ **rmzone**\ \ *zone1*\
|
||||
.. code-block:: perl
|
||||
|
||||
rmzone zone1
|
||||
|
||||
|
||||
|
||||
|
||||
\*
|
||||
2.
|
||||
|
||||
To remove zone2 from the zone table, the zone2 zonename attribute, and the zone2 group assigned to all nodes that were in zone2, enter:
|
||||
|
||||
\ **rmzone**\ \ *zone2*\ -g
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
rmzone zone2 -g
|
||||
|
||||
|
||||
|
||||
|
||||
\*
|
||||
3.
|
||||
|
||||
To remove zone3 from the zone table and the zonename attribute on all its assigned nodes, overriding the fact that it is the defaultzone, enter:
|
||||
|
||||
\ **rmzone**\ \ *zone3*\ -g -f
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
rmzone zone3 -g -f
|
||||
|
||||
|
||||
|
||||
|
||||
|
@ -27,7 +27,7 @@ zVM specific:
|
||||
=============
|
||||
|
||||
|
||||
\ **rnetboot**\ noderange [\ **ipl=**\ \ *address*\ ]
|
||||
\ **rnetboot**\ \ *noderange*\ [\ **ipl=**\ \ *address*\ ]
|
||||
|
||||
|
||||
|
||||
@ -72,15 +72,15 @@ specify the number of retries that the monitoring process will perform before de
|
||||
|
||||
Specify the timeout, in minutes, to wait for the expectedstatus specified by the -m flag. This is a required flag if the -m flag is specified.
|
||||
|
||||
\ **-V**\
|
||||
\ **-V|-**\ **-verbose**\
|
||||
|
||||
Verbose output.
|
||||
|
||||
\ **-h**\
|
||||
\ **-h|-**\ **-help**\
|
||||
|
||||
Display usage message.
|
||||
|
||||
\ **-v**\
|
||||
\ **-v|-**\ **-version**\
|
||||
|
||||
Command Version.
|
||||
|
||||
|
@ -79,16 +79,12 @@ RETURN VALUE
|
||||
|
||||
|
||||
|
||||
0
|
||||
|
||||
The command completed successfully.
|
||||
|
||||
0 The command completed successfully.
|
||||
|
||||
|
||||
1
|
||||
|
||||
An error has occurred.
|
||||
|
||||
|
||||
1 An error has occurred.
|
||||
|
||||
|
||||
|
||||
|
||||
@ -119,10 +115,15 @@ FILES
|
||||
|
||||
|
||||
/opt/xcat/bin/rollupdate
|
||||
|
||||
/opt/xcat/share/xcat/rollupdate/rollupdate.input.sample
|
||||
|
||||
/opt/xcat/share/xcat/rollupdate/ll.tmpl
|
||||
|
||||
/opt/xcat/share/xcat/rollupdate/rollupdate_all.input.sample
|
||||
|
||||
/opt/xcat/share/xcat/rollupdate/llall.tmpl
|
||||
|
||||
/var/log/xcat/rollupdate.log
|
||||
|
||||
|
||||
|
@ -367,26 +367,38 @@ EXAMPLES
|
||||
|
||||
|
||||
|
||||
\*
|
||||
1. To display power status of node4 and node5
|
||||
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
rpower node4,node5 stat
|
||||
|
||||
|
||||
|
||||
Output is similar to:
|
||||
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
node4: on
|
||||
node5: off
|
||||
|
||||
|
||||
|
||||
|
||||
\*
|
||||
2. To power on node5
|
||||
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
rpower node5 on
|
||||
|
||||
|
||||
|
||||
Output is similar to:
|
||||
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
node5: on
|
||||
|
||||
|
||||
|
@ -19,11 +19,11 @@ SYNOPSIS
|
||||
********
|
||||
|
||||
|
||||
\ *rscan [-h|--help]*\
|
||||
\ **rscan [-h|-**\ **-help]**\
|
||||
|
||||
\ *rscan [-v|--version]*\
|
||||
\ **rscan [-v|-**\ **-version]**\
|
||||
|
||||
\ *rscan [-V|--verbose] noderange [-u][-w][-x|-z]*\
|
||||
\ **rscan [-V|-**\ **-verbose]**\ \ *noderange*\ \ **[-u][-w][-x|-z]**\
|
||||
|
||||
|
||||
***********
|
||||
@ -45,11 +45,11 @@ OPTIONS
|
||||
*******
|
||||
|
||||
|
||||
\ **-h**\ Display usage message.
|
||||
\ **-h|-**\ **-help**\ Display usage message.
|
||||
|
||||
\ **-v**\ Command Version.
|
||||
\ **-v|-**\ **-version**\ Command Version.
|
||||
|
||||
\ **-V**\ Verbose output.
|
||||
\ **-V|-**\ **-verbose**\ Verbose output.
|
||||
|
||||
\ **-u**\ Updates and then prints out node definitions in the xCAT database for CEC/BPA. It updates the existing nodes that contain the same mtms and serial number for nodes managed by the specified hardware control point. This primarily works with CEC/FSP and frame/BPA nodes when the node name is not the same as the managed system name on the hardware control point (HMC). This flag will update the BPA/FSP node name definitions to be listed as the managed system name in the xCAT database.
|
||||
|
||||
@ -75,13 +75,9 @@ RETURN VALUE
|
||||
************
|
||||
|
||||
|
||||
0 The command completed successfully.
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
0 The command completed successfully.
|
||||
|
||||
1 An error has occurred.
|
||||
|
||||
1 An error has occurred.
|
||||
|
||||
|
||||
********
|
||||
@ -91,7 +87,11 @@ EXAMPLES
|
||||
|
||||
1. To list all nodes managed by HMC hmc01 in tabular format, enter:
|
||||
|
||||
\ *rscan hmc01*\
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
rscan hmc01
|
||||
|
||||
|
||||
Output is similar to:
|
||||
|
||||
@ -110,7 +110,11 @@ Output is similar to:
|
||||
|
||||
2. To list all nodes managed by IVM ivm02 in XML format and write the output to the xCAT database, enter:
|
||||
|
||||
\ *rscan ivm02 -x -w*\
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
rscan ivm02 -x -w
|
||||
|
||||
|
||||
Output is similar to:
|
||||
|
||||
@ -162,7 +166,11 @@ Output is similar to:
|
||||
|
||||
3. To list all nodes managed by HMC hmc02 in stanza format and write the output to the xCAT database, enter:
|
||||
|
||||
\ *rscan hmc02 -z -w*\
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
rscan hmc02 -z -w
|
||||
|
||||
|
||||
Output is similar to:
|
||||
|
||||
@ -224,7 +232,11 @@ Output is similar to:
|
||||
|
||||
4. To update definitions of nodes which are managed by hmc03, enter:
|
||||
|
||||
\ *rscan hmc03 -u*\
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
rscan hmc03 -u
|
||||
|
||||
|
||||
Output is similar to:
|
||||
|
||||
@ -238,7 +250,11 @@ Output is similar to:
|
||||
|
||||
5. To collect the node information from one or more hardware control points on zVM AND populate the database with details collected by rscan:
|
||||
|
||||
\ *rscan gpok2 -W*\
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
rscan gpok2 -w
|
||||
|
||||
|
||||
Output is similar to:
|
||||
|
||||
@ -259,7 +275,11 @@ Output is similar to:
|
||||
|
||||
6. To scan the Flex system cluster:
|
||||
|
||||
\ *rscan cmm01*\
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
rscan cmm01
|
||||
|
||||
|
||||
Output is similar to:
|
||||
|
||||
@ -275,7 +295,11 @@ Output is similar to:
|
||||
|
||||
7. To update the Flex system cluster:
|
||||
|
||||
\ *rscan cmm01 -u*\
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
rscan cmm01 -u
|
||||
|
||||
|
||||
Output is similar to:
|
||||
|
||||
@ -289,7 +313,11 @@ Output is similar to:
|
||||
|
||||
8. To scan the KVM host "hyp01", list all the KVM guest information on the KVM host in stanza format and write the KVM guest information into xCAT database:
|
||||
|
||||
\ *rscan hyp01 -z -w*\
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
rscan hyp01 -z -w
|
||||
|
||||
|
||||
Output is similar to:
|
||||
|
||||
@ -320,7 +348,11 @@ Output is similar to:
|
||||
|
||||
9. To update definitions of KVM guests which are managed by the hypervisor hyp01, enter:
|
||||
|
||||
\ *rscan hyp01 -u*\
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
rscan hyp01 -u
|
||||
|
||||
|
||||
Output is similar to:
|
||||
|
||||
|
@ -31,7 +31,7 @@ BMC/MPA specific:
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ \ **snmpdest**\ =\ *snmpmanager-IP*\
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ \ **community**\ ={\ **public**\ |\ *string*\ }
|
||||
\ **rspconfig**\ \ *noderange*\ \ **community**\ ={\ **public**\ | \ *string*\ }
|
||||
|
||||
|
||||
BMC specific:
|
||||
@ -40,7 +40,7 @@ BMC specific:
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ {\ **ip | netmask | gateway | backupgateway | garp**\ }
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ \ **garp**\ ={\ *time*\ }
|
||||
\ **rspconfig**\ \ *noderange*\ \ **garp**\ =\ *time*\
|
||||
|
||||
|
||||
MPA specific:
|
||||
@ -61,15 +61,15 @@ MPA specific:
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ \ **pd2**\ ={\ **nonred | redwoperf | redwperf**\ }
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ \ **network**\ ={[\ **ip**\ ],[\ **host**\ ],[\ **gateway**\ ],[\ **netmask**\ ]|\ **\\***\ }
|
||||
\ **rspconfig**\ \ *noderange*\ \ **network**\ ={[\ *ip*\ ],[\ *host*\ ],[\ *gateway*\ ],[\ *netmask*\ ]|\*}
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ \ **initnetwork**\ ={[\ **ip**\ ],[\ **host**\ ],[\ **gateway**\ ],[\ **netmask**\ ]|\ **\\***\ }
|
||||
\ **rspconfig**\ \ *noderange*\ \ **initnetwork**\ ={[\ *ip*\ ],[\ *host*\ ],[\ *gateway*\ ],[\ *netmask*\ ]|\*}
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ \ **textid**\ ={\ **\\*|textid**\ }
|
||||
\ **rspconfig**\ \ *noderange*\ \ **textid**\ ={\* | \ *textid*\ }
|
||||
|
||||
\ **rspconfig**\ \ *singlenode*\ \ **frame**\ ={\ **frame_number**\ }
|
||||
\ **rspconfig**\ \ *singlenode*\ \ **frame**\ ={\ *frame_number*\ }
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ \ **frame**\ ={\ **\\***\ }
|
||||
\ **rspconfig**\ \ *noderange*\ \ **frame**\ ={\*}
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ \ **swnet**\ ={[\ **ip**\ ],[\ **gateway**\ ],[\ **netmask**\ ]}
|
||||
|
||||
@ -90,33 +90,33 @@ FSP/CEC specific:
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ \ **celogin1**\ ={\ **enable | disable**\ }
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ \ **time**\ ={\ **hh:mm:ss**\ }
|
||||
\ **rspconfig**\ \ *noderange*\ \ **time**\ =\ *hh:mm:ss*\
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ \ **date**\ ={\ **mm:dd:yyyy**\ }
|
||||
\ **rspconfig**\ \ *noderange*\ \ **date**\ =\ *mm:dd:yyyy*\
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ \ **decfg**\ ={\ **enable|disable**\ :\ **policyname,...**\ }
|
||||
\ **rspconfig**\ \ *noderange*\ \ **decfg**\ ={\ **enable|disable**\ :\ *policyname,...*\ }
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ \ **procdecfg**\ ={\ **configure|deconfigure**\ :\ **processingunit**\ :\ **id,...**\ }
|
||||
\ **rspconfig**\ \ *noderange*\ \ **procdecfg**\ ={\ **configure|deconfigure**\ :\ *processingunit*\ :\ *id,...*\ }
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ \ **memdecfg**\ ={\ **configure|deconfigure**\ :\ **processingunit**\ :\ **unit|bank**\ :\ **id,...**\ >}
|
||||
\ **rspconfig**\ \ *noderange*\ \ **memdecfg**\ ={\ **configure|deconfigure**\ :\ *processingunit*\ :\ **unit|bank**\ :\ *id,...*\ >}
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ \ **network**\ ={\ **nic,\\***\ }
|
||||
\ **rspconfig**\ \ *noderange*\ \ **network**\ ={\ **nic,**\ \*}
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ \ **network**\ ={\ **nic,[IP,][hostname,][gateway,][netmask]**\ }
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ \ **network**\ ={\ **nic,0.0.0.0**\ }
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ \ **HMC_passwd**\ ={\ **currentpasswd,newpasswd**\ }
|
||||
\ **rspconfig**\ \ *noderange*\ \ **HMC_passwd**\ ={\ *currentpasswd,newpasswd*\ }
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ \ **admin_passwd**\ ={\ **currentpasswd,newpasswd**\ }
|
||||
\ **rspconfig**\ \ *noderange*\ \ **admin_passwd**\ ={\ *currentpasswd,newpasswd*\ }
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ \ **general_passwd**\ ={\ **currentpasswd,newpasswd**\ }
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ \ **\\*_passwd**\ ={\ **currentpasswd,newpasswd**\ }
|
||||
\ **rspconfig**\ \ *noderange*\ \*\ **_passwd**\ ={\ **currentpasswd,newpasswd**\ }
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ {\ **hostname**\ }
|
||||
\ **rspconfig**\ \ *noderange*\ {\ *hostname*\ }
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ \ **hostname**\ ={\ **\\*|name**\ }
|
||||
\ **rspconfig**\ \ *noderange*\ \ **hostname**\ ={\* | \ *name*\ }
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ \ **-**\ **-resetnet**\
|
||||
|
||||
@ -129,11 +129,11 @@ Flex system Specific:
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ \ **snmpcfg**\ ={\ **enable | disable**\ }
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ \ **network**\ ={[\ **ip**\ ],[\ **host**\ ],[\ **gateway**\ ],[\ **netmask**\ ]|\ **\\***\ }
|
||||
\ **rspconfig**\ \ *noderange*\ \ **network**\ ={[\ **ip**\ ],[\ **host**\ ],[\ **gateway**\ ],[\ **netmask**\ ] | \*}
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ \ **solcfg**\ ={\ **enable | disable**\ }
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ \ **textid**\ ={\ **\\*|textid**\ }
|
||||
\ **rspconfig**\ \ *noderange*\ \ **textid**\ ={\* | \ *textid*\ }
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ \ **cec_off_policy**\ ={\ **poweroff | stayon**\ }
|
||||
|
||||
@ -144,7 +144,7 @@ BPA/Frame Specific:
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ {\ **network | dev | celogin1**\ }
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ \ **network**\ ={\ **nic,\\***\ }
|
||||
\ **rspconfig**\ \ *noderange*\ \ **network**\ ={\ **nic,**\ \*}
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ \ **network**\ ={\ **nic,[IP,][hostname,][gateway,][netmask]**\ }
|
||||
|
||||
@ -154,17 +154,17 @@ BPA/Frame Specific:
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ \ **celogin1**\ ={\ **enable | disable**\ }
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ \ **HMC_passwd**\ ={\ **currentpasswd,newpasswd**\ }
|
||||
\ **rspconfig**\ \ *noderange*\ \ **HMC_passwd**\ ={\ *currentpasswd,newpasswd*\ }
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ \ **admin_passwd**\ ={\ **currentpasswd,newpasswd**\ }
|
||||
\ **rspconfig**\ \ *noderange*\ \ **admin_passwd**\ ={\ *currentpasswd,newpasswd*\ }
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ \ **general_passwd**\ ={\ **currentpasswd,newpasswd**\ }
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ \ **\\*_passwd**\ ={\ **currentpasswd,newpasswd**\ }
|
||||
\ **rspconfig**\ \ *noderange*\ \*\ **_passwd**\ ={\ **currentpasswd,newpasswd**\ }
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ {\ **hostname**\ }
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ \ **hostname**\ ={\ **\\*|name**\ }
|
||||
\ **rspconfig**\ \ *noderange*\ \ **hostname**\ ={\* | \ *name*\ }
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ \ **-**\ **-resetnet**\
|
||||
|
||||
@ -173,17 +173,17 @@ FSP/CEC (using Direct FSP Management) Specific:
|
||||
===============================================
|
||||
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ \ **HMC_passwd**\ ={\ **currentpasswd,newpasswd**\ }
|
||||
\ **rspconfig**\ \ *noderange*\ \ **HMC_passwd**\ ={\ *currentpasswd,newpasswd*\ }
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ \ **admin_passwd**\ ={\ **currentpasswd,newpasswd**\ }
|
||||
\ **rspconfig**\ \ *noderange*\ \ **admin_passwd**\ ={\ *currentpasswd,newpasswd*\ }
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ \ **general_passwd**\ ={\ **currentpasswd,newpasswd**\ }
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ \ **\\*_passwd**\ ={\ **currentpasswd,newpasswd**\ }
|
||||
\ **rspconfig**\ \ *noderange*\ \*\ **_passwd**\ ={\ **currentpasswd,newpasswd**\ }
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ {\ **sysname**\ }
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ \ **sysname**\ ={\ **\\* | name**\ }
|
||||
\ **rspconfig**\ \ *noderange*\ \ **sysname**\ ={\* | \ *name*\ }
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ {\ **pending_power_on_side**\ }
|
||||
|
||||
@ -197,7 +197,7 @@ FSP/CEC (using Direct FSP Management) Specific:
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ {\ **huge_page**\ }
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ \ **huge_page**\ ={\ **NUM**\ }
|
||||
\ **rspconfig**\ \ *noderange*\ \ **huge_page**\ ={\ *NUM*\ }
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ {\ **setup_failover**\ }
|
||||
|
||||
@ -212,21 +212,25 @@ BPA/Frame (using Direct FSP Management) Specific:
|
||||
=================================================
|
||||
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ \ **HMC_passwd**\ ={\ **currentpasswd,newpasswd**\ }
|
||||
\ **rspconfig**\ \ *noderange*\ \ **HMC_passwd**\ ={\ *currentpasswd,newpasswd*\ }
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ \ **admin_passwd**\ ={\ **currentpasswd,newpasswd**\ }
|
||||
\ **rspconfig**\ \ *noderange*\ \ **admin_passwd**\ ={\ *currentpasswd,newpasswd*\ }
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ \ **general_passwd**\ ={\ **currentpasswd,newpasswd**\ }
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ \ **\\*_passwd**\ ={\ **currentpasswd,newpasswd**\ }
|
||||
\ **rspconfig**\ \ *noderange*\ \*\ **_passwd**\ ={\ **currentpasswd,newpasswd**\ }
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ {\ **frame**\ }
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ \ **frame**\ ={\ **\\*|frame_number**\ }
|
||||
\ **rspconfig**\ \ *noderange*\ \ **frame**\ ={\* | \ *frame_number*\ }
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ {\ **sysname**\ }
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ \ **sysname**\ ={\ **\\* | name**\ }
|
||||
\ **rspconfig**\ \ *noderange*\ \ **sysname**\ ={\* | \ *name*\ }
|
||||
|
||||
\ **rspconfig**\ \ *noderange*\ {\ **pending_power_on_side**\ }
|
||||
|
||||
@ -264,13 +268,13 @@ OPTIONS
|
||||
|
||||
|
||||
|
||||
\ **alert**\ ={\ *on*\ |\ *enable*\ |\ *off*\ |\ *disable*\ }
|
||||
\ **alert={on | enable | off | disable}**\
|
||||
|
||||
Turn on or off SNMP alerts.
|
||||
|
||||
|
||||
|
||||
\ **autopower**\ ={\ *enable*\ |\ *disable*\ }
|
||||
\ **autopower**\ ={\ *enable*\ | \ *disable*\ }
|
||||
|
||||
Select the policy for auto power restart. If enabled, the system will boot automatically once power is restored after a power disturbance.
|
||||
|
||||
@ -282,25 +286,25 @@ OPTIONS
|
||||
|
||||
|
||||
|
||||
\ **community**\ ={\ **public**\ |\ *string*\ }
|
||||
\ **community**\ ={\ **public**\ | \ *string*\ }
|
||||
|
||||
Get or set the SNMP commmunity value. The default is \ *public*\ .
|
||||
Get or set the SNMP community value. The default is \ **public**\ .
|
||||
|
||||
|
||||
|
||||
\ **date**\ ={\ *mm:dd:yyy*\ }
|
||||
\ **date**\ =\ *mm:dd:yyyy*\ 
|
||||
|
||||
Enter the current date.
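As a sketch, assuming a hypothetical CEC node cec01 (reused from other examples on this page) and the mm:dd:yyyy and hh:mm:ss formats stated above:

.. code-block:: perl

   rspconfig cec01 date=02:21:2016
   rspconfig cec01 time=15:30:00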
|
||||
|
||||
|
||||
|
||||
\ **decfg**\ ={\ *enable|disable*\ :\ *policyname,...*\ }
|
||||
\ **decfg**\ ={\ **enable | disable**\ :\ *policyname,...*\ }
|
||||
|
||||
Enables or disables deconfiguration policies.
|
||||
|
||||
|
||||
|
||||
\ **frame**\ ={\ **framenumber**\ |\ *\\**\ }
|
||||
\ **frame**\ ={\ *framenumber*\ | \*}
|
||||
|
||||
Set or get frame number. If neither framenumber nor \* is specified, the framenumber for the nodes will be displayed and updated in the xCAT database. If framenumber is specified, it only supports a single node and the framenumber will be set for that frame. If \* is specified, it supports a noderange and all the frame numbers for the noderange will be read from the xCAT database and set to the frames. Setting the frame number is a disruptive command which requires all CECs to be powered off prior to issuing the command.
|
||||
|
||||
@ -312,25 +316,25 @@ OPTIONS
|
||||
|
||||
|
||||
|
||||
\ **HMC_passwd**\ ={\ **currentpasswd,newpasswd**\ }
|
||||
\ **HMC_passwd**\ ={\ *currentpasswd,newpasswd*\ }
|
||||
|
||||
Change the password of the userid \ **HMC**\ for CEC/Frame. If the CEC/Frame is the factory default, the currentpasswd should NOT be specified; otherwise, the currentpasswd should be specified to the current password of the userid \ **HMC**\ for the CEC/Frame.
|
||||
|
||||
|
||||
|
||||
\ **admin_passwd**\ ={\ **currentpasswd,newpasswd**\ }
|
||||
\ **admin_passwd**\ ={\ *currentpasswd,newpasswd*\ }
|
||||
|
||||
Change the password of the userid \ **admin**\ for CEC/Frame from currentpasswd to newpasswd. If the CEC/Frame is the factory default, the currentpasswd should NOT be specified; otherwise, the currentpasswd should be specified to the current password of the userid \ **admin**\ for the CEC/Frame.
|
||||
|
||||
|
||||
|
||||
\ **general_passwd**\ ={\ **currentpasswd,newpasswd**\ }
|
||||
\ **general_passwd**\ ={\ *currentpasswd,newpasswd*\ }
|
||||
|
||||
Change the password of the userid \ **general**\ for CEC/Frame from currentpasswd to newpasswd. If the CEC/Frame is the factory default, the currentpasswd should NOT be specified; otherwise, the currentpasswd should be specified to the current password of the userid \ **general**\ for the CEC/Frame.
|
||||
|
||||
|
||||
|
||||
\ ** \\*_passwd**\ ={\ **currentpasswd,newpasswd**\ }
|
||||
\*\ **_passwd**\ ={\ *currentpasswd,newpasswd*\ }
|
||||
|
||||
Change the passwords of the userids \ **HMC**\ , \ **admin**\ and \ **general**\ for CEC/Frame from currentpasswd to newpasswd. If the CEC/Frame is the factory default, the currentpasswd should NOT be specified; otherwise, if the current passwords of the userids \ **HMC**\ , \ **admin**\ and \ **general**\ for CEC/Frame are the same one, the currentpasswd should be specified to the current password, and then the password will be changed to the newpasswd. If the CEC/Frame is NOT the factory default, and the current passwords of the userids \ **HMC**\ , \ **admin**\ and \ **general**\ for CEC/Frame are NOT the same one, this option could NOT be used, and we should change the password one by one.
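A hedged sketch, assuming the three userids currently share the password abc123 and should all be changed to abc456 (the node name fsp is reused from the examples below; the asterisk is quoted so the shell does not expand it):

.. code-block:: perl

   rspconfig fsp '*_passwd=abc123,abc456'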
|
||||
|
||||
@ -372,7 +376,7 @@ OPTIONS
|
||||
|
||||
|
||||
|
||||
\ **setup_failover**\ ={\ *enable*\ |\ *disable*\ }
|
||||
\ **setup_failover**\ ={\ **enable**\ | \ **disable**\ }
|
||||
|
||||
Enable or disable the service processor failover function of a CEC or display status of this function.
|
||||
|
||||
@ -384,25 +388,25 @@ OPTIONS
|
||||
|
||||
|
||||
|
||||
\ **hostname**\ ={\ *\\*|name*\ }
|
||||
\ **hostname**\ ={\* | \ *name*\ }
|
||||
|
||||
Set CEC/BPA system names to the names in xCAT DB or the input name.
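A brief sketch (cec01 is a placeholder node name; per the synopsis, \* tells rspconfig to use the name already defined in the xCAT DB):

.. code-block:: perl

   rspconfig cec01 hostname=*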
|
||||
|
||||
|
||||
|
||||
\ **iocap**\ ={\ *enable*\ |\ *disable*\ }
|
||||
\ **iocap**\ ={\ **enable**\ | \ **disable**\ }
|
||||
|
||||
Select the policy for I/O Adapter Enlarged Capacity. This option controls the size of PCI memory space allocated to each PCI slot.
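A minimal sketch, assuming a hypothetical CEC node cec01:

.. code-block:: perl

   rspconfig cec01 iocap=enable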
|
||||
|
||||
|
||||
|
||||
\ **dev**\ ={\ *enable*\ |\ *disable*\ }
|
||||
\ **dev**\ ={\ **enable**\ | \ **disable**\ }
|
||||
|
||||
Enable or disable the CEC|Frame 'dev' account or display account status if no value specified.
|
||||
|
||||
|
||||
|
||||
\ **celogin1**\ ={\ *enable*\ |\ *disable*\ }
|
||||
\ **celogin1**\ ={\ **enable**\ | \ **disable**\ }
|
||||
|
||||
Enable or disable the CEC|Frame 'celogin1' account or display account status if no value specified.
|
||||
|
||||
@ -414,7 +418,7 @@ OPTIONS
|
||||
|
||||
|
||||
|
||||
\ **memdecfg**\ ={\ *configure|deconfigure*\ :\ *processingunit*\ :\ *unit|bank*\ :\ *id,...*\ }
|
||||
\ **memdecfg**\ ={\ **configure | deconfigure**\ :\ *processingunit*\ :\ *unit|bank*\ :\ *id,...*\ }
|
||||
|
||||
Select whether each memory bank should be enabled or disabled. State changes take effect on the next platform boot.
|
||||
|
||||
@ -485,7 +489,7 @@ OPTIONS
|
||||
|
||||
|
||||
|
||||
\ **procdecfg**\ ={\ *configure|deconfigure*\ :\ *processingunit*\ :\ *id,...*\ }
|
||||
\ **procdecfg**\ ={\ **configure|deconfigure**\ :\ *processingunit*\ :\ *id,...*\ }
|
||||
|
||||
Selects whether each processor should be enabled or disabled. State changes take effect on the next platform boot.
|
||||
|
||||
@ -503,7 +507,7 @@ OPTIONS
|
||||
|
||||
|
||||
|
||||
\ **snmpcfg**\ ={\ *enable*\ |\ *disable*\ }
|
||||
\ **snmpcfg**\ ={\ **enable | disable**\ }
|
||||
|
||||
Enable or disable SNMP on MPA.
|
||||
|
||||
@ -515,7 +519,7 @@ OPTIONS
|
||||
|
||||
|
||||
|
||||
\ **solcfg**\ ={\ *enable*\ |\ *disable*\ }
|
||||
\ **solcfg**\ ={\ **enable | disable**\ }
|
||||
|
||||
Enable or disable SOL on the MPA (or CMM) and the blade servers belonging to it.
|
||||
|
||||
@ -527,7 +531,7 @@ OPTIONS
|
||||
|
||||
|
||||
|
||||
\ **sshcfg**\ ={\ *enable*\ |\ *disable*\ }
|
||||
\ **sshcfg**\ ={\ **enable | disable**\ }
|
||||
|
||||
Enable or disable SSH on MPA.
|
||||
|
||||
@ -551,13 +555,13 @@ OPTIONS
|
||||
|
||||
|
||||
|
||||
\ **pending_power_on_side**\ ={\ *temp|perm*\ }
|
||||
\ **pending_power_on_side**\ ={\ **temp|perm**\ }
|
||||
|
||||
List or set pending power on side for CEC or Frame. If no pending_power_on_side value specified, the pending power on side for the CECs or frames will be displayed. If specified, the pending_power_on_side value will be set to CEC's FSPs or Frame's BPAs. The value 'temp' means T-side or temporary side. The value 'perm' means P-side or permanent side.
|
||||
|
||||
|
||||
|
||||
\ **time**\ ={\ *hh:mm:ss*\ }
|
||||
\ **time**\ =\ *hh:mm:ss*\
|
||||
|
||||
Enter the current time in UTC (Coordinated Universal Time) format.
|
||||
|
||||
@ -569,11 +573,11 @@ OPTIONS
|
||||
|
||||
|
||||
|
||||
\ **USERID**\ ={\ *newpasswd*\ } \ **updateBMC**\ ={\ *y|n*\ }
|
||||
\ **USERID**\ ={\ *newpasswd*\ } \ **updateBMC**\ ={\ **y|n**\ }
|
||||
|
||||
Change the password of the userid \ **USERID**\ for the CMM in a Flex system cluster. The option \ *updateBMC*\ can be used to specify whether to update the password of the BMCs connected to the specified CMM. The value is 'y' by default, which means that whenever the password of the CMM is updated, the password of the BMCs will also be updated. Note that several seconds are needed before this command completes.
|
||||
|
||||
If value \ **\\***\ is specified for USERID and the object node is \ *Flex System X node*\ , the password used to access the BMC of the System X node through IPMI will be updated as the same password of the userid \ **USERID**\ of the CMM in the same cluster.
|
||||
If value "\*" is specified for USERID and the object node is \ *Flex System X node*\ , the password used to access the BMC of the System X node through IPMI will be updated as the same password of the userid \ **USERID**\ of the CMM in the same cluster.
|
||||
|
||||
|
||||
|
||||
@ -595,7 +599,11 @@ OPTIONS
|
||||
|
||||
|
||||
|
||||
\ **-v**\ , \ **-**\ **-version**\ 
|
||||
\ **-v**\ | \ **-**\ **-version**\ 
|
||||
|
||||
Display the version number.
|
||||
|
||||
@ -608,19 +616,25 @@ EXAMPLES
|
||||
|
||||
|
||||
|
||||
\*
|
||||
1. To set up new SSH keys on the Management Module mm:
|
||||
|
||||
To setup new ssh keys on the Management Module mm:
|
||||
|
||||
\ **rspconfig**\ mm snmpcfg=enable sshcfg=enable
|
||||
.. code-block:: perl
|
||||
|
||||
rspconfig mm snmpcfg=enable sshcfg=enable
|
||||
|
||||
|
||||
|
||||
|
||||
\*
|
||||
2. To turn on SNMP alerts for node5:
|
||||
|
||||
To turn on SNMP alerts for node5:
|
||||
|
||||
\ **rspconfig**\ \ *node5*\ \ **alert**\ =\ **on**\
|
||||
.. code-block:: perl
|
||||
|
||||
rspconfig node5 alert=on
|
||||
|
||||
|
||||
Output is similar to:
|
||||
|
||||
|
||||
.. code-block:: perl
|
||||
@ -630,11 +644,15 @@ EXAMPLES
|
||||
|
||||
|
||||
|
||||
\*
|
||||
3. To display the destination setting for SNMP alerts for node4:
|
||||
|
||||
To display the destination setting for SNMP alerts for node4:
|
||||
|
||||
\ **rspconfig**\ \ *node4 snmpdest*\
|
||||
.. code-block:: perl
|
||||
|
||||
rspconfig node4 snmpdest
|
||||
|
||||
|
||||
Output is similar to:
|
||||
|
||||
|
||||
.. code-block:: perl
|
||||
@ -644,11 +662,17 @@ EXAMPLES
|
||||
|
||||
|
||||
|
||||
\*
|
||||
4.
|
||||
|
||||
To display the frame number for frame 9A00-10000001
|
||||
|
||||
\ **rspconfig**\ \ *9A00-10000001 frame*\
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
rspconfig 9A00-10000001 frame
|
||||
|
||||
|
||||
Output is similar to:
|
||||
|
||||
|
||||
.. code-block:: perl
|
||||
@ -658,11 +682,15 @@ EXAMPLES
|
||||
|
||||
|
||||
|
||||
\*
|
||||
5. To set the frame number for frame 9A00-10000001
|
||||
|
||||
To set the frame number for frame 9A00-10000001
|
||||
|
||||
\ **rspconfig**\ \ *9A00-10000001 frame=2*\
|
||||
.. code-block:: perl
|
||||
|
||||
rspconfig 9A00-10000001 frame=2
|
||||
|
||||
|
||||
Output is similar to:
|
||||
|
||||
|
||||
.. code-block:: perl
|
||||
@ -672,11 +700,15 @@ EXAMPLES
|
||||
|
||||
|
||||
|
||||
\*
|
||||
6. To set the frame numbers for frame 9A00-10000001 and 9A00-10000002
|
||||
|
||||
To set the frame numbers for frame 9A00-10000001 and 9A00-10000002
|
||||
|
||||
\ **rspconfig**\ \ *9A00-10000001,9A00-10000002 frame=\\**\
|
||||
.. code-block:: perl
|
||||
|
||||
rspconfig 9A00-10000001,9A00-10000002 frame=*
|
||||
|
||||
|
||||
Output is similar to:
|
||||
|
||||
|
||||
.. code-block:: perl
|
||||
@ -687,11 +719,15 @@ EXAMPLES
|
||||
|
||||
|
||||
|
||||
\*
|
||||
7. To display the MPA network parameters for mm01:
|
||||
|
||||
To display the MPA network parameters for mm01:
|
||||
|
||||
\ **rspconfig**\ \ *mm01 network*\
|
||||
.. code-block:: perl
|
||||
|
||||
rspconfig mm01 network
|
||||
|
||||
|
||||
Output is similar to:
|
||||
|
||||
|
||||
.. code-block:: perl
|
||||
@ -704,11 +740,15 @@ EXAMPLES
|
||||
|
||||
|
||||
|
||||
\*
|
||||
8. To change the MPA network parameters with the values in the xCAT database for mm01:
|
||||
|
||||
To change the MPA network parameters with the values in the xCAT database for mm01:
|
||||
|
||||
\ **rspconfig**\ \ *mm01 network=\\**\
|
||||
.. code-block:: perl
|
||||
|
||||
rspconfig mm01 network=*
|
||||
|
||||
|
||||
Output is similar to:
|
||||
|
||||
|
||||
.. code-block:: perl
|
||||
@ -721,11 +761,15 @@ EXAMPLES
|
||||
|
||||
|
||||
|
||||
\*
|
||||
9. To change only the gateway parameter for the MPA network mm01:
|
||||
|
||||
To change only the gateway parameter for the MPA network mm01:
|
||||
|
||||
\ **rspconfig**\ \ *mm01 network=,,192.168.1.1,*\
|
||||
.. code-block:: perl
|
||||
|
||||
rspconfig mm01 network=,,192.168.1.1,
|
||||
|
||||
|
||||
Output is similar to:
|
||||
|
||||
|
||||
.. code-block:: perl
|
||||
@ -735,11 +779,15 @@ EXAMPLES
|
||||
|
||||
|
||||
|
||||
\*
|
||||
10. To display the FSP network parameters for fsp01:
|
||||
|
||||
To display the FSP network parameters for fsp01:
|
||||
|
||||
\ **rspconfig**\ \ *fsp01 network*\
|
||||
.. code-block:: perl
|
||||
|
||||
rspconfig fsp01 network
|
||||
|
||||
|
||||
Output is similar to:
|
||||
|
||||
|
||||
.. code-block:: perl
|
||||
@ -762,11 +810,15 @@ EXAMPLES
|
||||
|
||||
|
||||
|
||||
\*
|
||||
11. To change the FSP network parameters with the values in command line for eth0 on fsp01:
|
||||
|
||||
To change the FSP network parameters with the values in command line for eth0 on fsp01:
|
||||
|
||||
\ **rspconfig**\ \ *fsp01 network=eth0,192.168.1.200,fsp01,,255.255.255.0*\
|
||||
.. code-block:: perl
|
||||
|
||||
rspconfig fsp01 network=eth0,192.168.1.200,fsp01,,255.255.255.0
|
||||
|
||||
|
||||
Output is similar to:
|
||||
|
||||
|
||||
.. code-block:: perl
|
||||
@ -776,11 +828,15 @@ EXAMPLES
|
||||
|
||||
|
||||
|
||||
\*
|
||||
12. To change the FSP network parameters with the values in the xCAT database for eth0 on fsp01:
|
||||
|
||||
To change the FSP network parameters with the values in the xCAT database for eth0 on fsp01:
|
||||
|
||||
\ **rspconfig**\ \ *fsp01 network=eth0,\\**\
|
||||
.. code-block:: perl
|
||||
|
||||
rspconfig fsp01 network=eth0,*
|
||||
|
||||
|
||||
Output is similar to:
|
||||
|
||||
|
||||
.. code-block:: perl
|
||||
@ -790,11 +846,15 @@ EXAMPLES
|
||||
|
||||
|
||||
|
||||
\*
|
||||
13. To configure eth0 on fsp01 to get dynamic IP address from DHCP server:
|
||||
|
||||
To configure eth0 on fsp01 to get dynamic IP address from DHCP server:
|
||||
|
||||
\ **rspconfig**\ \ *fsp01 network=eth0,0.0.0.0*\
|
||||
.. code-block:: perl
|
||||
|
||||
rspconfig fsp01 network=eth0,0.0.0.0
|
||||
|
||||
|
||||
Output is similar to:
|
||||
|
||||
|
||||
.. code-block:: perl
|
||||
@ -804,11 +864,15 @@ EXAMPLES
|
||||
|
||||
|
||||
|
||||
\*
|
||||
14. To get the current power redundancy mode for power domain 1 on mm01:
|
||||
|
||||
To get the current power redundancy mode for power domain 1 on mm01:
|
||||
|
||||
\ **rspconfig**\ \ *mm01 pd1*\
|
||||
.. code-block:: perl
|
||||
|
||||
rspconfig mm01 pd1
|
||||
|
||||
|
||||
Output is similar to:
|
||||
|
||||
|
||||
.. code-block:: perl
|
||||
@ -818,11 +882,15 @@ EXAMPLES
|
||||
|
||||
|
||||
|
||||
\*
|
||||
15. To change the current power redundancy mode for power domain 1 on mm01 to non-redundant:
|
||||
|
||||
To change the current power redundancy mode for power domain 1 on mm01 to non-redundant:
|
||||
|
||||
\ **rspconfig**\ \ *mm01 pd1=nonred*\
|
||||
.. code-block:: perl
|
||||
|
||||
rspconfig mm01 pd1=nonred
|
||||
|
||||
|
||||
Output is similar to:
|
||||
|
||||
|
||||
.. code-block:: perl
|
||||
@ -832,11 +900,15 @@ EXAMPLES
|
||||
|
||||
|
||||
|
||||
\*
|
||||
16. To enable NTP with an NTP server address of 192.168.1.1, an update frequency of 90 minutes, and with v3 authentication enabled on mm01:
|
||||
|
||||
To enable NTP with an NTP server address of 192.168.1.1, an update frequency of 90 minutes, and with v3 authentication enabled on mm01:
|
||||
|
||||
\ **rspconfig**\ \ *mm01 ntp=enable,192.168.1.1,90,enable*\
|
||||
.. code-block:: perl
|
||||
|
||||
rspconfig mm01 ntp=enable,192.168.1.1,90,enable
|
||||
|
||||
|
||||
Output is similar to:
|
||||
|
||||
|
||||
.. code-block:: perl
|
||||
@ -849,11 +921,15 @@ EXAMPLES
|
||||
|
||||
|
||||
|
||||
\*
|
||||
17. To disable NTP v3 authentication only on mm01:
|
||||
|
||||
To disable NTP v3 authentication only on mm01:
|
||||
|
||||
\ **rspconfig**\ \ *mm01 ntp=,,,disable*\
|
||||
.. code-block:: perl
|
||||
|
||||
rspconfig mm01 ntp=,,,disable
|
||||
|
||||
|
||||
Output is similar to:
|
||||
|
||||
|
||||
.. code-block:: perl
|
||||
@ -863,11 +939,15 @@ EXAMPLES
|
||||
|
||||
|
||||
|
||||
\*
|
||||
18. To disable Predictive Failure and L2 Failure deconfiguration policies on mm01:
|
||||
|
||||
To disable Predictive Failure and L2 Failure deconfiguration policies on mm01:
|
||||
|
||||
\ **rspconfig**\ \ *mm01 decfg=disable:predictive,L3*\
|
||||
.. code-block:: perl
|
||||
|
||||
rspconfig mm01 decfg=disable:predictive,L3
|
||||
|
||||
|
||||
Output is similar to:
|
||||
|
||||
|
||||
.. code-block:: perl
|
||||
@ -877,11 +957,15 @@ EXAMPLES
|
||||
|
||||
|
||||
|
||||
\*
|
||||
19. To deconfigure processors 4 and 5 of Processing Unit 0 on mm01:
|
||||
|
||||
To deconfigure processors 4 and 5 of Processing Unit 0 on mm01:
|
||||
|
||||
\ **rspconfig**\ \ *mm01 procedecfg=deconfigure:0:4,5*\
|
||||
.. code-block:: perl
|
||||
|
||||
rspconfig mm01 procdecfg=deconfigure:0:4,5
|
||||
|
||||
|
||||
Output is similar to:
|
||||
|
||||
|
||||
.. code-block:: perl
|
||||
@ -891,73 +975,57 @@ EXAMPLES
|
||||
|
||||
|
||||
|
||||
\*
|
||||
|
||||
To check if CEC sysname set correct on mm01:
|
||||
|
||||
\ **rspconfig**\ \ *mm01 sysname*\
|
||||
20. To check if the CEC sysname is set correctly on mm01:
|
||||
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
rspconfig mm01 sysname
|
||||
|
||||
mm01: mm01
|
||||
|
||||
|
||||
\ **rspconfig**\ \ *mm01 sysname=cec01*\
|
||||
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
|
||||
rspconfig mm01 sysname=cec01
|
||||
|
||||
mm01: Success
|
||||
|
||||
|
||||
\ **rspconfig**\ \ *mm01 sysname*\
|
||||
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
|
||||
rspconfig mm01 sysname
|
||||
|
||||
mm01: cec01
|
||||
|
||||
|
||||
|
||||
|
||||
\*
|
||||
|
||||
To check and change the pending_power_on_side value of cec01's fsps:
|
||||
|
||||
\ **rspconfig**\ \ *cec01 pending_power_on_side*\
|
||||
21. To check and change the pending_power_on_side value of cec01's fsps:
|
||||
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
rspconfig cec01 pending_power_on_side
|
||||
|
||||
cec01: Pending Power On Side Primary: temp
|
||||
cec01: Pending Power On Side Secondary: temp
|
||||
|
||||
|
||||
\ **rspconfig**\ \ *cec01 pending_power_on_side=perm*\
|
||||
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
|
||||
rspconfig cec01 pending_power_on_side=perm
|
||||
|
||||
cec01: Success
|
||||
|
||||
|
||||
\ **rspconfig**\ \ *cec01 pending_power_on_side*\
|
||||
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
|
||||
rspconfig cec01 pending_power_on_side
|
||||
|
||||
cec01: Pending Power On Side Primary: perm
|
||||
cec01: Pending Power On Side Secondary: perm
|
||||
|
||||
|
||||
|
||||
|
||||
\*
|
||||
22. To show the BSR allocation for cec01:
|
||||
|
||||
To show the BSR allocation for cec01:
|
||||
|
||||
\ **rspconfig**\ \ *cec01 BSR*\
|
||||
.. code-block:: perl
|
||||
|
||||
rspconfig cec01 BSR
|
||||
|
||||
|
||||
Output is similar to:
|
||||
|
||||
|
||||
.. code-block:: perl
|
||||
@ -979,11 +1047,15 @@ EXAMPLES
|
||||
|
||||
|
||||
|
||||
\*
|
||||
23. To query the huge page information for CEC1, enter:
|
||||
|
||||
To query the huge page information for CEC1, enter:
|
||||
|
||||
\ **rspconfig**\ \ *CEC1 huge_page*\
|
||||
.. code-block:: perl
|
||||
|
||||
rspconfig CEC1 huge_page
|
||||
|
||||
|
||||
Output is similar to:
|
||||
|
||||
|
||||
.. code-block:: perl
|
||||
@ -1007,11 +1079,15 @@ EXAMPLES
|
||||
|
||||
|
||||
|
||||
\*
|
||||
24. To request 10 huge pages for CEC1, enter:
|
||||
|
||||
To request 10 huge pages for CEC1, enter:
|
||||
|
||||
\ **rspconfig**\ \ *CEC1 huge_page=10*\
|
||||
.. code-block:: perl
|
||||
|
||||
rspconfig CEC1 huge_page=10
|
||||
|
||||
|
||||
Output is similar to:
|
||||
|
||||
|
||||
.. code-block:: perl
|
||||
@ -1021,25 +1097,19 @@ EXAMPLES
|
||||
|
||||
|
||||
|
||||
\*
|
||||
|
||||
To disable service processor failover for cec01, in order to complete this command, the user should power off cec01 first:
|
||||
|
||||
\ **rspconfig**\ \ *cec01 setup_failover*\
|
||||
25. To disable service processor failover for cec01 (the user should power off cec01 first in order to complete this command):
|
||||
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
rspconfig cec01 setup_failover
|
||||
|
||||
cec01: Failover status: Enabled
|
||||
|
||||
|
||||
\ **rpower**\ \ *cec01 off*\
|
||||
|
||||
\ **rspconfig**\ \ *cec01 setup_failover=disable*\
|
||||
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
|
||||
rpower cec01 off
|
||||
|
||||
rspconfig cec01 setup_failover=disable
|
||||
|
||||
cec01: Success
|
||||
|
||||
|
||||
@ -1053,34 +1123,24 @@ EXAMPLES
|
||||
|
||||
|
||||
|
||||
\*
|
||||
|
||||
To force service processor failover for cec01:
|
||||
|
||||
\ **lshwconn**\ \ *cec01*\
|
||||
26. To force service processor failover for cec01:
|
||||
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
lshwconn cec01
|
||||
|
||||
cec01: 192.168.1.1: LINE DOWN
|
||||
cec01: 192.168.2.1: sp=primary,ipadd=192.168.2.1,alt_ipadd=unavailable,state=LINE UP
|
||||
cec01: 192.168.1.2: sp=secondary,ipadd=192.168.1.2,alt_ipadd=unavailable,state=LINE UP
|
||||
cec01: 192.168.2.2: LINE DOWN
|
||||
|
||||
|
||||
\ **rspconfig**\ \ *cec01 force_failover*\
|
||||
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
|
||||
rspconfig cec01 force_failover
|
||||
|
||||
cec01: Success.
|
||||
|
||||
|
||||
\ **lshwconn**\ \ *cec01*\
|
||||
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
|
||||
lshwconn cec01
|
||||
|
||||
cec01: 192.168.1.1: sp=secondary,ipadd=192.168.1.1,alt_ipadd=unavailable,state=LINE UP
|
||||
cec01: 192.168.2.1: LINE DOWN
|
||||
cec01: 192.168.1.2: LINE DOWN
|
||||
@ -1089,11 +1149,17 @@ EXAMPLES
|
||||
|
||||
|
||||
|
||||
\*
|
||||
27.
|
||||
|
||||
To deconfigure memory banks 9 and 10 of Processing Unit 0 on mm01:
|
||||
|
||||
\ **rspconfig**\ \ *mm01 memdecfg=deconfigure:bank:0:9,10*\
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
rspconfig mm01 memdecfg=deconfigure:bank:0:9,10
|
||||
|
||||
|
||||
Output is similar to:
|
||||
|
||||
|
||||
.. code-block:: perl
|
||||
@ -1103,11 +1169,19 @@ EXAMPLES
|
||||
|
||||
|
||||
|
||||
\*
|
||||
28.
|
||||
|
||||
To reset the network interface of the specified nodes:
|
||||
|
||||
\ **rspconfig**\ \ *-**\ **-resetnet*\ 
|
||||
.. code-block:: perl
|
||||
|
||||
rspconfig --resetnet
|
||||
|
||||
Output is similar to:
|
||||
|
||||
@ -1126,11 +1200,15 @@ EXAMPLES
|
||||
|
||||
|
||||
|
||||
\*
|
||||
29. To update the existing admin password on fsp:
|
||||
|
||||
To update the existing admin password on fsp:
|
||||
|
||||
\ **rspconfig**\ \ *fsp admin_passwd=admin,abc123*\
|
||||
.. code-block:: perl
|
||||
|
||||
rspconfig fsp admin_passwd=admin,abc123
|
||||
|
||||
|
||||
Output is similar to:
|
||||
|
||||
|
||||
.. code-block:: perl
|
||||
@ -1140,11 +1218,15 @@ EXAMPLES
|
||||
|
||||
|
||||
|
||||
\*
|
||||
30. To set the initial password for user HMC on fsp:
|
||||
|
||||
To set the initial password for user HMC on fsp:
|
||||
|
||||
\ **rspconfig**\ \ *fsp HMC_passwd=,abc123*\
|
||||
.. code-block:: perl
|
||||
|
||||
rspconfig fsp HMC_passwd=,abc123
|
||||
|
||||
|
||||
Output is similar to:
|
||||
|
||||
|
||||
.. code-block:: perl
|
||||
|
@ -176,7 +176,13 @@ Processor for a single or range of nodes and groups.
|
||||
****************
|
||||
|
||||
|
||||
\ **rvitals**\ \ *node5*\ \ *all*\
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
rvitals node5 all
|
||||
|
||||
|
||||
Output is similar to:
|
||||
|
||||
|
||||
.. code-block:: perl
|
||||
|
@ -40,8 +40,7 @@ The \ **sinv**\ command is an xCAT Distributed Shell Utility.
|
||||
|
||||
\ **COMMAND**\ \ **SPECIFICATION**\ :
|
||||
|
||||
The xdsh or rinv command to execute on the remote targets is specified by the
|
||||
\ **-c**\ flag, or by the \ **-f**\ flag
|
||||
The xdsh or rinv command to execute on the remote targets is specified by the \ **-c**\ flag, or by the \ **-f**\ flag
|
||||
which is followed by the fully qualified path to a file containing the command.
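As an illustration (a sketch only; /tmp/sinv.command is a hypothetical file containing the xdsh command, and the other flags mirror the examples below):

.. code-block:: perl

   sinv -f /tmp/sinv.command -p /tmp/sinv.template -t 2 -o /tmp/sinv.output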
|
||||
|
||||
Note: do not add | xdshcoll to the command on the command line or in the
|
||||
@ -243,101 +242,121 @@ Command Protocol can be used. See man \ **xdsh**\ for more details.
|
||||
|
||||
|
||||
|
||||
\*
|
||||
1. To set up sinv.template (name optional) for input to the \ **sinv**\ command, enter:
|
||||
|
||||
To setup sinv.template (name optional) for input to the \ **sinv**\ command , enter:
|
||||
|
||||
\ **xdsh**\ \ *node1,node2 "rpm -qa | grep ssh " | xdshcoll > /tmp/sinv.template*\
|
||||
.. code-block:: perl
|
||||
|
||||
Note: when setting up the template the output of xdsh must be piped
|
||||
to xdshcoll, sinv processing depends on it.
|
||||
xdsh node1,node2 "rpm -qa | grep ssh " | xdshcoll > /tmp/sinv.template
|
||||
|
||||
|
||||
Note: when setting up the template the output of xdsh must be piped to xdshcoll, sinv processing depends on it.
|
||||
|
||||
|
||||
|
||||
\*
|
||||
2. To set up rinv.template for input to the \ **sinv**\ command, enter:
|
||||
|
||||
To setup rinv.template for input to the \ **sinv**\ command , enter:
|
||||
|
||||
\ **rinv**\ \ *node1-node2 serial | xdshcoll > /tmp/rinv.template*\
|
||||
.. code-block:: perl
|
||||
|
||||
Note: when setting up the template the output of rinv must be piped
|
||||
to xdshcoll, sinv processing depends on it.
|
||||
rinv node1-node2 serial | xdshcoll > /tmp/rinv.template
|
||||
|
||||
|
||||
Note: when setting up the template the output of rinv must be piped to xdshcoll, sinv processing depends on it.
|
||||
|
||||
|
||||
|
||||
\*
|
||||
3. To execute \ **sinv**\ using the sinv.template generated above
|
||||
on the nodegroup, \ **testnodes**\ , possibly generating up to two
|
||||
new templates, and removing all generated templates in the end, and writing
|
||||
output report to /tmp/sinv.output, enter:
|
||||
|
||||
To execute \ **sinv**\ using the sinv.template generated above
|
||||
on the nodegroup, \ **testnodes**\ ,possibly generating up to two
|
||||
new templates, and removing all generated templates in the end, and writing
|
||||
output report to /tmp/sinv.output, enter:
|
||||
|
||||
\ **sinv**\ \ * -c "xdsh testnodes rpm -qa | grep ssh" -p /tmp/sinv.template -t 2 -r -o /tmp/sinv.output*\
|
||||
.. code-block:: perl
|
||||
|
||||
Note: do not add the pipe to xdshcoll on the -c flag, it is automatically
|
||||
added by the sinv routine.
|
||||
sinv -c "xdsh testnodes rpm -qa | grep ssh" -p /tmp/sinv.template -t 2 -r -o /tmp/sinv.output
|
||||
|
||||
|
||||
Note: do not add the pipe to xdshcoll on the -c flag, it is automatically added by the sinv routine.
|
||||
|
||||
|
||||
|
||||
\*
|
||||
4. To execute \ **sinv**\ on noderange, node1-node4, using the seed node, node8,
|
||||
to generate the first template, using the xdsh command (-c),
|
||||
possibly generating up to two additional
|
||||
templates and not removing any templates at the end, enter:
|
||||
|
||||
To execute \ **sinv**\ on noderange, node1-node4, using the seed node, node8,
|
||||
to generate the first template, using the xdsh command (-c),
|
||||
possibly generating up to two additional
|
||||
templates and not removing any templates at the end, enter:
|
||||
|
||||
\ **sinv**\ \ *-c "xdsh node1-node4 lslpp -l | grep bos.adt" -s node8 -p /tmp/sinv.template -t 2 -o /tmp/sinv.output*\
|
||||
.. code-block:: perl
|
||||
|
||||
sinv -c "xdsh node1-node4 lslpp -l | grep bos.adt" -s node8 -p /tmp/sinv.template -t 2 -o /tmp/sinv.output
|
||||
|
||||
|
||||
|
||||
|
||||
\*
|
||||
5. To execute \ **sinv**\ on noderange, node1-node4, using the seed node, node8,
|
||||
to generate the first template, using the rinv command (-c),
|
||||
possibly generating up to two additional
|
||||
templates and removing any generated templates at the end, enter:
|
||||
|
||||
To execute \ **sinv**\ on noderange, node1-node4, using the seed node, node8,
|
||||
to generate the first template, using the rinv command (-c),
|
||||
possibly generating up to two additional
|
||||
templates and removing any generated templates at the end, enter:
|
||||
|
||||
\ **sinv**\ \ *-c "rinv node1-node4 serial" -s node8 -p /tmp/sinv.template -t 2 -r -o /tmp/rinv.output*\
|
||||
.. code-block:: perl
|
||||
|
||||
sinv -c "rinv node1-node4 serial" -s node8 -p /tmp/sinv.template -t 2 -r -o /tmp/rinv.output
|
||||
|
||||
|
||||
|
||||
|
||||
\*
|
||||
6. To execute \ **sinv**\ on noderange, node1-node4, using node1 as
|
||||
the seed node, to generate the sinv.template from the xdsh command (-c),
|
||||
using the exact match option, generating no additional templates, enter:
|
||||
|
||||
To execute \ **sinv**\ on noderange, node1-node4, using node1 as
|
||||
the seed node, to generate the sinv.template from the xdsh command (-c),
|
||||
using the exact match option, generating no additional templates, enter:
|
||||
|
||||
\ **sinv**\ \ *-c "xdsh node1-node4 lslpp -l | grep bos.adt" -s node1 -e -p /tmp/sinv.template -o /tmp/sinv.output*\
|
||||
.. code-block:: perl
|
||||
|
||||
sinv -c "xdsh node1-node4 lslpp -l | grep bos.adt" -s node1 -e -p /tmp/sinv.template -o /tmp/sinv.output
|
||||
|
||||
|
||||
Note: the /tmp/sinv.template file must be empty, otherwise it will be used
|
||||
as an admin generated template.
|
||||
|
||||
|
||||
|
||||
\*
|
||||
7. To execute \ **sinv**\ on the Linux osimage defined for cn1. First build a template from the /etc/hosts on the node. Then run sinv to compare.
|
||||
|
||||
To execute \ **sinv**\ on the Linux osimage defined for cn1. First build a template from the /etc/hosts on the node. Then run sinv to compare.
|
||||
\ **xdsh**\ \ *cn1 "cat /etc/hosts" | xdshcoll *\ /tmp/sinv2/template"
|
||||
|
||||
\ **sinv**\ \ *-c "xdsh -i /install/netboot/rhels6/ppc64/test_ramdisk_statelite/rootimg cat /etc/hosts" -e -t1 -p /tmp/sinv.template -o /tmp/sinv.output*\
|
||||
.. code-block:: perl
|
||||
|
||||
xdsh cn1 "cat /etc/hosts" | xdshcoll > /tmp/sinv2/template"
|
||||
|
||||
sinv -c "xdsh -i /install/netboot/rhels6/ppc64/test_ramdisk_statelite/rootimg cat /etc/hosts" -e -t1 -p /tmp/sinv.template -o /tmp/sinv.output
|
||||
|
||||
|
||||
|
||||
|
||||
\*
|
||||
8.
|
||||
|
||||
To execute \ **sinv**\ on the AIX NIM 611dskls spot and compare /etc/hosts to compute1 node, run the following:
|
||||
|
||||
\ **xdsh**\ \ *compute1 "cat /etc/hosts" | xdshcoll *\ /tmp/sinv2/template"
|
||||
|
||||
\ **sinv**\ \ *-c "xdsh -i 611dskls cat /etc/hosts" -e -t1 -p /tmp/sinv.template -o /tmp/sinv.output*\
|
||||
.. code-block:: perl
|
||||
|
||||
xdsh compute1 "cat /etc/hosts" | xdshcoll > /tmp/sinv2/template"
|
||||
|
||||
sinv -c "xdsh -i 611dskls cat /etc/hosts" -e -t1 -p /tmp/sinv.template -o /tmp/sinv.output
|
||||
|
||||
|
||||
|
||||
|
||||
\*
|
||||
9.
|
||||
|
||||
To execute \ **sinv**\ on the device mswitch2 and compare to mswitch1
|
||||
|
||||
\ **sinv**\ \ *-c "xdsh mswitch enable;show version" -s mswitch1 -p /tmp/sinv/template -**\ **-devicetype IBSwitch::Mellanox -l admin -t 2*\
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
sinv -c "xdsh mswitch enable;show version" -s mswitch1 -p /tmp/sinv/template --devicetype IBSwitch::Mellanox -l admin -t 2
|
||||
|
||||
|
||||
|
||||
|
||||
|
@ -19,9 +19,9 @@ SYNOPSIS
|
||||
********
|
||||
|
||||
|
||||
\ **snmove**\ \ *noderange*\ [\ **-V**\ ] [\ **-l | -**\ **-liteonly**\ ] [\ **-d | -**\ **-dest**\ \ *sn2*\ ] [\ **-D | -**\ **-destn**\ \ *sn2n*\ ] [\ **-i | -**\ **-ignorenodes**\ ] [\ **-P | -**\ **-postscripts**\ \ *script1,script2...*\ |\ *all*\ ]
|
||||
\ **snmove**\ \ *noderange*\ [\ **-V**\ ] [\ **-l | -**\ **-liteonly**\ ] [\ **-d | -**\ **-dest**\ \ *sn2*\ ] [\ **-D | -**\ **-destn**\ \ *sn2n*\ ] [\ **-i | -**\ **-ignorenodes**\ ] [\ **-P | -**\ **-postscripts**\ \ *script1,script2...*\ | \ **all**\ ]
|
||||
|
||||
\ **snmove**\ [\ **-V**\ ] [\ **-l | -**\ **-liteonly**\ ] \ **-s | -**\ **-source**\ \ *sn1*\ [\ **-S | -**\ **-sourcen**\ \ *sn1n*\ ] [\ **-d | -**\ **-dest**\ \ *sn2*\ ] [\ **-D | -**\ **-destn**\ \ *sn2n*\ ] [\ **-i | -**\ **-ignorenodes**\ ] [\ **-P | -**\ **-postscripts**\ \ *script1,script2...*\ |\ *all*\ ]
|
||||
\ **snmove**\ [\ **-V**\ ] [\ **-l | -**\ **-liteonly**\ ] \ **-s | -**\ **-source**\ \ *sn1*\ [\ **-S | -**\ **-sourcen**\ \ *sn1n*\ ] [\ **-d | -**\ **-dest**\ \ *sn2*\ ] [\ **-D | -**\ **-destn**\ \ *sn2n*\ ] [\ **-i | -**\ **-ignorenodes**\ ] [\ **-P | -**\ **-postscripts**\ \ *script1,script2...*\ | \ **all**\ ]
|
||||
|
||||
\ **snmove**\ [\ **-h | -**\ **-help | -v | -**\ **-version**\ ]
|
||||
|
||||
@ -68,7 +68,7 @@ service node.
|
||||
|
||||
By default the command will modify the nodes so that they will be able to be managed by the backup service node.
|
||||
|
||||
If the -i option is specified, the nodes themselves will not be modified.
|
||||
If the \ **-i**\ option is specified, the nodes themselves will not be modified.
|
||||
|
||||
You can also have postscripts executed on the nodes by using the -P option if needed.
|
||||
|
||||
@ -120,14 +120,13 @@ OPTIONS
|
||||
|
||||
\ **-P|-**\ **-postscripts**\
|
||||
|
||||
Specifies a list of extra postscripts to be run on the nodes after the nodes are moved over to the new serive node. If 'all' is specified, all the postscripts defined in the postscripts table will be run for the nodes. The specified postscripts must be stored under /install/postscripts directory.
|
||||
Specifies a list of extra postscripts to be run on the nodes after the nodes are moved over to the new service node. If \ **all**\ is specified, all the postscripts defined in the postscripts table will be run for the nodes. The specified postscripts must be stored under the /install/postscripts directory.
|
||||
|
||||
|
||||
|
||||
\ **-s|-**\ **-source**\
|
||||
|
||||
Specifies the hostname of the current (source) service node sa known by (facing)
|
||||
the management node.
|
||||
Specifies the hostname of the current (source) service node as known by (facing) the management node.
|
||||
|
||||
|
||||
|
||||
@ -161,7 +160,11 @@ EXAMPLES
|
||||
|
||||
Move the nodes contained in group "group1" to the service node named "xcatsn02".
|
||||
|
||||
\ **snmove group1 -d xcatsn02 -D xcatsn02-eth1**\
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
snmove group1 -d xcatsn02 -D xcatsn02-eth1
|
||||
|
||||
|
||||
|
||||
|
||||
@ -169,7 +172,11 @@ EXAMPLES
|
||||
|
||||
Move all the nodes that use service node xcatsn01 to service node xcatsn02.
|
||||
|
||||
\ **snmove -s xcatsn01 -S xcatsn01-eth1 -d xcatsn02 -D xcatsn02-eth1**\
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
snmove -s xcatsn01 -S xcatsn01-eth1 -d xcatsn02 -D xcatsn02-eth1
|
||||
|
||||
|
||||
|
||||
|
||||
@ -177,7 +184,11 @@ EXAMPLES
|
||||
|
||||
Move any nodes that have sn1 as their primary server to the backup service node set in the xCAT node definition.
|
||||
|
||||
\ **snmove -s sn1**\
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
snmove -s sn1
|
||||
|
||||
|
||||
|
||||
|
||||
@ -185,7 +196,11 @@ EXAMPLES
|
||||
|
||||
Move all the nodes in the xCAT group named "nodegroup1" to their backup SNs.
|
||||
|
||||
\ **snmove nodegroup1**\
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
snmove nodegroup1
|
||||
|
||||
|
||||
|
||||
|
||||
@ -193,7 +208,11 @@ EXAMPLES
|
||||
|
||||
Move all the nodes in xCAT group "sngroup1" to the service node named "xcatsn2".
|
||||
|
||||
\ **snmove sngroup1 -d xcatsn2**\
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
snmove sngroup1 -d xcatsn2
|
||||
|
||||
|
||||
|
||||
|
||||
@ -201,7 +220,11 @@ EXAMPLES
|
||||
|
||||
Move all the nodes in xCAT group "sngroup1" to the SN named "xcatsn2" and run extra postscripts.
|
||||
|
||||
\ **snmove sngroup1 -d xcatsn2 -P test1**\
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
snmove sngroup1 -d xcatsn2 -P test1
|
||||
|
||||
|
||||
|
||||
|
||||
@ -209,7 +232,11 @@ EXAMPLES
|
||||
|
||||
Move all the nodes in xCAT group "sngroup1" to the SN named "xcatsn2" and do not run anything on the nodes.
|
||||
|
||||
\ **snmove sngroup1 -d xcatsn2 -i**\
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
snmove sngroup1 -d xcatsn2 -i
|
||||
|
||||
|
||||
|
||||
|
||||
@ -217,7 +244,11 @@ EXAMPLES
|
||||
|
||||
Synchronize any AIX statelite files from the primary server for compute03 to the backup server. This will not actually move the node to its backup service node.
|
||||
|
||||
\ **snmove compute03 -l -V**\
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
snmove compute03 -l -V
|
||||
|
||||
|
||||
|
||||
|
||||
|
@ -34,12 +34,15 @@ This command is only for Power 775 using Direct FSP Management, and used in Powe
|
||||
The \ **swapnodes**\ command will keep the \ **current_node**\ name in the xCAT table, and use the \ *fip_node*\ 's hardware resource. Besides that, the IO adapters will be assigned to the new hardware resource if they are in the same CEC. So the swapnodes command will do 2 things:
|
||||
|
||||
|
||||
.. code-block:: perl
|
||||
1. swap the location info in the db between 2 nodes:
|
||||
|
||||
All the ppc table attributes (including hcp, id, parent, supernode and so on).
|
||||
All the nodepos table attributes(including rack, u, chassis, slot, room and so on).
|
||||
|
||||
|
||||
|
||||
2. assign the I/O adapters from the defective node(the original current_node) to the available node(the original fip_node) if the nodes are in the same cec.
|
||||
|
||||
(1) swap the location info in the db between 2 nodes:
|
||||
All the ppc table attributes (including hcp, id, parent, supernode and so on).
|
||||
All the nodepos table attributes(including rack, u, chassis, slot, room and so on).
|
||||
(2) assign the I/O adapters from the defective node (the original current_node) to the available node (the original fip_node) if the nodes are in the same cec.
|
||||
|
||||
|
||||
The \ **swapnodes**\ command shouldn't make the decision of which 2 nodes are swapped. It will just receive the 2 node names as command line parameters.
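A sketch of what such an invocation might look like (the \ **-c**\ and \ **-f**\ flags for the current node and the fip node are assumed here, not shown in this excerpt; verify against the SYNOPSIS of your release):

.. code-block:: perl

    # -c/-f flags assumed; check the swapnodes SYNOPSIS before running
    swapnodes -c sn1 -f compute2
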
|
||||
@ -96,9 +99,7 @@ EXAMPLES
|
||||
|
||||
|
||||
|
||||
1
|
||||
|
||||
To swap the service node attributes and IO assignments between sn1 and compute2 which are in the same cec, all the attributes in the ppc table and nodepos talbe of the two node will be swapped, and the the I/O adapters from the defective node (the original sn1) will be assigned to the available node (the original compute2). After the swapping, the sn1 will use the compute2's hardware resource and the I/O adapters from the original sn1.
|
||||
1. To swap the service node attributes and IO assignments between sn1 and compute2 which are in the same cec, all the attributes in the ppc table and nodepos table of the two nodes will be swapped, and the I/O adapters from the defective node (the original sn1) will be assigned to the available node (the original compute2). After the swapping, the sn1 will use the compute2's hardware resource and the I/O adapters from the original sn1.
|
||||
|
||||
|
||||
.. code-block:: perl
|
||||
@ -108,9 +109,7 @@ EXAMPLES
|
||||
|
||||
|
||||
|
||||
2
|
||||
|
||||
To swap the service node attributes and IO assignments between sn1 and compute2 which are NOT in the same cec, all the attributes in the ppc table and nodepos talbe of the two node will be swapped. After the swapping, the sn1 will use the compute2's hardware resource.
|
||||
2. To swap the service node attributes and IO assignments between sn1 and compute2 which are NOT in the same cec, all the attributes in the ppc table and nodepos table of the two nodes will be swapped. After the swapping, the sn1 will use the compute2's hardware resource.
|
||||
|
||||
|
||||
.. code-block:: perl
|
||||
@ -120,9 +119,7 @@ EXAMPLES
|
||||
|
||||
|
||||
|
||||
3
|
||||
|
||||
Only to move the service node (sn1) definition to the compute node (compute2)'s hardware resource, and not move the compute2 definition to the sn1. After the swapping, the sn1 will use the compute2's hardware resource, and the compute2 definition is not changed.
|
||||
3. Only to move the service node (sn1) definition to the compute node (compute2)'s hardware resource, and not move the compute2 definition to the sn1. After the swapping, the sn1 will use the compute2's hardware resource, and the compute2 definition is not changed.
|
||||
|
||||
|
||||
.. code-block:: perl
|
||||
|
@ -11,11 +11,11 @@ SYNOPSIS
|
||||
********
|
||||
|
||||
|
||||
\ *switchdiscover [-h| --help]*\
|
||||
\ **switchdiscover [-h| -**\ **-help]**\
|
||||
|
||||
\ *switchdiscover [-v| --version]*\
|
||||
\ **switchdiscover [-v| -**\ **-version]**\
|
||||
|
||||
\ *switchdiscover [noderange|--range ip_ranges] [-V] [-w][-r|-x|-z][-s scan_methods]*\
|
||||
\ **switchdiscover**\ [\ *noderange*\ | \ **-**\ **-range**\ \ *ip_ranges*\ ] \ **[-V] [-w][-r|-x|-z][-s**\ \ *scan_methods*\ ]
|
||||
|
||||
|
||||
***********
|
||||
@ -36,7 +36,7 @@ OPTIONS
|
||||
|
||||
|
||||
|
||||
\ **noderange**\
|
||||
\ *noderange*\
|
||||
|
||||
The switches which the user wants to discover.
|
||||
If the user specifies the noderange, switchdiscover will just
|
||||
@ -49,7 +49,7 @@ OPTIONS
|
||||
|
||||
|
||||
|
||||
\ **-h**\
|
||||
\ **-h|-**\ **-help**\
|
||||
|
||||
Display usage message.
|
||||
|
||||
@ -78,7 +78,7 @@ OPTIONS
|
||||
|
||||
|
||||
|
||||
\ **-v**\
|
||||
\ **-v|-**\ **-version**\
|
||||
|
||||
Command Version.
|
||||
|
||||
@ -125,29 +125,41 @@ EXAMPLES
|
||||
|
||||
|
||||
|
||||
\*
|
||||
1. To discover the switches on some subnets:
|
||||
|
||||
To discover the switches on some subnets:
|
||||
|
||||
\ **switchdiscover**\ \ *-**\ **-range 10.2.3.0/24,192.168.3.0/24,11.5.6.7*\
|
||||
.. code-block:: perl
|
||||
|
||||
switchdiscover --range 10.2.3.0/24,192.168.3.0/24,11.5.6.7
|
||||
|
||||
|
||||
|
||||
|
||||
\*
|
||||
2. To do the switch discovery and save them to the xCAT database:
|
||||
|
||||
To do the switch discovery and save them to the xCAT database:
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
switchdiscover --range 10.2.3.4/24 -w
|
||||
|
||||
|
||||
|
||||
It is recommended to run \ **makehosts**\ after the switches are saved in the DB.
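For example (a sketch; substitute the noderange that now holds the switch definitions written by \ **-w**\ ):

.. code-block:: perl

    # <discovered_switch_noderange> is a placeholder for the switches saved in the DB
    makehosts <discovered_switch_noderange>
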
|
||||
|
||||
|
||||
|
||||
\*
|
||||
3.
|
||||
|
||||
To use the lldp method to discover the switches:
|
||||
|
||||
\ **switchdiscover**\ -s lldp
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
switchdiscover -s lldp
|
||||
|
||||
|
||||
|
||||
|
||||
@ -160,8 +172,9 @@ FILES
|
||||
/opt/xcat/bin/switchdiscover
|
||||
|
||||
|
||||
********
|
||||
*********
|
||||
SEE ALSO
|
||||
********
|
||||
|
||||
*********
|
||||
|
||||
|
||||
|
@ -21,7 +21,7 @@ SYNOPSIS
|
||||
|
||||
\ **tabgrep**\ \ *nodename*\
|
||||
|
||||
\ **tabgrep**\ [\ *-?*\ | \ *-h*\ | \ *-**\ **-help*\ ]
|
||||
\ **tabgrep**\ [\ **-?**\ | \ **-h**\ | \ **-**\ **-help**\ ]
|
||||
|
||||
|
||||
***********
|
||||
@ -53,16 +53,12 @@ RETURN VALUE
|
||||
|
||||
|
||||
|
||||
0
|
||||
|
||||
The command completed successfully.
|
||||
|
||||
0 The command completed successfully.
|
||||
|
||||
|
||||
1
|
||||
|
||||
An error has occurred.
|
||||
|
||||
|
||||
1 An error has occurred.
|
||||
|
||||
|
||||
|
||||
|
||||
@ -72,11 +68,15 @@ EXAMPLES
|
||||
|
||||
|
||||
|
||||
\*
|
||||
1.
|
||||
|
||||
To display the tables that contain blade1:
|
||||
|
||||
\ **tabgrep**\ \ *blade1*\
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
tabgrep blade1
|
||||
|
||||
|
||||
The output would be similar to:
|
||||
|
||||
|
@ -19,11 +19,11 @@ SYNOPSIS
|
||||
********
|
||||
|
||||
|
||||
\ *unregnotif [-h| --help]*\
|
||||
\ **unregnotif [-h| -**\ **-help]**\
|
||||
|
||||
\ *unregnotif [-v| --version]*\
|
||||
\ **unregnotif [-v| -**\ **-version]**\
|
||||
|
||||
\ *unregnotif \ \*filename\*\ *\
|
||||
\ **unregnotif**\ \ *filename*\
|
||||
|
||||
|
||||
***********
|
||||
@ -35,7 +35,7 @@ This command is used to unregistered a Perl module or a command that was watchin
|
||||
|
||||
|
||||
**********
|
||||
Parameters
|
||||
PARAMETERS
|
||||
**********
|
||||
|
||||
|
||||
|
@ -19,11 +19,11 @@ SYNOPSIS
|
||||
********
|
||||
|
||||
|
||||
\ *updateSNimage [-h | --help ]*\
|
||||
\ **updateSNimage [-h | -**\ **-help ]**\
|
||||
|
||||
\ *updateSNimage [-v | --version]*\
|
||||
\ **updateSNimage [-v | -**\ **-version]**\
|
||||
|
||||
\ *updateSNimage {-n} [-p]*\
|
||||
\ **updateSNimage**\ [\ **-n**\ \ *node*\ ] [\ **-p**\ \ *path*\ ]
|
||||
|
||||
|
||||
***********
|
||||
@ -43,7 +43,7 @@ OPTIONS
|
||||
|
||||
\ **-v |-**\ **-version**\ Display xCAT version.
|
||||
|
||||
\ **-n | -**\ **-node**\ A remote host name or ip address that contains the install image to be updated.
|
||||
\ **-n |-**\ **-node**\ A remote host name or ip address that contains the install image to be updated.
|
||||
|
||||
\ **-p |-**\ **-path**\ Path to the install image.
|
||||
|
||||
@ -65,9 +65,17 @@ EXAMPLES
|
||||
|
||||
1. To update the image on the local host.
|
||||
|
||||
\ *updateSNimage -p /install/netboot/fedora8/x86_64/test/rootimg*\
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
updateSNimage -p /install/netboot/fedora8/x86_64/test/rootimg
|
||||
|
||||
|
||||
2. To update the image on a remote host.
|
||||
|
||||
\ *updateSNimage -n 9.112.45.6 -p /install/netboot/fedora8/x86_64/test/rootimg*\
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
updateSNimage -n 9.112.45.6 -p /install/netboot/fedora8/x86_64/test/rootimg
|
||||
|
||||
|
||||
|
@ -19,13 +19,13 @@ SYNOPSIS
|
||||
********
|
||||
|
||||
|
||||
\ **updatenode**\ \ **noderange**\ [\ **-V | -**\ **-verbose**\ ] [\ **-F | -**\ **-sync**\ ] [\ **-f | -**\ **-snsync**\ ] [\ **-S | -**\ **-sw**\ ] [\ **-l**\ \ *userID*\ ] [\ **-P | -**\ **-scripts**\ [\ **script1,script2...**\ ]] [\ **-s | -**\ **-sn**\ ] [\ **-A | -**\ **-updateallsw**\ ] [\ **-c | -**\ **-cmdlineonly**\ ] [\ **-d alt_source_dir**\ ] [\ **-**\ **-fanout**\ ] [\ **-t timeout**\ } [\ **attr=val**\ [\ **attr=val...**\ ]] [\ **-n | -**\ **-noverify**\ ]
|
||||
\ **updatenode**\ \ *noderange*\ [\ **-V | -**\ **-verbose**\ ] [\ **-F | -**\ **-sync**\ ] [\ **-f | -**\ **-snsync**\ ] [\ **-S | -**\ **-sw**\ ] [\ **-l**\ \ *userID*\ ] [\ **-P | -**\ **-scripts**\ [\ *script1,script2...*\ ]] [\ **-s | -**\ **-sn**\ ] [\ **-A | -**\ **-updateallsw**\ ] [\ **-c | -**\ **-cmdlineonly**\ ] [\ **-d**\ \ *alt_source_dir*\ ] [\ **-**\ **-fanout**\ ] [\ **-t**\ \ *timeout*\ ] [\ *attr=val*\ [\ *attr=val...*\ ]] [\ **-n | -**\ **-noverify**\ ]
|
||||
|
||||
\ **updatenode**\ \ **noderange**\ [\ **-k | -**\ **-security**\ ] [\ **-t timeout**\ ]
|
||||
\ **updatenode**\ \ **noderange**\ [\ **-k | -**\ **-security**\ ] [\ **-t**\ \ *timeout*\ ]
|
||||
|
||||
\ **updatenode**\ \ **noderange**\ [\ **-g | -**\ **-genmypost**\ ]
|
||||
|
||||
\ **updatenode**\ \ **noderange**\ [\ **-V | -**\ **-verbose**\ ] [\ **-t timeout**\ ] [\ **script1,script2...**\ ]
|
||||
\ **updatenode**\ \ **noderange**\ [\ **-V | -**\ **-verbose**\ ] [\ **-t**\ \ *timeout*\ ] [\ *script1,script2...*\ ]
|
||||
|
||||
\ **updatenode**\ \ **noderange**\ [\ **-V | -**\ **-verbose**\ ] [\ **-f | -**\ **-snsync**\ ]
|
||||
|
||||
@ -41,46 +41,36 @@ The updatenode command is run on the xCAT management node and can be used
|
||||
to perform the following node updates:
|
||||
|
||||
|
||||
1
|
||||
|
||||
Distribute and synchronize files.
|
||||
|
||||
1. Distribute and synchronize files.
|
||||
|
||||
|
||||
2
|
||||
|
||||
Install or update software on diskful nodes.
|
||||
|
||||
|
||||
2. Install or update software on diskful nodes.
|
||||
|
||||
|
||||
3
|
||||
|
||||
Run postscripts.
|
||||
|
||||
|
||||
3. Run postscripts.
|
||||
|
||||
|
||||
4
|
||||
|
||||
Update the ssh keys and host keys for the service nodes and compute nodes;
|
||||
Update the ca and credentials for the service nodes.
|
||||
|
||||
|
||||
4. Update the ssh keys and host keys for the service nodes and compute nodes;
|
||||
Update the ca and credentials for the service nodes.
|
||||
|
||||
|
||||
|
||||
The default behavior when no options are input to updatenode will be to run
|
||||
the following options "-S", "-P" and "-F" options in this order.
|
||||
the following \ **-S**\ , \ **-P**\ and \ **-F**\ options in this order.
|
||||
If you wish to limit updatenode to specific
|
||||
actions you can use combinations of the "-S", "-P", and "-F" flags.
|
||||
actions you can use combinations of the \ **-S**\ , \ **-P**\ , and \ **-F**\ flags.
|
||||
|
||||
For example, if you just want to synchronize configuration files you could
|
||||
specify the "-F" flag. If you want to synchronize files and update
|
||||
software you would specify the "-F" and "-S" flags. See the descriptions
|
||||
specify the \ **-F**\ flag. If you want to synchronize files and update
|
||||
software you would specify the \ **-F**\ and \ **-S**\ flags. See the descriptions
|
||||
of these flags and examples below.
|
||||
|
||||
The flag "-k" (--security) can NOT be used together with "-S", "-P", and "-F"
|
||||
flags.
|
||||
The flag \ **-k**\ (\ **-**\ **-security**\ ) can NOT be used together with \ **-S**\ , \ **-P**\ , and \ **-F**\ flags.
|
||||
|
||||
The flag "-f" (--snsync) can NOT be used together with "-S", "-P", and "-F"
|
||||
flags.
|
||||
The flag \ **-f**\ (\ **-**\ **-snsync**\ ) can NOT be used together with \ **-S**\ , \ **-P**\ , and \ **-F**\ flags.
|
||||
|
||||
Note: In a large cluster environment the updating of nodes in an ad hoc
|
||||
manner can quickly get out of hand, leaving the system administrator with
|
||||
@ -95,29 +85,22 @@ To distribute and synchronize files
|
||||
The basic process for distributing and synchronizing nodes is:
|
||||
|
||||
|
||||
\*
|
||||
|
||||
Create a synclist file.
|
||||
|
||||
\* Create a synclist file.
|
||||
|
||||
|
||||
\*
|
||||
|
||||
Indicate the location of the synclist file.
|
||||
|
||||
|
||||
\* Indicate the location of the synclist file.
|
||||
|
||||
|
||||
\*
|
||||
|
||||
Run the updatenode command to update the nodes.
|
||||
|
||||
|
||||
\* Run the updatenode command to update the nodes.
|
||||
|
||||
|
||||
|
||||
Files may be distributed and synchronized for both diskless and
|
||||
diskful nodes. Syncing files to NFS-based statelite nodes is not supported.
|
||||
|
||||
More information on using the synchronization file function is in
|
||||
the following doc: Using_Updatenode.
|
||||
More information on using the synchronization file function is in the following doc: Using_Updatenode.
|
||||
|
||||
Create the synclist file
|
||||
------------------------
|
||||
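A minimal synclist sketch (hypothetical paths, using the same \ *source -> destination*\ form shown in the xdcp synclist examples later on this page):

.. code-block:: perl

    # hypothetical entries; one "source -> destination" pair per line
    /etc/hosts -> /etc/hosts
    /tmp/share/file2 -> /tmp/file2
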
@ -260,31 +243,23 @@ Update security
|
||||
The basic functions of update security for nodes:
|
||||
|
||||
|
||||
\*
|
||||
|
||||
Setup the ssh keys for the target nodes. It enables the management
|
||||
node and service nodes to ssh to the target nodes without password.
|
||||
|
||||
\* Setup the ssh keys for the target nodes. It enables the management
|
||||
node and service nodes to ssh to the target nodes without password.
|
||||
|
||||
|
||||
\*
|
||||
|
||||
Redeliver the host keys to the target nodes.
|
||||
|
||||
|
||||
\* Redeliver the host keys to the target nodes.
|
||||
|
||||
|
||||
\*
|
||||
|
||||
Redeliver the ca and certificates files to the service node.
|
||||
These files are used to authenticate the ssl connection between
|
||||
xcatd's of management node and service node.
|
||||
|
||||
|
||||
\* Redeliver the ca and certificates files to the service node.
|
||||
These files are used to authenticate the ssl connection between
|
||||
xcatd's of management node and service node.
|
||||
|
||||
|
||||
\*
|
||||
|
||||
Remove the entries of target nodes from known_hosts file.
|
||||
|
||||
|
||||
\* Remove the entries of target nodes from known_hosts file.
|
||||
|
||||
|
||||
|
||||
\ *Set up the SSH keys*\
|
||||
@ -316,7 +291,12 @@ Since the certificates have the validity time, the ntp service is recommended
|
||||
to be set up between management node and service node.
|
||||
|
||||
Simply run the following command to update the security keys:
|
||||
\ **updatenode**\ \ *noderange*\ -k
|
||||
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
updatenode <noderange> -k
|
||||
|
||||
|
||||
|
||||
|
||||
@ -326,7 +306,7 @@ PARAMETERS
|
||||
|
||||
|
||||
|
||||
\ **noderange**\
|
||||
\ *noderange*\
|
||||
|
||||
A set of comma delimited xCAT node names
|
||||
and/or group names. See the xCAT "noderange"
|
||||
@ -335,7 +315,7 @@ PARAMETERS
|
||||
|
||||
|
||||
|
||||
\ **script1,script2...**\
|
||||
\ *script1,script2...*\
|
||||
|
||||
A comma-separated list of script names.
|
||||
The scripts must be executable and copied
|
||||
@ -344,11 +324,15 @@ PARAMETERS
|
||||
If parameters are specified, the whole list needs to be quoted by double quotes.
|
||||
For example:
|
||||
|
||||
\ **"script1 p1 p2,script2"**\
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
"script1 p1 p2,script2"
|
||||
|
||||
|
||||
|
||||
|
||||
[\ **attr=val**\ [\ **attr=val...**\ ]]
|
||||
[\ *attr=val*\ [\ *attr=val...*\ ]]
|
||||
|
||||
Specifies one or more "attribute equals value" pairs, separated by spaces.
|
||||
Attr=val pairs must be specified last on the command line. The currently
|
||||
@ -388,7 +372,7 @@ OPTIONS
|
||||
|
||||
|
||||
|
||||
\ **-d alt_source_dir**\
|
||||
\ **-d**\ \ *alt_source_dir*\
|
||||
|
||||
Used to specify a source directory other than the standard lpp_source directory specified in the xCAT osimage definition. (AIX only)
|
||||
|
||||
@ -485,7 +469,7 @@ OPTIONS
|
||||
|
||||
|
||||
|
||||
\ **-t timeout**\
|
||||
\ **-t**\ \ *timeout*\
|
||||
|
||||
Specifies a timeout in seconds the command will wait for the remote targets to complete. If timeout is not specified
|
||||
it will wait indefinitely. updatenode -k is the exception that has a timeout of 10 seconds, unless overridden by this flag.
|
||||
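For example, to give a postscript run five minutes to finish on the compute group (illustrative noderange, script, and value):

.. code-block:: perl

    updatenode compute -P syslog -t 300
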
@ -521,12 +505,13 @@ EXAMPLES
|
||||
|
||||
|
||||
|
||||
1
|
||||
1. To perform all updatenode features for the Linux nodes in the group "compute":
|
||||
|
||||
To perform all updatenode features for the Linux nodes in the group
|
||||
"compute":
|
||||
|
||||
\ **updatenode compute**\
|
||||
.. code-block:: perl
|
||||
|
||||
updatenode compute
|
||||
|
||||
|
||||
The command will: run any scripts listed in the nodes "postscripts and postbootscripts"
|
||||
attribute, install or update any software indicated in the
|
||||
@ -536,83 +521,103 @@ EXAMPLES
|
||||
|
||||
|
||||
|
||||
2
|
||||
2. To run postscripts,postbootscripts and file synchronization only on the node "clstrn01":
|
||||
|
||||
To run postscripts,postbootscripts and file synchronization only on the node
|
||||
"clstrn01":
|
||||
|
||||
\ **updatenode clstrn01 -F -P**\
|
||||
.. code-block:: perl
|
||||
|
||||
updatenode clstrn01 -F -P
|
||||
|
||||
|
||||
|
||||
|
||||
3
|
||||
|
||||
Running updatenode -P with the syncfiles postscript is not supported. You should use updatenode -F instead.
|
||||
3. Running updatenode -P with the syncfiles postscript is not supported. You should use updatenode -F instead.
|
||||
|
||||
Do not run:
|
||||
|
||||
\ **updatenode clstrno1 -P syncfiles**\
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
updatenode clstrno1 -P syncfiles
|
||||
|
||||
|
||||
Run:
|
||||
|
||||
\ **updatenode clstrn01 -F**\
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
updatenode clstrn01 -F
|
||||
|
||||
|
||||
|
||||
|
||||
4
|
||||
4. To run the postscripts and postbootscripts indicated in the postscripts and postbootscripts attributes on the node "clstrn01":
|
||||
|
||||
To run the postscripts and postbootscripts indicated in the postscripts and postbootscripts attributes on
|
||||
the node "clstrn01":
|
||||
|
||||
\ **updatenode clstrn01 -P**\
|
||||
.. code-block:: perl
|
||||
|
||||
updatenode clstrn01 -P
|
||||
|
||||
|
||||
|
||||
|
||||
5
|
||||
5. To run the postscripts script1 and script2 on the node "clstrn01":
|
||||
|
||||
To run the postscripts script1 and script2 on the node "clstrn01":
|
||||
|
||||
\ **cp script1,script2 /install/postscripts**\
|
||||
.. code-block:: perl
|
||||
|
||||
cp script1,script2 /install/postscripts
|
||||
|
||||
updatenode clstrn01 -P "script1 p1 p2,script2"
|
||||
|
||||
\ **updatenode clstrn01 -P "script1 p1 p2,script2"**\
|
||||
|
||||
Since flag '-P' can be omitted when only script names are specified,
|
||||
the following command is equivalent:
|
||||
|
||||
\ **updatenode clstrn01 "script1 p1 p2,script2"**\
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
updatenode clstrn01 "script1 p1 p2,script2"
|
||||
|
||||
|
||||
p1 p2 are parameters for script1.
|
||||
|
||||
|
||||
|
||||
6
|
||||
6. To synchronize the files on the node "clstrn01": Prepare the synclist file.
|
||||
For AIX, set the full path of synclist in the osimage table synclists
|
||||
attribute. For Linux, put the synclist file into the location:
|
||||
/install/custom/<inst_type>/<distro>/<profile>.<os>.<arch>.synclist
|
||||
Then:
|
||||
|
||||
To synchronize the files on the node "clstrn01": Prepare the synclist file.
|
||||
For AIX, set the full path of synclist in the osimage table synclists
|
||||
attribute. For Linux, put the synclist file into the location:
|
||||
/install/custom/<inst_type>/<distro>/<profile>.<os>.<arch>.synclist
|
||||
Then:
|
||||
|
||||
\ **updatenode clstrn01 -F**\
|
||||
.. code-block:: perl
|
||||
|
||||
updatenode clstrn01 -F
|
||||
|
||||
|
||||
|
||||
|
||||
7
|
||||
7. To perform the software update on the Linux node "clstrn01": Copy the extra
|
||||
rpm into the /install/post/otherpkgs/<os>/<arch>/\* and add the rpm names into
|
||||
the /install/custom/install/<ostype>/profile.otherpkgs.pkglist . Then:
|
||||
|
||||
To perform the software update on the Linux node "clstrn01": Copy the extra
|
||||
rpm into the /install/post/otherpkgs/<os>/<arch>/\* and add the rpm names into
|
||||
the /install/custom/install/<ostype>/profile.otherpkgs.pkglist . Then:
|
||||
|
||||
\ **updatenode clstrn01 -S**\
|
||||
.. code-block:: perl
|
||||
|
||||
updatenode clstrn01 -S
|
||||
|
||||
|
||||
|
||||
|
||||
8
|
||||
8. To update the AIX node named "xcatn11" using the "installp_bundle" and/or
|
||||
"otherpkgs" attribute values stored in the xCAT database. Use the default installp, rpm and emgr flags.
|
||||
|
||||
To update the AIX node named "xcatn11" using the "installp_bundle" and/or
|
||||
"otherpkgs" attribute values stored in the xCAT database. Use the default installp, rpm and emgr flags.
|
||||
|
||||
\ **updatenode xcatn11 -V -S**\
|
||||
.. code-block:: perl
|
||||
|
||||
updatenode xcatn11 -V -S
|
||||
|
||||
|
||||
Note: The xCAT "xcatn11" node definition points to an xCAT osimage definition
|
||||
which contains the "installp_bundle" and "otherpkgs" attributes as well as
|
||||
@ -620,73 +625,93 @@ EXAMPLES
|
||||
|
||||
|
||||
|
||||
9
|
||||
9. To update the AIX node "xcatn11" by installing the "bos.cpr" fileset using
|
||||
the "-agQXY" installp flags. Also display the output of the installp command.
|
||||
|
||||
To update the AIX node "xcatn11" by installing the "bos.cpr" fileset using
|
||||
the "-agQXY" installp flags. Also display the output of the installp command.
|
||||
|
||||
\ **updatenode xcatn11 -V -S otherpkgs="I:bos.cpr" installp_flags="-agQXY"**\
|
||||
.. code-block:: perl
|
||||
|
||||
updatenode xcatn11 -V -S otherpkgs="I:bos.cpr" installp_flags="-agQXY"
|
||||
|
||||
|
||||
Note: The 'I:' prefix is optional but recommended for installp packages.
|
||||
|
||||
|
||||
|
||||
10
|
||||
10. To uninstall the "bos.cpr" fileset that was installed in the previous example.
|
||||
|
||||
To uninstall the "bos.cpr" fileset that was installed in the previous example.
|
||||
|
||||
\ **updatenode xcatn11 -V -S otherpkgs="I:bos.cpr" installp_flags="-u"**\
|
||||
.. code-block:: perl
|
||||
|
||||
updatenode xcatn11 -V -S otherpkgs="I:bos.cpr" installp_flags="-u"
|
||||
|
||||
|
||||
|
||||
|
||||
11
|
||||
11. To update the AIX nodes "xcatn11" and "xcatn12" with the "gpfs.base" fileset
|
||||
and the "rsync" rpm using the installp flags "-agQXY" and the rpm flags "-i --nodeps".
|
||||
|
||||
To update the AIX nodes "xcatn11" and "xcatn12" with the "gpfs.base" fileset
|
||||
and the "rsync" rpm using the installp flags "-agQXY" and the rpm flags "-i --nodeps".
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
updatenode xcatn11,xcatn12 -V -S otherpkgs="I:gpfs.base,R:rsync-2.6.2-1.aix5.1.ppc.rpm" installp_flags="-agQXY" rpm_flags="-i --nodeps"
|
||||
|
||||
|
||||
|
||||
Note: Using the "-V" flag with multiple nodes may result in a large amount of output.
|
||||
|
||||
|
||||
|
||||
12
|
||||
12. To uninstall the rsync rpm that was installed in the previous example.
|
||||
|
||||
To uninstall the rsync rpm that was installed in the previous example.
|
||||
|
||||
\ **updatenode xcatn11 -V -S otherpkgs="R:rsync-2.6.2-1" rpm_flags="-e"**\
|
||||
.. code-block:: perl
|
||||
|
||||
updatenode xcatn11 -V -S otherpkgs="R:rsync-2.6.2-1" rpm_flags="-e"
|
||||
|
||||
|
||||
|
||||
|
||||
13
|
||||
13. Update the AIX node "node01" using the software specified in the NIM "sslbnd" and "sshbnd" installp_bundle resources and the "-agQXY" installp flags.
|
||||
|
||||
Update the AIX node "node01" using the software specified in the NIM "sslbnd" and "sshbnd" installp_bundle resources and the "-agQXY" installp flags.
|
||||
|
||||
\ **updatenode node01 -V -S installp_bundle="sslbnd,sshbnd" installp_flags="-agQXY"**\
|
||||
.. code-block:: perl
|
||||
|
||||
updatenode node01 -V -S installp_bundle="sslbnd,sshbnd" installp_flags="-agQXY"
|
||||
|
||||
|
||||
|
||||
|
||||
14
|
||||
14. To get a preview of what would happen if you tried to install the "rsct.base" fileset on AIX node "node42". (You must use the "-V" option to get the full output from the installp command.)
|
||||
|
||||
To get a preview of what would happen if you tried to install the "rsct.base" fileset on AIX node "node42". (You must use the "-V" option to get the full output from the installp command.)
|
||||
|
||||
\ **updatenode node42 -V -S otherpkgs="I:rsct.base" installp_flags="-apXY"**\
|
||||
.. code-block:: perl
|
||||
|
||||
updatenode node42 -V -S otherpkgs="I:rsct.base" installp_flags="-apXY"
|
||||
|
||||
|
||||
|
||||
|
||||
15
|
||||
15. To check what rpm packages are installed on the AIX node "node09". (You must use the "-c" flag so updatenode does not get a list of packages from the database.)
|
||||
|
||||
To check what rpm packages are installed on the AIX node "node09". (You must use the "-c" flag so updatenode does not get a list of packages from the database.)
|
||||
|
||||
\ **updatenode node09 -V -c -S rpm_flags="-qa"**\
|
||||
.. code-block:: perl
|
||||
|
||||
updatenode node09 -V -c -S rpm_flags="-qa"
|
||||
|
||||
|
||||
|
||||
|
||||
16
|
||||
16. To install all software updates contained in the /images directory.
|
||||
|
||||
To install all software updates contained in the /images directory.
|
||||
|
||||
\ **updatenode node27 -V -S -A -d /images**\
|
||||
.. code-block:: perl
|
||||
|
||||
updatenode node27 -V -S -A -d /images
|
||||
|
||||
|
||||
Note: Make sure the directory is exportable and that the permissions are set
|
||||
correctly for all the files. (Including the .toc file in the case of
|
||||
@ -694,52 +719,64 @@ EXAMPLES
|
||||
|
||||
|
||||
|
||||
17
|
||||
17. Install the interim fix package located in the /efixes directory.
|
||||
|
||||
Install the interim fix package located in the /efixes directory.
|
||||
|
||||
\ **updatenode node29 -V -S -d /efixes otherpkgs=E:IZ38930TL0.120304.epkg.Z**\
|
||||
.. code-block:: perl
|
||||
|
||||
updatenode node29 -V -S -d /efixes otherpkgs=E:IZ38930TL0.120304.epkg.Z
|
||||
|
||||
|
||||
|
||||
|
||||
18
|
||||
18. To uninstall the interim fix that was installed in the previous example.
|
||||
|
||||
To uninstall the interim fix that was installed in the previous example.
|
||||
|
||||
\ **updatenode xcatsn11 -V -S -c emgr_flags="-r -L IZ38930TL0"**\
|
||||
.. code-block:: perl
|
||||
|
||||
updatenode xcatsn11 -V -S -c emgr_flags="-r -L IZ38930TL0"
|
||||
|
||||
|
||||
|
||||
|
||||
19
|
||||
19. To update the security keys for the node "node01"
|
||||
|
||||
To update the security keys for the node "node01"
|
||||
|
||||
\ **updatenode node01 -k**\
|
||||
.. code-block:: perl
|
||||
|
||||
updatenode node01 -k
|
||||
|
||||
|
||||
|
||||
|
||||
20
|
||||
20. To update the service nodes with the files to be synchronized to node group compute:
|
||||
|
||||
To update the service nodes with the files to be synchronized to node group compute:
|
||||
|
||||
\ **updatenode compute -f**\
|
||||
.. code-block:: perl
|
||||
|
||||
updatenode compute -f
|
||||
|
||||
|
||||
|
||||
|
||||
21
|
||||
21. To run updatenode with the non-root userid "user1" that has been setup as an xCAT userid with sudo on node1 to run as root, do the following:
|
||||
See Granting_Users_xCAT_privileges for required sudo setup.
|
||||
|
||||
To run updatenode with the non-root userid "user1" that has been setup as an xCAT userid with sudo on node1 to run as root, do the following:
|
||||
See Granting_Users_xCAT_privileges for required sudo setup.
|
||||
|
||||
\ **updatenode node1 -l user1 -P syslog**\
|
||||
.. code-block:: perl
|
||||
|
||||
updatenode node1 -l user1 -P syslog
|
||||
|
||||
|
||||
|
||||
|
||||
22
|
||||
22. In Sysclone environment, after capturing the delta changes from golden client to management node, to run updatenode to push these delta changes to target nodes.
|
||||
|
||||
In Sysclone environment, after capturing the delta changes from golden client to management node, to run updatenode to push these delta changes to target nodes.
|
||||
|
||||
\ **updatenode target-node -S**\
|
||||
.. code-block:: perl
|
||||
|
||||
updatenode target-node -S
|
||||
|
||||
|
||||
|
||||
|
||||
|
@ -63,7 +63,11 @@ wcons windows on your $DISPLAY will be killed.
|
||||
****************
|
||||
|
||||
|
||||
\ **wkill**\ \ *node1-node5*\
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
wkill node1-node5
|
||||
|
||||
|
||||
|
||||
************************
|
||||
|
@ -19,7 +19,7 @@ SYNOPSIS
|
||||
********
|
||||
|
||||
|
||||
\ *xCATWorld {noderange}*\
|
||||
\ **xCATWorld**\ \ *noderange*\
|
||||
|
||||
|
||||
***********
|
||||
@ -48,7 +48,11 @@ EXAMPLES
|
||||
|
||||
1. To run, enter:
|
||||
|
||||
\ *xCATWorld nodegrp1*\
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
xCATWorld nodegrp1
|
||||
|
||||
|
||||
|
||||
*****
|
||||
|
@ -19,9 +19,9 @@ SYNOPSIS
|
||||
********
|
||||
|
||||
|
||||
\ *xcat2nim [-h|--help ]*\
|
||||
\ **xcat2nim [-h|-**\ **-help]**\
|
||||
|
||||
\ *xcat2nim [-V|--verbose] [-u|--update] [-l|--list] [-r|--remove] [-f|--force] [-t object-types] [-o object-names] [-a|--allobjects] [-p|--primarySN] [-b|--backupSN] [noderange] [attr=val [attr=val...]] *\
|
||||
\ **xcat2nim [-V|-**\ **-verbose] [-u|-**\ **-update] [-l|-**\ **-list] [-r|-**\ **-remove] [-f|-**\ **-force] [-t object-types] [-o**\ \ *object-names*\ ] \ **[-a|-**\ **-allobjects] [-p|-**\ **-primarySN] [-b|-**\ **-backupSN]**\ \ *[noderange] [attr=val [attr=val...]]*\
|
||||
|
||||
|
||||
***********
|
||||
@ -65,7 +65,7 @@ OPTIONS
|
||||
|
||||
\ **-a|-**\ **-all**\ The list of objects will include all xCAT node, group and network objects.
|
||||
|
||||
\ **attr=val [attr=val ...]**\ Specifies one or more "attribute equals value" pairs, separated by spaces. Attr=val pairs must be specified last on the command line. The attribute names must correspond to the attributes supported by the relevant NIM commands. When providing attr=val pairs on the command line you must not specify more than one object type.
|
||||
\ *attr=val [attr=val ...]*\ Specifies one or more "attribute equals value" pairs, separated by spaces. Attr=val pairs must be specified last on the command line. The attribute names must correspond to the attributes supported by the relevant NIM commands. When providing attr=val pairs on the command line you must not specify more than one object type.
|
||||
|
||||
\ **-b|-**\ **-backupSN**\ When using backup service nodes only update the backup. The default is to update both the primary and backup service nodes.
|
||||
|
||||
@ -75,13 +75,13 @@ OPTIONS
|
||||
|
||||
\ **-l|-**\ **-list**\ List NIM definitions corresponding to xCAT definitions.
|
||||
|
||||
\ **-o object-names**\ A set of comma delimited xCAT object names. Objects must be of type node, group, or network.
|
||||
\ **-o**\ \ *object-names*\ A set of comma delimited xCAT object names. Objects must be of type node, group, or network.
|
||||
|
||||
\ **-p|-**\ **-primarySN**\ When using backup service nodes only update the primary. The default is to update both the primary and backup service nodes.
|
||||
|
||||
\ **-r|-**\ **-remove**\ Remove NIM definitions corresponding to xCAT definitions.
|
||||
|
||||
\ **-t object-types**\ A set of comma delimited xCAT object types. Supported types include: node, group, and network.
|
||||
\ **-t**\ \ *object-types*\ A set of comma delimited xCAT object types. Supported types include: node, group, and network.
|
||||
|
||||
Note: If the object type is "group", it means that the \ **xcat2nim**\ command will operate on a NIM machine group definition corresponding to the xCAT node group definition. Before creating a NIM machine group, all the NIM client nodes definition must have been created.
|
||||
|
||||
@ -107,39 +107,75 @@ EXAMPLES
|
||||
|
||||
1. To create a NIM machine definition corresponding to the xCAT node "clstrn01".
|
||||
|
||||
\ *xcat2nim -t node -o clstrn01*\
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
xcat2nim -t node -o clstrn01
|
||||
|
||||
|
||||
2. To create NIM machine definitions for all xCAT node definitions.
|
||||
|
||||
\ *xcat2nim -t node*\
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
xcat2nim -t node
|
||||
|
||||
|
||||
3. Update all the NIM machine definitions for the nodes contained in the xCAT "compute" node group and specify attribute values that will be applied to each definition.
|
||||
|
||||
\ *xcat2nim -u -t node -o compute netboot_kernel=mp cable_type="N/A"*\
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
xcat2nim -u -t node -o compute netboot_kernel=mp cable_type="N/A"
|
||||
|
||||
|
||||
4. To create a NIM machine group definition corresponding to the xCAT group "compute".
|
||||
|
||||
\ *xcat2nim -t group -o compute*\
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
xcat2nim -t group -o compute
|
||||
|
||||
|
||||
5. To create NIM network definitions corresponding to the xCAT "clstr_net" and "publc_net" network definitions. Also display verbose output.
|
||||
|
||||
\ *xcat2nim -V -t network -o "clstr_net,publc_net"*\
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
xcat2nim -V -t network -o "clstr_net,publc_net"
|
||||
|
||||
|
||||
6. To list the NIM definition for node clstrn02.
|
||||
|
||||
\ *xcat2nim -l -t node clstrn02*\
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
xcat2nim -l -t node clstrn02
|
||||
|
||||
|
||||
7. To re-create a NIM machine definition and display verbose output.
|
||||
|
||||
\ *xcat2nim -V -t node -f clstrn05*\
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
xcat2nim -V -t node -f clstrn05
|
||||
|
||||
|
||||
8. To remove the NIM definition for the group "AIXnodes".
|
||||
|
||||
\ *xcat2nim -t group -r -o AIXnodes*\
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
xcat2nim -t group -r -o AIXnodes
|
||||
|
||||
|
||||
9. To list the NIM "clstr_net" definition.
|
||||
|
||||
\ *xcat2nim -l -t network -o clstr_net*\
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
xcat2nim -l -t network -o clstr_net
|
||||
|
||||
|
||||
|
||||
*****
|
||||
|
@ -19,9 +19,9 @@ SYNOPSIS
|
||||
********
|
||||
|
||||
|
||||
\ **xcatchroot -h **\
|
||||
\ **xcatchroot -h**\
|
||||
|
||||
\ **xcatchroot [-V] -i osimage_name cmd_string**\
|
||||
\ **xcatchroot [-V] -i**\ \ *osimage_name cmd_string*\
|
||||
|
||||
|
||||
***********
|
||||
@ -33,19 +33,15 @@ For AIX diskless images this command will modify the AIX SPOT resource using
|
||||
the chroot command. You must include the name of an xCAT osimage
|
||||
definition and the command that you wish to have run in the spot.
|
||||
|
||||
WARNING:
|
||||
\ **WARNING:**\
|
||||
|
||||
|
||||
Be very careful when using this command!!! Make sure you are
|
||||
very clear about exactly what you are changing so that you do
|
||||
not accidently corrupt the image.
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
Be very careful when using this command!!! Make sure you are
|
||||
very clear about exactly what you are changing so that you do
|
||||
not accidently corrupt the image.
|
||||
|
||||
As a precaution it is advisable to make a copy of the original
|
||||
spot in case your changes wind up corrupting the image.
|
||||
|
||||
As a precaution it is advisable to make a copy of the original
|
||||
spot in case your changes wind up corrupting the image.
|
||||
|
||||
When you are done updating a NIM spot resource you should always run the NIM
|
||||
check operation on the spot.
|
||||
@ -54,7 +50,7 @@ check operation on the spot.
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
nim -Fo check <spot_name>
|
||||
nim -Fo check <spot_name>
|
||||
|
||||
|
||||
The xcatchroot command will take care of any of the required setup so that
|
||||
@ -93,7 +89,7 @@ OPTIONS
|
||||
|
||||
|
||||
|
||||
\ **cmd_string**\
|
||||
\ *cmd_string*\
|
||||
|
||||
The command you wish to have run in the chroot environment. (Use a quoted
|
||||
string.)
|
||||
@ -106,7 +102,7 @@ OPTIONS
|
||||
|
||||
|
||||
|
||||
\ **-i osimage_name**\
|
||||
\ **-i**\ \ *osimage_name*\
|
||||
|
||||
The name of the xCAT osimage definition.
|
||||
|
||||
@ -125,16 +121,12 @@ RETURN VALUE
|
||||
|
||||
|
||||
|
||||
0
|
||||
|
||||
The command completed successfully.
|
||||
|
||||
0 The command completed successfully.
|
||||
|
||||
|
||||
1
|
||||
|
||||
An error has occurred.
|
||||
|
||||
|
||||
1 An error has occurred.
|
||||
|
||||
|
||||
|
||||
|
||||
@ -146,19 +138,35 @@ EXAMPLES
|
||||
1) Set the root password to "cluster" in the spot so that when the diskless
|
||||
node boots it will have a root password set.
|
||||
|
||||
\ **xcatchroot -i 614spot "/usr/bin/echo root:cluster | /usr/bin/chpasswd -c"**\
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
xcatchroot -i 614spot "/usr/bin/echo root:cluster | /usr/bin/chpasswd -c"
|
||||
|
||||
|
||||
2) Install the bash rpm package.
|
||||
|
||||
\ **xcatchroot -i 614spot "/usr/bin/rpm -Uvh /lpp_source/RPMS/ppc bash-3.2-1.aix5.2.ppc.rpm"**\
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
xcatchroot -i 614spot "/usr/bin/rpm -Uvh /lpp_source/RPMS/ppc bash-3.2-1.aix5.2.ppc.rpm"
|
||||
|
||||
|
||||
3) To enable system debug.
|
||||
|
||||
\ **xcatchroot -i 614spot "bosdebug -D -M"**\
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
xcatchroot -i 614spot "bosdebug -D -M"
|
||||
|
||||
|
||||
4) To set the "ipforwarding" system tunable.
|
||||
|
||||
\ **xcatchroot -i 614spot "/usr/sbin/no -r -o ipforwarding=1"**\
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
xcatchroot -i 614spot "/usr/sbin/no -r -o ipforwarding=1"
|
||||
|
||||
|
||||
|
||||
*****
|
||||
|
@ -93,12 +93,14 @@ is identical:
|
||||
|
||||
|
||||
|
||||
\*
|
||||
1. To display the results of a command issued on several nodes, in
|
||||
the format used in the Description, enter:
|
||||
|
||||
To display the results of a command issued on several nodes, in
|
||||
the format used in the Description, enter:
|
||||
|
||||
\ **psh**\ \ *node1,node2,node3 cat /etc/passwd*\ | \ **xcoll**\
|
||||
.. code-block:: perl
|
||||
|
||||
psh node1,node2,node3 cat /etc/passwd | xcoll
|
||||
|
||||
|
||||
|
||||
|
||||
|
@ -19,10 +19,7 @@ xdcp.1
|
||||
****************
|
||||
|
||||
|
||||
\ **xdcp**\ \ *noderange*\ [[\ **-f**\ \ *fanout*\ ]
|
||||
[\ **-L**\ ] [\ **-l**\ \ *userID*\ ] [\ **-o**\ \ *node_options*\ ] [\ **-p**\ ]
|
||||
[\ **-P**\ ] [\ **-r**\ \ *node_remote_shell*\ ] [\ **-R**\ ] [\ **-t**\ \ *timeout*\ ]
|
||||
[\ **-T**\ ] [\ **-v**\ ] [\ **-q**\ ] [\ **-X**\ \ *env_list*\ ] sourcefile.... targetpath
|
||||
\ **xdcp**\ \ *noderange*\ [[\ **-f**\ \ *fanout*\ ] [\ **-L**\ ] [\ **-l**\ \ *userID*\ ] [\ **-o**\ \ *node_options*\ ] [\ **-p**\ ] [\ **-P**\ ] [\ **-r**\ \ *node_remote_shell*\ ] [\ **-R**\ ] [\ **-t**\ \ *timeout*\ ] [\ **-T**\ ] [\ **-v**\ ] [\ **-q**\ ] [\ **-X**\ \ *env_list*\ ] \ *sourcefile.... targetpath*\
|
||||
|
||||
\ **xdcp**\ \ *noderange*\ [\ **-F**\ \ *rsync input file*\ ]
|
||||
|
||||
@ -95,7 +92,7 @@ standard output or standard error is displayed.
|
||||
|
||||
|
||||
|
||||
\ **sourcefile...**\
|
||||
\ *sourcefile...*\
|
||||
|
||||
Specifies the complete path for the file to be copied to or
|
||||
from the target. Multiple files can be specified. When used
|
||||
@ -104,7 +101,7 @@ standard output or standard error is displayed.
|
||||
|
||||
|
||||
|
||||
\ **targetpath**\
|
||||
\ *targetpath*\
|
||||
|
||||
If one source_file file, then it specifies the file to copy the source_file
|
||||
file to on the target. If multiple source_file files, it specifies
|
||||
@ -155,7 +152,7 @@ standard output or standard error is displayed.
|
||||
|
||||
|
||||
For example:
|
||||
/etc/password /etc/hosts -> /etc
|
||||
/etc/password /etc/hosts -> /etc
|
||||
|
||||
|
||||
.. code-block:: perl
|
||||
@ -187,17 +184,22 @@ standard output or standard error is displayed.
|
||||
The scripts must be also added to the file list to rsync to the node for hierarchical clusters. It is optional for non-hierarchical clusters.
|
||||
|
||||
For example, your rsynclist file may look like this:
|
||||
/tmp/share/file2 -> /tmp/file2
|
||||
/tmp/share/file2.post -> /tmp/file2.post
|
||||
/tmp/share/file3 -> /tmp/filex
|
||||
/tmp/share/file3.post -> /tmp/file3.post
|
||||
/tmp/myscript -> /tmp/myscript
|
||||
# the below are postscripts
|
||||
EXECUTE:
|
||||
/tmp/share/file2.post
|
||||
/tmp/share/file3.post
|
||||
EXECUTEALWAYS:
|
||||
/tmp/myscript
|
||||
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
/tmp/share/file2 -> /tmp/file2
|
||||
/tmp/share/file2.post -> /tmp/file2.post
|
||||
/tmp/share/file3 -> /tmp/filex
|
||||
/tmp/share/file3.post -> /tmp/file3.post
|
||||
/tmp/myscript -> /tmp/myscript
|
||||
# the below are postscripts
|
||||
EXECUTE:
|
||||
/tmp/share/file2.post
|
||||
/tmp/share/file3.post
|
||||
EXECUTEALWAYS:
|
||||
/tmp/myscript
|
||||
|
||||
|
||||
If /tmp/file2 and /tmp/file3 update /tmp/file2 and /tmp/filex on the node, then the postscripts /tmp/file2.post and /tmp/file3.post are automatically run on
|
||||
the node. /tmp/myscript will always be run on the node.
|
||||
@ -205,20 +207,25 @@ standard output or standard error is displayed.
|
||||
Another option is the \ **APPEND:**\ clause in the synclist file. The \ **APPEND:**\ clause is used to append the contents of the input file to an existing file on the node. The file to append \ **must**\ already exist on the node and not be part of the synclist that contains the \ **APPEND:**\ clause.
|
||||
|
||||
For example, your rsynclist file may look like this:
|
||||
/tmp/share/file2 -> /tmp/file2
|
||||
/tmp/share/file2.post -> /tmp/file2.post
|
||||
/tmp/share/file3 -> /tmp/filex
|
||||
/tmp/share/file3.post -> /tmp/file3.post
|
||||
/tmp/myscript -> /tmp/myscript
|
||||
# the below are postscripts
|
||||
EXECUTE:
|
||||
/tmp/share/file2.post
|
||||
/tmp/share/file3.post
|
||||
EXECUTEALWAYS:
|
||||
/tmp/myscript
|
||||
APPEND:
|
||||
/etc/myappenddir/appendfile -> /etc/mysetup/setup
|
||||
/etc/myappenddir/appendfile2 -> /etc/mysetup/setup2
|
||||
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
/tmp/share/file2 -> /tmp/file2
|
||||
/tmp/share/file2.post -> /tmp/file2.post
|
||||
/tmp/share/file3 -> /tmp/filex
|
||||
/tmp/share/file3.post -> /tmp/file3.post
|
||||
/tmp/myscript -> /tmp/myscript
|
||||
# the below are postscripts
|
||||
EXECUTE:
|
||||
/tmp/share/file2.post
|
||||
/tmp/share/file3.post
|
||||
EXECUTEALWAYS:
|
||||
/tmp/myscript
|
||||
APPEND:
|
||||
/etc/myappenddir/appendfile -> /etc/mysetup/setup
|
||||
/etc/myappenddir/appendfile2 -> /etc/mysetup/setup2
|
||||
|
||||
|
||||
When you use the append script, the file (left) of the arrow is appended to the file right of the arrow. In this example, /etc/myappenddir/appendfile is appended to /etc/mysetup/setup file, which must already exist on the node. The /opt/xcat/share/xcat/scripts/xdcpappend.sh is used to accomplish this.
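Once such a synclist exists, it is distributed with the same \ **-F**\ invocation used in the examples below, for instance:

.. code-block:: perl

    xdcp compute -F /tmp/myrsync
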
|
||||
|
||||
@ -484,12 +491,14 @@ userdefined.
|
||||
|
||||
|
||||
|
||||
\*
|
||||
1. To copy the /etc/hosts file from all nodes in the cluster
|
||||
to the /tmp/hosts.dir directory on the local host, enter:
|
||||
|
||||
To copy the /etc/hosts file from all nodes in the cluster
|
||||
to the /tmp/hosts.dir directory on the local host, enter:
|
||||
|
||||
\ **xdcp**\ \ *all -P /etc/hosts /tmp/hosts.dir*\
|
||||
.. code-block:: perl
|
||||
|
||||
xdcp all -P /etc/hosts /tmp/hosts.dir
|
||||
|
||||
|
||||
A suffix specifying the name of the target is appended to each
|
||||
file name. The contents of the /tmp/hosts.dir directory are similar to:
|
||||
@ -504,64 +513,74 @@ userdefined.
|
||||
|
||||
|
||||
|
||||
\*
|
||||
2. To copy the directory /var/log/testlogdir from all targets in
|
||||
NodeGroup1 with a fanout of 12, and save each directory on the local
|
||||
host as /var/log._target, enter:
|
||||
|
||||
To copy the directory /var/log/testlogdir from all targets in
|
||||
NodeGroup1 with a fanout of 12, and save each directory on the local
|
||||
host as /var/log._target, enter:
|
||||
|
||||
\ **xdcp**\ \ *NodeGroup1 -f 12 -RP /var/log/testlogdir /var/log*\
|
||||
.. code-block:: perl
|
||||
|
||||
xdcp NodeGroup1 -f 12 -RP /var/log/testlogdir /var/log
|
||||
|
||||
|
||||
|
||||
|
||||
\*
|
||||
3. To copy /localnode/smallfile and /tmp/bigfile to /tmp on node1
|
||||
using rsync and input -t flag to rsync, enter:
|
||||
|
||||
To copy /localnode/smallfile and /tmp/bigfile to /tmp on node1
|
||||
using rsync and input -t flag to rsync, enter:
|
||||
|
||||
\ *xdcp node1 -r /usr/bin/rsync -o "-t" /localnode/smallfile /tmp/bigfile /tmp*\
|
||||
.. code-block:: perl
|
||||
|
||||
xdcp node1 -r /usr/bin/rsync -o "-t" /localnode/smallfile /tmp/bigfile /tmp
|
||||
|
||||
|
||||
|
||||
|
||||
\*
|
||||
4. To copy the /etc/hosts file from the local host to all the nodes
|
||||
in the cluster, enter:
|
||||
|
||||
To copy the /etc/hosts file from the local host to all the nodes
|
||||
in the cluster, enter:
|
||||
|
||||
\ **xdcp**\ \ *all /etc/hosts /etc/hosts*\
|
||||
.. code-block:: perl
|
||||
|
||||
xdcp all /etc/hosts /etc/hosts
|
||||
|
||||
|
||||
|
||||
|
||||
\*
|
||||
5. To copy all the files in /tmp/testdir from the local host to all the nodes
|
||||
in the cluster, enter:
|
||||
|
||||
To copy all the files in /tmp/testdir from the local host to all the nodes
|
||||
in the cluster, enter:
|
||||
|
||||
\ **xdcp**\ \ *all /tmp/testdir/\\* /tmp/testdir*\
|
||||
.. code-block:: perl
|
||||
|
||||
xdcp all /tmp/testdir/* /tmp/testdir
|
||||
|
||||
|
||||
|
||||
|
||||
\*
|
||||
6. To copy all the files in /tmp/testdir and it's subdirectories
|
||||
from the local host to node1 in the cluster, enter:
|
||||
|
||||
To copy all the files in /tmp/testdir and it's subdirectories
|
||||
from the local host to node1 in the cluster, enter:
|
||||
|
||||
\ **xdcp**\ \ *node1 -R /tmp/testdir /tmp/testdir*\
|
||||
.. code-block:: perl
|
||||
|
||||
xdcp node1 -R /tmp/testdir /tmp/testdir
|
||||
|
||||
|
||||
|
||||
|
||||
\*
|
||||
7. To copy the /etc/hosts file from node1 and node2 to the
|
||||
/tmp/hosts.dir directory on the local host, enter:
|
||||
|
||||
To copy the /etc/hosts file from node1 and node2 to the
|
||||
/tmp/hosts.dir directory on the local host, enter:
|
||||
|
||||
\ **xdcp**\ \ *node1,node2 -P /etc/hosts /tmp/hosts.dir*\
|
||||
.. code-block:: perl
|
||||
|
||||
xdcp node1,node2 -P /etc/hosts /tmp/hosts.dir
|
||||
|
||||
|
||||
|
||||
|
||||
\*
|
||||
|
||||
To rsync the /etc/hosts file to your compute nodes:
|
||||
8. To rsync the /etc/hosts file to your compute nodes:
|
||||
|
||||
Create a rsync file /tmp/myrsync, with this line:
|
||||
|
||||
@ -573,11 +592,15 @@ userdefined.
|
||||
|
||||
Run:
|
||||
|
||||
\ **xdcp**\ \ *compute -F /tmp/myrsync*\
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
xdcp compute -F /tmp/myrsync
|
||||
|
||||
|
||||
|
||||
|
||||
\*
|
||||
9.
|
||||
|
||||
To rsync all the files in /home/mikev to the compute nodes:
|
||||
|
||||
@ -587,14 +610,16 @@ userdefined.
|
||||
|
||||
Run:
|
||||
|
||||
\ **xdcp**\ \ *compute -F /tmp/myrsync*\
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
xdcp compute -F /tmp/myrsync
|
||||
|
||||
|
||||
|
||||
|
||||
\*
|
||||
|
||||
To rsync to the compute nodes, using service nodes, the command will first
|
||||
rsync the files to the /var/xcat/syncfiles directory on the service nodes and then rsync the files from that directory to the compute nodes. The /var/xcat/syncfiles default directory on the service nodes, can be changed by putting a directory value in the site table SNsyncfiledir attribute.
|
||||
10. To rsync to the compute nodes, using service nodes, the command will first
|
||||
rsync the files to the /var/xcat/syncfiles directory on the service nodes and then rsync the files from that directory to the compute nodes. The /var/xcat/syncfiles default directory on the service nodes, can be changed by putting a directory value in the site table SNsyncfiledir attribute.
|
||||
|
||||
Create a rsync file /tmp/myrsync, with this line:
|
||||
|
||||
@ -606,14 +631,18 @@ userdefined.
|
||||
|
||||
Run:
|
||||
|
||||
\ **xdcp**\ \ *compute -F /tmp/myrsync*\ to update the Compute Nodes
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
xdcp compute -F /tmp/myrsync
|
||||
|
||||
|
||||
to update the Compute Nodes
|
||||
|
||||
|
||||
|
||||
\*
|
||||
|
||||
To rsync to the service nodes in preparation for rsyncing the compute nodes
|
||||
during an install from the service node.
|
||||
11. To rsync to the service nodes in preparation for rsyncing the compute nodes
|
||||
during an install from the service node.
|
||||
|
||||
Create a rsync file /tmp/myrsync, with this line:
|
||||
|
||||
@ -621,13 +650,17 @@ userdefined.
|
||||
|
||||
Run:
|
||||
|
||||
\ **xdcp**\ \ *compute -s -F /tmp/myrsync*\ to sync the service node for compute
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
xdcp compute -s -F /tmp/myrsync
|
||||
|
||||
|
||||
to sync the service node for compute
|
||||
|
||||
|
||||
|
||||
\*
|
||||
|
||||
To rsync the /etc/file1 and file2 to your compute nodes and rename to filex and filey:
|
||||
12. To rsync the /etc/file1 and file2 to your compute nodes and rename to filex and filey:
|
||||
|
||||
Create a rsync file /tmp/myrsync, with these line:
|
||||
|
||||
@ -637,13 +670,17 @@ userdefined.
|
||||
|
||||
Run:
|
||||
|
||||
\ **xdcp**\ \ *compute -F /tmp/myrsync*\ to update the Compute Nodes
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
xdcp compute -F /tmp/myrsync
|
||||
|
||||
|
||||
to update the Compute Nodes
|
||||
|
||||
|
||||
|
||||
\*
|
||||
|
||||
To rsync files in the Linux image at /install/netboot/fedora9/x86_64/compute/rootimg on the MN:
|
||||
13. To rsync files in the Linux image at /install/netboot/fedora9/x86_64/compute/rootimg on the MN:
|
||||
|
||||
Create a rsync file /tmp/myrsync, with this line:
|
||||
|
||||
@ -651,15 +688,21 @@ userdefined.
|
||||
|
||||
Run:
|
||||
|
||||
\ **xdcp**\ \ *-i /install/netboot/fedora9/x86_64/compute/rootimg -F /tmp/myrsync*\
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
xdcp -i /install/netboot/fedora9/x86_64/compute/rootimg -F /tmp/myrsync
|
||||
|
||||
|
||||
|
||||
|
||||
\*
|
||||
14. To define the Management Node in the database so you can use xdcp, run
|
||||
|
||||
To define the Management Node in the database so you can use xdcp,run
|
||||
|
||||
\ **xcatconfig -m**\
|
||||
.. code-block:: perl
|
||||
|
||||
xcatconfig -m
|
||||
|
||||
|
||||
|
||||
|
||||
|
@ -582,121 +582,161 @@ The dsh command exit code is 0 if the command executed without errors and all re
|
||||
|
||||
|
||||
|
||||
\*
|
||||
1. To set up the SSH keys for root on node1, run as root:
|
||||
|
||||
To set up the SSH keys for root on node1, run as root:
|
||||
|
||||
\ **xdsh**\ \ *node1 -K*\
|
||||
.. code-block:: perl
|
||||
|
||||
xdsh node1 -K
|
||||
|
||||
|
||||
|
||||
|
||||
\*
|
||||
2. To run the \ **ps -ef**\ command on node targets \ **node1**\ and \ **node2**\ , enter:
|
||||
|
||||
To run the \ **ps -ef **\ command on node targets \ **node1**\ and \ **node2**\ , enter:
|
||||
|
||||
\ **xdsh**\ \ *node1,node2 "ps -ef"*\
|
||||
.. code-block:: perl
|
||||
|
||||
xdsh node1,node2 "ps -ef"
|
||||
|
||||
|
||||
|
||||
|
||||
\*
|
||||
3. To run the \ **ps**\ command on node targets \ **node1**\ and run the remote command with the -v and -t flag, enter:
|
||||
|
||||
To run the \ **ps**\ command on node targets \ **node1**\ and run the remote command with the -v and -t flag, enter:
|
||||
|
||||
\ **xdsh**\ \ *node1,node2 -o"-v -t" ps*\
|
||||
=item \*
|
||||
.. code-block:: perl
|
||||
|
||||
To execute the commands contained in \ **myfile**\ in the \ **XCAT**\
|
||||
context on several node targets, with a fanout of \ **1**\ , enter:
|
||||
xdsh node1,node2 -o"-v -t" ps
|
||||
|
||||
\ **xdsh**\ \ *node1,node2 -f 1 -e myfile*\
|
||||
|
||||
|
||||
|
||||
\*
|
||||
4. To execute the commands contained in \ **myfile**\ in the \ **XCAT**\
|
||||
context on several node targets, with a fanout of \ **1**\ , enter:
|
||||
|
||||
To run the ps command on node1 and ignore all the dsh
|
||||
environment variable except the DSH_NODE_OPTS, enter:
|
||||
|
||||
\ **xdsh**\ \ *node1 -X \\`DSH_NODE_OPTS' ps*\
|
||||
.. code-block:: perl
|
||||
|
||||
xdsh node1,node2 -f 1 -e myfile
|
||||
|
||||
|
||||
|
||||
|
||||
\*
|
||||
5. To run the ps command on node1 and ignore all the dsh
|
||||
environment variable except the DSH_NODE_OPTS, enter:
|
||||
|
||||
To run on Linux, the xdsh command "rpm -qa | grep xCAT"
|
||||
on the service node fedora9 diskless image, enter:
|
||||
|
||||
\ **xdsh**\ \ *-i /install/netboot/fedora9/x86_64/service/rootimg "rpm -qa | grep xCAT"*\
|
||||
.. code-block:: perl
|
||||
|
||||
xdsh node1 -X `DSH_NODE_OPTS' ps
|
||||
|
||||
|
||||
|
||||
|
||||
\*
|
||||
6. To run on Linux, the xdsh command "rpm -qa | grep xCAT"
|
||||
on the service node fedora9 diskless image, enter:
|
||||
|
||||
To run on AIX, the xdsh command "lslpp -l | grep bos"
|
||||
on the NIM 611dskls spot, enter:
|
||||
|
||||
\ **xdsh**\ \ *-i 611dskls "/usr/bin/lslpp -l | grep bos"*\
|
||||
.. code-block:: perl
|
||||
|
||||
xdsh -i /install/netboot/fedora9/x86_64/service/rootimg "rpm -qa | grep xCAT"
|
||||
|
||||
|
||||
|
||||
|
||||
\*
|
||||
7. To run on AIX, the xdsh command "lslpp -l | grep bos" on the NIM 611dskls spot, enter:
|
||||
|
||||
To cleanup the servicenode directory that stages the copy of files to the
|
||||
nodes, enter:
|
||||
|
||||
\ **xdsh**\ \ *servicenoderange -c *\
|
||||
.. code-block:: perl
|
||||
|
||||
xdsh -i 611dskls "/usr/bin/lslpp -l | grep bos"
|
||||
|
||||
|
||||
|
||||
|
||||
\*
|
||||
8. To cleanup the servicenode directory that stages the copy of files to the nodes, enter:
|
||||
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
xdsh servicenoderange -c
|
||||
|
||||
|
||||
|
||||
|
||||
9.
|
||||
|
||||
To define the QLogic IB switch as a node and to set up the SSH keys for IB switch
|
||||
\ **qswitch**\ with device configuration file
|
||||
\ **/var/opt/xcat/IBSwitch/Qlogic/config**\ and user name \ **username**\ , Enter
|
||||
|
||||
\ **chdef**\ \ *-t node -o qswitch groups=all nodetype=switch*\
|
||||
|
||||
\ **xdsh**\ \ *qswitch -K -l username -**\ **-devicetype IBSwitch::Qlogic*\
|
||||
.. code-block:: perl
|
||||
|
||||
chdef -t node -o qswitch groups=all nodetype=switch
|
||||
|
||||
xdsh qswitch -K -l username --devicetype IBSwitch::Qlogic
|
||||
|
||||
|
||||
|
||||
|
||||
\*
|
||||
10. To define the Management Node in the database so you can use xdsh, Enter
|
||||
|
||||
To define the Management Node in the database so you can use xdsh, Enter
|
||||
|
||||
\ **xcatconfig -m**\
|
||||
.. code-block:: perl
|
||||
|
||||
xcatconfig -m
|
||||
|
||||
|
||||
|
||||
|
||||
\*
|
||||
11. To define the Mellanox switch as a node and run a command to show the ssh keys.
|
||||
\ **mswitch**\ with user name \ **username**\ , Enter
|
||||
|
||||
To define the Mellanox switch as a node and run a command to show the ssh keys.
|
||||
\ **mswitch**\ with and user name \ **username**\ , Enter
|
||||
|
||||
\ **chdef**\ \ *-t node -o mswitch groups=all nodetype=switch*\
|
||||
.. code-block:: perl
|
||||
|
||||
chdef -t node -o mswitch groups=all nodetype=switch
|
||||
|
||||
xdsh mswitch -l admin --devicetype IBSwitch::Mellanox 'enable;configure terminal;show ssh server host-keys'
|
||||
|
||||
\ **xdsh**\ \ *mswitch -l admin -**\ **-devicetype IBSwitch::Mellanox 'enable;configure terminal;show ssh server host-keys'*\
|
||||
|
||||
|
||||
|
||||
\*
|
||||
12.
|
||||
|
||||
To define a BNT Ethernet switch as a node and run a command to create a new vlan with vlan id 3 on the switch.
|
||||
|
||||
\ **chdef**\ \ *myswitch groups=all*\
|
||||
|
||||
\ **tabch**\ \ *switch=myswitch switches.sshusername=admin switches.sshpassword=passw0rd switches.protocol=[ssh|telnet]*\
|
||||
where \ *admin*\ and \ *passw0rd*\ are the SSH user name and password for the switch. If it is for Telnet, add \ *tn:*\ in front of the user name: \ *tn:admin*\ .
|
||||
.. code-block:: perl
|
||||
|
||||
chdef myswitch groups=all
|
||||
|
||||
tabch switch=myswitch switches.sshusername=admin switches.sshpassword=passw0rd switches.protocol=[ssh|telnet]
|
||||
|
||||
|
||||
where \ *admin*\ and \ *passw0rd*\ are the SSH user name and password for the switch.
|
||||
|
||||
If it is for Telnet, add \ *tn:*\ in front of the user name: \ *tn:admin*\ .
|
||||
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
xdsh myswitch --devicetype EthSwitch::BNT 'enable;configure terminal;vlan 3;end;show vlan'
|
||||
|
||||
<xdsh> \ *myswitch --devicetype EthSwitch::BNT 'enable;configure terminal;vlan 3;end;show vlan'*\
|
||||
|
||||
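As a hedged sketch of the Telnet case mentioned above (the values are illustrative only), the tabch line would carry the tn: prefix on the user name:


.. code-block:: perl

  tabch switch=myswitch switches.sshusername=tn:admin switches.sshpassword=passw0rd switches.protocol=telnet
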
|
||||
|
||||
\*
|
||||
13.
|
||||
|
||||
To run xdsh with the non-root userid "user1" that has been setup as an xCAT userid and with sudo on node1 and node2 to run as root, do the following, see xCAT doc on Granting_Users_xCAT_privileges:
|
||||
|
||||
\ **xdsh**\ \ *node1,node2 -**\ **-sudo -l user1 "cat /etc/passwd"*\
|
||||
|
||||
.. code-block:: perl
|
||||
|
||||
xdsh node1,node2 --sudo -l user1 "cat /etc/passwd"
|
||||
|
||||
|
||||
|
||||
|
||||
|
@ -145,10 +145,8 @@ OPTIONS
|
||||
|
||||
|
||||
|
||||
\*
|
||||
|
||||
To display the results of a command issued on several nodes, in
|
||||
the format used in the Description, enter:
|
||||
1. To display the results of a command issued on several nodes, in
|
||||
the format used in the Description, enter:
|
||||
|
||||
|
||||
.. code-block:: perl
|
||||
@ -158,7 +156,7 @@ OPTIONS
|
||||
|
||||
|
||||
|
||||
\*
|
||||
2.
|
||||
|
||||
To display the results of a command issued on several nodes with
|
||||
identical output displayed only once, enter:
|
||||
@ -171,10 +169,8 @@ OPTIONS
|
||||
|
||||
|
||||
|
||||
\*
|
||||
|
||||
To display the results of a command issued on several nodes with
|
||||
compact output and be sorted alphabetically by host name, enter:
|
||||
3. To display the results of a command issued on several nodes with
|
||||
compact output and be sorted alphabetically by host name, enter:
|
||||
|
||||
|
||||
.. code-block:: perl
|
||||
|
@ -74,12 +74,14 @@ is identical:
|
||||
|
||||
|
||||
|
||||
\*
|
||||
1. To display the results of a command issued on several nodes, in
|
||||
the format used in the Description, enter:
|
||||
|
||||
To display the results of a command issued on several nodes, in
|
||||
the format used in the Description, enter:
|
||||
|
||||
\ **xdsh**\ \ *node1,node2,node3 cat /etc/passwd*\ | \ **xdshcoll**\
|
||||
.. code-block:: perl
|
||||
|
||||
xdsh node1,node2,node3 cat /etc/passwd | xdshcoll
|
||||
|
||||
|
||||
|
||||
|
||||
|
@ -19,7 +19,7 @@ SYNOPSIS
|
||||
********
|
||||
|
||||
|
||||
\ **xpbsnodes**\ [{\ **noderange**\ }] [{\ **offline | clear | stat | state**\ }]
|
||||
\ **xpbsnodes**\ [{\ *noderange*\ }] [{\ **offline | clear | stat | state**\ }]
|
||||
|
||||
\ **xpbsnodes**\ [\ **-h | -**\ **-help**\ ] [\ **-v | -**\ **-version**\ ]
|
||||
|
||||
@ -37,9 +37,9 @@ OPTIONS
|
||||
*******
|
||||
|
||||
|
||||
\ **-h**\ Display usage message.
|
||||
\ **-h|-**\ **-help**\ Display usage message.
|
||||
|
||||
\ **-v**\ Command Version.
|
||||
\ **-v|-**\ **-version**\ Command Version.
|
||||
|
||||
\ **offline|off**\ Take nodes offline.
|
||||
|
||||
|
@ -4,11 +4,11 @@ B<packimage> - Packs the stateless image from the chroot file system.
|
||||
|
||||
=head1 SYNOPSIS
|
||||
|
||||
I<packimage [-h| --help]>
|
||||
B<packimage [-h| --help]>
|
||||
|
||||
I<packimage [-v| --version]>
|
||||
B<packimage [-v| --version]>
|
||||
|
||||
I<packimage imagename>
|
||||
B<packimage> I<imagename>
|
||||
|
||||
=head1 DESCRIPTION
|
||||
|
||||
@ -19,7 +19,7 @@ The nodetype table "profile" attribute for the node should reflect the profile o
|
||||
|
||||
This command will get all the necessary os image definition files from the I<osimage> and I<linuximage> tables.
|
||||
|
||||
=head1 Parameters
|
||||
=head1 PARAMETERS
|
||||
|
||||
I<imagename> specifies the name of a os image definition to be used. The specification for the image is stored in the I<osimage> table and I<linuximage> table.
|
||||
|
||||
@ -49,7 +49,7 @@ B<-m> Method (default cpio)
|
||||
|
||||
1. To pack the osimage rhels7.1-x86_64-netboot-compute:
|
||||
|
||||
I<packimage rhels7.1-x86_64-netboot-compute>
|
||||
packimage rhels7.1-x86_64-netboot-compute
|
||||
|
||||
|
||||
=head1 FILES
|
||||
|
@ -5,13 +5,13 @@ B<pgsqlsetup> - Sets up the PostgreSQL database for xCAT to use.
|
||||
=head1 SYNOPSIS
|
||||
|
||||
|
||||
B<pgsqlsetup> {B<-h>|B<--help>}
|
||||
B<pgsqlsetup> {B<-h> | B<--help>}
|
||||
|
||||
B<pgsqlsetup> {B<-v>|B<--version>}
|
||||
B<pgsqlsetup> {B<-v> | B<--version>}
|
||||
|
||||
B<pgsqlsetup> {B<-i>|B<--init>} [-N|nostart] [-P|--PCM] [-o|--setupODBC] [B<-V>|B<--verbose>]
|
||||
B<pgsqlsetup> {B<-i> | B<--init>} [B<-N> | B<--nostart>] [B<-P> | B<--PCM>] [B<-o> | B<--odbc>] [B<-V> | B<--verbose>]
|
||||
|
||||
B<pgsqlsetup> {B<-o>|B<--setupODBC>} [-V|--verbose]
|
||||
B<pgsqlsetup> {B<-o> | B<--setupODBC>} [B<-V> | B<--verbose>]
|
||||
|
||||
=head1 DESCRIPTION
|
||||
|
||||
@ -71,16 +71,14 @@ The password to be used to setup the xCAT admin id for the database.
|
||||
|
||||
=over 2
|
||||
|
||||
=item *
|
||||
|
||||
=item 1.
|
||||
To setup PostgreSQL for xCAT to run on the PostgreSQL xcatdb database :
|
||||
|
||||
B<pgsqlsetup> I<-i>
|
||||
|
||||
=item *
|
||||
pgsqlsetup -i
|
||||
|
||||
=item 2.
|
||||
To setup the ODBC for PostgreSQL xcatdb database access :
|
||||
|
||||
B<pgsqlsetup> I<-o>
|
||||
pgsqlsetup -o
|
||||
|
||||
=back
|
||||
|
@ -52,7 +52,9 @@ Display the installed version of xCAT.
|
||||
|
||||
=item 1.
|
||||
|
||||
pping all
|
||||
pping all
|
||||
|
||||
Output is similar to:
|
||||
|
||||
node1: ping
|
||||
node2: ping
|
||||
@ -60,7 +62,9 @@ pping all
|
||||
|
||||
=item 2.
|
||||
|
||||
pping all -i ib0,ib1
|
||||
pping all -i ib0,ib1
|
||||
|
||||
Output is similar to:
|
||||
|
||||
node1-ib0: ping
|
||||
node2-ib0: ping
|
||||
|
@ -64,7 +64,9 @@ Display the installed version of xCAT.
|
||||
|
||||
=item 1.
|
||||
|
||||
ppping all -q
|
||||
ppping all -q
|
||||
|
||||
Output is similar to:
|
||||
|
||||
blade7: node2: noping
|
||||
blade8: node2: noping
|
||||
@ -74,7 +76,9 @@ ppping all -q
|
||||
|
||||
=item 2.
|
||||
|
||||
ppping node1,node2 -i ib0,ib1,ib2,ib3
|
||||
ppping node1,node2 -i ib0,ib1,ib2,ib3
|
||||
|
||||
Output is similar to:
|
||||
|
||||
node1: pinged all nodes successfully on interface ib0
|
||||
node1: pinged all nodes successfully on interface ib1
|
||||
|
@ -6,7 +6,7 @@ prsync - parallel rsync
|
||||
|
||||
B<prsync> I<filename> [I<filename> I<...>] I<noderange:destinationdirectory>
|
||||
|
||||
B<prsync> [I<-o rsync options>] [B<-f> I<fanout>] [I<filename> I<filename> I<...>] [I<directory> I<directory> I<...>]
|
||||
B<prsync> [B<-o> I<rsync options>] [B<-f> I<fanout>] [I<filename> I<filename> I<...>] [I<directory> I<directory> I<...>]
|
||||
I<noderange:destinationdirectory>
|
||||
|
||||
B<prsync> {B<-h>|B<--help>|B<-v>|B<--version>}
|
||||
@ -25,7 +25,7 @@ B<prsync> is NOT multicast, but is parallel unicasts.
|
||||
|
||||
=over 7
|
||||
|
||||
=item B<rsyncopts>
|
||||
=item I<rsyncopts>
|
||||
|
||||
rsync options. See B<rsync(1)>.
|
||||
|
||||
@ -34,15 +34,15 @@ rsync options. See B<rsync(1)>.
|
||||
Specifies a fanout value for the maximum number of concurrently executing remote shell processes.
|
||||
|
||||
=item B<filename>
|
||||
=item I<filename>
|
||||
|
||||
A space delimited list of files to rsync.
|
||||
|
||||
=item B<directory>
|
||||
=item I<directory>
|
||||
|
||||
A space delimited list of directories to rsync.
|
||||
|
||||
=item B<noderange:destination>
|
||||
=item I<noderange:destination>
|
||||
|
||||
A L<noderange(3)|noderange.3> and destination directory. The : is required.
|
||||
|
||||
@ -70,13 +70,13 @@ the B<-f> flag. Default is 64.
|
||||
|
||||
=over 3
|
||||
|
||||
=item *
|
||||
=item 1.
|
||||
|
||||
B<cd> I</install;> B<prsync> B<-o "crz"> I<post> I<stage:/install>
|
||||
cd /install; prsync -o "crz" post stage:/install
|
||||
|
||||
=item *
|
||||
=item 2.
|
||||
|
||||
B<prsync> I<passwd> I<group> I<rack01:/etc>
|
||||
prsync passwd group rack01:/etc
|
||||
|
||||
=back
|
||||
|
||||
|
@ -4,7 +4,7 @@ B<pscp> - parallel remote copy
|
||||
|
||||
=head1 B<Synopsis>
|
||||
|
||||
B<pscp> [-i I<suffix>] [I<scp options> I<...>] [B<-f> I<fanout>] I<filename> [I<filename> I<...>] I<noderange:destinationdirectory>
|
||||
B<pscp> [B<-i> I<suffix>] [I<scp options> I<...>] [B<-f> I<fanout>] I<filename> [I<filename> I<...>] I<noderange:destinationdirectory>
|
||||
|
||||
B<pscp> {B<-h>|B<--help>|B<-v>|B<--version>}
|
||||
|
||||
@ -33,15 +33,15 @@ rently executing remote shell processes.
|
||||
|
||||
Interfaces to be used.
|
||||
|
||||
=item B<scp options>
|
||||
=item I<scp options>
|
||||
|
||||
See B<scp(1)>
|
||||
|
||||
=item B<filename>
|
||||
=item I<filename>
|
||||
|
||||
A space delimited list of files to copy. If B<-r> is passed as an scp option, directories may be specified as well.
|
||||
|
||||
=item B<noderange:destination>
|
||||
=item I<noderange:destination>
|
||||
|
||||
A L<noderange(3)|noderange.3> and destination directory. The : is required.
|
||||
|
||||
@ -66,8 +66,17 @@ the B<-f> flag. Default is 64.
|
||||
|
||||
=head1 B<Examples>
|
||||
|
||||
B<pscp> B<-r> I</usr/local> I<node1,node3:/usr/local>
|
||||
B<pscp> I<passwd> I<group> I<rack01:/etc>
|
||||
=over 2
|
||||
|
||||
=item 1.
|
||||
|
||||
pscp -r /usr/local node1,node3:/usr/local
|
||||
|
||||
=item 2.
|
||||
|
||||
pscp passwd group rack01:/etc
|
||||
|
||||
=back
|
||||
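As an additional hedged illustration (reusing the rack01 noderange from the example above; the fanout number is invented), the fanout described under the -f flag can be lowered explicitly:

   pscp -f 16 passwd group rack01:/etc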
|
||||
=head1 B<See> B<Also>
|
||||
|
||||
|
@ -49,11 +49,11 @@ Do not send the noderange to xcatd to expand it into a list of nodes. Instead,
|
||||
In this case, the noderange must be a simple list of comma-separated hostnames of the nodes.
|
||||
This allows you to run B<psh> even when xcatd is not running.
|
||||
|
||||
=item B<noderange>
|
||||
=item I<noderange>
|
||||
|
||||
See L<noderange(3)|noderange.3>.
|
||||
|
||||
=item B<command>
|
||||
=item I<command>
|
||||
|
||||
Command to be run in parallel. If no command is given then B<psh>
|
||||
enters interactive mode. In interactive mode a ">" prompt is
|
||||
@ -81,27 +81,26 @@ the B<-f> flag. Default is 64.
|
||||
|
||||
=over 3
|
||||
|
||||
=item *
|
||||
|
||||
=item 1.
|
||||
Run uptime on 3 nodes:
|
||||
|
||||
B<psh> I<node4-node6> I<uptime>
|
||||
psh node4-node6 uptime
|
||||
|
||||
node4: Sun Aug 5 17:42:06 MDT 2001
|
||||
node5: Sun Aug 5 17:42:06 MDT 2001
|
||||
node6: Sun Aug 5 17:42:06 MDT 2001
|
||||
Output is similar to:
|
||||
|
||||
=item *
|
||||
node4: Sun Aug 5 17:42:06 MDT 2001
|
||||
node5: Sun Aug 5 17:42:06 MDT 2001
|
||||
node6: Sun Aug 5 17:42:06 MDT 2001
|
||||
|
||||
=item 2.
|
||||
Run a command on some BladeCenter management modules:
|
||||
|
||||
B<psh> I<amm1-amm5> I<'info -T mm[1]'>
|
||||
|
||||
=item *
|
||||
psh amm1-amm5 'info -T mm[1]'
|
||||
|
||||
=item 3.
|
||||
Remove the tmp files on the nodes in the 1st frame:
|
||||
|
||||
B<psh> I<rack01> I<'rm -f /tmp/*'>
|
||||
psh rack01 'rm -f /tmp/*'
|
||||
|
||||
Notice the use of '' to forward shell expansion. This is not necessary
|
||||
in interactive mode.
|
||||
|
@ -54,7 +54,7 @@ method.
|
||||
|
||||
=head1 B<Examples>
|
||||
|
||||
B<rcons> I<node5>
|
||||
rcons node5
|
||||
|
||||
=head1 B<See> B<Also>
|
||||
|
||||
|
@ -5,12 +5,11 @@ B<regnotif> - Registers a Perl module or a command that will get called when cha
|
||||
|
||||
=head1 SYNOPSIS
|
||||
|
||||
I<regnotif [-h| --help]>
|
||||
B<regnotif [-h| --help]>
|
||||
|
||||
I<regnotif [-v| --version]>
|
||||
B<regnotif [-v| --version]>
|
||||
|
||||
|
||||
I<regnotif I<filename tablename>[,tablename]... [-o|--operation actions]>
|
||||
B<regnotif> I<filename tablename[,tablename]...> [B<-o>|B<--operation> I<actions>]
|
||||
|
||||
|
||||
=head1 DESCRIPTION
|
||||
@ -18,7 +17,7 @@ I<regnotif I<filename tablename>[,tablename]... [-o|--operation actions]>
|
||||
This command is used to register a Perl module or a command to the xCAT notification table. Once registered, the module or the command will get called when changes occur in the xCAT database tables indicated by tablename. The changes can be row addition, deletion and update which are specified by actions.
|
||||
|
||||
|
||||
=head1 Parameters
|
||||
=head1 PARAMETERS
|
||||
|
||||
I<filename> is the path name of the Perl module or command to be registered.
|
||||
I<tablename> is the name of the table that the user is interested in.
|
||||
@ -26,13 +25,13 @@ I<tablename> is the name of the table that the user is interested in.
|
||||
=head1 OPTIONS
|
||||
|
||||
|
||||
B<-h | -help> Display usage message.
|
||||
B<-h | --help> Display usage message.
|
||||
|
||||
B<-v | -version > Command Version.
|
||||
B<-v | --version> Command Version.
|
||||
|
||||
B<-V | -verbose> Verbose output.
|
||||
B<-V | --verbose> Verbose output.
|
||||
|
||||
B<-o | -operation> specifies the database table actions that the user is interested in. It is a comma separated list. 'a' for row addition, 'd' for row deletion and 'u' for row update.
|
||||
B<-o | --operation> specifies the database table actions that the user is interested in. It is a comma separated list. 'a' for row addition, 'd' for row deletion and 'u' for row update.
|
||||
|
||||
=head1 RETURN VALUE
|
||||
|
||||
@ -48,7 +47,7 @@ B<-o | -operation> specifies the database table actions that the user is int
|
||||
|
||||
2. To register a command that gets invoked when rows get updated in the switch table, enter:
|
||||
|
||||
regnotif /usr/bin/mycmd switch -o u
|
||||
regnotif /usr/bin/mycmd switch -o u
|
||||
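As a further hedged sketch (the module path is hypothetical), a Perl module could be registered for both row additions and updates on two tables:

   regnotif /opt/xcat/lib/perl/mymodule.pm nodelist,switch -o a,u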
|
||||
=head1 FILES
|
||||
|
||||
|
@ -4,22 +4,17 @@ B<renergy> - remote energy management tool
|
||||
|
||||
=head1 B<SYNOPSIS>
|
||||
|
||||
B<renergy> [-h | --help]
|
||||
B<renergy> [B<-h> | B<--help>]
|
||||
|
||||
B<renergy> [-v | --version]
|
||||
B<renergy> [B<-v> | B<--version>]
|
||||
|
||||
B<Power 6 server specific :>
|
||||
|
||||
=over 2
|
||||
|
||||
B<renergy> I<noderange> [-V] { all | [savingstatus] [cappingstatus]
|
||||
[cappingmaxmin] [cappingvalue] [cappingsoftmin] [averageAC]
|
||||
[averageDC] [ambienttemp] [exhausttemp] [CPUspeed]
|
||||
[syssbpower] [sysIPLtime]}
|
||||
B<renergy> I<noderange> [B<-V>] {B<all | [savingstatus] [cappingstatus] [cappingmaxmin] [cappingvalue] [cappingsoftmin] [averageAC] [averageDC] [ambienttemp] [exhausttemp] [CPUspeed] [syssbpower] [sysIPLtime]>}
|
||||
|
||||
B<renergy> I<noderange> [-V] { savingstatus={on | off}
|
||||
| cappingstatus={on | off} | cappingwatt=watt
|
||||
| cappingperc=percentage }
|
||||
B<renergy> I<noderange> [B<-V>] {B<savingstatus={on | off} | cappingstatus={on | off} | cappingwatt=watt | cappingperc=percentage>}
|
||||
|
||||
=back
|
||||
|
||||
@ -27,17 +22,9 @@ B<Power 7 server specific :>
|
||||
|
||||
=over 2
|
||||
|
||||
B<renergy> I<noderange> [-V] { all | [savingstatus] [dsavingstatus]
|
||||
[cappingstatus] [cappingmaxmin] [cappingvalue] [cappingsoftmin]
|
||||
[averageAC] [averageDC] [ambienttemp] [exhausttemp] [CPUspeed]
|
||||
[syssbpower] [sysIPLtime] [fsavingstatus] [ffoMin] [ffoVmin]
|
||||
[ffoTurbo] [ffoNorm] [ffovalue]}
|
||||
B<renergy> I<noderange> [B<-V>] {B<all | [savingstatus] [dsavingstatus] [cappingstatus] [cappingmaxmin] [cappingvalue] [cappingsoftmin] [averageAC] [averageDC] [ambienttemp] [exhausttemp] [CPUspeed] [syssbpower] [sysIPLtime] [fsavingstatus] [ffoMin] [ffoVmin] [ffoTurbo] [ffoNorm] [ffovalue]>}
|
||||
|
||||
B<renergy> I<noderange> [-V] { savingstatus={on | off}
|
||||
| dsavingstatus={on-norm | on-maxp | off}
|
||||
| fsavingstatus={on | off} | ffovalue=MHZ
|
||||
| cappingstatus={on | off} | cappingwatt=watt
|
||||
| cappingperc=percentage }
|
||||
B<renergy> I<noderange> [B<-V>] {B<savingstatus={on | off} | dsavingstatus={on-norm | on-maxp | off} | fsavingstatus={on | off} | ffovalue=MHZ | cappingstatus={on | off} | cappingwatt=watt | cappingperc=percentage>}
|
||||
|
||||
=back
|
||||
|
||||
@ -45,16 +32,9 @@ B<Power 8 server specific :>
|
||||
|
||||
=over 2
|
||||
|
||||
B<renergy> I<noderange> [-V] { all | [savingstatus] [dsavingstatus]
|
||||
[averageAC] [averageAChistory] [averageDC] [averageDChistory]
|
||||
[ambienttemp] [ambienttemphistory] [exhausttemp] [exhausttemphistory]
|
||||
[fanspeed] [fanspeedhistory] [CPUspeed] [CPUspeedhistory]
|
||||
[syssbpower] [sysIPLtime] [fsavingstatus] [ffoMin] [ffoVmin]
|
||||
[ffoTurbo] [ffoNorm] [ffovalue]}
|
||||
B<renergy> I<noderange> [B<-V>] {B<all | [savingstatus] [dsavingstatus] [averageAC] [averageAChistory] [averageDC] [averageDChistory] [ambienttemp] [ambienttemphistory] [exhausttemp] [exhausttemphistory] [fanspeed] [fanspeedhistory] [CPUspeed] [CPUspeedhistory] [syssbpower] [sysIPLtime] [fsavingstatus] [ffoMin] [ffoVmin] [ffoTurbo] [ffoNorm] [ffovalue]>}
|
||||
|
||||
B<renergy> I<noderange> [-V] { savingstatus={on | off}
|
||||
| dsavingstatus={on-norm | on-maxp | off}
|
||||
| fsavingstatus={on | off} | ffovalue=MHZ }
|
||||
B<renergy> I<noderange> B<[-V] {savingstatus={on | off} | dsavingstatus={on-norm | on-maxp | off} | fsavingstatus={on | off} | ffovalue=MHZ }>
|
||||
|
||||
I<NOTE:> The setting operation for B<Power 8> server is only supported
|
||||
for the server which is running in PowerVM mode. Do NOT run the setting
|
||||
@ -70,13 +50,7 @@ B<For Management Modules:>
|
||||
|
||||
=over 4
|
||||
|
||||
B<renergy> I<noderange> [-V] { all | pd1all | pd2all | [pd1status]
|
||||
[pd2status] [pd1policy] [pd2policy] [pd1powermodule1]
|
||||
[pd1powermodule2] [pd2powermodule1] [pd2powermodule2]
|
||||
[pd1avaiablepower] [pd2avaiablepower] [pd1reservedpower]
|
||||
[pd2reservedpower] [pd1remainpower] [pd2remainpower]
|
||||
[pd1inusedpower] [pd2inusedpower] [availableDC] [averageAC]
|
||||
[thermaloutput] [ambienttemp] [mmtemp] }
|
||||
B<renergy> I<noderange> [B<-V>] {B<all | pd1all | pd2all | [pd1status] [pd2status] [pd1policy] [pd2policy] [pd1powermodule1] [pd1powermodule2] [pd2powermodule1] [pd2powermodule2] [pd1avaiablepower] [pd2avaiablepower] [pd1reservedpower] [pd2reservedpower] [pd1remainpower] [pd2remainpower] [pd1inusedpower] [pd2inusedpower] [availableDC] [averageAC] [thermaloutput] [ambienttemp] [mmtemp]>}
|
||||
|
||||
=back
|
||||
|
||||
@ -84,12 +58,9 @@ B<For a blade server nodes:>
|
||||
|
||||
=over 4
|
||||
|
||||
B<renergy> I<noderange> [-V] { all | [averageDC]
|
||||
[capability] [cappingvalue] [CPUspeed] [maxCPUspeed]
|
||||
[savingstatus] [dsavingstatus] }
|
||||
B<renergy> I<noderange> [B<-V>] {B<all | [averageDC] [capability] [cappingvalue] [CPUspeed] [maxCPUspeed] [savingstatus] [dsavingstatus]>}
|
||||
|
||||
B<renergy> I<noderange> [-V] { savingstatus={on | off}
|
||||
| dsavingstatus={on-norm | on-maxp | off} }
|
||||
B<renergy> I<noderange> [B<-V>] {B<savingstatus={on | off} | dsavingstatus={on-norm | on-maxp | off}>}
|
||||
|
||||
=back
|
||||
|
||||
@ -103,10 +74,7 @@ B<For Flex Management Modules:>
|
||||
|
||||
=over 4
|
||||
|
||||
B<renergy> I<noderange> [-V] { all | [powerstatus]
|
||||
[powerpolicy] [powermodule] [avaiablepower] [reservedpower]
|
||||
[remainpower] [inusedpower] [availableDC] [averageAC]
|
||||
[thermaloutput] [ambienttemp] [mmtemp] }
|
||||
B<renergy> I<noderange> [B<-V>] {B<all | [powerstatus] [powerpolicy] [powermodule] [avaiablepower] [reservedpower] [remainpower] [inusedpower] [availableDC] [averageAC] [thermaloutput] [ambienttemp] [mmtemp]>}
|
||||
|
||||
=back
|
||||
|
||||
@ -114,14 +82,9 @@ B<For Flex node (power and x86):>
|
||||
|
||||
=over 4
|
||||
|
||||
B<renergy> I<noderange> [-V] { all | [averageDC]
|
||||
[capability] [cappingvalue] [cappingmaxmin] [cappingmax]
|
||||
[cappingmin] [cappingGmin] [CPUspeed] [maxCPUspeed]
|
||||
[savingstatus] [dsavingstatus] }
|
||||
B<renergy> I<noderange> [B<-V>] {B<all | [averageDC] [capability] [cappingvalue] [cappingmaxmin] [cappingmax] [cappingmin] [cappingGmin] [CPUspeed] [maxCPUspeed] [savingstatus] [dsavingstatus]>}
|
||||
|
||||
B<renergy> I<noderange> [-V] { cappingstatus={on | off}
|
||||
| cappingwatt=watt | cappingperc=percentage
|
||||
| savingstatus={on | off} | dsavingstatus={on-norm | on-maxp | off} }
|
||||
B<renergy> I<noderange> [B<-V>] {B<cappingstatus={on | off} | cappingwatt=watt | cappingperc=percentage | savingstatus={on | off} | dsavingstatus={on-norm | on-maxp | off}>}
|
||||
|
||||
=back
|
||||
|
||||
@ -132,11 +95,9 @@ B<iDataPlex specific :>
|
||||
|
||||
=over 2
|
||||
|
||||
B<renergy> I<noderange> [-V] [ { cappingmaxmin | cappingmax | cappingmin } ]
|
||||
[cappingstatus] [cappingvalue] [relhistogram]
|
||||
B<renergy> I<noderange> [B<-V>] [{B<cappingmaxmin | cappingmax | cappingmin>}] [cappingstatus] [cappingvalue] [relhistogram]
|
||||
|
||||
B<renergy> I<noderange> [-V] { cappingstatus={on | enable | off | disable}
|
||||
| {cappingwatt|cappingvalue}=watt }
|
||||
B<renergy> I<noderange> [B<-V>] {B<cappingstatus={on | enable | off | disable} | {cappingwatt|cappingvalue}=watt>}
|
||||
|
||||
=back
|
||||
|
||||
@ -144,7 +105,7 @@ B<OpenPOWER server specific :>
|
||||
|
||||
=over 2
|
||||
|
||||
B<renergy> I<noderange> { powerusage | temperature }
|
||||
B<renergy> I<noderange> {B<powerusage | temperature>}
|
||||
|
||||
=back
|
||||
|
||||
@ -335,7 +296,7 @@ averageAC is the aggregate for all of the servers in a rack.
|
||||
|
||||
Note: For Blade Center, the value of attribute
|
||||
averageAC is the total AC power being consumed by all modules
|
||||
in the chassis. It also includes power consumed by the Chassis
|
||||
in the chassis. It also includes power consumed by the Chassis
|
||||
Cooling Devices for BCH chassis.
|
||||
|
||||
=item B<averageAChistory>
|
||||
@ -669,11 +630,10 @@ Currently, only CPU temperature and baseboard temperature sensor available for O
|
||||
|
||||
=over 3
|
||||
|
||||
=item 1
|
||||
|
||||
=item 1.
|
||||
Query all attributes which CEC1,CEC2 supported.
|
||||
|
||||
B<renergy> CEC1,CEC2 all
|
||||
renergy CEC1,CEC2 all
|
||||
|
||||
The output of the query operation:
|
||||
|
||||
@ -703,11 +663,10 @@ The output of the query operation:
|
||||
CEC2: exhausttemp: 40 C
|
||||
CEC2: CPUspeed: 4695 MHz
|
||||
|
||||
=item 2
|
||||
|
||||
=item 2.
|
||||
Query the B<fanspeed> attribute for Power8 CEC.
|
||||
|
||||
B<renergy> CEC1 fanspeed
|
||||
renergy CEC1 fanspeed
|
||||
|
||||
The output of the query operation:
|
||||
|
||||
@ -720,8 +679,7 @@ The output of the query operation:
|
||||
CEC1: fanspeed (Fan U78CB.001.WZS00MA-E1 0000210C): 4992 RPM
|
||||
CEC1: fanspeed (Fan U78CB.001.WZS00MA-E2 0000210D): 5016 RPM
|
||||
|
||||
=item 3
|
||||
|
||||
=item 3.
|
||||
Query the historical records for the B<CPUspeed> attribute. (Power8 CEC)
|
||||
|
||||
B<renergy> CEC1 CPUspeedhistory
|
||||
@ -744,7 +702,7 @@ The output of the query operation:
|
||||
|
||||
Query all the attributes for management module node MM1. (For chassis)
|
||||
|
||||
B<renergy> MM1 all
|
||||
renergy MM1 all
|
||||
|
||||
The output of the query operation:
|
||||
|
||||
@ -767,15 +725,13 @@ The output of the query operation:
|
||||
mm1: pd2powermodule2: Bay 4: 2940W
|
||||
mm1: pd2remainpower: 51W
|
||||
mm1: pd2reservedpower: 2889W
|
||||
mm1: pd2status: 2 - Warning: Power redundancy does not exist
|
||||
in this power domain.
|
||||
mm1: pd2status: 2 - Warning: Power redundancy does not exist in this power domain.
|
||||
mm1: thermaloutput: 9717.376000 BTU/hour
|
||||
|
||||
=item 5
|
||||
|
||||
=item 5.
|
||||
Query all the attributes for blade server node blade1.
|
||||
|
||||
B<renergy> blade1 all
|
||||
renergy blade1 all
|
||||
|
||||
The output of the query operation:
|
||||
|
||||
@ -787,12 +743,11 @@ The output of the query operation:
|
||||
blade1: maxCPUspeed: 4204MHZ
|
||||
blade1: savingstatus: off
|
||||
|
||||
=item 6
|
||||
|
||||
=item 6.
|
||||
Query the attributes savingstatus, cappingstatus
|
||||
and CPUspeed for server CEC1.
|
||||
|
||||
B<renergy> CEC1 savingstatus cappingstatus CPUspeed
|
||||
renergy CEC1 savingstatus cappingstatus CPUspeed
|
||||
|
||||
The output of the query operation:
|
||||
|
||||
@ -800,23 +755,21 @@ The output of the query operation:
|
||||
CEC1: cappingstatus: on
|
||||
CEC1: CPUspeed: 3621 MHz
|
||||
|
||||
=item 7
|
||||
|
||||
=item 7.
|
||||
Turn on the power saving function of CEC1.
|
||||
|
||||
B<renergy> CEC1 savingstatus=on
|
||||
renergy CEC1 savingstatus=on
|
||||
|
||||
The output of the setting operation:
|
||||
|
||||
CEC1: Set savingstatus succeeded.
|
||||
CEC1: This setting may need some minutes to take effect.
|
||||
|
||||
=item 8
|
||||
|
||||
=item 8.
|
||||
Set the power capping value base on the percentage of the
|
||||
max-min capping value. Here, set it to 50%.
|
||||
|
||||
B<renergy> CEC1 cappingperc=50
|
||||
renergy CEC1 cappingperc=50
|
||||
|
||||
If the maximum capping value of the CEC1 is 850w, and the
|
||||
minimum capping value of the CEC1 is 782w, the Power Capping
|
||||
@ -827,11 +780,10 @@ The output of the setting operation:
|
||||
CEC1: Set cappingperc succeeded.
|
||||
CEC1: cappingvalue: 816
|
||||
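A worked illustration of how that number follows from the values quoted above, assuming a simple linear interpolation between the minimum and maximum capping values:

   782 + 0.50 * (850 - 782) = 816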
|
||||
=item 9
|
||||
|
||||
=item 9.
|
||||
Query powerusage and temperature for OpenPOWER servers.
|
||||
|
||||
B<renergy> ops01 powerusage temperature
|
||||
renergy ops01 powerusage temperature
|
||||
|
||||
The output will be like this:
|
||||
|
||||
@ -851,23 +803,20 @@ The output will be like this:
|
||||
|
||||
=over 3
|
||||
|
||||
=item 1
|
||||
|
||||
=item 1.
|
||||
For more information on 'Power System Energy Management':
|
||||
|
||||
http://www-03.ibm.com/systems/power/software/energy/index.html
|
||||
|
||||
=item 2
|
||||
http://www-03.ibm.com/systems/power/software/energy/index.html
|
||||
|
||||
=item 2.
|
||||
EnergyScale white paper for Power6:
|
||||
|
||||
http://www-03.ibm.com/systems/power/hardware/whitepapers/energyscale.html
|
||||
|
||||
=item 3
|
||||
http://www-03.ibm.com/systems/power/hardware/whitepapers/energyscale.html
|
||||
|
||||
=item 3.
|
||||
EnergyScale white paper for Power7:
|
||||
|
||||
http://www-03.ibm.com/systems/power/hardware/whitepapers/energyscale7.html
|
||||
http://www-03.ibm.com/systems/power/hardware/whitepapers/energyscale7.html
|
||||
|
||||
=back
|
||||
|
||||
|
@ -24,7 +24,7 @@ that management node, but in a hierarchical cluster will usually be the service
|
||||
|
||||
=over 10
|
||||
|
||||
=item I<bps>]
|
||||
=item I<bps>
|
||||
|
||||
The display rate to use to play back the console output. Default is 19200.
|
||||
|
||||
@ -48,11 +48,9 @@ Display usage message.
|
||||
=over 3
|
||||
|
||||
=item 0
|
||||
|
||||
The command completed successfully.
|
||||
|
||||
=item 1
|
||||
|
||||
An error has occurred.
|
||||
|
||||
=back
|
||||
|
@ -38,13 +38,13 @@ If the xcatd subsystem was not created, B<restartxcatd> will create it automatic
|
||||
=head1 OPTIONS
|
||||
|
||||
|
||||
B<-h> Display usage message.
|
||||
B<-h|--help> Display usage message.
|
||||
|
||||
B<-v> Command Version.
|
||||
B<-v|--version> Command Version.
|
||||
|
||||
B<-r> On a Service Node, services will not be restarted.
|
||||
B<-r|--reload> On a Service Node, services will not be restarted.
|
||||
|
||||
B<-V> Display the verbose messages.
|
||||
B<-V|--verbose> Display the verbose messages.
|
||||
|
||||
|
||||
=head1 RETURN VALUE
|
||||
@ -57,7 +57,7 @@ B<-V> Display the verbose messages.
|
||||
|
||||
1. To restart the xCAT daemon, enter:
|
||||
|
||||
B<restartxcatd>
|
||||
restartxcatd
|
||||
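As a further hedged illustration, the reload behavior described under the -r flag would be requested on a Service Node with:

   restartxcatd -r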
|
||||
|
||||
|
||||
|
@ -23,20 +23,19 @@ For postgreSQL, you do not have to stop the applications accessing the database
|
||||
=head1 OPTIONS
|
||||
|
||||
|
||||
B<-h> Display usage message.
|
||||
B<-h|--help> Display usage message.
|
||||
|
||||
B<-v> Command Version.
|
||||
B<-v|--version> Command Version.
|
||||
|
||||
B<-V> Verbose.
|
||||
B<-V|--verbose> Verbose.
|
||||
|
||||
B<-a> All,without this flag the eventlog and auditlog will be skipped.
|
||||
These tables are skipped by default because restoring will generate new indexes
|
||||
B<-a> All, without this flag the eventlog and auditlog will be skipped. These tables are skipped by default because restoring will generate new indexes
|
||||
|
||||
B<-b> Restore from the binary image.
|
||||
B<-b> Restore from the binary image.
|
||||
|
||||
B<-p> Path to the directory containing the database restore files. If restoring from the binary image (-b) and using postgeSQL, then this is the complete path to the restore file that was created with dumpxCATdb -b.
|
||||
B<-p|--path> Path to the directory containing the database restore files. If restoring from the binary image (-b) and using postgreSQL, then this is the complete path to the restore file that was created with dumpxCATdb -b.
|
||||
|
||||
B<-t> Use with the -b flag to designate the timestamp of the binary image to use to restore for DB2.
|
||||
B<-t|--timestamp> Use with the -b flag to designate the timestamp of the binary image to use to restore for DB2.
|
||||
|
||||
=head1 RETURN VALUE
|
||||
|
||||
@ -48,19 +47,19 @@ B<-t> Use with the -b flag to designate the timestamp of the binary ima
|
||||
|
||||
1. To restore the xCAT database from the /dbbackup/db directory, enter:
|
||||
|
||||
B<restorexCATdb -p /dbbackup/db>
|
||||
restorexCATdb -p /dbbackup/db
|
||||
|
||||
2. To restore the xCAT database including auditlog and eventlog from the /dbbackup/db directory, enter:
|
||||
|
||||
B<restorexCATdb -a -p /dbbackup/db>
|
||||
restorexCATdb -a -p /dbbackup/db
|
||||
|
||||
3. To restore the xCAT DB2 database from the binary image with timestamp 20111130130239 enter:
|
||||
|
||||
B<restorexCATdb -b -t 20111130130239 -p /dbbackup/db>
|
||||
restorexCATdb -b -t 20111130130239 -p /dbbackup/db
|
||||
|
||||
4. To restore the xCAT postgreSQL database from the binary image file pgbackup.20553 created by dumpxCATdb enter:
|
||||
|
||||
B<restorexCATdb -b -p /dbbackup/db/pgbackup.20553>
|
||||
restorexCATdb -b -p /dbbackup/db/pgbackup.20553
|
||||
|
||||
=head1 FILES
|
||||
|
||||
|
@ -4,7 +4,7 @@ B<reventlog> - retrieve or clear remote hardware event logs
|
||||
|
||||
=head1 B<Synopsis>
|
||||
|
||||
B<reventlog> I<noderange> {I<number-of-entries [-s]>|B<all [-s]>|B<clear>}
|
||||
B<reventlog> I<noderange> {I<number-of-entries> [B<-s>]|B<all [-s]>|B<clear>}
|
||||
|
||||
B<reventlog> [B<-h>|B<--help>|B<-v>|B<--version>]
|
||||
|
||||
@ -47,9 +47,15 @@ Print version.
|
||||
|
||||
=head1 B<Examples>
|
||||
|
||||
B<reventlog> I<node4,node5> I<5>
|
||||
=over 2
|
||||
|
||||
node4: SERVPROC I 09/06/00 15:23:33 Remote Login Successful User ID = USERID[00]
|
||||
=item 1.
|
||||
|
||||
reventlog node4,node5 5
|
||||
|
||||
Output is similar to:
|
||||
|
||||
node4: SERVPROC I 09/06/00 15:23:33 Remote Login Successful User ID = USERID[00]
|
||||
node4: SERVPROC I 09/06/00 15:23:32 System spn1 started a RS485 connection with us[00]
|
||||
node4: SERVPROC I 09/06/00 15:22:35 RS485 connection to system spn1 has ended[00]
|
||||
node4: SERVPROC I 09/06/00 15:22:32 Remote Login Successful User ID = USERID[00]
|
||||
@ -57,10 +63,14 @@ B<reventlog> I<node4,node5> I<5>
|
||||
node5: SERVPROC I 09/06/00 15:22:32 Remote Login Successful User ID = USERID[00]
|
||||
node5: SERVPROC I 09/06/00 15:22:31 System spn1 started a RS485 connection with us[00]
|
||||
node5: SERVPROC I 09/06/00 15:21:34 RS485 connection to system spn1 has ended[00]
|
||||
node5: SERVPROC I 09/06/00 15:21:30 Remote Login Successful User ID = USERID[00]
|
||||
node5: SERVPROC I 09/06/00 15:21:30 Remote Login Successful User ID = USERID[00]
|
||||
node5: SERVPROC I 09/06/00 15:21:29 System spn1 started a RS485 connection with us[00]
|
||||
|
||||
B<reventlog> I<node4,node5> I<clear>
|
||||
=item 2.
|
||||
|
||||
reventlog node4,node5 clear
|
||||
|
||||
Output is similar to:
|
||||
|
||||
node4: clear
|
||||
node5: clear
|
||||
|
@ -64,23 +64,19 @@ If it outputs B<"Timeout waiting for prompt"> during the upgrade, please set the
|
||||
|
||||
Currently, in Direct FSP/BPA Management, B<rflash> does not support the B<concurrent> value of the B<--activate> flag; it supports B<disruptive> and B<deferred>. The B<disruptive> option will cause any affected systems that are powered on to be powered down before installing and activating the update, so the systems must be powered off before doing the firmware update.
|
||||
|
||||
The B<deferred> option will load the new firmware into the T (temp) side, but will not activate it like the disruptive firmware. The customer will continue to run the Frames and CECs working with the P (perm) side and can wait for a maintenance window where they can activate and boot the Frame/CECs with new firmware levels. Refer to the doc to get more details:
|
||||
XCAT_Power_775_Hardware_Management
|
||||
The B<deferred> option will load the new firmware into the T (temp) side, but will not activate it like the disruptive firmware. The customer will continue to run the Frames and CECs working with the P (perm) side and can wait for a maintenance window where they can activate and boot the Frame/CECs with new firmware levels. Refer to the doc to get more details: XCAT_Power_775_Hardware_Management
|
||||
|
||||
In Direct FSP/BPA Management, there is a -d <data_directory> option. The default value is /tmp. When doing a firmware update, rflash will put some related data from the rpm packages in the <data_directory> directory, so the execution of rflash will require available disk space in <data_directory> for the command to properly execute:
|
||||
|
||||
For one GFW rpm package and one power code rpm package , if the GFW rpm package size is gfw_rpmsize, and the Power code rpm package size is power_rpmsize, it requires that the available disk space should be more than:
|
||||
1.5*gfw_rpmsize + 1.5*power_rpmsize
|
||||
For one GFW rpm package and one power code rpm package, if the GFW rpm package size is gfw_rpmsize and the Power code rpm package size is power_rpmsize, the available disk space must be more than: 1.5*gfw_rpmsize + 1.5*power_rpmsize
|
||||
|
||||
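As a hedged, illustrative calculation (the package sizes are invented for the example): with a 400 MB GFW rpm and a 200 MB power code rpm, at least 1.5*400 + 1.5*200 = 900 MB should be free in the data directory, which can be redirected to a larger filesystem with the -d option (the directory name below is hypothetical):

   rflash cec_name -p /tmp/fw -d /var/tmp/fwdata --activate disruptive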
For Power 775, the rflash command takes effect on the primary and secondary FSPs or BPAs almost in parallel.
|
||||
|
||||
For more details about the Firmware Update using Direct FSP/BPA Management, refer to:
|
||||
XCAT_Power_775_Hardware_Management#Updating_the_BPA_and_FSP_firmware_using_xCAT_DFM
|
||||
For more details about the Firmware Update using Direct FSP/BPA Management, refer to: XCAT_Power_775_Hardware_Management#Updating_the_BPA_and_FSP_firmware_using_xCAT_DFM
|
||||
|
||||
=head2 NeXtScale FPC specific:
|
||||
|
||||
The command will update firmware for NeXtScale FPC when given an FPC node and the http information needed to access the firmware. The http imformation required includes both the MN IP address as well as the directory containing the firmware. It is recommended that the firmware be downloaded and placed in the /install directory structure as the xCAT MN /install directory is configured with the correct permissions for http. Refer to the doc to get more details:
|
||||
XCAT_NeXtScale_Clusters
|
||||
The command will update firmware for NeXtScale FPC when given an FPC node and the http information needed to access the firmware. The http information required includes both the MN IP address as well as the directory containing the firmware. It is recommended that the firmware be downloaded and placed in the /install directory structure as the xCAT MN /install directory is configured with the correct permissions for http. Refer to the doc to get more details: XCAT_NeXtScale_Clusters
|
||||
|
||||
=head2 OpenPOWER specific:
|
||||
|
||||
@ -98,11 +94,11 @@ Writes the command's usage statement to standard output.
|
||||
|
||||
Check the firmware version of BMC and HPM file.
|
||||
|
||||
=item B<-p directory>
|
||||
=item B<-p> I<directory>
|
||||
|
||||
Specifies the directory where the packages are located.
|
||||
|
||||
=item B<-d data_directory>
|
||||
=item B<-d> I<data_directory>
|
||||
|
||||
Specifies the directory where the raw data from rpm packages for each CEC/Frame are located. The default directory is /tmp. The option is only used in Direct FSP/BPA Management.
|
||||
|
||||
@ -138,32 +134,27 @@ Verbose output.
|
||||
|
||||
=over 4
|
||||
|
||||
=item 1
|
||||
|
||||
=item 1.
|
||||
To update only the power subsystem attached to a single HMC-attached pSeries CEC(cec_name), and recycle the power subsystem and all attached managed systems when the update is complete, and the Microcode update package and associated XML file are in /tmp/fw, enter:
|
||||
|
||||
rflash cec_name -p /tmp/fw --activate disruptive
|
||||
|
||||
=item 2
|
||||
|
||||
=item 2.
|
||||
To update only the power subsystem attached to a single HMC-attached pSeries node, and recycle the power subsystem and all attached managed systems when the update is complete, and the Microcode update package and associated XML file are in /tmp/fw, enter:
|
||||
|
||||
rflash bpa_name -p /tmp/fw --activate disruptive
|
||||
|
||||
=item 3
|
||||
|
||||
=item 3.
|
||||
To commit a firmware update to permanent flash for both managed system and the related power subsystems, enter:
|
||||
|
||||
rflash cec_name --commit
|
||||
|
||||
=item 4
|
||||
|
||||
=item 4.
|
||||
To update the firmware on a NeXtScale FPC specify the FPC node name and the HTTP location of the file including the xCAT MN IP address and the directory on the xCAT MN containing the firmware as follows:
|
||||
|
||||
rflash fpc01 http://10.1.147.169/install/firmware/fhet17a/ibm_fw_fpc_fhet17a-2.02_anyos_noarch.rom
|
||||
|
||||
=item 5
|
||||
|
||||
=item 5.
|
||||
To update the firmware on OpenPOWER machine specify the node name and the file path of the HPM firmware file as follows:
|
||||
|
||||
rflash fs3 /firmware/8335_810.1543.20151021b_update.hpm
|
||||
|
@ -138,6 +138,8 @@ Print version.
|
||||
|
||||
Set the values in the vm table to what vCenter has for the indicated nodes.
|
||||
|
||||
=back
|
||||
|
||||
B<zVM specific :>
|
||||
|
||||
=over 2
|
||||
@ -188,18 +190,17 @@ List the known zFCP pool names.
|
||||
|
||||
=back
|
||||
|
||||
=back
|
||||
|
||||
=head1 B<Examples>
|
||||
|
||||
=over 4
|
||||
|
||||
=item *
|
||||
|
||||
=item 1.
|
||||
To retrieve all information available from blade node4, enter:
|
||||
|
||||
rinv node5 all
|
||||
|
||||
Output is similar to:
|
||||
|
||||
node5: Machine Type/Model 865431Z
|
||||
node5: Serial Number 23C5030
|
||||
node5: Asset Tag 00:06:29:1F:01:1A
|
||||
@ -221,12 +222,13 @@ To retrieve all information available from blade node4, enter:
|
||||
node5: Total Memory: 512 MB
|
||||
node5: Memory DIMM locations: Slot(s) 3 4
|
||||
|
||||
=item *
|
||||
|
||||
=item 2.
|
||||
To output the raw information of deconfigured resources for CEC cec01, enter:
|
||||
|
||||
rinv cec01 deconfig -x
|
||||
|
||||
Output is similar to:
|
||||
|
||||
cec01:
|
||||
<SYSTEM>
|
||||
<System_type>IH</System_type>
|
||||
@ -236,21 +238,26 @@ To output the raw information of deconfigured resources for CEC cec01, enter:
|
||||
</NODE>
|
||||
</SYSTEM>
|
||||
|
||||
=item *
|
||||
=item 3.
|
||||
|
||||
To retrieve 'config' information from the HMC-managed LPAR node3, enter:
|
||||
|
||||
rinv node3 config
|
||||
|
||||
Output is similar to:
|
||||
|
||||
node5: Machine Configuration Info
|
||||
node5: Number of Processors: 1
|
||||
node5: Total Memory (MB): 1024
|
||||
|
||||
=item *
|
||||
=item 4.
|
||||
|
||||
To retrieve information about a VMware node vm1, enter:
|
||||
|
||||
rinv vm1
|
||||
|
||||
Output is similar to:
|
||||
|
||||
vm1: UUID/GUID: 42198f65-d579-fb26-8de7-3ae49e1790a7
|
||||
vm1: CPUs: 1
|
||||
vm1: Memory: 1536 MB
|
||||
@ -260,7 +267,7 @@ To retrieve information about a VMware node vm1, enter:
|
||||
|
||||
B<zVM specific :>
|
||||
|
||||
=item *
|
||||
=item 5.
|
||||
|
||||
To list the defined network names available for a given node:
|
||||
|
||||
@ -277,7 +284,7 @@ Output is similar to:
|
||||
pokdev61: VSWITCH SYSTEM VSW2
|
||||
pokdev61: VSWITCH SYSTEM VSW3
|
||||
|
||||
=item *
|
||||
=item 6.
|
||||
|
||||
To list the configuration for a given network:
|
||||
|
||||
@ -290,7 +297,7 @@ Output is similar to:
|
||||
pokdev61: IPTimeout: 5 MAC Protection: Unspecified
|
||||
pokdev61: Isolation Status: OFF
|
||||
|
||||
=item *
|
||||
=item 7.
|
||||
|
||||
To list the disk pool names available:
|
||||
|
||||
@ -302,7 +309,7 @@ Output is similar to:
|
||||
pokdev61: POOL2
|
||||
pokdev61: POOL3
|
||||
|
||||
=item *
|
||||
=item 8.
|
||||
|
||||
List the configuration for a given disk pool:
|
||||
|
||||
@ -315,7 +322,7 @@ Output is similar to:
|
||||
pokdev61: EMC2C5 3390-09 0001 10016
|
||||
|
||||
|
||||
=item *
|
||||
=item 9.
|
||||
|
||||
List the known zFCP pool names.
|
||||
|
||||
@ -327,7 +334,7 @@ Output is similar to:
|
||||
pokdev61: zfcp2
|
||||
pokdev61: zfcp3
|
||||
|
||||
=item *
|
||||
=item 10.
|
||||
|
||||
List the SCSI/FCP devices contained in a given zFCP pool:
|
||||
|
||||
|
@ -6,7 +6,7 @@ B<rmdsklsnode> - Use this xCAT command to remove AIX/NIM diskless machine defini
|
||||
|
||||
B<rmdsklsnode [-h | --help ]>
|
||||
|
||||
B<rmdsklsnode [-V|--verbose] [-f|--force] [-r|--remdef] [-i image_name] [-p|--primarySN] [-b|--backupSN] noderange>
|
||||
B<rmdsklsnode [-V|--verbose] [-f|--force] [-r|--remdef] [-i> I<image_name>] B<[-p|--primarySN] [-b|--backupSN]> I<noderange>
|
||||
|
||||
=head1 DESCRIPTION
|
||||
|
||||
@ -43,11 +43,11 @@ e both the primary and backup service nodes.
|
||||
|
||||
Display usage message.
|
||||
|
||||
=item B<-i image_name>
|
||||
=item B<-i> I<image_name>
|
||||
|
||||
The name of an xCAT image definition.
|
||||
|
||||
=item B<noderange>
|
||||
=item I<noderange>
|
||||
|
||||
A set of comma delimited node names and/or group names. See the "noderange" man page for details on additional supported formats.
|
||||
|
||||
@ -71,11 +71,9 @@ Verbose mode.
|
||||
=over 3
|
||||
|
||||
=item 0
|
||||
|
||||
The command completed successfully.
|
||||
|
||||
=item 1
|
||||
|
||||
An error has occurred.
|
||||
|
||||
=back
|
||||
@ -84,22 +82,21 @@ An error has occurred.
|
||||
|
||||
1) Remove the NIM client definition for the xCAT node named "node01". Give verbose output.
|
||||
|
||||
B<rmdsklsnode -V node01>
|
||||
rmdsklsnode -V node01
|
||||
|
||||
2) Remove the NIM client definitions for all the xCAT nodes in the group "aixnod
|
||||
es". Attempt to shut down the nodes if they are running.
|
||||
2) Remove the NIM client definitions for all the xCAT nodes in the group "aixnodes". Attempt to shut down the nodes if they are running.
|
||||
|
||||
B<rmdsklsnode -f aixnodes>
|
||||
rmdsklsnode -f aixnodes
|
||||
|
||||
3) Remove the NIM client machine definition for xCAT node "node02" that was created with the B<mkdsklsnode -n> option and the image "AIXdskls". (i.e. NIM client machine name "node02_AIXdskls".)
|
||||
|
||||
B<rmdsklsnode -i AIXdskls node02>
|
||||
rmdsklsnode -i AIXdskls node02
|
||||
|
||||
This assumes that node02 is not currently running.
|
||||
|
||||
4) Remove the old alternate client definition "node27_olddskls".
|
||||
|
||||
B<rmdsklsnode -r -i olddskls node27>
|
||||
rmdsklsnode -r -i olddskls node27
|
||||
|
||||
Assuming the node was booted using a new alternate NIM client definition, this will leave the node running.
|
||||
|
||||
|
@ -5,9 +5,9 @@ B<rmflexnode> - Delete a flexible node.
|
||||
|
||||
=head1 SYNOPSIS
|
||||
|
||||
B<rmflexnode> [-h | --help]
|
||||
B<rmflexnode> [B<-h> | B<--help>]
|
||||
|
||||
B<rmflexnode> [-v | --version]
|
||||
B<rmflexnode> [B<-v> | B<--version>]
|
||||
|
||||
B<rmflexnode> I<noderange>
|
||||
|
||||
@ -44,7 +44,6 @@ Display the version information.
|
||||
=over 3
|
||||
|
||||
=item 1
|
||||
|
||||
Delete a flexible node base on the xCAT node blade1.
|
||||
|
||||
The blade1 should belong to a complex, the I<id> attribute should be set correctly and all the slots should be in B<power off> state.
|
||||
|
@ -52,7 +52,7 @@ This is used to determine the current host to migrate from.
|
||||
|
||||
=head1 B<Examples>
|
||||
|
||||
B<rmigrate> I<v1> I<n2>
|
||||
rmigrate v1 n2
|
||||
|
||||
=head2 zVM specific:
|
||||
|
||||
|
@ -4,10 +4,10 @@ B<rmimage> - Removes the Linux stateless or statelite image from the file system
|
||||
|
||||
=head1 SYNOPSIS
|
||||
|
||||
I<rmimage [-h | --help]>
|
||||
B<rmimage [-h | --help]>
|
||||
|
||||
|
||||
I<rmimage [-V | --verbose] imagename [--xcatdef]>
|
||||
B<rmimage [-V | --verbose]> I<imagename> B<[--xcatdef]>
|
||||
|
||||
=head1 DESCRIPTION
|
||||
|
||||
@ -19,7 +19,7 @@ to calculate the image root directory; otherwise, this command uses the operatin
|
||||
architecture and profile name to calculate the image root directory.
|
||||
|
||||
The osimage definition will not be removed from the xCAT tables by default,
|
||||
specifying the flag --xcatdef will remove the osimage definition,
|
||||
specifying the flag B<--xcatdef> will remove the osimage definition,
|
||||
or you can use rmdef -t osimage to remove the osimage definition.
|
||||
|
||||
The statelite image files on the diskful service nodes will not be removed,
|
||||
@ -51,11 +51,11 @@ B<--xcatdef> Remove the xCAT osimage definition
|
||||
|
||||
1. To remove a RHEL 7.1 stateless image for a compute node architecture x86_64, enter:
|
||||
|
||||
I<rmimage rhels7.1-x86_64-netboot-compute>
|
||||
rmimage rhels7.1-x86_64-netboot-compute
|
||||
|
||||
2. To remove a rhels5.5 statelite image for a compute node architecture ppc64 and the osimage definition, enter:
|
||||
|
||||
I<rmimage rhels5.5-ppc64-statelite-compute --xcatdef>
|
||||
rmimage rhels5.5-ppc64-statelite-compute --xcatdef
|
||||
|
||||
=head1 FILES
|
||||
|
||||
|
@ -38,7 +38,7 @@ Remove this kit even there is any component in this kit is listed by osimage.kit
|
||||
|
||||
Test if kitcomponents in this kit are used by osimage
|
||||
|
||||
=item B<kitlist>
|
||||
=item I<kitlist>
|
||||
|
||||
A comma delimited list of kits that are to be removed from the xCAT cluster. Each entry can be a kitname or kit basename. For kit basename, rmkit command will remove all the kits that have that kit basename.
|
||||
|
||||
@ -54,32 +54,30 @@ A comma delimited list of kits that are to be removed from the xCAT cluster. Ea
|
||||
|
||||
1. To remove two kits from tarball files.
|
||||
|
||||
rmkit kit-test1,kit-test2
|
||||
rmkit kit-test1,kit-test2
|
||||
|
||||
Output is similar to:
|
||||
|
||||
Kit kit-test1-1.0-Linux,kit-test2-1.0-Linux was successfully removed.
|
||||
Kit kit-test1-1.0-Linux,kit-test2-1.0-Linux was successfully removed.
|
||||
|
||||
2. To remove two kits from tarball files even the kit components in them are still being used by osimages.
|
||||
|
||||
rmkit kit-test1,kit-test2 --force
|
||||
rmkit kit-test1,kit-test2 --force
|
||||
|
||||
Output is similar to:
|
||||
|
||||
Kit kit-test1-1.0-Linux,kit-test2-1.0-Linux was successfully removed.
|
||||
Kit kit-test1-1.0-Linux,kit-test2-1.0-Linux was successfully removed.
|
||||
|
||||
3. To list kitcomponents in this kit used by osimage
|
||||
|
||||
rmkit kit-test1,kit-test2 -t
|
||||
rmkit kit-test1,kit-test2 -t
|
||||
|
||||
Output is similar to:
|
||||
|
||||
kit-test1-kitcomp-1.0-Linux is being used by osimage osimage-test
|
||||
Following kitcomponents are in use: kit-test1-kitcomp-1.0-Linux
|
||||
kit-test1-kitcomp-1.0-Linux is being used by osimage osimage-test
|
||||
Following kitcomponents are in use: kit-test1-kitcomp-1.0-Linux
|
||||
|
||||
=head1 SEE ALSO
|
||||
|
||||
L<lskit(1)|lskit.1>, L<addkit(1)|addkit.1>, L<addkitcomp(1)|addkitcomp.1>, L<rmkitcomp(1)|rmkitcomp.1>, L<chkkitcomp(1)|chkkitcomp.1>
|
||||
|
||||
|
||||
~
|
||||
|
@ -46,7 +46,7 @@ Do not remove kitcomponent's postbootscripts from osimage
|
||||
|
||||
osimage name that include this kit component.
|
||||
|
||||
=item B<kitcompname_list>
|
||||
=item I<kitcompname_list>
|
||||
|
||||
A comma-delimited list of valid full kit component names or kit component basenames that are to be removed from the osimage. If a basename is specified, all kitcomponents matching that basename will be removed from the osimage.
|
||||
|
||||
@ -62,27 +62,27 @@ A comma-delimited list of valid full kit component names or kit component basena
|
||||
|
||||
1. To remove a kit component from osimage
|
||||
|
||||
rmkitcomp -i rhels6.2-ppc64-netboot-compute comp-test1-1.0-1-rhels-6.2-ppc64
|
||||
rmkitcomp -i rhels6.2-ppc64-netboot-compute comp-test1-1.0-1-rhels-6.2-ppc64
|
||||
|
||||
Output is similar to:
|
||||
|
||||
kitcomponents comp-test1-1.0-1-rhels-6.2-ppc64 were removed from osimage rhels6.2-ppc64-netboot-compute successfully
|
||||
kitcomponents comp-test1-1.0-1-rhels-6.2-ppc64 were removed from osimage rhels6.2-ppc64-netboot-compute successfully
|
||||
|
||||
2. To remove a kit component even it is still used as a dependency of other kit component.
|
||||
|
||||
rmkitcomp -f -i rhels6.2-ppc64-netboot-compute comp-test1-1.0-1-rhels-6.2-ppc64
|
||||
rmkitcomp -f -i rhels6.2-ppc64-netboot-compute comp-test1-1.0-1-rhels-6.2-ppc64
|
||||
|
||||
Output is similar to:
|
||||
|
||||
kitcomponents comp-test1-1.0-1-rhels-6.2-ppc64 were removed from osimage rhels6.2-ppc64-netboot-compute successfully
|
||||
kitcomponents comp-test1-1.0-1-rhels-6.2-ppc64 were removed from osimage rhels6.2-ppc64-netboot-compute successfully
|
||||
|
||||
3. To remove a kit component from osimage and also remove the kit component meta RPM and package RPM. So in next genimage for statelss image and updatenode for stateful nodes, the kit component meta RPM and package RPM will be uninstalled.
|
||||
|
||||
rmkitcomp -u -i rhels6.2-ppc64-netboot-compute comp-test1-1.0-1-rhels-6.2-ppc64
|
||||
rmkitcomp -u -i rhels6.2-ppc64-netboot-compute comp-test1-1.0-1-rhels-6.2-ppc64
|
||||
|
||||
Output is similar to:
|
||||
|
||||
kitcomponents comp-test1-1.0-1-rhels-6.2-ppc64 were removed from osimage rhels6.2-ppc64-netboot-compute successfully
|
||||
kitcomponents comp-test1-1.0-1-rhels-6.2-ppc64 were removed from osimage rhels6.2-ppc64-netboot-compute successfully
|
||||
|
||||
|
||||
=head1 SEE ALSO
|
||||
|
@ -6,7 +6,7 @@ B<rmnimimage> - Use this xCAT command to remove NIM resources specified in an xC
|
||||
|
||||
B<rmnimimage [-h|--help]>
|
||||
|
||||
B<rmnimimage [-V|--verbose] [-f|--force] [-d|--delete] [-x|--xcatdef] [-M|--managementnode] [-s servicenoderange] osimage_name>
|
||||
B<rmnimimage [-V|--verbose] [-f|--force] [-d|--delete] [-x|--xcatdef] [-M|--managementnode] [-s> I<servicenoderange>] I<osimage_name>
|
||||
|
||||
=head1 DESCRIPTION
|
||||
|
||||
@ -52,11 +52,11 @@ Override the check for shared resources when removing an xCAT osimage.
|
||||
|
||||
Remove NIM resources from the xCAT management node only.
|
||||
|
||||
=item B<-s servicenoderange>
|
||||
=item B<-s> I<servicenoderange>
|
||||
|
||||
Remove the NIM resources on these xCAT service nodes only. Do not remove the NIM resources from the xCAT management node.
|
||||
|
||||
=item B<osimage_name>
|
||||
=item I<osimage_name>
|
||||
|
||||
The name of the xCAT osimage definition.
|
||||
|
||||
@ -75,11 +75,9 @@ Remove the xCAT osimage definition.
|
||||
=over 3
|
||||
|
||||
=item 0
|
||||
|
||||
The command completed successfully.
|
||||
|
||||
=item 1
|
||||
|
||||
An error has occurred.
|
||||
|
||||
=back
|
||||
@ -88,27 +86,27 @@ An error has occurred.
|
||||
|
||||
1) Remove all NIM resources specified in the xCAT "61image" definition.
|
||||
|
||||
B<rmnimimage 61image>
|
||||
rmnimimage 61image
|
||||
|
||||
The "nim -o remove" operation will be used to remove the NIM resource definitions on the management node as well as any service nodes where the resource has been replicated. This NIM operation does not completely remove all files and directories associated with the NIM resources.
|
||||
|
||||
2) Remove all the NIM resources specified by the xCAT "61rte" osimage definition. Delete ALL files and directories associated with the NIM resources. This will also remove the lpp_source resource.
|
||||
|
||||
B<rmnimimage -d 61rte>
|
||||
rmnimimage -d 61rte
|
||||
|
||||
3) Remove all the NIM resources specified by the xCAT "614img" osimage definition and also remove the xCAT definition.
|
||||
|
||||
B<rmnimimage -x -d 614img>
|
||||
rmnimimage -x -d 614img
|
||||
|
||||
Note: When this command completes all definitions and files will be completely erased, so use with caution!
|
||||
|
||||
4) Remove the NIM resources specified in the "614dskls" osimage definition on the xcatsn1 and xcatsn2 service nodes. Delete all files or directories associated with the NIM resources.
|
||||
|
||||
B<rmnimimage -d -s xcatsn1,xcatsn2 614dskls>
|
||||
rmnimimage -d -s xcatsn1,xcatsn2 614dskls
|
||||
|
||||
5) Remove the NIM resources specified in the "614old" osimage definition on the xCAT management node only.
|
||||
|
||||
B<rmnimimage -M -d 614old>
|
||||
rmnimimage -M -d 614old
|
||||
|
||||
=head1 FILES
|
||||
|
||||
|
@ -4,19 +4,19 @@ B<rmvm> - Removes HMC-, DFM-, IVM-, KVM-, Vmware- and zVM-managed partitions or
|
||||
|
||||
=head1 SYNOPSIS
|
||||
|
||||
I<rmvm [-h| --help]>
|
||||
B<rmvm [-h| --help]>
|
||||
|
||||
I<rmvm [-v| --version]>
|
||||
B<rmvm [-v| --version]>
|
||||
|
||||
I<rmvm [-V| --verbose] noderange [-r] [--service]>
|
||||
B<rmvm [-V| --verbose]> I<noderange> B<[-r] [--service]>
|
||||
|
||||
=head2 For KVM and Vmware:
|
||||
|
||||
I<rmvm [-p] [-f]>
|
||||
B<rmvm [-p] [-f]>
|
||||
|
||||
=head2 PPC (using Direct FSP Management) specific:
|
||||
|
||||
I<rmvm noderange>
|
||||
B<rmvm> I<noderange>
|
||||
|
||||
=head1 DESCRIPTION
|
||||
|
||||
@ -27,11 +27,11 @@ For DFM-managed (short For Direct FSP Management mode) normal power machines, on
|
||||
=head1 OPTIONS
|
||||
|
||||
|
||||
B<-h> Display usage message.
|
||||
B<-h|--help> Display usage message.
|
||||
|
||||
B<-v> Command Version.
|
||||
B<-v|--version> Command Version.
|
||||
|
||||
B<-V> Verbose output.
|
||||
B<-V|--verbose> Verbose output.
|
||||
|
||||
B<-r> Retain the data object definitions of the nodes.
|
||||
|
||||
@ -53,15 +53,15 @@ B<-f> Force remove the VM, even if the VM appears to be online. This w
|
||||
|
||||
1. To remove the HMC-managed partition lpar3, enter:
|
||||
|
||||
I<rmvm lpar3>
|
||||
rmvm lpar3
|
||||
|
||||
Output is similar to:
|
||||
|
||||
lpar3: Success
|
||||
lpar3: Success
|
||||
|
||||
2. To remove all the HMC-managed partitions associated with CEC cec01, enter:
|
||||
|
||||
I<rmvm cec01>
|
||||
rmvm cec01
|
||||
|
||||
Output is similar to:
|
||||
|
||||
@ -71,7 +71,7 @@ Output is similar to:
|
||||
|
||||
3. To remove the HMC-managed service partitions of the specified CEC cec01 and cec02, enter:
|
||||
|
||||
I<rmvm cec01,cec02 --service>
|
||||
rmvm cec01,cec02 --service
|
||||
|
||||
Output is similar to:
|
||||
|
||||
@ -80,15 +80,15 @@ Output is similar to:
|
||||
|
||||
4. To remove the HMC-managed partition lpar1, but retain its definition, enter:
|
||||
|
||||
I<rmvm lpar1 -r>
|
||||
rmvm lpar1 -r
|
||||
|
||||
Output is similar to:
|
||||
|
||||
lpar1: Success
|
||||
lpar1: Success
|
||||
|
||||
5. To remove a zVM virtual machine:
|
||||
|
||||
I<rmvm gpok4>
|
||||
rmvm gpok4
|
||||
|
||||
Output is similar to:
|
||||
|
||||
@ -97,7 +97,7 @@ Output is similar to:
|
||||
|
||||
6. To remove a DFM-managed partition on normal power machine:
|
||||
|
||||
I<rmvm lpar1>
|
||||
rmvm lpar1
|
||||
|
||||
Output is similar to:
|
||||
|
||||
|
@ -4,7 +4,7 @@ B<rmzone> - Removes a zone from the cluster.
|
||||
|
||||
=head1 B<SYNOPSIS>
|
||||
|
||||
B<rmzone> <zonename> [B<-g>] [B<-f>]
|
||||
B<rmzone> I<zonename> [B<-g>] [B<-f>]
|
||||
|
||||
B<rmzone> [B<-h> | B<-v>]
|
||||
|
||||
@ -52,24 +52,22 @@ Verbose mode.
|
||||
|
||||
=over 3
|
||||
|
||||
=item *
|
||||
|
||||
=item 1.
|
||||
To remove zone1 from the zone table and the zonename attribute on all of its assigned nodes, enter:
|
||||
|
||||
B<rmzone> I<zone1>
|
||||
rmzone zone1
|
||||
|
||||
|
||||
=item *
|
||||
=item 2.
|
||||
|
||||
To remove zone2 from the zone table, the zone2 zonename attribute, and the zone2 group assigned to all nodes that were in zone2, enter:
|
||||
|
||||
B<rmzone> I<zone2> -g
|
||||
rmzone zone2 -g
|
||||
|
||||
=item *
|
||||
=item 3.
|
||||
|
||||
To remove zone3 from the zone table, all the node zone attributes and override the fact it is the defaultzone, enter:
|
||||
|
||||
B<rmzone> I<zone3> -g -f
|
||||
rmzone zone3 -g -f
|
||||
|
||||
=back
|
||||
|
||||
|
@ -10,7 +10,7 @@ B<rnetboot> [B<-h>|B<--help>] [B<-v>|B<--version>]
|
||||
|
||||
=head2 zVM specific:
|
||||
|
||||
B<rnetboot> noderange [B<ipl=> I<address>]
|
||||
B<rnetboot> I<noderange> [B<ipl=> I<address>]
|
||||
|
||||
|
||||
=head1 DESCRIPTION
|
||||
@ -47,15 +47,15 @@ B<-t>
|
||||
|
||||
Specify the timeout, in minutes, to wait for the expectedstatus specified by the -m flag. This is a required flag if the -m flag is specified.
|
||||
|
||||
B<-V>
|
||||
B<-V|--verbose>
|
||||
|
||||
Verbose output.
|
||||
|
||||
B<-h>
|
||||
B<-h|--help>
|
||||
|
||||
Display usage message.
|
||||
|
||||
B<-v>
|
||||
B<-v|--version>
|
||||
|
||||
Command Version.
|
||||
|
||||
|
@ -53,11 +53,9 @@ Display usage message.
|
||||
=over 3
|
||||
|
||||
=item 0
|
||||
|
||||
The command completed successfully.
|
||||
|
||||
=item 1
|
||||
|
||||
An error has occurred.
|
||||
|
||||
=back
|
||||
@ -81,10 +79,15 @@ enter:
|
||||
=head1 FILES
|
||||
|
||||
/opt/xcat/bin/rollupdate
|
||||
|
||||
/opt/xcat/share/xcat/rollupdate/rollupdate.input.sample
|
||||
|
||||
/opt/xcat/share/xcat/rollupdate/ll.tmpl
|
||||
|
||||
/opt/xcat/share/xcat/rollupdate/rollupdate_all.input.sample
|
||||
|
||||
/opt/xcat/share/xcat/rollupdate/llall.tmpl
|
||||
|
||||
/var/log/xcat/rollupdate.log
|
||||
|
||||
|
||||
|
@ -242,17 +242,23 @@ Display the version number.
|
||||
|
||||
=over 3
|
||||
|
||||
=item *
|
||||
=item 1.
|
||||
To display power status of node4 and node5
|
||||
|
||||
rpower node4,node5 stat
|
||||
|
||||
Output is similar to:
|
||||
|
||||
node4: on
|
||||
node5: off
|
||||
|
||||
=item *
|
||||
=item 2.
|
||||
To power on node5
|
||||
|
||||
rpower node5 on
|
||||
|
||||
Output is similar to:
|
||||
|
||||
node5: on
|
||||
|
||||
=back
|
||||
|
@ -4,11 +4,11 @@ B<rscan> - Collects node information from one or more hardware control points.
|
||||
|
||||
=head1 SYNOPSIS
|
||||
|
||||
I<rscan [-h|--help]>
|
||||
B<rscan [-h|--help]>
|
||||
|
||||
I<rscan [-v|--version]>
|
||||
B<rscan [-v|--version]>
|
||||
|
||||
I<rscan [-V|--verbose] noderange [-u][-w][-x|-z]>
|
||||
B<rscan [-V|--verbose]> I<noderange> B<[-u][-w][-x|-z]>
|
||||
|
||||
|
||||
=head1 DESCRIPTION
|
||||
@ -25,11 +25,11 @@ Note: The first line of the output always contains information about the hardwar
|
||||
|
||||
|
||||
|
||||
B<-h> Display usage message.
|
||||
B<-h|--help> Display usage message.
|
||||
|
||||
B<-v> Command Version.
|
||||
B<-v|--version> Command Version.
|
||||
|
||||
B<-V> Verbose output.
|
||||
B<-V|--verbose> Verbose output.
|
||||
|
||||
B<-u> Updates and then prints out node definitions in the xCAT database for CEC/BPA. It updates the existing nodes that contain the same mtms and serial number for nodes managed by the specified hardware control point. This primarily works with CEC/FSP and frame/BPA nodes when the node name is not the same as the managed system name on the hardware control point (HMC). This flag will update the BPA/FSP node name definitions to be listed as the managed system name in the xCAT database.
|
||||
|
||||
@ -52,16 +52,15 @@ B<-z> Stanza formated output.
|
||||
|
||||
=head1 RETURN VALUE
|
||||
|
||||
0 The command completed successfully.
|
||||
0 The command completed successfully.
|
||||
|
||||
1 An error has occurred.
|
||||
1 An error has occurred.
|
||||
|
||||
=head1 EXAMPLES
|
||||
|
||||
1. To list all nodes managed by HMC hmc01 in tabular format, enter:
|
||||
|
||||
I<rscan hmc01>
|
||||
|
||||
rscan hmc01
|
||||
|
||||
Output is similar to:
|
||||
|
||||
@ -77,7 +76,7 @@ Output is similar to:
|
||||
|
||||
2. To list all nodes managed by IVM ivm02 in XML format and write the output to the xCAT database, enter:
|
||||
|
||||
I<rscan ivm02 -x -w>
|
||||
rscan ivm02 -x -w
|
||||
|
||||
Output is similar to:
|
||||
|
||||
@ -126,7 +125,7 @@ Output is similar to:
|
||||
|
||||
3. To list all nodes managed by HMC hmc02 in stanza format and write the output to the xCAT database, enter:
|
||||
|
||||
I<rscan hmc02 -z -w>
|
||||
rscan hmc02 -z -w
|
||||
|
||||
Output is similar to:
|
||||
|
||||
@ -184,7 +183,7 @@ Output is similar to:
|
||||
|
||||
4. To update definitions of nodes which are managed by hmc03, enter:
|
||||
|
||||
I<rscan hmc03 -u>
|
||||
rscan hmc03 -u
|
||||
|
||||
Output is similar to:
|
||||
|
||||
@ -194,7 +193,7 @@ Output is similar to:
|
||||
|
||||
5. To collect the node information from one or more hardware control points on zVM AND populate the database with details collected by rscan:
|
||||
|
||||
I<rscan gpok2 -W>
|
||||
rscan gpok2 -w
|
||||
|
||||
Output is similar to:
|
||||
|
||||
@ -211,7 +210,7 @@ Output is similar to:
|
||||
|
||||
6. To scan the Flex system cluster:
|
||||
|
||||
I<rscan cmm01>
|
||||
rscan cmm01
|
||||
|
||||
Output is similar to:
|
||||
|
||||
@ -223,7 +222,7 @@ Output is similar to:
|
||||
|
||||
7. To update the Flex system cluster:
|
||||
|
||||
I<rscan cmm01 -u>
|
||||
rscan cmm01 -u
|
||||
|
||||
Output is similar to:
|
||||
|
||||
@ -233,7 +232,7 @@ Output is similar to:
|
||||
|
||||
8. To scan the KVM host "hyp01", list all the KVM guest information on the KVM host in stanza format and write the KVM guest information into xCAT database:
|
||||
|
||||
I<rscan hyp01 -z -w>
|
||||
rscan hyp01 -z -w
|
||||
|
||||
Output is similar to:
|
||||
|
||||
@ -260,7 +259,7 @@ Output is similar to:
|
||||
|
||||
9. To update definitions of kvm guests, which are managed by hypervisor hyp01, enter:
|
||||
|
||||
I<rscan hyp01 -u>
|
||||
rscan hyp01 -u
|
||||
|
||||
Output is similar to:
|
||||
|
||||
|
@ -14,13 +14,13 @@ B<rspconfig> I<noderange> B<alert>={B<on>|B<enable>|B<off>|B<disable>}
|
||||
|
||||
B<rspconfig> I<noderange> B<snmpdest>=I<snmpmanager-IP>
|
||||
|
||||
B<rspconfig> I<noderange> B<community>={B<public>|I<string>}
|
||||
B<rspconfig> I<noderange> B<community>={B<public> | I<string>}
|
||||
|
||||
=head2 BMC specific:
|
||||
|
||||
B<rspconfig> I<noderange> {B<ip>|B<netmask>|B<gateway>|B<backupgateway>|B<garp>}
|
||||
|
||||
B<rspconfig> I<noderange> B<garp>={I<time>}
|
||||
B<rspconfig> I<noderange> B<garp>=I<time>
|
||||
|
||||
=head2 MPA specific:
|
||||
|
||||
@ -38,15 +38,15 @@ B<rspconfig> I<noderange> B<pd1>={B<nonred>|B<redwoperf>|B<redwperf>}
|
||||
|
||||
B<rspconfig> I<noderange> B<pd2>={B<nonred>|B<redwoperf>|B<redwperf>}
|
||||
|
||||
B<rspconfig> I<noderange> B<network>={[B<ip>],[B<host>],[B<gateway>],[B<netmask>]|B<*>}
|
||||
B<rspconfig> I<noderange> B<network>={[I<ip>],[I<host>],[I<gateway>],[I<netmask>]|*}
|
||||
|
||||
B<rspconfig> I<noderange> B<initnetwork>={[B<ip>],[B<host>],[B<gateway>],[B<netmask>]|B<*>}
|
||||
B<rspconfig> I<noderange> B<initnetwork>={[I<ip>],[I<host>],[I<gateway>],[I<netmask>]|*}
|
||||
|
||||
B<rspconfig> I<noderange> B<textid>={B<*|textid>}
|
||||
B<rspconfig> I<noderange> B<textid>={* | I<textid>}
|
||||
|
||||
B<rspconfig> I<singlenode> B<frame>={B<frame_number>}
|
||||
B<rspconfig> I<singlenode> B<frame>={I<frame_number>}
|
||||
|
||||
B<rspconfig> I<noderange> B<frame>={B<*>}
|
||||
B<rspconfig> I<noderange> B<frame>={*}
|
||||
|
||||
B<rspconfig> I<noderange> B<swnet>={[B<ip>],[B<gateway>],[B<netmask>]}
|
||||
|
||||
@ -64,33 +64,33 @@ B<rspconfig> I<noderange> B<dev>={B<enable>|B<disable>}
|
||||
|
||||
B<rspconfig> I<noderange> B<celogin1>={B<enable>|B<disable>}
|
||||
|
||||
B<rspconfig> I<noderange> B<time>={B<hh:mm:ss>}
|
||||
B<rspconfig> I<noderange> B<time>=I<hh:mm:ss>
|
||||
|
||||
B<rspconfig> I<noderange> B<date>={B<mm:dd:yyyy>}
|
||||
B<rspconfig> I<noderange> B<date>=I<mm:dd:yyyy>
|
||||
|
||||
B<rspconfig> I<noderange> B<decfg>={B<enable|disable>:B<policyname,...>}
|
||||
B<rspconfig> I<noderange> B<decfg>={B<enable|disable>:I<policyname,...>}
|
||||
|
||||
B<rspconfig> I<noderange> B<procdecfg>={B<configure|deconfigure>:B<processingunit>:B<id,...>}
|
||||
B<rspconfig> I<noderange> B<procdecfg>={B<configure|deconfigure>:I<processingunit>:I<id,...>}
|
||||
|
||||
B<rspconfig> I<noderange> B<memdecfg>={B<configure|deconfigure>:B<processingunit>:B<unit|bank>:B<id,...>>}
|
||||
B<rspconfig> I<noderange> B<memdecfg>={B<configure|deconfigure>:I<processingunit>:B<unit|bank>:I<id,...>>}
|
||||
|
||||
B<rspconfig> I<noderange> B<network>={B<nic,*>}
|
||||
B<rspconfig> I<noderange> B<network>={B<nic,>*}
|
||||
|
||||
B<rspconfig> I<noderange> B<network>={B<nic,[IP,][hostname,][gateway,][netmask]>}
|
||||
|
||||
B<rspconfig> I<noderange> B<network>={B<nic,0.0.0.0>}
|
||||
|
||||
B<rspconfig> I<noderange> B<HMC_passwd>={B<currentpasswd,newpasswd>}
|
||||
B<rspconfig> I<noderange> B<HMC_passwd>={I<currentpasswd,newpasswd>}
|
||||
|
||||
B<rspconfig> I<noderange> B<admin_passwd>={B<currentpasswd,newpasswd>}
|
||||
B<rspconfig> I<noderange> B<admin_passwd>={I<currentpasswd,newpasswd>}
|
||||
|
||||
B<rspconfig> I<noderange> B<general_passwd>={B<currentpasswd,newpasswd>}
|
||||
|
||||
B<rspconfig> I<noderange> B<*_passwd>={B<currentpasswd,newpasswd>}
|
||||
B<rspconfig> I<noderange> *B<_passwd>={B<currentpasswd,newpasswd>}
|
||||
|
||||
B<rspconfig> I<noderange> {B<hostname>}
|
||||
B<rspconfig> I<noderange> {I<hostname>}
|
||||
|
||||
B<rspconfig> I<noderange> B<hostname>={B<*|name>}
|
||||
B<rspconfig> I<noderange> B<hostname>={* | I<name>}
|
||||
|
||||
B<rspconfig> I<noderange> B<--resetnet>
|
||||
|
||||
@ -100,11 +100,11 @@ B<rspconfig> I<noderange> B<sshcfg>={B<enable>|B<disable>}
|
||||
|
||||
B<rspconfig> I<noderange> B<snmpcfg>={B<enable>|B<disable>}
|
||||
|
||||
B<rspconfig> I<noderange> B<network>={[B<ip>],[B<host>],[B<gateway>],[B<netmask>]|B<*>}
|
||||
B<rspconfig> I<noderange> B<network>={[B<ip>],[B<host>],[B<gateway>],[B<netmask>] | *}
|
||||
|
||||
B<rspconfig> I<noderange> B<solcfg>={B<enable>|B<disable>}
|
||||
|
||||
B<rspconfig> I<noderange> B<textid>={B<*|textid>}
|
||||
B<rspconfig> I<noderange> B<textid>={* | I<textid>}
|
||||
|
||||
|
||||
B<rspconfig> I<noderange> B<cec_off_policy>={B<poweroff>|B<stayon>}
|
||||
@ -113,7 +113,7 @@ B<rspconfig> I<noderange> B<cec_off_policy>={B<poweroff>|B<stayon>}
|
||||
|
||||
B<rspconfig> I<noderange> {B<network>|B<dev>|B<celogin1>}
|
||||
|
||||
B<rspconfig> I<noderange> B<network>={B<nic,*>}
|
||||
B<rspconfig> I<noderange> B<network>={B<nic,>*}
|
||||
|
||||
B<rspconfig> I<noderange> B<network>={B<nic,[IP,][hostname,][gateway,][netmask]>}
|
||||
|
||||
@ -123,33 +123,33 @@ B<rspconfig> I<noderange> B<dev>={B<enable>|B<disable>}
|
||||
|
||||
B<rspconfig> I<noderange> B<celogin1>={B<enable>|B<disable>}
|
||||
|
||||
B<rspconfig> I<noderange> B<HMC_passwd>={B<currentpasswd,newpasswd>}
|
||||
B<rspconfig> I<noderange> B<HMC_passwd>={I<currentpasswd,newpasswd>}
|
||||
|
||||
B<rspconfig> I<noderange> B<admin_passwd>={B<currentpasswd,newpasswd>}
|
||||
B<rspconfig> I<noderange> B<admin_passwd>={I<currentpasswd,newpasswd>}
|
||||
|
||||
B<rspconfig> I<noderange> B<general_passwd>={B<currentpasswd,newpasswd>}
|
||||
|
||||
B<rspconfig> I<noderange> B<*_passwd>={B<currentpasswd,newpasswd>}
|
||||
B<rspconfig> I<noderange> *B<_passwd>={B<currentpasswd,newpasswd>}
|
||||
|
||||
B<rspconfig> I<noderange> {B<hostname>}
|
||||
|
||||
B<rspconfig> I<noderange> B<hostname>={B<*|name>}
|
||||
B<rspconfig> I<noderange> B<hostname>={* | I<name>}
|
||||
|
||||
B<rspconfig> I<noderange> B<--resetnet>
|
||||
|
||||
=head2 FSP/CEC (using Direct FSP Management) Specific:
|
||||
|
||||
B<rspconfig> I<noderange> B<HMC_passwd>={B<currentpasswd,newpasswd>}
|
||||
B<rspconfig> I<noderange> B<HMC_passwd>={I<currentpasswd,newpasswd>}
|
||||
|
||||
B<rspconfig> I<noderange> B<admin_passwd>={B<currentpasswd,newpasswd>}
|
||||
B<rspconfig> I<noderange> B<admin_passwd>={I<currentpasswd,newpasswd>}
|
||||
|
||||
B<rspconfig> I<noderange> B<general_passwd>={B<currentpasswd,newpasswd>}
|
||||
|
||||
B<rspconfig> I<noderange> B<*_passwd>={B<currentpasswd,newpasswd>}
|
||||
B<rspconfig> I<noderange> *B<_passwd>={B<currentpasswd,newpasswd>}
|
||||
|
||||
B<rspconfig> I<noderange> {B<sysname>}
|
||||
|
||||
B<rspconfig> I<noderange> B<sysname>={B<*>|B<name>}
|
||||
B<rspconfig> I<noderange> B<sysname>={* | I<name>}
|
||||
|
||||
B<rspconfig> I<noderange> {B<pending_power_on_side>}
|
||||
|
||||
@ -163,7 +163,7 @@ B<rspconfig> I<noderange> {B<BSR>}
|
||||
|
||||
B<rspconfig> I<noderange> {B<huge_page>}
|
||||
|
||||
B<rspconfig> I<noderange> B<huge_page>={B<NUM>}
|
||||
B<rspconfig> I<noderange> B<huge_page>={I<NUM>}
|
||||
|
||||
B<rspconfig> I<noderange> {B<setup_failover>}
|
||||
|
||||
@ -175,21 +175,21 @@ B<rspconfig> I<noderange> B<--resetnet>
|
||||
|
||||
=head2 BPA/Frame (using Direct FSP Management) Specific:
|
||||
|
||||
B<rspconfig> I<noderange> B<HMC_passwd>={B<currentpasswd,newpasswd>}
|
||||
B<rspconfig> I<noderange> B<HMC_passwd>={I<currentpasswd,newpasswd>}
|
||||
|
||||
B<rspconfig> I<noderange> B<admin_passwd>={B<currentpasswd,newpasswd>}
|
||||
B<rspconfig> I<noderange> B<admin_passwd>={I<currentpasswd,newpasswd>}
|
||||
|
||||
B<rspconfig> I<noderange> B<general_passwd>={B<currentpasswd,newpasswd>}
|
||||
|
||||
B<rspconfig> I<noderange> B<*_passwd>={B<currentpasswd,newpasswd>}
|
||||
B<rspconfig> I<noderange> *B<_passwd>={B<currentpasswd,newpasswd>}
|
||||
|
||||
B<rspconfig> I<noderange> {B<frame>}
|
||||
|
||||
B<rspconfig> I<noderange> B<frame>={B<*|frame_number>}
|
||||
B<rspconfig> I<noderange> B<frame>={* | I<frame_number>}
|
||||
|
||||
B<rspconfig> I<noderange> {B<sysname>}
|
||||
|
||||
B<rspconfig> I<noderange> B<sysname>={B<*>|B<name>}
|
||||
B<rspconfig> I<noderange> B<sysname>={* | I<name>}
|
||||
|
||||
B<rspconfig> I<noderange> {B<pending_power_on_side>}
|
||||
|
||||
@ -217,11 +217,11 @@ For options B<autopower>|B<iocap>|B<dev>|B<celogin1>|B<decfg>|B<memdecfg>|B<proc
|
||||
|
||||
=over 4
|
||||
|
||||
=item B<alert>={I<on>|I<enable>|I<off>|I<disable>}
|
||||
=item B<alert={on | enable | off | disable}>
|
||||
|
||||
Turn on or off SNMP alerts.
|
||||
|
||||
=item B<autopower>={I<enable>|I<disable>}
|
||||
=item B<autopower>={I<enable> | I<disable>}
|
||||
|
||||
Select the policy for auto power restart. If enabled, the system will boot automatically once power is restored after a power disturbance.
|
||||
|
||||
@ -229,19 +229,19 @@ Select the policy for auto power restart. If enabled, the system will boot autom
|
||||
|
||||
Get the BMC backup gateway ip address.
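For example (node name illustrative), the backup gateway address could be queried with:

  rspconfig node5 backupgateway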
|
||||
|
||||
=item B<community>={B<public>|I<string>}
|
||||
=item B<community>={B<public> | I<string>}
|
||||
|
||||
Get or set the SNMP commmunity value. The default is I<public>.
|
||||
Get or set the SNMP community value. The default is B<public>.
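For example (the node name and community string here are only illustrative), the value could be changed with:

  rspconfig node5 community=private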
|
||||
|
||||
=item B<date>={I<mm:dd:yyy>}
|
||||
=item B<date>=I<mm:dd:yyy>
|
||||
|
||||
Enter the current date.
|
||||
|
||||
=item B<decfg>={I<enable|disable>:I<policyname,...>}
|
||||
=item B<decfg>={B<enable | disable>:I<policyname,...>}
|
||||
|
||||
Enables or disables deconfiguration policies.
|
||||
|
||||
=item B<frame>={B<framenumber>|I<*>}
|
||||
=item B<frame>={I<framenumber> | *}
|
||||
|
||||
Set or get frame number. If no framenumber and * specified, framenumber for the nodes will be displayed and updated in the xCAT database. If framenumber is specified, it only supports single node and the framenumber will be set for that frame. If * is specified, it supports noderange and all the frame numbers for the noderange will be read from the xCAT database and set to frames. Setting the frame number is a disruptive command which requires all CECs to be powered off prior to issuing the command.
|
||||
|
||||
@ -249,19 +249,19 @@ Set or get frame number. If no framenumber and * specified, framenumber for the
|
||||
|
||||
Set or get cec off policy after lpars are powered off. If no cec_off_policy value is specified, the cec_off_policy for the nodes will be displayed. The cec_off_policy attribute has two values: B<poweroff> and B<stayon>. B<poweroff> means power off when the last partition powers off. B<stayon> means stay running after the last partition powers off. If a cec_off_policy value is specified, the cec off policy will be set for that cec.
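For example (node name illustrative), the policy could be displayed and then changed with:

  rspconfig cec01 cec_off_policy
  rspconfig cec01 cec_off_policy=stayon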
|
||||
|
||||
=item B<HMC_passwd>={B<currentpasswd,newpasswd>}
|
||||
=item B<HMC_passwd>={I<currentpasswd,newpasswd>}
|
||||
|
||||
Change the password of the userid B<HMC> for CEC/Frame. If the CEC/Frame is the factory default, the currentpasswd should NOT be specified; otherwise, the currentpasswd should be specified to the current password of the userid B<HMC> for the CEC/Frame.
|
||||
|
||||
=item B<admin_passwd>={B<currentpasswd,newpasswd>}
|
||||
=item B<admin_passwd>={I<currentpasswd,newpasswd>}
|
||||
|
||||
Change the password of the userid B<admin> for CEC/Frame from currentpasswd to newpasswd. If the CEC/Frame is the factory default, the currentpasswd should NOT be specified; otherwise, the currentpasswd should be specified to the current password of the userid B<admin> for the CEC/Frame.
|
||||
|
||||
=item B<general_passwd>={B<currentpasswd,newpasswd>}
|
||||
=item B<general_passwd>={I<currentpasswd,newpasswd>}
|
||||
|
||||
Change the password of the userid B<general> for CEC/Frame from currentpasswd to newpasswd. If the CEC/Frame is the factory default, the currentpasswd should NOT be specified; otherwise, the currentpasswd should be specified to the current password of the userid B<general> for the CEC/Frame.
|
||||
|
||||
=item B< *_passwd>={B<currentpasswd,newpasswd>}
|
||||
=item *B<_passwd>={I<currentpasswd,newpasswd>}
|
||||
|
||||
Change the passwords of the userids B<HMC>, B<admin> and B<general> for CEC/Frame from currentpasswd to newpasswd. If the CEC/Frame is the factory default, the currentpasswd should NOT be specified; otherwise, if the current passwords of the userids B<HMC>, B<admin> and B<general> for CEC/Frame are the same, the currentpasswd should be specified to the current password, and then the password will be changed to the newpasswd. If the CEC/Frame is NOT the factory default, and the current passwords of the userids B<HMC>, B<admin> and B<general> for CEC/Frame are NOT the same, this option can NOT be used, and the passwords must be changed one by one.
|
||||
|
||||
@ -289,7 +289,7 @@ Get Barrier Synchronization Register (BSR) allocation for a CEC.
|
||||
|
||||
Query huge page information or request NUM huge pages for a CEC. If no value is specified, huge page information for the specified CECs is queried. If a single CEC is specified, the specified huge_page value NUM will be used as the requested number of huge pages for that CEC. If multiple CECs are specified, the same NUM huge pages will be requested for all the specified CECs.
|
||||
|
||||
=item B<setup_failover>={I<enable>|I<disable>}
|
||||
=item B<setup_failover>={B<enable> | B<disable>}
|
||||
|
||||
Enable or disable the service processor failover function of a CEC or display status of this function.
|
||||
|
||||
@ -297,19 +297,19 @@ Enable or disable the service processor failover function of a CEC or display st
|
||||
|
||||
Force a service processor failover from the primary service processor to the secondary service processor.
|
||||
|
||||
=item B<hostname>={I<*|name>}
|
||||
=item B<hostname>={* | I<name>}
|
||||
|
||||
Set CEC/BPA system names to the names in xCAT DB or the input name.
|
||||
|
||||
=item B<iocap>={I<enable>|I<disable>}
|
||||
=item B<iocap>={B<enable> | B<disable>}
|
||||
|
||||
Select the policy for I/O Adapter Enlarged Capacity. This option controls the size of PCI memory space allocated to each PCI slot.
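For example (node name illustrative):

  rspconfig cec01 iocap=enable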
|
||||
|
||||
=item B<dev>={I<enable>|I<disable>}
|
||||
=item B<dev>={B<enable> | B<disable>}
|
||||
|
||||
Enable or disable the CEC|Frame 'dev' account or display account status if no value specified.
|
||||
|
||||
=item B<celogin1>={I<enable>|I<disable>}
|
||||
=item B<celogin1>={B<enable> | B<disable>}
|
||||
|
||||
Enable or disable the CEC|Frame 'celogin1' account or display account status if no value specified.
|
||||
|
||||
@ -317,7 +317,7 @@ Enable or disable the CEC|Frame 'celogin1' account or display account status if
|
||||
|
||||
The ip address.
|
||||
|
||||
=item B<memdecfg>={I<configure|deconfigure>:I<processingunit>:I<unit|bank>:I<id,...>}
|
||||
=item B<memdecfg>={B<configure | deconfigure>:I<processingunit>:I<unit|bank>:I<id,...>}
|
||||
|
||||
Select whether each memory bank should be enabled or disabled. State changes take effect on the next platform boot.
|
||||
|
||||
@ -366,7 +366,7 @@ Power Domain 1 - determines how an MPA responds to a loss of redundant power.
|
||||
|
||||
Power Domain 2 - determines how an MPA responds to a loss of redundant power.
|
||||
|
||||
=item B<procdecfg>={I<configure|deconfigure>:I<processingunit>:I<id,...>}
|
||||
=item B<procdecfg>={B<configure|deconfigure>:I<processingunit>:I<id,...>}
|
||||
|
||||
Selects whether each processor should be enabled or disabled. State changes take effect on the next platform boot.
|
||||
|
||||
@ -378,7 +378,7 @@ Prevents components from turning on that will cause loss of power redundancy.
|
||||
|
||||
Power throttles components to maintain power redundancy and prevents components from turning on that will cause loss of power redundancy.
|
||||
|
||||
=item B<snmpcfg>={I<enable>|I<disable>}
|
||||
=item B<snmpcfg>={B<enable>|B<disable>}
|
||||
|
||||
Enable or disable SNMP on MPA.
|
||||
|
||||
@ -386,7 +386,7 @@ Enable or disable SNMP on MPA.
|
||||
|
||||
Get or set where the SNMP alerts should be sent to.
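For example (node name and IP address illustrative):

  rspconfig node4 snmpdest=9.114.47.227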
|
||||
|
||||
=item B<solcfg>={I<enable>|I<disable>}
|
||||
=item B<solcfg>={B<enable>|B<disable>}
|
||||
|
||||
Enable or disable the sol on the MPA (or CMM) and the blade servers belonging to it.
|
||||
|
||||
@ -394,7 +394,7 @@ Enable or disable the sol on MPA (or CMM) and blade servers belongs to it.
|
||||
|
||||
Performs a service processor dump.
|
||||
|
||||
=item B<sshcfg>={I<enable>|I<disable>}
|
||||
=item B<sshcfg>={B<enable>|B<disable>}
|
||||
|
||||
Enable or disable SSH on MPA.
|
||||
|
||||
@ -410,11 +410,11 @@ Performs a system dump.
|
||||
|
||||
Query or set sysname for CEC or Frame. If no value is specified, the sysname of the specified nodes is queried. If '*' is specified, the sysname is set for the specified nodes, and the sysname values are taken from the xCAT database. If a string is specified, that string is used as the sysname value to set for the specified node.
|
||||
|
||||
=item B<pending_power_on_side>={I<temp|perm>}
|
||||
=item B<pending_power_on_side>={B<temp|perm>}
|
||||
|
||||
List or set pending power on side for CEC or Frame. If no pending_power_on_side value specified, the pending power on side for the CECs or frames will be displayed. If specified, the pending_power_on_side value will be set to CEC's FSPs or Frame's BPAs. The value 'temp' means T-side or temporary side. The value 'perm' means P-side or permanent side.
|
||||
|
||||
=item B<time>={I<hh:mm:ss>}
|
||||
=item B<time>=I<hh:mm:ss>
|
||||
|
||||
Enter the current time in UTC (Coordinated Universal Time) format.
|
||||
|
||||
@ -422,11 +422,11 @@ Enter the current time in UTC (Coordinated Universal Time) format.
|
||||
|
||||
Set the blade or MPA textid. When using '*', the textid used is the node name specified on the command-line. Note that when specifying an actual textid, only a single node can be specified in the noderange.
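For example (node name illustrative), to set the textid to the node name:

  rspconfig node5 textid=*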
|
||||
|
||||
=item B<USERID>={I<newpasswd>} B<updateBMC>={I<y|n>}
|
||||
=item B<USERID>={I<newpasswd>} B<updateBMC>={B<y|n>}
|
||||
|
||||
Change the password of the userid B<USERID> for CMM in Flex system cluster. The option I<updateBMC> can be used to specify whether to update the password of the BMCs connected to the specified CMM. The value is 'y' by default, which means that whenever the password of the CMM is updated, the password of the BMCs will also be updated. Note that it will take several seconds before this command completes.
|
||||
|
||||
If value B<*> is specified for USERID and the object node is I<Flex System X node>, the password used to access the BMC of the System X node through IPMI will be updated as the same password of the userid B<USERID> of the CMM in the same cluster.
|
||||
If value "*" is specified for USERID and the object node is I<Flex System X node>, the password used to access the BMC of the System X node through IPMI will be updated as the same password of the userid B<USERID> of the CMM in the same cluster.
|
||||
|
||||
=item B<--resetnet>
|
||||
|
||||
@ -440,7 +440,7 @@ Enable or disable v3 authentication (enable|disable).
|
||||
|
||||
Prints out a brief usage message.
|
||||
|
||||
=item B<-v>, B<--version>
|
||||
=item B<-v> | B<--version>
|
||||
|
||||
Display the version number.
|
||||
|
||||
@ -451,88 +451,97 @@ Display the version number.
|
||||
|
||||
=over 3
|
||||
|
||||
=item *
|
||||
|
||||
=item 1.
|
||||
To set up new ssh keys on the Management Module mm:
|
||||
|
||||
B<rspconfig> mm snmpcfg=enable sshcfg=enable
|
||||
|
||||
=item *
|
||||
rspconfig mm snmpcfg=enable sshcfg=enable
|
||||
|
||||
=item 2.
|
||||
To turn on SNMP alerts for node5:
|
||||
|
||||
B<rspconfig> I<node5> B<alert>=B<on>
|
||||
rspconfig node5 alert=on
|
||||
|
||||
Output is similar to:
|
||||
|
||||
node5: Alerts: enabled
|
||||
|
||||
=item *
|
||||
|
||||
=item 3.
|
||||
To display the destination setting for SNMP alerts for node4:
|
||||
|
||||
B<rspconfig> I<node4 snmpdest>
|
||||
rspconfig node4 snmpdest
|
||||
|
||||
Output is similar to:
|
||||
|
||||
node4: BMC SNMP Destination 1: 9.114.47.227
|
||||
|
||||
=item *
|
||||
=item 4.
|
||||
|
||||
To display the frame number for frame 9A00-10000001
|
||||
|
||||
B<rspconfig> I<9A00-10000001 frame>
|
||||
rspconfig 9A00-10000001 frame
|
||||
|
||||
Output is similar to:
|
||||
|
||||
9A00-10000001: 1
|
||||
|
||||
=item *
|
||||
|
||||
=item 5.
|
||||
To set the frame number for frame 9A00-10000001
|
||||
|
||||
B<rspconfig> I<9A00-10000001 frame=2>
|
||||
rspconfig 9A00-10000001 frame=2
|
||||
|
||||
Output is similar to:
|
||||
|
||||
9A00-10000001: SUCCESS
|
||||
|
||||
=item *
|
||||
|
||||
=item 6.
|
||||
To set the frame numbers for frame 9A00-10000001 and 9A00-10000002
|
||||
|
||||
B<rspconfig> I<9A00-10000001,9A00-10000002 frame=*>
|
||||
rspconfig 9A00-10000001,9A00-10000002 frame=*
|
||||
|
||||
Output is similar to:
|
||||
|
||||
9A00-10000001: SUCCESS
|
||||
9A00-10000002: SUCCESS
|
||||
|
||||
=item *
|
||||
|
||||
=item 7.
|
||||
To display the MPA network parameters for mm01:
|
||||
|
||||
B<rspconfig> I<mm01 network>
|
||||
rspconfig mm01 network
|
||||
|
||||
Output is similar to:
|
||||
|
||||
mm01: MM IP: 192.168.1.47
|
||||
mm01: MM Hostname: MM001125C31F28
|
||||
mm01: Gateway: 192.168.1.254
|
||||
mm01: Subnet Mask: 255.255.255.224
|
||||
|
||||
=item *
|
||||
|
||||
=item 8.
|
||||
To change the MPA network parameters with the values in the xCAT database for mm01:
|
||||
|
||||
B<rspconfig> I<mm01 network=*>
|
||||
rspconfig mm01 network=*
|
||||
|
||||
Output is similar to:
|
||||
|
||||
mm01: MM IP: 192.168.1.47
|
||||
mm01: MM Hostname: mm01
|
||||
mm01: Gateway: 192.168.1.254
|
||||
mm01: Subnet Mask: 255.255.255.224
|
||||
|
||||
=item *
|
||||
|
||||
=item 9.
|
||||
To change only the gateway parameter for the MPA network mm01:
|
||||
|
||||
B<rspconfig> I<mm01 network=,,192.168.1.1,>
|
||||
rspconfig mm01 network=,,192.168.1.1,
|
||||
|
||||
Output is similar to:
|
||||
|
||||
mm01: Gateway: 192.168.1.1
|
||||
|
||||
=item *
|
||||
|
||||
=item 10.
|
||||
To display the FSP network parameters for fsp01:
|
||||
|
||||
B<rspconfig> I<fsp01 network>
|
||||
rspconfig fsp01 network
|
||||
|
||||
Output is similar to:
|
||||
|
||||
fsp01:
|
||||
eth0:
|
||||
@ -549,120 +558,128 @@ B<rspconfig> I<fsp01 network>
|
||||
Gateway:
|
||||
Netmask: 255.255.255.0
|
||||
|
||||
=item *
|
||||
|
||||
=item 11.
|
||||
To change the FSP network parameters with the values in command line for eth0 on fsp01:
|
||||
|
||||
B<rspconfig> I<fsp01 network=eth0,192.168.1.200,fsp01,,255.255.255.0>
|
||||
rspconfig fsp01 network=eth0,192.168.1.200,fsp01,,255.255.255.0
|
||||
|
||||
Output is similar to:
|
||||
|
||||
fsp01: Success to set IP address,hostname,netmask
|
||||
|
||||
=item *
|
||||
|
||||
=item 12.
|
||||
To change the FSP network parameters with the values in the xCAT database for eth0 on fsp01:
|
||||
|
||||
B<rspconfig> I<fsp01 network=eth0,*>
|
||||
rspconfig fsp01 network=eth0,*
|
||||
|
||||
Output is similar to:
|
||||
|
||||
fsp01: Success to set IP address,hostname,gateway,netmask
|
||||
|
||||
=item *
|
||||
|
||||
=item 13.
|
||||
To configure eth0 on fsp01 to get dynamic IP address from DHCP server:
|
||||
|
||||
B<rspconfig> I<fsp01 network=eth0,0.0.0.0>
|
||||
rspconfig fsp01 network=eth0,0.0.0.0
|
||||
|
||||
Output is similar to:
|
||||
|
||||
fsp01: Success to set IP type to dynamic.
|
||||
|
||||
=item *
|
||||
|
||||
=item 14.
|
||||
To get the current power redundancy mode for power domain 1 on mm01:
|
||||
|
||||
B<rspconfig> I<mm01 pd1>
|
||||
rspconfig mm01 pd1
|
||||
|
||||
Output is similar to:
|
||||
|
||||
mm01: Redundant without performance impact
|
||||
|
||||
=item *
|
||||
|
||||
=item 15.
|
||||
To change the current power redundancy mode for power domain 1 on mm01 to non-redundant:
|
||||
|
||||
B<rspconfig> I<mm01 pd1=nonred>
|
||||
rspconfig mm01 pd1=nonred
|
||||
|
||||
Output is similar to:
|
||||
|
||||
mm01: nonred
|
||||
|
||||
=item *
|
||||
|
||||
=item 16.
|
||||
To enable NTP with an NTP server address of 192.168.1.1, an update frequency of 90 minutes, and with v3 authentication enabled on mm01:
|
||||
|
||||
B<rspconfig> I<mm01 ntp=enable,192.168.1.1,90,enable>
|
||||
rspconfig mm01 ntp=enable,192.168.1.1,90,enable
|
||||
|
||||
Output is similar to:
|
||||
|
||||
mm01: NTP: disabled
|
||||
mm01: NTP Server: 192.168.1.1
|
||||
mm01: NTP: 90 (minutes)
|
||||
mm01: NTP: enabled
|
||||
|
||||
=item *
|
||||
|
||||
=item 17.
|
||||
To disable NTP v3 authentication only on mm01:
|
||||
|
||||
B<rspconfig> I<mm01 ntp=,,,disable>
|
||||
rspconfig mm01 ntp=,,,disable
|
||||
|
||||
Output is similar to:
|
||||
|
||||
mm01: NTP v3: disabled
|
||||
|
||||
=item *
|
||||
|
||||
=item 18.
|
||||
To disable Predictive Failure and L2 Failure deconfiguration policies on mm01:
|
||||
|
||||
B<rspconfig> I<mm01 decfg=disable:predictive,L3>
|
||||
rspconfig mm01 decfg=disable:predictive,L3
|
||||
|
||||
Output is similar to:
|
||||
|
||||
mm01: Success
|
||||
|
||||
=item *
|
||||
|
||||
=item 19.
|
||||
To deconfigure processors 4 and 5 of Processing Unit 0 on mm01:
|
||||
|
||||
B<rspconfig> I<mm01 procedecfg=deconfigure:0:4,5>
|
||||
rspconfig mm01 procdecfg=deconfigure:0:4,5
|
||||
|
||||
Output is similar to:
|
||||
|
||||
mm01: Success
|
||||
|
||||
=item *
|
||||
|
||||
=item 20.
|
||||
To check if the CEC sysname is set correctly on mm01:
|
||||
|
||||
B<rspconfig> I<mm01 sysname>
|
||||
rspconfig mm01 sysname
|
||||
|
||||
mm01: mm01
|
||||
|
||||
B<rspconfig> I<mm01 sysname=cec01>
|
||||
rspconfig mm01 sysname=cec01
|
||||
|
||||
mm01: Success
|
||||
|
||||
B<rspconfig> I<mm01 sysname>
|
||||
rspconfig mm01 sysname
|
||||
|
||||
mm01: cec01
|
||||
|
||||
=item *
|
||||
|
||||
=item 21.
|
||||
To check and change the pending_power_on_side value of cec01's fsps:
|
||||
|
||||
B<rspconfig> I<cec01 pending_power_on_side>
|
||||
rspconfig cec01 pending_power_on_side
|
||||
|
||||
cec01: Pending Power On Side Primary: temp
|
||||
cec01: Pending Power On Side Secondary: temp
|
||||
|
||||
B<rspconfig> I<cec01 pending_power_on_side=perm>
|
||||
rspconfig cec01 pending_power_on_side=perm
|
||||
|
||||
cec01: Success
|
||||
|
||||
B<rspconfig> I<cec01 pending_power_on_side>
|
||||
rspconfig cec01 pending_power_on_side
|
||||
|
||||
cec01: Pending Power On Side Primary: perm
|
||||
cec01: Pending Power On Side Secondary: perm
|
||||
|
||||
=item *
|
||||
|
||||
=item 22.
|
||||
To show the BSR allocation for cec01:
|
||||
|
||||
B<rspconfig> I<cec01 BSR>
|
||||
rspconfig cec01 BSR
|
||||
|
||||
Output is similar to:
|
||||
|
||||
cec01: Barrier Synchronization Register (BSR)
|
||||
cec01: Number of BSR arrays: 256
|
||||
@ -678,11 +695,12 @@ B<rspconfig> I<cec01 BSR>
|
||||
cec01: lpar07 : 32
|
||||
cec01: lpar08 : 32
|
||||
|
||||
=item *
|
||||
|
||||
=item 23.
|
||||
To query the huge page information for CEC1, enter:
|
||||
|
||||
B<rspconfig> I<CEC1 huge_page>
|
||||
rspconfig CEC1 huge_page
|
||||
|
||||
Output is similar to:
|
||||
|
||||
CEC1: Huge Page Memory
|
||||
CEC1: Available huge page memory(in pages): 0
|
||||
@ -700,25 +718,25 @@ B<rspconfig> I<CEC1 huge_page>
|
||||
CEC1: lpar25 : 0
|
||||
CEC1: lpar29 : 0
|
||||
|
||||
=item *
|
||||
|
||||
=item 24.
|
||||
To request 10 huge pages for CEC1, enter:
|
||||
|
||||
B<rspconfig> I<CEC1 huge_page=10>
|
||||
rspconfig CEC1 huge_page=10
|
||||
|
||||
Output is similar to:
|
||||
|
||||
CEC1: Success
|
||||
|
||||
=item *
|
||||
|
||||
=item 25.
|
||||
To disable service processor failover for cec01 (to complete this command, the user should power off cec01 first):
|
||||
|
||||
B<rspconfig> I<cec01 setup_failover>
|
||||
rspconfig cec01 setup_failover
|
||||
|
||||
cec01: Failover status: Enabled
|
||||
|
||||
B<rpower> I<cec01 off>
|
||||
rpower cec01 off
|
||||
|
||||
B<rspconfig> I<cec01 setup_failover=disable>
|
||||
rspconfig cec01 setup_failover=disable
|
||||
|
||||
cec01: Success
|
||||
|
||||
@ -726,41 +744,42 @@ B<rspconfig> I<cec01 setup_failover>
|
||||
|
||||
cec01: Failover status: Disabled
|
||||
|
||||
=item *
|
||||
|
||||
=item 26.
|
||||
To force service processor failover for cec01:
|
||||
|
||||
B<lshwconn> I<cec01>
|
||||
lshwconn cec01
|
||||
|
||||
cec01: 192.168.1.1: LINE DOWN
|
||||
cec01: 192.168.2.1: sp=primary,ipadd=192.168.2.1,alt_ipadd=unavailable,state=LINE UP
|
||||
cec01: 192.168.1.2: sp=secondary,ipadd=192.168.1.2,alt_ipadd=unavailable,state=LINE UP
|
||||
cec01: 192.168.2.2: LINE DOWN
|
||||
|
||||
B<rspconfig> I<cec01 force_failover>
|
||||
rspconfig cec01 force_failover
|
||||
|
||||
cec01: Success.
|
||||
|
||||
B<lshwconn> I<cec01>
|
||||
lshwconn cec01
|
||||
|
||||
cec01: 192.168.1.1: sp=secondary,ipadd=192.168.1.1,alt_ipadd=unavailable,state=LINE UP
|
||||
cec01: 192.168.2.1: LINE DOWN
|
||||
cec01: 192.168.1.2: LINE DOWN
|
||||
cec01: 192.168.2.2: sp=primary,ipadd=192.168.2.2,alt_ipadd=unavailable,state=LINE UP
|
||||
|
||||
=item *
|
||||
=item 27.
|
||||
|
||||
To deconfigure memory banks 9 and 10 of Processing Unit 0 on mm01:
|
||||
|
||||
B<rspconfig> I<mm01 memdecfg=deconfigure:bank:0:9,10>
|
||||
rspconfig mm01 memdecfg=deconfigure:bank:0:9,10
|
||||
|
||||
Output is similar to:
|
||||
|
||||
mm01: Success
|
||||
|
||||
=item *
|
||||
=item 28.
|
||||
|
||||
To reset the network interface of the specified nodes:
|
||||
|
||||
B<rspconfig> I<--resetnet>
|
||||
rspconfig --resetnet
|
||||
|
||||
Output is similar to:
|
||||
|
||||
@ -773,19 +792,21 @@ Output is similar to:
|
||||
|
||||
Reset network finished.
|
||||
|
||||
=item *
|
||||
|
||||
=item 29.
|
||||
To update the existing admin password on fsp:
|
||||
|
||||
B<rspconfig> I<fsp admin_passwd=admin,abc123>
|
||||
rspconfig fsp admin_passwd=admin,abc123
|
||||
|
||||
Output is similar to:
|
||||
|
||||
fsp: Success
|
||||
|
||||
=item *
|
||||
|
||||
=item 30.
|
||||
To set the initial password for user HMC on fsp:
|
||||
|
||||
B<rspconfig> I<fsp HMC_passwd=,abc123>
|
||||
rspconfig fsp HMC_passwd=,abc123
|
||||
|
||||
Output is similar to:
|
||||
|
||||
fsp: Success
|
||||
|
||||
@ -794,6 +815,3 @@ B<rspconfig> I<fsp HMC_passwd=,abc123>
|
||||
=head1 SEE ALSO
|
||||
|
||||
L<noderange(3)|noderange.3>, L<rpower(1)|rpower.1>, L<rcons(1)|rcons.1>, L<rinv(1)|rinv.1>, L<rvitals(1)|rvitals.1>, L<rscan(1)|rscan.1>, L<rflash(1)|rflash.1>
|
||||
|
||||
|
||||
|
||||
|
@ -106,7 +106,9 @@ Print version.
|
||||
|
||||
=head1 B<Examples>
|
||||
|
||||
B<rvitals> I<node5> I<all>
|
||||
rvitals node5 all
|
||||
|
||||
Output is similar to:
|
||||
|
||||
node5: CPU 1 Temperature: + 29.00 C (+ 84.2 F)
|
||||
node5: CPU 2 Temperature: + 19.00 C (+ 66.2 F)
|
||||
|
@ -24,8 +24,7 @@ The B<sinv> command is an xCAT Distributed Shell Utility.
|
||||
|
||||
B<COMMAND> B<SPECIFICATION>:
|
||||
|
||||
The xdsh or rinv command to execute on the remote targets is specified by the
|
||||
B<-c> flag, or by the B<-f> flag
|
||||
The xdsh or rinv command to execute on the remote targets is specified by the B<-c> flag, or by the B<-f> flag
|
||||
which is followed by the fully qualified path to a file containing the command.
|
||||
|
||||
|
||||
@ -200,86 +199,76 @@ Verbose mode.
|
||||
|
||||
=over 3
|
||||
|
||||
=item *
|
||||
|
||||
=item 1.
|
||||
To set up sinv.template (name optional) for input to the B<sinv> command, enter:
|
||||
|
||||
B<xdsh> I<node1,node2 "rpm -qa | grep ssh " | xdshcoll E<gt> /tmp/sinv.template>
|
||||
xdsh node1,node2 "rpm -qa | grep ssh " | xdshcoll > /tmp/sinv.template
|
||||
|
||||
Note: when setting up the template the output of xdsh must be piped
|
||||
to xdshcoll, sinv processing depends on it.
|
||||
|
||||
=item *
|
||||
Note: when setting up the template, the output of xdsh must be piped to xdshcoll; sinv processing depends on it.
|
||||
|
||||
=item 2.
|
||||
To set up rinv.template for input to the B<sinv> command, enter:
|
||||
|
||||
B<rinv> I<node1-node2 serial | xdshcoll E<gt> /tmp/rinv.template>
|
||||
rinv node1-node2 serial | xdshcoll > /tmp/rinv.template
|
||||
|
||||
Note: when setting up the template the output of rinv must be piped
|
||||
to xdshcoll, sinv processing depends on it.
|
||||
|
||||
=item *
|
||||
Note: when setting up the template, the output of rinv must be piped to xdshcoll; sinv processing depends on it.
|
||||
|
||||
=item 3.
|
||||
To execute B<sinv> using the sinv.template generated above
|
||||
on the nodegroup, B<testnodes>, possibly generating up to two
|
||||
new templates, and removing all generated templates in the end, and writing
|
||||
output report to /tmp/sinv.output, enter:
|
||||
|
||||
B<sinv> I< -c "xdsh testnodes rpm -qa | grep ssh" -p /tmp/sinv.template -t 2 -r -o /tmp/sinv.output>
|
||||
sinv -c "xdsh testnodes rpm -qa | grep ssh" -p /tmp/sinv.template -t 2 -r -o /tmp/sinv.output
|
||||
|
||||
Note: do not add the pipe to xdshcoll on the -c flag, it is automatically
|
||||
added by the sinv routine.
|
||||
|
||||
=item *
|
||||
Note: do not add the pipe to xdshcoll on the -c flag; it is automatically added by the sinv routine.
|
||||
|
||||
=item 4.
|
||||
To execute B<sinv> on noderange, node1-node4, using the seed node, node8,
|
||||
to generate the first template, using the xdsh command (-c),
|
||||
possibly generating up to two additional
|
||||
templates and not removing any templates at the end, enter:
|
||||
|
||||
B<sinv> I<-c "xdsh node1-node4 lslpp -l | grep bos.adt" -s node8 -p /tmp/sinv.template -t 2 -o /tmp/sinv.output>
|
||||
|
||||
=item *
|
||||
sinv -c "xdsh node1-node4 lslpp -l | grep bos.adt" -s node8 -p /tmp/sinv.template -t 2 -o /tmp/sinv.output
|
||||
|
||||
=item 5.
|
||||
To execute B<sinv> on noderange, node1-node4, using the seed node, node8,
|
||||
to generate the first template, using the rinv command (-c),
|
||||
possibly generating up to two additional
|
||||
templates and removing any generated templates at the end, enter:
|
||||
|
||||
B<sinv> I<-c "rinv node1-node4 serial" -s node8 -p /tmp/sinv.template -t 2 -r -o /tmp/rinv.output>
|
||||
sinv -c "rinv node1-node4 serial" -s node8 -p /tmp/sinv.template -t 2 -r -o /tmp/rinv.output
|
||||
|
||||
=item *
|
||||
|
||||
=item 6.
|
||||
To execute B<sinv> on noderange, node1-node4, using node1 as
|
||||
the seed node, to generate the sinv.template from the xdsh command (-c),
|
||||
using the exact match option, generating no additional templates, enter:
|
||||
|
||||
B<sinv> I<-c "xdsh node1-node4 lslpp -l | grep bos.adt" -s node1 -e -p /tmp/sinv.template -o /tmp/sinv.output>
|
||||
sinv -c "xdsh node1-node4 lslpp -l | grep bos.adt" -s node1 -e -p /tmp/sinv.template -o /tmp/sinv.output
|
||||
|
||||
Note: the /tmp/sinv.template file must be empty, otherwise it will be used
|
||||
as an admin generated template.
|
||||
|
||||
=item *
|
||||
|
||||
=item 7.
|
||||
To execute B<sinv> on the Linux osimage defined for cn1. First build a template from the /etc/hosts on the node. Then run sinv to compare.
|
||||
B<xdsh> I<cn1 "cat /etc/hosts" | xdshcoll > /tmp/sinv2/template"
|
||||
|
||||
B<sinv> I<-c "xdsh -i /install/netboot/rhels6/ppc64/test_ramdisk_statelite/rootimg cat /etc/hosts" -e -t1 -p /tmp/sinv.template -o /tmp/sinv.output>
|
||||
xdsh cn1 "cat /etc/hosts" | xdshcoll > /tmp/sinv2/template"
|
||||
|
||||
=item *
|
||||
sinv -c "xdsh -i /install/netboot/rhels6/ppc64/test_ramdisk_statelite/rootimg cat /etc/hosts" -e -t1 -p /tmp/sinv.template -o /tmp/sinv.output
|
||||
|
||||
=item 8.
|
||||
|
||||
To execute B<sinv> on the AIX NIM 611dskls spot and compare /etc/hosts to compute1 node, run the following:
|
||||
|
||||
B<xdsh> I<compute1 "cat /etc/hosts" | xdshcoll > /tmp/sinv2/template"
|
||||
xdsh compute1 "cat /etc/hosts" | xdshcoll > /tmp/sinv2/template"
|
||||
|
||||
B<sinv> I<-c "xdsh -i 611dskls cat /etc/hosts" -e -t1 -p /tmp/sinv.template -o /tmp/sinv.output>
|
||||
sinv -c "xdsh -i 611dskls cat /etc/hosts" -e -t1 -p /tmp/sinv.template -o /tmp/sinv.output
|
||||
|
||||
=item *
|
||||
=item 9.
|
||||
|
||||
To execute B<sinv> on the device mswitch2 and compare to mswitch1
|
||||
|
||||
B<sinv> I<-c "xdsh mswitch enable;show version" -s mswitch1 -p /tmp/sinv/template --devicetype IBSwitch::Mellanox -l admin -t 2>
|
||||
|
||||
sinv -c "xdsh mswitch enable;show version" -s mswitch1 -p /tmp/sinv/template --devicetype IBSwitch::Mellanox -l admin -t 2
|
||||
|
||||
=back
|
||||
|
||||
|
@ -4,9 +4,9 @@ B<snmove> - Move xCAT compute nodes to a different xCAT service node.
|
||||
|
||||
=head1 SYNOPSIS
|
||||
|
||||
B<snmove> I<noderange> [B<-V>] [B<-l>|B<--liteonly>] [B<-d>|B<--dest> I<sn2>] [B<-D>|B<--destn> I<sn2n>] [B<-i>|B<--ignorenodes>] [B<-P>|B<--postscripts> I<script1,script2...>|I<all>]
|
||||
B<snmove> I<noderange> [B<-V>] [B<-l>|B<--liteonly>] [B<-d>|B<--dest> I<sn2>] [B<-D>|B<--destn> I<sn2n>] [B<-i>|B<--ignorenodes>] [B<-P>|B<--postscripts> I<script1,script2...> | B<all>]
|
||||
|
||||
B<snmove> [B<-V>] [B<-l>|B<--liteonly>] B<-s>|B<--source> I<sn1> [B<-S>|B<--sourcen> I<sn1n>] [B<-d>|B<--dest> I<sn2>] [B<-D>|B<--destn> I<sn2n>] [B<-i>|B<--ignorenodes>] [B<-P>|B<--postscripts> I<script1,script2...>|I<all>]
|
||||
B<snmove> [B<-V>] [B<-l>|B<--liteonly>] B<-s>|B<--source> I<sn1> [B<-S>|B<--sourcen> I<sn1n>] [B<-d>|B<--dest> I<sn2>] [B<-D>|B<--destn> I<sn2n>] [B<-i>|B<--ignorenodes>] [B<-P>|B<--postscripts> I<script1,script2...> | B<all>]
|
||||
|
||||
B<snmove> [B<-h>|B<--help>|B<-v>|B<--version>]
|
||||
|
||||
@ -49,7 +49,7 @@ service node.
|
||||
|
||||
By default the command will modify the nodes so that they will be able to be managed by the backup service node.
|
||||
|
||||
If the -i option is specified, the nodes themselves will not be modified.
|
||||
If the B<-i> option is specified, the nodes themselves will not be modified.
|
||||
|
||||
You can also have postscripts executed on the nodes by using the -P option if needed.
|
||||
|
||||
@ -88,12 +88,11 @@ Use this option to ONLY synchronize any AIX statelite files from the primary ser
|
||||
|
||||
=item B<-P|--postscripts>
|
||||
|
||||
Specifies a list of extra postscripts to be run on the nodes after the nodes are moved over to the new serive node. If 'all' is specified, all the postscripts defined in the postscripts table will be run for the nodes. The specified postscripts must be stored under /install/postscripts directory.
|
||||
Specifies a list of extra postscripts to be run on the nodes after the nodes are moved over to the new service node. If B<all> is specified, all the postscripts defined in the postscripts table will be run for the nodes. The specified postscripts must be stored under the /install/postscripts directory.
|
||||
|
||||
=item B<-s|--source>
|
||||
|
||||
Specifies the hostname of the current (source) service node sa known by (facing)
|
||||
the management node.
|
||||
Specifies the hostname of the current (source) service node as known by (facing) the management node.
|
||||
|
||||
=item B<-S|--sourcen>
|
||||
|
||||
@ -118,49 +117,49 @@ Command Version.
|
||||
|
||||
Move the nodes contained in group "group1" to the service node named "xcatsn02".
|
||||
|
||||
B<snmove group1 -d xcatsn02 -D xcatsn02-eth1>
|
||||
snmove group1 -d xcatsn02 -D xcatsn02-eth1
|
||||
|
||||
=item 2.
|
||||
|
||||
Move all the nodes that use service node xcatsn01 to service node xcatsn02.
|
||||
|
||||
B<snmove -s xcatsn01 -S xcatsn01-eth1 -d xcatsn02 -D xcatsn02-eth1>
|
||||
snmove -s xcatsn01 -S xcatsn01-eth1 -d xcatsn02 -D xcatsn02-eth1
|
||||
|
||||
=item 3.
|
||||
|
||||
Move any nodes that have sn1 as their primary server to the backup service node set in the xCAT node definition.
|
||||
|
||||
B<snmove -s sn1>
|
||||
snmove -s sn1
|
||||
|
||||
=item 4.
|
||||
|
||||
Move all the nodes in the xCAT group named "nodegroup1" to their backup SNs.
|
||||
|
||||
B<snmove nodegroup1>
|
||||
snmove nodegroup1
|
||||
|
||||
=item 5.
|
||||
|
||||
Move all the nodes in xCAT group "sngroup1" to the service node named "xcatsn2".
|
||||
|
||||
B<snmove sngroup1 -d xcatsn2>
|
||||
snmove sngroup1 -d xcatsn2
|
||||
|
||||
=item 6.
|
||||
|
||||
Move all the nodes in xCAT group "sngroup1" to the SN named "xcatsn2" and run extra postscripts.
|
||||
|
||||
B<snmove sngroup1 -d xcatsn2 -P test1>
|
||||
snmove sngroup1 -d xcatsn2 -P test1
|
||||
|
||||
=item 7.
|
||||
|
||||
Move all the nodes in xCAT group "sngroup1" to the SN named "xcatsn2" and do not run anything on the nodes.
|
||||
|
||||
B<snmove sngroup1 -d xcatsn2 -i>
|
||||
snmove sngroup1 -d xcatsn2 -i
|
||||
|
||||
=item 8.
|
||||
|
||||
Synchronize any AIX statelite files from the primary server for compute03 to the backup server. This will not actually move the node to its backup service node.
|
||||
|
||||
B<snmove compute03 -l -V>
|
||||
snmove compute03 -l -V
|
||||
|
||||
=back
|
||||
|
||||
|
@ -14,10 +14,18 @@ This command is only for Power 775 using Direct FSP Management, and used in Powe
|
||||
|
||||
The B<swapnodes> command will keep the B<current_node> name in the xCAT table, and use the I<fip_node>'s hardware resource. Besides that, the IO adapters will be assigned to the new hardware resource if they are in the same CEC. So the swapnodes command will do 2 things:
|
||||
|
||||
(1)swap the location info in the db between 2 nodes:
|
||||
All the ppc table attributes (including hcp, id, parent, supernode and so on).
|
||||
All the nodepos table attributes(including rack, u, chassis, slot, room and so on).
|
||||
(2)assign the I/O adapters from the defective node(the original current_node) to the available node(the original fip_node) if the nodes are in the same cec.
|
||||
=over 2
|
||||
|
||||
=item 1.
|
||||
swap the location info in the db between 2 nodes:
|
||||
|
||||
All the ppc table attributes (including hcp, id, parent, supernode and so on).
|
||||
All the nodepos table attributes (including rack, u, chassis, slot, room and so on).
|
||||
|
||||
=item 2.
|
||||
assign the I/O adapters from the defective node (the original current_node) to the available node (the original fip_node) if the nodes are in the same cec.
|
||||
|
||||
=back
|
||||
|
||||
The B<swapnodes> command shouldn't make the decision of which 2 nodes are swapped. It just receives the 2 node names as command line parameters.
|
||||
|
||||
@ -58,25 +66,21 @@ one way. Only move the I<current_node> definition to the I<fip_node>'s hardware
|
||||
|
||||
=over 3
|
||||
|
||||
=item 1
|
||||
|
||||
=item 1.
|
||||
To swap the service node attributes and IO assignments between sn1 and compute2, which are in the same cec: all the attributes in the ppc table and nodepos table of the two nodes will be swapped, and the I/O adapters from the defective node (the original sn1) will be assigned to the available node (the original compute2). After the swapping, sn1 will use compute2's hardware resource and the I/O adapters from the original sn1.
|
||||
|
||||
swapnodes -c sn1 -f compute2
|
||||
|
||||
=item 2
|
||||
|
||||
=item 2.
|
||||
To swap the service node attributes and IO assignments between sn1 and compute2, which are NOT in the same cec: all the attributes in the ppc table and nodepos table of the two nodes will be swapped. After the swapping, sn1 will use compute2's hardware resource.
|
||||
|
||||
swapnodes -c sn1 -f compute2
|
||||
|
||||
=item 3
|
||||
|
||||
=item 3.
|
||||
To only move the service node (sn1) definition to the compute node (compute2)'s hardware resource, and not move the compute2 definition to sn1. After the swapping, sn1 will use compute2's hardware resource, and the compute2 definition is not changed.
|
||||
|
||||
swapnodes -c sn1 -f compute2 -o
|
||||
|
||||
|
||||
=back
|
||||
|
||||
|
||||
|
@ -6,12 +6,11 @@ B<switchdiscover> - Discover all the switches on the subnets.
|
||||
|
||||
=head1 SYNOPSIS
|
||||
|
||||
I<switchdiscover [-h| --help]>
|
||||
B<switchdiscover [-h| --help]>
|
||||
|
||||
I<switchdiscover [-v| --version]>
|
||||
B<switchdiscover [-v| --version]>
|
||||
|
||||
|
||||
I<switchdiscover [noderange|--range ip_ranges] [-V] [-w][-r|-x|-z][-s scan_methods]>
|
||||
B<switchdiscover> [I<noderange> | B<--range> I<ip_ranges>] B<[-V] [-w][-r|-x|-z][-s> I<scan_methods>]
|
||||
|
||||
|
||||
|
||||
@ -28,7 +27,7 @@ For lldp method, please make sure that lldpd package is installed and lldpd is r
|
||||
|
||||
=over 10
|
||||
|
||||
=item B<noderange>
|
||||
=item I<noderange>
|
||||
|
||||
The switches which the user wants to discover.
|
||||
If the user specifies the noderange, switchdiscover will just
|
||||
@ -39,7 +38,7 @@ specified in noderange should be defined in database in advance.
|
||||
The ips of the switches will be defined in /etc/hosts file.
|
||||
This command will fill the switch attributes for the switches defined.
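For example (switch names illustrative, assuming those switch definitions already exist in the database), the discovered attributes could be written back with:

  switchdiscover switch1-switch4 -w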
|
||||
|
||||
=item B<-h>
|
||||
=item B<-h|--help>
|
||||
|
||||
Display usage message.
|
||||
|
||||
@ -60,7 +59,7 @@ Display Raw responses.
|
||||
It is a comma separated list of methods for switch discovery.
|
||||
The possible switch scan methods are: lldp and nmap. The default is nmap.
|
||||
|
||||
=item B<-v>
|
||||
=item B<-v|--version>
|
||||
|
||||
Command Version.
|
||||
|
||||
@ -92,25 +91,23 @@ Stanza formated output.
|
||||
|
||||
=over 3
|
||||
|
||||
=item *
|
||||
|
||||
=item 1.
|
||||
To discover the switches on some subnets:
|
||||
|
||||
B<switchdiscover> I<--range 10.2.3.0/24,192.168.3.0/24,11.5.6.7>
|
||||
|
||||
=item *
|
||||
switchdiscover --range 10.2.3.0/24,192.168.3.0/24,11.5.6.7
|
||||
|
||||
=item 2.
|
||||
To do the switch discovery and save them to the xCAT database:
|
||||
|
||||
B<switchdiscover> I<--range 10.2.3.4/24 -w>
|
||||
switchdiscover --range 10.2.3.4/24 -w
|
||||
|
||||
It is recommended to run B<makehosts> after the switches are saved in the DB.
|
||||
|
||||
=item *
|
||||
=item 3.
|
||||
|
||||
To use the lldp method to discover the switches:
|
||||
|
||||
B<switchdiscover> -s lldp
|
||||
switchdiscover -s lldp
|
||||
|
||||
|
||||
=back
|
||||
@ -121,9 +118,3 @@ B<switchdiscover> -s lldp
|
||||
|
||||
|
||||
=head1 SEE ALSO
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
@ -6,7 +6,7 @@ B<tabgrep> - list table names in which an entry for the given node appears.
|
||||
|
||||
B<tabgrep> I<nodename>
|
||||
|
||||
B<tabgrep> [I<-?> | I<-h> | I<--help>]
|
||||
B<tabgrep> [B<-?> | B<-h> | B<--help>]
|
||||
|
||||
=head1 DESCRIPTION
|
||||
|
||||
@ -29,11 +29,9 @@ Display usage message.
|
||||
=over 3
|
||||
|
||||
=item 0
|
||||
|
||||
The command completed successfully.
|
||||
|
||||
=item 1
|
||||
|
||||
An error has occurred.
|
||||
|
||||
=back
|
||||
@ -42,11 +40,11 @@ An error has occurred.
|
||||
|
||||
=over 2
|
||||
|
||||
=item *
|
||||
=item 1.
|
||||
|
||||
To display the tables that contain blade1:
|
||||
|
||||
B<tabgrep> I<blade1>
|
||||
tabgrep blade1
|
||||
|
||||
The output would be similar to:
|
||||
|
||||
@ -67,4 +65,4 @@ The output would be similar to:
|
||||
|
||||
=head1 SEE ALSO
|
||||
|
||||
L<nodels(1)|nodels.1>, L<tabdump(8)|tabdump.8>
|
||||
L<nodels(1)|nodels.1>, L<tabdump(8)|tabdump.8>
|
||||
|
@ -5,12 +5,11 @@ B<unregnotif> - unregister a Perl module or a command that was watching for the
|
||||
|
||||
=head1 SYNOPSIS
|
||||
|
||||
I<unregnotif [-h| --help]>
|
||||
B<unregnotif [-h| --help]>
|
||||
|
||||
I<unregnotif [-v| --version]>
|
||||
B<unregnotif [-v| --version]>
|
||||
|
||||
|
||||
I<unregnotif I<filename>>
|
||||
B<unregnotif> I<filename>
|
||||
|
||||
|
||||
=head1 DESCRIPTION
|
||||
@ -18,7 +17,7 @@ I<unregnotif I<filename>>
|
||||
This command is used to unregister a Perl module or a command that was watching for the changes of the desired xCAT database tables.
|
||||
|
||||
|
||||
=head1 Parameters
|
||||
=head1 PARAMETERS
|
||||
|
||||
I<filename> is the path name of the Perl module or command to be unregistered.
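For example (the module path is illustrative only):

  unregnotif /opt/xcat/lib/perl/xCAT_monitoring/mycode.pm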
|
||||
|
||||
|
@ -4,11 +4,11 @@ B<updateSNimage> - Adds the needed Service Node configuration files to the insta
|
||||
|
||||
=head1 SYNOPSIS
|
||||
|
||||
I<updateSNimage [-h | --help ]>
|
||||
B<updateSNimage [-h | --help ]>
|
||||
|
||||
I<updateSNimage [-v | --version]>
|
||||
B<updateSNimage [-v | --version]>
|
||||
|
||||
I<updateSNimage {-n} [-p]>
|
||||
B<updateSNimage> [B<-n> I<node>] [B<-p> I<path>]
|
||||
|
||||
=head1 DESCRIPTION
|
||||
|
||||
@ -21,9 +21,7 @@ B<-h |--help> Display usage message.
|
||||
|
||||
B<-v |--version> Display xCAT version.
|
||||
|
||||
|
||||
B<-n | --node> A remote host name or ip address that contains the install image to be updated.
|
||||
|
||||
B<-n |--node> A remote host name or ip address that contains the install image to be updated.
|
||||
|
||||
B<-p |--path> Path to the install image.
|
||||
|
||||
@ -38,11 +36,11 @@ B<-p |--path> Path to the install image.
|
||||
|
||||
1. To update the image on the local host.
|
||||
|
||||
I<updateSNimage -p /install/netboot/fedora8/x86_64/test/rootimg>
|
||||
updateSNimage -p /install/netboot/fedora8/x86_64/test/rootimg
|
||||
|
||||
|
||||
2. To update the image on a remote host.
|
||||
|
||||
I<updateSNimage -n 9.112.45.6 -p /install/netboot/fedora8/x86_64/test/rootimg>
|
||||
updateSNimage -n 9.112.45.6 -p /install/netboot/fedora8/x86_64/test/rootimg
|
||||
|
||||
|
||||
|
@ -4,13 +4,13 @@ B<updatenode> - Update nodes in an xCAT cluster environment.
|
||||
|
||||
=head1 SYNOPSIS
|
||||
|
||||
B<updatenode> B<noderange> [B<-V>|B<--verbose>] [B<-F>|B<--sync>] [B<-f>|B<--snsync>] [B<-S>|B<--sw>] [B<-l> I<userID>] [B<-P>|B<--scripts> [B<script1,script2...>]] [B<-s>|B<--sn>] [B<-A>|B<--updateallsw>] [B<-c>|B<--cmdlineonly>] [B<-d alt_source_dir>] [B<--fanout>] [B<-t timeout>} [B<attr=val> [B<attr=val...>]] [B<-n>|B<--noverify>]
|
||||
B<updatenode> I<noderange> [B<-V>|B<--verbose>] [B<-F>|B<--sync>] [B<-f>|B<--snsync>] [B<-S>|B<--sw>] [B<-l> I<userID>] [B<-P>|B<--scripts> [I<script1,script2...>]] [B<-s>|B<--sn>] [B<-A>|B<--updateallsw>] [B<-c>|B<--cmdlineonly>] [B<-d> I<alt_source_dir>] [B<--fanout>] [B<-t> I<timeout>] [I<attr=val> [I<attr=val...>]] [B<-n>|B<--noverify>]
|
||||
|
||||
B<updatenode> B<noderange> [B<-k>|B<--security>] [B<-t timeout>]
|
||||
B<updatenode> B<noderange> [B<-k>|B<--security>] [B<-t> I<timeout>]
|
||||
|
||||
B<updatenode> B<noderange> [B<-g>|B<--genmypost>]
|
||||
|
||||
B<updatenode> B<noderange> [B<-V>|B<--verbose>] [B<-t timeout>] [B<script1,script2...>]
|
||||
B<updatenode> B<noderange> [B<-V>|B<--verbose>] [B<-t> I<timeout>] [I<script1,script2...>]
|
||||
|
||||
B<updatenode> B<noderange> [B<-V>|B<--verbose>] [B<-f>|B<--snsync>]
|
||||
|
||||
@ -23,40 +23,34 @@ to perform the following node updates:
|
||||
|
||||
=over 3
|
||||
|
||||
=item 1
|
||||
|
||||
=item 1.
|
||||
Distribute and synchronize files.
|
||||
|
||||
=item 2
|
||||
|
||||
=item 2.
|
||||
Install or update software on diskful nodes.
|
||||
|
||||
=item 3
|
||||
|
||||
=item 3.
|
||||
Run postscripts.
|
||||
|
||||
=item 4
|
||||
|
||||
=item 4.
|
||||
Update the ssh keys and host keys for the service nodes and compute nodes;
|
||||
Update the ca and credentials for the service nodes.
|
||||
|
||||
=back
|
||||
|
||||
The default behavior when no options are input to updatenode will be to run
|
||||
the following options "-S", "-P" and "-F" options in this order.
|
||||
the B<-S>, B<-P> and B<-F> options in this order.
|
||||
If you wish to limit updatenode to specific
|
||||
actions you can use combinations of the "-S", "-P", and "-F" flags.
|
||||
actions you can use combinations of the B<-S>, B<-P>, and B<-F> flags.
|
||||
|
||||
For example, If you just want to synchronize configuration file you could
|
||||
specify the "-F" flag. If you want to synchronize files and update
|
||||
software you would specify the "-F" and "-S" flags. See the descriptions
|
||||
specify the B<-F> flag. If you want to synchronize files and update
|
||||
software you would specify the B<-F> and B<-S> flags. See the descriptions
|
||||
of these flags and examples below.
|
||||
|
||||
The flag "-k" (--security) can NOT be used together with "-S", "-P", and "-F"
|
||||
flags.
|
||||
The flag B<-k> (B<--security>) can NOT be used together with B<-S>, B<-P>, and B<-F> flags.
|
||||
|
||||
The flag "-f" (--snsync) can NOT be used together with "-S", "-P", and "-F"
|
||||
flags.
|
||||
The flag B<-f> (B<--snsync>) can NOT be used together with B<-S>, B<-P>, and B<-F> flags.
|
||||
|
||||
|
||||
Note: In a large cluster environment the updating of nodes in an ad hoc
|
||||
@ -72,15 +66,12 @@ The basic process for distributing and synchronizing nodes is:
|
||||
=over 3
|
||||
|
||||
=item *
|
||||
|
||||
Create a synclist file.
|
||||
|
||||
=item *
|
||||
|
||||
Indicate the location of the synclist file.
|
||||
|
||||
=item *
|
||||
|
||||
Run the updatenode command to update the nodes.
|
||||
|
||||
=back
|
||||
@ -88,8 +79,7 @@ Run the updatenode command to update the nodes.
|
||||
Files may be distributed and synchronized for both diskless and
|
||||
diskful nodes. Syncing files to NFS-based statelite nodes is not supported.
|
||||
|
||||
More information on using the synchronization file function is in
|
||||
the following doc: Using_Updatenode.
|
||||
More information on using the synchronization file function is in the following doc: Using_Updatenode.
|
||||
|
||||
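As a minimal sketch of these three steps (the file names and node group are illustrative only), a synclist could contain source-to-destination pairs such as:

  /etc/hosts -> /etc/hosts
  /root/conf/ntp.conf -> /etc/ntp.conf

and, once its location has been indicated as described below, the nodes are updated with:

  updatenode compute -F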
=head3 Create the synclist file
|
||||
|
||||
@ -193,22 +183,18 @@ The basic functions of update security for nodes:
|
||||
=over 3
|
||||
|
||||
=item *
|
||||
|
||||
Setup the ssh keys for the target nodes. It enables the management
|
||||
node and service nodes to ssh to the target nodes without password.
|
||||
|
||||
=item *
|
||||
|
||||
Redeliver the host keys to the target nodes.
|
||||
|
||||
=item *
|
||||
|
||||
Redeliver the ca and certificates files to the service node.
|
||||
These files are used to authenticate the ssl connection between
|
||||
xcatd's of management node and service node.
|
||||
|
||||
=item *
|
||||
|
||||
Remove the entries of target nodes from known_hosts file.
|
||||
|
||||
=back
|
||||
@ -242,21 +228,22 @@ Since the certificates have the validity time, the ntp service is recommended
|
||||
to be set up between management node and service node.
|
||||
|
||||
Simply running following command to update the security keys:
|
||||
B<updatenode> I<noderange> -k
|
||||
|
||||
updatenode <noderange> -k
|
||||
|
||||
|
||||
=head1 PARAMETERS
|
||||
|
||||
=over 10
|
||||
|
||||
=item B<noderange>
|
||||
=item I<noderange>
|
||||
|
||||
A set of comma delimited xCAT node names
|
||||
and/or group names. See the xCAT "noderange"
|
||||
man page for details on additional supported
|
||||
formats.
|
||||
|
||||
=item B<script1,script2...>
|
||||
=item I<script1,script2...>
|
||||
|
||||
A comma-separated list of script names.
|
||||
The scripts must be executable and copied
|
||||
@ -265,9 +252,9 @@ Each script can take zero or more parameters.
|
||||
If parameters are specified, the whole list needs to be quoted by double quotes.
|
||||
For example:
|
||||
|
||||
B<"script1 p1 p2,script2">
|
||||
"script1 p1 p2,script2"
|
||||
|
||||
=item [B<attr=val> [B<attr=val...>]]
|
||||
=item [I<attr=val> [I<attr=val...>]]
|
||||
|
||||
Specifies one or more "attribute equals value" pairs, separated by spaces.
|
||||
Attr=val pairs must be specified last on the command line. The currently
|
||||
@ -298,7 +285,7 @@ Specifies that the updatenode command should only use software maintenance
|
||||
information provided on the command line. This flag is only valid when
|
||||
using AIX software maintenance support.
|
||||
|
||||
=item B<-d alt_source_dir>
|
||||
=item B<-d> I<alt_source_dir>
|
||||
|
||||
Used to specify a source directory other than the standard lpp_source directory specified in the xCAT osimage definition. (AIX only)
|
||||
|
||||
@ -376,7 +363,7 @@ Specifies that node network availability verification will be skipped.
|
||||
|
||||
Set the server information stored on the nodes in /opt/xcat/xcatinfo on Linux.
|
||||
|
||||
=item B<-t timeout>
|
||||
=item B<-t> I<timeout>
|
||||
|
||||
Specifies a timeout in seconds the command will wait for the remote targets to complete. If timeout is not specified
|
||||
it will wait indefinitely. updatenode -k is the exception that has a timeout of 10 seconds, unless overridden by this flag.
|
||||
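For example (the node group and value are illustrative), to give the security update a 60 second window instead of its default 10 seconds:

  updatenode compute -k -t 60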
@ -402,12 +389,10 @@ Verbose mode.
|
||||
|
||||
=over 3
|
||||
|
||||
=item 1
|
||||
=item 1.
|
||||
To perform all updatenode features for the Linux nodes in the group "compute":
|
||||
|
||||
To perform all updatenode features for the Linux nodes in the group
|
||||
"compute":
|
||||
|
||||
B<updatenode compute>
|
||||
updatenode compute
|
||||
|
||||
The command will: run any scripts listed in the nodes "postscripts and postbootscripts"
|
||||
attribute, install or update any software indicated in the
|
||||
@ -415,171 +400,148 @@ attribute, install or update any software indicated in the
|
||||
B<To install or update software part>), synchronize any files indicated by
|
||||
the synclist files specified in the osimage "synclists" attribute.
|
||||
|
||||
=item 2
|
||||
=item 2.
|
||||
To run postscripts,postbootscripts and file synchronization only on the node "clstrn01":
|
||||
|
||||
To run postscripts, postbootscripts and file synchronization only on the node
|
||||
"clstrn01":
|
||||
|
||||
B<updatenode clstrn01 -F -P>
|
||||
|
||||
=item 3
|
||||
updatenode clstrn01 -F -P
|
||||
|
||||
=item 3.
|
||||
Running updatenode -P with the syncfiles postscript is not supported. You should use updatenode -F instead.
|
||||
|
||||
Do not run:
|
||||
|
||||
B<updatenode clstrno1 -P syncfiles>
|
||||
updatenode clstrno1 -P syncfiles
|
||||
|
||||
Run:
|
||||
|
||||
B<updatenode clstrn01 -F>
|
||||
updatenode clstrn01 -F
|
||||
|
||||
=item 4
|
||||
=item 4.
|
||||
To run the postscripts and postbootscripts indicated in the postscripts and postbootscripts attributes on the node "clstrn01":
|
||||
|
||||
To run the postscripts and postbootscripts indicated in the postscripts and postbootscripts attributes on
|
||||
the node "clstrn01":
|
||||
|
||||
B<updatenode clstrn01 -P>
|
||||
|
||||
=item 5
|
||||
updatenode clstrn01 -P
|
||||
|
||||
=item 5.
|
||||
To run the postscripts script1 and script2 on the node "clstrn01":
|
||||
|
||||
B<cp script1,script2 /install/postscripts>
|
||||
cp script1,script2 /install/postscripts
|
||||
|
||||
B<updatenode clstrn01 -P "script1 p1 p2,script2">
|
||||
updatenode clstrn01 -P "script1 p1 p2,script2"
|
||||
|
||||
Since flag '-P' can be omitted when only script names are specified,
|
||||
the following command is equivalent:
|
||||
|
||||
B<updatenode clstrn01 "script1 p1 p2,script2">
|
||||
updatenode clstrn01 "script1 p1 p2,script2"
|
||||
|
||||
p1 p2 are parameters for script1.
|
||||
|
||||
|
||||
=item 6
|
||||
|
||||
=item 6.
|
||||
To synchronize the files on the node "clstrn01": Prepare the synclist file.
|
||||
For AIX, set the full path of synclist in the osimage table synclists
|
||||
attribute. For Linux, put the synclist file into the location:
|
||||
/install/custom/<inst_type>/<distro>/<profile>.<os>.<arch>.synclist
|
||||
Then:
|
||||
|
||||
B<updatenode clstrn01 -F>
|
||||
|
||||
=item 7
|
||||
updatenode clstrn01 -F
|
||||
|
||||
=item 7.
|
||||
To perform the software update on the Linux node "clstrn01": Copy the extra
|
||||
rpm into the /install/post/otherpkgs/<os>/<arch>/* and add the rpm names into
|
||||
the /install/custom/install/<ostype>/profile.otherpkgs.pkglist . Then:
|
||||
|
||||
B<updatenode clstrn01 -S>
|
||||
|
||||
=item 8
|
||||
updatenode clstrn01 -S
|
||||
|
||||
=item 8.
|
||||
To update the AIX node named "xcatn11" using the "installp_bundle" and/or
|
||||
"otherpkgs" attribute values stored in the xCAT database. Use the default installp, rpm and emgr flags.
|
||||
|
||||
B<updatenode xcatn11 -V -S>
|
||||
updatenode xcatn11 -V -S
|
||||
|
||||
Note: The xCAT "xcatn11" node definition points to an xCAT osimage definition
|
||||
which contains the "installp_bundle" and "otherpkgs" attributes as well as
|
||||
the name of the NIM lpp_source resource.
|
||||
|
||||
=item 9
|
||||
|
||||
=item 9.
|
||||
To update the AIX node "xcatn11" by installing the "bos.cpr" fileset using
|
||||
the "-agQXY" installp flags. Also display the output of the installp command.
|
||||
|
||||
B<updatenode xcatn11 -V -S otherpkgs="I:bos.cpr" installp_flags="-agQXY">
|
||||
updatenode xcatn11 -V -S otherpkgs="I:bos.cpr" installp_flags="-agQXY"
|
||||
|
||||
Note: The 'I:' prefix is optional but recommended for installp packages.
|
||||
|
||||
=item 10
|
||||
|
||||
=item 10.
|
||||
To uninstall the "bos.cpr" fileset that was installed in the previous example.
|
||||
|
||||
B<updatenode xcatn11 -V -S otherpkgs="I:bos.cpr" installp_flags="-u">
|
||||
|
||||
=item 11
|
||||
updatenode xcatn11 -V -S otherpkgs="I:bos.cpr" installp_flags="-u"
|
||||
|
||||
=item 11.
|
||||
To update the AIX nodes "xcatn11" and "xcatn12" with the "gpfs.base" fileset
|
||||
and the "rsync" rpm using the installp flags "-agQXY" and the rpm flags "-i --nodeps".
|
||||
|
||||
B<updatenode xcatn11,xcatn12 -V -S otherpkgs="I:gpfs.base,R:rsync-2.6.2-1.aix5.1.ppc.rpm" installp_flags="-agQXY" rpm_flags="-i --nodeps">
|
||||
updatenode xcatn11,xcatn12 -V -S otherpkgs="I:gpfs.base,R:rsync-2.6.2-1.aix5.1.ppc.rpm" installp_flags="-agQXY" rpm_flags="-i --nodeps"
|
||||
|
||||
Note: Using the "-V" flag with multiple nodes may result in a large amount of output.
|
||||
|
||||
=item 12
|
||||
|
||||
=item 12.
|
||||
To uninstall the rsync rpm that was installed in the previous example.
|
||||
|
||||
B<updatenode xcatn11 -V -S otherpkgs="R:rsync-2.6.2-1" rpm_flags="-e">
|
||||
|
||||
=item 13
|
||||
updatenode xcatn11 -V -S otherpkgs="R:rsync-2.6.2-1" rpm_flags="-e"
|
||||
|
||||
=item 13.
|
||||
Update the AIX node "node01" using the software specified in the NIM "sslbnd" and "sshbnd" installp_bundle resources and the "-agQXY" installp flags.
|
||||
|
||||
B<updatenode node01 -V -S installp_bundle="sslbnd,sshbnd" installp_flags="-agQXY">
|
||||
|
||||
=item 14
|
||||
updatenode node01 -V -S installp_bundle="sslbnd,sshbnd" installp_flags="-agQXY"
|
||||
|
||||
=item 14.
|
||||
To get a preview of what would happen if you tried to install the "rsct.base" fileset on AIX node "node42". (You must use the "-V" option to get the full output from the installp command.)
|
||||
|
||||
B<updatenode node42 -V -S otherpkgs="I:rsct.base" installp_flags="-apXY">
|
||||
|
||||
=item 15
|
||||
updatenode node42 -V -S otherpkgs="I:rsct.base" installp_flags="-apXY"
|
||||
|
||||
=item 15.
|
||||
To check what rpm packages are installed on the AIX node "node09". (You must use the "-c" flag so updatenode does not get a list of packages from the database.)
|
||||
|
||||
B<updatenode node09 -V -c -S rpm_flags="-qa">
|
||||
|
||||
=item 16
|
||||
updatenode node09 -V -c -S rpm_flags="-qa"
|
||||
|
||||
=item 16.
|
||||
To install all software updates contained in the /images directory.
|
||||
|
||||
B<updatenode node27 -V -S -A -d /images>
|
||||
updatenode node27 -V -S -A -d /images
|
||||
|
||||
Note: Make sure the directory is exportable and that the permissions are set
|
||||
correctly for all the files. (Including the .toc file in the case of
|
||||
installp filesets.)
|
||||
|
||||
=item 17
|
||||
|
||||
=item 17.
|
||||
Install the interim fix package located in the /efixes directory.
|
||||
|
||||
B<updatenode node29 -V -S -d /efixes otherpkgs=E:IZ38930TL0.120304.epkg.Z>
|
||||
|
||||
=item 18
|
||||
updatenode node29 -V -S -d /efixes otherpkgs=E:IZ38930TL0.120304.epkg.Z
|
||||
|
||||
=item 18.
|
||||
To uninstall the interim fix that was installed in the previous example.
|
||||
|
||||
B<updatenode xcatsn11 -V -S -c emgr_flags="-r -L IZ38930TL0">
|
||||
|
||||
=item 19
|
||||
updatenode xcatsn11 -V -S -c emgr_flags="-r -L IZ38930TL0"
|
||||
|
||||
=item 19.
|
||||
To update the security keys for the node "node01"
|
||||
|
||||
B<updatenode node01 -k>
|
||||
|
||||
=item 20
|
||||
updatenode node01 -k
|
||||
|
||||
=item 20.
|
||||
To update the service nodes with the files to be synchronized to node group compute:
|
||||
|
||||
B<updatenode compute -f>
|
||||
|
||||
=item 21
|
||||
updatenode compute -f
|
||||
|
||||
=item 21.
|
||||
To run updatenode with the non-root userid "user1" that has been setup as an xCAT userid with sudo on node1 to run as root, do the following:
|
||||
See Granting_Users_xCAT_privileges for required sudo setup.
|
||||
|
||||
B<updatenode node1 -l user1 -P syslog>
|
||||
|
||||
=item 22
|
||||
updatenode node1 -l user1 -P syslog
|
||||
|
||||
=item 22.
|
||||
In a Sysclone environment, after capturing the delta changes from the golden client to the management node, run updatenode to push these delta changes to the target nodes.
|
||||
|
||||
B<updatenode target-node -S>
|
||||
updatenode target-node -S
|
||||
|
||||
|
||||
=back
|
||||
|
@ -36,7 +36,7 @@ Print version.
|
||||
|
||||
=head1 B<Examples>
|
||||
|
||||
B<wkill> I<node1-node5>
|
||||
wkill node1-node5
|
||||
|
||||
|
||||
=head1 B<See> B<Also>
|
||||
|
@ -4,7 +4,7 @@ B<xCATWorld> - Sample client program for xCAT.
|
||||
|
||||
=head1 SYNOPSIS
|
||||
|
||||
I<xCATWorld {noderange}>
|
||||
B<xCATWorld> I<noderange>
|
||||
|
||||
=head1 DESCRIPTION
|
||||
|
||||
@ -21,9 +21,7 @@ For debugging purposes we have an Environment Variable XCATBYPASS. If export XC
|
||||
|
||||
1. To run, enter:
|
||||
|
||||
I<xCATWorld nodegrp1>
|
||||
|
||||
|
||||
xCATWorld nodegrp1
|
||||
|
||||
=head1 FILES
|
||||
|
||||
|
@ -4,9 +4,9 @@ B<xcat2nim> - Use this command to create and manage AIX NIM definitions based on
|
||||
|
||||
=head1 SYNOPSIS
|
||||
|
||||
I<xcat2nim [-h|--help ]>
|
||||
B<xcat2nim [-h|--help]>
|
||||
|
||||
I<xcat2nim [-V|--verbose] [-u|--update] [-l|--list] [-r|--remove] [-f|--force] [-t object-types] [-o object-names] [-a|--allobjects] [-p|--primarySN] [-b|--backupSN] [noderange] [attr=val [attr=val...]] >
|
||||
B<xcat2nim [-V|--verbose] [-u|--update] [-l|--list] [-r|--remove] [-f|--force] [-t object-types] [-o> I<object-names>] B<[-a|--allobjects] [-p|--primarySN] [-b|--backupSN]> I<[noderange] [attr=val [attr=val...]]>
|
||||
|
||||
=head1 DESCRIPTION
|
||||
|
||||
@ -42,7 +42,7 @@ The remove("-r"), force("-f") and update("-u") options are not supported for NIM
|
||||
|
||||
B<-a|--all> The list of objects will include all xCAT node, group and network objects.
|
||||
|
||||
B<attr=val [attr=val ...]> Specifies one or more "attribute equals value" pairs, separated by spaces. Attr=val pairs must be specified last on the command line. The attribute names must correspond to the attributes supported by the relevant NIM commands. When providing attr=val pairs on the command line you must not specify more than one object type.
|
||||
I<attr=val [attr=val ...]> Specifies one or more "attribute equals value" pairs, separated by spaces. Attr=val pairs must be specified last on the command line. The attribute names must correspond to the attributes supported by the relevant NIM commands. When providing attr=val pairs on the command line you must not specify more than one object type.
|
||||
|
||||
B<-b|--backupSN> When using backup service nodes only update the backup. The default is to update both the primary and backup service nodes.
|
||||
|
||||
@ -52,13 +52,13 @@ B<-h|--help> Display the usage message.
|
||||
|
||||
B<-l|--list> List NIM definitions corresponding to xCAT definitions.
|
||||
|
||||
B<-o object-names> A set of comma delimited xCAT object names. Objects must be of type node, group, or network.
|
||||
B<-o> I<object-names> A set of comma delimited xCAT object names. Objects must be of type node, group, or network.
|
||||
|
||||
B<-p|--primarySN> When using backup service nodes only update the primary. The default is to update both the primary and backup service nodes.
|
||||
|
||||
B<-r|--remove> Remove NIM definitions corresponding to xCAT definitions.
|
||||
|
||||
B<-t object-types> A set of comma delimited xCAT object types. Supported types include: node, group, and network.
|
||||
B<-t> I<object-types> A set of comma delimited xCAT object types. Supported types include: node, group, and network.
|
||||
|
||||
Note: If the object type is "group", it means that the B<xcat2nim> command will operate on a NIM machine group definition corresponding to the xCAT node group definition. Before creating a NIM machine group, all the NIM client nodes definition must have been created.
|
||||
|
||||
@ -76,39 +76,39 @@ B<-V|--verbose> Verbose mode.
|
||||
|
||||
1. To create a NIM machine definition corresponding to the xCAT node "clstrn01".
|
||||
|
||||
I<xcat2nim -t node -o clstrn01>
|
||||
xcat2nim -t node -o clstrn01
|
||||
|
||||
2. To create NIM machine definitions for all xCAT node definitions.
|
||||
|
||||
I<xcat2nim -t node>
|
||||
xcat2nim -t node
|
||||
|
||||
3. Update all the NIM machine definitions for the nodes contained in the xCAT "compute" node group and specify attribute values that will be applied to each definition.
|
||||
|
||||
I<xcat2nim -u -t node -o compute netboot_kernel=mp cable_type="N/A">
|
||||
xcat2nim -u -t node -o compute netboot_kernel=mp cable_type="N/A"
|
||||
|
||||
4. To create a NIM machine group definition corresponding to the xCAT group "compute".
|
||||
|
||||
I<xcat2nim -t group -o compute>
|
||||
xcat2nim -t group -o compute
|
||||
|
||||
5. To create NIM network definitions corresponding to the xCAT "clstr_net" and "publc_net" network definitions. Also display verbose output.
|
||||
|
||||
I<xcat2nim -V -t network -o "clstr_net,publc_net">
|
||||
xcat2nim -V -t network -o "clstr_net,publc_net"
|
||||
|
||||
6. To list the NIM definition for node clstrn02.
|
||||
|
||||
I<xcat2nim -l -t node clstrn02>
|
||||
xcat2nim -l -t node clstrn02
|
||||
|
||||
7. To re-create a NIM machine definition and display verbose output.
|
||||
|
||||
I<xcat2nim -V -t node -f clstrn05>
|
||||
xcat2nim -V -t node -f clstrn05
|
||||
|
||||
8. To remove the NIM definition for the group "AIXnodes".
|
||||
|
||||
I<xcat2nim -t group -r -o AIXnodes>
|
||||
xcat2nim -t group -r -o AIXnodes
|
||||
|
||||
9. To list the NIM "clstr_net" definition.
|
||||
|
||||
I<xcat2nim -l -t network -o clstr_net>
|
||||
xcat2nim -l -t network -o clstr_net
|
||||
|
||||
=head1 FILES
|
||||
|
||||
|
@ -4,9 +4,9 @@ B<xcatchroot> - Use this xCAT command to modify an xCAT AIX diskless operating s
|
||||
|
||||
=head1 SYNOPSIS
|
||||
|
||||
B<xcatchroot -h >
|
||||
B<xcatchroot -h>
|
||||
|
||||
B<xcatchroot [-V] -i osimage_name cmd_string>
|
||||
B<xcatchroot [-V] -i> I<osimage_name cmd_string>
|
||||
|
||||
=head1 DESCRIPTION
|
||||
|
||||
@ -14,16 +14,16 @@ For AIX diskless images this command will modify the AIX SPOT resource using
|
||||
the chroot command. You must include the name of an xCAT osimage
|
||||
definition and the command that you wish to have run in the spot.
|
||||
|
||||
WARNING:
|
||||
B<WARNING:>
|
||||
|
||||
=over 3
|
||||
|
||||
Be very careful when using this command!!! Make sure you are
|
||||
very clear about exactly what you are changing so that you do
|
||||
not accidently corrupt the image.
|
||||
Be very careful when using this command!!! Make sure you are
|
||||
very clear about exactly what you are changing so that you do
|
||||
not accidentally corrupt the image.
|
||||
|
||||
As a precaution it is advisable to make a copy of the original
|
||||
spot in case your changes wind up corrupting the image.
|
||||
As a precaution it is advisable to make a copy of the original
|
||||
spot in case your changes wind up corrupting the image.
|
||||
|
||||
=back
|
||||
|
||||
@ -32,7 +32,7 @@ check operation on the spot.
|
||||
|
||||
=over 3
|
||||
|
||||
nim -Fo check <spot_name>
|
||||
nim -Fo check <spot_name>
|
||||
|
||||
=back
|
||||
|
||||
@ -69,7 +69,7 @@ Always run the NIM check operation after you are done updating your spot.
|
||||
|
||||
=over 10
|
||||
|
||||
=item B<cmd_string>
|
||||
=item I<cmd_string>
|
||||
|
||||
The command you wish to have run in the chroot environment. (Use a quoted
|
||||
string.)
|
||||
@ -78,7 +78,7 @@ string.)
|
||||
|
||||
Display usage message.
|
||||
|
||||
=item B<-i osimage_name>
|
||||
=item B<-i> I<osimage_name>
|
||||
|
||||
The name of the xCAT osimage definition.
|
||||
|
||||
@ -93,11 +93,9 @@ Verbose mode.
|
||||
=over 3
|
||||
|
||||
=item 0
|
||||
|
||||
The command completed successfully.
|
||||
|
||||
=item 1
|
||||
|
||||
An error has occurred.
|
||||
|
||||
=back
|
||||
@ -107,19 +105,19 @@ An error has occurred.
|
||||
1) Set the root password to "cluster" in the spot so that when the diskless
|
||||
node boots it will have a root password set.
|
||||
|
||||
B<xcatchroot -i 614spot "/usr/bin/echo root:cluster | /usr/bin/chpasswd -c">
|
||||
xcatchroot -i 614spot "/usr/bin/echo root:cluster | /usr/bin/chpasswd -c"
|
||||
|
||||
2) Install the bash rpm package.
|
||||
|
||||
B<xcatchroot -i 614spot "/usr/bin/rpm -Uvh /lpp_source/RPMS/ppc bash-3.2-1.aix5.2.ppc.rpm">
|
||||
xcatchroot -i 614spot "/usr/bin/rpm -Uvh /lpp_source/RPMS/ppc bash-3.2-1.aix5.2.ppc.rpm"
|
||||
|
||||
3) To enable system debug.
|
||||
|
||||
B<xcatchroot -i 614spot "bosdebug -D -M">
|
||||
xcatchroot -i 614spot "bosdebug -D -M"
|
||||
|
||||
4) To set the "ipforwarding" system tunable.
|
||||
|
||||
B<xcatchroot -i 614spot "/usr/sbin/no -r -o ipforwarding=1">
|
||||
xcatchroot -i 614spot "/usr/sbin/no -r -o ipforwarding=1"
|
||||
|
||||
=head1 FILES
|
||||
|
||||
|
@ -62,12 +62,11 @@ Display output as nodenames instead of groupnames.
|
||||
|
||||
=over 3
|
||||
|
||||
=item *
|
||||
|
||||
=item 1.
|
||||
To display the results of a command issued on several nodes, in
|
||||
the format used in the Description, enter:
|
||||
|
||||
B<psh> I<node1,node2,node3 cat /etc/passwd> | B<xcoll>
|
||||
psh node1,node2,node3 cat /etc/passwd | xcoll
|
||||
|
||||
=back
|
||||
|
||||
|
@ -4,10 +4,7 @@ B<xdcp> - Concurrently copies files to or from multiple nodes. In addition, prov
|
||||
|
||||
=head1 B<SYNOPSIS>
|
||||
|
||||
B<xdcp> I<noderange> [[B<-f> I<fanout>]
|
||||
[B<-L>] [B<-l> I<userID>] [B<-o> I<node_options>] [B<-p>]
|
||||
[B<-P>] [B<-r> I<node_remote_shell>] [B<-R>] [B<-t> I<timeout>]
|
||||
[B<-T>] [B<-v>] [B<-q>] [B<-X> I<env_list>] sourcefile.... targetpath
|
||||
B<xdcp> I<noderange> [[B<-f> I<fanout>] [B<-L>] [B<-l> I<userID>] [B<-o> I<node_options>] [B<-p>] [B<-P>] [B<-r> I<node_remote_shell>] [B<-R>] [B<-t> I<timeout>] [B<-T>] [B<-v>] [B<-q>] [B<-X> I<env_list>] I<sourcefile.... targetpath>
|
||||
|
||||
B<xdcp> I<noderange> [B<-F> I<rsync input file>]
|
||||
|
||||
@ -77,14 +74,14 @@ standard output or standard error is displayed.
|
||||
|
||||
=over 5
|
||||
|
||||
=item B<sourcefile...>
|
||||
=item I<sourcefile...>
|
||||
|
||||
Specifies the complete path for the file to be copied to or
|
||||
from the target. Multiple files can be specified. When used
|
||||
with the -R flag, only a single directory can be specified.
|
||||
When used with the -P flag, only a single file can be specified.
|
||||
|
||||
=item B<targetpath>
|
||||
=item I<targetpath>
|
||||
|
||||
If one source_file file, then it specifies the file to copy the source_file
|
||||
file to on the target. If multiple source_file files, it specifies
|
||||
@ -120,7 +117,7 @@ or
|
||||
<path to source file> -> <path to destination directory ( must end in /)>
|
||||
|
||||
For example:
|
||||
/etc/password /etc/hosts -> /etc
|
||||
/etc/password /etc/hosts -> /etc
|
||||
|
||||
/tmp/file2 -> /tmp/file2
|
||||
|
||||
@ -148,6 +145,7 @@ Another option is the B<EXECUTEALWAYS:> clause in the synclist file. The B<EXEC
|
||||
The scripts must be also added to the file list to rsync to the node for hierarchical clusters. It is optional for non-hierarchical clusters.
|
||||
|
||||
For example, your rsynclist file may look like this:
|
||||
|
||||
/tmp/share/file2 -> /tmp/file2
|
||||
/tmp/share/file2.post -> /tmp/file2.post
|
||||
/tmp/share/file3 -> /tmp/filex
|
||||
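A sketch of how an B<EXECUTEALWAYS:> clause might be combined with ordinary file entries (the paths are hypothetical; only the clause name comes from the text above):

  /tmp/share/file2 -> /tmp/file2
  /tmp/share/myscript -> /tmp/myscript
  EXECUTEALWAYS:
  /tmp/myscript

Here /tmp/share/myscript is also listed as a file to sync, so it is available on the node in hierarchical clusters, and /tmp/myscript is then run on the node every time the files are synchronized.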
@ -167,6 +165,7 @@ the node. /tmp/myscript will always be run on the node.
|
||||
Another option is the B<APPEND:> clause in the synclist file. The B<APPEND:> clause is used to append the contents of the input file to an existing file on the node. The file to append B<must> already exist on the node and not be part of the synclist that contains the B<APPEND:> clause.
|
||||
|
||||
For example, your rsynclist file may look like this:
|
||||
|
||||
/tmp/share/file2 -> /tmp/file2
|
||||
/tmp/share/file2.post -> /tmp/file2.post
|
||||
/tmp/share/file3 -> /tmp/filex
|
||||
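A sketch of an B<APPEND:> entry (the paths are hypothetical): the contents of the file on the left are appended to the file on the right, which must already exist on the node:

  APPEND:
  /custom/extra-hosts -> /etc/hosts

Note that, per the restriction above, /etc/hosts itself must not also appear as a regular sync entry in the same synclist.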
@ -402,13 +401,11 @@ userdefined.
|
||||
|
||||
=over 3
|
||||
|
||||
|
||||
=item *
|
||||
|
||||
=item 1.
|
||||
To copy the /etc/hosts file from all nodes in the cluster
|
||||
to the /tmp/hosts.dir directory on the local host, enter:
|
||||
|
||||
B<xdcp> I<all -P /etc/hosts /tmp/hosts.dir>
|
||||
xdcp all -P /etc/hosts /tmp/hosts.dir
|
||||
|
||||
A suffix specifying the name of the target is appended to each
|
||||
file name. The contents of the /tmp/hosts.dir directory are similar to:
|
||||
@ -417,53 +414,44 @@ file name. The contents of the /tmp/hosts.dir directory are similar to:
|
||||
hosts._node2 hosts._node5 hosts._node8
|
||||
hosts._node3 hosts._node6
|
||||
|
||||
|
||||
=item *
|
||||
|
||||
=item 2.
|
||||
To copy the directory /var/log/testlogdir from all targets in
|
||||
NodeGroup1 with a fanout of 12, and save each directory on the local
|
||||
host as /var/log._target, enter:
|
||||
|
||||
B<xdcp> I<NodeGroup1 -f 12 -RP /var/log/testlogdir /var/log>
|
||||
|
||||
=item *
|
||||
xdcp NodeGroup1 -f 12 -RP /var/log/testlogdir /var/log
|
||||
|
||||
=item 3.
|
||||
To copy /localnode/smallfile and /tmp/bigfile to /tmp on node1
|
||||
using rsync and input -t flag to rsync, enter:
|
||||
|
||||
I<xdcp node1 -r /usr/bin/rsync -o "-t" /localnode/smallfile /tmp/bigfile /tmp>
|
||||
|
||||
=item *
|
||||
xdcp node1 -r /usr/bin/rsync -o "-t" /localnode/smallfile /tmp/bigfile /tmp
|
||||
|
||||
=item 4.
|
||||
To copy the /etc/hosts file from the local host to all the nodes
|
||||
in the cluster, enter:
|
||||
|
||||
B<xdcp> I<all /etc/hosts /etc/hosts>
|
||||
|
||||
=item *
|
||||
xdcp all /etc/hosts /etc/hosts
|
||||
|
||||
=item 5.
|
||||
To copy all the files in /tmp/testdir from the local host to all the nodes
|
||||
in the cluster, enter:
|
||||
|
||||
B<xdcp> I<all /tmp/testdir/* /tmp/testdir>
|
||||
|
||||
=item *
|
||||
xdcp all /tmp/testdir/* /tmp/testdir
|
||||
|
||||
=item 6.
|
||||
To copy all the files in /tmp/testdir and its subdirectories
|
||||
from the local host to node1 in the cluster, enter:
|
||||
|
||||
B<xdcp> I<node1 -R /tmp/testdir /tmp/testdir>
|
||||
|
||||
=item *
|
||||
xdcp node1 -R /tmp/testdir /tmp/testdir
|
||||
|
||||
=item 7.
|
||||
To copy the /etc/hosts file from node1 and node2 to the
|
||||
/tmp/hosts.dir directory on the local host, enter:
|
||||
|
||||
B<xdcp> I<node1,node2 -P /etc/hosts /tmp/hosts.dir>
|
||||
|
||||
|
||||
=item *
|
||||
xdcp node1,node2 -P /etc/hosts /tmp/hosts.dir
|
||||
|
||||
=item 8.
|
||||
To rsync the /etc/hosts file to your compute nodes:
|
||||
|
||||
Create a rsync file /tmp/myrsync, with this line:
|
||||
@ -476,9 +464,9 @@ or
|
||||
|
||||
Run:
|
||||
|
||||
B<xdcp> I<compute -F /tmp/myrsync>
|
||||
xdcp compute -F /tmp/myrsync
|
||||
|
||||
=item *
|
||||
=item 9.
|
||||
|
||||
To rsync all the files in /home/mikev to the compute nodes:
|
||||
|
||||
@ -488,10 +476,9 @@ Create a rsync file /tmp/myrsync, with this line:
|
||||
|
||||
Run:
|
||||
|
||||
B<xdcp> I<compute -F /tmp/myrsync>
|
||||
|
||||
=item *
|
||||
xdcp compute -F /tmp/myrsync
|
||||
|
||||
=item 10.
|
||||
To rsync to the compute nodes, using service nodes, the command will first
|
||||
rsync the files to the /var/xcat/syncfiles directory on the service nodes and then rsync the files from that directory to the compute nodes. The /var/xcat/syncfiles default directory on the service nodes, can be changed by putting a directory value in the site table SNsyncfiledir attribute.
|
||||
|
||||
@ -506,10 +493,11 @@ or
|
||||
|
||||
Run:
|
||||
|
||||
B<xdcp> I<compute -F /tmp/myrsync> to update the Compute Nodes
|
||||
xdcp compute -F /tmp/myrsync
|
||||
|
||||
=item *
|
||||
to update the Compute Nodes
|
||||
|
||||
=item 11.
|
||||
To rsync to the service nodes in preparation for rsyncing the compute nodes
|
||||
during an install from the service node.
|
||||
|
||||
@ -519,11 +507,11 @@ Create a rsync file /tmp/myrsync, with this line:
|
||||
|
||||
Run:
|
||||
|
||||
B<xdcp> I<compute -s -F /tmp/myrsync> to sync the service node for compute
|
||||
xdcp compute -s -F /tmp/myrsync
|
||||
|
||||
to sync the service node for compute
|
||||
|
||||
=item *
|
||||
|
||||
=item 12.
|
||||
To rsync the /etc/file1 and file2 to your compute nodes and rename to filex and filey:
|
||||
|
||||
Create a rsync file /tmp/myrsync, with these lines:
|
||||
@ -534,10 +522,11 @@ Create a rsync file /tmp/myrsync, with these line:
|
||||
|
||||
Run:
|
||||
|
||||
B<xdcp> I<compute -F /tmp/myrsync> to update the Compute Nodes
|
||||
xdcp compute -F /tmp/myrsync
|
||||
|
||||
=item *
|
||||
to update the Compute Nodes
|
||||
|
||||
=item 13.
|
||||
To rsync files in the Linux image at /install/netboot/fedora9/x86_64/compute/rootimg on the MN:
|
||||
|
||||
Create a rsync file /tmp/myrsync, with this line:
|
||||
@ -546,13 +535,12 @@ Create a rsync file /tmp/myrsync, with this line:
|
||||
|
||||
Run:
|
||||
|
||||
B<xdcp> I<-i /install/netboot/fedora9/x86_64/compute/rootimg -F /tmp/myrsync>
|
||||
xdcp -i /install/netboot/fedora9/x86_64/compute/rootimg -F /tmp/myrsync
|
||||
|
||||
=item *
|
||||
=item 14.
|
||||
To define the Management Node in the database so you can use xdcp, run
|
||||
|
||||
To define the Management Node in the database so you can use xdcp,run
|
||||
|
||||
B<xcatconfig -m>
|
||||
xcatconfig -m
|
||||
|
||||
|
||||
=back
|
||||
|
@ -487,103 +487,91 @@ The dsh command exit code is 0 if the command executed without errors and all re
|
||||
|
||||
=over 3
|
||||
|
||||
|
||||
=item *
|
||||
|
||||
=item 1.
|
||||
To set up the SSH keys for root on node1, run as root:
|
||||
|
||||
B<xdsh> I<node1 -K>
|
||||
xdsh node1 -K
|
||||
|
||||
=item *
|
||||
=item 2.
|
||||
To run the B<ps -ef> command on node targets B<node1> and B<node2>, enter:
|
||||
|
||||
To run the B<ps -ef > command on node targets B<node1> and B<node2>, enter:
|
||||
|
||||
B<xdsh> I<node1,node2 "ps -ef">
|
||||
|
||||
=item *
|
||||
xdsh node1,node2 "ps -ef"
|
||||
|
||||
=item 3.
|
||||
To run the B<ps> command on node targets B<node1> and run the remote command with the -v and -t flag, enter:
|
||||
|
||||
B<xdsh> I<node1,node2 -o"-v -t" ps>
|
||||
=item *
|
||||
xdsh node1,node2 -o"-v -t" ps
|
||||
|
||||
=item 4.
|
||||
To execute the commands contained in B<myfile> in the B<XCAT>
|
||||
context on several node targets, with a fanout of B<1>, enter:
|
||||
|
||||
B<xdsh> I<node1,node2 -f 1 -e myfile>
|
||||
|
||||
|
||||
=item *
|
||||
xdsh node1,node2 -f 1 -e myfile
|
||||
|
||||
=item 5.
|
||||
To run the ps command on node1 and ignore all the dsh
|
||||
environment variable except the DSH_NODE_OPTS, enter:
|
||||
|
||||
B<xdsh> I<node1 -X `DSH_NODE_OPTS' ps>
|
||||
|
||||
|
||||
=item *
|
||||
xdsh node1 -X `DSH_NODE_OPTS' ps
|
||||
|
||||
=item 6.
|
||||
To run on Linux, the xdsh command "rpm -qa | grep xCAT"
|
||||
on the service node fedora9 diskless image, enter:
|
||||
|
||||
B<xdsh> I<-i /install/netboot/fedora9/x86_64/service/rootimg "rpm -qa | grep xCAT">
|
||||
xdsh -i /install/netboot/fedora9/x86_64/service/rootimg "rpm -qa | grep xCAT"
|
||||
|
||||
=item *
|
||||
=item 7.
|
||||
To run on AIX, the xdsh command "lslpp -l | grep bos" on the NIM 611dskls spot, enter:
|
||||
|
||||
To run on AIX, the xdsh command "lslpp -l | grep bos"
|
||||
on the NIM 611dskls spot, enter:
|
||||
xdsh -i 611dskls "/usr/bin/lslpp -l | grep bos"
|
||||
|
||||
B<xdsh> I<-i 611dskls "/usr/bin/lslpp -l | grep bos">
|
||||
=item 8.
|
||||
To cleanup the servicenode directory that stages the copy of files to the nodes, enter:
|
||||
|
||||
=item *
|
||||
xdsh servicenoderange -c
|
||||
|
||||
To cleanup the servicenode directory that stages the copy of files to the
|
||||
nodes, enter:
|
||||
|
||||
B<xdsh> I<servicenoderange -c >
|
||||
|
||||
=item *
|
||||
=item 9.
|
||||
|
||||
To define the QLogic IB switch as a node and to set up the SSH keys for IB switch
|
||||
B<qswitch> with device configuration file
|
||||
B</var/opt/xcat/IBSwitch/Qlogic/config> and user name B<username>, Enter
|
||||
|
||||
B<chdef> I<-t node -o qswitch groups=all nodetype=switch>
|
||||
chdef -t node -o qswitch groups=all nodetype=switch
|
||||
|
||||
B<xdsh> I<qswitch -K -l username --devicetype IBSwitch::Qlogic>
|
||||
|
||||
=item *
|
||||
xdsh qswitch -K -l username --devicetype IBSwitch::Qlogic
|
||||
|
||||
=item 10.
|
||||
To define the Management Node in the database so you can use xdsh, Enter
|
||||
|
||||
B<xcatconfig -m>
|
||||
|
||||
=item *
|
||||
xcatconfig -m
|
||||
|
||||
=item 11.
|
||||
To define the Mellanox switch as a node and run a command to show the ssh keys.
|
||||
B<mswitch> with user name B<username>, Enter
|
||||
|
||||
B<chdef> I<-t node -o mswitch groups=all nodetype=switch>
|
||||
chdef -t node -o mswitch groups=all nodetype=switch
|
||||
|
||||
B<xdsh> I<mswitch -l admin --devicetype IBSwitch::Mellanox 'enable;configure terminal;show ssh server host-keys'>
|
||||
xdsh mswitch -l admin --devicetype IBSwitch::Mellanox 'enable;configure terminal;show ssh server host-keys'
|
||||
|
||||
=item *
|
||||
=item 12.
|
||||
|
||||
To define a BNT Ethernet switch as a node and run a command to create a new vlan with vlan id 3 on the switch.
|
||||
|
||||
B<chdef> I<myswitch groups=all>
|
||||
chdef myswitch groups=all
|
||||
|
||||
B<tabch> I<switch=myswitch switches.sshusername=admin switches.sshpassword=passw0rd switches.protocol=[ssh|telnet]>
|
||||
where I<admin> and I<passw0rd> are the SSH user name and password for the switch. If it is for Telnet, add I<tn:> in front of the user name: I<tn:admin>.
|
||||
tabch switch=myswitch switches.sshusername=admin switches.sshpassword=passw0rd switches.protocol=[ssh|telnet]
|
||||
|
||||
<xdsh> I<myswitch --devicetype EthSwitch::BNT 'enable;configure terminal;vlan 3;end;show vlan'>
|
||||
where I<admin> and I<passw0rd> are the SSH user name and password for the switch.
|
||||
|
||||
=item *
|
||||
If it is for Telnet, add I<tn:> in front of the user name: I<tn:admin>.
|
||||
|
||||
xdsh myswitch --devicetype EthSwitch::BNT 'enable;configure terminal;vlan 3;end;show vlan'
|
||||
|
||||
=item 13.
|
||||
|
||||
To run xdsh with the non-root userid "user1" that has been setup as an xCAT userid and with sudo on node1 and node2 to run as root, do the following, see xCAT doc on Granting_Users_xCAT_privileges:
|
||||
|
||||
|
||||
B<xdsh> I<node1,node2 --sudo -l user1 "cat /etc/passwd">
|
||||
xdsh node1,node2 --sudo -l user1 "cat /etc/passwd"
|
||||
|
||||
=back
|
||||
|
||||
|
@ -99,22 +99,20 @@ Quiet mode, do not display "." for each 1000 lines of output.
|
||||
|
||||
=over 3
|
||||
|
||||
=item *
|
||||
|
||||
=item 1.
|
||||
To display the results of a command issued on several nodes, in
|
||||
the format used in the Description, enter:
|
||||
|
||||
xdsh node1,node2,node3 cat /etc/passwd | xdshbak
|
||||
|
||||
=item *
|
||||
=item 2.
|
||||
|
||||
To display the results of a command issued on several nodes with
|
||||
identical output displayed only once, enter:
|
||||
|
||||
xdsh host1,host2,host3 pwd | xdshbak -c
|
||||
|
||||
=item *
|
||||
|
||||
=item 3.
|
||||
To display the results of a command issued on several nodes with
|
||||
compact output and be sorted alphabetically by host name, enter:
|
||||
|
||||
|
@ -49,12 +49,11 @@ is identical:
|
||||
|
||||
=over 3
|
||||
|
||||
=item *
|
||||
|
||||
=item 1.
|
||||
To display the results of a command issued on several nodes, in
|
||||
the format used in the Description, enter:
|
||||
|
||||
B<xdsh> I<node1,node2,node3 cat /etc/passwd> | B<xdshcoll>
|
||||
xdsh node1,node2,node3 cat /etc/passwd | xdshcoll
|
||||
|
||||
=back
|
||||
|
||||
|
@ -4,7 +4,7 @@ B<xpbsnodes> - PBS pbsnodes front-end for a noderange.
|
||||
|
||||
=head1 SYNOPSIS
|
||||
|
||||
B<xpbsnodes> [{B<noderange>}] [{B<offline>|B<clear>|B<stat>|B<state>}]
|
||||
B<xpbsnodes> [{I<noderange>}] [{B<offline>|B<clear>|B<stat>|B<state>}]
|
||||
|
||||
B<xpbsnodes> [B<-h>|B<--help>] [B<-v>|B<--version>]
|
||||
|
||||
@ -16,9 +16,9 @@ B<xpbsnodes> is a front-end to PBS pbsnode but uses xCAT's noderange to specify
|
||||
=head1 OPTIONS
|
||||
|
||||
|
||||
B<-h> Display usage message.
|
||||
B<-h|--help> Display usage message.
|
||||
|
||||
B<-v> Command Version.
|
||||
B<-v|--version> Command Version.
|
||||
|
||||
B<offline|off> Take nodes offline.
|
||||
|
||||
@ -39,8 +39,6 @@ B<stat|state> Display PBS node state.
|
||||
|
||||
xpbsnodes all stat
|
||||
|
||||
|
||||
|
||||
=head1 FILES
|
||||
|
||||
/opt/torque/x86_64/bin/xpbsnodes
|
||||
|