=head1 B<NAME>
B<xdcp> - Concurrently copies files to or from multiple nodes. In addition, provides an option to use rsync to update the files on the nodes, or to an installation image on the local node.
=head1 B<SYNOPSIS>
B<xdcp> I<noderange> [B<-f> I<fanout>]
[B<-L>] [B<-l> I<userID>] [B<-o> I<node_options>] [B<-p>]
[B<-P>] [B<-r> I<node_remote_shell>] [B<-R>] [B<-t> I<timeout>]
[B<-T>] [B<-v>] [B<-q>] [B<-X> I<env_list>] sourcefile.... targetpath
B<xdcp> I<noderange> [B<-F> I<rsync input file>]
B<xdcp> I<computenoderange> [B<-s> B<-F> I<rsync input file>]
B<xdcp> [B<-i> I<path to install image>] [B<-F> I<rsync input file>]
B<xdcp> [B<-h> | B<-V> | B<-q>]
=head1 B<DESCRIPTION>
The B<xdcp> command concurrently copies files to or from remote target
nodes. The command issues a remote copy command for each node or device
specified. When files are pulled from a target, they are placed into the
target_path with the name of the remote node or device appended to the
copied source_file name. The /usr/bin/rcp command is the model for syntax
and security.
If using hierarchy, then xdcp runs on the service node that is servicing the compute node. The file will first be copied to the path defined by the site table SNsyncfiledir attribute, or to the default path /var/xcat/syncfiles on the service node if the attribute is not defined. The -P flag will not automatically copy
the files from the compute node to the Management Node hierarchically. This
is a two-step process; see the -P flag.
B<REMOTE> B<USER>:
A user_ID can be specified for the remote copy command. Remote user
specification is identical for the xdcp and xdsh commands. See the xdsh
command for more information.
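For example, to copy /etc/hosts to node1 as the remote user root (a sketch; substitute any remote user ID that is configured for the remote copy command):
B<xdcp> I<node1 -l root /etc/hosts /etc/hosts>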
B<REMOTE> B<COMMAND> B<COPY>:
The B<xdcp> command uses a configurable remote copy command to execute
remote copies on remote targets. Support is explicitly provided for
Remote Shell rcp command, the OpenSSH scp command and the
/usr/bin/rsync command.
For node targets, the remote copy command is determined by the following
order of precedence:
1. The B<-r> flag.
2. The B</usr/bin/scp> command.
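For example, to override the default and copy a file with /usr/bin/rsync instead of scp (a sketch; the path assumes rsync is installed in its usual location):
B<xdcp> I<node1 -r /usr/bin/rsync /etc/hosts /etc/hosts>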
B<COMMAND> B<EXECUTIONS>
The maximum number of concurrent remote copy command processes (the
fanout) can be specified with the -f flag or the DSH_FANOUT environment
variable. The fanout is only restricted by the number of remote shell
commands that can be run in parallel. You can experiment with the
DSH_FANOUT value on your management server to see if higher values are
appropriate.
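For example, to copy a file to the compute node group while limiting xdcp to 32 concurrent remote copy processes (the fanout value is illustrative):
B<xdcp> I<compute -f 32 /etc/hosts /etc/hosts>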
A timeout value for remote copy command execution can be specified with
the -t flag or DSH_TIMEOUT environment variable. If any remote target
does not respond within the timeout value, the xdcp command displays an
error message and exits.
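For example, to give each target 60 seconds to respond before xdcp reports an error for it (the timeout value is illustrative):
B<xdcp> I<compute -t 60 /etc/hosts /etc/hosts>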
The -T flag provides diagnostic trace information for xdcp command
execution. Default settings and the actual remote copy commands that are
executed on the remote targets are displayed.
The xdcp command can be executed silently using the -Q flag; no target
standard output or standard error is displayed.
=head1 B<OPTIONS>
=over 5
=item B<sourcefile...>
Specifies the complete path for the file to be copied to or
from the target. Multiple files can be specified. When used
with the -R flag, only a single directory can be specified.
When used with the -P flag, only a single file can be specified.
=item B<targetpath>
If a single source_file is specified, target_path specifies the file that
the source_file is copied to on the target. If multiple source_file files
are specified, target_path specifies the directory that the source_file
files are copied to on the target.
If the -P flag is specified, the target_path is the local host location
for the copied files. The remote file directory structure is recreated
under target_path and the remote target name is appended
to the copied source_file name in the target_path directory.
Note: the target_path directory must exist.
=item B<-f>|B<--fanout> I<fanout_value>
Specifies a fanout value for the maximum number of concurrently
executing remote shell processes. Serial execution
can be specified by indicating a fanout value of B<1>. If B<-f>
is not specified, a default fanout value of B<64> is used.
=item B<-F>|B<--File> I<rsync input file>
Specifies the path to the file that will be used to
build the rsync command.
The format of the input file is as follows; each line contains one of:
<path to source file1> <path to source file2> ... -> <path to destination file/directory>
or
<path to source file> -> <path to destination file>
or
<path to source file> -> <path to destination directory (must end in /)>
For example:
/etc/passwd /etc/hosts -> /etc
/tmp/file2 -> /tmp/file2
/tmp/file2 -> /tmp/
/tmp/filex -> /tmp/source/filey
/etc/* -> /etc/
B<Running postscripts after files are sync'd to the nodes>:
After you define the files to rsync, you can add an B<EXECUTE:> clause in the synclist file. The B<EXECUTE:> clause will list all the postscripts that you would like to run after the files are sync'd to the node.
The postscript file must be of the form B<filename.post>, where <filename>
is the name of the file that is sync'd to the node. The B<filename.post>
must reside in the same directory as B<filename> and must be executable.
If the file B<filename> is rsync'd to the node, then the B<filename.post>
will automatically be run on the node.
If the file B<filename> is not updated on the node, the B<filename.post> will not be run.
Putting the B<filename.post> in the file list to rsync to the node is required
for hierarchical clusters. It is optional for non-hierarchical clusters.
For example, your rsynclist file may look like this:
/tmp/file2 -> /tmp/file2
/tmp/file2.post -> /tmp/file2.post
/tmp/file3 -> /tmp/filex
/tmp/file3.post -> /tmp/file3.post
# the below are postscripts
EXECUTE:
/tmp/file2.post
/tmp/file3.post
If /tmp/file2 and /tmp/file3 update /tmp/file2 and /tmp/filex on the node, then the postscripts /tmp/file2.post and /tmp/file3.post are automatically run on
the node.
On Linux, rsync always uses ssh as the remote shell. On AIX, ssh or rsh is used, depending on the site.useSSHonAIX attribute.
=item B<-h>|B<--help>
Displays usage information.
=item B<-i>|B<--rootimg> I<install image>
Specifies the path to the install image on the local Linux node.
=item B<-o>|B<--node-options> I<node_options>
Specifies options to pass to the remote shell command for
node targets. The options must be specified within double
quotation marks ("") to distinguish them from B<xdsh> options.
=item B<-p>|B<--preserve>
Preserves the source file characteristics as implemented by
the configured remote copy command.
=item B<-P>|B<--pull>
Pulls (copies) the files from the targets and places them in
the target_path directory on the local host. The target_path
must be a directory. Files pulled from remote machines have
._target appended to the file name to distinguish between
them. When the -P flag is used with the -R flag, ._target is
appended to the directory name. Only one file per invocation of the
xdcp pull command can be pulled from the specified targets.
Hierarchy is not automatically supported yet; you must first pull
the file to the service node and then pull the file from the service node
to the Management Node, as shown in the example below.
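For example, a two-step hierarchical pull might look like the following sketch, where sn1 is an assumed service node name that services the compute group and the directory names are illustrative. First, on the service node, pull the file from the compute nodes:
B<xdcp> I<compute -P /etc/hosts /tmp/hosts.dir>
Then, on the Management Node, pull one of the collected files from the service node:
B<xdcp> I<sn1 -P /tmp/hosts.dir/hosts._node1 /tmp/hosts.dir>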
=item B<-q>|B<--show-config>
Displays the current environment settings for all DSH
Utilities commands. This includes the values of all environment
variables and settings for all currently installed and
valid contexts. Each setting is prefixed with I<context>: to
identify the source context of the setting.
=item B<-r>|B<--node-rcp> I<node_remote_copy>
Specifies the full path of the remote copy command used
for remote command execution on node targets.
=item B<-R>|B<--recursive>
Recursively copies files from a local directory to the remote
targets, or when specified with the -P flag, recursively pulls
(copies) files from a remote directory to the local host. A
single source directory can be specified using the source_file
parameter.
=item B<-s> I<synch service nodes>
Syncs the files listed in the synclist (-F) only to the service
nodes for the input compute node list. The files will be placed in the
directory defined by the site.SNsyncfiledir attribute, or in the default
/var/xcat/syncfiles directory.
=item B<-t>|B<--timeout> I<timeout>
Specifies the time, in seconds, to wait for output from any
currently executing remote targets. If no output is
available from any target in the specified I<timeout>, B<xdcp>
displays an error and terminates execution for the remote
targets that failed to respond. If I<timeout> is not specified,
B<xdcp> waits indefinitely to continue processing output from
all remote targets. When specified with the B<-i> flag, the
user is prompted for an additional timeout interval to wait
for output.
=item B<-T>|B<--trace>
Enables trace mode. The B<xdcp> command prints diagnostic
messages to standard output during execution to each target.
=item B<-v>|B<--verify>
Verifies each target before executing any remote commands
on the target. If a target is not responding, execution of
remote commands for the target is canceled. When specified
with the B<-i> flag, the user is prompted to retry the
verification request.
=item B<-V>|B<--version>
Displays the B<xdcp> command version information.
=back
=head1 B<Environment> B<Variables>
=over 4
=item B<DSH_ENVIRONMENT>
Specifies a file that contains environment variable
definitions to export to the target before executing the remote
command. This variable is overridden by the B<-E> flag.
=item B<DSH_FANOUT>
Specifies the fanout value. This variable is overridden by
the B<-f> flag.
=item B<DSH_NODE_OPTS>
Specifies the options to use for the remote shell command
with node targets only. This variable is overridden by the
B<-o> flag.
=item B<DSH_NODE_RCP>
Specifies the full path of the remote copy command to use
to copy local scripts and local environment configuration
files to node targets.
=item B<DSH_NODE_RSH>
Specifies the full path of the remote shell to use for
remote command execution on node targets. This variable is
overridden by the B<-r> flag.
=item B<DSH_NODEGROUP_PATH>
Specifies a colon-separated list of directories that
contain node group files for the B<DSH> context. When the B<-a> flag
is specified in the B<DSH> context, a list of unique node
names is collected from all node group files in the path.
=item B<DSH_PATH>
Sets the command path to use on the targets. If B<DSH_PATH> is
not set, the default path defined in the profile of the
remote I<user_ID> is used.
=item B<DSH_SYNTAX>
Specifies the shell syntax to use on remote targets; B<ksh> or
B<csh>. If not specified, the B<ksh> syntax is assumed. This
variable is overridden by the B<-S> flag.
=item B<DSH_TIMEOUT>
Specifies the time, in seconds, to wait for output from
each remote target. This variable is overridden by the B<-t>
flag.
=back
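For example, the fanout and timeout described above can be preset through the environment instead of the B<-f> and B<-t> flags (the values are illustrative):
export DSH_FANOUT=32
export DSH_TIMEOUT=60
B<xdcp> I<compute /etc/hosts /etc/hosts>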
=head1 B<Exit Status>
Exit values for each remote copy command execution are displayed in
messages from the xdcp command if the remote copy command exit value is
non-zero. A non-zero return code from a remote copy command indicates
that an error was encountered during the remote copy. If a remote copy
command encounters an error, execution of the remote copy on that target
is bypassed.
The xdcp command exit code is 0 if the xdcp command executed without
errors and all remote copy commands finished with exit codes of 0. If
internal xdcp errors occur or the remote copy commands do not complete
successfully, the xdcp command exit value is greater than 0.
=head1 B<Security>
The B<xdcp> command has no security configuration requirements. All
remote command security requirements - configuration,
authentication, and authorization - are imposed by the underlying remote
command configured for B<xdsh>. The command assumes that authentication
and authorization is configured between the local host and the
remote targets. Interactive password prompting is not supported; an
error is displayed and execution is bypassed for a remote target if
password prompting occurs, or if either authorization or
authentication to the remote target fails. Security configurations as they
pertain to the remote environment and remote shell command are
user-defined.
=head1 B<Examples>
=over 3
=item *
To copy the /etc/hosts file from all nodes in the cluster
to the /tmp/hosts.dir directory on the local host, enter:
B<xdcp> I<all -P /etc/hosts /tmp/hosts.dir>
A suffix specifying the name of the target is appended to each
file name. The contents of the /tmp/hosts.dir directory are similar to:
hosts._node1 hosts._node4 hosts._node7
hosts._node2 hosts._node5 hosts._node8
hosts._node3 hosts._node6
=item *
To copy the directory /var/log/testlogdir from all targets in
NodeGroup1 with a fanout of 12, and save each directory on the local
host as /var/log._target, enter:
B<xdcp> I<NodeGroup1 -f 12 -RP /var/log/testlogdir /var/log>
=item *
To copy /localnode/smallfile and /tmp/bigfile to /tmp on node1
using rsync and passing the -t flag to rsync, enter:
B<xdcp> I<node1 -r /usr/bin/rsync -o "-t" /localnode/smallfile /tmp/bigfile /tmp>
=item *
To copy the /etc/hosts file from the local host to all the nodes
in the cluster, enter:
B<xdcp> I<all /etc/hosts /etc/hosts>
=item *
To copy all the files in /tmp/testdir from the local host to all the nodes
in the cluster, enter:
B<xdcp> I<all /tmp/testdir/* /tmp/testdir>
=item *
To copy all the files in /tmp/testdir and its subdirectories
from the local host to node1 in the cluster, enter:
B<xdcp> I<node1 -R /tmp/testdir /tmp/testdir>
=item *
To copy the /etc/hosts file from node1 and node2 to the
/tmp/hosts.dir directory on the local host, enter:
B<xdcp> I<node1,node2 -P /etc/hosts /tmp/hosts.dir>
=item *
To rsync the /etc/hosts file to your compute nodes:
Create an rsync file /tmp/myrsync, with this line:
/etc/hosts -> /etc/hosts
or
/etc/hosts -> /etc/ (last / is required)
Run:
B<xdcp> I<compute -F /tmp/myrsync>
=item *
To rsync all the files in /home/mikev to the compute nodes:
Create an rsync file /tmp/myrsync, with this line:
/home/mikev/* -> /home/mikev/ (last / is required)
Run:
B<xdcp> I<compute -F /tmp/myrsync>
=item *
To rsync to the compute nodes through service nodes: the command will first
rsync the files to the /var/xcat/syncfiles directory on the service nodes and then rsync the files from that directory to the compute nodes. The default /var/xcat/syncfiles directory on the service nodes can be changed by setting the site table SNsyncfiledir attribute to another directory.
Create an rsync file /tmp/myrsync, with this line:
/etc/hosts /etc/passwd -> /etc
or
/etc/hosts /etc/passwd -> /etc/
Run:
B<xdcp> I<compute -F /tmp/myrsync> to update the Compute Nodes
=item *
To rsync to the service nodes in preparation for rsyncing to the compute nodes
during an install from the service node:
Create an rsync file /tmp/myrsync, with this line:
/etc/hosts /etc/passwd -> /etc
Run:
B<xdcp> I<compute -s -F /tmp/myrsync> to sync the service node for compute
=item *
To rsync /etc/file1 and /etc/file2 to your compute nodes and rename them to filex and filey:
Create an rsync file /tmp/myrsync, with these lines:
/etc/file1 -> /etc/filex
/etc/file2 -> /etc/filey
Run:
B<xdcp> I<compute -F /tmp/myrsync> to update the Compute Nodes
=item *
To rsync files in the Linux image at /install/netboot/fedora9/x86_64/compute/rootimg on the MN:
Create an rsync file /tmp/myrsync, with this line:
/etc/hosts /etc/passwd -> /etc
Run:
B<xdcp> I<-i /install/netboot/fedora9/x86_64/compute/rootimg -F /tmp/myrsync>
=back
=head1 B<Files>
=head1 B<SEE ALSO>
L<xdsh(1)|xdsh.1>, L<noderange(3)|noderange.3>