=head1 B<Name>

B<xdcp> - Concurrently copies files to or from multiple nodes. In addition, provides an option to use rsync to update the files on the nodes, or to an installation image on the local node.

=head1 B<Synopsis>

B<xdcp> I<noderange> [[B<-f> I<fanout>] [B<-L>] [B<-l> I<userID>] [B<-o> I<node_options>] [B<-r> I<node_remote_copy>] [B<-R>] [B<-t> I<timeout>] [B<-T>] [B<-v>] [B<-X> I<env_list>] I<sourcefile....> I<targetpath>

B<xdcp> I<noderange> [B<-F> I<rsync_input_file>]

B<xdcp> I<computenoderange> [B<-s> B<-F> I<rsync_input_file>]

B<xdcp> [B<-i> I<path_to_install_image>] [B<-F> I<rsync_input_file>]

B<xdcp> [B<-h> | B<-V> | B<-q>]

=head1 B<Description>

The B<xdcp> command concurrently copies files to or from remote target nodes. The command issues a remote copy command for each node or device specified. When files are pulled from a target, they are placed into the I<target_path> with the name of the remote node or device appended to the copied I<source_file> name.

The /usr/bin/rcp command is the model for syntax and security. If using hierarchy, then xdcp runs on the service node that is servicing the compute node. The file will first be copied to the path defined by the site table SNsyncfiledir attribute, or to the default path /var/xcat/syncfiles on the service node if the attribute is not defined. The B<-P> flag will not automatically copy the files from the compute node to the management node hierarchically; this is a two-step process, see the B<-P> flag.

B<TARGET> B<SPECIFICATION>: Target specification is identical for the xdcp and xdsh commands. See the xdsh man page for details on specifying targets for the xdcp command.

B<REMOTE> B<USER>: A I<user_ID> can be specified for the remote copy command. Remote user specification is identical for the xdcp and xdsh commands. See the xdsh command for more information.

B<REMOTE> B<COMMAND> B<COPY>: The B<xdcp> command uses a configurable remote copy command to execute remote copies on remote targets. Support is explicitly provided for the Remote Shell rcp command, the OpenSSH scp command, and the /usr/bin/rsync command.

For node targets, the remote copy command is determined by the following order of precedence:

1. The B<-r> flag.

2. The B</usr/bin/scp> command.
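As an illustration of the precedence rules above, the B<-r> flag overrides any default remote copy command. This is a sketch only; the node names node1,node2 and the copied file are hypothetical, and the command is executed only if xdcp is present on the local machine:

```shell
# Sketch: force /usr/bin/scp as the remote copy command via -r,
# which has the highest precedence. node1,node2 and /etc/ntp.conf
# are hypothetical placeholders.
cmd="xdcp node1,node2 -r /usr/bin/scp /etc/ntp.conf /etc"
echo "$cmd"
# Only attempt the copy where xdcp is actually installed.
command -v xdcp >/dev/null 2>&1 && $cmd || true
```
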
B<COMMAND> B<EXECUTIONS>: The maximum number of concurrent remote copy command processes (the fanout) can be specified with the B<-f> flag or the B<DSH_FANOUT> environment variable. The fanout is only restricted by the number of remote shell commands that can be run in parallel. You can experiment with the B<DSH_FANOUT> value on your management server to see if higher values are appropriate.

A timeout value for remote copy command execution can be specified with the B<-t> flag or the B<DSH_TIMEOUT> environment variable. If any remote target does not respond within the timeout value, the xdcp command displays an error message and exits.

The B<-T> flag provides diagnostic trace information for xdcp command execution. Default settings and the actual remote copy commands that are executed to the remote targets are displayed.

The xdcp command can be executed silently using the B<-Q> flag; no target standard output or standard error is displayed.

=head1 B<Options>

=over 5

=item I<sourcefile....>

Specifies the complete path for the file to be copied to or from the target. Multiple files can be specified. When used with the B<-R> flag, only a single directory can be specified. When used with the B<-P> flag, only a single file can be specified.

=item I<targetpath>

Specifies the complete path to copy one or more I<sourcefile> files to on the target. If the B<-P> flag is specified, the I<targetpath> is the local host location for the copied files. The remote file directory structure is recreated under I<targetpath> and the remote target name is appended to the copied I<sourcefile> name in the I<targetpath> directory. Note: the I<targetpath> directory must exist.

=item B<-f>|B<--fanout> I<fanout_value>

Specifies a fanout value for the maximum number of concurrently executing remote shell processes. Serial execution can be specified by indicating a fanout value of B<1>. If B<-f> is not specified, a default fanout value of B<64> is used.

=item B<-F>|B<--File> I<rsync_input_file>

Specifies the full path to the file that will be used to build the rsync command.
The format of the input file is as follows; each line contains:

 <path to source file> -> <path to destination file>

or

 <path to source file> -> <path to destination directory>

For example:

 /etc/password /etc/hosts -> /etc

 /tmp/file2 -> /tmp/file2

 /tmp/filex -> /tmp/source/filey

=item B<-h>|B<--help>

Displays usage information.

=item B<-i>|B<--rootimg> I<path_to_install_image>

Specifies the full path to the install image on the local Linux node.

=item B<-o>|B<--node-options> I<node_options>

Specifies options to pass to the remote shell command for node targets. The options must be specified within double quotation marks ("") to distinguish them from B<xdcp> options.

=item B<-p>|B<--preserve>

Preserves the source file characteristics as implemented by the configured remote copy command.

=item B<-P>|B<--pull>

Pulls (copies) the files from the targets and places them in the I<targetpath> directory on the local host. The I<targetpath> must be a directory. Files pulled from remote machines have ._target appended to the file name to distinguish between them. When the B<-P> flag is used with the B<-R> flag, ._target is appended to the directory name. Only one file per invocation of the xdcp pull command can be pulled from the specified targets. Hierarchy is not automatically supported yet; you must first pull the file to the service node and then pull the file to the management node.

=item B<-q>|B<--show-config>

Displays the current environment settings for all DSH Utilities commands. This includes the values of all environment variables and settings for all currently installed and valid contexts. Each setting is prefixed with I<context>: to identify the source context of the setting.

=item B<-r>|B<--node-rcp> I<node_remote_copy>

Specifies the full path of the remote copy command used for remote command execution on node targets.

=item B<-R>|B<--recursive>

Recursively copies files from a local directory to the remote targets, or when specified with the B<-P> flag, recursively pulls (copies) files from a remote directory to the local host. A single source directory can be specified using the I<sourcefile> parameter.
=item B<-s>

Will only sync the files listed in the synclist (B<-F>) to the service nodes for the input compute node list. The files will be placed in the directory defined by the site table SNsyncfiledir attribute, or in the default /var/xcat/syncfiles directory.

=item B<-t>|B<--timeout> I<timeout>

Specifies the time, in seconds, to wait for output from any currently executing remote targets. If no output is available from any target in the specified I<timeout>, B<xdcp> displays an error and terminates execution for the remote targets that failed to respond. If I<timeout> is not specified, B<xdcp> waits indefinitely to continue processing output from all remote targets. When specified with the B<-i> flag, the user is prompted for an additional timeout interval to wait for output.

=item B<-T>|B<--trace>

Enables trace mode. The B<xdcp> command prints diagnostic messages to standard output during execution to each target.

=item B<-v>|B<--verify>

Verifies each target before executing any remote commands on the target. If a target is not responding, execution of remote commands for the target is canceled. When specified with the B<-i> flag, the user is prompted to retry the verification request.

=item B<-V>|B<--version>

Displays the B<xdcp> command version information.

=back

=head1 B<Environment> B<Variables>

=over 4

=item B<DSH_CONTEXT>

Specifies the default context to use when resolving targets. This variable is overridden by the B<-C> flag.

=item B<DSH_ENVIRONMENT>

Specifies a file that contains environment variable definitions to export to the target before executing the remote command. This variable is overridden by the B<-E> flag.

=item B<DSH_FANOUT>

Specifies the fanout value. This variable is overridden by the B<-f> flag.

=item B<DSH_NODE_OPTS>

Specifies the options to use for the remote shell command with node targets only. This variable is overridden by the B<-o> flag.

=item B<DSH_NODE_RCP>

Specifies the full path of the remote copy command to use to copy local scripts and local environment configuration files to node targets.
=item B<DSH_NODE_RSH>

Specifies the full path of the remote shell to use for remote command execution on node targets. This variable is overridden by the B<-r> flag.

=item B<DSH_NODEGROUP_PATH>

Specifies a colon-separated list of directories that contain node group files for the B<DSH> context. When the B<-a> flag is specified in the B<DSH> context, a list of unique node names is collected from all node group files in the path.

=item B<DSH_PATH>

Sets the command path to use on the targets. If B<DSH_PATH> is not set, the default path defined in the profile of the remote I<user_ID> is used.

=item B<DSH_SYNTAX>

Specifies the shell syntax to use on remote targets; B<ksh> or B<csh>. If not specified, the B<ksh> syntax is assumed. This variable is overridden by the B<-S> flag.

=item B<DSH_TIMEOUT>

Specifies the time, in seconds, to wait for output from each remote target. This variable is overridden by the B<-t> flag.

=back

=head1 B<Exit> B<Status>

Exit values for each remote copy command execution are displayed in messages from the xdcp command, if the remote copy command exit value is non-zero. A non-zero return code from a remote copy command indicates that an error was encountered during the remote copy. If a remote copy command encounters an error, execution of the remote copy on that target is bypassed.

The xdcp command exit code is 0 if the xdcp command executed without errors and all remote copy commands finished with exit codes of 0. If internal xdcp errors occur or the remote copy commands do not complete successfully, the xdcp command exit value is greater than 0.

=head1 B<Security>

The B<xdcp> command has no security configuration requirements. All remote command security requirements - configuration, authentication, and authorization - are imposed by the underlying remote command configured for B<xdcp>. The command assumes that authentication and authorization are configured between the local host and the remote targets.
Interactive password prompting is not supported; an error is displayed and execution is bypassed for a remote target if password prompting occurs, or if either authorization or authentication to the remote target fails. Security configurations as they pertain to the remote environment and remote shell command are user defined.

=head1 B<Examples>

=over 3

=item *

To copy the /etc/hosts file from all nodes in the cluster to the /tmp/hosts.dir directory on the local host, enter:

 xdcp all -P /etc/hosts /tmp/hosts.dir

A suffix specifying the name of the target is appended to each file name. The contents of the /tmp/hosts.dir directory are similar to:

 hosts._node1   hosts._node4   hosts._node7
 hosts._node2   hosts._node5   hosts._node8
 hosts._node3   hosts._node6

=item *

To copy the directory /var/log/testlogdir from all targets in NodeGroup1 with a fanout of 12, and save each directory on the local host as /var/log._target, enter:

 xdcp NodeGroup1 -f 12 -RP /var/log/testlogdir /var/log

=item *

To copy /localnode/smallfile and /tmp/bigfile to /tmp on node1 using rsync (ssh) and pass the -t flag to rsync, enter:

 xdcp node1 -r /usr/bin/rsync -o "-t" /localnode/smallfile /tmp/bigfile /tmp

=item *

To copy the /etc/hosts file from the local host to all the nodes in the cluster, enter:

 xdcp all /etc/hosts /etc/hosts

=item *

To copy all the files in /tmp/testdir from the local host to all the nodes in the cluster, enter:

 xdcp all /tmp/testdir/* /tmp/testdir

=item *

To copy all the files in /tmp/testdir and its subdirectories from the local host to node1 in the cluster, enter:

 xdcp node1 -R /tmp/testdir /tmp/testdir

=item *

To copy the /etc/hosts file from node1 and node2 to the /tmp/hosts.dir directory on the local host, enter:

 xdcp node1,node2 -P /etc/hosts /tmp/hosts.dir

=item *

To rsync the /etc/hosts file to your compute nodes:

Create a rsync file /tmp/myrsync, with this line:

 /etc/hosts -> /etc/hosts

Run:

 xdcp compute -F /tmp/myrsync

=item *

To rsync to the compute nodes, using service nodes, the command will first rsync the files to the /var/xcat/syncfiles directory on the service nodes and then rsync the files from that directory to the compute nodes. The /var/xcat/syncfiles default directory on the service nodes can be changed by putting a directory value in the site table SNsyncfiledir attribute.
Create a rsync file /tmp/myrsync, with this line:

 /etc/hosts /etc/passwd -> /etc

Run:

 xdcp compute -F /tmp/myrsync

to update the compute nodes.

=item *

To rsync to the service nodes in preparation for rsyncing the compute nodes during an install from the service node:

Create a rsync file /tmp/myrsync, with this line:

 /etc/hosts /etc/passwd -> /etc

Run:

 xdcp compute -s -F /tmp/myrsync

to sync the service node for compute.

=item *

To rsync /etc/file1 and /etc/file2 to your compute nodes and rename them to filex and filey:

Create a rsync file /tmp/myrsync, with these lines:

 /etc/file1 -> /etc/filex

 /etc/file2 -> /etc/filey

Run:

 xdcp compute -F /tmp/myrsync

to update the compute nodes.

=item *

To rsync files in the Linux image at /install/netboot/fedora9/x86_64/compute/rootimg on the MN:

Create a rsync file /tmp/myrsync, with this line:

 /etc/hosts /etc/passwd -> /etc

Run:

 xdcp -i /install/netboot/fedora9/x86_64/compute/rootimg -F /tmp/myrsync

=back

=head1 B<Files>

=head1 B<SEE> B<ALSO>

L<xdsh(1)|xdsh.1>, L<noderange(3)|noderange.3>
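The synclist workflow shown in the examples can be sketched end to end. This is a minimal illustration, assuming the /tmp/myrsync path used above; the format check is an extra safeguard added here, not part of xdcp itself, and the xdcp invocation only runs where the command is installed:

```shell
# Build the synclist used in the rsync examples above.
cat > /tmp/myrsync <<'EOF'
/etc/hosts /etc/passwd -> /etc
/etc/file1 -> /etc/filex
EOF

# Safeguard (not part of xdcp): count non-empty lines that lack the
# "->" separator required by the synclist format.
bad=$(grep -c -v -e '->' -e '^$' /tmp/myrsync || true)
echo "malformed lines: $bad"

# Push the files to the compute nodes where xdcp is installed.
command -v xdcp >/dev/null 2>&1 && xdcp compute -F /tmp/myrsync || true
```
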