
Fixed some of the WARNING and ERROR messages that come out when doing a
'make clean' then 'make html' for the documentation.  There are a lot
of formatting errors present.

Also fixed some of the broken links.  When pages are added or changed,
the person working on them should check the build errors
Victor Hu 2015-09-30 12:25:40 -04:00
parent 8eb347e015
commit f2b7757628
26 changed files with 432 additions and 402 deletions

View File

@ -126,11 +126,13 @@ For support clone, add 'otherpkglist' and 'otherpkgdir' attributes to the image
Capture Image from Golden Client
````````````````````````````````
On the Management node, use the xCAT command 'imgcapture' to capture an image from the golden-client.
::
imgcapture <golden-client> -t sysclone -o <mycomputeimage>
When imgcapture is running, it pulls the image from the golden-client and creates an image file system and a corresponding osimage definition on the xCAT management node. You can use the command below to check the osimage attributes.
::
lsdef -t osimage <mycomputeimage>
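The captured osimage attributes might then look similar to the following (illustrative output only; the exact attributes, architecture, and paths shown here are assumptions and vary by release): ::
Object name: mycomputeimage
    imagetype=linux
    osarch=ppc64le
    osname=Linux
    provmethod=sysclone
    rootimgdir=/install/sysclone/images/mycomputeimage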
@ -219,4 +221,4 @@ If you install systemimager RPMs on CentOS 6.5 node by yum, you maybe hit failur
Kernel panic at times when install target node with rhels7.0 in Power 7 server
``````````````````````````````````````````````````````````````````````````````
When you clone a rhels7.0 image to a target node which is a Power 7 server LPAR, you may occasionally hit a kernel panic after the boot loader grub2 downloads the kernel and initrd. This is a known issue that has not been resolved yet. For now, we recommend that you try again.

View File

@ -36,7 +36,7 @@ After submitting a pull request, you may get comments from reviewer that somethi
$ git push origin <mybranch> -f
Resolving Conflict in the Pull Request
--------------------------------------
During the review of your pull request, someone may change certain code which conflicts with your change so that your pull request can NOT be merged automatically. You can use the following steps to resolve the conflict.
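A minimal sketch of one common way to resolve the conflict by rebasing (assuming the main repository is configured as a remote named ``upstream`` and your work is on ``<mybranch>``; adjust the names to your setup): ::
git fetch upstream
git rebase upstream/master
# edit the files git reports as conflicting, then mark them resolved
git add <conflicting-files>
git rebase --continue
git push origin <mybranch> -f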

View File

@ -33,7 +33,7 @@ Design an xCAT Cluster for High Availability
Everyone wants their cluster to be as reliable and available as possible, but there are multiple ways to achieve that end goal. Availability and complexity are inversely proportional. You should choose an approach that balances these two in a way that best fits your environment. Here are a few choices, in order of least complex to most complex.
**Service Node Pools** With No HA Software
``````````````````````````````````````````
**Service node pools** is an xCAT approach in which more than one service node (SN) is in the broadcast domain for a set of nodes. When each node netboots, it chooses an available SN based on which one responds to its DHCP request first. When services are set up on the node (e.g. DNS), xCAT configures the services to use that SN and one other SN in the pool. That way, if one SN goes down, the node can keep running, and the next time it netboots it will automatically choose another SN.
This approach is most often used with stateless nodes because that environment is more dynamic. It can possibly be used with stateful nodes (with a little more effort), but that type of node doesn't netboot nearly as often, so a more manual operation (``snmove``) is needed in that case to move a node to a different SN.
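For example, a sketch of such a move with ``snmove`` (the node name ``cn1`` and destination service node ``sn2`` are hypothetical; check the ``snmove`` man page for the exact options in your xCAT release): ::
snmove cn1 -d sn2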

View File

@ -60,59 +60,58 @@ Use Cases
---------
* Case 1:
There is a ppc64le node named "cn1", the mac of the installation NIC is "ca:68:d3:ae:db:03", the ip assigned is "10.0.0.100", the network boot method is "grub2", and it should be placed into the group "all". Use the following command ::
mkdef -t node -o cn1 arch=ppc64 mac="ca:68:d3:ae:db:03" ip="10.0.0.100" netboot="grub2" groups="all"
* Case 2:
List all the node objects ::
nodels
This can also be done with ::
lsdef -t node
* Case 3:
List the mac of object "cn1" ::
lsdef -t node -o cn1 -i mac
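The output should look similar to the following (illustrative): ::
Object name: cn1
    mac=ca:68:d3:ae:db:03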
* Case 4:
There is a node definition "cn1", modify its network boot method to "yaboot" ::
chdef -t node -o cn1 netboot=yaboot
* Case 5:
There is a node definition "cn1". Create a node definition "cn2" with the same attributes as "cn1", except for the mac address (ca:68:d3:ae:db:04) and ip address (10.0.0.101).
*step 1*: write the definition of "cn1" to a stanza file named "cn.stanza" ::
lsdef -z cn1 > /tmp/cn.stanza
The content of "/tmp/cn.stanza" will look like ::
# <xCAT data object stanza file>
cn1:
objtype=node
groups=all
ip=10.0.0.100
mac=ca:68:d3:ae:db:03
netboot=grub2
*step 2*: modify the "/tmp/cn.stanza" according to the "cn2" attributes ::
# <xCAT data object stanza file>
cn2:
objtype=node
groups=all
ip=10.0.0.101
mac=ca:68:d3:ae:db:04
netboot=grub2
*step 3*: create "cn2" definition with "cn.stanza" ::
cat /tmp/cn.stanza | mkdef -z
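To verify the new definition, you could list the attributes that were changed (an illustrative check): ::
lsdef -t node -o cn2 -i mac,ip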

View File

@ -1,5 +0,0 @@
Add Additional Software
==========================

View File

@ -1,4 +1,5 @@
.. BEGIN_Overview
By default, xCAT will install the operating system on the first disk in the node, using a default partition layout. However, you may choose to customize the disk partitioning during the install process and define a specific disk layout. You can do this in one of two ways: a '**partition definition file**' or a '**partition definition script**'.
**Notes**
@ -6,12 +7,17 @@ By default, xCAT will install the operating system on the first disk and with de
- 'Partition definition file' way can be used for RedHat, SLES and Ubuntu.
- 'Partition definition script' way was tested only for RedHat and Ubuntu; use this feature on SLES at your own risk.
- Because disk configuration for Ubuntu is different from RedHat, there may be some sections specific to Ubuntu.
.. END_Overview
.. BEGIN_partition_definition_file_Overview
You could create a customized osimage partition file, say /install/custom/my-partitions, that contains the disk partitioning definition, then associate the partition file with the osimage. The nodeset command will insert the contents of this file directly into the generated autoinst configuration file that will be used by the OS installer.
.. END_partition_definition_file_Overview
.. BEGIN_partition_definition_file_content
The partition file must follow the partitioning syntax of the installer (e.g. kickstart for RedHat, AutoYaST for SLES, Preseed for Ubuntu). You could refer to the `Kickstart documentation <http://fedoraproject.org/wiki/Anaconda/Kickstart#part_or_partition>`_ or `Autoyast documentation <https://doc.opensuse.org/projects/autoyast/configuration.html#CreateProfile.Partitioning>`_ or `Preseed documentation <https://www.debian.org/releases/stable/i386/apbs04.html.en#preseed-partman>`_ to write your own partition layout. Meanwhile, RedHat and SuSE provide some tools that can help generate kickstart/autoyast templates, in which you could refer to the partition section for the partition layout information:
@ -29,10 +35,13 @@ The partition file must follow the partitioning syntax of the installer(e.g. kic
* **[Ubuntu]**
- For detailed information see the files partman-auto-recipe.txt and partman-auto-raid-recipe.txt included in the debian-installer package. Both files are also available from the debian-installer source repository. Note that the supported functionality may change between releases.
.. END_partition_definition_file_content
.. BEGIN_partition_definition_file_example_RedHat_Standard_Partitions_for_IBM_Power_machines
Here is a partition definition file example for RedHat standard partitions on IBM Power machines: ::
# Uncomment this PReP line for IBM Power servers
part None --fstype "PPC PReP Boot" --size 8 --ondisk sda
@ -45,7 +54,9 @@ Here is partition definition file example for RedHat standard partition in IBM P
.. END_partition_definition_file_example_RedHat_Standard_Partitions_for_IBM_Power_machines
.. BEGIN_partition_definition_file_example_RedHat_LVM_for_IBM_Power_machines
Here is a partition definition file example for RedHat LVM partitions on IBM Power machines: ::
# Uncomment this PReP line for IBM Power servers
part None --fstype "PPC PReP Boot" --ondisk /dev/sda --size 8
@ -60,45 +71,51 @@ Here is partition definition file example for RedHat LVM partition in IBM Power
.. END_partition_definition_file_example_RedHat_LVM_for_IBM_Power_machines
.. BEGIN_partition_definition_file_example_RedHat_RAID1_for_IBM_Power_machines
For a partition definition file example for RedHat RAID1, please refer to :doc:`Configure RAID before Deploy OS </guides/admin-guides/manage_clusters/ppc64le/diskful/customize_image/raid_cfg>`
.. END_partition_definition_file_example_RedHat_RAID1_for_IBM_Power_machines
.. BEGIN_partition_definition_file_example_SLES_Standard_Partitions_for_X86_64
Here is a partition definition file example for SLES standard partitions on x86_64 machines:
.. code-block:: xml
<drive>
<device>/dev/sda</device>
<initialize config:type="boolean">true</initialize>
<use>all</use>
<partitions config:type="list">
<partition>
<create config:type="boolean">true</create>
<filesystem config:type="symbol">swap</filesystem>
<format config:type="boolean">true</format>
<mount>swap</mount>
<mountby config:type="symbol">path</mountby>
<partition_nr config:type="integer">1</partition_nr>
<partition_type>primary</partition_type>
<size>32G</size>
</partition>
<partition>
<create config:type="boolean">true</create>
<filesystem config:type="symbol">ext3</filesystem>
<format config:type="boolean">true</format>
<mount>/</mount>
<mountby config:type="symbol">path</mountby>
<partition_nr config:type="integer">2</partition_nr>
<partition_type>primary</partition_type>
<size>64G</size>
</partition>
</partitions>
</drive>
.. END_partition_definition_file_example_SLES_Standard_Partitions_for_X86_64
.. BEGIN_partition_definition_file_example_SLES_LVM_for_ppc64
The following is an example of a partition definition file for a SLES LVM partition on Power servers: ::
<drive>
<device>/dev/sda</device>
<initialize config:type="boolean">true</initialize>
@ -211,67 +228,73 @@ Here is partition definition file example for SLES LVM partition in P server
.. END_partition_definition_file_example_SLES_LVM_for_ppc64
.. BEGIN_partition_definition_file_example_SLES_Standard_partition_for_ppc64
Here is a partition definition file example for SLES standard partitions on ppc64 machines:
.. code-block:: xml
<drive>
<device>/dev/sda</device>
<initialize config:type="boolean">true</initialize>
<partitions config:type="list">
<partition>
<create config:type="boolean">true</create>
<crypt_fs config:type="boolean">false</crypt_fs>
<filesystem config:type="symbol">ext3</filesystem>
<format config:type="boolean">false</format>
<loop_fs config:type="boolean">false</loop_fs>
<mountby config:type="symbol">device</mountby>
<partition_id config:type="integer">65</partition_id>
<partition_nr config:type="integer">1</partition_nr>
<resize config:type="boolean">false</resize>
<size>auto</size>
</partition>
<partition>
<create config:type="boolean">true</create>
<crypt_fs config:type="boolean">false</crypt_fs>
<filesystem config:type="symbol">swap</filesystem>
<format config:type="boolean">true</format>
<fstopt>defaults</fstopt>
<loop_fs config:type="boolean">false</loop_fs>
<mount>swap</mount>
<mountby config:type="symbol">id</mountby>
<partition_id config:type="integer">130</partition_id>
<partition_nr config:type="integer">2</partition_nr>
<resize config:type="boolean">false</resize>
<size>auto</size>
</partition>
<partition>
<create config:type="boolean">true</create>
<crypt_fs config:type="boolean">false</crypt_fs>
<filesystem config:type="symbol">ext3</filesystem>
<format config:type="boolean">true</format>
<fstopt>acl,user_xattr</fstopt>
<loop_fs config:type="boolean">false</loop_fs>
<mount>/</mount>
<mountby config:type="symbol">id</mountby>
<partition_id config:type="integer">131</partition_id>
<partition_nr config:type="integer">3</partition_nr>
<resize config:type="boolean">false</resize>
<size>max</size>
</partition>
</partitions>
<pesize></pesize>
<type config:type="symbol">CT_DISK</type>
<use>all</use>
</drive>
.. END_partition_definition_file_example_SLES_Standard_partition_for_ppc64
.. BEGIN_partition_definition_file_example_SLES_RAID1
For a partition definition file example for SLES RAID1, please refer to `Configure RAID before Deploy OS <http://xcat-docs.readthedocs.org/en/latest/guides/admin-guides/manage_clusters/ppc64le/diskful/customize_image/raid_cfg.html>`_
.. END_partition_definition_file_example_SLES_RAID1
.. BEGIN_partition_definition_file_example_Ubuntu_Standard_partition_for_PPC64le
Here is a partition definition file example for Ubuntu standard partitions on ppc64le machines: ::
ubuntu-boot ::
8 1 1 prep
$primary{ } $bootable{ } method{ prep }
@ -286,8 +309,9 @@ Here is partition definition file example for Ubuntu standard partition in ppc64
.. END_partition_definition_file_example_Ubuntu_Standard_partition_for_PPC64le
.. BEGIN_partition_definition_file_example_Ubuntu_Standard_partition_for_x86_64
Here is a partition definition file example for Ubuntu standard partitions on x86_64 machines: ::
256 256 512 vfat
$primary{ }
method{ format }
@ -326,8 +350,9 @@ Here is partition definition file example for Ubuntu standard partition in x86_6
.. END_partition_definition_file_example_Ubuntu_Standard_partition_for_x86_64
.. BEGIN_partition_definition_file_Associate_partition_file_with_osimage_common
Run the following commands to associate the partition file with the osimage: ::
chdef -t osimage <osimagename> partitionfile=/install/custom/my-partitions
nodeset <nodename> osimage=<osimage>
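You could then confirm the association took effect (an illustrative check): ::
lsdef -t osimage <osimagename> -i partitionfile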
@ -339,56 +364,62 @@ Run below commands to associate the partition with the osimage
.. BEGIN_Partition_Definition_Script_overview
Create a shell script that will be run on the node during the install process to dynamically create the disk partitioning definition. This script will be run during the OS installer %pre script execution on RedHat, or during the preseed/early_command execution on Ubuntu, and must write the correct partitioning definition into the file /tmp/partitionfile on the node
.. END_Partition_Definition_Script_overview
.. BEGIN_Partition_Definition_Script_Create_partition_script_content
The purpose of the partition script is to create the /tmp/partitionfile that will be inserted into the kickstart/autoyast/preseed template. The script could include complex logic, like selecting which disk to install to, and even configuring RAID, etc.
**Note**: the partition script feature is not thoroughly tested on SLES; there might be problems. Use this feature on SLES at your own risk.
.. END_Partition_Definition_Script_Create_partition_script_content
.. BEGIN_Partition_Definition_Script_Create_partition_script_example_redhat_sles
Here is an example of the partition script on RedHat and SLES; the partitioning script is ``/install/custom/my-partitions.sh``: ::
instdisk="/dev/sda"
modprobe ext4 >& /dev/null
modprobe ext4dev >& /dev/null
if grep ext4dev /proc/filesystems > /dev/null; then
FSTYPE=ext3
elif grep ext4 /proc/filesystems > /dev/null; then
FSTYPE=ext4
else
FSTYPE=ext3
fi
BOOTFSTYPE=ext3
EFIFSTYPE=vfat
if uname -r|grep ^3.*el7 > /dev/null; then
FSTYPE=xfs
BOOTFSTYPE=xfs
EFIFSTYPE=efi
fi
if [ `uname -m` = "ppc64" ]; then
echo 'part None --fstype "PPC PReP Boot" --ondisk '$instdisk' --size 8' >> /tmp/partitionfile
fi
if [ -d /sys/firmware/efi ]; then
echo 'bootloader --driveorder='$instdisk >> /tmp/partitionfile
echo 'part /boot/efi --size 50 --ondisk '$instdisk' --fstype $EFIFSTYPE' >> /tmp/partitionfile
else
echo 'bootloader' >> /tmp/partitionfile
fi
echo "part /boot --size 512 --fstype $BOOTFSTYPE --ondisk $instdisk" >> /tmp/partitionfile
echo "part swap --recommended --ondisk $instdisk" >> /tmp/partitionfile
echo "part / --size 1 --grow --ondisk $instdisk --fstype $FSTYPE" >> /tmp/partitionfile
echo "part /boot --size 512 --fstype $BOOTFSTYPE --ondisk $instdisk" >> /tmp/partitionfile
echo "part swap --recommended --ondisk $instdisk" >> /tmp/partitionfile
echo "part / --size 1 --grow --ondisk $instdisk --fstype $FSTYPE" >> /tmp/partitionfile
.. END_Partition_Definition_Script_Create_partition_script_example_redhat_sles
.. BEGIN_Partition_Definition_Script_Create_partition_script_example_ubuntu
The following is an example of the partition script on Ubuntu; the partitioning script is ``/install/custom/my-partitions.sh``: ::
if [ -d /sys/firmware/efi ]; then
echo "ubuntu-efi ::" > /tmp/partitionfile
echo " 512 512 1024 fat16" >> /tmp/partitionfile
@ -410,35 +441,40 @@ The following is an example of the partition script on Ubuntu, the partitioning
.. END_Partition_Definition_Script_Create_partition_script_example_ubuntu
.. BEGIN_Partition_Definition_Script_Associate_partition_script_with_osimage_common
Run the following commands to associate the partition script with the osimage: ::
chdef -t osimage <osimagename> partitionfile='s:/install/custom/my-partitions.sh'
nodeset <nodename> osimage=<osimage>
- The "s:" preceding the filename tells nodeset that this is a script.
- For Redhat, when nodeset runs and generates the /install/autoinst file for a node, it will add the execution of the contents of this script to the %pre section of that file. The nodeset command will then replace the #XCAT_PARTITION_START#...#XCAT_PARTITION_END# directives from the osimage template file with "%include /tmp/partitionfile" to dynamically include the tmp definition file your script created.
- For Ubuntu, when nodeset runs and generates the /install/autoinst file for a node, it will replace the "#XCA_PARTMAN_RECIPE_SCRIPT#" directive and add the execution of the contents of this script to the /install/autoinst/<node>.pre, the /install/autoinst/<node>.pre script will be run in the preseed/early_command.
.. END_Partition_Definition_Script_Associate_partition_script_with_osimage_common
.. BEGIN_Partition_Disk_File_ubuntu_only
The disk file contains the names of the disks to partition, in traditional, non-devfs format and delimited with a space " ", for example: ::
/dev/sda /dev/sdb
If not specified, the default value will be used.
**Associate partition disk file with osimage** ::
chdef -t osimage <osimagename> -p partitionfile='d:/install/custom/partitiondisk'
nodeset <nodename> osimage=<osimage>
- The 'd:' preceding the filename tells nodeset that this is a partition disk file.
- For Ubuntu, when nodeset runs and generates the /install/autoinst file for a node, it will generate a script to write the content of the partition disk file to /tmp/boot_disk; the context to run this script will replace the #XCA_PARTMAN_DISK_SCRIPT# directive in /install/autoinst/<node>.pre.
.. END_Partition_Disk_File_ubuntu_only
.. BEGIN_Partition_Disk_Script_ubuntu_only
The disk script contains a script to generate a partitioning disk file named "/tmp/boot_disk". For example: ::
rm /tmp/devs-with-boot 2>/dev/null || true;
for d in $(list-devices partition); do
mkdir -p /tmp/mymount;
@ -460,20 +496,23 @@ The disk script contains a script to generate a partitioning disk file named "/t
If not specified, the default value will be used.
**Associate partition disk script with osimage** ::
chdef -t osimage <osimagename> -p partitionfile='s:d:/install/custom/partitiondiskscript'
nodeset <nodename> osimage=<osimage>
- The 's:' prefix tells nodeset that this is a script; the 's:d:' preceding the filename tells nodeset that this is a script to generate the partition disk file.
- For Ubuntu, when nodeset runs and generates the /install/autoinst file for a node, the context to run this script will replace the #XCA_PARTMAN_DISK_SCRIPT# directive in /install/autoinst/<node>.pre.
.. END_Partition_Disk_Script_ubuntu_only
.. BEGIN_Additional_preseed_configuration_file_ubuntu_only
To support other specific partition methods such as RAID or LVM in Ubuntu, some additional preseed configuration entries should be specified.
If using the file way, 'c:<the absolute path of the additional preseed config file>', the additional preseed config file contains the additional preseed entries in "d-i ..." syntax. When "nodeset" is run, the #XCA_PARTMAN_ADDITIONAL_CFG# directive in /install/autoinst/<node> will be replaced with the content of the config file. For example: ::
d-i partman-auto/method string raid
d-i partman-md/confirm boolean true
@ -481,9 +520,11 @@ If not specified, the default value will be used.
.. END_Additional_preseed_configuration_file_ubuntu_only
.. BEGIN_Additional_preseed_configuration_script_ubuntu_only
To support other specific partition methods such as RAID or LVM in Ubuntu, some additional preseed configuration entries should be specified.
If using the script way, 's:c:<the absolute path of the additional preseed config script>', the additional preseed config script is a script to set the preseed values with "debconf-set". When "nodeset" is run, the #XCA_PARTMAN_ADDITIONAL_CONFIG_SCRIPT# directive in /install/autoinst/<node>.pre will be replaced with the content of the script. For example: ::
debconf-set partman-auto/method string raid
debconf-set partman-md/confirm boolean true

View File

@ -13,68 +13,69 @@ Define configuration information for the Secondary Adapters in the nics table
There are 3 ways to complete this operation.
1. Using the ``mkdef`` and ``chdef`` commands ::
# mkdef cn1 groups=all nicips.eth1="11.1.89.7|12.1.89.7" nicnetworks.eth1="net11|net12" nictypes.eth1="Ethernet"
1 object definitions have been created or modified.
# chdef cn1 nicips.eth2="13.1.89.7|14.1.89.7" nicnetworks.eth2="net13|net14" nictypes.eth2="Ethernet"
1 object definitions have been created or modified.
2. Using an xCAT stanza file
- Prepare a stanza file ``<filename>.stanza`` with content similar to the following: ::
# <xCAT data object stanza file>
cn1:
objtype=node
arch=x86_64
groups=kvm,vm,all
nichostnamesuffixes.eth1=-eth1-1|-eth1-2
nichostnamesuffixes.eth2=-eth2-1|-eth2-2
nicips.eth1=11.1.89.7|12.1.89.7
nicips.eth2=13.1.89.7|14.1.89.7
nicnetworks.eth1=net11|net12
nicnetworks.eth2=net13|net14
nictypes.eth1=Ethernet
nictypes.eth2=Ethernet
- Using the ``mkdef -z`` option, define the stanza file to xCAT: ::
# cat <filename>.stanza | mkdef -z
3. Using ``tabedit`` to edit the ``nics`` database table directly
The ``tabedit`` command opens the specified xCAT database table in a vi-like editor, allows the user to edit any text, and writes the changes back to the database table.
*WARNING* Using the ``tabedit`` command is not the recommended method because it is tedious and error prone.
After changing the content of the ``nics`` table, here is the result from ``tabdump nics``: ::
# tabdump nics
#node,nicips,nichostnamesuffixes,nictypes,niccustomscripts,nicnetworks,nicaliases,comments,disable
"cn1","eth1!11.1.89.7|12.1.89.7,eth2!13.1.89.7|14.1.89.7","eth1!-eth1-1|-eth1-2,eth2!-eth2-1|-eth2-2,"eth1!Ethernet,eth2!Ethernet",,"eth1!net11|net12,eth2!net13|net14",,,
After you have defined the configuration information in any of the ways above, run the ``makehosts`` command to add the new configuration to the ``/etc/hosts`` file. ::
# makehosts cn1
# cat /etc/hosts
11.1.89.7 cn1-eth1-1 cn1-eth1-1.ppd.pok.ibm.com
12.1.89.7 cn1-eth1-2 cn1-eth1-2.ppd.pok.ibm.com
13.1.89.7 cn1-eth2-1 cn1-eth2-1.ppd.pok.ibm.com
14.1.89.7 cn1-eth2-2 cn1-eth2-2.ppd.pok.ibm.com
Add confignics into the node's postscripts list
-----------------------------------------------
Use the following command to add confignics into the node's postscripts list: ::
chdef cn1 -p postscripts=confignics
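You can verify the change with ``lsdef`` (an illustrative check): ::
lsdef cn1 -i postscripts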
By default, confignics does not configure the install nic. If needed, use the flag "-s" to allow the install nic to be configured: ::
chdef cn1 -p postscripts="confignics -s"
Option "-s" write the install nic's information into configuration file for persistance. All install nic's data defined in nics table will be written also.
@ -83,8 +84,12 @@ Option "-s" write the install nic's information into configuration file for pers
Add network object into the networks table
------------------------------------------
The ``nicnetworks`` attribute only defines the nic that uses the IP address.
Other information about the network should be defined in the ``networks`` table.
Use the ``tabedit`` command to add/modify the networks in the ``networks`` table: ::
tabdump networks
#netname,net,mask,mgtifname,gateway,dhcpserver,tftpserver,nameservers,ntpservers,logservers,dynamicrange,staticrange,staticrangeincrement,nodehostname,ddnsdomain,vlanid,domain,comments,disable
...
"net11", "11.1.89.0", "255.255.255.0", "eth1",,,,,,,,,,,,,,,
@ -93,9 +98,11 @@ The nicnetworks attribute only defined the network object name which used by the
"net14", "14.1.89.0", "255.255.255.0", "eth2",,,,,,,,,,,,,,,
Option -r to remove the undefined NICS
--------------------------------------
If the compute node's nics were configured by ``confignics`` and the nics configuration changed in the nics table, use ``confignics -r`` to remove the undefined nics.
For example, if on a compute node the ``eth0``, ``eth1``, and ``eth2`` nics were configured: ::
# ifconfig
eth0 Link encap:Ethernet HWaddr 00:14:5e:d9:6c:e6
...
@ -104,27 +111,17 @@ If the compute node's nics were configured by confignics, and the nics configura
eth2 Link encap:Ethernet HWaddr 00:14:5e:d9:6c:e8
...
Delete the ``eth2`` definition in the ``nics`` table using the ``chdef`` command.
Then run the following to remove the undefined ``eth2`` nic on the compute node: ::
# updatenode <noderange> -P "confignics -r"
The result should have ``eth2`` disabled: ::
# ifconfig
eth0 Link encap:Ethernet HWaddr 00:14:5e:d9:6c:e6
...
eth1 Link encap:Ethernet HWaddr 00:14:5e:d9:6c:e7
...
Deleting the ``installnic`` will result in strange problems, so ``confignics -r`` will not delete the nic set as the ``installnic``.

View File

@ -1,5 +1,3 @@
.. _create_img:
Select or Create an osimage Definition
======================================
@ -7,27 +5,27 @@ Before creating image by xCAT, distro media should be prepared ahead. That can b
xCAT uses the 'copycds' command to create images which will be available to install nodes. "copycds" will copy all contents of Distribution DVDs/ISOs or Service Pack DVDs/ISOs to a destination directory, and create several relevant osimage definitions by default.
If using an ISO, copy it to (or NFS mount it on) the management node, and then run: ::
copycds <path>/<specific-distro>.iso
If using a DVD, put it in the DVD drive of the management node and run: ::
copycds /dev/<dvd-drive-name>
To see the list of osimages: ::
lsdef -t osimage
To see the attributes of a particular osimage: ::
lsdef -t osimage <osimage-name>
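For example, the attributes of a compute osimage might look similar to the following (illustrative output; the exact attributes and paths vary by distro and xCAT release): ::
Object name: rhels7.2-ppc64le-install-compute
    imagetype=linux
    osarch=ppc64le
    osname=Linux
    osvers=rhels7.2
    pkgdir=/install/rhels7.2/ppc64le
    profile=compute
    provmethod=install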
Initially, some attributes of the osimage are assigned default values by xCAT. They all work correctly because the files or templates invoked by those attributes are shipped with xCAT by default. If you need to customize those attributes, refer to the next section :doc:`Customize osimage </guides/admin-guides/manage_clusters/ppc64le/diskful/customize_image/index>`
Below is an example of osimage definitions created by ``copycds``: ::
# lsdef -t osimage
rhels7.2-ppc64le-install-compute (osimage)
rhels7.2-ppc64le-install-service (osimage)
rhels7.2-ppc64le-netboot-compute (osimage)
@ -43,8 +41,8 @@ In these osimage definitions shown above
**[Tips 1]**
If this is the same distro version as what your management node used, create a .repo file in /etc/yum.repos.d with content similar to: ::
[local-<os>-<arch>]
name=xCAT local <os> <version>
baseurl=file:/install/<os>/<arch>
@ -61,8 +59,8 @@ Sometime you can create/modify a osimage definition easily based on the default
* modify <filename>.stanza depending on your requirement
* cat <filename>.stanza | mkdef -z
For example, if you need to change the osimage name to your favorite name, the statement below may be helpful: ::
lsdef -t osimage -z rhels6.2-x86_64-install-compute | sed 's/^[^ ]\+:/mycomputeimage:/' | mkdef -z
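You could then verify that the new definition exists (an illustrative check): ::
lsdef -t osimage mycomputeimage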

View File

@ -3,10 +3,9 @@
Initialize the Compute for Deployment
=====================================
xCAT uses the '**nodeset**' command to associate a specific image with a node, which will then be installed with this image. ::
nodeset <nodename> osimage=<osimage>
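For example, assuming a node ``cn1`` and one of the osimage definitions created by ``copycds``: ::
nodeset cn1 osimage=rhels7.2-ppc64le-install-compute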
There are more attributes of nodeset used for specific purposes or specific machines, for example:
@ -21,18 +20,19 @@ Start the OS Deployment
Starting the deployment involves two key operations: first, specify the boot device of the next boot to be network; then, reboot the node.
For **Power servers**, those two operations can be completed by one command ``rnetboot``: ::
rnetboot <node>
For **x86_64 servers**, those two operations need two independent commands.
#. Set the next boot device to be from the "network" ::
rsetboot <node> net
#. Reboot the xSeries server: ::
rpower <node> reset

View File

@ -16,41 +16,45 @@ No matter which approach chosen, there are two steps to make new drivers work. o
.. BEGIN_locate_driver_for_DUD
There are two approaches for xCAT to find the driver disk (pick one):
#. Specify the location of the driver disk in the osimage object (*This is ONLY supported in 2.8 and later*)
The value for the 'driverupdatesrc' attribute is a comma separated driver disk list. The tag 'dud' must be specified before the full path of 'driver update disk' to specify the type of the file: ::
chdef -t osimage <osimagename> driverupdatesrc=dud:<full path of driver disk>
#. Put the driver update disk in the directory ``<installroot>/driverdisk/<os>/<arch>`` (example: ``/install/driverdisk/sles11.1/x86_64``).
During the running of the ``genimage``, ``geninitrd``, or ``nodeset`` commands, xCAT will look for driver update disks in the directory ``<installroot>/driverdisk/<os>/<arch>``.
.. END_locate_driver_for_DUD
.. BEGIN_locate_driver_for_RPM
The Driver RPM packages must be specified in the osimage object.
Three attributes of the osimage object can be used to specify the driver RPM location and driver names. If you want to load new drivers in the initrd, the '**netdrivers**' attribute must be set, and one or both of the '**driverupdatesrc**' and '**osupdatename**' attributes must be set. If both 'driverupdatesrc' and 'osupdatename' are set, the drivers in 'driverupdatesrc' have higher priority.
- netdrivers - comma separated driver names that need to be injected into the initrd. The '.ko' suffix can be omitted.
The 'netdrivers' attribute must be set to specify the new driver list. If you want to load all the drivers from the driver rpms, use the keyword allupdate. Another keyword for the netdrivers attribute is updateonly, which means only the drivers located in the original initrd will be added to the newly built initrd from the driver rpms. This is useful to reduce the size of the newly built initrd when the distro is updated, since there are many more drivers in the new kernel rpm than in the original initrd. Examples: ::
chdef -t osimage <osimagename> netdrivers=megaraid_sas.ko,igb.ko
chdef -t osimage <osimagename> netdrivers=allupdate
chdef -t osimage <osimagename> netdrivers=updateonly,igb.ko,new.ko
- driverupdatesrc - comma separated driver rpm packages (full path should be specified)
A tag named 'rpm' can be specified before the full path of the rpm to specify the file type. The tag is optional since the default format is 'rpm' if no tag is specified. Example: ::
chdef -t osimage <osimagename> driverupdatesrc=rpm:<full path of driver disk1>,rpm:<full path of driver disk2>
- osupdatename - comma separated 'osdistroupdate' objects. Each 'osdistroupdate' object specifies a Linux distro update.
When geninitrd is run, ``kernel-*.rpm`` will be searched in the osdistroupdate.dirpath to get all the rpm packages and then those rpms will be searched for drivers. Example: ::
mkdef -t osdistroupdate update1 dirpath=/install/<os>/<arch>
chdef -t osimage <osimagename> osupdatename=update1
@ -59,14 +63,13 @@ If 'osupdatename' is specified, the kernel shipped with the 'osupdatename' will
.. BEGIN_inject_into_initrd__for_diskfull_for_DUD
- If specifying the driver disk location in the osimage, there are two ways to inject drivers:
#. Using the nodeset command only: ::
nodeset <noderange> osimage=<osimagename>
#. Using geninitrd with the nodeset command: ::
geninitrd <osimagename>
nodeset <noderange> osimage=<osimagename> --noupdateinitrd
@ -80,14 +83,16 @@ Running 'nodeset <nodenrage>' in anyway will load the driver disk
.. BEGIN__inject_into_initrd__for_diskfull_for_RPM
There are two ways to inject drivers:
#. Using nodeset command only: ::
nodeset <noderange> osimage=<osimagename> [--ignorekernelchk]
#. Using geninitrd with nodeset command: ::
geninitrd <osimagename> [--ignorekernelchk]
nodeset <noderange> osimage=<osimagename> --noupdateinitrd
**Note:** 'geninitrd' + 'nodeset --noupdateinitrd' is useful when you need to run nodeset frequently for diskful nodes. 'geninitrd' only needs to be run once to rebuild the initrd and 'nodeset --noupdateinitrd' will not touch the initrd and kernel in /tftpboot/xcat/osimage/<osimage name>/.
@ -95,10 +100,11 @@ The option '--ignorekernelchk' is used to skip the kernel version checking when
.. END_inject_into_initrd__for_diskfull_for_RPM
.. BEGIN_inject_into_initrd__for_diskless_for_DUD
- If specifying the driver disk location in the osimage
Run the following command: ::
genimage <osimagename>
- If putting the driver disk in <installroot>/driverdisk/<os>/<arch>:
@ -107,17 +113,20 @@ Running 'genimage' in anyway will load the driver disk
.. END_inject_into_initrd__for_diskless_for_DUD
.. BEGIN_inject_into_initrd__for_diskless_for_RPM
Run the following command: ::
genimage <osimagename> [--ignorekernelchk]
The option '--ignorekernelchk' is used to skip the kernel version checking when injecting drivers from osimage.driverupdatesrc. To use this flag, you should make sure the drivers in the driver rpms are usable for the target kernel.
.. END_inject_into_initrd__for_diskless_for_RPM
.. BEGIN_node
- If the drivers from the driver disk or driver rpm are not already part of the installed or booted system, it's necessary to add the rpm packages for the drivers to the .pkglist or .otherpkglist of the osimage object to install them in the system.
- If a driver rpm needs to be loaded, the osimage object must be used for the 'nodeset' and 'genimage' command, instead of the older style profile approach.
- Both a Driver disk and a Driver rpm can be loaded in one 'nodeset' or 'genimage' invocation.
.. END_node

View File

@ -95,7 +95,7 @@ Very often, the user wants to make a copy of an existing image on the same xCAT
imgimport myimage.tgz -p group1 -f compute2
Modify an image (optional)
--------------------------
Skip this section if you want to use the image as is.
@ -116,6 +116,7 @@ Skip this section if you want to use the image as is.
2, Run genimage: ::
genimage image_name
3, Run packimage: ::
packimage image_name
@ -181,7 +182,7 @@ In the above example, we have a directive of where the files came from and what
Note that even though source destination information is included, all files that are standard will be copied to the appropriate place that xCAT thinks they should go.
Exported files
~~~~~~~~~~~~~~
The following files will be exported, assuming x is the profile name:

View File

@ -6,7 +6,7 @@ Using Postscript
xCAT automatically runs a few postscripts and postbootscripts that are delivered with xCAT to set up the nodes. You can also add your own scripts to further customize the nodes. This explains the xCAT support to do this.
Types of scripts
~~~~~~~~~~~~~~~~
There are two types of scripts in the postscripts table (postscripts and postbootscripts). The types are based on when in the install process they will be executed. Run the following for more information:
@ -83,7 +83,7 @@ Recommended Postscript design
* Postscripts should be well documented. At the top of the script, the first few lines should describe the function and its inputs and outputs. You should have comments throughout the script. This is especially important if using regex.
PostScript/PostbootScript execution
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When your script is executed on the node, all the attributes in the site table are exported as variables for your scripts to use. You can add extra attributes for yourself. See the sample mypostscript file below.

View File

@ -1,5 +0,0 @@
Postscripts and Prescripts
==========================

View File

@ -8,11 +8,11 @@ This section describes how to use xCAT to deploy diskful nodes with RAID1 setup,
All the examples in this section are based on three configuration scenarios:
#. RHEL6 on a system p machine with two SCSI disks sda and sdb
#. RHEL6 on a system p machine with two SAS disks and multipath configuration.
#. SLES 11 SP1 on a system p machine with two SCSI disks sda and sdb
If you are not using the configuration scenarios listed above, you may need to modify some of the steps in this documentation to make it work in your environment.
@ -21,8 +21,8 @@ Deploy Diskful Nodes with RAID1 Setup on RedHat
xCAT provides two sample kickstart template files with the RAID1 settings, ``/opt/xcat/share/xcat/install/rh/service.raid1.rhel6.ppc64.tmpl`` is for the configuration scenario **1** listed above and ``/opt/xcat/share/xcat/install/rh/service.raid1.multipath.rhel6.ppc64.tmpl`` is for the configuration scenario **2** listed above. You can customize the template file and put it under ``/install/custom/install/<platform>/`` if the default one does not match your requirements.
Here is the RAID1 partitioning section in ``service.raid1.rhel6.ppc64.tmpl``: ::
#Full RAID 1 Sample
part None --fstype "PPC PReP Boot" --size 8 --ondisk sda --asprimary
part None --fstype "PPC PReP Boot" --size 8 --ondisk sdb --asprimary
@ -39,8 +39,8 @@ Here is the RAID1 partitioning section in service.raid1.rhel6.ppc64.tmpl:
part raid.22 --size 1 --fstype ext4 --grow --ondisk sdb
raid / --level 1 --device md2 raid.21 raid.22
Here is the RAID1 partitioning section in ``service.raid1.multipath.rhel6.ppc64.tmpl``: ::
#Full RAID 1 Sample
part None --fstype "PPC PReP Boot" --size 8 --ondisk mpatha --asprimary
part None --fstype "PPC PReP Boot" --size 8 --ondisk mpathb --asprimary
@ -61,9 +61,9 @@ The samples above created one PReP partition, one 200MB ``/boot`` partition and
After the diskful nodes are up and running, you can check the RAID1 settings with the following commands:
The mount command shows the ``/dev/mdx`` devices mounted to various file systems; the ``/dev/mdx`` devices indicate that RAID is being used on this node. ::
# mount
/dev/md2 on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
@ -72,9 +72,9 @@ Mount command shows the ``/dev/mdx`` devices are mounted to various file systems
/dev/md0 on /boot type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
The file ``/proc/mdstat`` contains the status of the RAID devices on the system; here is an example of ``/proc/mdstat`` in the non-multipath environment: ::
# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sda5[0] sdb5[1]
19706812 blocks super 1.1 [2/2] [UU]
@ -88,9 +88,9 @@ The file ``/proc/mdstat`` includes the RAID devices status on the system, here i
unused devices: <none>
On a system with a multipath configuration, ``/proc/mdstat`` looks like: ::
# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 dm-11[0] dm-6[1]
291703676 blocks super 1.1 [2/2] [UU]
@ -103,18 +103,20 @@ On the system with multipath configuration, the ``/proc/mdstat`` looks like:
204788 blocks super 1.0 [2/2] [UU]
unused devices: <none>
The command ``mdadm`` can query the detailed configuration for the RAID partitions: ::
mdadm --detail /dev/md2
Deploy Diskful Nodes with RAID1 Setup on SLES
---------------------------------------------
xCAT provides one sample autoyast template file with the RAID1 settings, ``/opt/xcat/share/xcat/install/sles/service.raid1.sles11.tmpl``. You can customize the template file and put it under ``/install/custom/install/<platform>/`` if the default one does not match your requirements.
Here is the RAID1 partitioning section in service.raid1.sles11.tmpl: ::
<partitioning config:type="list">
<drive>
<device>/dev/sda</device>
@ -208,8 +210,8 @@ Here is the RAID1 partitioning section in service.raid1.sles11.tmpl:
The samples above created one 24MB PReP partition on each disk, one 2GB mirrored swap partition, and one mirrored ``/`` partition that uses all the remaining disk space. If you want to use a different partitioning scheme in your cluster, modify this RAID1 section in the autoyast template file accordingly.
Since the PReP partition can not be mirrored between the two disks, some additional postinstall commands should be run to make the second disk bootable. Here are the commands needed to make the second disk bootable: ::
# Set the second disk to be bootable for RAID1 setup
parted -s /dev/sdb mkfs 1 fat16
parted /dev/sdb set 1 type 6
@ -221,8 +223,8 @@ The procedure listed above has been added to the file ``/opt/xcat/share/xcat/ins
After the diskful nodes are up and running, you can check the RAID1 settings with the following commands:
The mount command shows the ``/dev/mdx`` devices mounted to various file systems; the ``/dev/mdx`` devices indicate that RAID is being used on this node. ::
server:~ # mount
/dev/md1 on / type reiserfs (rw)
proc on /proc type proc (rw)
@ -232,8 +234,8 @@ Mount command shows the ``/dev/mdx`` devices are mounted to various file systems
tmpfs on /dev/shm type tmpfs (rw,mode=1777)
devpts on /dev/pts type devpts (rw,mode=0620,gid=5)
The file ``/proc/mdstat`` contains the status of the RAID devices on the system; here is an example of ``/proc/mdstat``: ::
server:~ # cat /proc/mdstat
Personalities : [raid1] [raid0] [raid10] [raid6] [raid5] [raid4]
md0 : active (auto-read-only) raid1 sda2[0] sdb2[1]
@ -246,8 +248,8 @@ The file ``/proc/mdstat`` includes the RAID devices status on the system, here i
unused devices: <none>
The command ``mdadm`` can query the detailed configuration for the RAID partitions: ::
mdadm --detail /dev/md1
Disk Replacement Procedure
@ -255,9 +257,9 @@ Disk Replacement Procedure
If any one disk fails in the RAID1 array, do not panic. Follow the procedure listed below to replace the failed disk and you will be fine.
Faulty disks should appear marked with an (F) if you look at ``/proc/mdstat``: ::
# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 dm-11[0](F) dm-6[1]
291703676 blocks super 1.1 [2/1] [_U]
@ -276,27 +278,27 @@ We can see that the first disk is broken because all the RAID partitions on this
Remove the failed disk from RAID array
---------------------------------------
``mdadm`` is the command used to query and manage RAID arrays on Linux. To remove the failed disk from the RAID array, use the command: ::
mdadm --manage /dev/mdx --remove /dev/xxx
Here ``/dev/mdx`` is one of the RAID partitions listed in the ``/proc/mdstat`` file, such as md0, md1 and md2; ``/dev/xxx`` is the backing device, such as dm-11, dm-8 and dm-9 in the multipath configuration, or sda5, sda3 and sda2 in the non-multipath configuration.
Here is an example of removing the failed disk from the RAID1 array in the non-multipath configuration: ::
mdadm --manage /dev/md0 --remove /dev/sda3
mdadm --manage /dev/md1 --remove /dev/sda2
mdadm --manage /dev/md2 --remove /dev/sda5
Here is an example of removing the failed disk from the RAID1 array in the multipath configuration: ::
mdadm --manage /dev/md0 --remove /dev/dm-9
mdadm --manage /dev/md1 --remove /dev/dm-8
mdadm --manage /dev/md2 --remove /dev/dm-11
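If a partition has not already been flagged as faulty, ``mdadm`` may refuse to remove it. In that case you can mark it as failed first and then remove it (a minimal sketch, reusing the non-multipath device names from above): ::

# mark the member as failed before removing it
mdadm --manage /dev/md0 --fail /dev/sda3
mdadm --manage /dev/md0 --remove /dev/sda3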
After the failed disk is removed from the RAID1 array, the partitions on the failed disk will disappear from ``/proc/mdstat`` and from the ``mdadm --detail`` output as well. ::
# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 dm-6[1]
291703676 blocks super 1.1 [2/1] [_U]
@ -310,7 +312,7 @@ After the failed disk is removed from the RAID1 array, the partitions on the fai
unused devices: <none>
# mdadm --detail /dev/md0
/dev/md0:
Version : 1.0
Creation Time : Tue Jul 19 02:39:03 2011
@ -343,24 +345,24 @@ Replace the disk
Depending on the hot swap capability, you may simply unplug the failed disk and replace it with a new one if hot swap is supported; otherwise, you will need to power off the machine, replace the disk, and then power the machine back on.
Create partitions on the new disk
The first thing we must do now is to create exactly the same partitioning on the new disk as on the remaining good disk. We can do this with one simple command: ::
sfdisk -d /dev/<good_disk> | sfdisk /dev/<new_disk>
For the non-multipath configuration, here is an example: ::
sfdisk -d /dev/sdb | sfdisk /dev/sda
For the multipath configuration, here is an example: ::
sfdisk -d /dev/dm-1 | sfdisk /dev/dm-0
If you get the error message "sfdisk: I don't like these partitions - nothing changed.", you can add the ``--force`` option to the sfdisk command: ::
sfdisk -d /dev/sdb | sfdisk /dev/sda --force
You can run the following command to check whether both hard drives now have the same partitioning: ::
fdisk -l
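You can also compare the two partition tables directly by diffing the ``sfdisk`` dumps after stripping the device names (a sketch; assumes a bash shell with process substitution and the non-multipath device names): ::

# normalize the device paths so only the partition geometry is compared
diff <(sfdisk -d /dev/sda | sed 's|/dev/sda|DISK|') <(sfdisk -d /dev/sdb | sed 's|/dev/sdb|DISK|')

No output means the partition layouts match; minor header differences, if any, can be ignored.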
@ -368,29 +370,29 @@ To check if both hard drives have the same partitioning now.
Add the new disk into the RAID1 array
-------------------------------------
After the partitions are created on the new disk, you can use the following command to add the new disk to the RAID1 array: ::
mdadm --manage /dev/mdx --add /dev/xxx
Here ``/dev/mdx`` is one of the RAID partitions such as md0, md1 and md2; ``/dev/xxx`` is the backing device, such as dm-11, dm-8 and dm-9 in the multipath configuration, or sda5, sda3 and sda2 in the non-multipath configuration.
Here is an example for the non-multipath configuration: ::
mdadm --manage /dev/md0 --add /dev/sda3
mdadm --manage /dev/md1 --add /dev/sda2
mdadm --manage /dev/md2 --add /dev/sda5
Here is an example for the multipath configuration: ::
mdadm --manage /dev/md0 --add /dev/dm-9
mdadm --manage /dev/md1 --add /dev/dm-8
mdadm --manage /dev/md2 --add /dev/dm-11
All done! You can have a cup of coffee while you watch the fully automatic reconstruction run...
While the RAID1 array is reconstructing, you will see some progress information in ``/proc/mdstat``: ::
# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 dm-11[0] dm-6[1]
291703676 blocks super 1.1 [2/1] [_U]
@ -407,9 +409,9 @@ While the RAID1 array is reconstructing, you will see some progress information
unused devices: <none>
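To follow the rebuild continuously rather than re-running the command by hand, you can use ``watch`` (a small convenience, assuming the ``watch`` utility is installed): ::

# refresh the RAID status every 5 seconds
watch -n 5 cat /proc/mdstat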
After the reconstruction is done, ``/proc/mdstat`` looks like this: ::
# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 dm-11[0] dm-6[1]
291703676 blocks super 1.1 [2/2] [UU]
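If the disk you replaced is the one that held the PReP boot partition, remember to make the new disk bootable again with the same commands shown earlier (shown here assuming the new disk is /dev/sda): ::

# re-create the bootable PReP partition on the replacement disk
parted -s /dev/sda mkfs 1 fat16
parted /dev/sda set 1 type 6
parted /dev/sda set 1 boot on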
View File
@ -1,5 +0,0 @@
.. _Sync-Files-label:
Sync Files to Compute Node
==========================
View File
@ -2,7 +2,7 @@ Manage Virtual Machine (VMs)
============================
Create the Virtual Machine
--------------------------
In this document, we assume that the PowerKVM hypervisor host cn1 is ready to use.
@ -84,27 +84,24 @@ Run the chdef command to change the following attributes for the vm1:
10. Set 'netboot' attribute
* **[x86_64]** ::
chdef vm1 netboot=xnba
* **[PPC64LE]** ::
chdef vm1 netboot=grub2
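You can verify the setting afterwards (a quick check with ``lsdef``): ::

lsdef vm1 -i netboot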
Make sure that grub2 has been installed on your Management Node: ::
rpm -aq | grep grub2
grub2-xcat-1.0-1.noarch
Note: If you are working with an xCAT-dep older than 20141012, the modules in the grub2 shipped with xCAT cannot support Ubuntu LE smoothly, so the following steps are needed to complete the grub2 setup. ::
rm /tftpboot/boot/grub2/grub2.ppc
cp /tftpboot/boot/grub2/powerpc-ieee1275/core.elf /tftpboot/boot/grub2/grub2.ppc
/bin/cp -rf /tmp/iso/boot/grub/powerpc-ieee1275/elf.mod /tftpboot/boot/grub2/powerpc-ieee1275/
Make the VM under xCAT
``````````````````````
@ -205,7 +202,7 @@ You can use console in xcat management node or kvm hypervisor to monitor the pro
Remove the virtual machine
--------------------------
Remove vm1 even when it is in power-on status. ::
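The literal block that follows this line lies outside the hunk shown here; as a hedged sketch, a forced removal with ``rmvm`` would look like: ::

# sketch only -- the original command is not shown in this hunk
rmvm vm1 -f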
View File
@ -15,7 +15,7 @@ Switch info::
.. include:: config_environment.rst
Predefined Nodes
----------------
In order to differentiate one node from another, the admin needs to predefine nodes in the xCAT database based on the switch information. This consists of two parts:
View File
@ -1,9 +1,9 @@
Add Additional Software Packages
================================
.. include:: ../../../common/deployment/additionalpkg/additional_pkg.rst
.. toctree::
:maxdepth: 2
../../../common/deployment/additionalpkg/additional_pkg_overview.rst
../../../common/deployment/additionalpkg/nonubuntu_os_pkg.rst
../../../common/deployment/additionalpkg/nonubuntu_os_other_pkg.rst
View File
@ -1,9 +1,10 @@
Prescripts and Postscripts
==========================
.. include:: ../../../common/deployment/prepostscritps/pre_post_script.rst
.. toctree::
:maxdepth: 2
../../../common/deployment/prepostscripts/pre_script.rst
../../../common/deployment/prepostscripts/post_script.rst
../../../common/deployment/prepostscripts/suggestions.rst
View File
@ -1,5 +1,6 @@
Synchronizing Files
===================
.. include:: ../../../common/deployment/syncfile/syncfile.rst
.. toctree::
:maxdepth: 2
View File
@ -1,2 +0,0 @@
.. include:: ../../../common/deployment/additional_pkg.rst
View File
@ -0,0 +1 @@
../../diskful/customize_image/additional_pkg.rst
View File
@ -1,7 +1,7 @@
Customize osimage (Optional)
============================
Optional means all the subitems in this page are not necessary to finish an OS deployment. If you are new to xCAT, you can just jump to `Initialize the Compute for Deployment`.
.. toctree::
:maxdepth: 2
View File
@ -1,2 +0,0 @@
.. include:: ../../../common/deployment/pre_post_script.rst
View File
@ -0,0 +1 @@
../../diskful/customize_image/pre_post_script.rst
View File
@ -1,2 +0,0 @@
.. include:: ../../../common/deployment/syncfile.rst
View File
@ -0,0 +1 @@
../../diskful/customize_image/syncfile.rst