
modify some parts depending on the comments

This commit is contained in:
huweihua
2015-09-10 03:32:02 -04:00
parent 2805011aff
commit fc6e6e1d5d
2 changed files with 35 additions and 35 deletions


@ -1,9 +1,7 @@
Configure Secondary Network Adapter
===================================
Configure Additional Network Interfaces
=======================================
Introduction
------------
The **nics** table and the **confignics** postscript can be used to automatically configure additional **ethernet** and **Infiniband** adapters on nodes as they are being deployed. ("Additional adapters" means adapters other than the primary adapter that the node is being installed/booted over.)
The **nics** table and the **confignics** postscript can be used to automatically configure additional network interfaces (multiple Ethernet adapters, InfiniBand, etc.) on the nodes as they are being deployed.
The confignics postscript decides which IP address to give each additional adapter by checking the nics table, in which the NIC configuration information is stored.
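If you want to see what is currently stored there, ``tabdump`` prints the raw contents of the nics table (it will be empty until you define something with one of the methods below):
::
[root@ls21n01 ~]# tabdump nics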
@ -15,7 +13,9 @@ Define configuration information for the Secondary Adapters in the nics table
There are 3 ways to complete this operation.
**First way is to use command line input. below is a example**
1. Using command line
Below is an example
::
[root@ls21n01 ~]# mkdef cn1 groups=all nicips.eth1="11.1.89.7|12.1.89.7" nicnetworks.eth1="net11|net12" nictypes.eth1="Ethernet"
1 object definitions have been created or modified.
@ -23,9 +23,9 @@ There are 3 ways to complete this operation.
[root@ls21n01 ~]# chdef cn1 nicips.eth2="13.1.89.7|14.1.89.7" nicnetworks.eth2="net13|net14" nictypes.eth2="Ethernet"
1 object definitions have been created or modified.
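To double-check what has been defined, you can list just the nic-related attributes of the node; ``lsdef -i`` limits the output to the listed attributes (a quick verification step, using the node cn1 and the attribute names from the commands above):
::
[root@ls21n01 ~]# lsdef cn1 -i nicips,nicnetworks,nictypes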
**Second way is to use stanza file**
2. Using stanza file
prepare your stanza file <filename>.stanza. the content of <filename>.stanza like below:
Prepare your stanza file <filename>.stanza. The content of <filename>.stanza looks like the following:
::
# <xCAT data object stanza file>
cn1:
@ -45,7 +45,7 @@ define configuration information by <filename>.stanza
::
cat <filename>.stanza | mkdef -z
**Third way is to use 'tabedit' to edit the nics table directly**
3. Using 'tabedit' to edit the nics table
The 'tabedit' command opens the specified table in the user's editor (such as vi), lets the user edit any text, and then writes the changes back to the database table. This is tedious and error prone, so it is not the recommended way. If you do use it, note that the **nicips**, **nictypes** and **nicnetworks** attributes are required.
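For example, to open the table discussed above for manual editing:
::
[root@ls21n01 ~]# tabedit nics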


@ -8,18 +8,18 @@ This section describes how to use xCAT to deploy diskful nodes with RAID1 setup,
All the examples in this section are based on three configuration scenarios:
- RHEL6 on a system p machine with two SCSI disks sda and sdb
1. RHEL6 on a system p machine with two SCSI disks sda and sdb
- RHEL6 on a system p machine with two SAS disks and multipath configuration.
2. RHEL6 on a system p machine with two SAS disks and multipath configuration.
- SLES 11 SP1 on a system p machine with two SCSI disks sda and sdb
3. SLES 11 SP1 on a system p machine with two SCSI disks sda and sdb
If you are not using the configuration scenarios listed above, you may need to modify some of the steps in this documentation to make it work in your environment.
Deploy Diskful Nodes with RAID1 Setup on RedHat
-----------------------------------------------
xCAT provides two sample kickstart template files with the RAID1 settings, /opt/xcat/share/xcat/install/rh/service.raid1.rhel6.ppc64.tmpl is for the configuration scenario #1 listed above and /opt/xcat/share/xcat/install/rh/service.raid1.multipath.rhel6.ppc64.tmpl is for the configuration scenario #2 listed above. You can customize the template file and put it under /install/custom/install/<platform>/ if the default one does not match your requirements.
xCAT provides two sample kickstart template files with the RAID1 settings: ``/opt/xcat/share/xcat/install/rh/service.raid1.rhel6.ppc64.tmpl`` is for configuration scenario **1** listed above, and ``/opt/xcat/share/xcat/install/rh/service.raid1.multipath.rhel6.ppc64.tmpl`` is for configuration scenario **2** listed above. You can customize the template file and put it under ``/install/custom/install/<platform>/`` if the default one does not match your requirements.
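As a minimal sketch of that customization step for scenario **1** (``rh`` is assumed to be the right ``<platform>`` directory; keep or rename the copy to match your own profile/osimage conventions):
::
mkdir -p /install/custom/install/rh
cp /opt/xcat/share/xcat/install/rh/service.raid1.rhel6.ppc64.tmpl /install/custom/install/rh/
vi /install/custom/install/rh/service.raid1.rhel6.ppc64.tmpl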
Here is the RAID1 partitioning section in service.raid1.rhel6.ppc64.tmpl:
::
@ -57,11 +57,11 @@ And here is the RAID1 partitioning section in service.raid1.multipath.rhel6.ppc6
part raid.22 --size 1 --fstype ext4 --grow --ondisk mpathb
raid / --level 1 --device md2 raid.21 raid.22
The samples above created one PReP partition, one 200MB /boot partition and one / partition on sda/sda and mpatha/mpathb. If you want to use different partitioning scheme in your cluster, modify this RAID1 section in the kickstart template file accordingly.
The samples above created one PReP partition, one 200MB ``/boot`` partition and one ``/`` partition on ``sda/sdb`` and ``mpatha/mpathb``. If you want to use a different partitioning scheme in your cluster, modify this RAID1 section in the kickstart template file accordingly.
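For instance, here is a sketch of what adding a separate mirrored ``/var`` could look like in the non-multipath kickstart RAID1 section (the 4096MB size and the ``md3``/``raid.31``/``raid.32`` names are only illustrative):
::
part raid.31 --size 4096 --fstype ext4 --ondisk sda
part raid.32 --size 4096 --fstype ext4 --ondisk sdb
raid /var --level 1 --device md3 raid.31 raid.32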
After the diskful nodes are up and running, you can check the RAID1 settings with the following commands:
Mount command shows the /dev/mdx devices are mounted to various file systems, the /dev/mdx indicates that the RAID is being used on this node.
The ``mount`` command shows that the ``/dev/mdx`` devices are mounted on various file systems; the presence of ``/dev/mdx`` devices indicates that RAID is in use on this node.
::
[root@server ~]# mount
/dev/md2 on / type ext4 (rw)
@ -72,7 +72,7 @@ Mount command shows the /dev/mdx devices are mounted to various file systems, th
/dev/md0 on /boot type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
The file /proc/mdstat includes the RAID devices status on the system, here is an example of /proc/mdstat in the non-multipath environment:
The file ``/proc/mdstat`` shows the status of the RAID devices on the system. Here is an example of ``/proc/mdstat`` in the non-multipath environment:
::
[root@server ~]# cat /proc/mdstat
Personalities : [raid1]
@ -88,7 +88,7 @@ The file /proc/mdstat includes the RAID devices status on the system, here is an
unused devices: <none>
On the system with multipath configuration, the /proc/mdstat looks like:
On a system with a multipath configuration, ``/proc/mdstat`` looks like this:
::
[root@server ~]# cat /proc/mdstat
Personalities : [raid1]
@ -111,7 +111,7 @@ The command mdadm can query the detailed configuration for the RAID partitions:
Deploy Diskful Nodes with RAID1 Setup on SLES
---------------------------------------------
xCAT provides one sample autoyast template files with the RAID1 settings /opt/xcat/share/xcat/install/sles/service.raid1.sles11.tmpl. You can customize the template file and put it under /install/custom/install/<platform>/ if the default one does not match your requirements.
xCAT provides one sample autoyast template file with the RAID1 settings, ``/opt/xcat/share/xcat/install/sles/service.raid1.sles11.tmpl``. You can customize the template file and put it under ``/install/custom/install/<platform>/`` if the default one does not match your requirements.
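The same copy-and-edit approach sketched for RHEL applies here (``sles`` as the ``<platform>`` directory is an assumption; adjust the name to match your own profile/osimage conventions):
::
mkdir -p /install/custom/install/sles
cp /opt/xcat/share/xcat/install/sles/service.raid1.sles11.tmpl /install/custom/install/sles/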
Here is the RAID1 partitioning section in service.raid1.sles11.tmpl:
::
@ -206,9 +206,9 @@ Here is the RAID1 partitioning section in service.raid1.sles11.tmpl:
</drive>
</partitioning>
The samples above created one 24MB PReP partition on each disk, one 2GB mirroed swap partition and one mirroed / partition uses all the disk space. If you want to use different partitioning scheme in your cluster, modify this RAID1 section in the autoyast template file accordingly.
The samples above created one 24MB PReP partition on each disk, one 2GB mirrored swap partition, and one mirrored ``/`` partition that uses all the remaining disk space. If you want to use a different partitioning scheme in your cluster, modify this RAID1 section in the autoyast template file accordingly.
Since the PReP partition can not be mirroed between the two disks, some additional postinstall commands should be run to make the second disk bootable, here the the commands needed to make the second disk bootable:
Since the PReP partition cannot be mirrored between the two disks, some additional postinstall commands should be run to make the second disk bootable. Here are the commands needed to make the second disk bootable:
::
# Set the second disk to be bootable for RAID1 setup
parted -s /dev/sdb mkfs 1 fat16
@ -217,11 +217,11 @@ Since the PReP partition can not be mirroed between the two disks, some addition
dd if=/dev/sda1 of=/dev/sdb1
bootlist -m normal sda sdb
The procedure listed above has been added to the file /opt/xcat/share/xcat/install/scripts/post.sles11.raid1 to make it be automated. The autoyast template file service.raid1.sles11.tmpl will include the content of post.sles11.raid1, so no manual steps are needed here.
The procedure listed above has been added to the file ``/opt/xcat/share/xcat/install/scripts/post.sles11.raid1`` to automate it. The autoyast template file ``service.raid1.sles11.tmpl`` includes the content of ``post.sles11.raid1``, so no manual steps are needed here.
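If you want to confirm that the shipped template actually references the postinstall script (assuming it pulls the script in by name rather than embedding the commands directly), a quick ``grep`` is enough:
::
grep -n post.sles11.raid1 /opt/xcat/share/xcat/install/sles/service.raid1.sles11.tmpl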
After the diskful nodes are up and running, you can check the RAID1 settings with the following commands:
Mount command shows the /dev/mdx devices are mounted to various file systems, the /dev/mdx indicates that the RAID is being used on this node.
The ``mount`` command shows that the ``/dev/mdx`` devices are mounted on various file systems; the presence of ``/dev/mdx`` devices indicates that RAID is in use on this node.
::
server:~ # mount
/dev/md1 on / type reiserfs (rw)
@ -232,7 +232,7 @@ Mount command shows the /dev/mdx devices are mounted to various file systems, th
tmpfs on /dev/shm type tmpfs (rw,mode=1777)
devpts on /dev/pts type devpts (rw,mode=0620,gid=5)
The file /proc/mdstat includes the RAID devices status on the system, here is an example of /proc/mdstat:
The file ``/proc/mdstat`` shows the status of the RAID devices on the system. Here is an example of ``/proc/mdstat``:
::
server:~ # cat /proc/mdstat
Personalities : [raid1] [raid0] [raid10] [raid6] [raid5] [raid4]
@ -255,7 +255,7 @@ Disk Replacement Procedure
If any one disk fails in the RAID1 array, do not panic. Follow the procedure listed below to replace the failed disk and you will be fine.
Faulty disks should appear marked with an (F) if you look at /proc/mdstat:
Faulty disks should appear marked with an (F) if you look at ``/proc/mdstat``:
::
[root@server ~]# cat /proc/mdstat
Personalities : [raid1]
@ -276,11 +276,11 @@ We can see that the first disk is broken because all the RAID partitions on this
Remove the failed disk from RAID array
---------------------------------------
mdadm is the command that can be used to query and manage the RAID arrays on Linux. To remove the failed disk from RAID array, use the command:
``mdadm`` is the command used to query and manage RAID arrays on Linux. To remove the failed disk from the RAID array, use the command:
::
mdadm --manage /dev/mdx --remove /dev/xxx
Where the /dev/mdx are the RAID partitions listed in /proc/mdstat file, such as md0, md1 and md2; the /dev/xxx are the backend devices like dm-11, dm-8 and dm-9 in the multipath configuration and sda5, sda3 and sda2 in the non-multipath configuration.
Where ``/dev/mdx`` is one of the RAID partitions listed in the ``/proc/mdstat`` file, such as md0, md1 and md2, and ``/dev/xxx`` is the backend device, such as dm-11, dm-8 and dm-9 in the multipath configuration or sda5, sda3 and sda2 in the non-multipath configuration.
Here is the example of removing failed disk from the RAID1 array in the non-multipath configuration:
::
@ -294,7 +294,7 @@ Here is the example of removing failed disk from the RAID1 array in the multipat
mdadm --manage /dev/md1 --remove /dev/dm-8
mdadm --manage /dev/md2 --remove /dev/dm-11
After the failed disk is removed from the RAID1 array, the partitions on the failed disk will be removed from /proc/mdstat and the "mdadm --detail" output also.
After the failed disk is removed from the RAID1 array, the partitions on the failed disk will be removed from ``/proc/mdstat`` and from the ``mdadm --detail`` output as well.
::
[root@server ~]# cat /proc/mdstat
Personalities : [raid1]
@ -363,7 +363,7 @@ You can run
::
fdisk -l
to check if both hard drives have the same partitioning now.
This checks whether both hard drives now have the same partitioning.
Add the new disk into the RAID1 array
-------------------------------------
@ -372,7 +372,7 @@ After the partitions are created on the new disk, you can use command
::
mdadm --manage /dev/mdx --add /dev/xxx
to add the new disk to the RAID1 array. Where the /dev/mdx are the RAID partitions like md0, md1 and md2; the /dev/xxx are the backend devices like dm-11, dm-8 and dm-9 in the multipath configuration and sda5, sda3 and sda2 in the non-multipath configuration.
This adds the new disk to the RAID1 array. Here ``/dev/mdx`` is one of the RAID partitions, such as md0, md1 and md2, and ``/dev/xxx`` is the backend device, such as dm-11, dm-8 and dm-9 in the multipath configuration or sda5, sda3 and sda2 in the non-multipath configuration.
Here is an example for the non-multipath configuration:
::
@ -388,7 +388,7 @@ Here is an example for the multipath configuration:
All done! You can have a cup of coffee and watch the fully automatic reconstruction run...
While the RAID1 array is reconstructing, you will see some progress information in /proc/mdstat:
While the RAID1 array is reconstructing, you will see some progress information in ``/proc/mdstat``:
::
[root@server raid1]# cat /proc/mdstat
Personalities : [raid1]
@ -407,7 +407,7 @@ While the RAID1 array is reconstructing, you will see some progress information
unused devices: <none>
After the reconstruction is done, the /proc/mdstat becomes like:
After the reconstruction is done, ``/proc/mdstat`` looks like this:
::
[root@server ~]# cat /proc/mdstat
Personalities : [raid1]
@ -428,13 +428,13 @@ Make the new disk bootable
If the new disk does not have a PReP partition, or the PReP partition has a problem, the disk will not be bootable. Here is an example of how to make the new disk bootable; you may need to substitute the device names with your own values.
**RedHat:**
::
* **[RHEL]**::
mkofboot .b /dev/sda
bootlist -m normal sda sdb
**SLES:**
::
* **[SLES]**::
parted -s /dev/sda mkfs 1 fat16
parted /dev/sda set 1 type 6
parted /dev/sda set 1 boot on