mirror of https://github.com/xcat2/xcat-core.git synced 2025-06-18 12:20:40 +00:00
This commit is contained in:
Jarrod Johnson
2016-04-27 15:25:52 -04:00
76 changed files with 1946 additions and 1188 deletions


@ -1,5 +1,4 @@
# The shell is commented out so that it will run in bash on linux and ksh on aix
# !/bin/bash
#!/bin/bash
# Build and upload the xcat-core code, on either linux or aix.


@ -1,5 +1,5 @@
# The shell is commented out so that it will run in bash on linux and ksh on aix
# !/bin/sh
#!/bin/sh
#
# Package up all the xCAT open source dependencies
# - creating the yum repos


@ -1,3 +1,4 @@
#!/bin/sh
#######################################################################
#build script for local usage
#used for Linux/AIX/Ubuntu


@ -29,7 +29,7 @@ Granting/Revoking access to the database for Service Node Clients
* Log into the MySQL interactive program. ::
/usr/bin/mysql -r root -p
/usr/bin/mysql -u root -p
* Granting access to the xCAT database. Service Nodes are required for xCAT hierarchical support. Compute nodes may also need access that depends on which application is going to run. (xcat201 is xcatadmin's password for following examples) ::
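As an illustrative sketch only (the ``xcatdb`` database name and the service node hostname pattern ``%.cluster.net`` are assumptions to adapt to your site), the grant can look like the following inside the MySQL interactive program: ::

   mysql> GRANT ALL ON xcatdb.* TO xcatadmin@'%.cluster.net' IDENTIFIED BY 'xcat201';
   mysql> FLUSH PRIVILEGES;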


@ -77,7 +77,7 @@ Debian/Ubuntu
mysql-server-5*
mysql-server-core-5*
* MariaDB - Using ``apt-get``, ensure that the following packages are installed on the management node: ::
* MariaDB - Using ``apt-get``, ensure that the following packages are installed on the management node. ``apt-get install mariadb-server`` will pull in all required packages. For Ubuntu 16.04, ``libmariadbclient18`` is no longer required. ::
libmariadbclient18
mariadb-client
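For example (assuming the standard Ubuntu repositories are configured on the management node), the whole set can be pulled in with: ::

   apt-get install mariadb-server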


@ -1,11 +1,11 @@
IB Driver Preparation and Installation
======================================
xCAT provides sample postscripts to help you install the Mellanox OpenFabrics Enterprise Distribution (OFED) Infiniband Driver. These scripts are located in ``opt/xcat/share/xcat/ib/scripts/Mellanox/``. You can use these scripts directly or change them to satisfy your own environment. **xCAT 2.11 drops support of mlnxofed_ib_install and recommends using version 2 of the script: mlnxofed_ib_install.v2**.
xCAT provides sample postscripts to help you install the Mellanox OpenFabrics Enterprise Distribution (OFED) InfiniBand Driver. These scripts are located in ``/opt/xcat/share/xcat/ib/scripts/Mellanox/``. You can use these scripts directly or change them to satisfy your own environment. **xCAT 2.11 drops support of mlnxofed_ib_install and recommends using version 2 of the script: mlnxofed_ib_install.v2**.
.. toctree::
:maxdepth: 2
mlnxofed_ib_install_v2_usage.rst
mlnxofed_ib_install_v1_usage.rst


@ -1,8 +1,8 @@
Infiniband (Mellanox)
InfiniBand (Mellanox)
=====================
xCAT offers a certain degree support for Mellanox infiniband product, it help you to configurate Mellanox infiniband products easily.
xCAT offers a certain degree of support for Mellanox InfiniBand products and helps you configure them easily. For more information about Mellanox InfiniBand, please refer to `Mellanox official site <http://www.mellanox.com/>`_.
.. toctree::
:maxdepth: 2


@ -1,95 +1,169 @@
Configuration for Diskful Installation
=======================================
1. Set script ``mlnxofed_ib_install`` as postbootscript ::
1. Set script ``mlnxofed_ib_install`` as ``postbootscripts`` or ``postscripts`` ::
chdef <node> -p postbootscripts="mlnxofed_ib_install -p /install/<path>/<MLNX_OFED_LINUX.iso>"
chdef <node> -p postbootscripts="mlnxofed_ib_install -p /install/<subpath>/<MLNX_OFED_LINUX.iso>"
Or ::
chdef <node> -p postscripts="mlnxofed_ib_install -p /install/<subpath>/<MLNX_OFED_LINUX.iso>"
Using ``postbootscripts``, xCAT completely follows the way the Mellanox scripts are meant to work: the node is rebooted after driver installation so the Mellanox drivers work reliably, just as Mellanox suggests. If you want to reuse the reboot that follows operating system installation and avoid rebooting twice, you can use the ``postscripts`` attribute to install the Mellanox drivers instead. This approach has been verified in limited scenarios. For more information please refer to :doc:`The Scenarios Have Been Verified </advanced/networks/infiniband/mlnxofed_ib_verified_scenario_matrix>`. You can try it in other scenarios if needed.
2. Specify dependence package **[required for RHEL and SLES]**
2. Specify dependency package
a) Copy a correct pkglist file **shipped by xCAT** according your environment to the ``/install/custom/install/<ostype>/`` directory, these pkglist files are located under ``/opt/xcat/share/xcat/install/<ostype>/`` ::
Some dependencies need to be installed before running the Mellanox scripts, and they differ between scenarios. xCAT installs these dependency packages through the ``pkglist`` attribute of the ``osimage`` definition. Please refer to :doc:`Add Additional Software Packages </guides/admin-guides/manage_clusters/ppc64le/diskful/customize_image/additional_pkg>` for more information::
cp /opt/xcat/share/xcat/install/<ostype>/compute.<osver>.<arch>.pkglist \
/install/custom/install/<ostype>/compute.<osver>.<arch>.pkglist
# lsdef -t osimage <os>-<arch>-install-compute
Object name: <os>-<arch>-install-compute
imagetype=linux
....
pkgdir=/<os packages directory>
pkglist=/<os packages list directory>/compute.<os>.<arch>.pkglist
....
b) Edit your ``/install/custom/install/<ostype>/compute.<osver>.<arch>.pkglist`` and add one line
``#INCLUDE:/opt/xcat/share/xcat/ib/netboot/<ostype>/ib.<osver>.<arch>.pkglist#``
You can check directory ``/opt/xcat/share/xcat/ib/netboot/<ostype>/`` and choose one correct ``ib.<osver>.<arch>.pkglist`` according your environment.
c) Make the related osimage use the customized pkglist ::
You can append the InfiniBand dependency package list directly to the end of ``/<os packages list directory>/compute.<os>.<arch>.pkglist``, like below: ::
chdef -t osimage -o <osver>-<arch>-install-compute \
pkglist=/install/custom/install/<ostype>/compute.<osver>.<arch>.pkglist
#cat /<os packages list directory>/compute.<os>.<arch>.pkglist
@base
@x11
openssl
ntp
rsyn
#ib part
createrepo
kernel-devel
kernel-source
....
Or, if you want to keep the InfiniBand dependency package list in a separate file, you can include that file in ``/<os packages list directory>/compute.<os>.<arch>.pkglist``, like below: ::
#cat /<os packages list directory>/compute.<os>.<arch>.pkglist
@base
@x11
openssl
ntp
rsyn
#INCLUDE:/<ib pkglist path>/<you ib pkglist file>#
xCAT ships some InfiniBand pkglist files under ``/opt/xcat/share/xcat/ib/netboot/<ostype>/``; these pkglist files have been verified in specific scenarios. Please refer to :doc:`The Scenarios Have Been Verified </advanced/networks/infiniband/mlnxofed_ib_verified_scenario_matrix>` to judge whether you can use one directly in your environment. If so, you can use it like below: ::
#cat /<os packages list directory>/compute.<os>.<arch>.pkglist
@base
@x11
openssl
ntp
rsyn
#INCLUDE:/opt/xcat/share/xcat/ib/netboot/<ostype>/ib.<os>.<arch>.pkglist#
Take rhels7.2 on ppc64le for example: ::
# lsdef -t osimage rhels7.2-ppc64le-install-compute
Object name: rhels7.2-ppc64le-install-compute
imagetype=linux
osarch=ppc64le
osdistroname=rhels7.2-ppc64le
osname=Linux
osvers=rhels7.2
otherpkgdir=/install/post/otherpkgs/rhels7.2/ppc64le
pkgdir=/install/rhels7.2/ppc64le
pkglist=/install/custom/install/rh/compute.rhels7.ib.pkglist
profile=compute
provmethod=install
template=/opt/xcat/share/xcat/install/rh/compute.rhels7.tmpl
Take RHEL 6.4 on x86_64 for example ::
cp /opt/xcat/share/xcat/install/rh/compute.rhels6.x86_64.pkglist \
/install/custom/install/rh/compute.rhels6.x86_64.pkglist
**[Note]**: If the osimage definition was generated by the xCAT command ``copycds``, the default value ``/opt/xcat/share/xcat/install/rh/compute.rhels7.pkglist`` was assigned to the ``pkglist`` attribute. ``/opt/xcat/share/xcat/install/rh/compute.rhels7.pkglist`` is the sample pkglist shipped by xCAT; we recommend making a copy of this sample and using the copy in a real environment. In the above example, ``/install/custom/install/rh/compute.rhels7.ib.pkglist`` is a copy of ``/opt/xcat/share/xcat/install/rh/compute.rhels7.pkglist``. ::
# cat /install/custom/install/rh/compute.rhels7.ib.pkglist
#Please make sure there is a space between @ and group name
wget
ntp
nfs-utils
net-snmp
rsync
yp-tools
openssh-server
util-linux
net-tools
#INCLUDE:/opt/xcat/share/xcat/ib/netboot/rh/ib.rhels7.ppc64le.pkglist#
Edit the ``/install/custom/install/rh/compute.rhels6.x86_64.pkglist`` and add below line
``#INCLUDE:/opt/xcat/share/xcat/ib/netboot/rh/ib.rhels6.x86_64.pkglist#``
Then ``/install/custom/install/rh/compute.rhels6.x86_64.pkglist`` looks like below ::
#Please make sure there is a space between @ and group name
#INCLUDE:/opt/xcat/share/xcat/ib/netboot/rh/ib.rhels6.x86_64.pkglist#
ntp
nfs-utils
net-snmp
rsync
yp-tools
openssh-server
util-linux-ng
Then modify related osimage ::
chdef -t osimage -o rhels6.4-x86_64-install-compute \
pkglist=/install/custom/install/rh/compute.rhels6.x86_64.pkglist
3. Install node ::
nodeset <node> osimage=<osver>-<arch>-install-compute
rsetboot <node> net
rpower <node> reset
**[Note]**:
* In RHEL7.x, after performing all the steps above, ``openibd`` may not work well. You can resolve this yourself based on the `Mellanox OFED Linux Release Notes <http://www.mellanox.com/related-docs/prod_software/Mellanox_OFED_Linux_Release_Notes_3_1-1_0_5.pdf>`_, but one more reboot resolves all of these complex issues, so **we strongly recommend rebooting the machine again to avoid unexpected problems in RHEL7.x.**
After the steps above, you can log in to the target node and find the Mellanox InfiniBand drivers under ``/lib/modules/<kernel_version>/extra/``.
* If you performed firmware updates, i.e. you didn't pass ``--without-fw-update`` to option ``-m`` of ``mlnxofed_ib_install``, **reboot the machine on all distros**
Issue the ``ibv_devinfo`` command to get the InfiniBand adapter information ::
After steps above, you can login target node and find the Mellanox IB drivers are located under ``/lib/modules/<kernel_version>/extra/``.
# ibv_devinfo
hca_id: mlx5_0
transport: InfiniBand (0)
fw_ver: 10.14.2036
node_guid: f452:1403:0076:10e0
sys_image_guid: f452:1403:0076:10e0
vendor_id: 0x02c9
vendor_part_id: 4113
hw_ver: 0x0
board_id: IBM1210111019
phys_port_cnt: 2
Device ports:
port: 1
state: PORT_INIT (2)
max_mtu: 4096 (5)
active_mtu: 4096 (5)
sm_lid: 0
port_lid: 65535
port_lmc: 0x00
link_layer: InfiniBand
port: 2
state: PORT_DOWN (1)
max_mtu: 4096 (5)
active_mtu: 4096 (5)
sm_lid: 0
port_lid: 65535
port_lmc: 0x00
link_layer: InfiniBand
Use ``service openibd status`` to verify that ``openibd`` works well. Below is the output on rhels7.2. ::
# service openibd status
HCA driver loaded
Configured IPoIB devices:
ib0 ib1
Currently active IPoIB devices:
Configured Mellanox EN devices:
Currently active Mellanox devices:
The following OFED modules are loaded:
rdma_ucm
rdma_cm
ib_addr
ib_ipoib
mlx4_core
mlx4_ib
mlx4_en
mlx5_core
mlx5_ib
ib_uverbs
ib_umad
ib_ucm
ib_sa
ib_cm
ib_mad
ib_core
Issue the ``ibstat`` command to get the IB adapter information ::
[root@server ~]# ibstat
CA 'mlx4_0'
CA type: MT4099
Number of ports: 2
Firmware version: 2.11.500
Hardware version: 0
Node GUID: 0x5cf3fc000004ec02
System image GUID: 0x5cf3fc000004ec05
Port 1:
State: Initializing
Physical state: LinkUp
Rate: 40 (FDR10)
Base lid: 0
LMC: 0
SM lid: 0
Capability mask: 0x02594868
Port GUID: 0x5cf3fc000004ec03
Link layer: InfiniBand
Port 2:
State: Down
Physical state: Disabled
Rate: 10
Base lid: 0
LMC: 0
SM lid: 0
Capability mask: 0x02594868
Port GUID: 0x5cf3fc000004ec04
Link layer: InfiniBand


@ -1,128 +1,169 @@
Configuration for Diskless Installation
=======================================
1. Specify dependence package **[required for RHEL and SLES]**
1. Specify dependency package
a) Copy a correct pkglist file **shipped by xCAT** according your environment to the ``/install/custom/netboot/<ostype>/`` directory ::
Some dependencies need to be installed before running the Mellanox scripts, and they differ among scenarios. xCAT can install these dependency packages by adding the package names to the file specified by the ``pkglist`` attribute of the ``osimage`` definition. Please refer to :doc:`Add Additional Software Packages </guides/admin-guides/manage_clusters/ppc64le/diskful/customize_image/additional_pkg>` for more information::
cp /opt/xcat/share/xcat/netboot/<ostype>/compute.<osver>.<arch>.pkglist \
/install/custom/netboot/<ostype>/compute.<osver>.<arch>.pkglist
# lsdef -t osimage <osver>-<arch>-netboot-compute
Object name: <osver>-<arch>-netboot-compute
imagetype=linux
....
pkgdir=/<os packages directory>
pkglist=/<os packages list directory>/compute.<os>.<arch>.pkglist
....
b) Edit ``/install/custom/netboot/<ostype>/<profile>.pkglist`` and add ``#INCLUDE:/opt/xcat/share/xcat/ib/netboot/<ostype>/ib.<osver>.<arch>.pkglist#``
You can append the InfiniBand dependency package list directly to the end of ``/<os packages list directory>/compute.<os>.<arch>.pkglist``, like below: ::
For example, on RHEL 6.4 (x86_64): ::
#cat /<os packages list directory>/compute.<os>.<arch>.pkglist
bash
nfs-utils
openssl
dhclient
kernel
.....
cp /opt/xcat/share/xcat/netboot/rh/compute.rhels6.x86_64.pkglist \
/install/custom/netboot/rh/compute.rhels6.x86_64.pkglist
Edit ``/install/custom/netboot/rh/compute.rhels6.x86_64.pkglist`` and add ``#INCLUDE:/opt/xcat/share/xcat/ib/netboot/rh/ib.rhels6.x86_64.pkglist#``
Then ``/install/custom/netboot/rh/compute.rhels6.x86_64.pkglist`` looks like below ::
#ib part
createrepo
kernel-devel
kernel-source
....
Or, if you want to keep the InfiniBand dependency package list in a separate file, you can include that file in ``/<os packages list directory>/compute.<os>.<arch>.pkglist``, like below: ::
#cat /<os packages list directory>/compute.<os>.<arch>.pkglist
bash
nfs-utils
openssl
dhclient
kernel
.....
#INCLUDE:/<ib pkglist path>/<you ib pkglist file>#
xCAT ships some InfiniBand pkglist files under ``/opt/xcat/share/xcat/ib/netboot/<ostype>/``; these pkglist files have been verified in specific scenarios. Please refer to :doc:`The Scenarios Have Been Verified </advanced/networks/infiniband/mlnxofed_ib_verified_scenario_matrix>` to judge whether you can use one directly in your environment. If so, you can use it like below: ::
#cat /<os packages list directory>/compute.<os>.<arch>.pkglist
bash
nfs-utils
openssl
dhclient
kernel
.....
#INCLUDE:/opt/xcat/share/xcat/ib/netboot/<ostype>/ib.<os>.<arch>.pkglist#
#INCLUDE:/opt/xcat/share/xcat/ib/netboot/rh/ib.rhels6.x86_64.pkglist#
bash
nfs-utils
openssl
dhclient
.....
2. Prepare postinstall scripts
a) Specify a correct postinstall script **shipped by xCAT** ::
mkdir -p /install/custom/netboot/<ostype>/
cp /opt/xcat/share/xcat/netboot/<ostype>/<profile>.postinstall \
/install/custom/netboot/<ostype>/
chmod +x /install/custom/netboot/<ostype>/<profile>.postinstall
Edit the ``postinstall`` script to trigger InfiniBand driver installation during ``genimage``. Use the command below to find out where the ``postinstall`` script is defined. ::
# lsdef -t osimage <os>-<arch>-netboot-compute
Object name: <os>-<arch>-netboot-compute
....
postinstall=/<postinstall script path>/compute.<os>.<arch>.postinstall
....
Take RHEL 6.4 on x86_64 for example ::
mkdir -p /install/custom/netboot/rh/
cp /opt/xcat/share/xcat/netboot/rh/compute.rhels6.x86_64.postinstall \
/install/custom/netboot/rh/
chmod +x /install/custom/netboot/rh/compute.rhels6.x86_64.postinstall
b) Edit ``/install/custom/netboot/<ostype>/<profile>.postinstall`` and add below line in the end ::
Edit ``/<postinstall script path>/compute.<os>.<arch>.postinstall`` and add the line below at the end. ::
/install/postscripts/mlnxofed_ib_install \
-p /install/<path>/<MLNX_OFED_LINUX.iso> -i $1 -n genimage
**[Note]** If you want to customized kernel version (i.e the kernel version of the diskless image you want to generate is different with the kernel version of you management node), you need to pass ``--add-kernel-support`` attribute to Mellanox. the line added into ``<profile>.postinstall`` should like below ::
**[Note]** The Mellanox OFED ISO is built against a set of specific kernel versions. If the version of the Linux kernel you are using does not match any of the Mellanox pre-built kernel modules, you can pass the ``--add-kernel-support`` command line argument to the Mellanox OFED installation script to build these kernel modules based on the version of the Linux kernel you are using. The line added to ``<profile>.postinstall`` should look like below. ::
/install/postscripts/mlnxofed_ib_install \
-p /install/<path>/<MLNX_OFED_LINUX.iso> -m --add-kernel-support -end- -i $1 -n genimage
-p /install/<subpath>/<MLNX_OFED_LINUX.iso> -m --add-kernel-support -end- -i $1 -n genimage
The steps below may help you judge whether you are in this situation.
Get the kernel version of your management node ::
uname -r
Get the kernel version of the target image. Take generating a diskless image of rhels7.0 on x86_64 for example ::
[root@server]# lsdef -t osimage rhels7.0-x86_64-install-compute -i pkgdir
Object name: rhels7.0-x86_64-install-compute
pkgdir=/install/rhels7.0/x86_64
[root@server]# ls -l /install/rhels7.0/x86_64/Packages/ |grep kernel*
.......
-r--r--r-- 1 root root 30264588 May 5 2014 kernel-3.10.0-123.el7.x86_64.rpm
.......
3. Set the related osimage to use the customized pkglist and ``compute.postinstall``
Take rhels7.2 on ppc64le for example: ::
* [RHEL/SLES] ::
#lsdef -t osimage rhels7.2-ppc64le-netboot-compute
Object name: rhels7.2-ppc64le-netboot-compute
exlist=/opt/xcat/share/xcat/netboot/rh/compute.rhels7.ppc64le.exlist
imagetype=linux
osarch=ppc64le
osdistroname=rhels7.2-ppc64le
osname=Linux
osvers=rhels7.2
otherpkgdir=/install/post/otherpkgs/rhels7.2/ppc64le
permission=755
pkgdir=/install/rhels7.2/ppc64le
pkglist=/install/custom/netboot/rh/compute.rhels7.ppc64le.pkglist
postinstall=/install/custom/netboot/rh/compute.rhels7.ppc64le.ib.postinstall
profile=compute
provmethod=netboot
rootimgdir=/install/netboot/rhels7.2/ppc64le/compute
chdef -t osimage -o <osver>-<arch>-netboot-compute \
pkglist=/install/custom/netboot/<ostype>/compute.<osver>.<arch>.pkglist \
postinstall=/install/custom/netboot/<ostype>/<profile>.postinstall
* [Ubuntu] ::
**[Note]**: If the osimage definition was generated by the xCAT command ``copycds``, the default value ``/opt/xcat/share/xcat/netboot/rh/compute.rhels7.ppc64le.pkglist`` was assigned to the ``pkglist`` attribute. ``/opt/xcat/share/xcat/netboot/rh/compute.rhels7.ppc64le.pkglist`` is the sample pkglist shipped by xCAT; we recommend making a copy of this sample and using the copy in a real environment. In the above example, ``/install/custom/netboot/rh/compute.rhels7.ppc64le.pkglist`` is a copy of ``/opt/xcat/share/xcat/netboot/rh/compute.rhels7.ppc64le.pkglist``. For the same reason, ``/install/custom/netboot/rh/compute.rhels7.ppc64le.ib.postinstall`` is a copy of ``/opt/xcat/share/xcat/netboot/rh/compute.rhels7.ppc64le.postinstall``.
chdef -t osimage -o <osver>-<arch>-netboot-compute \
postinstall=/install/custom/netboot/<ostype>/<profile>.postinstall
``compute.rhels7.ppc64le.pkglist`` looks like below: ::
4. Generate and package image for diskless installation ::
# cat /install/custom/netboot/rh/compute.rhels7.ppc64le.pkglist
bash
nfs-utils
openssl
dhclient
bc
......
lsvpd
irqbalance
procps-ng
parted
net-tools
#INCLUDE:/opt/xcat/share/xcat/ib/netboot/rh/ib.rhels7.ppc64le.pkglist#
``compute.rhels7.ppc64le.ib.postinstall`` looks like below: ::
# cat /install/custom/netboot/rh/compute.rhels7.ppc64le.ib.postinstall
#!/bin/sh
#-- Do not remove following line if you want to make use of CVS version tracking
.....
# [ -r $workdir/$profile.$ext ] && cat $workdir/$profile.$ext | grep -E '^[[:space:]]*#.*[[:space:]]\$Id' >> $installroot/etc/IMGVERSION
#done
/install/postscripts/mlnxofed_ib_install -p /install/ofed/MLNX_OFED_LINUX-3.2-2.0.0.0-rhel7.2-ppc64le.iso -i $1 -n genimage
3. Generate and package image for diskless installation ::
genimage <osver>-<arch>-netboot-compute
packimage <osver>-<arch>-netboot-compute
5. Install node ::
4. Install node ::
nodeset <nodename> osimage=<osver>-<arch>-netboot-compute
rsetboot <nodename> net
rpower <nodename> reset
After installation, you can log in to the target node and issue the ``ibstat`` command to verify whether your IB driver works well. If everything is fine, you will get the IB adapter information ::
[root@server ~]# ibstat
CA 'mlx4_0'
CA type: MT4099
Number of ports: 2
Firmware version: 2.11.500
Hardware version: 0
Node GUID: 0x5cf3fc000004ec02
System image GUID: 0x5cf3fc000004ec05
Port 1:
State: Initializing
Physical state: LinkUp
Rate: 40 (FDR10)
Base lid: 0
LMC: 0
SM lid: 0
Capability mask: 0x02594868
Port GUID: 0x5cf3fc000004ec03
Link layer: InfiniBand
Port 2:
State: Down
Physical state: Disabled
Rate: 10
Base lid: 0
LMC: 0
SM lid: 0
Capability mask: 0x02594868
Port GUID: 0x5cf3fc000004ec04
Link layer: InfiniBand
After installation, you can log in to the target node and issue the ``ibv_devinfo`` command to verify whether your InfiniBand driver works well. If everything is fine, you will get the InfiniBand adapter information. ::
# ibv_devinfo
hca_id: mlx5_0
transport: InfiniBand (0)
fw_ver: 10.14.2036
node_guid: f452:1403:0076:10e0
sys_image_guid: f452:1403:0076:10e0
vendor_id: 0x02c9
vendor_part_id: 4113
hw_ver: 0x0
board_id: IBM1210111019
phys_port_cnt: 2
Device ports:
port: 1
state: PORT_INIT (2)
max_mtu: 4096 (5)
active_mtu: 4096 (5)
sm_lid: 0
port_lid: 65535
port_lmc: 0x00
link_layer: InfiniBand
port: 2
state: PORT_DOWN (1)
max_mtu: 4096 (5)
active_mtu: 4096 (5)
sm_lid: 0
port_lid: 65535
port_lmc: 0x00
link_layer: InfiniBand


@ -1,12 +1,22 @@
Preparation
===========
Obtain the Mellanox OFED ISO file from `Mellanox official site <http://www.mellanox.com/page/products_dyn?product_family=26&mtag=linux_sw_drivers>`_ and put it into one place under ``/install`` directory depending on your need.
Obtain the Mellanox OFED ISO
----------------------------
Obtain the Mellanox OFED ISO file from `Mellanox official Download Page <http://www.mellanox.com/page/products_dyn?product_family=26&mtag=linux_sw_drivers>`_ and put it into one place under ``/install`` directory depending on your need.
**[NOTE]**
* Mellanox provides OFED drivers in **tarball** and **iso** formats. xCAT only supports the **iso** format at this time.
* Mellanox provides different OFED ISOs depending on operating system and machine architecture, named like MLNX_OFED_LINUX-<packver1>-<packver2>-<osver>-<arch>.iso, you should download correct one according your environment.
* Mellanox provides OFED drivers in **tarball** and **ISO image** formats. xCAT only supports the **iso** format at this time.
* Mellanox provides different OFED ISOs depending on operating system and machine architecture, named like MLNX_OFED_LINUX-<packver1>-<packver2>-<osver>-<arch>.iso; you should download the correct one according to your environment.
* Mellanox publishes updates and known issues for each OFED release; please read `InfiniBand/VPI Software Overview <http://www.mellanox.com/page/software_overview_ib>`_ for this information.
* The Mellanox links above may become outdated as Mellanox updates its web pages; xCAT will keep them synchronized. If we don't update them in time, please go to the `Mellanox web portal <http://www.mellanox.com>`_ and find the ``Support/Education`` then ``InfiniBand/VPI Drivers`` labels.
Prepare Install Script
----------------------
**mlnxofed_ib_install.v2** is a sample script whose framework can help you install the Mellanox drivers easily. In specific scenarios, however, some details need to be modified to meet your requirements, such as the dependency package list. It has been verified in limited scenarios and can work as a solution in those scenarios. For information about these scenarios please refer to :doc:`The Scenarios Have Been Verified </advanced/networks/infiniband/mlnxofed_ib_verified_scenario_matrix>`.
Copy **mlnxofed_ib_install.v2** into ``/install/postscripts`` and change name to **mlnxofed_ib_install** ::
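The copy described above can be sketched as below; making the script executable is an extra precaution on our part, not a step stated in this document: ::

   cp /opt/xcat/share/xcat/ib/scripts/Mellanox/mlnxofed_ib_install.v2 \
      /install/postscripts/mlnxofed_ib_install
   chmod +x /install/postscripts/mlnxofed_ib_install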
@ -27,7 +37,10 @@ In general you can use ``mlnxofed_ib_install`` like below ::
mlnxofed_ib_install -p /install/<path>/<MLNX_OFED_LINUX.iso>
If need to pass ``--without-32bit --without-fw-update --add-kernel-support --force`` to ``mlnxofedinstall``, refer to below command ::
If you need to pass ``--without-32bit --without-fw-update --add-kernel-support --force`` to ``mlnxofedinstall``, refer to the command below. ::
mlnxofed_ib_install -p /install/<path>/<MLNX_OFED_LINUX.iso> \
-m --without-32bit --without-fw-update --add-kernel-support --force -end-
**[Note]** We recommend updating your firmware to the version Mellanox supports in its release notes, to avoid unexpected problems when you install the InfiniBand driver.


@ -7,4 +7,6 @@ Using mlnxofed_ib_install.v2 (Recommend)
mlnxofed_ib_install_v2_preparation.rst
mlnxofed_ib_install_v2_diskful.rst
mlnxofed_ib_install_v2_diskless.rst
mlnxofed_ib_verified_scenario_matrix.rst
mlnxofed_ib_known_issue.rst


@ -0,0 +1,19 @@
Known Issues
============
Known Issue 1
-------------
After you successfully install the Mellanox drivers on rhels7.2 with xCAT, you may later need to upgrade your operating system to a higher version. In that case you may find that the IB adapter drivers shipped with the operating system are newer than the Mellanox drivers you installed, which means the Mellanox drivers will be replaced by the drivers shipped with the operating system. If that is not the result you expect and you want to keep the Mellanox drivers after the operating system upgrade, please add the statement below to ``/etc/yum.conf`` on the target node after you install the Mellanox drivers successfully for the first time. ::
exclude=dapl* libib* ibacm infiniband* libmlx* librdma* opensm* ibutils*
Known Issue 2
-------------
If you want to use the ``--add-kernel-support`` attribute in the sles12.1 ppc64le scenario, you will find that some dependency packages, such as ``python-devel``, are not shipped on the SLES Server DVDs; they are shipped on the SDK DVDs. xCAT doesn't ship a specific pkglist to support this scenario. If you have this requirement, please use the ``otherpkglist`` and ``otherpkgs`` attributes to prepare the dependency package repository ahead of time. If you need help with the ``otherpkglist`` and ``otherpkgs`` attributes, please refer to :doc:`Add Additional Software Packages </guides/admin-guides/manage_clusters/ppc64le/diskful/customize_image/additional_pkg>`.
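A minimal sketch of that preparation, assuming the needed SDK packages have already been copied into a local repository directory (all paths and file names below are illustrative, not shipped by xCAT): ::

   # pkglist naming the extra dependencies, relative to otherpkgdir
   echo "sdk/python-devel" > /install/custom/netboot/sles/ib.sles12.sdk.otherpkgs.pkglist

   # point the osimage at the repository and the pkglist
   chdef -t osimage -o sles12.1-ppc64le-netboot-compute \
       otherpkgdir=/install/post/otherpkgs/sles12.1/ppc64le \
       otherpkglist=/install/custom/netboot/sles/ib.sles12.sdk.otherpkgs.pkglist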


@ -0,0 +1,15 @@
The Scenarios Have Been Verified
=================================
+---------------+---------+---------------------------------------------------+------------------------------------------------------------------+---------------------------+
| OS version | Arch | Ofed version | Attribute supported by mlnx | IB.pkglist |
+===============+=========+===================================================+==================================================================+===========================+
| rhels7.1 | ppc64 | MLNX_OFED_LINUX-3.2-2.0.0.0-rhel7.1-ppc64.iso |--without-32bit --without-fw-update --add-kernel-support --force | ib.rhels7.ppc64.pkglist |
+---------------+---------+---------------------------------------------------+------------------------------------------------------------------+---------------------------+
| rhels7.2 | ppc64le | MLNX_OFED_LINUX-3.2-2.0.0.0-rhel7.2-ppc64le.iso |--without-32bit --without-fw-update --add-kernel-support --force | ib.rhels7.ppc64le.pkglist |
+---------------+---------+---------------------------------------------------+------------------------------------------------------------------+---------------------------+
| sles12.1 | ppc64le |MLNX_OFED_LINUX-3.2-2.0.0.0-sles12sp1-ppc64le.iso |--without-32bit --without-fw-update --force | ib.sles12.ppc64le.pkglist |
+---------------+---------+---------------------------------------------------+------------------------------------------------------------------+---------------------------+
| ubuntu14.04.3 | ppc64le |MLNX_OFED_LINUX-3.2-2.0.0.0-ubuntu14.04-ppc64le.iso|--without-32bit --without-fw-update --add-kernel-support --force |ib.ubuntu14.ppc64le.pkglist|
+---------------+---------+---------------------------------------------------+------------------------------------------------------------------+---------------------------+


@ -1,14 +1,10 @@
Manage Clusters
===============
This chapter introduces the procedures of how to manage a real cluster. Basically, it includes the following parts:
The following provides detailed information to help you start managing your cluster using xCAT.
* Discover and Define Nodes
* Deploy/Configure OS for the Nodes
* Install/Configure Applications for the Nodes
* General System Management Work for the Nodes
The sections are organized based on hardware architecture.
You should select the proper sub-chapter according to the hardware type of your cluster. If you have a mixed cluster with multiple types of hardware, refer to the corresponding sub-chapters accordingly.
.. toctree::
:maxdepth: 2


@ -0,0 +1,11 @@
Configure xCAT
==============
After installing xCAT onto the management node, configure some basic attributes for your cluster into xCAT.
.. toctree::
:maxdepth: 2
site.rst
networks.rst
password.rst


@ -0,0 +1,46 @@
Set attributes in the ``networks`` table
========================================
#. Display the network settings defined in the xCAT ``networks`` table using: ``tabdump networks`` ::
#netname,net,mask,mgtifname,gateway,dhcpserver,tftpserver,nameservers,ntpservers,logservers,
dynamicrange,staticrange,staticrangeincrement,nodehostname,ddnsdomain,vlanid,domain,
comments,disable
"10_0_0_0-255_0_0_0","10.0.0.0","255.0.0.0","eth0","10.0.0.101",,"10.4.27.5",,,,,,,,,,,,
A default network is created for the detected primary network using the same netmask and gateway. There may be additional network entries in the table for each network present on the management node where xCAT is installed.
#. To define additional networks, use one of the following options:
* Use ``mkdef`` to create/update an entry into ``networks`` table. (**Recommended**)
To create a network entry for 192.168.X.X/16 with a gateway of 192.168.1.254: ::
mkdef -t network -o net1 net=192.168.0.0 mask=255.255.0.0 gateway=192.168.1.254
* Use the ``tabedit`` command to modify the networks table directly in an editor: ``tabedit networks``
* Use the ``makenetworks`` command to automatically generate an entry in the ``networks`` table
#. Verify the network statements
**Domain** and **nameserver** attributes must be configured in the ``networks`` table or in the ``site`` table for xCAT to function properly.
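For example, they can be set cluster-wide in the ``site`` table (the values below are placeholders): ::

   chdef -t site domain=cluster.example.com nameservers=10.0.0.101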
Initialize DHCP services
------------------------
#. Configure DHCP to listen on different network interfaces (**Optional**)
xCAT allows specifying different network interfaces that DHCP can listen on for different nodes or node groups. If this is not needed, go to the next step. To set ``dhcpinterfaces`` ::
chdef -t site dhcpinterfaces='xcatmn|eth1,eth2;service|bond0'
For more information, see ``dhcpinterfaces`` keyword in the :doc:`site </guides/admin-guides/references/man5/site.5>` table.
#. Create a new DHCP configuration file with the networks defined using the ``makedhcp`` command. ::
makedhcp -n


@ -0,0 +1,59 @@
Configure passwords
===================
#. Configure the system password for the ``root`` user on the compute nodes.
* Set using the :doc:`chtab </guides/admin-guides/references/man8/chtab.8>` command: (**Recommended**) ::
chtab key=system passwd.username=root passwd.password=abc123
To encrypt the password using ``openssl``, use the following command: ::
chtab key=system passwd.username=root passwd.password=`openssl passwd -1 abc123`
* Directly edit the passwd table using the :doc:`tabedit </guides/admin-guides/references/man8/tabedit.8>` command.
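The ``openssl`` form above can be tried standalone. A minimal sketch (the password ``abc123`` is just an example; the salt is generated randomly, so the hash differs on every run, but it always begins with ``$1$``):

```shell
# Generate an MD5-crypt hash of the form $1$<salt>$<hash>
openssl passwd -1 abc123
```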
#. Configure the passwords for Management modules of the compute nodes.
* For IPMI/BMC managed systems: ::
chtab key=ipmi passwd.username=USERID passwd.password=PASSW0RD
* For HMC managed systems: ::
chtab key=hmc passwd.username=hscroot passwd.password=abc123
The username and password for the HMC can be assigned directly to the HMC node object definition in xCAT. This is needed when the HMC username/password is different for each HMC. ::
mkdef -t node -o hmc1 groups=hmc,all nodetype=ppc hwtype=hmc mgt=hmc \
username=hscroot password=hmcPassw0rd
* For Blade managed systems: ::
chtab key=blade passwd.username=USERID passwd.password=PASSW0RD
* For FSP/BPA (Flexible Service Processor/Bulk Power Assembly), if the passwords are set to the factory defaults, you must change them before running any commands against them. ::
rspconfig frame general_passwd=general,<newpassword>
rspconfig frame admin_passwd=admin,<newpassword>
rspconfig frame HMC_passwd=,<newpassword>
#. If the REST API is being used, configure a user and set a policy rule in xCAT.
#. Create a non-root user that will be used to make the REST API calls. ::
useradd xcatws
passwd xcatws # set the password
#. Create an entry for the user into the xCAT ``passwd`` table. ::
chtab key=xcat passwd.username=xcatws passwd.password=<xcatws_password>
#. Set a policy in the xCAT ``policy`` table to allow the user to make calls against xCAT. ::
mkdef -t policy 6 name=xcatws rule=allow
When making calls to the xCAT REST API, pass in the credentials using the following attributes: ``userName`` and ``userPW``
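As an illustration only (this helper is not part of xCAT), a REST client can carry these credentials as query parameters. A minimal sketch, assuming the web service runs at ``https://<mn>/xcatws``:

```python
import urllib.parse

def xcat_url(base, resource, username, password):
    """Build an xCAT REST API URL carrying userName/userPW query parameters."""
    query = urllib.parse.urlencode({"userName": username, "userPW": password})
    return "{}/{}?{}".format(base.rstrip("/"), resource.lstrip("/"), query)

print(xcat_url("https://mgmtnode/xcatws", "nodes", "xcatws", "secret"))
# https://mgmtnode/xcatws/nodes?userName=xcatws&userPW=secret
```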

View File

@ -0,0 +1,36 @@
Set attributes in the ``site`` table
====================================
#. Verify the following attributes have been correctly set in the xCAT ``site`` table.
* domain
* forwarders
* master [#]_
* nameservers
For more information on the keywords, see the DHCP ATTRIBUTES in the :doc:`site </guides/admin-guides/references/man5/site.5>` table.
If the fields are not set or need to be changed, use the xCAT ``chdef`` command: ::
chdef -t site domain="domain_string"
chdef -t site forwarders="forwarders"
chdef -t site master="xcat_master_ip"
chdef -t site nameservers="nameserver1,nameserver2,etc"
.. [#] The value of the ``master`` attribute in the site table should be set as the IP address of the management node responsible for the compute node.
Initialize DNS services
-----------------------
#. Initialize the DNS [#]_ services on the xCAT Management Node: ::
makedns -n
Verify DNS is working by running ``nslookup`` against your Management Node: ::
nslookup <management_node_hostname>
For more information on DNS, refer to :ref:`dns_label`
.. [#] Setting up name resolution and the ability to have hostname resolved to IP addresses is **required** for xCAT.

View File

@ -1,251 +0,0 @@
Configure xCAT
==============
After you have installed the xCAT packages on the management node, you must configure the management node before you can use xCAT normally. This document introduces how to configure the environment.
Here is a summary of the steps required for the xCAT management node.
::
1.Check Site Table
2.Check Networks
3.Configure Password Table
4.Initialize DHCP
Check Site Table
----------------
After xCAT is installed, the site table should be checked. Verify the following attributes and make sure they are correctly set. ::
domain: The DNS domain name (e.g. cluster.com).
nameservers: A comma delimited list of DNS servers that each node in this network should use. This value will end up in the nameserver settings of the /etc/resolv.conf on each node in this network. If this attribute value is set to the IP address of an xCAT node, make sure DNS is running on it. In a hierarchical cluster, you can also set this attribute to "<xcatmaster>" to mean the DNS server for each node in this network should be the node that is managing it (either its service node or the management node). Used in creating the DHCP network definition, and DNS configuration.
forwarders: The DNS servers at your site that can provide names outside of the cluster. The makedns command will configure the DNS on the management node to forward requests it does not know to these servers. Note that the DNS servers on the service nodes will ignore this value and always be configured to forward requests to the management node.
master: The hostname of the xCAT management node, as known by the nodes.
1. Before the xCAT build is installed, the management hostname and domain name should be configured in the DNS configuration file **/etc/resolv.conf**; after xCAT is installed, the nameservers, master, domain and forwarders will be set correctly in the site table.
1.1. Before installing xCAT:
* Modify **resolv.conf** file like example1:
::
cat /etc/resolv.conf
; generated by /usr/sbin/dhclient-script
search <xcat_dom>
nameserver <Management_Node_Ip>
nameserver <Forwarder_ip>
* Configure the hostname so that running ``hostname`` returns the machine hostname, as in example 2:
::
mn:~ # hostname
mn
* Configure the domain name so that running ``hostname -d`` returns the domain name, as in example 3:
::
[root@mn ~]# hostname -d
pok.stglabs.ibm.com
1.2 After xCAT is installed:
* Use ``tabdump site`` to check the ``site`` table; the output will look like example 4:
::
"domain","<xcat_dom>",,
"forwarders","<Forwarder_ip>",,
"master","<Management_Node_Ip>",,
"nameservers","<Management_Node_Ip>",,
2. If the settings in item 1 above were not configured before xCAT was installed, the output of ``tabdump site`` will be as follows: ::
"domain"," ",,
"forwarders",,,
"master","NORESOLUTION",,
"nameservers","NORESOLUTION"
* In this situation, configure the **/etc/resolv.conf** file according to example 1. Then use the ``chdef`` command (e.g. ``chdef -t site master=<management_node_ip>``) or ``tabedit site`` to configure the site table according to example 4.
3. After the site table is configured
* Please initialize DNS using: ::
makedns -n
* Verify DNS works using: ::
nslookup <Mangement_Node_Hostname>
* It returns the management node hostname and resolved IP. Here is an example: ::
c910f04x27v05:~ # nslookup c910f04x27v05
Server: 10.4.27.5
Address: 10.4.27.5#53
Name: c910f04x27v05.pok.stglabs.ibm.com
Address: 10.4.27.5
**Note**:
#. The value of the master attribute in the site table can be set to either the management node IP or the service node IP.
#. Setting up name resolution and having node hostnames resolve to IP addresses is required in xCAT clusters.
#. Set site.forwarders to your site-wide DNS servers that can resolve site or public hostnames. The DNS on the MN will forward any requests it can't answer to these servers.
#. For more dns explanation please refer to :ref:`dns_label`
Check Networks
--------------
Check the networks table: ::
tabdump networks
The output is as follows: ::
#netname,net,mask,mgtifname,gateway,dhcpserver,tftpserver,nameservers,ntpservers,logservers,dynamicrange,staticrange,staticrangeincrement,nodehostname,ddnsdomain,vlanid,domain,comments,disable
"10_0_0_0-255_0_0_0","10.0.0.0","255.0.0.0","eth0","10.0.0.103",,"10.4.27.5",,,,,,,,,,,,
**Note**: The networks table is populated after xCAT is installed, using the default net, mask and gateway.
1. If the cluster-facing NICs were not configured when xCAT was installed, or if there are more networks in the cluster that are only available via the service nodes or compute nodes, use the options below to create network definitions (e.g. 50.3.5.5).
1.1 (Optional) How to configure the networks table:
* Use ``mkdef`` to update the networks table. ::
mkdef -t network -o net1 net=9.114.0.0 mask=255.255.255.224 gateway=9.114.113.254
net The network address.
mask The network mask.
gateway The network gateway.
* Or use ``tabedit`` to modify the networks table. ::
tabedit networks
* Or use the ``makenetworks`` command to automatically generate networks table entries. ::
makenetworks
1.2. Verify the networks table looks similar to:
::
# tabdump networks
#netname,net,mask,mgtifname,gateway,dhcpserver,tftpserver,nameservers,ntpservers,logservers,dynamicrange,nodehostname,comments,disable
"50_0_0_0-255_0_0_0","50.0.0.0","255.0.0.0","eth1","<xcatmaster>",,"50.3.5.5",,,,,,,,,,,,
**Note**: Domain and nameservers values must be provided either in the network definition corresponding to the node or in the site definition.
Configure Password Table
-------------------------
The passwd table holds the password that will be assigned to root when the node is installed. You can modify this table using ``tabedit``. To change the default password for root on the nodes, change the system line. ::
tabedit passwd
#key,username,password,cryptmethod,comments,disable
"system","root","cluster",,,
"hmc","hscroot","ABC123",,,
Or ::
chtab key=system passwd.username=root passwd.password=cluster
**Note**:
#. Currently xCAT puts the root password on the node only during install. It is taken from the passwd table where key=system. The new subcluster support requires a unique password for each subcluster to be installed.
#. The xCAT database needs to contain the proper authentication working with hmc/blade/ipmi userid and password. Example for passwd set up:
::
chtab key=hmc passwd.username=hscroot passwd.password=abc123
or
chtab key=blade passwd.username=USERID passwd.password=PASSW0RD
or
chtab key=ipmi passwd.username=USERID passwd.password=PASSW0RD
#. (Optional) If the BPA passwords are still the factory defaults, you must change them before running any other commands against them.
::
rspconfig frame general_passwd=general,<newpd>
rspconfig frame admin_passwd=admin,<newpd>
rspconfig frame HMC_passwd=,<newpd>
#. (Optional) The username and password for xCAT to access an HMC can also be assigned directly to the HMC node object using the ``mkdef`` or ``chdef`` commands. This assignment is useful when a specific HMC has a username and/or password that is different from the default one specified in the passwd table. For example, to create an HMC node object and set a unique username or password for it:
::
mkdef -t node -o hmc1 groups=hmc,all nodetype=ppc hwtype=hmc mgt=hmc username=hscroot password=abc1234
Or to change it if the HMC definition already exists:
chdef -t node -o hmc1 username=hscroot password=abc1234
#. (Optional) The REST API calls need to provide a username and password. When a request is passed to xcatd, it will first verify that this user/password is in the xCAT passwd table, and then xcatd will look in the policy table to see if that user is allowed to do the requested operation.
* The account whose key is xcat will be used for the REST API authentication. The username and password should be passed in with the attributes. ::
userName: Pass the username of the account
userPW: Pass the password of the account
* Use a non-root account: create a new user and set up the password and policy rules. ::
useradd wsuser
passwd wsuser # set the password
tabch key=xcat,username=wsuser passwd.password=cluster
mkdef -t policy 6 name=wsuser rule=allow
* Use the root account: ::
tabch key=xcat,username=root passwd.password=<root-pw>
Initialize DHCP
---------------
Initialize DHCP service
~~~~~~~~~~~~~~~~~~~~~~~
Create a new dhcp configuration file with a network statement for each network the dhcp daemon should listen on. ::
makedhcp -n
(Optional)Setup the DHCP interfaces in site table
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To set up the site table dhcp interfaces for your System p cluster, identify the correct interfaces that xCAT should listen on for your management node and service nodes. ::
chdef -t site dhcpinterfaces='pmanagenode|eth1;service|eth0'
makedhcp -n
dhcpinterfaces: The network interfaces DHCP should listen on. If it is the same for all nodes, use a simple comma-separated list of NICs. To specify different NICs for different nodes, use the form xcatmn|eth1,eth2;service|bond0. In this example xcatmn is the name of the xCAT MN, and DHCP there should listen on eth1 and eth2. On all of the nodes in group 'service' DHCP should listen on the bond0 nic.
**Note**: To verify that makedhcp worked, check the nic, domain-name and domain-name-servers entries in dhcpd.conf, for example: ::
shared-network nic {
subnet 10.0.0.0 netmask 255.0.0.0 {
authoritative;
max-lease-time 43200;
min-lease-time 43200;
default-lease-time 43200;
option routers 10.2.1.12;
next-server 10.2.1.13;
option log-servers <Management_Node_Ip>;
option ntp-servers <Management_Node_Ip>;
option domain-name "<xcat_dom>";
option domain-name-servers <Management_Node_Ip>;
option domain-search "pok.stglabs.ibm.com";
zone pok.stglabs.ibm.com. {
primary 10.2.1.13; key xcat_key;
}

View File

@ -4,7 +4,6 @@ Configure xCAT
Configure network table
```````````````````````
Normally, there will be at least two entries, for the two subnets on the MN, in the ``networks`` table after xCAT is installed::
#tabdump networks
@ -50,18 +49,10 @@ For hardware management with ipmi, add the following line::
Verify the genesis packages
```````````````````````````
Genesis packages are used to **create the root image for network boot** and **MUST** be installed before doing hardware discovery.
The **xcat-genesis** packages should have been installed when xCAT was installed; hardware discovery will fail if they are missing. The **xcat-genesis** packages are required to create the genesis root image used for hardware discovery, and the genesis kernel sits in ``/tftpboot/xcat/``. Verify that the ``genesis-scripts`` and ``genesis-base`` packages are installed:
* **[RH]**::
* **[RHEL/SLES]**: ``rpm -qa | grep -i genesis``
# rpm -qa |grep -i genesis
xCAT-genesis-scripts-ppc64-2.10-snap201507240527.noarch
xCAT-genesis-base-ppc64-2.10-snap201505172314.noarch
* **[Ubuntu]**: ``dpkg -l | grep -i genesis``
* **[ubuntu]**::
# dpkg -l | grep genesis
ii xcat-genesis-base-ppc64 2.10-snap201505172314 all xCAT Genesis netboot image
ii xcat-genesis-scripts 2.10-snap201507240105 ppc64el xCAT genesis
**Note:** If the two packages are not installed, install them first and then run ``mknb ppc64`` to create the network boot root image.
If missing, install them from the ``xcat-dep`` repository and run ``mknb ppc64`` to create the genesis network boot root image.

View File

@ -1,11 +1,20 @@
Hardware Discovery & Define Node
================================
Having the servers defined as **Node Objects** in xCAT is the first step in cluster management.
In order to manage machines using xCAT, the machines need to be defined as xCAT ``node objects`` in the database. The :doc:`xCAT Objects </guides/admin-guides/basic_concepts/xcat_object/index>` documentation describes the process for manually creating ``node objects`` one by one using the xCAT ``mkdef`` command. This is valid when managing a small cluster but can be error prone and cumbersome when managing larger clusters.
The chapter :doc:`xCAT Object <../../../basic_concepts/xcat_object/index>` describes how to create a **Node Object** through the `mkdef` command. You can collect all the necessary information for the target servers and define them as **xCAT Node Objects** by manually running the `mkdef` command. This is doable for a small cluster with fewer than 10 servers, but it is error-prone and inefficient to manually configure the SP (like the BMC) and collect information for a large number of servers.
xCAT provides several *automatic hardware discovery* methods to assist with hardware discovery by helping to simplify the process of detecting service processors (SP) and collecting various server information. The following are methods that xCAT supports:
.. toctree::
:maxdepth: 2
mtms/index.rst
switch_discovery.rst
seq_discovery.rst
manually_define.rst
manually_discovery.rst
xCAT offers several powerful **Automatic Hardware Discovery** methods to simplify the procedure of SP configuration and server information collection. If your managed cluster has more than 10 servers, the automatic discovery is worth trying. If your cluster has more than 50 servers, the automatic discovery is recommended.
The following are the brief characteristics and applicability of each method; select the appropriate one according to your cluster size and other considerations.
@ -73,12 +82,3 @@ Following are the brief characteristics and adaptability of each method, you can
You have to boot the nodes strictly in order if you want each node to get the expected name. Generally you have to wait for the discovery process to finish before powering on the next one.
.. toctree::
:maxdepth: 2
manually_define.rst
mtms_discovery.rst
switch_discovery.rst
seq_discovery.rst
manually_discovery.rst

View File

@ -0,0 +1,14 @@
Discovery
=========
When the IPMI-based servers are connected to power, the BMCs will boot up and attempt to obtain an IP address from an open range dhcp server on your network. For xCAT managed networks, xCAT should be configured to serve an open range of dhcp IP addresses with the ``dynamicrange`` attribute in the networks table.
When a BMC has an IP address and is pingable from the xCAT management node, administrators can discover the BMC using xCAT's :doc:`bmcdiscover </guides/admin-guides/references/man1/bmcdiscover.1>` command and obtain basic information to start the hardware discovery process.
xCAT hardware discovery uses the xCAT genesis kernel (diskless) to discover additional attributes of the compute node and automatically populate the node definitions in xCAT.
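The ``dynamicrange`` attribute mentioned above can be set as follows. This is a sketch; the network object name ``net1`` and the address range are assumed values for illustration.

```shell
# Illustrative open dhcp range for BMC discovery on network object "net1"
chdef -t network -o net1 dynamicrange="172.30.0.100-172.30.0.200"
makedhcp -n    # regenerate the dhcp configuration with the new range
```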
.. toctree::
:maxdepth: 2
discovery_using_defined.rst
discovery_using_dhcp.rst

View File

@ -0,0 +1,144 @@
Set static BMC IP using different IP address (recommended)
==========================================================
The following example outlines the MTMS based hardware discovery for a single IPMI-based compute node.
+------------------------------+------------+
| Compute Node Information | Value |
+==============================+============+
| Model Type | 8247-22l |
+------------------------------+------------+
| Serial Number | 10112CA |
+------------------------------+------------+
| Hostname | cn01 |
+------------------------------+------------+
| IP address | 10.1.2.1 |
+------------------------------+------------+
The BMC IP address is obtained from the open range dhcp server, and the plan in this scenario is to change the BMC IP address to a static IP address in a different subnet than the open range addresses. The static IP address in this example stays within the same 172.x.x.x network as the open range to simplify the networking configuration on the xCAT management node.
+------------------------------+------------+
| BMC Information | Value |
+==============================+============+
| IP address - dhcp | 172.30.0.1 |
+------------------------------+------------+
| IP address - static | 172.20.2.1 |
+------------------------------+------------+
#. Detect the BMCs and add the node definitions into xCAT.
Use the ``bmcdiscover`` command to discover the BMCs responding over an IP range and automatically write the output into the xCAT database. You **must** use the ``-t`` option to indicate that the node type is bmc and the ``-w`` option to automatically write the output into the xCAT database.
To discover the BMC with an IP address of 172.30.0.1, use the command: ::
bmcdiscover --range 172.30.0.1 -t -z -w
The discovered nodes will be written to xCAT database: ::
# lsdef node-8247-22l-10112ca
Object name: node-8247-22l-10112ca
bmc=172.30.0.1
cons=ipmi
groups=all
hwtype=bmc
mgt=ipmi
mtm=8247-22L
nodetype=mp
postbootscripts=otherpkgs
postscripts=syslog,remoteshell,syncfiles
serial=10112CA
#. **Pre-define** the compute nodes:
Use the ``bmcdiscover`` command to help discover the nodes over an IP range and easily create a starting file to define the compute nodes into xCAT.
To discover the compute nodes for the BMCs with an IP address of 172.30.0.1, use the command: ::
bmcdiscover --range 172.30.0.1 -z > predefined.stanzas
The discovered nodes have the naming convention: node-<*model-type*>-<*serial-number*> ::
# cat predefined.stanzas
node-8247-22l-10112ca:
objtype=node
groups=all
bmc=172.30.0.1
cons=ipmi
mgt=ipmi
mtm=8247-22L
serial=10112CA
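The default object names follow that convention. As a small illustration (not xCAT's actual code), they can be derived by lowercasing the machine type/model and serial:

```python
def default_node_name(mtm, serial):
    """Mimic the bmcdiscover naming: node-<model-type>-<serial-number>."""
    return "node-{}-{}".format(mtm.lower(), serial.lower())

print(default_node_name("8247-22L", "10112CA"))  # node-8247-22l-10112ca
```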
#. Edit the ``predefined.stanzas`` file and change the discovered nodes to the intended ``hostname`` and ``IP address``.
#. Edit the ``predefined.stanzas`` file: ::
vi predefined.stanzas
#. Rename the discovered object names to their intended compute node hostnames based on the MTMS mapping: ::
node-8247-22l-10112ca ==> cn01
#. Add an ``ip`` attribute and set it to the compute node IP address: ::
ip=10.1.2.1
#. Repeat for additional nodes in the ``predefined.stanzas`` file based on the MTMS mapping.
In this example, the ``predefined.stanzas`` file now looks like the following: ::
# cat predefined.stanzas
cn01:
objtype=node
groups=all
bmc=172.30.0.1
cons=ipmi
mgt=ipmi
mtm=8247-22L
serial=10112CA
ip=10.1.2.1
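The manual editing above can also be scripted. The following is an illustrative sketch only; it assumes the stanza layout shown above, with attribute lines indented and the ``serial=`` line last in each stanza.

```python
def rewrite_stanzas(text, mapping):
    """Rename discovered stanza headers and append an ip= attribute.

    mapping: discovered-name -> (intended-hostname, ip-address)
    """
    out = []
    current = None  # (hostname, ip) for the stanza being rewritten
    for line in text.splitlines():
        if line.endswith(":") and not line.startswith(" "):
            name = line[:-1]
            if name in mapping:
                current = mapping[name]
                out.append(current[0] + ":")  # renamed header, e.g. cn01:
                continue
            current = None
        out.append(line)
        if current and line.strip().startswith("serial="):
            out.append("    ip=" + current[1])  # add the compute node IP
    return "\n".join(out)
```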
#. Set the chain table to run the ``bmcsetup`` script; this will set the BMC IP address to static. ::
chdef cn01 chain="runcmd=bmcsetup"
#. Change the BMC IP address
Set the BMC IP address to a different value for the **predefined** compute node definitions.
To change the dhcp obtained IP address of 172.30.0.1 to a static IP address of 172.20.2.1, run the following command: ::
chdef cn01 bmc=172.20.2.1
#. Define the compute nodes into xCAT: ::
cat predefined.stanzas | mkdef -z
#. Add the compute node IP information to ``/etc/hosts``: ::
makehosts cn01
#. Refresh the DNS configuration for the new hosts: ::
makedns -n
#. **[Optional]** Monitor the node discovery process using rcons
Configure the conserver for the **discovered** node to watch the discovery process using ``rcons``::
makeconservercf node-8247-22l-10112ca
In another terminal window, open the remote console: ::
rcons node-8247-22l-10112ca
#. Start the discovery process by booting the **discovered** node definition: ::
rsetboot node-8247-22l-10112ca net
rpower node-8247-22l-10112ca on
#. The discovery process will network boot the machine into the diskless xCAT genesis kernel and perform the discovery process. When the discovery process is complete, doing ``lsdef`` on the compute nodes should show discovered attributes for the machine. The important ``mac`` information should be discovered, which is necessary for xCAT to perform OS provisioning.
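Once discovery completes, the discovered attributes can be checked with ``lsdef``. For example (``cn01`` is the hypothetical node name used above):

```shell
# Show the discovered mac attribute for cn01
lsdef cn01 -i mac
```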

View File

@ -0,0 +1,113 @@
Set static BMC IP using dhcp provided IP address
================================================
The following example outlines the MTMS based hardware discovery for a single IPMI-based compute node.
+------------------------------+------------+
| Compute Node Information | Value |
+==============================+============+
| Model Type | 8247-22l |
+------------------------------+------------+
| Serial Number | 10112CA |
+------------------------------+------------+
| Hostname | cn01 |
+------------------------------+------------+
| IP address | 10.1.2.1 |
+------------------------------+------------+
The BMC IP address is obtained from the open range dhcp server, and the plan is to keep the same IP address but configure it as static in the BMC.
+------------------------------+------------+
| BMC Information | Value |
+==============================+============+
| IP address - dhcp | 172.30.0.1 |
+------------------------------+------------+
| IP address - static | 172.30.0.1 |
+------------------------------+------------+
#. **Pre-define** the compute nodes:
Use the ``bmcdiscover`` command to help discover the nodes over an IP range and easily create a starting file to define the compute nodes into xCAT.
To discover the compute nodes for the BMCs with an IP address of 172.30.0.1, use the command: ::
bmcdiscover --range 172.30.0.1 -z > predefined.stanzas
The discovered nodes have the naming convention: node-<*model-type*>-<*serial-number*> ::
# cat predefined.stanzas
node-8247-22l-10112ca:
objtype=node
groups=all
bmc=172.30.0.1
cons=ipmi
mgt=ipmi
mtm=8247-22L
serial=10112CA
#. Edit the ``predefined.stanzas`` file and change the discovered nodes to the intended ``hostname`` and ``IP address``.
#. Edit the ``predefined.stanzas`` file: ::
vi predefined.stanzas
#. Rename the discovered object names to their intended compute node hostnames based on the MTMS mapping: ::
node-8247-22l-10112ca ==> cn01
#. Add an ``ip`` attribute and set it to the compute node IP address: ::
ip=10.1.2.1
#. Repeat for additional nodes in the ``predefined.stanzas`` file based on the MTMS mapping.
In this example, the ``predefined.stanzas`` file now looks like the following: ::
# cat predefined.stanzas
cn01:
objtype=node
groups=all
bmc=172.30.0.1
cons=ipmi
mgt=ipmi
mtm=8247-22L
serial=10112CA
ip=10.1.2.1
#. Set the chain table to run the ``bmcsetup`` script; this will set the BMC IP address to static. ::
chdef cn01 chain="runcmd=bmcsetup"
#. Define the compute nodes into xCAT: ::
cat predefined.stanzas | mkdef -z
#. Add the compute node IP information to ``/etc/hosts``: ::
makehosts cn01
#. Refresh the DNS configuration for the new hosts: ::
makedns -n
#. **[Optional]** Monitor the node discovery process using rcons
Configure the conserver for the **predefined** node to watch the discovery process using ``rcons``::
makeconservercf cn01
In another terminal window, open the remote console: ::
rcons cn01
#. Start the discovery process by booting the **predefined** node definition: ::
rsetboot cn01 net
rpower cn01 on
#. The discovery process will network boot the machine into the diskless xCAT genesis kernel and perform the discovery process. When the discovery process is complete, doing ``lsdef`` on the compute nodes should show discovered attributes for the machine. The important ``mac`` information should be discovered, which is necessary for xCAT to perform OS provisioning.

View File

@ -0,0 +1,27 @@
MTMS-based Discovery
====================
MTMS stands for **M**\ achine **T**\ ype/\ **M**\ odel and **S**\ erial. This is one way to uniquely identify each physical server.
MTMS-based hardware discovery assumes the administrator has the model type and serial number information for the physical servers and a plan for mapping the servers to intended hostnames/IP addresses.
**Overview**
#. Automatically search and collect MTMS information from the servers
#. Write **discovered-bmc-nodes** to xCAT (recommended to set a different BMC IP address)
#. Create **predefined-compute-nodes** to xCAT providing additional properties
#. Power on the nodes which triggers xCAT hardware discovery engine
**Pros**
* Limited effort to get servers defined using xCAT hardware discovery engine
**Cons**
* When compared to switch-based discovery, the administrator needs to create the **predefined-compute-nodes** for each of the **discovered-bmc-nodes**. This could become difficult for a large number of servers.
.. toctree::
:maxdepth: 2
verification.rst
discovery.rst

View File

@ -0,0 +1,53 @@
Verification
============
Before starting hardware discovery, ensure the following is configured to make the discovery process as smooth as possible.
Password Table
--------------
In order to communicate with IPMI-based hardware (with BMCs), verify that the xCAT ``passwd`` table contains an entry for ``ipmi`` which defines the default username and password to communicate with the IPMI-based servers. ::
tabdump passwd | grep ipmi
If not configured, use the following command to set ``username=ADMIN`` and ``password=admin``. ::
chtab key=ipmi passwd.username=ADMIN passwd.password=admin
Genesis Package
---------------
The **xCAT-genesis** packages provide the utility to create the genesis network boot root image used by xCAT when doing hardware discovery. They should be installed during the xCAT install; hardware discovery will fail if they are missing.
Verify that the ``genesis-scripts`` and ``genesis-base`` packages are installed:
* **[RHEL/SLES]**: ::
rpm -qa | grep -i genesis
* **[Ubuntu]**: ::
dpkg -l | grep -i genesis
If missing:
#. Install them from the ``xcat-dep`` repository using the operating system specific package manager (``yum, zypper, apt-get, etc``)
* **[RHEL]**: ::
yum install xCAT-genesis
* **[SLES]**: ::
zypper install xCAT-genesis
* **[Ubuntu]**: ::
apt-get install xCAT-genesis
#. Create the network boot rootimage with the following command: ``mknb ppc64``.
The genesis kernel should be copied to ``/tftpboot/xcat``.
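As a quick check (the path is taken from the text above; exact file names vary by architecture and release):

```shell
# List the genesis kernel/root image files created by mknb
ls -l /tftpboot/xcat/
```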

View File

@ -1,80 +0,0 @@
MTMS-based Discovery
====================
MTMS is short for Machine Type/Model and Serial, which is unique for a physical server. The idea of MTMS-based hardware discovery is that the admin knows the physical location information of the server with the specified MTMS. The admin can then assign a node name and host IP address for the physical server.
.. include:: schedule_environment.rst
.. include:: config_environment.rst
Discover server and define
--------------------------
After the environment is ready and the server is powered on, the server discovery process can start. The first thing to do is discover the FSP/BMC of the server. It is automatically powered on when the physical server is connected to power.
The following command can be used to discover BMC(s) within an IP range and write the discovered node definition(s) into a stanza file::
bmcdiscover -s nmap --range 50.0.100.1-100 -z > ./bmc.stanza
**Note**: bmcdiscover will use username/password pair set in ``passwd`` table with **key** equal **ipmi**. If you'd like to use other username/password, you can use ::
bmcdiscover -s nmap --range 50.0.100.1-100 -z -u <username> -p <password> > ./bmc.stanza
You need to modify the node definition(s) in the stanza file before using them; the stanza file will look like this::
# cat pbmc.stanza
cn1:
objtype=node
bmc=50.0.100.1
mtm=8247-42L
serial=10112CA
groups=pbmc,all
mgt=ipmi
Then, define it into xCATdb::
# cat pbmc.stanza | mkdef -z
1 object definitions have been created or modified.
The server definition will be like this::
# lsdef cn1
Object name: cn1
bmc=50.0.100.1
groups=pbmc,all
hidden=0
mgt=ipmi
mtm=8247-42L
nodetype=mp
postbootscripts=otherpkgs
postscripts=syslog,remoteshell,syncfiles
serial=10112CA
After the physical server is defined in the xCAT db, the next thing is to update the node definition with the example node attributes::
chdef cn1 ip=10.0.101.1
In order to do BMC configuration during the discovery process, set ``runcmd=bmcsetup``. For more info about chain, please refer to :doc:`Chain <../../../../../advanced/chain/index>` ::
chdef cn1 chain="runcmd=bmcsetup"
Then, add node info into /etc/hosts and DNS::
makehosts cn1
makedns -n
Start discovery process
-----------------------
To start the discovery process, power on the host remotely with the following command; the discovery process will start automatically after the host is powered on::
rpower cn1 on
**[Optional]** If you'd like to monitor the discovery process, you can use::
chdef cn1 cons=ipmi
makeconservercf
rcons cn1
.. include:: standard_cn_definition.rst

View File

@ -1,14 +1,14 @@
IBM Power LE / OpenPOWER
=========================
This chapter introduces the procedure of how to manage an IBM Power LE/OpenPower cluster. Generally speaking, the processor of **Compute Node** is **IBM Power Chip** based and the management module is **BMC** based.
The following sections document the procedures for managing IBM Power LE (Little Endian) / OpenPOWER servers in an xCAT cluster.
These machines use the IBM Power Architecture and are **IPMI** managed.
For new users, it is recommended to read this chapter in order, since later sections depend on the results of previous sections.
.. toctree::
:maxdepth: 2
configure_xcat.rst
configure/index.rst
discovery/index.rst
management.rst
diskful/index.rst

View File

@ -59,7 +59,7 @@ Keywords to use:
.. code-block:: perl
apps -- a list of comma separated application names whose status will be queried. For how to get the status of each app, look for app name in the key filed in a different row.
apps -- a list of comma separated application names whose status will be queried. For how to get the status of each app, look for app name in the key field in a different row.
port -- the application daemon port number, if not specified, use internal list, then /etc/services.
group -- the name of a node group that needs to get the application status from. If not specified, assume all the nodes in the nodelist table. To specify more than one groups, use group=a,group=b format.
cmd -- the command that will be run locally on mn or sn.

View File

@ -11,7 +11,7 @@ NAME
****
\ **rmvm**\ - Removes HMC-, DFM-, IVM-, KVM-, Vmware- and zVM-managed partitions or virtual machines.
\ **rmvm**\ - Removes HMC-, DFM-, IVM-, KVM-, VMware- and zVM-managed partitions or virtual machines.
********
@ -25,18 +25,18 @@ SYNOPSIS
\ **rmvm [-V| -**\ **-verbose]**\ \ *noderange*\ \ **[-r] [-**\ **-service]**\
For KVM and Vmware:
For KVM and VMware:
===================
\ **rmvm [-p] [-f]**\
\ **rmvm [-p] [-f]**\ \ *noderange*\
PPC (using Direct FSP Management) specific:
===========================================
\ **rmvm**\ \ *noderange*\
\ **rmvm [-p]**\ \ *noderange*\
@ -65,9 +65,7 @@ OPTIONS
\ **-**\ **-service**\ Remove the service partitions of the specified CECs.
\ **-p**\ Purge the existence of the VM from persistant storage. This will erase all storage related to the VM in addition to removing it from the active virtualization configuration.
\ **-p|-**\ **-part**\ Remove the specified partiton on normal power machine.
\ **-p**\ KVM: Purge the existence of the VM from persistent storage. This will erase all storage related to the VM in addition to removing it from the active virtualization configuration. PPC: Remove the specified partition on normal power machine.
\ **-f**\ Force remove the VM, even if the VM appears to be online. This will bring down a live VM if requested.

View File

@ -11,7 +11,7 @@ SYNOPSIS
********
\ **rsetboot**\ \ *noderange*\ {\ **hd | net | cd | default | stat**\ }
\ **rsetboot**\ \ *noderange*\ {\ **hd | net | cd | default | stat**\ } [\ **-u**\ ] [\ **-p**\ ]
\ **rsetboot**\ [\ **-h | -**\ **-help | -v | -**\ **-version**\ ]
@ -21,9 +21,7 @@ DESCRIPTION
***********
\ **rsetboot**\ sets the boot media that should be used on the next boot of the specified nodes. After the nodes are
booted with the specified device (e.g. via rpower(1)|rpower.1), the nodes will return to using the
default boot device specified in the BIOS. Currently this command is only supported for IPMI nodes.
\ **rsetboot**\ sets the boot media and boot mode that should be used on the next boot of the specified nodes. After the nodes are booted with the specified device and boot mode (e.g. via rpower(1)|rpower.1), the nodes will return to using the default boot device specified in the BIOS. Currently this command is only supported for IPMI nodes.
*******
@ -62,6 +60,18 @@ OPTIONS
\ **-u**\
To specify the next boot mode to be "UEFI Mode".
\ **-p**\
To make the specified boot device and boot mode settings persistent.
********
EXAMPLES

View File

@ -19,11 +19,11 @@ Name
****************
\ **nodeset**\ [\ *noderange*\ ] [\ **boot**\ | \ **stat**\ | \ **iscsiboot**\ | \ **offline**\ | \ **runcmd=bmcsetup**\ | \ **osimage**\ = \ *imagename*\ | \ **shell**\ | \ **shutdown**\ ]
\ **nodeset**\ \ *noderange*\ [\ **boot**\ | \ **stat**\ | \ **iscsiboot**\ | \ **offline**\ | \ **runcmd=bmcsetup**\ | \ **osimage**\ [=\ *imagename*\ ] | \ **shell**\ | \ **shutdown**\ ]
\ **nodeset**\ \ *noderange*\ \ **osimage=**\ \ *imagename*\ [\ **-**\ **-noupdateinitrd**\ ] [\ **-**\ **-ignorekernelchk**\ ]
\ **nodeset**\ \ *noderange*\ \ **osimage**\ [=\ *imagename*\ ] [\ **-**\ **-noupdateinitrd**\ ] [\ **-**\ **-ignorekernelchk**\ ]
\ **nodeset**\ \ *noderange*\ \ **runimage=**\ \ *task*\
\ **nodeset**\ \ *noderange*\ \ **runimage=**\ \ *task*\
\ **nodeset**\ [\ **-h | -**\ **-help | -v | -**\ **-version**\ ]
@ -74,7 +74,7 @@ A user can supply their own scripts to be run on the mn or on the service node (
\ **osimage | osimage=<imagename**\ >
\ **osimage | osimage=**\ \ *imagename*\
Prepare server for installing a node using the specified os image. The os image is defined in the \ *osimage*\ table and \ *linuximage*\ table. If the <imagename> is omitted, the os image name will be obtained from \ *nodetype.provmethod*\ for the node.

View File

@ -33,16 +33,24 @@ xCAT uses the apt package manager on Ubuntu Linux distributions to install and r
gpgcheck=1
Set up Network
--------------
Configure the Management Node
-----------------------------
The Management Node IP address should be set to a **static** IP address.
Setting properties on the Management Node before installing the xCAT software allows xCAT to automatically configure key attributes in the xCAT ``site`` table during the install.
Modify the ``interfaces`` file in ``/etc/network`` and configure a static IP address. ::
# The primary network interface
auto eth0
iface eth0 inet static
address 10.3.31.11
netmask 255.0.0.0
#. Ensure a hostname is configured on the management node by issuing the ``hostname`` command. [*It's recommended to use a fully qualified domain name (FQDN) when setting the hostname*]
#. To set the hostname of *xcatmn.cluster.com*: ::
hostname xcatmn.cluster.com
#. Add the hostname to ``/etc/hostname`` and ``/etc/hosts`` to persist the hostname on reboot.
#. Reboot or run ``service hostname restart`` to allow the hostname to take effect, then verify that the hostname commands return correctly:
* ``hostname``
* ``hostname -d`` - should display the domain
#. Reduce the risk of the Management Node IP address being lost by setting the interface IP to **STATIC** in the ``/etc/network/interfaces`` configuration file.
#. Configure any domain search strings and nameservers using the ``resolvconf`` command.
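For reference, a minimal static configuration in ``/etc/network/interfaces`` might look like the following; the interface name and addresses below are examples only, substitute values for your own network: ::

    # The primary network interface
    auto eth0
    iface eth0 inet static
        address 10.3.31.11
        netmask 255.0.0.0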

View File

@ -3,6 +3,9 @@
For the current list of operating systems supported and verified by the development team for the different releases of xCAT, see the :doc:`xCAT2 Release Notes </overview/xcat2_release>`.
**Disclaimer** These instructions are intended to only be guidelines and specific details may differ slightly based on the operating system version. Always refer to the operating system documentation for the latest recommended procedures.
.. END_see_release_notes
.. BEGIN_install_os_mgmt_node
@ -26,14 +29,6 @@ The system requirements for your xCAT management node largely depend on the size
.. END_install_os_mgmt_node
.. BEGIN_setup_mgmt_node_network
The Management Node IP address should be set to a **static** IP address.
Modify the ``ifcfg-<device>`` file in ``/etc/sysconfig/network-scripts`` and configure a static IP address.
.. END_setup_mgmt_node_network
.. BEGIN_install_xcat_introduction
xCAT consists of two software packages: ``xcat-core`` and ``xcat-dep``

View File

@ -15,9 +15,8 @@ Configure the Base OS Repository
xCAT uses the yum package manager on RHEL Linux distributions to install and resolve dependency packages provided by the base operating system. Follow this section to create the repository for the base operating system on the Management Node
#. Copy the DVD iso file to ``/tmp`` on the Management Node: ::
# This example will use RHEL-LE-7.1-20150219.1-Server-ppc64le-dvd1.iso
#. Copy the DVD iso file to ``/tmp`` on the Management Node.
This example will use file ``RHEL-LE-7.1-20150219.1-Server-ppc64le-dvd1.iso``
#. Mount the iso to ``/mnt/iso/rhels7.1`` on the Management Node. ::
@ -33,10 +32,26 @@ xCAT uses the yum package manager on RHEL Linux distributions to install and res
gpgcheck=1
Set up Network
--------------
Configure the Management Node
-----------------------------
.. include:: ../common_sections.rst
:start-after: BEGIN_setup_mgmt_node_network
:end-before: END_setup_mgmt_node_network
Setting properties on the Management Node before installing the xCAT software allows xCAT to automatically configure key attributes in the xCAT ``site`` table during the install.
#. Ensure a hostname is configured on the management node by issuing the ``hostname`` command. [*It's recommended to use a fully qualified domain name (FQDN) when setting the hostname*]
#. To set the hostname of *xcatmn.cluster.com*: ::
hostname xcatmn.cluster.com
#. Add the hostname to ``/etc/sysconfig/network`` in order to persist the hostname on reboot.
#. Reboot the server and verify the hostname by running the following commands:
* ``hostname``
* ``hostname -d`` - should display the domain
#. Reduce the risk of the Management Node IP address being lost by setting the IP to **STATIC** in the ``/etc/sysconfig/network-scripts/ifcfg-<dev>`` configuration files.
#. Configure any domain search strings and nameservers in the ``/etc/resolv.conf`` file.
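For reference, a static configuration in an ``ifcfg-<dev>`` file (for example ``/etc/sysconfig/network-scripts/ifcfg-eth0``) might look like the following; the device name and addresses are examples only: ::

    DEVICE=eth0
    BOOTPROTO=static
    IPADDR=10.3.31.11
    NETMASK=255.0.0.0
    ONBOOT=yes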

View File

@ -33,10 +33,25 @@ xCAT uses the zypper package manager on SLES Linux distributions to install and
gpgcheck=1
Set up Network
--------------
Configure the Management Node
-----------------------------
The Management Node IP address should be set to a **static** IP address.
Setting properties on the Management Node before installing the xCAT software allows xCAT to automatically configure key attributes in the xCAT ``site`` table during the install.
Modify the ``ifcfg-<device>`` file in ``/etc/sysconfig/network/`` and configure a static IP address.
#. Ensure a hostname is configured on the management node by issuing the ``hostname`` command. [*It's recommended to use a fully qualified domain name (FQDN) when setting the hostname*]
#. To set the hostname of *xcatmn.cluster.com*: ::
hostname xcatmn.cluster.com
#. Add the hostname to ``/etc/hostname`` in order to persist the hostname on reboot.
#. Reboot the server and verify the hostname by running the following commands:
* ``hostname``
* ``hostname -d`` - should display the domain
#. Reduce the risk of the Management Node IP address being lost by setting the IP to **STATIC** in the ``/etc/sysconfig/network/ifcfg-<dev>`` configuration files.
#. Configure any domain search strings and nameservers in the ``/etc/resolv.conf`` file.
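For reference, a static configuration in an ``ifcfg-<dev>`` file (for example ``/etc/sysconfig/network/ifcfg-eth0``) might look like the following; the device name and address are examples only: ::

    BOOTPROTO='static'
    IPADDR='10.3.31.11/8'
    STARTMODE='auto'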

View File

@ -1,5 +1,4 @@
# The shell is commented out so that it will run in bash on linux and ksh on aix
# !/bin/bash
#!/bin/bash
# IBM(c) 2007 EPL license http://www.eclipse.org/legal/epl-v10.html

View File

@ -96,7 +96,7 @@ my %usage = (
MIC specific:
rinv noderange [system|ver|board|core|gddr|all]",
"rsetboot" =>
"Usage: rsetboot <noderange> [net|hd|cd|floppy|def|stat] [-V|--verbose]
"Usage: rsetboot <noderange> [net|hd|cd|floppy|def|stat] [-V|--verbose] [-u] [-p]
rsetboot [-h|--help|-v|--version]",
"rbootseq" =>
"Usage:
@ -459,7 +459,7 @@ Options:
"Usage:
Common:
nodeset [-h|--help|-v|--version]
nodeset <noderange> [shell|boot|runcmd=bmcsetup|iscsiboot|osimage[=<imagename>]|offline]",
nodeset <noderange> [shell|boot|runcmd=bmcsetup|iscsiboot|osimage[=<imagename>]|offline|shutdown|stat]",
"rmflexnode" =>
"Usage:
rmflexnode [-h|--help|-v|--version]

View File

@ -874,12 +874,20 @@ sub initmysqldb
} # end AIX only
#on debian/ubuntu should comment the bind-address line in my.cnf
#on Ubuntu 16.04, the bind-address line is in mariadb.conf.d/50-server.cnf
if ( $::debianflag ){
$cmd = "sed 's/\\(^\\s*bind.*\\)/#\\1/' /etc/mysql/my.cnf > /tmp/my.cnf; mv -f /tmp/my.cnf /etc/mysql/my.cnf;chmod 644 /etc/mysql/my.cnf";
my $bind_file;
if (-e "/etc/mysql/mariadb.conf.d/50-server.cnf")
{
$bind_file = "/etc/mysql/mariadb.conf.d/50-server.cnf";
} else {
$bind_file = "/etc/mysql/my.cnf";
}
$cmd = "sed 's/\\(^\\s*bind.*\\)/#\\1/' $bind_file > /tmp/my.cnf; mv -f /tmp/my.cnf $bind_file;chmod 644 $bind_file";
xCAT::Utils->runcmd($cmd, 0);
if ($::RUNCMD_RC != 0)
{
xCAT::MsgUtils->message("E", " comment the bind-address line in /etc/mysql/my.cfg failed: $cmd.");
xCAT::MsgUtils->message("E", " comment the bind-address line in $bind_file failed: $cmd.");
exit(1);
}
}

View File

@ -37,7 +37,7 @@ The following is an example of the settings in the B<monsetting> table:
Keywords to use:
apps -- a list of comma separated application names whose status will be queried. For how to get the status of each app, look for app name in the key filed in a different row.
apps -- a list of comma separated application names whose status will be queried. For how to get the status of each app, look for app name in the key field in a different row.
port -- the application daemon port number, if not specified, use internal list, then /etc/services.
group -- the name of a node group that needs to get the application status from. If not specified, assume all the nodes in the nodelist table. To specify more than one groups, use group=a,group=b format.
cmd -- the command that will be run locally on mn or sn.

View File

@ -1,6 +1,6 @@
=head1 NAME
B<rmvm> - Removes HMC-, DFM-, IVM-, KVM-, Vmware- and zVM-managed partitions or virtual machines.
B<rmvm> - Removes HMC-, DFM-, IVM-, KVM-, VMware- and zVM-managed partitions or virtual machines.
=head1 SYNOPSIS
@ -10,13 +10,13 @@ B<rmvm [-v| --version]>
B<rmvm [-V| --verbose]> I<noderange> B<[-r] [--service]>
=head2 For KVM and Vmware:
=head2 For KVM and VMware:
B<rmvm [-p] [-f]>
B<rmvm [-p] [-f]> I<noderange>
=head2 PPC (using Direct FSP Management) specific:
B<rmvm> I<noderange>
B<rmvm [-p]> I<noderange>
=head1 DESCRIPTION
@ -37,9 +37,7 @@ B<-r> Retain the data object definitions of the nodes.
B<--service> Remove the service partitions of the specified CECs.
B<-p> Purge the existence of the VM from persistant storage. This will erase all storage related to the VM in addition to removing it from the active virtualization configuration.
B<-p|--part> Remove the specified partiton on normal power machine.
B<-p> KVM: Purge the existence of the VM from persistent storage. This will erase all storage related to the VM in addition to removing it from the active virtualization configuration. PPC: Remove the specified partition on normal power machine.
B<-f> Force remove the VM, even if the VM appears to be online. This will bring down a live VM if requested.

View File

@ -5,16 +5,14 @@ B<rsetboot> - Sets the boot device to be used for BMC-based servers for the next
=head1 SYNOPSIS
B<rsetboot> I<noderange> {B<hd>|B<net>|B<cd>|B<default>|B<stat>}
B<rsetboot> I<noderange> {B<hd>|B<net>|B<cd>|B<default>|B<stat>} [B<-u>] [B<-p>]
B<rsetboot> [B<-h>|B<--help>|B<-v>|B<--version>]
=head1 DESCRIPTION
B<rsetboot> sets the boot media that should be used on the next boot of the specified nodes. After the nodes are
booted with the specified device (e.g. via L<rpower(1)|rpower.1>), the nodes will return to using the
default boot device specified in the BIOS. Currently this command is only supported for IPMI nodes.
B<rsetboot> sets the boot media and boot mode that should be used on the next boot of the specified nodes. After the nodes are booted with the specified device and boot mode (e.g. via L<rpower(1)|rpower.1>), the nodes will return to using the default boot device specified in the BIOS. Currently this command is only supported for IPMI nodes.
=head1 OPTIONS
@ -40,6 +38,14 @@ Boot using the default set in BIOS.
Display the current boot setting.
=item B<-u>
To specify the next boot mode to be "UEFI Mode".
=item B<-p>
To make the specified boot device and boot mode settings persistent.
=back
=head1 EXAMPLES

View File

@ -4,11 +4,11 @@ B<nodeset> - set the boot state for a noderange
=head1 B<Synopsis>
B<nodeset> [I<noderange>] [B<boot> | B<stat> | B<iscsiboot> | B<offline> | B<runcmd=bmcsetup> | B<osimage>= I<imagename> | B<shell> | B<shutdown>]
B<nodeset> I<noderange> [B<boot> | B<stat> | B<iscsiboot> | B<offline> | B<runcmd=bmcsetup> | B<osimage>[=I<imagename>] | B<shell> | B<shutdown>]
B<nodeset> I<noderange> B<osimage=> I<imagename> [B<--noupdateinitrd>] [B<--ignorekernelchk>]
B<nodeset> I<noderange> B<osimage>[=I<imagename>] [B<--noupdateinitrd>] [B<--ignorekernelchk>]
B<nodeset> I<noderange> B<runimage=> I<task>
B<nodeset> I<noderange> B<runimage=>I<task>
B<nodeset> [B<-h>|B<--help>|B<-v>|B<--version>]
@ -49,7 +49,7 @@ Instruct network boot loader to be skipped, generally meaning boot to hard disk
Cleanup the current pxe/tftp boot configuration files for the nodes requested
=item B<osimage>|B<osimage=<imagename>>
=item B<osimage>|B<osimage=>I<imagename>
Prepare server for installing a node using the specified os image. The os image is defined in the I<osimage> table and I<linuximage> table. If the <imagename> is omitted, the os image name will be obtained from I<nodetype.provmethod> for the node.

View File

@ -188,6 +188,7 @@ else
done
echo -n "Acquiring network addresses.."
tries=0
while [ -z "$bootnic" ]; do
for tmp1 in $ALLUP_NICS; do
if ip addr show dev $tmp1|grep -v 'scope link'|grep -v 'dynamic'|grep -v inet6|grep inet > /dev/null; then
@ -199,6 +200,10 @@ else
fi
done
sleep 2
tries=$(($tries+1))
if [ $tries -ge 10 ]; then
break
fi
done
if [ -z "$bootnic" ]; then
/bin/bash

View File

@ -1577,7 +1577,7 @@ sub tabdb
if ($rep) {
return tabdb($rep->[0], $rep->[1], $rep->[2]);
} else {
$tmplerr="Unable to find requested filed <$field> from table <$table>, with key <$key>"
$tmplerr="Unable to find requested field <$field> from table <$table>, with key <$key>"
}
}
return "";
@ -1586,7 +1586,7 @@ sub tabdb
# check for site.xcatdebugmode
if (($table =~ /site/) and ($key =~ /xcatdebugmode/)) {
if ((($ent->{$field}) ne "0") and (($ent->{$field}) ne "1") and (($ent->{$field}) ne "2")) {
$tmplerr="Unable to recognise filed <$field> from table <$table>, with key <$key>. Please enter '0' '1' or '2'"
$tmplerr="Unable to recognise field <$field> from table <$table>, with key <$key>. Please enter '0' '1' or '2'"
}
}
}

View File

@ -418,7 +418,7 @@ sub getDescription {
ping-interval: the number of minutes between each nmap/fping operation.
The default value is 3.
apps: a list of comma separated application names whose status will be queried.
For how to get the status of each app, look for app name in the key filed
For how to get the status of each app, look for app name in the key field
in a different row.
port: the application daemon port number, if not specified, use internal list,
then /etc/services.

View File

@ -172,15 +172,9 @@ my %command_states = (
# ^ / |
# | 404 and / |
# 20x| 'No such image'/ |
# | v 20x| error
# | v | error
# CREATE_TO_WAIT_FOR_IMAGE_PULL_DONE ------------------------------> error_msg
# |
# v error
# CREATE_TO_WAIT_FOR_RM_DEFCONN_DONE-------> error_msg
# |
# 20x|
# v error
# CREATE_TO_WAIT_FOR_CONNECT_NET_DONE------> error_msg
# |
# 20x|
# v
@ -200,28 +194,10 @@ my %command_states = (
init_url => "/images/create?fromImage=#DOCKER_IMAGE#",
init_state => "CREATE_TO_WAIT_FOR_IMAGE_PULL_DONE",
},
connectnet => {
genreq_ptr => \&genreq_for_net_connect,
state_machine_engine => \&default_state_engine,
init_method => "POST",
init_url => "/networks/#NETNAME#/connect",
init_state => "CREATE_TO_WAIT_FOR_CONNECT_NET_DONE",
},
rmdefconn => {
genreq_ptr => \&genreq_for_net_disconnect,
state_machine_engine => \&default_state_engine,
init_method => "POST",
init_url => "/networks/bridge/disconnect",
init_state => "CREATE_TO_WAIT_FOR_RM_DEFCONN_DONE",
}
},
# The state changing for rmdocker
#
# INIT_TO_WAIT_FOR_DISCONNECT_NET_DONE
# If success or force to remove, to remove docker
# Else return error
# In remove docker round, return error_msg if failed or success if done
# For rmdocker
# return error_msg if failed or success if done
rmdocker => {
force => {
state_machine_engine => \&default_state_engine,
@ -233,13 +209,6 @@ my %command_states = (
init_method => "DELETE",
init_url => "/containers/#NODE#",
},
disconnect => {
genreq_ptr => \&genreq_for_net_disconnect,
state_machine_engine => \&default_state_engine,
init_method => "POST",
init_url => "/networks/#NETNAME#/disconnect",
init_state => "INIT_TO_WAIT_FOR_DISCONNECT_NET_DONE",
},
},
# For lsdocker [-l|--logs]
@ -484,17 +453,6 @@ sub default_state_engine {
$global_callback->({node=>[{name=>[$node],"$info_flag"=>["Pull image $node_hash->{image} start"]}]});
change_node_state($node, $command_states{mkdocker}{pullimage});
return;
} elsif ($data->is_success) {
$global_callback->({node=>[{name=>[$node],"$info_flag"=>["Remove default network connection"]}]});
change_node_state($node, $command_states{mkdocker}{rmdefconn});
return;
}
}
elsif ($curr_state eq 'CREATE_TO_WAIT_FOR_RM_DEFCONN_DONE') {
if ($data->is_success) {
$global_callback->({node=>[{name=>[$node],"$info_flag"=>["Connecting customzied network '$node_hash->{nics}'"]}]});
change_node_state($node, $command_states{mkdocker}{connectnet});
return;
}
}
elsif ($curr_state eq 'CREATE_TO_WAIT_FOR_IMAGE_PULL_DONE') {
@ -505,13 +463,6 @@ sub default_state_engine {
return;
}
}
elsif ($curr_state eq 'INIT_TO_WAIT_FOR_DISCONNECT_NET_DONE') {
if ($data->is_success or $node_hash_variable{$node}->{opt} eq 'force') {
$global_callback->({node=>[{name=>[$node],"$info_flag"=>["Disconnect customzied network '$node_hash->{nics}' done"]}]});
change_node_state($node, $command_states{rmdocker}{$node_hash->{opt}});
return;
}
}
foreach my $tmp (@msg) {
if ($tmp->[0]) {
@ -736,7 +687,7 @@ sub parse_args {
return ( [1, "Option $op is not supported for $cmd"]);
}
}
$request->{mapping_option} = "disconnect";
$request->{mapping_option} = "force";
}
elsif ($cmd eq 'lsdocker') {
foreach my $op (@ARGV) {
@ -825,11 +776,7 @@ sub process_request {
$mapping_hash = $command_states{$command}{$req->{mapping_option}};
}
else {
if ($command eq 'rmdocker') {
$mapping_hash = $command_states{$command}{disconnect};
} else {
$mapping_hash = $command_states{$command}{default};
}
$mapping_hash = $command_states{$command}{default};
}
my $max_concur_session_allow = 20; # A variable can be set by caculated in the future
if ($command eq 'lsdocker') {
@ -1131,6 +1078,10 @@ sub genreq_for_mkdocker {
my ($node, $dockerhost, $method, $api) = @_;
my $dockerinfo = $node_hash_variable{$node};
my %info_hash = ();
if (defined($dockerinfo->{flag})) {
my $flag_hash = decode_json($dockerinfo->{flag});
%info_hash = %$flag_hash;
}
#$info_hash{name} = '/'.$node;
#$info_hash{Hostname} = '';
#$info_hash{Domainname} = '';
@ -1139,72 +1090,15 @@ sub genreq_for_mkdocker {
$info_hash{Memory} = $dockerinfo->{mem};
$info_hash{MacAddress} = $dockerinfo->{mac};
$info_hash{CpusetCpus} = $dockerinfo->{cpus};
if (defined($dockerinfo->{flag})) {
my $flag_hash = decode_json($dockerinfo->{flag});
%info_hash = (%info_hash, %$flag_hash);
}
$info_hash{HostConfig}->{NetworkMode} = $dockerinfo->{nics};
$info_hash{NetworkDisabled} = JSON::false;
$info_hash{NetworkingConfig}->{EndpointsConfig}->{"$dockerinfo->{nics}"}->{IPAMConfig}->{IPv4Address} = $dockerinfo->{ip};
my $content = encode_json \%info_hash;
return genreq($node, $dockerhost, $method, $api, $content);
}
#-------------------------------------------------------
=head3 genreq_for_net_connect
Generate HTTP request for network operation for a docker
Input: $node: The docker container name
$dockerhost: hash, keys: name, port, user, pw, user, pw, user, pw
$method: the http method to generate the http request
$api: the url to generate the http request
return: The http request;
Usage example:
my $res = genreq_for_net_connect($node,\%dockerhost,'POST','/networks/$nic/connect');
=cut
#-------------------------------------------------------
sub genreq_for_net_connect {
my ($node, $dockerhost, $method, $api) = @_;
my $dockerinfo = $node_hash_variable{$node};
my %info_hash = ();
$info_hash{container} = $node;
$info_hash{EndpointConfig}->{IPAMConfig}->{IPv4Address} = $dockerinfo->{ip};
my $content = encode_json \%info_hash;
return genreq($node, $dockerhost, $method, $api, $content);
}
#-------------------------------------------------------
=head3 genreq_for_net_disconnect
Generate HTTP request for network operation for a docker
Input: $node: The docker container name
$dockerhost: hash, keys: name, port, user, pw, user, pw, user, pw
$method: the http method to generate the http request
$api: the url to generate the http request
return: The http request;
Usage example:
my $res = genreq_for_net_disconnect($node,\%dockerhost,'POST','/networks/$nic/disconnect');
=cut
#-------------------------------------------------------
sub genreq_for_net_disconnect {
my ($node, $dockerhost, $method, $api) = @_;
my $dockerinfo = $node_hash_variable{$node};
my %info_hash = ();
$info_hash{Container} = $node;
$info_hash{Force} = JSON::false;
my $content = encode_json \%info_hash;
return genreq($node, $dockerhost, $method, $api, $content);
}
#-------------------------------------------------------
=head3 sendreq
Based on the method, url create a http request and send out on the given SSL connection

File diff suppressed because it is too large

View File

@ -3055,6 +3055,17 @@ sub rscan {
$hash_vm2host{$vm_node_host->{node}} = $vm_node_host->{host};
}
my @maxlength;
my @rscan_header = (
["type", "" ],
["name", "" ],
["hypervisor", "" ],
["id", "" ],
["cpu", "" ],
["memory", "" ],
["nic", "" ],
["disk", "" ]);
#operate every domain in current hypervisor
foreach $dom (@doms) {
my $name=$dom->get_name();
@ -3071,16 +3082,31 @@ sub rscan {
$uuid =~ s/^(..)(..)(..)(..)-(..)(..)-(..)(..)/$4$3$2$1-$6$5-$8$7/;
}
my $type = $domain->findnodes("/domain")->[0]->getAttribute("type");
if (length($type) > $maxlength[0]) {
$maxlength[0] = length($type);
}
my @nodeobj = $domain->findnodes("/domain/name");
if (@nodeobj and defined($nodeobj[0])) {
$node = $nodeobj[0]->to_literal;
}
if (length($node) > $maxlength[1]) {
$maxlength[1] = length($node);
}
my $hypervisor = $hyper;
if (length($hypervisor) > $maxlength[2]) {
$maxlength[2] = length($hypervisor);
}
my $id = $domain->findnodes("/domain")->[0]->getAttribute("id");
if (length($id) > $maxlength[3]) {
$maxlength[3] = length($id);
}
my @vmcpusobj = $domain->findnodes("/domain/vcpu");
if (@vmcpusobj and defined($vmcpusobj[0])) {
$vmcpus = $vmcpusobj[0]->to_literal;
}
if (length($vmcpus) > $maxlength[4]) {
$maxlength[4] = length($vmcpus);
}
my @vmmemoryobj = $domain->findnodes("/domain/memory");
if (@vmmemoryobj and defined($vmmemoryobj[0])) {
my $mem = $vmmemoryobj[0]->to_literal;
@ -3105,6 +3131,9 @@ sub rscan {
$vmmemory=($mem*1024)/(1024*1024);
}
}
if (length($vmmemory) > $maxlength[5]) {
$maxlength[5] = length($vmmemory);
}
my @vmstoragediskobjs = $domain->findnodes("/domain/devices/disk");
foreach my $vmstoragediskobj (@vmstoragediskobjs) {
if (($vmstoragediskobj->getAttribute("device") eq "disk") and ($vmstoragediskobj->getAttribute("type") eq "file")) {
@ -3115,6 +3144,9 @@ sub rscan {
}
}
}
if (length($vmstorage) > $maxlength[7]) {
$maxlength[7] = length($vmstorage);
}
my @archobj = $domain->findnodes("/domain/os/type");
if (@archobj and defined($archobj[0])) {
$arch = $archobj[0]->getAttribute("arch");
@ -3133,6 +3165,9 @@ sub rscan {
}
}
}
if (length($vmnics) > $maxlength[6]) {
$maxlength[6] = length($vmnics);
}
push @{$host2kvm{$uuid}}, join( ",", $type,$node,$hypervisor,$id,$vmcpus,$vmmemory,$vmnics,$vmstorage,$arch,$mac,$vmnicnicmodel );
if ($write) {
unless (exists $hash_vm2host{$node}) {
@ -3222,15 +3257,14 @@ sub rscan {
if (!$stanza) {
my $header;
my @rscan_header = (
["type", "%-8s" ],
["name", "%-9s" ],
["hypervisor", "%-15s"],
["id", "%-7s" ],
["cpu", "%-8s" ],
["memory", "%-11s"],
["nic", "%-8s" ],
["disk", "%-9s" ]);
$rscan_header[0][1] = sprintf "%%-%ds",($maxlength[0]+3);
$rscan_header[1][1] = sprintf "%%-%ds",($maxlength[1]+3);
$rscan_header[2][1] = sprintf "%%-%ds",($maxlength[2]+3);
$rscan_header[3][1] = sprintf "%%-%ds",($maxlength[3]+3);
$rscan_header[4][1] = sprintf "%%-%ds",($maxlength[4]+3);
$rscan_header[5][1] = sprintf "%%-%ds",($maxlength[5]+3);
$rscan_header[6][1] = sprintf "%%-%ds",($maxlength[6]+3);
$rscan_header[7][1] = sprintf "%%-%ds",($maxlength[7]+3);
foreach (@rscan_header) {
$header .= sprintf ( @$_[1], @$_[0] );
}

View File

@ -67,7 +67,7 @@ cert_opt = ca_default # Certificate field options
default_days = 7300 # how long to certify for
default_crl_days= 30 # how long before next CRL
default_md = sha1 # which md to use.
default_md = sha256 # which md to use.
preserve = no # keep passed DN ordering
# A few difference way of specifying how similar the request should look

View File

@ -0,0 +1,14 @@
pciutils-libs
pciutils
tcl
tk
tcsh
libgcc.ppc
gcc-gfortran
createrepo
kernel-devel
python-devel
lsof
redhat-rpm-config
rpm-build
libnl

View File

@ -14,4 +14,6 @@ kernel-devel
gtk2
atk
cairo
gcc
createrepo
libnl

View File

@ -0,0 +1,10 @@
python-libxml2
tcsh
libatk-1_0-0
python
tcl
lsof
libgtk-2_0-0
tk
libnl1
pciutils

View File

@ -0,0 +1 @@
dpkg-dev

View File

@ -97,7 +97,9 @@ if [ -z "$install_disk" ]; then
fi
# If there is kernel file, add partition's disk into disk_array
for i in $ker_dir/vmlinuz*; do
# On Ubuntu and SLES the kernel file is named vmlinux, but on RHEL it is vmlinuz
# Use the glob pattern "vmlinu*" to match both
for i in $ker_dir/vmlinu*; do
disk_part=${partition%%[0-9]*}
touch "$tmpfile$disk_part"
disk_array=$disk_array"$disk_part "

View File

@ -20,3 +20,11 @@ do
#nic name change during the install and first_reboot
sed -i '/HWADDR/d' $i
done
# NetworkManager conflicts with the network configuration that xCAT performs later in the postboot scripts, so disable it in this postscript
# There are two other services related to NetworkManager: NetworkManager-dispatcher and NetworkManager-wait-online
# Both are triggered by NetworkManager, so disabling NetworkManager here is sufficient
if [ -f "/usr/lib/systemd/system/NetworkManager.service" ]; then
systemctl disable NetworkManager
fi

View File

@ -24,3 +24,4 @@ lsvpd
irqbalance
procps
parted
xz

View File

@ -20,3 +20,4 @@ rsync
rsyslog
e2fsprogs
parted
xz

View File

@ -73,18 +73,34 @@ sub xdie {
die @_;
}
#helper subroutine to get the major release number
#of an osver
sub majversion {
my $version = shift;
my $majorrel;
if($version =~ /^\D*(\d*)[.\d]*$/){
$majorrel = $1;
}
return $majorrel;
}
sub mount_chroot {
my $rootimage_dir = shift;
#postinstall script of package installation
#might access the /proc, /sys and /dev filesystem
#mount them from host read-only
system("mkdir -p $rootimage_dir/proc");
system("mount proc $rootimage_dir/proc -t proc -o ro");
system("mkdir -p $rootimage_dir/sys");
system("mount sysfs $rootimage_dir/sys -t sysfs -o ro");
system("mkdir -p $rootimage_dir/dev");
system("mount devtmpfs $rootimage_dir/dev -t devtmpfs -o ro");
#postinstall script of some packages might access the /proc, /sys and /dev filesystem
#For Red Hat 7 or above, mount these directories read-only from the host to avoid error messages
#For Red Hat 6 or below, mounting these directories might introduce error messages
if(majversion($osver) > 6){
system("mkdir -p $rootimage_dir/proc");
system("mount proc $rootimage_dir/proc -t proc -o ro");
system("mkdir -p $rootimage_dir/sys");
system("mount sysfs $rootimage_dir/sys -t sysfs -o ro");
system("mkdir -p $rootimage_dir/dev");
system("mount devtmpfs $rootimage_dir/dev -t devtmpfs -o ro");
}
}
@ -92,9 +108,11 @@ sub mount_chroot {
sub umount_chroot {
my $rootimage_dir = shift;
system("umount $rootimage_dir/proc");
system("umount $rootimage_dir/sys");
system("umount $rootimage_dir/dev");
if(majversion($osver) >6){
system("umount $rootimage_dir/proc");
system("umount $rootimage_dir/sys");
system("umount $rootimage_dir/dev");
}
}
#check whether a dir is NFS mounted
@ -969,14 +987,19 @@ sub mkinitrd_dracut {
$perm = (stat("$fullpath/$dracutdir/installkernel"))[2];
chmod($perm&07777, "$dracutmpath/installkernel");
my $dracutmodulelist=" xcat nfs base network kernel-modules ";
if (-d glob($dracutmoduledir."[0-9]*fadump")){
$dracutmodulelist .=" fadump ";
}
if ($dracutver >= "033") {
$dracutmodulelist .= " syslog ";
}
# update etc/dracut.conf
open($DRACUTCONF, '>', "$rootimg_dir/etc/dracut.conf");
if (-d glob($dracutmoduledir."[0-9]*fadump")){
print $DRACUTCONF qq{dracutmodules+="xcat nfs base network kernel-modules fadump syslog"\n};
}
else{
print $DRACUTCONF qq{dracutmodules+="xcat nfs base network kernel-modules syslog"\n};
}
print $DRACUTCONF qq{dracutmodules+="$dracutmodulelist"\n};
print $DRACUTCONF qq{add_drivers+="$add_drivers"\n};
close $DRACUTCONF;
} else {

View File

@ -241,3 +241,16 @@ switchdiscover_range_x
switchdiscover_range_z
switchdiscover_range_z_V
nodeset_check_warninginfo
rpower_stop
rpower_start
rpower_state
rpower_restart
rpower_pause
rpower_unpause
mkdocker_h
mkdocker_command
rmdocker_h
rmdocker_command
rmdocker_f_command
lsdocker_h_command
lsdocker_l_command

View File

@ -0,0 +1,279 @@
start:rpower_stop
description:stop a created docker instance
cmd:chdef $$DOCKERCN dockerhost=$$DOCKERHOST:2375 dockercpus=1 ip=$$DOCKERCONIP dockermemory=4096 groups=docker,all mgt=docker
check:rc==0
cmd:makehosts $$DOCKERCN
check:rc==0
cmd:mkdocker $$DOCKERCN image=$$DOCKERIMAGE command=$$DOCKERCOMMAND dockerflag="{\"AttachStdin\":true,\"AttachStdout\":true,\"AttachStderr\":true,\"OpenStdin\":true,\"Tty\":true}"
check:rc==0
cmd:rpower $$DOCKERCN stop
check:rc==0
check:output=~container already stopped
cmd:rpower $$DOCKERCN restart
check:output=~success
cmd:rpower $$DOCKERCN state
check:rc==0
check:output=~running
cmd:rpower $$DOCKERCN stop
check:rc==0
cmd:rpower $$DOCKERCN state
check:output=~exited
cmd:rmdocker $$DOCKERCN -f
check:rc==0
cmd:makehosts -d $$DOCKERCN
check:rc==0
cmd:rmdef $$DOCKERCN
check:rc==0
end
start:rpower_start
description:start a created docker instance
cmd:chdef $$DOCKERCN dockerhost=$$DOCKERHOST:2375 dockercpus=1 ip=$$DOCKERCONIP dockermemory=4096 groups=docker,all mgt=docker
check:rc==0
cmd:makehosts $$DOCKERCN
check:rc==0
cmd:mkdocker $$DOCKERCN image=$$DOCKERIMAGE command=$$DOCKERCOMMAND dockerflag="{\"AttachStdin\":true,\"AttachStdout\":true,\"AttachStderr\":true,\"OpenStdin\":true,\"Tty\":true}"
check:rc==0
cmd:rpower $$DOCKERCN state
check:rc==0
check:output=~created
cmd:rpower $$DOCKERCN start
check:rc==0
check:output=~success
cmd:rpower $$DOCKERCN state
check:rc==0
check:output=~running
cmd:rmdocker $$DOCKERCN -f
check:rc==0
cmd:makehosts -d $$DOCKERCN
check:rc==0
cmd:rmdef $$DOCKERCN
check:rc==0
end
start:rpower_state
description:get state of the instance
cmd:chdef $$DOCKERCN dockerhost=$$DOCKERHOST:2375 dockercpus=1 ip=$$DOCKERCONIP dockermemory=4096 groups=docker,all mgt=docker
check:rc==0
cmd:makehosts $$DOCKERCN
check:rc==0
cmd:mkdocker $$DOCKERCN image=$$DOCKERIMAGE command=$$DOCKERCOMMAND dockerflag="{\"AttachStdin\":true,\"AttachStdout\":true,\"AttachStderr\":true,\"OpenStdin\":true,\"Tty\":true}"
check:rc==0
cmd:rpower $$DOCKERCN state
check:output=~created
cmd:rpower $$DOCKERCN restart
check:rc==0
check:output=~success
cmd:rpower $$DOCKERCN state
check:rc==0
check:output=~running
cmd:rpower $$DOCKERCN stop
check:rc==0
cmd:rpower $$DOCKERCN state
check:output=~exited
cmd:rpower $$DOCKERCN start
check:rc==0
cmd:rpower $$DOCKERCN state
check:output=~running
cmd:rmdocker $$DOCKERCN -f
check:rc==0
cmd:makehosts -d $$DOCKERCN
check:rc==0
cmd:rmdef $$DOCKERCN
check:rc==0
end
start:rpower_restart
description:restart a created docker instance
cmd:chdef $$DOCKERCN dockerhost=$$DOCKERHOST:2375 dockercpus=1 ip=$$DOCKERCONIP dockermemory=4096 groups=docker,all mgt=docker
check:rc==0
cmd:makehosts $$DOCKERCN
check:rc==0
cmd:mkdocker $$DOCKERCN image=$$DOCKERIMAGE command=$$DOCKERCOMMAND dockerflag="{\"AttachStdin\":true,\"AttachStdout\":true,\"AttachStderr\":true,\"OpenStdin\":true,\"Tty\":true}"
check:rc==0
cmd:rpower $$DOCKERCN state
check:output=~created
cmd:rpower $$DOCKERCN restart
check:rc==0
check:output=~success
cmd:sleep 6
cmd:rpower $$DOCKERCN state
check:output=~running
cmd:sleep 6
cmd:rpower $$DOCKERCN restart
check:rc==0
check:output=~success
cmd:rpower $$DOCKERCN state
check:output=~running
cmd:rmdocker $$DOCKERCN -f
check:rc==0
cmd:makehosts -d $$DOCKERCN
check:rc==0
cmd:rmdef $$DOCKERCN
check:rc==0
end
start:rpower_pause
description:pause all processes in the instance
cmd:chdef $$DOCKERCN dockerhost=$$DOCKERHOST:2375 dockercpus=1 ip=$$DOCKERCONIP dockermemory=4096 groups=docker,all mgt=docker
check:rc==0
cmd:makehosts $$DOCKERCN
check:rc==0
cmd:mkdocker $$DOCKERCN image=$$DOCKERIMAGE command=$$DOCKERCOMMAND dockerflag="{\"AttachStdin\":true,\"AttachStdout\":true,\"AttachStderr\":true,\"OpenStdin\":true,\"Tty\":true}"
check:rc==0
cmd:rpower $$DOCKERCN pause
check:rc!=0
check:output=~ Cannot pause container $$DOCKERCN
cmd:rpower $$DOCKERCN start
check:rc==0
cmd:rpower $$DOCKERCN pause
check:rc==0
cmd:rpower $$DOCKERCN state
check:output=~paused
cmd:rpower $$DOCKERCN unpause
check:rc==0
cmd:rmdocker $$DOCKERCN -f
check:rc==0
cmd:makehosts -d $$DOCKERCN
check:rc==0
cmd:rmdef $$DOCKERCN
check:rc==0
end
start:rpower_unpause
description:unpause all processes in the instance
cmd:chdef $$DOCKERCN dockerhost=$$DOCKERHOST:2375 dockercpus=1 ip=$$DOCKERCONIP dockermemory=4096 groups=docker,all mgt=docker
check:rc==0
cmd:makehosts $$DOCKERCN
check:rc==0
cmd:mkdocker $$DOCKERCN image=$$DOCKERIMAGE command=$$DOCKERCOMMAND dockerflag="{\"AttachStdin\":true,\"AttachStdout\":true,\"AttachStderr\":true,\"OpenStdin\":true,\"Tty\":true}"
check:rc==0
cmd:rpower $$DOCKERCN start
check:rc==0
cmd:rpower $$DOCKERCN pause
check:output=~paused
cmd:rpower $$DOCKERCN unpause
check:rc==0
check:output=~success
cmd:sleep 6
cmd:rpower $$DOCKERCN state
check:output=~running
cmd:sleep 6
cmd:rmdocker $$DOCKERCN -f
check:rc==0
cmd:makehosts -d $$DOCKERCN
check:rc==0
cmd:rmdef $$DOCKERCN
check:rc==0
end
start:mkdocker_h
description:output usage for mkdocker
cmd:mkdocker -h
check:rc==0
check:output=~Usage: mkdocker
end
start:mkdocker_command
description:create a docker instance; image should be ubuntu and command should be bash here
cmd:chdef $$DOCKERCN dockerhost=$$DOCKERHOST:2375 dockercpus=1 ip=$$DOCKERCONIP dockermemory=4096 groups=docker,all mgt=docker
check:rc==0
cmd:makehosts $$DOCKERCN
check:rc==0
cmd:mkdocker $$DOCKERCN image=$$DOCKERIMAGE command=$$DOCKERCOMMAND dockerflag="{\"AttachStdin\":true,\"AttachStdout\":true,\"AttachStderr\":true,\"OpenStdin\":true,\"Tty\":true}"
check:rc==0
check:output=~$$DOCKERCN: success
cmd:lsdocker $$DOCKERCN
check:rc==0
check:output=~$$DOCKERIMAGE
check:output=~$$DOCKERCOMMAND
cmd:rpower $$DOCKERCN state
check:rc==0
check:output=~$$DOCKERCN: created
cmd:rpower $$DOCKERCN start
check:rc==0
check:output=~$$DOCKERCN: success
cmd:xdsh $$DOCKERHOST "docker ps -l"
check:output=~$$DOCKERCN
check:rc==0
cmd:ping $$DOCKERCN -c 3
check:output=~64 bytes from $$DOCKERCN
cmd:rmdocker $$DOCKERCN -f
check:rc==0
cmd:makehosts -d $$DOCKERCN
check:rc==0
cmd:rmdef $$DOCKERCN
check:rc==0
end
start:rmdocker_h
description:output usage for rmdocker
cmd:rmdocker -h
check:rc==0
check:output=~Usage: rmdocker <noderange>
end
start:rmdocker_command
description:remove docker instance
cmd:chdef $$DOCKERCN dockerhost=$$DOCKERHOST:2375 dockercpus=1 ip=$$DOCKERCONIP dockermemory=4096 groups=docker,all mgt=docker
check:rc==0
cmd:makehosts $$DOCKERCN
check:rc==0
cmd:mkdocker $$DOCKERCN image=$$DOCKERIMAGE command=$$DOCKERCOMMAND dockerflag="{\"AttachStdin\":true,\"AttachStdout\":true,\"AttachStderr\":true,\"OpenStdin\":true,\"Tty\":true}"
check:rc==0
check:output=~$$DOCKERCN: success
cmd:lsdocker $$DOCKERCN
check:rc==0
cmd:rmdocker $$DOCKERCN
check:rc==0
cmd:lsdocker -l $$DOCKERCN
check:rc!=0
check:output=~ Error: No such container
cmd:makehosts -d $$DOCKERCN
check:rc==0
cmd:rmdef $$DOCKERCN
check:rc==0
end
start:rmdocker_f_command
description:force to remove docker instance
cmd:chdef $$DOCKERCN dockerhost=$$DOCKERHOST:2375 dockercpus=1 ip=$$DOCKERCONIP dockermemory=4096 groups=docker,all mgt=docker
check:rc==0
cmd:makehosts $$DOCKERCN
check:rc==0
cmd:mkdocker $$DOCKERCN image=$$DOCKERIMAGE command=$$DOCKERCOMMAND dockerflag="{\"AttachStdin\":true,\"AttachStdout\":true,\"AttachStderr\":true,\"OpenStdin\":true,\"Tty\":true}"
check:rc==0
check:output=~$$DOCKERCN: success
cmd:lsdocker $$DOCKERCN
check:rc==0
cmd:rpower $$DOCKERCN start
check:rc==0
cmd:rmdocker $$DOCKERCN
check:rc!=0
check:output=~Stop the container before attempting removal or use -f
cmd:rmdocker $$DOCKERCN -f
check:rc==0
check:output=~$$DOCKERCN: success
cmd:makehosts -d $$DOCKERCN
check:rc==0
cmd:rmdef $$DOCKERCN
check:rc==0
end
start:lsdocker_h_command
description:output usage for lsdocker
cmd:lsdocker -h
check:rc==0
check:output=~Usage: lsdocker <noderange>
end
start:lsdocker_l_command
description:list docker instance info
cmd:chdef $$DOCKERCN dockerhost=$$DOCKERHOST:2375 dockercpus=1 ip=$$DOCKERCONIP dockermemory=4096 groups=docker,all mgt=docker
check:rc==0
cmd:makehosts $$DOCKERCN
check:rc==0
cmd:mkdocker $$DOCKERCN image=$$DOCKERIMAGE command=$$DOCKERCOMMAND dockerflag="{\"AttachStdin\":true,\"AttachStdout\":true,\"AttachStderr\":true,\"OpenStdin\":true,\"Tty\":true}"
check:rc==0
cmd:lsdocker -l $$DOCKERCN
check:rc==0
check:output=~$$DOCKERCN
cmd:rmdocker $$DOCKERCN -f
check:rc==0
cmd:makehosts -d $$DOCKERCN
check:rc==0
cmd:rmdef $$DOCKERCN
check:rc==0
end
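The `$$`-prefixed placeholders in these cases are resolved from the xCAT test harness configuration file; a hypothetical set of entries (names and values here are illustrative, not taken from the source) might look like:

```
# Sample test-config entries assumed by the docker cases above
DOCKERHOST=dockerhost1
DOCKERCN=dockercn1
DOCKERCONIP=10.0.0.101
DOCKERIMAGE=ubuntu
DOCKERCOMMAND=/bin/bash
```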

View File

@ -1,10 +1,9 @@
start:Diskless_installation_flat_p8_le
os:Linux
stop=yes
cmd:copycds $$ISO
check:rc==0
cmd:if [[ "__GETNODEATTR($$CN,arch)__" != "ppc64" ]];then chdef -t node -o $$CN vmstorage=dir:///var/lib/libvirt/images/ && rmvm $$CN -f -p && mkvm $$CN -s 20G ; fi
cmd:if [ "__GETNODEATTR($$CN,arch)__" != "ppc64" -a "__GETNODEATTR($$CN,mgt)__" != "ipmi" ];then chdef -t node -o $$CN vmstorage=dir:///var/lib/libvirt/images/ && rmvm $$CN -f && mkvm $$CN ; fi
check:rc==0
cmd:makedns -n
check:rc==0
@ -26,6 +25,8 @@ cmd:packimage __GETNODEATTR($$CN,os)__-__GETNODEATTR($$CN,arch)__-netboot-comput
check:rc==0
cmd:nodeset $$CN osimage=__GETNODEATTR($$CN,os)__-__GETNODEATTR($$CN,arch)__-netboot-compute
check:rc==0
cmd:if [[ "__GETNODEATTR($$CN,mgt)__" = "ipmi" ]]; then rsetboot $$CN net; fi
check:rc==0
cmd:rpower $$CN boot
check:rc==0
cmd:sleep 200

View File

@ -1,6 +1,5 @@
start:Full_installation_flat_p8_le
os:Linux
stop=yes
cmd:copycds $$ISO
check:rc==0

View File

@ -1,6 +1,5 @@
start:reg_linux_SN_installation_hierarchy
os:Linux
stop=yes
cmd:chtab key=nameservers site.value="<xcatmaster>"
check:rc==0

View File

@ -1,12 +1,11 @@
start:reg_linux_diskfull_installation_flat
os:Linux
stop=yes
cmd:if ping -c 1 $$SN > /dev/null;then rpower $$SN off > /dev/null;fi
cmd:chdef -t node -o $$CN servicenode= monserver=$$MN nfsserver=$$MN tftpserver=$$MN xcatmaster=$$MN
check:rc==0
cmd:if [[ "__GETNODEATTR($$CN,arch)__" =~ "x86_64" ]];then chdef -t node -o $$CN vmstorage=dir:///var/lib/libvirt/images/ && rmvm $$CN -f -p && mkvm $$CN -s 15G; fi
cmd:if [ "__GETNODEATTR($$CN,arch)__" != "ppc64" -a "__GETNODEATTR($$CN,mgt)__" != "ipmi" ];then chdef -t node -o $$CN vmstorage=dir:///var/lib/libvirt/images/ && rmvm $$CN -f && mkvm $$CN ; fi
cmd:makedns -n
check:rc==0
cmd:makeconservercf
@ -14,7 +13,7 @@ check:rc==0
cmd:cat /etc/conserver.cf | grep $$CN
check:output=~$$CN
cmd:sleep 20
cmd:if [[ "__GETNODEATTR($$CN,arch)__" =~ "ppc64" ]]; then getmacs -D $$CN; fi
cmd:if [ "__GETNODEATTR($$CN,arch)__" = "ppc64" -a "__GETNODEATTR($$CN,mgt)__" != "ipmi" ]; then getmacs -D $$CN; fi
check:rc==0
cmd:makedhcp -n
check:rc==0
@ -32,10 +31,12 @@ check:rc==0
cmd:lsdef $$CN |grep provmethod
check:rc==0
check:output=~__GETNODEATTR($$CN,os)__-__GETNODEATTR($$CN,arch)__-install-compute
cmd:if [[ "__GETNODEATTR($$CN,arch)__" =~ "ppc64" ]]; then rnetboot $$CN;elif [[ "__GETNODEATTR($$CN,arch)__" =~ "x86_64" ]];then rpower $$CN boot; fi
cmd:if [[ "__GETNODEATTR($$CN,mgt)__" = "ipmi" ]]; then rsetboot $$CN net; fi
check:rc==0
cmd:if [ "__GETNODEATTR($$CN,mgt)__" != "ipmi" ];then if [ "__GETNODEATTR($$CN,arch)__" = "ppc64" ];then rnetboot $$CN;else rpower $$CN boot;fi else rpower $$CN boot;fi
check:rc==0
cmd:if [[ "__GETNODEATTR($$CN,arch)__" =~ "ppc" ]]; then sleep 1200;elif [[ "__GETNODEATTR($$CN,arch)__" =~ "x86_64" ]];then sleep 600;else sleep 180;fi
cmd:if [[ "__GETNODEATTR($$CN,mgt)__" =~ "ipmi" ]]; then sleep 1800;elif [[ "__GETNODEATTR($$CN,arch)__" =~ "ppc64" ]];then sleep 1200;else sleep 600;fi
cmd:lsdef -l $$CN | grep status
@ -52,4 +53,8 @@ check:rc==0
check:output=~\d\d:\d\d:\d\d
cmd:xdsh $$CN mount
check:rc==0
cmd:sleep 120
cmd:ping $$CN -c 3
check:rc==0
check:output=~64 bytes from $$CN
end

View File

@ -1,6 +1,5 @@
start:reg_linux_diskfull_installation_hierarchy
os:Linux
stop=yes
cmd:chdef -t node -o $$CN servicenode=$$SN monserver=$$SN nfsserver=$$SN tftpserver=$$SN xcatmaster=$$SN
check:rc==0

View File

@ -1,6 +1,5 @@
start:reg_linux_diskless_installation_flat
os:Linux
stop=yes
cmd:chdef -t node -o $$CN servicenode= monserver=$$MN nfsserver=$$MN tftpserver=$$MN xcatmaster=$$MN
check:rc==0
@ -63,6 +62,10 @@ check:output=~\d\d:\d\d:\d\d
cmd:xdsh $$CN mount
check:rc==0
check:output=~on / type tmpfs
cmd:sleep 120
cmd:ping $$CN -c 3
check:rc==0
check:output=~64 bytes from $$CN
cmd:rootimgdir=`lsdef -t osimage __GETNODEATTR($$CN,os)__-__GETNODEATTR($$CN,arch)__-netboot-compute|grep rootimgdir|awk -F'=' '{print $2}'`; if [ -d $rootimgdir.regbak ]; then rm -rf $rootimgdir; mv $rootimgdir.regbak $rootimgdir; fi
check:rc==0

View File

@ -1,6 +1,5 @@
start:reg_linux_diskless_installation_hierarchy
os:Linux
stop=yes
cmd:chdef -t node -o $$CN servicenode=$$SN monserver=$$SN nfsserver=$$SN tftpserver=$$SN xcatmaster=$$SN
check:rc==0

View File

@ -1,6 +1,5 @@
start:reg_linux_statelite_installation_flat
os:Linux
stop=yes
cmd:chdef -t node -o $$CN servicenode= monserver=$$MN nfsserver=$$MN tftpserver=$$MN xcatmaster=$$MN
check:rc==0
@ -117,6 +116,10 @@ cmd:xdsh $$CN mount
check:rc==0
check:output=~/nodedata/$$CN on /.statelite/persistent
check:output=~rootfs on / type tmpfs
cmd:sleep 120
cmd:ping $$CN -c 3
check:rc==0
check:output=~64 bytes from $$CN
cmd:rootimgdir=`lsdef -t osimage __GETNODEATTR($$CN,os)__-__GETNODEATTR($$CN,arch)__-statelite-compute|grep rootimgdir|awk -F'=' '{print $2}'`; if [ -d $rootimgdir.regbak ]; then rm -rf $rootimgdir; mv $rootimgdir.regbak $rootimgdir; fi
check:rc==0

View File

@ -1,5 +1,6 @@
start:reg_linux_statelite_installation_hierarchy
os:Linux
stop:yes
cmd:MNIP=`cat /etc/hosts|grep $$MN|awk '{print $1}'`;sed -i "s:nameserver .*:nameserver $MNIP:g" /etc/resolv.conf

View File

@ -1,6 +1,5 @@
start:reg_linux_statelite_installation_hierarchy_by_nfs
os:Linux
stop=yes
cmd:chdef -t node -o $$CN servicenode=$$SN monserver=$$SN nfsserver=$$SN tftpserver=$$SN xcatmaster=$$SN
check:rc==0

View File

@ -1,6 +1,5 @@
start:reg_linux_statelite_installation_hierarchy_by_ramdisk
os:Linux
stop=yes
cmd:chdef -t node -o $$CN servicenode=$$SN monserver=$$SN nfsserver=$$SN tftpserver=$$SN xcatmaster=$$SN
check:rc==0

View File

@ -1,10 +1,9 @@
start:Ubuntu_diskless_installation_flat_x86_vm
os:Linux
stop=yes
cmd:copycds $$ISO
check:rc==0
cmd:if [[ "__GETNODEATTR($$CN,arch)__" =~ "x86_64" ]];then chdef -t node -o $$CN vmstorage=dir:///var/lib/libvirt/images/ && rmvm $$CN -f -p && mkvm $$CN -s 15G; fi
cmd:if [[ "__GETNODEATTR($$CN,arch)__" =~ "x86_64" ]];then chdef -t node -o $$CN vmstorage=dir:///var/lib/libvirt/images/ && rmvm $$CN -f && mkvm $$CN ; fi
cmd:makedns -n
check:rc==0
cmd:makedhcp -n

View File

@ -1,6 +1,5 @@
start:Ubuntu_full_installation_flat_x86_vm
os:Linux
stop=yes
cmd:copycds $$ISO
check:rc==0

View File

@ -0,0 +1,54 @@
start:Full_installation_flat_docker
os:Linux
cmd:copycds $$ISO
check:rc==0
cmd:if [[ "__GETNODEATTR($$DOCKERHOST,arch)__" != "ppc64" ]];then chdef -t node -o $$DOCKERHOST vmstorage=dir:///var/lib/libvirt/images/ && rmvm $$DOCKERHOST -f && mkvm $$DOCKERHOST ; fi
check:rc==0
cmd:makehosts $$DOCKERHOST
check:rc==0
cmd:makedns -n
check:rc==0
cmd:sleep 60
cmd:makedhcp -n
check:rc==0
cmd:makedhcp -a
check:rc==0
cmd:makeconservercf $$DOCKERHOST
check:rc==0
cmd:cat /etc/conserver.cf | grep $$DOCKERHOST
check:output=~$$DOCKERHOST
cmd: mkdef -t osimage -o __GETNODEATTR($$DOCKERHOST,os)__-__GETNODEATTR($$DOCKERHOST,arch)__-install-dockerhost -u profile=compute provmethod=install
check:rc==0
cmd:if [[ "__GETNODEATTR($$DOCKERHOST,os)__" =~ "ubuntu" ]];then ver=`cat /etc/*-release |grep "VERSION_ID"| awk -F '"' '{print $2}'| awk -F"." '{printf "%s%s\n",$1,$2}'` ; chdef -t osimage -o __GETNODEATTR($$DOCKERHOST,os)__-__GETNODEATTR($$DOCKERHOST,arch)__-install-dockerhost otherpkgdir="https://apt.dockerproject.org/repo ubuntu-trusty main,http://cz.archive.ubuntu.com/ubuntu trusty main" otherpkglist="/install/custom/ubuntu$ver/ubuntu"$ver"_docker.pkglist" osdistroname="__GETNODEATTR($$DOCKERHOST,os)__ ";fi
check:rc==0
cmd:if [[ "__GETNODEATTR($$DOCKERHOST,os)__" =~ "ubuntu" ]];then ver=`cat /etc/*-release |grep "VERSION_ID"| awk -F '"' '{print $2}'| awk -F"." '{printf "%s%s\n",$1,$2}'` ; mkdir -p /install/custom/ubuntu$ver/ ; chdef -t osimage -o __GETNODEATTR($$DOCKERHOST,os)__-__GETNODEATTR($$DOCKERHOST,arch)__-install-dockerhost otherpkglist="/install/custom/ubuntu$ver/ubuntu"$ver"_docker.pkglist" pkglist="/install/custom/ubuntu$ver/ubuntu$ver.pkglist";fi
check:rc==0
cmd:if [[ "__GETNODEATTR($$DOCKERHOST,os)__" =~ "ubuntu" ]];then ver=`cat /etc/*-release |grep "VERSION_ID"| awk -F '"' '{print $2}'| awk -F"." '{printf "%s%s\n",$1,$2}'` ;for i in openssh-server ntp gawk nfs-common snmpd bridge-utils; do cat /install/custom/ubuntu$ver/ubuntu$ver.pkglist|grep "$i$";if [ $? -ne 0 ] ; then echo "$i" >> /install/custom/ubuntu$ver/ubuntu$ver.pkglist; fi done;fi
check:rc==0
cmd:if [[ "__GETNODEATTR($$DOCKERHOST,os)__" =~ "ubuntu14.04" ]];then ver=`cat /etc/*-release |grep "VERSION_ID"| awk -F '"' '{print $2}'| awk -F"." '{printf "%s%s\n",$1,$2}'`; for i in docker-engine;do cat /install/custom/ubuntu$ver/ubuntu"$ver"\_docker.pkglist |grep "$i$";if [ $? -ne 0 ] ; then echo "$i" >> /install/custom/ubuntu$ver/ubuntu"$ver"\_docker.pkglist;fi done;fi
check:rc==0
cmd: chdef $$DOCKERHOST -p postbootscripts="setupdockerhost mynet0=$$MYNET0VALUE@$$DOCKERHOSIP:$$NICNAME"
check:rc==0
cmd:nodeset $$DOCKERHOST osimage=__GETNODEATTR($$DOCKERHOST,os)__-__GETNODEATTR($$DOCKERHOST,arch)__-install-dockerhost
check:rc==0
cmd:rpower $$DOCKERHOST boot
check:rc==0
cmd:sleep 40
cmd:lsdef -l $$DOCKERHOST | grep status
cmd:sleep 3600
check:rc==0
cmd:ping $$DOCKERHOST -c 3
check:output=~64 bytes from $$DOCKERHOST
check:rc==0
cmd:lsdef -l $$DOCKERHOST | grep status
check:output=~booted
cmd:xdsh $$DOCKERHOST date
check:rc==0
cmd:xdsh $$DOCKERHOST "docker -v"
check:output=~Docker version
check:rc==0
cmd:xdsh $$DOCKERHOST "docker pull $$DOCKERIMAGE"
check:rc==0
cmd:rmdef -t osimage -o __GETNODEATTR($$DOCKERHOST,os)__-__GETNODEATTR($$DOCKERHOST,arch)__-install-dockerhost
check:rc==0
end

View File

@ -99,13 +99,6 @@ fi
" >> /xcatpost/mypostscript.post
fi
stopservice NetworkManager
stopservice NetworkManager-dispatcher
stopservice NetworkManager-wait-online
disableservice NetworkManager
disableservice NetworkManager-dispatcher
disableservice NetworkManager-wait-online
chmod +x /xcatpost/mypostscript.post
if [ -x /xcatpost/mypostscript.post ];then
msgutil_r "$MASTER_IP" "info" "running /xcatpost/mypostscript.post" "/var/log/xcat/xcat.log"