mirror of https://github.com/xcat2/xcat-core.git synced 2025-05-29 17:23:08 +00:00

Merge pull request #705 from whowutwut/mtms_discovery

Refactor MTMS hardware discovery section in documentation
This commit is contained in:
Xiaopeng Wang 2016-04-19 15:36:26 +08:00
commit 53f577a8a8
19 changed files with 587 additions and 395 deletions


@ -1,14 +1,10 @@
Manage Clusters
===============
This chapter introduces the procedures of how to manage a real cluster. Basically, it includes the following parts:
The following provides detailed information to help start managing your cluster using xCAT.
* Discover and Define Nodes
* Deploy/Configure OS for the Nodes
* Install/Configure Applications for the Nodes
* General System Management Work for the Nodes
The sections are organized based on hardware architecture.
You should select the proper sub-chapter according to the hardware type of your cluster. If you have a mixed cluster with multiple types of hardware, refer to the corresponding sub-chapters for each type.
.. toctree::
:maxdepth: 2


@ -0,0 +1,11 @@
Configure xCAT
==============
After installing xCAT onto the management node, configure some basic attributes for your cluster into xCAT.
.. toctree::
:maxdepth: 2
site.rst
networks.rst
password.rst


@ -0,0 +1,46 @@
Set attributes in the ``networks`` table
========================================
#. Display the network settings defined in the xCAT ``networks`` table using: ``tabdump networks`` ::
#netname,net,mask,mgtifname,gateway,dhcpserver,tftpserver,nameservers,ntpservers,logservers,
dynamicrange,staticrange,staticrangeincrement,nodehostname,ddnsdomain,vlanid,domain,
comments,disable
"10_0_0_0-255_0_0_0","10.0.0.0","255.0.0.0","eth0","10.0.0.101",,"10.4.27.5",,,,,,,,,,,,
A default network is created for the detected primary network using the same netmask and gateway. There may be additional network entries in the table for each network present on the management node where xCAT is installed.
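The ``tabdump`` output above is plain CSV with a ``#``-prefixed header, so it can be post-processed with any CSV tool. A minimal Python sketch using an abridged copy of the header and row shown above (illustrative only, not an xCAT utility):

```python
import csv
import io

# Abridged copy of the tabdump output shown above; '#' prefixes the header.
sample = (
    "#netname,net,mask,mgtifname,gateway,dhcpserver,tftpserver,nameservers\n"
    '"10_0_0_0-255_0_0_0","10.0.0.0","255.0.0.0","eth0","10.0.0.101",,"10.4.27.5",\n'
)
header_line, row_line = sample.splitlines()
header = header_line.lstrip("#").split(",")
row = next(csv.reader(io.StringIO(row_line)))
entry = dict(zip(header, row))
# entry["gateway"] is "10.0.0.101"; empty columns parse as "".
```

Empty columns (e.g. ``dhcpserver``) come back as empty strings, matching the bare commas in the dump.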
#. To define additional networks, use one of the following options:
* Use ``mkdef`` to create/update an entry into ``networks`` table. (**Recommended**)
To create a network entry for 192.168.X.X/16 with a gateway of 192.168.1.254: ::
mkdef -t network -o net1 net=192.168.0.0 mask=255.255.0.0 gateway=192.168.1.254
* Use the ``tabedit`` command to modify the networks table directly in an editor: ``tabedit networks``
* Use the ``makenetworks`` command to automatically generate an entry in the ``networks`` table
#. Verify the network statements
**Domain** and **nameserver** attributes must be configured in the ``networks`` table or in the ``site`` table for xCAT to function properly.
Initialize DHCP services
------------------------
#. Configure DHCP to listen on different network interfaces (**Optional**)
xCAT allows specifying different network interfaces that DHCP can listen on for different nodes or node groups. If this is not needed, go to the next step. To set ``dhcpinterfaces``: ::
chdef -t site dhcpinterfaces='xcatmn|eth1,eth2;service|bond0'
For more information, see ``dhcpinterfaces`` keyword in the :doc:`site </guides/admin-guides/references/man5/site.5>` table.
#. Create a new DHCP configuration file with the networks defined using the ``makedhcp`` command. ::
makedhcp -n
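The ``dhcpinterfaces`` value above packs several node-to-NIC mappings into one string: ``;`` separates entries and ``|`` separates the node or group from its comma-separated NICs. A hypothetical parser, purely to illustrate the format (not xCAT code):

```python
def parse_dhcpinterfaces(value):
    # "xcatmn|eth1,eth2;service|bond0" -> {"xcatmn": ["eth1", "eth2"], ...}
    mapping = {}
    for entry in value.split(";"):
        target, nics = entry.split("|")
        mapping[target] = nics.split(",")
    return mapping

mapping = parse_dhcpinterfaces("xcatmn|eth1,eth2;service|bond0")
```

Here ``xcatmn`` listens on ``eth1`` and ``eth2``, while nodes in group ``service`` listen on ``bond0``, matching the example above.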


@ -0,0 +1,59 @@
Configure passwords
===================
#. Configure the system password for the ``root`` user on the compute nodes.
* Set using the :doc:`chtab </guides/admin-guides/references/man8/chtab.8>` command: (**Recommended**) ::
chtab key=system passwd.username=root passwd.password=abc123
To encrypt the password using ``openssl``, use the following command: ::
chtab key=system passwd.username=root passwd.password=`openssl passwd -1 abc123`
* Directly edit the passwd table using the :doc:`tabedit </guides/admin-guides/references/man8/tabedit.8>` command.
#. Configure the passwords for Management modules of the compute nodes.
* For IPMI/BMC managed systems: ::
chtab key=ipmi passwd.username=USERID passwd.password=PASSW0RD
* For HMC managed systems: ::
chtab key=hmc passwd.username=hscroot passwd.password=abc123
The username and password for the HMC can be assigned directly to the HMC node object definition in xCAT. This is needed when the HMC username/password is different for each HMC. ::
mkdef -t node -o hmc1 groups=hmc,all nodetype=ppc hwtype=hmc mgt=hmc \
username=hscroot password=hmcPassw0rd
* For Blade managed systems: ::
chtab key=blade passwd.username=USERID passwd.password=PASSW0RD
* For FSP/BPA (Flexible Service Processor/Bulk Power Assembly), if the passwords are set to the factory defaults, you must change them before running any commands against them. ::
rspconfig frame general_passwd=general,<newpassword>
rspconfig frame admin_passwd=admin,<newpassword>
rspconfig frame HMC_passwd=,<newpassword>
#. If the REST API is being used, configure a user and set a policy rule in xCAT.
#. Create a non-root user that will be used to make the REST API calls. ::
useradd xcatws
passwd xcatws # set the password
#. Create an entry for the user into the xCAT ``passwd`` table. ::
chtab key=xcat passwd.username=xcatws passwd.password=<xcatws_password>
#. Set a policy in the xCAT ``policy`` table to allow the user to make calls against xCAT. ::
mkdef -t policy 6 name=xcatws rule=allow
When making calls to the xCAT REST API, pass in the credentials using the following attributes: ``userName`` and ``userPW``
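For illustration, a request URL carrying those credentials might be built as follows. The host and endpoint path here are hypothetical; only the ``userName``/``userPW`` attribute names come from the text above:

```python
from urllib.parse import urlencode

# Hypothetical management node hostname and endpoint path; the credential
# attribute names (userName/userPW) are the ones documented above.
base = "https://xcatmn/xcatws/nodes/cn01"
url = base + "?" + urlencode({"userName": "xcatws", "userPW": "secret"})
```

``urlencode`` also takes care of percent-escaping any special characters in the password.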


@ -0,0 +1,36 @@
Set attributes in the ``site`` table
====================================
#. Verify the following attributes have been correctly set in the xCAT ``site`` table.
* domain
* forwarders
* master [#]_
* nameservers
For more information on the keywords, see the DHCP ATTRIBUTES in the :doc:`site </guides/admin-guides/references/man5/site.5>` table.
If the fields are not set or need to be changed, use the xCAT ``chdef`` command: ::
chdef -t site domain="domain_string"
chdef -t site forwarders="forwarders"
chdef -t site master="xcat_master_ip"
chdef -t site nameservers="nameserver1,nameserver2,etc"
.. [#] The value of the ``master`` attribute in the site table should be set as the IP address of the management node responsible for the compute node.
Initialize DNS services
-----------------------
#. Initialize the DNS [#]_ services on the xCAT Management Node: ::
makedns -n
Verify DNS is working by running ``nslookup`` against your Management Node: ::
nslookup <management_node_hostname>
For more information on DNS, refer to :ref:`dns_label`
.. [#] Setting up name resolution and the ability to have hostname resolved to IP addresses is **required** for xCAT.


@ -1,251 +0,0 @@
Configure xCAT
==============
After installing the xCAT packages on the management node, you have to configure the management node first. This document introduces how to configure the environment before you can use xCAT normally.
Here is a summary of the steps required for the xCAT management node.
::
1.Check Site Table
2.Check Networks
3.Configure Password Table
4.Initialize DHCP
Check Site Table
----------------
After xCAT is installed, the site table should be checked. Verify the following attributes and make sure they are correctly set. ::
domain: The DNS domain name (e.g. cluster.com).
nameservers: A comma delimited list of DNS servers that each node in this network should use. This value will end up in the nameserver settings of the /etc/resolv.conf on each node in this network. If this attribute value is set to the IP address of an xCAT node, make sure DNS is running on it. In a hierarchical cluster, you can also set this attribute to "<xcatmaster>" to mean the DNS server for each node in this network should be the node that is managing it (either its service node or the management node). Used in creating the DHCP network definition, and DNS configuration.
forwarders: The DNS servers at your site that can provide names outside of the cluster. The makedns command will configure the DNS on the management node to forward requests it does not know to these servers. Note that the DNS servers on the service nodes will ignore this value and always be configured to forward requests to the management node.
master: The hostname of the xCAT management node, as known by the nodes.
1. Before the xCAT build is installed, the management hostname and domain name should be configured in the DNS configuration file **/etc/resolv.conf**. After xCAT is installed, nameserver, master, domain and forwarders will be set correctly in the site table.
1.1. Before installing xCAT:
* Modify the **resolv.conf** file as in example1:
::
cat /etc/resolv.conf
; generated by /usr/sbin/dhclient-script
search <xcat_dom>
nameserver <Management_Node_Ip>
nameserver <Forwarder_ip>
* Configure the hostname so that running ``hostname`` returns the machine hostname, as in example2:
::
mn:~ # hostname
mn
* Configure the domain name so that running ``hostname -d`` returns the domain name, as in example3:
::
[root@mn ~]# hostname -d
pok.stglabs.ibm.com
1.2 After xCAT is installed:
* Use ``tabdump site`` to check the ``site`` table; the output will look like example4:
::
"domain","<xcat_dom>",,
"forwarders","<Forwarder_ip>",,
"master","<Management_Node_Ip>",,
"nameservers","<Management_Node_Ip>",,
2. If the settings in item 1 above were not configured before xCAT was installed, the output of ``tabdump site`` will be as follows: ::
"domain"," ",,
"forwarders",,,
"master","NORESOLUTION",,
"nameservers","NORESOLUTION"
* In this situation, configure the **/etc/resolv.conf** file according to example1. Then use the ``chdef`` command (e.g. ``chdef -t site master=<management_node_ip>``) or ``tabedit site`` to configure the site table according to example4.
3. After the site table is configured
* Initialize DNS using: ::
makedns -n
* Verify DNS works using: ::
nslookup <Management_Node_Hostname>
* This returns the management node hostname and resolved IP. Here is an example: ::
c910f04x27v05:~ # nslookup c910f04x27v05
Server: 10.4.27.5
Address: 10.4.27.5#53
Name: c910f04x27v05.pok.stglabs.ibm.com
Address: 10.4.27.5
**Note**:
#. The value of the master attribute in the site table can be set to either the management node IP or the service node IP.
#. Setting up name resolution and having node hostnames resolve to IP addresses is required in xCAT clusters.
#. Set site.forwarders to your site-wide DNS servers that can resolve site or public hostnames. The DNS on the MN will forward any requests it can't answer to these servers.
#. For more DNS explanation, refer to :ref:`dns_label`
Check Networks
--------------
Check the networks table: ::
tabdump networks
The output is as follows: ::
#netname,net,mask,mgtifname,gateway,dhcpserver,tftpserver,nameservers,ntpservers,logservers,dynamicrange,staticrange,staticrangeincrement,nodehostname,ddnsdomain,vlanid,domain,comments,disable
"10_0_0_0-255_0_0_0","10.0.0.0","255.0.0.0","eth0","10.0.0.103",,"10.4.27.5",,,,,,,,,,,,
**Note**: The networks table will be populated after xCAT is installed, using the default net, mask, and gateway.
1. If the cluster-facing NICs were not configured when xCAT was installed, or if there are more networks in the cluster that are only reachable via the service nodes or compute nodes, use the options below to create network definitions (e.g. 50.3.5.5).
1.1. (Optional) How to configure the networks table:
* Use ``mkdef`` to update the networks table. ::
mkdef -t network -o net1 net=9.114.0.0 mask=255.255.255.224 gateway=9.114.113.254
net The network address.
mask The network mask.
gateway The network gateway.
* Or use ``tabedit`` to modify the networks table. ::
tabedit networks
* Or use the ``makenetworks`` command to automatically generate networks table entries. ::
makenetworks
1.2. Verify the networks table looks similar to:
::
# tabdump networks
#netname,net,mask,mgtifname,gateway,dhcpserver,tftpserver,nameservers,ntpservers,logservers,dynamicrange,nodehostname,comments,disable
"50_0_0_0-255_0_0_0","50.0.0.0","255.0.0.0","eth1","<xcatmaster>",,"50.3.5.5",,,,,,,,,,,,
**Note**: Domain and nameservers values must be provided either in the network definition corresponding to the node or in the site definition.
Configure Password Table
-------------------------
The password should be set in the passwd table that will be assigned to root when the node is installed. You can modify this table using ``tabedit``. To change the default password for root on the nodes, change the system line. ::
tabedit passwd
#key,username,password,cryptmethod,comments,disable
"system","root","cluster",,,
"hmc","hscroot","ABC123",,,
Or ::
chtab key=system passwd.username=root passwd.password=cluster
**Note**:
#. Currently xCAT puts the root password on the node only during install. It is taken from the passwd table where key=system. The new subcluster support requires a unique password for each subcluster to be installed.
#. The xCAT database needs to contain the proper authentication working with hmc/blade/ipmi userid and password. Example for passwd set up:
::
chtab key=hmc passwd.username=hscroot passwd.password=abc123
or
chtab key=blade passwd.username=USERID passwd.password=PASSW0RD
or
chtab key=ipmi passwd.username=USERID passwd.password=PASSW0RD
#. (Optional) If the BPA passwords are still the factory defaults, you must change them before running any other commands against them.
::
rspconfig frame general_passwd=general,<newpd>
rspconfig frame admin_passwd=admin,<newpd>
rspconfig frame HMC_passwd=,<newpd>
#. (Optional) The username and password for xCAT to access an HMC can also be assigned directly to the HMC node object using the ``mkdef`` or ``chdef`` commands. This assignment is useful when a specific HMC has a username and/or password that is different from the default one specified in the passwd table. For example, to create an HMC node object and set a unique username or password for it:
::
mkdef -t node -o hmc1 groups=hmc,all nodetype=ppc hwtype=hmc mgt=hmc username=hscroot password=abc1234
Or to change it if the HMC definition already exists:
chdef -t node -o hmc1 username=hscroot password=abc1234
#. (Optional) The REST API calls need to provide a username and password. When this request is passed to xcatd, it will first verify that this user/password pair is in the xCAT passwd table, and then xcatd will look in the policy table to see if that user is allowed to do the requested operation.
* The account whose key is xcat will be used for REST API authentication. The username and password should be passed in with the following attributes. ::
userName: Pass the username of the account
userPW: Pass the password of the account
* Use a non-root account: create a new user and set up the password and policy rules. ::
useradd wsuser
passwd wsuser # set the password
tabch key=xcat,username=wsuser passwd.password=cluster
mkdef -t policy 6 name=wsuser rule=allow
* Use root account: ::
tabch key=xcat,username=root passwd.password=<root-pw>
Initialize DHCP
---------------
Initialize DHCP service
~~~~~~~~~~~~~~~~~~~~~~~
Create a new dhcp configuration file with a network statement for each network the dhcp daemon should listen on. ::
makedhcp -n
(Optional) Set up the DHCP interfaces in the site table
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To set up the site table dhcp interfaces for your system p cluster, identify the correct interfaces that xCAT should listen to on your management node and service nodes. ::
chdef -t site dhcpinterfaces='pmanagenode|eth1;service|eth0'
makedhcp -n
dhcpinterfaces: The network interfaces DHCP should listen on. If it is the same for all nodes, use a simple comma-separated list of NICs. To specify different NICs for different nodes: ``xcatmn|eth1,eth2;service|bond0``. In this example, xcatmn is the name of the xCAT MN, and DHCP there should listen on eth1 and eth2. On all of the nodes in group 'service', DHCP should listen on the bond0 NIC.
**Note**: To verify that makedhcp worked, check the nic, domain-name, and domain-name-servers entries in dhcpd.conf, for example: ::
shared-network nic {
subnet 10.0.0.0 netmask 255.0.0.0 {
authoritative;
max-lease-time 43200;
min-lease-time 43200;
default-lease-time 43200;
option routers 10.2.1.12;
next-server 10.2.1.13;
option log-servers <Management_Node_Ip>;
option ntp-servers <Management_Node_Ip>;
option domain-name "<xcat_dom>";
option domain-name-servers <Management_Node_Ip>;
option domain-search "pok.stglabs.ibm.com";
zone pok.stglabs.ibm.com. {
primary 10.2.1.13; key xcat_key;
}
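The note above lists the entries to check in ``dhcpd.conf``. A throwaway sketch of such a check, assuming simple substring matching is enough (this is not an xCAT tool):

```python
# Options the note above asks you to verify in dhcpd.conf.
REQUIRED = ("option domain-name ", "option domain-name-servers ", "next-server ")

def missing_options(conf_text):
    """Return the required options not found in the config text."""
    return [opt for opt in REQUIRED if opt not in conf_text]

sample = '''
shared-network nic {
    subnet 10.0.0.0 netmask 255.0.0.0 {
        next-server 10.2.1.13;
        option domain-name "cluster.com";
        option domain-name-servers 10.2.1.13;
    }
}
'''
# missing_options(sample) -> []
```

An empty result means all three entries were found; otherwise the missing option names are returned.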


@ -4,7 +4,6 @@ Configure xCAT
Configure network table
```````````````````````
Normally, there will be at least two entries for the two subnets on the MN in the ``networks`` table after xCAT is installed::
#tabdump networks
@ -50,18 +49,10 @@ For hardware management with ipmi, add the following line::
Verify the genesis packages
```````````````````````````
Genesis packages are used to **create the root image for network boot** and **MUST** be installed before doing hardware discovery.
The **xcat-genesis** packages should have been installed when xCAT was installed; their absence will cause problems. The **xcat-genesis** packages are required to create the genesis root image used for hardware discovery, and the genesis kernel sits in ``/tftpboot/xcat/``. Verify that the ``genesis-scripts`` and ``genesis-base`` packages are installed:
* **[RH]**::
* **[RHEL/SLES]**: ``rpm -qa | grep -i genesis``
# rpm -qa |grep -i genesis
xCAT-genesis-scripts-ppc64-2.10-snap201507240527.noarch
xCAT-genesis-base-ppc64-2.10-snap201505172314.noarch
* **[Ubuntu]**: ``dpkg -l | grep -i genesis``
* **[ubuntu]**::
# dpkg -l | grep genesis
ii xcat-genesis-base-ppc64 2.10-snap201505172314 all xCAT Genesis netboot image
ii xcat-genesis-scripts 2.10-snap201507240105 ppc64el xCAT genesis
**Note:** If the two packages are not installed, install them first and then run ``mknb ppc64`` to create the network boot root image.
If missing, install them from the ``xcat-dep`` package and run ``mknb ppc64`` to create the genesis network boot root image.


@ -1,11 +1,20 @@
Hardware Discovery & Define Node
================================
Having the servers defined as **Node Objects** in xCAT is the first step in managing a cluster.
In order to manage machines using xCAT, the machines need to be defined as xCAT ``node objects`` in the database. The :doc:`xCAT Objects </guides/admin-guides/basic_concepts/xcat_object/index>` documentation describes the process for manually creating ``node objects`` one by one using the xCAT ``mkdef`` command. This is valid when managing a small-sized cluster, but can be error-prone and cumbersome when managing large clusters.
In the chapter :doc:`xCAT Object <../../../basic_concepts/xcat_object/index>`, it describes how to create a **Node Object** through the ``mkdef`` command. You can collect all the necessary information about the target servers and define them as **xCAT Node Objects** by manually running ``mkdef``. This is doable when you have a small cluster with fewer than 10 servers, but it is error-prone and inefficient to manually configure the SP (like the BMC) and collect information for a large number of servers.
xCAT provides several *automatic hardware discovery* methods to assist with hardware discovery by helping to simplify the process of detecting service processors (SP) and collecting various server information. The following are methods that xCAT supports:
.. toctree::
:maxdepth: 2
mtms/index.rst
switch_discovery.rst
seq_discovery.rst
manually_define.rst
manually_discovery.rst
xCAT offers several powerful **Automatic Hardware Discovery** methods to simplify the procedure of SP configuration and server information collection. If your managed cluster has more than 10 servers, automatic discovery is worth trying. If your cluster has more than 50 servers, automatic discovery is recommended.
The following are the brief characteristics and applicability of each method; select the proper one according to your cluster size and other considerations.
@ -73,12 +82,3 @@ Following are the brief characteristics and adaptability of each method, you can
You have to boot the nodes strictly in order if you want each node to get the expected name. Generally, you have to wait for the discovery process to finish before powering on the next one.
.. toctree::
:maxdepth: 2
manually_define.rst
mtms_discovery.rst
switch_discovery.rst
seq_discovery.rst
manually_discovery.rst


@ -0,0 +1,14 @@
Discovery
=========
When the IPMI-based servers are connected to power, the BMCs will boot up and attempt to obtain an IP address from an open range dhcp server on your network. In the case of xCAT managed networks, xCAT should be configured to serve an open range of dhcp IP addresses with the ``dynamicrange`` attribute in the networks table.
When the BMCs have an IP address and are pingable from the xCAT management node, administrators can discover the BMCs using xCAT's :doc:`bmcdiscover </guides/admin-guides/references/man1/bmcdiscover.1>` command and obtain basic information to start the hardware discovery process.
xCAT hardware discovery uses the xCAT genesis kernel (diskless) to discover additional attributes of the compute node and automatically populate the node definitions in xCAT.
.. toctree::
:maxdepth: 2
discovery_using_defined.rst
discovery_using_dhcp.rst


@ -0,0 +1,144 @@
Set static BMC IP using different IP address (recommended)
==========================================================
The following example outlines the MTMS based hardware discovery for a single IPMI-based compute node.
+------------------------------+------------+
| Compute Node Information | Value |
+==============================+============+
| Model Type | 8247-22l |
+------------------------------+------------+
| Serial Number | 10112CA |
+------------------------------+------------+
| Hostname | cn01 |
+------------------------------+------------+
| IP address | 10.1.2.1 |
+------------------------------+------------+
The BMC IP address is obtained by the open range dhcp server and the plan in this scenario is to change the IP address for the BMC to a static IP address in a different subnet than the open range addresses. The static IP address in this example is in the same subnet as the open range to simplify the networking configuration on the xCAT management node.
+------------------------------+------------+
| BMC Information | Value |
+==============================+============+
| IP address - dhcp | 172.30.0.1 |
+------------------------------+------------+
| IP address - static | 172.20.2.1 |
+------------------------------+------------+
#. Detect the BMCs and add the node definitions into xCAT.
Use the ``bmcdiscover`` command to discover the BMCs responding over an IP range. You **must** use the ``-t`` option to indicate that the node type is bmc and the ``-w`` option to automatically write the output into the xCAT database.
To discover the BMC with an IP address of 172.30.0.1, use the command: ::
bmcdiscover --range 172.30.0.1 -t -z -w
The discovered nodes will be written to xCAT database: ::
# lsdef node-8247-22l-10112ca
Object name: node-8247-22l-10112ca
bmc=172.30.0.1
cons=ipmi
groups=all
hwtype=bmc
mgt=ipmi
mtm=8247-22L
nodetype=mp
postbootscripts=otherpkgs
postscripts=syslog,remoteshell,syncfiles
serial=10112CA
#. **Pre-define** the compute nodes:
Use the ``bmcdiscover`` command to help discover the nodes over an IP range and easily create a starting file to define the compute nodes into xCAT.
To discover the compute nodes for the BMCs with an IP address of 172.30.0.1, use the command: ::
bmcdiscover --range 172.30.0.1 -z > predefined.stanzas
The discovered nodes have the naming convention: node-<*model-type*>-<*serial-number*> ::
# cat predefined.stanzas
node-8247-22l-10112ca:
objtype=node
groups=all
bmc=172.30.0.1
cons=ipmi
mgt=ipmi
mtm=8247-22L
serial=10112CA
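The naming convention above (node-<*model-type*>-<*serial-number*>, lowercased) can be reproduced with a one-liner. A sketch of the convention, not xCAT's actual code:

```python
def default_node_name(mtm, serial):
    # node-<model-type>-<serial-number>, lowercased per the convention above
    return "node-{}-{}".format(mtm, serial).lower()

# The MTM and serial values from the example table above:
name = default_node_name("8247-22L", "10112CA")
# name == "node-8247-22l-10112ca"
```

This matches the discovered object name ``node-8247-22l-10112ca`` shown in the stanza output.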
#. Edit the ``predefined.stanzas`` file and change the discovered nodes to the intended ``hostname`` and ``IP address``.
#. Edit the ``predefined.stanzas`` file: ::
vi predefined.stanzas
#. Rename the discovered object names to their intended compute node hostnames based on the MTMS mapping: ::
node-8247-22l-10112ca ==> cn01
#. Add an ``ip`` attribute and give it the compute node IP address: ::
ip=10.1.2.1
#. Repeat for additional nodes in the ``predefined.stanzas`` file based on the MTMS mapping.
In this example, the ``predefined.stanzas`` file now looks like the following: ::
# cat predefined.stanzas
cn01:
objtype=node
groups=all
bmc=172.30.0.1
cons=ipmi
mgt=ipmi
mtm=8247-22L
serial=10112CA
ip=10.1.2.1
#. Set the chain table to run the ``bmcsetup`` script; this will set the BMC IP address to static. ::
chdef cn01 chain="runcmd=bmcsetup"
#. Change the BMC IP address
Set the BMC IP address to a different value for the **predefined** compute node definitions.
To change the dhcp obtained IP address of 172.30.0.1 to a static IP address of 172.20.2.1, run the following command: ::
chdef cn01 bmc=172.20.2.1
#. Define the compute nodes into xCAT: ::
cat predefined.stanzas | mkdef -z
#. Add the compute node IP information to ``/etc/hosts``: ::
makehosts cn01
#. Refresh the DNS configuration for the new hosts: ::
makedns -n
#. **[Optional]** Monitor the node discovery process using rcons
Configure the conserver for the **discovered** node to watch the discovery process using ``rcons``::
makeconservercf node-8247-22l-10112ca
In another terminal window, open the remote console: ::
rcons node-8247-22l-10112ca
#. Start the discovery process by booting the **discovered** node definition: ::
rsetboot node-8247-22l-10112ca net
rpower node-8247-22l-10112ca on
#. The discovery process will network boot the machine into the diskless xCAT genesis kernel and perform the discovery process. When the discovery process is complete, doing ``lsdef`` on the compute nodes should show discovered attributes for the machine. The important ``mac`` information should be discovered, which is necessary for xCAT to perform OS provisioning.


@ -0,0 +1,113 @@
Set static BMC IP using dhcp provided IP address
================================================
The following example outlines the MTMS based hardware discovery for a single IPMI-based compute node.
+------------------------------+------------+
| Compute Node Information | Value |
+==============================+============+
| Model Type | 8247-22l |
+------------------------------+------------+
| Serial Number | 10112CA |
+------------------------------+------------+
| Hostname | cn01 |
+------------------------------+------------+
| IP address | 10.1.2.1 |
+------------------------------+------------+
The BMC IP address is obtained from the open range dhcp server, and the plan is to keep the same IP address but configure it as static in the BMC.
+------------------------------+------------+
| BMC Information | Value |
+==============================+============+
| IP address - dhcp | 172.30.0.1 |
+------------------------------+------------+
| IP address - static | 172.30.0.1 |
+------------------------------+------------+
#. **Pre-define** the compute nodes:
Use the ``bmcdiscover`` command to help discover the nodes over an IP range and easily create a starting file to define the compute nodes into xCAT.
To discover the compute nodes for the BMCs with an IP address of 172.30.0.1, use the command: ::
bmcdiscover --range 172.30.0.1 -z > predefined.stanzas
The discovered nodes have the naming convention: node-<*model-type*>-<*serial-number*> ::
# cat predefined.stanzas
node-8247-22l-10112ca:
objtype=node
groups=all
bmc=172.30.0.1
cons=ipmi
mgt=ipmi
mtm=8247-22L
serial=10112CA
#. Edit the ``predefined.stanzas`` file and change the discovered nodes to the intended ``hostname`` and ``IP address``.
#. Edit the ``predefined.stanzas`` file: ::
vi predefined.stanzas
#. Rename the discovered object names to their intended compute node hostnames based on the MTMS mapping: ::
node-8247-22l-10112ca ==> cn01
#. Add an ``ip`` attribute and give it the compute node IP address: ::
ip=10.1.2.1
#. Repeat for additional nodes in the ``predefined.stanzas`` file based on the MTMS mapping.
In this example, the ``predefined.stanzas`` file now looks like the following: ::
# cat predefined.stanzas
cn01:
objtype=node
groups=all
bmc=172.30.0.1
cons=ipmi
mgt=ipmi
mtm=8247-22L
serial=10112CA
ip=10.1.2.1
#. Set the chain table to run the ``bmcsetup`` script; this will set the BMC IP address to static. ::
chdef cn01 chain="runcmd=bmcsetup"
#. Define the compute nodes into xCAT: ::
cat predefined.stanzas | mkdef -z
#. Add the compute node IP information to ``/etc/hosts``: ::
makehosts cn01
#. Refresh the DNS configuration for the new hosts: ::
makedns -n
#. **[Optional]** Monitor the node discovery process using rcons
Configure the conserver for the **predefined** node to watch the discovery process using ``rcons``::
makeconservercf cn01
In another terminal window, open the remote console: ::
rcons cn01
#. Start the discovery process by booting the **predefined** node definition: ::
rsetboot cn01 net
rpower cn01 on
#. The discovery process will network boot the machine into the diskless xCAT genesis kernel and perform the discovery process. When the discovery process is complete, doing ``lsdef`` on the compute nodes should show discovered attributes for the machine. The important ``mac`` information should be discovered, which is necessary for xCAT to perform OS provisioning.


@ -0,0 +1,27 @@
MTMS-based Discovery
====================
MTMS stands for **M**\ achine **T**\ ype/\ **M**\ odel and **S**\ erial. This is one way to uniquely identify each physical server.
MTMS-based hardware discovery assumes the administrator has the model type and serial number information for the physical servers and a plan for mapping the servers to their intended hostname/IP addresses.
**Overview**
#. Automatically search and collect MTMS information from the servers
#. Write **discovered-bmc-nodes** to xCAT (recommended when setting a different BMC IP address)
#. Create **predefined-compute-nodes** in xCAT, providing additional properties
#. Power on the nodes, which triggers the xCAT hardware discovery engine
**Pros**
* Limited effort to get servers defined using xCAT hardware discovery engine
**Cons**
* When compared to switch-based discovery, the administrator needs to create the **predefined-compute-nodes** for each of the **discovered-bmc-nodes**. This could become difficult for a large number of servers.
.. toctree::
:maxdepth: 2
verification.rst
discovery.rst


@ -0,0 +1,53 @@
Verification
============
Before starting hardware discovery, ensure the following is configured to make the discovery process as smooth as possible.
Password Table
--------------
In order to communicate with IPMI-based hardware (with BMCs), verify that the xCAT ``passwd`` table contains an entry for ``ipmi`` which defines the default username and password to communicate with the IPMI-based servers. ::
tabdump passwd | grep ipmi
If not configured, use the following command to set ``username=ADMIN`` and ``password=admin``. ::
chtab key=ipmi passwd.username=ADMIN passwd.password=admin
Genesis Package
---------------
The **xCAT-genesis** packages provide the utility to create the genesis network boot root image used by xCAT when doing hardware discovery. They should be installed during the xCAT install; their absence will cause problems.
Verify that the ``genesis-scripts`` and ``genesis-base`` packages are installed:
* **[RHEL/SLES]**: ::
rpm -qa | grep -i genesis
* **[Ubuntu]**: ::
dpkg -l | grep -i genesis
If missing:
#. Install them from the ``xcat-dep`` repository using the operating-system-specific package manager (``yum``, ``zypper``, ``apt-get``, etc.)
* **[RHEL]**: ::
yum install xCAT-genesis
* **[SLES]**: ::
zypper install xCAT-genesis
* **[Ubuntu]**: ::
apt-get install xCAT-genesis
#. Create the network boot root image with the following command: ``mknb ppc64``.
The genesis kernel should be copied to ``/tftpboot/xcat``.

View File

@ -1,80 +0,0 @@
MTMS-based Discovery
====================
MTMS stands for Machine Type/Model and Serial, which is unique for each physical server. The idea behind MTMS-based hardware discovery is that the administrator knows the physical location of the server with a given MTMS, and can therefore assign a node name and host IP address to that physical server.
.. include:: schedule_environment.rst
.. include:: config_environment.rst
Discover server and define
--------------------------
After the environment is ready and the server is powered on, the server discovery process can be started. The first step is to discover the FSP/BMC of the server; it is powered on automatically when the physical server is connected to power.
The following command can be used to discover BMC(s) within an IP range and write the discovered node definition(s) into a stanza file::
bmcdiscover -s nmap --range 50.0.100.1-100 -z > ./bmc.stanza
**Note**: ``bmcdiscover`` will use the username/password pair set in the ``passwd`` table with **key** equal to **ipmi**. To use a different username/password, run ::
bmcdiscover -s nmap --range 50.0.100.1-100 -z -u <username> -p <password> > ./bmc.stanza
Modify the node definition(s) in the stanza file before using them. The stanza file will look similar to this::
# cat pbmc.stanza
cn1:
objtype=node
bmc=50.0.100.1
mtm=8247-42L
serial=10112CA
groups=pbmc,all
mgt=ipmi
Then, define it in the xCAT database::
# cat pbmc.stanza | mkdef -z
1 object definitions have been created or modified.
The server definition will look like this::
# lsdef cn1
Object name: cn1
bmc=50.0.100.1
groups=pbmc,all
hidden=0
mgt=ipmi
mtm=8247-42L
nodetype=mp
postbootscripts=otherpkgs
postscripts=syslog,remoteshell,syncfiles
serial=10112CA
After the physical server is defined in the xCAT database, update the node definition with the desired node attributes, for example::
chdef cn1 ip=10.0.101.1
In order to do BMC configuration during the discovery process, set ``runcmd=bmcsetup``. For more information about chain, refer to :doc:`Chain <../../../../../advanced/chain/index>` ::
chdef cn1 chain="runcmd=bmcsetup"
Then, add the node information to ``/etc/hosts`` and DNS::
makehosts cn1
makedns -n
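
**[Optional]** To verify name resolution for the new node, check both the hosts file and DNS; the node name used here is from the example above: ::

    grep cn1 /etc/hosts
    nslookup cn1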
Start discovery process
-----------------------
To start the discovery process, power on the host remotely with the following command; the discovery process will start automatically after the host is powered on::
rpower cn1 on
**[Optional]** To monitor the discovery process, use::
chdef cn1 cons=ipmi
makeconservercf
rcons cn1
.. include:: standard_cn_definition.rst

View File

@ -1,14 +1,14 @@
IBM Power LE / OpenPOWER
=========================
This chapter introduces the procedure for managing an IBM Power LE/OpenPOWER cluster. Generally speaking, the **Compute Node** processor is **IBM POWER** based and the management module is **BMC** based.
The following sections document the procedures for managing IBM Power LE (Little Endian) / OpenPOWER servers in an xCAT cluster.
These machines use the IBM Power architecture and are **IPMI** managed.
New users are recommended to read this chapter in order, since later sections depend on the results of previous sections.
.. toctree::
:maxdepth: 2
configure_xcat.rst
configure/index.rst
discovery/index.rst
management.rst
diskful/index.rst

View File

@ -33,16 +33,24 @@ xCAT uses the apt package manager on Ubuntu Linux distributions to install and r
gpgcheck=1
Set up Network
--------------
Configure the Management Node
-----------------------------
The Management Node IP address should be set to a **static** IP address.
Setting properties on the Management Node before installing the xCAT software allows xCAT to automatically configure key attributes in the xCAT ``site`` table during the install.
Modify the ``interfaces`` file in ``/etc/network`` and configure a static IP address. ::
# The primary network interface
auto eth0
iface eth0 inet static
address 10.3.31.11
netmask 255.0.0.0
#. Ensure a hostname is configured on the management node by issuing the ``hostname`` command. [*It's recommended to use a fully qualified domain name (FQDN) when setting the hostname*]
#. To set the hostname of *xcatmn.cluster.com*: ::
hostname xcatmn.cluster.com
#. Add the hostname to ``/etc/hostname`` and ``/etc/hosts`` to persist the hostname across reboots.
#. Reboot, or run ``service hostname restart``, to allow the hostname to take effect, then verify that the ``hostname`` commands return correctly:
* ``hostname``
* ``hostname -d`` - should display the domain
#. Reduce the risk of the Management Node IP address being lost by setting the interface IP to **STATIC** in the ``/etc/network/interfaces`` configuration file.
#. Configure any domain search strings and nameservers using the ``resolvconf`` command.
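
With ``resolvconf``, the search domain and nameservers can be persisted by adding them to ``/etc/resolvconf/resolv.conf.d/base`` and regenerating ``/etc/resolv.conf``; the domain and address below are examples: ::

    # /etc/resolvconf/resolv.conf.d/base
    search cluster.com
    nameserver 10.3.31.11

    # apply the changes
    resolvconf -u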

View File

@ -3,6 +3,9 @@
For the current list of operating systems supported and verified by the development team for the different releases of xCAT, see the :doc:`xCAT2 Release Notes </overview/xcat2_release>`.
**Disclaimer** These instructions are intended to only be guidelines and specific details may differ slightly based on the operating system version. Always refer to the operating system documentation for the latest recommended procedures.
.. END_see_release_notes
.. BEGIN_install_os_mgmt_node
@ -26,14 +29,6 @@ The system requirements for your xCAT management node largely depend on the size
.. END_install_os_mgmt_node
.. BEGIN_setup_mgmt_node_network
The Management Node IP address should be set to a **static** IP address.
Modify the ``ifcfg-<device>`` file in ``/etc/sysconfig/network-scripts`` and configure a static IP address.
.. END_setup_mgmt_node_network
.. BEGIN_install_xcat_introduction
xCAT consists of two software packages: ``xcat-core`` and ``xcat-dep``

View File

@ -15,9 +15,8 @@ Configure the Base OS Repository
xCAT uses the yum package manager on RHEL Linux distributions to install and resolve dependency packages provided by the base operating system. Follow this section to create the repository for the base operating system on the Management Node
#. Copy the DVD iso file to ``/tmp`` on the Management Node: ::
# This example will use RHEL-LE-7.1-20150219.1-Server-ppc64le-dvd1.iso
#. Copy the DVD iso file to ``/tmp`` on the Management Node.
This example will use file ``RHEL-LE-7.1-20150219.1-Server-ppc64le-dvd1.iso``
#. Mount the iso to ``/mnt/iso/rhels7.1`` on the Management Node. ::
@ -33,10 +32,26 @@ xCAT uses the yum package manager on RHEL Linux distributions to install and res
gpgcheck=1
Set up Network
--------------
Configure the Management Node
-----------------------------
.. include:: ../common_sections.rst
:start-after: BEGIN_setup_mgmt_node_network
:end-before: END_setup_mgmt_node_network
Setting properties on the Management Node before installing the xCAT software allows xCAT to automatically configure key attributes in the xCAT ``site`` table during the install.
#. Ensure a hostname is configured on the management node by issuing the ``hostname`` command. [*It's recommended to use a fully qualified domain name (FQDN) when setting the hostname*]
#. To set the hostname of *xcatmn.cluster.com*: ::
hostname xcatmn.cluster.com
#. Add the hostname to ``/etc/sysconfig/network`` in order to persist the hostname across reboots.
#. Reboot the server and verify the hostname by running the following commands:
* ``hostname``
* ``hostname -d`` - should display the domain
#. Reduce the risk of the Management Node IP address being lost by setting the IP to **STATIC** in the ``/etc/sysconfig/network-scripts/ifcfg-<dev>`` configuration files.
#. Configure any domain search strings and nameservers in the ``/etc/resolv.conf`` file.
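
A static IP configuration in the ``/etc/sysconfig/network-scripts/ifcfg-<dev>`` file will look similar to the following; the device name and addresses are examples: ::

    # /etc/sysconfig/network-scripts/ifcfg-eth0
    DEVICE=eth0
    BOOTPROTO=static
    ONBOOT=yes
    IPADDR=10.3.31.11
    NETMASK=255.0.0.0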

View File

@ -33,10 +33,25 @@ xCAT uses the zypper package manager on SLES Linux distributions to install and
gpgcheck=1
Set up Network
--------------
Configure the Management Node
-----------------------------
The Management Node IP address should be set to a **static** IP address.
Setting properties on the Management Node before installing the xCAT software allows xCAT to automatically configure key attributes in the xCAT ``site`` table during the install.
Modify the ``ifcfg-<device>`` file in ``/etc/sysconfig/network/`` and configure a static IP address.
#. Ensure a hostname is configured on the management node by issuing the ``hostname`` command. [*It's recommended to use a fully qualified domain name (FQDN) when setting the hostname*]
#. To set the hostname of *xcatmn.cluster.com*: ::
hostname xcatmn.cluster.com
#. Add the hostname to ``/etc/hostname`` in order to persist the hostname across reboots.
#. Reboot the server and verify the hostname by running the following commands:
* ``hostname``
* ``hostname -d`` - should display the domain
#. Reduce the risk of the Management Node IP address being lost by setting the IP to **STATIC** in the ``/etc/sysconfig/network/ifcfg-<dev>`` configuration files.
#. Configure any domain search strings and nameservers in the ``/etc/resolv.conf`` file.
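
On SLES, a static IP configuration in the ``/etc/sysconfig/network/ifcfg-<dev>`` file will look similar to the following; the device name and addresses are examples: ::

    # /etc/sysconfig/network/ifcfg-eth0
    BOOTPROTO='static'
    STARTMODE='auto'
    IPADDR='10.3.31.11'
    NETMASK='255.0.0.0'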