mirror of https://github.com/xcat2/xcat-core.git synced 2025-09-06 18:28:16 +00:00

merge master to 2.13 branch (#4916)

* Add section for OpenBMC rflash in admin-guide, link this section to CORAL reference section

* enhance rflash unattended doc

* Add libnl3 to ib.rhels7.ppc64le.pkglist

* Reverse installing xCAT-openbmc-py by default so we can require python dependencies

* Adding documentation for installing xCAT-openbmc-py

* enhance rflash doc

* Fix indent problem for the comment lines

For usability, add more spaces to avoid
errors when deleting the `#`.

* Install first, then performance section

* Modify all fullwidth colon in test case

* remove the dependency, Load SOAP module dynamically

* Add doc to enable goconserver by default

Add the steps in documentation to enable goconserver by default.

* Use makegocons instead of makeconservercf during discovery and provision

* check if agent exists in process_request and give more clear message.

* add use case for xcat-inventory

* Not start agent when no valid nodes (#4915)
This commit is contained in:
yangsong
2018-03-09 19:21:30 +08:00
committed by GitHub
parent b55cbe37ff
commit d0ed517ee2
47 changed files with 667 additions and 154 deletions

View File

@@ -18,9 +18,9 @@ Remove Old Provision Environment
makedhcp -d <noderange>
#. Remove the nodes from the conserver configuration ::
#. Remove the nodes from the goconserver configuration ::
makeconservercf -d <noderange>
makegocons -d <noderange>
Change Definition
-----------------
@@ -76,6 +76,6 @@ Update The Provision Environment
makedhcp -a
#. Configure the new names in conserver ::
#. Configure the new names in goconserver ::
makeconservercf
makegocons

View File

@@ -190,9 +190,9 @@ Then update the following in xCAT:
"1.4","new_MN_name",,,,,,"trusted",,``
* Setup up conserver with new credentials ::
* Setup up goconserver with new credentials ::
makeconservercf
makegocons
External DNS Server Changed
---------------------------
@@ -262,9 +262,9 @@ If it exists, then use the return name and do the following:
makedhcp -a
- Add the MN to conserver ::
- Add the MN to goconserver ::
makeconservercf
makegocons
Update the genesis packages
---------------------------

View File

@@ -0,0 +1,119 @@
Configuration
=============
Location
--------
The configuration file for ``goconserver`` is located at ``/etc/goconserver/server.conf``.
After changing the configuration, restart the service: ``systemctl restart goconserver.service``.
An example configuration can be found at
`Example Conf <https://github.com/xcat2/goconserver/blob/master/etc/goconserver/server.conf>`_.
Tag For xCAT
------------
xCAT generates a configuration file that includes an identifier on the first
line. For example: ::
#generated by xcat Version 2.13.10 (git commit 7fcd37ffb7cec37c021ab47d4baec151af547ac0, built Thu Jan 25 07:15:36 EST 2018)
``makegocons`` checks for this token and will not modify the configuration
file if the token is present. This allows users to customize the
configuration for their specific site.
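The token check described above can be reproduced with a small shell sketch (the path and marker text are from this section; the helper name is our own):

```shell
#!/bin/sh
# Report whether the goconserver configuration still carries the
# marker that xCAT writes on the first line. Per the description
# above, makegocons leaves the file alone while the marker is
# present, so site customizations survive.
has_xcat_marker() {
    conf="${1:-/etc/goconserver/server.conf}"
    head -n 1 "$conf" | grep -q '^#generated by xcat'
}
```

For example, ``has_xcat_marker && echo kept`` prints ``kept`` only while the generated header line is intact.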
Multiple Output Plugins
-----------------------
``goconserver`` supports console redirection to multiple targets through the
``file``, ``tcp`` and ``udp`` logger plugins. An example entry looks like: ::
console:
# the console session port for client(congo) to connect.
port: 12430
logger:
# for file logger
file:
# multiple file loggers could be specified
# valid fields: name, logdir
- name: default
logdir: /var/log/goconserver/nodes/
- name: xCAT
logdir: /var/log/consoles
tcp:
- name: logstash
host: briggs01
port: 9653
ssl_key_file: /etc/xcat/cert/server-cred.pem
ssl_cert_file: /etc/xcat/cert/server-cred.pem
ssl_ca_cert_file: /etc/xcat/cert/ca.pem
- name: rsyslog
host: sn02
port: 9653
udp:
- name: filebeat
host: 192.168.1.5
port: 512
With the configuration above, the console log files for each node would be written in
both ``/var/log/goconserver/nodes/<node>.log`` and ``/var/log/consoles/<node>.log``.
In addition, console log content will be redirected into remote services
specified in the tcp and udp sections.
Verification
------------
To check if ``goconserver`` works correctly, see the log file ``/var/log/goconserver/server.log``.
#. Check if TCP logger has been activated.
When ``goconserver`` starts, log messages like the following indicate
that the TCP configuration has been activated. ::
{"file":"github.com/xcat2/goconserver/console/logger/tcp.go (122)","level":"info","msg":"Starting TCP publisher: logstash","time":"2018-03-02T21:15:35-05:00"}
{"file":"github.com/xcat2/goconserver/console/logger/tcp.go (122)","level":"info","msg":"Starting TCP publisher: sn02","time":"2018-03-02T21:15:35-05:00"}
#. Debug TCP logger errors.
If the remote service is not started or the network is unreachable, a
log message like the following appears. ::
{"file":"github.com/xcat2/goconserver/console/logger/tcp.go (127)","level":"error","msg":"TCP publisher logstash: dial tcp 10.6.27.1:9653: getsockopt: connection refused","time":"2018-03-07T21:12:58-05:00"}
Check the service status and the network configuration, including SELinux
and ``iptables`` rules. Once the remote service works correctly, the TCP
or UDP logger of ``goconserver`` recovers automatically.
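The two message patterns above can be summarized with a quick grep sketch (the log path is from this Verification section; the helper name and patterns are our own):

```shell
#!/bin/sh
# Count activation and error messages for TCP publishers in the
# goconserver log. A non-zero "errors" count usually means the remote
# logging service is down or unreachable.
tcp_logger_status() {
    log="${1:-/var/log/goconserver/server.log}"
    started=$(grep -c 'Starting TCP publisher' "$log" || true)
    errors=$(grep -c '"level":"error".*TCP publisher' "$log" || true)
    echo "started=$started errors=$errors"
}
```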
Reconnect Interval
------------------
If a console node is defined with ``ondemand=false`` and the console connection
cannot be established, ``goconserver`` reconnects automatically. The retry
interval can be specified at ::
console:
# retry interval in second if console could not be connected.
reconnect_interval: 10
Performance Tuning
------------------
Adjust the number of workers to leverage multi-core processors, based on
the site configuration. ::
global:
# the max cpu cores for workload
worker: 4
Debug
-----
The log level for ``goconserver`` is defined in ``/etc/goconserver/server.conf`` ::
global:
# debug, info, warn, error, fatal, panic
log_level: info

View File

@@ -0,0 +1,14 @@
Go Conserver
============
``goconserver`` is a conserver replacement written in the `Go <https://golang.org/>`_
programming language. For more information, see https://github.com/xcat2/goconserver/
.. toctree::
:maxdepth: 2
quickstart.rst
configuration.rst
rest.rst

View File

@@ -0,0 +1,20 @@
Quickstart
==========
To enable ``goconserver``, execute the following steps:
#. Install the ``goconserver`` RPM: ::
yum install goconserver
#. If upgrading an xCAT installation that runs ``conserver``, stop it first: ::
systemctl stop conserver.service
#. Start ``goconserver`` and create the console configuration files with a single command ::
makegocons
Console logs will now be written to ``/var/log/consoles/<node>.log``
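The three steps above can be collected into one sketch to run on the management node (the command and service names are exactly the ones documented here; when to call it is left to the administrator):

```shell
#!/bin/sh
# Enable goconserver: install the package, stop the legacy conserver
# if it is running, then generate the console configuration and start
# the service via makegocons.
enable_goconserver() {
    yum install -y goconserver
    systemctl stop conserver.service 2>/dev/null || true
    makegocons
}
```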

View File

@@ -0,0 +1,5 @@
REST API
========
``goconserver`` provides a REST API to manage node sessions. For
details, see `REST <https://github.com/xcat2/goconserver/tree/master/api/>`_.

View File

@@ -8,6 +8,7 @@ Advanced Topics
cluster_maintenance/index.rst
migration/index.rst
confluent/index.rst
goconserver/index.rst
docker/index.rst
domain_name_resolution/index.rst
gpu/index.rst
@@ -25,3 +26,4 @@ Advanced Topics
softlayer/index.rst
sysclone/index.rst
zones/index.rst
xcat-inventory/index.rst

View File

@@ -0,0 +1,28 @@
Define and create your first xCAT cluster easily
================================================
Inventory templates for two typical kinds of xCAT cluster are shipped. You can create your first xCAT cluster easily by making a few modifications to a template. The templates can be found under ``/opt/xcat/share/xcat/inventory_templates`` on a management node with ``xcat-inventory`` installed.
Currently, the inventory templates include:
1. flat_cluster_template.yaml:
a flat bare-metal cluster, including **openbmc controlled PowerLE servers**, **IPMI controlled Power servers (commented out)**, **X86_64 servers (commented out)**
2. flat_kvm_cluster_template.yaml: a flat KVM based Virtual Machine cluster, including **PowerKVM based VM nodes**, **KVM based X86_64 VM nodes (commented out)**
The steps to create your first xCAT cluster are:
1. create a customized cluster inventory file "mycluster.yaml" based on ``flat_cluster_template.yaml`` ::
cp /opt/xcat/share/xcat/inventory_templates/flat_cluster_template.yaml /git/cluster/mycluster.yaml
2. customize the cluster inventory file "mycluster.yaml" by modifying the attributes in the lines under the token ``#CHANGEME`` according to the setup of your physical cluster. You can create new node definitions by duplicating and modifying the node definition in the template.
3. import the cluster inventory file ::
xcat-inventory import -f /git/cluster/mycluster.yaml
Now that you have your first xCAT cluster, you can start bringing it up by provisioning the nodes.
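Steps 1 and 2 can be sketched as a helper that copies a template and lists every line still carrying the ``#CHANGEME`` token (the paths match the examples above; the helper name is our own, and step 3's import is still run manually):

```shell
#!/bin/sh
# Copy a shipped inventory template to a working file and print the
# lines that still need site-specific values before import.
prepare_inventory() {
    template="$1"   # e.g. /opt/xcat/share/xcat/inventory_templates/flat_cluster_template.yaml
    target="$2"     # e.g. /git/cluster/mycluster.yaml
    cp "$template" "$target"
    echo "Review these lines in $target before importing:"
    grep -n 'CHANGEME' "$target" || echo "  (no CHANGEME markers found)"
}
```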

View File

@@ -0,0 +1,21 @@
xcat-inventory
==============
`xcat-inventory <https://github.com/xcat2/xcat-inventory>`_ is an inventory tool for clusters managed by xCAT. Its features include:
* an object-based view of the cluster inventory, which is flexible, extensible and well formatted
* interfaces to export/import the cluster inventory data in YAML/JSON format, which can then be managed under source control
* inventory templates for typical clusters, which help users define a cluster easily
* the ability to integrate with Ansible (coming soon)
This section presents two typical use cases of ``xcat-inventory``
.. toctree::
:maxdepth: 2
version_control_inventory.rst
define_create_cluster.rst

View File

@@ -0,0 +1,50 @@
Manage the xCAT Cluster Definition under Source Control
=======================================================
The xCAT cluster inventory data, including the global configuration, the object definitions (node/osimage/passwd/policy/network/router) and the relationships of the objects, can be exported from the xCAT database to a YAML/JSON file (the **inventory file**), or imported into the xCAT database from the inventory file.
By managing the inventory file under a source control system, you can manage the xCAT cluster definition under source control. This section presents a typical step-by-step scenario on how to manage cluster inventory data under ``git``.
1. create a directory ``/git/cluster`` under git directory to hold the cluster inventory ::
mkdir -p /git/cluster
cd /git/cluster
git init
2. export the current cluster configuration to an inventory file "mycluster.yaml" under the git directory created above ::
xcat-inventory export --format=yaml >/git/cluster/mycluster.yaml
3. check diff and commit the cluster inventory file (commit no: c95673) ::
cd /git/cluster
git diff
git add /git/cluster/mycluster.yaml
git commit /git/cluster/mycluster.yaml -m "$(date "+%Y_%m_%d_%H_%M_%S"): initial cluster inventory data; blah-blah"
4. perform ordinary cluster maintenance and operations: replace bad nodes, turn on xcatdebugmode, and so on
5. once the cluster setup is stable, export and commit the cluster configuration (commit no: c95673) ::
xcat-inventory export --format=yaml >/git/cluster/mycluster.yaml
cd /git/cluster
git diff
git add /git/cluster/mycluster.yaml
git commit /git/cluster/mycluster.yaml -m "$(date "+%Y_%m_%d_%H_%M_%S"): replaced bad nodes; turn on xcatdebugmode; blah-blah"
6. during ordinary cluster maintenance and operation, some issues are found in the current cluster, and the cluster configuration needs to be restored to commit c95673 [1]_ ::
cd /git/cluster
git checkout c95673
xcat-inventory import -f /git/cluster/mycluster.yaml
*Notice:*
1. The cluster inventory data exported by ``xcat-inventory`` does not include intermediate, transient or historical data in the xCAT database, such as node status or the auditlog table
2. We suggest backing up your xCAT database with ``dumpxCATdb`` before trying this feature, although it has been tested extensively
.. [1] When you import the inventory data into the xCAT database in step 6, there are two modes: ``clean mode`` and ``update mode``. If you choose clean mode with ``xcat-inventory import -c|--clean``, all object definitions that are not included in the inventory file will be removed; otherwise, only the objects included in the inventory file will be updated or inserted. Choose the mode that fits your needs
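The export/commit cycle of steps 2, 3 and 5 can be wrapped in one helper (the repository path and commands are the ones above; the function name is our own):

```shell
#!/bin/sh
# Export the current cluster definition into the git working tree and
# commit it with a timestamped message, mirroring steps 2-5 above.
snapshot_cluster() {
    repo="$1"   # e.g. /git/cluster
    msg="$2"    # e.g. "replaced bad nodes"
    xcat-inventory export --format=yaml > "$repo/mycluster.yaml"
    cd "$repo" || return 1
    git add mycluster.yaml
    git commit -m "$(date "+%Y_%m_%d_%H_%M_%S"): $msg" mycluster.yaml
}
```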

View File

@@ -161,7 +161,7 @@ When the VM has been created and powered on, choose one of the following methods
* Use **rcons/wcons** on xCAT management node to open text console: ::
chdef vm1 cons=kvm
makeconservercf vm1
makegocons vm1
rcons vm1
* Connect to virtual machine through vnc console

View File

@@ -126,7 +126,7 @@ The BMC IP address is obtained by the open range dhcp server and the plan in thi
Configure the conserver for the **discovered** node to watch the discovery process using ``rcons``::
makeconservercf node-8247-22l-10112ca
makegocons node-8247-22l-10112ca
In another terminal window, open the remote console: ::

View File

@@ -99,9 +99,9 @@ The BMC IP address is obtained by the open range dhcp server and the plan is to
#. **[Optional]** Monitor the node discovery process using rcons
Configure the conserver for the **predefined** node to watch the discovery process using ``rcons``::
Configure the goconserver for the **predefined** node to watch the discovery process using ``rcons``::
makeconservercf cn01
makegocons cn01
In another terminal window, open the remote console: ::

View File

@@ -34,5 +34,5 @@ To start discovery process, just need to power on the PBMC node remotely with th
**[Optional]** If you'd like to monitor the discovery process, you can use::
chdef Server-8247-22L-SN10112CA cons=ipmi
makeconservercf
makegocons
rcons Server-8247-22L-SN10112CA

View File

@@ -25,5 +25,5 @@ To start discovery process, just need to power on the PBMC node remotely with th
**[Optional]** If you'd like to monitor the discovery process, you can use::
makeconservercf node-8247-42l-10112ca
makegocons node-8247-42l-10112ca
rcons node-8247-42l-10112ca

View File

@@ -6,5 +6,5 @@ Advanced Operations
rinv.rst
rvitals.rst
rflash.rst
rflash/index.rst
rspconfig.rst

View File

@@ -0,0 +1,10 @@
``rflash`` - Remote Firmware Flashing
=====================================
See :doc:`rflash manpage </guides/admin-guides/references/man1/rflash.1>` for more information.
.. toctree::
:maxdepth: 2
ipmi.rst
openbmc/index.rst

View File

@@ -1,7 +1,5 @@
``rflash`` - Remote Firmware Flashing
=====================================
See :doc:`rflash manpage </guides/admin-guides/references/man1/rflash.1>` for more information.
IPMI Firmware Update
====================
The ``rflash`` command is provided to assist the system administrator in updating firmware.

View File

@@ -0,0 +1,8 @@
OpenBMC Firmware Update
=======================
.. toctree::
:maxdepth: 2
manually.rst
unattended.rst

View File

@@ -0,0 +1,10 @@
Manual Firmware Flash
=====================
.. include:: ./openbmc_common.rst
:start-after: BEGIN_flashing_OpenBMC_Servers
:end-before: END_flashing_OpenBMC_Servers
.. include:: ./openbmc_common.rst
:start-after: BEGIN_Validation_OpenBMC_firmware
:end-before: END_Validation_OpenBMC_firmware

View File

@@ -0,0 +1,100 @@
.. BEGIN_unattended_OpenBMC_flashing
Unattended flash of OpenBMC firmware performs the following steps:
#. Upload both the BMC firmware file and the PNOR firmware file
#. Activate both the BMC firmware and the PNOR firmware
#. If the BMC firmware becomes active, reboot the BMC to apply the new BMC firmware; otherwise, ``rflash`` will exit
#. If the BMC state is ``NotReady``, ``rflash`` will exit
#. If the BMC state is ``Ready`` and the ``--no-host-reboot`` option is used, ``rflash`` will not reboot the compute node
#. If the BMC state is ``Ready`` and the ``--no-host-reboot`` option is not used, ``rflash`` will reboot the compute node to apply the PNOR firmware
Use the following command to flash the firmware unattended: ::
rflash <noderange> -d /path/to/directory
Use the following command to flash the firmware unattended and not reboot the compute node: ::
rflash <noderange> -d /path/to/directory --no-host-reboot
If errors are encountered during the flash process, follow the manual steps to continue flashing the BMC.
.. END_unattended_OpenBMC_flashing
.. BEGIN_flashing_OpenBMC_Servers
The sequence of events that must happen to flash OpenBMC firmware is the following:
#. Power off the Host
#. Upload and Activate BMC
#. Reboot the BMC (applies BMC)
#. Upload and Activate PNOR
#. Power on the Host (applies PNOR)
Power off Host
--------------
Use the rpower command to power off the host: ::
rpower <noderange> off
Upload and Activate BMC Firmware
--------------------------------
Use the ``rflash`` command to upload and activate the BMC firmware: ::
rflash <noderange> -a /path/to/obmc-phosphor-image-witherspoon.ubi.mtd.tar
If running ``rflash`` in Hierarchy, the firmware files must be accessible on the Service Nodes.
**Note:** If a .tar file is provided, the ``-a`` option does an upload and activate in one step. If an ID is provided, the ``-a`` option only activates the specified firmware. After the firmware is activated, use ``rflash <noderange> -l`` to view the levels. The ``rflash`` command shows ``(*)`` next to the active firmware and ``(+)`` next to firmware that requires a reboot to become effective.
Reboot the BMC
--------------
Use the ``rpower`` command to reboot the BMC: ::
rpower <noderange> bmcreboot
The BMC will take 2-5 minutes to reboot; check the status using ``rpower <noderange> bmcstate`` and wait for ``BMCReady`` to be returned.
**Known Issue:** On the first call to the BMC after a reboot, xCAT may return ``Error: BMC did not respond within 10 seconds, retry the command.``. Please retry.
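The wait can be automated with a polling sketch built on the ``rpower <noderange> bmcstate`` check above (the helper name, interval and timeout are our own choices):

```shell
#!/bin/sh
# Poll `rpower <node> bmcstate` until BMCReady is reported or the
# timeout expires. The first call right after a BMC reboot may time
# out (see the known issue), so failed polls are simply retried.
wait_bmc_ready() {
    node="$1"
    timeout="${2:-300}"   # seconds
    interval=10
    elapsed=0
    while [ "$elapsed" -lt "$timeout" ]; do
        if rpower "$node" bmcstate 2>/dev/null | grep -q 'BMCReady'; then
            echo "$node: BMC is ready"
            return 0
        fi
        sleep "$interval"
        elapsed=$((elapsed + interval))
    done
    echo "$node: BMC not ready after ${timeout}s" >&2
    return 1
}
```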
Upload and Activate PNOR Firmware
---------------------------------
Use the rflash command to upload and activate the PNOR firmware: ::
rflash <noderange> -a /path/to/witherspoon.pnor.squashfs.tar
If running ``rflash`` in Hierarchy, the firmware files must be accessible on the Service Nodes.
**Note:** The ``-a`` option does an upload and activate in one step. After the firmware is activated, use ``rflash <noderange> -l`` to view the levels. The ``rflash`` command shows ``(*)`` next to the active firmware and ``(+)`` next to firmware that requires a reboot to become effective.
Power on Host
-------------
Use the ``rpower`` command to power on the host: ::
rpower <noderange> on
.. END_flashing_OpenBMC_Servers
.. BEGIN_Validation_OpenBMC_firmware
Validation
----------
Use one of the following commands to validate that firmware levels are in sync:
* Use the ``rinv`` command to validate firmware level: ::
rinv <noderange> firm -V | grep -i ibm | grep "\*" | xcoll
* Use the ``rflash`` command to validate the firmware level: ::
rflash <noderange> -l | grep "\*" | xcoll
.. END_Validation_OpenBMC_firmware

View File

@@ -0,0 +1,10 @@
Unattended Firmware Flash
=========================
.. include:: ./openbmc_common.rst
:start-after: BEGIN_unattended_OpenBMC_flashing
:end-before: END_unattended_OpenBMC_flashing
.. include:: ./openbmc_common.rst
:start-after: BEGIN_Validation_OpenBMC_firmware
:end-before: END_Validation_OpenBMC_firmware

View File

@@ -28,7 +28,37 @@ Troubleshooting
General
```````
The xCAT ``rcons`` command relies on conserver (http://www.conserver.com/). The ``conserver`` package should have been installed with xCAT as it's part of the xCAT dependency package. If you are having problems seeing the console, try the following.
``xCAT`` integrates with three kinds of console server service:
- `conserver <http://www.conserver.com/>`_
- `goconserver <https://github.com/xcat2/goconserver/>`_
- `confluent <https://github.com/xcat2/confluent/>`_
The ``rcons`` command relies on one of them. The ``conserver`` and ``goconserver``
packages should have been installed with xCAT, as they are part of the xCAT
dependency packages. If you would like to try ``confluent``,
see `confluent </advanced/confluent/>`_.
For systemd based systems, ``goconserver`` is used by default. If you are
having problems seeing the console, try the following.
#. Make sure ``goconserver`` is configured by running ``makegocons``.
#. Check if ``goconserver`` is up and running ::
systemctl status goconserver.service
#. If ``goconserver`` is not running, start the service using: ::
systemctl start goconserver.service
#. Try ``makegocons -q [<node>]`` to verify if the node has been registered.
#. Invoke the console again: ``rcons <node>``
For more details on ``goconserver``, see the `goconserver documentation </advanced/goconserver/>`_.
**[Deprecated]** If ``conserver`` is used, try the following.
#. Make sure ``conserver`` is configured by running ``makeconservercf``.
@@ -42,12 +72,4 @@ The xCAT ``rcons`` command relies on conserver (http://www.conserver.com/). The
[sysvinit] service conserver start
[systemd] systemctl start conserver.service
#. After this, try invoking the console again: ``rcons <node>``
OpenBMC Specific
````````````````
#. For OpenBMC managed servers, the root user must be able to ssh passwordless to the BMC for the ``rcons`` function to work.
Copy the ``/root/.ssh/id_rsa.pub`` public key to the BMC's ``~/.ssh/authorized_keys`` file.
#. Invoke the console again: ``rcons <node>``
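The key copy can be sketched as follows (the BMC address is a parameter here because it is site-specific; the helper name is our own):

```shell
#!/bin/sh
# Append the management node's public key to the BMC's
# authorized_keys so rcons can log in without a password.
copy_key_to_bmc() {
    bmc="$1"                          # BMC hostname or IP
    key="${2:-/root/.ssh/id_rsa.pub}"
    ssh "root@$bmc" 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys' < "$key"
}
```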

View File

@@ -5,4 +5,4 @@ Power9 Firmware Update
:maxdepth: 2
ipmi.rst
openbmc.rst
openbmc/index.rst

View File

@@ -1,79 +0,0 @@
OpenBMC Firmware Update
=======================
The process of updating firmware on the OpenBMC managed servers is documented below.
The sequence of events that must happen is the following:
* Power off the Host
* Update and Activate PNOR
* Update and Activate BMC
* Reboot the BMC (applies BMC)
* Power on the Host (applies PNOR)
**Note:** xCAT is working on streamlining this process to reduce the flexibility of the above steps at the convenience of the Administrator to handle the necessary reboots. See `Issue #4245 <https://github.com/xcat2/xcat-core/issues/4245>`_
Power off Host
--------------
Use the rpower command to power off the host: ::
rpower <noderange> off
Update and Activate PNOR Firmware
---------------------------------
Use the rflash command to upload and activate the PNOR firmware: ::
rflash <noderange> -a /path/to/witherspoon.pnor.squashfs.tar
If running ``rflash`` in Hierarchy, the firmware files must be accessible on the Service Nodes.
**Note:** The ``-a`` option does an upload and activate in one step, after firmware is activated, use the ``rflash <noderange> -l`` to view. The ``rflash`` command shows ``(*)`` as the active firmware and ``(+)`` on the firmware that requires reboot to become effective.
Update and Activate BMC Firmware
--------------------------------
Use the rflash command to upload and activate the PNOR firmware: ::
rflash <noderange> -a /path/to/obmc-phosphor-image-witherspoon.ubi.mtd.tar
If running ``rflash`` in Hierarchy, the firmware files must be accessible on the Service Nodes.
**Note:** The ``-a`` option does an upload and activate in one step, after firmware is activated, use the ``rflash <noderange> -l`` to view. The ``rflash`` command shows ``(*)`` as the active firmware and ``(+)`` on the firmware that requires reboot to become effective.
Reboot the BMC
--------------
Use the ``rpower`` command to reboot the BMC: ::
rpower <noderange> bmcreboot
The BMC will take 2-5 minutes to reboot, check the status using: ``rpower <noderange> bmcstate`` and wait for ``BMCReady`` to be returned.
**Known Issue:** On reboot, the first call to the BMC after reboot, xCAT will return ``Error: BMC did not respond within 10 seconds, retry the command.``. Please retry.
Power on Host
-------------
User the ``rpower`` command to power on the Host: ::
rpower <noderange> on
Validation
----------
Use one of the following commands to validate firmware levels are in sync:
* Use the ``rinv`` command to validate firmware level: ::
rinv <noderange> firm -V | grep -i ibm | grep "\*" | xcoll
* Use the ``rflash`` command to validate the firmware level: ::
rflash <noderange> -l | grep "\*" | xcoll

View File

@@ -0,0 +1,8 @@
OpenBMC Firmware Update
=======================
.. toctree::
:maxdepth: 2
unattended.rst
manually.rst

View File

@@ -0,0 +1,10 @@
Manual Firmware Flash
=====================
.. include:: ../../../../../guides/admin-guides/manage_clusters/ppc64le/management/advanced/rflash/openbmc/openbmc_common.rst
:start-after: BEGIN_flashing_OpenBMC_Servers
:end-before: END_flashing_OpenBMC_Servers
.. include:: ../../../../../guides/admin-guides/manage_clusters/ppc64le/management/advanced/rflash/openbmc/openbmc_common.rst
:start-after: BEGIN_Validation_OpenBMC_firmware
:end-before: END_Validation_OpenBMC_firmware

View File

@@ -0,0 +1,10 @@
Unattended Firmware Flash
=========================
.. include:: ../../../../../guides/admin-guides/manage_clusters/ppc64le/management/advanced/rflash/openbmc/openbmc_common.rst
:start-after: BEGIN_unattended_OpenBMC_flashing
:end-before: END_unattended_OpenBMC_flashing
.. include:: ../../../../../guides/admin-guides/manage_clusters/ppc64le/management/advanced/rflash/openbmc/openbmc_common.rst
:start-after: BEGIN_Validation_OpenBMC_firmware
:end-before: END_Validation_OpenBMC_firmware

View File

@@ -4,4 +4,5 @@ Cluster Management
.. toctree::
:maxdepth: 2
scalability/index.rst
firmware/index.rst

View File

@@ -0,0 +1,7 @@
Scalability
===========
.. toctree::
:maxdepth: 2
python/index.rst

View File

@@ -0,0 +1,12 @@
Python framework
================
When testing the scale-up of xCAT commands against the OpenBMC REST API, it was evident that the Perl framework of xCAT did not scale well and was not sending commands to the BMCs in a truly parallel fashion.
The team investigated the possibility of using a Python framework.
.. toctree::
:maxdepth: 2
install/index.rst
performance.rst

View File

@@ -0,0 +1,14 @@
Disable Python Framework
========================
By default, if ``xCAT-openbmc-py`` is installed and its Python files are present, xCAT runs the Python framework.
A site table attribute allows choosing between the Python and Perl implementations.
* To disable all Python code and revert to the Perl implementation: ::
chdef -t site clustersite openbmcperl=ALL
* To disable individual commands, specify a comma-separated list: ::
chdef -t site clustersite openbmcperl="rpower,rbeacon"

View File

@@ -0,0 +1,12 @@
Installation
============
A new RPM is created that contains the Python code: ``xCAT-openbmc-py``. The Python code requires additional Python libraries that may not be available as operating-system-provided packages. The following sections help resolve the dependencies.
.. toctree::
:maxdepth: 2
rpm.rst
pip.rst
disable.rst

View File

@@ -0,0 +1,22 @@
Using pip
=========
An alternative method for installing the Python dependencies is using ``pip``.
#. Download ``pip`` using one of the following methods:
#. ``pip`` is provided in the EPEL repo as: ``python2-pip``
#. Follow the instructions to install from here: https://pip.pypa.io/en/stable/installing/
#. Use ``pip`` to install the following Python libraries: ::
pip install gevent docopt requests paramiko scp
#. Install ``xCAT-openbmc-py`` using ``rpm`` with ``--nodeps``: ::
cd xcat-core
rpm -ihv xCAT-openbmc-py*.rpm --nodeps

View File

@@ -0,0 +1,42 @@
Using RPM (recommended)
=======================
**Support is only for RHEL 7.5 for Power LE (Power 9)**
The following repositories should be configured on your Management Node (and Service Nodes).
* RHEL 7.5 OS Repository
* RHEL 7.5 Extras Repository
* RHEL 7 EPEL Repo (https://fedoraproject.org/wiki/EPEL)
* Fedora28 Repo (for ``gevent``, ``greenlet``)
#. Configure the MN/SN to the RHEL 7.5 OS Repo
#. Configure the MN/SN to the RHEL 7.5 Extras Repo
#. Configure the MN/SN to the EPEL Repo (https://fedoraproject.org/wiki/EPEL)
#. Create a local Fedora28 Repo and Configure the MN/SN to the FC28 Repo
Here is an example of configuring the Fedora 28 repo at ``/install/repos/fc28``
#. Make the target repo directory on the MN: ::
mkdir -p /install/repos/fc28/ppc64le/Packages
#. Download the rpms from the Internet: ::
cd /install/repos/fc28/ppc64le/Packages
wget https://www.rpmfind.net/linux/fedora-secondary/development/rawhide/Everything/ppc64le/os/Packages/p/python2-gevent-1.2.2-2.fc28.ppc64le.rpm
wget https://www.rpmfind.net/linux/fedora-secondary/development/rawhide/Everything/ppc64le/os/Packages/p/python2-greenlet-0.4.13-2.fc28.ppc64le.rpm
#. Create a yum repo in that directory: ::
cd /install/repos/fc28/ppc64le/
createrepo .
#. Install ``xCAT-openbmc-py`` using ``yum``: ::
yum install xCAT-openbmc-py
**Note**: The install will fail if the dependencies cannot be met.
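To point ``yum`` at the local repo created above, a ``.repo`` file is needed; here is a minimal sketch (the repo id, name and ``gpgcheck=0`` are assumptions matching the example directory):

```shell
#!/bin/sh
# Write a yum repo definition for the local Fedora 28 mirror built
# in the steps above.
write_fc28_repo() {
    repofile="${1:-/etc/yum.repos.d/fc28.repo}"
    cat > "$repofile" <<'EOF'
[fc28]
name=Local Fedora 28 packages for xCAT-openbmc-py dependencies
baseurl=file:///install/repos/fc28/ppc64le/
enabled=1
gpgcheck=0
EOF
}
```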

View File

@@ -0,0 +1,33 @@
Performance
===========
Supported Commands
------------------
The following commands are currently supported:
+----------------+-----------+-------------+----------------------------------+
|Command |Support |Release |Notes |
+================+===========+=============+==================================+
| rpower | Yes | 2.13.11 | |
+----------------+-----------+-------------+----------------------------------+
| rinv | Yes | 2.13.11 | |
+----------------+-----------+-------------+----------------------------------+
| rbeacon | Yes | 2.13.11 | |
+----------------+-----------+-------------+----------------------------------+
| rspconfig | No | | |
+----------------+-----------+-------------+----------------------------------+
| rsetboot | Yes | 2.13.11 | |
+----------------+-----------+-------------+----------------------------------+
| rvitals | Yes | 2.13.11 | |
+----------------+-----------+-------------+----------------------------------+
| rflash | No | | |
+----------------+-----------+-------------+----------------------------------+
| reventlog | No | | |
+----------------+-----------+-------------+----------------------------------+
Data
----
TBD

View File

@@ -813,7 +813,7 @@ my %methods = (
}, # end IConsole_getPowerButtonHandled
); # end my %methods
use SOAP::Lite; # vbox.pm requires SOAP::Lite before requiring vboxService.pm, so we can check for SOAP::Lite dynamically
#use SOAP::Lite; # vbox.pm requires SOAP::Lite before requiring vboxService.pm, so we can check for SOAP::Lite dynamically
use Exporter;
use Carp ();

View File

@@ -18,6 +18,9 @@ AutoReqProv: no
BuildArch: noarch
Requires: xCAT-server
Requires: python-gevent >= 1.2.2-2
Requires: python-greenlet >= 0.4.13-2
Requires: python2-docopt python-requests python-paramiko python-scp
%description
xCAT-openbmc-py provides openbmc related functions.

View File

@@ -596,23 +596,23 @@ sub build_conf {
" reconnect_interval: 10 # retry interval in second if console could not be connected\n".
" logger: # multiple logger targets could be specified\n".
" file: # file logger, valid fields: name,logdir. Accept array in yaml format\n".
" - name: default # the identity name customized by user\n".
" logdir: ".CONSOLE_LOG_DIR." # default log directory of xcat\n".
" # - name: goconserver \n".
" # logdir: /var/log/goconserver/nodes \n".
" # tcp: # valied fields: name, host, port, timeout, ssl_key_file, ssl_cert_file, ssl_ca_cert_file, ssl_insecure\n".
" # - name: logstash \n".
" # host: 127.0.0.1 \n".
" # port: 9653 \n".
" # timeout: 3 # default 3 second\n".
" # - name: filebeat \n".
" # host: <hostname or ip> \n".
" # port: <port> \n".
" # udp: # valid fiedls: name, host, port, timeout\n".
" # - name: rsyslog \n".
" # host: \n".
" # port: \n".
" # timeout: # default 3 second\n";
" - name: default # the identity name customized by user\n".
" logdir: ".CONSOLE_LOG_DIR." # default log directory of xcat\n".
" #- name: goconserver \n".
" # logdir: /var/log/goconserver/nodes \n".
" #tcp: # valied fields: name, host, port, timeout, ssl_key_file, ssl_cert_file, ssl_ca_cert_file, ssl_insecure\n".
" #- name: logstash \n".
" # host: 127.0.0.1 \n".
" # port: 9653 \n".
" # timeout: 3 # default 3 second\n".
" #- name: filebeat \n".
" # host: <hostname or ip> \n".
" # port: <port> \n".
" #udp: # valid fiedls: name, host, port, timeout\n".
" #- name: rsyslog \n".
" # host: 127.0.0.1 \n".
" # port: 512 \n".
" # timeout: 3 # default 3 second\n";
my $file;
my $ret = open ($file, '>', '/etc/goconserver/server.conf');

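The string assembled above renders roughly the following YAML in `/etc/goconserver/server.conf`. This is a sketch of the logger section only; the `logdir` default is an assumed path for `CONSOLE_LOG_DIR`, and the commented entries are optional targets an admin can enable:

```yaml
# Excerpt of the generated logger configuration (defaults shown).
reconnect_interval: 10          # retry interval in seconds
logger:
  file:
    - name: default             # the identity name customized by the user
      logdir: /var/log/consoles # assumed CONSOLE_LOG_DIR value
  #tcp:
  #  - name: logstash
  #    host: 127.0.0.1
  #    port: 9653
  #    timeout: 3               # default 3 seconds
  #udp:
  #  - name: rsyslog
  #    host: 127.0.0.1
  #    port: 512
  #    timeout: 3               # default 3 seconds
```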
View File

@@ -69,11 +69,14 @@ sub acquire_lock {
flock($lock_fd, LOCK_EX) or return undef;
return $lock_fd;
}
sub start_python_agent {
if (! -e $PYTHON_AGENT_FILE) {
xCAT::MsgUtils->message("S", "start_python_agent() Error: '$PYTHON_AGENT_FILE' does not exist");
return undef;
sub exists_python_agent {
if ( -e $PYTHON_AGENT_FILE) {
return 1;
}
return 0;
}
sub start_python_agent {
if (!defined(acquire_lock())) {
xCAT::MsgUtils->message("S", "start_python_agent() Error: Failed to acquire lock");

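The `acquire_lock()` shown in this hunk relies on `flock`; a self-contained sketch of the same exclusive-lock idiom follows (the lock file path is illustrative, created via a temp file here):

```perl
use Fcntl qw(:flock);
use File::Temp qw(tempfile);

# Same contract as acquire_lock() above: return the open handle on
# success (the caller must keep it alive to hold the lock), or undef
# on any failure.
sub take_exclusive_lock {
    my ($path) = @_;
    open(my $fh, '>>', $path) or return undef;
    flock($fh, LOCK_EX) or return undef;
    return $fh;
}

my (undef, $lockpath) = tempfile();            # illustrative lock file
my $lock_fd = take_exclusive_lock($lockpath);  # released when $lock_fd closes
```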
View File

@@ -117,10 +117,8 @@ sub process_request {
my $request = shift;
$callback = shift;
# If we can't start the python agent, exit immediately
my $pid = xCAT::OPENBMC::start_python_agent();
if (!defined($pid)) {
xCAT::MsgUtils->message("E", { data => ["Failed to start the xCAT Python agent. Check /var/log/xcat/cluster.log for more information."] }, $callback);
if (!xCAT::OPENBMC::exists_python_agent()) {
xCAT::MsgUtils->message("E", { data => ["The xCAT Python agent does not exist. Check if xCAT-openbmc-py package is installed on management node and service nodes."] }, $callback);
return;
}
@@ -133,6 +131,13 @@ sub process_request {
$callback->({ errorcode => [$check] }) if ($check);
return unless(%node_info);
# If we can't start the python agent, exit immediately
my $pid = xCAT::OPENBMC::start_python_agent();
if (!defined($pid)) {
xCAT::MsgUtils->message("E", { data => ["Failed to start the xCAT Python agent. Check /var/log/xcat/cluster.log for more information."] }, $callback);
return;
}
xCAT::OPENBMC::submit_agent_request($pid, $request, \%node_info, $callback);
xCAT::OPENBMC::wait_agent($pid, $callback);
}
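The reordering in this hunk can be summarized as: fail fast if the agent script is not installed at all, validate the node list, and only then fork the (comparatively expensive) python agent. A sketch of that decision flow (function name and return values are illustrative, not the real plugin API):

```perl
# Order of operations in process_request after this change.
sub request_flow {
    my ($agent_installed, $have_valid_nodes) = @_;
    return 'missing-agent' unless $agent_installed;   # cheap existence check first
    return 'no-nodes'      unless $have_valid_nodes;  # nothing to do, agent never started
    return 'start-agent';                             # fork the python agent last
}
```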

View File

@@ -105,7 +105,7 @@ sub process_request
my $soapsupport = eval { require SOAP::Lite; };
unless ($soapsupport) { #Still no SOAP::Lite module
$callback->({ error => "SOAP::Lite perl module missing, unable to fulfill Virtual Box plugin requirements", errorcode => [42] });
$callback->({ error => "SOAP::Lite perl module missing. Install perl-SOAP-Lite before running commands on Virtual Box nodes.", errorcode => [42] });
return [];
}
require xCAT::vboxService;

View File

@@ -6,6 +6,7 @@ tcsh
gcc-gfortran
lsof
libnl
libnl3
libxml2-python
python-devel
redhat-rpm-config

View File

@@ -187,7 +187,7 @@ os:linux
description:create a node with cec template once
cmd:result=`lsdef | grep auto_test_cec_node_1`; if [[ $result =~ "auto_test_cec_node_1" ]]; then echo $result; noderm auto_test_cec_node_1; fi
cmd:mkdef -t node -o auto_test_cec_node_1 --template cec-template serial=test mtm=test hcp=test
checkrc==0
check:rc==0
check:output=~1 object definitions have been created or modified
cmd:lsdef auto_test_cec_node_1
check:output=~Object name\: auto\_test\_cec\_node\_1
@@ -214,7 +214,7 @@ cmd:lsdef --template auto_test_invalid_template
check:rc==1
check:output=~Error\: Could not find auto\_test\_invalid\_template in xCAT templates
cmd:mkdef -t node -o auto_test_node --template auto_test_invalid_template
checkrc==1
check:rc==1
check:output=~Error\: Could not find the template object named \'auto\_test\_invalid\_template\' of type \'node\'
end
@@ -224,11 +224,11 @@ description:create a node with a node template, using cec template to create nod
cmd:result=`lsdef | grep auto_test_cec_node_1`; if [[ $result =~ "auto_test_cec_node_1" ]]; then echo $result; noderm auto_test_cec_node_1; fi
cmd:result=`lsdef | grep auto_test_cec_node_2`; if [[ $result =~ "auto_test_cec_node_2" ]]; then echo $result; noderm auto_test_cec_node_2; fi
cmd:mkdef -t node -o auto_test_cec_node_1 --template cec-template serial=test mtm=test hcp=test groups=test_template
checkrc==0
check:rc==0
check:output=~1 object definitions have been created or modified
check:output=~created
cmd:mkdef -t node -o auto_test_cec_node_2 --template auto_test_cec_node_1 serial=test2 mtm=test2 hcp=test2
checkrc==0
check:rc==0
check:output=~1 object definitions have been created or modified
cmd:lsdef auto_test_cec_node_2
check:output=~Object name\: auto\_test\_cec\_node\_2
@@ -263,7 +263,7 @@ cmd:mkdef -t node -o auto_test_cec_node_1 --template cec-template serial=test mt
check:rc==1
check:output=~Error\: The attribute \"hcp\" must be specified!
cmd:mkdef -t node -o auto_test_cec_node_1 --template cec-template serial=test mtm=test hcp=test
checkrc==0
check:rc==0
check:output=~1 object definitions have been created or modified
cmd:lsdef auto_test_cec_node_1
check:output=~Object name\: auto\_test\_cec\_node\_1
@@ -287,10 +287,10 @@ description:create node named cec-template with cec template at beginning, the n
cmd:result=`lsdef | grep cec-template`; if [[ $result =~ "cec-template" ]]; then echo $result; noderm cec-template; fi
cmd:result=`lsdef | grep auto_test_cec_node`; if [[ $result =~ "auto_test_cec_node" ]]; then echo $result; noderm auto_test_cec_node; fi
cmd:mkdef -t node -o cec-template --template cec-template serial=test mtm=test hcp=test groups=test_template_priority
checkrc==0
check:rc==0
check:output=~1 object definitions have been created or modified
cmd:mkdef -t node -o auto_test_cec_node --template cec-template serial=test2 mtm=test2 hcp=test2
checkrc==0
check:rc==0
check:output=~1 object definitions have been created or modified
cmd:lsdef auto_test_cec_node
check:output=~Object name\: auto\_test\_cec\_node
@@ -306,9 +306,9 @@ check:output=~hcp\=test2
check:output=~mtm\=test2
check:output=~serial\=test2
cmd:noderm cec-template
checkrc==0
check:rc==0
cmd:noderm auto_test_cec_node
checkrc==0
check:rc==0
end
start:mkdef_template_diskless_osimage_rootimgdir

View File

@@ -134,7 +134,7 @@ os:linux
description:try to delete a template, then error messages appear
cmd:result=`lsdef | grep switch-template`; if [[ $result =~ "switch-template" ]]; then echo $result; noderm switch-template; fi
cmd:rmdef switch-template
checkrc==1
check:rc==1
check:output=~Error\: Could not find an object named \'switch-template\' of type \'node\'
check:output=~No objects have been removed from the xCAT database.
end

View File

@@ -91,10 +91,6 @@ Requires: ipmitool-xcat >= 1.8.17-1
%ifarch ppc ppc64 ppc64le
Requires: ipmitool-xcat >= 1.8.17-1
%endif
%ifarch ppc64le
# only OpenBMC support
Requires: xCAT-openbmc-py
%endif
%endif
%if %notpcm

View File

@@ -73,10 +73,6 @@ Requires: ipmitool-xcat >= 1.8.17-1
%ifarch ppc ppc64 ppc64le
Requires: ipmitool-xcat >= 1.8.17-1
%endif
%ifarch ppc64le
# only OpenBMC support
Requires: xCAT-openbmc-py
%endif
%endif
%if %notpcm
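With the automatic dependency dropped from both spec files above, xCAT-openbmc-py must now be installed explicitly where OpenBMC hardware is managed. A hedged sketch of that check (assumes the xCAT yum repositories are already configured; OpenBMC BMCs ship on ppc64le servers only):

```shell
# Decide whether the python-based OpenBMC support is needed on this
# machine; it is no longer pulled in automatically by xCAT's spec file.
needs_openbmc_py() {
    [ "$1" = "ppc64le" ]
}

if needs_openbmc_py "$(uname -m)"; then
    echo "run as root: yum install -y xCAT-openbmc-py"
fi
```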