
1, add the content for httpd tuning (#5513)

2, adjust the parameters based on field feedback
Bin Xu 2018-08-15 15:27:13 +08:00 committed by yangsong
parent 028be486e3
commit fe9743fdd4
3 changed files with 60 additions and 2 deletions


@@ -0,0 +1,57 @@
Tuning httpd for xCAT node deployments
======================================
In xCAT, operating system provisioning over the network relies heavily on the web server (Apache 2.x). However, Apache 2.x is a general-purpose web server, and its default settings may not allow enough simultaneous HTTP client connections to support a large cluster.
#. Tuning the MaxRequestWorkers directive
By default, httpd uses the ``prefork`` **MPM** module, which limits the server to 256 simultaneous requests. If you hit slow httpd responses during OS provisioning, you can raise the **MaxRequestWorkers** directive for better performance.
For example, to avoid provisioning failures when rebooting all nodes in a large hierarchical stateless cluster (one service node serving 270 compute nodes), it is suggested to increase the value from 256 to 1000, as illustrated in the example configuration below.
On Red Hat, change (or add) these directives in
::
/etc/httpd/conf/httpd.conf
On SLES (with Apache2), change (or add) these directives in
::
/etc/apache2/server-tuning.conf
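A minimal sketch of the relevant directives, assuming Apache 2.4 with the prefork MPM (the ``<IfModule>`` section name and the recommended file location vary by distro and Apache version). Note that **ServerLimit** must also be raised, because it caps the value that **MaxRequestWorkers** can take:
::
<IfModule mpm_prefork_module>
    # allow up to 1000 simultaneous client connections
    ServerLimit          1000
    MaxRequestWorkers    1000
    # process pool sizing, tune for your cluster
    StartServers           10
    MinSpareServers        10
    MaxSpareServers        20
</IfModule>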
#. Having httpd Cache the Files It Is Serving
Note: this information was contributed by Jonathan Dye and is provided here as an example; the details may need to be adjusted for your distro or Apache version.
This is simplest if you set ``noderes.nfsserver`` to a separate Apache server, which you can then configure as a caching reverse proxy. For some reason mod_mem_cache does not behave as expected, so you can use mod_disk_cache to achieve a similar result: create a tmpfs on the Apache server and set its mount point as the directory that **CacheRoot** points to. Also tell Apache to ignore ``/install/autoinst``, since the caching settings are very aggressive. Do a recursive wget to warm the cache and watch the tmpfs fill up, then run the kickstart installs (see the example commands after the configuration file below). Before this change, the Apache server on the xCAT management node may have been a bottleneck during kickstart installs; after it, it no longer should be.
Here is the Apache configuration file:
::
# don't be a proxy, just allow the reverse proxy
ProxyRequests Off
CacheIgnoreCacheControl On
CacheStoreNoStore On
CacheIgnoreNoLastMod On
CacheRoot /var/cache/apache2/tmpfs
CacheEnable disk /install
CacheDisable /install/autoinst
CacheMaxFileSize 1073741824
# CacheEnable mem / # failed attempt to do in-memory caching
# MCacheSize 20971520
# MCacheMaxObjectSize 524288000
# through ethernet network
# ProxyPass /install http://172.21.254.201/install
# through IB network
ProxyPass /install http://192.168.111.2/install
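The commands below sketch the tmpfs and cache-warming steps described above; the tmpfs size, the **CacheRoot** path, and the ``<caching-server>`` placeholder are illustrative and must be adapted to your environment:
::
# create and mount a tmpfs at the directory CacheRoot points to
mkdir -p /var/cache/apache2/tmpfs
mount -t tmpfs -o size=16g tmpfs /var/cache/apache2/tmpfs
# warm the cache with a recursive wget against the caching Apache server,
# discarding the local copies after they have been fetched
wget -r -np --delete-after http://<caching-server>/install/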
For more Apache 2.x tuning, see the external web page: `Apache Performance Tuning <http://httpd.apache.org/docs/2.4/misc/perf-tuning.html>`_


@@ -10,4 +10,5 @@ The information in this document should be viewed as example data only. Many of
linux_os_tuning.rst
xcatd_tuning.rst
database_tuning.rst
database_tuning.rst
httpd_tuning.rst


@@ -22,4 +22,4 @@ For large clusters, you should consider changing the default settings in ``site`` table
**useflowcontrol** : If ``yes``, the postscript processing on each node contacts ``xcatd`` on the MN/SN using a lightweight UDP packet to wait until ``xcatd`` is ready to handle the requests associated with postscripts. This prevents deploying nodes from flooding ``xcatd`` and locking out admin interactive use. This value works with the **xcatmaxconnections** and **xcatmaxbatch** attributes. If the value is ``no``, nodes sleep for a random time before contacting ``xcatd``, and retry. The default is ``no``. Not supported on AIX.
These attributes may be changed based on the size of your cluster. For a large cluster, it is better to enable **useflowcontrol** and set ``xcatmaxconnections = 128``, ``xcatmaxbatchconnections = 100``. The daemon will then allow only 100 concurrent connections from the nodes, which still leaves 28 connections available on the management node for xCAT commands (e.g. ``nodels``).
These attributes may be changed based on the size of your cluster. For a large cluster, it is better to enable **useflowcontrol** and set ``xcatmaxconnections = 356``, ``xcatmaxbatchconnections = 300``. The daemon will then allow only 300 concurrent connections from the nodes, which still leaves 56 connections available on the management node for xCAT commands (e.g. ``nodels``).
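As an illustration, these values could be applied with ``chdef`` against the ``site`` table (this assumes the usual ``clustersite`` object name; restarting ``xcatd`` may be required before the new limits take effect):
::
# enable flow control and raise the connection limits in the site table
chdef -t site clustersite useflowcontrol=yes xcatmaxconnections=356 xcatmaxbatchconnections=300
# verify the settings
lsdef -t site clustersite -i useflowcontrol,xcatmaxconnections,xcatmaxbatchconnections
# restart xcatd so the new limits are picked up (use systemctl restart xcatd on systemd distros)
service xcatd restart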