Mirror of https://github.com/xcat2/xcat-core.git (synced 2025-05-22 11:42:05 +00:00)
Spelling fixes for admin guides
This commit is contained in:
parent c5d40aa00b
commit 1b9ccde323
@@ -6,7 +6,7 @@ Description

 The definition of physical units in the cluster, such as lpar, virtual machine, frame, cec, hmc, switch.

-Key Attrubutes
+Key Attributes
 --------------

 * os:
@@ -16,7 +16,7 @@ Key Attrubutes
 The hardware architecture of this node. Valid values: x86_64, ppc64, x86, ia64.

 * groups:
-Usually, there are a set of nodes with some attributes in common, xCAT admin can define a node group containing these nodes, so that the management task can be issued against the group instead of individual nodes. A node can be a memeber of different groups, so the value of this attributes is a comma-delimited list of groups. At least one group is required to create a node. The new created group names should not be prefixed with "__" as this token has been preserverd as the internal group name.
+Usually, there are a set of nodes with some attributes in common, xCAT admin can define a node group containing these nodes, so that the management task can be issued against the group instead of individual nodes. A node can be a member of different groups, so the value of this attributes is a comma-delimited list of groups. At least one group is required to create a node. The new created group names should not be prefixed with "__" as this token has been preserved as the internal group name.

 * mgt:
 The method to do general hardware management of the node. This attribute can be determined by the machine type of the node. Valid values: ipmi, blade, hmc, ivm, fsp, bpa, kvm, esx, rhevm.
@@ -1,14 +1,14 @@
 Accelerating the diskless initrd and rootimg generating
 ========================================================

-Generating diskless initrd with ``genimage`` and compressed rootimg with ``packimage`` and ``liteimg`` is a time-comsuming process, it can be accelerated by enabling paralell compression tool ``pigz`` on the management node with multiple processors and cores. See :ref:`Appendix <pigz_example>` for an example on ``packimage`` performance optimized with ``pigz`` enabled.
+Generating diskless initrd with ``genimage`` and compressed rootimg with ``packimage`` and ``liteimg`` is a time-consuming process, it can be accelerated by enabling parallel compression tool ``pigz`` on the management node with multiple processors and cores. See :ref:`Appendix <pigz_example>` for an example on ``packimage`` performance optimized with ``pigz`` enabled.



 Enabling the ``pigz`` for diskless initrd and rootimg generating
 ----------------------------------------------------------------

-The paralell compression tool ``pigz`` can be enabled by installing ``pigz`` package on the management server or diskless rootimg. Depending on the method of generating the initrd and compressed rootimg, the steps differ in different Linux distributions.
+The parallel compression tool ``pigz`` can be enabled by installing ``pigz`` package on the management server or diskless rootimg. Depending on the method of generating the initrd and compressed rootimg, the steps differ in different Linux distributions.

 * **[RHEL]**

@@ -24,7 +24,7 @@ The paralell compression tool ``pigz`` can be enabled by installing ``pigz`` pac
 ``pigz`` should be installed in the diskless rootimg. Download ``pigz`` package from https://dl.fedoraproject.org/pub/epel/ , then customize the diskless osimage to install ``pigz`` as the additional packages, see :doc:`Install Additional Other Packages</guides/admin-guides/manage_clusters/ppc64le/diskless/customize_image/additional_pkg>` for more details.

-2) Enabeling the ``pigz`` in ``packimage``
+2) Enabling the ``pigz`` in ``packimage``

 ``pigz`` should be installed on the management server. Download ``pigz`` package from https://dl.fedoraproject.org/pub/epel/ , then install the ``pigz`` with ``yum`` or ``rpm``.

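Note: on a RHEL-family management node with the EPEL repository already configured, enabling ``pigz`` for ``packimage`` usually reduces to a package install — a minimal sketch, with the osimage name taken as an example: ::

    # install the parallel compressor on the management node
    yum install -y pigz
    # repack the diskless image; packimage uses pigz when it is present
    packimage rhels7.1-ppc64le-netboot-compute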
@@ -70,7 +70,7 @@ These are described in more details in the following sections.
 RPM Names
 '''''''''

-A simple otherpkgs.pkglist file just contains the the name of the rpm file without the version numbers.
+A simple otherpkgs.pkglist file just contains the name of the rpm file without the version numbers.

 For example, if you put the following three rpms under **/install/post/otherpkgs/<os>/<arch>/** directory, ::

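For reference, such a pkglist simply lists bare package names, one per line — an illustrative sketch with hypothetical path and package names, not the file from the guide: ::

    # /install/custom/install/rh/compute.otherpkgs.pkglist (hypothetical path)
    screen
    rsync
    vim-enhanced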
@@ -51,7 +51,7 @@ These are described in more details in the following sections.
 RPM Names
 ''''''''''

-A simple .pkglist file just contains the the name of the rpm file without the version numbers.
+A simple .pkglist file just contains the name of the rpm file without the version numbers.

 For example ::

@@ -1,7 +1,7 @@
 Configure Additional Network Interfaces - confignics
 ====================================================

-The **nics** table and the **confignics** postscript can be used to automatically configure additional network interfaces (mutltiple ethernets adapters, InfiniBand, etc) on the nodes as they are being deployed.
+The **nics** table and the **confignics** postscript can be used to automatically configure additional network interfaces (multiple ethernets adapters, InfiniBand, etc) on the nodes as they are being deployed.

 The way the confignics postscript decides what IP address to give the secondary adapter is by checking the nics table, in which the nic configuration information is stored.

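As a refresher on how the nics table gets populated, secondary-adapter definitions are normally set per node with ``chdef`` — a minimal sketch with made-up addresses and network name: ::

    # describe eth1 on cn1; nicips/nictypes/nicnetworks are the nics table columns
    chdef cn1 nicips.eth1=10.1.89.7 nictypes.eth1=Ethernet nicnetworks.eth1=net_eth1
    # have confignics run at deploy time
    chdef cn1 -p postscripts=confignics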
@@ -78,7 +78,7 @@ By default, confignics does not configure the install nic. if need, using flag "

     chdef cn1 -p prostscripts="confignics -s"

-Option "-s" writes the install nic's information into configuration file for persistance. All install nic's data defined in nics table will be written also.
+Option "-s" writes the install nic's information into configuration file for persistence. All install nic's data defined in nics table will be written also.


 Add network object into the networks table
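Since the page goes on to add a network object, for context the usual way is ``mkdef -t network`` — a sketch with placeholder subnet values matching the nics example above: ::

    # define the secondary network referenced by nicnetworks (example values)
    mkdef -t network -o net_eth1 net=10.1.89.0 mask=255.255.255.0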
@@ -11,7 +11,7 @@ There are more attributes of nodeset used for some specific purpose or specific
 * **runcmd**: This instructs the node to boot to the xCAT nbfs environment and proceed to configure BMC for basic remote access. This causes the IP, netmask, gateway, username, and password to be programmed according to the configuration table.
 * **shell**: This instructs the node to boot to the xCAT genesis environment, and present a shell prompt on console. The node will also be able to be sshed into and have utilities such as wget, tftp, scp, nfs, and cifs. It will have storage drivers available for many common systems.

-Choose such additional attribute of nodeset according to your requirement, if want to get more informantion about nodeset, refer to nodeset's man page.
+Choose such additional attribute of nodeset according to your requirement, if want to get more information about nodeset, refer to nodeset's man page.

 Start the OS Deployment
 =======================
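For orientation, these nodeset states are selected on the command line — a minimal sketch against a hypothetical node ``cn1``: ::

    # boot cn1 into the genesis shell environment
    nodeset cn1 shell
    # or run the BMC configuration flow described by runcmd
    nodeset cn1 runcmd=bmcsetup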
@@ -38,7 +38,7 @@ In above example, pkglist file is /opt/xcat/share/xcat/netboot/rh/compute.rhels7
 Setup pkglist
 -------------

-Before setting up kdump,the approprite rpms should be added to the pkglist file.Here is the rpm packages list which needs to be added to pkglist file for kdump for different OS.
+Before setting up kdump, the appropriate rpms should be added to the pkglist file.Here is the rpm packages list which needs to be added to pkglist file for kdump for different OS.

 * **[RHEL]** ::

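The RHEL list itself is below the cut of this hunk; as a reminder of the shape of such an entry, a kdump pkglist would typically carry the kexec/crash tooling — an illustrative sketch, not the exact list from the guide: ::

    # kdump prerequisites on RHEL (illustrative)
    kexec-tools
    crash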
@@ -1,7 +1,7 @@
 Generate Diskless Image
 =======================

-The ``copycds`` command copies the contents of the Linux media to ``/install/<os>/<arch>`` so that it will be available for installing nodes or creating diskless images. After executing ``copycds``, there are serveral ``osimage`` definitions created by default. Run ``tabdump osimage`` to view these images: ::
+The ``copycds`` command copies the contents of the Linux media to ``/install/<os>/<arch>`` so that it will be available for installing nodes or creating diskless images. After executing ``copycds``, there are several ``osimage`` definitions created by default. Run ``tabdump osimage`` to view these images: ::

     tabdump osimage

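For context, the flow this page documents starts from the distro ISO — a minimal sketch with a placeholder ISO path: ::

    # copy the distro media into /install/<os>/<arch>
    copycds /tmp/RHEL-7.1-Server-ppc64le-dvd1.iso   # placeholder ISO path
    # list the osimage definitions copycds created
    lsdef -t osimage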
@@ -18,7 +18,7 @@ The ``netboot-compute`` is the default **diskless** osimage created rhels7.1 ppc

 Before packing the diskless image, you have the opportunity to change any files in the image by changing to the ``rootimgdir`` and making modifications. (e.g. ``/install/netboot/rhels7.1/ppc64le/compute/rootimg``).

-However it's recommended that all changes to the image are made via post install scripts so that it's easily repeatable.Although, instead, we recommend that you make all changes to the image via your postinstall script, so that it is repeatable. Refer to :doc:`/guides/admin-guides/manage_clusters/ppc64le/diskless/customize_image/pre_post_script` for more details.
+However it's recommended that all changes to the image are made via post install scripts so that it's easily repeatable. Although, instead, we recommend that you make all changes to the image via your postinstall script, so that it is repeatable. Refer to :doc:`/guides/admin-guides/manage_clusters/ppc64le/diskless/customize_image/pre_post_script` for more details.


 Pack Diskless Image
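As a reminder of the commands this page walks through, generating and then packing the default diskless image looks roughly like this (image name matches the ``netboot-compute`` example above): ::

    # build the diskless initrd and root filesystem
    genimage rhels7.1-ppc64le-netboot-compute
    # compress the root filesystem into rootimg.gz for network boot
    packimage rhels7.1-ppc64le-netboot-compute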
@@ -102,7 +102,7 @@ Skip this section if you want to use the image as is.

 1, The use can modify the image to fit his/her own need. The following can be modified.

-* Modify .pkglist file to add or remove packges that are from the os distro
+* Modify .pkglist file to add or remove packages that are from the os distro

 * Modify .otherpkgs.pkglist to add or remove packages from other sources. Refer to ``Using_Updatenode`` for details

@@ -40,7 +40,7 @@ The ``postinstall`` scripts are executed in step b).
 Do ``postinstall`` scripts execute in chroot mode under ``rootimgdir`` directory?
 `````````````````````````````````````````````````````````````````````````````````

-No. Unlike postscripts and postbootscripts, the ``postinstall`` scripts are run in non-chroot environment, directly on the management node. In the postinstall scripts, all the paths of the directories and files are based on ``/`` of the managememnt node. To reference inside the ``rootimgdir``, use the ``$IMG_ROOTIMGDIR`` environment variable, exported by ``genimage``.
+No. Unlike postscripts and postbootscripts, the ``postinstall`` scripts are run in non-chroot environment, directly on the management node. In the postinstall scripts, all the paths of the directories and files are based on ``/`` of the management node. To reference inside the ``rootimgdir``, use the ``$IMG_ROOTIMGDIR`` environment variable, exported by ``genimage``.

 What are some of the environment variables available to my customized ``postinstall`` scripts?
 ``````````````````````````````````````````````````````````````````````````````````````````````
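To make the non-chroot behaviour concrete, a custom ``postinstall`` script would touch the image through ``$IMG_ROOTIMGDIR`` rather than ``/`` — a minimal sketch, not an xCAT-shipped script: ::

    #!/bin/sh
    # runs on the management node; $IMG_ROOTIMGDIR points at the image root
    echo "built by genimage on $(hostname)" >> $IMG_ROOTIMGDIR/etc/motd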
@@ -185,7 +185,7 @@ Here is the RAID1 partitioning section in service.raid1.sles11.tmpl: ::

 The samples above created one 24MB PReP partition on each disk, one 2GB mirrored swap partition and one mirrored ``/`` partition uses all the disk space. If you want to use different partitioning scheme in your cluster, modify this RAID1 section in the autoyast template file accordingly.

-Since the PReP partition can not be mirrored between the two disks, some additional postinstall commands should be run to make the second disk bootable, here the the commands needed to make the second disk bootable: ::
+Since the PReP partition can not be mirrored between the two disks, some additional postinstall commands should be run to make the second disk bootable, here the commands needed to make the second disk bootable: ::

     # Set the second disk to be bootable for RAID1 setup
     parted -s /dev/sdb mkfs 1 fat16
@@ -230,7 +230,7 @@ The command mdadm can query the detailed configuration for the RAID partitions:
 Disk Replacement Procedure
 --------------------------

-If any one disk fails in the RAID1 arrary, do not panic. Follow the procedure listed below to replace the failed disk and you will be fine.
+If any one disk fails in the RAID1 array, do not panic. Follow the procedure listed below to replace the failed disk and you will be fine.

 Faulty disks should appear marked with an (F) if you look at ``/proc/mdstat``: ::

@@ -250,7 +250,7 @@ Faulty disks should appear marked with an (F) if you look at ``/proc/mdstat``: :

 We can see that the first disk is broken because all the RAID partitions on this disk are marked as (F).

-Remove the failed disk from RAID arrary
+Remove the failed disk from RAID array
 ---------------------------------------

 ``mdadm`` is the command that can be used to query and manage the RAID arrays on Linux. To remove the failed disk from RAID array, use the command: ::
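The actual command is cut off at the end of this hunk; the usual ``mdadm`` idiom for that step looks like the following sketch (device names are examples): ::

    # mark the failed member faulty, then pull it out of the md0 array
    mdadm /dev/md0 --fail /dev/sda2 --remove /dev/sda2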
@@ -3,7 +3,7 @@ Overview

 Synchronizing (sync) files to the nodes is a feature of xCAT used to distribute specific files from the management node to the new-deploying or deployed nodes.

-This function is supported for diskful or RAMdisk-based diskless nodes.Generally, the specific files are usually the system configuration files for the nodes in the **/etc/directory**, like **/etc/hosts**, **/etc/resolve.conf**; it also could be the application programs configuration files for the nodes. The advantages of this function are: it can parallel sync files to the nodes or nodegroup for the installed nodes; it can automatically sync files to the newly-installing node after the installation. Additionally, this feature also supports the flexible format to define the synced files in a configuration file, called **'synclist'**.
+This function is supported for diskful or RAMdisk-based diskless nodes. Generally, the specific files are usually the system configuration files for the nodes in the **/etc/directory**, like **/etc/hosts**, **/etc/resolve.conf**; it also could be the application programs configuration files for the nodes. The advantages of this function are: it can parallel sync files to the nodes or nodegroup for the installed nodes; it can automatically sync files to the newly-installing node after the installation. Additionally, this feature also supports the flexible format to define the synced files in a configuration file, called **'synclist'**.

 The synclist file can be a common one for a group of nodes using the same profile or osimage, or can be the special one for a particular node. Since the location of the synclist file will be used to find the synclist file, the common synclist should be put in a given location for Linux nodes or specified by the osimage.

@@ -17,7 +17,7 @@ For a new-installing nodes, the Syncing File action will be triggered when perfo

 The postscript **'syncfiles'** is located in the **/install/postscripts/**. When running, it sends a message to the xcatd on the management node or service node, then the xcatd figures out the corresponding synclist file for the node and calls the ``xdcp`` command to sync files in the synclist to the node.

-**If installing nodes in a hierarchical configuration, you must sync the Service Nodes first to make sure they are updated. The compute nodes will be sync'd from their service nodes.You can use the** ``updatenode <computenodes> -f`` **command to sync all the service nodes for range of compute nodes provided.**
+**If installing nodes in a hierarchical configuration, you must sync the Service Nodes first to make sure they are updated. The compute nodes will be sync'd from their service nodes. You can use the** ``updatenode <computenodes> -f`` **command to sync all the service nodes for range of compute nodes provided.**

 For an installed nodes, the Syncing File action happens when performing the ``updatenode -F`` or ``xdcp -F synclist`` command to update a nodes. If performing the ``updatenode -F``, it figures out the location of the synclist files for all the nodes and classify the nodes which using same synclist file and then calls the ``xdcp -F synclist`` to sync files to the nodes.

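Putting those two flags side by side, a typical sync of already-installed nodes in a hierarchical cluster might look like the sketch below (the ``compute`` node range is a placeholder): ::

    # sync the service nodes serving the compute range first
    updatenode compute -f
    # then push the synclist files down to the compute nodes
    updatenode compute -F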
@@ -97,7 +97,7 @@ Note: From xCAT 2.9.2 on AIX and from xCAT 2.12 on Linux, xCAT support a new for

     file -> (noderange for permitted nodes) file

-The noderange would have several format. Following examples show that /etc/hosts file is synced to the nodes which is specifed before the file name ::
+The noderange would have several format. Following examples show that /etc/hosts file is synced to the nodes which is specified before the file name ::

     /etc/hosts -> (node1,node2) /etc/hosts # The /etc/hosts file is synced to node1 and node2
     /etc/hosts -> (node1-node4) /etc/hosts # The /etc/hosts file is synced to node1,node2,node3 and node4
@@ -51,7 +51,7 @@ The content above presents some syntax supported in exlist file:

     +./usr/share/locale/C*

-It is useful to include files following an exclude entry to qiuckly remove a larger set of files using a wildcard and then adding back the few necessary files using the + sign. In the above example, all the files and sub-directories matching the pattern ``/usr/share/locale/C*`` will be included in the ``rootimg.gz`` file.
+It is useful to include files following an exclude entry to quickly remove a larger set of files using a wildcard and then adding back the few necessary files using the + sign. In the above example, all the files and sub-directories matching the pattern ``/usr/share/locale/C*`` will be included in the ``rootimg.gz`` file.


 Customize the ``exlist`` file and the osimage definition
@@ -77,4 +77,4 @@ If you want to customize the osimage ``sles12.1-ppc64le-netboot-compute`` with y

 .. [1] The ``exlist`` file entry should not end with a slash ``/``, For example, this entry will never match anything: ``./usr/lib/perl[0-9]/[0-9.]*/ppc64le-linux-thread-multi/Encode/``.

-.. [2] Pattern match test applies to the whole file name,starting from one of the start points specified in the ``exlist`` file entry. The regex syntax should comply with the regex syntax of system command ``find -path``, refer to its doc for details.
+.. [2] Pattern match test applies to the whole file name, starting from one of the start points specified in the ``exlist`` file entry. The regex syntax should comply with the regex syntax of system command ``find -path``, refer to its doc for details.
@@ -2,7 +2,7 @@ Manage Virtual Machine (VM)
 ============================


-Now the MowerKVM hypervisor "kvmhost1" is ready, this section introduces the VM management in xCAT, including examples on how to create,remove and clone VMs.
+Now the MowerKVM hypervisor "kvmhost1" is ready, this section introduces the VM management in xCAT, including examples on how to create, remove and clone VMs.

 Create Virtual Machine
 ----------------------
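For context on what "Create Virtual Machine" covers, defining and creating a KVM guest in xCAT generally follows this pattern — a sketch with example attribute values and storage path, not the exact steps from the guide: ::

    # define the VM node and its placement on the kvmhost1 hypervisor
    mkdef vm1 groups=vm,all mgt=kvm vmhost=kvmhost1 vmcpus=2 vmmemory=2048 vmnics=br0 vmstorage=/install/vms
    # create the virtual machine with a 20 GB disk
    mkvm vm1 -s 20G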
@@ -117,7 +117,7 @@ Now a VM "vm1" is created, it can be provisioned like any other nodes in xCAT. T

     rpower vm1 on

-If "vm1" is powered on successfully, the VM status can be obtained by running the following command on management node ::
+If "vm1" is powered on successfully, the VM status can be obtained by running the following command on management node ::

     rpower vm1 status

@@ -134,6 +134,6 @@ where <scripts> is a comma separated postscript like ospkgs,otherpkgs etc.
 * wget is used in xcatdsklspost/xcataixpost to get all the postscripts from the <server> to the node. You can check /tmp/wget.log file on the node to see if wget was successful or not. You need to make sure the /xcatpost directory has enough space to hold the postscripts.
 * A file called /xcatpost/mypostscript (Linux) is created on the node which contains the environmental variables and scripts to be run. Make sure this file exists and it contains correct info. You can also run this file on the node manually to debug.
 * For ospkgs/otherpkgs, if /install is not mounted on the <server>, it will download all the rpms from the <server> to the node using wget. Make sure /tmp and /xcatpost have enough space to hold the rpms and check /tmp/wget.log for errors.
-* For ospkgs/otherpkgs, If zypper or yum is installed on the node, it will be used the command to install the rpms. Make sure to run createrepo on the source direcory on the <server> every time a rpm is added or removed. Otherwise, the rpm command will be used, in this case, make sure all the necessary depended rpms are copied in the same source directory.
+* For ospkgs/otherpkgs, If zypper or yum is installed on the node, it will be used the command to install the rpms. Make sure to run createrepo on the source directory on the <server> every time a rpm is added or removed. Otherwise, the rpm command will be used, in this case, make sure all the necessary depended rpms are copied in the same source directory.
 * You can append -x on the first line of ospkgs/otherpkgs to get more debug info.

@@ -3,12 +3,12 @@ MTMS-based Discovery

 MTMS stands for **M**\ achine **T**\ ype/\ **M**\ odel and **S**\ erial. This is one way to uniquely identify each physical server.

-MTMS-based hardware discovery assumes the administator has the model type and serial number information for the physical servers and a plan for mapping the servers to intended hostname/IP addresses.
+MTMS-based hardware discovery assumes the administrator has the model type and serial number information for the physical servers and a plan for mapping the servers to intended hostname/IP addresses.

 **Overview**

 #. Automatically search and collect MTMS information from the servers
-#. Write **discovered-bmc-nodes** to xCAT (recommened to set different BMC IP address)
+#. Write **discovered-bmc-nodes** to xCAT (recommended to set different BMC IP address)
 #. Create **predefined-compute-nodes** to xCAT providing additional properties
 #. Power on the nodes which triggers xCAT hardware discovery engine

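As a pointer to the first overview step, the automatic search is normally done with ``bmcdiscover`` — a sketch with a placeholder IP range, assuming the BMCs are reachable from the management node: ::

    # scan the BMC network range and print what is found in stanza format
    bmcdiscover --range 50.0.100.1-100 -z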
@@ -20,7 +20,7 @@ The litefile table specifies the directories and files on the statelite nodes th
 #. The third column in the litefile table specifies options for the directory or file:

    #. tmpfs - It provides a file or directory for the node to use when booting, its permission will be the same as the original version on the server. In most cases, it is read-write; however, on the next statelite boot, the original version of the file or directory on the server will be used, it means it is non-persistent. This option can be performed on files and directories.
-   #. rw - Same as Above.Its name "rw" does NOT mean it always be read-write, even in most cases it is read-write. Do not confuse it with the "rw" permission in the file system.
+   #. rw - Same as above. Its name "rw" does NOT mean it always be read-write, even in most cases it is read-write. Do not confuse it with the "rw" permission in the file system.
    #. persistent - It provides a mounted file or directory that is copied to the xCAT persistent location and then over-mounted on the local file or directory. Anything written to that file or directory is preserved. It means, if the file/directory does not exist at first, it will be copied to the persistent location. Next time the file/directory in the persistent location will be used. The file/directory will be persistent across reboots. Its permission will be the same as the original one in the statelite location. It requires the statelite table to be filled out with a spot for persistent statelite. This option can be performed on files and directories.
    #. con - The contents of the pathname are concatenated to the contents of the existing file. For this directive the searching in the litetree hierarchy does not stop when the first match is found. All files found in the hierarchy will be concatenated to the file when found. The permission of the file will be "-rw-r--r--", which means it is read-write for the root user, but readonly for the others. It is non-persistent, when the node reboots, all changes to the file will be lost. It can only be performed on files. Do not use it for one directory.
    #. ro - The file/directory will be overmounted read-only on the local file/directory. It will be located in the directory hierarchy specified in the litetree table. Changes made to this file or directory on the server will be immediately seen in this file/directory on the node. This option requires that the file/directory to be mounted must be available in one of the entries in the litetree table. This option can be performed on files and directories.
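To make the column layout concrete, a litefile entry pairs an image name (or ALL), a path, and one of these options — an illustrative ``tabdump litefile``-style stanza with example paths, not taken from a real cluster: ::

    #image,file,options,comments,disable
    "ALL","/etc/adjtime","tmpfs",,
    "ALL","/root/.ssh/","persistent",,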
@@ -133,5 +133,5 @@ noderes

 ``noderes.nfsserver`` attribute can be set for the NFSroot server. If this is not set, then the default is the Management Node.

-``noderes.nfsdir`` can be set. If this is not set, the the default is ``/install``
+``noderes.nfsdir`` can be set. If this is not set, the default is ``/install``

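Both attributes are ordinary node attributes, so they are normally set with ``chdef`` — a sketch with placeholder server address and node group: ::

    # point the NFSroot server and export directory for the compute group
    chdef compute nfsserver=10.0.0.1 nfsdir=/install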
@@ -88,7 +88,7 @@ Fail to ping the installed VM

     ADDRCONF(NETDEV_UP): eth0 link is not ready.

-**Solutoin**:
+**Solution**:
   Usually caused by the incorrect VM NIC model. Try the following steps to specify "virtio": ::

     rmvm vm1
@@ -16,7 +16,7 @@ Provision Hypervisor

 #. Customize the hypervisor node definition to create network bridge

-   xCAT ships a postscript **xHRM** to create a network bridge on kvm host during installation/netbooting. Specify the **xHRM** with appropriate parameters in **postscripts** attibute. For example:
+   xCAT ships a postscript **xHRM** to create a network bridge on kvm host during installation/netbooting. Specify the **xHRM** with appropriate parameters in **postscripts** attribute. For example:

    * To create a bridge named 'br0' against the installation network device specified by **installnic**: ::

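The command itself is below the fold of this hunk; setting that attribute typically looks like the following sketch (bridge and device names follow the later ``updatenode`` example in this commit): ::

    # add xHRM to the hypervisor's postscripts so br0 is built from eth0 at deploy time
    chdef kvmhost1 -p postscripts="xHRM bridgeprereq eth0:br0"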
@@ -68,7 +68,7 @@ If the hypervisor is provisioned successfully according to the steps described a
     br0 8000.000000000000 no eth0


-If the network bridge is not created or configured successfully, run "xHRM" with **updatenode** on managememt node to create it manually:::
+If the network bridge is not created or configured successfully, run "xHRM" with **updatenode** on management node to create it manually:::

     updatenode kvmhost1 -P "xHRM bridgeprereq eth0:br0"

@@ -15,7 +15,7 @@ Please make sure the following packages have been installed on the management no
 Set Up the kvm storage directory on the management node(optional)
 -----------------------------------------------------------------

-It is a recommended configuration to create a shared file system for virtual machines hosting. The shared file system, usually on a SAN, NAS or GPFS, is shared among KVM hypevisors, which simplifies VM migration from one hypervisor to another with xCAT.
+It is a recommended configuration to create a shared file system for virtual machines hosting. The shared file system, usually on a SAN, NAS or GPFS, is shared among KVM hypervisors, which simplifies VM migration from one hypervisor to another with xCAT.

 The easiest shared file system is ``/install`` directory on the management node, it can be shared among hypervisors via NFS. Please refer to the following steps :

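The steps themselves are below the cut of this hunk; exporting ``/install`` over NFS from the management node generally amounts to something like the sketch below (export options and paths are assumptions, adjust to site policy): ::

    # on the management node: export /install read-write to the hypervisors
    echo "/install *(rw,no_root_squash,sync)" >> /etc/exports
    exportfs -r
    # on each KVM host: mount it at the same path
    mkdir -p /install
    mount <mgmt_node_ip>:/install /install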