
add doc on xCAT VM support

This commit is contained in:
immarvin 2015-11-15 02:26:48 -05:00
parent c5c8258cdd
commit ccbcac32fb
4 changed files with 202 additions and 33 deletions

Problems and Solutions
======================

VNC client complains the credentials are not valid
--------------------------------------------------

**Issue**: While connecting to the hypervisor with a VNC client, the client complains "Authentication failed".

**Solution**: Check whether the clocks on the hypervisor and the head node are synced.
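The clock check can be sketched as follows; ``kvmhost1`` is an assumed hypervisor name, and the remote timestamp is stubbed so the snippet runs standalone:

```shell
# Compare epoch seconds on the management node and the hypervisor.
# In practice, fetch the remote value with:  ssh kvmhost1 date +%s
local_ts=$(date +%s)
remote_ts=$local_ts           # stub; replace with the hypervisor's timestamp
skew=$(( local_ts - remote_ts ))
skew=${skew#-}                # absolute value
echo "clock skew: ${skew}s"
# A large skew breaks VNC authentication; sync the clocks, for example with:
#   ssh kvmhost1 "ntpdate <management_node>"
```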
rpower fails with "qemu: could not open disk image /var/lib/xcat/pools/2e66895a-e09a-53d5-74d3-eccdd9746eb5/vmXYZ.sda.qcow2: Permission denied"
-----------------------------------------------------------------------------------------------------------------------------------------------
**Issue**: ::

    # rpower vm1 on
    vm1: Error: internal error Process exited while reading console log output: char device redirected to /dev/pts/1
    qemu: could not open disk image /var/lib/xcat/pools/2e66895a-e09a-53d5-74d3-eccdd9746eb5/vm1.sda.qcow2: Permission denied: internal error Process exited while reading console log output: char device redirected to /dev/pts/1
    qemu: could not open disk image /var/lib/xcat/pools/2e66895a-e09a-53d5-74d3-eccdd9746eb5/vm1.sda.qcow2: Permission denied
**Solution**: This is usually caused by incorrect permissions in the NFS server/client configuration. NFSv4 is enabled by default in some Linux distributions, such as CentOS 6. The solution is to disable NFSv4 support on the NFS server by uncommenting the following line in ``/etc/sysconfig/nfs``: ::

    RPCNFSDARGS="-N 4"

Then restart the NFS services and try to power on the VM again.

**Note**: For a stateless hypervisor, please purge the VM with ``rmvm -p vm1``, reboot the hypervisor, and then re-create the VM.
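The uncommenting step can be sketched with ``sed``; it is shown here against a temporary copy so the sketch is side-effect free (drop the copy and point it at ``/etc/sysconfig/nfs`` to apply for real):

```shell
# Stand-in copy of /etc/sysconfig/nfs with the option still commented out:
cfg=$(mktemp)
printf '#RPCNFSDARGS="-N 4"\n' > "$cfg"

# Uncomment the line so rpc.nfsd drops NFSv4 support:
sed -i 's/^#\(RPCNFSDARGS="-N 4"\)/\1/' "$cfg"
grep '^RPCNFSDARGS' "$cfg"

# Then restart NFS (CentOS 6):  service nfs restart
```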
"Error: Cannot communicate via libvirt to kvmhost1"
---------------------------------------------------
**Issue**: The kvm related commands complain "Error: Cannot communicate via libvirt to kvmhost1".

**Solution**: This is usually caused by an incorrect ssh configuration between the xCAT management node and the hypervisor. Please make sure it is possible to ssh from the management node to the hypervisor without a password.
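A non-interactive way to test this: with ``BatchMode``, ssh fails instead of prompting for a password, so the exit status reveals whether key-based access works (``kvmhost1`` is an assumed node name):

```shell
# Prints "OK" only if key-based (passwordless) login succeeds.
check_ssh() {
    if ssh -o BatchMode=yes -o ConnectTimeout=5 "root@$1" true 2>/dev/null; then
        echo "passwordless ssh to $1: OK"
    else
        echo "passwordless ssh to $1: FAILED"
    fi
}
check_ssh kvmhost1
# If it fails, push the xCAT ssh keys to the node:  xdsh kvmhost1 -K
```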
Fail to ping the newly installed VM
------------------------------------
**Issue**: The newly installed stateful VM node is not pingable, and the following message can be observed in the console during VM boot: ::

    ADDRCONF(NETDEV_UP): eth0 link is not ready.

**Solution**: This is usually caused by an incorrect VM NIC model. Please try the following steps to specify "virtio": ::

    rmvm vm1
    chdef vm1 vmnicnicmodel=virtio
    mkvm vm1

Virtual Machines
================
The **Kernel-based Virtual Machine (KVM)** is a full virtualization solution for Enterprise Linux distributions. It is known as the *de facto* open source virtualization mechanism and is currently used by many software companies.
**IBM PowerKVM** is a product that combines the resilience and performance of the Power platform with the openness of KVM, providing several advantages:

* Higher workload consolidation with processor overcommitment and memory sharing
* Dynamic addition and removal of virtual devices
* Microthreading scheduling granularity
* Integration with **IBM PowerVC** and **OpenStack**
* Simplified management using open source software
* Avoidance of vendor lock-in
* Use of POWER8 hardware features, such as SMT8 and microthreading
The xCAT based KVM solution offers users the ability to:

* provision the hypervisor on bare metal nodes
* provision virtual machines
* migrate virtual machines to different hosts
* install all versions of Linux supported by the standard xCAT provisioning methods (stateless, iSCSI, and scripted install virtual machines)
* install copy-on-write instances of virtual machines
* clone virtual machines

This section introduces the steps of management node preparation, KVM hypervisor setup and virtual machine management, and presents some typical problems and solutions for xCAT kvm support.
.. toctree::
   :maxdepth: 2

   kvmMN.rst
   powerKVM.rst
   manage_vms.rst
   FAQ.rst

Set Up the Management Server for KVM
====================================
Install the kvm related packages
--------------------------------
Additional packages need to be installed on the management node for kvm support. Please make sure the following package has been installed on the management node; if not, install it manually: ::

    perl-Sys-Virt
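A quick way to check whether the module is present (the install command shown is for yum-based distributions; adjust for others):

```shell
# perl-Sys-Virt supplies the Perl bindings for libvirt that xCAT's kvm
# support relies on.  Install it if the check below reports it missing:
#   yum install -y perl-Sys-Virt
out=$(perl -MSys::Virt -e 'print "Sys::Virt loaded\n"' 2>/dev/null \
      || echo "perl-Sys-Virt missing")
echo "$out"
```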
Set Up the kvm storage directory on the management node (optional)
-------------------------------------------------------------------

It is recommended to create a shared file system to host the virtual machines. The shared file system, usually on a SAN, NAS or GPFS, is shared among the KVM hypervisors, which simplifies VM migration from one hypervisor to another with xCAT.

The easiest shared file system is the ``/install`` directory on the management node, which can be shared among the hypervisors via NFS. Please refer to the following steps:
* Create a directory to store the virtual disk files: ::

    mkdir -p /install/vms

* Export the storage directory: ::

    echo "/install/vms *(rw,no_root_squash,sync,fsid=0)" >> /etc/exports
    exportfs -r
**Note**: Make sure root permission is preserved for the NFS clients (i.e. use the ``no_root_squash`` option); otherwise the virtual disk files cannot be accessed. The ``fsid=0`` option is needed for NFSv4.
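As a sanity check before restarting NFS, make sure the entry really carries ``no_root_squash``: without it, qemu, which accesses the disk images as root from the hypervisor, is squashed to an unprivileged user and gets "Permission denied". A side-effect-free sketch:

```shell
# The exports entry as written to /etc/exports above:
entry='/install/vms *(rw,no_root_squash,sync,fsid=0)'
if printf '%s\n' "$entry" | grep -q 'no_root_squash'; then
    echo "exports entry OK"
else
    echo "WARNING: root squashing still enabled; VM disks will be unwritable"
fi
# After appending the entry to /etc/exports, confirm with:
#   exportfs -r && showmount -e localhost
```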

PowerKVM
========
Install PowerKVM
----------------
The process to set up PowerKVM hypervisors with xCAT is very similar to deploying diskful compute nodes.

#. Download the PowerKVM iso and add it to xCAT using ``copycds``: ::

     # if the iso file is: ibm-powerkvm-2.1.1.0-22.0-ppc64-gold-201410191558.iso
     copycds -n pkvm2.1.1 ibm-powerkvm-2.1.1.0-22.0-ppc64-gold-201410191558.iso

#. Then provision the target node using the PowerKVM osimage: ::

     nodeset <noderange> osimage=pkvm2.1.1-ppc64-install-compute
     rsetboot <noderange> net
     rpower <noderange> reset

Refer to :doc:`/guides/admin-guides/manage_clusters/ppc64le/diskful/index` if you need more information.
Setup PowerKVM Hypervisor
=========================

Provision Hypervisor with PowerKVM
----------------------------------

In order to launch VMs, network bridges must be configured on the PowerKVM hypervisors for the virtual machines to utilize.

Please follow the :ref:`Diskful Installation <diskfull_installation>` documentation to provision the kvm hypervisor with PowerKVM; several customization steps should be taken into consideration.

To demonstrate the steps of hypervisor provision, take **ibm-powerkvm-3.1.0.0-39.0-ppc64le-gold-201511041419.iso** as an example here:
#. Obtain a PowerKVM iso and create PowerKVM osimages with it: ::

     copycds ibm-powerkvm-3.1.0.0-39.0-ppc64le-gold-201511041419.iso

   The following PowerKVM osimage will be created on success: ::

     # lsdef -t osimage -o pkvm3.1-ppc64le-install-compute
     Object name: pkvm3.1-ppc64le-install-compute
         imagetype=linux
         osarch=ppc64le
         osdistroname=pkvm3.1-ppc64le
         osname=Linux
         osvers=pkvm3.1
         otherpkgdir=/install/post/otherpkgs/pkvm3.1/ppc64le
         pkgdir=/install/pkvm3.1/ppc64le
         profile=compute
         provmethod=install
         template=/opt/xcat/share/xcat/install/pkvm/compute.pkvm3.ppc64le.tmpl
#. Customize the hypervisor node definition to create a network bridge

   xCAT ships a postscript **xHRM** to create a network bridge on the kvm host during installation/netbooting. Please specify **xHRM** with appropriate parameters in the **postscripts** attribute. Here are some examples:

   To create a bridge with the default name 'default' against the installation network device specified by the **installnic** attribute: ::

     chdef kvmhost1 -p postscripts="xHRM bridgeprereq"

   To create a bridge named 'br0' against the installation network device specified by the **installnic** attribute (recommended): ::

     chdef kvmhost1 -p postscripts="xHRM bridgeprereq br0"

   To create a bridge named 'br0' against the network device 'eth0': ::

     chdef kvmhost1 -p postscripts="xHRM bridgeprereq eth0:br0"

   **Note**: The network bridge name you specify should avoid ``virbr0,virbr1...``, which might have been taken by the PowerKVM installation [1]_.
#. Customize the hypervisor node definition to mount the shared kvm storage directory on the management node (optional)

   If the shared kvm storage directory on the management node has been exported, it can be mounted on the PowerKVM hypervisor to host the virtual machines.

   An easy way to do this is to create another postscript named "mountvms", which creates a directory **/install/vms** on the hypervisor and then mounts **/install/vms** from the management node. The content of "mountvms" can be: ::

     logger -t xcat "Install: setting vms mount in fstab"
     mkdir -p /install/vms
     echo "$MASTER:/install/vms /install/vms nfs rsize=8192,wsize=8192,timeo=14,intr,nfsvers=2 1 2" >> /etc/fstab

   Then set the file permission and specify the script in the **postscripts** attribute of the hypervisor node definition: ::

     chmod 755 /install/postscripts/mountvms
     chdef kvmhost1 -p postscripts=mountvms

#. Provision the hypervisor node with the PowerKVM osimage: ::

     nodeset kvmhost1 osimage=pkvm3.1-ppc64le-install-compute
     rpower kvmhost1 boot
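The "mountvms" postscript described above can be created in one shot on the management node; the quoted heredoc keeps ``$MASTER`` literal so that xCAT expands it at install time:

```shell
# Write the postscript into xCAT's postscripts directory and make it
# executable; xCAT copies it to the node and runs it during install.
mkdir -p /install/postscripts
cat > /install/postscripts/mountvms <<'EOF'
logger -t xcat "Install: setting vms mount in fstab"
mkdir -p /install/vms
echo "$MASTER:/install/vms /install/vms nfs rsize=8192,wsize=8192,timeo=14,intr,nfsvers=2 1 2" >> /etc/fstab
EOF
chmod 755 /install/postscripts/mountvms
```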
Create network bridge on hypervisor
------------------------------------
To launch VMs, a network bridge must exist on the PowerKVM hypervisor.

If the hypervisor was provisioned successfully according to the steps described above, a network bridge will have been created and attached to a physical interface. This can be checked by running ``brctl show`` on the hypervisor; please make sure a network bridge has been created and configured according to the parameters passed to the postscript "xHRM": ::

    # brctl show
    bridge name     bridge id           STP enabled     interfaces
    br0             8000.000000000000   no              eth0
If no bridge is configured, the xCAT post install script will not work and you must create a bridge manually. The following is an example of creating a bridge ``br0`` on interface ``eth0`` with IP address 10.1.101.1/16: ::

    IPADDR=10.1.101.1/16
    brctl addbr br0
    brctl addif br0 eth0
    brctl setfd br0 0
    ip addr add dev br0 $IPADDR
    ip link set br0 up
    ip addr del dev eth0 $IPADDR
Alternatively, if the network bridge was not created or configured successfully, you can run "xHRM" with **updatenode** on the management node to create it: ::

    updatenode kvmhost1 -P "xHRM bridgeprereq eth0:br0"
.. [1] Every standard libvirt installation during PowerKVM provisioning provides NAT based connectivity to virtual machines out of the box. Some network bridges (virbr0, virbr1, ...) and dummy network devices (virbr0-nic, virbr1-nic, ...) are created by default: ::

       # brctl show
       bridge name     bridge id           STP enabled     interfaces
       virbr0          8000.525400c7f843   yes             virbr0-nic
       virbr1          8000.5254001619f5   yes             virbr1-nic

**Note**: During an Ubuntu VM installation, the virtual machine needs to access the Internet, so make sure the PowerKVM hypervisor is able to access the Internet.