
Merge pull request #409 from immarvin/onkvmdoc

add doc on xCAT VM support
Victor Hu
2015-11-18 22:54:01 -05:00
5 changed files with 286 additions and 202 deletions

Manage Virtual Machine (VM)
============================
Now the PowerKVM hypervisor "kvmhost1" is ready, this section introduces VM management in xCAT, including examples on how to create, remove and clone VMs.
Create Virtual Machine
----------------------
Create VM Node Definition
`````````````````````````
Create a virtual machine node object "vm1", assign it to group "vm" with IP address "192.168.0.1", then run ``makehosts`` to add an entry to the ``/etc/hosts`` file: ::
mkdef vm1 groups=vm,all
chdef vm1 ip=192.168.0.1
makehosts vm1
Update DNS configuration and database: ::
makedns -n
makedns -a
Specify VM attributes
`````````````````````
After the VM object is created, several key attributes need to be specified with ``chdef``:
1. The number of virtual CPUs in the VM: ::
chdef vm1 vmcpus=2
2. The kvm hypervisor that hosts the VM, "kvmhost1" in this example: ::
chdef vm1 vmhost=kvmhost1
3. The virtual memory size, in Megabytes (MB). Specify 1GB (1024 MB) of memory for "vm1" here: ::
chdef vm1 vmmemory=1024
**Note**: For a diskless node, **vmmemory** should be at least 2048 MB, otherwise the node cannot boot up.
4. The hardware management module, "kvm" for PowerKVM: ::
chdef vm1 mgt=kvm
5. The virtual network card. It should be set to the bridge "br0" which has been created on the hypervisor. If no bridge is specified, no network device will be created for the VM node "vm1": ::
chdef vm1 vmnics=br0
6. The **vmnicnicmodel** attribute is used to set the type and corresponding driver for the NIC. If not set, the default value is 'virtio'.
::
chdef vm1 vmnicnicmodel=virtio
7. Define the storage for "vm1". Three types of storage source format are supported.
A. Create storage on an NFS server
The format is ``nfs://<IP_of_NFS_server>/dir``, which means the kvm disk files will be created at ``nfs://<IP_of_NFS_server>/dir``: ::
chdef vm1 vmstorage=nfs://<IP_of_NFS_server>/install/vms/
B. Create storage on a device of the hypervisor
The format is 'phy:/dev/sdb1': ::
chdef vm1 vmstorage=phy:/dev/sdb1
C. Create storage in a directory of the hypervisor
The format is 'dir:///var/lib/libvirt/images': ::
chdef vm1 vmstorage=dir:///var/lib/libvirt/images
**Note**: The attribute **vmstorage** is only valid for a diskful VM node.
8. Define the **console** attributes for the VM: ::
chdef vm1 serialport=0 serialspeed=115200
9. (Optional) To monitor and access the VM with a vnc client, set the **vidpassword** value: ::
chtab node=vm1 vm.vidpassword=abc123
10. Set the **netboot** attribute
* **[x86_64]** ::
chdef vm1 netboot=xnba
* **[PPC64LE]** ::
chdef vm1 netboot=grub2
Make sure "grub2" has been installed on the management node: ::
#rpm -aq | grep grub2
grub2-xcat-1.0-1.noarch
**Note**: If you are working with an xCAT-dep older than 20141012, the grub2 modules shipped with xCAT cannot support Ubuntu LE smoothly. The following steps are needed to complete the grub2 setup: ::
rm /tftpboot/boot/grub2/grub2.ppc
cp /tftpboot/boot/grub2/powerpc-ieee1275/core.elf /tftpboot/boot/grub2/grub2.ppc
/bin/cp -rf /tmp/iso/boot/grub/powerpc-ieee1275/elf.mod /tftpboot/boot/grub2/powerpc-ieee1275/
Make the VM under xCAT
``````````````````````
If **vmstorage** is an NFS mounted directory or a device on the hypervisor, run ::
mkvm vm1
To create the virtual machine "vm1" with a 20G hard disk on a hypervisor directory, run ::
mkvm vm1 -s 20G
When "vm1" is created successfully, a VM hard disk file with a name like "vm1.sda.qcow2" will be found in the location specified by **vmstorage**. In addition, the **mac** attribute of "vm1" is set automatically; check it with: ::
lsdef vm1 -i mac
Configure DHCP
```````````````
::
makedhcp -n
makedhcp -a
Create osimage object
``````````````````````````````
After you download the OS ISO, refer to :ref:`create_img` to create osimage objects.
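For example, a minimal sketch assuming a RHEL 7.2 ppc64le ISO has been downloaded (the ISO file name and the generated osimage names depend on the distribution): ::
copycds RHEL-7.2-20151030.0-Server-ppc64le-dvd1.iso
lsdef -t osimage | grep rhels7.2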
Prepare the VM for installation
```````````````````````````````````````
::
nodeset vm1 osimage=<osimage_name>
Start VM Installation
``````````````````````
Now that the VM "vm1" is created, it can be provisioned like any other node in xCAT. The VM node can be powered on by: ::
rpower vm1 on
If "vm1" is powered on successfully, the VM status can be obtained by running the following command on the management node ::
rpower vm1 status
or by running the following command on the kvm hypervisor "kvmhost1" ::
#virsh list
Id Name State
--------------------------------
6 vm1 running
Monitoring the Virtual Machine
``````````````````````````````
When the VM has been created and powered on, please choose one of the following methods to monitor and access it.
* Open the console on the kvm hypervisor: ::
virsh console vm1
* Use **rcons/wcons** on the xCAT management node to open a text console: ::
chdef vm1 cons=kvm
makeconservercf vm1
rcons vm1
* Connect to the virtual machine through the vnc console
In order to connect to the virtual machine's vnc server, a new set of credentials needs to be generated by running: ::
xcatclient getrvidparms vm1
vm1: method: kvm
vm1: textconsole: /dev/pts/0
vm1: password: JOQTUtn0dUOBv9o3
vm1: vidproto: vnc
vm1: server: kvmhost1
vm1: vidport: 5900
**Note**: Now just pick a favorite vnc client to connect to the hypervisor, with the password generated by ``getrvidparms``. If the vnc client complains "the password is not valid", the reason might be that the hypervisor and headnode clocks are out of sync! Try to sync them by running ``ntpdate <ntp server>`` on both the hypervisor and the headnode.
* Use **wvid** on the xCAT management node
Make sure the **firewalld** service is stopped; disable it if not: ::
chkconfig firewalld off
or ::
systemctl disable firewalld
Then, run ``wvid`` on the MN: ::
wvid vm1
* For PowerKVM, **kimchi** on the kvm hypervisor can be used to monitor and access the VM. Open ``https://<pkvm_ip>:8001`` in a browser, press the "Connect" button under "Actions" for the VM, and enter the password set via **vidpassword** before ``mkvm`` ("abc123" in the example above) to get the console.
Remove the virtual machine
--------------------------
Remove the VM "vm1" even when it is in "power-on" status: ::
rmvm vm1 -f
Remove the definition of "vm1" and related storage: ::
rmvm vm1 -p
Clone the virtual machine
-------------------------
**Clone** is an operation that creates a VM from an existing one by inheriting most of its attributes and data.
The general procedure of cloning a VM is: first create a **VM master**, then create a VM from the **VM master** in either **attaching** or **detaching** mode.
**In attaching mode**
In this mode, all the newly created VMs are attached to the VM master. Since the image of a newly created VM only includes the differences from the VM master, less disk space is required. The newly created VMs can NOT run without the VM master.
An example is shown below:
Create the VM master "vm5" from a VM node "vm1": ::
#clonevm vm1 -t vm5
vm1: Cloning vm1.sda.qcow2 (currently is 1050.6640625 MB and has a capacity of 4096MB)
vm1: Cloning of vm1.sda.qcow2 complete (clone uses 1006.74609375 for a disk size of 4096MB)
vm1: Rebasing vm1.sda.qcow2 from master
vm1: Rebased vm1.sda.qcow2 from master
The newly created VM master "vm5" can be found in the **vmmaster** table. ::
#tabdump vmmaster
name,os,arch,profile,storage,storagemodel,nics,vintage,originator,comments,disable
"vm5","<os>","<arch>","compute","nfs://<storage_server_ip>/vms/kvm",,"br0","<date>","root",,
Clone a new node "vm2" from VM master "vm5": ::
clonevm vm2 -b vm5
**In detaching mode**
Create a VM master "vm6", with the original node "vm2" detached from it: ::
#clonevm vm2 -t vm6 -d
vm2: Cloning vm2.sda.qcow2 (currently is 1049.4765625 MB and has a capacity of 4096MB)
vm2: Cloning of vm2.sda.qcow2 complete (clone uses 1042.21875 for a disk size of 4096MB)
Clone a VM "vm3" from the VM master "vm6" in detaching mode: ::
#clonevm vm3 -b vm6 -d
vm3: Cloning vm6.sda.qcow2 (currently is 1042.21875 MB and has a capacity of 4096MB)
FAQ
---
1. libvirtd runs into problems
**Issue**: ``rpower`` fails with an error message like: ::
rpower vm1 on
vm1: internal error no supported architecture for os type 'hvm'
**Solution**: This error can usually be fixed by restarting libvirtd on the host machine: ::
xdsh kvmhost1 service libvirtd restart
**Note**: Whenever you find libvirtd error messages in syslog, you can try restarting libvirtd.
2. Virtual disk has a problem
**Issue**: When running ``rpower vm1 on``, the following error message appears: ::
vm1: Error: unable to set user and group to '0:0'
on '/var/lib/xcat/pools/27f1df4b-e6cb-5ed2-42f2-9ef7bdd5f00f/vm1.sda.qcow2': Invalid argument:
**Solution**: Verify that ``nfs://<storage_server_ip>`` is exported correctly. The NFS client should have root authority (use the ``no_root_squash`` export option).

Trouble Shooting
================
VNC client complains the credentials are not valid
--------------------------------------------------
**Issue**:
While connecting to the hypervisor with VNC, the vnc client complains "Authentication failed".
**Solution**:
Check whether the clocks on the hypervisor and headnode are in sync.
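A quick check and fix from the management node, as a sketch (``<ntp_server>`` is a placeholder for your NTP server): ::
xdsh kvmhost1 date; date
xdsh kvmhost1 ntpdate <ntp_server>
ntpdate <ntp_server>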
rpower fails with "qemu: could not open disk image /var/lib/xcat/pools/2e66895a-e09a-53d5-74d3-eccdd9746eb5/vmXYZ.sda.qcow2: Permission denied"
-----------------------------------------------------------------------------------------------------------------------------------------------
**Issue**: ::
#rpower vm1 on
vm1: Error: internal error Process exited while reading console log output: char device redirected to /dev/pts/1
qemu: could not open disk image /var/lib/xcat/pools/2e66895a-e09a-53d5-74d3-eccdd9746eb5/vm1.sda.qcow2: Permission denied: internal error Process exited while reading console log output: char device redirected to /dev/pts/1
qemu: could not open disk image /var/lib/xcat/pools/2e66895a-e09a-53d5-74d3-eccdd9746eb5/vm1.sda.qcow2: Permission denied
**Solution**:
Usually caused by incorrect permissions in the NFS server/client configuration. NFSv4 is enabled by default in some Linux distributions such as CentOS 6. The solution is simply to disable NFSv4 support on the NFS server by uncommenting the following line in "/etc/sysconfig/nfs": ::
RPCNFSDARGS="-N 4"
Then restart the NFS services and try to power on the VM again.
**Note**: For a stateless hypervisor, please purge the VM with ``rmvm -p vm1``, reboot the hypervisor and then create the VM again.
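As a sketch of the restart mentioned above, on a CentOS 6 NFS server (the service name differs on other distributions): ::
service nfs restart
exportfs -r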
"Error: Cannot communicate via libvirt to kvmhost1"
---------------------------------------------------
**Issue**:
The kvm related commands complain "Error: Cannot communicate via libvirt to kvmhost1"
**Solution**:
Usually caused by incorrect ssh configuration between the xCAT management node and the hypervisor. Please make sure it is possible to access the hypervisor from the management node via ssh without a password.
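A minimal check-and-repair sketch from the management node (``xdsh -K`` redistributes the xCAT root ssh keys to the node): ::
ssh kvmhost1 date
xdsh kvmhost1 -K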
Fail to ping the newly installed VM
------------------------------------
**Issue**:
The newly installed stateful VM node is not pingable, the following message can be observed in the console during VM booting: ::
ADDRCONF(NETDEV_UP): eth0 link is not ready.
**Solution**:
Usually caused by the incorrect VM NIC model. Please try the following steps to specify "virtio": ::
rmvm vm1
chdef vm1 vmnicnicmodel=virtio
mkvm vm1

Virtual Machines
================
The **Kernel-based Virtual Machine (KVM)** is a full virtualization solution for Enterprise Linux distributions. KVM is known as the *de facto* open source virtualization mechanism and is currently used by many software companies.
**IBM PowerKVM** is a product that leverages the Power resilience and performance with the openness of KVM, which provides several advantages:
* Higher workload consolidation with processor overcommitment and memory sharing
* Dynamic addition and removal of virtual devices
* Microthreading scheduling granularity
* Integration with **IBM PowerVC** and **OpenStack**
* Simplified management using open source software
* Avoids vendor lock-in
* Uses POWER8 hardware features, such as SMT8 and microthreading
The xCAT based KVM solution offers users the ability to:
* provision the hypervisor on bare metal nodes
* provision virtual machines
* migrate virtual machines to different hosts
* install all versions of Linux supported in the standard xCAT provisioning methods (you can install stateless virtual machines, iSCSI, and scripted install virtual machines)
* install copy on write instances of virtual machines
* copy virtual machines
This section introduces the steps of management node preparation, KVM hypervisor setup and virtual machine management, and presents some typical problems and solutions on xCAT kvm support.
.. toctree::
:maxdepth: 2
kvmMN.rst
powerKVM.rst
manage_vms.rst
FAQ.rst

Set Up the Management Node for KVM
====================================
Install the kvm related packages
--------------------------------
Additional packages need to be installed on the management node for kvm support.
Please make sure the following packages have been installed on the management node; if not, install them manually:
``perl-Sys-Virt``
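A minimal sketch, assuming a Red Hat family management node (use the native package manager on other distributions): ::
rpm -q perl-Sys-Virt || yum install -y perl-Sys-Virt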
Set Up the kvm storage directory on the management node (optional)
--------------------------------------------------------------------
It is a recommended configuration to create a shared file system for hosting virtual machines. The shared file system, usually on a SAN, NAS or GPFS, is shared among KVM hypervisors, which simplifies VM migration from one hypervisor to another with xCAT.
The easiest shared file system is the ``/install`` directory on the management node, which can be shared among hypervisors via NFS. Please refer to the following steps:
* Create a directory to store the virtual disk files ::
mkdir -p /install/vms
* Export the storage directory ::
echo "/install/vms *(rw,no_root_squash,sync,fsid=0)" >> /etc/exports
exportfs -r
**Note**: make sure the root permission is turned on for nfs clients (i.e. use the ``no_root_squash`` option). Otherwise, the virtual disk files cannot be used.
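To verify the export afterwards, it can be listed from any hypervisor, as a sketch (``<MN_ip>`` is a placeholder for the management node IP): ::
showmount -e <MN_ip>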

PowerKVM
========
Provision Hypervisor with PowerKVM
----------------------------------
Please follow the :ref:`Diskful Installation <diskfull_installation>` documentation to provision the kvm hypervisor with PowerKVM; several customization steps should be taken into consideration.
To demonstrate the brief steps of hypervisor provisioning, take **ibm-powerkvm-3.1.0.0-39.0-ppc64le-gold-201511041419.iso** as an example here:
#. Obtain a PowerKVM iso and create PowerKVM osimages with it: ::
copycds ibm-powerkvm-3.1.0.0-39.0-ppc64le-gold-201511041419.iso
The following PowerKVM osimage will be created on success ::
# lsdef -t osimage -o pkvm3.1-ppc64le-install-compute
Object name: pkvm3.1-ppc64le-install-compute
imagetype=linux
osarch=ppc64le
osdistroname=pkvm3.1-ppc64le
osname=Linux
osvers=pkvm3.1
otherpkgdir=/install/post/otherpkgs/pkvm3.1/ppc64le
pkgdir=/install/pkvm3.1/ppc64le
profile=compute
provmethod=install
template=/opt/xcat/share/xcat/install/pkvm/compute.pkvm3.ppc64le.tmpl
#. Customize the hypervisor node definition to create a network bridge
xCAT ships a postscript **xHRM** to create a network bridge on the kvm host during installation/netbooting. Please specify **xHRM** with appropriate parameters in the **postscripts** attribute. Here are some examples:
To create a bridge with default name 'default' against the installation network device which was specified by **installnic** attribute ::
chdef kvmhost1 -p postscripts="xHRM bridgeprereq"
To create a bridge named 'br0' against the installation network device which was specified by the **installnic** attribute (recommended) ::
chdef kvmhost1 -p postscripts="xHRM bridgeprereq br0"
To create a bridge named 'br0' against the network device 'eth0' ::
chdef kvmhost1 -p postscripts="xHRM bridgeprereq eth0:br0"
**Note**: The network bridge name you use should not be one of the virtual bridges created by the libvirt installation [1]_.
#. Customize the hypervisor node definition to mount the shared kvm storage directory on the management node **(optional)**
If the shared kvm storage directory on the management node has been exported, it can be mounted on the PowerKVM hypervisor for hosting virtual machines.
An easy way to do this is to create another postscript named "mountvms", which creates a directory **/install/vms** on the hypervisor and then mounts **/install/vms** from the management node. The content of "mountvms" can be: ::
logger -t xcat "Install: setting vms mount in fstab"
mkdir -p /install/vms
echo "$MASTER:/install/vms /install/vms nfs \
rsize=8192,wsize=8192,timeo=14,intr,nfsvers=2 1 2" >> /etc/fstab
Then set the file permission and specify the script in the **postscripts** attribute of the hypervisor node definition: ::
chmod 755 /install/postscripts/mountvms
chdef kvmhost1 -p postscripts=mountvms
#. Provision the hypervisor node with the PowerKVM osimage ::
nodeset kvmhost1 osimage=pkvm3.1-ppc64le-install-compute
rpower kvmhost1 boot
Create network bridge on hypervisor
------------------------------------
To launch VMs, a network bridge must be created on the PowerKVM hypervisors.
If the hypervisor is provisioned successfully according to the steps described above, a network bridge will be created and attached to a physical interface. This can be checked by running ``brctl show`` on the hypervisor. Please make sure a network bridge has been created and configured according to the parameters passed to the postscript "xHRM": ::
# brctl show
bridge name bridge id STP enabled interfaces
br0 8000.000000000000 no eth0
If there are no bridges configured, a bridge can be created manually on the hypervisor. The following is an example of creating a bridge "br0" using interface "eth0" with IP address 10.1.101.1/16: ::
IPADDR=10.1.101.1/16
brctl addbr br0
brctl addif br0 eth0
brctl setfd br0 0
ip addr add dev br0 $IPADDR
ip link set br0 up
ip addr del dev eth0 $IPADDR
Alternatively, the postscript "xHRM" can be run with **updatenode** on the management node to create the bridge: ::
updatenode kvmhost1 -P "xHRM bridgeprereq eth0:br0"
.. [1] Every standard libvirt installation during PowerKVM provisioning provides NAT-based connectivity to virtual machines out of the box. Some network bridges (virbr0, virbr1, ...) and dummy network devices (virbr0-nic, virbr1-nic, ...) will be created by default ::
#brctl show
#bridge name bridge id STP enabled interfaces
#virbr0 8000.525400c7f843 yes virbr0-nic
#virbr1 8000.5254001619f5 yes virbr1-nic
**Note**: During an Ubuntu installation, the virtual machines need to access the Internet, so make sure the PowerKVM hypervisor is able to access the Internet.
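A quick connectivity check from the management node, as a sketch (the target host is only an example): ::
xdsh kvmhost1 "ping -c 2 ports.ubuntu.com"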