diff --git a/docs/source/guides/admin-guides/manage_clusters/ppc64le/virtual_machines/RHEVHypervisor.rst b/docs/source/guides/admin-guides/manage_clusters/ppc64le/virtual_machines/RHEVHypervisor.rst new file mode 100644 index 000000000..5ad65df54 --- /dev/null +++ b/docs/source/guides/admin-guides/manage_clusters/ppc64le/virtual_machines/RHEVHypervisor.rst @@ -0,0 +1,58 @@ + + At the time of this writing there is no ISO image available for RHEV. Individual RPM packages need to be downloaded. + + * Download the *Management-Agent-Power-7* and *Power_Tools-7* RPMs from Red Hat to the xCAT management node. The steps below assume all RPMs were downloaded to ``/install/post/otherpkgs/rhels7.3/ppc64le/RHEV4/4.0-GA`` + + * Create a yum repository for the downloaded RPMs :: + + createrepo /install/post/otherpkgs/rhels7.3/ppc64le/RHEV4/4.0-GA + + * Create a new osimage definition based on an existing RHEL7 osimage definition :: + + mkdef -t osimage -o rhels7.3-ppc64le-RHEV4-install-compute \ + --template rhels7.3-ppc64le-install-compute + + * Modify the ``otherpkgdir`` attribute to point to the package directory with the downloaded RPMs :: + + chdef -t osimage rhels7.3-ppc64le-RHEV4-install-compute \ + otherpkgdir=/install/post/otherpkgs/rhels7.3/ppc64le/RHEV4/4.0-GA + + * Create a new package list file ``/install/custom/rhels7.3/ppc64le/rhelv4.pkglist`` to include the necessary packages provided by the OS.
:: + + #INCLUDE:/opt/xcat/share/xcat/install/rh/compute.rhels7.pkglist# + bridge-utils + + * Modify the ``pkglist`` attribute to point to the package list file from the step above :: + + chdef -t osimage rhels7.3-ppc64le-RHEV4-install-compute \ + pkglist=/install/custom/rhels7.3/ppc64le/rhelv4.pkglist + + * Create a new package list file ``/install/custom/rhels7.3/ppc64le/rhev4.otherpkgs.pkglist`` to list the required packages :: + + libvirt + qemu-kvm-rhev + qemu-kvm-tools-rhev + virt-manager-common + virt-install + + * Modify the ``otherpkglist`` attribute to point to the package list file from the step above :: + + chdef -t osimage rhels7.3-ppc64le-RHEV4-install-compute \ + otherpkglist=/install/custom/rhels7.3/ppc64le/rhev4.otherpkgs.pkglist + + * The RHEV osimage should look similar to: :: + + Object name: rhels7.3-ppc64le-RHEV4-install-compute + imagetype=linux + osarch=ppc64le + osdistroname=rhels7.3-ppc64le + osname=Linux + osvers=rhels7.3 + otherpkgdir=/install/post/otherpkgs/rhels7.3/ppc64le/RHEV4/4.0-GA + otherpkglist=/install/custom/rhels7.3/ppc64le/rhev4.otherpkgs.pkglist + pkgdir=/install/rhels7.3/ppc64le + pkglist=/install/custom/rhels7.3/ppc64le/rhelv4.pkglist + profile=compute + provmethod=install + template=/opt/xcat/share/xcat/install/rh/compute.rhels7.tmpl + diff --git a/docs/source/guides/admin-guides/manage_clusters/ppc64le/virtual_machines/hypervisorKVM.rst b/docs/source/guides/admin-guides/manage_clusters/ppc64le/virtual_machines/hypervisorKVM.rst new file mode 100644 index 000000000..add5069d2 --- /dev/null +++ b/docs/source/guides/admin-guides/manage_clusters/ppc64le/virtual_machines/hypervisorKVM.rst @@ -0,0 +1,85 @@ +Install and Configure Hypervisor +================================ + +Provision Hypervisor +-------------------- + +* **[PowerKVM]** + + .. include:: pKVMHypervisor.rst + +* **[RHEV]** + + .. include:: RHEVHypervisor.rst + +#. 
Customize the hypervisor node definition to create a network bridge + + xCAT ships a postscript **xHRM** to create a network bridge on the KVM host during installation/netboot. Specify **xHRM** with the appropriate parameters in the **postscripts** attribute. For example: + + * To create a bridge named 'br0' against the installation network device specified by **installnic**: :: + + chdef kvmhost1 -p postscripts="xHRM bridgeprereq br0" + + * To create a bridge with the default name 'default' against the installation network device specified by **installnic**: :: + + chdef kvmhost1 -p postscripts="xHRM bridgeprereq" + + * To create a bridge named 'br0' against the network device 'eth0': :: + + chdef kvmhost1 -p postscripts="xHRM bridgeprereq eth0:br0" + + **Note**: The network bridge name you use should not conflict with the virtual bridges (virbrX) created by the libvirt installation [1]_. + + +#. Customize the hypervisor node definition to mount the shared KVM storage directory on the management node **(optional)** + + If the shared KVM storage directory on the management node has been exported, it can be mounted on the hypervisor for hosting virtual machines. + + An easy way to do this is to create another postscript named "mountvms" which creates a directory **/install/vms** on the hypervisor and then mounts **/install/vms** from the management node. The content of "mountvms" can be: :: + + logger -t xcat "Install: setting vms mount in fstab" + mkdir -p /install/vms + echo "$MASTER:/install/vms /install/vms nfs \ + rsize=8192,wsize=8192,timeo=14,intr,nfsvers=2 1 2" >> /etc/fstab + + + Then set the file permission and specify the script in the **postscripts** attribute of the hypervisor node definition: :: + + chmod 755 /install/postscripts/mountvms + chdef kvmhost1 -p postscripts=mountvms + +#. 
Provision the hypervisor node with the osimage created above :: + + nodeset kvmhost1 osimage=<osimage_name> + rpower kvmhost1 boot + + +Create network bridge on hypervisor +------------------------------------ + +To launch VMs, a network bridge must be created on the KVM hypervisor. + +If the hypervisor is provisioned successfully according to the steps described above, a network bridge will be created and attached to a physical interface. This can be checked by running ``brctl show`` on the hypervisor. Make sure a network bridge has been created and configured according to the parameters passed to the "xHRM" postscript: :: + + # brctl show + bridge name bridge id STP enabled interfaces + br0 8000.000000000000 no eth0 + + +If the network bridge is not created or configured successfully, run "xHRM" with **updatenode** on the management node to create it manually: :: + + updatenode kvmhost1 -P "xHRM bridgeprereq eth0:br0" + +Start libvirtd service +---------------------- + +Verify the **libvirtd** service is running: :: + + systemctl status libvirtd + +If the service is not running, it can be started with: :: + + systemctl start libvirtd + +.. [1] Every standard libvirt installation provides NAT based connectivity to virtual machines out of the box using the "virtual bridge" interfaces (virbr0, virbr1, etc.). Those are created by default.
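The bridge argument passed to "xHRM" above is either a bare bridge name (``br0``) or an ``interface:bridge`` pair (``eth0:br0``). As a rough sketch of that parameter format only (a hypothetical stand-alone helper for illustration, not code shipped with xCAT), the split can be expressed in shell:

```shell
#!/bin/sh
# Sketch: split an xHRM-style bridge argument into its two parts.
#   "eth0:br0" -> interface "eth0", bridge "br0"
#   "br0"      -> no interface given; xHRM falls back to the install NIC
parse_bridge_arg() {
    arg="$1"
    case "$arg" in
        *:*) printf '%s %s\n' "${arg%%:*}" "${arg#*:}" ;;   # explicit interface:bridge
        *)   printf 'installnic %s\n' "$arg" ;;             # bridge only; "installnic" marks the fallback
    esac
}

parse_bridge_arg "eth0:br0"    # eth0 br0
parse_bridge_arg "br0"         # installnic br0
```

This mirrors why ``chdef kvmhost1 -p postscripts="xHRM bridgeprereq eth0:br0"`` pins the bridge to a specific device, while the one-argument form leaves device selection to the node's **installnic** attribute.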
+ diff --git a/docs/source/guides/admin-guides/manage_clusters/ppc64le/virtual_machines/index.rst b/docs/source/guides/admin-guides/manage_clusters/ppc64le/virtual_machines/index.rst index 0bd559648..a13f1e1c6 100644 --- a/docs/source/guides/admin-guides/manage_clusters/ppc64le/virtual_machines/index.rst +++ b/docs/source/guides/admin-guides/manage_clusters/ppc64le/virtual_machines/index.rst @@ -26,12 +26,12 @@ The xCAT based KVM solution offers users the ability to: * install copy on write instances of virtual machines * clone virtual machines -This section introduces the steps of management node preparation, KVM hypervisor setup and virtual machine management, and presents some typical problems and solutions on xCAT kvm support. +This section introduces the steps of management node preparation, hypervisor setup and virtual machine management, and presents some typical problems and solutions on xCAT kvm support. .. toctree:: :maxdepth: 2 kvmMN.rst - powerKVM.rst + hypervisorKVM.rst manage_vms.rst FAQ.rst diff --git a/docs/source/guides/admin-guides/manage_clusters/ppc64le/virtual_machines/pKVMHypervisor.rst b/docs/source/guides/admin-guides/manage_clusters/ppc64le/virtual_machines/pKVMHypervisor.rst new file mode 100644 index 000000000..e75d9f5e3 --- /dev/null +++ b/docs/source/guides/admin-guides/manage_clusters/ppc64le/virtual_machines/pKVMHypervisor.rst @@ -0,0 +1,20 @@ + + Obtain a PowerKVM ISO and create PowerKVM osimages with it: :: + + copycds ibm-powerkvm-3.1.0.0-39.0-ppc64le-gold-201511041419.iso + + The following PowerKVM osimage will be created :: + + # lsdef -t osimage -o pkvm3.1-ppc64le-install-compute + Object name: pkvm3.1-ppc64le-install-compute + imagetype=linux + osarch=ppc64le + osdistroname=pkvm3.1-ppc64le + osname=Linux + osvers=pkvm3.1 + otherpkgdir=/install/post/otherpkgs/pkvm3.1/ppc64le + pkgdir=/install/pkvm3.1/ppc64le + profile=compute + provmethod=install + template=/opt/xcat/share/xcat/install/pkvm/compute.pkvm3.ppc64le.tmpl + diff 
--git a/docs/source/guides/admin-guides/manage_clusters/ppc64le/virtual_machines/powerKVM.rst b/docs/source/guides/admin-guides/manage_clusters/ppc64le/virtual_machines/powerKVM.rst deleted file mode 100644 index 239b69b08..000000000 --- a/docs/source/guides/admin-guides/manage_clusters/ppc64le/virtual_machines/powerKVM.rst +++ /dev/null @@ -1,92 +0,0 @@ -Setup PowerKVM Hypervisor -========================= - - -Provision Hypervisor with PowerKVM ----------------------------------- - - -Please follow the :ref:`Diskful Installation ` to provision kvm hypervisor with PowerKVM, several customization steps should be taken into consideration. - -To demonstrate the brief steps on hypervisor provision, take **ibm-powerkvm-3.1.0.0-39.0-ppc64le-gold-201511041419.iso** for example here: - -#. Obtain a PowerKVM iso and create PowerKVM osimages with it: :: - - copycds ibm-powerkvm-3.1.0.0-39.0-ppc64le-gold-201511041419.iso - - The following PowerKVM osimage will be created on success :: - - # lsdef -t osimage -o pkvm3.1-ppc64le-install-compute - Object name: pkvm3.1-ppc64le-install-compute - imagetype=linux - osarch=ppc64le - osdistroname=pkvm3.1-ppc64le - osname=Linux - osvers=pkvm3.1 - otherpkgdir=/install/post/otherpkgs/pkvm3.1/ppc64le - pkgdir=/install/pkvm3.1/ppc64le - profile=compute - provmethod=install - template=/opt/xcat/share/xcat/install/pkvm/compute.pkvm3.ppc64le.tmpl - -#. Customize the hypervisor node definition to create network bridge - - xCAT ships a postscript **xHRM** to create a network bridge on kvm host during installation/netbooting. Please specify the **xHRM** with appropriate parameters in **postscripts** attibute. 
Here is some examples on this: - - To create a bridge with default name 'default' against the installation network device which was specified by **installnic** attribute :: - - chdef kvmhost1 -p postscripts="xHRM bridgeprereq" - - To create a bridge named 'br0' against the installation network device which was specified by **installnic** attribute(recommended) :: - - chdef kvmhost1 -p postscripts="xHRM bridgeprereq br0" - - To create a bridge named 'br0' against the network device 'eth0' :: - - chdef kvmhost1 -p postscripts="xHRM bridgeprereq eth0:br0" - - **Note**: The network bridge name you use should not be the virtual bridges created by libvirt installation [1]_. - - -#. Customize the hypervisor node definition to mount the shared kvm storage directory on management node **(optional)** - - If the shared kvm storage directory on the management node has been exported, it can be mounted on PowerKVM hypervisor for virtual machines hosting. - - An easy way to do this is to create another postscript named "mountvms" which creates a directory **/install/vms** on hypervisor and then mounts **/install/vms** from the management node, the content of "mountvms" can be: :: - - logger -t xcat "Install: setting vms mount in fstab" - mkdir -p /install/vms - echo "$MASTER:/install/vms /install/vms nfs \ - rsize=8192,wsize=8192,timeo=14,intr,nfsvers=2 1 2" >> /etc/fstab - - - Then set the file permission and specify the script in **postscripts** attribute of hypervisor node definition: :: - - chmod 755 /install/postscripts/mountvms - chdef kvmhost1 -p postscripts=mountvms - -#. Provision the hypervisor node with the PowerKVM osimage :: - - nodeset kvmhost1 osimage=pkvm3.1-ppc64le-install-compute - rpower kvmhost1 boot - - -Create network bridge on hypervisor ------------------------------------- - -To launch VMs, a network bridge must be created on the PowerKVM hypervisors. 
- -If the hypervisor is provisioned successfully according to the steps described above, a network bridge will be created and attached to a physical interface. This can be checked by running ``brctl show`` on the hypervisor to show the network bridge information, please make sure a network bridge has been created and configured according to the parameters passed to postscript "xHRM" :: - - # brctl show - bridge name bridge id STP enabled interfaces - br0 8000.000000000000 no eth0 - - -If the network bridge is not created or configured successfully, please run "xHRM" with **updatenode** on managememt node to create it manually::: - - updatenode kvmhost1 -P "xHRM bridgeprereq eth0:br0" - - -.. [1] Every standard libvirt installation provides NAT based connectivity to virtual machines out of the box using the "virtual bridge" interfaces (virbr0, virbr1, etc) Those will be created by default. -