
Test design of xCAT Terraform Provider (phase 1)

TO CHECK: groups=__TFPOOL-FREE

Upstream documents

Limitations of the xCAT Terraform Provider in xCAT 2.15

  • Terraform download (the x86_64 version is downloaded from the official Terraform website; the ppc64le version is downloaded from xcat.org)
  • xCAT Terraform Provider download (both the x86_64 and ppc64le versions are downloaded from xcat.org)
  • xCAT API service (pip and container versions will eventually exist, but in 2.15 only the pip version is recommended)
  • xCAT (there are rpm and container versions; in 2.15 only the rpm version is recommended)
  • There is no user authentication in 2.15. A Terraform user can operate the xCAT API service directly for other purposes. Authentication will be covered in a later xCAT version.
  • In 2.15, Terraform users have to use the xCAT REST API to get the osimage and node lists from the xCAT DB (a lookup sketch follows this list).
  • In 2.15, the xCAT Terraform Provider only supports == and != in selectors; it does not support >=, <= or other advanced matching methods.
  • In 2.15, it is recommended to install xCAT and the xCAT API service on the same node.
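For the REST lookups mentioned above, a minimal sketch follows. It assumes the API service exposes the classic xCAT REST endpoints under /xcatws on node2; the actual URL and paths of the 2.15 API service may differ.

//hedged sketch: list the node and osimage names defined in the xCAT DB
$ curl -k -X GET "https://node2/xcatws/nodes?userName=xtpu1&userPW=12345&pretty=1"
$ curl -k -X GET "https://node2/xcatws/osimages?userName=xtpu1&userPW=12345&pretty=1"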

Test environment set up (in 2.15)

product/component         location       ip           remark
xCAT Terraform provider   node1          10.x.x.1     binary used directly
xCAT API service          node2          10.x.x.2     pip installation
xCAT                      node2          10.x.x.2     rpm installation
node pool                 2 real nodes   10.x.x.4-5   need to cover provision test

All configuration of the xCAT cluster must be done ahead of time.

  • xCAT installation

  • Create the xCAT Terraform Provider user accounts in the passwd table on the xCAT MN ahead of time (done by the admin)

# chtab key=xcat passwd.username=xtpu1 passwd.password=12345 
# chtab key=xcat passwd.username=xtpu2 passwd.password=12345 
  • Use the accounts xtpu1 and xtpu2 above to access the xCAT API service and apply for tokens; record these tokens.
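A hedged sketch of applying for a token, again assuming the classic /xcatws/tokens endpoint (the path of the 2.15 API service may differ):

$ curl -k -X POST "https://node2/xcatws/tokens?pretty=1" -H "Content-Type: application/json" --data '{"userName":"xtpu1","userPW":"12345"}'
//repeat with userName xtpu2, and record the token id from each response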

  • Define all kinds of nodes the xCAT Terraform Provider supports in the xCAT DB ahead of time. At least two real nodes are needed to test provisioning in parallel; the other nodes can be bogus nodes.

# chdef xtpbogusp9phyn1 groups=free usercomment=",ib=0,gpu=0" mtm=8335-GTC arch=ppc64le disksize=200 memory=256 cputype="POWER9 (raw), altivec supported" cpucount=32 ip=100.50.20.1 cons=openbmc mgt=openbmc netboot=petitboot profile=compute mac=E9.83.35.EB.20.01 room="r1" rack="rC" unit="u1" status=powering-on
# chdef xtpbogusp9phyn2 groups=free usercomment=",ib=0,gpu=0" mtm=8335-GTB arch=ppc64le disksize=200 memory=256 cputype="POWER9 (raw), altivec supported" cpucount=32 ip=100.50.20.2 cons=openbmc mgt=openbmc netboot=petitboot profile=compute mac=E9.83.35.EB.20.02 room="r1" rack="rB" unit="u2" status=powering-on
# chdef xtpbogusp9phyn3 groups=free usercomment=",ib=1,gpu=0" mtm=8335-GTB arch=ppc64le disksize=200 memory=256 cputype="POWER9 (raw), altivec supported" cpucount=32 ip=100.50.20.3 cons=openbmc mgt=openbmc netboot=petitboot profile=compute mac=E9.83.35.EB.20.03 room="r1" rack="rB" unit="u3" status=powering-on
# chdef xtpbogusp9phyn4 groups=free usercomment=",ib=0,gpu=1" mtm=8335-GTB arch=ppc64le disksize=200 memory=256 cputype="POWER9 (raw), altivec supported" cpucount=32 ip=100.50.20.4 cons=openbmc mgt=openbmc netboot=petitboot profile=compute mac=E9.83.35.EB.20.04 room="r1" rack="rB" unit="u4" status=powering-on
# chdef xtpbogusp9phyn5 groups=free usercomment=",ib=1,gpu=1" mtm=8335-GTB arch=ppc64le disksize=200 memory=256 cputype="POWER9 (raw), altivec supported" cpucount=32 ip=100.50.20.5 cons=openbmc mgt=openbmc netboot=petitboot profile=compute mac=E9.83.35.EB.20.05 room="r1" rack="rB" unit="u5" status=powering-on
# chdef xtpbogusp9phyn6 groups=free usercomment=",ib=0,gpu=1" mtm=8335-GTB arch=ppc64le disksize=300 memory=256 cputype="POWER9 (raw), altivec supported" cpucount=32 ip=100.50.20.6 cons=openbmc mgt=openbmc netboot=petitboot profile=compute mac=E9.83.35.EB.20.06 room="r1" rack="rB" unit="u6" status=powering-on
# chdef xtpbogusp9phyn7 groups=free usercomment=",ib=0,gpu=1" mtm=8335-GTB arch=ppc64le disksize=400 memory=128 cputype="POWER9 (raw), altivec supported" cpucount=32 ip=100.50.20.7 cons=openbmc mgt=openbmc netboot=petitboot profile=compute mac=E9.83.35.EB.20.07 room="r1" rack="rB" unit="u7" status=powering-on
# chdef xtpbogusp9phyn8 groups=free usercomment=",ib=0,gpu=1" mtm=8335-GTB arch=ppc64le disksize=400 memory=64 cputype="POWER9 (raw), altivec supported" cpucount=32 ip=100.50.20.8 cons=openbmc mgt=openbmc netboot=petitboot profile=compute mac=E9.83.35.EB.20.08 room="r1" rack="rB" unit="u8" status=powering-on
# chdef xtpbogusp9phyn9 groups=free usercomment=",ib=0,gpu=1" mtm=8335-GTB arch=ppc64le disksize=400 memory=64 cputype="POWER8 (raw), altivec supported" cpucount=20 ip=100.50.20.9 cons=openbmc mgt=openbmc netboot=petitboot profile=compute mac=E9.83.35.EB.20.09 room="r1" rack="rB" unit="u9" status=powering-on
# chdef xtpbogusp9phyn10 groups=free usercomment=",ib=0,gpu=1" mtm=8335-GTB arch=ppc64le disksize=400 memory=64 cputype="POWER9 (raw), altivec supported" cpucount=16 ip=100.50.20.10 cons=openbmc mgt=openbmc netboot=petitboot profile=compute mac=E9.83.35.EB.20.10 room="r1" rack="rB" unit="u10" status=powering-on

# chdef xtpbogusx86phyn1 groups=free usercomment=",ib=0,gpu=0" mtm=7912AC1 arch=x86_64 disksize=300 memory=64 cputype=" Intel(R) Xeon(R) CPU E5-2665 0 @ 2.40GHz" cpucount=16 ip=100.50.30.01 mac=86.79.12.C1.30.01 room="r2" rack="rAC1" unit="u1" status=powering-on cons=ipmi mgt=ipmi netboot=xnba profile=compute  
# chdef xtpbogusx86phyn2 groups=free usercomment=",ib=1,gpu=0" mtm=7912AC2 arch=x86_64 disksize=400 memory=128 cputype=" Intel(R) Xeon(R) CPU E5-2665 0 @ 2.40GHz" cpucount=32 ip=100.50.30.02 mac=86.79.12.C2.30.02 room="r2" rack="rAC1" unit="u2" status=powering-on cons=ipmi mgt=ipmi netboot=xnba profile=compute  
# chdef xtpbogusx86phyn3 groups=free usercomment=",ib=1,gpu=0" mtm=7912AC2 arch=x86_64 disksize=400 memory=128 cputype=" Intel(R) Xeon(R) CPU E5-2665 0 @ 2.40GHz" cpucount=32 ip=100.50.30.03 mac=86.79.12.C2.30.03 room="r2" rack="rAC1" unit="u3" status=powering-on cons=ipmi mgt=ipmi netboot=xnba profile=compute 

//the 2 nodes below must be real nodes; prepare them ahead of time
# chdef xtprealp8vm1 groups=free usercomment=",ib=0,gpu=0" mtm=8335-GTA arch=ppc64le cons=ipmi mgt=kvm netboot=grub2 profile=compute vmhost=xxx mac=x.x.x.x.x.x ip=x.x.x.x
# chdef xtprealx86vm1 groups=free usercomment=",ib=0,gpu=0" mtm=7912AC3 arch=x86_64 cons=kvm  mgt=kvm netboot=xnba profile=compute vmhost=xxx mac=x.x.x.x.x.x ip=x.x.x.x
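As an optional sanity check before the tests start, the definitions can be listed on the xCAT MN; lsdef -i shows just the attributes the selectors below will match on.

# lsdef xtpbogusp9phyn1 -i groups,arch,mtm,disksize,memory,cpucount,usercomment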
  • Install the xCAT Terraform Provider on node1. The installation involves an operating-system user: terraform init looks for the provider binary under that user's home directory, /<user_home>/.terraform.d/plugins/. In this test, assume this OS user is root.
// log in to node1 as root 

//Terraform ppc64le version
$ wget https://media.github.ibm.com/releases/207181/files/158261?token=AABlEjj4dE6g_afKtyCL0TTcD8gGrNE9ks5c3OiqwA%3D%3D -O /usr/bin/terraform
$ chmod +x /usr/bin/terraform

//Terraform x86 version
# mkdir /tmp/terraform 
# wget https://releases.hashicorp.com/terraform/0.11.13/terraform_0.11.13_linux_amd64.zip -P /tmp/terraform
# cd /tmp/terraform && unzip terraform_0.11.13_linux_amd64.zip
# mv /tmp/terraform/terraform /usr/local/sbin/
# rm -rf /tmp/terraform

//xCAT Terraform Provider ppc64le version
$ mkdir -p /root/.terraform.d/plugins
$ wget https://media.github.ibm.com/releases/207181/files/158263?token=AABlElBlpu3Q8UGn3xJBlrHbN60nKizLks5c3Qq4wA%3D%3D -O /root/.terraform.d/plugins/terraform-provider-xcat
$ chmod +x /root/.terraform.d/plugins/terraform-provider-xcat 
  • Install the xCAT API service on node2 following its setup steps (a quick toolchain check is sketched below).
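Before running the test cases, a quick sanity check on node1 that the toolchain is in place (paths as assumed above):

$ terraform version
$ ls -l /root/.terraform.d/plugins/terraform-provider-xcat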

Test Scope

Basic Function test

Basic function tests cover a customer applying, updating, and freeing compute instances, and getting the configuration of each compute instance.

apply function test

case 1: apply nodes based on selectors

  • Create work directory
$ mkdir -p /terraform_test/xtpu1/task1 && cd /terraform_test/xtpu1/task1 
  • Create the xcat.tf file below under /terraform_test/xtpu1/task1
#cat /terraform_test/xtpu1/task1/xcat.tf
provider "xcat" {
  url = "<the access url of xCAT API service>"
  username = "xtpu1"
  password = "12345"
}
  • init terraform for this task
# terraform init
  • Create the node.tf file /terraform_test/xtpu1/task1/node.tf
resource "xcat_node" "ppc64lenode" {
  selectors {
    arch="ppc64le"
    machinetype="8335-GTB"
    gpu=1
    ib=0
    disksize>=300 
    memory<=128
    cputype="POWER9 (raw), altivec supported"
    cpucount<=20 
  }
  count=1
}

resource "xcat_node" "x86node" {
  selectors {
    arch="x86_64"
    machinetype="7912AC2"
    disksize==400 
    memory==128
    cputype="Intel(R) Xeon(R) CPU E5-2665 0 @ 2.40GHz"
    cpucount==32 
  }
  count=2
}

output "x86nodes" {
  value=[ 
      "${xcat_node.x86node.*.name}"
  ]
}

output "ppc64lenodes" {
  value=[ 
      "${xcat_node.ppc64lenode.*.name}"
  ]
}

output "login_credential" {
  value="username: root; password: cluster"
}
  • terraform plan/apply
$ terraform apply
  • Expect the apply to return xtpbogusp9phyn10, xtpbogusx86phyn2 and xtpbogusx86phyn3.
  • Expect all 3 returned nodes to carry the attributes defined in the xCAT DB, like below:
//xtpbogusp9phyn10
ip=100.50.20.10 mac=E9.83.35.EB.20.10 room="r1" rack="rB" unit="u10"
//xtpbogusx86phyn2
ip=100.50.30.02 mac=86.79.12.C2.30.02 room="r2" rack="rAC1" unit="u2" 
//xtpbogusx86phyn3
ip=100.50.30.03 mac=86.79.12.C2.30.03 room="r2" rack="rAC1" unit="u3" 
  • Expect the value of the groups attribute of xtpbogusp9phyn10, xtpbogusx86phyn2 and xtpbogusx86phyn3 in the xCAT DB to have been changed from free to xtpu1.
  • Expect the value of the groups attribute of the remaining nodes in the node pool to still be free (verification sketch below).
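One way to verify the last two expectations on the xCAT MN, sketched with standard xCAT commands:

//expect groups=xtpu1 for the three applied nodes
# lsdef xtpbogusp9phyn10,xtpbogusx86phyn2,xtpbogusx86phyn3 -i groups
//expect all remaining pool nodes to still be listed here
# lsdef -t node -w "groups==free"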

case 2: apply nodes with provisioning requirements

  • Create work directory
$ mkdir -p /terraform_test/xtpu1/task2 && cd /terraform_test/xtpu1/task2
  • Create the xcat.tf file below under /terraform_test/xtpu1/task2/
#cat /terraform_test/xtpu1/task2/xcat.tf
provider "xcat" {
  url = "<the access url of xCAT API service>"
  username = "xtpu1"
  password = "12345"
}
  • init terraform for this task
# terraform init
  • Create the node.tf file below under /terraform_test/xtpu1/task2/
#cat /terraform_test/xtpu1/task2/node.tf
resource "xcat_node" "ppc64lenode" {
  selectors {
    arch="ppc64le"
    machinetype="8335-GTA"
  }
  count=1
  osimage="rhels7.6-ppc64le-install-compute"
  powerstatus=on
}
resource "xcat_node" "x86node" {
  selectors {
    arch="x86_64"
    machinetype="7912AC3"
  }
  count=1
  osimage="rhels7.4-x86_64-netboot-compute"
  powerstatus=off
}
output "x86nodes" {
  value=[ 
      "${xcat_node.x86node.*.name}"
  ]
}
output "ppc64lenodes" {
  value=[ 
      "${xcat_node.ppc64lenode.*.name}"
  ]
}
output "login_credential" {
  value="username: root; password: cluster"
}
  • terraform plan/apply
$ terraform plan
$ terraform apply
  • Expect the apply to return xtprealp8vm1 and xtprealx86vm1.
  • Expect the value of the groups attribute of xtprealp8vm1 and xtprealx86vm1 in the xCAT DB to have been changed from free to the user name xtpu1.
  • Expect xtprealp8vm1 to be pingable and xtprealx86vm1 not to be pingable.
  • Expect xtprealp8vm1 to be installed with rhels7.6.
  • Expect that the user can log in to xtprealp8vm1 using the IP, username and password returned by apply.
  • Change node.tf to power on xtprealx86vm1
resource "xcat_node" "x86node" {
  selectors {
    arch="x86_64"
    machinetype="7912AC3"
  }
  count=1
  osimage="rhels7.4-x86_64-netboot-compute"
  powerstatus=on
}
  • Expect xtprealx86vm1 to be powered on with rhels7.4 (the OS is not reinstalled this time).
  • Expect the value of the groups attribute of xtprealx86vm1 and xtprealp8vm1 in the xCAT DB to have been changed from free to xtpu1.
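The power and provisioning expectations can be cross-checked from the xCAT MN with standard xCAT commands, for example:

//expect both nodes to report on after the update
# rpower xtprealp8vm1,xtprealx86vm1 status
//expect sshd once the rhels7.6 install on xtprealp8vm1 finishes
# nodestat xtprealp8vm1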

Case 3: If no node satisfies the selectors, no node is returned

  • Create work directory
$ mkdir -p /terraform_test/xtpu1/task3 && cd /terraform_test/xtpu1/task3
  • Create the xcat.tf file below under /terraform_test/xtpu1/task3/
#cat /terraform_test/xtpu1/task3/xcat.tf
provider "xcat" {
  url = "<the access url of xCAT API service>"
  username = "xtpu1"
  password = "12345"
}
  • init terraform for this task
# terraform init
  • Create the node.tf file below under /terraform_test/xtpu1/task3
resource "xcat_node" "ppc64lenode" {
  selectors {
    arch="ppc64le"
    machinetype="8335-GTF"
  }
  count=1
}
output "ppc64lenodes" {
  value=[ 
      "${xcat_node.ppc64lenode.*.name}"
  ]
}
output "login_credential" {
  value="username: root; password: cluster"
}
  • terraform plan/apply
$ terraform plan
$ terraform apply
  • Expect apply failure.

Case 4: If the node pool is empty, apply will fail

  • Clear the node pool on the xCAT MN (node2)
$ chdef free groups=all
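A quick check on the xCAT MN that the pool is really empty:

//expect no node to be returned
# lsdef -t node -w "groups==free"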
  • Log in to node1 as root
  • Create work directory
$ mkdir -p /terraform_test/xtpu1/task4 && cd /terraform_test/xtpu1/task4
  • Create the xcat.tf file below under /terraform_test/xtpu1/task4/
#cat /terraform_test/xtpu1/task4/xcat.tf
provider "xcat" {
  url = "<the access url of xCAT API service>"
  username = "xtpu1"
  password = "12345"
}
  • init terraform for this task
# terraform init
  • Create the node.tf file below under /terraform_test/xtpu1/task4/
resource "xcat_node" "ppc64lenode" {
  selectors {
    arch="ppc64le"
    machinetype="8335-GTB"
  }
  count=1
}
output "ppc64lenodes" {
  value=[ 
      "${xcat_node.ppc64lenode.*.name}"
  ]
}
output "login_credential" {
  value="username: root; password: cluster"
}
  • terraform plan/apply
$ terraform plan
$ terraform apply
  • Expect apply failure.

Case 5: A node already applied by one user cannot be applied by another user

  • Repeat the steps in case 1 as user xtpu1 (so that xtpu1 holds xtpbogusp9phyn10).
  • Create work directory for xtpu2
$ mkdir -p /terraform_test/xtpu2/task1 && cd /terraform_test/xtpu2/task1 
  • Create the xcat.tf file below under /terraform_test/xtpu2/task1
#cat /terraform_test/xtpu2/task1/xcat.tf
provider "xcat" {
  url = "<the access url of xCAT API service>"
  username = "xtpu2"
  password = "12345"
}
  • init terraform for this task
# terraform init
  • Apply the node xtpbogusp9phyn10 (currently held by xtpu1) as user xtpu2, using the node.tf below
resource "xcat_node" "ppc64lenode" {
  selectors {
    hostname="xtpbogusp9phyn10"
  }
  count=1
}
output "ppc64lenodes" {
  value=[ 
      "${xcat_node.ppc64lenode.*.name}"
  ]
}
output "login_credential" {
  value="username: root; password: cluster"
}
  • Expect apply failure.

Update function test

case1 update the number of nodes and the node configuration

  • Repeat the steps in apply case 1 as user xtpu1.
  • Change the node.tf file /terraform_test/xtpu1/task1/node.tf to:
resource "xcat_node" "x86node" {
  selectors {
    arch="x86_64"
    machinetype="7912AC2"
    disksize==400 
    memory==64
    cputype="Intel(R) Xeon(R) CPU E5-2665 0 @ 2.40GHz"
    cpucount==32 
  }
  count=1
}
output "x86nodes" {
  value=[ 
      "${xcat_node.x86node.*.name}"
  ]
}
output "login_credential" {
  value="username: root; password: cluster"
}
  • Expect the apply to return xtpbogusx86phyn1.
  • Expect the value of the groups attribute of xtpbogusx86phyn1 to be set to xtpu1, and the values of the groups attribute of xtpbogusp9phyn10, xtpbogusx86phyn2 and xtpbogusx86phyn3 in the xCAT DB to have been changed back to free.

case2 update the osimage of a node

  • Repeat steps 1-5 of apply case 2.
  • Change node.tf to:
#cat /terraform_test/xtpu1/task2/node.tf
resource "xcat_node" "ppc64lenode" {
  selectors {
    arch="ppc64le"
    machinetype="8335-GTA"
  }
  count=1
  osimage="rhels7.5-ppc64le-install-compute"
  powerstatus=on
}

resource "xcat_node" "x86node" {
  selectors {
    arch="x86_64"
    machinetype="7912AC3"
  }
  count=1
  osimage="rhels7.4-x86_64-netboot-compute"
  powerstatus=off
}

output "x86nodes" {
  value=[ 
      "${xcat_node.x86node.*.name}"
  ]
}

output "ppc64lenodes" {
  value=[ 
      "${xcat_node.ppc64lenode.*.name}"
  ]
}

output "login_credential" {
  value="username: root; password: cluster"
}
  • Expect the apply to still return xtprealp8vm1 and xtprealx86vm1.
  • Expect the OS of xtprealp8vm1 to change from rhels7.6 to rhels7.5.
  • Expect nothing to change for xtprealx86vm1.

case3 update the power status of a node

  • Repeat steps 1-5 of apply case 2.
  • Change node.tf to:
#cat /terraform_test/xtpu1/task2/node.tf
resource "xcat_node" "ppc64lenode" {
  selectors {
    arch="ppc64le"
    machinetype="8335-GTA"
  }
  count=1
  osimage="rhels7.6-ppc64le-install-compute"
  powerstatus=off
}

resource "xcat_node" "x86node" {
  selectors {
    arch="x86_64"
    machinetype="7912AC3"
  }
  count=1
  osimage="rhels7.4-x86_64-netboot-compute"
  powerstatus=on
}

output "x86nodes" {
  value=[ 
      "${xcat_node.x86node.*.name}"
  ]
}

output "ppc64lenodes" {
  value=[ 
      "${xcat_node.ppc64lenode.*.name}"
  ]
}

output "login_credential" {
  value="username: root; password: cluster"
}
  • Expect the apply to still return xtprealp8vm1 and xtprealx86vm1.
  • Expect the power status of xtprealp8vm1 to change to off.
  • Expect the power status of xtprealx86vm1 to change to on.
  • Expect everything else to stay unchanged.

destroy function test

case1 free the whole cluster applied earlier

  • Repeat steps 1-5 of apply case 2.
  • Touch some marker files on xtprealp8vm1 and xtprealx86vm1.
  • Destroy the resources
# terraform destroy
  • Expect xtprealp8vm1 and xtprealx86vm1 to have been freed.
  • Expect the values of the groups attribute of xtprealp8vm1 and xtprealx86vm1 in the xCAT DB to have been changed back to free.
  • Expect the files touched on xtprealp8vm1 and xtprealx86vm1 to be gone, i.e. the nodes must not leak information from the previous user (a check is sketched below).
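A sketch of checking the last expectation from the xCAT MN (which can log in to any node it deployed); the marker file name /tmp/xtp_marker is hypothetical and should match whatever was touched in step 2:

//expect "No such file or directory" on both nodes
# xdsh xtprealp8vm1,xtprealx86vm1 "ls /tmp/xtp_marker"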

case2 verify that a node freed by one user can be applied by another user

  • Repeat apply test case 5, but this time expect the apply to succeed.

case3 verify that node resources do not leak after many apply/destroy cycles by different users

  • If different users apply and destroy node resources many times in parallel, check whether node resources leak, i.e. whether the nodes in the free group of the xCAT DB abnormally become fewer and fewer. One possible check is sketched below.
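A possible leak check, sketched as a shell script. It assumes the task1 workspaces of xtpu1 and xtpu2 from the earlier cases, a host where both terraform and the xCAT client commands are available, and the Terraform 0.11 non-interactive flags.

#!/bin/bash
# count the free pool, run N parallel apply/destroy cycles, then recount
baseline=$(lsdef -t node -w "groups==free" | wc -l)
for i in $(seq 1 20); do
    (cd /terraform_test/xtpu1/task1 && terraform apply -auto-approve && terraform destroy -force) &
    (cd /terraform_test/xtpu2/task1 && terraform apply -auto-approve && terraform destroy -force) &
    wait
done
after=$(lsdef -t node -w "groups==free" | wc -l)
# the free-pool count should return to its baseline if nothing leaked
[ "$baseline" -eq "$after" ] && echo "no leak" || echo "possible leak: $baseline -> $after"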

User management test

Whether an end user can get higher rights than they need (not covered in 2.15)

Because xCAT requires a high-privilege user to operate xcatd, we need to make sure the user-management solution of xCAT Terraform does not leak high-privilege user information to end users, i.e. an end user must not be able to obtain the high-privilege user's username, password, token, certificate and so on. An end user could leverage such information to operate xcatd directly or to change confidential configuration.

Whether one user has a way to access another user's resources

Every user should have their own workspace; check:

  • Whether one user has a way to access another user's resources (tf files, state files, configuration files, ...)
  • Whether one user has a way to operate the nodes applied by another user.

Whether confidential information exists in files the end user can access

  • Whether confidential information (password, token, certificate) exists in files the end user can access (a scan sketch follows this list).
  • Whether confidential information (password, token, certificate) is hard-coded in the source code.
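A starting point for the first check is a plain scan of the files an end user can read; the paths follow the workspaces used in this plan:

//look for plaintext secrets in the files the user can reach
$ grep -rniE "password|token|certificate" /terraform_test/xtpu1 /terraform_test/xtpu2

Note that each user's own xcat.tf holds that user's password by design; the check is for credentials belonging to other users or to the high-privilege xCAT account.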

Whether one user's nodes can access another user's nodes

  • If customer A applies a group of nodes and customer B applies another group, check whether it is possible for customer A to access a node in group B through a node A applied, without any authorization. (The xCAT MN can log in to any node it deployed without a password.) A minimal check is sketched below.
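A minimal sketch of this check, assuming xtprealp8vm1 is held by customer A and xtprealx86vm1 by customer B: log in to A's node and try to reach B's node non-interactively; the attempt should be refused.

//run on xtprealp8vm1 (customer A's node)
$ ssh -o BatchMode=yes -o ConnectTimeout=5 root@xtprealx86vm1 hostname && echo "FAIL: cross-user login succeeded" || echo "OK: login refused"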

xCAT Terraform Provider deployment test

[Case 1] Whether it is easy for a user to set up the xCAT Terraform product from scratch (manually)

If the user starts with nothing, check whether it is easy to set up the xCAT Terraform product.

  • Whether there are detailed setup steps (doc)
  • Whether the steps are correct and easy to follow.

[Case 2] Whether it is easy for a user to set up the xCAT Terraform product based on an existing xCAT MN (manually)

If the user already has an xCAT MN, check whether it is easy to integrate the xCAT Terraform product with it.

  • Whether there are detailed setup steps (doc)
  • Whether the steps are correct and easy to follow.