Table of Contents
- Test design of xCAT Terraform Provider (phase 1)
- TO CHECK
- Upstream documents
- The limitations of xCAT Terraform Provider in xCAT 2.15
- Test environment set up (in 2.15)
- Test Scope
- Basic Function test
- apply function test
- case 1: apply node depending on selectors
- case 2 : apply node with provision requirement
- Case 3: If no node satisfies the selectors, no node is returned
- Case 4: If the node pool is empty, apply will fail
- Case 5: one user cannot apply a node already held by another user
- Update function test
- case 1: update the number of nodes and node configuration
- case 2: update the osimage of a node
- case 3: update the power status of a node
- destroy function test
- User management test
- If end users can get higher privileges than they need (not covered in 2.15)
- If one user has a way to access other users' resources
- If there is confidential information in files the end user can access
- If one user has a way to access other users' nodes
- xCAT Terraform Provider deployment test
Test design of xCAT Terraform Provider (phase 1)
TO CHECK
groups=__TFPOOL-FREE
Upstream documents
- Source code of xCAT Terraform Provider
- xCAT Terraform Provider quick start
- Minidesign of xcat Terraform Provider
- How to apply and orchestrate compute instances from a xCAT cluster with Terraform
- Terraform official portal
- Some attribute definitions of nodes in the xCAT DB:
  - disksize: the size of the disks for the node, in GB
  - memory: the size of the memory for the node, in MB
  - cputype: the CPU model name of the node
  - cpucount: the number of CPUs of the node
  - rack: the frame the node is in
  - room: the room where the node is located
  - unit: the vertical position of the node in the frame
The limitations of xCAT Terraform Provider in xCAT 2.15
- Terraform download (the x86_64 version is downloaded from the official Terraform web site; the ppc64le version is downloaded from xcat.org)
- xCAT Terraform Provider download (both the x86_64 and ppc64le versions are downloaded from xcat.org)
- xCAT API service (there will be pip and container versions eventually, but in 2.15 only the pip version is recommended)
- xCAT (there are rpm and container versions; in 2.15 only the rpm version is recommended)
- There is no user authentication in 2.15. The Terraform user can operate the xCAT API service directly for other purposes. Authentication will be covered in a later xCAT version.
- In 2.15, Terraform users have to use the xCAT REST API to get the osimage and node lists in the xCAT DB.
- In 2.15, the xCAT Terraform Provider only supports `==` and `!=` in selectors; it does not support `>=`, `<=`, or other advanced matching methods.
- In 2.15, it is recommended to install xCAT and the xCAT API service on the same node.
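The `==`/`!=` selector semantics can be illustrated with a small self-contained sketch; the node table and the awk filter below are illustrative only, not the provider's actual implementation:

```shell
# Illustrative node table: name, arch, mtm, disksize (not real provider data)
cat > /tmp/nodepool.txt <<'EOF'
xtpbogusp9phyn1 ppc64le 8335-GTC 200
xtpbogusp9phyn2 ppc64le 8335-GTB 200
xtpbogusx86phyn1 x86_64 7912AC1 300
EOF
# Selector: arch == "ppc64le" AND mtm != "8335-GTC"
# (only equality/inequality, matching the 2.15 limitation above)
awk '$2 == "ppc64le" && $3 != "8335-GTC" { print $1 }' /tmp/nodepool.txt
```

Only `xtpbogusp9phyn2` satisfies both conditions in this sample pool.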
Test environment set up (in 2.15)
| product component | location | ip | remark |
|---|---|---|---|
| xCAT Terraform provider | node1 | 10.x.x.1 | binary used directly |
| xCAT API service | node2 | 10.x.x.2 | pip installation |
| xCAT | node2 | 10.x.x.2 | rpm installation |
| node pool | 2 real nodes | 10.x.x.4-5 | need to cover provision test |
All configuration of the xCAT cluster must be done ahead of time.
- xCAT installation
- Create the xCAT Terraform Provider user accounts in the `passwd` table on the xCAT MN ahead of time (by the admin):
# chtab key=xcat passwd.username=xtpu1 passwd.password=12345
# chtab key=xcat passwd.username=xtpu2 passwd.password=12345
- Use the above accounts `xtpu1` and `xtpu2` to access the xCAT API service to apply for tokens, and remember these tokens.
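Applying for a token can be done through the xCAT REST API; the sketch below assumes the standard `/xcatws/tokens` endpoint, and `<xcat-api-host>` is a placeholder for the actual xCAT API service address:

```shell
# Apply for a token for xtpu1 (replace <xcat-api-host> with the real address)
curl -X POST -k 'https://<xcat-api-host>/xcatws/tokens' \
     -H 'Content-Type: application/json' \
     --data '{"userName":"xtpu1","userPW":"12345"}'
# The returned token id is what later requests present for authentication
```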
- Define all the kinds of nodes the xCAT Terraform Provider supports in the xCAT DB ahead of time. At least two real nodes are needed to test provisioning in parallel; the other nodes can be bogus nodes.
# chdef xtpbogusp9phyn1 groups=free usercomment=",ib=0,gpu=0" mtm=8335-GTC arch=ppc64le disksize=200 memory=256 cputype="POWER9 (raw), altivec supported" cpucount=32 ip=100.50.20.1 cons=openbmc mgt=openbmc netboot=petitboot profile=compute mac=E9.83.35.EB.20.01 room="r1" rack="rC" unit="u1" status=powering-on
# chdef xtpbogusp9phyn2 groups=free usercomment=",ib=0,gpu=0" mtm=8335-GTB arch=ppc64le disksize=200 memory=256 cputype="POWER9 (raw), altivec supported" cpucount=32 ip=100.50.20.2 cons=openbmc mgt=openbmc netboot=petitboot profile=compute mac=E9.83.35.EB.20.02 room="r1" rack="rB" unit="u2" status=powering-on
# chdef xtpbogusp9phyn3 groups=free usercomment=",ib=1,gpu=0" mtm=8335-GTB arch=ppc64le disksize=200 memory=256 cputype="POWER9 (raw), altivec supported" cpucount=32 ip=100.50.20.3 cons=openbmc mgt=openbmc netboot=petitboot profile=compute mac=E9.83.35.EB.20.03 room="r1" rack="rB" unit="u3" status=powering-on
# chdef xtpbogusp9phyn4 groups=free usercomment=",ib=0,gpu=1" mtm=8335-GTB arch=ppc64le disksize=200 memory=256 cputype="POWER9 (raw), altivec supported" cpucount=32 ip=100.50.20.4 cons=openbmc mgt=openbmc netboot=petitboot profile=compute mac=E9.83.35.EB.20.04 room="r1" rack="rB" unit="u4" status=powering-on
# chdef xtpbogusp9phyn5 groups=free usercomment=",ib=1,gpu=1" mtm=8335-GTB arch=ppc64le disksize=200 memory=256 cputype="POWER9 (raw), altivec supported" cpucount=32 ip=100.50.20.5 cons=openbmc mgt=openbmc netboot=petitboot profile=compute mac=E9.83.35.EB.20.05 room="r1" rack="rB" unit="u5" status=powering-on
# chdef xtpbogusp9phyn6 groups=free usercomment=",ib=0,gpu=1" mtm=8335-GTB arch=ppc64le disksize=300 memory=256 cputype="POWER9 (raw), altivec supported" cpucount=32 ip=100.50.20.6 cons=openbmc mgt=openbmc netboot=petitboot profile=compute mac=E9.83.35.EB.20.06 room="r1" rack="rB" unit="u6" status=powering-on
# chdef xtpbogusp9phyn7 groups=free usercomment=",ib=0,gpu=1" mtm=8335-GTB arch=ppc64le disksize=400 memory=128 cputype="POWER9 (raw), altivec supported" cpucount=32 ip=100.50.20.7 cons=openbmc mgt=openbmc netboot=petitboot profile=compute mac=E9.83.35.EB.20.07 room="r1" rack="rB" unit="u7" status=powering-on
# chdef xtpbogusp9phyn8 groups=free usercomment=",ib=0,gpu=1" mtm=8335-GTB arch=ppc64le disksize=400 memory=64 cputype="POWER9 (raw), altivec supported" cpucount=32 ip=100.50.20.8 cons=openbmc mgt=openbmc netboot=petitboot profile=compute mac=E9.83.35.EB.20.08 room="r1" rack="rB" unit="u8" status=powering-on
# chdef xtpbogusp9phyn9 groups=free usercomment=",ib=0,gpu=1" mtm=8335-GTB arch=ppc64le disksize=400 memory=64 cputype="POWER8 (raw), altivec supported" cpucount=20 ip=100.50.20.9 cons=openbmc mgt=openbmc netboot=petitboot profile=compute mac=E9.83.35.EB.20.09 room="r1" rack="rB" unit="u9" status=powering-on
# chdef xtpbogusp9phyn10 groups=free usercomment=",ib=0,gpu=1" mtm=8335-GTB arch=ppc64le disksize=400 memory=64 cputype="POWER9 (raw), altivec supported" cpucount=16 ip=100.50.20.10 cons=openbmc mgt=openbmc netboot=petitboot profile=compute mac=E9.83.35.EB.20.10 room="r1" rack="rB" unit="u10" status=powering-on
# chdef xtpbogusx86phyn1 groups=free usercomment=",ib=0,gpu=0" mtm=7912AC1 arch=x86_64 disksize=300 memory=64 cputype=" Intel(R) Xeon(R) CPU E5-2665 0 @ 2.40GHz" cpucount=16 ip=100.50.30.01 mac=86.79.12.C1.30.01 room="r2" rack="rAC1" unit="u1" status=powering-on cons=ipmi mgt=ipmi netboot=xnba profile=compute
# chdef xtpbogusx86phyn2 groups=free usercomment=",ib=1,gpu=0" mtm=7912AC2 arch=x86_64 disksize=400 memory=128 cputype=" Intel(R) Xeon(R) CPU E5-2665 0 @ 2.40GHz" cpucount=32 ip=100.50.30.02 mac=86.79.12.C2.30.02 room="r2" rack="rAC1" unit="u2" status=powering-on cons=ipmi mgt=ipmi netboot=xnba profile=compute
# chdef xtpbogusx86phyn3 groups=free usercomment=",ib=1,gpu=0" mtm=7912AC2 arch=x86_64 disksize=400 memory=128 cputype=" Intel(R) Xeon(R) CPU E5-2665 0 @ 2.40GHz" cpucount=32 ip=100.50.30.03 mac=86.79.12.C2.30.03 room="r2" rack="rAC1" unit="u3" status=powering-on cons=ipmi mgt=ipmi netboot=xnba profile=compute
//below 2 nodes should be real nodes; they need to be prepared ahead of time
# chdef xtprealp8vm1 groups=free usercomment=",ib=0,gpu=0" mtm=8335-GTA arch=ppc64le cons=ipmi mgt=kvm netboot=grub2 profile=compute vmhost=xxx mac=x.x.x.x.x.x ip=x.x.x.x
# chdef xtprealx86vm1 groups=free usercomment=",ib=0,gpu=0" mtm=7912AC3 arch=x86_64 cons=kvm mgt=kvm netboot=xnba profile=compute vmhost=xxx mac=x.x.x.x.x.x ip=x.x.x.x
- Install xCAT Terraform Provider on `node1`. The xCAT Terraform Provider installation involves an operating system user; `terraform init` looks for the provider binary under that user's home directory, `/<user_home>/.terraform.d/plugins/`. In this test, suppose this OS user is `root`.
// login node1 by root
//Terraform ppcle version
$ wget https://media.github.ibm.com/releases/207181/files/158261?token=AABlEjj4dE6g_afKtyCL0TTcD8gGrNE9ks5c3OiqwA%3D%3D -O /usr/bin/terraform
$ chmod +x /usr/bin/terraform
//Terraform x86 version
# mkdir /tmp/terraform
# wget https://releases.hashicorp.com/terraform/0.11.13/terraform_0.11.13_linux_amd64.zip -P /tmp/terraform
# cd /tmp/terraform && unzip terraform_0.11.13_linux_amd64.zip
# mv /tmp/terraform/terraform /usr/local/sbin/
# rm -rf /tmp/terraform
//xCAT Terraform Provider ppc64le version
$ wget https://media.github.ibm.com/releases/207181/files/158263?token=AABlElBlpu3Q8UGn3xJBlrHbN60nKizLks5c3Qq4wA%3D%3D -O /root/.terraform.d/plugins/terraform-provider-xcat
$ chmod +x /root/.terraform.d/plugins/terraform-provider-xcat
- Install the xCAT API service on node2 following its setup documentation.
Test Scope
Basic Function test
The basic function test covers a customer applying, updating, and freeing compute instances, and getting the configuration of each compute instance.
apply function test
case 1: apply node depending on selectors
- Create work directory
$ mkdir -p /terraform_test/xtpu1/task1 && cd /terraform_test/xtpu1/task1
- Create the below `xcat.tf` file under `/terraform_test/xtpu1/task1`:
#cat /terraform_test/xtpu1/task1/xcat.tf
provider "xcat" {
url = "<the access url of xCAT API service>"
username = "xtpu1"
password = "12345"
}
- init terraform for this task
# terraform init
- Create the below `node.tf` file `/terraform_test/xtpu1/task1/node.tf`:
resource "xcat_node" "ppc64lenode" {
selectors {
arch="ppc64le"
machinetype="8335-GTB"
gpu=0
ib=1
disksize>=300
memory<=128
cputype="POWER9 (raw), altivec supported"
cpucount<=20
}
count=1
}
resource "xcat_node" "x86node" {
selectors {
arch="x86_64"
machinetype="7912AC2"
disksize==400
memory==128
cputype="Intel(R) Xeon(R) CPU E5-2665 0 @ 2.40GHz"
cpucount==32
}
count=2
}
output "x86nodes" {
value=[
"${xcat_node.x86node.*.name}"
]
}
output "ppc64lenodes" {
value=[
"${xcat_node.ppc64lenode.*.name}"
]
}
output "login_credential" {
value="username: root; password: cluster"
}
- terraform plan/apply
$ terraform apply
- Expect return of `xtpbogusp9phyn10`, `xtpbogusx86phyn2` and `xtpbogusx86phyn3`.
- Expect all 3 returned nodes to have the attributes defined in the xCAT DB, like below:
//xtpbogusp9phyn10
mac=E9.83.35.EB.20.10 room="r1" rack="rB" unit="u10"
//xtpbogusx86phyn2
ip=100.50.30.02 mac=86.79.12.C2.30.02 room="r2" rack="rAC1" unit="u2"
//xtpbogusx86phyn3
ip=100.50.30.03 mac=86.79.12.C2.30.03 room="r2" rack="rAC1" unit="u3"
- Expect the value of the `groups` attribute of `xtpbogusp9phyn10`, `xtpbogusx86phyn2` and `xtpbogusx86phyn3` in the xCAT DB to have been changed from `free` to `xtpu1`.
- Expect the value of the `groups` attribute of the rest of the nodes in the node pool to still be `free`.
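The `groups` expectations above can be spot-checked on the xCAT MN with the standard xCAT CLI (output shape may vary by xCAT version):

```shell
# Show only the groups attribute of the three returned nodes
lsdef xtpbogusp9phyn10,xtpbogusx86phyn2,xtpbogusx86phyn3 -i groups
# A node left in the pool should still report groups=free
lsdef xtpbogusp9phyn1 -i groups
```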
case 2: apply node with provision requirement
- Create work directory
$ mkdir -p /terraform_test/xtpu1/task2 && cd /terraform_test/xtpu1/task2
- Create the below `xcat.tf` file under `/terraform_test/xtpu1/task2/`:
#cat /terraform_test/xtpu1/task2/xcat.tf
provider "xcat" {
url = "<the access url of xCAT API service>"
username = "xtpu1"
password = "12345"
}
- init terraform for this task
# terraform init
- Create the below `node.tf` file under `/terraform_test/xtpu1/task2/`:
#cat /terraform_test/xtpu1/task2/node.tf
resource "xcat_node" "ppc64lenode" {
selectors {
arch="ppc64le"
machinetype="8335-GTA"
}
count=1
osimage="rhels7.6-ppc64le-install-compute"
powerstatus=on
}
resource "xcat_node" "x86node" {
selectors {
arch="x86_64"
machinetype="7912AC3"
}
count=1
osimage="rhels7.4-x86_64-netboot-compute"
powerstatus=off
}
output "x86nodes" {
value=[
"${xcat_node.x86node.*.name}"
]
}
output "ppc64lenodes" {
value=[
"${xcat_node.ppc64lenode.*.name}"
]
}
output "login_credential" {
value="username: root; password: cluster"
}
- terraform plan/apply
$ terraform plan
$ terraform apply
- Expect return of `xtprealp8vm1` and `xtprealx86vm1`.
- Expect the value of the `groups` attribute of `xtprealp8vm1` and `xtprealx86vm1` in the xCAT DB to have been changed from `free` to the user name `xtpu1`.
- Expect `xtprealp8vm1` to be pingable and `xtprealx86vm1` not to be pingable; `xtprealp8vm1` will be installed with rhels7.6.
- Using the ip, username and password returned by `apply`, the user can log in to `xtprealp8vm1`.
- Change `node.tf` to power on `xtprealx86vm1`:
resource "xcat_node" "x86node" {
selectors {
arch="x86_64"
machinetype="7912AC3"
}
count=1
osimage="rhels7.4-x86_64-netboot-compute"
powerstatus=on
}
- Expect `xtprealx86vm1` to be powered on with OS rhels7.4 (the OS won't be reinstalled this time).
- Expect the value of the `groups` attribute of `xtprealx86vm1` and `xtprealp8vm1` in the xCAT DB to have been changed from `free` to `xtpu1`.
Case 3: If no node satisfies the selectors, no node is returned
- Create work directory and init terraform
$ mkdir -p /terraform_test/xtpu1/task3 && cd /terraform_test/xtpu1/task3 && terraform init
- Create the below `xcat.tf` file under `/terraform_test/xtpu1/task3/`:
#cat /terraform_test/xtpu1/task3/xcat.tf
provider "xcat" {
url = "<the access url of xCAT API service>"
username = "xtpu1"
password = "12345"
}
- init terraform for this task
# terraform init
- Create the below `node.tf` file under `/terraform_test/xtpu1/task3`:
resource "xcat_node" "ppc64lenode" {
selectors {
arch="ppc64le"
machinetype="8335-GTF"
}
count=1
}
output "ppc64lenodes" {
value=[
"${xcat_node.ppc64lenode.*.name}"
]
}
output "login_credential" {
value="username: root; password: cluster"
}
- terraform plan/apply
$ terraform plan
$ terraform apply
- Expect apply failure.
Case 4: If the node pool is empty, apply will fail
- clear up the node pool on the xCAT MN (node2)
$ chdef free groups=all
- login node1 by root
- Create work directory
$ mkdir -p /terraform_test/xtpu1/task4 && cd /terraform_test/xtpu1/task4
- Create the below `xcat.tf` file under `/terraform_test/xtpu1/task4/`:
#cat /terraform_test/xtpu1/task4/xcat.tf
provider "xcat" {
url = "<the access url of xCAT API service>"
username = "xtpu1"
password = "12345"
}
- init terraform for this task
# terraform init
- Create the below `node.tf` file under `/terraform_test/xtpu1/task4/`:
resource "xcat_node" "ppc64lenode" {
selectors {
arch="ppc64le"
machinetype="8335-GTB"
}
count=1
}
output "ppc64lenodes" {
value=[
"${xcat_node.ppc64lenode.*.name}"
]
}
output "login_credential" {
value="username: root; password: cluster"
}
- terraform plan/apply
$ terraform plan
$ terraform apply
- Expect apply failure.
Case 5: one user cannot apply a node already held by another user
- Repeat the steps in case 1 as user `xtpu1`.
- Create a work directory for `xtpu2`:
$ mkdir -p /terraform_test/xtpu2/task1 && cd /terraform_test/xtpu2/task1
- Create the below `xcat.tf` file under `/terraform_test/xtpu2/task1`:
#cat /terraform_test/xtpu2/task1/xcat.tf
provider "xcat" {
url = "<the access url of xCAT API service>"
username = "xtpu2"
password = "12345"
}
- init terraform for this task
# terraform init
- Apply the below node `xtpbogusp9phyn10` as user `xtpu2`:
resource "xcat_node" "ppc64lenode" {
selectors {
hostname="xtpbogusp9phyn10"
}
count=1
}
output "ppc64lenodes" {
value=[
"${xcat_node.ppc64lenode.*.name}"
]
}
output "login_credential" {
value="username: root; password: cluster"
}
- Expect apply failure.
update function test
case 1: update the number of nodes and node configuration
- Repeat the steps in apply case 1 as user `xtpu1`.
- Change the `node.tf` file `/terraform_test/xtpu1/task1/node.tf` to:
resource "xcat_node" "x86node" {
selectors {
arch="x86_64"
machinetype="7912AC2"
disksize==400
memory==64
cputype="Intel(R) Xeon(R) CPU E5-2665 0 @ 2.40GHz"
cpucount==32
}
count=1
}
output "x86nodes" {
value=[
"${xcat_node.x86node.*.name}"
]
}
output "login_credential" {
value="username: root; password: cluster"
}
- Expect return of `xtpbogusx86phyn1`.
- Expect the value of the `groups` attribute of `xtpbogusx86phyn1` to be set to `xtpu1`, and the value of the `groups` attribute of `xtpbogusp9phyn10`, `xtpbogusx86phyn2` and `xtpbogusx86phyn3` in the xCAT DB to have been changed back to `free`.
case 2: update the osimage of a node
- Repeat steps 1-5 of apply case 2.
- Change `node.tf` to:
#cat /terraform_test/xtpu1/task2/node.tf
resource "xcat_node" "ppc64lenode" {
selectors {
arch="ppc64le"
machinetype="8335-GTA"
}
count=1
osimage="rhels7.5-ppc64le-install-compute"
powerstatus=on
}
resource "xcat_node" "x86node" {
selectors {
arch="x86_64"
machinetype="7912AC3"
}
count=1
osimage="rhels7.4-x86_64-netboot-compute"
powerstatus=off
}
output "x86nodes" {
value=[
"${xcat_node.x86node.*.name}"
]
}
output "ppc64lenodes" {
value=[
"${xcat_node.ppc64lenode.*.name}"
]
}
output "login_credential" {
value="username: root; password: cluster"
}
- Expect `xtprealp8vm1` and `xtprealx86vm1` to still be returned.
- Expect the OS of `xtprealp8vm1` to change from rhels7.6 to rhels7.5.
- Expect nothing to change on `xtprealx86vm1`.
case 3: update the power status of a node
- Repeat steps 1-5 of apply case 2.
- Change `node.tf` to:
#cat /terraform_test/xtpu1/task2/node.tf
resource "xcat_node" "ppc64lenode" {
selectors {
arch="ppc64le"
machinetype="8335-GTA"
}
count=1
osimage="rhels7.6-ppc64le-install-compute"
powerstatus=off
}
resource "xcat_node" "x86node" {
selectors {
arch="x86_64"
machinetype="7912AC3"
}
count=1
osimage="rhels7.4-x86_64-netboot-compute"
powerstatus=on
}
output "x86nodes" {
value=[
"${xcat_node.x86node.*.name}"
]
}
output "ppc64lenodes" {
value=[
"${xcat_node.ppc64lenode.*.name}"
]
}
output "login_credential" {
value="username: root; password: cluster"
}
- Expect `xtprealp8vm1` and `xtprealx86vm1` to still be returned.
- Expect the power status of `xtprealp8vm1` to change to off.
- Expect the power status of `xtprealx86vm1` to change to on.
- Expect everything else to stay unchanged.
destroy function test
case 1: free the whole cluster applied earlier
- Repeat steps 1-5 of apply case 2.
- Touch some files on `xtprealp8vm1` and `xtprealx86vm1`.
- Destroy the resources:
# terraform destroy
- Expect `xtprealp8vm1` and `xtprealx86vm1` to have been freed.
- Expect the values of the `groups` attribute of `xtprealp8vm1` and `xtprealx86vm1` in the xCAT DB to have been changed back to `free`.
- Expect the files touched on `xtprealp8vm1` and `xtprealx86vm1` to be gone, i.e. the node must not leak any information from its last user.
case 2: verify that a node freed by its last user can be applied by another user
- Repeat apply test case 5; this time expect the apply to succeed.
case 3: verify that node resources do not leak after many apply/destroy cycles by different users
- If different users apply and destroy node resources many times in parallel, check whether node resources leak (i.e. whether the nodes in the `free` group of the xCAT DB abnormally become fewer and fewer).
User management test
If end users can get higher privileges than they need (not covered in 2.15)
Because xCAT needs a highly privileged user to operate xcatd, make sure the user management solution of xCAT Terraform does not leak that privileged user's information to end users, i.e. an end user must not be able to obtain the privileged user's user name, password, token, certificate and so on, which could be leveraged to operate xcatd directly or to change confidential configuration.
If one user has a way to access other users' resources
Every user should have their own workspace. Check:
- whether one user has a way to access another user's resources (tf files, state files, configuration files, ...)
- whether one user has a way to operate a node applied by another user
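One concrete workspace-isolation check can be sketched as follows; the paths under `/tmp/tfws` are hypothetical stand-ins for the real per-user workspaces such as `/terraform_test/xtpu1`:

```shell
# Hypothetical per-user workspaces (stand-ins for /terraform_test/<user>)
mkdir -p /tmp/tfws/xtpu1 /tmp/tfws/xtpu2
chmod 700 /tmp/tfws/xtpu1 /tmp/tfws/xtpu2
# Any workspace granting group/other access is a finding; expect no output
find /tmp/tfws -mindepth 1 -maxdepth 1 -type d ! -perm 700
```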
If there is confidential information in files the end user can access
- whether there is confidential information (passwords, tokens, certificates) in files the end user can access
- whether there is confidential information (passwords, tokens, certificates) hard-coded in the source code
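A simple scan for plaintext credentials can be sketched like this; the sample state file is fabricated for illustration only:

```shell
# Fabricated sample state file containing a plaintext password
mkdir -p /tmp/tfscan
printf '{"provider":{"password":"12345"}}\n' > /tmp/tfscan/terraform.tfstate
# List files containing likely secrets; each hit needs manual review
grep -rlE 'password|token|certificate' /tmp/tfscan
```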
If one user has a way to access other users' nodes
- If customer A applies one group of nodes and customer B applies another group, check whether customer A can access a node in group B through a node A applied, without any authorization. (The xCAT MN can log in to any node it deployed without a password.)
xCAT Terraform Provider deployment test
[Case 1] Whether it is easy for a user to set up the xCAT Terraform product from scratch (manually)
If the user has nothing at first, is it easy to set up the xCAT Terraform product?
- whether there are detailed setup steps (doc)
- whether the steps are correct and easy to follow
[Case 2] Whether it is easy for a user to set up the xCAT Terraform product based on an existing xCAT MN (manually)
If the user already has an xCAT MN, is it easy to integrate the xCAT Terraform product with it?
- whether there are detailed setup steps (doc)
- whether the steps are correct and easy to follow