321 Commits

Author SHA1 Message Date
b8e22ef5b2 fix ipv6 0.0.0.0 to ::, the equivalent ip 2014-04-04 01:19:09 +01:00
795295e1a3 add ifname to arguments for checkConfig_Sles 2014-04-02 09:59:22 +01:00
d730e82055 update routeop with device based routing 2014-04-02 09:43:15 +01:00
71ad311a68 update route.pm, and start work on routeop 2014-04-01 22:20:37 +01:00
e34da389fb defect 4033: fix device based routing in makeroutes 2014-04-01 16:16:14 +01:00
45b11936cd fix spelling 2014-03-27 13:33:30 -04:00
165abd4f88 defect 3998 - change file vars to globals 2014-03-26 17:08:17 -04:00
8e110d51f1 defect 3953 2014-03-26 13:07:48 -04:00
045c42fce6 doxcat: the ntpq -c rv 0 offset may return a negative number 2014-03-26 16:00:12 -05:00
1407ef0288 genesis-scripts: doxcat - ntpq -c rv state does not work any more, use the ntpq -c rv offset instead 2014-03-26 14:26:35 -05:00
3aeb21025d defect 3964 - clarify mknb man page for hierarchy with sharedtftp=0 2014-03-25 15:57:21 -04:00
4c48026801 defect 3941 2014-03-25 08:09:50 -04:00
e1fb6aae06 add the centos6.5 discid info, fix a problem in post.xcat 2014-03-25 17:14:50 -05:00
49824ba195 add grub2-xcat as a dependency to enable provisioning rhel7 on an ubuntu MN 2014-03-25 01:59:18 -07:00
b067d90a60 fix xCAT-genesis-builder/buildrpm for latest mcp version 2014-03-25 14:56:10 -05:00
c747908e0f remove autosetup of useflowcontrol, see 4031 2014-03-24 09:26:12 -04:00
c6ee255d70 fix defect 3875 2014-03-24 04:56:24 -04:00
e52453be0f update man 2014-03-21 14:51:14 -04:00
5afd870cef tabprune -a supports all tables 2014-03-21 14:45:30 -04:00
fe38638b4b Support 'specific' copycds mode.
ESXi releases updates without bumping version numbers.  For such cases,
the default behavior is for the new version to supersede the old.  This
makes sense for the vast majority of cases.  There are, however, corner
cases where being explicit about the release is indicated.
2014-03-21 10:56:20 -04:00
d33000aef6 fix -L option 2014-03-21 09:56:47 -04:00
93ff514a2a add pointer to Easy Regex documentation 2014-03-21 08:02:10 -04:00
c6cb1c9977 append some code logic and documentation for rhels7 support on grub2 2014-03-21 04:40:25 -07:00
1af1135646 add testcase for prsync 2014-03-21 02:12:45 -04:00
8165f1acd0 add testcase for pscp 2014-03-21 02:12:33 -04:00
25ca816477 add testcase for sysclone 2014-03-21 02:12:23 -04:00
ffa9f73d13 add testcase for xdcp 2014-03-21 02:12:10 -04:00
fe5b75eabf add testcase for xdsh 2014-03-21 02:11:52 -04:00
b685bfc8a7 add testcase for makeroutes 2014-03-21 02:01:03 -04:00
781595f000 add testcase for ppping 2014-03-21 02:00:43 -04:00
3d15b983a3 Merge branch '2.8' of ssh://git.code.sf.net/p/xcat/xcat-core into 2.8 2014-03-21 01:41:23 -04:00
6a8371b7d4 add testcase for makedns 2014-03-21 01:41:03 -04:00
2013e18c73 fix bug #4027, replace chop with chomp 2014-03-20 23:11:28 +00:00
72c76ab32b fix defect 3981 2014-03-20 14:13:27 -04:00
45af650bb9 more code for supporting other hardware in OpenStack baremetal driver. 2014-03-21 07:00:25 -04:00
7a980c9cea fix autodiscovery problem when the makehosts command has not been run 2014-03-20 18:43:14 +08:00
36690a8671 fix defect #4030 [DEV]'makedhcp -n' failed to add subnet info to dhcpd.conf on rhels7.0 x86_64 mn 2014-03-19 08:14:00 -07:00
8984f6243b add rhels7 stateless and statelite support for x86_64 2014-03-19 07:59:02 -07:00
dd64ec3740 fix defect 4028 2014-03-19 07:22:22 -04:00
ddf56b2788 clarify that genimage -i is optional 2014-03-18 14:51:33 -04:00
554aea03cd xCAT-SoftLayer package is noarch only on Linux, not on AIX 2014-03-18 16:19:06 -05:00
6c355dbeb6 xCAT-SoftLayer.spec: cp -a does not work on AIX, change to use -p -R instead 2014-03-18 14:18:34 -05:00
1bb9999176 makerpm: add copying xcat.conf.apach24 into /SOURCES 2014-03-18 13:55:36 -05:00
bdcc8ab613 update with xCAT-OpenStack-baremetal packaging: 1) update xpod2man to not create summary page 2) remove xcat.1.pod from xCAT-OpenStack-baremetal 2014-03-17 23:33:20 -04:00
a6c7b582ef fix defect #4023 syntax error 2014-03-18 09:18:43 -04:00
67eecc44d4 Merge branch '2.8' of ssh://git.code.sf.net/p/xcat/xcat-core into 2.8
add xCAT-SoftLayer files to 2.8 branch
2014-03-17 06:53:15 -04:00
f085548559 adding xCAT-SoftLayer files to 2.8 branch 2014-03-17 06:52:35 -04:00
4779749c1e defect 4022 and add podchecker, good debug tool 2014-03-17 06:44:28 -04:00
67b998eaf9 fix for bug 3935: support nic name with space, like Local Area Network on windows 2014-03-14 17:51:29 -05:00
a9e5fa3bad fix for bug 3937: return 1 when lsdef <object> can not find the object 2014-03-14 17:01:05 -05:00
2ad92ce054 fix for bug 3976: check -i flag 2014-03-14 16:16:27 -05:00
46132889d8 Merge branch '2.8' of ssh://git.code.sf.net/p/xcat/xcat-core into 2.8 2014-03-13 10:37:58 -04:00
e41210c8d8 add xCAT-OpenStack-baremetal to build-ubunturepo package list, change debian/source/format to 1.0 2014-03-13 07:44:15 -07:00
4b87e5d765 additional checks 2014-03-13 10:37:14 -04:00
5c6a57dee7 add ubuntu/debian package files for dpkg-buildpackage 2014-03-13 07:15:19 -07:00
a663b2d88a change sync of keys so they are cleaned up when zones are removed 2014-03-13 09:49:23 -04:00
18d519bdcf chzone manpage 2014-03-13 08:44:07 -04:00
7be4bec3e6 add and improve zone manpages 2014-03-13 07:52:31 -04:00
fe7b4c79bf add and improve zone manpages 2014-03-13 07:50:59 -04:00
68e5147e80 add man page for mkzone 2014-03-12 12:29:43 -04:00
aaa740b4a9 fix parsing 2014-03-12 11:33:04 -04:00
906f61e9e5 rm id_rsa so we can switch between sshbetweennode yes and no 2014-03-12 10:46:11 -04:00
8ec9319c63 xCAT baremetal driver for OpenStack supports nodes with all hw types that are supported by xCAT 2014-03-13 03:14:03 -04:00
c4c741bf41 fix the description for table hwinv. 2014-03-13 03:05:55 -04:00
73868bdc66 chzone code 2014-03-11 14:19:11 -04:00
ea3eebe585 fix bug 4014: updatenodegroups routine always adds all groups to the node 2014-03-10 19:04:31 -07:00
b1948ed1e9 a lot of chzone, needs more work 2014-03-10 14:43:00 -04:00
ef05a92cb2 rhels7 statelite/stateless support 2014-03-08 06:46:54 -08:00
68b53380a3 fix defect #4015 2014-03-07 07:32:03 -08:00
894a1c680d rmzone 2014-03-07 09:55:07 -05:00
6c15e1409f Merge branch '2.8' of ssh://git.code.sf.net/p/xcat/xcat-core into 2.8 2014-03-07 09:53:36 -05:00
734eccddeb rmzone support 2014-03-07 09:53:00 -05:00
4be6530914 Merge branch '2.8' of ssh://git.code.sf.net/p/xcat/xcat-core into 2.8 2014-03-07 06:42:25 -08:00
af52462972 fix defect #3999 2014-03-07 06:01:30 -08:00
c2d2452719 fix defect #3999 and #4013 2014-03-07 05:54:56 -08:00
69a80e72ee fix for bug 4010: remove xcatws.cgi from PCM build 2014-03-06 15:47:15 -06:00
36b8c67f80 fix bug 4007: DFM support Powerlinux, mkvm return error 2014-03-05 00:50:53 -08:00
62e366907e better hierarchical processing for zones 2014-03-04 10:22:21 -05:00
546307bc40 document DSH_REMOTE_PASSWORD env variable 2014-03-04 08:31:20 -05:00
4faf715d40 add OpenStack-baremetal rpm 2014-03-03 15:11:48 -05:00
6d500da7d7 fix spelling 2014-03-03 15:06:38 -05:00
89a526de4f add OpenStack-baremetal rpm 2014-03-03 15:01:37 -05:00
7b9c893055 support getting zone ssh keys 2014-03-03 13:29:02 -05:00
d3cd61745d support zone root ssh keys 2014-03-03 13:18:19 -05:00
91ff649bea more zone functions 2014-03-03 13:15:06 -05:00
0be18a3c4a Code drop to support new netboot method - grub2 2014-02-28 07:52:13 -05:00
a4f87fbb69 fix for bug 4002, configib replaces /etc/sysctl.conf 2014-02-28 14:10:24 -06:00
7da6bca6d4 updatenode customized mypostscript.tmpl with ZONENAME lines 2014-02-27 10:55:31 -05:00
eb6d69a76b updatenode customized mypostscript.tmpl with ZONENAME lines 2014-02-27 10:12:53 -05:00
3b02602dd2 fix bug 3873: DFM illegal action could work 2014-02-27 01:24:41 -08:00
4889456e12 Merge branch '2.8' of ssh://linggao@git.code.sf.net/p/xcat/xcat-core into 2.8 2014-02-27 11:01:43 -05:00
3389774e1b more for xCAT baremetal driver for OpenStack 2014-02-27 11:01:15 -05:00
e77ef4eed3 initialize if no zones 2014-02-26 11:33:39 -05:00
a35f118cd9 adding zonename 2014-02-26 10:37:05 -05:00
4483b709da adding routines for support 2014-02-26 10:35:26 -05:00
f2f814016d adding routines for support 2014-02-26 10:34:25 -05:00
de2bafc46d Merge branch '2.8' of ssh://git.code.sf.net/p/xcat/xcat-core into 2.8 2014-02-26 08:13:59 -05:00
d8dedfb3b1 defect 3961: keep the domain name without a leading '.' 2014-02-26 13:26:01 -05:00
55a90fc1b9 fix ENABLESSHBETWEENNODES setting in mypostscript file when zones 2014-02-26 08:12:11 -05:00
e6dc6f8a22 removed the debugger 2014-02-26 04:32:07 -05:00
84232a4b45 Merge branch '2.8' of ssh://git.code.sf.net/p/xcat/xcat-core into 2.8 2014-02-25 11:41:45 -05:00
04015bd64d defect 3994 2014-02-25 11:41:15 -05:00
2ec690f18f zone hierarchical support for xdsh -K 2014-02-25 09:59:43 -05:00
7528e8f0f1 update manpage for mkvm and chvm about partitioning item 2014-02-24 23:55:09 -08:00
7b43034fab fix a typo in updatenode manpage 2014-02-25 13:44:19 -06:00
1a5d650be8 xdsh -K support (still need to do hierarchy) 2014-02-24 11:48:44 -05:00
18a8fd1476 xdsh -K support (still need to do hierarchy) 2014-02-24 11:42:19 -05:00
c99c4d809b xdsh -K support (still need to do hierarchy) 2014-02-24 11:38:58 -05:00
db8efd37b9 fix for bug 3991: if the node can not be resolved or ip address is not valid, print warning message and ignore the node 2014-02-21 12:41:37 -06:00
9749c8984c zone code 2014-02-20 12:34:20 -05:00
329c65eec5 more code 2014-02-20 11:34:53 -05:00
dabc94af12 more code 2014-02-20 11:33:24 -05:00
dad84f2d1a fix for defect 3985 2014-02-20 07:09:50 -05:00
accf4fe1e3 Merge branch '2.8' of ssh://linggao@git.code.sf.net/p/xcat/xcat-core into 2.8 2014-02-20 09:22:53 -05:00
b6cb78e67d xCAT baremetal driver for OpenStack 2014-02-20 09:21:27 -05:00
9e826d1189 more zone code 2014-02-19 12:50:36 -05:00
82caea177c designchanges 2014-02-19 08:49:43 -05:00
00318b28d4 designchanges 2014-02-19 08:48:14 -05:00
d80d7a974f add zone table sshbetweennodes attribute 2014-02-19 07:14:15 -05:00
59d675af7b add zone table sshbetweennodes attribute 2014-02-19 07:06:28 -05:00
8a015d24fd add missing -g flag 2014-02-19 05:22:52 -05:00
1107b62310 add missing -g flag 2014-02-19 05:21:02 -05:00
5500f73824 improvements 2014-02-18 11:08:22 -05:00
640ded52ec more improvements for zones 2014-02-18 10:14:49 -05:00
d5e7c46622 more zone support 2014-02-18 09:06:11 -05:00
ace58552c1 more zone support 2014-02-18 08:49:47 -05:00
62c1f4a6f6 fix man page 2014-02-18 06:20:28 -05:00
6e1720038e change comment on path of where the gpfs_updates directory is placed 2014-02-17 12:23:32 +00:00
95486591e4 link in zone commands 2014-02-13 14:27:41 -05:00
b1fddf8eca Merge branch '2.8' of ssh://git.code.sf.net/p/xcat/xcat-core into 2.8 2014-02-13 09:53:54 -05:00
d03dcbe5f3 Explicitly set SSL_VERIFY_MODE during start ssl in Client 2014-02-13 09:51:02 -05:00
2ee32a79c8 Multiple zone support 2014-02-13 07:54:10 -05:00
7fe699f561 Multiple zone support 2014-02-13 07:50:30 -05:00
445483f25a put in wrong directory 2014-02-13 07:48:02 -05:00
aa32357398 Multiple zone support 2014-02-13 07:43:36 -05:00
601dc2569c update manpage of mkdef/chdef for osimage 2014-02-13 00:59:42 -08:00
e0eea42b96 only use short node name in loadclouddata 2014-02-13 12:28:34 -05:00
663e7d0132 take *.rhel*.pkglist as the pkglist file if *.rhels*.pkglist does not exist 2014-02-12 00:52:35 -08:00
3dce4de63a specify text installation mode; otherwise, anaconda will drop into a choice dialog when it fails to start X in graphic mode 2014-02-12 00:49:36 -08:00
a4a0ed5da3 fix bug 3983: copycds show error info 2014-02-11 22:06:48 -08:00
2876b0f371 The template for keystone and swift(all in one) 2014-02-12 12:34:27 -05:00
3fd9ee8ad9 fix bug #3984 Ubuntu 13.10 diskless installation fails 2014-02-11 10:54:53 -08:00
2d7b3f6c51 support for keystone+swift 2014-02-11 15:44:44 -05:00
73b9ef07ad Have openssl req use better message digest 2014-02-11 10:08:06 -05:00
784f89b916 roll back the last change in configmic file; And fix the issue that hostname cannot show the short hostname 2014-02-11 04:48:40 -05:00
8804d7180b RHEL7 support for diskful compute profile complete 2014-02-10 14:01:05 -05:00
89f3dccdbe Have RHEL7 proceed to get through install completion
Still need to get through the postscript phase
2014-02-10 14:00:52 -05:00
263ee3af3a Add signature detection for RHEL7 media to anaconda 2014-02-10 10:50:32 -05:00
5452efea2e fix the issue that the hostname command cannot get the short host name 2014-02-10 08:24:41 -05:00
504dc16571 Fix xCAT init script status reporting
xCAT in some cases was reporting improper status. Risk being inaccurate if
no pid file exists so that the status is accurate when one does exist.
2014-02-07 17:39:32 -05:00
1d64d9ce82 new zone table and zonename attribute 2014-02-05 08:28:53 -05:00
b808b12dcf Correct } mistake in previous commit to IPMI.pm 2014-02-04 10:26:57 -05:00
7f42f33094 fix unordered cherry-pick 2014-01-30 20:42:02 +00:00
e889f59523 add nichostnameprefixes to @nodeattrs 2014-01-30 20:40:17 +00:00
afb15362f3 first commit for prefix hostname feature 2014-01-30 20:40:07 +00:00
b832cb5023 fix bug 3971, trim othernames variable 2014-01-30 19:54:07 +00:00
5e62fbced8 _clear_cache was removed long ago. _build_cache does all the needed work itself, so skip _build_cache. 2014-01-30 13:00:42 -05:00
5ac1ebaa63 fix for a typo in confignics, caused ib configuration problems 2014-01-29 10:03:17 -06:00
0ba8649e56 Add IBM_HPC_Stack_in_an_xCAT_Cluster 2014-01-28 15:35:47 -05:00
9cae05441d minor fix 2014-01-28 13:47:49 -08:00
5e686b1a45 Add range check and message to nmap 2014-01-28 13:38:39 -08:00
b7ec6919e9 fix Undefined subroutine &JSON::decode_json called at /opt/xcat/ws/xcatws.cgi line 168 2014-01-27 23:31:39 -05:00
6556f026d7 modify usage for 'chvm' and 'mkvm', remove debug msg 2014-01-26 22:53:39 -08:00
d662c5e8b5 The scripts used for configuring and provisioning VIOS partition 2014-01-26 22:02:50 -08:00
23423a5cb1 create VIOS and logical partitions 2014-01-26 21:57:57 -08:00
4a2de6ddc2 windows solution 222013 xcat part 2014-01-27 13:12:27 +08:00
0a810645f2 fix for bug 3979: print a message with rnetboot/bootseq when gateway is empty 2014-01-27 09:58:48 -06:00
1e2fdb0529 improve man for sshbetweennodes 2014-01-23 12:59:11 -05:00
e62b5b43a8 3974 rinv failed for Fujitsu Blade Server 2014-01-22 23:19:19 -08:00
caceaab306 upload testcase for ubuntu 2014-01-22 23:48:03 -05:00
d08d6fb244 Add enic to nics in genesis 2014-01-22 16:57:43 -05:00
0a7963cc65 fix 3879: complete kit size is too big; keep only buildkit.conf, other_files, plugins and scripts under build_input in the complete kit, and remove other useless dirs 2014-01-21 03:48:06 -05:00
07f95665be Add a new file xcat.conf.apach24 2014-01-20 13:51:04 -08:00
9b171c32af fix bug #3973 Ubuntu 13.10 diskful installation fails 2014-01-20 13:49:15 -08:00
bc607bf74c fix bug #3973 Ubuntu 13.10 diskful installation fails 2014-01-20 13:21:03 -08:00
c98217e5a1 fix bug #3973 Ubuntu 13.10 diskful installation fails 2014-01-20 13:16:57 -08:00
20865b8812 Change the man page of nodeset to make it support the shutdown and shell operations 2014-01-20 11:09:09 -05:00
91a5abd7f3 fix 3879,remove build_input dir from complete kit xxx.tar.gz 2014-01-20 04:10:03 -05:00
7c8f671f9f kitconponent should be kitcomponent 2014-01-19 22:19:07 -05:00
95d332aa23 update imgexport/imgimport manpage to support kits 2014-01-19 21:54:04 -05:00
ece1fd7d03 fixed 3357,add symlink,copy postscripts and plugin files for kit 2014-01-16 20:54:14 -05:00
0f93e7a135 Add a new file xcat.conf.apach24 2014-01-16 12:45:19 -08:00
d97a9e4316 minor fix 2014-01-16 04:41:33 -08:00
aa2c4978c5 update building of ubuntu repo to include the supported ubuntu distros 2014-01-16 04:38:31 -08:00
65bd3afbb3 Merge branch '2.8' of ssh://git.code.sf.net/p//xcat/xcat-core into 2.8 2014-01-16 04:21:24 -08:00
3b418f8981 update building of ubuntu repo to include the supported ubuntu distros 2014-01-16 03:56:55 -08:00
d0e3c2354f defect 3968: fixed the issue that for statelite on aix, the .statelite dir was not copied to shared_root spot from default spot 2014-01-16 04:48:20 -05:00
bcb98fcec9 fix mysqlsetup -u 2014-01-15 06:14:52 -05:00
27f6761231 fix defect #3960 Genimage broken for CentOS 5.4 nodes 2014-01-14 01:00:33 -08:00
6777f86870 Make IPMI 2.0 crypto dependencies mandatory
Faced with an increasing population of IPMI 2 only devices, make the AES/CBC
requirements mandatory as it is a common source of systems failing to work
now.
2014-01-13 10:50:00 -05:00
caaa130479 Make the bmcsetup cmd update node status to [bmcready] in genesis; and make the chain mechanism support the [shutdown] keyword, which is used to power off the node 2014-01-13 07:54:05 -05:00
a3ade8608b fix the issue that proxydhcp configuration file cannot be updated 2014-01-10 09:15:50 -05:00
487c57d358 Fix for bug 3955. 2014-01-10 03:01:56 -05:00
9c35744fd7 Fix for bug 3815
Last fix results in "makedhcp -n" not working, so re-fix it.
2014-01-09 22:14:55 -05:00
4263c661c5 update description 2014-01-09 08:46:26 -05:00
3db9440ca9 Fix detection of debian for some ubuntu installations 2014-01-08 15:16:18 -05:00
1caaf17eb2 Merge branch '2.8' of ssh://git.code.sf.net/p/xcat/xcat-core into 2.8 2014-01-08 11:11:44 -05:00
a7bb1d0a9a simplify messages for odbcsetup call 2014-01-08 11:11:17 -05:00
7f7907ae3e fix for bug 3951: remove the code to check xcat versions during xcatd restart/reload 2014-01-08 23:49:01 +08:00
a37d0e3ec4 fix for defect 3839 2014-01-08 10:32:37 -05:00
f35332a0d1 fix for bug 3947: add check for AIX and nmap existence 2014-01-08 23:29:30 +08:00
c1359797e8 handle kit stuff in imgexport/imgimport 2014-01-08 20:14:47 +08:00
05a1624439 defect 3135: changed the mount process for the statelite directory (for persistent entries): make a directory named after the node and remount to that nodename directory instead of the nfs root dir for persistent entries 2014-01-08 04:59:33 -05:00
9470ac6d1a Add categories to the site table 2014-01-07 15:40:58 -05:00
509b979ee8 fix:remoed should be removed 2014-01-07 03:06:30 -05:00
a03b712e2f make the confignics postscript accept site.setinstallnic to configure installnic to be static 2014-01-07 02:26:53 -05:00
a46a8919a3 defect 3948 2014-01-06 14:56:58 -05:00
8ca9e350e4 fixing the migration problem that rmkitcomp should remove the kitdeployment parameter file and its contents. 2014-01-06 15:22:11 +08:00
3e31714aa3 fedora19/fedora20 diskful support 2014-01-02 23:50:47 -08:00
24b255fec5 update cases0 for buildkit 2014-01-02 02:50:08 -05:00
37be3cd9d7 do not use nodels --version in /etc/init.d/xcatd 2014-01-02 14:38:09 +08:00
f11aa5cc6d Fix for bug 3952
Made makedhcp be able to handle the case where site.nameservers or
networks.nameservers is a comma delimited list with <xcatmaster>
keyword in it.
2014-01-02 01:17:44 -05:00
49cd33382c defect 3909: make xcatd load xCAT::Enabletrace by require instead of use to save resources 2014-01-02 02:02:24 -05:00
6d76a44409 fix man page of mknb to indicate that mknb only supports x86_64 2014-01-02 01:12:19 -05:00
0fe0d25c5f change the name os-object-storage-setup in os-object-storage-setup.rb 2014-01-01 22:09:20 -05:00
2facd1f0c4 Change the Windows deployment templates to support disk configuration, nics configuration and run postscripts 2013-12-31 09:01:28 -05:00
1039ba5490 Enhance genimage.cmd to accept second and third params for multiple winpe support 2013-12-31 08:02:54 -05:00
b4a450352c add an enhancement to skip deployment parameters that are not well formed. 2013-12-31 17:02:22 +08:00
341e847646 minor change in last check of passing parameters to postinstall and postbootscripts 2013-12-31 15:37:53 +08:00
740114c491 Add some comments and help message for genimage.bat 2013-12-30 23:02:52 -05:00
0d1e50c764 fixed the issue that missed the winpe-scripting.cab in last checkin 2013-12-30 07:01:40 -05:00
9403878ade fixing bug 3815: don't use global variables, which do not work well in a hierarchical system. 2013-12-30 10:56:57 +08:00
2e1d048dee passing kitcomponent deploy parameters to genimage package installation, postinstall script and postbootscripts. 2013-12-30 10:05:19 +08:00
c327c4a24b fixing bug 3945: give an example of how to write ospkgdeps and kitpkgdeps for different arches. 2013-12-27 17:05:28 +08:00
c3132f5da9 kit release should be mandatory according to mini-design 2013-12-27 02:42:46 -05:00
b93a7b7ab1 fixing bug 3943: give an accurate pattern to match output from the console 2013-12-27 15:30:00 +08:00
2ad4b13f78 fix Use of uninitialized value within @a2 in pattern match (m//) at BuildKitUtils.pm line 273 2013-12-27 01:53:35 -05:00
fc7d8ecee5 Support dns master/slave configuration 2013-12-26 22:57:24 -05:00
f2817eff86 update the cookbook and role for swift 2013-12-26 15:05:36 -05:00
c944543a1f fixing bug 3848: moving the preuninstall script from the prerequisite rpm to the meta rpm to make sure it is issued before the component is uninstalled. 2013-12-26 15:23:56 +08:00
c4522d9f5c update node dependency cookbook statsd for swift 2013-12-26 14:14:17 -05:00
e0de4e6a26 support for swift 2013-12-26 13:06:40 -05:00
60afd44afc if there are two roles for one chef-client, the script couldn't assign the
roles to the node. fixed it.
2013-12-26 12:39:02 -05:00
f3aae68777 support for swift 2013-12-24 15:16:11 -05:00
1afea00f08 template cleanup 2013-12-23 15:23:33 -05:00
b55034f64c support for cinder 2013-12-23 15:37:45 -05:00
52098b8ad4 Add warning message on fsp wrong slp reply 2013-12-22 05:03:57 -08:00
4920ba650d grep in busybox does not support long option strings, use short options instead 2013-12-19 19:13:10 -08:00
a3b589af67 Changed the table name from capacity to hwinv. 2013-12-19 22:51:39 -05:00
891f41b6ba add rhels5.10 discinfo 2013-12-19 04:55:45 -08:00
b767c8ae95 fix syntax error if is blank 2013-12-19 04:26:24 -08:00
5b2c27eb75 grep in busybox does not support long option strings, use short options instead 2013-12-19 01:42:59 -08:00
29fa3ff010 add the second argument for genimage.bat to make it can generate winpe and BCD to a specific dir 2013-12-19 07:25:25 -05:00
ce58c3bc1c fix the problem that rhels5.10 initrd cannot resolve the mn hostname 2013-12-18 23:19:35 -08:00
9584f0f683 Remove useless file build-debianrepo 2013-12-19 04:18:25 -08:00
a24cabf71c Rewrite proxydhcp.c as proxydhcp-xcat in perl; added site.installnic to control the nics setting for windows; added servicenode.proxydhcp and noderes.proxydhcp to control the starting of the proxydhcp-xcat daemon and makedhcp against nodes for windows deployment 2013-12-19 04:53:49 -05:00
eaf168ed48 Fix the bug that was causing mklocalrepo.sh to be wrong 2013-12-19 00:57:32 -08:00
b68c12d07a fixed a syntax error 2013-12-19 08:34:10 -05:00
69bb732270 Added a capacity table to store the cpu, memory and disk sizes for nodes 2013-12-19 07:52:29 -05:00
70a8a07daa Merge branch '2.8' of ssh://git.code.sf.net/p/xcat/xcat-core into 2.8 2013-12-18 15:44:03 -05:00
61f2851006 fixed Error: at the beginning of monshow usage (feature request 73) 2013-12-18 15:43:15 -05:00
eebb88bbb1 defect 3916 2013-12-18 09:13:28 -05:00
8efb1c8f48 Support dns master/slave configuration 2013-12-18 01:35:15 -05:00
7b5055c77c defect 3916 2013-12-17 13:20:28 -05:00
0fe981ce6d make including winpostscript dir only for linux 2013-12-17 13:54:29 -05:00
f3bc8f145a correct the errors in the allinone template(using single flat network) 2013-12-17 20:53:30 -05:00
413ec01107 add the role os-block-storage-volume for cinder 2013-12-17 20:50:58 -05:00
83f82660ca enhanced the role for cinder 2013-12-17 15:12:10 -05:00
df0ec5b14b It will output some errors when using new chef/chef-server. The errors will not affect the
whole procedure. Update the cookbooks to revmoe the error mesages.
2013-12-17 14:50:17 -05:00
b9c3d71d6f if we set up the chef-server during os provision, there are some error messages
in the /var/log/messages on the management node.  Fixed them.
2013-12-17 14:10:53 -05:00
50686d34c7 Make xCAT rpm to install all files in /install/winpostscripts/* for Windows support 2013-12-17 02:23:38 -05:00
5021289a86 Enhanced postscript support in templates 2013-12-16 03:53:47 -05:00
32f8a37a22 code drop for windows postscript/postbootscript support. The postscript/postbootscript should be set in node/osimage.postscript/postbootscript and copy to /install/winpostscripts before running nodeset 2013-12-16 02:22:02 -05:00
b958093474 Add lsslp unicast support 2013-12-13 01:26:52 -08:00
0922dc234c Add lsslp unicast support 2013-12-13 00:59:33 -08:00
acce8ea798 Add lsslp unicast support 2013-12-13 00:50:14 -08:00
d6b62f0eac fix defect 3942 2013-12-12 11:15:15 -05:00
214852bcd7 liteimg use rc.statelite instead of rc.statelite.ppc.redhat for rhels6.5.ppc64 statelite 2013-12-12 02:13:02 -08:00
d5a02a43a8 prevent the running of postbootscripts 2013-12-12 17:48:17 +08:00
8238177b7f prevent the running of postbootscripts 2013-12-12 17:43:57 +08:00
0284522bf1 prevent the running of postbootscripts 2013-12-12 17:37:48 +08:00
0ac6f5d4e8 skip the bmc interface for nics configuration 2013-12-10 07:17:51 -05:00
75a1115638 add test cases for addkit 2013-12-10 02:19:18 -05:00
01d1bee405 Multiple WinPE support. nodeset (Windows.pm) will generate a configuration file (path of the WinPE) in /var/lib/xcat/proxydhcp.cfg and send a signal to the proxydhcp daemon; the proxydhcp daemon loads the configuration file and offers the port 4011 service to Windows compute nodes. 2013-12-10 05:34:09 -05:00
96a81f9911 fixing bug 3340: add test option for rmkit to list kitcomponents in use 2013-12-09 21:37:28 -08:00
044febc97d fixing bug 3340: add test option for rmkit to list kitcomponents in use 2013-12-09 21:27:28 -08:00
389396a5f5 modify the vmstorage value format of local path 2013-12-09 18:58:53 -08:00
fd9456c60c add valid values of kvm, esx, rhevm to nodehm.mgt and power attributes 2013-12-09 19:02:14 -05:00
e0dc206356 update description of litefile and litetree image attribute to include reference to image groups 2013-12-09 14:39:45 -05:00
a60bb9d50c rhels6.5 support 2013-12-05 21:54:18 -08:00
7877dae894 fix for bug 3902: add bridge nics into dhcpd.conf, em\d+ for Fedora 2013-12-05 12:50:12 +08:00
6040b777d7 fix for bug 3922, use getNodesAttribs instead of getNodeAttribs 2013-12-05 09:47:39 +08:00
2ecf98530b Fix for bug 3912
update net-snmp rpm version
2013-12-04 04:01:39 -05:00
cdab9ccc32 fix lots of info in man page 2013-12-03 07:20:38 -05:00
58505be501 Fixed bug 3927
AIX bundle file can not recognize '#' in the middle of line.
2013-12-02 22:58:25 -05:00
5bc7502d61 code drop for feature to support multiple disks/partitions and multiple nics configuration for Windows deployment. 2013-12-03 02:26:31 -05:00
7d6d4cc690 Defect 3926, rerun of mysqlsetup -i leaves xcatd stopped 2013-12-02 06:42:27 -05:00
672f7548a3 two environment template files if develop_mode=false 2013-12-02 16:36:19 -05:00
e037cd0b69 update linux.conf.template for autotest 2013-12-01 22:13:38 -05:00
0da6df2117 at present, we will use "develop_mode=true", so I change the value
of develop_mode in the example to "true".
2013-11-29 16:16:57 -05:00
1d39d95386 add two examples of the environment files 2013-11-29 14:55:27 -05:00
25ef66f1d2 databags and related items in the openstack chef cookbooks.
These are some examples, they can work with the current cookbooks.
If there are some changes in the cookbooks, please update the
databags and related items
2013-11-29 14:44:34 -05:00
f3925b9cf0 To support databag in openstack chef cookbook.
--nodevmode is only used when running all the procedure, and will
generate the secret, create the databag, and load the databag item
2013-11-29 14:32:42 -05:00
bcb80dc6c3 support multiple os version 2013-11-29 00:35:14 -05:00
5b5703e18b add test cases for buildkit 2013-11-28 21:53:31 -05:00
59ca686eec To support developer_mode=false, we need to use databag. There was
a bug in rabbitmq-server (could not change the guest's password). Fixed
it.
2013-11-28 13:50:07 -05:00
54f2ee5abe add the role[os-network-dhcp-agent] into os-compute-single-controller.rb .
It could be used to allocate IP for the nova VMs when using role[allinone-compute]
2013-11-28 13:45:08 -05:00
3f6448b316 removed the nbroot rpms from the yum group file 2013-11-27 04:36:46 -05:00
d814b94bde Have ipmi do wire format, to match ipmitool and microsoft behavior in spite of the spec (which no one follows, not even prior xCAT code) 2013-11-26 10:26:51 -05:00
f83c398450 enhance os value in testcase,adding os:rhels,os:sles support in cases0 2013-11-24 20:53:58 -05:00
47a010573d for bug 3919: version compare problem 2013-11-22 01:35:37 -08:00
3b1f5fbbf2 update the ubuntu dep tarball 2013-11-21 20:26:30 -08:00
749909960a Merge branch '2.8' of ssh://git.code.sf.net/p/xcat/xcat-core into 2.8 2013-11-21 07:22:25 -05:00
1166185575 remove 'sequential' parameter for runxcmd calling in configfpc.pm 2013-11-21 01:04:30 -08:00
b5409af7ee fix bug 3889 xcatd not running preprocess for multiple plugins when mgt=ipmi 2013-11-21 00:51:00 -08:00
2ade33338a defect 3917: add support for running of postinstall script in mic genimage. The rootimage root is changed to overlay/rootimg from overlay/package 2013-11-21 05:17:23 -05:00
6f25f60016 fixing the problem that configuring bond0 flushed the default gateway. 2013-11-21 10:09:07 +08:00
9b4d72de32 Merge branch '2.8' of ssh://git.code.sf.net/p/xcat/xcat-core into 2.8 2013-11-19 07:19:31 -05:00
5161d143bc Change the man pages of nodeset,genimage and geninitrd commands for adding --ignorekernelchk option 2013-11-19 07:07:34 -05:00
c3f73bc9b8 Merge branch '2.8' of ssh://git.code.sf.net/p/xcat/xcat-core into 2.8 2013-11-19 15:55:45 +08:00
2b61564684 Code drop for new requirement: Add a new flag --ignorekernelchk for nodeset, geninitrd and genimage commands to skip the kernel version checking when injecting drivers from osimage.driverupdatesrc 2013-11-19 06:27:11 -05:00
089ea2da87 fix for bug 3913: do not use autocommit=0 for table read 2013-11-19 15:49:26 +08:00
004d179474 Fix SLES driver update media injection that is not rpm based 2013-11-18 16:31:15 -05:00
9578417eae defect 3870 2013-11-18 13:42:28 -05:00
fee08fb028 Merge branch '2.8' of ssh://git.code.sf.net/p/xcat/xcat-core into 2.8 2013-11-18 07:52:56 -05:00
0804cf1ae6 Defect 3906 2013-11-18 06:50:18 -05:00
e432c238f9 fixed bug 3904, if the environements dir doesn't exsit, create it. 2013-11-18 16:26:03 -05:00
c8053583bd fixed bug 3898 2013-11-18 15:37:15 -05:00
43bc2d1e8e fix #3877 [FVT]genimage return confusing error message when xCAT-IBMhpc is not installed 2013-11-17 19:07:05 -08:00
e2708df2b4 fix defect #3693 [DEV] rhels6.4-ppc64 statelite failed with (FATAL error: could not get the entries from litefile table...) when noderes.xcatmaster=<hostname of MN> 2013-11-17 18:48:00 -08:00
d91767af84 change version of 2.8 branch to 2.8.4 2013-11-15 16:46:40 -05:00
312 changed files with 19805 additions and 2650 deletions

View File

@ -1 +1 @@
2.8.3
2.8.4

View File

@ -1,241 +0,0 @@
#!/bin/sh
# Update GSA Ubuntu Repositories or create a local repository
#
# Author: Leonardo Tonetto (tonetto@linux.vnet.ibm.com)
# Revisor: Arif Ali (aali@ocf.co.uk)
#
# After running this script, add the following line to
# /etc/apt/sources.list for local repository
# deb file://<core_repo_path>/xcat-core/ maverick main
# deb file://<dep_repo_path>/xcat-dep/ maverick main
#
# For the purpose of getting the distribution name
# Supported distributions
dists="squeeze"
a_flag= # automatic flag - only update if repo was updated
c_flag= # xcat-core (trunk-delvel) path
d_flag= # xcat-dep (trunk) path
local_flag= # build the repository localy
while getopts 'c:d:u:p:l:a' OPTION
do
case $OPTION in
c) c_flag=1
xcat_core_path="$OPTARG"
;;
d) d_flag=1
xcat_dep_path="$OPTARG"
;;
l) local_flag=1
local_repo_path="$OPTARG"
;;
a) a_flag=1
;;
?) printf "Usage: %s -c <core_trunk_path> [-d <dep_trunk_path>] -l <local-repo_path> [-a]\n" $(basename $0) >&2
echo "-a Automatic: update only if there's any update on repo"
exit 2
;;
esac
done
shift $(($OPTIND - 1))
if [ -z "$c_flag" -a -z "$d_flag" ]
then
printf "Usage: %s -c <core_trunk_path> [-d <dep_trunk_path>] { -l <local-repo_path> | [-u <gsa_id> -p <gsa_passwd>] } [-a]\n" $(basename $0) >&2
echo "-a Automatic: update only if there's any update on repo"
exit 2
fi
if [ ! -d $xcat_core_path ]
then
printf "%s: No such directory\n" "$xcat_core_path" >&2
exit 2
fi
if [ "$d_flag" ]
then
if [ ! -d $xcat_dep_path ]
then
printf "%s: No such directory\n" "$xcat_dep_path" >&2
exit 2
fi
fi
if [ "$local_flag" ]
then
repo_xcat_core_path=$local_repo_path"/xcat-core"
repo_xcat_dep_path=$local_repo_path"/xcat-dep"
else
printf "Usage: %s -c <core_trunk_path> [-d <dep_trunk_path>] -l <local-repo_path> [-a]\n" $(basename $0) >&2
echo "-a Automatic: update only if there's any update on repo"
exit 2
fi
if [ "$a_flag" ]
then
touch svcupdate.trace
SVCUP='svcupdate.trace'
svn update $xcat_core_path 1> $SVCUP 2>&1
if ! grep 'Tree is up to date' $SVCUP
then
update_core=1
else
update_core=
fi
rm -f $SVCUP
else
update_core=1
fi
if [ "$c_flag" -a "$update_core" ]
then
echo "###############################"
echo "# Building xcat-core packages #"
echo "###############################"
CMD_PATH=`pwd`
cd $xcat_core_path
./build-debs-all "snap" "Nightly_Builds"
echo "#################################"
echo "# Creating xcat-core repository #"
echo "#################################"
if [ -d $repo_xcat_core_path ]; then
rm -rf $repo_xcat_core_path
fi
mkdir -p $repo_xcat_core_path/conf
find . -iname '*.deb' -exec mv {} $repo_xcat_core_path \;
rm -rf debs/
cd $CMD_PATH
rm -rf $repo_xcat_core_path/conf/distributions
for dist in $dists; do
cat << __EOF__ >> $repo_xcat_core_path/conf/distributions
Origin: xCAT internal repository
Label: xcat-core bazaar repository
Codename: $dist
Architectures: amd64
Components: main
Description: Repository automatically genereted conf
__EOF__
done
cat << __EOF__ > $repo_xcat_core_path/conf/options
verbose
basedir .
__EOF__
for dist in $dists; do
for file in `ls $repo_xcat_core_path/*.deb`; do
reprepro -b $repo_xcat_core_path includedeb $dist $file;
done
done
mv $xcat_core_path/latest_version $repo_xcat_core_path/xcat-core_latest-build
cat << '__EOF__' > $repo_xcat_core_path/mklocalrepo.sh
codename=`lsb_release -a 2>null | grep Codename | awk '{print $2}'`
cd `dirname $0`
echo deb file://"`pwd`" $codename main > /etc/apt/sources.list.d/xcat-core.list
__EOF__
chmod 775 $repo_xcat_core_path/mklocalrepo.sh
rm -rf $repo_xcat_core_path/*.deb
if [ -z "$local_flag" ]
then
echo "###############################"
echo "# Updating GSA xcat-core repo #"
echo "###############################"
lftp -e "mirror -R --delete-first $repo_xcat_core_path /projects/i/ipl-xcat/ubuntu/; exit;" -u $gsa_id,$gsa_passwd -p 22 sftp://ausgsa.ibm.com
fi ### if [ -z "$local_flag" ]
fi ### if [ "$a_flag" ]
if [ "$a_flag" -a "$d_flag" ]
then
touch svcupdate.trace
SVCUP='svcupdate.trace'
svn update $xcat_dep_path 1> $SVCUP 2>&1
if ! grep 'Tree is up to date' $SVCUP
then
update_dep=1
else
update_dep=
fi
rm -f $SVCUP
else
update_dep=1
fi
if [ "$d_flag" -a "$update_dep" ]
then
echo "##############################"
echo "# Building xcat-dep packages #"
echo "##############################"
CMD_PATH=`pwd`
cd $xcat_dep_path
./build-debs-all "snap" "Nightly_Builds"
echo "################################"
echo "# Creating xcat-dep repository #"
echo "################################"
rm -rf $repo_xcat_dep_path
mkdir -p $repo_xcat_dep_path/conf
find $xcat_dep_path -iname '*.deb' -exec cp {} $repo_xcat_dep_path \;
rm -rf $repo_xcat_core_path/conf/distributions
for dist in $dists; do
cat << __EOF__ >> $repo_xcat_dep_path/conf/distributions
Origin: xCAT internal repository
Label: xcat-dep bazaar repository
Codename: $dist
Architectures: amd64
Components: main
Description: Repository automatically genereted conf
__EOF__
done
cat << __EOF__ > $repo_xcat_dep_path/conf/options
verbose
basedir .
__EOF__
for dist in $dists; do
for file in `ls $repo_xcat_dep_path/*.deb`; do
reprepro -b $repo_xcat_dep_path includedeb $dist $file;
done
done
cat << '__EOF__' > $repo_xcat_dep_path/mklocalrepo.sh
codename=`lsb_release -a 2>null | grep Codename | awk '{print $2}'`
cd `dirname $0`
echo deb file://"`pwd`" $codename main > /etc/apt/sources.list.d/xcat-dep.list
__EOF__
chmod 775 $repo_xcat_dep_path/mklocalrepo.sh
rm -rf $repo_xcat_dep_path/*.deb
if [ -z "$local_flag" ]
then
echo "##############################"
echo "# Updating GSA xcat-dep repo #"
echo "##############################"
lftp -e "mirror -R --delete-first $repo_xcat_dep_path /projects/i/ipl-xcat/ubuntu/; exit;" -u $gsa_id,$gsa_passwd -p 22 sftp://ausgsa.ibm.com
fi ### if [ -z "$local_flag" ]
fi ### if [ "$d_flag" -a "$a_flag"]
if [ -z "$local_flag" ] # delete the temp repo after upload is done
then
rm -rf ./gsa-repo_temp
fi
exit 0

View File

@ -53,7 +53,7 @@ for i in $*; do
done
# Supported distributions
dists="maverick natty oneiric precise"
dists="maverick natty oneiric precise saucy"
c_flag= # xcat-core (trunk-delvel) path
d_flag= # xcat-dep (trunk) path
@ -194,7 +194,7 @@ then
if [ ! -d ../../$package_dir_name ];then
mkdir -p "../../$package_dir_name"
fi
packages="xCAT-client xCAT-genesis-scripts perl-xCAT xCAT-server xCAT-UI xCAT xCATsn xCAT-test xCAT-OpenStack"
packages="xCAT-client xCAT-genesis-scripts perl-xCAT xCAT-server xCAT-UI xCAT xCATsn xCAT-test xCAT-OpenStack xCAT-OpenStack-baremetal"
for file in `echo $packages`
do
@ -276,7 +276,7 @@ __EOF__
done
#create the mklocalrepo script
cat << __EOF__ > mklocalrepo.sh
cat << '__EOF__' > mklocalrepo.sh
. /etc/lsb-release
cd `dirname $0`
echo deb file://"`pwd`" $DISTRIB_CODENAME main > /etc/apt/sources.list.d/xcat-core.list
@ -413,6 +413,10 @@ __EOF__
while [ $((i+=1)) -le 5 ] && ! rsync -urLv --delete xcat-dep ${uploader},xcat@web.sourceforge.net:${sf_dir}/ubuntu/
do : ; done
#upload the tarball
i=0
echo "Uploading $dep_tar_name to ${sf_dir}/xcat-dep/2.x_Ubuntu/ ..."
while [ $((i+=1)) -le 5 ] && ! rsync -v $dep_tar_name ${uploader},xcat@web.sourceforge.net:${sf_dir}/xcat-dep/2.x_Ubuntu/
do : ; done
cd $old_pwd
fi
exit 0

View File

@ -41,7 +41,7 @@ UPLOADUSER=bp-sawyers
FRS=/home/frs/project/x/xc/xcat
# These are the rpms that should be built for each kind of xcat build
ALLBUILD="perl-xCAT xCAT-client xCAT-server xCAT-IBMhpc xCAT-rmc xCAT-UI xCAT-test xCAT-buildkit xCAT xCATsn xCAT-genesis-scripts xCAT-OpenStack"
ALLBUILD="perl-xCAT xCAT-client xCAT-server xCAT-IBMhpc xCAT-rmc xCAT-UI xCAT-test xCAT-buildkit xCAT xCATsn xCAT-genesis-scripts xCAT-OpenStack xCAT-SoftLayer xCAT-OpenStack-baremetal"
ZVMBUILD="perl-xCAT xCAT-server xCAT-UI"
ZVMLINK="xCAT-client xCAT xCATsn"
PCMBUILD="xCAT"
@ -237,7 +237,7 @@ if [ "$OSNAME" = "AIX" ]; then
fi
# Build the rest of the noarch rpms
for rpmname in xCAT-client xCAT-server xCAT-IBMhpc xCAT-rmc xCAT-UI xCAT-test xCAT-buildkit; do
for rpmname in xCAT-client xCAT-server xCAT-IBMhpc xCAT-rmc xCAT-UI xCAT-test xCAT-buildkit xCAT-SoftLayer; do
#if [ "$EMBED" = "zvm" -a "$rpmname" != "xCAT-server" -a "$rpmname" != "xCAT-UI" ]; then continue; fi # for zvm embedded env only need to build server and UI
if [[ " $EMBEDBUILD " != *\ $rpmname\ * ]]; then continue; fi
if [ "$OSNAME" = "AIX" -a "$rpmname" = "xCAT-buildkit" ]; then continue; fi # do not build xCAT-buildkit on aix
@ -272,19 +272,19 @@ if [ "$OSNAME" != "AIX" ]; then
fi
# Build the xCAT and xCATsn rpms for all platforms
for rpmname in xCAT xCATsn xCAT-OpenStack; do
for rpmname in xCAT xCATsn xCAT-OpenStack xCAT-OpenStack-baremetal; do
#if [ "$EMBED" = "zvm" ]; then break; fi
if [[ " $EMBEDBUILD " != *\ $rpmname\ * ]]; then continue; fi
if [ $SOMETHINGCHANGED == 1 -o "$BUILDALL" == 1 ]; then # used to be: if $GREP -E "^[UAD] +$rpmname/" $GITUP; then
UPLOAD=1
ORIGFAILEDRPMS="$FAILEDRPMS"
if [ "$OSNAME" = "AIX" ]; then
if [ "$rpmname" = "xCAT-OpenStack" ]; then continue; fi # do not bld openstack on aix
if [ "$rpmname" = "xCAT-OpenStack" ] || [ "$rpmname" = "xCAT-OpenStack-baremetal" ]; then continue; fi # do not bld openstack on aix
./makerpm $rpmname "$EMBED"
if [ $? -ne 0 ]; then FAILEDRPMS="$FAILEDRPMS $rpmname"; fi
else
for arch in x86_64 ppc64 s390x; do
if [ "$rpmname" = "xCAT-OpenStack" -a "$arch" != "x86_64" ]; then continue; fi # only bld openstack for x86_64 for now
if [ "$rpmname" = "xCAT-OpenStack" -a "$arch" != "x86_64" ] || [ "$rpmname" = "xCAT-OpenStack-baremetal" -a "$arch" != "x86_64" ] ; then continue; fi # only bld openstack for x86_64 for now
./makerpm $rpmname $arch "$EMBED"
if [ $? -ne 0 ]; then FAILEDRPMS="$FAILEDRPMS $rpmname-$arch"; fi
done
@ -496,6 +496,7 @@ if [ "$OSNAME" != "AIX" -a "$REL" = "devel" -a "$PROMOTE" != 1 -a -z "$EMBED" ];
rpm2cpio ../$XCATCORE/xCAT-test-*.$NOARCH.rpm | cpio -id '*.html'
rpm2cpio ../$XCATCORE/xCAT-buildkit-*.$NOARCH.rpm | cpio -id '*.html'
rpm2cpio ../$XCATCORE/xCAT-OpenStack-*.x86_64.rpm | cpio -id '*.html'
rpm2cpio ../$XCATCORE/xCAT-SoftLayer-*.$NOARCH.rpm | cpio -id '*.html'
i=0
while [ $((i+=1)) -le 5 ] && ! rsync $verboseflag -r opt/xcat/share/doc/man1 opt/xcat/share/doc/man3 opt/xcat/share/doc/man5 opt/xcat/share/doc/man7 opt/xcat/share/doc/man8 $UPLOADUSER,xcat@web.sourceforge.net:htdocs/
do : ; done

View File

@ -10,9 +10,6 @@
<packagereq type="required">xCAT-server</packagereq>
<packagereq type="required">xCAT-client</packagereq>
<packagereq type="required">perl-xCAT</packagereq>
<packagereq type="required">xCAT-nbroot-core-x86_64</packagereq>
<packagereq type="required">xCAT-nbroot-core-x86</packagereq>
<packagereq type="optional">xCAT-nbroot-core-ppc64</packagereq>
</packagelist>
</group>
</comps>

View File

@ -53,6 +53,7 @@ function makexcat {
tar -X /tmp/xcat-excludes -cf $RPMROOT/SOURCES/templates.tar templates
gzip -f $RPMROOT/SOURCES/templates.tar
cp xcat.conf $RPMROOT/SOURCES
cp xcat.conf.apach24 $RPMROOT/SOURCES
cp xCATMN $RPMROOT/SOURCES
else # xCATsn
tar -X /tmp/xcat-excludes -cf $RPMROOT/SOURCES/license.tar LICENSE.html
@ -75,7 +76,9 @@ function makexcat {
tar --exclude .svn --exclude upflag -czf $RPMROOT/SOURCES/postscripts.tar.gz postscripts LICENSE.html
tar --exclude .svn -czf $RPMROOT/SOURCES/prescripts.tar.gz prescripts
tar --exclude .svn -czf $RPMROOT/SOURCES/templates.tar.gz templates
tar --exclude .svn -czf $RPMROOT/SOURCES/winpostscripts.tar.gz winpostscripts
cp xcat.conf $RPMROOT/SOURCES
cp xcat.conf.apach24 $RPMROOT/SOURCES
cp xCATMN $RPMROOT/SOURCES
cd - >/dev/null
elif [ "$RPMNAME" = "xCATsn" ]; then

View File

@ -262,6 +262,12 @@ expression B<($1-1)%14+1> will evaluate to B<6>.
See http://www.perl.com/doc/manual/html/pod/perlre.html for information on perl regular expressions.
=head2 Easy Regular Expressions
As of xCAT 2.8.1, you can use a modified version of the regular expression support described in the previous section. You do not need to enter the node information (1st part of the expression), it will be derived from the input nodename. You only need to supply the 2nd part of the expression to determine the value to give the attribute. For examples, see
https://sourceforge.net/apps/mediawiki/xcat/index.php?title=Listing_and_Modifying_the_Database#Easy_Regular_expressions
=head1 OBJECT DEFINITIONS
Because it can get confusing what attributes need to go in what tables, the xCAT database can also

View File

@ -223,6 +223,7 @@ if (ref($request) eq 'HASH') { # the request is an array, not pure XML
SSL_key_file => $keyfile,
SSL_cert_file => $certfile,
SSL_ca_file => $cafile,
SSL_verify_mode => SSL_VERIFY_PEER,
SSL_use_cert => 1,
Timeout => 0,
);
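The hunk above pins SSL_verify_mode explicitly (commit d03dcbe5f3). A minimal, hypothetical client sketch, not xCAT's actual Client.pm code, showing the effect of that option: with SSL_VERIFY_PEER the connection fails unless the server certificate verifies against the given CA file. The host, port and file paths are placeholders.

use strict;
use warnings;
use IO::Socket::SSL;    # default import provides SSL_VERIFY_PEER and $SSL_ERROR

# Connect with explicit peer verification instead of relying on the module default.
my $sock = IO::Socket::SSL->new(
    PeerAddr        => 'localhost:3001',               # assumed xcatd address, illustration only
    SSL_key_file    => '/root/.xcat/client-cred.pem',  # placeholder credential paths
    SSL_cert_file   => '/root/.xcat/client-cred.pem',
    SSL_ca_file     => '/root/.xcat/ca.pem',
    SSL_verify_mode => SSL_VERIFY_PEER,                # verify the server's certificate
    SSL_use_cert    => 1,                              # present our client certificate
) or die "TLS connect failed: $SSL_ERROR\n";
print "connected with verified peer\n";
$sock->close();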

View File

@ -618,7 +618,7 @@ sub getDBtable
{
# need to get info from DB
my $thistable = xCAT::Table->new($table, -create => 1, -autocommit => 0);
my $thistable = xCAT::Table->new($table, -create => 1);
if (!$thistable)
{
return undef;
@ -2747,7 +2747,7 @@ sub collapsenicsattr()
# e.g nicips.eth0
# do not need to handle nic attributes without the postfix .ethx,
# it will be overwritten by the attributes with the postfix .ethx,
if ($nodeattr =~ /^(nic\w+)\.(\w+)$/)
if ($nodeattr =~ /^(nic\w+)\.(.*)$/)
{
if ($1 && $2)
{
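The widened capture in the hunk above relates to bug 3935 (NIC names containing spaces, such as "Local Area Network" on Windows). A small standalone check, separate from the real Table.pm code, showing why \w+ was replaced with .*:

use strict;
use warnings;

# The old pattern rejects a nic attribute whose suffix contains spaces;
# the new pattern accepts it while still capturing the attribute name.
for my $nodeattr ('nicips.eth0', 'nicips.Local Area Network') {
    my $old = ($nodeattr =~ /^(nic\w+)\.(\w+)$/) ? 'match' : 'no match';
    my $new = ($nodeattr =~ /^(nic\w+)\.(.*)$/)  ? 'match' : 'no match';
    print "$nodeattr: old pattern => $old, new pattern => $new\n";
}
# 'nicips.Local Area Network' fails the old pattern and matches the new one.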

View File

@ -67,7 +67,7 @@ sub getHcpAttribs
}
}
my @ps = $tabs->{ppc}->getAllNodeAttribs(['node','parent','nodetype','hcp']);
my @ps = $tabs->{ppc}->getAllNodeAttribs(['node','parent','nodetype','hcp','id']);
for my $entry ( @ps ) {
my $tmp_parent = $entry->{parent};
my $tmp_node = $entry->{node};
@ -79,6 +79,9 @@ sub getHcpAttribs
if (defined($tmp_node) && defined($tmp_type) && ($tmp_type eq "blade") && defined($entry->{hcp})) {
push @{$ppchash{$tmp_node}{children}}, $entry->{hcp};
}
if (defined($tmp_node) && defined($entry->{id}) && defined($tmp_parent) && defined($tmp_type) && ($tmp_type eq "lpar")) {
$ppchash{$tmp_parent}{mapping}{$tmp_node} = $entry->{id};
}
#if(exists($ppchash{$tmp_node})) {
# if( defined($tmp_type) ) {

View File

@ -15,6 +15,7 @@ use xCAT::PPCcli qw(SUCCESS EXPECT_ERROR RC_ERROR NR_ERROR);
use xCAT::Usage;
use xCAT::NodeRange;
use xCAT::FSPUtils;
use xCAT::VMCommon;
#use Data::Dumper;
use xCAT::MsgUtils qw(verbose_message);
##############################################
@ -52,7 +53,7 @@ sub chvm_parse_extra_options {
my $args = shift;
my $opt = shift;
# Partition used attributes #
my @support_ops = qw(vmcpus vmmemory vmphyslots vmothersetting);
my @support_ops = qw(vmcpus vmmemory vmphyslots vmothersetting vmstorage vmnics del_vadapter);
if (ref($args) ne 'ARRAY') {
return "$args";
}
@ -84,6 +85,71 @@ sub chvm_parse_extra_options {
$opt->{bsr} = $1;
}
next;
} elsif ($cmd eq "vmstorage") {
if (exists($opt->{vios})) {
if ($value !~ /\d+/) {
return "'$value' is invalid, must be numbers";
} else {
my @array = ();
for (1..$value) {
push @array, 0;
}
$value = \@array;
}
} else {
if ($value =~ /^([\w_-]*):(\d+)$/) {
$value = ["0,$1:$2"];
} else {
return "'$value' is invalid, must be in form of 'Server_name:slotnum'";
}
}
} elsif ($cmd eq "vmcpus") {
if ($value =~ /^(\d+)\/(\d+)\/(\d+)$/) {
unless ($1 <= $2 and $2 <= $3) {
return "'$value' is invalid, must be in order";
}
} else {
return "'$value' is invalid, must be integer";
}
} elsif ($cmd eq "vmmemory") {
if ($value =~ /^([\d|.]+)([G|M]?)\/([\d|.]+)([G|M]?)\/([\d|.]+)([G|M]?)$/i) {
my ($mmin, $mcur, $mmax);
if ($2 == "G" or $2 == '') {
$mmin = $1 * 1024;
}
if ($4 == "G" or $4 == '') {
$mcur = $3 * 1024;
}
if ($6 == "G" or $6 == '') {
$mmax = $5 * 1024;
}
unless ($mmin <= $mcur and $mcur <= $mmax) {
return "'$value' is invalid, must be in order";
}
} else {
return "'$value' is invalid";
}
} elsif ($cmd eq "vmphyslots") {
my @tmp_array = split ",",$value;
foreach (@tmp_array) {
unless (/(0x\w{8})/) {
return "'$_' is invalid";
}
}
} elsif ($cmd eq "vmothersetting") {
my @tmp_array = split ",", $value;
foreach (@tmp_array) {
unless (/^(bsr|hugepage):\d+$/) {
return "'$_' is invalid";
}
}
} elsif ($cmd eq "vmnics") {
my @tmp_array = split ",", $value;
foreach (@tmp_array) {
unless (/^vlan\d+$/i) {
return "'$_' is invalid";
}
}
}
} else {
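# For reference, value formats accepted by the validation patterns in the
# hunk above look like the following; the names and numbers are illustrative
# only, not taken from a real system:
#   vmcpus=1/2/4                      min/current/max processors, in non-decreasing order
#   vmmemory=1G/2G/4G                 min/current/max memory, with an optional G or M suffix
#   vmphyslots=0x21010201,0x21010202  comma-separated 0x-prefixed slot indexes
#   vmothersetting=bsr:2,hugepage:4   bsr:<count> and/or hugepage:<count>
#   vmnics=vlan1,vlan2                virtual Ethernet adapters named by VLAN id
#   vmstorage=vios1:5                 client vSCSI adapter pointing at slot 5 of server LPAR vios1
#   vmstorage=2 (with the vios flag)  number of server vSCSI adapters to create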
@ -124,7 +190,7 @@ sub chvm_parse_args {
$Getopt::Long::ignorecase = 0;
Getopt::Long::Configure( "bundling" );
if ( !GetOptions( \%opt, qw(V|verbose p=s i=s m=s r=s p775) )) {
if ( !GetOptions( \%opt, qw(V|verbose p=s i=s m=s r=s p775 vios) )) {
return( usage() );
}
####################################
@ -398,7 +464,7 @@ sub mkvm_parse_args {
push @unsupport_ops, $tmpop;
}
}
my @support_ops = qw(vmcpus vmmemory vmphyslots vmothersetting);
my @support_ops = qw(vmcpus vmmemory vmphyslots vmothersetting vmnics vmstorage);
if (defined(@ARGV[0]) and defined($opt{full})) {
return(usage("Option 'full' shall be used alone."));
} elsif (defined(@ARGV[0])) {
@ -711,6 +777,45 @@ sub do_op_extra_cmds {
$action = "part_set_lpar_pending_proc";
} elsif ($op eq "vmphyslots") {
$action = "set_io_slot_owner_uber";
} elsif ($op eq "del_vadapter") {
$action = "part_clear_vslot_config";
} elsif ($op eq "vmnics") {
my @vlans = split /,/,$param;
foreach (@vlans) {
if (/vlan(\d+)/i) {
my $vlanid = $1;
my $mac = lc(xCAT::VMCommon::genMac($name));
if ($mac =~ /(..):(..):(..):(..):(..):(..)/) {
my $tail = hex($6)+$vlanid;
$mac = sprintf("$1$2$3$4$5%02x",$tail);
}
my $value = xCAT::FSPUtils::fsp_api_action($request,$name, $d, "part_set_veth_slot_config",0,"0,$vlanid,$mac");
if (@$value[1] && ((@$value[1] =~ /Error/i) && (@$value[2] ne '0'))) {
return ([[$name, @$value[1], '1']]) ;
} else {
push @values, [$name, "Success", '0'];
}
}
}
next;
} elsif ($op eq "vmstorage") {
foreach my $v_info (@$param) {
if ($v_info =~ /(\d+),([\w_-]*):(\d+)/) {
my $vios = &find_lpar_id($request, @$d[3], $2);
my $r_slotid = $3;
if (!defined($vios)) {
return ([[$name, "Cannot find lparid for Server lpar:$1", '1']]);
}
$v_info = "$1,$vios,$r_slotid";
}
my $value = xCAT::FSPUtils::fsp_api_action($request,$name, $d, "part_set_vscsi_slot_config",0,$v_info);
if (@$value[1] && ((@$value[1] =~ /Error/i) && (@$value[2] ne '0'))) {
return ([[$name, @$value[1], '1']]) ;
} else {
push @values, [$name, "Success", '0'];
}
}
next;
} elsif ($op eq "vmmemory") {
my @td = @$d;
@td[0] = 0;
@ -750,6 +855,9 @@ sub do_op_extra_cmds {
$action = "part_set_lpar_pending_mem";
} elsif ($op eq "bsr") {
$action = "set_lpar_bsr";
} elsif ($op eq "vios") {
print __LINE__."=========>op=vios===\n";
next;
} else {
last;
}
@ -1638,6 +1746,34 @@ sub query_cec_info_actions {
#$data .= "\n";
next;
}
if ($action eq "part_get_all_vio_info") {
my @output = split /\n/, @$values[1];
my ($drc_index,$drc_name);
foreach my $line (@output) {
chomp($line);
if ($line =~ /Index:.*drc_index:([^,]*),\s*drc_name:(.*)$/) {
$drc_index = $1;
$drc_name = $2;
next;
} elsif ($line =~ /\s*lpar_id=(\d+),type=(vSCSI|vSerial),slot=(\d+),attr=(\d+).*remote_lpar_id=(0x\w+),remote_slot_num=(0x\w+)/) {
if ($4 eq '0') {
push @array, [$name, "$1,$3,$drc_name,$drc_index,$2 Client(Server_lparid=$5,Server_slotid=$6)", 0];
} else {
push @array, [$name, "$1,$3,$drc_name,$drc_index,$2 Server", 0];
}
} elsif ($line =~ /\s*lpar_id=(\d+),type=(vEth),slot=(\d+).*port_vlan_id=(\d+),mac_addr=(\w+)/) {
push @array, [$name, "$1,$3,$drc_name,$drc_index,$2 (port_vlanid=$4,mac_addr=$5)", 0];
#} elsif ($line =~ /\s*lpar_id=(\d+),type=(\w+),slot=(\d+)/) {
# push @array, [$name, "$1,$3,$drc_name,$drc_index,$2", 0];
#} else {
#print "=====>line:$line\n";
#push @array, [$name, $line, 0];
}
$drc_index = '';
$drc_name = '';
}
next;
}
}
#$data .= "@$values[1]\n\n";
push @array, [$name, @$values[1], @$values[2]];
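As an illustration of the new part_get_all_vio_info handling above (the sample line is fabricated to match the pattern in the code, not captured from real hardware), an input line such as

  lpar_id=2,type=vEth,slot=4,is_req=1,port_vlan_id=1,mac_addr=42E8A3000004

would be reported back as

  <node>: 2,4,<drc_name>,<drc_index>,vEth (port_vlanid=1,mac_addr=42E8A3000004)

where drc_name and drc_index are carried over from the preceding "Index: ... drc_index:..., drc_name:..." line.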
@ -1660,14 +1796,16 @@ sub query_cec_info {
my $args = $request->{opt};
my @td = ();
my @result = ();
#print Dumper($request);
#print Dumper($hash);
while (my ($mtms,$h) = each(%$hash) ) {
while (my ($name, $d) = each (%$h)) {
@td = @$d;
if (@$d[0] == 0 && @$d[4] ne "lpar") {
if (@$d[0] == 0 && @$d[4] !~ /lpar|vios/) {
last;
}
#my $rethash = query_cec_info_actions($request, $name, $d, 0, ["part_get_lpar_processing","part_get_lpar_memory","part_get_all_vio_info","lpar_lhea_mac","part_get_all_io_bus_info","get_huge_page","get_cec_bsr"]);
my $rethash = query_cec_info_actions($request, $name, $d, 0, ["part_get_lpar_processing","part_get_lpar_memory","part_get_all_io_bus_info","get_huge_page","get_cec_bsr"]);
my $rethash = query_cec_info_actions($request, $name, $d, 0, ["part_get_lpar_processing","part_get_lpar_memory","part_get_all_io_bus_info","part_get_all_vio_info","get_huge_page","get_cec_bsr"]);
#push @result, [$name, $rethash, 0];
push @result, @$rethash;
}
@ -1766,9 +1904,10 @@ sub deal_with_avail_mem {
} else {
$cur_avail = $lparhash->{hyp_avail_mem} + $used_regions - $tmphash{lpar0_used_mem};
}
xCAT::MsgUtils->verbose_message($request, "====****====used:$used_regions,avail:$cur_avail,($min:$cur:$max).");
#xCAT::MsgUtils->verbose_message($request, "====****====used:$used_regions,avail:$cur_avail,($min:$cur:$max).");
if ($cur_avail < $min) {
return([$name, "Parse reserverd regions failed, no enough memory, available:$lparhash->{hyp_avail_mem}.", 1]);
my $cur_mem_in_G = $lparhash->{hyp_avail_mem} * $lparhash->{mem_region_size} * 1.0 / 1024;
return([$name, "Parse reserverd regions failed, no enough memory, available:$cur_mem_in_G GB.", 1]);
}
if ($cur > $cur_avail) {
my $new_cur = $cur_avail;
@ -1781,6 +1920,17 @@ sub deal_with_avail_mem {
return 0;
}
sub find_lpar_id {
my $request = shift;
my $parent = shift;
my $name = shift;
my %mapping = %{$request->{ppc}->{$parent}->{mapping}};
if (exists($mapping{$name})) {
return $mapping{$name};
}
return undef;
}
sub create_lpar {
my $request = shift;
my $name = shift;
@ -1804,12 +1954,42 @@ sub create_lpar {
xCAT::FSPUtils::fsp_api_action($request, $name, $d, "part_set_lpar_group_id");
xCAT::FSPUtils::fsp_api_action($request, $name, $d, "part_set_lpar_avail_priority");
#print "======>physlots:$lparhash->{physlots}.\n";
$values = xCAT::FSPUtils::fsp_api_action($request, $name, $d, "set_io_slot_owner_uber", 0, $lparhash->{physlots});
#$values = xCAT::FSPUtils::fsp_api_action($request, $name, $d, "set_io_slot_owner", 0, join(",",@phy_io_array));
if (@$values[2] ne 0) {
&set_lpar_undefined($request, $name, $d);
return ([$name, @$values[1], @$values[2]]);
if (exists($lparhash->{physlots})) {
$values = xCAT::FSPUtils::fsp_api_action($request, $name, $d, "set_io_slot_owner_uber", 0, $lparhash->{physlots});
#$values = xCAT::FSPUtils::fsp_api_action($request, $name, $d, "set_io_slot_owner", 0, join(",",@phy_io_array));
if (@$values[2] ne 0) {
&set_lpar_undefined($request, $name, $d);
return ([$name, @$values[1], @$values[2]]);
}
}
if (exists($lparhash->{nics})) {
my @vlans = split /,/,$lparhash->{nics};
foreach (@vlans) {
if (/vlan(\d+)/i) {
my $vlanid = $1;
my $mac = lc(xCAT::VMCommon::genMac($name));
if ($mac =~ /(..):(..):(..):(..):(..):(..)/) {
my $tail = hex($6)+$vlanid;
$mac = sprintf("$1$2$3$4$5%02x",$tail);
}
$values = xCAT::FSPUtils::fsp_api_action($request,$name, $d, "part_set_veth_slot_config",0,"0,$vlanid,$mac");
if (@$values[2] ne 0) {
&set_lpar_undefined($request, $name, $d);
return ([$name, @$values[1], @$values[2]]);
}
}
}
}
if (exists($lparhash->{storage})) {
foreach my $v_info (@{$lparhash->{storage}}) {
$values = xCAT::FSPUtils::fsp_api_action($request,$name, $d, "part_set_vscsi_slot_config",0,$v_info);
if (@$values[2] ne 0) {
&set_lpar_undefined($request, $name, $d);
return ([$name, @$values[1], @$values[2]]);
}
}
}
# ====== ====== #
if (exists($lparhash->{phy_hea})) {
my $phy_hash = $lparhash->{phy_hea};
foreach my $phy_drc (keys %$phy_hash) {
@ -1850,6 +2030,7 @@ sub create_lpar {
&set_lpar_undefined($request, $name, $d);
return ([$name, @$values[1], @$values[2]]);
}
xCAT::FSPUtils::fsp_api_action($request, $name, $d, "part_set_lpar_comp_modes");
#print "======>memory:$lparhash->{huge_page}.\n";
xCAT::FSPUtils::fsp_api_action($request, $name, $d, "set_huge_page", 0, $lparhash->{huge_page});
@ -1880,7 +2061,7 @@ sub mkspeclpar {
while (my ($mtms, $h) = each (%$hash)) {
my $memhash;
my @nodes = keys(%$h);
my $ent = $vmtab->getNodesAttribs(\@nodes, ['cpus', 'memory','physlots', 'othersettings']);
my $ent = $vmtab->getNodesAttribs(\@nodes, ['cpus', 'memory','physlots', 'othersettings', 'storage', 'nics']);
while (my ($name, $d) = each (%$h)) {
if (@$d[4] ne 'lpar') {
push @result, [$name, "Node must be LPAR", 1];
@ -1889,7 +2070,7 @@ sub mkspeclpar {
if (!exists($memhash->{run})) {
my @td = @$d;
@td[0] = 0;
$memhash = &query_cec_info_actions($request, $name, \@td, 1, ["part_get_hyp_process_and_mem","lpar_lhea_mac"]);
$memhash = &query_cec_info_actions($request, $name, \@td, 1, ["part_get_hyp_process_and_mem","lpar_lhea_mac","part_get_all_io_bus_info"]);
$memhash->{run} = 1;
}
my $tmp_ent = $ent->{$name}->[0];
@ -1902,45 +2083,130 @@ sub mkspeclpar {
if (exists($opt->{vmphyslots})) {
$tmp_ent->{physlots} = $opt->{vmphyslots};
}
if (exists($opt->{vmothersetting})) {
$tmp_ent->{othersettings} = $opt->{vmothersetting};
}
if (exists($opt->{vmstorage})) {
$tmp_ent->{storage} = $opt->{vmstorage};
}
if (exists($opt->{vmnics})) {
$tmp_ent->{nics} = $opt->{vmnics};
}
if (!defined($tmp_ent) ) {
return ([[$name, "Not find params", 1]]);
} elsif (!exists($tmp_ent->{cpus}) || !exists($tmp_ent->{memory}) || !exists($tmp_ent->{physlots})) {
return ([[$name, "The attribute 'vmcpus', 'vmmemory' and 'vmphyslots' are all needed to be specified.", 1]]);
#} elsif (!exists($tmp_ent->{cpus}) || !exists($tmp_ent->{memory}) || !exists($tmp_ent->{physlots})) {
} elsif (!exists($tmp_ent->{cpus}) || !exists($tmp_ent->{memory})) {
return ([[$name, "The attribute 'vmcpus', 'vmmemory' are needed to be specified.", 1]]);
}
if ($tmp_ent->{memory} =~ /(\d+)([G|M]?)\/(\d+)([G|M]?)\/(\d+)([G|M]?)/i) {
my $memsize = $memhash->{mem_region_size};
my $min = $1;
# FIX bug 3873 [FVT]DFM illegal action could work
#
if ($tmp_ent->{cpus} =~ /^(\d+)\/(\d+)\/(\d+)$/) {
unless ($1 <= $2 and $2 <= $3) {
return([[$name, "Parameter for 'vmcpus' is invalid", 1]]);
}
} else {
return([[$name, "Parameter for 'vmcpus' is invalid", 1]]);
}
if ($tmp_ent->{memory} =~ /^([\d|.]+)([G|M]?)\/([\d|.]+)([G|M]?)\/([\d|.]+)([G|M]?)$/i) {
my ($mmin, $mcur, $mmax);
if ($2 == "G" or $2 == '') {
$min = $min * 1024;
}
$min = $min/$memsize;
my $cur = $3;
$mmin = $1 * 1024;
}
if ($4 == "G" or $4 == '') {
$cur = $cur * 1024;
$mcur = $3 * 1024;
}
$cur = $cur/$memsize;
my $max = $5;
if ($6 == "G" or $6 == '') {
$max = $max * 1024;
$mmax = $5 * 1024;
}
$max = $max/$memsize;
$tmp_ent->{memory} = "$min/$cur/$max";
unless ($mmin <= $mcur and $mcur <= $mmax) {
return([[$name, "Parameter for 'vmmemory' is invalid", 1]]);
}
my $memsize = $memhash->{mem_region_size};
$mmin = ($mmin + $memsize) / $memsize;
$mcur = ($mcur + $memsize) / $memsize;
$mmax = ($mmax + $memsize) / $memsize;
$tmp_ent->{memory} = "$mmin/$mcur/$mmax";
$tmp_ent->{mem_region_size} = $memsize;
} else {
return([[$name, "Parameter for 'vmmemory' is invalid", 1]]);
}
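A minimal standalone sketch (not the xCAT code above; the 256 MB region size is an invented example) of how a 'min/cur/max' vmmemory value such as "1G/2G/4G" is normalized to memory regions, using the same ordering check and rounding as this hunk:

use strict;
use warnings;

sub mem_spec_to_regions {
    my ($spec, $region_mb) = @_;
    # capture min/cur/max and their optional G or M units
    my @parts = $spec =~ /^([\d.]+)([GM]?)\/([\d.]+)([GM]?)\/([\d.]+)([GM]?)$/i
        or return;                                           # invalid format
    my @mb;
    while (@parts) {
        my ($num, $unit) = (shift @parts, shift @parts);
        push @mb, (uc($unit) eq 'M') ? $num : $num * 1024;   # no unit means GB
    }
    my ($min, $cur, $max) = @mb;
    return unless $min <= $cur && $cur <= $max;              # require min <= cur <= max
    return join('/', map { int(($_ + $region_mb) / $region_mb) } @mb);
}

print mem_spec_to_regions("1G/2G/4G", 256), "\n";            # prints 5/9/17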
if (exists($tmp_ent->{physlots})) {
my @tmp_array = split ",",$tmp_ent->{physlots};
foreach (@tmp_array) {
unless (/(0x\w{8})/i) {
return([[$name, "Parameter:$_ for 'vmphyslots' is invalid", 1]]);
}
}
}
if (exists($tmp_ent->{othersettings})) {
my @tmp_array = split ",",$tmp_ent->{othersettings};
foreach (@tmp_array) {
unless (/^(bsr|hugepage):\d+$/) {
return([[$name, "Parameter:$_ for 'vmothersetting' is invalid", 1]]);
}
}
}
if (exists($tmp_ent->{nics})) {
my @tmp_array = split ",",$tmp_ent->{nics};
foreach (@tmp_array) {
unless (/^vlan\d+$/i) {
return([[$name, "Parameter:$_ for 'vmnics' is invalid", 1]]);
}
}
}
if (exists($opt->{vios})) {
if (!exists($tmp_ent->{physlots})) {
my @phy_io_array = keys(%{$memhash->{bus}});
$tmp_ent->{physlots} = join(",", @phy_io_array);
}
if (exists($tmp_ent->{storage}) and $tmp_ent->{storage} !~ /^\d+$/) {
return ([[$name, "Parameter for 'vmstorage' is invalid", 1]]);
} elsif (exists($tmp_ent->{storage})) {
my $num = $tmp_ent->{storage};
my @array = ();
for (1..$num) {
push @array, '0';
}
$tmp_ent->{storage} = \@array;
}
} else {
if (exists($tmp_ent->{storage}) and $tmp_ent->{storage} !~ /^[\w_-]*:\d+$/) {
return ([[$name, "Parameter for 'vmstorage' is invalid", 1]]);
} elsif (exists($tmp_ent->{storage})) {
if ($tmp_ent->{storage} =~ /([\w_-]*):(\d+)/) {
my $vios = &find_lpar_id($request, @$d[3], $1);
my $r_slotid = $2;
if (!defined($vios)) {
return ([[$name, "Cannot find lparid for Server lpar:$1"]]);
}
$tmp_ent->{storage} = ["0,$vios,$r_slotid"];
}
}
}
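For reference, an illustrative sketch of the two vmstorage forms handled above: a plain adapter count for a VIOS, or "<server_lpar>:<remote_slot>" for a client LPAR. The lpar-id lookup is faked here; xCAT resolves it with find_lpar_id().

use strict;
use warnings;

my %fake_lpar_ids = (vios1 => 2);    # assumption: a VIOS named vios1 with lpar id 2

sub expand_storage {
    my ($value, $is_vios) = @_;
    if ($is_vios) {
        return unless $value =~ /^\d+$/;
        return [ ('0') x $value ];                 # one default slot config per adapter
    }
    if ($value =~ /^([\w_-]+):(\d+)$/) {
        my $vios_id = $fake_lpar_ids{$1};
        return unless defined $vios_id;
        return [ "0,$vios_id,$2" ];                # "0,<vios lpar id>,<remote slot>"
    }
    return;
}

# expand_storage(3, 1)         -> ['0', '0', '0']
# expand_storage('vios1:5', 0) -> ['0,2,5']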
$tmp_ent->{hyp_config_mem} = $memhash->{hyp_config_mem};
$tmp_ent->{hyp_avail_mem} = $memhash->{hyp_avail_mem};
$tmp_ent->{huge_page} = "0/0/0";
$tmp_ent->{bsr_num} = "0";
if (exists($tmp_ent->{othersettings})) {
my $setting = $tmp_ent->{othersettings};
if ($setting =~ /hugepage:(\d+)/) {
my $tmp = $1;
$tmp_ent->{huge_page} = "1/".$tmp."/".$tmp;
if ($tmp >= 1) {
$tmp_ent->{huge_page} = "1/".$tmp."/".$tmp;
}
}
if ($setting =~ /bsr:(\d+)/) {
$tmp_ent->{bsr_num} = $1;
if ($1 >= 1) {
$tmp_ent->{bsr_num} = $1;
}
}
}
$tmp_ent->{phy_hea} = $memhash->{phy_drc_group_port};


@ -56,6 +56,7 @@ $::STATUS_SHELL="shell";
$::STATUS_DEFINED="defined";
$::STATUS_UNKNOWN="unknown";
$::STATUS_FAILED="failed";
$::STATUS_BMCREADY="bmcready";
%::VALID_STATUS_VALUES = (
$::STATUS_ACTIVE=>1,
$::STATUS_INACTIVE=>1,
@ -72,6 +73,7 @@ $::STATUS_FAILED="failed";
$::STATUS_DEFINED=>1,
$::STATUS_UNKNOWN=>1,
$::STATUS_FAILED=>1,
$::STATUS_BMCREADY=>1,
$::STATUS_SYNCING=>1,
$::STATUS_OUT_OF_SYNC=>1,


@ -1353,6 +1353,16 @@ sub dolitesetup
return 1;
}
# also copy $instrootloc/.statelite contents
$ccmd = "/usr/bin/cp -p -r $instrootloc/.statelite $SRloc";
$out = xCAT::Utils->runcmd("$ccmd", -1);
if ($::RUNCMD_RC != 0)
{
my $rsp;
push @{$rsp->{data}}, "Could not copy $instrootloc/.statelite to $SRloc.";
xCAT::MsgUtils->message("E", $rsp, $callback);
return 1;
}
}
}
}


@ -827,7 +827,7 @@ sub get_mac_addr {
$done[0] = 0;
$cmd[0] = "\" local-mac-address\" ". $phandle . " get-package-property\r";
$msg[0] = "Status: return code and mac-address now on stack\n";
$pattern[0] = "ok";#"\s*3 >";
$pattern[0] = "local-mac-address.*ok";#"\s*3 >";
$newstate[0] = 1;
# cmd(1) is a dot (.). This is a stack manipulation command that removes one
@ -1231,8 +1231,8 @@ sub ping_server{
$done[2] = 0;
$cmd[2] = "dev /packages/net\r";
$msg[2] = "Status: selected the /packages/net node as the active package\n";
#$pattern[2] = ".*dev(.*)ok(.*)0 >(.*)";
$pattern[2] = "ok";
$pattern[2] = ".*dev.*packages.*net(.*)ok(.*)0 >(.*)";
#$pattern[2] = "ok";
$newstate[2]= 3;
# state 3, ping the server
@ -1266,6 +1266,7 @@ sub ping_server{
# state 5, all done
$done[5] = 1;
# for ping, only need to set speed and duplex for ethernet adapters
#
if ( $list_type eq "ent" ) {
@ -1323,8 +1324,10 @@ sub ping_server{
$timeout = 300;
while ( $done[$state] eq 0 ) {
send_command($verbose, $rconsole, $cmd[$state]);
@result = $rconsole->expect(
$timeout,
[qr/$pattern[$state]/s=>
sub {
@ -1362,7 +1365,9 @@ sub ping_server{
}
],
);
return 1 if ($rc eq 1);
return 1 if ($rc eq 1);
if ( $state eq 1 ) {
$adap_conn = $adap_conn_list[$j];
$cmd[1] = "\" ethernet,$adap_speed,$adap_conn,$adap_duplex\" encode-string \" chosen-network-type\" property\r";
@ -2050,14 +2055,46 @@ sub multiple_open_dev {
; \r";
send_command($verbose, $rconsole, $command);
$command = "patch new-open-dev open-dev net-ping \r";
send_command($verbose, $rconsole, $command);
$timeout = 30;
$rconsole->expect(
$timeout,
#[qr/patch new-open-dev(.*)>/=>
[qr/>/=>
[qr/new-open-dev(.*)ok/=>
#[qr/>/=>
sub {
nc_msg($verbose, "Status: at End of multiple_open_dev \n");
$rconsole->clear_accum();
}
],
[qr/]/=>
sub {
nc_msg($verbose, "Unexpected prompt\n");
$rconsole->clear_accum();
$rc = 1;
}
],
[timeout =>
sub {
send_user(2, "Timeout\n");
$rconsole->clear_accum();
$rc = 1;
}
],
[eof =>
sub {
send_user(2, "Cannot connect to $node\n");
$rconsole->clear_accum();
$rc = 1;
}
],
);
$command = "patch new-open-dev open-dev net-ping \r";
send_command($verbose, $rconsole, $command);
$rconsole->expect(
$timeout,
[qr/patch new-open-dev(.*)ok/=>
#[qr/>/=>
sub {
nc_msg($verbose, "Status: at End of multiple_open_dev \n");
$rconsole->clear_accum();
@ -2086,6 +2123,7 @@ sub multiple_open_dev {
}
],
);
return $rc;
}
###################################################################
@ -2569,7 +2607,7 @@ sub lparnetbootexp
####################################
nc_msg($verbose, "Connecting to the $node.\n");
sleep 3;
$timeout = 2;
$timeout = 10;
$rconsole->expect(
$timeout,
[ qr/Enter.* for help.*/i =>
@ -2778,6 +2816,8 @@ sub lparnetbootexp
$done = 0;
$retry_count = 0;
$timeout = 10;
while (!$done) {
my @result = $rconsole->expect(
$timeout,
@ -2885,6 +2925,7 @@ sub lparnetbootexp
}
}
##############################
# Call multiple_open_dev to
# circumvent firmware OPEN-DEV
@ -2919,6 +2960,7 @@ sub lparnetbootexp
$match_pat = ".*";
}
if($colon) {
nc_msg($verbose, "#Type:Location_Code:MAC_Address:Full_Path_Name:Ping_Result:Device_Type:Size_MB:OS:OS_Version:\n");
$outputarrayindex++; # start from 1, 0 is used to set as 0
@ -2972,7 +3014,7 @@ sub lparnetbootexp
} else {
for( $i = 0; $i < $adapter_found; $i++) {
if ($adap_type[$i] =~ /$match_pat/) {
if ($adap_type[$i] eq "hfi-ent") {
if (!($adap_type[$i] eq "hfi-ent")) {
$mac_address = get_mac_addr($phandle_array[$i], $rconsole, $node, $verbose);
$loc_code = get_adaptr_loc($phandle_array[$i], $rconsole, $node, $verbose);
}


@ -149,7 +149,7 @@ sub nodesbycriteria {
}
if ($neednewcache) {
if ($nodelist) {
$nodelist->_clear_cache();
#$nodelist->_clear_cache();
$nodelist->_build_cache(\@cachedcolumns);
}
}


@ -265,6 +265,7 @@ sub rackformat_to_numricformat{
values are attributes of a specific nic, like:
type : nic type
hostnamesuffix: hostname suffix
hostnameprefix: hostname prefix
customscript: custom script for this nic
network: network name for this nic
ip: ip address of this nic.
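For reference, a sketch of the hash that get_nodes_nic_attrs builds from these table columns (the node name, nic name, and values are invented examples):

# %nicsattrs = (
#     node01 => {
#         eth0 => {
#             type           => 'Ethernet',
#             hostnamesuffix => '-eth0',
#             hostnameprefix => 'eth0-',
#             customscript   => 'configeth eth0',
#             network        => 'clusternet',
#             ip             => '10.1.0.10',
#         },
#     },
# );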
@ -276,7 +277,7 @@ sub get_nodes_nic_attrs{
my $nodes = shift;
my $nicstab = xCAT::Table->new( 'nics');
my $entry = $nicstab->getNodesAttribs($nodes, ['nictypes', 'nichostnamesuffixes', 'niccustomscripts', 'nicnetworks', 'nicips']);
my $entry = $nicstab->getNodesAttribs($nodes, ['nictypes', 'nichostnamesuffixes', 'nichostnameprefixes', 'niccustomscripts', 'nicnetworks', 'nicips']);
my %nicsattrs;
my @nicattrslist;
@ -308,6 +309,20 @@ sub get_nodes_nic_attrs{
}
}
if($entry->{$node}->[0]->{'nichostnameprefixes'}){
@nicattrslist = split(",", $entry->{$node}->[0]->{'nichostnameprefixes'});
foreach (@nicattrslist){
my @nicattrs;
if ($_ =~ /!/) {
@nicattrs = split("!", $_);
} else {
@nicattrs = split(":", $_);
}
$nicsattrs{$node}{$nicattrs[0]}{'hostnameprefix'} = $nicattrs[1];
}
}
if($entry->{$node}->[0]->{'niccustomscripts'}){
@nicattrslist = split(",", $entry->{$node}->[0]->{'niccustomscripts'});
foreach (@nicattrslist){
@ -733,6 +748,27 @@ sub get_imageprofile_prov_method
#-------------------------------------------------------------------------------
=head3 get_imageprofile_prov_osvers
Description : Get a node's provisioning OS version and profile from its imageprofile attribute.
Arguments : $imgprofilename - imageprofile name
Returns : node's osversion and profile
=cut
#-------------------------------------------------------------------------------
sub get_imageprofile_prov_osvers
{
my $class = shift;
my $imgprofilename = shift;
my $osimgtab = xCAT::Table->new('osimage');
my $osimgentry = ($osimgtab->getAllAttribsWhere("imagename = '$imgprofilename'", 'ALL' ))[0];
my $osversion = $osimgentry->{'osvers'};
my $profile = $osimgentry->{'profile'};
return ($osversion, $profile);
}
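A hedged usage sketch on an xCAT management node; the package name (assumed from the class-method style above) and the image profile name are placeholders:

use xCAT::ProfiledNodeUtils;

my ($osvers, $profile) =
    xCAT::ProfiledNodeUtils->get_imageprofile_prov_osvers('rhels6.5-x86_64-install-compute');
print "$osvers $profile\n";    # e.g. "rhels6.5 compute"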
#-------------------------------------------------------------------------------
=head3 check_profile_consistent
Description : Check if the three profiles are consistent
Arguments : $imageprofile - image profile name
@ -1000,6 +1036,40 @@ sub parse_nodeinfo_file
return 1, "";
}
#-------------------------------------------------------
=head3 update_windows_prodkey
Description : Update the prodkey table to support a per-node Windows license key.
Returns: $retcode.
$retcode = 1. Update failed; a required argument was undef.
$retcode = 0. The key was saved to the database.
=cut
#-------------------------------------------------------
sub update_windows_prodkey
{
my $class = shift;
my $node = shift;
my $product = shift;
my $key = shift;
unless(defined($node) && defined($product) && defined($key))
{
return 1;
}
# note: the prodkey table is keyed by node; the product and key are stored per node
my %keyhash;
my %updates;
$keyhash{'node'} = $node;
$updates{'product'} = $product;
$updates{'key'} = $key;
my $tab = xCAT::Table->new('prodkey', -create=>1, -autocommit=>0);
$tab->setAttribs( \%keyhash,\%updates );
$tab->commit;
$tab->close;
return 0;
}
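A hypothetical usage sketch; the package name, node, product, and key values are placeholders:

my $rc = xCAT::ProfiledNodeUtils->update_windows_prodkey(
    'node01', 'win2012r2', 'XXXXX-XXXXX-XXXXX-XXXXX-XXXXX');
# $rc is 0 on success, 1 if any argument was undefined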
#-------------------------------------------------------------------------------
=head3 check_nicips
Description: Check if the nicips defined in MAC file is correct


@ -35,7 +35,7 @@ package xCAT::RemoteShellExp;
node.
DSH_TO_USERID - The userid on the node where the ssh keys will be updated.
DSH_ENABLE_SSH - Node to node root passwordless ssh will be setup.
DSH_ZONE_SSHKEYS - the directory containing the zone's root .ssh keys
Usage: remoteshellexp
[-t node list] test ssh connection to the node
@ -144,7 +144,7 @@ sub remoteshellexp
} else {
$from_userid="root";
}
# set User on the node where we will send the keys
# set User on the node where we will send the keys
# this id can be a local id as well as root
if ($ENV{'DSH_TO_USERID'}) {
$to_userid=$ENV{'DSH_TO_USERID'};
@ -154,18 +154,19 @@ sub remoteshellexp
# set User home directory to find the ssh public key to send
# For non-root ids information may not be in /etc/passwd
# but elsewhere like LDAP
if ($ENV{'DSH_FROM_USERID_HOME'}) {
$home=$ENV{'DSH_FROM_USERID_HOME'};
$home=$ENV{'DSH_FROM_USERID_HOME'};
} else {
$home=xCAT::Utils->getHomeDir($from_userid);
$home=xCAT::Utils->getHomeDir($from_userid);
}
# This indicates we will generate new ssh keys for the user,
# if they are not already there
my $key="$home/.ssh/id_rsa";
my $key2="$home/.ssh/id_rsa.pub";
# Check to see if empty
if (-z $key) {
# unless using zones
my $key="$home/.ssh/id_rsa";
my $key2="$home/.ssh/id_rsa.pub";
# Check to see if empty
if (-z $key) {
my $rsp = {};
$rsp->{error}->[0] =
"The $key file is empty. Remove it and rerun the command.";
@ -183,8 +184,7 @@ sub remoteshellexp
}
if (($flag eq "k") && (!(-e $key)))
{
# if the file size of the id_rsa key is 0, tell them to remove it
# and run the command again
# updating keys ('k' flag) and the key file does not exist, so generate new ssh keys
$rc=xCAT::RemoteShellExp->gensshkeys($expecttimeout);
}
# send ssh keys to the nodes/devices, to setup passwordless ssh
@ -200,6 +200,9 @@ sub remoteshellexp
if ($ssh_setup_cmd) { # setup ssh on devices
$rc=xCAT::RemoteShellExp->senddeviceskeys($remoteshell,$remotecopy,$to_userid,$to_user_password,$home,$ssh_setup_cmd,$nodes, $expecttimeout);
} else { #setup ssh on nodes
if ($ENV{'DSH_ZONE_SSHKEYS'}) { # if using zones, override the location of the keys
$home= $ENV{'DSH_ZONE_SSHKEYS'};
}
$rc=xCAT::RemoteShellExp->sendnodeskeys($remoteshell,$remotecopy,$to_userid,$to_user_password,$home,$nodes, $expecttimeout);
}
}
@ -495,11 +498,15 @@ sub sendnodeskeys
# in $HOME/.ssh/tmp/authorized_keys
# copy to the node to the temp directory
# scp $HOME/.ssh/tmp/authorized_keys to_userid@<node>:/tmp/$to_userid/.ssh
# scp $HOME/.ssh/id_rsa.pub to_userid@<node>:/tmp/$to_userid/.ssh
# Note if using zones, the keys do not come from ~/.ssh but from the
# zone table, sshkeydir attribute. For zones the userid is always root
# If you are going to enable ssh to ssh between nodes, then
# scp $HOME/.ssh/id_rsa to that temp directory on the node
# copy the script $HOME/.ssh/copy.sh to the node, it will do the
# the work of setting up the user's ssh keys and clean up
# ssh (run) copy.sh on the node
my @nodelist=split(/,/,$nodes);
foreach my $node (@nodelist) {
$sendkeys = new Expect;
@ -607,11 +614,11 @@ sub sendnodeskeys
my $spawncopyfiles;
if ($ENV{'DSH_ENABLE_SSH'}) { # we will enable node to node ssh
$spawncopyfiles=
"$remotecopy $home/.ssh/id_rsa $home/.ssh/copy.sh $home/.ssh/tmp/authorized_keys $to_userid\@$node:/tmp/$to_userid/.ssh ";
"$remotecopy $home/.ssh/id_rsa $home/.ssh/id_rsa.pub $home/.ssh/copy.sh $home/.ssh/tmp/authorized_keys $to_userid\@$node:/tmp/$to_userid/.ssh ";
} else { # no node to node ssh ( don't send private key)
$spawncopyfiles=
"$remotecopy $home/.ssh/copy.sh $home/.ssh/tmp/authorized_keys $to_userid\@$node:/tmp/$to_userid/.ssh ";
"$remotecopy $home/.ssh/id_rsa.pub $home/.ssh/copy.sh $home/.ssh/tmp/authorized_keys $to_userid\@$node:/tmp/$to_userid/.ssh ";
}
# send copy command
unless ($sendkeys->spawn($spawncopyfiles))

perl-xCAT/xCAT/SLP.pm (diff suppressed: file too large)

perl-xCAT/xCAT/Schema.pm

@ -134,7 +134,7 @@ litetree => {
table_desc => 'Directory hierarchy to traverse to get the initial contents of node files. The files that are specified in the litefile table are searched for in the directories specified in this table.',
descriptions => {
priority => 'This number controls what order the directories are searched. Directories are searched from smallest priority number to largest.',
image => "The name of the image that will use this directory, as specified in the osimage table. If image is not supplied, the default is 'ALL'. 'ALL' means use it for all images.",
image => "The name of the image (as specified in the osimage table) that will use this directory. You can also specify an image group name that is listed in the osimage.groups attribute of some osimages. 'ALL' means use this row for all images.",
directory => 'The location (hostname:path) of a directory that contains files specified in the litefile table. Variables are allowed. E.g: $noderes.nfsserver://xcatmasternode/install/$node/#CMD=uname-r#/',
mntopts => "A comma-separated list of options to use when mounting the litetree directory. (Ex. 'soft') The default is to do a 'hard' mount.",
comments => 'Any user-written notes.',
@ -148,7 +148,7 @@ litefile => {
required => [qw(image file)], # default type is rw nfsroot
table_desc => 'The litefile table specifies the directories and files on the statelite nodes that should be readwrite, persistent, or readonly overlay. All other files in the statelite nodes come from the readonly statelite image.',
descriptions => {
image => "The name of the image that will use these files, as specified in the osimage table. 'ALL' means use it for all images.",
image => "The name of the image (as specified in the osimage table) that will use these options on this dir/file. You can also specify an image group name that is listed in the osimage.groups attribute of some osimages. 'ALL' means use this row for all images.",
file => "The full pathname of the file. e.g: /etc/hosts. If the path is a directory, then it should be terminated with a '/'. ",
options => "Options for the file:\n\n".
qq{ tmpfs - It is the default option if you leave the options column blank. It provides a file or directory for the node to use when booting, its permission will be the same as the original version on the server. In most cases, it is read-write; however, on the next statelite boot, the original version of the file or directory on the server will be used, it means it is non-persistent. This option can be performed on files and directories..\n\n}.
@ -200,7 +200,7 @@ vm => {
'mgr' => 'The function manager for the virtual machine',
'host' => 'The system that currently hosts the VM',
'migrationdest' => 'A noderange representing candidate destinations for migration (i.e. similar systems, same SAN, or other criteria that xCAT can use',
'storage' => 'A list of storage files or devices to be used. i.e. /cluster/vm/<nodename> or nfs://<server>/path/to/folder/',
'storage' => 'A list of storage files or devices to be used. e.g. dir:///cluster/vm/<nodename> or nfs://<server>/path/to/folder/',
'storagemodel' => 'Model of storage devices to provide to guest',
'cfgstore' => 'Optional location for persistant storage separate of emulated hard drives for virtualization solutions that require persistant store to place configuration data',
'memory' => 'Megabytes of memory the VM currently should be set to.',
@ -538,8 +538,8 @@ nodehm => {
table_desc => "Settings that control how each node's hardware is managed. Typically, an additional table that is specific to the hardware type of the node contains additional info. E.g. the ipmi, mp, and ppc tables.",
descriptions => {
node => 'The node name or group name.',
power => 'The method to use to control the power of the node. If not set, the mgt attribute will be used. Valid values: ipmi, blade, hmc, ivm, fsp. If "ipmi", xCAT will search for this node in the ipmi table for more info. If "blade", xCAT will search for this node in the mp table. If "hmc", "ivm", or "fsp", xCAT will search for this node in the ppc table.',
mgt => 'The method to use to do general hardware management of the node. This attribute is used as the default if power or getmac is not set. Valid values: ipmi, blade, hmc, ivm, fsp, bpa. See the power attribute for more details.',
power => 'The method to use to control the power of the node. If not set, the mgt attribute will be used. Valid values: ipmi, blade, hmc, ivm, fsp, kvm, esx, rhevm. If "ipmi", xCAT will search for this node in the ipmi table for more info. If "blade", xCAT will search for this node in the mp table. If "hmc", "ivm", or "fsp", xCAT will search for this node in the ppc table.',
mgt => 'The method to use to do general hardware management of the node. This attribute is used as the default if power or getmac is not set. Valid values: ipmi, blade, hmc, ivm, fsp, bpa, kvm, esx, rhevm. See the power attribute for more details.',
cons => 'The console method. If nodehm.serialport is set, this will default to the nodehm.mgt setting, otherwise it defaults to unused. Valid values: cyclades, mrv, or the values valid for the mgt attribute.',
termserver => 'The hostname of the terminal server.',
termport => 'The port number on the terminal server that this node is connected to.',
@ -554,7 +554,7 @@ nodehm => {
},
},
nodelist => {
cols => [qw(node groups status statustime appstatus appstatustime primarysn hidden updatestatus updatestatustime comments disable)],
cols => [qw(node groups status statustime appstatus appstatustime primarysn hidden updatestatus updatestatustime zonename comments disable)],
keys => [qw(node)],
tablespace =>'XCATTBS32K',
table_desc => "The list of all the nodes in the cluster, including each node's current status and what groups it is in.",
@ -569,6 +569,7 @@ nodelist => {
hidden => "Used to hide fsp and bpa definitions, 1 means not show them when running lsdef and nodels",
updatestatus => "The current node update status. Valid states are synced out-of-sync,syncing,failed.",
updatestatustime => "The date and time when the updatestatus was updated.",
zonename => "The name of the zone to which the node is currently assigned. If undefined, then it is not assigned to any zone. ",
comments => 'Any user-written notes.',
disable => "Set to 'yes' or '1' to comment out this row.",
},
@ -591,14 +592,14 @@ nodepos => {
},
},
noderes => {
cols => [qw(node servicenode netboot tftpserver tftpdir nfsserver monserver nfsdir installnic primarynic discoverynics cmdinterface xcatmaster current_osimage next_osimage nimserver routenames nameservers comments disable)],
cols => [qw(node servicenode netboot tftpserver tftpdir nfsserver monserver nfsdir installnic primarynic discoverynics cmdinterface xcatmaster current_osimage next_osimage nimserver routenames nameservers proxydhcp comments disable)],
keys => [qw(node)],
tablespace =>'XCATTBS16K',
table_desc => 'Resources and settings to use when installing nodes.',
descriptions => {
node => 'The node name or group name.',
servicenode => 'A comma separated list of node names (as known by the management node) that provides most services for this node. The first service node on the list that is accessible will be used. The 2nd node on the list is generally considered to be the backup service node for this node when running commands like snmove.',
netboot => 'The type of network booting to use for this node. Valid values: pxe or xnba for x86* architecture, yaboot for POWER architecture.',
netboot => 'The type of network booting to use for this node. Valid values: pxe or xnba for x86* architecture, yaboot for POWER architecture, grub2 for RHEL 7 on Power. Note: yaboot is not supported starting with rhels7 on Power; use grub2 instead',
tftpserver => 'The TFTP server for this node (as known by this node). If not set, it defaults to networks.tftpserver.',
tftpdir => 'The directory that roots this nodes contents from a tftp and related perspective. Used for NAS offload by using different mountpoints.',
nfsserver => 'The NFS or HTTP server for this node (as known by this node).',
@ -614,6 +615,7 @@ noderes => {
nimserver => 'Not used for now. The NIM server for this node (as known by this node).',
routenames => 'A comma separated list of route names that refer to rows in the routes table. These are the routes that should be defined on this node when it is deployed.',
nameservers => 'An optional node/group specific override for name server list. Most people want to stick to site or network defined nameserver configuration.',
proxydhcp => 'Whether the node supports the proxydhcp protocol. Valid values: yes or 1, no or 0. The default value is yes.',
comments => 'Any user-written notes.',
disable => "Set to 'yes' or '1' to comment out this row.",
},
@ -731,6 +733,21 @@ linuximage => {
disable => "Set to 'yes' or '1' to comment out this row.",
},
},
winimage => {
cols => [qw(imagename template installto partitionfile winpepath comments disable)],
keys => [qw(imagename)],
tablespace =>'XCATTBS32K',
table_desc => 'Information about a Windows operating system image that can be used to deploy cluster nodes.',
descriptions => {
imagename => 'The name of this xCAT OS image definition.',
template => 'The fully qualified name of the template file that is used to create the windows unattend.xml file for diskful installation.',
installto => 'The disk and partition that Windows will be deployed to. The valid format is <disk>:<partition>. If not set, the default is 0:1 for BIOS (legacy) boot mode and 0:3 for UEFI boot mode. If set to 1, it means 1:1 for BIOS boot and 1:3 for UEFI boot',
partitionfile => 'The path of the partition configuration file. Because the partition configurations for BIOS boot mode and UEFI boot mode are different, this file should include two sections if both BIOS and UEFI mode are to be supported; to support only one mode, specify just that section. Example of a partition configuration file: [BIOS]xxxxxxx[UEFI]yyyyyyy. To simplify the setting, you can also set installto in the partition file with a section such as [INSTALLTO]0:1',
winpepath => 'The path of winpe which will be used to boot this image. If the real path is /tftpboot/winboot/winpe1/, the value for winpepath should be set to winboot/winpe1',
comments => 'Any user-written notes.',
disable => "Set to 'yes' or '1' to comment out this row.",
}
},
passwd => {
cols => [qw(key username password cryptmethod authdomain comments disable)],
keys => [qw(key username)],
@ -820,13 +837,13 @@ ppchcp => {
},
},
servicenode => {
cols => [qw(node nameserver dhcpserver tftpserver nfsserver conserver monserver ldapserver ntpserver ftpserver nimserver ipforward dhcpinterfaces comments disable)],
cols => [qw(node nameserver dhcpserver tftpserver nfsserver conserver monserver ldapserver ntpserver ftpserver nimserver ipforward dhcpinterfaces proxydhcp comments disable)],
keys => [qw(node)],
tablespace =>'XCATTBS16K',
table_desc => 'List of all Service Nodes and services that will be set up on the Service Node.',
descriptions => {
node => 'The hostname of the service node as known by the Management Node.',
nameserver => 'Do we set up DNS on this service node? Valid values:yes or 1, no or 0. If yes, creates named.conf file with forwarding to the management node and starts named. If no or 0, it does not change the current state of the service. ',
nameserver => 'Do we set up DNS on this service node? Valid values: 2, 1, no or 0. If 2, creates named.conf as dns slave, using the management node as dns master, and starts named. If 1, creates named.conf file with forwarding to the management node and starts named. If no or 0, it does not change the current state of the service. ',
dhcpserver => 'Do we set up DHCP on this service node? Not supported on AIX. Valid values:yes or 1, no or 0. If yes, runs makedhcp -n. If no or 0, it does not change the current state of the service. ',
tftpserver => 'Do we set up TFTP on this service node? Not supported on AIX. Valid values:yes or 1, no or 0. If yes, configures and starts atftp. If no or 0, it does not change the current state of the service. ',
nfsserver => 'Do we set up file services (HTTP,FTP,or NFS) on this service node? For AIX will only setup NFS, not HTTP or FTP. Valid values:yes or 1, no or 0.If no or 0, it does not change the current state of the service. ',
@ -838,6 +855,7 @@ servicenode => {
nimserver => 'Not used. Do we set up a NIM server on this service node? Valid values:yes or 1, no or 0. If no or 0, it does not change the current state of the service.',
ipforward => 'Do we set up ip forwarding on this service node? Valid values:yes or 1, no or 0. If no or 0, it does not change the current state of the service.',
dhcpinterfaces => 'The network interfaces DHCP server should listen on for the target node. This attribute can be used for management node and service nodes. If defined, it will override the values defined in site.dhcpinterfaces. This is a comma separated list of device names. !remote! indicates a non-local network for relay DHCP. For example: !remote!,eth0,eth1',
proxydhcp => 'Do we set up the proxydhcp service on this node? Valid values: yes or 1, no or 0. If yes, the proxydhcp daemon will be enabled on this node.',
comments => 'Any user-written notes.',
disable => "Set to 'yes' or '1' to comment out this row.",
@ -846,35 +864,44 @@ servicenode => {
site => {
cols => [qw(key value comments disable)],
keys => [qw(key)],
table_desc => "Global settings for the whole cluster. This table is different from the \nother tables in that each attribute is just named in the key column, rather \nthan having a separate column for each attribute. The following is a list of \nthe attributes currently used by xCAT.\n",
table_desc => "Global settings for the whole cluster. This table is different from the \nother tables in that each attribute is just named in the key column, rather \nthan having a separate column for each attribute. The following is a list of \nattributes currently used by xCAT organized into categories.\n",
descriptions => {
# Do not put description text past column 88, so it displays well in a 100 char wide window.
# ----------------------------------------------------------------------------------|----------
key => "Attribute Name: Description\n\n".
" auditskipcmds: List of commands and/or client types that will not be written to the auditlog table.\n".
" ------------\n".
"AIX ATTRIBUTES\n".
" ------------\n".
" nimprime : The name of NIM server, if not set default is the AIX MN.
If Linux MN, then must be set for support of mixed cluster (TBD).\n\n".
" useSSHonAIX: (yes/1 or no/0). Default is yes. The support for rsh/rcp is deprecated.\n".
" useNFSv4onAIX: (yes/1 or no/0). If yes, NFSv4 will be used with NIM. If no,\n".
" NFSv3 will be used with NIM. Default is no.\n\n".
" -----------------\n".
"DATABASE ATTRIBUTES\n".
" -----------------\n".
" auditskipcmds: List of commands and/or client types that will not be\n".
" written to the auditlog table.\n".
" 'ALL' means all cmds will be skipped. If attribute is null, all\n".
" commands will be written.\n".
" clienttype:web would skip all commands from the web client\n".
" For example: tabdump,nodels,clienttype:web \n".
" will not log tabdump,nodels and any web client commands.\n\n".
" blademaxp: The maximum number of concurrent processes for blade hardware control.\n\n".
" cleanupxcatpost: (yes/1 or no/0). Set to 'yes' or '1' to clean up the /xcatpost\n".
" directory on the stateless and statelite nodes after the\n".
" postscripts are run. Default is no.\n\n".
" consoleondemand: When set to 'yes', conserver connects and creates the console\n".
" output only when the user opens the console. Default is no on\n".
" Linux, yes on AIX.\n\n".
" databaseloc: Directory where we create the db instance directory.\n".
" Default is /var/lib. Only DB2 is currently supported.\n".
" Do not use the directory in the site.installloc or\n".
" installdir attribute. This attribute must not be changed\n".
" once db2sqlsetup script has been run and DB2 has been setup.\n\n".
" db2installloc: The location which the service nodes should mount for\n".
" the db2 code to install. Format is hostname:/path. If hostname is\n".
" omitted, it defaults to the management node. Default is /mntdb2.\n\n".
" defserialflow: The default serial flow - currently only used by the mknb command.\n\n".
" defserialport: The default serial port - currently only used by mknb.\n\n".
" defserialspeed: The default serial speed - currently only used by mknb.\n\n".
" excludenodes: A set of comma separated nodes and/or groups that would automatically\n".
" be subtracted from any noderange, it can be used for excluding some\n".
" failed nodes for any xCAT commands. See the 'noderange' manpage for\n".
" details on supported formats.\n\n".
" nodestatus: If set to 'n', the nodelist.status column will not be updated during\n".
" the node deployment, node discovery and power operations. The default is to update.\n\n".
" skiptables: Comma separated list of tables to be skipped by dumpxCATdb\n\n".
" -------------\n".
"DHCP ATTRIBUTES\n".
" -------------\n".
" dhcpinterfaces: The network interfaces DHCP should listen on. If it is the same\n".
" for all nodes, use a simple comma-separated list of NICs. To\n".
" specify different NICs for different nodes:\n".
@ -887,59 +914,19 @@ site => {
" disjointdhcps: If set to '1', the .leases file on a service node only contains\n".
" the nodes it manages. The default value is '0'.\n".
" '0' value means include all the nodes in the subnet.\n\n".
" pruneservices: Whether to enable service pruning when noderm is run (i.e.\n".
" removing DHCP entries when noderm is executed)\n\n".
" ------------\n".
"DNS ATTRIBUTES\n".
" ------------\n".
" dnshandler: Name of plugin that handles DNS setup for makedns.\n".
" domain: The DNS domain name used for the cluster.\n\n".
" ea_primary_hmc: The hostname of the HMC that the Integrated Switch Network\n".
" Management Event Analysis should send hardware serviceable\n".
" events to for processing and potentially sending to IBM.\n\n".
" ea_backup_hmc: The hostname of the HMC that the Integrated Switch Network\n".
" Management Event Analysis should send hardware serviceable\n".
" events to if the primary HMC is down.\n\n".
" enableASMI: (yes/1 or no/0). If yes, ASMI method will be used after fsp-api. If no,\n".
" when fsp-api is used, ASMI method will not be used. Default is no.\n\n".
" excludenodes: A set of comma separated nodes and/or groups that would automatically\n".
" be subtracted from any noderange, it can be used for excluding some\n".
" failed nodes for any xCAT commands. See the 'noderange' manpage for\n".
" details on supported formats.\n\n".
" forwarders: The DNS servers at your site that can provide names outside of the\n".
" cluster. The makedns command will configure the DNS on the management\n".
" node to forward requests it does not know to these servers.\n".
" Note that the DNS servers on the service nodes will ignore this value\n".
" and always be configured to forward requests to the management node.\n\n".
" fsptimeout: The timeout, in milliseconds, to use when communicating with FSPs.\n\n".
" genmacprefix: When generating mac addresses automatically, use this manufacturing\n".
" prefix (e.g. 00:11:aa)\n\n".
" genpasswords: Automatically generate random passwords for BMCs when configuring\n".
" them.\n\n".
" httpport: The port number that the booting/installing nodes should contact the\n".
" http server on the MN/SN on. It is your responsibility to configure\n".
" the http server to listen on that port - xCAT will not do that.\n\n".
" installdir: The local directory name used to hold the node deployment packages.\n\n".
" installloc: The location from which the service nodes should mount the \n".
" deployment packages in the format hostname:/path. If hostname is\n".
" omitted, it defaults to the management node. The path must\n".
" match the path in the installdir attribute.\n\n".
" ipmidispatch: Whether or not to send ipmi hw control operations to the service\n".
" node of the target compute nodes. Default is 'y'.\n\n".
" hwctrldispatch: Whether or not to send hw control operations to the service\n".
" node of the target nodes. Default is 'y'.(At present, this attribute\n".
" is only used for IBM Flex System)\n\n".
" ipmimaxp: The max # of processes for ipmi hw ctrl. The default is 64. Currently,\n".
" this is only used for HP hw control.\n\n".
" ipmiretries: The # of retries to use when communicating with BMCs. Default is 3.\n\n".
" ipmisdrcache: If set to 'no', then the xCAT IPMI support will not cache locally\n".
" the target node's SDR cache to improve performance.\n\n".
" ipmitimeout: The timeout to use when communicating with BMCs. Default is 2.\n".
" This attribute is currently not used.\n\n".
" iscsidir: The path to put the iscsi disks in on the mgmt node.\n\n".
" master: The hostname of the xCAT management node, as known by the nodes.\n\n".
" maxssh: The max # of SSH connections at any one time to the hw ctrl point for PPC\n".
" This parameter doesn't take effect on the rpower command.\n".
" It takes effects on other PPC hardware control command\n".
" getmacs/rnetboot/rbootseq and so on. Default is 8.\n\n".
" mnroutenames: The name of the routes to be setup on the management node.\n".
" It is a comma separated list of route names that are defined in the\n".
" routes table.\n\n".
" nameservers: A comma delimited list of DNS servers that each node in the cluster\n".
" should use. This value will end up in the nameserver settings of the\n".
" /etc/resolv.conf on each node. It is common (but not required) to set\n".
@ -949,18 +936,35 @@ site => {
" \"<xcatmaster>\" to mean the DNS server for each node should be the\n".
" node that is managing it (either its service node or the management\n".
" node).\n\n".
" nimprime : The name of NIM server, if not set default is the AIX MN.
If Linux MN, then must be set for support of mixed cluster (TBD).\n\n".
" nodestatus: If set to 'n', the nodelist.status column will not be updated during\n".
" the node deployment, node discovery and power operations. The default is to update.\n\n".
" ntpservers: A comma delimited list of NTP servers for the cluster - often the\n".
" xCAT management node.\n\n".
" runbootscripts: If set to 'yes' the scripts listed in the postbootscripts\n".
" attribute in the osimage and postscripts tables will be run during\n".
" each reboot of stateful (diskful) nodes. This attribute has no\n".
" effect on stateless and statelite nodes. Please run the following\n" .
" command after you change the value of this attribute: \n".
" 'updatenode <nodes> -P setuppostbootscripts'\n\n".
" -------------------------\n".
"HARDWARE CONTROL ATTRIBUTES\n".
" -------------------------\n".
" blademaxp: The maximum number of concurrent processes for blade hardware control.\n\n".
" ea_primary_hmc: The hostname of the HMC that the Integrated Switch Network\n".
" Management Event Analysis should send hardware serviceable\n".
" events to for processing and potentially sending to IBM.\n\n".
" ea_backup_hmc: The hostname of the HMC that the Integrated Switch Network\n".
" Management Event Analysis should send hardware serviceable\n".
" events to if the primary HMC is down.\n\n".
" enableASMI: (yes/1 or no/0). If yes, ASMI method will be used after fsp-api. If no,\n".
" when fsp-api is used, ASMI method will not be used. Default is no.\n\n".
" fsptimeout: The timeout, in milliseconds, to use when communicating with FSPs.\n\n".
" hwctrldispatch: Whether or not to send hw control operations to the service\n".
" node of the target nodes. Default is 'y'.(At present, this attribute\n".
" is only used for IBM Flex System)\n\n".
" ipmidispatch: Whether or not to send ipmi hw control operations to the service\n".
" node of the target compute nodes. Default is 'y'.\n\n".
" ipmimaxp: The max # of processes for ipmi hw ctrl. The default is 64. Currently,\n".
" this is only used for HP hw control.\n\n".
" ipmiretries: The # of retries to use when communicating with BMCs. Default is 3.\n\n".
" ipmisdrcache: If set to 'no', then the xCAT IPMI support will not cache locally\n".
" the target node's SDR cache to improve performance.\n\n".
" ipmitimeout: The timeout to use when communicating with BMCs. Default is 2.\n".
" This attribute is currently not used.\n\n".
" maxssh: The max # of SSH connections at any one time to the hw ctrl point for PPC\n".
" This parameter doesn't take effect on the rpower command.\n".
" It takes effects on other PPC hardware control command\n".
" getmacs/rnetboot/rbootseq and so on. Default is 8.\n\n".
" syspowerinterval: For system p CECs, this is the number of seconds the rpower\n".
" command will wait between performing the action for each CEC.\n".
" For system x IPMI servers, this is the number of seconds the\n".
@ -987,15 +991,45 @@ site => {
" ppctimeout: The timeout, in milliseconds, to use when communicating with PPC hw\n".
" through HMC. It only takes effect on the hardware control commands\n".
" through HMC. Default is 0.\n\n".
" snmpc: The snmp community string that xcat should use when communicating with the\n".
" switches.\n\n".
" ---------------------------\n".
"INSTALL/DEPLOYMENT ATTRIBUTES\n".
" ---------------------------\n".
" cleanupxcatpost: (yes/1 or no/0). Set to 'yes' or '1' to clean up the /xcatpost\n".
" directory on the stateless and statelite nodes after the\n".
" postscripts are run. Default is no.\n\n".
" db2installloc: The location which the service nodes should mount for\n".
" the db2 code to install. Format is hostname:/path. If hostname is\n".
" omitted, it defaults to the management node. Default is /mntdb2.\n\n".
" defserialflow: The default serial flow - currently only used by the mknb command.\n\n".
" defserialport: The default serial port - currently only used by mknb.\n\n".
" defserialspeed: The default serial speed - currently only used by mknb.\n\n".
" genmacprefix: When generating mac addresses automatically, use this manufacturing\n".
" prefix (e.g. 00:11:aa)\n\n".
" genpasswords: Automatically generate random passwords for BMCs when configuring\n".
" them.\n\n".
" installdir: The local directory name used to hold the node deployment packages.\n\n".
" installloc: The location from which the service nodes should mount the \n".
" deployment packages in the format hostname:/path. If hostname is\n".
" omitted, it defaults to the management node. The path must\n".
" match the path in the installdir attribute.\n\n".
" iscsidir: The path to put the iscsi disks in on the mgmt node.\n\n".
" mnroutenames: The name of the routes to be setup on the management node.\n".
" It is a comma separated list of route names that are defined in the\n".
" routes table.\n\n".
" runbootscripts: If set to 'yes' the scripts listed in the postbootscripts\n".
" attribute in the osimage and postscripts tables will be run during\n".
" each reboot of stateful (diskful) nodes. This attribute has no\n".
" effect on stateless and statelite nodes. Please run the following\n" .
" command after you change the value of this attribute: \n".
" 'updatenode <nodes> -P setuppostbootscripts'\n\n".
" precreatemypostscripts: (yes/1 or no/0). Default is no. If yes, it will \n".
" instruct xCAT at nodeset and updatenode time to query the db once for\n".
" all of the nodes passed into the cmd and create the mypostscript file\n".
" for each node, and put them in a directory of tftpdir(such as: /tftpboot)\n".
" If no, it will not generate the mypostscript file in the tftpdir.\n\n".
" pruneservices: Whether to enable service pruning when noderm is run (i.e.\n".
" removing DHCP entries when noderm is executed)\n\n".
" rsh: This is no longer used. path to remote shell command for xdsh.\n\n".
" rcp: This is no longer used. path to remote copy command for xdcp.\n\n".
" setinstallnic: Set the network configuration for installnic to be static.\n\n".
" sharedtftp: Set to 0 or no, xCAT should not assume the directory\n".
" in tftpdir is mounted on all on Service Nodes. Default is 1/yes.\n".
" If value is set to a hostname, the directory in tftpdir\n".
@ -1006,18 +1040,31 @@ site => {
" shared filesystem is being used across all service nodes.\n".
" 'all' means that the management as well as the service nodes\n".
" are all using a common shared filesystem. The default is 'no'.\n".
" skiptables: Comma separated list of tables to be skipped by dumpxCATdb\n".
" xcatconfdir: Where xCAT config data is (default /etc/xcat).\n\n".
" --------------------\n".
"REMOTESHELL ATTRIBUTES\n".
" --------------------\n".
" nodesyncfiledir: The directory on the node, where xdcp will rsync the files\n".
" SNsyncfiledir: The directory on the Service Node, where xdcp will rsync the files\n".
" from the MN that will eventually be rsync'd to the compute nodes.\n\n".
" nodesyncfiledir: The directory on the node, where xdcp will rsync the files\n".
" snmpc: The snmp community string that xcat should use when communicating with the\n".
" switches.\n\n".
" sshbetweennodes: Comma separated list of groups to enable passwordless root \n".
" sshbetweennodes: Comma separated list of groups of compute nodes to enable passwordless root \n".
" ssh during install, or xdsh -K. Default is ALLGROUPS.\n".
" Set to NOGROUPS,if you do not wish to enabled any groups.\n".
" Set to NOGROUPS,if you do not wish to enabled any group of compute nodes.\n".
" Service Nodes are not affected by this attribute\n".
" they are always setup with\n".
" passwordless root access to nodes and other SN.\n\n".
" passwordless root access to nodes and other SN.\n".
" If using the zone table, this attribute in not used.\n\n".
" -----------------\n".
"SERVICES ATTRIBUTES\n".
" -----------------\n".
" consoleondemand: When set to 'yes', conserver connects and creates the console\n".
" output only when the user opens the console. Default is no on\n".
" Linux, yes on AIX.\n\n".
" httpport: The port number that the booting/installing nodes should contact the\n".
" http server on the MN/SN on. It is your responsibility to configure\n".
" the http server to listen on that port - xCAT will not do that.\n\n".
" ntpservers: A comma delimited list of NTP servers for the cluster - often the\n".
" xCAT management node.\n\n".
" svloglocal: if set to 1, syslog on the service node will not get forwarded to the\n".
" mgmt node.\n\n".
" timezone: (e.g. America/New_York)\n\n".
@ -1027,11 +1074,26 @@ site => {
" useNmapfromMN: When set to yes, nodestat command should obtain the node status\n".
" using nmap (if available) from the management node instead of the\n".
" service node. This will improve the performance in a flat network.\n\n".
" useSSHonAIX: (yes/1 or no/0). If yes, ssh/scp will be setup and used. If no, rsh/rcp. The support for rsh/rcp is deprecated.\n".
" vsftp: Default is 'n'. If set to 'y', the xcatd on the mn will automatically\n".
" bring up vsftpd. (You must manually install vsftpd before this.\n".
" This setting does not apply to the service node. For sn\n".
" you need to set servicenode.ftpserver=1 if you want xcatd to\n".
" bring up vsftpd.\n\n".
" -----------------------\n".
"VIRTUALIZATION ATTRIBUTES\n".
" -----------------------\n".
" usexhrm: Have xCAT run its xHRM script when booting up KVM guests to set the\n".
" virtual network bridge up correctly. See\n".
" https://sourceforge.net/apps/mediawiki/xcat/index.php?title=XCAT_Virtualization_with_KVM#Setting_up_a_network_bridge\n\n".
" rsh/rcp will be setup and used on AIX. Default is yes.\n\n".
" vcenterautojoin: When set to no, the VMWare plugin will not attempt to auto remove\n".
" and add hypervisors while trying to perform operations. If users\n".
" or tasks outside of xCAT perform the joining this assures xCAT\n".
" will not interfere.\n\n".
" vmwarereconfigonpower: When set to no, the VMWare plugin will make no effort to\n".
" push vm.cpus/vm.memory updates from xCAT to VMWare.\n\n".
" --------------------\n".
"XCAT DAEMON ATTRIBUTES\n".
" --------------------\n".
" useflowcontrol: (yes/1 or no/0). If yes, the postscript processing on each node\n".
" contacts xcatd on the MN/SN using a lightweight UDP packet to wait\n".
" until xcatd is ready to handle the requests associated with\n".
@ -1042,28 +1104,14 @@ site => {
" xcatd, and retry. On a new install of xcat, this value will be set to yes.\n".
" See the following document for details:\n".
" https://sourceforge.net/apps/mediawiki/xcat/index.php?title=Hints_and_Tips_for_Large_Scale_Clusters\n\n".
" useNFSv4onAIX: (yes/1 or no/0). If yes, NFSv4 will be used with NIM. If no,\n".
" NFSv3 will be used with NIM. Default is no.\n\n".
" vcenterautojoin: When set to no, the VMWare plugin will not attempt to auto remove\n".
" and add hypervisors while trying to perform operations. If users\n".
" or tasks outside of xCAT perform the joining this assures xCAT\n".
" will not interfere.\n\n".
" vmwarereconfigonpower: When set to no, the VMWare plugin will make no effort to\n".
" push vm.cpus/vm.memory updates from xCAT to VMWare.\n\n".
" vsftp: Default is 'n'. If set to 'y', the xcatd on the mn will automatically\n".
" bring up vsftpd. (You must manually install vsftpd before this.\n".
" This setting does not apply to the service node. For sn\n".
" you need to set servicenode.ftpserver=1 if you want xcatd to\n".
" bring up vsftpd.\n\n".
" xcatconfdir: Where xCAT config data is (default /etc/xcat).\n\n".
" xcatmaxconnections: Number of concurrent xCAT protocol requests before requests\n".
" begin queueing. This applies to both client command requests\n".
" and node requests, e.g. to get postscripts. Default is 64.\n\n".
" xcatmaxbatchconnections: Number of concurrent xCAT connections allowed from the nodes.\n".
" Value must be less than xcatmaxconnections. Default is 50.\n\n".
" xcatdport: The port used by the xcatd daemon for client/server communication.\n\n".
" xcatiport: The port used by xcatd to receive install status updates from nodes.\n\n",
" xcatsslversion: The ssl version by xcatd. Default is SSLv3.\n\n",
" xcatiport: The port used by xcatd to receive install status updates from nodes.\n\n".
" xcatsslversion: The ssl version by xcatd. Default is SSLv3.\n\n".
" xcatsslciphers: The ssl cipher by xcatd. Default is 3DES.\n\n",
value => 'The value of the attribute specified in the "key" column.',
comments => 'Any user-written notes.',
@ -1143,6 +1191,19 @@ performance => {
disable => "Set to 'yes' or '1' to comment out this row.",
},
},
zone => {
cols => [qw(zonename sshkeydir sshbetweennodes defaultzone comments disable)],
keys => [qw(zonename)],
table_desc => 'Defines a cluster zone for nodes that share root ssh key access to each other.',
descriptions => {
zonename => 'The name of the zone.',
sshkeydir => 'Directory containing the shared root ssh RSA keys.',
sshbetweennodes => 'Indicates whether passwordless ssh will be setup between the nodes of this zone. Values are yes/1 or no/0. Default is yes. ',
defaultzone => 'If set to yes or 1, nodes that are not assigned to any other zone will default to this zone.',
comments => 'Any user-provided notes.',
disable => "Set to 'yes' or '1' to comment out this row.",
},
},
eventlog => {
cols => [qw(recid eventtime eventtype monitor monnode node application component id severity message rawdata comments disable)],
@ -1279,7 +1340,7 @@ firmware => {
},
nics => {
cols => [qw(node nicips nichostnamesuffixes nictypes niccustomscripts nicnetworks nicaliases comments disable)],
cols => [qw(node nicips nichostnamesuffixes nichostnameprefixes nictypes niccustomscripts nicnetworks nicaliases comments disable)],
keys => [qw(node)],
tablespace =>'XCATTBS16K',
table_desc => 'Stores NIC details.',
@ -1297,6 +1358,13 @@ nics => {
<nic1>!<ext1>|<ext2>,<nic2>!<ext1>|<ext2>,..., for example, eth0!-eth0|-eth0-ipv6,ib0!-ib0|-ib0-ipv6.
The xCAT object definition commands support to use nichostnamesuffixes.<nicname> as the sub attributes.
Note: According to DNS rules a hostname must be a text string up to 24 characters drawn from the alphabet (A-Z), digits (0-9), minus sign (-),and period (.). When you are specifying "nichostnamesuffixes" or "nicaliases" make sure the resulting hostnames will conform to this naming convention',
nichostnameprefixes => 'Comma-separated list of hostname prefixes per NIC.
If only one ip address is associated with each NIC:
<nic1>!<ext1>,<nic2>!<ext2>,..., for example, eth0!eth0-,ib0!ib-
If multiple ip addresses are associated with each NIC:
<nic1>!<ext1>|<ext2>,<nic2>!<ext1>|<ext2>,..., for example, eth0!eth0-|eth0-ipv6-,ib0!ib-|ib-ipv6-.
The xCAT object definition commands support using nichostnameprefixes.<nicname> as a sub-attribute.
Note: According to DNS rules a hostname must be a text string up to 24 characters drawn from the alphabet (A-Z), digits (0-9), minus sign (-),and period (.). When you are specifying "nichostnameprefixes" or "nicaliases" make sure the resulting hostnames will conform to this naming convention',
nictypes => 'Comma-separated list of NIC types per NIC. <nic1>!<type1>,<nic2>!<type2>, e.g. eth0!Ethernet,ib0!Infiniband. The xCAT object definition commands support to use nictypes.<nicname> as the sub attributes.',
niccustomscripts => 'Comma-separated list of custom scripts per NIC. <nic1>!<script1>,<nic2>!<script2>, e.g. eth0!configeth eth0, ib0!configib ib0. The xCAT object definition commands support to use niccustomscripts.<nicname> as the sub attribute
.',
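To make the prefix format above concrete, an illustrative sketch (node name, prefixes, and resulting hostnames are invented; the split logic mirrors get_nodes_nic_attrs earlier in this changeset):

use strict;
use warnings;

my $node = 'node01';
my $nichostnameprefixes = 'eth0!eth0-,ib0!ib-';

for my $entry (split /,/, $nichostnameprefixes) {
    # the same "!" (or ":") separator handling used when parsing the nics table
    my ($nic, $prefixes) = $entry =~ /!/ ? split(/!/, $entry, 2) : split(/:/, $entry, 2);
    for my $prefix (split /\|/, $prefixes) {
        print "$nic => $prefix$node\n";    # e.g. eth0 => eth0-node01
    }
}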
@ -1481,8 +1549,20 @@ mic => {
disable => "Do not use. tabprune will not work if set to yes or 1",
},
},
hwinv => {
cols => [qw(node cputype cpucount memory disksize comments disable)],
keys => [qw(node)],
table_desc => 'The hardware inventory for the node.',
descriptions => {
node => 'The node name or group name.',
cputype => 'The cpu model name for the node.',
cpucount => 'The number of cpus for the node.',
memory => 'The size of the memory for the node in MB.',
disksize => 'The size of the disks for the node in GB.',
comments => 'Any user-provided notes.',
disable => "Set to 'yes' or '1' to comment out this row.",
},
},
); # end of tabspec definition
@ -1560,6 +1640,7 @@ foreach my $tabname (keys(%xCAT::ExtTab::ext_tabspec)) {
rack => { attrs => [], attrhash => {}, objkey => 'rackname' },
osdistro=> { attrs => [], attrhash => {}, objkey => 'osdistroname' },
osdistroupdate=> { attrs => [], attrhash => {}, objkey => 'osupdatename' },
zone=> { attrs => [], attrhash => {}, objkey => 'zonename' },
);
@ -1628,6 +1709,11 @@ my @nodeattrs = (
tabentry => 'noderes.monserver',
access_tabentry => 'noderes.node=attr:node',
},
{attr_name => 'supportproxydhcp',
tabentry => 'noderes.proxydhcp',
access_tabentry => 'noderes.node=attr:node',
},
{attr_name => 'kernel',
tabentry => 'bootparams.kernel',
access_tabentry => 'bootparams.node=attr:node',
@ -1696,6 +1782,10 @@ my @nodeattrs = (
{attr_name => 'setupipforward',
tabentry => 'servicenode.ipforward',
access_tabentry => 'servicenode.node=attr:node',
},
{attr_name => 'setupproxydhcp',
tabentry => 'servicenode.proxydhcp',
access_tabentry => 'servicenode.node=attr:node',
},
# - moserver not used yet
# {attr_name => 'setupmonserver',
@ -2177,6 +2267,10 @@ my @nodeattrs = (
tabentry => 'nics.nichostnamesuffixes',
access_tabentry => 'nics.node=attr:node',
},
{attr_name => 'nichostnameprefixes',
tabentry => 'nics.nichostnameprefixes',
access_tabentry => 'nics.node=attr:node',
},
{attr_name => 'nictypes',
tabentry => 'nics.nictypes',
access_tabentry => 'nics.node=attr:node',
@ -2452,6 +2546,25 @@ my @nodeattrs = (
tabentry => 'mic.powermgt',
access_tabentry => 'mic.node=attr:node',
},
#####################
## hwinv table #
#####################
{attr_name => 'cputype',
tabentry => 'hwinv.cputype',
access_tabentry => 'hwinv.node=attr:node',
},
{attr_name => 'cpucount',
tabentry => 'hwinv.cpucount',
access_tabentry => 'hwinv.node=attr:node',
},
{attr_name => 'memory',
tabentry => 'hwinv.memory',
access_tabentry => 'hwinv.node=attr:node',
},
{attr_name => 'disksize',
tabentry => 'hwinv.disksize',
access_tabentry => 'hwinv.node=attr:node',
},
); # end of @nodeattrs that applies to both nodes and groups
@ -2502,6 +2615,10 @@ my @nodeattrs = (
{attr_name => 'updatestatustime',
tabentry => 'nodelist.updatestatustime',
access_tabentry => 'nodelist.node=attr:node',
},
{attr_name => 'zonename',
tabentry => 'nodelist.zonename',
access_tabentry => 'nodelist.node=attr:node',
},
{attr_name => 'usercomment',
tabentry => 'nodelist.comments',
@ -2707,6 +2824,29 @@ push(@{$defspec{node}->{'attrs'}}, @nodeattrs);
access_tabentry => 'linuximage.imagename=attr:imagename',
},
####################
# winimage table#
####################
{attr_name => 'template',
only_if => 'imagetype=windows',
tabentry => 'winimage.template',
access_tabentry => 'winimage.imagename=attr:imagename',
},
{attr_name => 'installto',
only_if => 'imagetype=windows',
tabentry => 'winimage.installto',
access_tabentry => 'winimage.imagename=attr:imagename',
},
{attr_name => 'partitionfile',
only_if => 'imagetype=windows',
tabentry => 'winimage.partitionfile',
access_tabentry => 'winimage.imagename=attr:imagename',
},
{attr_name => 'winpepath',
only_if => 'imagetype=windows',
tabentry => 'winimage.winpepath',
access_tabentry => 'winimage.imagename=attr:imagename',
},
####################
# nimimage table#
####################
{attr_name => 'nimtype',
@ -2933,6 +3073,36 @@ push(@{$defspec{node}->{'attrs'}}, @nodeattrs);
access_tabentry => 'rack.rackname=attr:rackname',
},
);
####################
# zone table #
####################
@{$defspec{zone}->{'attrs'}} = (
{attr_name => 'zonename',
tabentry => 'zone.zonename',
access_tabentry => 'zone.zonename=attr:zonename',
},
{attr_name => 'sshkeydir',
tabentry => 'zone.sshkeydir',
access_tabentry => 'zone.zonename=attr:zonename',
},
{attr_name => 'sshbetweennodes',
tabentry => 'zone.sshbetweennodes',
access_tabentry => 'zone.zonename=attr:zonename',
},
{attr_name => 'defaultzone',
tabentry => 'zone.defaultzone',
access_tabentry => 'zone.zonename=attr:zonename',
},
{attr_name => 'usercomment',
tabentry => 'zone.comments',
access_tabentry => 'zone.zonename=attr:zonename',
},
);
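A minimal sketch of reading the new zone table with the same xCAT::Table calls used elsewhere in this changeset (assumes it runs on a configured xCAT management node):

use strict;
use warnings;
require xCAT::Table;

my $tab = xCAT::Table->new('zone');
if ($tab) {
    my @zones = $tab->getAllAttribs('zonename', 'sshkeydir', 'sshbetweennodes', 'defaultzone');
    for my $z (@zones) {
        printf "%s keydir=%s default=%s\n",
            $z->{zonename}, $z->{sshkeydir} // '', $z->{defaultzone} // 'no';
    }
    $tab->close();
}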
#########################
# route data object #
#########################
# routes table #
#########################
#########################
# route data object #
#########################

perl-xCAT/xCAT/ServiceNodeUtils.pm

@ -163,6 +163,8 @@ sub isServiceReq
if (($value eq "1") || ($value eq "YES"))
{
$servicehash->{$service} = "1";
} elsif ($value eq "2") {
$servicehash->{$service} = "2";
} else {
$servicehash->{$service} = "0";
}

View File

@ -19,6 +19,7 @@ if ($^O =~ /^aix/i) {
use lib "$::XCATROOT/lib/perl";
use strict;
require xCAT::Table;
require xCAT::Zone;
use File::Path;
#-----------------------------------------------------------------------
@ -271,7 +272,7 @@ sub bldnonrootSSHFiles
Error:
0=good, 1=error
Example:
xCAT::TableUtils->setupSSH(@target_nodes);
xCAT::TableUtils->setupSSH(@target_nodes,$expecttimeout);
Comments:
Does not setup known_hosts. Assumes automatically
setup by SSH ( ssh config option StrictHostKeyChecking no should
@ -335,21 +336,21 @@ sub setupSSH
$::REMOTE_SHELL = "/usr/bin/ssh";
my $rsp = {};
# Get the home directory
my $home = xCAT::Utils->getHomeDir($from_userid);
$ENV{'DSH_FROM_USERID_HOME'} = $home;
if ($from_userid eq "root")
{
# make the directory to hold keys to transfer to the nodes
if (!-d $SSHdir)
{
mkpath("$SSHdir", { mode => 0755 });
}
# generates new keys for root, if they do not already exist
# generates new keys for root, if they do not already exist in ~/.ssh
# nodes are not used by this option but are passed in to preserve the interface
my $rc=
xCAT::RemoteShellExp->remoteshellexp("k",$::CALLBACK,$::REMOTE_SHELL,$n_str,$expecttimeout);
@ -374,7 +375,10 @@ else
fi
mkdir -p \$dest_dir
cat /tmp/$to_userid/.ssh/authorized_keys >> \$home/.ssh/authorized_keys 2>&1
cat /tmp/$to_userid/.ssh/id_rsa.pub >> \$home/.ssh/authorized_keys 2>&1
rm -f \$home/.ssh/id_rsa 2>&1
cp /tmp/$to_userid/.ssh/id_rsa \$home/.ssh/id_rsa 2>&1
cp /tmp/$to_userid/.ssh/id_rsa.pub \$home/.ssh/id_rsa.pub 2>&1
chmod 0600 \$home/.ssh/id_* 2>&1
rm -f /tmp/$to_userid/.ssh/* 2>&1
rmdir \"/tmp/$to_userid/.ssh\"
@ -386,6 +390,7 @@ rmdir \"/tmp/$to_userid\" \n";
my $auth_key2=0;
if ($from_userid eq "root")
{
# this will put the root/.ssh/id_rsa.pub key in the authorized keys file to put on the node
my $rc = xCAT::TableUtils->cpSSHFiles($SSHdir);
if ($rc != 0)
{ # error
@ -418,50 +423,47 @@ rmdir \"/tmp/$to_userid\" \n";
xCAT::TableUtils->bldnonrootSSHFiles($from_userid);
}
# send the keys to the nodes for root or some other id
#
# This environment variable determines whether to setup
# node to node ssh
# The nodes must be checked against the site.sshbetweennodes attribute
# send the keys
# For the root user and not for devices: send only to nodes
if (($from_userid eq "root") && (!($ENV{'DEVICETYPE'}))) {
my $enablenodes;
my $disablenodes;
my @nodelist= split(",", $n_str);
foreach my $n (@nodelist)
# Need to check if nodes are in a zone.
my @zones;
my $tab = xCAT::Table->new("zone");
if ($tab)
{
my $enablessh=xCAT::TableUtils->enablessh($n);
if ($enablessh == 1) {
$enablenodes .= $n;
$enablenodes .= ",";
} else {
$disablenodes .= $n;
$disablenodes .= ",";
# if we have zones, need to send the zone keys to each node in the zone
my @attribs = ("zonename");
@zones = $tab->getAllAttribs(@attribs);
$tab->close();
} else {
$rsp->{data}->[0] = "Could not open zone table.\n";
xCAT::MsgUtils->message("E", $rsp, $::CALLBACK);
return 1;
}
# check for zones, key send is different if zones defined or not
if (@zones) { # we have zones defined
my $rc = xCAT::TableUtils->sendkeysTOzones($ref_nodes,$expecttimeout);
if ($rc != 0)
{
$rsp->{data}->[0] = "Error sending ssh keys to the zones.\n";
xCAT::MsgUtils->message("E", $rsp, $::CALLBACK);
exit 1;
}
}
my $cmd;
if ($enablenodes) { # node on list to setup nodetonodessh
chop $enablenodes; # remove last comma
$ENV{'DSH_ENABLE_SSH'} = "YES";
my $rc=xCAT::RemoteShellExp->remoteshellexp("s",$::CALLBACK,"/usr/bin/ssh",$enablenodes,$expecttimeout);
} else { # no zones
# if no zone table defined, do it the old way , keys are in ~/.ssh
my $rc = xCAT::TableUtils->sendkeysNOzones($ref_nodes,$expecttimeout);
if ($rc != 0)
{
$rsp->{data}->[0] = "remoteshellexp failed sending keys to enablenodes.";
xCAT::MsgUtils->message("E", $rsp, $::CALLBACK);
}
}
if ($disablenodes) { # node on list to setup nodetonodessh
chop $disablenodes; # remove last comma
my $rc=xCAT::RemoteShellExp->remoteshellexp("s",$::CALLBACK,"/usr/bin/ssh",$disablenodes,$expecttimeout);
if ($rc != 0)
{
$rsp->{data}->[0] = "remoteshellexp failed sending keys to disablenodes.";
xCAT::MsgUtils->message("E", $rsp, $::CALLBACK);
{
$rsp->{data}->[0] = "Error sending ssh keys to the nodes.\n";
xCAT::MsgUtils->message("E", $rsp, $::CALLBACK);
}
}
} else { # from user is not root or it is a device , always send private key
$ENV{'DSH_ENABLE_SSH'} = "YES";
my $rc=xCAT::RemoteShellExp->remoteshellexp("s",$::CALLBACK,"/usr/bin/ssh",$n_str,$expecttimeout);
@ -503,6 +505,235 @@ rmdir \"/tmp/$to_userid\" \n";
#--------------------------------------------------------------------------------
=head3 sendkeysNOzones
Transfers the ssh keys for the root id to the nodes when no zones are defined.
The keys come from ~/.ssh; the site.sshbetweennodes attribute is honored.
Arguments:
Array of nodes
Timeout for expect call (optional)
Returns:
Env Variables: $DSH_FROM_USERID, $DSH_TO_USERID, $DSH_REMOTE_PASSWORD
the ssh keys are transferred from the $DSH_FROM_USERID to the $DSH_TO_USERID
on the node(s). The DSH_REMOTE_PASSWORD and the DSH_FROM_USERID
must be obtained by
the calling script or from the xdsh client
Globals:
$::XCATROOT , $::CALLBACK
Error:
0=good, 1=error
Example:
xCAT::TableUtils->sendkeysNOzones($ref_nodes,$expecttimeout);
Comments:
Does not setup known_hosts. Assumes automatically
setup by SSH ( ssh config option StrictHostKeyChecking no should
be set in the ssh config file).
=cut
#--------------------------------------------------------------------------------
sub sendkeysNOzones
{
my ($class, $ref_nodes,$expecttimeout) = @_;
my @nodes=$ref_nodes;
my $enablenodes;
my $disablenodes;
my $n_str = $nodes[0];
my @nodelist= split(",", $n_str);
my $rsp = {};
foreach my $n (@nodelist)
{
my $enablessh=xCAT::TableUtils->enablessh($n);
if ($enablessh == 1) {
$enablenodes .= $n;
$enablenodes .= ",";
} else {
$disablenodes .= $n;
$disablenodes .= ",";
}
}
if ($enablenodes) { # node on list to setup nodetonodessh
chop $enablenodes; # remove last comma
$ENV{'DSH_ENABLE_SSH'} = "YES";
# send the keys to the nodes
my $rc=xCAT::RemoteShellExp->remoteshellexp("s",$::CALLBACK,"/usr/bin/ssh",$enablenodes,$expecttimeout);
if ($rc != 0)
{
$rsp->{data}->[0] = "remoteshellexp failed sending keys to enablenodes.";
xCAT::MsgUtils->message("E", $rsp, $::CALLBACK);
}
}
if ($disablenodes) { # node on list to disable nodetonodessh
chop $disablenodes; # remove last comma
# send the keys to the nodes
my $rc=xCAT::RemoteShellExp->remoteshellexp("s",$::CALLBACK,"/usr/bin/ssh",$disablenodes,$expecttimeout);
if ($rc != 0)
{
$rsp->{data}->[0] = "remoteshellexp failed sending keys to disablenodes.";
xCAT::MsgUtils->message("E", $rsp, $::CALLBACK);
}
}
}
#--------------------------------------------------------------------------------
=head3 sendkeysTOzones
Transfers the ssh keys
for the root id on the nodes using the zone table.
If the node is in a zone, the root ssh keys for the node are taken from the zone's ssh key directory, not ~/.ssh.
Zones are only supported on nodes that are not service nodes.
Also, for the call to RemoteShellExp, we must group the nodes that are in the same zone.
Arguments:
Array of nodes
Timeout for expect call (optional)
Returns:
Env Variables: $DSH_FROM_USERID, $DSH_TO_USERID, $DSH_REMOTE_PASSWORD
the ssh keys are transferred from the $DSH_FROM_USERID to the $DSH_TO_USERID
on the node(s). The DSH_REMOTE_PASSWORD and the DSH_FROM_USERID
must be obtained by
the calling script or from the xdsh client
Globals:
$::XCATROOT , $::CALLBACK
Error:
0=good, 1=error
Example:
xCAT::TableUtils->sendkeysTOzones($ref_nodes,$expecttimeout);
Comments:
Does not setup known_hosts. Assumes automatically
setup by SSH ( ssh config option StrictHostKeyChecking no should
be set in the ssh config file).
=cut
#--------------------------------------------------------------------------------
sub sendkeysTOzones
{
my ($class, $ref_nodes,$expecttimeout) = @_;
my @nodes=$ref_nodes;
my $n_str = $nodes[0];
@nodes = split(",", $n_str);
my $rsp = {};
my $cmd;
my $roothome = xCAT::Utils->getHomeDir("root");
my $zonehash =xCAT::Zone->getzoneinfo($::CALLBACK,\@nodes);
foreach my $zonename (keys %$zonehash) {
# build list of nodes
my $zonenodelist="";
foreach my $node (@{$zonehash->{$zonename}->{nodes}}) {
$zonenodelist .= $node;
$zonenodelist .= ",";
}
$zonenodelist =~ s/,$//; # remove last comma
# if any nodes defined for the zone
if ($zonenodelist) {
# check to see if we enable passwordless ssh between the nodes
if (!(defined($zonehash->{$zonename}->{sshbetweennodes}))||
(($zonehash->{$zonename}->{sshbetweennodes} =~ /^yes$/i )
|| ($zonehash->{$zonename}->{sshbetweennodes} eq "1"))) {
$ENV{'DSH_ENABLE_SSH'} = "YES";
} else {
delete $ENV{'DSH_ENABLE_SSH'}; # do not enable passwordless ssh
}
# point to the ssh keys to send for this zone
my $keydir = $zonehash->{$zonename}->{sshkeydir} ;
# check to see if the id_rsa and id_rsa.pub key is in the directory
my $key="$keydir/id_rsa";
my $key2="$keydir/id_rsa.pub";
# Check that both key files exist
if (!(-e $key)) {
my $rsp = {};
$rsp->{error}->[0] =
"The $key file does not exist for $zonename. Need to use chzone to regenerate the keys.";
xCAT::MsgUtils->message("E", $rsp, $::CALLBACK, 1);
return 1;
}
if (!(-e $key2)) {
my $rsp = {};
$rsp->{error}->[0] =
"The $key2 file does not exist for $zonename. Need to use chzone to regenerate the keys.";
xCAT::MsgUtils->message("E", $rsp, $::CALLBACK, 1);
return 1;
}
# now put copy.sh in the zone directory from ~/.ssh
my $rootkeydir="$roothome/.ssh";
if ($rootkeydir ne $keydir) { # the zone keydir is not the same as ~/.ssh.
$cmd="cp $rootkeydir/copy.sh $keydir";
xCAT::Utils->runcmd($cmd, 0);
if ($::RUNCMD_RC != 0)
{
my $rsp = {};
$rsp->{error}->[0] =
"Could not copy copy.sh to the zone key dir";
xCAT::MsgUtils->message("E", $rsp, $::CALLBACK);
return 1;
}
}
# Also create $keydir/tmp and put root's id_rsa.pub (in authorized_keys) for the transfer
$cmd="mkdir -p $keydir/tmp";
xCAT::Utils->runcmd($cmd, 0);
if ($::RUNCMD_RC != 0)
{
my $rsp = {};
$rsp->{error}->[0] =
"Could not mkdir the zone $keydir/tmp";
xCAT::MsgUtils->message("E", $rsp, $::CALLBACK);
return 1;
}
# create authorized_keys file
if (xCAT::Utils->isMN()) { # if on Management Node
$cmd = " cp $roothome/.ssh/id_rsa.pub $keydir/tmp/authorized_keys";
} else { # SN
$cmd = " cp $roothome/.ssh/authorized_keys $keydir/tmp/authorized_keys";
}
xCAT::Utils->runcmd($cmd, 0);
if ($::RUNCMD_RC != 0)
{
$rsp->{data}->[0] = "$cmd failed.\n";
xCAT::MsgUtils->message("E", $rsp, $::CALLBACK);
return (1);
}
else
{
chmod 0600, "$keydir/tmp/authorized_keys";
}
# strip off .ssh
my ($newkeydir,$ssh) = (split(/\.ssh/, $keydir));
$ENV{'DSH_ZONE_SSHKEYS'} =$newkeydir ;
# send the keys to the nodes
my $rc=xCAT::RemoteShellExp->remoteshellexp("s",$::CALLBACK,"/usr/bin/ssh",
$zonenodelist,$expecttimeout);
if ($rc != 0)
{
$rsp = {};
$rsp->{data}->[0] = "remoteshellexp failed sending keys to $zonename.";
xCAT::MsgUtils->message("E", $rsp, $::CALLBACK);
}
} # end nodes in the zone
} # end for each zone
return (0);
}
#--------------------------------------------------------------------------------
=head3 cpSSHFiles
Builds authorized_keyfiles for root
@ -1594,8 +1825,8 @@ sub enableSSH
} else {
# if not a service node we need to check, before enabling
if (defined($groups_hash)) {
if ($groups_hash->{ALLGROUPS} == 1)
if (keys %$groups_hash) { # not empty
if ($groups_hash->{ALLGROUPS} == 1)
{
$enablessh=1;
}
@ -1769,7 +2000,7 @@ sub updatenodegroups {
}
}
my ($ent) = $tabhd->getNodeAttribs($node, ['groups']);
my @list = qw(all);
my @list = ();
if (defined($ent) and $ent->{groups}) {
push @list, split(/,/,$ent->{groups});
}
@ -1783,5 +2014,54 @@ sub updatenodegroups {
@list = keys %saw;
$tabhd->setNodeAttribs($node, {groups=>join(",",@list)});
}
#-----------------------------------------------------------------------------
=head3 rmnodegroups
remove groups from the group attribute for the specified node
Arguments:
node
tabhd: the handler of 'nodelist' table,
groups: the groups that need to be removed.
Can be an array or string.
Globals:
none
Error:
Example:
xCAT::TableUtils->rmnodegroups($node, $tab, $groups);
=cut
#-----------------------------------------------------------------------------
sub rmnodegroups {
my ($class, $node, $tabhd, $groups) = @_;
my ($ent) = $tabhd->getNodeAttribs($node, ['groups']);
my @definedgroups;
my @removegroups;
my @newgroups;
if (defined($ent) and $ent->{groups}) {
push @definedgroups, split(/,/,$ent->{groups});
}
if (ref($groups) eq 'ARRAY') {
push @removegroups, @$groups;
} else {
push @removegroups, split(/,/,$groups);
}
# take out any groups that match the input list of groups to remove
foreach my $dgrp (@definedgroups){
if (grep(/^$dgrp$/, @removegroups)) { # is the group to be removed
next;
} else { # keep this group
push @newgroups,$dgrp;
}
}
my %saw;
@saw{@newgroups} = ();
@newgroups = keys %saw;
$tabhd->setNodeAttribs($node, {groups=>join(",",@newgroups)});
}
1;

View File

@ -207,6 +207,7 @@ my %usage = (
mkvm noderange [--full]
mkvm noderange [vmcpus=min/req/max] [vmmemory=min/req/max]
[vmphyslots=drc_index1,drc_index2...] [vmothersetting=hugepage:N,bsr:N]
[vmnics=vlan1,vlan2] [vmstorage=<N|viosnode:slotid>] [--vios]
For KVM
mkvm noderange -m|--master mastername -s|--size disksize -f|--force
For zVM
@ -241,6 +242,8 @@ my %usage = (
chvm <noderange> [lparname=<*|name>]
chvm <noderange> [vmcpus=min/req/max] [vmmemory=min/req/max]
[vmphyslots=drc_index1,drc_index2...] [vmothersetting=hugepage:N,bsr:N]
[vmnics=vlan1,vlan2] [vmstorage=<N|viosnode:slotid>] [--vios]
chvm <noderange> [del_vadapter=slotid]
VMware specific:
chvm <noderange> [-a size][-d disk][-p disk][--resize disk=size][--cpus count][--mem memory]
zVM specific:
@ -277,7 +280,7 @@ my %usage = (
"lsslp" =>
"Usage: lsslp [-h|--help|-v|--version]
lsslp [<noderange>][-V|--verbose][-i ip[,ip..]][-w][-r|-x|-z][-n][-I][-s FRAME|CEC|MM|IVM|RSA|HMC|CMM|IMM2|FSP]
[-t tries][--vpdtable][-C counts][-T timeout]",
[-u] [--range IPranges][-t tries][--vpdtable][-C counts][-T timeout]",
"rflash" =>
"Usage:
rflash [ -h|--help|-v|--version]

View File

@ -940,18 +940,15 @@ sub runcmd
my ($class, $cmd, $exitcode, $refoutput, $stream) = @_;
$::RUNCMD_RC = 0;
# redirect stderr to stdout
if (!($cmd =~ /2>&1$/)) { $cmd .= ' 2>&1'; }
my $hostname = `/bin/hostname`;
chomp $hostname;
if ($::VERBOSE)
{
# get this systems name as known by xCAT management node
my $Sname = xCAT::InstUtils->myxCATname();
my $msg;
if ($Sname) {
$msg = "Running command on $Sname: $cmd";
} else {
$msg="Running command: $cmd";
}
if ($::VERBOSE)
{
my $msg="Running command on $hostname: $cmd";
if ($::CALLBACK){
my $rsp = {};
@ -960,7 +957,7 @@ sub runcmd
} else {
xCAT::MsgUtils->message("I", "$msg\n");
}
}
}
my $outref = [];
if (!defined($stream) || (length($stream) == 0)) { # do not stream
@ -3372,6 +3369,95 @@ sub filter_nostatusupdate{
}
}
}
}
sub version_cmp {
my $ver_a = shift;
if ($ver_a =~ /xCAT::Utils/)
{
$ver_a = shift;
}
my $ver_b = shift;
my @array_a = ($ver_a =~ /([-.]|\d+|[^-.\d]+)/g);
my @array_b = ($ver_b =~ /([-.]|\d+|[^-.\d]+)/g);
my ($a, $b);
my $len_a = @array_a;
my $len_b = @array_b;
my $len = $len_a;
if ( $len_b < $len_a ) {
$len = $len_b;
}
for ( my $i = 0; $i < $len; $i++ ) {
$a = $array_a[$i];
$b = $array_b[$i];
if ($a eq $b) {
next;
} elsif ( $a eq '-' ) {
return -1;
} elsif ( $b eq '-') {
return 1;
} elsif ( $a eq '.' ) {
return -1;
} elsif ( $b eq '.' ) {
return 1;
} elsif ($a =~ /^\d+$/ and $b =~ /^\d+$/) {
if ($a =~ /^0/ || $b =~ /^0/) {
return ($a cmp $b);
} else {
return ($a <=> $b);
}
} else {
$a = uc $a;
$b = uc $b;
return ($a cmp $b);
}
}
return ( $len_a <=> $len_b )
}
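# Usage sketch added for illustration (behaviour inferred from the code above, not
# part of the original source): version_cmp compares two version strings segment by
# segment, numerically when both segments are plain numbers and case-insensitively
# otherwise; a shorter string that matches as a prefix sorts first.
#   version_cmp("2.8.4", "2.8.10");   # -1, since 4 < 10 numerically
#   version_cmp("2.8",   "2.8.1");    # -1, the shorter version sorts first
#   version_cmp("2.8.4", "2.8.4");    #  0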
#--------------------------------------------------------------------------------
=head3 fullpathbin
returns the full path of a specified binary executable file
Arguments:
string of the bin file name
Returns:
string of the full path name of the binary executable file
Globals:
none
Error:
if the bin file cannot be found in the search paths, the input bin file name is returned
Example:
my $CHKCONFIG = xCAT::Utils->fullpathbin("chkconfig");
Comments:
none
=cut
#--------------------------------------------------------------------------------
sub fullpathbin
{
my $bin=shift;
if( $bin =~ /xCAT::Utils/)
{
$bin=shift;
}
my @paths= ("/bin","/usr/bin","/sbin","/usr/sbin");
my $fullpath=$bin;
foreach my $path (@paths)
{
if (-x $path.'/'.$bin)
{
$fullpath= $path.'/'.$bin;
last;
}
}
return $fullpath;
}
1;

452
perl-xCAT/xCAT/Zone.pm Normal file
View File

@ -0,0 +1,452 @@
#!/usr/bin/env perl
# IBM(c) 2007 EPL license http://www.eclipse.org/legal/epl-v10.html
package xCAT::Zone;
BEGIN
{
$::XCATROOT = $ENV{'XCATROOT'} ? $ENV{'XCATROOT'} : '/opt/xcat';
}
# if AIX - make sure we include perl 5.8.2 in INC path.
# Needed to find perl dependencies shipped in deps tarball.
if ($^O =~ /^aix/i) {
unshift(@INC, qw(/usr/opt/perl5/lib/5.8.2/aix-thread-multi /usr/opt/perl5/lib/5.8.2 /usr/opt/perl5/lib/site_perl/5.8.2/aix-thread-multi /usr/opt/perl5/lib/site_perl/5.8.2));
}
use lib "$::XCATROOT/lib/perl";
# do not put a use or require for xCAT::Table here. Add to each new routine
# needing it to avoid reprocessing of user tables ( ExtTab.pm) for each command call
use POSIX qw(ceil);
use File::Path;
use Socket;
use strict;
use Symbol;
use warnings "all";
#--------------------------------------------------------------------------------
=head1 xCAT::Zone
=head2 Package Description
This program module file is a set of Zone utilities used by the xCAT *zone commands.
=cut
#--------------------------------------------------------------------------------
=head3 genSSHRootKeys
Arguments:
callback for error messages
directory in which to put the ssh RSA keys
zonename
rsa private key to use for generation ( optional)
Returns:
Error: 1 - key generation failure.
Example:
$rc = xCAT::Zone->genSSHRootKeys($callback,$keydir,$zonename,$rsakey);
=cut
#--------------------------------------------------------------------------------
sub genSSHRootKeys
{
my ($class, $callback, $keydir,$zonename,$rsakey) = @_;
#
# create $keydir if needed
#
if (!-d $keydir)
{
my $cmd = "/bin/mkdir -m 700 -p $keydir";
my $output = xCAT::Utils->runcmd("$cmd", 0);
if ($::RUNCMD_RC != 0)
{
my $rsp = {};
$rsp->{error}->[0] =
"Could not create $keydir directory";
xCAT::MsgUtils->message("E", $rsp, $callback);
return 1;
}
}
#need to gen a new rsa key for root for the zone
my $pubfile = "$keydir/id_rsa.pub";
my $pvtfile = "$keydir/id_rsa";
# if exists, remove the old files
if (-r $pubfile)
{
my $cmd = "/bin/rm $keydir/id_rsa*";
my $output = xCAT::Utils->runcmd("$cmd", 0);
if ($::RUNCMD_RC != 0)
{
my $rsp = {};
$rsp->{error}->[0] = "Could not remove id_rsa files from $keydir directory.";
xCAT::MsgUtils->message("E", $rsp, $callback);
return 1;
}
}
# gen new RSA keys
my $cmd;
my $output;
# if private key was input use it
if (defined ($rsakey)) {
$cmd="/usr/bin/ssh-keygen -y -f $rsakey > $pubfile";
$output = xCAT::Utils->runcmd("$cmd", 0);
if ($::RUNCMD_RC != 0)
{
my $rsp = {};
$rsp->{error}->[0] = "Could not generate $pubfile from $rsakey";
xCAT::MsgUtils->message("E", $rsp, $callback);
return 1;
}
# now copy the private key into the directory
$cmd="cp $rsakey $keydir";
$output = xCAT::Utils->runcmd("$cmd", 0);
if ($::RUNCMD_RC != 0)
{
my $rsp = {};
$rsp->{error}->[0] = "Could not run $cmd";
xCAT::MsgUtils->message("E", $rsp, $callback);
return 1;
}
} else { # generate all new keys
$cmd = "/usr/bin/ssh-keygen -t rsa -q -b 2048 -N '' -f $pvtfile";
$output = xCAT::Utils->runcmd("$cmd", 0);
if ($::RUNCMD_RC != 0)
{
my $rsp = {};
$rsp->{error}->[0] = "Could not generate $pubfile";
xCAT::MsgUtils->message("E", $rsp, $callback);
return 1;
}
}
#make sure permissions are correct
$cmd = "chmod 644 $pubfile;chown root $pubfile";
$output = xCAT::Utils->runcmd("$cmd", 0);
if ($::RUNCMD_RC != 0)
{
my $rsp = {};
$rsp->{error}->[0] = "Could not set permission and owner on $pubfile";
xCAT::MsgUtils->message("E", $rsp, $callback);
return 1;
}
}
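# Usage sketch added for illustration; the key directory follows the
# /etc/xcat/sshkeys/<zonename>/.ssh convention described in getzonekeydir below,
# and the variable names here are made up:
#   my $keydir = "/etc/xcat/sshkeys/$zonename/.ssh";
#   my $rc = xCAT::Zone->genSSHRootKeys($callback, $keydir, $zonename);
#   # or, seeding the zone from an existing private key:
#   my $rc2 = xCAT::Zone->genSSHRootKeys($callback, $keydir, $zonename, "/root/.ssh/id_rsa");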
#--------------------------------------------------------------------------------
=head3 getdefaultzone
Arguments:
None
Returns:
Name of the current default zone from the zone table
Example:
my $defaultzone =xCAT::Zone->getdefaultzone($callback);
=cut
#--------------------------------------------------------------------------------
sub getdefaultzone
{
my ($class, $callback) = @_;
my $defaultzone;
# read all the zone table and find the defaultzone, if it exists
my $tab = xCAT::Table->new("zone");
if ($tab){
my @zones = $tab->getAllAttribs('zonename','defaultzone');
foreach my $zone (@zones) {
# Look for the defaultzone=yes/1 entry
if ((defined($zone->{defaultzone})) &&
(($zone->{defaultzone} =~ /^yes$/i )
|| ($zone->{defaultzone} eq "1"))) {
$defaultzone = $zone->{zonename};
}
$tab->close();
}
} else {
my $rsp = {};
$rsp->{error}->[0] =
"Error reading the zone table. ";
xCAT::MsgUtils->message("E", $rsp, $callback);
}
return $defaultzone;
}
#--------------------------------------------------------------------------------
=head3 iszonedefined
Arguments:
zonename
Returns:
1 if the zone is already in the zone table.
Example:
xCAT::Zone->iszonedefined($zonename);
=cut
#--------------------------------------------------------------------------------
sub iszonedefined
{
my ($class,$zonename) = @_;
# checks the zone table to see if input zonename already in the table
my $tab = xCAT::Table->new("zone");
$tab->close();
my $zonehash = $tab->getAttribs({zonename => $zonename},'sshkeydir');
if ( keys %$zonehash) {
return 1;
}else{
return 0;
}
}
#--------------------------------------------------------------------------------
=head3 getzonekeydir
Arguments:
zonename
Returns:
path to the root ssh keys for the zone /etc/xcat/sshkeys/<zonename>/.ssh
1 - zone not defined
Example:
xCAT::Zone->getzonekeydir($zonename);
=cut
#--------------------------------------------------------------------------------
sub getzonekeydir
{
my ($class,$zonename) = @_;
my $tab = xCAT::Table->new("zone");
$tab->close();
my $zonehash = $tab->getAttribs({zonename => $zonename},'sshkeydir');
if ( keys %$zonehash) {
my $zonesshkeydir=$zonehash->{sshkeydir};
return $zonesshkeydir;
}else{
return 1; # this is a bad error zone not defined
}
}
#--------------------------------------------------------------------------------
=head3 getmyzonename
Arguments:
$node -one nodename
Returns:
$zonename
Example:
my $zonename=xCAT::Zone->getmyzonename($node);
=cut
#--------------------------------------------------------------------------------
sub getmyzonename
{
my ($class,$node,$callback) = @_;
my @node;
push @node,$node;
my $zonename;
my $nodelisttab = xCAT::Table->new("nodelist");
my $nodehash = $nodelisttab->getNodesAttribs(\@node, ['zonename']);
$nodelisttab->close();
if ( defined ($nodehash->{$node}->[0]->{zonename})) { # it was defined in the nodelist table
$zonename=$nodehash->{$node}->[0]->{zonename};
} else { # get the default zone
$zonename =xCAT::Zone->getdefaultzone($callback);
}
return $zonename;
}
#--------------------------------------------------------------------------------
=head3 enableSSHbetweennodes
Arguments:
node
callback
Returns:
1 if the zone's sshbetweennodes attribute is yes/1 or undefined
0 if the zone's sshbetweennodes attribute is no/0
Example:
xCAT::Zone->enableSSHbetweennodes($node,$callback);
=cut
#--------------------------------------------------------------------------------
sub enableSSHbetweennodes
{
my ($class,$node,$callback) = @_;
# finds the zone of the node
my $enablessh = 1; # default
my $zonename=xCAT::Zone->getmyzonename($node);
# reads the zone table
my $tab = xCAT::Table->new("zone");
$tab->close();
# read both keys, want to know zone is in the zone table. If sshkeydir is not there
# it is either missing or invalid anyway
my $zonehash = $tab->getAttribs({zonename => $zonename},'sshbetweennodes','sshkeydir');
if (! ( keys %$zonehash)) {
my $rsp = {};
$rsp->{error}->[0] =
"$node has a zonename: $zonename that is not defined in the zone table. Remove the zonename from the node, or create the zone using mkzone. The generated mypostscript may not reflect the correct setting for ENABLESSHBETWEENNODES";
xCAT::MsgUtils->message("E", $rsp, $callback);
return $enablessh;
}
my $sshbetweennodes=$zonehash->{sshbetweennodes};
if (defined ($sshbetweennodes)) {
if (($sshbetweennodes =~ /^no$/i) || ($sshbetweennodes eq "0")) {
$enablessh = 0;
} else {
$enablessh = 1;
}
} else { # not defined default yes
$enablessh = 1 ; # default
}
return $enablessh;
}
#--------------------------------------------------------------------------------
=head3 usingzones
Arguments:
none
Returns:
1 if the zone table is not empty
0 if empty
Example:
xCAT::Zone->usingzones;
=cut
#--------------------------------------------------------------------------------
sub usingzones
{
my ($class) = @_;
# reads the zonetable
my $tab = xCAT::Table->new("zone");
my @zone = $tab->getAllAttribs('zonename');
$tab->close();
if (@zone) {
return 1;
}else{
return 0;
}
}
#--------------------------------------------------------------------------------
=head3 getzoneinfo
Arguments:
callback
An array of nodes
Returns:
Hash reference keyed by zonename, pointing to the nodes in that zone and its sshkeydir
<zonename1> -> {nodes} -> array of nodes in the zone
-> {sshkeydir} -> directory containing the zone's ssh RSA keys
-> {defaultzone} -> whether it is the default zone
Example:
my $zonehash = xCAT::Zone->getzoneinfo($callback,\@nodearray);
Rules:
If the node's nodelist.zonename attribute is a zonename, it is assigned to that zone
If the node's nodelist.zonename attribute is undefined:
If there is a defaultzone in the zone table, the node is assigned to that zone
If there is no defaultzone in the zone table, an error is returned
$::GETZONEINFO_RC
0 = good return
1 = error occurred
=cut
#--------------------------------------------------------------------------------
sub getzoneinfo
{
my ($class, $callback,$nodes) = @_;
$::GETZONEINFO_RC=0;
my $zonehash;
my $defaultzone;
# read all the zone table
my $zonetab = xCAT::Table->new("zone");
my @zones;
if ($zonetab){
@zones = $zonetab->getAllAttribs('zonename','sshkeydir','sshbetweennodes','defaultzone');
$zonetab->close();
if (@zones) {
foreach my $zone (@zones) {
my $zonename=$zone->{zonename};
$zonehash->{$zonename}->{sshkeydir}= $zone->{sshkeydir};
$zonehash->{$zonename}->{defaultzone}= $zone->{defaultzone};
$zonehash->{$zonename}->{sshbetweennodes}= $zone->{sshbetweennodes};
# find the defaultzone
if ((defined($zone->{defaultzone})) &&
(($zone->{defaultzone} =~ /^yes$/i )
|| ($zone->{defaultzone} eq "1"))) {
$defaultzone = $zone->{zonename};
}
}
}
} else {
my $rsp = {};
$rsp->{error}->[0] =
"Error reading the zone table. ";
xCAT::MsgUtils->message("E", $rsp, $callback);
$::GETZONEINFO_RC =1;
return;
}
my $nodelisttab = xCAT::Table->new("nodelist");
my $nodehash = $nodelisttab->getNodesAttribs(\@$nodes, ['zonename']);
# for each of the nodes, look up it's zone name and assign to the zonehash
# If the nodes nodelist.zonename attribute is a zonename, it is assigned to that zone
# If the nodes nodelist.zonename attribute is undefined:
# If there is a defaultzone in the zone table, the node is assigned to that zone
# If there is no defaultzone error out
foreach my $node (@$nodes) {
my $zonename;
$zonename=$nodehash->{$node}->[0]->{zonename};
if (defined($zonename)) { # zonename explicitly defined in nodelist.zonename
# check to see if defined in the zone table
unless ( xCAT::Zone->iszonedefined($zonename)) {
my $rsp = {};
$rsp->{error}->[0] =
"$node has a zonename: $zonename that is not defined in the zone table. Remove the zonename from the node, or create the zone using mkzone.";
xCAT::MsgUtils->message("E", $rsp, $callback);
$::GETZONEINFO_RC =1;
return;
}
push @{$zonehash->{$zonename}->{nodes}},$node;
} else { # no explict zonename
if (defined ($defaultzone)) { # there is a default zone in the zone table, use it
push @{$zonehash->{$defaultzone}->{nodes}},$node;
} else { # if no default, this is an error
my $rsp = {};
$rsp->{error}->[0] =
"There is no default zone defined in the zone table. There must be exactly one default zone. ";
xCAT::MsgUtils->message("E", $rsp, $callback);
$::GETZONEINFO_RC =1;
return;
}
}
}
return $zonehash;
}
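# Illustration only (mirrors how sendkeysTOzones in TableUtils.pm walks the result;
# variable names are made up):
#   my $zonehash = xCAT::Zone->getzoneinfo($callback, \@nodes);
#   return 1 if ($::GETZONEINFO_RC);
#   foreach my $zonename (keys %$zonehash) {
#       my @zonenodes = @{ $zonehash->{$zonename}->{nodes} };
#       my $keydir    = $zonehash->{$zonename}->{sshkeydir};
#       # ... act on the nodes of this zone using its own keys ...
#   }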
#--------------------------------------------------------------------------------
=head3 getnodesinzone
Arguments:
callback
zonename
Returns:
Array of nodes
Example:
my @nodes =xCAT::Zone->getnodesinzone($callback,$zonename);
=cut
#--------------------------------------------------------------------------------
sub getnodesinzone
{
my ($class, $callback,$zonename) = @_;
my @nodes;
my $nodelisttab = xCAT::Table->new("nodelist");
my @nodelist=$nodelisttab->getAllAttribs('node','zonename');
# build the array of nodes in this zone
foreach my $nodename (@nodelist) {
if ((defined($nodename->{'zonename'})) && ($nodename->{'zonename'} eq $zonename)) {
push @nodes,$nodename->{'node'};
}
}
return @nodes;
}
1;

View File

@ -14,9 +14,10 @@ require Exporter;
%distnames = (
"1310229985.226287" => "centos6",
"1323560292.885204" => "centos6.2",
"1341569670.539525" => "centos6.3",#x86
"1362445555.957609" => "centos6.4",#x86_64
"1323560292.885204" => "centos6.2",
"1341569670.539525" => "centos6.3",#x86
"1362445555.957609" => "centos6.4",#x86_64
"1385726732.061157" => "centos6.5",#x86_64
"1176234647.982657" => "centos5",
"1156364963.862322" => "centos4.4",
"1178480581.024704" => "centos4.5",
@ -49,6 +50,8 @@ require Exporter;
"1328205744.315196" => "rhels5.8", #x86_64
"1354216429.587870" => "rhels5.9", #x86_64
"1354214009.518521" => "rhels5.9", #ppc64
"1378846702.129847" => "rhels5.10", #x86_64
"1378845049.643372" => "rhels5.10", #ppc64
"1285193176.460470" => "rhels6", #x86_64
"1285192093.430930" => "rhels6", #ppc64
"1305068199.328169" => "rhels6.1", #x86_64
@ -60,6 +63,8 @@ require Exporter;
"1339638991.532890" => "rhels6.3", #i386
"1359576752.435900" => "rhels6.4", #x86_64
"1359576196.686790" => "rhels6.4", #ppc64
"1384196515.415715" => "rhels6.5", #x86_64
"1384198011.520581" => "rhels6.5", #ppc64
"1285193176.593806" => "rhelhpc6", #x86_64
"1305067719.718814" => "rhelhpc6.1",#x86_64
"1321545261.599847" => "rhelhpc6.2",#x86_64
@ -78,7 +83,7 @@ require Exporter;
"1305315870.828212" => "fedora15", #x86_64 DVD ISO
"1372355769.065812" => "fedora19", #x86_64 DVD ISO
"1372402928.663653" => "fedora19", #ppc64 DVD ISO
"1386856788.124593" => "fedora20", #x86_64 DVD ISO
"1194512200.047708" => "rhas4.6",
"1194512327.501046" => "rhas4.6",
"1241464993.830723" => "rhas4.8", #x86-64

View File

@ -6,84 +6,275 @@
#include <stdlib.h>
#include <errno.h>
#include <netinet/in.h>
int main() {
int serverfd,port;
int getpktinfo = 1;
struct addrinfo hint, *res;
char cmsg[CMSG_SPACE(sizeof(struct in_pktinfo))];
char clientpacket[1024];
struct sockaddr_in clientaddr;
struct msghdr msg;
struct cmsghdr *cmsgptr;
struct iovec iov[1];
unsigned int myip;
char *txtptr;
iov[0].iov_base = clientpacket;
iov[0].iov_len = 1024;
memset(&msg,0,sizeof(msg));
memset(&clientaddr,0,sizeof(clientaddr));
msg.msg_name=&clientaddr;
msg.msg_namelen = sizeof(clientaddr);
msg.msg_iov = iov;
msg.msg_iovlen = 1;
msg.msg_control=&cmsg;
msg.msg_controllen = sizeof(cmsg);
char bootpmagic[4] = {0x63,0x82,0x53,0x63};
int pktsize;
int doexit=0;
port = 4011;
memset(&hint,0,sizeof(hint));
hint.ai_family = PF_INET; /* Would've done UNSPEC, but it doesn't work right and this is heavily v4 specific anyway */
hint.ai_socktype = SOCK_DGRAM;
hint.ai_flags = AI_PASSIVE;
getaddrinfo(NULL,"4011",&hint,&res);
serverfd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
if (!serverfd) { fprintf(stderr,"That's odd...\n"); }
setsockopt(serverfd,IPPROTO_IP,IP_PKTINFO,&getpktinfo,sizeof(getpktinfo));
if (bind(serverfd,res->ai_addr ,res->ai_addrlen) < 0) {
fprintf(stderr,"Unable to bind 4011");
exit(1);
}
while (!doexit) {
pktsize = recvmsg(serverfd,&msg,0);
if (pktsize < 320) {
continue;
}
if (clientpacket[0] != 1 || memcmp(clientpacket+0xec,bootpmagic,4)) {
continue;
}
for (cmsgptr = CMSG_FIRSTHDR(&msg); cmsgptr != NULL; cmsgptr = CMSG_NXTHDR(&msg,cmsgptr)) {
if (cmsgptr->cmsg_level == IPPROTO_IP && cmsgptr->cmsg_type == IP_PKTINFO) {
myip = ((struct in_pktinfo*)(CMSG_DATA(cmsgptr)))->ipi_addr.s_addr;
}
}
clientpacket[0] = 2; //change to a reply
myip = htonl(myip); //endian neutral change
clientpacket[0x14] = (myip>>24)&0xff; //maybe don't need to do this, maybe assigning the whole int would be better
clientpacket[0x15] = (myip>>16)&0xff;
clientpacket[0x16] = (myip>>8)&0xff;
clientpacket[0x17] = (myip)&0xff;
txtptr = clientpacket+0x6c;
strncpy(txtptr,"Boot/bootmgfw.efi",128); // keeping 128 in there just in case someone changes the string
clientpacket[0xf0]=0x35; //DHCP MSG type
clientpacket[0xf1]=0x1; // LEN of 1
clientpacket[0xf2]=0x5; //DHCP ACK
clientpacket[0xf3]=0x36; //DHCP server identifier
clientpacket[0xf4]=0x4; //DHCP server identifier length
clientpacket[0xf5] = (myip>>24)&0xff; //maybe don't need to do this, maybe assigning the whole int would be better
clientpacket[0xf6] = (myip>>16)&0xff;
clientpacket[0xf7] = (myip>>8)&0xff;
clientpacket[0xf8] = (myip)&0xff;
clientpacket[0xf9] = 0xfc; // dhcp 252 'proxy', but coopeted by bootmgfw, it's actually suggesting the boot config file
clientpacket[0xfa] = 9; //length of 9
txtptr = clientpacket+0xfb;
strncpy(txtptr,"Boot/BCD",8);
clientpacket[0x103]=0;
clientpacket[0x104]=0xff;
sendto(serverfd,clientpacket,pktsize,0,(struct sockaddr*)&clientaddr,sizeof(clientaddr));
}
#include <signal.h>
#include <syslog.h>
// the chunk size for each alloc
int chunknum = 200;
int doreload = 0;
int verbose = 0;
char logmsg[256]; // large enough for the longest message built with sprintf below
// the struct to store the winpe configuration for each node
struct nodecfg {
char node[50];
char data[150];
};
char *data = NULL; // the ptr to the array of all node config
int nodenum = 0;
// trigger the main program to reload configuration file
void reload(int sig) {
doreload = 1;
}
// the subroutine which is used to load configuration from
// /var/lib/xcat/proxydhcp.cfg to *data
void loadcfg () {
nodenum = 0;
free(data);
data = NULL;
doreload = 0;
char *dp = NULL;
FILE *fp;
fp = fopen("/var/lib/xcat/proxydhcp.cfg", "r");
if (fp) {
int num = chunknum;
int rtime = 1;
while (num == chunknum) {
// realloc the chunknum size of memory each to save memory usage
data = realloc(data, sizeof(struct nodecfg) * chunknum * rtime);
if (NULL == data) {
fprintf (stderr, "Cannot get enough memory.\n");
free (data);
return;
}
dp = data + sizeof(struct nodecfg) * chunknum * (rtime - 1);
memset(dp, 0, sizeof(struct nodecfg) * chunknum);
num = fread(dp, sizeof (struct nodecfg), chunknum, fp);
nodenum += num;
rtime++;
}
fclose(fp);
}
}
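// Note added for clarity (an assumption based on the fread above): the configuration
// file /var/lib/xcat/proxydhcp.cfg is read as fixed-size binary records of
// struct nodecfg, i.e. a 50-byte NUL-padded node name followed by a 150-byte
// NUL-padded winpe path per entry, not as a line-oriented text file.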
// get the path of winpe from configuration file which is stored in *data
char *getwinpepath(char *node) {
int i;
struct nodecfg *nc = (struct nodecfg *)data;
for (i=0; i<nodenum;i++) {
if (0 == strcmp(nc->node, node)) {
return nc->data;
}
nc++;
}
return NULL;
}
int main(int argc, char *argv[]) {
int i;
for(i = 0; i < argc; i++)
{
if (strcmp(argv[i], "-V") == 0) {
verbose = 1;
setlogmask(LOG_UPTO(LOG_DEBUG));
openlog("proxydhcp", LOG_NDELAY, LOG_LOCAL0);
}
}
// register my pid in /var/run/xcat/proxydhcp.pid
int pid = getpid();
FILE *pidf = fopen ("/var/run/xcat/proxydhcp.pid", "w");
if (pidf) {
fprintf(pidf, "%d", pid);
fclose (pidf);
} else {
fprintf (stderr, "Cannot open /var/run/xcat/proxydhcp.pid\n");
return 1;
}
// load configuration at first start
loadcfg();
// register signal SIGUSR1 for triggering a configuration reload from outside
struct sigaction sigact;
sigact.sa_handler = &reload;
sigaction(SIGUSR1, &sigact, NULL);
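// Illustration, not part of the original source: whatever maintains
// /var/lib/xcat/proxydhcp.cfg can ask this daemon to re-read it by sending
// SIGUSR1 to the pid recorded above, roughly:
//   kill -USR1 "$(cat /var/run/xcat/proxydhcp.pid)"
// The handler only sets doreload; loadcfg() itself runs in the main loop below.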
int serverfd,port;
int getpktinfo = 1;
struct addrinfo hint, *res;
char cmsg[CMSG_SPACE(sizeof(struct in_pktinfo))];
char clientpacket[1024];
struct sockaddr_in clientaddr;
struct msghdr msg;
struct cmsghdr *cmsgptr;
struct iovec iov[1];
unsigned int myip, clientip;
char *txtptr;
iov[0].iov_base = clientpacket;
iov[0].iov_len = 1024;
memset(&msg,0,sizeof(msg));
memset(&clientaddr,0,sizeof(clientaddr));
msg.msg_name=&clientaddr;
msg.msg_namelen = sizeof(clientaddr);
msg.msg_iov = iov;
msg.msg_iovlen = 1;
msg.msg_control=&cmsg;
msg.msg_controllen = sizeof(cmsg);
char defaultwinpe[20] = "Boot/bootmgfw.efi";
char bootpmagic[4] = {0x63,0x82,0x53,0x63};
int pktsize;
int doexit=0;
port = 4011;
memset(&hint,0,sizeof(hint));
hint.ai_family = PF_INET; /* Would've done UNSPEC, but it doesn't work right and this is heavily v4 specific anyway */
hint.ai_socktype = SOCK_DGRAM;
hint.ai_flags = AI_PASSIVE;
getaddrinfo(NULL,"4011",&hint,&res);
serverfd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
if (!serverfd) { fprintf(stderr,"That's odd...\n"); }
setsockopt(serverfd,IPPROTO_IP,IP_PKTINFO,&getpktinfo,sizeof(getpktinfo));
if (bind(serverfd,res->ai_addr ,res->ai_addrlen) < 0) {
fprintf(stderr,"Unable to bind 4011");
exit(1);
}
while (!doexit) {
// use select to wait for the 4011 request packages coming
fd_set fds;
FD_ZERO(&fds);
FD_SET(serverfd, &fds);
struct timeval timeout;
timeout.tv_sec = 30;
timeout.tv_usec = 0;
int rc;
if ((rc = select(serverfd+1,&fds,0,0, &timeout)) <= 0) {
if (doreload) {
loadcfg();
fprintf(stderr, "load in select\n");
}
if (verbose) {syslog(LOG_DEBUG, "reload /var/lib/xcat/proxydhcp.cfg\n");}
continue;
}
if (doreload) {
loadcfg();
if (verbose) {syslog(LOG_DEBUG, "reload /var/lib/xcat/proxydhcp.cfg\n");}
}
pktsize = recvmsg(serverfd,&msg,0);
if (pktsize < 320) {
continue;
}
if (clientpacket[0] != 1 || memcmp(clientpacket+0xec,bootpmagic,4)) {
continue;
}
for (cmsgptr = CMSG_FIRSTHDR(&msg); cmsgptr != NULL; cmsgptr = CMSG_NXTHDR(&msg,cmsgptr)) {
if (cmsgptr->cmsg_level == IPPROTO_IP && cmsgptr->cmsg_type == IP_PKTINFO) {
myip = ((struct in_pktinfo*)(CMSG_DATA(cmsgptr)))->ipi_addr.s_addr;
}
}
// get the ip of dhcp client
clientip = 0;
int i;
for (i = 0; i< 4; i++) {
clientip = clientip << 8;
clientip += (unsigned char)clientpacket[15-i];
}
// get the winpe path
struct hostent *host = gethostbyaddr(&clientip, sizeof(clientip), AF_INET);
char *winpepath = defaultwinpe;
if (host) {
if (host->h_name) {
// remove the domain part from hostname
char *place = strstr(host->h_name, ".");
if (place) {
*place = '\0';
}
winpepath = getwinpepath(host->h_name);
if (winpepath == NULL) {
winpepath = defaultwinpe;
}
if (verbose) {
sprintf(logmsg, "Received proxydhcp request from %s\n", host->h_name);
syslog(LOG_DEBUG, "%s", logmsg);
}
}
} else {
winpepath = defaultwinpe;
}
// get the Vendor class identifier
char *arch = NULL;
unsigned char *p = clientpacket + 0xf0;
while (*p != 0xff && p < (unsigned char *)clientpacket + pktsize) {
if (*p == 60) {
arch = p + 0x11;
break;
} else {
p += *(p+1) + 2;
}
}
char winboot[200]; // the bootloader path for winpe; sized to hold nodecfg.data plus the loader name
memset(winboot, 0, sizeof(winboot));
if (arch && 0 == memcmp(arch, "00000", 5)) { // bios boot mode
strcpy(winboot, winpepath);
strcat(winboot, "Boot/pxeboot.0");
} else if (arch && 0 == memcmp(arch, "00007", 5)) { // uefi boot mode
strcpy(winboot, winpepath);
strcat(winboot, "Boot/bootmgfw.efi");
}
clientpacket[0] = 2; //change to a reply
myip = htonl(myip); //endian neutral change
clientpacket[0x14] = (myip>>24)&0xff; //maybe don't need to do this, maybe assigning the whole int would be better
clientpacket[0x15] = (myip>>16)&0xff;
clientpacket[0x16] = (myip>>8)&0xff;
clientpacket[0x17] = (myip)&0xff;
txtptr = clientpacket+0x6c;
strncpy(txtptr, winboot ,128); // keeping 128 in there just in case someone changes the string
//strncpy(txtptr,"winboot/new/Boot/bootmgfw.efi",128); // keeping 128 in there just in case someone changes the string
//strncpy(txtptr,"Boot/pxeboot.0",128); // keeping 128 in there just in case someone changes the string
clientpacket[0xf0]=0x35; //DHCP MSG type
clientpacket[0xf1]=0x1; // LEN of 1
clientpacket[0xf2]=0x5; //DHCP ACK
clientpacket[0xf3]=0x36; //DHCP server identifier
clientpacket[0xf4]=0x4; //DHCP server identifier length
clientpacket[0xf5] = (myip>>24)&0xff; //maybe don't need to do this, maybe assigning the whole int would be better
clientpacket[0xf6] = (myip>>16)&0xff;
clientpacket[0xf7] = (myip>>8)&0xff;
clientpacket[0xf8] = (myip)&0xff;
char winBCD[200]; // sized to hold nodecfg.data plus "Boot/BCD"
strcpy(winBCD, winpepath);
strcat(winBCD, "Boot/BCD");
clientpacket[0xf9] = 0xfc; // dhcp option 252 'proxy', but co-opted by bootmgfw; it actually suggests the boot config file
clientpacket[0xfa] = strlen(winBCD) + 1; // length of the BCD path plus the trailing NUL
txtptr = clientpacket+0xfb;
strncpy(txtptr, winBCD, strlen(winBCD));
clientpacket[0xfa + strlen(winBCD) + 1] = 0;
clientpacket[0xfa + strlen(winBCD) + 2] = 0xff;
sendto(serverfd,clientpacket,pktsize,0,(struct sockaddr*)&clientaddr,sizeof(clientaddr));
if (verbose) {
sprintf(logmsg, "Path of bootloader:%s. Path of BCD:%s\n", winboot, winBCD);
syslog(LOG_DEBUG, "%s", logmsg);
}
}
if (verbose) { closelog();}
}

View File

@ -14,7 +14,7 @@
# postscript (stateful install) or with the otherpkgs processing of
# genimage (stateless/statelite install). This script will install any
# gpfs update rpms that exist on the xCAT management node in the
# /install/post/gpfs_updates directory.
# /install/post/otherpkgs/gpfs_updates directory.
# This is necessary because the GPFS updates can ONLY be installed
# after the base rpms have been installed, and the update rpms cannot
# exist in any rpm repositories used by xCAT otherpkgs processing

View File

@ -0,0 +1,6 @@
xcat-openstack-baremetal for Debian
-----------------------------------
<possible notes regarding this package - if none, delete this file>
-- root <root@unknown> Wed, 12 Mar 2014 01:47:54 -0700

View File

@ -0,0 +1,9 @@
xcat-openstack-baremetal for Debian
-----------------------------------
<this file describes information about the source package, see Debian policy
manual section 4.14. You WILL either need to modify or delete this file>

View File

@ -0,0 +1,5 @@
xcat-openstack-baremetal (2.8.4-1) unstable; urgency=low
* Initial release (Closes: #nnnn) <nnnn is the bug number of your ITP>
-- root <root@unknown> Wed, 12 Mar 2014 01:47:54 -0700

View File

@ -0,0 +1 @@
8

View File

@ -0,0 +1,14 @@
Source: xcat-openstack-baremetal
Section: admin
Priority: extra
Maintainer: xCAT <xcat-user@lists.sourceforge.net>
Build-Depends: debhelper (>= 8.0.0)
Standards-Version: 3.9.4
Homepage: http://xcat.sourceforge.net/
#Vcs-Git: git://git.debian.org/collab-maint/xcat-openstack-baremetal.git
#Vcs-Browser: http://git.debian.org/?p=collab-maint/xcat-openstack-baremetal.git;a=summary
Package: xcat-openstack-baremetal
Architecture: all
Depends: xCAT-client
Description: Executables and data of the xCAT baremetal driver for OpenStack

View File

@ -0,0 +1,38 @@
Format: http://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: xcat-openstack-baremetal
Source: <url://example.com>
Files: *
Copyright: <years> <put author's name and email here>
<years> <likewise for another author>
License: <special license>
<Put the license of the package here indented by 1 space>
<This follows the format of Description: lines in control file>
.
<Including paragraphs>
# If you want to use GPL v2 or later for the /debian/* files use
# the following clauses, or change it to suit. Delete these two lines
Files: debian/*
Copyright: 2014 root <root@unknown>
License: GPL-2+
This package is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
.
This package is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>
.
On Debian systems, the complete text of the GNU General
Public License version 2 can be found in "/usr/share/common-licenses/GPL-2".
# Please also look if there are files or directories which have a
# different copyright/license attached and list them here.
# Please avoid to pick license terms that are more restrictive than the
# packaged work, as it may make Debian's contributions unacceptable upstream.

View File

@ -0,0 +1,7 @@
opt/xcat/bin
opt/xcat/sbin
opt/xcat/lib/perl/xCAT_plugin
opt/xcat/lib/python/xcat/openstack/baremetal
opt/xcat/share/xcat/openstack/postscripts
opt/xcat/share/man/man1
opt/xcat/share/doc/man1

View File

View File

@ -0,0 +1,2 @@
xcat-openstack-baremetal_2.8.4-1_all.deb admin extra
xcat-openstack-baremetal_2.8.4-1_all.deb admin extra

View File

@ -0,0 +1,6 @@
lib/* opt/xcat/lib/
share/xcat/openstack/postscripts/* opt/xcat/share/xcat/openstack/postscripts/
share/man/man1/* opt/xcat/share/man/man1/
share/doc/man1/* opt/xcat/share/doc/man1/

View File

@ -0,0 +1,43 @@
#!/bin/sh
# postinst script for xcat-openstack-baremetal
#
# see: dh_installdeb(1)
set -e
# summary of how this script can be called:
# * <postinst> `configure' <most-recently-configured-version>
# * <old-postinst> `abort-upgrade' <new version>
# * <conflictor's-postinst> `abort-remove' `in-favour' <package>
# <new-version>
# * <postinst> `abort-remove'
# * <deconfigured's-postinst> `abort-deconfigure' `in-favour'
# <failed-install-package> <version> `removing'
# <conflicting-package> <version>
# for details, see http://www.debian.org/doc/debian-policy/ or
# the debian-policy package
case "$1" in
configure)
#copy the postscripts under the /install/postscripts directory on MN only
if [ -f "/etc/xCATMN" ]; then
cp /opt/xcat/share/xcat/openstack/postscripts/* /install/postscripts/
fi
;;
abort-upgrade|abort-remove|abort-deconfigure)
;;
*)
echo "postinst called with unknown argument \`$1'" >&2
exit 1
;;
esac
# dh_installdeb will replace this with shell code automatically
# generated by other debhelper scripts.
#DEBHELPER#
exit 0

View File

@ -0,0 +1,46 @@
#!/bin/sh
# prerm script for xcat-openstack-baremetal
#
# see: dh_installdeb(1)
set -e
# summary of how this script can be called:
# * <prerm> `remove'
# * <old-prerm> `upgrade' <new-version>
# * <new-prerm> `failed-upgrade' <old-version>
# * <conflictor's-prerm> `remove' `in-favour' <package> <new-version>
# * <deconfigured's-prerm> `deconfigure' `in-favour'
# <package-being-installed> <version> `removing'
# <conflicting-package> <version>
# for details, see http://www.debian.org/doc/debian-policy/ or
# the debian-policy package
case "$1" in
remove|upgrade|deconfigure)
#remove the postscripts under the /install/postscripts directory on MN only
if [ -f "/etc/xCATMN" ]; then
for fn in /opt/xcat/share/xcat/openstack/postscripts/*
do
bn=`basename $fn`
rm /install/postscripts/$bn
done
fi
;;
failed-upgrade)
;;
*)
echo "prerm called with unknown argument \`$1'" >&2
exit 1
;;
esac
# dh_installdeb will replace this with shell code automatically
# generated by other debhelper scripts.
#DEBHELPER#
exit 0

View File

@ -0,0 +1,44 @@
#!/usr/bin/make -f
# -*- makefile -*-
# Sample debian/rules that uses debhelper.
# This file was originally written by Joey Hess and Craig Small.
# As a special exception, when this file is copied by dh-make into a
# dh-make output file, you may use that output file without restriction.
# This special exception was added by Craig Small in version 0.37 of dh-make.
# Uncomment this to turn on verbose mode.
#export DH_VERBOSE=1
build:
pwd
`pwd`/xpod2man
clean:
dh_testdir
dh_testroot
dh_clean -d
install:
pwd
dh_testdir
dh_testroot
dh_installdirs
dh_install -X".svn"
chmod 444 `pwd`/debian/xcat-openstack-baremetal/opt/xcat/share/man/man1/*
chmod 644 `pwd`/debian/xcat-openstack-baremetal/opt/xcat/share/doc/man1/*
dh_link
binary-indep: build install
pwd
export
dh_installman
dh_compress
dh_installdeb
dh_gencontrol
dh_md5sums
dh_builddeb
binary-arch:
pwd
binary: binary-indep binary-arch
.PHONY: build clean binary-indep binary-arch binary install configure

View File

@ -0,0 +1 @@
1.0

View File

@ -0,0 +1,201 @@
dh_installdirs
dh_installdirs
dh_install
dh_link
dh_installman
dh_compress
dh_installdeb
dh_gencontrol
dh_md5sums
dh_builddeb
dh_builddeb
dh_installdirs
dh_install
dh_link
dh_installman
dh_compress
dh_installdeb
dh_gencontrol
dh_md5sums
dh_builddeb
dh_builddeb
dh_installdirs
dh_install
dh_link
dh_installman
dh_compress
dh_installdeb
dh_gencontrol
dh_md5sums
dh_builddeb
dh_builddeb
dh_installdirs
dh_install
dh_link
dh_installman
dh_compress
dh_installdeb
dh_gencontrol
dh_md5sums
dh_builddeb
dh_builddeb
dh_installdirs
dh_install
dh_link
dh_installman
dh_compress
dh_installdeb
dh_gencontrol
dh_md5sums
dh_builddeb
dh_builddeb
dh_installdirs
dh_install
dh_link
dh_installman
dh_compress
dh_installdeb
dh_gencontrol
dh_md5sums
dh_builddeb
dh_builddeb
dh_installdirs
dh_install
dh_link
dh_installman
dh_compress
dh_installdeb
dh_gencontrol
dh_md5sums
dh_builddeb
dh_builddeb
dh_installdirs
dh_install
dh_link
dh_installman
dh_compress
dh_installdeb
dh_gencontrol
dh_md5sums
dh_builddeb
dh_builddeb
dh_installdirs
dh_install
dh_link
dh_installman
dh_compress
dh_installdeb
dh_gencontrol
dh_md5sums
dh_builddeb
dh_builddeb
dh_installdirs
dh_install
dh_link
dh_installman
dh_compress
dh_installdeb
dh_gencontrol
dh_md5sums
dh_builddeb
dh_builddeb
dh_installdirs
dh_install
dh_link
dh_installman
dh_compress
dh_installdeb
dh_gencontrol
dh_md5sums
dh_builddeb
dh_builddeb
dh_installdirs
dh_install
dh_link
dh_installman
dh_compress
dh_installdeb
dh_gencontrol
dh_md5sums
dh_builddeb
dh_builddeb
dh_installdirs
dh_install
dh_link
dh_installman
dh_compress
dh_installdeb
dh_gencontrol
dh_md5sums
dh_builddeb
dh_builddeb
dh_installdirs
dh_install
dh_link
dh_installman
dh_compress
dh_installdeb
dh_gencontrol
dh_md5sums
dh_builddeb
dh_builddeb
dh_installdirs
dh_install
dh_link
dh_installman
dh_compress
dh_installdeb
dh_gencontrol
dh_md5sums
dh_builddeb
dh_builddeb
dh_installdirs
dh_install
dh_link
dh_installman
dh_compress
dh_installdeb
dh_gencontrol
dh_md5sums
dh_builddeb
dh_builddeb
dh_installdirs
dh_install
dh_link
dh_installman
dh_compress
dh_installdeb
dh_gencontrol
dh_md5sums
dh_builddeb
dh_builddeb
dh_installdirs
dh_install
dh_link
dh_installman
dh_compress
dh_installdeb
dh_gencontrol
dh_md5sums
dh_builddeb
dh_builddeb
dh_installdirs
dh_install
dh_link
dh_installman
dh_compress
dh_installdeb
dh_gencontrol
dh_md5sums
dh_builddeb
dh_builddeb
dh_installdirs
dh_install
dh_link
dh_installman
dh_compress
dh_installdeb
dh_gencontrol
dh_md5sums
dh_builddeb
dh_builddeb

View File

@ -0,0 +1,4 @@
opt/xcat/bin/xcatclient opt/xcat/sbin/deploy_ops_bm_node
opt/xcat/bin/xcatclient opt/xcat/sbin/cleanup_ops_bm_node
opt/xcat/bin/xcatclient opt/xcat/bin/opsaddbmnode
opt/xcat/bin/xcatclientnnr opt/xcat/bin/opsaddimage

View File

@ -0,0 +1 @@
misc:Depends=

View File

@ -0,0 +1,904 @@
# IBM(c) 2013 EPL license http://www.eclipse.org/legal/epl-v10.html
package xCAT_plugin::openstack;
BEGIN
{
$::XCATROOT = $ENV{'XCATROOT'} ? $ENV{'XCATROOT'} : '/opt/xcat';
}
use lib "$::XCATROOT/lib/perl";
use xCAT::Utils;
use xCAT::TableUtils;
use xCAT::SvrUtils;
use xCAT::NetworkUtils;
use xCAT::Table;
use Data::Dumper;
use File::Path;
use File::Copy;
use Getopt::Long;
Getopt::Long::Configure("bundling");
Getopt::Long::Configure("pass_through");
sub handled_commands {
return {
opsaddbmnode => "openstack", #external command
opsaddimage => "openstack", #external command
deploy_ops_bm_node => "openstack", #internal command called from the baremetal driver
cleanup_ops_bm_node => "openstack", #internal command called from the baremetal driver
}
}
sub process_request {
my $request = shift;
my $callback = shift;
my $doreq = shift;
my $command = $request->{command}->[0];
if ($command eq "opsaddbmnode") {
return opsaddbmnode($request, $callback, $doreq);
} elsif ($command eq "opsaddimage") {
return opsaddimage($request, $callback, $doreq);
} elsif ($command eq "deploy_ops_bm_node") {
return deploy_ops_bm_node($request, $callback, $doreq);
} elsif ($command eq "cleanup_ops_bm_node") {
return cleanup_ops_bm_node($request, $callback, $doreq);
} else {
$callback->({error=>["Unsupported command: $command."],errorcode=>[1]});
return 1;
}
}
#-------------------------------------------------------------------------------
=head3 opsaddbmnode
This function takes the xCAT nodes and registers them
as OpenStack baremetal nodes
=cut
#-------------------------------------------------------------------------------
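# Usage sketch added for illustration (node and host names are made up; the -s flag
# comes from the option parsing below):
#   opsaddbmnode node01-node10 -s opscompute1
# registers xCAT nodes node01-node10 as OpenStack baremetal nodes served by the
# compute host opscompute1, pulling bmc, mac, cpucount, memory and disksize
# from the xCAT tables.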
sub opsaddbmnode {
my $request = shift;
my $callback = shift;
my $doreq = shift;
@ARGV = @{$request->{arg}};
Getopt::Long::Configure("bundling");
Getopt::Long::Configure("no_pass_through");
my $help;
my $version;
my $host;
if(!GetOptions(
'h|help' => \$help,
'v|version' => \$version,
's=s' => \$host,
))
{
&opsaddbmnode_usage($callback);
return 1;
}
# display the usage if -h or --help is specified
if ($help) {
&opsaddbmnode_usage($callback);
return 0;
}
# display the version statement if -v or --version is specified
if ($version)
{
my $rsp={};
$rsp->{data}->[0]= xCAT::Utils->Version();
$callback->($rsp);
return 0;
}
if (!$request->{node}) {
$callback->({error=>["Please specify at least one node."],errorcode=>[1]});
return 1;
}
if (!$host) {
$callback->({error=>["Please specify the OpenStack compute host name with -s flag."],errorcode=>[1]});
return 1;
}
my $nodes = $request->{node};
#get node mgt
my $nodehmhash;
my $nodehmtab = xCAT::Table->new("nodehm");
if ($nodehmtab) {
$nodehmhash = $nodehmtab->getNodesAttribs($nodes,['power', 'mgt']);
}
#get bmc info for the nodes
my $ipmitab = xCAT::Table->new("ipmi", -create => 0);
my $tmp_ipmi;
if ($ipmitab) {
$tmp_ipmi = $ipmitab->getNodesAttribs($nodes, ['bmc','username', 'password'], prefetchcache=>1);
#print Dumper($tmp_ipmi);
} else {
$callback->({error=>["Cannot open the ipmi table."],errorcode=>[1]});
return 1;
}
#get mac for the nodes
my $mactab = xCAT::Table->new("mac", -create => 0);
my $tmp_mac;
if ($mactab) {
$tmp_mac = $mactab->getNodesAttribs($nodes, ['mac'], prefetchcache=>1);
#print Dumper($tmp_mac);
} else {
$callback->({error=>["Cannot open the mac table."],errorcode=>[1]});
return 1;
}
#get cpu, memory and disk info for the nodes
my $hwinvtab = xCAT::Table->new("hwinv", -create => 0);
my $tmp_hwinv;
if ($hwinvtab) {
$tmp_hwinv = $hwinvtab->getNodesAttribs($nodes, ['cpucount', 'memory', 'disksize'], prefetchcache=>1);
#print Dumper($tmp_hwinv);
} else {
$callback->({error=>["Cannot open the hwinv table."],errorcode=>[1]});
return 1;
}
#get default username and password for bmc
my $d_bmcuser;
my $d_bmcpasswd;
my $passtab = xCAT::Table->new('passwd');
if ($passtab) {
($tmp_passwd)=$passtab->getAttribs({'key'=>'ipmi'},'username','password');
if (defined($tmp_passwd)) {
$d_bmcuser = $tmp_passwd->{username};
$d_bmcpasswd = $tmp_passwd->{password};
}
}
#print "d_bmcuser=$d_bmcuser, d_bmcpasswd=$d_bmcpasswd \n";
foreach my $node (@$nodes) {
#collect the node information needed for each node; some info is optional
my $mgt;
my $ref_nodehm = $nodehmhash->{$node}->[0];
if ($ref_nodehm) {
if ($ref_nodehm->{'power'}) {
$mgt = $ref_nodehm->{'power'};
} elsif ($ref_nodehm->{'mgt'}) {
$mgt = $ref_nodehm->{'mgt'};
}
}
my ($bmc, $bmc_user, $bmc_password, $mac, $cpu, $memory, $disk);
if (($mgt) && ($mgt eq 'ipmi')) {
my $ref_ipmi = $tmp_ipmi->{$node}->[0];
if ($ref_ipmi) {
if (exists($ref_ipmi->{bmc})) {
$bmc = $ref_ipmi->{bmc};
}
if (exists($ref_ipmi->{username})) {
$bmc_user = $ref_ipmi->{username};
if (exists($ref_ipmi->{password})) {
$bmc_password = $ref_ipmi->{password};
}
} else { #take the default if they cannot be found on ipmi table
if ($d_bmcuser) { $bmc_user = $d_bmcuser; }
if ($d_bmcpasswd) { $bmc_password = $d_bmcpasswd; }
}
}
} # else { # for hardware control point other than ipmi, just fake it in OpenStack.
#$bmc = "0.0.0.0";
#$bmc_user = "xCAT";
#$bmc_password = "xCAT";
#}
my $ref_mac = $tmp_mac->{$node}->[0];
if ($ref_mac) {
if (exists($ref_mac->{mac})) {
$mac = $ref_mac->{mac};
}
}
$ref_hwinv = $tmp_hwinv->{$node}->[0];
if ($ref_hwinv) {
if (exists($ref_hwinv->{cpucount})) {
$cpu = $ref_hwinv->{cpucount};
}
if (exists($ref_hwinv->{memory})) {
$memory = $ref_hwinv->{memory};
#TODO: what if the unit is not in MB? We need to convert it to MB
$memory =~ s/MB|mb//g;
}
if (exists($ref_hwinv->{disksize})) {
#The format of the disk size is: sda:250GB,sdb:250GB or just 250GB
#We need to get the size of the first one
$disk = $ref_hwinv->{disksize};
my @a = split(',', $disk);
my @b = split(':', $a[0]);
if (@b > 1) {
$disk = $b[1];
} else {
$disk = $b[0];
}
#print "a=@a, b=@b\n";
#TODO: what if the unit is not in GB? We need to convert it to MB
$disk =~ s/GB|gb//g;
}
}
#some info is mandatory
if (!$mac) {
$callback->({error=>["Mac address is not defined in the mac table for node $node."],errorcode=>[1]});
next;
}
if (!$cpu) {
#default cpu count is 1
$cpu = 1;
}
if (!$memory) {
#default memory size is 1024MB=1GB
$memory = 1024;
}
if (!$disk) {
#default disk size is 1GB
$disk = 1;
}
#print "$bmc, $bmc_user, $bmc_password, $mac, $cpu, $memory, $disk\n";
#call OpenStack command to add the node into the OpenStack as
#a baremetal node.
my $cmd_tmp = "nova baremetal-node-create";
if ($bmc) {
#make sure it is an ip address
if (($bmc !~ /\d+\.\d+\.\d+\.\d+/) && ($bmc !~ /:/)) {
$bmc = xCAT::NetworkUtils->getipaddr($bmc);
}
$cmd_tmp .= " --pm_address=$bmc";
}
if ($bmc_user) {
$cmd_tmp .= " --pm_user=$bmc_user";
}
if ($bmc_password) {
$cmd_tmp .= " --pm_password=$bmc_password";
}
$cmd_tmp .= " $host $cpu $memory $disk $mac";
my $cmd = qq~source \~/openrc;$cmd_tmp~;
#print "cmd=$cmd\n";
my $output =
xCAT::InstUtils->xcmd($callback, $doreq, "xdsh", [$host], $cmd, 0);
if ($::RUNCMD_RC != 0) {
my $rsp;
push @{$rsp->{data}}, "OpenStack creating baremetal node $node:";
push @{$rsp->{data}}, "$output";
xCAT::MsgUtils->message("E", $rsp, $callback);
}
}
}
#-------------------------------------------------------------------------------
=head3 opsaddimage
This function registers the given xCAT osimages as
images in the OpenStack cloud.
=cut
#-------------------------------------------------------------------------------
sub opsaddimage {
my $request = shift;
my $callback = shift;
my $doreq = shift;
@ARGV = @{$request->{arg}};
Getopt::Long::Configure("bundling");
Getopt::Long::Configure("no_pass_through");
my $help;
my $version;
#my $cloud;
my $ops_img_names;
my $controller;
if(!GetOptions(
'h|help' => \$help,
'v|version' => \$version,
'c=s' => \$controller,
'n=s' => \$ops_img_names,
))
{
&opsaddimage_usage($callback);
return 1;
}
# display the usage if -h or --help is specified
if ($help) {
&opsaddimage_usage($callback);
return 0;
}
# display the version statement if -v or --version is specified
if ($version)
{
my $rsp={};
$rsp->{data}->[0]= xCAT::Utils->Version();
$callback->($rsp);
return 0;
}
if (@ARGV ==0) {
$callback->({error=>["Please specify an image name or a list of image names."],errorcode=>[1]});
return 1;
}
#make sure the input cloud name is valid.
#if (!$cloud) {
# $callback->({error=>["Please specify the name of the cloud with -c flag."],errorcode=>[1]});
# return 1;
#} else {
# my $cloudstab = xCAT::Table->new('clouds', -create => 0);
# my @et = $cloudstab->getAllAttribs('name', 'controller');
# if(@et) {
# foreach my $tmp_et (@et) {
# if ($tmp_et->{name} eq $cloud) {
# if ($tmp_et->{controller}) {
# $controller = $tmp_et->{controller};
# last;
# } else {
# $callback->({error=>["Please specify the controller in the clouds table for the cloud: $cloud."],errorcode=>[1]});
# return 1;
# }
# }
# }
# }
if (!$controller) {
$callback->({error=>["Please specify the OpenStack controller node name with -c."],errorcode=>[1]});
return 1;
}
#}
#make sure that the images from the command are valid image names
my @images = split(',', $ARGV[0]);
my @new_names = ();
if ($ops_img_names) {
@new_names = split(',', $ops_img_names);
}
#print "images=@images, new image names=@new_names, controller=$controller\n";
my $image_hash = {};
my $osimgtab = xCAT::Table->new('osimage', -create => 0);
my @et = $osimgtab->getAllAttribs('imagename');
if(@et) {
foreach my $tmp_et (@et) {
$image_hash->{$tmp_et->{imagename}}{'xCAT'} = 1;
}
}
my @bad_images;
foreach my $image (@images) {
if (!exists($image_hash->{$image})) {
push @bad_images, $image;
}
}
if (@bad_images > 0) {
$callback->({error=>["The following images cannot be found in xCAT osimage table:\n " . join("\n ", @bad_images) . "\n"],errorcode=>[1]});
return 1;
}
my $index=0;
foreach my $image (@images) {
my $new_name = shift(@new_names);
if (!$new_name) {
$new_name = $image; #the default new name is xCAT image name
}
my $cmd_tmp = "glance image-create --name $new_name --public --disk-format qcow2 --container-format bare --property xcat_image_name=\'$image\' < /tmp/$image.qcow2";
my $cmd = qq~touch /tmp/$image.qcow2;source \~/openrc;$cmd_tmp;rm /tmp/$image.qcow2~;
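#Example of the generated command with an illustrative image name:
#  glance image-create --name rhels6.3-x86_64-install-compute --public --disk-format qcow2 --container-format bare \
#      --property xcat_image_name='rhels6.3-x86_64-install-compute' < /tmp/rhels6.3-x86_64-install-compute.qcow2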
#print "cmd=$cmd\ncontroller=$controller\n";
my $output =
xCAT::InstUtils->xcmd($callback, $doreq, "xdsh", [$controller], $cmd, 0);
if ($::RUNCMD_RC != 0) {
my $rsp;
push @{$rsp->{data}}, "OpenStack creating image $new_name:";
push @{$rsp->{data}}, "$output";
xCAT::MsgUtils->message("E", $rsp, $callback);
}
}
}
#-------------------------------------------------------------------------------
=head3 deploy_ops_bm_node
This is an internal command called by the OpenStack xCAT-baremetal driver.
It prepares the node by adding the config_ops_bm_node postbootscript
to the postscript table for the node, then calls nodeset and boots
the node up.
=cut
#-------------------------------------------------------------------------------
sub deploy_ops_bm_node {
my $request = shift;
my $callback = shift;
my $doreq = shift;
@ARGV = @{$request->{arg}};
Getopt::Long::Configure("bundling");
Getopt::Long::Configure("no_pass_through");
my $node = $request->{node}->[0];
my $help;
my $version;
my $img_name;
my $hostname;
my $fixed_ip;
my $netmask;
if(!GetOptions(
'h|help' => \$help,
'v|version' => \$version,
'image=s' => \$img_name,
'host=s' => \$hostname,
'ip=s' => \$fixed_ip,
'mask=s' => \$netmask,
))
{
&deploy_ops_bm_node_usage($callback);
return 1;
}
# display the usage if -h or --help is specified
if ($help) {
&deploy_ops_bm_node_usage($callback);
return 0;
}
# display the version statement if -v or --version is specified
if ($version)
{
my $rsp={};
$rsp->{data}->[0]= xCAT::Utils->Version();
$callback->($rsp);
return 0;
}
#print "node=$node, image=$img_name, host=$hostname, ip=$fixed_ip, mask=$netmask\n";
#validate the image name
my $osimagetab = xCAT::Table->new('osimage', -create=>1);
my $ref = $osimagetab->getAttribs({imagename => $img_name}, 'imagename');
if (!$ref) {
$callback->({error=>["Invalid image name: $img_name."],errorcode=>[1]});
return 1;
}
#check if the fixed ip is within the xCAT management network.
#get the master ip address for the node then check if the master ip and
#the OpenStack fixed_ip are on the same subnet.
#my $same_nw = 0;
#my $master = xCAT::TableUtils->GetMasterNodeName($node);
#my $master_ip = xCAT::NetworkUtils->toIP($master);
#if (xCAT::NetworkUtils::isInSameSubnet($master_ip, $fixed_ip, $netmask, 0)) {
# $same_nw = 1;
#}
#add config_ops_bm_node to the node's postbootscript
my $script = "config_ops_bm_node $hostname $fixed_ip $netmask";
add_postscript($callback, $node, $script);
#run nodeset
my $cmd = qq~osimage=$img_name~;
my $output = xCAT::Utils->runxcmd(
{command => ["nodeset"],
node => [$node],
arg => [$cmd]},
$doreq, -1, 1);
if ($::RUNCMD_RC != 0) {
my $rsp;
push @{$rsp->{data}}, "nodeset:";
push @{$rsp->{data}}, "$output";
xCAT::MsgUtils->message("E", $rsp, $callback);
return 1;
}
#deploy the node now; supported nodehm.mgt values: ipmi, blade, fsp, hmc.
my $hmtab = xCAT::Table->new('nodehm');
my $hment = $hmtab->getNodeAttribs($node,['mgt']);
if ($hment && $hment->{'mgt'}) {
my $mgt = $hment->{'mgt'};
if ($mgt eq 'ipmi') {
deploy_bmc_node($callback, $doreq, $node);
} elsif (($mgt eq 'blade') || ($mgt eq 'fsp')) {
deploy_blade($callback, $doreq, $node);
} elsif ($mgt eq 'hmc') {
deploy_hmc_node($callback, $doreq, $node);
} else {
my $rsp;
push @{$rsp->{data}}, "Node $node: nodehm.mgt=$mgt is not supported in the OpenStack cloud.";
xCAT::MsgUtils->message("E", $rsp, $callback);
return 1;
}
} else {
#nodehm.mgt must setup for node
my $rsp;
push @{$rsp->{data}}, "Node $node: nodehm.mgt cannot be empty in order to deploy.";
xCAT::MsgUtils->message("E", $rsp, $callback);
return 1;
}
}
# Deploy a rack-mounted node
sub deploy_bmc_node {
my $callback = shift;
my $doreq = shift;
my $node = shift;
#set boot order
my $cmd = qq~net~;
my $output = xCAT::Utils->runxcmd(
{command => ["rsetboot"],
node => [$node],
arg => [$cmd]},
$doreq, -1, 1);
if ($::RUNCMD_RC != 0) {
my $rsp;
push @{$rsp->{data}}, "rsetboot:";
push @{$rsp->{data}}, "$output";
xCAT::MsgUtils->message("E", $rsp, $callback);
}
#reboot the node
$cmd = qq~boot~;
$output = xCAT::Utils->runxcmd(
{command => ["rpower"],
node => [$node],
arg => [$cmd]},
$doreq, -1, 1);
if ($::RUNCMD_RC != 0) {
my $rsp;
push @{$rsp->{data}}, "rpower:";
push @{$rsp->{data}}, "$output";
xCAT::MsgUtils->message("E", $rsp, $callback);
return 1;
}
}
# Deploy a blade or fsp controlled node
sub deploy_blade {
my $callback = shift;
my $doreq = shift;
my $node = shift;
#set boot order
my $cmd = qq~net~;
my $output = xCAT::Utils->runxcmd(
{command => ["rbootseq"],
node => [$node],
arg => [$cmd]},
$doreq, -1, 1);
if ($::RUNCMD_RC != 0) {
my $rsp;
push @{$rsp->{data}}, "rbootseq:";
push @{$rsp->{data}}, "$output";
xCAT::MsgUtils->message("E", $rsp, $callback);
}
#reboot the node
$cmd = qq~boot~;
$output = xCAT::Utils->runxcmd(
{command => ["rpower"],
node => [$node],
arg => [$cmd]},
$doreq, -1, 1);
if ($::RUNCMD_RC != 0) {
my $rsp;
push @{$rsp->{data}}, "rpower:";
push @{$rsp->{data}}, "$output";
xCAT::MsgUtils->message("E", $rsp, $callback);
return 1;
}
}
# Deploy a node controlled by HMC
sub deploy_hmc_node {
my $callback = shift;
my $doreq = shift;
my $node = shift;
my $output = xCAT::Utils->runxcmd(
{command => ["rnetboot"],
node => [$node]},
$doreq, -1, 1);
if ($::RUNCMD_RC != 0) {
my $rsp;
push @{$rsp->{data}}, "rnetboot:";
push @{$rsp->{data}}, "$output";
xCAT::MsgUtils->message("E", $rsp, $callback);
return 1;
}
}
#-------------------------------------------------------------------------------
=head3 cleanup_ops_bm_node
This is an internal command called by the OpenStack xCAT-baremetal driver.
It undoes all the changes made by the deploy_ops_bm_node command. It removes
the config_ops_bm_node postbootscript from the postscript table for the
node, removes the alias ip and then powers off the node.
=cut
#-------------------------------------------------------------------------------
sub cleanup_ops_bm_node {
my $request = shift;
my $callback = shift;
my $doreq = shift;
@ARGV = @{$request->{arg}};
Getopt::Long::Configure("bundling");
Getopt::Long::Configure("no_pass_through");
my $node = $request->{node}->[0];
my $help;
my $version;
my $fixed_ip;
if(!GetOptions(
'h|help' => \$help,
'v|version' => \$version,
'ip=s' => \$fixed_ip,
))
{
&cleanup_ops_bm_node_usage($callback);
return 1;
}
# display the usage if -h or --help is specified
if ($help) {
&cleanup_ops_bm_node_usage($callback);
return 0;
}
# display the version statement if -v or --version is specified
if ($version)
{
my $rsp={};
$rsp->{data}->[0]= xCAT::Utils->Version();
$callback->($rsp);
return 0;
}
#print "node=$node, ip=$fixed_ip\n";
#removes the config_ops_bm_node postbootscript from the postscripts table
remove_postscript($callback, $node, "config_ops_bm_node");
#run updatenode to remove the ip alias
my $cmd = qq~-P deconfig_ops_bm_node $fixed_ip~;
my $output = xCAT::Utils->runxcmd(
{command => ["updatenode"],
node => [$node],
arg => [$cmd]},
$doreq, -1, 1);
if ($::RUNCMD_RC != 0) {
my $rsp;
push @{$rsp->{data}}, "updatenode:";
push @{$rsp->{data}}, "$output";
xCAT::MsgUtils->message("E", $rsp, $callback);
}
#turn the node power off
$ssh_ok = 0;
$cmd = qq~stat~;
$output = xCAT::Utils->runxcmd(
{command => ["rpower"],
node => [$node],
arg => [$cmd]},
$doreq, -1, 1);
if ($::RUNCMD_RC != 0) {
my $rsp;
push @{$rsp->{data}}, "rpower:";
push @{$rsp->{data}}, "$output";
xCAT::MsgUtils->message("E", $rsp, $callback);
return 1;
} else {
if ($output !~ /: off/) {
#power off the node
my $cmd = qq~off~;
my $output = xCAT::Utils->runxcmd(
{command => ["rpower"],
node => [$node],
arg => [$cmd]},
$doreq, -1, 1);
if ($::RUNCMD_RC != 0) {
my $rsp;
push @{$rsp->{data}}, "rpower:";
push @{$rsp->{data}}, "$output";
xCAT::MsgUtils->message("E", $rsp, $callback);
return 1;
}
}
}
}
#-------------------------------------------------------
=head3 add_postscript
It adds the 'config_ops_bm_node' postbootscript to the
postscript table for the given node.
=cut
#-------------------------------------------------------
sub add_postscript {
my $callback=shift;
my $node=shift;
my $script=shift;
#print "script=$script\n";
my $posttab=xCAT::Table->new("postscripts", -create =>1);
my %setup_hash;
my $ref = $posttab->getNodeAttribs($node,[qw(postscripts postbootscripts)]);
my $found=0;
if ($ref) {
if (exists($ref->{postscripts})) {
my @a = split(/,/, $ref->{postscripts});
if (grep(/^config_ops_bm_node/, @a)) {
$found = 1;
if (!grep(/^$script$/, @a)) {
#not exact match, must replace it with the new script
for (@a) {
s/^config_ops_bm_node.*$/$script/;
}
my $new_post = join(',', @a);
$setup_hash{$node}={postscripts=>"$new_post"};
}
}
}
if (exists($ref->{postbootscripts})) {
my $post=$ref->{postbootscripts};
my @old_a=split(',', $post);
if (grep(/^config_ops_bm_node/, @old_a)) {
if (!grep(/^$script$/, @old_a)) {
#not exact match, will replace it with new script
for (@old_a) {
s/^config_ops_bm_node.*$/$script/;
}
my $new_postboot = join(',', @old_a);
$setup_hash{$node}={postbootscripts=>"$new_postboot"};
}
} else {
if (! $found) {
$setup_hash{$node}={postbootscripts=>"$post,$script"};
}
}
} else {
if (! $found) {
$setup_hash{$node}={postbootscripts=>"$script"};
}
}
} else {
$setup_hash{$node}={postbootscripts=>"$script"};
}
if (keys(%setup_hash) > 0) {
$posttab->setNodesAttribs(\%setup_hash);
}
return 0;
}
#-------------------------------------------------------
=head3 remove_postscript
It removes the 'config_ops_bm_node' postbootscript from
the postscript table for the given node.
=cut
#-------------------------------------------------------
sub remove_postscript {
my $callback=shift;
my $node=shift;
my $script=shift;
my $posttab=xCAT::Table->new("postscripts", -create =>1);
my %setup_hash;
my $ref = $posttab->getNodeAttribs($node,[qw(postscripts postbootscripts)]);
my $found=0;
if ($ref) {
if (exists($ref->{postscripts})) {
my @old_a = split(/,/, $ref->{postscripts});
my @new_a = grep(!/^$script/, @old_a);
if (scalar(@new_a) != scalar(@old_a)) {
my $new_post = join(',', @new_a);
$setup_hash{$node}={postscripts=>"$new_post"};
}
}
if (exists($ref->{postbootscripts})) {
my @old_b = split(/,/, $ref->{postbootscripts});
my @new_b = grep(!/^$script/, @old_b);
if (scalar(@new_b) != scalar(@old_b)) {
my $new_post = join(',', @new_b);
$setup_hash{$node}={postbootscripts=>"$new_post"};
}
}
}
if (keys(%setup_hash) > 0) {
$posttab->setNodesAttribs(\%setup_hash);
}
return 0;
}
#-------------------------------------------------------------------------------
=head3 opsaddbmnode_usage
The usage text for opsaddbmnode command.
=cut
#-------------------------------------------------------------------------------
sub opsaddbmnode_usage {
my $cb=shift;
my $rsp={};
$rsp->{data}->[0]= "Usage: opsaddbmnode -h";
$rsp->{data}->[1]= " opsaddbmnode -v";
$rsp->{data}->[2]= " opsaddbmnode <noderange> -s <service_host>";
$cb->($rsp);
}
#-------------------------------------------------------------------------------
=head3 opsaddimage_usage
The usage text for opsaddimage command.
=cut
#-------------------------------------------------------------------------------
sub opsaddimage_usage {
my $cb=shift;
my $rsp={};
$rsp->{data}->[0]= "Usage: opsaddimage -h";
$rsp->{data}->[1]= " opsaddimage -v";
$rsp->{data}->[2]= " opsaddimage <image1,image2...> [-n <new_name1,new_name2...> -c <controller>";
$cb->($rsp);
}
#-------------------------------------------------------------------------------
=head3 deploy_ops_bm_node_usage
The usage text for deploy_ops_bm_node command.
=cut
#-------------------------------------------------------------------------------
sub deploy_ops_bm_node_usage {
my $cb=shift;
my $rsp={};
$rsp->{data}->[0]= "Usage: deploy_ops_bm_node -h";
$rsp->{data}->[1]= " deploy_ops_bm_node -v";
$rsp->{data}->[2]= " deploy_ops_bm_node <node> --image <image_name> --host <ops_hostname> --ip <ops_fixed_ip> --mask <netmask>";
$cb->($rsp);
}
#-------------------------------------------------------------------------------
=head3 cleanup_ops_bm_node_usage
The usage text for the cleanup_ops_bm_node command.
=cut
#-------------------------------------------------------------------------------
sub cleanup_ops_bm_node_usage {
my $cb=shift;
my $rsp={};
$rsp->{data}->[0]= "Usage: cleanup_ops_bm_node -h";
$rsp->{data}->[1]= " cleanup_ops_bm_node -v";
$rsp->{data}->[2]= " cleanup_ops_bm_node <node> [--ip <ops_fixed_ip>]";
$cb->($rsp);
}
1;

View File

@ -0,0 +1,17 @@
# Copyright (c) 2012 NTT DOCOMO, INC.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from xcat.openstack.baremetal import driver
BareMetalDriver = driver.xCATBareMetalDriver

View File

@ -0,0 +1,256 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# coding=utf-8
"""
A driver for Bare-metal platform.
"""
from oslo.config import cfg
from nova.compute import power_state
from nova import context as nova_context
from nova import exception
from nova.openstack.common import excutils
from nova.openstack.common.gettextutils import _
from nova.openstack.common import importutils
from nova.openstack.common import jsonutils
from nova.openstack.common import log as logging
from nova.virt.baremetal import baremetal_states
from nova.virt.baremetal import db
from nova.virt.baremetal import driver as bm_driver
from nova.virt.baremetal import utils as bm_utils
from nova.virt import driver
from nova.virt import firewall
from nova.virt.libvirt import imagecache
from xcat.openstack.baremetal import xcat_driver
from xcat.openstack.baremetal import exception as xcat_exception
from xcat.openstack.baremetal import power_states
LOG = logging.getLogger(__name__)
CONF = cfg.CONF
CONF.import_opt('use_ipv6', 'nova.netconf')
class xCATBareMetalDriver(bm_driver.BareMetalDriver):
"""BareMetal hypervisor driver."""
def __init__(self, virtapi, read_only=False):
super(xCATBareMetalDriver, self).__init__(virtapi)
self.xcat = xcat_driver.xCAT()
def _get_xCAT_image_name(self, image_meta):
prop = image_meta.get('properties')
xcat_image_name = prop.get('xcat_image_name')
if xcat_image_name:
return xcat_image_name
else:
raise xcat_exception.xCATInvalidImageError(image=image_meta.get('name'))
def spawn(self, context, instance, image_meta, injected_files,
admin_password, network_info=None, block_device_info=None):
"""
Create a new instance/VM/domain on the virtualization platform.
Once this successfully completes, the instance should be
running (power_state.RUNNING).
If this fails, any partial instance should be completely
cleaned up, and the virtualization platform should be in the state
that it was before this call began.
:param context: security context
:param instance: Instance object as returned by DB layer.
This function should use the data there to guide
the creation of the new instance.
:param image_meta: image object returned by nova.image.glance that
defines the image from which to boot this instance
:param injected_files: User files to inject into instance.
:param admin_password: Administrator password to set in instance.
:param network_info:
:py:meth:`~nova.network.manager.NetworkManager.get_instance_nw_info`
:param block_device_info: Information about block devices to be
attached to the instance.
"""
#import pdb
#pdb.set_trace()
node_uuid = self._require_node(instance)
node = db.bm_node_associate_and_update(context, node_uuid,
{'instance_uuid': instance['uuid'],
'instance_name': instance['hostname'],
'task_state': baremetal_states.BUILDING})
try:
self._plug_vifs(instance, network_info, context=context)
self._attach_block_devices(instance, block_device_info)
self._start_firewall(instance, network_info)
macs = self.macs_for_instance(instance)
nodename = self.xcat.get_xcat_node_name(macs)
imagename = self._get_xCAT_image_name(image_meta)
hostname = instance.get('hostname')
#get the network information for the new node
interfaces = bm_utils.map_network_interfaces(network_info, CONF.use_ipv6)
if CONF.use_ipv6:
fixed_ip = interfaces[0].get('address_v6')
netmask = interfaces[0].get('netmask_v6')
gateway = interfaces[0].get('gateway_v6')
else:
fixed_ip = interfaces[0].get('address')
netmask = interfaces[0].get('netmask')
gateway = interfaces[0].get('gateway')
#convert netmask from IPAddress to unicode string
if netmask:
netmask = unicode(netmask)
#let xCAT install it
bm_driver._update_state(context, node, instance, baremetal_states.DEPLOYING)
self.xcat.deploy_node(nodename, imagename, hostname, fixed_ip, netmask, gateway)
bm_driver._update_state(context, node, instance, baremetal_states.ACTIVE)
except Exception as e:
with excutils.save_and_reraise_exception():
LOG.error(_("Error occured while deploying instance %(instance)s "
"on baremetal node %(node)s: %(error)s") %
{'instance': instance['uuid'],
'node': node['uuid'],
'error':str(e)})
bm_driver._update_state(context, node, instance, baremetal_states.ERROR)
def reboot(self, context, instance, network_info, reboot_type,
block_device_info=None, bad_volumes_callback=None):
"""Reboot the specified instance.
After this is called successfully, the instance's state
goes back to power_state.RUNNING. The virtualization
platform should ensure that the reboot action has completed
successfully even in cases in which the underlying domain/vm
is paused or halted/stopped.
:param instance: Instance object as returned by DB layer.
:param network_info:
:py:meth:`~nova.network.manager.NetworkManager.get_instance_nw_info`
:param reboot_type: Either a HARD or SOFT reboot
:param block_device_info: Info pertaining to attached volumes
:param bad_volumes_callback: Function to handle any bad volumes
encountered
"""
try:
node = bm_driver._get_baremetal_node_by_instance_uuid(instance['uuid'])
macs = self.macs_for_instance(instance)
nodename = self.xcat.get_xcat_node_name(macs)
self.xcat.reboot_node(nodename)
bm_driver._update_state(context, node, instance, baremetal_states.RUNNING)
except xcat_exception.xCATCommandError as e:
with excutils.save_and_reraise_exception():
LOG.error(_("Error occured while rebooting instance %(instance)s "
"on baremetal node %(node)s: %(error)s") %
{'instance': instance['uuid'],
'node': node['uuid'],
'error':str(e)})
bm_driver._update_state(context, node, instance, baremetal_states.ERROR)
def destroy(self, instance, network_info, block_device_info=None,
context=None):
"""Destroy (shutdown and delete) the specified instance.
If the instance is not found (for example if networking failed), this
function should still succeed. It's probably a good idea to log a
warning in that case.
:param context: security context
:param instance: Instance object as returned by DB layer.
:param network_info:
:py:meth:`~nova.network.manager.NetworkManager.get_instance_nw_info`
:param block_device_info: Information about block devices that should
be detached from the instance.
:param destroy_disks: Indicates if disks should be destroyed
"""
#import pdb
#pdb.set_trace()
context = nova_context.get_admin_context()
try:
node = bm_driver._get_baremetal_node_by_instance_uuid(instance['uuid'])
except exception.InstanceNotFound:
LOG.warning(_("Destroy function called on a non-existing instance %s")
% instance['uuid'])
return
try:
macs = self.macs_for_instance(instance)
nodename = self.xcat.get_xcat_node_name(macs)
interfaces = bm_utils.map_network_interfaces(network_info, CONF.use_ipv6)
fixed_ip=None
if interfaces and interfaces[0]:
if CONF.use_ipv6:
fixed_ip = interfaces[0].get('address_v6')
else:
fixed_ip = interfaces[0].get('address')
if fixed_ip:
self.xcat.cleanup_node(nodename, fixed_ip)
else:
self.xcat.cleanup_node(nodename)
except Exception as e:
#just log it and move on
LOG.warning(_("Destroy called with xCAT error:" + str(e)))
try:
self._detach_block_devices(instance, block_device_info)
self._stop_firewall(instance, network_info)
self._unplug_vifs(instance, network_info)
bm_driver._update_state(context, node, None, baremetal_states.DELETED)
except Exception as e:
with excutils.save_and_reraise_exception():
LOG.error(_("Error occurred while destroying instance %s: %s")
% (instance['uuid'], str(e)))
bm_driver._update_state(context, node, instance,
baremetal_states.ERROR)
def power_off(self, instance, node=None):
"""Power off the specified instance."""
macs = self.macs_for_instance(instance)
nodename = self.xcat.get_xcat_node_name(macs)
self.xcat.power_off_node(nodename)
def power_on(self, context, instance, network_info, block_device_info=None,
node=None):
"""Power on the specified instance."""
macs = self.macs_for_instance(instance)
nodename = self.xcat.get_xcat_node_name(macs)
self.xcat.power_on_node(nodename)
def get_console_output(self, instance):
pass
def get_info(self, instance):
"""Get the current status of an instance, by name (not ID!)
Returns a dict containing:
:state: the running state, one of the power_state codes
:max_mem: (int) the maximum memory in KBytes allowed
:mem: (int) the memory in KBytes used by the domain
:num_cpu: (int) the number of virtual CPUs for the domain
:cpu_time: (int) the CPU time used in nanoseconds
"""
node = bm_driver._get_baremetal_node_by_instance_uuid(instance['uuid'])
macs = self.macs_for_instance(instance)
nodename = self.xcat.get_xcat_node_name(macs)
ps = self.xcat.get_node_power_state(nodename)
if ps == power_states.ON:
pstate = power_state.RUNNING
elif ps == power_states.OFF:
pstate = power_state.SHUTDOWN
else:
pstate = power_state.NOSTATE
return {'state': pstate,
'max_mem': node['memory_mb'],
'mem': node['memory_mb'],
'num_cpu': node['cpus'],
'cpu_time': 0}

View File

@ -0,0 +1,41 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
"""xCAT baremtal exceptions.
"""
import functools
import sys
from oslo.config import cfg
import webob.exc
from nova.openstack.common import excutils
from nova.openstack.common.gettextutils import _
from nova.openstack.common import log as logging
from nova import safe_utils
from nova import exception as nova_exception
LOG = logging.getLogger(__name__)
class xCATException(Exception):
errmsg = _("xCAT general exception")
def __init__(self, errmsg=None, **kwargs):
if not errmsg:
errmsg = self.errmsg
errmsg = errmsg % kwargs
super(xCATException, self).__init__(errmsg)
class xCATCommandError(xCATException):
errmsg = _("Error returned when calling xCAT command %(cmd)s"
" for node %(node)s:%(error)s")
class xCATInvalidImageError(xCATException):
errmsg = _("The image %(image)s is not an xCAT image")
class xCATDeploymentFailure(xCATException):
errmsg = _("xCAT node deployment failed for node %(node)s:%(error)s")
class xCATRebootFailure(xCATException):
errmsg = _("xCAT node rebooting failed for node %(node)s:%(error)s")

View File

@ -0,0 +1,9 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
"""
Possible xCAT node power states.
"""
OFF = 'off'
ON = 'on'
ERROR = 'error'

View File

@ -0,0 +1,260 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# coding=utf-8
"""
Baremetal xCAT power manager.
"""
import os
import sys
import stat
from oslo.config import cfg
import datetime
from nova import context as nova_context
from nova.virt.baremetal import baremetal_states
from nova.openstack.common.gettextutils import _
from nova.openstack.common import log as logging
from nova.openstack.common import loopingcall
from nova.openstack.common import timeutils
from nova import paths
from nova import utils
from xcat.openstack.baremetal import exception
from xcat.openstack.baremetal import power_states
LOG = logging.getLogger(__name__)
# register configuration options
xcat_opts = [
cfg.IntOpt('deploy_timeout',
help='Timeout for node deployment. Default: 0 second (unlimited)',
default=0),
cfg.IntOpt('reboot_timeout',
help='Timeout for rebooting a node. Default: 0 second (unlimited)',
default=0),
cfg.IntOpt('deploy_checking_interval',
help='Checking interval for node deployment. Default: 10 seconds',
default=10),
cfg.IntOpt('reboot_checking_interval',
help='Checking interval for rebooting a node. Default: 5 seconds',
default=5),
]
xcat_group = cfg.OptGroup(name='xcat',
title='xCAT Options')
CONF = cfg.CONF
CONF.register_group(xcat_group)
CONF.register_opts(xcat_opts, xcat_group)
class xCAT(object):
"""A driver that calls xCAT funcions"""
def __init__(self):
#setup the path for xCAT commands
#xcatroot = os.getenv('XCATROOT', '/opt/xcat/')
#sys.path.append("%s/bin" % xcatroot)
#sys.path.append("%s/sbin" % xcatroot)
pass
def _exec_xcat_command(self, command):
"""Calls xCAT command."""
args = command.split(" ")
out, err = utils.execute(*args, run_as_root=True)
LOG.debug(_("xCAT command stdout: '%(out)s', stderr: '%(err)s'"),
{'out': out, 'err': err})
return out, err
def get_xcat_node_name(self, macs):
"""Get the xcat node name given mac addressed.
It uses the mac address to search for the node name in xCAT.
"""
for mac in macs:
out, err = self._exec_xcat_command("lsdef -w mac=%s" % mac)
if out:
return out.split(" ")[0]
errstr='No node found in xCAT with the following mac address: ' \
+ ','.join(macs)
LOG.warning(errstr)
raise exception.xCATCommandError(errstr)
def deploy_node(self, nodename, imagename, hostname, fixed_ip, netmask, gateway):
"""
Install the node.
It calls the xCAT command deploy_ops_bm_node, which prepares the node
by adding the config_ops_bm_node postbootscript to the postscript
table for the node, then calls nodeset and boots the node up.
"""
out, err = self._exec_xcat_command(
"deploy_ops_bm_node %(node)s --image %(image)s"
" --host %(host)s --ip %(ip)s --mask %(mask)s"
% {'node': nodename,
'image': imagename,
'host': hostname,
'ip': fixed_ip,
'mask': netmask,
})
if err:
errstr = _("Error returned when calling xCAT deploy_ops_bm_node"
" command for node %s:%s") % (nodename, err)
LOG.warning(errstr)
raise exception.xCATCommandError(errstr)
self._wait_for_node_deploy(nodename)
def cleanup_node(self, nodename, fixed_ip=None):
"""
Undo all the changes made to the node by deploy_node function.
It calls xCAT command cleanup_ops_bm_node which removes the
config_ops_bm_node postbootscript from the postscript table
for the node, removes the alias ip and then powers the node off.
"""
cmd = "cleanup_ops_bm_node %s" % nodename
if fixed_ip:
cmd += " --ip %s" % fixed_ip
out, err = self._exec_xcat_command(cmd)
if err:
errstr = _("Error returned when calling xCAT cleanup_ops_bm_node"
" command for node %s:%s") % (nodename, err)
LOG.warning(errstr)
raise exception.xCATCommandError(errstr)
def power_on_node(self, nodename):
"""Power on the node."""
state = self.get_node_power_state(nodename)
if state == power_states.ON:
LOG.warning(_("Powring on node called, but the node %s "
"is already on") % nodename)
out, err = self._exec_xcat_command("rpower %s on" % nodename)
if err:
errstr = _("Error returned when calling xCAT rpower on"
" for node %s:%s") % (nodename, err)
LOG.warning(errstr)
raise exception.xCATCommandError(errstr)
else:
self._wait_for_node_reboot(nodename)
return power_states.ON
def power_off_node(self, nodename):
"""Power off the node."""
state = self.get_node_power_state(nodename)
if state == power_states.OFF:
LOG.warning(_("Powring off node called, but the node %s "
"is already off") % nodename)
out, err = self._exec_xcat_command("rpower %s off" % nodename)
if err:
errstr = _("Error returned when calling xCAT rpower off"
" for node %s:%s") % (nodename, err)
LOG.warning(errstr)
raise exception.xCATCommandError(errstr)
else:
return power_states.OFF
def reboot_node(self, nodename):
"""Reboot the node."""
out, err = self._exec_xcat_command("rpower %s boot" % nodename)
if err:
errstr = _("Error returned when calling xCAT rpower boot"
" for node %s:%s") % (nodename, err)
LOG.warning(errstr)
raise exception.xCATCommandError(errstr)
self._wait_for_node_reboot(nodename)
return power_states.ON
def get_node_power_state(self, nodename):
out, err = self._exec_xcat_command("rpower %s stat" % nodename)
if err:
errstr = _("Error returned when calling xCAT rpower stat"
" for node %s:%s") % (nodename, err)
LOG.warning(errstr)
raise exception.xCATCommandError(errstr)
else:
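# rpower stat output is expected to look like "node1: on" or "node1: off",
# so the state is the text after the colon (assumed output format).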
state = out.split(":")[1]
if state:
state = state.strip()
if state == 'on':
return power_states.ON
elif state == 'off':
return power_states.OFF
return power_states.ERROR
def _wait_for_node_deploy(self, nodename):
"""Wait for xCAT node deployment to complete."""
locals = {'errstr':''}
def _wait_for_deploy():
out,err = self._exec_xcat_command("nodels %s nodelist.status" % nodename)
if err:
locals['errstr'] = _("Error returned when quering node status"
" for node %s:%s") % (nodename, err)
LOG.warning(locals['errstr'])
raise loopingcall.LoopingCallDone()
if out:
node,status = out.split(": ")
status = status.strip()
if status == "booted":
LOG.info(_("Deployment for node %s completed.")
% nodename)
raise loopingcall.LoopingCallDone()
if (CONF.xcat.deploy_timeout and
timeutils.utcnow() > expiration):
locals['errstr'] = _("Timeout while waiting for"
" deployment of node %s.") % nodename
LOG.warning(locals['errstr'])
raise loopingcall.LoopingCallDone()
expiration = timeutils.utcnow() + datetime.timedelta(
seconds=CONF.xcat.deploy_timeout)
timer = loopingcall.FixedIntervalLoopingCall(_wait_for_deploy)
# default check every 10 seconds
timer.start(interval=CONF.xcat.deploy_checking_interval).wait()
if locals['errstr']:
raise exception.xCATDeploymentFailure(locals['errstr'])
def _wait_for_node_reboot(self, nodename):
"""Wait for xCAT node boot to complete."""
locals = {'errstr':''}
def _wait_for_reboot():
out,err = self._exec_xcat_command("nodestat %s" % nodename)
if err:
locals['errstr'] = _("Error returned when quering node status"
" for node %s:%s") % (nodename, err)
LOG.warning(locals['errstr'])
raise loopingcall.LoopingCallDone()
if out:
node,status = out.split(": ")
status = status.strip()
if status == "sshd":
LOG.info(_("Rebooting node %s completed.")
% nodename)
raise loopingcall.LoopingCallDone()
if (CONF.xcat.reboot_timeout and
timeutils.utcnow() > expiration):
locals['errstr'] = _("Timeout while waiting for"
" rebooting node %s.") % nodename
LOG.warning(locals['errstr'])
raise loopingcall.LoopingCallDone()
expiration = timeutils.utcnow() + datetime.timedelta(
seconds=CONF.xcat.reboot_timeout)
timer = loopingcall.FixedIntervalLoopingCall(_wait_for_reboot)
# default check every 5 seconds
timer.start(interval=CONF.xcat.reboot_checking_interval).wait()
if locals['errstr']:
raise exception.xCATRebootFailure(locals['errstr'])

View File

@ -0,0 +1,90 @@
=head1 NAME
B<opsaddbmnode> - It adds xCAT baremetal nodes to an OpenStack cloud.
=head1 SYNOPSIS
B<opsaddbmnode> I<noderange> B<-s> I<service_host>
B<opsaddbmnode> [B<-h>|B<--help>]
B<opsaddbmnode> [B<-v>|B<--version>]
=head1 DESCRIPTION
The B<opsaddbmnode> command registers xCAT nodes to an OpenStack cloud. An OpenStack nova baremetal node registration command takes several node attributes:
=over
=item BMC IP address, user id and password
=item Name of nova compute host which will control this baremetal node
=item Number of CPUs in the node
=item Memory in the node (MB)
=item Local hard disk in the node (GB)
=item MAC address to provision the node
=back
The opsaddbmnode command pulls the above baremetal node information from xCAT tables and calls "nova baremetal-node-create" to register the baremetal node with the OpenStack cloud.
Please make sure the following xCAT tables are filled with correct information for the given nodes before calling this command.
=over
=item ipmi (for BMC IP address, user id and password)
=item mac (for MAC address)
=item hwinv (for CPU, memory and disk info.)
=back
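For example, the required attributes could be populated with the B<chdef> command before running B<opsaddbmnode> (illustrative values only):
 chdef node1 bmc=10.1.0.5 bmcusername=ADMIN bmcpassword=admin mac=e4:1f:13:aa:bb:cc cpucount=8 memory=16384MB disksize=sda:500GB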
=head1 Parameters
I<noderange> is a comma-separated list of node or node group names.
=head1 OPTIONS
=over 10
=item B<-s> The node name of the OpenStack compute host that hosts the baremetal nodes.
=item B<-h|--help> Display usage message.
=item B<-v|--version> The Command Version.
=back
=head1 RETURN VALUE
0 The command completed successfully.
1 An error has occurred.
=head1 EXAMPLES
=over 3
=item 1
To register node1, node2 and node3 with OpenStack, where sv1 is the compute host:
 opsaddbmnode node1,node2,node3 -s sv1
=back
=head1 FILES
/opt/xcat/bin/opsaddbmnode
=head1 SEE ALSO
L<opsaddimage(1)|opsaddimage.1>

View File

@ -0,0 +1,65 @@
=head1 NAME
B<opsaddimage> - It adds xCAT images to an OpenStack cloud.
=head1 SYNOPSIS
B<opsaddimage> I<image1,image2,...> [B<-n> I<new_name1,new_name2,...>] B<-c> I<controller>
B<opsaddimage> [B<-h>|B<--help>]
B<opsaddimage> [B<-v>|B<--version>]
=head1 DESCRIPTION
The B<opsaddimage> command adds a list of xCAT images into the OpenStack cloud.
Under the covers, it creates a fake image and registers it with OpenStack using the B<glance image-create> command. It sets a property on the image to mark it as an xCAT image and stores the original xCAT image name in the image properties for later reference.
The xCAT image names can be listed using B<lsdef -t osimage> command.
=head1 Parameters
I<image1,image2,...> is a comma-separated list of xCAT image names.
=head1 OPTIONS
=over 10
=item B<-n> a comma-separated list of new image names to use in OpenStack. If omitted, the default is the original xCAT image name.
=item B<-c> the node name of the OpenStack controller. This node must be an xCAT managed node.
=item B<-h|--help> Display usage message.
=item B<-v|--version> The Command Version.
=back
=head1 RETURN VALUE
0 The command completed successfully.
1 An error has occurred.
=head1 EXAMPLES
=over 3
=item 1.
To register xCAT image rhels6.3-x86_64-install-compute into OpenStack.
opsaddimage rhels6.3-x86_64-install-compute -c sv2
=back
=head1 FILES
/opt/xcat/bin/opsaddimage
=head1 SEE ALSO
L<opsaddbmnode(1)|opsaddbmnode.1>

View File

@ -0,0 +1,215 @@
#!/bin/sh
# IBM(c) 2014 EPL license http://www.eclipse.org/legal/epl-v10.html
# xCAT post script for configuring the openstack baremetal node.
# The format is:
# config_ops_bm_node ops_hostname ops_ip ops_netmask
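# e.g. config_ops_bm_node ops-node1 10.0.1.23 255.255.255.0   (illustrative values)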
get_os_type()
{
#get os type
str_os_type=`uname | tr 'A-Z' 'a-z'`
str_temp=''
if [ "$str_os_type" = "linux" ];then
str_temp=`echo $OSVER | grep -E '(sles|suse)'`
if [ -f "/etc/debian_version" ];then
str_os_type="debian"
elif [ -f "/etc/SuSE-release" -o -n "$str_temp" ];then
str_os_type="sles"
else
str_os_type="redhat"
fi
else
str_os_type="aix"
fi
echo "$str_os_type"
}
setup_ip()
{
str_os_type=$1
str_if_name=$2
str_v4ip=$3
str_v4mask=$4
ret=`ifconfig $str_if_name |grep "inet addr" 2>&1`
if [ $? -eq 0 ]; then
old_ip=`echo $ret|cut -d':' -f2 |cut -d' ' -f1`
old_mask=`echo $ret|cut -d':' -f4`
#echo "old ip = $old_ip, old mask=$old_mask"
if [ "$old_ip" == "$str_v4ip" -a "$old_mask" == "$str_v4mask" ]; then
#if nic is up and the address is the same, then do nothing
#echo "do nothing"
exit 0
else
#bring down the nic and reconstruct it.
#echo "bring down the old nic"
ifconfig $str_if_name del $old_ip
fi
fi
if [ "$str_os_type" = "sles" ];then
str_conf_file="/etc/sysconfig/network/ifcfg-${str_if_name}"
if [ -f $str_conf_file ]; then
rm $str_conf_file
fi
echo "DEVICE=${str_if_name}" > $str_conf_file
echo "BOOTPROTO=static" >> $str_conf_file
echo "IPADDR=${str_v4ip}" >> $str_conf_file
echo "NETMASK=${str_v4mask}" >> $str_conf_file
echo "NETWORK=''" >> $str_conf_file
echo "STARTMODE=onboot" >> $str_conf_file
echo "USERCONTROL=no" >> $str_conf_file
ifup $str_if_name
#debian ubuntu
elif [ "$str_os_type" = "debian" ];then
str_conf_file="/etc/network/interfaces.d/${str_if_name}"
if [ -f $str_conf_file ]; then
rm $str_conf_file
fi
echo "auto ${str_if_name}" > $str_conf_file
echo "iface ${str_if_name} inet static" >> $str_conf_file
echo " address ${str_v4ip}" >> $str_conf_file
echo " netmask ${str_v4mask}" >> $str_conf_file
ifconfig $str_if_name up
else
# Write the info to the ifcfg file for redhat
str_conf_file="/etc/sysconfig/network-scripts/ifcfg-${str_if_name}"
if [ -f $str_conf_file ]; then
rm $str_conf_file
fi
echo "DEVICE=${str_if_name}" > $str_conf_file
echo "BOOTPROTO=static" >> $str_conf_file
echo "NM_CONTROLLED=no" >> $str_conf_file
echo "IPADDR=${str_v4ip}" >> $str_conf_file
echo "NETMASK=${str_v4mask}" >> $str_conf_file
echo "ONBOOT=yes" >> $str_conf_file
ifup $str_if_name
fi
}
#change hostname permanently
change_host_name()
{
str_os_type=$1
str_hostname=$2
hostname $str_hostname
if [ "$str_os_type" = "sles" ];then
echo "Persistently changing the hostname not implemented yet."
#debian ubuntu
elif [ "$str_os_type" = "debian" ];then
conf_file="/etc/hostname"
echo "$str_hostname" > $conf_file
else
conf_file="/etc/sysconfig/network"
if [ ! -f $conf_file ]; then
touch $conf_file
fi
grep 'HOSTNAME' $conf_file 2>&1 > /dev/null
if [ $? -eq 0 ]; then
sed -i "s/HOSTNAME=.*/HOSTNAME=$str_hostname/" $conf_file
else
echo "HOSTNAME=$str_hostname" >> $conf_file
fi
fi
}
str_os_type=$(get_os_type)
echo "os_type=$str_os_type"
if [ "$str_os_type" = "aix" ]; then
logger -t xcat "config_ops_bm_node dose not support AIX."
echo "config_ops_bm_node dose not support AIX."
exit 0
fi
#change the hostname
if [[ -n "$1" ]]; then
change_host_name $str_os_type $1
fi
#Add the openstack ip to the node
if [[ -n $2 ]]; then
ops_ip=$2
if [[ -z $3 ]]; then
logger -t xcat "config_ops_bm_node: Please specify the netmask."
echo "config_ops_bm_node: Please specify the netmask."
exit 1
else
ops_mask=$3
fi
#figure out the install nic
if [[ -n $MACADDRESS ]]; then
pos=0
#mac has the following format: 01:02:03:04:05:0E!node5|01:02:03:05:0F!node6-eth1
for x in `echo "$MACADDRESS" | tr "|" "\n"`
do
node=""
mac=""
pos=$((pos+1))
i=`expr index $x !`
if [[ $i -gt 0 ]]; then
node=`echo ${x##*!}`
mac_tmp=`echo ${x%%!*}`
else
mac_tmp=$x
fi
if [[ $pos -eq 1 ]]; then
mac1=$mac_tmp
fi
if [[ "$PRIMARYNIC" = "$mac_tmp" ]]; then
mac=$mac_tmp
break
fi
if [[ -z "$PRIMARYNIC" ]] || [[ "$PRIMARYNIC" = "mac" ]]; then
if [[ -z $node ]] || [[ "$node" = "$NODE" ]]; then
mac=$mac_tmp
break
fi
fi
done
if [[ -z $mac ]]; then
if [[ -z "$PRIMARYNIC" ]] || [[ "$PRIMARYNIC" = "mac" ]]; then
mac=$mac1 #if nothing matches, take the first mac
else
nic=$PRIMARYNIC #or the primary nic itself is the nic
fi
fi
else
logger -t xcat "config_ops_bm_node: no mac addresses are defined in the mac table for the node $NODE"
echo "config_ops_bm_node: no mac addresses are defined in the mac table for the node $NODE"
exit 1
fi
echo "mac=$mac"
#find the nic that has the mac
if [[ -z $nic ]]; then
#go to each nic to match the mac address
ret=`ifconfig |grep -i $mac 2>&1`;
if [ $? -eq 0 ]; then
nic=`echo $ret |head -n1|cut -d' ' -f 1`
else
logger -t xcat "config_ops_bm_node: The mac address for the network for $NODE is not defined."
echo "config_ops_bm_node: The mac address for the network for $NODE is not defined."
fi
fi
echo "nic=$nic"
#now setup the ip alias
setup_ip $str_os_type $nic:0 $ops_ip $ops_mask
fi

View File

@ -0,0 +1,94 @@
#!/bin/sh
# IBM(c) 2014 EPL license http://www.eclipse.org/legal/epl-v10.html
# xCAT post script for deconfiguring the openstack baremetal node.
# The format is:
# deconfig_ops_bm_node ops_ip
get_os_type()
{
#get os type
str_os_type=`uname | tr 'A-Z' 'a-z'`
str_temp=''
if [ "$str_os_type" = "linux" ];then
str_temp=`echo $OSVER | grep -E '(sles|suse)'`
if [ -f "/etc/debian_version" ];then
str_os_type="debian"
elif [ -f "/etc/SuSE-release" -o -n "$str_temp" ];then
str_os_type="sles"
else
str_os_type="redhat"
fi
else
str_os_type="aix"
fi
echo "$str_os_type"
}
#change hostname permanently
change_host_name()
{
str_os_type=$1
str_hostname=$2
hostname $str_hostname
if [ "$str_os_type" = "sles" ];then
echo "Persistently changing the hostname not implemented yet."
#debian ubuntu
elif [ "$str_os_type" = "debian" ];then
conf_file="/etc/hostname"
echo "$str_hostname" > $conf_file
else
conf_file="/etc/sysconfig/network"
if [ ! -f $conf_file ]; then
touch $conf_file
fi
grep 'HOSTNAME' $conf_file 2>&1 > /dev/null
if [ $? -eq 0 ]; then
sed -i "s/HOSTNAME=.*/HOSTNAME=$str_hostname/" $conf_file
else
echo "HOSTNAME=$str_hostname" >> $conf_file
fi
fi
}
str_os_type=$(get_os_type)
echo "os_type=$str_os_type"
if [ $str_os_type == "aix" ]; then
logger -t xcat "deconfig_ops_bm_node dose not support AIX."
echo "deconfig_ops_bm_node dose not support AIX."
exit 0
fi
#change the hostname
#hostname $NODE
change_host_name $str_os_type $NODE
#remove the openstack ip from the node
if [[ -n $1 ]]; then
ops_ip=$1
nic=$(ip addr | grep $ops_ip | awk '{print $NF}')
echo "nic=$nic, ops_ip=$ops_ip"
ifconfig $nic del $ops_ip
#delete the configuration file
if [ "$str_os_type" = "sles" ]; then
str_conf_file="/etc/sysconfig/network/ifcfg-$nic"
elif [ "$str_os_type" = "debian" ]; then #debian ubuntu
str_conf_file="/etc/network/interfaces.d/$nic"
else #redhat
str_conf_file="/etc/sysconfig/network-scripts/ifcfg-$nic"
fi
if [ -f $str_conf_file ]; then
rm $str_conf_file
fi
fi

View File

@ -0,0 +1,102 @@
Summary: Executables and data of the xCAT baremetal driver for OpenStack
Name: xCAT-OpenStack-baremetal
Version: %(cat Version)
Release: snap%(date +"%Y%m%d%H%M")
Epoch: 4
License: IBM
Group: Applications/System
Source: xCAT-OpenStack-baremetal-%{version}.tar.gz
Packager: IBM Corp.
Vendor: IBM Corp.
Distribution: %{?_distribution:%{_distribution}}%{!?_distribution:%{_vendor}}
Prefix: /opt/xcat
BuildRoot: /var/tmp/%{name}-%{version}-%{release}-root
%ifos linux
BuildArch: noarch
%endif
Provides: xCAT-OpenStack-baremetal = %{epoch}:%{version}
Requires: xCAT-client
%description
xCAT-OpenStack-baremetal provides the baremetal driver for OpenStack.
%prep
%setup -q -n xCAT-OpenStack-baremetal
%build
# Convert pods to man pages and html pages
./xpod2man
%install
# The install phase puts all of the files in the paths they should be in when the rpm is
# installed on a system. The RPM_BUILD_ROOT is a simulated root file system and usually
# has a value like: /var/tmp/xCAT-OpenStack-baremetal-2.0-snap200802270932-root
rm -rf $RPM_BUILD_ROOT
mkdir -p $RPM_BUILD_ROOT/%{prefix}/bin
mkdir -p $RPM_BUILD_ROOT/%{prefix}/sbin
mkdir -p $RPM_BUILD_ROOT/%{prefix}/lib/perl/xCAT_plugin
mkdir -p $RPM_BUILD_ROOT/%{prefix}/lib/python/xcat/openstack/baremetal
mkdir -p $RPM_BUILD_ROOT/%{prefix}/share/xcat/openstack/postscripts
mkdir -p $RPM_BUILD_ROOT/%{prefix}/share/man/man1
mkdir -p $RPM_BUILD_ROOT/%{prefix}/share/doc/man1
set +x
cp -R lib/* $RPM_BUILD_ROOT/%{prefix}/lib
cp share/xcat/openstack/postscripts/* $RPM_BUILD_ROOT/%{prefix}/share/xcat/openstack/postscripts
# These were built dynamically in the build phase
cp share/man/man1/* $RPM_BUILD_ROOT/%{prefix}/share/man/man1
chmod 444 $RPM_BUILD_ROOT/%{prefix}/share/man/man1/*
# These were built dynamically during the build phase
cp share/doc/man1/* $RPM_BUILD_ROOT/%{prefix}/share/doc/man1
chmod 644 $RPM_BUILD_ROOT/%{prefix}/share/doc/man1/*
# These links get made in the RPM_BUILD_ROOT/prefix area
ln -sf ../bin/xcatclient $RPM_BUILD_ROOT/%{prefix}/sbin/deploy_ops_bm_node
ln -sf ../bin/xcatclient $RPM_BUILD_ROOT/%{prefix}/sbin/cleanup_ops_bm_node
ln -sf ../bin/xcatclient $RPM_BUILD_ROOT/%{prefix}/bin/opsaddbmnode
ln -sf ../bin/xcatclientnnr $RPM_BUILD_ROOT/%{prefix}/bin/opsaddimage
set -x
%clean
# This step does not happen until *after* the %files packaging below
rm -rf $RPM_BUILD_ROOT
%files
%defattr(-,root,root)
#%doc LICENSE.html
# Just package everything that has been copied into RPM_BUILD_ROOT
%{prefix}
%changelog
%post
#copy the postscripts under /install/postscripts directory on MN only
if [ -f "/etc/xCATMN" ]; then
cp $RPM_INSTALL_PREFIX0/share/xcat/openstack/postscripts/* /install/postscripts/
fi
%preun
#remove postscripts under /install/postscripts directory on MN only
if [ -f "/etc/xCATMN" ]; then
for fn in $RPM_INSTALL_PREFIX0/share/xcat/openstack/postscripts/*
do
bn=`basename $fn`
rm /install/postscripts/$bn
done
fi
exit 0

214
xCAT-OpenStack-baremetal/xpod2man Executable file
View File

@ -0,0 +1,214 @@
#!/usr/bin/perl
# IBM(c) 2007 EPL license http://www.eclipse.org/legal/epl-v10.html
# First builds the xCAT summary man page from Synopsis of each man page.
# Then converts all of the pod man pages into html (including links to each other)
# We assume that this script is run in the xCAT-OpenStack-baremetal dir, so everything is
# done relative to that.
use strict;
#use lib '.';
use Pod::Man;
use Pod::Html;
my $poddir = 'pods';
my $mandir = 'share/man';
my $htmldir = 'share/doc';
my $cachedir = '/tmp';
my @pods = getPodList($poddir);
#foreach (@pods) { print "$_\n"; } exit;
# Build the cmd overview page.
#writesummarypage("$poddir/man1/xcat.1.pod", @pods);
# Build the man page for each pod.
#mkdir($mandir) or die "Error: could not create $mandir.\n";
print "Converting PODs to man pages...\n";
foreach my $podfile (@pods) {
my $manfile = $podfile;
$manfile =~ s/^$poddir/$mandir/; # change the beginning of the path
$manfile =~ s/\.pod$//; # change the ending
my $mdir = $manfile;
$mdir =~ s|/[^/]*$||; # get rid of the basename part
if (system("mkdir -p $mdir")) { die "Error: could not create $mdir.\n"; }
my ($section) = $podfile =~ /\.(\d+)\.pod$/;
convertpod2man($podfile, $manfile, $section);
}
my @dummyPods = createDummyPods($poddir, \@pods);
# Build the html page for each pod.
#mkdir($htmldir) or die "Error: could not create $htmldir.\n";
print "Converting PODs to HTML pages...\n";
# have to clear the cache, because old entries can cause a problem
unlink("$cachedir/pod2htmd.tmp", "$cachedir/pod2htmi.tmp");
foreach my $podfile (@pods) {
my $htmlfile = $podfile;
$htmlfile =~ s/^$poddir/$htmldir/; # change the beginning of the path
$htmlfile =~ s/\.pod$/\.html/; # change the ending
my $hdir = $htmlfile;
$hdir =~ s|/[^/]*$||; # get rid of the basename part
if (system("mkdir -p $hdir")) { die "Error: could not create $hdir.\n"; }
#print "$podfile, $htmlfile, $poddir, $htmldir\n";
convertpod2html($podfile, $htmlfile, $poddir, $htmldir);
}
# Remove the dummy pods
unlink @dummyPods;
rmdir "$poddir/man7";
exit;
# To enable linking between the cmd man pages and the db man pages, need to:
# grep thru the cmd pods searching for references (L<>) to any section 5 man page
# if that pod does not exist, create an empty one that will satisfy pod2html
# keep track of all dummy pods created, so they can be removed later
sub createDummyPods {
my ($poddir, $pods) = @_;
my $cmd = "grep -r -E 'L<.+\\([57]\\)\\|.+\\.[57]>' " . $poddir;
#print "Running cmd: ", $cmd, "\n";
my @lines = `$cmd`;
if ($?) { print "Error running: $cmd\n"; print join('', @lines); }
#my @lines;
#system($cmd);
my @dummyPods;
foreach my $l (@lines) {
#print "$l\n";
my @matches = $l =~ /L<([^\(]+)\(([57])\)\|\1\.[57]>/g; # get all the matches in the line
# The above line should create the array with every other entry being the man page name
# and every other entry is the section # (5 or 7)
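# e.g. "L<xcatdb(5)|xcatdb.5>" yields the pair ('xcatdb', '5')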
my $cmd;
while ($cmd=shift @matches) {
#foreach my $m (@matches) {
my $section = shift @matches;
my $filename = "$poddir/man$section/$cmd.$section.pod";
#print "$filename\n";
if (!(grep /^$filename$/, @$pods) && !(grep /^$filename$/, @dummyPods)) { push @dummyPods, $filename; }
}
}
# Create these empty files
print "Creating empty linked-to files: ", join(', ', @dummyPods), "\n";
mkdir "$poddir/man7";
foreach my $d (@dummyPods) {
if (!open(TMP, ">>$d")) { warn "Could not create dummy pod file $d ($!)\n"; }
else { close TMP; }
}
return @dummyPods;
}
# Recursively get the list of pod man page files.
sub getPodList {
my $poddir = shift;
my @files;
# 1st get toplevel dir listing
opendir(DIR, $poddir) or die "Error: could not read $poddir.\n";
my @topdir = grep !/^\./, readdir(DIR); # /
close(DIR);
# Now go thru each subdir (these are man1, man3, etc.)
foreach my $mandir (@topdir) {
opendir(DIR, "$poddir/$mandir") or die "Error: could not read $poddir/$mandir.\n";
my @dir = grep !/^\./, readdir(DIR); # /
close(DIR);
foreach my $file (@dir) {
push @files, "$poddir/$mandir/$file";
}
}
return sort @files;
}
# Create the xcat man page that gives a summary description of each xcat cmd.
# Not used.
sub writesummarypage {
my $file = shift; # relative path file name of the man page
# the rest of @_ contains the pod files that describe each cmd
open(FILE, ">$file") or die "Error: could not open $file for writing.\n";
print FILE <<'EOS1';
=head1 NAME
B<xcat> - extreme Cluster Administration Tool.
=head1 DESCRIPTION
Extreme Cluster Administration Toolkit (xCAT). xCAT is a scalable distributed computing management
and provisioning tool that provides a unified interface for hardware control, discovery, and
OS diskful/diskfree deployment.
=head1 XCAT DATABASE
All of the cluster configuration information is in the xCAT database. See L<xcatdb(5)|xcatdb.5> for
descriptions of every table in the database.
=head1 XCAT COMMANDS
What follows is a short description of each xCAT command. To get more information about a particular
command, see its man page. Note that the commands are listed in alphabetical order B<within each section>,
i.e. all the commands in section 1, then the commands in section 3, etc.
=over 12
EOS1
# extract the summary for each cmd from its man page
foreach my $manpage (@_) {
my ($sectionnum) = $manpage =~ /\.(\d+)\.pod$/;
# Suck in the whole file, then we will parse it.
open(MANPAGE, "$manpage") or die "Error: could not open $manpage for reading.\n";
my @contents = <MANPAGE>;
my $wholemanpage = join('', @contents);
close(MANPAGE);
# This regex matches: optional space, =head1, space, title, space, cmd, space, description, newline
my ($cmd, $description) = $wholemanpage =~ /^\s*=head1\s+\S+\s+(\S+)\s+(.+?)\n/si;
if (!defined($cmd)) { print "Warning: $manpage is not in a recognized structure. It will be ignored.\n"; next; }
if (!defined($description)) { print "Warning: $manpage does not have a description for $cmd. It will be ignored.\n"; next; }
$cmd =~ s/^.<(.+)>$/$1/; # if the cmd name has pod formatting around it, strip it off
$description =~ s/^-\s*//; # if the description has a leading hypen, strip it off
print FILE "\n=item L<$cmd($sectionnum)|$cmd.$sectionnum>\n\n".$description."\n";
}
# Artificially add the xcattest cmd, because the xCAT-test rpm will add this
print FILE "\n=item L<xcattest(1)|xcattest.1>\n\nRun automated xCAT test cases.\n";
print FILE <<"EOS3";
=back
EOS3
close FILE;
}
# Create the html page for one pod.
sub convertpod2html {
my ($podfile, $htmlfile, $poddir, $htmldir) = @_;
#TODO: use --css=<stylesheet> and --title=<pagetitle> to make the pages look better
pod2html($podfile,
"--outfile=$htmlfile",
"--podpath=man1",
"--podroot=$poddir",
"--htmldir=$htmldir",
"--recurse",
"--cachedir=$cachedir",
);
}
# Create the man page for one pod.
sub convertpod2man {
my ($podfile, $manfile, $section) = @_;
my $parser = Pod::Man->new(section => $section);
$parser->parse_from_file($podfile, $manfile);
}

View File

@ -91,7 +91,9 @@ template "/etc/quantum/policy.json" do
group node["openstack"]["network"]["platform"]["group"]
mode 00644
notifies :restart, "service[quantum-server]", :delayed
if node.run_list.expand(node.chef_environment).recipes.include?("openstack-network::server")
notifies :restart, "service[quantum-server]", :delayed
end
end
rabbit_server_role = node["openstack"]["network"]["rabbit_server_chef_role"]
@ -143,12 +145,14 @@ end
# may just be running a subset of agents (like l3_agent)
# and not the api server components, so we ignore restart
# failures here as there may be no quantum-server process
service "quantum-server" do
service_name platform_options["quantum_server_service"]
supports :status => true, :restart => true
ignore_failure true
if node.run_list.expand(node.chef_environment).recipes.include?("openstack-network::server")
service "quantum-server" do
service_name platform_options["quantum_server_service"]
supports :status => true, :restart => true
ignore_failure true
action :nothing
action :nothing
end
end
template "/etc/quantum/quantum.conf" do
@ -166,7 +170,9 @@ template "/etc/quantum/quantum.conf" do
:service_pass => service_pass
)
notifies :restart, "service[quantum-server]", :delayed
if node.run_list.expand(node.chef_environment).recipes.include?("openstack-network::server")
notifies :restart, "service[quantum-server]", :delayed
end
end
template "/etc/quantum/api-paste.ini" do
@ -179,7 +185,9 @@ template "/etc/quantum/api-paste.ini" do
"service_pass" => service_pass
)
notifies :restart, "service[quantum-server]", :delayed
if node.run_list.expand(node.chef_environment).recipes.include?("openstack-network::server")
notifies :restart, "service[quantum-server]", :delayed
end
end
directory "/etc/quantum/plugins/#{main_plugin}" do
@ -336,7 +344,9 @@ when "openvswitch"
:sql_connection => sql_connection,
:local_ip => local_ip
)
notifies :restart, "service[quantum-server]", :delayed
if node.run_list.expand(node.chef_environment).recipes.include?("openstack-network::server")
notifies :restart, "service[quantum-server]", :delayed
end
if node.run_list.expand(node.chef_environment).recipes.include?("openstack-network::openvswitch")
notifies :restart, "service[quantum-plugin-openvswitch-agent]", :delayed
end

View File

@ -9,31 +9,31 @@
"ref": "f759cd013c0a836f2acb219b3e006ff0a1308878"
},
"memcached": {
"locked_version": "1.4.0"
"locked_version": "1.6.2"
},
"runit": {
"locked_version": "1.1.6"
"locked_version": "1.3.0"
},
"build-essential": {
"locked_version": "1.4.0"
"locked_version": "1.4.2"
},
"yum": {
"locked_version": "2.3.0"
"locked_version": "2.4.0"
},
"sysctl": {
"locked_version": "0.3.3"
},
"apt": {
"locked_version": "2.1.0"
"locked_version": "2.3.0"
},
"git": {
"locked_version": "2.5.2"
"locked_version": "2.7.0"
},
"dmg": {
"locked_version": "1.1.0"
"locked_version": "2.0.4"
},
"windows": {
"locked_version": "1.10.0"
"locked_version": "1.11.0"
},
"chef_handler": {
"locked_version": "1.1.4"

View File

@ -0,0 +1,46 @@
# CHANGELOG for cookbook-openstack-object-storage
This file is used to list changes made in each version of cookbook-openstack-object-storage.
## 7.1.0:
* Update apt sources to grizzly to prepare for grizzly
and havana branches
## 7.0.11:
* Add missing swift-container-sync upstart service which is
not set up by default in ubuntu 12.04 packages
## 7.0.10:
* Do not role restrict super_admin_key in proxy config
* Case correct swauth_version attribute in proxy recipe
* Treat platform_options["swauth_packages"] as a list
## 7.0.9:
* Bugfix tempurl role restriction
## 7.0.8:
* Bugfix allow_override spacing in proxy server template
## 7.0.7:
* Add flexibility to middleware pipeline
## 7.0.6:
* Add choice of installing python-swauth from git or package
## 7.0.5:
* Add support for container-sync
## 7.0.4:
* Allow roles used in searches to be defined by cookbook user
## 7.0.3:
* Bugfix the swift-ring-builder output scanner
## 7.0.2:
* Expand statsd support as well as capacity and recon reporting.
## 7.0.1:
* Support more than 24 disks (/dev/sdaa, /dev/vdab, etc)
## 7.0.0:
* Initial openstack object storage cookbook

View File

@ -63,6 +63,14 @@ Attributes
* ```default[:swift][:authmode]``` - "swauth" or "keystone" (default "swauth"). Right now, only swauth is supported
* ```default[:swift][:tempurl]``` - "true" or "false". Adds tempurl to the pipeline and sets allow_overrides to true when using swauth
* ```default[:swift][:swauth_source]``` - "git" or "package"(default). Selects between installing python-swauth from git or system package
* ```default[:swift][:swauth_repository]``` - Specifies git repo. Default "https://github.com/gholt/swauth.git"
* ```default[:swift][:swauth_version]``` - Specifies git repo tagged branch. Default "1.0.8"
* ```default[:swift][:swift_secret_databag_name]``` - this cookbook supports an optional secret databag from which the following attributes are retrieved, overriding any matching default attributes below (defaults to nil)
```
@ -249,7 +257,7 @@ License and Author
| | |
|:---------------------|:---------------------------------------------------|
| **Authors** | Alan Meadows (<alan.meadows@gmail.com>) |
| | Oisin Feely (<of3434@att.com>) |
| | Oisin Feeley (<of3434@att.com>) |
| | Ron Pedde (<ron.pedde@rackspace.com>) |
| | Will Kelly (<will.kelly@rackspace.com>) |
| | |

View File

@ -11,7 +11,7 @@ default["swift"]["git_builder_ip"] = "127.0.0.1"
# the release only has any effect on ubuntu, and must be
# a valid release on http://ubuntu-cloud.archive.canonical.com/ubuntu
default["swift"]["release"] = "folsom"
default["swift"]["release"] = "grizzly"
# we support an optional secret databag from which the following
# attributes are retrieved, overriding any default attributes here
@ -25,6 +25,17 @@ default["swift"]["release"] = "folsom"
# }
default["swift"]["swift_secret_databag_name"] = nil
#--------------------
# roles
#--------------------
default["swift"]["setup_chef_role"] = "swift-setup"
default["swift"]["management_server_chef_role"] = "swift-management-server"
default["swift"]["proxy_server_chef_role"] = "swift-proxy-server"
default["swift"]["object_server_chef_role"] = "swift-object-server"
default["swift"]["account_server_chef_role"] = "swift-account-server"
default["swift"]["container_server_chef_role"] = "swift-container-server"
#--------------------
# authentication
#--------------------
@ -53,7 +64,40 @@ default["swift"]["ring"]["replicas"] = 3
#------------------
# statistics
#------------------
default["swift"]["enable_statistics"] = true
default["swift"]["statistics"]["enabled"] = true
default["swift"]["statistics"]["sample_rate"] = 1
# there are two ways to tell statsd where to find your graphite server
# for periodic publishing: either set the ip directly below, or leave it
# set to nil and supply chef with the role name of your graphite server
# and the interface name from which to retrieve its internal ip address
#
# if no servers with the role below can be found, 127.0.0.1 will be used
default["swift"]["statistics"]["graphing_ip"] = nil
default["swift"]["statistics"]["graphing_role"] = 'graphite-role'
default["swift"]["statistics"]["graphing_interface"] = 'eth0'
# how frequently (in minutes) to run the chef-installed swift-statsd-publish.py
# script, which publishes dispersion and recon statistics
default["swift"]["statistics"]["report_frequency"] = 15
# enable or disable specific portions of generated report
default["swift"]["statistics"]["enable_dispersion_report"] = true
default["swift"]["statistics"]["enable_recon_report"] = true
default["swift"]["statistics"]["enable_disk_report"] = true
# settings for statsd which should be configured to use the local
# statsd daemon that chef will install if statistics are enabled
default["swift"]["statistics"]["statsd_host"] = "127.0.0.1"
default["swift"]["statistics"]["statsd_port"] = "8125"
default["swift"]["statistics"]["statsd_prefix"] = "openstack.swift"
# paths to the recon cache files
default["swift"]["statistics"]["recon_account_cache"] = "/var/cache/swift/account.recon"
default["swift"]["statistics"]["recon_container_cache"] = "/var/cache/swift/container.recon"
default["swift"]["statistics"]["recon_object_cache"] = "/var/cache/swift/object.recon"
#------------------
# network settings
@ -109,11 +153,52 @@ default["swift"]["disk_test_filter"] = [ "candidate =~ /(sd|hd|xvd|vd)(?!a$)[a-z
"not system('/sbin/parted /dev/' + candidate + ' -s print | grep linux-swap')",
"not info.has_key?('removable') or info['removable'] == 0.to_s" ]
#-------------------
# template overrides
#-------------------
# proxy-server
# override in a wrapper to enable tempurl with swauth
default["swift"]["tempurl"]["enabled"] = false
# container-server
# Override this with a list of your swift clusters if you wish to enable
# container sync for your end-users between clusters. This should be an
# array of fqdn hostnames for the cluster endpoints that your end-users
# would access, in the format ['host1', 'host2', 'host3']
default["swift"]["container-server"]["allowed_sync_hosts"] = []
# container-sync logging settings
default["swift"]["container-server"]["container-sync"]["log_name"] = 'container-sync'
default["swift"]["container-server"]["container-sync"]["log_facility"] = 'LOG_LOCAL0'
default["swift"]["container-server"]["container-sync"]["log_level"] = 'INFO'
# If you need to use an HTTP Proxy, set it here; defaults to no proxy.
default["swift"]["container-server"]["container-sync"]["sync_proxy"] = nil
# Will sync, at most, each container once per interval (in seconds)
default["swift"]["container-server"]["container-sync"]["interval"] = 300
# Maximum amount of time to spend syncing each container per pass (in seconds)
default["swift"]["container-server"]["container-sync"]["container_time"] = 60
#------------------
# swauth source
# -----------------
# Versions of swauth in Ubuntu Cloud Archive PPA can be outdated. This
# allows us to choose to install directly from a tagged branch of
# gholt's repository.
# values: package, git
default["swift"]["swauth_source"] = "package"
default["swift"]["swauth_repository"] = "https://github.com/gholt/swauth.git"
default["swift"]["swauth_version"] = "1.0.8"
#------------------
# packages
#------------------
# Leveling between distros
case platform
when "redhat"
@ -132,7 +217,8 @@ when "redhat"
"git_dir" => "/var/lib/git",
"git_service" => "git",
"service_provider" => Chef::Provider::Service::Redhat,
"override_options" => ""
"override_options" => "",
"swift_statsd_publish" => "/usr/bin/swift-statsd-publish.py"
}
#
# python-iso8601 is a missing dependency for swift.
@ -153,7 +239,8 @@ when "centos"
"git_dir" => "/var/lib/git",
"git_service" => "git",
"service_provider" => Chef::Provider::Service::Redhat,
"override_options" => ""
"override_options" => "",
"swift_statsd_publish" => "/usr/bin/swift-statsd-publish.py"
}
when "fedora"
default["swift"]["platform"] = {
@ -171,7 +258,8 @@ when "fedora"
"git_dir" => "/var/lib/git",
"git_service" => "git",
"service_provider" => Chef::Provider::Service::Systemd,
"override_options" => ""
"override_options" => "",
"swift_statsd_publish" => "/usr/bin/swift-statsd-publish.py"
}
when "ubuntu"
default["swift"]["platform"] = {
@ -189,6 +277,7 @@ when "ubuntu"
"git_dir" => "/var/cache/git",
"git_service" => "git-daemon",
"service_provider" => Chef::Provider::Service::Upstart,
"override_options" => "-o Dpkg::Options:='--force-confold' -o Dpkg::Option:='--force-confdef'"
"override_options" => "-o Dpkg::Options:='--force-confold' -o Dpkg::Option:='--force-confdef'",
"swift_statsd_publish" => "/usr/local/bin/swift-statsd-publish.py"
}
end

View File

@ -0,0 +1,19 @@
# swift-container-sync - SWIFT Container Sync
#
# The swift container sync.
description "SWIFT Container Sync"
author "Sergio Rubio <rubiojr@bvox.net>"
start on runlevel [2345]
stop on runlevel [016]
pre-start script
if [ -f "/etc/swift/container-server.conf" ]; then
exec /usr/bin/swift-init container-sync start
else
exit 1
fi
end script
post-stop exec /usr/bin/swift-init container-sync stop

View File

@ -3,7 +3,7 @@ maintainer "ATT, Inc."
license "Apache 2.0"
description "Installs and configures Openstack Swift"
long_description IO.read(File.join(File.dirname(__FILE__), 'README.md'))
version "1.1.0"
version "7.1.0"
recipe "openstack-object-storage::setup", "Does initial setup of a swift cluster"
recipe "openstack-object-storage::account-server", "Installs the swift account server"
recipe "openstack-object-storage::object-server", "Installs the swift object server"

View File

@ -62,7 +62,8 @@ def generate_script
# figure out what's present in the cluster
disk_data[which] = {}
disk_state,_,_ = Chef::Search::Query.new.search(:node,"chef_environment:#{node.chef_environment} AND roles:swift-#{which}-server")
role = node["swift"]["#{which}_server_chef_role"]
disk_state,_,_ = Chef::Search::Query.new.search(:node,"chef_environment:#{node.chef_environment} AND roles:#{role}")
# for a running track of available disks
disk_data[:available] ||= {}
@ -195,24 +196,24 @@ def parse_ring_output(ring_data)
next
elsif line =~ /^\s+(\d+)\s+(\d+)\s+(\d+)\s+(\d+\.\d+\.\d+\.\d+)\s+(\d+)\s+(\S+)\s+([0-9.]+)\s+(\d+)\s+([-0-9.]+)\s*$/
output[:hosts] ||= {}
output[:hosts][$3] ||= {}
output[:hosts][$4] ||= {}
output[:hosts][$3][$5] = {}
output[:hosts][$4][$6] ||= {}
output[:hosts][$3][$5][:id] = $1
output[:hosts][$3][$5][:region] = $2
output[:hosts][$3][$5][:zone] = $3
output[:hosts][$3][$5][:ip] = $4
output[:hosts][$3][$5][:port] = $5
output[:hosts][$3][$5][:device] = $6
output[:hosts][$3][$5][:weight] = $7
output[:hosts][$3][$5][:partitions] = $8
output[:hosts][$3][$5][:balance] = $9
output[:hosts][$4][$6][:id] = $1
output[:hosts][$4][$6][:region] = $2
output[:hosts][$4][$6][:zone] = $3
output[:hosts][$4][$6][:ip] = $4
output[:hosts][$4][$6][:port] = $5
output[:hosts][$4][$6][:device] = $6
output[:hosts][$4][$6][:weight] = $7
output[:hosts][$4][$6][:partitions] = $8
output[:hosts][$4][$6][:balance] = $9
elsif line =~ /^\s+(\d+)\s+(\d+)\s+(\d+\.\d+\.\d+\.\d+)\s+(\d+)\s+(\S+)\s+([0-9.]+)\s+(\d+)\s+([-0-9.]+)\s*$/
output[:hosts] ||= {}
output[:hosts][$3] ||= {}
output[:hosts][$3][$5] = {}
output[:hosts][$3][$5] ||= {}
output[:hosts][$3][$5][:id] = $1
output[:hosts][$3][$5][:zone] = $2
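The net effect of the change above is that ring rows are now keyed by IP address and device ($4/$6) instead of zone and port ($3/$5). A small standalone sketch against a hypothetical swift-ring-builder row; the sample values are illustrative, the regex is the one from the diff:

```ruby
# Hypothetical ring row: id region zone ip port device weight partitions balance
line = "      0      1     3  10.0.0.1  6002  sdb1  100.00  262144    0.00"

if line =~ /^\s+(\d+)\s+(\d+)\s+(\d+)\s+(\d+\.\d+\.\d+\.\d+)\s+(\d+)\s+(\S+)\s+([0-9.]+)\s+(\d+)\s+([-0-9.]+)\s*$/
  output = { :hosts => {} }
  output[:hosts][$4] ||= {}        # key by IP address ...
  output[:hosts][$4][$6] ||= {}    # ... then by device name
  output[:hosts][$4][$6][:zone]   = $3
  output[:hosts][$4][$6][:port]   = $5
  output[:hosts][$4][$6][:weight] = $7
end
# output[:hosts]["10.0.0.1"]["sdb1"] => {:zone=>"3", :port=>"6002", :weight=>"100.00"}
```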

View File

@ -23,11 +23,39 @@ end
include_recipe 'sysctl::default'
#-------------
# stats
#-------------
# optionally statsd daemon for stats collection
if node["swift"]["enable_statistics"]
if node["swift"]["statistics"]["enabled"]
node.set['statsd']['relay_server'] = true
include_recipe 'statsd::server'
end
# find graphing server address
if Chef::Config[:solo] and not node['recipes'].include?("chef-solo-search")
Chef::Log.warn("This recipe uses search. Chef Solo does not support search.")
graphite_servers = []
else
graphite_servers = search(:node, "roles:#{node['swift']['statistics']['graphing_role']} AND chef_environment:#{node.chef_environment}")
end
graphite_host = "127.0.0.1"
unless graphite_servers.empty?
graphite_host = graphite_servers[0]['network']["ipaddress_#{node['swift']['statistics']['graphing_interface']}"]
end
if node['swift']['statistics']['graphing_ip'].nil?
node.set['statsd']['graphite_host'] = graphite_host
else
node.set['statsd']['graphite_host'] = node['swift']['statistics']['graphing_ip']
end
#--------------
# swift common
#--------------
platform_options = node["swift"]["platform"]
# update repository if requested with the ubuntu cloud

View File

@ -91,3 +91,31 @@ template "/etc/swift/container-server.conf" do
notifies :restart, "service[swift-container-updater]", :immediately
notifies :restart, "service[swift-container-auditor]", :immediately
end
# Ubuntu 12.04 packages are missing the swift-container-sync service scripts
# See https://bugs.launchpad.net/cloud-archive/+bug/1250171
if platform?("ubuntu")
cookbook_file "/etc/init/swift-container-sync.conf" do
owner "root"
group "root"
mode "0755"
source "swift-container-sync.conf.upstart"
action :create
not_if "[ -e /etc/init/swift-container-sync.conf ]"
end
link "/etc/init.d/swift-container-sync" do
to "/lib/init/upstart-job"
not_if "[ -e /etc/init.d/swift-container-sync ]"
end
end
service_name=platform_options["service_prefix"] + 'swift-container-sync' + platform_options["service_suffix"]
unless node["swift"]["container-server"]["allowed_sync_hosts"] == []
service "swift-container-sync" do
service_name service_name
provider platform_options["service_provider"]
supports :status => false, :restart => true
action [:enable, :start]
only_if "[ -e /etc/swift/container-server.conf ] && [ -e /etc/swift/container.ring.gz ]"
end
end

View File

@ -26,10 +26,29 @@ include_recipe "openstack-object-storage::ring-repo"
platform_options = node["swift"]["platform"]
if node["swift"]["authmode"] == "swauth"
platform_options["swauth_packages"].each.each do |pkg|
package pkg do
action :install
options platform_options["override_options"] # retain configs
case node["swift"]["swauth_source"]
when "package"
platform_options["swauth_packages"].each do |pkg|
package pkg do
action :install
options platform_options["override_options"]
end
end
when "git"
git "#{Chef::Config[:file_cache_path]}/swauth" do
repository node["swift"]["swauth_repository"]
revision node["swift"]["swauth_version"]
action :sync
end
bash "install_swauth" do
cwd "#{Chef::Config[:file_cache_path]}/swauth"
user "root"
group "root"
code <<-EOH
python setup.py install
EOH
environment 'PREFIX' => "/usr/local"
end
end
end
@ -44,6 +63,19 @@ else
auth_key = swift_secrets['dispersion_auth_key']
end
if node['swift']['statistics']['enabled']
template platform_options["swift_statsd_publish"] do
source "swift-statsd-publish.py.erb"
owner "root"
group "root"
mode "0755"
end
cron "cron_swift_statsd_publish" do
command "#{platform_options['swift_statsd_publish']} > /dev/null 2>&1"
minute "*/#{node["swift"]["statistics"]["report_frequency"]}"
end
end
template "/etc/swift/dispersion.conf" do
source "dispersion.conf.erb"
owner "swift"

View File

@ -94,6 +94,17 @@ template "/etc/swift/object-server.conf" do
notifies :restart, "service[swift-object-auditor]", :immediately
end
%w[ /var/swift /var/swift/recon ].each do |path|
directory path do
# Create the swift recon cache directory and set its permissions.
owner "swift"
group "swift"
mode 00755
action :create
end
end
cron "swift-recon" do
minute "*/5"
command "swift-recon-cron /etc/swift/object-server.conf"

View File

@ -26,7 +26,8 @@ end
if node.run_list.expand(node.chef_environment).recipes.include?("openstack-object-storage::setup")
Chef::Log.info("I ran the openstack-object-storage::setup so I will use my own swift passwords")
else
setup = search(:node, "chef_environment:#{node.chef_environment} AND roles:swift-setup")
setup_role = node["swift"]["setup_chef_role"]
setup = search(:node, "chef_environment:#{node.chef_environment} AND roles:#{setup_role}")
if setup.length == 0
Chef::Application.fatal! "You must have run the openstack-object-storage::setup recipe (on this or another node) before running the swift::proxy recipe on this node"
elsif setup.length == 1
@ -47,11 +48,35 @@ platform_options["proxy_packages"].each do |pkg|
end
end
package "python-swauth" do
action :install
only_if { node["swift"]["authmode"] == "swauth" }
if node["swift"]["authmode"] == "swauth"
case node["swift"]["swauth_source"]
when "package"
platform_options["swauth_packages"].each do |pkg|
package pkg do
action :install
options platform_options["override_options"]
end
end
when "git"
git "#{Chef::Config[:file_cache_path]}/swauth" do
repository node["swift"]["swauth_repository"]
revision node["swift"]["swauth_version"]
action :sync
end
bash "install_swauth" do
cwd "#{Chef::Config[:file_cache_path]}/swauth"
user "root"
group "root"
code <<-EOH
python setup.py install
EOH
environment 'PREFIX' => "/usr/local"
end
end
end
package "python-swift-informant" do
action :install
only_if { node["swift"]["use_informant"] }
@ -84,7 +109,8 @@ if Chef::Config[:solo]
memcache_servers = [ "127.0.0.1:11211" ]
else
memcache_servers = []
proxy_nodes = search(:node, "chef_environment:#{node.chef_environment} AND roles:swift-proxy-server")
proxy_role = node["swift"]["proxy_server_chef_role"]
proxy_nodes = search(:node, "chef_environment:#{node.chef_environment} AND roles:#{proxy_role}")
proxy_nodes.each do |proxy|
proxy_ip = locate_ip_in_cidr(node["swift"]["network"]["proxy-cidr"], proxy)
next if not proxy_ip # skip nil ips so we dont break the config
@ -101,6 +127,19 @@ else
authkey = swift_secrets['swift_authkey']
end
if node["swift"]["authmode"] == "keystone"
openstack_identity_bootstrap_token = secret "secrets", "openstack_identity_bootstrap_token"
%w[ /home/swift /home/swift/keystone-signing ].each do |path|
directory path do
owner "swift"
group "swift"
mode 00700
action :create
end
end
end
# create proxy config file
template "/etc/swift/proxy-server.conf" do
source "proxy-server.conf.erb"
@ -108,6 +147,7 @@ template "/etc/swift/proxy-server.conf" do
group "swift"
mode "0600"
variables("authmode" => node["swift"]["authmode"],
"openstack_identity_bootstrap_token" => openstack_identity_bootstrap_token,
"bind_host" => node["swift"]["network"]["proxy-bind-ip"],
"bind_port" => node["swift"]["network"]["proxy-bind-port"],
"authkey" => authkey,

View File

@ -22,7 +22,8 @@ include_recipe "openstack-object-storage::common"
if Chef::Config[:solo]
Chef::Application.fatal! "This recipe uses search. Chef Solo does not support search."
else
setup_role_count = search(:node, "chef_environment:#{node.chef_environment} AND roles:swift-setup").length
setup_role = node["swift"]["setup_chef_role"]
setup_role_count = search(:node, "chef_environment:#{node.chef_environment} AND roles:#{setup_role}").length
if setup_role_count > 1
Chef::Application.fatal! "You can only have one node with the swift-setup role"
end
@ -42,9 +43,32 @@ platform_options["proxy_packages"].each do |pkg|
end
end
package "python-swauth" do
action :upgrade
only_if { node["swift"]["authmode"] == "swauth" }
if node["swift"]["authmode"] == "swauth"
case node["swift"]["swauth_source"]
when "package"
platform_options["swauth_packages"].each do |pkg|
package pkg do
action :upgrade
options platform_options["override_options"]
end
end
when "git"
git "#{Chef::Config[:file_cache_path]}/swauth" do
repository node["swift"]["swauth_repository"]
revision node["swift"]["swauth_version"]
action :sync
end
bash "install_swauth" do
cwd "#{Chef::Config[:file_cache_path]}/swauth"
user "root"
group "root"
code <<-EOH
python setup.py install
EOH
environment 'PREFIX' => "/usr/local"
end
end
end
package "python-swift-informant" do

View File

@ -14,12 +14,12 @@ describe 'openstack-object-storage::common' do
@node = @chef_run.node
@node.set['platform_family'] = "debian"
@node.set['lsb']['codename'] = "precise"
@node.set['swift']['release'] = "folsom"
@node.set['swift']['release'] = "grizzly"
@node.set['swift']['authmode'] = 'swauth'
@node.set['swift']['git_builder_ip'] = '10.0.0.10'
# TODO: this does not work
# ::Chef::Log.should_receive(:info).with("chefspec: precise-updates/folsom")
# ::Chef::Log.should_receive(:info).with("chefspec: precise-updates/grizzly")
@chef_run.converge "openstack-object-storage::common"
end

View File

@ -16,6 +16,8 @@ describe 'openstack-object-storage::container-server' do
@node.set['swift']['authmode'] = 'swauth'
@node.set['swift']['network']['container-bind-ip'] = '10.0.0.1'
@node.set['swift']['network']['container-bind-port'] = '8080'
@node.set['swift']['container-server']['allowed_sync_hosts'] = ['host1', 'host2', 'host3']
@node.set['swift']['container-bind-port'] = '8080'
@node.set['swift']['disk_enum_expr'] = "[{ 'sda' => {}}]"
@node.set['swift']['disk_test_filter'] = [ "candidate =~ /sd[^a]/ or candidate =~ /hd[^a]/ or candidate =~ /vd[^a]/ or candidate =~ /xvd[^a]/",
"File.exist?('/dev/' + candidate)",
@ -33,7 +35,7 @@ describe 'openstack-object-storage::container-server' do
end
it "starts swift container services on boot" do
%w{swift-container swift-container-auditor swift-container-replicator swift-container-updater}.each do |svc|
%w{swift-container swift-container-auditor swift-container-replicator swift-container-updater swift-container-sync}.each do |svc|
expect(@chef_run).to set_service_to_start_on_boot svc
end
end
@ -52,12 +54,34 @@ describe 'openstack-object-storage::container-server' do
expect(sprintf("%o", @file.mode)).to eq "600"
end
it "template contents" do
pending "TODO: implement"
it "has allowed sync hosts" do
expect(@chef_run).to create_file_with_content @file.name,
"allowed_sync_hosts = host1,host2,host3"
end
end
end
it "should create container sync upstart conf for ubuntu" do
expect(@chef_run).to create_cookbook_file "/etc/init/swift-container-sync.conf"
end
it "should create container sync init script for ubuntu" do
expect(@chef_run).to create_link "/etc/init.d/swift-container-sync"
end
describe "/etc/swift/container-server.conf" do
before do
@node = @chef_run.node
@node.set["swift"]["container-server"]["allowed_sync_hosts"] = []
@chef_run.converge "openstack-object-storage::container-server"
@file = @chef_run.template "/etc/swift/container-server.conf"
end
it "has no allowed_sync_hosts on empty lists" do
expect(@chef_run).not_to create_file_with_content @file.name,
/^allowed_sync_hosts =/
end
end
end
end

View File

@ -14,7 +14,7 @@ describe 'openstack-object-storage::disks' do
@node = @chef_run.node
@node.set['platform_family'] = "debian"
@node.set['lsb']['codename'] = "precise"
@node.set['swift']['release'] = "folsom"
@node.set['swift']['release'] = "grizzly"
@node.set['swift']['authmode'] = 'swauth'
@node.set['swift']['git_builder_ip'] = '10.0.0.10'
@node.set['swift']['disk_enum_expr'] = "[{ 'sda' => {}}]"

View File

@ -14,6 +14,9 @@ describe 'openstack-object-storage::management-server' do
@node = @chef_run.node
@node.set['lsb']['code'] = 'precise'
@node.set['swift']['authmode'] = 'swauth'
@node.set['swift']['statistics']['enabled'] = true
@node.set['swift']['swauth_source'] = 'package'
@node.set['swift']['platform']['swauth_packages'] = ['swauth']
@chef_run.converge "openstack-object-storage::management-server"
end
@ -42,6 +45,27 @@ describe 'openstack-object-storage::management-server' do
end
describe "/usr/local/bin/swift-statsd-publish.py" do
before do
@file = @chef_run.template "/usr/local/bin/swift-statsd-publish.py"
end
it "has proper owner" do
expect(@file).to be_owned_by "root", "root"
end
it "has proper modes" do
expect(sprintf("%o", @file.mode)).to eq "755"
end
it "has expected statsd host" do
expect(@chef_run).to create_file_with_content @file.name,
"self.statsd_host = '127.0.0.1'"
end
end
end
end

View File

@ -14,6 +14,8 @@ describe 'openstack-object-storage::proxy-server' do
@node = @chef_run.node
@node.set['lsb']['code'] = 'precise'
@node.set['swift']['authmode'] = 'swauth'
@node.set['swift']['platform']['swauth_packages'] = ['swauth']
@node.set['swift']['swauth_source'] = 'package'
@node.set['swift']['network']['proxy-bind-ip'] = '10.0.0.1'
@node.set['swift']['network']['proxy-bind-port'] = '8080'
@chef_run.converge "openstack-object-storage::proxy-server"
@ -28,7 +30,7 @@ describe 'openstack-object-storage::proxy-server' do
end
it "installs swauth package if swauth is selected" do
expect(@chef_run).to install_package "python-swauth"
expect(@chef_run).to install_package "swauth"
end
it "starts swift-proxy on boot" do

View File

@ -14,7 +14,7 @@ describe 'openstack-object-storage::ring-repo' do
@node = @chef_run.node
@node.set['platform_family'] = "debian"
@node.set['lsb']['codename'] = "precise"
@node.set['swift']['release'] = "folsom"
@node.set['swift']['release'] = "grizzly"
@node.set['swift']['authmode'] = 'swauth'
@node.set['swift']['git_builder_ip'] = '10.0.0.10'
@chef_run.converge "openstack-object-storage::ring-repo"

View File

@ -14,7 +14,7 @@ describe 'openstack-object-storage::rsync' do
@node = @chef_run.node
@node.set['platform_family'] = "debian"
@node.set['lsb']['codename'] = "precise"
@node.set['swift']['release'] = "folsom"
@node.set['swift']['release'] = "grizzly"
@node.set['swift']['authmode'] = 'swauth'
@node.set['swift']['git_builder_ip'] = '10.0.0.10'
@chef_run.converge "openstack-object-storage::rsync"

View File

@ -15,11 +15,11 @@
bind_ip = <%= @bind_ip %>
bind_port = <%= @bind_port %>
workers = 10
<% if node[:swift][:enable_statistics] -%>
<% if node[:swift][:statistics][:enabled] -%>
log_statsd_host = localhost
log_statsd_port = 8125
log_statsd_default_sample_rate = 1
log_statsd_metric_prefix = openstack.swift.<%= node[:hostname] %>
log_statsd_default_sample_rate = <%= node[:swift][:statistics][:sample_rate] %>
log_statsd_metric_prefix = <%= node[:swift][:statistics][:statsd_prefix] %>.<%= node[:hostname] %>
<% end %>
[pipeline:main]

View File

@ -18,12 +18,16 @@
bind_ip = <%= @bind_ip %>
bind_port = <%= @bind_port %>
workers = 10
<% if node[:swift][:enable_statistics] -%>
<% if node["swift"]["enable_statistics"] -%>
log_statsd_host = localhost
log_statsd_port = 8125
log_statsd_default_sample_rate = 1
log_statsd_metric_prefix = openstack.swift.<%= node[:hostname] %>
<% end %>
log_statsd_metric_prefix = openstack.swift.<%= node["hostname"] %>
<% end -%>
<% if node["swift"]["container-server"]["allowed_sync_hosts"] -%>
allowed_sync_hosts = <%= node["swift"]["container-server"]["allowed_sync_hosts"].join(",") %>
<% end -%>
[pipeline:main]
pipeline = container-server
@ -77,12 +81,14 @@ use = egg:swift#container
[container-sync]
# You can override the default log routing for this app here (don't use set!):
# log_name = container-sync
# log_facility = LOG_LOCAL0
# log_level = INFO
log_name = <%= node["swift"]["container-server"]["container-sync"]["log_name"] %>
log_facility = <%= node["swift"]["container-server"]["container-sync"]["log_facility"] %>
log_level = <%= node["swift"]["container-server"]["container-sync"]["log_level"] %>
# If you need to use an HTTP Proxy, set it here; defaults to no proxy.
# sync_proxy = http://127.0.0.1:8888
<% if node["swift"]["container-server"]["container-sync"]["sync_proxy"] -%>
sync_proxy = <%= node["swift"]["container-server"]["container-sync"]["sync_proxy"] %>
<% end -%>
# Will sync, at most, each container once per interval
# interval = 300
interval = <%= node["swift"]["container-server"]["container-sync"]["interval"] %>
# Maximum amount of time to spend syncing each container per pass
# container_time = 60
container_time = <%= node["swift"]["container-server"]["container-sync"]["container_time"] %>

View File

@ -16,11 +16,11 @@
bind_ip = <%= @bind_ip %>
bind_port = <%= @bind_port %>
workers = 10
<% if node[:swift][:enable_statistics] -%>
<% if node[:swift][:statistics][:enabled] -%>
log_statsd_host = localhost
log_statsd_port = 8125
log_statsd_default_sample_rate = 1
log_statsd_metric_prefix = openstack.swift.<%= node[:hostname] %>
log_statsd_default_sample_rate = <%= node[:swift][:statistics][:sample_rate] %>
log_statsd_metric_prefix = <%= node[:swift][:statistics][:statsd_prefix] %>.<%= node[:hostname] %>
<% end %>
[pipeline:main]

View File

@ -8,15 +8,23 @@ when "swauth"
end
account_management=false
if node[:roles].include?("swift-management-server") and node[:swift][:authmode] == "swauth" then
if node[:swift][:authmode] == "swauth" then
account_management="true"
end
# need to both: 1) add tempurl before auth middleware, 2) set allow_overrides=true
tempurl_toggle=false
if node[:swift][:authmode] == "swauth" and node[:swift][:tempurl][:enabled] == true then
tempurl_toggle = true
pipeline = "tempurl swauth"
end
-%>
# This file is managed by chef. Do not edit it.
#
# Cluster info:
# Auth mode: <%= node[:swift][:authmode] %>
# Management server: <%= node[:roles].include?("swift-management-server") %>
# Management server: <%= node[:roles].include?(node[:swift][:management_server_chef_role]) %>
# Account management enabled: <%= account_management %>
# Auth pipeline: <%= pipeline %>
@ -38,11 +46,12 @@ end
workers = <%= [ node[:cpu][:total] - 1, 1 ].max %>
bind_ip = <%= @bind_host %>
bind_port = <%= @bind_port %>
<% if node[:swift][:enable_statistics] -%>
user = swift
<% if node[:swift][:statistics][:enabled] -%>
log_statsd_host = localhost
log_statsd_port = 8125
log_statsd_default_sample_rate = 1
log_statsd_metric_prefix = openstack.swift.<%= node[:hostname] %>
log_statsd_default_sample_rate = <%= node[:swift][:statistics][:sample_rate] %>
log_statsd_metric_prefix = <%= node[:swift][:statistics][:statsd_prefix] %>.<%= node[:hostname] %>
<% end %>
@ -82,13 +91,7 @@ use = egg:swift#proxy
# If set to 'true' authorized accounts that do not yet exist within the Swift
# cluster will be automatically created.
# account_autocreate = false
######
#
# N.B. ideally allow_account_management would only be set on the
# management server, but swauth will delete using the cluster url
# and not the local url
# allow_account_managemnet = <%= account_management %>
allow_account_management = true
allow_account_management = <%= account_management %>
<% if @authmode == "keystone" -%>
account_autocreate = true
@ -106,6 +109,12 @@ default_swift_cluster = local#<%= node[:swift][:swift_url] %>#<%= node[:swift][:
<% else %>
default_swift_cluster = local#<%= node[:swift][:swift_url] %>
<% end %>
<% if tempurl_toggle -%>
allow_overrides = true
<% end %>
<% end %>
<% if node["swift"]["container-server"]["allowed_sync_hosts"] -%>
allowed_sync_hosts = <%= node["swift"]["container-server"]["allowed_sync_hosts"].join(",") %>
<% end %>
[filter:healthcheck]
@ -129,7 +138,10 @@ use = egg:swift#memcache
# commas, as in: 10.1.2.3:11211,10.1.2.4:11211
# memcache_servers = 127.0.0.1:11211
#####
memcache_servers = <%= @memcache_servers.join(",") %>
#memcache_servers = <%= @memcache_servers.join(",") %>
<% unless @memcache_servers.empty? -%>
memcache_servers = <%= @memcache_servers %>
<% end -%>
[filter:ratelimit]
use = egg:swift#ratelimit
@ -238,7 +250,7 @@ use = egg:swift#tempurl
use = egg:swift#formpost
[filter:keystoneauth]
operator_roles = Member,admin
operator_roles = Member,admin,swiftoperator
use = egg:swift#keystoneauth
[filter:proxy-logging]
@ -253,10 +265,31 @@ use = egg:swift#proxy_logging
# You can use log_statsd_* from [DEFAULT] or override them here:
# access_log_statsd_host = localhost
# access_log_statsd_port = 8125
# access_log_statsd_default_sample_rate = 1
# access_log_statsd_default_sample_rate = <%= node[:swift][:statistics][:sample_rate] %>
# access_log_statsd_metric_prefix =
# access_log_headers = False
# What HTTP methods are allowed for StatsD logging (comma-sep); request methods
# not in this list will have "BAD_METHOD" for the <verb> portion of the metric.
# log_statsd_valid_http_methods = GET,HEAD,POST,PUT,DELETE,COPY
[filter:authtoken]
<% case @authmode
when "keystone" -%>
paste.filter_factory = keystone.middleware.auth_token:filter_factory
# usage for anonymous referrers ('.r:*')
delay_auth_decision = true
#
signing_dir = /home/swift/keystone-signing
auth_protocol = http
auth_port = 35357
auth_host = <%= node["swift"]["network"]["proxy-bind-ip"] %>
admin_token = <%= @openstack_identity_bootstrap_token %>
# the service tenant and swift userid and password created in Keystone
admin_tenant_name = service
admin_user = swift
admin_password = swift
<% end -%>

View File

@ -3,7 +3,7 @@ gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 0.0.0.0
address = <%= @storage_local_net_ip %>
[account]
max connections = 10

View File

@ -0,0 +1,157 @@
#!/usr/bin/env python
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# coding=utf-8
#
# Author: Alan Meadows <alan.meadows@gmail.com>
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""
THIS FILE WAS INSTALLED BY CHEF. ANY CHANGES WILL BE OVERWRITTEN.
Openstack swift collector for recon and dispersion reports. Will send
back dispersion reporting metrics as well as swift recon statistics
to a statsd server for graphite consumption
"""
from subprocess import Popen, PIPE, check_call
from socket import socket, AF_INET, SOCK_DGRAM
import re
import os
try:
import json
json # workaround for pyflakes issue #13
except ImportError:
import simplejson as json
class OpenStackSwiftStatisticsCollector(object):
def __init__(self):
'''Setup some initial values defined by chef'''
self.statsd_host = '<%= node[:swift][:statistics][:statsd_host] %>'
self.statsd_port = <%= node[:swift][:statistics][:statsd_port] %>
self.statsd_prefix = '<%= node[:swift][:statistics][:statsd_prefix] %>'
<% if node[:swift][:statistics][:enable_dispersion_report] -%>
self.enable_dispersion_report = True
<% else %>
self.enable_dispersion_report = False
<% end %>
<% if node[:swift][:statistics][:enable_recon_report] -%>
self.enable_recon_report = True
<% else %>
self.enable_recon_report = False
<% end %>
<% if node[:swift][:statistics][:enable_disk_report] -%>
self.enable_disk_report = True
<% else %>
self.enable_disk_report = False
<% end %>
self.recon_account_cache = '<%= node[:swift][:statistics][:recon_account_cache] %>'
self.recon_container_cache = '<%= node[:swift][:statistics][:recon_container_cache] %>'
self.recon_object_cache = '<%= node[:swift][:statistics][:recon_object_cache] %>'
def _dispersion_report(self):
"""
Swift Dispersion Report Collection
"""
p = Popen(['/usr/bin/swift-dispersion-report', '-j'],
stdout=PIPE, stderr=PIPE)
stdout, stderr = p.communicate()
self.publish('%s.dispersion.errors' % self.statsd_prefix, len(stderr.split('\n')) - 1)
data = json.loads(stdout)
for t in ('object', 'container'):
for (k, v) in data[t].items():
self.publish('%s.dispersion.%s.%s' % (self.statsd_prefix, t, k), v)
def _recon_report(self):
"""
Swift Recon Collection
"""
recon_cache = {'account': self.recon_account_cache,
'container': self.recon_container_cache,
'object': self.recon_object_cache}
for recon_type in recon_cache:
if not os.access(recon_cache[recon_type], os.R_OK):
continue
try:
f = open(recon_cache[recon_type])
try:
rmetrics = json.loads(f.readlines()[0].strip())
metrics = self._process_cache(rmetrics)
for k, v in metrics:
metric_name = '%s.%s.%s' % (self.statsd_prefix, recon_type, ".".join(k))
if isinstance(v, (int, float)):
self.publish(metric_name, v)
except (ValueError, IndexError):
continue
finally:
f.close()
def _disk_report(self):
"""
Swift Disk Capacity Report
"""
p = Popen(['/usr/bin/swift-recon', '-d'],
stdout=PIPE, stderr=PIPE)
stdout, stderr = p.communicate()
used, total = 0, 0
match = re.search(r'.* space used: ([0-9]*\.?[0-9]+) of ([0-9]*\.?[0-9]+)', stdout, re.M|re.I)
if match:
used, total = [int(i) for i in match.groups()]
highest, avg = 0, 0
match = re.search(r'.* lowest:.+highest: ([0-9]*\.?[0-9]+)%, avg: ([0-9]*\.?[0-9]+)%', stdout, re.M|re.I)
if match:
highest, avg = match.groups()
self.publish('%s.capacity.bytes_used' % self.statsd_prefix, used)
self.publish('%s.capacity.bytes_free' % self.statsd_prefix, total-used)
self.publish('%s.capacity.bytes_utilization' % self.statsd_prefix, int((float(used)/total)*100) if total else 0)
self.publish('%s.capacity.single_disk_utilization_highest' % self.statsd_prefix, highest)
self.publish('%s.capacity.single_disk_utilization_average' % self.statsd_prefix, avg)
def collect(self):
if (self.enable_dispersion_report):
self._dispersion_report()
if (self.enable_recon_report):
self._recon_report()
if (self.enable_disk_report):
self._disk_report()
def publish(self, metric_name, value):
"""Publish a metric to statsd server"""
# TODO: IPv6 support
print '%s:%s|g' % (metric_name.encode('utf-8'), value), (self.statsd_host, self.statsd_port)
udp_sock = socket(AF_INET, SOCK_DGRAM)
udp_sock.sendto('%s:%s|g' % (metric_name.encode('utf-8'), value), (self.statsd_host, self.statsd_port))
def _process_cache(self, d, path=()):
"""Recusively walk a nested recon cache dict to obtain path/values"""
metrics = []
for k, v in d.iteritems():
if not isinstance(v, dict):
metrics.append((path + (k,), v))
else:
metrics.extend(self._process_cache(v, path + (k,)))  # keep nested keys, not just top-level ones
return metrics
if __name__ == '__main__':
collector = OpenStackSwiftStatisticsCollector()
collector.collect()

View File

@ -66,6 +66,13 @@ rabbitmq_user "add openstack rabbit user" do
action :add
end
rabbitmq_user "change the password of the openstack rabbit user" do
user user
password pass
action :change_password
end
rabbitmq_vhost "add openstack rabbit vhost" do
vhost vhost

View File

@ -0,0 +1 @@
metadata

View File

@ -0,0 +1,28 @@
{
"sources": {
"statsd": {
"path": "."
},
"build-essential": {
"locked_version": "1.4.2"
},
"git": {
"locked_version": "2.6.0"
},
"dmg": {
"locked_version": "2.0.0"
},
"yum": {
"locked_version": "2.3.2"
},
"windows": {
"locked_version": "1.10.0"
},
"chef_handler": {
"locked_version": "1.1.4"
},
"runit": {
"locked_version": "1.2.0"
}
}
}

View File

@ -0,0 +1,8 @@
source "https://rubygems.org"
gem "chef", "~> 11.4.4"
gem "json", "<= 1.7.7" # chef dependency
gem "berkshelf", "~> 2.0.10"
gem "chefspec", "~> 1.2.0"
gem "foodcritic"
gem "strainer"

View File

@ -0,0 +1,212 @@
GEM
remote: https://rubygems.org/
specs:
activesupport (3.2.14)
i18n (~> 0.6, >= 0.6.4)
multi_json (~> 1.0)
addressable (2.3.5)
akami (1.2.0)
gyoku (>= 0.4.0)
nokogiri (>= 1.4.0)
berkshelf (2.0.10)
activesupport (~> 3.2.0)
addressable (~> 2.3.4)
buff-shell_out (~> 0.1)
chozo (>= 0.6.1)
faraday (>= 0.8.5)
hashie (>= 2.0.2)
minitar (~> 0.5.4)
rbzip2 (~> 0.2.0)
retryable (~> 1.3.3)
ridley (~> 1.5.0)
solve (>= 0.5.0)
thor (~> 0.18.0)
buff-config (0.4.0)
buff-extensions (~> 0.3)
varia_model (~> 0.1)
buff-extensions (0.5.0)
buff-ignore (1.1.0)
buff-platform (0.1.0)
buff-ruby_engine (0.1.0)
buff-shell_out (0.1.0)
buff-ruby_engine (~> 0.1.0)
builder (3.2.2)
celluloid (0.14.1)
timers (>= 1.0.0)
celluloid-io (0.14.1)
celluloid (>= 0.14.1)
nio4r (>= 0.4.5)
chef (11.4.4)
erubis
highline (>= 1.6.9)
json (>= 1.4.4, <= 1.7.7)
mixlib-authentication (>= 1.3.0)
mixlib-cli (~> 1.3.0)
mixlib-config (>= 1.1.2)
mixlib-log (>= 1.3.0)
mixlib-shellout
net-ssh (~> 2.6)
net-ssh-multi (~> 1.1.0)
ohai (>= 0.6.0)
rest-client (>= 1.0.4, < 1.7.0)
yajl-ruby (~> 1.1)
chefspec (1.2.0)
chef (>= 10.0)
erubis
fauxhai (>= 0.1.1, < 2.0)
minitest-chef-handler (>= 0.6.0)
rspec (~> 2.0)
chozo (0.6.1)
activesupport (>= 3.2.0)
hashie (>= 2.0.2)
multi_json (>= 1.3.0)
ci_reporter (1.9.0)
builder (>= 2.1.2)
diff-lcs (1.2.4)
erubis (2.7.0)
faraday (0.8.8)
multipart-post (~> 1.2.0)
fauxhai (1.1.1)
httparty
net-ssh
ohai
ffi (1.9.0)
foodcritic (2.2.0)
erubis
gherkin (~> 2.11.7)
nokogiri (~> 1.5.4)
treetop (~> 1.4.10)
yajl-ruby (~> 1.1.0)
gherkin (2.11.8)
multi_json (~> 1.3)
gssapi (1.0.3)
ffi (>= 1.0.1)
gyoku (1.1.0)
builder (>= 2.1.2)
hashie (2.0.5)
highline (1.6.19)
httparty (0.11.0)
multi_json (~> 1.0)
multi_xml (>= 0.5.2)
httpclient (2.2.0.2)
httpi (0.9.7)
rack
i18n (0.6.5)
ipaddress (0.8.0)
json (1.7.7)
little-plugger (1.1.3)
logging (1.6.2)
little-plugger (>= 1.1.3)
mime-types (1.25)
minitar (0.5.4)
minitest (4.7.5)
minitest-chef-handler (1.0.1)
chef
ci_reporter
minitest (~> 4.7.3)
mixlib-authentication (1.3.0)
mixlib-log
mixlib-cli (1.3.0)
mixlib-config (1.1.2)
mixlib-log (1.6.0)
mixlib-shellout (1.2.0)
multi_json (1.7.9)
multi_xml (0.5.5)
multipart-post (1.2.0)
net-http-persistent (2.9)
net-ssh (2.6.8)
net-ssh-gateway (1.2.0)
net-ssh (>= 2.6.5)
net-ssh-multi (1.1)
net-ssh (>= 2.1.4)
net-ssh-gateway (>= 0.99.0)
nio4r (0.5.0)
nokogiri (1.5.10)
nori (1.1.5)
ohai (6.18.0)
ipaddress
mixlib-cli
mixlib-config
mixlib-log
mixlib-shellout
systemu
yajl-ruby
polyglot (0.3.3)
rack (1.5.2)
rbzip2 (0.2.0)
rest-client (1.6.7)
mime-types (>= 1.16)
retryable (1.3.3)
ridley (1.5.2)
addressable
buff-config (~> 0.2)
buff-extensions (~> 0.3)
buff-ignore (~> 1.1)
buff-shell_out (~> 0.1)
celluloid (~> 0.14.0)
celluloid-io (~> 0.14.0)
erubis
faraday (>= 0.8.4)
hashie (>= 2.0.2)
json (>= 1.7.7)
mixlib-authentication (>= 1.3.0)
net-http-persistent (>= 2.8)
net-ssh
nio4r (>= 0.5.0)
retryable
solve (>= 0.4.4)
varia_model (~> 0.1)
winrm (~> 1.1.0)
rspec (2.14.1)
rspec-core (~> 2.14.0)
rspec-expectations (~> 2.14.0)
rspec-mocks (~> 2.14.0)
rspec-core (2.14.5)
rspec-expectations (2.14.2)
diff-lcs (>= 1.1.3, < 2.0)
rspec-mocks (2.14.3)
rubyntlm (0.1.1)
savon (0.9.5)
akami (~> 1.0)
builder (>= 2.1.2)
gyoku (>= 0.4.0)
httpi (~> 0.9)
nokogiri (>= 1.4.0)
nori (~> 1.0)
wasabi (~> 1.0)
solve (0.8.1)
strainer (3.3.0)
berkshelf (~> 2.0)
buff-platform (~> 0.1)
systemu (2.5.2)
thor (0.18.1)
timers (1.1.0)
treetop (1.4.15)
polyglot
polyglot (>= 0.3.1)
uuidtools (2.1.4)
varia_model (0.2.0)
buff-extensions (~> 0.2)
hashie (>= 2.0.2)
wasabi (1.0.0)
nokogiri (>= 1.4.0)
winrm (1.1.2)
gssapi (~> 1.0.0)
httpclient (~> 2.2.0.2)
logging (~> 1.6.1)
nokogiri (~> 1.5.0)
rubyntlm (~> 0.1.1)
savon (= 0.9.5)
uuidtools (~> 2.1.2)
yajl-ruby (1.1.0)
PLATFORMS
ruby
DEPENDENCIES
berkshelf (~> 2.0.10)
chef (~> 11.4.4)
chefspec (~> 1.2.0)
foodcritic
json (<= 1.7.7)
strainer

View File

@ -1,69 +1,53 @@
# DESCRIPTION
Description
===========
Chef cookbook to install [Etsy's
StatsD](https://github.com/etsy/statsd) daemon. Supports the new
pluggable backend modules.
Installs and sets up statsd <http://github.com/etsy/statsd>
# REQUIREMENTS
Requirements
============
Depends on the cookbooks:
Ubuntu 12.04
* git
* nodejs
Attributes
==========
# ATTRIBUTES
* `node['statsd']['port']` - The port for Statsd to listen for stats on. Defaults to 8125
* `node['statsd']['graphite_host']` - The host to forward processed statistics to. Defaults to localhost.
* `node['statsd']['graphite_port']` - The port to forward processed statistics to. Defaults to 2003
* `node['statsd']['package_version']` - The version to use when creating the package. Defaults to 0.6.0
* `node['statsd']['tmp_dir']` - The temporary directory to use while building the package. Defaults to /tmp
* `node['statsd']['repo']` - The git repo to use. Defaults to "git://github.com/etsy/statsd.git"
* `node['statsd']['sha']` - The sha checksum of the repo to use
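A hedged role sketch using the attribute names listed above; the role name and graphite host are illustrative, the attributes and the statsd::server recipe come from this cookbook:

```ruby
# roles/statsd-relay.rb (hypothetical role)
name "statsd-relay"
description "Illustrative override of the statsd attributes above"
run_list "recipe[statsd::server]"
override_attributes(
  "statsd" => {
    "port"            => 8125,
    "graphite_host"   => "graphite.example.com",
    "graphite_port"   => 2003,
    "package_version" => "0.6.0"   # version stamped into the built .deb
  }
)
```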
## Basic attributes
Usage
=====
* `repo`: Location of statsd repo (defaults to Etsy's).
* `log_file`: Where to log output (defaults to:
`/var/log/statsd.log`).
* `flush_interval_msecs`: Flush interval in msecs (default 10000).
* `port`: Port to listen for UDP stats (default 8125).
Including this recipe will build a dpkg from the statsd git repository and install it.
## Graphite settings
By default statsd will attempt to send statistics to a graphite instance running on localhost.
* `graphite_enabled`: Enable the built-in Graphite backend (default true).
* `graphite_port`: Port to talk to Graphite on (default 2003).
* `graphite_host`: Host name of Graphite server (default localhost).
Testing
=======
## Adding backends
$ bundle install
$ bundle exec berks install
$ bundle exec strainer test
Set the attribute `backends` to a hash of statsd NPM module
backends. The hash key is the name of the NPM module, while the hash
value is the version of the NPM module to install (or null for latest
version).
License and Author
==================
For example, to use version 0.0.1 of [statsd-librato-backend][]:
Author:: Scott Lampert (<sl724q@att.com>)
attrs[:statsd][:backends] = { 'statsd-librato-backend' => '0.0.1' }
Copyright 2012-2013, AT&T Services, Inc.
To use the latest version of statsd-librato-backend:
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
attrs[:statsd][:backends] = { 'statsd-librato-backend' => nil }
http://www.apache.org/licenses/LICENSE-2.0
The cookbook will install each backend module under the statsd
directory and add it to the list of backends loaded in the
configuration file.
### Extra backend configuration
Set the attribute `extra_config` to any additional configuration
options that should be included in the StatsD configuration file.
For example, to set your email and token for the
[statsd-librato-backend][] backend module, use the following:
```js
attrs[:statsd][:extra_config] => {
'librato' => {
'email' => 'myemail@example.com',
'token' => '1234567890ABCDEF'
}
}
```
# USAGE
[statsd-librato-backend]: https://github.com/librato/statsd-librato-backend
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@ -0,0 +1,4 @@
# Strainerfile
knife test: bundle exec knife cookbook test $COOKBOOK
foodcritic: bundle exec foodcritic -f any -t ~FC003 -t ~FC023 $SANDBOX/$COOKBOOK
chefspec: bundle exec rspec $SANDBOX/$COOKBOOK

View File

@ -1,32 +1,9 @@
default[:statsd][:repo] = "git://github.com/etsy/statsd.git"
default[:statsd][:revision] = "master"
default[:statsd][:log_file] = "/var/log/statsd.log"
default[:statsd][:flush_interval_msecs] = 10000
default[:statsd][:port] = 8125
# Is the graphite backend enabled?
default[:statsd][:graphite_enabled] = true
default[:statsd][:graphite_port] = 2003
default[:statsd][:graphite_host] = "localhost"
#
# Add all NPM module backends here. Each backend should be a
# hash of the backend's name to the NPM module's version. If we
# should just use the latest, set the hash to null.
#
# For example, to use version 0.0.1 of statsd-librato-backend:
#
# attrs[:statsd][:backends] = { 'statsd-librato-backend' => '0.0.1' }
#
# To use the latest version of statsd-librato-backend:
#
# attrs[:statsd][:backends] = { 'statsd-librato-backend' => nil }
#
default[:statsd][:backends] = {}
#
# Add any additional backend configuration here.
#
default[:statsd][:extra_config] = {}
default['statsd']['port'] = 8125
default['statsd']['graphite_port'] = 2003
default['statsd']['graphite_host'] = "localhost"
default['statsd']['relay_server'] = false
default['statsd']['package_version'] = "0.6.0"
default['statsd']['sha'] = "2ccde8266bbe941ac5f79efe39103b99e1196d92"
default['statsd']['user'] = "statsd"
default['statsd']['repo'] = "git://github.com/etsy/statsd.git"
default['statsd']['tmp_dir'] = "/tmp"

View File

@ -1,12 +1,8 @@
description "statsd"
author "Librato"
author "etsy"
start on runlevel [2345]
stop on runlevel [!2345]
env SL_NAME=statsd
respawn
start on startup
stop on shutdown
script
# We found $HOME is needed. Without it, we ran into problems

View File

@ -0,0 +1,3 @@
#!/bin/sh
# Called by Upstart, /etc/init/statsd.conf
node /usr/share/statsd/stats.js /etc/statsd/localConfig.js 2>&1 >> /tmp/statsd.log

View File

@ -1,12 +1,16 @@
maintainer "Mike Heffner"
maintainer_email "mike@librato.com"
name "statsd"
maintainer "AT&T Services, Inc."
maintainer_email "cookbooks@lists.tfoundry.com"
license "Apache 2.0"
description "Installs/Configures statsd"
long_description IO.read(File.join(File.dirname(__FILE__), 'README.md'))
version "0.1.1"
version "0.1.4"
recipe "statsd", "Installs stats ruby gem"
recipe "statsd::server", "Configures statsd server"
depends "build-essential"
depends "git"
depends "nodejs", ">= 0.5.2"
%w{ ubuntu }.each do |os|
supports os
end
supports "ubuntu"
depends "build-essential"
depends "git"

View File

@ -2,109 +2,19 @@
# Cookbook Name:: statsd
# Recipe:: default
#
# Copyright 2011, Librato, Inc.
# Copyright 2013, Scott Lampert
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
include_recipe "nodejs"
include_recipe "git"
git "/usr/share/statsd" do
repository node[:statsd][:repo]
revision node[:statsd][:revision]
action :sync
end
execute "install dependencies" do
command "npm install -d"
cwd "/usr/share/statsd"
end
backends = []
if node[:statsd][:graphite_enabled]
backends << "./backends/graphite"
end
node[:statsd][:backends].each do |k, v|
if v
name = "#{k}@#{v}"
else
name= k
end
execute "install npm module #{name}" do
command "npm install #{name}"
cwd "/usr/share/statsd"
end
backends << k
end
directory "/etc/statsd" do
action :create
end
user "statsd" do
comment "statsd"
system true
shell "/bin/false"
end
service "statsd" do
provider Chef::Provider::Service::Upstart
restart_command "stop statsd; start statsd"
start_command "start statsd"
stop_command "stop statsd"
supports :restart => true, :start => true, :stop => true
end
template "/etc/statsd/config.js" do
source "config.js.erb"
mode 0644
config_hash = {
:flushInterval => node[:statsd][:flush_interval_msecs],
:port => node[:statsd][:port],
:backends => backends
}.merge(node[:statsd][:extra_config])
if node[:statsd][:graphite_enabled]
config_hash[:graphitePort] = node[:statsd][:graphite_port]
config_hash[:graphiteHost] = node[:statsd][:graphite_host]
end
variables(:config_hash => config_hash)
notifies :restart, resources(:service => "statsd")
end
directory "/usr/share/statsd/scripts" do
action :create
end
template "/usr/share/statsd/scripts/start" do
source "upstart.start.erb"
mode 0755
notifies :restart, resources(:service => "statsd")
end
cookbook_file "/etc/init/statsd.conf" do
source "upstart.conf"
mode 0644
notifies :restart, resources(:service => "statsd")
end
bash "create_log_file" do
code <<EOH
touch #{node[:statsd][:log_file]} && chown statsd #{node[:statsd][:log_file]}
EOH
not_if {File.exist?(node[:statsd][:log_file])}
end
service "statsd" do
action [ :enable, :start ]
end
gem_package "statsd-ruby"

View File

@ -0,0 +1,87 @@
#
# Cookbook Name:: statsd
# Recipe:: server
#
# Copyright 2013, Scott Lampert
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
include_recipe "build-essential"
include_recipe "git"
case node["platform"]
when "ubuntu", "debian"
package "nodejs"
package "debhelper"
statsd_version = node['statsd']['sha']
git ::File.join(node['statsd']['tmp_dir'], "statsd") do
repository node['statsd']['repo']
reference statsd_version
action :sync
notifies :run, "execute[build debian package]"
end
# Fix the debian changelog file of the repo
template ::File.join(node['statsd']['tmp_dir'], "statsd/debian/changelog") do
source "changelog.erb"
end
execute "build debian package" do
command "dpkg-buildpackage -us -uc"
cwd ::File.join(node['statsd']['tmp_dir'], "statsd")
creates ::File.join(node['statsd']['tmp_dir'], "statsd_#{node['statsd']['package_version']}_all.deb")
end
dpkg_package "statsd" do
action :install
source ::File.join(node['statsd']['tmp_dir'], "statsd_#{node['statsd']['package_version']}_all.deb")
end
when "redhat", "centos"
raise "No support for RedHat or CentOS (yet)."
end
template "/etc/statsd/localConfig.js" do
source "localConfig.js.erb"
mode 00644
notifies :restart, "service[statsd]"
end
cookbook_file "/usr/share/statsd/scripts/start" do
source "upstart.start"
owner "root"
group "root"
mode 00755
end
cookbook_file "/etc/init/statsd.conf" do
source "upstart.conf"
owner "root"
group "root"
mode 00644
end
user node['statsd']['user'] do
comment "statsd"
system true
shell "/bin/false"
end
service "statsd" do
provider Chef::Provider::Service::Upstart
action [ :enable, :start ]
end

View File

@ -0,0 +1,5 @@
statsd (<%= node['statsd']['package_version'] %>) unstable; urgency=low
* Dummy changelog for dpkg build
-- Scott Lampert <scott@lampert.org> Thu, 14 Mar 2013 15:24:00 -0700

View File

@ -1 +0,0 @@
<%= JSON.pretty_generate(@config_hash) %>

Some files were not shown because too many files have changed in this diff.