Mirror of https://github.com/xcat2/confluent.git (synced 2025-08-25 04:30:29 +00:00)

Merge branch 'master' into firwareUpdateServer
README.md (new file, 30 lines)
@@ -0,0 +1,30 @@
# Confluent

[](https://github.com/xcat2/confluent/blob/master/LICENSE)

Confluent is a software package that handles essential bootstrap and operation of scale-out server configurations.
It supports stateful and stateless deployments for various operating systems.

Check [this page](https://hpc.lenovo.com/users/documentation/whatisconfluent.html) for a more detailed list of features.

Confluent is the modern successor of [xCAT](https://github.com/xcat2/xcat-core).
If you're coming from xCAT, check out [this comparison](https://hpc.lenovo.com/users/documentation/confluentvxcat.html).

# Documentation

Confluent documentation is hosted on hpc.lenovo.com: https://hpc.lenovo.com/users/documentation/

# Download

Get the latest version from: https://hpc.lenovo.com/users/downloads/

Check the release notes at: https://hpc.lenovo.com/users/news/

# Open Source License

Confluent is made available under the Apache 2.0 license: https://opensource.org/license/apache-2-0

# Developers

Want to help? Submit a [Pull Request](https://github.com/xcat2/confluent/pulls).
@@ -1,6 +1,8 @@
Name: confluent-client
Project: https://hpc.lenovo.com/users/
Source: https://github.com/lenovo/confluent
Upstream-Name: confluent-client

All files of the confluent-client software are distributed under the terms of an Apache-2.0 license as indicated below:

Files: *
Copyright: 2014-2019 Lenovo
@@ -11,7 +13,8 @@ Copyright: 2014 IBM Corporation
           2015-2019 Lenovo
License: Apache-2.0

-File: sortutil.py
+File: sortutil.py,
+      tlvdata.py
Copyright: 2014 IBM Corporation
           2015-2016 Lenovo
License: Apache-2.0
@@ -19,3 +22,56 @@ License: Apache-2.0
File: tlv.py
Copyright: 2014 IBM Corporation
License: Apache-2.0

License: Apache-2.0
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.

"Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.

"Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.

"You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License.

"Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.

"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.

"Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).

"Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.

"Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution."

"Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.

2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.

3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.

4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:

You must give any other recipients of the Work or Derivative Works a copy of this License; and
You must cause any modified files to carry prominent notices stating that You changed the files; and
You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and
If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.
You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.

8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS
@@ -21,6 +21,7 @@
import optparse
import os
import re
import select
import sys
@@ -84,6 +85,7 @@ fullline = sys.stdin.readline()
printpending = True
clearpending = False
holdoff = 0
padded = None
while fullline:
    for line in fullline.split('\n'):
        if not line:
@@ -92,13 +94,18 @@ while fullline:
            line = 'UNKNOWN: ' + line
        if options.log:
            node, output = line.split(':', 1)
            output = output.lstrip()
            if padded is None:
                if output.startswith(' '):
                    padded = True
                else:
                    padded = False
            if padded:
                output = re.sub(r'^ ', '', output)
            currlog = options.log.format(node=node, nodename=node)
            with open(currlog, mode='a') as log:
                log.write(output + '\n')
            continue
        node, output = line.split(':', 1)
        output = output.lstrip()
        grouped.add_line(node, output)
        if options.watch:
            if not holdoff:
@@ -157,7 +157,7 @@ def main():
            elif attrib.endswith('.ipv6_address') and val:
                ip6bynode[node][currnet] = val.split('/', 1)[0]
            elif attrib.endswith('.hostname'):
-                namesbynode[node][currnet] = re.split('\s+|,', val)
+                namesbynode[node][currnet] = re.split(r'\s+|,', val)
    for node in ip4bynode:
        mydomain = domainsbynode.get(node, None)
        for ipdb in (ip4bynode, ip6bynode):
confluent_client/bin/l2traceroute (new executable file, 173 lines)
@@ -0,0 +1,173 @@
#!/usr/libexec/platform-python
# vim: tabstop=4 shiftwidth=4 softtabstop=4

# Copyright 2017 Lenovo
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

__author__ = 'tkucherera'

import optparse
import os
import signal
import sys
import subprocess

try:
    signal.signal(signal.SIGPIPE, signal.SIG_DFL)
except AttributeError:
    pass
path = os.path.dirname(os.path.realpath(__file__))
path = os.path.realpath(os.path.join(path, '..', 'lib', 'python'))
if path.startswith('/opt'):
    sys.path.append(path)

import confluent.client as client

argparser = optparse.OptionParser(
    usage="Usage: %prog <start_node> -i <interface> <end_node> -e <eface>",
)
argparser.add_option('-i', '--interface', type='str',
                     help='interface to check path against for the start node')
argparser.add_option('-e', '--eface', type='str',
                     help='interface to check path against for the end node')
argparser.add_option('-c', '--cumulus', action="store_true", dest="cumulus",
                     help='return layer 2 route through cumulus switches only')

(options, args) = argparser.parse_args()

try:
    start_node = args[0]
    end_node = args[1]
    interface = options.interface
    eface = options.eface
except IndexError:
    argparser.print_help()
    sys.exit(1)

session = client.Command()

def get_neighbors(switch):
    switch_neighbors = []
    url = '/networking/neighbors/by-switch/{0}/by-peername/'.format(switch)
    for neighbor in session.read(url):
        switch = neighbor['item']['href'].strip('/')
        if switch in all_switches:
            switch_neighbors.append(switch)
    return switch_neighbors

def find_path(start, end, path=[]):
    path = path + [start]
    if start == end:
        return path  # if start and end are the same, return the path
    for node in get_neighbors(start):
        if node not in path:
            new_path = find_path(node, end, path)
            if new_path:
                return new_path  # if a path is found, return it
    return None  # if no path is found, return None

def is_cumulus(switch):
    try:
        read_attrib = subprocess.check_output(['nodeattrib', switch, 'hardwaremanagement.method'])
    except subprocess.CalledProcessError:
        return False
    for attribs in read_attrib.decode('utf-8').split('\n'):
        attrib = attribs.split(':')
        if len(attrib) > 2:
            return attrib[2].strip() == 'affluent'
        return False
    return False

def host_to_switch(node, interface=None):
    # first check the node config to see what switches are connected
    # if the host is on RHEL, the nmstate package can be used
    if node in all_switches:
        return [node]
    switches = []
    netarg = 'net.*.switch'
    if interface:
        netarg = 'net.{0}.switch'.format(interface)
    try:
        read_attrib = subprocess.check_output(['nodeattrib', node, netarg])
    except subprocess.CalledProcessError:
        return False
    for attribs in read_attrib.decode('utf-8').split('\n'):
        attrib = attribs.split(':')
        try:
            if ' net.mgt.switch' in attrib or attrib[2] == '':
                continue
        except IndexError:
            continue
        switch = attrib[2].strip()
        if options.cumulus and not is_cumulus(switch):
            continue  # with -c, only keep cumulus switches
        switches.append(switch)
    return switches

def path_between_nodes(start_switches, end_switches):
    for start_switch in start_switches:
        for end_switch in end_switches:
            if start_switch == end_switch:
                return [start_switch]
            path = find_path(start_switch, end_switch)
            if path:
                return path
    return 'No path found'

all_switches = []
for res in session.read('/networking/neighbors/by-switch/'):
    if 'error' in res:
        sys.stderr.write(res['error'] + '\n')
        exitcode = 1
    else:
        switch = (res['item']['href'].replace('/', ''))
        all_switches.append(switch)

end_nodeslist = []
nodelist = '/noderange/{0}/nodes/'.format(end_node)
for res in session.read(nodelist):
    if 'error' in res:
        sys.stderr.write(res['error'] + '\n')
        exitcode = 1
    else:
        elem = (res['item']['href'].replace('/', ''))
        end_nodeslist.append(elem)

start_switches = host_to_switch(start_node, interface)
for end_node in end_nodeslist:
    if end_node:
        end_switches = host_to_switch(end_node, eface)
        if not end_switches:
            print('Error: net.{0}.switch attribute is not valid'.format(eface))
            continue
        path = path_between_nodes(start_switches, end_switches)
        print(f'{start_node} to {end_node}: {path}')

# TODO: don't include switches that are connected through management interfaces.
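The find_path logic above is a plain depth-first search over the switch neighbor graph. A minimal standalone sketch of the same approach, with a hypothetical hard-coded neighbor map standing in for the confluent /networking/neighbors/ API:

# Sketch: depth-first search for a layer 2 path, as in l2traceroute.
NEIGHBORS = {
    'switch114': ['switch7'],
    'switch7': ['switch114', 'switch32'],
    'switch32': ['switch7'],
}

def find_path(start, end, path=None):
    path = (path or []) + [start]
    if start == end:
        return path
    for node in NEIGHBORS.get(start, []):
        if node not in path:  # avoid revisiting switches (cycles)
            new_path = find_path(node, end, path)
            if new_path:
                return new_path
    return None

print(find_path('switch114', 'switch32'))  # ['switch114', 'switch7', 'switch32']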
@@ -102,9 +102,9 @@ def run():
            cmdv = ['ssh', sshnode] + cmdvbase + cmdstorun[0]
            if currprocs < concurrentprocs:
                currprocs += 1
-                run_cmdv(node, cmdv, all, pipedesc)
+                run_cmdv(sshnode, cmdv, all, pipedesc)
            else:
-                pendingexecs.append((node, cmdv))
+                pendingexecs.append((sshnode, cmdv))
    if not all or exitcode:
        sys.exit(exitcode)
    rdy, _, _ = select.select(all, [], [], 10)
@@ -126,13 +126,14 @@ elif options.set:
        argset = argset.strip()
        if argset:
            arglist += shlex.split(argset)
-            argset = argfile.readline()
+        argset = argfile.readline()
    session.stop_if_noderange_over(noderange, options.maxnodes)
    exitcode = client.updateattrib(session, arglist, nodetype, noderange, options, None)
    if exitcode != 0:
        sys.exit(exitcode)

# Lists all attributes

if len(args) > 0:
    # Set output to all so searching works; when there is something to search
    # for, show all outputs even if they are blank.
    if requestargs is None:
confluent_client/bin/nodebmcpassword (new executable file, 121 lines)
@@ -0,0 +1,121 @@
#!/usr/libexec/platform-python
# vim: tabstop=4 shiftwidth=4 softtabstop=4

# Copyright 2015-2017 Lenovo
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

__author__ = 'tkucherera'

from getpass import getpass
import optparse
import os
import signal
import sys
try:
    signal.signal(signal.SIGPIPE, signal.SIG_DFL)
except AttributeError:
    pass

path = os.path.dirname(os.path.realpath(__file__))
path = os.path.realpath(os.path.join(path, '..', 'lib', 'python'))
if path.startswith('/opt'):
    sys.path.append(path)

import confluent.client as client

argparser = optparse.OptionParser(usage="Usage: %prog <noderange> <username> <new_password>")
argparser.add_option('-m', '--maxnodes', type='int',
                     help='Number of nodes to affect before prompting for confirmation')
argparser.add_option('-p', '--prompt', action='store_true',
                     help='Prompt for password values interactively')
argparser.add_option('-e', '--environment', action='store_true',
                     help='Set password, but from environment variable of '
                          'same name')

(options, args) = argparser.parse_args()

try:
    noderange = args[0]
    username = args[1]
except IndexError:
    argparser.print_help()
    sys.exit(1)

client.check_globbing(noderange)
session = client.Command()
exitcode = 0

if options.prompt:
    oneval = 1
    twoval = 2
    while oneval != twoval:
        oneval = getpass('Enter pass for {0}: '.format(username))
        twoval = getpass('Confirm pass for {0}: '.format(username))
        if oneval != twoval:
            print('Values did not match.')
    new_password = twoval

elif len(args) == 3:
    if options.environment:
        key = args[2]
        new_password = os.environ.get(key, os.environ[key.upper()])
    else:
        new_password = args[2]
else:
    argparser.print_help()
    sys.exit(1)

errorNodes = set([])
uid_dict = {}
session.stop_if_noderange_over(noderange, options.maxnodes)

for rsp in session.read('/noderange/{0}/configuration/management_controller/users/all'.format(noderange)):
    databynode = rsp["databynode"]
    for node in databynode:
        if 'error' in rsp['databynode'][node]:
            print(node, ':', rsp['databynode'][node]['error'])
            errorNodes.add(node)
            continue
        for user in rsp['databynode'][node]['users']:
            if user['username'] == username:
                if not user['uid'] in uid_dict:
                    uid_dict[user['uid']] = node
                    continue
                uid_dict[user['uid']] = uid_dict[user['uid']] + ',{}'.format(node)
                break

if not uid_dict:
    print("Error: Could not reach target node's bmc user")
    sys.exit(1)

for uid in uid_dict:
    success = session.simple_noderange_command(uid_dict[uid], 'configuration/management_controller/users/{0}'.format(uid), new_password, key='password', errnodes=errorNodes)  # = 0 if successful

allNodes = set([])

for node in session.read('/noderange/{0}/nodes/'.format(noderange)):
    if 'error' in node and success != 0:
        sys.exit(success)
    allNodes.add(node['item']['href'].replace("/", ""))

goodNodes = allNodes - errorNodes

for node in goodNodes:
    print(node + ": Password Change Successful")

sys.exit(success)
@@ -303,9 +303,14 @@ else:
        '/noderange/{0}/configuration/management_controller/extended/all'.format(noderange),
        session, printbmc, options, attrprefix='bmc.')
    if options.extra:
-        rcode |= client.print_attrib_path(
-            '/noderange/{0}/configuration/management_controller/extended/extra'.format(noderange),
-            session, printextbmc, options)
+        if options.advanced:
+            rcode |= client.print_attrib_path(
+                '/noderange/{0}/configuration/management_controller/extended/extra_advanced'.format(noderange),
+                session, printextbmc, options)
+        else:
+            rcode |= client.print_attrib_path(
+                '/noderange/{0}/configuration/management_controller/extended/extra'.format(noderange),
+                session, printextbmc, options)
    if printsys or options.exclude:
        if printsys == 'all':
            printsys = []
@@ -243,7 +243,7 @@ if options.windowed:
    elif 'Height' in line:
        window_height = int(line.split(':')[1])
    elif '-geometry' in line:
-        l = re.split(' |x|-|\+', line)
+        l = re.split(' |x|-|\\+', line)
        l_nosp = [ele for ele in l if ele.strip()]
        wmxo = int(l_nosp[1])
        wmyo = int(l_nosp[2])
@@ -81,6 +81,12 @@ def main(args):
    if not args.profile and args.network:
        sys.stderr.write('Both noderange and a profile name are required arguments to request a network deployment\n')
        return 1
+    if args.clear and args.profile:
+        sys.stderr.write(
+            'The -c/--clear option should not be used with a profile, '
+            'it is a request to not deploy any profile, and will clear '
+            'whatever the current profile is without being specified\n')
+        return 1
    if extra:
        sys.stderr.write('Unrecognized arguments: ' + repr(extra) + '\n')
    c = client.Command()

@@ -166,8 +172,6 @@ def main(args):
            ','.join(errnodes)))
        return 1
    rc |= c.simple_noderange_command(args.noderange, '/power/state', 'boot')
-    if args.network and not args.prepareonly:
-        return rc
    return 0

if __name__ == '__main__':
@@ -68,7 +68,7 @@ def main():
    else:
        elem = (res['item']['href'].replace('/', ''))
        list.append(elem)
-        print(options.delim.join(list))
+    print(options.delim.join(list))

    sys.exit(exitcode)
@@ -595,7 +595,7 @@ def print_attrib_path(path, session, requestargs, options, rename=None, attrpref
        else:
            printmissing.add(attr)
    for missing in printmissing:
-        sys.stderr.write('Error: {0} not a valid attribute\n'.format(attr))
+        sys.stderr.write('Error: {0} not a valid attribute\n'.format(missing))
    return exitcode
@@ -668,6 +668,9 @@ def updateattrib(session, updateargs, nodetype, noderange, options, dictassign=N
        for attrib in updateargs[1:]:
            keydata[attrib] = None
        for res in session.update(targpath, keydata):
+            for node in res.get('databynode', {}):
+                for warnmsg in res['databynode'][node].get('_warnings', []):
+                    sys.stderr.write('Warning: ' + warnmsg + '\n')
            if 'error' in res:
                if 'errorcode' in res:
                    exitcode = res['errorcode']
@@ -702,6 +705,14 @@ def updateattrib(session, updateargs, nodetype, noderange, options, dictassign=N
                noderange, 'attributes/all', dictassign[key], key)
    else:
        if "=" in updateargs[1]:
+            update_ready = True
+            for arg in updateargs[1:]:
+                if '=' not in arg:
+                    update_ready = False
+                    exitcode = 1
+            if not update_ready:
+                sys.stderr.write('Error: {0} cannot set and read attributes at the same time!\n'.format(str(updateargs[1:])))
+                sys.exit(exitcode)
            try:
                for val in updateargs[1:]:
                    val = val.split('=', 1)
@@ -98,17 +98,24 @@ class GroupedData(object):
        self.byoutput = {}
        self.header = {}
        self.client = confluentconnection
+        self.detectedpad = None

    def generate_byoutput(self):
        self.byoutput = {}
+        thepad = self.detectedpad if self.detectedpad else ''
        for n in self.bynode:
-            output = '\n'.join(self.bynode[n])
+            output = ''
+            for ln in self.bynode[n]:
+                output += ln.replace(thepad, '', 1) + '\n'
            if output not in self.byoutput:
                self.byoutput[output] = set([n])
            else:
                self.byoutput[output].add(n)

    def add_line(self, node, line):
+        wspc = re.search(r'^\s*', line).group()
+        if self.detectedpad is None or len(wspc) < len(self.detectedpad):
+            self.detectedpad = wspc
        if node not in self.bynode:
            self.bynode[node] = [line]
        else:

@@ -219,4 +226,4 @@ if __name__ == '__main__':
        if not line:
            continue
        groupoutput.add_line(*line.split(': ', 1))
-        groupoutput.print_deviants()
+    groupoutput.print_deviants()
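The detectedpad change above records the shortest leading-whitespace run seen across all lines and strips that common pad once per line before grouping, so a uniform indent no longer makes otherwise-identical outputs compare as different. A small self-contained sketch of the same idea (the sample lines are hypothetical):

import re

lines = ['  alpha', '  beta', '    gamma']
detectedpad = None
for ln in lines:
    wspc = re.search(r'^\s*', ln).group()
    if detectedpad is None or len(wspc) < len(detectedpad):
        detectedpad = wspc  # shortest pad wins
normalized = [ln.replace(detectedpad, '', 1) for ln in lines]
print(normalized)  # ['alpha', 'beta', '  gamma']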
@@ -1,12 +1,16 @@
%define name confluent_client
%define version #VERSION#
+%define fversion %{lua:
+    sv, _ = string.gsub("#VERSION#", "[~+]", "-")
+    print(sv)
+}
%define release 1

Summary: Client libraries and utilities for confluent
Name: %{name}
Version: %{version}
Release: %{release}
-Source0: %{name}-%{version}.tar.gz
+Source0: %{name}-%{fversion}.tar.gz
License: Apache2
Group: Development/Libraries
BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-buildroot

@@ -21,7 +25,7 @@ This package enables python development and command line access to
a confluent server.

%prep
-%setup -n %{name}-%{version} -n %{name}-%{version}
+%setup -n %{name}-%{fversion}

%build
%if "%{dist}" == ".el7"
@@ -153,11 +153,11 @@ _confluent_osimage_completion()
{
    _confluent_get_args
    if [ $NUMARGS == 2 ]; then
-        COMPREPLY=($(compgen -W "initialize import updateboot rebase" -- ${COMP_WORDS[COMP_CWORD]}))
+        COMPREPLY=($(compgen -W "initialize import importcheck updateboot rebase" -- ${COMP_WORDS[COMP_CWORD]}))
        return
    elif [ ${CMPARGS[1]} == 'initialize' ]; then
        COMPREPLY=($(compgen -W "-h -u -s -t -i" -- ${COMP_WORDS[COMP_CWORD]}))
-    elif [ ${CMPARGS[1]} == 'import' ]; then
+    elif [ ${CMPARGS[1]} == 'import' ] || [ ${CMPARGS[1]} == 'importcheck' ]; then
        compopt -o default
        COMPREPLY=()
        return
confluent_client/doc/man/l2traceroute.ronn (new file, 38 lines)
@@ -0,0 +1,38 @@
l2traceroute(8) -- return the layer 2 route through an Ethernet network managed by confluent, given two end points
===================================================================================================================

## SYNOPSIS

`l2traceroute [options] <start_node> <end_noderange>`

## DESCRIPTION

**l2traceroute** is a command that returns the layer 2 route for the configured interfaces in nodeattrib.
It can also be used with the -i and -e options to check against specific interfaces on the endpoints.

## PREREQUISITES

**l2traceroute** requires the net.<interface>.switch attributes to be set on an end point if that end point is not a switch.

## OPTIONS

* `-e EFACE`, `--eface=EFACE`:
  interface to check against for the second end point
* `-i INTERFACE`, `--interface=INTERFACE`:
  interface to check against for the first end point
* `-c`, `--cumulus`:
  return layer 2 route through cumulus switches only
* `-h`, `--help`:
  Show help message and exit

## EXAMPLES

* Checking the route between two nodes:
  `# l2traceroute n244 n1851`
  `n244 to n1851: ['switch114']`

* Checking the route from one node to multiple nodes:
  `# l2traceroute n244 n1833,n1851`
  `n244 to n1833: ['switch114', 'switch7', 'switch32', 'switch253', 'switch85', 'switch72', 'switch21', 'switch2', 'switch96', 'switch103', 'switch115']`
  `n244 to n1851: ['switch114']`
confluent_client/doc/man/nodeapply.ronn (new file, 30 lines)
@@ -0,0 +1,30 @@
nodeapply(8) -- Execute command on many nodes in a noderange through ssh
=========================================================================

## SYNOPSIS

`nodeapply [options] <noderange>`

## DESCRIPTION

Provides shortcut access to a number of common operations against deployed
nodes. These operations include refreshing ssh certificates and configuration,
rerunning syncfiles, and executing specified postscripts.

## OPTIONS

* `-k`, `--security`:
  Refresh SSH configuration (hosts.equiv and node SSH certificates)

* `-F`, `--sync`:
  Rerun syncfiles from deployed profile

* `-P SCRIPTS`, `--scripts=SCRIPTS`:
  Re-run specified scripts, with full path under scripts specified, e.g. post.d/scriptname,firstboot.d/otherscriptname

* `-c COUNT`, `-f COUNT`, `--count=COUNT`:
  Specify the maximum number of instances to run concurrently

* `-m MAXNODES`, `--maxnodes=MAXNODES`:
  Specify a maximum number of nodes to run the remote ssh command against,
  prompting if over the threshold
confluent_client/doc/man/nodebmcpassword.ronn (new file, 28 lines)
@@ -0,0 +1,28 @@
nodebmcpassword(8) -- Change management controller password for a specified user
=================================================================================

## SYNOPSIS

`nodebmcpassword <noderange> <username> <new_password>`

## DESCRIPTION

`nodebmcpassword` allows you to change the management controller password for a user on a specified noderange

## OPTIONS

* `-m MAXNODES`, `--maxnodes=MAXNODES`:
  Number of nodes to affect before prompting for confirmation

* `-h`, `--help`:
  Show help message and exit

## EXAMPLES:

* Change the management controller password for a user on nodes n1 through n4:
  `# nodebmcpassword n1-n4 <username> <new_password>`
  `n1: Password Change Successful`
  `n2: Password Change Successful`
  `n3: Password Change Successful`
  `n4: Password Change Successful`
@@ -1,7 +1,11 @@
VERSION=`git describe|cut -d- -f 1`
NUMCOMMITS=`git describe|cut -d- -f 2`
if [ "$NUMCOMMITS" != "$VERSION" ]; then
-    VERSION=$VERSION.dev$NUMCOMMITS.g`git describe|cut -d- -f 3`
+    LASTNUM=$(echo $VERSION|rev|cut -d . -f 1|rev)
+    LASTNUM=$((LASTNUM+1))
+    FIRSTPART=$(echo $VERSION|rev|cut -d . -f 2- |rev)
+    VERSION=${FIRSTPART}.${LASTNUM}
+    VERSION=$VERSION~dev$NUMCOMMITS+`git describe|cut -d- -f 3`
fi
sed -e "s/#VERSION#/$VERSION/" confluent_osdeploy.spec.tmpl > confluent_osdeploy.spec
cd ..
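The new version mangling turns a `git describe` string such as 4.3.0-5-g1a2b3c into 4.3.1~dev5+g1a2b3c (last component bumped, tilde-separated dev suffix); in RPM version comparison the tilde sorts before everything else, so a dev snapshot upgrades cleanly to the eventual tagged release. A sketch of the suffix transformation in Python (the version strings are hypothetical):

# "4.3.0-5-g1a2b3c" (tag-commits-hash from git describe)
def mangle(describe):
    parts = describe.split('-')
    if len(parts) < 3:
        return describe  # exactly on a tag, nothing to mangle
    version, numcommits, ghash = parts[0], parts[1], parts[2]
    return '{0}~dev{1}+{2}'.format(version, numcommits, ghash)

print(mangle('4.3.0-5-g1a2b3c'))  # 4.3.0~dev5+g1a2b3c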
@@ -2,7 +2,11 @@ cd $(dirname $0)
VERSION=`git describe|cut -d- -f 1`
NUMCOMMITS=`git describe|cut -d- -f 2`
if [ "$NUMCOMMITS" != "$VERSION" ]; then
-    VERSION=$VERSION.dev$NUMCOMMITS.g`git describe|cut -d- -f 3`
+    LASTNUM=$(echo $VERSION|rev|cut -d . -f 1|rev)
+    LASTNUM=$((LASTNUM+1))
+    FIRSTPART=$(echo $VERSION|rev|cut -d . -f 2- |rev)
+    VERSION=${FIRSTPART}.${LASTNUM}
+    VERSION=$VERSION~dev$NUMCOMMITS+`git describe|cut -d- -f 3`
fi
sed -e "s/#VERSION#/$VERSION/" confluent_osdeploy-aarch64.spec.tmpl > confluent_osdeploy-aarch64.spec
cd ..
@@ -86,7 +86,7 @@ def map_idx_to_name():
    for line in subprocess.check_output(['ip', 'l']).decode('utf8').splitlines():
        if line.startswith(' ') and 'link/' in line:
            typ = line.split()[0].split('/')[1]
-            devtype[prevdev] = typ if type != 'ether' else 'ethernet'
+            devtype[prevdev] = typ if typ != 'ether' else 'ethernet'
        if line.startswith(' '):
            continue
        idx, iface, rst = line.split(':', 2)
@@ -192,8 +192,10 @@ class NetplanManager(object):
            if needcfgwrite:
                needcfgapply = True
                newcfg = {'network': {'version': 2, 'ethernets': {devname: self.cfgbydev[devname]}}}
+                oumask = os.umask(0o77)
                with open('/etc/netplan/{0}-confluentcfg.yaml'.format(devname), 'w') as planout:
                    planout.write(yaml.dump(newcfg))
+                os.umask(oumask)
        if needcfgapply:
            subprocess.call(['netplan', 'apply'])
@@ -403,19 +405,36 @@ class NetworkManager(object):
        else:
            cname = stgs.get('connection_name', None)
            iname = list(cfg['interfaces'])[0]
-            if not cname:
-                cname = iname
+            ctype = self.devtypes.get(iname, None)
+            if not ctype:
+                sys.stderr.write("Warning, no device found for interface_name ({0}), skipping setup\n".format(iname))
+                return
+            if stgs.get('vlan_id', None):
+                vlan = stgs['vlan_id']
+                if ctype == 'infiniband':
+                    vlan = '0x{0}'.format(vlan) if not vlan.startswith('0x') else vlan
+                    cmdargs['infiniband.parent'] = iname
+                    cmdargs['infiniband.p-key'] = vlan
+                    iname = '{0}.{1}'.format(iname, vlan[2:])
+                elif ctype == 'ethernet':
+                    ctype = 'vlan'
+                    cmdargs['vlan.parent'] = iname
+                    cmdargs['vlan.id'] = vlan
+                    iname = '{0}.{1}'.format(iname, vlan)
+                else:
+                    sys.stderr.write("Warning, unknown interface_name ({0}) device type ({1}) for VLAN/PKEY, skipping setup\n".format(iname, ctype))
+                    return
+            cname = iname if not cname else cname
            u = self.uuidbyname.get(cname, None)
            cargs = []
            for arg in cmdargs:
                cargs.append(arg)
                cargs.append(cmdargs[arg])
            if u:
-                cmdargs['connection.interface-name'] = iname
-                subprocess.check_call(['nmcli', 'c', 'm', u] + cargs)
+                subprocess.check_call(['nmcli', 'c', 'm', u, 'connection.interface-name', iname] + cargs)
                subprocess.check_call(['nmcli', 'c', 'u', u])
            else:
-                subprocess.check_call(['nmcli', 'c', 'add', 'type', self.devtypes[iname], 'con-name', cname, 'connection.interface-name', iname] + cargs)
+                subprocess.check_call(['nmcli', 'c', 'add', 'type', ctype, 'con-name', cname, 'connection.interface-name', iname] + cargs)
                self.read_connections()
                u = self.uuidbyname.get(cname, None)
                if u:
@@ -436,6 +455,12 @@ if __name__ == '__main__':
    srvs, _ = apiclient.scan_confluents()
    doneidxs = set([])
    dc = None
+    if not srvs:  # the multicast scan failed, fall back to the deploycfg cfg file
+        with open('/etc/confluent/confluent.deploycfg', 'r') as dci:
+            for cfgline in dci.read().split('\n'):
+                if cfgline.startswith('deploy_server:'):
+                    srvs = [cfgline.split()[1]]
+                    break
    for srv in srvs:
        try:
            s = socket.create_connection((srv, 443))
@@ -498,6 +523,8 @@ if __name__ == '__main__':
            netname_to_interfaces['default']['interfaces'] -= netname_to_interfaces[netn]['interfaces']
    if not netname_to_interfaces['default']['interfaces']:
        del netname_to_interfaces['default']
+    # Make sure VLAN/PKEY connections are created last
+    netname_to_interfaces = dict(sorted(netname_to_interfaces.items(), key=lambda item: 'vlan_id' in item[1]['settings']))
    rm_tmp_llas(tmpllas)
    if os.path.exists('/usr/sbin/netplan'):
        nm = NetplanManager(dc)
@@ -0,0 +1,49 @@
is_suse=false
is_rhel=false

if test -f /boot/efi/EFI/redhat/grub.cfg; then
    grubcfg="/boot/efi/EFI/redhat/grub.cfg"
    grub2-mkconfig -o $grubcfg
    is_rhel=true
elif test -f /boot/efi/EFI/sle_hpc/grub.cfg; then
    grubcfg="/boot/efi/EFI/sle_hpc/grub.cfg"
    grub2-mkconfig -o $grubcfg
    is_suse=true
else
    echo "Expected file missing: check whether the OS is sle_hpc or redhat"
    exit
fi

# working on SUSE
if $is_suse; then
    start=false
    num_line=0
    lines_to_edit=()
    while read line; do
        ((num_line++))
        if [[ $line == *"grub_platform"* ]]; then
            start=true
        fi
        if $start; then
            if [[ $line != "#"* ]]; then
                lines_to_edit+=($num_line)
            fi
        fi
        if [[ ${#line} -eq 2 && $line == *"fi" ]]; then
            if $start; then
                start=false
            fi
        fi
    done < $grubcfg

    for line_num in "${lines_to_edit[@]}"; do
        line_num+="s"
        sed -i "${line_num},^,#," $grubcfg
    done
    sed -i 's,^terminal,#terminal,' $grubcfg
fi

# working on Red Hat
if $is_rhel; then
    sed -i 's,^serial,#serial, ; s,^terminal,#terminal,' $grubcfg
fi
@@ -3,6 +3,9 @@
[ -f /opt/confluent/bin/apiclient ] && confapiclient=/opt/confluent/bin/apiclient
[ -f /etc/confluent/apiclient ] && confapiclient=/etc/confluent/apiclient
for pubkey in /etc/ssh/ssh_host*key.pub; do
+    if [ "$pubkey" = /etc/ssh/ssh_host_key.pub ]; then
+        continue
+    fi
    certfile=${pubkey/.pub/-cert.pub}
    rm $certfile
    confluentpython $confapiclient /confluent-api/self/sshcert $pubkey -o $certfile
@@ -26,7 +26,7 @@ mkdir -p opt/confluent/bin
mkdir -p stateless-bin
cp -a el8bin/* .
ln -s el8 el9
-for os in rhvh4 el7 genesis el8 suse15 ubuntu20.04 ubuntu22.04 coreos el9; do
+for os in rhvh4 el7 genesis el8 suse15 ubuntu20.04 ubuntu22.04 ubuntu24.04 coreos el9; do
    mkdir ${os}out
    cd ${os}out
    if [ -d ../${os}bin ]; then
@@ -76,7 +76,7 @@ cp -a esxi7 esxi8
%install
mkdir -p %{buildroot}/opt/confluent/share/licenses/confluent_osdeploy/
#cp LICENSE %{buildroot}/opt/confluent/share/licenses/confluent_osdeploy/
-for os in rhvh4 el7 el8 el9 genesis suse15 ubuntu20.04 ubuntu22.04 esxi6 esxi7 esxi8 coreos; do
+for os in rhvh4 el7 el8 el9 genesis suse15 ubuntu20.04 ubuntu22.04 ubuntu24.04 esxi6 esxi7 esxi8 coreos; do
    mkdir -p %{buildroot}/opt/confluent/lib/osdeploy/$os/initramfs/aarch64/
    cp ${os}out/addons.* %{buildroot}/opt/confluent/lib/osdeploy/$os/initramfs/aarch64/
    if [ -d ${os}disklessout ]; then
@@ -28,7 +28,7 @@ This contains support utilities for enabling deployment of x86_64 architecture s
#cp start_root urlmount ../stateless-bin/
#cd ..
ln -s el8 el9
-for os in rhvh4 el7 genesis el8 suse15 ubuntu18.04 ubuntu20.04 ubuntu22.04 coreos el9; do
+for os in rhvh4 el7 genesis el8 suse15 ubuntu18.04 ubuntu20.04 ubuntu22.04 ubuntu24.04 coreos el9; do
    mkdir ${os}out
    cd ${os}out
    if [ -d ../${os}bin ]; then
@@ -42,7 +42,7 @@ for os in rhvh4 el7 genesis el8 suse15 ubuntu18.04 ubuntu20.04 ubuntu22.04 coreo
    mv ../addons.cpio .
    cd ..
done
-for os in el7 el8 suse15 el9 ubuntu20.04 ubuntu22.04; do
+for os in el7 el8 suse15 el9 ubuntu20.04 ubuntu22.04 ubuntu24.04; do
    mkdir ${os}disklessout
    cd ${os}disklessout
    if [ -d ../${os}bin ]; then
@@ -78,7 +78,7 @@ cp -a esxi7 esxi8
%install
mkdir -p %{buildroot}/opt/confluent/share/licenses/confluent_osdeploy/
cp LICENSE %{buildroot}/opt/confluent/share/licenses/confluent_osdeploy/
-for os in rhvh4 el7 el8 el9 genesis suse15 ubuntu20.04 ubuntu18.04 ubuntu22.04 esxi6 esxi7 esxi8 coreos; do
+for os in rhvh4 el7 el8 el9 genesis suse15 ubuntu20.04 ubuntu18.04 ubuntu22.04 ubuntu24.04 esxi6 esxi7 esxi8 coreos; do
    mkdir -p %{buildroot}/opt/confluent/lib/osdeploy/$os/initramfs
    mkdir -p %{buildroot}/opt/confluent/lib/osdeploy/$os/profiles
    cp ${os}out/addons.* %{buildroot}/opt/confluent/lib/osdeploy/$os/initramfs
@@ -1,4 +1,5 @@
#!/usr/bin/python
+import time
import importlib
import tempfile
import json
@@ -223,6 +224,7 @@ def synchronize():
        if status == 202:
            lastrsp = ''
            while status != 204:
+                time.sleep(2)
                status, rsp = ac.grab_url_with_status('/confluent-api/self/remotesyncfiles')
                if not isinstance(rsp, str):
                    rsp = rsp.decode('utf8')
@@ -5,6 +5,7 @@ import json
import os
import shutil
import pwd
+import time
import grp
try:
    from importlib.machinery import SourceFileLoader
@@ -223,6 +224,7 @@ def synchronize():
        if status == 202:
            lastrsp = ''
            while status != 204:
+                time.sleep(2)
                status, rsp = ac.grab_url_with_status('/confluent-api/self/remotesyncfiles')
                if not isinstance(rsp, str):
                    rsp = rsp.decode('utf8')
@@ -10,6 +10,7 @@ import stat
import struct
import sys
import subprocess
+import traceback

bootuuid = None
@@ -426,4 +427,9 @@ def install_to_disk(imgpath):


if __name__ == '__main__':
-    install_to_disk(os.environ['mountsrc'])
+    try:
+        install_to_disk(os.environ['mountsrc'])
+    except Exception:
+        traceback.print_exc()
+        time.sleep(86400)
+        raise
@@ -16,6 +16,7 @@ if [ -z "$confluent_mgr" ]; then
fi
confluent_profile=$(grep ^profile: /etc/confluent/confluent.deploycfg|awk '{print $2}')
+timedatectl set-timezone $(grep ^timezone: /etc/confluent/confluent.deploycfg|awk '{print $2}')
hostnamectl set-hostname $nodename
export nodename confluent_mgr confluent_profile
. /etc/confluent/functions
mkdir -p /var/log/confluent
@@ -1,4 +1,6 @@
#!/usr/bin/python3
+import random
+import time
import subprocess
import importlib
import tempfile
@@ -7,6 +9,7 @@ import os
import shutil
import pwd
import grp
+import sys
from importlib.machinery import SourceFileLoader
try:
    apiclient = SourceFileLoader('apiclient', '/opt/confluent/bin/apiclient').load_module()
@@ -227,9 +230,16 @@ def synchronize():
            myips.append(addr)
        data = json.dumps({'merge': tmpdir, 'appendonce': appendoncedir, 'myips': myips})
        status, rsp = ac.grab_url_with_status('/confluent-api/self/remotesyncfiles', data)
+        if status >= 300:
+            sys.stderr.write("Error starting syncfiles - {}:\n".format(status))
+            sys.stderr.write(rsp.decode('utf8'))
+            sys.stderr.write('\n')
+            sys.stderr.flush()
+            return status
        if status == 202:
            lastrsp = ''
            while status != 204:
+                time.sleep(1 + (2 * random.random()))
                status, rsp = ac.grab_url_with_status('/confluent-api/self/remotesyncfiles')
                if not isinstance(rsp, str):
                    rsp = rsp.decode('utf8')
@@ -277,10 +287,21 @@ def synchronize():
                        os.chmod(fname, int(opts[fname][opt], 8))
                    if uid != -1 or gid != -1:
                        os.chown(fname, uid, gid)
+        return status
    finally:
        shutil.rmtree(tmpdir)
        shutil.rmtree(appendoncedir)


if __name__ == '__main__':
-    synchronize()
+    status = 202
+    while status not in (204, 200):
+        try:
+            status = synchronize()
+        except Exception as e:
+            sys.stderr.write(str(e))
+            sys.stderr.write('\n')
+            sys.stderr.flush()
+            status = 300
+        if status not in (204, 200):
+            time.sleep((random.random() * 3) + 2)
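The reworked __main__ block retries synchronize() with a randomized sleep so that many nodes deploying at once do not hammer the deployment server in lockstep. A minimal sketch of the same jittered-retry pattern (do_sync stands in for synchronize() and returns HTTP-style status codes):

import random
import time

def retry_with_jitter(do_sync):
    status = 202
    while status not in (204, 200):
        try:
            status = do_sync()
        except Exception as e:
            print(e)
            status = 300  # treat failures as retryable
        if status not in (204, 200):
            time.sleep((random.random() * 3) + 2)  # 2-5s, desynchronized
    return status

print(retry_with_jitter(lambda: 204))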
@@ -7,8 +7,8 @@ if ! grep console= /proc/cmdline >& /dev/null; then
    if [ -n "$autocons" ]; then
        echo console=$autocons |sed -e 's!/dev/!!' >> /tmp/01-autocons.conf
        autocons=${autocons%,*}
-        echo $autocons > /tmp/01-autocons.devnode
-        echo "Detected firmware specified console at $(cat /tmp/01-autocons.conf)" > $autocons
+        echo $autocons > /tmp/01-autocons.devnode
+        echo "Detected firmware specified console at $(cat /tmp/01-autocons.conf)" > $autocons
+        echo "Initializing auto detected console when installer starts" > $autocons
    fi
fi
@@ -16,4 +16,5 @@ if grep console=ttyS /proc/cmdline >& /dev/null; then
    echo "Serial console has been requested in the kernel arguments, the local video may not show progress" > /dev/tty1
fi
. /lib/anaconda-lib.sh
+echo rd.fcoe=0 > /etc/cmdline.d/nofcoe.conf
wait_for_kickstart
@@ -2,6 +2,15 @@
[ -e /tmp/confluent.initq ] && return 0
. /lib/dracut-lib.sh
setsid sh -c 'exec bash <> /dev/tty2 >&0 2>&1' &
+if [ -f /tmp/dd_disk ]; then
+    for dd in $(cat /tmp/dd_disk); do
+        if [ -e $dd ]; then
+            driver-updates --disk $dd $dd
+            rm $dd
+        fi
+    done
+    rm /tmp/dd_disk
+fi
udevadm trigger
udevadm trigger --type=devices --action=add
udevadm settle
@@ -20,13 +29,6 @@ function confluentpython() {
        /usr/bin/python2 $*
    fi
}
-if [ -f /tmp/dd_disk ]; then
-    for dd in $(cat /tmp/dd_disk); do
-        if [ -e $dd ]; then
-            driver-updates --disk $dd $dd
-        fi
-    done
-fi
vlaninfo=$(getarg vlan)
if [ ! -z "$vlaninfo" ]; then
    vldev=${vlaninfo#*:}
@@ -61,43 +63,52 @@ if [ -e /dev/disk/by-label/CNFLNT_IDNT ]; then
        udevadm info $i | grep ID_NET_DRIVER=cdc_ether > /dev/null && continue
        ip link set $(basename $i) up
    done
-    for NICGUESS in $(ip link|grep LOWER_UP|grep -v LOOPBACK| awk '{print $2}' | sed -e 's/:$//'); do
-        if [ "$autoconfigmethod" = "dhcp" ]; then
-            /usr/libexec/nm-initrd-generator ip=$NICGUESS:dhcp
-        else
-            v4addr=$(grep ^ipv4_address: $tcfg)
-            v4addr=${v4addr#ipv4_address: }
-            v4plen=${v4addr#*/}
-            v4addr=${v4addr%/*}
-            v4gw=$(grep ^ipv4_gateway: $tcfg)
-            v4gw=${v4gw#ipv4_gateway: }
-            ip addr add dev $NICGUESS $v4addr/$v4plen
-            if [ "$v4gw" = "null" ]; then
-                v4gw=""
-            fi
-            if [ ! -z "$v4gw" ]; then
-                ip route add default via $v4gw
-            fi
-            v4nm=$(grep ipv4_netmask: $tcfg)
-            v4nm=${v4nm#ipv4_netmask: }
-            DETECTED=0
-            for dsrv in $deploysrvs; do
-                if curl --capath /tls/ -s --connect-timeout 3 https://$dsrv/confluent-public/ > /dev/null; then
-                    rm /run/NetworkManager/system-connections/*
-                    /usr/libexec/nm-initrd-generator ip=$v4addr::$v4gw:$v4nm:$hostname:$NICGUESS:none
-                    DETECTED=1
-                    ifname=$NICGUESS
+    TRIES=30
+    DETECTED=0
+    while [ "$DETECTED" = 0 ] && [ $TRIES -gt 0 ]; do
+        TRIES=$((TRIES - 1))
+        for NICGUESS in $(ip link|grep LOWER_UP|grep -v LOOPBACK| awk '{print $2}' | sed -e 's/:$//'); do
+            if [ "$autoconfigmethod" = "dhcp" ]; then
+                /usr/libexec/nm-initrd-generator ip=$NICGUESS:dhcp
+            else
+                v4addr=$(grep ^ipv4_address: $tcfg)
+                v4addr=${v4addr#ipv4_address: }
+                v4plen=${v4addr#*/}
+                v4addr=${v4addr%/*}
+                v4gw=$(grep ^ipv4_gateway: $tcfg)
+                v4gw=${v4gw#ipv4_gateway: }
+                ip addr add dev $NICGUESS $v4addr/$v4plen
+                if [ "$v4gw" = "null" ]; then
+                    v4gw=""
+                fi
+                if [ ! -z "$v4gw" ]; then
+                    ip route add default via $v4gw
+                fi
+                v4nm=$(grep ipv4_netmask: $tcfg)
+                v4nm=${v4nm#ipv4_netmask: }
+                DETECTED=0
+                for dsrv in $deploysrvs; do
+                    if curl --capath /tls/ -s --connect-timeout 3 https://$dsrv/confluent-public/ > /dev/null; then
+                        rm /run/NetworkManager/system-connections/*
+                        /usr/libexec/nm-initrd-generator ip=$v4addr::$v4gw:$v4nm:$hostname:$NICGUESS:none
+                        DETECTED=1
+                        ifname=$NICGUESS
+                        break
+                    fi
+                done
-            if [ ! -z "$v4gw" ]; then
-                ip route del default via $v4gw
-            fi
-            ip addr flush dev $NICGUESS
-            if [ $DETECTED = 1 ]; then
-                break
-            fi
-        done
+                if [ ! -z "$v4gw" ]; then
+                    ip route del default via $v4gw
+                fi
+                ip addr flush dev $NICGUESS
+                if [ $DETECTED = 1 ]; then
+                    break
+                fi
+            fi
+        done
+    done
    for NICGUESS in $(ip link|grep LOWER_UP|grep -v LOOPBACK| awk '{print $2}' | sed -e 's/:$//'); do
        ip addr flush dev $NICGUESS
        ip link set $NICGUESS down
    done
    NetworkManager --configure-and-quit=initrd --no-daemon
hmackeyfile=/tmp/cnflnthmackeytmp
@@ -175,7 +186,7 @@ if [ ! -z "$autocons" ]; then
    errout="-e $autocons"
fi
while ! confluentpython /opt/confluent/bin/apiclient $errout /confluent-api/self/deploycfg2 > /etc/confluent/confluent.deploycfg; do
    sleep 10
done
ifidx=$(cat /tmp/confluent.ifidx 2> /dev/null)
if [ -z "$ifname" ]; then
@@ -216,23 +227,38 @@ proto=${proto#protocol: }
textconsole=$(grep ^textconsole: /etc/confluent/confluent.deploycfg)
textconsole=${textconsole#textconsole: }
if [ "$textconsole" = "true" ] && ! grep console= /proc/cmdline > /dev/null; then
-    autocons=$(cat /tmp/01-autocons.devnode)
-    if [ ! -z "$autocons" ]; then
-        echo Auto-configuring installed system to use text console
-        echo Auto-configuring installed system to use text console > $autocons
+    autocons=$(cat /tmp/01-autocons.devnode)
+    if [ ! -z "$autocons" ]; then
+        echo Auto-configuring installed system to use text console
+        echo Auto-configuring installed system to use text console > $autocons
+        /opt/confluent/bin/autocons -c > /dev/null
+        cp /tmp/01-autocons.conf /etc/cmdline.d/
+    else
+        echo "Unable to automatically detect requested text console"
+    fi
-        cp /tmp/01-autocons.conf /etc/cmdline.d/
-    else
-        echo "Unable to automatically detect requested text console"
-    fi
fi

-echo inst.repo=$proto://$mgr/confluent-public/os/$profilename/distribution >> /etc/cmdline.d/01-confluent.conf
+. /etc/os-release
+if [ "$ID" = "dracut" ]; then
+    ID=$(echo $PRETTY_NAME|awk '{print $1}')
+    VERSION_ID=$(echo $VERSION|awk '{print $1}')
+    if [ "$ID" = "Oracle" ]; then
+        ID=OL
+    elif [ "$ID" = "Red" ]; then
+        ID=RHEL
+    fi
+fi
+ISOSRC=$(blkid -t TYPE=iso9660|grep -Ei ' LABEL="'$ID-$VERSION_ID|sed -e s/:.*//)
+if [ -z "$ISOSRC" ]; then
+    echo inst.repo=$proto://$mgr/confluent-public/os/$profilename/distribution >> /etc/cmdline.d/01-confluent.conf
+    root=anaconda-net:$proto://$mgr/confluent-public/os/$profilename/distribution
+    export root
+else
+    echo inst.repo=cdrom:$ISOSRC >> /etc/cmdline.d/01-confluent.conf
+fi
echo inst.ks=$proto://$mgr/confluent-public/os/$profilename/kickstart >> /etc/cmdline.d/01-confluent.conf
kickstart=$proto://$mgr/confluent-public/os/$profilename/kickstart
-root=anaconda-net:$proto://$mgr/confluent-public/os/$profilename/distribution
export kickstart
-export root
autoconfigmethod=$(grep ipv4_method /etc/confluent/confluent.deploycfg)
autoconfigmethod=${autoconfigmethod#ipv4_method: }
if [ "$autoconfigmethod" = "dhcp" ]; then
@@ -312,4 +338,8 @@ if [ -e /lib/nm-lib.sh ]; then
        fi
    fi
fi
+for NICGUESS in $(ip link|grep LOWER_UP|grep -v LOOPBACK| awk '{print $2}' | sed -e 's/:$//'); do
+    ip addr flush dev $NICGUESS
+    ip link set $NICGUESS down
+done
@@ -27,7 +27,7 @@ with open('/etc/confluent/confluent.deploycfg') as dplcfgfile:
        _, profile = line.split(' ', 1)
    if line.startswith('ipv4_method: '):
        _, v4cfg = line.split(' ', 1)
-if v4cfg == 'static' or v4cfg == 'dhcp':
+if v4cfg == 'static' or v4cfg == 'dhcp' or not server6:
    server = server4
if not server:
    server = '[{}]'.format(server6)
|
@@ -90,8 +90,14 @@ touch /tmp/cryptpkglist
|
||||
touch /tmp/pkglist
|
||||
touch /tmp/addonpackages
|
||||
if [ "$cryptboot" == "tpm2" ]; then
|
||||
LUKSPARTY="--encrypted --passphrase=$(cat /etc/confluent/confluent.apikey)"
|
||||
echo $cryptboot >> /tmp/cryptboot
|
||||
lukspass=$(python3 /opt/confluent/bin/apiclient /confluent-api/self/profileprivate/pending/luks.key 2> /dev/null)
|
||||
if [ -z "$lukspass" ]; then
|
||||
lukspass=$(python3 -c 'import os;import base64;print(base64.b64encode(os.urandom(66)).decode())')
|
||||
fi
|
||||
echo $lukspass > /etc/confluent/luks.key
|
||||
chmod 000 /etc/confluent/luks.key
|
||||
LUKSPARTY="--encrypted --passphrase=$lukspass"
|
||||
echo $cryptboot >> /tmp/cryptboot
|
||||
echo clevis-dracut >> /tmp/cryptpkglist
|
||||
fi
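Condensing the tpm2 branch above: the passphrase now comes from the profile's pending luks.key when the API serves one, and only otherwise from local randomness; a sketch of just that selection step, using the same endpoint and fallback generator shown above:

lukspass=$(python3 /opt/confluent/bin/apiclient /confluent-api/self/profileprivate/pending/luks.key 2> /dev/null)
[ -z "$lukspass" ] && lukspass=$(python3 -c 'import os,base64;print(base64.b64encode(os.urandom(66)).decode())')
echo $lukspass > /etc/confluent/luks.key
chmod 000 /etc/confluent/luks.key   # mode 000: only root (which ignores mode bits) can read it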

@@ -114,8 +120,8 @@ confluentpython /etc/confluent/apiclient /confluent-public/os/$confluent_profile
grep '^%include /tmp/partitioning' /tmp/kickstart.* > /dev/null || rm /tmp/installdisk
if [ -e /tmp/installdisk -a ! -e /tmp/partitioning ]; then
INSTALLDISK=$(cat /tmp/installdisk)
sed -e s/%%INSTALLDISK%%/$INSTALLDISK/ -e s/%%LUKSHOOK%%/$LUKSPARTY/ /tmp/partitioning.template > /tmp/partitioning
dd if=/dev/zero of=/dev/$(cat /tmp/installdisk) bs=1M count=1 >& /dev/null
sed -e s/%%INSTALLDISK%%/$INSTALLDISK/ -e "s!%%LUKSHOOK%%!$LUKSPARTY!" /tmp/partitioning.template > /tmp/partitioning
vgchange -a n >& /dev/null
wipefs -a -f /dev/$INSTALLDISK >& /dev/null
fi
kill $logshowpid

@@ -1,8 +1,12 @@
#!/bin/sh
grep HostCert /etc/ssh/sshd_config.anaconda >> /mnt/sysimage/etc/ssh/sshd_config
echo HostbasedAuthentication yes >> /mnt/sysimage/etc/ssh/sshd_config
echo HostbasedUsesNameFromPacketOnly yes >> /mnt/sysimage/etc/ssh/sshd_config
echo IgnoreRhosts no >> /mnt/sysimage/etc/ssh/sshd_config
targssh=/mnt/sysimage/etc/ssh/sshd_config
if [ -d /mnt/sysimage/etc/ssh/sshd_config.d/ ]; then
targssh=/mnt/sysimage/etc/ssh/sshd_config.d/90-confluent.conf
fi
grep HostCert /etc/ssh/sshd_config.anaconda >> $targssh
echo HostbasedAuthentication yes >> $targssh
echo HostbasedUsesNameFromPacketOnly yes >> $targssh
echo IgnoreRhosts no >> $targssh
sshconf=/mnt/sysimage/etc/ssh/ssh_config
if [ -d /mnt/sysimage/etc/ssh/ssh_config.d/ ]; then
sshconf=/mnt/sysimage/etc/ssh/ssh_config.d/01-confluent.conf
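When the sshd_config.d directory exists, the echoed directives land in a drop-in rather than the monolithic file, so the target ends up with a 90-confluent.conf roughly like the following (the HostCertificate value is inherited from sshd_config.anaconda, so the path here is only illustrative):

HostCertificate /etc/ssh/ssh_host_ed25519_key-cert.pub
HostbasedAuthentication yes
HostbasedUsesNameFromPacketOnly yes
IgnoreRhosts no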
@@ -1,4 +1,6 @@
#!/usr/bin/python3
import random
import time
import subprocess
import importlib
import tempfile
@@ -7,6 +9,7 @@ import os
import shutil
import pwd
import grp
import sys
from importlib.machinery import SourceFileLoader
try:
apiclient = SourceFileLoader('apiclient', '/opt/confluent/bin/apiclient').load_module()
@@ -227,9 +230,16 @@ def synchronize():
myips.append(addr)
data = json.dumps({'merge': tmpdir, 'appendonce': appendoncedir, 'myips': myips})
status, rsp = ac.grab_url_with_status('/confluent-api/self/remotesyncfiles', data)
if status >= 300:
sys.stderr.write("Error starting syncfiles - {}:\n".format(status))
sys.stderr.write(rsp.decode('utf8'))
sys.stderr.write('\n')
sys.stderr.flush()
return status
if status == 202:
lastrsp = ''
while status != 204:
time.sleep(1+(2*random.random()))
status, rsp = ac.grab_url_with_status('/confluent-api/self/remotesyncfiles')
if not isinstance(rsp, str):
rsp = rsp.decode('utf8')
@@ -277,10 +287,21 @@ def synchronize():
os.chmod(fname, int(opts[fname][opt], 8))
if uid != -1 or gid != -1:
os.chown(fname, uid, gid)
return status
finally:
shutil.rmtree(tmpdir)
shutil.rmtree(appendoncedir)


if __name__ == '__main__':
synchronize()
status = 202
while status not in (204, 200):
try:
status = synchronize()
except Exception as e:
sys.stderr.write(str(e))
sys.stderr.write('\n')
sys.stderr.flush()
status = 300
if status not in (204, 200):
time.sleep((random.random()*3)+2)

@@ -1,4 +1,5 @@
#!/bin/sh
cryptdisk=$(blkid -t TYPE="crypto_LUKS"|sed -e s/:.*//)
clevis luks bind -f -d $cryptdisk -k - tpm2 '{}' < /etc/confluent/confluent.apikey
cryptsetup luksRemoveKey $cryptdisk < /etc/confluent/confluent.apikey
clevis luks bind -f -d $cryptdisk -k - tpm2 '{}' < /etc/confluent/luks.key
chmod 000 /etc/confluent/luks.key
#cryptsetup luksRemoveKey $cryptdisk < /etc/confluent/confluent.apikey

@@ -171,6 +171,13 @@ permissions=
wait-device-timeout=60000

EOC
if [ "$linktype" = infiniband ]; then
cat >> /run/NetworkManager/system-connections/$ifname.nmconnection << EOC
[infiniband]
transport-mode=datagram

EOC
fi
autoconfigmethod=$(grep ^ipv4_method: /etc/confluent/confluent.deploycfg |awk '{print $2}')
auto6configmethod=$(grep ^ipv6_method: /etc/confluent/confluent.deploycfg |awk '{print $2}')
if [ "$autoconfigmethod" = "dhcp" ]; then

@@ -41,7 +41,7 @@ if [ ! -f /etc/confluent/firstboot.ran ]; then
run_remote_config firstboot.d
fi

curl -X POST -d 'status: complete' -H "CONFLUENT_NODENAME: $nodename" -H "CONFLUENT_APIKEY: $confluent_apikey" https://$confluent_mgr/confluent-api/self/updatestatus
curl -X POST -d 'status: complete' -H "CONFLUENT_NODENAME: $nodename" -H "CONFLUENT_APIKEY: $confluent_apikey" https://$confluent_websrv/confluent-api/self/updatestatus
systemctl disable firstboot
rm /etc/systemd/system/firstboot.service
rm /etc/confluent/firstboot.ran
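The same status update can be made through the bundled apiclient, as later hunks in this change do for 'status: staged'; an equivalent sketch:

python3 /opt/confluent/bin/apiclient /confluent-api/self/updatestatus -d 'status: complete'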

@@ -10,8 +10,16 @@ import stat
import struct
import sys
import subprocess
import traceback

bootuuid = None
vgname = 'localstorage'
oldvgname = None

def convert_lv(oldlvname):
if oldvgname is None:
return None
return oldlvname.replace(oldvgname, vgname)

def get_partname(devname, idx):
if devname[-1] in '0123456789':
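The hunk cuts off inside get_partname, but the trailing-digit test points at the usual kernel naming split; a hypothetical shell rendering of that convention (the 'p' separator is an assumption based on nvme-style names, not shown in the hunk):

# hypothetical sketch of the partition naming get_partname appears to implement
partname() {
    devname=$1; idx=$2
    case "$devname" in
        *[0-9]) echo "${devname}p${idx}" ;;   # nvme0n1 + 1 -> nvme0n1p1
        *)      echo "${devname}${idx}"  ;;   # sda + 1 -> sda1
    esac
}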
@@ -53,6 +61,8 @@ def get_image_metadata(imgpath):
header = img.read(16)
if header == b'\x63\x7b\x9d\x26\xb7\xfd\x48\x30\x89\xf9\x11\xcf\x18\xfd\xff\xa1':
for md in get_multipart_image_meta(img):
if md.get('device', '').startswith('/dev/zram'):
continue
yield md
else:
raise Exception('Installation from single part image not supported')
@@ -86,14 +96,14 @@ def fixup(rootdir, vols):
if tab.startswith('#ORIGFSTAB#'):
if entry[1] in devbymount:
targetdev = devbymount[entry[1]]
if targetdev.startswith('/dev/localstorage/'):
if targetdev.startswith('/dev/{}/'.format(vgname)):
entry[0] = targetdev
else:
uuid = subprocess.check_output(['blkid', '-s', 'UUID', '-o', 'value', targetdev]).decode('utf8')
uuid = uuid.strip()
entry[0] = 'UUID={}'.format(uuid)
elif entry[2] == 'swap':
entry[0] = '/dev/mapper/localstorage-swap'
entry[0] = '/dev/mapper/{}-swap'.format(vgname.replace('-', '--'))
entry[0] = entry[0].ljust(42)
entry[1] = entry[1].ljust(16)
entry[3] = entry[3].ljust(28)
@@ -141,6 +151,46 @@ def fixup(rootdir, vols):
grubsyscfg = os.path.join(rootdir, 'etc/sysconfig/grub')
if not os.path.exists(grubsyscfg):
grubsyscfg = os.path.join(rootdir, 'etc/default/grub')
kcmdline = os.path.join(rootdir, 'etc/kernel/cmdline')
if os.path.exists(kcmdline):
with open(kcmdline) as kcmdlinein:
kcmdlinecontent = kcmdlinein.read()
newkcmdlineent = []
for ent in kcmdlinecontent.split():
if ent.startswith('resume='):
newkcmdlineent.append('resume={}'.format(newswapdev))
elif ent.startswith('root='):
newkcmdlineent.append('root={}'.format(newrootdev))
elif ent.startswith('rd.lvm.lv='):
ent = convert_lv(ent)
if ent:
newkcmdlineent.append(ent)
else:
newkcmdlineent.append(ent)
with open(kcmdline, 'w') as kcmdlineout:
kcmdlineout.write(' '.join(newkcmdlineent) + '\n')
for loadent in glob.glob(os.path.join(rootdir, 'boot/loader/entries/*.conf')):
with open(loadent) as loadentin:
currentry = loadentin.read().split('\n')
with open(loadent, 'w') as loadentout:
for cfgline in currentry:
cfgparts = cfgline.split()
if not cfgparts or cfgparts[0] != 'options':
loadentout.write(cfgline + '\n')
continue
newcfgparts = [cfgparts[0]]
for cfgpart in cfgparts[1:]:
if cfgpart.startswith('root='):
newcfgparts.append('root={}'.format(newrootdev))
elif cfgpart.startswith('resume='):
newcfgparts.append('resume={}'.format(newswapdev))
elif cfgpart.startswith('rd.lvm.lv='):
cfgpart = convert_lv(cfgpart)
if cfgpart:
newcfgparts.append(cfgpart)
else:
newcfgparts.append(cfgpart)
loadentout.write(' '.join(newcfgparts) + '\n')
with open(grubsyscfg) as defgrubin:
defgrub = defgrubin.read().split('\n')
with open(grubsyscfg, 'w') as defgrubout:
@@ -148,9 +198,18 @@ def fixup(rootdir, vols):
gline = gline.split()
newline = []
for ent in gline:
if ent.startswith('resume=') or ent.startswith('rd.lvm.lv'):
continue
newline.append(ent)
if ent.startswith('resume='):
newline.append('resume={}'.format(newswapdev))
elif ent.startswith('root='):
newline.append('root={}'.format(newrootdev))
elif ent.startswith('rd.lvm.lv='):
ent = convert_lv(ent)
if ent:
newline.append(ent)
elif '""' in ent:
newline.append('""')
else:
newline.append(ent)
defgrubout.write(' '.join(newline) + '\n')
grubcfg = subprocess.check_output(['find', os.path.join(rootdir, 'boot'), '-name', 'grub.cfg']).decode('utf8').strip().replace(rootdir, '/').replace('//', '/')
grubcfg = grubcfg.split('\n')
@@ -227,8 +286,14 @@ def had_swap():
return True
return False

newrootdev = None
newswapdev = None
def install_to_disk(imgpath):
global bootuuid
global newrootdev
global newswapdev
global vgname
global oldvgname
lvmvols = {}
deftotsize = 0
mintotsize = 0
@@ -260,6 +325,13 @@ def install_to_disk(imgpath):
biggestfs = fs
biggestsize = fs['initsize']
if fs['device'].startswith('/dev/mapper'):
oldvgname = fs['device'].rsplit('/', 1)[-1]
# if node has - then /dev/mapper will double up the hyphen
if '_' in oldvgname and '-' in oldvgname.split('_')[-1]:
oldvgname = oldvgname.rsplit('-', 1)[0].replace('--', '-')
osname = oldvgname.split('_')[0]
nodename = socket.gethostname().split('.')[0]
vgname = '{}_{}'.format(osname, nodename)
lvmvols[fs['device'].replace('/dev/mapper/', '')] = fs
deflvmsize += fs['initsize']
minlvmsize += fs['minsize']
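The double-hyphen handling follows device-mapper name escaping: a literal '-' inside a VG or LV name is doubled in /dev/mapper entries, and a single '-' separates VG from LV. A small sketch of the unescaping the hunk performs:

# sketch: /dev/mapper/ubuntu--vg-root belongs to VG "ubuntu-vg", LV "root"
dmname=ubuntu--vg-root
vgpart=${dmname%-*}                  # drop the trailing -<lv> segment -> ubuntu--vg
echo "$vgpart" | sed -e 's/--/-/g'   # collapse doubled hyphens -> ubuntu-vg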
@@ -304,6 +376,8 @@ def install_to_disk(imgpath):
end = sectors
parted.run('mkpart primary {}s {}s'.format(curroffset, end))
vol['targetdisk'] = get_partname(instdisk, volidx)
if vol['mount'] == '/':
newrootdev = vol['targetdisk']
curroffset += size + 1
if not lvmvols:
if swapsize:
@@ -313,13 +387,14 @@ def install_to_disk(imgpath):
if end > sectors:
end = sectors
parted.run('mkpart swap {}s {}s'.format(curroffset, end))
subprocess.check_call(['mkswap', get_partname(instdisk, volidx + 1)])
newswapdev = get_partname(instdisk, volidx + 1)
subprocess.check_call(['mkswap', newswapdev])
else:
parted.run('mkpart lvm {}s 100%'.format(curroffset))
lvmpart = get_partname(instdisk, volidx + 1)
subprocess.check_call(['pvcreate', '-ff', '-y', lvmpart])
subprocess.check_call(['vgcreate', 'localstorage', lvmpart])
vginfo = subprocess.check_output(['vgdisplay', 'localstorage', '--units', 'b']).decode('utf8')
subprocess.check_call(['vgcreate', vgname, lvmpart])
vginfo = subprocess.check_output(['vgdisplay', vgname, '--units', 'b']).decode('utf8')
vginfo = vginfo.split('\n')
pesize = 0
pes = 0
@@ -346,13 +421,17 @@ def install_to_disk(imgpath):
extents += 1
if vol['mount'] == '/':
lvname = 'root'

else:
lvname = vol['mount'].replace('/', '_')
subprocess.check_call(['lvcreate', '-l', '{}'.format(extents), '-y', '-n', lvname, 'localstorage'])
vol['targetdisk'] = '/dev/localstorage/{}'.format(lvname)
subprocess.check_call(['lvcreate', '-l', '{}'.format(extents), '-y', '-n', lvname, vgname])
vol['targetdisk'] = '/dev/{}/{}'.format(vgname, lvname)
if vol['mount'] == '/':
newrootdev = vol['targetdisk']
if swapsize:
subprocess.check_call(['lvcreate', '-y', '-l', '{}'.format(swapsize // pesize), '-n', 'swap', 'localstorage'])
subprocess.check_call(['mkswap', '/dev/localstorage/swap'])
subprocess.check_call(['lvcreate', '-y', '-l', '{}'.format(swapsize // pesize), '-n', 'swap', vgname])
subprocess.check_call(['mkswap', '/dev/{}/swap'.format(vgname)])
newswapdev = '/dev/{}/swap'.format(vgname)
os.makedirs('/run/imginst/targ')
for vol in allvols:
with open(vol['targetdisk'], 'wb') as partition:
@@ -426,4 +505,9 @@ def install_to_disk(imgpath):


if __name__ == '__main__':
install_to_disk(os.environ['mountsrc'])
try:
install_to_disk(os.environ['mountsrc'])
except Exception:
traceback.print_exc()
time.sleep(86400)
raise

@@ -30,6 +30,7 @@ if [ ! -f /sysroot/tmp/installdisk ]; then
done
fi
lvm vgchange -a n
/sysroot/usr/sbin/wipefs -a /dev/$(cat /sysroot/tmp/installdisk)
udevadm control -e
if [ -f /sysroot/etc/lvm/devices/system.devices ]; then
rm /sysroot/etc/lvm/devices/system.devices

@@ -16,6 +16,7 @@ if [ -z "$confluent_mgr" ]; then
fi
confluent_profile=$(grep ^profile: /etc/confluent/confluent.deploycfg|awk '{print $2}')
timedatectl set-timezone $(grep ^timezone: /etc/confluent/confluent.deploycfg|awk '{print $2}')
hostnamectl set-hostname $nodename
export nodename confluent_mgr confluent_profile
. /etc/confluent/functions
mkdir -p /var/log/confluent

@@ -23,9 +23,9 @@ exec 2>> /var/log/confluent/confluent-post.log
chmod 600 /var/log/confluent/confluent-post.log
tail -f /var/log/confluent/confluent-post.log > /dev/console &
logshowpid=$!
curl -f https://$confluent_mgr/confluent-public/os/$confluent_profile/scripts/firstboot.service > /etc/systemd/system/firstboot.service
curl -f https://$confluent_websrv/confluent-public/os/$confluent_profile/scripts/firstboot.service > /etc/systemd/system/firstboot.service
mkdir -p /opt/confluent/bin
curl -f https://$confluent_mgr/confluent-public/os/$confluent_profile/scripts/firstboot.sh > /opt/confluent/bin/firstboot.sh
curl -f https://$confluent_websrv/confluent-public/os/$confluent_profile/scripts/firstboot.sh > /opt/confluent/bin/firstboot.sh
chmod +x /opt/confluent/bin/firstboot.sh
systemctl enable firstboot
selinuxpolicy=$(grep ^SELINUXTYPE /etc/selinux/config |awk -F= '{print $2}')
@@ -40,7 +40,7 @@ run_remote_parts post.d
# Induce execution of remote configuration, e.g. ansible plays in ansible/post.d/
run_remote_config post.d

curl -sf -X POST -d 'status: staged' -H "CONFLUENT_NODENAME: $nodename" -H "CONFLUENT_APIKEY: $confluent_apikey" https://$confluent_mgr/confluent-api/self/updatestatus
curl -sf -X POST -d 'status: staged' -H "CONFLUENT_NODENAME: $nodename" -H "CONFLUENT_APIKEY: $confluent_apikey" https://$confluent_websrv/confluent-api/self/updatestatus

kill $logshowpid

@@ -116,4 +116,5 @@ profile=$(grep ^profile: /etc/confluent/confluent.deploycfg.new | sed -e 's/^pro
mv /etc/confluent/confluent.deploycfg.new /etc/confluent/confluent.deploycfg
export node mgr profile
. /tmp/modinstall
localcli network firewall load
exec /bin/install

@@ -58,6 +58,10 @@ if ! grep console= /proc/cmdline > /dev/null; then
echo "Automatic console configured for $autocons"
fi
echo sshd:x:30:30:SSH User:/var/empty/sshd:/sbin/nologin >> /etc/passwd
modprobe ib_ipoib
modprobe ib_umad
modprobe hfi1
modprobe mlx5_ib
cd /sys/class/net
for nic in *; do
ip link set $nic up

@@ -10,6 +10,7 @@ import stat
import struct
import sys
import subprocess
import traceback

bootuuid = None

@@ -424,5 +425,10 @@ def install_to_disk(imgpath):


if __name__ == '__main__':
install_to_disk(os.environ['mountsrc'])
try:
install_to_disk(os.environ['mountsrc'])
except Exception:
traceback.print_exc()
time.sleep(86400)
raise

@@ -8,7 +8,7 @@ for addr in $(grep ^MANAGER: /etc/confluent/confluent.info|awk '{print $2}'|sed
fi
done
mkdir -p /mnt/remoteimg /mnt/remote /mnt/overlay
if grep confluennt_imagemethtod=untethered /proc/cmdline > /dev/null; then
if grep confluent_imagemethod=untethered /proc/cmdline > /dev/null; then
mount -t tmpfs untethered /mnt/remoteimg
curl https://$confluent_mgr/confluent-public/os/$confluent_profile/rootimg.sfs -o /mnt/remoteimg/rootimg.sfs
else

@@ -0,0 +1,11 @@
[Unit]
Description=Confluent onboot hook
Requires=network-online.target
After=network-online.target

[Service]
ExecStart=/opt/confluent/bin/onboot.sh

[Install]
WantedBy=multi-user.target

@@ -0,0 +1,40 @@
#!/bin/bash

# This script is executed on each boot as it is
# completed. It is best to edit the middle of the file as
# noted below so custom commands are executed before
# the script notifies confluent that install is fully complete.

nodename=$(grep ^NODENAME /etc/confluent/confluent.info|awk '{print $2}')
confluent_apikey=$(cat /etc/confluent/confluent.apikey)
v4meth=$(grep ^ipv4_method: /etc/confluent/confluent.deploycfg|awk '{print $2}')
if [ "$v4meth" = "null" -o -z "$v4meth" ]; then
confluent_mgr=$(grep ^deploy_server_v6: /etc/confluent/confluent.deploycfg|awk '{print $2}')
fi
if [ -z "$confluent_mgr" ]; then
confluent_mgr=$(grep ^deploy_server: /etc/confluent/confluent.deploycfg|awk '{print $2}')
fi
confluent_profile=$(grep ^profile: /etc/confluent/confluent.deploycfg|awk '{print $2}')
timedatectl set-timezone $(grep ^timezone: /etc/confluent/confluent.deploycfg|awk '{print $2}')
hostnamectl set-hostname $nodename
export nodename confluent_mgr confluent_profile
. /etc/confluent/functions
mkdir -p /var/log/confluent
chmod 700 /var/log/confluent
exec >> /var/log/confluent/confluent-onboot.log
exec 2>> /var/log/confluent/confluent-onboot.log
chmod 600 /var/log/confluent/confluent-onboot.log
tail -f /var/log/confluent/confluent-onboot.log > /dev/console &
logshowpid=$!

run_remote_python syncfileclient
run_remote_python confignet

# onboot scripts may be placed into onboot.d, e.g. onboot.d/01-firstaction.sh, onboot.d/02-secondaction.sh
run_remote_parts onboot.d

# Induce execution of remote configuration, e.g. ansible plays in ansible/onboot.d/
run_remote_config onboot.d

#curl -X POST -d 'status: booted' -H "CONFLUENT_NODENAME: $nodename" -H "CONFLUENT_APIKEY: $confluent_apikey" https://$confluent_mgr/confluent-api/self/updatestatus
kill $logshowpid
@@ -4,4 +4,5 @@ confluent_mgr=$(grep ^deploy_server $deploycfg|awk '{print $2}')
confluent_profile=$(grep ^profile: $deploycfg|awk '{print $2}')
export deploycfg confluent_mgr confluent_profile
curl -f https://$confluent_mgr/confluent-public/os/$confluent_profile/scripts/post.sh > /tmp/post.sh
. /tmp/post.sh
bash /tmp/post.sh
true

@@ -2,7 +2,10 @@
echo "Confluent first boot is running"
HOME=$(getent passwd $(whoami)|cut -d: -f 6)
export HOME
(
exec >> /target/var/log/confluent/confluent-firstboot.log
exec 2>> /target/var/log/confluent/confluent-firstboot.log
chmod 600 /target/var/log/confluent/confluent-firstboot.log
cp -a /etc/confluent/ssh/* /etc/ssh/
systemctl restart sshd
rootpw=$(grep ^rootpassword: /etc/confluent/confluent.deploycfg |awk '{print $2}')
@@ -18,7 +21,10 @@ done
hostnamectl set-hostname $(grep ^NODENAME: /etc/confluent/confluent.info | awk '{print $2}')
touch /etc/cloud/cloud-init.disabled
source /etc/confluent/functions

confluent_profile=$(grep ^profile: /etc/confluent/confluent.deploycfg|awk '{print $2}')
export confluent_mgr confluent_profile
run_remote_parts firstboot.d
run_remote_config firstboot.d
curl --capath /etc/confluent/tls -f -H "CONFLUENT_NODENAME: $nodename" -H "CONFLUENT_APIKEY: $confluent_apikey" -X POST -d "status: complete" https://$confluent_mgr/confluent-api/self/updatestatus
) &
tail --pid $! -n 0 -F /target/var/log/confluent/confluent-post.log > /dev/console

@@ -8,7 +8,6 @@ chmod go-rwx /etc/confluent/*
for i in /custom-installation/ssh/*.ca; do
echo '@cert-authority *' $(cat $i) >> /target/etc/ssh/ssh_known_hosts
done

cp -a /etc/ssh/ssh_host* /target/etc/confluent/ssh/
cp -a /etc/ssh/sshd_config.d/confluent.conf /target/etc/confluent/ssh/sshd_config.d/
sshconf=/target/etc/ssh/ssh_config
@@ -19,10 +18,15 @@ echo 'Host *' >> $sshconf
echo ' HostbasedAuthentication yes' >> $sshconf
echo ' EnableSSHKeysign yes' >> $sshconf
echo ' HostbasedKeyTypes *ed25519*' >> $sshconf

cp /etc/confluent/functions /target/etc/confluent/functions
source /etc/confluent/functions
mkdir -p /target/var/log/confluent
cp /var/log/confluent/* /target/var/log/confluent/
(
exec >> /target/var/log/confluent/confluent-post.log
exec 2>> /target/var/log/confluent/confluent-post.log
chmod 600 /target/var/log/confluent/confluent-post.log
curl -f https://$confluent_mgr/confluent-public/os/$confluent_profile/scripts/firstboot.sh > /target/etc/confluent/firstboot.sh
curl -f https://$confluent_mgr/confluent-public/os/$confluent_profile/scripts/functions > /target/etc/confluent/functions
source /target/etc/confluent/functions
chmod +x /target/etc/confluent/firstboot.sh
cp /tmp/allnodes /target/root/.shosts
cp /tmp/allnodes /target/etc/ssh/shosts.equiv
@@ -56,6 +60,7 @@ cp /custom-installation/confluent/bin/apiclient /target/opt/confluent/bin
mount -o bind /dev /target/dev
mount -o bind /proc /target/proc
mount -o bind /sys /target/sys
mount -o bind /sys/firmware/efi/efivars /target/sys/firmware/efi/efivars
if [ 1 = $updategrub ]; then
chroot /target update-grub
fi
@@ -83,6 +88,8 @@ chroot /target bash -c "source /etc/confluent/functions; run_remote_parts post.d
source /target/etc/confluent/functions

run_remote_config post
python3 /opt/confluent/bin/apiclient /confluent-api/self/updatestatus -d 'status: staged'

umount /target/sys /target/dev /target/proc

) &
tail --pid $! -n 0 -F /target/var/log/confluent/confluent-post.log > /dev/console

@@ -1,5 +1,16 @@
#!/bin/bash
deploycfg=/custom-installation/confluent/confluent.deploycfg
mkdir -p /var/log/confluent
mkdir -p /opt/confluent/bin
mkdir -p /etc/confluent
cp /custom-installation/confluent/confluent.info /custom-installation/confluent/confluent.apikey /etc/confluent/
cat /custom-installation/tls/*.pem >> /etc/confluent/ca.pem
cp /custom-installation/confluent/bin/apiclient /opt/confluent/bin
cp $deploycfg /etc/confluent/
(
exec >> /var/log/confluent/confluent-pre.log
exec 2>> /var/log/confluent/confluent-pre.log
chmod 600 /var/log/confluent/confluent-pre.log

cryptboot=$(grep encryptboot: $deploycfg|sed -e 's/^encryptboot: //')
if [ "$cryptboot" != "" ] && [ "$cryptboot" != "none" ] && [ "$cryptboot" != "null" ]; then
@@ -23,7 +34,17 @@ echo HostbasedAuthentication yes >> /etc/ssh/sshd_config.d/confluent.conf
echo HostbasedUsesNameFromPacketOnly yes >> /etc/ssh/sshd_config.d/confluent.conf
echo IgnoreRhosts no >> /etc/ssh/sshd_config.d/confluent.conf
systemctl restart sshd
mkdir -p /etc/confluent
export nodename confluent_profile confluent_mgr
curl -f https://$confluent_mgr/confluent-public/os/$confluent_profile/scripts/functions > /etc/confluent/functions
. /etc/confluent/functions
run_remote_parts pre.d
curl -f -X POST -H "CONFLUENT_NODENAME: $nodename" -H "CONFLUENT_APIKEY: $apikey" https://$confluent_mgr/confluent-api/self/nodelist > /tmp/allnodes
curl -f https://$confluent_mgr/confluent-public/os/$confluent_profile/scripts/getinstalldisk > /custom-installation/getinstalldisk
python3 /custom-installation/getinstalldisk
if [ ! -e /tmp/installdisk ]; then
curl -f https://$confluent_mgr/confluent-public/os/$confluent_profile/scripts/getinstalldisk > /custom-installation/getinstalldisk
python3 /custom-installation/getinstalldisk
fi
sed -i s!%%INSTALLDISK%%!/dev/$(cat /tmp/installdisk)! /autoinstall.yaml
) &
tail --pid $! -n 0 -F /var/log/confluent/confluent-pre.log > /dev/console
@@ -4,4 +4,5 @@ confluent_mgr=$(grep ^deploy_server $deploycfg|awk '{print $2}')
confluent_profile=$(grep ^profile: $deploycfg|awk '{print $2}')
export deploycfg confluent_mgr confluent_profile
curl -f https://$confluent_mgr/confluent-public/os/$confluent_profile/scripts/post.sh > /tmp/post.sh
. /tmp/post.sh
bash /tmp/post.sh
true

@@ -26,12 +26,14 @@ if [ -e /tmp/cnflnthmackeytmp ]; then
chroot . curl -f -H "CONFLUENT_NODENAME: $NODENAME" -H "CONFLUENT_CRYPTHMAC: $(cat /root/$hmacfile)" -d @/tmp/cnflntcryptfile https://$MGR/confluent-api/self/registerapikey
cp /root/$passfile /root/custom-installation/confluent/confluent.apikey
DEVICE=$(cat /tmp/autodetectnic)
IP=done
else
chroot . custom-installation/confluent/bin/clortho $NODENAME $MGR > /root/custom-installation/confluent/confluent.apikey
MGR=[$MGR]
nic=$(grep ^MANAGER /custom-installation/confluent/confluent.info|grep fe80::|sed -e s/.*%//|head -n 1)
nic=$(ip link |grep ^$nic:|awk '{print $2}')
DEVICE=${nic%:}
IP=done
fi
if [ -z "$MGTIFACE" ]; then
chroot . usr/bin/curl -f -H "CONFLUENT_NODENAME: $NODENAME" -H "CONFLUENT_APIKEY: $(cat /root//custom-installation/confluent/confluent.apikey)" https://${MGR}/confluent-api/self/deploycfg > $deploycfg

@@ -79,8 +79,12 @@ if [ ! -z "$cons" ]; then
fi
echo "Preparing to deploy $osprofile from $MGR"
echo $osprofile > /custom-installation/confluent/osprofile
echo URL=http://${MGR}/confluent-public/os/$osprofile/distribution/install.iso >> /conf/param.conf
fcmdline="$(cat /custom-installation/confluent/cmdline.orig) url=http://${MGR}/confluent-public/os/$osprofile/distribution/install.iso"
. /etc/os-release
DIRECTISO=$(blkid -t TYPE=iso9660 |grep -Ei ' LABEL="Ubuntu-Server '$VERSION_ID)
if [ -z "$DIRECTISO" ]; then
echo URL=http://${MGR}/confluent-public/os/$osprofile/distribution/install.iso >> /conf/param.conf
fcmdline="$(cat /custom-installation/confluent/cmdline.orig) url=http://${MGR}/confluent-public/os/$osprofile/distribution/install.iso"
fi
if [ ! -z "$cons" ]; then
fcmdline="$fcmdline console=${cons#/dev/}"
fi

@@ -3,5 +3,12 @@ sed -i 's/label: ubuntu/label: Ubuntu/' $2/profile.yaml && \
ln -s $1/casper/vmlinuz $2/boot/kernel && \
ln -s $1/casper/initrd $2/boot/initramfs/distribution && \
mkdir -p $2/boot/efi/boot && \
ln -s $1/EFI/boot/* $2/boot/efi/boot
if [ -d $1/EFI/boot/ ]; then
ln -s $1/EFI/boot/* $2/boot/efi/boot
elif [ -d $1/efi/boot/ ]; then
ln -s $1/efi/boot/* $2/boot/efi/boot
else
echo "Unrecogrized boot contents in media" >&2
|
||||
exit 1
fi

@@ -0,0 +1,12 @@
#!/usr/bin/python3
import yaml
import os

ainst = {}
with open('/autoinstall.yaml', 'r') as allin:
ainst = yaml.safe_load(allin)

ainst['storage']['layout']['password'] = os.environ['lukspass']

with open('/autoinstall.yaml', 'w') as allout:
yaml.safe_dump(ainst, allout)

@@ -3,11 +3,11 @@ echo "Confluent first boot is running"
HOME=$(getent passwd $(whoami)|cut -d: -f 6)
export HOME
(
exec >> /target/var/log/confluent/confluent-firstboot.log
exec 2>> /target/var/log/confluent/confluent-firstboot.log
chmod 600 /target/var/log/confluent/confluent-firstboot.log
exec >> /var/log/confluent/confluent-firstboot.log
exec 2>> /var/log/confluent/confluent-firstboot.log
chmod 600 /var/log/confluent/confluent-firstboot.log
cp -a /etc/confluent/ssh/* /etc/ssh/
systemctl restart sshd
systemctl restart ssh
rootpw=$(grep ^rootpassword: /etc/confluent/confluent.deploycfg |awk '{print $2}')
if [ ! -z "$rootpw" -a "$rootpw" != "null" ]; then
echo root:$rootpw | chpasswd -e
@@ -27,4 +27,4 @@ run_remote_parts firstboot.d
run_remote_config firstboot.d
curl --capath /etc/confluent/tls -f -H "CONFLUENT_NODENAME: $nodename" -H "CONFLUENT_APIKEY: $confluent_apikey" -X POST -d "status: complete" https://$confluent_mgr/confluent-api/self/updatestatus
) &
tail --pid $! -n 0 -F /target/var/log/confluent/confluent-post.log > /dev/console
tail --pid $! -n 0 -F /var/log/confluent/confluent-post.log > /dev/console

@@ -0,0 +1,26 @@
#!/usr/bin/python3
import yaml
import os

ainst = {}
with open('/autoinstall.yaml', 'r') as allin:
ainst = yaml.safe_load(allin)

tz = None
ntps = []
with open('/etc/confluent/confluent.deploycfg', 'r') as confluentdeploycfg:
dcfg = yaml.safe_load(confluentdeploycfg)
tz = dcfg['timezone']
ntps = dcfg.get('ntpservers', [])

if ntps and not ainst.get('ntp', None):
ainst['ntp'] = {}
ainst['ntp']['enabled'] = True
ainst['ntp']['servers'] = ntps

if tz and not ainst.get('timezone'):
ainst['timezone'] = tz

with open('/autoinstall.yaml', 'w') as allout:
yaml.safe_dump(ainst, allout)
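Run against a typical deploycfg, mergetime would leave an autoinstall fragment along these lines (values purely illustrative):

timezone: America/New_York
ntp:
  enabled: true
  servers:
  - ntp1.example.com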

@@ -60,9 +60,12 @@ cp /custom-installation/confluent/bin/apiclient /target/opt/confluent/bin
mount -o bind /dev /target/dev
mount -o bind /proc /target/proc
mount -o bind /sys /target/sys
mount -o bind /run /target/run
mount -o bind /sys/firmware/efi/efivars /target/sys/firmware/efi/efivars
if [ 1 = $updategrub ]; then
chroot /target update-grub
fi

echo "Port 22" >> /etc/ssh/sshd_config
echo "Port 2222" >> /etc/ssh/sshd_config
echo "Match LocalPort 22" >> /etc/ssh/sshd_config
@@ -87,8 +90,36 @@ chroot /target bash -c "source /etc/confluent/functions; run_remote_parts post.d
source /target/etc/confluent/functions

run_remote_config post

if [ -f /etc/confluent_lukspass ]; then
numdevs=$(lsblk -lo name,uuid|grep $(awk '{print $2}' < /target/etc/crypttab |sed -e s/UUID=//)|wc -l)
if [ 0$numdevs -ne 1 ]; then
wall "Unable to identify the LUKS device, halting install"
while :; do sleep 86400; done
fi
CRYPTTAB_SOURCE=$(awk '{print $2}' /target/etc/crypttab)
. /target/usr/lib/cryptsetup/functions
crypttab_resolve_source

if [ ! -e $CRYPTTAB_SOURCE ]; then
wall "Unable to find $CRYPTTAB_SOURCE, halting install"
while :; do sleep 86400; done
fi
cp /etc/confluent_lukspass /target/etc/confluent/luks.key
chmod 000 /target/etc/confluent/luks.key
lukspass=$(cat /etc/confluent_lukspass)
chroot /target apt install libtss2-rc0
PASSWORD=$lukspass chroot /target systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs="" $CRYPTTAB_SOURCE
fetch_remote systemdecrypt
mv systemdecrypt /target/etc/initramfs-tools/scripts/local-top/systemdecrypt
fetch_remote systemdecrypt-hook
mv systemdecrypt-hook /target/etc/initramfs-tools/hooks/systemdecrypt
chmod 755 /target/etc/initramfs-tools/scripts/local-top/systemdecrypt /target/etc/initramfs-tools/hooks/systemdecrypt
chroot /target update-initramfs -u
fi
python3 /opt/confluent/bin/apiclient /confluent-api/self/updatestatus -d 'status: staged'

umount /target/sys /target/dev /target/proc

umount /target/sys /target/dev /target/proc /target/run
) &
tail --pid $! -n 0 -F /target/var/log/confluent/confluent-post.log > /dev/console

@@ -13,11 +13,6 @@ exec 2>> /var/log/confluent/confluent-pre.log
chmod 600 /var/log/confluent/confluent-pre.log

cryptboot=$(grep encryptboot: $deploycfg|sed -e 's/^encryptboot: //')
if [ "$cryptboot" != "" ] && [ "$cryptboot" != "none" ] && [ "$cryptboot" != "null" ]; then
echo "****Encrypted boot requested, but not implemented for this OS, halting install" > /dev/console
[ -f '/tmp/autoconsdev' ] && (echo "****Encrypted boot requested, but not implemented for this OS, halting install" >> $(cat /tmp/autoconsdev))
while :; do sleep 86400; done
fi

cat /custom-installation/ssh/*pubkey > /root/.ssh/authorized_keys
@@ -45,6 +40,24 @@ if [ ! -e /tmp/installdisk ]; then
python3 /custom-installation/getinstalldisk
fi
sed -i s!%%INSTALLDISK%%!/dev/$(cat /tmp/installdisk)! /autoinstall.yaml
run_remote_python mergetime
if [ "$cryptboot" != "" ] && [ "$cryptboot" != "none" ] && [ "$cryptboot" != "null" ]; then
lukspass=$(python3 /opt/confluent/bin/apiclient /confluent-api/self/profileprivate/pending/luks.key 2> /dev/null)
if [ -z "$lukspass" ]; then
lukspass=$(head -c 66 < /dev/urandom |base64 -w0)
fi
export lukspass
run_remote_python addcrypt
if ! grep 'password:' /autoinstall.yaml > /dev/null; then
echo "****Encrypted boot requested, but the user-data does not have a hook to enable,halting install" > /dev/console
|
||||
[ -f '/tmp/autoconsdev' ] && (echo "****Encryptod boot requested, but the user-data does not have a hook to enable,halting install" >> $(cat /tmp/autoconsdev))
|
||||
while :; do sleep 86400; done
fi
sed -i s!%%CRYPTPASS%%!$lukspass! /autoinstall.yaml
sed -i s!'#CRYPTBOOT'!! /autoinstall.yaml
echo -n $lukspass > /etc/confluent_lukspass
chmod 000 /etc/confluent_lukspass
fi
) &
tail --pid $! -n 0 -F /var/log/confluent/confluent-pre.log > /dev/console
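The sed pair above presumes the profile's user-data ships a disabled template for the encrypted layout; a hypothetical fragment before substitution (the exact keys depend on the profile's autoinstall template, only the %%CRYPTPASS%% and #CRYPTBOOT markers are implied by the script):

storage:
  layout:
    name: lvm
    #CRYPTBOOT password: %%CRYPTPASS%%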
@@ -0,0 +1,17 @@
#!/bin/sh
case $1 in
prereqs)
echo
exit 0
;;
esac

systemdecryptnow() {
. /usr/lib/cryptsetup/functions
local CRYPTTAB_SOURCE=$(awk '{print $2}' /systemdecrypt/crypttab)
local CRYPTTAB_NAME=$(awk '{print $1}' /systemdecrypt/crypttab)
crypttab_resolve_source
/lib/systemd/systemd-cryptsetup attach "${CRYPTTAB_NAME}" "${CRYPTTAB_SOURCE}" none tpm2-device=auto
}

systemdecryptnow
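The two awk fields are the first two columns of the standard crypttab(5) layout, name source keyfile options, so the crypttab carried into the initramfs would look roughly like this (UUID invented for illustration):

dm_crypt-0 UUID=c0ffee00-1111-2222-3333-444455556666 none tpm2-device=auto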
@@ -0,0 +1,26 @@
#!/bin/sh
case "$1" in
prereqs)
echo
exit 0
;;
esac

. /usr/share/initramfs-tools/hook-functions
mkdir -p $DESTDIR/systemdecrypt
copy_exec /lib/systemd/systemd-cryptsetup /lib/systemd
for i in /lib/x86_64-linux-gnu/libtss2*
do
copy_exec ${i} /lib/x86_64-linux-gnu
done
if [ -f /lib/x86_64-linux-gnu/cryptsetup/libcryptsetup-token-systemd-tpm2.so ]; then
mkdir -p $DESTDIR/lib/x86_64-linux-gnu/cryptsetup
copy_exec /lib/x86_64-linux-gnu/cryptsetup/libcryptsetup-token-systemd-tpm2.so /lib/x86_64-linux-gnu/cryptsetup
fi
mkdir -p $DESTDIR/scripts/local-top

echo /scripts/local-top/systemdecrypt >> $DESTDIR/scripts/local-top/ORDER

if [ -f $DESTDIR/cryptroot/crypttab ]; then
mv $DESTDIR/cryptroot/crypttab $DESTDIR/systemdecrypt/crypttab
fi
confluent_osdeploy/ubuntu24.04 (symbolic link)
@@ -0,0 +1 @@
ubuntu22.04

confluent_osdeploy/ubuntu24.04-diskless (symbolic link)
@@ -0,0 +1 @@
ubuntu20.04-diskless
@@ -31,6 +31,8 @@ int add_uuid(char* destination, int maxsize) {
strncpy(destination, "/uuid=", maxsize);
uuidsize = read(uuidf, destination + 6, maxsize - 6);
close(uuidf);
if (uuidsize < 0) { return 0; }
if (uuidsize > 524288) { return 0; }
if (destination[uuidsize + 5] == '\n') {
destination[uuidsize + 5 ] = 0;
}
@@ -45,6 +47,8 @@ int add_confluent_uuid(char* destination, int maxsize) {
strncpy(destination, "/confluentuuid=", maxsize);
uuidsize = read(uuidf, destination + 15, maxsize - 15);
close(uuidf);
if (uuidsize < 0) { return 0; }
if (uuidsize > 524288) { return 0; }
if (destination[uuidsize + 14] == '\n') {
destination[uuidsize + 14] = 0;
}

@@ -1,21 +1,102 @@
Name: confluent-client
Project: https://hpc.lenovo.com/users/
Source: https://github.com/lenovo/confluent
Upstream-Name: confluent-client

All files of the confluent-client software are distributed under the terms of the Apache-2.0 license as indicated below:

Files: *
Copyright: 2014-2019 Lenovo
Copyright: 2014-2023 Lenovo
License: Apache-2.0

File: client.py:
File: client.py
sockapi.py
redfish.py
attributes.py
httpapi.py
configmanager.py
Copyright: 2014 IBM Corporation
2015-2019 Lenovo
License: Apache-2.0

File: sortutil.py
log.py
Copyright: 2014 IBM Corporation
2015-2016 Lenovo
License: Apache-2.0

File: util.py
noderange.py
main.py
exceptions.py
Copyright: 2014 IBM Corporation
2015-2017 Lenovo
License: Apache-2.0

File: ipmi.py
plugin.py
core.py
consoleserver.py
Copyright: 2014 IBM Corporation
2015-2018 Lenovo
License: Apache-2.0

File: tlv.py
shellmodule.py
conf.py
auth.py
Copyright: 2014 IBM Corporation
License: Apache-2.0

License: Apache-2.0
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.

"Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.

"Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.

"You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License.

"Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.

"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.

"Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).

"Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.

"Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution."

"Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.

2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.

3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.

4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:

You must give any other recipients of the Work or Derivative Works a copy of this License; and
You must cause any modified files to carry prominent notices stating that You changed the files; and
You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and
If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.
You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.
|
||||
|
||||
5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.
|
||||
|
||||
6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.
|
||||
|
||||
7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.
|
||||
|
||||
8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.
|
||||
|
||||
9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.
|
||||
|
||||
END OF TERMS AND CONDITIONS
|
@@ -22,6 +22,11 @@ import shutil
import eventlet.green.socket as socket
import eventlet
import greenlet
import pwd
import signal
import confluent.collective.manager as collective
import confluent.noderange as noderange


def fprint(txt):
    sys.stdout.write(txt)

@@ -109,6 +114,8 @@ def nics_missing_ipv6():
        iname, state = comps[:2]
        if iname == b'lo':
            continue
        if iname == b'virbr0':
            continue
        addrs = comps[2:]
        hasv6 = False
        hasv4 = False

@@ -157,6 +164,7 @@ def lookup_node(node):
if __name__ == '__main__':
    ap = argparse.ArgumentParser(description='Run configuration checks for a system running confluent service')
    ap.add_argument('-n', '--node', help='A node name to run node specific checks against')
    ap.add_argument('-a', '--automation', help='Do checks against a deployed node for automation and syncfiles function', action='store_true')
    args, extra = ap.parse_known_args(sys.argv)
    if len(extra) > 1:
        ap.print_help()

@@ -217,6 +225,7 @@ if __name__ == '__main__':
        print('OK')
    except subprocess.CalledProcessError:
        emprint('Failed to load confluent automation key, syncfiles and profile ansible plays will not work (Example resolution: osdeploy initialize -a)')
    os.kill(int(sshutil.agent_pid), signal.SIGTERM)
    fprint('Checking for blocked insecure boot: ')
    if insecure_boot_attempts():
        emprint('Some nodes are attempting network boot using PXE or HTTP boot, but the node is not configured to allow this (Example resolution: nodegroupattrib everything deployment.useinsecureprotocols=firmware)')

@@ -252,6 +261,9 @@ if __name__ == '__main__':
            uuid = rsp.get('id.uuid', {}).get('value', None)
            if uuid:
                uuidok = True
            if 'collective.managercandidates' in rsp:
                # Check if current node in candidates
                pass
            if 'deployment.useinsecureprotocols' in rsp:
                insec = rsp.get('deployment.useinsecureprotocols', {}).get('value', None)
                if insec != 'firmware':

@@ -270,17 +282,40 @@ if __name__ == '__main__':
            switch_value = rsp[key].get('value', None)
            if switch_value and switch_value not in valid_nodes:
                emprint(f'{switch_value} is not a valid node name (as referenced by attribute "{key}" of node {args.node}).')
    print(f"Checking network configuration for {args.node}")
    cfg = configmanager.ConfigManager(None)
    cfd = cfg.get_node_attributes(
        args.node, ('deployment.*', 'collective.managercandidates'))
    profile = cfd.get(args.node, {}).get(
        'deployment.pendingprofile', {}).get('value', None)
    if not profile:
        emprint(
            f'{args.node} is not currently set to deploy any '
            'profile, network boot attempts will be ignored')
    candmgrs = cfd.get(args.node, {}).get(
        'collective.managercandidates', {}).get('value', None)
    if candmgrs:
        try:
            candmgrs = noderange.NodeRange(candmgrs, cfg).nodes
        except Exception:  # fallback to unverified noderange
            candmgrs = noderange.NodeRange(candmgrs).nodes
        if collective.get_myname() not in candmgrs:
            emprint(f'{args.node} has deployment restricted to '
                    'certain collective managers excluding the '
                    'system running the selfcheck')
    print(f"Checking network configuration for {args.node}")
    bootablev4nics = []
    bootablev6nics = []
    targsships = []
    for nic in glob.glob("/sys/class/net/*/ifindex"):
        idx = int(open(nic, "r").read())
        nicname = nic.split('/')[-2]
        ncfg = netutil.get_nic_config(cfg, args.node, ifidx=idx)
        if ncfg['ipv4_address']:
            targsships.append(ncfg['ipv4_address'])
        if ncfg['ipv4_address'] or ncfg['ipv4_method'] == 'dhcp':
            bootablev4nics.append(nicname)
        if ncfg['ipv6_address']:
            targsships.append(ncfg['ipv6_address'])
            bootablev6nics.append(nicname)
    if bootablev4nics:
        print("{} appears to have network configuration suitable for IPv4 deployment via: {}".format(args.node, ",".join(bootablev4nics)))

@@ -288,7 +323,7 @@ if __name__ == '__main__':
        print('{} appears to have networking configuration suitable for IPv6 deployment via: {}'.format(args.node, ",".join(bootablev6nics)))
    else:
        emprint(f"{args.node} may not have any viable IP network configuration (check name resolution (DNS or hosts file) "
                "and/or net.*ipv4_address, and verify that the deployment serer addresses and subnet mask/prefix length are accurate)")
                "and/or net.*ipv4_address, and verify that the deployment server addresses and subnet mask/prefix length are accurate)")
    if not uuidok and not macok:
        allok = False
        emprint(f'{args.node} does not have a uuid or mac address defined in id.uuid or net.*hwaddr, deployment will not work (Example resolution: nodeinventory {args.node} -s)')

@@ -311,6 +346,34 @@ if __name__ == '__main__':
        emprint('Name resolution failed for node, it is normally a good idea for the node name to resolve to an IP')
    if result:
        print("OK")
    if args.automation:
        print(f'Checking confluent automation access to {args.node}...')
        child = os.fork()
        if child > 0:
            pid, extcode = os.waitpid(child, 0)
        else:
            sshutil.ready_keys = {}
            sshutil.agent_pid = None
            cuser = pwd.getpwnam('confluent')
            os.setgid(cuser.pw_gid)
            os.setuid(cuser.pw_uid)
            sshutil.prep_ssh_key('/etc/confluent/ssh/automation')
            for targ in targsships:
                srun = subprocess.run(
                    ['ssh', '-Tn', '-o', 'BatchMode=yes', '-l', 'root',
                     '-o', 'StrictHostKeyChecking=yes', targ, 'true'],
                    stdin=subprocess.DEVNULL, stderr=subprocess.PIPE)
                if srun.returncode == 0:
                    print(f'Confluent automation access to {targ} seems OK')
                else:
                    if b'Host key verification failed' in srun.stderr:
                        emprint(f'Confluent ssh unable to verify host key for {targ}, check /etc/ssh/ssh_known_hosts. (Example resolution: osdeploy initialize -k)')
                    elif b'ermission denied' in srun.stderr:
                        emprint(f'Confluent user unable to ssh in to {targ}, check /root/.ssh/authorized_keys on the target system versus /etc/confluent/ssh/automation.pub (Example resolution: osdeploy initialize -a)')
                    else:
                        emprint('Unknown error attempting confluent automation ssh:')
                        sys.stderr.buffer.write(srun.stderr)
        os.kill(int(sshutil.agent_pid), signal.SIGTERM)
else:
    print("Skipping node checks, no node specified (Example: confluent_selfcheck -n n1)")
# possible checks:
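As an aside, the automation check above relies on a fork/setuid pattern so the ssh probe runs strictly as the confluent service account. A minimal standalone sketch of that pattern, assuming a 'confluent' user exists and a reachable node named n1 (this is illustrative, not the selfcheck's exact code):

import os
import pwd
import subprocess

child = os.fork()
if child > 0:
    os.waitpid(child, 0)  # parent only waits for the probe to finish
else:
    cuser = pwd.getpwnam('confluent')
    os.setgid(cuser.pw_gid)   # drop group before user
    os.setuid(cuser.pw_uid)
    subprocess.run(['ssh', '-Tn', '-o', 'BatchMode=yes', 'n1', 'true'])
    os._exit(0)               # never fall through into parent logic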
@@ -1,7 +1,7 @@
#!/usr/bin/python2
#!/usr/bin/python3
# vim: tabstop=4 shiftwidth=4 softtabstop=4

# Copyright 2017 Lenovo
# Copyright 2017,2024 Lenovo
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.

@@ -30,7 +30,7 @@ import confluent.config.conf as conf
import confluent.main as main

argparser = optparse.OptionParser(
    usage="Usage: %prog [options] [dump|restore] [path]")
    usage="Usage: %prog [options] [dump|restore|merge] [path]")
argparser.add_option('-p', '--password',
                     help='Password to use to protect/unlock a protected dump')
argparser.add_option('-i', '--interactivepassword', help='Prompt for password',

@@ -51,13 +51,13 @@ argparser.add_option('-s', '--skipkeys', action='store_true',
                     'data is needed. keys do not change and as such '
                     'they do not require incremental backup')
(options, args) = argparser.parse_args()
if len(args) != 2 or args[0] not in ('dump', 'restore'):
if len(args) != 2 or args[0] not in ('dump', 'restore', 'merge'):
    argparser.print_help()
    sys.exit(1)
dumpdir = args[1]


if args[0] == 'restore':
if args[0] in ('restore', 'merge'):
    pid = main.is_running()
    if pid is not None:
        print("Confluent is running, must shut down to restore db")

@@ -69,9 +69,22 @@ if args[0] == 'restore':
    if options.interactivepassword:
        password = getpass.getpass('Enter password to restore backup: ')
    try:
        cfm.init(True)
        cfm.statelessmode = True
        cfm.restore_db_from_directory(dumpdir, password)
        stateless = args[0] == 'restore'
        cfm.init(stateless)
        cfm.statelessmode = stateless
        skipped = {'nodes': [], 'nodegroups': []}
        cfm.restore_db_from_directory(
            dumpdir, password,
            merge="skip" if args[0] == 'merge' else False, skipped=skipped)
        if skipped['nodes']:
            skippedn = ','.join(skipped['nodes'])
            print('The following nodes were skipped during merge: '
                  '{}'.format(skippedn))
        if skipped['nodegroups']:
            skippedn = ','.join(skipped['nodegroups'])
            print('The following node groups were skipped during merge: '
                  '{}'.format(skippedn))

        cfm.statelessmode = False
        cfm.ConfigManager.wait_for_sync(True)
        if owner != 0:
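The merge subcommand above (e.g. confdbutil merge /var/backups/confluent, paths being whatever the dump was written to) only imports nodes and node groups that do not already exist locally. A minimal sketch of that skip-on-conflict rule, using plain dicts rather than confluent's actual stores:

# Sketch of merge="skip" semantics: existing entries win, new entries
# from the dump are imported, and any skips are reported to the caller.
def merge_skip(existing, incoming, skipped):
    for name in list(incoming):
        if name in existing:
            skipped.append(name)   # already present locally, leave untouched
            del incoming[name]
    existing.update(incoming)

current = {'n1': {'groups': ['compute']}}
fromdump = {'n1': {'groups': ['storage']}, 'n2': {'groups': ['compute']}}
skipped = []
merge_skip(current, fromdump, skipped)
assert current['n1'] == {'groups': ['compute']} and 'n2' in current
print('skipped during merge:', ','.join(skipped))  # -> n1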
@@ -1,4 +1,4 @@
#!/usr/bin/python2
#!/usr/bin/python3

__author__ = 'jjohnson2,bfinley'

@@ -49,8 +49,11 @@ def main(args):
    wiz.add_argument('-p', help='Copy in TFTP contents required for PXE support', action='store_true')
    wiz.add_argument('-i', help='Interactively prompt for behaviors', action='store_true')
    wiz.add_argument('-l', help='Set up local management node to allow login from managed nodes', action='store_true')
    osip = sp.add_parser('importcheck', help='Check import of an OS image from an ISO image')
    osip.add_argument('imagefile', help='File to use for source of importing')
    osip = sp.add_parser('import', help='Import an OS image from an ISO image')
    osip.add_argument('imagefile', help='File to use for source of importing')
    osip.add_argument('-n', help='Specify a custom distribution name')
    upb = sp.add_parser(
        'updateboot',
        help='Push profile.yaml of the named profile data into boot assets as appropriate')

@@ -63,7 +66,9 @@ def main(args):
    if cmdset.command == 'list':
        return oslist()
    if cmdset.command == 'import':
        return osimport(cmdset.imagefile)
        return osimport(cmdset.imagefile, custname=cmdset.n)
    if cmdset.command == 'importcheck':
        return osimport(cmdset.imagefile, checkonly=True)
    if cmdset.command == 'initialize':
        return initialize(cmdset)
    if cmdset.command == 'updateboot':

@@ -320,14 +325,14 @@ def initialize(cmdset):
    try:
        sshutil.initialize_ca()
    except sshutil.AlreadyExists:
        emprint('Skipping generation of SSH CA, already present and would likely be more problematic to regenerate than to reuse (if absolutely sure you want to discard old CA, then delete /etc/confluent/ssh/ca*)')
        emprint('Skipping generation of SSH CA, already present and would likely be more problematic to regenerate than to reuse (if absolutely sure you want to discard old CA, then delete /etc/confluent/ssh/ca* and restart confluent)')
    if cmdset.a:
        didsomething = True
        init_confluent_myname()
        try:
            sshutil.initialize_root_key(True, True)
        except sshutil.AlreadyExists:
            emprint('Skipping generation of new automation key, already present and regeneration usually causes more problems. (If absolutely certain, delete /etc/confluent/ssh/automation*)')
            emprint('Skipping generation of new automation key, already present and regeneration usually causes more problems. (If absolutely certain, delete /etc/confluent/ssh/automation* and restart confluent)')
    if cmdset.p:
        install_tftp_content()
    if cmdset.l:

@@ -496,7 +501,7 @@ def oslist():
    print("")


def osimport(imagefile):
def osimport(imagefile, checkonly=False, custname=None):
    c = client.Command()
    imagefile = os.path.abspath(imagefile)
    if c.unixdomain:

@@ -507,11 +512,33 @@ def osimport(imagefile):
        pass
    importing = False
    shortname = None
    for rsp in c.create('/deployment/importing/', {'filename': imagefile}):
    apipath = '/deployment/importing/'
    if checkonly:
        apipath = '/deployment/fingerprint/'
    apiargs = {'filename': imagefile}
    if custname:
        apiargs['custname'] = custname
    for rsp in c.create(apipath, apiargs):
        if 'target' in rsp:
            importing = True
            shortname = rsp['name']
            print('Importing from {0} to {1}'.format(imagefile, rsp['target']))
        elif 'targetpath' in rsp:
            tpath = rsp.get('targetpath', None)
            tname = rsp.get('name', None)
            oscat = rsp.get('oscategory', None)
            if tpath:
                print('Detected target directory: ' + tpath)
            if tname:
                print('Detected distribution name: ' + tname)
            if oscat:
                print('Detected OS category: ' + oscat)
            for err in rsp.get('errors', []):
                sys.stderr.write('Error: ' + err + '\n')
        elif 'error' in rsp:
            sys.stderr.write(rsp['error'] + '\n')
            sys.exit(rsp.get('errorcode', 1))
        else:
            print(repr(rsp))
    try:
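For reference, osdeploy importcheck <iso> drives the new /deployment/fingerprint/ endpoint through the same client call used by osimport above. A hedged sketch of doing that directly (the ISO path is a placeholder, and the import path mirrors what osdeploy itself uses):

# Sketch: ask confluent to fingerprint an ISO without importing it.
import os
import confluent.client as client

c = client.Command()
iso = os.path.abspath('example.iso')  # placeholder path
for rsp in c.create('/deployment/fingerprint/', {'filename': iso}):
    if 'targetpath' in rsp:
        print('Would import to:', rsp.get('targetpath'))
        print('Detected name:', rsp.get('name'))
        print('OS category:', rsp.get('oscategory'))
        for err in rsp.get('errors', []):
            print('Error:', err)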
@@ -17,7 +17,9 @@ cd /tmp/confluent/$PKGNAME
if [ -x ./makeman ]; then
    ./makeman
fi
./makesetup
sed -e 's/~/./' ./makesetup > ./makesetup.deb
chmod +x ./makesetup.deb
./makesetup.deb
VERSION=`cat VERSION`
cat > setup.cfg << EOF
[install]

@@ -35,8 +37,10 @@ cd deb_dist/!(*.orig)/
if [ "$OPKGNAME" = "confluent-server" ]; then
    if grep wheezy /etc/os-release; then
        sed -i 's/^\(Depends:.*\)/\1, python-confluent-client, python-lxml, python-eficompressor, python-pycryptodomex, python-dateutil, python-pyopenssl, python-msgpack/' debian/control
    elif grep jammy /etc/os-release; then
        sed -i 's/^\(Depends:.*\)/\1, confluent-client, python3-lxml, python3-eficompressor, python3-pycryptodome, python3-websocket, python3-msgpack, python3-eventlet, python3-pyparsing, python3-pyghmi(>=1.5.71), python3-paramiko, python3-pysnmp4, python3-libarchive-c, confluent-vtbufferd, python3-netifaces, python3-yaml, python3-dateutil/' debian/control
    else
        sed -i 's/^\(Depends:.*\)/\1, confluent-client, python3-lxml, python3-eficompressor, python3-pycryptodome, python3-websocket, python3-msgpack, python3-eventlet, python3-pyparsing, python3-pyghmi, python3-paramiko, python3-pysnmp4, python3-libarchive-c, confluent-vtbufferd/' debian/control
        sed -i 's/^\(Depends:.*\)/\1, confluent-client, python3-lxml, python3-eficompressor, python3-pycryptodome, python3-websocket, python3-msgpack, python3-eventlet, python3-pyparsing, python3-pyghmi(>=1.5.71), python3-paramiko, python3-pysnmp4, python3-libarchive-c, confluent-vtbufferd, python3-netifaces, python3-yaml, python3-dateutil, python3-pyasyncore/' debian/control
    fi
    if grep wheezy /etc/os-release; then
        echo 'confluent_client python-confluent-client' >> debian/pydist-overrides

@@ -72,7 +76,7 @@ else
    rm -rf $PKGNAME.egg-info dist setup.py
    rm -rf $(find deb_dist -mindepth 1 -maxdepth 1 -type d)
    if [ ! -z "$1" ]; then
        mv deb_dist/* $1/
        mv deb_dist/*.deb $1/
    fi
fi
exit 0
@@ -58,7 +58,7 @@ _allowedbyrole = {
        '/nodes/',
        '/node*/media/uploads/',
        '/node*/inventory/firmware/updates/*',
        '/node*/suppport/servicedata*',
        '/node*/support/servicedata*',
        '/node*/attributes/expression',
        '/nodes/*/console/session*',
        '/nodes/*/shell/sessions*',
@@ -76,7 +76,7 @@ def get_certificate_paths():
            continue
        kploc = check_apache_config(os.path.join(currpath,
                                                 fname))
        if keypath and kploc[0]:
        if keypath and kploc[0] and keypath != kploc[0]:
            return None, None  # Ambiguous...
        if kploc[0]:
            keypath, certpath = kploc

@@ -206,7 +206,7 @@ def create_simple_ca(keyout, certout):
    finally:
        os.remove(tmpconfig)

def create_certificate(keyout=None, certout=None):
def create_certificate(keyout=None, certout=None, csrout=None):
    if not keyout:
        keyout, certout = get_certificate_paths()
    if not keyout:

@@ -214,9 +214,10 @@ def create_certificate(keyout=None, certout=None):
    assure_tls_ca()
    shortname = socket.gethostname().split('.')[0]
    longname = shortname  # socket.getfqdn()
    subprocess.check_call(
        ['openssl', 'ecparam', '-name', 'secp384r1', '-genkey', '-out',
         keyout])
    if not csrout:
        subprocess.check_call(
            ['openssl', 'ecparam', '-name', 'secp384r1', '-genkey', '-out',
             keyout])
    san = ['IP:{0}'.format(x) for x in get_ip_addresses()]
    # It is incorrect to put IP addresses as DNS type. However
    # there exists non-compliant clients that fail with them as IP

@@ -229,21 +230,34 @@ def create_certificate(keyout=None, certout=None):
    os.close(tmphdl)
    tmphdl, extconfig = tempfile.mkstemp()
    os.close(tmphdl)
    tmphdl, csrout = tempfile.mkstemp()
    os.close(tmphdl)
    needcsr = False
    if csrout is None:
        needcsr = True
        tmphdl, csrout = tempfile.mkstemp()
        os.close(tmphdl)
    shutil.copy2(sslcfg, tmpconfig)
    serialnum = '0x' + ''.join(['{:02x}'.format(x) for x in bytearray(os.urandom(20))])
    try:
        with open(tmpconfig, 'a') as cfgfile:
            cfgfile.write('\n[SAN]\nsubjectAltName={0}'.format(san))
        with open(extconfig, 'a') as cfgfile:
            cfgfile.write('\nbasicConstraints=CA:false\nsubjectAltName={0}'.format(san))
        subprocess.check_call([
            'openssl', 'req', '-new', '-key', keyout, '-out', csrout, '-subj',
            '/CN={0}'.format(longname),
            '-extensions', 'SAN', '-config', tmpconfig
        ])
        if needcsr:
            with open(tmpconfig, 'a') as cfgfile:
                cfgfile.write('\n[SAN]\nsubjectAltName={0}'.format(san))
            with open(extconfig, 'a') as cfgfile:
                cfgfile.write('\nbasicConstraints=CA:false\nsubjectAltName={0}'.format(san))
            subprocess.check_call([
                'openssl', 'req', '-new', '-key', keyout, '-out', csrout, '-subj',
                '/CN={0}'.format(longname),
                '-extensions', 'SAN', '-config', tmpconfig
            ])
        else:
            # when used manually, allow the csr SAN to stand
            # may add explicit subj/SAN argument, in which case we would skip copy
            with open(tmpconfig, 'a') as cfgfile:
                cfgfile.write('\ncopy_extensions=copy\n')
            with open(extconfig, 'a') as cfgfile:
                cfgfile.write('\nbasicConstraints=CA:false\n')
        if os.path.exists('/etc/confluent/tls/cakey.pem'):
            # simple style CA in effect, make a random serial number and
            # hope for the best, and accept inability to backdate the cert
            serialnum = '0x' + ''.join(['{:02x}'.format(x) for x in bytearray(os.urandom(20))])
            subprocess.check_call([
                'openssl', 'x509', '-req', '-in', csrout,
                '-CA', '/etc/confluent/tls/cacert.pem',

@@ -252,20 +266,40 @@ def create_certificate(keyout=None, certout=None):
                '-extfile', extconfig
            ])
        else:
            # we moved to a 'proper' CA, mainly for access to backdating
            # start of certs for finicky system clocks
            # this also provides a harder guarantee of serial uniqueness, but
            # not of practical consequence (160 bit random value is as good as
            # guaranteed unique)
            # downside is certificate generation is serialized
            cacfgfile = '/etc/confluent/tls/ca/openssl.cfg'
            if needcsr:
                tmphdl, tmpcafile = tempfile.mkstemp()
                shutil.copy2(cacfgfile, tmpcafile)
                os.close(tmphdl)
                cacfgfile = tmpcafile
            # with realcalock:  # if we put it in server, we must lock it
            subprocess.check_call([
                'openssl', 'ca', '-config', '/etc/confluent/tls/ca/openssl.cfg',
                'openssl', 'ca', '-config', cacfgfile,
                '-in', csrout, '-out', certout, '-batch', '-notext',
                '-startdate', '19700101010101Z', '-enddate', '21000101010101Z',
                '-extfile', extconfig
            ])
    finally:
        os.remove(tmpconfig)
        os.remove(csrout)
        os.remove(extconfig)
        if needcsr:
            os.remove(csrout)
        print(extconfig)  # os.remove(extconfig)


if __name__ == '__main__':
    import sys
    outdir = os.getcwd()
    keyout = os.path.join(outdir, 'key.pem')
    certout = os.path.join(outdir, 'cert.pem')
    create_certificate(keyout, certout)
    certout = os.path.join(outdir, sys.argv[2] + 'cert.pem')
    csrout = None
    try:
        csrout = sys.argv[1]
    except IndexError:
        csrout = None
    create_certificate(keyout, certout, csrout)
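With the csrout parameter above, a certificate can be minted from an externally generated CSR instead of a freshly created key. A hedged sketch of that call, assuming it runs in the module's own context where create_certificate is defined (file names are placeholders):

# Sketch: sign a caller-provided CSR with the confluent CA; the key
# generation step is skipped because a CSR already carries the public key.
import os

keyout = os.path.join(os.getcwd(), 'key.pem')    # not regenerated when a CSR is supplied
certout = os.path.join(os.getcwd(), 'cert.pem')
create_certificate(keyout, certout, csrout='request.csr')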
@@ -469,9 +469,13 @@ node = {
    'net.interface_names': {
        'description': 'Interface name or comma delimited list of names to match for this interface. It is generally recommended '
                       'to leave this blank unless needing to set up interfaces that are not on a common subnet with a confluent server, '
                       'as confluent servers provide autodetection for matching the correct network definition to an interface.'
                       'as confluent servers provide autodetection for matching the correct network definition to an interface. '
                       'This would be the default name per the deployed OS and can be a comma delimited list to denote members of '
                       'a team'
                       'a team or a single interface for VLAN/PKEY connections.'
    },
    'net.vlan_id': {
        'description': 'Ethernet VLAN or InfiniBand PKEY to use for this connection. '
                       'Specify the parent device using net.interface_names.'
    },
    'net.ipv4_address': {
        'description': 'When configuring static, use this address. If '
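To illustrate the new attributes, here is a hedged sketch of describing a tagged VLAN through the ConfigManager API that appears later in this commit (the node, connection name, and values are invented for illustration):

# Sketch: name the parent device and the VLAN tag for one connection.
import confluent.config.configmanager as configmanager

cfg = configmanager.ConfigManager(None)
cfg.set_node_attributes({
    'n1': {
        'net.mgmt.interface_names': 'eth0',  # parent device per the deployed OS
        'net.mgmt.vlan_id': '100',           # Ethernet VLAN (or InfiniBand PKEY)
    }
})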
@@ -252,10 +252,12 @@ def _rpc_master_rename_nodegroups(tenant, renamemap):


def _rpc_master_clear_node_attributes(tenant, nodes, attributes):
    ConfigManager(tenant).clear_node_attributes(nodes, attributes)
    warnings = []
    ConfigManager(tenant).clear_node_attributes(nodes, attributes, warnings)
    return warnings


def _rpc_clear_node_attributes(tenant, nodes, attributes):
def _rpc_clear_node_attributes(tenant, nodes, attributes):  # master has to do the warnings
    ConfigManager(tenant)._true_clear_node_attributes(nodes, attributes)


@@ -348,9 +350,9 @@ def exec_on_leader(function, *args):
    rpclen = len(rpcpayload)
    cfgleader.sendall(struct.pack('!Q', rpclen))
    cfgleader.sendall(rpcpayload)
    _pendingchangesets[xid].wait()
    retv = _pendingchangesets[xid].wait()
    del _pendingchangesets[xid]
    return
    return retv


def exec_on_followers(fnname, *args):

@@ -714,8 +716,9 @@ def relay_slaved_requests(name, listener):
            exc = None
            if not (rpc['function'].startswith('_rpc_') or rpc['function'].endswith('_collective_member')):
                raise Exception('Unsupported function {0} called'.format(rpc['function']))
            retv = None
            try:
                globals()[rpc['function']](*rpc['args'])
                retv = globals()[rpc['function']](*rpc['args'])
            except ValueError as ve:
                exc = ['ValueError', str(ve)]
            except Exception as e:

@@ -723,7 +726,7 @@ def relay_slaved_requests(name, listener):
                exc = ['Exception', str(e)]
            if 'xid' in rpc:
                res = _push_rpc(listener, msgpack.packb({'xid': rpc['xid'],
                                                         'exc': exc}, use_bin_type=False))
                                                         'exc': exc, 'ret': retv}, use_bin_type=False))
                if not res:
                    break
            try:

@@ -929,7 +932,7 @@ def follow_channel(channel):
                    exc = Exception(excstr)
                    _pendingchangesets[rpc['xid']].send_exception(exc)
                else:
                    _pendingchangesets[rpc['xid']].send()
                    _pendingchangesets[rpc['xid']].send(rpc.get('ret', None))
            if 'quorum' in rpc:
                _hasquorum = rpc['quorum']
                res = _push_rpc(channel, b'')  # use null as ACK

@@ -1089,6 +1092,11 @@ class _ExpressionFormat(string.Formatter):
        self._nodename = nodename
        self._numbers = None

    def _vformat(self, format_string, args, kwargs, used_args, recursion_depth,
                 auto_arg_index=False):
        return super()._vformat(format_string, args, kwargs, used_args,
                                recursion_depth, auto_arg_index)

    def get_field(self, field_name, args, kwargs):
        return field_name, field_name

@@ -1895,7 +1903,7 @@ class ConfigManager(object):
    def add_group_attributes(self, attribmap):
        self.set_group_attributes(attribmap, autocreate=True)

    def set_group_attributes(self, attribmap, autocreate=False):
    def set_group_attributes(self, attribmap, autocreate=False, merge="replace", keydata=None, skipped=None):
        for group in attribmap:
            curr = attribmap[group]
            for attrib in curr:

@@ -1916,11 +1924,11 @@ class ConfigManager(object):
        if cfgstreams:
            exec_on_followers('_rpc_set_group_attributes', self.tenant,
                              attribmap, autocreate)
        self._true_set_group_attributes(attribmap, autocreate)
        self._true_set_group_attributes(attribmap, autocreate, merge=merge, keydata=keydata, skipped=skipped)

    def _true_set_group_attributes(self, attribmap, autocreate=False):
    def _true_set_group_attributes(self, attribmap, autocreate=False, merge="replace", keydata=None, skipped=None):
        changeset = {}
        for group in attribmap:
        for group in list(attribmap):
            if group == '':
                raise ValueError('"{0}" is not a valid group name'.format(
                    group))

@@ -1933,6 +1941,11 @@ class ConfigManager(object):
                    group))
            if not autocreate and group not in self._cfgstore['nodegroups']:
                raise ValueError("{0} group does not exist".format(group))
            if merge == 'skip' and group in self._cfgstore['nodegroups']:
                if skipped is not None:
                    skipped.append(group)
                del attribmap[group]
                continue
            for attr in list(attribmap[group]):
                # first do a pass to normalize out any aliased attribute names
                if attr in _attraliases:

@@ -2007,6 +2020,9 @@ class ConfigManager(object):
                    newdict = {'value': attribmap[group][attr]}
                else:
                    newdict = attribmap[group][attr]
                if keydata and attr.startswith('secret.') and 'cryptvalue' in newdict:
                    newdict['value'] = decrypt_value(newdict['cryptvalue'], keydata['cryptkey'], keydata['integritykey'])
                    del newdict['cryptvalue']
                if 'value' in newdict and attr.startswith("secret."):
                    newdict['cryptvalue'] = crypt_value(newdict['value'])
                    del newdict['value']

@@ -2197,16 +2213,19 @@ class ConfigManager(object):
        self._notif_attribwatchers(changeset)
        self._bg_sync_to_file()

    def clear_node_attributes(self, nodes, attributes):
    def clear_node_attributes(self, nodes, attributes, warnings=None):
        if cfgleader:
            return exec_on_leader('_rpc_master_clear_node_attributes',
            mywarnings = exec_on_leader('_rpc_master_clear_node_attributes',
                                        self.tenant, nodes, attributes)
            if mywarnings and warnings is not None:
                warnings.extend(mywarnings)
            return
        if cfgstreams:
            exec_on_followers('_rpc_clear_node_attributes', self.tenant,
                              nodes, attributes)
        self._true_clear_node_attributes(nodes, attributes)
        self._true_clear_node_attributes(nodes, attributes, warnings)

    def _true_clear_node_attributes(self, nodes, attributes):
    def _true_clear_node_attributes(self, nodes, attributes, warnings=None):
        # accumulate all changes into a changeset and push in one go
        changeset = {}
        realattributes = []

@@ -2229,8 +2248,17 @@ class ConfigManager(object):
                    # delete it and check for inheritance to backfill data
                    del nodek[attrib]
                    self._do_inheritance(nodek, attrib, node, changeset)
                    if warnings is not None:
                        if attrib in nodek:
                            warnings.append('The attribute "{}" was defined specifically for the node and clearing now has a value inherited from the group "{}"'.format(attrib, nodek[attrib]['inheritedfrom']))
                    _addchange(changeset, node, attrib)
                    _mark_dirtykey('nodes', node, self.tenant)
                elif attrib in nodek:
                    if warnings is not None:
                        warnings.append('The attribute "{0}" is inherited from group "{1}", leaving the inherited value alone (use "{0}=" with no value to explicitly blank the value if desired)'.format(attrib, nodek[attrib]['inheritedfrom']))
                else:
                    if warnings is not None:
                        warnings.append('Attribute "{}" is either already cleared, or does not match a defined attribute (if referencing an attribute group, try a wildcard)'.format(attrib))
                if ('_expressionkeys' in nodek and
                        attrib in nodek['_expressionkeys']):
                    recalcexpressions = True

@@ -2329,7 +2357,7 @@ class ConfigManager(object):



    def set_node_attributes(self, attribmap, autocreate=False):
    def set_node_attributes(self, attribmap, autocreate=False, merge="replace", keydata=None, skipped=None):
        for node in attribmap:
            curr = attribmap[node]
            for attrib in curr:

@@ -2350,9 +2378,9 @@ class ConfigManager(object):
        if cfgstreams:
            exec_on_followers('_rpc_set_node_attributes',
                              self.tenant, attribmap, autocreate)
        self._true_set_node_attributes(attribmap, autocreate)
        self._true_set_node_attributes(attribmap, autocreate, merge, keydata, skipped)

    def _true_set_node_attributes(self, attribmap, autocreate):
    def _true_set_node_attributes(self, attribmap, autocreate, merge="replace", keydata=None, skipped=None):
        # TODO(jbjohnso): multi mgr support, here if we have peers,
        # pickle the arguments and fire them off in eventlet
        # flows to peers, all should have the same result

@@ -2360,7 +2388,7 @@ class ConfigManager(object):
        changeset = {}
        # first do a sanity check of the input upfront
        # this mitigates risk of arguments being partially applied
        for node in attribmap:
        for node in list(attribmap):
            node = confluent.util.stringify(node)
            if node == '':
                raise ValueError('"{0}" is not a valid node name'.format(node))

@@ -2373,6 +2401,11 @@ class ConfigManager(object):
                    '"{0}" is not a valid node name'.format(node))
            if autocreate is False and node not in self._cfgstore['nodes']:
                raise ValueError("node {0} does not exist".format(node))
            if merge == "skip" and node in self._cfgstore['nodes']:
                del attribmap[node]
                if skipped is not None:
                    skipped.append(node)
                continue
            if 'groups' not in attribmap[node] and node not in self._cfgstore['nodes']:
                attribmap[node]['groups'] = []
            for attrname in list(attribmap[node]):

@@ -2443,6 +2476,9 @@ class ConfigManager(object):
                # add check here, skip None attributes
                if newdict is None:
                    continue
                if keydata and attrname.startswith('secret.') and 'cryptvalue' in newdict:
                    newdict['value'] = decrypt_value(newdict['cryptvalue'], keydata['cryptkey'], keydata['integritykey'])
                    del newdict['cryptvalue']
                if 'value' in newdict and attrname.startswith("secret."):
                    newdict['cryptvalue'] = crypt_value(newdict['value'])
                    del newdict['value']

@@ -2483,19 +2519,21 @@ class ConfigManager(object):
        self._bg_sync_to_file()
        # TODO: wait for synchronization to succeed/fail??)

    def _load_from_json(self, jsondata, sync=True):
    def _load_from_json(self, jsondata, sync=True, merge=False, keydata=None, skipped=None):
        self.inrestore = True
        try:
            self._load_from_json_backend(jsondata, sync=True)
            self._load_from_json_backend(jsondata, sync=True, merge=merge, keydata=keydata, skipped=skipped)
        finally:
            self.inrestore = False

    def _load_from_json_backend(self, jsondata, sync=True):
    def _load_from_json_backend(self, jsondata, sync=True, merge=False, keydata=None, skipped=None):
        """Load fresh configuration data from jsondata

        :param jsondata: String of jsondata
        :return:
        """
        if not skipped:
            skipped = {'nodes': None, 'nodegroups': None}
        dumpdata = json.loads(jsondata)
        tmpconfig = {}
        for confarea in _config_areas:

@@ -2543,20 +2581,27 @@ class ConfigManager(object):
                    pass
        # Now we have to iterate through each fixed up element, using the
        # set attribute to flesh out inheritance and expressions
        _cfgstore['main']['idmap'] = {}
        if (not merge) or _cfgstore.get('main', {}).get('idmap', None) is None:
            _cfgstore['main']['idmap'] = {}
        attribmerge = merge if merge else "replace"
        for confarea in _config_areas:
            self._cfgstore[confarea] = {}
            if not merge or confarea not in self._cfgstore:
                self._cfgstore[confarea] = {}
            if confarea not in tmpconfig:
                continue
            if confarea == 'nodes':
                self.set_node_attributes(tmpconfig[confarea], True)
                self.set_node_attributes(tmpconfig[confarea], True, merge=attribmerge, keydata=keydata, skipped=skipped['nodes'])
            elif confarea == 'nodegroups':
                self.set_group_attributes(tmpconfig[confarea], True)
                self.set_group_attributes(tmpconfig[confarea], True, merge=attribmerge, keydata=keydata, skipped=skipped['nodegroups'])
            elif confarea == 'usergroups':
                if merge:
                    continue
                for usergroup in tmpconfig[confarea]:
                    role = tmpconfig[confarea][usergroup].get('role', 'Administrator')
                    self.create_usergroup(usergroup, role=role)
            elif confarea == 'users':
                if merge:
                    continue
                for user in tmpconfig[confarea]:
                    ucfg = tmpconfig[confarea][user]
                    uid = ucfg.get('id', None)

@@ -2627,7 +2672,7 @@ class ConfigManager(object):
                    dumpdata[confarea][element][attribute]['cryptvalue'] = '!'.join(cryptval)
                elif isinstance(dumpdata[confarea][element][attribute], set):
                    dumpdata[confarea][element][attribute] = \
                        list(dumpdata[confarea][element][attribute])
                        confluent.util.natural_sort(list(dumpdata[confarea][element][attribute]))
        return json.dumps(
            dumpdata, sort_keys=True, indent=4, separators=(',', ': '))

@@ -2856,7 +2901,7 @@ def _restore_keys(jsond, password, newpassword=None, sync=True):
            newpassword = keyfile.read()
    set_global('master_privacy_key', _format_key(cryptkey,
                                                 password=newpassword), sync)
        if integritykey:
    if integritykey:
        set_global('master_integrity_key', _format_key(integritykey,
                                                       password=newpassword), sync)
    _masterkey = cryptkey

@@ -2891,35 +2936,46 @@ def _dump_keys(password, dojson=True):
    return keydata


def restore_db_from_directory(location, password):
def restore_db_from_directory(location, password, merge=False, skipped=None):
    kdd = None
    try:
        with open(os.path.join(location, 'keys.json'), 'r') as cfgfile:
            keydata = cfgfile.read()
            json.loads(keydata)
            _restore_keys(keydata, password)
            kdd = json.loads(keydata)
            if merge:
                if 'cryptkey' in kdd:
                    kdd['cryptkey'] = _parse_key(kdd['cryptkey'], password)
                if 'integritykey' in kdd:
                    kdd['integritykey'] = _parse_key(kdd['integritykey'], password)
                else:
                    kdd['integritykey'] = None  # GCM
            else:
                kdd = None
                _restore_keys(keydata, password)
    except IOError as e:
        if e.errno == 2:
            raise Exception("Cannot restore without keys, this may be a "
                            "redacted dump")
    try:
        moreglobals = json.load(open(os.path.join(location, 'globals.json')))
        for globvar in moreglobals:
            set_global(globvar, moreglobals[globvar])
    except IOError as e:
        if e.errno != 2:
            raise
    try:
        collective = json.load(open(os.path.join(location, 'collective.json')))
        _cfgstore['collective'] = {}
        for coll in collective:
            add_collective_member(coll, collective[coll]['address'],
                                  collective[coll]['fingerprint'])
    except IOError as e:
        if e.errno != 2:
            raise
    if not merge:
        try:
            moreglobals = json.load(open(os.path.join(location, 'globals.json')))
            for globvar in moreglobals:
                set_global(globvar, moreglobals[globvar])
        except IOError as e:
            if e.errno != 2:
                raise
        try:
            collective = json.load(open(os.path.join(location, 'collective.json')))
            _cfgstore['collective'] = {}
            for coll in collective:
                add_collective_member(coll, collective[coll]['address'],
                                      collective[coll]['fingerprint'])
        except IOError as e:
            if e.errno != 2:
                raise
    with open(os.path.join(location, 'main.json'), 'r') as cfgfile:
        cfgdata = cfgfile.read()
    ConfigManager(tenant=None)._load_from_json(cfgdata)
    ConfigManager(tenant=None)._load_from_json(cfgdata, merge=merge, keydata=kdd, skipped=skipped)
    ConfigManager.wait_for_sync(True)
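The 'ret' relay added above is a classic xid-keyed request/response pairing: the caller parks on its transaction id until the reply thread wakes it with the remote return value. A self-contained sketch of the idea, independent of confluent's msgpack transport (names here are illustrative only):

# Sketch: pair an async RPC reply with its caller via an xid registry,
# as the 'ret' field relay above does for collective members.
import queue

_pending = {}

def call_remote(xid, send):
    _pending[xid] = queue.Queue()
    send({'xid': xid, 'function': '_rpc_master_clear_node_attributes'})
    retv = _pending[xid].get()   # woken by on_reply with the return value
    del _pending[xid]
    return retv

def on_reply(msg):
    # invoked by whatever thread reads replies off the wire
    _pending[msg['xid']].put(msg.get('ret', None))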
@@ -49,7 +49,6 @@ _handled_consoles = {}

_tracelog = None
_bufferdaemon = None
_bufferlock = None

try:
    range = xrange

@@ -62,39 +61,38 @@ def chunk_output(output, n):
        yield output[i:i + n]

def get_buffer_output(nodename):
    out = _bufferdaemon.stdin
    instream = _bufferdaemon.stdout
    out = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    out.setsockopt(socket.SOL_SOCKET, socket.SO_PASSCRED, 1)
    out.connect("\x00confluent-vtbuffer")
    if not isinstance(nodename, bytes):
        nodename = nodename.encode('utf8')
    outdata = bytearray()
    with _bufferlock:
        out.write(struct.pack('I', len(nodename)))
        out.write(nodename)
        out.flush()
        select.select((instream,), (), (), 30)
        while not outdata or outdata[-1]:
            try:
                chunk = os.read(instream.fileno(), 128)
            except IOError:
                chunk = None
            if chunk:
                outdata.extend(chunk)
            else:
                select.select((instream,), (), (), 0)
        return bytes(outdata[:-1])
    out.send(struct.pack('I', len(nodename)))
    out.send(nodename)
    select.select((out,), (), (), 30)
    while not outdata or outdata[-1]:
        try:
            chunk = os.read(out.fileno(), 128)
        except IOError:
            chunk = None
        if chunk:
            outdata.extend(chunk)
        else:
            select.select((out,), (), (), 0)
    return bytes(outdata[:-1])


def send_output(nodename, output):
    if not isinstance(nodename, bytes):
        nodename = nodename.encode('utf8')
    with _bufferlock:
        _bufferdaemon.stdin.write(struct.pack('I', len(nodename) | (1 << 29)))
        _bufferdaemon.stdin.write(nodename)
        _bufferdaemon.stdin.flush()
        for chunk in chunk_output(output, 8192):
            _bufferdaemon.stdin.write(struct.pack('I', len(chunk) | (2 << 29)))
            _bufferdaemon.stdin.write(chunk)
            _bufferdaemon.stdin.flush()
    out = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    out.setsockopt(socket.SOL_SOCKET, socket.SO_PASSCRED, 1)
    out.connect("\x00confluent-vtbuffer")
    out.send(struct.pack('I', len(nodename) | (1 << 29)))
    out.send(nodename)
    for chunk in chunk_output(output, 8192):
        out.send(struct.pack('I', len(chunk) | (2 << 29)))
        out.send(chunk)

def _utf8_normalize(data, decoder):
    # first we give the stateful decoder a crack at the byte stream,

@@ -603,15 +601,10 @@ def _start_tenant_sessions(cfm):
def initialize():
    global _tracelog
    global _bufferdaemon
    global _bufferlock
    _bufferlock = semaphore.Semaphore()
    _tracelog = log.Logger('trace')
    _bufferdaemon = subprocess.Popen(
        ['/opt/confluent/bin/vtbufferd'], bufsize=0, stdin=subprocess.PIPE,
        stdout=subprocess.PIPE)
    fl = fcntl.fcntl(_bufferdaemon.stdout.fileno(), fcntl.F_GETFL)
    fcntl.fcntl(_bufferdaemon.stdout.fileno(),
                fcntl.F_SETFL, fl | os.O_NONBLOCK)
        ['/opt/confluent/bin/vtbufferd', 'confluent-vtbuffer'], bufsize=0, stdin=subprocess.DEVNULL,
        stdout=subprocess.DEVNULL)

def start_console_sessions():
    configmodule.hook_new_configmanagers(_start_tenant_sessions)
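The rewritten functions speak a small framed protocol to vtbufferd over an abstract unix socket: a native 32-bit header whose low bits carry the payload length and whose top bits appear to carry an operation code ((1 << 29) to associate a node, (2 << 29) for data, no bits set for a read request). A hedged standalone reader, assuming vtbufferd is listening under the name used above:

# Sketch: fetch the buffered console output for one node from vtbufferd.
import socket
import struct

def read_node_buffer(nodename):
    if not isinstance(nodename, bytes):
        nodename = nodename.encode('utf8')
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect('\x00confluent-vtbuffer')  # abstract socket, no filesystem path
    sock.send(struct.pack('I', len(nodename)))  # no opcode bits = read request
    sock.send(nodename)
    outdata = bytearray()
    while not outdata or outdata[-1]:  # reply is terminated by a NUL byte
        chunk = sock.recv(128)
        if not chunk:
            break  # peer closed before the terminator
        outdata.extend(chunk)
    return bytes(outdata[:-1])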
@@ -44,6 +44,7 @@ import confluent.discovery.core as disco
import confluent.interface.console as console
import confluent.exceptions as exc
import confluent.messages as msg
import confluent.mountmanager as mountmanager
import confluent.networking.macmap as macmap
import confluent.noderange as noderange
import confluent.osimage as osimage

@@ -70,9 +71,11 @@ import eventlet.green.socket as socket
import struct
import sys
import uuid
import yaml


pluginmap = {}
dispatch_plugins = (b'ipmi', u'ipmi', b'redfish', u'redfish', b'tsmsol', u'tsmsol', b'geist', u'geist', b'deltapdu', u'deltapdu', b'eatonpdu', u'eatonpdu', b'affluent', u'affluent', b'cnos', u'cnos')
dispatch_plugins = (b'ipmi', u'ipmi', b'redfish', u'redfish', b'tsmsol', u'tsmsol', b'geist', u'geist', b'deltapdu', u'deltapdu', b'eatonpdu', u'eatonpdu', b'affluent', u'affluent', b'cnos', u'cnos', b'enos', u'enos')

PluginCollection = plugin.PluginCollection

@@ -160,16 +163,25 @@ def _merge_dict(original, custom):


rootcollections = ['deployment/', 'discovery/', 'events/', 'networking/',
                   'noderange/', 'nodes/', 'nodegroups/', 'usergroups/' ,
                   'noderange/', 'nodes/', 'nodegroups/', 'storage/', 'usergroups/' ,
                   'users/', 'uuid', 'version', 'staging/']


class PluginRoute(object):
    def __init__(self, routedict):
        self.routeinfo = routedict


def handle_storage(configmanager, inputdata, pathcomponents, operation):
    if len(pathcomponents) == 1:
        yield msg.ChildCollection('remote/')
        return
    if pathcomponents[1] == 'remote':
        for rsp in mountmanager.handle_request(configmanager, inputdata, pathcomponents[2:], operation):
            yield rsp

def handle_deployment(configmanager, inputdata, pathcomponents,
                      operation):
    if len(pathcomponents) == 1:

@@ -192,8 +204,19 @@ def handle_deployment(configmanager, inputdata, pathcomponents,
            for prof in osimage.list_profiles():
                yield msg.ChildCollection(prof + '/')
            return
        if len(pathcomponents) == 3:
            profname = pathcomponents[-1]
        if len(pathcomponents) >= 3:
            profname = pathcomponents[2]
            if len(pathcomponents) == 4:
                if operation == 'retrieve':
                    if len(pathcomponents) == 4 and pathcomponents[-1] == 'info':
                        with open('/var/lib/confluent/public/os/{}/profile.yaml'.format(profname)) as profyaml:
                            profinfo = yaml.safe_load(profyaml)
                        profinfo['name'] = profname
                        yield msg.KeyValueData(profinfo)
                    return
            elif len(pathcomponents) == 3:
                if operation == 'retrieve':
                    yield msg.ChildCollection('info')
            if operation == 'update':
                if 'updateboot' in inputdata:
                    osimage.update_boot(profname)

@@ -209,6 +232,17 @@ def handle_deployment(configmanager, inputdata, pathcomponents,
            for cust in customized:
                yield msg.KeyValueData({'customized': cust})
            return
    if pathcomponents[1] == 'fingerprint':
        if operation == 'create':
            importer = osimage.MediaImporter(inputdata['filename'], configmanager, checkonly=True)
            medinfo = {
                'targetpath': importer.targpath,
                'name': importer.osname,
                'oscategory': importer.oscategory,
                'errors': importer.errors,
            }
            yield msg.KeyValueData(medinfo)
            return
    if pathcomponents[1] == 'importing':
        if len(pathcomponents) == 2 or not pathcomponents[-1]:
            if operation == 'retrieve':

@@ -216,8 +250,12 @@ def handle_deployment(configmanager, inputdata, pathcomponents,
                    yield imp
                return
            elif operation == 'create':
                importer = osimage.MediaImporter(inputdata['filename'],
                                                 configmanager)
                if inputdata.get('custname', None):
                    importer = osimage.MediaImporter(inputdata['filename'],
                                                     configmanager, inputdata['custname'])
                else:
                    importer = osimage.MediaImporter(inputdata['filename'],
                                                     configmanager)
                yield msg.KeyValueData({'target': importer.targpath,
                                        'name': importer.importkey})
                return

@@ -323,6 +361,10 @@ def _init_core():
                'pluginattrs': ['hardwaremanagement.method'],
                'default': 'ipmi',
            }),
            'extra_advanced': PluginRoute({
                'pluginattrs': ['hardwaremanagement.method'],
                'default': 'ipmi',
            }),
        },
    },
    'storage': {

@@ -391,6 +433,7 @@ def _init_core():
                'pluginattrs': ['hardwaremanagement.method'],
                'default': 'ipmi',
            }),
            'ikvm': PluginRoute({'handler': 'ikvm'}),
        },
        'description': PluginRoute({
            'pluginattrs': ['hardwaremanagement.method'],

@@ -1327,6 +1370,9 @@ def handle_path(path, operation, configmanager, inputdata=None, autostrip=True):
    elif pathcomponents[0] == 'deployment':
        return handle_deployment(configmanager, inputdata, pathcomponents,
                                 operation)
    elif pathcomponents[0] == 'storage':
        return handle_storage(configmanager, inputdata, pathcomponents,
                              operation)
    elif pathcomponents[0] == 'nodegroups':
        return handle_nodegroup_request(configmanager, inputdata,
                                        pathcomponents,
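A hedged sketch of reading the new per-profile info endpoint from the client side (the profile name is a placeholder, and Command.read is assumed to stream responses the same way Command.create does above):

# Sketch: /deployment/profiles/<name>/info now returns the parsed profile.yaml
import confluent.client as client

c = client.Command()
for rsp in c.read('/deployment/profiles/rhel9-x86_64-default/info'):
    print(rsp.get('name'), rsp.get('oscategory'))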
@@ -127,14 +127,15 @@ class CredServer(object):
                if hmacval != hmac.new(hmackey, etok, hashlib.sha256).digest():
                    client.close()
                    return
                cfgupdate = {nodename: {'crypted.selfapikey': {'hashvalue': echotoken}, 'deployment.sealedapikey': '', 'deployment.apiarmed': ''}}
                if hmackey and apiarmed != 'continuous':
                    self.cfm.clear_node_attributes([nodename], ['secret.selfapiarmtoken'])
                if apiarmed == 'continuous':
                    del cfgupdate[nodename]['deployment.apiarmed']
                cfgupdate = {nodename: {'crypted.selfapikey': {'hashvalue': echotoken}}}
                self.cfm.set_node_attributes(cfgupdate)
                client.recv(2)  # drain end of message
                client.send(b'\x05\x00')  # report success
                if hmackey and apiarmed != 'continuous':
                    self.cfm.clear_node_attributes([nodename], ['secret.selfapiarmtoken'])
                if apiarmed != 'continuous':
                    tokclear = {nodename: {'deployment.sealedapikey': '', 'deployment.apiarmed': ''}}
                    self.cfm.set_node_attributes(tokclear)
        finally:
            client.close()
@@ -74,6 +74,8 @@ import confluent.discovery.handlers.tsm as tsm
import confluent.discovery.handlers.pxe as pxeh
import confluent.discovery.handlers.smm as smm
import confluent.discovery.handlers.xcc as xcc
import confluent.discovery.handlers.xcc3 as xcc3
import confluent.discovery.handlers.megarac as megarac
import confluent.exceptions as exc
import confluent.log as log
import confluent.messages as msg

@@ -113,6 +115,8 @@ nodehandlers = {
    'service:lenovo-smm': smm,
    'service:lenovo-smm2': smm,
    'lenovo-xcc': xcc,
    'lenovo-xcc3': xcc3,
    'megarac-bmc': megarac,
    'service:management-hardware.IBM:integrated-management-module2': imm,
    'pxe-client': pxeh,
    'onie-switch': None,

@@ -132,6 +136,8 @@ servicenames = {
    'service:lenovo-smm2': 'lenovo-smm2',
    'affluent-switch': 'affluent-switch',
    'lenovo-xcc': 'lenovo-xcc',
    'lenovo-xcc3': 'lenovo-xcc3',
    'megarac-bmc': 'megarac-bmc',
    #'openbmc': 'openbmc',
    'service:management-hardware.IBM:integrated-management-module2': 'lenovo-imm2',
    'service:io-device.Lenovo:management-module': 'lenovo-switch',

@@ -147,6 +153,8 @@ servicebyname = {
    'lenovo-smm2': 'service:lenovo-smm2',
    'affluent-switch': 'affluent-switch',
    'lenovo-xcc': 'lenovo-xcc',
    'lenovo-xcc3': 'lenovo-xcc3',
    'megarac-bmc': 'megarac-bmc',
    'lenovo-imm2': 'service:management-hardware.IBM:integrated-management-module2',
    'lenovo-switch': 'service:io-device.Lenovo:management-module',
    'thinkagile-storage': 'service:thinkagile-storagebmc',

@@ -453,7 +461,7 @@ def iterate_addrs(addrs, countonly=False):
        yield 1
        return
    yield addrs

def _parameterize_path(pathcomponents):
    listrequested = False
    childcoll = True

@@ -542,7 +550,7 @@ def handle_api_request(configmanager, inputdata, operation, pathcomponents):
        if len(pathcomponents) > 2:
            raise Exception('TODO')
        currsubs = get_subscriptions()
            return [msg.ChildCollection(x) for x in currsubs]
        return [msg.ChildCollection(x) for x in currsubs]
    elif operation == 'retrieve':
        return handle_read_api_request(pathcomponents)
    elif (operation in ('update', 'create') and

@@ -1703,3 +1711,4 @@ if __name__ == '__main__':
    start_detection()
    while True:
        eventlet.sleep(30)
51
confluent_server/confluent/discovery/handlers/megarac.py
Normal file
@@ -0,0 +1,51 @@
# Copyright 2024 Lenovo
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import confluent.discovery.handlers.redfishbmc as redfishbmc
import eventlet.support.greendns


getaddrinfo = eventlet.support.greendns.getaddrinfo


class NodeHandler(redfishbmc.NodeHandler):

    def get_firmware_default_account_info(self):
        return ('admin', 'admin')


def remote_nodecfg(nodename, cfm):
    cfg = cfm.get_node_attributes(
        nodename, 'hardwaremanagement.manager')
    ipaddr = cfg.get(nodename, {}).get('hardwaremanagement.manager', {}).get(
        'value', None)
    ipaddr = ipaddr.split('/', 1)[0]
    ipaddr = getaddrinfo(ipaddr, 0)[0][-1]
    if not ipaddr:
        raise Exception('Cannot remote configure a system without known '
                        'address')
    info = {'addresses': [ipaddr]}
    nh = NodeHandler(info, cfm)
    nh.config(nodename)


if __name__ == '__main__':
    import confluent.config.configmanager as cfm
    c = cfm.ConfigManager(None)
    import sys
    info = {'addresses': [[sys.argv[1]]]}
    print(repr(info))
    testr = NodeHandler(info, c)
    testr.config(sys.argv[2])
321
confluent_server/confluent/discovery/handlers/redfishbmc.py
Normal file
@@ -0,0 +1,321 @@
# Copyright 2024 Lenovo
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import confluent.discovery.handlers.generic as generic
import confluent.exceptions as exc
import confluent.netutil as netutil
import confluent.util as util
import eventlet
import eventlet.support.greendns
import json
try:
    from urllib import urlencode
except ImportError:
    from urllib.parse import urlencode

getaddrinfo = eventlet.support.greendns.getaddrinfo

webclient = eventlet.import_patched('pyghmi.util.webclient')


def get_host_interface_urls(wc, mginfo):
    returls = []
    hifurl = mginfo.get('HostInterfaces', {}).get('@odata.id', None)
    if not hifurl:
        return None
    hifinfo = wc.grab_json_response(hifurl)
    hifurls = hifinfo.get('Members', [])
    for hifurl in hifurls:
        hifurl = hifurl['@odata.id']
        hifinfo = wc.grab_json_response(hifurl)
        acturl = hifinfo.get('ManagerEthernetInterface', {}).get('@odata.id', None)
        if acturl:
            returls.append(acturl)
    return returls


class NodeHandler(generic.NodeHandler):
    devname = 'BMC'

    def __init__(self, info, configmanager):
        self.trieddefault = None
        self.targuser = None
        self.curruser = None
        self.currpass = None
        self.targpass = None
        self.nodename = None
        self.csrftok = None
        self.channel = None
        self.atdefault = True
        self._srvroot = None
        self._mgrinfo = None
        super(NodeHandler, self).__init__(info, configmanager)

    def srvroot(self, wc):
        if not self._srvroot:
            srvroot, status = wc.grab_json_response_with_status('/redfish/v1/')
            if status == 200:
                self._srvroot = srvroot
        return self._srvroot

    def mgrinfo(self, wc):
        if not self._mgrinfo:
            mgrs = self.srvroot(wc)['Managers']['@odata.id']
            rsp = wc.grab_json_response(mgrs)
            if len(rsp['Members']) != 1:
                raise Exception("Can not handle multiple Managers")
            mgrurl = rsp['Members'][0]['@odata.id']
            self._mgrinfo = wc.grab_json_response(mgrurl)
        return self._mgrinfo

    def get_firmware_default_account_info(self):
        raise Exception('This must be subclassed')

    def scan(self):
        c = webclient.SecureHTTPConnection(self.ipaddr, 443, verifycallback=self.validate_cert)
        i = c.grab_json_response('/redfish/v1/')
        uuid = i.get('UUID', None)
        if uuid:
            self.info['uuid'] = uuid.lower()

    def validate_cert(self, certificate):
        # broadly speaking, merely checks consistency moment to moment,
        # but if https_cert gets stricter, this check means something
        fprint = util.get_fingerprint(self.https_cert)
        return util.cert_matches(fprint, certificate)

    def enable_ipmi(self, wc):
        npu = self.mgrinfo(wc).get(
            'NetworkProtocol', {}).get('@odata.id', None)
        if not npu:
            raise Exception('Cannot enable IPMI, no NetworkProtocol on BMC')
        npi = wc.grab_json_response(npu)
        if not npi.get('IPMI', {}).get('ProtocolEnabled'):
            wc.set_header('If-Match', '*')
            wc.grab_json_response_with_status(
                npu, {'IPMI': {'ProtocolEnabled': True}}, method='PATCH')
        acctinfo = wc.grab_json_response_with_status(
            self.target_account_url(wc))
        acctinfo = acctinfo[0]
        actypes = acctinfo['AccountTypes']
        candidates = acctinfo['AccountTypes@Redfish.AllowableValues']
        if 'IPMI' not in actypes and 'IPMI' in candidates:
            actypes.append('IPMI')
            acctupd = {
                'AccountTypes': actypes,
                'Password': self.currpass,
            }
            rsp = wc.grab_json_response_with_status(
                self.target_account_url(wc), acctupd, method='PATCH')

    def _get_wc(self):
        defuser, defpass = self.get_firmware_default_account_info()
        wc = webclient.SecureHTTPConnection(self.ipaddr, 443, verifycallback=self.validate_cert)
        wc.set_basic_credentials(defuser, defpass)
        wc.set_header('Content-Type', 'application/json')
        wc.set_header('Accept', 'application/json')
        authmode = 0
        if not self.trieddefault:
            rsp, status = wc.grab_json_response_with_status('/redfish/v1/Managers')
            if status == 403:
                self.trieddefault = True
                chgurl = None
                rsp = json.loads(rsp)
                currerr = rsp.get('error', {})
                ecode = currerr.get('code', None)
                if ecode.endswith('PasswordChangeRequired'):
                    for einfo in currerr.get('@Message.ExtendedInfo', []):
                        if einfo.get('MessageId', None).endswith('PasswordChangeRequired'):
                            for msgarg in einfo.get('MessageArgs'):
                                chgurl = msgarg
                                break
                if chgurl:
                    if self.targpass == defpass:
                        raise Exception("Must specify a non-default password to onboard this BMC")
                    wc.set_header('If-Match', '*')
                    cpr = wc.grab_json_response_with_status(chgurl, {'Password': self.targpass}, method='PATCH')
                    if cpr[1] >= 200 and cpr[1] < 300:
                        self.curruser = defuser
                        self.currpass = self.targpass
                        wc.set_basic_credentials(self.curruser, self.currpass)
                        _, status = wc.grab_json_response_with_status('/redfish/v1/Managers')
                        tries = 10
                        while status >= 300 and tries:
                            eventlet.sleep(1)
                            _, status = wc.grab_json_response_with_status('/redfish/v1/Managers')
                        return wc

            if status > 400:
                self.trieddefault = True
                if status == 401:
                    wc.set_basic_credentials(defuser, self.targpass)
                    rsp, status = wc.grab_json_response_with_status('/redfish/v1/Managers')
                    if status == 200:  # Default user still, but targpass
                        self.currpass = self.targpass
                        self.curruser = defuser
                        return wc
                    elif self.targuser != defuser:
                        wc.set_basic_credentials(self.targuser, self.targpass)
                        rsp, status = wc.grab_json_response_with_status('/redfish/v1/Managers')
                    if status != 200:
                        raise Exception("Target BMC does not recognize firmware default credentials nor the confluent stored credential")
            else:
                self.curruser = defuser
                self.currpass = defpass
                return wc
        if self.curruser:
            wc.set_basic_credentials(self.curruser, self.currpass)
            rsp, status = wc.grab_json_response_with_status('/redfish/v1/Managers')
            if status != 200:
                return None
            return wc
        wc.set_basic_credentials(self.targuser, self.targpass)
        rsp, status = wc.grab_json_response_with_status('/redfish/v1/Managers')
        if status != 200:
            return None
        self.curruser = self.targuser
        self.currpass = self.targpass
        return wc

    def target_account_url(self, wc):
        asrv = self.srvroot(wc).get('AccountService', {}).get('@odata.id')
        rsp, status = wc.grab_json_response_with_status(asrv)
        accts = rsp.get('Accounts', {}).get('@odata.id')
        rsp, status = wc.grab_json_response_with_status(accts)
        accts = rsp.get('Members', [])
        for accturl in accts:
            accturl = accturl.get('@odata.id', '')
            if accturl:
                rsp, status = wc.grab_json_response_with_status(accturl)
                if rsp.get('UserName', None) == self.curruser:
                    targaccturl = accturl
                    break
        else:
            raise Exception("Unable to identify Account URL to modify on this BMC")
        return targaccturl

    def config(self, nodename):
        mgrs = None
        self.nodename = nodename
        creds = self.configmanager.get_node_attributes(
            nodename, ['secret.hardwaremanagementuser',
                       'secret.hardwaremanagementpassword',
                       'hardwaremanagement.manager',
                       'hardwaremanagement.method',
                       'console.method'],
            True)
        cd = creds.get(nodename, {})
        defuser, defpass = self.get_firmware_default_account_info()
        user, passwd, _ = self.get_node_credentials(
            nodename, creds, defuser, defpass)
        user = util.stringify(user)
        passwd = util.stringify(passwd)
        self.targuser = user
        self.targpass = passwd
        wc = self._get_wc()
        curruserinfo = {}
        authupdate = {}
        wc.set_header('Content-Type', 'application/json')
        if user != self.curruser:
            authupdate['UserName'] = user
        if passwd != self.currpass:
            authupdate['Password'] = passwd
        if authupdate:
            targaccturl = self.target_account_url(wc)
            rsp, status = wc.grab_json_response_with_status(targaccturl, authupdate, method='PATCH')
            if status >= 300:
                raise Exception("Failed attempting to update credentials on BMC")
            self.curruser = user
            self.currpass = passwd
            wc.set_basic_credentials(user, passwd)
            _, status = wc.grab_json_response_with_status('/redfish/v1/Managers')
            tries = 10
            while tries and status >= 300:
                tries -= 1
                eventlet.sleep(1.0)
                _, status = wc.grab_json_response_with_status(
                    '/redfish/v1/Managers')
        if (cd.get('hardwaremanagement.method', {}).get('value', 'ipmi') != 'redfish'
                or cd.get('console.method', {}).get('value', None) == 'ipmi'):
            self.enable_ipmi(wc)
        if ('hardwaremanagement.manager' in cd and
                cd['hardwaremanagement.manager']['value'] and
                not cd['hardwaremanagement.manager']['value'].startswith(
                    'fe80::')):
            newip = cd['hardwaremanagement.manager']['value']
            newip = newip.split('/', 1)[0]
            newipinfo = getaddrinfo(newip, 0)[0]
            newip = newipinfo[-1][0]
            if ':' in newip:
                raise exc.NotImplementedException('IPv6 remote config TODO')
            hifurls = get_host_interface_urls(wc, self.mgrinfo(wc))
            mgtnicinfo = self.mgrinfo(wc)['EthernetInterfaces']['@odata.id']
            mgtnicinfo = wc.grab_json_response(mgtnicinfo)
            mgtnics = [x['@odata.id'] for x in mgtnicinfo.get('Members', [])]
            actualnics = []
            for candnic in mgtnics:
                if candnic in hifurls:
                    continue
                actualnics.append(candnic)
            if len(actualnics) != 1:
                raise Exception("Multi-interface BMCs are not supported currently")
            currnet = wc.grab_json_response(actualnics[0])
            netconfig = netutil.get_nic_config(self.configmanager, nodename, ip=newip)
            newconfig = {
                "Address": newip,
                "SubnetMask": netutil.cidr_to_mask(netconfig['prefix']),
            }
            newgw = netconfig['ipv4_gateway']
            if newgw:
                newconfig['Gateway'] = newgw
            else:
                newconfig['Gateway'] = newip  # required property, set to self just to have a value
            for net in currnet.get("IPv4Addresses", []):
                if net["Address"] == newip and net["SubnetMask"] == newconfig['SubnetMask'] and (not newgw or newconfig['Gateway'] == newgw):
                    break
            else:
                wc.set_header('If-Match', '*')
                rsp, status = wc.grab_json_response_with_status(actualnics[0], {
                    'DHCPv4': {'DHCPEnabled': False},
                    'IPv4StaticAddresses': [newconfig]}, method='PATCH')
        elif self.ipaddr.startswith('fe80::'):
            self.configmanager.set_node_attributes(
                {nodename: {'hardwaremanagement.manager': self.ipaddr}})
        else:
            raise exc.TargetEndpointUnreachable(
                'hardwaremanagement.manager must be set to desired address (No IPv6 Link Local detected)')


def remote_nodecfg(nodename, cfm):
    cfg = cfm.get_node_attributes(
        nodename, 'hardwaremanagement.manager')
    ipaddr = cfg.get(nodename, {}).get('hardwaremanagement.manager', {}).get(
        'value', None)
    ipaddr = ipaddr.split('/', 1)[0]
    ipaddr = getaddrinfo(ipaddr, 0)[0][-1]
    if not ipaddr:
        raise Exception('Cannot remote configure a system without known '
                        'address')
    info = {'addresses': [ipaddr]}
    nh = NodeHandler(info, cfm)
    nh.config(nodename)


if __name__ == '__main__':
    import confluent.config.configmanager as cfm
    c = cfm.ConfigManager(None)
    import sys
    info = {'addresses': [[sys.argv[1]]]}
    print(repr(info))
    testr = NodeHandler(info, c)
    testr.config(sys.argv[2])
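The redfishbmc base above deliberately leaves exactly one hook abstract (get_firmware_default_account_info raises 'This must be subclassed'), so a vendor port is only a few lines. A minimal sketch for a hypothetical vendor, mirroring the megarac.py file earlier in this diff; the device label and factory credentials here are assumptions for illustration, not part of the change set:

    import confluent.discovery.handlers.redfishbmc as redfishbmc

    class NodeHandler(redfishbmc.NodeHandler):
        devname = 'ExampleBMC'  # hypothetical device label

        def get_firmware_default_account_info(self):
            # factory-default (user, password) for the assumed vendor
            return ('root', '0penBmc')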
@@ -247,6 +247,10 @@ class NodeHandler(immhandler.NodeHandler):
            if rsp.status == 200:
                pwdchanged = True
                password = newpassword
                wc.set_header('Authorization', 'Bearer ' + rspdata['access_token'])
                if '_csrf_token' in wc.cookies:
                    wc.set_header('X-XSRF-TOKEN', wc.cookies['_csrf_token'])
                wc.grab_json_response_with_status('/api/providers/logout')
            else:
                if rspdata.get('locktime', 0) > 0:
                    raise LockedUserException(
@@ -280,6 +284,7 @@ class NodeHandler(immhandler.NodeHandler):
            rsp.read()
            if rsp.status != 200:
                return (None, None)
            wc.grab_json_response_with_status('/api/providers/logout')
            self._currcreds = (username, newpassword)
            wc.set_basic_credentials(username, newpassword)
            pwdchanged = True
@@ -403,6 +408,34 @@ class NodeHandler(immhandler.NodeHandler):
            if user['users_user_name'] == '':
                return user['users_user_id']

    def create_tmp_account(self, wc):
        rsp, status = wc.grab_json_response_with_status('/redfish/v1/AccountService/Accounts')
        if status != 200:
            raise Exception("Unable to list current accounts")
        usednames = set([])
        tmpnam = '6pmu0ezczzcp'
        tpass = base64.b64encode(os.urandom(9)).decode() + 'Iw47$'
        ntpass = base64.b64encode(os.urandom(9)).decode() + 'Iw47$'
        for acct in rsp.get("Members", []):
            url = acct.get("@odata.id", None)
            if url:
                uinfo = wc.grab_json_response(url)
                usednames.add(uinfo.get('UserName', None))
        if tmpnam in usednames:
            raise Exception("Tmp account already exists")
        rsp, status = wc.grab_json_response_with_status(
            '/redfish/v1/AccountService/Accounts',
            {'UserName': tmpnam, 'Password': tpass, 'RoleId': 'Administrator'})
        if status >= 300:
            raise Exception("Failure creating tmp account: " + repr(rsp))
        tmpurl = rsp['@odata.id']
        wc.set_basic_credentials(tmpnam, tpass)
        rsp, status = wc.grab_json_response_with_status(
            tmpurl, {'Password': ntpass}, method='PATCH')
        wc.set_basic_credentials(tmpnam, ntpass)
        return tmpurl

    def _setup_xcc_account(self, username, passwd, wc):
        userinfo = wc.grab_json_response('/api/dataset/imm_users')
        uid = None
@@ -434,18 +467,35 @@ class NodeHandler(immhandler.NodeHandler):
                '/api/function',
                {'USER_UserModify': '{0},{1},,1,4,0,0,0,0,,8,,,'.format(uid, username)})
            if status == 200 and rsp.get('return', 0) == 13:
                wc.grab_json_response('/api/providers/logout')
                wc.set_basic_credentials(self._currcreds[0], self._currcreds[1])
                status = 503
                tries = 2
                tmpaccount = None
                while status != 200:
                    tries -= 1
                    rsp, status = wc.grab_json_response_with_status(
                        '/redfish/v1/AccountService/Accounts/{0}'.format(uid),
                        {'UserName': username}, method='PATCH')
                    if status != 200:
                        rsp = json.loads(rsp)
                        if rsp.get('error', {}).get('code', 'Unknown') in ('Base.1.8.GeneralError', 'Base.1.12.GeneralError'):
                            eventlet.sleep(10)
                        if rsp.get('error', {}).get('code', 'Unknown') in ('Base.1.8.GeneralError', 'Base.1.12.GeneralError', 'Base.1.14.GeneralError'):
                            if tries:
                                eventlet.sleep(4)
                            elif tmpaccount:
                                wc.grab_json_response_with_status(tmpaccount, method='DELETE')
                                raise Exception('Failed renaming main account')
                            else:
                                tmpaccount = self.create_tmp_account(wc)
                                tries = 8
                    else:
                        break
                if tmpaccount:
                    wc.set_basic_credentials(username, passwd)
                    wc.grab_json_response_with_status(tmpaccount, method='DELETE')
                self.tmppasswd = None
                self._currcreds = (username, passwd)
                return
        self.tmppasswd = None
        wc.grab_json_response('/api/providers/logout')
        self._currcreds = (username, passwd)
@@ -596,7 +646,10 @@ class NodeHandler(immhandler.NodeHandler):
                statargs['ENET_IPv4GatewayIPAddr'] = netconfig['ipv4_gateway']
            elif not netutil.address_is_local(newip):
                raise exc.InvalidArgumentException('Will not remotely configure a device with no gateway')
            wc.grab_json_response('/api/dataset', statargs)
            netset, status = wc.grab_json_response_with_status('/api/dataset', statargs)
            print(repr(netset))
            print(repr(status))

        elif self.ipaddr.startswith('fe80::'):
            self.configmanager.set_node_attributes(
                {nodename: {'hardwaremanagement.manager': self.ipaddr}})
@@ -627,8 +680,9 @@ def remote_nodecfg(nodename, cfm):
    ipaddr = ipaddr.split('/', 1)[0]
    ipaddr = getaddrinfo(ipaddr, 0)[0][-1]
    if not ipaddr:
        raise Excecption('Cannot remote configure a system without known '
        raise Exception('Cannot remote configure a system without known '
                        'address')
    info = {'addresses': [ipaddr]}
    nh = NodeHandler(info, cfm)
    nh.config(nodename)
104
confluent_server/confluent/discovery/handlers/xcc3.py
Normal file
@@ -0,0 +1,104 @@
# Copyright 2024 Lenovo
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import confluent.discovery.handlers.redfishbmc as redfishbmc
import eventlet.support.greendns
import confluent.util as util

webclient = eventlet.import_patched('pyghmi.util.webclient')


getaddrinfo = eventlet.support.greendns.getaddrinfo


class NodeHandler(redfishbmc.NodeHandler):
    devname = 'XCC'

    def get_firmware_default_account_info(self):
        return ('USERID', 'PASSW0RD')

    def scan(self):
        ip, port = self.get_web_port_and_ip()
        c = webclient.SecureHTTPConnection(ip, port,
                                           verifycallback=self.validate_cert)
        c.set_header('Accept', 'application/json')
        i = c.grab_json_response('/api/providers/logoninfo')
        modelname = i.get('items', [{}])[0].get('machine_name', None)
        if modelname:
            self.info['modelname'] = modelname
        for attrname in list(self.info.get('attributes', {})):
            val = self.info['attributes'][attrname]
            if '-uuid' == attrname[-5:] and len(val) == 32:
                val = val.lower()
                self.info['attributes'][attrname] = '-'.join([val[:8], val[8:12], val[12:16], val[16:20], val[20:]])
        attrs = self.info.get('attributes', {})
        room = attrs.get('room-id', None)
        if room:
            self.info['room'] = room
        rack = attrs.get('rack-id', None)
        if rack:
            self.info['rack'] = rack
        name = attrs.get('name', None)
        if name:
            self.info['hostname'] = name
        unumber = attrs.get('lowest-u', None)
        if unumber:
            self.info['u'] = unumber
        location = attrs.get('location', None)
        if location:
            self.info['location'] = location
        mtm = attrs.get('enclosure-machinetype-model', None)
        if mtm:
            self.info['modelnumber'] = mtm.strip()
        sn = attrs.get('enclosure-serial-number', None)
        if sn:
            self.info['serialnumber'] = sn.strip()
        if attrs.get('enclosure-form-factor', None) == 'dense-computing':
            encuuid = attrs.get('chassis-uuid', None)
            if encuuid:
                self.info['enclosure.uuid'] = fixuuid(encuuid)
            slot = int(attrs.get('slot', 0))
            if slot != 0:
                self.info['enclosure.bay'] = slot

    def validate_cert(self, certificate):
        fprint = util.get_fingerprint(self.https_cert)
        return util.cert_matches(fprint, certificate)


def remote_nodecfg(nodename, cfm):
    cfg = cfm.get_node_attributes(
        nodename, 'hardwaremanagement.manager')
    ipaddr = cfg.get(nodename, {}).get('hardwaremanagement.manager', {}).get(
        'value', None)
    ipaddr = ipaddr.split('/', 1)[0]
    ipaddr = getaddrinfo(ipaddr, 0)[0][-1]
    if not ipaddr:
        raise Exception('Cannot remote configure a system without known '
                        'address')
    info = {'addresses': [ipaddr]}
    nh = NodeHandler(info, cfm)
    nh.config(nodename)


if __name__ == '__main__':
    import confluent.config.configmanager as cfm
    c = cfm.ConfigManager(None)
    import sys
    info = {'addresses': [[sys.argv[1]]]}
    print(repr(info))
    testr = NodeHandler(info, c)
    testr.config(sys.argv[2])
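The scan() method above folds dashes into any 32-character '*-uuid' attribute before storing it; the same slicing as a standalone sketch (the fixuuid helper applied to chassis-uuid is assumed to live elsewhere in the tree and to behave equivalently):

    def dashify_uuid(val):
        # '0123456789abcdef0123456789abcdef' -> '01234567-89ab-cdef-0123-456789abcdef'
        val = val.lower()
        return '-'.join([val[:8], val[8:12], val[12:16], val[16:20], val[20:]])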
@@ -315,9 +315,9 @@ def proxydhcp(handler, nodeguess):
                optidx = rqv.tobytes().index(b'\x63\x82\x53\x63') + 4
            except ValueError:
                continue
            hwlen = rq[2]
            opts, disco = opts_to_dict(rq, optidx, 3)
            disco['hwaddr'] = ':'.join(['{0:02x}'.format(x) for x in rq[28:28+hwlen]])
            hwlen = rqv[2]
            opts, disco = opts_to_dict(rqv, optidx, 3)
            disco['hwaddr'] = ':'.join(['{0:02x}'.format(x) for x in rqv[28:28+hwlen]])
            node = None
            if disco.get('hwaddr', None) in macmap:
                node = macmap[disco['hwaddr']]
@@ -346,7 +346,7 @@ def proxydhcp(handler, nodeguess):
            profile = None
            if not myipn:
                myipn = socket.inet_aton(recv)
            profile = get_deployment_profile(node, cfg)
            profile, stgprofile = get_deployment_profile(node, cfg)
            if profile:
                log.log({
                    'info': 'Offering proxyDHCP boot from {0} to {1} ({2})'.format(recv, node, client[0])})
@@ -356,7 +356,7 @@ def proxydhcp(handler, nodeguess):
                    continue
            if opts.get(77, None) == b'iPXE':
                if not profile:
                    profile = get_deployment_profile(node, cfg)
                    profile, stgprofile = get_deployment_profile(node, cfg)
                if not profile:
                    log.log({'info': 'No pending profile for {0}, skipping proxyDHCP reply'.format(node)})
                    continue
@@ -385,8 +385,9 @@ def proxydhcp(handler, nodeguess):
            rpv[268:280] = b'\x3c\x09PXEClient\xff'
            net4011.sendto(rpv[:281], client)
        except Exception as e:
            tracelog.log(traceback.format_exc(), ltype=log.DataTypes.event,
                         event=log.Events.stacktrace)
            log.logtrace()
            # tracelog.log(traceback.format_exc(), ltype=log.DataTypes.event,
            #              event=log.Events.stacktrace)


def start_proxydhcp(handler, nodeguess=None):
@@ -453,13 +454,14 @@ def snoop(handler, protocol=None, nodeguess=None):
                    # with try/except
                    if i < 64:
                        continue
                    _, level, typ = struct.unpack('QII', cmsgarr[:16])
                    if level == socket.IPPROTO_IP and typ == IP_PKTINFO:
                        idx, recv = struct.unpack('II', cmsgarr[16:24])
                        recv = ipfromint(recv)
                    rqv = memoryview(rawbuffer)[:i]
                    if rawbuffer[0] == 1:  # Boot request
                        process_dhcp4req(handler, nodeguess, cfg, net4, idx, recv, rqv)
                    _, level, typ = struct.unpack('QII', cmsgarr[:16])
                    if level == socket.IPPROTO_IP and typ == IP_PKTINFO:
                        idx, recv = struct.unpack('II', cmsgarr[16:24])
                        recv = ipfromint(recv)
                    rqv = memoryview(rawbuffer)[:i]
                    client = (ipfromint(clientaddr.sin_addr.s_addr), socket.htons(clientaddr.sin_port))
                    process_dhcp4req(handler, nodeguess, cfg, net4, idx, recv, rqv, client)
                elif netc == net6:
                    recv = 'ff02::1:2'
                    pkt, addr = netc.recvfrom(2048)
@@ -476,6 +478,10 @@ def snoop(handler, protocol=None, nodeguess=None):
            tracelog.log(traceback.format_exc(), ltype=log.DataTypes.event,
                         event=log.Events.stacktrace)


_mac_to_uuidmap = {}


def process_dhcp6req(handler, rqv, addr, net, cfg, nodeguess):
    ip = addr[0]
    req, disco = v6opts_to_dict(bytearray(rqv[4:]))
@@ -501,7 +507,7 @@ def process_dhcp6req(handler, rqv, addr, net, cfg, nodeguess):
        handler(info)
    consider_discover(info, req, net, cfg, None, nodeguess, addr)


def process_dhcp4req(handler, nodeguess, cfg, net4, idx, recv, rqv):
def process_dhcp4req(handler, nodeguess, cfg, net4, idx, recv, rqv, client):
    rq = bytearray(rqv)
    addrlen = rq[2]
    if addrlen > 16 or addrlen == 0:
@@ -531,7 +537,12 @@ def process_dhcp4req(handler, nodeguess, cfg, net4, idx, recv, rqv):
    # We will fill out service to have something to byte into,
    # but the nature of the beast is that we do not have peers,
    # so that will not be present for a pxe snoop
    info = {'hwaddr': netaddr, 'uuid': disco['uuid'],
    theuuid = disco['uuid']
    if theuuid:
        _mac_to_uuidmap[netaddr] = theuuid
    elif netaddr in _mac_to_uuidmap:
        theuuid = _mac_to_uuidmap[netaddr]
    info = {'hwaddr': netaddr, 'uuid': theuuid,
            'architecture': disco['arch'],
            'netinfo': {'ifidx': idx, 'recvip': recv, 'txid': txid},
            'services': ('pxe-client',)}
@@ -539,7 +550,7 @@ def process_dhcp4req(handler, nodeguess, cfg, net4, idx, recv, rqv):
            and time.time() > ignoredisco.get(netaddr, 0) + 90):
        ignoredisco[netaddr] = time.time()
        handler(info)
    consider_discover(info, rqinfo, net4, cfg, rqv, nodeguess)
    consider_discover(info, rqinfo, net4, cfg, rqv, nodeguess, requestor=client)


@@ -583,26 +594,34 @@ def get_deployment_profile(node, cfg, cfd=None):
    if not cfd:
        cfd = cfg.get_node_attributes(node, ('deployment.*', 'collective.managercandidates'))
    profile = cfd.get(node, {}).get('deployment.pendingprofile', {}).get('value', None)
    if not profile:
        return None
    candmgrs = cfd.get(node, {}).get('collective.managercandidates', {}).get('value', None)
    if candmgrs:
        candmgrs = noderange.NodeRange(candmgrs, cfg).nodes
        if collective.get_myname() not in candmgrs:
            return None
    return profile
    stgprofile = cfd.get(node, {}).get('deployment.stagedprofile', {}).get('value', None)
    if profile or stgprofile:
        candmgrs = cfd.get(node, {}).get('collective.managercandidates', {}).get('value', None)
        if candmgrs:
            try:
                candmgrs = noderange.NodeRange(candmgrs, cfg).nodes
            except Exception:  # fallback to unverified noderange
                candmgrs = noderange.NodeRange(candmgrs).nodes
            if collective.get_myname() not in candmgrs:
                return None, None
    return profile, stgprofile

staticassigns = {}
myipbypeer = {}
def check_reply(node, info, packet, sock, cfg, reqview, addr):
    httpboot = info['architecture'] == 'uefi-httpboot'
def check_reply(node, info, packet, sock, cfg, reqview, addr, requestor):
    if not requestor:
        requestor = ('0.0.0.0', None)
    if requestor[0] == '0.0.0.0' and not info.get('uuid', None):
        return  # ignore DHCP from local non-PXE segment
    httpboot = info.get('architecture', None) == 'uefi-httpboot'
    cfd = cfg.get_node_attributes(node, ('deployment.*', 'collective.managercandidates'))
    profile = get_deployment_profile(node, cfg, cfd)
    if not profile:
    profile, stgprofile = get_deployment_profile(node, cfg, cfd)
    if ((not profile)
            and (requestor[0] == '0.0.0.0' or not stgprofile)):
        if time.time() > ignoremacs.get(info['hwaddr'], 0) + 90:
            ignoremacs[info['hwaddr']] = time.time()
            log.log({'info': 'Ignoring boot attempt by {0} no deployment profile specified (uuid {1}, hwaddr {2})'.format(
                node, info['uuid'], info['hwaddr']
                node, info.get('uuid', 'NA'), info['hwaddr']
            )})
        return
    if addr:
@@ -611,7 +630,7 @@ def check_reply(node, info, packet, sock, cfg, reqview, addr):
            return
        return reply_dhcp6(node, addr, cfg, packet, cfd, profile, sock)
    else:
        return reply_dhcp4(node, info, packet, cfg, reqview, httpboot, cfd, profile)
        return reply_dhcp4(node, info, packet, cfg, reqview, httpboot, cfd, profile, sock, requestor)

def reply_dhcp6(node, addr, cfg, packet, cfd, profile, sock):
    myaddrs = netutil.get_my_addresses(addr[-1], socket.AF_INET6)
@@ -648,14 +667,16 @@ def reply_dhcp6(node, addr, cfg, packet, cfd, profile, sock):
        ipass[4:16] = b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x05\x00\x18'
        ipass[16:32] = socket.inet_pton(socket.AF_INET6, ipv6addr)
        ipass[32:40] = b'\x00\x00\x00\x78\x00\x00\x01\x2c'
    elif (not packet['vci']) or not packet['vci'].startswith('HTTPClient:Arch:'):
        return  # do not send ip-less replies to anything but HTTPClient specifically
    #1 msgtype
    #3 txid
    #22 - server ident
    #len(packet[1]) + 4 - client ident
    #len(ipass) + 4 or 0
    #len(url) + 4
    elif (not packet['vci']) or not packet['vci'].startswith(
            'HTTPClient:Arch:'):
        # do not send ip-less replies to anything but HTTPClient specifically
        return
    # 1 msgtype
    # 3 txid
    # 22 - server ident
    # len(packet[1]) + 4 - client ident
    # len(ipass) + 4 or 0
    # len(url) + 4
    replylen = 50 + len(bootfile) + len(packet[1]) + 4
    if len(ipass):
        replylen += len(ipass)
@@ -695,26 +716,31 @@ def get_my_duid():
    return _myuuid


def reply_dhcp4(node, info, packet, cfg, reqview, httpboot, cfd, profile):
def reply_dhcp4(node, info, packet, cfg, reqview, httpboot, cfd, profile, sock=None, requestor=None):
    replen = 275  # default is going to be 286
    # while myipn is describing presumed destination, it's really
    # vague in the face of aliases, need to convert to ifidx and evaluate
    # aliases for best match to guess

    isboot = True
    if requestor is None:
        requestor = ('0.0.0.0', None)
    if info.get('architecture', None) is None:
        isboot = False
    rqtype = packet[53][0]
    insecuremode = cfd.get(node, {}).get('deployment.useinsecureprotocols',
                                         {}).get('value', 'never')
    if not insecuremode:
        insecuremode = 'never'
    if insecuremode == 'never' and not httpboot:
        if rqtype == 1 and info['architecture']:
            log.log(
                {'info': 'Boot attempt by {0} detected in insecure mode, but '
                         'insecure mode is disabled. Set the attribute '
                         '`deployment.useinsecureprotocols` to `firmware` or '
                         '`always` to enable support, or use UEFI HTTP boot '
                         'with HTTPS.'.format(node)})
            return
    if isboot:
        insecuremode = cfd.get(node, {}).get('deployment.useinsecureprotocols',
                                             {}).get('value', 'never')
        if not insecuremode:
            insecuremode = 'never'
        if insecuremode == 'never' and not httpboot:
            if rqtype == 1 and info.get('architecture', None):
                log.log(
                    {'info': 'Boot attempt by {0} detected in insecure mode, but '
                             'insecure mode is disabled. Set the attribute '
                             '`deployment.useinsecureprotocols` to `firmware` or '
                             '`always` to enable support, or use UEFI HTTP boot '
                             'with HTTPS.'.format(node)})
                return
    reply = bytearray(512)
    repview = memoryview(reply)
    repview[:20] = iphdr
@@ -725,9 +751,16 @@ def reply_dhcp4(node, info, packet, cfg, reqview, httpboot, cfd, profile):
    repview[1:10] = reqview[1:10]  # duplicate txid, hwlen, and others
    repview[10:11] = b'\x80'  # always set broadcast
    repview[28:44] = reqview[28:44]  # copy chaddr field
    relayip = reqview[24:28].tobytes()
    if (not isboot) and relayip == b'\x00\x00\x00\x00':
        # Ignore local DHCP packets if it isn't a firmware request
        return
    relayipa = None
    if relayip != b'\x00\x00\x00\x00':
        relayipa = socket.inet_ntoa(relayip)
    gateway = None
    netmask = None
    niccfg = netutil.get_nic_config(cfg, node, ifidx=info['netinfo']['ifidx'])
    niccfg = netutil.get_nic_config(cfg, node, ifidx=info['netinfo']['ifidx'], relayipn=relayip)
    nicerr = niccfg.get('error_msg', False)
    if nicerr:
        log.log({'error': nicerr})
@@ -751,7 +784,7 @@ def reply_dhcp4(node, info, packet, cfg, reqview, httpboot, cfd, profile):
            gateway = None
        netmask = (2**32 - 1) ^ (2**(32 - netmask) - 1)
        netmask = struct.pack('!I', netmask)
    elif (not packet['vci']) or not (packet['vci'].startswith('HTTPClient:Arch:') or packet['vci'].startswith('PXEClient')):
    elif (not packet.get('vci', None)) or not (packet['vci'].startswith('HTTPClient:Arch:') or packet['vci'].startswith('PXEClient')):
        return  # do not send ip-less replies to anything but netboot specifically
    myipn = niccfg['deploy_server']
    if not myipn:
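For reference, the prefix-to-netmask arithmetic in the hunk above turns a CIDR prefix length into a packed dotted-quad mask (the same conversion netutil.cidr_to_mask performs for the Redfish handler earlier in this diff); a standalone sketch of the computation, with an assumed helper name:

    import socket
    import struct

    def prefix_to_mask(prefix):
        # e.g. prefix=24 -> 0xffffff00 -> '255.255.255.0'
        mask = (2**32 - 1) ^ (2**(32 - prefix) - 1)
        return socket.inet_ntoa(struct.pack('!I', mask))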
@@ -771,10 +804,19 @@ def reply_dhcp4(node, info, packet, cfg, reqview, httpboot, cfd, profile):
                node, profile, len(bootfile) - 127)})
            return
        repview[108:108 + len(bootfile)] = bootfile
    elif info.get('architecture', None) == 'uefi-aarch64' and packet.get(77, None) == b'iPXE':
        if not profile:
            profile, stgprofile = get_deployment_profile(node, cfg)
        if not profile:
            log.log({'info': 'No pending profile for {0}, skipping proxyDHCP reply'.format(node)})
            return
        bootfile = 'http://{0}/confluent-public/os/{1}/boot.ipxe'.format(myipn, profile).encode('utf8')
        repview[108:108 + len(bootfile)] = bootfile
    myip = myipn
    myipn = socket.inet_aton(myipn)
    orepview[12:16] = myipn
    repview[20:24] = myipn
    repview[24:28] = relayip
    repview[236:240] = b'\x63\x82\x53\x63'
    repview[240:242] = b'\x35\x01'
    if rqtype == 1:  # if discover, then offer
@@ -785,17 +827,19 @@ def reply_dhcp4(node, info, packet, cfg, reqview, httpboot, cfd, profile):
    repview[245:249] = myipn
    repview[249:255] = b'\x33\x04\x00\x00\x00\xf0'  # fixed short lease time
    repview[255:257] = b'\x61\x11'
    repview[257:274] = packet[97]
    if packet.get(97, None) is not None:
        repview[257:274] = packet[97]
    # Note that sending PXEClient kicks off the proxyDHCP procedure, ignoring
    # boot filename and such in the DHCP packet
    # we will simply always do it to provide the boot payload in a consistent
    # manner to both dhcp-elsewhere and fixed ip clients
    if info['architecture'] == 'uefi-httpboot':
        repview[replen - 1:replen + 11] = b'\x3c\x0aHTTPClient'
        replen += 12
    else:
        repview[replen - 1:replen + 10] = b'\x3c\x09PXEClient'
        replen += 11
    if isboot:
        if info.get('architecture', None) == 'uefi-httpboot':
            repview[replen - 1:replen + 11] = b'\x3c\x0aHTTPClient'
            replen += 12
        else:
            repview[replen - 1:replen + 10] = b'\x3c\x09PXEClient'
            replen += 11
    hwlen = bytearray(reqview[2:3].tobytes())[0]
    fulladdr = repview[28:28+hwlen].tobytes()
    myipbypeer[fulladdr] = myipn
@@ -812,6 +856,14 @@ def reply_dhcp4(node, info, packet, cfg, reqview, httpboot, cfd, profile):
        repview[replen - 1:replen + 1] = b'\x03\x04'
        repview[replen + 1:replen + 5] = gateway
        replen += 6
    elif relayip != b'\x00\x00\x00\x00' and clipn:
        log.log({'error': 'Relay DHCP offer to {} will fail due to missing gateway information'.format(node)})
    if 82 in packet:
        reloptionslen = len(packet[82])
        reloptionshdr = struct.pack('BB', 82, reloptionslen)
        repview[replen - 1:replen + 1] = reloptionshdr
        repview[replen + 1:replen + reloptionslen + 1] = packet[82]
        replen += 2 + reloptionslen
    repview[replen - 1:replen] = b'\xff'  # end of options, should always be last byte
    repview = memoryview(reply)
    pktlen = struct.pack('!H', replen + 28)  # ip+udp = 28
@@ -835,9 +887,19 @@ def reply_dhcp4(node, info, packet, cfg, reqview, httpboot, cfd, profile):
        ipinfo = 'with static address {0}'.format(niccfg['ipv4_address'])
    else:
        ipinfo = 'without address, served from {0}'.format(myip)
    log.log({
        'info': 'Offering {0} boot {1} to {2}'.format(boottype, ipinfo, node)})
    send_raw_packet(repview, replen + 28, reqview, info)
    if relayipa:
        ipinfo += ' (relayed to {} via {})'.format(relayipa, requestor[0])
    if isboot:
        log.log({
            'info': 'Offering {0} boot {1} to {2}'.format(boottype, ipinfo, node)})
    else:
        log.log({
            'info': 'Offering DHCP {} to {}'.format(ipinfo, node)})
    if relayip != b'\x00\x00\x00\x00':
        sock.sendto(repview[28:28 + replen], requestor)
    else:
        send_raw_packet(repview, replen + 28, reqview, info)


def send_raw_packet(repview, replen, reqview, info):
    ifidx = info['netinfo']['ifidx']
@@ -862,9 +924,10 @@ def send_raw_packet(repview, replen, reqview, info):
    sendto(tsock.fileno(), pkt, replen, 0, ctypes.byref(targ),
           ctypes.sizeof(targ))

def ack_request(pkt, rq, info):
def ack_request(pkt, rq, info, sock=None, requestor=None):
    hwlen = bytearray(rq[2:3].tobytes())[0]
    hwaddr = rq[28:28+hwlen].tobytes()
    relayip = rq[24:28].tobytes()
    myipn = myipbypeer.get(hwaddr, None)
    if not myipn or pkt.get(54, None) != myipn:
        return
@@ -883,15 +946,20 @@ def ack_request(pkt, rq, info):
                       repview[12:len(rply)].tobytes())
    datasum = ~datasum & 0xffff
    repview[26:28] = struct.pack('!H', datasum)
    send_raw_packet(repview, len(rply), rq, info)
    if relayip != b'\x00\x00\x00\x00':
        sock.sendto(repview[28:], requestor)
    else:
        send_raw_packet(repview, len(rply), rq, info)

def consider_discover(info, packet, sock, cfg, reqview, nodeguess, addr=None):
    if info.get('hwaddr', None) in macmap and info.get('uuid', None):
        check_reply(macmap[info['hwaddr']], info, packet, sock, cfg, reqview, addr)
def consider_discover(info, packet, sock, cfg, reqview, nodeguess, addr=None, requestor=None):
    if packet.get(53, None) == b'\x03':
        ack_request(packet, reqview, info, sock, requestor)
    elif info.get('hwaddr', None) in macmap:  # and info.get('uuid', None):
        check_reply(macmap[info['hwaddr']], info, packet, sock, cfg, reqview, addr, requestor)
    elif info.get('uuid', None) in uuidmap:
        check_reply(uuidmap[info['uuid']], info, packet, sock, cfg, reqview, addr)
        check_reply(uuidmap[info['uuid']], info, packet, sock, cfg, reqview, addr, requestor)
    elif packet.get(53, None) == b'\x03':
        ack_request(packet, reqview, info)
        ack_request(packet, reqview, info, sock, requestor)
    elif info.get('uuid', None) and info.get('hwaddr', None):
        if time.time() > ignoremacs.get(info['hwaddr'], 0) + 90:
            ignoremacs[info['hwaddr']] = time.time()
@@ -246,11 +246,11 @@ def _find_srvtype(net, net4, srvtype, addresses, xid):
            try:
                net4.sendto(data, ('239.255.255.253', 427))
            except socket.error as se:
                # On occasion, multicasting may be disabled
                # tolerate this scenario and move on
                if se.errno != 101:
                    raise
            net4.sendto(data, (bcast, 427))
                pass
            try:
                net4.sendto(data, (bcast, 427))
            except socket.error as se:
                pass


def _grab_rsps(socks, rsps, interval, xidmap, deferrals):
@@ -411,7 +411,7 @@ def query_srvtypes(target):
    parsed = _parse_slp_header(rs)
    if parsed:
        payload = parsed['payload']
        if payload[:2] != '\x00\x00':
        if payload[:2] != b'\x00\x00':
            return
        stypelen = struct.unpack('!H', bytes(payload[2:4]))[0]
        stypes = payload[4:4+stypelen].decode('utf-8')
@@ -471,10 +471,13 @@ def snoop(handler, protocol=None):
    # socket in use can occur when aliased ipv4 are encountered
    net.bind(('', 427))
    net4.bind(('', 427))

    newmacs = set([])
    known_peers = set([])
    peerbymacaddress = {}
    deferpeers = []
    while True:
        try:
            newmacs = set([])
            newmacs.clear()
            r, _, _ = select.select((net, net4), (), (), 60)
            # clear known_peers and peerbymacaddress
            # to avoid stale info getting in...
@@ -482,14 +485,16 @@ def snoop(handler, protocol=None):
            # addresses that come close together
            # calling code needs to understand deeper context, as snoop
            # will now yield dupe info over time
            known_peers = set([])
            peerbymacaddress = {}
            deferpeers = []
            known_peers.clear()
            peerbymacaddress.clear()
            deferpeers.clear()
            while r and len(deferpeers) < 256:
                for s in r:
                    (rsp, peer) = s.recvfrom(9000)
                    if peer in known_peers:
                        continue
                    if peer in deferpeers:
                        continue
                    mac = neighutil.get_hwaddr(peer[0])
                    if not mac:
                        probepeer = (peer[0], struct.unpack('H', os.urandom(2))[0] | 1025) + peer[2:]

@@ -60,6 +60,7 @@ def active_scan(handler, protocol=None):
    known_peers = set([])
    for scanned in scan(['urn:dmtf-org:service:redfish-rest:1', 'urn::service:affluent']):
        for addr in scanned['addresses']:
            addr = addr[0:1] + addr[2:]
            if addr in known_peers:
                break
            hwaddr = neighutil.get_hwaddr(addr[0])
@@ -79,13 +80,20 @@ def scan(services, target=None):


def _process_snoop(peer, rsp, mac, known_peers, newmacs, peerbymacaddress, byehandler, machandlers, handler):
    if mac in peerbymacaddress and peer not in peerbymacaddress[mac]['addresses']:
        peerbymacaddress[mac]['addresses'].append(peer)
    if mac in peerbymacaddress:
        normpeer = peer[0:1] + peer[2:]
        for currpeer in peerbymacaddress[mac]['addresses']:
            currnormpeer = currpeer[0:1] + peer[2:]
            if currnormpeer == normpeer:
                break
        else:
            peerbymacaddress[mac]['addresses'].append(peer)
    else:
        peerdata = {
            'hwaddr': mac,
            'addresses': [peer],
        }
    targurl = None
    for headline in rsp[1:]:
        if not headline:
            continue
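The deduplication introduced above compares sockaddr tuples with the element at index 1 (the source port) dropped, so the same responder answering from different ephemeral ports is only recorded once; a worked sketch of that normalization:

    peer = ('fe80::aa:1', 49152, 0, 2)   # IPv6 sockaddr: (host, port, flowinfo, scopeid)
    normpeer = peer[0:1] + peer[2:]      # ('fe80::aa:1', 0, 2) -- port removed
    # responses from the same host on other ports now normalize equal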
@@ -105,13 +113,21 @@ def _process_snoop(peer, rsp, mac, known_peers, newmacs, peerbymacaddress, byeha
            if not value.endswith('/redfish/v1/'):
                return
        elif header == 'LOCATION':
            if not value.endswith('/DeviceDescription.json'):
            if '/eth' in value and value.endswith('.xml'):
                targurl = '/redfish/v1/'
                targtype = 'megarac-bmc'
                continue  # MegaRAC redfish
            elif value.endswith('/DeviceDescription.json'):
                targurl = '/DeviceDescription.json'
                targtype = 'lenovo-xcc'
                continue
            else:
                return
    if handler:
        eventlet.spawn_n(check_fish_handler, handler, peerdata, known_peers, newmacs, peerbymacaddress, machandlers, mac, peer)
    if handler and targurl:
        eventlet.spawn_n(check_fish_handler, handler, peerdata, known_peers, newmacs, peerbymacaddress, machandlers, mac, peer, targurl, targtype)

def check_fish_handler(handler, peerdata, known_peers, newmacs, peerbymacaddress, machandlers, mac, peer):
    retdata = check_fish(('/DeviceDescription.json', peerdata))
def check_fish_handler(handler, peerdata, known_peers, newmacs, peerbymacaddress, machandlers, mac, peer, targurl, targtype):
    retdata = check_fish((targurl, peerdata, targtype))
    if retdata:
        known_peers.add(peer)
        newmacs.add(mac)
@@ -164,11 +180,14 @@ def snoop(handler, byehandler=None, protocol=None, uuidlookup=None):
    net4.bind(('', 1900))
    net6.bind(('', 1900))
    peerbymacaddress = {}
    newmacs = set([])
    deferrednotifies = []
    machandlers = {}
    while True:
        try:
            newmacs = set([])
            deferrednotifies = []
            machandlers = {}
            newmacs.clear()
            deferrednotifies.clear()
            machandlers.clear()
            r = select.select((net4, net6), (), (), 60)
            if r:
                r = r[0]
@@ -251,7 +270,10 @@ def snoop(handler, byehandler=None, protocol=None, uuidlookup=None):
                                break
                            candmgrs = cfd.get(node, {}).get('collective.managercandidates', {}).get('value', None)
                            if candmgrs:
                                candmgrs = noderange.NodeRange(candmgrs, cfg).nodes
                                try:
                                    candmgrs = noderange.NodeRange(candmgrs, cfg).nodes
                                except Exception:
                                    candmgrs = noderange.NodeRange(candmgrs).nodes
                                if collective.get_myname() not in candmgrs:
                                    break
                            currtime = time.time()
@@ -322,7 +344,7 @@ def _find_service(service, target):
                host = '[{0}]'.format(host)
            msg = smsg.format(host, service)
            if not isinstance(msg, bytes):
                msg = msg.encode('utf8')
            msg = msg.encode('utf8')
            net6.sendto(msg, addr[4])
        else:
            net4.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
@@ -410,7 +432,11 @@ def _find_service(service, target):
            if '/redfish/v1/' not in peerdata[nid].get('urls', ()) and '/redfish/v1' not in peerdata[nid].get('urls', ()):
                continue
            if '/DeviceDescription.json' in peerdata[nid]['urls']:
                pooltargs.append(('/DeviceDescription.json', peerdata[nid]))
                pooltargs.append(('/DeviceDescription.json', peerdata[nid], 'lenovo-xcc'))
            else:
                for targurl in peerdata[nid]['urls']:
                    if '/eth' in targurl and targurl.endswith('.xml'):
                        pooltargs.append(('/redfish/v1/', peerdata[nid], 'megarac-bmc'))
            # For now, don't interrogate generic redfish bmcs
            # This is due to a need to deduplicate from some supported SLP
            # targets (IMM, TSM, others)
@@ -425,21 +451,32 @@ def _find_service(service, target):
def check_fish(urldata, port=443, verifycallback=None):
    if not verifycallback:
        verifycallback = lambda x: True
    url, data = urldata
    try:
        url, data, targtype = urldata
    except ValueError:
        url, data = urldata
        targtype = 'service:redfish-bmc'
    try:
        wc = webclient.SecureHTTPConnection(_get_svrip(data), port, verifycallback=verifycallback, timeout=1.5)
        peerinfo = wc.grab_json_response(url)
        peerinfo = wc.grab_json_response(url, headers={'Accept': 'application/json'})
    except socket.error:
        return None
    if url == '/DeviceDescription.json':
        if not peerinfo:
            return None
        try:
            peerinfo = peerinfo[0]
        except KeyError:
            peerinfo['xcc-variant'] = '3'
        except IndexError:
            return None
        try:
            myuuid = peerinfo['node-uuid'].lower()
            if '-' not in myuuid:
                myuuid = '-'.join([myuuid[:8], myuuid[8:12], myuuid[12:16], myuuid[16:20], myuuid[20:]])
            data['uuid'] = myuuid
            data['attributes'] = peerinfo
            data['services'] = ['lenovo-xcc']
            data['services'] = ['lenovo-xcc'] if 'xcc-variant' not in peerinfo else ['lenovo-xcc' + peerinfo['xcc-variant']]
            return data
        except (IndexError, KeyError):
            return None
@@ -447,7 +484,7 @@ def check_fish(urldata, port=443, verifycallback=None):
    peerinfo = wc.grab_json_response('/redfish/v1/')
    if url == '/redfish/v1/':
        if 'UUID' in peerinfo:
            data['services'] = ['service:redfish-bmc']
            data['services'] = [targtype]
            data['uuid'] = peerinfo['UUID'].lower()
            return data
    return None
@@ -466,7 +503,12 @@ def _parse_ssdp(peer, rsp, peerdata):
    if code == b'200':
        if nid in peerdata:
            peerdatum = peerdata[nid]
            if peer not in peerdatum['addresses']:
            normpeer = peer[0:1] + peer[2:]
            for currpeer in peerdatum['addresses']:
                currnormpeer = currpeer[0:1] + peer[2:]
                if currnormpeer == normpeer:
                    break
            else:
                peerdatum['addresses'].append(peer)
        else:
            peerdatum = {
@@ -501,5 +543,7 @@ def _parse_ssdp(peer, rsp, peerdata):

if __name__ == '__main__':
    def printit(rsp):
        print(repr(rsp))
        pass  # print(repr(rsp))
    active_scan(printit)
@@ -53,7 +53,7 @@ def execupdate(handler, filename, updateobj, type, owner, node, datfile):
        return
    if type == 'ffdc' and os.path.isdir(filename):
        filename += '/' + node
    if 'type' == 'ffdc':
    if type == 'ffdc':
        errstr = False
        if os.path.exists(filename):
            errstr = '{0} already exists on {1}, cannot overwrite'.format(

@@ -72,6 +72,20 @@ opmap = {
}


def get_user_for_session(sessionid, sessiontok):
    if not isinstance(sessionid, str):
        sessionid = sessionid.decode()
    if not isinstance(sessiontok, str):
        sessiontok = sessiontok.decode()
    if not sessiontok or not sessionid:
        raise Exception("invalid session id or token")
    if sessiontok != httpsessions.get(sessionid, {}).get('csrftoken', None):
        raise Exception("Invalid csrf token for session")
    user = httpsessions[sessionid]['name']
    if not isinstance(user, str):
        user = user.decode()
    return user


def group_creation_resources():
    yield confluent.messages.Attributes(
        kv={'name': None}, desc="Name of the group").html() + '<br>'
@@ -175,6 +189,8 @@ def _get_query_dict(env, reqbody, reqtype):
        qstring = None
    if qstring:
        for qpair in qstring.split('&'):
            if '=' not in qpair:
                continue
            qkey, qvalue = qpair.split('=')
            qdict[qkey] = qvalue
    if reqbody is not None:
@@ -618,7 +634,6 @@ def resourcehandler(env, start_response):
        yield '500 - ' + str(e)
        return


def resourcehandler_backend(env, start_response):
    """Function to handle new wsgi requests
    """
@@ -669,7 +684,11 @@ def resourcehandler_backend(env, start_response):
    if 'CONTENT_LENGTH' in env and int(env['CONTENT_LENGTH']) > 0:
        reqbody = env['wsgi.input'].read(int(env['CONTENT_LENGTH']))
        reqtype = env['CONTENT_TYPE']
    operation = opmap[env['REQUEST_METHOD']]
    operation = opmap.get(env['REQUEST_METHOD'], None)
    if not operation:
        start_response('400 Bad Method', headers)
        yield ''
        return
    querydict = _get_query_dict(env, reqbody, reqtype)
    if operation != 'retrieve' and 'restexplorerop' in querydict:
        operation = querydict['restexplorerop']
@@ -728,7 +747,13 @@ def resourcehandler_backend(env, start_response):
    elif (env['PATH_INFO'].endswith('/forward/web') and
            env['PATH_INFO'].startswith('/nodes/')):
        prefix, _, _ = env['PATH_INFO'].partition('/forward/web')
        _, _, nodename = prefix.rpartition('/')
        #_, _, nodename = prefix.rpartition('/')
        default = False
        if 'default' in env['PATH_INFO']:
            default = True
            _, _, nodename, _ = prefix.split('/')
        else:
            _, _, nodename = prefix.rpartition('/')
        hm = cfgmgr.get_node_attributes(nodename, 'hardwaremanagement.manager')
        targip = hm.get(nodename, {}).get(
            'hardwaremanagement.manager', {}).get('value', None)
@@ -737,6 +762,29 @@ def resourcehandler_backend(env, start_response):
            yield 'No hardwaremanagement.manager defined for node'
            return
        targip = targip.split('/', 1)[0]
        if default:
            try:
                ip_info = socket.getaddrinfo(targip, 0, 0, socket.SOCK_STREAM)
            except socket.gaierror:
                start_response('404 Not Found', headers)
                yield 'hardwaremanagement.manager definition could not be resolved'
                return
            # this is just to future proof just in case the indexes of the address family change in future
            for i in range(len(ip_info)):
                if ip_info[i][0] == socket.AF_INET:
                    url = 'https://{0}/'.format(ip_info[i][-1][0])
                    start_response('302', [('Location', url)])
                    yield 'Our princess is in another castle!'
                    return
                elif ip_info[i][0] == socket.AF_INET6:
                    url = 'https://[{0}]/'.format(ip_info[i][-1][0])
                    if url.startswith('https://[fe80'):
                        start_response('405 Method Not Allowed', headers)
                        yield 'link local ipv6 address cannot be used in browser'
                        return
                    start_response('302', [('Location', url)])
                    yield 'Our princess is in another castle!'
                    return
        funport = forwarder.get_port(targip, env['HTTP_X_FORWARDED_FOR'],
                                     authorized['sessionid'])
        host = env['HTTP_X_FORWARDED_HOST']

@@ -220,16 +220,20 @@ def setlimits():
def assure_ownership(path):
    try:
        if os.getuid() != os.stat(path).st_uid:
            sys.stderr.write('{} is not owned by confluent user, change ownership\n'.format(path))
            if os.getuid() == 0:
                sys.stderr.write('Attempting to run as root, when non-root usage is detected\n')
            else:
                sys.stderr.write('{} is not owned by confluent user, change ownership\n'.format(path))
            sys.exit(1)
    except OSError as e:
        if e.errno == 13:
            sys.stderr.write('{} is not owned by confluent user, change ownership\n'.format(path))
            if os.getuid() == 0:
                sys.stderr.write('Attempting to run as root, when non-root usage is detected\n')
            else:
                sys.stderr.write('{} is not owned by confluent user, change ownership\n'.format(path))
            sys.exit(1)

def sanity_check():
    if os.getuid() == 0:
        return True
    assure_ownership('/etc/confluent')
    assure_ownership('/etc/confluent/cfg')
    for filename in glob.glob('/etc/confluent/cfg/*'):

@@ -262,10 +262,10 @@ class Generic(ConfluentMessage):

    def json(self):
        return json.dumps(self.data)

    def raw(self):
        return self.data

    def html(self):
        return json.dumps(self.data)

@@ -344,10 +344,10 @@ class ConfluentResourceCount(ConfluentMessage):
        self.myargs = [count]
        self.desc = 'Resource Count'
        self.kvpairs = {'count': count}

    def strip_node(self, node):
        pass

class CreatedResource(ConfluentMessage):
    notnode = True
    readonly = True
@@ -569,6 +569,8 @@ def get_input_message(path, operation, inputdata, nodes=None, multinode=False,
        return InputLicense(path, nodes, inputdata, configmanager)
    elif path == ['deployment', 'ident_image']:
        return InputIdentImage(path, nodes, inputdata)
    elif path == ['console', 'ikvm']:
        return InputIkvmParams(path, nodes, inputdata)
    elif inputdata:
        raise exc.InvalidArgumentException(
            'No known input handler for request')
@@ -948,6 +950,9 @@ class InputIdentImage(ConfluentInputMessage):
    keyname = 'ident_image'
    valid_values = ['create']

class InputIkvmParams(ConfluentInputMessage):
    keyname = 'method'
    valid_values = ['unix', 'wss']

class InputIdentifyMessage(ConfluentInputMessage):
    valid_values = set([