mirror of https://github.com/xcat2/confluent.git synced 2024-11-22 01:22:00 +00:00

Merge branch 'master' into staging

This commit is contained in:
Tinashe Kucherera 2024-09-12 10:29:52 -04:00 committed by GitHub
commit 304d0b5dce
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
174 changed files with 5198 additions and 973 deletions

README.md Normal file

@ -0,0 +1,30 @@
# Confluent
![Python 3](https://img.shields.io/badge/python-3-blue.svg) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/xcat2/confluent/blob/master/LICENSE)
Confluent is a software package to handle essential bootstrap and operation of scale-out server configurations.
It supports stateful and stateless deployments for various operating systems.
Check [this page](https://hpc.lenovo.com/users/documentation/whatisconfluent.html) for a more detailed list of features.
Confluent is the modern successor of [xCAT](https://github.com/xcat2/xcat-core).
If you're coming from xCAT, check out [this comparison](https://hpc.lenovo.com/users/documentation/confluentvxcat.html).
# Documentation
Confluent documentation is hosted on hpc.lenovo.com: https://hpc.lenovo.com/users/documentation/
# Download
Get the latest version from: https://hpc.lenovo.com/users/downloads/
Check release notes on: https://hpc.lenovo.com/users/news/
# Open Source License
Confluent is made available under the Apache 2.0 license: https://opensource.org/license/apache-2-0
# Developers
Want to help? Submit a [Pull Request](https://github.com/xcat2/confluent/pulls).


@ -1,6 +1,8 @@
Name: confluent-client
Project: https://hpc.lenovo.com/users/
Source: https://github.com/lenovo/confluent
Upstream-Name: confluent-client
All files of the Confluent-client software are distributed under the terms of an Apache-2.0 license as indicated below:
Files: *
Copyright: 2014-2019 Lenovo
@ -11,7 +13,8 @@ Copyright: 2014 IBM Corporation
2015-2019 Lenovo
License: Apache-2.0
File: sortutil.py
File: sortutil.py,
tlvdata.py
Copyright: 2014 IBM Corporation
2015-2016 Lenovo
License: Apache-2.0
@ -19,3 +22,56 @@ License: Apache-2.0
File: tlv.py
Copyright: 2014 IBM Corporation
License: Apache-2.0
License: Apache-2.0
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.
"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:
You must give any other recipients of the Work or Derivative Works a copy of this License; and
You must cause any modified files to carry prominent notices stating that You changed the files; and
You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and
If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.
You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS


@ -21,6 +21,7 @@
import optparse
import os
import re
import select
import sys
@ -84,6 +85,7 @@ fullline = sys.stdin.readline()
printpending = True
clearpending = False
holdoff = 0
padded = None
while fullline:
for line in fullline.split('\n'):
if not line:
@ -92,13 +94,18 @@ while fullline:
line = 'UNKNOWN: ' + line
if options.log:
node, output = line.split(':', 1)
output = output.lstrip()
if padded is None:
if output.startswith(' '):
padded = True
else:
padded = False
if padded:
output = re.sub(r'^ ', '', output)
currlog = options.log.format(node=node, nodename=node)
with open(currlog, mode='a') as log:
log.write(output + '\n')
continue
node, output = line.split(':', 1)
output = output.lstrip()
grouped.add_line(node, output)
if options.watch:
if not holdoff:


@ -157,7 +157,7 @@ def main():
elif attrib.endswith('.ipv6_address') and val:
ip6bynode[node][currnet] = val.split('/', 1)[0]
elif attrib.endswith('.hostname'):
namesbynode[node][currnet] = re.split('\s+|,', val)
namesbynode[node][currnet] = re.split(r'\s+|,', val)
for node in ip4bynode:
mydomain = domainsbynode.get(node, None)
for ipdb in (ip4bynode, ip6bynode):

confluent_client/bin/l2traceroute Executable file

@ -0,0 +1,173 @@
#!/usr/libexec/platform-python
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# Copyright 2017 Lenovo
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
__author__ = 'tkucherera'
import optparse
import os
import signal
import sys
import subprocess
try:
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
except AttributeError:
pass
path = os.path.dirname(os.path.realpath(__file__))
path = os.path.realpath(os.path.join(path, '..', 'lib', 'python'))
if path.startswith('/opt'):
sys.path.append(path)
import confluent.client as client
argparser = optparse.OptionParser(
usage="Usage: %prog <start_node> -i <interface> <end_node> -e <eface>",
)
argparser.add_option('-i', '--interface', type='str',
help='interface to check path against for the start node')
argparser.add_option('-e', '--eface', type='str',
help='interface to check path against for the end node')
argparser.add_option('-c', '--cumulus', action="store_true", dest="cumulus",
help='return layer 2 route through cumulus switches only')
(options, args) = argparser.parse_args()
try:
start_node = args[0]
end_node = args[1]
interface = options.interface
eface = options.eface
except IndexError:
argparser.print_help()
sys.exit(1)
session = client.Command()
def get_neighbors(switch):
    switch_neighbors = []
    url = '/networking/neighbors/by-switch/{0}/by-peername/'.format(switch)
    for neighbor in session.read(url):
        peer = neighbor['item']['href'].strip('/')
        if peer in all_switches:
            switch_neighbors.append(peer)
    return switch_neighbors
def find_path(start, end, path=[]):
path = path + [start]
if start == end:
return path # If start and end are the same, return the path
for node in get_neighbors(start):
if node not in path:
new_path = find_path(node, end, path)
if new_path:
return new_path # If a path is found, return it
return None # If no path is found, return None
def is_cumulus(switch):
    try:
        read_attrib = subprocess.check_output(['nodeattrib', switch, 'hardwaremanagement.method'])
    except subprocess.CalledProcessError:
        return False
    for attribs in read_attrib.decode('utf-8').split('\n'):
        attrib = attribs.split(':')
        # only return True once a matching line is found; do not bail out
        # on the first non-matching line
        if len(attrib) > 2 and attrib[2].strip() == 'affluent':
            return True
    return False
def host_to_switch(node, interface=None):
    # first check the node config to see what switches are connected
    # if host is in rhel can use nmstate package
    if node in all_switches:
        return [node]
    switches = []
    netarg = 'net.*.switch'
    if interface:
        netarg = 'net.{0}.switch'.format(interface)
    try:
        read_attrib = subprocess.check_output(['nodeattrib', node, netarg])
    except subprocess.CalledProcessError:
        return False
    for attribs in read_attrib.decode('utf-8').split('\n'):
        attrib = attribs.split(':')
        try:
            if ' net.mgt.switch' in attrib or attrib[2] == '':
                continue
        except IndexError:
            continue
        switch = attrib[2].strip()
        if options.cumulus:
            # with -c, only include switches managed via the affluent method
            if is_cumulus(switch):
                switches.append(switch)
        else:
            switches.append(switch)
    return switches
def path_between_nodes(start_switches, end_switches):
    for start_switch in start_switches:
        for end_switch in end_switches:
            if start_switch == end_switch:
                return [start_switch]
            path = find_path(start_switch, end_switch)
            if path:
                return path
    # only report failure after every switch pair has been tried
    return 'No path found'
all_switches = []
for res in session.read('/networking/neighbors/by-switch/'):
if 'error' in res:
sys.stderr.write(res['error'] + '\n')
exitcode = 1
else:
switch = (res['item']['href'].replace('/', ''))
all_switches.append(switch)
end_nodeslist = []
nodelist = '/noderange/{0}/nodes/'.format(end_node)
for res in session.read(nodelist):
if 'error' in res:
sys.stderr.write(res['error'] + '\n')
exitcode = 1
else:
elem=(res['item']['href'].replace('/', ''))
end_nodeslist.append(elem)
start_switches = host_to_switch(start_node, interface)
for end_node in end_nodeslist:
if end_node:
end_switches = host_to_switch(end_node, eface)
if not end_switches:
            print('Error: net.{0}.switch attribute is not valid'.format(eface))
continue
path = path_between_nodes(start_switches, end_switches)
print(f'{start_node} to {end_node}: {path}')
# TODO: don't include switches that are connected through management interfaces.
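The `find_path` helper above is an ordinary depth-first search over the switch adjacency map that the script builds from confluent's neighbor API. A minimal standalone sketch, with an invented adjacency table in place of the live `get_neighbors` lookup:

```python
# Hypothetical neighbor map; the real script derives this from
# /networking/neighbors/by-switch/ responses.
neighbors = {
    'switch1': ['switch2', 'switch3'],
    'switch2': ['switch1', 'switch4'],
    'switch3': ['switch1'],
    'switch4': ['switch2'],
}

def find_path(start, end, path=None):
    # Depth-first search; returns the first switch-hop list found, or None.
    path = (path or []) + [start]
    if start == end:
        return path
    for node in neighbors.get(start, []):
        if node not in path:  # avoid revisiting switches (no cycles)
            new_path = find_path(node, end, path)
            if new_path:
                return new_path
    return None

print(find_path('switch3', 'switch4'))  # ['switch3', 'switch1', 'switch2', 'switch4']
```

Note this returns the first path discovered, not necessarily the shortest one; a breadth-first search would be needed for shortest-hop guarantees.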


@ -102,9 +102,9 @@ def run():
cmdv = ['ssh', sshnode] + cmdvbase + cmdstorun[0]
if currprocs < concurrentprocs:
currprocs += 1
run_cmdv(node, cmdv, all, pipedesc)
run_cmdv(sshnode, cmdv, all, pipedesc)
else:
pendingexecs.append((node, cmdv))
pendingexecs.append((sshnode, cmdv))
if not all or exitcode:
sys.exit(exitcode)
rdy, _, _ = select.select(all, [], [], 10)


@ -22,6 +22,7 @@ import optparse
import os
import signal
import sys
import shlex
try:
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
@ -125,13 +126,14 @@ elif options.set:
argset = argset.strip()
if argset:
arglist += shlex.split(argset)
argset = argfile.readline()
argset = argfile.readline()
session.stop_if_noderange_over(noderange, options.maxnodes)
exitcode=client.updateattrib(session,arglist,nodetype, noderange, options, None)
if exitcode != 0:
sys.exit(exitcode)
# Lists all attributes
if len(args) > 0:
    # set output to all for searching: when there is something to search for,
    # show all outputs even if blank.
if requestargs is None:


@ -0,0 +1,121 @@
#!/usr/libexec/platform-python
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# Copyright 2015-2017 Lenovo
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
__author__ = 'tkucherera'
from getpass import getpass
import optparse
import os
import signal
import sys
try:
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
except AttributeError:
pass
path = os.path.dirname(os.path.realpath(__file__))
path = os.path.realpath(os.path.join(path, '..', 'lib', 'python'))
if path.startswith('/opt'):
sys.path.append(path)
import confluent.client as client
argparser = optparse.OptionParser(usage="Usage: %prog <noderange> <username> <new_password>")
argparser.add_option('-m', '--maxnodes', type='int',
help='Number of nodes to affect before prompting for confirmation')
argparser.add_option('-p', '--prompt', action='store_true',
help='Prompt for password values interactively')
argparser.add_option('-e', '--environment', action='store_true',
                     help='Set password, taking the value from the environment '
                          'variable of the same name')
(options, args) = argparser.parse_args()
try:
noderange = args[0]
username = args[1]
except IndexError:
argparser.print_help()
sys.exit(1)
client.check_globbing(noderange)
session = client.Command()
exitcode = 0
if options.prompt:
oneval = 1
twoval = 2
while oneval != twoval:
oneval = getpass('Enter pass for {0}: '.format(username))
twoval = getpass('Confirm pass for {0}: '.format(username))
if oneval != twoval:
print('Values did not match.')
new_password = twoval
elif len(args) == 3:
if options.environment:
key = args[2]
new_password = os.environ.get(key, os.environ[key.upper()])
else:
new_password = args[2]
else:
argparser.print_help()
sys.exit(1)
errorNodes = set([])
uid_dict = {}
session.stop_if_noderange_over(noderange, options.maxnodes)
for rsp in session.read('/noderange/{0}/configuration/management_controller/users/all'.format(noderange)):
databynode = rsp["databynode"]
for node in databynode:
if 'error' in rsp['databynode'][node]:
print(node, ':', rsp['databynode'][node]['error'])
errorNodes.add(node)
continue
for user in rsp['databynode'][node]['users']:
if user['username'] == username:
if not user['uid'] in uid_dict:
uid_dict[user['uid']] = node
continue
uid_dict[user['uid']] = uid_dict[user['uid']] + ',{}'.format(node)
break
if not uid_dict:
print("Error: Could not reach target node's bmc user")
sys.exit(1)
for uid in uid_dict:
success = session.simple_noderange_command(uid_dict[uid], 'configuration/management_controller/users/{0}'.format(uid), new_password, key='password', errnodes=errorNodes) # = 0 if successful
allNodes = set([])
for node in session.read('/noderange/{0}/nodes/'.format(noderange)):
if 'error' in node and success != 0:
sys.exit(success)
allNodes.add(node['item']['href'].replace("/", ""))
goodNodes = allNodes - errorNodes
for node in goodNodes:
print(node + ": Password Change Successful")
sys.exit(success)


@ -303,9 +303,14 @@ else:
'/noderange/{0}/configuration/management_controller/extended/all'.format(noderange),
session, printbmc, options, attrprefix='bmc.')
if options.extra:
rcode |= client.print_attrib_path(
'/noderange/{0}/configuration/management_controller/extended/extra'.format(noderange),
session, printextbmc, options)
if options.advanced:
rcode |= client.print_attrib_path(
'/noderange/{0}/configuration/management_controller/extended/extra_advanced'.format(noderange),
session, printextbmc, options)
else:
rcode |= client.print_attrib_path(
'/noderange/{0}/configuration/management_controller/extended/extra'.format(noderange),
session, printextbmc, options)
if printsys or options.exclude:
if printsys == 'all':
printsys = []


@ -243,7 +243,7 @@ if options.windowed:
elif 'Height' in line:
window_height = int(line.split(':')[1])
elif '-geometry' in line:
l = re.split(' |x|-|\+', line)
l = re.split(' |x|-|\\+', line)
l_nosp = [ele for ele in l if ele.strip()]
wmxo = int(l_nosp[1])
wmyo = int(l_nosp[2])


@ -81,6 +81,12 @@ def main(args):
if not args.profile and args.network:
sys.stderr.write('Both noderange and a profile name are required arguments to request a network deployment\n')
return 1
if args.clear and args.profile:
sys.stderr.write(
'The -c/--clear option should not be used with a profile, '
'it is a request to not deploy any profile, and will clear '
'whatever the current profile is without being specified\n')
return 1
if extra:
sys.stderr.write('Unrecognized arguments: ' + repr(extra) + '\n')
c = client.Command()
@ -90,17 +96,6 @@ def main(args):
if 'error' in rsp:
sys.stderr.write(rsp['error'] + '\n')
sys.exit(1)
if not args.clear and args.network and not args.prepareonly:
rc = c.simple_noderange_command(args.noderange, '/boot/nextdevice', 'network',
bootmode='uefi',
persistent=False,
errnodes=errnodes)
if errnodes:
sys.stderr.write(
'Unable to set boot device for following nodes: {0}\n'.format(
','.join(errnodes)))
return 1
rc |= c.simple_noderange_command(args.noderange, '/power/state', 'boot')
if args.clear:
cleararm(args.noderange, c)
clearpending(args.noderange, c)
@ -120,7 +115,7 @@ def main(args):
for profname in profnames:
sys.stderr.write(' ' + profname + '\n')
else:
sys.stderr.write('No deployment profiles available, try osdeploy fiimport or imgutil capture\n')
sys.stderr.write('No deployment profiles available, try osdeploy import or imgutil capture\n')
sys.exit(1)
armonce(args.noderange, c)
setpending(args.noderange, args.profile, c)
@ -166,8 +161,17 @@ def main(args):
else:
print('{0}: {1}{2}'.format(node, profile, armed))
sys.exit(0)
if args.network and not args.prepareonly:
return rc
if not args.clear and args.network and not args.prepareonly:
rc = c.simple_noderange_command(args.noderange, '/boot/nextdevice', 'network',
bootmode='uefi',
persistent=False,
errnodes=errnodes)
if errnodes:
sys.stderr.write(
'Unable to set boot device for following nodes: {0}\n'.format(
','.join(errnodes)))
return 1
rc |= c.simple_noderange_command(args.noderange, '/power/state', 'boot')
return 0
if __name__ == '__main__':


@ -68,7 +68,7 @@ def main():
else:
elem=(res['item']['href'].replace('/', ''))
list.append(elem)
print(options.delim.join(list))
print(options.delim.join(list))
sys.exit(exitcode)


@ -16,13 +16,10 @@
# limitations under the License.
import argparse
import base64
import csv
import fcntl
import io
import numpy as np
import os
import subprocess
import sys
try:
@ -35,7 +32,31 @@ except ImportError:
pass
def plot(gui, output, plotdata, bins):
def iterm_draw(data):
databuf = data.getbuffer()
datalen = len(databuf)
data = base64.b64encode(databuf).decode('utf8')
sys.stdout.write(
'\x1b]1337;File=inline=1;size={}:'.format(datalen))
sys.stdout.write(data)
sys.stdout.write('\a')
sys.stdout.write('\n')
sys.stdout.flush()
def kitty_draw(data):
data = base64.b64encode(data.getbuffer())
while data:
chunk, data = data[:4096], data[4096:]
m = 1 if data else 0
sys.stdout.write('\x1b_Ga=T,f=100,m={};'.format(m))
sys.stdout.write(chunk.decode('utf8'))
sys.stdout.write('\x1b\\')
sys.stdout.flush()
sys.stdout.write('\n')
def plot(gui, output, plotdata, bins, fmt):
import matplotlib as mpl
if gui and mpl.get_backend() == 'agg':
sys.stderr.write('Error: No GUI backend available and -g specified!\n')
@ -51,8 +72,13 @@ def plot(gui, output, plotdata, bins):
tdata = io.BytesIO()
plt.savefig(tdata)
if not gui and not output:
writer = DumbWriter()
writer.draw(tdata)
if fmt == 'sixel':
writer = DumbWriter()
writer.draw(tdata)
elif fmt == 'kitty':
kitty_draw(tdata)
elif fmt == 'iterm':
iterm_draw(tdata)
return n, bins
def textplot(plotdata, bins):
@ -81,7 +107,8 @@ histogram = False
aparser = argparse.ArgumentParser(description='Quick access to common statistics')
aparser.add_argument('-c', type=int, default=0, help='Column number to analyze (default is last column)')
aparser.add_argument('-d', default=None, help='Value used to separate columns')
aparser.add_argument('-x', default=False, action='store_true', help='Output histogram in sixel format')
aparser.add_argument('-x', default=False, action='store_true', help='Output histogram in graphical format')
aparser.add_argument('-f', default='sixel', help='Format for histogram output (sixel/iterm/kitty)')
aparser.add_argument('-s', default=0, help='Number of header lines to skip before processing')
aparser.add_argument('-g', default=False, action='store_true', help='Open histogram in separate graphical window')
aparser.add_argument('-o', default=None, help='Output histogram to the specified filename in PNG format')
@ -138,7 +165,7 @@ while data:
data = list(csv.reader([data], delimiter=delimiter))[0]
n = None
if args.g or args.o or args.x:
n, bins = plot(args.g, args.o, plotdata, bins=args.b)
n, bins = plot(args.g, args.o, plotdata, bins=args.b, fmt=args.f)
if args.t:
n, bins = textplot(plotdata, bins=args.b)
print('Samples: {5} Min: {3} Median: {0} Mean: {1} Max: {4} StandardDeviation: {2} Sum: {6}'.format(np.median(plotdata), np.mean(plotdata), np.std(plotdata), np.min(plotdata), np.max(plotdata), len(plotdata), np.sum(plotdata)))


@ -595,7 +595,7 @@ def print_attrib_path(path, session, requestargs, options, rename=None, attrpref
else:
printmissing.add(attr)
for missing in printmissing:
sys.stderr.write('Error: {0} not a valid attribute\n'.format(attr))
sys.stderr.write('Error: {0} not a valid attribute\n'.format(missing))
return exitcode
@ -668,6 +668,9 @@ def updateattrib(session, updateargs, nodetype, noderange, options, dictassign=N
for attrib in updateargs[1:]:
keydata[attrib] = None
for res in session.update(targpath, keydata):
for node in res.get('databynode', {}):
for warnmsg in res['databynode'][node].get('_warnings', []):
sys.stderr.write('Warning: ' + warnmsg + '\n')
if 'error' in res:
if 'errorcode' in res:
exitcode = res['errorcode']
@ -702,6 +705,14 @@ def updateattrib(session, updateargs, nodetype, noderange, options, dictassign=N
noderange, 'attributes/all', dictassign[key], key)
else:
if "=" in updateargs[1]:
update_ready = True
for arg in updateargs[1:]:
if not '=' in arg:
update_ready = False
exitcode = 1
if not update_ready:
sys.stderr.write('Error: {0} Can not set and read at the same time!\n'.format(str(updateargs[1:])))
sys.exit(exitcode)
try:
for val in updateargs[1:]:
val = val.split('=', 1)


@ -98,17 +98,24 @@ class GroupedData(object):
self.byoutput = {}
self.header = {}
self.client = confluentconnection
self.detectedpad = None
def generate_byoutput(self):
self.byoutput = {}
thepad = self.detectedpad if self.detectedpad else ''
for n in self.bynode:
output = '\n'.join(self.bynode[n])
output = ''
for ln in self.bynode[n]:
output += ln.replace(thepad, '', 1) + '\n'
if output not in self.byoutput:
self.byoutput[output] = set([n])
else:
self.byoutput[output].add(n)
def add_line(self, node, line):
wspc = re.search(r'^\s*', line).group()
if self.detectedpad is None or len(wspc) < len(self.detectedpad):
self.detectedpad = wspc
if node not in self.bynode:
self.bynode[node] = [line]
else:
@ -219,4 +226,4 @@ if __name__ == '__main__':
if not line:
continue
groupoutput.add_line(*line.split(': ', 1))
groupoutput.print_deviants()
groupoutput.print_deviants()
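The `detectedpad` logic above amounts to finding the shortest leading-whitespace run across all observed lines and then stripping that common pad once from each line when grouping output. A small standalone sketch with invented sample lines:

```python
import re

# Invented sample output lines; in confluent this would be node output.
lines = ['  alpha', '    beta', '  gamma']

# Track the shortest leading-whitespace prefix seen so far.
detectedpad = None
for line in lines:
    wspc = re.search(r'^\s*', line).group()
    if detectedpad is None or len(wspc) < len(detectedpad):
        detectedpad = wspc

# Strip the common pad exactly once per line, preserving deeper indentation.
stripped = [ln.replace(detectedpad, '', 1) for ln in lines]
print(stripped)  # ['alpha', '  beta', 'gamma']
```

Stripping only the common prefix keeps relative indentation intact, so genuinely deeper-indented lines still stand out after grouping.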


@ -1,12 +1,16 @@
%define name confluent_client
%define version #VERSION#
%define fversion %{lua:
sv, _ = string.gsub("#VERSION#", "[~+]", "-")
print(sv)
}
%define release 1
Summary: Client libraries and utilities for confluent
Name: %{name}
Version: %{version}
Release: %{release}
Source0: %{name}-%{version}.tar.gz
Source0: %{name}-%{fversion}.tar.gz
License: Apache2
Group: Development/Libraries
BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-buildroot
@ -21,7 +25,7 @@ This package enables python development and command line access to
a confluent server.
%prep
%setup -n %{name}-%{version} -n %{name}-%{version}
%setup -n %{name}-%{fversion}
%build
%if "%{dist}" == ".el7"


@ -153,11 +153,11 @@ _confluent_osimage_completion()
{
_confluent_get_args
if [ $NUMARGS == 2 ]; then
COMPREPLY=($(compgen -W "initialize import updateboot rebase" -- ${COMP_WORDS[COMP_CWORD]}))
COMPREPLY=($(compgen -W "initialize import importcheck updateboot rebase" -- ${COMP_WORDS[COMP_CWORD]}))
return
elif [ ${CMPARGS[1]} == 'initialize' ]; then
COMPREPLY=($(compgen -W "-h -u -s -t -i" -- ${COMP_WORDS[COMP_CWORD]}))
elif [ ${CMPARGS[1]} == 'import' ]; then
elif [ ${CMPARGS[1]} == 'import' ] || [ ${CMPARGS[1]} == 'importcheck' ]; then
compopt -o default
COMPREPLY=()
return


@ -0,0 +1,38 @@
l2traceroute(8) -- returns the layer 2 route through an Ethernet network managed by confluent given 2 end points.
==============================
## SYNOPSIS
`l2traceroute [options] <start_node> <end_noderange>`
## DESCRIPTION
**l2traceroute** is a command that returns the layer 2 route for the interfaces configured in nodeattrib.
It can also be used with the -i and -e options to check against specific interfaces on the endpoints.
## PREREQUISITES
For **l2traceroute** to work, the net.<interface>.switch attributes must be set on an endpoint if that endpoint is not itself a switch.
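For instance, the prerequisite attributes could be assigned with nodeattrib before tracing; the node, interface, and switch names below are hypothetical:

```shell
# Record that n244's eth0 is cabled to switch114
nodeattrib n244 net.eth0.switch=switch114

# Then trace from n244's eth0 toward n1851
l2traceroute -i eth0 n244 n1851
```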
## OPTIONS
* `-e EFACE`, `--eface=EFACE`:
  interface to check against for the second end point
* `-i INTERFACE`, `--interface=INTERFACE`:
  interface to check against for the first end point
* `-c`, `--cumulus`:
  return layer 2 route through Cumulus switches only
* `-h`, `--help`:
  Show help message and exit
## EXAMPLES
* Checking route between two nodes:
`# l2traceroute_client n244 n1851`
`n244 to n1851: ['switch114']`
* Checking route from one node to multiple nodes:
`# l2traceroute_client n244 n1833,n1851`
`n244 to n1833: ['switch114', 'switch7', 'switch32', 'switch253', 'switch85', 'switch72', 'switch21', 'switch2', 'switch96', 'switch103', 'switch115']`
`n244 to n1851: ['switch114']`

View File

@ -0,0 +1,30 @@
nodeapply(8) -- Execute command on many nodes in a noderange through ssh
=========================================================================
## SYNOPSIS
`nodeapply [options] <noderange>`
## DESCRIPTION
Provides shortcut access to a number of common operations against deployed
nodes. These operations include refreshing ssh certificates and configuration,
rerunning syncfiles, and executing specified postscripts.
## OPTIONS
* `-k`, `--security`
Refresh SSH configuration (hosts.equiv and node SSH certificates)
* `-F`, `--sync`
Rerun syncfiles from deployed profile
* `-P SCRIPTS`, `--scripts=SCRIPTS`
Re-run specified scripts, given as their full path under scripts, e.g. post.d/scriptname,firstboot.d/otherscriptname
* `-c COUNT`, `-f COUNT`, `--count=COUNT`
Specify the maximum number of instances to run concurrently
* `-m MAXNODES`, `--maxnodes=MAXNODES`
Specify a maximum number of nodes to run remote ssh command to, prompting
if over the threshold

View File

@ -7,8 +7,8 @@ nodeattrib(8) -- List or change confluent nodes attributes
`nodeattrib <noderange> [<nodeattribute1=value1> <nodeattribute2=value2> ...]`
`nodeattrib -c <noderange> <nodeattribute1> <nodeattribute2> ...`
`nodeattrib -e <noderange> <nodeattribute1> <nodeattribute2> ...`
`nodeattrib -p <noderange> <nodeattribute1> <nodeattribute2> ...`
`nodeattrib <noderange> -s <attributes.batch>`
`nodeattrib -p <noderange> <nodeattribute1> <nodeattribute2> ...`
`nodeattrib <noderange> -s <attributes.batch> ...`
## DESCRIPTION
@ -54,14 +54,17 @@ to a blank value will allow masking a group defined attribute with an empty valu
* `-e`, `--environment`:
Set specified attributes based on exported environment variable of matching name.
Environment variable names may be lower case or all upper case.
Replace . with _ as needed (e.g. info.note may be specified as either $info_note or $INFO_NOTE
Replace . with _ as needed (e.g. info.note may be specified as either $info_note or $INFO_NOTE)
* `-p`, `--prompt`:
Request interactive prompting to provide values rather than the command line
or environment variables.
* `-s`, `--set`:
Set attributes using a batch file
Set attributes using a batch file rather than the command line. Attributes in the batch file
may be given as one line of key=value pairs, similar to the command line, or each attribute may
be on its own line. Lines that start with a # sign are treated as comments. See EXAMPLES for batch
file syntax.
* `-m MAXNODES`, `--maxnodes=MAXNODES`:
Prompt if trying to set attributes on more than
@ -120,6 +123,25 @@ to a blank value will allow masking a group defined attribute with an empty valu
`d1: net.pxe.switch: pxeswitch1`
`d1: net.switch:`
* Setting attributes using a batch file with syntax similar to command line:
`# cat nodeattributes.batch`
`# power`
`power.psu1.outlet=3 power.psu1.pdu=pdu2`
`# nodeattrib n41 -s nodeattributes.batch`
`n41: 3`
`n41: pdu2`
* Setting attributes using a batch file with syntax where each attribute is in its own line:
`# cat nodeattributes.batch`
`# management`
`custom.mgt.switch=switch_main`
`custom.mgt.switch.port=swp4`
`# nodeattrib n41 -s nodeattributes.batch`
`n41: switch_main`
`n41: swp4`
## SEE ALSO
nodegroupattrib(8), nodeattribexpressions(5)

View File

@ -0,0 +1,28 @@
nodebmcpassword(8) -- Change management controller password for a specified user
=========================================================
## SYNOPSIS
`nodebmcpassword <noderange> <username> <new_password>`
## DESCRIPTION
`nodebmcpassword` allows you to change the management controller password for a user on a specified noderange
## OPTIONS
* `-m MAXNODES`, `--maxnodes=MAXNODES`:
Number of nodes to affect before prompting for
confirmation
* `-h`, `--help`:
Show help message and exit
## EXAMPLES
* Change the management controller password for user USERID on nodes n1 through n4:
`# nodebmcpassword n1-n4 USERID newpassword`
`n1: Password Change Successful`
`n2: Password Change Successful`
`n3: Password Change Successful`
`n4: Password Change Successful`

View File

@ -1,7 +1,11 @@
VERSION=`git describe|cut -d- -f 1`
NUMCOMMITS=`git describe|cut -d- -f 2`
if [ "$NUMCOMMITS" != "$VERSION" ]; then
VERSION=$VERSION.dev$NUMCOMMITS.g`git describe|cut -d- -f 3`
LASTNUM=$(echo $VERSION|rev|cut -d . -f 1|rev)
LASTNUM=$((LASTNUM+1))
FIRSTPART=$(echo $VERSION|rev|cut -d . -f 2- |rev)
VERSION=${FIRSTPART}.${LASTNUM}
VERSION=$VERSION~dev$NUMCOMMITS+`git describe|cut -d- -f 3`
fi
sed -e "s/#VERSION#/$VERSION/" confluent_osdeploy.spec.tmpl > confluent_osdeploy.spec
cd ..
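The version-mangling logic above bumps the last component of the tag and appends a `~dev` suffix so that RPM sorts the snapshot before the next release. A standalone sketch of that transformation, using a hardcoded sample string in place of a real `git describe` (assumed output format `TAG-NCOMMITS-gHASH`):

```shell
# Hypothetical `git describe` output standing in for the real repository state.
DESC="2.1.3-5-gabcdef0"
VERSION=$(echo $DESC|cut -d- -f 1)       # 2.1.3
NUMCOMMITS=$(echo $DESC|cut -d- -f 2)    # 5
if [ "$NUMCOMMITS" != "$VERSION" ]; then
    LASTNUM=$(echo $VERSION|rev|cut -d . -f 1|rev)      # last component: 3
    LASTNUM=$((LASTNUM+1))                              # bumped to 4
    FIRSTPART=$(echo $VERSION|rev|cut -d . -f 2- |rev)  # leading part: 2.1
    VERSION=${FIRSTPART}.${LASTNUM}
    VERSION=$VERSION~dev$NUMCOMMITS+$(echo $DESC|cut -d- -f 3)
fi
echo $VERSION  # 2.1.4~dev5+gabcdef0
```

When the checkout sits exactly on a tag, `git describe` emits only the tag, fields 1 and 2 match, and the version passes through unchanged.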

View File

@ -1,7 +1,12 @@
cd $(dirname $0)
VERSION=`git describe|cut -d- -f 1`
NUMCOMMITS=`git describe|cut -d- -f 2`
if [ "$NUMCOMMITS" != "$VERSION" ]; then
VERSION=$VERSION.dev$NUMCOMMITS.g`git describe|cut -d- -f 3`
LASTNUM=$(echo $VERSION|rev|cut -d . -f 1|rev)
LASTNUM=$((LASTNUM+1))
FIRSTPART=$(echo $VERSION|rev|cut -d . -f 2- |rev)
VERSION=${FIRSTPART}.${LASTNUM}
VERSION=$VERSION~dev$NUMCOMMITS+`git describe|cut -d- -f 3`
fi
sed -e "s/#VERSION#/$VERSION/" confluent_osdeploy-aarch64.spec.tmpl > confluent_osdeploy-aarch64.spec
cd ..
@ -29,4 +34,5 @@ mv confluent_el8bin.tar.xz ~/rpmbuild/SOURCES/
mv confluent_el9bin.tar.xz ~/rpmbuild/SOURCES/
rm -rf el9bin
rm -rf el8bin
rpmbuild -ba confluent_osdeploy-aarch64.spec
podman run --privileged --rm -v $HOME:/root el8builder rpmbuild -ba /root/confluent/confluent_osdeploy/confluent_osdeploy-aarch64.spec

View File

@ -86,7 +86,7 @@ def map_idx_to_name():
for line in subprocess.check_output(['ip', 'l']).decode('utf8').splitlines():
if line.startswith(' ') and 'link/' in line:
typ = line.split()[0].split('/')[1]
devtype[prevdev] = typ if type != 'ether' else 'ethernet'
devtype[prevdev] = typ if typ != 'ether' else 'ethernet'
if line.startswith(' '):
continue
idx, iface, rst = line.split(':', 2)
@ -151,13 +151,14 @@ class NetplanManager(object):
needcfgapply = False
for devname in devnames:
needcfgwrite = False
if stgs['ipv6_method'] == 'static':
# ipv6_method missing at uconn...
if stgs.get('ipv6_method', None) == 'static':
curraddr = stgs['ipv6_address']
currips = self.getcfgarrpath([devname, 'addresses'])
if curraddr not in currips:
needcfgwrite = True
currips.append(curraddr)
if stgs['ipv4_method'] == 'static':
if stgs.get('ipv4_method', None) == 'static':
curraddr = stgs['ipv4_address']
currips = self.getcfgarrpath([devname, 'addresses'])
if curraddr not in currips:
@ -180,7 +181,7 @@ class NetplanManager(object):
if dnsips:
currdnsips = self.getcfgarrpath([devname, 'nameservers', 'addresses'])
for dnsip in dnsips:
if dnsip not in currdnsips:
if dnsip and dnsip not in currdnsips:
needcfgwrite = True
currdnsips.append(dnsip)
if dnsdomain:
@ -191,8 +192,10 @@ class NetplanManager(object):
if needcfgwrite:
needcfgapply = True
newcfg = {'network': {'version': 2, 'ethernets': {devname: self.cfgbydev[devname]}}}
oumask = os.umask(0o77)
with open('/etc/netplan/{0}-confluentcfg.yaml'.format(devname), 'w') as planout:
planout.write(yaml.dump(newcfg))
os.umask(oumask)
if needcfgapply:
subprocess.call(['netplan', 'apply'])
@ -294,7 +297,8 @@ class WickedManager(object):
class NetworkManager(object):
def __init__(self, devtypes):
def __init__(self, devtypes, deploycfg):
self.deploycfg = deploycfg
self.connections = {}
self.uuidbyname = {}
self.uuidbydev = {}
@ -344,7 +348,7 @@ class NetworkManager(object):
bondcfg[stg] = deats[stg]
if member in self.uuidbyname:
subprocess.check_call(['nmcli', 'c', 'del', self.uuidbyname[member]])
subprocess.check_call(['nmcli', 'c', 'add', 'type', 'team-slave', 'master', team, 'con-name', member, 'connection.interface-name', member])
subprocess.check_call(['nmcli', 'c', 'add', 'type', 'bond-slave', 'master', team, 'con-name', member, 'connection.interface-name', member])
if bondcfg:
args = []
for parm in bondcfg:
@ -366,6 +370,20 @@ class NetworkManager(object):
cmdargs['ipv4.gateway'] = stgs['ipv4_gateway']
if stgs.get('ipv6_gateway', None):
cmdargs['ipv6.gateway'] = stgs['ipv6_gateway']
dnsips = self.deploycfg.get('nameservers', [])
if not dnsips:
dnsips = []
dns4 = []
dns6 = []
for dnsip in dnsips:
if '.' in dnsip:
dns4.append(dnsip)
elif ':' in dnsip:
dns6.append(dnsip)
if dns4:
cmdargs['ipv4.dns'] = ','.join(dns4)
if dns6:
cmdargs['ipv6.dns'] = ','.join(dns6)
if len(cfg['interfaces']) > 1: # team time.. should be..
if not cfg['settings'].get('team_mode', None):
sys.stderr.write("Warning, multiple interfaces ({0}) without a team_mode, skipping setup\n".format(','.join(cfg['interfaces'])))
@ -378,26 +396,45 @@ class NetworkManager(object):
for arg in cmdargs:
cargs.append(arg)
cargs.append(cmdargs[arg])
subprocess.check_call(['nmcli', 'c', 'add', 'type', 'team', 'con-name', cname, 'connection.interface-name', cname, 'team.runner', stgs['team_mode']] + cargs)
if stgs['team_mode'] == 'lacp':
stgs['team_mode'] = '802.3ad'
subprocess.check_call(['nmcli', 'c', 'add', 'type', 'bond', 'con-name', cname, 'connection.interface-name', cname, 'bond.options', 'mode={}'.format(stgs['team_mode'])] + cargs)
for iface in cfg['interfaces']:
self.add_team_member(cname, iface)
subprocess.check_call(['nmcli', 'c', 'u', cname])
else:
cname = stgs.get('connection_name', None)
iname = list(cfg['interfaces'])[0]
if not cname:
cname = iname
ctype = self.devtypes.get(iname, None)
if not ctype:
sys.stderr.write("Warning, no device found for interface_name ({0}), skipping setup\n".format(iname))
return
if stgs.get('vlan_id', None):
vlan = stgs['vlan_id']
if ctype == 'infiniband':
vlan = '0x{0}'.format(vlan) if not vlan.startswith('0x') else vlan
cmdargs['infiniband.parent'] = iname
cmdargs['infiniband.p-key'] = vlan
iname = '{0}.{1}'.format(iname, vlan[2:])
elif ctype == 'ethernet':
ctype = 'vlan'
cmdargs['vlan.parent'] = iname
cmdargs['vlan.id'] = vlan
iname = '{0}.{1}'.format(iname, vlan)
else:
sys.stderr.write("Warning, unknown interface_name ({0}) device type ({1}) for VLAN/PKEY, skipping setup\n".format(iname, ctype))
return
cname = iname if not cname else cname
u = self.uuidbyname.get(cname, None)
cargs = []
for arg in cmdargs:
cargs.append(arg)
cargs.append(cmdargs[arg])
if u:
cmdargs['connection.interface-name'] = iname
subprocess.check_call(['nmcli', 'c', 'm', u] + cargs)
subprocess.check_call(['nmcli', 'c', 'm', u, 'connection.interface-name', iname] + cargs)
subprocess.check_call(['nmcli', 'c', 'u', u])
else:
subprocess.check_call(['nmcli', 'c', 'add', 'type', self.devtypes[iname], 'con-name', cname, 'connection.interface-name', iname] + cargs)
subprocess.check_call(['nmcli', 'c', 'add', 'type', ctype, 'con-name', cname, 'connection.interface-name', iname] + cargs)
self.read_connections()
u = self.uuidbyname.get(cname, None)
if u:
@ -418,6 +455,12 @@ if __name__ == '__main__':
srvs, _ = apiclient.scan_confluents()
doneidxs = set([])
dc = None
if not srvs: # the multicast scan failed, fallback to deploycfg cfg file
with open('/etc/confluent/confluent.deploycfg', 'r') as dci:
for cfgline in dci.read().split('\n'):
if cfgline.startswith('deploy_server:'):
srvs = [cfgline.split()[1]]
break
for srv in srvs:
try:
s = socket.create_connection((srv, 443))
@ -435,10 +478,26 @@ if __name__ == '__main__':
curridx = addr[-1]
if curridx in doneidxs:
continue
status, nc = apiclient.HTTPSClient(usejson=True, host=srv).grab_url_with_status('/confluent-api/self/netcfg')
for tries in (1, 2, 3):
try:
status, nc = apiclient.HTTPSClient(usejson=True, host=srv).grab_url_with_status('/confluent-api/self/netcfg')
break
except Exception:
if tries == 3:
raise
time.sleep(1)
continue
nc = json.loads(nc)
if not dc:
status, dc = apiclient.HTTPSClient(usejson=True, host=srv).grab_url_with_status('/confluent-api/self/deploycfg2')
for tries in (1, 2, 3):
try:
status, dc = apiclient.HTTPSClient(usejson=True, host=srv).grab_url_with_status('/confluent-api/self/deploycfg2')
break
except Exception:
if tries == 3:
raise
time.sleep(1)
continue
dc = json.loads(dc)
iname = get_interface_name(idxmap[curridx], nc.get('default', {}))
if iname:
@ -464,11 +523,13 @@ if __name__ == '__main__':
netname_to_interfaces['default']['interfaces'] -= netname_to_interfaces[netn]['interfaces']
if not netname_to_interfaces['default']['interfaces']:
del netname_to_interfaces['default']
# Make sure VLAN/PKEY connections are created last
netname_to_interfaces = dict(sorted(netname_to_interfaces.items(), key=lambda item: 'vlan_id' in item[1]['settings']))
rm_tmp_llas(tmpllas)
if os.path.exists('/usr/sbin/netplan'):
nm = NetplanManager(dc)
if os.path.exists('/usr/bin/nmcli'):
nm = NetworkManager(devtypes)
nm = NetworkManager(devtypes, dc)
elif os.path.exists('/usr/sbin/wicked'):
nm = WickedManager()
for netn in netname_to_interfaces:
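Two of the changes above are easy to check in isolation: splitting nameservers into IPv4/IPv6 lists by delimiter for `ipv4.dns`/`ipv6.dns`, and sorting the network map so VLAN/PKEY connections are created last (a stable sort on a boolean key keeps non-VLAN entries first, since `False` sorts before `True`). A minimal sketch with hypothetical data:

```python
# Split nameservers by address family, as the nmcli dns change does.
dnsips = ['10.0.0.1', 'fd00::1', '10.0.0.2']
dns4 = [ip for ip in dnsips if '.' in ip]
dns6 = [ip for ip in dnsips if ':' in ip]

# Order networks so entries carrying a vlan_id are configured last.
netname_to_interfaces = {
    'vlannet': {'settings': {'vlan_id': '100'}},
    'default': {'settings': {}},
}
ordered = dict(sorted(netname_to_interfaces.items(),
                      key=lambda item: 'vlan_id' in item[1]['settings']))
print(list(ordered))  # ['default', 'vlannet']
```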

View File

@ -0,0 +1,49 @@
is_suse=false
is_rhel=false
if test -f /boot/efi/EFI/redhat/grub.cfg; then
grubcfg="/boot/efi/EFI/redhat/grub.cfg"
grub2-mkconfig -o $grubcfg
is_rhel=true
elif test -f /boot/efi/EFI/sle_hpc/grub.cfg; then
grubcfg="/boot/efi/EFI/sle_hpc/grub.cfg"
grub2-mkconfig -o $grubcfg
is_suse=true
else
echo "Expected File missing: Check if os sle_hpc or redhat"
exit
fi
# working on SUSE
if $is_suse; then
start=false
num_line=0
lines_to_edit=()
while read line; do
((num_line++))
if [[ $line == *"grub_platform"* ]]; then
start=true
fi
if $start; then
if [[ $line != "#"* ]];then
lines_to_edit+=($num_line)
fi
fi
if [[ ${#line} -eq 2 && $line == *"fi" ]]; then
if $start; then
start=false
fi
fi
done < grub_cnf.cfg
for line_num in "${lines_to_edit[@]}"; do
line_num+="s"
sed -i "${line_num},^,#," $grubcfg
done
sed -i 's,^terminal,#terminal,' $grubcfg
fi
# Working on Redhat
if $is_rhel; then
sed -i 's,^serial,#serial, ; s,^terminal,#terminal,' $grubcfg
fi
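The Red Hat branch above is a plain sed substitution that comments out `serial` and `terminal` directives; a standalone sketch against a throwaway file (the file contents are illustrative, not a real grub.cfg):

```shell
# Comment out serial/terminal directives the way the Red Hat branch does.
grubcfg=$(mktemp)
printf 'serial --unit=0\nterminal_input serial\nset default=0\n' > $grubcfg
sed -i 's,^serial,#serial, ; s,^terminal,#terminal,' $grubcfg
cat $grubcfg
# #serial --unit=0
# #terminal_input serial
# set default=0
```

The `,` delimiter in the `s` command avoids escaping, and lines not starting with `serial`/`terminal` pass through untouched.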

View File

@ -3,6 +3,9 @@
[ -f /opt/confluent/bin/apiclient ] && confapiclient=/opt/confluent/bin/apiclient
[ -f /etc/confluent/apiclient ] && confapiclient=/etc/confluent/apiclient
for pubkey in /etc/ssh/ssh_host*key.pub; do
if [ "$pubkey" = /etc/ssh/ssh_host_key.pub ]; then
continue
fi
certfile=${pubkey/.pub/-cert.pub}
rm $certfile
confluentpython $confapiclient /confluent-api/self/sshcert $pubkey -o $certfile

View File

@ -26,7 +26,7 @@ mkdir -p opt/confluent/bin
mkdir -p stateless-bin
cp -a el8bin/* .
ln -s el8 el9
for os in rhvh4 el7 genesis el8 suse15 ubuntu20.04 ubuntu22.04 coreos el9; do
for os in rhvh4 el7 genesis el8 suse15 ubuntu20.04 ubuntu22.04 ubuntu24.04 coreos el9; do
mkdir ${os}out
cd ${os}out
if [ -d ../${os}bin ]; then
@ -76,7 +76,7 @@ cp -a esxi7 esxi8
%install
mkdir -p %{buildroot}/opt/confluent/share/licenses/confluent_osdeploy/
#cp LICENSE %{buildroot}/opt/confluent/share/licenses/confluent_osdeploy/
for os in rhvh4 el7 el8 el9 genesis suse15 ubuntu20.04 ubuntu22.04 esxi6 esxi7 esxi8 coreos; do
for os in rhvh4 el7 el8 el9 genesis suse15 ubuntu20.04 ubuntu22.04 ubuntu24.04 esxi6 esxi7 esxi8 coreos; do
mkdir -p %{buildroot}/opt/confluent/lib/osdeploy/$os/initramfs/aarch64/
cp ${os}out/addons.* %{buildroot}/opt/confluent/lib/osdeploy/$os/initramfs/aarch64/
if [ -d ${os}disklessout ]; then

View File

@ -28,7 +28,7 @@ This contains support utilities for enabling deployment of x86_64 architecture s
#cp start_root urlmount ../stateless-bin/
#cd ..
ln -s el8 el9
for os in rhvh4 el7 genesis el8 suse15 ubuntu18.04 ubuntu20.04 ubuntu22.04 coreos el9; do
for os in rhvh4 el7 genesis el8 suse15 ubuntu18.04 ubuntu20.04 ubuntu22.04 ubuntu24.04 coreos el9; do
mkdir ${os}out
cd ${os}out
if [ -d ../${os}bin ]; then
@ -42,7 +42,7 @@ for os in rhvh4 el7 genesis el8 suse15 ubuntu18.04 ubuntu20.04 ubuntu22.04 coreo
mv ../addons.cpio .
cd ..
done
for os in el7 el8 suse15 el9 ubuntu20.04 ubuntu22.04; do
for os in el7 el8 suse15 el9 ubuntu20.04 ubuntu22.04 ubuntu24.04; do
mkdir ${os}disklessout
cd ${os}disklessout
if [ -d ../${os}bin ]; then
@ -78,7 +78,7 @@ cp -a esxi7 esxi8
%install
mkdir -p %{buildroot}/opt/confluent/share/licenses/confluent_osdeploy/
cp LICENSE %{buildroot}/opt/confluent/share/licenses/confluent_osdeploy/
for os in rhvh4 el7 el8 el9 genesis suse15 ubuntu20.04 ubuntu18.04 ubuntu22.04 esxi6 esxi7 esxi8 coreos; do
for os in rhvh4 el7 el8 el9 genesis suse15 ubuntu20.04 ubuntu18.04 ubuntu22.04 ubuntu24.04 esxi6 esxi7 esxi8 coreos; do
mkdir -p %{buildroot}/opt/confluent/lib/osdeploy/$os/initramfs
mkdir -p %{buildroot}/opt/confluent/lib/osdeploy/$os/profiles
cp ${os}out/addons.* %{buildroot}/opt/confluent/lib/osdeploy/$os/initramfs

View File

@ -3,6 +3,8 @@ import os
class DiskInfo(object):
def __init__(self, devname):
if devname.startswith('nvme') and 'c' in devname:
raise Exception("Skipping multipath devname")
self.name = devname
self.wwn = None
self.path = None

View File

@ -1,4 +1,5 @@
#!/usr/bin/python
import time
import importlib
import tempfile
import json
@ -223,6 +224,7 @@ def synchronize():
if status == 202:
lastrsp = ''
while status != 204:
time.sleep(2)
status, rsp = ac.grab_url_with_status('/confluent-api/self/remotesyncfiles')
if not isinstance(rsp, str):
rsp = rsp.decode('utf8')

View File

@ -3,6 +3,8 @@ import os
class DiskInfo(object):
def __init__(self, devname):
if devname.startswith('nvme') and 'c' in devname:
raise Exception("Skipping multipath devname")
self.name = devname
self.wwn = None
self.path = None

View File

@ -5,6 +5,7 @@ import json
import os
import shutil
import pwd
import time
import grp
try:
from importlib.machinery import SourceFileLoader
@ -223,6 +224,7 @@ def synchronize():
if status == 202:
lastrsp = ''
while status != 204:
time.sleep(2)
status, rsp = ac.grab_url_with_status('/confluent-api/self/remotesyncfiles')
if not isinstance(rsp, str):
rsp = rsp.decode('utf8')

View File

@ -155,7 +155,7 @@ fi
ready=0
while [ $ready = "0" ]; do
get_remote_apikey
if [[ $confluent_mgr == *:* ]]; then
if [[ $confluent_mgr == *:* ]] && [[ $confluent_mgr != "["* ]]; then
confluent_mgr="[$confluent_mgr]"
fi
tmperr=$(mktemp)
@ -189,7 +189,7 @@ cat > /run/NetworkManager/system-connections/$ifname.nmconnection << EOC
EOC
echo id=${ifname} >> /run/NetworkManager/system-connections/$ifname.nmconnection
echo uuid=$(uuidgen) >> /run/NetworkManager/system-connections/$ifname.nmconnection
linktype=$(ip link |grep -A2 ${ifname}|tail -n 1|awk '{print $1}')
linktype=$(ip link show dev ${ifname}|grep link/|awk '{print $1}')
if [ "$linktype" = link/infiniband ]; then
linktype="infiniband"
else
@ -324,7 +324,7 @@ fi
echo '[proxy]' >> /run/NetworkManager/system-connections/$ifname.nmconnection
chmod 600 /run/NetworkManager/system-connections/*.nmconnection
confluent_websrv=$confluent_mgr
if [[ $confluent_websrv == *:* ]]; then
if [[ $confluent_websrv == *:* ]] && [[ $confluent_websrv != "["* ]]; then
confluent_websrv="[$confluent_websrv]"
fi
echo -n "Initializing ssh..."
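The added `!= "["*` guard above keeps an IPv6 literal from being wrapped in brackets twice when the loop retries; sketched standalone with a sample address (requires bash, as the original does):

```shell
# Wrap an IPv6 literal in brackets for URL use, but only once.
confluent_mgr="fd00::2"
if [[ $confluent_mgr == *:* ]] && [[ $confluent_mgr != "["* ]]; then
    confluent_mgr="[$confluent_mgr]"
fi
echo $confluent_mgr   # [fd00::2]
# A second pass is a no-op because the value now starts with '['.
if [[ $confluent_mgr == *:* ]] && [[ $confluent_mgr != "["* ]]; then
    confluent_mgr="[$confluent_mgr]"
fi
echo $confluent_mgr   # still [fd00::2]
```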

View File

@ -3,6 +3,8 @@ import os
class DiskInfo(object):
def __init__(self, devname):
if devname.startswith('nvme') and 'c' in devname:
raise Exception("Skipping multipath devname")
self.name = devname
self.wwn = None
self.path = None

View File

@ -10,6 +10,7 @@ import stat
import struct
import sys
import subprocess
import traceback
bootuuid = None
@ -426,4 +427,9 @@ def install_to_disk(imgpath):
if __name__ == '__main__':
install_to_disk(os.environ['mountsrc'])
try:
install_to_disk(os.environ['mountsrc'])
except Exception:
traceback.print_exc()
time.sleep(86400)
raise

View File

@ -1,6 +1,6 @@
. /lib/dracut-lib.sh
confluent_whost=$confluent_mgr
if [[ "$confluent_whost" == *:* ]]; then
if [[ "$confluent_whost" == *:* ]] && [[ "$confluent_whost" != "["* ]]; then
confluent_whost="[$confluent_mgr]"
fi
mkdir -p /mnt/remoteimg /mnt/remote /mnt/overlay

View File

@ -16,6 +16,7 @@ if [ -z "$confluent_mgr" ]; then
fi
confluent_profile=$(grep ^profile: /etc/confluent/confluent.deploycfg|awk '{print $2}')
timedatectl set-timezone $(grep ^timezone: /etc/confluent/confluent.deploycfg|awk '{print $2}')
hostnamectl set-hostname $nodename
export nodename confluent_mgr confluent_profile
. /etc/confluent/functions
mkdir -p /var/log/confluent

View File

@ -1,4 +1,6 @@
#!/usr/bin/python3
import random
import time
import subprocess
import importlib
import tempfile
@ -7,6 +9,7 @@ import os
import shutil
import pwd
import grp
import sys
from importlib.machinery import SourceFileLoader
try:
apiclient = SourceFileLoader('apiclient', '/opt/confluent/bin/apiclient').load_module()
@ -227,9 +230,16 @@ def synchronize():
myips.append(addr)
data = json.dumps({'merge': tmpdir, 'appendonce': appendoncedir, 'myips': myips})
status, rsp = ac.grab_url_with_status('/confluent-api/self/remotesyncfiles', data)
if status >= 300:
sys.stderr.write("Error starting syncfiles - {}:\n".format(status))
sys.stderr.write(rsp.decode('utf8'))
sys.stderr.write('\n')
sys.stderr.flush()
return status
if status == 202:
lastrsp = ''
while status != 204:
time.sleep(1+(2*random.random()))
status, rsp = ac.grab_url_with_status('/confluent-api/self/remotesyncfiles')
if not isinstance(rsp, str):
rsp = rsp.decode('utf8')
@ -277,10 +287,21 @@ def synchronize():
os.chmod(fname, int(opts[fname][opt], 8))
if uid != -1 or gid != -1:
os.chown(fname, uid, gid)
return status
finally:
shutil.rmtree(tmpdir)
shutil.rmtree(appendoncedir)
if __name__ == '__main__':
synchronize()
status = 202
while status not in (204, 200):
try:
status = synchronize()
except Exception as e:
sys.stderr.write(str(e))
sys.stderr.write('\n')
sys.stderr.flush()
status = 300
if status not in (204, 200):
time.sleep((random.random()*3)+2)
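The `__main__` loop above keeps calling synchronize() until it reports 200/204, treating exceptions as a retryable status and sleeping 2-5 seconds with jitter between attempts. A minimal sketch of the same pattern with a stubbed synchronize (all names here are hypothetical, and the sleep is scaled down so the sketch runs fast):

```python
import random
import time

def retry_until_done(fn, done=(200, 204), max_sleep=0.01):
    """Call fn until it returns a success status; exceptions count as 300."""
    status = 202
    attempts = 0
    while status not in done:
        try:
            status = fn()
        except Exception:
            status = 300
        attempts += 1
        if status not in done:
            time.sleep(random.random() * max_sleep)  # jittered backoff
    return status, attempts

# Stub that fails twice (once by exception, once by status) then succeeds.
calls = iter([RuntimeError('busy'), 500, 204])
def fake_sync():
    v = next(calls)
    if isinstance(v, Exception):
        raise v
    return v

print(retry_until_done(fake_sync))  # (204, 3)
```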

View File

@ -7,8 +7,8 @@ if ! grep console= /proc/cmdline >& /dev/null; then
if [ -n "$autocons" ]; then
echo console=$autocons |sed -e 's!/dev/!!' >> /tmp/01-autocons.conf
autocons=${autocons%,*}
echo $autocons > /tmp/01-autocons.devnode
echo "Detected firmware specified console at $(cat /tmp/01-autocons.conf)" > $autocons
echo $autocons > /tmp/01-autocons.devnode
echo "Detected firmware specified console at $(cat /tmp/01-autocons.conf)" > $autocons
echo "Initializing auto detected console when installer starts" > $autocons
fi
fi
@ -16,4 +16,5 @@ if grep console=ttyS /proc/cmdline >& /dev/null; then
echo "Serial console has been requested in the kernel arguments, the local video may not show progress" > /dev/tty1
fi
. /lib/anaconda-lib.sh
echo rd.fcoe=0 > /etc/cmdline.d/nofcoe.conf
wait_for_kickstart

View File

@ -2,6 +2,15 @@
[ -e /tmp/confluent.initq ] && return 0
. /lib/dracut-lib.sh
setsid sh -c 'exec bash <> /dev/tty2 >&0 2>&1' &
if [ -f /tmp/dd_disk ]; then
for dd in $(cat /tmp/dd_disk); do
if [ -e $dd ]; then
driver-updates --disk $dd $dd
rm $dd
fi
done
rm /tmp/dd_disk
fi
udevadm trigger
udevadm trigger --type=devices --action=add
udevadm settle
@ -20,13 +29,6 @@ function confluentpython() {
/usr/bin/python2 $*
fi
}
if [ -f /tmp/dd_disk ]; then
for dd in $(cat /tmp/dd_disk); do
if [ -e $dd ]; then
driver-updates --disk $dd $dd
fi
done
fi
vlaninfo=$(getarg vlan)
if [ ! -z "$vlaninfo" ]; then
vldev=${vlaninfo#*:}
@ -61,43 +63,52 @@ if [ -e /dev/disk/by-label/CNFLNT_IDNT ]; then
udevadm info $i | grep ID_NET_DRIVER=cdc_ether > /dev/null && continue
ip link set $(basename $i) up
done
for NICGUESS in $(ip link|grep LOWER_UP|grep -v LOOPBACK| awk '{print $2}' | sed -e 's/:$//'); do
if [ "$autoconfigmethod" = "dhcp" ]; then
/usr/libexec/nm-initrd-generator ip=$NICGUESS:dhcp
else
v4addr=$(grep ^ipv4_address: $tcfg)
v4addr=${v4addr#ipv4_address: }
v4plen=${v4addr#*/}
v4addr=${v4addr%/*}
v4gw=$(grep ^ipv4_gateway: $tcfg)
v4gw=${v4gw#ipv4_gateway: }
ip addr add dev $NICGUESS $v4addr/$v4plen
if [ "$v4gw" = "null" ]; then
v4gw=""
fi
if [ ! -z "$v4gw" ]; then
ip route add default via $v4gw
fi
v4nm=$(grep ipv4_netmask: $tcfg)
v4nm=${v4nm#ipv4_netmask: }
DETECTED=0
for dsrv in $deploysrvs; do
if curl --capath /tls/ -s --connect-timeout 3 https://$dsrv/confluent-public/ > /dev/null; then
rm /run/NetworkManager/system-connections/*
/usr/libexec/nm-initrd-generator ip=$v4addr::$v4gw:$v4nm:$hostname:$NICGUESS:none
DETECTED=1
ifname=$NICGUESS
TRIES=30
DETECTED=0
while [ "$DETECTED" = 0 ] && [ $TRIES -gt 0 ]; do
TRIES=$((TRIES - 1))
for NICGUESS in $(ip link|grep LOWER_UP|grep -v LOOPBACK| awk '{print $2}' | sed -e 's/:$//'); do
if [ "$autoconfigmethod" = "dhcp" ]; then
/usr/libexec/nm-initrd-generator ip=$NICGUESS:dhcp
else
v4addr=$(grep ^ipv4_address: $tcfg)
v4addr=${v4addr#ipv4_address: }
v4plen=${v4addr#*/}
v4addr=${v4addr%/*}
v4gw=$(grep ^ipv4_gateway: $tcfg)
v4gw=${v4gw#ipv4_gateway: }
ip addr add dev $NICGUESS $v4addr/$v4plen
if [ "$v4gw" = "null" ]; then
v4gw=""
fi
if [ ! -z "$v4gw" ]; then
ip route add default via $v4gw
fi
v4nm=$(grep ipv4_netmask: $tcfg)
v4nm=${v4nm#ipv4_netmask: }
DETECTED=0
for dsrv in $deploysrvs; do
if curl --capath /tls/ -s --connect-timeout 3 https://$dsrv/confluent-public/ > /dev/null; then
rm /run/NetworkManager/system-connections/*
/usr/libexec/nm-initrd-generator ip=$v4addr::$v4gw:$v4nm:$hostname:$NICGUESS:none
DETECTED=1
ifname=$NICGUESS
break
fi
done
if [ ! -z "$v4gw" ]; then
ip route del default via $v4gw
fi
ip addr flush dev $NICGUESS
if [ $DETECTED = 1 ]; then
break
fi
done
if [ ! -z "$v4gw" ]; then
ip route del default via $v4gw
fi
ip addr flush dev $NICGUESS
if [ $DETECTED = 1 ]; then
break
fi
fi
done
done
for NICGUESS in $(ip link|grep LOWER_UP|grep -v LOOPBACK| awk '{print $2}' | sed -e 's/:$//'); do
ip addr flush dev $NICGUESS
ip link set $NICGUESS down
done
NetworkManager --configure-and-quit=initrd --no-daemon
hmackeyfile=/tmp/cnflnthmackeytmp
@ -175,7 +186,7 @@ if [ ! -z "$autocons" ]; then
errout="-e $autocons"
fi
while ! confluentpython /opt/confluent/bin/apiclient $errout /confluent-api/self/deploycfg2 > /etc/confluent/confluent.deploycfg; do
sleep 10
sleep 10
done
ifidx=$(cat /tmp/confluent.ifidx 2> /dev/null)
if [ -z "$ifname" ]; then
@ -216,23 +227,38 @@ proto=${proto#protocol: }
textconsole=$(grep ^textconsole: /etc/confluent/confluent.deploycfg)
textconsole=${textconsole#textconsole: }
if [ "$textconsole" = "true" ] && ! grep console= /proc/cmdline > /dev/null; then
autocons=$(cat /tmp/01-autocons.devnode)
if [ ! -z "$autocons" ]; then
echo Auto-configuring installed system to use text console
echo Auto-configuring installed system to use text console > $autocons
autocons=$(cat /tmp/01-autocons.devnode)
if [ ! -z "$autocons" ]; then
echo Auto-configuring installed system to use text console
echo Auto-configuring installed system to use text console > $autocons
/opt/confluent/bin/autocons -c > /dev/null
cp /tmp/01-autocons.conf /etc/cmdline.d/
else
echo "Unable to automatically detect requested text console"
fi
cp /tmp/01-autocons.conf /etc/cmdline.d/
else
echo "Unable to automatically detect requested text console"
fi
fi
echo inst.repo=$proto://$mgr/confluent-public/os/$profilename/distribution >> /etc/cmdline.d/01-confluent.conf
. /etc/os-release
if [ "$ID" = "dracut" ]; then
ID=$(echo $PRETTY_NAME|awk '{print $1}')
VERSION_ID=$(echo $VERSION|awk '{print $1}')
if [ "$ID" = "Oracle" ]; then
ID=OL
elif [ "$ID" = "Red" ]; then
ID=RHEL
fi
fi
ISOSRC=$(blkid -t TYPE=iso9660|grep -Ei ' LABEL="'$ID-$VERSION_ID|sed -e s/:.*//)
if [ -z "$ISOSRC" ]; then
echo inst.repo=$proto://$mgr/confluent-public/os/$profilename/distribution >> /etc/cmdline.d/01-confluent.conf
root=anaconda-net:$proto://$mgr/confluent-public/os/$profilename/distribution
export root
else
echo inst.repo=cdrom:$ISOSRC >> /etc/cmdline.d/01-confluent.conf
fi
echo inst.ks=$proto://$mgr/confluent-public/os/$profilename/kickstart >> /etc/cmdline.d/01-confluent.conf
kickstart=$proto://$mgr/confluent-public/os/$profilename/kickstart
root=anaconda-net:$proto://$mgr/confluent-public/os/$profilename/distribution
export kickstart
export root
autoconfigmethod=$(grep ipv4_method /etc/confluent/confluent.deploycfg)
autoconfigmethod=${autoconfigmethod#ipv4_method: }
if [ "$autoconfigmethod" = "dhcp" ]; then
@ -312,4 +338,8 @@ if [ -e /lib/nm-lib.sh ]; then
fi
fi
fi
for NICGUESS in $(ip link|grep LOWER_UP|grep -v LOOPBACK| awk '{print $2}' | sed -e 's/:$//'); do
ip addr flush dev $NICGUESS
ip link set $NICGUESS down
done

View File

@ -27,7 +27,7 @@ with open('/etc/confluent/confluent.deploycfg') as dplcfgfile:
_, profile = line.split(' ', 1)
if line.startswith('ipv4_method: '):
_, v4cfg = line.split(' ', 1)
if v4cfg == 'static' or v4cfg =='dhcp':
if v4cfg == 'static' or v4cfg =='dhcp' or not server6:
server = server4
if not server:
server = '[{}]'.format(server6)
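The fallback above prefers the IPv4 server whenever IPv4 is configured (or no IPv6 server exists), and otherwise brackets the IPv6 literal for URL use. A standalone sketch with sample values (function name and addresses are illustrative):

```python
def pick_server(server4, server6, v4cfg):
    # Prefer IPv4 when v4 networking is configured or there is no v6 server.
    server = None
    if v4cfg == 'static' or v4cfg == 'dhcp' or not server6:
        server = server4
    if not server:
        server = '[{}]'.format(server6)
    return server

print(pick_server('10.1.1.1', 'fd00::1', 'dhcp'))  # 10.1.1.1
print(pick_server('', 'fd00::1', 'manual'))        # [fd00::1]
```

Note the second `if not server` guard: even when IPv4 is preferred, a missing IPv4 address still falls through to the bracketed IPv6 server.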

View File

@ -3,6 +3,8 @@ import os
class DiskInfo(object):
def __init__(self, devname):
if devname.startswith('nvme') and 'c' in devname:
raise Exception("Skipping multipath devname")
self.name = devname
self.wwn = None
self.path = None

View File

@ -90,8 +90,14 @@ touch /tmp/cryptpkglist
touch /tmp/pkglist
touch /tmp/addonpackages
if [ "$cryptboot" == "tpm2" ]; then
LUKSPARTY="--encrypted --passphrase=$(cat /etc/confluent/confluent.apikey)"
echo $cryptboot >> /tmp/cryptboot
lukspass=$(python3 /opt/confluent/bin/apiclient /confluent-api/self/profileprivate/pending/luks.key 2> /dev/null)
if [ -z "$lukspass" ]; then
lukspass=$(python3 -c 'import os;import base64;print(base64.b64encode(os.urandom(66)).decode())')
fi
echo $lukspass > /etc/confluent/luks.key
chmod 000 /etc/confluent/luks.key
LUKSPARTY="--encrypted --passphrase=$lukspass"
echo $cryptboot >> /tmp/cryptboot
echo clevis-dracut >> /tmp/cryptpkglist
fi
@ -114,8 +120,8 @@ confluentpython /etc/confluent/apiclient /confluent-public/os/$confluent_profile
grep '^%include /tmp/partitioning' /tmp/kickstart.* > /dev/null || rm /tmp/installdisk
if [ -e /tmp/installdisk -a ! -e /tmp/partitioning ]; then
INSTALLDISK=$(cat /tmp/installdisk)
sed -e s/%%INSTALLDISK%%/$INSTALLDISK/ -e s/%%LUKSHOOK%%/$LUKSPARTY/ /tmp/partitioning.template > /tmp/partitioning
dd if=/dev/zero of=/dev/$(cat /tmp/installdisk) bs=1M count=1 >& /dev/null
sed -e s/%%INSTALLDISK%%/$INSTALLDISK/ -e "s!%%LUKSHOOK%%!$LUKSPARTY!" /tmp/partitioning.template > /tmp/partitioning
vgchange -a n >& /dev/null
wipefs -a -f /dev/$INSTALLDISK >& /dev/null
fi
kill $logshowpid
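The fallback LUKS passphrase above is 66 random bytes, base64-encoded; since 66 is divisible by 3, the result is exactly 88 characters with no padding. A standalone sketch of that one-liner:

```python
import base64
import os

# High-entropy passphrase as the fallback branch generates it:
# 66 random bytes -> 88 base64 characters, no '=' padding.
lukspass = base64.b64encode(os.urandom(66)).decode()
print(len(lukspass))  # 88
```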

View File

@ -1,8 +1,12 @@
#!/bin/sh
grep HostCert /etc/ssh/sshd_config.anaconda >> /mnt/sysimage/etc/ssh/sshd_config
echo HostbasedAuthentication yes >> /mnt/sysimage/etc/ssh/sshd_config
echo HostbasedUsesNameFromPacketOnly yes >> /mnt/sysimage/etc/ssh/sshd_config
echo IgnoreRhosts no >> /mnt/sysimage/etc/ssh/sshd_config
targssh=/mnt/sysimage/etc/ssh/sshd_config
if [ -d /mnt/sysimage/etc/ssh/sshd_config.d/ ]; then
targssh=/mnt/sysimage/etc/ssh/sshd_config.d/90-confluent.conf
fi
grep HostCert /etc/ssh/sshd_config.anaconda >> $targssh
echo HostbasedAuthentication yes >> $targssh
echo HostbasedUsesNameFromPacketOnly yes >> $targssh
echo IgnoreRhosts no >> $targssh
sshconf=/mnt/sysimage/etc/ssh/ssh_config
if [ -d /mnt/sysimage/etc/ssh/ssh_config.d/ ]; then
sshconf=/mnt/sysimage/etc/ssh/ssh_config.d/01-confluent.conf

View File

@ -1,4 +1,6 @@
#!/usr/bin/python3
import random
import time
import subprocess
import importlib
import tempfile
@ -7,6 +9,7 @@ import os
import shutil
import pwd
import grp
import sys
from importlib.machinery import SourceFileLoader
try:
apiclient = SourceFileLoader('apiclient', '/opt/confluent/bin/apiclient').load_module()
@ -227,9 +230,16 @@ def synchronize():
myips.append(addr)
data = json.dumps({'merge': tmpdir, 'appendonce': appendoncedir, 'myips': myips})
status, rsp = ac.grab_url_with_status('/confluent-api/self/remotesyncfiles', data)
if status >= 300:
sys.stderr.write("Error starting syncfiles - {}:\n".format(status))
sys.stderr.write(rsp.decode('utf8'))
sys.stderr.write('\n')
sys.stderr.flush()
return status
if status == 202:
lastrsp = ''
while status != 204:
time.sleep(1+(2*random.random()))
status, rsp = ac.grab_url_with_status('/confluent-api/self/remotesyncfiles')
if not isinstance(rsp, str):
rsp = rsp.decode('utf8')
@ -277,10 +287,21 @@ def synchronize():
os.chmod(fname, int(opts[fname][opt], 8))
if uid != -1 or gid != -1:
os.chown(fname, uid, gid)
return status
finally:
shutil.rmtree(tmpdir)
shutil.rmtree(appendoncedir)
if __name__ == '__main__':
synchronize()
status = 202
while status not in (204, 200):
try:
status = synchronize()
except Exception as e:
sys.stderr.write(str(e))
sys.stderr.write('\n')
sys.stderr.flush()
status = 300
if status not in (204, 200):
time.sleep((random.random()*3)+2)
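The retry loop added to syncfileclient above can be sketched in isolation: keep calling the sync routine until it reports success, sleeping a randomized 2-5 seconds between attempts so many nodes do not retry against the deployment server in lockstep. `retry_until_done` is an illustrative name, not part of the patch:

```python
import random
import time

def retry_until_done(fn, ok=(200, 204), max_tries=5, sleep=time.sleep):
    """Call fn() until it returns a success status, with jittered
    backoff between attempts, mirroring the __main__ loop above."""
    status = None
    for _ in range(max_tries):
        try:
            status = fn()
        except Exception:
            status = 300  # treat any exception as a retryable failure
        if status in ok:
            return status
        sleep((random.random() * 3) + 2)
    return status
```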


@ -1,4 +1,5 @@
#!/bin/sh
cryptdisk=$(blkid -t TYPE="crypto_LUKS"|sed -e s/:.*//)
clevis luks bind -f -d $cryptdisk -k - tpm2 '{}' < /etc/confluent/confluent.apikey
cryptsetup luksRemoveKey $cryptdisk < /etc/confluent/confluent.apikey
clevis luks bind -f -d $cryptdisk -k - tpm2 '{}' < /etc/confluent/luks.key
chmod 000 /etc/confluent/luks.key
#cryptsetup luksRemoveKey $cryptdisk < /etc/confluent/confluent.apikey


@ -120,7 +120,7 @@ fi
ready=0
while [ $ready = "0" ]; do
get_remote_apikey
if [[ $confluent_mgr == *:* ]]; then
if [[ $confluent_mgr == *:* ]] && [[ $confluent_mgr != "["* ]]; then
confluent_mgr="[$confluent_mgr]"
fi
tmperr=$(mktemp)
@ -154,7 +154,7 @@ cat > /run/NetworkManager/system-connections/$ifname.nmconnection << EOC
EOC
echo id=${ifname} >> /run/NetworkManager/system-connections/$ifname.nmconnection
echo uuid=$(uuidgen) >> /run/NetworkManager/system-connections/$ifname.nmconnection
linktype=$(ip link |grep -A2 ${ifname}|tail -n 1|awk '{print $1}')
linktype=$(ip link show dev ${ifname}|grep link/|awk '{print $1}')
if [ "$linktype" = link/infiniband ]; then
linktype="infiniband"
else
@ -171,6 +171,13 @@ permissions=
wait-device-timeout=60000
EOC
if [ "$linktype" = infiniband ]; then
cat >> /run/NetworkManager/system-connections/$ifname.nmconnection << EOC
[infiniband]
transport-mode=datagram
EOC
fi
autoconfigmethod=$(grep ^ipv4_method: /etc/confluent/confluent.deploycfg |awk '{print $2}')
auto6configmethod=$(grep ^ipv6_method: /etc/confluent/confluent.deploycfg |awk '{print $2}')
if [ "$autoconfigmethod" = "dhcp" ]; then
@ -281,7 +288,7 @@ fi
echo '[proxy]' >> /run/NetworkManager/system-connections/$ifname.nmconnection
chmod 600 /run/NetworkManager/system-connections/*.nmconnection
confluent_websrv=$confluent_mgr
if [[ $confluent_websrv == *:* ]]; then
if [[ $confluent_websrv == *:* ]] && [[ $confluent_websrv != "["* ]]; then
confluent_websrv="[$confluent_websrv]"
fi
echo -n "Initializing ssh..."
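The extra guard added above (`[[ $confluent_mgr != "["* ]]`) keeps an IPv6 literal from being bracketed twice before it is embedded in a URL. The same check can be expressed as a small Python helper; `bracket_ipv6` is an illustrative name, not from the patch:

```python
def bracket_ipv6(host):
    # Wrap a bare IPv6 literal in [] so it can be embedded in a URL;
    # hostnames, IPv4 addresses, and already-bracketed literals pass
    # through unchanged, matching the shell test in the hunk above.
    if ':' in host and not host.startswith('['):
        return '[{}]'.format(host)
    return host
```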


@ -9,9 +9,16 @@ HOME=$(getent passwd $(whoami)|cut -d: -f 6)
export HOME
nodename=$(grep ^NODENAME /etc/confluent/confluent.info|awk '{print $2}')
confluent_apikey=$(cat /etc/confluent/confluent.apikey)
confluent_mgr=$(grep ^deploy_server: /etc/confluent/confluent.deploycfg|awk '{print $2}')
confluent_mgr=$(grep ^deploy_server_v6: /etc/confluent/confluent.deploycfg|awk '{print $2}')
if [ -z "$confluent_mgr" ] || [ "$confluent_mgr" == "null" ] || ! ping -c 1 $confluent_mgr >& /dev/null; then
confluent_mgr=$(grep ^deploy_server: /etc/confluent/confluent.deploycfg|awk '{print $2}')
fi
confluent_websrv=$confluent_mgr
if [[ "$confluent_mgr" == *:* ]]; then
confluent_websrv="[$confluent_mgr]"
fi
confluent_profile=$(grep ^profile: /etc/confluent/confluent.deploycfg|awk '{print $2}')
export nodename confluent_mgr confluent_profile
export nodename confluent_mgr confluent_profile confluent_websrv
. /etc/confluent/functions
(
exec >> /var/log/confluent/confluent-firstboot.log
@ -34,7 +41,7 @@ if [ ! -f /etc/confluent/firstboot.ran ]; then
run_remote_config firstboot.d
fi
curl -X POST -d 'status: complete' -H "CONFLUENT_NODENAME: $nodename" -H "CONFLUENT_APIKEY: $confluent_apikey" https://$confluent_mgr/confluent-api/self/updatestatus
curl -X POST -d 'status: complete' -H "CONFLUENT_NODENAME: $nodename" -H "CONFLUENT_APIKEY: $confluent_apikey" https://$confluent_websrv/confluent-api/self/updatestatus
systemctl disable firstboot
rm /etc/systemd/system/firstboot.service
rm /etc/confluent/firstboot.ran


@ -3,6 +3,8 @@ import os
class DiskInfo(object):
def __init__(self, devname):
if devname.startswith('nvme') and 'c' in devname:
raise Exception("Skipping multipath devname")
self.name = devname
self.wwn = None
self.path = None


@ -10,8 +10,16 @@ import stat
import struct
import sys
import subprocess
import traceback
bootuuid = None
vgname = 'localstorage'
oldvgname = None
def convert_lv(oldlvname):
if oldvgname is None:
return None
return oldlvname.replace(oldvgname, vgname)
def get_partname(devname, idx):
if devname[-1] in '0123456789':
@ -53,6 +61,8 @@ def get_image_metadata(imgpath):
header = img.read(16)
if header == b'\x63\x7b\x9d\x26\xb7\xfd\x48\x30\x89\xf9\x11\xcf\x18\xfd\xff\xa1':
for md in get_multipart_image_meta(img):
if md.get('device', '').startswith('/dev/zram'):
continue
yield md
else:
raise Exception('Installation from single part image not supported')
@ -86,14 +96,14 @@ def fixup(rootdir, vols):
if tab.startswith('#ORIGFSTAB#'):
if entry[1] in devbymount:
targetdev = devbymount[entry[1]]
if targetdev.startswith('/dev/localstorage/'):
if targetdev.startswith('/dev/{}/'.format(vgname)):
entry[0] = targetdev
else:
uuid = subprocess.check_output(['blkid', '-s', 'UUID', '-o', 'value', targetdev]).decode('utf8')
uuid = uuid.strip()
entry[0] = 'UUID={}'.format(uuid)
elif entry[2] == 'swap':
entry[0] = '/dev/mapper/localstorage-swap'
entry[0] = '/dev/mapper/{}-swap'.format(vgname.replace('-', '--'))
entry[0] = entry[0].ljust(42)
entry[1] = entry[1].ljust(16)
entry[3] = entry[3].ljust(28)
@ -141,6 +151,46 @@ def fixup(rootdir, vols):
grubsyscfg = os.path.join(rootdir, 'etc/sysconfig/grub')
if not os.path.exists(grubsyscfg):
grubsyscfg = os.path.join(rootdir, 'etc/default/grub')
kcmdline = os.path.join(rootdir, 'etc/kernel/cmdline')
if os.path.exists(kcmdline):
with open(kcmdline) as kcmdlinein:
kcmdlinecontent = kcmdlinein.read()
newkcmdlineent = []
for ent in kcmdlinecontent.split():
if ent.startswith('resume='):
newkcmdlineent.append('resume={}'.format(newswapdev))
elif ent.startswith('root='):
newkcmdlineent.append('root={}'.format(newrootdev))
elif ent.startswith('rd.lvm.lv='):
ent = convert_lv(ent)
if ent:
newkcmdlineent.append(ent)
else:
newkcmdlineent.append(ent)
with open(kcmdline, 'w') as kcmdlineout:
kcmdlineout.write(' '.join(newkcmdlineent) + '\n')
for loadent in glob.glob(os.path.join(rootdir, 'boot/loader/entries/*.conf')):
with open(loadent) as loadentin:
currentry = loadentin.read().split('\n')
with open(loadent, 'w') as loadentout:
for cfgline in currentry:
cfgparts = cfgline.split()
if not cfgparts or cfgparts[0] != 'options':
loadentout.write(cfgline + '\n')
continue
newcfgparts = [cfgparts[0]]
for cfgpart in cfgparts[1:]:
if cfgpart.startswith('root='):
newcfgparts.append('root={}'.format(newrootdev))
elif cfgpart.startswith('resume='):
newcfgparts.append('resume={}'.format(newswapdev))
elif cfgpart.startswith('rd.lvm.lv='):
cfgpart = convert_lv(cfgpart)
if cfgpart:
newcfgparts.append(cfgpart)
else:
newcfgparts.append(cfgpart)
loadentout.write(' '.join(newcfgparts) + '\n')
with open(grubsyscfg) as defgrubin:
defgrub = defgrubin.read().split('\n')
with open(grubsyscfg, 'w') as defgrubout:
@ -148,9 +198,18 @@ def fixup(rootdir, vols):
gline = gline.split()
newline = []
for ent in gline:
if ent.startswith('resume=') or ent.startswith('rd.lvm.lv'):
continue
newline.append(ent)
if ent.startswith('resume='):
newline.append('resume={}'.format(newswapdev))
elif ent.startswith('root='):
newline.append('root={}'.format(newrootdev))
elif ent.startswith('rd.lvm.lv='):
ent = convert_lv(ent)
if ent:
newline.append(ent)
elif '""' in ent:
newline.append('""')
else:
newline.append(ent)
defgrubout.write(' '.join(newline) + '\n')
grubcfg = subprocess.check_output(['find', os.path.join(rootdir, 'boot'), '-name', 'grub.cfg']).decode('utf8').strip().replace(rootdir, '/').replace('//', '/')
grubcfg = grubcfg.split('\n')
@ -227,8 +286,14 @@ def had_swap():
return True
return False
newrootdev = None
newswapdev = None
def install_to_disk(imgpath):
global bootuuid
global newrootdev
global newswapdev
global vgname
global oldvgname
lvmvols = {}
deftotsize = 0
mintotsize = 0
@ -260,6 +325,13 @@ def install_to_disk(imgpath):
biggestfs = fs
biggestsize = fs['initsize']
if fs['device'].startswith('/dev/mapper'):
oldvgname = fs['device'].rsplit('/', 1)[-1]
# if the node name has a - then /dev/mapper will double up the hyphen
if '_' in oldvgname and '-' in oldvgname.split('_')[-1]:
oldvgname = oldvgname.rsplit('-', 1)[0].replace('--', '-')
osname = oldvgname.split('_')[0]
nodename = socket.gethostname().split('.')[0]
vgname = '{}_{}'.format(osname, nodename)
lvmvols[fs['device'].replace('/dev/mapper/', '')] = fs
deflvmsize += fs['initsize']
minlvmsize += fs['minsize']
@ -304,6 +376,8 @@ def install_to_disk(imgpath):
end = sectors
parted.run('mkpart primary {}s {}s'.format(curroffset, end))
vol['targetdisk'] = get_partname(instdisk, volidx)
if vol['mount'] == '/':
newrootdev = vol['targetdisk']
curroffset += size + 1
if not lvmvols:
if swapsize:
@ -313,13 +387,14 @@ def install_to_disk(imgpath):
if end > sectors:
end = sectors
parted.run('mkpart swap {}s {}s'.format(curroffset, end))
subprocess.check_call(['mkswap', get_partname(instdisk, volidx + 1)])
newswapdev = get_partname(instdisk, volidx + 1)
subprocess.check_call(['mkswap', newswapdev])
else:
parted.run('mkpart lvm {}s 100%'.format(curroffset))
lvmpart = get_partname(instdisk, volidx + 1)
subprocess.check_call(['pvcreate', '-ff', '-y', lvmpart])
subprocess.check_call(['vgcreate', 'localstorage', lvmpart])
vginfo = subprocess.check_output(['vgdisplay', 'localstorage', '--units', 'b']).decode('utf8')
subprocess.check_call(['vgcreate', vgname, lvmpart])
vginfo = subprocess.check_output(['vgdisplay', vgname, '--units', 'b']).decode('utf8')
vginfo = vginfo.split('\n')
pesize = 0
pes = 0
@ -346,13 +421,17 @@ def install_to_disk(imgpath):
extents += 1
if vol['mount'] == '/':
lvname = 'root'
else:
lvname = vol['mount'].replace('/', '_')
subprocess.check_call(['lvcreate', '-l', '{}'.format(extents), '-y', '-n', lvname, 'localstorage'])
vol['targetdisk'] = '/dev/localstorage/{}'.format(lvname)
subprocess.check_call(['lvcreate', '-l', '{}'.format(extents), '-y', '-n', lvname, vgname])
vol['targetdisk'] = '/dev/{}/{}'.format(vgname, lvname)
if vol['mount'] == '/':
newrootdev = vol['targetdisk']
if swapsize:
subprocess.check_call(['lvcreate', '-y', '-l', '{}'.format(swapsize // pesize), '-n', 'swap', 'localstorage'])
subprocess.check_call(['mkswap', '/dev/localstorage/swap'])
subprocess.check_call(['lvcreate', '-y', '-l', '{}'.format(swapsize // pesize), '-n', 'swap', vgname])
subprocess.check_call(['mkswap', '/dev/{}/swap'.format(vgname)])
newswapdev = '/dev/{}/swap'.format(vgname)
os.makedirs('/run/imginst/targ')
for vol in allvols:
with open(vol['targetdisk'], 'wb') as partition:
@ -426,4 +505,9 @@ def install_to_disk(imgpath):
if __name__ == '__main__':
install_to_disk(os.environ['mountsrc'])
try:
install_to_disk(os.environ['mountsrc'])
except Exception:
traceback.print_exc()
time.sleep(86400)
raise
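The `oldvgname` recovery in the hunk above relies on device-mapper's escaping rule: a hyphen inside a VG or LV name is doubled, and the VG and LV names are joined with a single hyphen. A standalone sketch of that split, assuming the standard escaping; `split_dm_name` is a hypothetical helper, not part of the patch:

```python
def split_dm_name(dmname):
    # /dev/mapper doubles any '-' that is part of a VG or LV name and
    # joins VG and LV with a single '-', e.g. VG "rhel_node-a" plus
    # LV "root" appears as "rhel_node--a-root".
    fields = []
    cur = []
    i = 0
    while i < len(dmname):
        if dmname[i:i + 2] == '--':
            cur.append('-')  # escaped hyphen inside a name
            i += 2
        elif dmname[i] == '-':
            fields.append(''.join(cur))  # single hyphen: VG/LV boundary
            cur = []
            i += 1
        else:
            cur.append(dmname[i])
            i += 1
    fields.append(''.join(cur))
    if len(fields) == 1:
        return fields[0], None
    return fields[0], '-'.join(fields[1:])
```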


@ -1,6 +1,6 @@
. /lib/dracut-lib.sh
confluent_whost=$confluent_mgr
if [[ "$confluent_whost" == *:* ]]; then
if [[ "$confluent_whost" == *:* ]] && [[ "$confluent_whost" != "["* ]]; then
confluent_whost="[$confluent_mgr]"
fi
mkdir -p /mnt/remoteimg /mnt/remote /mnt/overlay


@ -30,6 +30,7 @@ if [ ! -f /sysroot/tmp/installdisk ]; then
done
fi
lvm vgchange -a n
/sysroot/usr/sbin/wipefs -a /dev/$(cat /sysroot/tmp/installdisk)
udevadm control -e
if [ -f /sysroot/etc/lvm/devices/system.devices ]; then
rm /sysroot/etc/lvm/devices/system.devices


@ -16,6 +16,7 @@ if [ -z "$confluent_mgr" ]; then
fi
confluent_profile=$(grep ^profile: /etc/confluent/confluent.deploycfg|awk '{print $2}')
timedatectl set-timezone $(grep ^timezone: /etc/confluent/confluent.deploycfg|awk '{print $2}')
hostnamectl set-hostname $nodename
export nodename confluent_mgr confluent_profile
. /etc/confluent/functions
mkdir -p /var/log/confluent


@ -5,9 +5,16 @@
nodename=$(grep ^NODENAME /etc/confluent/confluent.info|awk '{print $2}')
confluent_apikey=$(cat /etc/confluent/confluent.apikey)
confluent_mgr=$(grep ^deploy_server: /etc/confluent/confluent.deploycfg|awk '{print $2}')
confluent_profile=$(grep ^profile: /etc/confluent/confluent.deploycfg|awk '{print $2}')
export nodename confluent_mgr confluent_profile
confluent_mgr=$(grep ^deploy_server_v6: /etc/confluent/confluent.deploycfg|awk '{print $2}')
if [ -z "$confluent_mgr" ] || [ "$confluent_mgr" == "null" ] || ! ping -c 1 $confluent_mgr >& /dev/null; then
confluent_mgr=$(grep ^deploy_server: /etc/confluent/confluent.deploycfg|awk '{print $2}')
fi
confluent_websrv=$confluent_mgr
if [[ "$confluent_mgr" == *:* ]]; then
confluent_websrv="[$confluent_mgr]"
fi
export nodename confluent_mgr confluent_profile confluent_websrv
. /etc/confluent/functions
mkdir -p /var/log/confluent
chmod 700 /var/log/confluent
@ -16,9 +23,9 @@ exec 2>> /var/log/confluent/confluent-post.log
chmod 600 /var/log/confluent/confluent-post.log
tail -f /var/log/confluent/confluent-post.log > /dev/console &
logshowpid=$!
curl -f https://$confluent_mgr/confluent-public/os/$confluent_profile/scripts/firstboot.service > /etc/systemd/system/firstboot.service
curl -f https://$confluent_websrv/confluent-public/os/$confluent_profile/scripts/firstboot.service > /etc/systemd/system/firstboot.service
mkdir -p /opt/confluent/bin
curl -f https://$confluent_mgr/confluent-public/os/$confluent_profile/scripts/firstboot.sh > /opt/confluent/bin/firstboot.sh
curl -f https://$confluent_websrv/confluent-public/os/$confluent_profile/scripts/firstboot.sh > /opt/confluent/bin/firstboot.sh
chmod +x /opt/confluent/bin/firstboot.sh
systemctl enable firstboot
selinuxpolicy=$(grep ^SELINUXTYPE /etc/selinux/config |awk -F= '{print $2}')
@ -33,7 +40,7 @@ run_remote_parts post.d
# Induce execution of remote configuration, e.g. ansible plays in ansible/post.d/
run_remote_config post.d
curl -sf -X POST -d 'status: staged' -H "CONFLUENT_NODENAME: $nodename" -H "CONFLUENT_APIKEY: $confluent_apikey" https://$confluent_mgr/confluent-api/self/updatestatus
curl -sf -X POST -d 'status: staged' -H "CONFLUENT_NODENAME: $nodename" -H "CONFLUENT_APIKEY: $confluent_apikey" https://$confluent_websrv/confluent-api/self/updatestatus
kill $logshowpid


@ -1,4 +1,6 @@
#!/usr/bin/python3
import random
import time
import subprocess
import importlib
import tempfile
@ -7,6 +9,7 @@ import os
import shutil
import pwd
import grp
import sys
from importlib.machinery import SourceFileLoader
try:
apiclient = SourceFileLoader('apiclient', '/opt/confluent/bin/apiclient').load_module()
@ -227,9 +230,16 @@ def synchronize():
myips.append(addr)
data = json.dumps({'merge': tmpdir, 'appendonce': appendoncedir, 'myips': myips})
status, rsp = ac.grab_url_with_status('/confluent-api/self/remotesyncfiles', data)
if status >= 300:
sys.stderr.write("Error starting syncfiles - {}:\n".format(status))
sys.stderr.write(rsp.decode('utf8'))
sys.stderr.write('\n')
sys.stderr.flush()
return status
if status == 202:
lastrsp = ''
while status != 204:
time.sleep(1+(2*random.random()))
status, rsp = ac.grab_url_with_status('/confluent-api/self/remotesyncfiles')
if not isinstance(rsp, str):
rsp = rsp.decode('utf8')
@ -277,10 +287,21 @@ def synchronize():
os.chmod(fname, int(opts[fname][opt], 8))
if uid != -1 or gid != -1:
os.chown(fname, uid, gid)
return status
finally:
shutil.rmtree(tmpdir)
shutil.rmtree(appendoncedir)
if __name__ == '__main__':
synchronize()
status = 202
while status not in (204, 200):
try:
status = synchronize()
except Exception as e:
sys.stderr.write(str(e))
sys.stderr.write('\n')
sys.stderr.flush()
status = 300
if status not in (204, 200):
time.sleep((random.random()*3)+2)


@ -116,4 +116,5 @@ profile=$(grep ^profile: /etc/confluent/confluent.deploycfg.new | sed -e 's/^pro
mv /etc/confluent/confluent.deploycfg.new /etc/confluent/confluent.deploycfg
export node mgr profile
. /tmp/modinstall
localcli network firewall load
exec /bin/install


@ -174,6 +174,8 @@ dnsdomain=${dnsdomain#dnsdomain: }
echo search $dnsdomain >> /etc/resolv.conf
echo -n "Initializing ssh..."
ssh-keygen -A
mkdir -p /usr/share/empty.sshd
rm /etc/ssh/ssh_host_dsa_key*
for pubkey in /etc/ssh/ssh_host*key.pub; do
certfile=${pubkey/.pub/-cert.pub}
privfile=${pubkey%.pub}


@ -1,4 +1,6 @@
#!/usr/bin/python3
import random
import time
import subprocess
import importlib
import tempfile
@ -7,6 +9,7 @@ import os
import shutil
import pwd
import grp
import sys
from importlib.machinery import SourceFileLoader
try:
apiclient = SourceFileLoader('apiclient', '/opt/confluent/bin/apiclient').load_module()
@ -227,9 +230,16 @@ def synchronize():
myips.append(addr)
data = json.dumps({'merge': tmpdir, 'appendonce': appendoncedir, 'myips': myips})
status, rsp = ac.grab_url_with_status('/confluent-api/self/remotesyncfiles', data)
if status >= 300:
sys.stderr.write("Error starting syncfiles - {}:\n".format(status))
sys.stderr.write(rsp.decode('utf8'))
sys.stderr.write('\n')
sys.stderr.flush()
return status
if status == 202:
lastrsp = ''
while status != 204:
time.sleep(1+(2*random.random()))
status, rsp = ac.grab_url_with_status('/confluent-api/self/remotesyncfiles')
if not isinstance(rsp, str):
rsp = rsp.decode('utf8')
@ -277,10 +287,21 @@ def synchronize():
os.chmod(fname, int(opts[fname][opt], 8))
if uid != -1 or gid != -1:
os.chown(fname, uid, gid)
return status
finally:
shutil.rmtree(tmpdir)
shutil.rmtree(appendoncedir)
if __name__ == '__main__':
synchronize()
status = 202
while status not in (204, 200):
try:
status = synchronize()
except Exception as e:
sys.stderr.write(str(e))
sys.stderr.write('\n')
sys.stderr.flush()
status = 300
if status not in (204, 200):
time.sleep((random.random()*3)+2)


@ -3,6 +3,8 @@ import os
class DiskInfo(object):
def __init__(self, devname):
if devname.startswith('nvme') and 'c' in devname:
raise Exception("Skipping multipath devname")
self.name = devname
self.wwn = None
self.path = None


@ -116,7 +116,7 @@ fi
ready=0
while [ $ready = "0" ]; do
get_remote_apikey
if [[ $confluent_mgr == *:* ]]; then
if [[ $confluent_mgr == *:* ]] && [[ $confluent_mgr != "["* ]]; then
confluent_mgr="[$confluent_mgr]"
fi
tmperr=$(mktemp)


@ -1,4 +1,6 @@
#!/usr/bin/python3
import random
import time
import subprocess
import importlib
import tempfile
@ -7,6 +9,7 @@ import os
import shutil
import pwd
import grp
import sys
from importlib.machinery import SourceFileLoader
try:
apiclient = SourceFileLoader('apiclient', '/opt/confluent/bin/apiclient').load_module()
@ -227,9 +230,16 @@ def synchronize():
myips.append(addr)
data = json.dumps({'merge': tmpdir, 'appendonce': appendoncedir, 'myips': myips})
status, rsp = ac.grab_url_with_status('/confluent-api/self/remotesyncfiles', data)
if status >= 300:
sys.stderr.write("Error starting syncfiles - {}:\n".format(status))
sys.stderr.write(rsp.decode('utf8'))
sys.stderr.write('\n')
sys.stderr.flush()
return status
if status == 202:
lastrsp = ''
while status != 204:
time.sleep(1+(2*random.random()))
status, rsp = ac.grab_url_with_status('/confluent-api/self/remotesyncfiles')
if not isinstance(rsp, str):
rsp = rsp.decode('utf8')
@ -277,10 +287,21 @@ def synchronize():
os.chmod(fname, int(opts[fname][opt], 8))
if uid != -1 or gid != -1:
os.chown(fname, uid, gid)
return status
finally:
shutil.rmtree(tmpdir)
shutil.rmtree(appendoncedir)
if __name__ == '__main__':
synchronize()
status = 202
while status not in (204, 200):
try:
status = synchronize()
except Exception as e:
sys.stderr.write(str(e))
sys.stderr.write('\n')
sys.stderr.flush()
status = 300
if status not in (204, 200):
time.sleep((random.random()*3)+2)


@ -10,6 +10,12 @@ dynamic behavior and replace with static configuration.
<hwclock>UTC</hwclock>
<timezone>%%TIMEZONE%%</timezone>
</timezone>
<firstboot>
<firstboot_enabled config:type="boolean">false</firstboot_enabled>
</firstboot>
<kdump>
<add_crash_kernel config:type="boolean">false</add_crash_kernel>
</kdump>
<general>
<self_update config:type="boolean">false</self_update>
<mode>


@ -1,4 +1,7 @@
#!/bin/sh
# WARNING
# be careful when editing files here as this script is called
# in parallel to other copy operations, so changes to files can be lost
discnum=$(basename $1)
if [ "$discnum" != 1 ]; then exit 0; fi
if [ -e $2/boot/kernel ]; then exit 0; fi


@ -3,6 +3,8 @@ import os
class DiskInfo(object):
def __init__(self, devname):
if devname.startswith('nvme') and 'c' in devname:
raise Exception("Skipping multipath devname")
self.name = devname
self.wwn = None
self.path = None


@ -0,0 +1,3 @@
#!/usr/bin/bash
# remove online repos
grep -lE "baseurl=https?://download.opensuse.org" /etc/zypp/repos.d/*repo | xargs rm --


@ -1,4 +1,6 @@
#!/usr/bin/python3
import random
import time
import subprocess
import importlib
import tempfile
@ -7,6 +9,7 @@ import os
import shutil
import pwd
import grp
import sys
from importlib.machinery import SourceFileLoader
try:
apiclient = SourceFileLoader('apiclient', '/opt/confluent/bin/apiclient').load_module()
@ -227,9 +230,16 @@ def synchronize():
myips.append(addr)
data = json.dumps({'merge': tmpdir, 'appendonce': appendoncedir, 'myips': myips})
status, rsp = ac.grab_url_with_status('/confluent-api/self/remotesyncfiles', data)
if status >= 300:
sys.stderr.write("Error starting syncfiles - {}:\n".format(status))
sys.stderr.write(rsp.decode('utf8'))
sys.stderr.write('\n')
sys.stderr.flush()
return status
if status == 202:
lastrsp = ''
while status != 204:
time.sleep(1+(2*random.random()))
status, rsp = ac.grab_url_with_status('/confluent-api/self/remotesyncfiles')
if not isinstance(rsp, str):
rsp = rsp.decode('utf8')
@ -277,10 +287,21 @@ def synchronize():
os.chmod(fname, int(opts[fname][opt], 8))
if uid != -1 or gid != -1:
os.chown(fname, uid, gid)
return status
finally:
shutil.rmtree(tmpdir)
shutil.rmtree(appendoncedir)
if __name__ == '__main__':
synchronize()
status = 202
while status not in (204, 200):
try:
status = synchronize()
except Exception as e:
sys.stderr.write(str(e))
sys.stderr.write('\n')
sys.stderr.flush()
status = 300
if status not in (204, 200):
time.sleep((random.random()*3)+2)


@ -10,6 +10,12 @@ dynamic behavior and replace with static configuration.
<hwclock>UTC</hwclock>
<timezone>%%TIMEZONE%%</timezone>
</timezone>
<firstboot>
<firstboot_enabled config:type="boolean">false</firstboot_enabled>
</firstboot>
<kdump>
<add_crash_kernel config:type="boolean">false</add_crash_kernel>
</kdump>
<general>
<self_update config:type="boolean">false</self_update>
<mode>


@ -1,4 +1,7 @@
#!/bin/sh
# WARNING
# be careful when editing files here as this script is called
# in parallel to other copy operations, so changes to files can be lost
discnum=$(basename $1)
if [ "$discnum" != 1 ]; then exit 0; fi
if [ -e $2/boot/kernel ]; then exit 0; fi


@ -3,6 +3,8 @@ import os
class DiskInfo(object):
def __init__(self, devname):
if devname.startswith('nvme') and 'c' in devname:
raise Exception("Skipping multipath devname")
self.name = devname
self.wwn = None
self.path = None


@ -0,0 +1,3 @@
#!/usr/bin/bash
# remove online repos
grep -lE "baseurl=https?://download.opensuse.org" /etc/zypp/repos.d/*repo | xargs rm --


@ -1,4 +1,6 @@
#!/usr/bin/python3
import random
import time
import subprocess
import importlib
import tempfile
@ -7,6 +9,7 @@ import os
import shutil
import pwd
import grp
import sys
from importlib.machinery import SourceFileLoader
try:
apiclient = SourceFileLoader('apiclient', '/opt/confluent/bin/apiclient').load_module()
@ -227,9 +230,16 @@ def synchronize():
myips.append(addr)
data = json.dumps({'merge': tmpdir, 'appendonce': appendoncedir, 'myips': myips})
status, rsp = ac.grab_url_with_status('/confluent-api/self/remotesyncfiles', data)
if status >= 300:
sys.stderr.write("Error starting syncfiles - {}:\n".format(status))
sys.stderr.write(rsp.decode('utf8'))
sys.stderr.write('\n')
sys.stderr.flush()
return status
if status == 202:
lastrsp = ''
while status != 204:
time.sleep(1+(2*random.random()))
status, rsp = ac.grab_url_with_status('/confluent-api/self/remotesyncfiles')
if not isinstance(rsp, str):
rsp = rsp.decode('utf8')
@ -277,10 +287,21 @@ def synchronize():
os.chmod(fname, int(opts[fname][opt], 8))
if uid != -1 or gid != -1:
os.chown(fname, uid, gid)
return status
finally:
shutil.rmtree(tmpdir)
shutil.rmtree(appendoncedir)
if __name__ == '__main__':
synchronize()
status = 202
while status not in (204, 200):
try:
status = synchronize()
except Exception as e:
sys.stderr.write(str(e))
sys.stderr.write('\n')
sys.stderr.flush()
status = 300
if status not in (204, 200):
time.sleep((random.random()*3)+2)


@ -1,4 +1,5 @@
if ! grep console= /proc/cmdline > /dev/null; then
mkdir -p /custom-installation
/opt/confluent/bin/autocons > /custom-installation/autocons.info
cons=$(cat /custom-installation/autocons.info)
if [ ! -z "$cons" ]; then


@ -58,6 +58,10 @@ if ! grep console= /proc/cmdline > /dev/null; then
echo "Automatic console configured for $autocons"
fi
echo sshd:x:30:30:SSH User:/var/empty/sshd:/sbin/nologin >> /etc/passwd
modprobe ib_ipoib
modprobe ib_umad
modprobe hfi1
modprobe mlx5_ib
cd /sys/class/net
for nic in *; do
ip link set $nic up


@ -3,6 +3,8 @@ import os
class DiskInfo(object):
def __init__(self, devname):
if devname.startswith('nvme') and 'c' in devname:
raise Exception("Skipping multipath devname")
self.name = devname
self.wwn = None
self.path = None


@ -10,6 +10,7 @@ import stat
import struct
import sys
import subprocess
import traceback
bootuuid = None
@ -206,6 +207,8 @@ def fixup(rootdir, vols):
partnum = re.search('(\d+)$', targdev).group(1)
targblock = re.search('(.*)\d+$', targdev).group(1)
if targblock:
if targblock.endswith('p') and 'nvme' in targblock:
targblock = targblock[:-1]
shimpath = subprocess.check_output(['find', os.path.join(rootdir, 'boot/efi'), '-name', 'shimx64.efi']).decode('utf8').strip()
shimpath = shimpath.replace(rootdir, '/').replace('/boot/efi', '').replace('//', '/').replace('/', '\\')
subprocess.check_call(['efibootmgr', '-c', '-d', targblock, '-l', shimpath, '--part', partnum])
@ -422,5 +425,10 @@ def install_to_disk(imgpath):
if __name__ == '__main__':
install_to_disk(os.environ['mountsrc'])
try:
install_to_disk(os.environ['mountsrc'])
except Exception:
traceback.print_exc()
time.sleep(86400)
raise


@ -8,7 +8,7 @@ for addr in $(grep ^MANAGER: /etc/confluent/confluent.info|awk '{print $2}'|sed
fi
done
mkdir -p /mnt/remoteimg /mnt/remote /mnt/overlay
if grep confluennt_imagemethtod=untethered /proc/cmdline > /dev/null; then
if grep confluent_imagemethod=untethered /proc/cmdline > /dev/null; then
mount -t tmpfs untethered /mnt/remoteimg
curl https://$confluent_mgr/confluent-public/os/$confluent_profile/rootimg.sfs -o /mnt/remoteimg/rootimg.sfs
else


@ -0,0 +1,11 @@
[Unit]
Description=Confluent onboot hook
Requires=network-online.target
After=network-online.target
[Service]
ExecStart=/opt/confluent/bin/onboot.sh
[Install]
WantedBy=multi-user.target


@ -0,0 +1,40 @@
#!/bin/bash
# This script is executed on each boot as it is
# completed. It is best to edit the middle of the file as
# noted below so custom commands are executed before
# the script notifies confluent that install is fully complete.
nodename=$(grep ^NODENAME /etc/confluent/confluent.info|awk '{print $2}')
confluent_apikey=$(cat /etc/confluent/confluent.apikey)
v4meth=$(grep ^ipv4_method: /etc/confluent/confluent.deploycfg|awk '{print $2}')
if [ "$v4meth" = "null" -o -z "$v4meth" ]; then
confluent_mgr=$(grep ^deploy_server_v6: /etc/confluent/confluent.deploycfg|awk '{print $2}')
fi
if [ -z "$confluent_mgr" ]; then
confluent_mgr=$(grep ^deploy_server: /etc/confluent/confluent.deploycfg|awk '{print $2}')
fi
confluent_profile=$(grep ^profile: /etc/confluent/confluent.deploycfg|awk '{print $2}')
timedatectl set-timezone $(grep ^timezone: /etc/confluent/confluent.deploycfg|awk '{print $2}')
hostnamectl set-hostname $nodename
export nodename confluent_mgr confluent_profile
. /etc/confluent/functions
mkdir -p /var/log/confluent
chmod 700 /var/log/confluent
exec >> /var/log/confluent/confluent-onboot.log
exec 2>> /var/log/confluent/confluent-onboot.log
chmod 600 /var/log/confluent/confluent-onboot.log
tail -f /var/log/confluent/confluent-onboot.log > /dev/console &
logshowpid=$!
run_remote_python syncfileclient
run_remote_python confignet
# onboot scripts may be placed into onboot.d, e.g. onboot.d/01-firstaction.sh, onboot.d/02-secondaction.sh
run_remote_parts onboot.d
# Induce execution of remote configuration, e.g. ansible plays in ansible/onboot.d/
run_remote_config onboot.d
#curl -X POST -d 'status: booted' -H "CONFLUENT_NODENAME: $nodename" -H "CONFLUENT_APIKEY: $confluent_apikey" https://$confluent_mgr/confluent-api/self/updatestatus
kill $logshowpid


@ -1,4 +1,6 @@
#!/usr/bin/python3
import random
import time
import subprocess
import importlib
import tempfile
@ -7,6 +9,7 @@ import os
import shutil
import pwd
import grp
import sys
from importlib.machinery import SourceFileLoader
try:
apiclient = SourceFileLoader('apiclient', '/opt/confluent/bin/apiclient').load_module()
@ -227,9 +230,16 @@ def synchronize():
myips.append(addr)
data = json.dumps({'merge': tmpdir, 'appendonce': appendoncedir, 'myips': myips})
status, rsp = ac.grab_url_with_status('/confluent-api/self/remotesyncfiles', data)
if status >= 300:
sys.stderr.write("Error starting syncfiles - {}:\n".format(status))
sys.stderr.write(rsp.decode('utf8'))
sys.stderr.write('\n')
sys.stderr.flush()
return status
if status == 202:
lastrsp = ''
while status != 204:
time.sleep(1+(2*random.random()))
status, rsp = ac.grab_url_with_status('/confluent-api/self/remotesyncfiles')
if not isinstance(rsp, str):
rsp = rsp.decode('utf8')
@ -277,10 +287,21 @@ def synchronize():
os.chmod(fname, int(opts[fname][opt], 8))
if uid != -1 or gid != -1:
os.chown(fname, uid, gid)
return status
finally:
shutil.rmtree(tmpdir)
shutil.rmtree(appendoncedir)
if __name__ == '__main__':
synchronize()
status = 202
while status not in (204, 200):
try:
status = synchronize()
except Exception as e:
sys.stderr.write(str(e))
sys.stderr.write('\n')
sys.stderr.flush()
status = 300
if status not in (204, 200):
time.sleep((random.random()*3)+2)


@ -4,4 +4,5 @@ confluent_mgr=$(grep ^deploy_server $deploycfg|awk '{print $2}')
confluent_profile=$(grep ^profile: $deploycfg|awk '{print $2}')
export deploycfg confluent_mgr confluent_profile
curl -f https://$confluent_mgr/confluent-public/os/$confluent_profile/scripts/post.sh > /tmp/post.sh
. /tmp/post.sh
bash /tmp/post.sh
true


@ -2,7 +2,10 @@
echo "Confluent first boot is running"
HOME=$(getent passwd $(whoami)|cut -d: -f 6)
export HOME
(
exec >> /target/var/log/confluent/confluent-firstboot.log
exec 2>> /target/var/log/confluent/confluent-firstboot.log
chmod 600 /target/var/log/confluent/confluent-firstboot.log
cp -a /etc/confluent/ssh/* /etc/ssh/
systemctl restart sshd
rootpw=$(grep ^rootpassword: /etc/confluent/confluent.deploycfg |awk '{print $2}')
@ -18,7 +21,10 @@ done
hostnamectl set-hostname $(grep ^NODENAME: /etc/confluent/confluent.info | awk '{print $2}')
touch /etc/cloud/cloud-init.disabled
source /etc/confluent/functions
confluent_profile=$(grep ^profile: /etc/confluent/confluent.deploycfg|awk '{print $2}')
export confluent_mgr confluent_profile
run_remote_parts firstboot.d
run_remote_config firstboot.d
curl --capath /etc/confluent/tls -f -H "CONFLUENT_NODENAME: $nodename" -H "CONFLUENT_APIKEY: $confluent_apikey" -X POST -d "status: complete" https://$confluent_mgr/confluent-api/self/updatestatus
) &
tail --pid $! -n 0 -F /target/var/log/confluent/confluent-post.log > /dev/console


@ -3,6 +3,8 @@ import os
class DiskInfo(object):
def __init__(self, devname):
if devname.startswith('nvme') and 'c' in devname:
raise Exception("Skipping multipath devname")
self.name = devname
self.wwn = None
self.path = None

View File

@@ -8,7 +8,6 @@ chmod go-rwx /etc/confluent/*
for i in /custom-installation/ssh/*.ca; do
echo '@cert-authority *' $(cat $i) >> /target/etc/ssh/ssh_known_hosts
done
cp -a /etc/ssh/ssh_host* /target/etc/confluent/ssh/
cp -a /etc/ssh/sshd_config.d/confluent.conf /target/etc/confluent/ssh/sshd_config.d/
sshconf=/target/etc/ssh/ssh_config
@@ -19,10 +18,15 @@ echo 'Host *' >> $sshconf
echo ' HostbasedAuthentication yes' >> $sshconf
echo ' EnableSSHKeysign yes' >> $sshconf
echo ' HostbasedKeyTypes *ed25519*' >> $sshconf
cp /etc/confluent/functions /target/etc/confluent/functions
source /etc/confluent/functions
mkdir -p /target/var/log/confluent
cp /var/log/confluent/* /target/var/log/confluent/
(
exec >> /target/var/log/confluent/confluent-post.log
exec 2>> /target/var/log/confluent/confluent-post.log
chmod 600 /target/var/log/confluent/confluent-post.log
curl -f https://$confluent_mgr/confluent-public/os/$confluent_profile/scripts/firstboot.sh > /target/etc/confluent/firstboot.sh
curl -f https://$confluent_mgr/confluent-public/os/$confluent_profile/scripts/functions > /target/etc/confluent/functions
source /target/etc/confluent/functions
chmod +x /target/etc/confluent/firstboot.sh
cp /tmp/allnodes /target/root/.shosts
cp /tmp/allnodes /target/etc/ssh/shosts.equiv
@@ -56,6 +60,7 @@ cp /custom-installation/confluent/bin/apiclient /target/opt/confluent/bin
mount -o bind /dev /target/dev
mount -o bind /proc /target/proc
mount -o bind /sys /target/sys
mount -o bind /sys/firmware/efi/efivars /target/sys/firmware/efi/efivars
if [ 1 = $updategrub ]; then
chroot /target update-grub
fi
@@ -83,6 +88,8 @@ chroot /target bash -c "source /etc/confluent/functions; run_remote_parts post.d
source /target/etc/confluent/functions
run_remote_config post
python3 /opt/confluent/bin/apiclient /confluent-api/self/updatestatus -d 'status: staged'
umount /target/sys /target/dev /target/proc
) &
tail --pid $! -n 0 -F /target/var/log/confluent/confluent-post.log > /dev/console

View File

@@ -1,5 +1,16 @@
#!/bin/bash
deploycfg=/custom-installation/confluent/confluent.deploycfg
mkdir -p /var/log/confluent
mkdir -p /opt/confluent/bin
mkdir -p /etc/confluent
cp /custom-installation/confluent/confluent.info /custom-installation/confluent/confluent.apikey /etc/confluent/
cat /custom-installation/tls/*.pem >> /etc/confluent/ca.pem
cp /custom-installation/confluent/bin/apiclient /opt/confluent/bin
cp $deploycfg /etc/confluent/
(
exec >> /var/log/confluent/confluent-pre.log
exec 2>> /var/log/confluent/confluent-pre.log
chmod 600 /var/log/confluent/confluent-pre.log
cryptboot=$(grep encryptboot: $deploycfg|sed -e 's/^encryptboot: //')
if [ "$cryptboot" != "" ] && [ "$cryptboot" != "none" ] && [ "$cryptboot" != "null" ]; then
@@ -23,7 +34,17 @@ echo HostbasedAuthentication yes >> /etc/ssh/sshd_config.d/confluent.conf
echo HostbasedUsesNameFromPacketOnly yes >> /etc/ssh/sshd_config.d/confluent.conf
echo IgnoreRhosts no >> /etc/ssh/sshd_config.d/confluent.conf
systemctl restart sshd
mkdir -p /etc/confluent
export nodename confluent_profile confluent_mgr
curl -f https://$confluent_mgr/confluent-public/os/$confluent_profile/scripts/functions > /etc/confluent/functions
. /etc/confluent/functions
run_remote_parts pre.d
curl -f -X POST -H "CONFLUENT_NODENAME: $nodename" -H "CONFLUENT_APIKEY: $apikey" https://$confluent_mgr/confluent-api/self/nodelist > /tmp/allnodes
curl -f https://$confluent_mgr/confluent-public/os/$confluent_profile/scripts/getinstalldisk > /custom-installation/getinstalldisk
python3 /custom-installation/getinstalldisk
if [ ! -e /tmp/installdisk ]; then
curl -f https://$confluent_mgr/confluent-public/os/$confluent_profile/scripts/getinstalldisk > /custom-installation/getinstalldisk
python3 /custom-installation/getinstalldisk
fi
sed -i s!%%INSTALLDISK%%!/dev/$(cat /tmp/installdisk)! /autoinstall.yaml
) &
tail --pid $! -n 0 -F /var/log/confluent/confluent-pre.log > /dev/console

View File

@@ -1,4 +1,6 @@
#!/usr/bin/python3
import random
import time
import subprocess
import importlib
import tempfile
@@ -7,6 +9,7 @@ import os
import shutil
import pwd
import grp
import sys
from importlib.machinery import SourceFileLoader
try:
apiclient = SourceFileLoader('apiclient', '/opt/confluent/bin/apiclient').load_module()
@@ -227,9 +230,16 @@ def synchronize():
myips.append(addr)
data = json.dumps({'merge': tmpdir, 'appendonce': appendoncedir, 'myips': myips})
status, rsp = ac.grab_url_with_status('/confluent-api/self/remotesyncfiles', data)
if status >= 300:
sys.stderr.write("Error starting syncfiles - {}:\n".format(status))
sys.stderr.write(rsp.decode('utf8'))
sys.stderr.write('\n')
sys.stderr.flush()
return status
if status == 202:
lastrsp = ''
while status != 204:
time.sleep(1+(2*random.random()))
status, rsp = ac.grab_url_with_status('/confluent-api/self/remotesyncfiles')
if not isinstance(rsp, str):
rsp = rsp.decode('utf8')
@@ -277,10 +287,21 @@ def synchronize():
os.chmod(fname, int(opts[fname][opt], 8))
if uid != -1 or gid != -1:
os.chown(fname, uid, gid)
return status
finally:
shutil.rmtree(tmpdir)
shutil.rmtree(appendoncedir)
if __name__ == '__main__':
synchronize()
status = 202
while status not in (204, 200):
try:
status = synchronize()
except Exception as e:
sys.stderr.write(str(e))
sys.stderr.write('\n')
sys.stderr.flush()
status = 300
if status not in (204, 200):
time.sleep((random.random()*3)+2)

View File

@@ -4,4 +4,5 @@ confluent_mgr=$(grep ^deploy_server $deploycfg|awk '{print $2}')
confluent_profile=$(grep ^profile: $deploycfg|awk '{print $2}')
export deploycfg confluent_mgr confluent_profile
curl -f https://$confluent_mgr/confluent-public/os/$confluent_profile/scripts/post.sh > /tmp/post.sh
. /tmp/post.sh
bash /tmp/post.sh
true

View File

@@ -26,12 +26,14 @@ if [ -e /tmp/cnflnthmackeytmp ]; then
chroot . curl -f -H "CONFLUENT_NODENAME: $NODENAME" -H "CONFLUENT_CRYPTHMAC: $(cat /root/$hmacfile)" -d @/tmp/cnflntcryptfile https://$MGR/confluent-api/self/registerapikey
cp /root/$passfile /root/custom-installation/confluent/confluent.apikey
DEVICE=$(cat /tmp/autodetectnic)
IP=done
else
chroot . custom-installation/confluent/bin/clortho $NODENAME $MGR > /root/custom-installation/confluent/confluent.apikey
MGR=[$MGR]
nic=$(grep ^MANAGER /custom-installation/confluent/confluent.info|grep fe80::|sed -e s/.*%//|head -n 1)
nic=$(ip link |grep ^$nic:|awk '{print $2}')
DEVICE=${nic%:}
IP=done
fi
if [ -z "$MGTIFACE" ]; then
chroot . usr/bin/curl -f -H "CONFLUENT_NODENAME: $NODENAME" -H "CONFLUENT_APIKEY: $(cat /root//custom-installation/confluent/confluent.apikey)" https://${MGR}/confluent-api/self/deploycfg > $deploycfg

View File

@@ -79,8 +79,12 @@ if [ ! -z "$cons" ]; then
fi
echo "Preparing to deploy $osprofile from $MGR"
echo $osprofile > /custom-installation/confluent/osprofile
echo URL=http://${MGR}/confluent-public/os/$osprofile/distribution/install.iso >> /conf/param.conf
fcmdline="$(cat /custom-installation/confluent/cmdline.orig) url=http://${MGR}/confluent-public/os/$osprofile/distribution/install.iso"
. /etc/os-release
DIRECTISO=$(blkid -t TYPE=iso9660 |grep -Ei ' LABEL="Ubuntu-Server '$VERSION_ID)
if [ -z "$DIRECTISO" ]; then
echo URL=http://${MGR}/confluent-public/os/$osprofile/distribution/install.iso >> /conf/param.conf
fcmdline="$(cat /custom-installation/confluent/cmdline.orig) url=http://${MGR}/confluent-public/os/$osprofile/distribution/install.iso"
fi
if [ ! -z "$cons" ]; then
fcmdline="$fcmdline console=${cons#/dev/}"
fi

View File

@@ -3,5 +3,12 @@ sed -i 's/label: ubuntu/label: Ubuntu/' $2/profile.yaml && \
ln -s $1/casper/vmlinuz $2/boot/kernel && \
ln -s $1/casper/initrd $2/boot/initramfs/distribution && \
mkdir -p $2/boot/efi/boot && \
ln -s $1/EFI/boot/* $2/boot/efi/boot
if [ -d $1/EFI/boot/ ]; then
ln -s $1/EFI/boot/* $2/boot/efi/boot
elif [ -d $1/efi/boot/ ]; then
ln -s $1/efi/boot/* $2/boot/efi/boot
else
echo "Unrecogrized boot contents in media" >&2
exit 1
fi

View File

@@ -0,0 +1,12 @@
import yaml
import os
ainst = {}
with open('/autoinstall.yaml', 'r') as allin:
ainst = yaml.safe_load(allin)
ainst['storage']['layout']['password'] = os.environ['lukspass']
with open('/autoinstall.yaml', 'w') as allout:
yaml.safe_dump(ainst, allout)

View File

@@ -3,11 +3,11 @@ echo "Confluent first boot is running"
HOME=$(getent passwd $(whoami)|cut -d: -f 6)
export HOME
(
exec >> /target/var/log/confluent/confluent-firstboot.log
exec 2>> /target/var/log/confluent/confluent-firstboot.log
chmod 600 /target/var/log/confluent/confluent-firstboot.log
exec >> /var/log/confluent/confluent-firstboot.log
exec 2>> /var/log/confluent/confluent-firstboot.log
chmod 600 /var/log/confluent/confluent-firstboot.log
cp -a /etc/confluent/ssh/* /etc/ssh/
systemctl restart sshd
systemctl restart ssh
rootpw=$(grep ^rootpassword: /etc/confluent/confluent.deploycfg |awk '{print $2}')
if [ ! -z "$rootpw" -a "$rootpw" != "null" ]; then
echo root:$rootpw | chpasswd -e
@@ -27,4 +27,4 @@ run_remote_parts firstboot.d
run_remote_config firstboot.d
curl --capath /etc/confluent/tls -f -H "CONFLUENT_NODENAME: $nodename" -H "CONFLUENT_APIKEY: $confluent_apikey" -X POST -d "status: complete" https://$confluent_mgr/confluent-api/self/updatestatus
) &
tail --pid $! -n 0 -F /target/var/log/confluent/confluent-post.log > /dev/console
tail --pid $! -n 0 -F /var/log/confluent/confluent-post.log > /dev/console

View File

@@ -3,6 +3,8 @@ import os
class DiskInfo(object):
def __init__(self, devname):
if devname.startswith('nvme') and 'c' in devname:
raise Exception("Skipping multipath devname")
self.name = devname
self.wwn = None
self.path = None

View File

@@ -0,0 +1,26 @@
#!/usr/bin/python3
import yaml
import os
ainst = {}
with open('/autoinstall.yaml', 'r') as allin:
ainst = yaml.safe_load(allin)
tz = None
ntps = []
with open('/etc/confluent/confluent.deploycfg', 'r') as confluentdeploycfg:
dcfg = yaml.safe_load(confluentdeploycfg)
tz = dcfg['timezone']
ntps = dcfg.get('ntpservers', [])
if ntps and not ainst.get('ntp', None):
ainst['ntp'] = {}
ainst['ntp']['enabled'] = True
ainst['ntp']['servers'] = ntps
if tz and not ainst.get('timezone'):
ainst['timezone'] = tz
with open('/autoinstall.yaml', 'w') as allout:
yaml.safe_dump(ainst, allout)

View File

@@ -60,9 +60,12 @@ cp /custom-installation/confluent/bin/apiclient /target/opt/confluent/bin
mount -o bind /dev /target/dev
mount -o bind /proc /target/proc
mount -o bind /sys /target/sys
mount -o bind /run /target/run
mount -o bind /sys/firmware/efi/efivars /target/sys/firmware/efi/efivars
if [ 1 = $updategrub ]; then
chroot /target update-grub
fi
echo "Port 22" >> /etc/ssh/sshd_config
echo "Port 2222" >> /etc/ssh/sshd_config
echo "Match LocalPort 22" >> /etc/ssh/sshd_config
@@ -87,8 +90,36 @@ chroot /target bash -c "source /etc/confluent/functions; run_remote_parts post.d
source /target/etc/confluent/functions
run_remote_config post
if [ -f /etc/confluent_lukspass ]; then
numdevs=$(lsblk -lo name,uuid|grep $(awk '{print $2}' < /target/etc/crypttab |sed -e s/UUID=//)|wc -l)
if [ 0$numdevs -ne 1 ]; then
wall "Unable to identify the LUKS device, halting install"
while :; do sleep 86400; done
fi
CRYPTTAB_SOURCE=$(awk '{print $2}' /target/etc/crypttab)
. /target/usr/lib/cryptsetup/functions
crypttab_resolve_source
if [ ! -e $CRYPTTAB_SOURCE ]; then
wall "Unable to find $CRYPTTAB_SOURCE, halting install"
while :; do sleep 86400; done
fi
cp /etc/confluent_lukspass /target/etc/confluent/luks.key
chmod 000 /target/etc/confluent/luks.key
lukspass=$(cat /etc/confluent_lukspass)
chroot /target apt install libtss2-rc0
PASSWORD=$lukspass chroot /target systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs="" $CRYPTTAB_SOURCE
fetch_remote systemdecrypt
mv systemdecrypt /target/etc/initramfs-tools/scripts/local-top/systemdecrypt
fetch_remote systemdecrypt-hook
mv systemdecrypt-hook /target/etc/initramfs-tools/hooks/systemdecrypt
chmod 755 /target/etc/initramfs-tools/scripts/local-top/systemdecrypt /target/etc/initramfs-tools/hooks/systemdecrypt
chroot /target update-initramfs -u
fi
python3 /opt/confluent/bin/apiclient /confluent-api/self/updatestatus -d 'status: staged'
umount /target/sys /target/dev /target/proc
umount /target/sys /target/dev /target/proc /target/run
) &
tail --pid $! -n 0 -F /target/var/log/confluent/confluent-post.log > /dev/console

View File

@@ -13,11 +13,6 @@ exec 2>> /var/log/confluent/confluent-pre.log
chmod 600 /var/log/confluent/confluent-pre.log
cryptboot=$(grep encryptboot: $deploycfg|sed -e 's/^encryptboot: //')
if [ "$cryptboot" != "" ] && [ "$cryptboot" != "none" ] && [ "$cryptboot" != "null" ]; then
echo "****Encrypted boot requested, but not implemented for this OS, halting install" > /dev/console
[ -f '/tmp/autoconsdev' ] && (echo "****Encrypted boot requested, but not implemented for this OS, halting install" >> $(cat /tmp/autoconsdev))
while :; do sleep 86400; done
fi
cat /custom-installation/ssh/*pubkey > /root/.ssh/authorized_keys
@@ -35,7 +30,7 @@ echo HostbasedUsesNameFromPacketOnly yes >> /etc/ssh/sshd_config.d/confluent.con
echo IgnoreRhosts no >> /etc/ssh/sshd_config.d/confluent.conf
systemctl restart sshd
mkdir -p /etc/confluent
export confluent_profile confluent_mgr
export nodename confluent_profile confluent_mgr
curl -f https://$confluent_mgr/confluent-public/os/$confluent_profile/scripts/functions > /etc/confluent/functions
. /etc/confluent/functions
run_remote_parts pre.d
@@ -45,6 +40,24 @@ if [ ! -e /tmp/installdisk ]; then
python3 /custom-installation/getinstalldisk
fi
sed -i s!%%INSTALLDISK%%!/dev/$(cat /tmp/installdisk)! /autoinstall.yaml
run_remote_python mergetime
if [ "$cryptboot" != "" ] && [ "$cryptboot" != "none" ] && [ "$cryptboot" != "null" ]; then
lukspass=$(python3 /opt/confluent/bin/apiclient /confluent-api/self/profileprivate/pending/luks.key 2> /dev/null)
if [ -z "$lukspass" ]; then
lukspass=$(head -c 66 < /dev/urandom |base64 -w0)
fi
export lukspass
run_remote_python addcrypt
if ! grep 'password:' /autoinstall.yaml > /dev/null; then
echo "****Encrypted boot requested, but the user-data does not have a hook to enable,halting install" > /dev/console
[ -f '/tmp/autoconsdev' ] && (echo "****Encryptod boot requested, but the user-data does not have a hook to enable,halting install" >> $(cat /tmp/autoconsdev))
while :; do sleep 86400; done
fi
sed -i s!%%CRYPTPASS%%!$lukspass! /autoinstall.yaml
sed -i s!'#CRYPTBOOT'!! /autoinstall.yaml
echo -n $lukspass > /etc/confluent_lukspass
chmod 000 /etc/confluent_lukspass
fi
) &
tail --pid $! -n 0 -F /var/log/confluent/confluent-pre.log > /dev/console

View File

@@ -1,4 +1,6 @@
#!/usr/bin/python3
import random
import time
import subprocess
import importlib
import tempfile
@@ -7,6 +9,7 @@ import os
import shutil
import pwd
import grp
import sys
from importlib.machinery import SourceFileLoader
try:
apiclient = SourceFileLoader('apiclient', '/opt/confluent/bin/apiclient').load_module()
@@ -227,9 +230,16 @@ def synchronize():
myips.append(addr)
data = json.dumps({'merge': tmpdir, 'appendonce': appendoncedir, 'myips': myips})
status, rsp = ac.grab_url_with_status('/confluent-api/self/remotesyncfiles', data)
if status >= 300:
sys.stderr.write("Error starting syncfiles - {}:\n".format(status))
sys.stderr.write(rsp.decode('utf8'))
sys.stderr.write('\n')
sys.stderr.flush()
return status
if status == 202:
lastrsp = ''
while status != 204:
time.sleep(1+(2*random.random()))
status, rsp = ac.grab_url_with_status('/confluent-api/self/remotesyncfiles')
if not isinstance(rsp, str):
rsp = rsp.decode('utf8')
@@ -277,10 +287,21 @@ def synchronize():
os.chmod(fname, int(opts[fname][opt], 8))
if uid != -1 or gid != -1:
os.chown(fname, uid, gid)
return status
finally:
shutil.rmtree(tmpdir)
shutil.rmtree(appendoncedir)
if __name__ == '__main__':
synchronize()
status = 202
while status not in (204, 200):
try:
status = synchronize()
except Exception as e:
sys.stderr.write(str(e))
sys.stderr.write('\n')
sys.stderr.flush()
status = 300
if status not in (204, 200):
time.sleep((random.random()*3)+2)

View File

@@ -0,0 +1,17 @@
#!/bin/sh
case $1 in
prereqs)
echo
exit 0
;;
esac
systemdecryptnow() {
. /usr/lib/cryptsetup/functions
local CRYPTTAB_SOURCE=$(awk '{print $2}' /systemdecrypt/crypttab)
local CRYPTTAB_NAME=$(awk '{print $1}' /systemdecrypt/crypttab)
crypttab_resolve_source
/lib/systemd/systemd-cryptsetup attach "${CRYPTTAB_NAME}" "${CRYPTTAB_SOURCE}" none tpm2-device=auto
}
systemdecryptnow

View File

@@ -0,0 +1,26 @@
#!/bin/sh
case "$1" in
prereqs)
echo
exit 0
;;
esac
. /usr/share/initramfs-tools/hook-functions
mkdir -p $DESTDIR/systemdecrypt
copy_exec /lib/systemd/systemd-cryptsetup /lib/systemd
for i in /lib/x86_64-linux-gnu/libtss2*
do
copy_exec ${i} /lib/x86_64-linux-gnu
done
if [ -f /lib/x86_64-linux-gnu/cryptsetup/libcryptsetup-token-systemd-tpm2.so ]; then
mkdir -p $DESTDIR/lib/x86_64-linux-gnu/cryptsetup
copy_exec /lib/x86_64-linux-gnu/cryptsetup/libcryptsetup-token-systemd-tpm2.so /lib/x86_64-linux-gnu/cryptsetup
fi
mkdir -p $DESTDIR/scripts/local-top
echo /scripts/local-top/systemdecrypt >> $DESTDIR/scripts/local-top/ORDER
if [ -f $DESTDIR/cryptroot/crypttab ]; then
mv $DESTDIR/cryptroot/crypttab $DESTDIR/systemdecrypt/crypttab
fi

View File

@@ -0,0 +1 @@
ubuntu22.04

View File

@@ -0,0 +1 @@
ubuntu20.04-diskless

Some files were not shown because too many files have changed in this diff.