Provide a means of combining multiple pkglist and syncfiles
based on hierarchy. This enables construction of a more
complex structure of images for those who may want it.
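A minimal sketch of the idea, with a hypothetical
combine_from_hierarchy helper and hypothetical path layout:
each level of the hierarchy contributes its own list, and the
levels are concatenated from most generic to most specific.

    import os

    def combine_from_hierarchy(basedir, relpath, name='pkglist'):
        # Walk from the base of the hierarchy down to the most
        # specific directory, concatenating each list file found
        # along the way, so deeper levels build on generic ones.
        combined = []
        currdir = basedir
        for component in [''] + relpath.split('/'):
            currdir = os.path.join(currdir, component)
            candidate = os.path.join(currdir, name)
            if os.path.exists(candidate):
                with open(candidate) as listfile:
                    combined.extend(listfile.read().splitlines())
        return combined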
Older Python did not provide a timeout parameter. Keep the
timeout for modern Python, which skips select when no timeout
is given, but retry without the timeout to retain
compatibility with older versions.
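A sketch of the compatibility pattern, assuming the call in
question is Popen.communicate (whose timeout parameter only
exists on newer Python):

    import subprocess

    proc = subprocess.Popen(['echo', 'ready'],
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    try:
        # Modern Python: the timeout keeps communicate() on its
        # select-based path (see the eventlet entry below).
        stdout, stderr = proc.communicate(timeout=86400)
    except TypeError:
        # Older Python: communicate() has no timeout parameter;
        # retry the call without it.
        stdout, stderr = proc.communicate()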
This will APPEND only if the target file does not already
contain the entire source contents in a contiguous location.
This makes it safer to rerun without negative consequence.
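A sketch of the idempotency check, with a hypothetical
append_if_absent helper: the source content is appended only
when no contiguous copy of it already appears in the target.

    def append_if_absent(sourcepath, targetpath):
        with open(sourcepath, 'rb') as source:
            content = source.read()
        with open(targetpath, 'rb') as target:
            current = target.read()
        if content not in current:
            # No contiguous copy of the source present; append it
            with open(targetpath, 'ab') as target:
                target.write(content)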
The default is 1 second; bump it to 2 seconds for some
extraordinarily slow switches. With the default of 5 retries,
this changes the overall wait to about 10 seconds.
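As a sketch of the arithmetic, with hypothetical names for
whatever settings object is in use:

    TIMEOUT_PER_ATTEMPT = 2  # seconds; raised from the 1-second default
    RETRIES = 5              # the default retry count
    OVERALL = TIMEOUT_PER_ATTEMPT * RETRIES  # about 10 seconds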
Rather than incurring the wait on each iteration (causing a
scan to take 15 seconds in testing), defer and handle them
all later (reducing the scan to 5 seconds).
If no neighbor table entry is present for a given address,
send a packet to induce kernel neighbor resolution. Then wait
a bit over 2 seconds to allow for 2 retries (at default
settings), and proceed on the assumption that all findable
neighbor table entries will have been found.
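A sketch of the two entries above taken together, ignoring
IPv6 link-local scoping for brevity: addresses found to be
missing a neighbor entry are collected during the scan, poked
with one packet each, and then a single wait covers all of
them rather than one wait per iteration.

    import socket
    import time

    def fill_neigh_entries(missing_addrs):
        # One packet per address is enough to make the kernel
        # start neighbor resolution; payload and port are moot.
        for addr in missing_addrs:
            family = socket.AF_INET6 if ':' in addr else socket.AF_INET
            probe = socket.socket(family, socket.SOCK_DGRAM)
            try:
                probe.sendto(b'\x00', (addr, 9))  # discard port
            except OSError:
                pass
            finally:
                probe.close()
        # A bit over 2 seconds allows 2 retries at default kernel
        # settings, after which any findable entry should exist.
        time.sleep(2.5)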
When an unreadable file was encountered, confluent would
cryptically report an rsync failure. If rsync fails, check
for the usual culprit: unreadable files. This causes the
error to manifest with clearer text.
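A sketch of the check, assuming rsync is invoked through
subprocess; the exact invocation and error type here are
illustrative, not confluent's actual ones. On failure, walk
the source tree for files the current user cannot read and
name them in the error text.

    import os
    import subprocess

    def run_rsync(source, dest):
        proc = subprocess.Popen(['rsync', '-a', source, dest],
                                stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE)
        stdout, stderr = proc.communicate()
        if proc.returncode != 0:
            unreadable = []
            for dirpath, dirnames, filenames in os.walk(source):
                for filename in filenames:
                    fullpath = os.path.join(dirpath, filename)
                    if not os.access(fullpath, os.R_OK):
                        unreadable.append(fullpath)
            if unreadable:
                raise Exception('rsync failed, unreadable files: '
                                + ', '.join(unreadable))
            raise Exception(stderr.decode('utf8', errors='replace'))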
Use netutil's assessment of the best server IP for PXE
responses. Using 'recvip' is too simplistic for broadcast
packets: recvip just takes the first IPv4 address on the
interface, when an alias may be better. netutil assesses all
the possible aliases and thus has better logic; pxe.py now
uses it.
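A sketch of the kind of assessment involved (not a
reproduction of netutil's actual function): among all
addresses and aliases on the receiving interface, prefer one
whose subnet contains the client, falling back to the first
address otherwise.

    import ipaddress

    def best_reply_ip(client_ip, interface_addrs):
        # interface_addrs: (address, prefixlen) tuples,
        # including aliases on the receiving interface.
        client = ipaddress.ip_address(client_ip)
        for address, prefixlen in interface_addrs:
            network = ipaddress.ip_network(
                '{}/{}'.format(address, prefixlen), strict=False)
            if client in network:
                return address  # alias on the client's subnet
        return interface_addrs[0][0]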
Eventlet narrowly targets overriding select in subprocess to
avoid rewriting otherwise adequate functions. However,
subprocess performs an 'optimization' that skips select when
there are fewer than 3 pipes to juggle and no timeout is
specified. Induce Python to always use select by specifying a
very long timeout. This allows confluent to spawn multiple
subprocesses without hanging while waiting for input.
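A sketch of the workaround, with a hypothetical command name:

    import subprocess

    proc = subprocess.Popen(['somecommand'],
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    # Any timeout, however generous, steers communicate() away
    # from the blocking fast path and into the select-based
    # implementation that eventlet has greened, so other
    # greenthreads keep running while this one waits.
    stdout, stderr = proc.communicate(timeout=86400)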
Hardcoding the 0x123 serial number would cause strict clients
to reject the certificate. While we are still not
guaranteeing uniqueness, the chances of a duplicate are
vanishingly small.
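A sketch of generating such a serial, not necessarily how
confluent does it:

    import os

    # RFC 5280 allows serial numbers up to 20 octets; drawing
    # 159 random bits keeps the value positive and in range
    # while making collisions vanishingly unlikely.
    serial_number = int.from_bytes(os.urandom(20), 'big') >> 1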
Closing the socket outside of relay_data caused relay_data to
stay alive, leading eventlet to think a filehandle was open
that was not. A reasonable question would be why eventlet
fails to error out that read when the filehandle is closed,
but for now, move the close activity into the relay_data
handler. This resolves the "Second simultaneous read on
fileno" condition introduced by the fix for leaking
filehandles.
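A sketch of the shape of the fix, with hypothetical forward
plumbing: the greenthread that reads the socket is the one
that closes it, so eventlet never observes a read racing a
close issued from elsewhere.

    def relay_data(sock, forward):
        # This greenthread owns the socket: it is the sole
        # reader and performs the close itself once the peer is
        # done, so no other greenthread can provoke a "Second
        # simultaneous read on fileno" on it.
        try:
            while True:
                data = sock.recv(16384)
                if not data:
                    break
                forward(data)
        finally:
            sock.close()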