There is no hard guarantee that all 4 bytes will arrive in a single recv(). In practice,
I've never seen a short read occur here, but to be complete, loop until recv() has returned everything it was supposed to.
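A minimal sketch of that guard, assuming a plain blocking socket and the 4-byte read described above:

    def recvall(sock, count):
        # A single recv() may legally return fewer bytes than asked for,
        # so loop until exactly 'count' bytes have arrived.
        data = b''
        while len(data) < count:
            chunk = sock.recv(count - len(data))
            if not chunk:
                raise IOError('connection closed before full message')
            data += chunk
        return data
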
It turns out that eventlet.green.threading.Event() performs poorly in this context, for reasons that remain unclear.
Use eventlet.event.Event() instead. It was not used before because it lacks a timeout on wait() and a clear() method,
but both gaps are overcome by disposing of the Event and creating a fresh one rather than reusing it, and by wrapping
wait() in eventlet.Timeout() to impose the timeout that wait() does not have built in.
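A sketch of that pattern, using only the eventlet API named above (the 30 second timeout is an arbitrary example):

    import eventlet
    import eventlet.event

    waiter = eventlet.event.Event()
    try:
        with eventlet.Timeout(30):
            result = waiter.wait()
    except eventlet.Timeout:
        result = None  # the wait timed out
    # No clear(): discard the used Event and make a fresh one instead.
    waiter = eventlet.event.Event()
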
To run as non-root, mkdir -p /var/run/confluent /var/log/confluent /etc/confluent
and chown those directories to the confluent user. That is probably the path deb and rpm packaging will take.
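A Python 3 sketch of the equivalent postinstall step a package might run (it must itself run as root; only the user, not a group, is set, matching the text above):

    import os
    import shutil

    for path in ('/var/run/confluent', '/var/log/confluent',
                 '/etc/confluent'):
        os.makedirs(path, exist_ok=True)      # mkdir -p
        shutil.chown(path, user='confluent')  # leave group unchanged
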
When any reconfiguration happens, break the command object (and the session that lies beneath it).
This causes needless churn in response to some changes that wouldn't matter, but that is a
small price to pay for the simplicity of not bothering to be fancier.
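A minimal sketch of the approach, with hypothetical names; the point is that the handler does not classify changes, it just breaks everything:

    class CommandPool(object):
        def __init__(self):
            self.commands = {}  # target -> live command object

        def on_any_config_change(self, _event):
            # Indiscriminate teardown: irrelevant changes only cost a
            # reconnect, and no change-classification logic is needed.
            for cmd in list(self.commands.values()):
                cmd.close()  # hypothetical; tears down the session too
            self.commands.clear()
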
For now, the only live code path is the one that
gets struck by the exception. However, the exception
only happens in 'strip_node', meaning that for
multi-node resources, things should be sane when
the time comes.
Previously, the code was pulling the cord on a perfectly serviceable IPMI
session object. Change to just deactivating the SOL portion
of the IPMI connection when the console closes.
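A hedged sketch of the new shape; the sol attribute and its deactivate() method are hypothetical stand-ins for whatever the IPMI library actually provides:

    def on_console_close(connection):
        # Old: logging out killed the whole IPMI session.
        # New: only the SOL payload goes away; the session remains
        # usable for other commands and future console opens.
        connection.sol.deactivate()  # hypothetical method name
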
With this change, an instance under pressure from new or bad authentication attempts
will continue to be viable for authenticated sessions and clients with tokens (e.g. any
HTTP client that honors cookies).
Previously, if a username or password was bad, no retry would ever occur.
Correct this so that periodically, and right when someone connects,
the target is rechecked to see whether the user/password is now considered good.
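A sketch of that recheck policy, with hypothetical names and an assumed interval:

    import time

    RECHECK_INTERVAL = 300  # seconds; an assumed value

    class Target(object):
        def __init__(self):
            self.creds_bad = False
            self.lastcheck = 0

        def probe_login(self):
            # Placeholder for a real login attempt with stored creds.
            return False

        def maybe_recheck(self):
            # Called periodically and on every new client connection.
            if not self.creds_bad:
                return
            if time.time() - self.lastcheck < RECHECK_INTERVAL:
                return
            self.lastcheck = time.time()
            self.creds_bad = not self.probe_login()
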
The pickling would get horrendously slow as total node count increased, meaning a very long sync
to disk for just one change out of 65,000 nodes. This strategy is more selective: only the dirty
keys are dumped rather than everything. Large changes to a small number of nodes will take
more time (because of more pickle dump calls), but small changes to a small subset of nodes will take much
less time.
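A minimal sketch of the dirty-key strategy:

    import pickle

    class Store(object):
        def __init__(self):
            self.data = {}    # key -> live object
            self.ondisk = {}  # key -> pickled bytes (stand-in for the db)
            self.dirty = set()

        def put(self, key, value):
            self.data[key] = value
            self.dirty.add(key)

        def sync(self):
            # Re-dump only the dirty keys: one change out of 65,000 now
            # costs one pickle.dumps, not a dump of the whole store.
            for key in self.dirty:
                self.ondisk[key] = pickle.dumps(self.data[key])
            self.dirty.clear()
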
There was an optimization to skip examination of groups if it was determined
that the group membership had not changed. However, this erroneously
masked the examination in the case of reordered groups. Skip the
optimization to cover that case at the expense of at least some needless churn.
This only happens when something changes group membership in some way, so
it shouldn't be too expensive.
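A small sketch of why the skip was wrong, assuming group order carries meaning (e.g. attribute precedence):

    oldgroups = ['compute', 'rack1']
    newgroups = ['rack1', 'compute']
    # Old optimization: a membership-only test misses reorders.
    membership_unchanged = set(newgroups) == set(oldgroups)  # True here
    # With the optimization skipped, the order-sensitive change is seen:
    needs_examination = newgroups != oldgroups               # also True
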
Now, when expressions cannot be evaluated to completion, the reason is presented under 'broken'.
Additionally, when unsetting a value that would affect expressions,
propagate the appropriate changes to those expressions.
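A sketch of the reporting shape; evaluate() is a hypothetical expression evaluator:

    def resolve(expression, evaluate):
        try:
            return {'value': evaluate(expression)}
        except Exception as exc:
            # Surface the failure instead of silently dropping the value.
            return {'broken': str(exc)}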