"force_rename" parameter was not used previously, but it also was broken.
Fix it. We need to create a new NMSettingsStorage instance when the filename
changes, as the storage's filename is immutable.
No bad effects so far, it was unused.
But as it wasn't used, also no longer set the update_reason flag
NM_SETTINGS_CONNECTION_UPDATE_REASON_FORCE_RENAME. We didn't have the
force-rename behavior so far. This makes the flag totally unused, and
maybe should be dropped. It's kept for now, if only to show what could
be done.
We have some internal code that is only used to expose functionality for
the tests. Such functions should be easily distinguishable from code
that is used by the "real" code. Give them an "nmtst" prefix: rename
nms_keyfile_writer_test_connection() to nmtst_keyfile_writer_test_connection().
When a user changes SR-IOV VF settings with options like `max-tx-rate`
that some hardware does not support yet, the failure of that VF fails
the whole activation; SR-IOV then gets disabled, which means all the VFs
are deleted.
Deleting VFs might break network connectivity, and this collateral
damage from a failing VF option is not acceptable for OpenShift use
cases, even if they have checkpoint protection.
This patch only logs a warning message on failure of VF options and does
not fail the activation.
NetworkManager also ignores MTU failures during activation; I believe
this fits the same assumption.
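Roughly, the change amounts to a warn-and-continue pattern along these
lines (a sketch only; the VFOption type and the callback are made up for
illustration, not the actual code path):

    #include <glib.h>

    /* Hypothetical representation of a single VF option. */
    typedef struct {
        const char *name;
        gboolean (*apply) (int ifindex, GError **error);
    } VFOption;

    static void
    apply_vf_options_best_effort (int ifindex, const VFOption *options, guint n_options)
    {
        for (guint i = 0; i < n_options; i++) {
            GError *local = NULL;

            if (!options[i].apply (ifindex, &local)) {
                /* Previously a failure here aborted the activation, which
                 * disabled SR-IOV and thereby deleted all VFs. Now only warn
                 * and continue with the remaining options. */
                g_warning ("sriov: failed to set VF option '%s': %s",
                           options[i].name, local->message);
                g_clear_error (&local);
            }
        }
    }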
Use case reference: https://bugzilla.redhat.com/show_bug.cgi?id=2210164
Signed-off-by: Gris Ge <fge@redhat.com>
Fail to save a connection with a 'link' setting instead of just
ignoring it. Now:
$ nmcli connection add type ethernet ifname foobar
Connection 'ethernet-foobar' (c3f6f067-e1d5-4bb1-8d67-e09109253a79) successfully added.
$ nmcli connection modify ethernet-foobar link.tx-queue-length 1234
Error: Failed to modify connection 'ethernet-foobar': failed to update connection: The ifcfg-rh plugin doesn't support setting 'link'. If you are modifying an existing connection profile saved in ifcfg-rh format, please migrate the connection to keyfile using 'nmcli connection migrate c3f6f067-e1d5-4bb1-8d67-e09109253a79' or via the Update2() D-Bus API and try again.
$ nmcli connection migrate c3f6f067-e1d5-4bb1-8d67-e09109253a79
Connection 'ethernet-foobar' (c3f6f067-e1d5-4bb1-8d67-e09109253a79) successfully migrated.
$ nmcli connection modify ethernet-foobar link.tx-queue-length 1234
$
Fixes: 39bfcf7aab ('all: add "link" setting')
The ifcfg-rh plugin is now deprecated and in bugfixes-only mode. When
users try to set a property that is not supported by the plugin, we
need to report an error.
Add a helper function to set such an error. Also, introduce a new error
code so that the situation can be detected and dealt with
programmatically.
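A rough sketch of the idea (the helper name and the new error code below
are placeholders, not necessarily the names used in the tree;
NM_SETTINGS_ERROR is the existing settings error domain from libnm's
nm-errors.h):

    #include <glib.h>

    /* Hypothetical helper: report that a plugin cannot store a given setting,
     * using a dedicated error code so callers can detect the case and e.g.
     * suggest migrating the profile to keyfile. */
    static void
    set_error_unsupported_setting (GError    **error,
                                   const char *plugin_name,
                                   const char *setting_name)
    {
        g_set_error (error,
                     NM_SETTINGS_ERROR,
                     NM_SETTINGS_ERROR_NOT_SUPPORTED_BY_PLUGIN, /* hypothetical code */
                     "The %s plugin doesn't support setting '%s'",
                     plugin_name,
                     setting_name);
    }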
Debian:9 (stretch) is archived. We need to patch the sources.list
for it to be usable.
Although it's end of life, we are still interested in whether we
can build with such an old compiler. Fix the test.
Rename the variables and function name to use conscious language. In
addition, rename the `type` and `link_type` variables to `port_type` and
`controller_type` to make them more intuitive.
When configuring the bridge port options, the code was checking the
port's link type instead of the controller's link type. In addition, the
test is now skipped for nm-fake-platform.
From time to time we bump the clang-format (and Fedora) version that we
use. Previously, we had to change more than one place.
Instead, let "nm-code-format-container.sh" parse it from ".gitlab-ci/config.yml".
The macro uses g_alloca(). Using alloca() is potentially dangerous. For
example, it must never be used in an unbounded loop. This should be
immediately obvious from the name, so that we don't accidentally use it
in the wrong context.
All other alloca() macros should have such a prefix already. And they
always have to be macros, because you cannot use alloca() to return
memory from a function.
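As an illustration of the danger (plain alloca() here, not the actual
macro): in a loop, every iteration allocates on the same stack frame and
nothing is released until the function returns.

    #include <alloca.h>
    #include <string.h>

    void
    process_names (const char *const *names, size_t n_names)
    {
        for (size_t i = 0; i < n_names; i++) {
            /* BAD: with an unbounded n_names this can overflow the stack,
             * because alloca()ed memory is only freed when the function
             * returns, not at the end of the loop body. */
            char *copy = alloca (strlen (names[i]) + 1);

            strcpy (copy, names[i]);
            /* ... use "copy" ... */
        }
    }

A macro name that explicitly signals the alloca() nature makes such a use
stand out during review.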
We want to guard against concurrent modifications of profiles. We cannot
lock profiles, so what we instead do is expose (and bump) a version ID.
The user can check the version ID, plan ahead what to do, and tell
NetworkManager to only make the modification if no concurrent
modification was done. The conflict can be detected via the version ID.
The Update2() D-Bus call gets a parameter to only allow the request if
the version ID still matches.
nmcli should use this, but it is quite some effort to retry upon
concurrent modification. This is still to do.
Note that the user might make a decision that is based on multiple
profiles. As the new version-id is only per-profile, we cannot guard
against such inter-profile modifications. What would be needed is an
UpdateMany() call, where we could modify multiple profiles at once, and
the action only takes effect if all version IDs show no concurrent
modification. That's not done yet, and maybe never will be.
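A rough client-side sketch of the guarded update with GDBus (the
"version-id" key in the "args" dictionary is an assumption here; check
the D-Bus API documentation of your NetworkManager version for the exact
name):

    #include <gio/gio.h>

    /* Sketch only: building "settings_dict" (the a{sv} of new settings) and
     * reading the profile's current version ID are omitted. */
    static gboolean
    update_profile_guarded (GDBusConnection *bus,
                            const char      *connection_path,
                            GVariant        *settings_dict,
                            guint64          expected_version_id,
                            GError         **error)
    {
        GVariantBuilder args;
        GVariant       *ret;

        g_variant_builder_init (&args, G_VARIANT_TYPE ("a{sv}"));
        g_variant_builder_add (&args, "{sv}", "version-id",
                               g_variant_new_uint64 (expected_version_id));

        /* Update2 (a{sv} settings, u flags, a{sv} args): with the version ID
         * in "args", the request is rejected if the profile was modified
         * concurrently, instead of silently overwriting the other change. */
        ret = g_dbus_connection_call_sync (bus,
                                           "org.freedesktop.NetworkManager",
                                           connection_path,
                                           "org.freedesktop.NetworkManager.Settings.Connection",
                                           "Update2",
                                           g_variant_new ("(@a{sv}u@a{sv})",
                                                          settings_dict,
                                                          (guint) 0,
                                                          g_variant_builder_end (&args)),
                                           G_VARIANT_TYPE ("(a{sv})"),
                                           G_DBUS_CALL_FLAGS_NONE,
                                           -1,
                                           NULL,
                                           error);
        if (!ret)
            return FALSE;
        g_variant_unref (ret);
        return TRUE;
    }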
By default, bond/bridge/team devices ignore carrier, and so do their
ports. However, it can make sense to set '[device*].ignore-carrier' for
the controller device. Meaningfully support that.
This is a follow up to commit 8c91422954 ('device: handle carrier
changes for master device differently'), which didn't fully solve the
problem.
What already works is that when you set ignore-carrier for the
controller, then after loss of carrier and a carrier wait timeout, the
controller and ports go down. If both the controller and port profiles
have autoconnect disabled, they stay down and that's it. It works as
expected, but is not very useful, because when we want to automatically
react to carrier loss, we also want to automatically reconnect.
For controller profiles, carrier only makes sense when ports are
attached. However, we can (auto) activate controller profiles without
ports. So when the user enables autoconnect for the controller profile,
the profile will eagerly reconnect. That means that, after loss of
carrier, the device goes down and reconnects right away. So when
configuring a bond with ignore-carrier=no and autoconnect=yes, something
sensible happens (an immediate reconnect); it's just not a useful
configuration.
To usefully configure ignore-carrier=no for a controller device,
autoconnect on the controller must be disabled while being enabled on
the ports. After all, it's the ports that will autoconnect based on
the carrier state and bring up the controller with them.
Note that at the moment when a port decides to autoconnect, the
controller profile is not yet selected. That only happens later during
_internal_activate_device() after searching it with find_master(). At
that point, the port profile checks whether it should autoconnect based
on its own carrier state, and aborts if not.
If autoconnect is aborted due to lack of carrier, the profile gets
blocked from autoconnect with reason "failed". Hence, when the carrier
returns, we need to clear any "failed" blocked reasons and schedule
another autoconnect check.
Note that this really only works if the port is itself a simple device,
like an ethernet. If the port is itself a software device (like a bond,
or a VLAN), then the carrier state in _internal_activate_device() is
unknown, and we cannot avoid autoconnect. It's unclear how that could
make sense, if at all.
This setup can be combined with "connection.autoconnect-slaves=yes". In
that case, the first port that gets carrier autoconnects and brings up
the controller too. Usually the other ports that don't have carrier
would not autoconnect, but with autoconnect-slaves they will.
The effect is that we autoconnect whenever any of the ports has
carrier, and then we immediately also bring up the ports that don't have
carrier (which we usually would not).
https://bugzilla.redhat.com/show_bug.cgi?id=2156684
https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/merge_requests/1658
Between ppp 2.4.8 and 2.4.9, "rp-pppoe.so" was renamed to "pppoe.so" (and a
symlink created). Between 2.4.9 and 2.5.0, the symlink was dropped.
See-also: b2c36e6c0e
I guess NetworkManager always meant to use ppp's "(rp-)pppoe.so"
plugin, and never the library from the rp-pppoe project.
If a user actually wants to use the plugin from the rp-pppoe project,
then this is going to break.
https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/issues/1312
Fixes: afe80171b2 ('ppp: move ppp code to "nm-pppd-compat.c"')
Previously, the ppp version was only detected (and used) at one place,
in "nm-pppd-compat.c", after including the ppp headers. That was nice
and easy.
However, that way we could only detect it after including the ppp
headers, and given the ugliness of the ppp headers, we only want to
include them in "nm-pppd-compat.c" (and nowhere else).
In particular, "nm-pppd-compat.c" uses symbols from the ppp daemon and
thus can only be linked into a ppp plugin, not into NetworkManager core
itself. But at some places we will need to know the ppp version outside
of the ppp plugin and "nm-pppd-compat.c".
Additionally, detect the ppp version at configure time and place it in
"config.h". A static assert checks that the two ways of detection agree.
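Roughly, the double check looks like this (a sketch only; the "config.h"
macro name and the PPPD_VERSION header check are assumptions, not
necessarily the actual names used in the tree):

    #include <glib.h>
    #include "config.h"    /* hypothetically defines NM_PPP_VERSION_2_5_OR_NEWER (0 or 1) */
    #include <pppd/pppd.h>

    /* If the installed ppp headers expose their own version marker (assumed
     * here to be PPPD_VERSION for ppp >= 2.5), the configure-time detection
     * must agree with what the headers say. */
    #ifdef PPPD_VERSION
    G_STATIC_ASSERT (NM_PPP_VERSION_2_5_OR_NEWER);
    #else
    G_STATIC_ASSERT (!NM_PPP_VERSION_2_5_OR_NEWER);
    #endif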