Note that NMSettingEthtool and NMSettingMatch don't have such
functions either.
We have API
nm_connection_get_setting (NMConnection *, GType)
nm_connection_get_setting_by_name (NMConnection *, const char *)
which can be used generically, meaning: the requested setting type
is an argument to the function. That is generally more useful and
flexible.
Don't add API which duplicates existing functionality and is (arguably)
inferior. Drop it now. This is an ABI/API break for the current development
cycle where the 1.14.0 API is still unstable. Indeed it's already after
1.14-rc1, which is ugly. But it's also unlikely that somebody already uses
this API/ABI and is badly impacted by this change.
Note that nm_connection_get_setting() and nm_connection_get_setting_by_name()
are slightly inconvenient in C still, because they usually require a cast.
We should fix that by changing the return type to "void *". Such
a change could be made at any time without breaking API/ABI (almost: it
would be an API change for code that takes a function pointer to these
functions without casting).
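To illustrate the cast (a sketch using the existing libnm API; "connection"
is assumed to be an NMConnection):

  NMSettingWired *s_wired;

  /* today: the return type is "NMSetting *", so a cast to the concrete
   * setting type is needed. */
  s_wired = (NMSettingWired *) nm_connection_get_setting (connection,
                                                          NM_TYPE_SETTING_WIRED);

  /* with a "void *" return type the cast could be dropped:
   *   s_wired = nm_connection_get_setting (connection, NM_TYPE_SETTING_WIRED);
   */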
Got this assertion:
NetworkManager[12939]: <debug> [1536917977.4868] active-connection[0x563d8fd34540]: set state deactivated (was deactivating)
...
NetworkManager[12939]: nm-openvpn[1106] <info> openvpn[1132]: send SIGTERM
NetworkManager[12939]: nm-openvpn[1106] <info> wait for 1 openvpn processes to terminate...
NetworkManager[12939]: nm-openvpn[1106] <warn> openvpn[1132] exited with error code 1
NetworkManager[12939]: <info> [1536917977.5035] vpn-connection[0x563d8fd34540,2fdeaea3-975f-4325-8305-83ebca5eaa26,"my-openvpn-Red-Hat",0]: VPN plugin: requested secrets; state disconnected (9)
NetworkManager[12939]: plugin_interactive_secrets_required: assertion 'priv->vpn_state == STATE_CONNECT || priv->vpn_state == STATE_NEED_AUTH' failed
Meaning: we should either ensure that the secrets_required_cb() signal
callback is disconnected from the proxy's signal, or gracefully handle
callbacks that arrive at unexpected moments. Do the latter.
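A minimal sketch of the graceful handling (the state fields and names follow
the assertion above; the exact callback signature is an assumption):

  static void
  secrets_required_cb (GDBusProxy *proxy,
                       const char *message,
                       const char *const*secrets,
                       gpointer user_data)
  {
      NMVpnConnection *self = NM_VPN_CONNECTION (user_data);
      NMVpnConnectionPrivate *priv = NM_VPN_CONNECTION_GET_PRIVATE (self);

      /* The plugin may emit this signal at any time, for example after we
       * already started disconnecting. Instead of asserting on the state,
       * ignore the callback when it arrives at an unexpected moment. */
      if (priv->vpn_state != STATE_CONNECT && priv->vpn_state != STATE_NEED_AUTH)
          return;

      /* ... proceed to request secrets ... */
  }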
In general, when we build a package, we want no compiler warnings
and all unit tests to pass.
That is in particular true when building a package for the distribution
in koji. When building in koji, we (rightly) cannot pass rpmbuild options, so
the default of whether tests/compiler-warnings are fatal matters very much.
One could argue: let's make tests/compiler-warnings fatal and fail the build.
During a build in koji for a Fedora release, we want them all to pass. And if
somebody does a manual build, the person can patch the spec file (or use
rpmbuild flags).
However, note how commit "f7b5e48cdb contrib/rpm: don't force fatal warnings
with tests" already disabled fatal compiler warnings. Why? It seems
compiler warnings should be even more stable than our unit tests, as long
as you target a particular Fedora release and compiler version. So this
was done to support rebuilding an SRPM for a different Fedora release,
or to be more graceful during the early development phase of a Fedora
release, where things are not as stable yet.
The exact same reasoning applies to treating unit-test failures as fatal.
For example, a recent iproute2 issue broke unit tests. That meant that, with
that iproute2 release in the build root, the NetworkManager RPM could not be built.
Very annoying.
Now:
- if "test" is enabled, that means both `make check` and compiler warnings
are treated fatal. If "test" is disabled, `make check` and compiler
warnings are still done, just not fatal.
- "test" is now disabled by default via the spec file. They are not fatal
when building in koji or when rebuilding the package manually.
- tests can be enabled optionally. Note that the "build_clean.sh"
script enables them by default. So, a user using this script would
need to explicitly pass "--without test" (see the examples after this list).
- always enable more compiler warnings. They are not marked as breaking
the build anyway.
- also, always build with '--with-tests=yes'. Note that our autotools setup
is actually very nice: even if you build with '--with-tests=no', you can
still run `make check` and the tests are built on demand. The only
difference is whether the tests are built during `make` or during
`make check`. While that makes little difference, build everything during
the `make` step.
- when running tests, use `make -k check`. Even if they fail, we want to
run the entire test suite.
- even when tests are disabled, still run them, but don't let them
fail the build.
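To make the knob concrete, two example invocations (the SRPM file name is
only illustrative):

  # opt back in to fatal tests/compiler warnings for a manual rebuild:
  $ rpmbuild --rebuild --with test NetworkManager-1.14.0-0.1.fc29.src.rpm

  # build_clean.sh enables "test" by default; opt out explicitly:
  $ ./contrib/fedora/rpm/build_clean.sh --without test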
In libnm, we prefer opaque typedefs. gtk-doc needs to be patched to properly
generate documentation. Add a check for that.
Add a test. By default, it does not fail but just prints a warning. The test
can be made to fail by setting NMTST_CHECK_GTK_DOC=1.
See-also: https://gitlab.gnome.org/GNOME/gtk-doc/merge_requests/2
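For context, an opaque typedef means the public header only declares the
type and the struct definition stays private, for example:

  /* public header: users cannot poke into the struct */
  typedef struct _NMSettingEthtool NMSettingEthtool;

And to make the new check fatal:

  $ NMTST_CHECK_GTK_DOC=1 make check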
The NetworkManager spec file used to determine devel builds as those that
have an odd minor version number. In that case, the built package would
enable more-asserts.
-- By the way, why is '1.13.3-dev' considered a development version worthy of more
asserts, but a build from the development phase of the next minor release on the
'nm-1-12' branch not?
Note that during the development phase of Fedora (and sometimes even afterwards),
we commonly package development versions from 'master'. For example '1.12.0-0.1',
which is some snapshot with version number '1.11.x-dev' (or '1.12-rc1' in this case),
but before the actual '1.12.0' release.
It's problematic that for part of the devel phase we compile the
package for the distribution with more assertions. This package is
significantly different, and rpmdiff and coverity give different results
for it.
For example, the binary size of debug packages is larger, so first
rpmdiff will complain that the binary size increased (compared to the
previous version), and then later that it decreased again.
Likewise, coverity finds significantly different issues on a debug
build. For example, it sees assertions against NULL and takes that
as a hint as to whether the parameter can/shall be NULL. Keeping
coverity warnings low already requires high effort to sort out false
positives. We should not invest time in checking debug builds with
coverity, at least not as long as there are more important issues.
But more importantly, the --with-more-asserts configure option governs whether
nm_assert() is enabled. The only point of existence of nm_assert() -- compared to
g_assert(), g_return_*() and assert() -- is that this variant is disabled by default.
It's only used for checks that are really really not supposed to fail and/or
which may be expensive to do. This is useful for developing and CI,
but it's not right to put into the distribution. It really enables
assertions that you don't want in such a scenario. Enabling them even
for distribution builds defeats their purpose. If you care about an
assertion being usually/always enabled, you should use g_assert() or
g_return_*() instead.
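A sketch of the intended split (the function is made up for illustration):

  static char *
  lookup_name (GHashTable *cache, const char *key)
  {
      /* cheap check of a public contract: g_return_*() stays enabled
       * in release builds. */
      g_return_val_if_fail (key, NULL);

      /* a check that really, really is not supposed to fail, and/or may be
       * expensive: nm_assert() is compiled out unless --with-more-asserts
       * is given. */
      nm_assert (g_hash_table_size (cache) < 100000);

      return g_strdup (g_hash_table_lookup (cache, key));
  }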
What this changes is that "devel" builds in koji/brew no longer have more-asserts
enabled. When manually building the SRPM, one can still enable it,
for example via
$ ./contrib/fedora/rpm/build_clean.sh -w debug
Also our CI has an option to build packages with or without more-asserts
(defaulting to more asserts already).
If the user explicitly passes --with-netconfig=$PATH or --with-resolvconf=$PATH,
the path is accepted as is. We only do autodetection if the user did not
specify a path. In that case, if the binary cannot be found in the common
paths, fail compilation.
Some path variables like $(bindir), $(datadir), etc. are special for
autotools and must be handled separately through config-extra.h.
But the dhcp path variables are just normal variables defined through
the configure script and should go into config.h.
Handle the iptables, dnsmasq and dnssec-trigger paths in the same way
through common code.
The path set by the user must be accepted as is, even if it does not exist,
because this is a requirement for cross-compilation. When the user does
not specify a path, search a predefined set of paths and fall back to
a hardcoded one.
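The common pattern then looks roughly like this in configure.ac (a sketch
with abbreviated arguments, not the literal code; dnsmasq is one example):

  AC_ARG_WITH([dnsmasq],
              AS_HELP_STRING([--with-dnsmasq=/path], [path to the dnsmasq binary]))
  if test -n "$with_dnsmasq"; then
      # accept the user-provided path as-is (required for cross-compilation)
      DNSMASQ_PATH="$with_dnsmasq"
  else
      AC_PATH_PROG([DNSMASQ_PATH], [dnsmasq], [/usr/sbin/dnsmasq],
                   [$PATH:/usr/sbin:/sbin])
  fi
  AC_DEFINE_UNQUOTED([DNSMASQ_PATH], ["$DNSMASQ_PATH"], [path to dnsmasq])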
Set the execution bit on the /usr/sbin/{ifup,ifdown} ghost files to match
the mode of the same files installed by initscripts.
Otherwise, they will appear as changed according to rpm verify:
.M....... g /usr/sbin/ifdown
.M....... g /usr/sbin/ifup
when the alternatives mechanism is not in place.
# ll /usr/sbin/if{up,down}
-rwxr-xr-x. 1 root root 1651 Aug 24 06:23 /usr/sbin/ifdown
-rwxr-xr-x. 1 root root 5010 Aug 24 06:23 /usr/sbin/ifup
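In the spec file this amounts to something like:

  %ghost %attr(0755, root, root) %{_sbindir}/ifup
  %ghost %attr(0755, root, root) %{_sbindir}/ifdown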
https://bugzilla.redhat.com/show_bug.cgi?id=1626517
A few components are still disabled. Most notably, team support
which is not available on Ubuntu 14.04 (trusty).
All the other disabled components are due to bugs in our build tools.
It should be possible to enable them, but they currently break on travis.
Those need additional fixes.
In particular, the DHCP plugins and the ifcfg-rh plugin with meson.
Also, the netconfig plugin with autotools requires that the path exists.
Rename variables for the error number. Commonly the naming
is:
- errno: the error number from <errno.h> itself
- errsv: a copy of errno (see the sketch after this list)
- nlerr: a netlink error number
- err: an error code, but neither an errno/errsv nor
a netlink error number.
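For example, the errsv pattern (a minimal sketch):

  #include <errno.h>
  #include <unistd.h>

  static int
  remove_file (const char *path)
  {
      int errsv;

      if (unlink (path) != 0) {
          /* copy errno right away; any later library call may overwrite it */
          errsv = errno;
          return -errsv;
      }
      return 0;
  }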
The internal DHCPv4 client requires a valid MAC address to function.
Just always require a MAC address to start DHCP, both v4 and v6.
We have no MAC address for example on layer 3 devices like tun or wireguard.
Also, before "34af574d58 systemd/dhcp: fix assertion starting DHCP
client without MAC address", if we tried to start sd_dhcp_client without
setting a MAC address, an assertion was triggered.
An assertion in dhcp_network_bind_raw_socket() is triggered when
starting an sd_dhcp_client without setting a MAC address
first.
- sd_dhcp_client_start()
- client_start()
- client_start_delayed()
- dhcp_network_bind_raw_socket()
In that case, the arp-type and MAC address are still unset. Note that
dhcp_network_bind_raw_socket() already checks for a valid arp-type
and MAC address below, so we should just gracefully return -EINVAL.
Maybe sd_dhcp_client_start() should fail earlier when starting without a
MAC address. But the failure here will be correctly propagated and
the start aborted.
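A sketch of the graceful check (the variable names in the sd-dhcp code differ):

  /* in dhcp_network_bind_raw_socket(): instead of asserting on an unset
   * arp-type/MAC address, fail gracefully; the error is propagated and
   * the caller aborts the start. */
  if (arp_type != ARPHRD_ETHER && arp_type != ARPHRD_INFINIBAND)
      return -EINVAL;
  if (mac_addr_len == 0)
      return -EINVAL;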
See-also: https://github.com/systemd/systemd/pull/10054