It's not clear why this happens, but recently in our gitlab-ci
all the Fedora machines started to fail. It happens in the step
check_run_clean 6 && test $IS_FEDORA = 1 -o $IS_CENTOS = 1 && ./contrib/fedora/rpm/build_clean.sh -g -w crypto_gnutls -w debug -w iwd -w test -W meson
which explains why it only affects Fedora configurations.
It does not always fail, but the probability of failure is high.
The failure is:
...
rm -f et.gmo && /usr/bin/msgmerge --for-msgfmt -o et.1po et.po NetworkManager.pot && /usr/bin/msgfmt -c --statistics --verbose -o et.gmo et.1po && rm -f et.1po
libgomp: Thread creation failed: Resource temporarily unavailable
make[3]: *** [Makefile:383: et.gmo] Error 1
Maybe some new resource restriction in gitlab. Let's add this workaround.
I don't really understand the cause, but this seems to avoid it, which is
good enough for me.
When we run `NM_TEST_SELECT_RUN=x ./.gitlab-ci/run-test.sh` to run one
step only, we should not do the final clean, so that the build artifacts
are preserved.
When we register/unregister a commit-type or when we add/remove
config-data to NML3Cfg, that act only does the registration/addition.
Only on the next commit are the changes actually done. The purpose
of this is to add/register multiple configurations and commit them later,
when ready.
However, it would be wrong not to do the commit shortly after. The
configuration state is dirty and needs to be committed, and that should
happen soon.
Worse, when an interface disappears, NMDevice will clear the ifindex and
the NML3Cfg instance, thereby unregistering all config-data and the
commit-type. If we previously committed something, we need to do another
follow-up commit to clean up that state.
That is for example important with ECMP routes, which are registered in
NMNetns. When NML3Cfg goes down, it must always unregister to properly
clean up. Failure to do so causes an assertion failure and a crash. This
change fixes that.
Fix that by automatically scheduling an idle commit on
register/unregister/add/remove of commit-type/config-data.
It should *always* be permissible to call an AUTO commit from
an idle handler, because the various parties that use NML3Cfg
cannot use it independently, and they cannot know when somebody
else does a commit.
Note that NML3Cfg remembers whether it previously did a commit
("commit_type_update_sticky"), so even if the last commit-type gets
unregistered, the next commit will still do a sticky update (one more
time).
The only remaining question is what happens during quitting. When
quitting NetworkManager, we may want to leave some interfaces up and
configured. If we were to properly clean up the NML3Cfg we might need a
mechanism to handle that. However, currently we just leak everything
during quit, so that is not a concern now. It is something that needs
to be addressed in the future.
https://bugzilla.redhat.com/show_bug.cgi?id=2158394
https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/merge_requests/1505
Routes can be added with `ip route add|change|replace|append|prepend`.
Add a test that randomly tries to add such routes, and checks that
the cache stays consistent.
https://bugzilla.redhat.com/show_bug.cgi?id=2060684
nmtstp_env1_add_test_func() prepares a certain environment (with a dummy
interface) that is used by some tests. Extend it to allow creating more
than one interface (currently up to two).
The major point of NMDedupMultiIndex is that it can de-duplicate
the objects. It thus makes sense that everybody uses the same
instance. Make the multi-idx instance of NMPlatform configurable.
This is not used outside of unit tests, because the daemon currently
always creates one platform instance and everybody then re-uses the
instance of the platform.
While this is (currently) only used by tests, and the performance
optimization of de-duplicating is irrelevant for tests, it is still
useful. A test can then check whether two separate NMPlatform objects
share the same instance and whether objects were de-duplicated.
There really is no way around this. As we don't cache all the routes
(some are ignored based on rtm_protocol or rtm_type), we cannot know
which route was replaced when we get a NLM_F_REPLACE message.
We need to request a new dump in that case, which can be expensive, if
there are a lot of routes or if replace happens frequently.
The only possible solutions would be:
1) NetworkManager caches all routes, but it also needs to make sure to
get *everything* right. In particular, to understand every relevant
route attribute (including those added in the future, which is
impossible).
2) the kernel provides a reasonable API (rhbz#1337855, rhbz#1337860) that
allows us to sufficiently understand what is going on based on the
netlink notifications.
When you issue
ip route replace broadcast 1.2.3.4/32 dev eth0
then this route may well replace a (unicast) route that we have in
the cache.
Previously, we would right away ignore such messages in
_new_from_nl_route(), which means we miss the fact that a route gets
replaced.
Instead, we need to parse the message at least far enough that we can
detect and handle the replace.
We don't cache certain routes, for example based on the protocol. This is
a performance optimization to ignore routes that we usually don't care
about.
Still, if the user does `ip route replace` with such a route, then we
need to pass it to nmp_cache_update_netlink_route(), so that we can
properly remove the replaced route.
Knowing which route was replaced might be impossible, as our cache does
not contain all routes. Likely all that nmp_cache_update_netlink_route()
can do is to set "resync_required" for NLM_F_REPLACE. But for that it
should see the object first.
This also means that we must never write a BPF filter that filters out
messages containing NLM_F_REPLACE, because that would lead to cache
inconsistencies.
The route table is part of the weak-id. You can see that with:
ip route replace unicast 1.2.3.4/32 dev eth0 table 57
ip route replace unicast 1.2.3.4/32 dev eth0 table 58
afterwards, `ip route show table all` will list both routes. The replace
operation is only per-table. Note that NMP_CACHE_ID_TYPE_ROUTES_BY_WEAK_ID
already got this right.
Fixes: 10ac675299 ('platform: add support for routing tables to platform cache')
By setting "NMTST_DEBUG" to any non-empty string, "is_debug" is enabled.
So "NMTST_DEBUG='debug,...'" is mostly redundant. Note however that you can
still disable debug mode explicitly, like "NMTST_DEBUG=no-debug,...",
which can make sense if you want to set other flags without enabling
debug mode (like "NMTST_DEBUG=no-debug,quick").
You can also explicitly set the log level ("NMTST_DEBUG='log-level=TRACE,...'")
or enable trace debugging with "NMTST_DEBUG='d,...'", where "d" (or "D")
is shorthand for "NMTST_DEBUG=log-level=TRACE,no-expect-message,...".
Anyway. Unless you explicitly set the log level with "log-level=" or "d",
the "debug" flag enables debug logging, but only at level "<debug>". That
is not ideal. Instead, enable "<trace>" level by default in debug mode.
That's useful, because there is no clear distinction between
"<debug>" and "<trace>" level. When debugging, you really want all the
information you can get; you can always filter it later (`grep` is a thing).
There is g_ptr_array_copy() in glib, but only since 2.68, so we cannot use it.
We had a compat implementation nm_g_ptr_array_copy(), however that one always
requires an additional parameter, the free function for the new array.
g_ptr_array_copy() always does a deep clone, and uses the source array's
free function. We don't have access to the free function (which seems quite
a limitation of the GPtrArray API), so our nm_g_ptr_array_copy() cannot be
exactly the same.
Previously, nm_g_ptr_array_copy() aimed to be as similar as possible to
g_ptr_array_copy(), and it required the caller to pass the same free
function as the array's. That seems an unnecessary limitation, and our
compat implementation still looks different and has a different name. If
we were able to fully re-implement it, we would instead add it to
"nm-glib.h".
Anyway. As our implementation already differs, there is no need for the
arbitrary limitation to only perform deep copies. Instead, also allow
shallow copies. Rename the function to nm_g_ptr_array_new_clone() to
make it clearly distinct from g_ptr_array_copy().
CURLOPT_PROTOCOLS [0] was deprecated in libcurl 7.85.0 with
CURLOPT_PROTOCOLS_STR [1] as a replacement.
Well, technically it was only deprecated in 7.87.0, and retroactively
marked as deprecated since 7.85.0 [2]. But CURLOPT_PROTOCOLS_STR exists
since 7.85.0, so that's what we want to use.
This causes compiler warnings and build errors:
../src/core/nm-connectivity.c: In function 'do_curl_request':
../src/core/nm-connectivity.c:770:5: error: 'CURLOPT_PROTOCOLS' is deprecated: since 7.85.0. Use CURLOPT_PROTOCOLS_STR [-Werror=deprecated-declarations]
770 | curl_easy_setopt(ehandle, CURLOPT_PROTOCOLS, CURLPROTO_HTTP | CURLPROTO_HTTPS);
| ^~~~~~~~~~~~~~~~
In file included from ../src/core/nm-connectivity.c:13:
/usr/include/curl/curl.h:1749:3: note: declared here
1749 | CURLOPTDEPRECATED(CURLOPT_PROTOCOLS, CURLOPTTYPE_LONG, 181,
| ^~~~~~~~~~~~~~~~~
This patch is largely taken from systemd patch [3].
Based-on-patch-by: Frantisek Sumsal <frantisek@sumsal.cz>
[0] https://curl.se/libcurl/c/CURLOPT_PROTOCOLS.html
[1] https://curl.se/libcurl/c/CURLOPT_PROTOCOLS_STR.html
[2] 6967571bf2
[3] e61a4c0b7c
Fixes: 7a1734926a ('connectivity,cloud-setup: restrict curl protocols to HTTP and HTTPS')
When only running a subset of the tests (with "-p"), valgrind
indicates a leak. Avoid that.
$ ./tools/run-nm-test.sh -m src/core/platform/tests/test-route-linux -v
# no leak
$ ./tools/run-nm-test.sh -m src/core/platform/tests/test-route-linux -v -p /route/ip4
# many leaks:
==1662102== 107 (96 direct, 11 indirect) bytes in 1 blocks are definitely lost in loss record 388 of 448
==1662102== at 0x4848464: calloc (vg_replace_malloc.c:1340)
==1662102== by 0x4F615F0: g_malloc0 (gmem.c:163)
==1662102== by 0x1621A6: _nmtst_add_test_func_full (nm-test-utils.h:918)
==1662102== by 0x1623EB: _nmtstp_setup_tests (test-route.c:2179)
==1662102== by 0x16E53D: main (test-common.c:2693)
==1662102==
{
<insert_a_suppression_name_here>
Memcheck:Leak
match-leak-kinds: definite
fun:calloc
fun:g_malloc0
fun:_nmtst_add_test_func_full
fun:_nmtstp_setup_tests
fun:main
}
This allows freeing resources (a pointer) at the end of the test.
The purpose is to avoid valgrind warnings about leaks. While a leak
in the test is not a severe issue by itself, it does interfere with
checking for actual leaks. Thus every leak must be avoided.
Only allocate one chunk of memory to contain all data of
NmtstTestData.
This isn't about performance (which doesn't matter for test code).
It's about packing everything in one struct and being able to free it
all at once with a simple g_free(). We no longer need
_nmtst_test_data_free() with this.
Note that NmtstTestData is never mutated, it just holds some data.
As such, the single place where such a structure gets initialized
can become a bit more complicated, in exchange for having a trivial
free operation (and anyway there are no functions that modify the data
or care about the data layout).
In kernel, the valid range for the weight is 1-256 (on netlink this is
expressed as u8 in rtnh_hops, ranging 0-255).
We need an additional value to represent
- an unset weight, for non-ECMP routes in the kernel.
- in the libnm API, routes that should not be merged as ECMP
routes (the default).
Extend the type in NMPlatformIP4Route.weight to u16, and fix the code
for the special handling of the numeric range.
Also the libnm API needs to change. Modify the type of the attribute on
D-Bus from "b" to "u", to use a 32 bit integer. We use 32 bit, because
we already have common code to handle 32 bit unsigned integers, despite
only requiring 257 values. It seems better to stick to a few data types
(u32) instead of introducing more, only because the range is limited.
Co-Authored-By: Fernando Fernandez Mancera <ffmancera@riseup.net>
Fixes: 1bbdecf5e1 ('platform: manage ECMP routes')
There are two callers of available_connections_add(). One from
cp_connection_added_or_updated() (which is when a connection
gets added/modified) and one from nm_device_recheck_available_connections().
They both call first nm_device_check_connection_available() to see
whether the profile is available on the device. They certainly
need to pass the same check flags, otherwise a profile might
be available in some cases, and not in others.
I didn't actually test this, but I think this could result
in a profile wrongly not being listed as an available-connection.
Moreover, that might mean that `nmcli connection up $PROFILE`
might work to find the device/profile, but `nmcli device up $DEVICE`
might not find a suitable profile (because the latter calls
nm_device_get_best_connection(), which iterates the
available-connections). I didn't test this, because regardless of
that, it seems obvious that the conditions for when we call
available_connections_add() must be the same in both places.
So the only question is what is the right condition, and it would
seem that _NM_DEVICE_CHECK_CON_AVAILABLE_FOR_USER_REQUEST is the right
flag.
Fixes: 02dbe670ca ('device: for available connections check whether they are available for user-request')
https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/merge_requests/1496
Sometimes the buffer space of the netlink socket runs out and we lose
the response to our link change:
<info> [1670321010.2952] platform-linux: netlink[rtnl]: read: too many netlink events. Need to resynchronize platform cache
<warn> [1670321010.3467] platform-linux: do-change-link[2]: failure changing link: internal failure 3
With 3 above being WAIT_FOR_NL_RESPONSE_RESULT_FAILED_RESYNC.
Let's try harder.
https://bugzilla.redhat.com/show_bug.cgi?id=2154350
$ nmcli --offline connection add type wifi con-name hotspot ssid hotspot-ssid wifi.mode ap wifi-sec.key-mgmt none wifi-sec.wep-key-type 1 wifi-sec.wep-key0 1234567890
would previously always print a message
Info: WEP key is guessed to be of '1 (key)'
At least, when we explicitly set the key-type, this message is bogus.
Suppress it.
It's anyway questionable whether printing such warnings does any good.
We would still get the warning with the arguments swapped, which seems wrong:
$ nmcli --offline connection add type wifi con-name hotspot ssid hotspot-ssid wifi.mode ap wifi-sec.key-mgmt none wifi-sec.wep-key0 1234567890 wifi-sec.wep-key-type 1
Info: WEP key is guessed to be of '1 (key)'
https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/merge_requests/1497
iptables takes a file lock at /run/xtables.lock. By default, if
the file is locked, iptables will fail with an error. When that happens,
the iptables rules won't be configured, and shared mode
(for which we use iptables) will not be set up properly.
Instead, pass "--wait 2" to block. Yes, it's ugly that we use
blocking program invocations, but that's how it is. Also, iptables
should be fast enough not to be a problem in practice.
This is not nice:
<warn> [1670321010.3467] platform-linux: do-change-link[2]: failure changing link: internal failure 3
Let's explain what "internal failure 3" is.