python-setuptools is now gone from debian:testing ([1], [2]):
Package python-setuptools is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
E: Package 'python-setuptools' has no installation candidate
This package is entirely optional. Fix the failure by tolerating an
unsuccessful installation of the package.
[1] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=938168
[2] https://tracker.debian.org/news/1391360/python-setuptools-removed-from-testing/
c37722ff2f55 generic: use _c_boolean_expr_() in _c_{likely,unlikely}_()
8baa8831b17a generic: add _c_boolean_expr_() to preserve "-Wparentheses" warning
2cda8dc53a9a generic: use _c_likely_() in c_assert()
git-subtree-dir: src/c-stdaux
git-subtree-split: c37722ff2f5525caa6680e6114333222a9d468a4
openvswitch accepts "dot1q-tunnel" as vlan mode:
A dot1q-tunnel port is somewhat like an access port. Like an
access port, it carries packets on the single VLAN specified
in the tag column and this VLAN, called the service VLAN,
does not appear in an 802.1Q header for packets that ingress
or egress on the port. The main difference lies in the
behavior when packets that include a 802.1Q header ingress on
the port. Whereas an access port drops such packets, a
dot1q-tunnel port treats these as double-tagged with the
outer service VLAN tag and the inner customer VLAN taken
from the 802.1Q header. Correspondingly, to egress on the
port, a packet's outer VLAN (or only VLAN) must be tag, which
is removed before egress, which exposes the inner (customer)
VLAN if one is present.
Support this mode.
Add a new "ovs-port.trunks" property that indicates which VLANs are
trunked by the port.
At the ovsdb level the property is just an array of integers; on the
command line, ovs-vsctl accepts ranges and expands them.
In NetworkManager the ovs-port setting stores the trunks directly as a
list of ranges.
The next commit is going to introduce a new object in libnm to
represent a range of ovs-port VLANs. A "range of integers" object
seems like something that can be used for other purposes in the future,
so instead of adding an object specific to this case
(e.g. NMOvsPortVlanRange), introduce a generic NMRange object that
represents a range of non-negative integers.
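As a rough sketch of the intended usage (the constructor nm_range_new(),
the setter nm_setting_ovs_port_add_trunk() and nm_range_unref() are
assumptions about the new API, not confirmed signatures):

  /* Sketch only: model "ovs-port.trunks" as a list of generic ranges.
   * Function names are assumed, not taken from the actual libnm headers. */
  #include <NetworkManager.h>

  static void
  add_example_trunks(NMSettingOvsPort *s_port)
  {
      /* trunk VLANs 100-200 plus the single VLAN 300 */
      NMRange *r1 = nm_range_new(100, 200);
      NMRange *r2 = nm_range_new(300, 300);

      nm_setting_ovs_port_add_trunk(s_port, r1);
      nm_setting_ovs_port_add_trunk(s_port, r2);

      /* assuming the setting takes its own reference to the ranges */
      nm_range_unref(r1);
      nm_range_unref(r2);
  }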
In some scenarios, autoconnect should not be blocked if the device is
activated with an external connection (e.g. autoconnect on the loopback
device).
Add the `allow_autoconnect_on_external` flag to support this behavior.
Support managing the loopback interface through NM, as users want to
set a proper MTU on the loopback interface when forwarding packets.
Additionally, IP addresses, DNS, routes and routing rules can also be
configured for loopback connection profiles.
https://bugzilla.redhat.com/show_bug.cgi?id=2060905
We will soon handle loopback, so -- if no loopback profile is activated
in NetworkManager -- we will have an externally managed profile on
loopback. This messes up the result.
In general, external connections don't make much sense for
build_device_hostname_infos(). Ignore them.
any_devices_active() exists to avoid hostname update when no devices are
active. See [1] and commit b07f6712e9 ('policy: check for active
devices before triggering dns update on hostname change').
Soon, we will add support for the loopback device, so "lo" will
almost always be activated (either externally or actively managed by
NetworkManager).
In any case, external devices should not count here, even if they appear
activating/activated.
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1344303
The implementation of static asserts with (sizeof(char[(cond) ? 1 : -1]))
silently fails if the condition is not a compile-time constant, because
it results in a VLA which is evaluated at runtime. For that reason we
build with "-Wvla", to catch accidentally using a non-constant expression
in a static assert. But still, we can do better. Instead, use bit-fields
to trigger the compiler error. A bit-field width must be a constant
expression, so this fails for non-constant conditions even without "-Wvla".
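For illustration, a minimal sketch of the two approaches (simplified
macros, not the literal nm-std-aux implementation): the bit-field width
must be a non-negative integer constant expression, so the second variant
produces a hard compiler error for false or non-constant conditions, with
or without "-Wvla".

  /* Old approach: a false condition yields a negative array size. If the
   * condition is not a compile-time constant, this silently becomes a VLA
   * that is only "checked" at runtime. */
  #define STATIC_ASSERT_VLA(cond) ((void) sizeof(char[(cond) ? 1 : -1]))

  /* Bit-field approach: the width must be a constant expression, so a
   * false or non-constant condition is rejected at compile time. */
  #define STATIC_ASSERT_BITFIELD(cond) \
      ((void) sizeof(struct { unsigned int _assert : ((cond) ? 1 : -1); }))

  int
  main(void)
  {
      STATIC_ASSERT_BITFIELD(sizeof(int) >= 4);  /* compiles */
      /* STATIC_ASSERT_BITFIELD(sizeof(int) > 100);   does not compile */
      return 0;
  }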
https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/merge_requests/1468
2d3877aabd7d docs: avoid duplicate headers
ba751b517888 c-stdaux: be more consistent with #ifdef
9796f4a63a4b c-stdaux: move _c_always_inline_ to *-generic
34067b3a5f4f c-stdaux: avoid declspec-fallback for _c_public_
82b82245cf36 c-stdaux: expose _c_public_ in *-generic
37fa624afcd6 docs: set C_COMPILER_DOCS
7197bc75f829 docs: add ./src to include path
34ed5b2c4b52 test-basic: avoid _c_unused_
00cc51c99c64 test-basic: fix *_gnuc() fallback to have an argument
6a9262c168f7 test-basic: use strtol() over close() to set errno
807d4a704757 test-basic: guard cleanup-tests by GNUC
13f65ad8c27c test-basic: separate tests by module
fdf399ef7f5b test-api: only test for available APIs
1f9cfe8e3b2f c-stdaux: export C_MODULE_*
65bf768151e3 c-stdaux: move GNUC-macros into separate module
6549fa0eb8f3 c-stdaux: extract unix'ish code into separate module
d69c3c0fe7ee c-stdaux: split off portable code
132d82a37607 c-stdaux: add C_COMPILER_DOCS documentation
053b2d9f1c11 c-stdaux: avoid ctx-expr in c_assert()
e75f32c2e046 c-stdaux: fix typo in c_assert() docs
d75a2350ae22 c-stdaux: stub likely/unlikely as fallback
eb90a0d0fced c-stdaux: fix documentation of likely/unlikely
57f332c53184 c-stdaux: fix typo in c_closedir() docs
f3d6b60400d3 c-stdaux: add _c_always_inline_
8d017b02cf12 c-stdaux: provide target identification
3d8f78f964ff ci: enable windows builds
git-subtree-dir: src/c-stdaux
git-subtree-split: 2d3877aabd7d0e813f4a153ac262ee83b3c04793
a4144785ab77 docs: include ./src in include path
efd6619234cd docs: use c-apidocs glob
git-subtree-dir: src/c-rbtree
git-subtree-split: a4144785ab77ecc0627898c7c60523b2368c6ecb
When the test in gitlab-ci fails, you might want to rerun the test
on your machine. You fire up podman, run "./.gitlab-ci/*-install.sh"
and "./.gitlab-ci/run-test.sh".
Make it possible to manually select parts that are tested by
"run-test.sh" by setting NM_TEST_SELECT_RUN. Otherwise, if you want to
test a particular configuration, you either have to run all earlier
steps (which takes a long time and can even be broken) or you have
to manually patch the file.
For example,
NM_TEST_SELECT_RUN=6 ./.gitlab-ci/run-test.sh
clang-3.4.2-9.el7 does not like nesting the NM_MAX() macro inside the nm_hash_update_vals() macro.
Work around this by using MAX() instead. NM_MAX() uses a statement expression and NM_UNIQ()
to evaluate the arguments only once. We don't need that here, and glib's MAX() suffices.
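As a rough sketch of the difference (simplified macros; the real
definitions live in nm-std-aux.h and glib). The failing clang output is
quoted below.

  /* Statement-expression variant, like NM_MAX(): each argument is
   * evaluated only once, but the result is not a constant expression,
   * which is what the old clang trips over when nm_hash_update_vals()
   * places it inside an initializer. */
  #define MAX_ONCE(a, b)          \
      __extension__({             \
          __typeof__(a) _a = (a); \
          __typeof__(b) _b = (b); \
          _a > _b ? _a : _b;      \
      })

  /* Plain ternary variant, like glib's MAX(): the arguments may be
   * evaluated twice, but the result stays a constant expression when both
   * arguments are constant. */
  #define MAX_PLAIN(a, b) (((a) > (b)) ? (a) : (b))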
CC src/libnm-platform/src_libnm_platform_libnm_platform_la-nm-platform.lo
../src/libnm-platform/nm-platform.c:8247:53: error: in-class initializer for static data member is not a constant expression
(guint8) NM_MAX(obj->weight, 1u));
^
../src/libnm-std-aux/nm-std-aux.h:399:40: note: expanded from macro 'NM_MAX'
#define NM_MAX(a, b) __NM_MAX(NM_UNIQ, a, NM_UNIQ, b)
^
../src/libnm-std-aux/nm-std-aux.h:402:39: note: expanded from macro '__NM_MAX'
typeof(a) NM_UNIQ_T(A, aq) = (a); \
^
../src/libnm-glib-aux/nm-hash-utils.h:124:36: note: expanded from macro 'nm_hash_update_vals'
NM_HASH_COMBINE_VALS(_val, __VA_ARGS__); \
^
Fixes: 8cc41d41fe ('platform: add NM_PLATFORM_IP_ROUTE_CMP_TYPE_ECMP_ID for comparing ECMP base route')
We want to follow current Fedora, so update to f37.
Also, we now use clang-format from the Fedora 37 release, so the default
image in gitlab-ci must match, because that image is used for the
"check-tree" test.
This is the version shipped in Fedora 37. As Fedora 37 is now out, the
core developers switch to it. Our gitlab-ci will also use that as the
base image for the check-{patch,tree} tests and to generate the pages.
Everybody needs to agree on which clang-format version to use, and that
version should be the one from the currently used Fedora release.
Also update the Fedora image used in the
"contrib/scripts/nm-code-format-container.sh" script.
The gitlab-ci still needs an update in the following commit; this change
in isolation will break the "check-tree" test.
We sometimes have functions foo() and foo_full(), in which case
foo() has fewer arguments and just calls foo_full(). The "full"
function here is the more powerful one, and foo() is implemented
in terms of it.
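For example, the usual shape of that convention (hypothetical code, not
actual NetworkManager functions):

  /* The "_full" variant exposes all parameters. */
  static int
  foo_full(int value, int flags)
  {
      return value + flags;
  }

  /* The short variant has fewer arguments and just calls foo_full()
   * with defaults. */
  static int
  foo(int value)
  {
      return foo_full(value, 0);
  }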
nm_platform_ip4_route_cmp_full() and nm_platform_ip4_route_cmp() inverted
that pattern. The "_full" there stands for the full comparison, without
the ability to select the comparison type.
That inconsistency is ugly. Also, these wrappers were used in only a few
places. Let's drop them.
While at it, also drop nm_platform_qdisc_cmp() and rename
nm_platform_qdisc_cmp_full(). Here cmp()/cmp_full() followed the common
foo()/foo_full() pattern, but the wrapper is hardly used and unnecessary.
When adding a new route, we need to consider that it may contain extra
nexthops, i.e. that it is an ECMP route. As we cannot modify the
NMPObject once it is created, we need to pass the extra nexthops as an
argument. We cannot use the original NMPObject because normalization
happens while adding the route.
When reading an ECMP IPv4 route from netlink, we need to parse the
multiple nexthops. In order to do that, introduce the
NMPlatformIP4RtNextHop struct.
The first nexthop's information is kept in the original
NMPlatformIP4Route, and the new property n_nexthops indicates how
many nexthops we need to consider.
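As a rough sketch of the idea (illustrative types only; the field names
are assumptions, not the exact layout of the structs in libnm-platform):

  #include <glib.h>
  #include <netinet/in.h>

  /* One extra hop of an ECMP route. */
  typedef struct {
      int       ifindex; /* outgoing interface of this hop */
      in_addr_t gateway; /* next-hop gateway address */
      guint8    weight;  /* ECMP weight of this hop */
  } ExampleIP4RtNextHop;

  /* The route keeps the first hop inline, as before, plus a counter.
   * Extra hops (for n_nexthops > 1) are passed separately, since the
   * NMPObject cannot be modified after creation. */
  typedef struct {
      ExampleIP4RtNextHop first_hop;
      guint               n_nexthops; /* <= 1: single hop, > 1: ECMP */
  } ExampleIP4Route;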
This test is inherently fragile, as it depends on starting processes,
waiting for something and killing the process. There are timings involved
that are out of the test's control. Try to adjust the timing.
# NetworkManager-DEBUG: <debug> [1668755976.9741] kill child process test-s-4 (111487): sending SIGKILL...
# NetworkManager-DEBUG: <debug> [1668755976.9753] kill child process test-s-4 (111487): waiting for process to terminate after sending SIGTERM (15) and SIGKILL...
# NetworkManager-DEBUG: <debug> [1668755976.9758] kill child process test-s-4 (111487): after sending SIGTERM (15) and SIGKILL, process 111487 exited by signal 9 (5759 usec elapsed)
Bail out! GLib:ERROR:../src/core/tests/test-core-with-expect.c:154:test_nm_utils_kill_child_sync_do: Did not see expected message NetworkManager-DEBUG: *<debug> [*] kill child process test-s-4 (*): waiting up to 1 milliseconds for process to terminate normally after sending SIGTERM (15)...
Bail out! nm:ERROR:../src/core/tests/test-core-with-expect.c:457:test_nm_utils_kill_child: assertion failed (exit_status == 0): (6 == 0)
--- stderr ---
**
GLib:ERROR:../src/core/tests/test-core-with-expect.c:154:test_nm_utils_kill_child_sync_do: Did not see expected message NetworkManager-DEBUG: *<debug> [*] kill child process test-s-4 (*): waiting up to 1 milliseconds for process to terminate normally after sending SIGTERM (15)...
**
nm:ERROR:../src/core/tests/test-core-with-expect.c:457:test_nm_utils_kill_child: assertion failed (exit_status == 0): (6 == 0)
/builds/NetworkManager/NetworkManager/tools/run-nm-test.sh: line 337: 110662 Aborted "${NMTST_DBUS_RUN_SESSION[@]}" "${NMTST_LIBTOOL[@]}" "$NMTST_VALGRIND" --quiet --error-exitcode=$VALGRIND_ERROR --leak-check=full --gen-suppressions=all "${NMTST_SUPPRESSIONS[@]}" --num-callers=100 --log-file="$LOGFILE" "$TEST" "${TEST_ARGV[@]}"
Under normal circumstances, the timeout is not supposed to be hit.
I see it hit on gitlab-ci. Was that because the machine was very
busy? It's hard to say whether there was a legitimate problem here,
and more importantly, what that problem was.
Try to increase the timeout. If there is a real problem, we probably
will still hit the timeout.
We must consume the reference, like we would in the other case.
Interestingly, I am unable to reproduce a case where valgrind would
complain about the leak. But it is there nonetheless.
Fixes: 0a22f4e4905c ('libnm: refactor tracking of NMSetting in NMConnection')
See wpa_supplicant commit [1]:
macsec: Make pre-shared CKN variable length
IEEE Std 802.1X-2010, 9.3.1 defines following restrictions for
CKN:
"MKA places no restriction on the format of the CKN, save that it
comprise an integral number of octets, between 1 and 32
(inclusive), and that all potential members of the CA use the same
CKN. No further constraints are placed on the CKNs used with PSKs,
..."
Hence do not require a 32 octet long CKN but instead allow a
shorter CKN to be configured.
This fixes interoperability with some Aruba switches, that do not
accept a 32 octet long CKN (only support shorter ones).
[1] https://w1.fi/cgit/hostap/commit/?id=b678ed1efc50e8da4638d962f8eac13312a4048f
When called with update_carrier=TRUE, nm_device_bring_up_full() checks
for carrier changes and it may queue a transition to DISCONNECTED
through the following call chain:
-> nm_device_bring_up_full()
-> nm_device_set_carrier_from_platform()
-> nm_device_set_carrier()
-> carrier_changed()
-> nm_device_queue_state()
In _set_state_full(state=UNAVAILABLE), after bringing the interface up,
we also call nm_device_cleanup(), which clears the enqueued state
change to DISCONNECTED. When this happens, the device remains in
UNAVAILABLE and never gets activated, even if it was ready.
This was observed with macsec interfaces, but in theory it can happen
with any interface that gets carrier immediately after being
brought up.
Avoid this issue by not checking the carrier synchronously from
_set_state_full(). The carrier change event will be processed in the
next asynchronous invocation of device_link_changed().
https://bugzilla.redhat.com/show_bug.cgi?id=2122564