This is necessary on Travis/Ubuntu 16.04, otherwise the test
fails with
# NetworkManager-MESSAGE: <warn> [1575301791.7600] platform-linux: do-add-link[nm-test-device/team]: failure 95 (Operation not supported)
Aborted (core dumped)
# test:ERROR:../src/platform/tests/test-link.c:353:test_software: assertion failed: (software_add (link_type, DEVICE_NAME))
ERROR: src/platform/tests/test-link-linux - too few tests run (expected 76, got 6)
The bluetooth plugin (with BlueZ5/NAP support) always gets
built, but DUN support requires a library.
When enabling the build of the bluetooth subpackage, always
enable DUN support as well. Enable it explicitly, because meson
in particular would not autodetect support and would disable it by default.
Don't proceed if the context was torn down on an error in
try_create_connect_properties().
<info> [1574092292.0225] manager: NetworkManager state is now CONNECTING
<warn> [1574092292.0228] modem-broadband[ttyV0]: failed to connect 'ttyV0': unable to determine the network id
<info> [1574092292.0230] device (ttyV0): state change: prepare -> failed (reason 'modem-init-failed', sys-iface-state: 'managed')
<info> [1574092292.0236] manager: NetworkManager state is now DISCONNECTED
<warn> [1574092292.0250] device (ttyV0): Activation: failed for connection 'ttyV0'
(NetworkManager:69212): libnm-CRITICAL **: 16:51:32.025: ((libnm-core/nm-connection.c:193)): assertion '<dropped>' failed
Thread 1 "NetworkManager" received signal SIGTRAP, Trace/breakpoint trap.
0x00007ffff78da6e5 in _g_log_abort () from /lib64/libglib-2.0.so.0
(gdb) bt
#0 0x00007ffff78da6e5 in _g_log_abort () at /lib64/libglib-2.0.so.0
#1 0x00007ffff78db9b6 in g_logv () at /lib64/libglib-2.0.so.0
#2 0x00007ffff78dbb83 in g_log () at /lib64/libglib-2.0.so.0
#3 0x000055555563fcd2 in _nm_g_return_if_fail_warning (line=line@entry=193, file=0x5555557ae221 "libnm-core/nm-connection.c", log_domain=0x5555557ae23c "libnm") at ./shared/nm-default.h:219
#4 0x000055555563feba in _connection_get_setting_checkPython Exception <class 'gdb.error'> No type named TypeNode.:
(connection=0x0, setting_type=) at libnm-core/nm-connection.c:193
#5 _connection_get_setting_checkPython Exception <class 'gdb.error'> No type named TypeNode.:
(connection=0x0, setting_type=) at libnm-core/nm-connection.c:191
#6 0x00007fffe871f8b4 in nm_modem_get_connection_ip_type (self=self@entry=0x7fffd801c730, connection=0x0, error=error@entry=0x7fffffffc8e8) at src/devices/wwan/nm-modem.c:374
#7 0x00007fffe871bfed in connect_context_step (self=0x7fffd801c730) at src/devices/wwan/nm-modem-broadband.c:591
#8 0x00007fffe871c74b in modem_act_stage1_prepare (_self=0x7fffd801c730, connection=0x555555af5520, out_failure_reason=<optimized out>) at src/devices/wwan/nm-modem-broadband.c:687
#9 0x00007fffe8720203 in nm_modem_act_stage1_prepare (self=0x7fffd801c730, req=0x555555b08a30, out_failure_reason=0x7fffffffcbe0) at src/devices/wwan/nm-modem.c:1045
#10 0x0000555555705f1b in activate_stage1_device_prepare (self=0x555555a956a0) at src/devices/nm-device.c:6562
#11 0x00005555556dcbca in activation_source_handle_cb (self=0x555555a956a0, addr_family=2) at src/devices/nm-device.c:6177
#12 0x00007ffff78d0dcb in g_idle_dispatch () at /lib64/libglib-2.0.so.0
#13 0x00007ffff78d44a0 in g_main_context_dispatch () at /lib64/libglib-2.0.so.0
#14 0x00007ffff78d4830 in g_main_context_iterate.isra () at /lib64/libglib-2.0.so.0
#15 0x00007ffff78d4b23 in g_main_loop_run () at /lib64/libglib-2.0.so.0
#16 0x0000555555599ff4 in main (argc=<optimized out>, argv=<optimized out>) at src/main.c:451
https://gitlab.freedesktop.org/NetworkManager/NetworkManager/merge_requests/338/
Do another import, shortly before re-release.
There are no actual changes, but as always: finding out
that there are no changes requires a large part of the work of
just doing the reimport.
Also, the systemd import branch was rebased recently, which means
git-merge does not get this reimport right automatically (because
it thinks that the changes on master should be reverted). Hence,
this reimport required more care. Do it while there are few
changes.
We have our NM-specific logging and log levels. Maybe we should
not have that, and instead rely only on syslog (like systemd)
or glog(). Anyway, currently we have one way of logging, and it makes
sense that it is also usable outside of "src".
Move the helper function that parses log levels from strings to
"nm-logging-base.h" so that we can use the same logging levels
outside of core.
This moves code that is currently GPL2+ licensed to
LGPL2.1+. However, as far as I can see, this code was entirely written
by Red Hat employees who would not object to this change. Also,
it's as obvious and trivial as it gets.
We have "nm-logging-fwd.h", which (as the name implies) is header-only.
Add instead a "nm-logging-base.c", which also contains implementation for
logging functions that are not only useful under "src/nm-logging.c"
Not being able to compare two NMIPAddress instances is a major
limitation. Add nm_ip_address_cmp_full(). The reason for adding
a "cmp()" function instead of an "equals()" function is that cmp is
more useful. We only want to add one of the two, so choose the
more powerful one. Yes, usually it's also not the variant we want
or the variant that is convenient to use; such is life.
Compare this to:
- nm_ip_route_equal_full(), which is an equal() method and not
a cmp().
- nm_ip_route_equal_full(), which has a guint flags argument
  instead of a typedef for an enum with a proper generated
  GType.
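A minimal usage sketch (assuming the new function follows the usual libnm cmp() convention of returning <0/0/>0 and takes a typedef'd flags argument; the flag name below is an assumption):
```
/* Sketch: equality falls out of the cmp() function for free.
 * NM_IP_ADDRESS_CMP_FLAGS_WITH_ATTRS is an assumed enum name. */
static gboolean
ip_address_equal_with_attrs (NMIPAddress *a, NMIPAddress *b)
{
    return nm_ip_address_cmp_full (a, b, NM_IP_ADDRESS_CMP_FLAGS_WITH_ATTRS) == 0;
}
```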
The "ibft" plugin is no more. The default on RHEL/Fedora is now "ifcfg-rh[,keyfile]".
Adjust the configuration, because a wrong comment is confusing here.
Modifying configuration snippets is potentially annoying, because the user might
have edited the file, so on upgrade a "NetworkManager.conf.rpmnew" file
will be created. Still do it.
There is already a way to hide/shadow scripts in "/usr/lib/NetworkManager/dispatcher.d":
by putting a file of the same name in "/etc/NetworkManager/dispatcher.d".
There is also the special case that if the file symlinks to "/dev/null", the
file is silently ignored. This is the proper way to hide a script.
I think we should also take a plain empty file as a user's indication to hide a script.
This way, one can simply hide a file with
# touch /etc/NetworkManager/dispatcher.d/10-ifcfg-rh-routes.sh
It's an alternative to symlinking to /dev/null.
I have a coredump that seems to indicate that nm_device_get_active_connection()
did not return a valid object. Let's add an assertion, trying to identify the
issue earlier. Aside from that, this change isn't useful, but an nm_assert()
shouldn't hurt anyway.
- systemd-networkd and initscripts both support it.
- it is apparently recommended to configure routes with scope "link" on AWS.
- the scope is only supported for IPv4 routes. Kernel ignores the
attribute for IPv6 routes.
- we don't support aliases like "link" or "global". Instead,
  only the numeric value is supported. This is different from
  systemd-networkd, which accepts names like "global" and "link"
  but no numerical values. I think restricting ourselves only to
  the aliases unnecessarily limits what is possible on netlink.
  The alternative would be to allow both aliases and numbers,
  but that creates multiple ways to define the same thing and
  thus has downsides. So, only numeric values.
- when setting rtm_scope to RT_SCOPE_NOWHERE (0, the default), kernel
will coerce that to RT_SCOPE_LINK. This ambiguity of nowhere vs. link
is a problem, but we don't do anything about it.
- The other problem is that when deleting a route with scope RT_SCOPE_NOWHERE,
  this acts as a wildcard and removes the first route that matches (given the
  other route attributes). That means NetworkManager has no meaningful
  way to delete a route with scope zero; there is always the danger that
  we might delete the wrong route. But this is nothing new in this
  patch. The problem existed already previously, except that
  NetworkManager could only add routes with scope nowhere (i.e. link).
There is an "info" part and a part with the data that we parsed.
Don't track the static and mutable data in the same variable.
Also, this allows to mark the static part as "const static".
In the past, kernel (and NetworkManager) did not support the onlink
flags for IPv6 routes. That is no longer the case.
Fixes: f5e8bbc8e0 ('libnm,core: enable "onlink" flags also for IPv6 routes')
In practice, g_free() is nowadays the same as free(), so there is no
difference. However, we still should not mix the two: use free()
for data that was allocated with malloc() -- in this case, the memory
was allocated by libc's realpath().
The NMClient is associated with a certain GMainContext. Add a getter
function to expose that context.
The context is really not internal API of NMClient, because
the user must iterate this context and be aware of it.
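A minimal sketch of what such a getter enables (the name nm_client_get_main_context() is an assumption, as the text above does not spell out the function name):
```
/* Drain the context associated with the client so that its pending
 * D-Bus events and callbacks get processed. */
static void
drain_client_context (NMClient *client)
{
    GMainContext *context = nm_client_get_main_context (client); /* assumed getter name */

    while (g_main_context_pending (context))
        g_main_context_iteration (context, FALSE);
}
```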
Usually, the nmobj never gets reused for one dbobj. That means,
we really don't expect a nml_dbus_property_o_notify() for a property
that was already cleared.
However, that is for example not the case with NMClient itself: as NetworkManager
gets restarted, the name owner gets lost and the property is cleared, but
afterwards it might get notified again.
That means, nml_dbus_property_o_notify() and nml_dbus_property_o_clear() must
work well together, otherwise a sequence of
nml_dbus_property_o_notify()
nml_dbus_property_o_clear()
nml_dbus_property_o_notify()
leads to an assertion failure "nm_assert (!pr_o->is_ready)".
Fixes: ce0e898fb4 ('libnm: refactor caching of D-Bus objects in NMClient')
NMClient makes asynchronous D-Bus calls via g_dbus_connection_call().
This references the current GMainContext to later invoke the
asynchronous callback. Even when we cancel the asynchronous call,
the callback will still be invoked (later) to complete the request.
In particular this means when we destroy (unref) an NMClient, there
are quite possibly pending requests in the GMainContext. Although they
are cancelled, they keep the GMainContext alive.
With synchronous initialization, we have an internal GMainContext.
When we destroy the NMClient, we cannot just unhook the integrated
source; instead, we need to keep it integrated in the caller's main
context as long as there are pending requests.
Add a mechanism to track those pending requests and fix the leak for the
internal GMainContext. Also expose the same mechanism to the user via a new
API called nm_client_get_context_busy_watcher(). This allows the user
to know when it can stop iterating the main context and when all
resources are reclaimed.
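A minimal sketch of the pattern this enables (assuming asynchronous initialization, so that the relevant context is the one the client was constructed with; error handling omitted):
```
/* Unref the client, then keep iterating its context until the context
 * busy watcher is destroyed, i.e. until all pending requests are reclaimed. */
static void
watcher_gone (gpointer user_data, GObject *where_the_object_was)
{
    *((gboolean *) user_data) = TRUE;
}

static void
destroy_client_and_wait (NMClient *client)
{
    GMainContext *context = g_main_context_ref (nm_client_get_main_context (client));
    gboolean done = FALSE;

    g_object_weak_ref (nm_client_get_context_busy_watcher (client), watcher_gone, &done);
    g_object_unref (client);

    while (!done)
        g_main_context_iteration (context, TRUE);
    g_main_context_unref (context);
}
```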
For example the following will lead to a crash:
    for i in range(1,2000):
        nmc = NM.Client.new(None)
This creates a number of NMClient instances and destroys them again.
Note that here the GMainContext is never iterated, because
synchronous initialization does not iterate the caller's context. So,
while we correctly unref and dispose the created NMClient instances,
there are pending requests left in the inner GMainContext. These pile
up and soon the program will crash because it runs out of file descriptors.
We can have a similar problem with asynchronous initialization, when
we create a new GMainContext per client, and don't iterate it after
we are done with the client.
Note that this patch does not avoid the problem in general. The problem
cannot be avoided: the user must iterate the main context at some point.
Otherwise resources (memory and file descriptors) will be leaked.
Fixes: ce0e898fb4 ('libnm: refactor caching of D-Bus objects in NMClient')
https://gitlab.freedesktop.org/NetworkManager/NetworkManager/merge_requests/347
No longer use GDBusObjectManagerClient and gdbus-codegen generated classes
for the NMClient cache. Instead, use GDBusConnection directly and a
custom implementation (NMLDBusObject) for caching D-Bus' ObjectManager
data.
CHANGES
-------
- This is a complete rework. I think the previous implementation was
difficult to understand. There were unfixed bugs and nobody understood
the code well enough to fix them. Maybe somebody out there understood the
code, but I certainly did not. At least nobody provided patches to fix those
issues. I do believe that this implementation is more straightforward and
easier to understand. It removes a lot of layers of code. Whether this claim
of simplicity is true, each reader must decide for himself/herself. Note
that it is still fairly complex.
- There was a lingering performance issue with a large number of D-Bus
  objects. The patch tries hard to make the implementation scale well. Of
  course, when we cache N objects that have N-to-M references to each other,
  we are still fundamentally O(N*M) in runtime and memory consumption (with
  M being the number of references between objects). But each part should behave
  efficiently and well.
- Play well with GMainContext. libnm code (NMClient) is generally not
  thread safe. However, it should work to use multiple instances in
  parallel, as long as each access to an NMClient happens through the caller's
  GMainContext. This follows glib's style and effectively allows using NMClient
  in a multi-threaded scenario (see the sketch after this list). This implies
  sticking to the main context chosen upon construction and ensuring that
  callbacks are only invoked when iterating that context. Also, NMClient itself
  shall never iterate the caller's context. This also means libnm must never use
  g_idle_add() or g_timeout_add(), as those enqueue sources in the
  g_main_context_default() context.
- Get ordering of messages right. All events are consistently enqueued
in a GMainContext and processed strictly in order. For example,
previously "nm-object.c" tried to combine signals and emit them on an
idle handler. That is wrong, signals must be emitted in the right order
and when they happen. Note that when using GInitable's synchronous initialization
to initialize the NMClient instance, NMClient internally still operates fully
asynchronously. In that case NMClient has an internal main context.
- NMClient takes over most of the functionality. When using D-Bus'
ObjectManager interface, one needs to handle basically the entire state
of the D-Bus interface. That cannot be separated well into distinct
parts, and even if you try, you just end up having closely related code
in different source files. Spreading related code does not make it
easier to understand, on the contrary. That means, NMClient is
inherently complex as it contains most of the logic. I think that is
not avoidable, but it's not as bad as it sounds.
- NMClient processes D-Bus messages and state changes in separate steps.
First NMClient unpacks the message (e.g. _dbus_handle_properties_changed()) and
keeps track of the changed data. Then we update the GObject instances
(_dbus_handle_obj_changed_dbus()) without emitting any signals yet. Finally,
we emit all signals and notifications that were collected
(_dbus_handle_changes_commit()). Note that, for example, during the initial
GetManagedObjects() reply, NMClient receives a large amount of state at once.
But we first apply all the changes to our GObject instances before
emitting any signals. The result is that signals are always emitted in a moment
when the cache is consistent. The unavoidable downside is that when you receive
a property changed signal, possibly many other properties changed
already and more signals are about to be emitted.
- NMDeviceWifi no longer modifies the content of the cache from client side
during poke_wireless_devices_with_rf_status(). The content of the cache
should be determined by D-Bus alone and follow what NetworkManager
service exposes. Local modifications should be avoided.
- This aims to bring no API/ABI change, though it does of course bring
various subtle changes in behavior. Those should be all for the better, but the
goal is not to break any existing clients. This does change internal
(albeit externally visible) API, like dropping NM_OBJECT_DBUS_OBJECT_MANAGER
property and NMObject no longer implementing GInitableIface and GAsyncInitableIface.
- Some uses of gdbus-codegen classes remain in NMVpnPluginOld, NMVpnServicePlugin
and NMSecretAgentOld. These are independent of NMClient/NMObject and
should be reworked separately.
- While we no longer use generated classes from gdbus-codegen, we don't
  need more glue code than before. Also, before we constructed NMPropertiesInfo and
  had a large amount of code to propagate properties from NMDBus* to NMObject.
  That got completely reworked, but did not fundamentally change. You still need
  about the same effort to create the NMLDBusMetaIface. Not using
  generated bindings did not make anything worse (which says something about the
  usefulness of generated code, at least in the way it was used).
- NMLDBusMetaIface and the other meta data are static and immutable. This
  avoids copying them around. Also, macros like NML_DBUS_META_PROPERTY_INIT_U()
  have compile time checks to ensure the property types match. It's pretty hard
  to misuse them because otherwise it won't compile.
- The meta data now explicitly encodes the expected D-Bus types and
makes sure never to accept wrong data. That would only matter when the
server (accidentally or intentionally) exposes unexpected types on
D-Bus. I don't think that was previously ensured in all cases.
For example, demarshal_generic() only cared about the GObject property
type, it didn't know the expected D-Bus type.
- Previously GDBusObjectManager would sometimes emit warnings (g_log()). Those
probably indicated real bugs. In any case, it prevented us from running CI
with G_DEBUG=fatal-warnings, because there would be just too many
unrelated crashes. Now we log debug messages that can be enabled with
"LIBNM_CLIENT_DEBUG=trace". Some of these messages can also be turned
into g_warning()/g_critical() by setting LIBNM_CLIENT_DEBUG=warning,error.
Together with G_DEBUG=fatal-warnings, this turns them into assertions.
Note that such "assertion failures" might also happen because of a server
bug (or change). Thus these are not common assertions that indicate a bug
in libnm and are thus not armed unless explicitly requested. In our CI we
should now always run with LIBNM_CLIENT_DEBUG=warning,error and
G_DEBUG=fatal-warnings to catch bugs. Note that currently
NetworkManager has bugs in this regard, so enabling this will result in
assertion failures. That should be fixed first.
- Note that this changes the order in which we emit "notify:devices" and
"device-added" signals. I think it makes the most sense to emit first
"device-removed", then "notify:devices", and finally "device-added"
signals.
This changes behavior for commit 52ae28f6e5 ('libnm: queue
added/removed signals and suppress uninitialized notifications'),
but I don't think that users should actually rely on the order. Still,
the new order makes the most sense to me.
- In NetworkManager, profiles can be invisible to the user by setting
  "connection.permissions". Such profiles are hidden from NMClient's
  nm_client_get_connections() and from its "connection-added"/"connection-removed"
  signals.
  Note that NMActiveConnection's nm_active_connection_get_connection()
  and NMDevice's nm_device_get_available_connections() still expose such
  hidden NMRemoteConnection instances. This behavior was preserved.
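As referenced in the GMainContext item above, here is a minimal sketch of the per-context usage (plain GLib plus public libnm calls; nothing beyond what the description implies):
```
#include <NetworkManager.h>

static void
client_ready (GObject *source, GAsyncResult *result, gpointer user_data)
{
    GMainLoop *loop = user_data;
    GError *error = NULL;
    NMClient *client = nm_client_new_finish (result, &error);

    if (!client)
        g_warning ("failed to create NMClient: %s", error->message);
    g_clear_error (&error);
    g_clear_object (&client);
    g_main_loop_quit (loop);
}

int
main (void)
{
    GMainContext *context = g_main_context_new ();
    GMainLoop *loop = g_main_loop_new (context, FALSE);

    /* NMClient sticks to the thread-default context at construction time
     * and only invokes callbacks while that context is iterated. */
    g_main_context_push_thread_default (context);
    nm_client_new_async (NULL, client_ready, loop);
    g_main_loop_run (loop);
    g_main_context_pop_thread_default (context);

    g_main_loop_unref (loop);
    g_main_context_unref (context);
    return 0;
}
```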
NUMBERS
-------
I compared 3 versions of libnm.
[1] 962297f9085d, current tip of nm-1-20 branch
[2] 4fad8c7c64, current master, immediate parent of this patch
[3] this patch
All tests were done on Fedora 31, x86_64, gcc 9.2.1-1.fc31.
The libraries were built with
$ ./contrib/fedora/rpm/build_clean.sh -g -w test -W debug
Note that the RPM build already stripped the library.
---
N1) File size of libnm.so.0.1.0 in bytes. There currently seems to be an issue
    on Fedora 31 generating wrong ELF notes. Usually, libnm is smaller, but
    in these tests it had large (and bogus) ELF notes. Anyway, the point
    is to show the relative sizes, so it doesn't matter.
[1] 4075552 (102.7%)
[2] 3969624 (100.0%)
[3] 3705208 ( 93.3%)
---
N2) `size /usr/lib64/libnm.so.0.1.0`:
text data bss dec hex filename
[1] 1314569 (102.0%) 69980 ( 94.8%) 10632 ( 80.4%) 1395181 (101.4%) 1549ed /usr/lib64/libnm.so.0.1.0
[2] 1288410 (100.0%) 73796 (100.0%) 13224 (100.0%) 1375430 (100.0%) 14fcc6 /usr/lib64/libnm.so.0.1.0
[3] 1229066 ( 95.4%) 65248 ( 88.4%) 13400 (101.3%) 1307714 ( 95.1%) 13f442 /usr/lib64/libnm.so.0.1.0
---
N3) Performance test with test-client.py. With checkout of [2], run
```
prepare_checkout() {
rm -rf /tmp/nm-test && \
git checkout -B test 4fad8c7c64 && \
git clean -fdx && \
./autogen.sh --prefix=/tmp/nm-test && \
make -j 5 install && \
make -j 5 check-local-clients-tests-test-client
}
prepare_test() {
NM_TEST_REGENERATE=1 NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v
}
do_test() {
for i in {1..10}; do
NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v || return -1
done
echo "done!"
}
prepare_checkout
prepare_test
time do_test
```
[1] real 2m14.497s (101.3%) user 5m26.651s (100.3%) sys 1m40.453s (101.4%)
[2] real 2m12.800s (100.0%) user 5m25.619s (100.0%) sys 1m39.065s (100.0%)
[3] real 1m54.915s ( 86.5%) user 4m18.585s ( 79.4%) sys 1m32.066s ( 92.9%)
---
N4) Performance. Run NetworkManager from build [2] and set up a large number
    of profiles (551 profiles and 515 devices, mostly unrealized). This
    setup is already at the edge of what NetworkManager currently can
    handle. Of course, that is a different issue. Here we just check how
    long a plain `nmcli` invocation takes on such a system.
```
do_cleanup() {
for UUID in $(nmcli -g NAME,UUID connection show | sed -n 's/^xx-c-.*:\([^:]\+\)$/\1/p'); do
nmcli connection delete uuid "$UUID"
done
for DEVICE in $(nmcli -g DEVICE device status | grep '^xx-i-'); do
nmcli device delete "$DEVICE"
done
}
do_setup() {
do_cleanup
for i in {1..30}; do
nmcli connection add type bond autoconnect no con-name xx-c-bond-$i ifname xx-i-bond-$i ipv4.method disabled ipv6.method ignore
for j in $(seq $i 30); do
nmcli connection add type vlan autoconnect no con-name xx-c-vlan-$i-$j vlan.id $j ifname xx-i-vlan-$i-$j vlan.parent xx-i-bond-$i ipv4.method disabled ipv6.method ignore
done
done
systemctl restart NetworkManager.service
sleep 5
}
do_test() {
perf stat -r 50 -B nmcli 1>/dev/null
}
do_test
```
[1]
Performance counter stats for 'nmcli' (50 runs):
456.33 msec task-clock:u # 1.093 CPUs utilized ( +- 0.44% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
5,900 page-faults:u # 0.013 M/sec ( +- 0.02% )
1,408,675,453 cycles:u # 3.087 GHz ( +- 0.48% )
1,594,741,060 instructions:u # 1.13 insn per cycle ( +- 0.02% )
368,744,018 branches:u # 808.061 M/sec ( +- 0.02% )
4,566,058 branch-misses:u # 1.24% of all branches ( +- 0.76% )
0.41761 +- 0.00282 seconds time elapsed ( +- 0.68% )
[2]
Performance counter stats for 'nmcli' (50 runs):
477.99 msec task-clock:u # 1.088 CPUs utilized ( +- 0.36% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
5,948 page-faults:u # 0.012 M/sec ( +- 0.03% )
1,471,133,482 cycles:u # 3.078 GHz ( +- 0.36% )
1,655,275,369 instructions:u # 1.13 insn per cycle ( +- 0.02% )
382,595,152 branches:u # 800.433 M/sec ( +- 0.02% )
4,746,070 branch-misses:u # 1.24% of all branches ( +- 0.49% )
0.43923 +- 0.00242 seconds time elapsed ( +- 0.55% )
[3]
Performance counter stats for 'nmcli' (50 runs):
352.36 msec task-clock:u # 1.027 CPUs utilized ( +- 0.32% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
4,790 page-faults:u # 0.014 M/sec ( +- 0.26% )
1,092,341,186 cycles:u # 3.100 GHz ( +- 0.26% )
1,209,045,283 instructions:u # 1.11 insn per cycle ( +- 0.02% )
281,708,462 branches:u # 799.499 M/sec ( +- 0.01% )
3,101,031 branch-misses:u # 1.10% of all branches ( +- 0.61% )
0.34296 +- 0.00120 seconds time elapsed ( +- 0.35% )
---
N5) same setup as N4), but run `PAGER= /bin/time -v nmcli`:
[1]
Command being timed: "nmcli"
User time (seconds): 0.42
System time (seconds): 0.04
Percent of CPU this job got: 107%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.43
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 34456
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 6128
Voluntary context switches: 1298
Involuntary context switches: 1106
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
[2]
Command being timed: "nmcli"
User time (seconds): 0.44
System time (seconds): 0.04
Percent of CPU this job got: 108%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.44
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 34452
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 6169
Voluntary context switches: 1849
Involuntary context switches: 142
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
[3]
Command being timed: "nmcli"
User time (seconds): 0.32
System time (seconds): 0.02
Percent of CPU this job got: 102%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.34
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 29196
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 5059
Voluntary context switches: 919
Involuntary context switches: 685
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
---
N6) same setup as N4), but run `nmcli monitor` and look at `ps aux` for
the RSS size.
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
[1] me 1492900 21.0 0.2 461348 33248 pts/10 Sl+ 15:02 0:00 nmcli monitor
[2] me 1490721 5.0 0.2 461496 33548 pts/10 Sl+ 15:00 0:00 nmcli monitor
[3] me 1495801 16.5 0.1 459476 28692 pts/10 Sl+ 15:04 0:00 nmcli monitor
We will rework NMClient entirely. Then, the synchronous initialization will also use
the asynchronous code paths. The difference will be that with synchronous initialization,
all D-Bus interaction will be done with an internal GMainContext as current thread default,
and that internal context will run until initialization completes.
Note that even after initialization completes, it cannot be swapped back to the user's
(outer) GMainContext. That is because the contexts are essentially the queue for our
D-Bus events, and we cannot swap from one queue to the other in a race-free
manner (short of a full resync). In other words, the two contexts are not in sync,
so after using the internal context NMClient needs to stick to it (at least until
the name owner gets lost, which gives an opportunity to resync and switch back to the
user's main context).
We thus need to hook the internal (inner) GMainContext into the user's (outer) context,
so that when the user iterates the outer context, events on the inner context get dispatched.
Add nm_utils_g_main_context_create_integrate_source() to create such a GSource for
integrating two contexts.
Note that the use-case here is limited: the integrated, inner main context must
not be iterated explicitly, except by being dispatched by the integrating
source. Otherwise, you'd get recursive runs, possible deadlocks and general
ugliness. NMClient must show restraint in how it uses the inner context while it is
integrated.
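A sketch of the intended usage, based purely on the description above (the exact signature is an assumption: the helper takes the inner context and returns a GSource to attach to the outer one):
```
/* Integrate "inner" into "outer": once attached, iterating "outer" also
 * dispatches the events queued on "inner".  Destroy and unref the returned
 * source to detach the inner context again. */
static GSource *
integrate_inner_context (GMainContext *outer, GMainContext *inner)
{
    GSource *source;

    source = nm_utils_g_main_context_create_integrate_source (inner);
    g_source_attach (source, outer);
    return source;
}
```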
Some compilers don't convert arrays used as _Generic() type selectors to
their pointer type. That means, for those compilers the selected type
would be an array and not a pointer. Work around that by adding zero
to the pointer/array argument.
Also, I cannot get this to work with "clang-3.4.2-9.el7", so disable it
for that compiler. The value of the generic check is anyway that it only
needs to work with some compiler combinations: those will trigger a
compilation failure, and we can then fix the implementation also for compilers
that don't support the macro.
See-also: http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1930.htm
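A standalone sketch of the workaround (a hypothetical macro, not NetworkManager's actual implementation):
```
#include <stdio.h>

/* Adding 0 forces an array argument to decay to a pointer before _Generic()
 * inspects its type, so "char buf[16]" matches the "char *" association. */
#define IS_CHAR_PTR(x) _Generic ((x) + 0, char *: 1, const char *: 1, default: 0)

int
main (void)
{
    char buf[16] = "hello";
    const char *p = buf;

    /* Without the "+ 0", some compilers would select the array type
     * "char[16]" here and fall through to the default case. */
    printf ("%d %d\n", IS_CHAR_PTR (buf), IS_CHAR_PTR (p));
    return 0;
}
```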
There are two macros: NM_GOBJECT_PROPERTIES_DEFINE_BASE() and
NM_GOBJECT_PROPERTIES_DEFINE(). The former just defines the
property enums and the obj_properties array. The latter also
defines the functions _notify() and _nm_gobject_notify_together_impl().
That means, depending on whether you actually use _notify(), you have
to choose one of the macros. I think that is unnecessarily cumbersome.
Let's mark the function as _nm_unused so that the compiler doesn't
complain about the unused function. I don't think it's a problem
to use NM_GOBJECT_PROPERTIES_DEFINE() even if you don't actually use
_notify().
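For illustration, the usual pattern looks roughly like this (a sketch following the in-tree conventions, not a self-contained program, since the macros come from NetworkManager's shared headers and NMExample is a hypothetical type):
```
/* Defines the property enum, the obj_properties[] array, _notify() and
 * _nm_gobject_notify_together_impl() for the (hypothetical) NMExample type. */
NM_GOBJECT_PROPERTIES_DEFINE (NMExample,
    PROP_NAME,
);

static void
nm_example_set_name (NMExample *self, const char *name)
{
    /* ... store the new value ... */
    _notify (self, PROP_NAME);
}
```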
warning: extra tokens at the end of %endif directive in line 717: %endif # end autotools
warning: extra tokens at the end of %endif directive in line 775: %endif # end autotools
Previously, our "internal" DHCPv4 client is based on a fork of
systemd code. This manner of maintaining the fork is problematic.
The solution is to use a proper library: n-dhcp4 from the nettools
project.
Both backends are already available as undocumented plugins, by
setting either "dhcp=systemd" or "dhcp=nettools". This is only for
testing; users are only supposed to use the "internal" plugin.
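For context, the DHCP plugin is selected in the [main] section of NetworkManager.conf, e.g.:
```
[main]
# "internal" is the supported choice; "systemd" and "nettools" are
# undocumented values intended for testing only.
dhcp=internal
```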
Up until now, the "internal" DHCPv4 plugin was based on "systemd" code.
Change that to use "nettools" instead.
Possibly this breaks something, and we will need to fix it. But do this
early so we have time to test the nettools plugin and identify issues.
For the user, this change should be entirely transparent.
https://gitlab.freedesktop.org/NetworkManager/NetworkManager/merge_requests/302