With the default 128KiB buffer size it is easy to lose events. For
example, when 64 interfaces appear at the same time, we lose events for
the last 16. Increase the buffer size to 4MiB.
https://bugzilla.redhat.com/show_bug.cgi?id=1651578
CSiphash is a first-class citizen; it's fine to use it everywhere we
need it.
NMHash wraps CSiphash and provides three things:
1) convenience macros that make hashing nicer to use.
2) a randomly generated, per-run hash seed that can be combined
with a guint static seed.
3) a general API for hashing data, which nowhere promises that it
actually uses siphash24, although currently it does everywhere.
NMHash is not (officially) siphash24.
Add API nm_hash_siphash42_init() and nm_hash_siphash42() to "nm-hash-utils.h",
which exposes (2) for use with regular CSiphash. Of course you no longer
get the convenience macros (1), but you get plain siphash24, which
NMHash does not promise (3).
While at it, also add nm_hash_complete_u64(). Usually, for hashing we
want guint types, but we don't need to hide the fact that the
underlying value is a uint64 first. Expose it.
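A usage sketch follows; the exact prototype of nm_hash_siphash42() is an
assumption here, and the function and seed value below are hypothetical:

  #include <string.h>
  #include "nm-hash-utils.h"

  /* the static seed is combined internally with the per-run random
   * seed, so the result is stable within one run but differs across
   * runs of the program. */
  static guint64
  id_hash (const char *name)
  {
      return nm_hash_siphash42 (0x5c1e42afu /* arbitrary static seed */,
                                name, strlen (name));
  }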
These are simple macros/functions that append a heap-allocated string to
a GPtrArray. They are intended for a GPtrArray which takes ownership of the
strings (meaning, it has a g_free() GDestroyNotify).
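For illustration, this is the underlying pattern such helpers wrap
(plain GLib calls only; the actual macro names are not shown here):

  #include <glib.h>

  static GPtrArray *
  build_argv (const char *ifname)
  {
      /* the GPtrArray owns its strings via the g_free() GDestroyNotify */
      GPtrArray *argv = g_ptr_array_new_with_free_func (g_free);

      g_ptr_array_add (argv, g_strdup ("ip"));
      g_ptr_array_add (argv, g_strdup ("link"));
      g_ptr_array_add (argv, g_strdup ("show"));
      g_ptr_array_add (argv, g_strdup (ifname));
      return argv;
  }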
From [1]:
You may optionally specify attribute names with ‘__’ preceding and
following the name. This allows you to use them in header files
without being concerned about a possible macro of the same name. For
example, you may use the attribute name __noreturn__ instead of
noreturn.
[1] https://gcc.gnu.org/onlinedocs/gcc/Attribute-Syntax.html#Attribute-Syntax
Add a new CON_DEFAULT() macro that places a property name into a
special section used at runtime to check whether it is a supported
connection default.
Unfortunately, this mechanism doesn't work for plugins, so we have to
enumerate the connection defaults from plugins in the daemon using
another CON_DEFAULT_NOP() macro.
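The general technique, sketched with hypothetical names (this is not the
actual CON_DEFAULT() macro): each use places the property name into a
dedicated ELF section, and the daemon walks that section at runtime via
the __start_/__stop_ symbols provided by the GNU linker.

  #include <glib.h>

  #define MY_CON_DEFAULT(ident, name) \
      static const char *const _con_default_##ident \
          __attribute__ ((__used__, __section__ ("con_defaults"))) = (name)

  MY_CON_DEFAULT (wifi_powersave, "wifi.powersave");

  extern const char *const __start_con_defaults[];
  extern const char *const __stop_con_defaults[];

  static gboolean
  is_supported_connection_default (const char *property)
  {
      const char *const *p;

      for (p = __start_con_defaults; p < __stop_con_defaults; p++) {
          if (g_str_equal (*p, property))
              return TRUE;
      }
      return FALSE;
  }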
Supporting a trailing comma in NM_MAKE_STRV() can be desirable because it
allows extending the code with less noise in the diff. Now there may or
may not be a trailing comma after the last element.
There is a downside to this: the following no longer works:

  const char *const v1[] = NM_MAKE_STRV ("a", "b");
  const char *const v2[3] = NM_MAKE_STRV ("a", "b");

but then, the above can already be written more simply as:

  const char *const v1[] = { "a", "b", NULL };
  const char *const v2[3] = { "a", "b" };

so the fact that the macro won't work in that case may be preferable,
because it forces you to use the better variant that already exists.
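For illustration, a hypothetical use of the macro with the now-accepted
trailing comma; adding another entry later only touches a single line of
the diff:

  #include <glib.h>
  /* assumes the NM header that defines NM_MAKE_STRV() */

  static gboolean
  is_known_option (const char *option)
  {
      return g_strv_contains (NM_MAKE_STRV ("mtu",
                                            "cloned-mac-address",
                                            "wake-on-lan",),
                              option);
  }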
nm_utils_get_monotonic_timestamp*() inherently use static data. Let's
initialize it in a thread-safe manner.
nm_utils_get_monotonic_timestamp*() are fundamental utility functions
that should work correctly in all cases. Such low-level functions should
be thread safe.
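A minimal sketch of the thread-safe one-time initialization pattern with
GLib (this is not the actual nm_utils_get_monotonic_timestamp*() code;
the setup step is a placeholder):

  #include <glib.h>

  static gint64
  get_offset (void)
  {
      static gsize init_once = 0;
      static gint64 offset;

      if (g_once_init_enter (&init_once)) {
          /* placeholder for the real one-time setup of the static data */
          offset = g_get_monotonic_time ();
          g_once_init_leave (&init_once, 1);
      }
      return offset;
  }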
The GChecksum API is cumbersome to use.
For example, g_checksum_get_digest() requires a length input/output
argument. At the same time, GChecksum does not allow you to query its
checksum type or the desired digest length. When you have a GChecksum
at hand, you must always know the digest length you are going to use,
so the length parameter is only good for asserting.
Add a macro to make that more convenient.
Benefits: it's fewer lines of code, and we always do all the asserts
that are due.
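For illustration, the verbose pattern written out with plain GLib (no NM
helpers); this is the kind of boilerplate such a macro can wrap:

  #include <glib.h>

  static void
  sha256_of (const guint8 *data, gsize n_data, guint8 *digest /* 32 bytes */)
  {
      GChecksum *sum;
      gsize len = 32;

      sum = g_checksum_new (G_CHECKSUM_SHA256);
      g_checksum_update (sum, data, n_data);

      /* the in/out @len is only useful for asserting: the caller must
       * already know that SHA256 yields a 32 byte digest. */
      g_checksum_get_digest (sum, digest, &len);
      g_assert (len == 32);

      g_checksum_free (sum);
  }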
- fix thread safety by adding a memory barrier (g_atomic_pointer_get())
to the double-checked locking pattern when initializing the hash key.
- generate the random data outside the lock. Calling nm_utils_random_bytes()
within the lock is ugly, because we don't want to assume that the function
has no side effects that are prone to deadlock. There is no problem with
attempting to generate the random data without the lock and only using it
when the race is won.
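A sketch of the resulting pattern (the names below are hypothetical, and
the signature of nm_utils_random_bytes() is assumed):

  #include <glib.h>

  G_LOCK_DEFINE_STATIC (seed_lock);
  static guint8 *global_seed;

  static const guint8 *
  get_seed (void)
  {
      guint8 *seed;

      /* fast path: g_atomic_pointer_get() provides the memory barrier
       * that a plain pointer read lacks. */
      seed = g_atomic_pointer_get (&global_seed);
      if (G_UNLIKELY (!seed)) {
          guint8 *candidate = g_new (guint8, 16);

          /* generate outside the lock; only used if we win the race. */
          nm_utils_random_bytes (candidate, 16);

          G_LOCK (seed_lock);
          seed = g_atomic_pointer_get (&global_seed);
          if (!seed) {
              g_atomic_pointer_set (&global_seed, candidate);
              seed = g_steal_pointer (&candidate);
          }
          G_UNLOCK (seed_lock);

          g_free (candidate);
      }
      return seed;
  }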
Often, during tests we want to assert against the logged messages.
In fact, most tests enable assertions for all logging and enforce
them with g_test_assert_expected_messages(). So, this is common.
However, sometimes it can be cumbersome to understand which logging
lines will be produced. For example, the next commits will call
nm_dhcp_manager_get() during the tests, which initializes NMDhcpManager
and logs a message about which plugin was selected (or an additional
warning if the selected plugin was not found). The availability of the
DHCP plugin depends on searching the path for "/usr/bin/dhclient", so
from testing code it's hard to determine what will be logged.
Instead, add a way to temporarily disable logging during testing.
If passed a relative path, load the editor .so from the same directory
as the plugin .so. This is useful for development, as it allows running
the editor plugin from the build tree conveniently.
https://github.com/NetworkManager/NetworkManager/pull/242
Refactor the check so that integer overflow cannot happen. Realistically,
it couldn't happen anyway, because _name is nowhere near the size of
G_MAXSIZE. Still, avoid such code. Also, the operands involved here are
constants, so the extra check can be resolved at compile time anyway.
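The general technique, sketched with hypothetical names: replace an
addition that could wrap with a subtraction whose operands are already
known to be in range.

  #include <glib.h>
  #include <string.h>

  #define PREFIX "prefix-"   /* hypothetical compile-time constant */

  static gboolean
  name_fits (const char *name, gsize buf_size)
  {
      /* the naive check "strlen (name) + sizeof (PREFIX) > buf_size"
       * could overflow in theory; the subtraction below cannot, because
       * the constant operand is checked to be small enough first. */
      if (buf_size < sizeof (PREFIX))
          return FALSE;
      return strlen (name) <= buf_size - sizeof (PREFIX);
  }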
"shared/nm-utils" is a loose collection of utility functions.
The aim is that they can be used independently; however, they
also rely on each other.
Add a test that we can build a minimal shared library with
these tools, independent of libnm-core.
This is independent functionality that only depends on the Linux API
and glib.
Note how "nm-logging" uses this for getting the timestamps. This
makes "nm-logging.c" itself dependent on "src/nm-core-utils.c"
for little reason.
1. NetworkManager-1.14.0/shared/nm-utils/nm-shared-utils.c:1242: value_overwrite: Overwriting previous write to "ch" with value from "v".
2. NetworkManager-1.14.0/shared/nm-utils/nm-shared-utils.c:1239: assigned_value: Assigning value from "++str[0]" to "ch" here, but that stored value is overwritten before it can be used.
# 1237| if (ch >= '0' && ch <= '7') {
# 1238| v = v * 8 + (ch - '0');
# 1239|-> ch = (++str)[0];
# 1240| }
# 1241| }
Don't assign ch when it is going to be overwritten.
keyfile already supports omitting the "connection.id" and
"connection.uuid". In that case, the ID is taken from the
keyfile's name, and the UUID is generated by MD5 hashing the
full filename.
No longer do this during nm_keyfile_read(); instead, let all
callers call nm_keyfile_read_ensure_*() to their liking. This is done
for two reasons:
- a minor reason is that one day we want to expose the keyfile API
as public API. That means we also want to read keyfiles from
stdin, where there is no filename available. The implementation
that parses stdin needs to define its own way of auto-generating
the ID and UUID. Note how nm_keyfile_read()'s API no longer takes a
filename as argument, which would be awkward for the stdin case.
- Currently, we only support one keyfile directory, which (configurably)
is "/etc/NetworkManager/system-connections".
In the future, we want to support multiple keyfile directories, like
"/var/run/NetworkManager/profiles" or "/usr/lib/NetworkManager/profiles".
Here we want a file "foo" (which does not specify a UUID) to get the
same UUID regardless of the directory it is in. That seems better, because
then the UUID won't change as you move the file between directories.
Yes, that means that the same UUID will be provided by multiple
files, but NetworkManager must already cope with that situation anyway.
Unfortunately, the UUID generation scheme hashes the full path. That
means we must hash the path name of the file "foo" inside the
original "system-connections" directory.
Refactor the code so that it accounts for the difference between the
filename of the keyfile and the profile_dir used for generating
the UUID.
In general, it's fine to pass %NULL to g_free().
However, consider:
  char *
  foo (void)
  {
      gs_free char *value = NULL;

      value = g_strdup ("hi");
      return g_steal_pointer (&value);
  }
gs_free, gs_local_free(), and g_steal_pointer() are all inlinable.
Here the compiler can easily recognize that we always pass %NULL to
g_free(). But with the previous implementation, the compiler would
not omit the call to g_free().
Similar patterns happen all over the place:
  gboolean
  baz (void)
  {
      gs_free char *value = NULL;

      if (!some_check ())
          return FALSE;

      value = get_value ();
      if (!value)
          return FALSE;

      return TRUE;
  }
In this example, g_free() is only required after setting @value to
non-NULL.
Note that this does increase the binary size a bit (4k for libnm, 8k
for NetworkManager, with "-O2").
The assertion fails in nmtui's ip_route_transform_from_dest_string(),
which does not initialize the address output argument to %NULL.
There are three possibilities for how the API could work:
- assert/require that the user passes in arguments which are pre-initialized
to NULL or unset.
- always set the output arguments, even if the function fails.
- don't bother and leave output values untouched if the function fails.
It's not clear which approach is best. Not bothering possibly
leaves uninitialized values, which could be error-prone. Still, do
just that.
Fixes: 0b3197a3fd
As we accept addr_family %AF_UNSPEC to detect the address family,
we also need to return the detected family. Just returning the binary
address without the address family makes no sense.
Ok, I changed my mind.
The new behavior seems to make more sense to me. Not that it matters,
because we always use the nm_utils_strbuf*() API with buffers that we expect
to be large enough to contain the result. And when truncation occurs,
we usually don't care much about it. That is, there is no code that
uses the nm_utils_strbuf*() API and handles string truncation in particular.
- previously, parsing wireguard genl data resulted in memory corruption:
- _wireguard_update_from_allowedips_nla() takes pointers to

    allowedip = &g_array_index (buf->allowedips, NMWireGuardAllowedIP, buf->allowedips->len - 1);

but resizing the GArray will invalidate this pointer. This happens
when there are multiple allowed-ips to parse (see the sketch after
this list).
- there was some confusion about who owned the allowedips pointers.
_wireguard_peers_cpy() and _vt_cmd_obj_dispose_lnk_wireguard()
assumed each peer owned its own chunk, but _wireguard_get_link_properties()
would not duplicate the memory properly.
- rework memory handling for allowed_ips. Now, the NMPObjectLnkWireGuard
keeps a pointer _allowed_ips_buf. This buffer contains the instances for
all peers.
The parsing of the netlink message is the complicated part, because
we don't know upfront how many peers/allowed-ips we receive. During
construction, the tracking of peers/allowed-ips is complicated,
via a CList/GArray. At the end of that, we prettify the data
representation and put everything into two buffers. That is more
efficient and simpler for the user afterwards. This moves complexity
to how the object is created, vs. how it is used later.
- ensure that we nm_explicit_bzero() private-key and preshared-key. However,
that only works to a certain point, because our netlink library does not
ensure that no data is leaked.
- don't use a "struct sockaddr" union for the peer's endpoint. Instead,
use a combination of endpoint_family, endpoint_port, and
endpoint_addr.
- a lot of refactoring.
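As referenced in the first point above, a minimal self-contained
illustration of why keeping a pointer obtained via g_array_index()
across an append is unsafe:

  #include <glib.h>

  int
  main (void)
  {
      GArray *arr = g_array_new (FALSE, FALSE, sizeof (int));
      int v = 1;
      int *p;

      g_array_append_val (arr, v);
      p = &g_array_index (arr, int, 0);

      /* appending may reallocate the backing buffer; @p may now dangle,
       * just like the allowedip pointer described above. */
      g_array_append_val (arr, v);

      g_print ("%d\n", *p);   /* undefined behavior if a reallocation happened */

      g_array_free (arr, TRUE);
      return 0;
  }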
I think g_memdup() is dangerous regarding integer overflow: its length
argument is only a guint. There is no need to accept this danger; just
use our own nm_memdup(), which does not have this flaw.
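A sketch of an overflow-safe variant taking a gsize length (not
necessarily the exact nm_memdup() implementation) could look like this:

  #include <glib.h>
  #include <string.h>

  static gpointer
  memdup_sketch (gconstpointer data, gsize size)
  {
      gpointer p;

      if (size == 0)
          return NULL;

      /* a gsize length cannot be silently truncated the way a size
       * larger than G_MAXUINT would be with g_memdup()'s guint argument. */
      p = g_malloc (size);
      memcpy (p, data, size);
      return p;
  }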
PROP_0 is how we commonly name this property when we don't use
NM_GOBJECT_PROPERTIES_DEFINE(). Rename it.
Also, allow skipping PROP_0 in nm_gobject_notify_together(); that
is handy for optionally invoking a notification, like

  nm_gobject_notify_together (obj,
                              PROP_SOMETHING,
                              changed ? PROP_OTHER : PROP_0);
We already had nm_free_secret() to clear the secret out
of a NUL terminated string. That works well for secrets
which are strings: it can be used with a cleanup attribute
(nm_auto_free_secret) and as a cleanup function for a
GBytes.
However, it does not work for secrets which are binary.
For those, we must also track the length of the allocated
data and clear it.
Add two new structs NMSecretPtr and NMSecretBuf to help
with that.
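A rough sketch of the idea behind such a struct (the field names and
layout here are assumptions, not the actual NMSecretPtr definition;
nm_explicit_bzero() is assumed to take a pointer and a length):

  #include <glib.h>

  typedef struct {
      gsize len;
      guint8 *bin;
  } SecretPtrSketch;

  static void
  secret_ptr_clear (SecretPtrSketch *secret)
  {
      if (secret->bin) {
          /* wipe before freeing; a plain memset() could be optimized away */
          nm_explicit_bzero (secret->bin, secret->len);
          g_free (secret->bin);
          secret->bin = NULL;
      }
      secret->len = 0;
  }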