We have bridge min/max/default values in core-internal. Do the same
for bridge port ones.
We will soon use those values to enforce limits when assuming a
bridge port configuration.
Secret-flags are flags, but most combinations don't actually make sense
and should perhaps be rejected. That is currently not done, and most places
just check that no unknown flags are set.
Add _nm_setting_secret_flags_valid() to perform the check in one place
instead of having the implementation at various places.
The property infos are already sorted by name. As nm_setting_enumerate_values()
now uses that information, in most cases there is nothing to sort.
The only exception is NMSettingConnection, which uses a different
sort order, at least for some purposes (not for all):
- nm_setting_enumerate_values(), obviously.
- nm_setting_enumerate_values() is called by the keyfile writer. That
means the keyfile writer will persist properties in this sorted way.
Also cache the property list with the alternative sorting in the
setting meta data, instead of calculating it each time.
Besides caching the information, this has the additional benefit that
this kind of sorting is now directly available without calling
nm_setting_enumerate_values(). Meaning, we can implement the keyfile writer
without using nm_setting_enumerate_values().
We should no longer use nm_connection_for_each_setting_value() and
nm_setting_for_each_value(). They are fundamentally broken, as they do
not work with properties that are not backed by a GObject property,
and they cannot be fixed because they are public API.
Add an internal function _nm_connection_aggregate() to replace it.
Compare the implementation of the aggregation functionality inside
libnm with the previous two checks for secret-flags that it replaces:
- the previous approach broke the abstraction and required detailed
knowledge of secret flags. Meaning, callers had to special-case
NMSettingVpn and GObject-property based secrets.
If we implement a new way of handling secrets (as we will need
for WireGuard), then the new way should only affect libnm-core and
not require changes elsewhere.
- it's very inefficient to iterate over all settings this way. It involves
cloning and sorting the list of settings, and retrieving and cloning all
GObject properties, only to look at the secret properties.
_nm_connection_aggregate() is supposed to be more flexible than just
the two new aggregate types that perform a "find-any" search. The
@arg argument and boolean return value can suffice to implement
different aggregation types in the future.
This also fixes NMAgentManager's check of secret flags for VPNs
(NM_CONNECTION_AGGREGATE_ANY_SYSTEM_SECRET_FLAGS). For VPNs, a secret
is a property that has either a secret value or a secret-flag. The
previous implementation would only look at secrets that are present and
check their flags; it wouldn't consider secret-flags that are
NM_SETTING_SECRET_FLAG_NONE but have no secret value.
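As a rough illustration of how _nm_connection_aggregate() is meant to be used,
assuming a signature along these lines (the declaration below is a simplified
assumption shown only for illustration; the real prototype lives in libnm-core's
internal headers):

    #include <NetworkManager.h>

    /* Hypothetical, simplified declaration for illustration purposes. */
    typedef enum {
        NM_CONNECTION_AGGREGATE_ANY_SECRETS,
        NM_CONNECTION_AGGREGATE_ANY_SYSTEM_SECRET_FLAGS,
    } NMConnectionAggregateType;

    gboolean _nm_connection_aggregate (NMConnection *connection,
                                       NMConnectionAggregateType type,
                                       gpointer arg);

    /* Example: a "find-any" aggregation over all secrets of the profile. */
    static gboolean
    profile_has_any_secret (NMConnection *connection)
    {
        return _nm_connection_aggregate (connection,
                                         NM_CONNECTION_AGGREGATE_ANY_SECRETS,
                                         NULL /* @arg is unused by this aggregate type */);
    }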
We will need access to the serialization flags from within the synth_func().
That will be for WireGuard's peers. Peers are a list of complex, structured
elements, and some fields (the peer's preshared-key) are secret and
others are not. So when serializing the peers, we need to know whether
to include secrets or not.
Instead of letting _nm_setting_to_dbus() check the flags, pass them
down.
While at it, don't pass the property_name argument. Instead, pass the
entire meta-data information we have. Most synth functions don't care
about the property or the name either way. But we should not pre-filter
information that we have at hand. Just pass it to the synth function.
If the synth function were public API, that would be a reason to be
careful about what we pass. But it isn't, and it only has one caller.
So passing it along is fine. Also, do it now while adding the flags
argument, as we touch all synth implementations anyway.
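To illustrate, a hypothetical synth function for such a peers property could
then look roughly like this (the function name and the exact parameter list are
assumptions, not the final internal signature):

    #include <NetworkManager.h>

    /* Hypothetical synth function: the serialization flags are passed down,
     * so the function itself decides whether to emit secret fields. */
    static GVariant *
    _synth_peers (NMConnection *connection,
                  NMSetting *setting,
                  NMConnectionSerializationFlags flags)
    {
        GVariantBuilder builder;
        gboolean with_secrets = !(flags & NM_CONNECTION_SERIALIZE_NO_SECRETS);

        g_variant_builder_init (&builder, G_VARIANT_TYPE ("aa{sv}"));
        /* ... append one a{sv} entry per peer here, adding the
         * "preshared-key" field only when @with_secrets is TRUE ... */
        (void) with_secrets;
        return g_variant_builder_end (&builder);
    }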
We shall not shortcut the synth function. If the synth function is
unhappy about a missing NMConnection argument, then that needs to be
fixed.
So, revert 395c385b9 and fix the issue in nm_setting_wireless_get_security()
differently. I presume that is the only place that caused problems,
since the history of the patch does not clearly show what the problem
was.
This reverts commit 395c385b9b.
In quite a few cases we need the string representation on the heap.
While nm_utils_inet*_ntop() accepts NULL as the output buffer to fall back
to a static buffer, such use of a static buffer is discouraged.
So, we actually should always allocate a temporary buffer on the
stack. But that is cumbersome to write.
Add simple wrappers that make calling this more convenient.
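For illustration, such a wrapper could look roughly like this (a sketch only;
the helper name and exact signature in the tree may differ):

    #include <arpa/inet.h>
    #include <glib.h>

    /* Sketch: always use a stack buffer of the right size internally
     * and hand a heap-allocated copy back to the caller. */
    static char *
    inet4_ntop_dup (in_addr_t addr)
    {
        char buf[INET_ADDRSTRLEN];
        struct in_addr a = { .s_addr = addr };

        return g_strdup (inet_ntop (AF_INET, &a, buf, sizeof (buf)));
    }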
Report an error when the user tries to add an unknown attribute
instead of silently accepting (and ignoring) it.
Note that this commit also changes the behavior of public API
nm_utils_sriov_vf_from_str() to return an error when an unknown
attribute is found. I think the previous behavior was buggy, as wrong
attributes were simply ignored without the user having any way to know.
Fixes: a9b4532fa7
The entire point of using version 3/5 UUIDs is to generate
stable UUIDs based on a string. It's usually important that
we don't change the UUID generation algorithm later on.
Since we didn't have a version 5 implementation, we would always
resort to the MD5 based version 3. Version 5 is recommended by RFC 4122:
   o  Choose either MD5 [4] or SHA-1 [8] as the hash algorithm; If
      backward compatibility is not an issue, SHA-1 is preferred.
Add a version 5 implementation so we can use it in the future.
All test values are generated with Python's uuid module or OSSP uuid.
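For reference, the SHA-1 based construction itself is straightforward; a
self-contained sketch (independent of the actual libnm-core implementation):

    #include <glib.h>
    #include <string.h>

    /* RFC 4122 version-5: SHA-1 over the 16-byte namespace UUID followed by
     * the name, then fix up the version and variant bits of the result. */
    static void
    uuid5_generate (const guint8 ns[16], const char *name, guint8 out[16])
    {
        GChecksum *sum = g_checksum_new (G_CHECKSUM_SHA1);
        guint8 digest[20];
        gsize len = sizeof (digest);

        g_checksum_update (sum, ns, 16);
        g_checksum_update (sum, (const guchar *) name, strlen (name));
        g_checksum_get_digest (sum, digest, &len);
        g_checksum_free (sum);

        memcpy (out, digest, 16);
        out[6] = (out[6] & 0x0F) | 0x50;  /* set version to 5 */
        out[8] = (out[8] & 0x3F) | 0x80;  /* set RFC 4122 variant */
    }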
We link against libuuid.so, but its use was entirely internal to
libnm-core: we only exposed UUIDs in string form.
Add API to also handle UUIDs in binary form.
Note that libuuid already defines a type "uuid_t". However,
don't use it and instead use our own typedef NMUuid.
Reasons:
- uuid.h should be internal to libnm-core (nm-utils.c specifically),
and not be used by or exposed to other parts of the code.
- uuid_t is a typedef for a guchar[16] array. Typedefs
for arrays are confusing because, depending on whether
such a variable is an automatic variable or a pointer function
argument, it behaves differently with respect to taking its
address and to the result of "sizeof()".
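The NMUuid typedef is roughly of this shape: a struct wrapping the 16 bytes,
which avoids the array-typedef pitfalls described above (shown here as an
illustration of the idea, not necessarily verbatim):

    #include <glib.h>

    /* A struct, not an array typedef: &u is unambiguously a pointer to the
     * struct and sizeof (NMUuid) is 16, for variables and arguments alike. */
    typedef struct _NMUuid {
        guint8 uuid[16];
    } NMUuid;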
Add 3 variants of _nm_utils_hexstr2bin*():
- _nm_utils_hexstr2bin_full(), which takes a preallocated
buffer and fills it.
- _nm_utils_hexstr2bin_alloc(), which returns a malloc'ed
buffer.
- _nm_utils_hexstr2bin_buf(), which fills a preallocated
buffer of a specific size.
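All three variants are thin wrappers around the same core conversion; a
self-contained sketch of that core (simplified, without the delimiter and
"0x"-prefix handling of the real helpers):

    #include <glib.h>

    /* Decode pairs of hex digits into a caller-provided buffer of @buf_len
     * bytes; returns the number of bytes written, or -1 on invalid input. */
    static gssize
    hexstr2bin (const char *hexstr, guint8 *buf, gsize buf_len)
    {
        gsize i;

        for (i = 0; i < buf_len; i++) {
            int hi = g_ascii_xdigit_value (hexstr[2 * i]);
            int lo;

            if (hi < 0)
                return -1;
            lo = g_ascii_xdigit_value (hexstr[2 * i + 1]);
            if (lo < 0)
                return -1;
            buf[i] = (hi << 4) | lo;
        }
        /* require that the string ends exactly at the expected length */
        return hexstr[2 * i] == '\0' ? (gssize) i : -1;
    }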
We already have nm_utils_bin2hexstr() and _nm_utils_bin2hexstr_full().
This is confusing.
- nm_utils_bin2hexstr() is public API of libnm. Also, it has
a last argument @final_len to truncate the string at that
length.
It uses no delimiter and lower-case characters.
- _nm_utils_bin2hexstr_full() does not do any truncation, but
it has options to specify a delimiter, the character case,
and to update a given buffer in-place. Also, like
nm_utils_bin2hexstr() and _nm_utils_bin2hexstr() it can
allocate a new buffer on demand.
- _nm_utils_bin2hexstr() would use ':' as delimiter and make
the case configurable. Also, it would always allocate the returned
buffer.
This is too much and confusing. Drop _nm_utils_bin2hexstr(), which is internal
API and just a wrapper around _nm_utils_bin2hexstr_full().
It's just more convenient, as it allows better chaining.
Also, allow passing %NULL as @out buffer. It's clear how
large the output buffer must be, so for convenience let the
function (optionally) allocate a new buffer.
This behavior of whether to
- take @out, fill it, and return @out
- take no @out, allocate a new buffer, fill it, and return it
is slightly error prone. But it was already error prone before, when
it would accept an input buffer without an explicit buffer length. I think
this makes it safer, because in the common case the caller can avoid
pre-allocating a buffer of the right size and the function gets it
right.
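The resulting "fill @out, or allocate when @out is NULL" pattern looks roughly
like this (a simplified sketch; the real helper additionally handles delimiters
and character case):

    #include <glib.h>

    static char *
    bin2hexstr (const guint8 *data, gsize len, char *out /* nullable */)
    {
        static const char hexdigits[] = "0123456789abcdef";
        char *result = out ? out : g_new (char, len * 2 + 1);
        gsize i;

        for (i = 0; i < len; i++) {
            result[2 * i]     = hexdigits[data[i] >> 4];
            result[2 * i + 1] = hexdigits[data[i] & 0x0F];
        }
        result[2 * len] = '\0';

        /* returns either @out itself or the newly allocated string,
         * which allows chaining the call as a function argument. */
        return result;
    }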
The certificate setter functions like nm_setting_802_1x_set_ca_cert()
actually load the file from disk and validate whether it is a valid
certificate. That is very wrong to do.
For one, the certificates are external files, which are not embedded
into the NMConnection. That means that strongly validating the files while
loading the ifcfg files is wrong, because:
- if validation fails, loading the file fails in its entirety with
a warning in the log. That is not helpful to the user, who now
can no longer use nmcli to fix the path of the certificate (because
the profile failed to load in the first place).
- even if the certificate is valid at load-time, there is no guarantee
that it is valid later on, when we actually try to use the file. What
good does such a validation do? nm_setting_802_1x_set_ca_cert() might
make sense during nmcli_connection_modify(). At the moment when we
create or update the profile, we do want to validate the input and
be helpful to the user. Validating the file later on, when reloading
the profile from disk seems undesirable.
- note how keyfile also does not perform such validations (for good
reasons, I presume).
Also, there is so much wrong with how the ifcfg reader handles EAP files.
There is a lot of duplication and trying to be too smart. I find it
wrong how the "eap_readers" are nested. E.g. both eap_peap_reader() and
the "tls" method call into eap_tls_reader(), making it look like
NMSetting8021x could handle multiple EAP profiles separately. But it cannot. The
802-1x profile is a flat set of properties like ca-cert and others. All
EAP methods share these properties, so this complex parsing
is not only complicated, but also wrong. The reader should simply parse
the shell variables and let NMSetting8021x::verify() handle validation
of the settings. Anyway, this patch does not address that.
Also, setting the likes of NM_SETTING_802_1X_CLIENT_CERT_PASSWORD was
awkwardly done only when
    privkey_format != NM_SETTING_802_1X_CK_FORMAT_PKCS12
    && scheme == NM_SETTING_802_1X_CK_SCHEME_PKCS11
That is too smart. Just read it from the file; if it contains invalid data, let
verify() reject it. That is only partly addressed here.
Also note how the writer never actually writes the likes of
IEEE_8021X_CLIENT_CERT_PASSWORD. That is another bug, and it is not fixed
either.
We only exposed wrappers around this function, but all of them have
different behavior, and none exposes all possible features. For example,
nm_utils_hexstr2bin() strips leading "0x", but it does not clear
the data on failure (nm_explicit_bzero()). Instead of adding more
wrappers, expose the underlying implementation, so that callers may
use the function the way they want.
nm_utils_rsa_key_encrypt() is internal API which is only used for testing.
Move it to nm-crypto.h (where it fits better) and rename it to make the
testing-aspect obvious.
I think GSList is not a great data type. Most of the time when we used
it, we would have been better off choosing another data type.
These utility functions were unused, and I think we should use GSList
less.
Drop them.
GBytes makes more sense, because it's immutable.
Also, since we use GBytes in other places, having
different types is cumbersome and requires needless
conversions.
Also:
- avoid nm_utils_escape_ssid() in favor of _nm_utils_ssid_to_string().
We use nm_utils_escape_ssid() when we want to log the SSID. However, it
does not escape newlines, which is bad.
- also no longer use nm_utils_same_ssid(). Since it no longer
treats trailing NUL characters specially, it is no different from
g_bytes_equal().
- also, don't use nm_utils_ssid_to_utf8() for logging anymore.
For logging, _nm_utils_ssid_escape_utf8safe() is better, because
it is a lossless escaping that can be unambiguously reverted.
Add a new 'match' setting containing properties to match a connection
to devices. At the moment only the interface-name property is present
and, contrary to connection.interface-name, it allows the use of
wildcards.
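For illustration, usage from libnm could look like this (assuming the usual
accessor pattern for new settings; shown as an example, not a prescribed API):

    #include <NetworkManager.h>

    static void
    add_match_setting (NMConnection *connection)
    {
        NMSettingMatch *s_match;

        /* contrary to connection.interface-name, match.interface-name
         * may contain wildcard patterns such as "eth*". */
        s_match = NM_SETTING_MATCH (nm_setting_match_new ());
        nm_setting_match_add_interface_name (s_match, "eth*");
        nm_connection_add_setting (connection, NM_SETTING (s_match));
    }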
Note that in the NetworkManager API (D-Bus, libnm, and nmcli),
the features are called "feature-xyz". The "feature-" prefix
is used because NMSettingEthtool will possibly gain support
for options that are not only -K|--offload|--features, for
example -C|--coalesce.
The "xyz" suffix is either the name the ethtool utility uses for the
feature ("tso", "rx") or, if the ethtool utility specifies no alias for the
feature, the name from kernel's ETH_SS_FEATURES ("tx-tcp6-segmentation").
If possible, we prefer the ethtool utility's naming.
Also note how the features "feature-sg", "feature-tso", and
"feature-tx" actually refer to multiple underlying kernel features
at once. This too follows what the ethtool utility does.
The functionality is not yet implemented server-side.
Add a new way in which NMSetting subclasses can be implemented.
Currently, most NMSetting implementations realize all their properties
via GObject properties. That has some downsides:
- the biggest one is the large effort required to add new properties.
Most of them are implemented on a one-by-one basis and come
with additional API (like native getter functions).
That makes it cumbersome to add more properties.
- for certain properties, it's hard to encode them entirely in
a GObject property. That results in unusable API like
NM_SETTING_IP_CONFIG_ADDRESSES, NM_SETTING_BOND_OPTIONS,
NM_SETTING_USER_DATA. These complex-valued properties only
exist because we currently always need GObject properties
to even implement simple functionality. For example,
nm_setting_duplicate() is entirely implemented via
nm_setting_enumerate_values(), which can only iterate
GObject properties. There is no reason why this is necessary.
Note also how nmcli handles bond options and VPN
data badly. That is only a shortcoming of nmcli and wouldn't
need to be that way. But it happened because we didn't
keep an open mind that settings might be more than just
accessing GObject properties.
- a major point of NMSetting is to convert to/from a GVariant
for the D-Bus API. As NMSetting needs to squeeze all values
into the static GObject structure, there is no place to
encode invalid or unknown properties. Ideally,
_nm_setting_new_from_dbus() does not lose any information
and a subsequent _nm_setting_to_dbus() can restore the original
variant. That is interesting, because we want an older
libnm client to be able to talk to a newer NetworkManager version. The
client needs to handle unknown properties gracefully to stay
forward compatible, but it also should not just drop those
properties on the floor.
Note, however, that we ideally still want nm_setting_verify() to be
able to reject settings that have such unknown/invalid values. So,
it should be possible to create an NMSetting instance without
error or loss of information, but verify() should be usable to
identify such settings as invalid.
GObject properties also have a few upsides:
- libnm is heavily oriented around GObject. So, we generate
our nm-settings manual based on the gtk-doc. Note, however,
how we fail to generate a useful manual for bond.options.
Also note that there is no reason we couldn't generate
great documentation, even if the properties are not GObject
properties.
- GObject properties do give some functionality like meta-data,
data binding and notification. However, the meta-data is not
sufficient on its own. Note how keyfile and nmcli need extensive
descriptor tables on top of the GObject properties to make this
useful. Note also how GObject notifications for NMSetting instances
are usually not useful, aside from data binding as nmtui does.
Also note how NMSettingBond already follows a different paradigm
than using GObject properties. Nowadays, NMSettingBond is considered
a mistake (related bug rh#1032808). Many ideas of NMSettingBond
are flawed, like exposing an inferior API that reduces everything
to a string hash. Also, it only implemented the options hash inside
NMSettingBond. That means, if we considered this good style,
we would have to duplicate the approach in each new setting
implementation.
Add a new style to track data for NMSetting subclasses. It keeps
an internal hash table with all GVariant properties. Also, the
functionality is hooked into the NMSetting base class, so all future
subclasses that follow this way can benefit from it. This approach
has a few similarities with NMSettingBond, but avoids its flaws.
With this, we also no longer need GObject properties (provided we
also implement generating useful documentation that is not based on
gtk-doc). They may be added as accessors if they are useful, but there
is no need for them.
Also, handling the properties as a hash of variants invites a
more generic approach when handling them. While we could still add
accessors that operate on a one-by-one basis, this leads to a more
generic usage where we apply common functionality to a set of properties.
Also, this is for the moment entirely internal and an implementation
detail. It's entirely up to the NMSetting subclass to make use of this
new style. Also, only few hooks are available to the subclass.
If they turn out to be necessary, they might be added. However, for
the moment, the functionality is restricted to what is useful and
necessary.
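A conceptual sketch of the new style (not the actual libnm-core code): the
setting keeps its properties as a name-to-GVariant hash, which makes generic
operations such as duplicate, compare and to-dbus trivial and preserves
unknown keys received over D-Bus:

    #include <glib.h>

    typedef struct {
        GHashTable *hash;   /* (char *) -> (GVariant *) */
    } SettingGenData;

    static SettingGenData *
    gen_data_new (void)
    {
        SettingGenData *data = g_new0 (SettingGenData, 1);

        data->hash = g_hash_table_new_full (g_str_hash, g_str_equal,
                                            g_free, (GDestroyNotify) g_variant_unref);
        return data;
    }

    static void
    gen_data_set (SettingGenData *data, const char *name, GVariant *value)
    {
        /* unknown property names are stored just like known ones */
        g_hash_table_insert (data->hash, g_strdup (name), g_variant_ref_sink (value));
    }

    static GVariant *
    gen_data_to_dbus (SettingGenData *data)
    {
        GVariantBuilder builder;
        GHashTableIter iter;
        const char *name;
        GVariant *value;

        g_variant_builder_init (&builder, G_VARIANT_TYPE ("a{sv}"));
        g_hash_table_iter_init (&iter, data->hash);
        while (g_hash_table_iter_next (&iter, (gpointer *) &name, (gpointer *) &value))
            g_variant_builder_add (&builder, "{sv}", name, value);
        return g_variant_builder_end (&builder);
    }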
NMSetting internally already tracked a list of all proper GObject properties
and D-Bus-only properties.
Rework the tracking of the list, so that:
- instead of attaching the data to the GType of the setting via
g_type_set_qdata(), it is tracked in a static array indexed by
NMMetaSettingType. This allows finding the setting data by simple
pointer arithmetic, instead of performing a lookup (as
g_type_get_qdata() does).
Note that this is still thread safe, because the static table entry is
initialized in the class-init function via _nm_setting_class_commit().
And it is only accessed by following an NMSettingClass instance, so
the class constructor has already run (maybe not for all setting classes,
but for the particular one that we look up).
I think this makes initialization of the metadata simpler to
understand.
Previously, in a first phase each class would attach the metadata
to the GType as setting_property_overrides_quark(). Then, during
nm_setting_class_ensure_properties(), it would merge them and
set them as setting_properties_quark(). Now, during the first phase,
we only incrementally build a properties_override GArray, which
we finally hand over during nm_setting_class_commit().
- sort the property infos by name and do binary search.
Also expose these meta data types as internal API in nm-setting-private.h.
While they are not accessed that way yet, it can prove beneficial to have
direct (internal) access to these structures.
Also, rename NMSettingProperty to NMSettInfoProperty to use a distinct
naming scheme. We already have 40+ subclasses of NMSetting that are called
NMSetting*. Likewise, NMMetaSetting* is heavily used already. So, choose a
new, distinct name.
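A conceptual sketch of the lookup scheme described above (with simplified,
hypothetical types): each entry of the per-setting metadata keeps its property
infos sorted by name, so a property can be found via bsearch() instead of a
linear scan:

    #include <glib.h>
    #include <stdlib.h>
    #include <string.h>

    typedef struct {
        const char *name;
        /* ... param-spec and to-dbus/from-dbus hooks would live here ... */
    } PropertyInfo;

    typedef struct {
        const PropertyInfo *property_infos;      /* sorted by name */
        guint               property_infos_len;
    } SettInfo;

    static int
    _property_info_cmp_name (const void *a, const void *b)
    {
        return strcmp (((const PropertyInfo *) a)->name,
                       ((const PropertyInfo *) b)->name);
    }

    static const PropertyInfo *
    sett_info_find_property (const SettInfo *info, const char *name)
    {
        PropertyInfo needle = { .name = name };

        return bsearch (&needle, info->property_infos, info->property_infos_len,
                        sizeof (PropertyInfo), _property_info_cmp_name);
    }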
Previously, each (non abstract) NMSetting class had to register
its name and priority via _nm_register_setting().
Note that libnm-core.la already links against "nm-meta-setting.c",
which redundantly keeps track of the setting names and GTypes
as well.
Re-use NMMetaSettingInfo also in libnm-core.la, to track this meta
data.
The goal is to get rid of private data structures that track
meta data about NMSetting classes, in this case the "registered_settings"
hash. Instead, we should have one place where all this meta data
is tracked. This way, it is also accessible as internal API,
which can be useful (for keyfile).
Note that NMSettingClass has some overlap with NMMetaSettingInfo.
One difference is that NMMetaSettingInfo is const, while NMSettingClass
is only initialized during the class_init() method. Apart from that,
it's mostly a matter of taste whether we attach meta data to
NMSettingClass, to NMMetaSettingInfo, or to a static array indexed
by NMMetaSettingType.
Note that previously _nm_register_setting() was private API. That
means no user could subclass a functioning NMSetting instance. The same
is still true: NMMetaSettingInfo is internal API and users cannot access
it to create their own NMSetting subclasses. But that is more or less
intentional: libnm is not designed to be extensible via subclassing, nor is it
clear why that would be a useful thing to do. One day, we should remove
the NMSetting and NMSettingClass definitions from public headers. Their
only use is subclassing the types, which however does not work.
While libnm-core was already linking against nm-meta-setting.c,
nm_meta_setting_infos was unreferenced. So, this change increases
the binary size of libnm and NetworkManager (by 1032 bytes). Note however
that roughly the same information was previously allocated at runtime.
Add a new option that allows activating a profile multiple times
(at the same time). Previously, all profiles were implicitly
NM_SETTING_CONNECTION_MULTI_CONNECT_SINGLE, meaning that activating
a profile that is already active will deactivate it first.
This will make more sense as we also add more match options for how
profiles can be restricted to particular devices. We already have
connection.type, connection.interface-name, and (ethernet|wifi).mac-address
to restrict a profile to particular devices. However, it is for example
not possible to specify a wildcard like "eth*" to match a profile to
a set of devices by interface-name. That is another missing feature,
and once we extend the matching capabilities, it makes more sense to
activate a profile multiple times.
See also https://bugzilla.redhat.com/show_bug.cgi?id=997998, which
previously restricted a connection to a single activation at a time.
This work relaxes that again.
This only adds the new property; it is not used nor implemented yet.
https://bugzilla.redhat.com/show_bug.cgi?id=1555012
Note the special error codes NM_UTILS_ERROR_CONNECTION_AVAILABLE_*.
These will be used to determine whether the profile is fundamentally
incompatible with the device, or whether just some other properties
mismatch. That information will be important during a plain `nmcli
connection up`, where NetworkManager searches all devices for a device
to activate. If no device is found (and multiple errors happened),
we want to show the error that is most likely relevant for the user.
Also note, how NMDevice's check_connection_compatible() uses the new
class field "device_class->connection_type_check_compatible" to simplify
checks for compatible profiles.
The error reason is still unused.
We commonly don't use the glib typedefs for char/short/int/long,
but their C types directly.
    $ git grep '\<g\(char\|short\|int\|long\|float\|double\)\>' | wc -l
    587
    $ git grep '\<\(char\|short\|int\|long\|float\|double\)\>' | wc -l
    21114
One could argue that using the glib typedefs is preferable in
public API (of our glib-based libnm library), or where it is clearly
related to glib, like during
    g_object_set (obj, PROPERTY, (gint) value, NULL);
However, that argument does not seem strong, because in practice we don't
follow it today and seldom use the glib typedefs.
Also, the style guide for this would be hard to formalize, because
"using them where clearly related to glib" is a very loose suggestion.
Also note that glib typedefs will always just be typedefs of the
underlying C types. There is no danger of glib changing the meaning
of these typedefs (because that would be a major API break of glib).
A simple style guide is instead: don't use these typedefs.
No manual changes were done; I only ran the bash script:
    FILES=($(git ls-files '*.[hc]'))
    sed -i \
        -e 's/\<g\(char\|short\|int\|long\|float\|double\)\>\( [^ ]\)/\1\2/g' \
        -e 's/\<g\(char\|short\|int\|long\|float\|double\)\> /\1 /g' \
        -e 's/\<g\(char\|short\|int\|long\|float\|double\)\>/\1/g' \
        "${FILES[@]}"
Allow specifying the DUID to be used in the DHCPv6 client identifier
option: the dhcp-duid property accepts either a hex string or the
special values "lease", "llt", "ll", "stable-llt", "stable-ll" and
"stable-uuid".
"lease": give priority to the DUID available in the lease file if any,
otherwise fall back to a global default dependent on the DHCP
client used. This is the default and reflects how the DUID
was managed previously.
"ll": enforce generation and use of LL type DUID based on the current
hardware address.
"llt": enforce generation and use of LLT type DUID based on the current
hardware address and a stable time field.
"stable-ll": enforce generation and use of LL type DUID based on a
link layer address derived from the stable id.
"stable-llt": enforce generation and use of LLT type DUID based on
a link layer address and a timestamp both derived from the
stable id.
"stable-uuid": enforce generation and use of a UUID type DUID based on a
uuid generated from the stable id.
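As an illustrative example (the property is set here via its name string, to
keep the snippet independent of the exact #define), selecting one of the new
modes from libnm code might look like:

    #include <NetworkManager.h>

    static void
    use_stable_llt_duid (NMConnection *connection)
    {
        NMSettingIPConfig *s_ip6 = nm_connection_get_setting_ip6_config (connection);

        /* request an LLT-type DUID derived from the connection's stable id */
        g_object_set (G_OBJECT (s_ip6), "dhcp-duid", "stable-llt", NULL);
    }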
Note that:
- we compile some source files multiple times. Most notably those
under "shared/".
- we include a default header "shared/nm-default.h" in every source
file. This header is supposed to set up a common environment by defining
and including parts that are commonly used. As we always include the
same header, the header must behave differently depending
on whether the compilation is for libnm-core, NetworkManager or
libnm-glib. E.g. it must include <glib/gi18n.h> or <glib/gi18n-lib.h>
depending on whether we compile a library or an application.
For that, the source files need the NETWORKMANAGER_COMPILATION #define
so the header can behave accordingly.
Extend the define to be composed of flags. These flags are all named
NM_NETWORKMANAGER_COMPILATION_WITH_*, and they indicate which parts of the
build are available. E.g. when building libnm-core.la itself,
WITH_LIBNM_CORE, WITH_LIBNM_CORE_INTERNAL, and WITH_LIBNM_CORE_PRIVATE
are available. When building NetworkManager, WITH_LIBNM_CORE_PRIVATE
is not available but the internal parts are still accessible. When
building nmcli, only WITH_LIBNM_CORE (the public part) is available.
This controls the build with finer granularity.
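Conceptually, nm-default.h can then branch on the flags roughly like this (the
i18n-related flag name below is illustrative, not necessarily the exact define):

    /* sketch: pick the right gettext header depending on whether a library
     * or an application is being compiled. */
    #if NETWORKMANAGER_COMPILATION & NM_NETWORKMANAGER_COMPILATION_WITH_GLIB_I18N_LIB
        /* building a library: use the library's own translation domain */
        #include <glib/gi18n-lib.h>
    #else
        /* building an application such as the NetworkManager daemon or nmcli */
        #include <glib/gi18n.h>
    #endif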
Binary search can find the index of a matching entry in a sorted
list. However, if the list contains multiple entries that compare
equal, it can be interesting to find the first/last of these entries,
for example if you want to append new items after the last one.
Extend binary search to optionally continue the search
to determine the range that compares equal.
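A self-contained sketch of the idea, over a plain sorted int array: after a
regular binary search hits a matching entry, keep bisecting towards both ends
to find the first and last index that compare equal.

    #include <glib.h>

    /* Returns any matching index (or -1) and optionally the boundaries of
     * the range of entries that compare equal to @needle. */
    static gssize
    binary_search_range (const int *list, gsize len, int needle,
                         gssize *out_first, gssize *out_last)
    {
        gssize lo = 0, hi = (gssize) len - 1, found = -1;

        /* plain binary search for any matching index */
        while (lo <= hi) {
            gssize mid = lo + (hi - lo) / 2;

            if (list[mid] < needle)
                lo = mid + 1;
            else if (list[mid] > needle)
                hi = mid - 1;
            else {
                found = mid;
                break;
            }
        }

        if (found >= 0) {
            gssize first = found, last = found;

            /* continue bisecting towards the lower end for the first equal entry */
            lo = 0;
            hi = found - 1;
            while (lo <= hi) {
                gssize mid = lo + (hi - lo) / 2;

                if (list[mid] == needle) {
                    first = mid;
                    hi = mid - 1;
                } else
                    lo = mid + 1;
            }

            /* ... and towards the upper end for the last equal entry */
            lo = found + 1;
            hi = (gssize) len - 1;
            while (lo <= hi) {
                gssize mid = lo + (hi - lo) / 2;

                if (list[mid] == needle) {
                    last = mid;
                    lo = mid + 1;
                } else
                    hi = mid - 1;
            }

            if (out_first)
                *out_first = first;
            if (out_last)
                *out_last = last;
        }
        return found;   /* -1 when @needle is not present */
    }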