In nm_keyfile_plugin_connection_from_file(), disable the "bad owner"
check.
As root you can read all files anyway, or if necessary even chown them,
and for other users the standard file permissions will do a fine job.
This fixes running "make check" as root.
https://bugzilla.gnome.org/show_bug.cgi?id=701112
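A minimal sketch of the kind of check being disabled, assuming a typical
stat()-based owner test (the code is illustrative, not the actual keyfile
plugin source):

#include <sys/stat.h>

struct stat st;

/* The now-disabled "bad owner" check looked something like this: */
if (stat (full_path, &st) == 0 && st.st_uid != 0) {
    /* reject the file: not owned by root */
}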
When a connection is removed from NMSettings, it gets deactivated. That was
happening once in response to the now-removed connection's visibility changing
to invisible to everyone, and a second time in response to the actual removal
signal. Unfortunately the second time the NMActiveConnection is already
cleaned up and in the DEACTIVATED state, and has no priv->device, which
causes great hilarity when nm_device_set_state() is called with NULL.
_deactivate_if_active() should only try to deactivate NMActiveConnections
that actually are active.
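A minimal sketch of that guard, with hypothetical lookup/deactivation
helpers standing in for the real calls:

static void
_deactivate_if_active (NMManager *self, NMConnection *connection)
{
    /* find_active_connection() is a hypothetical stand-in */
    NMActiveConnection *ac = find_active_connection (self, connection);

    /* An already-DEACTIVATED NMActiveConnection has no device left,
     * so only touch connections that are still activating or active. */
    if (ac && nm_active_connection_get_state (ac) < NM_ACTIVE_CONNECTION_STATE_DEACTIVATING)
        deactivate_connection (self, ac);  /* hypothetical */
}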
Plugins that could save connections to disk previously depended on inotify
events from the kernel to know when to signal connection removal; that is,
in response to a 'delete' request they would unlink the backing filesystem
resources, get the inotify signal, and cause NM_SETTINGS_CONNECTION_REMOVED
to be emitted.
Unsaved connections don't have any backing resources, so they would never
get the signal emitted, and NMSettings would never forget about them.
Also, when monitor-connection-files=false is set in the configuration, the
inotify signals will never come in because the watches aren't set up.
Given that we can no longer rely on inotify, it's best to just explicitly
send out the NM_SETTINGS_CONNECTION_REMOVED signal whenever a connection
is deleted via the D-Bus interface or internally.
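A sketch of the explicit emission, assuming NM_SETTINGS_CONNECTION_REMOVED
expands to the signal name (the surrounding delete handler is illustrative):

static void
do_delete (NMSettingsConnection *connection)
{
    /* ... unlink backing resources, if any ... */

    /* Don't wait for inotify; announce the removal ourselves. */
    g_signal_emit_by_name (connection, NM_SETTINGS_CONNECTION_REMOVED);
}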
We do not need to increase the periodic scanning frequency when the AP
signal strength is good enough. A signal level of -45 dBm was considered
good; change the threshold to -65 dBm.
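Illustrative shape of the check (constant name hypothetical):

#define GOOD_SIGNAL_DBM -65

if (signal_dbm >= GOOD_SIGNAL_DBM) {
    /* good signal: no need to scan more frequently */
}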
platform/nm-linux-platform.c: In function 'delete_object':
platform/nm-linux-platform.c:102:13: error: 'cached_object' may be used uninitialized in this function [-Werror=maybe-uninitialized]
if (object && *object) {
^
platform/nm-linux-platform.c:1019:35: note: 'cached_object' was declared here
Except that it won't be used uninitialized, but I guess the gsystem auto*
cleanup machinery is confusing the compiler.
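The usual fix for -Wmaybe-uninitialized on auto-cleanup variables is to
initialize the pointer at its declaration; a sketch (the cleanup attribute
and its function are illustrative):

/* Initializing at declaration satisfies the compiler without changing
 * behavior: */
__attribute__((cleanup (object_put))) struct nl_object *cached_object = NULL;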
The platform still needs to know about them, because the ethernet interface
is what gets configured and used for IP. But the Manager doesn't want to
create a full new NMDevice for them, because there's already a Modem
device that "owns" that WWAN interface. So keep WWAN devices visible
to the platform, but just make the manager ignore them when creating
NMDevices.

Also, many WWAN pseudo-ethernet drivers set NOARP because they really
are point-to-point and thus ARP is pointless, and in this case they
won't have an arptype of ARPHRD_ETHER. So determining the NMLinkType
from udev must take that into account.
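A hedged sketch of the classification (type names as in nm-platform; the
udev helper is hypothetical):

#include <net/if_arp.h>

static NMLinkType
link_type_from_udev (GUdevDevice *udev_device, int arptype)
{
    /* NOARP WWAN pseudo-ethernet interfaces won't report ARPHRD_ETHER,
     * so consult the udev driver information first. */
    if (udev_device_is_wwan (udev_device))  /* hypothetical helper */
        return NM_LINK_TYPE_WWAN_ETHERNET;
    if (arptype == ARPHRD_ETHER)
        return NM_LINK_TYPE_ETHERNET;
    return NM_LINK_TYPE_UNKNOWN;
}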
Software devices don't have a UDI until udev finds them, and since we need
to know about software devices before udev finds them, the UDI will be
missing. Instead of requiring a UDI on NMDevice creation, update the
property from the NMPlatform link change signal when udev does find the
device.
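An illustrative sketch of the update (the udev lookup helper is
hypothetical):

static void
device_link_changed (NMDevice *device, NMPlatformLink *info)
{
    NMDevicePrivate *priv = NM_DEVICE_GET_PRIVATE (device);

    if (!priv->udi) {
        /* udev has (possibly) found the device by now */
        priv->udi = get_udi_for_ifindex (info->ifindex);  /* hypothetical */
        if (priv->udi)
            g_object_notify (G_OBJECT (device), NM_DEVICE_UDI);
    }
}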
Now that a UDI is no longer required for device creation, software devices
added by NM would be created in the platform_link_added_cb() signal
handler triggered by the various software device creation methods in
system_create_virtual_device() (e.g. nm_platform_bridge_add() etc.). Then
the NMDevice created in system_create_virtual_device() would be a duplicate
and cause problems when it was added. Since system_create_virtual_device()
needs to do setup on some devices, suppress device creation from the
platform link-added handler in this function.
Much of this is a hack which should be cleaned up later.
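A sketch of the suppression (flag name hypothetical):

/* In system_create_virtual_device(): */
priv->ignore_link_added = TRUE;
nm_platform_bridge_add (iface);
priv->ignore_link_added = FALSE;

/* And at the top of platform_link_added_cb(): */
if (priv->ignore_link_added)
    return;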
The unref in finalize is not needed, because the hash table already gets
freed in dispose.

Note that setting available_connections to NULL in dispose is not really
needed, because (although dispose might be called more than once) it is
guarded by "priv->disposed = TRUE;". But leave it for clarity in case
somebody tries to access the pointer (which shouldn't happen).
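A minimal sketch of the resulting pattern, following the guard described
above (treat names as illustrative):

static void
dispose (GObject *object)
{
    NMDevicePrivate *priv = NM_DEVICE_GET_PRIVATE (object);

    if (!priv->disposed) {
        priv->disposed = TRUE;

        g_hash_table_unref (priv->available_connections);
        /* Not strictly required thanks to the guard, but kept for
         * clarity in case anything touches the pointer later: */
        priv->available_connections = NULL;
    }

    G_OBJECT_CLASS (nm_device_parent_class)->dispose (object);
}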
Signed-off-by: Thomas Haller <thaller@redhat.com>
nm_platform_query_devices() wasn't including the new "reason" argument
in its link-added emissions. (This didn't cause any problems since
NMManager doesn't look at that argument anyway, but it's still
obviously wrong.)
The initscripts do this:
MATCH='^.+\.[0-9]{1,4}$'
if [[ "${DEVICE}" =~ $MATCH ]]; then
    VID=$(echo "${DEVICE}" | LC_ALL=C sed 's/^.*\.\([0-9]\+\)/\1/')
    PHYSDEV=${DEVICE%.*}
fi

MATCH='^vlan[0-9]{1,4}?'
if [[ "${DEVICE}" =~ $MATCH ]]; then
    VID=$(echo "${DEVICE}" | LC_ALL=C sed 's/^vlan0*//')
    # PHYSDEV should be set in ifcfg-vlan* file
    if test -z "$PHYSDEV"; then
        net_log $"PHYSDEV should be set for device ${DEVICE}"
        exit 1
    fi
fi
which means that if the VLAN name starts with "vlan" then
PHYSDEV must be set, otherwise the parent interface cannot
be determined.
Since PHYSDEV, if set, reflects the explicit intention of the user
rather than a name inferred from DEVICE, make PHYSDEV take precedence
over determining the parent interface from heuristics.
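A sketch of the resulting precedence in the ifcfg-rh reader (svGetValue as
in the plugin's shvar API; the fallback helper is hypothetical):

parent = svGetValue (ifcfg, "PHYSDEV", FALSE);
if (!parent)
    parent = parent_from_device_name (device);  /* heuristic fallback */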
NM knows software devices it creates exist immediately, and it knows
it can use them immediately instead of waiting for udev to find them.
Ideally we'd wait for udev to find all devices, to allow tags and
other rules to run, but software devices NM creates must currently
be available for use immediately.
Bridges and bonds are required to be IFF_UP before they can be
activated (see their is_available() implementations). But during
activation, specifically in nm_manager_activate_connection(),
the interface is created and immediately checked for availability
to activate (since the creation was the result of the user requesting
device activation). That availability check fails because the device
is not yet visible outside NMPlatform (it hasn't been found by udev,
since we haven't returned to the event loop), so nm_platform_link_is_up()
fails and with it the availability check. Thus the device activation
ends in error.
What should really happen is that device activation shouldn't
be synchronous; a new NMActiveConnection should be created for
any activation request, which can then wait until all the
resources it needs for activation are available (device found,
master devices created, etc). That's going to take more work
though.
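In the meantime, a sketch of the immediate-availability idea (field and
helper names hypothetical):

/* Software links created by NM itself are usable right away;
 * externally discovered links still wait for udev. */
if (link->nm_created || seen_by_udev (link->ifindex))
    announce_link (platform, link);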
Use preprocessor constants in nm_logging_all_domains_to_string instead
of hard-coded values for the special logging domain names.
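For example (macro name illustrative):

/* before */
g_string_append (str, "DEFAULT");
/* after */
g_string_append (str, LOGD_DEFAULT_STRING);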
Signed-off-by: Thomas Haller <thaller@redhat.com>
If a device is removed, then trying to reset its IPv6 state will cause
nm_utils_do_sysctl() to log a warning since the path no longer exists.
So don't try to do it in that case.
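An illustrative guard:

/* Skip the write when the device (and thus its sysctl path) is gone: */
if (g_file_test (path, G_FILE_TEST_EXISTS))
    nm_utils_do_sysctl (path, value);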
Most places except the tests don't want the default route when asking
the platform for all routes, so make that simpler by just adding a
parameter for including the default route or not.
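The resulting call shape (function and parameter names illustrative):

GArray *routes;

/* the tests pass TRUE; nearly everyone else passes FALSE */
routes = nm_platform_ip4_route_get_all (ifindex, FALSE);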
The switch to combining IPv4 configs to arrive at the final config would
cause externally added addresses and routes to be removed from the
interface when a DHCP or LLv4 event came in, because the externally
added details weren't cached anywhere and thus would be dropped on the
IP changes.
When a VPN wanted to add some routes (like the host route for the
VPN gateway) it would add them itself and listen for parent device
events and re-add them if necessary. That's pretty fragile, plus
the platform blows away routes that aren't part of the IP config
that's getting applied.
So we might as well just have the VPN connection tell the parent
what the routes are, and have the parent device handle updating
the routing. The routes are through the parent device anyway,
and so are "owned" by the parent too.
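An illustrative flow (the setter is a hypothetical stand-in):

/* The VPN hands its routes to the parent device, which folds them
 * into the IP config it applies: */
nm_device_set_vpn4_config (parent_device, vpn_ip4_config);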
Like IPv6, keep the DHCP/LLv4 config separate and combine it with the
NMSettingIP4Config to arrive at the final, combined IP4 config. This
brings the behavior in line with IPv6 code flow and will allow adding
the VPN routes config into the mix more easily.
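A sketch of the combining step (merge helpers illustrative):

NMIP4Config *composite = nm_ip4_config_new ();
NMSettingIP4Config *s_ip4 = nm_connection_get_setting_ip4_config (connection);

if (priv->dev_ip4_config)   /* DHCP or LLv4 result, kept separate */
    nm_ip4_config_merge (composite, priv->dev_ip4_config);
if (priv->vpn4_config)      /* VPN routes, see above */
    nm_ip4_config_merge (composite, priv->vpn4_config);
nm_ip4_config_merge_setting (composite, s_ip4);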
This is the same as we already did for nm-platform addresses in commit
68c3e1153c. It will help avoid various
issues and is also a step towards support for route lifetimes.
It appears that in some cases the kernel does not send notifications via
netlink when the default route is removed. This causes the platform
route cache to become stale, and thus when the default route is
reset by NM the platform thinks the route already exists and does
not add it. But the route doesn't exist, because the kernel silently
removed it without telling anyone.
Fix that with a big hammer by flushing/refilling the route cache when
devices are deactivated (deletion of their addresses causes the default
route to be removed by the kernel) and when the default route is
updated by NM itself.
Pavel: if we find a more granular method, we should probably revert
this as the cache refill can be expensive.
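The big hammer, illustratively, via libnl's cache refill (field names
hypothetical):

/* Re-read all routes from the kernel so the cache matches reality: */
nl_cache_refill (priv->nlh, priv->route_cache);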
If the agent has dropped off the bus then its proxy may already
be destroyed, so we'll get warnings when trying to make method
calls using it. Track proxy destruction and warn if we try to
use a destroyed proxy.
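Illustrative tracking with dbus-glib's proxy "destroy" signal:

g_signal_connect (priv->proxy, "destroy",
                  G_CALLBACK (proxy_destroyed_cb), self);

/* ...then before each method call: */
if (priv->proxy_destroyed) {
    g_warning ("agent proxy already destroyed; skipping call");
    return;
}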
Turns out this function is useless, because it's only called when the
agent has dropped off the bus or when the whole request is being
freed. If the agent has dropped off the bus then there's no point
in asking it to cancel the request because there's nothing to ask.
So we can collapse request_cancel() into request_free().
If all agents can handle VPN hints, then we'll try to use
ConnectInteractive() to let the VPN plugin ask for secrets
interactively via the SecretsRequired signal. These hints
are then passed to agents during the connection process if
the plugin needs more secrets or different secrets, and when
the new secrets are returned, they are passed back to the VPN
plugin.
If at least one agent does not have the VPN hints capability,
we can't use ConnectInteractive() and must fall back to the old
Connect() call, because that agent won't be able to send the
hints to the VPN plugin's authentication dialog, and thus
we won't get back the secrets the VPN plugin is looking for.
So, for interactive secrets to work correctly, you need:
1) a VPN plugin updated for interactive secrets requests
2) NM updated for interactive secrets requests
3) all agents to set the VPN_HINTS capability when
   registering with NetworkManager and to pass hints
   along to the VPN authentication dialog
4) a VPN authentication dialog updated to look for hints
   and only return secrets corresponding to the hints
   requested by the plugin
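A sketch of the capability gate (helper names illustrative; the flag is
the VPN_HINTS capability named above):

gboolean all_hints_capable = TRUE;
GSList *iter;

for (iter = agents; iter; iter = g_slist_next (iter)) {
    NMSecretAgent *agent = iter->data;

    if (!(nm_secret_agent_get_capabilities (agent) & NM_SECRET_AGENT_CAPABILITY_VPN_HINTS))
        all_hints_capable = FALSE;
}

/* Only use ConnectInteractive() if every agent can handle hints;
 * otherwise fall back to plain Connect(). */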