Compare commits


1 Commit

Author: Colin
SHA1: 3c5fee5d14
Date: 2023-11-08 21:26:42 +00:00
Message: attempt to update swaynotificationcenter patch, but fail
    the new version has problems: start swaync while one of the services is running, and then it'll toggle that service on every control-center open -> close cycle
652 changed files with 63912 additions and 38251 deletions

.gitignore (vendored; 3 changed lines)

@ -1,4 +1,5 @@
.working
/keep
result
result-*
/secrets/local.nix
/working

README.md (125 changed lines)

@ -1,7 +1,3 @@
![hello](doc/hello.gif)
# .❄≡We|_c0m3 7o m`/ f14k≡❄.
## What's Here
this is the top-level repo from which i configure/deploy all my NixOS machines:
@ -10,19 +6,18 @@ this is the top-level repo from which i configure/deploy all my NixOS machines:
- server
- mobile phone (Pinephone)
everything outside of [hosts/](./hosts/) and [secrets/](./secrets/) is intended for export, to be importable for use by 3rd parties.
everything outside of <./hosts/> and <./secrets/> is intended for export, to be importable for use by 3rd parties.
the only hard dependency for my exported pkgs/modules should be [nixpkgs][nixpkgs].
building [hosts/](./hosts/) will require [sops][sops].
building <./hosts/> will require [sops][sops].
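
for example, a rough sketch of consuming this repo from a downstream flake (the input name and URL below are placeholders; `overlays.default` and `nixosModules.default` are outputs defined in this repo's `flake.nix`):

```nix
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
  # placeholder URL: point this at wherever you fetch this repo from
  inputs.sane.url = "git+https://example.com/colin/nix-files";

  outputs = { self, nixpkgs, sane }: {
    nixosConfigurations.my-host = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [
        # expose the exported packages via the overlay
        { nixpkgs.overlays = [ sane.overlays.default ]; }
        # optionally pull in the exported modules (fs, persist, programs, ...)
        sane.nixosModules.default
        ./configuration.nix
      ];
    };
  };
}
```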
you might specifically be interested in these files (elaborated further in #key-points-of-interest):
- ~~[`sxmo-utils`](./pkgs/additional/sxmo-utils/default.nix)~~
- ~~[example SXMO deployment](./hosts/modules/gui/sxmo/default.nix)~~
- these files will remain until my config settles down, but i no longer use or maintain SXMO.
- [`sxmo-utils-latest`](./pkgs/additional/sxmo-utils/default.nix)
- [example SXMO deployment](./hosts/modules/gui/sxmo/default.nix)
- [my implementation of impermanence](./modules/persist/default.nix)
- my way of deploying dotfiles/configuring programs per-user:
- [modules/fs/](./modules/fs/default.nix)
- [modules/programs/](./modules/programs/default.nix)
- [modules/users/](./modules/users/default.nix)
- <./modules/fs/default.nix>
- <./modules/programs.nix>
- <./modules/users.nix>
[nixpkgs]: https://github.com/NixOS/nixpkgs
[sops]: https://github.com/Mic92/sops-nix
@ -40,37 +35,37 @@ or follow the instructions [here][NUR] to use it via the Nix User Repositories.
## Layout
- `doc/`
- instructions for tasks i find myself doing semi-occasionally in this repo.
- instructions for tasks i find myself doing semi-occasionally in this repo.
- `hosts/`
- the bulk of config which isn't factored with external use in mind.
- that is, if you were to add this repo to a flake.nix for your own use,
you won't likely be depending on anything in this directory.
- the bulk of config which isn't factored with external use in mind.
- that is, if you were to add this repo to a flake.nix for your own use,
you won't likely be depending on anything in this directory.
- `integrations/`
- code intended for consumption by external tools (e.g. the Nix User Repos)
- code intended for consumption by external tools (e.g. the Nix User Repos)
- `modules/`
- config which is gated behind `enable` flags, in similar style to nixpkgs'
`nixos/` directory.
- if you depend on this repo, it's most likely for something in this directory.
- config which is gated behind `enable` flags, in similar style to nixpkgs'
`nixos/` directory.
- if you depend on this repo, it's most likely for something in this directory.
- `nixpatches/`
- literally, diffs i apply atop upstream nixpkgs before performing further eval.
- literally, diffs i apply atop upstream nixpkgs before performing further eval.
- `overlays/`
- exposed via the `overlays` output in `flake.nix`.
- predominantly a list of `callPackage` directives.
- exposed via the `overlays` output in `flake.nix`.
- predominantly a list of `callPackage` directives.
- `pkgs/`
- derivations for things not yet packaged in nixpkgs.
- derivations for things from nixpkgs which i need to `override` for some reason.
- inline code for wholly custom packages (e.g. `pkgs/additional/sane-scripts/` for CLI tools
that are highly specific to my setup).
- derivations for things not yet packaged in nixpkgs.
- derivations for things from nixpkgs which i need to `override` for some reason.
- inline code for wholly custom packages (e.g. `pkgs/additional/sane-scripts/` for CLI tools
that are highly specific to my setup).
- `scripts/`
- scripts which aren't reachable on a deployed system, but may aid manual deployments
- scripts which aren't reachable on a deployed system, but may aid manual deployments
- `secrets/`
- encrypted keys, API tokens, anything which one or more of my machines needs
read access to but shouldn't be world-readable.
- not much to see here
- encrypted keys, API tokens, anything which one or more of my machines needs
read access to but shouldn't be world-readable.
- not much to see here
- `templates/`
- exposed via the `templates` output in `flake.nix`.
- used to instantiate short-lived environments.
- used to auto-fill the boiler-plate portions of new packages.
- exposed via the `templates` output in `flake.nix`.
- used to instantiate short-lived environments.
- used to auto-fill the boiler-plate portions of new packages.
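
for instance, the `callPackage` pattern mentioned under `overlays/` above boils down to something like this (the file path and attribute name here are illustrative, not a quote of the actual overlay):

```nix
# illustrative shape of an overlays/pkgs.nix entry
final: prev: {
  sane-scripts = final.callPackage ../pkgs/additional/sane-scripts { };
}
```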
## Key Points of Interest
@ -78,41 +73,35 @@ or follow the instructions [here][NUR] to use it via the Nix User Repositories.
i.e. you might find value in using these in your own config:
- `modules/fs/`
- use this to statically define leafs and nodes anywhere in the filesystem,
not just inside `/nix/store`.
- e.g. specify that `/var/www` should be:
- owned by a specific user/group
- set to a specific mode
- symlinked to some other path
- populated with some statically-defined data
- populated according to some script
- created as a dependency of some service (e.g. `nginx`)
- values defined here are applied neither at evaluation time _nor_ at activation time.
- rather, they become systemd services.
- systemd manages dependencies
- e.g. link `/var/www -> /mnt/my-drive/www` only _after_ `/mnt/my-drive/www` appears)
- this is akin to using [Home Manager's][home-manager] file API -- the part which lets you
statically define `~/.config` files -- just with a different philosophy.
- use this to statically define leafs and nodes anywhere in the filesystem,
not just inside `/nix/store`.
- e.g. specify that `/var/www` should be:
- owned by a specific user/group
- set to a specific mode
- symlinked to some other path
- populated with some statically-defined data
- populated according to some script
- created as a dependency of some service (e.g. `nginx`)
- values defined here are applied neither at evaluation time _nor_ at activation time.
- rather, they become systemd services.
- systemd manages dependencies
- e.g. link `/var/www -> /mnt/my-drive/www` only _after_ `/mnt/my-drive/www` appears)
- this is akin to using [Home Manager's][home-manager] file API -- the part which lets you
statically define `~/.config` files -- just with a different philosophy.
- `modules/persist/`
- my alternative to the Impermanence module.
- this builds atop `modules/fs/` to achieve things stock impermanence can't:
- persist things to encrypted storage which is unlocked at login time (pam_mount).
- "persist" cache directories -- to free up RAM -- but auto-wipe them on mount
and encrypt them to ephemeral keys so they're unreadable post shutdown/unmount.
- `modules/programs/`
- like nixpkgs' `programs` options, but allows both system-wide or per-user deployment.
- allows `fs` and `persist` config values to be gated behind program deployment:
- e.g. `/home/<user>/.mozilla/firefox` is persisted only for users who
`sane.programs.firefox.enableFor.user."<user>" = true;`
- allows aggressive sandboxing any program:
- `sane.programs.firefox.sandbox.method = "bwrap"; # sandbox with bubblewrap`
- `sane.programs.firefox.sandbox.whitelistWayland = true; # allow it to render a wayland window`
- `sane.programs.firefox.sandbox.extraHomePaths = [ "Downloads" ]; # allow it read/write access to ~/Downloads`
- integrated with `fs` and `persist` modules so that programs' config files and persisted data stores are linked into the sandbox w/o any extra involvement.
- `modules/users/`
- convenience layer atop the above modules so that you can just write
`fs.".config/git"` instead of `fs."/home/colin/.config/git"`
- per-user services managed by [s6-rc](https://www.skarnet.org/software/s6-rc/)
- my alternative to the Impermanence module.
- this builds atop `modules/fs/` to achieve things stock impermanence can't:
- persist things to encrypted storage which is unlocked at login time (pam_mount).
- "persist" cache directories -- to free up RAM -- but auto-wipe them on mount
and encrypt them to ephemeral keys so they're unreadable post shutdown/unmount.
- `modules/programs.nix`
- like nixpkgs' `programs` options, but allows both system-wide or per-user deployment.
- allows `fs` and `persist` config values to be gated behind program deployment:
- e.g. `/home/<user>/.mozilla/firefox` is persisted only for users who
`sane.programs.firefox.enableFor.user."<user>" = true;`
- `modules/users.nix`
- convenience layer atop the above modules so that you can just write
`fs.".config/git"` instead of `fs."/home/colin/.config/git"`
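
as a rough sketch, combining the option shapes above with ones used in the host configs later in this repo (the git config contents are purely illustrative; treat this as a sketch rather than a reference):

```nix
{ config, ... }:
{
  # enable firefox for one user, sandboxed via bubblewrap
  sane.programs.firefox.enableFor.user."colin" = true;
  sane.programs.firefox.sandbox.method = "bwrap";
  sane.programs.firefox.sandbox.whitelistWayland = true;
  sane.programs.firefox.sandbox.extraHomePaths = [ "Downloads" ];

  # declare a dotfile; `sane.user.fs` paths are relative to the user's $HOME
  sane.user.fs.".config/git/config".symlink.text = ''
    [user]
      name = colin
  '';

  # keep / on tmpfs and persist only what is explicitly declared
  sane.persist.root-on-tmpfs = true;
}
```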
some things in here could easily find broader use. if you would find benefit in
them being factored out of my config, message me and we could work to make that happen.

TODO.md (112 changed lines)

@ -1,110 +1,67 @@
## BUGS
- moby: megapixels doesn't load in sandbox
- when moby wlan is explicitly set down (via ip link set wlan0 down), /var/lib/trust-dns/dhcp-configs doesn't get reset
- trust-dns: can't recursively resolve api.mangadex.org
- and *sometimes* apple.com fails
- sandbox: `ip netns exec ovpns bash`: doesn't work
- sandbox: link cache means that if i update ~/.config/... files inline, sandboxed programs still see the old version
- mpv: no way to exit fullscreen video on moby
- uosc hides controls on FS, and touch doesn't support unhiding
- Signal restart loop drains battery
- decrease s6 restart time?
- `ssh` access doesn't grant same linux capabilities as login
- why i need to manually restart `wireguard-wg-ovpns` on servo periodically
- else DNS fails
- ringer (i.e. dino incoming call) doesn't prevent moby from sleeping
- sway mouse/kb hotplug doesn't work
- sysvol (volume overlay): when casting with `blast`, sysvol doesn't react to volume changes
## REFACTORING:
- REMOVE DEPRECATED `crypt` from sftpgo_auth_hook
- consolidate ~/dev and ~/ref
- ~/dev becomes a link to ~/ref/cat/mine
- fold hosts/common/home/ssh.nix -> hosts/common/users/colin.nix
### sops/secrets
- attach secrets to the thing they're used by (sane.programs)
- rework secrets to leverage `sane.fs`
- remove sops activation script as it's covered by my systemd sane.fs impl
- user secrets could just use `gocryptfs`, like with ~/private?
- can gocryptfs support nested filesystems, each with different perms (for desko, moby, etc)?
### roles
- allow any host to take the role of `uninsane.org`
- will make it easier to test new services?
### upstreaming
- split out a sxmo module usable by NUR consumers
- bump nodejs version in lemmy-ui
- add updateScripts to all my packages in nixpkgs
- fix lightdm-mobile-greeter for newer libhandy
- port zecwallet-lite to a from-source build
- REVIEW/integrate jellyfin dataDir config: <https://github.com/NixOS/nixpkgs/pull/233617>
- remove `libsForQt5.callPackage` broadly: <https://github.com/NixOS/nixpkgs/issues/180841>
#### upstreaming to non-nixpkgs repos
- gtk: build schemas even on cross compilation: <https://github.com/NixOS/nixpkgs/pull/247844>
- sxmo: add new app entries
## IMPROVEMENTS:
- systemd/journalctl: use a less shit pager
- there's an env var for it: SYSTEMD_PAGER? and a flag for journalctl
### security/resilience
- matrix/ntfy: automatically add the ntfy.uninsane.org push URL as part of synapse launch
- ntfy: use a more secure topic
- validate duplicity backups!
- encrypt more ~ dirs (~/archives, ~/records, ..?)
- best to do this after i know for sure i have good backups
- /mnt/desko/home, etc, shouldn't include secrets (~/private)
- 95% of its use is for remote media access and stuff which isn't in VCS (~/records)
- port all sane.programs to be sandboxed
- enforce that all `environment.packages` has a sandbox profile (or explicitly opts out)
- revisit "non-sandboxable" apps and check that i'm not actually just missing mountpoints
- LL_FS_RW=/ isn't enough -- need all mount points like `=/:/proc:/sys:...`.
- ensure non-bin package outputs are linked for sandboxed apps
- i.e. `outputs.man`, `outputs.debug`, `outputs.doc`, ...
- lock down dbus calls within the sandbox
- otherwise anyone can `systemd-run --user ...` to potentially escape a sandbox
- <https://github.com/flatpak/xdg-dbus-proxy>
- remove `.ssh` access from Firefox!
- limit access to `~/knowledge/secrets` through an agent that requires GUI approval, so a firefox exploit can't steal all my logins
- port sanebox to a compiled language (hare?)
- it adds like 50-70ms launch time _on my laptop_. i'd hate to know how much that is on the pinephone.
- remove /run/wrappers from the sandbox path
- they're mostly useless when using no-new-privs, just an opportunity to forget to specify deps
- make dconf stuff less monolithic
- i.e. per-app dconf profiles for those which need it. possible static config.
- have `sane.programs` be wrapped such that they run in a cgroup?
- at least, only give them access to the portion of the fs they *need*.
- Android takes approach of giving each app its own user: could hack that in here.
- **systemd-run** takes a command and runs it in a temporary scope (cgroup)
- presumably uses the same options as systemd services
- see e.g. <https://github.com/NixOS/nixpkgs/issues/113903#issuecomment-857296349>
- flatpak does this, somehow
- apparmor? SElinux? (desktop) "portals"?
- see Spectrum OS; Alyssa Ross; etc
- bubblewrap-based sandboxing: <https://github.com/nixpak/nixpak>
- canaries for important services
- e.g. daily email checks; daily backup checks
- integrate `nix check` into Gitea actions?
### user experience
- rofi: sort items case-insensitively
- xdg-desktop-portal shouldn't kill children on exit
- *maybe* a job for `setsid -f`?
- replace starship prompt with something more efficient
- watch `forkstat`: it does way too much
- cleanup waybar so that it's not invoking playerctl every 2 seconds
- install apps:
- display QR codes for WiFi endpoints: <https://linuxphoneapps.org/apps/noappid.wisperwind.wifi2qr/>
- shopping list (not in nixpkgs): <https://linuxphoneapps.org/apps/ro.hume.cosmin.shoppinglist/>
- offline Wikipedia (or, add to `wike`)
- offline docs viewer (gtk): <https://github.com/workbenchdev/Biblioteca>
- some type of games manager/launcher
- Gnome Highscore (retro games)?: <https://gitlab.gnome.org/World/highscore>
- better maps for mobile (Osmin (QtQuick)? Pure Maps (Qt/Kirigami)? Gnome Maps is improved in 45)
- note-taking app: <https://linuxphoneapps.org/categories/note-taking/>
- OSK overlay specifically for mobile gaming
- i.e. mock joysticks, for use with SuperTux and SuperTuxKart
- install mobile-friendly games:
- Shattered Pixel Dungeon (nixpkgs `shattered-pixel-dungeon`; doesn't cross-compile b/c openjdk/libIDL) <https://github.com/ebolalex/shattered-pixel-dungeon>
- UnCiv (Civ V clone; nixpkgs `unciv`; doesn't cross-compile): <https://github.com/yairm210/UnCiv>
- Simon Tatham's Puzzle Collection (not in nixpkgs) <https://git.tartarus.org/?p=simon/puzzles.git>
- Shootin Stars (Godot; not in nixpkgs) <https://gitlab.com/greenbeast/shootin-stars>
- numberlink (generic name for Flow Free). not packaged in Nix
- Neverball (https://neverball.org/screenshots.php). nix: as `neverball`
- blurble (https://linuxphoneapps.org/games/app.drey.blurble/). nix: not as of 2024-02-05
- Trivia Quiz (https://linuxphoneapps.org/games/io.github.nokse22.trivia-quiz/)
- sane-sync-music: remove empty dirs
#### moby
- fix cpuidle (gets better power consumption): <https://xnux.eu/log/077.html>
- moby: tune keyboard layout
- install apps:
- display QR codes for WiFi endpoints: <https://linuxphoneapps.org/apps/noappid.wisperwind.wifi2qr/>
- shopping list: <https://linuxphoneapps.org/apps/ro.hume.cosmin.shoppinglist/>
- offline Wikipedia
- SwayNC:
- don't show MPRIS if no players detected
- this is a problem of playerctld, i guess
- add option to change audio output
- fix colors (red alert) to match overall theme
- extend width to 100% of portrait mode
- moby: tune GPS
- run only geoclue, and not gpsd, to save power?
- tune QGPS setting in eg25-control, for less jitter?
@ -113,15 +70,18 @@
- manually do smoothing, as some layer between mepo and geoclue/gpsd?
- moby: show battery state on ssh login
- moby: improve gPodder launch time
- sxmo: port to swaybar like i use on desktop
- users in #sxmo claim it's way better perf
- sxmo: fix youtube scripts (package youtube-cli)
- moby: theme GTK apps (i.e. non-adwaita styles)
- combine multiple icon themes to get one which has the full icon set?
- get adwaita-icon-theme to ship everything even when cross-compiled?
- especially, make the menubar collapsible
- try Gradience tool specifically for theming adwaita? <https://linuxphoneapps.org/apps/com.github.gradienceteam.gradience/>
- phog: remove the gnome-shell runtime dependency to save hella closure size
#### non-moby
- RSS: integrate a paywall bypass
- e.g. self-hosted [ladder](https://github.com/everywall/ladder) (like 12ft.io)
- neovim: set up language server (lsp; rnix-lsp; nvim-lspconfig)
- neovim: integrate LLMs
- Helix: make copy-to-system clipboard be the default
- firefox/librewolf: persist history
- just not cookies or tabs
@ -138,12 +98,16 @@
- could change junk filter from "no DKIM success" to explicit "DKIM failed"
### perf
- debug nixos-rebuild times
- add `pkgs.impure-cached.<foo>` package set to build things with ccache enabled
- every package here can be auto-generated, and marked with some env var so that it doesn't pollute the pure package set
- would be super handy for package prototyping!
- get moby to build without binfmt emulation (i.e. make all emulation explicit)
- then i can distribute builds across servo + desko, and also allow servo to pull packages from desko w/o worrying about purity
## NEW FEATURES:
- migrate MAME cabinet to nix
- boot it from PXE from servo?
- deploy to new server, and use it as a remote builder
- enable IPv6
- package lemonade lemmy app: <https://linuxphoneapps.org/apps/ml.mdwalters.lemonade/>

Binary file not shown (before: 127 KiB).

flake.lock

@ -1,52 +1,15 @@
{
"nodes": {
"flake-compat": {
"locked": {
"lastModified": 1688025799,
"narHash": "sha256-ktpB4dRtnksm9F5WawoIkEneh1nrEvuxb5lJFt1iOyw=",
"owner": "nix-community",
"repo": "flake-compat",
"rev": "8bf105319d44f6b9f0d764efa4fdef9f1cc9ba1c",
"type": "github"
},
"original": {
"owner": "nix-community",
"repo": "flake-compat",
"type": "github"
}
},
"flake-parts": {
"inputs": {
"nixpkgs-lib": [
"nixpkgs-wayland",
"nix-eval-jobs",
"nixpkgs"
]
},
"locked": {
"lastModified": 1712014858,
"narHash": "sha256-sB4SWl2lX95bExY2gMFG5HIzvva5AVMJd4Igm+GpZNw=",
"owner": "hercules-ci",
"repo": "flake-parts",
"rev": "9126214d0a59633752a136528f5f3b9aa8565b7d",
"type": "github"
},
"original": {
"owner": "hercules-ci",
"repo": "flake-parts",
"type": "github"
}
},
"flake-utils": {
"inputs": {
"systems": "systems"
},
"locked": {
"lastModified": 1710146030,
"narHash": "sha256-SZ5L6eA7HJ/nmkzGG7/ISclqe6oZdOZTNoesiInkXPQ=",
"lastModified": 1687709756,
"narHash": "sha256-Y5wKlQSkgEK2weWdOu4J3riRd+kV/VCgHsqLNTTWQ/0=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "b1d9ab70662946ef0850d488da1c9019f3a9752a",
"rev": "dbabf0ca0c0c4bce6ea5eaf65af5cb694d2082c7",
"type": "github"
},
"original": {
@ -55,25 +18,6 @@
"type": "github"
}
},
"lib-aggregate": {
"inputs": {
"flake-utils": "flake-utils",
"nixpkgs-lib": "nixpkgs-lib"
},
"locked": {
"lastModified": 1715515815,
"narHash": "sha256-yaLScMHNFCH6SbB0HSA/8DWDgK0PyOhCXoFTdHlWkhk=",
"owner": "nix-community",
"repo": "lib-aggregate",
"rev": "09883ca828e8cfaacdb09e29190a7b84ad1d9925",
"type": "github"
},
"original": {
"owner": "nix-community",
"repo": "lib-aggregate",
"type": "github"
}
},
"mobile-nixos": {
"flake": false,
"locked": {
@ -91,157 +35,42 @@
"type": "github"
}
},
"nix-eval-jobs": {
"inputs": {
"flake-parts": "flake-parts",
"nix-github-actions": "nix-github-actions",
"nixpkgs": "nixpkgs",
"treefmt-nix": "treefmt-nix"
},
"locked": {
"lastModified": 1715804156,
"narHash": "sha256-GtIHP86Cz1kD9xZO/cKbNQACHKdoT9WFbLJAq6W2EDY=",
"owner": "nix-community",
"repo": "nix-eval-jobs",
"rev": "bb95091f6c6f38f6cfc215a1797a2dd466312c8b",
"type": "github"
},
"original": {
"owner": "nix-community",
"repo": "nix-eval-jobs",
"type": "github"
}
},
"nix-github-actions": {
"inputs": {
"nixpkgs": [
"nixpkgs-wayland",
"nix-eval-jobs",
"nixpkgs"
]
},
"locked": {
"lastModified": 1703863825,
"narHash": "sha256-rXwqjtwiGKJheXB43ybM8NwWB8rO2dSRrEqes0S7F5Y=",
"owner": "nix-community",
"repo": "nix-github-actions",
"rev": "5163432afc817cf8bd1f031418d1869e4c9d5547",
"type": "github"
},
"original": {
"owner": "nix-community",
"repo": "nix-github-actions",
"type": "github"
}
},
"nixpkgs": {
"locked": {
"lastModified": 1715037484,
"narHash": "sha256-OUt8xQFmBU96Hmm4T9tOWTu4oCswCzoVl+pxSq/kiFc=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "ad7efee13e0d216bf29992311536fce1d3eefbef",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixpkgs-unstable",
"repo": "nixpkgs",
"type": "github"
}
},
"nixpkgs-lib": {
"locked": {
"lastModified": 1715474941,
"narHash": "sha256-CNCqCGOHdxuiVnVkhTpp2WcqSSmSfeQjubhDOcgwGjU=",
"owner": "nix-community",
"repo": "nixpkgs.lib",
"rev": "58e03b95f65dfdca21979a081aa62db0eed6b1d8",
"type": "github"
},
"original": {
"owner": "nix-community",
"repo": "nixpkgs.lib",
"type": "github"
}
},
"nixpkgs-next-unpatched": {
"locked": {
"lastModified": 1715839255,
"narHash": "sha256-IKUEASEZKDqOC/q6RP54O3Dz3C2+BBi+VtnIbhBpBbw=",
"owner": "nixos",
"repo": "nixpkgs",
"rev": "1887e39d7e68bb191eb804c0f976ad25b3980595",
"type": "github"
},
"original": {
"owner": "nixos",
"ref": "staging-next",
"repo": "nixpkgs",
"type": "github"
}
},
"nixpkgs-stable": {
"locked": {
"lastModified": 1715458492,
"narHash": "sha256-q0OFeZqKQaik2U8wwGDsELEkgoZMK7gvfF6tTXkpsqE=",
"lastModified": 1698544399,
"narHash": "sha256-vhRmPyEyoPkrXF2iykBsWHA05MIaOSmMRLMF7Hul6+s=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "8e47858badee5594292921c2668c11004c3b0142",
"rev": "d87c5d8c41c9b3b39592563242f3a448b5cc4bc9",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "release-23.11",
"ref": "release-23.05",
"repo": "nixpkgs",
"type": "github"
}
},
"nixpkgs-unpatched": {
"locked": {
"lastModified": 1715851096,
"narHash": "sha256-ed72tDlrU4/PBWPYoxPk+HFazU3Yny0stTjlGZ7YeMA=",
"lastModified": 1698611440,
"narHash": "sha256-jPjHjrerhYDy3q9+s5EAsuhyhuknNfowY6yt6pjn9pc=",
"owner": "nixos",
"repo": "nixpkgs",
"rev": "977a49df312d89b7dfbb3579bf13b7dfe23e7878",
"rev": "0cbe9f69c234a7700596e943bfae7ef27a31b735",
"type": "github"
},
"original": {
"owner": "nixos",
"ref": "master",
"ref": "nixos-unstable",
"repo": "nixpkgs",
"type": "github"
}
},
"nixpkgs-wayland": {
"inputs": {
"flake-compat": "flake-compat",
"lib-aggregate": "lib-aggregate",
"nix-eval-jobs": "nix-eval-jobs",
"nixpkgs": [
"nixpkgs-unpatched"
]
},
"locked": {
"lastModified": 1715843614,
"narHash": "sha256-qveerNXc6yF2digoKDR9Hj/o0n8Y3bW/yET6sRochv0=",
"owner": "nix-community",
"repo": "nixpkgs-wayland",
"rev": "5e2c5345f3204c867c9d4183cbb68069d0f7a951",
"type": "github"
},
"original": {
"owner": "nix-community",
"repo": "nixpkgs-wayland",
"type": "github"
}
},
"root": {
"inputs": {
"mobile-nixos": "mobile-nixos",
"nixpkgs-next-unpatched": "nixpkgs-next-unpatched",
"nixpkgs-unpatched": "nixpkgs-unpatched",
"nixpkgs-wayland": "nixpkgs-wayland",
"sops-nix": "sops-nix",
"uninsane-dot-org": "uninsane-dot-org"
}
@ -254,11 +83,11 @@
"nixpkgs-stable": "nixpkgs-stable"
},
"locked": {
"lastModified": 1715482972,
"narHash": "sha256-y1uMzXNlrVOWYj1YNcsGYLm4TOC2aJrwoUY1NjQs9fM=",
"lastModified": 1698548647,
"narHash": "sha256-7c03OjBGqnwDW0FBaBc+NjfEBxMkza+dxZGJPyIzfFE=",
"owner": "Mic92",
"repo": "sops-nix",
"rev": "b6cb5de2ce57acb10ecdaaf9bbd62a5ff24fa02e",
"rev": "632c3161a6cc24142c8e3f5529f5d81042571165",
"type": "github"
},
"original": {
@ -282,40 +111,19 @@
"type": "github"
}
},
"treefmt-nix": {
"inputs": {
"nixpkgs": [
"nixpkgs-wayland",
"nix-eval-jobs",
"nixpkgs"
]
},
"locked": {
"lastModified": 1711963903,
"narHash": "sha256-N3QDhoaX+paWXHbEXZapqd1r95mdshxToGowtjtYkGI=",
"owner": "numtide",
"repo": "treefmt-nix",
"rev": "49dc4a92b02b8e68798abd99184f228243b6e3ac",
"type": "github"
},
"original": {
"owner": "numtide",
"repo": "treefmt-nix",
"type": "github"
}
},
"uninsane-dot-org": {
"inputs": {
"flake-utils": "flake-utils",
"nixpkgs": [
"nixpkgs-unpatched"
]
},
"locked": {
"lastModified": 1713198740,
"narHash": "sha256-8SUaqMJdAkMOI9zhvlToL7eCr5Sl+2o2pDQ7nq+HoJU=",
"lastModified": 1698634059,
"narHash": "sha256-+Oyv6vDyCtBzab/5cTG0nUrHD9gj7KgGfD4D1Rn4fCk=",
"ref": "refs/heads/master",
"rev": "af8420d1c256d990b5e24de14ad8592a5d85bf77",
"revCount": 239,
"rev": "2419750ca98fc04af42c91e50c49a29c68d465d2",
"revCount": 210,
"type": "git",
"url": "https://git.uninsane.org/colin/uninsane"
},

flake.nix (414 changed lines)

@ -29,29 +29,22 @@
# - daily:
# - nixos-unstable cut from master after enough packages have been built in caches.
# - every 6 hours:
# - master auto-merged into staging and staging-next
# - master auto-merged into staging.
# - staging-next auto-merged into staging.
# - manually, approximately once per month:
# - staging-next is cut from staging.
# - staging-next merged into master.
#
# which branch to source from?
# - nixos-unstable: for everyday development; it provides good caching
# - master: temporarily if i'm otherwise cherry-picking lots of already-applied patches
# - staging-next: if testing stuff that's been PR'd into staging, i.e. base library updates.
# - staging: maybe if no staging-next -> master PR has been cut yet?
# - for everyday development, prefer `nixos-unstable` branch, as it provides good caching.
# - if need to test bleeding updates (e.g. if submitting code into staging):
# - use `staging-next` if it's been cut (i.e. if there's an active staging-next -> master PR)
# - use `staging` if no staging-next branch has been cut.
#
# <https://github.com/nixos/nixpkgs/tree/nixos-unstable>
# nixpkgs-unpatched.url = "github:nixos/nixpkgs?ref=nixos-unstable";
nixpkgs-unpatched.url = "github:nixos/nixpkgs?ref=master";
# nixpkgs-unpatched.url = "github:nixos/nixpkgs?ref=nixos-staging";
# nixpkgs-unpatched.url = "github:nixos/nixpkgs?ref=nixos-staging-next";
nixpkgs-next-unpatched.url = "github:nixos/nixpkgs?ref=staging-next";
nixpkgs-wayland = {
url = "github:nix-community/nixpkgs-wayland";
inputs.nixpkgs.follows = "nixpkgs-unpatched";
};
nixpkgs-unpatched.url = "github:nixos/nixpkgs?ref=nixos-unstable";
# nixpkgs-unpatched.url = "github:nixos/nixpkgs?ref=staging-next";
# nixpkgs-unpatched.url = "github:nixos/nixpkgs?ref=staging";
mobile-nixos = {
# <https://github.com/nixos/mobile-nixos>
@ -80,8 +73,6 @@
outputs = {
self,
nixpkgs-unpatched,
nixpkgs-next-unpatched ? nixpkgs-unpatched,
nixpkgs-wayland,
mobile-nixos,
sops-nix,
uninsane-dot-org,
@ -100,31 +91,39 @@
# rather than apply our nixpkgs patches as a flake input, do that here instead.
# this (temporarily?) resolves the bad UX wherein a subflake residing in the same git
# repo as the main flake causes the main flake to have an unstable hash.
patchNixpkgs = variant: nixpkgs: (import ./nixpatches/flake.nix).outputs {
inherit variant nixpkgs;
self = patchNixpkgs variant nixpkgs;
nixpkgs = (import ./nixpatches/flake.nix).outputs {
self = nixpkgs;
nixpkgs = nixpkgs-unpatched;
} // {
# provide values that nixpkgs ordinarily sources from the flake.lock file,
# inaccessible to it here because of the import-from-derivation.
# rev and shortRev seem to not always exist (e.g. if the working tree is dirty),
# so those are made conditional.
#
# these values impact the name of a produced nixos system. having date/rev in the
# `readlink /run/current-system` store path helps debuggability.
inherit (self) lastModifiedDate lastModified;
} // optionalAttrs (self ? rev) {
inherit (self) rev;
} // optionalAttrs (self ? shortRev) {
inherit (self) shortRev;
};
nixpkgs' = patchNixpkgs "master" nixpkgs-unpatched;
nixpkgsCompiledBy = system: nixpkgs'.legacyPackages."${system}";
nixpkgsCompiledBy = system: nixpkgs.legacyPackages."${system}";
evalHost = { name, local, target, variant ? null, nixpkgs ? nixpkgs' }: nixpkgs.lib.nixosSystem {
evalHost = { name, local, target }: nixpkgs.lib.nixosSystem {
system = target;
modules = [
{
nixpkgs.buildPlatform.system = local;
nixpkgs = (if (local != null) then {
buildPlatform = local;
} else {}) // {
# TODO: does the earlier `system` arg to nixosSystem make its way here?
hostPlatform.system = target;
};
# nixpkgs.buildPlatform = local; # set by instantiate.nix instead
# nixpkgs.config.replaceStdenv = { pkgs }: pkgs.ccacheStdenv;
}
(optionalAttrs (local != target) {
# XXX(2023/12/11): cache.nixos.org uses `system = ...` instead of `hostPlatform.system`, and that choice impacts the closure of every package.
# so avoid specifying hostPlatform.system on non-cross builds, so i can use upstream caches.
nixpkgs.hostPlatform.system = target;
})
(optionalAttrs (variant == "light") {
sane.maxBuildCost = 2;
})
(optionalAttrs (variant == "min") {
sane.maxBuildCost = 0;
})
(import ./hosts/instantiate.nix { hostName = name; })
self.nixosModules.default
self.nixosModules.passthru
@ -137,26 +136,39 @@
];
};
in {
nixosConfigurations = let
hosts = {
servo = { name = "servo"; local = "x86_64-linux"; target = "x86_64-linux"; };
desko = { name = "desko"; local = "x86_64-linux"; target = "x86_64-linux"; };
desko-light = { name = "desko"; local = "x86_64-linux"; target = "x86_64-linux"; variant = "light"; };
lappy = { name = "lappy"; local = "x86_64-linux"; target = "x86_64-linux"; };
lappy-light = { name = "lappy"; local = "x86_64-linux"; target = "x86_64-linux"; variant = "light"; };
lappy-min = { name = "lappy"; local = "x86_64-linux"; target = "x86_64-linux"; variant = "min"; };
moby = { name = "moby"; local = "x86_64-linux"; target = "aarch64-linux"; };
moby-light = { name = "moby"; local = "x86_64-linux"; target = "aarch64-linux"; variant = "light"; };
moby-min = { name = "moby"; local = "x86_64-linux"; target = "aarch64-linux"; variant = "min"; };
rescue = { name = "rescue"; local = "x86_64-linux"; target = "x86_64-linux"; };
};
hostsNext = mapAttrs' (h: v: {
name = "${h}-next";
value = v // { nixpkgs = patchNixpkgs "staging-next" nixpkgs-next-unpatched; };
}) hosts;
in mapAttrValues evalHost (
hosts // hostsNext
);
nixosConfigurations =
let
hosts = {
servo = { name = "servo"; local = "x86_64-linux"; target = "x86_64-linux"; };
desko = { name = "desko"; local = "x86_64-linux"; target = "x86_64-linux"; };
lappy = { name = "lappy"; local = "x86_64-linux"; target = "x86_64-linux"; };
moby = { name = "moby"; local = "x86_64-linux"; target = "aarch64-linux"; };
rescue = { name = "rescue"; local = "x86_64-linux"; target = "x86_64-linux"; };
};
# cross-compiled builds: instead of emulating the host, build using a cross-compiler.
# - these are faster to *build* than the emulated variants (useful when tweaking packages),
# - but fewer of their packages can be found in upstream caches.
cross = mapAttrValues evalHost hosts;
emulated = mapAttrValues
({name, local, target}: evalHost {
inherit name target;
local = null;
})
hosts;
prefixAttrs = prefix: attrs: mapAttrs'
(name: value: {
name = prefix + name;
inherit value;
})
attrs;
in
(prefixAttrs "cross-" cross) //
(prefixAttrs "emulated-" emulated) // {
# prefer native builds for these machines:
inherit (emulated) servo desko lappy rescue;
# prefer cross-compiled builds for these machines:
inherit (cross) moby;
};
# unofficial output
# this produces a EFI-bootable .img file (GPT with a /boot partition and a system (/ or /nix) partition).
@ -175,37 +187,26 @@
imgs = mapAttrValues (host: host.config.system.build.img) self.nixosConfigurations;
# unofficial output
hostConfigs = mapAttrValues (host: host.config) self.nixosConfigurations;
hostSystems = mapAttrValues (host: host.config.system.build.toplevel) self.nixosConfigurations;
hostPkgs = mapAttrValues (host: host.config.system.build.pkgs) self.nixosConfigurations;
hostPrograms = mapAttrValues (host: mapAttrValues (p: p.package) host.config.sane.programs) self.nixosConfigurations;
patched.nixpkgs = nixpkgs';
overlays = {
# N.B.: `nix flake check` requires every overlay to take `final: prev:` at defn site,
# hence the weird redundancy.
default = final: prev: self.overlays.pkgs final prev;
sane-all = final: prev: import ./overlays/all.nix final prev;
disable-flakey-tests = final: prev: import ./overlays/disable-flakey-tests.nix final prev;
pkgs = final: prev: import ./overlays/pkgs.nix final prev;
pins = final: prev: import ./overlays/pins.nix final prev;
preferences = final: prev: import ./overlays/preferences.nix final prev;
optimizations = final: prev: import ./overlays/optimizations.nix final prev;
passthru = final: prev:
let
mobile = (import "${mobile-nixos}/overlay/overlay.nix");
uninsane = uninsane-dot-org.overlays.default;
wayland = final: prev: {
# default is to dump the packages into `waylandPkgs` *and* the toplevel.
# but i just want the `waylandPkgs` set
inherit (nixpkgs-wayland.overlays.default final prev)
waylandPkgs
new-wayland-protocols #< 2024/03/10: nixpkgs-wayland assumes this will be in the toplevel
;
};
uninsane = uninsane-dot-org.overlay;
in
(mobile final prev)
// (uninsane final prev)
// (wayland final prev)
;
};
@ -233,27 +234,23 @@
# extract only our own packages from the full set.
# because of `nix flake check`, we flatten the package set and only surface x86_64-linux packages.
packages = mapAttrs
(system: passthruPkgs: passthruPkgs.lib.filterAttrs
(name: pkg:
(system: allPkgs:
allPkgs.lib.filterAttrs (name: pkg:
# keep only packages which will pass `nix flake check`, i.e. keep only:
# - derivations (not package sets)
# - packages that build for the given platform
(! elem name [ "feeds" "pythonPackagesExtensions" ])
&& (passthruPkgs.lib.meta.availableOn passthruPkgs.stdenv.hostPlatform pkg)
&& (allPkgs.lib.meta.availableOn allPkgs.stdenv.hostPlatform pkg)
)
(
# expose sane packages and chosen inputs (uninsane.org)
(import ./pkgs { pkgs = passthruPkgs; }) // {
inherit (passthruPkgs) uninsane-dot-org;
(import ./pkgs { pkgs = allPkgs; }) // {
inherit (allPkgs) uninsane-dot-org;
}
)
)
# self.legacyPackages;
{
x86_64-linux = (nixpkgsCompiledBy "x86_64-linux").appendOverlays [
self.overlays.passthru
];
}
{ inherit (self.legacyPackages) x86_64-linux; }
;
apps."x86_64-linux" =
@ -261,54 +258,17 @@
pkgs = self.legacyPackages."x86_64-linux";
sanePkgs = import ./pkgs { inherit pkgs; };
deployScript = host: addr: action: pkgs.writeShellScript "deploy-${host}" ''
set -e
nix build '.#nixosConfigurations.${host}.config.system.build.toplevel' --out-link ./result-${host} $@
sudo nix sign-paths -r -k /run/secrets/nix_serve_privkey $(readlink ./result-${host})
host="${host}"
addr="${addr}"
action="${if action != null then action else ""}"
runOnTarget() {
# run the command ($@) on the machine we're deploying to.
# if that's a remote machine, then do it via ssh, else local shell.
if [ -n "$addr" ]; then
ssh "$addr" "$@"
else
"$@"
fi
}
nix build ".#nixosConfigurations.$host.config.system.build.toplevel" --out-link "./build/result-$host" "$@"
storePath="$(readlink ./build/result-$host)"
# mimic `nixos-rebuild --target-host`, in effect:
# - nix-copy-closure ...
# - nix-env --set ...
# - switch-to-configuration <boot|dry-activate|switch|test|>
# avoid the actual `nixos-rebuild` for a few reasons:
# - fewer nix evals
# - more introspectability and debuggability
# - sandbox friendliness (especially: `git` doesn't have to be run as root)
if [ -n "$addr" ]; then
sudo nix store sign -r -k /run/secrets/nix_signing_key "$storePath"
# add more `-v` for more verbosity (up to 5).
# builders-use-substitutes false: optimizes so that the remote machine doesn't try to get paths from its substituters.
# we already have all paths here, and the remote substitution is slow to check and SERIOUSLY flaky on moby in particular.
nix copy -vv --option builders-use-substitutes false --to "ssh-ng://$addr" "$storePath"
fi
if [ -n "$action" ]; then
runOnTarget sudo nix-env -p /nix/var/nix/profiles/system --set "$storePath"
runOnTarget sudo "$storePath/bin/switch-to-configuration" "$action"
fi
# XXX: this triggers another config eval & (potentially) build.
# if the config changed between these invocations, the above signatures might not apply to the deployed config.
# let the user handle that edge case by re-running this whole command
nixos-rebuild --flake '.#${host}' ${action} --target-host colin@${addr} --use-remote-sudo $@
'';
deployApp = host: addr: action: {
type = "app";
program = ''${deployScript host addr action}'';
};
# pkg updating.
# a cleaner alternative lives here: <https://discourse.nixos.org/t/how-can-i-run-the-updatescript-of-personal-packages/25274/2>
# mkUpdater :: [ String ] -> { type = "app"; program = path; }
mkUpdater = attrPath: {
type = "app";
program = let
@ -333,7 +293,7 @@
} else {}
)
(pkgs.lib.getAttrFromPath basePath sanePkgs);
mkUpdaters = { ignore ? [], flakePrefix ? [] }@opts: basePath:
mkUpdaters = { ignore ? [] }@opts: basePath:
let
updaters = mkUpdatersNoAliases opts basePath;
invokeUpdater = name: pkg:
@ -343,7 +303,7 @@
# in case `name` has a `.` in it, we have to quote it
escapedPath = builtins.map (p: ''"${p}"'') fullPath;
updatePath = builtins.concatStringsSep "." (flakePrefix ++ escapedPath);
updatePath = builtins.concatStringsSep "." ([ "update" "pkgs" ] ++ escapedPath);
in pkgs.lib.optionalString doUpdateByDefault (
pkgs.lib.escapeShellArgs [
"nix" "run" ".#${updatePath}"
@ -351,9 +311,8 @@
);
in {
type = "app";
# top-level app just invokes the updater of everything one layer below it
program = builtins.toString (pkgs.writeShellScript
(builtins.concatStringsSep "-" (flakePrefix ++ basePath))
(builtins.concatStringsSep "-" (["update"] ++ basePath))
(builtins.concatStringsSep
"\n"
(pkgs.lib.mapAttrsToList invokeUpdater updaters)
@ -373,15 +332,9 @@
- `nix run '.#update.feeds'`
- updates metadata for all feeds
- `nix run '.#init-feed' <url>`
- `nix run '.#deploy.{desko,lappy,moby,servo}[-light|-test]' [nix args ...]`
- build and deploy the host
- `nix run '.#preDeploy.{desko,lappy,moby,servo}[-light]' [nix args ...]`
- copy closures to a host, but don't activate it
- or `nix run '.#preDeploy'` to target all hosts
- `nix run '.#deploy-{lappy,moby,moby-test,servo}' [nixos-rebuild args ...]`
- `nix run '.#check'`
- make sure all systems build; NUR evaluates
- `nix run '.#bench'`
- benchmark the eval time of common targets this flake provides
specific build targets of interest:
- `nix build '.#imgs.rescue'`
@ -393,114 +346,48 @@
nix flake show --option allow-import-from-derivation true
'');
};
# wrangle some names to get package updaters which refer back into the flake, but also conditionally ignore certain paths (e.g. sane.feeds).
# TODO: better design
update = rec {
_impl.pkgs.sane = mkUpdaters { flakePrefix = [ "update" "_impl" "pkgs" ]; ignore = [ [ "sane" "feeds" ] ]; } [ "sane" ];
pkgs = _impl.pkgs.sane;
_impl.feeds.sane.feeds = mkUpdaters { flakePrefix = [ "update" "_impl" "feeds" ]; } [ "sane" "feeds" ];
feeds = _impl.feeds.sane.feeds;
};
update.pkgs = mkUpdaters { ignore = [ ["feeds"] ]; } [];
update.feeds = mkUpdaters {} [ "feeds" ];
init-feed = {
type = "app";
program = "${pkgs.feeds.init-feed}";
};
deploy = {
desko = deployApp "desko" "desko" "switch";
desko-light = deployApp "desko-light" "desko" "switch";
lappy = deployApp "lappy" "lappy" "switch";
lappy-light = deployApp "lappy-light" "lappy" "switch";
lappy-min = deployApp "lappy-min" "lappy" "switch";
moby = deployApp "moby" "moby" "switch";
moby-light = deployApp "moby-light" "moby" "switch";
moby-min = deployApp "moby-min" "moby" "switch";
moby-test = deployApp "moby" "moby" "test";
servo = deployApp "servo" "servo" "switch";
# like `nixos-rebuild --flake . switch`
self = deployApp "$(hostname)" "" "switch";
self-light = deployApp "$(hostname)-light" "" "switch";
self-min = deployApp "$(hostname)-min" "" "switch";
deploy-lappy = {
type = "app";
program = builtins.toString (pkgs.writeShellScript "deploy-all" ''
nix run '.#deploy.lappy'
nix run '.#deploy.moby'
nix run '.#deploy.desko'
nix run '.#deploy.servo'
'');
program = ''${deployScript "lappy" "lappy" "switch"}'';
};
preDeploy = {
# build the host and copy the runtime closure to that host, but don't activate it.
desko = deployApp "desko" "desko" null;
desko-light = deployApp "desko-light" "desko" null;
lappy = deployApp "lappy" "lappy" null;
lappy-light = deployApp "lappy-light" "lappy" null;
lappy-min = deployApp "lappy-min" "lappy" null;
moby = deployApp "moby" "moby" null;
moby-light = deployApp "moby-light" "moby" null;
moby-min = deployApp "moby-min" "moby" null;
servo = deployApp "servo" "servo" null;
deploy-moby-test = {
type = "app";
program = builtins.toString (pkgs.writeShellScript "predeploy-all" ''
# copy the -min/-light variants first; this might be run while waiting on a full build. or the full build failed.
nix run '.#preDeploy.moby-min' -- "$@"
nix run '.#preDeploy.lappy-min' -- "$@"
nix run '.#preDeploy.moby-light' -- "$@"
nix run '.#preDeploy.lappy-light' -- "$@"
nix run '.#preDeploy.desko-light' -- "$@"
nix run '.#preDeploy.lappy' -- "$@"
nix run '.#preDeploy.servo' -- "$@"
nix run '.#preDeploy.moby' -- "$@"
nix run '.#preDeploy.desko' -- "$@"
'');
program = ''${deployScript "moby" "moby-hn" "test"}'';
};
deploy-moby = {
type = "app";
program = ''${deployScript "moby" "moby-hn" "switch"}'';
};
deploy-servo = {
type = "app";
program = ''${deployScript "servo" "servo" "switch"}'';
};
sync = {
type = "app";
program = builtins.toString (pkgs.writeShellScript "sync-all" ''
RC_lappy=$(nix run '.#sync.lappy' -- "$@")
RC_moby=$(nix run '.#sync.moby' -- "$@")
RC_desko=$(nix run '.#sync.desko' -- "$@")
echo "lappy: $RC_lappy"
echo "moby: $RC_moby"
echo "desko: $RC_desko"
'');
};
sync.desko = {
# copy music from servo to desko
# can run this from any device that has ssh access to desko and servo
type = "app";
program = builtins.toString (pkgs.writeShellScript "sync-to-desko" ''
sudo mount /mnt/desko/home
${pkgs.sane-scripts.sync-music}/bin/sane-sync-music --compat /mnt/servo/media/Music /mnt/desko/home/Music "$@"
'');
};
sync.lappy = {
# copy music from servo to lappy
# can run this from any device that has ssh access to lappy and servo
type = "app";
program = builtins.toString (pkgs.writeShellScript "sync-to-lappy" ''
sudo mount /mnt/lappy/home
${pkgs.sane-scripts.sync-music}/bin/sane-sync-music --compress --compat /mnt/servo/media/Music /mnt/lappy/home/Music "$@"
'');
};
sync.moby = {
# copy music from servo to moby
# can run this from any device that has ssh access to moby and servo
sync-moby = {
# copy music from the current device to moby
# TODO: should i actually sync from /mnt/servo-media/Music instead of the local drive?
type = "app";
program = builtins.toString (pkgs.writeShellScript "sync-to-moby" ''
sudo mount /mnt/moby/home
sudo mount /mnt/desko/home
${pkgs.rsync}/bin/rsync -arv --exclude servo-macros /mnt/moby/home/Pictures/ /mnt/desko/home/Pictures/moby/
# N.B.: limited by network/disk -> reduce job count to improve pause/resume behavior
${pkgs.sane-scripts.sync-music}/bin/sane-sync-music --compress --compat --jobs 4 /mnt/servo/media/Music /mnt/moby/home/Music "$@"
sudo mount /mnt/moby-home
${pkgs.sane-scripts.sync-music}/bin/sane-sync-music ~/Music /mnt/moby-home/Music
'');
};
sync-lappy = {
# copy music from servo to lappy
# can run this from any device that has ssh access to lappy
type = "app";
program = builtins.toString (pkgs.writeShellScript "sync-to-lappy" ''
sudo mount /mnt/lappy-home
${pkgs.sane-scripts.sync-music}/bin/sane-sync-music /mnt/servo-media/Music /mnt/lappy-home/Music
'');
};
@ -509,12 +396,12 @@
program = builtins.toString (pkgs.writeShellScript "check-all" ''
nix run '.#check.nur'
RC0=$?
nix run '.#check.hostConfigs'
nix run '.#check.host-configs'
RC1=$?
nix run '.#check.rescue'
RC2=$?
echo "nur: $RC0"
echo "hostConfigs: $RC1"
echo "host-configs: $RC1"
echo "rescue: $RC2"
exit $(($RC0 | $RC1 | $RC2))
'');
@ -531,63 +418,32 @@
--option restrict-eval true \
--option allow-import-from-derivation true \
--drv-path --show-trace \
-I nixpkgs=${nixpkgs-unpatched} \
-I nixpkgs-overlays=${./.}/hosts/common/nix/overlay \
-I nixpkgs=$(nix-instantiate --find-file nixpkgs) \
-I ../../ \
| tee # tee to prevent interactive mode
'');
};
check.hostConfigs = {
check.host-configs = {
type = "app";
program = let
checkHost = host: let
shellHost = pkgs.lib.replaceStrings [ "-" ] [ "_" ] host;
in ''
nix build -v '.#nixosConfigurations.${host}.config.system.build.toplevel' --out-link ./build/result-${host} -j2 "$@"
RC_${shellHost}=$?
checkHost = host: ''
nix build -v '.#nixosConfigurations.${host}.config.system.build.toplevel' --out-link ./result-${host} -j2 $@
RC_${host}=$?
'';
in builtins.toString (pkgs.writeShellScript
"check-host-configs"
''
# build minimally-usable hosts first, then their full image.
# this gives me a minimal image i can deploy or copy over, early.
${checkHost "lappy-min"}
${checkHost "moby-min"}
${checkHost "desko-light"}
${checkHost "moby-light"}
${checkHost "lappy-light"}
${checkHost "desko"}
${checkHost "lappy"}
${checkHost "servo"}
${checkHost "moby"}
${checkHost "rescue"}
# still want to build the -light variants first so as to avoid multiple simultaneous webkitgtk builds
${checkHost "desko-light-next"}
${checkHost "moby-light-next"}
${checkHost "desko-next"}
${checkHost "lappy-next"}
${checkHost "servo-next"}
${checkHost "moby-next"}
${checkHost "rescue-next"}
echo "desko: $RC_desko"
echo "lappy: $RC_lappy"
echo "servo: $RC_servo"
echo "moby: $RC_moby"
echo "rescue: $RC_rescue"
echo "desko-next: $RC_desko_next"
echo "lappy-next: $RC_lappy_next"
echo "servo-next: $RC_servo_next"
echo "moby-next: $RC_moby_next"
echo "rescue-next: $RC_rescue_next"
# i don't really care if the -next hosts fail. i build them mostly to keep the cache fresh/ready
exit $(($RC_desko | $RC_lappy | $RC_servo | $RC_moby | $RC_rescue))
''
);
@ -596,31 +452,7 @@
check.rescue = {
type = "app";
program = builtins.toString (pkgs.writeShellScript "check-rescue" ''
nix build -v '.#imgs.rescue' --out-link ./build/result-rescue-img -j2
'');
};
bench = {
type = "app";
program = builtins.toString (pkgs.writeShellScript "bench" ''
doBench() {
attrPath="$1"
shift
echo -n "benchmarking eval of '$attrPath'... "
/run/current-system/sw/bin/time -f "%e sec" -o /dev/stdout \
nix eval --no-eval-cache --quiet --raw ".#$attrPath" --apply 'result: if result != null then "" else "unexpected null"' $@ 2> /dev/null
}
if [ -n "$1" ]; then
doBench "$@"
else
doBench hostConfigs
doBench hostConfigs.lappy
doBench hostConfigs.lappy.sane.programs
doBench hostConfigs.lappy.sane.users.colin
doBench hostConfigs.lappy.sane.fs
doBench hostConfigs.lappy.environment.systemPackages
fi
nix build -v '.#imgs.rescue' --out-link ./result-rescue-img -j2
'');
};
};

View File

@ -4,33 +4,29 @@
./fs.nix
];
sane.services.trust-dns.asSystemResolver = false; # TEMPORARY: TODO: re-enable trust-dns
# sane.programs.devPkgs.enableFor.user.colin = true;
# sane.guest.enable = true;
# don't enable wifi by default: it messes with connectivity.
# systemd.services.iwd.enable = false;
# systemd.services.wpa_supplicant.enable = false;
# services.distccd.enable = true;
# sane.programs.distcc.enableFor.user.guest = true;
sops.secrets.colin-passwd.neededForUsers = true;
sane.roles.build-machine.enable = true;
sane.roles.ac = true;
sane.roles.client = true;
sane.roles.dev-machine = true;
sane.roles.pc = true;
sane.services.wg-home.enable = true;
sane.services.wg-home.ip = config.sane.hosts.by-name."desko".wg-home.ip;
sane.services.duplicity.enable = true;
sane.services.nixserve.secretKeyFile = config.sops.secrets.nix_serve_privkey.path;
sane.nixcache.remote-builders.desko = false;
sane.programs.cups.enableFor.user.colin = true;
sane.programs.sway.enableFor.user.colin = true;
sane.gui.sway.enable = true;
sane.programs.iphoneUtils.enableFor.user.colin = true;
sane.programs.steam.enableFor.user.colin = true;
sane.programs."gnome.geary".config.autostart = true;
sane.programs.signal-desktop.config.autostart = true;
sane.programs.guiApps.suggestedPrograms = [ "desktopGuiApps" ];
sane.programs.consoleUtils.suggestedPrograms = [ "consoleMediaUtils" "desktopConsoleUtils" ];
# sane.programs.devPkgs.enableFor.user.colin = true;
boot.loader.efi.canTouchEfiVariables = false;
sane.image.extraBootFiles = [ pkgs.bootpart-uefi-x86_64 ];
@ -38,6 +34,10 @@
# needed to use libimobiledevice/ifuse, for iphone sync
services.usbmuxd.enable = true;
# don't enable wifi by default: it messes with connectivity.
systemd.services.iwd.enable = false;
systemd.services.wpa_supplicant.enable = false;
# default config: https://man.archlinux.org/man/snapper-configs.5
# defaults to something like:
# - hourly snapshots

View File

@ -1,12 +1,14 @@
{ ... }:
{
sane.persist.root-on-tmpfs = true;
# increase /tmp space (defaults to 50% of RAM) for building large nix things.
# a cross-compiled kernel, particularly, will easily use 30+GB of tmp
fileSystems."/tmp".options = [ "size=64G" ];
fileSystems."/nix" = {
device = "/dev/disk/by-uuid/845d85bf-761d-431b-a406-e6f20909154f";
# device = "/dev/disk/by-uuid/985a0a32-da52-4043-9df7-615adec2e4ff";
device = "/dev/disk/by-uuid/0ab0770b-7734-4167-88d9-6e4e20bb2a56";
fsType = "btrfs";
options = [
"compress=zstd"
@ -15,7 +17,8 @@
};
fileSystems."/boot" = {
device = "/dev/disk/by-uuid/5049-9AFD";
# device = "/dev/disk/by-uuid/CAA7-E7D2";
device = "/dev/disk/by-uuid/41B6-BAEF";
fsType = "vfat";
};
}

View File

@ -2,24 +2,24 @@
{
imports = [
./fs.nix
./polyfill.nix
];
sane.roles.client = true;
sane.roles.dev-machine = true;
sane.roles.pc = true;
sane.services.wg-home.enable = true;
sane.services.wg-home.ip = config.sane.hosts.by-name."lappy".wg-home.ip;
# sane.guest.enable = true;
sane.gui.sway.enable = true;
boot.loader.efi.canTouchEfiVariables = false;
sane.image.extraBootFiles = [ pkgs.bootpart-uefi-x86_64 ];
sane.programs.cups.enableFor.user.colin = true;
sane.programs.stepmania.enableFor.user.colin = true;
sane.programs.sway.enableFor.user.colin = true;
sane.programs."gnome.geary".config.autostart = true;
sane.programs.signal-desktop.config.autostart = true;
sane.programs.guiApps.suggestedPrograms = [
"desktopGuiApps"
"stepmania"
];
sane.programs.consoleUtils.suggestedPrograms = [ "consoleMediaUtils" "desktopConsoleUtils" ];
sops.secrets.colin-passwd.neededForUsers = true;

View File

@ -1,6 +1,8 @@
{ ... }:
{
sane.persist.root-on-tmpfs = true;
fileSystems."/nix" = {
device = "/dev/disk/by-uuid/75230e56-2c69-4e41-b03e-68475f119980";
fsType = "btrfs";
@ -14,4 +16,24 @@
device = "/dev/disk/by-uuid/BD79-D6BB";
fsType = "vfat";
};
# fileSystems."/nix" = {
# device = "/dev/disk/by-uuid/5a7fa69c-9394-8144-a74c-6726048b129f";
# fsType = "btrfs";
# };
# fileSystems."/boot" = {
# device = "/dev/disk/by-uuid/4302-1685";
# fsType = "vfat";
# };
# fileSystems."/" = {
# device = "none";
# fsType = "tmpfs";
# options = [
# "mode=755"
# "size=1G"
# "defaults"
# ];
# };
}

polyfill.nix

@ -0,0 +1,43 @@
# doesn't actually *enable* anything,
# but sets up any modules such that if they *were* enabled, they'll act as expected.
{ pkgs, ... }:
{
sane.gui.sxmo = {
greeter = "greetd-sway-gtkgreet";
noidle = true; #< power button requires 1s hold, which makes it impractical to be dealing with.
settings = {
# XXX: make sure the user is part of the `input` group!
SXMO_LISGD_INPUT_DEVICE = "/dev/input/by-id/usb-Wacom_Co._Ltd._Pen_and_multitouch_sensor-event-if00";
# these identifiers are from `swaymsg -t get_inputs`
SXMO_VOLUME_BUTTON = "1:1:AT_Translated_Set_2_keyboard";
# SXMO_VOLUME_BUTTON = "none";
# N.B.: thinkpad's power button requires a full second press to do anything
SXMO_POWER_BUTTON = "0:1:Power_Button";
# SXMO_POWER_BUTTON = "none";
SXMO_DISABLE_LEDS = "1";
SXMO_UNLOCK_IDLE_TIME = "120"; # default
# sxmo tries to determine device type from /proc/device-tree/compatible,
# but that doesn't seem to exist on NixOS? (or maybe it just doesn't exist
# on non-aarch64 builds).
# the device type informs (at least):
# - SXMO_WIFI_MODULE
# - SXMO_RTW_SCAN_INTERVAL
# - SXMO_SYS_FILES
# - SXMO_TOUCHSCREEN_ID
# - SXMO_MONITOR
# - SXMO_ALSA_CONTROL_NAME
# - SXMO_SWAY_SCALE
# see <repo:mil/sxmo-utils:scripts/deviceprofiles>
# SXMO_DEVICE_NAME = "pine64,pinephone-1.2";
# if sxmo doesn't know the device, it can't decide whether to use one_button or three_button mode
# and so it just wouldn't handle any button inputs (sxmo_hook_inputhandler.sh not on path)
SXMO_DEVICE_NAME = "three_button_touchscreen";
};
package = (pkgs.sxmo-utils-latest.override { preferSystemd = true; }).overrideAttrs (base: {
postPatch = (base.postPatch or "") + ''
# after volume-button navigation mode, restore full keyboard functionality
cp ${./xkb_mobile_normal_buttons} ./configs/xkb/xkb_mobile_normal_buttons
'';
});
};
}

View File

@ -1,22 +1,12 @@
# tow-boot: <https://tow-boot.org>
# docs (pinephone specific): <https://github.com/Tow-Boot/Tow-Boot/tree/development/boards/pine64-pinephoneA64>
# LED and button behavior is defined here: <https://github.com/Tow-Boot/Tow-Boot/blob/development/modules/tow-boot/phone-ux.nix>
# - hold VOLDOWN: enter recovery mode
# - LED will turn aqua instead of yellow
# - recovery mode would ordinarily allow a selection of entries, but for pinephone i guess it doesn't do anything?
# - hold VOLUP: force it to load the OS from eMMC?
# - LED will turn blue instead of yellow
# boot LEDs:
# - yellow = entered tow-boot
# - 10 red flashes => poweroff means tow-boot couldn't boot into the next stage (i.e. distroboot)
# - distroboot: <https://source.denx.de/u-boot/u-boot/-/blob/v2022.04/doc/develop/distro.rst>)
{ config, pkgs, ... }:
{
# we need space in the GPT header to place tow-boot.
# only actually need 1 MB, but better to over-allocate than under-allocate
sane.image.extraGPTPadding = 16 * 1024 * 1024;
sane.image.firstPartGap = 0;
sane.image.installBootloader = ''
dd if=${pkgs.tow-boot-pinephone}/Tow-Boot.noenv.bin of=$out/nixos.img bs=1024 seek=8 conv=notrunc
system.build.img = pkgs.runCommand "nixos_full-disk-image.img" {} ''
cp -v ${config.system.build.img-without-firmware}/nixos.img $out
chmod +w $out
dd if=${pkgs.tow-boot-pinephone}/Tow-Boot.noenv.bin of=$out bs=1024 seek=8 conv=notrunc
'';
}

View File

@ -20,7 +20,6 @@
];
sane.roles.client = true;
sane.roles.handheld = true;
sane.zsh.showDeadlines = false; # unlikely to act on them when in shell
sane.services.wg-home.enable = true;
sane.services.wg-home.ip = config.sane.hosts.by-name."moby".wg-home.ip;
@ -28,21 +27,20 @@
# XXX colin: phosh doesn't work well with passwordless login,
# so set this more reliable default password should anything go wrong
users.users.colin.initialPassword = "147147";
# services.getty.autologinUser = "root"; # allows for emergency maintenance?
services.getty.autologinUser = "root"; # allows for emergency maintenance?
sops.secrets.colin-passwd.neededForUsers = true;
# sane.gui.sxmo.enable = true;
sane.programs.sway.enableFor.user.colin = true;
sane.programs.swaylock.enableFor.user.colin = false; #< not usable on touch
sane.programs.schlock.enableFor.user.colin = true;
sane.programs.swayidle.config.actions.screenoff.delay = 300;
sane.programs.swayidle.config.actions.screenoff.enable = true;
sane.programs.sane-input-handler.enableFor.user.colin = true;
sane.gui.sxmo.enable = true;
sane.programs.guiApps.suggestedPrograms = [ "handheldGuiApps" ];
# sane.programs.consoleUtils.enableFor.user.colin = false;
# sane.programs.guiApps.enableFor.user.colin = false;
sane.programs.blueberry.enableFor.user.colin = false; # bluetooth manager: doesn't cross compile!
sane.programs.fcitx5.enableFor.user.colin = false; # does not cross compile
sane.programs.mercurial.enableFor.user.colin = false; # does not cross compile
sane.programs.nvme-cli.enableFor.system = false; # does not cross compile (libhugetlbfs)
sane.programs.sequoia.enableFor.user.colin = false;
sane.programs.tuiApps.enableFor.user.colin = false; # visidata, others, don't compile well
# disabled for faster deploys
sane.programs.soundconverter.enableFor.user.colin = false;
# enabled for easier debugging
sane.programs.eg25-control.enableFor.user.colin = true;
@ -50,16 +48,26 @@
# sane.programs.ntfy-sh.config.autostart = true;
sane.programs.dino.config.autostart = true;
# sane.programs.signal-desktop.config.autostart = true; # TODO: enable once electron stops derping.
# sane.programs."gnome.geary".config.autostart = true;
# sane.programs.calls.config.autostart = true;
sane.programs.mpv.config.vo = "wlshm"; #< see hosts/common/programs/mpv.nix for details
sane.programs.firefox.mime.priority = 300; # prefer other browsers when possible
# HACK/TODO: make `programs.P.env.VAR` behave according to `mime.priority`
sane.programs.firefox.env = lib.mkForce {};
sane.programs.epiphany.env.BROWSER = "epiphany";
sane.programs.pipewire.config = {
# tune so Dino doesn't drop audio
sane.programs.firefox.enableFor.user.colin = false; # use epiphany instead
# note the .conf.d approach: using ~/.config/pipewire/pipewire.conf directly breaks all audio,
# presumably because that deletes the defaults entirely whereas the .conf.d approach selectively overrides defaults
sane.user.fs.".config/pipewire/pipewire.conf.d/10-fix-dino-mic-cutout.conf".symlink.text = ''
# config docs: <https://gitlab.freedesktop.org/pipewire/pipewire/-/wikis/Config-PipeWire#properties>
# useful to run `pw-top` to see that these settings are actually having effect,
# and `pw-metadata` to see if any settings conflict (e.g. max-quantum < min-quantum)
#
# restart pipewire after editing these files:
# - `systemctl --user restart pipewire`
# - pipewire users will likely stop outputting audio until they are also restarted
#
# there's seemingly two buffers for the mic (see: <https://gitlab.freedesktop.org/pipewire/pipewire/-/wikis/FAQ#pipewire-buffering-explained>)
# 1. Pipewire buffering out of the driver and into its own buffer.
# 2. Pipewire buffering into Dino.
@ -70,9 +78,11 @@
# `pw-metadata -n settings 0 clock.force-quantum 1024` reduces to about 1 error per second.
# `pw-metadata -n settings 0 clock.force-quantum 2048` reduces to 1 error every < 10s.
# pipewire default config includes `clock.power-of-two-quantum = true`
min-quantum = 2048;
max-quantum = 8192;
};
context.properties = {
default.clock.min-quantum = 2048
default.clock.max-quantum = 8192
}
'';
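# (illustrative) after `systemctl --user restart pipewire`, `pw-top` should report a quantum (QUANT)
# of at least 2048 for the affected streams, confirming the override above actually took effect.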
boot.loader.efi.canTouchEfiVariables = false;
# /boot space is at a premium. default was 20.
@ -112,6 +122,44 @@
# enable rotation sensor
hardware.sensor.iio.enable = true;
# inject specialized alsa configs via the environment.
# specifically, this gets the pinephone headphones & internal earpiece working.
# see pkgs/patched/alsa-ucm-conf for more info.
environment.variables.ALSA_CONFIG_UCM2 = "/run/current-system/sw/share/alsa/ucm2";
environment.pathsToLink = [ "/share/alsa/ucm2" ];
environment.systemPackages = [
(pkgs.alsa-ucm-conf-sane.override {
# internal speaker has a tendency to break :(
preferEarpiece = true;
})
];
systemd = let
ucm-env = config.environment.variables.ALSA_CONFIG_UCM2;
in {
# cribbed from <repo:nixos/mobile-nixos:modules/quirks/audio.nix>
# pipewire
user.services.pipewire.environment.ALSA_CONFIG_UCM2 = ucm-env;
user.services.pipewire-pulse.environment.ALSA_CONFIG_UCM2 = ucm-env;
user.services.wireplumber.environment.ALSA_CONFIG_UCM2 = ucm-env;
services.pipewire.environment.ALSA_CONFIG_UCM2 = ucm-env;
services.pipewire-pulse.environment.ALSA_CONFIG_UCM2 = ucm-env;
services.wireplumber.environment.ALSA_CONFIG_UCM2 = ucm-env;
# pulseaudio
# user.services.pulseaudio.environment.ALSA_CONFIG_UCM2 = ucm-env;
# services.pulseaudio.environment.ALSA_CONFIG_UCM2 = ucm-env;
# TODO: move elsewhere...
services.ModemManager.serviceConfig = {
# N.B.: the extra "" in ExecStart serves to force upstream ExecStart to be ignored
ExecStart = [ "" "${pkgs.modemmanager}/bin/ModemManager --debug" ];
# --debug sets DEBUG level logging: so reset
ExecStartPost = [ "${pkgs.modemmanager}/bin/mmcli --set-logging=INFO" ];
};
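# (illustrative) the override can be checked with `systemctl cat ModemManager.service`:
# the rendered unit should show an empty `ExecStart=` line (clearing the upstream command)
# followed by the `ExecStart=...ModemManager --debug` invocation defined above.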
};
services.udev.extraRules = let
chmod = "${pkgs.coreutils}/bin/chmod";
chown = "${pkgs.coreutils}/bin/chown";

View File

@ -1,6 +1,7 @@
{ ... }:
{
sane.persist.root-on-tmpfs = true;
fileSystems."/nix" = {
device = "/dev/disk/by-uuid/1f1271f8-53ce-4081-8a29-60a4a6b5d6f9";
fsType = "btrfs";

View File

@ -64,5 +64,6 @@
"dialout" # TODO: figure out if dialout is required. that's for /dev/ttyUSB1, but geoclue probably doesn't read that?
];
sane.services.eg25-control.enable = true;
sane.programs.where-am-i.enableFor.user.colin = true;
}

View File

@ -74,7 +74,6 @@ in
# without this some GUI apps fail: `DRM_IOCTL_MODE_CREATE_DUMB failed: Cannot allocate memory`
# this is because they can't allocate enough video ram.
# see related nixpkgs issue: <https://github.com/NixOS/nixpkgs/issues/260222>
# TODO(2023/12/03): remove once mesa 23.3.1 lands: <https://github.com/NixOS/nixpkgs/pull/265740>
#
# the default CMA seems to be 32M.
# i was running fine with 256MB from 2022/07-ish through 2022/12-ish, but then the phone quit reliably coming back from sleep (phosh): maybe a memory leak?
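# (illustrative) the CMA pool size is configured on the kernel command line, e.g. something like:
#   boot.kernelParams = [ "cma=256M" ];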
@ -85,7 +84,6 @@ in
"lima.sched_timeout_ms=2000"
];
# services.xserver.displayManager.job.preStart = ensureHWReady;
# systemd.services.greetd.preStart = ensureHWReady;
systemd.services.unl0kr.preStart = ensureHWReady;
services.xserver.displayManager.job.preStart = ensureHWReady;
systemd.services.greetd.preStart = ensureHWReady;
}

View File

@ -24,22 +24,73 @@
backlight = "backlight"; # /sys/class/backlight/*backlight*/brightness
};
sane.programs.alacritty.config.fontSize = 9;
sane.gui.sxmo = {
nogesture = true;
settings = {
### hardware: touch screen
SXMO_LISGD_INPUT_DEVICE = "/dev/input/by-path/platform-1c2ac00.i2c-event";
# vol and power are detected correctly by upstream
sane.programs.sway.config = {
font = "pango:monospace 10";
mod = "Mod1"; # prefer Alt
workspace_layout = "tabbed";
};
### preferences
DEFAULT_COUNTRY = "US";
sane.programs.waybar.config = {
fontSize = 14;
height = 26;
persistWorkspaces = [ "1" "2" "3" "4" "5" ];
modules.media = false;
modules.network = false;
modules.perf = false;
modules.windowTitle = false;
# TODO: show modem state
SXMO_AUTOROTATE = "1"; # enable auto-rotation at launch. has no meaning in stock/upstream sxmo-utils
# BEMENU lines (wayland DMENU):
# - camera is 9th entry
# - flashlight is 10th entry
# - config is 14th entry. inside that:
# - autorotate is 11th entry
# - system menu is 19th entry
# - close is 20th entry
# - power is 15th entry
# - close is 16th entry
SXMO_BEMENU_LANDSCAPE_LINES = "11"; # default 8
SXMO_BEMENU_PORTRAIT_LINES = "16"; # default 16
SXMO_LOCK_IDLE_TIME = "15"; # how long between screenoff -> lock -> back to screenoff (default: 8)
# gravity: how far to tilt the device before the screen rotates
# for a given setting, normal <-> invert requires more movement than left <-> right
# i.e. the setting doesn't feel completely symmetric
# SXMO_ROTATION_GRAVITY default is 16374
# SXMO_ROTATION_GRAVITY = "12800"; # uncomfortably high
# SXMO_ROTATION_GRAVITY = "12500"; # kinda uncomfortable when walking
SXMO_ROTATION_GRAVITY = "12000";
SXMO_SCREENSHOT_DIR = "/home/colin/Pictures"; # default: "$HOME"
# sway/wayland scaling:
# - conflicting info out there on how scaling actually works
# at the least, for things where it matters (mpv), it seems like scale settings have 0 effect on perf
# ways to enforce scaling:
# - <https://wiki.archlinux.org/title/HiDPI>
# - `swaymsg -- output DSI-1 scale 2.0` (scales everything)
# - `dconf write /org/gnome/desktop/interface/text-scaling-factor 2.0` (scales ONLY TEXT)
# - `GDK_DPI_SCALE=2.0` (scales ONLY TEXT)
#
# application notes:
# - cozy: in landscape, playback position is not visible unless scale <= 1.7
# - if in a tab, then scale 1.6 is the max
# SXMO_SWAY_SCALE = "1.5"; # hard to press gPodder icons
SXMO_SWAY_SCALE = "1.6";
# SXMO_SWAY_SCALE = "1.8";
# SXMO_SWAY_SCALE = "2";
SXMO_WORKSPACE_WRAPPING = "5"; # how many workspaces. default: 4
# wvkbd layers:
# - full
# - landscape
# - special (e.g. coding symbols like ~)
# - emoji
# - nav
# - simple (like landscape, but no parens/tab/etc; even fewer chars)
# - simplegrid (simple, but grid layout)
# - dialer (digits)
# - cyrillic
# - arabic
# - persian
# - greek
# - georgian
WVKBD_LANDSCAPE_LAYERS = "landscape,special,emoji";
WVKBD_LAYERS = "full,special,emoji";
};
};
}

View File

@ -6,7 +6,7 @@
boot.loader.efi.canTouchEfiVariables = false;
sane.image.extraBootFiles = [ pkgs.bootpart-uefi-x86_64 ];
sane.persist.enable = false; # what we mean here is that the image is immutable; `/` is still tmpfs.
sane.persist.enable = false;
sane.nixcache.enable = false; # don't want to be calling out to dead machines that we're *trying* to rescue
# auto-login at shell

View File

@ -1,7 +1,7 @@
{ ... }:
{
fileSystems."/nix" = {
fileSystems."/" = {
device = "/dev/disk/by-uuid/44445555-6666-7777-8888-999900001111";
fsType = "ext4";
};

View File

@ -14,22 +14,22 @@
signaldctl.enableFor.user.colin = true;
};
sane.roles.ac = true;
sane.roles.build-machine.enable = true;
sane.roles.build-machine.emulation = false;
sane.zsh.showDeadlines = false; # ~/knowledge doesn't always exist
sane.programs.consoleUtils.suggestedPrograms = [
"consoleMediaUtils" # notably, for go2tv / casting
"pcConsoleUtils"
"desktopConsoleUtils"
"sane-scripts.stop-all-servo"
];
sane.services.dyn-dns.enable = true;
sane.services.trust-dns.asSystemResolver = false; # TODO: enable once it's all working well
sane.services.wg-home.enable = true;
sane.services.wg-home.visibleToWan = true;
sane.services.wg-home.forwardToWan = true;
sane.services.wg-home.routeThroughServo = false;
sane.services.wg-home.ip = config.sane.hosts.by-name."servo".wg-home.ip;
sane.nixcache.remote-builders.desko = false;
sane.nixcache.remote-builders.servo = false;
sane.nixcache.substituters.servo = false;
sane.nixcache.substituters.desko = false;
# sane.services.duplicity.enable = true; # TODO: re-enable after HW upgrade
# automatically log in at the virtual consoles.

View File

@ -1,67 +1,7 @@
# zfs docs:
# - <https://nixos.wiki/wiki/ZFS>
# - <repo:nixos/nixpkgs:nixos/modules/tasks/filesystems/zfs.nix>
#
# zfs check health: `zpool status`
#
# zfs pool creation (requires `boot.supportedFilesystems = [ "zfs" ];`
# - 1. identify disk IDs: `ls -l /dev/disk/by-id`
# - 2. pool these disks: `zpool create -f -m legacy pool raidz ata-ST4000VN008-2DR166_WDH0VB45 ata-ST4000VN008-2DR166_WDH17616 ata-ST4000VN008-2DR166_WDH0VC8Q ata-ST4000VN008-2DR166_WDH17680`
# - legacy documented: <https://superuser.com/questions/790036/what-is-a-zfs-legacy-mount-point>
# - 3. enable acl support: `zfs set acltype=posixacl pool`
#
# import pools: `zpool import pool`
# show zfs datasets: `zfs list` (will be empty if haven't imported)
# show zfs properties (e.g. compression): `zfs get all pool`
# set zfs properties: `zfs set compression=on pool`
{ ... }:
{
# hostId: not used for anything except zfs guardrail?
# [hex(ord(x)) for x in 'serv']
networking.hostId = "73657276";
boot.supportedFilesystems = [ "zfs" ];
# boot.zfs.enabled = true;
boot.zfs.forceImportRoot = false;
# scrub all zfs pools weekly:
services.zfs.autoScrub.enable = true;
boot.extraModprobeConfig = ''
### zfs_arc_max tunable:
# ZFS likes to use half the ram for its own cache and let the kernel push everything else to swap.
# so, reduce its cache size
# see: <https://askubuntu.com/a/1290387>
# see: <https://serverfault.com/a/1119083>
# see: <https://openzfs.github.io/openzfs-docs/Performance%20and%20Tuning/Module%20Parameters.html#zfs-arc-max>
# for all tunables, see: `man 4 zfs`
# to update these parameters without rebooting:
# - `echo '4294967296' | sane-sudo-redirect /sys/module/zfs/parameters/zfs_arc_max`
### zfs_bclone_enabled tunable
# this allows `cp --reflink=always FOO BAR` to work. i.e. shallow copies.
# it's unstable as of 2.2.3. led to *actual* corruption in 2.2.1, but hopefully better by now.
# - <https://github.com/openzfs/zfs/issues/405>
# note that `du -h` won't *always* show the reduced size for reflink'd files (?).
# `zpool get all | grep clone` seems to be the way to *actually* see how much data is being deduped
options zfs zfs_arc_max=4294967296 zfs_bclone_enabled=1
'';
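# (illustrative) verify the applied value after boot with e.g. `cat /sys/module/zfs/parameters/zfs_arc_max`.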
# to be able to mount the pool like this, make sure to tell zfs to NOT manage it itself.
# otherwise local-fs.target will FAIL and you will be dropped into a rescue shell.
# - `zfs set mountpoint=legacy pool`
# if done correctly, the pool can be mounted before this `fileSystems` entry is created:
# - `sudo mount -t zfs pool /mnt/persist/pool`
fileSystems."/mnt/pool" = {
device = "pool";
fsType = "zfs";
options = [ "acl" ]; #< not sure if this `acl` flag is actually necessary. it mounts without it.
};
# services.zfs.zed = ... # TODO: zfs can send me emails when disks fail
sane.programs.sysadminUtils.suggestedPrograms = [ "zfs" ];
sane.persist.stores."ext" = {
origin = "/mnt/pool/persist";
storeDescription = "external HDD storage";
defaultMethod = "bind"; #< TODO: change to "symlink"?
};
sane.persist.root-on-tmpfs = true;
# increase /tmp space (defaults to 50% of RAM) for building large nix things.
# even the stock `nixpkgs.linux` consumes > 16 GB of tmp
fileSystems."/tmp".options = [ "size=32G" ];
@ -81,7 +21,7 @@
};
# slow, external storage (for archiving, etc)
fileSystems."/mnt/usb-hdd" = {
fileSystems."/mnt/persist/ext" = {
device = "/dev/disk/by-uuid/aa272cff-0fcc-498e-a4cb-0d95fb60631b";
fsType = "btrfs";
options = [
@ -89,47 +29,66 @@
"defaults"
];
};
sane.fs."/mnt/usb-hdd".mount = {};
# FIRST TIME SETUP FOR MEDIA DIRECTORY:
# - set the setgid bit on directories: `sudo find /var/media -type d -exec chmod g+s {} +`
# - this ensures new files/dirs inherit the group of their parent dir (instead of the primary group of the user who creates them)
# - ensure everything under /var/media is mounted with `-o acl`, to support acls
# - ensure all files are rwx by group: `setfacl --recursive --modify d:g::rwx /var/media`
# - alternatively, `d:g:media:rwx` to grant `media` group even when file has a different owner, but that's a bit complex
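# - (illustrative) after setup, `ls -ld /var/media` should show the setgid bit and group, e.g. `drwxrwsr-x ... colin media`,
#   and `getfacl /var/media` should include a default entry like `default:group::rwx`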
sane.persist.sys.byStore.ext = [{
path = "/var/media";
user = "colin";
group = "media";
mode = "0775";
}];
sane.fs."/var/media/archive".dir = {};
# this is file.text instead of symlink.text so that it may be read over a remote mount (where consumers might not have any /nix/store/.../README.md path)
sane.fs."/var/media/archive/README.md".file.text = ''
sane.persist.stores."ext" = {
origin = "/mnt/persist/ext/persist";
storeDescription = "external HDD storage";
};
sane.fs."/mnt/persist/ext".mount = {};
sane.persist.sys.byStore.plaintext = [
# TODO: this is overly broad; only need media and share directories to be persisted
{ user = "colin"; group = "users"; path = "/var/lib/uninsane"; }
];
# force some problematic directories to always get correct permissions:
sane.fs."/var/lib/uninsane/media".dir.acl = {
user = "colin"; group = "media"; mode = "0775";
};
sane.fs."/var/lib/uninsane/media/archive".dir = {};
sane.fs."/var/lib/uninsane/media/archive/README.md".file.text = ''
this directory is for media i wish to remove from my library,
but keep for a short time in case i reverse my decision.
treat it like a system trash can.
'';
sane.fs."/var/media/Books".dir = {};
sane.fs."/var/media/Books/Audiobooks".dir = {};
sane.fs."/var/media/Books/Books".dir = {};
sane.fs."/var/media/Books/Visual".dir = {};
sane.fs."/var/media/collections".dir = {};
# sane.fs."/var/media/datasets".dir = {};
sane.fs."/var/media/freeleech".dir = {};
sane.fs."/var/media/Music".dir = {};
sane.fs."/var/media/Pictures".dir = {};
sane.fs."/var/media/Videos".dir = {};
sane.fs."/var/media/Videos/Film".dir = {};
sane.fs."/var/media/Videos/Shows".dir = {};
sane.fs."/var/media/Videos/Talks".dir = {};
# this is file.text instead of symlink.text so that it may be read over a remote mount (where consumers might not have any /nix/store/.../README.md path)
sane.fs."/var/lib/uninsane/media/Books".dir = {};
sane.fs."/var/lib/uninsane/media/Books/Audiobooks".dir = {};
sane.fs."/var/lib/uninsane/media/Books/Books".dir = {};
sane.fs."/var/lib/uninsane/media/Books/Visual".dir = {};
sane.fs."/var/lib/uninsane/media/collections".dir = {};
sane.fs."/var/lib/uninsane/media/datasets".dir = {};
sane.fs."/var/lib/uninsane/media/freeleech".dir = {};
sane.fs."/var/lib/uninsane/media/Music".dir = {};
sane.fs."/var/lib/uninsane/media/Pictures".dir = {};
sane.fs."/var/lib/uninsane/media/Videos".dir = {};
sane.fs."/var/lib/uninsane/media/Videos/Film".dir = {};
sane.fs."/var/lib/uninsane/media/Videos/Shows".dir = {};
sane.fs."/var/lib/uninsane/media/Videos/Talks".dir = {};
sane.fs."/var/lib/uninsane/datasets/README.md".file.text = ''
this directory may seem redundant with ../media/datasets. it isn't.
this directory exists on SSD, allowing for speedy access to specific datasets when necessary.
the contents should be a subset of what's in ../media/datasets.
'';
# make sure large media is stored to the HDD
sane.persist.sys.ext = [
{
user = "colin";
group = "users";
mode = "0777";
path = "/var/lib/uninsane/media/Videos";
}
{
user = "colin";
group = "users";
mode = "0777";
path = "/var/lib/uninsane/media/freeleech";
}
{
user = "colin";
group = "users";
mode = "0777";
path = "/var/lib/uninsane/media/datasets";
}
];
# btrfs doesn't easily support swapfiles
# swapDevices = [

View File

@ -24,12 +24,61 @@ in
sane.ports.openFirewall = true;
sane.ports.openUpnp = true;
# view refused packets with: `sudo journalctl -k`
# networking.firewall.logRefusedPackets = true;
# The global useDHCP flag is deprecated, therefore explicitly set to false here.
# Per-interface useDHCP will be mandatory in the future, so this generated config
# replicates the default behaviour.
networking.useDHCP = false;
networking.interfaces.eth0.useDHCP = true;
# XXX colin: probably don't need this. wlan0 won't be populated unless i touch a value in networking.interfaces.wlan0
networking.wireless.enable = false;
# this is needed to forward packets from the VPN to the host
boot.kernel.sysctl."net.ipv4.ip_forward" = 1;
# unless we add interface-specific settings for each VPN, we have to define nameservers globally.
# networking.nameservers = [
# "1.1.1.1"
# "9.9.9.9"
# ];
# use systemd's stub resolver.
# /etc/resolv.conf isn't sophisticated enough to use different servers per net namespace (or link).
# instead, running the stub resolver on a known address in the root ns lets us rewrite packets
# in the ovnps namespace to use the provider's DNS resolvers.
# a weakness is we can only query 1 NS at a time (unless we were to clone the packets?)
# there also seems to be some cache somewhere that's shared between the two namespaces.
# i think this is a libc thing. might need to leverage proper cgroups to _really_ kill it.
# - getent ahostsv4 www.google.com
# - try fix: <https://serverfault.com/questions/765989/connect-to-3rd-party-vpn-server-but-dont-use-it-as-the-default-route/766290#766290>
services.resolved.enable = true;
# without DNSSEC:
# - dig matrix.org => works
# - curl https://matrix.org => works
# with default DNSSEC:
# - dig matrix.org => works
# - curl https://matrix.org => fails
# i don't know why. this might somehow be interfering with the DNS server running on this device (trust-dns)
services.resolved.dnssec = "false";
networking.nameservers = [
# use systemd-resolved resolver
# full resolver (which understands /etc/hosts) lives on 127.0.0.53
# stub resolver (just forwards upstream) lives on 127.0.0.54
"127.0.0.53"
];
# nscd -- the Name Service Caching Daemon -- caches DNS query responses
# in a way that's unaware of my VPN routing, so routes are frequently poor against
# services which advertise different IPs based on geolocation.
# nscd claims to be usable without a cache, but in practice i can't get it to not cache!
# nsncd is the Name Service NON-Caching Daemon. it's a drop-in that doesn't cache;
# this is OK on the host -- because systemd-resolved caches. it's probably sub-optimal
# in the netns and we query upstream DNS more often than needed. hm.
# TODO: run a separate recursive resolver in each namespace.
services.nscd.enableNsncd = true;
# services.resolved.extraConfig = ''
# # docs: `man resolved.conf`
# # DNS servers to use via the `wg-ovpns` interface.
@ -87,7 +136,7 @@ in
}
];
preSetup = ''
${ip} netns add ovpns || (test -e /run/netns/ovpns && echo "ovpns already exists")
${ip} netns add ovpns || echo "ovpns already exists"
'';
postShutdown = ''
${in-ns} ip link del ovpns-veth-b || echo "couldn't delete ovpns-veth-b"

View File

@ -13,7 +13,7 @@ in
lib.mkIf false
{
sane.persist.sys.byStore.plaintext = [
{ inherit user group; mode = "0700"; path = svc-dir; method = "bind"; }
{ inherit user group; mode = "0700"; path = svc-dir; }
];
services.calibre-web.enable = true;
@ -24,7 +24,7 @@ lib.mkIf false
# services.calibre-web.options.calibreLibrary = svc-dir;
services.nginx.virtualHosts."calibre.uninsane.org" = {
forceSSL = true;
addSSL = true;
enableACME = true;
locations."/" = {
proxyPass = "http://${ip}:${builtins.toString port}";

View File

@ -24,64 +24,50 @@
# that is NOT the case when the STUN server and client A are on the same LAN
# even if client A contacts the STUN server via its WAN address with port reflection enabled.
# hence, there's no obvious way to put the STUN server on the same LAN as either client and expect the rest to work.
# - there was an old version which *half worked*:
# - run the turn server in the root namespace.
# - bind the turn server to the veth connecting it to the VPN namespace (so it sends outgoing traffic to the right place).
# - NAT the turn port range from VPN into root namespace (so it receives incoming traffic).
# - this approach would fail the prosody conversations.im check, but i didn't notice *obvious* call routing errors.
#
# debugging:
# - log messages like 'usage: realm=<turn.uninsane.org>, username=<1715915193>, rp=14, rb=1516, sp=8, sb=684'
# - rp = received packets
# - rb = received bytes
# - sp = sent packets
# - sb = sent bytes
{ lib, ... }:
let
# TURN port range (inclusive).
# default coturn behavior is to use the upper quarter of all ports. i.e. 49152 - 65535.
# i believe TURN allocations expire after either 5 or 10 minutes of inactivity.
turnPortLow = 49152; # 49152 = 0xc000
turnPortHigh = turnPortLow + 256;
# TODO: this range could be larger, but right now that's costly because each element is its own UPnP forward
# TURN port range (inclusive)
turnPortLow = 49152;
turnPortHigh = 49167;
turnPortRange = lib.range turnPortLow turnPortHigh;
in
{
# the port definitions are only needed if running in the root net namespace
# sane.ports.ports = lib.mkMerge ([
# {
# "3478" = {
# # this is the "control" port.
# # i.e. no client data is forwarded through it, but it's where clients request tunnels.
# protocol = [ "tcp" "udp" ];
# # visibleTo.lan = true;
# # visibleTo.wan = true;
# visibleTo.ovpn = true; # forward traffic from the VPN to the root NS
# description = "colin-stun-turn";
# };
# "5349" = {
# # the other port 3478 also supports TLS/DTLS, but presumably clients wanting TLS will default to 5349
# protocol = [ "tcp" ];
# # visibleTo.lan = true;
# # visibleTo.wan = true;
# visibleTo.ovpn = true;
# description = "colin-stun-turn-over-tls";
# };
# }
# ] ++ (builtins.map
# (port: {
# "${builtins.toString port}" = let
# count = port - turnPortLow + 1;
# numPorts = turnPortHigh - turnPortLow + 1;
# in {
# protocol = [ "tcp" "udp" ];
# # visibleTo.lan = true;
# # visibleTo.wan = true;
# visibleTo.ovpn = true;
# description = "colin-turn-${builtins.toString count}-of-${builtins.toString numPorts}";
# };
# })
# turnPortRange
# ));
sane.ports.ports = lib.mkMerge ([
{
"3478" = {
# this is the "control" port.
# i.e. no client data is forwarded through it, but it's where clients request tunnels.
protocol = [ "tcp" "udp" ];
# visibleTo.lan = true;
# visibleTo.wan = true;
visibleTo.ovpn = true;
description = "colin-stun-turn";
};
"5349" = {
# the other port 3478 also supports TLS/DTLS, but presumably clients wanting TLS will default to 5349
protocol = [ "tcp" ];
# visibleTo.lan = true;
# visibleTo.wan = true;
visibleTo.ovpn = true;
description = "colin-stun-turn-over-tls";
};
}
] ++ (builtins.map
(port: {
"${builtins.toString port}" = let
count = port - turnPortLow + 1;
numPorts = turnPortHigh - turnPortLow + 1;
in {
protocol = [ "tcp" "udp" ];
# visibleTo.lan = true;
# visibleTo.wan = true;
visibleTo.ovpn = true;
description = "colin-turn-${builtins.toString count}-of-${builtins.toString numPorts}";
};
})
turnPortRange
));
services.nginx.virtualHosts."turn.uninsane.org" = {
# allow ACME to procure a cert via nginx for this domain
@ -117,28 +103,22 @@ in
services.coturn.realm = "turn.uninsane.org";
services.coturn.cert = "/var/lib/acme/turn.uninsane.org/fullchain.pem";
services.coturn.pkey = "/var/lib/acme/turn.uninsane.org/key.pem";
#v disable to allow unauthenticated access (or set `services.coturn.no-auth = true`)
services.coturn.use-auth-secret = true;
services.coturn.static-auth-secret-file = "/var/lib/coturn/shared_secret.bin";
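# with use-auth-secret, clients authenticate with ephemeral TURN-REST-style credentials derived from the shared secret:
# the username is an expiry timestamp (the `username=<1715915193>` seen in the usage log lines above)
# and the password is base64(HMAC-SHA1(shared_secret, username)).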
services.coturn.lt-cred-mech = true; #< XXX: use-auth-secret overrides lt-cred-mech
services.coturn.lt-cred-mech = true;
services.coturn.min-port = turnPortLow;
services.coturn.max-port = turnPortHigh;
# services.coturn.secure-stun = true;
services.coturn.extraConfig = lib.concatStringsSep "\n" [
"verbose"
# "Verbose" #< even MORE verbosity than "verbose" (it's TOO MUCH verbosity really)
"no-multicast-peers" # disables sending to IPv4 broadcast addresses (e.g. 224.0.0.0/3)
# "listening-ip=10.0.1.5" "external-ip=185.157.162.178" #< 2024/04/25: works, if running in root namespace
"listening-ip=185.157.162.178" "external-ip=185.157.162.178"
# old attempts:
# "Verbose" #< even MORE verbosity than "verbose"
# "no-multicast-peers" # disables sending to IPv4 broadcast addresses (e.g. 224.0.0.0/3)
"listening-ip=10.0.1.5"
# "external-ip=185.157.162.178/10.0.1.5"
"external-ip=185.157.162.178"
# "listening-ip=10.78.79.51" # can be specified multiple times; omit for *
# "external-ip=97.113.128.229/10.78.79.51"
# "external-ip=97.113.128.229"
# "mobility" # "mobility with ICE (MICE) specs support" (?)
];
systemd.services.coturn.serviceConfig.NetworkNamespacePath = "/run/netns/ovpns";
}

View File

@ -1,83 +0,0 @@
# as of 2023/12/02: complete blockchain is 530 GiB (on-disk size may be larger)
#
# ports:
# - 8333: for node-to-node communications
# - 8332: rpc (client-to-node)
#
# rpc setup:
# - generate a password
# - use: <https://github.com/bitcoin/bitcoin/blob/master/share/rpcauth/rpcauth.py>
# (rpcauth.py is not included in the `'.#bitcoin'` package result)
# - `wget https://raw.githubusercontent.com/bitcoin/bitcoin/master/share/rpcauth/rpcauth.py`
# - `python ./rpcauth.py colin`
# - copy the hash here. it's SHA-256, so safe to be public.
# - add "rpcuser=colin" and "rpcpassword=<output>" to secrets/servo/bitcoin.conf (i.e. ~/.bitcoin/bitcoin.conf)
# - bitcoin.conf docs: <https://github.com/bitcoin/bitcoin/blob/master/doc/bitcoin-conf.md>
# - validate with `bitcoin-cli -netinfo`
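# - (illustrative) the resulting secrets/servo/bitcoin.conf then contains something like:
#     rpcuser=colin
#     rpcpassword=<the plaintext password printed by rpcauth.py>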
{ config, lib, pkgs, sane-lib, ... }:
let
# wrapper to run bitcoind with the tor onion address as externalip (computed at runtime)
_bitcoindWithExternalIp = with pkgs; writeShellScriptBin "bitcoind" ''
externalip="$(cat /var/lib/tor/onion/bitcoind/hostname)"
exec ${bitcoind}/bin/bitcoind "-externalip=$externalip" "$@"
'';
# the package i provide to services.bitcoind ends up on system PATH, and used by other tools like clightning.
# therefore, even though services.bitcoind only needs `bitcoind` binary, provide all the other bitcoin-related binaries (notably `bitcoin-cli`) as well:
bitcoindWithExternalIp = with pkgs; symlinkJoin {
name = "bitcoind-with-external-ip";
paths = [ _bitcoindWithExternalIp bitcoind ];
};
in
{
sane.persist.sys.byStore.ext = [
{ user = "bitcoind-mainnet"; group = "bitcoind-mainnet"; path = "/var/lib/bitcoind-mainnet"; method = "bind"; }
];
# sane.ports.ports."8333" = {
# # this allows other nodes and clients to download blocks from me.
# protocol = [ "tcp" ];
# visibleTo.wan = true;
# description = "colin-bitcoin";
# };
services.tor.relay.onionServices.bitcoind = {
version = 3;
map = [{
# by default tor will route public tor port P to 127.0.0.1:P.
# so if this port is the same as bitcoind would natively use, then no further config is needed here.
# see: <https://2019.www.torproject.org/docs/tor-manual.html.en#HiddenServicePort>
port = 8333;
# target.port; target.addr; #< set if tor port != bitcoind port
}];
# allow "tor" group (i.e. bitcoind-mainnet) to read /var/lib/tor/onion/bitcoind/hostname
settings.HiddenServiceDirGroupReadable = true;
};
services.bitcoind.mainnet = {
enable = true;
package = bitcoindWithExternalIp;
rpc.users.colin = {
# see docs at top of file for how to generate this
passwordHMAC = "30002c05d82daa210550e17a182db3f3$6071444151281e1aa8a2729f75e3e2d224e9d7cac3974810dab60e7c28ffaae4";
};
extraConfig = ''
# don't load the wallet, and disable wallet RPC calls
disablewallet=1
# proxy all outbound traffic through Tor
proxy=127.0.0.1:9050
'';
};
users.users.bitcoind-mainnet.extraGroups = [ "tor" ];
systemd.services.bitcoind-mainnet.serviceConfig.RestartSec = "30s"; #< default is 0
sane.users.colin.fs.".bitcoin/bitcoin.conf" = sane-lib.fs.wantedSymlinkTo config.sops.secrets."bitcoin.conf".path;
sops.secrets."bitcoin.conf" = {
mode = "0600";
owner = "colin";
group = "users";
};
sane.programs.bitcoind.enableFor.user.colin = true; # for debugging/administration: `bitcoin-cli`
}

View File

@ -1,782 +0,0 @@
#!/usr/bin/env nix-shell
#!nix-shell -i python3 -p "python3.withPackages (ps: [ ps.pyln-client ])"
"""
clightning-sane: helper to perform common Lightning node admin operations:
- view channel balances
- rebalance channels
COMMON OPERATIONS:
- view channel balances: `clightning-sane status`
- rebalance channels to improve routability (without paying any fees): `clightning-sane autobalance`
FULL OPERATION:
- `clightning-sane status --full`
- `P$`: represents how many msats i've captured in fees from this channel.
- `COST`: rough measure of how much it's "costing" me to let my channel partner hold funds on his side of the channel.
this is based on the notion that i only capture fees from outbound transactions, and so the channel partner holding all liquidity means i can't capture fees on that liquidity.
"""
# pyln-client docs: <https://github.com/ElementsProject/lightning/tree/master/contrib/pyln-client>
# terminology:
# - "scid": "Short Channel ID", e.g. 123456x7890x0
# from this id, we can locate the actual channel, its peers, and its parameters
import argparse
import logging
import math
import sys
import time
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass
from enum import Enum
from pyln.client import LightningRpc, Millisatoshi, RpcError
logger = logging.getLogger(__name__)
RPC_FILE = "/var/lib/clightning/bitcoin/lightning-rpc"
# CLTV (HLTC delta) of the final hop
# set this too low and you might get inadvertent channel closures (?)
CLTV = 18
# for every sequentially failed transaction, delay this much before trying again.
# note that the initial route building process can involve 10-20 "transient" failures, as it discovers dead channels.
TX_FAIL_BACKOFF = 0.8
MAX_SEQUENTIAL_JOB_FAILURES = 200
class LoopError(Enum):
""" error when trying to loop sats, or when unable to calculate a route for the loop """
TRANSIENT = "TRANSIENT" # try again, we'll maybe find a different route
NO_ROUTE = "NO_ROUTE"
class RouteError(Enum):
""" error when calculated a route """
HAS_BASE_FEE = "HAS_BASE_FEE"
NO_ROUTE = "NO_ROUTE"
class Metrics:
looped_msat: int = 0
sendpay_fail: int = 0
sendpay_succeed: int = 0
own_bad_channel: int = 0
no_route: int = 0
in_ch_unsatisfiable: int = 0
def __repr__(self) -> str:
return f"looped:{self.looped_msat}, tx:{self.sendpay_succeed}, tx_fail:{self.sendpay_fail}, own_bad_ch:{self.own_bad_channel}, no_route:{self.no_route}, in_ch_restricted:{self.in_ch_unsatisfiable}"
@dataclass
class TxBounds:
max_msat: int
min_msat: int = 0
def __repr__(self) -> str:
return f"TxBounds({self.min_msat} <= msat <= {self.max_msat})"
def is_satisfiable(self) -> bool:
return self.min_msat <= self.max_msat
def raise_max_to_be_satisfiable(self) -> "Self":
if self.max_msat < self.min_msat:
logger.debug(f"raising max_msat to be consistent: {self.max_msat} -> {self.min_msat}")
return TxBounds(self.min_msat, self.min_msat)
return TxBounds(min_msat=self.min_msat, max_msat=self.max_msat)
def intersect(self, other: "TxBounds") -> "Self":
return TxBounds(
min_msat=max(self.min_msat, other.min_msat),
max_msat=min(self.max_msat, other.max_msat),
)
def restrict_to_htlc(self, ch: "LocalChannel", why: str = "") -> "Self":
"""
apply min/max HTLC size restrictions of the given channel.
"""
if ch:
why = why or ch.directed_scid_to_me
if why: why = f"{why}: "
new_min, new_max = self.min_msat, self.max_msat
if ch.htlc_minimum_to_me > self.min_msat:
new_min = ch.htlc_minimum_to_me
logger.debug(f"{why}raising min_msat due to HTLC requirements: {self.min_msat} -> {new_min}")
if ch.htlc_maximum_to_me < self.max_msat:
new_max = ch.htlc_maximum_to_me
logger.debug(f"{why}lowering max_msat due to HTLC requirements: {self.max_msat} -> {new_max}")
return TxBounds(min_msat=new_min, max_msat=new_max)
def restrict_to_zero_fees(self, ch: "LocalChannel"=None, base: int=0, ppm: int=0, why:str = "") -> "Self":
"""
restrict tx size such that PPM fees are zero.
if the channel has a base fee, then `max_msat` is forced to 0.
"""
if ch:
why = why or ch.directed_scid_to_me
self = self.restrict_to_zero_fees(base=ch.to_me["base_fee_millisatoshi"], ppm=ch.to_me["fee_per_millionth"], why=why)
if why: why = f"{why}: "
new_max = self.max_msat
ppm_max = math.ceil(1000000 / ppm) - 1 if ppm != 0 else new_max
if ppm_max < new_max:
logger.debug(f"{why}decreasing max_msat due to fee ppm: {new_max} -> {ppm_max}")
new_max = ppm_max
if base != 0:
logger.debug(f"{why}free route impossible: channel has base fees")
new_max = 0
return TxBounds(min_msat=self.min_msat, max_msat=new_max)
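# (illustrative) worked example of the zero-fee bound above: with fee_per_millionth=1000, the proportional
# fee on X msat is X*1000/1_000_000 msat; assuming the fee is rounded down, any X <= ceil(1_000_000/1000)-1
# = 999 msat routes with zero proportional fee, which is exactly the max_msat this method computes.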
class LocalChannel:
def __init__(self, channels: list, rpc: "RpcHelper"):
assert 0 < len(channels) <= 2, f"unexpected: channel count: {channels}"
out = None
in_ = None
for c in channels:
if c["source"] == rpc.self_id:
assert out is None, f"unexpected: multiple channels from self: {channels}"
out = c
if c["destination"] == rpc.self_id:
assert in_ is None, f"unexpected: multiple channels to self: {channels}"
in_ = c
# assert out is not None, f"no channel from self: {channels}"
# assert in_ is not None, f"no channel to self: {channels}"
if out and in_:
assert out["destination"] == in_["source"], f"channel peers are asymmetric?! {channels}"
assert out["short_channel_id"] == in_["short_channel_id"], f"channel ids differ?! {channels}"
self.from_me = out
self.to_me = in_
self.remote_node = rpc.node(self.remote_peer)
self.peer_ch = rpc.peerchannel(self.scid, self.remote_peer)
self.forwards_from_me = rpc.rpc.listforwards(out_channel=self.scid, status="settled")["forwards"]
def __repr__(self) -> str:
return self.to_str(with_scid=True, with_bal_ratio=True, with_cost=False, with_ppm_theirs=False)
def to_str(
self,
with_peer_id:bool = False,
with_scid:bool = False,
with_bal_msat:bool = False,
with_bal_ratio:bool = False,
with_cost:bool = False,
with_ppm_theirs:bool = False,
with_ppm_mine:bool = False,
with_profits:bool = True,
with_payments:bool = False,
) -> str:
base_flag = "*" if not self.online or self.base_fee_to_me != 0 else ""
alias = f"({self.remote_alias}){base_flag}"
peerid = f" {self.remote_peer}" if with_peer_id else ""
scid = f" scid:{self.scid:>13}" if with_scid else ""
bal = f" S:{int(self.sendable):11}/R:{int(self.receivable):11}" if with_bal_msat else ""
ratio = f" MINE:{(100*self.send_ratio):>8.4f}%" if with_bal_ratio else ""
payments = f" OUT:{int(self.out_fulfilled_msat):>11}/IN:{int(self.in_fulfilled_msat):>11}" if with_payments else ""
profits = f" P$:{int(self.fees_lifetime_mine):>8}" if with_profits else ""
cost = f" COST:{self.opportunity_cost_lent:>8}" if with_cost else ""
ppm_theirs = self.ppm_to_me if self.to_me else "N/A"
ppm_theirs = f" PPM_THEIRS:{ppm_theirs:>6}" if with_ppm_theirs else ""
ppm_mine = self.ppm_from_me if self.from_me else "N/A"
ppm_mine = f" PPM_MINE:{ppm_mine:>6}" if with_ppm_mine else ""
return f"channel{alias:30}{peerid}{scid}{bal}{ratio}{payments}{profits}{cost}{ppm_theirs}{ppm_mine}"
@property
def online(self) -> bool:
return self.from_me and self.to_me
@property
def remote_peer(self) -> str:
if self.from_me:
return self.from_me["destination"]
else:
return self.to_me["source"]
@property
def remote_alias(self) -> str:
return self.remote_node["alias"]
@property
def scid(self) -> str:
if self.from_me:
return self.from_me["short_channel_id"]
else:
return self.to_me["short_channel_id"]
@property
def htlc_minimum_to_me(self) -> Millisatoshi:
return self.to_me["htlc_minimum_msat"]
@property
def htlc_minimum_from_me(self) -> Millisatoshi:
return self.from_me["htlc_minimum_msat"]
@property
def htlc_minimum(self) -> Millisatoshi:
return max(self.htlc_minimum_to_me, self.htlc_minimum_from_me)
@property
def htlc_maximum_to_me(self) -> Millisatoshi:
return self.to_me["htlc_maximum_msat"]
@property
def htlc_maximum_from_me(self) -> Millisatoshi:
return self.from_me["htlc_maximum_msat"]
@property
def htlc_maximum(self) -> Millisatoshi:
return min(self.htlc_maximum_to_me, self.htlc_maximum_from_me)
@property
def direction_to_me(self) -> int:
return self.to_me["direction"]
@property
def direction_from_me(self) -> int:
return self.from_me["direction"]
@property
def directed_scid_to_me(self) -> str:
return f"{self.scid}/{self.direction_to_me}"
@property
def directed_scid_from_me(self) -> str:
return f"{self.scid}/{self.direction_from_me}"
@property
def delay_them(self) -> str:
return self.to_me["delay"]
@property
def delay_me(self) -> str:
return self.from_me["delay"]
@property
def ppm_to_me(self) -> int:
return self.to_me["fee_per_millionth"]
@property
def ppm_from_me(self) -> int:
return self.from_me["fee_per_millionth"]
# return self.peer_ch["fee_proportional_millionths"]
@property
def base_fee_to_me(self) -> int:
return self.to_me["base_fee_millisatoshi"]
@property
def receivable(self) -> int:
return self.peer_ch["receivable_msat"]
@property
def sendable(self) -> int:
return self.peer_ch["spendable_msat"]
@property
def in_fulfilled_msat(self) -> Millisatoshi:
return self.peer_ch["in_fulfilled_msat"]
@property
def out_fulfilled_msat(self) -> Millisatoshi:
return self.peer_ch["out_fulfilled_msat"]
@property
def fees_lifetime_mine(self) -> Millisatoshi:
return sum(fwd["fee_msat"] for fwd in self.forwards_from_me)
@property
def send_ratio(self) -> float:
cap = self.receivable + self.sendable
return self.sendable / cap
@property
def opportunity_cost_lent(self) -> int:
""" how much msat did we gain by pushing their channel to its current balance? """
return int(self.receivable * self.ppm_from_me / 1000000)
class RpcHelper:
def __init__(self, rpc: LightningRpc):
self.rpc = rpc
self.self_id = rpc.getinfo()["id"]
def localchannel(self, scid: str) -> LocalChannel:
listchan = self.rpc.listchannels(scid)
# this assertion would probably indicate a typo in the scid
assert listchan and listchan.get("channels", []) != [], f"bad listchannels for {scid}: {listchan}"
return LocalChannel(listchan["channels"], self)
def node(self, id: str) -> dict:
nodes = self.rpc.listnodes(id)["nodes"]
assert len(nodes) == 1, f"unexpected: multiple nodes for {id}: {nodes}"
return nodes[0]
def peerchannel(self, scid: str, peer_id: str) -> dict:
peerchannels = self.rpc.listpeerchannels(peer_id)["channels"]
channels = [c for c in peerchannels if c["short_channel_id"] == scid]
assert len(channels) == 1, f"expected exactly 1 channel, got: {channels}"
return channels[0]
def try_getroute(self, *args, **kwargs) -> dict | None:
""" wrapper for getroute which returns None instead of error if no route exists """
try:
route = self.rpc.getroute(*args, **kwargs)
except RpcError as e:
logger.debug(f"rpc failed: {e}")
return None
else:
route = route["route"]
if route == []: return None
return route
class LoopRouter:
def __init__(self, rpc: RpcHelper, metrics: Metrics = None):
self.rpc = rpc
self.metrics = metrics or Metrics()
self.bad_channels = [] # list of directed scid
self.nonzero_base_channels = [] # list of directed scid
def drop_caches(self) -> None:
logger.info("LoopRouter.drop_caches()")
self.bad_channels = []
def _get_directed_scid(self, scid: str, direction: int) -> dict:
channels = self.rpc.rpc.listchannels(scid)["channels"]
channels = [c for c in channels if c["direction"] == direction]
assert len(channels) == 1, f"expected exactly 1 channel: {channels}"
return channels[0]
def loop_once(self, out_scid: str, in_scid: str, bounds: TxBounds) -> LoopError|int:
out_ch = self.rpc.localchannel(out_scid)
in_ch = self.rpc.localchannel(in_scid)
if out_ch.directed_scid_from_me in self.bad_channels or in_ch.directed_scid_to_me in self.bad_channels:
logger.info(f"loop {out_scid} -> {in_scid} failed in our own channel")
self.metrics.own_bad_channel += 1
return LoopError.TRANSIENT
# bounds = bounds.restrict_to_htlc(out_ch) # htlc bounds seem to be enforced only in the outward direction
bounds = bounds.restrict_to_htlc(in_ch)
bounds = bounds.restrict_to_zero_fees(in_ch)
if not bounds.is_satisfiable():
self.metrics.in_ch_unsatisfiable += 1
return LoopError.NO_ROUTE
logger.debug(f"route with bounds {bounds}")
route = self.route(out_ch, in_ch, bounds)
logger.debug(f"route: {route}")
if route == RouteError.NO_ROUTE:
self.metrics.no_route += 1
return LoopError.NO_ROUTE
elif route == RouteError.HAS_BASE_FEE:
# try again with a different route
return LoopError.TRANSIENT
amount_msat = route[0]["amount_msat"]
invoice_id = f"loop-{time.time():.6f}".replace(".", "_")
invoice_desc = f"bal {out_scid}:{in_scid}"
invoice = self.rpc.rpc.invoice("any", invoice_id, invoice_desc)
logger.debug(f"invoice: {invoice}")
payment = self.rpc.rpc.sendpay(route, invoice["payment_hash"], invoice_id, amount_msat, invoice["bolt11"], invoice["payment_secret"])
logger.debug(f"sent: {payment}")
try:
wait = self.rpc.rpc.waitsendpay(invoice["payment_hash"])
logger.debug(f"result: {wait}")
except RpcError as e:
self.metrics.sendpay_fail += 1
err_data = e.error["data"]
err_scid, err_dir = err_data["erring_channel"], err_data["erring_direction"]
err_directed_scid = f"{err_scid}/{err_dir}"
logger.debug(f"ch failed, adding to excludes: {err_directed_scid}; {e.error}")
self.bad_channels.append(err_directed_scid)
return LoopError.TRANSIENT
else:
self.metrics.sendpay_succeed += 1
self.metrics.looped_msat += int(amount_msat)
return int(amount_msat)
def route(self, out_ch: LocalChannel, in_ch: LocalChannel, bounds: TxBounds) -> list[dict] | RouteError:
exclude = [
# ensure the payment doesn't cross either channel in reverse.
# note that this doesn't preclude it from taking additional trips through self, with other peers.
# out_ch.directed_scid_to_me,
# in_ch.directed_scid_from_me,
# alternatively, never route through self. this avoids a class of logic error, like what to do with fees i charge "myself".
self.rpc.self_id
] + self.bad_channels + self.nonzero_base_channels
out_peer = out_ch.remote_peer
in_peer = in_ch.remote_peer
route_or_bounds = bounds
while isinstance(route_or_bounds, TxBounds):
old_bounds = route_or_bounds
route_or_bounds = self._find_partial_route(out_peer, in_peer, old_bounds, exclude=exclude)
if route_or_bounds == old_bounds:
return RouteError.NO_ROUTE
if isinstance(route_or_bounds, RouteError):
return route_or_bounds
route = self._add_route_endpoints(route_or_bounds, out_ch, in_ch)
return route
def _find_partial_route(self, out_peer: str, in_peer: str, bounds: TxBounds, exclude: list[str]=[]) -> list[dict] | RouteError | TxBounds:
route = self.rpc.try_getroute(in_peer, amount_msat=bounds.max_msat, riskfactor=0, fromid=out_peer, exclude=exclude, cltv=CLTV)
if route is None:
logger.debug(f"no route for {bounds.max_msat}msat {out_peer} -> {in_peer}")
return RouteError.NO_ROUTE
send_msat = route[0]["amount_msat"]
if send_msat != Millisatoshi(bounds.max_msat):
logger.debug(f"found route with non-zero fee: {send_msat} -> {bounds.max_msat}. {route}")
error = None
for hop in route:
hop_scid = hop["channel"]
hop_dir = hop["direction"]
directed_scid = f"{hop_scid}/{hop_dir}"
ch = self._get_directed_scid(hop_scid, hop_dir)
if ch["base_fee_millisatoshi"] != 0:
self.nonzero_base_channels.append(directed_scid)
error = RouteError.HAS_BASE_FEE
bounds = bounds.restrict_to_zero_fees(ppm=ch["fee_per_millionth"], why=directed_scid)
return bounds.raise_max_to_be_satisfiable() if error is None else error
return route
def _add_route_endpoints(self, route, out_ch: LocalChannel, in_ch: LocalChannel):
inbound_hop = dict(
id=self.rpc.self_id,
channel=in_ch.scid,
direction=in_ch.direction_to_me,
amount_msat=route[-1]["amount_msat"],
delay=route[-1]["delay"],
style="tlv",
)
route = self._add_route_delay(route, in_ch.delay_them) + [ inbound_hop ]
outbound_hop = dict(
id=out_ch.remote_peer,
channel=out_ch.scid,
direction=out_ch.direction_from_me,
amount_msat=route[0]["amount_msat"],
delay=route[0]["delay"] + out_ch.delay_them,
style="tlv",
)
route = [ outbound_hop ] + route
return route
def _add_route_delay(self, route: list[dict], delay: int) -> list[dict]:
return [ dict(hop, delay=hop["delay"] + delay) for hop in route ]
@dataclass
class LoopJob:
out: str # scid
in_: str # scid
amount: int
@dataclass
class LoopJobIdle:
sec: int = 10
class LoopJobDone(Enum):
COMPLETED = "COMPLETED"
ABORTED = "ABORTED"
class AbstractLoopRunner:
def __init__(self, looper: LoopRouter, bounds: TxBounds, parallelism: int):
self.looper = looper
self.bounds = bounds
self.parallelism = parallelism
self.bounds_map = {} # map (out:str, in_:str) -> TxBounds. it's a cache so we don't have to try 10 routes every time.
def pop_job(self) -> LoopJob | LoopJobIdle | LoopJobDone:
raise NotImplemented # abstract method
def finished_job(self, job: LoopJob, progress: int|LoopError) -> None:
raise NotImplemented # abstract method
def run_to_completion(self, exit_on_any_completed:bool = False) -> None:
self.exiting = False
self.exit_on_any_completed = exit_on_any_completed
if self.parallelism == 1:
# run inline to aid debugging
self._worker_thread()
else:
with ThreadPoolExecutor(max_workers=self.parallelism) as executor:
_ = list(executor.map(lambda _i: self._try_invoke(self._worker_thread), range(self.parallelism)))
def drop_caches(self) -> None:
logger.info("AbstractLoopRunner.drop_caches()")
self.looper.drop_caches()
self.bounds_map = {}
def _try_invoke(self, f, *args) -> None:
"""
try to invoke `f` with the provided `args`, and log if it fails.
this overcomes the issue that background tasks which fail via Exception otherwise do so silently.
"""
try:
f(*args)
except Exception as e:
logger.error(f"task failed: {e}")
def _worker_thread(self) -> None:
while not self.exiting:
job = self.pop_job()
logger.debug(f"popped job: {job}")
if isinstance(job, LoopJobDone):
return self._worker_finished(job)
if isinstance(job, LoopJobIdle):
logger.debug(f"idling for {job.sec}")
time.sleep(job.sec)
continue
result = self._execute_job(job)
logger.debug(f"finishing job {job} with {result}")
self.finished_job(job, result)
def _execute_job(self, job: LoopJob) -> LoopError|int:
bounds = self.bounds_map.get((job.out, job.in_), self.bounds)
bounds = bounds.intersect(TxBounds(max_msat=job.amount))
if not bounds.is_satisfiable():
logger.debug(f"TxBounds for job are unsatisfiable; skipping: {bounds} {job}")
return LoopError.NO_ROUTE
amt_looped = self.looper.loop_once(job.out, job.in_, bounds)
if amt_looped in (0, LoopError.NO_ROUTE, LoopError.TRANSIENT):
return amt_looped
logger.info(f"looped {amt_looped} from {job.out} -> {job.in_}")
bounds = bounds.intersect(TxBounds(max_msat=amt_looped))
self.bounds_map[(job.out, job.in_)] = bounds
return amt_looped
def _worker_finished(self, job: LoopJobDone) -> None:
if job == LoopJobDone.COMPLETED and self.exit_on_any_completed:
logger.debug(f"worker completed -> exiting pool")
self.exiting = True
class LoopPairState:
# TODO: use this in MultiLoopBalancer, or stop shoving state in here and put it on LoopBalancer instead.
def __init__(self, out: str, in_: str, amount: int):
self.out = out
self.in_ = in_
self.amount_target = amount
self.amount_looped = 0
self.amount_outstanding = 0
self.tx_fail_count = 0
self.route_fail_count = 0
self.last_job_start_time = None
self.failed_tx_throttler = 0 # increase by one every time we fail, decreases more gradually, when we succeed
class LoopBalancer(AbstractLoopRunner):
def __init__(self, out: str, in_: str, amount: int, looper: LoopRouter, bounds: TxBounds, parallelism: int=1):
super().__init__(looper, bounds, parallelism)
self.state = LoopPairState(out, in_, amount)
def pop_job(self) -> LoopJob | LoopJobIdle | LoopJobDone:
if self.state.tx_fail_count + 10*self.state.route_fail_count >= MAX_SEQUENTIAL_JOB_FAILURES:
logger.info(f"giving up ({self.state.out} -> {self.state.in_}): {self.state.tx_fail_count} tx failures, {self.state.route_fail_count} route failures")
return LoopJobDone.ABORTED
if self.state.tx_fail_count + self.state.route_fail_count > 0:
# N.B.: last_job_start_time is guaranteed to have been set by now
idle_until = self.state.last_job_start_time + TX_FAIL_BACKOFF*self.state.failed_tx_throttler
idle_for = idle_until - time.time()
if self.state.amount_outstanding != 0 or idle_for > 0:
# when we hit transient failures, restrict to just one job in flight at a time.
# this is aimed at the initial route building, where multiple jobs in flight are just useless,
# but it's not a bad idea for network blips, etc, either.
logger.info(f"throttling ({self.state.out} -> {self.state.in_}) for {idle_for:.0f}: {self.state.tx_fail_count} tx failures, {self.state.route_fail_count} route failures")
return LoopJobIdle(idle_for) if idle_for > 0 else LoopJobIdle()
amount_avail = self.state.amount_target - self.state.amount_looped - self.state.amount_outstanding
if amount_avail < self.bounds.min_msat:
if self.state.amount_outstanding == 0: return LoopJobDone.COMPLETED
return LoopJobIdle() # sending out another job would risk over-transferring
amount_this_job = min(amount_avail, self.bounds.max_msat)
self.state.amount_outstanding += amount_this_job
self.state.last_job_start_time = time.time()
return LoopJob(out=self.state.out, in_=self.state.in_, amount=amount_this_job)
def finished_job(self, job: LoopJob, progress: int) -> None:
self.state.amount_outstanding -= job.amount
if progress == LoopError.NO_ROUTE:
self.state.route_fail_count += 1
self.state.failed_tx_throttler += 10
elif progress == LoopError.TRANSIENT:
self.state.tx_fail_count += 1
self.state.failed_tx_throttler += 1
else:
self.state.amount_looped += progress
self.state.tx_fail_count = 0
self.state.route_fail_count = 0
self.state.failed_tx_throttler = max(0, self.state.failed_tx_throttler - 0.2)
logger.info(f"loop progressed ({job.out} -> {job.in_}) {progress}: {self.state.amount_looped} of {self.state.amount_target}")
class MultiLoopBalancer(AbstractLoopRunner):
"""
multiplexes jobs between multiple LoopBalancers.
note that the child LoopBalancers don't actually execute the jobs -- just produce them.
"""
def __init__(self, looper: LoopRouter, bounds: TxBounds, parallelism: int=1):
super().__init__(looper, bounds, parallelism)
self.loops = []
# job_index: increments on every job so we can grab jobs evenly from each LoopBalancer.
# in the event that producers are idling, it can actually increment more than once,
# so don't take this too literally
self.job_index = 0
def add_loop(self, out: LocalChannel, in_: LocalChannel, amount: int) -> None:
"""
start looping sats from out -> in_
"""
assert not any(l.state.out == out.scid and l.state.in_ == in_.scid for l in self.loops), f"tried to add duplicate loops from {out} -> {in_}"
logger.info(f"looping from ({out}) to ({in_})")
self.loops.append(LoopBalancer(out.scid, in_.scid, amount, self.looper, self.bounds, self.parallelism))
def pop_job(self) -> LoopJob | LoopJobIdle | LoopJobDone:
# N.B.: this can be called in parallel, so try to be consistent enough to not crash
idle_job = None
abort_job = None
for i, _ in enumerate(self.loops):
loop = self.loops[(self.job_index + i) % len(self.loops)]
self.job_index += 1
job = loop.pop_job()
if isinstance(job, LoopJob):
return job
if isinstance(job, LoopJobIdle):
idle_job = LoopJobIdle(min(job.sec, idle_job.sec)) if idle_job is not None else job
if job == LoopJobDone.ABORTED:
abort_job = job
# either there's a task to idle, or we have to terminate.
# if terminating, terminate ABORTED if any job aborted, else COMPLETED
if idle_job is not None: return idle_job
if abort_job is not None: return abort_job
return LoopJobDone.COMPLETED
def finished_job(self, job: LoopJob, progress: int) -> None:
# this assumes (enforced externally) that we have only one loop for a given out/in_ pair
for l in self.loops:
if l.state.out == job.out and l.state.in_ == job.in_:
l.finished_job(job, progress)
logger.info(f"total: {self.looper.metrics}")
def balance_loop(rpc: RpcHelper, out: str, in_: str, amount_msat: int, min_msat: int, max_msat: int, parallelism: int):
looper = LoopRouter(rpc)
bounds = TxBounds(min_msat=min_msat, max_msat=max_msat)
balancer = LoopBalancer(out, in_, amount_msat, looper, bounds, parallelism)
balancer.run_to_completion()
def autobalance_once(rpc: RpcHelper, metrics: Metrics, bounds: TxBounds, parallelism: int) -> bool:
"""
autobalances all channels.
returns True if channels are balanced (or as balanced as can be); False if in need of further balancing
"""
looper = LoopRouter(rpc, metrics)
balancer = MultiLoopBalancer(looper, bounds, parallelism)
channels = []
for peerch in rpc.rpc.listpeerchannels()["channels"]:
try:
channels.append(rpc.localchannel(peerch["short_channel_id"]))
except:
logger.info(f"NO CHANNELS for {peerch['peer_id']}")
channels = [ch for ch in channels if ch.online and ch.base_fee_to_me == 0]
give_to = [ ch for ch in channels if ch.send_ratio > 0.95 ]
take_from = [ ch for ch in channels if ch.send_ratio < 0.20 ]
if give_to == [] and take_from == []:
return True
for to in give_to:
for from_ in take_from:
balancer.add_loop(to, from_, 10000000)
balancer.run_to_completion(exit_on_any_completed=True)
return False
def autobalance(rpc: RpcHelper, min_msat: int, max_msat: int, parallelism: int):
bounds = TxBounds(min_msat=min_msat, max_msat=max_msat)
metrics = Metrics()
while not autobalance_once(rpc, metrics, bounds, parallelism):
pass
def show_status(rpc: RpcHelper, full: bool=False):
"""
show a table of channel balances between peers.
"""
for peerch in rpc.rpc.listpeerchannels()["channels"]:
try:
ch = rpc.localchannel(peerch["short_channel_id"])
except:
print(f"{peerch['peer_id']} scid:{peerch['short_channel_id']} state:{peerch['state']} NO CHANNELS")
else:
print(ch.to_str(with_scid=True, with_bal_ratio=True, with_payments=True, with_cost=full, with_ppm_theirs=True, with_ppm_mine=True, with_peer_id=full))
def main():
logging.basicConfig()
logger.setLevel(logging.INFO)
parser = argparse.ArgumentParser(description=__doc__)
parser.add_argument("--verbose", action="store_true", help="more logging")
parser.add_argument("--min-msat", default="999", help="min transaction size")
parser.add_argument("--max-msat", default="1000000", help="max transaction size")
parser.add_argument("--jobs", default="1", help="how many HTLCs to keep in-flight at once")
subparsers = parser.add_subparsers(help="action")
status_parser = subparsers.add_parser("status")
status_parser.set_defaults(action="status")
status_parser.add_argument("--full", action="store_true", help="more info per channel")
loop_parser = subparsers.add_parser("loop")
loop_parser.set_defaults(action="loop")
loop_parser.add_argument("out", help="peer id to send tx through")
loop_parser.add_argument("in_", help="peer id to receive tx through")
loop_parser.add_argument("amount", help="total amount of msat to loop")
autobal_parser = subparsers.add_parser("autobalance")
autobal_parser.set_defaults(action="autobalance")
args = parser.parse_args()
if args.verbose:
logger.setLevel(logging.DEBUG)
rpc = RpcHelper(LightningRpc(RPC_FILE))
if args.action == "status":
show_status(rpc, full=args.full)
if args.action == "loop":
balance_loop(rpc, out=args.out, in_=args.in_, amount_msat=int(args.amount), min_msat=int(args.min_msat), max_msat=int(args.max_msat), parallelism=int(args.jobs))
if args.action == "autobalance":
autobalance(rpc, min_msat=int(args.min_msat), max_msat=int(args.max_msat), parallelism=int(args.jobs))
if __name__ == '__main__':
main()

View File

@ -1,135 +0,0 @@
# clightning is an implementation of Bitcoin's Lightning Network.
# as such, this assumes that `services.bitcoind` is enabled.
# docs:
# - tor clightning config: <https://docs.corelightning.org/docs/tor>
# - `lightning-cli` and subcommands: <https://docs.corelightning.org/reference/lightning-cli>
# - `man lightningd-config`
#
# management/setup/use:
# - guide: <https://github.com/ElementsProject/lightning>
#
# debugging:
# - `lightning-cli getlog debug`
# - `lightning-cli listpays` -> show payments this node sent
# - `lightning-cli listinvoices` -> show payments this node received
#
# first, acquire peers:
# - `lightning-cli connect id@host`
# where `id` is the node's pubkey, and `host` is perhaps an ip:port tuple, or a hash.onion:port tuple.
# for testing, choose any node listed on <https://1ml.com>
# - `lightning-cli listpeers`
# should show the new peer, with `connected: true`
#
# then, fund the clightning wallet
# - `lightning-cli newaddr`
#
# then, open channels
# - `lightning-cli connect ...`
# - `lightning-cli fundchannel <node_id> <amount_in_satoshis>`
#
# who to federate with?
# - a lot of the larger nodes allow hands-free channel creation
# - either inbound or outbound, sometimes paid
# - find nodes on:
# - <https://terminal.lightning.engineering/>
# - <https://1ml.com>
# - tor nodes: <https://1ml.com/node?order=capacity&iponionservice=true>
# - <https://lightningnetwork.plus>
# - <https://mempool.space/lightning>
# - <https://amboss.space>
# - a few tor-capable nodes which allow channel creation:
# - <https://c-otto.de/>
# - <https://cyberdyne.sh/>
# - <https://yalls.org/about/>
# - <https://coincept.com/>
# - more resources: <https://www.lopp.net/lightning-information.html>
# - node routability: https://hashxp.org/lightning/node/<id>
# - especially, acquire inbound liquidity via lightningnetwork.plus's swap feature
# - most of the opportunities are gated behind a minimum connection or capacity requirement
#
# tune payment parameters
# - `lightning-cli setchannel <id> [feebase] [feeppm] [htlcmin] [htlcmax] [enforcedelay] [ignorefeelimits]`
# - e.g. `lightning-cli setchannel all 0 10`
# - it's suggested that feebase=0 simplifies routing.
#
# teardown:
# - `lightning-cli withdraw <bc1... dest addr> <amount in satoshis> [feerate]`
#
# sanity:
# - `lightning-cli listfunds`
#
# to receive a payment (do as `clightning` user):
# - `lightning-cli invoice <amount in millisatoshi> <label> <description>`
# - specify amount as `any` if undetermined
# - then give the resulting bolt11 URI to the payer
# to send a payment:
# - `lightning-cli pay <bolt11 URI>`
# - or `lightning-cli pay <bolt11 URI> [amount_msat] [label] [riskfactor] [maxfeepercent] ...`
# - amount_msat must be "null" if the bolt11 URI specifies a value
# - riskfactor defaults to 10
# - maxfeepercent defaults to 0.5
# - label is a human-friendly label for my records
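# a hedged end-to-end example of the invoice/pay flow above (amount and label are
# illustrative; run `lightning-cli` as the `clightning` user on each node):
#   # on the receiving node: create an invoice for 10000 msat
#   lightning-cli invoice 10000 test-2023-11 "test payment"
#   # copy the resulting bolt11 string to the paying node, then:
#   lightning-cli pay <bolt11 URI>
#   # verify on each side:
#   lightning-cli listinvoices test-2023-11   # receiver: status becomes "paid"
#   lightning-cli listpays                    # payer: shows the outgoing payment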
{ config, pkgs, ... }:
{
sane.persist.sys.byStore.ext = [
{ user = "clightning"; group = "clightning"; mode = "0710"; path = "/var/lib/clightning"; method = "bind"; }
];
# `lightning-cli` finds its RPC file via `~/.lightning/bitcoin/lightning-rpc`, to message the daemon
sane.user.fs.".lightning".symlink.target = "/var/lib/clightning";
# see bitcoin.nix for how to generate this
services.bitcoind.mainnet.rpc.users.clightning.passwordHMAC =
"befcb82d9821049164db5217beb85439$2c31ac7db3124612e43893ae13b9527dbe464ab2d992e814602e7cb07dc28985";
sane.services.clightning.enable = true;
sane.services.clightning.proxy = "127.0.0.1:9050"; # proxy outgoing traffic through tor
# sane.services.clightning.publicAddress = "statictor:127.0.0.1:9051";
sane.services.clightning.getPublicAddressCmd = "cat /var/lib/tor/onion/clightning/hostname";
services.tor.relay.onionServices.clightning = {
version = 3;
map = [{
# by default tor will route public tor port P to 127.0.0.1:P.
# so if this port is the same as clightning would natively use, then no further config is needed here.
# see: <https://2019.www.torproject.org/docs/tor-manual.html.en#HiddenServicePort>
port = 9735;
# target.port; target.addr; #< set if tor port != clightning port
}];
# allow "tor" group (i.e. clightning) to read /var/lib/tor/onion/clightning/hostname
settings.HiddenServiceDirGroupReadable = true;
};
# must be in "tor" group to read /var/lib/tor/onion/*/hostname
users.users.clightning.extraGroups = [ "tor" ];
systemd.services.clightning.after = [ "tor.service" ];
# lightning-config contains fields from here:
# - <https://docs.corelightning.org/docs/configuration>
# secret config includes:
# - bitcoin-rpcpassword
# - alias=nodename
# - rgb=rrggbb
# - fee-base=<millisatoshi>
# - fee-per-satoshi=<ppm>
# - feature configs (i.e. experimental-xyz options)
sane.services.clightning.extraConfig = ''
log-level=debug:lightningd
# peerswap:
# - config example: <https://github.com/fort-nix/nix-bitcoin/pull/462/files#diff-b357d832705b8ce8df1f41934d613f79adb77c4cd5cd9e9eb12a163fca3e16c6>
# XXX: peerswap crashes clightning on launch. stacktrace is useless.
# plugin=${pkgs.peerswap}/bin/peerswap
# peerswap-db-path=/var/lib/clightning/peerswap/swaps
# peerswap-policy-path=...
'';
sane.services.clightning.extraConfigFiles = [ config.sops.secrets."lightning-config".path ];
sops.secrets."lightning-config" = {
mode = "0640";
owner = "clightning";
group = "clightning";
};
sane.programs.clightning.enableFor.user.colin = true; # for debugging/admin: `lightning-cli`
}

View File

@ -1,10 +0,0 @@
{ ... }:
{
imports = [
./bitcoin.nix
./clightning.nix
./i2p.nix
./monero.nix
./tor.nix
];
}

View File

@ -1,4 +0,0 @@
{ ... }:
{
services.i2p.enable = true;
}

View File

@ -1,31 +0,0 @@
# as of 2023/11/26: complete downloaded blockchain should be 200GiB on disk, give or take.
{ ... }:
{
sane.persist.sys.byStore.ext = [
# /var/lib/monero/lmdb is what consumes most of the space
{ user = "monero"; group = "monero"; path = "/var/lib/monero"; method = "bind"; }
];
services.monero.enable = true;
services.monero.limits.upload = 5000; # in kB/s
services.monero.extraConfig = ''
# see: monero doc/ANONYMITY_NETWORKS.md
#
# "If any anonymity network is enabled, transactions being broadcast that lack a valid 'context'
# (i.e. the transaction did not come from a P2P connection) will only be sent to peers on anonymity networks."
#
# i think this means that setting tx-proxy here ensures any transactions sent locally to my node (via RPC)
# will be sent over an anonymity network.
tx-proxy=i2p,127.0.0.1:9000
tx-proxy=tor,127.0.0.1:9050
'';
# monero ports: <https://monero.stackexchange.com/questions/604/what-ports-does-monero-use-rpc-p2p-etc>
# - 18080 = "P2P" monero node <-> monero node connections
# - 18081 = "RPC" monero client -> monero node connections
sane.ports.ports."18080" = {
protocol = [ "tcp" ];
visibleTo.wan = true;
description = "colin-monero-p2p";
};
}

View File

@ -1,25 +0,0 @@
# tor settings: <https://2019.www.torproject.org/docs/tor-manual.html.en>
{ lib, ... }:
{
# tor hidden service hostnames aren't deterministic, so persist.
# might be able to get away with just persisting /var/lib/tor/onion, not sure.
sane.persist.sys.byStore.plaintext = [
{ user = "tor"; group = "tor"; mode = "0710"; path = "/var/lib/tor"; method = "bind"; }
];
# tor: `tor.enable` doesn't start a relay, exit node, proxy, etc. it's minimal.
# tor.client.enable configures a torsocks proxy, accessible *only* to localhost.
# at 127.0.0.1:9050
services.tor.enable = true;
services.tor.client.enable = true;
# in order for services to read /var/lib/tor/onion/*/hostname, they must be able to traverse /var/lib/tor,
# and /var/lib/tor must have g+x.
# DataDirectoryGroupReadable causes tor to use g+rx, technically more than we need, but all the files are 600 so it's fine.
services.tor.settings.DataDirectoryGroupReadable = true;
# StateDirectoryMode defaults to 0700, and thereby prevents the onion hostnames from being group readable
systemd.services.tor.serviceConfig.StateDirectoryMode = lib.mkForce "0710";
  users.users.tor.homeMode = "0710";  # the default home mode of 0700 (enforced by the nixos "users" activation script) causes readability problems
services.tor.settings.SafeLogging = false; # show actual .onion names in the syslog, else debugging is impossible
}
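
A small sketch (assuming the clightning onion service from the earlier file) to verify the permission chain described above, i.e. that a member of the `tor` group can actually traverse the state directory and read an onion hostname:

    # /var/lib/tor should be owned tor:tor, mode 0710; onion dirs group-readable
    ls -ld /var/lib/tor /var/lib/tor/onion/clightning
    # any user in the "tor" group (e.g. clightning) should be able to read the hostname
    sudo -u clightning cat /var/lib/tor/onion/clightning/hostname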

View File

@ -0,0 +1,27 @@
{ config, lib, pkgs, ... }:
# using manual ddns now
lib.mkIf false
{
systemd.services.ddns-afraid = {
description = "update dynamic DNS entries for freedns.afraid.org";
serviceConfig = {
EnvironmentFile = config.sops.secrets."ddns_afraid.env".path;
# TODO: ProtectSystem = "strict";
# TODO: ProtectHome = "full";
# TODO: PrivateTmp = true;
};
script = let
curl = "${pkgs.curl}/bin/curl -4";
in ''
${curl} "https://freedns.afraid.org/dynamic/update.php?$AFRAID_KEY"
'';
};
systemd.timers.ddns-afraid = {
wantedBy = [ "multi-user.target" ];
timerConfig = {
OnStartupSec = "2min";
OnUnitActiveSec = "10min";
};
};
}

View File

@ -0,0 +1,30 @@
{ config, lib, pkgs, ... }:
# we use manual DDNS now
lib.mkIf false
{
systemd.services.ddns-he = {
description = "update dynamic DNS entries for HurricaneElectric";
serviceConfig = {
EnvironmentFile = config.sops.secrets."ddns_he.env".path;
# TODO: ProtectSystem = "strict";
# TODO: ProtectHome = "full";
# TODO: PrivateTmp = true;
};
# HE DDNS API is documented: https://dns.he.net/docs.html
script = let
crl = "${pkgs.curl}/bin/curl -4";
in ''
${crl} "https://he.uninsane.org:$HE_PASSPHRASE@dyn.dns.he.net/nic/update?hostname=he.uninsane.org"
${crl} "https://native.uninsane.org:$HE_PASSPHRASE@dyn.dns.he.net/nic/update?hostname=native.uninsane.org"
${crl} "https://uninsane.org:$HE_PASSPHRASE@dyn.dns.he.net/nic/update?hostname=uninsane.org"
'';
};
systemd.timers.ddns-he = {
wantedBy = [ "multi-user.target" ];
timerConfig = {
OnStartupSec = "2min";
OnUnitActiveSec = "10min";
};
};
}

View File

@ -3,7 +3,8 @@
imports = [
./calibre.nix
./coturn.nix
./cryptocurrencies
./ddns-afraid.nix
./ddns-he.nix
./email
./ejabberd.nix
./freshrss.nix
@ -19,13 +20,12 @@
./matrix
./navidrome.nix
./nginx.nix
./nixos-prebuild.nix
./nixserve.nix
./ntfy
./pict-rs.nix
./pleroma.nix
./postgres.nix
./prosody
./slskd.nix
./transmission.nix
./trust-dns.nix
./wikipedia.nix

View File

@ -45,7 +45,7 @@ in
lib.mkIf false
{
sane.persist.sys.byStore.plaintext = [
{ user = "ejabberd"; group = "ejabberd"; path = "/var/lib/ejabberd"; method = "bind"; }
{ user = "ejabberd"; group = "ejabberd"; path = "/var/lib/ejabberd"; }
];
sane.ports.ports = lib.mkMerge ([
{

View File

@ -127,11 +127,10 @@
services.dovecot2.modules = [
pkgs.dovecot_pigeonhole # enables sieve execution (?)
];
services.dovecot2.sieve = {
extensions = [ "fileinto" ];
services.dovecot2.sieveScripts = {
# if any messages fail to pass (or lack) DKIM, move them to Junk
# XXX the key name ("after") is only used to order sieve execution/ordering
scripts.after = builtins.toFile "ensuredkim.sieve" ''
after = builtins.toFile "ensuredkim.sieve" ''
require "fileinto";
if not header :contains "Authentication-Results" "dkim=pass" {
@ -140,6 +139,4 @@
}
'';
};
systemd.services.dovecot2.serviceConfig.RestartSec = lib.mkForce "15s"; # nixos defaults this to 1s
}

View File

@ -20,9 +20,9 @@ in
{
sane.persist.sys.byStore.plaintext = [
# TODO: mode? could be more granular
{ user = "opendkim"; group = "opendkim"; path = "/var/lib/opendkim"; method = "bind"; }
{ user = "root"; group = "root"; path = "/var/lib/postfix"; method = "bind"; }
{ user = "root"; group = "root"; path = "/var/spool/mail"; method = "bind"; }
{ user = "opendkim"; group = "opendkim"; path = "/var/lib/opendkim"; }
{ user = "root"; group = "root"; path = "/var/lib/postfix"; }
{ user = "root"; group = "root"; path = "/var/spool/mail"; }
# *probably* don't need these dirs:
# "/var/lib/dhparams" # https://github.com/NixOS/nixpkgs/blob/master/nixos/modules/security/dhparams.nix
# "/var/lib/dovecot"

View File

@ -2,14 +2,14 @@
{
imports = [
./nfs.nix
./sftpgo
./sftpgo.nix
];
users.groups.export = {};
fileSystems."/var/export/media" = {
# everything in here could be considered publicly readable (based on the viewer's legal jurisdiction)
device = "/var/media";
device = "/var/lib/uninsane/media";
options = [ "rbind" ];
};
# fileSystems."/var/export/playground" = {
@ -29,8 +29,8 @@
# - `sudo btrfs qgroup limit 20G /mnt/persist/ext/persist/var/export/playground`
# to query the quota/status:
# - `sudo btrfs qgroup show -re /var/export/playground`
sane.persist.sys.byStore.ext = [
{ user = "root"; group = "export"; mode = "0775"; path = "/var/export/playground"; method = "bind"; }
sane.persist.sys.ext = [
{ user = "root"; group = "export"; mode = "0775"; path = "/var/export/playground"; }
];
sane.fs."/var/export/README.md" = {

View File

@ -26,7 +26,7 @@
description = "NFS server portmapper";
};
sane.ports.ports."2049" = {
protocol = [ "tcp" "udp" ];
protocol = [ "tcp" ];
visibleTo.lan = true;
description = "NFS server";
};
@ -51,23 +51,6 @@
services.nfs.server.mountdPort = 4002;
services.nfs.server.statdPort = 4000;
services.nfs.extraConfig = ''
[nfsd]
# XXX: NFS over UDP REQUIRES SPECIAL CONFIG TO AVOID DATA LOSS.
# see `man 5 nfs`: "Using NFS over UDP on high-speed links".
# it's actually just a general property of UDP over IPv4 (IPv6 fixes it).
# both the client and the server should configure a shorter-than-default IPv4 fragment reassembly window to mitigate.
# OTOH, tunneling NFS over Wireguard also bypasses this weakness, because a mis-assembled packet would not have a valid signature.
udp=y
[exports]
# all export paths are relative to rootdir.
# for NFSv4, the export with fsid=0 behaves as `/` publicly,
# but NFSv3 implements no such feature.
# using `rootdir` instead of relying on `fsid=0` allows consistent export paths regardless of NFS proto version
rootdir=/var/export
'';
# format:
# fspoint visibility(options)
# options:
@ -102,20 +85,13 @@
in "${export} 10.78.79.0/22(${lib.concatStringsSep "," lanOpts}) 10.0.10.0/24(${lib.concatStringsSep "," vpnOpts})";
in lib.concatStringsSep "\n" [
(fmtExport {
export = "/";
export = "/var/export";
baseOpts = [ "crossmnt" "fsid=root" ];
extraLanOpts = [ "ro" ];
extraVpnOpts = [ "rw" "no_root_squash" ];
})
(fmtExport {
# provide /media as an explicit export. NFSv4 can transparently mount a subdir of an export, but NFSv3 can only mount paths which are exports.
export = "/media";
baseOpts = [ "crossmnt" ]; # TODO: is crossmnt needed here?
extraLanOpts = [ "ro" ];
extraVpnOpts = [ "rw" "no_root_squash" ];
})
(fmtExport {
export = "/playground";
export = "/var/export/playground";
baseOpts = [
"mountpoint"
"all_squash"

View File

@ -0,0 +1,184 @@
# docs:
# - <https://github.com/drakkan/sftpgo>
# - config options: <https://github.com/drakkan/sftpgo/blob/main/docs/full-configuration.md>
# - config defaults: <https://github.com/drakkan/sftpgo/blob/main/sftpgo.json>
# - nixos options: <repo:nixos/nixpkgs:nixos/modules/services/web-apps/sftpgo.nix>
# - nixos example: <repo:nixos/nixpkgs:nixos/tests/sftpgo.nix>
#
# sftpgo is an FTP server that also supports WebDAV, SFTP, and web clients.
#
# TODO: change umask so sftpgo-created files default to 644.
# - it does indeed appear that the 600 is not something sftpgo is explicitly doing.
{ config, lib, pkgs, sane-lib, ... }:
let
# user permissions:
# - see <repo:drakkan/sftpgo:internal/dataprovider/user.go>
# - "*" = grant all permissions
# - read-only perms:
# - "list" = list files and directories
# - "download"
# - rw perms:
# - "upload"
# - "overwrite" = allow uploads to replace existing files
# - "delete" = delete files and directories
# - "delete_files"
# - "delete_dirs"
# - "rename" = rename files and directories
# - "rename_files"
# - "rename_dirs"
# - "create_dirs"
# - "create_symlinks"
# - "chmod"
# - "chown"
# - "chtimes" = change atime/mtime (access and modification times)
#
# home_dir:
# - it seems (empirically) that a user can't cd above their home directory.
# though i don't have a reference for that in the docs.
authResponseSuccess = {
status = 1;
username = "anonymous";
expiration_date = 0;
home_dir = "/var/export";
# uid/gid 0 means to inherit sftpgo uid.
# - i.e. users can't read files which Linux user `sftpgo` can't read
# - uploaded files belong to Linux user `sftpgo`
# other uid/gid values aren't possible for localfs backend, unless i let sftpgo use `sudo`.
uid = 0;
gid = 0;
# uid = 65534;
# gid = 65534;
max_sessions = 0;
# quota_*: 0 means to not use SFTP's quota system
quota_size = 0;
quota_files = 0;
permissions = {
"/" = [ "list" "download" ];
"/playground" = [
# read-only:
"list"
"download"
# write:
"upload"
"overwrite"
"delete"
"rename"
"create_dirs"
"create_symlinks"
# intentionally omitted:
# "chmod"
# "chown"
# "chtimes"
];
};
upload_bandwidth = 0;
download_bandwidth = 0;
filters = {
allowed_ip = [];
denied_ip = [];
};
public_keys = [];
# other fields:
# ? groups
# ? virtual_folders
};
authResponseFail = {
username = "";
};
authSuccessJson = pkgs.writeText "sftp-auth-success.json" (builtins.toJSON authResponseSuccess);
authFailJson = pkgs.writeText "sftp-auth-fail.json" (builtins.toJSON authResponseFail);
unwrappedAuthProgram = pkgs.static-nix-shell.mkBash {
pname = "sftpgo_external_auth_hook";
src = ./.;
pkgs = [ "coreutils" ];
};
authProgram = pkgs.writeShellScript "sftpgo-auth-hook" ''
${unwrappedAuthProgram}/bin/sftpgo_external_auth_hook ${authFailJson} ${authSuccessJson}
'';
in
{
# Client initiates a FTP "control connection" on port 21.
# - this handles the client -> server commands, and the server -> client status, but not the actual data
# - file data, directory listings, etc need to be transferred on an ephemeral "data port".
# - 50000-50100 is a common port range for this.
sane.ports.ports = {
"21" = {
protocol = [ "tcp" ];
visibleTo.lan = true;
description = "colin-FTP server";
};
} // (sane-lib.mapToAttrs
(port: {
name = builtins.toString port;
value = {
protocol = [ "tcp" ];
visibleTo.lan = true;
description = "colin-FTP server data port range";
};
})
(lib.range 50000 50100)
);
services.sftpgo = {
enable = true;
group = "export";
settings = {
ftpd = {
bindings = [
{
# binding this means any wireguard client can connect
address = "10.0.10.5";
port = 21;
debug = true;
}
{
# binding this means any LAN client can connect
address = "10.78.79.51";
port = 21;
debug = true;
}
];
# active mode is susceptible to "bounce attacks", without much benefit over passive mode
disable_active_mode = true;
hash_support = true;
passive_port_range = {
start = 50000;
end = 50100;
};
banner = ''
Welcome, friends, to Colin's read-only FTP server! Also available via NFS on the same host.
Username: "anonymous"
Password: "anonymous"
CONFIGURE YOUR CLIENT FOR "PASSIVE" mode, e.g. `ftp --passive uninsane.org`
Please let me know if anything's broken or not as it should be. Otherwise, browse and DL freely :)
'';
};
data_provider = {
driver = "memory";
external_auth_hook = "${authProgram}";
# track_quota:
# - 0: disable quota tracking
# - 1: quota is updated on every upload/delete, even if user has no quota restriction
# - 2: quota is updated on every upload/delete, but only if user/folder has a quota restriction (default, i think)
# track_quota = 2;
};
};
};
users.users.sftpgo.extraGroups = [ "export" ];
systemd.services.sftpgo.serviceConfig = {
ReadOnlyPaths = [ "/var/export" ];
ReadWritePaths = [ "/var/export/playground" ];
after = [ "network-online.target" ];
wants = [ "network-online.target" ];
Restart = "always";
RestartSec = "20s";
};
}
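
A hedged sketch of connecting to the read-only FTP service configured above from a LAN client (the address is one of the bind addresses listed in the config; the banner itself recommends passive mode):

    # curl speaks FTP and uses passive mode by default; list the export root anonymously
    curl ftp://anonymous:anonymous@10.78.79.51/
    # or interactively, forcing passive mode so data connections use ports 50000-50100
    ftp --passive 10.78.79.51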

View File

@ -1,167 +0,0 @@
# docs:
# - <https://github.com/drakkan/sftpgo>
# - config options: <https://github.com/drakkan/sftpgo/blob/main/docs/full-configuration.md>
# - config defaults: <https://github.com/drakkan/sftpgo/blob/main/sftpgo.json>
# - nixos options: <repo:nixos/nixpkgs:nixos/modules/services/web-apps/sftpgo.nix>
# - nixos example: <repo:nixos/nixpkgs:nixos/tests/sftpgo.nix>
#
# sftpgo is an FTP server that also supports WebDAV, SFTP, and web clients.
{ config, lib, pkgs, sane-lib, ... }:
let
external_auth_hook = pkgs.static-nix-shell.mkPython3Bin {
pname = "external_auth_hook";
srcRoot = ./.;
};
# Client initiates a FTP "control connection" on port 21.
# - this handles the client -> server commands, and the server -> client status, but not the actual data
# - file data, directory listings, etc need to be transferred on an ephemeral "data port".
# - 50000-50100 is a common port range for this.
# 50000 is used by soulseek.
passiveStart = 50050;
passiveEnd = 50070;
in
{
sane.ports.ports = {
"21" = {
protocol = [ "tcp" ];
visibleTo.lan = true;
# visibleTo.wan = true;
description = "colin-FTP server";
};
"990" = {
protocol = [ "tcp" ];
visibleTo.lan = true;
visibleTo.wan = true;
description = "colin-FTPS server";
};
} // (sane-lib.mapToAttrs
(port: {
name = builtins.toString port;
value = {
protocol = [ "tcp" ];
visibleTo.lan = true;
visibleTo.wan = true;
description = "colin-FTP server data port range";
};
})
(lib.range passiveStart passiveEnd)
);
# use nginx/acme to produce a cert for FTPS
services.nginx.virtualHosts."ftp.uninsane.org" = {
addSSL = true;
enableACME = true;
};
sane.dns.zones."uninsane.org".inet.CNAME."ftp" = "native";
services.sftpgo = {
enable = true;
group = "export";
package = lib.warnIf (lib.versionOlder "2.5.6" pkgs.sftpgo.version) "sftpgo update: safe to use nixpkgs' sftpgo but keep my own `patches`" pkgs.buildGoModule {
inherit (pkgs.sftpgo) name ldflags nativeBuildInputs doCheck subPackages postInstall passthru meta;
version = "2.5.6-unstable-2024-04-18";
src = pkgs.fetchFromGitHub {
# need to use > 2.5.6 for sftpgo_safe_fileinfo.patch to apply
owner = "drakkan";
repo = "sftpgo";
rev = "950cf67e4c03a12c7e439802cabbb0b42d4ee5f5";
hash = "sha256-UfiFd9NK3DdZ1J+FPGZrM7r2mo9xlKi0dsSlLEinYXM=";
};
vendorHash = "sha256-n1/9A2em3BCtFX+132ualh4NQwkwewMxYIMOphJEamg=";
patches = (pkgs.sftpgo.patches or []) ++ [
# fix for compatibility with kodi:
# ftp LIST operation returns entries over-the-wire like:
# - dgrwxrwxr-x 1 ftp ftp 9 Apr 9 15:05 Videos
# however not all clients understand all mode bits (like that `g`, indicating SGID / group sticky bit).
# instead, only send mode bits which are well-understood.
# the full set of bits, from which i filter, is found here: <https://pkg.go.dev/io/fs#FileMode>
./safe_fileinfo.patch
];
};
settings = {
ftpd = {
bindings = [
{
# binding this means any wireguard client can connect
address = "10.0.10.5";
port = 21;
debug = true;
}
{
# binding this means any LAN client can connect (also WAN traffic forwarded from the gateway)
address = "10.78.79.51";
port = 21;
debug = true;
}
{
# binding this means any wireguard client can connect
address = "10.0.10.5";
port = 990;
debug = true;
tls_mode = 2; # 2 = "implicit FTPS": client negotiates TLS before any FTP command.
}
{
# binding this means any LAN client can connect (also WAN traffic forwarded from the gateway)
address = "10.78.79.51";
port = 990;
debug = true;
tls_mode = 2; # 2 = "implicit FTPS": client negotiates TLS before any FTP command.
}
];
# active mode is susceptible to "bounce attacks", without much benefit over passive mode
disable_active_mode = true;
hash_support = true;
passive_port_range = {
start = passiveStart;
end = passiveEnd;
};
certificate_file = "/var/lib/acme/ftp.uninsane.org/full.pem";
certificate_key_file = "/var/lib/acme/ftp.uninsane.org/key.pem";
banner = ''
Welcome, friends, to Colin's FTP server! Also available via NFS on the same host, but LAN-only.
Read-only access (LAN-restricted):
Username: "anonymous"
Password: "anonymous"
CONFIGURE YOUR CLIENT FOR "PASSIVE" MODE, e.g. `ftp --passive ftp.uninsane.org`.
Please let me know if anything's broken or not as it should be. Otherwise, browse and transfer freely :)
'';
};
data_provider = {
driver = "memory";
external_auth_hook = "${external_auth_hook}/bin/external_auth_hook";
# track_quota:
# - 0: disable quota tracking
# - 1: quota is updated on every upload/delete, even if user has no quota restriction
# - 2: quota is updated on every upload/delete, but only if user/folder has a quota restriction (default, i think)
# track_quota = 2;
};
};
};
users.users.sftpgo.extraGroups = [
"export"
"media"
"nginx" # to access certs
];
systemd.services.sftpgo = {
after = [ "network-online.target" ];
wants = [ "network-online.target" ];
serviceConfig = {
ReadWritePaths = [ "/var/export" ];
Restart = "always";
RestartSec = "20s";
UMask = lib.mkForce "0002";
};
};
}

View File

@ -1,157 +0,0 @@
#!/usr/bin/env nix-shell
#!nix-shell -i python3 -p "python3.withPackages (ps: [ ])"
# vim: set filetype=python :
#
# available environment variables:
# - SFTPGO_AUTHD_USERNAME
# - SFTPGO_AUTHD_USER
# - SFTPGO_AUTHD_IP
# - SFTPGO_AUTHD_PROTOCOL = { "DAV", "FTP", "HTTP", "SSH" }
# - SFTPGO_AUTHD_PASSWORD
# - SFTPGO_AUTHD_PUBLIC_KEY
# - SFTPGO_AUTHD_KEYBOARD_INTERACTIVE
# - SFTPGO_AUTHD_TLS_CERT
#
# user permissions:
# - see <repo:drakkan/sftpgo:internal/dataprovider/user.go>
# - "*" = grant all permissions
# - read-only perms:
# - "list" = list files and directories
# - "download"
# - rw perms:
# - "upload"
# - "overwrite" = allow uploads to replace existing files
# - "delete" = delete files and directories
# - "delete_files"
# - "delete_dirs"
# - "rename" = rename files and directories
# - "rename_files"
# - "rename_dirs"
# - "create_dirs"
# - "create_symlinks"
# - "chmod"
# - "chown"
# - "chtimes" = change atime/mtime (access and modification times)
#
# home_dir:
# - it seems (empirically) that a user can't cd above their home directory.
# though i don't have a reference for that in the docs.
import crypt
import json
import os
from hmac import compare_digest
authFail = dict(username="")
PERM_RO = [ "list", "download" ]
PERM_RW = [
# read-only:
"list",
"download",
# write:
"upload",
"overwrite",
"delete",
"rename",
"create_dirs",
"create_symlinks",
# intentionally omitted:
# "chmod",
# "chown",
# "chtimes",
]
TRUSTED_CREDS = [
# /etc/shadow style creds.
# mkpasswd -m sha-512
# $<method>$<salt>$<hash>
"$6$Zq3c2u4ghUH4S6EP$pOuRt13sEKfX31OqPbbd1LuhS21C9MICMc94iRdTAgdAcJ9h95gQH/6Jf6Ie4Obb0oxQtojRJ1Pd/9QHOlFMW." #< m. rocket boy
]
def mkAuthOk(username: str, permissions: dict[str, list[str]]) -> dict:
return dict(
status = 1,
username = username,
expiration_date = 0,
home_dir = "/var/export",
# uid/gid 0 means to inherit sftpgo uid.
# - i.e. users can't read files which Linux user `sftpgo` can't read
# - uploaded files belong to Linux user `sftpgo`
# other uid/gid values aren't possible for localfs backend, unless i let sftpgo use `sudo`.
uid = 0,
gid = 0,
# uid = 65534,
# gid = 65534,
max_sessions = 0,
# quota_*: 0 means to not use SFTP's quota system
quota_size = 0,
quota_files = 0,
permissions = permissions,
upload_bandwidth = 0,
download_bandwidth = 0,
filters = dict(
allowed_ip = [],
denied_ip = [],
),
public_keys = [],
# other fields:
# ? groups
# ? virtual_folders
)
def isLan(ip: str) -> bool:
return ip.startswith("10.78.76.") \
or ip.startswith("10.78.77.") \
or ip.startswith("10.78.78.") \
or ip.startswith("10.78.79.")
def isWireguard(ip: str) -> bool:
return ip.startswith("10.0.10.")
def isTrustedCred(password: str) -> bool:
for cred in TRUSTED_CREDS:
_, method, salt, hash_ = cred.split("$")
# assert method == "6", f"unrecognized crypt entry: {cred}"
if crypt.crypt(password, f"${method}${salt}") == cred:
return True
return False
def getAuthResponse(ip: str, username: str, password: str) -> dict:
"""
return a sftpgo auth response either denying the user or approving them
with a set of permissions.
"""
if isTrustedCred(password) and username != "colin":
# allow r/w access from those with a special token
return mkAuthOk(username, permissions = {
"/": PERM_RW,
"/playground": PERM_RW,
})
if isWireguard(ip):
# allow any user from wireguard
return mkAuthOk(username, permissions = {
"/": PERM_RW,
"/playground": PERM_RW,
})
if isLan(ip):
if username == "anonymous":
# allow anonymous users on the LAN
return mkAuthOk("anonymous", permissions = {
"/": PERM_RO,
"/playground": PERM_RW,
})
return authFail
def main():
ip = os.environ.get("SFTPGO_AUTHD_IP", "")
username = os.environ.get("SFTPGO_AUTHD_USERNAME", "")
password = os.environ.get("SFTPGO_AUTHD_PASSWORD", "")
resp = getAuthResponse(ip, username, password)
print(json.dumps(resp))
if __name__ == "__main__":
main()
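
A hedged sketch of minting a new TRUSTED_CREDS entry and exercising the hook above locally; the hook path is illustrative, and the environment variables are the ones listed in this script's header:

    # mint an /etc/shadow-style sha-512 crypt string and paste it into TRUSTED_CREDS
    mkpasswd -m sha-512
    # simulate an sftpgo auth request against the hook; it prints the JSON auth response
    SFTPGO_AUTHD_IP=10.78.79.100 \
    SFTPGO_AUTHD_USERNAME=friend \
    SFTPGO_AUTHD_PASSWORD='the-cleartext-password' \
      ./external_auth_hook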

View File

@ -1,32 +0,0 @@
diff --git a/internal/ftpd/handler.go b/internal/ftpd/handler.go
index 036c3977..33211261 100644
--- a/internal/ftpd/handler.go
+++ b/internal/ftpd/handler.go
@@ -169,7 +169,7 @@ func (c *Connection) Stat(name string) (os.FileInfo, error) {
}
return nil, err
}
- return fi, nil
+ return vfs.NewFileInfo(name, fi.IsDir(), fi.Size(), fi.ModTime(), false), nil
}
// Name returns the name of this connection
@@ -315,7 +315,17 @@ func (c *Connection) ReadDir(name string) (ftpserver.DirLister, error) {
}, nil
}
- return c.ListDir(name)
+ lister, err := c.ListDir(name)
+ if err != nil {
+ return nil, err
+ }
+ return &patternDirLister{
+ DirLister: lister,
+ pattern: "*",
+ lastCommand: c.clientContext.GetLastCommand(),
+ dirName: name,
+ connectionPath: c.clientContext.Path(),
+ }, nil
}
// GetHandle implements ClientDriverExtentionFileTransfer

View File

@ -0,0 +1,23 @@
#!/usr/bin/env nix-shell
#!nix-shell -i bash -p coreutils
# vim: set filetype=bash :
#
# available environment variables:
# - SFTPGO_AUTHD_USERNAME
# - SFTPGO_AUTHD_USER
# - SFTPGO_AUTHD_IP
# - SFTPGO_AUTHD_PROTOCOL = { "DAV", "FTP", "HTTP", "SSH" }
# - SFTPGO_AUTHD_PASSWORD
# - SFTPGO_AUTHD_PUBLIC_KEY
# - SFTPGO_AUTHD_KEYBOARD_INTERACTIVE
# - SFTPGO_AUTHD_TLS_CERT
#
#
# call with <script_name> /path/to/fail/response.json /path/to/success/response.json
if [ "$SFTPGO_AUTHD_USERNAME" = "anonymous" ]; then
cat "$2"
else
cat "$1"
fi
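
A quick hedged test of the hook above (file names are illustrative; per the calling convention noted in the comment, argument 1 is the failure response and argument 2 is the success response):

    echo '{"username":""}' > fail.json
    echo '{"status":1,"username":"anonymous"}' > success.json
    SFTPGO_AUTHD_USERNAME=anonymous ./sftpgo_external_auth_hook fail.json success.json  # prints success.json
    SFTPGO_AUTHD_USERNAME=somebody  ./sftpgo_external_auth_hook fail.json success.json  # prints fail.json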

View File

@ -16,7 +16,7 @@
mode = "0400";
};
sane.persist.sys.byStore.plaintext = [
{ user = "freshrss"; group = "freshrss"; path = "/var/lib/freshrss"; method = "bind"; }
{ user = "freshrss"; group = "freshrss"; path = "/var/lib/freshrss"; }
];
services.freshrss.enable = true;

View File

@ -4,7 +4,7 @@
{
sane.persist.sys.byStore.plaintext = [
# TODO: mode? could be more granular
{ user = "git"; group = "gitea"; path = "/var/lib/gitea"; method = "bind"; }
{ user = "git"; group = "gitea"; path = "/var/lib/gitea"; }
];
services.gitea.enable = true;
services.gitea.user = "git"; # default is 'gitea'
@ -13,10 +13,6 @@
services.gitea.appName = "Perfectly Sane Git";
# services.gitea.disableRegistration = true;
services.gitea.database.createDatabase = false; #< silence warning which wants db user and name to be equal
# TODO: remove this after merge: <https://github.com/NixOS/nixpkgs/pull/268849>
services.gitea.database.socket = "/run/postgresql"; #< would have been set if createDatabase = true
# gitea doesn't create the git user
users.users.git = {
description = "Gitea Service";
@ -100,24 +96,6 @@
locations."/" = {
proxyPass = "http://127.0.0.1:3000";
};
# gitea serves all `raw` files as content-type: plain, but i'd like to serve them as their actual content type.
# or at least, enough to make specific pages viewable (serving unoriginal content as arbitrary content type is dangerous).
locations."~ ^/colin/phone-case-cq/raw/.*.html" = {
proxyPass = "http://127.0.0.1:3000";
extraConfig = ''
proxy_hide_header Content-Type;
default_type text/html;
add_header Content-Type text/html;
'';
};
locations."~ ^/colin/phone-case-cq/raw/.*.js" = {
proxyPass = "http://127.0.0.1:3000";
extraConfig = ''
proxy_hide_header Content-Type;
default_type text/html;
add_header Content-Type text/javascript;
'';
};
};
sane.dns.zones."uninsane.org".inet.CNAME."git" = "native";

View File

@ -20,7 +20,7 @@
--ignore-panel=HOSTS \
--ws-url=wss://sink.uninsane.org:443/ws \
--port=7890 \
-o /var/lib/goaccess/index.html
-o /var/lib/uninsane/sink/index.html
'';
ExecReload = "${pkgs.coreutils}/bin/kill -HUP $MAINPID";
Type = "simple";
@ -28,19 +28,17 @@
RestartSec = "10s";
# hardening
# TODO: run as `goaccess` user and add `goaccess` user to group `nginx`.
WorkingDirectory = "/tmp";
NoNewPrivileges = true;
PrivateDevices = "yes";
PrivateTmp = true;
ProtectHome = "read-only";
ProtectSystem = "strict";
SystemCallFilter = "~@clock @cpu-emulation @debug @keyring @memlock @module @mount @obsolete @privileged @reboot @resources @setuid @swap @raw-io";
ReadOnlyPaths = "/";
ReadWritePaths = [ "/proc/self" "/var/lib/uninsane/sink" ];
PrivateDevices = "yes";
ProtectKernelModules = "yes";
ProtectKernelTunables = "yes";
ProtectSystem = "strict";
ReadOnlyPaths = [ "/var/log/nginx" ];
ReadWritePaths = [ "/proc/self" "/var/lib/goaccess" ];
StateDirectory = "goaccess";
SystemCallFilter = "~@clock @cpu-emulation @debug @keyring @memlock @module @mount @obsolete @privileged @reboot @resources @setuid @swap @raw-io";
WorkingDirectory = "/var/lib/goaccess";
};
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
@ -51,7 +49,7 @@
addSSL = true;
enableACME = true;
# inherit kTLS;
root = "/var/lib/goaccess";
root = "/var/lib/uninsane/sink";
locations."/ws" = {
proxyPass = "http://127.0.0.1:7890";

View File

@ -12,7 +12,7 @@ lib.mkIf false # i don't actively use ipfs anymore
{
sane.persist.sys.byStore.plaintext = [
# TODO: mode? could be more granular
{ user = "261"; group = "261"; path = "/var/lib/ipfs"; method = "bind"; }
{ user = "261"; group = "261"; path = "/var/lib/ipfs"; }
];
networking.firewall.allowedTCPPorts = [ 4001 ];

View File

@ -1,9 +1,9 @@
{ lib, pkgs, ... }:
{ ... }:
{
sane.persist.sys.byStore.plaintext = [
# TODO: mode? we only need this to save Indexer creds ==> migrate to config?
{ user = "root"; group = "root"; path = "/var/lib/jackett"; method = "bind"; }
{ user = "root"; group = "root"; path = "/var/lib/jackett"; }
];
services.jackett.enable = true;
@ -12,8 +12,6 @@
systemd.services.jackett.serviceConfig = {
# run this behind the OVPN static VPN
NetworkNamespacePath = "/run/netns/ovpns";
ExecStartPre = [ "${lib.getExe pkgs.sane-scripts.ip-check} --no-upnp --expect 185.157.162.178" ]; # abort if public IP is not as expected
# patch jackett to listen on the public interfaces
# ExecStart = lib.mkForce "${pkgs.jackett}/bin/Jackett --NoUpdates --DataFolder /var/lib/jackett/.config/Jackett --ListenPublic";
};

View File

@ -41,7 +41,7 @@
};
sane.persist.sys.byStore.plaintext = [
{ user = "jellyfin"; group = "jellyfin"; mode = "0700"; path = "/var/lib/jellyfin"; method = "bind"; }
{ user = "jellyfin"; group = "jellyfin"; mode = "0700"; path = "/var/lib/jellyfin"; }
];
sane.fs."/var/lib/jellyfin/config/logging.json" = {
# "Emby.Dlna" logging: <https://jellyfin.org/docs/general/networking/dlna>
@ -75,7 +75,7 @@
# Jellyfin multimedia server
  # this is mostly taken from the official jellyfin.org docs
services.nginx.virtualHosts."jelly.uninsane.org" = {
forceSSL = true;
addSSL = true;
enableACME = true;
# inherit kTLS;

View File

@ -1,19 +1,9 @@
# how to update wikipedia snapshot:
# - browse for later snapshots:
# - <https://mirror.accum.se/mirror/wikimedia.org/other/kiwix/zim/wikipedia>
# - DL directly, or via rsync (resumable):
# - `rsync --progress --append-verify rsync://mirror.accum.se/mirror/wikimedia.org/other/kiwix/zim/wikipedia/wikipedia_en_all_maxi_2022-05.zim .`
{ ... }:
{
sane.persist.sys.byStore.ext = [
{ user = "colin"; group = "users"; path = "/var/lib/kiwix"; method = "bind"; }
];
sane.services.kiwix-serve = {
enable = true;
port = 8013;
zimPaths = [ "/var/lib/kiwix/wikipedia_en_all_maxi_2023-11.zim" ];
zimPaths = [ "/var/lib/uninsane/www-archive/wikipedia_en_all_maxi_2022-05.zim" ];
};
services.nginx.virtualHosts."w.uninsane.org" = {

View File

@ -5,14 +5,14 @@ let
in
{
sane.persist.sys.byStore.plaintext = [
{ inherit user group; mode = "0700"; path = stateDir; method = "bind"; }
{ inherit user group; mode = "0700"; path = stateDir; }
];
services.komga.enable = true;
services.komga.port = 11319; # chosen at random
services.nginx.virtualHosts."komga.uninsane.org" = {
forceSSL = true;
addSSL = true;
enableACME = true;
locations."/" = {
proxyPass = "http://127.0.0.1:${builtins.toString port}";

View File

@ -10,21 +10,16 @@ let
uiPort = 1234; # default ui port is 1234
backendPort = 8536; # default backend port is 8536
#^ i guess the "backend" port is used for federation?
pict-rs = pkgs.pict-rs;
# pict-rs = pkgs.pict-rs.overrideAttrs (upstream: {
# # as of v0.4.2, all non-GIF video is forcibly transcoded.
# # that breaks lemmy, because of the request latency.
# # and it eats up hella CPU.
# # pict-rs is iffy around video altogether: mp4 seems the best supported.
# # XXX: this patch no longer applies after 0.5.10 -> 0.5.11 update.
# # git log is hard to parse, but *suggests* that video is natively supported
# # better than in the 0.4.2 days, e.g. 5fd59fc5b42d31559120dc28bfef4e5002fb509e
# # "Change commandline flag to allow disabling video, since it is enabled by default"
# postPatch = (upstream.postPatch or "") + ''
# substituteInPlace src/validate.rs \
# --replace 'if transcode_options.needs_reencode() {' 'if false {'
# '';
# });
pict-rs = pkgs.pict-rs.overrideAttrs (upstream: {
# as of v 0.4.2, all non-GIF video is forcibly transcoded.
# that breaks lemmy, because of the request latency.
# and it eats up hella CPU.
# pict-rs is iffy around video altogether: mp4 seems the best supported.
postPatch = (upstream.postPatch or "") + ''
substituteInPlace src/validate.rs \
--replace 'if transcode_options.needs_reencode() {' 'if false {'
'';
});
in {
services.lemmy = {
enable = true;
@ -83,8 +78,8 @@ in {
# CLI args: <https://git.asonix.dog/asonix/pict-rs#user-content-running>
systemd.services.pict-rs.serviceConfig.ExecStart = lib.mkForce (lib.concatStringsSep " " [
"${lib.getBin pict-rs}/bin/pict-rs run"
"--media-video-max-frame-count" (builtins.toString (30*60*60))
"--media-max-frame-count" (builtins.toString (30*60*60))
"--media-process-timeout 120"
"--media-video-allow-audio" # allow audio
"--media-enable-full-video true" # allow audio
]);
}

View File

@ -21,7 +21,7 @@
];
sane.persist.sys.byStore.plaintext = [
{ user = "matrix-synapse"; group = "matrix-synapse"; path = "/var/lib/matrix-synapse"; method = "bind"; }
{ user = "matrix-synapse"; group = "matrix-synapse"; path = "/var/lib/matrix-synapse"; }
];
services.matrix-synapse.enable = true;
services.matrix-synapse.settings = {

View File

@ -6,7 +6,7 @@
lib.mkIf false
{
sane.persist.sys.byStore.plaintext = [
{ user = "matrix-synapse"; group = "matrix-synapse"; path = "/var/lib/mx-puppet-discord"; method = "bind"; }
{ user = "matrix-synapse"; group = "matrix-synapse"; path = "/var/lib/mx-puppet-discord"; }
];
services.matrix-synapse.settings.app_service_config_files = [

View File

@ -103,7 +103,7 @@ in
sane.persist.sys.byStore.plaintext = [
# TODO: mode?
{ user = "matrix-appservice-irc"; group = "matrix-appservice-irc"; path = "/var/lib/matrix-appservice-irc"; method = "bind"; }
{ user = "matrix-appservice-irc"; group = "matrix-appservice-irc"; path = "/var/lib/matrix-appservice-irc"; }
];
# XXX: matrix-appservice-irc PreStart tries to chgrp the registration.yml to matrix-synapse,

View File

@ -1,12 +1,10 @@
# config options:
# - <https://github.com/mautrix/signal/blob/master/mautrix_signal/example-config.yaml>
{ config, lib, pkgs, ... }:
lib.mkIf false # disabled 2024/01/11: i don't use it, and pkgs.mautrix-signal had some API changes
{ config, pkgs, ... }:
{
sane.persist.sys.byStore.plaintext = [
{ user = "mautrix-signal"; group = "mautrix-signal"; path = "/var/lib/mautrix-signal"; method = "bind"; }
{ user = "signald"; group = "signald"; path = "/var/lib/signald"; method = "bind"; }
{ user = "mautrix-signal"; group = "mautrix-signal"; path = "/var/lib/mautrix-signal"; }
{ user = "signald"; group = "signald"; path = "/var/lib/signald"; }
];
# allow synapse to read the registration file

View File

@ -1,16 +1,15 @@
{ lib, ... }:
lib.mkIf false #< i don't actively use navidrome
{
sane.persist.sys.byStore.plaintext = [
{ user = "navidrome"; group = "navidrome"; path = "/var/lib/navidrome"; method = "bind"; }
{ user = "navidrome"; group = "navidrome"; path = "/var/lib/navidrome"; }
];
services.navidrome.enable = true;
services.navidrome.settings = {
# docs: https://www.navidrome.org/docs/usage/configuration-options/
Address = "127.0.0.1";
Port = 4533;
MusicFolder = "/var/media/Music";
MusicFolder = "/var/lib/uninsane/media/Music";
    CoverArtPriority = "*.jpg, *.JPG, *.png, *.PNG, embedded";
AutoImportPlaylists = false;
ScanSchedule = "@every 1h";

View File

@ -54,10 +54,8 @@ in
services.nginx.recommendedOptimisation = true;
# web blog/personal site
# alternative way to link stuff into the share:
# sane.fs."/var/www/sites/uninsane.org/share/Ubunchu".mount.bind = "/var/media/Books/Visual/HiroshiSeo/Ubunchu";
# sane.fs."/var/media/Books/Visual/HiroshiSeo/Ubunchu".dir = {};
services.nginx.virtualHosts."uninsane.org" = publog {
root = "${pkgs.uninsane-dot-org}/share/uninsane-dot-org";
# a lot of places hardcode https://uninsane.org,
# and then when we mix http + non-https, we get CORS violations
# and things don't look right. so force SSL.
@ -67,38 +65,9 @@ in
# for OCSP stapling
sslTrustedCertificate = "${pkgs.cacert}/etc/ssl/certs/ca-bundle.crt";
locations."/" = {
root = "${pkgs.uninsane-dot-org}/share/uninsane-dot-org";
tryFiles = "$uri $uri/ @fallback";
};
# unversioned files
locations."@fallback" = {
root = "/var/www/sites/uninsane.org";
};
# uninsane.org/share/foo => /var/www/sites/uninsane.org/share/foo.
# special-cased to enable directory listings
locations."/share" = {
root = "/var/www/sites/uninsane.org";
extraConfig = ''
# autoindex => render directory listings
autoindex on;
# don't follow any symlinks when serving files
# otherwise it allows a directory escape
disable_symlinks on;
'';
};
locations."/share/Milkbags/" = {
alias = "/var/media/Videos/Milkbags/";
extraConfig = ''
# autoindex => render directory listings
autoindex on;
# don't follow any symlinks when serving files
# otherwise it allows a directory escape
disable_symlinks on;
'';
};
# uninsane.org/share/foo => /var/lib/uninsane/root/share/foo.
# yes, nginx does not strip the prefix when evaluating against the root.
locations."/share".root = "/var/lib/uninsane/root";
# allow matrix users to discover that @user:uninsane.org is reachable via matrix.uninsane.org
locations."= /.well-known/matrix/server".extraConfig =
@ -139,19 +108,6 @@ in
# proxyPass = "http://127.0.0.1:4000";
# extraConfig = pleromaExtraConfig;
# };
# redirect common feed URIs to the canonical feed
locations."= /atom".extraConfig = "return 301 /atom.xml;";
locations."= /feed".extraConfig = "return 301 /atom.xml;";
locations."= /feed.xml".extraConfig = "return 301 /atom.xml;";
locations."= /rss".extraConfig = "return 301 /atom.xml;";
locations."= /rss.xml".extraConfig = "return 301 /atom.xml;";
locations."= /blog/atom".extraConfig = "return 301 /atom.xml;";
locations."= /blog/atom.xml".extraConfig = "return 301 /atom.xml;";
locations."= /blog/feed".extraConfig = "return 301 /atom.xml;";
locations."= /blog/feed.xml".extraConfig = "return 301 /atom.xml;";
locations."= /blog/rss".extraConfig = "return 301 /atom.xml;";
locations."= /blog/rss.xml".extraConfig = "return 301 /atom.xml;";
};
@ -179,8 +135,9 @@ in
security.acme.defaults.email = "admin.acme@uninsane.org";
sane.persist.sys.byStore.plaintext = [
{ user = "acme"; group = "acme"; path = "/var/lib/acme"; method = "bind"; }
{ user = "colin"; group = "users"; path = "/var/www/sites"; method = "bind"; }
# TODO: mode?
{ user = "acme"; group = "acme"; path = "/var/lib/acme"; }
{ user = "colin"; group = "users"; path = "/var/www/sites"; }
];
# let's encrypt default chain looks like:

View File

@ -1,26 +0,0 @@
{ lib, pkgs, ... }:
lib.optionalAttrs false # disabled until i can be sure it's not gonna OOM my server in the middle of the night
{
systemd.services.nixos-prebuild = {
description = "build a nixos image with all updated deps";
path = with pkgs; [ coreutils git nix ];
script = ''
working=$(mktemp -d /tmp/nixos-prebuild.XXXXXX)
pushd "$working"
git clone https://git.uninsane.org/colin/nix-files.git \
&& cd nix-files \
&& nix flake update \
|| true
RC=$(nix run "$working/nix-files#check" -- -j1 --cores 5 --builders "")
popd
rm -rf "$working"
exit "$RC"
'';
};
systemd.timers.nixos-prebuild = {
wantedBy = [ "multi-user.target" ];
timerConfig.OnCalendar = "11,23:00:00";
};
}

View File

@ -0,0 +1,21 @@
{ config, ... }:
{
services.nginx.virtualHosts."nixcache.uninsane.org" = {
addSSL = true;
enableACME = true;
# inherit kTLS;
# serverAliases = [ "nixcache" ];
locations."/".extraConfig = ''
proxy_pass http://localhost:${toString config.services.nix-serve.port};
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
'';
};
sane.dns.zones."uninsane.org".inet.CNAME."nixcache" = "native";
sane.services.nixserve.enable = true;
sane.services.nixserve.secretKeyFile = config.sops.secrets.nix_serve_privkey.path;
}

View File

@ -34,7 +34,7 @@ in
# not 100% necessary to persist this, but ntfy does keep a 12hr (by default) cache
# for pushing notifications to users who become offline.
# ACLs also live here.
{ user = "ntfy-sh"; group ="ntfy-sh"; path = "/var/lib/ntfy-sh"; method = "bind"; }
{ user = "ntfy-sh"; group ="ntfy-sh"; path = "/var/lib/ntfy-sh"; }
];
services.ntfy-sh.enable = true;

View File

@ -49,7 +49,7 @@ in
type = types.package;
default = pkgs.static-nix-shell.mkPython3Bin {
pname = "ntfy-waiter";
srcRoot = ./.;
src = ./.;
pkgs = [ "ntfy-sh" ];
};
description = ''
@ -64,7 +64,7 @@ in
protocol = [ "tcp" ];
visibleTo.lan = true;
visibleTo.wan = true;
description = "colin-notification-waiter-${builtins.toString (port - portLow + 1)}-of-${builtins.toString numPorts}";
description = "colin-notification-waiter-${builtins.toString (port+1)}-of-${builtins.toString numPorts}";
};
}));
systemd.services = lib.mkMerge (builtins.map mkService portRange);

View File

@ -6,7 +6,7 @@ let
in
{
sane.persist.sys.byStore.plaintext = lib.mkIf cfg.enable [
{ user = "pict-rs"; group = "pict-rs"; path = cfg.dataDir; method = "bind"; }
{ user = "pict-rs"; group = "pict-rs"; path = cfg.dataDir; }
];
systemd.services.pict-rs.serviceConfig = {

View File

@ -15,7 +15,7 @@ let
in
{
sane.persist.sys.byStore.plaintext = [
{ user = "pleroma"; group = "pleroma"; path = "/var/lib/pleroma"; method = "bind"; }
{ user = "pleroma"; group = "pleroma"; path = "/var/lib/pleroma"; }
];
services.pleroma.enable = true;
services.pleroma.secretConfigFile = config.sops.secrets.pleroma_secrets.path;
@ -25,7 +25,7 @@ in
config :pleroma, Pleroma.Web.Endpoint,
url: [host: "fed.uninsane.org", scheme: "https", port: 443],
http: [ip: {127, 0, 0, 1}, port: 4040]
http: [ip: {127, 0, 0, 1}, port: 4000]
# secret_key_base: "{secrets.pleroma.secret_key_base}",
# signing_salt: "{secrets.pleroma.signing_salt}"
@ -167,7 +167,7 @@ in
enableACME = true;
# inherit kTLS;
locations."/" = {
proxyPass = "http://127.0.0.1:4040";
proxyPass = "http://127.0.0.1:4000";
recommendedProxySettings = true;
# documented: https://git.pleroma.social/pleroma/pleroma/-/blob/develop/installation/pleroma.nginx
extraConfig = ''

View File

@ -8,7 +8,7 @@ in
{
sane.persist.sys.byStore.plaintext = [
# TODO: mode?
{ user = "postgres"; group = "postgres"; path = "/var/lib/postgresql"; method = "bind"; }
{ user = "postgres"; group = "postgres"; path = "/var/lib/postgresql"; }
];
services.postgresql.enable = true;

View File

@ -57,7 +57,7 @@ let
in
{
sane.persist.sys.byStore.plaintext = [
{ user = "prosody"; group = "prosody"; path = "/var/lib/prosody"; method = "bind"; }
{ user = "prosody"; group = "prosody"; path = "/var/lib/prosody"; }
];
sane.ports.ports."5000" = {
protocol = [ "tcp" ];

View File

@ -1,80 +0,0 @@
# Soulseek daemon (p2p file sharing with an emphasis on Music)
# docs: <https://github.com/slskd/slskd/blob/master/docs/config.md>
#
# config precedence (higher precedence overrules lower precedence):
# - Default Values < Environment Variables < YAML Configuration File < Command Line Arguments
#
# debugging:
# - soulseek is just *flaky*. if you see e.g. DNS errors, even though you can't reproduce them via `dig` or `getent ahostsv4`, just give it ~10 minutes to sort itself out:
# - "Soulseek.AddressException: Failed to resolve address 'vps.slsknet.org': Resource temporarily unavailable"
{ config, lib, pkgs, ... }:
{
sane.persist.sys.byStore.plaintext = [
{ user = "slskd"; group = "media"; path = "/var/lib/slskd"; method = "bind"; }
];
sops.secrets."slskd_env" = {
owner = config.users.users.slskd.name;
mode = "0400";
};
users.users.slskd.extraGroups = [ "media" ];
sane.ports.ports."50300" = {
protocol = [ "tcp" ];
# not visible to WAN: i run this in a separate netns
visibleTo.ovpn = true;
description = "colin-soulseek";
};
sane.dns.zones."uninsane.org".inet.CNAME."soulseek" = "native";
services.nginx.virtualHosts."soulseek.uninsane.org" = {
forceSSL = true;
enableACME = true;
locations."/" = {
proxyPass = "http://10.0.1.6:5030";
proxyWebsockets = true;
};
};
services.slskd.enable = true;
services.slskd.domain = null; # i'll manage nginx for it
services.slskd.group = "media";
# env file, for auth (SLSKD_SLSK_PASSWORD, SLSKD_SLSK_USERNAME)
services.slskd.environmentFile = config.sops.secrets.slskd_env.path;
services.slskd.settings = {
soulseek.diagnostic_level = "Debug"; # one of "None"|"Warning"|"Info"|"Debug"
shares.directories = [
# folders to share
# syntax: <https://github.com/slskd/slskd/blob/master/docs/config.md#directories>
# [Alias]/path/on/disk
# NOTE: Music library is quick to scan; videos take a solid 10min to scan.
# TODO: re-enable the other libraries
# "[Audioooks]/var/media/Books/Audiobooks"
# "[Books]/var/media/Books/Books"
# "[Manga]/var/media/Books/Visual"
# "[games]/var/media/games"
"[Music]/var/media/Music"
# "[Film]/var/media/Videos/Film"
# "[Shows]/var/media/Videos/Shows"
];
# directories.downloads = "..." # TODO
# directories.incomplete = "..." # TODO
# what unit is this? kbps??
global.upload.speed_limit = 32000;
web.logging = true;
# debug = true;
flags.no_logo = true; # don't show logo at start
# flags.volatile = true; # store searches and active transfers in RAM (completed transfers still go to disk). rec for btrfs/zfs
};
systemd.services.slskd.serviceConfig = {
# run this behind the OVPN static VPN
NetworkNamespacePath = "/run/netns/ovpns";
ExecStartPre = [ "${lib.getExe pkgs.sane-scripts.ip-check} --no-upnp --expect 185.157.162.178" ]; # abort if public IP is not as expected
Restart = lib.mkForce "always"; # exits "success" when it fails to connect to soulseek server
RestartSec = "60s";
};
}

View File

@ -1,96 +1,14 @@
{ config, lib, pkgs, ... }:
{ config, pkgs, ... }:
let
# 2023/09/06: nixpkgs `transmission` defaults to old 3.00
# 2024/02/15: some torrent trackers whitelist clients; everyone is still on 3.00 for some reason :|
# some do this via peer-id (e.g. baka); others via user-agent (e.g. MAM).
# peer-id format is essentially the same between 3.00 and 4.x (just swap the MAJOR/MINOR/PATCH numbers).
# user-agent format has changed. `Transmission/3.00` (old) v.s. `TRANSMISSION/MAJ.MIN.PATCH` (new).
realTransmission = pkgs.transmission_4;
realVersion = {
major = lib.versions.major realTransmission.version;
minor = lib.versions.minor realTransmission.version;
patch = lib.versions.patch realTransmission.version;
};
package = realTransmission.overrideAttrs (upstream: {
# `cmakeFlags = [ "-DTR_VERSION_MAJOR=3" ]`, etc, doesn't seem to take effect.
postPatch = (upstream.postPatch or "") + ''
substituteInPlace CMakeLists.txt \
--replace-fail 'TR_VERSION_MAJOR "${realVersion.major}"' 'TR_VERSION_MAJOR "3"' \
--replace-fail 'TR_VERSION_MINOR "${realVersion.minor}"' 'TR_VERSION_MINOR "0"' \
--replace-fail 'TR_VERSION_PATCH "${realVersion.patch}"' 'TR_VERSION_PATCH "0"' \
--replace-fail 'set(TR_USER_AGENT_PREFIX "''${TR_SEMVER}")' 'set(TR_USER_AGENT_PREFIX "3.00")'
'';
});
download-dir = "/var/media/torrents";
torrent-done = pkgs.writeShellApplication {
name = "torrent-done";
runtimeInputs = with pkgs; [
acl
coreutils
findutils
rsync
util-linux
];
text = ''
destructive() {
if [ -n "''${TR_DRY_RUN-}" ]; then
echo "$*"
else
"$@"
fi
}
if [[ "$TR_TORRENT_DIR" =~ ^.*freeleech.*$ ]]; then
# freeleech torrents have no place in my permanent library
echo "freeleech: nothing to do"
exit 0
fi
if ! [[ "$TR_TORRENT_DIR" =~ ^${download-dir}/.*$ ]]; then
echo "unexpected torrent dir, aborting: $TR_TORRENT_DIR"
exit 0
fi
REL_DIR="''${TR_TORRENT_DIR#${download-dir}/}"
MEDIA_DIR="/var/media/$REL_DIR"
destructive mkdir -p "$(dirname "$MEDIA_DIR")"
destructive rsync -arv "$TR_TORRENT_DIR/" "$MEDIA_DIR/"
# make the media rwx by anyone in the group
destructive find "$MEDIA_DIR" -type d -exec setfacl --recursive --modify d:g::rwx,o::rx {} \;
destructive find "$MEDIA_DIR" -type d -exec chmod g+rw,a+rx {} \;
# if there's a single directory inside the media dir, then inline that
subdirs=("$MEDIA_DIR"/*)
      if [ ''${#subdirs[@]} -eq 1 ]; then
dirname="''${subdirs[0]}"
if [ -d "$dirname" ]; then
mv "$dirname"/* "$MEDIA_DIR/" && rmdir "$dirname"
fi
fi
# remove noisy files:
find "$MEDIA_DIR/" -type f \(\
-iname 'www.YTS.*.jpg' \
-o -iname 'WWW.YIFY*.COM.jpg' \
-o -iname 'YIFY*.com.txt' \
-o -iname 'YTS*.com.txt' \
\) -exec rm {} \;
# dedupe the whole media library.
# yeah, a bit excessive: move this to a cron job if that's problematic.
destructive hardlink /var/media --reflink=always --ignore-time --verbose
'';
};
in
{
sane.persist.sys.byStore.plaintext = [
# TODO: mode? we need this specifically for the stats tracking in .config/
{ user = "transmission"; group = config.users.users.transmission.group; path = "/var/lib/transmission"; method = "bind"; }
{ user = "transmission"; group = config.users.users.transmission.group; path = "/var/lib/transmission"; }
];
users.users.transmission.extraGroups = [ "media" ];
services.transmission.enable = true;
services.transmission.package = package;
services.transmission.package = pkgs.transmission_4; #< 2023/09/06: nixpkgs `transmission` defaults to old 3.00
#v setting `group` this way doesn't tell transmission to `chown` the files it creates
# it's a nixpkgs setting which just runs the transmission daemon as this group
services.transmission.group = "media";
@ -98,15 +16,13 @@ in
# transmission will by default not allow the world to read its files.
services.transmission.downloadDirPermissions = "775";
services.transmission.extraFlags = [
# "--log-level=debug"
"--log-level=debug"
];
services.transmission.settings = {
# DOCUMENTATION/options list: <https://github.com/transmission/transmission/blob/main/docs/Editing-Configuration-Files.md#options>
# message-level = 3; #< enable for debug logging. 0-3, default is 2.
# 10.0.1.6 => allow rpc only from the root servo ns. it'll tunnel things to the net, if need be.
rpc-bind-address = "10.0.1.6";
# 0.0.0.0 => allow rpc from any host: we gate it via firewall and auth requirement
rpc-bind-address = "0.0.0.0";
#rpc-host-whitelist = "bt.uninsane.org";
#rpc-whitelist = "*.*.*.*";
rpc-authentication-required = true;
@ -116,10 +32,6 @@ in
rpc-password = "{503fc8928344f495efb8e1f955111ca5c862ce0656SzQnQ5";
rpc-whitelist-enabled = false;
# force behind ovpns in case the NetworkNamespace fails somehow
bind-address-ipv4 = "185.157.162.178";
port-forwarding-enabled = false;
# hopefully, make the downloads world-readable
# umask = 0; #< default is 2: i.e. deny writes from world
@ -127,31 +39,19 @@ in
encryption = 2;
# units in kBps
speed-limit-down = 12000;
speed-limit-down = 3000;
speed-limit-down-enabled = true;
speed-limit-up = 800;
speed-limit-up = 600;
speed-limit-up-enabled = true;
# see: https://git.zknt.org/mirror/transmission/commit/cfce6e2e3a9b9d31a9dafedd0bdc8bf2cdb6e876?lang=bg-BG
anti-brute-force-enabled = false;
inherit download-dir;
incomplete-dir = "${download-dir}/incomplete";
download-dir = "/var/lib/uninsane/media";
incomplete-dir = "/var/lib/uninsane/media/incomplete";
# transmission regularly fails to move stuff from the incomplete dir to the main one, so disable:
# TODO: uncomment this line!
incomplete-dir-enabled = false;
# env vars available in script:
# - TR_APP_VERSION - Transmission's short version string, e.g. `4.0.0`
# - TR_TIME_LOCALTIME
# - TR_TORRENT_BYTES_DOWNLOADED - Number of bytes that were downloaded for this torrent
# - TR_TORRENT_DIR - Location of the downloaded data
# - TR_TORRENT_HASH - The torrent's info hash
# - TR_TORRENT_ID
# - TR_TORRENT_LABELS - A comma-delimited list of the torrent's labels
# - TR_TORRENT_NAME - Name of torrent (not filename)
# - TR_TORRENT_TRACKERS - A comma-delimited list of the torrent's trackers' announce URLs
script-torrent-done-enabled = true;
script-torrent-done-filename = "${torrent-done}/bin/torrent-done";
};
systemd.services.transmission.after = [ "wireguard-wg-ovpns.service" ];
@ -159,11 +59,8 @@ in
systemd.services.transmission.serviceConfig = {
# run this behind the OVPN static VPN
NetworkNamespacePath = "/run/netns/ovpns";
ExecStartPre = [ "${lib.getExe pkgs.sane-scripts.ip-check} --no-upnp --expect 185.157.162.178" ]; # abort if public IP is not as expected
Restart = "on-failure";
RestartSec = "30s";
BindPaths = [ "/var/media" ]; #< so it can move completed torrents into the media library
};
# service to automatically backup torrents i add to transmission
@ -194,10 +91,5 @@ in
};
sane.dns.zones."uninsane.org".inet.CNAME."bt" = "native";
sane.ports.ports."51413" = {
protocol = [ "tcp" "udp" ];
visibleTo.ovpn = true;
description = "colin-bittorrent";
};
}
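
A hedged sketch of exercising the torrent-done hook defined earlier in this file without touching the library (the torrent path is illustrative, and the script is assumed to be on PATH or invoked via its store path); TR_DRY_RUN makes every step wrapped in `destructive` print instead of run:

    TR_DRY_RUN=1 \
    TR_TORRENT_DIR=/var/media/torrents/Music/SomeAlbum \
      torrent-done
    # expected: the destructive() steps (mkdir/rsync/setfacl/chmod/hardlink) are echoed rather than executed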

View File

@ -2,11 +2,17 @@
{ config, lib, pkgs, ... }:
let
dyn-dns = config.sane.services.dyn-dns;
nativeAddrs = lib.mapAttrs (_name: builtins.head) config.sane.dns.zones."uninsane.org".inet.A;
bindOvpn = "10.0.1.5";
in
in lib.mkMerge [
{
services.trust-dns.enable = true;
# don't bind to IPv6 until i explicitly test that stack
services.trust-dns.settings.listen_addrs_ipv6 = [];
services.trust-dns.quiet = true;
# services.trust-dns.debug = true;
sane.ports.ports."53" = {
protocol = [ "udp" "tcp" ];
visibleTo.lan = true;
@ -58,6 +64,23 @@ in
services.trust-dns.settings.zones = [ "uninsane.org" ];
# TODO: can i transform this into some sort of service group?
# have `systemctl restart trust-dns.service` restart all the individual services?
systemd.services.trust-dns.serviceConfig = {
DynamicUser = lib.mkForce false;
User = "trust-dns";
Group = "trust-dns";
wantedBy = lib.mkForce [];
};
systemd.services.trust-dns.enable = false;
users.groups.trust-dns = {};
users.users.trust-dns = {
group = "trust-dns";
isSystemUser = true;
};
# sane.services.dyn-dns.restartOnChange = [ "trust-dns.service" ];
networking.nat.enable = true;
networking.nat.extraCommands = ''
@ -82,73 +105,98 @@ in
visibleTo.lan = true;
description = "colin-redirected-dns-for-lan-namespace";
};
}
{
systemd.services =
let
sed = "${pkgs.gnused}/bin/sed";
stateDir = "/var/lib/trust-dns";
zoneTemplate = pkgs.writeText "uninsane.org.zone.in" config.sane.dns.zones."uninsane.org".rendered;
zoneDirFor = flavor: "${stateDir}/${flavor}";
zoneFor = flavor: "${zoneDirFor flavor}/uninsane.org.zone";
mkTrustDnsService = opts: flavor: let
flags = let baseCfg = config.services.trust-dns; in
(lib.optional baseCfg.debug "--debug") ++ (lib.optional baseCfg.quiet "--quiet");
flagsStr = builtins.concatStringsSep " " flags;
sane.services.trust-dns.enable = true;
sane.services.trust-dns.instances = let
mkSubstitutions = flavor: {
"%AWAN%" = "$(cat '${dyn-dns.ipPath}')";
"%CNAMENATIVE%" = "servo.${flavor}";
"%ANATIVE%" = nativeAddrs."servo.${flavor}";
"%AOVPNS%" = "185.157.162.178";
anative = nativeAddrs."servo.${flavor}";
toml = pkgs.formats.toml { };
configTemplate = opts.config or (toml.generate "trust-dns-${flavor}.toml" (
(
lib.filterAttrsRecursive (_: v: v != null) config.services.trust-dns.settings
) // {
listen_addrs_ipv4 = opts.listen or [ anative ];
}
));
configFile = "${stateDir}/${flavor}-config.toml";
port = opts.port or 53;
in {
description = "trust-dns Domain Name Server (serving ${flavor})";
unitConfig.Documentation = "https://trust-dns.org/";
preStart = ''
wan=$(cat '${config.sane.services.dyn-dns.ipPath}')
${sed} s/%AWAN%/$wan/ ${configTemplate} > ${configFile}
'' + lib.optionalString (!opts ? config) ''
mkdir -p ${zoneDirFor flavor}
${sed} \
-e s/%CNAMENATIVE%/servo.${flavor}/ \
-e s/%ANATIVE%/${anative}/ \
-e s/%AWAN%/$wan/ \
-e s/%AOVPNS%/185.157.162.178/ \
${zoneTemplate} > ${zoneFor flavor}
'';
serviceConfig = config.systemd.services.trust-dns.serviceConfig // {
ExecStart = ''
${pkgs.trust-dns}/bin/${pkgs.trust-dns.meta.mainProgram} \
--port ${builtins.toString port} \
--zonedir ${zoneDirFor flavor}/ \
--config ${configFile} ${flagsStr}
'';
};
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
};
in {
trust-dns-wan = mkTrustDnsService { listen = [ nativeAddrs."servo.lan" bindOvpn ]; } "wan";
trust-dns-lan = mkTrustDnsService { port = 1053; } "lan";
trust-dns-hn = mkTrustDnsService { port = 1053; } "hn";
trust-dns-hn-resolver = mkTrustDnsService {
config = pkgs.writeText "hn-resolver-config.toml" ''
# i host a resolver in the wireguard VPN so that clients can resolve DNS through the VPN.
# (that's what this file achieves).
#
# one would expect this resolver could host the authoritative zone for `uninsane.org`, and then forward everything else to the system resolver...
# and while that works for `dig`, it breaks for `nslookup` (and so `ssh`, etc).
#
# DNS responses include a flag for if the responding server is the authority of the zone queried.
# it seems that default Linux stub resolvers either:
# - expect DNSSEC when the response includes that bit, or
# - expect A records to be in the `answer` section instead of `additional` section.
# or perhaps something more nuanced. but for `nslookup` to be reliable, it has to talk to an
# instance of trust-dns which is strictly a resolver, with no authority.
# hence, this config: a resolver which forwards to the actual authority.
listen_addrs_ipv4 = ["${nativeAddrs."servo.hn"}"]
listen_addrs_ipv6 = []
[[zones]]
zone = "uninsane.org"
zone_type = "Forward"
stores = { type = "forward", name_servers = [{ socket_addr = "${nativeAddrs."servo.hn"}:1053", protocol = "udp", trust_nx_responses = true }] }
[[zones]]
# forward the root zone to the local DNS resolver
zone = "."
zone_type = "Forward"
stores = { type = "forward", name_servers = [{ socket_addr = "127.0.0.53:53", protocol = "udp", trust_nx_responses = true }] }
'';
} "hn-resolver";
};
in
{
wan = {
substitutions = mkSubstitutions "wan";
listenAddrsIpv4 = [
nativeAddrs."servo.lan"
bindOvpn
];
};
lan = {
substitutions = mkSubstitutions "lan";
listenAddrsIpv4 = [ nativeAddrs."servo.lan" ];
port = 1053;
};
hn = {
substitutions = mkSubstitutions "hn";
listenAddrsIpv4 = [ nativeAddrs."servo.hn" ];
port = 1053;
};
# hn-resolver = {
# # don't need %AWAN% here because we forward to the hn instance.
# listenAddrsIpv4 = [ nativeAddrs."servo.hn" ];
# extraConfig = {
# zones = [
# {
# zone = "uninsane.org";
# zone_type = "Forward";
# stores = {
# type = "forward";
# name_servers = [
# {
# socket_addr = "${nativeAddrs."servo.hn"}:1053";
# protocol = "udp";
# trust_nx_responses = true;
# }
# ];
# };
# }
# {
# # forward the root zone to the local DNS resolver
# zone = ".";
# zone_type = "Forward";
# stores = {
# type = "forward";
# name_servers = [
# {
# socket_addr = "127.0.0.53:53";
# protocol = "udp";
# trust_nx_responses = true;
# }
# ];
# };
# }
# ];
# };
# };
};
sane.services.dyn-dns.restartOnChange = [
"trust-dns-wan.service"
@ -157,3 +205,4 @@ in
# "trust-dns-hn-resolver.service" # doesn't need restart because it doesn't know about WAN IP
];
}
]

View File

@ -1,4 +1,4 @@
{ config, lib, pkgs, ... }:
{ lib, pkgs, ... }:
{
imports = [
./feeds.nix
@ -8,42 +8,94 @@
./hosts.nix
./ids.nix
./machine-id.nix
./net
./nix
./net.nix
./nix-path
./persist.nix
./polyunfill.nix
./programs
./secrets.nix
./ssh.nix
./systemd.nix
./users
./vpn.nix
];
sane.nixcache.enable-trusted-keys = true;
sane.nixcache.enable = lib.mkDefault true;
sane.persist.enable = lib.mkDefault true;
sane.root-on-tmpfs = lib.mkDefault true;
sane.programs.sysadminUtils.enableFor.system = lib.mkDefault true;
sane.programs.consoleUtils.enableFor.user.colin = lib.mkDefault true;
nixpkgs.config.allowUnfree = true; # NIXPKGS_ALLOW_UNFREE=1
nixpkgs.config.allowBroken = true; # NIXPKGS_ALLOW_BROKEN=1
nixpkgs.config.allowUnfree = true;
nixpkgs.config.allowBroken = true; # NIXPKGS_ALLOW_BROKEN
# time.timeZone = "America/Los_Angeles";
time.timeZone = "Etc/UTC"; # DST is too confusing for me => use a stable timezone
# allow `nix flake ...` command
# TODO: is this still required?
nix.extraOptions = ''
experimental-features = nix-command flakes
'';
# hardlinks identical files in the nix store to save 25-35% disk space.
# unclear _when_ this occurs. it's not a service.
# does the daemon continually scan the nix store?
# does the builder use some content-addressed db to efficiently dedupe?
nix.settings.auto-optimise-store = true;
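# (afaict: the dedupe happens as each new path is registered in the store; an existing store can be
# deduped in one full pass by manually running `nix-store --optimise`.)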
services.journald.extraConfig = ''
# docs: `man journald.conf`
# merged journald config is deployed to /etc/systemd/journald.conf
[Journal]
# disable journal compression because the underlying fs is compressed
Compress=no
'';
systemd.services.nix-daemon.serviceConfig = {
# the nix-daemon manages nix builders
# kill nix-daemon subprocesses when systemd-oomd detects an out-of-memory condition
# see:
# - nixos PR that enabled systemd-oomd: <https://github.com/NixOS/nixpkgs/pull/169613>
# - systemd's docs on these properties: <https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html#ManagedOOMSwap=auto%7Ckill>
#
# systemd's docs warn that without swap, systemd-oomd might not be able to react quick enough to save the system.
# see `man oomd.conf` for further tunables that may help.
#
# alternatively, apply this more broadly with `systemd.oomd.enableSystemSlice = true` or `enableRootSlice`
# TODO: also apply this to the guest user's slice (user-1100.slice)
# TODO: also apply this to distccd
ManagedOOMMemoryPressure = "kill";
ManagedOOMSwap = "kill";
};
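# a hedged sketch for the user-1100.slice TODO above -- untested, and it assumes both that
# `systemd.units.<name>.overrideStrategy = "asDropin"` is available in this nixpkgs pin and that a
# drop-in applies cleanly to the dynamically-created user slice:
# systemd.units."user-1100.slice" = {
#   overrideStrategy = "asDropin";
#   text = ''
#     [Slice]
#     ManagedOOMMemoryPressure=kill
#     ManagedOOMSwap=kill
#   '';
# };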
system.activationScripts.nixClosureDiff = {
supportsDryActivation = true;
text = ''
# show which packages changed versions or are new/removed in this upgrade
# source: <https://github.com/luishfonseca/dotfiles/blob/32c10e775d9ec7cc55e44592a060c1c9aadf113e/modules/upgrade-diff.nix>
# modified to not error on boot (when /run/current-system doesn't exist)
if [ -d /run/current-system ]; then
${pkgs.nvd}/bin/nvd --nix-bin-dir=${pkgs.nix}/bin diff /run/current-system "$systemConfig"
fi
${pkgs.nvd}/bin/nvd --nix-bin-dir=${pkgs.nix}/bin diff /run/current-system "$systemConfig"
'';
};
# disable non-required packages like nano, perl, rsync, strace
environment.defaultPackages = [];
# dconf docs: <https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/desktop_migration_and_administration_guide/profiles>
# this lets programs temporarily write user-level dconf settings (aka gsettings).
# they're written to ~/.config/dconf/user, unless `DCONF_PROFILE` is set to something other than the default of /etc/dconf/profile/user
# find keys/values with `dconf dump /`
programs.dconf.enable = true;
programs.dconf.packages = [
(pkgs.writeTextFile {
name = "dconf-user-profile";
destination = "/etc/dconf/profile/user";
text = ''
user-db:user
system-db:site
'';
})
];
# sane.programs.glib.enableFor.user.colin = true; # for `gsettings`
# link debug symbols into /run/current-system/sw/lib/debug
# hopefully picked up by gdb automatically?
environment.enableDebugInfo = true;

View File

@ -1,5 +1,4 @@
# where to find good stuff?
# - universal search/directory: <https://podcastindex.org>
# - podcasts w/ a community: <https://lemmyverse.net/communities?query=podcast>
# - podcast rec thread: <https://lemmy.ml/post/1565858>
#
@ -51,8 +50,6 @@ let
else
"infrequent"
));
} // lib.optionalAttrs (lib.hasPrefix "https://www.youtube.com/" raw.url) {
format = "video";
} // lib.optionalAttrs (raw.is_podcast or false) {
format = "podcast";
} // lib.optionalAttrs (raw.title or "" != "") {
@ -63,7 +60,6 @@ let
(fromDb "acquiredlpbonussecretsecret.libsyn.com" // tech) # ACQ2 - more "Acquired" episodes
(fromDb "allinchamathjason.libsyn.com" // pol)
(fromDb "anchor.fm/s/34c7232c/podcast/rss" // tech) # Civboot -- https://anchor.fm/civboot
(fromDb "anchor.fm/s/2da69154/podcast/rss" // tech) # POD OF JAKE -- https://podofjake.com/
(fromDb "cast.postmarketos.org" // tech)
(fromDb "congressionaldish.libsyn.com" // pol) # Jennifer Briney
(fromDb "craphound.com" // pol) # Cory Doctorow -- both podcast & text entries
@ -73,25 +69,21 @@ let
(fromDb "feeds.feedburner.com/80000HoursPodcast" // rat)
(fromDb "feeds.feedburner.com/dancarlin/history" // rat)
(fromDb "feeds.feedburner.com/radiolab" // pol) # Radiolab -- also available here, but ONLY OVER HTTP: <http://feeds.wnyc.org/radiolab>
(fromDb "feeds.libsyn.com/421877" // rat) # Less Wrong Curated
(fromDb "feeds.megaphone.fm/behindthebastards" // pol) # also Maggie Killjoy
(fromDb "feeds.megaphone.fm/hubermanlab" // uncat) # Daniel Huberman on sleep
(fromDb "feeds.megaphone.fm/recodedecode" // tech) # The Verge - Decoder
(fromDb "feeds.simplecast.com/54nAGcIl" // pol) # The Daily
(fromDb "feeds.simplecast.com/82FI35Px" // pol) # Ezra Klein Show
(fromDb "feeds.simplecast.com/wgl4xEgL" // rat) # Econ Talk
(fromDb "feeds.simplecast.com/xKJ93w_w" // uncat) # Atlas Obscura
(fromDb "feeds.transistor.fm/acquired" // tech)
(fromDb "fulltimenix.com" // tech)
(fromDb "futureofcoding.org/episodes" // tech)
(fromDb "hackerpublicradio.org" // tech)
(fromDb "lexfridman.com/podcast" // rat)
(fromDb "mapspodcast.libsyn.com" // uncat) # Multidisciplinary Association for Psychedelic Studies
(fromDb "microarch.club" // tech)
(fromDb "omegataupodcast.net" // tech) # 3/4 German; 1/4 eps are English
(fromDb "omny.fm/shows/cool-people-who-did-cool-stuff" // pol) # Maggie Killjoy -- referenced by Cory Doctorow
(fromDb "omny.fm/shows/money-stuff-the-podcast") # Matt Levine
(fromDb "omny.fm/shows/the-dollop-with-dave-anthony-and-gareth-reynolds") # The Dollop history/comedy
(fromDb "originstories.libsyn.com" // uncat)
(fromDb "podcast.posttv.com/itunes/post-reports.xml" // pol)
(fromDb "podcast.thelinuxexp.com" // tech)
(fromDb "politicalorphanage.libsyn.com" // pol)
(fromDb "reverseengineering.libsyn.com/rss" // tech) # UnNamed Reverse Engineering Podcast
(fromDb "rss.acast.com/deconstructed") # The Intercept - Deconstructed
@ -100,25 +92,17 @@ let
(fromDb "rss.art19.com/60-minutes" // pol)
(fromDb "rss.art19.com/the-portal" // rat) # Eric Weinstein
(fromDb "seattlenice.buzzsprout.com" // pol)
(fromDb "srslywrong.com" // pol)
(fromDb "sharkbytes.transistor.fm" // tech) # Wireshark Podcast o_0
(fromDb "sscpodcast.libsyn.com" // rat) # Astral Codex Ten
(fromDb "talesfromthebridge.buzzsprout.com" // tech) # Sci-Fi? has Peter Watts; author of No Moods, Ads or Cutesy Fucking Icons (rifters.com)
(fromDb "theamphour.com" // tech)
(fromDb "techtalesshow.com" // tech) # Corbin Davenport
(fromDb "techwontsave.us" // pol) # rec by Cory Doctorow
# (fromDb "trashfuturepodcast.podbean.com" // pol) # rec by Cory Doctorow, but way rambly
(fromDb "wakingup.libsyn.com" // pol) # Sam Harris
(fromDb "werenotwrong.fireside.fm" // pol)
(mkPod "https://sfconservancy.org/casts/the-corresponding-source/feeds/ogg/" // tech)
# (fromDb "feeds.libsyn.com/421877" // rat) # Less Wrong Curated
# (fromDb "feeds.megaphone.fm/hubermanlab" // uncat) # Daniel Huberman on sleep
# (fromDb "feeds.simplecast.com/l2i9YnTd" // tech // pol) # Hard Fork (NYtimes tech)
# (fromDb "podcast.thelinuxexp.com" // tech) # low-brow linux/foss PR announcements
# (fromDb "rss.art19.com/your-welcome" // pol) # Michael Malice - Your Welcome -- also available here: <https://origin.podcastone.com/podcast?categoryID2=2232>
# (fromDb "rss.prod.firstlook.media/deconstructed/podcast.rss" // pol) #< possible URL rot
# (fromDb "rss.prod.firstlook.media/intercepted/podcast.rss" // pol) #< possible URL rot
# (fromDb "trashfuturepodcast.podbean.com" // pol) # rec by Cory Doctorow, but way rambly
# (mkPod "https://anchor.fm/s/21bc734/podcast/rss" // pol // infrequent) # Emerge: making sense of what's next -- <https://www.whatisemerging.com/emergepodcast>
# (mkPod "https://audioboom.com/channels/5097784.rss" // tech) # Lateral with Tom Scott
# (mkPod "https://feeds.megaphone.fm/RUNMED9919162779" // pol // infrequent) # The Witch Trials of J.K. Rowling: <https://www.thefp.com/witchtrials>
@ -126,135 +110,135 @@ let
];
texts = [
(fromDb "acoup.blog/feed") # history, states. author: <https://historians.social/@bretdevereaux/following>
(fromDb "amosbbatto.wordpress.com" // tech)
(fromDb "anish.lakhwara.com" // tech)
(fromDb "apenwarr.ca/log/rss.php" // tech) # CEO of tailscale
(fromDb "applieddivinitystudies.com" // rat)
(fromDb "artemis.sh" // tech)
(fromDb "ascii.textfiles.com" // tech) # Jason Scott
(fromDb "austinvernon.site" // tech)
(fromDb "ben-evans.com/benedictevans" // pol)
(fromDb "bitbashing.io" // tech)
(fromDb "bitsaboutmoney.com" // uncat)
(fromDb "blog.danieljanus.pl" // tech)
(fromDb "blog.dshr.org" // pol) # David Rosenthal
(fromDb "blog.jmp.chat" // tech)
(fromDb "blog.rust-lang.org" // tech)
(fromDb "blog.thalheim.io" // tech) # Mic92
(fromDb "bunniestudios.com" // tech) # Bunnie Juang
(fromDb "capitolhillseattle.com" // pol)
(fromDb "edwardsnowden.substack.com" // pol // text)
(fromDb "fasterthanli.me" // tech)
(fromDb "gwern.net" // rat)
(fromDb "hardcoresoftware.learningbyshipping.com" // tech) # Steven Sinofsky
(fromDb "harihareswara.net" // tech // pol) # rec by Cory Doctorow
(fromDb "ianthehenry.com" // tech)
(fromDb "idiomdrottning.org" // uncat)
(fromDb "interconnected.org/home/feed" // rat) # Matt Webb -- engineering-ish, but dreamy
(fromDb "jeffgeerling.com" // tech)
(fromDb "jefftk.com" // tech)
(fromDb "jwz.org/blog" // tech // pol) # DNA lounge guy, loooong-time blogger
(fromDb "kill-the-newsletter.com/feeds/joh91bv7am2pnznv.xml" // pol) # Matt Levine - Money Stuff
(fromDb "kosmosghost.github.io/index.xml" // tech)
(fromDb "linmob.net" // tech)
# AGGREGATORS (> 1 post/day)
(fromDb "lwn.net" // tech)
(fromDb "lynalden.com" // pol)
(fromDb "mako.cc/copyrighteous" // tech // pol) # rec by Cory Doctorow
(fromDb "mg.lol" // tech)
(fromDb "mindingourway.com" // rat)
(fromDb "morningbrew.com/feed" // pol)
(fromDb "nixpkgs.news" // tech)
(fromDb "overcomingbias.com" // rat) # Robin Hanson
(fromDb "palladiummag.com" // uncat)
(fromDb "philosopher.coach" // rat) # Peter Saint-Andre -- side project of stpeter.im
(fromDb "pomeroyb.com" // tech)
(fromDb "postmarketos.org/blog" // tech)
(fromDb "preposterousuniverse.com" // rat) # Sean Carroll
(fromDb "project-insanity.org" // tech) # shared blog by a few NixOS devs, notably onny
(fromDb "putanumonit.com" // rat) # mostly dating topics. not advice, or humor, but looking through a social lens
(fromDb "richardcarrier.info" // rat)
(fromDb "rifters.com/crawl" // uncat) # No Moods, Ads or Cutesy Fucking Icons
(fromDb "righto.com" // tech) # Ken Shirriff
(fromDb "rootsofprogress.org" // rat) # Jason Crawford
(fromDb "samuel.dionne-riel.com" // tech) # SamuelDR
(fromDb "sagacioussuricata.com" // tech) # ian (Sanctuary)
(fromDb "semiaccurate.com" // tech)
(fromDb "sideways-view.com" // rat) # Paul Christiano
(fromDb "slatecave.net" // tech)
(fromDb "slimemoldtimemold.com" // rat)
(fromDb "spectrum.ieee.org" // tech)
(fromDb "stpeter.im/atom.xml" // pol)
(fromDb "thediff.co" // pol) # Byrne Hobart
(fromDb "thisweek.gnome.org" // tech)
(fromDb "tuxphones.com" // tech)
(fromDb "uninsane.org" // tech)
(fromDb "unintendedconsequenc.es" // rat)
(fromDb "vitalik.eth.limo" // tech) # Vitalik Buterin
(fromDb "willow.phantoma.online") # wizard@xyzzy.link
(fromDb "xn--gckvb8fzb.com" // tech)
(mkSubstack "astralcodexten" // rat // daily) # Scott Alexander
(mkSubstack "eliqian" // rat // weekly)
(mkSubstack "oversharing" // pol // daily)
(mkSubstack "samkriss" // humor // infrequent)
(mkText "http://benjaminrosshoffman.com/feed" // pol // weekly)
(mkText "http://boginjr.com/feed" // tech // infrequent)
(mkText "https://forum.merveilles.town/rss.xml" // pol // infrequent) #quality RSS list here: <https://forum.merveilles.town/thread/57/share-your-rss-feeds%21-6/>
(mkText "https://jvns.ca/atom.xml" // tech // weekly) # Julia Evans
(mkText "https://linuxphoneapps.org/blog/atom.xml" // tech // infrequent)
(mkText "https://nixos.org/blog/announcements-rss.xml" // tech // infrequent) # more nixos stuff here, but unclear how to subscribe: <https://nixos.org/blog/categories.html>
(mkText "https://nixos.org/blog/stories-rss.xml" // tech // weekly)
(mkText "https://solar.lowtechmagazine.com/posts/index.xml" // tech // weekly)
(mkText "https://www.stratechery.com/rss" // pol // weekly) # Ben Thompson
# (fromDb "balajis.com" // pol) # Balaji
# (fromDb "drewdevault.com" // tech)
# (fromDb "econlib.org" // pol)
# (fromDb "lesswrong.com" // rat)
# (fromDb "profectusmag.com" // pol) # some conservative/libertarian think tank
# (fromDb "thesideview.co" // uncat) # spiritual journal; RSS items are stubs
# (fromDb "econlib.org" // pol)
# AGGREGATORS (< 1 post/day)
(fromDb "palladiummag.com" // uncat)
(fromDb "profectusmag.com" // uncat)
(fromDb "semiaccurate.com" // tech)
(mkText "https://linuxphoneapps.org/blog/atom.xml" // tech // infrequent)
(fromDb "tuxphones.com" // tech)
(fromDb "spectrum.ieee.org" // tech)
# (fromDb "theregister.com" // tech)
# (fromDb "vitalik.ca" // tech) # moved to vitalik.eth.limo
# (fromDb "webcurious.co.uk" // uncat) # link aggregator; defunct?
# (mkSubstack "doomberg" // tech // weekly) # articles are all pay-walled
# (mkText "https://github.com/Kaiteki-Fedi/Kaiteki/commits/master.atom" // tech // infrequent)
(fromDb "thisweek.gnome.org" // tech)
# more nixos stuff here, but unclear how to subscribe: <https://nixos.org/blog/categories.html>
(mkText "https://nixos.org/blog/announcements-rss.xml" // tech // infrequent)
(mkText "https://nixos.org/blog/stories-rss.xml" // tech // weekly)
## n.b.: quality RSS list here: <https://forum.merveilles.town/thread/57/share-your-rss-feeds%21-6/>
(mkText "https://forum.merveilles.town/rss.xml" // pol // infrequent)
## No Moods, Ads or Cutesy Fucking Icons
(fromDb "rifters.com/crawl" // uncat)
# DEVELOPERS
(fromDb "blog.jmp.chat" // tech)
(fromDb "uninsane.org" // tech)
(fromDb "ascii.textfiles.com" // tech) # Jason Scott
(fromDb "xn--gckvb8fzb.com" // tech)
(fromDb "amosbbatto.wordpress.com" // tech)
(fromDb "fasterthanli.me" // tech)
(fromDb "mg.lol" // tech)
# (fromDb "drewdevault.com" // tech)
## Ken Shirriff
(fromDb "righto.com" // tech)
## shared blog by a few NixOS devs, notably onny
(fromDb "project-insanity.org" // tech)
## Vitalik Buterin
(fromDb "vitalik.ca" // tech)
## ian (Sanctuary)
(fromDb "sagacioussuricata.com" // tech)
(fromDb "artemis.sh" // tech)
## Bunnie Juang
(fromDb "bunniestudios.com" // tech)
(fromDb "blog.danieljanus.pl" // tech)
(fromDb "ianthehenry.com" // tech)
(fromDb "bitbashing.io" // tech)
(fromDb "idiomdrottning.org" // uncat)
(mkText "http://boginjr.com/feed" // tech // infrequent)
(mkText "https://anish.lakhwara.com/home.html" // tech // weekly)
(fromDb "jefftk.com" // tech)
(fromDb "pomeroyb.com" // tech)
(fromDb "harihareswara.net" // tech // pol) # rec by Cory Doctorow
(fromDb "mako.cc/copyrighteous" // tech // pol) # rec by Cory Doctorow
# (mkText "https://til.simonwillison.net/tils/feed.atom" // tech // weekly)
# (mkText "https://www.bloomberg.com/opinion/authors/ARbTQlRLRjE/matthew-s-levine.rss" // pol // weekly) # Matt Levine (preview/paywalled)
];
videos = [
(fromDb "youtube.com/@Channel5YouTube" // pol)
(fromDb "youtube.com/@ColdFusion")
(fromDb "youtube.com/@ContraPoints" // pol)
(fromDb "youtube.com/@Exurb1a")
(fromDb "youtube.com/@hbomberguy")
(fromDb "youtube.com/@JackStauber")
(fromDb "youtube.com/@NativLang")
(fromDb "youtube.com/@PolyMatter")
(fromDb "youtube.com/@TechnologyConnections" // tech)
(fromDb "youtube.com/@TheB1M")
(fromDb "youtube.com/@TomScottGo")
(fromDb "youtube.com/@Vihart")
(fromDb "youtube.com/@Vox")
(fromDb "youtube.com/@Vsauce")
# TECH PROJECTS
(fromDb "blog.rust-lang.org" // tech)
# (fromDb "youtube.com/@rossmanngroup" // pol // tech) # Louis Rossmann
# (TECH; POL) COMMENTATORS
## Matt Webb -- engineering-ish, but dreamy
(fromDb "interconnected.org/home/feed" // rat)
(fromDb "edwardsnowden.substack.com" // pol // text)
## Julia Evans
(mkText "https://jvns.ca/atom.xml" // tech // weekly)
(mkText "http://benjaminrosshoffman.com/feed" // pol // weekly)
## Ben Thompson
(mkText "https://www.stratechery.com/rss" // pol // weekly)
## Balaji
(fromDb "balajis.com" // pol)
(fromDb "ben-evans.com/benedictevans" // pol)
(fromDb "lynalden.com" // pol)
(fromDb "austinvernon.site" // tech)
(mkSubstack "oversharing" // pol // daily)
(mkSubstack "byrnehobart" // pol // infrequent)
# (mkSubstack "doomberg" // tech // weekly) # articles are all pay-walled
## David Rosenthal
(fromDb "blog.dshr.org" // pol)
## Matt Levine
(mkText "https://www.bloomberg.com/opinion/authors/ARbTQlRLRjE/matthew-s-levine.rss" // pol // weekly)
(fromDb "stpeter.im/atom.xml" // pol)
## Peter Saint-Andre -- side project of stpeter.im
(fromDb "philosopher.coach" // rat)
(fromDb "morningbrew.com/feed" // pol)
# RATIONALITY/PHILOSOPHY/ETC
(mkSubstack "samkriss" // humor // infrequent)
(fromDb "unintendedconsequenc.es" // rat)
(fromDb "applieddivinitystudies.com" // rat)
(fromDb "slimemoldtimemold.com" // rat)
(fromDb "richardcarrier.info" // rat)
(fromDb "gwern.net" // rat)
## Jason Crawford
(fromDb "rootsofprogress.org" // rat)
## Robin Hanson
(fromDb "overcomingbias.com" // rat)
## Scott Alexander
(mkSubstack "astralcodexten" // rat // daily)
## Paul Christiano
(fromDb "sideways-view.com" // rat)
## Sean Carroll
(fromDb "preposterousuniverse.com" // rat)
(mkSubstack "eliqian" // rat // weekly)
(mkText "https://acoup.blog/feed" // rat // weekly)
(fromDb "mindingourway.com" // rat)
## mostly dating topics. not advice, or humor, but looking through a social lens
(fromDb "putanumonit.com" // rat)
# LOCAL
(fromDb "capitolhillseattle.com" // pol)
# CODE
# (mkText "https://github.com/Kaiteki-Fedi/Kaiteki/commits/master.atom" // tech // infrequent)
];
images = [
(fromDb "catandgirl.com" // img // humor)
(fromDb "davidrevoy.com" // img // art)
(fromDb "grumpy.website" // img // humor)
(fromDb "miniature-calendar.com" // img // art // daily)
(fromDb "pbfcomics.com" // img // humor)
(fromDb "poorlydrawnlines.com/feed" // img // humor)
(fromDb "smbc-comics.com" // img // humor)
(fromDb "turnoff.us" // img // humor)
(fromDb "xkcd.com" // img // humor)
(fromDb "turnoff.us" // img // humor)
(fromDb "pbfcomics.com" // img // humor)
# (mkImg "http://dilbert.com/feed" // humor // daily)
(fromDb "poorlydrawnlines.com/feed" // img // humor)
# ART
(fromDb "miniature-calendar.com" // img // art // daily)
];
in
{
sane.feeds = texts ++ images ++ podcasts ++ videos;
sane.feeds = texts ++ images ++ podcasts;
assertions = builtins.map
(p: {

View File

@ -1,73 +1,40 @@
# docs
# - x-systemd options: <https://www.freedesktop.org/software/systemd/man/systemd.mount.html>
# - fuse options: `man mount.fuse`
{ config, lib, pkgs, sane-lib, utils, ... }:
{ lib, pkgs, sane-lib, ... }:
let
fsOpts = rec {
common = [
"_netdev"
"noatime"
# user: allow any user with access to the device to mount the fs.
# note that this requires a suid `mount` binary; see: <https://zameermanji.com/blog/2022/8/5/using-fuse-without-root-on-linux/>
"user"
"user" # allow any user with access to the device to mount the fs
"x-systemd.requires=network-online.target"
"x-systemd.after=network-online.target"
"x-systemd.mount-timeout=10s" # how long to wait for mount **and** how long to wait for unmount
];
# x-systemd.automount: mount the fs automatically *on first access*.
# creates a `path-to-mount.automount` systemd unit.
automount = [ "x-systemd.automount" ];
# noauto: don't mount as part of remote-fs.target.
# N.B.: `remote-fs.target` is a dependency of multi-user.target, itself of graphical.target.
# hence, omitting `noauto` can slow down boots.
noauto = [ "noauto" ];
# lazyMount: defer mounting until first access from userspace.
# see: `man systemd.automount`, `man automount`, `man autofs`
lazyMount = noauto ++ automount;
auto = [ "x-systemd.automount" ];
noauto = [ "noauto" ]; # don't mount as part of remote-fs.target
wg = [
"x-systemd.requires=wireguard-wg-home.service"
"x-systemd.after=wireguard-wg-home.service"
];
fuse = [
"allow_other" # allow users other than the one who mounts it to access it. needed, if systemd is the one mounting this fs (as root)
# allow_root: allow root to access files on this fs (if mounted by non-root, else it can always access them).
# N.B.: if both allow_root and allow_other are specified, then only allow_root takes effect.
# "allow_root"
# default_permissions: enforce local permissions check. CRUCIAL if using `allow_other`.
# w/o this, permissions mode of sshfs is like:
# - sshfs runs all remote commands as the remote user.
# - if a local user has local permissions to the sshfs mount, then their file ops are sent blindly across the tunnel.
# - `allow_other` allows *any* local user to access the mount, and hence any local user can now freely become the remote mapped user.
# with default_permissions, sshfs doesn't tunnel file ops from users until checking that said user could perform said op on an equivalent local fs.
ssh = common ++ [
"identityfile=/home/colin/.ssh/id_ed25519"
"allow_other"
"default_permissions"
];
fuseColin = fuse ++ [
sshColin = ssh ++ [
"transform_symlinks"
"idmap=user"
"uid=1000"
"gid=100"
];
ssh = common ++ fuse ++ [
"identityfile=/home/colin/.ssh/id_ed25519"
# i *think* idmap=user means that `colin` on `localhost` and `colin` on the remote are actually treated as the same user, even if their uid/gid differs?
# i.e., local colin's id is translated to/from remote colin's id on every operation?
"idmap=user"
sshRoot = ssh ++ [
# we don't transform_symlinks because that breaks the validity of remote /nix stores
"sftp_server=/run/wrappers/bin/sudo\\040/run/current-system/sw/libexec/sftp-server"
];
sshColin = ssh ++ fuseColin ++ [
# follow_symlinks: remote files which are symlinks are presented to the local system as ordinary files (as the target of the symlink).
# if the symlink target does not exist, the presentation is unspecified.
# symlinks which point outside the mount ARE followed. so this is more capable than `transform_symlinks`
"follow_symlinks"
# symlinks on the remote fs which are absolute paths are presented to the local system as relative symlinks pointing to the expected data on the remote fs.
# only symlinks which would point inside the mountpoint are translated.
"transform_symlinks"
];
# sshRoot = ssh ++ [
# # we don't transform_symlinks because that breaks the validity of remote /nix stores
# "sftp_server=/run/wrappers/bin/sudo\\040/run/current-system/sw/libexec/sftp-server"
# ];
# in the event of hung NFS mounts, consider:
# - <https://unix.stackexchange.com/questions/31979/stop-broken-nfs-mounts-from-locking-a-directory>
@ -75,103 +42,28 @@ let
# actimeo=n = how long (in seconds) to cache file/dir attributes (default: 3-60s)
# bg = retry failed mounts in the background
# retry=n = for how many minutes `mount` will retry NFS mount operation
# intr = allow Ctrl+C to abort I/O (it will error with `EINTR`)
# soft = on "major timeout", report I/O error to userspace
# softreval = on "major timeout", service the request using known-stale cache results instead of erroring -- if such cache data exists
# retrans=n = how many times to retry a NFS request before giving userspace a "server not responding" error (default: 3)
# timeo=n = number of *deciseconds* to wait for a response before retrying it (default: 600)
# note: the client uses a linear backoff, so the second request will have double this timeout, then triple, etc. (see the worked example below)
# proto=udp = encapsulate protocol ops inside UDP packets instead of a TCP session.
# requires `nfsvers=3` and a kernel compiled with `NFS_DISABLE_UDP_SUPPORT=n`.
# UDP might be preferable to TCP because the latter is liable to hang for ~100s (kernel TCP timeout) after a link drop.
# however, even UDP has issues with `umount` hanging.
#
# N.B.: don't change these without first testing the behavior of sandboxed apps on a flaky network.
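# rough worked example (assuming the linear backoff described in `man 5 nfs`): with timeo=15 (i.e. 1.5s)
# and retrans=4, the waits between retries are ~1.5s, 3.0s, 4.5s, 6.0s, 7.5s, so with `soft` a dead server
# surfaces as an I/O error to userspace after roughly 20-25 seconds.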
nfs = common ++ [
# "actimeo=5"
# "bg"
"retrans=1"
# "actimeo=10"
"bg"
"retrans=4"
"retry=0"
# "intr"
"soft"
"softreval"
"timeo=30"
"timeo=15"
"nofail" # don't fail remote-fs.target when this mount fails (not an option for sshfs else would be common)
# "proto=udp" # default kernel config doesn't support NFS over UDP: <https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1964093> (see comment 11).
# "nfsvers=3" # NFSv4+ doesn't support UDP at *all*. it's ok to omit nfsvers -- server + client will negotiate v3 based on udp requirement. but omitting causes confusing mount errors when the server is *offline*, because the client defaults to v4 and thinks the udp option is a config error.
# "x-systemd.idle-timeout=10" # auto-unmount after this much inactivity
];
# manually perform a ftp mount via e.g.
# curlftpfs -o ftpfs_debug=2,user=anonymous:anonymous,connect_timeout=10 -f -s ftp://servo-hn /mnt/my-ftp
ftp = common ++ fuseColin ++ [
# "ftpfs_debug=2"
"user=colin:ipauth"
# connect_timeout=10: casting shows to T.V. fails partway through about half the time
"connect_timeout=20"
];
};
remoteHome = host: {
sane.programs.sshfs-fuse.enableFor.system = true;
fileSystems."/mnt/${host}/home" = {
fileSystems."/mnt/${host}-home" = {
device = "colin@${host}:/home/colin";
fsType = "fuse.sshfs";
options = fsOpts.sshColin ++ fsOpts.lazyMount;
options = fsOpts.sshColin ++ fsOpts.noauto;
noCheck = true;
};
sane.fs."/mnt/${host}/home" = sane-lib.fs.wanted {
dir.acl.user = "colin";
dir.acl.group = "users";
dir.acl.mode = "0700";
};
};
remoteServo = subdir: {
sane.programs.curlftpfs.enableFor.system = true;
sane.fs."/mnt/servo/${subdir}" = sane-lib.fs.wanted {
dir.acl.user = "colin";
dir.acl.group = "users";
dir.acl.mode = "0750";
};
fileSystems."/mnt/servo/${subdir}" = {
device = "ftp://servo-hn:/${subdir}";
noCheck = true;
fsType = "fuse.curlftpfs";
options = fsOpts.ftp ++ fsOpts.noauto ++ fsOpts.wg;
# fsType = "nfs";
# options = fsOpts.nfs ++ fsOpts.lazyMount ++ fsOpts.wg;
};
systemd.services."automount-servo-${utils.escapeSystemdPath subdir}" = let
fs = config.fileSystems."/mnt/servo/${subdir}";
in {
# this is a *flaky* network mount, especially on moby.
# if done as a normal autofs mount, access will eternally block when network is dropped.
# notably, this would block *any* sandboxed app which allows media access, whether they actually try to use that media or not.
# a practical solution is this: mount as a service -- instead of autofs -- and unmount on timeout error, in a restart loop.
# until the ftp handshake succeeds, nothing is actually mounted to the vfs, so this doesn't slow down any I/O when network is down.
description = "automount /mnt/servo/${subdir} in a fault-tolerant and non-blocking manner";
after = [ "network-online.target" ];
requires = [ "network-online.target" ];
wantedBy = [ "default.target" ];
serviceConfig.Type = "simple";
serviceConfig.ExecStart = lib.escapeShellArgs [
"/usr/bin/env"
"PATH=/run/current-system/sw/bin"
"mount.${fs.fsType}"
"-f" # foreground (i.e. don't daemonize)
"-s" # single-threaded (TODO: it's probably ok to disable this?)
"-o"
(lib.concatStringsSep "," (lib.filter (o: !lib.hasPrefix "x-systemd." o) fs.options))
fs.device
"/mnt/servo/${subdir}"
];
# not sure if this configures a linear, or exponential backoff.
# but the first restart will be after `RestartSec`, and the n'th restart (n = RestartSteps) will be RestartMaxDelaySec after the n-1'th exit.
serviceConfig.Restart = "always";
serviceConfig.RestartSec = "10s";
serviceConfig.RestartMaxDelaySec = "120s";
serviceConfig.RestartSteps = "5";
};
sane.fs."/mnt/${host}-home" = sane-lib.fs.wantedDir;
};
in
lib.mkMerge [
@ -207,30 +99,45 @@ lib.mkMerge [
# but it decreases working memory under the heaviest of loads by however much space the compressed memory occupies (e.g. 50% if 2:1; 25% if 4:1)
zramSwap.memoryPercent = 100;
# environment.pathsToLink = [
# # needed to achieve superuser access for user-mounted filesystems (see sshRoot above)
# # we can only link whole directories here, even though we're only interested in pkgs.openssh
# "/libexec"
# ];
# fileSystems."/mnt/servo-nfs" = {
# device = "servo-hn:/";
# noCheck = true;
# fsType = "nfs";
# options = fsOpts.nfs ++ fsOpts.auto ++ fsOpts.wg;
# };
fileSystems."/mnt/servo-nfs/media" = {
device = "servo-hn:/media";
noCheck = true;
fsType = "nfs";
options = fsOpts.nfs ++ fsOpts.auto ++ fsOpts.wg;
};
fileSystems."/mnt/servo-nfs/playground" = {
device = "servo-hn:/playground";
noCheck = true;
fsType = "nfs";
options = fsOpts.nfs ++ fsOpts.auto ++ fsOpts.wg;
};
# fileSystems."/mnt/servo-media-nfs" = {
# device = "servo-hn:/media";
# noCheck = true;
# fsType = "nfs";
# options = fsOpts.common ++ fsOpts.auto;
# };
sane.fs."/mnt/servo-media" = sane-lib.fs.wantedSymlinkTo "/mnt/servo-nfs/media";
programs.fuse.userAllowOther = true; #< necessary for `allow_other` or `allow_root` options.
environment.pathsToLink = [
# needed to achieve superuser access for user-mounted filesystems (see optionsRoot above)
# we can only link whole directories here, even though we're only interested in pkgs.openssh
"/libexec"
];
environment.systemPackages = [
pkgs.sshfs-fuse
];
}
(remoteHome "desko")
(remoteHome "lappy")
(remoteHome "moby")
# this granularity of servo media mounts is necessary to support sandboxing:
# for flaky mounts, we can only bind the mountpoint itself into the sandbox,
# so it's either this or unconditionally bind all of media/.
(remoteServo "media/archive")
(remoteServo "media/Books")
(remoteServo "media/collections")
# (remoteServo "media/datasets")
(remoteServo "media/games")
(remoteServo "media/Music")
(remoteServo "media/Pictures/macros")
(remoteServo "media/torrents")
(remoteServo "media/Videos")
(remoteServo "playground")
]

View File

@ -1,4 +1,4 @@
{ config, lib, pkgs, ... }:
{ lib, pkgs, ... }:
{
imports = [
@ -12,9 +12,8 @@
copy_bin_and_libs ${pkgs.util-linux}/bin/{cfdisk,lsblk,lscpu}
copy_bin_and_libs ${pkgs.gptfdisk}/bin/{cgdisk,gdisk}
copy_bin_and_libs ${pkgs.smartmontools}/bin/smartctl
copy_bin_and_libs ${pkgs.nvme-cli}/bin/nvme
copy_bin_and_libs ${pkgs.e2fsprogs}/bin/resize2fs
'' + lib.optionalString pkgs.stdenv.hostPlatform.isx86_64 ''
copy_bin_and_libs ${pkgs.nvme-cli}/bin/nvme # doesn't cross compile
'';
boot.kernelParams = [
"boot.shell_on_fail"
@ -28,23 +27,6 @@
# "systemd.log_level=debug"
# "systemd.log_target=console"
# moby has to run recent kernels (defined elsewhere).
# meanwhile, kernel variation plays some minor role in things like sandboxing (landlock) and capabilities.
# simpler to keep near the latest kernel on all devices,
# and also makes certain that any weird system-level bugs i see aren't likely to be stale kernel bugs.
# servo needs zfs though, which doesn't support every kernel.
boot.kernelPackages = lib.mkDefault pkgs.zfs.latestCompatibleLinuxPackages;
# TODO: remove after linux 6.9. see: <https://github.com/axboe/liburing/issues/1113>
# - <https://github.com/neovim/neovim/issues/28149>
# - <https://git.kernel.dk/cgit/linux/commit/?h=io_uring-6.9&id=e5444baa42e545bb929ba56c497e7f3c73634099>
# when removing, try starting and suspending (ctrl+z) two instances of neovim simultaneously.
# if the system doesn't freeze, then this is safe to remove.
# added 2024-04-04
sane.user.fs.".profile".symlink.text = lib.mkBefore ''
export UV_USE_IO_URING=0
'';
# hack in the `boot.shell_on_fail` arg since that doesn't always seem to work.
boot.initrd.preFailCommands = "allowShell=1";
@ -57,12 +39,6 @@
# non-free firmware
hardware.enableRedistributableFirmware = true;
# default is 252274, which is too low particularly for servo.
# manifests as spurious "No space left on device" when trying to install watches,
# e.g. in dyn-dns by `systemctl start dyn-dns-watcher.path`.
# see: <https://askubuntu.com/questions/828779/failed-to-add-run-systemd-ask-password-to-directory-watch-no-space-left-on-dev>
boot.kernel.sysctl."fs.inotify.max_user_watches" = 1048576;
# powertop will default to putting USB devices -- including HID -- to sleep after TWO SECONDS
powerManagement.powertop.enable = false;
# linux CPU governor: <https://www.kernel.org/doc/Documentation/cpu-freq/governors.txt>
@ -80,12 +56,10 @@
# - query details with `sudo cpupower frequency-info`
powerManagement.cpuFreqGovernor = "ondemand";
# see: `man logind.conf`
# don't shutdown when the power button is short-pressed (commonly done by accident, or by cats).
# but do on long-press: useful to gracefully power-off server.
services.logind.powerKey = "lock";
services.logind.powerKeyLongPress = "poweroff";
services.logind.lidSwitch = "lock";
services.logind.extraConfig = ''
# dont shutdown when power button is short-pressed
HandlePowerKey=ignore
'';
# services.snapper.configs = {
# root = {

View File

@ -1,7 +1,7 @@
{ ... }:
{
imports = [
./fs.nix
./keyring
./mime.nix
./ssh.nix
./xdg-dirs.nix

View File

@ -1,45 +0,0 @@
{ config, lib, ... }:
{
sane.user.persist.byStore.plaintext = [
"archive"
"dev"
# TODO: records should be private
"records"
"ref"
"tmp"
"use"
"Books/local"
"Music"
"Pictures/albums"
"Pictures/cat"
"Pictures/from"
"Pictures/Screenshots" #< XXX: something is case-sensitive about this?
"Pictures/Photos"
"Videos/local"
# these are persisted simply to save on RAM.
# ~/.cache/nix can become several GB.
# mesa_shader_cache is < 10 MB.
# TODO: integrate with sane.programs.sandbox?
".cache/mesa_shader_cache"
".cache/nix"
];
sane.user.persist.byStore.private = [
"knowledge"
];
# convenience
sane.user.fs = let
persistEnabled = config.sane.persist.enable;
in {
".persist/private" = lib.mkIf persistEnabled { symlink.target = config.sane.persist.stores.private.origin; };
".persist/plaintext" = lib.mkIf persistEnabled { symlink.target = config.sane.persist.stores.plaintext.origin; };
".persist/ephemeral" = lib.mkIf persistEnabled { symlink.target = config.sane.persist.stores.cryptClearOnBoot.origin; };
"nixos".symlink.target = "dev/nixos";
"Books/servo".symlink.target = "/mnt/servo/media/Books";
"Videos/servo".symlink.target = "/mnt/servo/media/Videos";
"Pictures/servo-macros".symlink.target = "/mnt/servo/media/Pictures/macros";
};
}

View File

@ -0,0 +1,17 @@
{ config, pkgs, sane-lib, ... }:
let
init-keyring = pkgs.static-nix-shell.mkBash {
pname = "init-keyring";
src = ./.;
};
in
{
sane.user.persist.byStore.private = [ ".local/share/keyrings" ];
sane.user.fs."private/.local/share/keyrings/default" = {
generated.command = [ "${init-keyring}/bin/init-keyring" ];
wantedBy = [ config.sane.fs."/home/colin/private".unit ];
wantedBeforeBy = [ ]; # don't create this as part of `multi-user.target`
};
}

View File

@ -0,0 +1,21 @@
#!/usr/bin/env nix-shell
#!nix-shell -i bash
# initializes the default libsecret keyring (used by gnome-keyring) if not already initialized.
# this initializes it to be plaintext/unencrypted.
ringdir=/home/colin/private/.local/share/keyrings
if test -f "$ringdir/default"
then
echo 'keyring already initialized: not doing anything'
else
keyring="$ringdir/Default_keyring.keyring"
echo 'initializing default user keyring:' "$keyring.new"
echo '[keyring]' > "$keyring.new"
echo 'display-name=Default keyring' >> "$keyring.new"
echo 'lock-on-idle=false' >> "$keyring.new"
echo 'lock-after=false' >> "$keyring.new"
chown colin:users "$keyring.new"
# closest to an atomic update we can achieve
mv "$keyring.new" "$keyring" && echo -n "Default_keyring" > "$ringdir/default"
fi

View File

@ -1,94 +1,28 @@
# TODO: move into modules/users.nix
{ config, lib, pkgs, ...}:
{ config, lib, ...}:
let
# [ ProgramConfig ]
enabledPrograms = builtins.filter
(p: p.enabled)
(builtins.attrValues config.sane.programs);
# [ ProgramConfig ]
enabledProgramsWithPackage = builtins.filter (p: p.package != null) enabledPrograms;
# [ { "<mime-type>" = { prority, desktop } ]
enabledWeightedMimes = builtins.map weightedMimes enabledPrograms;
# ProgramConfig -> { "<mime-type>" = { priority, desktop }; }
weightedMimes = prog: builtins.mapAttrs
(_key: desktop: {
priority = prog.mime.priority; desktop = desktop;
})
prog.mime.associations;
weightedMimes = prog: builtins.mapAttrs (_key: desktop: { priority = prog.mime.priority; desktop = desktop; }) prog.mime.associations;
# [ { "<mime-type>" = { priority, desktop } ]; } ] -> { "<mime-type>" = [ { priority, desktop } ... ]; }
mergeMimes = mimes: lib.foldAttrs (item: acc: [item] ++ acc) [] mimes;
# [ { priority, desktop } ... ] -> Self
sortOneMimeType = associations: builtins.sort
(l: r: lib.throwIf
(l.priority == r.priority)
"${l.desktop} and ${r.desktop} share a preferred mime type with identical priority ${builtins.toString l.priority} (and so the desired association is ambiguous)"
(l.priority < r.priority)
)
associations;
sortOneMimeType = associations: builtins.sort (l: r: assert l.priority != r.priority; l.priority < r.priority) associations;
sortMimes = mimes: builtins.mapAttrs (_k: sortOneMimeType) mimes;
# { "<mime-type>"} = [ { priority, desktop } ... ]; } -> { "<mime-type>" = [ "<desktop>" ... ]; }
removePriorities = mimes: builtins.mapAttrs
(_k: associations: builtins.map (a: a.desktop) associations)
mimes;
# { "<mime-type>" = [ "<desktop>" ... ]; } -> { "<mime-type>" = "<desktop1>;<desktop2>;..."; }
formatDesktopLists = mimes: builtins.mapAttrs
(_k: desktops: lib.concatStringsSep ";" desktops)
mimes;
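# worked example (hypothetical .desktop names, purely for illustration): two enabled programs claiming
# "text/html" at priorities 100 and 200 flow through the helpers above as
#   { "text/html" = [ { priority = 100; desktop = "librewolf.desktop"; } { priority = 200; desktop = "epiphany.desktop"; } ]; }
# and finally render to
#   { "text/html" = "librewolf.desktop;epiphany.desktop"; }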
mimeappsListPkg = pkgs.writeTextDir "share/applications/mimeapps.list" (
lib.generators.toINI { } {
"Default Applications" = formatDesktopLists (removePriorities (sortMimes (mergeMimes enabledWeightedMimes)));
}
);
localShareApplicationsPkg = (pkgs.symlinkJoin {
name = "user-local-share-applications";
paths = builtins.map
(p: "${p.package}")
(enabledProgramsWithPackage ++ [ { package=mimeappsListPkg; } ]);
}).overrideAttrs (orig: {
# like normal symlinkJoin, but don't error if the path doesn't exist
buildCommand = ''
mkdir -p $out/share/applications
for i in $(cat $pathsPath); do
if [ -e "$i/share/applications" ]; then
${pkgs.buildPackages.xorg.lndir}/bin/lndir -silent $i/share/applications $out/share/applications
fi
done
runHook postBuild
'';
postBuild = ''
# rebuild `mimeinfo.cache`, used by file openers to show the list of *all* apps, not just the user's defaults.
${pkgs.buildPackages.desktop-file-utils}/bin/update-desktop-database $out/share/applications
'';
});
removePriorities = mimes: builtins.mapAttrs (_k: associations: builtins.map (a: a.desktop) associations) mimes;
# [ ProgramConfig ]
enabledPrograms = builtins.filter (p: p.enabled) (builtins.attrValues config.sane.programs);
# [ { "<mime-type>" = { prority, desktop } ]
enabledWeightedMimes = builtins.map weightedMimes enabledPrograms;
in
{
# the xdg mime type for a file can be found with:
# - `xdg-mime query filetype path/to/thing.ext`
# the default handler for a mime type can be found with:
# - `xdg-mime query default <mimetype>` (e.g. x-scheme-handler/http)
# the nix-configured handler can be found `nix-repl > :lf . > hostConfigs.desko.xdg.mime.defaultApplications`
#
# glib/gio is queried via glib.bin output:
# - `gio mime x-scheme-handler/https`
# - `gio open <path_or_url>`
# - `gio launch </path/to/app.desktop>`
#
# we can have single associations or a list of associations.
# there's also options to *remove* [non-default] associations from specific apps
# N.B.: don't use nixos' `xdg.mime` option because that causes `/share/applications` to be linked into the whole system,
# which limits what i can do around sandboxing. getting the default associations to live in ~/ makes it easier to expose
# the associations to apps selectively.
# xdg.mime.enable = true;
# xdg.mime.defaultApplications = removePriorities (sortMimes (mergeMimes enabledWeightedMimes));
sane.user.fs.".local/share/applications".symlink.target = "${localShareApplicationsPkg}/share/applications";
xdg.mime.enable = true;
xdg.mime.defaultApplications = removePriorities (sortMimes (mergeMimes enabledWeightedMimes));
}

View File

@ -3,18 +3,13 @@
{
# XDG defines things like ~/Desktop, ~/Downloads, etc.
# these clutter the home, so i mostly don't use them.
# note that several of these are not actually standardized anywhere.
# some are even non-conventional, like:
# - XDG_PHOTOS_DIR: only works because i patch e.g. megapixels
sane.user.fs.".config/user-dirs.dirs".symlink.text = ''
XDG_DESKTOP_DIR="$HOME/.xdg/Desktop"
XDG_DOCUMENTS_DIR="$HOME/dev"
XDG_DOWNLOAD_DIR="$HOME/tmp"
XDG_MUSIC_DIR="$HOME/Music"
XDG_PHOTOS_DIR="$HOME/Pictures/Photos"
XDG_PICTURES_DIR="$HOME/Pictures"
XDG_PUBLICSHARE_DIR="$HOME/.xdg/Public"
XDG_SCREENSHOTS_DIR="$HOME/Pictures/Screenshots"
XDG_TEMPLATES_DIR="$HOME/.xdg/Templates"
XDG_VIDEOS_DIR="$HOME/Videos"
'';
@ -22,12 +17,4 @@
# prevent `xdg-user-dirs-update` from overriding/updating our config
# see <https://manpages.ubuntu.com/manpages/bionic/man5/user-dirs.conf.5.html>
sane.user.fs.".config/user-dirs.conf".symlink.text = "enabled=False";
sane.user.fs.".profile".symlink.text = ''
# configure XDG_<type>_DIR preferences (e.g. for downloads, screenshots, etc)
# surround with `set -o allexport` since user-dirs.dirs doesn't `export` its vars
set -a
source $HOME/.config/user-dirs.dirs
set +a
'';
}

View File

@ -19,7 +19,7 @@
};
sane.hosts.by-name."moby" = {
# ssh.authorized = lib.mkDefault false; # moby's too easy to hijack: don't let it ssh places
ssh.authorized = lib.mkDefault false; # moby's too easy to hijack: don't let it ssh places
ssh.user_pubkey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICrR+gePnl0nV/vy7I5BzrGeyVL+9eOuXHU1yNE3uCwU";
ssh.host_pubkey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIO1N/IT3nQYUD+dBlU1sTEEVMxfOyMkrrDeyHcYgnJvw";
wg-home.pubkey = "I7XIR1hm8bIzAtcAvbhWOwIAabGkuEvbWH/3kyIB1yA=";

View File

@ -49,16 +49,6 @@
sane.ids.media.gid = 2414;
sane.ids.ntfy-sh.uid = 2415;
sane.ids.ntfy-sh.gid = 2415;
sane.ids.monero.uid = 2416;
sane.ids.monero.gid = 2416;
sane.ids.slskd.uid = 2417;
sane.ids.slskd.gid = 2417;
sane.ids.bitcoind-mainnet.uid = 2418;
sane.ids.bitcoind-mainnet.gid = 2418;
sane.ids.clightning.uid = 2419;
sane.ids.clightning.gid = 2419;
sane.ids.nix-serve.uid = 2420;
sane.ids.nix-serve.gid = 2420;
sane.ids.colin.uid = 1000;
sane.ids.guest.uid = 1100;
@ -73,8 +63,6 @@
sane.ids.systemd-oom.uid = 2005;
sane.ids.systemd-oom.gid = 2005;
sane.ids.wireshark.gid = 2006;
sane.ids.nixremote.uid = 2007;
sane.ids.nixremote.gid = 2007;
# found on graphical hosts
sane.ids.nm-iodine.uid = 2101; # desko/moby/lappy

View File

@ -1,30 +1,6 @@
{ lib, ... }:
{
imports = [
./dns.nix
./hostnames.nix
./upnp.nix
./vpn.nix
];
systemd.network.enable = true;
networking.useNetworkd = true;
# view refused/dropped packets with: `sudo journalctl -k`
# networking.firewall.logRefusedPackets = true;
# networking.firewall.logRefusedUnicastsOnly = false;
networking.firewall.logReversePathDrops = true;
# linux will drop inbound packets if it thinks a reply to that packet wouldn't exit via the same interface (rpfilter).
# that heuristic fails for complicated VPN-style routing, especially with SNAT.
# networking.firewall.checkReversePath = false; # or "loose" to keep it partially.
# networking.firewall.enable = false; #< set false to debug
# this is needed to forward packets from the VPN to the host.
# this is required separately by servo and by any `sane-vpn` users,
# however Nix requires this be set centrally, in only one location (i.e. here)
boot.kernel.sysctl."net.ipv4.ip_forward" = 1;
# the default backend is "wpa_supplicant".
# wpa_supplicant reliably picks weak APs to connect to.
# see: <https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/issues/474>
@ -59,6 +35,13 @@
# e.g. openconnect drags in webkitgtk (for SSO)!
networking.networkmanager.plugins = lib.mkForce [];
networking.firewall.allowedUDPPorts = [
1900 # to receive UPnP advertisements. required by sane-ip-check-upnp
];
# keyfile.path = where networkmanager should look for connection credentials
networking.networkmanager.settings.keyfile.path = "/var/lib/NetworkManager/system-connections";
networking.networkmanager.extraConfig = ''
[keyfile]
path=/var/lib/NetworkManager/system-connections
'';
}

View File

@ -1,74 +0,0 @@
# things to consider when changing these parameters:
# - temporary VPN access (`sane-vpn up ...`)
# - servo `ovpns` namespace (it *relies* on /etc/resolv.conf mentioning 127.0.0.53)
# - jails: `firejail --net=br-ovpnd-us --noprofile --dns=46.227.67.134 ping 1.1.1.1`
#
# components:
# - /etc/nsswitch.conf:
# - glibc uses this to provide `getaddrinfo`, i.e. host -> ip address lookup
# call directly with `getent ahostsv4 www.google.com`
# - `nss` (a component of glibc) is modular: names mentioned in that file are `dlopen`'d (i think that's the mechanism)
# in NixOS, that means _they have to be on LDPATH_.
# - `nscd` is used by NixOS simply to proxy nss requests.
# here, /etc/nsswitch.conf consumers contact nscd via /var/run/nscd/socket.
# in this way, only `nscd` needs to have the nss modules on LDPATH.
# - /etc/resolv.conf
# - contains the DNS servers for a system.
# - historically, NetworkManager would update this file as you switch networks.
# - modern implementations hardcode `127.0.0.53` and then systemd-resolved proxies everything (and caches).
#
# namespacing:
# - each namespace can use a different /etc/resolv.conf to specify different DNS servers (see `firejail --dns=...`)
# - nscd breaks namespacing: the host nscd is unaware of the guest's /etc/resolv.conf, and so directs the guest's DNS requests to the host's servers.
# - this is fixed by either `firejail --blacklist=/var/run/nscd/socket`, or disabling nscd altogether.
{ config, lib, ... }:
lib.mkMerge [
{
sane.services.trust-dns.enable = lib.mkDefault config.sane.services.trust-dns.asSystemResolver;
sane.services.trust-dns.asSystemResolver = lib.mkDefault true;
}
(lib.mkIf (!config.sane.services.trust-dns.asSystemResolver) {
# use systemd's stub resolver.
# /etc/resolv.conf isn't sophisticated enough to use different servers per net namespace (or link).
# instead, running the stub resolver on a known address in the root ns lets us rewrite packets
# in servo's ovnps namespace to use the provider's DNS resolvers.
# a weakness is we can only query 1 NS at a time (unless we were to clone the packets?)
# TODO: rework servo's netns to use `firejail`, which is capable of spoofing /etc/resolv.conf.
services.resolved.enable = true; #< to disable, set ` = lib.mkForce false`, as other systemd features default to enabling `resolved`.
# without DNSSEC:
# - dig matrix.org => works
# - curl https://matrix.org => works
# with default DNSSEC:
# - dig matrix.org => works
# - curl https://matrix.org => fails
# i don't know why. this might somehow be interfering with the DNS run on this device (trust-dns)
services.resolved.dnssec = "false";
networking.nameservers = [
# use systemd-resolved resolver
# full resolver (which understands /etc/hosts) lives on 127.0.0.53
# stub resolver (just forwards upstream) lives on 127.0.0.54
"127.0.0.53"
];
})
{
# nscd -- the Name Service Caching Daemon -- caches DNS query responses
# in a way that's unaware of my VPN routing, so routes are frequently poor against
# services which advertise different IPs based on geolocation.
# nscd claims to be usable without a cache, but in practice i can't get it to not cache!
# nsncd is the Name Service NON-Caching Daemon. it's a drop-in that doesn't cache;
# this is OK on the host -- because systemd-resolved caches. it's probably sub-optimal
# in the netns and we query upstream DNS more often than needed. hm.
# services.nscd.enableNsncd = true;
# disabling nscd LOSES US SOME FUNCTIONALITY. in particular, only the glibc-builtin modules are accessible via /etc/resolv.conf.
# - dns: glibc-bultin
# - files: glibc-builtin
# - myhostname: systemd
# - mymachines: systemd
# - resolve: systemd
# in practice, i see no difference with nscd disabled.
# disabling nscd VASTLY simplifies netns and process isolation. see explainer at top of file.
services.nscd.enable = false;
system.nssModules = lib.mkForce [];
}
]

View File

@ -1,15 +0,0 @@
{ config, lib, ... }:
{
# give each host a shortname that all the other hosts know, to allow easy comms.
networking.hosts = lib.mkMerge (builtins.map
(host: let
cfg = config.sane.hosts.by-name."${host}";
in {
"${cfg.lan-ip}" = [ host ];
} // lib.optionalAttrs (cfg.wg-home.ip != null) {
"${cfg.wg-home.ip}" = [ "${host}-hn" ];
})
(builtins.attrNames config.sane.hosts.by-name)
);
}

View File

@ -1,20 +0,0 @@
{ pkgs, ... }:
{
networking.firewall.allowedUDPPorts = [
# to receive UPnP advertisements. required by sane-ip-check.
# N.B. sane-ip-check isn't query/response based. it needs to receive on port 1900 -- not receive responses FROM port 1900.
1900
];
networking.firewall.extraCommands = with pkgs; ''
# after an outgoing SSDP query to the multicast address, open FW for incoming responses.
# necessary for anything DLNA, especially go2tv
# source: <https://serverfault.com/a/911286>
# context: <https://github.com/alexballas/go2tv/issues/72>
# ipset -! means "don't fail if set already exists"
${ipset}/bin/ipset create -! upnp hash:ip,port timeout 10
${iptables}/bin/iptables -A OUTPUT -d 239.255.255.250/32 -p udp -m udp --dport 1900 -j SET --add-set upnp src,src --exist
${iptables}/bin/iptables -A INPUT -p udp -m set --match-set upnp dst,dst -j ACCEPT
'';
}

View File

@ -1,56 +0,0 @@
# to add a new OVPN VPN:
# - generate a privkey `wg genkey`
# - add this key to `sops secrets/universal.yaml`
# - upload pubkey to OVPN.com (`cat wg.priv | wg pubkey`)
# - generate config @ OVPN.com
# - copy the Address, PublicKey, Endpoint from OVPN's config
{ config, lib, pkgs, ... }:
let
def-ovpn = name: { endpoint, publicKey, addrV4, id }: {
sane.vpn."ovpnd-${name}" = {
inherit endpoint publicKey addrV4 id;
privateKeyFile = config.sops.secrets."wg/ovpnd_${name}_privkey".path;
dns = [
"46.227.67.134"
"192.165.9.158"
];
};
sops.secrets."wg/ovpnd_${name}_privkey" = {
# needs to be readable by systemd-network or else it says "Ignoring network device" and doesn't expose it to networkctl.
owner = "systemd-network";
};
};
in lib.mkMerge [
(def-ovpn "us" {
endpoint = "vpn31.prd.losangeles.ovpn.com:9929";
publicKey = "VW6bEWMOlOneta1bf6YFE25N/oMGh1E1UFBCfyggd0k=";
id = 1;
addrV4 = "172.27.237.218";
# addrV6 = "fd00:0000:1337:cafe:1111:1111:ab00:4c8f";
})
# TODO: us-atl disabled until i can give it a different link-local address and wireguard key than us-mi
# (def-ovpn "us-atl" {
# endpoint = "vpn18.prd.atlanta.ovpn.com:9929";
# publicKey = "Dpg/4v5s9u0YbrXukfrMpkA+XQqKIFpf8ZFgyw0IkE0=";
# address = [
# "172.21.182.178/32"
# "fd00:0000:1337:cafe:1111:1111:cfcb:27e3/128"
# ];
# })
(def-ovpn "us-mi" {
endpoint = "vpn34.prd.miami.ovpn.com:9929";
publicKey = "VtJz2irbu8mdkIQvzlsYhU+k9d55or9mx4A2a14t0V0=";
id = 2;
addrV4 = "172.21.182.178";
# addrV6 = "fd00:0000:1337:cafe:1111:1111:cfcb:27e3";
})
(def-ovpn "ukr" {
endpoint = "vpn96.prd.kyiv.ovpn.com:9929";
publicKey = "CjZcXDxaaKpW8b5As1EcNbI6+42A6BjWahwXDCwfVFg=";
id = 3;
addrV4 = "172.18.180.159";
# addrV6 = "fd00:0000:1337:cafe:1111:1111:ec5c:add3";
})
]

View File

@ -0,0 +1,13 @@
{ pkgs, ... }:
{
# allow `nix-shell` (and probably nix-index?) to locate our patched and custom packages
nix.nixPath = [
"nixpkgs=${pkgs.path}"
# note the import starts at repo root: this allows `./overlay/default.nix` to access the stuff at the root
# "nixpkgs-overlays=${../../..}/hosts/common/nix-path/overlay"
# as long as my system itself doesn't rely on NIXPKGS at runtime, we can point the overlays to git
# to avoid switching so much during development
"nixpkgs-overlays=/home/colin/dev/nixos/hosts/common/nix-path/overlay"
];
}

View File

@ -1,91 +0,0 @@
{ config, lib, pkgs, ... }:
{
nix.settings = {
# see: `man nix.conf`
# useful when a remote builder has a faster internet connection than me.
# note that this also applies to `nix copy --to`, though.
# i think any time a remote machine wants a path, we ask it to try substituting that path itself before we supply it.
builders-use-substitutes = true; # default: false
# maximum seconds to wait when connecting to binary substituter
connect-timeout = 3; # default: 0
# download-attempts = 5; # default: 5
# allow `nix flake ...` command
experimental-features = [ "nix-command" "flakes" ];
# whether to build from source when binary substitution fails
fallback = true; # default: false
# whether to keep building dependencies if any other one fails
keep-going = true; # default: false
# whether to keep build-only dependencies of GC roots (e.g. C compiler) when doing GC
keep-outputs = true; # default: false
# how many lines to show from failed build
log-lines = 30; # default: 10
# how many substitution downloads to perform in parallel.
# i wonder if parallelism is causing moby's substitutions to fail?
max-substitution-jobs = 6; # default: 16
# narinfo-cache-negative-ttl = 3600 # default: 3600
# whether to use ~/.local/state/nix/profile instead of ~/.nix-profile, etc
use-xdg-base-directories = true; # default: false
# whether to warn if repository has uncommitted changes
warn-dirty = false; # default: true
# hardlinks identical files in the nix store to save 25-35% disk space.
# unclear _when_ this occurs. it's not a service.
# does the daemon continually scan the nix store?
# does the builder use some content-addressed db to efficiently dedupe?
auto-optimise-store = true;
# allow #!nix-shell scripts to locate my patched nixpkgs & custom packages.
# this line might become unnecessary: see <https://github.com/NixOS/nixpkgs/pull/273170>
nix-path = config.nix.nixPath;
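# concretely, this lets shebang scripts resolve <nixpkgs> to the patched tree, e.g. (packages illustrative):
#   #!/usr/bin/env nix-shell
#   #!nix-shell -i bash -p curl jq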
};
# allow `nix-shell` (and probably nix-index?) to locate our patched and custom packages.
# this is actually a no-op, and the real action happens in assigning `nix.settings.nix-path`.
nix.nixPath = (lib.optionals (config.sane.maxBuildCost >= 2) [
"nixpkgs=${pkgs.path}"
]) ++ [
# note the import starts at repo root: this allows `./overlay/default.nix` to access the stuff at the root
# "nixpkgs-overlays=${../../..}/hosts/common/nix-path/overlay"
# as long as my system itself doesn't rely on NIXPKGS at runtime, we can point the overlays to git
# to avoid switching so much during development
"nixpkgs-overlays=/home/colin/dev/nixos/hosts/common/nix/overlay"
];
# ensure new deployments have a source of this repo with which they can bootstrap.
# this however changes on every commit and can be slow to copy for e.g. `moby`.
environment.etc."nixos" = lib.mkIf (config.sane.maxBuildCost >= 3) {
source = ../../..;
};
environment.etc."nix/registry.json" = lib.mkIf (config.sane.maxBuildCost < 3) {
enable = false;
};
systemd.services.nix-daemon.serviceConfig = {
# the nix-daemon manages nix builders
# kill nix-daemon subprocesses when systemd-oomd detects an out-of-memory condition
# see:
# - nixos PR that enabled systemd-oomd: <https://github.com/NixOS/nixpkgs/pull/169613>
# - systemd's docs on these properties: <https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html#ManagedOOMSwap=auto%7Ckill>
#
# systemd's docs warn that without swap, systemd-oomd might not be able to react quick enough to save the system.
# see `man oomd.conf` for further tunables that may help.
#
# alternatively, apply this more broadly with `systemd.oomd.enableSystemSlice = true` or `enableRootSlice`
# TODO: also apply this to the guest user's slice (user-1100.slice)
# TODO: also apply this to distccd
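# a rough sketch of that broader form (untested):
#   systemd.oomd.enableSystemSlice = true;  # or enableRootSlice = true;
# plus a drop-in on user-1100.slice setting `ManagedOOMMemoryPressure=kill` for the guest user.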
ManagedOOMMemoryPressure = "kill";
ManagedOOMSwap = "kill";
};
}

View File

@ -1,14 +1,13 @@
{ ... }:
{
# store /home/colin/a/b in /mnt/persist/private/a/b instead of /mnt/persist/private/home/colin/a/b
sane.persist.stores.private.origin = "/home/colin/private";
# store /home/colin/a/b in /home/private/a/b instead of /home/private/home/colin/a/b
sane.persist.stores.private.prefix = "/home/colin";
sane.persist.sys.byStore.initrd = [
"/var/log"
];
sane.persist.sys.byStore.plaintext = [
# TODO: these should be private.. somehow
"/var/log"
"/var/backup" # for e.g. postgres dumps
];
sane.persist.sys.byStore.cryptClearOnBoot = [

View File

@ -1,72 +0,0 @@
# strictly *decrease* the scope of the default nixos installation/config
{ lib, ... }:
{
# disable non-required packages like nano, perl, rsync, strace
environment.defaultPackages = [];
# remove all the non-existent default directories from XDG_DATA_DIRS, XDG_CONFIG_DIRS to simplify debugging.
# this is defaulted in <repo:nixos/nixpkgs:nixos/modules/programs/environment.nix>,
# without being gated by any higher config.
environment.profiles = lib.mkForce [
"/etc/profiles/per-user/$USER"
"/run/current-system/sw"
];
# NIXPKGS_CONFIG defaults to "/etc/nix/nixpkgs-config.nix" in <nixos/modules/programs/environment.nix>.
# that's never existed on my system and everything works fine with it set to empty (there's no nixpkgs API to forcibly *unset* it).
environment.variables.NIXPKGS_CONFIG = lib.mkForce "";
# XDG_CONFIG_DIRS defaults to "/etc/xdg", which doesn't exist.
# in practice, pam appends the values i want to XDG_CONFIG_DIRS, though this approach causes an extra leading `:`
environment.sessionVariables.XDG_CONFIG_DIRS = lib.mkForce [];
# XCURSOR_PATH: defaults to `[ "$HOME/.icons" "$HOME/.local/share/icons" ]`, neither of which i use, just adding noise.
# see: <repo:nixos/nixpkgs:nixos/modules/config/xdg/icons.nix>
environment.sessionVariables.XCURSOR_PATH = lib.mkForce [];
# disable nixos' portal module, otherwise /share/applications gets linked into the system and complicates things (sandboxing).
# instead, i manage portals myself via the sane.programs API (e.g. sane.programs.xdg-desktop-portal).
xdg.portal.enable = false;
xdg.menus.enable = false; #< links /share/applications, and a bunch of other empty (i.e. unused) dirs
# xdg.autostart.enable defaults to true, and links /etc/xdg/autostart into the environment, populated with .desktop files.
# see: <repo:nixos/nixpkgs:nixos/modules/config/xdg/autostart.nix>
# .desktop files are a questionable way to autostart things: i generally prefer a service manager for that.
xdg.autostart.enable = false;
# nix.channel.enable: populates `/nix/var/nix/profiles/per-user/root/channels`, `/root/.nix-channels`, `$HOME/.nix-defexpr/channels`
# <repo:nixos/nixpkgs:nixos/modules/config/nix-channel.nix>
# TODO: may want to recreate NIX_PATH, nix.settings.nix-path
nix.channel.enable = false;
# environment.stub-ld: populate /lib/ld-linux.so with an object that unconditionally errors on launch,
# so as to inform when trying to run a non-nixos binary?
# IMO that's confusing: i thought /lib/ld-linux.so was some file actually required by nix.
environment.stub-ld.enable = false;
# `less.enable` sets LESSKEYIN_SYSTEM, LESSOPEN, LESSCLOSE env vars, which does confusing "lesspipe" things, so disable that.
# it's enabled by default from `<nixos/modules/programs/environment.nix>`, which also sets `PAGER="less"` and `EDITOR="nano"` (keep).
programs.less.enable = lib.mkForce false;
environment.variables.PAGER = lib.mkOverride 900 ""; # mkDefault sets 1000. non-override is 100. 900 will beat the nixpkgs `mkDefault` but not anyone else.
environment.variables.EDITOR = lib.mkOverride 900 "";
# several packages (dconf, modemmanager, networkmanager, gvfs, polkit, udisks, bluez/blueman, feedbackd, etc)
# will add themselves to the dbus search path.
# i prefer dbus to only search XDG paths (/share/dbus-1) for service files, as that's more introspectable.
# see: <repo:nixos/nixpkgs:nixos/modules/services/system/dbus.nix>
# TODO: sandbox dbus? i pretty explicitly don't want to use it as a launcher.
services.dbus.packages = lib.mkForce [
"/run/current-system/sw"
# config.system.path
# pkgs.dbus
# pkgs.polkit.out
# pkgs.modemmanager
# pkgs.networkmanager
# pkgs.udisks
# pkgs.wpa_supplicant
];
# systemd by default forces shitty defaults for e.g. /tmp/.X11-unix.
# nixos propagates those in: <nixos/modules/system/boot/systemd/tmpfiles.nix>
# by overwriting this with an empty file, we can effectively remove it.
environment.etc."tmpfiles.d/x11.conf".text = "# (removed by Colin)";
}

View File

@ -1,94 +0,0 @@
# discord gtk3 client
{ config, lib, pkgs, ... }:
let
cfg = config.sane.programs.abaddon;
in
{
sane.programs.abaddon = {
configOption = with lib; mkOption {
default = {};
type = types.submodule {
options.autostart = mkOption {
type = types.bool;
default = false;
};
};
};
packageUnwrapped = pkgs.abaddon.overrideAttrs (upstream: {
patches = (upstream.patches or []) ++ [
(pkgs.fetchpatch {
url = "https://git.uninsane.org/colin/abaddon/commit/eb551f188d34679f75adcbc83cb8d5beb4d19fd6.patch";
name = ''"view members" default to false'';
hash = "sha256-9BX8iO86CU1lNrKS1G2BjDR+3IlV9bmhRNTsLrxChwQ=";
})
];
});
suggestedPrograms = [ "gnome-keyring" ];
fs.".config/abaddon/abaddon.ini".symlink.text = ''
# see abaddon README.md for options.
# at time of writing:
# | Setting | Type | Default | Description |
# |[discord]------|---------|---------|--------------------------------------------------------------------------------------------------|
# | `gateway` | string | | override url for Discord gateway. must be json format and use zlib stream compression |
# | `api_base` | string | | override base url for Discord API |
# | `memory_db` | boolean | false | if true, Discord data will be kept in memory as opposed to on disk |
# | `token` | string | | Discord token used to login, this can be set from the menu |
# | `prefetch` | boolean | false | if true, new messages will cause the avatar and image attachments to be automatically downloaded |
# | `autoconnect` | boolean | false | autoconnect to discord |
# |[http]--------|--------|---------|---------------------------------------------------------------------------------------------|
# | `user_agent` | string | | sets the user-agent to use in HTTP requests to the Discord API (not including media/images) |
# | `concurrent` | int | 20 | how many images can be concurrently retrieved |
# |[gui]------------------------|---------|---------|----------------------------------------------------------------------------------------------------------------------------|
# | `member_list_discriminator` | boolean | true | show user discriminators in the member list |
# | `stock_emojis` | boolean | true | allow abaddon to substitute unicode emojis with images from emojis.bin, must be false to allow GTK to render emojis itself |
# | `custom_emojis` | boolean | true | download and use custom Discord emojis |
# | `css` | string | | path to the main CSS file |
# | `animations` | boolean | true | use animated images where available (e.g. server icons, emojis, avatars). false means static images will be used |
# | `animated_guild_hover_only` | boolean | true | only animate guild icons when the guild is being hovered over |
# | `owner_crown` | boolean | true | show a crown next to the owner |
# | `unreads` | boolean | true | show unread indicators and mention badges |
# | `save_state` | boolean | true | save the state of the gui (active channels, tabs, expanded channels) |
# | `alt_menu` | boolean | false | keep the menu hidden unless revealed with alt key |
# | `hide_to_tray` | boolean | false | hide abaddon to the system tray on window close |
# | `show_deleted_indicator` | boolean | true | show \[deleted\] indicator next to deleted messages instead of actually deleting the message |
# | `font_scale` | double | | scale font rendering. 1 is unchanged |
# |[style]------------------|--------|-----------------------------------------------------|
# | `linkcolor` | string | color to use for links in messages |
# | `expandercolor` | string | color to use for the expander in the channel list |
# | `nsfwchannelcolor` | string | color to use for NSFW channels in the channel list |
# | `channelcolor` | string | color to use for SFW channels in the channel list |
# | `mentionbadgecolor` | string | background color for mention badges |
# | `mentionbadgetextcolor` | string | color to use for number displayed on mention badges |
# | `unreadcolor` | string | color to use for the unread indicator |
# |[notifications]|---------|--------------------------|-------------------------------------------------------------------------------|
# | `enabled` | boolean | true (if not on Windows) | Enable desktop notifications |
# | `playsound` | boolean | true | Enable notification sounds. Requires ENABLE_NOTIFICATION_SOUNDS=TRUE in CMake |
# |[voice]--|--------|------------------------------------|------------------------------------------------------------|
# | `vad` | string | rnnoise if enabled, gate otherwise | Method used for voice activity detection. Changeable in UI |
# |[windows]|---------|---------|-------------------------|
# | `hideconsole` | boolean | true | Hide console on startup |
# N.B.: abaddon rewrites this file itself (even when i don't change anything in the app).
# it prefers no spaces around the equal sign.
[discord]
autoconnect=true
[notifications]
# playsound: i manage sounds via swaync
playsound=false
'';
persist.byStore.private = [
".cache/abaddon"
];
services.abaddon = {
description = "unofficial Discord chat client";
partOf = lib.mkIf cfg.config.autostart [ "graphical-session" ];
command = "abaddon";
};
};
}

View File

@ -3,9 +3,6 @@
{
sane.programs.aerc = {
sandbox.method = "bwrap";
sandbox.wrapperType = "inplace"; #< /share/aerc/aerc.conf refers to other /share files by absolute path
sandbox.net = "clearnet";
secrets.".config/aerc/accounts.conf" = ../../../secrets/common/aerc_accounts.conf.bin;
mime.associations."x-scheme-handler/mailto" = "aerc.desktop";
};

View File

@ -3,68 +3,22 @@
# - `man 5 alacritty`
# - defaults: <https://github.com/alacritty/alacritty/releases> -> alacritty.yml
# - irc: #alacritty on libera.chat
{ config, lib, ... }:
let
cfg = config.sane.programs.alacritty;
in
{ lib, ... }:
{
sane.programs.alacritty = {
configOption = with lib; mkOption {
default = {};
type = types.submodule {
options.fontSize = mkOption {
type = types.int;
default = 14;
};
};
};
sandbox.enable = false;
env.TERMINAL = lib.mkDefault "alacritty";
# note: alacritty will switch to .toml config in the 0.13.0 release
# - run `alacritty migrate` to convert the yaml to toml
fs.".config/alacritty/alacritty.yml".symlink.text = ''
font:
size: 14
fs.".config/alacritty/alacritty.toml".symlink.text = ''
[font]
size = ${builtins.toString cfg.config.fontSize}
[[keyboard.bindings]]
mods = "Control"
key = "N"
action = "CreateNewWindow"
[[keyboard.bindings]]
mods = "Control"
key = "PageUp"
action = "ScrollPageUp"
[[keyboard.bindings]]
mods = "Control"
key = "PageDown"
action = "ScrollPageDown"
[[keyboard.bindings]]
mods = "Control|Shift"
key = "PageUp"
action = "ScrollPageUp"
[[keyboard.bindings]]
mods = "Control|Shift"
key = "PageDown"
action = "ScrollPageDown"
# disable OS shortcuts which leak through...
# see sway config or sane-input-handler for more info on why these leak through
[[keyboard.bindings]]
key = "AudioVolumeUp"
action = "None"
[[keyboard.bindings]]
key = "AudioVolumeDown"
action = "None"
[[keyboard.bindings]]
key = "Power"
action = "None"
[[keyboard.bindings]]
key = "PowerOff"
action = "None"
key_bindings:
- { key: N, mods: Control, action: CreateNewWindow }
- { key: PageUp, mods: Control, action: ScrollPageUp }
- { key: PageDown, mods: Control, action: ScrollPageDown }
- { key: PageUp, mods: Control|Shift, action: ScrollPageUp }
- { key: PageDown, mods: Control|Shift, action: ScrollPageDown }
'';
};
}

View File

@ -1,65 +0,0 @@
{ config, lib, pkgs, ... }:
let
cfg = config.sane.programs.alsa-ucm-conf;
in
{
sane.programs.alsa-ucm-conf = {
configOption = with lib; mkOption {
default = {};
type = types.submodule {
options.preferEarpiece = mkOption {
type = types.bool;
default = true;
};
};
};
# upstream alsa ships with PinePhone audio configs, but they don't actually produce sound.
# see: <https://github.com/alsa-project/alsa-ucm-conf/pull/134>
# these audio files come from some revision of:
# - <https://gitlab.manjaro.org/manjaro-arm/packages/community/phosh/alsa-ucm-pinephone>
#
# alternative to patching is to plumb `ALSA_CONFIG_UCM2 = "${./ucm2}"` environment variable into the relevant places
# e.g. `systemd.services.pulseaudio.environment`.
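# spelled out, that alternative would look roughly like
#   `systemd.services.pulseaudio.environment.ALSA_CONFIG_UCM2 = "${./ucm2}";`
# repeated for every service which touches the sound hardware.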
# that leaves more opportunity for gaps (i.e. missing a service),
# on the other hand this method causes about 500 packages to be rebuilt (including qt5 and webkitgtk).
#
# note that with these files, the following audio device support:
# - headphones work.
# - "internal earpiece" works.
# - "internal speaker" doesn't work (but that's probably because i broke the ribbon cable)
# - "analog output" doesn't work.
packageUnwrapped = pkgs.alsa-ucm-conf.overrideAttrs (upstream: {
postPatch = (upstream.postPatch or "") + ''
cp ${./ucm2/PinePhone}/* ucm2/Allwinner/A64/PinePhone/
# fix the self-contained ucm files i source from to have correct path within the alsa-ucm-conf source tree
substituteInPlace ucm2/Allwinner/A64/PinePhone/PinePhone.conf \
--replace-fail 'HiFi.conf' '/Allwinner/A64/PinePhone/HiFi.conf'
substituteInPlace ucm2/Allwinner/A64/PinePhone/PinePhone.conf \
--replace-fail 'VoiceCall.conf' '/Allwinner/A64/PinePhone/VoiceCall.conf'
'' + lib.optionalString cfg.config.preferEarpiece ''
# decrease the priority of the internal speaker so that sounds are routed
# to the earpiece by default.
# this is just personal preference.
substituteInPlace ucm2/Allwinner/A64/PinePhone/{HiFi.conf,VoiceCall.conf} \
--replace-fail 'PlaybackPriority 300' 'PlaybackPriority 100'
'';
});
sandbox.enable = false; #< only provides $out/share/alsa
# alsa-lib package only looks in its $out/share/alsa to find runtime config data, by default.
# but ALSA_CONFIG_UCM2 is an env var that can override that.
# this is particularly needed by wireplumber;
# also *maybe* pipewire and pipewire-pulse.
# taken from <repo:nixos/mobile-nixos:modules/quirks/audio.nix>
env.ALSA_CONFIG_UCM2 = "/run/current-system/sw/share/alsa/ucm2";
enableFor.system = lib.mkIf (builtins.any (en: en) (builtins.attrValues cfg.enableFor.user)) true;
};
environment.pathsToLink = lib.mkIf cfg.enabled [
"/share/alsa/ucm2"
];
}

View File

@ -1,43 +0,0 @@
# debug with:
# - `animatch --debug`
# - `gdb animatch`
# try:
# - `animatch --fullscreen`
# - `animatch --windowed`
# the other config options (e.g. verbose logging -- which doesn't seem to do anything) have to be configured via .ini file
# ```ini
# # ~/.config/Holy Pangolin/Animatch/SuperDerpy.ini
# [SuperDerpy]
# debug=1
# disableTouch=1
# [game]
# verbose=1
# ```
{ pkgs, ... }:
{
sane.programs.animatch = {
packageUnwrapped = with pkgs; animatch.override {
# allegro has no native wayland support, and so by default crashes when run without Xwayland.
# enable the allegro SDL backend, and achieve Wayland support via SDL's Wayland support.
# TODO: see about upstreaming this to nixpkgs?
allegro5 = allegro5.overrideAttrs (upstream: {
buildInputs = upstream.buildInputs ++ [
SDL2
];
cmakeFlags = upstream.cmakeFlags ++ [
"-DALLEGRO_SDL=on"
];
});
};
buildCost = 1;
sandbox.method = "bwrap";
sandbox.whitelistWayland = true;
persist.byStore.plaintext = [
# ".config/Holy Pangolin/Animatch" #< used for SuperDerpy config (e.g. debug, disableTouch, fullscreen, enable sound, etc). SuperDerpy.ini
".local/share/Holy Pangolin/Animatch" #< used for game state (level clears). SuperDerpy.ini
];
};
}

File diff suppressed because it is too large

View File

@ -1,44 +0,0 @@
# tips/tricks
# - audio recording
# - the default recording input will be silent on lappy.
# - Audio Setup -> Rescan Audio Devices ...
# - Audio Setup -> Recording device -> sysdefault
{ pkgs, ... }:
{
sane.programs.audacity = {
packageUnwrapped = pkgs.audacity.override {
# wxGTK32 uses webkitgtk-4.0.
# audacity doesn't actually need webkit though, so disable it to reduce closure size
wxGTK32 = pkgs.wxGTK32.override {
withWebKit = false;
};
};
buildCost = 1;
sandbox.method = "bwrap";
sandbox.whitelistAudio = true;
sandbox.whitelistWayland = true;
sandbox.autodetectCliPaths = true;
sandbox.extraHomePaths = [
# support media imports via file->open dir to some common media directories
"tmp"
"Music"
# audacity needs the entire config dir mounted if running in a sandbox
".config/audacity"
];
sandbox.extraPaths = [
"/dev/snd" # for recording audio inputs to work
];
# disable first-run splash screen
fs.".config/audacity/audacity.cfg".file.text = ''
PrefsVersion=1.1.1r1
[GUI]
ShowSplashScreen=0
[Version]
Major=3
Minor=4
'';
};
}

View File

@ -87,13 +87,7 @@ let
in
{
sane.programs.bemenu = {
sandbox.method = "bwrap"; # landlock works, but requires *all* of /run/user/$ID to be granted.
sandbox.whitelistWayland = true;
sandbox.extraHomePaths = [
".cache/fontconfig" #< else it complains, and is *way* slower
];
packageUnwrapped = pkgs.bemenu.overrideAttrs (upstream: {
package = pkgs.bemenu.overrideAttrs (upstream: {
nativeBuildInputs = (upstream.nativeBuildInputs or []) ++ [
pkgs.makeWrapper
];

Some files were not shown because too many files have changed in this diff