Compare commits

..

1 Commit

Author SHA1 Message Date
5152159691 try to add Kaiteki as a package
requires updating dart (flutter), which is causing problems...
2022-06-05 02:28:23 -07:00
900 changed files with 4770 additions and 117386 deletions

2
.gitignore vendored
View File

@@ -1,4 +1,2 @@
.working
result
result-*
/secrets/local.nix

View File

@@ -1,49 +0,0 @@
keys:
  - &user_desko_colin age1tnl4jfgacwkargzeqnhzernw29xx8mkv73xh6ufdyde6q7859slsnzf24x
  - &user_lappy_colin age1j2pqnl8j0krdzk6npe93s4nnqrzwx978qrc0u570gzlamqpnje9sc8le2g
  - &user_servo_colin age1z8fauff34cdecr6sjkre260luzxcca05kpcwvhx988d306tpcejsp63znu
  - &user_moby_colin age1zsrsvd7j6l62fjxpfd2qnhqlk8wk4p8r0dtxpe4sdgnh2474095qdu7xj9
  - &host_crappy age1hl50ufuxnqy0jnk8fqeu4tclh4vte2xn2d59pxff0gun20vsmv5sp78chj
  - &host_desko age1vnw7lnfpdpjn62l3u5nyv5xt2c965k96p98kc43mcnyzpetrts9q54mc9v
  - &host_lappy age1w7mectcjku6x3sd8plm8wkn2qfrhv9n6zhzlf329e2r2uycgke8qkf9dyn
  - &host_servo age1tzlyex2z6t88tg9h82943e39shxhmqeyr7ywhlwpdjmyqsndv3qq27x0rf
  - &host_moby age18vq5ktwgeaysucvw9t67drqmg5zd5c5k3le34yqxckkfj7wqdqgsd4ejmt
creation_rules:
  - path_regex: secrets/common*
    key_groups:
      - age:
          - *user_desko_colin
          - *user_lappy_colin
          - *user_servo_colin
          - *user_moby_colin
          - *host_crappy
          - *host_desko
          - *host_lappy
          - *host_servo
          - *host_moby
  - path_regex: secrets/servo*
    key_groups:
      - age:
          - *user_desko_colin
          - *user_lappy_colin
          - *user_servo_colin
          - *host_servo
  - path_regex: secrets/desko*
    key_groups:
      - age:
          - *user_desko_colin
          - *user_lappy_colin
          - *host_desko
  - path_regex: secrets/lappy*
    key_groups:
      - age:
          - *user_lappy_colin
          - *user_desko_colin
          - *host_lappy
  - path_regex: secrets/moby*
    key_groups:
      - age:
          - *user_desko_colin
          - *user_lappy_colin
          - *user_moby_colin
          - *host_moby

126
README.md
View File

@@ -1,126 +0,0 @@
![hello](doc/hello.gif)
# .❄≡We|_c0m3 7o m`/ f14k≡❄.
(er, it's not a flake anymore. welcome to my nix files.)
## What's Here
this is the top-level repo from which i configure/deploy all my NixOS machines:
- desktop
- laptop
- server
- mobile phone (Pinephone)
everything outside of [hosts/](./hosts/) and [secrets/](./secrets/) is intended for export, to be importable for use by 3rd parties.
the only hard dependency for my exported pkgs/modules should be [nixpkgs][nixpkgs].
building [hosts/](./hosts/) will require [sops][sops].
you might specifically be interested in these files (elaborated further in #key-points-of-interest):
- ~~[`sxmo-utils`](./pkgs/additional/sxmo-utils/default.nix)~~
- these files will remain until my config settles down, but i no longer use or maintain SXMO.
- [my implementation of impermanence](./modules/persist/default.nix)
- my way of deploying dotfiles/configuring programs per-user:
- [modules/fs/](./modules/fs/default.nix)
- [modules/programs/](./modules/programs/default.nix)
- [modules/users/](./modules/users/default.nix)
[nixpkgs]: https://github.com/NixOS/nixpkgs
[sops]: https://github.com/Mic92/sops-nix
[uninsane-org]: https://uninsane.org
## Using This Repo In Your Own Config
follow the instructions [here][NUR] to access my packages through the Nix User Repositories.
[NUR]: https://nur.nix-community.org/
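as a rough sketch of what consumption looks like (the NUR instructions above are authoritative; the `colinsane` namespace is guessed from the mirror path listed under #mirrors, and `sane-scripts` is just an illustrative package):
```nix
# sketch: pulling one of these packages via NUR in a NixOS config.
# the `colinsane` namespace and `sane-scripts` attribute are assumptions for
# illustration; follow the NUR docs linked above for the real setup.
{ pkgs, ... }:
{
  nixpkgs.config.packageOverrides = pkgs: {
    nur = import (builtins.fetchTarball "https://github.com/nix-community/NUR/archive/master.tar.gz") {
      inherit pkgs;
    };
  };
  environment.systemPackages = [ pkgs.nur.repos.colinsane.sane-scripts ];
}
```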
## Layout
- `doc/`
- instructions for tasks i find myself doing semi-occasionally in this repo.
- `hosts/`
- configs which aren't factored with external use in mind.
- that is, if you were to add this repo to a flake.nix for your own use,
you likely won't be depending on anything in this directory.
- `integrations/`
- code intended for consumption by external tools (e.g. the Nix User Repos).
- `modules/`
- config which is gated behind `enable` flags, in similar style to nixpkgs' `nixos/` directory.
- if you depend on this repo for anything besides packages, it's most likely for something in this directory.
- `overlays/`
- predominantly a list of `callPackage` directives.
- `pkgs/`
- derivations for things not yet packaged in nixpkgs.
- derivations for things from nixpkgs which i need to `override` for some reason.
- inline code for wholly custom packages (e.g. `pkgs/additional/sane-scripts/` for CLI tools
that are highly specific to my setup).
- `scripts/`
- scripts which aren't reachable on a deployed system, but may aid manual deployments.
- `secrets/`
- encrypted keys, API tokens, anything which one or more of my machines needs
read access to but shouldn't be world-readable.
- not much to see here.
- `templates/`
- used to instantiate short-lived environments.
- used to auto-fill the boiler-plate portions of new packages.
## Key Points of Interest
i.e. you might find value in using these in your own config:
- `modules/fs/`
- use this to statically define leaves and nodes anywhere in the filesystem,
not just inside `/nix/store`.
- e.g. specify that `/var/www` should be:
- owned by a specific user/group
- set to a specific mode
- symlinked to some other path
- populated with some statically-defined data
- populated according to some script
- created as a dependency of some service (e.g. `nginx`)
- values defined here are applied neither at evaluation time _nor_ at activation time.
- rather, they become systemd services.
- systemd manages dependencies
- (e.g. link `/var/www -> /mnt/my-drive/www` only _after_ `/mnt/my-drive/www` appears)
- this is akin to using [Home Manager's][home-manager] file API -- the part which lets you
statically define `~/.config` files -- just with a different philosophy.
- `modules/persist/`
- my alternative to the Impermanence module.
- this builds atop `modules/fs/` to achieve things stock impermanence can't:
- persist things to encrypted storage which is unlocked at login time (pam_mount).
- "persist" cache directories -- to free up RAM -- but auto-wipe them on mount
and encrypt them to ephemeral keys so they're unreadable post shutdown/unmount.
- `modules/programs/`
- like nixpkgs' `programs` options, but allows both system-wide and per-user deployment.
- allows `fs` and `persist` config values to be gated behind program deployment:
- e.g. `/home/<user>/.mozilla/firefox` is persisted only for users who
`sane.programs.firefox.enableFor.user."<user>" = true;`
- allows aggressive sandboxing of any program:
- `sane.programs.firefox.sandbox.method = "bwrap"; # sandbox with bubblewrap`
- `sane.programs.firefox.sandbox.whitelistWayland = true; # allow it to render a wayland window`
- `sane.programs.firefox.sandbox.extraHomePaths = [ "Downloads" ]; # allow it read/write access to ~/Downloads`
- integrated with the `fs` and `persist` modules so that programs' config files and persisted data stores are linked into the sandbox w/o any extra involvement (a combined sketch follows this list).
- `modules/users/`
- convenience layer atop the above modules so that you can just write
`fs.".config/git"` instead of `fs."/home/colin/.config/git"`
- per-user services managed by [s6-rc](https://www.skarnet.org/software/s6-rc/)
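as a rough sketch of how these pieces fit together in a host config (only the option names quoted in the bullets above are real; the commented line is a guess at the per-user file API's shape):
```nix
# sketch: enable + sandbox a program for one user.
# `enableFor` and `sandbox.*` are quoted from the bullets above;
# the commented `fs` line is only an approximation, not the real API.
{
  sane.programs.firefox.enableFor.user."colin" = true;
  sane.programs.firefox.sandbox.method = "bwrap";                 # sandbox with bubblewrap
  sane.programs.firefox.sandbox.whitelistWayland = true;          # allow it to render a wayland window
  sane.programs.firefox.sandbox.extraHomePaths = [ "Downloads" ]; # allow read/write access to ~/Downloads

  # per-user dotfiles via the `users` convenience layer, roughly:
  # sane.users.colin.fs.".config/git" = { ... };
}
```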
some things in here could easily find broader use. if you would find benefit in
them being factored out of my config, message me and we could work to make that happen.
[home-manager]: https://github.com/nix-community/home-manager
## Mirrors
this repo exists in a few known locations:
- primary: <https://git.uninsane.org/colin/nix-files>
- mirror: <https://github.com/nix-community/nur-combined/tree/master/repos/colinsane>
## Contact
if you want to contact me for questions, or collaborate to split something useful into a shared repo, etc,
you can reach me via any method listed [here](https://uninsane.org/about).
patches, for this repo or any other i host, will be warmly welcomed in any manner you see fit:
`git send-email`, DM'ing the patch over Matrix/Lemmy/ActivityPub/etc, even a literal PR where you
link me to your own clone.

190
TODO.md
View File

@@ -1,177 +1,21 @@
## BUGS
- `rmDbusServices` may break sandboxing
- e.g. if the package ships a systemd unit which references $out, then make-sandboxed won't properly update that unit.
- `rmDbusServicesInPlace` is not affected
- when moby wlan is explicitly set down (via ip link set wlan0 down), /var/lib/trust-dns/dhcp-configs doesn't get reset
- `ip monitor` can detect those manual link state changes (NM-dispatcher it seems cannot)
- or try dnsmasq?
- trust-dns: can't recursively resolve api.mangadex.org
- and *sometimes* apple.com fails
- sandbox: link cache means that if i update ~/.config/... files inline, sandboxed programs still see the old version
- mpv: audiocast has mpv sending its output to the builtin speakers unless manually changed
- mpv: no way to exit fullscreen video on moby
- uosc hides controls on FS, and touch doesn't support unhiding
- Signal restart loop drains battery
- decrease s6 restart time?
- `ssh` access doesn't grant the same linux capabilities as login
- ringer (i.e. dino incoming call) doesn't prevent moby from sleeping
- syshud (volume overlay): when casting with `blast`, syshud doesn't react to volume changes
- moby: kaslr is effectively disabled
- `dmesg | grep "KASLR disabled due to lack of seed"`
- fix by adding `kaslrseed` to uboot script before `booti`
- <https://github.com/armbian/build/pull/4352>
- not sure how that's supposed to work with tow-boot; maybe i should just update tow-boot
- moby: bpf is effectively disabled?
- `dmesg | grep 'systemd[1]: bpf-lsm: Failed to load BPF object: No such process'`
- `dmesg | grep 'hid_bpf: error while preloading HID BPF dispatcher: -22'`
- `s6` is not re-entrant
- so if the desktop crashes, the login process from `unl0kr` fails to re-launch the GUI
- swaync brightness slider does not work
- it reads brightness from /sys/class/backlight/....
- but to *set* the brightness it assumes systemd logind is running
<repo:ErikReider/SwayNotificationCenter:src/controlCenter/widgets/backlight/backlightUtil.vala>
no reason i can't just write to that file, or exec brightnessctl (if i learn vala)
# features/tweaks
- enable sshfs (desko/lappy)
- set firefox default search engine
- iron out video drivers
## REFACTORING:
- add import checks to my Python nix-shell scripts
- consolidate ~/dev and ~/ref
- ~/dev becomes a link to ~/ref/cat/mine
- fold hosts/common/home/ssh.nix -> hosts/common/users/colin.nix
### sops/secrets
- rework secrets to leverage `sane.fs`
- remove sops activation script as it's covered by my systemd sane.fs impl
- user secrets could just use `gocryptfs`, like with ~/private?
- can gocryptfs support nested filesystems, each with different perms (for desko, moby, etc)?
### roles
- allow any host to take the role of `uninsane.org`
- will make it easier to test new services?
### upstreaming
- add updateScripts to all my packages in nixpkgs
#### upstreaming to non-nixpkgs repos
- gtk: build schemas even on cross compilation: <https://github.com/NixOS/nixpkgs/pull/247844>
# cleanup
- remove helpers from outputs section (use `let .. in`)
## IMPROVEMENTS:
- systemd/journalctl: use a less shit pager
- there's an env var for it: SYSTEMD_PAGER? and a flag for journalctl
- kernels: ship the same kernel on every machine
- then i can tune the kernels for hardening, without duplicating that work 4 times
- zfs: replace this with something which doesn't require a custom kernel build
- mpv: add media looping controls (e.g. loop song, loop playlist)
# speed up cross compiling
https://nixos.wiki/wiki/Cross_Compiling
https://nixos.wiki/wiki/NixOS_on_ARM
overlays = [{ ... }: {
nixpkgs.crossSystem.system = "aarch64-linux";
}];
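(the fragment above mixes overlay and module syntax; a sketch of the same idea as plain NixOS module options -- both of which already appear elsewhere in this diff -- would look roughly like:)
```nix
# sketch: cross-compilation knobs as NixOS module options, not an overlay
{ ... }:
{
  # run aarch64 builds on an x86_64 host via qemu binfmt emulation:
  boot.binfmt.emulatedSystems = [ "aarch64-linux" ];
  # or evaluate the whole system as a true cross build:
  # nixpkgs.crossSystem.system = "aarch64-linux";
}
```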
### security/resilience
- validate duplicity backups!
- encrypt more ~ dirs (~/archives, ~/records, ..?)
- best to do this after i know for sure i have good backups
- /mnt/desko/home, etc, shouldn't include secrets (~/private)
- 95% of its use is for remote media access and stuff which isn't in VCS (~/records)
- port all sane.programs to be sandboxed
- enforce that all `environment.packages` has a sandbox profile (or explicitly opts out)
- revisit "non-sandboxable" apps and check that i'm not actually just missing mountpoints
- LL_FS_RW=/ isn't enough -- need all mount points like `=/:/proc:/sys:...`.
- ensure non-bin package outputs are linked for sandboxed apps
- i.e. `outputs.man`, `outputs.debug`, `outputs.doc`, ...
- lock down dbus calls within the sandbox
- otherwise anyone can `systemd-run --user ...` to potentially escape a sandbox
- <https://github.com/flatpak/xdg-dbus-proxy>
- remove `.ssh` access from Firefox!
- limit access to `~/knowledge/secrets` through an agent that requires GUI approval, so a firefox exploit can't steal all my logins
- port sanebox to a compiled language (hare?)
- it adds like 50-70ms launch time _on my laptop_. i'd hate to know how much that is on the pinephone.
- make dconf stuff less monolithic
- i.e. per-app dconf profiles for those which need it. possible static config.
- flatpak/spectrum has some stuff to proxy dconf per-app
- canaries for important services
- e.g. daily email checks; daily backup checks
- integrate `nix check` into Gitea actions?
### user experience
- rofi: sort items case-insensitively
- xdg-desktop-portal shouldn't kill children on exit
- *maybe* a job for `setsid -f`?
- replace starship prompt with something more efficient
- watch `forkstat`: it does way too much
- cleanup waybar/nwg-panel so that it's not invoking playerctl every 2 seconds
- nwg-panel: swaync icon is stuck as the refresh icon
- nwg-panel: doesn't appear on all desktops
- nwg-panel: doesn't know that virtual-desktop 10/TV exists
- install apps:
- display QR codes for WiFi endpoints: <https://linuxphoneapps.org/apps/noappid.wisperwind.wifi2qr/>
- shopping list (not in nixpkgs): <https://linuxphoneapps.org/apps/ro.hume.cosmin.shoppinglist/>
- offline Wikipedia (or, add to `wike`)
- offline docs viewer (gtk): <https://github.com/workbenchdev/Biblioteca>
- some type of games manager/launcher
- Gnome Highscore (retro games)?: <https://gitlab.gnome.org/World/highscore>
- better maps for mobile (Osmin (QtQuick)? Pure Maps (Qt/Kirigami)?)
- note-taking app: <https://linuxphoneapps.org/categories/note-taking/>
- Folio is nice, uses standard markdown, though it only supports flat repos
- OSK overlay specifically for mobile gaming
- i.e. mock joysticks, for use with SuperTux and SuperTuxKart
- install mobile-friendly games:
- Shattered Pixel Dungeon (nixpkgs `shattered-pixel-dungeon`; doesn't cross-compile b/c openjdk/libIDL) <https://github.com/ebolalex/shattered-pixel-dungeon>
- UnCiv (Civ V clone; nixpkgs `unciv`; doesn't cross-compile): <https://github.com/yairm210/UnCiv>
- Simon Tatham's Puzzle Collection (not in nixpkgs) <https://git.tartarus.org/?p=simon/puzzles.git>
- Shootin Stars (Godot; not in nixpkgs) <https://gitlab.com/greenbeast/shootin-stars>
- numberlink (generic name for Flow Free). not packaged in Nix
- Neverball (https://neverball.org/screenshots.php). nix: as `neverball`
- blurble (https://linuxphoneapps.org/games/app.drey.blurble/). nix: not as of 2024-02-05
- Trivia Quiz (https://linuxphoneapps.org/games/io.github.nokse22.trivia-quiz/)
- sane-sync-music: remove empty dirs
#### moby
- fix cpuidle (gets better power consumption): <https://xnux.eu/log/077.html>
- moby: tune keyboard layout
- SwayNC:
- don't show MPRIS if no players detected
- this is a problem of playerctld, i guess
- add option to change audio output
- fix colors (red alert) to match overall theme
- moby: tune GPS
- tune QGPS setting in eg25-control, for less jitter?
- configure geoclue to do some smoothing?
- manually do smoothing, as some layer between mepo and geoclue?
- moby: port `freshen-agps` timer service to s6 (maybe i want some `s6-cron` or something)
- moby: show battery state on ssh login
- moby: improve gPodder launch time
- moby: theme GTK apps (i.e. non-adwaita styles)
- especially, make the menubar collapsible
- try Gradience tool specifically for theming adwaita? <https://linuxphoneapps.org/apps/com.github.gradienceteam.gradience/>
#### non-moby
- RSS: integrate a paywall bypass
- e.g. self-hosted [ladder](https://github.com/everywall/ladder) (like 12ft.io)
- neovim: set up language server (lsp; rnix-lsp; nvim-lspconfig)
- neovim: integrate LLMs
- Helix: make copy-to-system clipboard be the default
- firefox/librewolf: persist history
- just not cookies or tabs
- package Nix/NixOS docs for Zeal
- install [doc-browser](https://github.com/qwfy/doc-browser)
- this supports both dash (zeal) *and* the datasets from <https://devdocs.io> (which includes nix!)
- install [devhelp](https://wiki.gnome.org/Apps/Devhelp) (gnome)
- have xdg-open parse `<repo:...>` URIs (or adjust them so that it _can_ parse)
- sane-bt-search: show details like 5.1 vs stereo, h264 vs h265
- maybe just color these "keywords" in all search results?
- uninsane.org: make URLs relative to allow local use (and as offline homepage)
- email: fix so that local mail doesn't go to junk
- the git send-email flow adds the DKIM signatures, but locally-delivered mail doesn't have the sig checked, so it goes into Junk
- could change junk filter from "no DKIM success" to explicit "DKIM failed"
### perf
- debug nixos-rebuild times
- use `systemctl list-jobs` to show what's being waited on
- i think it's `systemd-networkd-wait-online.service` that's blocking this?
- i wonder what interface it's waiting for. i should use `--ignore=...` to ignore interfaces i don't care about.
- also `wireguard-wg-home.target` when net is offline
- add `pkgs.impure-cached.<foo>` package set to build things with ccache enabled
- every package here can be auto-generated, and marked with some env var so that it doesn't pollute the pure package set
- would be super handy for package prototyping!
## NEW FEATURES:
- migrate MAME cabinet to nix
- boot it from PXE from servo?
- enable IPv6
# better secrets management? read:
- decrypted at activation time: https://github.com/Mic92/sops-nix
less promising:
- https://christine.website/blog/nixos-encrypted-secrets-2021-01-20
- git-crypt (https://github.com/bobbbay/dotfiles.git)

25
configuration.nix Normal file
View File

@@ -0,0 +1,25 @@
# Edit this configuration file to define what should be installed on
# your system. Help is available in the configuration.nix(5) man page
# and in the NixOS manual (accessible by running nixos-help).
# USEFUL COMMANDS:
# nix show-config
# nix eval --raw <expr> => print an expression. e.g. nixpkgs.raspberrypifw prints store path to the package
# nix-option ## query options -- including their SET VALUE; similar to search: https://search.nixos.org/options
# nixos-rebuild switch --upgrade ## pull changes from the nixos channel (e.g. security updates) and rebuild
{ config, pkgs, ... }:
{
# enable flake support.
# the real config root lives in flake.nix
nix = {
#package = pkgs.nixFlakes;
package = pkgs.nixUnstable;
extraOptions = ''
experimental-features = nix-command flakes
'';
};
}

View File

@@ -1,5 +0,0 @@
{ ... }@args:
let
sane-nix-files = import ./pkgs/additional/sane-nix-files { };
in
import "${sane-nix-files}/impure.nix" args

View File

@@ -1,25 +0,0 @@
to add a host:
- create the new nix targets
- hosts/by-name/HOST
- let the toplevel (flake.nix) know about HOST (a sketch of this step follows the list)
- build and flash an image
- optionally expand the rootfs
- `cfdisk /dev/sda2` -> resize partition
- `mount /dev/sda2 root`
- `btrfs filesystem resize max root`
- setup required persistent directories
- `mkdir -p root/persist/private`
- `gocryptfs -init root/persist/private`
- then boot the device, and for every dangling symlink in ~/.local/share, ~/.cache, do `mkdir -p` on it
- setup host ssh
- `mkdir -p root/persist/plaintext/etc/ssh/host_keys`
- boot the machine and let it create its own ssh keys
- add the pubkey to `hosts/common/hosts.nix`
- setup user ssh
- `ssh-keygen`. don't enter any password; it's stored in a password-encrypted fs.
- add the pubkey to `hosts/common/hosts.nix`
- allow the new host to view secrets
- instructions in hosts/common/secrets.nix
- run `ssh-to-age` on user/host pubkeys
- add age key to .sops.yaml
- update encrypted secrets: `sops updatekeys path/to/secret.yaml`
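for the flake.nix step, registration looks roughly like the entries in the (older) flake.nix included elsewhere in this diff; the current toplevel may differ, so treat this as a sketch:
```nix
# sketch: register the new host next to the existing ones in flake.nix.
# HOST and the system string are placeholders; this mirrors the older
# flake.nix in this diff, so the real incantation may have changed.
machines.HOST = self.decl-bootable-machine {
  name = "HOST";
  system = "aarch64-linux";  # or "x86_64-linux", to match the hardware
};
```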

View File

@@ -1,13 +0,0 @@
to ship `pkgs.foo` on some host, either:
- add it as an entry in `suggestedPrograms` to the appropriate category in `hosts/common/programs/assorted.nix`, or
- `sane.programs.foo.enableFor.user.colin = true` in `hosts/by-name/myhost/default.nix`
if the program needs customization (persistence, configs, secrets):
- add a file for it at `hosts/common/programs/<foo>.nix`
- set the options, `sane.programs.foo.{fs,persist}` (sketched below)
if it's unclear what fs paths a program uses:
- run one of these commands, launch the program, run it again, and `diff`:
- `du -x --apparent-size ~`
- `find ~ -xdev`
- or, inspect the whole tmpfs root with `ncdu -x /`
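a sketch of such a per-program file (the skeleton is plausible, but the exact option shapes under `fs` and `persist` are guesses -- crib from a neighboring file in `hosts/common/programs/` for the real API):
```nix
# hosts/common/programs/foo.nix (sketch only; fs/persist shapes are guesses)
{ ... }:
{
  sane.programs.foo = {
    # roughly: persist state across the ephemeral rootfs...
    # persist.byStore.plaintext = [ ".local/share/foo" ];
    # ...and declare config files:
    # fs.".config/foo/foo.conf".symlink.text = "...";
  };
}
```
and then `sane.programs.foo.enableFor.user.colin = true;` in the host's `default.nix`, per the bullet above.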

Binary file not shown. (removed image, 127 KiB)

View File

@@ -1,12 +0,0 @@
## deploying to SD card
- build a toplevel config: `nix build '.#hostSystems.moby'`
- mount a system:
- `mkdir -p root/{nix,boot}`
- `mount /dev/sdX1 root/boot`
- `mount /dev/sdX2 root/nix`
- copy the config:
- `sudo nix copy --no-check-sigs --to root/ $(readlink result)`
- nix will copy stuff to `root/nix/store`
- install the boot files:
- `sudo /nix/store/sbwpwngjlgw4f736ay9hgi69pj3fdwk5-extlinux-conf-builder.sh -d ./root/boot -t 5 -c $(readlink ./result)`
- the current extlinux-conf-builder store path can be found referenced inside `/run/current-system/bin/switch-to-configuration`

81
flake.lock Normal file
View File

@@ -0,0 +1,81 @@
{
"nodes": {
"home-manager": {
"inputs": {
"nixpkgs": [
"nixpkgs"
]
},
"locked": {
"lastModified": 1654113405,
"narHash": "sha256-VpK+0QaWG2JRgB00lw77N9TjkE3ec0iMYIX1TzGpxa4=",
"owner": "nix-community",
"repo": "home-manager",
"rev": "ac2287df5a2d6f0a44bbcbd11701dbbf6ec43675",
"type": "github"
},
"original": {
"owner": "nix-community",
"ref": "release-22.05",
"repo": "home-manager",
"type": "github"
}
},
"mobile-nixos": {
"flake": false,
"locked": {
"lastModified": 1654281294,
"narHash": "sha256-hT2/u0jUOD4TFU6YyYt+5Gt+hjIeerLTyZG7ru79aDU=",
"owner": "nixos",
"repo": "mobile-nixos",
"rev": "d798b0b34240b18a08c22f5c0ee1f59a3ce43c01",
"type": "github"
},
"original": {
"owner": "nixos",
"repo": "mobile-nixos",
"type": "github"
}
},
"nixpkgs": {
"locked": {
"lastModified": 1654275867,
"narHash": "sha256-pt14ZE4jVPGvfB2NynGsl34pgXfOqum5YJNpDK4+b9E=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "7a20c208aacf4964c19186dcad51f89165dc7ed0",
"type": "github"
},
"original": {
"id": "nixpkgs",
"ref": "nixos-22.05",
"type": "indirect"
}
},
"nurpkgs": {
"locked": {
"lastModified": 1654367137,
"narHash": "sha256-xufB/+qvk/7rh7qrwZbzru1kTu8nsmNWBNQkYbdS84Q=",
"owner": "nix-community",
"repo": "NUR",
"rev": "86ff2d098bce1d623232f4886027a1d61317b195",
"type": "github"
},
"original": {
"owner": "nix-community",
"repo": "NUR",
"type": "github"
}
},
"root": {
"inputs": {
"home-manager": "home-manager",
"mobile-nixos": "mobile-nixos",
"nixpkgs": "nixpkgs",
"nurpkgs": "nurpkgs"
}
}
},
"root": "root",
"version": 7
}

116
flake.nix Normal file
View File

@@ -0,0 +1,116 @@
# docs:
# https://nixos.wiki/wiki/Flakes
# https://serokell.io/blog/practical-nix-flakes
{
inputs = {
nixpkgs.url = "nixpkgs/nixos-22.05";
# pkgs-telegram.url = "nixpkgs/33775ec9a2173a08e46edf9f46c9febadbf743e8";# 2022/04/18; telegram 3.7.3. fails: nix log /nix/store/y5kv47hnv55qknb6cnmpcyraicay79fx-telegram-desktop-3.7.3.drv: g++: fatal error: cannot execute '/nix/store/njk5sbd21305bhr7gwibxbbvgbx5lxvn-gcc-9.3.0/libexec/gcc/aarch64-unknown-linux-gnu/9.3.0/cc1plus': execv: No such file or directory
mobile-nixos = {
url = "github:nixos/mobile-nixos";
flake = false;
};
home-manager = {
url = "github:nix-community/home-manager/release-22.05";
inputs.nixpkgs.follows = "nixpkgs";
};
nurpkgs.url = "github:nix-community/NUR";
};
outputs = { self, nixpkgs, mobile-nixos, home-manager, nurpkgs }: {
machines.uninsane = self.decl-bootable-machine { name = "uninsane"; system = "aarch64-linux"; };
machines.desko = self.decl-bootable-machine { name = "desko"; system = "x86_64-linux"; };
machines.lappy = self.decl-bootable-machine { name = "lappy"; system = "x86_64-linux"; };
machines.moby =
let machine = self.decl-machine {
name = "moby";
system = "aarch64-linux";
extraModules = [
(import "${mobile-nixos}/lib/configuration.nix" {
device = "pine64-pinephone";
})
];
};
in {
nixosConfiguration = machine;
img = machine.config.mobile.outputs.u-boot.disk-image;
};
nixosConfigurations = builtins.mapAttrs (name: value: value.nixosConfiguration) self.machines;
imgs = builtins.mapAttrs (name: value: value.img) self.machines;
decl-machine = { name, system, extraModules ? [], basePkgs ? nixpkgs }: let
patchedPkgs = basePkgs.legacyPackages.${system}.applyPatches {
name = "nixpkgs-patched-uninsane";
src = basePkgs;
patches = [
# for mobile: allow phoc to scale to non-integer values
./nixpatches/01-phosh-float-scale.patch
# for raspberry pi: allow building u-boot for rpi 4{,00}
./nixpatches/02-rpi4-uboot.patch
./nixpatches/03-whalebird-4.6.0.patch
./nixpatches/04-dart-2.7.0.patch
];
};
nixosSystem = import (patchedPkgs + "/nixos/lib/eval-config.nix");
in (nixosSystem {
inherit system;
specialArgs = { inherit home-manager; inherit nurpkgs; secrets = import ./secrets/default.nix; };
modules = [
./configuration.nix
./machines/${name}
(import ./helpers/set-hostname.nix name)
(self.overlaysModule system)
] ++ extraModules;
});
# this produces a EFI-bootable .img file (GPT with / and /boot).
# after building this, steps are:
# run `btrfs-convert --uuid copy <device>`
# boot, checkout this flake into /etc/nixos AND UPDATE THE UUIDS IT REFERENCES.
# then `nixos-rebuild ...`
decl-img = { name, system, extraModules ? [] }: (
(self.decl-machine { inherit name; inherit system; extraModules = extraModules ++ [./image.nix]; })
.config.system.build.raw
);
decl-bootable-machine = { name, system }: {
nixosConfiguration = self.decl-machine { inherit name; inherit system; };
img = self.decl-img { inherit name; inherit system; };
};
overlaysModule = system: { config, pkgs, ...}: {
nixpkgs.config.allowUnfree = true;
nixpkgs.overlays = [
#mobile-nixos.overlay
nurpkgs.overlay
(next: prev: {
#### customized packages
# nixos-unstable pleroma is too far out-of-date for our db
pleroma = prev.callPackage ./pkgs/pleroma { };
# jackett doesn't allow customization of the bind address: this will probably always be here.
jackett = next.callPackage ./pkgs/jackett { pkgs = prev; };
# fix abrupt HDD poweroffs, e.g. during reboot. patching systemd requires rebuilding nearly every package.
# systemd = import ./pkgs/systemd { pkgs = prev; };
# patch rpi uboot with something that fixes USB HDD boot
ubootRaspberryPi4_64bit = next.callPackage ./pkgs/ubootRaspberryPi4_64bit { pkgs = prev; };
#### TEMPORARY NIXOS-UNSTABLE PACKAGES
# telegram doesn't build from the patched tree here, so explicitly take it from the plain nixpkgs input.
# TODO: apply this specifically to the moby build?
# tdesktop = pkgs-telegram.legacyPackages.${system}.tdesktop;
tdesktop = nixpkgs.legacyPackages.${system}.tdesktop;
#### TEMPORARY: PACKAGES WAITING TO BE UPSTREAMED
whalebird = prev.callPackage ./pkgs/whalebird { };
kaiteki = prev.callPackage ./pkgs/kaiteki { };
})
];
};
};
}

13
helpers/gui/gnome.nix Normal file
View File

@@ -0,0 +1,13 @@
{ config, pkgs, lib, ... }:
{
# start gnome/gdm on boot
services.xserver.enable = true;
services.xserver.desktopManager.gnome.enable = true;
services.xserver.displayManager.gdm.enable = true;
# gnome does networking stuff with networkmanager
networking.useDHCP = false;
networking.networkmanager.enable = true;
networking.wireless.enable = lib.mkForce false;
}

16
helpers/gui/i3.nix Normal file
View File

@@ -0,0 +1,16 @@
{ pkgs, ... }:
{
environment.pathsToLink = [ "/libexec" ]; # patch for i3blocks to work
services.xserver.enable = true;
services.xserver.displayManager.defaultSession = "none+i3";
services.xserver.windowManager.i3 = {
enable = true;
extraPackages = with pkgs; [
dmenu
i3status
i3lock
i3blocks
];
};
}

21
helpers/gui/phosh.nix Normal file
View File

@@ -0,0 +1,21 @@
{ ... }:
{
# docs: https://github.com/NixOS/nixpkgs/blob/nixos-22.05/nixos/modules/services/x11/desktop-managers/phosh.nix
services.xserver.desktopManager.phosh = {
enable = true;
user = "colin";
group = "users";
phocConfig = {
xwayland = "true";
# find default outputs by catting /etc/phosh/phoc.ini
outputs.DSI-1 = {
scale = 1.5;
};
};
};
environment.variables = {
# Qt apps won't always start unless this env var is set
QT_QPA_PLATFORM = "wayland";
};
}

View File

@@ -0,0 +1,14 @@
{ config, pkgs, lib, ... }:
{
# start plasma-mobile on boot
services.xserver.enable = true;
services.xserver.desktopManager.plasma5.mobile.enable = true;
services.xserver.desktopManager.plasma5.mobile.installRecommendedSoftware = false; # not all plasma5-mobile packages build for aarch64
services.xserver.displayManager.sddm.enable = true;
# Plasma does networking stuff with networkmanager, but nix configures the defaults itself
# networking.useDHCP = false;
# networking.networkmanager.enable = true;
# networking.wireless.enable = lib.mkForce false;
}

31
helpers/gui/sway.nix Normal file
View File

@@ -0,0 +1,31 @@
{ pkgs, ... }:
# docs: https://nixos.wiki/wiki/Sway
{
programs.sway = {
# we configure sway with home-manager, but this enable gets us e.g. opengl and fonts
enable = true;
};
# TODO: should be able to use SDDM to get interactive login
services.greetd = {
enable = true;
settings = rec {
initial_session = {
command = "${pkgs.sway}/bin/sway";
user = "colin";
};
default_session = initial_session;
};
};
# unlike other DEs, sway configures no audio stack
# administer with pw-cli, pw-mon, pw-top commands
services.pipewire = {
enable = true;
alsa.enable = true;
alsa.support32Bit = true; # ??
pulse.enable = true;
};
}

View File

@@ -0,0 +1,65 @@
{ config, pkgs, lib, ... }:
{
boot.initrd.availableKernelModules = [
"xhci_pci" "ahci" "sd_mod" "sdhci_pci" # nixos-generate-config defaults
"usb_storage" # rpi needed this to boot from usb storage, i think.
# "usbhid" "hid-generic" # hopefully these will fix USB HID auto-sleep ?
];
boot.initrd.kernelModules = [ ];
boot.initrd.supportedFilesystems = [ "ext4" "btrfs" "ext2" "ext3" "vfat" ];
# find more of these with sensors-detect
boot.kernelModules = [
"coretemp"
"kvm-intel"
"kvm-amd" # desktop
"amdgpu" # desktop
];
boot.extraModulePackages = [ ];
boot.kernelParams = [ "boot.shell_on_fail" ];
boot.consoleLogLevel = 7;
# Use the systemd-boot EFI boot loader.
boot.loader.systemd-boot.enable = true;
boot.loader.systemd-boot.configurationLimit = 40; # keep this many generations
boot.loader.efi.canTouchEfiVariables = true;
# enable cross compilation
boot.binfmt.emulatedSystems = [ "aarch64-linux" ];
# nixpkgs.crossSystem.system = "aarch64-linux";
powerManagement.cpuFreqGovernor = "powersave";
hardware.enableRedistributableFirmware = true;
hardware.cpu.amd.updateMicrocode = true; # desktop
hardware.cpu.intel.updateMicrocode = true; # laptop
services.fwupd.enable = true;
# powertop will default to putting USB devices -- including HID -- to sleep after TWO SECONDS
powerManagement.powertop.enable = false;
hardware.opengl.extraPackages = [
# laptop
pkgs.intel-compute-runtime
pkgs.intel-media-driver # new
pkgs.libvdpau-va-gl # new
pkgs.vaapiIntel
# desktop
pkgs.rocm-opencl-icd
pkgs.rocm-opencl-runtime
];
hardware.opengl.driSupport = true;
# For 32 bit applications
hardware.opengl.driSupport32Bit = true;
# TODO colin: does this *do* anything?
swapDevices = [ ];
# services.snapper.configs = {
# root = {
# subvolume = "/";
# extraConfig = {
# ALLOW_USERS = "colin";
# };
# };
# };
# services.snapper.snapshotInterval = "daily";
}

View File

@@ -0,0 +1,532 @@
# docs:
# https://rycee.gitlab.io/home-manager/
# https://rycee.gitlab.io/home-manager/options.html
# man home-configuration.nix
#
# system is e.g. x86_64-linux
# gui is "gnome", or null
{ lib, pkgs, system, gui, extraPackages ? [] }: {
home.stateVersion = "21.11";
home.username = "colin";
home.homeDirectory = "/home/colin";
programs.home-manager.enable = true; # this lets home-manager manage dot-files in user dirs, i think
# XDG defines things like ~/Desktop, ~/Downloads, etc.
# these clutter the home, so i mostly don't use them.
xdg.userDirs = {
enable = true;
createDirectories = false; # on headless systems, most xdg dirs are noise
desktop = "$HOME/.xdg/Desktop";
documents = "$HOME/src";
download = "$HOME/tmp";
music = "$HOME/Music";
pictures = "$HOME/Pictures";
publicShare = "$HOME/.xdg/Public";
templates = "$HOME/.xdg/Templates";
videos = "$HOME/Videos";
};
programs.zsh = {
enable = true;
enableSyntaxHighlighting = true;
enableVteIntegration = true;
dotDir = ".config/zsh";
initExtraBeforeCompInit = ''
# p10k instant prompt
# run p10k configure to configure, but it can't write out its file :-(
POWERLEVEL9K_DISABLE_CONFIGURATION_WIZARD=true
'';
# prezto = oh-my-zsh fork; controls prompt, auto-completion, etc.
# see: https://github.com/sorin-ionescu/prezto
prezto = {
enable = true;
pmodules = [
"environment"
"terminal"
"editor"
"history"
"directory"
"spectrum"
"utility"
"completion"
"prompt"
"git"
];
prompt = {
theme = "powerlevel10k";
};
};
};
programs.kitty.enable = true;
programs.git = {
enable = true;
userName = "colin";
userEmail = "colin@uninsane.org";
};
programs.vim = {
enable = true;
extraConfig = ''
" wtf vim project: NOBODY LIKES MOUSE FOR VISUAL MODE
set mouse-=a
" copy/paste to system clipboard
set clipboard=unnamedplus
" <tab> completion menu settings
set wildmenu
set wildmode=longest,list,full
" highlight all matching searches (using / and ?)
set hlsearch
" allow backspace to delete empty lines in insert mode
set backspace=indent,eol,start
" built-in syntax highlighting
syntax enable
" show line/col number in bottom right
set ruler
" highlight trailing space & related syntax errors (does this work?)
let c_space_errors=1
let python_space_errors=1
'';
};
# obtain these by running `dconf dump /` after manually customizing gnome
# TODO: fix "is not of type `GVariant value'"
# dconf.settings = lib.mkIf (gui == "gnome") {
# gnome = {
# # control alt-tab behavior
# "org/gnome/desktop/wm/keybindings" = {
# switch-applications = [ "<Super>Tab" ];
# switch-applications-backward=[];
# switch-windows=["<Alt>Tab"];
# switch-windows-backward=["<Super><Alt>Tab"];
# };
# # idle power savings
# "org/gnome/settings-deamon/plugins/power" = {
# idle-brigthness = 50;
# sleep-inactive-ac-type = "nothing";
# sleep-inactive-battery-timeout = 5400; # seconds
# };
# "org/gnome/shell" = {
# favorite-apps = [
# "org.gnome.Nautilus.desktop"
# "firefox.desktop"
# "kitty.desktop"
# # "org.gnome.Terminal.desktop"
# ];
# };
# "org/gnome/desktop/session" = {
# # how long until considering a session idle (triggers e.g. screen blanking)
# idle-delay = 900;
# };
# "org/gnome/desktop/interface" = {
# text-scaling-factor = 1.25;
# };
# "org/gnome/desktop/media-handling" = {
# # don't auto-mount inserted media
# automount = false;
# automount-open = false;
# };
# };
# };
# home.pointerCursor = {
# package = pkgs.vanilla-dmz;
# name = "Vanilla-DMZ";
# };
# taken from https://github.com/srid/nix-config/blob/705a70c094da53aa50cf560179b973529617eb31/nix/home/i3.nix
xsession.windowManager.i3 = lib.mkIf (gui == "i3") (
let
mod = "Mod4";
in {
enable = true;
config = {
modifier = mod;
fonts = {
names = [ "DejaVu Sans Mono" ];
style = "Bold Semi-Condensed";
size = 11.0;
};
# terminal = "kitty";
# terminal = "${pkgs.kitty}/bin/kitty";
keybindings = {
"${mod}+Return" = "exec ${pkgs.kitty}/bin/kitty";
"${mod}+p" = "exec ${pkgs.dmenu}/bin/dmenu_run";
"${mod}+x" = "exec sh -c '${pkgs.maim}/bin/maim -s | xclip -selection clipboard -t image/png'";
"${mod}+Shift+x" = "exec sh -c '${pkgs.i3lock}/bin/i3lock -c 222222 & sleep 5 && xset dpms force of'";
# Focus
"${mod}+j" = "focus left";
"${mod}+k" = "focus down";
"${mod}+l" = "focus up";
"${mod}+semicolon" = "focus right";
# Move
"${mod}+Shift+j" = "move left";
"${mod}+Shift+k" = "move down";
"${mod}+Shift+l" = "move up";
"${mod}+Shift+semicolon" = "move right";
# multi monitor setup
# "${mod}+m" = "move workspace to output DP-2";
# "${mod}+Shift+m" = "move workspace to output DP-5";
};
# bars = [
# {
# position = "bottom";
# statusCommand = "${pkgs.i3status-rust}/bin/i3status-rs ${./i3status-rust.toml}";
# }
# ];
};
});
wayland.windowManager.sway = lib.mkIf (gui == "sway") {
enable = true;
wrapperFeatures.gtk = true;
config = rec {
terminal = "${pkgs.kitty}/bin/kitty";
window.border = 3; # pixel boundary between windows
# defaults; required for keybindings decl.
modifier = "Mod1";
# list of launchers: https://www.reddit.com/r/swaywm/comments/v39hxa/your_favorite_launcher/
# menu = "${pkgs.dmenu}/bin/dmenu_path";
menu = "${pkgs.fuzzel}/bin/fuzzel";
# menu = "${pkgs.albert}/bin/albert";
left = "h";
down = "j";
up = "k";
right = "l";
keybindings = {
"${modifier}+Return" = "exec ${terminal}";
"${modifier}+Shift+q" = "kill";
"${modifier}+d" = "exec ${menu}";
"${modifier}+${left}" = "focus left";
"${modifier}+${down}" = "focus down";
"${modifier}+${up}" = "focus up";
"${modifier}+${right}" = "focus right";
"${modifier}+Left" = "focus left";
"${modifier}+Down" = "focus down";
"${modifier}+Up" = "focus up";
"${modifier}+Right" = "focus right";
"${modifier}+Shift+${left}" = "move left";
"${modifier}+Shift+${down}" = "move down";
"${modifier}+Shift+${up}" = "move up";
"${modifier}+Shift+${right}" = "move right";
"${modifier}+Shift+Left" = "move left";
"${modifier}+Shift+Down" = "move down";
"${modifier}+Shift+Up" = "move up";
"${modifier}+Shift+Right" = "move right";
"${modifier}+b" = "splith";
"${modifier}+v" = "splitv";
"${modifier}+f" = "fullscreen toggle";
"${modifier}+a" = "focus parent";
"${modifier}+s" = "layout stacking";
"${modifier}+w" = "layout tabbed";
"${modifier}+e" = "layout toggle split";
"${modifier}+Shift+space" = "floating toggle";
"${modifier}+space" = "focus mode_toggle";
"${modifier}+1" = "workspace number 1";
"${modifier}+2" = "workspace number 2";
"${modifier}+3" = "workspace number 3";
"${modifier}+4" = "workspace number 4";
"${modifier}+5" = "workspace number 5";
"${modifier}+6" = "workspace number 6";
"${modifier}+7" = "workspace number 7";
"${modifier}+8" = "workspace number 8";
"${modifier}+9" = "workspace number 9";
"${modifier}+Shift+1" =
"move container to workspace number 1";
"${modifier}+Shift+2" =
"move container to workspace number 2";
"${modifier}+Shift+3" =
"move container to workspace number 3";
"${modifier}+Shift+4" =
"move container to workspace number 4";
"${modifier}+Shift+5" =
"move container to workspace number 5";
"${modifier}+Shift+6" =
"move container to workspace number 6";
"${modifier}+Shift+7" =
"move container to workspace number 7";
"${modifier}+Shift+8" =
"move container to workspace number 8";
"${modifier}+Shift+9" =
"move container to workspace number 9";
"${modifier}+Shift+minus" = "move scratchpad";
"${modifier}+minus" = "scratchpad show";
"${modifier}+Shift+c" = "reload";
"${modifier}+Shift+e" =
"exec swaynag -t warning -m 'You pressed the exit shortcut. Do you really want to exit sway? This will end your Wayland session.' -b 'Yes, exit sway' 'swaymsg exit'";
"${modifier}+r" = "mode resize";
} // {
# media keys
XF86MonBrightnessDown = ''exec "${pkgs.brightnessctl}/bin/brightnessctl set 2%-"'';
XF86MonBrightnessUp = ''exec "${pkgs.brightnessctl}/bin/brightnessctl set +2%"'';
XF86AudioRaiseVolume = "exec '${pkgs.pulsemixer}/bin/pulsemixer --change-volume +5'";
XF86AudioLowerVolume = "exec '${pkgs.pulsemixer}/bin/pulsemixer --change-volume -5'";
XF86AudioMute = "exec '${pkgs.pulsemixer}/bin/pulsemixer --toggle-mute'";
};
# mostly defaults:
bars = [{
mode = "dock";
hiddenState = "hide";
position = "top";
command = "${pkgs.waybar}/bin/waybar";
workspaceButtons = true;
workspaceNumbers = true;
statusCommand = "${pkgs.i3status}/bin/i3status";
fonts = {
names = [ "monospace" ];
size = 8.0;
};
trayOutput = "primary";
colors = {
background = "#000000";
statusline = "#ffffff";
separator = "#666666";
focusedWorkspace = {
border = "#4c7899";
background = "#285577";
text = "#ffffff";
};
activeWorkspace = {
border = "#333333";
background = "#5f676a";
text = "#ffffff";
};
inactiveWorkspace = {
border = "#333333";
background = "#222222";
text = "#888888";
};
urgentWorkspace = {
border = "#2f343a";
background = "#900000";
text = "#ffffff";
};
bindingMode = {
border = "#2f343a";
background = "#900000";
text = "#ffffff";
};
};
}];
};
};
programs.waybar = lib.mkIf (gui == "sway") {
enable = true;
# docs: https://github.com/Alexays/Waybar/wiki/Configuration
settings = {
mainBar = {
layer = "top";
height = 40;
modules-left = ["sway/workspaces" "sway/mode"];
modules-center = ["sway/window"];
modules-right = ["custom/mediaplayer" "clock" "cpu" "network"];
"sway/window" = {
max-length = 50;
};
# include song artist/title. source: https://www.reddit.com/r/swaywm/comments/ni0vso/waybar_spotify_tracktitle/
"custom/mediaplayer" = {
exec = pkgs.writeShellScript "waybar-mediaplayer" ''
player_status=$(${pkgs.playerctl}/bin/playerctl status 2> /dev/null)
if [ "$player_status" = "Playing" ]; then
echo "$(${pkgs.playerctl}/bin/playerctl metadata artist) - $(${pkgs.playerctl}/bin/playerctl metadata title)"
elif [ "$player_status" = "Paused" ]; then
echo " $(${pkgs.playerctl}/bin/playerctl metadata artist) - $(${pkgs.playerctl}/bin/playerctl metadata title)"
fi
'';
interval = 2;
format = "{} ";
# return-type = "json";
on-click = "${pkgs.playerctl}/bin/playerctl play-pause";
on-scroll-up = "${pkgs.playerctl}/bin/playerctl next";
on-scroll-down = "${pkgs.playerctl}/bin/playerctl previous";
};
network = {
interval = 1;
format-ethernet = "{ifname}: {ipaddr}/{cidr} up: {bandwidthUpBits} down: {bandwidthDownBits}";
};
cpu = {
format = "{usage}% ";
tooltip = false;
};
clock = {
format-alt = "{:%a, %d. %b %H:%M}";
};
};
};
# style = ''
# * {
# border: none;
# border-radius: 0;
# font-family: Source Code Pro;
# }
# window#waybar {
# background: #16191C;
# color: #AAB2BF;
# }
# #workspaces button {
# padding: 0 5px;
# }
# .custom-spotify {
# padding: 0 10px;
# margin: 0 4px;
# background-color: #1DB954;
# color: black;
# }
# '';
};
programs.firefox = lib.mkIf (gui != null) {
enable = true;
profiles.default = {
bookmarks = {
fed_uninsane.url = "https://fed.uninsane.org/";
delightful.url = "https://delightful.club/";
crowdsupply.url = "https://www.crowdsupply.com/";
linux_phone_apps.url = "https://linuxphoneapps.org/mobile-compatibility/5/";
mempool.url = "https://jochen-hoenicke.de/queue";
};
};
# firefox profile support seems to be broken :shrug:
# profiles.other = {
# id = 2;
# };
# NB: these must be manually enabled in the Firefox settings on first start
# extensions can be found here: https://gitlab.com/rycee/nur-expressions/-/blob/master/pkgs/firefox-addons/addons.json
extensions = [
pkgs.nur.repos.rycee.firefox-addons.bypass-paywalls-clean
pkgs.nur.repos.rycee.firefox-addons.metamask
pkgs.nur.repos.rycee.firefox-addons.i-dont-care-about-cookies
pkgs.nur.repos.rycee.firefox-addons.sidebery
pkgs.nur.repos.rycee.firefox-addons.sponsorblock
pkgs.nur.repos.rycee.firefox-addons.ublock-origin
];
};
home.shellAliases = {
":q" = "exit";
# common typos
"cd.." = "cd ..";
"cd../" = "cd ../";
};
home.packages = [
pkgs.btrfs-progs
pkgs.dig
pkgs.cryptsetup
pkgs.duplicity
pkgs.fatresize
pkgs.fd
pkgs.file
pkgs.gnumake
pkgs.gptfdisk
pkgs.hdparm
pkgs.htop
pkgs.iftop
pkgs.inetutils # for telnet
pkgs.iotop
pkgs.iptables
pkgs.jq
pkgs.killall
pkgs.lm_sensors # for sensors-detect
pkgs.lsof
pkgs.mix2nix
pkgs.netcat
pkgs.nixpkgs-review
pkgs.nixUnstable # TODO: still needed on 22.05?
# pkgs.nixos-generators
# pkgs.nettools
pkgs.nmap
pkgs.obsidian
pkgs.parted
pkgs.pciutils
# pkgs.ponymix
pkgs.powertop
pkgs.pulsemixer
pkgs.python3
pkgs.ripgrep
pkgs.smartmontools
pkgs.snapper
pkgs.socat
pkgs.sudo
pkgs.usbutils
pkgs.wget
pkgs.wireguard-tools
pkgs.youtube-dl
pkgs.zola
]
++ (if gui != null then
[
# GUI only
pkgs.chromium
pkgs.clinfo
pkgs.element-desktop # broken on phosh
pkgs.evince # works on phosh
pkgs.font-manager
pkgs.gimp # broken on phosh
pkgs.gnome.dconf-editor
pkgs.gnome.file-roller
pkgs.gnome.gnome-maps # works on phosh
pkgs.gnome.nautilus
pkgs.gnome-podcasts
pkgs.gnome.gnome-terminal # works on phosh
pkgs.inkscape
pkgs.kaiteki # Pleroma client
pkgs.libreoffice-fresh # XXX colin: maybe don't want this on mobile
pkgs.mesa-demos
pkgs.playerctl
pkgs.tdesktop # broken on phosh
pkgs.vlc # works on phosh
pkgs.whalebird # pleroma client. input is broken on phosh
pkgs.xterm # broken on phosh
] else [])
++ (if gui == "sway" then
[
# TODO: move this to helpers/gui/sway.nix?
pkgs.swaylock
pkgs.swayidle
pkgs.wl-clipboard
pkgs.mako # notification daemon
# pkgs.dmenu # todo: use wofi?
# user stuff
# pkgs.pavucontrol
] else [])
++ (if gui != null && system == "x86_64-linux" then
[
# x86_64 only
pkgs.signal-desktop
pkgs.spotify
pkgs.discord
] else [])
++ extraPackages;
}

4
helpers/set-hostname.nix Normal file
View File

@@ -0,0 +1,4 @@
hostName: { ... }:
{
networking.hostName = hostName;
}

View File

@@ -0,0 +1,17 @@
{ ... }:
{
imports = [
./fs.nix
./home-manager.nix
./nix-cache.nix
./users.nix
];
time.timeZone = "America/Los_Angeles";
environment.variables = {
EDITOR = "vim";
};
}

25
helpers/universal/fs.nix Normal file
View File

@@ -0,0 +1,25 @@
{ pkgs, ... }:
{
fileSystems."/mnt/media-uninsane" = {
# device = "sshfs#colin@uninsane.org:/opt/uninsane/media";
device = "colin@uninsane.org:/opt/uninsane/media";
fsType = "fuse.sshfs";
options = [
"x-systemd.automount"
"_netdev"
"user"
"idmap=user"
"transform_symlinks"
"identityfile=/home/colin/.ssh/id_ed25519"
"allow_other"
"default_permissions"
"uid=1000"
"gid=1000"
];
};
environment.systemPackages = [
pkgs.sshfs-fuse
];
}

View File

@@ -0,0 +1,9 @@
{ home-manager, config, pkgs, ... }:
{
imports = [
home-manager.nixosModule
];
home-manager.useGlobalPkgs = true;
home-manager.useUserPackages = true;
}

View File

@@ -0,0 +1,16 @@
{ ... }:
{
# use our own binary cache
nix.settings = {
substituters = [
"https://nixcache.uninsane.org"
"https://nix-community.cachix.org"
"https://cache.nixos.org/"
];
trusted-public-keys = [
"nixcache.uninsane.org:r3WILM6+QrkmsLgqVQcEdibFD7Q/4gyzD9dGT33GP70="
"nix-community.cachix.org-1:mB9FSh9qf2dCimDSUo8Zy7bkq5CX+/rkCWyvRCYg3Fs="
];
};
}

View File

@@ -0,0 +1,53 @@
{ config, pkgs, lib, ... }:
# installer docs: https://github.com/NixOS/nixpkgs/blob/master/nixos/modules/profiles/installation-device.nix
{
# Users are exactly these specified here;
# old ones will be deleted (from /etc/passwd, etc) upon upgrade.
users.mutableUsers = false;
# docs: https://nixpkgs-manual-sphinx-markedown-example.netlify.app/generated/options-db.xml.html#users-users
users.users.colin = {
# sets group to "users" (?)
isNormalUser = true;
home = "/home/colin";
uid = 1000;
# XXX colin: this is what the installer has, but is it necessary?
# group = "users";
extraGroups = [
"wheel"
"nixbuild"
"networkmanager"
# phosh/mobile. XXX colin: unsure if necessary
"video"
"feedbackd"
"dialout" # required for modem access
];
initialPassword = lib.mkDefault "";
shell = pkgs.zsh;
# shell = pkgs.bashInteractive;
# XXX colin: create ssh key for THIS user by logging in and running:
# ssh-keygen -t ed25519
openssh.authorizedKeys.keys = [
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGSDe/y0e9PSeUwYlMPjzhW0UhNsGAGsW3lCG3apxrD5 colin@colin.desktop"
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIG+MZ/l5d8g5hbxMB9ed1uyvhV85jwNrSVNVxb5ujQjw colin@lappy"
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPU5GlsSfbaarMvDA20bxpSZGWviEzXGD8gtrIowc1pX colin@desko"
# TODO: should probably only let this authenticate to my server
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGCLCA9KbjXaXNNMJJvqbPO5KQQ64JCdG8sg88AfdKzi colin@moby"
];
};
security.sudo = {
enable = true;
wheelNeedsPassword = false;
};
services.openssh = {
enable = true;
permitRootLogin = "no";
passwordAuthentication = false;
};
# TODO colin: move this somewhere else!
programs.vim.defaultEditor = true;
}

View File

@@ -1,7 +0,0 @@
## directory structure
- by-name/<hostname>: configuration which is evaluated _only_ for the given hostname
- common/: configuration which applies to all hosts
- modules/: nixpkgs-style modules which may be used by multiple hosts, but configured separately per host.
- ideally no module here has effect unless `enable`d
- however, `enable` may default to true
- and in practice some of these modules surely aren't fully "disableable"

View File

@@ -1,45 +0,0 @@
# Samsung chromebook XE303C12
# - <https://wiki.postmarketos.org/wiki/Samsung_Chromebook_(google-snow)>
{ ... }:
{
imports = [
./fs.nix
];
sane.hal.samsung.enable = true;
sane.roles.client = true;
# sane.roles.pc = true;
users.users.colin.initialPassword = "147147";
sane.programs.sway.enableFor.user.colin = true;
sane.programs.calls.enableFor.user.colin = false;
sane.programs.consoleMediaUtils.enableFor.user.colin = true;
sane.programs.epiphany.enableFor.user.colin = true;
sane.programs."gnome.geary".enableFor.user.colin = false;
# sane.programs.firefox.enableFor.user.colin = true;
sane.programs.portfolio-filemanager.enableFor.user.colin = true;
sane.programs.signal-desktop.enableFor.user.colin = false;
sane.programs.wike.enableFor.user.colin = true;
sane.programs.dino.config.autostart = false;
sane.programs.dissent.config.autostart = false;
sane.programs.fractal.config.autostart = false;
sane.programs.sway.config.mod = "Mod1"; #< alt key instead of Super
# sane.programs.guiApps.enableFor.user.colin = false;
# sane.programs.pcGuiApps.enableFor.user.colin = false; #< errors!
sane.programs.blueberry.enableFor.user.colin = false; # bluetooth manager: doesn't cross compile!
# sane.programs.brave.enableFor.user.colin = false; # 2024/06/03: fails eval if enabled on cross
# sane.programs.firefox.enableFor.user.colin = false; # 2024/06/03: this triggers an eval error in yarn stuff -- i'm doing IFD somewhere!!?
sane.programs.mepo.enableFor.user.colin = false; # 2024/06/04: doesn't cross compile (nodejs)
sane.programs.mercurial.enableFor.user.colin = false; # 2024/06/03: does not cross compile
sane.programs.nixpkgs-review.enableFor.user.colin = false; # 2024/06/03: OOMs when cross compiling
sane.programs.ntfy-sh.enableFor.user.colin = false; # 2024/06/04: doesn't cross compile (nodejs)
sane.programs.pwvucontrol.enableFor.user.colin = false; # 2024/06/03: doesn't cross compile (libspa-sys)
sane.programs."sane-scripts.bt-search".enableFor.user.colin = false; # 2024/06/03: does not cross compile
sane.programs.sequoia.enableFor.user.colin = false; # 2024/06/03: does not cross compile
sane.programs.zathura.enableFor.user.colin = false; # 2024/06/03: does not cross compile
}

View File

@@ -1,16 +0,0 @@
{ ... }:
{
fileSystems."/nix" = {
device = "/dev/disk/by-uuid/55555555-0303-0c12-86df-eda9e9311526";
fsType = "btrfs";
options = [
"compress=zstd"
"defaults"
];
};
fileSystems."/boot" = {
device = "/dev/disk/by-uuid/303C-5A37";
fsType = "vfat";
};
}

View File

@@ -1,59 +0,0 @@
{ config, lib, pkgs, ... }:
{
imports = [
./fs.nix
];
sane.services.trust-dns.asSystemResolver = false; # TEMPORARY: TODO: re-enable trust-dns
# sane.programs.devPkgs.enableFor.user.colin = true;
# sane.guest.enable = true;
# don't enable wifi by default: it messes with connectivity.
# systemd.services.iwd.enable = false;
# systemd.services.wpa_supplicant.enable = false;
sane.programs.wpa_supplicant.enableFor.user.colin = lib.mkForce false;
sane.programs.wpa_supplicant.enableFor.system = lib.mkForce false;
sops.secrets.colin-passwd.neededForUsers = true;
sane.roles.build-machine.enable = true;
sane.roles.client = true;
sane.roles.dev-machine = true;
sane.roles.pc = true;
sane.services.wg-home.enable = true;
sane.services.wg-home.ip = config.sane.hosts.by-name."desko".wg-home.ip;
sane.ovpn.addrV4 = "172.26.55.21";
# sane.ovpn.addrV6 = "fd00:0000:1337:cafe:1111:1111:20c1:a73c";
sane.services.duplicity.enable = true;
sane.nixcache.remote-builders.desko = false;
sane.programs.sway.enableFor.user.colin = true;
sane.programs.iphoneUtils.enableFor.user.colin = true;
sane.programs.steam.enableFor.user.colin = true;
sane.programs."gnome.geary".config.autostart = true;
sane.programs.signal-desktop.config.autostart = true;
sane.programs.nwg-panel.config = {
battery = false;
brightness = false;
};
sane.image.extraBootFiles = [ pkgs.bootpart-uefi-x86_64 ];
# needed to use libimobiledevice/ifuse, for iphone sync
services.usbmuxd.enable = true;
# default config: https://man.archlinux.org/man/snapper-configs.5
# defaults to something like:
# - hourly snapshots
# - auto cleanup; keep the last 10 hourlies, last 10 dailies, last 10 monthlies.
services.snapper.configs.nix = {
# TODO: for the impermanent setup, we'd prefer to just do /nix/persist,
# but that also requires setting up the persist dir as a subvol
SUBVOLUME = "/nix";
# TODO: ALLOW_USERS doesn't seem to work. still need `sudo snapper -c nix list`
ALLOW_USERS = [ "colin" ];
};
}

View File

@@ -1,21 +0,0 @@
{ ... }:
{
# increase /tmp space (defaults to 50% of RAM) for building large nix things.
# a cross-compiled kernel, particularly, will easily use 30+GB of tmp
fileSystems."/tmp".options = [ "size=64G" ];
fileSystems."/nix" = {
device = "/dev/disk/by-uuid/845d85bf-761d-431b-a406-e6f20909154f";
fsType = "btrfs";
options = [
"compress=zstd"
"defaults"
];
};
fileSystems."/boot" = {
device = "/dev/disk/by-uuid/5049-9AFD";
fsType = "vfat";
};
}

View File

@@ -1,36 +0,0 @@
{ config, pkgs, ... }:
{
imports = [
./fs.nix
];
sane.roles.client = true;
sane.roles.dev-machine = true;
sane.roles.pc = true;
sane.services.wg-home.enable = true;
sane.services.wg-home.ip = config.sane.hosts.by-name."lappy".wg-home.ip;
sane.ovpn.addrV4 = "172.23.119.72";
# sane.ovpn.addrV6 = "fd00:0000:1337:cafe:1111:1111:0332:aa96/128";
# sane.guest.enable = true;
sane.image.extraBootFiles = [ pkgs.bootpart-uefi-x86_64 ];
sane.programs.stepmania.enableFor.user.colin = true;
sane.programs.sway.enableFor.user.colin = true;
sane.programs."gnome.geary".config.autostart = true;
sane.programs.signal-desktop.config.autostart = true;
sops.secrets.colin-passwd.neededForUsers = true;
# default config: https://man.archlinux.org/man/snapper-configs.5
# defaults to something like:
# - hourly snapshots
# - auto cleanup; keep the last 10 hourlies, last 10 dailies, last 10 monthlies.
services.snapper.configs.nix = {
# TODO: for the impermanent setup, we'd prefer to just do /nix/persist,
# but that also requires setting up the persist dir as a subvol
SUBVOLUME = "/nix";
ALLOW_USERS = [ "colin" ];
};
}

View File

@@ -1,66 +0,0 @@
# Pinephone
#
# wikis, resources, ...:
# - Linux Phone Apps: <https://linuxphoneapps.org/>
# - massive mobile-friendly app database
# - Mobian wiki: <https://wiki.mobian-project.org/doku.php?id=start>
# - recommended apps, chatrooms
{ config, pkgs, lib, ... }:
{
imports = [
./fs.nix
];
sane.hal.pine64.enable = true;
sane.roles.client = true;
sane.roles.handheld = true;
sane.services.wg-home.enable = true;
sane.services.wg-home.ip = config.sane.hosts.by-name."moby".wg-home.ip;
sane.ovpn.addrV4 = "172.24.87.255";
# sane.ovpn.addrV6 = "fd00:0000:1337:cafe:1111:1111:18cd:a72b";
# XXX colin: phosh doesn't work well with passwordless login,
# so set this more reliable default password should anything go wrong
users.users.colin.initialPassword = "147147";
# services.getty.autologinUser = "root"; # allows for emergency maintenance?
sops.secrets.colin-passwd.neededForUsers = true;
sane.programs.sway.enableFor.user.colin = true;
sane.programs.sway.config.mod = "Mod1"; #< alt key instead of Super
sane.programs.blueberry.enableFor.user.colin = false; # bluetooth manager: doesn't cross compile!
sane.programs.fcitx5.enableFor.user.colin = false; # does not cross compile
sane.programs.mercurial.enableFor.user.colin = false; # does not cross compile
sane.programs.nvme-cli.enableFor.system = false; # does not cross compile (libhugetlbfs)
# enabled for easier debugging
sane.programs.eg25-control.enableFor.user.colin = true;
sane.programs.rtl8723cs-wowlan.enableFor.user.colin = true;
# sane.programs.ntfy-sh.config.autostart = true;
sane.programs.dino.config.autostart = true;
# sane.programs.signal-desktop.config.autostart = true; # TODO: enable once electron stops derping.
# sane.programs."gnome.geary".config.autostart = true;
# sane.programs.calls.config.autostart = true;
sane.programs.pipewire.config = {
# tune so Dino doesn't drop audio
# there's seemingly two buffers for the mic (see: <https://gitlab.freedesktop.org/pipewire/pipewire/-/wikis/FAQ#pipewire-buffering-explained>)
# 1. Pipewire buffering out of the driver and into its own member.
# 2. Pipewire buffering into Dino.
# the latter is fixed at 10ms by Dino, difficult to override via runtime config.
# the former defaults low (e.g. 512 samples)
# this default configuration causes the mic to regularly drop out entirely for a couple seconds at a time during a call,
# presumably because the system can't keep up (pw-top shows incrementing counter in ERR column).
# `pw-metadata -n settings 0 clock.force-quantum 1024` reduces to about 1 error per second.
# `pw-metadata -n settings 0 clock.force-quantum 2048` reduces to 1 error every < 10s.
# pipewire default config includes `clock.power-of-two-quantum = true`
min-quantum = 2048;
max-quantum = 8192;
};
# /boot space is at a premium: the default limit was 20, and even 10 can be too much.
boot.loader.generic-extlinux-compatible.configurationLimit = 8;
}

View File

@ -1,17 +0,0 @@
{ ... }:
{
fileSystems."/nix" = {
device = "/dev/disk/by-uuid/1f1271f8-53ce-4081-8a29-60a4a6b5d6f9";
fsType = "btrfs";
options = [
"compress=zstd"
"defaults"
];
};
fileSystems."/boot" = {
device = "/dev/disk/by-uuid/0299-F1E5";
fsType = "vfat";
};
}

View File

@ -1,14 +0,0 @@
{ pkgs, ... }:
{
imports = [
./fs.nix
];
sane.image.extraBootFiles = [ pkgs.bootpart-uefi-x86_64 ];
sane.persist.enable = false; # what we mean here is that the image is immutable; `/` is still tmpfs.
sane.nixcache.enable = false; # don't want to be calling out to dead machines that we're *trying* to rescue
# auto-login at shell
services.getty.autologinUser = "colin";
# users.users.colin.initialPassword = "colin";
}

View File

@ -1,12 +0,0 @@
{ ... }:
{
fileSystems."/nix" = {
device = "/dev/disk/by-uuid/44445555-6666-7777-8888-999900001111";
fsType = "ext4";
};
fileSystems."/boot" = {
device = "/dev/disk/by-uuid/2222-3333";
fsType = "vfat";
};
}

View File

@ -1,49 +0,0 @@
{ config, pkgs, ... }:
{
imports = [
./fs.nix
./net.nix
./services
];
sane.programs = {
# for administering services
freshrss.enableFor.user.colin = true;
matrix-synapse.enableFor.user.colin = true;
signaldctl.enableFor.user.colin = true;
};
sane.roles.build-machine.enable = true;
sane.programs.zsh.config.showDeadlines = false; # ~/knowledge doesn't always exist
sane.programs.consoleUtils.suggestedPrograms = [
"consoleMediaUtils" # notably, for go2tv / casting
"pcConsoleUtils"
"sane-scripts.stop-all-servo"
];
sane.services.dyn-dns.enable = true;
sane.services.trust-dns.asSystemResolver = false; # TODO: enable once it's all working well
sane.services.wg-home.enable = true;
sane.services.wg-home.visibleToWan = true;
sane.services.wg-home.forwardToWan = true;
sane.services.wg-home.routeThroughServo = false;
sane.services.wg-home.ip = config.sane.hosts.by-name."servo".wg-home.ip;
sane.ovpn.addrV4 = "172.23.174.114";
# sane.ovpn.addrV6 = "fd00:0000:1337:cafe:1111:1111:8df3:14b0";
sane.nixcache.remote-builders.desko = false;
sane.nixcache.remote-builders.servo = false;
# sane.services.duplicity.enable = true; # TODO: re-enable after HW upgrade
# automatically log in at the virtual consoles.
# using root here makes sure we always have an escape hatch
services.getty.autologinUser = "root";
sane.image.extraBootFiles = [ pkgs.bootpart-uefi-x86_64 ];
# both transmission and ipfs try to set different net defaults.
# we just use the most aggressive of the two here:
boot.kernel.sysctl = {
"net.core.rmem_max" = 4194304; # 4MB
};
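# after a rebuild, `sysctl net.core.rmem_max` should report the value above (in bytes).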
}

View File

@ -1,157 +0,0 @@
# zfs docs:
# - <https://nixos.wiki/wiki/ZFS>
# - <repo:nixos/nixpkgs:nixos/modules/tasks/filesystems/zfs.nix>
#
# zfs check health: `zpool status`
#
# - zfs pool creation (requires `boot.supportedFilesystems = [ "zfs" ];`):
# - 1. identify disk IDs: `ls -l /dev/disk/by-id`
# - 2. pool these disks: `zpool create -f -m legacy pool raidz ata-ST4000VN008-2DR166_WDH0VB45 ata-ST4000VN008-2DR166_WDH17616 ata-ST4000VN008-2DR166_WDH0VC8Q ata-ST4000VN008-2DR166_WDH17680`
# - legacy documented: <https://superuser.com/questions/790036/what-is-a-zfs-legacy-mount-point>
# - 3. enable acl support: `zfs set acltype=posixacl pool`
#
# import pools: `zpool import pool`
# show zfs datasets: `zfs list` (will be empty if haven't imported)
# show zfs properties (e.g. compression): `zfs get all pool`
# set zfs properties: `zfs set compression=on pool`
{ ... }:
{
# hostId: not used for anything except zfs guardrail?
# [hex(ord(x)) for x in 'serv']
networking.hostId = "73657276";
boot.supportedFilesystems = [ "zfs" ];
# boot.zfs.enabled = true;
boot.zfs.forceImportRoot = false;
# scrub all zfs pools weekly:
services.zfs.autoScrub.enable = true;
boot.extraModprobeConfig = ''
### zfs_arc_max tunable:
# ZFS likes to use half the ram for its own cache and let the kernel push everything else to swap.
# so, reduce its cache size
# see: <https://askubuntu.com/a/1290387>
# see: <https://serverfault.com/a/1119083>
# see: <https://openzfs.github.io/openzfs-docs/Performance%20and%20Tuning/Module%20Parameters.html#zfs-arc-max>
# for all tunables, see: `man 4 zfs`
# to update these parameters without rebooting:
# - `echo '4294967296' | sane-sudo-redirect /sys/module/zfs/parameters/zfs_arc_max`
### zfs_bclone_enabled tunable
# this allows `cp --reflink=always FOO BAR` to work. i.e. shallow copies.
# it's unstable as of 2.2.3; it led to *actual* corruption in 2.2.1, but is hopefully better by now.
# - <https://github.com/openzfs/zfs/issues/405>
# note that `du -h` won't *always* show the reduced size for reflink'd files (?).
# `zpool get all | grep clone` seems to be the way to *actually* see how much data is being deduped
options zfs zfs_arc_max=4294967296 zfs_bclone_enabled=1
'';
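# to confirm these tunables took effect after a reboot:
# - `cat /sys/module/zfs/parameters/zfs_arc_max` should print 4294967296
# - `grep c_max /proc/spl/kstat/zfs/arcstats` shows what the ARC is actually capped at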
# to be able to mount the pool like this, make sure to tell zfs to NOT manage it itself.
# otherwise local-fs.target will FAIL and you will be dropped into a rescue shell.
# - `zfs set mountpoint=legacy pool`
# if done correctly, the pool can be mounted before this `fileSystems` entry is created:
# - `sudo mount -t zfs pool /mnt/persist/pool`
fileSystems."/mnt/pool" = {
device = "pool";
fsType = "zfs";
options = [ "acl" ]; #< not sure if this `acl` flag is actually necessary. it mounts without it.
};
# services.zfs.zed = ... # TODO: zfs can send me emails when disks fail
sane.programs.sysadminUtils.suggestedPrograms = [ "zfs" ];
sane.persist.stores."ext" = {
origin = "/mnt/pool/persist";
storeDescription = "external HDD storage";
defaultMethod = "bind"; #< TODO: change to "symlink"?
};
# increase /tmp space (defaults to 50% of RAM) for building large nix things.
# even the stock `nixpkgs.linux` consumes > 16 GB of tmp
fileSystems."/tmp".options = [ "size=32G" ];
fileSystems."/nix" = {
device = "/dev/disk/by-uuid/cc81cca0-3cc7-4d82-a00c-6243af3e7776";
fsType = "btrfs";
options = [
"compress=zstd"
"defaults"
];
};
fileSystems."/boot" = {
device = "/dev/disk/by-uuid/6EE3-4171";
fsType = "vfat";
};
# slow, external storage (for archiving, etc)
fileSystems."/mnt/usb-hdd" = {
device = "/dev/disk/by-uuid/aa272cff-0fcc-498e-a4cb-0d95fb60631b";
fsType = "btrfs";
options = [
"compress=zstd"
"defaults"
];
};
sane.fs."/mnt/usb-hdd".mount = {};
# FIRST TIME SETUP FOR MEDIA DIRECTORY:
# - set the setgid bit on directories: `sudo find /var/media -type d -exec chmod g+s {} +`
# - this ensures new files/dirs inherit the group of their parent dir (instead of the user who creates them)
# - ensure everything under /var/media is mounted with `-o acl`, to support acls
# - ensure all files are rwx by group: `setfacl --recursive --modify d:g::rwx /var/media`
# - alternatively, `d:g:media:rwx` to grant `media` group even when file has a different owner, but that's a bit complex
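# - to sanity-check the result: `getfacl /var/media` should list the default (d:) group entries,
#   and `ls -ld /var/media/*` should show the `s` bit in the group position of each directory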
sane.persist.sys.byStore.ext = [{
path = "/var/media";
user = "colin";
group = "media";
mode = "0775";
}];
sane.fs."/var/media/archive".dir = {};
# this is file.text instead of symlink.text so that it may be read over a remote mount (where consumers might not have any /nix/store/.../README.md path)
sane.fs."/var/media/archive/README.md".file.text = ''
this directory is for media i wish to remove from my library,
but keep for a short time in case i reverse my decision.
treat it like a system trash can.
'';
sane.fs."/var/media/Books".dir = {};
sane.fs."/var/media/Books/Audiobooks".dir = {};
sane.fs."/var/media/Books/Books".dir = {};
sane.fs."/var/media/Books/Visual".dir = {};
sane.fs."/var/media/collections".dir = {};
# sane.fs."/var/media/datasets".dir = {};
sane.fs."/var/media/freeleech".dir = {};
sane.fs."/var/media/Music".dir = {};
sane.fs."/var/media/Pictures".dir = {};
sane.fs."/var/media/Videos".dir = {};
sane.fs."/var/media/Videos/Film".dir = {};
sane.fs."/var/media/Videos/Shows".dir = {};
sane.fs."/var/media/Videos/Talks".dir = {};
# this is file.text instead of symlink.text so that it may be read over a remote mount (where consumers might not have any /nix/store/.../README.md path)
sane.fs."/var/lib/uninsane/datasets/README.md".file.text = ''
this directory may seem redundant with ../media/datasets. it isn't.
this directory exists on SSD, allowing for speedy access to specific datasets when necessary.
the contents should be a subset of what's in ../media/datasets.
'';
# btrfs doesn't easily support swapfiles
# swapDevices = [
# { device = "/nix/persist/swapfile"; size = 4096; }
# ];
# this can be a partition. create with:
# fdisk <dev>
# n
# <default partno>
# <start>
# <end>
# t
# <partno>
# 19 # set part type to Linux swap
# w # write changes
# mkswap -L swap <part>
# swapDevices = [
# {
# label = "swap";
# # TODO: randomEncryption.enable = true;
# }
# ];
}

View File

@ -1,116 +0,0 @@
{ config, lib, pkgs, ... }:
let
portOpts = with lib; types.submodule {
options = {
visibleTo.ovpns = mkOption {
type = types.bool;
default = false;
description = ''
whether to forward inbound traffic on the OVPN vpn port to the corresponding localhost port.
'';
};
visibleTo.doof = mkOption {
type = types.bool;
default = false;
description = ''
whether to forward inbound traffic on the doofnet vpn port to the corresponding localhost port.
'';
};
};
};
in
{
options = with lib; {
sane.ports.ports = mkOption {
# add the `visibleTo.{doof,ovpns}` options
type = types.attrsOf portOpts;
};
};
config = {
networking.domain = "uninsane.org";
sane.ports.openFirewall = true;
sane.ports.openUpnp = true;
# unless we add interface-specific settings for each VPN, we have to define nameservers globally.
# networking.nameservers = [
# "1.1.1.1"
# "9.9.9.9"
# ];
# services.resolved.extraConfig = ''
# # docs: `man resolved.conf`
# # DNS servers to use via the `wg-ovpns` interface.
# # i hope that from the root ns, these aren't visible.
# DNS=46.227.67.134%wg-ovpns 192.165.9.158%wg-ovpns
# FallbackDNS=1.1.1.1 9.9.9.9
# '';
# tun-sea config
sane.dns.zones."uninsane.org".inet.A."doof.tunnel" = "205.201.63.12";
# sane.dns.zones."uninsane.org".inet.AAAA."doof.tunnel" = "2602:fce8:106::51"; #< TODO: enable IPv6
networking.wireguard.interfaces.wg-doof = {
privateKeyFile = config.sops.secrets.wg_doof_privkey.path;
# wg is active only in this namespace.
# run e.g. ip netns exec doof <some command like ping/curl/etc, it'll go through wg>
# sudo ip netns exec doof ping www.google.com
interfaceNamespace = "doof";
ips = [
"205.201.63.12"
# "2602:fce8:106::51/128" #< TODO: enable IPv6
];
peers = [
{
publicKey = "nuESyYEJ3YU0hTZZgAd7iHBz1ytWBVM5PjEL1VEoTkU=";
# TODO: configure DNS within the doof ns and use tun-sea.doof.net endpoint
# endpoint = "tun-sea.doof.net:53263";
endpoint = "205.201.63.44:53263";
allowedIPs = [ "0.0.0.0/0" "::/0" ];
persistentKeepalive = 25; #< keep the NAT alive
}
];
};
sane.netns.doof.hostVethIpv4 = "10.0.2.5";
sane.netns.doof.netnsVethIpv4 = "10.0.2.6";
sane.netns.doof.netnsPubIpv4 = "205.201.63.12";
sane.netns.doof.routeTable = 12;
# OVPN CONFIG (https://www.ovpn.com):
# DOCS: https://nixos.wiki/wiki/WireGuard
# if you `systemctl restart wireguard-wg-ovpns`, make sure to also restart any other services in `NetworkNamespacePath = .../ovpns`.
# TODO: why not create the namespace as a separate operation (nix config for that?)
networking.wireguard.enable = true;
networking.wireguard.interfaces.wg-ovpns = {
privateKeyFile = config.sops.secrets.wg_ovpns_privkey.path;
# wg is active only in this namespace.
# run e.g. ip netns exec ovpns <some command like ping/curl/etc, it'll go through wg>
# sudo ip netns exec ovpns ping www.google.com
interfaceNamespace = "ovpns";
ips = [ "185.157.162.178" ];
peers = [
{
publicKey = "SkkEZDCBde22KTs/Hc7FWvDBfdOCQA4YtBEuC3n5KGs=";
endpoint = "185.157.162.10:9930";
# alternatively: use hostname, but that presents bootstrapping issues (e.g. if host net flakes)
# endpoint = "vpn36.prd.amsterdam.ovpn.com:9930";
allowedIPs = [ "0.0.0.0/0" ];
# NixOS says this is important for keeping NATs active
persistentKeepalive = 25;
# re-executes wg this often. docs hint that this might help wg notice DNS/hostname changes.
# so, maybe that helps if we specify endpoint as a domain name
# dynamicEndpointRefreshSeconds = 30;
# when refresh fails, try it again after this period instead.
# TODO: not avail until nixpkgs upgrade
# dynamicEndpointRefreshRestartSeconds = 5;
}
];
};
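# rough ways to verify the tunnel from inside the namespace (assumes `wg` and `curl` are on PATH):
# - `sudo ip netns exec ovpns wg show` should list the peer with a recent handshake
# - `sudo ip netns exec ovpns curl https://ifconfig.me` should print the VPN exit address, not the WAN address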
sane.netns.ovpns.hostVethIpv4 = "10.0.1.5";
sane.netns.ovpns.netnsVethIpv4 = "10.0.1.6";
sane.netns.ovpns.netnsPubIpv4 = "185.157.162.178";
sane.netns.ovpns.routeTable = 11;
sane.netns.ovpns.dns = "46.227.67.134"; #< DNS requests inside the namespace are forwarded here
};
}

View File

@ -1,34 +0,0 @@
{ config, lib, ... }:
let
cweb-cfg = config.services.calibre-web;
inherit (cweb-cfg) user group;
inherit (cweb-cfg.listen) ip port;
svc-dir = "/var/lib/${cweb-cfg.dataDir}";
in
# XXX: disabled because of runtime errors like:
# > File "/nix/store/c7jqvx980nlg9xhxi065cba61r2ain9y-calibre-web-0.6.19/lib/python3.10/site-packages/calibreweb/cps/db.py", line 926, in speaking_language
# > languages = self.session.query(Languages) \
# > AttributeError: 'NoneType' object has no attribute 'query'
lib.mkIf false
{
sane.persist.sys.byStore.plaintext = [
{ inherit user group; mode = "0700"; path = svc-dir; method = "bind"; }
];
services.calibre-web.enable = true;
services.calibre-web.listen.ip = "127.0.0.1";
# XXX: externally populate `${svc-dir}/metadata.db` (once) from
# <https://github.com/janeczku/calibre-web/blob/master/library/metadata.db>
# i don't know why you have to do this??
# services.calibre-web.options.calibreLibrary = svc-dir;
services.nginx.virtualHosts."calibre.uninsane.org" = {
forceSSL = true;
enableACME = true;
locations."/" = {
proxyPass = "http://${ip}:${builtins.toString port}";
};
};
sane.dns.zones."uninsane.org".inet.CNAME."calibre" = "native";
}

View File

@ -1,145 +0,0 @@
# TURN/STUN NAT traversal service
# commonly used to establish realtime calls with prosody, or possibly matrix/synapse
#
# - <https://github.com/coturn/coturn/>
# - `man turnserver`
# - config docs: <https://github.com/coturn/coturn/blob/master/examples/etc/turnserver.conf>
#
# N.B. during operation it's NORMAL to see "error 401".
# during session creation:
# - client sends Allocate request
# - server replies error 401, providing a realm and nonce
# - client uses realm + nonce + shared secret to construct an auth key & call Allocate again
# - server replies Allocate Success Response
# - source: <https://stackoverflow.com/a/66643135>
#
# N.B. this (the safest) implementation routes all traffic THROUGH A VPN
# - that adds a lot of latency, but in practice turns out to be inconsequential.
# i guess ICE allows clients to prefer the other party's lower-latency server, in practice?
# - still, this is the "safe" implementation because STUN works with IP addresses instead of domain names:
# 1. client A queries the STUN server to determine its own IP address/port.
# 2. client A tells client B which IP address/port client A is visible on.
# 3. client B contacts that IP address/port
# this only works so long as the IP address/port which STUN server sees client A on is publicly routable.
# that is NOT the case when the STUN server and client A are on the same LAN
# even if client A contacts the STUN server via its WAN address with port reflection enabled.
# hence, there's no obvious way to put the STUN server on the same LAN as either client and expect the rest to work.
# - there was an old version which *half worked*:
# - run the turn server in the root namespace.
# - bind the turn server to the veth connecting it to the VPN namespace (so it sends outgoing traffic to the right place).
# - NAT the turn port range from the VPN into the root namespace (so it receives incoming traffic).
# - this approach would fail the prosody conversations.im check, but i didn't notice *obvious* call routing errors.
#
# debugging:
# - log messages like 'usage: realm=<turn.uninsane.org>, username=<1715915193>, rp=14, rb=1516, sp=8, sb=684'
# - rp = received packets
# - rb = received bytes
# - sp = sent packets
# - sb = sent bytes
{ config, lib, ... }:
let
# TURN port range (inclusive).
# default coturn behavior is to use the upper quarter of all ports. i.e. 49152 - 65535.
# i believe TURN allocations expire after either 5 or 10 minutes of inactivity.
turnPortLow = 49152; # 49152 = 0xc000
turnPortHigh = turnPortLow + 256;
turnPortRange = lib.range turnPortLow turnPortHigh;
in
{
# the port definitions are only needed if running in the root net namespace
# sane.ports.ports = lib.mkMerge ([
# {
# "3478" = {
# # this is the "control" port.
# # i.e. no client data is forwarded through it, but it's where clients request tunnels.
# protocol = [ "tcp" "udp" ];
# # visibleTo.lan = true;
# # visibleTo.wan = true;
# visibleTo.ovpns = true; # forward traffic from the VPN to the root NS
# description = "colin-stun-turn";
# };
# "5349" = {
# # the other port 3478 also supports TLS/DTLS, but presumably clients wanting TLS will default to 5349
# protocol = [ "tcp" ];
# # visibleTo.lan = true;
# # visibleTo.wan = true;
# visibleTo.ovpns = true;
# description = "colin-stun-turn-over-tls";
# };
# }
# ] ++ (builtins.map
# (port: {
# "${builtins.toString port}" = let
# count = port - turnPortLow + 1;
# numPorts = turnPortHigh - turnPortLow + 1;
# in {
# protocol = [ "tcp" "udp" ];
# # visibleTo.lan = true;
# # visibleTo.wan = true;
# visibleTo.ovpns = true;
# description = "colin-turn-${builtins.toString count}-of-${builtins.toString numPorts}";
# };
# })
# turnPortRange
# ));
services.nginx.virtualHosts."turn.uninsane.org" = {
# allow ACME to procure a cert via nginx for this domain
enableACME = true;
};
sane.dns.zones."uninsane.org".inet = {
# CNAME."turn" = "servo.wan";
# CNAME."turn" = "ovpns";
# CNAME."turn" = "native";
# XXX: SRV records have to point to something with a A/AAAA record; no CNAMEs
A."turn" = "%AOVPNS%";
# A."turn" = "%AWAN%";
SRV."_stun._udp" = "5 50 3478 turn";
SRV."_stun._tcp" = "5 50 3478 turn";
SRV."_stuns._tcp" = "5 50 5349 turn";
SRV."_turn._udp" = "5 50 3478 turn";
SRV."_turn._tcp" = "5 50 3478 turn";
SRV."_turns._tcp" = "5 50 5349 turn";
};
sane.derived-secrets."/var/lib/coturn/shared_secret.bin" = {
encoding = "base64";
# TODO: make this not globally readable
acl.mode = "0644";
};
sane.fs."/var/lib/coturn/shared_secret.bin".wantedBeforeBy = [ "coturn.service" ];
# provide access to certs
users.users.turnserver.extraGroups = [ "nginx" ];
services.coturn.enable = true;
services.coturn.realm = "turn.uninsane.org";
services.coturn.cert = "/var/lib/acme/turn.uninsane.org/fullchain.pem";
services.coturn.pkey = "/var/lib/acme/turn.uninsane.org/key.pem";
#v disable to allow unauthenticated access (or set `services.coturn.no-auth = true`)
services.coturn.use-auth-secret = true;
services.coturn.static-auth-secret-file = "/var/lib/coturn/shared_secret.bin";
services.coturn.lt-cred-mech = true; #< XXX: use-auth-secret overrides lt-cred-mech
services.coturn.min-port = turnPortLow;
services.coturn.max-port = turnPortHigh;
# services.coturn.secure-stun = true;
services.coturn.extraConfig = lib.concatStringsSep "\n" [
"verbose"
# "Verbose" #< even MORE verbosity than "verbose" (it's TOO MUCH verbosity really)
"no-multicast-peers" # disables sending to IPv4 broadcast addresses (e.g. 224.0.0.0/3)
# "listening-ip=${config.sane.netns.ovpns.hostVethIpv4}" "external-ip=${config.sane.netns.ovpns.netnsPubIpv4}" #< 2024/04/25: works, if running in root namespace
"listening-ip=${config.sane.netns.ovpns.netnsPubIpv4}" "external-ip=${config.sane.netns.ovpns.netnsPubIpv4}"
# old attempts:
# "external-ip=${config.sane.netns.ovpns.netnsPubIpv4}/${config.sane.netns.ovpns.hostVethIpv4}"
# "listening-ip=10.78.79.51" # can be specified multiple times; omit for *
# "external-ip=97.113.128.229/10.78.79.51"
# "external-ip=97.113.128.229"
# "mobility" # "mobility with ICE (MICE) specs support" (?)
];
systemd.services.coturn.serviceConfig.NetworkNamespacePath = "/run/netns/ovpns";
}

View File

@ -1,83 +0,0 @@
# as of 2023/12/02: complete blockchain is 530 GiB (on-disk size may be larger)
#
# ports:
# - 8333: for node-to-node communications
# - 8332: rpc (client-to-node)
#
# rpc setup:
# - generate a password
# - use: <https://github.com/bitcoin/bitcoin/blob/master/share/rpcauth/rpcauth.py>
# (rpcauth.py is not included in the `'.#bitcoin'` package result)
# - `wget https://raw.githubusercontent.com/bitcoin/bitcoin/master/share/rpcauth/rpcauth.py`
# - `python ./rpcauth.py colin`
# - copy the hash here. it's SHA-256, so safe to be public.
# - add "rpcuser=colin" and "rpcpassword=<output>" to secrets/servo/bitcoin.conf (i.e. ~/.bitcoin/bitcoin.conf)
# - bitcoin.conf docs: <https://github.com/bitcoin/bitcoin/blob/master/doc/bitcoin-conf.md>
# - validate with `bitcoin-cli -netinfo`
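# the resulting client-side bitcoin.conf then looks roughly like (placeholders, not real credentials):
#   rpcuser=colin
#   rpcpassword=<the password printed by rpcauth.py>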
{ config, lib, pkgs, sane-lib, ... }:
let
# wrapper to run bitcoind with the tor onion address as externalip (computed at runtime)
_bitcoindWithExternalIp = with pkgs; writeShellScriptBin "bitcoind" ''
externalip="$(cat /var/lib/tor/onion/bitcoind/hostname)"
exec ${bitcoind}/bin/bitcoind "-externalip=$externalip" "$@"
'';
# the package i provide to services.bitcoind ends up on system PATH, and used by other tools like clightning.
# therefore, even though services.bitcoind only needs `bitcoind` binary, provide all the other bitcoin-related binaries (notably `bitcoin-cli`) as well:
bitcoindWithExternalIp = with pkgs; symlinkJoin {
name = "bitcoind-with-external-ip";
paths = [ _bitcoindWithExternalIp bitcoind ];
};
in
{
sane.persist.sys.byStore.ext = [
{ user = "bitcoind-mainnet"; group = "bitcoind-mainnet"; path = "/var/lib/bitcoind-mainnet"; method = "bind"; }
];
# sane.ports.ports."8333" = {
# # this allows other nodes and clients to download blocks from me.
# protocol = [ "tcp" ];
# visibleTo.wan = true;
# description = "colin-bitcoin";
# };
services.tor.relay.onionServices.bitcoind = {
version = 3;
map = [{
# by default tor will route public tor port P to 127.0.0.1:P.
# so if this port is the same as bitcoind would natively use, then no further config is needed here.
# see: <https://2019.www.torproject.org/docs/tor-manual.html.en#HiddenServicePort>
port = 8333;
# target.port; target.addr; #< set if tor port != bitcoind port
}];
# allow "tor" group (i.e. bitcoind-mainnet) to read /var/lib/tor/onion/bitcoind/hostname
settings.HiddenServiceDirGroupReadable = true;
};
services.bitcoind.mainnet = {
enable = true;
package = bitcoindWithExternalIp;
rpc.users.colin = {
# see docs at top of file for how to generate this
passwordHMAC = "30002c05d82daa210550e17a182db3f3$6071444151281e1aa8a2729f75e3e2d224e9d7cac3974810dab60e7c28ffaae4";
};
extraConfig = ''
# don't load the wallet, and disable wallet RPC calls
disablewallet=1
# proxy all outbound traffic through Tor
proxy=127.0.0.1:9050
'';
};
users.users.bitcoind-mainnet.extraGroups = [ "tor" ];
systemd.services.bitcoind-mainnet.serviceConfig.RestartSec = "30s"; #< default is 0
sane.users.colin.fs.".bitcoin/bitcoin.conf" = sane-lib.fs.wantedSymlinkTo config.sops.secrets."bitcoin.conf".path;
sops.secrets."bitcoin.conf" = {
mode = "0600";
owner = "colin";
group = "users";
};
sane.programs.bitcoind.enableFor.user.colin = true; # for debugging/administration: `bitcoin-cli`
}

View File

@ -1,782 +0,0 @@
#!/usr/bin/env nix-shell
#!nix-shell -i python3 -p "python3.withPackages (ps: [ ps.pyln-client ])"
"""
clightning-sane: helper to perform common Lightning node admin operations:
- view channel balances
- rebalance channels
COMMON OPERATIONS:
- view channel balances: `clightning-sane status`
- rebalance channels to improve routability (without paying any fees): `clightning-sane autobalance`
FULL OPERATION:
- `clightning-sane status --full`
- `P$`: represents how many msats i've captured in fees from this channel.
- `COST`: rough measure of how much it's "costing" me to let my channel partner hold funds on his side of the channel.
this is based on the notion that i only capture fees from outbound transactions, and so the channel partner holding all liquidity means i can't capture fees on that liquidity.
"""
# pyln-client docs: <https://github.com/ElementsProject/lightning/tree/master/contrib/pyln-client>
# terminology:
# - "scid": "Short Channel ID", e.g. 123456x7890x0
# from this id, we can locate the actual channel, its peers, and its parameters
import argparse
import logging
import math
import sys
import time
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass
from enum import Enum
from pyln.client import LightningRpc, Millisatoshi, RpcError
logger = logging.getLogger(__name__)
RPC_FILE = "/var/lib/clightning/bitcoin/lightning-rpc"
# CLTV (HLTC delta) of the final hop
# set this too low and you might get inadvertent channel closures (?)
CLTV = 18
# for every sequentially failed transaction, delay this much before trying again.
# note that the initial route building process can involve 10-20 "transient" failures, as it discovers dead channels.
TX_FAIL_BACKOFF = 0.8
MAX_SEQUENTIAL_JOB_FAILURES = 200
class LoopError(Enum):
""" error when trying to loop sats, or when unable to calculate a route for the loop """
TRANSIENT = "TRANSIENT" # try again, we'll maybe find a different route
NO_ROUTE = "NO_ROUTE"
class RouteError(Enum):
""" error when calculated a route """
HAS_BASE_FEE = "HAS_BASE_FEE"
NO_ROUTE = "NO_ROUTE"
class Metrics:
looped_msat: int = 0
sendpay_fail: int = 0
sendpay_succeed: int = 0
own_bad_channel: int = 0
no_route: int = 0
in_ch_unsatisfiable: int = 0
def __repr__(self) -> str:
return f"looped:{self.looped_msat}, tx:{self.sendpay_succeed}, tx_fail:{self.sendpay_fail}, own_bad_ch:{self.own_bad_channel}, no_route:{self.no_route}, in_ch_restricted:{self.in_ch_unsatisfiable}"
@dataclass
class TxBounds:
max_msat: int
min_msat: int = 0
def __repr__(self) -> str:
return f"TxBounds({self.min_msat} <= msat <= {self.max_msat})"
def is_satisfiable(self) -> bool:
return self.min_msat <= self.max_msat
def raise_max_to_be_satisfiable(self) -> "Self":
if self.max_msat < self.min_msat:
logger.debug(f"raising max_msat to be consistent: {self.max_msat} -> {self.min_msat}")
return TxBounds(self.min_msat, self.min_msat)
return TxBounds(min_msat=self.min_msat, max_msat=self.max_msat)
def intersect(self, other: "TxBounds") -> "Self":
return TxBounds(
min_msat=max(self.min_msat, other.min_msat),
max_msat=min(self.max_msat, other.max_msat),
)
def restrict_to_htlc(self, ch: "LocalChannel", why: str = "") -> "Self":
"""
apply min/max HTLC size restrictions of the given channel.
"""
if ch:
why = why or ch.directed_scid_to_me
if why: why = f"{why}: "
new_min, new_max = self.min_msat, self.max_msat
if ch.htlc_minimum_to_me > self.min_msat:
new_min = ch.htlc_minimum_to_me
logger.debug(f"{why}raising min_msat due to HTLC requirements: {self.min_msat} -> {new_min}")
if ch.htlc_maximum_to_me < self.max_msat:
new_max = ch.htlc_maximum_to_me
logger.debug(f"{why}lowering max_msat due to HTLC requirements: {self.max_msat} -> {new_max}")
return TxBounds(min_msat=new_min, max_msat=new_max)
def restrict_to_zero_fees(self, ch: "LocalChannel"=None, base: int=0, ppm: int=0, why:str = "") -> "Self":
"""
restrict tx size such that PPM fees are zero.
if the channel has a base fee, then `max_msat` is forced to 0.
"""
if ch:
why = why or ch.directed_scid_to_me
self = self.restrict_to_zero_fees(base=ch.to_me["base_fee_millisatoshi"], ppm=ch.to_me["fee_per_millionth"], why=why)
if why: why = f"{why}: "
new_max = self.max_msat
ppm_max = math.ceil(1000000 / ppm) - 1 if ppm != 0 else new_max
if ppm_max < new_max:
logger.debug(f"{why}decreasing max_msat due to fee ppm: {new_max} -> {ppm_max}")
new_max = ppm_max
if base != 0:
logger.debug(f"{why}free route impossible: channel has base fees")
new_max = 0
return TxBounds(min_msat=self.min_msat, max_msat=new_max)
class LocalChannel:
def __init__(self, channels: list, rpc: "RpcHelper"):
assert 0 < len(channels) <= 2, f"unexpected: channel count: {channels}"
out = None
in_ = None
for c in channels:
if c["source"] == rpc.self_id:
assert out is None, f"unexpected: multiple channels from self: {channels}"
out = c
if c["destination"] == rpc.self_id:
assert in_ is None, f"unexpected: multiple channels to self: {channels}"
in_ = c
# assert out is not None, f"no channel from self: {channels}"
# assert in_ is not None, f"no channel to self: {channels}"
if out and in_:
assert out["destination"] == in_["source"], f"channel peers are asymmetric?! {channels}"
assert out["short_channel_id"] == in_["short_channel_id"], f"channel ids differ?! {channels}"
self.from_me = out
self.to_me = in_
self.remote_node = rpc.node(self.remote_peer)
self.peer_ch = rpc.peerchannel(self.scid, self.remote_peer)
self.forwards_from_me = rpc.rpc.listforwards(out_channel=self.scid, status="settled")["forwards"]
def __repr__(self) -> str:
return self.to_str(with_scid=True, with_bal_ratio=True, with_cost=False, with_ppm_theirs=False)
def to_str(
self,
with_peer_id:bool = False,
with_scid:bool = False,
with_bal_msat:bool = False,
with_bal_ratio:bool = False,
with_cost:bool = False,
with_ppm_theirs:bool = False,
with_ppm_mine:bool = False,
with_profits:bool = True,
with_payments:bool = False,
) -> str:
base_flag = "*" if not self.online or self.base_fee_to_me != 0 else ""
alias = f"({self.remote_alias}){base_flag}"
peerid = f" {self.remote_peer}" if with_peer_id else ""
scid = f" scid:{self.scid:>13}" if with_scid else ""
bal = f" S:{int(self.sendable):11}/R:{int(self.receivable):11}" if with_bal_msat else ""
ratio = f" MINE:{(100*self.send_ratio):>8.4f}%" if with_bal_ratio else ""
payments = f" OUT:{int(self.out_fulfilled_msat):>11}/IN:{int(self.in_fulfilled_msat):>11}" if with_payments else ""
profits = f" P$:{int(self.fees_lifetime_mine):>8}" if with_profits else ""
cost = f" COST:{self.opportunity_cost_lent:>8}" if with_cost else ""
ppm_theirs = self.ppm_to_me if self.to_me else "N/A"
ppm_theirs = f" PPM_THEIRS:{ppm_theirs:>6}" if with_ppm_theirs else ""
ppm_mine = self.ppm_from_me if self.from_me else "N/A"
ppm_mine = f" PPM_MINE:{ppm_mine:>6}" if with_ppm_mine else ""
return f"channel{alias:30}{peerid}{scid}{bal}{ratio}{payments}{profits}{cost}{ppm_theirs}{ppm_mine}"
@property
def online(self) -> bool:
return self.from_me and self.to_me
@property
def remote_peer(self) -> str:
if self.from_me:
return self.from_me["destination"]
else:
return self.to_me["source"]
@property
def remote_alias(self) -> str:
return self.remote_node["alias"]
@property
def scid(self) -> str:
if self.from_me:
return self.from_me["short_channel_id"]
else:
return self.to_me["short_channel_id"]
@property
def htlc_minimum_to_me(self) -> Millisatoshi:
return self.to_me["htlc_minimum_msat"]
@property
def htlc_minimum_from_me(self) -> Millisatoshi:
return self.from_me["htlc_minimum_msat"]
@property
def htlc_minimum(self) -> Millisatoshi:
return max(self.htlc_minimum_to_me, self.htlc_minimum_from_me)
@property
def htlc_maximum_to_me(self) -> Millisatoshi:
return self.to_me["htlc_maximum_msat"]
@property
def htlc_maximum_from_me(self) -> Millisatoshi:
return self.from_me["htlc_maximum_msat"]
@property
def htlc_maximum(self) -> Millisatoshi:
return min(self.htlc_maximum_to_me, self.htlc_maximum_from_me)
@property
def direction_to_me(self) -> int:
return self.to_me["direction"]
@property
def direction_from_me(self) -> int:
return self.from_me["direction"]
@property
def directed_scid_to_me(self) -> str:
return f"{self.scid}/{self.direction_to_me}"
@property
def directed_scid_from_me(self) -> str:
return f"{self.scid}/{self.direction_from_me}"
@property
def delay_them(self) -> str:
return self.to_me["delay"]
@property
def delay_me(self) -> str:
return self.from_me["delay"]
@property
def ppm_to_me(self) -> int:
return self.to_me["fee_per_millionth"]
@property
def ppm_from_me(self) -> int:
return self.from_me["fee_per_millionth"]
# return self.peer_ch["fee_proportional_millionths"]
@property
def base_fee_to_me(self) -> int:
return self.to_me["base_fee_millisatoshi"]
@property
def receivable(self) -> int:
return self.peer_ch["receivable_msat"]
@property
def sendable(self) -> int:
return self.peer_ch["spendable_msat"]
@property
def in_fulfilled_msat(self) -> Millisatoshi:
return self.peer_ch["in_fulfilled_msat"]
@property
def out_fulfilled_msat(self) -> Millisatoshi:
return self.peer_ch["out_fulfilled_msat"]
@property
def fees_lifetime_mine(self) -> Millisatoshi:
return sum(fwd["fee_msat"] for fwd in self.forwards_from_me)
@property
def send_ratio(self) -> float:
cap = self.receivable + self.sendable
return self.sendable / cap
@property
def opportunity_cost_lent(self) -> int:
""" how much msat did we gain by pushing their channel to its current balance? """
return int(self.receivable * self.ppm_from_me / 1000000)
class RpcHelper:
def __init__(self, rpc: LightningRpc):
self.rpc = rpc
self.self_id = rpc.getinfo()["id"]
def localchannel(self, scid: str) -> LocalChannel:
listchan = self.rpc.listchannels(scid)
# this assertion would probably indicate a typo in the scid
assert listchan and listchan.get("channels", []) != [], f"bad listchannels for {scid}: {listchan}"
return LocalChannel(listchan["channels"], self)
def node(self, id: str) -> dict:
nodes = self.rpc.listnodes(id)["nodes"]
assert len(nodes) == 1, f"unexpected: multiple nodes for {id}: {nodes}"
return nodes[0]
def peerchannel(self, scid: str, peer_id: str) -> dict:
peerchannels = self.rpc.listpeerchannels(peer_id)["channels"]
channels = [c for c in peerchannels if c["short_channel_id"] == scid]
assert len(channels) == 1, f"expected exactly 1 channel, got: {channels}"
return channels[0]
def try_getroute(self, *args, **kwargs) -> dict | None:
""" wrapper for getroute which returns None instead of error if no route exists """
try:
route = self.rpc.getroute(*args, **kwargs)
except RpcError as e:
logger.debug(f"rpc failed: {e}")
return None
else:
route = route["route"]
if route == []: return None
return route
class LoopRouter:
def __init__(self, rpc: RpcHelper, metrics: Metrics = None):
self.rpc = rpc
self.metrics = metrics or Metrics()
self.bad_channels = [] # list of directed scid
self.nonzero_base_channels = [] # list of directed scid
def drop_caches(self) -> None:
logger.info("LoopRouter.drop_caches()")
self.bad_channels = []
def _get_directed_scid(self, scid: str, direction: int) -> dict:
channels = self.rpc.rpc.listchannels(scid)["channels"]
channels = [c for c in channels if c["direction"] == direction]
assert len(channels) == 1, f"expected exactly 1 channel: {channels}"
return channels[0]
def loop_once(self, out_scid: str, in_scid: str, bounds: TxBounds) -> LoopError|int:
out_ch = self.rpc.localchannel(out_scid)
in_ch = self.rpc.localchannel(in_scid)
if out_ch.directed_scid_from_me in self.bad_channels or in_ch.directed_scid_to_me in self.bad_channels:
logger.info(f"loop {out_scid} -> {in_scid} failed in our own channel")
self.metrics.own_bad_channel += 1
return LoopError.TRANSIENT
# bounds = bounds.restrict_to_htlc(out_ch) # htlc bounds seem to be enforced only in the outward direction
bounds = bounds.restrict_to_htlc(in_ch)
bounds = bounds.restrict_to_zero_fees(in_ch)
if not bounds.is_satisfiable():
self.metrics.in_ch_unsatisfiable += 1
return LoopError.NO_ROUTE
logger.debug(f"route with bounds {bounds}")
route = self.route(out_ch, in_ch, bounds)
logger.debug(f"route: {route}")
if route == RouteError.NO_ROUTE:
self.metrics.no_route += 1
return LoopError.NO_ROUTE
elif route == RouteError.HAS_BASE_FEE:
# try again with a different route
return LoopError.TRANSIENT
amount_msat = route[0]["amount_msat"]
invoice_id = f"loop-{time.time():.6f}".replace(".", "_")
invoice_desc = f"bal {out_scid}:{in_scid}"
invoice = self.rpc.rpc.invoice("any", invoice_id, invoice_desc)
logger.debug(f"invoice: {invoice}")
payment = self.rpc.rpc.sendpay(route, invoice["payment_hash"], invoice_id, amount_msat, invoice["bolt11"], invoice["payment_secret"])
logger.debug(f"sent: {payment}")
try:
wait = self.rpc.rpc.waitsendpay(invoice["payment_hash"])
logger.debug(f"result: {wait}")
except RpcError as e:
self.metrics.sendpay_fail += 1
err_data = e.error["data"]
err_scid, err_dir = err_data["erring_channel"], err_data["erring_direction"]
err_directed_scid = f"{err_scid}/{err_dir}"
logger.debug(f"ch failed, adding to excludes: {err_directed_scid}; {e.error}")
self.bad_channels.append(err_directed_scid)
return LoopError.TRANSIENT
else:
self.metrics.sendpay_succeed += 1
self.metrics.looped_msat += int(amount_msat)
return int(amount_msat)
def route(self, out_ch: LocalChannel, in_ch: LocalChannel, bounds: TxBounds) -> list[dict] | RouteError:
exclude = [
# ensure the payment doesn't cross either channel in reverse.
# note that this doesn't preclude it from taking additional trips through self, with other peers.
# out_ch.directed_scid_to_me,
# in_ch.directed_scid_from_me,
# alternatively, never route through self. this avoids a class of logic error, like what to do with fees i charge "myself".
self.rpc.self_id
] + self.bad_channels + self.nonzero_base_channels
out_peer = out_ch.remote_peer
in_peer = in_ch.remote_peer
route_or_bounds = bounds
while isinstance(route_or_bounds, TxBounds):
old_bounds = route_or_bounds
route_or_bounds = self._find_partial_route(out_peer, in_peer, old_bounds, exclude=exclude)
if route_or_bounds == old_bounds:
return RouteError.NO_ROUTE
if isinstance(route_or_bounds, RouteError):
return route_or_bounds
route = self._add_route_endpoints(route_or_bounds, out_ch, in_ch)
return route
def _find_partial_route(self, out_peer: str, in_peer: str, bounds: TxBounds, exclude: list[str]=[]) -> list[dict] | RouteError | TxBounds:
route = self.rpc.try_getroute(in_peer, amount_msat=bounds.max_msat, riskfactor=0, fromid=out_peer, exclude=exclude, cltv=CLTV)
if route is None:
logger.debug(f"no route for {bounds.max_msat}msat {out_peer} -> {in_peer}")
return RouteError.NO_ROUTE
send_msat = route[0]["amount_msat"]
if send_msat != Millisatoshi(bounds.max_msat):
logger.debug(f"found route with non-zero fee: {send_msat} -> {bounds.max_msat}. {route}")
error = None
for hop in route:
hop_scid = hop["channel"]
hop_dir = hop["direction"]
directed_scid = f"{hop_scid}/{hop_dir}"
ch = self._get_directed_scid(hop_scid, hop_dir)
if ch["base_fee_millisatoshi"] != 0:
self.nonzero_base_channels.append(directed_scid)
error = RouteError.HAS_BASE_FEE
bounds = bounds.restrict_to_zero_fees(ppm=ch["fee_per_millionth"], why=directed_scid)
return bounds.raise_max_to_be_satisfiable() if error is None else error
return route
def _add_route_endpoints(self, route, out_ch: LocalChannel, in_ch: LocalChannel):
inbound_hop = dict(
id=self.rpc.self_id,
channel=in_ch.scid,
direction=in_ch.direction_to_me,
amount_msat=route[-1]["amount_msat"],
delay=route[-1]["delay"],
style="tlv",
)
route = self._add_route_delay(route, in_ch.delay_them) + [ inbound_hop ]
outbound_hop = dict(
id=out_ch.remote_peer,
channel=out_ch.scid,
direction=out_ch.direction_from_me,
amount_msat=route[0]["amount_msat"],
delay=route[0]["delay"] + out_ch.delay_them,
style="tlv",
)
route = [ outbound_hop ] + route
return route
def _add_route_delay(self, route: list[dict], delay: int) -> list[dict]:
return [ dict(hop, delay=hop["delay"] + delay) for hop in route ]
@dataclass
class LoopJob:
out: str # scid
in_: str # scid
amount: int
@dataclass
class LoopJobIdle:
sec: int = 10
class LoopJobDone(Enum):
COMPLETED = "COMPLETED"
ABORTED = "ABORTED"
class AbstractLoopRunner:
def __init__(self, looper: LoopRouter, bounds: TxBounds, parallelism: int):
self.looper = looper
self.bounds = bounds
self.parallelism = parallelism
self.bounds_map = {} # map (out:str, in_:str) -> TxBounds. it's a cache so we don't have to try 10 routes every time.
def pop_job(self) -> LoopJob | LoopJobIdle | LoopJobDone:
raise NotImplementedError # abstract method
def finished_job(self, job: LoopJob, progress: int|LoopError) -> None:
raise NotImplementedError # abstract method
def run_to_completion(self, exit_on_any_completed:bool = False) -> None:
self.exiting = False
self.exit_on_any_completed = exit_on_any_completed
if self.parallelism == 1:
# run inline to aid debugging
self._worker_thread()
else:
with ThreadPoolExecutor(max_workers=self.parallelism) as executor:
_ = list(executor.map(lambda _i: self._try_invoke(self._worker_thread), range(self.parallelism)))
def drop_caches(self) -> None:
logger.info("AbstractLoopRunner.drop_caches()")
self.looper.drop_caches()
self.bounds_map = {}
def _try_invoke(self, f, *args) -> None:
"""
try to invoke `f` with the provided `args`, and log if it fails.
this overcomes the issue that background tasks which fail via Exception otherwise do so silently.
"""
try:
f(*args)
except Exception as e:
logger.error(f"task failed: {e}")
def _worker_thread(self) -> None:
while not self.exiting:
job = self.pop_job()
logger.debug(f"popped job: {job}")
if isinstance(job, LoopJobDone):
return self._worker_finished(job)
if isinstance(job, LoopJobIdle):
logger.debug(f"idling for {job.sec}")
time.sleep(job.sec)
continue
result = self._execute_job(job)
logger.debug(f"finishing job {job} with {result}")
self.finished_job(job, result)
def _execute_job(self, job: LoopJob) -> LoopError|int:
bounds = self.bounds_map.get((job.out, job.in_), self.bounds)
bounds = bounds.intersect(TxBounds(max_msat=job.amount))
if not bounds.is_satisfiable():
logger.debug(f"TxBounds for job are unsatisfiable; skipping: {bounds} {job}")
return LoopError.NO_ROUTE
amt_looped = self.looper.loop_once(job.out, job.in_, bounds)
if amt_looped in (0, LoopError.NO_ROUTE, LoopError.TRANSIENT):
return amt_looped
logger.info(f"looped {amt_looped} from {job.out} -> {job.in_}")
bounds = bounds.intersect(TxBounds(max_msat=amt_looped))
self.bounds_map[(job.out, job.in_)] = bounds
return amt_looped
def _worker_finished(self, job: LoopJobDone) -> None:
if job == LoopJobDone.COMPLETED and self.exit_on_any_completed:
logger.debug(f"worker completed -> exiting pool")
self.exiting = True
class LoopPairState:
# TODO: use this in MultiLoopBalancer, or stop shoving state in here and put it on LoopBalancer instead.
def __init__(self, out: str, in_: str, amount: int):
self.out = out
self.in_ = in_
self.amount_target = amount
self.amount_looped = 0
self.amount_outstanding = 0
self.tx_fail_count = 0
self.route_fail_count = 0
self.last_job_start_time = None
self.failed_tx_throttler = 0 # increase by one every time we fail, decreases more gradually, when we succeed
class LoopBalancer(AbstractLoopRunner):
def __init__(self, out: str, in_: str, amount: int, looper: LoopRouter, bounds: TxBounds, parallelism: int=1):
super().__init__(looper, bounds, parallelism)
self.state = LoopPairState(out, in_, amount)
def pop_job(self) -> LoopJob | LoopJobIdle | LoopJobDone:
if self.state.tx_fail_count + 10*self.state.route_fail_count >= MAX_SEQUENTIAL_JOB_FAILURES:
logger.info(f"giving up ({self.state.out} -> {self.state.in_}): {self.state.tx_fail_count} tx failures, {self.state.route_fail_count} route failures")
return LoopJobDone.ABORTED
if self.state.tx_fail_count + self.state.route_fail_count > 0:
# N.B.: last_job_start_time is guaranteed to have been set by now
idle_until = self.state.last_job_start_time + TX_FAIL_BACKOFF*self.state.failed_tx_throttler
idle_for = idle_until - time.time()
if self.state.amount_outstanding != 0 or idle_for > 0:
# when we hit transient failures, restrict to just one job in flight at a time.
# this is aimed at the initial route building, where multiple jobs in flight are just useless,
# but it's not a bad idea for network blips, etc, either.
logger.info(f"throttling ({self.state.out} -> {self.state.in_}) for {idle_for:.0f}: {self.state.tx_fail_count} tx failures, {self.state.route_fail_count} route failures")
return LoopJobIdle(idle_for) if idle_for > 0 else LoopJobIdle()
amount_avail = self.state.amount_target - self.state.amount_looped - self.state.amount_outstanding
if amount_avail < self.bounds.min_msat:
if self.state.amount_outstanding == 0: return LoopJobDone.COMPLETED
return LoopJobIdle() # sending out another job would risk over-transferring
amount_this_job = min(amount_avail, self.bounds.max_msat)
self.state.amount_outstanding += amount_this_job
self.state.last_job_start_time = time.time()
return LoopJob(out=self.state.out, in_=self.state.in_, amount=amount_this_job)
def finished_job(self, job: LoopJob, progress: int) -> None:
self.state.amount_outstanding -= job.amount
if progress == LoopError.NO_ROUTE:
self.state.route_fail_count += 1
self.state.failed_tx_throttler += 10
elif progress == LoopError.TRANSIENT:
self.state.tx_fail_count += 1
self.state.failed_tx_throttler += 1
else:
self.state.amount_looped += progress
self.state.tx_fail_count = 0
self.state.route_fail_count = 0
self.state.failed_tx_throttler = max(0, self.state.failed_tx_throttler - 0.2)
logger.info(f"loop progressed ({job.out} -> {job.in_}) {progress}: {self.state.amount_looped} of {self.state.amount_target}")
class MultiLoopBalancer(AbstractLoopRunner):
"""
multiplexes jobs between multiple LoopBalancers.
note that the child LoopBalancers don't actually execute the jobs -- just produce them.
"""
def __init__(self, looper: LoopRouter, bounds: TxBounds, parallelism: int=1):
super().__init__(looper, bounds, parallelism)
self.loops = []
# job_index: increments on every job so we can grab jobs evenly from each LoopBalancer.
# in the event that producers are idling, it can actually increment more than once,
# so don't take this too literally
self.job_index = 0
def add_loop(self, out: LocalChannel, in_: LocalChannel, amount: int) -> None:
"""
start looping sats from out -> in_
"""
assert not any(l.state.out == out.scid and l.state.in_ == in_.scid for l in self.loops), f"tried to add duplicate loops from {out} -> {in_}"
logger.info(f"looping from ({out}) to ({in_})")
self.loops.append(LoopBalancer(out.scid, in_.scid, amount, self.looper, self.bounds, self.parallelism))
def pop_job(self) -> LoopJob | LoopJobIdle | LoopJobDone:
# N.B.: this can be called in parallel, so try to be consistent enough to not crash
idle_job = None
abort_job = None
for i, _ in enumerate(self.loops):
loop = self.loops[(self.job_index + i) % len(self.loops)]
self.job_index += 1
job = loop.pop_job()
if isinstance(job, LoopJob):
return job
if isinstance(job, LoopJobIdle):
idle_job = LoopJobIdle(min(job.sec, idle_job.sec)) if idle_job is not None else job
if job == LoopJobDone.ABORTED:
abort_job = job
# either there's a task to idle, or we have to terminate.
# if terminating, terminate ABORTED if any job aborted, else COMPLETED
if idle_job is not None: return idle_job
if abort_job is not None: return abort_job
return LoopJobDone.COMPLETED
def finished_job(self, job: LoopJob, progress: int) -> None:
# this assumes (enforced externally) that we have only one loop for a given out/in_ pair
for l in self.loops:
if l.state.out == job.out and l.state.in_ == job.in_:
l.finished_job(job, progress)
logger.info(f"total: {self.looper.metrics}")
def balance_loop(rpc: RpcHelper, out: str, in_: str, amount_msat: int, min_msat: int, max_msat: int, parallelism: int):
looper = LoopRouter(rpc)
bounds = TxBounds(min_msat=min_msat, max_msat=max_msat)
balancer = LoopBalancer(out, in_, amount_msat, looper, bounds, parallelism)
balancer.run_to_completion()
def autobalance_once(rpc: RpcHelper, metrics: Metrics, bounds: TxBounds, parallelism: int) -> bool:
"""
autobalances all channels.
returns True if channels are balanced (or as balanced as can be); False if in need of further balancing
"""
looper = LoopRouter(rpc, metrics)
balancer = MultiLoopBalancer(looper, bounds, parallelism)
channels = []
for peerch in rpc.rpc.listpeerchannels()["channels"]:
try:
channels.append(rpc.localchannel(peerch["short_channel_id"]))
except:
logger.info(f"NO CHANNELS for {peerch['peer_id']}")
channels = [ch for ch in channels if ch.online and ch.base_fee_to_me == 0]
give_to = [ ch for ch in channels if ch.send_ratio > 0.95 ]
take_from = [ ch for ch in channels if ch.send_ratio < 0.20 ]
if give_to == [] and take_from == []:
return True
for to in give_to:
for from_ in take_from:
balancer.add_loop(to, from_, 10000000)
balancer.run_to_completion(exit_on_any_completed=True)
return False
def autobalance(rpc: RpcHelper, min_msat: int, max_msat: int, parallelism: int):
bounds = TxBounds(min_msat=min_msat, max_msat=max_msat)
metrics = Metrics()
while not autobalance_once(rpc, metrics, bounds, parallelism):
pass
def show_status(rpc: RpcHelper, full: bool=False):
"""
show a table of channel balances between peers.
"""
for peerch in rpc.rpc.listpeerchannels()["channels"]:
try:
ch = rpc.localchannel(peerch["short_channel_id"])
except:
print(f"{peerch['peer_id']} scid:{peerch['short_channel_id']} state:{peerch['state']} NO CHANNELS")
else:
print(ch.to_str(with_scid=True, with_bal_ratio=True, with_payments=True, with_cost=full, with_ppm_theirs=True, with_ppm_mine=True, with_peer_id=full))
def main():
logging.basicConfig()
logger.setLevel(logging.INFO)
parser = argparse.ArgumentParser(description=__doc__)
parser.add_argument("--verbose", action="store_true", help="more logging")
parser.add_argument("--min-msat", default="999", help="min transaction size")
parser.add_argument("--max-msat", default="1000000", help="max transaction size")
parser.add_argument("--jobs", default="1", help="how many HTLCs to keep in-flight at once")
subparsers = parser.add_subparsers(help="action")
status_parser = subparsers.add_parser("status")
status_parser.set_defaults(action="status")
status_parser.add_argument("--full", action="store_true", help="more info per channel")
loop_parser = subparsers.add_parser("loop")
loop_parser.set_defaults(action="loop")
loop_parser.add_argument("out", help="peer id to send tx through")
loop_parser.add_argument("in_", help="peer id to receive tx through")
loop_parser.add_argument("amount", help="total amount of msat to loop")
autobal_parser = subparsers.add_parser("autobalance")
autobal_parser.set_defaults(action="autobalance")
args = parser.parse_args()
if args.verbose:
logger.setLevel(logging.DEBUG)
rpc = RpcHelper(LightningRpc(RPC_FILE))
if args.action == "status":
show_status(rpc, full=args.full)
if args.action == "loop":
balance_loop(rpc, out=args.out, in_=args.in_, amount_msat=int(args.amount), min_msat=int(args.min_msat), max_msat=int(args.max_msat), parallelism=int(args.jobs))
if args.action == "autobalance":
autobalance(rpc, min_msat=int(args.min_msat), max_msat=int(args.max_msat), parallelism=int(args.jobs))
if __name__ == '__main__':
main()

View File

@ -1,135 +0,0 @@
# clightning is an implementation of Bitcoin's Lightning Network.
# as such, this assumes that `services.bitcoind` is enabled.
# docs:
# - tor clightning config: <https://docs.corelightning.org/docs/tor>
# - `lightning-cli` and subcommands: <https://docs.corelightning.org/reference/lightning-cli>
# - `man lightningd-config`
#
# management/setup/use:
# - guide: <https://github.com/ElementsProject/lightning>
#
# debugging:
# - `lightning-cli getlog debug`
# - `lightning-cli listpays` -> show payments this node sent
# - `lightning-cli listinvoices` -> show payments this node received
#
# first, acquire peers:
# - `lightning-cli connect id@host`
# where `id` is the node's pubkey, and `host` is perhaps an ip:port tuple, or a hash.onion:port tuple.
# for testing, choose any node listed on <https://1ml.com>
# - `lightning-cli listpeers`
# should show the new peer, with `connected: true`
#
# then, fund the clightning wallet
# - `lightning-cli newaddr`
#
# then, open channels
# - `lightning-cli connect ...`
# - `lightning-cli fundchannel <node_id> <amount_in_satoshis>`
#
# who to federate with?
# - a lot of the larger nodes allow hands-free channel creation
# - either inbound or outbound, sometimes paid
# - find nodes on:
# - <https://terminal.lightning.engineering/>
# - <https://1ml.com>
# - tor nodes: <https://1ml.com/node?order=capacity&iponionservice=true>
# - <https://lightningnetwork.plus>
# - <https://mempool.space/lightning>
# - <https://amboss.space>
# - a few tor-capable nodes which allow channel creation:
# - <https://c-otto.de/>
# - <https://cyberdyne.sh/>
# - <https://yalls.org/about/>
# - <https://coincept.com/>
# - more resources: <https://www.lopp.net/lightning-information.html>
# - node routability: https://hashxp.org/lightning/node/<id>
# - especially, acquire inbound liquidity via lightningnetwork.plus's swap feature
# - most of the opportunities are gated behind a minimum connection or capacity requirement
#
# tune payment parameters
# - `lightning-cli setchannel <id> [feebase] [feeppm] [htlcmin] [htlcmax] [enforcedelay] [ignorefeelimits]`
# - e.g. `lightning-cli setchannel all 0 10`
# - it's suggested that feebase=0 simplifies routing.
#
# teardown:
# - `lightning-cli withdraw <bc1... dest addr> <amount in satoshis> [feerate]`
#
# sanity:
# - `lightning-cli listfunds`
#
# to receive a payment (do as `clightning` user):
# - `lightning-cli invoice <amount in millisatoshi> <label> <description>`
# - specify amount as `any` if undetermined
# - then give the resulting bolt11 URI to the payer
# to send a payment:
# - `lightning-cli pay <bolt11 URI>`
# - or `lightning-cli pay <bolt11 URI> [amount_msat] [label] [riskfactor] [maxfeepercent] ...`
# - amount_msat must be "null" if the bolt11 URI specifies a value
# - riskfactor defaults to 10
# - maxfeepercent defaults to 0.5
# - label is a human-friendly label for my records
{ config, pkgs, ... }:
{
sane.persist.sys.byStore.ext = [
{ user = "clightning"; group = "clightning"; mode = "0710"; path = "/var/lib/clightning"; method = "bind"; }
];
# `lightning-cli` finds its RPC file via `~/.lightning/bitcoin/lightning-rpc`, to message the daemon
sane.user.fs.".lightning".symlink.target = "/var/lib/clightning";
# see bitcoin.nix for how to generate this
services.bitcoind.mainnet.rpc.users.clightning.passwordHMAC =
"befcb82d9821049164db5217beb85439$2c31ac7db3124612e43893ae13b9527dbe464ab2d992e814602e7cb07dc28985";
sane.services.clightning.enable = true;
sane.services.clightning.proxy = "127.0.0.1:9050"; # proxy outgoing traffic through tor
# sane.services.clightning.publicAddress = "statictor:127.0.0.1:9051";
sane.services.clightning.getPublicAddressCmd = "cat /var/lib/tor/onion/clightning/hostname";
services.tor.relay.onionServices.clightning = {
version = 3;
map = [{
# by default tor will route public tor port P to 127.0.0.1:P.
# so if this port is the same as clightning would natively use, then no further config is needed here.
# see: <https://2019.www.torproject.org/docs/tor-manual.html.en#HiddenServicePort>
port = 9735;
# target.port; target.addr; #< set if tor port != clightning port
}];
# allow "tor" group (i.e. clightning) to read /var/lib/tor/onion/clightning/hostname
settings.HiddenServiceDirGroupReadable = true;
};
# must be in "tor" group to read /var/lib/tor/onion/*/hostname
users.users.clightning.extraGroups = [ "tor" ];
systemd.services.clightning.after = [ "tor.service" ];
# lightning-config contains fields from here:
# - <https://docs.corelightning.org/docs/configuration>
# secret config includes:
# - bitcoin-rpcpassword
# - alias=nodename
# - rgb=rrggbb
# - fee-base=<millisatoshi>
# - fee-per-satoshi=<ppm>
# - feature configs (i.e. experimental-xyz options)
sane.services.clightning.extraConfig = ''
log-level=debug:lightningd
# peerswap:
# - config example: <https://github.com/fort-nix/nix-bitcoin/pull/462/files#diff-b357d832705b8ce8df1f41934d613f79adb77c4cd5cd9e9eb12a163fca3e16c6>
# XXX: peerswap crashes clightning on launch. stacktrace is useless.
# plugin=${pkgs.peerswap}/bin/peerswap
# peerswap-db-path=/var/lib/clightning/peerswap/swaps
# peerswap-policy-path=...
'';
sane.services.clightning.extraConfigFiles = [ config.sops.secrets."lightning-config".path ];
sops.secrets."lightning-config" = {
mode = "0640";
owner = "clightning";
group = "clightning";
};
sane.programs.clightning.enableFor.user.colin = true; # for debugging/admin: `lightning-cli`
}

View File

@ -1,10 +0,0 @@
{ ... }:
{
imports = [
./bitcoin.nix
./clightning.nix
./i2p.nix
./monero.nix
./tor.nix
];
}

View File

@ -1,4 +0,0 @@
{ ... }:
{
services.i2p.enable = true;
}

View File

@ -1,31 +0,0 @@
# as of 2023/11/26: complete downloaded blockchain should be 200GiB on disk, give or take.
{ ... }:
{
sane.persist.sys.byStore.ext = [
# /var/lib/monero/lmdb is what consumes most of the space
{ user = "monero"; group = "monero"; path = "/var/lib/monero"; method = "bind"; }
];
services.monero.enable = true;
services.monero.limits.upload = 5000; # in kB/s
services.monero.extraConfig = ''
# see: monero doc/ANONYMITY_NETWORKS.md
#
# "If any anonymity network is enabled, transactions being broadcast that lack a valid 'context'
# (i.e. the transaction did not come from a P2P connection) will only be sent to peers on anonymity networks."
#
# i think this means that setting tx-proxy here ensures any transactions sent locally to my node (via RPC)
# will be sent over an anonymity network.
tx-proxy=i2p,127.0.0.1:9000
tx-proxy=tor,127.0.0.1:9050
'';
# monero ports: <https://monero.stackexchange.com/questions/604/what-ports-does-monero-use-rpc-p2p-etc>
# - 18080 = "P2P" monero node <-> monero node connections
# - 18081 = "RPC" monero client -> monero node connections
sane.ports.ports."18080" = {
protocol = [ "tcp" ];
visibleTo.wan = true;
description = "colin-monero-p2p";
};
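# a hedged sketch, intentionally not enabled: if the RPC port (18081) ever needs to be reachable from
# the LAN, it would mirror the 18080 block above ("visibleTo.lan" is used elsewhere in this repo;
# assumed valid here too):
# sane.ports.ports."18081" = {
#   protocol = [ "tcp" ];
#   visibleTo.lan = true;
#   description = "colin-monero-rpc";
# };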
}

View File

@ -1,25 +0,0 @@
# tor settings: <https://2019.www.torproject.org/docs/tor-manual.html.en>
{ lib, ... }:
{
# tor hidden service hostnames aren't deterministic, so persist.
# might be able to get away with just persisting /var/lib/tor/onion, not sure.
sane.persist.sys.byStore.plaintext = [
{ user = "tor"; group = "tor"; mode = "0710"; path = "/var/lib/tor"; method = "bind"; }
];
# tor: `tor.enable` doesn't start a relay, exit node, proxy, etc. it's minimal.
# tor.client.enable configures a torsocks proxy at 127.0.0.1:9050, accessible *only* to localhost.
services.tor.enable = true;
services.tor.client.enable = true;
# in order for services to read /var/lib/tor/onion/*/hostname, they must be able to traverse /var/lib/tor,
# and /var/lib/tor must have g+x.
# DataDirectoryGroupReadable causes tor to use g+rx, technically more than we need, but all the files are 600 so it's fine.
services.tor.settings.DataDirectoryGroupReadable = true;
# StateDirectoryMode defaults to 0700, and thereby prevents the onion hostnames from being group readable
systemd.services.tor.serviceConfig.StateDirectoryMode = lib.mkForce "0710";
users.users.tor.homeMode = "0710"; # home mode defaults to 0700, causing readability problems, enforced by nixos "users" activation script
services.tor.settings.SafeLogging = false; # show actual .onion names in the syslog, else debugging is impossible
}

View File

@ -1,33 +0,0 @@
{ ... }:
{
imports = [
./calibre.nix
./coturn.nix
./cryptocurrencies
./email
./ejabberd.nix
./freshrss.nix
./export
./gitea.nix
./goaccess.nix
./ipfs.nix
./jackett.nix
./jellyfin.nix
./kiwix-serve.nix
./komga.nix
./lemmy.nix
./matrix
./navidrome.nix
./nginx.nix
./nixos-prebuild.nix
./ntfy
./pict-rs.nix
./pleroma.nix
./postgres.nix
./prosody
./slskd.nix
./transmission
./trust-dns.nix
./wikipedia.nix
];
}

View File

@ -1,471 +0,0 @@
# docs:
# - <https://docs.ejabberd.im/admin/configuration/basic>
# example configs:
# - <https://github.com/vkleen/machines/blob/138a2586ce185d7cf201d4e1fe898c83c4af52eb/hosts/europium/ejabberd.nix>
# - <https://github.com/Mic92/stockholm/blob/675ef0088624c9de1cb531f318446316884a9d3d/tv/3modules/ejabberd/default.nix>
# - <https://github.com/buffet/tararice/blob/master/programs/ejabberd.nix>
# - enables STUN and TURN
# - only over UDP 3478, not firewall-forwarding any TURN port range
# - uses stun_disco module (but with no options)
# - <https://github.com/leo60228/dotfiles/blob/39b3abba3009bdc31413d4757ca2f882a33eec8b/files/ejabberd.yml>
# - <https://github.com/Mic92/dotfiles/blob/ddf0f4821f554f7667fc803344657367c55fb9e6/nixos/eve/modules/ejabberd.nix>
# - <nixpkgs:nixos/tests/xmpp/ejabberd.nix>
# - 2013: <https://github.com/processone/ejabberd/blob/master/ejabberd.yml.example>
#
# compliance tests:
# - <https://compliance.conversations.im/server/uninsane.org/#xep0352>
#
# administration:
# - `sudo -u ejabberd ejabberdctl help`
#
# federation/support matrix:
# - avatars
# - nixnet.services + dino: works in MUCs but not DMs (as of 2023 H1)
# - movim.eu + dino: works in DMs, MUCs untested (as of 2023/08/29)
# - calls
# - local + dino: audio, video, works in DMs (as of 2023/08/29)
# - movim.eu + dino: audio, video, works in DMs, no matter which side initiates (as of 2023/08/30)
# - +native-cell-number@cheogram.com + dino: audio works in DMs, no matter which side initiates (as of 2023/09/01)
# - can receive calls even if sender isn't in my roster
# - this is presumably using JMP.chat's SIP servers, which then convert it to XMPP call
#
# bugs:
# - 2023/09/01: will randomly stop federating. `systemctl restart ejabberd` fixes it, but takes 10 minutes.
{ config, lib, pkgs, ... }:
let
# TODO: this range could be larger, but right now that's costly because each element is its own UPnP forward
# TURN port range (inclusive)
turnPortLow = 49152;
turnPortHigh = 49167;
turnPortRange = lib.range turnPortLow turnPortHigh;
in
# XXX(2023/10/15): disabled in favor of Prosody.
# everything configured below was fine: used ejabberd for several months.
lib.mkIf false
{
sane.persist.sys.byStore.plaintext = [
{ user = "ejabberd"; group = "ejabberd"; path = "/var/lib/ejabberd"; method = "bind"; }
];
sane.ports.ports = lib.mkMerge ([
{
"3478" = {
protocol = [ "tcp" "udp" ];
visibleTo.doof = true;
visibleTo.lan = true;
description = "colin-xmpp-stun-turn";
};
"5222" = {
protocol = [ "tcp" ];
visibleTo.doof = true;
visibleTo.lan = true;
description = "colin-xmpp-client-to-server";
};
"5223" = {
protocol = [ "tcp" ];
visibleTo.doof = true;
visibleTo.lan = true;
description = "colin-xmpps-client-to-server"; # XMPP over TLS
};
"5269" = {
protocol = [ "tcp" ];
visibleTo.doof = true;
description = "colin-xmpp-server-to-server";
};
"5270" = {
protocol = [ "tcp" ];
visibleTo.doof = true;
description = "colin-xmpps-server-to-server"; # XMPP over TLS
};
"5280" = {
protocol = [ "tcp" ];
visibleTo.doof = true;
visibleTo.lan = true;
description = "colin-xmpp-bosh";
};
"5281" = {
protocol = [ "tcp" ];
visibleTo.doof = true;
visibleTo.lan = true;
description = "colin-xmpp-bosh-https";
};
"5349" = {
protocol = [ "tcp" ];
visibleTo.doof = true;
visibleTo.lan = true;
description = "colin-xmpp-stun-turn-over-tls";
};
"5443" = {
protocol = [ "tcp" ];
visibleTo.doof = true;
visibleTo.lan = true;
description = "colin-xmpp-web-services"; # file uploads, websockets, admin
};
}
] ++ (builtins.map
(port: {
"${builtins.toString port}" = let
count = port - turnPortLow + 1;
numPorts = turnPortHigh - turnPortLow + 1;
in {
protocol = [ "tcp" "udp" ];
visibleTo.doof = true;
visibleTo.lan = true;
description = "colin-xmpp-turn-${builtins.toString count}-of-${builtins.toString numPorts}";
};
})
turnPortRange
));
# this ejabberd config uses builtin STUN/TURN server, so hack to ensure no other implementation fights for ports
services.coturn.enable = false;
# provide access to certs
# TODO: this should just be `acme`. then we also add nginx to the `acme` group.
# why is /var/lib/acme/* owned by `nginx` group??
users.users.ejabberd.extraGroups = [ "nginx" ];
security.acme.certs."uninsane.org".extraDomainNames = [
"xmpp.uninsane.org"
"muc.xmpp.uninsane.org"
"pubsub.xmpp.uninsane.org"
"upload.xmpp.uninsane.org"
"vjid.xmpp.uninsane.org"
];
# exists so the XMPP server's cert can obtain altNames for all its resources
services.nginx.virtualHosts."xmpp.uninsane.org" = {
useACMEHost = "uninsane.org";
};
services.nginx.virtualHosts."muc.xmpp.uninsane.org" = {
useACMEHost = "uninsane.org";
};
services.nginx.virtualHosts."pubsub.xmpp.uninsane.org" = {
useACMEHost = "uninsane.org";
};
services.nginx.virtualHosts."upload.xmpp.uninsane.org" = {
useACMEHost = "uninsane.org";
};
services.nginx.virtualHosts."vjid.xmpp.uninsane.org" = {
useACMEHost = "uninsane.org";
};
sane.dns.zones."uninsane.org".inet = {
# XXX: SRV records have to point to something with a A/AAAA record; no CNAMEs
A."xmpp" = "%ANATIVE%";
CNAME."muc.xmpp" = "xmpp";
CNAME."pubsub.xmpp" = "xmpp";
CNAME."upload.xmpp" = "xmpp";
CNAME."vjid.xmpp" = "xmpp";
# _Service._Proto.Name TTL Class SRV Priority Weight Port Target
# - <https://xmpp.org/extensions/xep-0368.html>
# something's requesting the SRV records for muc.xmpp, so let's include it
# nothing seems to request XMPP SRVs for the other records (except @)
# lower numerical priority field tells clients to prefer this method
SRV."_xmpps-client._tcp.muc.xmpp" = "3 50 5223 xmpp";
SRV."_xmpps-server._tcp.muc.xmpp" = "3 50 5270 xmpp";
SRV."_xmpp-client._tcp.muc.xmpp" = "5 50 5222 xmpp";
SRV."_xmpp-server._tcp.muc.xmpp" = "5 50 5269 xmpp";
SRV."_xmpps-client._tcp" = "3 50 5223 xmpp";
SRV."_xmpps-server._tcp" = "3 50 5270 xmpp";
SRV."_xmpp-client._tcp" = "5 50 5222 xmpp";
SRV."_xmpp-server._tcp" = "5 50 5269 xmpp";
SRV."_stun._udp" = "5 50 3478 xmpp";
SRV."_stun._tcp" = "5 50 3478 xmpp";
SRV."_stuns._tcp" = "5 50 5349 xmpp";
SRV."_turn._udp" = "5 50 3478 xmpp";
SRV."_turn._tcp" = "5 50 3478 xmpp";
SRV."_turns._tcp" = "5 50 5349 xmpp";
};
# TODO: allocate UIDs/GIDs ?
services.ejabberd.enable = true;
services.ejabberd.configFile = "/var/lib/ejabberd/ejabberd.yaml";
systemd.services.ejabberd.preStart = let
config-in = pkgs.writeText "ejabberd.yaml.in" (lib.generators.toYAML {} {
hosts = [ "uninsane.org" ];
# none | emergency | alert | critical | error | warning | notice | info | debug
loglevel = "debug";
acme.auto = false;
certfiles = [ "/var/lib/acme/uninsane.org/full.pem" ];
# ca_file = "${pkgs.cacert.unbundled}/etc/ssl/certs/";
# ca_file = "${pkgs.cacert}/etc/ssl/certs/ca-bundle.crt";
pam_userinfotype = "jid";
acl = {
admin.user = [ "colin@uninsane.org" ];
local.user_regexp = "";
loopback.ip = [ "127.0.0.0/8" "::1/128" ];
};
access_rules = {
local.allow = "local";
c2s_access.allow = "all";
announce.allow = "admin";
configure.allow = "admin";
muc_create.allow = "local";
pubsub_createnode_access.allow = "all";
trusted_network.allow = "loopback";
};
# docs: <https://docs.ejabberd.im/admin/configuration/basic/#shaper-rules>
shaper_rules = {
# setting this to above 1 may break outgoing messages
# - maybe some servers rate limit? or just don't understand simultaneous connections?
max_s2s_connections = 1;
max_user_sessions = 10;
max_user_offline_messages = 5000;
c2s_shaper.fast = "all";
s2s_shaper.med = "all";
};
# docs: <https://docs.ejabberd.im/admin/configuration/basic/#shapers>
# this limits the bytes/sec.
# for example, burst: 3_000_000 and rate: 100_000 means:
# - each client has a BW budget that accumulates 100kB/sec and is capped at 3 MB
shaper.fast = 1000000;
shaper.med = 500000;
# shaper.fast.rate = 1000000;
# shaper.fast.burst_size = 10000000;
# shaper.med.rate = 500000;
# shaper.med.burst_size = 5000000;
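# worked example, using the figures from the comment above (rate = 100_000, burst = 3_000_000):
# an idle client accumulates up to ~3 MB of budget, can burst that at wire speed, then settles to ~100 kB/s.
# the plain `shaper.fast = 1000000` form used here sets only the rate; burst handling then falls back to
# ejabberd's default (assumption -- not verified against the ejabberd docs).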
# see: <https://docs.ejabberd.im/admin/configuration/listen/>
# s2s_use_starttls = true;
s2s_use_starttls = "optional";
# lessens 504: remote-server-timeout errors
# see: <https://github.com/processone/ejabberd/issues/3105#issuecomment-562182967>
negotiation_timeout = 60;
listen = [
{
port = 5222;
module = "ejabberd_c2s";
shaper = "c2s_shaper";
starttls = true;
access = "c2s_access";
}
{
port = 5223;
module = "ejabberd_c2s";
shaper = "c2s_shaper";
tls = true;
access = "c2s_access";
}
{
port = 5269;
module = "ejabberd_s2s_in";
shaper = "s2s_shaper";
}
{
port = 5270;
module = "ejabberd_s2s_in";
shaper = "s2s_shaper";
tls = true;
}
{
port = 5443;
module = "ejabberd_http";
tls = true;
request_handlers = {
"/admin" = "ejabberd_web_admin"; # TODO: ensure this actually works
"/api" = "mod_http_api"; # ejabberd API endpoint (to control server)
"/bosh" = "mod_bosh";
"/upload" = "mod_http_upload";
"/ws" = "ejabberd_http_ws";
# "/.well-known/host-meta" = "mod_host_meta";
# "/.well-known/host-meta.json" = "mod_host_meta";
};
}
{
# STUN+TURN TCP
# note that the full port range should be forwarded ("not NAT'd")
# `use_turn=true` enables both TURN *and* STUN
port = 3478;
module = "ejabberd_stun";
transport = "tcp";
use_turn = true;
turn_min_port = turnPortLow;
turn_max_port = turnPortHigh;
turn_ipv4_address = "%ANATIVE%";
}
{
# STUN+TURN UDP
port = 3478;
module = "ejabberd_stun";
transport = "udp";
use_turn = true;
turn_min_port = turnPortLow;
turn_max_port = turnPortHigh;
turn_ipv4_address = "%ANATIVE%";
}
{
# STUN+TURN TLS over TCP
port = 5349;
module = "ejabberd_stun";
transport = "tcp";
tls = true;
certfile = "/var/lib/acme/uninsane.org/full.pem";
use_turn = true;
turn_min_port = turnPortLow;
turn_max_port = turnPortHigh;
turn_ipv4_address = "%ANATIVE%";
}
];
# TODO: enable mod_fail2ban
# TODO(low): look into mod_http_fileserver for serving macros?
modules = {
# mod_adhoc = {};
# mod_announce = {
# access = "admin";
# };
# allows users to set avatars in vCard
# - <https://docs.ejabberd.im/admin/configuration/modules/#mod-avatar>
mod_avatar = {};
mod_caps = {}; # for mod_pubsub
mod_carboncopy = {}; # allows multiple clients to receive a user's message
# queues messages when recipient is offline, including PEP and presence messages.
# compliance test suggests this be enabled
mod_client_state = {};
# mod_conversejs: TODO: enable once on 21.12
# allows clients like Dino to discover where to upload files
mod_disco.server_info = [
{
modules = "all";
name = "abuse-addresses";
urls = [
"mailto:admin.xmpp@uninsane.org"
"xmpp:colin@uninsane.org"
];
}
{
modules = "all";
name = "admin-addresses";
urls = [
"mailto:admin.xmpp@uninsane.org"
"xmpp:colin@uninsane.org"
];
}
];
mod_http_upload = {
host = "upload.xmpp.uninsane.org";
hosts = [ "upload.xmpp.uninsane.org" ];
put_url = "https://@HOST@:5443/upload";
dir_mode = "0750";
file_mode = "0750";
rm_on_unregister = false;
};
# allow discoverability of BOSH and websocket endpoints
# TODO: enable once on ejabberd 22.05 (presently 21.04)
# mod_host_meta = {};
mod_jidprep = {}; # probably not needed: lets clients normalize jids
mod_last = {}; # allow other users to know when i was last online
mod_mam = {
# Mnesia is limited to 2GB, better to use an SQL backend
# For small servers SQLite is a good fit and is very easy
# to configure. Uncomment this when you have SQL configured:
# db_type: sql
assume_mam_usage = true;
default = "always";
};
mod_muc = {
access = [ "allow" ];
access_admin = { allow = "admin"; };
access_create = "muc_create";
access_persistent = "muc_create";
access_mam = [ "allow" ];
history_size = 100; # messages to show new participants
host = "muc.xmpp.uninsane.org";
hosts = [ "muc.xmpp.uninsane.org" ];
default_room_options = {
anonymous = false;
lang = "en";
persistent = true;
mam = true;
};
};
mod_muc_admin = {};
mod_offline = {
# store messages for a user when they're offline (TODO: understand multi-client workflow?)
access_max_user_messages = "max_user_offline_messages";
store_groupchat = true;
};
mod_ping = {};
mod_privacy = {}; # deprecated, but required for `ejabberctl export_piefxis`
mod_private = {}; # allow local clients to persist arbitrary data on my server
# push notifications to services integrated with e.g. Apple/Android.
# default is for a maximum amount of PII to be withheld, since these push notifs
# generally traverse 3rd party services. can opt to include message body, etc, though.
mod_push = {};
# i don't fully understand what this does, but it seems aimed at making push notifs more reliable.
mod_push_keepalive = {};
mod_roster = {
versioning = true;
};
# docs: <https://docs.ejabberd.im/admin/configuration/modules/#mod-s2s-dialback>
# s2s dialback to verify inbound messages
# unclear to what degree the XMPP network requires this
mod_s2s_dialback = {};
mod_shared_roster = {}; # creates groups for @all, @online, and anything manually administered?
mod_stream_mgmt = {
# resend undelivered messages if the origin client is offline
resend_on_timeout = "if_offline";
};
# fallback for when DNS-based STUN discovery is unsupported.
# - see: <https://xmpp.org/extensions/xep-0215.html>
# docs: <https://docs.ejabberd.im/admin/configuration/modules/#mod-stun-disco>
# people say to just keep this defaulted (i guess ejabberd knows to return its `host` option of uninsane.org?)
mod_stun_disco = {};
# docs: <https://docs.ejabberd.im/admin/configuration/modules/#mod-vcard>
mod_vcard = {
allow_return_all = true; # all users are discoverable (?)
host = "vjid.xmpp.uninsane.org";
hosts = [ "vjid.xmpp.uninsane.org" ];
search = true;
};
mod_vcard_xupdate = {}; # needed for avatars
# docs: <https://docs.ejabberd.im/admin/configuration/modules/#mod-pubsub>
mod_pubsub = {
#^ needed for avatars
access_createnode = "pubsub_createnode_access";
host = "pubsub.xmpp.uninsane.org";
hosts = [ "pubsub.xmpp.uninsane.org" ];
ignore_pep_from_offline = false;
last_item_cache = true;
plugins = [
"pep"
"flat"
];
force_node_config = {
# ensure client bookmarks are private
"storage:bookmarks:" = {
"access_model" = "whitelist";
};
"urn:xmpp:avatar:data" = {
"access_model" = "open";
};
"urn:xmpp:avatar:metadata" = {
"access_model" = "open";
};
};
};
mod_version = {};
};
});
sed = "${pkgs.gnused}/bin/sed";
in ''
ip=$(cat '${config.sane.services.dyn-dns.ipPath}')
# config is 444 (not 644), so we want to write out-of-place and then atomically move
# TODO: factor this out into `sane-woop` helper?
rm -f /var/lib/ejabberd/ejabberd.yaml.new
${sed} "s/%ANATIVE%/$ip/g" ${config-in} > /var/lib/ejabberd/ejabberd.yaml.new
mv /var/lib/ejabberd/ejabberd.yaml{.new,}
'';
sane.services.dyn-dns.restartOnChange = [ "ejabberd.service" ];
}

View File

@ -1,44 +0,0 @@
# nix configs to reference:
# - <https://gitlab.com/simple-nixos-mailserver/nixos-mailserver>
# - <https://github.com/nix-community/nur-combined/-/tree/master/repos/eh5/machines/srv-m/mail-rspamd.nix>
# - postfix / dovecot / rspamd / stalwart-jmap / sogo
#
# rspamd:
# - nixos: <https://nixos.wiki/wiki/Rspamd>
# - guide: <https://rspamd.com/doc/quickstart.html>
# - non-nixos example: <https://dataswamp.org/~solene/2021-07-13-smtpd-rspamd.html>
#
#
# my rough understanding of the pieces:
# - postfix handles SMTP protocol with the rest of the world.
# - dovecot implements IMAP protocol.
# - client auth (i.e. validate that user@uninsane.org is who they claim)
# - "folders" (INBOX, JUNK) are internal to dovecot?
# or where do folders live, on-disk?
#
# - non-local clients (i.e. me) interact with BOTH postfix and dovecot, but primarily dovecot:
# - mail reading is done via IMAP (so, dovecot)
# - mail sending is done via SMTP/submission port (so, postfix)
# - but postfix delegates authorization of that outgoing mail to dovecot, on the server side
#
# - local clients (i.e. sendmail) interact only with postfix
#
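# concretely, as wired up below in dovecot.nix and postfix.nix: dovecot exposes an auth socket via
# `service auth { unix_listener auth { ... } }`, and postfix's submission config points at it with
# `smtpd_sasl_type = dovecot` / `smtpd_sasl_path = /run/dovecot2/auth`.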
# debugging: general connectivity issues
# - test that inbound port 25 is unblocked:
# - `curl https://canyouseeme.org/ --data 'port=25&IP=185.157.162.178' | grep 'see your service'`
# - and retry with port 465, 587
# - i think this API requires the queried IP match the source IP
# - if necessary, `systemctl stop postfix` and `sudo nc -l 185.157.162.178 25`, then try https://canyouseeme.org
{ ... }:
{
imports = [
./dovecot.nix
./postfix.nix
];
#### SPAM FILTERING
# services.rspamd.enable = true;
# services.rspamd.postfix.enable = true;
}

View File

@ -1,145 +0,0 @@
# dovecot config options: <https://doc.dovecot.org/configuration_manual/>
#
# sieve docs:
# - sieve language examples: <https://doc.dovecot.org/configuration_manual/sieve/examples/>
# - sieve protocol/language: <https://proton.me/support/sieve-advanced-custom-filters>
{ config, lib, pkgs, ... }:
{
sane.ports.ports."143" = {
protocol = [ "tcp" ];
visibleTo.doof = true;
visibleTo.lan = true;
description = "colin-imap-imap.uninsane.org";
};
sane.ports.ports."993" = {
protocol = [ "tcp" ];
visibleTo.doof = true;
visibleTo.lan = true;
description = "colin-imaps-imap.uninsane.org";
};
# exists only to manage certs for dovecot
services.nginx.virtualHosts."imap.uninsane.org" = {
enableACME = true;
};
sane.dns.zones."uninsane.org".inet = {
CNAME."imap" = "native";
};
sops.secrets."dovecot_passwd" = {
owner = config.users.users.dovecot2.name;
# TODO: debug why mail can't be sent without this being world-readable
mode = "0444";
};
# inspired by https://gitlab.com/simple-nixos-mailserver/nixos-mailserver/
services.dovecot2.enable = true;
# services.dovecot2.enableLmtp = true;
services.dovecot2.sslServerCert = "/var/lib/acme/imap.uninsane.org/fullchain.pem";
services.dovecot2.sslServerKey = "/var/lib/acme/imap.uninsane.org/key.pem";
services.dovecot2.enablePAM = false;
# sieve scripts require me to set a user for... idk why?
services.dovecot2.mailUser = "colin";
services.dovecot2.mailGroup = "users";
users.users.colin.isSystemUser = lib.mkForce false;
services.dovecot2.extraConfig =
let
passwdFile = config.sops.secrets.dovecot_passwd.path;
in
''
passdb {
driver = passwd-file
args = ${passwdFile}
}
userdb {
driver = passwd-file
args = ${passwdFile}
}
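# (for reference: each line of the passwd-file is roughly "user:{SCHEME}password:uid:gid::home";
#  the exact field layout is an assumption here -- see dovecot's passwd-file auth docs)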
# allow postfix to query our auth db
service auth {
unix_listener auth {
mode = 0660
user = postfix
group = postfix
}
}
auth_mechanisms = plain login
# accept incoming messaging from postfix
# service lmtp {
# unix_listener dovecot-lmtp {
# mode = 0600
# user = postfix
# group = postfix
# }
# }
# plugin {
# sieve_plugins = sieve_imapsieve
# }
mail_debug = yes
auth_debug = yes
# verbose_ssl = yes
'';
services.dovecot2.mailboxes = {
# special-purpose mailboxes: "All" "Archive" "Drafts" "Flagged" "Junk" "Sent" "Trash"
# RFC6154 describes these special mailboxes: https://www.ietf.org/rfc/rfc6154.html
# how these boxes are treated is 100% up to the client and server to decide.
# client behavior:
# iOS
# - Drafts: ?
# - Sent: works
# - Trash: works
# - Junk: works ("mark" -> "move to Junk")
# aerc
# - Drafts: works
# - Sent: works
# - Trash: no; deleted messages are actually deleted
# use `:move trash` instead
# - Junk: ?
# Sent mailbox: all sent messages are copied to it. unclear if this happens server-side or client-side.
Drafts = { specialUse = "Drafts"; auto = "create"; };
Sent = { specialUse = "Sent"; auto = "create"; };
Trash = { specialUse = "Trash"; auto = "create"; };
Junk = { specialUse = "Junk"; auto = "create"; };
};
services.dovecot2.mailPlugins = {
perProtocol = {
# imap.enable = [
# "imap_sieve"
# ];
lda.enable = [
"sieve"
];
# lmtp.enable = [
# "sieve"
# ];
};
};
services.dovecot2.modules = [
pkgs.dovecot_pigeonhole # enables sieve execution (?)
];
services.dovecot2.sieve = {
extensions = [ "fileinto" ];
# if any messages fail to pass (or lack) DKIM, move them to Junk
# XXX: the key name ("after") is used only to order sieve execution
scripts.after = builtins.toFile "ensuredkim.sieve" ''
require "fileinto";
if not header :contains "Authentication-Results" "dkim=pass" {
fileinto "Junk";
stop;
}
'';
};
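# a hedged sketch of a second rule (not enabled; the "Lists" mailbox is hypothetical): filing
# mailing-list traffic into its own folder would be plain sieve, e.g.
#   require "fileinto";
#   if exists "List-Id" {
#     fileinto "Lists";
#   }
# wired in via another `scripts.<name>` entry, where the name again only affects ordering.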
systemd.services.dovecot2.serviceConfig.RestartSec = lib.mkForce "15s"; # nixos defaults this to 1s
}

View File

@ -1,197 +0,0 @@
# postfix config options: <https://www.postfix.org/postconf.5.html>
{ config, lib, pkgs, ... }:
let
submissionOptions = {
smtpd_tls_security_level = "encrypt";
smtpd_sasl_auth_enable = "yes";
smtpd_sasl_type = "dovecot";
smtpd_sasl_path = "/run/dovecot2/auth";
smtpd_sasl_security_options = "noanonymous";
smtpd_sasl_local_domain = "uninsane.org";
smtpd_client_restrictions = "permit_sasl_authenticated,reject";
# reuse the virtual map so that sender mapping matches recipient mapping
smtpd_sender_login_maps = "hash:/var/lib/postfix/conf/virtual";
smtpd_sender_restrictions = "reject_sender_login_mismatch";
smtpd_recipient_restrictions = "reject_non_fqdn_recipient,permit_sasl_authenticated,reject";
};
in
{
sane.persist.sys.byStore.plaintext = [
# TODO: mode? could be more granular
{ user = "opendkim"; group = "opendkim"; path = "/var/lib/opendkim"; method = "bind"; }
{ user = "root"; group = "root"; path = "/var/lib/postfix"; method = "bind"; }
{ user = "root"; group = "root"; path = "/var/spool/mail"; method = "bind"; }
# *probably* don't need these dirs:
# "/var/lib/dhparams" # https://github.com/NixOS/nixpkgs/blob/master/nixos/modules/security/dhparams.nix
# "/var/lib/dovecot"
];
# XXX(2023/10/20): opening these ports in the firewall has the OPPOSITE effect as intended.
# these ports are only routable so long as they AREN'T opened.
# probably some cursed interaction with network namespaces introduced after 2023/10/10.
# sane.ports.ports."25" = {
# protocol = [ "tcp" ];
# # XXX visibleTo.lan effectively means "open firewall, but don't configure any NAT/forwarding"
# visibleTo.lan = true;
# description = "colin-smtp-mx.uninsane.org";
# };
# sane.ports.ports."465" = {
# protocol = [ "tcp" ];
# visibleTo.lan = true;
# description = "colin-smtps-mx.uninsane.org";
# };
# sane.ports.ports."587" = {
# protocol = [ "tcp" ];
# visibleTo.lan = true;
# description = "colin-smtps-submission-mx.uninsane.org";
# };
# exists only to manage certs for Postfix
services.nginx.virtualHosts."mx.uninsane.org" = {
enableACME = true;
};
sane.dns.zones."uninsane.org".inet = {
MX."@" = "10 mx.uninsane.org.";
A."mx" = "%AOVPNS%"; #< XXX: RFC's specify that the MX record CANNOT BE A CNAME. TODO: use "%AOVPNS%?
# Sender Policy Framework:
# +mx => mail passes if it originated from the MX
# +a => mail passes if it originated from the A address of this domain
# +ip4:.. => mail passes if it originated from this IP
# -all => mail fails if none of these conditions were met
TXT."@" = "v=spf1 a mx -all";
# DKIM public key:
TXT."mx._domainkey" =
"v=DKIM1;k=rsa;p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCkSyMufc2KrRx3j17e/LyB+3eYSBRuEFT8PUka8EDX04QzCwDPdkwgnj3GNDvnB5Ktb05Cf2SJ/S1OLqNsINxJRWtkVfZd/C339KNh9wrukMKRKNELL9HLUw0bczOI4gKKFqyrRE9qm+4csCMAR79Te9FCjGV/jVnrkLdPT0GtFwIDAQAB"
;
# DMARC fields <https://datatracker.ietf.org/doc/html/rfc7489>:
# p=none|quarantine|reject: what to do with failures
# sp = p but for subdomains
# rua = where to send aggregate reports
# ruf = where to send individual failure reports
# fo=0|1|d|s controls WHEN to send failure reports
# (1=on bad alignment; d=on DKIM failure; s=on SPF failure);
# Additionally:
# adkim=r|s (is DKIM relaxed [default] or strict)
# aspf=r|s (is SPF relaxed [default] or strict)
# pct = sampling ratio for punishing failures (default 100 for 100%)
# rf = report format
# ri = report interval
TXT."_dmarc" =
"v=DMARC1;p=quarantine;sp=reject;rua=mailto:admin+mail@uninsane.org;ruf=mailto:admin+mail@uninsane.org;fo=1:d:s"
;
};
services.postfix.enable = true;
services.postfix.hostname = "mx.uninsane.org";
services.postfix.origin = "uninsane.org";
services.postfix.destination = [ "localhost" "uninsane.org" ];
services.postfix.sslCert = "/var/lib/acme/mx.uninsane.org/fullchain.pem";
services.postfix.sslKey = "/var/lib/acme/mx.uninsane.org/key.pem";
services.postfix.virtual = ''
notify.matrix@uninsane.org matrix-synapse
@uninsane.org colin
'';
services.postfix.config = {
# smtpd_milters = local:/run/opendkim/opendkim.sock
# milter docs: http://www.postfix.org/MILTER_README.html
# mail filters for receiving email and from authorized SMTP clients (i.e. via submission)
# smtpd_milters = inet:185.157.162.190:8891
# opendkim.sock will add an Authentication-Results header, with a `dkim=pass|fail|...` value, to received messages
smtpd_milters = "unix:/run/opendkim/opendkim.sock";
# mail filters for sendmail
non_smtpd_milters = "$smtpd_milters";
# what to do when a milter exits unexpectedly:
milter_default_action = "accept";
inet_protocols = "ipv4";
smtp_tls_security_level = "may";
# hand received mail over to dovecot so that it can run sieves & such
mailbox_command = ''${pkgs.dovecot}/libexec/dovecot/dovecot-lda -f "$SENDER" -a "$RECIPIENT"'';
# hand received mail over to dovecot
# virtual_alias_maps = [
# "hash:/etc/postfix/virtual"
# ];
# mydestination = "";
# virtual_mailbox_domains = [ "localhost" "uninsane.org" ];
# # virtual_mailbox_maps = "hash:/etc/postfix/virtual";
# virtual_transport = "lmtp:unix:/run/dovecot2/dovecot-lmtp";
# anti-spam options: <https://www.postfix.org/SMTPD_ACCESS_README.html>
# reject_unknown_sender_domain: causes postfix to `dig <sender> MX` and make sure that exists.
# but may cause problems receiving mail from google & others who load-balance?
# - <https://unix.stackexchange.com/questions/592131/how-to-reject-email-from-unknown-domains-with-postfix-on-centos>
# smtpd_sender_restrictions = reject_unknown_sender_domain
};
services.postfix.enableSubmission = true;
services.postfix.submissionOptions = submissionOptions;
services.postfix.enableSubmissions = true;
services.postfix.submissionsOptions = submissionOptions;
systemd.services.postfix.after = [ "wireguard-wg-ovpns.service" ];
systemd.services.postfix.partOf = [ "wireguard-wg-ovpns.service" ];
systemd.services.postfix.serviceConfig = {
# run this behind the OVPN static VPN
NetworkNamespacePath = "/run/netns/ovpns";
};
#### OPENDKIM
services.opendkim.enable = true;
# services.opendkim.domains = "csl:uninsane.org";
services.opendkim.domains = "uninsane.org";
# we use a custom (inet) socket, because the default perms
# of the unix socket don't allow postfix to connect.
# this sits on the machine-local 10.0.1 interface because it's the closest
# thing to a loopback interface shared by postfix and opendkim netns.
# services.opendkim.socket = "inet:8891@185.157.162.190";
# services.opendkim.socket = "local:/run/opendkim.sock";
# selectors can be used to disambiguate sender machines.
# keeping this the same as the hostname seems simplest
services.opendkim.selector = "mx";
systemd.services.opendkim.after = [ "wireguard-wg-ovpns.service" ];
systemd.services.opendkim.partOf = [ "wireguard-wg-ovpns.service" ];
systemd.services.opendkim.serviceConfig = {
# run this behind the OVPN static VPN
NetworkNamespacePath = "/run/netns/ovpns";
# /run/opendkim/opendkim.sock needs to be rw by postfix
UMask = lib.mkForce "0011";
};
#### OUTGOING MESSAGE REWRITING:
services.postfix.enableHeaderChecks = true;
services.postfix.headerChecks = [
# intercept gitea registration confirmations and manually screen them
{
# headerChecks are somehow ignorant of alias rules: have to redirect to a real user
action = "REDIRECT colin@uninsane.org";
pattern = "/^Subject: Please activate your account/";
}
# intercept Matrix registration confirmations
{
action = "REDIRECT colin@uninsane.org";
pattern = "/^Subject:.*Validate your email/";
}
# XXX postfix only supports performing ONE action per header.
# {
# action = "REPLACE Subject: git application: Please activate your account";
# pattern = "/^Subject:.*activate your account/";
# }
];
}

View File

@ -1,58 +0,0 @@
{ config, ... }:
{
imports = [
./nfs.nix
./sftpgo
];
users.groups.export = {};
fileSystems."/var/export/media" = {
# everything in here could be considered publicly readable (depending on the viewer's legal jurisdiction)
device = "/var/media";
options = [ "rbind" ];
};
fileSystems."/var/export/pub" = {
device = "/var/www/sites/uninsane.org/share";
options = [ "rbind" ];
};
# fileSystems."/var/export/playground" = {
# device = config.fileSystems."/mnt/persist/ext".device;
# fsType = "btrfs";
# options = [
# "subvol=export-playground"
# "compress=zstd"
# "defaults"
# ];
# };
# N.B.: the backing directory should be manually created here **as a btrfs subvolume** and with a quota.
# - `sudo btrfs subvolume create /mnt/persist/ext/persist/var/export/playground`
# - `sudo btrfs quota enable /mnt/persist/ext/persist/var/export/playground`
# - `sudo btrfs quota rescan -sw /mnt/persist/ext/persist/var/export/playground`
# to adjust the limits (which apply at the block layer, i.e. post-compression):
# - `sudo btrfs qgroup limit 20G /mnt/persist/ext/persist/var/export/playground`
# to query the quota/status:
# - `sudo btrfs qgroup show -re /var/export/playground`
sane.persist.sys.byStore.ext = [
{ user = "root"; group = "export"; mode = "0775"; path = "/var/export/playground"; method = "bind"; }
];
sane.fs."/var/export/README.md" = {
wantedBy = [ "nfs.service" "sftpgo.service" ];
file.text = ''
- media/ read-only: Videos, Music, Books, etc
- playground/ read-write: use it to share files with other users of this server, inaccessible from the www
- pub/ read-only: content made to be shared with the www
'';
};
sane.fs."/var/export/playground/README.md" = {
wantedBy = [ "nfs.service" "sftpgo.service" ];
file.text = ''
this directory is intentionally read+write by anyone with access (i.e. on the LAN).
- share files
- write poetry
- be a friendly troll
'';
};
}

View File

@ -1,135 +0,0 @@
# docs:
# - <https://nixos.wiki/wiki/NFS>
# - <https://wiki.gentoo.org/wiki/Nfs-utils>
# system files:
# - /etc/exports
# system services:
# - nfs-server.service
# - nfs-idmapd.service
# - nfs-mountd.service
# - nfsdcld.service
# - rpc-statd.service
# - rpcbind.service
#
# TODO: force files to be 755, or 750.
# - could maybe be done with some mount option?
{ config, lib, ... }:
lib.mkIf false #< TODO: remove nfs altogether! it's not exactly the most secure
{
services.nfs.server.enable = true;
# see which ports NFS uses with:
# - `rpcinfo -p`
sane.ports.ports."111" = {
protocol = [ "tcp" "udp" ];
visibleTo.lan = true;
description = "NFS server portmapper";
};
sane.ports.ports."2049" = {
protocol = [ "tcp" "udp" ];
visibleTo.lan = true;
description = "NFS server";
};
sane.ports.ports."4000" = {
protocol = [ "udp" ];
visibleTo.lan = true;
description = "NFS server status daemon";
};
sane.ports.ports."4001" = {
protocol = [ "tcp" "udp" ];
visibleTo.lan = true;
description = "NFS server lock daemon";
};
sane.ports.ports."4002" = {
protocol = [ "tcp" "udp" ];
visibleTo.lan = true;
description = "NFS server mount daemon";
};
# NFS4 allows these to float, but NFS3 mandates specific ports, so fix them for backwards compat.
services.nfs.server.lockdPort = 4001;
services.nfs.server.mountdPort = 4002;
services.nfs.server.statdPort = 4000;
services.nfs.extraConfig = ''
[nfsd]
# XXX: NFS over UDP REQUIRES SPECIAL CONFIG TO AVOID DATA LOSS.
# see `man 5 nfs`: "Using NFS over UDP on high-speed links".
# it's actually just a general property of UDP over IPv4 (IPv6 fixes it).
# both the client and the server should configure a shorter-than-default IPv4 fragment reassembly window to mitigate.
# OTOH, tunneling NFS over Wireguard also bypasses this weakness, because a mis-assembled packet would not have a valid signature.
udp=y
[exports]
# all export paths are relative to rootdir.
# for NFSv4, the export with fsid=0 behaves as `/` publicly,
# but NFSv3 implements no such feature.
# using `rootdir` instead of relying on `fsid=0` allows consistent export paths regardless of NFS proto version
rootdir=/var/export
'';
# format:
# fspoint visibility(options)
# options:
# - see: <https://wiki.gentoo.org/wiki/Nfs-utils#Exports>
# - see [man 5 exports](https://linux.die.net/man/5/exports)
# - insecure: require clients use src port > 1024
# - rw, ro (default)
# - async, sync (default)
# - no_subtree_check (default), subtree_check: verify not just that files requested by the client live
# in the expected fs, but also that they live under whatever subdirectory of that fs is exported.
# - no_root_squash, root_squash (default): map requests from uid 0 to user `nobody`.
# - crossmnt: reveal filesystems that are mounted under this endpoint
# - fsid: must be zero for the root export
# - fsid=root is alias for fsid=0
# - mountpoint[=/path]: only export the directory if it's a mountpoint. used to avoid exporting failed mounts.
# - all_squash: rewrite all client requests such that they come from anonuid/anongid
# - any files a user creates are owned by local anonuid/anongid.
# - users can read any local file which anonuid/anongid would be able to read.
# - users can't chown to/away from anonuid/anongid.
# - users can chmod files they own, to anything (making them unreadable to non-`nfsuser` export users, like FTP).
# - `stat` remains unchanged, returning the real UIDs/GIDs to the client.
# - thus programs which check `uid` or `gid` before trying an operation may incorrectly conclude they can't perform some op.
#
# 10.0.0.0/8 to export both to LAN (readonly, unencrypted) and wg vpn (read-write, encrypted)
services.nfs.server.exports =
let
fmtExport = { export, baseOpts, extraLanOpts ? [], extraVpnOpts ? [] }:
let
always = [ "subtree_check" ];
lanOpts = always ++ baseOpts ++ extraLanOpts;
vpnOpts = always ++ baseOpts ++ extraVpnOpts;
in "${export} 10.78.79.0/22(${lib.concatStringsSep "," lanOpts}) 10.0.10.0/24(${lib.concatStringsSep "," vpnOpts})";
in lib.concatStringsSep "\n" [
(fmtExport {
export = "/";
baseOpts = [ "crossmnt" "fsid=root" ];
extraLanOpts = [ "ro" ];
extraVpnOpts = [ "rw" "no_root_squash" ];
})
(fmtExport {
# provide /media as an explicit export. NFSv4 can transparently mount a subdir of an export, but NFSv3 can only mount paths which are exports.
export = "/media";
baseOpts = [ "crossmnt" ]; # TODO: is crossmnt needed here?
extraLanOpts = [ "ro" ];
extraVpnOpts = [ "rw" "no_root_squash" ];
})
(fmtExport {
export = "/playground";
baseOpts = [
"mountpoint"
"all_squash"
"rw"
"anonuid=${builtins.toString config.users.users.nfsuser.uid}"
"anongid=${builtins.toString config.users.groups.export.gid}"
];
})
];
users.users.nfsuser = {
description = "virtual user for anonymous NFS operations";
group = "export";
isSystemUser = true;
};
}

View File

@ -1,164 +0,0 @@
# docs:
# - <https://github.com/drakkan/sftpgo>
# - config options: <https://github.com/drakkan/sftpgo/blob/main/docs/full-configuration.md>
# - config defaults: <https://github.com/drakkan/sftpgo/blob/main/sftpgo.json>
# - nixos options: <repo:nixos/nixpkgs:nixos/modules/services/web-apps/sftpgo.nix>
# - nixos example: <repo:nixos/nixpkgs:nixos/tests/sftpgo.nix>
#
# sftpgo is a FTP server that also supports WebDAV, SFTP, and web clients.
{ config, lib, pkgs, sane-lib, ... }:
let
external_auth_hook = pkgs.static-nix-shell.mkPython3Bin {
pname = "external_auth_hook";
srcRoot = ./.;
pyPkgs = [ "passlib" ];
};
# Client initiates a FTP "control connection" on port 21.
# - this handles the client -> server commands, and the server -> client status, but not the actual data
# - file data, directory listings, etc need to be transferred on an ephemeral "data port".
# - 50000-50100 is a common port range for this.
# 50000 is used by soulseek.
passiveStart = 50050;
passiveEnd = 50070;
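# (inclusive range 50050..50070 = 21 data ports, i.e. roughly 21 simultaneous passive-mode transfers)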
in
{
sane.ports.ports = {
"21" = {
protocol = [ "tcp" ];
visibleTo.lan = true;
description = "colin-FTP server";
};
"990" = {
protocol = [ "tcp" ];
visibleTo.doof = true;
visibleTo.lan = true;
description = "colin-FTPS server";
};
} // (sane-lib.mapToAttrs
(port: {
name = builtins.toString port;
value = {
protocol = [ "tcp" ];
visibleTo.doof = true;
visibleTo.lan = true;
description = "colin-FTP server data port range";
};
})
(lib.range passiveStart passiveEnd)
);
# use nginx/acme to produce a cert for FTPS
services.nginx.virtualHosts."ftp.uninsane.org" = {
addSSL = true;
enableACME = true;
};
sane.dns.zones."uninsane.org".inet.CNAME."ftp" = "native";
services.sftpgo = {
enable = true;
group = "export";
package = pkgs.sftpgo.overrideAttrs (upstream: {
patches = (upstream.patches or []) ++ [
# fix for compatibility with kodi:
# ftp LIST operation returns entries over-the-wire like:
# - dgrwxrwxr-x 1 ftp ftp 9 Apr 9 15:05 Videos
# however not all clients understand all mode bits (like that `g`, indicating the SGID/setgid bit).
# instead, only send mode bits which are well-understood.
# the full set of bits, from which i filter, is found here: <https://pkg.go.dev/io/fs#FileMode>
./safe_fileinfo.patch
];
});
settings = {
ftpd = {
bindings = [
{
# binding this means any wireguard client can connect
address = "10.0.10.5";
port = 21;
debug = true;
}
{
# binding this means any LAN client can connect (also WAN traffic forwarded from the gateway)
address = "10.78.79.51";
port = 21;
debug = true;
}
{
# binding this means any wireguard client can connect
address = "10.0.10.5";
port = 990;
debug = true;
tls_mode = 2; # 2 = "implicit FTPS": client negotiates TLS before any FTP command.
}
{
# binding this means any LAN client can connect (also WAN traffic forwarded from the gateway)
address = "10.78.79.51";
port = 990;
debug = true;
tls_mode = 2; # 2 = "implicit FTPS": client negotiates TLS before any FTP command.
}
{
# binding this means any doof client can connect (TLS only)
address = config.sane.netns.doof.hostVethIpv4;
port = 990;
debug = true;
tls_mode = 2; # 2 = "implicit FTPS": client negotiates TLS before any FTP command.
}
];
# active mode is susceptible to "bounce attacks", without much benefit over passive mode
disable_active_mode = true;
hash_support = true;
passive_port_range = {
start = passiveStart;
end = passiveEnd;
};
certificate_file = "/var/lib/acme/ftp.uninsane.org/full.pem";
certificate_key_file = "/var/lib/acme/ftp.uninsane.org/key.pem";
banner = ''
Welcome, friends, to Colin's FTP server! Also available via NFS on the same host, but LAN-only.
Read-only access (LAN clients see everything; WAN clients can only see /pub):
Username: "anonymous"
Password: "anonymous"
CONFIGURE YOUR CLIENT FOR "PASSIVE" MODE, e.g. `ftp --passive ftp.uninsane.org`.
Please let me know if anything's broken or not as it should be. Otherwise, browse and transfer freely :)
'';
};
data_provider = {
driver = "memory";
external_auth_hook = "${external_auth_hook}/bin/external_auth_hook";
# track_quota:
# - 0: disable quota tracking
# - 1: quota is updated on every upload/delete, even if user has no quota restriction
# - 2: quota is updated on every upload/delete, but only if user/folder has a quota restriction (default, i think)
# track_quota = 2;
};
};
};
users.users.sftpgo.extraGroups = [
"export"
"media"
"nginx" # to access certs
];
systemd.services.sftpgo = {
after = [ "network-online.target" ];
wants = [ "network-online.target" ];
serviceConfig = {
ReadWritePaths = [ "/var/export" ];
Restart = "always";
RestartSec = "20s";
UMask = lib.mkForce "0002";
};
};
}

View File

@ -1,171 +0,0 @@
#!/usr/bin/env nix-shell
#!nix-shell -i python3 -p "python3.withPackages (ps: [ ps.passlib ])"
# vim: set filetype=python :
#
# available environment variables:
# - SFTPGO_AUTHD_USERNAME
# - SFTPGO_AUTHD_USER
# - SFTPGO_AUTHD_IP
# - SFTPGO_AUTHD_PROTOCOL = { "DAV", "FTP", "HTTP", "SSH" }
# - SFTPGO_AUTHD_PASSWORD
# - SFTPGO_AUTHD_PUBLIC_KEY
# - SFTPGO_AUTHD_KEYBOARD_INTERACTIVE
# - SFTPGO_AUTHD_TLS_CERT
#
# user permissions:
# - see <repo:drakkan/sftpgo:internal/dataprovider/user.go>
# - "*" = grant all permissions
# - read-only perms:
# - "list" = list files and directories
# - "download"
# - rw perms:
# - "upload"
# - "overwrite" = allow uploads to replace existing files
# - "delete" = delete files and directories
# - "delete_files"
# - "delete_dirs"
# - "rename" = rename files and directories
# - "rename_files"
# - "rename_dirs"
# - "create_dirs"
# - "create_symlinks"
# - "chmod"
# - "chown"
# - "chtimes" = change atime/mtime (access and modification times)
#
# home_dir:
# - it seems (empirically) that a user can't cd above their home directory.
# though i don't have a reference for that in the docs.
import json
import os
import passlib.hosts
from hmac import compare_digest
authFail = dict(username="")
PERM_DENY = []
PERM_LIST = [ "list" ]
PERM_RO = [ "list", "download" ]
PERM_RW = [
# read-only:
"list",
"download",
# write:
"upload",
"overwrite",
"delete",
"rename",
"create_dirs",
"create_symlinks",
# intentionally omitted:
# "chmod",
# "chown",
# "chtimes",
]
TRUSTED_CREDS = [
# /etc/shadow style creds.
# mkpasswd -m sha-512
# $<method>$<salt>$<hash>
"$6$Zq3c2u4ghUH4S6EP$pOuRt13sEKfX31OqPbbd1LuhS21C9MICMc94iRdTAgdAcJ9h95gQH/6Jf6Ie4Obb0oxQtojRJ1Pd/9QHOlFMW." #< m. rocket boy
]
def mkAuthOk(username: str, permissions: dict[str, list[str]]) -> dict:
return dict(
status = 1,
username = username,
expiration_date = 0,
home_dir = "/var/export",
# uid/gid 0 means to inherit sftpgo uid.
# - i.e. users can't read files which Linux user `sftpgo` can't read
# - uploaded files belong to Linux user `sftpgo`
# other uid/gid values aren't possible for localfs backend, unless i let sftpgo use `sudo`.
uid = 0,
gid = 0,
# uid = 65534,
# gid = 65534,
max_sessions = 0,
# quota_*: 0 means to not use SFTP's quota system
quota_size = 0,
quota_files = 0,
permissions = permissions,
upload_bandwidth = 0,
download_bandwidth = 0,
filters = dict(
allowed_ip = [],
denied_ip = [],
),
public_keys = [],
# other fields:
# ? groups
# ? virtual_folders
)
def isLan(ip: str) -> bool:
return ip.startswith("10.78.76.") \
or ip.startswith("10.78.77.") \
or ip.startswith("10.78.78.") \
or ip.startswith("10.78.79.")
def isWireguard(ip: str) -> bool:
return ip.startswith("10.0.10.")
def isTrustedCred(password: str) -> bool:
for cred in TRUSTED_CREDS:
if passlib.hosts.linux_context.verify(password, cred):
return True
return False
def getAuthResponse(ip: str, username: str, password: str) -> dict:
"""
return a sftpgo auth response either denying the user or approving them
with a set of permissions.
"""
if isTrustedCred(password) and username != "colin":
# allow r/w access from those with a special token
return mkAuthOk(username, permissions = {
"/": PERM_RW,
"/playground": PERM_RW,
"/pub": PERM_RO,
})
if isWireguard(ip):
# allow any user from wireguard
return mkAuthOk(username, permissions = {
"/": PERM_RW,
"/playground": PERM_RW,
"/pub": PERM_RO,
})
if isLan(ip):
if username == "anonymous":
# allow anonymous users on the LAN
return mkAuthOk("anonymous", permissions = {
"/": PERM_RO,
"/playground": PERM_RW,
"/pub": PERM_RO,
})
if username == "anonymous":
# anonymous users from the www can have even more limited access.
# mostly because i need an easy way to test WAN connectivity :-)
return mkAuthOk("anonymous", permissions = {
# "/": PERM_DENY,
"/": PERM_LIST, #< REQUIRED, even for lftp to list a subdir
"/media": PERM_DENY,
"/playground": PERM_DENY,
"/pub": PERM_RO,
# "/README.md": PERM_RO, #< does not work
})
return authFail
def main():
ip = os.environ.get("SFTPGO_AUTHD_IP", "")
username = os.environ.get("SFTPGO_AUTHD_USERNAME", "")
password = os.environ.get("SFTPGO_AUTHD_PASSWORD", "")
resp = getAuthResponse(ip, username, password)
print(json.dumps(resp))
if __name__ == "__main__":
main()

View File

@ -1,32 +0,0 @@
diff --git a/internal/ftpd/handler.go b/internal/ftpd/handler.go
index 036c3977..33211261 100644
--- a/internal/ftpd/handler.go
+++ b/internal/ftpd/handler.go
@@ -169,7 +169,7 @@ func (c *Connection) Stat(name string) (os.FileInfo, error) {
}
return nil, err
}
- return fi, nil
+ return vfs.NewFileInfo(name, fi.IsDir(), fi.Size(), fi.ModTime(), false), nil
}
// Name returns the name of this connection
@@ -315,7 +315,17 @@ func (c *Connection) ReadDir(name string) (ftpserver.DirLister, error) {
}, nil
}
- return c.ListDir(name)
+ lister, err := c.ListDir(name)
+ if err != nil {
+ return nil, err
+ }
+ return &patternDirLister{
+ DirLister: lister,
+ pattern: "*",
+ lastCommand: c.clientContext.GetLastCommand(),
+ dirName: name,
+ connectionPath: c.clientContext.Path(),
+ }, nil
}
// GetHandle implements ClientDriverExtentionFileTransfer

View File

@ -1,63 +0,0 @@
# import feeds with e.g.
# ```console
# $ nix build '.#nixpkgs.freshrss'
# $ sudo -u freshrss -g freshrss FRESHRSS_DATA_PATH=/var/lib/freshrss ./result/cli/import-for-user.php --user admin --filename /home/colin/.config/newsflashFeeds.opml
# ```
#
# export feeds with
# ```console
# $ sudo -u freshrss -g freshrss FRESHRSS_DATA_PATH=/var/lib/freshrss ./result/cli/export-opml-for-user.php --user admin
# ```
{ config, lib, pkgs, sane-lib, ... }:
{
sops.secrets."freshrss_passwd" = {
owner = config.users.users.freshrss.name;
mode = "0400";
};
sane.persist.sys.byStore.plaintext = [
{ user = "freshrss"; group = "freshrss"; path = "/var/lib/freshrss"; method = "bind"; }
];
services.freshrss.enable = true;
services.freshrss.baseUrl = "https://rss.uninsane.org";
services.freshrss.virtualHost = "rss.uninsane.org";
services.freshrss.passwordFile = config.sops.secrets.freshrss_passwd.path;
systemd.services.freshrss-import-feeds =
let
feeds = sane-lib.feeds;
fresh = config.systemd.services.freshrss-config;
all-feeds = config.sane.feeds;
wanted-feeds = feeds.filterByFormat ["text" "image"] all-feeds;
opml = pkgs.writeText "sane-freshrss.opml" (feeds.feedsToOpml wanted-feeds);
in {
inherit (fresh) wantedBy environment;
serviceConfig = {
inherit (fresh.serviceConfig) Type User Group StateDirectory WorkingDirectory
# hardening options
CapabilityBoundingSet DeviceAllow LockPersonality NoNewPrivileges PrivateDevices PrivateTmp PrivateUsers ProcSubset ProtectClock ProtectControlGroups ProtectHome ProtectHostname ProtectKernelLogs ProtectKernelModules ProtectKernelTunables ProtectProc ProtectSystem RemoveIPC RestrictNamespaces RestrictRealtime RestrictSUIDSGID SystemCallArchitectures SystemCallFilter UMask;
};
description = "import sane RSS feed list";
after = [ "freshrss-config.service" ];
script = ''
# easiest way to preserve feeds: delete the user, recreate it, import feeds
${pkgs.freshrss}/cli/delete-user.php --user colin || true
${pkgs.freshrss}/cli/create-user.php --user colin --password "$(cat ${config.services.freshrss.passwordFile})" || true
${pkgs.freshrss}/cli/import-for-user.php --user colin --filename ${opml}
'';
};
# the default ("*:0/5") is to run every 5 minutes.
# `systemctl list-timers` to show
systemd.services.freshrss-updater.startAt = lib.mkForce "*:3/30";
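# ("*:3/30" = minutes 3 and 33 of every hour -- still every 30 minutes, just offset from the top of the hour)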
services.nginx.virtualHosts."rss.uninsane.org" = {
addSSL = true;
enableACME = true;
# inherit kTLS;
# the routing is handled by services.freshrss.virtualHost
};
sane.dns.zones."uninsane.org".inet.CNAME."rss" = "native";
}

View File

@ -1,139 +0,0 @@
# config options: <https://docs.gitea.io/en-us/administration/config-cheat-sheet/>
{ config, pkgs, lib, ... }:
{
sane.persist.sys.byStore.plaintext = [
# TODO: mode? could be more granular
{ user = "git"; group = "gitea"; path = "/var/lib/gitea"; method = "bind"; }
];
services.gitea.enable = true;
services.gitea.user = "git"; # default is 'gitea'
services.gitea.database.type = "postgres";
services.gitea.database.user = "git";
services.gitea.appName = "Perfectly Sane Git";
# services.gitea.disableRegistration = true;
services.gitea.database.createDatabase = false; #< silence warning which wants db user and name to be equal
# TODO: remove this after merge: <https://github.com/NixOS/nixpkgs/pull/268849>
services.gitea.database.socket = "/run/postgresql"; #< would have been set if createDatabase = true
# gitea doesn't create the git user
users.users.git = {
description = "Gitea Service";
home = "/var/lib/gitea";
useDefaultShell = true;
group = "gitea";
isSystemUser = true;
# sendmail access (not 100% sure if this is necessary)
extraGroups = [ "postdrop" ];
};
services.gitea.settings = {
# options: "Trace", "Debug", "Info", "Warn", "Error", "Critical"
log.LEVEL = "Warn";
server = {
# options: "home", "explore", "organizations", "login" or URL fragment (or full URL)
LANDING_PAGE = "explore";
DOMAIN = "git.uninsane.org";
ROOT_URL = "https://git.uninsane.org/";
};
service = {
# timeout for email approval. 5760 = 4 days
ACTIVE_CODE_LIVE_MINUTES = 5760;
# REGISTER_EMAIL_CONFIRM = false;
# REGISTER_MANUAL_CONFIRM = true;
REGISTER_EMAIL_CONFIRM = true;
# not sure what this notifies on?
ENABLE_NOTIFY_MAIL = true;
# defaults to image-based captcha.
# also supports recaptcha (with custom URLs) or hCaptcha.
ENABLE_CAPTCHA = true;
NOREPLY_ADDRESS = "noreply.anonymous.git@uninsane.org";
};
session = {
COOKIE_SECURE = true;
# keep me logged in for 30 days
SESSION_LIFE_TIME = 60 * 60 * 24 * 30;
};
repository = {
DEFAULT_BRANCH = "master";
ENABLE_PUSH_CREATE_USER = true;
ENABLE_PUSH_CREATE_ORG = true;
};
other = {
SHOW_FOOTER_TEMPLATE_LOAD_TIME = false;
};
ui = {
# options: "auto", "gitea", "arc-green"
DEFAULT_THEME = "arc-green";
# cache frontend assets if true
# USE_SERVICE_WORKER = true;
};
#"ui.meta" = ... to customize html author/description/etc
mailer = {
# alternative is to use nixos-level config:
# services.gitea.mailerPasswordFile = ...
ENABLED = true;
MAILER_TYPE = "sendmail";
FROM = "notify.git@uninsane.org";
SENDMAIL_PATH = "${pkgs.postfix}/bin/sendmail";
};
time = {
# options: ANSIC, UnixDate, RubyDate, RFC822, RFC822Z, RFC850, RFC1123, RFC1123Z, RFC3339, RFC3339Nano, Kitchen, Stamp, StampMilli, StampMicro, StampNano
# docs: https://pkg.go.dev/time#pkg-constants
FORMAT = "RFC3339";
};
};
systemd.services.gitea.serviceConfig = {
# nix default is AF_UNIX AF_INET AF_INET6.
# we need more protos for sendmail to work. i thought it only needed +AF_LOCAL, but that didn't work.
RestrictAddressFamilies = lib.mkForce "~";
# add maildrop to allow sendmail to work
ReadWritePaths = lib.mkForce [
"/var/lib/postfix/queue/maildrop"
"/var/lib/gitea"
];
};
services.openssh.settings.UsePAM = true; #< required for `git` user to authenticate
# hosted git (web view and for `git <cmd>` use)
# TODO: enable publog?
services.nginx.virtualHosts."git.uninsane.org" = {
forceSSL = true; # gitea complains if served over a different protocol than its config file says
enableACME = true;
# inherit kTLS;
locations."/" = {
proxyPass = "http://127.0.0.1:3000";
};
# gitea serves all `raw` files as Content-Type: text/plain, but i'd like to serve them as their actual content type.
# or at least, enough to make specific pages viewable (serving unoriginal content under an arbitrary content type is dangerous).
locations."~ ^/colin/phone-case-cq/raw/.*.html" = {
proxyPass = "http://127.0.0.1:3000";
extraConfig = ''
proxy_hide_header Content-Type;
default_type text/html;
add_header Content-Type text/html;
'';
};
locations."~ ^/colin/phone-case-cq/raw/.*.js" = {
proxyPass = "http://127.0.0.1:3000";
extraConfig = ''
proxy_hide_header Content-Type;
default_type text/html;
add_header Content-Type text/javascript;
'';
};
};
sane.dns.zones."uninsane.org".inet.CNAME."git" = "native";
sane.ports.ports."22" = {
protocol = [ "tcp" ];
visibleTo.lan = true;
visibleTo.doof = true;
description = "colin-git@git.uninsane.org";
};
}

View File

@ -1,70 +0,0 @@
{ pkgs, ... }:
{
# based on <https://bytes.fyi/real-time-goaccess-reports-with-nginx/>
# log-format setting can be derived with this tool if custom:
# - <https://github.com/stockrt/nginx2goaccess>
# config options:
# - <https://github.com/allinurl/goaccess/blob/master/config/goaccess.conf>
systemd.services.goaccess = {
description = "GoAccess server monitoring";
serviceConfig = {
ExecStart = ''
${pkgs.goaccess}/bin/goaccess \
-f /var/log/nginx/public.log \
--log-format=VCOMBINED \
--real-time-html \
--html-refresh=30 \
--no-query-string \
--anonymize-ip \
--ignore-panel=HOSTS \
--ws-url=wss://sink.uninsane.org:443/ws \
--port=7890 \
-o /var/lib/goaccess/index.html
'';
ExecReload = "${pkgs.coreutils}/bin/kill -HUP $MAINPID";
Type = "simple";
Restart = "on-failure";
RestartSec = "10s";
# hardening
# TODO: run as `goaccess` user and add `goaccess` user to group `nginx`.
NoNewPrivileges = true;
PrivateDevices = "yes";
PrivateTmp = true;
ProtectHome = "read-only";
ProtectKernelModules = "yes";
ProtectKernelTunables = "yes";
ProtectSystem = "strict";
ReadOnlyPaths = [ "/var/log/nginx" ];
ReadWritePaths = [ "/proc/self" "/var/lib/goaccess" ];
StateDirectory = "goaccess";
SystemCallFilter = "~@clock @cpu-emulation @debug @keyring @memlock @module @mount @obsolete @privileged @reboot @resources @setuid @swap @raw-io";
WorkingDirectory = "/var/lib/goaccess";
};
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
};
# server statistics
services.nginx.virtualHosts."sink.uninsane.org" = {
addSSL = true;
enableACME = true;
# inherit kTLS;
root = "/var/lib/goaccess";
locations."/ws" = {
proxyPass = "http://127.0.0.1:7890";
# XXX not sure how much of this is necessary
extraConfig = ''
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_buffering off;
proxy_read_timeout 7d;
'';
};
};
sane.dns.zones."uninsane.org".inet.CNAME."sink" = "native";
}

View File

@ -1,93 +0,0 @@
# admin:
# - view stats:
# - sudo -u ipfs -g ipfs ipfs -c /var/lib/ipfs/ stats bw
# - sudo -u ipfs -g ipfs ipfs -c /var/lib/ipfs/ stats dht
# - sudo -u ipfs -g ipfs ipfs -c /var/lib/ipfs/ bitswap stat
# - number of open peer connections:
# - sudo -u ipfs -g ipfs ipfs -c /var/lib/ipfs/ swarm peers | wc -l
{ lib, ... }:
lib.mkIf false # i don't actively use ipfs anymore
{
sane.persist.sys.byStore.plaintext = [
# TODO: mode? could be more granular
{ user = "261"; group = "261"; path = "/var/lib/ipfs"; method = "bind"; }
];
networking.firewall.allowedTCPPorts = [ 4001 ];
networking.firewall.allowedUDPPorts = [ 4001 ];
services.nginx.virtualHosts."ipfs.uninsane.org" = {
# don't default to ssl upgrades, since this may be dnslink'd from a different domain.
# ideally we'd disable ssl entirely, but some places assume it?
addSSL = true;
enableACME = true;
# inherit kTLS;
locations."/" = {
proxyPass = "http://127.0.0.1:8080";
extraConfig = ''
proxy_set_header Host $host;
proxy_set_header X-Ipfs-Gateway-Prefix "";
'';
};
};
sane.dns.zones."uninsane.org".inet.CNAME."ipfs" = "native";
# services.ipfs.enable = true;
services.kubo.localDiscovery = true;
services.kubo.settings = {
Addresses = {
Announce = [
# "/dns4/ipfs.uninsane.org/tcp/4001"
"/dns4/ipfs.uninsane.org/udp/4001/quic"
];
Swarm = [
# "/dns4/ipfs.uninsane.org/tcp/4001"
# "/ip4/0.0.0.0/tcp/4001"
"/dns4/ipfs.uninsane.org/udp/4001/quic"
"/ip4/0.0.0.0/udp/4001/quic"
];
};
Gateway = {
# the gateway can only be used to serve content already replicated on this host
NoFetch = true;
};
Swarm = {
ConnMgr = {
# maintain between LowWater and HighWater peer connections
# taken from: https://github.com/ipfs/ipfs-desktop/pull/2055
# defaults are 600-900: https://github.com/ipfs/kubo/blob/master/docs/config.md#swarmconnmgr
LowWater = 20;
HighWater = 40;
# default is 20s. i guess more grace period = less churn
GracePeriod = "1m";
};
ResourceMgr = {
# docs: https://github.com/libp2p/go-libp2p-resource-manager#resource-scopes
Enabled = true;
Limits = {
System = {
Conns = 196;
ConnsInbound = 128;
ConnsOutbound = 128;
FD = 512;
Memory = 1073741824; # 1GiB
Streams = 1536;
StreamsInbound = 1024;
StreamsOutbound = 1024;
};
};
};
Transports = {
Network = {
# disable TCP, force QUIC, for lighter resources
TCP = false;
QUIC = true;
};
};
};
};
}

View File

@ -1,34 +0,0 @@
{ config, lib, pkgs, ... }:
{
sane.persist.sys.byStore.plaintext = [
# TODO: mode? we only need this to save Indexer creds ==> migrate to config?
{ user = "root"; group = "root"; path = "/var/lib/jackett"; method = "bind"; }
];
services.jackett.enable = true;
systemd.services.jackett.after = [ "wireguard-wg-ovpns.service" ];
systemd.services.jackett.partOf = [ "wireguard-wg-ovpns.service" ];
systemd.services.jackett.serviceConfig = {
# run this behind the OVPN static VPN
NetworkNamespacePath = "/run/netns/ovpns";
ExecStartPre = [ "${lib.getExe pkgs.sane-scripts.ip-check} --no-upnp --expect ${config.sane.netns.ovpns.netnsPubIpv4}" ]; # abort if public IP is not as expected
# patch jackett to listen on the public interfaces
# ExecStart = lib.mkForce "${pkgs.jackett}/bin/Jackett --NoUpdates --DataFolder /var/lib/jackett/.config/Jackett --ListenPublic";
};
# jackett torrent search
services.nginx.virtualHosts."jackett.uninsane.org" = {
forceSSL = true;
enableACME = true;
# inherit kTLS;
locations."/" = {
proxyPass = "http://${config.sane.netns.ovpns.netnsVethIpv4}:9117";
recommendedProxySettings = true;
};
};
sane.dns.zones."uninsane.org".inet.CNAME."jackett" = "native";
}

View File

@ -1,127 +0,0 @@
# configuration options (today i don't store my config in nix):
#
# - jellyfin-web can be statically configured (result/share/jellyfin-web/config.json)
# - <https://jellyfin.org/docs/general/clients/web-config>
# - configure server list, plugins, "menuLinks", colors
#
# - jellyfin server is configured in /var/lib/jellyfin/
# - root/default/<LibraryType>/
# - <LibraryName>.mblink: contains the directory name where this library lives
# - options.xml: contains preferences which were defined in the web UI during import
# - e.g. `EnablePhotos`, `EnableChapterImageExtraction`, etc.
# - config/encoding.xml: transcoder settings
# - config/system.xml: misc preferences like log file duration, audiobook resume settings, etc.
# - data/jellyfin.db: maybe account definitions? internal state?
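# as a rough, hypothetical sketch (exact schema per the web-config doc above -- verify before use),
# a static jellyfin-web config.json overriding just the server list and menu entries might look like:
#   {
#     "servers": [ "https://jelly.uninsane.org" ],
#     "menuLinks": [ { "name": "gitea", "url": "https://git.uninsane.org", "icon": "code" } ]
#   }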
{ config, lib, ... }:
{
# https://jellyfin.org/docs/general/networking/index.html
sane.ports.ports."1900" = {
protocol = [ "udp" ];
visibleTo.lan = true;
description = "colin-upnp-for-jellyfin";
};
sane.ports.ports."7359" = {
protocol = [ "udp" ];
visibleTo.lan = true;
description = "colin-jellyfin-specific-client-discovery";
# ^ not sure if this is necessary: copied this port from nixos jellyfin.openFirewall
};
# not sure if 8096/8920 get used either:
sane.ports.ports."8096" = {
protocol = [ "tcp" ];
visibleTo.lan = true;
description = "colin-jellyfin-http-lan";
};
sane.ports.ports."8920" = {
protocol = [ "tcp" ];
visibleTo.lan = true;
description = "colin-jellyfin-https-lan";
};
sane.persist.sys.byStore.plaintext = [
{ user = "jellyfin"; group = "jellyfin"; mode = "0700"; path = "/var/lib/jellyfin"; method = "bind"; }
];
sane.fs."/var/lib/jellyfin/config/logging.json" = {
# "Emby.Dlna" logging: <https://jellyfin.org/docs/general/networking/dlna>
symlink.text = ''
{
"Serilog": {
"MinimumLevel": {
"Default": "Information",
"Override": {
"Microsoft": "Warning",
"System": "Warning",
"Emby.Dlna": "Debug",
"Emby.Dlna.Eventing": "Debug"
}
},
"WriteTo": [
{
"Name": "Console",
"Args": {
"outputTemplate": "[{Timestamp:HH:mm:ss}] [{Level:u3}] [{ThreadId}] {SourceContext}: {Message:lj}{NewLine}{Exception}"
}
}
],
"Enrich": [ "FromLogContext", "WithThreadId" ]
}
}
'';
wantedBeforeBy = [ "jellyfin.service" ];
};
# Jellyfin multimedia server
# this is mostly taken from the official jellyfin.org docs
services.nginx.virtualHosts."jelly.uninsane.org" = {
forceSSL = true;
enableACME = true;
# inherit kTLS;
locations."/" = {
proxyPass = "http://127.0.0.1:8096";
extraConfig = ''
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Protocol $scheme;
proxy_set_header X-Forwarded-Host $http_host;
# Disable buffering when the nginx proxy gets very resource heavy upon streaming
proxy_buffering off;
'';
};
# locations."/web/" = {
# proxyPass = "http://127.0.0.1:8096/web/index.html";
# extraConfig = ''
# proxy_set_header Host $host;
# proxy_set_header X-Real-IP $remote_addr;
# proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# proxy_set_header X-Forwarded-Proto $scheme;
# proxy_set_header X-Forwarded-Protocol $scheme;
# proxy_set_header X-Forwarded-Host $http_host;
# '';
# };
locations."/socket" = {
proxyPass = "http://127.0.0.1:8096";
extraConfig = ''
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Protocol $scheme;
proxy_set_header X-Forwarded-Host $http_host;
'';
};
};
sane.dns.zones."uninsane.org".inet.CNAME."jelly" = "native";
services.jellyfin.enable = true;
}

View File

@ -1,27 +0,0 @@
# how to update wikipedia snapshot:
# - browse for later snapshots:
# - <https://mirror.accum.se/mirror/wikimedia.org/other/kiwix/zim/wikipedia>
# - DL directly, or via rsync (resumable):
# - `rsync --progress --append-verify rsync://mirror.accum.se/mirror/wikimedia.org/other/kiwix/zim/wikipedia/wikipedia_en_all_maxi_2022-05.zim .`
{ ... }:
{
sane.persist.sys.byStore.ext = [
{ user = "colin"; group = "users"; path = "/var/lib/kiwix"; method = "bind"; }
];
sane.services.kiwix-serve = {
enable = true;
port = 8013;
zimPaths = [ "/var/lib/kiwix/wikipedia_en_all_maxi_2023-11.zim" ];
};
services.nginx.virtualHosts."w.uninsane.org" = {
forceSSL = true;
enableACME = true;
# inherit kTLS;
locations."/".proxyPass = "http://127.0.0.1:8013";
};
sane.dns.zones."uninsane.org".inet.CNAME."w" = "native";
}

View File

@ -1,22 +0,0 @@
{ config, ... }:
let
svc-cfg = config.services.komga;
inherit (svc-cfg) user group port stateDir;
in
{
sane.persist.sys.byStore.plaintext = [
{ inherit user group; mode = "0700"; path = stateDir; method = "bind"; }
];
services.komga.enable = true;
services.komga.port = 11319; # chosen at random
services.nginx.virtualHosts."komga.uninsane.org" = {
forceSSL = true;
enableACME = true;
locations."/" = {
proxyPass = "http://127.0.0.1:${builtins.toString port}";
};
};
sane.dns.zones."uninsane.org".inet.CNAME."komga" = "native";
}

View File

@ -1,90 +0,0 @@
# docs:
# - <repo:LemmyNet/lemmy:docker/federation/nginx.conf>
# - <repo:LemmyNet/lemmy:docker/nginx.conf>
# - <repo:LemmyNet/lemmy-ansible:templates/nginx.conf>
{ config, lib, pkgs, ... }:
let
inherit (builtins) toString;
inherit (lib) mkForce;
uiPort = 1234; # default ui port is 1234
backendPort = 8536; # default backend port is 8536
#^ i guess the "backend" port is used for federation?
pict-rs = pkgs.pict-rs;
# pict-rs = pkgs.pict-rs.overrideAttrs (upstream: {
# # as of v0.4.2, all non-GIF video is forcibly transcoded.
# # that breaks lemmy, because of the request latency.
# # and it eats up hella CPU.
# # pict-rs is iffy around video altogether: mp4 seems the best supported.
# # XXX: this patch no longer applies after 0.5.10 -> 0.5.11 update.
# # git log is hard to parse, but *suggests* that video is natively supported
# # better than in the 0.4.2 days, e.g. 5fd59fc5b42d31559120dc28bfef4e5002fb509e
# # "Change commandline flag to allow disabling video, since it is enabled by default"
# postPatch = (upstream.postPatch or "") + ''
# substituteInPlace src/validate.rs \
# --replace-fail 'if transcode_options.needs_reencode() {' 'if false {'
# '';
# });
in {
services.lemmy = {
enable = true;
settings.hostname = "lemmy.uninsane.org";
# federation.debug forces outbound federation queries to be run synchronously
# N.B.: this option might not be read for 0.17.0+? <https://github.com/LemmyNet/lemmy/blob/c32585b03429f0f76d1e4ff738786321a0a9df98/RELEASES.md#upgrade-instructions>
# settings.federation.debug = true;
settings.port = backendPort;
ui.port = uiPort;
database.createLocally = true;
nginx.enable = true;
};
systemd.services.lemmy.serviceConfig = {
# fix to use a normal user so we can configure perms correctly
DynamicUser = mkForce false;
User = "lemmy";
Group = "lemmy";
};
systemd.services.lemmy.environment = {
RUST_BACKTRACE = "full";
# RUST_LOG = "debug";
# RUST_LOG = "trace";
# upstream defaults LEMMY_DATABASE_URL = "postgres:///lemmy?host=/run/postgresql";
# - Postgres complains that we didn't specify a user
# lemmy formats the url as:
# - postgres://{user}:{password}@{host}:{port}/{database}
# SO suggests (https://stackoverflow.com/questions/3582552/what-is-the-format-for-the-postgresql-connection-string-url):
# - postgresql://[user[:password]@][netloc][:port][/dbname][?param1=value1&...]
# LEMMY_DATABASE_URL = "postgres://lemmy@/run/postgresql"; # connection to server on socket "/run/postgresql/.s.PGSQL.5432" failed: FATAL: database "run/postgresql" does not exist
# LEMMY_DATABASE_URL = "postgres://lemmy?host=/run/postgresql"; # no PostgreSQL user name specified in startup packet
# LEMMY_DATABASE_URL = mkForce "postgres://lemmy@?host=/run/postgresql"; # WORKS
LEMMY_DATABASE_URL = mkForce "postgres://lemmy@/lemmy?host=/run/postgresql";
};
users.groups.lemmy = {};
users.users.lemmy = {
group = "lemmy";
isSystemUser = true;
};
services.nginx.virtualHosts."lemmy.uninsane.org" = {
forceSSL = true;
enableACME = true;
};
sane.dns.zones."uninsane.org".inet.CNAME."lemmy" = "native";
#v DO NOT REMOVE: defaults to 0.3, instead of latest, so always need to explicitly set this.
services.pict-rs.package = pict-rs;
# pict-rs configuration is applied in this order:
# - via toml
# - via env vars (overrides everything above)
# - via CLI flags (overrides everything above)
# some of the CLI flags have defaults, making it the only actual way to configure certain things even when docs claim otherwise.
# CLI args: <https://git.asonix.dog/asonix/pict-rs#user-content-running>
systemd.services.pict-rs.serviceConfig.ExecStart = lib.mkForce (lib.concatStringsSep " " [
"${lib.getBin pict-rs}/bin/pict-rs run"
"--media-video-max-frame-count" (builtins.toString (30*60*60))
"--media-process-timeout 120"
"--media-video-allow-audio" # allow audio
]);
}

View File

@ -1,165 +0,0 @@
# docs: <https://nixos.wiki/wiki/Matrix>
# docs: <https://nixos.org/manual/nixos/stable/index.html#module-services-matrix-synapse>
# example config: <https://github.com/matrix-org/synapse/blob/develop/docs/sample_config.yaml>
#
# ENABLING PUSH NOTIFICATIONS (with UnifiedPush/ntfy):
# - Matrix "pushers" API spec: <https://spec.matrix.org/latest/client-server-api/#post_matrixclientv3pushersset>
# - first, view notification settings:
# - obtain your client's auth token. e.g. Element -> profile -> help/about -> access token.
# - `curl --header 'Authorization: Bearer <your_access_token>' localhost:8008/_matrix/client/v3/pushers | jq .`
# - enable a new notification destination:
# - `curl --header "Authorization: Bearer <your_access_token>" --data '{ "app_display_name": "<topic>", "app_id": "ntfy.uninsane.org", "data": { "url": "https://ntfy.uninsane.org/_matrix/push/v1/notify", "format": "event_id_only" }, "device_display_name": "<topic>", "kind": "http", "lang": "en-US", "profile_tag": "", "pushkey": "<topic>" }' localhost:8008/_matrix/client/v3/pushers/set`
# - delete a notification destination by setting `kind` to `null` (otherwise, request is identical to above)
#
{ config, lib, pkgs, ... }:
{
imports = [
./discord-puppet.nix
./irc.nix
./signal.nix
];
sane.persist.sys.byStore.plaintext = [
{ user = "matrix-synapse"; group = "matrix-synapse"; path = "/var/lib/matrix-synapse"; method = "bind"; }
];
services.matrix-synapse.enable = true;
services.matrix-synapse.settings = {
# this changes the default log level from INFO to WARN.
# maybe there's an easier way?
log_config = ./synapse-log_level.yaml;
server_name = "uninsane.org";
# services.matrix-synapse.enable_registration_captcha = true;
# services.matrix-synapse.enable_registration_without_verification = true;
enable_registration = true;
# services.matrix-synapse.registration_shared_secret = "<shared key goes here>";
# default for listeners is port = 8448, tls = true, x_forwarded = false.
# we change this because the server is situated behind nginx.
listeners = [
{
port = 8008;
bind_addresses = [ "127.0.0.1" ];
type = "http";
tls = false;
x_forwarded = true;
resources = [
{
names = [ "client" "federation" ];
compress = false;
}
];
}
];
ip_range_whitelist = [
# to communicate with ntfy.uninsane.org push notifs.
# TODO: move this to some non-shared loopback device: we don't want Matrix spouting http requests to *anything* on this machine
"10.78.79.51"
];
x_forwarded = true; # because we proxy matrix behind nginx
max_upload_size = "100M"; # default is "50M"
admin_contact = "admin.matrix@uninsane.org";
registrations_require_3pid = [ "email" ];
};
services.matrix-synapse.extraConfigFiles = [
config.sops.secrets."matrix_synapse_secrets.yaml".path
];
systemd.services.matrix-synapse.postStart = ''
ACCESS_TOKEN=$(${pkgs.coreutils}/bin/cat ${config.sops.secrets.matrix_access_token.path})
TOPIC=$(${pkgs.coreutils}/bin/cat ${config.sops.secrets.ntfy-sh-topic.path})
echo "ensuring ntfy push gateway"
${pkgs.curl}/bin/curl \
--header "Authorization: Bearer $ACCESS_TOKEN" \
--data "{ \"app_display_name\": \"ntfy-adapter\", \"app_id\": \"ntfy.uninsane.org\", \"data\": { \"url\": \"https://ntfy.uninsane.org/_matrix/push/v1/notify\", \"format\": \"event_id_only\" }, \"device_display_name\": \"ntfy-adapter\", \"kind\": \"http\", \"lang\": \"en-US\", \"profile_tag\": \"\", \"pushkey\": \"$TOPIC\" }" \
localhost:8008/_matrix/client/v3/pushers/set
echo "registered push gateways:"
${pkgs.curl}/bin/curl \
--header "Authorization: Bearer $ACCESS_TOKEN" \
localhost:8008/_matrix/client/v3/pushers \
| ${pkgs.jq}/bin/jq .
'';
# new users may be registered on the CLI:
# register_new_matrix_user -c /nix/store/8n6kcka37jhmi4qpd2r03aj71pkyh21s-homeserver.yaml http://localhost:8008
#
# or provide a registration token which they can then use to register through the client.
# docs: https://github.com/matrix-org/synapse/blob/develop/docs/usage/administration/admin_api/registration_tokens.md
# first, grab your own user's access token (Help & About section in Element). then:
# curl --header "Authorization: Bearer <my_token>" localhost:8008/_synapse/admin/v1/registration_tokens
# create a token with unlimited uses:
# curl -d '{}' --header "Authorization: Bearer <my_token>" localhost:8008/_synapse/admin/v1/registration_tokens/new
# create a token with limited uses:
# curl -d '{ "uses_allowed": 1 }' --header "Authorization: Bearer <my_token>" localhost:8008/_synapse/admin/v1/registration_tokens/new
# matrix chat server
# TODO: was `publog`
services.nginx.virtualHosts."matrix.uninsane.org" = {
addSSL = true;
enableACME = true;
# inherit kTLS;
# TODO colin: replace this with something helpful to the viewer
# locations."/".extraConfig = ''
# return 404;
# '';
locations."/" = {
proxyPass = "http://127.0.0.1:8008";
extraConfig = ''
# allow uploading large files (matrix enforces a separate limit, downstream)
client_max_body_size 512m;
'';
};
# redirect browsers to the web client.
# i don't think native matrix clients ever fetch the root.
# ideally this would be put behind some user-agent test though.
locations."= /" = {
return = "301 https://web.matrix.uninsane.org";
};
# locations."/_matrix" = {
# proxyPass = "http://127.0.0.1:8008";
# };
};
# matrix web client
# docs: https://nixos.org/manual/nixos/stable/index.html#module-services-matrix-element-web
services.nginx.virtualHosts."web.matrix.uninsane.org" = {
forceSSL = true;
enableACME = true;
# inherit kTLS;
root = pkgs.element-web.override {
conf = {
default_server_config."m.homeserver" = {
"base_url" = "https://matrix.uninsane.org";
"server_name" = "uninsane.org";
};
};
};
};
sane.dns.zones."uninsane.org".inet = {
CNAME."matrix" = "native";
CNAME."web.matrix" = "native";
};
sops.secrets."matrix_synapse_secrets.yaml" = {
owner = config.users.users.matrix-synapse.name;
};
sops.secrets."matrix_access_token" = {
owner = config.users.users.matrix-synapse.name;
};
# provide access to ntfy-sh-topic secret
users.users.matrix-synapse.extraGroups = [ "ntfy-sh" ];
}

View File

@ -1,58 +0,0 @@
{ lib, ... }:
# XXX mx-discord-puppet uses nodejs_14 which is EOL
# - mx-discord-puppet is abandoned upstream _and_ in nixpkgs
# - recommended to use mautrix-discord: <https://github.com/NixOS/nixpkgs/pull/200462>
lib.mkIf false
{
sane.persist.sys.byStore.plaintext = [
{ user = "matrix-synapse"; group = "matrix-synapse"; path = "/var/lib/mx-puppet-discord"; method = "bind"; }
];
services.matrix-synapse.settings.app_service_config_files = [
# auto-created by mx-puppet-discord service
"/var/lib/mx-puppet-discord/discord-registration.yaml"
];
services.mx-puppet-discord.enable = true;
# schema/example: <https://gitlab.com/mx-puppet/discord/mx-puppet-discord/-/blob/main/sample.config.yaml>
services.mx-puppet-discord.settings = {
bridge = {
# port = 8434
bindAddress = "127.0.0.1";
domain = "uninsane.org";
homeserverUrl = "http://127.0.0.1:8008";
# displayName = "mx-discord-puppet"; # matrix name for the bot
# matrix "groups" were an earlier version of spaces.
# maybe the puppet understands this, maybe not?
enableGroupSync = false;
};
presence = {
enabled = false;
interval = 30000;
};
provisioning = {
# allow these users to control the puppet
whitelist = [ "@colin:uninsane\\.org" ];
};
relay = {
whitelist = [ "@colin:uninsane\\.org" ];
};
selfService = {
# who's allowed to use plumbed rooms (idk what that means)
whitelist = [ "@colin:uninsane\\.org" ];
};
logging = {
# silly, debug, verbose, info, warn, error
console = "debug";
};
};
# TODO: should use a dedicated user
systemd.services.mx-puppet-discord.serviceConfig = {
# fix up to not use /var/lib/private, but just /var/lib
DynamicUser = lib.mkForce false;
User = "matrix-synapse";
Group = "matrix-synapse";
};
}

View File

@ -1,13 +0,0 @@
diff --git a/src/irc/ConnectionInstance.ts b/src/irc/ConnectionInstance.ts
index 688036ca..3373fa27 100644
--- a/src/irc/ConnectionInstance.ts
+++ b/src/irc/ConnectionInstance.ts
@@ -149,7 +149,7 @@ export class ConnectionInstance {
if (this.dead) {
return Promise.resolve();
}
- ircReason = ircReason || reason;
+ ircReason = "bye"; // don't reveal through the IRC quit message that we're a bridge
log.info(
"disconnect()ing %s@%s - %s", this.nick, this.domain, reason
);

View File

@ -1,50 +0,0 @@
diff --git a/config.schema.yml b/config.schema.yml
index 2e71c8d6..42ba8ba1 100644
--- a/config.schema.yml
+++ b/config.schema.yml
@@ -433,7 +433,7 @@ properties:
type: "boolean"
realnameFormat:
type: "string"
- enum: ["mxid","reverse-mxid"]
+ enum: ["mxid","reverse-mxid","localpart"]
ipv6:
type: "object"
properties:
diff --git a/src/irc/IdentGenerator.ts b/src/irc/IdentGenerator.ts
index 7a2b5cf1..50f7815a 100644
--- a/src/irc/IdentGenerator.ts
+++ b/src/irc/IdentGenerator.ts
@@ -74,6 +74,9 @@ export class IdentGenerator {
else if (server.getRealNameFormat() === "reverse-mxid") {
realname = IdentGenerator.sanitiseRealname(IdentGenerator.switchAroundMxid(matrixUser));
}
+ else if (server.getRealNameFormat() == "localpart") {
+ realname = IdentGenerator.sanitiseRealname(matrixUser.localpart);
+ }
else {
throw Error('Invalid value for realNameFormat');
}
diff --git a/src/irc/IrcServer.ts b/src/irc/IrcServer.ts
index 2af73ab4..895b9783 100644
--- a/src/irc/IrcServer.ts
+++ b/src/irc/IrcServer.ts
@@ -101,7 +101,7 @@ export interface IrcServerConfig {
};
lineLimit: number;
userModes?: string;
- realnameFormat?: "mxid"|"reverse-mxid";
+ realnameFormat?: "mxid"|"reverse-mxid"|"localpart";
pingTimeoutMs: number;
pingRateMs: number;
kickOn: {
@@ -289,7 +289,7 @@ export class IrcServer {
return this.config.ircClients.userModes || "";
}
- public getRealNameFormat(): "mxid"|"reverse-mxid" {
+ public getRealNameFormat(): "mxid"|"reverse-mxid"|"localpart" {
return this.config.ircClients.realnameFormat || "mxid";
}

View File

@ -1,171 +0,0 @@
# config docs:
# - <https://github.com/matrix-org/matrix-appservice-irc/blob/develop/config.sample.yaml>
# probably want to remove that.
{ config, lib, ... }:
let
ircServer = { name, additionalAddresses ? [], ssl ? true, sasl ? true, port ? if ssl then 6697 else 6667 }: let
lowerName = lib.toLower name;
in {
# XXX sasl: appservice doesn't support NickServ identification (only SASL, or PASS if sasl = false)
inherit additionalAddresses name port sasl ssl;
botConfig = {
# bot has no presence in IRC channel; only real Matrix users
enabled = false;
# this is the IRC username/nickname *of the bot* (not visible in channels): not of the end-user.
# the irc username/nick of a mapped Matrix user is determined further down in `ircClients` section.
# if `enabled` is false, then this name probably never shows up on the IRC side (?)
nick = "uninsane";
username = "uninsane";
joinChannelsIfNoUsers = false;
};
dynamicChannels = {
enabled = true;
aliasTemplate = "#irc_${lowerName}_$CHANNEL";
published = false; # false => irc rooms aren't listed in homeserver public rooms list
federate = false; # false => Matrix users from other homeservers can't join IRC channels
};
ircClients = {
nickTemplate = "$LOCALPARTsane"; # @colin:uninsane.org (Matrix) -> colinsane (IRC)
realnameFormat = "reverse-mxid"; # @colin:uninsane.org (Matrix) -> org.uninsane:colin (IRC)
# realnameFormat = "localpart"; # @colin:uninsane.org (Matrix) -> colin (IRC) -- but requires the mxid patch below
# by default, Matrix will convert messages greater than (3) lines into a pastebin-like URL to send to IRC.
lineLimit = 20;
# Rizon in particular allows only 4 connections from one IP before a 30min ban.
# that's effectively reduced to 2 during a netsplit, or maybe during a restart.
# - https://wiki.rizon.net/index.php?title=Connection/Session_Limit_Exemptions
# especially, misconfigurations elsewhere in this config may cause hundreds of connections
# so this is a safeguard.
maxClients = 2;
# don't have the bridge disconnect me from IRC when idle.
idleTimeout = 0;
concurrentReconnectLimit = 2;
reconnectIntervalMs = 60000;
kickOn = {
# remove Matrix user from room when...
channelJoinFailure = false;
ircConnectionFailure = false;
userQuit = true;
};
};
matrixClients = {
userTemplate = "@irc_${lowerName}_$NICK"; # the :uninsane.org part is appended automatically
};
# this will let this user message the appservice with `!join #<IRCChannel>` and the rest "Just Works"
"@colin:uninsane.org" = "admin";
membershipLists = {
enabled = true;
global = {
ircToMatrix = {
initial = true;
incremental = true;
requireMatrixJoined = false;
};
matrixToIrc = {
initial = true;
incremental = true;
};
};
ignoreIdleUsersOnStartup = {
enabled = false; # false => always bridge users, even if idle
};
};
# sync room description?
bridgeInfoState = {
enabled = true;
initial = true;
};
# for per-user IRC password:
# - invite @irc_${lowerName}_NickServ:uninsane.org to a DM and type `help` => register
# - invite the matrix-appservice-irc user to a DM and type `!help` => add PW to database
# to validate that i'm authenticated on the IRC network, DM @irc_${lowerName}_NickServ:uninsane.org:
# - send: `STATUS colinsane`
# - response should be `3`: "user recognized as owner via password identification"
# passwordEncryptionKeyPath = "/path/to/privkey"; # appservice will generate its own if unspecified
};
in
{
nixpkgs.overlays = [
(next: prev: {
matrix-appservice-irc = prev.matrix-appservice-irc.overrideAttrs (super: {
patches = super.patches or [] ++ [
./irc-no-reveal-bridge.patch
# ./irc-no-reveal-mxid.patch
];
});
})
];
sane.persist.sys.byStore.plaintext = [
# TODO: mode?
{ user = "matrix-appservice-irc"; group = "matrix-appservice-irc"; path = "/var/lib/matrix-appservice-irc"; method = "bind"; }
];
# XXX: matrix-appservice-irc PreStart tries to chgrp the registration.yml to matrix-synapse,
# which requires matrix-appservice-irc to be of that group
users.users.matrix-appservice-irc.extraGroups = [ "matrix-synapse" ];
# weird race conditions around registration.yml mean we want matrix-synapse to be of matrix-appservice-irc group too.
users.users.matrix-synapse.extraGroups = [ "matrix-appservice-irc" ];
services.matrix-synapse.settings.app_service_config_files = [
"/var/lib/matrix-appservice-irc/registration.yml" # auto-created by irc appservice
];
services.matrix-appservice-irc.enable = true;
services.matrix-appservice-irc.registrationUrl = "http://127.0.0.1:8009";
services.matrix-appservice-irc.settings = {
homeserver = {
url = "http://127.0.0.1:8008";
dropMatrixMessagesAfterSecs = 300;
domain = "uninsane.org";
enablePresence = true;
bindPort = 9999;
bindHost = "127.0.0.1";
};
ircService = {
servers = {
"irc.esper.net" = ircServer {
name = "esper";
sasl = false;
# notable channels:
# - #merveilles
};
"irc.libera.chat" = ircServer {
name = "libera";
sasl = false;
# notable channels:
# - #hare
# - #mnt-reform
};
"irc.myanonamouse.net" = ircServer {
name = "MyAnonamouse";
additionalAddresses = [ "irc2.myanonamouse.net" ];
sasl = false;
};
"irc.oftc.net" = ircServer {
name = "oftc";
sasl = false;
# notable channels:
# - #sxmo
# - #sxmo-offtopic
};
"irc.rizon.net" = ircServer { name = "Rizon"; };
"wigle.net" = ircServer {
name = "WiGLE";
ssl = false;
};
};
};
};
systemd.services.matrix-appservice-irc.serviceConfig = {
# XXX 2023/06/20: nixos specifies this + @aio and @memlock as forbidden
# the service actively uses at least one of these, and both of them are fairly innocuous
SystemCallFilter = lib.mkForce "~@clock @cpu-emulation @debug @keyring @module @mount @obsolete @raw-io @setuid @swap";
};
}

View File

@ -1,39 +0,0 @@
# config options:
# - <https://github.com/mautrix/signal/blob/master/mautrix_signal/example-config.yaml>
{ config, lib, pkgs, ... }:
lib.mkIf false # disabled 2024/01/11: i don't use it, and pkgs.mautrix-signal had some API changes
{
sane.persist.sys.byStore.plaintext = [
{ user = "mautrix-signal"; group = "mautrix-signal"; path = "/var/lib/mautrix-signal"; method = "bind"; }
{ user = "signald"; group = "signald"; path = "/var/lib/signald"; method = "bind"; }
];
# allow synapse to read the registration file
users.users.matrix-synapse.extraGroups = [ "mautrix-signal" ];
services.signald.enable = true;
services.mautrix-signal.enable = true;
services.mautrix-signal.environmentFile =
config.sops.secrets.mautrix_signal_env.path;
services.mautrix-signal.settings.signal.socket_path = "/run/signald/signald.sock";
services.mautrix-signal.settings.homeserver.domain = "uninsane.org";
services.mautrix-signal.settings.bridge.permissions."@colin:uninsane.org" = "admin";
services.matrix-synapse.settings.app_service_config_files = [
# auto-created by mautrix-signal service
"/var/lib/mautrix-signal/signal-registration.yaml"
];
systemd.services.mautrix-signal.serviceConfig = {
# allow communication to signald
SupplementaryGroups = [ "signald" ];
ReadWritePaths = [ "/run/signald" ];
};
sops.secrets."mautrix_signal_env" = {
mode = "0440";
owner = config.users.users.mautrix-signal.name;
group = config.users.users.matrix-synapse.name;
};
}

View File

@ -1,27 +0,0 @@
version: 1
# In systemd's journal, loglevel is implicitly stored, so let's omit it
# from the message text.
formatters:
journal_fmt:
format: '%(name)s: [%(request)s] %(message)s'
filters:
context:
(): synapse.util.logcontext.LoggingContextFilter
request: ""
handlers:
journal:
class: systemd.journal.JournalHandler
formatter: journal_fmt
filters: [context]
SYSLOG_IDENTIFIER: synapse
# default log level: INFO
root:
level: WARN
handlers: [journal]
disable_existing_loggers: False

View File

@ -1,41 +0,0 @@
{ lib, ... }:
lib.mkIf false #< i don't actively use navidrome
{
sane.persist.sys.byStore.plaintext = [
{ user = "navidrome"; group = "navidrome"; path = "/var/lib/navidrome"; method = "bind"; }
];
services.navidrome.enable = true;
services.navidrome.settings = {
# docs: https://www.navidrome.org/docs/usage/configuration-options/
Address = "127.0.0.1";
Port = 4533;
MusicFolder = "/var/media/Music";
CoverArtPriority = "*.jpg, *.JPG, *.png, *.PNG, embedded";
AutoImportPlaylists = false;
ScanSchedule = "@every 1h";
};
systemd.services.navidrome.serviceConfig = {
# fix to use a normal user so we can configure perms correctly
DynamicUser = lib.mkForce false;
User = "navidrome";
Group = "navidrome";
};
users.groups.navidrome = {};
users.users.navidrome = {
group = "navidrome";
isSystemUser = true;
};
services.nginx.virtualHosts."music.uninsane.org" = {
forceSSL = true;
enableACME = true;
# inherit kTLS;
locations."/".proxyPass = "http://127.0.0.1:4533";
};
sane.dns.zones."uninsane.org".inet.CNAME."music" = "native";
}

View File

@ -1,223 +0,0 @@
# docs: <https://nixos.wiki/wiki/Nginx>
# docs: <https://nginx.org/en/docs/>
{ config, lib, pkgs, ... }:
let
# make the logs for this host "public" so that they show up in e.g. metrics
publog = vhost: lib.attrsets.unionOfDisjoint vhost {
extraConfig = (vhost.extraConfig or "") + ''
access_log /var/log/nginx/public.log vcombined;
'';
};
# kTLS = true; # in-kernel TLS for better perf
in
{
sane.ports.ports."80" = {
protocol = [ "tcp" ];
visibleTo.lan = true;
visibleTo.ovpns = true; # so that letsencrypt can procure a cert for the mx record
visibleTo.doof = true;
description = "colin-http-uninsane.org";
};
sane.ports.ports."443" = {
protocol = [ "tcp" ];
visibleTo.lan = true;
visibleTo.doof = true;
description = "colin-https-uninsane.org";
};
services.nginx.enable = true;
services.nginx.appendConfig = ''
# use 1 process per core.
# may want to increase worker_connections too, but `ulimit -n` must be increased first.
worker_processes auto;
'';
# this is the standard `combined` log format, with the addition of $host
# so that we have the virtualHost in the log.
# KEEP IN SYNC WITH GOACCESS
# goaccess calls this VCOMBINED:
# - <https://gist.github.com/jyap808/10570005>
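# for reference, a request logged in this vcombined format looks roughly like (illustrative values):
#   uninsane.org:443 203.0.113.7 - - [12/Jul/2023:03:14:15 +0000] "GET /atom.xml HTTP/1.1" 200 4523 "-" "Mozilla/5.0 (X11; Linux) ..."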
services.nginx.commonHttpConfig = ''
log_format vcombined '$host:$server_port $remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent"';
access_log /var/log/nginx/private.log vcombined;
'';
# sets gzip_comp_level = 5
services.nginx.recommendedGzipSettings = true;
# enables OCSP stapling (so clients don't need to contact the OCSP server -- i do instead)
# - doesn't seem to, actually: <https://www.ssllabs.com/ssltest/analyze.html?d=uninsane.org>
# caches TLS sessions for 10m
services.nginx.recommendedTlsSettings = true;
# enables sendfile, tcp_nopush, tcp_nodelay, keepalive_timeout 65
services.nginx.recommendedOptimisation = true;
# web blog/personal site
# alternative way to link stuff into the share:
# sane.fs."/var/www/sites/uninsane.org/share/Ubunchu".mount.bind = "/var/media/Books/Visual/HiroshiSeo/Ubunchu";
# sane.fs."/var/media/Books/Visual/HiroshiSeo/Ubunchu".dir = {};
services.nginx.virtualHosts."uninsane.org" = publog {
# a lot of places hardcode https://uninsane.org,
# and then when we mix https + non-https content, we get CORS violations
# and things don't look right. so force SSL.
forceSSL = true;
enableACME = true;
# inherit kTLS;
# for OCSP stapling
sslTrustedCertificate = "${pkgs.cacert}/etc/ssl/certs/ca-bundle.crt";
locations."/" = {
root = "${pkgs.uninsane-dot-org}/share/uninsane-dot-org";
tryFiles = "$uri $uri/ @fallback";
};
# unversioned files
locations."@fallback" = {
root = "/var/www/sites/uninsane.org";
};
# uninsane.org/share/foo => /var/www/sites/uninsane.org/share/foo.
# special-cased to enable directory listings
locations."/share" = {
root = "/var/www/sites/uninsane.org";
extraConfig = ''
# autoindex => render directory listings
autoindex on;
# don't follow any symlinks when serving files
# otherwise it allows a directory escape
disable_symlinks on;
'';
};
locations."/share/Milkbags/" = {
alias = "/var/media/Videos/Milkbags/";
extraConfig = ''
# autoindex => render directory listings
autoindex on;
# don't follow any symlinks when serving files
# otherwise it allows a directory escape
disable_symlinks on;
'';
};
# allow matrix users to discover that @user:uninsane.org is reachable via matrix.uninsane.org
locations."= /.well-known/matrix/server".extraConfig =
let
# use 443 instead of the default 8448 port to unite
# the client-server and server-server port for simplicity
server = { "m.server" = "matrix.uninsane.org:443"; };
in ''
add_header Content-Type application/json;
return 200 '${builtins.toJSON server}';
'';
locations."= /.well-known/matrix/client".extraConfig =
let
client = {
"m.homeserver" = { "base_url" = "https://matrix.uninsane.org"; };
"m.identity_server" = { "base_url" = "https://vector.im"; };
};
# ACAO required to allow element-web on any URL to request this json file
in ''
add_header Content-Type application/json;
add_header Access-Control-Allow-Origin *;
return 200 '${builtins.toJSON client}';
'';
# static URLs might not be aware of .well-known (e.g. registration confirmation URLs),
# so hack around that.
locations."/_matrix" = {
proxyPass = "http://127.0.0.1:8008";
};
locations."/_synapse" = {
proxyPass = "http://127.0.0.1:8008";
};
# allow ActivityPub clients to discover how to reach @user@uninsane.org
# see: https://git.pleroma.social/pleroma/pleroma/-/merge_requests/3361/
# not sure this makes sense while i run multiple AP services (pleroma, lemmy)
# locations."/.well-known/nodeinfo" = {
# proxyPass = "http://127.0.0.1:4000";
# extraConfig = pleromaExtraConfig;
# };
# redirect common feed URIs to the canonical feed
locations."= /atom".extraConfig = "return 301 /atom.xml;";
locations."= /feed".extraConfig = "return 301 /atom.xml;";
locations."= /feed.xml".extraConfig = "return 301 /atom.xml;";
locations."= /rss".extraConfig = "return 301 /atom.xml;";
locations."= /rss.xml".extraConfig = "return 301 /atom.xml;";
locations."= /blog/atom".extraConfig = "return 301 /atom.xml;";
locations."= /blog/atom.xml".extraConfig = "return 301 /atom.xml;";
locations."= /blog/feed".extraConfig = "return 301 /atom.xml;";
locations."= /blog/feed.xml".extraConfig = "return 301 /atom.xml;";
locations."= /blog/rss".extraConfig = "return 301 /atom.xml;";
locations."= /blog/rss.xml".extraConfig = "return 301 /atom.xml;";
};
# serve any site not listed above, if it's static.
# because we define it dynamically, real (ACME) SSL isn't trivial: serve plain http, plus the self-signed wildcard cert below for clients that insist on https
# documented <https://nginx.org/en/docs/http/ngx_http_core_module.html#server_name>
services.nginx.virtualHosts."~^(?<domain>.+)$" = {
default = true;
addSSL = true;
enableACME = false;
sslCertificate = "/var/www/certs/wildcard/cert.pem";
sslCertificateKey = "/var/www/certs/wildcard/key.pem";
# sslCertificate = "/var/lib/acme/.minica/cert.pem";
# sslCertificateKey = "/var/lib/acme/.minica/key.pem";
# serverName = null;
locations."/" = {
# somehow this doesn't escape -- i get error 400 if i:
# curl 'http://..' --resolve '..:80:127.0.0.1'
root = "/var/www/sites/$domain";
# tryFiles = "$domain/$uri $domain/$uri/ =404";
};
};
security.acme.acceptTerms = true;
security.acme.defaults.email = "admin.acme@uninsane.org";
sane.persist.sys.byStore.plaintext = [
{ user = "acme"; group = "acme"; path = "/var/lib/acme"; method = "bind"; }
{ user = "colin"; group = "users"; path = "/var/www/sites"; method = "bind"; }
];
# let's encrypt default chain looks like:
# - End-entity certificate ← R3 ← ISRG Root X1 ← DST Root CA X3
# - <https://community.letsencrypt.org/t/production-chain-changes/150739>
# DST Root CA X3 expired in 2021 (?)
# the alternative chain is:
# - End-entity certificate ← R3 ← ISRG Root X1 (self-signed)
# using this alternative chain grants more compatibility for services like ejabberd
# but might decrease compatibility with very old clients that don't get updates (e.g. old android, iphone <= 4).
# security.acme.defaults.extraLegoFlags = [
security.acme.certs."uninsane.org" = rec {
# ISRG Root X1 results in lets encrypt sending the same chain as default,
# just without the final ISRG Root X1 ← DST Root CA X3 link.
# i.e. we could alternative clip the last item and achieve the exact same thing.
extraLegoRunFlags = [
"--preferred-chain" "ISRG Root X1"
];
extraLegoRenewFlags = extraLegoRunFlags;
};
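# to eyeball which chain is actually served (sketch; run from any host with openssl):
#   openssl s_client -connect uninsane.org:443 -servername uninsane.org -showcerts </dev/null
# the "Certificate chain" section should top out at "ISRG Root X1" (not "DST Root CA X3")
# when the preferred-chain flags above are in effect.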
# TODO: alternatively, we could clip the last cert IF it's expired,
# optionally outputting that to a new cert file.
# security.acme.defaults.postRun = "";
# create a self-signed SSL certificate for use with literally any domain.
# browsers will reject this, but proxies and local testing tools can be configured
# to accept it.
system.activationScripts.generate-x509-self-signed.text = ''
mkdir -p /var/www/certs/wildcard
test -f /var/www/certs/wildcard/key.pem || ${pkgs.openssl}/bin/openssl \
req -x509 -newkey rsa:4096 \
-keyout /var/www/certs/wildcard/key.pem \
-out /var/www/certs/wildcard/cert.pem \
-sha256 -nodes -days 3650 \
-addext 'subjectAltName=DNS:*' \
-subj '/CN=self-signed'
chmod 640 /var/www/certs/wildcard/{key,cert}.pem
chown root:nginx /var/www/certs/wildcard /var/www/certs/wildcard/{key,cert}.pem
'';
}

View File

@ -1,26 +0,0 @@
{ lib, pkgs, ... }:
lib.optionalAttrs false # disabled until i can be sure it's not gonna OOM my server in the middle of the night
{
systemd.services.nixos-prebuild = {
description = "build a nixos image with all updated deps";
path = with pkgs; [ coreutils git nix ];
script = ''
working=$(mktemp -d /tmp/nixos-prebuild.XXXXXX)
pushd "$working"
git clone https://git.uninsane.org/colin/nix-files.git \
&& cd nix-files \
&& nix flake update \
|| true
RC=$(nix run "$working/nix-files#check" -- -j1 --cores 5 --builders "")
popd
rm -rf "$working"
exit "$RC"
'';
};
systemd.timers.nixos-prebuild = {
wantedBy = [ "multi-user.target" ];
timerConfig.OnCalendar = "11,23:00:00";
};
}

View File

@ -1,14 +0,0 @@
# ntfy: UnifiedPush notification delivery system
# - used to get push notifications out of Matrix and onto a Phone (iOS, Android, or a custom client)
{ config, ... }:
{
imports = [
./ntfy-waiter.nix
./ntfy-sh.nix
];
sops.secrets."ntfy-sh-topic" = {
mode = "0440";
owner = config.users.users.ntfy-sh.name;
group = config.users.users.ntfy-sh.name;
};
}

View File

@ -1,92 +0,0 @@
# ntfy: UnifiedPush notification delivery system
# - used to get push notifications out of Matrix and onto a Phone (iOS, Android, or a custom client)
#
# config options:
# - <https://docs.ntfy.sh/config/#config-options>
#
# usage:
# - ntfy sub https://ntfy.uninsane.org/TOPIC
# - ntfy pub https://ntfy.uninsane.org/TOPIC "my message"
# in production, TOPIC is a shared secret between the publisher (Matrix homeserver) and the subscriber (phone)
#
# administering:
# - sudo -u ntfy-sh ntfy access
#
# debugging:
# - make sure that the keepalives are good:
# - on the subscriber machine, run `lsof -i4` to find the port being used
# - `sudo tcpdump tcp port <p>`
# - shouldn't be too spammy
#
# matrix integration:
# - the user must manually point synapse to the ntfy endpoint:
# - `curl --header "Authorization: Bearer <your_token>" --data '{ "app_display_name": "sane-nix moby", "app_id": "ntfy.uninsane.org", "data": { "url": "https://ntfy.uninsane.org/_matrix/push/v1/notify", "format": "event_id_only" }, "device_display_name": "sane-nix moby", "kind": "http", "lang": "en-US", "profile_tag": "", "pushkey": "https://ntfy.uninsane.org/TOPIC" }' localhost:8008/_matrix/client/v3/pushers/set`
# where the token is grabbed from Element's help&about page when logged in
# - to remove, send this `curl` with `"kind": null`
{ config, lib, pkgs, ... }:
let
# subscribers need a non-443 public port to listen on as a way to easily differentiate this traffic
# at the IP layer, to enable e.g. wake-on-lan.
altPort = 2587;
in
{
sane.persist.sys.byStore.plaintext = [
# not 100% necessary to persist this, but ntfy does keep a 12hr (by default) cache
# for pushing notifications to users who become offline.
# ACLs also live here.
{ user = "ntfy-sh"; group ="ntfy-sh"; path = "/var/lib/ntfy-sh"; method = "bind"; }
];
services.ntfy-sh.enable = true;
services.ntfy-sh.settings = {
base-url = "https://ntfy.uninsane.org";
behind-proxy = true; # not sure if needed
# keepalive interval is a ntfy-specific keepalive thing, where it sends actual data down the wire.
# it's not simple TCP keepalive.
# defaults to 45s.
# note that the client may still do its own TCP-level keepalives, typically every 30s
keepalive-interval = "15m";
log-level = "trace"; # trace, debug, info (default), warn, error
auth-default-access = "deny-all";
};
systemd.services.ntfy-sh.serviceConfig.DynamicUser = lib.mkForce false;
systemd.services.ntfy-sh.preStart = ''
# make this specific topic read-write by world
# it would be better to use the token system, but that's extra complexity for e.g.
# how do i plumb a secret into the Matrix notification pusher
#
# note that this will fail upon first run, i.e. before ntfy has created its db.
# just restart the service.
topic=$(cat ${config.sops.secrets.ntfy-sh-topic.path})
${pkgs.ntfy-sh}/bin/ntfy access everyone "$topic" read-write
'';
services.nginx.virtualHosts."ntfy.uninsane.org" = {
forceSSL = true;
enableACME = true;
listen = [
{ addr = "0.0.0.0"; port = altPort; ssl = true; }
{ addr = "0.0.0.0"; port = 443; ssl = true; }
{ addr = "0.0.0.0"; port = 80; ssl = false; }
];
locations."/" = {
proxyPass = "http://127.0.0.1:2586";
proxyWebsockets = true; #< support websocket upgrades. without that, `ntfy sub` hangs silently
recommendedProxySettings = true; #< adds headers so ntfy logs include the real IP
extraConfig = ''
# absurdly long timeout (86400s=24h) so that we never hang up on clients.
# make sure the client is smart enough to detect a broken proxy though!
proxy_read_timeout 86400s;
'';
};
};
sane.dns.zones."uninsane.org".inet.CNAME."ntfy" = "native";
sane.ports.ports."${builtins.toString altPort}" = {
protocol = [ "tcp" ];
visibleTo.lan = true;
visibleTo.doof = true;
description = "colin-ntfy.uninsane.org";
};
}

View File

@ -1,151 +0,0 @@
#!/usr/bin/env nix-shell
#!nix-shell -i python3 -p "python3.withPackages (ps: [ ])" -p ntfy-sh
import argparse
import logging
import os
import socket
import subprocess
import sys
import threading
import time
logger = logging.getLogger(__name__)
LISTEN_QUEUE = 3
WAKE_MESSAGE = b'notification\n'
class Client:
def __init__(self, sock, addr_info, live_after: float):
self.live_after = live_after
self.sock = sock
self.addr_info = addr_info
# python3 has no `__cmp__`/`cmp`; order clients by their address info instead
def __lt__(self, other: 'Client'):
return self.addr_info < other.addr_info
def try_notify(self, message: bytes) -> bool:
"""
returns True if we sent a packet to notify the client,
False otherwise (e.g. the socket is dead).
"""
ttl = self.live_after - time.time()
if ttl > 0:
logger.debug(f"sleeping {ttl:.2f}s until client {self.addr_info} is ready to receive notification")
time.sleep(ttl)
try:
self.sock.sendall(message)
except Exception as e:
logger.warning(f"failed to notify client {self.addr_info} {e}")
return False
else:
logger.info(f"successfully notified {self.addr_info}: {message}")
return True
class Adapter:
def __init__(self, host: str, port: int, silence: int, topic: str):
self.host = host
self.port = port
self.silence = silence
self.topic = topic
self.clients = set()
def log_clients(self):
clients_str = '\n'.join(f' {c.addr_info}' for c in self.clients)
logger.debug(f"clients alive ({len(self.clients)}):\n{clients_str}")
def add_client(self, client: Client):
# it's a little bit risky to keep more than one client at the same IP address,
# because it's possible a notification comes in and we ring the old connection,
# even when the new connection says "don't ring yet".
for c in set(self.clients):
if c.addr_info[0] == client.addr_info[0]:
logger.info(f"purging old client before adding new one at same address: {c.addr_info} -> {client.addr_info}")
self.clients.remove(c)
logger.info(f"accepted client at {client.addr_info}")
self.clients.add(client)
def listener_loop(self):
logger.info(f"listening for connections on {self.host}:{self.port}")
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((self.host, self.port))
s.listen(LISTEN_QUEUE)
while True:
conn, addr_info = s.accept()
self.add_client(Client(conn, addr_info, live_after = time.time() + self.silence))
def notify_clients(self, message: bytes = WAKE_MESSAGE):
# notify every client, and drop any which have disconnected.
# note that we notify based on age (oldest -> youngest)
# because notifying young clients might entail sleeping until they're ready.
clients = sorted(self.clients, key=lambda c: (c.live_after, c.addr_info))
dead_clients = [
c for c in clients if not c.try_notify(message)
]
for c in dead_clients:
self.clients.remove(c)
self.log_clients()
def notify_loop(self):
logger.info("waiting for notification events")
ntfy_proc = subprocess.Popen(
[
"ntfy",
"sub",
f"https://ntfy.uninsane.org/{self.topic}"
],
stdout=subprocess.PIPE
)
for line in iter(ntfy_proc.stdout.readline, b''):
logger.debug(f"received notification: {line}")
self.notify_clients()
def get_topic() -> str:
return open('/run/secrets/ntfy-sh-topic', 'rt').read().strip()
def run_forever(callable):
try:
callable()
except Exception as e:
logger.error(f"{callable} failed: {e}")
else:
logger.error(f"{callable} unexpectedly returned")
# sys.exit(1)
os._exit(1) # sometimes `sys.exit()` doesn't actually exit...
def main():
logging.basicConfig()
logging.getLogger().setLevel(logging.DEBUG)
parser = argparse.ArgumentParser(description="accept connections and notify the other end upon ntfy activity, with a guaranteed amount of silence")
parser.add_argument('--verbose', action='store_true')
parser.add_argument('--host', type=str, default='')
parser.add_argument('--port', type=int)
parser.add_argument('--silence', type=int, help="number of seconds to remain silent upon accepting a connection")
args = parser.parse_args()
if args.verbose:
logging.getLogger().setLevel(logging.DEBUG)
else:
logging.getLogger().setLevel(logging.INFO)
adapter = Adapter(args.host, args.port, args.silence, get_topic())
listener_loop = threading.Thread(target=run_forever, name="listener_loop", args=(adapter.listener_loop,))
notify_loop = threading.Thread(target=run_forever, name="notify_loop", args=(adapter.notify_loop,))
# TODO: this method of exiting seems to sometimes leave the listener behind (?)
# preventing anyone else from re-binding the port.
listener_loop.start()
notify_loop.start()
listener_loop.join()
notify_loop.join()
if __name__ == '__main__':
main()

View File

@ -1,72 +0,0 @@
# service which adapts ntfy-sh into something suitable specifically for the Pinephone's
# wake-on-lan (WoL) feature.
# notably, it provides a mechanism by which the caller can be confident of an interval in which
# zero traffic will occur on the TCP connection, thus allowing it to enter sleep w/o fear of hitting
# race conditions in the Pinephone WoL feature.
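# rough usage sketch (assuming the port/silence mapping defined below, where port 5550+N grants N seconds of silence):
#   nc servo 5555   # hypothetical client: at least 5s of guaranteed silence, then one "notification" line per ntfy event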
{ config, lib, pkgs, ... }:
let
cfg = config.sane.ntfy-waiter;
portLow = 5550;
portHigh = 5559;
portRange = lib.range portLow portHigh;
numPorts = portHigh - portLow + 1;
mkService = port: let
silence = port - portLow;
flags = lib.optional cfg.verbose "--verbose";
cli = [
"${cfg.package}/bin/ntfy-waiter"
"--port"
"${builtins.toString port}"
"--silence"
"${builtins.toString silence}"
] ++ flags;
in {
"ntfy-waiter-${builtins.toString silence}" = {
# TODO: run not as root (e.g. as ntfy-sh)
description = "wait for notification, with ${builtins.toString silence} seconds of guaranteed silence";
serviceConfig = {
Type = "simple";
Restart = "always";
RestartSec = "5s";
ExecStart = lib.concatStringsSep " " cli;
};
after = [ "network.target" ];
wantedBy = [ "default.target" ];
};
};
in
{
options = with lib; {
sane.ntfy-waiter.enable = mkOption {
type = types.bool;
default = true;
};
sane.ntfy-waiter.verbose = mkOption {
type = types.bool;
default = true;
};
sane.ntfy-waiter.package = mkOption {
type = types.package;
default = pkgs.static-nix-shell.mkPython3Bin {
pname = "ntfy-waiter";
srcRoot = ./.;
pkgs = [ "ntfy-sh" ];
};
description = ''
exposed to provide an attr-path by which one may build the package for manual testing.
'';
};
};
config = lib.mkIf cfg.enable {
sane.ports.ports = lib.mkMerge (lib.forEach portRange (port: {
"${builtins.toString port}" = {
protocol = [ "tcp" ];
visibleTo.doof = true;
visibleTo.lan = true;
description = "colin-notification-waiter-${builtins.toString (port - portLow + 1)}-of-${builtins.toString numPorts}";
};
}));
systemd.services = lib.mkMerge (builtins.map mkService portRange);
};
}

View File

@ -1,23 +0,0 @@
# pict-rs is an image database/store used by Lemmy.
# i don't explicitly activate it here -- just adjust its defaults to be a bit friendlier
{ config, lib, ... }:
let
cfg = config.services.pict-rs;
in
{
sane.persist.sys.byStore.plaintext = lib.mkIf cfg.enable [
{ user = "pict-rs"; group = "pict-rs"; path = cfg.dataDir; method = "bind"; }
];
systemd.services.pict-rs.serviceConfig = {
# fix to use a normal user so we can configure perms correctly
DynamicUser = lib.mkForce false;
User = "pict-rs";
Group = "pict-rs";
};
users.groups.pict-rs = {};
users.users.pict-rs = {
group = "pict-rs";
isSystemUser = true;
};
}

View File

@ -1,212 +0,0 @@
# docs:
# - <https://github.com/NixOS/nixpkgs/blob/master/nixos/modules/services/networking/pleroma.nix>
# - <https://docs.pleroma.social/backend/configuration/cheatsheet/>
# example config:
# - <https://git.pleroma.social/pleroma/pleroma/-/blob/develop/config/config.exs>
#
# to run it in a oci-container: <https://github.com/barrucadu/nixfiles/blob/master/services/pleroma.nix>
#
# admin frontend: <https://fed.uninsane.org/pleroma/admin>
{ config, pkgs, ... }:
let
logLevel = "warn";
# logLevel = "debug";
in
{
sane.persist.sys.byStore.plaintext = [
{ user = "pleroma"; group = "pleroma"; path = "/var/lib/pleroma"; method = "bind"; }
];
services.pleroma.enable = true;
services.pleroma.secretConfigFile = config.sops.secrets.pleroma_secrets.path;
services.pleroma.configs = [
''
import Config
config :pleroma, Pleroma.Web.Endpoint,
url: [host: "fed.uninsane.org", scheme: "https", port: 443],
http: [ip: {127, 0, 0, 1}, port: 4040]
# secret_key_base: "{secrets.pleroma.secret_key_base}",
# signing_salt: "{secrets.pleroma.signing_salt}"
config :pleroma, :instance,
name: "Perfectly Sane",
description: "Single-user Pleroma instance",
email: "admin.pleroma@uninsane.org",
notify_email: "notify.pleroma@uninsane.org",
limit: 5000,
registrations_open: true,
account_approval_required: true,
max_pinned_statuses: 5,
external_user_synchronization: true
# docs: https://hexdocs.pm/swoosh/Swoosh.Adapters.Sendmail.html
# test mail config with sudo -u pleroma ./bin/pleroma_ctl email test --to someone@somewhere.net
config :pleroma, Pleroma.Emails.Mailer,
enabled: true,
adapter: Swoosh.Adapters.Sendmail,
cmd_path: "${pkgs.postfix}/bin/sendmail"
config :pleroma, Pleroma.User,
restricted_nicknames: [ "admin", "uninsane", "root" ]
config :pleroma, :media_proxy,
enabled: false,
redirect_on_failure: true
#base_url: "https://cache.pleroma.social"
# see for reference:
# - `force_custom_plan`: <https://docs.pleroma.social/backend/configuration/postgresql/#disable-generic-query-plans>
config :pleroma, Pleroma.Repo,
adapter: Ecto.Adapters.Postgres,
username: "pleroma",
database: "pleroma",
hostname: "localhost",
pool_size: 10,
prepare: :named,
parameters: [
plan_cache_mode: "force_custom_plan"
]
# XXX: prepare: :named is needed only for PG <= 12
# prepare: :named,
# password: "{secrets.pleroma.db_password}",
# Configure web push notifications
config :web_push_encryption, :vapid_details,
subject: "mailto:notify.pleroma@uninsane.org"
# public_key: "{secrets.pleroma.vapid_public_key}",
# private_key: "{secrets.pleroma.vapid_private_key}"
# config :joken, default_signer: "{secrets.pleroma.joken_default_signer}"
config :pleroma, :database, rum_enabled: false
config :pleroma, :instance, static_dir: "/var/lib/pleroma/instance/static"
config :pleroma, Pleroma.Uploaders.Local, uploads: "/var/lib/pleroma/uploads"
config :pleroma, configurable_from_database: false
# strip metadata from uploaded images
config :pleroma, Pleroma.Upload, filters: [Pleroma.Upload.Filter.Exiftool.StripLocation]
# TODO: GET /api/pleroma/captcha is broken
# there was a nixpkgs PR to fix this around 2022/10 though.
config :pleroma, Pleroma.Captcha,
enabled: false,
method: Pleroma.Captcha.Native
# (enabled by colin)
# Enable Strict-Transport-Security once SSL is working:
config :pleroma, :http_security,
sts: true
# docs: https://docs.pleroma.social/backend/configuration/cheatsheet/#logger
config :logger,
backends: [{ExSyslogger, :ex_syslogger}]
config :logger, :ex_syslogger,
level: :${logLevel}
# policies => list of message rewriting facilities to be enabled
# transparence => whether to publish these rules in node_info (and /about)
config :pleroma, :mrf,
policies: [Pleroma.Web.ActivityPub.MRF.SimplePolicy],
transparency: true
# reject => { host, reason }
config :pleroma, :mrf_simple,
reject: [ {"threads.net", "megacorp"}, {"*.threads.net", "megacorp"} ]
# reject: [ [host: "threads.net", reason: "megacorp"], [host: "*.threads.net", reason: "megacorp"] ]
# XXX colin: not sure if this actually _does_ anything
# better to steal emoji from other instances?
# - <https://docs.pleroma.social/backend/configuration/cheatsheet/#mrf_steal_emoji>
config :pleroma, :emoji,
shortcode_globs: ["/emoji/**/*.png"],
groups: [
"Cirno": "/emoji/cirno/*.png",
"Kirby": "/emoji/kirby/*.png",
"Bun": "/emoji/bun/*.png",
"Yuru Camp": "/emoji/yuru_camp/*.png",
]
''
];
systemd.services.pleroma.path = [
# something inside pleroma invokes `sh` w/o specifying it by path, so this is needed to allow pleroma to start
pkgs.bash
# used by Pleroma to strip geo tags from uploads
pkgs.exiftool
# i saw some errors when pleroma was shutting down about it not being able to find `awk`. probably not critical
pkgs.gawk
# needed for email operations like password reset
pkgs.postfix
];
systemd.services.pleroma.serviceConfig = {
# postgres can be slow to service early requests, preventing pleroma from starting on the first try
Restart = "on-failure";
RestartSec = "10s";
};
# systemd.services.pleroma.serviceConfig = {
# # required for sendmail. see https://git.pleroma.social/pleroma/pleroma/-/issues/2259
# NoNewPrivileges = lib.mkForce false;
# PrivateTmp = lib.mkForce false;
# CapabilityBoundingSet = lib.mkForce "~";
# };
# this is required to allow pleroma to send email.
# raw `sendmail` works, but i think pleroma's passing it some funny flags or something, idk.
# hack to fix that.
users.users.pleroma.extraGroups = [ "postdrop" ];
# Pleroma server and web interface
# TODO: enable publog?
services.nginx.virtualHosts."fed.uninsane.org" = {
forceSSL = true; # pleroma redirects to https anyway
enableACME = true;
# inherit kTLS;
locations."/" = {
proxyPass = "http://127.0.0.1:4040";
recommendedProxySettings = true;
# documented: https://git.pleroma.social/pleroma/pleroma/-/blob/develop/installation/pleroma.nginx
extraConfig = ''
# XXX colin: this block is in the nixos examples: i don't understand all of it
add_header 'Access-Control-Allow-Origin' '*' always;
add_header 'Access-Control-Allow-Methods' 'POST, PUT, DELETE, GET, PATCH, OPTIONS' always;
add_header 'Access-Control-Allow-Headers' 'Authorization, Content-Type, Idempotency-Key' always;
add_header 'Access-Control-Expose-Headers' 'Link, X-RateLimit-Reset, X-RateLimit-Limit, X-RateLimit-Remaining, X-Request-Id' always;
if ($request_method = OPTIONS) {
return 204;
}
add_header X-XSS-Protection "1; mode=block";
add_header X-Permitted-Cross-Domain-Policies none;
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;
add_header Referrer-Policy same-origin;
add_header X-Download-Options noopen;
# proxy_http_version 1.1;
# proxy_set_header Upgrade $http_upgrade;
# proxy_set_header Connection "upgrade";
# # proxy_set_header Host $http_host;
# proxy_set_header Host $host;
# proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# colin: added this due to Pleroma complaining in its logs
# proxy_set_header X-Real-IP $remote_addr;
# proxy_set_header X-Forwarded-Proto $scheme;
# NB: this defines the maximum upload size
client_max_body_size 16m;
'';
};
};
sane.dns.zones."uninsane.org".inet.CNAME."fed" = "native";
sops.secrets."pleroma_secrets" = {
owner = config.users.users.pleroma.name;
};
}

View File

@ -1,89 +0,0 @@
{ pkgs, ... }:
let
GiB = n: MiB 1024*n;
MiB = n: KiB 1024*n;
KiB = n: 1024*n;
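# NB: function application binds tighter than `*`, so e.g. `KiB 1024*n` parses as `(KiB 1024) * n`;
# these helpers therefore evaluate to the intended byte counts (KiB n = 1024*n, MiB n = 1048576*n, ...).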
in
{
sane.persist.sys.byStore.plaintext = [
# TODO: mode?
{ user = "postgres"; group = "postgres"; path = "/var/lib/postgresql"; method = "bind"; }
];
services.postgresql.enable = true;
# HOW TO UPDATE:
# postgres version updates are manual and require intervention.
# - `sane-stop-all-servo`
# - `systemctl start postgresql`
# - as `sudo su postgres`:
# - `cd /var/log/postgresql`
# - `pg_dumpall > state.sql`
# - `echo placeholder > <new_version>` # to prevent state from being created earlier than we want
# - then, atomically:
# - update the `services.postgresql.package` here
# - `dataDir` is atomically updated to match package; don't touch
# - `nixos-rebuild --flake . switch ; sane-stop-all-servo`
# - `sudo rm -rf /var/lib/postgresql/<new_version>`
# - `systemctl start postgresql`
# - as `sudo su postgres`:
# - `cd /var/lib/postgresql`
# - `psql -f state.sql`
# - restart dependent services (maybe test one at a time)
services.postgresql.package = pkgs.postgresql_15;
# XXX colin: for a proper deploy, we'd want to include something for Pleroma here too.
# services.postgresql.initialScript = pkgs.writeText "synapse-init.sql" ''
# CREATE ROLE "matrix-synapse" WITH LOGIN PASSWORD '<password goes here>';
# CREATE DATABASE "matrix-synapse" WITH OWNER "matrix-synapse"
# TEMPLATE template0
# ENCODING = "UTF8"
# LC_COLLATE = "C"
# LC_CTYPE = "C";
# '';
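# for Pleroma, a hypothetical sketch of the same idea (role/db names assumed, not taken from this repo):
#   CREATE ROLE "pleroma" WITH LOGIN PASSWORD '<password goes here>';
#   CREATE DATABASE "pleroma" WITH OWNER "pleroma"
#     TEMPLATE template0
#     ENCODING = "UTF8"
#     LC_COLLATE = "C"
#     LC_CTYPE = "C";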
# perf tuning
# - for recommended values see: <https://pgtune.leopard.in.ua/>
# - for official docs (sparse), see: <https://www.postgresql.org/docs/11/config-setting.html#CONFIG-SETTING-CONFIGURATION-FILE>
services.postgresql.settings = {
# DB Version: 15
# OS Type: linux
# DB Type: web
# Total Memory (RAM): 32 GB
# CPUs num: 12
# Data Storage: ssd
max_connections = 200;
shared_buffers = "8GB";
effective_cache_size = "24GB";
maintenance_work_mem = "2GB";
checkpoint_completion_target = 0.9;
wal_buffers = "16MB";
default_statistics_target = 100;
random_page_cost = 1.1;
effective_io_concurrency = 200;
work_mem = "10485kB";
min_wal_size = "1GB";
max_wal_size = "4GB";
max_worker_processes = 12;
max_parallel_workers_per_gather = 4;
max_parallel_workers = 12;
max_parallel_maintenance_workers = 4;
};
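# to sanity-check that the tuned values above are live (sketch; works for any setting name):
#   sudo -u postgres psql -c 'SHOW shared_buffers;'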
# daily backups to /var/backup
services.postgresqlBackup.enable = true;
# common admin operations:
# sudo systemctl start postgresql
# sudo -u postgres psql
# > \l # lists all databases
# > \du # lists all roles
# > \c pleroma # connects to database by name
# > \d # shows all tables
# > \q # exits psql
# dump/restore (-F t = tar):
# sudo -u postgres pg_dump -F t pleroma > /backup/pleroma-db.tar
# sudo -u postgres -g postgres pg_restore -d pleroma /backup/pleroma-db.tar
}

View File

@ -1,289 +0,0 @@
# example configs:
# - official: <https://prosody.im/doc/example_config>
# - nixos: <https://github.com/kittywitch/nixfiles/blob/main/services/prosody.nix>
# config options:
# - <https://prosody.im/doc/configure>
#
# modules:
# - main: <https://prosody.im/doc/modules>
# - community: <https://modules.prosody.im/index.html>
#
# debugging:
# - logging:
# - enable `stanza_debug` module
# - enable `log.debug = "*syslog"` in extraConfig
# - interactive:
# - `telnet localhost 5582` (equivalent to `prosodyctl shell`, but doesn't hang)
# - `watch:stanzas(target_spec, filter)` -> to log stanzas, for version > 0.12
# - console docs: <https://prosody.im/doc/console>
# - can modify/inspect arbitrary internals (lua) by prefixing line with `> `
# - e.g. `> _G` to print all globals
#
# sanity checks:
# - `sudo -u prosody -g prosody prosodyctl check connectivity`
# - `sudo -u prosody -g prosody prosodyctl check turn`
# - `sudo -u prosody -g prosody prosodyctl check turn -v --ping=stun.conversations.im`
# - checks that my stun/turn server is usable by clients of conversations.im (?)
# - `sudo -u prosody -g prosody prosodyctl check` (dns, config, certs)
#
#
# create users with:
# - `sudo -u prosody prosodyctl adduser colin@uninsane.org`
#
#
# federation/support matrix:
# - nixnet.services (runs ejabberd):
# - WORKS: sending and receiving PMs and calls (2023/10/15)
# - N.B.: it didn't originally work; was solved by disabling the lua-unbound DNS option & forcing the system/local resolver
# - cheogram (XMPP <-> SMS gateway):
# - WORKS: sending and receiving PMs, images (2023/10/15)
# - PARTIAL: calls (xmpp -> tel works; tel -> xmpp fails)
# - maybe i need to setup stun/turn
#
# TODO:
# - enable push notifications (mod_cloud_notify)
# - optimize coturn (e.g. move off of the VPN!)
# - ensure muc is working
# - enable file uploads
# - "upload.xmpp.uninsane.org:http_upload: URL: <https://upload.xmpp.uninsane.org:5281/upload> - Ensure this can be reached by users"
# - disable or fix bosh (jabber over http):
# - "certmanager: No certificate/key found for client_https port 0"
{ lib, pkgs, ... }:
let
# enables very verbose logging
enableDebug = false;
in
{
sane.persist.sys.byStore.plaintext = [
{ user = "prosody"; group = "prosody"; path = "/var/lib/prosody"; method = "bind"; }
];
sane.ports.ports."5000" = {
protocol = [ "tcp" ];
visibleTo.doof = true;
visibleTo.lan = true;
description = "colin-xmpp-prosody-fileshare-proxy65";
};
sane.ports.ports."5222" = {
protocol = [ "tcp" ];
visibleTo.doof = true;
visibleTo.lan = true;
description = "colin-xmpp-client-to-server";
};
sane.ports.ports."5223" = {
protocol = [ "tcp" ];
visibleTo.doof = true;
visibleTo.lan = true;
description = "colin-xmpps-client-to-server"; # XMPP over TLS
};
sane.ports.ports."5269" = {
protocol = [ "tcp" ];
visibleTo.doof = true;
description = "colin-xmpp-server-to-server";
};
sane.ports.ports."5270" = {
protocol = [ "tcp" ];
visibleTo.doof = true;
description = "colin-xmpps-server-to-server"; # XMPP over TLS
};
sane.ports.ports."5280" = {
protocol = [ "tcp" ];
visibleTo.doof = true;
visibleTo.lan = true;
description = "colin-xmpp-bosh";
};
sane.ports.ports."5281" = {
protocol = [ "tcp" ];
visibleTo.doof = true;
visibleTo.lan = true;
description = "colin-xmpp-prosody-https"; # necessary?
};
users.users.prosody.extraGroups = [
"nginx" # provide access to certs
"ntfy-sh" # access to secret ntfy topic
];
security.acme.certs."uninsane.org".extraDomainNames = [
"xmpp.uninsane.org"
"conference.xmpp.uninsane.org"
"upload.xmpp.uninsane.org"
];
# exists so the XMPP server's cert can obtain altNames for all its resources
services.nginx.virtualHosts."xmpp.uninsane.org" = {
useACMEHost = "uninsane.org";
};
services.nginx.virtualHosts."conference.xmpp.uninsane.org" = {
useACMEHost = "uninsane.org";
};
services.nginx.virtualHosts."upload.xmpp.uninsane.org" = {
useACMEHost = "uninsane.org";
};
sane.dns.zones."uninsane.org".inet = {
# XXX: SRV records have to point to something with a A/AAAA record; no CNAMEs
A."xmpp" = "%ANATIVE%";
CNAME."conference.xmpp" = "xmpp";
CNAME."upload.xmpp" = "xmpp";
# _Service._Proto.Name TTL Class SRV Priority Weight Port Target
# - <https://xmpp.org/extensions/xep-0368.html>
# something's requesting the SRV records for conference.xmpp, so let's include it
# nothing seems to request XMPP SRVs for the other records (except @)
# lower numerical priority field tells clients to prefer this method
SRV."_xmpps-client._tcp.conference.xmpp" = "3 50 5223 xmpp";
SRV."_xmpps-server._tcp.conference.xmpp" = "3 50 5270 xmpp";
SRV."_xmpp-client._tcp.conference.xmpp" = "5 50 5222 xmpp";
SRV."_xmpp-server._tcp.conference.xmpp" = "5 50 5269 xmpp";
SRV."_xmpps-client._tcp" = "3 50 5223 xmpp";
SRV."_xmpps-server._tcp" = "3 50 5270 xmpp";
SRV."_xmpp-client._tcp" = "5 50 5222 xmpp";
SRV."_xmpp-server._tcp" = "5 50 5269 xmpp";
};
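# to spot-check the SRV records from outside (sketch; any of the records above works):
#   dig +short SRV _xmpps-client._tcp.uninsane.org @ns1.uninsane.org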
# help Prosody find its certificates.
# pointing it to /var/lib/acme doesn't quite work because it expects the private key
# to be named `privkey.pem` instead of acme's `key.pem`
# <https://prosody.im/doc/certificates#automatic_location>
sane.fs."/etc/prosody/certs/uninsane.org/fullchain.pem" = {
symlink.target = "/var/lib/acme/uninsane.org/fullchain.pem";
wantedBeforeBy = [ "prosody.service" ];
};
sane.fs."/etc/prosody/certs/uninsane.org/privkey.pem" = {
symlink.target = "/var/lib/acme/uninsane.org/key.pem";
wantedBeforeBy = [ "prosody.service" ];
};
services.prosody = {
enable = true;
package = pkgs.prosody.override {
# XXX(2023/10/15): build without lua-unbound support.
# this forces Prosody to fall back to the default Lua DNS resolver, which seems more reliable.
# fixes errors like "unbound.queryXYZUV: Resolver error: out of memory"
# related: <https://issues.prosody.im/1737#comment-11>
lua.withPackages = selector: pkgs.lua.withPackages (p:
selector (p // { luaunbound = null; })
);
# withCommunityModules = [ "turncredentials" ];
};
admins = [ "colin@uninsane.org" ];
# allowRegistration = false; # defaults to false
muc = [
{
domain = "conference.xmpp.uninsane.org";
}
];
uploadHttp.domain = "upload.xmpp.uninsane.org";
virtualHosts = {
# "Prosody requires at least one enabled VirtualHost to function. You can
# safely remove or disable 'localhost' once you have added another."
# localhost = {
# domain = "localhost";
# enabled = true;
# };
"xmpp.uninsane.org" = {
domain = "uninsane.org";
enabled = true;
};
};
## modules:
# these are enabled by default, via <repo:nixos/nixpkgs:/pkgs/servers/xmpp/prosody/default.nix>
# - cloud_notify
# - http_upload
# - vcard_muc
# these are enabled by the module defaults (services.prosody.modules.<foo>)
# - admin_adhoc
# - blocklist
# - bookmarks
# - carbons
# - cloud_notify
# - csi
# - dialback
# - disco
# - http_files
# - mam
# - pep
# - ping
# - private
# - XEP-0049: let clients store arbitrary (private) data on the server
# - proxy65
# - XEP-0065: allow server to proxy file transfers between two clients who are behind NAT
# - register
# - roster
# - saslauth
# - smacks
# - time
# - tls
# - uptime
# - vcard_legacy
# - version
extraPluginPaths = [ ./modules ];
extraModules = [
# admin_shell: allows `prosodyctl shell` to work
# see: <https://prosody.im/doc/modules/mod_admin_shell>
# see: <https://prosody.im/doc/console>
"admin_shell"
"admin_telnet" #< needed by admin_shell
# lastactivity: XEP-0012: allow users to query how long another user has been idle for
# - not sure why i enabled this; think it was in someone's config i referenced
"lastactivity"
# allows prosody to share TURN/STUN secrets with XMPP clients to provide them access to the coturn server.
# see: <https://prosody.im/doc/coturn>
"turn_external"
# legacy coturn integration
# see: <https://modules.prosody.im/mod_turncredentials.html>
# "turncredentials"
"sane_ntfy"
] ++ lib.optionals enableDebug [
"stanza_debug" #< logs EVERY stanza as debug: <https://prosody.im/doc/modules/mod_stanza_debug>
];
extraConfig = ''
local function readAll(file)
local f = assert(io.open(file, "rb"))
local content = f:read("*all")
f:close()
-- remove trailing newline
return string.gsub(content, "%s+", "")
end
-- logging docs:
-- - <https://prosody.im/doc/logging>
-- - <https://prosody.im/doc/advanced_logging>
-- levels: debug, info, warn, error
log = {
${if enableDebug then "debug" else "info"} = "*syslog";
}
-- see: <https://prosody.im/doc/certificates#automatic_location>
-- try to solve: "certmanager: Error indexing certificate directory /etc/prosody/certs: cannot open /etc/prosody/certs: No such file or directory"
-- only, this doesn't work because prosody doesn't like acme's naming scheme
-- certificates = "/var/lib/acme"
c2s_direct_tls_ports = { 5223 }
s2s_direct_tls_ports = { 5270 }
turn_external_host = "turn.uninsane.org"
turn_external_secret = readAll("/var/lib/coturn/shared_secret.bin")
-- turn_external_user = "prosody"
-- legacy mod_turncredentials integration
-- turncredentials_host = "turn.uninsane.org"
-- turncredentials_secret = readAll("/var/lib/coturn/shared_secret.bin")
ntfy_binary = "${pkgs.ntfy-sh}/bin/ntfy"
ntfy_topic = readAll("/run/secrets/ntfy-sh-topic")
-- s2s_require_encryption = true
-- c2s_require_encryption = true
'';
};
}

View File

@ -1,52 +0,0 @@
-- simple proof-of-concept Prosody module
-- module development guide: <https://prosody.im/doc/developers/modules>
-- module API docs: <https://prosody.im/doc/developers/moduleapi>
--
-- much of this code is lifted from Prosody's own `mod_cloud_notify`
local jid = require"util.jid";
local ntfy = module:get_option_string("ntfy_binary", "ntfy");
local ntfy_topic = module:get_option_string("ntfy_topic", "xmpp");
module:log("info", "initialized");
local function is_urgent(stanza)
if stanza.name == "message" then
if stanza:get_child("propose", "urn:xmpp:jingle-message:0") then
return true, "jingle call";
end
end
end
local function publish_ntfy(message)
-- message should be the message to publish
local ntfy_url = string.format("https://ntfy.uninsane.org/%s", ntfy_topic)
local cmd = string.format("%s pub %q %q", ntfy, ntfy_url, message)
module:log("debug", "invoking ntfy: %s", cmd)
local success, reason, code = os.execute(cmd)
if not success then
module:log("warn", "ntfy failed: %s => %s %d", cmd, reason, code)
end
end
local function archive_message_added(event)
-- event is: { origin = origin, stanza = stanza, for_user = store_user, id = id }
local stanza = event.stanza;
local to = stanza.attr.to;
to = to and jid.split(to) or event.origin.username;
-- only notify if the stanza destination is the mam user we store it for
if event.for_user == to then
local is_urgent_stanza, urgent_reason = is_urgent(event.stanza);
if is_urgent_stanza then
module:log("info", "urgent push for %s (%s)", to, urgent_reason);
publish_ntfy(urgent_reason)
end
end
end
module:hook("archive-message-added", archive_message_added);
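-- to exercise the ntfy path by hand (sketch; the real topic is read from /run/secrets/ntfy-sh-topic):
--   ntfy pub https://ntfy.uninsane.org/<topic> "test message"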

View File

@ -1,79 +0,0 @@
# Soulseek daemon (p2p file sharing with an emphasis on Music)
# docs: <https://github.com/slskd/slskd/blob/master/docs/config.md>
#
# config precedence (higher precedence overrules lower precedence):
# - Default Values < Environment Variables < YAML Configuration File < Command Line Arguments
#
# debugging:
# - soulseek is just *flaky*. if you see e.g. DNS errors, even though you can't replicate them via `dig` or `getent ahostsv4`, just give it 10 minutes to work out:
# - "Soulseek.AddressException: Failed to resolve address 'vps.slsknet.org': Resource temporarily unavailable"
{ config, lib, pkgs, ... }:
{
sane.persist.sys.byStore.plaintext = [
{ user = "slskd"; group = "media"; path = "/var/lib/slskd"; method = "bind"; }
];
sops.secrets."slskd_env" = {
owner = config.users.users.slskd.name;
mode = "0400";
};
users.users.slskd.extraGroups = [ "media" ];
sane.ports.ports."50300" = {
protocol = [ "tcp" ];
# visibleTo.ovpns = true; #< not needed: it runs in the ovpns namespace
description = "colin-soulseek";
};
sane.dns.zones."uninsane.org".inet.CNAME."soulseek" = "native";
services.nginx.virtualHosts."soulseek.uninsane.org" = {
forceSSL = true;
enableACME = true;
locations."/" = {
proxyPass = "http://${config.sane.netns.ovpns.netnsVethIpv4}:5030";
proxyWebsockets = true;
};
};
services.slskd.enable = true;
services.slskd.domain = null; # i'll manage nginx for it
services.slskd.group = "media";
# env file, for auth (SLSKD_SLSK_PASSWORD, SLSKD_SLSK_USERNAME)
services.slskd.environmentFile = config.sops.secrets.slskd_env.path;
services.slskd.settings = {
soulseek.diagnostic_level = "Debug"; # one of "None"|"Warning"|"Info"|"Debug"
shares.directories = [
# folders to share
# syntax: <https://github.com/slskd/slskd/blob/master/docs/config.md#directories>
# [Alias]/path/on/disk
# NOTE: Music library is quick to scan; videos take a solid 10min to scan.
# TODO: re-enable the other libraries
# "[Audioooks]/var/media/Books/Audiobooks"
# "[Books]/var/media/Books/Books"
# "[Manga]/var/media/Books/Visual"
# "[games]/var/media/games"
"[Music]/var/media/Music"
# "[Film]/var/media/Videos/Film"
# "[Shows]/var/media/Videos/Shows"
];
# directories.downloads = "..." # TODO
# directories.incomplete = "..." # TODO
# what unit is this? kbps??
global.upload.speed_limit = 32000;
web.logging = true;
# debug = true;
flags.no_logo = true; # don't show logo at start
# flags.volatile = true; # store searches and active transfers in RAM (completed transfers still go to disk). rec for btrfs/zfs
};
systemd.services.slskd.serviceConfig = {
# run this behind the OVPN static VPN
NetworkNamespacePath = "/run/netns/ovpns";
ExecStartPre = [ "${lib.getExe pkgs.sane-scripts.ip-check} --no-upnp --expect ${config.sane.netns.ovpns.netnsPubIpv4}" ]; # abort if public IP is not as expected
Restart = lib.mkForce "always"; # exits "success" when it fails to connect to soulseek server
RestartSec = "60s";
};
}

View File

@ -1,156 +0,0 @@
{ config, lib, pkgs, ... }:
let
# 2023/09/06: nixpkgs `transmission` defaults to old 3.00
# 2024/02/15: some torrent trackers whitelist clients; everyone is still on 3.00 for some reason :|
# some do this via peer-id (e.g. baka); others via user-agent (e.g. MAM).
# peer-id format is essentially the same between 3.00 and 4.x (just swap the MAJOR/MINOR/PATCH numbers).
# user-agent format has changed. `Transmission/3.00` (old) v.s. `TRANSMISSION/MAJ.MIN.PATCH` (new).
realTransmission = pkgs.transmission_4;
realVersion = {
major = lib.versions.major realTransmission.version;
minor = lib.versions.minor realTransmission.version;
patch = lib.versions.patch realTransmission.version;
};
package = realTransmission.overrideAttrs (upstream: {
# `cmakeFlags = [ "-DTR_VERSION_MAJOR=3" ]`, etc, doesn't seem to take effect.
postPatch = (upstream.postPatch or "") + ''
substituteInPlace CMakeLists.txt \
--replace-fail 'TR_VERSION_MAJOR "${realVersion.major}"' 'TR_VERSION_MAJOR "3"' \
--replace-fail 'TR_VERSION_MINOR "${realVersion.minor}"' 'TR_VERSION_MINOR "0"' \
--replace-fail 'TR_VERSION_PATCH "${realVersion.patch}"' 'TR_VERSION_PATCH "0"' \
--replace-fail 'set(TR_USER_AGENT_PREFIX "''${TR_SEMVER}")' 'set(TR_USER_AGENT_PREFIX "3.00")'
'';
});
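# to confirm the spoof took effect (sketch): the patched daemon should identify itself as 3.00,
# e.g. `transmission-daemon --version` should print 3.00 rather than the real version.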
download-dir = "/var/media/torrents"; #< keep in sync with consts embedded in `torrent-done`
torrent-done = pkgs.static-nix-shell.mkBash {
pname = "torrent-done";
srcRoot = ./.;
pkgs = [
"acl"
"coreutils"
"findutils"
"rsync"
"util-linux"
];
};
in
{
sane.persist.sys.byStore.plaintext = [
# TODO: mode? we need this specifically for the stats tracking in .config/
{ user = "transmission"; group = config.users.users.transmission.group; path = "/var/lib/transmission"; method = "bind"; }
];
users.users.transmission.extraGroups = [ "media" ];
services.transmission.enable = true;
services.transmission.package = package;
#v setting `group` this way doesn't tell transmission to `chown` the files it creates
# it's a nixpkgs setting which just runs the transmission daemon as this group
services.transmission.group = "media";
# transmission will by default not allow the world to read its files.
services.transmission.downloadDirPermissions = "775";
services.transmission.extraFlags = [
# "--log-level=debug"
];
services.transmission.settings = {
# DOCUMENTATION/options list: <https://github.com/transmission/transmission/blob/main/docs/Editing-Configuration-Files.md#options>
# message-level = 3; #< enable for debug logging. 0-3, default is 2.
# ovpns.netnsVethIpv4 => allow rpc only from the root servo ns. it'll tunnel things to the net, if need be.
rpc-bind-address = config.sane.netns.ovpns.netnsVethIpv4;
#rpc-host-whitelist = "bt.uninsane.org";
#rpc-whitelist = "*.*.*.*";
rpc-authentication-required = true;
rpc-username = "colin";
# salted pw. to regenerate, set this plaintext, run nixos-rebuild, and then find the salted pw in:
# /var/lib/transmission/.config/transmission-daemon/settings.json
rpc-password = "{503fc8928344f495efb8e1f955111ca5c862ce0656SzQnQ5";
rpc-whitelist-enabled = false;
# force behind ovpns in case the NetworkNamespace fails somehow
bind-address-ipv4 = config.sane.netns.ovpns.netnsPubIpv4;
port-forwarding-enabled = false;
# hopefully, make the downloads world-readable
# umask = 0; #< default is 2: i.e. deny writes from world
# force peer connections to be encrypted
encryption = 2;
# units in kBps
speed-limit-down = 12000;
speed-limit-down-enabled = true;
speed-limit-up = 800;
speed-limit-up-enabled = true;
# see: https://git.zknt.org/mirror/transmission/commit/cfce6e2e3a9b9d31a9dafedd0bdc8bf2cdb6e876?lang=bg-BG
anti-brute-force-enabled = false;
inherit download-dir;
incomplete-dir = "${download-dir}/incomplete";
# transmission regularly fails to move stuff from the incomplete dir to the main one, so disable:
incomplete-dir-enabled = false;
# env vars available in script:
# - TR_APP_VERSION - Transmission's short version string, e.g. `4.0.0`
# - TR_TIME_LOCALTIME
# - TR_TORRENT_BYTES_DOWNLOADED - Number of bytes that were downloaded for this torrent
# - TR_TORRENT_DIR - Location of the downloaded data
# - TR_TORRENT_HASH - The torrent's info hash
# - TR_TORRENT_ID
# - TR_TORRENT_LABELS - A comma-delimited list of the torrent's labels
# - TR_TORRENT_NAME - Name of torrent (not filename)
# - TR_TORRENT_TRACKERS - A comma-delimited list of the torrent's trackers' announce URLs
script-torrent-done-enabled = true;
script-torrent-done-filename = "${torrent-done}/bin/torrent-done";
};
systemd.services.transmission.after = [ "wireguard-wg-ovpns.service" ];
systemd.services.transmission.partOf = [ "wireguard-wg-ovpns.service" ];
systemd.services.transmission.serviceConfig = {
# run this behind the OVPN static VPN
NetworkNamespacePath = "/run/netns/ovpns";
ExecStartPre = [ "${lib.getExe pkgs.sane-scripts.ip-check} --no-upnp --expect ${config.sane.netns.ovpns.netnsPubIpv4}" ]; # abort if public IP is not as expected
Restart = "on-failure";
RestartSec = "30s";
BindPaths = [ "/var/media" ]; #< so it can move completed torrents into the media library
};
# service to automatically backup torrents i add to transmission
systemd.services.backup-torrents = {
description = "archive torrents to storage not owned by transmission";
script = ''
${pkgs.rsync}/bin/rsync -arv /var/lib/transmission/.config/transmission-daemon/torrents/ /var/backup/torrents/
'';
};
systemd.timers.backup-torrents = {
wantedBy = [ "multi-user.target" ];
timerConfig = {
OnStartupSec = "11min";
OnUnitActiveSec = "240min";
};
};
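# to run an out-of-cycle backup without waiting for the timer:
#   systemctl start backup-torrents.service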
# transmission web client
services.nginx.virtualHosts."bt.uninsane.org" = {
# basicAuth is literally cleartext user/pw, so FORCE this to happen over SSL
forceSSL = true;
enableACME = true;
# inherit kTLS;
locations."/" = {
# proxyPass = "http://ovpns.uninsane.org:9091";
proxyPass = "http://${config.sane.netns.ovpns.netnsVethIpv4}:9091";
};
};
sane.dns.zones."uninsane.org".inet.CNAME."bt" = "native";
sane.ports.ports."51413" = {
protocol = [ "tcp" "udp" ];
# visibleTo.ovpns = true; #< not needed: it runs in the ovpns namespace
description = "colin-bittorrent";
};
}

View File

@ -1,74 +0,0 @@
#!/usr/bin/env nix-shell
#!nix-shell -i bash -p acl -p bash -p coreutils -p findutils -p rsync -p util-linux
# transmission invokes this with no args, and the following env vars:
# - TR_TORRENT_DIR: full path to the folder i told transmission to download it to.
# e.g. /var/media/torrents/Videos/Film/Jason.Bourne-2016
# optionally:
# - TR_DRY_RUN=1
# - TR_DEBUG=1
# - TR_NO_HARDLINK=1
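# manual test run (sketch; the torrent path is just an example):
#   TR_DRY_RUN=1 TR_DEBUG=1 TR_TORRENT_DIR=/var/media/torrents/Videos/Film/Example ./torrent-done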
DOWNLOAD_DIR=/var/media/torrents
destructive() {
if [ -n "${TR_DRY_RUN-}" ]; then
echo "[dry-run] $*"
else
debug "$@"
"$@"
fi
}
debug() {
if [ -n "${TR_DEBUG-}" ]; then
echo "$@"
fi
}
if [[ "$TR_TORRENT_DIR" =~ ^.*freeleech.*$ ]]; then
# freeleech torrents have no place in my permanent library
echo "freeleech: nothing to do"
exit 0
fi
if ! [[ "$TR_TORRENT_DIR" =~ ^$DOWNLOAD_DIR/.*$ ]]; then
echo "unexpected torrent dir, aborting: $TR_TORRENT_DIR"
exit 0
fi
REL_DIR="${TR_TORRENT_DIR#$DOWNLOAD_DIR/}"
MEDIA_DIR="/var/media/$REL_DIR"
destructive mkdir -p "$(dirname "$MEDIA_DIR")"
destructive rsync -arv "$TR_TORRENT_DIR/" "$MEDIA_DIR/"
# make the media rwx by anyone in the group
destructive find "$MEDIA_DIR" -type d -exec setfacl --recursive --modify d:g::rwx,o::rx {} \;
destructive find "$MEDIA_DIR" -type d -exec chmod g+rw,a+rx {} \;
# if there's a single directory inside the media dir, then inline that
subdirs=("$MEDIA_DIR"/*)
debug "top-level items in torrent dir:" "${subdirs[@]}"
if [ ${#subdirs[@]} -eq 1 ]; then
dirname="${subdirs[0]}"
debug "exactly one top-level item, checking if directory: $dirname"
if [ -d "$dirname" ]; then
destructive mv "$dirname"/* "$MEDIA_DIR/" && destructive rmdir "$dirname"
fi
fi
# remove noisy files:
destructive find "$MEDIA_DIR/" -type f \(\
-iname '.*downloaded.?from.*' \
-o -iname 'source.txt' \
-o -iname 'upcoming.?releases.*' \
-o -iname 'www.YTS.*.jpg' \
-o -iname 'WWW.YIFY*.COM.jpg' \
-o -iname 'YIFY*.com.txt' \
-o -iname 'YTS*.com.txt' \
\) -exec rm {} \;
if [ -z "${TR_NO_HARDLINK-}" ]; then
# dedupe the whole media library.
# yeah, a bit excessive: move this to a cron job if that's problematic
# or make it run with only 1/N probability, etc.
destructive hardlink /var/media --reflink=always --ignore-time --verbose
fi

View File

@ -1,167 +0,0 @@
# TODO: split this file apart into smaller files to make it easier to understand
{ config, lib, pkgs, ... }:
let
dyn-dns = config.sane.services.dyn-dns;
nativeAddrs = lib.mapAttrs (_name: builtins.head) config.sane.dns.zones."uninsane.org".inet.A;
in
{
sane.ports.ports."53" = {
protocol = [ "udp" "tcp" ];
visibleTo.lan = true;
# visibleTo.wan = true;
visibleTo.ovpns = true;
visibleTo.doof = true;
description = "colin-dns-hosting";
};
sane.dns.zones."uninsane.org".TTL = 900;
# SOA record structure: <https://en.wikipedia.org/wiki/SOA_record#Structure>
# SOA MNAME RNAME (... rest)
# MNAME = Master name server for this zone. this is where update requests should be sent.
# RNAME = admin contact (encoded email address)
# Serial = YYYYMMDDNN, where N is incremented every time this file changes, to trigger secondary NS to re-fetch it.
# Refresh = how frequently secondary NS should query master
# Retry = how long secondary NS should wait until re-querying master after a failure (must be < Refresh)
# Expire = how long secondary NS should continue to reply to queries after master fails (> Refresh + Retry)
sane.dns.zones."uninsane.org".inet = {
SOA."@" = ''
ns1.uninsane.org. admin-dns.uninsane.org. (
2023092101 ; Serial
4h ; Refresh
30m ; Retry
7d ; Expire
5m) ; Negative response TTL
'';
TXT."rev" = "2023092101";
CNAME."native" = "%CNAMENATIVE%";
A."@" = "%ANATIVE%";
A."servo.wan" = "%AWAN%";
A."servo.doof" = "%ADOOF%";
A."servo.lan" = config.sane.hosts.by-name."servo".lan-ip;
A."servo.hn" = config.sane.hosts.by-name."servo".wg-home.ip;
# XXX NS records must also not be CNAME
# it's best that we keep this identical, or a superset of, what org. lists as our NS.
# so, org. can specify ns2/ns3 as being to the VPN, with no mention of ns1. we provide ns1 here.
A."ns1" = "%ANATIVE%";
A."ns2" = "%ADOOF%";
A."ns3" = "%AOVPNS%";
A."ovpns" = "%AOVPNS%";
NS."@" = [
"ns1.uninsane.org."
"ns2.uninsane.org."
"ns3.uninsane.org."
];
};
services.trust-dns.settings.zones = [ "uninsane.org" ];
networking.nat.enable = true; #< TODO: try removing this?
# networking.nat.extraCommands = ''
# # redirect incoming DNS requests from LAN addresses
# # to the LAN-specialized DNS service
# # N.B.: use the `nixos-*` chains instead of e.g. PREROUTING
# # because they get cleanly reset across activations or `systemctl restart firewall`
# # instead of accumulating cruft
# iptables -t nat -A nixos-nat-pre -p udp --dport 53 \
# -m iprange --src-range 10.78.76.0-10.78.79.255 \
# -j DNAT --to-destination :1053
# iptables -t nat -A nixos-nat-pre -p tcp --dport 53 \
# -m iprange --src-range 10.78.76.0-10.78.79.255 \
# -j DNAT --to-destination :1053
# '';
# sane.ports.ports."1053" = {
# # because the NAT above redirects in nixos-nat-pre, LAN requests behave as though they arrived on the external interface at the redirected port.
# # TODO: try nixos-nat-post instead?
# # TODO: or, don't NAT from port 53 -> port 1053, but rather nat from LAN addr to a loopback addr.
# # - this is complicated in that loopback is a different interface than eth0, so rewriting the destination address would cause the packets to just be dropped by the interface
# protocol = [ "udp" "tcp" ];
# visibleTo.lan = true;
# description = "colin-redirected-dns-for-lan-namespace";
# };
sane.services.trust-dns.enable = true;
sane.services.trust-dns.instances = let
mkSubstitutions = flavor: {
"%ADOOF%" = config.sane.netns.doof.netnsPubIpv4;
"%ANATIVE%" = nativeAddrs."servo.${flavor}";
"%AOVPNS%" = config.sane.netns.ovpns.netnsPubIpv4;
"%AWAN%" = "$(cat '${dyn-dns.ipPath}')";
"%CNAMENATIVE%" = "servo.${flavor}";
};
in
{
doof = {
substitutions = mkSubstitutions "doof";
listenAddrsIpv4 = [
config.sane.netns.doof.hostVethIpv4
config.sane.netns.ovpns.hostVethIpv4
];
};
hn = {
substitutions = mkSubstitutions "hn";
listenAddrsIpv4 = [ nativeAddrs."servo.hn" ];
};
lan = {
substitutions = mkSubstitutions "lan";
listenAddrsIpv4 = [ nativeAddrs."servo.lan" ];
# port = 1053;
};
# wan = {
# substitutions = mkSubstitutions "wan";
# listenAddrsIpv4 = [
# nativeAddrs."servo.lan"
# ];
# };
# hn-resolver = {
# # don't need %AWAN% here because we forward to the hn instance.
# listenAddrsIpv4 = [ nativeAddrs."servo.hn" ];
# extraConfig = {
# zones = [
# {
# zone = "uninsane.org";
# zone_type = "Forward";
# stores = {
# type = "forward";
# name_servers = [
# {
# socket_addr = "${nativeAddrs."servo.hn"}:1053";
# protocol = "udp";
# trust_nx_responses = true;
# }
# ];
# };
# }
# {
# # forward the root zone to the local DNS resolver
# zone = ".";
# zone_type = "Forward";
# stores = {
# type = "forward";
# name_servers = [
# {
# socket_addr = "127.0.0.53:53";
# protocol = "udp";
# trust_nx_responses = true;
# }
# ];
# };
# }
# ];
# };
# };
};
sane.services.dyn-dns.restartOnChange = [
"trust-dns-doof.service"
"trust-dns-hn.service"
"trust-dns-lan.service"
# "trust-dns-wan.service"
# "trust-dns-hn-resolver.service" # doesn't need restart because it doesn't know about WAN IP
];
}

View File

@ -1,30 +0,0 @@
# docs: <https://nixos.wiki/wiki/MediaWiki>
{ config, lib, ... }:
# XXX: working to host wikipedia with kiwix instead of mediawiki
# mediawiki does more than i need and isn't obviously superior in any way
# except that the dumps are more frequent/up-to-date.
lib.mkIf false
{
sops.secrets."mediawiki_pw" = {
owner = config.users.users.mediawiki.name;
};
services.mediawiki.enable = true;
services.mediawiki.name = "Uninsane Wiki";
services.mediawiki.passwordFile = config.sops.secrets.mediawiki_pw.path;
services.mediawiki.extraConfig = ''
# Disable anonymous editing
$wgGroupPermissions['*']['edit'] = false;
'';
services.mediawiki.virtualHost.listen = [
{
ip = "127.0.0.1";
port = 8013;
ssl = false;
}
];
services.mediawiki.virtualHost.hostName = "w.uninsane.org";
services.mediawiki.virtualHost.adminAddr = "admin+mediawiki@uninsane.org";
# services.mediawiki.extensions = TODO: wikipedia sync extension?
}

View File

@ -1,50 +0,0 @@
{ lib, pkgs, ... }:
{
boot.initrd.supportedFilesystems = [ "ext4" "btrfs" "ext2" "ext3" "vfat" ];
# useful emergency utils
boot.initrd.extraUtilsCommands = ''
copy_bin_and_libs ${pkgs.btrfs-progs}/bin/btrfstune
copy_bin_and_libs ${pkgs.util-linux}/bin/{cfdisk,lsblk,lscpu}
copy_bin_and_libs ${pkgs.gptfdisk}/bin/{cgdisk,gdisk}
copy_bin_and_libs ${pkgs.smartmontools}/bin/smartctl
copy_bin_and_libs ${pkgs.e2fsprogs}/bin/resize2fs
'' + lib.optionalString pkgs.stdenv.hostPlatform.isx86_64 ''
copy_bin_and_libs ${pkgs.nvme-cli}/bin/nvme # doesn't cross compile
'';
boot.kernelParams = [
"boot.shell_on_fail"
#v experimental full pre-emption for hopefully better call/audio latency on moby.
# also toggleable at runtime via /sys/kernel/debug/sched/preempt
# defaults to preempt=voluntary
# "preempt=full"
];
# other kernelParams:
# "boot.trace"
# "systemd.log_level=debug"
# "systemd.log_target=console"
# moby has to run recent kernels (defined elsewhere).
# meanwhile, kernel variation plays some minor role in things like sandboxing (landlock) and capabilities.
# simpler to keep near the latest kernel on all devices,
# and also makes certain that any weird system-level bugs i see aren't likely to be stale kernel bugs.
# servo needs zfs though, which doesn't support every kernel.
boot.kernelPackages = lib.mkDefault pkgs.zfs.latestCompatibleLinuxPackages;
# hack in the `boot.shell_on_fail` arg since that doesn't always seem to work.
boot.initrd.preFailCommands = "allowShell=1";
# default: 4 (warn). 7 is debug
boot.consoleLogLevel = 7;
boot.loader.grub.enable = lib.mkDefault false;
boot.loader.generic-extlinux-compatible.enable = lib.mkDefault true;
hardware.enableAllFirmware = true; # firmware with licenses that don't allow for redistribution. fuck lawyers, fuck IP, give me the goddamn firmware.
# hardware.enableRedistributableFirmware = true; # proprietary but free-to-distribute firmware (extraneous to `enableAllFirmware` option)
# default is 252274, which is too low particularly for servo.
# manifests as spurious "No space left on device" when trying to install watches,
# e.g. in dyn-dns by `systemctl start dyn-dns-watcher.path`.
# see: <https://askubuntu.com/questions/828779/failed-to-add-run-systemd-ask-password-to-directory-watch-no-space-left-on-dev>
boot.kernel.sysctl."fs.inotify.max_user_watches" = 1048576;
}

View File

@ -1,53 +0,0 @@
{ config, lib, pkgs, ... }:
{
imports = [
./boot.nix
./feeds.nix
./fs.nix
./home
./hosts.nix
./ids.nix
./machine-id.nix
./net
./nix.nix
./persist.nix
./polyunfill.nix
./programs
./quirks.nix
./secrets.nix
./ssh.nix
./systemd.nix
./users
];
# docs: https://nixos.org/manual/nixos/stable/options.html#opt-system.stateVersion
# this affects where nixos modules look for stateful data which might have been migrated across releases.
system.stateVersion = "21.11";
sane.nixcache.enable-trusted-keys = true;
sane.nixcache.enable = lib.mkDefault true;
sane.persist.enable = lib.mkDefault true;
sane.root-on-tmpfs = lib.mkDefault true;
sane.programs.sysadminUtils.enableFor.system = lib.mkDefault true;
sane.programs.consoleUtils.enableFor.user.colin = lib.mkDefault true;
# time.timeZone = "America/Los_Angeles";
time.timeZone = "Etc/UTC"; # DST is too confusing for me => use a stable timezone
system.activationScripts.nixClosureDiff = {
supportsDryActivation = true;
text = ''
# show which packages changed versions or are new/removed in this upgrade
# source: <https://github.com/luishfonseca/dotfiles/blob/32c10e775d9ec7cc55e44592a060c1c9aadf113e/modules/upgrade-diff.nix>
# modified to not error on boot (when /run/current-system doesn't exist)
if [ -d /run/current-system ]; then
${pkgs.nvd}/bin/nvd --nix-bin-dir=${pkgs.nix}/bin diff /run/current-system "$systemConfig"
fi
'';
};
# link debug symbols into /run/current-system/sw/lib/debug
# hopefully picked up by gdb automatically?
environment.enableDebugInfo = true;
}

View File

@ -1,269 +0,0 @@
# where to find good stuff?
# - universal search/directory: <https://podcastindex.org>
# - podcasts w/ a community: <https://lemmyverse.net/communities?query=podcast>
# - podcast rec thread: <https://lemmy.ml/post/1565858>
#
# candidates:
# - The Nonlinear Library (podcast): <https://forum.effectivealtruism.org/posts/JTZTBienqWEAjGDRv/listen-to-more-ea-content-with-the-nonlinear-library>
# - has ~10 posts per day, text-to-speech; i would need better tagging before adding this
# - <https://www.metaculus.com/questions/11102/introducing-the-metaculus-journal-podcast/>
# - dead since 2022/10 - 2023/03
{ lib, sane-data, ... }:
let
hourly = { freq = "hourly"; };
daily = { freq = "daily"; };
weekly = { freq = "weekly"; };
infrequent = { freq = "infrequent"; };
art = { cat = "art"; };
humor = { cat = "humor"; };
pol = { cat = "pol"; }; # or maybe just "social"
rat = { cat = "rat"; };
tech = { cat = "tech"; };
uncat = { cat = "uncat"; };
text = { format = "text"; };
img = { format = "image"; };
mkRss = format: url: { inherit url format; } // uncat // infrequent;
# format-specific helpers
mkText = mkRss "text";
mkImg = mkRss "image";
mkPod = mkRss "podcast";
# host-specific helpers
mkSubstack = subdomain: { substack = subdomain; };
fromDb = name:
let
raw = sane-data.feeds."${name}";
in {
url = raw.url;
# not sure the exact mapping with velocity here: entries per day?
freq = lib.mkIf (raw.velocity or 0 != 0) (lib.mkDefault (
if raw.velocity > 2 then
"hourly"
else if raw.velocity > 0.5 then
"daily"
else if raw.velocity > 0.1 then
"weekly"
else
"infrequent"
));
} // lib.optionalAttrs (lib.hasPrefix "https://www.youtube.com/" raw.url) {
format = "video";
} // lib.optionalAttrs (raw.is_podcast or false) {
format = "podcast";
} // lib.optionalAttrs (raw.title or "" != "") {
title = lib.mkDefault raw.title;
};
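# e.g. `fromDb "lwn.net" // tech` evaluates to roughly { url; freq; cat = "tech"; [format]; [title]; },
# where `freq` is derived from the recorded velocity via the thresholds above.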
podcasts = [
(fromDb "acquiredlpbonussecretsecret.libsyn.com" // tech) # ACQ2 - more "Acquired" episodes
(fromDb "allinchamathjason.libsyn.com" // pol)
(fromDb "anchor.fm/s/34c7232c/podcast/rss" // tech) # Civboot -- https://anchor.fm/civboot
(fromDb "anchor.fm/s/2da69154/podcast/rss" // tech) # POD OF JAKE -- https://podofjake.com/
(fromDb "cast.postmarketos.org" // tech)
(fromDb "congressionaldish.libsyn.com" // pol) # Jennifer Briney
(fromDb "craphound.com" // pol) # Cory Doctorow -- both podcast & text entries
(fromDb "darknetdiaries.com" // tech)
(fromDb "feed.podbean.com/matrixlive/feed.xml" // tech) # Matrix (chat) Live
(fromDb "feeds.99percentinvisible.org/99percentinvisible" // pol) # 99% Invisible -- also available here: <https://feeds.simplecast.com/BqbsxVfO>
(fromDb "feeds.feedburner.com/80000HoursPodcast" // rat)
(fromDb "feeds.feedburner.com/dancarlin/history" // rat)
(fromDb "feeds.feedburner.com/radiolab" // pol) # Radiolab -- also available here, but ONLY OVER HTTP: <http://feeds.wnyc.org/radiolab>
(fromDb "feeds.megaphone.fm/behindthebastards" // pol) # also Maggie Killjoy
(fromDb "feeds.megaphone.fm/recodedecode" // tech) # The Verge - Decoder
(fromDb "feeds.simplecast.com/54nAGcIl" // pol) # The Daily
(fromDb "feeds.simplecast.com/82FI35Px" // pol) # Ezra Klein Show
(fromDb "feeds.simplecast.com/wgl4xEgL" // rat) # Econ Talk
(fromDb "feeds.simplecast.com/xKJ93w_w" // uncat) # Atlas Obscura
(fromDb "feeds.transistor.fm/acquired" // tech)
(fromDb "fulltimenix.com" // tech)
(fromDb "futureofcoding.org/episodes" // tech)
(fromDb "hackerpublicradio.org" // tech)
(fromDb "lexfridman.com/podcast" // rat)
(fromDb "mapspodcast.libsyn.com" // uncat) # Multidisciplinary Association for Psychedelic Studies
(fromDb "microarch.club" // tech)
(fromDb "mintcast.org" // tech)
(fromDb "omegataupodcast.net" // tech) # 3/4 German; 1/4 eps are English
(fromDb "omny.fm/shows/cool-people-who-did-cool-stuff" // pol) # Maggie Killjoy -- referenced by Cory Doctorow
(fromDb "omny.fm/shows/money-stuff-the-podcast") # Matt Levine
(fromDb "omny.fm/shows/the-dollop-with-dave-anthony-and-gareth-reynolds") # The Dollop history/comedy
(fromDb "originstories.libsyn.com" // uncat)
(fromDb "podcast.posttv.com/itunes/post-reports.xml" // pol)
(fromDb "politicalorphanage.libsyn.com" // pol)
(fromDb "reverseengineering.libsyn.com/rss" // tech) # UnNamed Reverse Engineering Podcast
(fromDb "rss.acast.com/deconstructed") # The Intercept - Deconstructed
(fromDb "rss.acast.com/ft-tech-tonic" // tech)
(fromDb "rss.acast.com/intercepted-with-jeremy-scahill") # The Intercept - Intercepted
(fromDb "rss.art19.com/60-minutes" // pol)
(fromDb "rss.art19.com/the-portal" // rat) # Eric Weinstein
(fromDb "seattlenice.buzzsprout.com" // pol)
(fromDb "srslywrong.com" // pol)
(fromDb "sharkbytes.transistor.fm" // tech) # Wireshark Podcast o_0
(fromDb "sscpodcast.libsyn.com" // rat) # Astral Codex Ten
(fromDb "talesfromthebridge.buzzsprout.com" // tech) # Sci-Fi? has Peter Watts; author of No Moods, Ads or Cutesy Fucking Icons (rifters.com)
(fromDb "theamphour.com" // tech)
(fromDb "techtalesshow.com" // tech) # Corbin Davenport
(fromDb "techwontsave.us" // pol) # rec by Cory Doctorow
(fromDb "wakingup.libsyn.com" // pol) # Sam Harris
(fromDb "werenotwrong.fireside.fm" // pol)
(mkPod "https://sfconservancy.org/casts/the-corresponding-source/feeds/ogg/" // tech)
# (fromDb "feeds.libsyn.com/421877" // rat) # Less Wrong Curated
# (fromDb "feeds.megaphone.fm/hubermanlab" // uncat) # Daniel Huberman on sleep
# (fromDb "feeds.simplecast.com/l2i9YnTd" // tech // pol) # Hard Fork (NYtimes tech)
# (fromDb "podcast.thelinuxexp.com" // tech) # low-brow linux/foss PR announcements
# (fromDb "rss.art19.com/your-welcome" // pol) # Michael Malice - Your Welcome -- also available here: <https://origin.podcastone.com/podcast?categoryID2=2232>
# (fromDb "rss.prod.firstlook.media/deconstructed/podcast.rss" // pol) #< possible URL rot
# (fromDb "rss.prod.firstlook.media/intercepted/podcast.rss" // pol) #< possible URL rot
# (fromDb "trashfuturepodcast.podbean.com" // pol) # rec by Cory Doctorow, but way rambly
# (mkPod "https://anchor.fm/s/21bc734/podcast/rss" // pol // infrequent) # Emerge: making sense of what's next -- <https://www.whatisemerging.com/emergepodcast>
# (mkPod "https://audioboom.com/channels/5097784.rss" // tech) # Lateral with Tom Scott
# (mkPod "https://feeds.megaphone.fm/RUNMED9919162779" // pol // infrequent) # The Witch Trials of J.K. Rowling: <https://www.thefp.com/witchtrials>
# (mkPod "https://podcasts.la.utexas.edu/this-is-democracy/feed/podcast/" // pol // weekly)
];
texts = [
(fromDb "acoup.blog/feed") # history, states. author: <https://historians.social/@bretdevereaux/following>
(fromDb "amosbbatto.wordpress.com" // tech)
(fromDb "anish.lakhwara.com" // tech)
(fromDb "apenwarr.ca/log/rss.php" // tech) # CEO of tailscale
(fromDb "applieddivinitystudies.com" // rat)
(fromDb "artemis.sh" // tech)
(fromDb "ascii.textfiles.com" // tech) # Jason Scott
(fromDb "austinvernon.site" // tech)
(fromDb "buttondown.email" // tech)
(fromDb "ben-evans.com/benedictevans" // pol)
(fromDb "bitbashing.io" // tech)
(fromDb "bitsaboutmoney.com" // uncat)
(fromDb "blog.danieljanus.pl" // tech)
(fromDb "blog.dshr.org" // pol) # David Rosenthal
(fromDb "blog.jmp.chat" // tech)
(fromDb "blog.rust-lang.org" // tech)
(fromDb "blog.thalheim.io" // tech) # Mic92
(fromDb "bunniestudios.com" // tech) # Bunnie Juang
(fromDb "capitolhillseattle.com" // pol)
(fromDb "edwardsnowden.substack.com" // pol // text)
(fromDb "fasterthanli.me" // tech)
(fromDb "gwern.net" // rat)
(fromDb "hardcoresoftware.learningbyshipping.com" // tech) # Steven Sinofsky
(fromDb "harihareswara.net" // tech // pol) # rec by Cory Doctorow
(fromDb "ianthehenry.com" // tech)
(fromDb "idiomdrottning.org" // uncat)
(fromDb "interconnected.org/home/feed" // rat) # Matt Webb -- engineering-ish, but dreamy
(fromDb "jeffgeerling.com" // tech)
(fromDb "jefftk.com" // tech)
(fromDb "jwz.org/blog" // tech // pol) # DNA lounge guy, loooong-time blogger
(fromDb "kill-the-newsletter.com/feeds/joh91bv7am2pnznv.xml" // pol) # Matt Levine - Money Stuff
(fromDb "kosmosghost.github.io/index.xml" // tech)
(fromDb "linmob.net" // tech)
(fromDb "lwn.net" // tech)
(fromDb "lynalden.com" // pol)
(fromDb "mako.cc/copyrighteous" // tech // pol) # rec by Cory Doctorow
(fromDb "mg.lol" // tech)
(fromDb "mindingourway.com" // rat)
(fromDb "morningbrew.com/feed" // pol)
(fromDb "nixpkgs.news" // tech)
(fromDb "overcomingbias.com" // rat) # Robin Hanson
(fromDb "palladiummag.com" // uncat)
(fromDb "philosopher.coach" // rat) # Peter Saint-Andre -- side project of stpeter.im
(fromDb "pomeroyb.com" // tech)
(fromDb "postmarketos.org/blog" // tech)
(fromDb "preposterousuniverse.com" // rat) # Sean Carroll
(fromDb "project-insanity.org" // tech) # shared blog by a few NixOS devs, notably onny
(fromDb "putanumonit.com" // rat) # mostly dating topics. not advice, or humor, but looking through a social lens
(fromDb "richardcarrier.info" // rat)
(fromDb "rifters.com/crawl" // uncat) # No Moods, Ads or Cutesy Fucking Icons
(fromDb "righto.com" // tech) # Ken Shirriff
(fromDb "rootsofprogress.org" // rat) # Jason Crawford
(fromDb "samuel.dionne-riel.com" // tech) # SamuelDR
(fromDb "sagacioussuricata.com" // tech) # ian (Sanctuary)
(fromDb "semiaccurate.com" // tech)
(fromDb "sideways-view.com" // rat) # Paul Christiano
(fromDb "slatecave.net" // tech)
(fromDb "slimemoldtimemold.com" // rat)
(fromDb "spectrum.ieee.org" // tech)
(fromDb "stpeter.im/atom.xml" // pol)
(fromDb "thediff.co" // pol) # Byrne Hobart
(fromDb "thisweek.gnome.org" // tech)
(fromDb "tuxphones.com" // tech)
(fromDb "uninsane.org" // tech)
(fromDb "unintendedconsequenc.es" // rat)
(fromDb "vitalik.eth.limo" // tech) # Vitalik Buterin
(fromDb "weekinethereumnews.com" // tech)
(fromDb "willow.phantoma.online") # wizard@xyzzy.link
(fromDb "xn--gckvb8fzb.com" // tech)
(fromDb "xorvoid.com" // tech)
(mkSubstack "astralcodexten" // rat // daily) # Scott Alexander
(mkSubstack "eliqian" // rat // weekly)
(mkSubstack "oversharing" // pol // daily)
(mkSubstack "samkriss" // humor // infrequent)
(mkText "http://benjaminrosshoffman.com/feed" // pol // weekly)
(mkText "http://boginjr.com/feed" // tech // infrequent)
(mkText "https://forum.merveilles.town/rss.xml" // pol // infrequent) #quality RSS list here: <https://forum.merveilles.town/thread/57/share-your-rss-feeds%21-6/>
(mkText "https://jvns.ca/atom.xml" // tech // weekly) # Julia Evans
(mkText "https://linuxphoneapps.org/blog/atom.xml" // tech // infrequent)
(mkText "https://nixos.org/blog/announcements-rss.xml" // tech // infrequent) # more nixos stuff here, but unclear how to subscribe: <https://nixos.org/blog/categories.html>
(mkText "https://nixos.org/blog/stories-rss.xml" // tech // weekly)
(mkText "https://solar.lowtechmagazine.com/posts/index.xml" // tech // weekly)
(mkText "https://www.stratechery.com/rss" // pol // weekly) # Ben Thompson
# (fromDb "balajis.com" // pol) # Balaji
# (fromDb "drewdevault.com" // tech)
# (fromDb "econlib.org" // pol)
# (fromDb "lesswrong.com" // rat)
# (fromDb "profectusmag.com" // pol) # some conservative/libertarian think tank
# (fromDb "thesideview.co" // uncat) # spiritual journal; RSS items are stubs
# (fromDb "theregister.com" // tech)
# (fromDb "vitalik.ca" // tech) # moved to vitalik.eth.limo
# (fromDb "webcurious.co.uk" // uncat) # link aggregator; defunct?
# (mkSubstack "doomberg" // tech // weekly) # articles are all pay-walled
# (mkText "https://github.com/Kaiteki-Fedi/Kaiteki/commits/master.atom" // tech // infrequent)
# (mkText "https://til.simonwillison.net/tils/feed.atom" // tech // weekly)
# (mkText "https://www.bloomberg.com/opinion/authors/ARbTQlRLRjE/matthew-s-levine.rss" // pol // weekly) # Matt Levine (preview/paywalled)
];
videos = [
(fromDb "youtube.com/@Channel5YouTube" // pol)
(fromDb "youtube.com/@ColdFusion")
(fromDb "youtube.com/@ContraPoints" // pol)
(fromDb "youtube.com/@Exurb1a")
(fromDb "youtube.com/@hbomberguy")
(fromDb "youtube.com/@JackStauber")
(fromDb "youtube.com/@NativLang")
(fromDb "youtube.com/@PolyMatter")
(fromDb "youtube.com/@TechnologyConnections" // tech)
(fromDb "youtube.com/@TheB1M")
(fromDb "youtube.com/@TomScottGo")
(fromDb "youtube.com/@Vihart")
(fromDb "youtube.com/@Vox")
# (fromDb "youtube.com/@Vsauce") # they're all like 1-minute long videos now? what happened @Vsauce?
# (fromDb "youtube.com/@rossmanngroup" // pol // tech) # Louis Rossmann
];
images = [
(fromDb "catandgirl.com" // img // humor)
(fromDb "davidrevoy.com" // img // art)
(fromDb "grumpy.website" // img // humor)
(fromDb "miniature-calendar.com" // img // art // daily)
(fromDb "pbfcomics.com" // img // humor)
(fromDb "poorlydrawnlines.com/feed" // img // humor)
(fromDb "smbc-comics.com" // img // humor)
(fromDb "turnoff.us" // img // humor)
(fromDb "xkcd.com" // img // humor)
];
in
{
sane.feeds = texts ++ images ++ podcasts ++ videos;
assertions = builtins.map
(p: {
assertion = p.format or "unknown" == "podcast";
message = ''${p.url} is not a podcast: ${p.format or "unknown"}'';
})
podcasts;
}

View File

@ -1,237 +0,0 @@
# docs
# - x-systemd options: <https://www.freedesktop.org/software/systemd/man/systemd.mount.html>
# - fuse options: `man mount.fuse`
{ config, lib, pkgs, sane-lib, utils, ... }:
let
fsOpts = rec {
common = [
"_netdev"
"noatime"
# user: allow any user with access to the device to mount the fs.
# note that this requires a suid `mount` binary; see: <https://zameermanji.com/blog/2022/8/5/using-fuse-without-root-on-linux/>
"user"
"x-systemd.requires=network-online.target"
"x-systemd.after=network-online.target"
"x-systemd.mount-timeout=10s" # how long to wait for mount **and** how long to wait for unmount
];
# x-systemd.automount: mount the fs automatically *on first access*.
# creates a `path-to-mount.automount` systemd unit.
automount = [ "x-systemd.automount" ];
# noauto: don't mount as part of remote-fs.target.
# N.B.: `remote-fs.target` is a dependency of multi-user.target, itself of graphical.target.
# hence, omitting `noauto` can slow down boots.
noauto = [ "noauto" ];
# lazyMount: defer mounting until first access from userspace.
# see: `man systemd.automount`, `man automount`, `man autofs`
lazyMount = noauto ++ automount;
wg = [
"x-systemd.requires=wireguard-wg-home.service"
"x-systemd.after=wireguard-wg-home.service"
];
fuse = [
"allow_other" # allow users other than the one who mounts it to access it. needed, if systemd is the one mounting this fs (as root)
# allow_root: allow root to access files on this fs (if mounted by non-root, else it can always access them).
# N.B.: if both allow_root and allow_other are specified, then only allow_root takes effect.
# "allow_root"
# default_permissions: enforce local permissions check. CRUCIAL if using `allow_other`.
# w/o this, permissions mode of sshfs is like:
# - sshfs runs all remote commands as the remote user.
# - if a local user has local permissions to the sshfs mount, then their file ops are sent blindly across the tunnel.
# - `allow_other` allows *any* local user to access the mount, and hence any local user can now freely become the remote mapped user.
# with default_permissions, sshfs doesn't tunnel file ops from users until checking that said user could perform said op on an equivalent local fs.
"default_permissions"
];
fuseColin = fuse ++ [
"uid=1000"
"gid=100"
];
ssh = common ++ fuse ++ [
"identityfile=/home/colin/.ssh/id_ed25519"
# i *think* idmap=user means that `colin` on `localhost` and `colin` on the remote are actually treated as the same user, even if their uid/gid differs?
# i.e., local colin's id is translated to/from remote colin's id on every operation?
"idmap=user"
];
sshColin = ssh ++ fuseColin ++ [
# follow_symlinks: remote files which are symlinks are presented to the local system as ordinary files (as the target of the symlink).
# if the symlink target does not exist, the presentation is unspecified.
# symlinks which point outside the mount ARE followed. so this is more capable than `transform_symlinks`
"follow_symlinks"
# symlinks on the remote fs which are absolute paths are presented to the local system as relative symlinks pointing to the expected data on the remote fs.
# only symlinks which would point inside the mountpoint are translated.
"transform_symlinks"
];
# sshRoot = ssh ++ [
# # we don't transform_symlinks because that breaks the validity of remote /nix stores
# "sftp_server=/run/wrappers/bin/sudo\\040/run/current-system/sw/libexec/sftp-server"
# ];
# in the event of hunt NFS mounts, consider:
# - <https://unix.stackexchange.com/questions/31979/stop-broken-nfs-mounts-from-locking-a-directory>
# NFS options: <https://linux.die.net/man/5/nfs>
# actimeo=n = how long (in seconds) to cache file/dir attributes (default: 3-60s)
# bg = retry failed mounts in the background
# retry=n = for how many minutes `mount` will retry NFS mount operation
# intr = allow Ctrl+C to abort I/O (it will error with `EINTR`)
# soft = on "major timeout", report I/O error to userspace
# softreval = on "major timeout", service the request using known-stale cache results instead of erroring -- if such cache data exists
# retrans=n = how many times to retry a NFS request before giving userspace a "server not responding" error (default: 3)
# timeo=n = number of *deciseconds* to wait for a response before retrying it (default: 600)
# note: client uses a linear backup, so the second request will have double this timeout, then triple, etc.
# proto=udp = encapsulate protocol ops inside UDP packets instead of a TCP session.
# requires `nfsvers=3` and a kernel compiled with `NFS_DISABLE_UDP_SUPPORT=n`.
# UDP might be preferable to TCP because the latter is liable to hang for ~100s (kernel TCP timeout) after a link drop.
# however, even UDP has issues with `umount` hanging.
#
# N.B.: don't change these without first testing the behavior of sandboxed apps on a flaky network.
nfs = common ++ [
# "actimeo=5"
# "bg"
"retrans=1"
"retry=0"
# "intr"
"soft"
"softreval"
"timeo=30"
"nofail" # don't fail remote-fs.target when this mount fails (not an option for sshfs else would be common)
# "proto=udp" # default kernel config doesn't support NFS over UDP: <https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1964093> (see comment 11).
# "nfsvers=3" # NFSv4+ doesn't support UDP at *all*. it's ok to omit nfsvers -- server + client will negotiate v3 based on udp requirement. but omitting causes confusing mount errors when the server is *offline*, because the client defaults to v4 and thinks the udp option is a config error.
# "x-systemd.idle-timeout=10" # auto-unmount after this much inactivity
];
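# manual NFS mount for testing these options (sketch; the export path is assumed, not taken from this config):
#   mount -t nfs -o retrans=1,retry=0,soft,softreval,timeo=30 servo-hn:/some/export /mnt/test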
# manually perform a ftp mount via e.g.
# curlftpfs -o ftpfs_debug=2,user=anonymous:anonymous,connect_timeout=10 -f -s ftp://servo-hn /mnt/my-ftp
ftp = common ++ fuseColin ++ [
# "ftpfs_debug=2"
"user=colin:ipauth"
# connect_timeout=10: casting shows to T.V. fails partway through about half the time
"connect_timeout=20"
];
};
remoteHome = host: {
sane.programs.sshfs-fuse.enableFor.system = true;
fileSystems."/mnt/${host}/home" = {
device = "colin@${host}:/home/colin";
fsType = "fuse.sshfs";
options = fsOpts.sshColin ++ fsOpts.lazyMount;
noCheck = true;
};
sane.fs."/mnt/${host}/home" = sane-lib.fs.wanted {
dir.acl.user = "colin";
dir.acl.group = "users";
dir.acl.mode = "0700";
};
};
remoteServo = subdir: {
sane.programs.curlftpfs.enableFor.system = true;
sane.fs."/mnt/servo/${subdir}" = sane-lib.fs.wanted {
dir.acl.user = "colin";
dir.acl.group = "users";
dir.acl.mode = "0750";
};
fileSystems."/mnt/servo/${subdir}" = {
device = "ftp://servo-hn:/${subdir}";
noCheck = true;
fsType = "fuse.curlftpfs";
options = fsOpts.ftp ++ fsOpts.noauto ++ fsOpts.wg;
# fsType = "nfs";
# options = fsOpts.nfs ++ fsOpts.lazyMount ++ fsOpts.wg;
};
systemd.services."automount-servo-${utils.escapeSystemdPath subdir}" = let
fs = config.fileSystems."/mnt/servo/${subdir}";
in {
# this is a *flaky* network mount, especially on moby.
# if done as a normal autofs mount, access will eternally block when network is dropped.
# notably, this would block *any* sandboxed app which allows media access, whether they actually try to use that media or not.
# a practical solution is this: mount as a service -- instead of autofs -- and unmount on timeout error, in a restart loop.
# until the ftp handshake succeeds, nothing is actually mounted to the vfs, so this doesn't slow down any I/O when network is down.
description = "automount /mnt/servo/${subdir} in a fault-tolerant and non-blocking manner";
after = [ "network-online.target" ];
requires = [ "network-online.target" ];
wantedBy = [ "default.target" ];
serviceConfig.Type = "simple";
serviceConfig.ExecStart = lib.escapeShellArgs [
"/usr/bin/env"
"PATH=/run/current-system/sw/bin"
"mount.${fs.fsType}"
"-f" # foreground (i.e. don't daemonize)
"-s" # single-threaded (TODO: it's probably ok to disable this?)
"-o"
(lib.concatStringsSep "," (lib.filter (o: !lib.hasPrefix "x-systemd." o) fs.options))
fs.device
"/mnt/servo/${subdir}"
];
# not sure if this configures a linear, or exponential backoff.
# but the first restart will be after `RestartSec`, and the n'th restart (n = RestartSteps) will be RestartMaxDelaySec after the n-1'th exit.
serviceConfig.Restart = "always";
serviceConfig.RestartSec = "10s";
serviceConfig.RestartMaxDelaySec = "120s";
serviceConfig.RestartSteps = "5";
};
};
in
lib.mkMerge [
{
# some services which use private directories error if the parent (/var/lib/private) isn't 700.
sane.fs."/var/lib/private".dir.acl.mode = "0700";
# in-memory compressed RAM
# defaults to compressing at most 50% size of RAM
# claimed compression ratio is about 2:1
# - but on moby w/ zstd default i see 4-7:1 (ratio lowers as it fills)
# note that idle overhead is about 0.05% of capacity (e.g. 2B per 4kB page)
# docs: <https://www.kernel.org/doc/Documentation/blockdev/zram.txt>
#
# to query effectiveness:
# `cat /sys/block/zram0/mm_stat`. whitespace separated fields:
# - *orig_data_size* (bytes)
# - *compr_data_size* (bytes)
# - mem_used_total (bytes)
# - mem_limit (bytes)
# - mem_used_max (bytes)
# - *same_pages* (pages which are e.g. all zeros (consumes no additional mem))
# - *pages_compacted* (pages which have been freed thanks to compression)
# - huge_pages (incompressible)
#
# see also:
# - `man zramctl`
zramSwap.enable = true;
# how much RAM (measured uncompressed) may be swapped into the zram device.
# as a multiple of RAM, this shouldn't exceed the observed compression ratio.
# the default is 50% (why?)
# 100% should be "guaranteed" safe so long as the data is even *slightly* compressible,
# but under the heaviest of loads it reduces working memory by however much space the compressed data occupies (e.g. 50% at 2:1; 25% at 4:1).
zramSwap.memoryPercent = 100;
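# a quick way to check how well zram is doing in practice. this is only an illustrative
# sketch (the `zram-ratio` name is hypothetical; it assumes `pkgs` is in scope and that
# zram0 is the swap device configured above):
# environment.systemPackages = [
#   (pkgs.writeShellScriptBin "zram-ratio" ''
#     # mm_stat fields (documented above): orig_data_size compr_data_size ...
#     read -r orig compr _rest < /sys/block/zram0/mm_stat
#     echo "uncompressed: $orig B, compressed: $compr B"
#     if [ "$compr" -gt 0 ]; then echo "ratio: $(( orig / compr )):1"; fi
#   '')
# ];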
# environment.pathsToLink = [
# # needed to achieve superuser access for user-mounted filesystems (see sshRoot above)
# # we can only link whole directories here, even though we're only interested in pkgs.openssh
# "/libexec"
# ];
programs.fuse.userAllowOther = true; #< necessary for `allow_other` or `allow_root` options.
}
(remoteHome "crappy")
(remoteHome "desko")
(remoteHome "lappy")
(remoteHome "moby")
# this granularity of servo media mounts is necessary to support sandboxing:
# for flaky mounts, we can only bind the mountpoint itself into the sandbox,
# so it's either this or unconditionally bind all of media/.
(remoteServo "media/archive")
(remoteServo "media/Books")
(remoteServo "media/collections")
# (remoteServo "media/datasets")
(remoteServo "media/games")
(remoteServo "media/Music")
(remoteServo "media/Pictures/macros")
(remoteServo "media/torrents")
(remoteServo "media/Videos")
(remoteServo "playground")
]

View File

@ -1,9 +0,0 @@
{ ... }:
{
imports = [
./fs.nix
./mime.nix
./ssh.nix
./xdg-dirs.nix
];
}

View File

@ -1,45 +0,0 @@
{ config, lib, ... }:
{
sane.user.persist.byStore.plaintext = [
"archive"
"dev"
# TODO: records should be private
"records"
"ref"
"tmp"
"use"
"Books/local"
"Music"
"Pictures/albums"
"Pictures/cat"
"Pictures/from"
"Pictures/Screenshots" #< XXX: something is case-sensitive about this?
"Pictures/Photos"
"Videos/local"
# these are persisted simply to save on RAM.
# ~/.cache/nix can become several GB.
# mesa_shader_cache is < 10 MB.
# TODO: integrate with sane.programs.sandbox?
".cache/mesa_shader_cache"
".cache/nix"
];
sane.user.persist.byStore.private = [
"knowledge"
];
# convenience
sane.user.fs = let
persistEnabled = config.sane.persist.enable;
in {
".persist/private" = lib.mkIf persistEnabled { symlink.target = config.sane.persist.stores.private.origin; };
".persist/plaintext" = lib.mkIf persistEnabled { symlink.target = config.sane.persist.stores.plaintext.origin; };
".persist/ephemeral" = lib.mkIf persistEnabled { symlink.target = config.sane.persist.stores.cryptClearOnBoot.origin; };
"nixos".symlink.target = "dev/nixos";
"Books/servo".symlink.target = "/mnt/servo/media/Books";
"Videos/servo".symlink.target = "/mnt/servo/media/Videos";
"Pictures/servo-macros".symlink.target = "/mnt/servo/media/Pictures/macros";
};
}

View File

@ -1,94 +0,0 @@
# TODO: move into modules/users.nix
{ config, lib, pkgs, ...}:
let
# [ ProgramConfig ]
enabledPrograms = builtins.filter
(p: p.enabled)
(builtins.attrValues config.sane.programs);
# [ ProgramConfig ]
enabledProgramsWithPackage = builtins.filter (p: p.package != null) enabledPrograms;
# [ { "<mime-type>" = { prority, desktop } ]
enabledWeightedMimes = builtins.map weightedMimes enabledPrograms;
# ProgramConfig -> { "<mime-type>" = { priority, desktop }; }
weightedMimes = prog: builtins.mapAttrs
(_key: desktop: {
priority = prog.mime.priority; desktop = desktop;
})
prog.mime.associations;
# [ { "<mime-type>" = { priority, desktop } ]; } ] -> { "<mime-type>" = [ { priority, desktop } ... ]; }
mergeMimes = mimes: lib.foldAttrs (item: acc: [item] ++ acc) [] mimes;
# [ { priority, desktop } ... ] -> Self
sortOneMimeType = associations: builtins.sort
(l: r: lib.throwIf
(l.priority == r.priority)
"${l.desktop} and ${r.desktop} share a preferred mime type with identical priority ${builtins.toString l.priority} (and so the desired association is ambiguous)"
(l.priority < r.priority)
)
associations;
sortMimes = mimes: builtins.mapAttrs (_k: sortOneMimeType) mimes;
# { "<mime-type>"} = [ { priority, desktop } ... ]; } -> { "<mime-type>" = [ "<desktop>" ... ]; }
removePriorities = mimes: builtins.mapAttrs
(_k: associations: builtins.map (a: a.desktop) associations)
mimes;
# { "<mime-type>" = [ "<desktop>" ... ]; } -> { "<mime-type>" = "<desktop1>;<desktop2>;..."; }
formatDesktopLists = mimes: builtins.mapAttrs
(_k: desktops: lib.concatStringsSep ";" desktops)
mimes;
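# a small worked example of the pipeline above (program names and priorities are hypothetical).
# suppose two enabled programs both claim "image/png":
#   enabledWeightedMimes = [
#     { "image/png" = { priority = 100; desktop = "org.gnome.Loupe.desktop"; }; }
#     { "image/png" = { priority = 200; desktop = "gimp.desktop"; }; }
#   ];
# then:
#   mergeMimes         -> { "image/png" = [ { priority = 100; ... } { priority = 200; ... } ]; }
#   sortMimes          -> same shape, sorted so the lower priority value comes first (and wins)
#   removePriorities   -> { "image/png" = [ "org.gnome.Loupe.desktop" "gimp.desktop" ]; }
#   formatDesktopLists -> { "image/png" = "org.gnome.Loupe.desktop;gimp.desktop"; }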
mimeappsListPkg = pkgs.writeTextDir "share/applications/mimeapps.list" (
lib.generators.toINI { } {
"Default Applications" = formatDesktopLists (removePriorities (sortMimes (mergeMimes enabledWeightedMimes)));
}
);
localShareApplicationsPkg = (pkgs.symlinkJoin {
name = "user-local-share-applications";
paths = builtins.map
(p: builtins.toString p.package)
(enabledProgramsWithPackage ++ [ { package=mimeappsListPkg; } ]);
}).overrideAttrs (orig: {
# like normal symlinkJoin, but don't error if the path doesn't exist
buildCommand = ''
mkdir -p $out/share/applications
for i in $(cat $pathsPath); do
if [ -e "$i/share/applications" ]; then
${pkgs.buildPackages.xorg.lndir}/bin/lndir -silent $i/share/applications $out/share/applications
fi
done
runHook postBuild
'';
postBuild = ''
# rebuild `mimeinfo.cache`, used by file openers to show the list of *all* apps, not just the user's defaults.
${pkgs.buildPackages.desktop-file-utils}/bin/update-desktop-database $out/share/applications
'';
});
in
{
# the xdg mime type for a file can be found with:
# - `xdg-mime query filetype path/to/thing.ext`
# the default handler for a mime type can be found with:
# - `xdg-mime query default <mimetype>` (e.g. x-scheme-handler/http)
# the nix-configured handler can be found `nix-repl > :lf . > hostConfigs.desko.xdg.mime.defaultApplications`
#
# glib/gio is queried via glib.bin output:
# - `gio mime x-scheme-handler/https`
# - `gio open <path_or_url>`
# - `gio launch </path/to/app.desktop>`
#
# we can have single associations or a list of associations.
# there are also options to *remove* [non-default] associations from specific apps.
# N.B.: don't use nixos' `xdg.mime` option because that causes `/share/applications` to be linked into the whole system,
# which limits what i can do around sandboxing. getting the default associations to live in ~/ makes it easier to expose
# the associations to apps selectively.
# xdg.mime.enable = true;
# xdg.mime.defaultApplications = removePriorities (sortMimes (mergeMimes enabledWeightedMimes));
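# the generated mimeapps.list is a standard INI file; roughly (entries below are hypothetical, not this config's actual associations):
#   [Default Applications]
#   x-scheme-handler/https=firefox.desktop
#   image/png=org.gnome.Loupe.desktop;gimp.desktop
# where the first desktop file in each list is the preferred handler.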
sane.user.fs.".local/share/applications".symlink.target = "${localShareApplicationsPkg}/share/applications";
}

View File

@ -1,29 +0,0 @@
# TODO: this should be moved to users/colin.nix
{ config, lib, ... }:
let
host = config.networking.hostName;
user-pubkey-full = config.sane.ssh.pubkeys."colin@${host}" or {};
user-pubkey = user-pubkey-full.asUserKey or null;
host-keys = lib.filter (k: k.user == "root") (lib.attrValues config.sane.ssh.pubkeys);
known-hosts-text = lib.concatStringsSep
"\n"
(builtins.map (k: k.asHostKey) host-keys)
;
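# assuming `asHostKey` renders a standard known_hosts line, the generated file looks roughly like
# (hypothetical key material):
#   servo ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA...
#   desko ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA...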
in
{
# ssh key is stored in private storage
sane.user.persist.byStore.private = [
{ type = "file"; path = ".ssh/id_ed25519"; }
];
sane.user.fs.".ssh/id_ed25519.pub" = lib.mkIf (user-pubkey != null) {
symlink.text = user-pubkey;
};
sane.user.fs.".ssh/known_hosts".symlink.text = known-hosts-text;
users.users.colin.openssh.authorizedKeys.keys =
let
user-keys = lib.filter (k: k.user == "colin") (lib.attrValues config.sane.ssh.pubkeys);
in
builtins.map (k: k.asUserKey) user-keys;
}

View File

@ -1,27 +0,0 @@
{ ... }:
{
# XDG defines things like ~/Desktop, ~/Downloads, etc.
# these clutter the home, so i mostly don't use them.
# note that several of these are not actually standardized anywhere.
# some are even non-conventional, like:
# - XDG_PHOTOS_DIR: only works because i patch e.g. megapixels
sane.user.fs.".config/user-dirs.dirs".symlink.text = ''
XDG_DESKTOP_DIR="$HOME/.xdg/Desktop"
XDG_DOCUMENTS_DIR="$HOME/dev"
XDG_DOWNLOAD_DIR="$HOME/tmp"
XDG_MUSIC_DIR="$HOME/Music"
XDG_PHOTOS_DIR="$HOME/Pictures/Photos"
XDG_PICTURES_DIR="$HOME/Pictures"
XDG_PUBLICSHARE_DIR="$HOME/.xdg/Public"
XDG_SCREENSHOTS_DIR="$HOME/Pictures/Screenshots"
XDG_TEMPLATES_DIR="$HOME/.xdg/Templates"
XDG_VIDEOS_DIR="$HOME/Videos"
'';
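# the effective values can be sanity-checked with the `xdg-user-dir` CLI (from xdg-user-dirs), e.g.
# `xdg-user-dir DOWNLOAD` should print "$HOME/tmp" with this config.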
# prevent `xdg-user-dirs-update` from overriding/updating our config
# see <https://manpages.ubuntu.com/manpages/bionic/man5/user-dirs.conf.5.html>
sane.user.fs.".config/user-dirs.conf".symlink.text = "enabled=False";
sane.user.fs.".config/environment.d/30-user-dirs.conf".symlink.target = "../user-dirs.dirs";
}

View File

@ -1,47 +0,0 @@
{ lib, ... }:
{
# TODO: this should be populated per-host
sane.hosts.by-name."crappy" = {
ssh.user_pubkey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMIvSQAGKqmymXIL4La9B00LPxBIqWAr5AsJxk3UQeY5";
ssh.host_pubkey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMN0cpRAloCBOE5/2wuzgik35iNDv5KLceWMCVaa7DIQ";
# wg-home.pubkey = "TODO";
# wg-home.ip = "10.0.10.55";
lan-ip = "10.78.79.55";
};
sane.hosts.by-name."desko" = {
ssh.user_pubkey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPU5GlsSfbaarMvDA20bxpSZGWviEzXGD8gtrIowc1pX";
ssh.host_pubkey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFw9NoRaYrM6LbDd3aFBc4yyBlxGQn8HjeHd/dZ3CfHk";
wg-home.pubkey = "17PMZssYi0D4t2d0vbmhjBKe1sGsE8kT8/dod0Q2CXc=";
wg-home.ip = "10.0.10.22";
lan-ip = "10.78.79.52";
};
sane.hosts.by-name."lappy" = {
ssh.user_pubkey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDpmFdNSVPRol5hkbbCivRhyeENzb9HVyf9KutGLP2Zu";
ssh.host_pubkey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILSJnqmVl9/SYQ0btvGb0REwwWY8wkdkGXQZfn/1geEc";
wg-home.pubkey = "FTUWGw2p4/cEcrrIE86PWVnqctbv8OYpw8Gt3+dC/lk=";
wg-home.ip = "10.0.10.20";
lan-ip = "10.78.79.53";
};
sane.hosts.by-name."moby" = {
# ssh.authorized = lib.mkDefault false; # moby's too easy to hijack: don't let it ssh places
ssh.user_pubkey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICrR+gePnl0nV/vy7I5BzrGeyVL+9eOuXHU1yNE3uCwU";
ssh.host_pubkey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIO1N/IT3nQYUD+dBlU1sTEEVMxfOyMkrrDeyHcYgnJvw";
wg-home.pubkey = "I7XIR1hm8bIzAtcAvbhWOwIAabGkuEvbWH/3kyIB1yA=";
wg-home.ip = "10.0.10.48";
lan-ip = "10.78.79.54";
};
sane.hosts.by-name."servo" = {
ssh.authorized = lib.mkDefault false; # servo presents too many services to the internet: easy attack vector
ssh.user_pubkey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPS1qFzKurAdB9blkWomq8gI1g0T3sTs9LsmFOj5VtqX";
ssh.host_pubkey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOfdSmFkrVT6DhpgvFeQKm3Fh9VKZ9DbLYOPOJWYQ0E8";
wg-home.pubkey = "roAw+IUFVtdpCcqa4khB385Qcv9l5JAB//730tyK4Wk=";
wg-home.ip = "10.0.10.5";
wg-home.endpoint = "uninsane.org:51820";
lan-ip = "10.78.79.51";
};
}

Some files were not shown because too many files have changed in this diff