Merge branch 'master' into gcc-6

This commit is contained in:
Vladimír Čunát 2017-08-12 10:09:41 +02:00
commit 6899c7fdb9
No known key found for this signature in database
GPG Key ID: E747DF1F9575A3AA
2362 changed files with 36299 additions and 29476 deletions

.github/CODEOWNERS vendored Normal file
View File

@ -0,0 +1,23 @@
# CODEOWNERS file
#
# This file is used to describe who owns what in this repository. It does not
# replace `meta.maintainers`; instead it covers things other than derivations
# and modules, such as documentation, package sets, and other assets.
#
# For documentation on this file, see https://help.github.com/articles/about-codeowners/
# Mentioned users will get code review requests.
# Python-related code and docs
pkgs/top-level/python-packages.nix @FRidh
pkgs/development/interpreters/python/* @FRidh
pkgs/development/python-modules/* @FRidh
doc/languages-frameworks/python.md @FRidh
# Bootstrapping and core infra
pkgs/stdenv/ @Ericson2314
pkgs/build-support/cc-wrapper/ @Ericson2314
# Darwin-related
pkgs/stdenv/darwin/* @copumpkin @LnL7
pkgs/os-specific/darwin/* @LnL7
pkgs/os-specific/darwin/apple-source-releases/* @copumpkin

View File

@ -3,7 +3,7 @@
###### Things done
Please check what applies. Note that these are not hard requirements but mereley serve as information for reviewers.
Please check what applies. Note that these are not hard requirements but merely serve as information for reviewers.
- [ ] Tested using sandboxing
([nix.useSandbox](http://nixos.org/nixos/manual/options.html#opt-nix.useSandbox) on NixOS,

View File

@ -1,14 +0,0 @@
{
"userBlacklist": [
"civodul",
"jhasse",
"shlevy",
"bbenoist"
],
"alwaysNotifyForPaths": [
{ "name": "FRidh", "files": ["pkgs/top-level/python-packages.nix", "pkgs/development/interpreters/python/*", "pkgs/development/python-modules/*" ] },
{ "name": "LnL7", "files": ["pkgs/stdenv/darwin/*", "pkgs/os-specific/darwin/*"] },
{ "name": "copumpkin", "files": ["pkgs/stdenv/darwin/*", "pkgs/os-specific/darwin/apple-source-releases/*"] }
],
"fileBlacklist": ["pkgs/top-level/all-packages.nix"]
}

View File

@ -358,8 +358,8 @@
<para>
<varname>pkgs.dockerTools</varname> is a set of functions for creating and
manipulating Docker images according to the
<link xlink:href="https://github.com/docker/docker/blob/master/image/spec/v1.md#docker-image-specification-v100">
Docker Image Specification v1.0.0
<link xlink:href="https://github.com/moby/moby/blob/master/image/spec/v1.2.md#docker-image-specification-v120">
Docker Image Specification v1.2.0
</link>. Docker itself is not used to perform any of the operations done by these
functions.
</para>
@ -493,8 +493,8 @@
<varname>config</varname> is used to specify the configuration of the
containers that will be started off the built image in Docker.
The available options are listed in the
<link xlink:href="https://github.com/docker/docker/blob/master/image/spec/v1.md#container-runconfig-field-descriptions">
Docker Image Specification v1.0.0
<link xlink:href="https://github.com/moby/moby/blob/master/image/spec/v1.2.md#image-json-field-descriptions">
Docker Image Specification v1.2.0
</link>.
</para>
</callout>
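<para>As an illustration, a minimal sketch of a <varname>buildImage</varname>
call that sets such a <varname>config</varname> (the image name and the
command are assumptions, not taken from the manual):</para>
<programlisting>
<![CDATA[pkgs.dockerTools.buildImage {
  name = "hello-docker";               # hypothetical image name
  contents = [ pkgs.hello ];
  config = {
    # fields follow the image specification's run configuration
    Cmd = [ "${pkgs.hello}/bin/hello" ];
    WorkingDir = "/";
  };
}]]>
</programlisting>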

View File

@ -698,33 +698,6 @@ rm /nix/var/nix/manifests/*
rm /nix/var/nix/channel-cache/*
```
### How to use the Haste Haskell-to-Javascript transpiler
Open a shell with `haste-compiler` and `haste-cabal-install` (you don't actually need
`node`, but it can be useful to test stuff):
```shell
nix-shell \
-p "haskellPackages.ghcWithPackages (self: with self; [haste-cabal-install haste-compiler])" \
-p nodejs
```
You may not need the following step but if `haste-boot` fails to compile all the
packages it needs, this might do the trick
```shell
haste-cabal update
```
`haste-boot` builds a set of core libraries so that they can be used from Javascript
transpiled programs:
```shell
haste-boot
```
Transpile and run a "Hello world" program:
```
$ echo 'module Main where main = putStrLn "Hello world"' > hello-world.hs
$ hastec --onexec hello-world.hs
$ node hello-world.js
Hello world
```
### Builds on Darwin fail with `math.h` not found
Users of GHC on Darwin have occasionally reported that builds fail, because the

View File

@ -340,7 +340,7 @@ other packages we like to have in the environment, all specified with `propagate
Indeed, we can just add any package we like to have in our environment to `propagatedBuildInputs`.
```nix
with import <nixpkgs>;
with import <nixpkgs> {};
with pkgs.python35Packages;
buildPythonPackage rec {
@ -423,7 +423,7 @@ and in this case the `python35` interpreter is automatically used.
### Interpreters
Versions 2.7, 3.3, 3.4, 3.5 and 3.6 of the CPython interpreter are available as
respectively `python27`, `python33`, `python34`, `python35` and `python36`. The PyPy interpreter
respectively `python27`, `python34`, `python35` and `python36`. The PyPy interpreter
is available as `pypy`. The aliases `python2` and `python3` correspond to respectively `python27` and
`python35`. The default interpreter, `python`, maps to `python2`.
The Nix expressions for the interpreters can be found in
@ -469,7 +469,6 @@ sets are
* `pkgs.python26Packages`
* `pkgs.python27Packages`
* `pkgs.python33Packages`
* `pkgs.python34Packages`
* `pkgs.python35Packages`
* `pkgs.python36Packages`
@ -546,6 +545,35 @@ All parameters from `mkDerivation` function are still supported.
* `catchConflicts` If `true`, abort package build if a package name appears more than once in dependency tree. Default is `true`.
* `checkInputs` Dependencies needed for running the `checkPhase`. These are added to `buildInputs` when `doCheck = true`.
##### Overriding Python packages
The `buildPythonPackage` function has an `overridePythonAttrs` method that
can be used to override the package. In the following example we create an
environment containing the `blaze` package, built against an older version
of `pandas`. We first override the Python interpreter and pass
`packageOverrides`, which contains the overrides for packages in the package set.
```nix
with import <nixpkgs> {};
(let
python = let
packageOverrides = self: super: {
pandas = super.pandas.overridePythonAttrs(old: rec {
version = "0.19.1";
name = "pandas-${version}";
src = super.fetchPypi {
pname = "pandas";
inherit version;
sha256 = "08blshqj9zj1wyjhhw3kl2vas75vhhicvv72flvf1z3jvapgw295";
};
});
};
in pkgs.python3.override {inherit packageOverrides;};
in python.withPackages(ps: [ps.blaze])).env
```
#### `buildPythonApplication` function
The `buildPythonApplication` function is practically the same as `buildPythonPackage`.
@ -622,7 +650,7 @@ attribute. The `shell.nix` file from the previous section can thus be also writt
```nix
with import <nixpkgs> {};
(python33.withPackages (ps: [ps.numpy ps.requests])).env
(python36.withPackages (ps: [ps.numpy ps.requests])).env
```
In contrast to `python.buildEnv`, `python.withPackages` does not support the more advanced options
@ -755,17 +783,17 @@ In the following example we rename the `pandas` package and build it.
```nix
with import <nixpkgs> {};
let
(let
python = let
packageOverrides = self: super: {
pandas = super.pandas.override {name="foo";};
pandas = super.pandas.overridePythonAttrs(old: {name="foo";});
};
in pkgs.python35.override {inherit packageOverrides;};
in python.pkgs.pandas
in python.withPackages(ps: [ps.pandas])).env
```
Using `nix-build` on this expression will build the package `pandas`
but with the new name `foo`.
Using `nix-build` on this expression will build an environment that contains the
package `pandas` but with the new name `foo`.
All packages in the package set will use the renamed package.
A typical use case is to switch to another version of a certain package.

View File

@ -4,10 +4,14 @@
<title>Ruby</title>
<para>There currently is support to bundle applications that are packaged as Ruby gems. The utility "bundix" allows you to write a <filename>Gemfile</filename>, let bundler create a <filename>Gemfile.lock</filename>, and then convert
this into a nix expression that contains all Gem dependencies automatically.</para>
<para>There currently is support to bundle applications that are packaged as
Ruby gems. The utility "bundix" allows you to write a
<filename>Gemfile</filename>, let bundler create a
<filename>Gemfile.lock</filename>, and then convert this into a nix
expression that contains all Gem dependencies automatically.
</para>
<para>For example, to package sensu, we did:</para>
<para>For example, to package sensu, we did:</para>
<screen>
<![CDATA[$ cd pkgs/servers/monitoring
@ -38,15 +42,61 @@ bundlerEnv rec {
}]]>
</screen>
<para>Please check in the <filename>Gemfile</filename>, <filename>Gemfile.lock</filename> and the <filename>gemset.nix</filename> so future updates can be run easily.
<para>Please check in the <filename>Gemfile</filename>,
<filename>Gemfile.lock</filename> and the
<filename>gemset.nix</filename> so future updates can be run easily.
</para>
<para>Resulting derivations also have two helpful items, <literal>env</literal> and <literal>wrapper</literal>. The first one allows one to quickly drop into
<command>nix-shell</command> with the specified environment present. E.g. <command>nix-shell -A sensu.env</command> would give you an environment with Ruby preset
so it has all the libraries necessary for <literal>sensu</literal> in its paths. The second one can be used to make derivations from custom Ruby scripts which have
<filename>Gemfile</filename>s with their dependencies specified. It is a derivation with <command>ruby</command> wrapped so it can find all the needed dependencies.
For example, to make a derivation <literal>my-script</literal> for a <filename>my-script.rb</filename> (which should be placed in <filename>bin</filename>) you should
run <command>bundix</command> as specified above and then use <literal>bundlerEnv</literal> like this:</para>
<para>For tools written in Ruby - i.e. where the desire is to install
a package and then execute e.g. <command>rake</command> at the command
line, there is an alternative builder called <literal>bundlerApp</literal>.
Set up the <filename>gemset.nix</filename> the same way, and then, for
example:
</para>
<screen>
<![CDATA[{ lib, bundlerApp }:
bundlerApp {
pname = "corundum";
gemdir = ./.;
exes = [ "corundum-skel" ];
meta = with lib; {
description = "Tool and libraries for maintaining Ruby gems.";
homepage = https://github.com/nyarly/corundum;
license = licenses.mit;
maintainers = [ maintainers.nyarly ];
platforms = platforms.unix;
};
}]]>
</screen>
<para>The chief advantage of <literal>bundlerApp</literal> over
<literal>bundlerEnv</literal> is that the executables introduced in the
environment are precisely those selected in the <literal>exes</literal>
list, whereas <literal>bundlerEnv</literal> adds all the executables
made available by gems in the gemset, which can mean e.g.
<command>rspec</command> or <command>rake</command> turning up in
unpredictable versions from various packages.
</para>
<para>Resulting derivations for both builders also have two helpful
attributes, <literal>env</literal> and <literal>wrappedRuby</literal>.
The first one allows one to quickly drop into
<command>nix-shell</command> with the specified environment present.
E.g. <command>nix-shell -A sensu.env</command> would give you an
environment with Ruby preset so it has all the libraries necessary
for <literal>sensu</literal> in its paths. The second one can be
used to make derivations from custom Ruby scripts which have
<filename>Gemfile</filename>s with their dependencies specified. It is
a derivation with <command>ruby</command> wrapped so it can find all
the needed dependencies. For example, to make a derivation
<literal>my-script</literal> for a <filename>my-script.rb</filename>
(which should be placed in <filename>bin</filename>) you should run
<command>bundix</command> as specified above and then use
<literal>bundlerEnv</literal> like this:
</para>
<programlisting>
<![CDATA[let env = bundlerEnv {
@ -60,18 +110,12 @@ run <command>bundix</command> as specified above and then use <literal>bundlerEn
in stdenv.mkDerivation {
name = "my-script";
buildInputs = [ env.wrapper ];
buildInputs = [ env.wrappedRuby ];
script = ./my-script.rb;
buildCommand = ''
mkdir -p $out/bin
install -D -m755 $script $out/bin/my-script
patchShebangs $out/bin/my-script
'';
}]]>
</programlisting>
</section>

View File

@ -366,15 +366,33 @@ it. Place the resulting <filename>package.nix</filename> file into
</section>
<section xml:id="sec-autojump">
<section xml:id="sec-shell-helpers">
<title>Autojump</title>
<title>Interactive shell helpers</title>
<para>
autojump needs the shell integration to be useful but unlike other systems,
nix doesn't have a standard share directory location. This is why a
<command>autojump-share</command> script is shipped that prints the location
of the shared folder. This can then be used in the .bashrc like this:
Some packages provide shell integration that makes them more useful. But
unlike other systems, nix doesn't have a standard share directory
location. This is why a number of <command>PACKAGE-share</command>
scripts are shipped that print the location of the corresponding
shared folder.
The current list of such packages is as follows:
<itemizedlist>
<listitem>
<para>
<literal>autojump</literal>: <command>autojump-share</command>
</para>
</listitem>
<listitem>
<para>
<literal>fzf</literal>: <command>fzf-share</command>
</para>
</listitem>
</itemizedlist>
E.g. <literal>autojump</literal> can then be used in the .bashrc like this:
<screen>
source "$(autojump-share)/autojump.bash"
</screen>
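Similarly, <literal>fzf</literal>'s integration can be loaded via
<command>fzf-share</command> (a sketch; the exact file names under the
shared folder are an assumption):
<screen>
source "$(fzf-share)/key-bindings.bash"
source "$(fzf-share)/completion.bash"
</screen>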

View File

@ -309,48 +309,6 @@ rec {
mergeAttrsByFuncDefaults = foldl mergeAttrByFunc { inherit mergeAttrBy; };
mergeAttrsByFuncDefaultsClean = list: removeAttrs (mergeAttrsByFuncDefaults list) ["mergeAttrBy"];
# merge attrs based on version key into mkDerivation args, see mergeAttrBy to learn about smart merge defaults
#
# This function is best explained by an example:
#
# {version ? "2.x"}:
#
# mkDerivation (mergeAttrsByVersion "package-name" version
# { # version specific settings
# "git" = { src = ..; preConfigre = "autogen.sh"; buildInputs = [automake autoconf libtool]; };
# "2.x" = { src = ..; };
# }
# { // shared settings
# buildInputs = [ common build inputs ];
# meta = { .. }
# }
# )
#
# Please note that e.g. Eelco Dolstra usually prefers having one file for
# each version. On the other hand there are valuable additional design goals
# - readability
# - do it once only
# - try to avoid duplication
#
# Marc Weber and Michael Raskin sometimes prefer keeping older
# versions around for testing and regression tests - as long as its cheap to
# do so.
#
# Very often it just happens that the "shared" code is the bigger part.
# Then using this function might be appropriate.
#
# Be aware that its easy to cause recompilations in all versions when using
# this function - also if derivations get too complex splitting into multiple
# files is the way to go.
#
# See misc.nix -> versionedDerivation
# discussion: nixpkgs: pull/310
mergeAttrsByVersion = name: version: attrsByVersion: base:
mergeAttrsByFuncDefaultsClean [ { name = "${name}-${version}"; }
base
(maybeAttr version (throw "bad version ${version} for ${name}") attrsByVersion)
];
# sane defaults (same name as attr name so that inherit can be used)
mergeAttrBy = # { buildInputs = concatList; [...]; passthru = mergeAttr; [..]; }
listToAttrs (map (n: nameValuePair n lib.concat)

View File

@ -546,12 +546,12 @@ lib.mapAttrs (n: v: v // { shortName = n; }) rec {
fullName = "zlib License";
};
zpt20 = spdx { # FIXME: why zpt* instead of zpl*
zpl20 = spdx {
spdxId = "ZPL-2.0";
fullName = "Zope Public License 2.0";
};
zpt21 = spdx {
zpl21 = spdx {
spdxId = "ZPL-2.1";
fullName = "Zope Public License 2.1";
};

View File

@ -75,6 +75,7 @@
berdario = "Dario Bertini <berdario@gmail.com>";
bergey = "Daniel Bergey <bergey@teallabs.org>";
bhipple = "Benjamin Hipple <bhipple@protonmail.com>";
binarin = "Alexey Lebedeff <binarin@binarin.ru>";
bjg = "Brian Gough <bjg@gnu.org>";
bjornfor = "Bjørn Forsman <bjorn.forsman@gmail.com>";
bluescreen303 = "Mathijs Kwik <mathijs@bluescreen303.nl>";
@ -93,6 +94,7 @@
campadrenalin = "Philip Horger <campadrenalin@gmail.com>";
canndrew = "Andrew Cann <shum@canndrew.org>";
carlsverre = "Carl Sverre <accounts@carlsverre.com>";
casey = "Casey Rodarmor <casey@rodarmor.net>";
cdepillabout = "Dennis Gosnell <cdep.illabout@gmail.com>";
cfouche = "Chaddaï Fouché <chaddai.fouche@gmail.com>";
changlinli = "Changlin Li <mail@changlinli.com>";
@ -135,6 +137,7 @@
dbrock = "Daniel Brockman <daniel@brockman.se>";
deepfire = "Kosyrev Serge <_deepfire@feelingofgreen.ru>";
demin-dmitriy = "Dmitriy Demin <demindf@gmail.com>";
derchris = "Christian Gerbrandt <derchris@me.com>";
DerGuteMoritz = "Moritz Heidkamp <moritz@twoticketsplease.de>";
dermetfan = "Robin Stumm <serverkorken@gmail.com>";
DerTim1 = "Tim Digel <tim.digel@active-group.de>";
@ -213,6 +216,7 @@
gilligan = "Tobias Pflug <tobias.pflug@gmail.com>";
giogadi = "Luis G. Torres <lgtorres42@gmail.com>";
gleber = "Gleb Peregud <gleber.p@gmail.com>";
glenns = "Glenn Searby <glenn.searby@gmail.com>";
globin = "Robin Gloster <mail@glob.in>";
gnidorah = "Alex Ivanov <yourbestfriend@opmbx.org>";
goibhniu = "Cillian de Róiste <cillian.deroiste@gmail.com>";
@ -248,7 +252,7 @@
jammerful = "jammerful <jammerful@gmail.com>";
jansol = "Jan Solanti <jan.solanti@paivola.fi>";
javaguirre = "Javier Aguirre <contacto@javaguirre.net>";
jb55 = "William Casarin <bill@casarin.me>";
jb55 = "William Casarin <jb55@jb55.com>";
jbedo = "Justin Bedő <cu@cua0.org>";
jcumming = "Jack Cummings <jack@mudshark.org>";
jdagilliland = "Jason Gilliland <jdagilliland@gmail.com>";
@ -289,8 +293,10 @@
khumba = "Bryan Gardiner <bog@khumba.net>";
KibaFox = "Kiba Fox <kiba.fox@foxypossibilities.com>";
kierdavis = "Kier Davis <kierdavis@gmail.com>";
kiloreux = "Kiloreux Emperex <kiloreux@gmail.com>";
kkallio = "Karn Kallio <tierpluspluslists@gmail.com>";
knedlsepp = "Josef Kemetmüller <josef.kemetmueller@gmail.com>";
konimex = "Muhammad Herdiansyah <herdiansyah@openmailbox.org>";
koral = "Koral <koral@mailoo.org>";
kovirobi = "Kovacsics Robert <kovirobi@gmail.com>";
kragniz = "Louis Taylor <louis@kragniz.eu>";
@ -311,6 +317,7 @@
lihop = "Leroy Hopson <nixos@leroy.geek.nz>";
linquize = "Linquize <linquize@yahoo.com.hk>";
linus = "Linus Arver <linusarver@gmail.com>";
lluchs = "Lukas Werling <lukas.werling@gmail.com>";
lnl7 = "Daiderd Jordan <daiderd@gmail.com>";
loskutov = "Ignat Loskutov <ignat.loskutov@gmail.com>";
lovek323 = "Jason O'Conal <jason@oconal.id.au>";
@ -374,6 +381,7 @@
MostAwesomeDude = "Corbin Simpson <cds@corbinsimpson.com>";
mounium = "Katona László <muoniurn@gmail.com>";
MP2E = "Cray Elliott <MP2E@archlinux.us>";
mpcsh = "Mark Cohen <m@mpc.sh>";
mpscholten = "Marc Scholten <marc@mpscholten.de>";
mpsyco = "Francis St-Amour <fr.st-amour@gmail.com>";
msackman = "Matthew Sackman <matthew@wellquite.org>";
@ -405,6 +413,7 @@
np = "Nicolas Pouillard <np.nix@nicolaspouillard.fr>";
nslqqq = "Nikita Mikhailov <nslqqq@gmail.com>";
nthorne = "Niklas Thörne <notrupertthorne@gmail.com>";
nyarly = "Judson Lester <nyarly@gmail.com>";
obadz = "obadz <obadz-nixos@obadz.com>";
ocharles = "Oliver Charles <ollie@ocharles.org.uk>";
odi = "Oliver Dunkl <oliver.dunkl@gmail.com>";
@ -499,6 +508,7 @@
ryanartecona = "Ryan Artecona <ryanartecona@gmail.com>";
ryansydnor = "Ryan Sydnor <ryan.t.sydnor@gmail.com>";
ryantm = "Ryan Mulligan <ryan@ryantm.com>";
rybern = "Ryan Bernstein <ryan.bernstein@columbia.edu>";
rycee = "Robert Helgesson <robert@rycee.net>";
ryneeverett = "Ryne Everett <ryneeverett@gmail.com>";
rzetterberg = "Richard Zetterberg <richard.zetterberg@gmail.com>";
@ -506,6 +516,7 @@
samuelrivas = "Samuel Rivas <samuelrivas@gmail.com>";
sander = "Sander van der Burg <s.vanderburg@tudelft.nl>";
sargon = "Daniel Ehlers <danielehlers@mindeye.net>";
sauyon = "Sauyon Lee <s@uyon.co>";
schmitthenner = "Fabian Schmitthenner <development@schmitthenner.eu>";
schneefux = "schneefux <schneefux+nixos_pkg@schneefux.xyz>";
schristo = "Scott Christopher <schristopher@konputa.com>";
@ -627,6 +638,7 @@
zauberpony = "Elmar Athmer <elmar@athmer.org>";
zef = "Zef Hemel <zef@zef.me>";
zimbatm = "zimbatm <zimbatm@zimbatm.com>";
Zimmi48 = "Théo Zimmermann <theo.zimmermann@univ-paris-diderot.fr>";
zohl = "Al Zohali <zohl@fmap.me>";
zoomulator = "Kim Simmons <zoomulator@gmail.com>";
zraexy = "David Mell <zraexy@gmail.com>";

View File

@ -543,6 +543,10 @@ rec {
# Cavium ThunderX stuff.
PCI_HOST_THUNDER_ECAM y
# The default (=y) forces us to have the XHCI firmware available in initrd,
# which our initrd builder can't currently do easily.
USB_XHCI_TEGRA m
'';
uboot = null;
kernelTarget = "Image";

View File

@ -2,7 +2,7 @@
set -o pipefail
GNOME_FTP="ftp.gnome.org/pub/GNOME/sources"
GNOME_FTP=ftp.gnome.org/pub/GNOME/sources
# projects that don't follow the GNOME major versioning, or that we don't want to
# programmatically update
@ -18,10 +18,10 @@ if [ "$#" -lt 2 ]; then
usage
fi
GNOME_TOP="$1"
GNOME_TOP=$1
shift
action="$1"
action=$1
# curl -l ftp://... doesn't work from my office in HSE, and I don't want to have
# any conversations with sysadmin. Somehow lftp works.
@ -36,18 +36,18 @@ else
fi
find_project() {
exec find "$GNOME_TOP" -mindepth 2 -maxdepth 2 -type d $@
exec find "$GNOME_TOP" -mindepth 2 -maxdepth 2 -type d "$@"
}
show_project() {
local project="$1"
local majorVersion="$2"
local version=""
local project=$1
local majorVersion=$2
local version=
if [ -z "$majorVersion" ]; then
echo "Looking for available versions..." >&2
local available_baseversions=( `ls_ftp ftp://${GNOME_FTP}/${project} | grep '[0-9]\.[0-9]' | sort -t. -k1,1n -k 2,2n` )
if [ "$?" -ne "0" ]; then
local available_baseversions=$(ls_ftp ftp://${GNOME_FTP}/${project} | grep '[0-9]\.[0-9]' | sort -t. -k1,1n -k 2,2n)
if [ "$?" -ne 0 ]; then
echo "Project $project not found" >&2
return 1
fi
@ -59,11 +59,11 @@ show_project() {
if echo "$majorVersion" | grep -q "[0-9]\+\.[0-9]\+\.[0-9]\+"; then
# not a major version
version="$majorVersion"
version=$majorVersion
majorVersion=$(echo "$majorVersion" | cut -d '.' -f 1,2)
fi
local FTPDIR="${GNOME_FTP}/${project}/${majorVersion}"
local FTPDIR=${GNOME_FTP}/${project}/${majorVersion}
#version=`curl -l ${FTPDIR}/ 2>/dev/null | grep LATEST-IS | sed -e s/LATEST-IS-//`
# gnome's LATEST-IS is broken. Do not trust it.
@ -92,7 +92,7 @@ show_project() {
esac
done
echo "Found versions ${!versions[@]}" >&2
version=`echo ${!versions[@]} | sed -e 's/ /\n/g' | sort -t. -k1,1n -k 2,2n -k 3,3n | tail -n1`
version=$(echo ${!versions[@]} | sed -e 's/ /\n/g' | sort -t. -k1,1n -k 2,2n -k 3,3n | tail -n1)
if [ -z "$version" ]; then
echo "No version available for major $majorVersion" >&2
return 1
@ -103,7 +103,7 @@ show_project() {
local name=${project}-${version}
echo "Fetching .sha256 file" >&2
local sha256out=$(curl -s -f http://${FTPDIR}/${name}.sha256sum)
local sha256out=$(curl -s -f http://"${FTPDIR}"/"${name}".sha256sum)
if [ "$?" -ne "0" ]; then
echo "Version not found" >&2
@ -136,8 +136,8 @@ fetchurl: {
}
update_project() {
local project="$1"
local majorVersion="$2"
local project=$1
local majorVersion=$2
# find project in nixpkgs tree
projectPath=$(find_project -name "$project" -print)
@ -150,14 +150,14 @@ update_project() {
if [ "$?" -eq "0" ]; then
echo "Updating $projectPath/src.nix" >&2
echo -e "$src" > "$projectPath/src.nix"
echo -e "$src" > "$projectPath"/src.nix
fi
return 0
}
if [ "$action" == "update-all" ]; then
majorVersion="$2"
if [ "$action" = "update-all" ]; then
majorVersion=$2
if [ -z "$majorVersion" ]; then
echo "No major version specified" >&2
usage
@ -170,23 +170,23 @@ if [ "$action" == "update-all" ]; then
echo "Skipping $project"
else
echo "= Updating $project to $majorVersion" >&2
update_project $project $majorVersion
update_project "$project" "$majorVersion"
echo >&2
fi
done
else
project="$2"
majorVersion="$3"
project=$2
majorVersion=$3
if [ -z "$project" ]; then
echo "No project specified, exiting" >&2
usage
fi
if [ "$action" == "show" ]; then
show_project $project $majorVersion
elif [ "$action" == "update" ]; then
update_project $project $majorVersion
if [ "$action" = show ]; then
show_project "$project" "$majorVersion"
elif [ "$action" = update ]; then
update_project "$project" "$majorVersion"
else
echo "Unknown action $action" >&2
usage

View File

@ -17,11 +17,16 @@
<refsynopsisdiv>
<cmdsynopsis>
<command>nixos-option</command>
<arg choice='plain'><replaceable>option.name</replaceable></arg>
<arg>
<option>-I</option>
<replaceable>path</replaceable>
</arg>
<arg><option>--verbose</option></arg>
<arg><option>--xml</option></arg>
<arg choice="plain"><replaceable>option.name</replaceable></arg>
</cmdsynopsis>
</refsynopsisdiv>
<refsection><title>Description</title>
<para>This command evaluates the configuration specified in
@ -33,6 +38,45 @@ attributes contained in the attribute set.</para>
</refsection>
<refsection><title>Options</title>
<para>This command accepts the following options:</para>
<variablelist>
<varlistentry>
<term><option>-I</option> <replaceable>path</replaceable></term>
<listitem>
<para>
This option is passed to the underlying
<command>nix-instantiate</command> invocation.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><option>--verbose</option></term>
<listitem>
<para>
This option enables verbose mode, which currently is just
the Bash <command>set</command> <option>-x</option> debug mode.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><option>--xml</option></term>
<listitem>
<para>
This option causes the output to be rendered as XML.
</para>
</listitem>
</varlistentry>
</variablelist>
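<para>For example, a hypothetical invocation combining these options
(the option name and the path are illustrations only):</para>
<screen>
$ nixos-option -I nixpkgs=/path/to/nixpkgs --xml services.openssh.enable
</screen>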
</refsection>
<refsection><title>Environment</title>
<variablelist>

View File

@ -130,6 +130,30 @@ rmdir /var/lib/ipfs/.ipfs
instead. Refer to the description of the options for more details.
</para>
</listitem>
<listitem>
<para>
<literal>tlsdate</literal> package and module were removed. This is due to the project
being dead and not building with openssl 1.1.
</para>
</listitem>
<listitem>
<para>
<literal>wvdial</literal> package and module were removed. This is due to the project
being dead and not building with openssl 1.1.
</para>
</listitem>
<listitem>
<para>
<literal>cc-wrapper</literal>'s setup-hook now exports a number of
environment variables corresponding to binutils binaries
(e.g. <envar>LD</envar>, <envar>STRIP</envar>, <envar>RANLIB</envar>,
etc). This is done to prevent packages' build systems from guessing,
which is harder to predict, especially when cross-compiling. However,
some packages have broken as a result: their build systems either do not
support taking such environment variables as parameters, or claim to do
so without adequate testing. See the sketch after this list for an
example of using the exported variables.
</para>
</listitem>
</itemizedlist>
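<para>As a sketch of the <literal>cc-wrapper</literal> change above, a
build phase can refer to the exported variables instead of hard-coding
tool names (the package and the commands are hypothetical):</para>
<programlisting>
stdenv.mkDerivation {
  name = "example-0.1";   # hypothetical package
  postBuild = ''
    # STRIP and RANLIB point at the wrappers exported by cc-wrapper's setup hook
    $STRIP --strip-debug libexample.a
    $RANLIB libexample.a
  '';
}
</programlisting>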
<para>Other notable improvements:</para>
@ -157,6 +181,21 @@ rmdir /var/lib/ipfs/.ipfs
module where user Fontconfig settings are available.
</para>
</listitem>
<listitem>
<para>
ZFS/SPL have been updated to 0.7.0; <literal>zfsUnstable</literal> and
<literal>splUnstable</literal> have therefore been removed.
</para>
</listitem>
<listitem>
<para>
The <option>time.timeZone</option> option now allows the value
<literal>null</literal> in addition to timezone strings. This value
allows changing the timezone of a system imperatively using
<command>timedatectl set-timezone</command> (see the sketch after this
list). The default timezone is still UTC.
</para>
</listitem>
</itemizedlist>
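<para>As a sketch, imperative timezone management can be opted into by
setting the option to <literal>null</literal> and then using
<command>timedatectl</command> at runtime (the chosen timezone is only
an example):</para>
<programlisting>
# configuration.nix (sketch)
time.timeZone = null;
</programlisting>
<screen>
$ timedatectl set-timezone Europe/Berlin
</screen>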

View File

@ -39,6 +39,12 @@
with lib;
let
extensions = {
qcow2 = "qcow2";
vpc = "vhd";
raw = "img";
};
# Copied from https://github.com/NixOS/nixpkgs/blob/master/nixos/modules/installer/cd-dvd/channel.nix
# TODO: factor out more cleanly
@ -142,8 +148,8 @@ in pkgs.vmTools.runInLinuxVM (
mv $diskImage $out/nixos.img
diskImage=$out/nixos.img
'' else ''
${pkgs.qemu}/bin/qemu-img convert -f raw -O qcow2 $diskImage $out/nixos.qcow2
diskImage=$out/nixos.qcow2
${pkgs.qemu}/bin/qemu-img convert -f raw -O ${format} $diskImage $out/nixos.${extensions.${format}}
diskImage=$out/nixos.${extensions.${format}}
''}
${postVM}
'';

View File

@ -33,7 +33,7 @@ pkgs.stdenv.mkDerivation {
echo "Creating an EXT4 image of $bytes bytes (numInodes=$numInodes, numDataBlocks=$numDataBlocks)"
truncate -s $bytes $out
faketime "1970-01-01 00:00:00" mkfs.ext4 -L ${volumeLabel} -U 44444444-4444-4444-8888-888888888888 $out
faketime -f "1970-01-01 00:00:01" mkfs.ext4 -L ${volumeLabel} -U 44444444-4444-4444-8888-888888888888 $out
# Populate the image contents by piping a bunch of commands to the `debugfs` tool from e2fsprogs.
# For example, to copy /nix/store/abcd...efg-coreutils-8.23/bin/sleep:
@ -76,7 +76,7 @@ pkgs.stdenv.mkDerivation {
echo sif $file gid 30000 # chgrp to nixbld
done
) | faketime "1970-01-01 00:00:00" debugfs -w $out -f /dev/stdin > errorlog 2>&1
) | faketime -f "1970-01-01 00:00:01" debugfs -w $out -f /dev/stdin > errorlog 2>&1
# The debugfs tool doesn't terminate on error nor exit with a non-zero status. Check manually.
if egrep -q 'Could not allocate|File not found' errorlog; then

View File

@ -22,15 +22,26 @@ in {
generated image. Glob patterns work.
'';
};
sizeMB = mkOption {
type = types.int;
default = if config.ec2.hvm then 2048 else 8192;
description = "The size in MB of the image";
};
format = mkOption {
type = types.enum [ "raw" "qcow2" "vpc" ];
default = "qcow2";
description = "The image format to output";
};
};
config.system.build.amazonImage = import ../../../lib/make-disk-image.nix {
inherit lib config;
inherit (cfg) contents;
inherit (cfg) contents format;
pkgs = import ../../../.. { inherit (pkgs) system; }; # ensure we use the regular qemu-kvm package
partitioned = config.ec2.hvm;
diskSize = if config.ec2.hvm then 2048 else 8192;
format = "qcow2";
diskSize = cfg.sizeMB;
configFile = pkgs.writeText "configuration.nix"
''
{
@ -41,5 +52,4 @@ in {
}
'';
};
}

View File

@ -20,12 +20,26 @@ in
options = {
networking.hosts = lib.mkOption {
type = types.attrsOf ( types.listOf types.str );
default = {};
example = literalExample ''
{
"127.0.0.1" = [ "foo.bar.baz" ];
"192.168.0.2" = [ "fileserver.local" "nameserver.local" ];
};
'';
description = ''
Locally defined maps of hostnames to IP addresses.
'';
};
networking.extraHosts = lib.mkOption {
type = types.lines;
default = "";
example = "192.168.0.1 lanlocalhost";
description = ''
Additional entries to be appended to <filename>/etc/hosts</filename>.
Additional verbatim entries to be appended to <filename>/etc/hosts</filename>.
'';
};
@ -188,11 +202,22 @@ in
# /etc/hosts: Hostname-to-IP mappings.
"hosts".text =
let oneToString = set : ip : ip + " " + concatStringsSep " " ( getAttr ip set );
allToString = set : concatMapStringsSep "\n" ( oneToString set ) ( attrNames set );
userLocalHosts = optionalString
( builtins.hasAttr "127.0.0.1" cfg.hosts )
( concatStringsSep " " ( remove "localhost" cfg.hosts."127.0.0.1" ));
userLocalHosts6 = optionalString
( builtins.hasAttr "::1" cfg.hosts )
( concatStringsSep " " ( remove "localhost" cfg.hosts."::1" ));
otherHosts = allToString ( removeAttrs cfg.hosts [ "127.0.0.1" "::1" ]);
in
''
127.0.0.1 localhost
127.0.0.1 ${userLocalHosts} localhost
${optionalString cfg.enableIPv6 ''
::1 localhost
::1 ${userLocalHosts6} localhost
''}
${otherHosts}
${cfg.extraHosts}
'';

View File

@ -26,7 +26,15 @@ with lib;
fonts.fontconfig.enable = false;
nixpkgs.config.packageOverrides = pkgs:
{ dbus = pkgs.dbus.override { x11Support = false; }; };
nixpkgs.config.packageOverrides = pkgs: {
dbus = pkgs.dbus.override { x11Support = false; };
networkmanager_fortisslvpn = pkgs.networkmanager_fortisslvpn.override { withGnome = false; };
networkmanager_l2tp = pkgs.networkmanager_l2tp.override { withGnome = false; };
networkmanager_openconnect = pkgs.networkmanager_openconnect.override { withGnome = false; };
networkmanager_openvpn = pkgs.networkmanager_openvpn.override { withGnome = false; };
networkmanager_pptp = pkgs.networkmanager_pptp.override { withGnome = false; };
networkmanager_vpnc = pkgs.networkmanager_vpnc.override { withGnome = false; };
pinentry = pkgs.pinentry.override { gtk2 = null; qt4 = null; };
};
};
}

View File

@ -28,7 +28,8 @@ let
passwdArray = [ "files" ]
++ optional sssd "sss"
++ optionals ldap [ "ldap" ]
++ optionals mymachines [ "mymachines" ];
++ optionals mymachines [ "mymachines" ]
++ [ "systemd" ];
shadowArray = [ "files" ]
++ optional sssd "sss"

View File

@ -224,7 +224,7 @@ in {
# Allow PulseAudio to get realtime priority using rtkit.
security.rtkit.enable = true;
systemd.packages = [ cfg.package ];
systemd.packages = [ overriddenPackage ];
})
(mkIf hasZeroconf {

View File

@ -5,6 +5,52 @@ with lib;
let
randomEncryptionCoerce = enable: { inherit enable; };
randomEncryptionOpts = { ... }: {
options = {
enable = mkOption {
default = false;
type = types.bool;
description = ''
Encrypt swap device with a random key. This way you won't have a persistent swap device.
WARNING: Don't try to hibernate when you have at least one swap partition with
this option enabled! We have no way to set the partition into which the
hibernation image is saved, so if your image ends up on an encrypted one
you would lose it!
WARNING #2: Do not use /dev/disk/by-uuid/… or /dev/disk/by-label/… as your swap device
when using randomEncryption, as the UUIDs and labels will get erased on every boot when
the partition is encrypted. It is best to use /dev/disk/by-partuuid/ instead.
'';
};
cipher = mkOption {
default = "aes-xts-plain64";
example = "serpent-xts-plain64";
type = types.str;
description = ''
Use specified cipher for randomEncryption.
Hint: Run "cryptsetup benchmark" to see which one is fastest on your machine.
'';
};
source = mkOption {
default = "/dev/urandom";
example = "/dev/random";
type = types.str;
description = ''
Define the source of randomness to obtain a random key for encryption.
'';
};
};
};
swapCfg = {config, options, ...}: {
options = {
@ -47,10 +93,17 @@ let
randomEncryption = mkOption {
default = false;
type = types.bool;
example = {
enable = true;
cipher = "serpent-xts-plain64";
source = "/dev/random";
};
type = types.coercedTo types.bool randomEncryptionCoerce (types.submodule randomEncryptionOpts);
description = ''
Encrypt swap device with a random key. This way you won't have a persistent swap device.
HINT: run "cryptsetup benchmark" to test cipher performance on your machine.
WARNING: Don't try to hibernate when you have at least one swap partition with
this option enabled! We have no way to set the partition into which hibernation image
is saved, so if your image ends up on an encrypted one you would lose it!
@ -77,7 +130,7 @@ let
device = mkIf options.label.isDefined
"/dev/disk/by-label/${config.label}";
deviceName = lib.replaceChars ["\\"] [""] (escapeSystemdPath config.device);
realDevice = if config.randomEncryption then "/dev/mapper/${deviceName}" else config.device;
realDevice = if config.randomEncryption.enable then "/dev/mapper/${deviceName}" else config.device;
};
};
@ -125,14 +178,14 @@ in
createSwapDevice = sw:
assert sw.device != "";
assert !(sw.randomEncryption && lib.hasPrefix "/dev/disk/by-uuid" sw.device);
assert !(sw.randomEncryption && lib.hasPrefix "/dev/disk/by-label" sw.device);
assert !(sw.randomEncryption.enable && lib.hasPrefix "/dev/disk/by-uuid" sw.device);
assert !(sw.randomEncryption.enable && lib.hasPrefix "/dev/disk/by-label" sw.device);
let realDevice' = escapeSystemdPath sw.realDevice;
in nameValuePair "mkswap-${sw.deviceName}"
{ description = "Initialisation of swap device ${sw.device}";
wantedBy = [ "${realDevice'}.swap" ];
before = [ "${realDevice'}.swap" ];
path = [ pkgs.utillinux ] ++ optional sw.randomEncryption pkgs.cryptsetup;
path = [ pkgs.utillinux ] ++ optional sw.randomEncryption.enable pkgs.cryptsetup;
script =
''
@ -145,13 +198,11 @@ in
truncate --size "${toString sw.size}M" "${sw.device}"
fi
chmod 0600 ${sw.device}
${optionalString (!sw.randomEncryption) "mkswap ${sw.realDevice}"}
${optionalString (!sw.randomEncryption.enable) "mkswap ${sw.realDevice}"}
fi
''}
${optionalString sw.randomEncryption ''
echo "secretkey" | cryptsetup luksFormat --batch-mode ${sw.device}
echo "secretkey" | cryptsetup luksOpen ${sw.device} ${sw.deviceName}
cryptsetup luksErase --batch-mode ${sw.device}
${optionalString sw.randomEncryption.enable ''
cryptsetup plainOpen -c ${sw.randomEncryption.cipher} -d ${sw.randomEncryption.source} ${sw.device} ${sw.deviceName}
mkswap ${sw.realDevice}
''}
'';
@ -159,12 +210,12 @@ in
unitConfig.RequiresMountsFor = [ "${dirOf sw.device}" ];
unitConfig.DefaultDependencies = false; # needed to prevent a cycle
serviceConfig.Type = "oneshot";
serviceConfig.RemainAfterExit = sw.randomEncryption;
serviceConfig.ExecStop = optionalString sw.randomEncryption "${pkgs.cryptsetup}/bin/cryptsetup luksClose ${sw.deviceName}";
serviceConfig.RemainAfterExit = sw.randomEncryption.enable;
serviceConfig.ExecStop = optionalString sw.randomEncryption.enable "${pkgs.cryptsetup}/bin/cryptsetup luksClose ${sw.deviceName}";
restartIfChanged = false;
};
in listToAttrs (map createSwapDevice (filter (sw: sw.size != null || sw.randomEncryption) config.swapDevices));
in listToAttrs (map createSwapDevice (filter (sw: sw.size != null || sw.randomEncryption.enable) config.swapDevices));
};
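For illustration, a minimal sketch of how the new submodule form of
`randomEncryption` might be used from a configuration (the partuuid is a
placeholder, not a real device):

```nix
swapDevices = [
  {
    # by-partuuid paths survive the re-encryption happening on every boot
    device = "/dev/disk/by-partuuid/11111111-2222-3333-4444-555555555555";
    randomEncryption = {
      enable = true;
      cipher = "serpent-xts-plain64";
      source = "/dev/random";
    };
  }
];
```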

View File

@ -118,6 +118,9 @@ in
"/share/themes"
"/share/vim-plugins"
"/share/vulkan"
"/share/kservices5"
"/share/kservicetypes5"
"/share/kxmlgui5"
];
system.path = pkgs.buildEnv {

View File

@ -14,13 +14,16 @@ in
time = {
timeZone = mkOption {
default = "UTC";
type = types.str;
default = null;
type = types.nullOr types.str;
example = "America/New_York";
description = ''
The time zone used when displaying times and dates. See <link
xlink:href="https://en.wikipedia.org/wiki/List_of_tz_database_time_zones"/>
for a comprehensive list of possible values for this setting.
If null, the timezone will default to UTC and can be set imperatively
using timedatectl.
'';
};
@ -40,13 +43,14 @@ in
# This way services are restarted when tzdata changes.
systemd.globalEnvironment.TZDIR = tzdir;
environment.etc.localtime =
{ source = "/etc/zoneinfo/${config.time.timeZone}";
mode = "direct-symlink";
systemd.services.systemd-timedated.environment = lib.optionalAttrs (config.time.timeZone != null) { NIXOS_STATIC_TIMEZONE = "1"; };
environment.etc = {
zoneinfo.source = tzdir;
} // lib.optionalAttrs (config.time.timeZone != null) {
localtime.source = "/etc/zoneinfo/${config.time.timeZone}";
localtime.mode = "direct-symlink";
};
environment.etc.zoneinfo.source = tzdir;
};
}

View File

@ -527,7 +527,7 @@ in {
input.gid = ids.gids.input;
};
system.activationScripts.users = stringAfter [ "etc" ]
system.activationScripts.users = stringAfter [ "stdio" ]
''
${pkgs.perl}/bin/perl -w \
-I${pkgs.perlPackages.FileSlurp}/lib/perl5/site_perl \

View File

@ -3,7 +3,7 @@
with lib;
{
meta.maintainers = [ maintainers.grahamc ];
meta.maintainers = with maintainers; [ grahamc ];
options = {
hardware.mcelog = {
@ -19,19 +19,17 @@ with lib;
};
config = mkIf config.hardware.mcelog.enable {
systemd.services.mcelog = {
description = "Machine Check Exception Logging Daemon";
wantedBy = [ "multi-user.target" ];
systemd = {
packages = [ pkgs.mcelog ];
serviceConfig = {
ExecStart = "${pkgs.mcelog}/bin/mcelog --daemon --foreground";
SuccessExitStatus = [ 0 15 ];
ProtectHome = true;
PrivateNetwork = true;
PrivateTmp = true;
services.mcelog = {
wantedBy = [ "multi-user.target" ];
serviceConfig = {
ProtectHome = true;
PrivateNetwork = true;
PrivateTmp = true;
};
};
};
};
}

View File

@ -105,7 +105,6 @@
./programs/venus.nix
./programs/vim.nix
./programs/wireshark.nix
./programs/wvdial.nix
./programs/xfs_quota.nix
./programs/xonsh.nix
./programs/zsh/oh-my-zsh.nix
@ -356,6 +355,7 @@
./services/monitoring/munin.nix
./services/monitoring/nagios.nix
./services/monitoring/netdata.nix
./services/monitoring/osquery.nix
./services/monitoring/prometheus/default.nix
./services/monitoring/prometheus/alertmanager.nix
./services/monitoring/prometheus/blackbox-exporter.nix
@ -516,7 +516,6 @@
./services/networking/teamspeak3.nix
./services/networking/tinc.nix
./services/networking/tftpd.nix
./services/networking/tlsdated.nix
./services/networking/tox-bootstrapd.nix
./services/networking/toxvpn.nix
./services/networking/tvheadend.nix

View File

@ -92,7 +92,7 @@ in
'');
assertions = [
{ assertion = cfg.agent.enableSSHSupport && !config.programs.ssh.startAgent;
{ assertion = cfg.agent.enableSSHSupport -> !config.programs.ssh.startAgent;
message = "You can't use ssh-agent and GnuPG agent with SSH support enabled at the same time!";
}
];

View File

@ -26,6 +26,6 @@ with lib;
###### implementation
config = mkIf config.programs.qt5ct.enable {
environment.variables.QT_QPA_PLATFORMTHEME = "qt5ct";
environment.systemPackages = [ pkgs.qt5ct ];
environment.systemPackages = with pkgs; [ qt5ct libsForQt5.qtstyleplugins ];
};
}

View File

@ -3,7 +3,12 @@
with lib;
let
cfg = config.programs.thefuck;
prg = config.programs;
cfg = prg.thefuck;
initScript = ''
eval $(${pkgs.thefuck}/bin/thefuck --alias ${cfg.alias})
'';
in
{
options = {
@ -24,8 +29,11 @@ in
config = mkIf cfg.enable {
environment.systemPackages = with pkgs; [ thefuck ];
environment.shellInit = ''
eval $(${pkgs.thefuck}/bin/thefuck --alias ${cfg.alias})
environment.shellInit = initScript;
programs.zsh.shellInit = mkIf prg.zsh.enable initScript;
programs.fish.shellInit = mkIf prg.fish.enable ''
${pkgs.thefuck}/bin/thefuck --alias | source
'';
};
}

View File

@ -1,71 +0,0 @@
# Global configuration for wvdial.
{ config, lib, pkgs, ... }:
with lib;
let
configFile = ''
[Dialer Defaults]
PPPD PATH = ${pkgs.ppp}/sbin/pppd
${config.environment.wvdial.dialerDefaults}
'';
cfg = config.environment.wvdial;
in
{
###### interface
options = {
environment.wvdial = {
dialerDefaults = mkOption {
default = "";
type = types.str;
example = ''Init1 = AT+CGDCONT=1,"IP","internet.t-mobile"'';
description = ''
Contents of the "Dialer Defaults" section of
<filename>/etc/wvdial.conf</filename>.
'';
};
pppDefaults = mkOption {
default = ''
noipdefault
usepeerdns
defaultroute
persist
noauth
'';
type = types.str;
description = "Default ppp settings for wvdial.";
};
};
};
###### implementation
config = mkIf (cfg.dialerDefaults != "") {
environment = {
etc =
[
{ source = pkgs.writeText "wvdial.conf" configFile;
target = "wvdial.conf";
}
{ source = pkgs.writeText "wvdial" cfg.pppDefaults;
target = "ppp/peers/wvdial";
}
];
};
};
}

View File

@ -204,6 +204,7 @@ with lib;
"Set the option `services.xserver.displayManager.sddm.package' instead.")
(mkRemovedOptionModule [ "fonts" "fontconfig" "forceAutohint" ] "")
(mkRemovedOptionModule [ "fonts" "fontconfig" "renderMonoTTFAsBitmap" ] "")
(mkRemovedOptionModule [ "boot" "zfs" "enableUnstable" ] "0.7.0 is now the default")
# ZSH
(mkRenamedOptionModule [ "programs" "zsh" "enableSyntaxHighlighting" ] [ "programs" "zsh" "syntaxHighlighting" "enable" ])

View File

@ -13,6 +13,7 @@ with lib;
unitConfig = {
ConditionVirtualization = "!container";
ConditionSecurity = [ "audit" ];
DefaultDependencies = false;
};
path = [ pkgs.audit ];

View File

@ -68,9 +68,9 @@ let
collectd = [{
enabled = false;
typesdb = "${pkgs.collectd}/share/collectd/types.db";
typesdb = "${pkgs.collectd-data}/share/collectd/types.db";
database = "collectd_db";
port = 25826;
bind-address = ":25826";
}];
opentsdb = [{
@ -149,7 +149,6 @@ in
type = types.attrs;
};
};
};

View File

@ -108,7 +108,7 @@ in
after = [ "network.target" ];
serviceConfig = {
ExecStart = "${mongodb}/bin/mongod --quiet --config ${mongoCnf} --fork --pidfilepath ${cfg.pidFile}";
ExecStart = "${mongodb}/bin/mongod --config ${mongoCnf} --fork --pidfilepath ${cfg.pidFile}";
User = cfg.user;
PIDFile = cfg.pidFile;
Type = "forking";

View File

@ -4,6 +4,8 @@ with lib;
let
cfg = config.services.fluentd;
pluginArgs = concatStringsSep " " (map (x: "-p ${x}") cfg.plugins);
in {
###### interface
@ -28,6 +30,15 @@ in {
defaultText = "pkgs.fluentd";
description = "The fluentd package to use.";
};
plugins = mkOption {
type = types.listOf types.path;
default = [];
description = ''
A list of plugin paths to pass to fluentd. Plugins defined in Ruby files
at these paths become available in your config.
'';
};
};
};
@ -39,7 +50,7 @@ in {
description = "Fluentd Daemon";
wantedBy = [ "multi-user.target" ];
serviceConfig = {
ExecStart = "${cfg.package}/bin/fluentd -c ${pkgs.writeText "fluentd.conf" cfg.config}";
ExecStart = "${cfg.package}/bin/fluentd -c ${pkgs.writeText "fluentd.conf" cfg.config} ${pluginArgs}";
ExecReload = "${pkgs.coreutils}/bin/kill -HUP $MAINPID";
};
};
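A minimal sketch of how the new `plugins` option could be used from a
system configuration (the plugin file and the config snippet are
assumptions):

```nix
services.fluentd = {
  enable = true;
  # each path is passed to fluentd as "-p <path>"
  plugins = [ ./fluent-plugin-example.rb ];
  config = ''
    <source>
      @type forward
    </source>
  '';
};
```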

View File

@ -11,9 +11,7 @@ let
password_secret = ${cfg.passwordSecret}
root_username = ${cfg.rootUsername}
root_password_sha2 = ${cfg.rootPasswordSha2}
elasticsearch_cluster_name = ${cfg.elasticsearchClusterName}
elasticsearch_discovery_zen_ping_multicast_enabled = ${boolToString cfg.elasticsearchDiscoveryZenPingMulticastEnabled}
elasticsearch_discovery_zen_ping_unicast_hosts = ${cfg.elasticsearchDiscoveryZenPingUnicastHosts}
elasticsearch_hosts = ${concatStringsSep "," cfg.elasticsearchHosts}
message_journal_dir = ${cfg.messageJournalDir}
mongodb_uri = ${cfg.mongodbUri}
plugin_dir = /var/lib/graylog/plugins
@ -91,22 +89,10 @@ in
'';
};
elasticsearchClusterName = mkOption {
type = types.str;
example = "graylog";
description = "This must be the same as for your Elasticsearch cluster";
};
elasticsearchDiscoveryZenPingMulticastEnabled = mkOption {
type = types.bool;
default = false;
description = "Whether to use elasticsearch multicast discovery";
};
elasticsearchDiscoveryZenPingUnicastHosts = mkOption {
type = types.str;
default = "127.0.0.1:9300";
description = "Tells Graylogs Elasticsearch client how to find other cluster members. See Elasticsearch documentation for details";
elasticsearchHosts = mkOption {
type = types.listOf types.str;
example = literalExample ''[ "http://node1:9200" "http://user:password@node2:19200" ]'';
description = "List of valid URIs of the http ports of your elastic nodes. If one or more of your elasticsearch hosts require authentication, include the credentials in each node URI that requires authentication";
};
messageJournalDir = mkOption {

View File

@ -76,7 +76,7 @@ let
// optionalAttrs (cfg.relayDomains != null) { relay_domains = cfg.relayDomains; }
// optionalAttrs (cfg.recipientDelimiter != "") { recipient_delimiter = cfg.recipientDelimiter; }
// optionalAttrs haveAliases { alias_maps = "${cfg.aliasMapType}:/etc/postfix/aliases"; }
// optionalAttrs haveTransport { transport_maps = "hash:/etc/postfx/transport"; }
// optionalAttrs haveTransport { transport_maps = "hash:/etc/postfix/transport"; }
// optionalAttrs haveVirtual { virtual_alias_maps = "${cfg.virtualMapType}:/etc/postfix/virtual"; }
// optionalAttrs (cfg.dnsBlacklists != []) { smtpd_client_restrictions = clientRestrictions; }
// optionalAttrs cfg.enableHeaderChecks { header_checks = "regexp:/etc/postfix/header_checks"; }
@ -213,8 +213,8 @@ let
wakeupDefined = options.wakeup.isDefined;
wakeupUCDefined = options.wakeupUnusedComponent.isDefined;
finalValue = toString config.wakeup
+ optionalString (!config.wakeupUnusedComponent) "?";
in if wakeupDefined && wakeupUCDefined then finalValue else "-";
+ optionalString (wakeupUCDefined && !config.wakeupUnusedComponent) "?";
in if wakeupDefined then finalValue else "-";
in [
config.name
@ -267,7 +267,7 @@ let
lines = [ sep (formatLine labels) (formatLine labelDefaults) sep ];
in concatStringsSep "\n" lines;
in formattedLabels + "\n" + concatMapStringsSep "\n" formatLine masterCf + "\n";
in formattedLabels + "\n" + concatMapStringsSep "\n" formatLine masterCf + "\n" + cfg.extraMasterConf;
headerCheckOptions = { ... }:
{
@ -839,5 +839,8 @@ in
(mkIf (cfg.extraConfig != "") {
warnings = [ "The services.postfix.extraConfig option was deprecated. Please use services.postfix.config instead." ];
})
(mkIf (cfg.extraMasterConf != "") {
warnings = [ "The services.postfix.extraMasterConf option was deprecated. Please use services.postfix.masterConfig instead." ];
})
]);
}

View File

@ -20,10 +20,10 @@ in
enable = mkOption {
default = false;
description = "
description = ''
Mount filesystems on demand. Unmount them automatically.
You may also be interested in afuse.
";
'';
};
autoMaster = mkOption {
@ -45,10 +45,9 @@ in
/auto file:''${mapConf}
'''
'';
description = "
file contents of /etc/auto.master. See man auto.master
See man 5 auto.master and man 5 autofs.
";
description = ''
Contents of <literal>/etc/auto.master</literal> file. See <command>auto.master(5)</command> and <command>autofs(5)</command>.
'';
};
timeout = mkOption {
@ -58,9 +57,9 @@ in
debug = mkOption {
default = false;
description = "
pass -d and -7 to automount and write log to /var/log/autofs
";
description = ''
Pass -d and -7 to automount and write log to the system journal.
'';
};
};

View File

@ -30,4 +30,5 @@ in {
};
meta.maintainers = with maintainers; [ gnidorah ];
}

View File

@ -15,9 +15,12 @@ let
election-port=${toString cfg.zkElectionPort}
cleanup-period-ms=${toString cfg.zkCleanupPeriod}
servers-spec=${concatStringsSep "," cfg.zkServersSpec}
auto-manage-instances=${lib.boolToString cfg.autoManageInstances}
auto-manage-instances=${toString cfg.autoManageInstances}
${cfg.extraConf}
'';
# NB: toString rather than lib.boolToString on cfg.autoManageInstances is intended.
# Exhibitor tests if it's an integer not equal to 0, so the empty string (toString false)
# will operate in the same fashion as a 0.
configDir = pkgs.writeTextDir "exhibitor.properties" exhibitorConfig;
cliOptionsCommon = {
configtype = cfg.configType;

View File

@ -42,4 +42,5 @@ in {
};
meta.maintainers = with maintainers; [ gnidorah ];
}

View File

@ -4,7 +4,7 @@ with lib;
let
cfg = config.services.zookeeper;
zookeeperConfig = ''
dataDir=${cfg.dataDir}
clientPort=${toString cfg.port}
@ -49,7 +49,7 @@ in {
default = 1;
type = types.int;
};
extraConf = mkOption {
description = "Extra configuration for Zookeeper.";
type = types.lines;
@ -119,7 +119,7 @@ in {
ExecStart = ''
${pkgs.jre}/bin/java \
-cp "${pkgs.zookeeper}/lib/*:${pkgs.zookeeper}/${pkgs.zookeeper.name}.jar:${configDir}" \
${toString cfg.extraCmdLineOptions} \
${escapeShellArgs cfg.extraCmdLineOptions} \
-Dzookeeper.datadir.autocreate=false \
${optionalString cfg.preferIPv4 "-Djava.net.preferIPv4Stack=true"} \
org.apache.zookeeper.server.quorum.QuorumPeerMain \

View File

@ -0,0 +1,91 @@
{ config, lib, pkgs, ... }:
with builtins;
with lib;
let
cfg = config.services.osquery;
in
{
options = {
services.osquery = {
enable = mkEnableOption "osquery";
loggerPath = mkOption {
type = types.path;
description = "Base directory used for logging.";
default = "/var/log/osquery";
};
pidfile = mkOption {
type = types.path;
description = "Path used for pid file.";
default = "/var/osquery/osqueryd.pidfile";
};
utc = mkOption {
type = types.bool;
description = "Attempt to convert all UNIX calendar times to UTC.";
default = true;
};
databasePath = mkOption {
type = types.path;
description = "Path used for database file.";
default = "/var/osquery/osquery.db";
};
extraConfig = mkOption {
type = types.attrs // {
merge = loc: foldl' (res: def: recursiveUpdate res def.value) {};
};
description = "Extra config to be recursively merged into the JSON config file.";
default = { };
};
};
};
config = mkIf cfg.enable {
environment.systemPackages = [ pkgs.osquery ];
environment.etc."osquery/osquery.conf".text = toJSON (
recursiveUpdate {
options = {
config_plugin = "filesystem";
logger_plugin = "filesystem";
logger_path = cfg.loggerPath;
database_path = cfg.databasePath;
utc = cfg.utc;
};
} cfg.extraConfig
);
systemd.services.osqueryd = {
description = "The osquery Daemon";
after = [ "network.target" "syslog.service" ];
wantedBy = [ "multi-user.target" ];
path = [ pkgs.osquery ];
preStart = ''
mkdir -p ${escapeShellArg cfg.loggerPath}
mkdir -p "$(dirname ${escapeShellArg cfg.pidfile})"
mkdir -p "$(dirname ${escapeShellArg cfg.databasePath})"
'';
serviceConfig = {
TimeoutStartSec = 0;
ExecStart = "${pkgs.osquery}/bin/osqueryd --logger_path ${escapeShellArg cfg.loggerPath} --pidfile ${escapeShellArg cfg.pidfile} --database_path ${escapeShellArg cfg.databasePath}";
KillMode = "process";
KillSignal = "SIGTERM";
Restart = "on-failure";
};
};
};
}
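A minimal sketch of enabling the new module and feeding it a scheduled
query through `extraConfig` (the query itself is an assumption):

```nix
services.osquery = {
  enable = true;
  # merged recursively into the generated osquery.conf
  extraConfig = {
    schedule = {
      running_processes = {
        query = "SELECT name, path FROM processes;";
        interval = 60;
      };
    };
  };
};
```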

View File

@ -8,22 +8,21 @@ let
motdFile = builtins.toFile "rsyncd-motd" cfg.motd;
moduleConfig = name:
let module = getAttr name cfg.modules; in
"[${name}]\n " + (toString (
map
(key: "${key} = ${toString (getAttr key module)}\n")
(attrNames module)
));
foreach = attrs: f:
concatStringsSep "\n" (mapAttrsToList f attrs);
cfgFile = builtins.toFile "rsyncd.conf"
''
cfgFile = ''
${optionalString (cfg.motd != "") "motd file = ${motdFile}"}
${optionalString (cfg.address != "") "address = ${cfg.address}"}
${optionalString (cfg.port != 873) "port = ${toString cfg.port}"}
${cfg.extraConfig}
${toString (map moduleConfig (attrNames cfg.modules))}
'';
${foreach cfg.modules (name: module: ''
[${name}]
${foreach module (k: v:
"${k} = ${v}"
)}
'')}
'';
in
{
@ -84,6 +83,24 @@ in
};
};
user = mkOption {
type = types.str;
default = "root";
description = ''
The user to run the daemon as.
By default the daemon runs as root.
'';
};
group = mkOption {
type = types.str;
default = "root";
description = ''
The group to run the daemon as.
By default the daemon runs as root.
'';
};
};
};
@ -91,16 +108,17 @@ in
config = mkIf cfg.enable {
environment.etc = singleton {
source = cfgFile;
target = "rsyncd.conf";
};
environment.etc."rsyncd.conf".text = cfgFile;
systemd.services.rsyncd = {
description = "Rsync daemon";
wantedBy = [ "multi-user.target" ];
serviceConfig.ExecStart = "${pkgs.rsync}/bin/rsync --daemon --no-detach";
restartTriggers = [ config.environment.etc."rsyncd.conf".source ];
serviceConfig = {
ExecStart = "${pkgs.rsync}/bin/rsync --daemon --no-detach";
User = cfg.user;
Group = cfg.group;
};
};
};
}
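For illustration, a sketch of a configuration exercising the rewritten
module rendering (the module name and path are assumptions):

```nix
services.rsyncd = {
  enable = true;
  modules = {
    public = {
      path = "/srv/public";
      comment = "Read-only public share";
      "read only" = "yes";
    };
  };
};
```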

View File

@ -243,7 +243,7 @@ in
};
};
security.pam.services.sambda = {};
security.pam.services.samba = {};
})
];

View File

@ -237,13 +237,13 @@ in
# arguments to $(tahoe start). The node directory must come first,
# and arguments which alter Twisted's behavior come afterwards.
ExecStart = ''
${settings.package}/bin/tahoe start ${nodedir} -n -l- --pidfile=${pidfile}
${settings.package}/bin/tahoe start ${lib.escapeShellArg nodedir} -n -l- --pidfile=${lib.escapeShellArg pidfile}
'';
};
preStart = ''
if [ \! -d ${nodedir} ]; then
if [ ! -d ${lib.escapeShellArg nodedir} ]; then
mkdir -p /var/db/tahoe-lafs
tahoe create-introducer ${nodedir}
tahoe create-introducer ${lib.escapeShellArg nodedir}
fi
# Tahoe has created a predefined tahoe.cfg which we must now
@ -252,7 +252,7 @@ in
# we must do this on every prestart. Fixes welcome.
# rm ${nodedir}/tahoe.cfg
# ln -s /etc/tahoe-lafs/introducer-${node}.cfg ${nodedir}/tahoe.cfg
cp /etc/tahoe-lafs/introducer-${node}.cfg ${nodedir}/tahoe.cfg
cp /etc/tahoe-lafs/introducer-"${node}".cfg ${lib.escapeShellArg nodedir}/tahoe.cfg
'';
});
users.extraUsers = flip mapAttrs' cfg.introducers (node: _:
@ -337,13 +337,13 @@ in
# arguments to $(tahoe start). The node directory must come first,
# and arguments which alter Twisted's behavior come afterwards.
ExecStart = ''
${settings.package}/bin/tahoe start ${nodedir} -n -l- --pidfile=${pidfile}
${settings.package}/bin/tahoe start ${lib.escapeShellArg nodedir} -n -l- --pidfile=${lib.escapeShellArg pidfile}
'';
};
preStart = ''
if [ \! -d ${nodedir} ]; then
if [ ! -d ${lib.escapeShellArg nodedir} ]; then
mkdir -p /var/db/tahoe-lafs
tahoe create-node --hostname=localhost ${nodedir}
tahoe create-node --hostname=localhost ${lib.escapeShellArg nodedir}
fi
# Tahoe has created a predefined tahoe.cfg which we must now
@ -351,8 +351,8 @@ in
# XXX I thought that a symlink would work here, but it doesn't, so
# we must do this on every prestart. Fixes welcome.
# rm ${nodedir}/tahoe.cfg
# ln -s /etc/tahoe-lafs/${node}.cfg ${nodedir}/tahoe.cfg
cp /etc/tahoe-lafs/${node}.cfg ${nodedir}/tahoe.cfg
# ln -s /etc/tahoe-lafs/${lib.escapeShellArg node}.cfg ${nodedir}/tahoe.cfg
cp /etc/tahoe-lafs/${lib.escapeShellArg node}.cfg ${lib.escapeShellArg nodedir}/tahoe.cfg
'';
});
users.extraUsers = flip mapAttrs' cfg.nodes (node: _:

View File

@ -22,6 +22,7 @@ let
${optionalString (interfaces!=null) "allow-interfaces=${concatStringsSep "," interfaces}"}
${optionalString (domainName!=null) "domain-name=${domainName}"}
allow-point-to-point=${yesNo allowPointToPoint}
${optionalString (cacheEntriesMax!=null) "cache-entries-max=${toString cacheEntriesMax}"}
[wide-area]
enable-wide-area=${yesNo wideArea}
@ -166,6 +167,15 @@ in
'';
};
cacheEntriesMax = mkOption {
default = null;
type = types.nullOr types.int;
description = ''
Number of resource records to be cached per interface. Use 0 to
disable caching. Avahi daemon defaults to 4096 if not set.
'';
};
};
};
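A sketch of setting the new option, assuming it sits under
`services.avahi` like the module's other options:

```nix
services.avahi = {
  enable = true;
  # cache up to 1024 resource records per interface
  cacheEntriesMax = 1024;
};
```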

View File

@ -82,14 +82,13 @@ in
};
resolverName = mkOption {
default = "dnscrypt.eu-nl";
default = "random";
example = "dnscrypt.eu-nl";
type = types.nullOr types.str;
description = ''
The name of the DNSCrypt resolver to use, taken from
<filename>${resolverList}</filename>. The default
resolver is located in Holland, supports DNS security
extensions, and <emphasis>claims</emphasis> to not
keep logs.
<filename>${resolverList}</filename>. The default is to
pick a random non-logging resolver that supports DNSSEC.
'';
};
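A minimal sketch of pinning a specific resolver now that the default is "random" (assuming the option path services.dnscrypt-proxy.resolverName used by this module):

```nix
# Hypothetical fragment: override the new "random" default with a fixed resolver.
{
  services.dnscrypt-proxy = {
    enable = true;
    resolverName = "dnscrypt.eu-nl";
  };
}
```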

View File

@ -48,7 +48,7 @@ let
# NAT from external ports to internal ports.
${concatMapStrings (fwd: ''
iptables -w -t nat -A nixos-nat-pre \
-i ${cfg.externalInterface} -p tcp \
-i ${cfg.externalInterface} -p ${fwd.proto} \
--dport ${builtins.toString fwd.sourcePort} \
-j DNAT --to-destination ${fwd.destination}
'') cfg.forwardPorts}
@ -133,12 +133,19 @@ in
destination = mkOption {
type = types.str;
example = "10.0.0.1:80";
description = "Forward tcp connection to destination ip:port";
description = "Forward connection to destination ip:port";
};
proto = mkOption {
type = types.str;
default = "tcp";
example = "udp";
description = "Protocol of forwarded connection";
};
};
});
default = [];
example = [ { sourcePort = 8080; destination = "10.0.0.1:80"; } ];
example = [ { sourcePort = 8080; destination = "10.0.0.1:80"; proto = "tcp"; } ];
description =
''
List of forwarded ports from the external interface to
@ -151,38 +158,41 @@ in
###### implementation
config = mkIf config.networking.nat.enable {
config = mkMerge [
{ networking.firewall.extraCommands = mkBefore flushNat; }
(mkIf config.networking.nat.enable {
environment.systemPackages = [ pkgs.iptables ];
environment.systemPackages = [ pkgs.iptables ];
boot = {
kernelModules = [ "nf_nat_ftp" ];
kernel.sysctl = {
"net.ipv4.conf.all.forwarding" = mkOverride 99 true;
"net.ipv4.conf.default.forwarding" = mkOverride 99 true;
};
};
networking.firewall = mkIf config.networking.firewall.enable {
extraCommands = mkMerge [ (mkBefore flushNat) setupNat ];
extraStopCommands = flushNat;
};
systemd.services = mkIf (!config.networking.firewall.enable) { nat = {
description = "Network Address Translation";
wantedBy = [ "network.target" ];
after = [ "network-pre.target" "systemd-modules-load.service" ];
path = [ pkgs.iptables ];
unitConfig.ConditionCapability = "CAP_NET_ADMIN";
serviceConfig = {
Type = "oneshot";
RemainAfterExit = true;
boot = {
kernelModules = [ "nf_nat_ftp" ];
kernel.sysctl = {
"net.ipv4.conf.all.forwarding" = mkOverride 99 true;
"net.ipv4.conf.default.forwarding" = mkOverride 99 true;
};
};
script = flushNat + setupNat;
networking.firewall = mkIf config.networking.firewall.enable {
extraCommands = setupNat;
extraStopCommands = flushNat;
};
postStop = flushNat;
}; };
};
systemd.services = mkIf (!config.networking.firewall.enable) { nat = {
description = "Network Address Translation";
wantedBy = [ "network.target" ];
after = [ "network-pre.target" "systemd-modules-load.service" ];
path = [ pkgs.iptables ];
unitConfig.ConditionCapability = "CAP_NET_ADMIN";
serviceConfig = {
Type = "oneshot";
RemainAfterExit = true;
};
script = flushNat + setupNat;
postStop = flushNat;
}; };
})
];
}
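A minimal sketch of the new per-forward proto field introduced above (interface name and addresses are illustrative):

```nix
# Hypothetical fragment: forward TCP and UDP ports through NAT.
{
  networking.nat = {
    enable = true;
    externalInterface = "eth0";
    internalIPs = [ "192.168.1.0/24" ];
    forwardPorts = [
      { sourcePort = 8080; destination = "10.0.0.1:80";   proto = "tcp"; }
      { sourcePort = 5353; destination = "10.0.0.2:5353"; proto = "udp"; }
    ];
  };
}
```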

View File

@ -12,6 +12,7 @@ let
dns =
if cfg.useDnsmasq then "dnsmasq"
else if config.services.resolved.enable then "systemd-resolved"
else if config.services.unbound.enable then "unbound"
else "default";
configFile = writeText "NetworkManager.conf" ''

View File

@ -33,8 +33,8 @@ in
package = mkOption {
type = types.package;
default = pkgs.pythonPackages.searx;
defaultText = "pkgs.pythonPackages.searx";
default = pkgs.searx;
defaultText = "pkgs.searx";
description = "searx package to use.";
};

View File

@ -131,7 +131,7 @@ in
(flip mapAttrsToList cfg.networks (network: data:
flip mapAttrs' data.hosts (host: text: nameValuePair
("tinc/${network}/hosts/${host}")
({ mode = "0444"; inherit text; })
({ mode = "0644"; user = "tinc.${network}"; inherit text; })
) // {
"tinc/${network}/tinc.conf" = {
mode = "0444";
@ -164,15 +164,14 @@ in
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
path = [ data.package ];
restartTriggers = [ config.environment.etc."tinc/${network}/tinc.conf".source ]
++ mapAttrsToList (host: _ : config.environment.etc."tinc/${network}/hosts/${host}".source) data.hosts;
serviceConfig = {
Type = "simple";
PIDFile = "/run/tinc.${network}.pid";
Restart = "on-failure";
Restart = "always";
RestartSec = "3";
};
preStart = ''
mkdir -p /etc/tinc/${network}/hosts
chown tinc.${network} /etc/tinc/${network}/hosts
# Determine how we should generate our keys
if type tinc >/dev/null 2>&1; then
@ -194,6 +193,19 @@ in
})
);
environment.systemPackages = let
cli-wrappers = pkgs.stdenv.mkDerivation {
name = "tinc-cli-wrappers";
buildInputs = [ pkgs.makeWrapper ];
buildCommand = ''
mkdir -p $out/bin
${concatStringsSep "\n" (mapAttrsToList (network: data: ''
makeWrapper ${data.package}/bin/tinc "$out/bin/tinc.${network}" --add-flags "--pidfile=/run/tinc.${network}.pid"
'') cfg.networks)}
'';
};
in [ cli-wrappers ];
users.extraUsers = flip mapAttrs' cfg.networks (network: _:
nameValuePair ("tinc.${network}") ({
description = "Tinc daemon user for ${network}";

View File

@ -1,111 +0,0 @@
{ config, lib, pkgs, ... }:
with lib;
let
inherit (pkgs) coreutils tlsdate;
cfg = config.services.tlsdated;
in
{
###### interface
options = {
services.tlsdated = {
enable = mkOption {
type = types.bool;
default = false;
description = ''
Enable tlsdated daemon.
'';
};
extraOptions = mkOption {
type = types.string;
default = "";
description = ''
Additional command line arguments to pass to tlsdated.
'';
};
sources = mkOption {
type = types.listOf (types.submodule {
options = {
host = mkOption {
type = types.string;
description = ''
Remote hostname.
'';
};
port = mkOption {
type = types.int;
description = ''
Remote port.
'';
};
proxy = mkOption {
type = types.nullOr types.string;
default = null;
description = ''
The proxy argument expects HTTP, SOCKS4A or SOCKS5 formatted as followed:
http://127.0.0.1:8118
socks4a://127.0.0.1:9050
socks5://127.0.0.1:9050
The proxy support should not leak DNS requests and is suitable for use with Tor.
'';
};
};
});
default = [
{
host = "encrypted.google.com";
port = 443;
proxy = null;
}
];
description = ''
You can list one or more sources to fetch time from.
'';
};
};
};
###### implementation
config = mkIf cfg.enable {
# Make tools such as tlsdate available in the system path
environment.systemPackages = [ tlsdate ];
systemd.services.tlsdated = {
description = "tlsdated daemon";
wantedBy = [ "multi-user.target" ];
serviceConfig = {
# XXX because pkgs.tlsdate is compiled to run as nobody:nogroup, we
# hard-code base-path to /tmp and use PrivateTmp.
ExecStart = "${tlsdate}/bin/tlsdated -f ${pkgs.writeText "tlsdated.confg" ''
base-path /tmp
${concatMapStrings (src: ''
source
host ${src.host}
port ${toString src.port}
proxy ${if src.proxy == null then "none" else src.proxy}
end
'') cfg.sources}
''} ${cfg.extraOptions}";
PrivateTmp = "yes";
};
};
};
}

View File

@ -3,7 +3,12 @@ with lib;
let
cfg = config.services.unifi;
stateDir = "/var/lib/unifi";
cmd = "@${pkgs.jre}/bin/java java -jar ${stateDir}/lib/ace.jar";
cmd = ''
@${pkgs.jre}/bin/java java \
${optionalString (cfg.initialJavaHeapSize != null) "-Xms${(toString cfg.initialJavaHeapSize)}m"} \
${optionalString (cfg.maximumJavaHeapSize != null) "-Xmx${(toString cfg.maximumJavaHeapSize)}m"} \
-jar ${stateDir}/lib/ace.jar
'';
mountPoints = [
{
what = "${pkgs.unifi}/dl";
@ -58,6 +63,26 @@ in
'';
};
services.unifi.initialJavaHeapSize = mkOption {
type = types.nullOr types.int;
default = null;
example = 1024;
description = ''
Set the initial heap size for the JVM in MB. If this option isn't set, the
JVM will decide this value at runtime.
'';
};
services.unifi.maximumJavaHeapSize = mkOption {
type = types.nullOr types.int;
default = null;
example = 4096;
description = ''
Set the maximum heap size for the JVM in MB. If this option isn't set, the
JVM will decide this value at runtime.
'';
};
};
config = mkIf cfg.enable {
@ -121,8 +146,8 @@ in
serviceConfig = {
Type = "simple";
ExecStart = "${cmd} start";
ExecStop = "${cmd} stop";
ExecStart = "${(removeSuffix "\n" cmd)} start";
ExecStop = "${(removeSuffix "\n" cmd)} stop";
User = "unifi";
PermissionsStartOnly = true;
UMask = "0077";

View File

@ -79,6 +79,16 @@ let
description = "A list of commands called after shutting down the interface.";
};
table = mkOption {
default = "main";
type = types.str;
description = ''The kernel routing table to add this interface's
associated routes to. Setting this is useful for e.g. policy routing
("ip rule") or virtual routing and forwarding ("ip vrf"). Both numeric
table IDs and table names (/etc/rt_tables) can be used. Defaults to
"main".'';
};
peers = mkOption {
default = [];
description = "Peers linked to the interface.";
@ -207,9 +217,11 @@ let
"${ipCommand} link set up dev ${name}"
(map (peer: (map (ip:
"${ipCommand} route replace ${ip} dev ${name}"
) peer.allowedIPs)) values.peers)
(map (peer:
(map (allowedIP:
"${ipCommand} route replace ${allowedIP} dev ${name} table ${values.table}"
) peer.allowedIPs)
) values.peers)
values.postSetup
]);
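A minimal sketch of the new table option above; the networking.wireguard.interfaces path and the peer attributes besides allowedIPs are assumptions based on the module this hunk patches, and key material is omitted:

```nix
# Hypothetical fragment: install a peer's routes into a non-default routing table.
{
  networking.wireguard.interfaces.wg0 = {
    # Other interface settings (addresses, private key) omitted for brevity.
    table = "vpn";   # table name from /etc/rt_tables, or a numeric table ID
    peers = [
      {
        publicKey = "<peer public key>";          # placeholder
        allowedIPs = [ "10.100.0.0/24" ];
      }
    ];
  };
}
```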

View File

@ -46,8 +46,20 @@ let
ServerTransportPlugin obfs2,obfs3 exec ${pkgs.pythonPackages.obfsproxy}/bin/obfsproxy managed
''}
''
+ hiddenServices
+ cfg.extraConfig;
hiddenServices = concatStrings (mapAttrsToList (hiddenServiceDir: hs:
let
hsports = concatStringsSep "\n" (map mkHiddenServicePort hs.hiddenServicePorts);
in
"HiddenServiceDir ${hiddenServiceDir}\n${hsports}\n${hs.extraConfig}\n"
) cfg.hiddenServices);
mkHiddenServicePort = hsport: let
trgt = optionalString (hsport.target != null) (" " + hsport.target);
in "HiddenServicePort ${toString hsport.virtualPort}${trgt}";
torRcFile = pkgs.writeText "torrc" torRc;
in
{
@ -229,11 +241,11 @@ in
default = null;
example = "450 GBytes";
description = ''
Specify maximum bandwidth allowed during an accounting
period. This allows you to limit overall tor bandwidth
over some time period. See the
<literal>AccountingMax</literal> option by looking at the
tor manual (<literal>man tor</literal>) for more.
Specify maximum bandwidth allowed during an accounting period. This
allows you to limit overall tor bandwidth over some time period.
See the <literal>AccountingMax</literal> option by looking at the
tor manual <citerefentry><refentrytitle>tor</refentrytitle>
<manvolnum>1</manvolnum></citerefentry> for more.
Note this limit applies individually to upload and
download; if you specify <literal>"500 GBytes"</literal>
@ -247,10 +259,11 @@ in
default = null;
example = "month 1 1:00";
description = ''
Specify length of an accounting period. This allows you to
limit overall tor bandwidth over some time period. See the
<literal>AccountingStart</literal> option by looking at
the tor manual (<literal>man tor</literal>) for more.
Specify length of an accounting period. This allows you to limit
overall tor bandwidth over some time period. See the
<literal>AccountingStart</literal> option by looking at the tor
manual <citerefentry><refentrytitle>tor</refentrytitle>
<manvolnum>1</manvolnum></citerefentry> for more.
'';
};
@ -279,9 +292,10 @@ in
type = types.str;
example = "143";
description = ''
What port to advertise for Tor connections. This corresponds
to the <literal>ORPort</literal> section in the Tor manual; see
<literal>man tor</literal> for more details.
What port to advertise for Tor connections. This corresponds to the
<literal>ORPort</literal> section in the Tor manual; see
<citerefentry><refentrytitle>tor</refentrytitle>
<manvolnum>1</manvolnum></citerefentry> for more details.
At a minimum, you should just specify the port for the
relay to listen on; a common one like 143, 22, 80, or 443
@ -314,6 +328,72 @@ in
'';
};
};
hiddenServices = mkOption {
type = types.attrsOf (types.submodule ({
options = {
hiddenServicePorts = mkOption {
type = types.listOf (types.submodule {
options = {
virtualPort = mkOption {
type = types.int;
example = 80;
description = "Virtual port.";
};
target = mkOption {
type = types.nullOr types.str;
default = null;
example = "127.0.0.1:8080";
description = ''
Target that the virtual port shall be mapped to.
You may override the target port, address, or both by
specifying a target of addr, port, addr:port, or
unix:path. (You can specify an IPv6 target as
[addr]:port. Unix paths may be quoted, and may use
standard C escapes.)
'';
};
};
});
example = [ { virtualPort = 80; target = "127.0.0.1:8080"; } { virtualPort = 6667; } ];
description = ''
If target is <literal>null</literal> the virtual port is mapped
to the same port on 127.0.0.1 over TCP. You may use
<literal>target</literal> to override this behaviour (see the
description of target).
This corresponds to the <literal>HiddenServicePort VIRTPORT
[TARGET]</literal> option; see the tor manual
<citerefentry><refentrytitle>tor</refentrytitle>
<manvolnum>1</manvolnum></citerefentry> for more information.
'';
};
extraConfig = mkOption {
type = types.str;
default = "";
description = ''
Extra configuration. Contents will be added in the current
hidden service context.
'';
};
};
}));
default = {};
example = {
"/var/lib/tor/webserver" = {
hiddenServicePorts = [ { virtualPort = 80; } ];
};
};
description = ''
Configure hidden services.
Please consult the tor manual
<citerefentry><refentrytitle>tor</refentrytitle>
<manvolnum>1</manvolnum></citerefentry> for a more detailed
explanation. (search for 'HIDDEN').
'';
};
};
};
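A minimal sketch of declaring a hidden service with the new options above, assuming the attribute set lives at services.tor.hiddenServices as the module's own example indicates; the directory and port mapping are illustrative:

```nix
# Hypothetical fragment: expose a local web server as a Tor hidden service.
{
  services.tor = {
    enable = true;
    hiddenServices."/var/lib/tor/webserver" = {
      hiddenServicePorts = [
        { virtualPort = 80; target = "127.0.0.1:8080"; }
      ];
    };
  };
}
```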

View File

@ -68,14 +68,19 @@ in
services.mingetty.greetingLine = mkDefault ''<<< Welcome to NixOS ${config.system.nixosLabel} (\m) - \l >>>'';
systemd.services."getty@" =
{ serviceConfig.ExecStart = gettyCmd "--noclear --keep-baud %I 115200,38400,9600 $TERM";
{ serviceConfig.ExecStart = [
"" # override upstream default with an empty ExecStart
(gettyCmd "--noclear --keep-baud %I 115200,38400,9600 $TERM")
];
restartIfChanged = false;
};
systemd.services."serial-getty@" =
{ serviceConfig.ExecStart =
let speeds = concatStringsSep "," (map toString config.services.mingetty.serialSpeed);
in gettyCmd "%I ${speeds} $TERM";
let speeds = concatStringsSep "," (map toString config.services.mingetty.serialSpeed); in
{ serviceConfig.ExecStart = [
"" # override upstream default with an empty ExecStart
(gettyCmd "%I ${speeds} $TERM")
];
restartIfChanged = false;
};

View File

@ -6,7 +6,22 @@ let
cfg = config.services.confluence;
pkg = pkgs.atlassian-confluence;
pkg = pkgs.atlassian-confluence.override (optionalAttrs cfg.sso.enable {
enableSSO = cfg.sso.enable;
crowdProperties = ''
application.name ${cfg.sso.applicationName}
application.password ${cfg.sso.applicationPassword}
application.login.url ${cfg.sso.crowd}/console/
crowd.server.url ${cfg.sso.crowd}/services/
crowd.base.url ${cfg.sso.crowd}/
session.isauthenticated session.isauthenticated
session.tokenkey session.tokenkey
session.validationinterval ${toString cfg.sso.validationInterval}
session.lastvalidation session.lastvalidation
'';
});
in
@ -76,6 +91,42 @@ in
};
};
sso = {
enable = mkEnableOption "SSO with Atlassian Crowd";
crowd = mkOption {
type = types.str;
example = "http://localhost:8095/crowd";
description = "Crowd Base URL without trailing slash";
};
applicationName = mkOption {
type = types.str;
example = "jira";
description = "Exact name of this Confluence instance in Crowd";
};
applicationPassword = mkOption {
type = types.str;
description = "Application password of this Confluence instance in Crowd";
};
validationInterval = mkOption {
type = types.int;
default = 2;
example = 0;
description = ''
Set to 0 if you want authentication checks to occur on each
request. Otherwise set it to the number of minutes between requests
that validate whether the user is logged in or out of the Crowd SSO
server. Setting this value to 1 or higher will improve the
performance of Crowd's integration.
'';
};
};
jrePackage = let
jreSwitch = unfree: free: if config.nixpkgs.config.allowUnfree or false then unfree else free;
in mkOption {
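A minimal sketch of the new Crowd SSO options above (assuming they live under services.confluence.sso; host, application name, and password are illustrative):

```nix
# Hypothetical fragment: wire Confluence to an Atlassian Crowd instance for SSO.
{
  services.confluence = {
    enable = true;
    sso = {
      enable = true;
      crowd = "http://localhost:8095/crowd";
      applicationName = "confluence";
      applicationPassword = "changeme";   # illustrative secret
      validationInterval = 2;             # minutes between session checks
    };
  };
}
```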

View File

@ -6,7 +6,22 @@ let
cfg = config.services.jira;
pkg = pkgs.atlassian-jira;
pkg = pkgs.atlassian-jira.override {
enableSSO = cfg.sso.enable;
crowdProperties = ''
application.name ${cfg.sso.applicationName}
application.password ${cfg.sso.applicationPassword}
application.login.url ${cfg.sso.crowd}/console/
crowd.server.url ${cfg.sso.crowd}/services/
crowd.base.url ${cfg.sso.crowd}/
session.isauthenticated session.isauthenticated
session.tokenkey session.tokenkey
session.validationinterval ${toString cfg.sso.validationInterval}
session.lastvalidation session.lastvalidation
'';
};
in
@ -82,6 +97,40 @@ in
};
};
sso = {
enable = mkEnableOption "SSO with Atlassian Crowd";
crowd = mkOption {
type = types.str;
example = "http://localhost:8095/crowd";
description = "Crowd Base URL without trailing slash";
};
applicationName = mkOption {
type = types.str;
example = "jira";
description = "Exact name of this JIRA instance in Crowd";
};
applicationPassword = mkOption {
type = types.str;
description = "Application password of this JIRA instance in Crowd";
};
validationInterval = mkOption {
type = types.int;
default = 2;
example = 0;
description = ''
Set to 0 if you want authentication checks to occur on each
request. Otherwise set it to the number of minutes between requests
that validate whether the user is logged in or out of the Crowd SSO
server. Setting this value to 1 or higher will improve the
performance of Crowd's integration.
'';
};
};
jrePackage = let
jreSwitch = unfree: free: if config.nixpkgs.config.allowUnfree or false then unfree else free;
in mkOption {

View File

@ -23,16 +23,24 @@
and enter those credentials in your browser.
You can use passwordless database authentication via the UNIX_SOCKET authentication plugin
with the following SQL commands:
<programlisting>
# For MariaDB
INSTALL PLUGIN unix_socket SONAME 'auth_socket';
ALTER USER root IDENTIFIED VIA unix_socket;
CREATE DATABASE piwik;
CREATE USER 'piwik'@'localhost' IDENTIFIED VIA unix_socket;
CREATE USER 'piwik'@'localhost' IDENTIFIED WITH unix_socket;
GRANT ALL PRIVILEGES ON piwik.* TO 'piwik'@'localhost';
# For MySQL
INSTALL PLUGIN auth_socket SONAME 'auth_socket.so';
CREATE DATABASE piwik;
CREATE USER 'piwik'@'localhost' IDENTIFIED WITH auth_socket;
GRANT ALL PRIVILEGES ON piwik.* TO 'piwik'@'localhost';
</programlisting>
Then fill in <literal>piwik</literal> as database user and database name, and leave the password field blank.
This works with MariaDB and MySQL. This authentication works by allowing only the <literal>piwik</literal> unix
user to authenticate as <literal>piwik</literal> database (without needing a password), but no other users.
This authentication works by allowing only the <literal>piwik</literal> unix user to authenticate as the
<literal>piwik</literal> database user (without needing a password), but no other users.
For more information on passwordless login, see
<link xlink:href="https://mariadb.com/kb/en/mariadb/unix_socket-authentication-plugin/" />.
</para>

View File

@ -37,8 +37,10 @@ let
"mod_rrdtool"
"mod_accesslog"
# Remaining list of modules, order assumed to be unimportant.
"mod_authn_file"
"mod_authn_mysql"
"mod_cml"
"mod_dirlisting"
"mod_deflate"
"mod_evasive"
"mod_extforward"
"mod_flv_streaming"
@ -47,6 +49,7 @@ let
"mod_scgi"
"mod_setenv"
"mod_trigger_b4_dl"
"mod_uploadprogress"
"mod_webdav"
];
@ -86,14 +89,9 @@ let
accesslog.use-syslog = "enable"
server.errorlog-use-syslog = "enable"
mimetype.assign = (
".html" => "text/html",
".htm" => "text/html",
".txt" => "text/plain",
".jpg" => "image/jpeg",
".png" => "image/png",
".css" => "text/css"
)
${lib.optionalString cfg.enableUpstreamMimeTypes ''
include "${pkgs.lighttpd}/share/lighttpd/doc/config/conf.d/mime.conf"
''}
static-file.exclude-extensions = ( ".fcgi", ".php", ".rb", "~", ".inc" )
index-file.names = ( "index.html" )
@ -165,6 +163,17 @@ in
'';
};
enableUpstreamMimeTypes = mkOption {
type = types.bool;
default = true;
description = ''
Whether to include the list of mime types bundled with lighttpd
(upstream). If you disable this, no mime types will be added by
NixOS and you will have to add your own mime types in
<option>services.lighttpd.extraConfig</option>.
'';
};
mod_status = mkOption {
default = false;
type = types.bool;
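A minimal sketch showing the new enableUpstreamMimeTypes switch above, opting out of the bundled mime.conf and supplying a small mapping via extraConfig (the mapping itself is illustrative):

```nix
# Hypothetical fragment: replace lighttpd's bundled mime types with a custom list.
{
  services.lighttpd = {
    enable = true;
    enableUpstreamMimeTypes = false;
    extraConfig = ''
      mimetype.assign = ( ".html" => "text/html", ".css" => "text/css" )
    '';
  };
}
```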

View File

@ -36,6 +36,11 @@ let
http {
include ${cfg.package}/conf/mime.types;
include ${cfg.package}/conf/fastcgi.conf;
include ${cfg.package}/conf/uwsgi_params;
${optionalString (cfg.resolver.addresses != []) ''
resolver ${toString cfg.resolver.addresses} ${optionalString (cfg.resolver.valid != "") "valid=${cfg.resolver.valid}"};
''}
${optionalString (cfg.recommendedOptimisation) ''
# optimisation
@ -116,6 +121,7 @@ let
http {
include ${cfg.package}/conf/mime.types;
include ${cfg.package}/conf/fastcgi.conf;
include ${cfg.package}/conf/uwsgi_params;
${cfg.httpConfig}
}''}
@ -123,16 +129,32 @@ let
'';
vhosts = concatStringsSep "\n" (mapAttrsToList (vhostName: vhost:
let
ssl = vhost.enableSSL || vhost.forceSSL;
defaultPort = if ssl then 443 else 80;
let
ssl = with vhost; addSSL || onlySSL || enableSSL;
listenString = { addr, port, ... }:
"listen ${addr}:${toString (if port != null then port else defaultPort)} "
defaultListen = with vhost;
if listen != [] then listen
else if onlySSL || enableSSL then
singleton { addr = "0.0.0.0"; port = 443; ssl = true; }
++ optional enableIPv6 { addr = "[::]"; port = 443; ssl = true; }
else singleton { addr = "0.0.0.0"; port = 80; ssl = false; }
++ optional enableIPv6 { addr = "[::]"; port = 80; ssl = false; }
++ optional addSSL { addr = "0.0.0.0"; port = 443; ssl = true; }
++ optional (enableIPv6 && addSSL) { addr = "[::]"; port = 443; ssl = true; };
hostListen =
if !vhost.forceSSL
then defaultListen
else filter (x: x.ssl) defaultListen;
listenString = { addr, port, ssl, ... }:
"listen ${addr}:${toString port} "
+ optionalString ssl "ssl http2 "
+ optionalString vhost.default "default_server"
+ optionalString vhost.default "default_server "
+ ";";
redirectListen = filter (x: !x.ssl) defaultListen;
redirectListenString = { addr, ... }:
"listen ${addr}:80 ${optionalString vhost.default "default_server"};";
@ -153,7 +175,7 @@ let
in ''
${optionalString vhost.forceSSL ''
server {
${concatMapStringsSep "\n" redirectListenString vhost.listen}
${concatMapStringsSep "\n" redirectListenString redirectListen}
server_name ${vhost.serverName} ${concatStringsSep " " vhost.serverAliases};
${optionalString vhost.enableACME acmeLocation}
@ -164,7 +186,7 @@ let
''}
server {
${concatMapStringsSep "\n" listenString vhost.listen}
${concatMapStringsSep "\n" listenString hostListen}
server_name ${vhost.serverName} ${concatStringsSep " " vhost.serverAliases};
${optionalString vhost.enableACME acmeLocation}
${optionalString (vhost.root != null) "root ${vhost.root};"}
@ -383,6 +405,32 @@ in
description = "Path to DH parameters file.";
};
resolver = mkOption {
type = types.submodule {
options = {
addresses = mkOption {
type = types.listOf types.str;
default = [];
example = literalExample ''[ "[::1]" "127.0.0.1:5353" ]'';
description = "List of resolvers to use";
};
valid = mkOption {
type = types.str;
default = "";
example = "30s";
description = ''
By default, nginx caches answers using the TTL value of a response.
An optional valid parameter allows overriding it
'';
};
};
};
description = ''
Configures name servers used to resolve names of upstream servers into addresses
'';
default = {};
};
virtualHosts = mkOption {
type = types.attrsOf (types.submodule (import ./vhost-options.nix {
inherit config lib;
@ -393,6 +441,7 @@ in
example = literalExample ''
{
"hydra.example.com" = {
addSSL = true;
forceSSL = true;
enableACME = true;
locations."/" = {
@ -409,11 +458,40 @@ in
config = mkIf cfg.enable {
# TODO: test user supplied config file passes syntax test
assertions = let hostOrAliasIsNull = l: l.root == null || l.alias == null; in [
warnings =
let
deprecatedSSL = name: config: optional config.enableSSL
''
config.services.nginx.virtualHosts.<name>.enableSSL is deprecated,
use config.services.nginx.virtualHosts.<name>.onlySSL instead.
'';
in flatten (mapAttrsToList deprecatedSSL virtualHosts);
assertions =
let
hostOrAliasIsNull = l: l.root == null || l.alias == null;
in [
{
assertion = all (host: all hostOrAliasIsNull (attrValues host.locations)) (attrValues virtualHosts);
message = "Only one of nginx root or alias can be specified on a location.";
}
{
assertion = all (conf: with conf; !(addSSL && (onlySSL || enableSSL))) (attrValues virtualHosts);
message = ''
Options services.nginx.virtualHosts.<name>.addSSL and
services.nginx.virtualHosts.<name>.onlySSL are mutually exclusive
'';
}
{
assertion = all (conf: with conf; forceSSL -> addSSL) (attrValues virtualHosts);
message = ''
Option services.nginx.virtualHosts.<name>.forceSSL requires
services.nginx.virtualHosts.<name>.addSSL set to true.
'';
}
];
systemd.services.nginx = {
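A minimal sketch of the new resolver option above (addresses and the TTL override are illustrative):

```nix
# Hypothetical fragment: let nginx resolve upstream names via a local resolver.
{
  services.nginx = {
    enable = true;
    resolver = {
      addresses = [ "127.0.0.1:5353" "[::1]" ];
      valid = "30s";   # override the TTL from the DNS response
    };
  };
}
```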

View File

@ -27,25 +27,21 @@ with lib;
};
listen = mkOption {
type = with types; listOf (submodule {
options = {
addr = mkOption { type = str; description = "IP address."; };
port = mkOption { type = nullOr int; description = "Port number."; };
};
});
default =
[ { addr = "0.0.0.0"; port = null; } ]
++ optional config.networking.enableIPv6
{ addr = "[::]"; port = null; };
type = with types; listOf (submodule { options = {
addr = mkOption { type = str; description = "IP address."; };
port = mkOption { type = int; description = "Port number."; default = 80; };
ssl = mkOption { type = bool; description = "Enable SSL."; default = false; };
}; });
default = [];
example = [
{ addr = "195.154.1.1"; port = 443; }
{ addr = "192.168.1.2"; port = 443; }
{ addr = "195.154.1.1"; port = 443; ssl = true;}
{ addr = "192.154.1.1"; port = 80; }
];
description = ''
Listen addresses and ports for this virtual host.
IPv6 addresses must be enclosed in square brackets.
Setting the port to <literal>null</literal> defaults
to 80 for http and 443 for https (i.e. when enableSSL is set).
Note: this option overrides <literal>addSSL</literal>
and <literal>onlySSL</literal>.
'';
};
@ -70,16 +66,39 @@ with lib;
'';
};
enableSSL = mkOption {
addSSL = mkOption {
type = types.bool;
default = false;
description = "Whether to enable SSL (https) support.";
description = ''
Whether to enable HTTPS in addition to plain HTTP. This will set defaults for
<literal>listen</literal> to listen on all interfaces on the respective default
ports (80, 443).
'';
};
onlySSL = mkOption {
type = types.bool;
default = false;
description = ''
Whether to enable HTTPS and reject plain HTTP connections. This will set
defaults for <literal>listen</literal> to listen on all interfaces on port 443.
'';
};
enableSSL = mkOption {
type = types.bool;
visible = false;
default = false;
};
forceSSL = mkOption {
type = types.bool;
default = false;
description = "Whether to always redirect to https.";
description = ''
Whether to add a separate nginx server block that permanently redirects (301)
all plain HTTP traffic to HTTPS. This option needs <literal>addSSL</literal>
to be set to true.
'';
};
sslCertificate = mkOption {
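A minimal sketch contrasting the new addSSL, onlySSL, and forceSSL switches above; host names, certificate paths, and the proxied backend are illustrative, and sslCertificateKey is assumed to exist alongside sslCertificate:

```nix
# Hypothetical fragment: three virtual hosts with different SSL policies.
{
  services.nginx.virtualHosts = {
    # Serve both plain HTTP and HTTPS.
    "www.example.org" = {
      addSSL = true;
      enableACME = true;
      root = "/var/www/www.example.org";
    };
    # HTTPS only; plain HTTP connections are not accepted.
    "secure.example.org" = {
      onlySSL = true;
      sslCertificate = "/var/lib/acme/secure.example.org/fullchain.pem";
      sslCertificateKey = "/var/lib/acme/secure.example.org/key.pem";
      root = "/var/www/secure.example.org";
    };
    # Redirect all plain HTTP to HTTPS (in this revision forceSSL needs addSSL).
    "app.example.org" = {
      addSSL = true;
      forceSSL = true;
      enableACME = true;
      locations."/".proxyPass = "http://127.0.0.1:3000";
    };
  };
}
```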

View File

@ -176,7 +176,7 @@ in {
services.xserver.updateDbusEnvironment = true;
environment.variables.GIO_EXTRA_MODULES = [ "${gnome3.dconf}/lib/gio/modules"
environment.variables.GIO_EXTRA_MODULES = [ "${lib.getLib gnome3.dconf}/lib/gio/modules"
"${gnome3.glib_networking.out}/lib/gio/modules"
"${gnome3.gvfs}/lib/gio/modules" ];
environment.systemPackages = gnome3.corePackages ++ cfg.sessionPath

View File

@ -29,6 +29,7 @@ in
extraPackages = mkOption {
default = self: [];
defaultText = "self: []";
example = literalExample ''
haskellPackages: [
haskellPackages.xmonad-contrib

View File

@ -648,51 +648,11 @@ in
services.xserver.xkbDir = mkDefault "${pkgs.xkeyboard_config}/etc/X11/xkb";
system.extraDependencies = singleton (pkgs.runCommand "xkb-layouts-exist" {
inherit (cfg) layout xkbDir;
system.extraDependencies = singleton (pkgs.runCommand "xkb-validated" {
inherit (cfg) xkbModel layout xkbVariant xkbOptions;
nativeBuildInputs = [ pkgs.xkbvalidate ];
} ''
# We can use the default IFS here, because the layouts won't contain
# spaces or tabs and are ruled out by the sed expression below.
availableLayouts="$(
sed -n -e ':i /^! \(layout\|variant\) *$/ {
# Loop through all of the layouts/variants until we hit another ! at
# the start of the line or the line is empty ('t' branches only if
# the last substitution was successful, so if the line is empty the
# substition will fail).
:l; n; /^!/bi; s/^ *\([^ ]\+\).*/\1/p; tl
}' "$xkbDir/rules/base.lst" | sort -u
)"
layoutNotFound() {
echo >&2
echo "The following layouts and variants are available:" >&2
echo >&2
# While an output width of 80 is more desirable for small terminals, we
# really don't know the amount of columns of the terminal from within
# the builder. The content in $availableLayouts however is pretty
# large, so let's opt for a larger width here, because it will print a
# smaller amount of lines on modern KMS/framebuffer terminals and won't
# lose information even in smaller terminals (it only will look a bit
# ugly).
echo "$availableLayouts" | ${pkgs.utillinux}/bin/column -c 150 >&2
echo >&2
echo "However, the keyboard layout definition in" \
"\`services.xserver.layout' contains the layout \`$1', which" \
"isn't a valid layout or variant." >&2
echo >&2
exit 1
}
# Again, we don't need to take care of IFS, see the comment for
# $availableLayouts.
for l in ''${layout//,/ }; do
if ! echo "$availableLayouts" | grep -qxF "$l"; then
layoutNotFound "$l"
fi
done
validate "$xkbModel" "$layout" "$xkbVariant" "$xkbOptions"
touch "$out"
'');

View File

@ -141,6 +141,7 @@ in
system.build = mkOption {
internal = true;
default = {};
type = types.attrs;
description = ''
Attribute set of derivations used to setup the system.
'';

View File

@ -142,6 +142,18 @@ let
(assertValueOneOf "EmitTimezone" boolValues)
];
# .network files have a [Link] section with different options than in .netlink files
checkNetworkLink = checkUnitConfig "Link" [
(assertOnlyFields [
"MACAddress" "MTUBytes" "ARP" "Unmanaged"
])
(assertMacAddress "MACAddress")
(assertByteFormat "MTUBytes")
(assertValueOneOf "ARP" boolValues)
(assertValueOneOf "Unmanaged" boolValues)
];
commonNetworkOptions = {
enable = mkOption {
@ -371,6 +383,18 @@ let
'';
};
linkConfig = mkOption {
default = {};
example = { Unmanaged = true; };
type = types.addCheck (types.attrsOf unitOption) checkNetworkLink;
description = ''
Each attribute in this set specifies an option in the
<literal>[Link]</literal> section of the unit. See
<citerefentry><refentrytitle>systemd.network</refentrytitle>
<manvolnum>5</manvolnum></citerefentry> for details.
'';
};
name = mkOption {
type = types.nullOr types.str;
default = null;
@ -581,6 +605,12 @@ let
{ inherit (def) enable;
text = commonMatchText def +
''
${optionalString (def.linkConfig != { }) ''
[Link]
${attrsToSection def.linkConfig}
''}
[Network]
${attrsToSection def.networkConfig}
${concatStringsSep "\n" (map (s: "Address=${s}") def.address)}
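A minimal sketch of the new linkConfig option above (the matchConfig attribute and the interface name are assumptions based on the surrounding module):

```nix
# Hypothetical fragment: set [Link] section options for a matched interface.
{
  systemd.network.enable = true;
  systemd.network.networks."40-extern" = {
    matchConfig.Name = "enp0s3";   # illustrative interface name
    linkConfig = {
      Unmanaged = true;            # leave the interface alone
      MTUBytes = "1492";
    };
  };
}
```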

View File

@ -207,7 +207,7 @@ let
preLVMCommands preDeviceCommands postDeviceCommands postMountCommands preFailCommands kernelModules;
resumeDevices = map (sd: if sd ? device then sd.device else "/dev/disk/by-label/${sd.label}")
(filter (sd: hasPrefix "/dev/" sd.device && !sd.randomEncryption
(filter (sd: hasPrefix "/dev/" sd.device && !sd.randomEncryption.enable
# Don't include zram devices
&& !(hasPrefix "/dev/zram" sd.device)
) config.swapDevices);

View File

@ -593,7 +593,7 @@ in
services.logind.extraConfig = mkOption {
default = "";
type = types.lines;
example = "HandleLidSwitch=ignore";
example = "IdleAction=lock";
description = ''
Extra config options for systemd-logind. See man logind.conf for
available options.
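A minimal sketch combining the old and new example values for services.logind.extraConfig:

```nix
# Hypothetical fragment: pass extra settings through to logind.conf.
{
  services.logind.extraConfig = ''
    IdleAction=lock
    HandleLidSwitch=ignore
  '';
}
```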

View File

@ -20,8 +20,8 @@ let
sources = map (x: x.source) etc';
targets = map (x: x.target) etc';
modes = map (x: x.mode) etc';
uids = map (x: x.uid) etc';
gids = map (x: x.gid) etc';
users = map (x: x.user) etc';
groups = map (x: x.group) etc';
};
in
@ -108,6 +108,26 @@ in
'';
};
user = mkOption {
default = "+${toString config.uid}";
type = types.str;
description = ''
User name of created file.
Only takes effect when the file is copied (that is, the mode is not 'symlink').
Setting this option takes precedence over <literal>uid</literal>.
'';
};
group = mkOption {
default = "+${toString config.gid}";
type = types.str;
description = ''
Group name of created file.
Only takes effect when the file is copied (that is, the mode is not 'symlink').
Setting this option takes precedence over <literal>gid</literal>.
'';
};
};
config = {
@ -130,7 +150,7 @@ in
system.build.etc = etc;
system.activationScripts.etc = stringAfter [ "stdio" ]
system.activationScripts.etc = stringAfter [ "users" "groups" ]
''
# Set up the statically computed bits of /etc.
echo "setting up /etc..."

View File

@ -6,8 +6,8 @@ set -f
sources_=($sources)
targets_=($targets)
modes_=($modes)
uids_=($uids)
gids_=($gids)
users_=($users)
groups_=($groups)
set +f
for ((i = 0; i < ${#targets_[@]}; i++)); do
@ -36,9 +36,9 @@ for ((i = 0; i < ${#targets_[@]}; i++)); do
fi
if test "${modes_[$i]}" != symlink; then
echo "${modes_[$i]}" > $out/etc/$target.mode
echo "${uids_[$i]}" > $out/etc/$target.uid
echo "${gids_[$i]}" > $out/etc/$target.gid
echo "${modes_[$i]}" > $out/etc/$target.mode
echo "${users_[$i]}" > $out/etc/$target.uid
echo "${groups_[$i]}" > $out/etc/$target.gid
fi
fi

View File

@ -108,6 +108,8 @@ sub link {
my $uid = read_file("$_.uid"); chomp $uid;
my $gid = read_file("$_.gid"); chomp $gid;
copy "$static/$fn", "$target.tmp" or warn;
$uid = getpwnam $uid unless $uid =~ /^\+/;
$gid = getgrnam $gid unless $gid =~ /^\+/;
chown int($uid), int($gid), "$target.tmp" or warn;
chmod oct($mode), "$target.tmp" or warn;
rename "$target.tmp", $target or warn;

View File

@ -24,11 +24,7 @@ let
kernel = config.boot.kernelPackages;
packages = if config.boot.zfs.enableUnstable then {
spl = kernel.splUnstable;
zfs = kernel.zfsUnstable;
zfsUser = pkgs.zfsUnstable;
} else {
packages = {
spl = kernel.spl;
zfs = kernel.zfs;
zfsUser = pkgs.zfs;
@ -62,19 +58,6 @@ in
options = {
boot.zfs = {
enableUnstable = mkOption {
type = types.bool;
default = false;
description = ''
Use the unstable zfs package. This might be an option, if the latest
kernel is not yet supported by a published release of ZFS. Enabling
this option will install a development version of ZFS on Linux. The
version will have already passed an extensive test suite, but it is
more likely to hit an undiscovered bug compared to running a released
version of ZFS on Linux.
'';
};
extraPools = mkOption {
type = types.listOf types.str;
default = [];

View File

@ -0,0 +1,44 @@
# Usage:
# $ NIX_PATH=`pwd`:nixos-config=`pwd`/nixpkgs/nixos/modules/virtualisation/cloud-image.nix nix-build '<nixpkgs/nixos>' -A config.system.build.cloudImage
{ config, lib, pkgs, ... }:
with lib;
{
system.build.cloudImage = import ../../lib/make-disk-image.nix {
inherit pkgs lib config;
partitioned = true;
diskSize = 1 * 1024;
configFile = pkgs.writeText "configuration.nix"
''
{ config, lib, pkgs, ... }:
with lib;
{
imports = [ <nixpkgs/nixos/modules/virtualisation/cloud-image.nix> ];
}
'';
};
imports = [ ../profiles/qemu-guest.nix ];
fileSystems."/".device = "/dev/disk/by-label/nixos";
boot = {
kernelParams = [ "console=ttyS0" ];
loader.grub.device = "/dev/vda";
loader.timeout = 0;
};
networking.hostName = mkDefault "";
services.openssh = {
enable = true;
permitRootLogin = "without-password";
passwordAuthentication = mkDefault false;
};
services.cloud-init.enable = true;
}

View File

@ -16,6 +16,7 @@ in
virtualisation.xen.enable =
mkOption {
default = false;
type = types.bool;
description =
''
Setting this option enables the Xen hypervisor, a

View File

@ -4,7 +4,8 @@
{ nixpkgs ? { outPath = ./..; revCount = 56789; shortRev = "gfedcba"; }
, stableBranch ? false
, supportedSystems ? [ "x86_64-linux" "i686-linux" ]
, supportedSystems ? [ "x86_64-linux" ]
, limitedSupportedSystems ? [ "i686-linux" ]
}:
let
@ -19,10 +20,16 @@ let
else pkgs.lib.mapAttrs (n: v: removeMaintainers v) set
else set;
allSupportedNixpkgs = builtins.removeAttrs (removeMaintainers (import ../pkgs/top-level/release.nix {
supportedSystems = supportedSystems ++ limitedSupportedSystems;
nixpkgs = nixpkgsSrc;
})) [ "unstable" ];
in rec {
nixos = removeMaintainers (import ./release.nix {
inherit stableBranch supportedSystems;
inherit stableBranch;
supportedSystems = supportedSystems ++ limitedSupportedSystems;
nixpkgs = nixpkgsSrc;
});
@ -38,8 +45,11 @@ in rec {
maintainers = [ pkgs.lib.maintainers.eelco ];
};
constituents =
let all = x: map (system: x.${system}) supportedSystems; in
[ nixos.channel
let
all = x: map (system: x.${system})
(supportedSystems ++ limitedSupportedSystems);
in [
nixos.channel
(all nixos.dummy)
(all nixos.manual)
@ -106,8 +116,8 @@ in rec {
(all nixos.tests.xfce)
nixpkgs.tarball
(all nixpkgs.emacs)
(all nixpkgs.jdk)
(all allSupportedNixpkgs.emacs)
(all allSupportedNixpkgs.jdk)
];
});

View File

@ -1,6 +1,6 @@
{ nixpkgs ? { outPath = ./..; revCount = 56789; shortRev = "gfedcba"; }
, stableBranch ? false
, supportedSystems ? [ "x86_64-linux" "i686-linux" ]
, supportedSystems ? [ "x86_64-linux" ]
}:
with import ../lib;

View File

@ -49,6 +49,38 @@ let
machine.i18n.consoleKeyMap = mkOverride 900 layout;
machine.services.xserver.layout = mkOverride 900 layout;
machine.imports = [ ./common/x11.nix extraConfig ];
machine.services.xserver.displayManager.slim = {
enable = true;
# Use a custom theme in order to get best OCR results
theme = pkgs.runCommand "slim-theme-ocr" {
nativeBuildInputs = [ pkgs.imagemagick ];
} ''
mkdir "$out"
convert -size 1x1 xc:white "$out/background.jpg"
convert -size 200x100 xc:white "$out/panel.jpg"
cat > "$out/slim.theme" <<EOF
background_color #ffffff
background_style tile
input_fgcolor #000000
msg_color #000000
session_color #000000
session_font Verdana:size=16:bold
username_msg Username:
username_font Verdana:size=16:bold
username_color #000000
username_x 50%
username_y 40%
password_msg Password:
password_x 50%
password_y 40%
EOF
'';
};
testScript = ''
sub waitCatAndDelete ($) {

View File

@ -6,6 +6,20 @@
import ./make-test.nix ({ pkgs, lib, withFirewall, withConntrackHelpers ? false, ... }:
let
unit = if withFirewall then "firewall" else "nat";
routerBase =
lib.mkMerge [
{ virtualisation.vlans = [ 2 1 ];
networking.firewall.enable = withFirewall;
networking.firewall.allowPing = true;
networking.nat.internalIPs = [ "192.168.1.0/24" ];
networking.nat.externalInterface = "eth1";
}
(lib.optionalAttrs withConntrackHelpers {
networking.firewall.connectionTrackingModules = [ "ftp" ];
networking.firewall.autoLoadConntrackHelpers = true;
})
];
in
{
name = "nat" + (if withFirewall then "WithFirewall" else "Standalone")
@ -30,20 +44,16 @@ import ./make-test.nix ({ pkgs, lib, withFirewall, withConntrackHelpers ? false,
];
router =
{ config, pkgs, ... }:
lib.mkMerge [
{ virtualisation.vlans = [ 2 1 ];
networking.firewall.enable = withFirewall;
networking.firewall.allowPing = true;
networking.nat.enable = true;
networking.nat.internalIPs = [ "192.168.1.0/24" ];
networking.nat.externalInterface = "eth1";
}
(lib.optionalAttrs withConntrackHelpers {
networking.firewall.connectionTrackingModules = [ "ftp" ];
networking.firewall.autoLoadConntrackHelpers = true;
})
];
{ config, pkgs, ... }: lib.mkMerge [
routerBase
{ networking.nat.enable = true; }
];
routerDummyNoNat =
{ config, pkgs, ... }: lib.mkMerge [
routerBase
{ networking.nat.enable = false; }
];
server =
{ config, pkgs, ... }:
@ -57,9 +67,13 @@ import ./make-test.nix ({ pkgs, lib, withFirewall, withConntrackHelpers ? false,
};
testScript =
{ nodes, ... }:
''
startAll;
{ nodes, ... }: let
routerDummyNoNatClosure = nodes.routerDummyNoNat.config.system.build.toplevel;
routerClosure = nodes.router.config.system.build.toplevel;
in ''
$client->start;
$router->start;
$server->start;
# The router should have access to the server.
$server->waitForUnit("network.target");
@ -87,13 +101,18 @@ import ./make-test.nix ({ pkgs, lib, withFirewall, withConntrackHelpers ? false,
$router->succeed("ping -c 1 client >&2");
# If we turn off NAT, the client shouldn't be able to reach the server.
$router->succeed("iptables -t nat -D PREROUTING -j nixos-nat-pre");
$router->succeed("iptables -t nat -D POSTROUTING -j nixos-nat-post");
$router->succeed("${routerDummyNoNatClosure}/bin/switch-to-configuration test 2>&1");
$client->fail("curl --fail --connect-timeout 5 http://server/ >&2");
$client->fail("ping -c 1 server >&2");
# And make sure that reloading the NAT job works.
$router->succeed("systemctl restart ${unit}");
$router->succeed("${routerClosure}/bin/switch-to-configuration test 2>&1");
# FIXME: this should not be necessary, but nat.service is not started because
# network.target is not triggered
# (https://github.com/NixOS/nixpkgs/issues/16230#issuecomment-226408359)
${lib.optionalString (!withFirewall) ''
$router->succeed("systemctl start nat.service");
''}
$client->succeed("curl --fail http://server/ >&2");
$client->succeed("ping -c 1 server >&2");
'';

View File

@ -3,7 +3,7 @@
# generated virtual hosts config.
import ./make-test.nix ({ pkgs, ...} : {
name = "jenkins";
name = "nginx";
meta = with pkgs.stdenv.lib.maintainers; {
maintainers = [ mbbx6spp ];
};

View File

@ -16,17 +16,10 @@ import ./make-test.nix ({ pkgs, ...} :
# fontconfig-penultimate-0.3.3 -> 0.3.4 broke OCR apparently, but no idea why.
nixpkgs.config.packageOverrides = superPkgs: {
fontconfig-penultimate = superPkgs.fontconfig-penultimate.overrideAttrs
(_attrs: rec {
version = "0.3.3";
name = "fontconfig-penultimate-${version}";
src = pkgs.fetchFromGitHub {
owner = "ttuegel";
repo = "fontconfig-penultimate";
rev = version;
sha256 = "0392lw31jps652dcjazln77ihb6bl7gk201gb7wb9i223avp86w9";
};
});
fontconfig-penultimate = superPkgs.fontconfig-penultimate.override {
version = "0.3.3";
sha256 = "1z76jbkb0nhf4w7fy647yyayqr4q02fgk6w58k0yi700p0m3h4c9";
};
};
};

45
nixos/tests/timezone.nix Normal file
View File

@ -0,0 +1,45 @@
{
timezone-static = import ./make-test.nix ({ pkgs, ... }: {
name = "timezone-static";
meta.maintainers = with pkgs.lib.maintainers; [ lheckemann ];
machine.time.timeZone = "Europe/Amsterdam";
testScript = ''
$machine->waitForUnit("dbus.socket");
$machine->fail("timedatectl set-timezone Asia/Tokyo");
my @dateResult = $machine->execute('date -d @0 "+%Y-%m-%d %H:%M:%S"');
$dateResult[1] eq "1970-01-01 01:00:00\n" or die "Timezone seems to be wrong";
'';
});
timezone-imperative = import ./make-test.nix ({ pkgs, ... }: {
name = "timezone-imperative";
meta.maintainers = with pkgs.lib.maintainers; [ lheckemann ];
machine.time.timeZone = null;
testScript = ''
$machine->waitForUnit("dbus.socket");
# Should default to UTC
my @dateResult = $machine->execute('date -d @0 "+%Y-%m-%d %H:%M:%S"');
print $dateResult[1];
$dateResult[1] eq "1970-01-01 00:00:00\n" or die "Timezone seems to be wrong";
$machine->succeed("timedatectl set-timezone Asia/Tokyo");
# Adjustment should be taken into account
my @dateResult = $machine->execute('date -d @0 "+%Y-%m-%d %H:%M:%S"');
print $dateResult[1];
$dateResult[1] eq "1970-01-01 09:00:00\n" or die "Timezone was not adjusted";
# Adjustment should persist across a reboot
$machine->shutdown;
$machine->waitForUnit("dbus.socket");
my @dateResult = $machine->execute('date -d @0 "+%Y-%m-%d %H:%M:%S"');
print $dateResult[1];
$dateResult[1] eq "1970-01-01 09:00:00\n" or die "Timezone adjustment was not persisted";
'';
});
}

View File

@ -38,7 +38,7 @@ stdenv.mkDerivation rec{
Core release, applying a series of patches, and then doing deterministic
builds so anyone can check the downloads correspond to the source code.
'';
homepage = "https://bitcoinxt.software/";
homepage = https://bitcoinxt.software/;
maintainers = with maintainers; [ jefdaj ];
license = licenses.mit;
platforms = platforms.unix;

View File

@ -31,7 +31,7 @@ stdenv.mkDerivation rec{
parties. Users hold the crypto keys to their own money and transact directly
with each other, with the help of a P2P network to check for double-spending.
'';
homepage = "http://www.bitcoin.org/";
homepage = http://www.bitcoin.org/;
maintainers = with maintainers; [ roconnor AndersonTorres ];
license = licenses.mit;
platforms = platforms.unix;

View File

@ -0,0 +1,24 @@
{ lib, python2}:
python2.pkgs.buildPythonApplication rec {
pname = "cryptop";
version = "0.1.0";
name = "${pname}-${version}";
src = python2.pkgs.fetchPypi {
inherit pname version;
sha256 = "00glnlyig1aajh30knc5rnfbamwfxpg29js2db6mymjmfka8lbhh";
};
propagatedBuildInputs = [ python2.pkgs.requests ];
# No tests in archive
doCheck = false;
meta = {
homepage = http://github.com/huwwp/cryptop;
description = "Command line Cryptocurrency Portfolio";
license = with lib.licenses; [ mit ];
maintainers = with lib.maintainers; [ bhipple ];
};
}

View File

@ -16,6 +16,8 @@ stdenv.mkDerivation rec {
# I think that openssl and zlib are required, but come through other
# packages
preBuild = "unset AR";
installPhase = ''
mkdir -p $out/bin
cp freicoin-qt $out/bin

View File

@ -18,7 +18,7 @@ buildGoPackage rec {
meta = {
description = "Golang implementation of Ethereum Classic";
homepage = "https://github.com/ethereumproject/go-ethereum";
homepage = https://github.com/ethereumproject/go-ethereum;
license = with lib.licenses; [ lgpl3 gpl3 ];
};
}

View File

@ -16,7 +16,7 @@ buildGoPackage rec {
};
meta = {
homepage = "https://ethereum.github.io/go-ethereum/";
homepage = https://ethereum.github.io/go-ethereum/;
description = "Official golang implementation of the Ethereum protocol";
license = with lib.licenses; [ lgpl3 gpl3 ];
};

View File

@ -31,7 +31,7 @@ stdenv.mkDerivation rec {
into a blockchain so that Bitcoin-users can speculate in Prediction
Markets.
'';
homepage = "https://bitcoinhivemind.com";
homepage = https://bitcoinhivemind.com;
maintainers = with maintainers; [ canndrew ];
license = licenses.mit;
platforms = platforms.unix;

View File

@ -44,7 +44,7 @@ lib.overrideDerivation (mkDerivation rec {
tasty tasty-hunit tasty-quickcheck text vector
];
homepage = "https://github.com/dapphub/hsevm";
homepage = https://github.com/dapphub/hsevm;
description = "Ethereum virtual machine evaluator";
license = stdenv.lib.licenses.agpl3;
maintainers = [stdenv.lib.maintainers.dbrock];

View File

@ -22,7 +22,8 @@ stdenv.mkDerivation rec{
configureFlags = [ "--with-boost-libdir=${boost.out}/lib" ]
++ optionals withGui [ "--with-gui=qt4" ];
preBuild = optional (!withGui) "cd src; cp makefile.unix Makefile";
preBuild = "unset AR;"
+ (toString (optional (!withGui) "cd src; cp makefile.unix Makefile"));
installPhase =
if withGui
@ -42,7 +43,7 @@ stdenv.mkDerivation rec{
Memorycoin is based on the Bitcoin code, but with some key
differences.
'';
homepage = "http://www.bitcoin.org/";
homepage = http://www.bitcoin.org/;
maintainers = with maintainers; [ AndersonTorres ];
license = licenses.mit;
platforms = platforms.unix;

View File

@ -22,7 +22,8 @@ stdenv.mkDerivation rec{
configureFlags = [ "--with-boost-libdir=${boost.out}/lib" ]
++ optionals withGui [ "--with-gui=qt4" ];
preBuild = optional (!withGui) "cd src; cp makefile.unix Makefile";
preBuild = "unset AR;"
+ (toString (optional (!withGui) "cd src; cp makefile.unix Makefile"));
installPhase =
if withGui

View File

@ -41,7 +41,7 @@ stdenv.mkDerivation rec{
meta = {
description = "Peer-to-peer, anonymous electronic cash system";
homepage = "https://z.cash/";
homepage = https://z.cash/;
maintainers = with maintainers; [ rht ];
license = licenses.mit;
platforms = platforms.unix;

View File

@ -22,7 +22,7 @@ rustPlatform.buildRustPackage rec {
meta = with stdenv.lib; {
description = "Rust-language assets for Zcash";
homepage = "https://github.com/zcash/librustzcash";
homepage = https://github.com/zcash/librustzcash;
maintainers = with maintainers; [ rht ];
license = with licenses; [ mit asl20 ];
platforms = platforms.unix;

View File

@ -21,7 +21,7 @@ stdenv.mkDerivation rec {
meta = with stdenv.lib; {
description = "Optimal Ate Pairing over Barreto-Naehrig Curves";
homepage = "https://github.com/herumi/ate-pairing";
homepage = https://github.com/herumi/ate-pairing;
maintainers = with maintainers; [ rht ];
license = licenses.bsd3;
platforms = platforms.unix;

View File

@ -37,7 +37,7 @@ stdenv.mkDerivation rec{
meta = with stdenv.lib; {
description = "a C++ library for zkSNARK proofs";
homepage = "https://github.com/zcash/libsnark";
homepage = https://github.com/zcash/libsnark;
maintainers = with maintainers; [ rht ];
license = licenses.mit;
platforms = platforms.unix;

View File

@ -19,7 +19,7 @@ stdenv.mkDerivation rec {
'';
meta = with stdenv.lib; {
homepage = "https://github.com/herumi/mie";
homepage = https://github.com/herumi/mie;
maintainers = with maintainers; [ rht ];
license = licenses.bsd3;
platforms = platforms.unix;

Some files were not shown because too many files have changed in this diff.