Merge branch 'master' into staging-next

Fixed trivial conflicts caused by removing rec.
This commit is contained in:
Jan Tojnar 2019-09-06 03:20:09 +02:00
commit cdf426488b
247 changed files with 3153 additions and 2159 deletions

View File

@ -20,4 +20,5 @@
<xi:include href="functions/appimagetools.xml" />
<xi:include href="functions/prefer-remote-fetch.xml" />
<xi:include href="functions/nix-gitignore.xml" />
<xi:include href="functions/ocitools.xml" />
</chapter>

View File

@ -0,0 +1,76 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xml:id="sec-pkgs-ociTools">
<title>pkgs.ociTools</title>
<para>
<varname>pkgs.ociTools</varname> is a set of functions for creating
containers according to the
<link xlink:href="https://github.com/opencontainers/runtime-spec">OCI
container specification v1.0.0</link>. Beyond that, it makes no assumptions
about the runtime you use to run the created container.
</para>
<section xml:id="ssec-pkgs-ociTools-buildContainer">
<title>buildContainer</title>
<para>
This function creates a simple OCI container that runs a single command
inside of it. An OCI container consists of a <varname>config.json</varname>
and a rootfs directory. The Nix store of the container will contain all
referenced dependencies of the given command.
</para>
<para>
The parameters of <varname>buildContainer</varname> with an example value
are described below:
</para>
<example xml:id='ex-ociTools-buildContainer'>
<title>Build Container</title>
<programlisting>
buildContainer {
cmd = with pkgs; writeScript "run.sh" ''
#!${bash}/bin/bash
${coreutils}/bin/exec ${bash}/bin/bash
''; <co xml:id='ex-ociTools-buildContainer-1' />
mounts = {
"/data" = {
type = "none";
source = "/var/lib/mydata";
options = [ "bind" ];
};
};<co xml:id='ex-ociTools-buildContainer-2' />
readonly = false; <co xml:id='ex-ociTools-buildContainer-3' />
}
</programlisting>
<calloutlist>
<callout arearefs='ex-ociTools-buildContainer-1'>
<para>
<varname>cmd</varname> specifies the program to run inside the container.
This is the only required argument for <varname>buildContainer</varname>.
All referenced packages inside the derivation will be made available
inside the container.
</para>
</callout>
<callout arearefs='ex-ociTools-buildContainer-2'>
<para>
<varname>mounts</varname> specifies additional mount points chosen by the
user. By default only a minimal set of necessary filesystems are mounted
into the container (e.g. procfs, cgroupfs).
</para>
</callout>
<callout arearefs='ex-ociTools-buildContainer-3'>
<para>
<varname>readonly</varname> makes the container's rootfs read-only if it is set to true.
The default value is <literal>false</literal>.
</para>
</callout>
</calloutlist>
</example>
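   <para>
    The result of <varname>buildContainer</varname> is an OCI runtime bundle,
    i.e. a directory containing the <varname>config.json</varname> and the
    rootfs, so any OCI-compliant runtime can start it. A minimal sketch,
    assuming <command>runc</command> as the runner (any other OCI runtime
    works as well) and a <varname>cmd</varname> shaped like the example above:
   </para>
<programlisting>
$ nix-build -E 'with import &lt;nixpkgs&gt; {};
    ociTools.buildContainer {
      cmd = writeScript "run.sh" "#!${bash}/bin/bash\necho hello";
    }'
$ sudo runc run --bundle ./result hello-container
</programlisting>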
</section>
</section>

View File

@ -2714,6 +2714,49 @@ nativeBuildInputs = [ breakpointHook ];
</note>
</listitem>
</varlistentry>
<varlistentry>
<term>
installShellFiles
</term>
<listitem>
<para>
This hook helps with installing manpages and shell completion files. It
exposes 2 shell functions <literal>installManPage</literal> and
<literal>installShellCompletion</literal> that can be used from your
<literal>postInstall</literal> hook.
</para>
<para>
The <literal>installManPage</literal> function takes one or more paths
to manpages to install. The manpages must have a section suffix, and may
optionally be compressed (with <literal>.gz</literal> suffix). This
function will place them into the correct directory.
</para>
<para>
The <literal>installShellCompletion</literal> function takes one or more
paths to shell completion files. By default it will autodetect the shell
type from the completion file extension, but you may also specify it by
passing one of <literal>--bash</literal>, <literal>--fish</literal>, or
<literal>--zsh</literal>. These flags apply to all paths listed after
them (up until another shell flag is given). Each path may also be given a
custom installation name by passing a <literal>--name
NAME</literal> flag before the path. If this flag is not provided, zsh
completions will be renamed automatically such that
<literal>foobar.zsh</literal> becomes <literal>_foobar</literal>.
<programlisting>
nativeBuildInputs = [ installShellFiles ];
postInstall = ''
installManPage doc/foobar.1 doc/barfoo.3
# explicit behavior
installShellCompletion --bash --name foobar.bash share/completions.bash
installShellCompletion --fish --name foobar.fish share/completions.fish
installShellCompletion --zsh --name _foobar share/completions.zsh
# implicit behavior
installShellCompletion share/completions/foobar.{bash,fish,zsh}
'';
</programlisting>
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
libiconv, libintl

View File

@ -24,8 +24,8 @@
<para>
Apart from high-level options, it's possible to tweak a package in almost
arbitrary ways, such as changing or disabling dependencies of a package. For
instance, the Emacs package in Nixpkgs by default has a dependency on GTK+ 2.
If you want to build it against GTK+ 3, you can specify that as follows:
instance, the Emacs package in Nixpkgs by default has a dependency on GTK 2.
If you want to build it against GTK 3, you can specify that as follows:
<programlisting>
<xref linkend="opt-environment.systemPackages"/> = [ (pkgs.emacs.override { gtk = pkgs.gtk3; }) ];
</programlisting>
@ -33,7 +33,7 @@
function that produces Emacs, with the original arguments amended by the set
of arguments specified by you. So here the function argument
<varname>gtk</varname> gets the value <literal>pkgs.gtk3</literal>, causing
Emacs to depend on GTK+ 3. (The parentheses are necessary because in Nix,
Emacs to depend on GTK 3. (The parentheses are necessary because in Nix,
function application binds more weakly than list construction, so without
them, <xref linkend="opt-environment.systemPackages"/> would be a list with
two elements.)

View File

@ -730,7 +730,7 @@ in
</listitem>
<listitem>
<para>
<literal>jre</literal> now defaults to GTK+ UI by default. This improves
<literal>jre</literal> now uses the GTK UI by default. This improves
visual consistency and makes Java follow system font style, improving the
situation on HighDPI displays. This has a cost of increased closure size;
for server and other headless workloads it's recommended to use

View File

@ -422,6 +422,12 @@
It was not useful except for debugging purposes and was confusingly set as default in some circumstances.
</para>
</listitem>
<listitem>
<para>
The WeeChat plugin <literal>pkgs.weechatScripts.weechat-xmpp</literal> has been removed as it doesn't receive
any updates from upstream and depends on outdated Python2-based modules.
</para>
</listitem>
</itemizedlist>
</section>
@ -710,6 +716,22 @@
<literal>nix-shell -p altcoins.dogecoin</literal>, etc.
</para>
</listitem>
<listitem>
<para>
Ceph has been upgraded to v14.2.1.
See the <link xlink:href="https://ceph.com/releases/v14-2-0-nautilus-released/">release notes</link> for details.
The mgr dashboard and OSDs backed by loop devices are no longer explicitly supported by the package and module.
Note: there have been some issues with python-cherrypy, which is used by the dashboard
and prometheus mgr modules (and possibly others), hence 0000-dont-check-cherrypy-version.patch.
</para>
</listitem>
<listitem>
<para>
<literal>pkgs.weechat</literal> is now compiled against <literal>pkgs.python3</literal>.
WeeChat also <link xlink:href="https://weechat.org/scripts/python3/">recommends using Python 3</link>
in its documentation.
</para>
</listitem>
</itemizedlist>
</section>
</section>

View File

@ -17,7 +17,7 @@ in {
name = mkOption {
type = types.str;
description = "The name of the generated derivation";
default = "nixos-disk-image";
default = "nixos-amazon-image-${config.system.nixos.label}-${pkgs.stdenv.hostPlatform.system}";
};
contents = mkOption {
@ -42,7 +42,7 @@ in {
format = mkOption {
type = types.enum [ "raw" "qcow2" "vpc" ];
default = "qcow2";
default = "vpc";
description = "The image format to output";
};
};
@ -51,7 +51,9 @@ in {
inherit lib config;
inherit (cfg) contents format name;
pkgs = import ../../../.. { inherit (pkgs) system; }; # ensure we use the regular qemu-kvm package
partitionTableType = if config.ec2.hvm then "legacy" else "none";
partitionTableType = if config.ec2.efi then "efi"
else if config.ec2.hvm then "legacy"
else "none";
diskSize = cfg.sizeMB;
fsType = "ext4";
configFile = pkgs.writeText "configuration.nix"
@ -61,7 +63,27 @@ in {
${optionalString config.ec2.hvm ''
ec2.hvm = true;
''}
${optionalString config.ec2.efi ''
ec2.efi = true;
''}
}
'';
postVM = ''
extension=''${diskImage##*.}
friendlyName=$out/${cfg.name}.$extension
mv "$diskImage" "$friendlyName"
diskImage=$friendlyName
mkdir -p $out/nix-support
echo "file ${cfg.format} $diskImage" >> $out/nix-support/hydra-build-products
${pkgs.jq}/bin/jq -n \
--arg label ${lib.escapeShellArg config.system.nixos.label} \
--arg system ${lib.escapeShellArg pkgs.stdenv.hostPlatform.system} \
--arg logical_bytes "$(${pkgs.qemu}/bin/qemu-img info --output json "$diskImage" | ${pkgs.jq}/bin/jq '."virtual-size"')" \
--arg file "$diskImage" \
'$ARGS.named' \
> $out/nix-support/image-info.json
'';
};
}

View File

@ -1,279 +1,296 @@
#!/usr/bin/env nix-shell
#! nix-shell -i bash -p qemu ec2_ami_tools jq ec2_api_tools awscli
#!nix-shell -p awscli -p jq -p qemu -i bash
# To start with do: nix-shell -p awscli --run "aws configure"
# Uploads and registers NixOS images built from the
# <nixos/release.nix> amazonImage attribute. Images are uploaded and
# registered via a home region, and then copied to other regions.
set -e
set -o pipefail
# The home region requires an s3 bucket, and a "vmimport" IAM role
# with access to the S3 bucket. Configuration of the vmimport role is
# documented in
# https://docs.aws.amazon.com/vm-import/latest/userguide/vmimport-image-import.html
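# Illustrative usage, assuming the amazonImage attribute mentioned above
# (attribute and result paths may differ):
#   nix-build '<nixos/release.nix>' -A amazonImage
#   ./upload-amazon-image.sh ./result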
version=$(nix-instantiate --eval --strict '<nixpkgs>' -A lib.version | sed s/'"'//g)
major=${version:0:5}
echo "NixOS version is $version ($major)"
# set -x
set -euo pipefail
stateDir=/home/deploy/amis/ec2-image-$version
echo "keeping state in $stateDir"
mkdir -p $stateDir
# configuration
state_dir=/home/deploy/amis/ec2-images
home_region=eu-west-1
bucket=nixos-amis
rm -f ec2-amis.nix
regions=(eu-west-1 eu-west-2 eu-west-3 eu-central-1
us-east-1 us-east-2 us-west-1 us-west-2
ca-central-1
ap-southeast-1 ap-southeast-2 ap-northeast-1 ap-northeast-2
ap-south-1 ap-east-1
sa-east-1)
types="hvm"
stores="ebs"
regions="eu-west-1 eu-west-2 eu-west-3 eu-central-1 us-east-1 us-east-2 us-west-1 us-west-2 ca-central-1 ap-southeast-1 ap-southeast-2 ap-northeast-1 ap-northeast-2 sa-east-1 ap-south-1"
log() {
echo "$@" >&2
}
for type in $types; do
link=$stateDir/$type
imageFile=$link/nixos.qcow2
system=x86_64-linux
arch=x86_64
if [ -z "$1" ]; then
log "Usage: ./upload-amazon-image.sh IMAGE_OUTPUT"
exit 1
fi
# Build the image.
if ! [ -L $link ]; then
if [ $type = pv ]; then hvmFlag=false; else hvmFlag=true; fi
# result of the amazon-image from nixos/release.nix
store_path=$1
echo "building image type '$type'..."
nix-build -o $link \
'<nixpkgs/nixos>' \
-A config.system.build.amazonImage \
--arg configuration "{ imports = [ <nixpkgs/nixos/maintainers/scripts/ec2/amazon-image.nix> ]; ec2.hvm = $hvmFlag; }"
if [ ! -e "$store_path" ]; then
log "Store path: $store_path does not exist, fetching..."
nix-store --realise "$store_path"
fi
if [ ! -d "$store_path" ]; then
log "store_path: $store_path is not a directory. aborting"
exit 1
fi
read_image_info() {
if [ ! -e "$store_path/nix-support/image-info.json" ]; then
log "Image missing metadata"
exit 1
fi
jq -r "$1" "$store_path/nix-support/image-info.json"
}
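# For reference, image-info.json is written by the image's postVM hook and
# has this shape (values illustrative, store path shortened):
#   {
#     "label": "19.09beta",
#     "system": "x86_64-linux",
#     "logical_bytes": "2147483648",
#     "file": "/nix/store/<hash>-amazon-image/nixos-amazon-image.vhd"
#   }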
# We handle a single image per invocation, store all attributes in
# globals for convenience.
image_label=$(read_image_info .label)
image_system=$(read_image_info .system)
image_file=$(read_image_info .file)
image_logical_bytes=$(read_image_info .logical_bytes)
# Derived attributes
image_logical_gigabytes=$((($image_logical_bytes-1)/1024/1024/1024+1)) # Round to the next GB
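# e.g. 2147483649 bytes (just over 2 GiB): (2147483649-1)/1024^3 = 2, +1 -> registered as 3 GB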
case "$image_system" in
aarch64-linux)
amazon_arch=arm64
;;
x86_64-linux)
amazon_arch=x86_64
;;
*)
log "Unknown system: $image_system"
exit 1
esac
image_name="NixOS-${image_label}-${image_system}"
image_description="NixOS ${image_label} ${image_system}"
log "Image Details:"
log " Name: $image_name"
log " Description: $image_description"
log " Size (gigabytes): $image_logical_gigabytes"
log " System: $image_system"
log " Amazon Arch: $amazon_arch"
read_state() {
local state_key=$1
local type=$2
cat "$state_dir/$state_key.$type" 2>/dev/null || true
}
write_state() {
local state_key=$1
local type=$2
local val=$3
mkdir -p $state_dir
echo "$val" > "$state_dir/$state_key.$type"
}
wait_for_import() {
local region=$1
local task_id=$2
local state snapshot_id
log "Waiting for import task $task_id to be completed"
while true; do
read state progress snapshot_id < <(
aws ec2 describe-import-snapshot-tasks --region $region --import-task-ids "$task_id" | \
jq -r '.ImportSnapshotTasks[].SnapshotTaskDetail | "\(.Status) \(.Progress) \(.SnapshotId)"'
)
log " ... state=$state progress=$progress snapshot_id=$snapshot_id"
case "$state" in
active)
sleep 10
;;
completed)
echo "$snapshot_id"
return
;;
*)
log "Unexpected snapshot import state: '${state}'"
exit 1
;;
esac
done
}
wait_for_image() {
local region=$1
local ami_id=$2
local state
log "Waiting for image $ami_id to be available"
while true; do
read state < <(
aws ec2 describe-images --image-ids "$ami_id" --region $region | \
jq -r ".Images[].State"
)
log " ... state=$state"
case "$state" in
pending)
sleep 10
;;
available)
return
;;
*)
log "Unexpected AMI state: '${state}'"
exit 1
;;
esac
done
}
make_image_public() {
local region=$1
local ami_id=$2
wait_for_image $region "$ami_id"
log "Making image $ami_id public"
aws ec2 modify-image-attribute \
--image-id "$ami_id" --region "$region" --launch-permission 'Add={Group=all}' >&2
}
upload_image() {
local region=$1
local aws_path=${image_file#/}
local state_key="$region.$image_label.$image_system"
local task_id=$(read_state "$state_key" task_id)
local snapshot_id=$(read_state "$state_key" snapshot_id)
local ami_id=$(read_state "$state_key" ami_id)
if [ -z "$task_id" ]; then
log "Checking for image on S3"
if ! aws s3 ls --region "$region" "s3://${bucket}/${aws_path}" >&2; then
log "Image missing from aws, uploading"
aws s3 cp --region $region "$image_file" "s3://${bucket}/${aws_path}" >&2
fi
log "Importing image from S3 path s3://$bucket/$aws_path"
task_id=$(aws ec2 import-snapshot --disk-container "{
\"Description\": \"nixos-image-${image_label}-${image_system}\",
\"Format\": \"vhd\",
\"UserBucket\": {
\"S3Bucket\": \"$bucket\",
\"S3Key\": \"$aws_path\"
}
}" --region $region | jq -r '.ImportTaskId')
write_state "$state_key" task_id "$task_id"
fi
for store in $stores; do
if [ -z "$snapshot_id" ]; then
snapshot_id=$(wait_for_import "$region" "$task_id")
write_state "$state_key" snapshot_id "$snapshot_id"
fi
bucket=nixos-amis
bucketDir="$version-$type-$store"
if [ -z "$ami_id" ]; then
log "Registering snapshot $snapshot_id as AMI"
prevAmi=
prevRegion=
local block_device_mappings=(
"DeviceName=/dev/sda1,Ebs={SnapshotId=$snapshot_id,VolumeSize=$image_logical_gigabytes,DeleteOnTermination=true,VolumeType=gp2}"
)
for region in $regions; do
local extra_flags=(
--root-device-name /dev/sda1
--sriov-net-support simple
--ena-support
--virtualization-type hvm
)
name=nixos-$version-$arch-$type-$store
description="NixOS $system $version ($type-$store)"
block_device_mappings+=(DeviceName=/dev/sdb,VirtualName=ephemeral0)
block_device_mappings+=(DeviceName=/dev/sdc,VirtualName=ephemeral1)
block_device_mappings+=(DeviceName=/dev/sdd,VirtualName=ephemeral2)
block_device_mappings+=(DeviceName=/dev/sde,VirtualName=ephemeral3)
amiFile=$stateDir/$region.$type.$store.ami-id
ami_id=$(
aws ec2 register-image \
--name "$image_name" \
--description "$image_description" \
--region $region \
--architecture $amazon_arch \
--block-device-mappings "${block_device_mappings[@]}" \
"${extra_flags[@]}" \
| jq -r '.ImageId'
)
if ! [ -e $amiFile ]; then
write_state "$state_key" ami_id "$ami_id"
fi
echo "doing $name in $region..."
make_image_public $region "$ami_id"
if [ -n "$prevAmi" ]; then
ami=$(aws ec2 copy-image \
--region "$region" \
--source-region "$prevRegion" --source-image-id "$prevAmi" \
--name "$name" --description "$description" | jq -r '.ImageId')
if [ "$ami" = null ]; then break; fi
else
echo "$ami_id"
}
if [ $store = s3 ]; then
copy_to_region() {
local region=$1
local from_region=$2
local from_ami_id=$3
# Bundle the image.
imageDir=$stateDir/$type-bundled
state_key="$region.$image_label.$image_system"
ami_id=$(read_state "$state_key" ami_id)
# Convert the image to raw format.
rawFile=$stateDir/$type.raw
if ! [ -e $rawFile ]; then
qemu-img convert -f qcow2 -O raw $imageFile $rawFile.tmp
mv $rawFile.tmp $rawFile
fi
if [ -z "$ami_id" ]; then
log "Copying $from_ami_id to $region"
ami_id=$(
aws ec2 copy-image \
--region "$region" \
--source-region "$from_region" \
--source-image-id "$from_ami_id" \
--name "$image_name" \
--description "$image_description" \
| jq -r '.ImageId'
)
if ! [ -d $imageDir ]; then
rm -rf $imageDir.tmp
mkdir -p $imageDir.tmp
ec2-bundle-image \
-d $imageDir.tmp \
-i $rawFile --arch $arch \
--user "$AWS_ACCOUNT" -c "$EC2_CERT" -k "$EC2_PRIVATE_KEY"
mv $imageDir.tmp $imageDir
fi
write_state "$state_key" ami_id "$ami_id"
fi
# Upload the bundle to S3.
if ! [ -e $imageDir/uploaded ]; then
echo "uploading bundle to S3..."
ec2-upload-bundle \
-m $imageDir/$type.raw.manifest.xml \
-b "$bucket/$bucketDir" \
-a "$AWS_ACCESS_KEY_ID" -s "$AWS_SECRET_ACCESS_KEY" \
--location EU
touch $imageDir/uploaded
fi
make_image_public $region "$ami_id"
extraFlags="--image-location $bucket/$bucketDir/$type.raw.manifest.xml"
echo "$ami_id"
}
else
upload_all() {
home_image_id=$(upload_image "$home_region")
jq -n \
--arg key "$home_region.$image_system" \
--arg value "$home_image_id" \
'$ARGS.named'
# Convert the image to vhd format so we don't have
# to upload a huge raw image.
vhdFile=$stateDir/$type.vhd
if ! [ -e $vhdFile ]; then
qemu-img convert -f qcow2 -O vpc $imageFile $vhdFile.tmp
mv $vhdFile.tmp $vhdFile
fi
vhdFileLogicalBytes="$(qemu-img info "$vhdFile" | grep ^virtual\ size: | cut -f 2 -d \( | cut -f 1 -d \ )"
vhdFileLogicalGigaBytes=$(((vhdFileLogicalBytes-1)/1024/1024/1024+1)) # Round to the next GB
echo "Disk size is $vhdFileLogicalBytes bytes. Will be registered as $vhdFileLogicalGigaBytes GB."
taskId=$(cat $stateDir/$region.$type.task-id 2> /dev/null || true)
volId=$(cat $stateDir/$region.$type.vol-id 2> /dev/null || true)
snapId=$(cat $stateDir/$region.$type.snap-id 2> /dev/null || true)
# Import the VHD file.
if [ -z "$snapId" -a -z "$volId" -a -z "$taskId" ]; then
echo "importing $vhdFile..."
taskId=$(ec2-import-volume $vhdFile --no-upload -f vhd \
-O "$AWS_ACCESS_KEY_ID" -W "$AWS_SECRET_ACCESS_KEY" \
-o "$AWS_ACCESS_KEY_ID" -w "$AWS_SECRET_ACCESS_KEY" \
--region "$region" -z "${region}a" \
--bucket "$bucket" --prefix "$bucketDir/" \
| tee /dev/stderr \
| sed 's/.*\(import-vol-[0-9a-z]\+\).*/\1/ ; t ; d')
echo -n "$taskId" > $stateDir/$region.$type.task-id
fi
if [ -z "$snapId" -a -z "$volId" ]; then
ec2-resume-import $vhdFile -t "$taskId" --region "$region" \
-O "$AWS_ACCESS_KEY_ID" -W "$AWS_SECRET_ACCESS_KEY" \
-o "$AWS_ACCESS_KEY_ID" -w "$AWS_SECRET_ACCESS_KEY"
fi
# Wait for the volume creation to finish.
if [ -z "$snapId" -a -z "$volId" ]; then
echo "waiting for import to finish..."
while true; do
volId=$(aws ec2 describe-conversion-tasks --conversion-task-ids "$taskId" --region "$region" | jq -r .ConversionTasks[0].ImportVolume.Volume.Id)
if [ "$volId" != null ]; then break; fi
sleep 10
done
echo -n "$volId" > $stateDir/$region.$type.vol-id
fi
# Delete the import task.
if [ -n "$volId" -a -n "$taskId" ]; then
echo "removing import task..."
ec2-delete-disk-image -t "$taskId" --region "$region" \
-O "$AWS_ACCESS_KEY_ID" -W "$AWS_SECRET_ACCESS_KEY" \
-o "$AWS_ACCESS_KEY_ID" -w "$AWS_SECRET_ACCESS_KEY" || true
rm -f $stateDir/$region.$type.task-id
fi
# Create a snapshot.
if [ -z "$snapId" ]; then
echo "creating snapshot..."
# FIXME: this can fail with InvalidVolume.NotFound. Eventual consistency yay.
snapId=$(aws ec2 create-snapshot --volume-id "$volId" --region "$region" --description "$description" | jq -r .SnapshotId)
if [ "$snapId" = null ]; then exit 1; fi
echo -n "$snapId" > $stateDir/$region.$type.snap-id
fi
# Wait for the snapshot to finish.
echo "waiting for snapshot to finish..."
while true; do
status=$(aws ec2 describe-snapshots --snapshot-ids "$snapId" --region "$region" | jq -r .Snapshots[0].State)
if [ "$status" = completed ]; then break; fi
sleep 10
done
# Delete the volume.
if [ -n "$volId" ]; then
echo "deleting volume..."
aws ec2 delete-volume --volume-id "$volId" --region "$region" || true
rm -f $stateDir/$region.$type.vol-id
fi
blockDeviceMappings="DeviceName=/dev/sda1,Ebs={SnapshotId=$snapId,VolumeSize=$vhdFileLogicalGigaBytes,DeleteOnTermination=true,VolumeType=gp2}"
extraFlags=""
if [ $type = pv ]; then
extraFlags+=" --root-device-name /dev/sda1"
else
extraFlags+=" --root-device-name /dev/sda1"
extraFlags+=" --sriov-net-support simple"
extraFlags+=" --ena-support"
fi
blockDeviceMappings+=" DeviceName=/dev/sdb,VirtualName=ephemeral0"
blockDeviceMappings+=" DeviceName=/dev/sdc,VirtualName=ephemeral1"
blockDeviceMappings+=" DeviceName=/dev/sdd,VirtualName=ephemeral2"
blockDeviceMappings+=" DeviceName=/dev/sde,VirtualName=ephemeral3"
fi
if [ $type = hvm ]; then
extraFlags+=" --sriov-net-support simple"
extraFlags+=" --ena-support"
fi
# Register the AMI.
if [ $type = pv ]; then
kernel=$(aws ec2 describe-images --owner amazon --filters "Name=name,Values=pv-grub-hd0_1.05-$arch.gz" | jq -r .Images[0].ImageId)
if [ "$kernel" = null ]; then break; fi
echo "using PV-GRUB kernel $kernel"
extraFlags+=" --virtualization-type paravirtual --kernel $kernel"
else
extraFlags+=" --virtualization-type hvm"
fi
ami=$(aws ec2 register-image \
--name "$name" \
--description "$description" \
--region "$region" \
--architecture "$arch" \
--block-device-mappings $blockDeviceMappings \
$extraFlags | jq -r .ImageId)
if [ "$ami" = null ]; then break; fi
fi
echo -n "$ami" > $amiFile
echo "created AMI $ami of type '$type' in $region..."
else
ami=$(cat $amiFile)
fi
echo "region = $region, type = $type, store = $store, ami = $ami"
if [ -z "$prevAmi" ]; then
prevAmi="$ami"
prevRegion="$region"
fi
done
for region in "${regions[@]}"; do
if [ "$region" = "$home_region" ]; then
continue
fi
copied_image_id=$(copy_to_region "$region" "$home_region" "$home_image_id")
jq -n \
--arg key "$region.$image_system" \
--arg value "$copied_image_id" \
'$ARGS.named'
done
}
done
for type in $types; do
link=$stateDir/$type
system=x86_64-linux
arch=x86_64
for store in $stores; do
for region in $regions; do
name=nixos-$version-$arch-$type-$store
amiFile=$stateDir/$region.$type.$store.ami-id
ami=$(cat $amiFile)
echo "region = $region, type = $type, store = $store, ami = $ami"
echo -n "waiting for AMI..."
while true; do
status=$(aws ec2 describe-images --image-ids "$ami" --region "$region" | jq -r .Images[0].State)
if [ "$status" = available ]; then break; fi
sleep 10
echo -n '.'
done
echo
# Make the image public.
aws ec2 modify-image-attribute \
--image-id "$ami" --region "$region" --launch-permission 'Add={Group=all}'
echo " \"$major\".$region.$type-$store = \"$ami\";" >> ec2-amis.nix
done
done
done
upload_all | jq --slurp from_entries
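# The slurped output maps "region.system" keys to AMI ids, e.g. (value illustrative):
#   { "eu-west-1.x86_64-linux": "ami-0123456789abcdef0" }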

View File

@ -7,7 +7,7 @@ with lib;
type = types.bool;
default = config.services.xserver.enable;
description = ''
Whether to build icon theme caches for GTK+ applications.
Whether to build icon theme caches for GTK applications.
'';
};
};

View File

@ -1,6 +1,6 @@
{
x86_64-linux = "/nix/store/hbhdjn5ik3byg642d1m11k3k3s0kn3py-nix-2.2.2";
i686-linux = "/nix/store/fz5cikwvj3n0a6zl44h6l2z3cin64mda-nix-2.2.2";
aarch64-linux = "/nix/store/2gba4cyl4wvxzfbhmli90jy4n5aj0kjj-nix-2.2.2";
x86_64-darwin = "/nix/store/87i4fp46jfw9yl8c7i9gx75m5yph7irl-nix-2.2.2";
x86_64-linux = "/nix/store/3ds3cgji9vjxdbgp10av6smyym1126d1-nix-2.3";
i686-linux = "/nix/store/ln1ndqvfpc9cdl03vqxi6kvlxm9wfv9g-nix-2.3";
aarch64-linux = "/nix/store/n8a1rwzrp20qcr2c4hvyn6c5q9zx8csw-nix-2.3";
x86_64-darwin = "/nix/store/jq6npmpld02sz4rgniz0qrsdfnm6j17a-nix-2.3";
}

View File

@ -948,6 +948,7 @@
./virtualisation/openvswitch.nix
./virtualisation/parallels-guest.nix
./virtualisation/qemu-guest-agent.nix
./virtualisation/railcar.nix
./virtualisation/rkt.nix
./virtualisation/virtualbox-guest.nix
./virtualisation/virtualbox-host.nix

View File

@ -18,7 +18,7 @@ in
enable = mkOption {
default = false;
description = ''
Whether to enable the Plotinus GTK+3 plugin. Plotinus provides a
Whether to enable the Plotinus GTK 3 plugin. Plotinus provides a
popup (triggered by Ctrl-Shift-P) to search the menus of a
compatible application.
'';

View File

@ -13,10 +13,10 @@
<link xlink:href="https://github.com/p-e-w/plotinus"/>
</para>
<para>
Plotinus is a searchable command palette in every modern GTK+ application.
Plotinus is a searchable command palette in every modern GTK application.
</para>
<para>
When in a GTK+3 application and Plotinus is enabled, you can press
When in a GTK 3 application and Plotinus is enabled, you can press
<literal>Ctrl+Shift+P</literal> to open the command palette. The command
palette provides a searchable list of all menu items in the application.
</para>
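  <para>
   A minimal sketch of enabling it, assuming the module lives under the
   <varname>programs.plotinus</varname> option path:
  </para>
<programlisting>
programs.plotinus.enable = true;
</programlisting>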

View File

@ -34,6 +34,7 @@ with lib;
(mkRenamedOptionModule [ "services" "kubernetes" "etcd" "caFile" ] [ "services" "kubernetes" "apiserver" "etcd" "caFile" ])
(mkRemovedOptionModule [ "services" "kubernetes" "kubelet" "applyManifests" ] "")
(mkRemovedOptionModule [ "services" "kubernetes" "kubelet" "cadvisorPort" ] "")
(mkRemovedOptionModule [ "services" "kubernetes" "kubelet" "allowPrivileged" ] "")
(mkRenamedOptionModule [ "services" "kubernetes" "proxy" "address" ] ["services" "kubernetes" "proxy" "bindAddress"])
(mkRemovedOptionModule [ "services" "kubernetes" "verbose" ] "")
(mkRenamedOptionModule [ "services" "logstash" "address" ] [ "services" "logstash" "listenAddress" ])

View File

@ -62,50 +62,19 @@ in
'';
};
enable = mkEnableOption "Kubernetes addon manager";
kubeconfig = top.lib.mkKubeConfigOptions "Kubernetes addon manager";
bootstrapAddonsKubeconfig = top.lib.mkKubeConfigOptions "Kubernetes addon manager bootstrap";
enable = mkEnableOption "Whether to enable Kubernetes addon manager.";
};
###### implementation
config = let
addonManagerPaths = filter (a: a != null) [
cfg.kubeconfig.caFile
cfg.kubeconfig.certFile
cfg.kubeconfig.keyFile
];
bootstrapAddonsPaths = filter (a: a != null) [
cfg.bootstrapAddonsKubeconfig.caFile
cfg.bootstrapAddonsKubeconfig.certFile
cfg.bootstrapAddonsKubeconfig.keyFile
];
in mkIf cfg.enable {
config = mkIf cfg.enable {
environment.etc."kubernetes/addons".source = "${addons}/";
#TODO: Get rid of kube-addon-manager in the future for the following reasons
# - it is basically just a shell script wrapped around kubectl
# - it assumes that it is clusterAdmin or can gain clusterAdmin rights through serviceAccount
# - it is designed to be used with k8s system components only
# - it would be better with a more Nix-oriented way of managing addons
systemd.services.kube-addon-manager = {
description = "Kubernetes addon manager";
wantedBy = [ "kubernetes.target" ];
after = [ "kube-node-online.target" ];
before = [ "kubernetes.target" ];
environment = {
ADDON_PATH = "/etc/kubernetes/addons/";
KUBECONFIG = top.lib.mkKubeConfig "kube-addon-manager" cfg.kubeconfig;
};
path = with pkgs; [ gawk kubectl ];
preStart = ''
until kubectl -n kube-system get serviceaccounts/default 2>/dev/null; do
echo kubectl -n kube-system get serviceaccounts/default: exit status $?
sleep 2
done
'';
after = [ "kube-apiserver.service" ];
environment.ADDON_PATH = "/etc/kubernetes/addons/";
path = [ pkgs.gawk ];
serviceConfig = {
Slice = "kubernetes.slice";
ExecStart = "${top.package}/bin/kube-addons";
@ -115,52 +84,8 @@ in
Restart = "on-failure";
RestartSec = 10;
};
unitConfig.ConditionPathExists = addonManagerPaths;
};
systemd.paths.kube-addon-manager = {
wantedBy = [ "kube-addon-manager.service" ];
pathConfig = {
PathExists = addonManagerPaths;
PathChanged = addonManagerPaths;
};
};
services.kubernetes.addonManager.kubeconfig.server = mkDefault top.apiserverAddress;
systemd.services.kube-addon-manager-bootstrap = mkIf (top.apiserver.enable && top.addonManager.bootstrapAddons != {}) {
wantedBy = [ "kube-control-plane-online.target" ];
after = [ "kube-apiserver.service" ];
before = [ "kube-control-plane-online.target" ];
path = [ pkgs.kubectl ];
environment = {
KUBECONFIG = top.lib.mkKubeConfig "kube-addon-manager-bootstrap" cfg.bootstrapAddonsKubeconfig;
};
preStart = with pkgs; let
files = mapAttrsToList (n: v: writeText "${n}.json" (builtins.toJSON v))
cfg.bootstrapAddons;
in ''
until kubectl auth can-i '*' '*' -q 2>/dev/null; do
echo kubectl auth can-i '*' '*': exit status $?
sleep 2
done
kubectl apply -f ${concatStringsSep " \\\n -f " files}
'';
script = "echo Ok";
unitConfig.ConditionPathExists = bootstrapAddonsPaths;
};
systemd.paths.kube-addon-manager-bootstrap = {
wantedBy = [ "kube-addon-manager-bootstrap.service" ];
pathConfig = {
PathExists = bootstrapAddonsPaths;
PathChanged = bootstrapAddonsPaths;
};
};
services.kubernetes.addonManager.bootstrapAddonsKubeconfig.server = mkDefault top.apiserverAddress;
services.kubernetes.addonManager.bootstrapAddons = mkIf isRBACEnabled
(let
name = system:kube-addon-manager;

View File

@ -169,23 +169,6 @@ in {
};
};
kubernetes-dashboard-cm = {
apiVersion = "v1";
kind = "ConfigMap";
metadata = {
labels = {
k8s-app = "kubernetes-dashboard";
# Allows editing resource and makes sure it is created first.
"addonmanager.kubernetes.io/mode" = "EnsureExists";
};
name = "kubernetes-dashboard-settings";
namespace = "kube-system";
};
};
};
services.kubernetes.addonManager.bootstrapAddons = mkMerge [{
kubernetes-dashboard-sa = {
apiVersion = "v1";
kind = "ServiceAccount";
@ -227,9 +210,20 @@ in {
};
type = "Opaque";
};
}
(optionalAttrs cfg.rbac.enable
kubernetes-dashboard-cm = {
apiVersion = "v1";
kind = "ConfigMap";
metadata = {
labels = {
k8s-app = "kubernetes-dashboard";
# Allows editing resource and makes sure it is created first.
"addonmanager.kubernetes.io/mode" = "EnsureExists";
};
name = "kubernetes-dashboard-settings";
namespace = "kube-system";
};
};
} // (optionalAttrs cfg.rbac.enable
(let
subjects = [{
kind = "ServiceAccount";
@ -329,6 +323,6 @@ in {
inherit subjects;
};
})
))];
));
};
}

View File

@ -290,32 +290,11 @@ in
###### implementation
config = mkMerge [
(let
apiserverPaths = filter (a: a != null) [
cfg.clientCaFile
cfg.etcd.caFile
cfg.etcd.certFile
cfg.etcd.keyFile
cfg.kubeletClientCaFile
cfg.kubeletClientCertFile
cfg.kubeletClientKeyFile
cfg.serviceAccountKeyFile
cfg.tlsCertFile
cfg.tlsKeyFile
];
etcdPaths = filter (a: a != null) [
config.services.etcd.trustedCaFile
config.services.etcd.certFile
config.services.etcd.keyFile
];
in mkIf cfg.enable {
(mkIf cfg.enable {
systemd.services.kube-apiserver = {
description = "Kubernetes APIServer Service";
wantedBy = [ "kube-control-plane-online.target" ];
after = [ "certmgr.service" ];
before = [ "kube-control-plane-online.target" ];
wantedBy = [ "kubernetes.target" ];
after = [ "network.target" ];
serviceConfig = {
Slice = "kubernetes.slice";
ExecStart = ''${top.package}/bin/kube-apiserver \
@ -386,15 +365,6 @@ in
Restart = "on-failure";
RestartSec = 5;
};
unitConfig.ConditionPathExists = apiserverPaths;
};
systemd.paths.kube-apiserver = mkIf top.apiserver.enable {
wantedBy = [ "kube-apiserver.service" ];
pathConfig = {
PathExists = apiserverPaths;
PathChanged = apiserverPaths;
};
};
services.etcd = {
@ -408,18 +378,6 @@ in
initialAdvertisePeerUrls = mkDefault ["https://${top.masterAddress}:2380"];
};
systemd.services.etcd = {
unitConfig.ConditionPathExists = etcdPaths;
};
systemd.paths.etcd = {
wantedBy = [ "etcd.service" ];
pathConfig = {
PathExists = etcdPaths;
PathChanged = etcdPaths;
};
};
services.kubernetes.addonManager.bootstrapAddons = mkIf isRBACEnabled {
apiserver-kubelet-api-admin-crb = {

View File

@ -104,31 +104,11 @@ in
};
###### implementation
config = let
controllerManagerPaths = filter (a: a != null) [
cfg.kubeconfig.caFile
cfg.kubeconfig.certFile
cfg.kubeconfig.keyFile
cfg.rootCaFile
cfg.serviceAccountKeyFile
cfg.tlsCertFile
cfg.tlsKeyFile
];
in mkIf cfg.enable {
systemd.services.kube-controller-manager = rec {
config = mkIf cfg.enable {
systemd.services.kube-controller-manager = {
description = "Kubernetes Controller Manager Service";
wantedBy = [ "kube-control-plane-online.target" ];
wantedBy = [ "kubernetes.target" ];
after = [ "kube-apiserver.service" ];
before = [ "kube-control-plane-online.target" ];
environment.KUBECONFIG = top.lib.mkKubeConfig "kube-controller-manager" cfg.kubeconfig;
preStart = ''
until kubectl auth can-i get /api -q 2>/dev/null; do
echo kubectl auth can-i get /api: exit status $?
sleep 2
done
'';
serviceConfig = {
RestartSec = "30s";
Restart = "on-failure";
@ -140,7 +120,7 @@ in
"--cluster-cidr=${cfg.clusterCidr}"} \
${optionalString (cfg.featureGates != [])
"--feature-gates=${concatMapStringsSep "," (feature: "${feature}=true") cfg.featureGates}"} \
--kubeconfig=${environment.KUBECONFIG} \
--kubeconfig=${top.lib.mkKubeConfig "kube-controller-manager" cfg.kubeconfig} \
--leader-elect=${boolToString cfg.leaderElect} \
${optionalString (cfg.rootCaFile!=null)
"--root-ca-file=${cfg.rootCaFile}"} \
@ -161,16 +141,7 @@ in
User = "kubernetes";
Group = "kubernetes";
};
path = top.path ++ [ pkgs.kubectl ];
unitConfig.ConditionPathExists = controllerManagerPaths;
};
systemd.paths.kube-controller-manager = {
wantedBy = [ "kube-controller-manager.service" ];
pathConfig = {
PathExists = controllerManagerPaths;
PathChanged = controllerManagerPaths;
};
path = top.path;
};
services.kubernetes.pki.certs = with top.lib; {

View File

@ -256,29 +256,6 @@ in {
wantedBy = [ "multi-user.target" ];
};
systemd.targets.kube-control-plane-online = {
wantedBy = [ "kubernetes.target" ];
before = [ "kubernetes.target" ];
};
systemd.services.kube-control-plane-online = {
description = "Kubernetes control plane is online";
wantedBy = [ "kube-control-plane-online.target" ];
after = [ "kube-scheduler.service" "kube-controller-manager.service" ];
before = [ "kube-control-plane-online.target" ];
path = [ pkgs.curl ];
preStart = ''
until curl -Ssf ${cfg.apiserverAddress}/healthz do
echo curl -Ssf ${cfg.apiserverAddress}/healthz: exit status $?
sleep 3
done
'';
script = "echo Ok";
serviceConfig = {
TimeoutSec = "500";
};
};
systemd.tmpfiles.rules = [
"d /opt/cni/bin 0755 root root -"
"d /run/kubernetes 0755 kubernetes kubernetes -"
@ -302,8 +279,6 @@ in {
services.kubernetes.apiserverAddress = mkDefault ("https://${if cfg.apiserver.advertiseAddress != null
then cfg.apiserver.advertiseAddress
else "${cfg.masterAddress}:${toString cfg.apiserver.securePort}"}");
services.kubernetes.kubeconfig.server = mkDefault cfg.apiserverAddress;
})
];
}

View File

@ -14,36 +14,25 @@ let
buildInputs = [ pkgs.makeWrapper ];
} ''
mkdir -p $out
cp ${pkgs.kubernetes.src}/cluster/centos/node/bin/mk-docker-opts.sh $out/mk-docker-opts.sh
# bashInteractive needed for `compgen`
makeWrapper ${pkgs.bashInteractive}/bin/bash $out/mk-docker-opts --add-flags "$out/mk-docker-opts.sh"
makeWrapper ${pkgs.bashInteractive}/bin/bash $out/mk-docker-opts --add-flags "${pkgs.kubernetes}/bin/mk-docker-opts.sh"
'';
in
{
###### interface
options.services.kubernetes.flannel = {
enable = mkEnableOption "flannel networking";
kubeconfig = top.lib.mkKubeConfigOptions "Kubernetes flannel";
enable = mkEnableOption "enable flannel networking";
};
###### implementation
config = let
flannelPaths = filter (a: a != null) [
cfg.kubeconfig.caFile
cfg.kubeconfig.certFile
cfg.kubeconfig.keyFile
];
kubeconfig = top.lib.mkKubeConfig "flannel" cfg.kubeconfig;
in mkIf cfg.enable {
config = mkIf cfg.enable {
services.flannel = {
enable = mkDefault true;
network = mkDefault top.clusterCidr;
inherit storageBackend kubeconfig;
nodeName = top.kubelet.hostname;
inherit storageBackend;
nodeName = config.services.kubernetes.kubelet.hostname;
};
services.kubernetes.kubelet = {
@ -58,66 +47,24 @@ in
}];
};
systemd.services.mk-docker-opts = {
systemd.services."mk-docker-opts" = {
description = "Pre-Docker Actions";
wantedBy = [ "flannel.target" ];
before = [ "flannel.target" ];
path = with pkgs; [ gawk gnugrep ];
script = ''
${mkDockerOpts}/mk-docker-opts -d /run/flannel/docker
systemctl restart docker
'';
unitConfig.ConditionPathExists = [ "/run/flannel/subnet.env" ];
serviceConfig.Type = "oneshot";
};
systemd.paths.flannel-subnet-env = {
wantedBy = [ "mk-docker-opts.service" ];
systemd.paths."flannel-subnet-env" = {
wantedBy = [ "flannel.service" ];
pathConfig = {
PathExists = [ "/run/flannel/subnet.env" ];
PathChanged = [ "/run/flannel/subnet.env" ];
PathModified = "/run/flannel/subnet.env";
Unit = "mk-docker-opts.service";
};
};
systemd.targets.flannel = {
wantedBy = [ "kube-node-online.target" ];
before = [ "kube-node-online.target" ];
};
systemd.services.flannel = {
wantedBy = [ "flannel.target" ];
after = [ "kubelet.target" ];
before = [ "flannel.target" ];
path = with pkgs; [ iptables kubectl ];
environment.KUBECONFIG = kubeconfig;
preStart = let
args = [
"--selector=kubernetes.io/hostname=${top.kubelet.hostname}"
# flannel exits if node is not registered yet, before that there is no podCIDR
"--output=jsonpath={.items[0].spec.podCIDR}"
# if jsonpath cannot be resolved exit with status 1
"--allow-missing-template-keys=false"
];
in ''
until kubectl get nodes ${concatStringsSep " " args} 2>/dev/null; do
echo Waiting for ${top.kubelet.hostname} to be RegisteredNode
sleep 1
done
'';
unitConfig.ConditionPathExists = flannelPaths;
};
systemd.paths.flannel = {
wantedBy = [ "flannel.service" ];
pathConfig = {
PathExists = flannelPaths;
PathChanged = flannelPaths;
};
};
services.kubernetes.flannel.kubeconfig.server = mkDefault top.apiserverAddress;
systemd.services.docker = {
environment.DOCKER_OPTS = "-b none";
serviceConfig.EnvironmentFile = "-/run/flannel/docker";
@ -144,6 +91,7 @@ in
# give flannel some kubernetes rbac permissions if applicable
services.kubernetes.addonManager.bootstrapAddons = mkIf ((storageBackend == "kubernetes") && (elem "RBAC" top.apiserver.authorizationMode)) {
flannel-cr = {
apiVersion = "rbac.authorization.k8s.io/v1beta1";
kind = "ClusterRole";
@ -179,6 +127,7 @@ in
name = "flannel-client";
}];
};
};
};
}

View File

@ -61,12 +61,6 @@ in
type = str;
};
allowPrivileged = mkOption {
description = "Whether to allow Kubernetes containers to request privileged mode.";
default = false;
type = bool;
};
clusterDns = mkOption {
description = "Use alternative DNS.";
default = "10.1.0.1";
@ -234,28 +228,21 @@ in
###### implementation
config = mkMerge [
(let
kubeletPaths = filter (a: a != null) [
cfg.kubeconfig.caFile
cfg.kubeconfig.certFile
cfg.kubeconfig.keyFile
cfg.clientCaFile
cfg.tlsCertFile
cfg.tlsKeyFile
];
in mkIf cfg.enable {
(mkIf cfg.enable {
services.kubernetes.kubelet.seedDockerImages = [infraContainer];
systemd.services.kubelet = {
description = "Kubernetes Kubelet Service";
wantedBy = [ "kubelet.target" ];
after = [ "kube-control-plane-online.target" ];
before = [ "kubelet.target" ];
wantedBy = [ "kubernetes.target" ];
after = [ "network.target" "docker.service" "kube-apiserver.service" ];
path = with pkgs; [ gitMinimal openssh docker utillinux iproute ethtool thin-provisioning-tools iptables socat ] ++ top.path;
preStart = ''
rm -f /opt/cni/bin/* || true
${concatMapStrings (img: ''
echo "Seeding docker image: ${img}"
docker load <${img}
'') cfg.seedDockerImages}
rm /opt/cni/bin/* || true
${concatMapStrings (package: ''
echo "Linking cni package: ${package}"
ln -fs ${package}/bin/* /opt/cni/bin
@ -269,7 +256,6 @@ in
RestartSec = "1000ms";
ExecStart = ''${top.package}/bin/kubelet \
--address=${cfg.address} \
--allow-privileged=${boolToString cfg.allowPrivileged} \
--authentication-token-webhook \
--authentication-token-webhook-cache-ttl="10s" \
--authorization-mode=Webhook \
@ -308,56 +294,6 @@ in
'';
WorkingDirectory = top.dataDir;
};
unitConfig.ConditionPathExists = kubeletPaths;
};
systemd.paths.kubelet = {
wantedBy = [ "kubelet.service" ];
pathConfig = {
PathExists = kubeletPaths;
PathChanged = kubeletPaths;
};
};
systemd.services.docker.before = [ "kubelet.service" ];
systemd.services.docker-seed-images = {
wantedBy = [ "docker.service" ];
after = [ "docker.service" ];
before = [ "kubelet.service" ];
path = with pkgs; [ docker ];
preStart = ''
${concatMapStrings (img: ''
echo "Seeding docker image: ${img}"
docker load <${img}
'') cfg.seedDockerImages}
'';
script = "echo Ok";
serviceConfig.Type = "oneshot";
serviceConfig.RemainAfterExit = true;
serviceConfig.Slice = "kubernetes.slice";
};
systemd.services.kubelet-online = {
wantedBy = [ "kube-node-online.target" ];
after = [ "flannel.target" "kubelet.target" ];
before = [ "kube-node-online.target" ];
# it is complicated. flannel needs kubelet to run the pause container before
# it discusses the node CIDR with apiserver and afterwards configures and restarts
# dockerd. Until then prevent creating any pods because they have to be recreated anyway
# because the network of docker0 has been changed by flannel.
script = let
docker-env = "/run/flannel/docker";
flannel-date = "stat --print=%Y ${docker-env}";
docker-date = "systemctl show --property=ActiveEnterTimestamp --value docker";
in ''
until test -f ${docker-env} ; do sleep 1 ; done
while test `${flannel-date}` -gt `date +%s --date="$(${docker-date})"` ; do
sleep 1
done
'';
serviceConfig.Type = "oneshot";
serviceConfig.Slice = "kubernetes.slice";
};
# Always include cni plugins
@ -404,16 +340,5 @@ in
};
})
{
systemd.targets.kubelet = {
wantedBy = [ "kube-node-online.target" ];
before = [ "kube-node-online.target" ];
};
systemd.targets.kube-node-online = {
wantedBy = [ "kubernetes.target" ];
before = [ "kubernetes.target" ];
};
}
];
}

View File

@ -27,11 +27,12 @@ let
certmgrAPITokenPath = "${top.secretsPath}/${cfsslAPITokenBaseName}";
cfsslAPITokenLength = 32;
clusterAdminKubeconfig = with cfg.certs.clusterAdmin; {
server = top.apiserverAddress;
certFile = cert;
keyFile = key;
};
clusterAdminKubeconfig = with cfg.certs.clusterAdmin;
top.lib.mkKubeConfig "cluster-admin" {
server = top.apiserverAddress;
certFile = cert;
keyFile = key;
};
remote = with config.services; "https://${kubernetes.masterAddress}:${toString cfssl.port}";
in
@ -118,11 +119,6 @@ in
cfsslCertPathPrefix = "${config.services.cfssl.dataDir}/cfssl";
cfsslCert = "${cfsslCertPathPrefix}.pem";
cfsslKey = "${cfsslCertPathPrefix}-key.pem";
certmgrPaths = [
top.caFile
certmgrAPITokenPath
];
in
{
@ -172,40 +168,13 @@ in
chown cfssl "${cfsslAPITokenPath}" && chmod 400 "${cfsslAPITokenPath}"
'')]);
systemd.targets.cfssl-online = {
wantedBy = [ "network-online.target" ];
after = [ "cfssl.service" "network-online.target" "cfssl-online.service" ];
};
systemd.services.cfssl-online = {
description = "Wait for ${remote} to be reachable.";
wantedBy = [ "cfssl-online.target" ];
before = [ "cfssl-online.target" ];
path = [ pkgs.curl ];
preStart = ''
until curl --fail-early -fskd '{}' ${remote}/api/v1/cfssl/info -o /dev/null; do
echo curl ${remote}/api/v1/cfssl/info: exit status $?
sleep 2
done
'';
script = "echo Ok";
serviceConfig = {
TimeoutSec = "300";
};
};
systemd.services.kube-certmgr-bootstrap = {
description = "Kubernetes certmgr bootstrapper";
wantedBy = [ "cfssl-online.target" ];
after = [ "cfssl-online.target" ];
before = [ "certmgr.service" ];
path = with pkgs; [ curl cfssl ];
wantedBy = [ "certmgr.service" ];
after = [ "cfssl.target" ];
script = concatStringsSep "\n" [''
set -e
mkdir -p $(dirname ${certmgrAPITokenPath})
mkdir -p $(dirname ${top.caFile})
# If there's a cfssl (cert issuer) running locally, then don't rely on user to
# manually paste it in place. Just symlink.
# otherwise, create the target file, ready for users to insert the token
@ -217,18 +186,15 @@ in
fi
''
(optionalString (cfg.pkiTrustOnBootstrap) ''
if [ ! -s "${top.caFile}" ]; then
until test -s ${top.caFile}.json; do
sleep 2
curl --fail-early -fskd '{}' ${remote}/api/v1/cfssl/info -o ${top.caFile}.json
done
cfssljson -f ${top.caFile}.json -stdout >${top.caFile}
rm ${top.caFile}.json
if [ ! -f "${top.caFile}" ] || [ $(cat "${top.caFile}" | wc -c) -lt 1 ]; then
${pkgs.curl}/bin/curl --fail-early -f -kd '{}' ${remote}/api/v1/cfssl/info | \
${pkgs.cfssl}/bin/cfssljson -stdout >${top.caFile}
fi
'')
];
serviceConfig = {
TimeoutSec = "500";
RestartSec = "10s";
Restart = "on-failure";
};
};
@ -264,28 +230,35 @@ in
mapAttrs mkSpec cfg.certs;
};
systemd.services.certmgr = {
wantedBy = [ "cfssl-online.target" ];
after = [ "cfssl-online.target" "kube-certmgr-bootstrap.service" ];
preStart = ''
while ! test -s ${certmgrAPITokenPath} ; do
sleep 1
echo Waiting for ${certmgrAPITokenPath}
done
'';
unitConfig.ConditionPathExists = certmgrPaths;
};
#TODO: Get rid of kube-addon-manager in the future for the following reasons
# - it is basically just a shell script wrapped around kubectl
# - it assumes that it is clusterAdmin or can gain clusterAdmin rights through serviceAccount
# - it is designed to be used with k8s system components only
# - it would be better with a more Nix-oriented way of managing addons
systemd.services.kube-addon-manager = mkIf top.addonManager.enable (mkMerge [{
environment.KUBECONFIG = with cfg.certs.addonManager;
top.lib.mkKubeConfig "addon-manager" {
server = top.apiserverAddress;
certFile = cert;
keyFile = key;
};
}
systemd.paths.certmgr = {
wantedBy = [ "certmgr.service" ];
pathConfig = {
PathExists = certmgrPaths;
PathChanged = certmgrPaths;
};
};
(optionalAttrs (top.addonManager.bootstrapAddons != {}) {
serviceConfig.PermissionsStartOnly = true;
preStart = with pkgs;
let
files = mapAttrsToList (n: v: writeText "${n}.json" (builtins.toJSON v))
top.addonManager.bootstrapAddons;
in
''
export KUBECONFIG=${clusterAdminKubeconfig}
${kubectl}/bin/kubectl apply -f ${concatStringsSep " \\\n -f " files}
'';
})]);
environment.etc.${cfg.etcClusterAdminKubeconfig}.source = mkIf (cfg.etcClusterAdminKubeconfig != null)
(top.lib.mkKubeConfig "cluster-admin" clusterAdminKubeconfig);
environment.etc.${cfg.etcClusterAdminKubeconfig}.source = mkIf (!isNull cfg.etcClusterAdminKubeconfig)
clusterAdminKubeconfig;
environment.systemPackages = mkIf (top.kubelet.enable || top.proxy.enable) [
(pkgs.writeScriptBin "nixos-kubernetes-node-join" ''
@ -311,22 +284,38 @@ in
exit 1
fi
do_restart=$(test -s ${certmgrAPITokenPath} && echo -n y || echo -n n)
echo $token > ${certmgrAPITokenPath}
chmod 600 ${certmgrAPITokenPath}
if [ y = $do_restart ]; then
echo "Restarting certmgr..." >&1
systemctl restart certmgr
fi
echo "Restarting certmgr..." >&1
systemctl restart certmgr
echo "Node joined succesfully" >&1
echo "Waiting for certs to appear..." >&1
${optionalString top.kubelet.enable ''
while [ ! -f ${cfg.certs.kubelet.cert} ]; do sleep 1; done
echo "Restarting kubelet..." >&1
systemctl restart kubelet
''}
${optionalString top.proxy.enable ''
while [ ! -f ${cfg.certs.kubeProxyClient.cert} ]; do sleep 1; done
echo "Restarting kube-proxy..." >&1
systemctl restart kube-proxy
''}
${optionalString top.flannel.enable ''
while [ ! -f ${cfg.certs.flannelClient.cert} ]; do sleep 1; done
echo "Restarting flannel..." >&1
systemctl restart flannel
''}
echo "Node joined succesfully"
'')];
# isolate etcd on loopback at the master node
# easyCerts doesn't support multimaster clusters anyway atm.
services.etcd = mkIf top.apiserver.enable (with cfg.certs.etcd; {
services.etcd = with cfg.certs.etcd; {
listenClientUrls = ["https://127.0.0.1:2379"];
listenPeerUrls = ["https://127.0.0.1:2380"];
advertiseClientUrls = ["https://etcd.local:2379"];
@ -335,11 +324,19 @@ in
certFile = mkDefault cert;
keyFile = mkDefault key;
trustedCaFile = mkDefault caCert;
});
};
networking.extraHosts = mkIf (config.services.etcd.enable) ''
127.0.0.1 etcd.${top.addons.dns.clusterDomain} etcd.local
'';
services.flannel = with cfg.certs.flannelClient; {
kubeconfig = top.lib.mkKubeConfig "flannel" {
server = top.apiserverAddress;
certFile = cert;
keyFile = key;
};
};
services.kubernetes = {
apiserver = mkIf top.apiserver.enable (with cfg.certs.apiServer; {
@ -359,13 +356,6 @@ in
proxyClientCertFile = mkDefault cfg.certs.apiserverProxyClient.cert;
proxyClientKeyFile = mkDefault cfg.certs.apiserverProxyClient.key;
});
addonManager = mkIf top.addonManager.enable {
kubeconfig = with cfg.certs.addonManager; {
certFile = mkDefault cert;
keyFile = mkDefault key;
};
bootstrapAddonsKubeconfig = clusterAdminKubeconfig;
};
controllerManager = mkIf top.controllerManager.enable {
serviceAccountKeyFile = mkDefault cfg.certs.serviceAccount.key;
rootCaFile = cfg.certs.controllerManagerClient.caCert;
@ -374,12 +364,6 @@ in
keyFile = mkDefault key;
};
};
flannel = mkIf top.flannel.enable {
kubeconfig = with cfg.certs.flannelClient; {
certFile = cert;
keyFile = key;
};
};
scheduler = mkIf top.scheduler.enable {
kubeconfig = with cfg.certs.schedulerClient; {
certFile = mkDefault cert;

View File

@ -45,28 +45,12 @@ in
};
###### implementation
config = let
proxyPaths = filter (a: a != null) [
cfg.kubeconfig.caFile
cfg.kubeconfig.certFile
cfg.kubeconfig.keyFile
];
in mkIf cfg.enable {
systemd.services.kube-proxy = rec {
config = mkIf cfg.enable {
systemd.services.kube-proxy = {
description = "Kubernetes Proxy Service";
wantedBy = [ "kube-node-online.target" ];
after = [ "kubelet-online.service" ];
before = [ "kube-node-online.target" ];
environment.KUBECONFIG = top.lib.mkKubeConfig "kube-proxy" cfg.kubeconfig;
path = with pkgs; [ iptables conntrack_tools kubectl ];
preStart = ''
until kubectl auth can-i get nodes/${top.kubelet.hostname} -q 2>/dev/null; do
echo kubectl auth can-i get nodes/${top.kubelet.hostname}: exit status $?
sleep 2
done
'';
wantedBy = [ "kubernetes.target" ];
after = [ "kube-apiserver.service" ];
path = with pkgs; [ iptables conntrack_tools ];
serviceConfig = {
Slice = "kubernetes.slice";
ExecStart = ''${top.package}/bin/kube-proxy \
@ -75,7 +59,7 @@ in
"--cluster-cidr=${top.clusterCidr}"} \
${optionalString (cfg.featureGates != [])
"--feature-gates=${concatMapStringsSep "," (feature: "${feature}=true") cfg.featureGates}"} \
--kubeconfig=${environment.KUBECONFIG} \
--kubeconfig=${top.lib.mkKubeConfig "kube-proxy" cfg.kubeconfig} \
${optionalString (cfg.verbosity != null) "--v=${toString cfg.verbosity}"} \
${cfg.extraOpts}
'';
@ -83,15 +67,6 @@ in
Restart = "on-failure";
RestartSec = 5;
};
unitConfig.ConditionPathExists = proxyPaths;
};
systemd.paths.kube-proxy = {
wantedBy = [ "kube-proxy.service" ];
pathConfig = {
PathExists = proxyPaths;
PathChanged = proxyPaths;
};
};
services.kubernetes.pki.certs = {

View File

@ -56,35 +56,18 @@ in
};
###### implementation
config = let
schedulerPaths = filter (a: a != null) [
cfg.kubeconfig.caFile
cfg.kubeconfig.certFile
cfg.kubeconfig.keyFile
];
in mkIf cfg.enable {
systemd.services.kube-scheduler = rec {
config = mkIf cfg.enable {
systemd.services.kube-scheduler = {
description = "Kubernetes Scheduler Service";
wantedBy = [ "kube-control-plane-online.target" ];
wantedBy = [ "kubernetes.target" ];
after = [ "kube-apiserver.service" ];
before = [ "kube-control-plane-online.target" ];
environment.KUBECONFIG = top.lib.mkKubeConfig "kube-scheduler" cfg.kubeconfig;
path = [ pkgs.kubectl ];
preStart = ''
until kubectl auth can-i get /api -q 2>/dev/null; do
echo kubectl auth can-i get /api: exit status $?
sleep 2
done
'';
serviceConfig = {
Slice = "kubernetes.slice";
ExecStart = ''${top.package}/bin/kube-scheduler \
--address=${cfg.address} \
${optionalString (cfg.featureGates != [])
"--feature-gates=${concatMapStringsSep "," (feature: "${feature}=true") cfg.featureGates}"} \
--kubeconfig=${environment.KUBECONFIG} \
--kubeconfig=${top.lib.mkKubeConfig "kube-scheduler" cfg.kubeconfig} \
--leader-elect=${boolToString cfg.leaderElect} \
--port=${toString cfg.port} \
${optionalString (cfg.verbosity != null) "--v=${toString cfg.verbosity}"} \
@ -96,15 +79,6 @@ in
Restart = "on-failure";
RestartSec = 5;
};
unitConfig.ConditionPathExists = schedulerPaths;
};
systemd.paths.kube-scheduler = {
wantedBy = [ "kube-scheduler.service" ];
pathConfig = {
PathExists = schedulerPaths;
PathChanged = schedulerPaths;
};
};
services.kubernetes.pki.certs = {

View File

@ -81,6 +81,10 @@ in
default = "";
description = ''
Defines the mapping from system users to database users.
The general form is:
map-name system-username database-username
'';
};
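# An illustrative identMap value (map and user names hypothetical), assuming
# the usual services.postgresql option path:
#   services.postgresql.identMap = ''
#     superuser-map root postgres
#   '';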

View File

@ -59,7 +59,7 @@
<para>
The latest stable version of Emacs 25 using the
<link
xlink:href="http://www.gtk.org">GTK+ 2</link>
xlink:href="http://www.gtk.org">GTK 2</link>
widget toolkit.
</para>
</listitem>
@ -321,7 +321,7 @@ https://nixos.org/nixpkgs/manual/#sec-modify-via-packageOverrides
<para>
If you want, you can tweak the Emacs package itself from your
<filename>emacs.nix</filename>. For example, if you want to have a
GTK+3-based Emacs instead of the default GTK+2-based binary and remove the
GTK 3-based Emacs instead of the default GTK 2-based binary and remove the
automatically generated <filename>emacs.desktop</filename> (useful if you
only use <command>emacsclient</command>), you can change your file
<filename>emacs.nix</filename> in this way:
@ -349,7 +349,7 @@ in [...]
<para>
After building this file as shown in <xref linkend="ex-emacsNix" />, you
will get an GTK3-based Emacs binary pre-loaded with your favorite packages.
will get a GTK 3-based Emacs binary pre-loaded with your favorite packages.
</para>
</section>
</section>

View File

@ -121,6 +121,7 @@ in {
systemd.tmpfiles.rules = [
"d '${cfg.dataDir}' 0700 zookeeper - - -"
"Z '${cfg.dataDir}' 0700 zookeeper - - -"
];
systemd.services.zookeeper = {

View File

@ -3,18 +3,18 @@
with lib;
let
ceph = pkgs.ceph;
cfg = config.services.ceph;
# function that translates "camelCaseOptions" to "camel case options", credits to tilpner in #nixos@freenode
translateOption = replaceStrings upperChars (map (s: " ${s}") lowerChars);
generateDaemonList = (daemonType: daemons: extraServiceConfig:
mkMerge (
map (daemon:
{ "ceph-${daemonType}-${daemon}" = generateServiceFile daemonType daemon cfg.global.clusterName ceph extraServiceConfig; }
) daemons
)
);
generateServiceFile = (daemonType: daemonId: clusterName: ceph: extraServiceConfig: {
expandCamelCase = replaceStrings upperChars (map (s: " ${s}") lowerChars);
expandCamelCaseAttrs = mapAttrs' (name: value: nameValuePair (expandCamelCase name) value);
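# e.g. expandCamelCase "monInitialMembers" yields "mon initial members",
# the spelling ceph.conf expects for option names.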
makeServices = (daemonType: daemonIds: extraServiceConfig:
mkMerge (map (daemonId:
{ "ceph-${daemonType}-${daemonId}" = makeService daemonType daemonId cfg.global.clusterName pkgs.ceph extraServiceConfig; })
daemonIds));
makeService = (daemonType: daemonId: clusterName: ceph: extraServiceConfig: {
enable = true;
description = "Ceph ${builtins.replaceStrings lowerChars upperChars daemonType} daemon ${daemonId}";
after = [ "network-online.target" "time-sync.target" ] ++ optional (daemonType == "osd") "ceph-mon.target";
@ -34,23 +34,29 @@ let
Restart = "on-failure";
StartLimitBurst = "5";
StartLimitInterval = "30min";
ExecStart = "${ceph.out}/bin/${if daemonType == "rgw" then "radosgw" else "ceph-${daemonType}"} -f --cluster ${clusterName} --id ${if daemonType == "rgw" then "client.${daemonId}" else daemonId} --setuser ceph --setgroup ceph";
ExecStart = ''${ceph.out}/bin/${if daemonType == "rgw" then "radosgw" else "ceph-${daemonType}"} \
-f --cluster ${clusterName} --id ${daemonId} --setuser ceph \
--setgroup ${if daemonType == "osd" then "disk" else "ceph"}'';
} // extraServiceConfig
// optionalAttrs (daemonType == "osd") { ExecStartPre = "${ceph.out}/libexec/ceph/ceph-osd-prestart.sh --id ${daemonId} --cluster ${clusterName}"; };
} // optionalAttrs (builtins.elem daemonType [ "mds" "mon" "rgw" "mgr" ]) { preStart = ''
// optionalAttrs (daemonType == "osd") { ExecStartPre = ''${ceph.lib}/libexec/ceph/ceph-osd-prestart.sh \
--id ${daemonId} --cluster ${clusterName}''; };
} // optionalAttrs (builtins.elem daemonType [ "mds" "mon" "rgw" "mgr" ]) {
preStart = ''
daemonPath="/var/lib/ceph/${if daemonType == "rgw" then "radosgw" else daemonType}/${clusterName}-${daemonId}"
if [ ! -d ''$daemonPath ]; then
mkdir -m 755 -p ''$daemonPath
chown -R ceph:ceph ''$daemonPath
if [ ! -d $daemonPath ]; then
mkdir -m 755 -p $daemonPath
chown -R ceph:ceph $daemonPath
fi
'';
} // optionalAttrs (daemonType == "osd") { path = [ pkgs.getopt ]; }
);
generateTargetFile = (daemonType:
makeTarget = (daemonType:
{
"ceph-${daemonType}" = {
description = "Ceph target allowing to start/stop all ceph-${daemonType} services at once";
partOf = [ "ceph.target" ];
wantedBy = [ "ceph.target" ];
before = [ "ceph.target" ];
};
}
@ -82,6 +88,14 @@ in
'';
};
mgrModulePath = mkOption {
type = types.path;
default = "${pkgs.ceph.lib}/lib/ceph/mgr";
description = ''
Path at which to find ceph-mgr modules.
'';
};
monInitialMembers = mkOption {
type = with types; nullOr commas;
default = null;
@ -157,6 +171,27 @@ in
A comma-separated list of subnets that will be used as cluster networks in the cluster.
'';
};
rgwMimeTypesFile = mkOption {
type = with types; nullOr path;
default = "${pkgs.mime-types}/etc/mime.types";
description = ''
Path to mime types used by radosgw.
'';
};
};
extraConfig = mkOption {
type = with types; attrsOf str;
default = {};
example = ''
{
"ms bind ipv6" = "true";
};
'';
description = ''
Extra configuration to add to the global section. Use for setting values that are common for all daemons in the cluster.
'';
};
mgr = {
@ -216,6 +251,7 @@ in
to the id part in ceph i.e. [ "name1" ] would result in osd.name1
'';
};
extraConfig = mkOption {
type = with types; attrsOf str;
default = {
@ -296,9 +332,6 @@ in
{ assertion = cfg.global.fsid != "";
message = "fsid has to be set to a valid uuid for the cluster to function";
}
{ assertion = cfg.mgr.enable == true;
message = "ceph 12.x requires atleast 1 MGR daemon enabled for the cluster to function";
}
{ assertion = cfg.mon.enable == true -> cfg.mon.daemons != [];
message = "have to set id of atleast one MON if you're going to enable Monitor";
}
@ -317,14 +350,12 @@ in
''Not setting up a list of members in monInitialMembers requires that you set the host variable for each mon daemon or else the cluster won't function'';
environment.etc."ceph/ceph.conf".text = let
# Translate camelCaseOptions to the expected camel case option for ceph.conf
translatedGlobalConfig = mapAttrs' (name: value: nameValuePair (translateOption name) value) cfg.global;
# Merge the extraConfig set for mgr daemons, as mgr don't have their own section
globalAndMgrConfig = translatedGlobalConfig // optionalAttrs cfg.mgr.enable cfg.mgr.extraConfig;
globalSection = expandCamelCaseAttrs (cfg.global // cfg.extraConfig // optionalAttrs cfg.mgr.enable cfg.mgr.extraConfig);
# Remove all name-value pairs with null values from the attribute set to avoid making empty sections in the ceph.conf
globalConfig = mapAttrs' (name: value: nameValuePair (translateOption name) value) (filterAttrs (name: value: value != null) globalAndMgrConfig);
globalSection' = filterAttrs (name: value: value != null) globalSection;
totalConfig = {
global = globalConfig;
global = globalSection';
} // optionalAttrs (cfg.mon.enable && cfg.mon.extraConfig != {}) { mon = cfg.mon.extraConfig; }
// optionalAttrs (cfg.mds.enable && cfg.mds.extraConfig != {}) { mds = cfg.mds.extraConfig; }
// optionalAttrs (cfg.osd.enable && cfg.osd.extraConfig != {}) { osd = cfg.osd.extraConfig; }
@ -336,8 +367,9 @@ in
name = "ceph";
uid = config.ids.uids.ceph;
description = "Ceph daemon user";
group = "ceph";
extraGroups = [ "disk" ];
};
users.groups = singleton {
name = "ceph";
gid = config.ids.gids.ceph;
@ -345,22 +377,26 @@ in
systemd.services = let
services = []
++ optional cfg.mon.enable (generateDaemonList "mon" cfg.mon.daemons { RestartSec = "10"; })
++ optional cfg.mds.enable (generateDaemonList "mds" cfg.mds.daemons { StartLimitBurst = "3"; })
++ optional cfg.osd.enable (generateDaemonList "osd" cfg.osd.daemons { StartLimitBurst = "30"; RestartSec = "20s"; })
++ optional cfg.rgw.enable (generateDaemonList "rgw" cfg.rgw.daemons { })
++ optional cfg.mgr.enable (generateDaemonList "mgr" cfg.mgr.daemons { StartLimitBurst = "3"; });
++ optional cfg.mon.enable (makeServices "mon" cfg.mon.daemons { RestartSec = "10"; })
++ optional cfg.mds.enable (makeServices "mds" cfg.mds.daemons { StartLimitBurst = "3"; })
++ optional cfg.osd.enable (makeServices "osd" cfg.osd.daemons { StartLimitBurst = "30";
RestartSec = "20s";
PrivateDevices = "no"; # osd needs disk access
})
++ optional cfg.rgw.enable (makeServices "rgw" cfg.rgw.daemons { })
++ optional cfg.mgr.enable (makeServices "mgr" cfg.mgr.daemons { StartLimitBurst = "3"; });
in
mkMerge services;
systemd.targets = let
targets = [
{ ceph = { description = "Ceph target allowing to start/stop all ceph service instances at once"; }; }
] ++ optional cfg.mon.enable (generateTargetFile "mon")
++ optional cfg.mds.enable (generateTargetFile "mds")
++ optional cfg.osd.enable (generateTargetFile "osd")
++ optional cfg.rgw.enable (generateTargetFile "rgw")
++ optional cfg.mgr.enable (generateTargetFile "mgr");
{ "ceph" = { description = "Ceph target allowing to start/stop all ceph service instances at once";
wantedBy = [ "multi-user.target" ]; }; }
] ++ optional cfg.mon.enable (makeTarget "mon")
++ optional cfg.mds.enable (makeTarget "mds")
++ optional cfg.osd.enable (makeTarget "osd")
++ optional cfg.rgw.enable (makeTarget "rgw")
++ optional cfg.mgr.enable (makeTarget "mgr");
in
mkMerge targets;
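
The expandCamelCase helper introduced above is what maps NixOS-style option names onto the space-separated keys ceph.conf expects. A small illustrative snippet (evaluated against nixpkgs' lib; not part of the module itself):

let
  inherit (import <nixpkgs> {}) lib;
  expandCamelCase = lib.replaceStrings lib.upperChars (map (s: " ${s}") lib.lowerChars);
in
  expandCamelCase "monInitialMembers"  # evaluates to "mon initial members"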


@ -67,7 +67,7 @@ in {
wantedBy = [ "multi-user.target" ];
serviceConfig = {
DynamicUser = true;
ExecStart = "${cfg.package}/bin/darkhttpd ${args}";
ExecStart = "${pkgs.darkhttpd}/bin/darkhttpd ${args}";
AmbientCapabilities = lib.mkIf (cfg.port < 1024) [ "CAP_NET_BIND_SERVICE" ];
Restart = "on-failure";
RestartSec = "2s";


@ -31,7 +31,7 @@ in
e.efl e.enlightenment
e.terminology e.econnman
pkgs.xorg.xauth # used by kdesu
pkgs.gtk2 # To get GTK+'s themes.
pkgs.gtk2 # To get GTK's themes.
pkgs.tango-icon-theme
pkgs.gnome2.gnome_icon_theme
@ -48,7 +48,7 @@ in
services.xserver.desktopManager.session = [
{ name = "Enlightenment";
start = ''
# Set GTK_DATA_PREFIX so that GTK+ can find the themes
# Set GTK_DATA_PREFIX so that GTK can find the themes
export GTK_DATA_PREFIX=${config.system.path}
# find theme engines
export GTK_PATH=${config.system.path}/lib/gtk-3.0:${config.system.path}/lib/gtk-2.0


@ -48,7 +48,7 @@ in
name = "mate";
bgSupport = true;
start = ''
# Set GTK_DATA_PREFIX so that GTK+ can find the themes
# Set GTK_DATA_PREFIX so that GTK can find the themes
export GTK_DATA_PREFIX=${config.system.path}
# Find theme engines


@ -48,7 +48,7 @@ in
config = mkIf cfg.enable {
environment.systemPackages = with pkgs.xfce // pkgs; [
# Get GTK+ themes and gtk-update-icon-cache
# Get GTK themes and gtk-update-icon-cache
gtk2.out
# Supplies some abstract icons such as:
@ -107,10 +107,10 @@ in
start = ''
${cfg.extraSessionCommands}
# Set GTK_PATH so that GTK+ can find the theme engines.
# Set GTK_PATH so that GTK can find the theme engines.
export GTK_PATH="${config.system.path}/lib/gtk-2.0:${config.system.path}/lib/gtk-3.0"
# Set GTK_DATA_PREFIX so that GTK+ can find the Xfce themes.
# Set GTK_DATA_PREFIX so that GTK can find the Xfce themes.
export GTK_DATA_PREFIX=${config.system.path}
${pkgs.runtimeShell} ${pkgs.xfce.xinitrc} &


@ -114,10 +114,10 @@ in
name = "xfce4-14";
bgSupport = true;
start = ''
# Set GTK_PATH so that GTK+ can find the theme engines.
# Set GTK_PATH so that GTK can find the theme engines.
export GTK_PATH="${config.system.path}/lib/gtk-2.0:${config.system.path}/lib/gtk-3.0"
# Set GTK_DATA_PREFIX so that GTK+ can find the Xfce themes.
# Set GTK_DATA_PREFIX so that GTK can find the Xfce themes.
export GTK_DATA_PREFIX=${config.system.path}
${pkgs.runtimeShell} ${pkgs.xfce4-14.xinitrc} &


@ -25,6 +25,9 @@ in
{ assertion = cfg.hvm;
message = "Paravirtualized EC2 instances are no longer supported.";
}
{ assertion = cfg.efi -> cfg.hvm;
message = "EC2 instances using EFI must be HVM instances.";
}
];
boot.growPartition = cfg.hvm;
@ -35,6 +38,11 @@ in
autoResize = true;
};
fileSystems."/boot" = mkIf cfg.efi {
device = "/dev/disk/by-label/ESP";
fsType = "vfat";
};
boot.extraModulePackages = [
config.boot.kernelPackages.ena
];
@ -50,8 +58,10 @@ in
# Generate a GRUB menu. Amazon's pv-grub uses this to boot our kernel/initrd.
boot.loader.grub.version = if cfg.hvm then 2 else 1;
boot.loader.grub.device = if cfg.hvm then "/dev/xvda" else "nodev";
boot.loader.grub.device = if (cfg.hvm && !cfg.efi) then "/dev/xvda" else "nodev";
boot.loader.grub.extraPerEntryConfig = mkIf (!cfg.hvm) "root (hd0)";
boot.loader.grub.efiSupport = cfg.efi;
boot.loader.grub.efiInstallAsRemovable = cfg.efi;
boot.loader.timeout = 0;
boot.initrd.network.enable = true;
@ -137,7 +147,7 @@ in
networking.timeServers = [ "169.254.169.123" ];
# udisks has become too bloated to have in a headless system
# (e.g. it depends on GTK+).
# (e.g. it depends on GTK).
services.udisks2.enable = false;
};
}


@ -1,4 +1,4 @@
{ config, lib, ... }:
{ config, lib, pkgs, ... }:
{
options = {
ec2 = {
@ -9,6 +9,13 @@
Whether the EC2 instance is a HVM instance.
'';
};
efi = lib.mkOption {
default = pkgs.stdenv.hostPlatform.isAarch64;
internal = true;
description = ''
Whether the EC2 instance is using EFI.
'';
};
};
};
}
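
Taken together with the amazon-image changes above, an aarch64 EC2 image now effectively evaluates to something like the following (a sketch of the resulting option values, not literal module output):

{
  ec2.hvm = true;   # the new assertion enforces efi -> hvm
  ec2.efi = true;   # default on aarch64
  fileSystems."/boot" = {
    device = "/dev/disk/by-label/ESP";
    fsType = "vfat";
  };
  boot.loader.grub.device = "nodev";  # EFI install, no MBR device
  boot.loader.grub.efiSupport = true;
  boot.loader.grub.efiInstallAsRemovable = true;
}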


@ -0,0 +1,125 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.railcar;
generateUnit = name: containerConfig:
let
container = pkgs.ociTools.buildContainer {
args = [
(pkgs.writeShellScript "run.sh" containerConfig.cmd).outPath
];
};
in
nameValuePair "railcar-${name}" {
enable = true;
wantedBy = [ "multi-user.target" ];
serviceConfig = {
ExecStart = ''
${cfg.package}/bin/railcar -r ${cfg.stateDir} run ${name} -b ${container}
'';
Type = containerConfig.runType;
};
};
mount = with types; (submodule {
options = {
type = mkOption {
type = string;
default = "none";
description = ''
The type of the filesystem to be mounted.
Linux: filesystem types supported by the kernel as listed in
`/proc/filesystems` (e.g., "minix", "ext2", "ext3", "jfs", "xfs",
"reiserfs", "msdos", "proc", "nfs", "iso9660"). For bind mounts
(when options include either bind or rbind), the type is a dummy,
often "none" (not listed in /proc/filesystems).
'';
};
source = mkOption {
type = string;
description = "Source for the in-container mount";
};
options = mkOption {
type = loaOf (string);
default = [ "bind" ];
description = ''
Mount options of the filesystem to be used.
Supported options are listed in the mount(8) man page. Note that
both filesystem-independent and filesystem-specific options
are listed.
'';
};
};
});
in
{
options.services.railcar = {
enable = mkEnableOption "railcar";
containers = mkOption {
default = {};
description = "Declarative container configuration";
type = with types; loaOf (submodule ({ name, config, ... }: {
options = {
cmd = mkOption {
type = types.string;
description = "Command or script to run inside the container";
};
mounts = mkOption {
type = with types; attrsOf mount;
default = {};
description = ''
A set of mounts inside the container.
The defaults have been chosen for simple bind mounts, meaning
that you only need to provide the "source" parameter.
'';
example = ''
{ "/data" = { source = "/var/lib/data"; }; }
'';
};
runType = mkOption {
type = types.string;
default = "oneshot";
description = "The systemd service run type";
};
os = mkOption {
type = types.string;
default = "linux";
description = "OS type of the container";
};
arch = mkOption {
type = types.string;
default = "x86_64";
description = "Computer architecture type of the container";
};
};
}));
};
stateDir = mkOption {
type = types.path;
default = ''/var/railcar'';
description = "Railcar persistent state directory";
};
package = mkOption {
type = types.package;
default = pkgs.railcar;
description = "Railcar package to use";
};
};
config = mkIf cfg.enable {
systemd.services = flip mapAttrs' cfg.containers (name: containerConfig:
generateUnit name containerConfig
);
};
}
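
A hypothetical configuration.nix fragment exercising the new module (the container name, command and paths are made up for illustration):

{
  services.railcar = {
    enable = true;
    containers.hello = {
      # Wrapped into a writeShellScript and handed to ociTools.buildContainer.
      cmd = "echo 'hello from railcar'";
      # Bind mount; only "source" is needed thanks to the mount defaults.
      mounts."/data" = { source = "/var/lib/hello-data"; };
    };
  };
}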


@ -196,6 +196,22 @@ in rec {
);
# A disk image that can be imported to Amazon EC2 and registered as an AMI
amazonImage = forMatchingSystems [ "x86_64-linux" "aarch64-linux" ] (system:
with import nixpkgs { inherit system; };
hydraJob ((import lib/eval-config.nix {
inherit system;
modules =
[ versionModule
./maintainers/scripts/ec2/amazon-image.nix
];
}).config.system.build.amazonImage)
);
# Ensure that all packages used by the minimal NixOS config end up in the channel.
dummy = forAllSystems (system: pkgs.runCommand "dummy"
{ toplevel = (import lib/eval-config.nix {


@ -1,4 +1,4 @@
import ./make-test.nix ({pkgs, ...}: {
import ./make-test.nix ({pkgs, lib, ...}: {
name = "All-in-one-basic-ceph-cluster";
meta = with pkgs.stdenv.lib.maintainers; {
maintainers = [ lejonet ];
@ -7,6 +7,7 @@ import ./make-test.nix ({pkgs, ...}: {
nodes = {
aio = { pkgs, ... }: {
virtualisation = {
memorySize = 1536;
emptyDiskImages = [ 20480 20480 ];
vlans = [ 1 ];
};
@ -24,9 +25,6 @@ import ./make-test.nix ({pkgs, ...}: {
ceph
xfsprogs
];
nixpkgs.config.packageOverrides = super: {
ceph = super.ceph.override({ nss = super.nss; libxfs = super.libxfs; libaio = super.libaio; jemalloc = super.jemalloc; });
};
boot.kernelModules = [ "xfs" ];
@ -51,6 +49,9 @@ import ./make-test.nix ({pkgs, ...}: {
enable = true;
daemons = [ "0" "1" ];
};
# So that we don't have to battle systemd when bootstrapping
systemd.targets.ceph.wantedBy = lib.mkForce [];
};
};
@ -61,24 +62,26 @@ import ./make-test.nix ({pkgs, ...}: {
# Create the ceph-related directories
$aio->mustSucceed(
"mkdir -p /var/lib/ceph/mgr/ceph-aio/",
"mkdir -p /var/lib/ceph/mon/ceph-aio/",
"mkdir -p /var/lib/ceph/osd/ceph-{0..1}/",
"chown ceph:ceph -R /var/lib/ceph/"
"mkdir -p /var/lib/ceph/mgr/ceph-aio",
"mkdir -p /var/lib/ceph/mon/ceph-aio",
"mkdir -p /var/lib/ceph/osd/ceph-{0,1}",
"chown ceph:ceph -R /var/lib/ceph/",
"mkdir -p /etc/ceph",
"chown ceph:ceph -R /etc/ceph"
);
# Bootstrap ceph-mon daemon
$aio->mustSucceed(
"mkdir -p /var/lib/ceph/bootstrap-osd && chown ceph:ceph /var/lib/ceph/bootstrap-osd",
"sudo -u ceph ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'",
"ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'",
"ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring",
"monmaptool --create --add aio 192.168.1.1 --fsid 066ae264-2a5d-4729-8001-6ad265f50b03 /tmp/monmap",
"sudo -u ceph ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'",
"sudo -u ceph ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring",
"monmaptool --create --add aio 192.168.1.1 --fsid 066ae264-2a5d-4729-8001-6ad265f50b03 /tmp/monmap",
"sudo -u ceph ceph-mon --mkfs -i aio --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring",
"touch /var/lib/ceph/mon/ceph-aio/done",
"sudo -u ceph touch /var/lib/ceph/mon/ceph-aio/done",
"systemctl start ceph-mon-aio"
);
$aio->waitForUnit("ceph-mon-aio");
$aio->mustSucceed("ceph mon enable-msgr2");
# Can't check ceph status until a mon is up
$aio->succeed("ceph -s | grep 'mon: 1 daemons'");
@ -90,6 +93,7 @@ import ./make-test.nix ({pkgs, ...}: {
);
$aio->waitForUnit("ceph-mgr-aio");
$aio->waitUntilSucceeds("ceph -s | grep 'quorum aio'");
$aio->waitUntilSucceeds("ceph -s | grep 'mgr: aio(active,'");
# Bootstrap both OSDs
$aio->mustSucceed(
@ -112,8 +116,8 @@ import ./make-test.nix ({pkgs, ...}: {
"systemctl start ceph-osd-1"
);
$aio->waitUntilSucceeds("ceph osd stat | grep '2 osds: 2 up, 2 in'");
$aio->waitUntilSucceeds("ceph -s | grep 'mgr: aio(active)'");
$aio->waitUntilSucceeds("ceph osd stat | grep -e '2 osds: 2 up[^,]*, 2 in'");
$aio->waitUntilSucceeds("ceph -s | grep 'mgr: aio(active,'");
$aio->waitUntilSucceeds("ceph -s | grep 'HEALTH_OK'");
$aio->mustSucceed(
@ -135,5 +139,23 @@ import ./make-test.nix ({pkgs, ...}: {
"ceph osd pool ls | grep 'aio-test'",
"ceph osd pool delete aio-other-test aio-other-test --yes-i-really-really-mean-it"
);
# As we disable the target in the config, we still want to test that it works as intended
$aio->mustSucceed(
"systemctl stop ceph-osd-0",
"systemctl stop ceph-osd-1",
"systemctl stop ceph-mgr-aio",
"systemctl stop ceph-mon-aio"
);
$aio->succeed("systemctl start ceph.target");
$aio->waitForUnit("ceph-mon-aio");
$aio->waitForUnit("ceph-mgr-aio");
$aio->waitForUnit("ceph-osd-0");
$aio->waitForUnit("ceph-osd-1");
$aio->succeed("ceph -s | grep 'mon: 1 daemons'");
$aio->waitUntilSucceeds("ceph -s | grep 'quorum aio'");
$aio->waitUntilSucceeds("ceph osd stat | grep -e '2 osds: 2 up[^,]*, 2 in'");
$aio->waitUntilSucceeds("ceph -s | grep 'mgr: aio(active,'");
$aio->waitUntilSucceeds("ceph -s | grep 'HEALTH_OK'");
'';
})


@ -30,10 +30,7 @@ let
{ config, pkgs, lib, nodes, ... }:
mkMerge [
{
boot = {
postBootCommands = "rm -fr /var/lib/kubernetes/secrets /tmp/shared/*";
kernel.sysctl = { "fs.inotify.max_user_instances" = 256; };
};
boot.postBootCommands = "rm -fr /var/lib/kubernetes/secrets /tmp/shared/*";
virtualisation.memorySize = mkDefault 1536;
virtualisation.diskSize = mkDefault 4096;
networking = {


@ -77,7 +77,6 @@ let
singleNodeTest = {
test = ''
# prepare machine1 for test
$machine1->waitForUnit("kubernetes.target");
$machine1->waitUntilSucceeds("kubectl get node machine1.${domain} | grep -w Ready");
$machine1->waitUntilSucceeds("docker load < ${redisImage}");
$machine1->waitUntilSucceeds("kubectl create -f ${redisPod}");
@ -103,8 +102,6 @@ let
# Node token exchange
$machine1->waitUntilSucceeds("cp -f /var/lib/cfssl/apitoken.secret /tmp/shared/apitoken.secret");
$machine2->waitUntilSucceeds("cat /tmp/shared/apitoken.secret | nixos-kubernetes-node-join");
$machine1->waitForUnit("kubernetes.target");
$machine2->waitForUnit("kubernetes.target");
# prepare machines for test
$machine1->waitUntilSucceeds("kubectl get node machine2.${domain} | grep -w Ready");


@ -94,8 +94,6 @@ let
singlenode = base // {
test = ''
$machine1->waitForUnit("kubernetes.target");
$machine1->waitUntilSucceeds("kubectl get node machine1.my.zyx | grep -w Ready");
$machine1->waitUntilSucceeds("docker load < ${kubectlImage}");
@ -118,8 +116,6 @@ let
# Node token exchange
$machine1->waitUntilSucceeds("cp -f /var/lib/cfssl/apitoken.secret /tmp/shared/apitoken.secret");
$machine2->waitUntilSucceeds("cat /tmp/shared/apitoken.secret | nixos-kubernetes-node-join");
$machine1->waitForUnit("kubernetes.target");
$machine2->waitForUnit("kubernetes.target");
$machine1->waitUntilSucceeds("kubectl get node machine2.my.zyx | grep -w Ready");


@ -12,9 +12,9 @@ let
# Only allow the demo data to be used (only if it's unfreeRedistributable).
unfreePredicate = pkg: with pkgs.lib; let
allowDrvPredicates = [ "quake3-demo" "quake3-pointrelease" ];
allowPackageNames = [ "quake3-demodata" "quake3-pointrelease" ];
allowLicenses = [ pkgs.lib.licenses.unfreeRedistributable ];
in any (flip hasPrefix pkg.name) allowDrvPredicates &&
in elem pkg.pname allowPackageNames &&
elem (pkg.meta.license or null) allowLicenses;
in


@ -74,7 +74,7 @@ python3Packages.buildPythonApplication rec {
'';
meta = with stdenv.lib; {
description = "A modern audio book player for Linux using GTK+ 3";
description = "A modern audio book player for Linux using GTK 3";
homepage = https://cozy.geigi.de/;
maintainers = [ maintainers.makefu ];
license = licenses.gpl3;


@ -27,7 +27,7 @@ stdenv.mkDerivation rec {
description = "PulseAudio Volume Control";
longDescription = ''
PulseAudio Volume Control (pavucontrol) provides a GTK+
PulseAudio Volume Control (pavucontrol) provides a GTK
graphical user interface to connect to a PulseAudio server and
easily control the volume of all clients, sinks, etc.
'';


@ -46,11 +46,11 @@ python3.pkgs.buildPythonApplication rec {
preFixup = stdenv.lib.optionalString (kakasi != null) "gappsWrapperArgs+=(--prefix PATH : ${kakasi}/bin)";
meta = with stdenv.lib; {
description = "GTK+-based audio player written in Python, using the Mutagen tagging library";
description = "GTK-based audio player written in Python, using the Mutagen tagging library";
license = licenses.gpl2Plus;
longDescription = ''
Quod Libet is a GTK+-based audio player written in Python, using
Quod Libet is a GTK-based audio player written in Python, using
the Mutagen tagging library. It's designed around the idea that
you know how to organize your music better than we do. It lets
you make playlists based on regular expressions (don't worry,


@ -41,7 +41,7 @@ in buildPythonApplication rec {
longDescription = ''
Sonata is an elegant client for the Music Player Daemon.
Written in Python and using the GTK+ 3 widget set, its features
Written in Python and using the GTK 3 widget set, its features
include:
- Expanded and collapsed views


@ -18,7 +18,7 @@ stdenv.mkDerivation rec {
];
meta = with stdenv.lib; {
description = "A notepad clone for GTK+ 2.0";
description = "A notepad clone for GTK 2.0";
homepage = http://tarot.freeshell.org/leafpad;
platforms = platforms.linux;
maintainers = [ maintainers.flosse ];


@ -11,13 +11,13 @@ let
archive_fmt = if system == "x86_64-darwin" then "zip" else "tar.gz";
sha256 = {
"x86_64-linux" = "1np7j6xv0bxmq7762ml0h6pib8963s2vdmyvigi0fz2iik92zv8z";
"x86_64-darwin" = "0f87cv1sbcvix9f7hhw0vsypp0bf627xdyh4bmh0g41k17ls8wvc";
"x86_64-linux" = "1iz36nhkg78346g5407df6jv4d1ydb22hhgs8hiaxql3hq5z7x3q";
"x86_64-darwin" = "1iijk0kx90rax39iradbbafyvd3vwnzsgvyb3s13asy42pbhhkky";
}.${system};
in
callPackage ./generic.nix rec {
version = "1.37.1";
version = "1.38.0";
pname = "vscode";
executableName = "code" + lib.optionalString isInsiders "-insiders";


@ -11,13 +11,13 @@ let
archive_fmt = if system == "x86_64-darwin" then "zip" else "tar.gz";
sha256 = {
"x86_64-linux" = "0j6188gm66bwffyg0vn3ak8242vs2vb2cw92b9wfkiml6sfg555n";
"x86_64-darwin" = "0iblg0hn6jdds7d2hzp0icb5yh6hhw3fd5g4iim64ibi7lpwj2cj";
"x86_64-linux" = "09rq5jx7aicwp3qqi5pcv6bmyyp1rm5cfa96hvy3f4grhq1fi132";
"x86_64-darwin" = "1y1lbb3q5myaz7jg21x5sl0in8wr46brqj9zyrg3f16zahsagzr4";
}.${system};
in
callPackage ./generic.nix rec {
version = "1.37.1";
version = "1.38.0";
pname = "vscodium";
executableName = "codium";


@ -26,7 +26,7 @@ stdenv.mkDerivation {
++ (with perlPackages; [ perl XMLParser ]);
meta = {
description = "Simple GTK+2 color selector";
description = "Simple GTK 2 color selector";
homepage = http://gcolor2.sourceforge.net/;
license = stdenv.lib.licenses.gpl2Plus;
maintainers = with stdenv.lib.maintainers; [ notthemessiah ];


@ -46,11 +46,11 @@ stdenv.mkDerivation rec {
enableParallelBuilding = true;
meta = with stdenv.lib; {
description = "Lightweight GTK+ based image viewer";
description = "Lightweight GTK based image viewer";
longDescription =
''
Geeqie is a lightweight GTK+ based image viewer for Unix like
Geeqie is a lightweight GTK based image viewer for Unix like
operating systems. It features: EXIF, IPTC and XMP metadata
browsing and editing interoperability; easy integration with other
software; geeqie works on files and directories, there is no need to


@ -19,7 +19,7 @@ python27Packages.buildPythonApplication rec {
MComix is a user-friendly, customizable image viewer. It is specifically
designed to handle comic books, but also serves as a generic viewer.
It reads images in ZIP, RAR, 7Zip or tar archives as well as plain image
files. It is written in Python and uses GTK+ through the PyGTK bindings,
files. It is written in Python and uses GTK through the PyGTK bindings,
and runs on both Linux and Windows.
MComix is a fork of the Comix project, and aims to add bug fixes and


@ -22,9 +22,9 @@ stdenv.mkDerivation rec {
];
meta = {
description = "A simple GTK+1/2 painting program";
description = "A simple GTK painting program";
longDescription = ''
mtPaint is a simple GTK+1/2 painting program designed for
mtPaint is a simple GTK painting program designed for
creating icons and pixel based artwork. It can edit indexed palette
or 24 bit RGB images and offers basic painting and palette manipulation
tools. It also has several other more powerful features such as channels,


@ -6,11 +6,11 @@
mkDerivation rec {
pname = "calibre";
version = "3.47.0";
version = "3.47.1";
src = fetchurl {
url = "https://download.calibre-ebook.com/${version}/${pname}-${version}.tar.xz";
sha256 = "0mjj47w9pa7ihycialijrfq2qk107dcxwcwriz3b2mg4lixlawy4";
sha256 = "17lz6rawlv268vv8i5kj59rswsipq3c14066adaz1paw54zr62dk";
};
patches = [
@ -105,7 +105,7 @@ mkDerivation rec {
disallowedReferences = [ podofo.dev ];
calibreDesktopItem = makeDesktopItem {
name = "calibre";
name = "calibre-gui";
desktopName = "calibre";
exec = "@out@/bin/calibre --detach %F";
genericName = "E-book library management";
@ -151,7 +151,7 @@ mkDerivation rec {
};
ebookEditDesktopItem = makeDesktopItem {
name = "calibre-edit-ebook";
name = "calibre-edit-book";
desktopName = "Edit E-book";
genericName = "E-book Editor";
comment = "Edit e-books";


@ -13,7 +13,7 @@ stdenv.mkDerivation rec {
buildInputs = [ intltool gtk2 xdotool hicolor-icon-theme ];
meta = with stdenv.lib; {
description = "Lightweight GTK+ Clipboard Manager";
description = "Lightweight GTK Clipboard Manager";
homepage = "http://clipit.rspwn.com";
license = licenses.gpl3;
platforms = platforms.linux;


@ -28,11 +28,11 @@ stdenv.mkDerivation rec {
meta = with stdenv.lib; {
homepage = https://packages.debian.org/wheezy/epdfview;
description = "A lightweight PDF document viewer using Poppler and GTK+";
description = "A lightweight PDF document viewer using Poppler and GTK";
longDescription = ''
ePDFView is a free lightweight PDF document viewer using Poppler and
GTK+ libraries. The aim of ePDFView is to make a simple PDF document
viewer, in the lines of Evince but without using the Gnome libraries.
ePDFView is a free lightweight PDF document viewer using Poppler and
GTK libraries. The aim of ePDFView is to make a simple PDF document
viewer, along the lines of Evince, but without using the Gnome libraries.
'';
license = licenses.gpl2;
maintainers = [ maintainers.astsmtl ];


@ -49,13 +49,13 @@ stdenv.mkDerivation rec {
meta = with stdenv.lib; {
homepage = https://fontmanager.github.io/;
description = "Simple font management for GTK+ desktop environments";
description = "Simple font management for GTK desktop environments";
longDescription = ''
Font Manager is intended to provide a way for average users to
easily manage desktop fonts, without having to resort to command
line tools or editing configuration files by hand. While designed
primarily with the Gnome Desktop Environment in mind, it should
work well with other Gtk+ desktop environments.
work well with other GTK desktop environments.
Font Manager is NOT a professional-grade font management solution.
'';


@ -33,7 +33,7 @@ stdenv.mkDerivation rec {
homepage = https://pwmt.org/projects/girara/;
description = "User interface library";
longDescription = ''
girara is a library that implements a GTK+ based VIM-like user interface
girara is a library that implements a GTK based VIM-like user interface
that focuses on simplicity and minimalism.
'';
license = licenses.zlib;


@ -39,7 +39,7 @@ stdenv.mkDerivation rec {
meta = {
description = "A graphical frontend for libgksu";
longDescription = ''
GKSu is a library that provides a Gtk+ frontend to su and sudo.
GKSu is a library that provides a GTK frontend to su and sudo.
It supports login shells and preserving environment when acting as
a su frontend. It is useful to menu items or other graphical
programs that need to ask a user's password to run another program


@ -30,7 +30,7 @@ stdenv.mkDerivation rec {
description = "Gnome Completion-Run Utility";
longDescription = ''
A simple program which provides a "run program" window, featuring a bash-like TAB completion.
It uses GTK+ interface.
It uses a GTK interface.
Also, supports CTRL-R / CTRL-S / "!" for searching through history.
Running commands in a terminal with CTRL-Enter. URL handlers.
'';


@ -16,7 +16,7 @@ stdenv.mkDerivation rec {
hardeningDisable = [ "format" ];
meta = {
description = "GTK+-based audio CD player/ripper";
description = "GTK-based audio CD player/ripper";
homepage = http://nostatic.org/grip;
license = stdenv.lib.licenses.gpl2;


@ -15,9 +15,9 @@ stdenv.mkDerivation rec {
preferLocalBuild = true;
meta = with stdenv.lib; {
description = "A font selection program for X11 using the GTK2 toolkit";
description = "A font selection program for X11 using the GTK 2 toolkit";
longDescription = ''
Font selection tool similar to xfontsel implemented using GTK+ 2.
Font selection tool similar to xfontsel implemented using GTK 2.
Trivial, but useful nonetheless.
'';
homepage = http://gtk2fontsel.sourceforge.net/;


@ -0,0 +1,53 @@
{ lib
, mkDerivation
, makeDesktopItem
, fetchFromGitLab
, qmake
# qt
, qtbase
, qtwebsockets
}:
let
desktopItem = makeDesktopItem {
type = "Application";
name = "Michabo";
desktopName = "Michabo";
exec = "Michabo";
};
in mkDerivation rec {
pname = "michabo";
version = "0.1";
src = fetchFromGitLab {
domain = "git.pleroma.social";
owner = "kaniini";
repo = "michabo";
rev = "v${version}";
sha256 = "0pl4ymdb36r0kwlclfjjp6b1qml3fm9ql7ag5inprny5y8vcjpzn";
};
nativeBuildInputs = [
qmake
];
buildInputs = [
qtbase
qtwebsockets
];
qmakeFlags = [ "michabo.pro" "DESTDIR=${placeholder "out"}/bin" ];
postInstall = ''
ln -s ${desktopItem}/share $out/share
'';
meta = with lib; {
description = "A native desktop app for Pleroma and Mastodon servers";
homepage = "https://git.pleroma.social/kaniini/michabo";
license = licenses.gpl3;
maintainers = with maintainers; [ fgaz ];
platforms = platforms.all;
};
}


@ -64,7 +64,7 @@ buildPythonApplication rec {
access to the graphical desktop via speech and refreshable braille.
It works with applications and toolkits that support the Assistive
Technology Service Provider Interface (AT-SPI). That includes the GNOME
Gtk+ toolkit, the Java platform's Swing toolkit, LibreOffice, Gecko, and
GTK toolkit, the Java platform's Swing toolkit, LibreOffice, Gecko, and
WebKitGtk. AT-SPI support for the KDE Qt toolkit is being pursued.
Needs `services.gnome3.at-spi2-core.enable = true;` in `configuration.nix`.


@ -21,7 +21,7 @@ stdenv.mkDerivation rec {
meta = with stdenv.lib; {
homepage = https://blog.lxde.org/category/pcmanfm/;
license = licenses.gpl2Plus;
description = "File manager with GTK+ interface";
description = "File manager with GTK interface";
maintainers = [ maintainers.ttuegel ];
platforms = platforms.linux;
};


@ -23,7 +23,7 @@ stdenv.mkDerivation rec {
meta = with stdenv.lib; {
homepage = https://pcman.ptt.cc;
license = licenses.gpl2;
description = "Telnet BBS browser with GTK+ interface";
description = "Telnet BBS browser with GTK interface";
maintainers = [ maintainers.sifmelcara ];
platforms = platforms.linux;
};


@ -27,10 +27,7 @@ stdenv.mkDerivation {
'';
meta = with stdenv.lib; {
description = "Simple wrapper around the VTE terminal emulator widget for GTK+";
longDescription = ''
Simple wrapper around the VTE terminal emulator widget for GTK+
'';
description = "Simple wrapper around the VTE terminal emulator widget for GTK";
homepage = https://github.com/esmil/stupidterm;
license = licenses.lgpl3Plus;
maintainers = [ maintainers.etu ];


@ -1,8 +1,8 @@
{ lib, haskellPackages, fetchFromGitHub }:
let
version = "1.6.0";
sha256 = "1yq7lbqg759i3hyxcskx3924b7xmw6i4ny6n8yq80k4hikw2k6mf";
version = "1.6.1";
sha256 = "047gvpq52pif9sfb4qcfdiwz50x3wlnjvsnnjzypm1qlwyl2rbz1";
in (haskellPackages.mkDerivation {
pname = "taskell";


@ -21,7 +21,7 @@ in symlinkJoin {
description = "A highly customizable and functional PDF viewer";
longDescription = ''
Zathura is a highly customizable and functional PDF viewer based on the
poppler rendering library and the gtk+ toolkit. The idea behind zathura
poppler rendering library and the GTK toolkit. The idea behind zathura
is an application that provides a minimalistic and space saving interface
as well as an easy usage that mainly focuses on keyboard interaction.
'';


@ -22,7 +22,7 @@ stdenv.mkDerivation rec {
];
meta = with stdenv.lib; {
description = "Lightweight WebKitGTK+ web browser";
description = "Lightweight WebKitGTK web browser";
homepage = https://www.midori-browser.org/;
license = with licenses; [ lgpl21Plus ];
platforms = with platforms; linux;


@ -21,9 +21,9 @@ stdenv.mkDerivation rec {
installFlags = [ "PREFIX=$(out)" ];
meta = with stdenv.lib; {
description = "A simple web browser based on WebKit/GTK+";
description = "A simple web browser based on WebKit/GTK";
longDescription = ''
Surf is a simple web browser based on WebKit/GTK+. It is able to display
Surf is a simple web browser based on WebKit/GTK. It is able to display
websites and follow links. It supports the XEmbed protocol which makes it
possible to embed it in another application. Furthermore, one can point
surf to another URI by setting its XProperties.


@ -17,11 +17,11 @@ let
vivaldiName = if isSnapshot then "vivaldi-snapshot" else "vivaldi";
in stdenv.mkDerivation rec {
pname = "vivaldi";
version = "2.7.1628.30-1";
version = "2.7.1628.33-1";
src = fetchurl {
url = "https://downloads.vivaldi.com/${branch}/vivaldi-${branch}_${version}_amd64.deb";
sha256 = "1lz8adwiwll8g246s5pa0ipfraph51s9f4lcfysdrp1s3s1qhw8x";
sha256 = "1km5ccxqyd5xgmzm42zca670jf7wd4j7c726fhyj4wjni71zar34";
};
unpackPhase = ''


@ -15,13 +15,13 @@ with lib;
stdenv.mkDerivation rec {
pname = "kubernetes";
version = "1.14.3";
version = "1.15.3";
src = fetchFromGitHub {
owner = "kubernetes";
repo = "kubernetes";
rev = "v${version}";
sha256 = "1r31ssf8bdbz8fdsprhkc34jqhz5rcs3ixlf0mbjcbq0xr7y651z";
sha256 = "0vamr7m8i5svmvb0z01cngv3sffdfjj0bky2zalm7cfnapib8vz1";
};
buildInputs = [ removeReferencesTo makeWrapper which go rsync go-bindata ];
@ -29,7 +29,10 @@ stdenv.mkDerivation rec {
outputs = ["out" "man" "pause"];
postPatch = ''
substituteInPlace "hack/lib/golang.sh" --replace "_cgo" ""
# go env breaks the sandbox
substituteInPlace "hack/lib/golang.sh" \
--replace 'echo "$(go env GOHOSTOS)/$(go env GOHOSTARCH)"' 'echo "${go.GOOS}/${go.GOARCH}"'
substituteInPlace "hack/update-generated-docs.sh" --replace "make" "make SHELL=${stdenv.shell}"
# hack/update-munge-docs.sh only performs some tests on the documentation.
# They broke building k8s; disabled for now.
@ -52,13 +55,12 @@ stdenv.mkDerivation rec {
cp build/pause/pause "$pause/bin/pause"
cp -R docs/man/man1 "$man/share/man"
cp cluster/addons/addon-manager/namespace.yaml $out/share
cp cluster/addons/addon-manager/kube-addons.sh $out/bin/kube-addons
patchShebangs $out/bin/kube-addons
substituteInPlace $out/bin/kube-addons \
--replace /opt/namespace.yaml $out/share/namespace.yaml
wrapProgram $out/bin/kube-addons --set "KUBECTL_BIN" "$out/bin/kubectl"
cp ${./mk-docker-opts.sh} $out/bin/mk-docker-opts.sh
$out/bin/kubectl completion bash > $out/share/bash-completion/completions/kubectl
$out/bin/kubectl completion zsh > $out/share/zsh/site-functions/_kubectl
'';


@ -0,0 +1,113 @@
#!/usr/bin/env bash
# Copyright 2014 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Generate Docker daemon options based on flannel env file.
# exit on any error
set -e
usage() {
echo "$0 [-f FLANNEL-ENV-FILE] [-d DOCKER-ENV-FILE] [-i] [-c] [-m] [-k COMBINED-KEY]
Generate Docker daemon options based on flannel env file
OPTIONS:
-f Path to flannel env file. Defaults to /run/flannel/subnet.env
-d Path to Docker env file to write to. Defaults to /run/docker_opts.env
-i Output each Docker option as individual var. e.g. DOCKER_OPT_MTU=1500
-c Output combined Docker options into DOCKER_OPTS var
-k Set the combined options key to this value (default DOCKER_OPTS=)
-m Do not output --ip-masq (useful for older Docker version)
" >/dev/stderr
exit 1
}
flannel_env="/run/flannel/subnet.env"
docker_env="/run/docker_opts.env"
combined_opts_key="DOCKER_OPTS"
indiv_opts=false
combined_opts=false
ipmasq=true
val=""
while getopts "f:d:icmk:" opt; do
case $opt in
f)
flannel_env=$OPTARG
;;
d)
docker_env=$OPTARG
;;
i)
indiv_opts=true
;;
c)
combined_opts=true
;;
m)
ipmasq=false
;;
k)
combined_opts_key=$OPTARG
;;
\?)
usage
;;
esac
done
if [[ $indiv_opts = false ]] && [[ $combined_opts = false ]]; then
indiv_opts=true
combined_opts=true
fi
if [[ -f "${flannel_env}" ]]; then
source "${flannel_env}"
fi
if [[ -n "$FLANNEL_SUBNET" ]]; then
# shellcheck disable=SC2034 # Variable name referenced in OPT_LOOP below
DOCKER_OPT_BIP="--bip=$FLANNEL_SUBNET"
fi
if [[ -n "$FLANNEL_MTU" ]]; then
# shellcheck disable=SC2034 # Variable name referenced in OPT_LOOP below
DOCKER_OPT_MTU="--mtu=$FLANNEL_MTU"
fi
if [[ "$FLANNEL_IPMASQ" = true ]] && [[ $ipmasq = true ]]; then
# shellcheck disable=SC2034 # Variable name referenced in OPT_LOOP below
DOCKER_OPT_IPMASQ="--ip-masq=false"
fi
eval docker_opts="\$${combined_opts_key}"
docker_opts+=" "
echo -n "" >"${docker_env}"
# OPT_LOOP
for opt in $(compgen -v DOCKER_OPT_); do
eval val=\$"${opt}"
if [[ "$indiv_opts" = true ]]; then
echo "$opt=\"$val\"" >>"${docker_env}"
fi
docker_opts+="$val "
done
if [[ "$combined_opts" = true ]]; then
echo "${combined_opts_key}=\"${docker_opts}\"" >>"${docker_env}"
fi
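
The script is installed into the kubernetes output above (see the installPhase hunk). A hypothetical NixOS fragment wiring it into the Docker daemon on a flannel node might look like this; the unit name and env-file path are assumptions, not part of this commit:

{ pkgs, ... }: {
  systemd.services.docker.serviceConfig = {
    # The leading "-" makes a missing env file non-fatal on first boot.
    EnvironmentFile = "-/run/docker_opts.env";
    ExecStartPre = "${pkgs.kubernetes}/bin/mk-docker-opts.sh -d /run/docker_opts.env -c";
  };
}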


@ -97,8 +97,8 @@ in rec {
terraform_0_11-full = terraform_0_11.full;
terraform_0_12 = pluggable (generic {
version = "0.12.7";
sha256 = "09zsak1a9z2mk88vb6xs9jaxfpazhs0p7x68mw62c9mm13m8kq02";
version = "0.12.8";
sha256 = "1qlhbn6xj2nd8gwr6aiyjsb62qmj4j9jnxab006xgdr1avvl2p67";
patches = [ ./provider-path.patch ];
passthru = { inherit plugins; };
});


@ -28,7 +28,7 @@ stdenv.mkDerivation rec {
'';
meta = {
description = "Native Gtk+ Twitter client for the Linux desktop";
description = "Native GTK Twitter client for the Linux desktop";
longDescription = "Corebird is a modern, easy and fun Twitter client.";
homepage = https://corebird.baedert.org/;
license = stdenv.lib.licenses.gpl3;


@ -62,7 +62,7 @@ stdenv.mkDerivation {
enableParallelBuilding = true;
meta = with stdenv.lib; {
description = "Modern Jabber/XMPP Client using GTK+/Vala";
description = "Modern Jabber/XMPP Client using GTK/Vala";
homepage = https://github.com/dino/dino;
license = licenses.gpl3;
platforms = platforms.linux;


@ -27,10 +27,10 @@ in {
pname = "discord-canary";
binaryName = "DiscordCanary";
desktopName = "Discord Canary";
version = "0.0.95";
version = "0.0.96";
src = fetchurl {
url = "https://dl-canary.discordapp.net/apps/linux/0.0.95/discord-canary-0.0.95.tar.gz";
sha256 = "06qhm73kc88pq0lgbi7qjy4gx9ighkmx128fdm1dpzfv62fjdasw";
url = "https://dl-canary.discordapp.net/apps/linux/0.0.96/discord-canary-0.0.96.tar.gz";
sha256 = "1fxyh9v5xglwbgr5sidn0cv70qpzcd2q240wsv87k3nawhvfcwsp";
};
};
}.${branch}


@ -2,7 +2,7 @@
, gnome2, gtk3, atk, at-spi2-atk, cairo, pango, gdk-pixbuf, glib, freetype, fontconfig
, dbus, libX11, xorg, libXi, libXcursor, libXdamage, libXrandr, libXcomposite
, libXext, libXfixes, libXrender, libXtst, libXScrnSaver, nss, nspr, alsaLib
, cups, expat, udev, libnotify, libuuid
, cups, expat, udev, libnotify, libuuid, at-spi2-core
# Unfortunately this also overwrites the UI language (not just the spell
# checking language!):
, hunspellDicts, spellcheckerLanguage ? null # E.g. "de_DE"
@ -25,6 +25,7 @@ let
alsaLib
atk
at-spi2-atk
at-spi2-core
cairo
cups
dbus
@ -57,11 +58,11 @@ let
in stdenv.mkDerivation rec {
pname = "signal-desktop";
version = "1.26.2";
version = "1.27.1";
src = fetchurl {
url = "https://updates.signal.org/desktop/apt/pool/main/s/signal-desktop/signal-desktop_${version}_amd64.deb";
sha256 = "08qx7k82x6ybqi3lln6ixzmdz4sr8yz8vfx0y408b85wjfc7ncjk";
sha256 = "16fg60c5r7zcjs8ya6jk33l5kz8m21y9a1si3i0a2dvyaclz4a3q";
};
phases = [ "unpackPhase" "installPhase" ];


@ -6,21 +6,21 @@
, guileSupport ? true, guile
, luaSupport ? true, lua5
, perlSupport ? true, perl
, pythonSupport ? true, pythonPackages
, pythonSupport ? true, python3Packages
, rubySupport ? true, ruby
, tclSupport ? true, tcl
, extraBuildInputs ? []
}:
let
inherit (pythonPackages) python;
inherit (python3Packages) python;
plugins = [
{ name = "perl"; enabled = perlSupport; cmakeFlag = "ENABLE_PERL"; buildInputs = [ perl ]; }
{ name = "tcl"; enabled = tclSupport; cmakeFlag = "ENABLE_TCL"; buildInputs = [ tcl ]; }
{ name = "ruby"; enabled = rubySupport; cmakeFlag = "ENABLE_RUBY"; buildInputs = [ ruby ]; }
{ name = "guile"; enabled = guileSupport; cmakeFlag = "ENABLE_GUILE"; buildInputs = [ guile ]; }
{ name = "lua"; enabled = luaSupport; cmakeFlag = "ENABLE_LUA"; buildInputs = [ lua5 ]; }
{ name = "python"; enabled = pythonSupport; cmakeFlag = "ENABLE_PYTHON"; buildInputs = [ python ]; }
{ name = "python"; enabled = pythonSupport; cmakeFlag = "ENABLE_PYTHON3"; buildInputs = [ python ]; }
];
enabledPlugins = builtins.filter (p: p.enabled) plugins;


@ -1,17 +1,13 @@
{ callPackage, luaPackages, pythonPackages }:
{ callPackage, luaPackages }:
{
weechat-xmpp = callPackage ./weechat-xmpp {
inherit (pythonPackages) pydns;
};
weechat-matrix-bridge = callPackage ./weechat-matrix-bridge {
inherit (luaPackages) cjson luaffi;
};
wee-slack = callPackage ./wee-slack {
inherit pythonPackages;
};
wee-slack = callPackage ./wee-slack { };
weechat-autosort = callPackage ./weechat-autosort { };
weechat-otr = callPackage ./weechat-otr { };
}


@ -1,4 +1,4 @@
{ stdenv, substituteAll, buildEnv, fetchFromGitHub, pythonPackages }:
{ stdenv, substituteAll, buildEnv, fetchFromGitHub, python3Packages }:
stdenv.mkDerivation rec {
pname = "wee-slack";
@ -16,8 +16,8 @@ stdenv.mkDerivation rec {
src = ./libpath.patch;
env = "${buildEnv {
name = "wee-slack-env";
paths = with pythonPackages; [ websocket_client six ];
}}/${pythonPackages.python.sitePackages}";
paths = with python3Packages; [ websocket_client six ];
}}/${python3Packages.python.sitePackages}";
})
];


@ -0,0 +1,64 @@
{ stdenv, substituteAll, buildEnv, fetchgit, fetchFromGitHub, python3Packages, gmp }:
let
# pure-python-otr (potr) requires an older version of pycrypto, which is
# not compatible with pycryptodome. Therefore, the latest patched version
# of pycrypto will be fetched from the Debian project.
# https://security-tracker.debian.org/tracker/source-package/python-crypto
pycrypto = python3Packages.buildPythonPackage rec {
pname = "pycrypto";
version = "2.6.1-10";
src = fetchgit {
url = "https://salsa.debian.org/sramacher/python-crypto.git";
rev = "debian/${version}";
sha256 = "10rgq8bmjfpiqqa1g1p1hh7pxlxs7x0nawvk6zip0pd6x2vsr661";
};
buildInputs = [ gmp ];
preConfigure = ''
sed -i 's,/usr/include,/no-such-dir,' configure
sed -i "s!,'/usr/include/'!!" setup.py
'';
};
potr = python3Packages.potr.overridePythonAttrs (oldAttrs: {
propagatedBuildInputs = [ pycrypto ];
});
in stdenv.mkDerivation rec {
pname = "weechat-otr";
version = "1.9.2";
src = fetchFromGitHub {
repo = pname;
owner = "mmb";
rev = "v${version}";
sha256 = "1lngv98y6883vk8z2628cl4d5y8jxy39w8245gjdvshl8g18k5s2";
};
patches = [
(substituteAll {
src = ./libpath.patch;
env = "${buildEnv {
name = "weechat-otr-env";
paths = [ potr pycrypto ];
}}/${python3Packages.python.sitePackages}";
})
];
passthru.scripts = [ "weechat_otr.py" ];
installPhase = ''
mkdir -p $out/share
cp weechat_otr.py $out/share/weechat_otr.py
'';
meta = with stdenv.lib; {
homepage = "https://github.com/mmb/weechat-otr";
license = licenses.gpl3;
maintainers = with maintainers; [ geistesk ];
description = "WeeChat script for Off-the-Record messaging";
};
}


@ -0,0 +1,13 @@
diff --git a/weechat_otr.py b/weechat_otr.py
index 0ccfb35..c42bebf 100644
--- a/weechat_otr.py
+++ b/weechat_otr.py
@@ -41,6 +41,8 @@ import shlex
import shutil
import sys
+sys.path.append('@env@')
+
import potr
import weechat


@ -1,36 +0,0 @@
{ stdenv, fetchFromGitHub, xmpppy, pydns, substituteAll, buildEnv }:
stdenv.mkDerivation {
name = "weechat-jabber-2017-08-30";
src = fetchFromGitHub {
repo = "weechat-xmpp";
owner = "sleduc";
sha256 = "0s02xs0ynld9cxxzj07al364sfglyc5ir1i82133mq0s8cpphnxv";
rev = "8f6c21f5a160c9318c7a2d8fd5dcac7ab2e0d843";
};
installPhase = ''
mkdir -p $out/share
cp jabber.py $out/share/jabber.py
'';
patches = [
(substituteAll {
src = ./libpath.patch;
env = "${buildEnv {
name = "weechat-xmpp-env";
paths = [ pydns xmpppy ];
}}/lib/python2.7/site-packages";
})
];
passthru.scripts = [ "jabber.py" ];
meta = with stdenv.lib; {
description = "A fork of the jabber plugin for weechat";
homepage = "https://github.com/sleduc/weechat-xmpp";
maintainers = with maintainers; [ ma27 ];
license = licenses.gpl3Plus;
};
}


@ -1,16 +0,0 @@
diff --git a/jabber.py b/jabber.py
index 27006a3..e53c2c0 100644
--- a/jabber.py
+++ b/jabber.py
@@ -95,6 +95,11 @@ SCRIPT_COMMAND = SCRIPT_NAME
import re
import warnings
+import sys
+
+sys.path.append('@env@')
+
+
import_ok = True
try:


@ -1,5 +1,5 @@
{ lib, runCommand, writeScriptBin, buildEnv
, pythonPackages, perlPackages, runtimeShell
, python3Packages, perlPackages, runtimeShell
}:
weechat:
@ -17,11 +17,11 @@ let
in rec {
python = (simplePlugin "python") // {
extraEnv = ''
export PATH="${pythonPackages.python}/bin:$PATH"
export PATH="${python3Packages.python}/bin:$PATH"
'';
withPackages = pkgsFun: (python // {
extraEnv = ''
export PYTHONHOME="${pythonPackages.python.withPackages pkgsFun}"
export PYTHONHOME="${python3Packages.python.withPackages pkgsFun}"
'';
});
};
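
With the wrapper now on Python 3, pulling extra modules into WeeChat's interpreter goes through the same withPackages hook shown above. A sketch using the configure mechanism of the weechat derivation (the package choice is illustrative):

weechat.override {
  configure = { availablePlugins, ... }: {
    plugins = with availablePlugins; [
      # Python 3 plugin with extra site-packages, e.g. for wee-slack.
      (python.withPackages (ps: with ps; [ websocket_client ]))
    ];
  };
}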


@ -40,7 +40,7 @@ stdenv.mkDerivation rec {
meta = with stdenv.lib; {
homepage = https://astroidmail.github.io/;
description = "GTK+ frontend to the notmuch mail system";
description = "GTK frontend to the notmuch mail system";
maintainers = with maintainers; [ bdimcheff SuprDewd ];
license = licenses.gpl3Plus;
platforms = platforms.linux;


@ -50,7 +50,7 @@ stdenv.mkDerivation rec {
meta = with stdenv.lib; {
description = "An app to send/receive SMS, make USSD requests, control mobile data usage and more";
longDescription = ''
A simple GTK+ based GUI compatible with Modem manager, Wader and oFono
A simple GTK based GUI compatible with Modem manager, Wader and oFono
system services able to control EDGE/3G/4G broadband modem specific
functions. You can check balance of your SIM card, send or receive SMS
messages, control mobile traffic consumption and more.


@ -38,7 +38,7 @@ stdenv.mkDerivation {
enableParallelBuilding = true;
meta = {
description = "A GTK+-based Usenet newsreader good at both text and binaries";
description = "A GTK-based Usenet newsreader good at both text and binaries";
homepage = http://pan.rebelbase.com/;
maintainers = [ stdenv.lib.maintainers.eelco ];
platforms = stdenv.lib.platforms.linux;


@ -49,7 +49,7 @@ stdenv.mkDerivation rec {
on top of a cross-platform back-end.
Feature spotlight:
* Uses fewer resources than other clients
* Native Mac, GTK+ and Qt GUI clients
* Native Mac, GTK and Qt GUI clients
* Daemon ideal for servers, embedded systems, and headless use
* All these can be remote controlled by Web and Terminal clients
* Bluetack (PeerGuardian) blocklists with automatic updates


@ -52,7 +52,7 @@ stdenv.mkDerivation rec {
meta = {
license = licenses.gpl2;
homepage = https://gitlab.com/Remmina/Remmina;
description = "Remote desktop client written in GTK+";
description = "Remote desktop client written in GTK";
maintainers = with maintainers; [ melsigl ryantm ];
platforms = platforms.linux;
};


@ -61,7 +61,7 @@ in stdenv.mkDerivation {
Its goal is to be an easy-to-use no-nonsense cross-platform
project management application.
Planner is a GTK+ application written in C and licensed under the
Planner is a GTK application written in C and licensed under the
GPLv2 or any later version. It can store its data in either xml
files or in a postgresql database. Projects can also be printed
to PDF or exported to HTML for easy viewing from any web browser.


@ -21,7 +21,7 @@ in stdenv.mkDerivation {
description = "Real time satellite tracking and orbit prediction";
longDescription = ''
Gpredict is a real time satellite tracking and orbit prediction program
written using the Gtk+ widgets. Gpredict is targetted mainly towards ham radio
written using the GTK widgets. Gpredict is targeted mainly towards ham radio
operators but others interested in satellite tracking may find it useful as
well. Gpredict uses the SGP4/SDP4 algorithms, which are compatible with the
NORAD Keplerian elements.


@ -0,0 +1,23 @@
{buildPythonPackage, lib, fetchFromGitHub, statistics}:
buildPythonPackage rec {
pname = "xenomapper";
version = "1.0.2";
src = fetchFromGitHub {
owner = "genomematt";
repo = pname;
rev = "v${version}";
sha256 = "0mnmfzlq5mhih6z8dq5bkx95vb8whjycz9mdlqwbmlqjb3gb3zhr";
};
propagatedBuildInputs = [ statistics ];
meta = with lib; {
homepage = "http://github.com/genomematt/xenomapper";
description = "A utility for post processing mapped reads that have been aligned to a primary genome and a secondary genome and binning reads into species specific, multimapping in each species, unmapped and unassigned bins";
license = licenses.gpl3;
platforms = platforms.all;
maintainers = [ maintainers.jbedo ];
};
}

View File

@ -56,7 +56,7 @@ pythonPackages.buildPythonApplication rec {
description = "A handy file search tool";
longDescription = ''
Catfish is a handy file searching tool. The interface is
intentionally lightweight and simple, using only GTK+3.
intentionally lightweight and simple, using only GTK 3.
You can configure it to your needs by using several command line
options.
'';


@ -51,7 +51,7 @@ stdenv.mkDerivation rec {
doCheck = true;
meta = with stdenv.lib; {
description = "Simple GTK+ frontend for the mpv video player";
description = "Simple GTK frontend for the mpv video player";
longDescription = ''
GNOME MPV interacts with mpv via the client API exported by libmpv,
allowing access to mpv's powerful playback capabilities through an


@ -97,7 +97,7 @@ stdenv.mkDerivation rec {
and containers. Very versatile and customizable.
Package provides:
CLI - `HandbrakeCLI`
GTK+ GUI - `ghb`
GTK GUI - `ghb`
'';
license = licenses.gpl2;
maintainers = with maintainers; [ Anton-Latukha wmertens ];


@ -13,13 +13,13 @@ with stdenv.lib;
stdenv.mkDerivation rec {
pname = "mkvtoolnix";
version = "36.0.0";
version = "37.0.0";
src = fetchFromGitLab {
owner = "mbunkus";
repo = "mkvtoolnix";
rev = "release-${version}";
sha256 = "114j9n2m6dkh7vqzyhcsjzzffadr0lzyjmh31cbl4mvvkg9j5z6r";
sha256 = "0r1qzvqc6xx7rmv4v4fjc70cqy832h8v0fjf6c5ljbg1c6pgkl0l";
};
nativeBuildInputs = [


@ -49,9 +49,9 @@ stdenv.mkDerivation {
configureFlags = [ "--disable-debug" ];
meta = {
description = "GTK+3 application to edit video subtitles";
description = "GTK 3 application to edit video subtitles";
longDescription = ''
Subtitle Editor is a GTK+3 tool to edit subtitles for GNU/Linux/*BSD. It
Subtitle Editor is a GTK 3 tool to edit subtitles for GNU/Linux/*BSD. It
can be used for new subtitles or as a tool to transform, edit, correct
and refine existing subtitle. This program also shows sound waves, which
makes it easier to synchronise subtitles to voices.

Some files were not shown because too many files have changed in this diff Show More