Merge commit '18aa59b0f26fc707e7313f8467e67159e61600c2' from master into staging

There was one conflict in the NixOS manual; I checked that it still
built after resolving it.
John Ericson 2019-04-01 00:24:46 -04:00
commit 4ccb74011f
319 changed files with 5657 additions and 2610 deletions

View File

@ -78,15 +78,14 @@ manual-full.xml: ${MD_TARGETS} .version functions/library/locations.xml function
nix-instantiate --eval \
-E '(import ../lib).version' > .version
function_locations := $(shell nix-build --no-out-link ./lib-function-locations.nix)
functions/library/locations.xml:
ln -s $(function_locations) ./functions/library/locations.xml
nix-build ./lib-function-locations.nix \
--out-link $@
functions/library/generated:
functions/library/generated: functions/library/locations.xml
nix-build ./lib-function-docs.nix \
--arg locationsXml $(function_locations)\
--out-link ./functions/library/generated
--arg locationsXml $< \
--out-link $@
%.section.xml: %.section.md
pandoc $^ -w docbook+smart \

View File

@ -12,11 +12,12 @@
computing power and memory to compile their own programs. One might think
that cross-compilation is a fairly niche concern. However, there are
significant advantages to rigorously distinguishing between build-time and
run-time environments! This applies even when one is developing and
deploying on the same machine. Nixpkgs is increasingly adopting the opinion
that packages should be written with cross-compilation in mind, and nixpkgs
should evaluate in a similar way (by minimizing cross-compilation-specific
special cases) whether or not one is cross-compiling.
run-time environments! Significant, because the benefits apply even when one
is developing and deploying on the same machine. Nixpkgs is increasingly
adopting the opinion that packages should be written with cross-compilation
in mind, and nixpkgs should evaluate in a similar way (by minimizing
cross-compilation-specific special cases) whether or not one is
cross-compiling.
</para>
<para>
@ -30,7 +31,7 @@
<section xml:id="sec-cross-packaging">
<title>Packaging in a cross-friendly manner</title>
<section xml:id="sec-cross-platform-parameters">
<section xml:id="ssec-cross-platform-parameters">
<title>Platform parameters</title>
<para>
@ -218,8 +219,20 @@
</variablelist>
</section>
<section xml:id="sec-cross-specifying-dependencies">
<title>Specifying Dependencies</title>
<section xml:id="ssec-cross-dependency-categorization">
<title>Theory of dependency categorization</title>
<note>
<para>
This is a rather philosophical description that isn't very
Nixpkgs-specific. For an overview of all the relevant attributes given to
<varname>mkDerivation</varname>, see
<xref
linkend="ssec-stdenv-dependencies"/>. For a description of how
everything is implemented, see
<xref linkend="ssec-cross-dependency-implementation" />.
</para>
</note>
<para>
In this section we explore the relationship between both runtime and
@ -227,84 +240,98 @@
</para>
<para>
A runtime dependency between 2 packages implies that between them both the
host and target platforms match. This is directly implied by the meaning of
"host platform" and "runtime dependency": The package dependency exists
while both packages are running on a single host platform.
A run time dependency between two packages requires that their host
platforms match. This is directly implied by the meaning of "host platform"
and "runtime dependency": The package dependency exists while both packages
are running on a single host platform.
</para>
<para>
A build time dependency, however, implies a shift in platforms between the
depending package and the depended-on package. The meaning of a build time
dependency is that to build the depending package we need to be able to run
the depended-on's package. The depending package's build platform is
therefore equal to the depended-on package's host platform. Analogously,
the depending package's host platform is equal to the depended-on package's
target platform.
A build time dependency, however, has a shift in platforms between the
depending package and the depended-on package. A "build time dependency"
means that to build the depending package we need to be able to run the
depended-on package. The depending package's build platform is therefore
equal to the depended-on package's host platform.
</para>
<para>
In this manner, given the 3 platforms for one package, we can determine the
three platforms for all its transitive dependencies. This is the most
important guiding principle behind cross-compilation with Nixpkgs, and will
be called the <wordasword>sliding window principle</wordasword>.
If neither the dependency nor the depending package is a compiler or other
machine-code-producing tool, we're done. And indeed
<varname>buildInputs</varname> and <varname>nativeBuildInputs</varname>
have covered these simpler run-time and build-time (respectively) changes
for many years. But if the dependency does produce machine code, we might
need to worry about its target platform too. In principle, that target
platform might be any of the depending package's build, host, or target
platforms, but we prohibit dependencies from a "later" platform to an
earlier platform to limit confusion, because we've never seen a legitimate
use for them.
</para>
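<para>
As a brief illustration (a minimal sketch; the particular packages are only
examples), a tool that must run during the build goes in
<varname>nativeBuildInputs</varname>, while a library linked into the result
goes in <varname>buildInputs</varname>:
<programlisting>
{ stdenv, flex, zlib }:

stdenv.mkDerivation {
  name = "dependency-example-0.1";
  src = ./.;
  # flex runs on the build platform while the package is being built,
  # so it is a build-time dependency.
  nativeBuildInputs = [ flex ];
  # zlib is linked into the output and used on the host platform at run
  # time, so it is a run-time dependency.
  buildInputs = [ zlib ];
}
</programlisting>
</para>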
<para>
Some examples will make this clearer. If a package is being built with a
<literal>(build, host, target)</literal> platform triple of <literal>(foo,
bar, bar)</literal>, then its build-time dependencies would have a triple
of <literal>(foo, foo, bar)</literal>, and <emphasis>those
packages'</emphasis> build-time dependencies would have a triple of
<literal>(foo, foo, foo)</literal>. In other words, it should take two
"rounds" of following build-time dependency edges before one reaches a
fixed point where, by the sliding window principle, the platform triple no
longer changes. Indeed, this happens with cross-compilation, where only
rounds of native dependencies starting with the second necessarily coincide
with native packages.
Finally, if the depending package is a compiler or other
machine-code-producing tool, it might need dependencies that run at "emit
time". This is for compilers that (regrettably) insist on being built
together with their source languages' standard libraries. Assuming build !=
host != target, a run-time dependency of the standard library cannot be run
at the compiler's build time or run time, but only at the run time of code
emitted by the compiler.
</para>
<note>
<para>
The depending package's target platform is unconstrained by the sliding
window principle, which makes sense in that one can in principle build
cross compilers targeting arbitrary platforms.
</para>
</note>
<para>
How does this work in practice? Nixpkgs is now structured so that
build-time dependencies are taken from <varname>buildPackages</varname>,
whereas run-time dependencies are taken from the top level attribute set.
For example, <varname>buildPackages.gcc</varname> should be used at
build-time, while <varname>gcc</varname> should be used at run-time. Now,
for most of Nixpkgs's history, there was no
<varname>buildPackages</varname>, and most packages have not been
refactored to use it explicitly. Instead, one can use the six
(<emphasis>gasp</emphasis>) attributes used for specifying dependencies as
documented in <xref linkend="ssec-stdenv-dependencies"/>. We "splice"
together the run-time and build-time package sets with
<varname>callPackage</varname>, and then <varname>mkDerivation</varname>
for each of four attributes pulls the right derivation out. This splicing
can be skipped when not cross-compiling as the package sets are the same,
but is a bit slow for cross-compiling. Because of this, a
best-of-both-worlds solution is in the works with no splicing or explicit
access of <varname>buildPackages</varname> needed. For now, feel free to
use either method.
Putting this all together, that means we have dependencies in the form
"host → target", in at most the following six combinations:
<table>
<caption>Possible dependency types</caption>
<thead>
<tr>
<th>Dependency's host platform</th>
<th>Dependency's target platform</th>
</tr>
</thead>
<tbody>
<tr>
<td>build</td>
<td>build</td>
</tr>
<tr>
<td>build</td>
<td>host</td>
</tr>
<tr>
<td>build</td>
<td>target</td>
</tr>
<tr>
<td>host</td>
<td>host</td>
</tr>
<tr>
<td>host</td>
<td>target</td>
</tr>
<tr>
<td>target</td>
<td>target</td>
</tr>
</tbody>
</table>
</para>
<note>
<para>
There is also a "backlink" <varname>targetPackages</varname>, yielding a
package set whose <varname>buildPackages</varname> is the current package
set. This is a hack, though, to accommodate compilers with lousy build
systems. Please do not use this unless you are absolutely sure you are
packaging such a compiler and there is no other way.
</para>
</note>
<para>
Some examples will make this table clearer. Suppose there's some package
that is being built with a <literal>(build, host, target)</literal>
platform triple of <literal>(foo, bar, baz)</literal>. If it has a
build-time library dependency, that would be a "build → build" dependency
with a triple of <literal>(foo, foo, *)</literal> (the target platform is
irrelevant). If it needs a compiler to be built, that would be a "build →
host" dependency with a triple of <literal>(foo, foo, *)</literal> (the
target platform is irrelevant). That compiler would be built with another
compiler, also a "build → host" dependency, with a triple of <literal>(foo,
foo, foo)</literal>.
</para>
</section>
<section xml:id="sec-cross-cookbook">
<section xml:id="ssec-cross-cookbook">
<title>Cross packaging cookbook</title>
<para>
@ -450,21 +477,202 @@ nix-build &lt;nixpkgs&gt; --arg crossSystem '{ config = "&lt;arch&gt;-&lt;os&gt;
<section xml:id="sec-cross-infra">
<title>Cross-compilation infrastructure</title>
<para>
To be written.
</para>
<section xml:id="ssec-cross-dependency-implementation">
<title>Implementation of dependencies</title>
<note>
<para>
If one explores Nixpkgs, they will see derivations with names like
<literal>gccCross</literal>. Such <literal>*Cross</literal> derivations is
a holdover from before we properly distinguished between the host and
target platforms—the derivation with "Cross" in the name covered the
<literal>build = host != target</literal> case, while the other covered the
<literal>host = target</literal>, with build platform the same or not based
on whether one was using its <literal>.nativeDrv</literal> or
<literal>.crossDrv</literal>. This ugliness will disappear soon.
The categories of dependencies developed in
<xref
linkend="ssec-cross-dependency-categorization"/> are specified as
lists of derivations given to <varname>mkDerivation</varname>, as
documented in <xref linkend="ssec-stdenv-dependencies"/>. In short,
each list of dependencies for "host → target" of "foo → bar" is called
<varname>depsFooBar</varname>, with exceptions for backwards
compatibility that <varname>depsBuildHost</varname> is instead called
<varname>nativeBuildInputs</varname> and <varname>depsHostTarget</varname>
is instead called <varname>buildInputs</varname>. Nixpkgs is now structured
so that each <varname>depsFooBar</varname> is automatically taken from
<varname>pkgsFooBar</varname>. (These <varname>pkgsFooBar</varname>s are
quite new, so there is no special case for
<varname>nativeBuildInputs</varname> and <varname>buildInputs</varname>.)
For example, <varname>pkgsBuildHost.gcc</varname> should be used at
build-time, while <varname>pkgsHostTarget.gcc</varname> should be used at
run-time.
</para>
</note>
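<para>
As a sketch of that naming scheme (the particular packages are only
illustrative), a single derivation might combine several of these lists:
<programlisting>
{ stdenv, buildPackages, bison, openssl }:

stdenv.mkDerivation {
  name = "deps-naming-example-0.1";
  src = ./.;
  # build → build: a tool that runs on, and produces code for, the build
  # platform.
  depsBuildBuild = [ buildPackages.stdenv.cc ];
  # build → host, i.e. depsBuildHost, spelled nativeBuildInputs for
  # backwards compatibility; taken from pkgsBuildHost.
  nativeBuildInputs = [ bison ];
  # host → target, i.e. depsHostTarget, spelled buildInputs for backwards
  # compatibility; taken from pkgsHostTarget.
  buildInputs = [ openssl ];
}
</programlisting>
A compiler that insists on building its standard library alongside itself
would additionally use <varname>depsBuildTarget</varname>
("build → target").
</para>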
<para>
Now, for most of Nixpkgs's history, there were no
<varname>pkgsFooBar</varname> attributes, and most packages have not been
refactored to use them explicitly. Prior to those, there were just
<varname>buildPackages</varname>, <varname>pkgs</varname>, and
<varname>targetPackages</varname>. Those are now redefined as aliases to
<varname>pkgsBuildHost</varname>, <varname>pkgsHostTarget</varname>, and
<varname>pkgsTargetTarget</varname>. It is acceptable, even
recommended, to use them for libraries to show that the host platform is
irrelevant.
</para>
<para>
But before that, there was just <varname>pkgs</varname>, even though both
<varname>buildInputs</varname> and <varname>nativeBuildInputs</varname>
existed. [Cross barely worked, and those were implemented with some hacks
on <varname>mkDerivation</varname> to override dependencies.] What this
means is the vast majority of packages do not use any explicit package set
to populate their dependencies, just using whatever
<varname>callPackage</varname> gives them even if they do correctly sort
their dependencies into the multiple lists described above. And indeed,
asking that users both sort their dependencies, <emphasis>and</emphasis>
take them from the right attribute set, is both too onerous and redundant,
so the recommended approach (for now) is to continue just categorizing by
list and not using an explicit package set.
</para>
<para>
To make this work, we "splice" together the six
<varname>pkgsFooBar</varname> package sets and have
<varname>callPackage</varname> actually take its arguments from that. This
is currently implemented in <filename>pkgs/top-level/splice.nix</filename>.
<varname>mkDerivation</varname> then, for each dependency attribute, pulls
the right derivation out from the splice. This splicing can be skipped when
not cross-compiling as the package sets are the same, but still is a bit
slow for cross-compiling. We'd like to do something better, but haven't
come up with anything yet.
</para>
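<para>
As a rough sketch of what that buys package authors (attribute names below
are ordinary Nixpkgs ones; the package choice is illustrative), the
following two spellings should resolve to the same build-time
<varname>cmake</varname> when cross-compiling, the first relying on the
spliced <varname>callPackage</varname>, the second naming the package set
explicitly:
<programlisting>
let
  pkgs = import &lt;nixpkgs&gt; {
    crossSystem = { config = "aarch64-unknown-linux-gnu"; };
  };
  # Implicit: splicing picks the build → host cmake for nativeBuildInputs.
  implicit = pkgs.callPackage ({ stdenv, cmake }: stdenv.mkDerivation {
    name = "spliced-example-0.1";
    src = ./.;
    nativeBuildInputs = [ cmake ];
  }) { };
  # Explicit: reach into pkgsBuildHost by hand.
  explicit = pkgs.callPackage ({ stdenv, pkgsBuildHost }: stdenv.mkDerivation {
    name = "explicit-example-0.1";
    src = ./.;
    nativeBuildInputs = [ pkgsBuildHost.cmake ];
  }) { };
in { inherit implicit explicit; }
</programlisting>
</para>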
</section>
<section xml:id="ssec-bootstrapping">
<title>Bootstrapping</title>
<para>
Each of the package sets described above comes from a single bootstrapping
stage. While <filename>pkgs/top-level/default.nix</filename> coordinates
the composition of stages at a high level,
<filename>pkgs/top-level/stage.nix</filename> "ties the knot" (creates the
fixed point) of each stage. The package sets are defined per stage, however,
so they can be thought of as edges between stages (the nodes) in a graph.
Compositions like <literal>pkgsBuildTarget.targetPackages</literal> can be
thought of as paths in this graph.
</para>
<para>
While there are many package sets, and thus many edges, the stages can also
be arranged in a linear chain. In other words, many of the edges are
redundant as far as connectivity is concerned. This hinges on the type of
bootstrapping we do. Currently for cross it is:
<orderedlist>
<listitem>
<para>
<literal>(native, native, native)</literal>
</para>
</listitem>
<listitem>
<para>
<literal>(native, native, foreign)</literal>
</para>
</listitem>
<listitem>
<para>
<literal>(native, foreign, foreign)</literal>
</para>
</listitem>
</orderedlist>
In each stage, <varname>pkgsBuildHost</varname> refers to the previous
stage, <varname>pkgsBuildBuild</varname> refers to the one before that,
<varname>pkgsHostTarget</varname> refers to the current one, and
<varname>pkgsTargetTarget</varname> refers to the next one. When there is
no previous or next stage, they instead refer to the current stage. Note
how all the invariants regarding the mapping between dependency and depending
packages' build, host, and target platforms are preserved.
<varname>pkgsBuildTarget</varname> and <varname>pkgsHostHost</varname> are
more complex in that the stage fitting the requirements isn't always a
fixed chain of "prevs" and "nexts" away (modulo the "saturating"
self-references at the ends); we just special-case each of them instead. All the primary
edges are implemented in <filename>pkgs/stdenv/booter.nix</filename>,
and the secondary aliases in <filename>pkgs/top-level/stage.nix</filename>.
</para>
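<para>
As a rough sketch (the cross target is just an example; evaluate with
<command>nix-instantiate --eval --strict</command>), one can observe these
neighbouring stages from a Nix expression:
<programlisting>
let
  pkgs = import &lt;nixpkgs&gt; {
    crossSystem = { config = "aarch64-unknown-linux-gnu"; };
  };
  triple = p: {
    build  = p.stdenv.buildPlatform.config;
    host   = p.stdenv.hostPlatform.config;
    target = p.stdenv.targetPlatform.config;
  };
in {
  current  = triple pkgs;                 # (native, foreign, foreign)
  previous = triple pkgs.pkgsBuildHost;   # (native, native, foreign)
  twoBack  = triple pkgs.pkgsBuildBuild;  # (native, native, native)
}
</programlisting>
</para>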
<note>
<para>
Note that the native stages are bootstrapped in legacy ways that predate the
current cross implementation. This is why the bootstrapping stages
leading up to the final stages are ignored in the previous paragraph.
</para>
</note>
<para>
If one looks at the 3 platform triples, one can see that they overlap such
that one could put them together into a chain like:
<programlisting>
(native, native, native, foreign, foreign)
</programlisting>
If one imagines the saturating self references at the end being replaced
with infinite stages, and then overlays those platform triples, one ends up
with the infinite tuple:
<programlisting>
(native..., native, native, native, foreign, foreign, foreign...)
</programlisting>
One can then imagine any sequence of platforms such that there are bootstrap
stages with their 3 platforms determined by "sliding a window" that is the
3-tuple through the sequence. This was the original model for
bootstrapping. Without a target platform (assume a better world where all
compilers are multi-target and all standard libraries are built in their
own derivation), this is sufficient. Conversely, if one wishes to cross
compile "faster", with a "Canadian Cross" bootstrapping stage where
<literal>build != host != target</literal>, more bootstrapping stages are
needed, since no sliding window provides the pesky
<varname>pkgsBuildTarget</varname> package set, as it skips the Canadian
cross stage's "host".
</para>
<note>
<para>
It is much better to refer to <varname>buildPackages</varname> than
<varname>targetPackages</varname>, or more broadly package sets that do
not mention "target". There are three reasons for this.
</para>
<para>
First, it is because bootstrapping stages do not have a unique
<varname>targetPackages</varname>. For example a <literal>(x86-linux,
x86-linux, arm-linux)</literal> and <literal>(x86-linux, x86-linux,
x86-windows)</literal> package set both have the same <literal>(x86-linux,
x86-linux, x86-linux)</literal> package set as their build package set. Because there is no canonical
<varname>targetPackages</varname> for such a native (<literal>build ==
host == target</literal>) package set, we set their
<varname>targetPackages</varname>
</para>
<para>
Second, it is because this is a frequent source of hard-to-follow
"infinite recursions" / cycles. When only package sets that don't mention
target are used, the package set forms a directed acyclic graph. This
means that all cycles that exist are confined to one stage, so they
are a lot smaller and easier to follow in the code or a backtrace. It
also means they are present in native and cross builds alike, and so more
likely to be caught by CI and other users.
</para>
<para>
Third, it is because everything target-mentioning only exists to
accommodate compilers with lousy build systems that insist on the compiler
itself and standard library being built together. Of course that is bad,
because bigger derivations mean longer rebuilds. It is also problematic because
it tends to make the standard libraries less like other libraries than
they could be, complicating code and build systems alike. Because of the
other problems, and because of these innate disadvantages, compilers ought
to be packaged another way where possible.
</para>
</note>
<note>
<para>
If one explores Nixpkgs, they will see derivations with names like
<literal>gccCross</literal>. Such <literal>*Cross</literal> derivations are
a holdover from before we properly distinguished between the host and
target platforms—the derivation with "Cross" in the name covered the
<literal>build = host != target</literal> case, while the other covered
the <literal>host = target</literal> case, with the build platform the same or not
based on whether one was using its <literal>.nativeDrv</literal> or
<literal>.crossDrv</literal>. This ugliness will disappear soon.
</para>
</note>
</section>
</section>
</chapter>

View File

@ -189,7 +189,8 @@ $ git rebase --onto nixos-unstable BASEBRANCH FETCH_HEAD <co
</listitem>
<listitem>
<para>
The <link xlink:href="https://github.com/Mic92/nix-review">nix-review</link>
The
<link xlink:href="https://github.com/Mic92/nix-review">nix-review</link>
tool can be used to review a pull request content in a single command.
<varname>PRNUMBER</varname> should be replaced by the number at the end
of the pull request title. You can also provide the full github pull

View File

@ -222,9 +222,10 @@ genericBuild
</footnote>
But even if one is not cross compiling, the platforms imply whether or not
the dependency is needed at run-time or build-time, a concept that makes
perfect sense outside of cross compilation. For now, the run-time/build-time
distinction is just a hint for mental clarity, but in the future it perhaps
could be enforced.
perfect sense outside of cross compilation. By default, the
run-time/build-time distinction is just a hint for mental clarity, but with
<varname>strictDeps</varname> set it is mostly enforced even in the native
case.
</para>
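<para>
For example (a minimal sketch; the packages named are only illustrative),
opting in looks like this, and with it a tool mistakenly listed in
<varname>buildInputs</varname> will, roughly speaking, no longer be found
on the builder's <envar>PATH</envar> even in a native build:
<programlisting>
{ stdenv, pkgconfig, zlib }:

stdenv.mkDerivation {
  name = "strict-deps-example-0.1";
  src = ./.;
  strictDeps = true;
  # Build-time tool: must be in nativeBuildInputs to stay on PATH.
  nativeBuildInputs = [ pkgconfig ];
  # Run-time library: correct in buildInputs.
  buildInputs = [ zlib ];
}
</programlisting>
</para>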
<para>
@ -348,7 +349,10 @@ let f(h, h + 1, i) = i + h
<para>
Overall, the unifying theme here is that propagation shouldn't be
introducing transitive dependencies involving platforms the depending
package is unaware of. The offset bounds checking and definition of
package is unaware of. [One can imagine the depending package asking for
dependencies with the platforms it knows about; other platforms it doesn't
know how to ask for. The platform description in that scenario is a kind of
unforgeable capability.] The offset bounds checking and definition of
<function>mapOffset</function> together ensure that this is the case.
Discovering a new offset is discovering a new platform, and since those
platforms weren't in the derivation "spec" of the needing package, they
@ -2633,21 +2637,20 @@ addEnvHooks "$hostOffset" myBashFunction
happens. It prevents nix from cleaning up the build environment
immediately and allows the user to attach to a build environment using
the <command>cntr</command> command. Upon build error it will print
instructions on how to use <command>cntr</command>, which can be used
to enter the environment for debugging. Installing cntr and
running the command will provide shell access to the build sandbox of
failed build. At <filename>/var/lib/cntr</filename> the sandboxed
filesystem is mounted. All commands and files of the system are still
accessible within the shell. To execute commands from the sandbox use
the cntr exec subcommand. Note that <command>cntr</command> also needs
to be executed on the machine that is doing the build, which might not
be the case when remote builders are enabled. <command>cntr</command> is
only supported on Linux-based platforms. To use it first add
<literal>cntr</literal> to your
<literal>environment.systemPackages</literal> on NixOS or alternatively
to the root user on non-NixOS systems. Then in the package that is
supposed to be inspected, add <literal>breakpointHook</literal> to
<literal>nativeBuildInputs</literal>.
instructions on how to use <command>cntr</command>, which can be used to
enter the environment for debugging. Installing cntr and running the
command will provide shell access to the build sandbox of failed build.
At <filename>/var/lib/cntr</filename> the sandboxed filesystem is
mounted. All commands and files of the system are still accessible
within the shell. To execute commands from the sandbox use the cntr exec
subcommand. Note that <command>cntr</command> also needs to be executed
on the machine that is doing the build, which might not be the case when
remote builders are enabled. <command>cntr</command> is only supported
on Linux-based platforms. To use it first add <literal>cntr</literal> to
your <literal>environment.systemPackages</literal> on NixOS or
alternatively to the root user on non-NixOS systems. Then in the package
that is supposed to be inspected, add <literal>breakpointHook</literal>
to <literal>nativeBuildInputs</literal>.
<programlisting>
nativeBuildInputs = [ breakpointHook ];
</programlisting>

View File

@ -354,23 +354,22 @@ Additional information.
<title>Tested compilation of all pkgs that depend on this change using <command>nix-review</command></title>
<para>
If you are updating a package's version, you can use nix-review to make sure all
packages that depend on the updated package still compile correctly.
The <command>nix-review</command> utility can look for and build all dependencies
either based on uncommited changes with the <literal>wip</literal> option or
specifying a github pull request number.
If you are updating a package's version, you can use nix-review to make
sure all packages that depend on the updated package still compile
correctly. The <command>nix-review</command> utility can look for and build
all dependent packages either based on uncommitted changes with the
<literal>wip</literal> option or by specifying a GitHub pull request number.
</para>
<para>
review changes from pull request number 12345:
<screen>nix-shell -p nix-review --run "nix-review pr 12345"</screen>
review changes from pull request number 12345:
<screen>nix-shell -p nix-review --run "nix-review pr 12345"</screen>
</para>
<para>
review uncommitted changes:
<screen>nix-shell -p nix-review --run "nix-review wip"</screen>
review uncommitted changes:
<screen>nix-shell -p nix-review --run "nix-review wip"</screen>
</para>
</section>
<section xml:id="submitting-changes-tested-execution">

View File

@ -27,8 +27,13 @@ nixos.firefox firefox-23.0 Mozilla Firefox - the browser, reloaded
<replaceable>...</replaceable>
</screen>
The first column in the output is the <emphasis>attribute name</emphasis>,
such as <literal>nixos.thunderbird</literal>. (The <literal>nixos</literal>
prefix allows distinguishing between different channels that you might have.)
such as <literal>nixos.thunderbird</literal>.
</para>
<para>
Note: the <literal>nixos</literal> prefix tells us that we want to get the
package from the <literal>nixos</literal> channel and works only in CLI tools.
In declarative configuration, use the <literal>pkgs</literal> prefix (variable) instead.
</para>
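<para>
For example, the package installed imperatively as
<literal>nixos.thunderbird</literal> would be referenced declaratively in
<filename>configuration.nix</filename> as (a sketch):
<programlisting>
environment.systemPackages = [ pkgs.thunderbird ];
</programlisting>
</para>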
<para>

View File

@ -55,6 +55,23 @@
</para>
<itemizedlist>
<listitem>
<para>
Buildbot no longer supports Python 2, as support was dropped upstream in
version 2.0.0. Configurations may need to be modified to make them
compatible with Python 3.
</para>
</listitem>
<listitem>
<para>
PostgreSQL now uses
<filename class="directory">/run/postgresql</filename> as its socket
directory instead of <filename class="directory">/tmp</filename>. So
if you run an application such as Nextcloud, where you need to use
the Unix socket path as the database host name, you need to change it
accordingly.
</para>
</listitem>
<listitem>
<para>
The NetworkManager systemd unit was renamed back from network-manager.service to

View File

@ -880,6 +880,7 @@
./virtualisation/container-config.nix
./virtualisation/containers.nix
./virtualisation/docker.nix
./virtualisation/docker-containers.nix
./virtualisation/ecs-agent.nix
./virtualisation/libvirtd.nix
./virtualisation/lxc.nix

View File

@ -76,7 +76,7 @@ in
};
failmode = mkOption {
type = types.enum [ "safe" "enum" ];
type = types.enum [ "safe" "secure" ];
default = "safe";
description = ''
On service or configuration errors that prevent Duo

View File

@ -199,10 +199,10 @@ in {
package = mkOption {
type = types.package;
default = pkgs.pythonPackages.buildbot-full;
defaultText = "pkgs.pythonPackages.buildbot-full";
default = pkgs.python3Packages.buildbot-full;
defaultText = "pkgs.python3Packages.buildbot-full";
description = "Package to use for buildbot.";
example = literalExample "pkgs.python3Packages.buildbot-full";
example = literalExample "pkgs.python3Packages.buildbot";
};
packages = mkOption {

View File

@ -118,10 +118,10 @@ in {
package = mkOption {
type = types.package;
default = pkgs.pythonPackages.buildbot-worker;
defaultText = "pkgs.pythonPackages.buildbot-worker";
default = pkgs.python3Packages.buildbot-worker;
defaultText = "pkgs.python3Packages.buildbot-worker";
description = "Package to use for buildbot worker.";
example = literalExample "pkgs.python3Packages.buildbot-worker";
example = literalExample "pkgs.python2Packages.buildbot-worker";
};
packages = mkOption {

View File

@ -238,6 +238,7 @@ in
User = "postgres";
Group = "postgres";
PermissionsStartOnly = true;
RuntimeDirectory = "postgresql";
Type = if lib.versionAtLeast cfg.package.version "9.6"
then "notify"
else "simple";

View File

@ -9,6 +9,8 @@ let
in
{
meta.maintainers = pkgs.pantheon.maintainers;
###### interface
options = {

View File

@ -6,6 +6,8 @@ with lib;
{
meta.maintainers = pkgs.pantheon.maintainers;
###### interface
options = {

View File

@ -6,6 +6,8 @@ with lib;
{
meta.maintainers = pkgs.pantheon.maintainers;
###### interface
options = {

View File

@ -14,9 +14,10 @@ let
log.fields.service = "registry";
storage = {
cache.blobdescriptor = blobCache;
filesystem.rootdirectory = cfg.storagePath;
delete.enabled = cfg.enableDelete;
};
} // (if cfg.storagePath != null
then { filesystem.rootdirectory = cfg.storagePath; }
else {});
http = {
addr = "${cfg.listenAddress}:${builtins.toString cfg.port}";
headers.X-Content-Type-Options = ["nosniff"];
@ -61,9 +62,12 @@ in {
};
storagePath = mkOption {
type = types.path;
type = types.nullOr types.path;
default = "/var/lib/docker-registry";
description = "Docker registry storage path.";
description = ''
Docker registry storage path for the filesystem storage backend. Set to
null to configure another backend via extraConfig.
'';
};
enableDelete = mkOption {
@ -140,9 +144,12 @@ in {
startAt = optional cfg.enableGarbageCollect cfg.garbageCollectDates;
};
users.users.docker-registry = {
createHome = true;
home = cfg.storagePath;
};
users.users.docker-registry =
if cfg.storagePath != null
then {
createHome = true;
home = cfg.storagePath;
}
else {};
};
}

View File

@ -160,6 +160,8 @@ let
'';
};
extraGitlabRb = pkgs.writeText "extra-gitlab.rb" cfg.extraGitlabRb;
smtpSettings = pkgs.writeText "gitlab-smtp-settings.rb" ''
if Rails.env.production?
Rails.application.config.action_mailer.delivery_method = :smtp
@ -266,6 +268,26 @@ in {
description = "Extra configuration in config/database.yml.";
};
extraGitlabRb = mkOption {
type = types.str;
default = "";
example = ''
if Rails.env.production?
Rails.application.config.action_mailer.delivery_method = :sendmail
ActionMailer::Base.delivery_method = :sendmail
ActionMailer::Base.sendmail_settings = {
location: "/run/wrappers/bin/sendmail",
arguments: "-i -t"
}
end
'';
description = ''
Extra configuration to be placed in config/extra-gitlab.rb. This can
be used to add configuration not otherwise exposed through this module's
options.
'';
};
host = mkOption {
type = types.str;
default = config.networking.hostName;
@ -586,6 +608,7 @@ in {
[ -L /run/gitlab/uploads ] || ln -sf ${cfg.statePath}/uploads /run/gitlab/uploads
cp ${cfg.packages.gitlab}/share/gitlab/VERSION ${cfg.statePath}/VERSION
cp -rf ${cfg.packages.gitlab}/share/gitlab/config.dist/* ${cfg.statePath}/config
ln -sf ${extraGitlabRb} ${cfg.statePath}/config/initializers/extra-gitlab.rb
${optionalString cfg.smtp.enable ''
ln -sf ${smtpSettings} ${cfg.statePath}/config/initializers/smtp_settings.rb
''}

View File

@ -146,7 +146,7 @@ in
PLEX_MEDIA_SERVER_MAX_PLUGIN_PROCS="6";
PLEX_MEDIA_SERVER_TMPDIR="/tmp";
PLEX_MEDIA_SERVER_USE_SYSLOG="true";
LD_LIBRARY_PATH="/run/opengl-driver/lib:${cfg.package}/usr/lib/plexmediaserver";
LD_LIBRARY_PATH="/run/opengl-driver/lib:${cfg.package}/usr/lib/plexmediaserver/lib";
LC_ALL="en_US.UTF-8";
LANG="en_US.UTF-8";
};

View File

@ -261,10 +261,14 @@ let
fi
'';
canonicalizePortList =
ports: lib.unique (builtins.sort builtins.lessThan ports);
commonOptions = {
allowedTCPPorts = mkOption {
type = types.listOf types.int;
type = types.listOf types.port;
default = [ ];
apply = canonicalizePortList;
example = [ 22 80 ];
description =
''
@ -274,7 +278,7 @@ let
};
allowedTCPPortRanges = mkOption {
type = types.listOf (types.attrsOf types.int);
type = types.listOf (types.attrsOf types.port);
default = [ ];
example = [ { from = 8999; to = 9003; } ];
description =
@ -285,8 +289,9 @@ let
};
allowedUDPPorts = mkOption {
type = types.listOf types.int;
type = types.listOf types.port;
default = [ ];
apply = canonicalizePortList;
example = [ 53 ];
description =
''
@ -295,7 +300,7 @@ let
};
allowedUDPPortRanges = mkOption {
type = types.listOf (types.attrsOf types.int);
type = types.listOf (types.attrsOf types.port);
default = [ ];
example = [ { from = 60000; to = 61000; } ];
description =

View File

@ -172,7 +172,7 @@ in {
Database host.
Note: for using Unix authentication with PostgreSQL, this should be
set to <literal>/tmp</literal>.
set to <literal>/run/postgresql</literal>.
'';
};
dbport = mkOption {

View File

@ -33,7 +33,7 @@
config = {
<link linkend="opt-services.nextcloud.config.dbtype">dbtype</link> = "pgsql";
<link linkend="opt-services.nextcloud.config.dbuser">dbuser</link> = "nextcloud";
<link linkend="opt-services.nextcloud.config.dbhost">dbhost</link> = "/tmp"; # nextcloud will add /.s.PGSQL.5432 by itself
<link linkend="opt-services.nextcloud.config.dbhost">dbhost</link> = "/run/postgresql"; # nextcloud will add /.s.PGSQL.5432 by itself
<link linkend="opt-services.nextcloud.config.dbname">dbname</link> = "nextcloud";
<link linkend="opt-services.nextcloud.config.adminpassFile">adminpassFile</link> = "/path/to/admin-pass-file";
<link linkend="opt-services.nextcloud.config.adminuser">adminuser</link> = "root";

View File

@ -86,11 +86,19 @@ in with lib; {
default = false;
description = "Serve and listen only through HTTPS.";
};
videoPaths = mkOption {
type = types.listOf types.path;
default = [];
example = [ "/home/okina/Videos/tehe_pero.webm" ];
description = "Videos that will be symlinked into www/videos.";
};
};
config = mkIf cfg.enable {
security.sudo.enable = cfg.enable;
services.postgresql.enable = cfg.enable;
services.postgresql.package = pkgs.postgresql_11;
services.meguca.passwordFile = mkDefault (pkgs.writeText "meguca-password-file" cfg.password);
services.meguca.postgresArgsFile = mkDefault (pkgs.writeText "meguca-postgres-args" cfg.postgresArgs);
services.meguca.postgresArgs = mkDefault "user=meguca password=${cfg.password} dbname=meguca sslmode=disable";
@ -102,8 +110,16 @@ in with lib; {
preStart = ''
# Ensure folder exists or create it and links and permissions are correct
mkdir -p ${escapeShellArg cfg.dataDir}
ln -sf ${pkgs.meguca}/share/meguca/www ${escapeShellArg cfg.dataDir}
mkdir -p ${escapeShellArg cfg.dataDir}/www
rm -rf ${escapeShellArg cfg.dataDir}/www/videos
ln -sf ${pkgs.meguca}/share/meguca/www/* ${escapeShellArg cfg.dataDir}/www
unlink ${escapeShellArg cfg.dataDir}/www/videos
mkdir -p ${escapeShellArg cfg.dataDir}/www/videos
for vid in ${escapeShellArg cfg.videoPaths}; do
ln -sf $vid ${escapeShellArg cfg.dataDir}/www/videos
done
chmod 750 ${escapeShellArg cfg.dataDir}
chown -R meguca:meguca ${escapeShellArg cfg.dataDir}

View File

@ -14,6 +14,9 @@ let
in
{
meta.maintainers = pkgs.pantheon.maintainers;
options = {
services.xserver.desktopManager.pantheon = {

View File

@ -25,7 +25,7 @@ in
{ name = "dwm";
start =
''
${pkgs.dwm}/bin/dwm &
dwm &
waitPID=$!
'';
};

View File

@ -36,8 +36,9 @@ let
#! ${pkgs.runtimeShell} -e
# Initialise the container side of the veth pair.
if [ -n "$HOST_ADDRESS" ] || [ -n "$LOCAL_ADDRESS" ] || [ -n "$HOST_BRIDGE" ]; then
if [ -n "$HOST_ADDRESS" ] || [ -n "$HOST_ADDRESS6" ] ||
[ -n "$LOCAL_ADDRESS" ] || [ -n "$LOCAL_ADDRESS6" ] ||
[ -n "$HOST_BRIDGE" ]; then
ip link set host0 name eth0
ip link set dev eth0 up
@ -88,7 +89,8 @@ let
extraFlags+=" --private-network"
fi
if [ -n "$HOST_ADDRESS" ] || [ -n "$LOCAL_ADDRESS" ]; then
if [ -n "$HOST_ADDRESS" ] || [ -n "$LOCAL_ADDRESS" ] ||
[ -n "$HOST_ADDRESS6" ] || [ -n "$LOCAL_ADDRESS6" ]; then
extraFlags+=" --network-veth"
fi
@ -159,7 +161,8 @@ let
# Clean up existing machined registration and interfaces.
machinectl terminate "$INSTANCE" 2> /dev/null || true
if [ -n "$HOST_ADDRESS" ] || [ -n "$LOCAL_ADDRESS" ]; then
if [ -n "$HOST_ADDRESS" ] || [ -n "$LOCAL_ADDRESS" ] ||
[ -n "$HOST_ADDRESS6" ] || [ -n "$LOCAL_ADDRESS6" ]; then
ip link del dev "ve-$INSTANCE" 2> /dev/null || true
ip link del dev "vb-$INSTANCE" 2> /dev/null || true
fi
@ -208,7 +211,8 @@ let
'';
in
''
if [ -n "$HOST_ADDRESS" ] || [ -n "$LOCAL_ADDRESS" ]; then
if [ -n "$HOST_ADDRESS" ] || [ -n "$LOCAL_ADDRESS" ] ||
[ -n "$HOST_ADDRESS6" ] || [ -n "$LOCAL_ADDRESS6" ]; then
if [ -z "$HOST_BRIDGE" ]; then
ifaceHost=ve-$INSTANCE
ip link set dev $ifaceHost up

View File

@ -0,0 +1,233 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.docker-containers;
dockerContainer =
{ name, config, ... }: {
options = {
image = mkOption {
type = types.str;
description = "Docker image to run.";
example = "library/hello-world";
};
cmd = mkOption {
type = with types; listOf str;
default = [];
description = "Commandline arguments to pass to the image's entrypoint.";
example = literalExample ''
["--port=9000"]
'';
};
entrypoint = mkOption {
type = with types; nullOr str;
description = "Overwrite the default entrypoint of the image.";
default = null;
example = "/bin/my-app";
};
environment = mkOption {
type = with types; attrsOf str;
default = {};
description = "Environment variables to set for this container.";
example = literalExample ''
{
DATABASE_HOST = "db.example.com";
DATABASE_PORT = "3306";
}
'';
};
log-driver = mkOption {
type = types.str;
default = "none";
description = ''
Logging driver for the container. The default of
<literal>"none"</literal> means that the container's logs will be
handled as part of the systemd unit. Setting this to
<literal>"journald"</literal> will result in duplicate logging, but
the container's logs will be visible to the <command>docker
logs</command> command.
For more details and a full list of logging drivers, refer to the
<link xlink:href="https://docs.docker.com/engine/reference/run/#logging-drivers---log-driver">
Docker engine documentation</link>
'';
};
ports = mkOption {
type = with types; listOf str;
default = [];
description = ''
Network ports to publish from the container to the outer host.
</para>
<para>
Valid formats:
</para>
<itemizedlist>
<listitem>
<para>
<literal>&lt;ip&gt;:&lt;hostPort&gt;:&lt;containerPort&gt;</literal>
</para>
</listitem>
<listitem>
<para>
<literal>&lt;ip&gt;::&lt;containerPort&gt;</literal>
</para>
</listitem>
<listitem>
<para>
<literal>&lt;hostPort&gt;:&lt;containerPort&gt;</literal>
</para>
</listitem>
<listitem>
<para>
<literal>&lt;containerPort&gt;</literal>
</para>
</listitem>
</itemizedlist>
<para>
Both <literal>hostPort</literal> and
<literal>containerPort</literal> can be specified as a range of
ports. When specifying ranges for both, the number of container
ports in the range must match the number of host ports in the
range. Example: <literal>1234-1236:1234-1236/tcp</literal>
</para>
<para>
When specifying a range for <literal>hostPort</literal> only, the
<literal>containerPort</literal> must <emphasis>not</emphasis> be a
range. In this case, the container port is published somewhere
within the specified <literal>hostPort</literal> range. Example:
<literal>1234-1236:1234/tcp</literal>
</para>
<para>
Refer to the
<link xlink:href="https://docs.docker.com/engine/reference/run/#expose-incoming-ports">
Docker engine documentation</link> for full details.
'';
example = literalExample ''
[
"8080:9000"
]
'';
};
user = mkOption {
type = with types; nullOr str;
default = null;
description = ''
Override the username or UID (and optionally groupname or GID) used
in the container.
'';
example = "nobody:nogroup";
};
volumes = mkOption {
type = with types; listOf str;
default = [];
description = ''
List of volumes to attach to this container.
Note that this is a list of <literal>"src:dst"</literal> strings to
allow for <literal>src</literal> to refer to
<literal>/nix/store</literal> paths, which would be difficult with an
attribute set. There are also a variety of mount options available
as a third field; please refer to the
<link xlink:href="https://docs.docker.com/engine/reference/run/#volume-shared-filesystems">
docker engine documentation</link> for details.
'';
example = literalExample ''
[
"volume_name:/path/inside/container"
"/path/on/host:/path/inside/container"
]
'';
};
workdir = mkOption {
type = with types; nullOr str;
default = null;
description = "Override the default working directory for the container.";
example = "/var/lib/hello_world";
};
extraDockerOptions = mkOption {
type = with types; listOf str;
default = [];
description = "Extra options for <command>docker run</command>.";
example = literalExample ''
["--network=host"]
'';
};
};
};
mkService = name: container: {
wantedBy = [ "multi-user.target" ];
after = [ "docker.service" "docker.socket" ];
requires = [ "docker.service" "docker.socket" ];
serviceConfig = {
ExecStart = concatStringsSep " \\\n " ([
"${pkgs.docker}/bin/docker run"
"--rm"
"--name=%n"
"--log-driver=${container.log-driver}"
] ++ optional (! isNull container.entrypoint)
"--entrypoint=${escapeShellArg container.entrypoint}"
++ (mapAttrsToList (k: v: "-e ${escapeShellArg k}=${escapeShellArg v}") container.environment)
++ map (p: "-p ${escapeShellArg p}") container.ports
++ optional (! isNull container.user) "-u ${escapeShellArg container.user}"
++ map (v: "-v ${escapeShellArg v}") container.volumes
++ optional (! isNull container.workdir) "-w ${escapeShellArg container.workdir}"
++ map escapeShellArg container.extraDockerOptions
++ [container.image]
++ map escapeShellArg container.cmd
);
ExecStartPre = "-${pkgs.docker}/bin/docker rm -f %n";
ExecStop = "${pkgs.docker}/bin/docker stop %n";
ExecStopPost = "-${pkgs.docker}/bin/docker rm -f %n";
### There is no generalized way of supporting `reload` for docker
### containers. Some containers may respond well to SIGHUP sent to their
### init process, but it is not guaranteed; some apps have other reload
### mechanisms, some don't have a reload signal at all, and some docker
### images just have broken signal handling. The best compromise in this
### case is probably to leave ExecReload undefined, so `systemctl reload`
### will at least result in an error instead of potentially undefined
### behaviour.
###
### Advanced users can still override this part of the unit to implement
### a custom reload handler, since the result of all this is a normal
### systemd service from the perspective of the NixOS module system.
###
# ExecReload = ...;
###
TimeoutStartSec = 0;
TimeoutStopSec = 120;
Restart = "always";
};
};
in {
options.docker-containers = mkOption {
default = {};
type = types.attrsOf (types.submodule dockerContainer);
description = "Docker containers to run as systemd services.";
};
config = mkIf (cfg != {}) {
systemd.services = mapAttrs' (n: v: nameValuePair "docker-${n}" (mkService n v)) cfg;
virtualisation.docker.enable = true;
};
}
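
A minimal usage sketch of the module above (the image name is the one used in
the option's own example; all values are illustrative):

{
  docker-containers.hello = {
    image = "library/hello-world";
    # Logs stay with the systemd unit because log-driver defaults to "none".
  };
  # Defining any container also sets virtualisation.docker.enable = true
  # via this module's config section.
}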

View File

@ -94,6 +94,7 @@ in {
fileSystems."/" = {
device = "/dev/disk/by-label/nixos";
autoResize = true;
fsType = "ext4";
};
boot.growPartition = true;

View File

@ -64,6 +64,7 @@ in rec {
#(all nixos.tests.containers)
(all nixos.tests.containers-imperative)
(all nixos.tests.containers-ipv4)
(all nixos.tests.containers-ipv6)
nixos.tests.chromium.x86_64-linux or []
(all nixos.tests.firefox)
(all nixos.tests.firewall)

View File

@ -33,6 +33,7 @@ in rec {
inherit (nixos'.tests)
containers-imperative
containers-ipv4
containers-ipv6
firewall
ipv6
login

View File

@ -59,6 +59,7 @@ in
dhparams = handleTest ./dhparams.nix {};
dnscrypt-proxy = handleTestOn ["x86_64-linux"] ./dnscrypt-proxy.nix {};
docker = handleTestOn ["x86_64-linux"] ./docker.nix {};
docker-containers = handleTestOn ["x86_64-linux"] ./docker-containers.nix {};
docker-edge = handleTestOn ["x86_64-linux"] ./docker-edge.nix {};
docker-preloader = handleTestOn ["x86_64-linux"] ./docker-preloader.nix {};
docker-registry = handleTest ./docker-registry.nix {};

View File

@ -5,116 +5,109 @@
with import ../lib/testing.nix { inherit system pkgs; };
let
# Test ensures buildbot master comes up correctly and workers can connect
mkBuildbotTest = python: makeTest {
name = "buildbot";
# Test ensures buildbot master comes up correctly and workers can connect
makeTest {
name = "buildbot";
nodes = {
bbmaster = { pkgs, ... }: {
services.buildbot-master = {
enable = true;
package = python.pkgs.buildbot-full;
nodes = {
bbmaster = { pkgs, ... }: {
services.buildbot-master = {
enable = true;
# NOTE: use fake repo due to no internet in hydra ci
factorySteps = [
"steps.Git(repourl='git://gitrepo/fakerepo.git', mode='incremental')"
"steps.ShellCommand(command=['bash', 'fakerepo.sh'])"
];
changeSource = [
"changes.GitPoller('git://gitrepo/fakerepo.git', workdir='gitpoller-workdir', branch='master', pollinterval=300)"
];
};
networking.firewall.allowedTCPPorts = [ 8010 8011 9989 ];
environment.systemPackages = with pkgs; [ git python.pkgs.buildbot-full ];
};
bbworker = { pkgs, ... }: {
services.buildbot-worker = {
enable = true;
masterUrl = "bbmaster:9989";
};
environment.systemPackages = with pkgs; [ git python.pkgs.buildbot-worker ];
};
gitrepo = { pkgs, ... }: {
services.openssh.enable = true;
networking.firewall.allowedTCPPorts = [ 22 9418 ];
environment.systemPackages = with pkgs; [ git ];
# NOTE: use fake repo due to no internet in hydra ci
factorySteps = [
"steps.Git(repourl='git://gitrepo/fakerepo.git', mode='incremental')"
"steps.ShellCommand(command=['bash', 'fakerepo.sh'])"
];
changeSource = [
"changes.GitPoller('git://gitrepo/fakerepo.git', workdir='gitpoller-workdir', branch='master', pollinterval=300)"
];
};
networking.firewall.allowedTCPPorts = [ 8010 8011 9989 ];
environment.systemPackages = with pkgs; [ git python3Packages.buildbot-full ];
};
testScript = ''
#Start up and populate fake repo
$gitrepo->waitForUnit("multi-user.target");
print($gitrepo->execute(" \
git config --global user.name 'Nobody Fakeuser' && \
git config --global user.email 'nobody\@fakerepo.com' && \
rm -rvf /srv/repos/fakerepo.git /tmp/fakerepo && \
mkdir -pv /srv/repos/fakerepo ~/.ssh && \
ssh-keyscan -H gitrepo > ~/.ssh/known_hosts && \
cat ~/.ssh/known_hosts && \
cd /srv/repos/fakerepo && \
git init && \
echo -e '#!/bin/sh\necho fakerepo' > fakerepo.sh && \
cat fakerepo.sh && \
touch .git/git-daemon-export-ok && \
git add fakerepo.sh .git/git-daemon-export-ok && \
git commit -m fakerepo && \
git daemon --verbose --export-all --base-path=/srv/repos --reuseaddr & \
"));
# Test gitrepo
$bbmaster->waitForUnit("network-online.target");
#$bbmaster->execute("nc -z gitrepo 9418");
print($bbmaster->execute(" \
rm -rfv /tmp/fakerepo && \
git clone git://gitrepo/fakerepo /tmp/fakerepo && \
pwd && \
ls -la && \
ls -la /tmp/fakerepo \
"));
# Test start master and connect worker
$bbmaster->waitForUnit("buildbot-master.service");
$bbmaster->waitUntilSucceeds("curl -s --head http://bbmaster:8010") =~ /200 OK/;
$bbworker->waitForUnit("network-online.target");
$bbworker->execute("nc -z bbmaster 8010");
$bbworker->execute("nc -z bbmaster 9989");
$bbworker->waitForUnit("buildbot-worker.service");
print($bbworker->execute("ls -la /home/bbworker/worker"));
# Test stop buildbot master and worker
print($bbmaster->execute(" \
systemctl -l --no-pager status buildbot-master && \
systemctl stop buildbot-master \
"));
$bbworker->fail("nc -z bbmaster 8010");
$bbworker->fail("nc -z bbmaster 9989");
print($bbworker->execute(" \
systemctl -l --no-pager status buildbot-worker && \
systemctl stop buildbot-worker && \
ls -la /home/bbworker/worker \
"));
# Test buildbot daemon mode
$bbmaster->execute("buildbot create-master /tmp");
$bbmaster->execute("mv -fv /tmp/master.cfg.sample /tmp/master.cfg");
$bbmaster->execute("sed -i 's/8010/8011/' /tmp/master.cfg");
$bbmaster->execute("buildbot start /tmp");
$bbworker->execute("nc -z bbmaster 8011");
$bbworker->waitUntilSucceeds("curl -s --head http://bbmaster:8011") =~ /200 OK/;
$bbmaster->execute("buildbot stop /tmp");
$bbworker->fail("nc -z bbmaster 8011");
'';
meta.maintainers = with pkgs.stdenv.lib.maintainers; [ nand0p ];
bbworker = { pkgs, ... }: {
services.buildbot-worker = {
enable = true;
masterUrl = "bbmaster:9989";
};
environment.systemPackages = with pkgs; [ git python3Packages.buildbot-worker ];
};
gitrepo = { pkgs, ... }: {
services.openssh.enable = true;
networking.firewall.allowedTCPPorts = [ 22 9418 ];
environment.systemPackages = with pkgs; [ git ];
};
};
in {
python2 = mkBuildbotTest pkgs.python2;
python3 = mkBuildbotTest pkgs.python3;
testScript = ''
#Start up and populate fake repo
$gitrepo->waitForUnit("multi-user.target");
print($gitrepo->execute(" \
git config --global user.name 'Nobody Fakeuser' && \
git config --global user.email 'nobody\@fakerepo.com' && \
rm -rvf /srv/repos/fakerepo.git /tmp/fakerepo && \
mkdir -pv /srv/repos/fakerepo ~/.ssh && \
ssh-keyscan -H gitrepo > ~/.ssh/known_hosts && \
cat ~/.ssh/known_hosts && \
cd /srv/repos/fakerepo && \
git init && \
echo -e '#!/bin/sh\necho fakerepo' > fakerepo.sh && \
cat fakerepo.sh && \
touch .git/git-daemon-export-ok && \
git add fakerepo.sh .git/git-daemon-export-ok && \
git commit -m fakerepo && \
git daemon --verbose --export-all --base-path=/srv/repos --reuseaddr & \
"));
# Test gitrepo
$bbmaster->waitForUnit("network-online.target");
#$bbmaster->execute("nc -z gitrepo 9418");
print($bbmaster->execute(" \
rm -rfv /tmp/fakerepo && \
git clone git://gitrepo/fakerepo /tmp/fakerepo && \
pwd && \
ls -la && \
ls -la /tmp/fakerepo \
"));
# Test start master and connect worker
$bbmaster->waitForUnit("buildbot-master.service");
$bbmaster->waitUntilSucceeds("curl -s --head http://bbmaster:8010") =~ /200 OK/;
$bbworker->waitForUnit("network-online.target");
$bbworker->execute("nc -z bbmaster 8010");
$bbworker->execute("nc -z bbmaster 9989");
$bbworker->waitForUnit("buildbot-worker.service");
print($bbworker->execute("ls -la /home/bbworker/worker"));
# Test stop buildbot master and worker
print($bbmaster->execute(" \
systemctl -l --no-pager status buildbot-master && \
systemctl stop buildbot-master \
"));
$bbworker->fail("nc -z bbmaster 8010");
$bbworker->fail("nc -z bbmaster 9989");
print($bbworker->execute(" \
systemctl -l --no-pager status buildbot-worker && \
systemctl stop buildbot-worker && \
ls -la /home/bbworker/worker \
"));
# Test buildbot daemon mode
$bbmaster->execute("buildbot create-master /tmp");
$bbmaster->execute("mv -fv /tmp/master.cfg.sample /tmp/master.cfg");
$bbmaster->execute("sed -i 's/8010/8011/' /tmp/master.cfg");
$bbmaster->execute("buildbot start /tmp");
$bbworker->execute("nc -z bbmaster 8011");
$bbworker->waitUntilSucceeds("curl -s --head http://bbmaster:8011") =~ /200 OK/;
$bbmaster->execute("buildbot stop /tmp");
$bbworker->fail("nc -z bbmaster 8011");
'';
meta.maintainers = with pkgs.stdenv.lib.maintainers; [ nand0p ];
}

View File

@ -0,0 +1,29 @@
# Test Docker containers as systemd units
import ./make-test.nix ({ pkgs, lib, ... }: {
name = "docker-containers";
meta = {
maintainers = with lib.maintainers; [ benley ];
};
nodes = {
docker = { pkgs, ... }:
{
virtualisation.docker.enable = true;
virtualisation.dockerPreloader.images = [ pkgs.dockerTools.examples.nginx ];
docker-containers.nginx = {
image = "nginx-container";
ports = ["8181:80"];
};
};
};
testScript = ''
startAll;
$docker->waitForUnit("docker-nginx.service");
$docker->waitForOpenPort(8181);
$docker->waitUntilSucceeds("curl http://localhost:8181|grep Hello");
'';
})

View File

@ -33,11 +33,13 @@ in {
longitude = "0.0";
elevation = 0;
auth_providers = [
{ type = "legacy_api_password"; }
{
type = "legacy_api_password";
api_password = apiPassword;
}
];
};
frontend = { };
http.api_password = apiPassword;
mqtt = { # Use hbmqtt as broker
password = mqttPassword;
};

View File

@ -20,8 +20,7 @@ in pkgs.lib.listToAttrs (pkgs.lib.crossLists (predictable: withNetworkd: {
testScript = ''
print $machine->succeed("ip link");
$machine->succeed("ip link show ${if predictable then "ens3" else "eth0"}");
$machine->fail("ip link show ${if predictable then "eth0" else "ens3"}");
$machine->${if predictable then "fail" else "succeed"}("ip link show eth0 ");
'';
};
}) [[true false] [true false]])

View File

@ -13,13 +13,13 @@ with stdenv.lib;
stdenv.mkDerivation rec {
name = "monero-gui-${version}";
version = "0.13.0.4";
version = "0.14.0.0";
src = fetchFromGitHub {
owner = "monero-project";
repo = "monero-gui";
rev = "v${version}";
sha256 = "142yj5s15bhm300dislq3x5inw1f37shnrd5vyj78jjcvry3wymw";
sha256 = "1l4kx2vidr7bpds43jdbwyaz0q1dy7sricpz061ff1fkappbxdh8";
};
nativeBuildInputs = [ qmake pkgconfig ];

View File

@ -13,15 +13,17 @@ index 79223c0..e80b317 100644
parser.addHelpOption();
parser.process(app);
diff --git a/Logger.cpp b/Logger.cpp
index 660bafc..dae24d4 100644
index 6b1daba..c357762 100644
--- a/Logger.cpp
+++ b/Logger.cpp
@@ -15,7 +15,7 @@ static const QString default_name = "monero-wallet-gui.log";
#elif defined(Q_OS_MAC)
static const QString osPath = QStandardPaths::standardLocations(QStandardPaths::HomeLocation).at(0) + "/Library/Logs";
@@ -28,8 +28,8 @@ static const QString defaultLogName = "monero-wallet-gui.log";
static const QString appFolder = "Library/Logs";
#else // linux + bsd
//HomeLocation = "~"
- static const QString osPath = QStandardPaths::standardLocations(QStandardPaths::HomeLocation).at(0);
- static const QString appFolder = ".bitmonero";
+ static const QString osPath = QStandardPaths::standardLocations(QStandardPaths::CacheLocation).at(0);
+ static const QString appFolder = "bitmonero";
#endif

View File

@ -11,12 +11,12 @@ with stdenv.lib;
stdenv.mkDerivation rec {
name = "monero-${version}";
version = "0.13.0.4";
version = "0.14.0.2";
src = fetchgit {
url = "https://github.com/monero-project/monero.git";
rev = "v${version}";
sha256 = "1ambgakapijhsi1pd70vw8vvnlwa3nid944lqkbfq3wl25lmc70d";
sha256 = "1471iy6c8dfdqcmcwcp0m7fp9xl74dcm5hqlfdfi217abhawfs8k";
};
nativeBuildInputs = [ cmake pkgconfig git ];

View File

@ -1,18 +1,19 @@
{
stdenv, fetchurl, docbook_xsl,
docbook_xml_dtd_45, python, pygments,
libxslt
{
stdenv, fetchFromGitHub, docbook_xsl,
docbook_xml_dtd_45, python, pygments,
libxslt
}:
stdenv.mkDerivation rec {
version = "6.12.0";
name = "csound-manual-${version}";
src = fetchurl {
url = "https://github.com/csound/manual/archive/${version}.tar.gz";
sha256 = "1v1scp468rnfbcajnp020kdj8zigimc2mbcwzxxqi8sf8paccdrp";
};
pname = "csound-manual";
version = "unstable-2019-02-22";
src = fetchFromGitHub {
owner = "csound";
repo = "manual";
rev = "3b0bdc83f9245261b4b85a57c3ed636d5d924a4f";
sha256 = "074byjhaxraapyg54dxgg7hi1d4978aa9c1rmyi50p970nsxnacn";
};
prePatch = ''
substituteInPlace manual.xml \
@ -41,4 +42,3 @@ stdenv.mkDerivation rec {
platforms = stdenv.lib.platforms.all;
};
}

View File

@ -18,7 +18,7 @@
stdenv.mkDerivation rec {
name = "muse-sequencer-${version}";
version = "3.0.2";
version = "3.1pre1";
meta = with stdenv.lib; {
homepage = http://www.muse-sequencer.org;
@ -38,11 +38,16 @@ stdenv.mkDerivation rec {
fetchFromGitHub {
owner = "muse-sequencer";
repo = "muse";
rev = "02d9dc6abd757c3c1783fdd46dacd3c4ef2c0a6d";
sha256 = "0pn0mcg79z3bhjwxbss3ylypdz3gg70q5d1ij3x8yw65ryxbqf51";
rev = "2167ae053c16a633d8377acdb1debaac10932838";
sha256 = "0rsdx8lvcbz5bapnjvypw8h8bq587s9z8cf2znqrk6ah38s6fsrf";
};
nativeBuildInputs = [
pkgconfig
gitAndTools.gitFull
];
buildInputs = [
libjack2
qt5.qtsvg
@ -57,8 +62,6 @@ stdenv.mkDerivation rec {
lash
dssi
liblo
pkgconfig
gitAndTools.gitFull
];
sourceRoot = "source/muse3";

View File

@ -6,11 +6,11 @@
stdenv.mkDerivation rec {
name = "musescore-${version}";
version = "3.0.1";
version = "3.0.5";
src = fetchzip {
url = "https://download.musescore.com/releases/MuseScore-${version}/MuseScore-${version}.zip";
sha256 = "1l9djxq5hdfqiya2jwcag7qq4dhmb9qcv68y27dlza19imrnim80";
sha256 = "1pbf6v0l3nixxr8k5igwhj09wnqvw92av6q6yjrbb3kyjh5br2d8";
stripRoot = false;
};

View File

@ -1,4 +1,4 @@
{ stdenv, fetchFromGitHub, pkgconfig, meson, gnome3, at-spi2-core, dbus, gst_all_1, sphinxbase, pocketsphinx, ninja, gettext, appstream-glib, python3, glib, gobject-introspection, gsettings-desktop-schemas, itstool, wrapGAppsHook, makeWrapper, hicolor-icon-theme }:
{ stdenv, fetchFromGitHub, pkgconfig, meson, gtk3, at-spi2-core, dbus, gst_all_1, sphinxbase, pocketsphinx, ninja, gettext, appstream-glib, python3, glib, gobject-introspection, gsettings-desktop-schemas, itstool, wrapGAppsHook, makeWrapper, hicolor-icon-theme }:
stdenv.mkDerivation rec {
pname = "parlatype";
@ -24,7 +24,7 @@ stdenv.mkDerivation rec {
];
buildInputs = [
gnome3.gtk
gtk3
at-spi2-core
dbus
gst_all_1.gstreamer

View File

@ -0,0 +1,50 @@
{ stdenv, fetchFromGitHub, audiofile, libvorbis, fltk, fftw, fftwFloat,
minixml, pkgconfig, libmad, libjack2, portaudio, libsamplerate }:
stdenv.mkDerivation {
pname = "paulstretch";
version = "2.2-2";
src = fetchFromGitHub {
owner = "paulnasca";
repo = "paulstretch_cpp";
rev = "7f5c3993abe420661ea0b808304b0e2b4b0048c5";
sha256 = "06dy03dbz1yznhsn0xvsnkpc5drzwrgxbxdx0hfpsjn2xcg0jrnc";
};
nativeBuildInputs = [ pkgconfig ];
buildInputs = [
audiofile
libvorbis
fltk
fftw
fftwFloat
minixml
libmad
libjack2
portaudio
libsamplerate
];
buildPhase = ''
bash compile_linux_fftw_jack.sh
'';
installPhase = ''
install -Dm555 ./paulstretch $out/bin/paulstretch
'';
meta = with stdenv.lib; {
description = "Produces high quality extreme sound stretching";
longDescription = ''
This is a program for stretching the audio. It is suitable only for
extreme sound stretching of the audio (like 50x) and for applying
special effects by "spectral smoothing" the sounds.
It can transform any sound/music to a texture.
'';
homepage = http://hypermammut.sourceforge.net/paulstretch/;
platforms = platforms.linux;
license = licenses.gpl2;
};
}

View File

@ -29,11 +29,11 @@
# handle that.
stdenv.mkDerivation rec {
name = "qmmp-1.2.5";
name = "qmmp-1.3.1";
src = fetchurl {
url = "http://qmmp.ylsoftware.com/files/${name}.tar.bz2";
sha256 = "1xs8kg65088yzdhdkymmknkp1s4adzv095f5jhjvy62s8ymyjvnx";
sha256 = "1dmybzibpr6hpr2iv1wvrjgww842mng2x0rh1mr8gs8j191xvlhw";
};
buildInputs =

View File

@ -1,6 +1,6 @@
{ stdenv, fetchurl, autoPatchelfHook, makeWrapper
, alsaLib, xorg
, gnome3, pango, gdk_pixbuf, cairo, glib, freetype
, gnome3, gtk3, pango, gdk_pixbuf, cairo, glib, freetype
, libpulseaudio, xdg_utils
}:
@ -31,7 +31,7 @@ stdenv.mkDerivation rec {
];
runtimeDependencies = [
gnome3.gtk
gtk3
];
dontBuild = true;

View File

@ -4,6 +4,7 @@
, perlPackages
, gtk3
, intltool
, libpeas
, libsoup
, gnome3
, totem-pl-parser
@ -48,7 +49,7 @@ in stdenv.mkDerivation rec {
json-glib
gtk3
gnome3.libpeas
libpeas
totem-pl-parser
gnome3.adwaita-icon-theme

View File

@ -1,6 +1,6 @@
{ stdenv, fetchFromGitLab, substituteAll, meson, ninja, pkgconfig, vala_0_40, gettext
, gnome3, libnotify, itstool, glib, gtk3, libxml2
, coreutils, libsecret, pcre, libxkbcommon, wrapGAppsHook
, coreutils, libpeas, libsecret, pcre, libxkbcommon, wrapGAppsHook
, libpthreadstubs, libXdmcp, epoxy, at-spi2-core, dbus, libgpgerror
, appstream-glib, desktop-file-utils, duplicity
}:
@ -35,7 +35,7 @@ stdenv.mkDerivation rec {
];
buildInputs = [
libnotify gnome3.libpeas glib gtk3 libsecret
libnotify libpeas glib gtk3 libsecret
pcre libxkbcommon libpthreadstubs libXdmcp epoxy gnome3.nautilus
at-spi2-core dbus gnome3.gnome-online-accounts libgpgerror
];

View File

@ -18,9 +18,9 @@ let
sha256Hash = "0d7d6n7n1zzhxpdykbwwbrw139mqxkp20d4l0570pk7975p1s2q9";
};
latestVersion = { # canary & dev
version = "3.5.0.6"; # "Android Studio 3.5 Canary 7"
build = "183.5346365";
sha256Hash = "0dfkhzsxabrv8cwgyv3gicpglgpccmi1ig5shlhp6a006awgfyj0";
version = "3.5.0.7"; # "Android Studio 3.5 Canary 8"
build = "191.5375575";
sha256Hash = "0vssynvj0j4xbin9h95lciilc3j9mkm53vwzxxr3kqxwl74qx4mj";
};
in rec {
# Old alias (TODO @primeos: Remove after 19.03 is branched off):

View File

@ -17,6 +17,7 @@
, json-glib
, jsonrpc-glib
, libdazzle
, libpeas
, libxml2
, meson
, ninja
@ -64,7 +65,7 @@ in stdenv.mkDerivation {
flatpak
gnome3.devhelp
libgit2-glib
gnome3.libpeas
libpeas
vte
gspell
gtk3

View File

@ -1,6 +1,6 @@
{ stdenv, fetchgit, gnome3, at-spi2-core,
{ stdenv, fetchgit, gnome3, gtksourceview3, at-spi2-core, gtksourceviewmm,
boost, epoxy, cmake, aspell, llvmPackages, libgit2, pkgconfig, pcre,
libXdmcp, libxkbcommon, libpthreadstubs, wrapGAppsHook, aspellDicts,
libXdmcp, libxkbcommon, libpthreadstubs, wrapGAppsHook, aspellDicts, gtkmm3,
coreutils, glibc, dbus, openssl, libxml2, gnumake, ctags }:
with stdenv.lib;
@ -29,7 +29,7 @@ stdenv.mkDerivation rec {
dbus
openssl
libxml2
gnome3.gtksourceview
gtksourceview3
at-spi2-core
pcre
epoxy
@ -39,9 +39,9 @@ stdenv.mkDerivation rec {
aspell
libgit2
libxkbcommon
gnome3.gtkmm3
gtkmm3
libpthreadstubs
gnome3.gtksourceviewmm
gtksourceviewmm
llvmPackages.clang.cc
llvmPackages.lldb
gnome3.dconf

View File

@ -2,14 +2,14 @@
let
pname = "kdev-php";
version = "5.3.1";
version = "5.3.2";
in
stdenv.mkDerivation rec {
name = "${pname}-${version}";
src = fetchurl {
url = "https://github.com/KDE/${pname}/archive/v${version}.tar.gz";
sha256 = "1xiz4v6w30dsa7l4nk3jw3hxpkx71b0yaaj2k8s7xzgjif824bgl";
sha256 = "0yjn7y7al2xs8g0mrjvcym8gbjy4wmiv7lsljcrasjd7ymag1wgs";
};
nativeBuildInputs = [ cmake extra-cmake-modules ];

View File

@ -2,14 +2,14 @@
let
pname = "kdev-python";
version = "5.3.1";
version = "5.3.2";
in
stdenv.mkDerivation rec {
name = "${pname}-${version}";
src = fetchurl {
url = "https://github.com/KDE/${pname}/archive/v${version}.tar.gz";
sha256 = "11hf8n6vrlaz31c0p3xbnf0df2q5j6ykgc9ip0l5g33kadwn5b9j";
sha256 = "0gqv1abzfpxkrf538rb62d2291lmlra8rghm9q9r3x8a46wh96zm";
};
cmakeFlags = [

View File

@ -9,7 +9,7 @@
let
pname = "kdevelop";
version = "5.3.1";
version = "5.3.2";
qtVersion = "5.${lib.versions.minor qtbase.version}";
in
mkDerivation rec {
@ -17,7 +17,7 @@ mkDerivation rec {
src = fetchurl {
url = "mirror://kde/stable/${pname}/${version}/src/${name}.tar.xz";
sha256 = "1098ra7qpal6578hsv20kvxc63v47sp85wjhqr5rgzr2fm7jf6fr";
sha256 = "0akgdnvrab6mbwnmvgzsplk0qh83k1hnm5xc06yxr1s1a5sxbk08";
};
nativeBuildInputs = [

View File

@ -0,0 +1,63 @@
{ lib, stdenv, python3, fetchFromGitHub, makeWrapper, buildEnv, aspellDicts
# Use `lib.collect lib.isDerivation aspellDicts;` to make all dictionaries
# available.
, enchantAspellDicts ? with aspellDicts; [ en en-computers en-science ]
}:
let
version = "7.0.4";
python = let
packageOverrides = self: super: {
markdown = super.markdown.overridePythonAttrs(old: rec {
src = super.fetchPypi {
version = "3.0.1";
pname = "Markdown";
sha256 = "d02e0f9b04c500cde6637c11ad7c72671f359b87b9fe924b2383649d8841db7c";
};
});
chardet = super.chardet.overridePythonAttrs(old: rec {
src = super.fetchPypi {
version = "2.3.0";
pname = "chardet";
sha256 = "e53e38b3a4afe6d1132de62b7400a4ac363452dc5dfcf8d88e8e0cce663c68aa";
};
});
};
in python3.override { inherit packageOverrides; };
pythonEnv = python.withPackages (ps: with ps; [
pyqt5 docutils pyenchant Markups markdown pygments chardet
]);
in python.pkgs.buildPythonApplication {
inherit version;
pname = "retext";
src = fetchFromGitHub {
owner = "retext-project";
repo = "retext";
rev = "${version}";
sha256 = "1zcapywspc9v5zf5cxqkcy019np9n41gmryqixj66zsvd544c6si";
};
doCheck = false;
nativeBuildInputs = [ makeWrapper ];
propagatedBuildInputs = [ pythonEnv ];
postInstall = ''
mv $out/bin/retext $out/bin/.retext
makeWrapper "$out/bin/.retext" "$out/bin/retext" \
--set ASPELL_CONF "dict-dir ${buildEnv {
name = "aspell-all-dicts";
paths = map (path: "${path}/lib/aspell") enchantAspellDicts;
}}"
'';
meta = with stdenv.lib; {
homepage = https://github.com/retext-project/retext/;
description = "Simple but powerful editor for Markdown and reStructuredText";
license = licenses.gpl3;
maintainers = with maintainers; [ klntsky ];
platforms = platforms.unix;
};
}

View File

@ -17,10 +17,10 @@
stdenv.mkDerivation rec {
name = "drawpile-${version}";
version = "2.1.3";
version = "2.1.4";
src = fetchurl {
url = "https://drawpile.net/files/src/drawpile-${version}.tar.gz";
sha256 = "0fngj5hfinj66xpij2h3ag79mgmqcfrjpwynxdbjr5brch25ldwj";
sha256 = "0n54p5day6gnmxqmgx4yd7q6y0mgv1nwh84yrz5r953yhd9m37rn";
};
nativeBuildInputs = [
extra-cmake-modules

View File

@ -0,0 +1,2 @@
source 'https://rubygems.org'
gem 'image_optim'

View File

@ -0,0 +1,23 @@
GEM
remote: https://rubygems.org/
specs:
exifr (1.3.6)
fspath (3.1.0)
image_optim (0.26.3)
exifr (~> 1.2, >= 1.2.2)
fspath (~> 3.0)
image_size (>= 1.5, < 3)
in_threads (~> 1.3)
progress (~> 3.0, >= 3.0.1)
image_size (2.0.0)
in_threads (1.5.1)
progress (3.5.0)
PLATFORMS
ruby
DEPENDENCIES
image_optim
BUNDLED WITH
1.16.3

View File

@ -0,0 +1,66 @@
{ lib, bundlerApp, fetchurl, ruby, makeWrapper,
withPngcrush ? true, pngcrush ? null,
withPngout ? true, pngout ? null,
withAdvpng ? true, advancecomp ? null,
withOptipng ? true, optipng ? null,
withPngquant ? true, pngquant ? null,
withJhead ? true, jhead ? null,
withJpegoptim ? true, jpegoptim ? null,
withJpegrecompress ? true, jpeg-archive ? null,
withJpegtran ? true, libjpeg ? null,
withGifsicle ? true, gifsicle ? null,
withSvgo ? true, svgo ? null
}:
assert withPngcrush -> pngcrush != null;
assert withPngout -> pngout != null;
assert withAdvpng -> advancecomp != null;
assert withOptipng -> optipng != null;
assert withPngquant -> pngquant != null;
assert withJhead -> jhead != null;
assert withJpegoptim -> jpegoptim != null;
assert withJpegrecompress -> jpeg-archive != null;
assert withJpegtran -> libjpeg != null;
assert withGifsicle -> gifsicle != null;
assert withSvgo -> svgo != null;
with lib;
let
optionalDepsPath = []
++ optional withPngcrush pngcrush
++ optional withPngout pngout
++ optional withAdvpng advancecomp
++ optional withOptipng optipng
++ optional withPngquant pngquant
++ optional withJhead jhead
++ optional withJpegoptim jpegoptim
++ optional withJpegrecompress jpeg-archive
++ optional withJpegtran libjpeg
++ optional withGifsicle gifsicle
++ optional withSvgo svgo;
in
bundlerApp {
pname = "image_optim";
gemdir = ./.;
inherit ruby;
exes = [ "image_optim" ];
buildInputs = [ makeWrapper ];
postBuild = ''
wrapProgram $out/bin/image_optim \
--prefix PATH : ${makeBinPath optionalDepsPath}
'';
meta = with lib; {
description = "Command line tool and ruby interface to optimize (lossless compress, optionally lossy) jpeg, png, gif and svg images using external utilities (advpng, gifsicle, jhead, jpeg-recompress, jpegoptim, jpegrescan, jpegtran, optipng, pngcrush, pngout, pngquant, svgo)";
homepage = http://github.com/toy/image_optim;
license = licenses.mit;
maintainers = with maintainers; [ srghma ];
platforms = platforms.all;
};
}

View File

@ -0,0 +1,51 @@
{
exifr = {
source = {
remotes = ["https://rubygems.org"];
sha256 = "0q2abhiyvgfv23i0izbskjxcqaxiw9bfg6s57qgn4li4yxqpwpfg";
type = "gem";
};
version = "1.3.6";
};
fspath = {
source = {
remotes = ["https://rubygems.org"];
sha256 = "1vjn9sy4hklr2d5wxmj5x1ry31dfq3sjp779wyprb3nbbdmra1sc";
type = "gem";
};
version = "3.1.0";
};
image_optim = {
dependencies = ["exifr" "fspath" "image_size" "in_threads" "progress"];
source = {
remotes = ["https://rubygems.org"];
sha256 = "082w9qcyy9j6m6s2pknfdcik7l2qch4j48axs13m06l4s1hz0dmg";
type = "gem";
};
version = "0.26.3";
};
image_size = {
source = {
remotes = ["https://rubygems.org"];
sha256 = "0bcn7nc6qix3w4sf7xd557lnsgjniqa7qvz7nnznx70m8qfbc7ig";
type = "gem";
};
version = "2.0.0";
};
in_threads = {
source = {
remotes = ["https://rubygems.org"];
sha256 = "14hqm59sgqi91ag187zwpgwi58xckjkk58m031ghkp0csl8l9mkx";
type = "gem";
};
version = "1.5.1";
};
progress = {
source = {
remotes = ["https://rubygems.org"];
sha256 = "1yrzq4v5sp7cg4nbgqh11k3d1czcllfz98dcdrxrsjxwq5ziiw0p";
type = "gem";
};
version = "3.5.0";
};
}

View File

@ -0,0 +1,9 @@
#!/usr/bin/env nix-shell
#!nix-shell -i bash -p bundix bundler
SCRIPT_DIR=$(dirname "$(readlink -f "$BASH_SOURCE")")
cd $SCRIPT_DIR
bundle lock --update
bundix

View File

@ -0,0 +1,42 @@
{ lib, stdenv, fetchFromGitHub, mozjpeg, makeWrapper, coreutils, parallel, findutils }:
stdenv.mkDerivation rec {
name = "jpeg-archive-${version}";
version = "2.2.0"; # can be found here https://github.com/danielgtaylor/jpeg-archive/blob/master/src/util.c#L15
# update with
# nix-prefetch-git https://github.com/danielgtaylor/jpeg-archive
src = fetchFromGitHub {
owner = "danielgtaylor";
repo = "jpeg-archive";
rev = "8da4bf76b6c3c0e11e4941294bfc1857c119419b";
sha256 = "1639y9qp2ls80fzimwmwds792q8rq5p6c14c0r4jswx4yp6dcs33";
};
nativeBuildInputs = [ makeWrapper ];
buildInputs = [ mozjpeg ];
prePatch = ''
# allow override LIBJPEG
substituteInPlace Makefile --replace 'LIBJPEG =' 'LIBJPEG ?='
'';
makeFlags = [
"PREFIX=$(out)"
"MOZJPEG_PREFIX=${mozjpeg}"
"LIBJPEG=${mozjpeg}/lib/libjpeg.so"
];
postInstall = ''
wrapProgram $out/bin/jpeg-archive \
--set PATH "$out/bin:${coreutils}/bin:${parallel}/bin:${findutils}/bin"
'';
meta = with stdenv.lib; {
description = "Utilities for archiving photos for saving to long term storage or serving over the web";
homepage = "https://github.com/danielgtaylor/jpeg-archive";
# license = ...; # mixed?
maintainers = [ maintainers.srghma ];
platforms = platforms.all;
};
}

View File

@ -5,27 +5,27 @@
, boost, libraw, fftw, eigen, exiv2, libheif, lcms2, gsl, openexr, giflib
, openjpeg, opencolorio, vc, poppler, curl, ilmbase
, qtmultimedia, qtx11extras
, python3
, python3Packages
}:
let
major = "4.1";
minor = "7";
patch = "101";
minor = "8";
patch = null;
in
mkDerivation rec {
name = "krita-${version}";
version = "${major}.${minor}.${patch}";
version = "${major}.${minor}";
src = fetchurl {
url = "https://download.kde.org/stable/krita/${major}.${minor}/${name}.tar.gz";
sha256 = "0pvghb17vj3y19wa1n1zfg3yl5206ir3y45znrgdgdw076m5pjav";
sha256 = "0h2rplc76r82b8smk61zci1ijj9xkjmf20pdqa8fc2lz4zicjxh4";
};
nativeBuildInputs = [ cmake extra-cmake-modules ];
nativeBuildInputs = [ cmake extra-cmake-modules python3Packages.sip ];
buildInputs = [
karchive kconfig kwidgetsaddons kcompletion kcoreaddons kguiaddons
@ -33,11 +33,17 @@ mkDerivation rec {
boost libraw fftw eigen exiv2 lcms2 gsl openexr libheif giflib
openjpeg opencolorio poppler curl ilmbase
qtmultimedia qtx11extras
python3
python3Packages.pyqt5
] ++ lib.optional (stdenv.hostPlatform.isi686 || stdenv.hostPlatform.isx86_64) vc;
NIX_CFLAGS_COMPILE = [ "-I${ilmbase.dev}/include/OpenEXR" ];
cmakeFlags = [
"-DPYQT5_SIP_DIR=${python3Packages.pyqt5}/share/sip/PyQt5"
"-DPYQT_SIP_DIR_OVERRIDE=${python3Packages.pyqt5}/share/sip/PyQt5"
"-DCMAKE_BUILD_TYPE=RelWithDebInfo"
];
meta = with lib; {
description = "A free and open source painting application";
homepage = https://krita.org/;

View File

@ -7,7 +7,7 @@
# Gtk deps
# upstream gImagereader supports Qt too
, gtk3, gobject-introspection, wrapGAppsHook
, gnome3, gtkspell3, gtkspellmm, cairomm
, gnome3, gtkmm3, gtksourceview3, gtksourceviewmm, gtkspell3, gtkspellmm, cairomm
}:
let
@ -48,11 +48,11 @@ stdenv.mkDerivation rec {
poppler
# Gtk specific
gnome3.gtkmm
gtkmm3
gtkspell3
gtkspellmm
gnome3.gtksourceview
gnome3.gtksourceviewmm
gtksourceview3
gtksourceviewmm
cairomm
json-glib
];

View File

@ -1,4 +1,4 @@
{ stdenv, fetchurl, gnome3, intltool, pkgconfig, texinfo, hicolor-icon-theme }:
{ stdenv, fetchurl, gtk3, intltool, pkgconfig, texinfo, hicolor-icon-theme }:
stdenv.mkDerivation rec {
name = "gxmessage-${version}";
@ -10,7 +10,7 @@ stdenv.mkDerivation rec {
};
nativeBuildInputs = [ pkgconfig ];
buildInputs = [ intltool gnome3.gtk texinfo hicolor-icon-theme ];
buildInputs = [ intltool gtk3 texinfo hicolor-icon-theme ];
meta = {
description = "A GTK enabled dropin replacement for xmessage";

View File

@ -1,14 +1,14 @@
{ stdenv, fetchurl, sane-backends, qtbase, qtsvg, nss, autoPatchelfHook, lib, makeWrapper }:
let
version = "5.2.20";
version = "5.3.22";
in stdenv.mkDerivation {
name = "masterpdfeditor-${version}";
src = fetchurl {
url = "https://code-industry.net/public/master-pdf-editor-${version}_qt5.amd64.tar.gz";
sha256 = "1399zv3m7a2rxvmy213f5yii3krsqyahpwdzsw8j535xrb9f3z1m";
sha256 = "0cnw01g3j5l07f2lng604mx8qqm61i5sflryj1vya2gkjmrphkan";
};
nativeBuildInputs = [ autoPatchelfHook makeWrapper ];

View File

@ -1,26 +1,21 @@
{ stdenv, fetchFromGitHub, git, gnupg, pass, qtbase, qtsvg, qttools, qmake, makeWrapper }:
stdenv.mkDerivation rec {
name = "qtpass-${version}";
version = "1.2.1";
pname = "qtpass";
version = "1.2.3";
src = fetchFromGitHub {
owner = "IJHack";
repo = "QtPass";
rev = "v${version}";
sha256 = "0pp38b3fifkfwqcb6vi194ccgb8j3zc8j8jq8ww5ib0wvhldzsg8";
sha256 = "1vfhfyccrxq9snyvayqfzm5rqik8ny2gysyv7nipc91kvhq3bhky";
};
patches = [ ./hidpi.patch ];
buildInputs = [ git gnupg pass qtbase qtsvg qttools ];
nativeBuildInputs = [ makeWrapper qmake ];
postPatch = ''
substituteInPlace qtpass.pro --replace "SUBDIRS += src tests main" "SUBDIRS += src main"
substituteInPlace qtpass.pro --replace "main.depends = tests" "main.depends = src"
'';
enableParallelBuilding = true;
postInstall = ''
install -D qtpass.desktop $out/share/applications/qtpass.desktop

View File

@ -1,13 +0,0 @@
diff --git a/main/main.cpp b/main/main.cpp
index 8a18409c..1cddd911 100644
--- a/main/main.cpp
+++ b/main/main.cpp
@@ -35,7 +35,7 @@
* @return
*/
int main(int argc, char *argv[]) {
- qputenv("QT_AUTO_SCREEN_SCALE_FACTOR", "1");
+ QApplication::setAttribute(Qt::AA_EnableHighDpiScaling);
QString text = "";
for (int i = 1; i < argc; ++i) {
if (i > 1)

View File

@ -1,20 +1,20 @@
{ stdenv, fetchFromGitHub, python3Packages }:
python3Packages.buildPythonApplication rec {
version = "0.20.0";
version = "0.21.0";
name = "toot-${version}";
src = fetchFromGitHub {
owner = "ihabunek";
repo = "toot";
rev = "${version}";
sha256 = "0s5i6fjip5kvvyb59yndi2rhgn962lr0g9b0pi5w2aqnv1mwjbfh";
sha256 = "03s81i9rz7dn33r13p7j2c7yw874hkm64x7myddiqw9lc21fyzql";
};
checkInputs = with python3Packages; [ pytest ];
propagatedBuildInputs = with python3Packages;
[ requests beautifulsoup4 future ];
[ requests beautifulsoup4 future wcwidth ];
checkPhase = ''
py.test

View File

@ -3,17 +3,18 @@
, traySupport ? true, libdbusmenu-gtk3
, pulseSupport ? false, libpulseaudio
, nlSupport ? true, libnl
, udevSupport ? true, udev
, swaySupport ? true, sway
}:
stdenv.mkDerivation rec {
name = "waybar-${version}";
version = "0.4.0";
version = "0.5.0";
src = fetchFromGitHub {
owner = "Alexays";
repo = "Waybar";
rev = version;
sha256 = "0vkx1b6bgr75wkx89ppxhg4103vl2g0sky22npmfkvbkpgh8dj38";
sha256 = "006pzx4crsqn9vk28g87306xh3jrfwk4ib9cmsxqrxy8v0kl2s4g";
};
nativeBuildInputs = [
@ -25,19 +26,21 @@
++ optional traySupport libdbusmenu-gtk3
++ optional pulseSupport libpulseaudio
++ optional nlSupport libnl
++ optional udevSupport udev
++ optional swaySupport sway;
mesonFlags = [
"-Ddbusmenu-gtk=${ if traySupport then "enabled" else "disabled" }"
"-Dpulseaudio=${ if pulseSupport then "enabled" else "disabled" }"
"-Dlibnl=${ if nlSupport then "enabled" else "disabled" }"
"-Dlibudev=${ if udevSupport then "enabled" else "disabled" }"
"-Dout=${placeholder "out"}"
];
meta = with stdenv.lib; {
description = "Highly customizable Wayland bar for Sway and Wlroots based compositors";
license = licenses.mit;
maintainers = [ maintainers.FlorianFranzen ];
maintainers = with maintainers; [ FlorianFranzen minijackson ];
platforms = platforms.unix;
};
}

View File

@ -1,20 +1,20 @@
{ stdenv, fetchFromGitHub, rustPlatform, cmake, pkgconfig, openssl, CoreServices, cf-private }:
rustPlatform.buildRustPackage rec {
name = "zola-${version}";
version = "0.5.1";
pname = "zola";
version = "0.6.0";
src = fetchFromGitHub {
owner = "getzola";
repo = "zola";
repo = pname;
rev = "v${version}";
sha256 = "1jj6yfb3qkfq3nwcxfrc7k1gqyls873imxgpifbwjx9slg6ssis9";
sha256 = "11y5gb6lx040ax4b16fr3whkj4vmv8hlkvb50h58gs77payglf6l";
};
cargoSha256 = "1hn2l25fariidgdr32mfx2yqb3g8xk4qafs614bdjiyvfrb7j752";
cargoSha256 = "19hqkj27dbsy4pi0i8mjjlhi4351yifvc6zln6scc2nd60p251h6";
nativeBuildInputs = [ cmake pkgconfig openssl ];
buildInputs = stdenv.lib.optionals stdenv.isDarwin [ CoreServices cf-private ];
nativeBuildInputs = [ cmake pkgconfig ];
buildInputs = [ openssl ] ++ stdenv.lib.optionals stdenv.isDarwin [ CoreServices cf-private ];
postInstall = ''
install -D -m 444 completions/zola.bash \

View File

@ -0,0 +1,25 @@
{ stdenv, buildGoPackage, fetchFromGitHub, fetchgx }:
buildGoPackage rec {
name = "brig-${version}";
version = "0.3.0";
rev = "v${version}";
goPackagePath = "github.com/sahib/brig";
subPackages = ["."];
src = fetchFromGitHub {
owner = "sahib";
repo = "brig";
inherit rev;
sha256 = "01hpb6cvq8cw21ka74jllggkv5pavc0sbl1207x32gzxslw3gsvy";
};
meta = with stdenv.lib; {
description = "File synchronization on top of ipfs with git like interface and FUSE filesystem";
homepage = https://github.com/sahib/brig;
license = licenses.agpl3;
platforms = platforms.unix;
maintainers = with maintainers; [ offline ];
};
}

View File

@ -20,10 +20,12 @@
# optional dependencies
, libgcrypt ? null # gnomeSupport || cupsSupport
, libva ? null # useVaapi
# package customization
, enableNaCl ? false
, enableWideVine ? false
, useVaapi ? true
, gnomeSupport ? false, gnome ? null
, gnomeKeyringSupport ? false, libgnome-keyring3 ? null
, proprietaryCodecs ? true
@ -126,6 +128,7 @@ let
] ++ optional gnomeKeyringSupport libgnome-keyring3
++ optionals gnomeSupport [ gnome.GConf libgcrypt ]
++ optionals cupsSupport [ libgcrypt cups ]
++ optional useVaapi libva
++ optional pulseSupport libpulseaudio
++ optional (versionAtLeast version "72") jdk.jre;
@ -143,6 +146,9 @@ let
# - https://github.com/chromium/chromium/search?q=GCC&s=committer-date&type=Commits
#
# ++ optional (versionRange "68" "72") ( githubPatch "<patch>" "0000000000000000000000000000000000000000000000000000000000000000" )
] ++ optionals (useVaapi) [
# source: https://aur.archlinux.org/cgit/aur.git/plain/chromium-vaapi.patch?h=chromium-vaapi
./patches/chromium-vaapi.patch
] ++ optionals (!stdenv.cc.isClang && (versionRange "71" "72")) [
( githubPatch "65be571f6ac2f7942b4df9e50b24da517f829eec" "1sqv0aba0mpdi4x4f21zdkxz2cf8ji55ffgbfcr88c5gcg0qn2jh" )
] ++ optional stdenv.isAarch64
@ -260,6 +266,8 @@ let
proprietary_codecs = true;
enable_hangout_services_extension = true;
ffmpeg_branding = "Chrome";
} // optionalAttrs useVaapi {
use_vaapi = true;
} // optionalAttrs pulseSupport {
use_pulseaudio = true;
link_pulseaudio = true;

View File

@ -1,6 +1,7 @@
{ newScope, config, stdenv, llvmPackages, gcc8Stdenv, llvmPackages_7
, makeWrapper, makeDesktopItem, ed
, glib, gtk3, gnome3, gsettings-desktop-schemas
, libva ? null
# package customization
, channel ? "stable"
@ -10,6 +11,7 @@
, proprietaryCodecs ? true
, enablePepperFlash ? false
, enableWideVine ? false
, useVaapi ? true
, cupsSupport ? true
, pulseSupport ? config.pulseaudio or stdenv.isLinux
, commandLineArgs ? ""
@ -32,6 +34,7 @@ in let
mkChromiumDerivation = callPackage ./common.nix {
inherit enableNaCl gnomeSupport gnome
gnomeKeyringSupport proprietaryCodecs cupsSupport pulseSupport
useVaapi
enableWideVine;
};
@ -92,6 +95,10 @@ in stdenv.mkDerivation {
buildCommand = let
browserBinary = "${chromium.browser}/libexec/chromium/chromium";
getWrapperFlags = plugin: "$(< \"${plugin}/nix-support/wrapper-flags\")";
libPath = stdenv.lib.makeLibraryPath ([]
++ stdenv.lib.optional useVaapi libva
);
in with stdenv.lib; ''
mkdir -p "$out/bin"
@ -109,6 +116,8 @@ in stdenv.mkDerivation {
export CHROME_DEVEL_SANDBOX="$sandbox/bin/${sandboxExecutableName}"
fi
export LD_LIBRARY_PATH="\$LD_LIBRARY_PATH:${libPath}"
# libredirect causes chromium to deadlock on startup
export LD_PRELOAD="\$(echo -n "\$LD_PRELOAD" | tr ':' '\n' | grep -v /lib/libredirect\\\\.so$ | tr '\n' ':')"

View File

@ -0,0 +1,117 @@
From abc7295ca1653c85472916909f0eb76e28e79a58 Mon Sep 17 00:00:00 2001
From: Akarshan Biswas <akarshan.biswas@gmail.com>
Date: Thu, 24 Jan 2019 12:45:29 +0530
Subject: [PATCH] Enable mojo with VDA2 on Linux
---
chrome/browser/about_flags.cc | 8 ++++----
chrome/browser/flag_descriptions.cc | 9 +++++++--
chrome/browser/flag_descriptions.h | 10 ++++++++--
gpu/config/software_rendering_list.json | 3 ++-
media/media_options.gni | 9 ++++++---
media/mojo/services/gpu_mojo_media_client.cc | 4 ++--
6 files changed, 29 insertions(+), 14 deletions(-)
diff --git a/chrome/browser/about_flags.cc b/chrome/browser/about_flags.cc
index 0a84c6ac1..be2aa1d8b 100644
--- a/chrome/browser/about_flags.cc
+++ b/chrome/browser/about_flags.cc
@@ -1714,7 +1714,7 @@ const FeatureEntry kFeatureEntries[] = {
"disable-accelerated-video-decode",
flag_descriptions::kAcceleratedVideoDecodeName,
flag_descriptions::kAcceleratedVideoDecodeDescription,
- kOsMac | kOsWin | kOsCrOS | kOsAndroid,
+ kOsMac | kOsWin | kOsCrOS | kOsAndroid | kOsLinux,
SINGLE_DISABLE_VALUE_TYPE(switches::kDisableAcceleratedVideoDecode),
},
#if defined(OS_WIN)
@@ -2345,12 +2345,12 @@ const FeatureEntry kFeatureEntries[] = {
FEATURE_VALUE_TYPE(service_manager::features::kXRSandbox)},
#endif // ENABLE_ISOLATED_XR_SERVICE
#endif // ENABLE_VR
-#if defined(OS_CHROMEOS)
+#if defined(OS_CHROMEOS) || defined(OS_LINUX)
{"disable-accelerated-mjpeg-decode",
flag_descriptions::kAcceleratedMjpegDecodeName,
- flag_descriptions::kAcceleratedMjpegDecodeDescription, kOsCrOS,
+ flag_descriptions::kAcceleratedMjpegDecodeDescription, kOsCrOS | kOsLinux,
SINGLE_DISABLE_VALUE_TYPE(switches::kDisableAcceleratedMjpegDecode)},
-#endif // OS_CHROMEOS
+#endif // OS_CHROMEOS // OS_LINUX
{"v8-cache-options", flag_descriptions::kV8CacheOptionsName,
flag_descriptions::kV8CacheOptionsDescription, kOsAll,
MULTI_VALUE_TYPE(kV8CacheOptionsChoices)},
diff --git a/chrome/browser/flag_descriptions.cc b/chrome/browser/flag_descriptions.cc
index 62637e092..86f89fc6e 100644
--- a/chrome/browser/flag_descriptions.cc
+++ b/chrome/browser/flag_descriptions.cc
@@ -3085,15 +3085,20 @@ const char kTextSuggestionsTouchBarDescription[] =
#endif
-// Chrome OS -------------------------------------------------------------------
+// Chrome OS Linux-------------------------------------------------------------------
-#if defined(OS_CHROMEOS)
+#if defined(OS_CHROMEOS) || (defined(OS_LINUX) && !defined(OS_ANDROID))
const char kAcceleratedMjpegDecodeName[] =
"Hardware-accelerated mjpeg decode for captured frame";
const char kAcceleratedMjpegDecodeDescription[] =
"Enable hardware-accelerated mjpeg decode for captured frame where "
"available.";
+#endif
+
+// Chrome OS --------------------------------------------------
+
+#if defined(OS_CHROMEOS)
const char kAllowTouchpadThreeFingerClickName[] = "Touchpad three-finger-click";
const char kAllowTouchpadThreeFingerClickDescription[] =
diff --git a/chrome/browser/flag_descriptions.h b/chrome/browser/flag_descriptions.h
index 5dac660bb..6cc4115da 100644
--- a/chrome/browser/flag_descriptions.h
+++ b/chrome/browser/flag_descriptions.h
@@ -1846,13 +1846,19 @@ extern const char kPermissionPromptPersistenceToggleDescription[];
#endif // defined(OS_MACOSX)
-// Chrome OS ------------------------------------------------------------------
+// Chrome OS and Linux ------------------------------------------------------------------
-#if defined(OS_CHROMEOS)
+#if defined(OS_CHROMEOS) || (defined(OS_LINUX) && !defined(OS_ANDROID))
extern const char kAcceleratedMjpegDecodeName[];
extern const char kAcceleratedMjpegDecodeDescription[];
+#endif // defined(OS_CHROMEOS) || (defined(OS_LINUX) && !defined(OS_ANDROID))
+
+// Chrome OS ------------------------------------------------------------------------
+
+#if defined(OS_CHROMEOS)
+
extern const char kAllowTouchpadThreeFingerClickName[];
extern const char kAllowTouchpadThreeFingerClickDescription[];
diff --git a/gpu/config/software_rendering_list.json b/gpu/config/software_rendering_list.json
index 65f37b3f1..ae8a1718f 100644
--- a/gpu/config/software_rendering_list.json
+++ b/gpu/config/software_rendering_list.json
@@ -371,11 +371,12 @@
},
{
"id": 48,
- "description": "Accelerated video decode is unavailable on Linux",
+ "description": "Accelerated VA-API video decode is not supported on NVIDIA platforms",
"cr_bugs": [137247],
"os": {
"type": "linux"
},
+ "vendor_id": "0x10de",
"features": [
"accelerated_video_decode"
]
--
2.20.1

View File

@ -7,13 +7,13 @@
stdenv.mkDerivation rec {
name = "falkon-${version}";
version = "3.0.1";
version = "3.1.0";
src = fetchFromGitHub {
owner = "KDE";
repo = "falkon";
rev = "v${version}";
sha256 = "1ay1ljrdjcfqwjv4rhf4psh3dfihnvhpmpqcayd3p9lh57x7fh41";
sha256 = "1w64slh9wpcfi4v7ds9wci1zvwh0dh787ndpi6hd4kmdgnswvsw7";
};
preConfigure = ''

View File

@ -50,6 +50,7 @@
, gnupg
, ffmpeg
, runtimeShell
, systemLocale ? config.i18n.defaultLocale or "en-US"
}:
let
@ -69,8 +70,6 @@ let
sourceMatches = locale: source:
(isPrefixOf source.locale locale) && source.arch == arch;
systemLocale = config.i18n.defaultLocale or "en-US";
policies = {
DisableAppUpdate = true;
};

View File

@ -251,8 +251,10 @@ stdenv.mkDerivation rec {
# and wants these
++ lib.optionals isTorBrowserLike ([
"--with-tor-browser-version=${tbversion}"
"--with-distribution-id=org.torproject"
"--enable-signmar"
"--enable-verify-mar"
"--enable-bundled-fonts"
])
++ flag alsaSupport "alsa"

View File

@ -232,16 +232,16 @@ in rec {
};
tor-browser-8-0 = tbcommon rec {
ffversion = "60.5.1esr";
tbversion = "8.0.6";
ffversion = "60.6.1esr";
tbversion = "8.0.8";
# FIXME: fetchFromGitHub is not ideal, unpacked source is >900Mb
src = fetchFromGitHub {
owner = "SLNOS";
repo = "tor-browser";
# branch "tor-browser-60.5.1esr-8.0-1-slnos"
rev = "89be91fc7cbc420b7c4a3bfc36d2b0d500dd3ccf";
sha256 = "022zjfwsdl0dkg6ck2kha4nf91xm3j9ag5n21zna98szg3x82dj1";
# branch "tor-browser-60.6.1esr-8.0-1-slnos"
rev = "dda14213c550afc522ef0bb0bb1643289c298736";
sha256 = "0lj79nczcix9mx6d0isbizg0f8apf6vgkp7r0q7id92691frj7fz";
};
};

View File

@ -21,12 +21,12 @@ let
in python3Packages.buildPythonApplication rec {
pname = "qutebrowser";
version = "1.6.0";
version = "1.6.1";
# the release tarballs are different from the git checkout!
src = fetchurl {
url = "https://github.com/qutebrowser/qutebrowser/releases/download/v${version}/${pname}-${version}.tar.gz";
sha256 = "1pkbzhd5syn7m8q0i7zlxjdgd693z0gj0h22nkc48zjkn214w236";
sha256 = "1sckfp9l2jgg29p2p4vmd0g7yzbldimqy0a0jvf488yp47qj310p";
};
# Needs tox

View File

@ -89,7 +89,7 @@ let
fteLibPath = makeLibraryPath [ stdenv.cc.cc gmp ];
# Upstream source
version = "8.0.6";
version = "8.0.8";
lang = "en-US";
@ -99,7 +99,7 @@ let
"https://github.com/TheTorProject/gettorbrowser/releases/download/v${version}/tor-browser-linux64-${version}_${lang}.tar.xz"
"https://dist.torproject.org/torbrowser/${version}/tor-browser-linux64-${version}_${lang}.tar.xz"
];
sha256 = "14i32r8pw749ghigqblnbr5622jh5wp1ivnwi71vycbgp9pds4f7";
sha256 = "14ckbhfiyv01cxnd98iihfz7xvrgcd5k4j7pn9ag4a6xb2l80sxi";
};
"i686-linux" = fetchurl {

View File

@ -340,9 +340,7 @@ stdenv.mkDerivation rec {
`tor-browser-bundle` needs for the bundling using a much simpler patch. See the
longDescription and expression of the `firefoxPackages.tor-browser` package for more info.
'';
homepage = https://torproject.org/;
license = licenses.free;
platforms = [ "x86_64-linux" ];
inherit (tor-browser-unwrapped.meta) homepage platforms license;
hydraPlatforms = [ ];
maintainers = with maintainers; [ joachifm ];
};

View File

@ -2,7 +2,7 @@
buildGoPackage rec {
name = "kompose-${version}";
version = "1.9.0";
version = "1.18.0";
goPackagePath = "github.com/kubernetes/kompose";
@ -10,14 +10,14 @@ buildGoPackage rec {
rev = "v${version}";
owner = "kubernetes";
repo = "kompose";
sha256 = "00yvih5gn67sw9v30a0rpaj1zag7k02i4biw1p37agxih0aphc86";
sha256 = "1hb4bs710n9fghphhfakwg42wjscf136dcr05zwwfg7iyqx2cipc";
};
meta = with stdenv.lib; {
description = "A tool to help users who are familiar with docker-compose move to Kubernetes";
homepage = https://github.com/kubernetes/kompose;
license = licenses.asl20;
maintainers = with maintainers; [thpham];
maintainers = with maintainers; [ thpham vdemeester ];
platforms = platforms.unix;
};
}

View File

@ -714,4 +714,11 @@
version = "0.2.0";
sha256 = "0ic5b9djhnb1bs2bz3zdprgy3r55dng09xgc4d9l9fyp85g2amaz";
};
ansible =
{
owner = "nbering";
repo = "terraform-provider-ansible";
version = "0.0.4";
sha256 = "125a8vbpnahaxxrxj3mp0kj6ajssxnfb6l0spgnf118wg3bvlmw5";
};
}

View File

@ -20,3 +20,6 @@ tweag/terraform-provider-secret
# include terraform-provider-segment
ajbosco/terraform-provider-segment
# include terraform-provider-ansible
nbering/terraform-provider-ansible

View File

@ -3,23 +3,15 @@
rustPlatform.buildRustPackage rec {
name = "newsboat-${version}";
version = "2.14.1";
version = "2.15";
src = fetchurl {
url = "https://newsboat.org/releases/${version}/${name}.tar.xz";
sha256 = "0rnz61in715xgma6phvmn5bil618gic01f3kxzhcfgqsj2qx7l2b";
sha256 = "1dqdcp34jmphqf3d8ik0xdhg0s66nd5rky0y8y591nidq29wws6s";
};
cargoSha256 = "05pf020jp20ffmvin6d1g8zbwf1zk03bm1cb99b7iqkk4r54g6dn";
cargoPatches = [
# Bump versions in Cargo.lock
(fetchpatch {
url = https://github.com/newsboat/newsboat/commit/cbad27a19d270f2f0fce9317651e2c9f0aa22865.patch;
sha256 = "05n31b6mycsmzilz7f3inkmav34210c4nlr1fna4zapbhxjdlhqn";
})
];
postPatch = ''
substituteInPlace Makefile --replace "|| true" ""
# Allow other ncurses versions on Darwin

View File

@ -2,7 +2,7 @@
let
stableVersion = "2.1.15";
previewVersion = "2.2.0a2";
previewVersion = "2.2.0a3";
addVersion = args:
let version = if args.stable then stableVersion else previewVersion;
branch = if args.stable then "stable" else "preview";
@ -18,7 +18,7 @@ in {
};
guiPreview = mkGui {
stable = false;
sha256Hash = "1lvdff4yfavfkjmdbhxqfxdd5nq77c2vyy2wnsdliwnmdh3fhm28";
sha256Hash = "110mghkhanz92p8vfzyh4199mnihb24smxsc44a8v534ds6hww74";
};
serverStable = mkServer {
@ -27,6 +27,6 @@ in {
};
serverPreview = mkServer {
stable = false;
sha256Hash = "033bi1bcw5ss6g380qnam1qqyi4bz1cykbb3lparb8hryikicdb9";
sha256Hash = "104pvrba7n9gp7mx31xg520cfahcy0vsmbzx23007c50kp0nxc56";
};
}

View File

@ -5,11 +5,11 @@
with stdenv.lib;
stdenv.mkDerivation rec {
name = "bitlbee-3.5.1";
name = "bitlbee-3.6";
src = fetchurl {
url = "mirror://bitlbee/src/${name}.tar.gz";
sha256 = "0sgsn0fv41rga46mih3fyv65cvfa6rvki8x92dn7bczbi7yxfdln";
sha256 = "0zhhcbcr59sx9h4maf8zamzv2waya7sbsl7w74gbyilvy93dw5cz";
};
nativeBuildInputs = [ pkgconfig ] ++ optional doCheck check;

View File

@ -3,11 +3,11 @@
let configFile = writeText "riot-config.json" conf; in
stdenv.mkDerivation rec {
name= "riot-web-${version}";
version = "1.0.3";
version = "1.0.5";
src = fetchurl {
url = "https://github.com/vector-im/riot-web/releases/download/v${version}/riot-v${version}.tar.gz";
sha256 = "1gwz47wi9g9g9zzf46ry3q9s855rvlcjlg3dsxr1xdvz4arci195";
sha256 = "0m0kdnw0pc84yasnybfh9hmkajji0wjk2snv89crdi79s8k572ki";
};
installPhase = ''

View File

@ -57,11 +57,11 @@ let
in stdenv.mkDerivation rec {
name = "signal-desktop-${version}";
version = "1.23.0";
version = "1.23.1";
src = fetchurl {
url = "https://updates.signal.org/desktop/apt/pool/main/s/signal-desktop/signal-desktop_${version}_amd64.deb";
sha256 = "1bdl2najrbwvfbl5wy1m8vlr4lj6gmngillnyqlxasvjz355rlwr";
sha256 = "1i0s0pd67hcwc8m2xyydxky76yq796lg2h91lw7n9xi7lvpg5m4s";
};
phases = [ "unpackPhase" "installPhase" ];

View File

@ -5,7 +5,7 @@
let
version = "3.3.7";
version = "3.3.8";
rpath = stdenv.lib.makeLibraryPath [
alsaLib
@ -48,7 +48,7 @@ let
if stdenv.hostPlatform.system == "x86_64-linux" then
fetchurl {
url = "https://downloads.slack-edge.com/linux_releases/slack-desktop-${version}-amd64.deb";
sha256 = "1q3866iaby8rqim8h2m398wzi0isnnlsxirlq63fzz7a4g1hnc8p";
sha256 = "02435zvpyr95fljx3xgqz0b0npim1j0611p4rc1azwgdf8hjn11p";
}
else
throw "Slack is not supported on ${stdenv.hostPlatform.system}";

View File

@ -4,8 +4,8 @@ let
mkTelegram = args: qt5.callPackage (import ./generic.nix args) { };
stableVersion = {
stable = true;
version = "1.6.1";
sha256Hash = "1gy5al5m1hks0z98cya9kkfinh6k1i8a1d97cy7x6gj0jgmgs88k";
version = "1.6.3";
sha256Hash = "1bm0m1y3cf0zmaasz1wfkbz5fy9wm7ivyjn9bzs87yrvlj9x7wqz";
# svn log svn://svn.archlinux.org/community/telegram-desktop/trunk
archPatchesRevision = "429149";
archPatchesHash = "1ylpi9kb6hk27x9wmna4ing8vzn9b7247iya91pyxxrpxrcrhpli";

View File

@ -1,4 +1,4 @@
{ stdenv, python36Packages }:
{ stdenv, fetchpatch, python36Packages }:
with stdenv.lib;
@ -19,6 +19,13 @@ buildPythonPackage rec {
checkInputs = [ mock pytest coverage tox ];
propagatedBuildInputs = [ urwid tweepy future ];
patches = [
(fetchpatch {
url = "https://github.com/louipc/turses/commit/be0961b51f502d49fd9e2e5253ac130e543a31c7.patch";
sha256 = "17s1n0275mcj03vkf3n39dmc09niwv4y7ssrfk7k3vqx22kppzg3";
})
];
checkPhase = ''
TMP_TURSES=`echo turses-$RANDOM`
mkdir $TMP_TURSES
@ -26,7 +33,7 @@ buildPythonPackage rec {
rm -rf $TMP_TURSES
'';
patchPhase = ''
postPatch = ''
sed -i -e 's|urwid==1.3.0|urwid==${getVersion urwid}|' setup.py
sed -i -e "s|future==0.14.3|future==${getVersion future}|" setup.py
sed -i -e "s|tweepy==3.3.0|tweepy==${getVersion tweepy}|" setup.py
@ -35,7 +42,7 @@ buildPythonPackage rec {
'';
meta = with stdenv.lib; {
homepage = https://github.com/alejandrogomez/turses;
homepage = https://github.com/louipc/turses;
description = "A Twitter client for the console";
license = licenses.gpl3;
maintainers = with maintainers; [ garbas ];

View File

@ -38,51 +38,30 @@ let
};
});
versionInfo = {
"13.8.0" = {
major = "13";
minor = "8";
patch = "0";
x64hash = "FDF5991CCD52B2B98289D7B2FB46D492D3E4032846D4AFA52CAA0F8AC0578931";
x86hash = "E0CFB43312BF79F753514B11F7B8DE4529823AE4C92D1B01E8A2C34F26AC57E7";
x64suffix = "10299729";
x86suffix = "10299729";
homepage = https://www.citrix.com/downloads/citrix-receiver/legacy-receiver-for-linux/receiver-for-linux-138.html;
versionInfo = let
supportedVersions = {
"13.10.0" = {
major = "13";
minor = "10";
patch = "0";
x64hash = "7025688C7891374CDA11C92FC0BA2FA8151AEB4C4D31589AD18747FAE943F6EA";
x86hash = "2DCA3C8EDED11C5D824D579BC3A6B7D531EAEDDCBFB16E91B5702C72CAE9DEE4";
x64suffix = "20";
x86suffix = "20";
homepage = https://www.citrix.com/downloads/citrix-receiver/linux/receiver-for-linux-latest.html;
};
};
"13.9.0" = {
major = "13";
minor = "9";
patch = "0";
x64hash = "00l18s7i9yky3ddabwljwsf7fx4cjgjn9hfd74j0x1v4gl078nl9";
x86hash = "117fwynpxfnrw98933y8z8v2q4g6ycs1sngvpbki2qj09bjkwmag";
x64suffix = "102";
x86suffix = "102";
homepage = https://www.citrix.com/downloads/citrix-receiver/linux/receiver-for-linux-latest.html; # This version has disappeared from Citrix's website... *sigh*
};
"13.9.1" = {
major = "13";
minor = "9";
patch = "1";
x64hash = "A9A9157CE8C287E8AA11447A0E3C3AB7C227330E9D8882C6F7B938A4DD5925BC";
x86hash = "A93E9770FD10FDD3586A2D47448559EA037265717A7000B9BD2B1DCCE7B0A483";
x64suffix = "6";
x86suffix = "6";
homepage = https://www.citrix.com/downloads/citrix-receiver/legacy-receiver-for-linux/receiver-for-linux-1391.html;
};
"13.10.0" = {
major = "13";
minor = "10";
patch = "0";
x64hash = "7025688C7891374CDA11C92FC0BA2FA8151AEB4C4D31589AD18747FAE943F6EA";
x86hash = "2DCA3C8EDED11C5D824D579BC3A6B7D531EAEDDCBFB16E91B5702C72CAE9DEE4";
x64suffix = "20";
x86suffix = "20";
homepage = https://www.citrix.com/downloads/citrix-receiver/linux/receiver-for-linux-latest.html;
};
};
# break an evaluation for old Citrix versions rather than exiting with
# an "attribute name not found" error to avoid confusion.
deprecatedVersions = let
versions = [ "13.8.0" "13.9.0" "13.9.1" ];
in
lib.listToAttrs
(lib.flip map versions
(v: lib.nameValuePair v (throw "Unsupported citrix_receiver version: ${v}")));
in
deprecatedVersions // supportedVersions;
citrixReceiverForVersion = { major, minor, patch, x86hash, x64hash, x86suffix, x64suffix, homepage }:
stdenv.mkDerivation rec {

View File

@ -1,5 +1,5 @@
{ lib, python3Packages, fetchFromGitHub, wrapGAppsHook, gobject-introspection
, gnome3, libappindicator-gtk3, libnotify }:
, gtksourceview3, libappindicator-gtk3, libnotify }:
python3Packages.buildPythonApplication rec {
name = "autokey-${version}";
@ -22,7 +22,7 @@ python3Packages.buildPythonApplication rec {
# Note: no dependencies included for Qt GUI because Qt ui is poorly
# maintained—see https://github.com/autokey/autokey/issues/51
buildInputs = [ wrapGAppsHook gobject-introspection gnome3.gtksourceview
buildInputs = [ wrapGAppsHook gobject-introspection gtksourceview3
libappindicator-gtk3 libnotify ];
propagatedBuildInputs = with python3Packages; [

View File

@ -3,7 +3,7 @@
rec {
major = "6";
minor = "2";
patch = "1";
patch = "2";
tweak = "2";
subdir = "${major}.${minor}.${patch}";
@ -12,6 +12,6 @@ rec {
src = fetchurl {
url = "https://download.documentfoundation.org/libreoffice/src/${subdir}/libreoffice-${version}.tar.xz";
sha256 = "0p2r48n27v5ifbj3cb9bs38nb6699awmdqx4shy1c6p28b24y78f";
sha256 = "0s8zwc2bp1zs7hvyhjz0hpb8w97jm0cdb179p56z7svvmald6fmq";
};
}

View File

@ -13,7 +13,7 @@
, librevenge, libe-book, libmwaw, glm, glew, gst_all_1
, gdb, commonsLogging, librdf_rasqal, wrapGAppsHook
, gnome3, glib, ncurses, epoxy, gpgme
, langs ? [ "ca" "cs" "de" "en-GB" "en-US" "eo" "es" "fr" "hu" "it" "nl" "pl" "ru" "sl" "zh-CN" ]
, langs ? [ "ca" "cs" "de" "en-GB" "en-US" "eo" "es" "fr" "hu" "it" "ja" "nl" "pl" "ru" "sl" "zh-CN" ]
, withHelp ? true
, kdeIntegration ? false
}:
@ -48,14 +48,14 @@ let
translations = fetchSrc {
name = "translations";
sha256 = "180d4rrzb3lq7g2w7x512fn8chfkjg4ld20ikrj6hkg11kj4hbmy";
sha256 = "0i8pmgdm0i6klb06s3nwad9xz4whbvb5mjjqjqvl6fh0flk6zs1p";
};
# TODO: dictionaries
help = fetchSrc {
name = "help";
sha256 = "06fgd5jkqqbvskyj1ywmsmb4crsj064s8r45nrv0r8j6ydn0hi1l";
sha256 = "14hd6rnq9316p78zharqznps80mxxwz3n80zm15cpj3xg3dr57v1";
};
};

View File

@ -56,11 +56,11 @@
md5name = "00b516f4704d4a7cb50a1d97e6e8e15b-bzip2-1.0.6.tar.gz";
}
{
name = "cairo-1.15.12.tar.xz";
url = "http://dev-www.libreoffice.org/src/cairo-1.15.12.tar.xz";
sha256 = "7623081b94548a47ee6839a7312af34e9322997806948b6eec421a8c6d0594c9";
name = "cairo-1.16.0.tar.xz";
url = "http://dev-www.libreoffice.org/src/cairo-1.16.0.tar.xz";
sha256 = "5e7b29b3f113ef870d1e3ecf8adf21f923396401604bda16d44be45e66052331";
md5 = "";
md5name = "7623081b94548a47ee6839a7312af34e9322997806948b6eec421a8c6d0594c9-cairo-1.15.12.tar.xz";
md5name = "5e7b29b3f113ef870d1e3ecf8adf21f923396401604bda16d44be45e66052331-cairo-1.16.0.tar.xz";
}
{
name = "libcdr-0.1.5.tar.xz";
@ -658,11 +658,11 @@
md5name = "cdd6cffdebcd95161a73305ec13fc7a78e9707b46ca9f84fb897cd5626df3824-openldap-2.4.45.tgz";
}
{
name = "openssl-1.0.2p.tar.gz";
url = "http://dev-www.libreoffice.org/src/openssl-1.0.2p.tar.gz";
sha256 = "50a98e07b1a89eb8f6a99477f262df71c6fa7bef77df4dc83025a2845c827d00";
name = "openssl-1.0.2r.tar.gz";
url = "http://dev-www.libreoffice.org/src/openssl-1.0.2r.tar.gz";
sha256 = "ae51d08bba8a83958e894946f15303ff894d75c2b8bbd44a852b64e3fe11d0d6";
md5 = "";
md5name = "50a98e07b1a89eb8f6a99477f262df71c6fa7bef77df4dc83025a2845c827d00-openssl-1.0.2p.tar.gz";
md5name = "ae51d08bba8a83958e894946f15303ff894d75c2b8bbd44a852b64e3fe11d0d6-openssl-1.0.2r.tar.gz";
}
{
name = "liborcus-0.14.1.tar.gz";

View File

@ -13,7 +13,7 @@
, librevenge, libe-book, libmwaw, glm, glew, gst_all_1
, gdb, commonsLogging, librdf_rasqal, wrapGAppsHook
, gnome3, glib, ncurses, epoxy, gpgme
, langs ? [ "ca" "cs" "de" "en-GB" "en-US" "eo" "es" "fr" "hu" "it" "nl" "pl" "ru" "sl" "zh-CN" ]
, langs ? [ "ca" "cs" "de" "en-GB" "en-US" "eo" "es" "fr" "hu" "it" "ja" "nl" "pl" "ru" "sl" "zh-CN" ]
, withHelp ? true
, kdeIntegration ? false
}:

View File

@ -5,7 +5,13 @@ export JAVA_HOME="${JAVA_HOME:-@jdk@}"
if uname | grep Linux > /dev/null &&
! ( test -n "$DBUS_SESSION_BUS_ADDRESS" ); then
dbus_tmp_dir="/run/user/$(id -u)/libreoffice-dbus"
mkdir "$dbus_tmp_dir"
if ! test -d "$dbus_tmp_dir" && test -d "/run"; then
mkdir -p "$dbus_tmp_dir"
fi
if ! test -d "$dbus_tmp_dir"; then
dbus_tmp_dir="/tmp/libreoffice-$(id -u)/libreoffice-dbus"
mkdir -p "$dbus_tmp_dir"
fi
dbus_socket_dir="$(mktemp -d -p "$dbus_tmp_dir")"
"@dbus@"/bin/dbus-daemon --nopidfile --nofork --config-file "@dbus@"/share/dbus-1/session.conf --address "unix:path=$dbus_socket_dir/session" &> /dev/null &
export DBUS_SESSION_BUS_ADDRESS="unix:path=$dbus_socket_dir/session"

View File

@ -0,0 +1,34 @@
{ stdenv, fetchurl, hmmer, perl }:
stdenv.mkDerivation rec {
version = "1.1.1";
name = "itsx-${version}";
src = fetchurl {
url = "http://microbiology.se/sw/ITSx_${version}.tar.gz";
sha256 = "0lrmy2n3ax7f208k0k8l3yz0j5cpz05hv4hx1nnxzn0c51z1pc31";
};
buildInputs = [ hmmer perl ];
buildPhase = ''
sed -e "s,profileDB = .*,profileDB = \"$out/share/ITSx_db/HMMs\";," -i ITSx
sed "3 a \$ENV{\'PATH\'}='${hmmer}/bin:'.\"\$ENV{\'PATH\'}\";" -i ITSx
mkdir bin
mv ITSx bin
'';
installPhase = ''
mkdir -p $out/share/doc && cp -a bin $out/
cp *pdf $out/share/doc
cp -r ITSx_db $out/share
'';
meta = with stdenv.lib; {
description = "Improved software detection and extraction of ITS1 and ITS2 from ribosomal ITS sequences of fungi and other eukaryotes for use in environmental sequencing";
homepage = http://microbiology.se/software/itsx/;
license = licenses.gpl3;
maintainers = [ maintainers.bzizou ];
platforms = [ "x86_64-linux" "i686-linux" ];
};
}

View File

@ -0,0 +1,27 @@
{ stdenv, fetchurl, cmake, gcc, gcc-unwrapped }:
stdenv.mkDerivation rec {
version = "3.2.1";
name = "messer-slim-${version}";
src = fetchurl {
url = "https://github.com/MesserLab/SLiM/archive/v${version}.tar.gz";
sha256 = "1j3ssjvxpsc21mmzj59kwimglz8pdazi5w6wplmx11x744k77wa1";
};
enableParallelBuilding = true;
nativeBuildInputs = [ cmake gcc gcc-unwrapped ];
cmakeFlags = [ "-DCMAKE_AR=${gcc-unwrapped}/bin/gcc-ar"
"-DCMAKE_RANLIB=${gcc-unwrapped}/bin/gcc-ranlib" ];
meta = {
description = "An evolutionary simulation framework";
homepage = https://messerlab.org/slim/;
license = with stdenv.lib.licenses; [ gpl3 ];
maintainers = with stdenv.lib.maintainers; [ bzizou ];
platforms = stdenv.lib.platforms.all;
};
}

Some files were not shown because too many files have changed in this diff.