Merge master into haskell-updates

This commit is contained in:
github-actions[bot] 2024-01-26 00:12:48 +00:00 committed by GitHub
commit acd0181532
121 changed files with 2020 additions and 1395 deletions

View File

@ -71,6 +71,11 @@ If you **omit a link text** for a link pointing to a section, the text will be s
This syntax is taken from [MyST](https://myst-parser.readthedocs.io/en/latest/using/syntax.html#targets-and-cross-referencing).
#### HTML
Inlining HTML is not allowed. Parts of the documentation get rendered to various non-HTML formats, such as man pages in the case of the NixOS manual.
#### Roles
If you want to link to a man page, you can use `` {manpage}`nix.conf(5)` ``. The references will turn into links when a mapping exists in [`doc/manpage-urls.json`](./manpage-urls.json).
@ -157,6 +162,9 @@ watermelon
In an effort to keep the Nixpkgs manual in a consistent style, please follow the conventions below, unless they prevent you from properly documenting something.
In that case, please open an issue about the particular documentation convention and tag it with a "needs: documentation" label.
When needed, each convention explains why it exists, so you can decide whether to follow it in your particular case.
Note that these conventions are about the **structure** of the manual (and its source files), not about the content that goes in it.
You, as the writer of documentation, are still in charge of its content.
- Put each sentence on its own line.
This makes reviews and suggestions much easier, since GitHub's review system is based on lines.
@ -188,26 +196,153 @@ In that case, please open an issue about the particular documentation convention
}
```
- Use [definition lists](#definition-lists) to document function arguments, and the attributes of such arguments. For example:
- When showing inputs/outputs of any [REPL](https://en.wikipedia.org/wiki/Read%E2%80%93eval%E2%80%93print_loop), such as a shell or the Nix REPL, use a format as you'd see in the REPL, while trying to visually separate inputs from outputs.
This means that for a shell, you should use a format like the following:
```shell
$ nix-build -A hello '<nixpkgs>' \
--option require-sigs false \
--option trusted-substituters file:///tmp/hello-cache \
--option substituters file:///tmp/hello-cache
/nix/store/zhl06z4lrfrkw5rp0hnjjfrgsclzvxpm-hello-2.12.1
```
Note how the input is preceded by `$` on the first line and indented on subsequent lines, and how the output is provided as you'd see on the shell.
For the Nix REPL, you should use a format like the following:
```shell
nix-repl> builtins.attrNames { a = 1; b = 2; }
[ "a" "b" ]
```
Note how the input is preceded by `nix-repl>` and the output is provided as you'd see on the Nix REPL.
- When documenting functions or anything that has inputs/outputs and example usage, use nested headings to clearly separate inputs, outputs, and examples.
Keep examples as the last nested heading, and link to the examples wherever applicable in the documentation.
The purpose of this convention is to provide a familiar structure for navigating the manual, so any reader can expect to find content related to inputs in an "inputs" heading, examples in an "examples" heading, and so on.
An example:
```
## buildImage
Some explanation about the function here.
Describe a particular scenario, and point to [](#ex-dockerTools-buildImage), which is an example demonstrating it.
### Inputs
Documentation for the inputs of `buildImage`.
Perhaps even point to [](#ex-dockerTools-buildImage) again when talking about something specifically linked to it.
### Passthru outputs
Documentation for any passthru outputs of `buildImage`.
### Examples
Note that this is the last nested heading in the `buildImage` section.
:::{.example #ex-dockerTools-buildImage}
# Using `buildImage`
Example of how to use `buildImage` goes here.
:::
```
- Use [definition lists](#definition-lists) to document function arguments, and the attributes of such arguments as well as their [types](https://nixos.org/manual/nix/stable/language/values).
For example:
```markdown
# pkgs.coolFunction
Description of what `coolFunction` does.
## Inputs
`coolFunction` expects a single argument which should be an attribute set, with the following possible attributes:
`name`
`name` (String)
: The name of the resulting image.
`tag` _optional_
`tag` (String; _optional_)
: Tag of the generated image.
_Default value:_ the output path's hash.
_Default:_ the output path's hash.
```
#### Examples
To define a referenceable example, use the following fencing:
```markdown
:::{.example #an-attribute-set-example}
# An attribute set example
You can add text before
```nix
{ a = 1; b = 2;}
```
and after code fencing
:::
```
Defining examples through the `example` fencing class adds them to a "List of Examples" section after the Table of Contents, though this list is not shown in the rendered documentation on nixos.org.
#### Figures
To define a referenceable figure, use the following fencing:
```markdown
::: {.figure #nixos-logo}
# NixOS Logo
![NixOS logo](./nixos_logo.png)
:::
```
Defining figures through the `figure` fencing class adds them to a "List of Figures" section after the Table of Contents, though this list is not shown in the rendered documentation on nixos.org.
#### Footnotes
To add a footnote explanation, use the following syntax:
```markdown
Sometimes it's better to add context [^context] in a footnote.
[^context]: This explanation will be rendered at the end of the chapter.
```
#### Inline comments
Inline comments are supported with the following syntax:
```markdown
<!-- This is an inline comment -->
```
Comments will not appear in the rendered HTML.
#### Link reference definitions
Links can reference a label, for example, to make the link target reusable:
```markdown
::: {.note}
Reference links can also be used to [shorten URLs][url-id] and keep the markdown readable.
:::
[url-id]: https://github.com/NixOS/nixpkgs/blob/19d4f7dc485f74109bd66ef74231285ff797a823/doc/README.md
```
This syntax is taken from [CommonMark](https://spec.commonmark.org/0.30/#link-reference-definitions).
#### Typographic replacements
Typographic replacements are enabled. Check the [list of possible replacement patterns](https://github.com/executablebooks/markdown-it-py/blob/3613e8016ecafe21709471ee0032a90a4157c2d1/markdown_it/rules_core/replacements.py#L1-L15).
## Getting help
If you need documentation-specific help or reviews, ping [@NixOS/documentation-reviewers](https://github.com/orgs/nixos/teams/documentation-reviewers) on your pull request.

View File

@ -676,6 +676,7 @@ If our package sets `includeStorePaths` to `false`, we'll end up with only the f
dockerTools.streamLayeredImage {
name = "hello";
contents = [ hello ];
includeStorePaths = false;
}
```
@ -714,56 +715,168 @@ dockerTools.streamLayeredImage {
```
:::
## pullImage {#ssec-pkgs-dockerTools-fetchFromRegistry}
[]{#ssec-pkgs-dockerTools-fetchFromRegistry}
## pullImage {#ssec-pkgs-dockerTools-pullImage}
This function is analogous to the `docker pull` command, in that it can be used to pull a Docker image from a Docker registry. By default [Docker Hub](https://hub.docker.com/) is used to pull images.
This function is similar to the `docker pull` command, which means it can be used to pull a Docker image from a registry that implements the [Docker Registry HTTP API V2](https://distribution.github.io/distribution/spec/api/).
By default, the `docker.io` registry is used.
Its parameters are described in the example below:
The image will be downloaded as an uncompressed Docker-compatible repository tarball, which is suitable for use with other `dockerTools` functions such as [`buildImage`](#ssec-pkgs-dockerTools-buildImage), [`buildLayeredImage`](#ssec-pkgs-dockerTools-buildLayeredImage), and [`streamLayeredImage`](#ssec-pkgs-dockerTools-streamLayeredImage).
This function requires two different types of hashes/digests to be specified:
- One of them is used to identify a unique image within the registry (see the documentation for the `imageDigest` attribute).
- The other is used by Nix to ensure the contents of the output haven't changed (see the documentation for the `sha256` attribute).
Both hashes are required because they must uniquely identify some content in two completely different systems (the Docker registry and the Nix store), but their values will not be the same.
See [](#ex-dockerTools-pullImage-nixprefetchdocker) for a tool that can help gather these values.
### Inputs {#ssec-pkgs-dockerTools-pullImage-inputs}
`pullImage` expects a single argument with the following attributes:
`imageName` (String)
: Specifies the name of the image to be downloaded, as well as the registry endpoint.
By default, the `docker.io` registry is used.
To specify a different registry, prepend the endpoint to `imageName`, separated by a slash (`/`).
See [](#ex-dockerTools-pullImage-differentregistry) for how to do that.
`imageDigest` (String)
: Specifies the digest of the image to be downloaded.
:::{.tip}
**Why can't I specify a tag to pull from, and have to use a digest instead?**
Tags are often updated to point to different image contents.
The most common example is the `latest` tag, which is usually updated whenever a newer image version is available.
An image tag isn't enough to guarantee the contents of an image won't change, but a digest guarantees this.
Providing a digest helps ensure that you will still be able to build the same Nix code and get the same output even if newer versions of an image are released.
:::
`sha256` (String)
: The hash of the image after it is downloaded.
Internally, this is passed to the [`outputHash`](https://nixos.org/manual/nix/stable/language/advanced-attributes#adv-attr-outputHash) attribute of the resulting derivation.
This is needed to provide a guarantee to Nix that the contents of the image haven't changed, because Nix cannot use the value in `imageDigest` for that check.
`finalImageName` (String; _optional_)
: Specifies the name that will be used for the image after it has been downloaded.
This only applies after the image is downloaded, and is not used to identify the image to be downloaded in the registry.
Use `imageName` for that instead.
_Default value:_ the same value specified in `imageName`.
`finalImageTag` (String; _optional_)
: Specifies the tag that will be used for the image after it has been downloaded.
This only applies after the image is downloaded, and is not used to identify the image to be downloaded in the registry.
_Default value:_ `"latest"`.
`os` (String; _optional_)
: Specifies the operating system of the image to pull.
If specified, its value should follow the [OCI Image Configuration Specification](https://github.com/opencontainers/image-spec/blob/main/config.md#properties), which should still be compatible with Docker.
According to the linked specification, all possible values for `$GOOS` in [the Go docs](https://go.dev/doc/install/source#environment) should be valid, but will commonly be one of `darwin` or `linux`.
_Default value:_ `"linux"`.
`arch` (String; _optional_)
: Specifies the architecture of the image to pull.
If specified, its value should follow the [OCI Image Configuration Specification](https://github.com/opencontainers/image-spec/blob/main/config.md#properties), which should still be compatible with Docker.
According to the linked specification, all possible values for `$GOARCH` in [the Go docs](https://go.dev/doc/install/source#environment) should be valid, but will commonly be one of `386`, `amd64`, `arm`, or `arm64`.
_Default value:_ the same value from `pkgs.go.GOARCH`.
`tlsVerify` (Boolean; _optional_)
: Used to enable or disable HTTPS and TLS certificate verification when communicating with the chosen Docker registry.
Setting this to `false` will make `pullImage` connect to the registry through HTTP.
_Default value:_ `true`.
`name` (String; _optional_)
: The name used for the output in the Nix store path.
_Default value:_ a value derived from `finalImageName` and `finalImageTag`, with some symbols replaced.
It is recommended to treat the default as an opaque value.
### Examples {#ssec-pkgs-dockerTools-pullImage-examples}
::: {.example #ex-dockerTools-pullImage-niximage}
# Pulling the nixos/nix Docker image from the default registry
This example pulls the [`nixos/nix` image](https://hub.docker.com/r/nixos/nix) and saves it in the Nix store.
```nix
pullImage {
{ dockerTools }:
dockerTools.pullImage {
imageName = "nixos/nix";
imageDigest =
"sha256:473a2b527958665554806aea24d0131bacec46d23af09fef4598eeab331850fa";
imageDigest = "sha256:b8ea88f763f33dfda2317b55eeda3b1a4006692ee29e60ee54ccf6d07348c598";
finalImageName = "nix";
finalImageTag = "2.11.1";
sha256 = "sha256-qvhj+Hlmviz+KEBVmsyPIzTB3QlVAFzwAY1zDPIBGxc=";
os = "linux";
arch = "x86_64";
finalImageTag = "2.19.3";
sha256 = "zRwlQs1FiKrvHPaf8vWOR/Tlp1C5eLn1d9pE4BZg3oA=";
}
```
:::
::: {.example #ex-dockerTools-pullImage-differentregistry}
# Pulling the nixos/nix Docker image from a specific registry
This example pulls the [`coreos/etcd` image](https://quay.io/repository/coreos/etcd) from the `quay.io` registry.
```nix
{ dockerTools }:
dockerTools.pullImage {
imageName = "quay.io/coreos/etcd";
imageDigest = "sha256:24a23053f29266fb2731ebea27f915bb0fb2ae1ea87d42d890fe4e44f2e27c5d";
finalImageName = "etcd";
finalImageTag = "v3.5.11";
sha256 = "Myw+85f2/EVRyMB3axECdmQ5eh9p1q77FWYKy8YpRWU=";
}
```
:::
::: {.example #ex-dockerTools-pullImage-nixprefetchdocker}
# Finding the digest and hash values to use for `dockerTools.pullImage`
Since [`dockerTools.pullImage`](#ssec-pkgs-dockerTools-pullImage) requires two different hashes, one can run the `nix-prefetch-docker` tool to find out the values for the hashes.
The tool outputs some text for an attribute set which you can pass directly to `pullImage`.
```shell
$ nix run nixpkgs#nix-prefetch-docker -- --image-name nixos/nix --image-tag 2.19.3 --arch amd64 --os linux
(some output removed for clarity)
Writing manifest to image destination
-> ImageName: nixos/nix
-> ImageDigest: sha256:498fa2d7f2b5cb3891a4edf20f3a8f8496e70865099ba72540494cd3e2942634
-> FinalImageName: nixos/nix
-> FinalImageTag: latest
-> ImagePath: /nix/store/4mxy9mn6978zkvlc670g5703nijsqc95-docker-image-nixos-nix-latest.tar
-> ImageHash: 1q6cf2pdrasa34zz0jw7pbs6lvv52rq2aibgxccbwcagwkg2qj1q
{
imageName = "nixos/nix";
imageDigest = "sha256:498fa2d7f2b5cb3891a4edf20f3a8f8496e70865099ba72540494cd3e2942634";
sha256 = "1q6cf2pdrasa34zz0jw7pbs6lvv52rq2aibgxccbwcagwkg2qj1q";
finalImageName = "nixos/nix";
finalImageTag = "latest";
}
```
- `imageName` specifies the name of the image to be downloaded, which can also include the registry namespace (e.g. `nixos`). This argument is required.
It is important to supply the `--arch` and `--os` arguments to `nix-prefetch-docker` to filter to a single image, in case there are multiple architectures and/or operating systems supported by the image name and tags specified.
By default, `nix-prefetch-docker` will set `os` to `linux` and `arch` to `amd64`.
- `imageDigest` specifies the digest of the image to be downloaded. This argument is required.
- `finalImageName`, if specified, this is the name of the image to be created. Note it is never used to fetch the image since we prefer to rely on the immutable digest ID. By default it's equal to `imageName`.
- `finalImageTag`, if specified, this is the tag of the image to be created. Note it is never used to fetch the image since we prefer to rely on the immutable digest ID. By default it's `latest`.
- `sha256` is the checksum of the whole fetched image. This argument is required.
- `os`, if specified, is the operating system of the fetched image. By default it's `linux`.
- `arch`, if specified, is the cpu architecture of the fetched image. By default it's `x86_64`.
`nix-prefetch-docker` command can be used to get required image parameters:
```ShellSession
$ nix run nixpkgs#nix-prefetch-docker -- --image-name mysql --image-tag 5
```
Since a given `imageName` may transparently refer to a manifest list of images which support multiple architectures and/or operating systems, you can supply the `--os` and `--arch` arguments to specify exactly which image you want. By default it will match the OS and architecture of the host the command is run on.
```ShellSession
$ nix-prefetch-docker --image-name mysql --image-tag 5 --arch x86_64 --os linux
```
Desired image name and tag can be set using `--final-image-name` and `--final-image-tag` arguments:
```ShellSession
$ nix-prefetch-docker --image-name mysql --image-tag 5 --final-image-name eu.gcr.io/my-project/mysql --final-image-tag prod
Run `nix-prefetch-docker --help` for a list of all supported arguments:
```shell
$ nix run nixpkgs#nix-prefetch-docker -- --help
(output removed for clarity)
```
:::
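As a sketch of combining `pullImage` with other `dockerTools` functions, its output can be passed as the `fromImage` argument of [`buildImage`](#ssec-pkgs-dockerTools-buildImage); the image name `hello-on-nix` and the `hello` contents below are illustrative, and the digest and hash values are reused from the example above:
```nix
{ dockerTools, hello }:
dockerTools.buildImage {
  name = "hello-on-nix";
  # Use the repository tarball produced by pullImage as the base image.
  fromImage = dockerTools.pullImage {
    imageName = "nixos/nix";
    imageDigest = "sha256:b8ea88f763f33dfda2317b55eeda3b1a4006692ee29e60ee54ccf6d07348c598";
    finalImageName = "nix";
    finalImageTag = "2.19.3";
    sha256 = "zRwlQs1FiKrvHPaf8vWOR/Tlp1C5eLn1d9pE4BZg3oA=";
  };
  copyToRoot = [ hello ];
}
```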
## exportImage {#ssec-pkgs-dockerTools-exportImage}
@ -845,6 +958,18 @@ buildImage {
Creating base files like `/etc/passwd` or `/etc/login.defs` is necessary for shadow-utils to manipulate users and groups.
When using `buildLayeredImage`, you can put this in `fakeRootCommands` if you `enableFakechroot`:
```nix
buildLayeredImage {
name = "shadow-layered";
fakeRootCommands = ''
${pkgs.dockerTools.shadowSetup}
'';
enableFakechroot = true;
}
```
## fakeNss {#ssec-pkgs-dockerTools-fakeNss}
If your primary goal is providing a basic skeleton for user lookups to work,

View File

@ -144,7 +144,7 @@ in buildDotnetModule rec {
projectReferences = [ referencedProject ]; # `referencedProject` must contain `nupkg` in the folder structure.
dotnet-sdk = dotnetCorePackages.sdk_6.0;
dotnet-sdk = dotnetCorePackages.sdk_6_0;
dotnet-runtime = dotnetCorePackages.runtime_6_0;
executables = [ "foo" ]; # This wraps "$out/lib/$pname/foo" to `$out/bin/foo`.

View File

@ -70,39 +70,42 @@ compilers like this:
```console
$ nix-env -f '<nixpkgs>' -qaP -A haskell.compiler
haskell.compiler.ghc810 ghc-8.10.7
haskell.compiler.ghc88 ghc-8.8.4
haskell.compiler.ghc90 ghc-9.0.2
haskell.compiler.ghc924 ghc-9.2.4
haskell.compiler.ghc925 ghc-9.2.5
haskell.compiler.ghc926 ghc-9.2.6
haskell.compiler.ghc92 ghc-9.2.7
haskell.compiler.ghc942 ghc-9.4.2
haskell.compiler.ghc943 ghc-9.4.3
haskell.compiler.ghc94 ghc-9.4.4
haskell.compiler.ghcHEAD ghc-9.7.20221224
haskell.compiler.ghc8102Binary ghc-binary-8.10.2
haskell.compiler.ghc8102BinaryMinimal ghc-binary-8.10.2
haskell.compiler.ghc8107BinaryMinimal ghc-binary-8.10.7
haskell.compiler.ghc927 ghc-9.2.7
haskell.compiler.ghc92 ghc-9.2.8
haskell.compiler.ghc945 ghc-9.4.5
haskell.compiler.ghc946 ghc-9.4.6
haskell.compiler.ghc947 ghc-9.4.7
haskell.compiler.ghc94 ghc-9.4.8
haskell.compiler.ghc963 ghc-9.6.3
haskell.compiler.ghc96 ghc-9.6.4
haskell.compiler.ghc98 ghc-9.8.1
haskell.compiler.ghcHEAD ghc-9.9.20231121
haskell.compiler.ghc8107Binary ghc-binary-8.10.7
haskell.compiler.ghc865Binary ghc-binary-8.6.5
haskell.compiler.ghc924Binary ghc-binary-9.2.4
haskell.compiler.ghc924BinaryMinimal ghc-binary-9.2.4
haskell.compiler.integer-simple.ghc810 ghc-integer-simple-8.10.7
haskell.compiler.integer-simple.ghc8107 ghc-integer-simple-8.10.7
haskell.compiler.integer-simple.ghc88 ghc-integer-simple-8.8.4
haskell.compiler.integer-simple.ghc884 ghc-integer-simple-8.8.4
haskell.compiler.integer-simple.ghc810 ghc-integer-simple-8.10.7
haskell.compiler.native-bignum.ghc90 ghc-native-bignum-9.0.2
haskell.compiler.native-bignum.ghc902 ghc-native-bignum-9.0.2
haskell.compiler.native-bignum.ghc924 ghc-native-bignum-9.2.4
haskell.compiler.native-bignum.ghc925 ghc-native-bignum-9.2.5
haskell.compiler.native-bignum.ghc926 ghc-native-bignum-9.2.6
haskell.compiler.native-bignum.ghc92 ghc-native-bignum-9.2.7
haskell.compiler.native-bignum.ghc927 ghc-native-bignum-9.2.7
haskell.compiler.native-bignum.ghc942 ghc-native-bignum-9.4.2
haskell.compiler.native-bignum.ghc943 ghc-native-bignum-9.4.3
haskell.compiler.native-bignum.ghc94 ghc-native-bignum-9.4.4
haskell.compiler.native-bignum.ghc944 ghc-native-bignum-9.4.4
haskell.compiler.native-bignum.ghcHEAD ghc-native-bignum-9.7.20221224
haskell.compiler.native-bignum.ghc92 ghc-native-bignum-9.2.8
haskell.compiler.native-bignum.ghc928 ghc-native-bignum-9.2.8
haskell.compiler.native-bignum.ghc945 ghc-native-bignum-9.4.5
haskell.compiler.native-bignum.ghc946 ghc-native-bignum-9.4.6
haskell.compiler.native-bignum.ghc947 ghc-native-bignum-9.4.7
haskell.compiler.native-bignum.ghc94 ghc-native-bignum-9.4.8
haskell.compiler.native-bignum.ghc948 ghc-native-bignum-9.4.8
haskell.compiler.native-bignum.ghc963 ghc-native-bignum-9.6.3
haskell.compiler.native-bignum.ghc96 ghc-native-bignum-9.6.4
haskell.compiler.native-bignum.ghc964 ghc-native-bignum-9.6.4
haskell.compiler.native-bignum.ghc98 ghc-native-bignum-9.8.1
haskell.compiler.native-bignum.ghc981 ghc-native-bignum-9.8.1
haskell.compiler.native-bignum.ghcHEAD ghc-native-bignum-9.9.20231121
haskell.compiler.ghcjs ghcjs-8.10.7
```
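Each compiler attribute has a matching package set under `haskell.packages.*`. As a minimal sketch (the `lens` dependency is only an illustrative example), an environment using one of the listed compilers can be built like this:
```nix
with import <nixpkgs> { };

# GHC 9.6 with the example package `lens` registered in its package database.
haskell.packages.ghc96.ghcWithPackages (hpkgs: [ hpkgs.lens ])
```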

View File

@ -1408,6 +1408,20 @@
fingerprint = "7083 E268 4BFD 845F 2B84 9E74 B695 8918 ED23 32CE";
}];
};
applejag = {
email = "applejag.luminance905@passmail.com";
github = "applejag";
githubId = 2477952;
name = "Kalle Fagerberg";
keys = [
{
fingerprint = "F68E 6DB3 79FB 1FF0 7C72 6479 9874 DEDD 3592 5ED0";
}
{
fingerprint = "8DDB 3994 0A34 4FE5 4F3B 3E77 F161 001D EE78 1051";
}
];
};
applePrincess = {
email = "appleprincess@appleprincess.io";
github = "applePrincess";
@ -10478,6 +10492,14 @@
githubId = 31388299;
name = "Leonardo Eugênio";
};
leo248 = {
github = "leo248";
githubId = 95365184;
keys = [{
fingerprint = "81E3 418D C1A2 9687 2C4D 96DC BB1A 818F F295 26D2";
}];
name = "leo248";
};
leo60228 = {
email = "leo@60228.dev";
matrix = "@leo60228:matrix.org";

View File

@ -1,6 +1,6 @@
# Contributing to this manual {#chap-contributing}
The [DocBook] and CommonMark sources of the NixOS manual are in the [nixos/doc/manual](https://github.com/NixOS/nixpkgs/tree/master/nixos/doc/manual) subdirectory of the [Nixpkgs](https://github.com/NixOS/nixpkgs) repository.
The sources of the NixOS manual are in the [nixos/doc/manual](https://github.com/NixOS/nixpkgs/tree/master/nixos/doc/manual) subdirectory of the [Nixpkgs](https://github.com/NixOS/nixpkgs) repository.
This manual uses the [Nixpkgs manual syntax](https://nixos.org/manual/nixpkgs/unstable/#sec-contributing-markup).
You can quickly check your edits with the following:

View File

@ -7,7 +7,7 @@ worthy contribution to the project.
## Building the Manual {#sec-writing-docs-building-the-manual}
The DocBook sources of the [](#book-nixos-manual) are in the
The sources of the [](#book-nixos-manual) are in the
[`nixos/doc/manual`](https://github.com/NixOS/nixpkgs/tree/master/nixos/doc/manual)
subdirectory of the Nixpkgs repository.
@ -29,65 +29,3 @@ nix-build nixos/release.nix -A manual.x86_64-linux
When this command successfully finishes, it will tell you where the
manual got generated. The HTML will be accessible through the `result`
symlink at `./result/share/doc/nixos/index.html`.
## Editing DocBook XML {#sec-writing-docs-editing-docbook-xml}
For general information on how to write in DocBook, see [DocBook 5: The
Definitive Guide](https://tdg.docbook.org/tdg/5.1/).
Emacs nXML Mode is very helpful for editing DocBook XML because it
validates the document as you write, and precisely locates errors. To
use it, see [](#sec-emacs-docbook-xml).
[Pandoc](https://pandoc.org/) can generate DocBook XML from a multitude of
formats, which makes a good starting point. Here is an example of Pandoc
invocation to convert GitHub-Flavoured MarkDown to DocBook 5 XML:
```ShellSession
pandoc -f markdown_github -t docbook5 docs.md -o my-section.md
```
Pandoc can also quickly convert a single `section.xml` to HTML, which is
helpful when drafting.
Sometimes writing valid DocBook is too difficult. In this case,
submit your documentation updates in a [GitHub
Issue](https://github.com/NixOS/nixpkgs/issues/new) and someone will
handle the conversion to XML for you.
## Creating a Topic {#sec-writing-docs-creating-a-topic}
You can use an existing topic as a basis for the new topic or create a
topic from scratch.
Keep the following guidelines in mind when you create and add a topic:
- The NixOS [`book`](https://tdg.docbook.org/tdg/5.0/book.html)
element is in `nixos/doc/manual/manual.xml`. It includes several
[`parts`](https://tdg.docbook.org/tdg/5.0/book.html) which are in
subdirectories.
- Store the topic file in the same directory as the `part` to which it
belongs. If your topic is about configuring a NixOS module, then the
XML file can be stored alongside the module definition `nix` file.
- If you include multiple words in the file name, separate the words
with a dash. For example: `ipv6-config.xml`.
- Make sure that the `xml:id` value is unique. You can use abbreviations
if the ID is too long. For example: `nixos-config`.
- Determine whether your topic is a chapter or a section. If you are
unsure, open an existing topic file and check whether the main
element is chapter or section.
## Adding a Topic to the Book {#sec-writing-docs-adding-a-topic}
Open the parent CommonMark file and add a line to the list of
chapters with the file name of the topic that you created. If you
created a `section`, you add the file to the `chapter` file. If you created
a `chapter`, you add the file to the `part` file.
If the topic is about configuring a NixOS module, it can be
automatically included in the manual by using the `meta.doc` attribute.
See [](#sec-meta-attributes) for an explanation.

View File

@ -50,6 +50,8 @@ In addition to numerous new and upgraded packages, this release has the followin
- [ollama](https://ollama.ai), server for running large language models locally.
- [hebbot](https://github.com/haecker-felix/hebbot), a Matrix bot to generate "This Week in X"-like blog posts. Available as [services.hebbot](#opt-services.hebbot.enable).
- [Anki Sync Server](https://docs.ankiweb.net/sync-server.html), the official sync server built into recent versions of Anki. Available as [services.anki-sync-server](#opt-services.anki-sync-server.enable).
The pre-existing [services.ankisyncd](#opt-services.ankisyncd.enable) has been marked deprecated and will be dropped after 24.05 due to lack of maintenance of the anki-sync-server software.
@ -138,12 +140,10 @@ The pre-existing [services.ankisyncd](#opt-services.ankisyncd.enable) has been m
- `services.avahi.nssmdns` got split into `services.avahi.nssmdns4` and `services.avahi.nssmdns6` which enable the mDNS NSS switch for IPv4 and IPv6 respectively.
Since most mDNS responders only register IPv4 addresses, most users want to keep the IPv6 support disabled to avoid long timeouts.
- `multi-user.target` no longer depends on `network-online.target`.
This will potentially break services that assumed this was the case in the past.
This was changed for consistency with other distributions as well as improved boot times.
We have added a warning for services that are
`after = [ "network-online.target" ]` but do not depend on it (e.g. using `wants`).
- A warning has been added for services that are
`after = [ "network-online.target" ]` but do not depend on it (e.g. using
`wants`), because the dependency that `multi-user.target` has on
`network-online.target` is planned for removal; see the sketch below.
- `services.archisteamfarm` no longer uses the abbreviation `asf` for its state directory (`/var/lib/asf`), user and group (both `asf`). Instead the long name `archisteamfarm` is used.
Configurations with `system.stateVersion` 23.11 or earlier default to the old stateDirectory until the 24.11 release and must either set the option explicitly or move the data to the new directory.
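As a minimal sketch of the dependency pattern that the `network-online.target` note above refers to (`my-service` is a hypothetical unit), ordering must be paired with an actual dependency such as `wants`:
```nix
{
  systemd.services.my-service = {
    # Ordering alone would trigger the new warning;
    # adding `wants` makes the unit actually depend on the target.
    after = [ "network-online.target" ];
    wants = [ "network-online.target" ];
  };
}
```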
@ -235,11 +235,16 @@ The pre-existing [services.ankisyncd](#opt-services.ankisyncd.enable) has been m
- `stdenv`: The `--replace` flag in `substitute`, `substituteInPlace`, `substituteAll`, `substituteAllStream`, and `substituteStream` is now deprecated in favor of the new `--replace-fail`, `--replace-warn` and `--replace-quiet`. The deprecated `--replace` equates to `--replace-warn`.
- New options were added to the dnsdist module to enable and configure a DNSCrypt endpoint (see `services.dnsdist.dnscrypt.enable`, etc.).
The module can generate the DNSCrypt provider key pair and certificates, and rotates them automatically with no downtime.
- The Yama LSM is now enabled by default in the kernel, which prevents ptracing
non-child processes. This means you will not be able to attach gdb to an
existing process, but will need to start that process from gdb (so it is a
child). Or you can set `boot.kernel.sysctl."kernel.yama.ptrace_scope"` to 0.
- The netbird module now allows running multiple tunnels in parallel through [`services.netbird.tunnels`](#opt-services.netbird.tunnels).
- [Nginx virtual hosts](#opt-services.nginx.virtualHosts) using `forceSSL` or
`globalRedirect` can now have redirect codes other than 301 through
`redirectCode`.
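A minimal sketch of the new option on a `forceSSL` virtual host (the host name and ACME setup are illustrative):
```nix
{
  services.nginx.virtualHosts."example.org" = {
    forceSSL = true;
    enableACME = true;
    # Redirect with 308 (permanent, method-preserving) instead of the default 301.
    redirectCode = 308;
  };
}
```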

View File

@ -634,6 +634,7 @@
./services/matrix/appservice-irc.nix
./services/matrix/conduit.nix
./services/matrix/dendrite.nix
./services/matrix/hebbot.nix
./services/matrix/maubot.nix
./services/matrix/mautrix-facebook.nix
./services/matrix/mautrix-telegram.nix

View File

@ -4,7 +4,9 @@ let
inherit (lib) any attrValues concatMapStringsSep concatStrings
concatStringsSep flatten imap1 isList literalExpression mapAttrsToList
mkEnableOption mkIf mkOption mkRemovedOptionModule optional optionalAttrs
optionalString singleton types;
optionalString singleton types mkRenamedOptionModule nameValuePair
mapAttrs' listToAttrs filter;
inherit (lib.strings) match;
cfg = config.services.dovecot2;
dovecotPkg = pkgs.dovecot;
@ -12,6 +14,58 @@ let
baseDir = "/run/dovecot2";
stateDir = "/var/lib/dovecot";
sieveScriptSettings = mapAttrs' (to: from: nameValuePair "sieve_${to}" "${stateDir}/sieve/${from}") cfg.sieve.scripts;
imapSieveMailboxSettings = listToAttrs (flatten (imap1 (idx: el:
singleton {
name = "imapsieve_mailbox${toString idx}_name";
value = el.name;
} ++ optional (el.from != null) {
name = "imapsieve_mailbox${toString idx}_from";
value = el.from;
} ++ optional (el.causes != []) {
name = "imapsieve_mailbox${toString idx}_causes";
value = concatStringsSep "," el.causes;
} ++ optional (el.before != null) {
name = "imapsieve_mailbox${toString idx}_before";
value = "file:${stateDir}/imapsieve/before/${baseNameOf el.before}";
} ++ optional (el.after != null) {
name = "imapsieve_mailbox${toString idx}_after";
value = "file:${stateDir}/imapsieve/after/${baseNameOf el.after}";
}
) cfg.imapsieve.mailbox));
mkExtraConfigCollisionWarning = term: ''
You referred to ${term} in `services.dovecot2.extraConfig`.
Due to the gradual transition to structured configuration for plugin configuration, it is possible
that this will cause your plugin configuration to be ignored.
Consider setting `services.dovecot2.pluginSettings.${term}` instead.
'';
# Those settings are automatically set based on other parts
# of this module.
automaticallySetPluginSettings = [
"sieve_plugins"
"sieve_extensions"
"sieve_global_extensions"
"sieve_pipe_bin_dir"
]
++ (builtins.attrNames sieveScriptSettings)
++ (builtins.attrNames imapSieveMailboxSettings);
# The idea is to match everything that looks like `$term =`
# but not `# $term something something`
# or `# $term = some value` because those are comments.
configContainsSetting = lines: term: (match "^[^#]*\b${term}\b.*=" lines) != null;
warnAboutExtraConfigCollisions = map mkExtraConfigCollisionWarning (filter (configContainsSetting cfg.extraConfig) automaticallySetPluginSettings);
sievePipeBinScriptDirectory = pkgs.linkFarm "sieve-pipe-bins" (map (el: {
name = builtins.unsafeDiscardStringContext (baseNameOf el);
path = el;
}) cfg.sieve.pipeBins);
dovecotConf = concatStrings [
''
base_dir = ${baseDir}
@ -77,14 +131,6 @@ let
''
)
(
optionalString (cfg.sieveScripts != {}) ''
plugin {
${concatStringsSep "\n" (mapAttrsToList (to: from: "sieve_${to} = ${stateDir}/sieve/${to}") cfg.sieveScripts)}
}
''
)
(
optionalString (cfg.mailboxes != {}) ''
namespace inbox {
@ -116,33 +162,12 @@ let
''
)
# General plugin settings:
# - sieve is mostly generated here, refer to `pluginSettings` to follow
# the control flow.
''
plugin {
sieve_plugins = ${concatStringsSep " " cfg.sieve.plugins}
sieve_extensions = ${concatStringsSep " " (map (el: "+${el}") cfg.sieve.extensions)}
sieve_global_extensions = ${concatStringsSep " " (map (el: "+${el}") cfg.sieve.globalExtensions)}
''
(optionalString (cfg.imapsieve.mailbox != []) ''
${
concatStringsSep "\n" (flatten (imap1 (
idx: el:
singleton "imapsieve_mailbox${toString idx}_name = ${el.name}"
++ optional (el.from != null) "imapsieve_mailbox${toString idx}_from = ${el.from}"
++ optional (el.causes != null) "imapsieve_mailbox${toString idx}_causes = ${el.causes}"
++ optional (el.before != null) "imapsieve_mailbox${toString idx}_before = file:${stateDir}/imapsieve/before/${baseNameOf el.before}"
++ optional (el.after != null) "imapsieve_mailbox${toString idx}_after = file:${stateDir}/imapsieve/after/${baseNameOf el.after}"
)
cfg.imapsieve.mailbox))
}
'')
(optionalString (cfg.sieve.pipeBins != []) ''
sieve_pipe_bin_dir = ${pkgs.linkFarm "sieve-pipe-bins" (map (el: {
name = builtins.unsafeDiscardStringContext (baseNameOf el);
path = el;
})
cfg.sieve.pipeBins)}
'')
''
${concatStringsSep "\n" (mapAttrsToList (key: value: " ${key} = ${value}") cfg.pluginSettings)}
}
''
@ -199,6 +224,7 @@ in
{
imports = [
(mkRemovedOptionModule [ "services" "dovecot2" "package" ] "")
(mkRenamedOptionModule [ "services" "dovecot2" "sieveScripts" ] [ "services" "dovecot2" "sieve" "scripts" ])
];
options.services.dovecot2 = {
@ -337,12 +363,6 @@ in
enableDHE = mkEnableOption (lib.mdDoc "ssl_dh and generation of primes for the key exchange") // { default = true; };
sieveScripts = mkOption {
type = types.attrsOf types.path;
default = {};
description = lib.mdDoc "Sieve scripts to be executed. Key is a sequence, e.g. 'before2', 'after' etc.";
};
showPAMFailure = mkEnableOption (lib.mdDoc "showing the PAM failure message on authentication error (useful for OTPW)");
mailboxes = mkOption {
@ -376,6 +396,26 @@ in
description = lib.mdDoc "Quota limit for the user in bytes. Supports suffixes b, k, M, G, T and %.";
};
pluginSettings = mkOption {
# types.str does not coerce from packages, like `sievePipeBinScriptDirectory`.
type = types.attrsOf (types.oneOf [ types.str types.package ]);
default = {};
example = literalExpression ''
{
sieve = "file:~/sieve;active=~/.dovecot.sieve";
}
'';
description = ''
Plugin settings for dovecot in general, e.g. `sieve`, `sieve_default`, etc.
Some of the other options of this module influence the plugin settings by default, but you
can still override any plugin setting.
If you override a plugin setting, its value is cleared and you have to copy over the defaults.
'';
};
imapsieve.mailbox = mkOption {
default = [];
description = "Configure Sieve filtering rules on IMAP actions";
@ -405,14 +445,14 @@ in
};
causes = mkOption {
default = null;
default = [ ];
description = ''
Only execute the administrator Sieve scripts for the mailbox configured with services.dovecot2.imapsieve.mailbox.<name>.name when one of the listed IMAPSIEVE causes applies.
This has no effect on the user script, which is always executed no matter the cause.
'';
example = "COPY";
type = types.nullOr (types.enum [ "APPEND" "COPY" "FLAG" ]);
example = [ "COPY" "APPEND" ];
type = types.listOf (types.enum [ "APPEND" "COPY" "FLAG" ]);
};
before = mkOption {
@ -462,6 +502,12 @@ in
type = types.listOf types.str;
};
scripts = mkOption {
type = types.attrsOf types.path;
default = {};
description = lib.mdDoc "Sieve scripts to be executed. Key is a sequence, e.g. 'before2', 'after' etc.";
};
pipeBins = mkOption {
default = [];
example = literalExpression ''
@ -476,7 +522,6 @@ in
};
};
config = mkIf cfg.enable {
security.pam.services.dovecot2 = mkIf cfg.enablePAM {};
@ -501,6 +546,13 @@ in
++ optional (cfg.sieve.pipeBins != []) "sieve_extprograms";
sieve.globalExtensions = optional (cfg.sieve.pipeBins != []) "vnd.dovecot.pipe";
pluginSettings = lib.mapAttrs (n: lib.mkDefault) ({
sieve_plugins = concatStringsSep " " cfg.sieve.plugins;
sieve_extensions = concatStringsSep " " (map (el: "+${el}") cfg.sieve.extensions);
sieve_global_extensions = concatStringsSep " " (map (el: "+${el}") cfg.sieve.globalExtensions);
sieve_pipe_bin_dir = sievePipeBinScriptDirectory;
} // sieveScriptSettings // imapSieveMailboxSettings);
};
users.users = {
@ -556,7 +608,7 @@ in
# the source file and Dovecot won't try to compile it.
preStart = ''
rm -rf ${stateDir}/sieve ${stateDir}/imapsieve
'' + optionalString (cfg.sieveScripts != {}) ''
'' + optionalString (cfg.sieve.scripts != {}) ''
mkdir -p ${stateDir}/sieve
${concatStringsSep "\n" (
mapAttrsToList (
@ -569,7 +621,7 @@ in
fi
${pkgs.dovecot_pigeonhole}/bin/sievec '${stateDir}/sieve/${to}'
''
) cfg.sieveScripts
) cfg.sieve.scripts
)}
chown -R '${cfg.mailUser}:${cfg.mailGroup}' '${stateDir}/sieve'
''
@ -600,9 +652,7 @@ in
environment.systemPackages = [ dovecotPkg ];
warnings = mkIf (any isList options.services.dovecot2.mailboxes.definitions) [
"Declaring `services.dovecot2.mailboxes' as a list is deprecated and will break eval in 21.05! See the release notes for more info for migration."
];
warnings = warnAboutExtraConfigCollisions;
assertions = [
{
@ -615,8 +665,8 @@ in
message = "dovecot is configured with showPAMFailure while enablePAM is disabled";
}
{
assertion = cfg.sieveScripts != {} -> (cfg.mailUser != null && cfg.mailGroup != null);
message = "dovecot requires mailUser and mailGroup to be set when sieveScripts is set";
assertion = cfg.sieve.scripts != {} -> (cfg.mailUser != null && cfg.mailGroup != null);
message = "dovecot requires mailUser and mailGroup to be set when `sieve.scripts` is set";
}
];

View File

@ -0,0 +1,78 @@
{ lib
, config
, pkgs
, ...
}:
let
inherit (lib) mkEnableOption mkOption mkIf types;
format = pkgs.formats.toml { };
cfg = config.services.hebbot;
settingsFile = format.generate "config.toml" cfg.settings;
mkTemplateOption = templateName: mkOption {
type = types.path;
description = lib.mdDoc ''
A path to the Markdown file for the ${templateName}.
'';
};
in
{
meta.maintainers = [ lib.maintainers.raitobezarius ];
options.services.hebbot = {
enable = mkEnableOption "hebbot";
botPasswordFile = mkOption {
type = types.path;
description = lib.mdDoc ''
A path to the password file for your bot.
Consider using a path that does not end up in your Nix store
as it would be world readable.
'';
};
templates = {
project = mkTemplateOption "project template";
report = mkTemplateOption "report template";
section = mkTemplateOption "section template";
};
settings = mkOption {
type = format.type;
default = { };
description = lib.mdDoc ''
Configuration for Hebbot. For examples, see:
- <https://github.com/matrix-org/twim-config/blob/master/config.toml>
- <https://gitlab.gnome.org/Teams/Websites/thisweek.gnome.org/-/blob/main/hebbot/config.toml>
'';
};
};
config = mkIf cfg.enable {
systemd.services.hebbot = {
description = "hebbot - a TWIM-style Matrix bot written in Rust";
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
preStart = ''
ln -sf ${cfg.templates.project} ./project_template.md
ln -sf ${cfg.templates.report} ./report_template.md
ln -sf ${cfg.templates.section} ./section_template.md
ln -sf ${settingsFile} ./config.toml
'';
script = ''
export BOT_PASSWORD="$(cat $CREDENTIALS_DIRECTORY/bot-password-file)"
${lib.getExe pkgs.hebbot}
'';
serviceConfig = {
DynamicUser = true;
Restart = "on-failure";
LoadCredential = "bot-password-file:${cfg.botPasswordFile}";
RestartSec = "10s";
StateDirectory = "hebbot";
WorkingDirectory = "hebbot";
};
};
};
}
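A minimal sketch of enabling this module from a NixOS configuration (the password file location and template paths are hypothetical):
```nix
{
  services.hebbot = {
    enable = true;
    # Keep the password outside the Nix store so it is not world-readable.
    botPasswordFile = "/run/secrets/hebbot-bot-password";
    templates = {
      project = ./hebbot/project_template.md;
      report = ./hebbot/report_template.md;
      section = ./hebbot/section_template.md;
    };
    # Rendered to config.toml; see the upstream examples linked in the
    # option description for the available keys.
    settings = { };
  };
}
```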

View File

@ -21,7 +21,7 @@ in
type = with types; nullOr path;
default = null;
example = "/etc/prometheus-pve-exporter/pve.env";
description = lib.mdDoc ''
description = ''
Path to the service's environment file. This path can either be a computed path in /nix/store or a path in the local filesystem.
The environment file should NOT be stored in /nix/store as it contains passwords and/or keys in plain text.
@ -34,7 +34,7 @@ in
type = with types; nullOr path;
default = null;
example = "/etc/prometheus-pve-exporter/pve.yml";
description = lib.mdDoc ''
description = ''
Path to the service's config file. This path can either be a computed path in /nix/store or a path in the local filesystem.
The config file should NOT be stored in /nix/store as it will contain passwords and/or keys in plain text.
@ -45,46 +45,66 @@ in
'';
};
server = {
keyFile = mkOption {
type = with types; nullOr path;
default = null;
example = "/var/lib/prometheus-pve-exporter/privkey.key";
description = ''
Path to an SSL private key file for the server
'';
};
certFile = mkOption {
type = with types; nullOr path;
default = null;
example = "/var/lib/prometheus-pve-exporter/full-chain.pem";
description = ''
Path to an SSL certificate file for the server
'';
};
};
collectors = {
status = mkOption {
type = types.bool;
default = true;
description = lib.mdDoc ''
description = ''
Collect Node/VM/CT status
'';
};
version = mkOption {
type = types.bool;
default = true;
description = lib.mdDoc ''
description = ''
Collect PVE version info
'';
};
node = mkOption {
type = types.bool;
default = true;
description = lib.mdDoc ''
description = ''
Collect PVE node info
'';
};
cluster = mkOption {
type = types.bool;
default = true;
description = lib.mdDoc ''
description = ''
Collect PVE cluster info
'';
};
resources = mkOption {
type = types.bool;
default = true;
description = lib.mdDoc ''
description = ''
Collect PVE resources info
'';
};
config = mkOption {
type = types.bool;
default = true;
description = lib.mdDoc ''
description = ''
Collect PVE onboot status
'';
};
@ -102,8 +122,10 @@ in
--${optionalString (!cfg.collectors.cluster) "no-"}collector.cluster \
--${optionalString (!cfg.collectors.resources) "no-"}collector.resources \
--${optionalString (!cfg.collectors.config) "no-"}collector.config \
%d/configFile \
${toString cfg.port} ${cfg.listenAddress}
${optionalString (cfg.server.keyFile != null) "--server.keyfile ${cfg.server.keyFile}"} \
${optionalString (cfg.server.certFile != null) "--server.certfile ${cfg.server.certFile}"} \
--config.file %d/configFile \
--web.listen-address ${cfg.listenAddress}:${toString cfg.port}
'';
} // optionalAttrs (cfg.environmentFile != null) {
EnvironmentFile = cfg.environmentFile;

View File

@ -4,10 +4,79 @@ with lib;
let
cfg = config.services.dnsdist;
toLua = lib.generators.toLua {};
mkBind = cfg: toLua "${cfg.listenAddress}:${toString cfg.listenPort}";
configFile = pkgs.writeText "dnsdist.conf" ''
setLocal('${cfg.listenAddress}:${toString cfg.listenPort}')
setLocal(${mkBind cfg})
${lib.optionalString cfg.dnscrypt.enable dnscryptSetup}
${cfg.extraConfig}
'';
dnscryptSetup = ''
last_rotation = 0
cert_serial = 0
provider_key = ${toLua cfg.dnscrypt.providerKey}
cert_lifetime = ${toLua cfg.dnscrypt.certLifetime} * 60
function file_exists(name)
local f = io.open(name, "r")
return f ~= nil and io.close(f)
end
function dnscrypt_setup()
-- generate provider keys on first run
if provider_key == nil then
provider_key = "/var/lib/dnsdist/private.key"
if not file_exists(provider_key) then
generateDNSCryptProviderKeys("/var/lib/dnsdist/public.key",
"/var/lib/dnsdist/private.key")
print("DNSCrypt: generated provider keypair")
end
end
-- generate resolver certificate
local now = os.time()
generateDNSCryptCertificate(
provider_key, "/run/dnsdist/resolver.cert", "/run/dnsdist/resolver.key",
cert_serial, now - 60, now + cert_lifetime)
addDNSCryptBind(
${mkBind cfg.dnscrypt}, ${toLua cfg.dnscrypt.providerName},
"/run/dnsdist/resolver.cert", "/run/dnsdist/resolver.key")
end
function maintenance()
-- certificate rotation
local now = os.time()
local dnscrypt = getDNSCryptBind(0)
if ((now - last_rotation) > 0.9 * cert_lifetime) then
-- generate and start using a new certificate
dnscrypt:generateAndLoadInMemoryCertificate(
provider_key, cert_serial + 1,
now - 60, now + cert_lifetime)
-- stop advertising the last certificate
dnscrypt:markInactive(cert_serial)
-- remove the second to last certificate
if (cert_serial > 1) then
dnscrypt:removeInactiveCertificate(cert_serial - 1)
end
print("DNSCrypt: rotated certificate")
-- increment serial number
cert_serial = cert_serial + 1
last_rotation = now
end
end
dnscrypt_setup()
'';
in {
options = {
services.dnsdist = {
@ -15,15 +84,69 @@ in {
listenAddress = mkOption {
type = types.str;
description = lib.mdDoc "Listen IP Address";
description = lib.mdDoc "Listen IP address";
default = "0.0.0.0";
};
listenPort = mkOption {
type = types.int;
type = types.port;
description = lib.mdDoc "Listen port";
default = 53;
};
dnscrypt = {
enable = mkEnableOption (lib.mdDoc "a DNSCrypt endpoint to dnsdist");
listenAddress = mkOption {
type = types.str;
description = lib.mdDoc "Listen IP address of the endpoint";
default = "0.0.0.0";
};
listenPort = mkOption {
type = types.port;
description = lib.mdDoc "Listen port of the endpoint";
default = 443;
};
providerName = mkOption {
type = types.str;
default = "2.dnscrypt-cert.${config.networking.hostName}";
defaultText = literalExpression "2.dnscrypt-cert.\${config.networking.hostName}";
example = "2.dnscrypt-cert.myresolver";
description = lib.mdDoc ''
The name that will be given to this DNSCrypt resolver.
::: {.note}
The provider name must start with `2.dnscrypt-cert.`.
:::
'';
};
providerKey = mkOption {
type = types.nullOr types.path;
default = null;
description = lib.mdDoc ''
The filepath to the provider secret key.
If not given, a new provider key pair will be generated in
/var/lib/dnsdist on the first run.
::: {.note}
The file must be readable by the dnsdist user/group.
:::
'';
};
certLifetime = mkOption {
type = types.ints.positive;
default = 15;
description = lib.mdDoc ''
The lifetime (in minutes) of the resolver certificate.
This will be automatically rotated before expiration.
'';
};
};
extraConfig = mkOption {
type = types.lines;
default = "";
@ -35,6 +158,14 @@ in {
};
config = mkIf cfg.enable {
users.users.dnsdist = {
description = "dnsdist daemons user";
isSystemUser = true;
group = "dnsdist";
};
users.groups.dnsdist = {};
systemd.packages = [ pkgs.dnsdist ];
systemd.services.dnsdist = {
@ -42,8 +173,10 @@ in {
startLimitIntervalSec = 0;
serviceConfig = {
DynamicUser = true;
User = "dnsdist";
Group = "dnsdist";
RuntimeDirectory = "dnsdist";
StateDirectory = "dnsdist";
# upstream overrides for better nixos compatibility
ExecStartPre = [ "" "${pkgs.dnsdist}/bin/dnsdist --check-config --config ${configFile}" ];
ExecStart = [ "" "${pkgs.dnsdist}/bin/dnsdist --supervised --disable-syslog --config ${configFile}" ];

View File

@ -444,10 +444,14 @@ in {
tls_letsencrypt_cache_dir = "${dataDir}/.cache";
};
# Setup the headscale configuration in a known path in /etc to
# allow both the Server and the Client use it to find the socket
# for communication.
environment.etc."headscale/config.yaml".source = configFile;
environment = {
# Set up the headscale configuration in a known path in /etc to
# allow both the Server and the Client to use it to find the socket
# for communication.
etc."headscale/config.yaml".source = configFile;
systemPackages = [ cfg.package ];
};
users.groups.headscale = mkIf (cfg.group == "headscale") {};

View File

@ -0,0 +1,56 @@
# Netbird {#module-services-netbird}
## Quickstart {#module-services-netbird-quickstart}
The absolute minimal configuration for the netbird daemon looks like this:
```nix
services.netbird.enable = true;
```
This will set up a netbird service listening on port `51820`, associated with the
`wt0` interface.
It is strictly equivalent to setting:
```nix
services.netbird.tunnels.wt0.stateDir = "netbird";
```
The `enable` option is mainly kept for backward compatibility, as defining netbird
tunnels through the `tunnels` option is more expressive.
## Multiple connections setup {#module-services-netbird-multiple-connections}
Using the `services.netbird.tunnels` option, it is also possible to define more than
one netbird service running at the same time.
The following configuration will start a netbird daemon using the interface `wt1` and
port 51830. Its configuration file will then be located at `/var/lib/netbird-wt1/config.json`.
```nix
services.netbird.tunnels = {
wt1 = {
port = 51830;
};
};
```
To interact with it, you will need to specify the correct daemon address:
```bash
netbird --daemon-addr unix:///var/run/netbird-wt1/sock ...
```
The daemon address defaults to `unix:///var/run/netbird-<name>/sock`.
It is also possible to overwrite default options passed to the service, for
example:
```nix
services.netbird.tunnels.wt1.environment = {
NB_DAEMON_ADDR = "unix:///var/run/toto.sock";
};
```
This will set the socket used to interact with the netbird service to `/var/run/toto.sock`.

View File

@ -1,60 +1,171 @@
{ config, lib, pkgs, ... }:
with lib;
{
config,
lib,
pkgs,
...
}:
let
cfg = config.services.netbird;
inherit (lib)
attrNames
getExe
literalExpression
maintainers
mapAttrs'
mkDefault
mkEnableOption
mkIf
mkMerge
mkOption
mkPackageOption
nameValuePair
optional
versionOlder
;
inherit (lib.types)
attrsOf
port
str
submodule
;
kernel = config.boot.kernelPackages;
interfaceName = "wt0";
in {
meta.maintainers = with maintainers; [ misuzu ];
cfg = config.services.netbird;
in
{
meta.maintainers = with maintainers; [
misuzu
thubrecht
];
meta.doc = ./netbird.md;
options.services.netbird = {
enable = mkEnableOption (lib.mdDoc "Netbird daemon");
package = mkPackageOption pkgs "netbird" { };
};
config = mkIf cfg.enable {
boot.extraModulePackages = optional (versionOlder kernel.kernel.version "5.6") kernel.wireguard;
tunnels = mkOption {
type = attrsOf (
submodule (
{ name, config, ... }:
{
options = {
port = mkOption {
type = port;
default = 51820;
description = ''
Port for the ${name} netbird interface.
'';
};
environment.systemPackages = [ cfg.package ];
environment = mkOption {
type = attrsOf str;
defaultText = literalExpression ''
{
NB_CONFIG = "/var/lib/''${stateDir}/config.json";
NB_LOG_FILE = "console";
NB_WIREGUARD_PORT = builtins.toString port;
NB_INTERFACE_NAME = name;
NB_DAEMON_ADDR = "unix:///var/run/''${stateDir}/sock";
}
'';
description = ''
Environment for the netbird service, used to pass configuration options.
'';
};
networking.dhcpcd.denyInterfaces = [ interfaceName ];
stateDir = mkOption {
type = str;
default = "netbird-${name}";
description = ''
Directory storing the netbird configuration.
'';
};
};
systemd.network.networks."50-netbird" = mkIf config.networking.useNetworkd {
matchConfig = {
Name = interfaceName;
};
linkConfig = {
Unmanaged = true;
ActivationPolicy = "manual";
};
};
systemd.services.netbird = {
description = "A WireGuard-based mesh network that connects your devices into a single private network";
documentation = [ "https://netbird.io/docs/" ];
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
path = with pkgs; [
openresolv
];
serviceConfig = {
Environment = [
"NB_CONFIG=/var/lib/netbird/config.json"
"NB_LOG_FILE=console"
];
ExecStart = "${cfg.package}/bin/netbird service run";
Restart = "always";
RuntimeDirectory = "netbird";
StateDirectory = "netbird";
WorkingDirectory = "/var/lib/netbird";
};
unitConfig = {
StartLimitInterval = 5;
StartLimitBurst = 10;
};
stopIfChanged = false;
config.environment = builtins.mapAttrs (_: mkDefault) {
NB_CONFIG = "/var/lib/${config.stateDir}/config.json";
NB_LOG_FILE = "console";
NB_WIREGUARD_PORT = builtins.toString config.port;
NB_INTERFACE_NAME = name;
NB_DAEMON_ADDR = "unix:///var/run/${config.stateDir}/sock";
};
}
)
);
default = { };
description = ''
Attribute set of Netbird tunnels, each one will spawn a daemon listening on ...
'';
};
};
config = mkMerge [
(mkIf cfg.enable {
# For backwards compatibility
services.netbird.tunnels.wt0.stateDir = "netbird";
})
(mkIf (cfg.tunnels != { }) {
boot.extraModulePackages = optional (versionOlder kernel.kernel.version "5.6") kernel.wireguard;
environment.systemPackages = [ cfg.package ];
networking.dhcpcd.denyInterfaces = attrNames cfg.tunnels;
systemd.network.networks = mkIf config.networking.useNetworkd (
mapAttrs'
(
name: _:
nameValuePair "50-netbird-${name}" {
matchConfig = {
Name = name;
};
linkConfig = {
Unmanaged = true;
ActivationPolicy = "manual";
};
}
)
cfg.tunnels
);
systemd.services =
mapAttrs'
(
name:
{ environment, stateDir, ... }:
nameValuePair "netbird-${name}" {
description = "A WireGuard-based mesh network that connects your devices into a single private network";
documentation = [ "https://netbird.io/docs/" ];
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
path = with pkgs; [ openresolv ];
inherit environment;
serviceConfig = {
ExecStart = "${getExe cfg.package} service run";
Restart = "always";
RuntimeDirectory = stateDir;
StateDirectory = stateDir;
StateDirectoryMode = "0700";
WorkingDirectory = "/var/lib/${stateDir}";
};
unitConfig = {
StartLimitInterval = 5;
StartLimitBurst = 10;
};
stopIfChanged = false;
}
)
cfg.tunnels;
})
];
}

View File

@ -428,7 +428,13 @@ in
config = {
warnings = concatLists (
warnings = let
mkOneNetOnlineWarn = typeStr: name: def: lib.optional
(lib.elem "network-online.target" def.after && !(lib.elem "network-online.target" (def.wants ++ def.requires ++ def.bindsTo)))
"${name}.${typeStr} is ordered after 'network-online.target' but doesn't depend on it";
mkNetOnlineWarns = typeStr: defs: lib.concatLists (lib.mapAttrsToList (mkOneNetOnlineWarn typeStr) defs);
mkMountNetOnlineWarns = typeStr: defs: lib.concatLists (map (m: mkOneNetOnlineWarn typeStr m.what m) defs);
in concatLists (
mapAttrsToList
(name: service:
let
@ -449,39 +455,30 @@ in
]
)
cfg.services
);
)
++ (mkNetOnlineWarns "target" cfg.targets)
++ (mkNetOnlineWarns "service" cfg.services)
++ (mkNetOnlineWarns "socket" cfg.sockets)
++ (mkNetOnlineWarns "timer" cfg.timers)
++ (mkNetOnlineWarns "path" cfg.paths)
++ (mkMountNetOnlineWarns "mount" cfg.mounts)
++ (mkMountNetOnlineWarns "automount" cfg.automounts)
++ (mkNetOnlineWarns "slice" cfg.slices);
assertions = let
mkOneAssert = typeStr: name: def: {
assertion = lib.elem "network-online.target" def.after -> lib.elem "network-online.target" (def.wants ++ def.requires ++ def.bindsTo);
message = "${name}.${typeStr} is ordered after 'network-online.target' but doesn't depend on it";
};
mkAsserts = typeStr: lib.mapAttrsToList (mkOneAssert typeStr);
mkMountAsserts = typeStr: map (m: mkOneAssert typeStr m.what m);
in mkMerge [
(concatLists (
mapAttrsToList
(name: service:
map (message: {
assertion = false;
inherit message;
}) (concatLists [
(optional ((builtins.elem "network-interfaces.target" service.after) || (builtins.elem "network-interfaces.target" service.wants))
"Service '${name}.service' is using the deprecated target network-interfaces.target, which no longer exists. Using network.target is recommended instead."
)
])
)
cfg.services
))
(mkAsserts "target" cfg.targets)
(mkAsserts "service" cfg.services)
(mkAsserts "socket" cfg.sockets)
(mkAsserts "timer" cfg.timers)
(mkAsserts "path" cfg.paths)
(mkMountAsserts "mount" cfg.mounts)
(mkMountAsserts "automount" cfg.automounts)
(mkAsserts "slice" cfg.slices)
];
assertions = concatLists (
mapAttrsToList
(name: service:
map (message: {
assertion = false;
inherit message;
}) (concatLists [
(optional ((builtins.elem "network-interfaces.target" service.after) || (builtins.elem "network-interfaces.target" service.wants))
"Service '${name}.service' is using the deprecated target network-interfaces.target, which no longer exists. Using network.target is recommended instead."
)
])
)
cfg.services
);
system.build.units = cfg.units;
@ -658,6 +655,7 @@ in
systemd.services.systemd-udev-settle.restartIfChanged = false; # Causes long delays in nixos-rebuild
systemd.targets.local-fs.unitConfig.X-StopOnReconfiguration = true;
systemd.targets.remote-fs.unitConfig.X-StopOnReconfiguration = true;
systemd.targets.network-online.wantedBy = [ "multi-user.target" ];
systemd.services.systemd-importd.environment = proxy_env;
systemd.services.systemd-pstore.wantedBy = [ "sysinit.target" ]; # see #81138

View File

@ -242,7 +242,7 @@ in {
discourse = handleTest ./discourse.nix {};
dnscrypt-proxy2 = handleTestOn ["x86_64-linux"] ./dnscrypt-proxy2.nix {};
dnscrypt-wrapper = runTestOn ["x86_64-linux"] ./dnscrypt-wrapper;
dnsdist = handleTest ./dnsdist.nix {};
dnsdist = import ./dnsdist.nix { inherit pkgs runTest; };
doas = handleTest ./doas.nix {};
docker = handleTestOn ["aarch64-linux" "x86_64-linux"] ./docker.nix {};
docker-rootless = handleTestOn ["aarch64-linux" "x86_64-linux"] ./docker-rootless.nix {};

View File

@ -1,48 +1,113 @@
import ./make-test-python.nix (
{ pkgs, ... }: {
name = "dnsdist";
meta = with pkgs.lib; {
maintainers = with maintainers; [ jojosch ];
};
{ pkgs, runTest }:
nodes.machine = { pkgs, lib, ... }: {
services.bind = {
enable = true;
extraOptions = "empty-zones-enable no;";
zones = lib.singleton {
name = ".";
master = true;
file = pkgs.writeText "root.zone" ''
$TTL 3600
. IN SOA ns.example.org. admin.example.org. ( 1 3h 1h 1w 1d )
. IN NS ns.example.org.
let
ns.example.org. IN A 192.168.0.1
ns.example.org. IN AAAA abcd::1
inherit (pkgs) lib;
1.0.168.192.in-addr.arpa IN PTR ns.example.org.
'';
};
};
services.dnsdist = {
enable = true;
listenPort = 5353;
extraConfig = ''
newServer({address="127.0.0.1:53", name="local-bind"})
baseConfig = {
networking.nameservers = [ "::1" ];
services.bind = {
enable = true;
extraOptions = "empty-zones-enable no;";
zones = lib.singleton {
name = ".";
master = true;
file = pkgs.writeText "root.zone" ''
$TTL 3600
. IN SOA ns.example.org. admin.example.org. ( 1 3h 1h 1w 1d )
. IN NS ns.example.org.
ns.example.org. IN A 192.168.0.1
ns.example.org. IN AAAA abcd::1
1.0.168.192.in-addr.arpa IN PTR ns.example.org.
'';
};
environment.systemPackages = with pkgs; [ dig ];
};
services.dnsdist = {
enable = true;
listenPort = 5353;
extraConfig = ''
newServer({address="127.0.0.1:53", name="local-bind"})
'';
};
};
in
{
base = runTest {
name = "dnsdist-base";
meta.maintainers = with lib.maintainers; [ jojosch ];
nodes.machine = baseConfig;
testScript = ''
machine.wait_for_unit("bind.service")
machine.wait_for_open_port(53)
machine.succeed("dig @127.0.0.1 +short -x 192.168.0.1 | grep -qF ns.example.org")
machine.succeed("host -p 53 192.168.0.1 | grep -qF ns.example.org")
machine.wait_for_unit("dnsdist.service")
machine.wait_for_open_port(5353)
machine.succeed("dig @127.0.0.1 -p 5353 +short -x 192.168.0.1 | grep -qF ns.example.org")
machine.succeed("host -p 5353 192.168.0.1 | grep -qF ns.example.org")
'';
}
)
};
dnscrypt = runTest {
name = "dnsdist-dnscrypt";
meta.maintainers = with lib.maintainers; [ rnhmjoj ];
nodes.server = lib.mkMerge [
baseConfig
{
networking.firewall.allowedTCPPorts = [ 443 ];
networking.firewall.allowedUDPPorts = [ 443 ];
services.dnsdist.dnscrypt.enable = true;
services.dnsdist.dnscrypt.providerKey = "${./dnscrypt-wrapper/secret.key}";
}
];
nodes.client = {
services.dnscrypt-proxy2.enable = true;
services.dnscrypt-proxy2.upstreamDefaults = false;
services.dnscrypt-proxy2.settings =
{ server_names = [ "server" ];
listen_addresses = [ "[::1]:53" ];
cache = false;
# Computed using https://dnscrypt.info/stamps/
static.server.stamp =
"sdns://AQAAAAAAAAAADzE5Mi4xNjguMS4yOjQ0MyAUQdg6_RIIpK6pHkINhrv7nxwIG5c7b_m5NJVT3A1AXRYyLmRuc2NyeXB0LWNlcnQuc2VydmVy";
};
networking.nameservers = [ "::1" ];
};
testScript = ''
with subtest("The DNSCrypt server is accepting connections"):
server.wait_for_unit("bind.service")
server.wait_for_unit("dnsdist.service")
server.wait_for_open_port(443)
almost_expiration = server.succeed("date --date '14min'").strip()
with subtest("The DNSCrypt client can connect to the server"):
client.wait_until_succeeds("journalctl -u dnscrypt-proxy2 --grep '\[server\] OK'")
with subtest("DNS queries over UDP are working"):
client.wait_for_open_port(53)
client.succeed("host -U 192.168.0.1 | grep -qF ns.example.org")
with subtest("DNS queries over TCP are working"):
client.wait_for_open_port(53)
client.succeed("host -T 192.168.0.1 | grep -qF ns.example.org")
with subtest("The server rotates the ephemeral keys"):
server.succeed(f"date -s '{almost_expiration}'")
client.succeed(f"date -s '{almost_expiration}'")
server.wait_until_succeeds("journalctl -u dnsdist --grep 'rotated certificate'")
with subtest("The client can still connect to the server"):
client.wait_until_succeeds("host -T 192.168.0.1")
client.wait_until_succeeds("host -U 192.168.0.1")
'';
};
}
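The rewritten test factors the node configuration into `baseConfig` and reuses it in both `runTest` invocations, extending it for the DNSCrypt server with `lib.mkMerge`. A stripped-down sketch of that sharing pattern (illustrative option values only):

```nix
let
  lib = import <nixpkgs/lib>;
  baseConfig = {
    networking.nameservers = [ "::1" ];
    services.dnsdist.enable = true;
  };
in
{
  # first test: the shared configuration as-is
  nodes.machine = baseConfig;
  # second test: shared configuration plus test-specific additions,
  # combined by the NixOS module system when the test is built
  nodes.server = lib.mkMerge [
    baseConfig
    { networking.firewall.allowedUDPPorts = [ 443 ]; }
  ];
}
```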

View File

@ -8,7 +8,7 @@
, lib
, libmicrohttpd
, libusb-compat-0_1
, llvmPackages_10
, llvmPackages
, qtcharts
, qtdeclarative
, qtquickcontrols2
@ -39,7 +39,7 @@ mkDerivation rec {
gcc12.cc.lib
libmicrohttpd
libusb-compat-0_1
llvmPackages_10.openmp
llvmPackages.openmp
qtcharts
qtdeclarative
qtquickcontrols2

View File

@ -1,4 +1,4 @@
{ lib, stdenv, fetchFromGitHub, boost, cairo, libGL, lv2, pkg-config }:
{ lib, stdenv, fetchFromGitHub, fetchpatch, boost, cairo, libGL, lv2, pkg-config }:
stdenv.mkDerivation rec {
pname = "string-machine";
@ -12,6 +12,16 @@ stdenv.mkDerivation rec {
fetchSubmodules = true;
};
patches = [
# gcc-13 compatibility fix:
# https://github.com/jpcima/string-machine/pull/36
(fetchpatch {
name = "gcc-13.patch";
url = "https://github.com/jpcima/string-machine/commit/e1f9c70da46e43beb2654b509bc824be5601a0a5.patch";
hash = "sha256-eS28wBuFjbx2tEb9gtVRZXfK0w2o1RCFTouNf8Adq+k=";
})
];
postPatch = ''
patchShebangs ./dpf/utils/generate-ttl.sh
'';

View File

@ -2,11 +2,11 @@
let
pname = "ledger-live-desktop";
version = "2.73.1";
version = "2.75.0";
src = fetchurl {
url = "https://download.live.ledger.com/${pname}-${version}-linux-x86_64.AppImage";
hash = "sha256-aHA65NLX3tlg8nLnQOOG1TuvcJP57HbQWruiBMvDJ10=";
hash = "sha256-sVaQbfpgHgd1OZgR+R0PUmNENfDOcNRfvO2AVKFyDqM=";
};
appimageContents = appimageTools.extractType2 {
@ -34,5 +34,6 @@ appimageTools.wrapType2 rec {
maintainers = with maintainers; [ andresilva thedavidmeister nyanloutre RaghavSood th0rgal ];
platforms = [ "x86_64-linux" ];
mainProgram = "ledger-live-desktop";
sourceProvenance = with sourceTypes; [ binaryNativeCode ];
};
}

View File

@ -2,16 +2,16 @@
buildGoModule rec {
pname = "atmos";
version = "1.54.0";
version = "1.55.0";
src = fetchFromGitHub {
owner = "cloudposse";
repo = pname;
rev = "v${version}";
sha256 = "sha256-WGOuFqkrX3/5RINdsegTSxJ28W4iEMPuLVrCjtmCkTw=";
sha256 = "sha256-JRvPRlq4H9PcELozlvIE065LSNIxrh/Ej+2GXO8s2x4=";
};
vendorHash = "sha256-kR13BVbjgQoEjb2xwH8LkxLeMp30h6mbWum9RbzzSGE=";
vendorHash = "sha256-YBcVsuBL5n5ycaY1a0uxlDKX7YyrtF16gi17wCK1Jio=";
ldflags = [ "-s" "-w" "-X github.com/cloudposse/atmos/cmd.Version=v${version}" ];

View File

@ -2,13 +2,13 @@
buildGoModule rec {
pname = "fn";
version = "0.6.28";
version = "0.6.29";
src = fetchFromGitHub {
owner = "fnproject";
repo = "cli";
rev = version;
hash = "sha256-/ifr/sSaChZKRe9wCcjURhqZl2/JhIMewZSlJiit/7w=";
hash = "sha256-hN9Kok2+ZNYZsG+3ffzr1jGfIMg99JzgzC0x585KDF4=";
};
vendorHash = null;

View File

@ -2,13 +2,13 @@
buildGoModule rec {
pname = "kn";
version = "1.12.0";
version = "1.13.0";
src = fetchFromGitHub {
owner = "knative";
repo = "client";
rev = "knative-v${version}";
sha256 = "sha256-Xp5PpHIcjh02qesnyrz53yydIAClx0OrBE75Sz5pifg=";
sha256 = "sha256-irMipYDYMyA0l9d7tI1wS7XsxGWjBzTvxmhpKM1gLW8=";
};
vendorHash = null;

View File

@ -2,13 +2,13 @@
buildGoModule rec {
pname = "kubecolor";
version = "0.2.0";
version = "0.2.2";
src = fetchFromGitHub {
owner = pname;
repo = pname;
rev = "v${version}";
sha256 = "sha256-WDnuEC2uXo7wybOh0wRiKZt70JMrWteWINuZ+C7lbo8=";
sha256 = "sha256-zXglsfPsJi9DVxlRPniSBsdF1xEMYqqGr46ThpQj3gQ=";
};
vendorHash = "sha256-uf7nBnS1wmbz4xcVA5qF82QMPsLdSucje1NNaPyheCw=";
@ -20,6 +20,6 @@ buildGoModule rec {
homepage = "https://github.com/kubecolor/kubecolor";
changelog = "https://github.com/kubecolor/kubecolor/releases/tag/v${version}";
license = licenses.mit;
maintainers = with maintainers; [ ivankovnatsky SuperSandro2000 ];
maintainers = with maintainers; [ ivankovnatsky SuperSandro2000 applejag ];
};
}

View File

@ -167,8 +167,8 @@ rec {
mkTerraform = attrs: pluggable (generic attrs);
terraform_1 = mkTerraform {
version = "1.7.0";
hash = "sha256-oF0osIC/ti9ZkWDTBIQuBHreIBVfeo4f/naGFdaMxJE=";
version = "1.7.1";
hash = "sha256-e+YXOqXgiUXtm6P8PulZowRK0OLA8ekmS+MZRQP/srg=";
vendorHash = "sha256-77W0x6DENB+U3yB4LI3PwJU9bTuH7Eqz2a9FNoERuJg=";
patches = [ ./provider-path-0_15.patch ];
passthru = {

View File

@ -5,16 +5,16 @@
buildGoModule rec {
pname = "zarf";
version = "0.32.1";
version = "0.32.2";
src = fetchFromGitHub {
owner = "defenseunicorns";
repo = "zarf";
rev = "v${version}";
hash = "sha256-A5GfXdm13u82yW8mTYDX+H6idCBSeYML3C56t1TD2ec=";
hash = "sha256-LQe/M7uX6VKA7q040wFWKYQ96M1Ynp37uglENqvyAaU=";
};
vendorHash = "sha256-7UBqO1O6o/eM04/bZpcGgttLhSoemcBBly3IZbATAz0=";
vendorHash = "sha256-HAIupM30qmOqol661iFm2lNjukoKBvYY1tPTnc0u3lg=";
proxyVendor = true;
preBuild = ''

View File

@ -33,14 +33,14 @@ let
}.${system} or throwSystem;
hash = {
x86_64-linux = "sha256-tn3vumHjRt5bhNnFA0k8WaJmpCQx7SJea89xf1NGhME=";
x86_64-linux = "sha256-kSJFKKqiSTa7sfHwZ3N7O01Eoi4cr86X7Dxkg+pzSgU=";
}.${system} or throwSystem;
displayname = "XPipe";
in stdenvNoCC.mkDerivation rec {
pname = "xpipe";
version = "1.7.14";
version = "1.7.16";
src = fetchzip {
url = "https://github.com/xpipe-io/xpipe/releases/download/${version}/xpipe-portable-linux-${arch}.tar.gz";

View File

@ -10,13 +10,13 @@
stdenv.mkDerivation (finalAttrs: {
pname = "ANTs";
version = "2.5.0";
version = "2.5.1";
src = fetchFromGitHub {
owner = "ANTsX";
repo = "ANTs";
rev = "refs/tags/v${finalAttrs.version}";
hash = "sha256-rSibcsprhMC1qsuZN8ou32QPLf8n62BiDzpnTRWRx0Q=";
hash = "sha256-q252KC6SKUN5JaQWAcsVmDprVkLXDvkYzNhC7yHJNpk=";
};
nativeBuildInputs = [

View File

@ -62,6 +62,12 @@ stdenv.mkDerivation rec {
# should come from or be proposed to upstream. This list will probably never
# be empty since dependencies update all the time.
packageUpgradePatches = [
# https://github.com/sagemath/sage/pull/37123, to land in 10.3.beta7
(fetchpatch {
name = "scipy-1.12-upgrade.patch";
url = "https://github.com/sagemath/sage/commit/54eec464e9fdf18b411d9148aecb918178e95909.diff";
sha256 = "sha256-9wyNrcSfF6mYFTIV4ev2OdD7igb0AeyZZYWSc/+JrIU=";
})
];
patches = nixPatches ++ bugfixPatches ++ packageUpgradePatches;

View File

@ -10,13 +10,13 @@
stdenv.mkDerivation rec {
pname = "obs-pipewire-audio-capture";
version = "1.1.2";
version = "1.1.3";
src = fetchFromGitHub {
owner = "dimtpap";
repo = pname;
rev = version;
sha256 = "sha256-9HPQ17swMlsCnKkYQXIUzEbx2vKuBUfGf58Up2hHVGI=";
sha256 = "sha256-dL/+Y1uaD+7EY0UNWbxvh1TTLYfgk07qCqLLGvfzWZk=";
};
nativeBuildInputs = [ cmake ninja pkg-config ];

View File

@ -1,7 +1,8 @@
{ lib
, stdenv
, buildPackages
, targetPackages
, pkgsBuildHost
, pkgsBuildTarget
, pkgsTargetTarget
}:
rec {
@ -16,26 +17,26 @@ rec {
# As a workaround for https://github.com/rust-lang/rust/issues/89626 use lld on pkgsStatic aarch64
shouldUseLLD = platform: platform.isAarch64 && platform.isStatic && !stdenv.isDarwin;
ccForBuild = "${buildPackages.stdenv.cc}/bin/${buildPackages.stdenv.cc.targetPrefix}cc";
cxxForBuild = "${buildPackages.stdenv.cc}/bin/${buildPackages.stdenv.cc.targetPrefix}c++";
ccForBuild = "${pkgsBuildHost.stdenv.cc}/bin/${pkgsBuildHost.stdenv.cc.targetPrefix}cc";
cxxForBuild = "${pkgsBuildHost.stdenv.cc}/bin/${pkgsBuildHost.stdenv.cc.targetPrefix}c++";
linkerForBuild = ccForBuild;
ccForHost = "${stdenv.cc}/bin/${stdenv.cc.targetPrefix}cc";
cxxForHost = "${stdenv.cc}/bin/${stdenv.cc.targetPrefix}c++";
linkerForHost = if shouldUseLLD stdenv.targetPlatform
&& !stdenv.cc.bintools.isLLVM
then "${buildPackages.lld}/bin/ld.lld"
then "${pkgsBuildHost.llvmPackages.bintools}/bin/${stdenv.cc.targetPrefix}ld.lld"
else ccForHost;
# Unfortunately we must use the dangerous `targetPackages` here
# Unfortunately we must use the dangerous `pkgsTargetTarget` here
# because hooks are artificially phase-shifted one slot earlier
# (they go in nativeBuildInputs, so the hostPlatform looks like
# a targetPlatform to them).
ccForTarget = "${targetPackages.stdenv.cc}/bin/${targetPackages.stdenv.cc.targetPrefix}cc";
cxxForTarget = "${targetPackages.stdenv.cc}/bin/${targetPackages.stdenv.cc.targetPrefix}c++";
linkerForTarget = if shouldUseLLD targetPackages.stdenv.targetPlatform
&& !targetPackages.stdenv.cc.bintools.isLLVM # whether stdenv's linker is lld already
then "${buildPackages.lld}/bin/ld.lld"
ccForTarget = "${pkgsTargetTarget.stdenv.cc}/bin/${pkgsTargetTarget.stdenv.cc.targetPrefix}cc";
cxxForTarget = "${pkgsTargetTarget.stdenv.cc}/bin/${pkgsTargetTarget.stdenv.cc.targetPrefix}c++";
linkerForTarget = if shouldUseLLD pkgsTargetTarget.stdenv.targetPlatform
&& !pkgsTargetTarget.stdenv.cc.bintools.isLLVM # whether stdenv's linker is lld already
then "${pkgsBuildTarget.llvmPackages.bintools}/bin/${pkgsTargetTarget.stdenv.cc.targetPrefix}ld.lld"
else ccForTarget;
rustBuildPlatform = stdenv.buildPlatform.rust.rustcTarget;
@ -56,9 +57,9 @@ rec {
setEnv = ''
env \
''
# Due to a bug in how splicing and targetPackages works, in
# situations where targetPackages is irrelevant
# targetPackages.stdenv.cc is often simply wrong. We must omit
# Due to a bug in how splicing and pkgsTargetTarget works, in
# situations where pkgsTargetTarget is irrelevant
# pkgsTargetTarget.stdenv.cc is often simply wrong. We must omit
# the following lines when rustTargetPlatform collides with
# rustHostPlatform.
+ lib.optionalString (rustTargetPlatform != rustHostPlatform) ''
@ -74,8 +75,8 @@ rec {
"CXX_${stdenv.buildPlatform.rust.cargoEnvVarTarget}=${cxxForBuild}" \
"CARGO_TARGET_${stdenv.buildPlatform.rust.cargoEnvVarTarget}_LINKER=${linkerForBuild}" \
"CARGO_BUILD_TARGET=${rustBuildPlatform}" \
"HOST_CC=${buildPackages.stdenv.cc}/bin/cc" \
"HOST_CXX=${buildPackages.stdenv.cc}/bin/c++" \
"HOST_CC=${pkgsBuildHost.stdenv.cc}/bin/cc" \
"HOST_CXX=${pkgsBuildHost.stdenv.cc}/bin/c++" \
'';
};
} // lib.mapAttrs (old: new: platform:
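For context on the rename above: the spliced package sets in nixpkgs are named `pkgs<RunsOn><EmitsFor>`, and the two old aliases are alternative names for two of those sets, so the substitution does not change which compilers get selected. A small sketch of the correspondence, shown for the C compiler (a simplification assuming standard nixpkgs cross-compilation splicing):

```nix
# How the old aliases relate to the sets used after this change.
{ buildPackages, targetPackages, pkgsBuildHost, pkgsTargetTarget, ... }:
{
  # runs on the build platform, produces code for the host platform
  ccForBuildOld = "${buildPackages.stdenv.cc}/bin/cc";   # old spelling
  ccForBuildNew = "${pkgsBuildHost.stdenv.cc}/bin/cc";   # spelling used above

  # runs on the target platform, produces code for the target platform
  ccForTargetOld = "${targetPackages.stdenv.cc}/bin/cc"; # old spelling
  ccForTargetNew = "${pkgsTargetTarget.stdenv.cc}/bin/cc";
}
```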

View File

@ -0,0 +1,44 @@
{ lib
, stdenvNoCC
, fetchurl
, appimageTools
}:
stdenvNoCC.mkDerivation (finalAttrs: {
pname = "deskreen";
version = "2.0.4";
src = fetchurl {
url = "https://github.com/pavlobu/deskreen/releases/download/v${finalAttrs.version}/Deskreen-${finalAttrs.version}.AppImage";
hash = "sha256-0jI/mbXaXanY6ay2zn+dPWGvsqWRcF8aYHRvfGVsObE=";
};
deskreenUnwrapped = appimageTools.wrapType2 {
name = "deskreen";
src = finalAttrs.src;
};
buildInputs = [
finalAttrs.deskreenUnwrapped
];
dontUnpack = true;
dontBuild = true;
installPhase = ''
runHook preInstall
mkdir -p $out/bin
ln -s ${finalAttrs.deskreenUnwrapped}/bin/deskreen $out/bin/deskreen
runHook postInstall
'';
meta = {
description = "Turn any device into a secondary screen for your computer";
homepage = "https://deskreen.com";
license = lib.licenses.agpl3;
mainProgram = "deskreen";
maintainers = with lib.maintainers; [ leo248 drupol ];
platforms = lib.platforms.linux;
};
})

View File

@ -1,19 +1,19 @@
From c9ca58262045b82537bd8284d426c91582cc7ed7 Mon Sep 17 00:00:00 2001
From f56083d95304752c45cc569fe42c3b0d7a2430bd Mon Sep 17 00:00:00 2001
From: Philipp Rintz <git@rintz.net>
Date: Thu, 28 Sep 2023 21:22:18 +0200
Date: Wed, 24 Jan 2024 22:11:50 +0100
Subject: [PATCH] uncommited
---
pocket_updater.csproj | 1 +
pupdate.csproj | 1 +
1 file changed, 1 insertion(+)
diff --git a/pocket_updater.csproj b/pocket_updater.csproj
index 30f77d5..ad6bf69 100644
--- a/pocket_updater.csproj
+++ b/pocket_updater.csproj
diff --git a/pupdate.csproj b/pupdate.csproj
index a6f59a8..0563137 100644
--- a/pupdate.csproj
+++ b/pupdate.csproj
@@ -12,6 +12,7 @@
<Authors>Matt Pannella</Authors>
<Product>Analogue Pocket Updater Utility</Product>
<Product>Pupdate</Product>
<RepositoryUrl>https://github.com/mattpannella/pocket-updater-utility</RepositoryUrl>
+ <RuntimeIdentifier>@RuntimeIdentifier@</RuntimeIdentifier>
</PropertyGroup>

View File

@ -1,24 +1,24 @@
{ pkgs ? import <nixpkgs> { system = builtins.currentSystem; }
, stdenv ? pkgs.stdenv
, lib ? pkgs.lib
, fetchFromGitHub ? pkgs.fetchFromGitHub
, buildDotnetModule ? pkgs.buildDotnetModule
, dotnetCorePackages ? pkgs.dotnetCorePackages
, openssl ? pkgs.openssl
, zlib ? pkgs.zlib
, hostPlatform ? stdenv.hostPlatform
, nix-update-script ? stdenv.nix-update-script
{ pkgs
, stdenv
, lib
, fetchFromGitHub
, buildDotnetModule
, dotnetCorePackages
, openssl
, zlib
, hostPlatform
, nix-update-script
}:
buildDotnetModule rec {
pname = "pocket-updater-utility";
version = "2.43.1";
pname = "pupdate";
version = "3.0.0";
src = fetchFromGitHub {
owner = "mattpannella";
repo = "${pname}";
rev = "${version}";
hash = "sha256-ur7BEsG4MIEcdiRt5BkK4GCa7aVkrh2Djd10KhaWf3U=";
hash = "sha256-Lr3orYOSzFQCLduBhp2MtGbgiKtFB1CgP/iMMySSvEk=";
};
buildInputs = [
@ -30,17 +30,17 @@ buildDotnetModule rec {
# See https://github.com/NixOS/nixpkgs/pull/196648/commits/0fb17c04fe34ac45247d35a1e4e0521652d9c494
patches = [ ./add-runtime-identifier.patch ];
postPatch = ''
substituteInPlace pocket_updater.csproj \
substituteInPlace pupdate.csproj \
--replace @RuntimeIdentifier@ "${dotnetCorePackages.systemToDotnetRid hostPlatform.system}"
'';
projectFile = "pocket_updater.csproj";
projectFile = "pupdate.csproj";
nugetDeps = ./deps.nix;
selfContainedBuild = true;
executables = [ "pocket_updater" ];
executables = [ "pupdate" ];
dotnetFlags = [
"-p:PackageRuntime=${dotnetCorePackages.systemToDotnetRid stdenv.hostPlatform.system}"
@ -54,11 +54,11 @@ buildDotnetModule rec {
};
meta = with lib; {
homepage = "https://github.com/mattpannella/pocket-updater-utility";
description = "Analogue Pocket Updater Utility";
homepage = "https://github.com/mattpannella/pupdate";
description = "Pupdate - A thing for updating your Analogue Pocket ";
license = licenses.mit;
platforms = platforms.linux;
maintainers = with maintainers; [ p-rintz ];
mainProgram = "pocket_updater";
mainProgram = "pupdate";
};
}
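With the `? pkgs.…` default arguments dropped above, the expression can no longer be built standalone from the file itself; it is meant to be instantiated through `callPackage`, which supplies every argument from the package set. A sketch of that usage (the path is illustrative, not the file's real location in the tree):

```nix
# Instantiate the rewritten expression; callPackage fills stdenv, lib,
# buildDotnetModule, etc. from the package set instead of the removed defaults.
let pkgs = import <nixpkgs> { }; in
pkgs.callPackage ./pupdate.nix { }
```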

View File

@ -0,0 +1,32 @@
{ lib
, fetchFromGitHub
, rustPlatform
}:
rustPlatform.buildRustPackage rec {
pname = "rcp";
version = "0.5.0";
src = fetchFromGitHub {
owner = "wykurz";
repo = "rcp";
rev = "v${version}";
hash = "sha256-5CqQwTJAQhO9mLfMan6JhNY3N2gfwR6wmGtVBYzVxuc=";
};
cargoHash = "sha256-sF7RjuVRNfJa3vw71S+BKIBLeWT6biekAE/56BsZYkw=";
checkFlags = [
# this test also sets setuid permissions on a test file (3oXXX) which doesn't work in a sandbox
"--skip=copy::copy_tests::check_default_mode"
];
meta = with lib; {
changelog = "https://github.com/wykurz/rcp/releases/tag/v${version}";
description = "Tools to efficiently copy, remove and link large filesets";
homepage = "https://github.com/wykurz/rcp";
license = with licenses; [ mit ];
mainProgram = "rcp";
maintainers = with maintainers; [ wykurz ];
};
}

File diff suppressed because it is too large

View File

@ -10,9 +10,7 @@
, libxcb
, python3
, libiconv
, AppKit
, CoreText
, Security
, darwin
, fira-code
, fontconfig
, harfbuzz
@ -20,33 +18,30 @@
rustPlatform.buildRustPackage rec {
pname = "silicon";
version = "0.5.1";
version = "0.5.2";
src = fetchFromGitHub {
owner = "Aloxaf";
repo = "silicon";
rev = "v${version}";
hash = "sha256-RuzaRJr1n21MbHSeHBt8CjEm5AwbDbvX9Nw5PeBTl+w=";
hash = "sha256-fk1qaR7z9taOuNmjMCSdq7RybgV/3u7njU0Gehb98Lk=";
};
patches = [
# fix build on aarch64-linux, see https://github.com/Aloxaf/silicon/pull/210
(fetchpatch {
url = "https://github.com/Aloxaf/silicon/commit/f666c95d3dab85a81d60067e2f25d29ee8ab59e7.patch";
hash = "sha256-L6tF9ndC38yVn5ZNof1TMxSImmaqZ6bJ/NYhb0Ebji4=";
})
];
cargoLock = {
lockFile = ./Cargo.lock;
outputHashes = {
"pathfinder_simd-0.5.1" = "sha256-jQCa8TpGHLWvDT9kXWmlw51QtpKImPlWi082Va721cE=";
"pathfinder_simd-0.5.2" = "sha256-b9RuxtTRKJ9Bnh0AWkoInRVrK/a3KV/2DCbXhN63yF0=";
};
};
buildInputs = [ expat freetype fira-code fontconfig harfbuzz ]
++ lib.optionals stdenv.isLinux [ libxcb ]
++ lib.optionals stdenv.isDarwin [ libiconv AppKit CoreText Security ];
++ lib.optionals stdenv.isDarwin (with darwin.apple_sdk.frameworks; [
libiconv
AppKit
CoreText
Security
]);
nativeBuildInputs = [ cmake pkg-config rustPlatform.bindgenHook ]
++ lib.optionals stdenv.isLinux [ python3 ];

View File

@ -10,16 +10,16 @@
buildGoModule rec {
pname = "terraform-plugin-docs";
version = "0.17.0";
version = "0.18.0";
src = fetchFromGitHub {
owner = "hashicorp";
repo = "terraform-plugin-docs";
rev = "refs/tags/v${version}";
sha256 = "sha256-ID+4Pz6SUPzZTZYX6IHn/U02Ffw95he/gogV0mNA2OA=";
sha256 = "sha256-8rNoH01fWNGWH3cSqqFCGetl5S/d3yVh+pmIzg79g3k=";
};
vendorHash = "sha256-HseQBCvflmnlKX4PygWejPbyXRJmNUyl2K2//b4/tik=";
vendorHash = "sha256-9ddxgceILBP1NqbGr08cfdPs0BHSjQWN0MkFA5oqyPE=";
nativeBuildInputs = [ makeWrapper ];

View File

@ -17,13 +17,13 @@
assert lib.elem lineEditingLibrary [ "isocline" "readline" ];
stdenv.mkDerivation (finalAttrs: {
pname = "trealla";
version = "2.32.13";
version = "2.34.0";
src = fetchFromGitHub {
owner = "trealla-prolog";
repo = "trealla";
rev = "v${finalAttrs.version}";
hash = "sha256-Meyy6muzJt/Lg76sa+nwZXCOhfeMTwO4VYTXO/o20XI=";
hash = "sha256-cqIiPeQO/M8MtpHRomN/fzxIq7TgUwZSvL3PFCVsEnY=";
};
postPatch = ''
@ -38,9 +38,9 @@ stdenv.mkDerivation (finalAttrs: {
];
buildInputs =
lib.optional enableFFI libffi
++ lib.optional enableSSL openssl
++ lib.optional (lineEditingLibrary == "readline") readline;
lib.optionals enableFFI [ libffi ]
++ lib.optionals enableSSL [ openssl ]
++ lib.optionals (lineEditingLibrary == "readline") [ readline ];
nativeCheckInputs = lib.optionals finalAttrs.finalPackage.doCheck [ valgrind ];
@ -49,10 +49,10 @@ stdenv.mkDerivation (finalAttrs: {
makeFlags = [
"GIT_VERSION=\"v${finalAttrs.version}\""
]
++ lib.optional (lineEditingLibrary == "isocline") "ISOCLINE=1"
++ lib.optional (!enableFFI) "NOFFI=1"
++ lib.optional (!enableSSL) "NOSSL=1"
++ lib.optional enableThreads "THREADS=1";
++ lib.optionals (lineEditingLibrary == "isocline") [ "ISOCLINE=1" ]
++ lib.optionals (!enableFFI) [ "NOFFI=1" ]
++ lib.optionals (!enableSSL) [ "NOSSL=1" ]
++ lib.optionals enableThreads [ "THREADS=1" ];
enableParallelBuilding = true;
@ -66,7 +66,7 @@ stdenv.mkDerivation (finalAttrs: {
checkFlags = [
"test"
] ++ lib.optional checkLeaks "leaks";
] ++ lib.optionals checkLeaks [ "leaks" ];
passthru.updateScript = gitUpdater {
rev-prefix = "v";

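Several hunks above replace `lib.optional` with `lib.optionals` plus an explicit list. The two forms are equivalent for a single element, which is why the change is purely stylistic; a minimal comparison that can be evaluated in `nix repl`:

```nix
let lib = import <nixpkgs/lib>; in
{
  a = lib.optional true "THREADS=1";         # wraps the value:  [ "THREADS=1" ]
  b = lib.optionals true [ "THREADS=1" ];    # passes the list:  [ "THREADS=1" ]
  c = lib.optionals false [ "THREADS=1" ];   # condition false:  [ ]
}
```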
View File

@ -5,11 +5,11 @@
stdenvNoCC.mkDerivation rec {
pname = "lxgw-neoxihei";
version = "1.108";
version = "1.109";
src = fetchurl {
url = "https://github.com/lxgw/LxgwNeoXiHei/releases/download/v${version}/LXGWNeoXiHei.ttf";
hash = "sha256-Wx2fmvIEHgimu7BJ49xWK7c08Rsf3fsjMLTdyedgK3I=";
hash = "sha256-LnbkHmEyxqv1W/qWeCVQGHKLuv6qX3P8zUMUxx61t38=";
};
dontUnpack = true;

View File

@ -6,13 +6,13 @@
stdenvNoCC.mkDerivation (self: {
name = "alacritty-theme";
version = "unstable-2024-01-15";
version = "unstable-2024-01-21";
src = fetchFromGitHub {
owner = "alacritty";
repo = "alacritty-theme";
rev = "489cb8d014e5e2d6aea8bc8a5680a10b8b13b0c3";
hash = "sha256-47F9YwhIDEvPK01zMwwUcAJ3xAetXhWfRHf1cfpuna4=";
rev = "f03686afad05274f5fbd2507f85f95b1a6542df4";
hash = "sha256-457kKE3I4zGf1EKkEoyZu0Fa/1O3yiryzHVEw2rNZt8=";
};
dontConfigure = true;

View File

@ -1,457 +0,0 @@
{ lib, stdenv
, fetchurl, perl, gcc
, ncurses5
, ncurses6, gmp, libiconv, numactl
, llvmPackages
, coreutils
, targetPackages
# minimal = true; will remove files that aren't strictly necessary for
# regular builds and GHC bootstrapping.
# This is "useful" for staying within hydra's output limits for at least the
# aarch64-linux architecture.
, minimal ? false
}:
# Prebuilt only does native
assert stdenv.targetPlatform == stdenv.hostPlatform;
let
downloadsUrl = "https://downloads.haskell.org/ghc";
version = "8.10.2";
# Information about available bindists that we use in the build.
#
# # Bindist library checking
#
# The field `archSpecificLibraries` also provides a way for us get notified
# early when the upstream bindist changes its dependencies (e.g. because a
# newer Debian version is used that uses a new `ncurses` version).
#
# Usage:
#
# * You can find the `fileToCheckFor` of libraries by running `readelf -d`
# on the compiler binary (`exePathForLibraryCheck`).
# * To skip library checking for an architecture,
# set `exePathForLibraryCheck = null`.
# * To skip file checking for a specific arch-specific library,
# set `fileToCheckFor = null`.
ghcBinDists = {
# Binary distributions for the default libc (e.g. glibc, or libSystem on Darwin)
# nixpkgs uses for the respective system.
defaultLibc = {
i686-linux = {
variantSuffix = "";
src = {
url = "${downloadsUrl}/${version}/ghc-${version}-i386-deb9-linux.tar.xz";
sha256 = "0bvwisl4w0z5z8z0da10m9sv0mhm9na2qm43qxr8zl23mn32mblx";
};
exePathForLibraryCheck = "ghc/stage2/build/tmp/ghc-stage2";
archSpecificLibraries = [
{ nixPackage = gmp; fileToCheckFor = null; }
# The i686-linux bindist provided by GHC HQ is currently built on Debian 9,
# which links it against `libtinfo.so.5` (ncurses 5).
# Other bindists are linked `libtinfo.so.6` (ncurses 6).
{ nixPackage = ncurses5; fileToCheckFor = "libtinfo.so.5"; }
];
};
x86_64-linux = {
variantSuffix = "";
src = {
url = "${downloadsUrl}/${version}/ghc-${version}-x86_64-deb10-linux.tar.xz";
sha256 = "0chnzy9j23b2wa8clx5arwz8wnjfxyjmz9qkj548z14cqf13slcl";
};
exePathForLibraryCheck = "ghc/stage2/build/tmp/ghc-stage2";
archSpecificLibraries = [
{ nixPackage = gmp; fileToCheckFor = null; }
{ nixPackage = ncurses6; fileToCheckFor = "libtinfo.so.6"; }
];
};
armv7l-linux = {
variantSuffix = "";
src = {
url = "${downloadsUrl}/${version}/ghc-${version}-armv7-deb10-linux.tar.xz";
sha256 = "1j41cq5d3rmlgz7hzw8f908fs79gc5mn3q5wz277lk8zdf19g75v";
};
exePathForLibraryCheck = "ghc/stage2/build/tmp/ghc-stage2";
archSpecificLibraries = [
{ nixPackage = gmp; fileToCheckFor = null; }
{ nixPackage = ncurses6; fileToCheckFor = "libtinfo.so.6"; }
];
};
aarch64-linux = {
variantSuffix = "";
src = {
url = "${downloadsUrl}/${version}/ghc-${version}-aarch64-deb10-linux.tar.xz";
sha256 = "14smwl3741ixnbgi0l51a7kh7xjkiannfqx15b72svky0y4l3wjw";
};
exePathForLibraryCheck = "ghc/stage2/build/tmp/ghc-stage2";
archSpecificLibraries = [
{ nixPackage = gmp; fileToCheckFor = null; }
{ nixPackage = ncurses6; fileToCheckFor = "libtinfo.so.6"; }
{ nixPackage = numactl; fileToCheckFor = null; }
];
};
x86_64-darwin = {
variantSuffix = "";
src = {
url = "${downloadsUrl}/${version}/ghc-${version}-x86_64-apple-darwin.tar.xz";
sha256 = "1hngyq14l4f950hzhh2d204ca2gfc98pc9xdasxihzqd1jq75dzd";
};
exePathForLibraryCheck = null; # we don't have a library check for darwin yet
archSpecificLibraries = [
{ nixPackage = gmp; fileToCheckFor = null; }
{ nixPackage = ncurses6; fileToCheckFor = null; }
{ nixPackage = libiconv; fileToCheckFor = null; }
];
};
};
# Binary distributions for the musl libc for the respective system.
musl = {
x86_64-linux = {
variantSuffix = "-musl";
src = {
url = "${downloadsUrl}/${version}/ghc-${version}-x86_64-alpine3.10-linux-integer-simple.tar.xz";
sha256 = "0xpcbyaxqyhbl6f0i3s4rp2jm67nqpkfh2qlbj3i2fiaix89ml0l";
};
exePathForLibraryCheck = "bin/ghc";
archSpecificLibraries = [
{ nixPackage = gmp; fileToCheckFor = null; }
# In contrast to glibc builds, the musl-bindist uses `libncursesw.so.*`
# instead of `libtinfo.so.*.`
{ nixPackage = ncurses6; fileToCheckFor = "libncursesw.so.6"; }
];
isHadrian = true;
};
};
};
distSetName = if stdenv.hostPlatform.isMusl then "musl" else "defaultLibc";
binDistUsed = ghcBinDists.${distSetName}.${stdenv.hostPlatform.system}
or (throw "cannot bootstrap GHC on this platform ('${stdenv.hostPlatform.system}' with libc '${distSetName}')");
useLLVM = !stdenv.targetPlatform.isx86;
libPath =
lib.makeLibraryPath (
# Add arch-specific libraries.
map ({ nixPackage, ... }: nixPackage) binDistUsed.archSpecificLibraries
);
libEnvVar = lib.optionalString stdenv.hostPlatform.isDarwin "DY"
+ "LD_LIBRARY_PATH";
runtimeDeps = [
targetPackages.stdenv.cc
targetPackages.stdenv.cc.bintools
coreutils # for cat
]
++ lib.optionals useLLVM [
(lib.getBin llvmPackages.llvm)
]
# On darwin, we need unwrapped bintools as well (for otool)
++ lib.optionals (stdenv.targetPlatform.linker == "cctools") [
targetPackages.stdenv.cc.bintools.bintools
];
in
stdenv.mkDerivation rec {
inherit version;
pname = "ghc-binary${binDistUsed.variantSuffix}";
src = fetchurl binDistUsed.src;
# Note that for GHC 8.10 versions <= 8.10.5, the GHC HQ musl bindist
# has a `gmp` dependency:
# https://gitlab.haskell.org/ghc/ghc/-/commit/8306501020cd66f683ad9c215fa8e16c2d62357d
# Related nixpkgs issues:
# * https://github.com/NixOS/nixpkgs/pull/130441#issuecomment-922452843
nativeBuildInputs = [ perl ];
propagatedBuildInputs =
# Because musl bindists currently provide no way to tell where
# libgmp is (see note [musl bindists have no .buildinfo]), we need
# to propagate `gmp`, otherwise programs built by this ghc will
# fail linking with `cannot find -lgmp` errors.
# Concrete cases are listed in:
# https://github.com/NixOS/nixpkgs/pull/130441#issuecomment-922459988
#
# Also, as of writing, the release pages of musl bindists claim
# that they use `integer-simple` and do not require `gmp`; however
# that is incorrect, so `gmp` is required until a release has been
# made that includes https://gitlab.haskell.org/ghc/ghc/-/issues/20059.
# (Note that for packaging the `-binary` compiler, nixpkgs does not care
# about whether or not `gmp` is used; this comment is just here to explain
# why the `gmp` dependency exists despite what the release page says.)
#
# For GHC >= 8.10.6, `gmp` was switched out for `integer-simple`
# (https://gitlab.haskell.org/ghc/ghc/-/commit/8306501020cd66f683ad9c215fa8e16c2d62357d),
# fixing the above-mentioned release issue,
# and for GHC >= 9.* it is not clear as of writing whether that switch
# will be made there too.
lib.optionals stdenv.hostPlatform.isMusl [ gmp ]; # musl bindist needs this
# Set LD_LIBRARY_PATH or equivalent so that the programs running as part
# of the bindist installer can find the libraries they expect.
# Cannot patchelf beforehand due to relative RPATHs that anticipate
# the final install location.
${libEnvVar} = libPath;
postUnpack =
# Verify our assumptions of which `libtinfo.so` (ncurses) version is used,
# so that we know when ghc bindists upgrade that and we need to update the
# version used in `libPath`.
lib.optionalString
(binDistUsed.exePathForLibraryCheck != null)
# Note the `*` glob because some GHCs have a suffix when unpacked, e.g.
# the musl bindist has dir `ghc-VERSION-x86_64-unknown-linux/`.
# As a result, don't shell-quote this glob when splicing the string.
(let buildExeGlob = ''ghc-${version}*/"${binDistUsed.exePathForLibraryCheck}"''; in
lib.concatStringsSep "\n" [
(''
shopt -u nullglob
echo "Checking that ghc binary exists in bindist at ${buildExeGlob}"
if ! test -e ${buildExeGlob}; then
echo >&2 "GHC binary ${binDistUsed.exePathForLibraryCheck} could not be found in the bindist build directory (at ${buildExeGlob}) for arch ${stdenv.hostPlatform.system}, please check that ghcBinDists correctly reflect the bindist dependencies!"; exit 1;
fi
'')
(lib.concatMapStringsSep
"\n"
({ fileToCheckFor, nixPackage }:
lib.optionalString (fileToCheckFor != null) ''
echo "Checking bindist for ${fileToCheckFor} to ensure that is still used"
if ! readelf -d ${buildExeGlob} | grep "${fileToCheckFor}"; then
echo >&2 "File ${fileToCheckFor} could not be found in ${binDistUsed.exePathForLibraryCheck} for arch ${stdenv.hostPlatform.system}, please check that ghcBinDists correctly reflect the bindist dependencies!"; exit 1;
fi
echo "Checking that the nix package ${nixPackage} contains ${fileToCheckFor}"
if ! test -e "${lib.getLib nixPackage}/lib/${fileToCheckFor}"; then
echo >&2 "Nix package ${nixPackage} did not contain ${fileToCheckFor} for arch ${stdenv.hostPlatform.system}, please check that ghcBinDists correctly reflect the bindist dependencies!"; exit 1;
fi
''
)
binDistUsed.archSpecificLibraries
)
])
# GHC has dtrace probes, which causes ld to try to open /usr/lib/libdtrace.dylib
# during linking
+ lib.optionalString stdenv.isDarwin ''
export NIX_LDFLAGS+=" -no_dtrace_dof"
# not enough room in the object files for the full path to libiconv :(
for exe in $(find . -type f -executable); do
isScript $exe && continue
ln -fs ${libiconv}/lib/libiconv.dylib $(dirname $exe)/libiconv.dylib
install_name_tool -change /usr/lib/libiconv.2.dylib @executable_path/libiconv.dylib -change /usr/local/lib/gcc/6/libgcc_s.1.dylib ${gcc.cc.lib}/lib/libgcc_s.1.dylib $exe
done
'' +
# Some scripts used during the build need to have their shebangs patched
''
patchShebangs ghc-${version}/utils/
patchShebangs ghc-${version}/configure
test -d ghc-${version}/inplace/bin && \
patchShebangs ghc-${version}/inplace/bin
'' +
# We have to patch the GMP paths for the integer-gmp package.
# Note [musl bindists have no .buildinfo]
# Note that musl bindists do not contain them; unclear if that's intended;
# see: https://gitlab.haskell.org/ghc/ghc/-/issues/20073#note_363231
''
find . -name integer-gmp.buildinfo \
-exec sed -i "s@extra-lib-dirs: @extra-lib-dirs: ${gmp.out}/lib@" {} \;
'' + lib.optionalString stdenv.isDarwin ''
find . -name base.buildinfo \
-exec sed -i "s@extra-lib-dirs: @extra-lib-dirs: ${libiconv}/lib@" {} \;
'' +
# aarch64 does HAVE_NUMA so -lnuma requires it in library-dirs in rts/package.conf.in
# FFI_LIB_DIR is a good indication of places it must be needed.
lib.optionalString stdenv.hostPlatform.isAarch64 ''
find . -name package.conf.in \
-exec sed -i "s@FFI_LIB_DIR@FFI_LIB_DIR ${numactl.out}/lib@g" {} \;
'' +
# Rename needed libraries and binaries, fix interpreter
lib.optionalString stdenv.isLinux ''
find . -type f -executable -exec patchelf \
--interpreter ${stdenv.cc.bintools.dynamicLinker} {} \;
'' +
# The hadrian install Makefile uses 'xxx' as a temporary placeholder in path
# substitution, which can break the build if the store path / prefix happens
# to contain this string. This will be fixed with 9.4 bindists.
# https://gitlab.haskell.org/ghc/ghc/-/issues/21402
''
# Detect hadrian Makefile by checking for the target that has the problem
if grep '^update_package_db' ghc-${version}*/Makefile > /dev/null; then
echo Hadrian bindist, applying workaround for xxx path substitution.
# based on https://gitlab.haskell.org/ghc/ghc/-/commit/dd5fecb0e2990b192d92f4dfd7519ecb33164fad.patch
substituteInPlace ghc-${version}*/Makefile --replace 'xxx' '\0xxx\0'
else
echo Not a hadrian bindist, not applying xxx path workaround.
fi
'';
# fix for `configure: error: Your linker is affected by binutils #16177`
preConfigure = lib.optionalString
stdenv.targetPlatform.isAarch32
"LD=ld.gold";
configurePlatforms = [ ];
configureFlags = [
"--with-gmp-includes=${lib.getDev gmp}/include"
# Note `--with-gmp-libraries` does nothing for GHC bindists:
# https://gitlab.haskell.org/ghc/ghc/-/merge_requests/6124
] ++ lib.optional stdenv.isDarwin "--with-gcc=${./gcc-clang-wrapper.sh}"
# From: https://github.com/NixOS/nixpkgs/pull/43369/commits
++ lib.optional stdenv.hostPlatform.isMusl "--disable-ld-override";
# No building is necessary, but calling make without flags ironically
# calls install-strip ...
dontBuild = true;
# Patch scripts to include runtime dependencies in $PATH.
postInstall = ''
for i in "$out/bin/"*; do
test ! -h "$i" || continue
isScript "$i" || continue
sed -i -e '2i export PATH="${lib.makeBinPath runtimeDeps}:$PATH"' "$i"
done
'';
# Apparently necessary for the ghc Alpine (musl) bindist:
# When we strip, and then run the
# patchelf --set-rpath "${libPath}:$(patchelf --print-rpath $p)" $p
# below, running ghc (e.g. during `installCheckPhase`) gives some apparently
# corrupted rpath or whatever makes the loader work on nonsensical strings:
# running install tests
# Error relocating /nix/store/...-ghc-8.10.2-binary/lib/ghc-8.10.5/bin/ghc: : symbol not found
# Error relocating /nix/store/...-ghc-8.10.2-binary/lib/ghc-8.10.5/bin/ghc: ir6zf6c9f86pfx8sr30n2vjy-ghc-8.10.2-binary/lib/ghc-8.10.5/bin/../lib/x86_64-linux-ghc-8.10.5/libHSexceptions-0.10.4-ghc8.10.5.so: symbol not found
# Error relocating /nix/store/...-ghc-8.10.2-binary/lib/ghc-8.10.5/bin/ghc: y/lib/ghc-8.10.5/bin/../lib/x86_64-linux-ghc-8.10.5/libHStemplate-haskell-2.16.0.0-ghc8.10.5.so: symbol not found
# Error relocating /nix/store/...-ghc-8.10.2-binary/lib/ghc-8.10.5/bin/ghc: 8.10.5/libHStemplate-haskell-2.16.0.0-ghc8.10.5.so: symbol not found
# Error relocating /nix/store/...-ghc-8.10.2-binary/lib/ghc-8.10.5/bin/ghc: <20>: symbol not found
# Error relocating /nix/store/...-ghc-8.10.2-binary/lib/ghc-8.10.5/bin/ghc: <20>?: symbol not found
# Error relocating /nix/store/...-ghc-8.10.2-binary/lib/ghc-8.10.5/bin/ghc: 64-linux-ghc-8.10.5/libHSexceptions-0.10.4-ghc8.10.5.so: symbol not found
# This is extremely bogus and should be investigated.
dontStrip = if stdenv.hostPlatform.isMusl then true else false; # `if` for explicitness
# On Linux, use patchelf to modify the executables so that they can
# find editline/gmp.
postFixup = lib.optionalString stdenv.isLinux
(if stdenv.hostPlatform.isAarch64 then
# Keep rpath as small as possible on aarch64 for patchelf#244. All Elfs
# are 2 directories deep from $out/lib, so pooling symlinks there makes
# a short rpath.
''
(cd $out/lib; ln -s ${ncurses6.out}/lib/libtinfo.so.6)
(cd $out/lib; ln -s ${gmp.out}/lib/libgmp.so.10)
(cd $out/lib; ln -s ${numactl.out}/lib/libnuma.so.1)
for p in $(find "$out/lib" -type f -name "*\.so*"); do
(cd $out/lib; ln -s $p)
done
for p in $(find "$out/lib" -type f -executable); do
if isELF "$p"; then
echo "Patchelfing $p"
patchelf --set-rpath "\$ORIGIN:\$ORIGIN/../.." $p
fi
done
''
else
''
for p in $(find "$out" -type f -executable); do
if isELF "$p"; then
echo "Patchelfing $p"
patchelf --set-rpath "${libPath}:$(patchelf --print-rpath $p)" $p
fi
done
'') + lib.optionalString stdenv.isDarwin ''
# not enough room in the object files for the full path to libiconv :(
for exe in $(find "$out" -type f -executable); do
isScript $exe && continue
ln -fs ${libiconv}/lib/libiconv.dylib $(dirname $exe)/libiconv.dylib
install_name_tool -change /usr/lib/libiconv.2.dylib @executable_path/libiconv.dylib -change /usr/local/lib/gcc/6/libgcc_s.1.dylib ${gcc.cc.lib}/lib/libgcc_s.1.dylib $exe
done
for file in $(find "$out" -name setup-config); do
substituteInPlace $file --replace /usr/bin/ranlib "$(type -P ranlib)"
done
'' +
lib.optionalString minimal ''
# Remove profiling files
find $out -type f -name '*.p_o' -delete
find $out -type f -name '*.p_hi' -delete
find $out -type f -name '*_p.a' -delete
# `-f` because e.g. musl bindist does not have this file.
rm -f $out/lib/ghc-*/bin/ghc-iserv-prof
# Hydra will redistribute this derivation, so we have to keep the docs for
# legal reasons (retaining the legal notices etc)
# As a last resort we could unpack the docs separately and symlink them in.
# They're in $out/share/{doc,man}.
'';
# In nixpkgs, musl based builds currently enable `pie` hardening by default
# (see `defaultHardeningFlags` in `make-derivation.nix`).
# But GHC cannot currently produce outputs that are ready for `-pie` linking.
# Thus, disable `pie` hardening, otherwise `recompile with -fPIE` errors appear.
# See:
# * https://github.com/NixOS/nixpkgs/issues/129247
# * https://gitlab.haskell.org/ghc/ghc/-/issues/19580
hardeningDisable = lib.optional stdenv.targetPlatform.isMusl "pie";
doInstallCheck = true;
installCheckPhase = ''
# Sanity check, can ghc create executables?
cd $TMP
mkdir test-ghc; cd test-ghc
cat > main.hs << EOF
{-# LANGUAGE TemplateHaskell #-}
module Main where
main = putStrLn \$([|"yes"|])
EOF
# can't use env -i here because otherwise we don't find -lgmp on musl
env ${libEnvVar}= PATH= \
$out/bin/ghc --make main.hs || exit 1
echo compilation ok
[ $(./main) == "yes" ]
'';
passthru = {
targetPrefix = "";
enableShared = true;
inherit llvmPackages;
# Our Cabal compiler name
haskellCompilerName = "ghc-${version}";
}
# We duplicate binDistUsed here since we have a sensible default even if no bindist is available,
# this makes sure that getting the `meta` attribute doesn't throw even on unsupported platforms.
// lib.optionalAttrs (ghcBinDists.${distSetName}.${stdenv.hostPlatform.system}.isHadrian or false) {
# Normal GHC derivations expose the hadrian derivation used to build them
# here. In the case of bindists we just make sure that the attribute exists,
# as it is used for checking if a GHC derivation has been built with hadrian.
# The isHadrian mechanism will become obsolete with GHCs that use hadrian
# exclusively, i.e. 9.6 (and 9.4?).
hadrian = null;
};
meta = rec {
homepage = "http://haskell.org/ghc";
description = "The Glasgow Haskell Compiler";
license = lib.licenses.bsd3;
# HACK: since we can't encode the libc / abi in platforms, we need
# to make the platform list dependent on the evaluation platform
# in order to avoid eval errors with musl which supports less
# platforms than the default libcs (i. e. glibc / libSystem).
# This is done for the benefit of Hydra, so `packagePlatforms`
# won't return any platforms that would cause an evaluation
# failure for `pkgsMusl.haskell.compiler.ghc8102Binary`, as
# long as the evaluator runs on a platform that supports
# `pkgsMusl`.
platforms = builtins.attrNames ghcBinDists.${distSetName};
maintainers = with lib.maintainers; [
guibou
] ++ lib.teams.haskell.members;
};
}

View File

@ -442,7 +442,7 @@ stdenv.mkDerivation rec {
# platforms than the default libcs (i. e. glibc / libSystem).
# This is done for the benefit of Hydra, so `packagePlatforms`
# won't return any platforms that would cause an evaluation
# failure for `pkgsMusl.haskell.compiler.ghc8102Binary`, as
# failure for `pkgsMusl.haskell.compiler.ghc8107Binary`, as
# long as the evaluator runs on a platform that supports
# `pkgsMusl`.
platforms = builtins.attrNames ghcBinDists.${distSetName};

View File

@ -222,7 +222,7 @@ stdenv.mkDerivation rec {
"x86_64-darwin"
"powerpc64le-linux"
];
# build segfaults, use ghc8102Binary which has proper musl support instead
# build segfaults, use ghc8107Binary which has proper musl support instead
broken = stdenv.hostPlatform.isMusl;
maintainers = with lib.maintainers; [
guibou

View File

@ -16,10 +16,14 @@ let
# Test flaky because of our RPATH patching
# https://github.com/NixOS/nixpkgs/pull/230965#issuecomment-1545336489
"compiler/codegen"
# Test flaky
"read"
] ++ lib.optionals (lib.versionAtLeast version "1.10") [
# Test flaky
# https://github.com/JuliaLang/julia/issues/52739
"REPL"
# Test flaky
"ccall"
] ++ lib.optionals stdenv.isDarwin [
# Test flaky on ofborg
"FileWatching"

View File

@ -30,5 +30,5 @@ index 1565014a0f..edd5c65244 100644
-rm -f $(DESTDIR)$(datarootdir)/julia/base/version_git.sh
-rm -f $(DESTDIR)$(datarootdir)/julia/test/Makefile
--
2.42.0
2.43.0

View File

@ -1,4 +1,4 @@
From c7e2f6ed00c170b68d5d156faac38aa76d4490fd Mon Sep 17 00:00:00 2001
From 9da2f2596db9f4f1a61825d82d9b8c3f3b2e99aa Mon Sep 17 00:00:00 2001
From: Nick Cao <nickcao@nichi.co>
Date: Wed, 10 Jan 2024 20:58:20 -0500
Subject: [PATCH 2/2] skip failing and flaky tests
@ -8,7 +8,7 @@ Subject: [PATCH 2/2] skip failing and flaky tests
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/test/Makefile b/test/Makefile
index 88dbe5b2b4..f0bdedfdf5 100644
index 88dbe5b2b4..a2a7a55e20 100644
--- a/test/Makefile
+++ b/test/Makefile
@@ -28,7 +28,7 @@ default:
@ -16,10 +16,10 @@ index 88dbe5b2b4..f0bdedfdf5 100644
$(TESTS):
@cd $(SRCDIR) && \
- $(call PRINT_JULIA, $(call spawn,$(JULIA_EXECUTABLE)) --check-bounds=yes --startup-file=no --depwarn=error ./runtests.jl $@)
+ $(call PRINT_JULIA, $(call spawn,$(JULIA_EXECUTABLE)) --check-bounds=yes --startup-file=no --depwarn=error ./runtests.jl --skip NetworkOptions REPL channels $@)
+ $(call PRINT_JULIA, $(call spawn,$(JULIA_EXECUTABLE)) --check-bounds=yes --startup-file=no --depwarn=error ./runtests.jl --skip NetworkOptions REPL channels FileWatching ccall $@)
$(addprefix revise-, $(TESTS)): revise-% :
@cd $(SRCDIR) && \
--
2.42.0
2.43.0

View File

@ -10,11 +10,9 @@
# 3. Firefox and Thunderbird should still build on x86_64-linux.
{ stdenv, lib
, buildPackages
, targetPackages
, newScope, callPackage
, CoreFoundation, Security, SystemConfiguration
, pkgsBuildTarget, pkgsBuildBuild, pkgsBuildHost
, pkgsBuildTarget, pkgsBuildBuild, pkgsBuildHost, pkgsTargetTarget
, makeRustPlatform
, wrapRustcWith
, llvmPackages_17, llvm_17
@ -58,4 +56,4 @@ import ./default.nix {
rustcPatches = [ ];
}
(builtins.removeAttrs args [ "pkgsBuildTarget" "pkgsBuildHost" "llvmPackages_17" "llvm_17"])
(builtins.removeAttrs args [ "llvmPackages_17" "llvm_17"])

View File

@ -12,18 +12,21 @@
, llvmPackages # Exposed through rustc for LTO in Firefox
}:
{ stdenv, lib
, buildPackages
, targetPackages
, newScope, callPackage
, CoreFoundation, Security, SystemConfiguration
, pkgsBuildBuild
, pkgsBuildHost
, pkgsBuildTarget
, pkgsTargetTarget
, makeRustPlatform
, wrapRustcWith
}:
let
# Use `import` to make sure no packages sneak in here.
lib' = import ../../../build-support/rust/lib { inherit lib stdenv buildPackages targetPackages; };
lib' = import ../../../build-support/rust/lib {
inherit lib stdenv pkgsBuildHost pkgsBuildTarget pkgsTargetTarget;
};
# Allow faster cross compiler generation by reusing Build artifacts
fastCross = (stdenv.buildPlatform == stdenv.hostPlatform) && (stdenv.hostPlatform != stdenv.targetPlatform);
in
@ -58,11 +61,11 @@ in
else
self.buildRustPackages.overrideScope (_: _:
lib.optionalAttrs (stdenv.buildPlatform == stdenv.hostPlatform)
(selectRustPackage buildPackages).packages.prebuilt);
(selectRustPackage pkgsBuildHost).packages.prebuilt);
bootRustPlatform = makeRustPlatform bootstrapRustPackages;
in {
# Packages suitable for build-time, e.g. `build.rs`-type stuff.
buildRustPackages = (selectRustPackage buildPackages).packages.stable // { __attrsFailEvaluation = true; };
buildRustPackages = (selectRustPackage pkgsBuildHost).packages.stable // { __attrsFailEvaluation = true; };
# Analogous to stdenv
rustPlatform = makeRustPlatform self.buildRustPackages;
rustc-unwrapped = self.callPackage ./rustc.nix ({

View File

@ -5,18 +5,18 @@
buildGoModule rec {
pname = "expr";
version = "1.15.8";
version = "1.16.0";
src = fetchFromGitHub {
owner = "antonmedv";
repo = "expr";
rev = "v${version}";
hash = "sha256-leZEP6RJv136z/bNc1S74tw+JQ3QD7NCMbo/Wo7q0ek=";
hash = "sha256-GLh4NayAbqGXI0Ekkk3lXCRwpLwGLbJIo7WjDfpKDhI=";
};
sourceRoot = "${src.name}/repl";
vendorHash = "sha256-Rs2tlno0vJo8FSdnnk3cxQCCxdByQD1jRzmePzMMfvs=";
vendorHash = "sha256-42kFO7kXIdqVrp2FQGELZ90OUobOp4zbdo533vresIw=";
ldflags = [ "-s" "-w" ];

View File

@ -9,13 +9,13 @@
stdenv.mkDerivation rec {
pname = "libcint";
version = "6.1.0";
version = "6.1.1";
src = fetchFromGitHub {
owner = "sunqm";
repo = "libcint";
rev = "v${version}";
hash = "sha256-qcVVp+81S3Y0fxDWA/PWQeFT2g0N6tIHNUaOHSru2GA=";
hash = "sha256-wV3y+NobV6J+J6I2z3dJdCvTwvfgMspMtAGNpbwfsYk=";
};
postPatch = ''

View File

@ -400,9 +400,10 @@ final: prev: {
};
};
volar = final."@volar/vue-language-server".override {
volar = final."@volar/vue-language-server".override ({ meta, ... }: {
name = "volar";
};
meta = meta // { mainProgram = "vue-language-server"; };
});
wavedrom-cli = prev.wavedrom-cli.override {
nativeBuildInputs = [ pkgs.pkg-config final.node-pre-gyp ];

View File

@ -11,7 +11,7 @@
buildPythonPackage rec {
pname = "aiocomelit";
version = "0.7.4";
version = "0.8.2";
format = "pyproject";
disabled = pythonOlder "3.10";
@ -20,7 +20,7 @@ buildPythonPackage rec {
owner = "chemelli74";
repo = "aiocomelit";
rev = "refs/tags/v${version}";
hash = "sha256-F/blKd+6n/mTeqgmA5rVGz8DFJA+317T6sjYfsAAf2E=";
hash = "sha256-SjyC/KiszQVVmctyqCn3i0DureuCtDlUhJTHC6+PQ2c=";
};
postPatch = ''

View File

@ -13,7 +13,7 @@
buildPythonPackage rec {
pname = "aiosql";
version = "9.2";
version = "9.3";
pyproject = true;
disabled = pythonOlder "3.8";
@ -27,7 +27,7 @@ buildPythonPackage rec {
owner = "nackjicholson";
repo = "aiosql";
rev = "refs/tags/${version}";
hash = "sha256-x8ndLVIYAmixH4Fc1DIC1CK8ChYIPZc3b5VFdpT7JO8=";
hash = "sha256-7bCJykE+7/eA1h4L5MyH/zVPZVMt7cNLXZSWq+8mPtY=";
};
sphinxRoot = "docs/source";

View File

@ -7,14 +7,14 @@
buildPythonPackage rec {
pname = "ansi";
version = "0.3.6";
version = "0.3.7";
format = "pyproject";
src = fetchFromGitHub {
owner = "tehmaze";
repo = pname;
rev = "${pname}-${version}";
hash = "sha256-2gu2Dba3LOjMhbCCZrBqzlOor5KqDYThhe8OP8J3O2M=";
rev = "refs/tags/ansi-${version}";
hash = "sha256-PmgB1glksu4roQeZ1o7uilMJNm9xaYqw680N2z+tUUM=";
};
nativeBuildInputs = [

View File

@ -6,6 +6,7 @@
, distro
, dirty-equals
, httpx
, google-auth
, sniffio
, pydantic
, pytest-asyncio
@ -18,7 +19,7 @@
buildPythonPackage rec {
pname = "anthropic";
version = "0.7.8";
version = "0.11.0";
pyproject = true;
disabled = pythonOlder "3.8";
@ -27,7 +28,7 @@ buildPythonPackage rec {
owner = "anthropics";
repo = "anthropic-sdk-python";
rev = "refs/tags/v${version}";
hash = "sha256-1mpNwZJbYdKVmUeUM+PBL6vPhwe8tr2SnAP/t/MMKpI=";
hash = "sha256-1g3Bbij9HbMK+JJASe+VTBXx5jCQheXLrcnAD0qMs8g=";
};
nativeBuildInputs = [
@ -44,6 +45,10 @@ buildPythonPackage rec {
typing-extensions
];
passthru.optional-dependencies = {
vertex = [ google-auth ];
};
nativeCheckInputs = [
dirty-equals
pytest-asyncio
@ -51,8 +56,9 @@ buildPythonPackage rec {
respx
];
disabledTests = [
"api_resources"
disabledTestPaths = [
# require network access
"tests/api_resources"
];
pythonImportsCheck = [

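The new `passthru.optional-dependencies` attribute exposes the `vertex` extra as a plain list of Python packages. A sketch of how a consumer environment could pull that extra in, following the usual nixpkgs pattern for optional dependencies (attribute path assumed, not taken from this diff):

```nix
# An environment with anthropic plus its "vertex" extra (google-auth).
let pkgs = import <nixpkgs> { }; in
pkgs.python3.withPackages (ps:
  [ ps.anthropic ] ++ ps.anthropic.optional-dependencies.vertex
)
```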
View File

@ -5,15 +5,15 @@
, dnspython
, fetchFromGitHub
, publicsuffix2
, pythonOlder
, pytestCheckHook
, pythonOlder
, setuptools
}:
buildPythonPackage rec {
pname = "authheaders";
version = "0.15.3";
format = "setuptools";
version = "0.16.2";
pyproject = true;
disabled = pythonOlder "3.7";
@ -21,9 +21,13 @@ buildPythonPackage rec {
owner = "ValiMail";
repo = "authentication-headers";
rev = "refs/tags/${version}";
hash = "sha256-96fCx5uN7yegTrCN+LSjtu4u3RL+dcxV/Puyo0eziI8=";
hash = "sha256-/vxUUSWwysYQzcy2AmkF4f8R59FHRnBfFlPRpfM9e5o=";
};
nativeBuildInputs = [
setuptools
];
propagatedBuildInputs = [
authres
dnspython
@ -40,6 +44,11 @@ buildPythonPackage rec {
"authheaders"
];
disabledTests = [
# Test fails with timeout even if the resolv.conf hack is present
"test_authenticate_dmarc_psdsub"
];
meta = with lib; {
description = "Python library for the generation of email authentication headers";
homepage = "https://github.com/ValiMail/authentication-headers";

View File

@ -10,14 +10,14 @@
buildPythonPackage rec {
pname = "azure-mgmt-compute";
version = "30.4.0";
version = "30.5.0";
format = "setuptools";
disabled = pythonOlder "3.7";
src = fetchPypi {
inherit pname version;
hash = "sha256-C3Qo/YvRXHy9fGa5uwEOClyzeoBs7x9JSNkHGRV2kzQ=";
hash = "sha256-7T6jS3mdsNUu5V4vGrSw8J+koI814GHsuarZ+1ohiEQ=";
};
propagatedBuildInputs = [

View File

@ -11,14 +11,14 @@
buildPythonPackage rec {
pname = "azure-mgmt-containerservice";
version = "28.0.0";
version = "29.0.0";
format = "setuptools";
disabled = pythonOlder "3.7";
src = fetchPypi {
inherit pname version;
hash = "sha256-tVYFpEUV9v0OOk3CK/oPRA8+fhYl668Gqz6wa/NabNs=";
hash = "sha256-0BiuK5JCj6rqfSYD8+GWca2k5SQ19MXEHR3TQcYzgoA=";
};
propagatedBuildInputs = [

View File

@ -9,14 +9,14 @@
buildPythonPackage rec {
pname = "azure-mgmt-recoveryservicesbackup";
version = "7.0.0";
version = "9.0.0";
format = "setuptools";
disabled = pythonOlder "3.8";
src = fetchPypi {
inherit pname version;
hash = "sha256-GuW6x8JGdBedywum4fDAQ8rwbVU9UgQWgHrFqJ6Uz9A=";
hash = "sha256-H/SsO/DnHXSsSyejYX7BFem1GqPh20DRGecrYVkIu1E=";
};
propagatedBuildInputs = [

View File

@ -1,34 +1,33 @@
{ lib
, buildPythonPackage
, fetchPypi
, azure-common
, azure-mgmt-core
, msrest
, msrestazure
, buildPythonPackage
, fetchPypi
, isodate
, pythonOlder
, typing-extensions
, setuptools
}:
buildPythonPackage rec {
pname = "azure-mgmt-security";
version = "5.0.0";
format = "setuptools";
version = "6.0.0";
pyproject = true;
disabled = pythonOlder "3.7";
disabled = pythonOlder "3.8";
src = fetchPypi {
inherit pname version;
hash = "sha256-OLA+/oLCNEzqID/alebQC3rCJ4L6HAtYXNDqLI/z5wI=";
extension = "zip";
hash = "sha256-zq/BhpiZBnEQvYMMXMmLybjzLY6oQMofaTsaX1Kl+LA=";
};
nativeBuildInputs = [
setuptools
];
propagatedBuildInputs = [
azure-common
azure-mgmt-core
msrest
msrestazure
] ++ lib.optionals (pythonOlder "3.8") [
typing-extensions
isodate
];
# no tests included
@ -42,6 +41,7 @@ buildPythonPackage rec {
meta = with lib; {
description = "Microsoft Azure Security Center Management Client Library for Python";
homepage = "https://github.com/Azure/azure-sdk-for-python";
changelog = "https://github.com/Azure/azure-sdk-for-python/blob/azure-mgmt-security_${version}/sdk/security/azure-mgmt-security/CHANGELOG.md";
license = licenses.mit;
maintainers = with maintainers; [ jonringer ];
};

View File

@ -14,7 +14,7 @@
buildPythonPackage rec {
pname = "blinkpy";
version = "0.22.5";
version = "0.22.6";
pyproject = true;
disabled = pythonOlder "3.9";
@ -23,7 +23,7 @@ buildPythonPackage rec {
owner = "fronzbot";
repo = "blinkpy";
rev = "refs/tags/v${version}";
hash = "sha256-u6FurFaAbkBOT2F+nTL/rGNdUhOpLq+nVKPF3ohuXEs=";
hash = "sha256-46REi+3dUY9dJrhXgKkQ1OfN6XCy1fV9cW6wk82ClOA=";
};
postPatch = ''

View File

@ -365,14 +365,14 @@
buildPythonPackage rec {
pname = "boto3-stubs";
version = "1.34.25";
version = "1.34.27";
pyproject = true;
disabled = pythonOlder "3.7";
src = fetchPypi {
inherit pname version;
hash = "sha256-l8uuaUto4toyW6wGbE6+iwugtf1HQl5kSsiZoXCdJw8=";
hash = "sha256-/YRnjjsSNxA0EGkOaLx6YwZBE47iat7uz9Z5iUUU2Gk=";
};
nativeBuildInputs = [

View File

@ -9,7 +9,7 @@
buildPythonPackage rec {
pname = "botocore-stubs";
version = "1.34.26";
version = "1.34.27";
format = "pyproject";
disabled = pythonOlder "3.7";
@ -17,7 +17,7 @@ buildPythonPackage rec {
src = fetchPypi {
pname = "botocore_stubs";
inherit version;
hash = "sha256-65EItCrdCs4ocQQr+0HToSRqHkR8tHp3EEIniopoSb8=";
hash = "sha256-6r4CRGoS6r0dCY4WN0MnW2HCxurrvtmb5bVNt+9sc2c=";
};
nativeBuildInputs = [

View File

@ -14,7 +14,7 @@
buildPythonPackage rec {
pname = "dvc-data";
version = "3.8.0";
version = "3.9.0";
pyproject = true;
disabled = pythonOlder "3.8";
@ -23,7 +23,7 @@ buildPythonPackage rec {
owner = "iterative";
repo = "dvc-data";
rev = "refs/tags/${version}";
hash = "sha256-i9pFdGMzUypUFZKtE4k1w116r+NjfIECg1a6xw9TpG0=";
hash = "sha256-rgqSgNsqAGATzu3ZX8LWRiFJt0xTTLaF8bUNOgA3s2w=";
};
nativeBuildInputs = [

View File

@ -1,20 +1,26 @@
{ lib
, buildPythonPackage
, fetchPypi
, pythonOlder
, buildPythonPackage }:
, setuptools
}:
buildPythonPackage rec {
pname = "ed25519-blake2b";
version = "1.4";
format = "setuptools";
version = "1.4.1";
pyproject = true;
disabled = pythonOlder "3.6";
disabled = pythonOlder "3.7";
src = fetchPypi {
inherit pname version;
hash = "sha256-0aHLkDLsMHzpW0HGGUQP1NP87MGPIkA1zH1tx6fY70A=";
hash = "sha256-cx6fk80awaZGSVdfNRmpn/4LseTPe/X18L5ROjnfc2M=";
};
nativeBuildInputs = [
setuptools
];
pythonImportsCheck = [
"ed25519_blake2b"
];
@ -22,6 +28,7 @@ buildPythonPackage rec {
meta = with lib; {
description = "Ed25519 public-key signatures (BLAKE2b fork)";
homepage = "https://github.com/Matoking/python-ed25519-blake2b";
changelog = "https://github.com/Matoking/python-ed25519-blake2b/releases/tag/${version}";
license = licenses.mit;
maintainers = with maintainers; [ onny stargate01 ];
};

View File

@ -10,14 +10,14 @@
buildPythonPackage rec {
pname = "elasticsearch8";
version = "8.11.1";
version = "8.12.0";
format = "setuptools";
disabled = pythonOlder "3.7";
src = fetchPypi {
inherit pname version;
hash = "sha256-nY+qZ94uVBLMPb0i0k7gEUfcR5lsE6lcbtFtGQkTKeo=";
hash = "sha256-YFsrdsAAelOest7Pw3+Zl3lV+Q/e7YMELmL3TodBKSM=";
};
nativeBuildInputs = [

View File

@ -0,0 +1,45 @@
{ lib
, buildPythonPackage
, fetchFromGitHub
, pythonOlder
, setuptools
, typing-extensions
}:
buildPythonPackage rec {
pname = "http-sf";
version = "1.0.1";
pyproject = true;
disabled = pythonOlder "3.9";
src = fetchFromGitHub {
owner = "mnot";
repo = "http-sf";
rev = "refs/tags/v${version}";
hash = "sha256-8xK8/IVrhqMDgkxZY10QqSGswCrttc29FZLCntmSUQ4=";
};
nativeBuildInputs = [
setuptools
];
propagatedBuildInputs = [
typing-extensions
];
# Tests require external data (https://github.com/httpwg/structured-field-tests)
doCheck = false;
pythonImportsCheck = [
"http_sf"
];
meta = with lib; {
description = "Module to parse and serialise HTTP structured field values";
homepage = "https://github.com/mnot/http-sf";
changelog = "https://github.com/mnot/http-sf/releases/tag/v${version}";
license = licenses.mit;
maintainers = with maintainers; [ fab ];
};
}

View File

@ -8,8 +8,8 @@
buildPythonPackage rec {
pname = "http-sfv";
version = "0.9.8";
format = "pyproject";
version = "0.9.9";
pyproject = true;
disabled = pythonOlder "3.7";
@ -17,7 +17,7 @@ buildPythonPackage rec {
owner = "mnot";
repo = "http_sfv";
rev = "http_sfv-${version}";
hash = "sha256-zl0Rk4QbzCVmYZ6TnVq+C+oe27Imz5fEQY9Fco5lo5s=";
hash = "sha256-xf9bGDfsEcQnFQ2b1bLRGYug+H4e5jeV/LJstQtp6Bw=";
};
nativeBuildInputs = [

View File

@ -15,7 +15,7 @@
, httpx
}:
let
version = "1.18.3";
version = "1.19.0";
in
buildPythonPackage rec {
pname = "litellm";
@ -26,7 +26,7 @@ buildPythonPackage rec {
owner = "BerriAI";
repo = "litellm";
rev = "refs/tags/v${version}";
hash = "sha256-V4OTEZMyhXDcva7k88uTVH6vJ1EsF549ZmqUqXETsB0=";
hash = "sha256-cHGLOcOC9G6FlJfyrf+owURfGtn/gCAJuhSPt9lJS0o=";
};
postPatch = ''

View File

@ -10,14 +10,14 @@
buildPythonPackage rec {
pname = "publicsuffixlist";
version = "0.10.0.20240124";
version = "0.10.0.20240125";
pyproject = true;
disabled = pythonOlder "3.7";
src = fetchPypi {
inherit pname version;
hash = "sha256-Z87qlGIL215R3Lqbx2f7AuY0Zhu2zpXD+tL5cxGm8Uw=";
hash = "sha256-lxyUgACsRULzQLNyU2TFrLdRzdSbQzvECTRaYQP8O04=";
};
nativeBuildInputs = [

View File

@ -0,0 +1,48 @@
{ lib
, buildPythonPackage
, fetchFromGitHub
, setuptools
, wheel
, scipy
, numpy
, pydoe
, unittestCheckHook
}:
buildPythonPackage rec {
pname = "pwlf";
version = "2.2.1";
pyproject = true;
src = fetchFromGitHub {
owner = "cjekel";
repo = "piecewise_linear_fit_py";
rev = "v${version}";
hash = "sha256-gjdahulpHjBmOlKOCPF9WmrWe4jn/+0oVI4o09EX7qE=";
};
nativeBuildInputs = [
setuptools
wheel
];
propagatedBuildInputs = [
scipy
numpy
pydoe
];
nativeCheckInputs = [
unittestCheckHook
];
pythonImportsCheck = [ "pwlf" ];
meta = with lib; {
description = "Fit piecewise linear data for a specified number of line segments";
homepage = "https://jekel.me/piecewise_linear_fit_py/";
changelog = "https://github.com/cjekel/piecewise_linear_fit_py/blob/${src.rev}/CHANGELOG.md";
license = licenses.mit;
maintainers = with maintainers; [ doronbehar ];
};
}

@ -0,0 +1,38 @@
{ lib
, buildPythonPackage
, fetchPypi
, setuptools
, wheel
, scipy
, numpy
}:
buildPythonPackage rec {
pname = "pyDOE";
version = "0.3.8";
pyproject = true;
src = fetchPypi {
inherit pname version;
hash = "sha256-y9bxSuJtPJ9zYBMgX1PqEZGt1FZwM8Pud7fdNWVmxLY=";
extension = "zip";
};
nativeBuildInputs = [
setuptools
wheel
];
propagatedBuildInputs = [
scipy
numpy
];
pythonImportsCheck = [ "pyDOE" ];
meta = with lib; {
description = "Design of experiments for Python";
homepage = "https://github.com/tisimst/pyDOE";
license = licenses.bsd3;
maintainers = with maintainers; [ doronbehar ];
};
}

@ -14,7 +14,7 @@
buildPythonPackage rec {
pname = "pyswitchbot";
version = "0.44.0";
version = "0.44.1";
pyproject = true;
disabled = pythonOlder "3.7";
@ -23,7 +23,7 @@ buildPythonPackage rec {
owner = "Danielhiversen";
repo = "pySwitchbot";
rev = "refs/tags/${version}";
hash = "sha256-8F0mcZuGU3CiB3pGbAVReKAjvqFLMNa3EHLOOVujgAo=";
hash = "sha256-i3OQ2QOBMaiNTyq44wbnHZ2iqAXEYB16NWKWzOza1Jo=";
};
nativeBuildInputs = [

@ -8,7 +8,7 @@
buildPythonPackage rec {
pname = "pytedee-async";
version = "0.2.11";
version = "0.2.12";
pyproject = true;
disabled = pythonOlder "3.9";
@ -17,7 +17,7 @@ buildPythonPackage rec {
owner = "zweckj";
repo = "pytedee_async";
rev = "refs/tags/v${version}";
hash = "sha256-mBTY2JU79Hk6P+oWQ+2FD0BYHL1c865EvnTUl6H+gnk=";
hash = "sha256-eepN5Urr9fp1780iy3Z4sot+hXvMCxMGodYBdRdDj9Y=";
};
nativeBuildInputs = [

@ -19,7 +19,7 @@
buildPythonPackage rec {
pname = "python-kasa";
version = "0.6.0.1";
version = "0.6.1";
pyproject = true;
disabled = pythonOlder "3.8";
@ -28,7 +28,7 @@ buildPythonPackage rec {
owner = "python-kasa";
repo = "python-kasa";
rev = "refs/tags/${version}";
hash = "sha256-Vx2ZRcm/Ob0oWB/Th7hF4ctppWoeeNiqKGjYVNsidrE=";
hash = "sha256-kMhmnIwdVix9DgijTcNf5fsm4jiqygxjOvgGNOGN4O8=";
};
nativeBuildInputs = [

@ -35,7 +35,7 @@
buildPythonPackage rec {
pname = "python-lsp-server";
version = "1.9.0";
version = "1.10.0";
format = "pyproject";
disabled = pythonOlder "3.8";
@ -44,7 +44,7 @@ buildPythonPackage rec {
owner = "python-lsp";
repo = pname;
rev = "refs/tags/v${version}";
hash = "sha256-9za0et/W+AwrjqUVoHwk8oqLXk4eqgRON8Z4F5GSKXM=";
hash = "sha256-dh33m7wgOwUETjdNqqDKZnpTgbrYCg9/XXC296tHm4w=";
};
postPatch = ''
@ -144,6 +144,7 @@ buildPythonPackage rec {
# https://github.com/python-lsp/python-lsp-server/issues/243
"test_numpy_completions"
"test_workspace_loads_pycodestyle_config"
"test_autoimport_code_actions_and_completions_for_notebook_document"
] ++ lib.optionals (stdenv.isDarwin && stdenv.isAarch64) [
# pyqt5 is broken on aarch64-darwin
"test_pyqt_completion"

@ -19,7 +19,7 @@
buildPythonPackage rec {
pname = "python-roborock";
version = "0.39.0";
version = "0.39.1";
pyproject = true;
disabled = pythonOlder "3.10";
@ -28,7 +28,7 @@ buildPythonPackage rec {
owner = "humbertogontijo";
repo = "python-roborock";
rev = "refs/tags/v${version}";
hash = "sha256-t+ZjLsnsLcWYNlx2eRxDhQLw3levdiCk4FUrcjtSmq8=";
hash = "sha256-iFLzrjbCwBuV9RQSHoP5LOG0PIPjiTMCpvk3wqGtMgk=";
};
postPatch = ''

@ -1,23 +1,27 @@
{ lib
, stdenv
, buildPythonPackage
, fetchPypi
, fetchFromGitHub
, libspatialindex
, numpy
, pytestCheckHook
, pythonOlder
, setuptools
, wheel
}:
buildPythonPackage rec {
pname = "rtree";
version = "1.1.0";
format = "setuptools";
disabled = pythonOlder "3.7";
version = "1.2.0";
pyproject = true;
src = fetchPypi {
pname = "Rtree";
inherit version;
hash = "sha256-b47lBN3l0AWyWwiq9b4LNASvOtX+zm4d3N41kIp5ipU=";
disabled = pythonOlder "3.8";
src = fetchFromGitHub {
owner = "Toblerity";
repo = "rtree";
rev = "refs/tags/${version}";
hash = "sha256-RmAiyYrkUMBN/ebmo27WvFcRmYlKkywuQHLLUbluTTw=";
};
postPatch = ''
@ -25,6 +29,11 @@ buildPythonPackage rec {
'find_library("spatialindex_c")' '"${libspatialindex}/lib/libspatialindex_c${stdenv.hostPlatform.extensions.sharedLibrary}"'
'';
nativeBuildInputs = [
setuptools
wheel
];
buildInputs = [ libspatialindex ];
nativeCheckInputs = [
@ -36,7 +45,8 @@ buildPythonPackage rec {
meta = with lib; {
description = "R-Tree spatial index for Python GIS";
homepage = "https://toblerity.org/rtree/";
homepage = "https://github.com/Toblerity/rtree";
changelog = "https://github.com/Toblerity/rtree/blob/${version}/CHANGES.rst";
license = licenses.mit;
maintainers = with maintainers; [ bgamari ];
};

@ -18,7 +18,7 @@
buildPythonPackage rec {
pname = "twilio";
version = "8.11.1";
version = "8.12.0";
format = "setuptools";
disabled = pythonOlder "3.7";
@ -27,7 +27,7 @@ buildPythonPackage rec {
owner = "twilio";
repo = "twilio-python";
rev = "refs/tags/${version}";
hash = "sha256-cByrE0/sKZ0dWnuQS1KOOCHbSYF6YJchqGrdkmVp9DM=";
hash = "sha256-I2ktLhlSFeQ3f7/zcm5NKLv5Pm1R7EPkeMPREMa9bBA=";
};
propagatedBuildInputs = [

@ -6,12 +6,12 @@
buildPythonPackage rec {
pname = "types-docutils";
version = "0.20.0.20240106";
version = "0.20.0.20240125";
pyproject = true;
src = fetchPypi {
inherit pname version;
hash = "sha256-A5kuyXb74IDbWI4eVqg8Xkq6XHMwIrJbsmy4Q5e5YEk=";
hash = "sha256-r3YOMR2Jrz+PtiVD6FCZ1v2dwDttGjva9mlXNnXVitg=";
};
nativeBuildInputs = [

@ -5,14 +5,14 @@
python3.pkgs.buildPythonApplication rec {
pname = "checkov";
version = "3.1.69";
version = "3.1.70";
pyproject = true;
src = fetchFromGitHub {
owner = "bridgecrewio";
repo = "checkov";
rev = "refs/tags/${version}";
hash = "sha256-hA0GmCNsds/dkSJ5PZYPiz1lsaISs62jb000k17aqAM=";
hash = "sha256-6HR6Hfv8dAo3/GT1OZQmH7yq4fY9Xi8SKkGUjG9914I=";
};
patches = [

@ -17,13 +17,13 @@ let
in
stdenv.mkDerivation (finalAttrs: {
pname = "castxml";
version = "0.6.3";
version = "0.6.4";
src = fetchFromGitHub {
owner = "CastXML";
repo = "CastXML";
rev = "v${finalAttrs.version}";
hash = "sha256-g/BgKkU8Me6EacDm+KFAsKq5++v/b+Par0x7lzBzHw8=";
hash = "sha256-6xeMkqsFchZxrAsE2DLaIzGU4VMwyDckm00s69wahOo=";
};
nativeBuildInputs = [

@ -5,13 +5,13 @@
buildGoModule rec {
pname = "litefs";
version = "0.5.10";
version = "0.5.11";
src = fetchFromGitHub {
owner = "superfly";
repo = pname;
rev = "v${version}";
sha256 = "sha256-e7RBiUHMndOz1n8gWlx+4ifnueWgPu482KIAXaSEhl0=";
sha256 = "sha256-I12bKImZkvAMyfwb6r/NxE+BcUk+SalN+cIDXP0q4xA=";
};
vendorHash = "sha256-FcYPe4arb+jbxj4Tl6bRRAnkEvw0rkECIo8/zC79lOA=";

@ -2,11 +2,11 @@
stdenv.mkDerivation rec {
pname = "micronaut";
version = "4.2.3";
version = "4.2.4";
src = fetchzip {
url = "https://github.com/micronaut-projects/micronaut-starter/releases/download/v${version}/micronaut-cli-${version}.zip";
sha256 = "sha256-+03wjNxIZr8vhvK3zfvFBwXC5WmEs5A6mydGXsmGuCI=";
sha256 = "sha256-Jhy1q+6VdLPScq882QU8dIUNNKs1i+3Mug5ycUWFp9U=";
};
nativeBuildInputs = [ makeWrapper installShellFiles ];

@ -1,23 +0,0 @@
{ stdenv, fetchurl, patchelf }:
# Note: this package is used for bootstrapping fetchurl, and thus
# cannot use fetchpatch! All mutable patches (generated by GitHub or
# cgit) that are needed here should be included directly in Nixpkgs as
# files.
stdenv.mkDerivation rec {
pname = "patchelf";
version = "0.13.1";
src = fetchurl {
url = "https://github.com/NixOS/${pname}/releases/download/${version}/${pname}-${version}.tar.bz2";
sha256 = "sha256-OeiuzNdJXVTfCU0rSnwIAQ/3d3A2+q8k8o4Hd30VmOI=";
};
setupHook = [ ./setup-hook.sh ];
# fails 8 out of 24 tests, problems when loading libc.so.6
doCheck = stdenv.name == "stdenv-linux";
inherit (patchelf) meta;
}

@ -2,14 +2,14 @@
rustPlatform.buildRustPackage rec {
pname = "cargo-hack";
version = "0.6.15";
version = "0.6.16";
src = fetchCrate {
inherit pname version;
hash = "sha256-yjaX4lqUj9aZPkRuiJC3yBwXvfvd+Okr87Ia2IQvxfM=";
hash = "sha256-DbZ/8tnVD9jXN9Ek7LJRF1GFy/gphexNKG7FcZeqtoE=";
};
cargoHash = "sha256-6ogeqVN2V38N7mNBjimjNv/KK2JtV4aa5AorRfYMBx8=";
cargoHash = "sha256-j7ZHq3M2JgQV72GRKOIlp+jsoc/ikYHmNLOnrZ2yA8I=";
# some necessary files are absent in the crate version
doCheck = false;

@ -7,16 +7,16 @@
rustPlatform.buildRustPackage rec {
pname = "cargo-mutants";
version = "24.1.1";
version = "24.1.2";
src = fetchFromGitHub {
owner = "sourcefrog";
repo = "cargo-mutants";
rev = "v${version}";
hash = "sha256-n7fpfgbDvLMMA834BUSAEYD+mXVxGGFPLlLjDxpKuSA=";
hash = "sha256-V1BQJmwLhsh36Gyg1Zrxw5MCUQcyIKlnEsYmchu8K5A=";
};
cargoHash = "sha256-lEeNIwNvq6K+xRCUTXs9Sh7o8q3u5GcBKntVMhPQqMU=";
cargoHash = "sha256-f2iJnBklzSgHqez6KSk1+ZqiY/t9iCdtsQze9PhG164=";
buildInputs = lib.optionals stdenv.isDarwin [
darwin.apple_sdk.frameworks.SystemConfiguration

@ -12,14 +12,14 @@
rustPlatform.buildRustPackage rec {
pname = "cargo-workspaces";
version = "0.3.0";
version = "0.3.1";
src = fetchCrate {
inherit pname version;
hash = "sha256-1wNoMVfouuPRGFGB6XIhgeeWgknxMctrBl5Vfco6qug=";
hash = "sha256-1YFTBzFr11FUfwgdGJgyF1lWvrfQ6ZPIkYAG7vySfFA=";
};
cargoHash = "sha256-OJGqIo6mYqXjmQb/2CVVTskecYZretw+K46Fvbu/PcQ=";
cargoHash = "sha256-wL1DKZ1QhBKB4Gy2rbwe4y/hR4A/wiiVqGAIcM+Om8E=";
nativeBuildInputs = [
pkg-config

@ -10,13 +10,13 @@
}:
rustPlatform.buildRustPackage rec {
pname = "sentry-cli";
version = "2.25.2";
version = "2.26.0";
src = fetchFromGitHub {
owner = "getsentry";
repo = "sentry-cli";
rev = version;
sha256 = "sha256-IAtOlWIs1BScr569s8Y8A+m1CzzGrSXX/CaqkXubZfA=";
sha256 = "sha256-9Qwonp2tGmaffYj5Vv09+Z3YcbFSFmeS/zc7PXjmrk4=";
};
doCheck = false;
@ -26,7 +26,7 @@ rustPlatform.buildRustPackage rec {
buildInputs = [ openssl ] ++ lib.optionals stdenv.isDarwin [ CoreServices Security SystemConfiguration ];
nativeBuildInputs = [ pkg-config ];
cargoHash = "sha256-oydBeEOFTmDibUZZSwe7WMcU5eDshsDogPRlxrrx1i8=";
cargoHash = "sha256-t1Gqis4Gd6Zdkka8u/tCRM5xmm3z85OqZIkINm9jNyc=";
meta = with lib; {
homepage = "https://docs.sentry.io/cli/";

@ -9,16 +9,16 @@
buildGoModule rec {
pname = "supabase-cli";
version = "1.131.2";
version = "1.137.2";
src = fetchFromGitHub {
owner = "supabase";
repo = "cli";
rev = "v${version}";
hash = "sha256-6IjVROKxDiLod8XWWndnxHQGnk8DJc1sjzJxLWDkRL0=";
hash = "sha256-C7J1hXRsWlzVvvKjj0IlgWC/BtVsJOvFnPm7c+ioxCA=";
};
vendorHash = "sha256-/hfFydNHDK6shCC4iIkdP8r1ZO9niMIWZ/Ypj/DGj+c=";
vendorHash = "sha256-p026yk50DfzUZX7TTFpDhvGHiD/XUhbxlHQz383pRZk=";
ldflags = [
"-s"

@ -7,16 +7,16 @@
buildNpmPackage rec {
pname = "web-ext";
version = "7.10.0";
version = "7.11.0";
src = fetchFromGitHub {
owner = "mozilla";
repo = "web-ext";
rev = version;
hash = "sha256-VXvs4Z5cOt+lJ1JReApynpz/TufJgIVaO3dszS3Gvb4=";
hash = "sha256-tXYqAAzxAFQGREkNGgBrHLp7ukRDMtr0bPYW7hOEniY=";
};
npmDepsHash = "sha256-ovLVWOrQ//aJPJqzCJQS+/Tnn4Z75OR69e7ACevKWCA=";
npmDepsHash = "sha256-uKAEWe28zUgE7Fv00sGXD5dKje/pHh22yJlYtk+7tN8=";
npmBuildFlags = [ "--production" ];

@ -9,13 +9,13 @@
stdenv.mkDerivation rec {
pname = "bpftrace";
version = "0.19.1";
version = "0.20.0";
src = fetchFromGitHub {
owner = "iovisor";
repo = "bpftrace";
rev = "v${version}";
hash = "sha256-JyMogqyntSm2IDXzsOIjcUkf2YwG2oXKpqPpdx/eMNI=";
hash = "sha256-IfceH4OSlL0J9O7ZF3vYzvoRM/NFlevC6LChH5+p9CY=";
};

@ -7,14 +7,14 @@
rustPlatform.buildRustPackage rec {
pname = "mdevctl";
version = "1.3.0";
version = "1.2.0";
src = fetchCrate {
inherit pname version;
hash = "sha256-4K4NW3DOTtzZJ7Gg0mnRPr88YeqEjTtKX+C4P8i923E=";
hash = "sha256-0X/3DWNDPOgSNNTqcj44sd7DNGFt+uGBjkc876dSgU8=";
};
cargoHash = "sha256-hCqNy32uPLsKfUJqiG2DRcXfqdvlp4bCutQmt+FieXc=";
cargoHash = "sha256-TmumQBWuH5fJOe2qzcDtEGbmCs2G9Gfl8mH7xifzRGc=";
nativeBuildInputs = [
docutils

@ -2,6 +2,7 @@
, callPackage
, fetchFromGitHub
, rustPlatform
, cmake
, pkg-config
, protobuf
, elfutils
@ -9,18 +10,19 @@
rustPlatform.buildRustPackage rec {
pname = "router";
version = "1.19.0";
version = "1.30.1";
src = fetchFromGitHub {
owner = "apollographql";
repo = pname;
rev = "v${version}";
sha256 = "sha256-IuS7NmlTNmHHnnSZ+YIbV6BnxJW2xprOQ5mkz5FuJEQ=";
sha256 = "sha256-mQtIjfXDcEy5HfZbWauL0NQLPneGq9EJt/yB8zMuhSU=";
};
cargoHash = "sha256-yeb+4lgRDssjkEx6bYfGIbn4DJGpZZ/JDmuwFjQ+U+8=";
cargoHash = "sha256-XCDU6cXw+Wf5MR6m+HCI8/VFRRylMywktZbd5k7Lcwo=";
nativeBuildInputs = [
cmake
pkg-config
protobuf
];

@ -9,11 +9,11 @@ let
};
in
fetch_librusty_v8 {
version = "0.60.1";
version = "0.74.3";
shas = {
x86_64-linux = "sha256-P8H+XJqrt9jdKM885L1epMldp+stwmEw+0Gtd2x3r4g=";
aarch64-linux = "sha256-frHpBP2pL3o4efFLHP2r3zsWJrNT93yYu2Qkxv+7m8Y=";
x86_64-darwin = "sha256-taewoYBkyikqWueLSD9dW1EDjzkV68Xplid1UaLZgRM=";
aarch64-darwin = "sha256-s2YEVbuYpiT/qrmE37aXk13MetrnJo6l+s1Q2y6b5kU=";
x86_64-linux = "sha256-8pa8nqA6rbOSBVnp2Q8/IQqh/rfYQU57hMgwU9+iz4A=";
aarch64-linux = "sha256-3kXOV8rlCNbNBdXgOtd3S94qO+JIKyOByA4WGX+XVP0=";
x86_64-darwin = "sha256-iBBVKZiSoo08YEQ8J/Rt1/5b7a+2xjtuS6QL/Wod5nQ=";
aarch64-darwin = "sha256-Djnuc3l/jQKvBf1aej8LG5Ot2wPT0m5Zo1B24l1UHsM=";
};
}

@ -2,16 +2,16 @@
buildGoModule rec {
pname = "ipmi_exporter";
version = "1.7.0";
version = "1.8.0";
src = fetchFromGitHub {
owner = "prometheus-community";
repo = "ipmi_exporter";
rev = "v${version}";
hash = "sha256-yVFpYedWELqDNzmHQfMJa95iKQsn1N/wa82sQEQh1Uw=";
hash = "sha256-ZF5mBjq+IhSQrQ1dUfHlfyUMK2dkpZ5gu9djPkUYvRQ=";
};
vendorHash = "sha256-1ntFcOmVN4I1aa/5gWnzkYNYxxFT9ZM1usNnE23KfR0=";
vendorHash = "sha256-q5MFAvFCrr24b1VO0Z03C08CGd+0pUerXZEKiu4r7cE=";
nativeBuildInputs = [ makeWrapper ];
