Merge pull request #50674 from matthewbauer/proofing

Doc clean up
This commit is contained in:
Matthew Bauer 2018-11-19 14:13:54 -06:00 committed by GitHub
commit 123da1f9c8
9 changed files with 453 additions and 357 deletions

@ -20,7 +20,7 @@ release and `nixos-unstable` for the latest successful build of master:
% git rebase channels/nixos-18.09
```
For pull-requests, please rebase onto nixpkgs `master`.
For pull requests, please rebase onto nixpkgs `master`.
[NixOS](https://nixos.org/nixos/) Linux distribution source code is located inside
the `nixos/` folder.

@ -132,7 +132,7 @@
</itemizedlist>
<para>
The difference between an a package being unsupported on some system and
The difference between a package being unsupported on some system and
being broken is admittedly a bit fuzzy. If a program
<emphasis>ought</emphasis> to work on a certain platform, but doesn't, the
platform should be included in <literal>meta.platforms</literal>, but marked
@ -175,11 +175,12 @@
</programlisting>
</para>
<para>
A more useful example, the following configuration allows only allows
flash player and visual studio code:
For a more useful example, try the following. This configuration
only allows unfree packages named Flash Player and Visual Studio
Code:
<programlisting>
{
allowUnfreePredicate = (pkg: elem (builtins.parseDrvName pkg.name).name [ "flashplayer" "vscode" ]);
allowUnfreePredicate = (pkg: builtins.elem (builtins.parseDrvName pkg.name).name [ "flashplayer" "vscode" ]);
}
</programlisting>
</para>
@ -286,8 +287,8 @@
<para>
You can define a function called <varname>packageOverrides</varname> in your
local <filename>~/.config/nixpkgs/config.nix</filename> to override nix
packages. It must be a function that takes pkgs as an argument and return
local <filename>~/.config/nixpkgs/config.nix</filename> to override Nix
packages. It must be a function that takes pkgs as an argument and returns a
modified set of packages.
<programlisting>
{
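  # A minimal sketch (the package and attribute chosen are only illustrative):
  packageOverrides = pkgs: {
    myHello = pkgs.hello.overrideAttrs (oldAttrs: {
      separateDebugInfo = true;
    });
  };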

@ -6,17 +6,17 @@
<title>Introduction</title>
<para>
"Cross-compilation" means compiling a program on one machine for another
type of machine. For example, a typical use of cross compilation is to
compile programs for embedded devices. These devices often don't have the
computing power and memory to compile their own programs. One might think
that cross-compilation is a fairly niche concern, but there are advantages
to being rigorous about distinguishing build-time vs run-time environments
even when one is developing and deploying on the same machine. Nixpkgs is
increasingly adopting the opinion that packages should be written with
cross-compilation in mind, and nixpkgs should evaluate in a similar way (by
minimizing cross-compilation-specific special cases) whether or not one is
cross-compiling.
"Cross-compilation" means compiling a program on one machine for another type
of machine. For example, a typical use of cross-compilation is to compile
programs for embedded devices. These devices often don't have the computing
power and memory to compile their own programs. One might think that
cross-compilation is a fairly niche concern. However, there are significant
advantages to rigorously distinguishing between build-time and run-time
environments! This applies even when one is developing and deploying on the
same machine. Nixpkgs is increasingly adopting the opinion that packages
should be written with cross-compilation in mind, and nixpkgs should evaluate
in a similar way (by minimizing cross-compilation-specific special cases)
whether or not one is cross-compiling.
</para>
<para>
@ -34,15 +34,15 @@
<title>Platform parameters</title>
<para>
Nixpkgs follows the
<link xlink:href="https://gcc.gnu.org/onlinedocs/gccint/Configure-Terms.html">common
historical convention of GNU autoconf</link> of distinguishing between 3
types of platform: <wordasword>build</wordasword>,
Nixpkgs follows the <link
xlink:href="https://gcc.gnu.org/onlinedocs/gccint/Configure-Terms.html">conventions
of GNU autoconf</link>. We distinguish between 3 types of platforms when
building a derivation: <wordasword>build</wordasword>,
<wordasword>host</wordasword>, and <wordasword>target</wordasword>. In
summary, <wordasword>build</wordasword> is the platform on which a package
is being built, <wordasword>host</wordasword> is the platform on which it
is to run. The third attribute, <wordasword>target</wordasword>, is
relevant only for certain specific compilers and build tools.
will run. The third attribute, <wordasword>target</wordasword>, is relevant
only for certain specific compilers and build tools.
</para>
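  <para>
   For orientation, these three platforms are exposed as attributes of
   <varname>stdenv</varname>; roughly:
<programlisting>
stdenv.buildPlatform.system   # where the build runs, e.g. "x86_64-linux"
stdenv.hostPlatform.system    # where the resulting package will run
stdenv.targetPlatform.system  # only relevant for compilers and similar tools
</programlisting>
  </para>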
<para>
@ -64,7 +64,7 @@
<para>
The "build platform" is the platform on which a package is built. Once
someone has a built package, or pre-built binary package, the build
platform should not matter and be safe to ignore.
platform should not matter and can be ignored.
</para>
</listitem>
</varlistentry>
@ -94,11 +94,11 @@
<para>
The build process of certain compilers is written in such a way that the
compiler resulting from a single build can itself only produce binaries
for a single platform. The task specifying this single "target platform"
is thus pushed to build time of the compiler. The root cause of this
mistake is often that the compiler (which will be run on the host) and
the the standard library/runtime (which will be run on the target) are
built by a single build process.
for a single platform. The task of specifying this single "target
platform" is thus pushed to build time of the compiler. The root cause of
this is that the compiler (which will be run on the host) and the standard
library/runtime (which will be run on the target) are built by a single
build process.
</para>
<para>
There is no fundamental need to think about a single target ahead of
@ -135,8 +135,10 @@
<para>
This is a two-component shorthand for the platform. Examples of this
would be "x86_64-darwin" and "i686-linux"; see
<literal>lib.systems.doubles</literal> for more. This format isn't very
standard, but has built-in support in Nix, such as the
<literal>lib.systems.doubles</literal> for more. The first component
corresponds to the CPU architecture of the platform and the second to the
operating system of the platform (<literal>[cpu]-[os]</literal>). This
format has built-in support in Nix, such as the
<varname>builtins.currentSystem</varname> impure string.
</para>
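      <para>
       For example, on a typical 64-bit Linux machine:
<screen>
$ nix-instantiate --eval -E 'builtins.currentSystem'
"x86_64-linux"
</screen>
      </para>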
</listitem>
@ -147,12 +149,13 @@
</term>
<listitem>
<para>
This is a 3- or 4- component shorthand for the platform. Examples of
this would be "x86_64-unknown-linux-gnu" and "aarch64-apple-darwin14".
This is a standard format called the "LLVM target triple", as they are
pioneered by LLVM and traditionally just used for the
<varname>targetPlatform</varname>. This format is strictly more
informative than the "Nix host double", as the previous format could
This is a 3- or 4- component shorthand for the platform. Examples of this
would be <literal>x86_64-unknown-linux-gnu</literal> and
<literal>aarch64-apple-darwin14</literal>. This is a standard format
called the "LLVM target triple", as it was pioneered by LLVM. In the
4-part form, this corresponds to
<literal>[cpu]-[vendor]-[os]-[abi]</literal>. This format is strictly
more informative than the "Nix host double", as the previous format could
analogously be termed. This needs a better name than
<varname>config</varname>!
</para>
@ -164,12 +167,11 @@
</term>
<listitem>
<para>
This is a nix representation of a parsed LLVM target triple with
white-listed components. This can be specified directly, or actually
parsed from the <varname>config</varname>. [Technically, only one need
be specified and the others can be inferred, though the precision of
inference may not be very good.] See
<literal>lib.systems.parse</literal> for the exact representation.
This is a Nix representation of a parsed LLVM target triple
with white-listed components. This can be specified directly,
or actually parsed from the <varname>config</varname>. See
<literal>lib.systems.parse</literal> for the exact
representation.
</para>
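      <para>
       A rough illustration (nested attribute sets abridged, assuming
       <literal>lib</literal> is in scope):
<screen>
nix-repl> :l &lt;nixpkgs&gt;
nix-repl> lib.systems.parse.mkSystemFromString "aarch64-unknown-linux-gnu"
{ abi = { ... }; cpu = { ... }; kernel = { ... }; vendor = { ... }; }
</screen>
      </para>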
</listitem>
</varlistentry>
@ -193,7 +195,7 @@
<listitem>
<para>
These predicates are defined in <literal>lib.systems.inspect</literal>,
and slapped on every platform. They are superior to the ones in
and slapped onto every platform. They are superior to the ones in
<varname>stdenv</varname> as they force the user to be explicit about
which platform they are inspecting. Please use these instead of those.
</para>
@ -221,7 +223,7 @@
<para>
In this section we explore the relationship between both runtime and
buildtime dependencies and the 3 Autoconf platforms.
build-time dependencies and the 3 Autoconf platforms.
</para>
<para>
@ -249,17 +251,17 @@
</para>
<para>
Some examples will probably make this clearer. If a package is being built
with a <literal>(build, host, target)</literal> platform triple of
<literal>(foo, bar, bar)</literal>, then its build-time dependencies would
have a triple of <literal>(foo, foo, bar)</literal>, and <emphasis>those
packages'</emphasis> build-time dependencies would have triple of
<literal>(foo, foo, foo)</literal>. In other words, it should take two
"rounds" of following build-time dependency edges before one reaches a
fixed point where, by the sliding window principle, the platform triple no
longer changes. Indeed, this happens with cross compilation, where only
rounds of native dependencies starting with the second necessarily coincide
with native packages.
Some examples will make this clearer. If a package is being built with a
<literal>(build, host, target)</literal> platform triple of <literal>(foo,
bar, bar)</literal>, then its build-time dependencies would have a triple of
<literal>(foo, foo, bar)</literal>, and <emphasis>those packages'</emphasis>
build-time dependencies would have a triple of <literal>(foo, foo,
foo)</literal>. In other words, it should take two "rounds" of following
build-time dependency edges before one reaches a fixed point where, by the
sliding window principle, the platform triple no longer changes. Indeed,
this happens with cross-compilation, where only rounds of native
dependencies starting with the second necessarily coincide with native
packages.
</para>
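  <para>
   A rough way to see this sliding window at work (assuming an x86_64-linux
   build machine; the target triple is only an example):
<screen>
nix-repl> pkgs = import &lt;nixpkgs&gt; { crossSystem = { config = "aarch64-unknown-linux-gnu"; }; }
nix-repl> pkgs.stdenv.hostPlatform.config
"aarch64-unknown-linux-gnu"
nix-repl> pkgs.buildPackages.stdenv.hostPlatform.config
"x86_64-unknown-linux-gnu"
nix-repl> pkgs.buildPackages.buildPackages.stdenv.hostPlatform.config
"x86_64-unknown-linux-gnu"
</screen>
   After two rounds of <varname>buildPackages</varname> the platform no longer
   changes, matching the fixed point described above.
  </para>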
<note>
@ -271,23 +273,23 @@
</note>
<para>
How does this work in practice? Nixpkgs is now structured so that
build-time dependencies are taken from <varname>buildPackages</varname>,
whereas run-time dependencies are taken from the top level attribute set.
For example, <varname>buildPackages.gcc</varname> should be used at build
time, while <varname>gcc</varname> should be used at run time. Now, for
most of Nixpkgs's history, there was no <varname>buildPackages</varname>,
and most packages have not been refactored to use it explicitly. Instead,
one can use the six (<emphasis>gasp</emphasis>) attributes used for
specifying dependencies as documented in
<xref linkend="ssec-stdenv-dependencies"/>. We "splice" together the
run-time and build-time package sets with <varname>callPackage</varname>,
and then <varname>mkDerivation</varname> for each of four attributes pulls
the right derivation out. This splicing can be skipped when not cross
compiling as the package sets are the same, but is a bit slow for cross
compiling. Because of this, a best-of-both-worlds solution is in the works
with no splicing or explicit access of <varname>buildPackages</varname>
needed. For now, feel free to use either method.
How does this work in practice? Nixpkgs is now structured so that build-time
dependencies are taken from <varname>buildPackages</varname>, whereas
run-time dependencies are taken from the top level attribute set. For
example, <varname>buildPackages.gcc</varname> should be used at build-time,
while <varname>gcc</varname> should be used at run-time. Now, for most of
Nixpkgs's history, there was no <varname>buildPackages</varname>, and most
packages have not been refactored to use it explicitly. Instead, one can use
the six (<emphasis>gasp</emphasis>) attributes used for specifying
dependencies as documented in <xref linkend="ssec-stdenv-dependencies"/>. We
"splice" together the run-time and build-time package sets with
<varname>callPackage</varname>, and then <varname>mkDerivation</varname> for
each of the four attributes pulls the right derivation out. This splicing can be
skipped when not cross-compiling as the package sets are the same, but is a
bit slow for cross-compiling. Because of this, a best-of-both-worlds
solution is in the works with no splicing or explicit access of
<varname>buildPackages</varname> needed. For now, feel free to use either
method.
</para>
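  <para>
   A minimal sketch of the two styles (the <literal>perl</literal> dependency
   is only an example):
<programlisting>
{ stdenv, perl, buildPackages }:

stdenv.mkDerivation {
  name = "splicing-example";
  # implicit style: mkDerivation/callPackage splice in the build-platform perl
  nativeBuildInputs = [ perl ];
  # explicit style: reach into the build-time package set directly
  # nativeBuildInputs = [ buildPackages.perl ];
}
</programlisting>
  </para>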
<note>
@ -305,11 +307,11 @@
<title>Cross packaging cookbook</title>
<para>
Some frequently problems when packaging for cross compilation are good to
just spell and answer. Ideally the information above is exhaustive, so this
section cannot provide any new information, but its ludicrous and cruel to
expect everyone to spend effort working through the interaction of many
features just to figure out the same answer to the same common problem.
Some frequently encountered problems when packaging for cross-compilation
should be answered here. Ideally, the information above is exhaustive, so
this section cannot provide any new information, but it is ludicrous and
cruel to expect everyone to spend effort working through the interaction of
many features just to figure out the same answer to the same common problem.
Feel free to add to this list!
</para>
@ -364,17 +366,9 @@
<section xml:id="sec-cross-usage">
<title>Cross-building packages</title>
<note>
<para>
More information needs to moved from the old wiki, especially
<link xlink:href="https://nixos.org/wiki/CrossCompiling" />, for this
section.
</para>
</note>
<para>
Nixpkgs can be instantiated with <varname>localSystem</varname> alone, in
which case there is no cross compiling and everything is built by and for
which case there is no cross-compiling and everything is built by and for
that system, or also with <varname>crossSystem</varname>, in which case
packages run on the latter, but all building happens on the former. Both
parameters take the same schema as the 3 (build, host, and target) platforms
@ -440,15 +434,14 @@ nix-build &lt;nixpkgs&gt; --arg crossSystem.config '&lt;arch&gt;-&lt;os&gt;-&lt;
build plan or package set. A simple "build vs deploy" dichotomy is adequate:
the sliding window principle described in the previous section shows how to
interpolate between these two "end points" to get the 3 platform triple
for each bootstrapping stage. That means for any package a given package
set, even those not bound on the top level but only reachable via
dependencies or <varname>buildPackages</varname>, the three platforms will
be defined as one of <varname>localSystem</varname> or
<varname>crossSystem</varname>, with the former replacing the latter as one
traverses build-time dependencies. A last simple difference then is
<varname>crossSystem</varname> should be null when one doesn't want to
cross-compile, while the <varname>*Platform</varname>s are always non-null.
<varname>localSystem</varname> is always non-null.
for each bootstrapping stage. That means for any package in a given package set,
even those not bound on the top level but only reachable via dependencies or
<varname>buildPackages</varname>, the three platforms will be defined as one
of <varname>localSystem</varname> or <varname>crossSystem</varname>, with the
former replacing the latter as one traverses build-time dependencies. A last
simple difference is that <varname>crossSystem</varname> should be null when
one doesn't want to cross-compile, while the <varname>*Platform</varname>s
are always non-null. <varname>localSystem</varname> is always non-null.
</para>
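  <para>
   For instance, a cross build of GNU Hello for 64-bit ARM might look roughly
   like this (the triple is only an example):
<screen>
$ nix-build '&lt;nixpkgs&gt;' --arg crossSystem '{ config = "aarch64-unknown-linux-gnu"; }' -A hello
</screen>
  </para>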
</section>
<!--============================================================-->
@ -461,14 +454,14 @@ nix-build &lt;nixpkgs&gt; --arg crossSystem.config '&lt;arch&gt;-&lt;os&gt;-&lt;
<note>
<para>
If one explores nixpkgs, they will see derivations with names like
<literal>gccCross</literal>. Such <literal>*Cross</literal> derivations is
a holdover from before we properly distinguished between the host and
target platforms —the derivation with "Cross" in the name covered the
<literal>build = host != target</literal> case, while the other covered the
<literal>host = target</literal>, with build platform the same or not based
on whether one was using its <literal>.nativeDrv</literal> or
<literal>.crossDrv</literal>. This ugliness will disappear soon.
If one explores Nixpkgs, they will see derivations with names like
<literal>gccCross</literal>. Such <literal>*Cross</literal> derivations are a
holdover from before we properly distinguished between the host and target
platforms—the derivation with "Cross" in the name covered the <literal>build
= host != target</literal> case, while the other covered the <literal>host =
target</literal> case, with the build platform the same or not depending on whether one
was using its <literal>.nativeDrv</literal> or <literal>.crossDrv</literal>.
This ugliness will disappear soon.
</para>
</note>
</section>

@ -12,7 +12,7 @@
<para>
The Nix language allows a derivation to produce multiple outputs, which is
similar to what is utilized by other Linux distribution packaging systems.
The outputs reside in separate nix store paths, so they can be mostly
The outputs reside in separate Nix store paths, so they can be mostly
handled independently of each other, including passing to build inputs,
garbage collection or binary substitution. The exception is that building
from source always produces all the outputs.

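  <para>
   For orientation, a package opts in by declaring its outputs in the
   derivation, for example (purely illustrative):
<programlisting>
outputs = [ "out" "dev" "man" ];
</programlisting>
  </para>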
@ -3,9 +3,9 @@
xml:id="chap-overlays">
<title>Overlays</title>
<para>
This chapter describes how to extend and change Nixpkgs packages using
overlays. Overlays are used to add layers in the fix-point used by Nixpkgs to
compose the set of all packages.
This chapter describes how to extend and change Nixpkgs using overlays.
Overlays are used to add layers in the fixed-point used by Nixpkgs to compose
the set of all packages.
</para>
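 <para>
  For orientation, an overlay is a function of two package sets, conventionally
  called <varname>self</varname> (the final fixed-point result) and
  <varname>super</varname> (the set being extended). A minimal sketch (the
  package being overridden is arbitrary):
<programlisting>
self: super: {
  # illustrative: tweak an existing package via overrideAttrs
  hello = super.hello.overrideAttrs (oldAttrs: {
    doCheck = false;
  });
}
</programlisting>
 </para>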
<para>
Nixpkgs can be configured with a list of overlays, which are applied in
@ -60,7 +60,7 @@
<para>
First, if an
<link xlink:href="#sec-overlays-argument"><varname>overlays</varname>
argument</link> to the nixpkgs function itself is given, then that is
argument</link> to the Nixpkgs function itself is given, then that is
used and no path lookup will be performed.
</para>
</listitem>

@ -205,7 +205,7 @@ $ cat $(PRINT_PATH=1 nix-prefetch-url $i | tail -n 1) \
<para>
Nixpkgs provides a number of packages that will install Eclipse in its
various forms, these range from the bare-bones Eclipse Platform to the more
various forms. These range from the bare-bones Eclipse Platform to the more
fully featured Eclipse SDK or Scala-IDE packages, and multiple versions are
often available. It is possible to list available Eclipse packages by
issuing the command:

@ -6,13 +6,13 @@
<title>Darwin (macOS)</title>
<para>
Some common issues when packaging software for darwin:
Some common issues when packaging software for Darwin:
</para>
<itemizedlist>
<listitem>
<para>
The darwin <literal>stdenv</literal> uses clang instead of gcc. When
The Darwin <literal>stdenv</literal> uses clang instead of gcc. When
referring to the compiler <varname>$CC</varname> or <command>cc</command>
will work in both cases. Some builds hardcode gcc/g++ in their build
scripts; that can usually be fixed by using something like
@ -31,7 +31,7 @@
</listitem>
<listitem>
<para>
On darwin libraries are linked using absolute paths, libraries are
On Darwin, libraries are linked using absolute paths; they are
resolved by their <literal>install_name</literal> at link time. Sometimes
packages won't set this correctly, causing the library lookups to fail at
runtime. This can be fixed by adding extra linker flags or by running
@ -96,8 +96,8 @@
</programlisting>
<para>
The package <literal>xcbuild</literal> can be used to build projects that
really depend on Xcode, however projects that build some kind of graphical
interface won't work without using Xcode in an impure way.
really depend on Xcode. However, this replacement is not 100%
compatible with Xcode and can occasionally cause issues.
</para>
</listitem>
</itemizedlist>

@ -17,22 +17,20 @@
</para>
</warning>
<para>
The nixpkgs project receives a fairly high number of contributions via GitHub
pull-requests. Reviewing and approving these is an important task and a way
The Nixpkgs project receives a fairly high number of contributions via GitHub
pull requests. Reviewing and approving these is an important task and a way
to contribute to the project.
</para>
<para>
The high change rate of nixpkgs makes any pull request that remains open for
The high change rate of Nixpkgs makes any pull request that remains open for
too long subject to conflicts that will require extra work from the submitter
or the merger. Reviewing pull requests in a timely manner and being
responsive to the comments is the key to avoid these. GitHub provides sort
filters that can be used to see the
<link
or the merger. Reviewing pull requests in a timely manner and being responsive
to the comments is the key to avoiding this issue. GitHub provides sort filters
that can be used to see the <link
xlink:href="https://github.com/NixOS/nixpkgs/pulls?q=is%3Apr+is%3Aopen+sort%3Aupdated-desc">most
recently</link> and the
<link
recently</link> and the <link
xlink:href="https://github.com/NixOS/nixpkgs/pulls?q=is%3Apr+is%3Aopen+sort%3Aupdated-asc">least
recently</link> updated pull-requests. We highly encourage looking at
recently</link> updated pull requests. We highly encourage looking at
<link xlink:href="https://github.com/NixOS/nixpkgs/pulls?q=is%3Apr+is%3Aopen+review%3Anone+status%3Asuccess+-label%3A%222.status%3A+work-in-progress%22+no%3Aproject+no%3Aassignee+no%3Amilestone">
this list of ready to merge, unreviewed pull requests</link>.
</para>
@ -43,12 +41,12 @@
</para>
<para>
GitHub provides reactions as a simple and quick way to provide feedback to
pull-requests or any comments. The thumb-down reaction should be used with
pull requests or any comments. The thumb-down reaction should be used with
care and, if possible, accompanied by some explanation so the submitter has
directions to improve their contribution.
</para>
<para>
Pull-request reviews should include a list of what has been reviewed in a
Pull request reviews should include a list of what has been reviewed in a
comment, so other reviewers and mergers can know the state of the review.
</para>
<para>
@ -60,8 +58,8 @@
<title>Package updates</title>
<para>
A package update is the most trivial and common type of pull-request. These
pull-requests mainly consist of updating the version part of the package
A package update is the most trivial and common type of pull request. These
pull requests mainly consist of updating the version part of the package
name and the source hash.
</para>
@ -77,7 +75,7 @@
<itemizedlist>
<listitem>
<para>
Add labels to the pull-request. (Requires commit rights)
Add labels to the pull request. (Requires commit rights)
</para>
<itemizedlist>
<listitem>
@ -144,8 +142,8 @@
<itemizedlist>
<listitem>
<para>
Pull-requests are often targeted to the master or staging branch, and
building the pull-request locally when it is submitted can trigger many
Pull requests are often targeted to the master or staging branch, and
building the pull request locally when it is submitted can trigger many
source builds.
</para>
<para>
@ -174,14 +172,14 @@ $ git rebase --onto nixos-unstable BASEBRANCH FETCH_HEAD <co
</callout>
<callout arearefs='reviewing-rebase-3'>
<para>
Fetching the pull-request changes, <varname>PRNUMBER</varname> is the
number at the end of the pull-request title and
<varname>BASEBRANCH</varname> the base branch of the pull-request.
Fetching the pull request changes, <varname>PRNUMBER</varname> is the
number at the end of the pull request title and
<varname>BASEBRANCH</varname> the base branch of the pull request.
</para>
</callout>
<callout arearefs='reviewing-rebase-4'>
<para>
Rebasing the pull-request changes to the nixos-unstable branch.
Rebasing the pull request changes onto the nixos-unstable branch.
</para>
</callout>
</calloutlist>
@ -190,10 +188,10 @@ $ git rebase --onto nixos-unstable BASEBRANCH FETCH_HEAD <co
<listitem>
<para>
The <link xlink:href="https://github.com/madjar/nox">nox</link> tool can
be used to review a pull-request content in a single command. It doesn't
be used to review a pull request's content in a single command. It doesn't
rebase on a channel branch so it might trigger multiple source builds.
<varname>PRNUMBER</varname> should be replaced by the number at the end
of the pull-request title.
of the pull request title.
</para>
<screen>
$ nix-shell -p nox --run "nox-review -k pr PRNUMBER"
@ -230,7 +228,7 @@ $ nix-shell -p nox --run "nox-review -k pr PRNUMBER"
<title>New packages</title>
<para>
New packages are a common type of pull-requests. These pull requests
New packages are a common type of pull request. These pull requests
consist of adding a new Nix expression for a package.
</para>
@ -241,7 +239,7 @@ $ nix-shell -p nox --run "nox-review -k pr PRNUMBER"
<itemizedlist>
<listitem>
<para>
Add labels to the pull-request. (Requires commit rights)
Add labels to the pull request. (Requires commit rights)
</para>
<itemizedlist>
<listitem>
@ -279,7 +277,7 @@ $ nix-shell -p nox --run "nox-review -k pr PRNUMBER"
</listitem>
<listitem>
<para>
A maintainer must be set, this can be the package submitter or a
A maintainer must be set. This can be the package submitter or a
community member who agrees to take maintainership of the package.
</para>
</listitem>
@ -361,7 +359,7 @@ $ nix-shell -p nox --run "nox-review -k pr PRNUMBER"
<itemizedlist>
<listitem>
<para>
Add labels to the pull-request. (Requires commit rights)
Add labels to the pull request. (Requires commit rights)
</para>
<itemizedlist>
<listitem>
@ -474,7 +472,7 @@ $ nix-shell -p nox --run "nox-review -k pr PRNUMBER"
<itemizedlist>
<listitem>
<para>
Add labels to the pull-request. (Requires commit rights)
Add labels to the pull request. (Requires commit rights)
</para>
<itemizedlist>
<listitem>
@ -576,7 +574,7 @@ $ nix-shell -p nox --run "nox-review -k pr PRNUMBER"
like to be a long-term reviewer for related submissions, please contact the
current reviewers for that topic. They will give you information about the
reviewing process. The main reviewers for a topic can be hard to find as
there is no list, but checking past pull-requests to see who reviewed or
there is no list, but checking past pull requests to see who reviewed or
git-blaming the code to see who committed to that topic can give some hints.
</para>
@ -585,8 +583,8 @@ $ nix-shell -p nox --run "nox-review -k pr PRNUMBER"
pull requests fitting this category.
</para>
</section>
<section xml:id="reviewing-contributions--merging-pull-requests">
<title>Merging pull-requests</title>
<section xml:id="reviewing-contributions--merging-pull-requests">
<title>Merging pull requests</title>
<para>
It is possible for community members who have enough knowledge and

@ -228,18 +228,19 @@ genericBuild
</para>
<para>
The extension of <envar>PATH</envar> with dependencies, alluded to above,
proceeds according to the relative platforms alone. The process is carried
out only for dependencies whose host platform matches the new derivation's
build platformi.e. which run on the platform where the new derivation
will be built.
The extension of <envar>PATH</envar> with dependencies, alluded to
above, proceeds according to the relative platforms alone. The
process is carried out only for dependencies whose host platform
matches the new derivation's build platform, i.e. dependencies which
run on the platform where the new derivation will be built.
<footnote xml:id="footnote-stdenv-native-dependencies-in-path">
<para>
Currently, that means for native builds all dependencies are put on the
<envar>PATH</envar>. But in the future that may not be the case for sake
of matching cross: the platforms would be assumed to be unique for native
and cross builds alike, so only the <varname>depsBuild*</varname> and
<varname>nativeBuildDependencies</varname> dependencies would affect the
Currently, this means for native builds all dependencies are put
on the <envar>PATH</envar>. But in the future that may not be the
case for sake of matching cross: the platforms would be assumed
to be unique for native and cross builds alike, so only the
<varname>depsBuild*</varname> and
<varname>nativeBuildInputs</varname> would be added to the
<envar>PATH</envar>.
</para>
</footnote>
@ -251,28 +252,27 @@ genericBuild
<para>
The dependency is propagated when it forces some of its other-transitive
(non-immediate) downstream dependencies to also take it on as an immediate
dependency. Nix itself already takes a package's transitive dependencies
into account, but this propagation ensures nixpkgs-specific infrastructure
like setup hooks (mentioned above) also are run as if the propagated
dependency.
dependency. Nix itself already takes a package's transitive dependencies into
account, but this propagation ensures nixpkgs-specific infrastructure like
setup hooks (mentioned above) are also run as if the propagated dependency were specified directly.
</para>
<para>
It is important to note dependencies are not necessary propagated as the
same sort of dependency that they were before, but rather as the
It is important to note that dependencies are not necessarily propagated as
the same sort of dependency that they were before, but rather as the
corresponding sort so that the platform rules still line up. The exact rules
for dependency propagation can be given by assigning each sort of dependency
two integers based one how it's host and target platforms are offset from
the depending derivation's platforms. Those offsets are given
below in the descriptions of each dependency list attribute.
Algorithmically, we traverse propagated inputs, accumulating every
propagated dep's propagated deps and adjusting them to account for the
"shift in perspective" described by the current dep's platform offsets. This
results in sort a transitive closure of the dependency relation, with the
offsets being approximately summed when two dependency links are combined.
We also prune transitive deps whose combined offsets go out-of-bounds, which
can be viewed as a filter over that transitive closure removing dependencies
that are blatantly absurd.
for dependency propagation can be given by assigning to each dependency two
integers based on how its host and target platforms are offset from the
depending derivation's platforms. Those offsets are given below in the
descriptions of each dependency list attribute. Algorithmically, we traverse
propagated inputs, accumulating every propagated dependency's propagated
dependencies and adjusting them to account for the "shift in perspective"
described by the current dependency's platform offsets. This results in a
sort of transitive closure of the dependency relation, with the offsets being
approximately summed when two dependency links are combined. We also prune
transitive dependencies whose combined offsets go out-of-bounds, which can be
viewed as a filter over that transitive closure removing dependencies that
are blatantly absurd.
</para>
<para>
@ -288,7 +288,7 @@ genericBuild
</para>
</footnote>
They're confusing in very different ways so... hopefully if something doesn't
make sense in one presentation, it does in the other!
make sense in one presentation, it will in the other!
<programlisting>
let mapOffset(h, t, i) = i + (if i &lt;= 0 then h else t - 1)
@ -307,13 +307,13 @@ dep(h0, _, A, B)
propagated-dep(h1, t1, B, C)
h0 + h1 in {-1, 0, 1}
h0 + t1 in {-1, 0, 1}
-------------------------------------- Take immediate deps' propagated deps
----------------------------- Take immediate dependencies' propagated dependencies
propagated-dep(mapOffset(h0, t0, h1),
mapOffset(h0, t0, t1),
A, C)</programlisting>
<programlisting>
propagated-dep(h, t, A, B)
-------------------------------------- Propagated deps count as deps
----------------------------- Propagated dependencies count as dependencies
dep(h, t, A, B)</programlisting>
Some explanation of this monstrosity is in order. In the common case, the
target offset of a dependency is the successor to the host offset:
@ -324,31 +324,31 @@ let f(h, h + 1, i) = i + (if i &lt;= 0 then h else (h + 1) - 1)
let f(h, h + 1, i) = i + (if i &lt;= 0 then h else h)
let f(h, h + 1, i) = i + h
</programlisting>
This is where the "sum-like" comes from above: We can just sum all the host
offset to get the host offset of the transitive dependency. The target
offset is the transitive dep is simply the host offset + 1, just as it was
with the dependencies composed to make this transitive one; it can be
This is where the "sum-like" behaviour described above comes in: we can just sum
all of the host offsets to get the host offset of the transitive dependency. The
target offset of the transitive dependency is simply the host offset + 1, just as it
was with the dependencies composed to make this transitive one; it can be
ignored as it doesn't add any new information.
</para>
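  <para>
   As a worked example using the notation above: suppose derivation A has B in
   <varname>nativeBuildInputs</varname>, so dep(-1, 0, A, B), and B has C in
   <varname>propagatedBuildInputs</varname>, so propagated-dep(0, 1, B, C). The
   bounds checks pass (-1 + 0 and -1 + 1 are both in range), and
<programlisting>
mapOffset(-1, 0, 0) = 0 + (-1)    = -1
mapOffset(-1, 0, 1) = 1 + (0 - 1) =  0
</programlisting>
   so C ends up as a (-1, 0) dependency of A: the propagated build input of a
   native build input behaves as a native build input of the original
   derivation.
  </para>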
<para>
Because of the bounds checks, the uncommon cases are <literal>h =
t</literal> and <literal>h + 2 = t</literal>. In the former case, the
motivation for <function>mapOffset</function> is that since its host and
target platforms are the same, no transitive dep of it should be able to
"discover" an offset greater than its reduced target offsets.
Because of the bounds checks, the uncommon cases are <literal>h = t</literal>
and <literal>h + 2 = t</literal>. In the former case, the motivation for
<function>mapOffset</function> is that since its host and target platforms
are the same, no transitive dependency of it should be able to "discover" an
offset greater than its reduced target offsets.
<function>mapOffset</function> effectively "squashes" all its transitive
dependencies' offsets so that none will ever be greater than the target
offset of the original <literal>h = t</literal> package. In the other case,
<literal>h + 1</literal> is skipped over between the host and target
offsets. Instead of squashing the offsets, we need to "rip" them apart so no
<literal>h + 1</literal> is skipped over between the host and target offsets.
Instead of squashing the offsets, we need to "rip" them apart so no
transitive dependency's offset is that one.
</para>
<para>
Overall, the unifying theme here is that propagation shouldn't be
introducing transitive dependencies involving platforms the needing package
is unaware of. The offset bounds checking and definition of
Overall, the unifying theme here is that propagation shouldn't be introducing
transitive dependencies involving platforms the depending package is unaware
of. The offset bounds checking and definition of
<function>mapOffset</function> together ensure that this is the case.
Discovering a new offset is discovering a new platform, and since those
platforms weren't in the derivation "spec" of the needing package, they
@ -369,20 +369,20 @@ let f(h, h + 1, i) = i + h
A list of dependencies whose host and target platforms are the new
derivation's build platform. This means a <literal>-1</literal> host and
<literal>-1</literal> target offset from the new derivation's platforms.
They are programs/libraries used at build time that furthermore produce
programs/libraries also used at build time. If the dependency doesn't
care about the target platform (i.e. isn't a compiler or similar tool),
put it in <varname>nativeBuildInputs</varname> instead. The most common
use for this <literal>buildPackages.stdenv.cc</literal>, the default C
compiler for this role. That example crops up more than one might think
in old commonly used C libraries.
These are programs and libraries used at build time that produce programs
and libraries also used at build time. If the dependency doesn't care
about the target platform (i.e. isn't a compiler or similar tool), put it
in <varname>nativeBuildInputs</varname> instead. The most common use of
this is <literal>buildPackages.stdenv.cc</literal>, the default C compiler
for this role. That example crops up more than one might think in old,
commonly used C libraries.
</para>
<para>
Since these packages are able to be run at build time, that are always
Since these packages are able to be run at build-time, they are always
added to the <envar>PATH</envar>, as described above. But since these
packages are only guaranteed to be able to run then, they shouldn't
persist as run-time dependencies. This isn't currently enforced, but
could be in the future.
persist as run-time dependencies. This isn't currently enforced, but could
be in the future.
</para>
</listitem>
</varlistentry>
@ -395,21 +395,20 @@ let f(h, h + 1, i) = i + h
A list of dependencies whose host platform is the new derivation's build
platform, and target platform is the new derivation's host platform. This
means a <literal>-1</literal> host offset and <literal>0</literal> target
offset from the new derivation's platforms. They are programs/libraries
used at build time that, if they are a compiler or similar tool, produce
code to run at run time—i.e. tools used to build the new derivation. If
the dependency doesn't care about the target platform (i.e. isn't a
compiler or similar tool), put it here, rather than in
offset from the new derivation's platforms. These are programs and
libraries used at build-time that, if they are a compiler or similar tool,
produce code to run at run-time—i.e. tools used to build the new
derivation. If the dependency doesn't care about the target platform (i.e.
isn't a compiler or similar tool), put it here, rather than in
<varname>depsBuildBuild</varname> or <varname>depsBuildTarget</varname>.
This would be called <varname>depsBuildHost</varname> but for historical
continuity.
This could be called <varname>depsBuildHost</varname> but
<varname>nativeBuildInputs</varname> is used for historical continuity.
</para>
<para>
Since these packages are able to be run at build time, that are added to
the <envar>PATH</envar>, as described above. But since these packages
only are guaranteed to be able to run then, they shouldn't persist as
run-time dependencies. This isn't currently enforced, but could be in the
future.
Since these packages are able to be run at build-time, they are added to
the <envar>PATH</envar>, as described above. But since these packages are
only guaranteed to be able to run then, they shouldn't persist as run-time
dependencies. This isn't currently enforced, but could be in the future.
</para>
</listitem>
</varlistentry>
@ -422,34 +421,33 @@ let f(h, h + 1, i) = i + h
A list of dependencies whose host platform is the new derivation's build
platform, and target platform is the new derivation's target platform.
This means a <literal>-1</literal> host offset and <literal>1</literal>
target offset from the new derivation's platforms. They are programs used
at build time that produce code to run at run with code produced by the
depending package. Most commonly, these would tools used to build the
runtime or standard library the currently-being-built compiler will
inject into any code it compiles. In many cases, the currently-being
built compiler is itself employed for that task, but when that compiler
won't run (i.e. its build and host platform differ) this is not possible.
Other times, the compiler relies on some other tool, like binutils, that
is always built separately so the dependency is unconditional.
target offset from the new derivation's platforms. These are programs used
at build time that produce code to run with code produced by the depending
package. Most commonly, these are tools used to build the runtime or
standard library that the currently-being-built compiler will inject into
any code it compiles. In many cases, the currently-being-built compiler is
itself employed for that task, but when that compiler won't run (i.e. its
build and host platform differ) this is not possible. Other times, the
compiler relies on some other tool, like binutils, that is always built
separately so that the dependency is unconditional.
</para>
<para>
This is a somewhat confusing dependency to wrap ones head around, and for
good reason. As the only one where the platform offsets are not adjacent
integers, it requires thinking of a bootstrapping stage
<emphasis>two</emphasis> away from the current one. It and it's use-case
go hand in hand and are both considered poor form: try not to need this
sort dependency, and try not avoid building standard libraries / runtimes
This is a somewhat confusing concept to wrap one's head around, and for
good reason. As the only dependency type where the platform offsets are
not adjacent integers, it requires thinking of a bootstrapping stage
<emphasis>two</emphasis> away from the current one. It and its use-case go
hand in hand and are both considered poor form: try not to need this sort
of dependency, and try to avoid building standard libraries and runtimes
in the same derivation as the compiler that produces code using them. Instead
strive to build those like a normal library, using the newly-built
compiler just as a normal library would. In short, do not use this
attribute unless you are packaging a compiler and are sure it is needed.
</para>
<para>
Since these packages are able to be run at build time, that are added to
the <envar>PATH</envar>, as described above. But since these packages
only are guaranteed to be able to run then, they shouldn't persist as
run-time dependencies. This isn't currently enforced, but could be in the
future.
Since these packages are able to run at build time, they are added to the
<envar>PATH</envar>, as described above. But since these packages are only
guaranteed to be able to run then, they shouldn't persist as run-time
dependencies. This isn't currently enforced, but could be in the future.
</para>
</listitem>
</varlistentry>
@ -460,15 +458,15 @@ let f(h, h + 1, i) = i + h
<listitem>
<para>
A list of dependencies whose host and target platforms match the new
derivation's host platform. This means a both <literal>0</literal> host
offset and <literal>0</literal> target offset from the new derivation's
host platform. These are packages used at run-time to generate code also
used at run-time. In practice, that would usually be tools used by
compilers for metaprogramming/macro systems, or libraries used by the
macros/metaprogramming code itself. It's always preferable to use a
<varname>depsBuildBuild</varname> dependency in the derivation being
built than a <varname>depsHostHost</varname> on the tool doing the
building for this purpose.
derivation's host platform. This means a <literal>0</literal> host offset
and <literal>0</literal> target offset from the new derivation's host
platform. These are packages used at run-time to generate code also used
at run-time. In practice, this would usually be tools used by compilers
for macros or a metaprogramming system, or libraries used by the macros or
metaprogramming code itself. It's always preferable to use a
<varname>depsBuildBuild</varname> dependency in the derivation being built
over a <varname>depsHostHost</varname> on the tool doing the building for
this purpose.
</para>
</listitem>
</varlistentry>
@ -479,20 +477,20 @@ let f(h, h + 1, i) = i + h
<listitem>
<para>
A list of dependencies whose host platform and target platform match the
new derivation's. This means a <literal>0</literal> host offset and
new derivation's. This means a <literal>0</literal> host offset and a
<literal>1</literal> target offset from the new derivation's host
platform. This would be called <varname>depsHostTarget</varname> but for
historical continuity. If the dependency doesn't care about the target
platform (i.e. isn't a compiler or similar tool), put it here, rather
than in <varname>depsBuildBuild</varname>.
platform (i.e. isn't a compiler or similar tool), put it here, rather than
in <varname>depsBuildBuild</varname>.
</para>
<para>
These often are programs/libraries used by the new derivation at
These are often programs and libraries used by the new derivation at
<emphasis>run</emphasis>-time, but that isn't always the case. For
example, the machine code in a statically linked library is only used at
run time, but the derivation containing the library is only needed at
build time. Even in the dynamic case, the library may also be needed at
build time to appease the linker.
example, the machine code in a statically-linked library is only used at
run-time, but the derivation containing the library is only needed at
build-time. Even in the dynamic case, the library may also be needed at
build-time to appease the linker.
</para>
</listitem>
</varlistentry>
@ -581,7 +579,7 @@ let f(h, h + 1, i) = i + h
</varlistentry>
<varlistentry>
<term>
<varname>depsTargetTarget</varname>
<varname>depsTargetTargetPropagated</varname>
</term>
<listitem>
<para>
@ -604,10 +602,10 @@ let f(h, h + 1, i) = i + h
<listitem>
<para>
A natural number indicating how much information to log. If set to 1 or
higher, <literal>stdenv</literal> will print moderate debug information
during the build. In particular, the <command>gcc</command> and
<command>ld</command> wrapper scripts will print out the complete command
line passed to the wrapped tools. If set to 6 or higher, the
higher, <literal>stdenv</literal> will print moderate debugging
information during the build. In particular, the <command>gcc</command>
and <command>ld</command> wrapper scripts will print out the complete
command line passed to the wrapped tools. If set to 6 or higher, the
<literal>stdenv</literal> setup script will be run with <literal>set
-x</literal> tracing. If set to 7 or higher, the <command>gcc</command>
and <command>ld</command> wrapper scripts will also be run with
@ -666,11 +664,10 @@ passthru = {
<literal>hello.baz.value1</literal>. We don't specify any usage or schema
of <literal>passthru</literal> - it is meant for values that would be
useful outside the derivation in other parts of a Nix expression (e.g. in
other derivations). An example would be to convey some specific
dependency of your derivation which contains a program with plugins
support. Later, others who make derivations with plugins can use
passed-through dependency to ensure that their plugin would be
binary-compatible with built program.
other derivations). An example would be to convey some specific dependency
of your derivation which contains a program with plugin support. Later,
others who make derivations with plugins can use the passed-through dependency
to ensure that their plugin is binary-compatible with the built program.
</para>
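    <para>
     A sketch of the plugin use-case (the toolkit name is only illustrative):
<programlisting>
passthru = {
  # expose the exact GUI toolkit this program was linked against, so that
  # out-of-tree plugin derivations can build against the same version
  inherit gtk3;
};
</programlisting>
    </para>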
</listitem>
</varlistentry>
@ -836,7 +833,7 @@ passthru = {
<para>
Zip files are unpacked using <command>unzip</command>. However,
<command>unzip</command> is not in the standard environment, so you
should add it to <varname>buildInputs</varname> yourself.
should add it to <varname>nativeBuildInputs</varname> yourself.
</para>
</listitem>
</varlistentry>
@ -1076,6 +1073,17 @@ passthru = {
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>prefixKey</varname>
</term>
<listitem>
<para>
The key to use when specifying the prefix. By default, this is set to
<option>--prefix=</option> as that is used by the majority of packages.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>dontAddDisableDepTrack</varname>
@ -1133,12 +1141,11 @@ passthru = {
By default, when cross compiling, the configure script has
<option>--build=...</option> and <option>--host=...</option> passed.
Packages can instead pass <literal>[ "build" "host" "target" ]</literal>
or a subset to control exactly which platform flags are passed.
Compilers and other tools should use this to also pass the target
platform, for example.
or a subset to control exactly which platform flags are passed. Compilers
and other tools can use this to also pass the target platform.
<footnote xml:id="footnote-stdenv-build-time-guessing-impurity">
<para>
Eventually these will be passed when in native builds too, to improve
Eventually these will be passed when building natively as well, to improve
determinism: build-time guessing, as is done today, is a risk of
impurity.
</para>
@ -1203,17 +1210,6 @@ passthru = {
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>checkInputs</varname>
</term>
<listitem>
<para>
A list of dependencies used by the phase. This gets included in
<varname>buildInputs</varname> when <varname>doCheck</varname> is set.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>makeFlags</varname>
@ -1363,6 +1359,18 @@ makeFlagsArray=(CFLAGS="-O0 -g" LDFLAGS="-lfoo -lbar")
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>checkInputs</varname>
</term>
<listitem>
<para>
A list of dependencies used by the phase. This gets included in
<varname>nativeBuildInputs</varname> when <varname>doCheck</varname> is
set.
</para>
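    <para>
     For example (the attribute names are real; the test dependency chosen is
     hypothetical):
<programlisting>
doCheck = true;
checkInputs = [ python3Packages.pytest ];
</programlisting>
    </para>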
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>preCheck</varname>
@ -1635,12 +1643,10 @@ installTargets = "install-bin install-doc";</programlisting>
</term>
<listitem>
<para>
A package can export a <link
linkend="ssec-setup-hooks">setup
hook</link> by setting this variable. The setup hook, if defined, is
copied to <filename>$out/nix-support/setup-hook</filename>. Environment
variables are then substituted in it using
<function
A package can export a <link linkend="ssec-setup-hooks">setup hook</link>
by setting this variable. The setup hook, if defined, is copied to
<filename>$out/nix-support/setup-hook</filename>. Environment variables
are then substituted in it using <function
linkend="fun-substituteAll">substituteAll</function>.
</para>
</listitem>
@ -2074,12 +2080,12 @@ someVar=$(stripHash $name)
<title>Package setup hooks</title>
<para>
Nix itself considers a build-time dependency merely something that should
Nix itself considers a build-time dependency as merely something that should
previously be built and accessible at build time—packages themselves are
on their own to perform any additional setup. In most cases, that is fine,
and the downstream derivation can deal with its own dependencies. But for a
few common tasks, that would result in almost every package doing the same
sort of setup work---depending not on the package itself, but entirely on
sort of setup work, depending not on the package itself, but entirely on
which dependencies were used.
</para>
@ -2094,20 +2100,19 @@ someVar=$(stripHash $name)
</para>
<para>
The Setup hook mechanism is a bit of a sledgehammer though: a powerful
The setup hook mechanism is a bit of a sledgehammer though: a powerful
feature with a broad and indiscriminate area of effect. The combination of
its power and implicit use may be expedient, but isn't without costs. Nix
itself is unchanged, but the spirit of adding dependencies being effect-free
itself is unchanged, but the spirit of added dependencies being effect-free
is violated even if the letter isn't. For example, if a derivation path is
mentioned more than once, Nix itself doesn't care and simply makes sure the
dependency derivation is already built just the same—depending is just
needing something to exist, and needing is idempotent. However, a dependency
specified twice will have its setup hook run twice, and that could easily
change the build environment (though a well-written setup hook will
therefore strive to be idempotent so this is in fact not observable). More
broadly, setup hooks are anti-modular in that multiple dependencies, whether
the same or different, should not interfere and yet their setup hooks may
well do so.
change the build environment (though a well-written setup hook will therefore
strive to be idempotent so this is in fact not observable). More broadly,
setup hooks are anti-modular in that multiple dependencies, whether the same
or different, should not interfere and yet their setup hooks may well do so.
</para>
<para>
@ -2126,15 +2131,14 @@ someVar=$(stripHash $name)
<para>
Packages adding a hook should not hard code a specific hook, but rather
choose a variable <emphasis>relative</emphasis> to how they are included.
Returning to the C compiler wrapper example, if it itself is an
Returning to the C compiler wrapper example, if the wrapper itself is an
<literal>n</literal> dependency, then it only wants to accumulate flags from
<literal>n + 1</literal> dependencies, as only those ones match the
compiler's target platform. The <envar>hostOffset</envar> variable is
defined with the current dependency's host offset
<envar>targetOffset</envar> with its target offset, before its setup hook is
sourced. Additionally, since most environment hooks don't care about the
target platform, That means the setup hook can append to the right bash array
by doing something like
compiler's target platform. The <envar>hostOffset</envar> variable is defined
with the current dependency's host offset, and <envar>targetOffset</envar> with
its target offset, before its setup hook is sourced. Additionally, since most
environment hooks don't care about the target platform, the setup
hook can append to the right bash array by doing something like
<programlisting language="bash">
addEnvHooks "$hostOffset" myBashFunction
</programlisting>
@ -2159,19 +2163,19 @@ addEnvHooks "$hostOffset" myBashFunction
</term>
<listitem>
<para>
Bintools Wrapper wraps the binary utilities for a bunch of miscellaneous
purposes. These are GNU Binutils when targetting Linux, and a mix of
cctools and GNU binutils for Darwin. [The "Bintools" name is supposed to
be a compromise between "Binutils" and "cctools" not denoting any
specific implementation.] Specifically, the underlying bintools package,
and a C standard library (glibc or Darwin's libSystem, just for the
dynamic loader) are all fed in, and dependency finding, hardening (see
below), and purity checks for each are handled by Bintools Wrapper.
Packages typically depend on CC Wrapper, which in turn (at run time)
depends on Bintools Wrapper.
The Bintools Wrapper wraps the binary utilities for a bunch of
miscellaneous purposes. These are GNU Binutils when targeting Linux, and
a mix of cctools and GNU binutils for Darwin. [The "Bintools" name is
supposed to be a compromise between "Binutils" and "cctools" not denoting
any specific implementation.] Specifically, the underlying bintools
package, and a C standard library (glibc or Darwin's libSystem, just for
the dynamic loader) are all fed in, and dependency finding, hardening
(see below), and purity checks for each are handled by the Bintools
Wrapper. Packages typically depend on CC Wrapper, which in turn (at run
time) depends on the Bintools Wrapper.
</para>
<para>
Bintools Wrapper was only just recently split off from CC Wrapper, so
The Bintools Wrapper was only just recently split off from CC Wrapper, so
the division of labor is still being worked out. For example, it
shouldn't care about the C standard library, but just take a
derivation with the dynamic loader (which happens to be the glibc on
@ -2179,24 +2183,24 @@ addEnvHooks "$hostOffset" myBashFunction
to need to share, and probably the most important to understand. It is
currently accomplished by collecting directories of host-platform
dependencies (i.e. <varname>buildInputs</varname> and
<varname>nativeBuildInputs</varname>) in environment variables. Bintools
Wrapper's setup hook causes any <filename>lib</filename> and
<varname>nativeBuildInputs</varname>) in environment variables. The
Bintools Wrapper's setup hook causes any <filename>lib</filename> and
<filename>lib64</filename> subdirectories to be added to
<envar>NIX_LDFLAGS</envar>. Since CC Wrapper and Bintools Wrapper use
the same strategy, most of the Bintools Wrapper code is sparsely
commented and refers to CC Wrapper. But CC Wrapper's code, by contrast,
has quite lengthy comments. Bintools Wrapper merely cites those, rather
than repeating them, to avoid falling out of sync.
<envar>NIX_LDFLAGS</envar>. Since the CC Wrapper and the Bintools Wrapper
use the same strategy, most of the Bintools Wrapper code is sparsely
commented and refers to the CC Wrapper. But the CC Wrapper's code, by
contrast, has quite lengthy comments. The Bintools Wrapper merely cites
those, rather than repeating them, to avoid falling out of sync.
</para>
<para>
A final task of the setup hook is defining a number of standard
environment variables to tell build systems which executables full-fill
environment variables to tell build systems which executables fulfill
which purpose. They are defined to just be the base name of the tools,
under the assumption that Bintools Wrapper's binaries will be on the
under the assumption that the Bintools Wrapper's binaries will be on the
path. Firstly, this helps poorly-written packages, e.g. ones that look
for just <command>gcc</command> when <envar>CC</envar> isn't defined, yet
<command>clang</command> is to be used. Secondly, this helps packages
not get confused when cross-compiling, in which case multiple Bintools
<command>clang</command> is to be used. Secondly, this helps packages not
get confused when cross-compiling, in which case multiple Bintools
Wrappers may simultaneously be in use.
<footnote xml:id="footnote-stdenv-per-platform-wrapper">
<para>
@@ -2208,20 +2212,20 @@ addEnvHooks "$hostOffset" myBashFunction
</para>
</footnote>
<envar>BUILD_</envar>- and <envar>TARGET_</envar>-prefixed versions of
the normal environment variable are defined for the additional Bintools
the normal environment variables are defined for additional Bintools
Wrappers, properly disambiguating them.
</para>
<para>
A problem with this final task is that Bintools Wrapper is honest and
A problem with this final task is that the Bintools Wrapper is honest and
defines <envar>LD</envar> as <command>ld</command>. Most packages,
however, firstly use the C compiler for linking, secondly use
<envar>LD</envar> anyways, defining it as the C compiler, and thirdly,
only so define <envar>LD</envar> when it is undefined as a fallback.
This triple-threat means Bintools Wrapper will break those packages, as
LD is already defined as the actual linker which the package won't
override yet doesn't want to use. The workaround is to define, just for
the problematic package, <envar>LD</envar> as the C compiler. A good way
to do this would be <command>preConfigure = "LD=$CC"</command>.
only so define <envar>LD</envar> when it is undefined as a fallback. This
triple-threat means the Bintools Wrapper will break those packages, as
<envar>LD</envar> is already defined as the actual linker, which the
package won't override yet doesn't want to use. The workaround is to
define, just for the problematic package, <envar>LD</envar> as the C
compiler. A good way to do this would be <command>preConfigure =
"LD=$CC"</command>.
</para>
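<para>
A minimal sketch of applying that workaround in a package expression (the
package name and source are hypothetical):
<programlisting>
{ stdenv, fetchurl }:

stdenv.mkDerivation {
  name = "libfoo-1.2.3"; # hypothetical package
  src = fetchurl { /* ... */ };

  # Define LD as the C compiler, just for this package, so the build
  # system links with $CC instead of the raw linker.
  preConfigure = "LD=$CC";
}
</programlisting>
</para>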
</listitem>
</varlistentry>
@@ -2231,30 +2235,31 @@ addEnvHooks "$hostOffset" myBashFunction
</term>
<listitem>
<para>
CC Wrapper wraps a C toolchain for a bunch of miscellaneous purposes.
The CC Wrapper wraps a C toolchain for a bunch of miscellaneous purposes.
Specifically, a C compiler (GCC or Clang), wrapped binary tools, and a C
standard library (glibc or Darwin's libSystem, just for the dynamic
loader) are all fed in, and dependency finding, hardening (see below),
and purity checks for each are handled by CC Wrapper. Packages typically
depend on CC Wrapper, which in turn (at run time) depends on Bintools
Wrapper.
and purity checks for each are handled by the CC Wrapper. Packages
typically depend on the CC Wrapper, which in turn (at run-time) depends
on the Bintools Wrapper.
</para>
<para>
Dependency finding is undoubtedly the main task of CC Wrapper. This
works just like Bintools Wrapper, except that any
Dependency finding is undoubtedly the main task of the CC Wrapper. This
works just like the Bintools Wrapper, except that any
<filename>include</filename> subdirectory of any relevant dependency is
added to <envar>NIX_CFLAGS_COMPILE</envar>. The setup hook itself
contains some lengthy comments describing the exact convoluted mechanism
by which this is accomplished.
</para>
<para>
CC Wrapper also like Bintools Wrapper defines standard environment
variables with the names of the tools it wraps, for the same reasons
described above. Importantly, while it includes a <command>cc</command>
symlink to the c compiler for portability, the <envar>CC</envar> will be
defined using the compiler's "real name" (i.e. <command>gcc</command> or
<command>clang</command>). This helps lousy build systems that inspect
on the name of the compiler rather than run it.
Similarly, the CC Wrapper follows the Bintools Wrapper in defining
standard environment variables with the names of the tools it wraps, for
the same reasons described above. Importantly, while it includes a
<command>cc</command> symlink to the C compiler for portability, the
<envar>CC</envar> will be defined using the compiler's "real name" (i.e.
<command>gcc</command> or <command>clang</command>). This helps lousy
build systems that inspect the name of the compiler rather than running
it.
</para>
</listitem>
</varlistentry>
@@ -2314,9 +2319,11 @@ addEnvHooks "$hostOffset" myBashFunction
<listitem>
<para>
The <varname>autoreconfHook</varname> derivation adds
<varname>autoreconfPhase</varname>, which runs autoreconf, libtoolize
and automake, essentially preparing the configure script in
autotools-based builds.
<varname>autoreconfPhase</varname>, which runs autoreconf, libtoolize and
automake, essentially preparing the configure script in autotools-based
builds. Most autotools-based packages come with the configure script
pre-generated, but this hook is necessary for a few packages and when you
need to patch the package's configure scripts.
</para>
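<para>
As a sketch, enabling the hook is just a matter of adding it to
<varname>nativeBuildInputs</varname> (the package name and source are
hypothetical):
<programlisting>
{ stdenv, fetchurl, autoreconfHook }:

stdenv.mkDerivation {
  name = "libfoo-1.2.3"; # hypothetical package
  src = fetchurl { /* ... */ };

  # Regenerate ./configure from configure.ac before the configure phase.
  nativeBuildInputs = [ autoreconfHook ];
}
</programlisting>
</para>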
</listitem>
</varlistentry>
@@ -2360,9 +2367,9 @@ addEnvHooks "$hostOffset" myBashFunction
</term>
<listitem>
<para>
Exports <envar>GDK_PIXBUF_MODULE_FILE</envar> environment variable the
the builder. Add librsvg package to <varname>buildInputs</varname> to
get svg support.
Exports the <envar>GDK_PIXBUF_MODULE_FILE</envar> environment variable to
the builder. Add the librsvg package to <varname>buildInputs</varname> to
get SVG support.
</para>
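<para>
A minimal sketch of getting SVG support this way, assuming the hook
described above is already in the package's inputs (the package name and
source are hypothetical):
<programlisting>
{ stdenv, fetchurl, librsvg }:

stdenv.mkDerivation {
  name = "foo-gui-1.0"; # hypothetical package
  src = fetchurl { /* ... */ };

  # librsvg supplies the SVG loader module that gdk-pixbuf discovers via
  # GDK_PIXBUF_MODULE_FILE.
  buildInputs = [ librsvg ];
}
</programlisting>
</para>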
</listitem>
</varlistentry>
@@ -2399,7 +2406,7 @@ addEnvHooks "$hostOffset" myBashFunction
PaX flags on Linux (where it is available by default; on all other
platforms, <varname>paxmark</varname> is a no-op). For example, to
disable secure memory protections on the executable
<replaceable>foo</replaceable>:
<replaceable>foo</replaceable>
<programlisting>
postFixup = ''
paxmark m $out/bin/<replaceable>foo</replaceable>
@@ -2452,6 +2459,103 @@ addEnvHooks "$hostOffset" myBashFunction
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
cmake
</term>
<listitem>
<para>
Overrides the default configure phase to run the CMake command. By
default, we use the Make generator of CMake. In
addition, dependencies are added automatically to CMAKE_PREFIX_PATH so
that packages are correctly detected by CMake. Some additional flags
are passed in to give similar behavior to configure-based packages. You
can disable this hook's behavior by setting configurePhase to a custom
value, or by setting dontUseCmakeConfigure. cmakeFlags controls flags
passed only to CMake. By default, parallel building is enabled as CMake
supports parallel building almost everywhere. When Ninja is also in
use, CMake will detect that and use the Ninja generator.
</para>
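<para>
A rough sketch of a CMake-based package (the package name, source, and
flag are hypothetical):
<programlisting>
{ stdenv, fetchurl, cmake }:

stdenv.mkDerivation {
  name = "libbar-2.0"; # hypothetical package
  src = fetchurl { /* ... */ };

  # Brings in the setup hook that replaces the configure phase with a
  # cmake invocation.
  nativeBuildInputs = [ cmake ];

  # Extra flags passed only to cmake, on top of the defaults set by the
  # hook.
  cmakeFlags = [ "-DBUILD_TESTING=OFF" ]; # hypothetical flag
}
</programlisting>
</para>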
</listitem>
</varlistentry>
<varlistentry>
<term>
xcbuildHook
</term>
<listitem>
<para>
Overrides the build and install phases to run the “xcbuild” command.
This hook is needed when a project only comes with build files for the
Xcode build system. You can disable this behavior by setting buildPhase
and configurePhase to a custom value. xcbuildFlags controls flags
passed only to xcbuild.
</para>
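<para>
A sketch of how this can look for a Darwin-only package (the package
name, source, and flags are hypothetical):
<programlisting>
{ stdenv, fetchurl, xcbuildHook }:

stdenv.mkDerivation {
  name = "baz-0.1"; # hypothetical package
  src = fetchurl { /* ... */ };

  # Replaces the build and install phases with xcbuild invocations.
  nativeBuildInputs = [ xcbuildHook ];

  # Flags passed only to xcbuild, e.g. to select a particular target.
  xcbuildFlags = [ "-target" "baz" ]; # hypothetical flags
}
</programlisting>
</para>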
</listitem>
</varlistentry>
<varlistentry>
<term>
meson
</term>
<listitem>
<para>
Overrides the configure phase to run meson to generate Ninja files. You
can disable this behavior by setting configurePhase to a custom value,
or by setting dontUseMesonConfigure. To build the generated Ninja files,
you should accompany meson with ninja. mesonFlags controls only the flags
passed
to meson. By default, parallel building is enabled as Meson supports
parallel building almost everywhere.
</para>
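<para>
A minimal sketch of a Meson-based package (the package name, source, and
flag are hypothetical):
<programlisting>
{ stdenv, fetchurl, meson, ninja }:

stdenv.mkDerivation {
  name = "libqux-1.0"; # hypothetical package
  src = fetchurl { /* ... */ };

  # meson provides the configure-phase hook; ninja provides the build and
  # install hooks that consume the generated Ninja files.
  nativeBuildInputs = [ meson ninja ];

  # Flags passed only to meson.
  mesonFlags = [ "-Ddocs=false" ]; # hypothetical flag
}
</programlisting>
</para>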
</listitem>
</varlistentry>
<varlistentry>
<term>
ninja
</term>
<listitem>
<para>
Overrides the build, install, and check phases to run ninja instead of
make. You can disable this behavior for each phase by setting
dontUseNinjaBuild, dontUseNinjaInstall, and dontUseNinjaCheck,
respectively. Parallel
building is enabled by default in Ninja.
</para>
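<para>
As a sketch, a package that builds with Ninja but has no install target
could keep the ninja-driven build while overriding only the install step
(the package name, source, and paths are hypothetical):
<programlisting>
{ stdenv, fetchurl, ninja }:

stdenv.mkDerivation {
  name = "quux-3.1"; # hypothetical package
  src = fetchurl { /* ... */ };

  nativeBuildInputs = [ ninja ];

  # Keep the ninja build and check phases, but skip "ninja install" in
  # favour of the manual installPhase below.
  dontUseNinjaInstall = true;
  installPhase = ''
    mkdir -p $out/bin
    cp quux $out/bin/
  '';
}
</programlisting>
</para>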
</listitem>
</varlistentry>
<varlistentry>
<term>
unzip
</term>
<listitem>
<para>
This setup hook will allow you to unzip .zip files specified in $src.
There are many similar packages like unrar, undmg, etc.
</para>
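<para>
A minimal sketch of unpacking a zip source (the package name and source
are hypothetical):
<programlisting>
{ stdenv, fetchurl, unzip }:

stdenv.mkDerivation {
  name = "corge-1.0"; # hypothetical package
  # src points at a .zip archive
  src = fetchurl { /* ... */ };

  # The unzip setup hook teaches the unpack phase how to handle .zip
  # files.
  nativeBuildInputs = [ unzip ];
}
</programlisting>
</para>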
</listitem>
</varlistentry>
<varlistentry>
<term>
wafHook
</term>
<listitem>
<para>
Overrides the configure, build, and install phases. This will run the
"waf" script used by many projects. If waf doesnt exist, it will copy
the version of waf available in Nixpkgs wafFlags can be used to pass
flags to the waf script.
</para>
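<para>
A sketch of a waf-based package (the package name, source, and flag are
hypothetical):
<programlisting>
{ stdenv, fetchurl, wafHook }:

stdenv.mkDerivation {
  name = "grault-0.9"; # hypothetical package
  src = fetchurl { /* ... */ };

  # Replaces the configure, build, and install phases with calls to the
  # waf script.
  nativeBuildInputs = [ wafHook ];

  # Flags passed to the waf script.
  wafFlags = [ "--disable-tests" ]; # hypothetical flag
}
</programlisting>
</para>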
</listitem>
</varlistentry>
<varlistentry>
<term>
scons
</term>
<listitem>
<para>
Overrides the build, install, and check phases. This uses the scons
build system as a replacement for make. scons does not provide a
configure phase, so everything is managed at build and install time.
</para>
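<para>
A sketch of a package driven by SCons (the package name and source are
hypothetical):
<programlisting>
{ stdenv, fetchurl, scons }:

stdenv.mkDerivation {
  name = "garply-2.4"; # hypothetical package
  src = fetchurl { /* ... */ };

  # Brings in the setup hook that runs scons for the build, install, and
  # check phases.
  nativeBuildInputs = [ scons ];
}
</programlisting>
</para>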
</listitem>
</varlistentry>
</variablelist>
</para>
</section>