The command nixos-container can now create containers. For instance,
the following creates and starts a container named ‘database’:
$ nixos-container create database
The configuration of the container is stored in
/var/lib/containers/<name>/etc/nixos/configuration.nix. After editing
the configuration, you can make the changes take effect by running
$ nixos-container update database
The container can also be destroyed:
$ nixos-container destroy database
Containers are now executed using a template unit,
‘container@.service’, so the unit in this example would be
‘container@database.service’.
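Since containers are instances of this template unit, the usual
systemctl verbs work on them; for example, to check on or stop the
‘database’ container from above:
$ systemctl status container@database.service
$ systemctl stop container@database.service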
For example, the following sets up a container named ‘foo’. The
container will have a single network interface eth0, with IP address
10.231.136.2. The host will have an interface c-foo with IP address
10.231.136.1.
systemd.containers.foo =
  { privateNetwork = true;
    hostAddress = "10.231.136.1";
    localAddress = "10.231.136.2";
    config =
      { services.openssh.enable = true; };
  };
With ‘privateNetwork = true’, the container has the CAP_NET_ADMIN
capability, allowing it to do arbitrary network configuration, such as
setting up firewall rules. This is secure because the container cannot
touch the network interfaces of the host.
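For instance, inside the container you could open a port in its
firewall or add an extra address to eth0 (a sketch, assuming iptables
and iproute2 are available in the container):
# iptables -I INPUT -p tcp --dport 80 -j ACCEPT
# ip addr add 10.231.136.3/24 dev eth0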
The helper program ‘run-in-netns’ is needed at the moment because ‘ip
netns exec’ doesn't quite do the right thing (it remounts /sys without
bind-mounting the original /sys/fs/cgroup).
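For reference, the invocation that misbehaves is of the form:
$ ip netns exec <namespace> <command>
The run-in-netns helper enters the namespace in the same way but keeps
the original /sys/fs/cgroup mount intact.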
Container profiles and GC roots are stored on the host in
/nix/var/nix/{profiles,gcroots}/per-container/<container-name> to
ensure that they are not garbage-collected.
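For example, with the ‘database’ container from above, you can inspect
its profiles on the host (path shown for illustration):
$ ls /nix/var/nix/profiles/per-container/database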
Using a hard "Requires" dependency on keys.target has the unintended
side-effect of restarting httpd every time we run
switch-to-configuration, even if httpd hasn't changed (because we're
doing a "stop keys.target" now). So use a "Wants" dependency instead.
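In unit-file terms the change amounts to the following (a sketch of
the httpd unit's dependency on keys.target):
  [Unit]
  # Before: Requires=keys.target, so stopping keys.target also stops,
  # and thus restarts, httpd.
  # After: a soft dependency; stopping keys.target leaves httpd alone.
  Wants=keys.target
  After=keys.target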
On the host, you can run
$ socat unix:<path-to-container>/var/lib/login.socket -,echo=0,raw
to get a login prompt. This allows logging in even if the container
has no SSH access enabled.
You can also do
$ socat unix:<path-to-container>/var/lib/root-shell.socket -
to get a plain root shell. (This socket is only accessible by root,
obviously.) This makes it easy to execute commands in the container,
e.g.
$ echo reboot | socat unix:<path-to-container>/var/lib/root-shell.socket -
This reverts commit b792394119b8ffc4a2fd34a67048fe205a08dcd7.
Starting the manual on tty8 was intended as a convenience during
installation, not as a general-purpose feature. In fact, given that
w3m runs as root, this is highly insecure!
This module adds the security.duosec attributes, which you can use to
enable simple two-factor authentication for NixOS logins.
The module currently provides PAM and SSH support, though the system
PAM configuration isn't wired up automatically yet (the needed
configuration itself is built automatically).
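For a service whose PAM stack you do have to edit by hand, the stanza
to add to the corresponding /etc/pam.d file typically looks like this
(a sketch; the exact module path on NixOS may differ):
  auth required pam_duo.so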
Enabling it is as easy as saying:
security.duosec.ssh.enable = true;
security.duosec.ikey = "XXXXXXXX...";
security.duosec.skey = "XXXXXXXX...";
security.duosec.host = "api-XXXXXXX.duosecurity.com";
security.duosec.group = "duosec";
which will enforce two-factor authentication for SSH logins for users in
the 'duosec' group.
This requires uid/gid support in the environment.etc module.
Signed-off-by: Austin Seipp <aseipp@pobox.com>
This has the nice side-effect of making gpsd actually run!
Old behaviour (debugLevel=2):
systemd[1]: gpsd.service holdoff time over, scheduling restart.
systemd[1]: Stopping GPSD daemon...
systemd[1]: Starting GPSD daemon...
systemd[1]: gpsd.service start request repeated too quickly, refusing to start.
systemd[1]: Failed to start GPSD daemon.
systemd[1]: Unit gpsd.service entered failed state.
New behaviour (debugLevel=2):
gpsd[945]: gpsd: launching (Version 2.95)
systemd[1]: Started GPSD daemon.
gpsd[945]: gpsd: listening on port 2947
gpsd[945]: gpsd: running with effective group ID 27
gpsd[945]: gpsd: running with effective user ID 23
gpsd[945]: gpsd: stashing device /dev/ttyUSB0 at slot 0
Uses standard NixOS user config merging.
Work in progress: the slave config does not actually start the slave
agent; it just configures a jenkins user if required. This is the bare
minimum needed to enable a nice Jenkins SSH slave.
By default the Jenkins server is executed under the user "jenkins",
which can be configured using the users.jenkins.* options. If a
different user is requested by changing services.jenkins.user, then
none of the users.jenkins options apply.
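For illustration, overriding the user might look like this (the "ci"
user is a made-up example; services.jenkins.user is from this patch,
and services.jenkins.enable is assumed to follow the usual NixOS
module pattern):
  services.jenkins.enable = true;
  # Run the master as a pre-existing user; the users.jenkins.*
  # options then no longer apply.
  services.jenkins.user = "ci";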
This patch does not include jenkins slave configuration. Some config options will probably change
when this is implemented.
Aspects like the user and environment are typically identical between
slave and master, while the service configs differ. The design is for
users.jenkins to cover the shared aspects, with services.jenkins and
services.jenkins-slave covering the master- and slave-specific
aspects, respectively.
Another option would be to place everything under services.jenkins and
have a config option that selects master vs. slave.