Subject: [bug#45692] [PATCH v5 3/3] gnu: Add ZFS service type.
Date: Tue, 19 Oct 2021 13:18:21 +0000
To: zimoun
Cc: Maxime Devos, 45692@debbugs.gnu.org
From: raid5atemyhomework via Guix-patches <raid5atemyhomework@protonmail.com>

***BUMP***

> Sorry for the lateness everyone.
>
> Hope this one gets reviewed and merged.
>
> ---------------------------------------------------------------------------
>
> From 3803e046566278fe12d64f6e39564e9602bf434d Mon Sep 17 00:00:00 2001
> From: raid5atemyhomework <raid5atemyhomework@protonmail.com>
> Date: Thu, 30 Sep 2021 16:58:46 +0800
> Subject: [PATCH] gnu: Add ZFS service type.
>
> * gnu/services/file-systems.scm: New file.
> * gnu/local.mk (GNU_SYSTEM_MODULES): Add it.
> * gnu/services/base.scm: Export dependency->shepherd-service-name.
> * doc/guix.texi (ZFS File System): New subsection.
>
>  doc/guix.texi                 | 351 ++++++++++++++++++++++++++++++++
>  gnu/local.mk                  |   2 +
>  gnu/services/base.scm         |   4 +-
>  gnu/services/file-systems.scm | 363 ++++++++++++++++++++++++++++++++++
>  4 files changed, 719 insertions(+), 1 deletion(-)
>  create mode 100644 gnu/services/file-systems.scm
>
> diff --git a/doc/guix.texi b/doc/guix.texi
> index a72a726b54..dd38103953 100644
> --- a/doc/guix.texi
> +++ b/doc/guix.texi
> @@ -97,6 +97,7 @@ Copyright @copyright{} 2021 Hui Lu@*
>  Copyright @copyright{} 2021 pukkamustard@*
>  Copyright @copyright{} 2021 Alice Brenon@*
>  Copyright @copyright{} 2021 Andrew Tropin@*
> +Copyright @copyright{} 2021 raid5atemyhomework@*
>
>  Permission is granted to copy, distribute and/or modify this document
>  under the terms of the GNU Free Documentation License, Version 1.3 or
> @@ -14435,6 +14436,356 @@ a file system declaration such as:
>                    compress-force=zstd,space_cache=v2"))
>  @end lisp
>
> +
> +@node ZFS File System
> +@subsection ZFS File System
> +
> +Support for ZFS file systems in Guix is based on the OpenZFS project.
> +OpenZFS currently supports only Linux-Libre and is not available on the
> +Hurd.
> +
> +OpenZFS is free software; unfortunately its license is incompatible with
> +the GNU General Public License (GPL), the license of the Linux kernel,
> +which means they cannot be distributed together.  However, as a user,
> +you can choose to build ZFS and use it together with Linux; you can
> +even rely on Guix to automate this task.  See
> +@uref{https://www.fsf.org/licensing/zfs-and-linux, this analysis by
> +the Free Software Foundation} for more information.
> +
> +As a large and complex kernel module, OpenZFS has to be compiled for a
> +specific version of Linux-Libre.  At times, the latest OpenZFS package
> +available in Guix is not compatible with the latest Linux-Libre version.
> +Thus, directly installing the @code{zfs} package can fail.
> +
> +We recommend instead that you select a specific older long-term-support
> +Linux-Libre kernel.  Do not use @code{linux-libre-lts}, as even the
> +latest long-term-support kernel may be too new for @code{zfs}.  Explicitly
> +select a specific older version, such as @code{linux-libre-5.10}, and
> +upgrade it manually later, once you have confirmed that a newer
> +long-term-support kernel is compatible with the latest OpenZFS version
> +available in Guix.
> +
> +For example, you can pin your system configuration to a specific
> +Linux-Libre version and add the @code{zfs-service-type} service:
> +
> +@lisp
> +(use-modules (gnu))
> +(use-package-modules
> +  #;@dots{}
> +  linux)
> +(use-service-modules
> +  #;@dots{}
> +  file-systems)
> +
> +(define my-kernel linux-libre-5.10)
> +
> +(operating-system
> +  (kernel my-kernel)
> +  #;@dots{}
> +  (services
> +    (cons* (service zfs-service-type
> +                    (zfs-configuration
> +                      (kernel my-kernel)))
> +           #;@dots{}
> +           %desktop-services))
> +  #;@dots{})
> +@end lisp
> +
> +@defvr {Scheme Variable} zfs-service-type
> +This is the type for a service that adds ZFS support to your operating
> +system.  The service is configured using a @code{zfs-configuration}
> +record.
> +
> +Here is an example use:
> +
> +@lisp
> +(service zfs-service-type
> +         (zfs-configuration
> +           (kernel linux-libre-5.4)))
> +@end lisp
> +@end defvr
> +
> +@deftp {Data Type} zfs-configuration
> +This data type represents the configuration of the ZFS support in Guix
> +System.
> +Its fields are:
> +
> +@table @asis
> +@item @code{kernel}
> +The package of the Linux-Libre kernel to compile OpenZFS for.  This field
> +is always required.  It @emph{must} be the same kernel you use in your
> +@code{operating-system} form.
> +
> +@item @code{base-zfs} (default: @code{zfs})
> +The OpenZFS package that will be compiled for the given Linux-Libre kernel.
> +
> +@item @code{base-zfs-auto-snapshot} (default: @code{zfs-auto-snapshot})
> +The @code{zfs-auto-snapshot} package to use.  It will be modified to
> +specifically use the OpenZFS compiled for your kernel.
> +
> +@item @code{dependencies} (default: @code{'()})
> +A list of @code{<file-system>} or @code{<mapped-device>} records that must
> +be mounted or opened before OpenZFS scans for pools to import.  For example,
> +if you have set up LUKS containers as leaf VDEVs in a pool, you have to
> +include their corresponding @code{<mapped-device>} records so that OpenZFS
> +can import the pool correctly at bootup.  A sketch of such a setup appears
> +after this data type's description.
> +
> +@item @code{auto-mount?} (default: @code{#t})
> +Whether to mount datasets with the ZFS @code{mountpoint} property automatically
> +at startup.  This is the behavior that ZFS users usually expect.  You might
> +set this to @code{#f} for an operating system intended as a ``rescue'' system
> +meant to help debug problems with the disks rather than to run in production.
> +
> +@item @code{auto-scrub} (default: @code{'weekly})
> +Specifies how often to scrub all pools.  Can be the symbols @code{'weekly} or
> +@code{'monthly}, or a schedule specification understood by mcron
> +(@ref{mcron, mcron job specifications,, mcron, GNU@tie{}mcron}), such as
> +@code{"0 3 * * 6"} for ``every Saturday at 3AM''.
> +It can also be @code{#f} to disable auto-scrubbing (@strong{not recommended}).
> +
> +The general guideline is to scrub weekly when using consumer-quality drives, and
> +to scrub monthly when using enterprise-quality drives.
> +
> +@code{'weekly} scrubs are done at midnight on Sunday, while @code{'monthly}
> +scrubs are done at midnight on the first day of each month.
> +
> +@item @code{auto-snapshot?} (default: @code{#t})
> +Specifies whether to auto-snapshot by default.  If @code{#t}, then snapshots
> +are automatically created except for ZFS datasets with the
> +@code{com.sun:auto-snapshot} ZFS vendor property set to @code{false}.
> +
> +If @code{#f}, snapshots will not be automatically created, unless the ZFS
> +dataset has the @code{com.sun:auto-snapshot} ZFS vendor property set to
> +@code{true}.
> +
> +@item @code{auto-snapshot-keep} (default: @code{'()})
> +Specifies an association list of symbol-number pairs, indicating the number
> +of automatically-created snapshots to retain for each frequency type.
> +
> +If not specified via this field, by default there are 4 @code{frequent}, 24
> +@code{hourly}, 31 @code{daily}, 8 @code{weekly}, and 12 @code{monthly} snapshots.
> +
> +For example:
> +
> +@lisp
> +(zfs-configuration
> +  (kernel my-kernel)
> +  (auto-snapshot-keep
> +    '((frequent . 8)
> +      (hourly . 12))))
> +@end lisp
> +
> +The above will keep 8 @code{frequent} snapshots and 12 @code{hourly} snapshots.
> +@code{daily}, @code{weekly}, and @code{monthly} snapshots will keep their
> +defaults (31 @code{daily}, 8 @code{weekly}, and 12 @code{monthly}).
> +
> +@end table
> +@end deftp
> +
> +@subsubsection ZFS Auto-Snapshot
> +
> +The ZFS service on Guix System supports auto-snapshots as implemented in the
> +Solaris operating system.
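
Reviewer note, not part of the patch: the @code{dependencies} and
@code{auto-scrub} fields documented above may be easier to grasp with a
small sketch.  Everything here is hypothetical: the LUKS UUID, the
mapped-device name and the pool layout are made up, and @code{my-kernel}
is the kernel variable from the earlier example.

    (define my-luks-leaf
      ;; Hypothetical LUKS container used as a leaf VDEV of a ZFS pool.
      (mapped-device
        (source (uuid "8e50f35c-0000-0000-0000-000000000000"))
        (targets (list "zfs-leaf0"))
        (type luks-device-mapping)))

    (zfs-configuration
      (kernel my-kernel)
      ;; Open the LUKS container before ZFS scans for pools to import.
      (dependencies (list my-luks-leaf))
      ;; Scrub every Saturday at 3AM, using the mcron string syntax
      ;; instead of the 'weekly or 'monthly symbols.
      (auto-scrub "0 3 * * 6"))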
> +
> +@code{frequent} (every 15 minutes), @code{hourly}, @code{daily}, @code{weekly},
> +and @code{monthly} snapshots are created automatically for ZFS datasets that
> +have auto-snapshot enabled.  They will be named, for example,
> +@code{zfs-auto-snap_frequent-2021-03-22-1415}.  You can continue to use
> +manually-created snapshots as long as they do not conflict with the naming
> +convention used by auto-snapshot.  You can also safely manually destroy
> +automatically-created snapshots, for example to free up space.
> +
> +The @code{com.sun:auto-snapshot} ZFS property controls auto-snapshot on a
> +per-dataset level.  Sub-datasets will inherit this property from their parent
> +dataset, but can have their own property.
> +
> +You @emph{must} set this property to @code{true} or @code{false} exactly,
> +otherwise it will be treated as if the property were unset.
> +
> +For example:
> +
> +@example
> +# zfs list -o name
> +NAME
> +tank
> +tank/important-data
> +tank/tmp
> +# zfs set com.sun:auto-snapshot=true tank
> +# zfs set com.sun:auto-snapshot=false tank/tmp
> +@end example
> +
> +The above will set @code{tank} and @code{tank/important-data} to be
> +auto-snapshotted, while @code{tank/tmp} will not be.
> +
> +If the @code{com.sun:auto-snapshot} property is not set for a dataset
> +(the default when pools and datasets are created), then whether
> +auto-snapshot is done or not will depend on the @code{auto-snapshot?}
> +field of the @code{zfs-configuration} record.
> +
> +There are also @code{com.sun:auto-snapshot:frequent},
> +@code{com.sun:auto-snapshot:hourly}, @code{com.sun:auto-snapshot:daily},
> +@code{com.sun:auto-snapshot:weekly}, and @code{com.sun:auto-snapshot:monthly}
> +properties that give finer-grained control of whether to auto-snapshot a
> +dataset at a particular schedule.
> +
> +The number of snapshots kept for all datasets can be overridden via the
> +@code{auto-snapshot-keep} field of the @code{zfs-configuration} record.
> +There is currently no support for keeping different numbers of snapshots
> +for different datasets.
> +
> +@subsubsection ZVOLs
> +
> +ZFS supports ZVOLs, block devices that ZFS exposes to the operating
> +system in the @code{/dev/zvol/} directory.  The ZVOL will have the same
> +resilience and self-healing properties as other datasets on your ZFS pool.
> +ZVOLs can also be snapshotted (and will be included in auto-snapshotting
> +if enabled), which snapshots the state of the block device, effectively
> +snapshotting the hosted file system.
> +
> +You can put any file system inside a ZVOL.  However, in order to mount this
> +file system at system start, you need to add @code{%zfs-zvol-dependency} as a
> +dependency of each file system inside a ZVOL.
> +
> +@defvr {Scheme Variable} %zfs-zvol-dependency
> +An artificial @code{<mapped-device>} which tells the file system mounting
> +service to wait for ZFS to provide ZVOLs before mounting the
> +@code{<file-system>} dependent on it.
> +@end defvr
> +
> +For example, suppose you create a ZVOL and put an ext4 file system
> +inside it:
> +
> +@example
> +# zfs create -V 100G tank/ext4-on-zfs
> +# mkfs.ext4 /dev/zvol/tank/ext4-on-zfs
> +# mkdir /ext4-on-zfs
> +# mount /dev/zvol/tank/ext4-on-zfs /ext4-on-zfs
> +@end example
> +
> +You can then set this up to be mounted at boot by adding this to the
> +@code{file-systems} field of your @code{operating-system} record:
> +
> +@lisp
> +(file-system
> +  (device "/dev/zvol/tank/ext4-on-zfs")
> +  (mount-point "/ext4-on-zfs")
> +  (type "ext4")
> +  (dependencies (list %zfs-zvol-dependency)))
> +@end lisp
> +
> +You @emph{must not} add @code{%zfs-zvol-dependency} to your
> +@code{operating-system}'s @code{mapped-devices} field, and you @emph{must
> +not} add it (or any @code{<file-system>}s dependent on it) to the
> +@code{dependencies} field of @code{zfs-configuration}.  Finally, you
> +@emph{must not} use @code{%zfs-zvol-dependency} unless you actually
> +instantiate @code{zfs-service-type} on your system.
> +
> +@subsubsection Unsupported Features
> +
> +Some common features and uses of ZFS are currently not supported, or not
> +fully supported, on Guix.
> +
> +@enumerate
> +@item
> +Shepherd-managed daemons that are configured to read from or write to ZFS
> +mountpoints need to include @code{user-processes} in their @code{requirement}
> +field.  This is the earliest point at which ZFS file systems are assured of
> +being mounted.  A sketch of such a service appears after this section.
> +
> +Generally, most daemons will, directly or indirectly, require
> +@code{networking}, or @code{user-processes}, or both.  Most implementations
> +of @code{networking} will also require @code{user-processes}, so daemons that
> +require only @code{networking} will also generally start up after
> +@code{user-processes}.  A notable exception, however, is
> +@code{static-networking-service-type}.  You will need to explicitly add
> +@code{user-processes} as a @code{requirement} of your @code{static-networking}
> +record.
> +
> +@item
> +@code{mountpoint=legacy} ZFS file systems.  The handlers for the Guix mounting
> +system have not yet been modified to support ZFS, and will expect @code{/dev}
> +paths in the @code{<file-system>}'s @code{device} field, but ZFS file systems
> +are referred to via non-path @code{pool/file/system} names.  Such file systems
> +also need to be mounted @emph{after} OpenZFS has scanned for pools.
> +
> +You can still manually mount these file systems after system boot; only
> +mounting them automatically at system boot, by specifying them in the
> +@code{<file-system>} records of your @code{operating-system}, is unsupported.
> +
> +@item
> +@code{/home} on ZFS.  Guix will create home directories for users, but this
> +process currently cannot be scheduled after ZFS file systems are mounted.
> +Thus, the ZFS file system might be mounted @emph{after} Guix has created
> +home directories at boot, at which point OpenZFS will refuse to mount since
> +the mountpoint is not empty.  However, you @emph{can} create an ext4, xfs,
> +btrfs, or other supported file system inside a ZVOL, have that depend on
> +@code{%zfs-zvol-dependency}, and set it to mount on the @code{/home}
> +directory; it will be scheduled to mount before the @code{user-homes}
> +process.
> +
> +Similarly, other locations like @code{/var}, @code{/gnu/store} and so
> +on cannot be reliably put in a ZFS file system, though it may be possible
> +to create them as other file systems inside ZVOL containers.
> +
> +@item
> +@code{/} and @code{/boot} on ZFS.
> +These require Guix to expose more of the very early @code{initrd} boot
> +process to services.  They also require Guix to have the ability to
> +explicitly load modules while still in the @code{initrd} (currently kernel
> +modules loaded by @code{kernel-module-loader-service-type} are loaded after
> +@code{/} is mounted).  Further, since one of ZFS's main advantages is that
> +it can continue working despite the loss of one or more devices, it makes
> +sense to also support installing the bootloader on all devices of the pool
> +that contains @code{/} and @code{/boot}; after all, if ZFS can survive the
> +loss of one device, the bootloader should also be able to survive the loss
> +of one device.
> +
> +@item
> +ZVOL swap devices.  Mapped swap devices need to be listed in
> +@code{mapped-devices} to ensure they are opened before the system attempts
> +to use them, but you cannot currently add @code{%zfs-zvol-dependency} to
> +@code{mapped-devices}.
> +
> +This will also require significant amounts of testing, as various kernel
> +build options and patches may affect how swapping works, and these are
> +possibly different on Guix System compared to other distributions where
> +this feature is known to work.
> +
> +@item
> +ZFS Event Daemon.  Support for this has not been written yet; patches are
> +welcome.  The main issue is how to design this in a Guix style while
> +supporting legacy shell-script styles as well.  In particular, OpenZFS itself
> +comes with a number of shell scripts intended for the ZFS Event Daemon, and we
> +need to figure out how the user can choose to use or not use the provided
> +scripts (and configure any settings they have) or override them with their own
> +custom code (which could be shell scripts they have written and trusted from
> +previous ZFS installations).
> +
> +As-is, you can create your own service that activates the ZFS Event Daemon
> +by creating the @file{/etc/zfs/zed} directory and filling it appropriately,
> +then launching @code{zed}.
> +
> +@item
> +@file{/etc/zfs/zpool.cache}.  Currently the ZFS support on Guix always forces
> +scanning of all devices at bootup to look for ZFS pools.  For systems with
> +dozens or hundreds of storage devices, this can lead to slow bootup.  One issue
> +is that tools should really not write to @code{/etc}, which is supposed to be
> +for configuration; possibly the cache file could be moved to @code{/var}
> +instead.  Another issue is that if Guix ever supports @code{/} on ZFS, we
> +would need to somehow keep the @code{zpool.cache} file inside the
> +@code{initrd} up-to-date with what is in the @code{/} mount point.
> +
> +@item
> +@code{zfs share}.  This will require some (unknown amount of) work to integrate
> +into the Samba and NFS services of Guix.  You @emph{can} manually set up Samba
> +and NFS to share any mounted ZFS datasets by setting up their configurations
> +properly; it just cannot be done for you by @code{zfs share} and the
> +@code{sharesmb} and @code{sharenfs} properties.
> +@end enumerate
> +
> +Hopefully, supporting the above only requires code to be written; users
> +are encouraged to hack on Guix to implement these features.
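
Reviewer note, not part of the patch: the @code{user-processes} requirement
described in the first item of the list above may be easier to see in code.
The service name, package variable and data directory below are made up for
illustration; only the @code{requirement} line is the point of the sketch.

    (simple-service 'my-zfs-consumer shepherd-root-service-type
      (list (shepherd-service
              (provision '(my-zfs-consumer))
              ;; 'user-processes' is the earliest point at which ZFS file
              ;; systems are guaranteed to be mounted.
              (requirement '(user-processes networking))
              (start #~(make-forkexec-constructor
                        (list #$(file-append my-daemon-package "/bin/my-daemon")
                              "--data-dir" "/tank/data")))
              (stop #~(make-kill-destructor)))))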
> +
>  @node Mapped Devices
>  @section Mapped Devices
>
> diff --git a/gnu/local.mk b/gnu/local.mk
> index d415b892e9..4147badd49 100644
> --- a/gnu/local.mk
> +++ b/gnu/local.mk
> @@ -45,6 +45,7 @@
>  # Copyright © 2021 Sharlatan Hellseher <sharlatanus@gmail.com>
>  # Copyright © 2021 Dmitry Polyakov <polyakov@liltechdude.xyz>
>  # Copyright © 2021 Andrew Tropin <andrew@trop.in>
> +# Copyright © 2021 raid5atemyhomework <raid5atemyhomework@protonmail.com>
>  #
>  # This file is part of GNU Guix.
>  #
> @@ -633,6 +634,7 @@ GNU_SYSTEM_MODULES = \
>    %D%/services/docker.scm \
>    %D%/services/authentication.scm \
>    %D%/services/file-sharing.scm \
> +  %D%/services/file-systems.scm \
>    %D%/services/games.scm \
>    %D%/services/ganeti.scm \
>    %D%/services/getmail.scm \
> diff --git a/gnu/services/base.scm b/gnu/services/base.scm
> index 50865055fe..d5d33aeada 100644
> --- a/gnu/services/base.scm
> +++ b/gnu/services/base.scm
> @@ -186,7 +186,9 @@
>              references-file
>
> -            %base-services))
> +            %base-services
> +
> +            dependency->shepherd-service-name))
>
>  ;;; Commentary:
>  ;;;
> diff --git a/gnu/services/file-systems.scm b/gnu/services/file-systems.scm
> new file mode 100644
> index 0000000000..867349c3a5
> --- /dev/null
> +++ b/gnu/services/file-systems.scm
> @@ -0,0 +1,363 @@
> +;;; GNU Guix --- Functional package management for GNU
> +;;; Copyright © 2021 raid5atemyhomework <raid5atemyhomework@protonmail.com>
> +;;;
> +;;; This file is part of GNU Guix.
> +;;;
> +;;; GNU Guix is free software; you can redistribute it and/or modify it
> +;;; under the terms of the GNU General Public License as published by
> +;;; the Free Software Foundation; either version 3 of the License, or (at
> +;;; your option) any later version.
> +;;;
> +;;; GNU Guix is distributed in the hope that it will be useful, but
> +;;; WITHOUT ANY WARRANTY; without even the implied warranty of
> +;;; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +;;; GNU General Public License for more details.
> +;;;
> +;;; You should have received a copy of the GNU General Public License
> +;;; along with GNU Guix.  If not, see <http://www.gnu.org/licenses/>.
> +
> +(define-module (gnu services file-systems)
> +  #:use-module (gnu packages file-systems)
> +  #:use-module (gnu services)
> +  #:use-module (gnu services base)
> +  #:use-module (gnu services linux)
> +  #:use-module (gnu services mcron)
> +  #:use-module (gnu services shepherd)
> +  #:use-module (gnu system mapped-devices)
> +  #:use-module (guix gexp)
> +  #:use-module (guix modules)
> +  #:use-module (guix packages)
> +  #:use-module (guix records)
> +  #:use-module (srfi srfi-1)
> +  #:export (zfs-service-type
> +
> +            zfs-configuration
> +            zfs-configuration?
> > > - zfs-configuration-kernel > > > - zfs-configuration-base-zfs > > > - zfs-configuration-base-zfs-auto-snapshot > > > - zfs-configuration-dependencies > > > - zfs-configuration-auto-mount? > > > - zfs-configuration-auto-scrub > > > - zfs-configuration-auto-snapshot? > > > - zfs-configuration-auto-snapshot-keep > > > - > - %zfs-zvol-dependency)) > > > - > > +(define-record-type* > > - zfs-configuration > > - make-zfs-configuration > > - zfs-configuration? > > - > - ;; linux-libre kernel you want to compile the base-zfs module for. > > - (kernel zfs-configuration-kernel) > > - > - ;; the OpenZFS package that will be modified to compile for the > > - ;; given kernel. > > - ;; Because it is modified and not the actual package that is used, > > - ;; we prepend the name 'base-'. > > - (base-zfs zfs-configuration-base-zfs > > - (default zfs)) > > > - > - ;; the zfs-auto-snapshot package that will be modified to compile > > - ;; for the given kernel. > > - ;; Because it is modified and not the actual package that is used, > > - ;; we prepend the name 'base-'. > > - (base-zfs-auto-snapshot zfs-configuration-base-zfs-auto-snapshot > > - (default zfs-auto-snapshot)) > > > - > - ;; list of or objects that must be > > - ;; opened/mounted before we import any ZFS pools. > > - (dependencies zfs-configuration-dependencies > > - (default '())) > > > - > - ;; #t to mount all mountable datasets by default. > > - ;; #f if not mounting. > > - ;; #t is the expected behavior on other operating systems, the > > - ;; #f is only supported for "rescue" operating systems where > > - ;; the user wants lower-level control of when to mount. > > - (auto-mount? zfs-configuration-auto-mount? > > - (default #t)) > > > - > - ;; 'weekly for weekly scrubbing, 'monthly for monthly scrubbing, an > > - ;; mcron time specification that can be given to `job`, or #f to > > - ;; disable. > > - (auto-scrub zfs-configuration-auto-scrub > > - (default 'weekly)) > > > - > - ;; #t to auto-snapshot by default (and `com.sun:auto-snapshot=3Dfalse= ` > > - ;; disables auto-snapshot per dataset), #f to not auto-snapshot > > - ;; by default (and `com.sun:auto-snapshot=3Dtrue` enables auto-snapsh= ot > > - ;; per dataset). > > - (auto-snapshot? zfs-configuration-auto-snapshot? > > - (default #t)) > > > - > - ;; association list of symbol-number pairs to indicate the number > > - ;; of automatic snapshots to keep for each of 'frequent, 'hourly, > > - ;; 'daily, 'weekly, and 'monthly. > > - ;; e.g. '((frequent . 8) (hourly . 12)) > > - (auto-snapshot-keep zfs-configuration-auto-snapshot-keep > > - (default '()))) > > > - > > +(define %default-auto-snapshot-keep > > - '((frequent . 4) > - (hourly . 24) > - (daily . 31) > - (weekly . 8) > - (monthly . 12))) > - > > +(define %auto-snapshot-mcron-schedule > > - '((frequent . "0,15,30,45 * * * *") > - (hourly . "0 * * * *") > - (daily . "0 0 * * *") > - (weekly . "0 0 * * 7") > - (monthly . "0 0 1 * *"))) > - > > +;; A synthetic and unusable MAPPED-DEVICE intended for use when > +;; the user has created a mountable filesystem inside a ZFS > +;; zvol and wants it mounted inside the configuration.scm. > +(define %zfs-zvol-dependency > > - (mapped-device > - (source '()) > - (targets '("zvol/*")) > - (type #f))) > - > > +(define (make-zfs-package conf) > > - "Creates a zfs package based on the given zfs-configuration. > - > - OpenZFS is a kernel package and to ensure best compatibility > - it should be compiled with the specific Linux-Libre kernel > - used on the system. 
This simply overrides the kernel used > - in compilation with that given in the configuration, which > - the user has to ensure is the same as in the operating-system." > - (let ((kernel (zfs-configuration-kernel conf)) > - (base-zfs (zfs-configuration-base-zfs conf))) > > > - (package > - (inherit base-zfs) > > > - (arguments (cons* #:linux kernel > > > - (package-arguments base-zfs)))))) > > > - > > +(define (make-zfs-auto-snapshot-package conf) > > - "Creates a zfs-auto-snapshot package based on the given > - zfs-configuration. > - > - Since the OpenZFS tools above are compiled to a specific > - kernel version, zfs-auto-snapshot --- which calls into the > - OpenZFS tools --- has to be compiled with the specific > - modified OpenZFS package created in the make-zfs-package > - procedure." > - (let ((zfs (make-zfs-package conf)) > - (base-zfs-auto-snapshot (zfs-configuration-base-zfs-auto-snaps= hot conf))) > > > - (package > - (inherit base-zfs-auto-snapshot) > > > - (inputs `(("zfs" ,zfs)))))) > > > - > > +(define (zfs-loadable-modules conf) > > - "Specifies that the specific 'module' output of the OpenZFS > - package is to be used; for use in indicating it as a > - loadable kernel module." > - (list (list (make-zfs-package conf) "module"))) > - > > +(define (zfs-shepherd-services conf) > > - "Constructs a list of Shepherd services that is installed > > - by the ZFS Guix service. > > - > - 'zfs-scan' scans all devices for ZFS pools, and makes them > > - available to 'zpool' commands. > > - 'device-mapping-zvol/' waits for /dev/zvol/ to be > > - populated by 'udev', and runs after 'zfs-scan'. > > - 'zfs-auto-mount' mounts all ZFS datasets with a 'mount' > > - property, which defaults to '/' followed by the name of > > - the dataset. > > - > - All the above behavior is expected by ZFS users from > > - typical ZFS installations. A mild difference is that > > - scanning is usually based on '/etc/zfs/zpool.cache' > > - instead of the 'scan all devices' used below, but that > > - file is questionable in Guix since ideally '/etc/' > > - files are modified by the sysad directly; > > - '/etc/zfs/zpool.cache' is modified by ZFS tools." > > - (let* ((zfs-package (make-zfs-package conf)) > > - (zpool (file-append zfs-package "/sbin/zpool")) > > > - (zfs (file-append zfs-package "/sbin/zfs")) > > > - (zvol_wait (file-append zfs-package "/bin/zvol_wait")) > > > - (scheme-modules `((srfi srfi-1) > > > - (srfi srfi-34) > > > - (srfi srfi-35) > > > - (rnrs io ports) > > > - ,@%default-modules))) > > > - (define zfs-scan > > - (shepherd-service > > > - (provision '(zfs-scan)) > > > - (requirement `(root-file-system > > > - kernel-module-loader > > > - udev > > > - ,@(map dependency->shepherd-service-name > > > - (zfs-configuration-dependencies conf)))) > > > - (documentation "Scans for and imports ZFS pools.") > > > - (modules scheme-modules) > > > - (start #~(lambda _ > > > - (guard (c ((message-condition? c) > > > - (format (current-error-port) > > > - "zfs: error importing pools: ~s~= %" > > > - (condition-message c)) > > > - #f)) > > > - ;; TODO: optionally use a cachefile. > > > - (invoke #$zpool "import" "-a" "-N")))) > > > - ;; Why not one-shot? Because we don't really want to rescan > > > - ;; this each time a requiring process is restarted, as scannin= g > > > - ;; can take a long time and a lot of I/O. 
> > > - (stop #~(const #f)))) > > > - > - (define device-mapping-zvol/* > > - (shepherd-service > > > - (provision '(device-mapping-zvol/*)) > > > - (requirement '(zfs-scan)) > > > - (documentation "Waits for all ZFS ZVOLs to be opened.") > > > - (modules scheme-modules) > > > - (start #~(lambda _ > > > - (guard (c ((message-condition? c) > > > - (format (current-error-port) > > > - "zfs: error opening zvols: ~s~%" > > > - (condition-message c)) > > > - #f)) > > > - (invoke #$zvol_wait)))) > > > - (stop #~(const #f)))) > > > - > - (define zfs-auto-mount > > - (shepherd-service > > > - (provision '(zfs-auto-mount)) > > > - (requirement '(zfs-scan)) > > > - (documentation "Mounts all non-legacy mounted ZFS filesystems.= ") > > > - (modules scheme-modules) > > > - (start #~(lambda _ > > > - (guard (c ((message-condition? c) > > > - (format (current-error-port) > > > - "zfs: error mounting file system= s: ~s~%" > > > - (condition-message c)) > > > - #f)) > > > - ;; Output to current-error-port, otherwise the > > > - ;; user will not see any prompts for passwords > > > - ;; of encrypted datasets. > > > - ;; XXX Maybe better to explicitly open /dev/conso= le ? > > > - (with-output-to-port (current-error-port) > > > - (lambda () > > > - (invoke #$zfs "mount" "-a" "-l")))))) > > > - (stop #~(lambda _ > > > - ;; Make sure that Shepherd does not have a CWD that > > > - ;; is a mounted ZFS filesystem, which would prevent > > > - ;; unmounting. > > > - (chdir "/") > > > - (invoke #$zfs "unmount" "-a" "-f"))))) > > > - > - `(,zfs-scan > > - ,device-mapping-zvol/* > > > - ,@(if (zfs-configuration-auto-mount? conf) > > > - `(,zfs-auto-mount) > > > - '())))) > > > - > > +(define (zfs-user-processes conf) > > - "Provides the last Shepherd service that 'user-processes' has to > - wait for. > - > - If not auto-mounting, then user-processes should only wait for > - the device scan." > - (if (zfs-configuration-auto-mount? conf) > - '(zfs-auto-mount) > > > - '(zfs-scan))) > > > - > > +(define (zfs-mcron-auto-snapshot-jobs conf) > > - "Creates a list of mcron jobs for auto-snapshotting, one for each > > - of the standard durations." > > - (let* ((user-auto-snapshot-keep (zfs-configuration-auto-snapshot-keep= conf)) > > - ;; assoc-ref has earlier entries overriding later ones. > > > - (auto-snapshot-keep (append user-auto-snapshot-keep > > > - %default-auto-snapshot-= keep)) > > > - (auto-snapshot? (zfs-configuration-auto-snapsho= t? conf)) > > > - (zfs-auto-snapshot-package (make-zfs-auto-snapshot-package= conf)) > > > - (zfs-auto-snapshot (file-append zfs-auto-snapshot-= package > > > - "/sbin/zfs-auto-sn= apshot"))) > > > - (map > > - (lambda (label) > > > - (let ((keep (assoc-ref auto-snapshot-keep label)) > > > - (sched (assoc-ref %auto-snapshot-mcron-schedule label))= ) > > > - #~(job '#$sched > > > - (lambda () > > > - (system* #$zfs-auto-snapshot > > > - "--quiet" > > > - "--syslog" > > > - #$(string-append "--label=3D" > > > - (symbol->string label)) > > > - #$(string-append "--keep=3D" > > > - (number->string keep)) > > > - "//"))))) > > > - (map first %auto-snapshot-mcron-schedule)))) > > > - > > +(define (zfs-mcron-auto-scrub-jobs conf) > > - "Creates a list of mcron jobs for auto-scrubbing." > - (let* ((zfs-package (make-zfs-package conf)) > - (zpool (file-append zfs-package "/sbin/zpool")) > > > - (auto-scrub (zfs-configuration-auto-scrub conf)) > > > - (sched (cond > > > - ((eq? auto-scrub 'weekly) "0 0 * * 7") > > > - ((eq? 
auto-scrub 'monthly) "0 0 1 * *") > > > - (else auto-scrub)))) > > > - (define code > - ;; We need to get access to (guix build utils) for the > > > - ;; invoke procedures. > > > - (with-imported-modules (source-module-closure '((guix build util= s))) > > > - #~(begin > > > - (use-modules (guix build utils) > > > - (ice-9 ports)) > > > - ;; The ZFS pools in the system. > > > - (define pools > > > - (invoke/quiet #$zpool "list" "-o" "name" "-H")) > > > - ;; Only scrub if there are actual ZFS pools, as the > > > - ;; zpool scrub command errors out if given an empty > > > - ;; argument list. > > > - (unless (null? pools) > > > - ;; zpool scrub only initiates the scrub and otherwise > > > - ;; prints nothing. Results are always seen on the > > > - ;; zpool status command. > > > - (apply invoke #$zpool "scrub" pools))))) > > > - (list > - #~(job '#$sched > > > - #$(program-file "mcron-zfs-scrub.scm" code))))) > > > - > > +(define (zfs-mcron-jobs conf) > > - "Creates a list of mcron jobs for ZFS management." > - (append (zfs-mcron-auto-snapshot-jobs conf) > - (if (zfs-configuration-auto-scrub conf) > > > - (zfs-mcron-auto-scrub-jobs conf) > > > - '()))) > > > - > > +(define zfs-service-type > > - (service-type > - (name 'zfs) > - (extensions > - (list ;; Install OpenZFS kernel module into kernel profile. > > > - (service-extension linux-loadable-module-service-type > > > - zfs-loadable-modules) > > > - ;; And load it. > > > - (service-extension kernel-module-loader-service-type > > > - (const '("zfs"))) > > > - ;; Make sure ZFS pools and datasets are mounted at > > > - ;; boot. > > > - (service-extension shepherd-root-service-type > > > - zfs-shepherd-services) > > > - ;; Make sure user-processes don't start until > > > - ;; after ZFS does. > > > - (service-extension user-processes-service-type > > > - zfs-user-processes) > > > - ;; Install automated scrubbing and snapshotting. > > > - (service-extension mcron-service-type > > > - zfs-mcron-jobs) > > > - > - ;; Install ZFS management commands in the system > > > - ;; profile. > > > - (service-extension profile-service-type > > > - (compose list make-zfs-package)) > > > - ;; Install ZFS udev rules. > > > - (service-extension udev-service-type > > > - (compose list make-zfs-package)))) > > > - (description "Installs ZFS, an advanced filesystem and volume manager= ."))) > > base-commit: a939011b58c65f4192a10cde9e925e85702bacf4 > -- > 2.33.0 >
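
Reviewer note on the ``ZFS Event Daemon'' item in the documentation: until
proper ZED support is designed, the ``create your own service'' approach the
patch mentions could look roughly like the sketch below.  This is only a
sketch and makes assumptions: @code{my-zfs-package} stands for the OpenZFS
package built for your kernel (the patch builds it internally in
@code{make-zfs-package} but does not export it), the daemon is assumed to be
at @file{sbin/zed} with @code{-F} keeping it in the foreground, and the ZED
configuration directory is left entirely to the administrator.

    (simple-service 'zfs-zed shepherd-root-service-type
      (list (shepherd-service
              (provision '(zfs-zed))
              ;; Start only once ZFS pools have been imported and mounted.
              (requirement '(user-processes))
              (documentation "Run the ZFS Event Daemon.")
              (start #~(make-forkexec-constructor
                        (list #$(file-append my-zfs-package "/sbin/zed") "-F")))
              (stop #~(make-kill-destructor)))))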