Subject: [bug#45692] [PATCH v4 3/3] gnu: Add ZFS service type.
From: raid5atemyhomework via Guix-patches
To: 45692@debbugs.gnu.org
Date: Sun, 25 Jul 2021 14:31:45 +0000
Hello nonexistent reviewer,

I updated this patch to the latest `origin/master` because the previous version has bitrotted and will no longer apply cleanly with `git am`.

There are no changes relative to v3; I just rebased it so that the patch applies cleanly.

Testing has been very minimal: I created a VM with the added service, then ran it in a VM session that included additional devices (qcow2 files) from a previous VM run that had formatted those as devices of a ZFS pool, and confirmed that the new VM could read and manage that pool.

Is there any chance this will get reviewed, or should I just not bother, move on with my life, and forget about Guix?

At this point I would also like to point out what I think is a failing of how the Guix distribution is managed.

Guix does not assign particular committers to particular tasks or areas; the intent is that any committer can review and commit any patch.  However, this fails as the Guix project has grown.  No single committer wants to review ***all*** the available patches, and this is understandable: the project has grown significantly, includes a wide variety of people with diverging interests, and as an open-source project cannot really require that particular people look at particular things.

Unfortunately, I do not know *who* the committers are and, more importantly, *which* committer might be interested in this ZFS service type.  Because "any committer can review and commit any patch!!" there is no particular list or table I can refer to, to figure out who might be useful to ping for this patchset.  At the same time, because no committer is interested in *all* patches, I cannot just ping some particular person and expect to get onto some list somewhere that tells me "you will be the 48486th patch reviewed by whoever is interested in all patches".
It is very discouraging to work on this code for a few weeks, release it, not get any reviews, and end up in a situation where I have to make annoying small changes just to keep the patch from bitrotting.

I understand that there are few possible reviewers, but if potential new contributors get discouraged from contributing because they do not see their code actually getting in, then you really cannot expect the number of reviewers to increase, either.

I think it would be nice if I could at least be told some number of people who *might* be interested in this patch, or else just throw in the towel and not bother.

Thanks
raid5atemyhomework

>From 5351aa7c1c14d4fea032adad895c436e02d1f261 Mon Sep 17 00:00:00 2001
From: raid5atemyhomework
Date: Mon, 22 Mar 2021 16:26:28 +0800
Subject: [PATCH] gnu: Add ZFS service type.

* gnu/services/file-systems.scm: New file.
* gnu/local.mk (GNU_SYSTEM_MODULES): Add it.
* gnu/services/base.scm: Export dependency->shepherd-service-name.
* doc/guix.texi (ZFS File System): New subsection.
---
 doc/guix.texi                 | 351 ++++++++++++++++++++++++++++++++++
 gnu/local.mk                  |   2 +
 gnu/services/base.scm         |   4 +-
 gnu/services/file-systems.scm | 295 ++++++++++++++++++++++++++++
 4 files changed, 651 insertions(+), 1 deletion(-)
 create mode 100644 gnu/services/file-systems.scm

diff --git a/doc/guix.texi b/doc/guix.texi
index b3c16e6507..e21c47d7ca 100644
--- a/doc/guix.texi
+++ b/doc/guix.texi
@@ -94,6 +94,7 @@ Copyright @copyright{} 2021 Xinglu Chen@*
 Copyright @copyright{} 2021 Raghav Gururajan@*
 Copyright @copyright{} 2021 Domagoj Stolfa@*
 Copyright @copyright{} 2021 Hui Lu@*
+Copyright @copyright{} 2021 raid5atemyhomework@*
 
 Permission is granted to copy, distribute and/or modify this document
 under the terms of the GNU Free Documentation License, Version 1.3 or
@@ -14265,6 +14266,356 @@ a file system declaration such as:
 compress-force=zstd,space_cache=v2"))
 @end lisp
 
+
+@node ZFS File System
+@subsection ZFS File System
+
+Support for ZFS file systems is provided on Guix by the OpenZFS project.
+OpenZFS currently only supports Linux-Libre and is not available on the
+Hurd.
+
+OpenZFS is free software; unfortunately its license is incompatible with
+the GNU General Public License (GPL), the license of the Linux kernel,
+which means they cannot be distributed together.  However, as a user,
+you can choose to build ZFS and use it together with Linux; you can
+even rely on Guix to automate this task.  See
+@uref{https://www.fsf.org/licensing/zfs-and-linux, this analysis by
+the Free Software Foundation} for more information.
+
+As a large and complex kernel module, OpenZFS has to be compiled for a
+specific version of Linux-Libre.  At times, the latest OpenZFS package
+available in Guix is not compatible with the latest Linux-Libre version.
+Thus, directly installing the @code{zfs} package can fail.
+
+Instead, you are recommended to select a specific older long-term-support
+Linux-Libre kernel.
+Do not use @code{linux-libre-lts}, as even the
+latest long-term-support kernel may be too new for @code{zfs}.  Instead,
+explicitly select a specific older version, such as @code{linux-libre-5.10},
+and upgrade it manually later, as new long-term-support kernels that you
+have confirmed are compatible with the latest available OpenZFS version
+on Guix become available.
+
+For example, you can modify your system configuration file to use a
+specific Linux-Libre version and add the @code{zfs-service-type} service.
+
+@lisp
+(use-modules (gnu))
+(use-package-modules
+  #;@dots{}
+  linux)
+(use-service-modules
+  #;@dots{}
+  file-systems)
+
+(define my-kernel linux-libre-5.10)
+
+(operating-system
+  (kernel my-kernel)
+  #;@dots{}
+  (services
+   (cons* (service zfs-service-type
+                   (zfs-configuration
+                    (kernel my-kernel)))
+          #;@dots{}
+          %desktop-services))
+  #;@dots{})
+@end lisp
+
+@defvr {Scheme Variable} zfs-service-type
+This is the type for a service that adds ZFS support to your operating
+system.  The service is configured using a @code{zfs-configuration}
+record.
+
+Here is an example use:
+
+@lisp
+(service zfs-service-type
+         (zfs-configuration
+          (kernel linux-libre-5.4)))
+@end lisp
+@end defvr
+
+@deftp {Data Type} zfs-configuration
+This data type represents the configuration of the ZFS support in Guix
+System.  Its fields are:
+
+@table @asis
+@item @code{kernel}
+The package of the Linux-Libre kernel to compile OpenZFS for.  This field
+is always required.  It @emph{must} be the same kernel you use in your
+@code{operating-system} form.
+
+@item @code{base-zfs} (default: @code{zfs})
+The OpenZFS package that will be compiled for the given Linux-Libre kernel.
+
+@item @code{base-zfs-auto-snapshot} (default: @code{zfs-auto-snapshot})
+The @code{zfs-auto-snapshot} package to use.  It will be modified to
+specifically use the OpenZFS compiled for your kernel.
+
+@item @code{dependencies} (default: @code{'()})
+A list of @code{<mapped-device>} or @code{<file-system>} records that must
+be mounted or opened before OpenZFS scans for pools to import.  For example,
+if you have set up LUKS containers as leaf VDEVs in a pool, you have to
+include their corresponding @code{<mapped-device>} records so that OpenZFS
+can import the pool correctly at bootup.
+
+@item @code{auto-mount?} (default: @code{#t})
+Whether to automatically mount datasets with the ZFS @code{mountpoint}
+property at startup.  This is the behavior that ZFS users usually expect.
+You might set this to @code{#f} for an operating system intended as a
+``rescue'' system that is meant to help debug problems with the disks
+rather than actually work in production.
+
+@item @code{auto-scrub} (default: @code{'weekly})
+Specifies how often to scrub all pools.  Can be the symbols @code{'weekly}
+or @code{'monthly}, or a schedule specification understood by mcron
+(@pxref{mcron, mcron job specifications,, mcron, GNU@tie{}mcron}), such as
+@code{"0 3 * * 6"} for ``3AM every Saturday''.
+It can also be @code{#f} to disable auto-scrubbing (@strong{not recommended}).
+
+The general guideline is to scrub weekly when using consumer-quality drives,
+and to scrub monthly when using enterprise-quality drives.
+
+@code{'weekly} scrubs are done at midnight on Sunday, while @code{'monthly}
+scrubs are done at midnight on the first day of each month.
+
+@item @code{auto-snapshot?} (default: @code{#t})
+Specifies whether to auto-snapshot by default.  If @code{#t}, then snapshots
+are automatically created except for ZFS datasets with the
+@code{com.sun:auto-snapshot} ZFS vendor property set to @code{false}.
+
+If @code{#f}, snapshots will not be automatically created, unless the ZFS
+dataset has the @code{com.sun:auto-snapshot} ZFS vendor property set to
+@code{true}.
+
+@item @code{auto-snapshot-keep} (default: @code{'()})
+Specifies an association list of symbol-number pairs, indicating the number
+of automatically-created snapshots to retain for each frequency type.
+
+If not specified via this field, by default there are 4 @code{frequent}, 24
+@code{hourly}, 31 @code{daily}, 8 @code{weekly}, and 12 @code{monthly}
+snapshots.
+
+For example:
+
+@lisp
+(zfs-configuration
+ (kernel my-kernel)
+ (auto-snapshot-keep
+  '((frequent . 8)
+    (hourly . 12))))
+@end lisp
+
+The above will keep 8 @code{frequent} snapshots and 12 @code{hourly}
+snapshots.  @code{daily}, @code{weekly}, and @code{monthly} snapshots will
+keep their defaults (31 @code{daily}, 8 @code{weekly}, and 12
+@code{monthly}).
+
+@end table
+@end deftp
+
+@subsubsection ZFS Auto-Snapshot
+
+The ZFS service on Guix System supports auto-snapshots as implemented in the
+Solaris operating system.
+
+@code{frequent} (every 15 minutes), @code{hourly}, @code{daily}, @code{weekly},
+and @code{monthly} snapshots are created automatically for ZFS datasets that
+have auto-snapshot enabled.  They will be named, for example,
+@code{zfs-auto-snap_frequent-2021-03-22-1415}.  You can continue to use
+manually-created snapshots as long as they do not conflict with the naming
+convention used by auto-snapshot.  You can also safely manually destroy
+automatically-created snapshots, for example to free up space.
+
+The @code{com.sun:auto-snapshot} ZFS property controls auto-snapshot on a
+per-dataset level.  Sub-datasets will inherit this property from their parent
+dataset, but can have their own property.
+
+You @emph{must} set this property to @code{true} or @code{false} exactly,
+otherwise it will be treated as if the property is unset.
+
+For example:
+
+@example
+# zfs list -o name
+NAME
+tank
+tank/important-data
+tank/tmp
+# zfs set com.sun:auto-snapshot=true tank
+# zfs set com.sun:auto-snapshot=false tank/tmp
+@end example
+
+The above will set @code{tank} and @code{tank/important-data} to be
+auto-snapshotted, while @code{tank/tmp} will not be auto-snapshotted.
+
+If the @code{com.sun:auto-snapshot} property is not set for a dataset
+(the default when pools and datasets are created), then whether
+auto-snapshot is done or not will depend on the @code{auto-snapshot?}
+field of the @code{zfs-configuration} record.
+
+There are also @code{com.sun:auto-snapshot:frequent},
+@code{com.sun:auto-snapshot:hourly}, @code{com.sun:auto-snapshot:daily},
+@code{com.sun:auto-snapshot:weekly}, and @code{com.sun:auto-snapshot:monthly}
+properties that give finer-grained control of whether to auto-snapshot a
+dataset at a particular schedule.
+
+The number of snapshots kept for all datasets can be overridden via the
+@code{auto-snapshot-keep} field of the @code{zfs-configuration} record.
+There is currently no support for keeping different numbers of snapshots
+for different datasets.
+
+@subsubsection ZVOLs
+
+ZFS supports ZVOLs, block devices that ZFS exposes to the operating
+system in the @code{/dev/zvol/} directory.  A ZVOL has the same
+resilience and self-healing properties as other datasets on your ZFS pool.
+ZVOLs can also be snapshotted (and will be included in auto-snapshotting
+if enabled), which snapshots the state of the block device, effectively
+snapshotting the hosted file system.
+
+You can put any file system inside a ZVOL.  However, in order to mount this
+file system at system start, you need to add @code{%zfs-zvol-dependency} as a
+dependency of each file system inside a ZVOL.
+
+@defvr {Scheme Variable} %zfs-zvol-dependency
+An artificial @code{<mapped-device>} which tells the file-system mounting
+service to wait for ZFS to provide ZVOLs before mounting the
+@code{<file-system>} dependent on it.
+@end defvr
+
+For example, suppose you create a ZVOL and put an ext4 file system
+inside it:
+
+@example
+# zfs create -V 100G tank/ext4-on-zfs
+# mkfs.ext4 /dev/zvol/tank/ext4-on-zfs
+# mkdir /ext4-on-zfs
+# mount /dev/zvol/tank/ext4-on-zfs /ext4-on-zfs
+@end example
+
+You can then set this up to be mounted at boot by adding this to the
+@code{file-systems} field of your @code{operating-system} record:
+
+@lisp
+(file-system
+ (device "/dev/zvol/tank/ext4-on-zfs")
+ (mount-point "/ext4-on-zfs")
+ (type "ext4")
+ (dependencies (list %zfs-zvol-dependency)))
+@end lisp
+
+You @emph{must not} add @code{%zfs-zvol-dependency} to your
+@code{operating-system}'s @code{mapped-devices} field, and you @emph{must
+not} add it (or any @code{<file-system>}s dependent on it) to the
+@code{dependencies} field of @code{zfs-configuration}.  Finally, you
+@emph{must not} use @code{%zfs-zvol-dependency} unless you actually
+instantiate @code{zfs-service-type} on your system.
+
+@subsubsection Unsupported Features
+
+Some common features and uses of ZFS are currently not supported, or not
+fully supported, on Guix.
+
+@enumerate
+@item
+Shepherd-managed daemons that are configured to read from or write to ZFS
+mountpoints need to include @code{user-processes} in their @code{requirement}
+field.  This is the earliest point at which ZFS file systems are assured of
+being mounted.
+
+Generally, most daemons will, directly or indirectly, require
+@code{networking}, or @code{user-processes}, or both.  Most implementations
+of @code{networking} will also require @code{user-processes}, so daemons that
+require only @code{networking} will also generally start up after
+@code{user-processes}.  A notable exception, however, is
+@code{static-networking-service-type}.  You will need to explicitly add
+@code{user-processes} as a @code{requirement} of your @code{static-networking}
+record.
+
+@item
+@code{mountpoint=legacy} ZFS file systems.
+The handlers for the Guix mounting
+system have not yet been modified to support ZFS, and will expect @code{/dev}
+paths in the @code{<file-system>}'s @code{device} field, but ZFS file systems
+are referred to via non-path @code{pool/file/system} names.  Such file systems
+also need to be mounted @emph{after} OpenZFS has scanned for pools.
+
+You can still manually mount these file systems after system boot; the only
+thing that is unsupported is mounting them automatically at system boot by
+specifying them in @code{<file-system>} records of your
+@code{operating-system}.
+
+@item
+@code{/home} on ZFS.  Guix will create home directories for users, but this
+process currently cannot be scheduled after ZFS file systems are mounted.
+Thus, the ZFS file system might be mounted @emph{after} Guix has created
+home directories at boot, at which point OpenZFS will refuse to mount since
+the mountpoint is not empty.  However, you @emph{can} create an ext4, xfs,
+btrfs, or other supported file system inside a ZVOL, have that depend on
+@code{%zfs-zvol-dependency}, and set it to mount on the @code{/home}
+directory; it will be scheduled to mount before the @code{user-homes}
+process.
+
+Similarly, other locations like @code{/var}, @code{/gnu/store}, and so
+on cannot be reliably put in a ZFS file system, though it may be possible
+to create them as other file systems inside ZVOL containers.
+
+@item
+@code{/} and @code{/boot} on ZFS.  These require Guix to expose more of
+the @code{initrd} very early boot process to services.  They also require
+Guix to have the ability to explicitly load modules while still in the
+@code{initrd} (currently kernel modules loaded by
+@code{kernel-module-loader-service-type} are loaded after @code{/} is
+mounted).
+Further, since one of ZFS's main advantages is that it can
+continue working despite the loss of one or more devices, it makes sense
+to also support installing the bootloader on all devices of the pool that
+contains @code{/} and @code{/boot}; after all, if ZFS can survive the
+loss of one device, the bootloader should also be able to survive the
+loss of one device.
+
+@item
+ZVOL swap devices.  Mapped swap devices need to be listed in
+@code{mapped-devices} to ensure they are opened before the system attempts
+to use them, but you cannot currently add @code{%zfs-zvol-dependency} to
+@code{mapped-devices}.
+
+This will also require significant amounts of testing, as various kernel
+build options and patches may affect how swapping works, and these are
+possibly different on Guix System compared to other distributions where
+this feature is known to work.
+
+@item
+ZFS Event Daemon.  Support for this has not been written yet; patches are
+welcome.  The main issue is how to design this in a Guix style while
+supporting legacy shell-script styles as well.  In particular, OpenZFS
+itself comes with a number of shell scripts intended for the ZFS Event
+Daemon, and we need to figure out how the user can choose to use or not
+use the provided scripts (and configure any settings they have) or
+override them with their own custom code (which could be shell scripts
+they have written and trusted from previous ZFS installations).
+
+As-is, you can create your own service that activates the ZFS Event Daemon
+by creating the @file{/etc/zfs/zed} directory and filling it appropriately,
+then launching @code{zed}.
+
+@item
+@file{/etc/zfs/zpool.cache}.  Currently the ZFS support on Guix always
+forces scanning of all devices at bootup to look for ZFS pools.  For
+systems with dozens or hundreds of storage devices, this can lead to slow
+bootup.
+One issue
+is that tools should really not write to @code{/etc}, which is supposed to
+be for configuration; possibly the cache could be moved to @code{/var}
+instead.  Another issue is that if Guix ever supports @code{/} on ZFS, we
+would need to somehow keep the @code{zpool.cache} file inside the
+@code{initrd} up-to-date with what is in the @code{/} mount point.
+
+@item
+@code{zfs share}.  This will require some (unknown) amount of work to
+integrate into the Samba and NFS services of Guix.  You @emph{can} manually
+set up Samba and NFS to share any mounted ZFS datasets by setting up their
+configurations properly; it just can't be done for you by @code{zfs share}
+and the @code{sharesmb} and @code{sharenfs} properties.
+@end enumerate
+
+Hopefully, support for the above only requires code to be written, so users
+are encouraged to hack on Guix to implement the above features.
+
 @node Mapped Devices
 @section Mapped Devices
 
diff --git a/gnu/local.mk b/gnu/local.mk
index b944c671af..a2ff871277 100644
--- a/gnu/local.mk
+++ b/gnu/local.mk
@@ -43,6 +43,7 @@
 # Copyright © 2021 Philip McGrath
 # Copyright © 2021 Arun Isaac
 # Copyright © 2021 Sharlatan Hellseher
+# Copyright © 2021 raid5atemyhomework
 #
 # This file is part of GNU Guix.
 #
@@ -618,6 +619,7 @@ GNU_SYSTEM_MODULES =				\
   %D%/services/docker.scm			\
   %D%/services/authentication.scm		\
   %D%/services/file-sharing.scm			\
+  %D%/services/file-systems.scm			\
   %D%/services/games.scm			\
   %D%/services/ganeti.scm			\
   %D%/services/getmail.scm				\
diff --git a/gnu/services/base.scm b/gnu/services/base.scm
index ab3e441a7b..bcca24f93a 100644
--- a/gnu/services/base.scm
+++ b/gnu/services/base.scm
@@ -185,7 +185,9 @@
             references-file
 
-            %base-services))
+            %base-services
+
+            dependency->shepherd-service-name))
 
 ;;; Commentary:
 ;;;
diff --git a/gnu/services/file-systems.scm b/gnu/services/file-systems.scm
new file mode 100644
index 0000000000..0b1aae38ac
--- /dev/null
+++ b/gnu/services/file-systems.scm
@@ -0,0 +1,295 @@
+;;; GNU Guix --- Functional package management for GNU
+;;; Copyright © 2021 raid5atemyhomework
+;;;
+;;; This file is part of GNU Guix.
+;;;
+;;; GNU Guix is free software; you can redistribute it and/or modify it
+;;; under the terms of the GNU General Public License as published by
+;;; the Free Software Foundation; either version 3 of the License, or (at
+;;; your option) any later version.
+;;;
+;;; GNU Guix is distributed in the hope that it will be useful, but
+;;; WITHOUT ANY WARRANTY; without even the implied warranty of
+;;; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+;;; GNU General Public License for more details.
+;;;
+;;; You should have received a copy of the GNU General Public License
+;;; along with GNU Guix.  If not, see <http://www.gnu.org/licenses/>.
+
+(define-module (gnu services file-systems)
+  #:use-module (gnu packages file-systems)
+  #:use-module (gnu services)
+  #:use-module (gnu services base)
+  #:use-module (gnu services linux)
+  #:use-module (gnu services mcron)
+  #:use-module (gnu services shepherd)
+  #:use-module (gnu system mapped-devices)
+  #:use-module (guix gexp)
+  #:use-module (guix packages)
+  #:use-module (guix records)
+  #:export (zfs-service-type
+
+            zfs-configuration
+            zfs-configuration?
+            zfs-configuration-kernel
+            zfs-configuration-base-zfs
+            zfs-configuration-base-zfs-auto-snapshot
+            zfs-configuration-dependencies
+            zfs-configuration-auto-mount?
+            zfs-configuration-auto-scrub
+            zfs-configuration-auto-snapshot?
+            zfs-configuration-auto-snapshot-keep
+
+            %zfs-zvol-dependency))
+
+(define-record-type* <zfs-configuration>
+  zfs-configuration
+  make-zfs-configuration
+  zfs-configuration?
+
+  ; The linux-libre kernel you want to compile the base-zfs module for.
+  (kernel zfs-configuration-kernel)
+
+  ; The OpenZFS package that will be modified to compile for the
+  ; given kernel.
+  (base-zfs zfs-configuration-base-zfs
+            (default zfs))
+
+  ; The zfs-auto-snapshot package that will be modified to compile
+  ; for the given kernel.
+  (base-zfs-auto-snapshot zfs-configuration-base-zfs-auto-snapshot
+                          (default zfs-auto-snapshot))
+
+  ; List of <mapped-device> or <file-system> objects that must be
+  ; opened/mounted before we import any ZFS pools.
+  (dependencies zfs-configuration-dependencies
+                (default '()))
+
+  ; #t if mountable datasets are to be mounted automatically,
+  ; #f if not mounting.  #t is the expected behavior on other
+  ; operating systems; #f is only supported for "rescue" operating
+  ; systems where the user wants lower-level control of when to mount.
+  (auto-mount? zfs-configuration-auto-mount?
+               (default #t))
+
+  ; 'weekly for weekly scrubbing, 'monthly for monthly scrubbing, an
+  ; mcron time specification that can be given to `job`, or #f to
+  ; disable.
+  (auto-scrub zfs-configuration-auto-scrub
+              (default 'weekly))
+
+  ; #t if auto-snapshot is the default (and `com.sun:auto-snapshot=false`
+  ; disables auto-snapshot per dataset), #f if no auto-snapshotting
+  ; is the default (and `com.sun:auto-snapshot=true` enables auto-snapshot
+  ; per dataset).
+  (auto-snapshot? zfs-configuration-auto-snapshot?
+                  (default #t))
+
+  ; Association list of symbol-number pairs to indicate the number
+  ; of automatic snapshots to keep for each of 'frequent, 'hourly,
+  ; 'daily, 'weekly, and 'monthly.
+  ; e.g. '((frequent . 8) (hourly . 12))
+  (auto-snapshot-keep zfs-configuration-auto-snapshot-keep
+                      (default '())))
+
+(define %default-auto-snapshot-keep
+  '((frequent . 4)
+    (hourly . 24)
+    (daily . 31)
+    (weekly . 8)
+    (monthly . 12)))
+
+(define %auto-snapshot-mcron-schedule
+  '((frequent . "0,15,30,45 * * * *")
+    (hourly . "0 * * * *")
+    (daily . "0 0 * * *")
+    (weekly . "0 0 * * 7")
+    (monthly . "0 0 1 * *")))
+
+;; A synthetic and unusable MAPPED-DEVICE intended for use when
+;; the user has created a mountable file system inside a ZFS
+;; zvol and wants it mounted inside the configuration.scm.
+(define %zfs-zvol-dependency
+  (mapped-device
+   (source '())
+   (targets '("zvol/*"))
+   (type #f)))
+
+(define (make-zfs-package conf)
+  (let ((kernel (zfs-configuration-kernel conf))
+        (base-zfs (zfs-configuration-base-zfs conf)))
+    (package
+      (inherit base-zfs)
+      (arguments (cons* #:linux kernel
+                        (package-arguments base-zfs))))))
+
+(define (make-zfs-auto-snapshot-package conf)
+  (let ((zfs (make-zfs-package conf))
+        (base-zfs-auto-snapshot (zfs-configuration-base-zfs-auto-snapshot conf)))
+    (package
+      (inherit base-zfs-auto-snapshot)
+      (inputs `(("zfs" ,zfs))))))
+
+(define (zfs-loadable-modules conf)
+  (list (list (make-zfs-package conf) "module")))
+
+(define (zfs-shepherd-services conf)
+  (let* ((zfs-package (make-zfs-package conf))
+         (zpool (file-append zfs-package "/sbin/zpool"))
+         (zfs (file-append zfs-package "/sbin/zfs"))
+         (zvol_wait (file-append zfs-package "/bin/zvol_wait"))
+         (scheme-modules `((srfi srfi-1)
+                           (srfi srfi-34)
+                           (srfi srfi-35)
+                           (rnrs io ports)
+                           ,@%default-modules)))
+    (define zfs-scan
+      (shepherd-service
+       (provision '(zfs-scan))
+       (requirement `(root-file-system
+                      kernel-module-loader
+                      udev
+                      ,@(map dependency->shepherd-service-name
+                             (zfs-configuration-dependencies conf))))
+       (documentation "Scans for and imports ZFS pools.")
+       (modules scheme-modules)
+       (start #~(lambda _
+                  (guard (c ((message-condition? c)
+                             (format (current-error-port)
+                                     "zfs: error importing pools: ~s~%"
+                                     (condition-message c))
+                             #f))
+                    ; TODO: optionally use a cachefile.
+                    (invoke #$zpool "import" "-a" "-N"))))
+       ;; Why not one-shot?  Because we don't really want to rescan
+       ;; this each time a requiring process is restarted, as scanning
+       ;; can take a long time and a lot of I/O.
+       (stop #~(const #f))))
+
+    (define device-mapping-zvol/*
+      (shepherd-service
+       (provision '(device-mapping-zvol/*))
+       (requirement '(zfs-scan))
+       (documentation "Waits for all ZFS ZVOLs to be opened.")
+       (modules scheme-modules)
+       (start #~(lambda _
+                  (guard (c ((message-condition? c)
+                             (format (current-error-port)
+                                     "zfs: error opening zvols: ~s~%"
+                                     (condition-message c))
+                             #f))
+                    (invoke #$zvol_wait))))
+       (stop #~(const #f))))
+
+    (define zfs-auto-mount
+      (shepherd-service
+       (provision '(zfs-auto-mount))
+       (requirement '(zfs-scan))
+       (documentation "Mounts all non-legacy mounted ZFS filesystems.")
+       (modules scheme-modules)
+       (start #~(lambda _
+                  (guard (c ((message-condition? c)
+                             (format (current-error-port)
+                                     "zfs: error mounting file systems: ~s~%"
+                                     (condition-message c))
+                             #f))
+                    ;; Output to current-error-port, otherwise the
+                    ;; user will not see any prompts for passwords
+                    ;; of encrypted datasets.
+                    ;; XXX Maybe better to explicitly open /dev/console?
+                    (with-output-to-port (current-error-port)
+                      (lambda ()
+                        (invoke #$zfs "mount" "-a" "-l"))))))
+       (stop #~(lambda _
+                 ;; Make sure that Shepherd does not have a CWD that
+                 ;; is a mounted ZFS filesystem, which would prevent
+                 ;; unmounting.
+                 (chdir "/")
+                 (invoke #$zfs "unmount" "-a" "-f")))))
+
+    `(,zfs-scan
+      ,device-mapping-zvol/*
+      ,@(if (zfs-configuration-auto-mount? conf)
+            `(,zfs-auto-mount)
+            '()))))
+
+(define (zfs-user-processes conf)
+  (if (zfs-configuration-auto-mount? conf)
+      '(zfs-auto-mount)
+      '(zfs-scan)))
+
+(define (zfs-mcron-auto-snapshot-jobs conf)
+  (let* ((user-auto-snapshot-keep (zfs-configuration-auto-snapshot-keep conf))
+         ;; assoc-ref has earlier entries overriding later ones.
+         (auto-snapshot-keep (append user-auto-snapshot-keep
+                                     %default-auto-snapshot-keep))
+         (auto-snapshot? (zfs-configuration-auto-snapshot? conf))
+         (zfs-auto-snapshot-package (make-zfs-auto-snapshot-package conf))
+         (zfs-auto-snapshot (file-append zfs-auto-snapshot-package
+                                         "/sbin/zfs-auto-snapshot")))
+    (map
+     (lambda (label)
+       (let ((keep (assoc-ref auto-snapshot-keep label))
+             (sched (assoc-ref %auto-snapshot-mcron-schedule label)))
+         #~(job '#$sched
+                (string-append #$zfs-auto-snapshot
+                               " --quiet --syslog "
+                               " --label=" #$(symbol->string label)
+                               " --keep=" #$(number->string keep)
+                               " //"))))
+     '(frequent hourly daily weekly monthly))))
+
+(define (zfs-mcron-auto-scrub-jobs conf)
+  (let* ((zfs-package (make-zfs-package conf))
+         (zpool (file-append zfs-package "/sbin/zpool"))
+         (auto-scrub (zfs-configuration-auto-scrub conf))
+         (sched (cond
+                 ((eq? auto-scrub 'weekly) "0 0 * * 7")
+                 ((eq? auto-scrub 'monthly) "0 0 1 * *")
+                 (else auto-scrub))))
+    (list
+     #~(job '#$sched
+            ;; Suppress errors: if there are no ZFS pools, the
+            ;; scrub will not be given any arguments, which makes
+            ;; it error out.
+            (string-append "(" #$zpool " scrub `" #$zpool " list -o name -H` "
+                           "> /dev/null 2>&1) "
+                           "|| exit 0")))))
+
+(define (zfs-mcron-jobs conf)
+  (append (zfs-mcron-auto-snapshot-jobs conf)
+          (if (zfs-configuration-auto-scrub conf)
+              (zfs-mcron-auto-scrub-jobs conf)
+              '())))
+
+(define zfs-service-type
+  (service-type
+   (name 'zfs)
+   (extensions
+    (list ;; Install OpenZFS kernel module into kernel profile.
+          (service-extension linux-loadable-module-service-type
+                             zfs-loadable-modules)
+          ;; And load it.
+          (service-extension kernel-module-loader-service-type
+                             (const '("zfs")))
+          ;; Make sure ZFS pools and datasets are mounted at
+          ;; boot.
+          (service-extension shepherd-root-service-type
+                             zfs-shepherd-services)
+          ;; Make sure user-processes don't start until
+          ;; after ZFS does.
+          (service-extension user-processes-service-type
+                             zfs-user-processes)
+          ;; Install automated scrubbing and snapshotting.
+          (service-extension mcron-service-type
+                             zfs-mcron-jobs)
+
+          ;; Install ZFS management commands in the system
+          ;; profile.
+          (service-extension profile-service-type
+                             (compose list make-zfs-package))
+          ;; Install ZFS udev rules.
+          (service-extension udev-service-type
+                             (compose list make-zfs-package))))
+   (description "Installs ZFS, an advanced filesystem and volume manager.")))
-- 
2.31.1
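[Editor's note: a reviewer sanity-check, not part of the patch.]  The patch documents auto-snapshot names like zfs-auto-snap_frequent-2021-03-22-1415, i.e. a fixed prefix, the schedule label, and a minute-resolution timestamp.  The pattern can be reproduced with plain shell; this is a hedged sketch only (the real zfs-auto-snapshot script composes the name internally), and GNU date is assumed:

```shell
# Reconstruct the documented snapshot name for a given label and UTC
# timestamp.  "frequent" is one of the five labels the service schedules.
label=frequent
stamp=$(date -u -d '2021-03-22 14:15 UTC' '+%Y-%m-%d-%H%M')
echo "zfs-auto-snap_${label}-${stamp}"
# → zfs-auto-snap_frequent-2021-03-22-1415
```

This matches the example name given in the patch's Texinfo text, which is why manually-created snapshots merely need to avoid that naming convention to coexist safely.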