Subject: [bug#45692] [PATCH v4 3/3] gnu: Add ZFS service type.
From: Xinglu Chen
To: raid5atemyhomework, "45692@debbugs.gnu.org" <45692@debbugs.gnu.org>
Date: Sat, 04 Sep 2021 23:19:01 +0200
Message-ID: <87k0jw3upm.fsf@yoctocell.xyz>
"Guix-patches" X-Migadu-Flow: FLOW_IN ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=yhetil.org; s=key1; t=1630790417; h=from:from:sender:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:mime-version:mime-version: content-type:content-type:resent-cc:resent-from:resent-sender: resent-message-id:in-reply-to:in-reply-to:references:references: list-id:list-help:list-unsubscribe:list-subscribe:list-post: dkim-signature; bh=XrC+VxnlYhfZsT8qIlp/qEmhGYix7jRDXez1+VZZNC0=; b=cjERh20yfPn6SwzGDxl+ZII3AiLRoPRcOEpglhBWnxQLmuBJZKZam1FDpdqirgAPU4W4uX i82sK2bWqsxNRXTvX+B/kdMLbQfhoTdRCMaZVfzKKceldS2xPt34ZxKHePnXhiiiE/x2Uf pEf87/tYkmZXl3XlzqNwOUSJocupUgEXD7oMXJfwFi26squO9RoY3V+7Hx73aDGXAtt2hk WY0Db5ZYd0Df5vB0OFUt8tjyqL7s/50SlT+zg6GBSv9y3oqZN8f9OIb7xC3UAjy/0ZsQoR 0skOhVgfb46KZod/grrOdSF7P60jCcGAkCfB6iSOKecUH2gbiU7sYNSqi6CFiw== ARC-Seal: i=1; s=key1; d=yhetil.org; t=1630790417; a=rsa-sha256; cv=none; b=Cl+w5lUP5fUsgfW9s+4TIftBgvr1OSnyOvO9YKln5WbIjJ/xRe6GQnxT24G3UIyP9UaL23 833kzacxeV+0sAlfkojwHEj9cCfEWf7n06nwVqTm6jA6JLItVNbTk1Da18AJ1h+fakaz0w HxKhjvxAPcBL78unffeiM3TJWuJX1g2+wYjes8LuMBWnFiC0o6qSp6/axykbSiR439ojNy Bbwf2xI6DJaTn+8jaLHeGILJrxGQstX+74iUsYHHGjHSfol/fulpdZboZdC7pAltbduUlT sQmAsYeOPVzvcT2R0DfkWsgYsG8DAIqIRtCNLi6cpv3L1Cq2JbgxTC2lRulQ0Q== ARC-Authentication-Results: i=1; aspmx1.migadu.com; dkim=fail ("headers rsa verify failed") header.d=yoctocell.xyz header.s=mail header.b=Q+XwyVGk; spf=pass (aspmx1.migadu.com: domain of guix-patches-bounces@gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=guix-patches-bounces@gnu.org X-Migadu-Spam-Score: -1.92 Authentication-Results: aspmx1.migadu.com; dkim=fail ("headers rsa verify failed") header.d=yoctocell.xyz header.s=mail header.b=Q+XwyVGk; dmarc=fail reason="SPF not aligned (relaxed)" header.from=yoctocell.xyz (policy=none); spf=pass (aspmx1.migadu.com: domain of guix-patches-bounces@gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=guix-patches-bounces@gnu.org X-Migadu-Queue-Id: 43AB71AA8C X-Spam-Score: -1.92 X-Migadu-Scanner: scn1.migadu.com X-TUID: 69edgCTQsU1P --=-=-= Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: quoted-printable On Sun, Jul 25 2021, raid5atemyhomework via Guix-patches via wrote: > Hello nonexistent reviewer, > > I updated this patch to latest `origin/master` because the previous > version has bitrotted and will not `git am` cleanly anymore. > > There are no changes relative to v3, just rebased it so that the patch ap= plies cleanly. > > Testing has been very minimal: I created a VM with the added service, > then ran it in a VM session that included additional devices (qcow2 > files) from a previous VM run that formatted those as devices of a ZFS > pool, and confirmed that the new VM could read it and manage that > pool. > > > Is there any chance this will get reviewed or should I just not bother > and move on with my life and forget about Guix? > > > At this point as well, I would like to point out what I think is a > failing of how the Guix distribution is managed. > > Guix does not assign any particular committers to any particular tasks or= areas. > The intent is that any committer can review and commit any particular pat= ch. > > However, this fails as the Guix project has grown. 
> No single committer wants to review ***all*** the available patches,
> and this is understandable: the Guix project has grown significantly
> and includes a wide variety of people with diverging interests, and as
> an open-source project you cannot really require that particular
> people look at particular things.  Unfortunately, I do not know *who*
> the committers are, and more importantly, *which* committer might be
> interested in this ZFS service type.  Because "any committer can
> review and commit any patch!!" there is no particular list or table I
> can refer to in order to figure out who might be useful to ping for
> this patchset.
>
> At the same time, because no committer is interested in *all* patches,
> I cannot just ping some particular person and expect to somehow get on
> some list somewhere that tells me "you will be the 48486th patch that
> will be reviewed by whoever is interested in all patches".
>
> It is very discouraging to work on this code for a few weeks, release
> it, not get any reviews, and end up in a situation where I have to
> make annoying small changes just to keep the patch from bitrotting.
>
> I understand that there are few possible reviewers, but if potential
> new contributors get discouraged from contributing because they do not
> see their code actually getting in, then you really cannot expect the
> number of reviewers to increase, either.
>
> I think it would be nice if I could at least be told some number of
> people who *might* be interested in this patch, or just throw in the
> towel and not bother.

You might want to bring up the topic of subsystem maintainers on the
guix-devel mailing list to get some more attention.

I am just some random ZFS user, so maybe take my comments with a pinch
of salt.  I haven’t followed the thread that closely, so apologies if
some of my questions have already been answered.

> Thanks
> raid5atemyhomework
>
>
> From 5351aa7c1c14d4fea032adad895c436e02d1f261 Mon Sep 17 00:00:00 2001
> From: raid5atemyhomework
> Date: Mon, 22 Mar 2021 16:26:28 +0800
> Subject: [PATCH] gnu: Add ZFS service type.
>
> * gnu/services/file-systems.scm: New file.
> * gnu/local.mk (GNU_SYSTEM_MODULES): Add it.
> * gnu/services/base.scm: Export dependency->shepherd-service-name.
> * doc/guix.texi (ZFS File System): New subsection.
> ---
>  doc/guix.texi                 | 351 ++++++++++++++++++++++++++++++++++
>  gnu/local.mk                  |   2 +
>  gnu/services/base.scm         |   4 +-
>  gnu/services/file-systems.scm | 295 ++++++++++++++++++++++++++++
>  4 files changed, 651 insertions(+), 1 deletion(-)
>  create mode 100644 gnu/services/file-systems.scm
>
> diff --git a/doc/guix.texi b/doc/guix.texi
> index b3c16e6507..e21c47d7ca 100644
> --- a/doc/guix.texi
> +++ b/doc/guix.texi
> @@ -94,6 +94,7 @@ Copyright @copyright{} 2021 Xinglu Chen@*
>  Copyright @copyright{} 2021 Raghav Gururajan@*
>  Copyright @copyright{} 2021 Domagoj Stolfa@*
>  Copyright @copyright{} 2021 Hui Lu@*
> +Copyright @copyright{} 2021 raid5atemyhomework@*
>
>  Permission is granted to copy, distribute and/or modify this document
>  under the terms of the GNU Free Documentation License, Version 1.3 or
> @@ -14265,6 +14266,356 @@ a file system declaration such as:
>  compress-force=zstd,space_cache=v2"))
>  @end lisp
>
> +
> +@node ZFS File System
> +@subsection ZFS File System
> +
> +Support for ZFS file systems is provided on Guix by the OpenZFS project.
> +OpenZFS currently only supports Linux-Libre and is not available on the
> +Hurd.
> +
> +OpenZFS is free software; unfortunately its license is incompatible with
> +the GNU General Public License (GPL), the license of the Linux kernel,
> +which means they cannot be distributed together.  However, as a user,
> +you can choose to build ZFS and use it together with Linux; you can
> +even rely on Guix to automate this task.  See
> +@uref{https://www.fsf.org/licensing/zfs-and-linux, this analysis by
> +the Free Software Foundation} for more information.
> +
> +As a large and complex kernel module, OpenZFS has to be compiled for a
> +specific version of Linux-Libre.  At times, the latest OpenZFS package
> +available in Guix is not compatible with the latest Linux-Libre version.
> +Thus, directly installing the @code{zfs} package can fail.
> +
> +Instead, you are advised to select a specific older long-term-support
> +Linux-Libre kernel.  Do not use @code{linux-libre-lts}, as even the
> +latest long-term-support kernel may be too new for @code{zfs}.  Instead,
> +explicitly select a specific older version, such as @code{linux-libre-5.10},
> +and manually upgrade it later, once you have confirmed that a newer
> +long-term-support kernel is compatible with the latest OpenZFS version
> +available on Guix.
> +
> +For example, you can pin your system configuration file to a specific
> +Linux-Libre version and add the @code{zfs-service-type} service:
> +
> +@lisp
> +(use-modules (gnu))
> +(use-package-modules
> +  #;@dots{}
> +  linux)
> +(use-service-modules
> +  #;@dots{}
> +  file-systems)
> +
> +(define my-kernel linux-libre-5.10)
> +
> +(operating-system
> +  (kernel my-kernel)
> +  #;@dots{}
> +  (services
> +   (cons* (service zfs-service-type
> +                   (zfs-configuration
> +                     (kernel my-kernel)))
> +          #;@dots{}
> +          %desktop-services))
> +  #;@dots{})
> +@end lisp
> +
> +@defvr {Scheme Variable} zfs-service-type
> +This is the type of the service that adds ZFS support to your operating
> +system.  The service is configured using a @code{zfs-configuration}
> +record.
> +
> +Here is an example use:
> +
> +@lisp
> +(service zfs-service-type
> +  (zfs-configuration
> +    (kernel linux-libre-5.4)))
> +@end lisp
> +@end defvr
> +
> +@deftp {Data Type} zfs-configuration
> +This data type represents the configuration of the ZFS support in Guix
> +System.  Its fields are:
> +
> +@table @asis
> +@item @code{kernel}
> +The package of the Linux-Libre kernel to compile OpenZFS for.  This field
> +is always required.  It @emph{must} be the same kernel you use in your
> +@code{operating-system} form.
> +
> +@item @code{base-zfs} (default: @code{zfs})
> +The OpenZFS package that will be compiled for the given Linux-Libre kernel.
> +
> +@item @code{base-zfs-auto-snapshot} (default: @code{zfs-auto-snapshot})
> +The @code{zfs-auto-snapshot} package to use.  It will be modified to
> +specifically use the OpenZFS compiled for your kernel.
> +
> +@item @code{dependencies} (default: @code{'()})
> +A list of @code{<mapped-device>} or @code{<file-system>} records that must
> +be mounted or opened before OpenZFS scans for pools to import.  For example,
> +if you have set up LUKS containers as leaf VDEVs in a pool, you have to
> +include their corresponding @code{<mapped-device>} records so that OpenZFS
> +can import the pool correctly at bootup.
> +
> +@item @code{auto-mount?} (default: @code{#t})
> +Whether to mount datasets with the ZFS @code{mountpoint} property
> +automatically at startup.  This is the behavior that ZFS users usually
> +expect.
> +You might set this to @code{#f} for an operating system intended as a
> +``rescue'' system that is meant to help debug problems with the disks
> +rather than actually work in production.
> +
> +@item @code{auto-scrub} (default: @code{'weekly})
> +Specifies how often to scrub all pools.  Can be the symbols @code{'weekly}
> +or @code{'monthly}, or a schedule specification understood by mcron
> +(@ref{mcron, mcron job specifications,, mcron, GNU@tie{}mcron}), such as
> +@code{"0 3 * * 6"} for ``3AM every Saturday''.
> +It can also be @code{#f} to disable auto-scrubbing (@strong{not recommended}).
> +
> +The general guideline is to scrub weekly when using consumer-quality
> +drives, and to scrub monthly when using enterprise-quality drives.
> +
> +@code{'weekly} scrubs are done at midnight on Sunday, while @code{'monthly}
> +scrubs are done at midnight on the first day of each month.
> +
> +@item @code{auto-snapshot?} (default: @code{#t})
> +Specifies whether to auto-snapshot by default.  If @code{#t}, then snapshots
> +are automatically created except for ZFS datasets with the
> +@code{com.sun:auto-snapshot} ZFS vendor property set to @code{false}.
> +
> +If @code{#f}, snapshots will not be automatically created, unless the ZFS
> +dataset has the @code{com.sun:auto-snapshot} ZFS vendor property set to
> +@code{true}.
> +
> +@item @code{auto-snapshot-keep} (default: @code{'()})
> +Specifies an association list of symbol-number pairs, indicating the number
> +of automatically-created snapshots to retain for each frequency type.
> +
> +If not specified via this field, by default 4 @code{frequent}, 24
> +@code{hourly}, 31 @code{daily}, 8 @code{weekly}, and 12 @code{monthly}
> +snapshots are kept.
> +
> +For example:
> +
> +@lisp
> +(zfs-configuration
> +  (kernel my-kernel)
> +  (auto-snapshot-keep
> +    '((frequent . 8)
> +      (hourly . 12))))
> +@end lisp
> +
> +The above will keep 8 @code{frequent} snapshots and 12 @code{hourly}
> +snapshots.  @code{daily}, @code{weekly}, and @code{monthly} snapshots will
> +keep their defaults (31 @code{daily}, 8 @code{weekly}, and 12
> +@code{monthly}).
> +
> +@end table
> +@end deftp

IIUC, there is no way to specify ZFS pools in the ‘file-systems’ field
of an <operating-system> record.  Does this mean that ZFS as the root
file system is not supported, and if so, is there a particular reason
for this?
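Also, the ‘dependencies’ field only gets a prose mention; maybe the manual
could show a complete example.  If I understand the field correctly, a pool
whose leaf VDEV lives in a LUKS container would be declared roughly like
this (untested sketch; the partition path and mapping name are made up, and
‘my-kernel’ is the binding from the earlier example):

  (use-modules (gnu system mapped-devices))

  ;; Hypothetical LUKS container holding one of the pool's leaf VDEVs.
  (define encrypted-vdev
    (mapped-device
     (source "/dev/sdb1")          ; made-up encrypted partition
     (targets '("zfs-vdev0"))      ; opened as /dev/mapper/zfs-vdev0
     (type luks-device-mapping)))

  (service zfs-service-type
           (zfs-configuration
            (kernel my-kernel)
            ;; Open the LUKS container before OpenZFS scans for pools.
            (dependencies (list encrypted-vdev))))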
> +
> +@subsubsection ZFS Auto-Snapshot
> +
> +The ZFS service on Guix System supports auto-snapshots as implemented in
> +the Solaris operating system.
> +
> +@code{frequent} (every 15 minutes), @code{hourly}, @code{daily},
> +@code{weekly}, and @code{monthly} snapshots are created automatically for
> +ZFS datasets that have auto-snapshot enabled.  They will be named, for
> +example, @code{zfs-auto-snap_frequent-2021-03-22-1415}.  You can continue
> +to use manually-created snapshots as long as they do not conflict with the
> +naming convention used by auto-snapshot.  You can also safely destroy
> +automatically-created snapshots manually, for example to free up space.
> +
> +The @code{com.sun:auto-snapshot} ZFS property controls auto-snapshot on a
> +per-dataset level.  Sub-datasets will inherit this property from their
> +parent dataset, but can have their own property.
> +
> +You @emph{must} set this property to exactly @code{true} or @code{false},
> +otherwise it will be treated as if the property were unset.
> +
> +For example:
> +
> +@example
> +# zfs list -o name
> +NAME
> +tank
> +tank/important-data
> +tank/tmp
> +# zfs set com.sun:auto-snapshot=true tank
> +# zfs set com.sun:auto-snapshot=false tank/tmp
> +@end example
> +
> +The above will set @code{tank} and @code{tank/important-data} to be
> +auto-snapshotted, while @code{tank/tmp} will not be.
> +
> +If the @code{com.sun:auto-snapshot} property is not set for a dataset
> +(the default when pools and datasets are created), then whether
> +auto-snapshot is done or not will depend on the @code{auto-snapshot?}
> +field of the @code{zfs-configuration} record.
> +
> +There are also @code{com.sun:auto-snapshot:frequent},
> +@code{com.sun:auto-snapshot:hourly}, @code{com.sun:auto-snapshot:daily},
> +@code{com.sun:auto-snapshot:weekly}, and @code{com.sun:auto-snapshot:monthly}
> +properties that give finer-grained control of whether to auto-snapshot a
> +dataset at a particular schedule.
> +
> +The number of snapshots kept for all datasets can be overridden via the
> +@code{auto-snapshot-keep} field of the @code{zfs-configuration} record.
> +There is currently no support for keeping different numbers of snapshots
> +for different datasets.
> +
> +@subsubsection ZVOLs
> +
> +ZFS supports ZVOLs, block devices that ZFS exposes to the operating
> +system in the @code{/dev/zvol/} directory.  A ZVOL has the same
> +resilience and self-healing properties as other datasets on your ZFS pool.
> +ZVOLs can also be snapshotted (and will be included in auto-snapshotting
> +if enabled), which snapshots the state of the block device, effectively
> +snapshotting the hosted file system.
> +
> +You can put any file system inside the ZVOL.  However, in order to mount
> +this file system at system start, you need to add
> +@code{%zfs-zvol-dependency} as a dependency of each file system inside a
> +ZVOL.
> +
> +@defvr {Scheme Variable} %zfs-zvol-dependency
> +An artificial @code{<mapped-device>} which tells the file system mounting
> +service to wait for ZFS to provide ZVOLs before mounting the
> +@code{<file-system>} dependent on it.
> +@end defvr
> +
> +For example, suppose you create a ZVOL and put an ext4 file system
> +inside it:
> +
> +@example
> +# zfs create -V 100G tank/ext4-on-zfs
> +# mkfs.ext4 /dev/zvol/tank/ext4-on-zfs
> +# mkdir /ext4-on-zfs
> +# mount /dev/zvol/tank/ext4-on-zfs /ext4-on-zfs
> +@end example
> +
> +You can then set this up to be mounted at boot by adding this to the
> +@code{file-systems} field of your @code{operating-system} record:
> +
> +@lisp
> +(file-system
> +  (device "/dev/zvol/tank/ext4-on-zfs")
> +  (mount-point "/ext4-on-zfs")
> +  (type "ext4")
> +  (dependencies (list %zfs-zvol-dependency)))
> +@end lisp
> +
> +You @emph{must not} add @code{%zfs-zvol-dependency} to your
> +@code{operating-system}'s @code{mapped-devices} field, and you @emph{must
> +not} add it (or any @code{<file-system>}s dependent on it) to the
> +@code{dependencies} field of @code{zfs-configuration}.  Finally, you
> +@emph{must not} use @code{%zfs-zvol-dependency} unless you actually
> +instantiate @code{zfs-service-type} on your system.

I am not familiar with ZVOLs, so I can’t really comment on that.
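For completeness, I suppose the snippet above would sit in the
‘file-systems’ field of the ‘operating-system’ like any other file system;
something like this untested sketch:

  (operating-system
    ;; ...
    (file-systems
     (cons* (file-system
              (device "/dev/zvol/tank/ext4-on-zfs")
              (mount-point "/ext4-on-zfs")
              (type "ext4")
              ;; Wait for ZVOLs to be available before mounting.
              (dependencies (list %zfs-zvol-dependency)))
            %base-file-systems)))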
> +
> +@subsubsection Unsupported Features
> +
> +Some common features and uses of ZFS are currently not supported, or not
> +fully supported, on Guix.
> +
> +@enumerate
> +@item
> +Shepherd-managed daemons that are configured to read from or write to ZFS
> +mountpoints need to include @code{user-processes} in their
> +@code{requirement} field.  This is the earliest point at which ZFS file
> +systems are assured of being mounted.
> +
> +Generally, most daemons will, directly or indirectly, require
> +@code{networking}, or @code{user-processes}, or both.  Most implementations
> +of @code{networking} also require @code{user-processes}, so daemons that
> +require only @code{networking} will also generally start up after
> +@code{user-processes}.  A notable exception, however, is
> +@code{static-networking-service-type}.  You will need to explicitly add
> +@code{user-processes} as a @code{requirement} of your
> +@code{static-networking} record.
> +
> +@item
> +@code{mountpoint=legacy} ZFS file systems.  The handlers for the Guix
> +mounting system have not yet been modified to support ZFS, and will expect
> +@code{/dev} paths in the @code{<file-system>}'s @code{device} field, but
> +ZFS file systems are referred to via non-path @code{pool/file/system}
> +names.  Such file systems also need to be mounted @emph{after} OpenZFS has
> +scanned for pools.
> +
> +You can still manually mount these file systems after system boot; only
> +mounting them automatically at system boot, by specifying them in
> +@code{<file-system>} records of your @code{operating-system}, is
> +unsupported.
> +
> +@item
> +@code{/home} on ZFS.  Guix will create home directories for users, but
> +this process currently cannot be scheduled after ZFS file systems are
> +mounted.  Thus, the ZFS file system might be mounted @emph{after} Guix has
> +created home directories at boot, at which point OpenZFS will refuse to
> +mount since the mountpoint is not empty.  However, you @emph{can} create
> +an ext4, xfs, btrfs, or other supported file system inside a ZVOL, have
> +that depend on @code{%zfs-zvol-dependency}, and set it to mount on the
> +@code{/home} directory; it will be scheduled to mount before the
> +@code{user-homes} process.
> +
> +Similarly, other locations like @code{/var}, @code{/gnu/store} and so
> +on cannot be reliably put in a ZFS file system, though it may be possible
> +to create them as other file systems inside ZVOL containers.
> +
> +@item
> +@code{/} and @code{/boot} on ZFS.  These require Guix to expose more of
> +the @code{initrd} very early boot process to services.  It also requires
> +Guix to have the ability to explicitly load modules while still in
> +@code{initrd} (currently, kernel modules loaded by
> +@code{kernel-module-loader-service-type} are loaded after @code{/} is
> +mounted).  Further, since one of ZFS's main advantages is that it can
> +continue working despite the loss of one or more devices, it makes sense
> +to also support installing the bootloader on all devices of the pool that
> +contains @code{/} and @code{/boot}; after all, if ZFS can survive the
> +loss of one device, the bootloader should also be able to survive the
> +loss of one device.

Ah, OK, this answered my previous question.

> +@item
> +ZVOL swap devices.  Mapped swap devices need to be listed in
> +@code{mapped-devices} to ensure they are opened before the system attempts
> +to use them, but you cannot currently add @code{%zfs-zvol-dependency} to
> +@code{mapped-devices}.
> +
> +This will also require significant amounts of testing, as various kernel
> +build options and patches may affect how swapping works, which are
> +possibly different on Guix System compared to other distributions where
> +this feature is known to work.
> +
> +@item
> +ZFS Event Daemon.  Support for this has not been written yet; patches are
> +welcome.
> +The main issue is how to design this in a Guix style while supporting
> +legacy shell-script styles as well.  In particular, OpenZFS itself comes
> +with a number of shell scripts intended for the ZFS Event Daemon, and we
> +need to figure out how the user can choose to use or not use the provided
> +scripts (and configure any settings they have) or override them with their
> +own custom code (which could be shell scripts they have written and
> +trusted from previous ZFS installations).
> +
> +As-is, you can create your own service that activates the ZFS Event Daemon
> +by creating the @file{/etc/zfs/zed} directory and filling it appropriately,
> +then launching @code{zed}.
> +
> +@item
> +@file{/etc/zfs/zpool.cache}.  Currently the ZFS support on Guix always
> +forces scanning of all devices at bootup to look for ZFS pools.  For
> +systems with dozens or hundreds of storage devices, this can lead to slow
> +bootup.  One issue is that tools should really not write to @code{/etc},
> +which is supposed to be for configuration; possibly the cache could be
> +moved to @code{/var} instead.  Another issue is that if Guix ever supports
> +@code{/} on ZFS, we would need to somehow keep the @code{zpool.cache} file
> +inside the @code{initrd} up-to-date with what is in the @code{/} mount
> +point.
> +
> +@item
> +@code{zfs share}.  This will require some (unknown amount of) work to
> +integrate into the Samba and NFS services of Guix.  You @emph{can}
> +manually set up Samba and NFS to share any mounted ZFS datasets by setting
> +up their configurations properly; it just can't be done for you by
> +@code{zfs share} and the @code{sharesmb} and @code{sharenfs} properties.
> +@end enumerate
> +
> +Hopefully, support for the above only requires code to be written, so
> +users are encouraged to hack on Guix to implement the above features.
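Regarding the first item in the list above: if I read it right, a daemon
that writes to a ZFS mountpoint would be declared along these lines
(untested sketch; ‘my-daemon’ and ‘my-daemon-package’ are placeholders):

  (shepherd-service
   (provision '(my-daemon))
   ;; 'user-processes' is the earliest point at which ZFS file systems
   ;; are assured of being mounted, so require it explicitly.
   (requirement '(user-processes networking))
   (start #~(make-forkexec-constructor
             (list #$(file-append my-daemon-package "/bin/my-daemon"))))
   (stop #~(make-kill-destructor)))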
> +
>  @node Mapped Devices
>  @section Mapped Devices
>
> diff --git a/gnu/local.mk b/gnu/local.mk
> index b944c671af..a2ff871277 100644
> --- a/gnu/local.mk
> +++ b/gnu/local.mk
> @@ -43,6 +43,7 @@
>  # Copyright © 2021 Philip McGrath
>  # Copyright © 2021 Arun Isaac
>  # Copyright © 2021 Sharlatan Hellseher
> +# Copyright © 2021 raid5atemyhomework
>  #
>  # This file is part of GNU Guix.
>  #
> @@ -618,6 +619,7 @@ GNU_SYSTEM_MODULES =			\
>    %D%/services/docker.scm			\
>    %D%/services/authentication.scm		\
>    %D%/services/file-sharing.scm		\
> +  %D%/services/file-systems.scm		\
>    %D%/services/games.scm			\
>    %D%/services/ganeti.scm			\
>    %D%/services/getmail.scm			\
> diff --git a/gnu/services/base.scm b/gnu/services/base.scm
> index ab3e441a7b..bcca24f93a 100644
> --- a/gnu/services/base.scm
> +++ b/gnu/services/base.scm
> @@ -185,7 +185,9 @@
>
>              references-file
>
> -            %base-services))
> +            %base-services
> +
> +            dependency->shepherd-service-name))
>
>  ;;; Commentary:
>  ;;;
> diff --git a/gnu/services/file-systems.scm b/gnu/services/file-systems.scm
> new file mode 100644
> index 0000000000..0b1aae38ac
> --- /dev/null
> +++ b/gnu/services/file-systems.scm
> @@ -0,0 +1,295 @@
> +;;; GNU Guix --- Functional package management for GNU
> +;;; Copyright © 2021 raid5atemyhomework
> +;;;
> +;;; This file is part of GNU Guix.
> +;;;
> +;;; GNU Guix is free software; you can redistribute it and/or modify it
> +;;; under the terms of the GNU General Public License as published by
> +;;; the Free Software Foundation; either version 3 of the License, or (at
> +;;; your option) any later version.
> +;;;
> +;;; GNU Guix is distributed in the hope that it will be useful, but
> +;;; WITHOUT ANY WARRANTY; without even the implied warranty of
> +;;; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +;;; GNU General Public License for more details.
> +;;;
> +;;; You should have received a copy of the GNU General Public License
> +;;; along with GNU Guix.  If not, see <http://www.gnu.org/licenses/>.
> +
> +(define-module (gnu services file-systems)
> +  #:use-module (gnu packages file-systems)
> +  #:use-module (gnu services)
> +  #:use-module (gnu services base)
> +  #:use-module (gnu services linux)
> +  #:use-module (gnu services mcron)
> +  #:use-module (gnu services shepherd)
> +  #:use-module (gnu system mapped-devices)
> +  #:use-module (guix gexp)
> +  #:use-module (guix packages)
> +  #:use-module (guix records)
> +  #:export (zfs-service-type
> +
> +            zfs-configuration
> +            zfs-configuration?
> +            zfs-configuration-kernel
> +            zfs-configuration-base-zfs
> +            zfs-configuration-base-zfs-auto-snapshot
> +            zfs-configuration-dependencies
> +            zfs-configuration-auto-mount?
> +            zfs-configuration-auto-scrub
> +            zfs-configuration-auto-snapshot?
> +            zfs-configuration-auto-snapshot-keep
> +
> +            %zfs-zvol-dependency))
> +
> +(define-record-type* <zfs-configuration>
> +  zfs-configuration
> +  make-zfs-configuration
> +  zfs-configuration?
> +
> +  ; linux-libre kernel you want to compile the base-zfs module for.
> +  (kernel zfs-configuration-kernel)
> +
> +  ; the OpenZFS package that will be modified to compile for the
> +  ; given kernel.
> +  (base-zfs zfs-configuration-base-zfs
> +            (default zfs))

The field name usually just contains the package name, so ‘zfs’ and
‘zfs-auto-snapshot’ instead of ‘base-zfs’ and ‘base-zfs-auto-snapshot’
(see the sketch after this record definition).

> +  ; the zfs-auto-snapshot package that will be modified to compile
> +  ; for the given kernel.
> +  (base-zfs-auto-snapshot zfs-configuration-base-zfs-auto-snapshot
> +                          (default zfs-auto-snapshot))
> +
> +  ; list of <mapped-device> or <file-system> objects that must be
> +  ; opened/mounted before we import any ZFS pools.
> +  (dependencies zfs-configuration-dependencies
> +                (default '()))
> +
> +  ; #t if mountable datasets are to be mounted automatically.
> +  ; #f if not mounting.
> +  ; #t is the expected behavior on other operating systems; the
> +  ; #f is only supported for "rescue" operating systems where
> +  ; the user wants lower-level control of when to mount.
> +  (auto-mount? zfs-configuration-auto-mount?
> +               (default #t))
> +
> +  ; 'weekly for weekly scrubbing, 'monthly for monthly scrubbing, an
> +  ; mcron time specification that can be given to `job`, or #f to
> +  ; disable.
> +  (auto-scrub zfs-configuration-auto-scrub
> +              (default 'weekly))
> +
> +  ; #t if auto-snapshot is the default (and `com.sun:auto-snapshot=false`
> +  ; disables auto-snapshot per dataset), #f if no auto-snapshotting
> +  ; is the default (and `com.sun:auto-snapshot=true` enables auto-snapshot
> +  ; per dataset).
> +  (auto-snapshot? zfs-configuration-auto-snapshot?
> +                  (default #t))
> +
> +  ; association list of symbol-number pairs to indicate the number
> +  ; of automatic snapshots to keep for each of 'frequent, 'hourly,
> +  ; 'daily, 'weekly, and 'monthly.
> +  ; e.g. '((frequent . 8) (hourly . 12))
> +  (auto-snapshot-keep zfs-configuration-auto-snapshot-keep
> +                      (default '())))
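Something like this sketch, following the convention used elsewhere in
Guix (e.g. ‘mcron-configuration’); same accessors, just conventional field
names:

  (define-record-type* <zfs-configuration>
    zfs-configuration
    make-zfs-configuration
    zfs-configuration?
    (kernel zfs-configuration-kernel)
    ;; Fields named after the packages they hold.
    (zfs zfs-configuration-zfs
         (default zfs))
    (zfs-auto-snapshot zfs-configuration-zfs-auto-snapshot
                       (default zfs-auto-snapshot))
    ;; ... remaining fields unchanged ...
    )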
"0,15,30,45 * * * *") > + (hourly . "0 * * * *") > + (daily . "0 0 * * *") > + (weekly . "0 0 * * 7") > + (monthly . "0 0 1 * *"))) > + > +;; A synthetic and unusable MAPPED-DEVICE intended for use when > +;; the user has created a mountable filesystem inside a ZFS > +;; zvol and wants it mounted inside the configuration.scm. > +(define %zfs-zvol-dependency > + (mapped-device > + (source '()) > + (targets '("zvol/*")) > + (type #f))) > + > +(define (make-zfs-package conf) > + (let ((kernel (zfs-configuration-kernel conf)) > + (base-zfs (zfs-configuration-base-zfs conf))) > + (package > + (inherit base-zfs) > + (arguments (cons* #:linux kernel > + (package-arguments base-zfs)))))) > + > +(define (make-zfs-auto-snapshot-package conf) > + (let ((zfs (make-zfs-package conf)) > + (base-zfs-auto-snapshot (zfs-configuration-base-zfs-auto-snapsho= t conf))) > + (package > + (inherit base-zfs-auto-snapshot) > + (inputs `(("zfs" ,zfs)))))) > + > +(define (zfs-loadable-modules conf) > + (list (list (make-zfs-package conf) "module"))) > + > +(define (zfs-shepherd-services conf) > + (let* ((zfs-package (make-zfs-package conf)) > + (zpool (file-append zfs-package "/sbin/zpool")) > + (zfs (file-append zfs-package "/sbin/zfs")) > + (zvol_wait (file-append zfs-package "/bin/zvol_wait")) > + (scheme-modules `((srfi srfi-1) > + (srfi srfi-34) > + (srfi srfi-35) > + (rnrs io ports) > + ,@%default-modules))) > + (define zfs-scan > + (shepherd-service > + (provision '(zfs-scan)) > + (requirement `(root-file-system > + kernel-module-loader > + udev > + ,@(map dependency->shepherd-service-name > + (zfs-configuration-dependencies conf)))) > + (documentation "Scans for and imports ZFS pools.") > + (modules scheme-modules) > + (start #~(lambda _ > + (guard (c ((message-condition? c) > + (format (current-error-port) > + "zfs: error importing pools: ~s~%" > + (condition-message c)) > + #f)) > + ; TODO: optionally use a cachefile. > + (invoke #$zpool "import" "-a" "-N")))) > + ;; Why not one-shot? Because we don't really want to rescan > + ;; this each time a requiring process is restarted, as scanning > + ;; can take a long time and a lot of I/O. > + (stop #~(const #f)))) > + > + (define device-mapping-zvol/* > + (shepherd-service > + (provision '(device-mapping-zvol/*)) > + (requirement '(zfs-scan)) > + (documentation "Waits for all ZFS ZVOLs to be opened.") > + (modules scheme-modules) > + (start #~(lambda _ > + (guard (c ((message-condition? c) > + (format (current-error-port) > + "zfs: error opening zvols: ~s~%" > + (condition-message c)) > + #f)) > + (invoke #$zvol_wait)))) > + (stop #~(const #f)))) > + > + (define zfs-auto-mount > + (shepherd-service > + (provision '(zfs-auto-mount)) > + (requirement '(zfs-scan)) > + (documentation "Mounts all non-legacy mounted ZFS filesystems.") > + (modules scheme-modules) > + (start #~(lambda _ > + (guard (c ((message-condition? c) > + (format (current-error-port) > + "zfs: error mounting file systems:= ~s~%" > + (condition-message c)) > + #f)) > + ;; Output to current-error-port, otherwise the > + ;; user will not see any prompts for passwords > + ;; of encrypted datasets. > + ;; XXX Maybe better to explicitly open /dev/console= ? Seeing this comment, I assume that encrypted pools are supported, right? > + (with-output-to-port (current-error-port) > + (lambda () > + (invoke #$zfs "mount" "-a" "-l")))))) > + (stop #~(lambda _ > + ;; Make sure that Shepherd does not have a CWD that > + ;; is a mounted ZFS filesystem, which would prevent > + ;; unmounting. 
> +
> +(define (zfs-mcron-auto-scrub-jobs conf)
> +  (let* ((zfs-package (make-zfs-package conf))
> +         (zpool       (file-append zfs-package "/sbin/zpool"))
> +         (auto-scrub  (zfs-configuration-auto-scrub conf))
> +         (sched       (cond
> +                        ((eq? auto-scrub 'weekly)  "0 0 * * 7")
> +                        ((eq? auto-scrub 'monthly) "0 0 1 * *")
> +                        (else                      auto-scrub))))
> +    (list
> +      #~(job '#$sched
> +             ;; Suppress errors: if there are no ZFS pools, the
> +             ;; scrub will not be given any arguments, which makes
> +             ;; it error out.
> +             (string-append "(" #$zpool " scrub `" #$zpool " list -o name -H` "
> +                            "> /dev/null 2>&1) "
> +                            "|| exit 0")))))
> +
> +(define (zfs-mcron-jobs conf)
> +  (append (zfs-mcron-auto-snapshot-jobs conf)
> +          (if (zfs-configuration-auto-scrub conf)
> +              (zfs-mcron-auto-scrub-jobs conf)
> +              '())))
> +
> +(define zfs-service-type
> +  (service-type
> +    (name 'zfs)
> +    (extensions
> +      (list ;; Install OpenZFS kernel module into kernel profile.
> +            (service-extension linux-loadable-module-service-type
> +                               zfs-loadable-modules)
> +            ;; And load it.
> +            (service-extension kernel-module-loader-service-type
> +                               (const '("zfs")))
> +            ;; Make sure ZFS pools and datasets are mounted at
> +            ;; boot.
> +            (service-extension shepherd-root-service-type
> +                               zfs-shepherd-services)
> +            ;; Make sure user-processes don't start until
> +            ;; after ZFS does.
> +            (service-extension user-processes-service-type
> +                               zfs-user-processes)
> +            ;; Install automated scrubbing and snapshotting.
> +            (service-extension mcron-service-type
> +                               zfs-mcron-jobs)
> +
> +            ;; Install ZFS management commands in the system
> +            ;; profile.
> +            (service-extension profile-service-type
> +                               (compose list make-zfs-package))
> +            ;; Install ZFS udev rules.
> +            (service-extension udev-service-type
> +                               (compose list make-zfs-package))))
> +    (description "Installs ZFS, an advanced filesystem and volume manager.")))
> --
> 2.31.1

I haven’t tested anything, but the rest looks good!  :-)