From: raid5atemyhomework via Guix-patches via <guix-patches@gnu.org>
To: "45734@debbugs.gnu.org" <45734@debbugs.gnu.org>
Subject: [bug#45734] [PATCH v2] gnu: update zfs.
Date: Mon, 11 Jan 2021 10:23:16 +0000	[thread overview]
Message-ID: <NuLfPeUyXzX_004mezdMF5AGSYJmr0mAB_kvLlEFKfEbETK82auhXKwnwaQBQMHEP4gfnPsU3ijZZxd8s90-yBL7a_IDeWX5RZTJhaaZNR0=@protonmail.com> (raw)
In-Reply-To: <ZW8QJg1lPl13koE-MLg4VkxA8rDNxXNhQcb8FGW4esxsa-yIbCQC5xiZxj1ojhnZMQzeV7dAgwJ5vHw_GlGfM13VISYveDSashwUnj6an6I=@protonmail.com>

For the patch updating ZFS to 2.0.1, I did the following testing:

* Included the patches https://issues.guix.gnu.org/45692, https://issues.guix.gnu.org/45722, and https://issues.guix.gnu.org/45723.
  * Created a new VM image that includes ZFS using `(service zfs-service-type ...)`.  This used Linux-libre 5.4, though.  (See the configuration sketch after this list.)
    * Expanded this image by 10G, created a new partition, and created a ZFS pool there with a ZFS dataset; wrote some text files and downloaded the ZFS source release into the ZFS filesystem.  Then rebooted the VM and checked that the ZFS filesystem was still automounted and that its contents were as expected.
    * Created three extra disk images and booted the same image with the extra disks.  Added two of them as a mirrored SLOG and the third as an L2ARC.  Then rebooted and checked that the pool still mounted fine.
    * Started the VM again with the extra disk images rearranged.  Checked the ZFS pool status; the L2ARC and SLOG devices were correctly rearranged as well.  Did a few more rearrangements and checked that ZFS assigned each device to its correct role.
    * Started the VM again with one of the mirrored SLOG devices missing.  Checked the ZFS pool status and confirmed that the SLOG mirror was degraded but the pool was still up.
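
For reference, here is a minimal sketch of the kind of `operating-system` declaration used for such a test.  Only `(service zfs-service-type ...)` is named in this message; everything else below (the module name, the bare service form, the disk layout) is an illustrative assumption, not the patches' exact interface.

```scheme
;; Minimal sketch, NOT the exact configuration from the patches.
;; `zfs-service-type` comes from the proposed patches and is not in
;; mainline Guix; the module it lives in is a guess.
(use-modules (gnu))
(use-package-modules linux)
;; (use-modules (gnu services zfs))  ; assumed module name

(operating-system
  (kernel linux-libre-5.4)           ; the tests above used Linux-libre 5.4
  (host-name "zfs-test")
  (timezone "Etc/UTC")
  (locale "en_US.utf8")
  (bootloader (bootloader-configuration
               (bootloader grub-bootloader)
               (target "/dev/vda")))
  (file-systems (cons (file-system
                        (device (file-system-label "guix-root"))
                        (mount-point "/")
                        (type "ext4"))
                      %base-file-systems))
  (services
   ;; The patches may require an explicit configuration record; its
   ;; interface is not shown in this message.
   (cons (service zfs-service-type)
         %base-services)))
```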

So all of it seems to be working fine so far, and I'm mostly satisfied with this.  I'll probably need to add more code to make it work closer to how ZFS works on other systems (the current patches scan all devices rather than use `/etc/zfs/zpool.cache`, because I don't really understand how `/etc/zfs/zpool.cache` works).

With all those patches, ZFS on Guix supports:

* Automatic importing and mounting of ZFS filesystems.  This does not use `/etc/zfs/zpool.cache`; the cache file would theoretically speed up importing when the computer has dozens or hundreds of disks, and would protect in a setting where someone who gains physical access to the computer could override sensitive mount points by plugging in a USB device whose pool gets auto-imported (and auto-mounted) at boot by ZFS.
* `/home` on ZFS.
* L2ARC and SLOG.
* ZVOLs, accessible under the `/dev/zvol/*` hierarchy.
* Pools on LUKS containers, by adding the mapped devices as dependencies of the `zfs-service-type` (untested).
* `file-system` declarations mounted on ZVOLs (untested).  A sketch of these last two items follows the list.
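
To illustrate those last two (untested) items, here is a hedged sketch.  The mapped device, its UUID, and the pool/volume names are hypothetical placeholders, and the exact way the patches accept LUKS containers as dependencies of `zfs-service-type` is not shown in this message.

```scheme
(use-modules (gnu))

;; Hypothetical LUKS container holding a ZFS pool member; the UUID and
;; target name are placeholders.  Per the list above, such a mapped
;; device would be declared as a dependency of `zfs-service-type` so
;; that the container is opened before the pool is imported (the exact
;; syntax depends on the patches).
(define zfs-luks-device
  (mapped-device
   (source (uuid "12345678-1234-1234-1234-123456789abc"))
   (target "zfscrypt")
   (type luks-device-mapping)))

;; An ext4 file system living on a ZVOL, addressed through the
;; /dev/zvol/* hierarchy; "mypool/myvol" is a hypothetical name.
;; Untested, as noted above: the ZVOL only appears once ZFS has
;; imported the pool.
(define zvol-file-system
  (file-system
    (device "/dev/zvol/mypool/myvol")
    (mount-point "/srv/data")
    (type "ext4")
    (create-mount-point? #t)))
```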

Some other stuff is not supported yet:

* The `zpool.cache` file, which replaces `fstab` (but is not user-editable) and allows faster importing of ZFS pools.
* The ZFS Event Daemon.  Traditionally this is configured by having the sysadmin manage an `/etc/zfs/zed.d/` directory; some bits of ZFS automation are provided by the ZFS release, and the sysadmin is supposed to either symlink to those, copy and modify them, remove them, or replace them with their own scripts.
* ZFS sharing over the network.  I probably need to look at how NFS and Samba are started on Guix and then figure this part out; NFS and Samba need to be started first, but I'm not sure how ZFS talks to them to get its filesystems shared.
* `/` on ZFS.  We probably need some kind of `initrd-kernel-module-service-type` and `initrd-kernel-module-loader-service-type`, and kernel module parameter configuration would have to be passed in either on the kernel command line or by the early `initrd` module loader (which isn't modprobe, by the way).
* Mounting in "legacy" mode where datasets are declared via `(file-system ...)` declarations.  Actually https://issues.guix.gnu.org/45643#3 has a patch for this as well.




