From: "Ludovic Courtès" <ludo@gnu.org>
To: Ricardo Wurmus <rekado@elephly.net>
Cc: guix-devel@gnu.org
Subject: Re: “guix gc”, auto gcroots, cluster deployments
Date: Tue, 08 Jun 2021 14:55:28 +0200
Message-ID: <87mts0h5xb.fsf@gnu.org>
In-Reply-To: <8735uplioz.fsf@elephly.net> (Ricardo Wurmus's message of "Fri, 14 May 2021 12:21:16 +0200")
Hi,
Ricardo Wurmus <rekado@elephly.net> skribis:
> Ludovic Courtès <ludovic.courtes@inria.fr> writes:
>
>> Hi!
>>
>> Ricardo Wurmus <rekado@elephly.net> skribis:
>>
>>> There are two problems here:
>>>
>>> 1) I don’t think “guix gc --list-dead” (or “--list-live”, or more
>>> generally “findRoots” in nix/libstore/gc.cc) should delete
>>> anything. It should just list and not clean up.
>>
>> Maybe ‘findRoots’ could populate the list of stale roots, and it’d be
>> up to the caller to decide whether to delete them or not?
>
> Yes, this would be better. It already does this for links whose
> targets exist but cannot be read.
OK.
>>> 2) For cluster installations with remote file systems, perhaps
>>> there’s something else we can do to record gcroots. We now have this
[...]
>>> […] we would record
>>> /var/guix/profiles/per-user/me/auto/8ypp8dmwnydgbsgjcms2wyb32mng0wri
>>> pointing to /gnu/store/ap0vrfxjdj57iqdapg8q83l4f7aylqzm-profile, and
>>> then point /home/me/projects/mrg1_chipseq/.guix-profile-1-link at
>>> that. Yes, removing
>>> /home/me/projects/mrg1_chipseq/.guix-profile-1-link would no longer
>>> free up the profile for garbage collection, but removing
>>> $(readlink /home/me/projects/mrg1_chipseq/.guix-profile-1-link)
>>> would.
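[Editorial illustration, not part of the original message: the two-level
indirection quoted above can be reproduced with plain symlinks. A throwaway
temporary directory stands in for /gnu/store and
/var/guix/profiles/per-user/me/auto; all names below are stand-ins, not real
Guix paths.]

```shell
#!/bin/sh
# Sketch of the proposed indirection, using a throwaway directory in
# place of the real /gnu/store and /var/guix/profiles/per-user paths
# quoted above.  Every path here is a stand-in for illustration.
set -e
tmp=$(mktemp -d)
mkdir -p "$tmp/store" "$tmp/auto"

# Stand-in for /gnu/store/...-profile:
profile="$tmp/store/profile"
touch "$profile"

# 1. Record an indirect root under the per-user auto directory ...
ln -s "$profile" "$tmp/auto/root"

# 2. ... and point the user's .guix-profile-1-link at that root.
ln -s "$tmp/auto/root" "$tmp/guix-profile-1-link"

# Removing the user's link alone no longer frees the profile; only
# removing $(readlink <user link>), i.e. the auto root, would.
rm "$tmp/guix-profile-1-link"
test -L "$tmp/auto/root" && echo "auto root survives user-link removal"
rm -rf "$tmp"
```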
>>
>> Yes, but how would per-user/me/auto/* be cleaned up?
>
> Yeah, that’s an open question.
>
> I get the appeal of having these things be cleaned up automatically
> when the link disappears, but if we added this extra layer of
> indirection for cluster deployments, this would become manual.
>
> Can we make this configurable perhaps…?  On my cluster installation
> I’d rather have a cron job to erase the stuff in per-user/me/auto/* on
> my own terms than have “guix gc” fail to resolve links and consider it
> all garbage.
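[Editorial illustration: a cron job of the kind described could look roughly
like the following. AUTO_DIR and the 30-day cutoff are assumptions made for
the sketch, not options that Guix provides.]

```shell
#!/bin/sh
# Hypothetical cleanup job for the cluster scenario above: remove
# indirect roots under the per-user auto directory on the admin's own
# schedule, instead of letting "guix gc" treat unresolvable links as
# garbage.  AUTO_DIR and CUTOFF_DAYS are illustrative assumptions.
AUTO_DIR="${AUTO_DIR:-/var/guix/profiles/per-user/$USER/auto}"
CUTOFF_DAYS="${CUTOFF_DAYS:-30}"

# Delete symlinks older than the cutoff; the profiles they protected
# become eligible for collection at the next "guix gc" run.
find "$AUTO_DIR" -maxdepth 1 -type l -mtime "+$CUTOFF_DAYS" \
     -exec rm -v {} \;
```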
Sure, why not. Do you have configuration options in mind?
I have trouble wrapping my head around this problem.  Should we split
it into smaller chunks in bug-guix?
Thanks,
Ludo’.
Thread overview: 11+ messages
2021-05-10 9:59 “guix gc”, auto gcroots, cluster deployments Ricardo Wurmus
2021-05-10 10:59 ` Roel Janssen
2021-05-10 11:59 ` Ricardo Wurmus
2021-05-10 15:45 ` Roel Janssen
2021-05-10 16:23 ` Ricardo Wurmus
2021-05-11 20:42 ` Ludovic Courtès
2021-05-10 13:40 ` Sébastien Lerique
2021-05-10 13:59 ` Guix on NFS Ricardo Wurmus
2021-05-11 20:50 ` “guix gc”, auto gcroots, cluster deployments Ludovic Courtès
2021-05-14 10:21 ` Ricardo Wurmus
2021-06-08 12:55 ` Ludovic Courtès [this message]