From: zimoun
Subject: Re: Use guix to distribute data & reproducible (data) science
Date: Sat, 10 Feb 2018 00:01:56 +0100
To: Konrad Hinsen
Cc: Guix Devel

Hi,

> I'd say it depends on the data and how it is used inside and outside
> of a workflow. Some data could very well be stored in the store, and
> then distributed via standard channels (Zenodo, ...) after export by
> "guix pack". For big datasets, some other mechanism is required.

I am not sure I understand the point.

From my point of view, there are two kinds of datasets:

 a- the ones that are part of the software, e.g., used to run the
    tests; they are usually, though not always, small;

 b- the ones the software is applied to, which are not in the source
    repository; they may or may not be big.

I do not know whether Guix has an established policy for case a-, and
I am not sure one is even possible (e.g., should a whole-genome FASTA
file be included just to test an alignment tool?).

Trying to include the datasets of case b- in the store does not seem
like a good idea to me. Is that not the job of data management tools,
e.g., databases?

I do not know much about this, but one idea would be to write a
workflow: you fetch the data, you clean it, and you check by hashing
that the result is the expected one. Only the software used to do
that lives in the store. The input and output data do not, but the
workflow checks that they are what you expect. However, this depends
on what we call 'cleaning', because some algorithms are not
deterministic.

Hum? I do not know whether GWL has a mechanism to check the hash of
the `data-inputs' field.
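To make the idea concrete, here is a rough Guile sketch of such a
check step. The file name and the expected hash are made up, and
calling the `guix hash' command from a workflow procedure is only my
guess at how it could be wired into GWL, not something GWL provides
today as far as I know:

  (use-modules (ice-9 popen)
               (ice-9 rdelim))

  (define (data-hash file)
    ;; Reuse the `guix hash' command to compute the SHA256
    ;; (nix-base32) of FILE without copying FILE into the store.
    (let* ((port (open-input-pipe (string-append "guix hash " file)))
           (hash (read-line port)))
      (close-pipe port)
      hash))

  (define (check-data-input file expected)
    ;; Abort the step when the fetched or cleaned data is not
    ;; bit-for-bit what the workflow expects.
    (unless (string=? (data-hash file) expected)
      (error "unexpected content for data input:" file)))

  ;; For instance, right after the fetch step (arguments are
  ;; placeholders):
  ;; (check-data-input "genome.fa" "<expected-nix-base32-sha256>")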
> I think it's worth thinking carefully about how to exploit guix for
> reproducible computations. As Lispers know very well, code is data
> and data is code. Building a package is a computation like any
> other. Scientific workflows could be handled by a specific build
> system. In fact, as long as no big datasets or multiple processors
> are involved, we can do this right now, using standard package
> declarations.

This thread seems to me a useful complement to these points (and I
personally learned a few things about the design of GWL from it):

  https://lists.gnu.org/archive/html/guix-devel/2016-05/msg00380.html

> It would be nice if big datasets could conceptually be handled in
> the same way while being stored elsewhere - a bit like git-annex
> does for git. And for parallel computing, we could have special
> build daemons.

Hum? Is the point here to add git-annex-style data management to GWL?

Have a nice weekend!

simon