all messages for Guix-related lists mirrored at yhetil.org
* [bug#33899] [PATCH 0/5] Distributing substitutes over IPFS
@ 2018-12-28 23:12 Ludovic Courtès
  2018-12-28 23:15 ` [bug#33899] [PATCH 1/5] Add (guix json) Ludovic Courtès
                   ` (4 more replies)
  0 siblings, 5 replies; 23+ messages in thread
From: Ludovic Courtès @ 2018-12-28 23:12 UTC (permalink / raw)
  To: 33899; +Cc: Hector Sanjuan, Pierre Neidhardt

Hello Guix!

Here is a first draft adding support to distribute and retrieve substitutes
over IPFS.  It builds on discussions at the Reproducible Builds Summit with
Héctor Sanjuan of IPFS, lewo of Nix, and Pierre Neidhardt, as well as on the
work Florian Paul Schmidt posted on guix-devel last month.

The IPFS daemon exposes an HTTP API and the (guix ipfs) module provides
bindings to a subset of that API.  This module also implements a custom
“directory” format to store directory trees in IPFS (IPFS already provides
“UnixFS” and “tar” formats, but they store either too many or too few file
attributes for our purposes.)
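To make this concrete, here is a sketch (my illustration, not part of the
patch) of what the custom format stores for a package containing a single
executable; the “Qm…” hash and the sizes are placeholders, and the shape
follows the ‘file-tree->sexp’ procedure introduced in patch 3:

```scheme
;; Sketch of the "flat" directory format: only leaves (regular files)
;; are separate IPFS objects; the tree structure, file names, sizes,
;; and executable bits all live in this one sexp.
(file-tree (version 0)
           (directory ((entry "bin"
                              (directory ((entry "hello"
                                                 (executable "Qm…" 40532)))
                                         40532)))
                      40532))
```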

‘guix publish’ and ‘guix substitute’ use (guix ipfs) to
store and retrieve store items.  Complete directory trees are stored in
IPFS “as is”, rather than as compressed archives (nars).  This allows for
deduplication in IPFS.  ‘guix publish’ adds a new “IPFS” field to
narinfos, and ‘guix substitute’ can then fetch the corresponding objects
over IPFS.  The idea is that you still get narinfos over HTTP(S), and then
you have the option of downloading substitutes over IPFS.
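For illustration (not part of the patch), a narinfo produced by
‘guix publish’ with IPFS support enabled would carry the extra field after
the signed part, roughly like this (all values elided or made up):

```
StorePath: /gnu/store/…-hello-2.10
URL: nar/gzip/…-hello-2.10
Compression: gzip
NarHash: sha256:…
NarSize: 63476
References: …
Signature: 1;example-host;…
IPFS: Qm…
```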

I’ve pushed these patches in ‘wip-ipfs-substitutes’.  It is rough around the
edges and probably buggy, but the adventurous among us might want to give
it a spin.  :-)

Thanks,
Ludo’.

Ludovic Courtès (5):
  Add (guix json).
  tests: 'file=?' now recurses on directories.
  Add (guix ipfs).
  publish: Add IPFS support.
  DRAFT substitute: Add IPFS support.

 Makefile.am                 |   3 +
 doc/guix.texi               |  33 +++++
 guix/ipfs.scm               | 250 ++++++++++++++++++++++++++++++++++++
 guix/json.scm               |  63 +++++++++
 guix/scripts/publish.scm    |  67 +++++++---
 guix/scripts/substitute.scm | 106 ++++++++-------
 guix/swh.scm                |  35 +----
 guix/tests.scm              |  26 +++-
 tests/ipfs.scm              |  55 ++++++++
 9 files changed, 535 insertions(+), 103 deletions(-)
 create mode 100644 guix/ipfs.scm
 create mode 100644 guix/json.scm
 create mode 100644 tests/ipfs.scm

-- 
2.20.1


* [bug#33899] [PATCH 1/5] Add (guix json).
  2018-12-28 23:12 [bug#33899] [PATCH 0/5] Distributing substitutes over IPFS Ludovic Courtès
@ 2018-12-28 23:15 ` Ludovic Courtès
  2018-12-28 23:15   ` [bug#33899] [PATCH 2/5] tests: 'file=?' now recurses on directories Ludovic Courtès
                     ` (3 more replies)
  2019-01-07 14:43 ` [bug#33899] [PATCH 0/5] Distributing substitutes over IPFS Hector Sanjuan
                   ` (3 subsequent siblings)
  4 siblings, 4 replies; 23+ messages in thread
From: Ludovic Courtès @ 2018-12-28 23:15 UTC (permalink / raw)
  To: 33899

* guix/swh.scm: Use (guix json).
(define-json-reader, define-json-mapping): Move to...
* guix/json.scm: ... here.  New file.
* Makefile.am (MODULES): Add it.
---
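[Note, not part of the patch: for readers unfamiliar with the macro, here is
a hypothetical use of ‘define-json-mapping’; the <branch> record type and the
‘json->commit’ converter are made up for illustration.]

```scheme
;; Map a JSON object such as {"name": "master", "url": …, "commit": {…}}
;; to an SRFI-9 record.
(define-json-mapping <branch> make-branch branch?
  json->branch
  (name   branch-name)             ;looked up under the key "name"
  (url    branch-url "url")        ;explicit JSON key
  (commit branch-commit "commit"   ;value converted with JSON->COMMIT
          json->commit))
```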
 Makefile.am   |  1 +
 guix/json.scm | 63 +++++++++++++++++++++++++++++++++++++++++++++++++++
 guix/swh.scm  | 35 +---------------------------
 3 files changed, 65 insertions(+), 34 deletions(-)
 create mode 100644 guix/json.scm

diff --git a/Makefile.am b/Makefile.am
index 0e5ca02ed3..da3720e3a6 100644
--- a/Makefile.am
+++ b/Makefile.am
@@ -77,6 +77,7 @@ MODULES =					\
   guix/discovery.scm				\
   guix/git-download.scm				\
   guix/hg-download.scm				\
+  guix/json.scm					\
   guix/swh.scm					\
   guix/monads.scm				\
   guix/monad-repl.scm				\
diff --git a/guix/json.scm b/guix/json.scm
new file mode 100644
index 0000000000..d446f6894e
--- /dev/null
+++ b/guix/json.scm
@@ -0,0 +1,63 @@
+;;; GNU Guix --- Functional package management for GNU
+;;; Copyright © 2018 Ludovic Courtès <ludo@gnu.org>
+;;;
+;;; This file is part of GNU Guix.
+;;;
+;;; GNU Guix is free software; you can redistribute it and/or modify it
+;;; under the terms of the GNU General Public License as published by
+;;; the Free Software Foundation; either version 3 of the License, or (at
+;;; your option) any later version.
+;;;
+;;; GNU Guix is distributed in the hope that it will be useful, but
+;;; WITHOUT ANY WARRANTY; without even the implied warranty of
+;;; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+;;; GNU General Public License for more details.
+;;;
+;;; You should have received a copy of the GNU General Public License
+;;; along with GNU Guix.  If not, see <http://www.gnu.org/licenses/>.
+
+(define-module (guix json)
+  #:use-module (json)
+  #:use-module (srfi srfi-9)
+  #:export (define-json-mapping))
+
+;;; Commentary:
+;;;
+;;; This module provides tools to define mappings from JSON objects to SRFI-9
+;;; records.  This is useful when writing bindings to HTTP APIs.
+;;;
+;;; Code:
+
+(define-syntax-rule (define-json-reader json->record ctor spec ...)
+  "Define JSON->RECORD as a procedure that converts a JSON representation,
+read from a port, string, or hash table, into a record created by CTOR and
+following SPEC, a series of field specifications."
+  (define (json->record input)
+    (let ((table (cond ((port? input)
+                        (json->scm input))
+                       ((string? input)
+                        (json-string->scm input))
+                       ((hash-table? input)
+                        input))))
+      (let-syntax ((extract-field (syntax-rules ()
+                                    ((_ table (field key json->value))
+                                     (json->value (hash-ref table key)))
+                                    ((_ table (field key))
+                                     (hash-ref table key))
+                                    ((_ table (field))
+                                     (hash-ref table
+                                               (symbol->string 'field))))))
+        (ctor (extract-field table spec) ...)))))
+
+(define-syntax-rule (define-json-mapping rtd ctor pred json->record
+                      (field getter spec ...) ...)
+  "Define RTD as a record type with the given FIELDs and GETTERs, à la SRFI-9,
+and define JSON->RECORD as a conversion from JSON to a record of this type."
+  (begin
+    (define-record-type rtd
+      (ctor field ...)
+      pred
+      (field getter) ...)
+
+    (define-json-reader json->record ctor
+      (field spec ...) ...)))
diff --git a/guix/swh.scm b/guix/swh.scm
index 89cddb2bdd..c5f2153a22 100644
--- a/guix/swh.scm
+++ b/guix/swh.scm
@@ -23,6 +23,7 @@
   #:use-module (web client)
   #:use-module (web response)
   #:use-module (json)
+  #:use-module (guix json)
   #:use-module (srfi srfi-1)
   #:use-module (srfi srfi-9)
   #:use-module (srfi srfi-11)
@@ -127,40 +128,6 @@
       url
       (string-append url "/")))
 
-(define-syntax-rule (define-json-reader json->record ctor spec ...)
-  "Define JSON->RECORD as a procedure that converts a JSON representation,
-read from a port, string, or hash table, into a record created by CTOR and
-following SPEC, a series of field specifications."
-  (define (json->record input)
-    (let ((table (cond ((port? input)
-                        (json->scm input))
-                       ((string? input)
-                        (json-string->scm input))
-                       ((hash-table? input)
-                        input))))
-      (let-syntax ((extract-field (syntax-rules ()
-                                    ((_ table (field key json->value))
-                                     (json->value (hash-ref table key)))
-                                    ((_ table (field key))
-                                     (hash-ref table key))
-                                    ((_ table (field))
-                                     (hash-ref table
-                                               (symbol->string 'field))))))
-        (ctor (extract-field table spec) ...)))))
-
-(define-syntax-rule (define-json-mapping rtd ctor pred json->record
-                      (field getter spec ...) ...)
-  "Define RTD as a record type with the given FIELDs and GETTERs, à la SRFI-9,
-and define JSON->RECORD as a conversion from JSON to a record of this type."
-  (begin
-    (define-record-type rtd
-      (ctor field ...)
-      pred
-      (field getter) ...)
-
-    (define-json-reader json->record ctor
-      (field spec ...) ...)))
-
 (define %date-regexp
   ;; Match strings like "2014-11-17T22:09:38+01:00" or
   ;; "2018-09-30T23:20:07.815449+00:00"".
-- 
2.20.1


* [bug#33899] [PATCH 2/5] tests: 'file=?' now recurses on directories.
  2018-12-28 23:15 ` [bug#33899] [PATCH 1/5] Add (guix json) Ludovic Courtès
@ 2018-12-28 23:15   ` Ludovic Courtès
  2018-12-28 23:15   ` [bug#33899] [PATCH 3/5] Add (guix ipfs) Ludovic Courtès
                     ` (2 subsequent siblings)
  3 siblings, 0 replies; 23+ messages in thread
From: Ludovic Courtès @ 2018-12-28 23:15 UTC (permalink / raw)
  To: 33899

* guix/tests.scm (not-dot?): New procedure.
(file=?)[executable?]: New procedure.
In 'regular case, check whether the executable bit is preserved.
Add 'directory case.
---
 guix/tests.scm | 26 ++++++++++++++++++++++----
 1 file changed, 22 insertions(+), 4 deletions(-)

diff --git a/guix/tests.scm b/guix/tests.scm
index f4948148c4..c9ae2718e4 100644
--- a/guix/tests.scm
+++ b/guix/tests.scm
@@ -26,9 +26,12 @@
   #:use-module (gcrypt hash)
   #:use-module (guix build-system gnu)
   #:use-module (gnu packages bootstrap)
+  #:use-module (srfi srfi-1)
+  #:use-module (srfi srfi-26)
   #:use-module (srfi srfi-34)
   #:use-module (srfi srfi-64)
   #:use-module (rnrs bytevectors)
+  #:use-module (ice-9 ftw)
   #:use-module (ice-9 binary-ports)
   #:use-module (web uri)
   #:export (open-connection-for-tests
@@ -138,16 +141,31 @@ too expensive to build entirely in the test store."
             (loop (1+ i)))
           bv))))
 
+(define (not-dot? entry)
+  (not (member entry '("." ".."))))
+
 (define (file=? a b)
-  "Return true if files A and B have the same type and same content."
+  "Return true if files A and B have the same type and same content,
+recursively."
+  (define (executable? file)
+    (->bool (logand (stat:mode (lstat file)) #o100)))
+
   (and (eq? (stat:type (lstat a)) (stat:type (lstat b)))
        (case (stat:type (lstat a))
          ((regular)
-          (equal?
-           (call-with-input-file a get-bytevector-all)
-           (call-with-input-file b get-bytevector-all)))
+          (and (eqv? (executable? a) (executable? b))
+               (equal?
+                (call-with-input-file a get-bytevector-all)
+                (call-with-input-file b get-bytevector-all))))
          ((symlink)
           (string=? (readlink a) (readlink b)))
+         ((directory)
+          (let ((lst1 (scandir a not-dot?))
+                (lst2 (scandir b not-dot?)))
+            (and (equal? lst1 lst2)
+                 (every file=?
+                        (map (cut string-append a "/" <>) lst1)
+                        (map (cut string-append b "/" <>) lst2)))))
          (else
           (error "what?" (lstat a))))))
 
-- 
2.20.1


* [bug#33899] [PATCH 3/5] Add (guix ipfs).
  2018-12-28 23:15 ` [bug#33899] [PATCH 1/5] Add (guix json) Ludovic Courtès
  2018-12-28 23:15   ` [bug#33899] [PATCH 2/5] tests: 'file=?' now recurses on directories Ludovic Courtès
@ 2018-12-28 23:15   ` Ludovic Courtès
  2018-12-28 23:15   ` [bug#33899] [PATCH 4/5] publish: Add IPFS support Ludovic Courtès
  2018-12-28 23:15   ` [bug#33899] [PATCH 5/5] DRAFT substitute: " Ludovic Courtès
  3 siblings, 0 replies; 23+ messages in thread
From: Ludovic Courtès @ 2018-12-28 23:15 UTC (permalink / raw)
  To: 33899

* guix/ipfs.scm, tests/ipfs.scm: New files.
* Makefile.am (MODULES, SCM_TESTS): Add them.
---
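[Note, not part of the patch: a sketch of how these bindings are meant to be
used together, assuming an IPFS daemon is listening on the default gateway;
the store path and target are illustrative.]

```scheme
(use-modules (guix ipfs))

;; Store a directory tree in IPFS, then restore it elsewhere.  All
;; calls go through the daemon's HTTP API at (%ipfs-base-url).
(parameterize ((%ipfs-base-url "http://localhost:5001"))
  (let ((content (add-file-tree "/gnu/store/…-hello-2.10")))
    ;; CONTENT-NAME is the IPFS object identifier that 'guix publish'
    ;; would advertise in the narinfo's "IPFS" field.
    (restore-file-tree (content-name content)
                       "/tmp/hello-restored")))
```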
 Makefile.am    |   2 +
 guix/ipfs.scm  | 250 +++++++++++++++++++++++++++++++++++++++++++++++++
 tests/ipfs.scm |  55 +++++++++++
 3 files changed, 307 insertions(+)
 create mode 100644 guix/ipfs.scm
 create mode 100644 tests/ipfs.scm

diff --git a/Makefile.am b/Makefile.am
index da3720e3a6..975d83db6c 100644
--- a/Makefile.am
+++ b/Makefile.am
@@ -101,6 +101,7 @@ MODULES =					\
   guix/cve.scm					\
   guix/workers.scm				\
   guix/zlib.scm					\
+  guix/ipfs.scm					\
   guix/build-system.scm				\
   guix/build-system/android-ndk.scm		\
   guix/build-system/ant.scm			\
@@ -384,6 +385,7 @@ SCM_TESTS =					\
   tests/cve.scm					\
   tests/workers.scm				\
   tests/zlib.scm				\
+  tests/ipfs.scm				\
   tests/file-systems.scm			\
   tests/uuid.scm				\
   tests/system.scm				\
diff --git a/guix/ipfs.scm b/guix/ipfs.scm
new file mode 100644
index 0000000000..e941feda6f
--- /dev/null
+++ b/guix/ipfs.scm
@@ -0,0 +1,250 @@
+;;; GNU Guix --- Functional package management for GNU
+;;; Copyright © 2018 Ludovic Courtès <ludo@gnu.org>
+;;;
+;;; This file is part of GNU Guix.
+;;;
+;;; GNU Guix is free software; you can redistribute it and/or modify it
+;;; under the terms of the GNU General Public License as published by
+;;; the Free Software Foundation; either version 3 of the License, or (at
+;;; your option) any later version.
+;;;
+;;; GNU Guix is distributed in the hope that it will be useful, but
+;;; WITHOUT ANY WARRANTY; without even the implied warranty of
+;;; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+;;; GNU General Public License for more details.
+;;;
+;;; You should have received a copy of the GNU General Public License
+;;; along with GNU Guix.  If not, see <http://www.gnu.org/licenses/>.
+
+(define-module (guix ipfs)
+  #:use-module (guix json)
+  #:use-module (guix base64)
+  #:use-module ((guix build utils) #:select (dump-port))
+  #:use-module (srfi srfi-1)
+  #:use-module (srfi srfi-11)
+  #:use-module (srfi srfi-26)
+  #:use-module (rnrs io ports)
+  #:use-module (rnrs bytevectors)
+  #:use-module (ice-9 match)
+  #:use-module (ice-9 ftw)
+  #:use-module (web uri)
+  #:use-module (web client)
+  #:use-module (web response)
+  #:export (%ipfs-base-url
+            add-file
+            add-file-tree
+            restore-file-tree
+
+            content?
+            content-name
+            content-hash
+            content-size
+
+            add-empty-directory
+            add-to-directory
+            read-contents
+            publish-name))
+
+;;; Commentary:
+;;;
+;;; This module implements bindings for the HTTP interface of the IPFS
+;;; gateway, documented here: <https://docs.ipfs.io/reference/api/http/>.  It
+;;; allows you to add and retrieve files over IPFS, and a few other things.
+;;;
+;;; Code:
+
+(define %ipfs-base-url
+  ;; URL of the IPFS gateway.
+  (make-parameter "http://localhost:5001"))
+
+(define* (call url decode #:optional (method http-post)
+               #:key body (false-if-404? #t) (headers '()))
+  "Invoke the endpoint at URL using METHOD.  Decode the resulting JSON body
+using DECODE, a one-argument procedure that takes an input port; when DECODE
+is false, return the input port.  When FALSE-IF-404? is true, return #f upon
+404 responses."
+  (let*-values (((response port)
+                 (method url #:streaming? #t
+                         #:body body
+
+                         ;; Always pass "Connection: close".
+                         #:keep-alive? #f
+                         #:headers `((connection close)
+                                     ,@headers))))
+    (cond ((= 200 (response-code response))
+           (if decode
+               (let ((result (decode port)))
+                 (close-port port)
+                 result)
+               port))
+          ((and false-if-404?
+                (= 404 (response-code response)))
+           (close-port port)
+           #f)
+          (else
+           (close-port port)
+           (throw 'ipfs-error url response)))))
+
+;; Result of a file addition.
+(define-json-mapping <content> make-content content?
+  json->content
+  (name   content-name "Name")
+  (hash   content-hash "Hash")
+  (bytes  content-bytes "Bytes")
+  (size   content-size "Size" string->number))
+
+;; Result of a 'patch/add-link' operation.
+(define-json-mapping <directory> make-directory directory?
+  json->directory
+  (hash   directory-hash "Hash")
+  (links  directory-links "Links" json->links))
+
+;; A "link".
+(define-json-mapping <link> make-link link?
+  json->link
+  (name   link-name "Name")
+  (hash   link-hash "Hash")
+  (size   link-size "Size" string->number))
+
+;; A "binding", also known as a "name".
+(define-json-mapping <binding> make-binding binding?
+  json->binding
+  (name   binding-name "Name")
+  (value  binding-value "Value"))
+
+(define (json->links json)
+  (match json
+    (#f    '())
+    (links (map json->link links))))
+
+(define %multipart-boundary
+  ;; XXX: We might want to find a more reliable boundary.
+  (string-append (make-string 24 #\-) "2698127afd7425a6"))
+
+(define (bytevector->form-data bv port)
+  "Write to PORT a 'multipart/form-data' representation of BV."
+  (display (string-append "--" %multipart-boundary "\r\n"
+                          "Content-Disposition: form-data\r\n"
+                          "Content-Type: application/octet-stream\r\n\r\n")
+           port)
+  (put-bytevector port bv)
+  (display (string-append "\r\n--" %multipart-boundary "--\r\n")
+           port))
+
+(define* (add-data data #:key (name "file.txt") recursive?)
+  "Add DATA, a bytevector, to IPFS.  Return a content object representing it."
+  (call (string-append (%ipfs-base-url)
+                       "/api/v0/add?arg=" (uri-encode name)
+                       "&recursive="
+                       (if recursive? "true" "false"))
+        json->content
+        #:headers
+        `((content-type
+           . (multipart/form-data
+              (boundary . ,%multipart-boundary))))
+        #:body
+        (call-with-bytevector-output-port
+         (lambda (port)
+           (bytevector->form-data data port)))))
+
+(define (not-dot? entry)
+  (not (member entry '("." ".."))))
+
+(define (file-tree->sexp file)
+  "Add FILE, recursively, to the IPFS, and return an sexp representing the
+directory's tree structure.
+
+Unlike IPFS's own \"UnixFS\" structure, this format preserves exactly what we
+need: like the nar format, it preserves the executable bit, but does not save
+the mtime or other Unixy attributes irrelevant in the store."
+  ;; The natural approach would be to insert each directory listing as an
+  ;; object of its own in IPFS.  However, this does not buy us much in terms
+  ;; of deduplication, but it does cause a lot of extra round trips when
+  ;; fetching it.  Thus, this sexp is \"flat\" in that only the leaves are
+  ;; inserted into the IPFS.
+  (let ((st (lstat file)))
+    (match (stat:type st)
+      ('directory
+       (let* ((parent  file)
+              (entries (map (lambda (file)
+                              `(entry ,file
+                                      ,(file-tree->sexp
+                                        (string-append parent "/" file))))
+                            (scandir file not-dot?)))
+              (size    (fold (lambda (entry total)
+                               (match entry
+                                 (('entry name (kind value size))
+                                  (+ total size))))
+                             0
+                             entries)))
+         `(directory ,entries ,size)))
+      ('symlink
+       `(symlink ,(readlink file) 0))
+      ('regular
+       (let ((size (stat:size st)))
+         (if (zero? (logand (stat:mode st) #o100))
+             `(file ,(content-name (add-file file)) ,size)
+             `(executable ,(content-name (add-file file)) ,size)))))))
+
+(define (add-file-tree file)
+  "Add FILE to the IPFS, recursively, using our own canonical directory
+format.  Return the resulting content object."
+  (add-data (string->utf8 (object->string
+                           `(file-tree (version 0)
+                                       ,(file-tree->sexp file))))))
+
+(define (restore-file-tree object file)
+  "Restore to FILE the tree pointed to by OBJECT."
+  (let restore ((tree (match (read (read-contents object))
+                        (('file-tree ('version 0) tree)
+                         tree)))
+                (file file))
+    (match tree
+      (('file object size)
+       (call-with-output-file file
+         (lambda (output)
+           (dump-port (read-contents object) output))))
+      (('executable object size)
+       (call-with-output-file file
+         (lambda (output)
+           (dump-port (read-contents object) output)))
+       (chmod file #o555))
+      (('symlink target size)
+       (symlink target file))
+      (('directory (('entry names entries) ...) size)
+       (mkdir file)
+       (for-each restore entries
+                 (map (cut string-append file "/" <>) names))))))
+
+(define* (add-file file #:key (name (basename file)))
+  "Add FILE under NAME to the IPFS and return a content object for it."
+  (add-data (match (call-with-input-file file get-bytevector-all)
+              ((? eof-object?) #vu8())
+              (bv bv))
+            #:name name))
+
+(define* (add-empty-directory #:key (name "directory"))
+  "Return a content object for an empty directory."
+  (add-data #vu8() #:recursive? #t #:name name))
+
+(define* (add-to-directory directory file name)
+  "Add FILE to DIRECTORY under NAME, and return the resulting directory.
+DIRECTORY and FILE must be hashes identifying objects in the IPFS store."
+  (call (string-append (%ipfs-base-url)
+                       "/api/v0/object/patch/add-link?arg="
+                       (uri-encode directory)
+                       "&arg=" (uri-encode name) "&arg=" (uri-encode file)
+                       "&create=true")
+        json->directory))
+
+(define* (read-contents object #:key offset length)
+  "Return an input port to read the content of OBJECT from."
+  (call (string-append (%ipfs-base-url)
+                       "/api/v0/cat?arg=" object)
+        #f))
+
+(define* (publish-name object)
+  "Publish OBJECT under the current peer ID."
+  (call (string-append (%ipfs-base-url)
+                       "/api/v0/name/publish?arg=" object)
+        json->binding))
diff --git a/tests/ipfs.scm b/tests/ipfs.scm
new file mode 100644
index 0000000000..3b662b22bd
--- /dev/null
+++ b/tests/ipfs.scm
@@ -0,0 +1,55 @@
+;;; GNU Guix --- Functional package management for GNU
+;;; Copyright © 2018 Ludovic Courtès <ludo@gnu.org>
+;;;
+;;; This file is part of GNU Guix.
+;;;
+;;; GNU Guix is free software; you can redistribute it and/or modify it
+;;; under the terms of the GNU General Public License as published by
+;;; the Free Software Foundation; either version 3 of the License, or (at
+;;; your option) any later version.
+;;;
+;;; GNU Guix is distributed in the hope that it will be useful, but
+;;; WITHOUT ANY WARRANTY; without even the implied warranty of
+;;; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+;;; GNU General Public License for more details.
+;;;
+;;; You should have received a copy of the GNU General Public License
+;;; along with GNU Guix.  If not, see <http://www.gnu.org/licenses/>.
+
+(define-module (test-ipfs)
+  #:use-module (guix ipfs)
+  #:use-module ((guix utils) #:select (call-with-temporary-directory))
+  #:use-module (guix tests)
+  #:use-module (web uri)
+  #:use-module (srfi srfi-64))
+
+;; Test the (guix ipfs) module.
+
+(define (ipfs-gateway-running?)
+  "Return true if the IPFS gateway is running at %IPFS-BASE-URL."
+  (let* ((uri    (string->uri (%ipfs-base-url)))
+         (socket (socket AF_INET SOCK_STREAM 0)))
+    (define connected?
+      (catch 'system-error
+        (lambda ()
+          (format (current-error-port)
+                  "probing IPFS gateway at localhost:~a...~%"
+                  (uri-port uri))
+          (connect socket AF_INET INADDR_LOOPBACK (uri-port uri))
+          #t)
+        (const #f)))
+
+    (close-port socket)
+    connected?))
+
+(unless (ipfs-gateway-running?)
+  (test-skip 1))
+
+(test-assert "add-file-tree + restore-file-tree"
+  (call-with-temporary-directory
+   (lambda (directory)
+     (let* ((source  (dirname (search-path %load-path "guix/base32.scm")))
+            (target  (string-append directory "/r"))
+            (content (pk 'content (add-file-tree source))))
+       (restore-file-tree (content-name content) target)
+       (file=? source target)))))
-- 
2.20.1


* [bug#33899] [PATCH 4/5] publish: Add IPFS support.
  2018-12-28 23:15 ` [bug#33899] [PATCH 1/5] Add (guix json) Ludovic Courtès
  2018-12-28 23:15   ` [bug#33899] [PATCH 2/5] tests: 'file=?' now recurses on directories Ludovic Courtès
  2018-12-28 23:15   ` [bug#33899] [PATCH 3/5] Add (guix ipfs) Ludovic Courtès
@ 2018-12-28 23:15   ` Ludovic Courtès
  2018-12-28 23:15   ` [bug#33899] [PATCH 5/5] DRAFT substitute: " Ludovic Courtès
  3 siblings, 0 replies; 23+ messages in thread
From: Ludovic Courtès @ 2018-12-28 23:15 UTC (permalink / raw)
  To: 33899

* guix/scripts/publish.scm (show-help, %options): Add '--ipfs'.
(narinfo-string): Add IPFS parameter and honor it.
(render-narinfo/cached): Add #:ipfs? and honor it.
(bake-narinfo+nar, make-request-handler, run-publish-server): Likewise.
(guix-publish): Honor '--ipfs' and parameterize %IPFS-BASE-URL.
---
 doc/guix.texi            | 33 ++++++++++++++++++++
 guix/scripts/publish.scm | 67 ++++++++++++++++++++++++++++------------
 2 files changed, 80 insertions(+), 20 deletions(-)

diff --git a/doc/guix.texi b/doc/guix.texi
index fcb5b8c088..f2af5a1558 100644
--- a/doc/guix.texi
+++ b/doc/guix.texi
@@ -8470,6 +8470,15 @@ caching of the archives before they are sent to clients---see below for
 details.  The @command{guix weather} command provides a handy way to
 check what a server provides (@pxref{Invoking guix weather}).
 
+@cindex peer-to-peer, substitute distribution
+@cindex distributed storage, of substitutes
+@cindex IPFS, for substitutes
+It is also possible to publish substitutes over @uref{https://ipfs.io, IPFS},
+a distributed, peer-to-peer storage mechanism.  To enable it, pass the
+@option{--ipfs} option alongside @option{--cache}, and make sure you're
+running @command{ipfs daemon}.  Capable clients will then be able to choose
+whether to fetch substitutes over HTTP or over IPFS.
+
 As a bonus, @command{guix publish} also serves as a content-addressed
 mirror for source files referenced in @code{origin} records
 (@pxref{origin Reference}).  For instance, assuming @command{guix
@@ -8560,6 +8569,30 @@ thread per CPU core is created, but this can be customized.  See
 When @option{--ttl} is used, cached entries are automatically deleted
 when they have expired.
 
+@item --ipfs[=@var{gateway}]
+When used in conjunction with @option{--cache}, instruct @command{guix
+publish} to publish substitutes over the @uref{https://ipfs.io, IPFS
+distributed data store} in addition to HTTP.
+
+@quotation Note
+As of version @value{VERSION}, IPFS support is experimental.  You're welcome
+to share your experience with the developers by emailing
+@email{guix-devel@@gnu.org}!
+@end quotation
+
+The IPFS HTTP interface must be reachable at @var{gateway}, by default
+@code{localhost:5001}.  To get it up and running, it is usually enough to
+install IPFS and start the IPFS daemon:
+
+@example
+$ guix package -i go-ipfs
+$ ipfs init
+$ ipfs daemon
+@end example
+
+For more information on how to get started with IPFS, please refer to the
+@uref{https://docs.ipfs.io/introduction/usage/, IPFS documentation}.
+
 @item --workers=@var{N}
 When @option{--cache} is used, request the allocation of @var{N} worker
 threads to ``bake'' archives.
diff --git a/guix/scripts/publish.scm b/guix/scripts/publish.scm
index a236f3e45c..2accd632ab 100644
--- a/guix/scripts/publish.scm
+++ b/guix/scripts/publish.scm
@@ -59,6 +59,7 @@
   #:use-module ((guix build utils)
                 #:select (dump-port mkdir-p find-files))
   #:use-module ((guix build syscalls) #:select (set-thread-name))
+  #:use-module ((guix ipfs) #:prefix ipfs:)
   #:export (%public-key
             %private-key
 
@@ -78,6 +79,8 @@ Publish ~a over HTTP.\n") %store-directory)
                          compress archives at LEVEL"))
   (display (G_ "
   -c, --cache=DIRECTORY  cache published items to DIRECTORY"))
+  (display (G_ "
+      --ipfs[=GATEWAY]   publish items over IPFS via GATEWAY"))
   (display (G_ "
       --workers=N        use N workers to bake items"))
   (display (G_ "
@@ -168,6 +171,10 @@ compression disabled~%"))
         (option '(#\c "cache") #t #f
                 (lambda (opt name arg result)
                   (alist-cons 'cache arg result)))
+        (option '("ipfs") #f #t
+                (lambda (opt name arg result)
+                  (alist-cons 'ipfs (or arg (ipfs:%ipfs-base-url))
+                              result)))
         (option '("workers") #t #f
                 (lambda (opt name arg result)
                   (alist-cons 'workers (string->number* arg)
@@ -237,12 +244,15 @@ compression disabled~%"))
 
 (define* (narinfo-string store store-path key
                          #:key (compression %no-compression)
-                         (nar-path "nar") file-size)
+                         (nar-path "nar") file-size ipfs)
   "Generate a narinfo key/value string for STORE-PATH; an exception is raised
 if STORE-PATH is invalid.  Produce a URL that corresponds to COMPRESSION.  The
 narinfo is signed with KEY.  NAR-PATH specifies the prefix for nar URLs.
+
 Optionally, FILE-SIZE can specify the size in bytes of the compressed NAR; it
-informs the client of how much needs to be downloaded."
+informs the client of how much needs to be downloaded.
+
+When IPFS is true, it is the IPFS object identifier for STORE-PATH."
   (let* ((path-info  (query-path-info store store-path))
          (compression (actual-compression store-path compression))
          (url        (encode-and-join-uri-path
@@ -295,7 +305,12 @@ References: ~a~%~a"
                                  (apply throw args))))))
          (signature  (base64-encode-string
                       (canonical-sexp->string (signed-string info)))))
-    (format #f "~aSignature: 1;~a;~a~%" info (gethostname) signature)))
+    (format #f "~aSignature: 1;~a;~a~%~a" info (gethostname) signature
+
+            ;; Append IPFS info below the signed part.
+            (if ipfs
+                (string-append "IPFS: " ipfs "\n")
+                ""))))
 
 (define* (not-found request
                     #:key (phrase "Resource not found")
@@ -406,10 +421,12 @@ items.  Failing that, we could eventually have to recompute them and return
 (define* (render-narinfo/cached store request hash
                                 #:key ttl (compression %no-compression)
                                 (nar-path "nar")
-                                cache pool)
+                                cache pool ipfs?)
   "Respond to the narinfo request for REQUEST.  If the narinfo is available in
 CACHE, then send it; otherwise, return 404 and \"bake\" that nar and narinfo
-requested using POOL."
+requested using POOL.
+
+When IPFS? is true, additionally publish binaries over IPFS."
   (define (delete-entry narinfo)
     ;; Delete NARINFO and the corresponding nar from CACHE.
     (let ((nar (string-append (string-drop-right narinfo
@@ -447,7 +464,8 @@ requested using POOL."
                  (bake-narinfo+nar cache item
                                    #:ttl ttl
                                    #:compression compression
-                                   #:nar-path nar-path)))
+                                   #:nar-path nar-path
+                                   #:ipfs? ipfs?)))
 
              (when ttl
                (single-baker 'cache-cleanup
@@ -465,7 +483,7 @@ requested using POOL."
 
 (define* (bake-narinfo+nar cache item
                            #:key ttl (compression %no-compression)
-                           (nar-path "/nar"))
+                           (nar-path "/nar") ipfs?)
   "Write the narinfo and nar for ITEM to CACHE."
   (let* ((compression (actual-compression item compression))
          (nar         (nar-cache-file cache item
@@ -502,7 +520,11 @@ requested using POOL."
                                    #:nar-path nar-path
                                    #:compression compression
                                    #:file-size (and=> (stat nar #f)
-                                                      stat:size))
+                                                      stat:size)
+                                   #:ipfs
+                                   (and ipfs?
+                                        (ipfs:content-name
+                                         (ipfs:add-file-tree item))))
                    port))))))
 
 ;; XXX: Declare the 'X-Nar-Compression' HTTP header, which is in fact for
@@ -766,7 +788,8 @@ blocking."
                                cache pool
                                narinfo-ttl
                                (nar-path "nar")
-                               (compression %no-compression))
+                               (compression %no-compression)
+                               ipfs?)
   (define nar-path?
     (let ((expected (split-and-decode-uri-path nar-path)))
       (cut equal? expected <>)))
@@ -793,7 +816,8 @@ blocking."
                                       #:pool pool
                                       #:ttl narinfo-ttl
                                       #:nar-path nar-path
-                                      #:compression compression)
+                                      #:compression compression
+                                      #:ipfs? ipfs?)
                (render-narinfo store request hash
                                #:ttl narinfo-ttl
                                #:nar-path nar-path
@@ -847,13 +871,14 @@ blocking."
 (define* (run-publish-server socket store
                              #:key (compression %no-compression)
                              (nar-path "nar") narinfo-ttl
-                             cache pool)
+                             cache pool ipfs?)
   (run-server (make-request-handler store
                                     #:cache cache
                                     #:pool pool
                                     #:nar-path nar-path
                                     #:narinfo-ttl narinfo-ttl
-                                    #:compression compression)
+                                    #:compression compression
+                                    #:ipfs? ipfs?)
               concurrent-http-server
               `(#:socket ,socket)))
 
@@ -902,6 +927,7 @@ blocking."
            (repl-port (assoc-ref opts 'repl))
            (cache     (assoc-ref opts 'cache))
            (workers   (assoc-ref opts 'workers))
+           (ipfs      (assoc-ref opts 'ipfs))
 
            ;; Read the key right away so that (1) we fail early on if we can't
            ;; access them, and (2) we can then drop privileges.
@@ -930,14 +956,15 @@ consider using the '--user' option!~%")))
         (set-thread-name "guix publish")
 
         (with-store store
-          (run-publish-server socket store
-                              #:cache cache
-                              #:pool (and cache (make-pool workers
-                                                           #:thread-name
-                                                           "publish worker"))
-                              #:nar-path nar-path
-                              #:compression compression
-                              #:narinfo-ttl ttl))))))
+          (parameterize ((ipfs:%ipfs-base-url ipfs))
+            (run-publish-server socket store
+                                #:cache cache
+                                #:pool (and cache (make-pool workers
+                                                             #:thread-name
+                                                             "publish worker"))
+                                #:nar-path nar-path
+                                #:compression compression
+                                #:narinfo-ttl ttl)))))))
 
 ;;; Local Variables:
 ;;; eval: (put 'single-baker 'scheme-indent-function 1)
-- 
2.20.1

^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [bug#33899] [PATCH 5/5] DRAFT substitute: Add IPFS support.
  2018-12-28 23:15 ` [bug#33899] [PATCH 1/5] Add (guix json) Ludovic Courtès
                     ` (2 preceding siblings ...)
  2018-12-28 23:15   ` [bug#33899] [PATCH 4/5] publish: Add IPFS support Ludovic Courtès
@ 2018-12-28 23:15   ` Ludovic Courtès
  3 siblings, 0 replies; 23+ messages in thread
From: Ludovic Courtès @ 2018-12-28 23:15 UTC (permalink / raw)
  To: 33899

Missing:

  - documentation
  - command-line options
  - progress report when downloading over IPFS
  - fallback when we fail to fetch from IPFS

* guix/scripts/substitute.scm (<narinfo>)[ipfs]: New field.
(read-narinfo): Read "IPFS".
(process-substitution/http): New procedure, with code formerly in
'process-substitution'.
(process-substitution): Check for IPFS and call 'ipfs:restore-file-tree'
when IPFS is true.
---
 guix/scripts/substitute.scm | 106 +++++++++++++++++++++---------------
 1 file changed, 61 insertions(+), 45 deletions(-)

diff --git a/guix/scripts/substitute.scm b/guix/scripts/substitute.scm
index 53b1777241..8be15e4f13 100755
--- a/guix/scripts/substitute.scm
+++ b/guix/scripts/substitute.scm
@@ -42,6 +42,7 @@
   #:use-module (guix progress)
   #:use-module ((guix build syscalls)
                 #:select (set-thread-name))
+  #:use-module ((guix ipfs) #:prefix ipfs:)
   #:use-module (ice-9 rdelim)
   #:use-module (ice-9 regex)
   #:use-module (ice-9 match)
@@ -281,7 +282,7 @@ failure, return #f and #f."
 \f
 (define-record-type <narinfo>
   (%make-narinfo path uri uri-base compression file-hash file-size nar-hash nar-size
-                 references deriver system signature contents)
+                 references deriver system ipfs signature contents)
   narinfo?
   (path         narinfo-path)
   (uri          narinfo-uri)
@@ -294,6 +295,7 @@ failure, return #f and #f."
   (references   narinfo-references)
   (deriver      narinfo-deriver)
   (system       narinfo-system)
+  (ipfs         narinfo-ipfs)
   (signature    narinfo-signature)      ; canonical sexp
   ;; The original contents of a narinfo file.  This field is needed because we
   ;; want to preserve the exact textual representation for verification purposes.
@@ -335,7 +337,7 @@ s-expression: ~s~%")
   "Return a narinfo constructor for narinfos originating from CACHE-URL.  STR
 must contain the original contents of a narinfo file."
   (lambda (path url compression file-hash file-size nar-hash nar-size
-                references deriver system signature)
+                references deriver system ipfs signature)
     "Return a new <narinfo> object."
     (%make-narinfo path
                    ;; Handle the case where URL is a relative URL.
@@ -352,6 +354,7 @@ must contain the original contents of a narinfo file."
                      ((or #f "") #f)
                      (_ deriver))
                    system
+                   ipfs
                    (false-if-exception
                     (and=> signature narinfo-signature->canonical-sexp))
                    str)))
@@ -386,7 +389,7 @@ No authentication and authorization checks are performed here!"
                    (narinfo-maker str url)
                    '("StorePath" "URL" "Compression"
                      "FileHash" "FileSize" "NarHash" "NarSize"
-                     "References" "Deriver" "System"
+                     "References" "Deriver" "System" "IPFS"
                      "Signature"))))
 
 (define (narinfo-sha256 narinfo)
@@ -947,13 +950,58 @@ authorized substitutes."
     (wtf
      (error "unknown `--query' command" wtf))))
 
+(define* (process-substitution/http narinfo destination uri
+                                    #:key print-build-trace?)
+  (unless print-build-trace?
+    (format (current-error-port)
+            (G_ "Downloading ~a...~%") (uri->string uri)))
+
+  (let*-values (((raw download-size)
+                 ;; Note that Hydra currently generates Nars on the fly
+                 ;; and doesn't specify a Content-Length, so
+                 ;; DOWNLOAD-SIZE is #f in practice.
+                 (fetch uri #:buffered? #f #:timeout? #f))
+                ((progress)
+                 (let* ((comp     (narinfo-compression narinfo))
+                        (dl-size  (or download-size
+                                      (and (equal? comp "none")
+                                           (narinfo-size narinfo))))
+                        (reporter (if print-build-trace?
+                                      (progress-reporter/trace
+                                       destination
+                                       (uri->string uri) dl-size
+                                       (current-error-port))
+                                      (progress-reporter/file
+                                       (uri->string uri) dl-size
+                                       (current-error-port)
+                                       #:abbreviation nar-uri-abbreviation))))
+                   (progress-report-port reporter raw)))
+                ((input pids)
+                 ;; NOTE: This 'progress' port of current process will be
+                 ;; closed here, while the child process doing the
+                 ;; reporting will close it upon exit.
+                 (decompressed-port (and=> (narinfo-compression narinfo)
+                                           string->symbol)
+                                    progress)))
+    ;; Unpack the Nar at INPUT into DESTINATION.
+    (restore-file input destination)
+    (close-port input)
+
+    ;; Wait for the reporter to finish.
+    (every (compose zero? cdr waitpid) pids)
+
+    ;; Skip a line after what 'progress-reporter/file' printed, and another
+    ;; one to visually separate substitutions.
+    (display "\n\n" (current-error-port))))
+
 (define* (process-substitution store-item destination
                                #:key cache-urls acl print-build-trace?)
   "Substitute STORE-ITEM (a store file name) from CACHE-URLS, and write it to
 DESTINATION as a nar file.  Verify the substitute against ACL."
   (let* ((narinfo (lookup-narinfo cache-urls store-item
                                   (cut valid-narinfo? <> acl)))
-         (uri     (and=> narinfo narinfo-uri)))
+         (uri     (and=> narinfo narinfo-uri))
+         (ipfs    (and=> narinfo narinfo-ipfs)))
     (unless uri
       (leave (G_ "no valid substitute for '~a'~%")
              store-item))
@@ -961,47 +1009,15 @@ DESTINATION as a nar file.  Verify the substitute against ACL."
     ;; Tell the daemon what the expected hash of the Nar itself is.
     (format #t "~a~%" (narinfo-hash narinfo))
 
-    (unless print-build-trace?
-      (format (current-error-port)
-              (G_ "Downloading ~a...~%") (uri->string uri)))
-
-    (let*-values (((raw download-size)
-                   ;; Note that Hydra currently generates Nars on the fly
-                   ;; and doesn't specify a Content-Length, so
-                   ;; DOWNLOAD-SIZE is #f in practice.
-                   (fetch uri #:buffered? #f #:timeout? #f))
-                  ((progress)
-                   (let* ((comp     (narinfo-compression narinfo))
-                          (dl-size  (or download-size
-                                        (and (equal? comp "none")
-                                             (narinfo-size narinfo))))
-                          (reporter (if print-build-trace?
-                                        (progress-reporter/trace
-                                         destination
-                                         (uri->string uri) dl-size
-                                         (current-error-port))
-                                        (progress-reporter/file
-                                         (uri->string uri) dl-size
-                                         (current-error-port)
-                                         #:abbreviation nar-uri-abbreviation))))
-                     (progress-report-port reporter raw)))
-                  ((input pids)
-                   ;; NOTE: This 'progress' port of current process will be
-                   ;; closed here, while the child process doing the
-                   ;; reporting will close it upon exit.
-                   (decompressed-port (and=> (narinfo-compression narinfo)
-                                             string->symbol)
-                                      progress)))
-      ;; Unpack the Nar at INPUT into DESTINATION.
-      (restore-file input destination)
-      (close-port input)
-
-      ;; Wait for the reporter to finish.
-      (every (compose zero? cdr waitpid) pids)
-
-      ;; Skip a line after what 'progress-reporter/file' printed, and another
-      ;; one to visually separate substitutions.
-      (display "\n\n" (current-error-port)))))
+    (if ipfs
+        (begin
+          (unless print-build-trace?
+            (format (current-error-port)
+                    (G_ "Downloading from IPFS ~s...~%") ipfs))
+          (ipfs:restore-file-tree ipfs destination))
+        (process-substitution/http narinfo destination uri
+                                   #:print-build-trace?
+                                   print-build-trace?))))
 
 \f
 ;;;
-- 
2.20.1


* [bug#33899] [PATCH 0/5] Distributing substitutes over IPFS
  2018-12-28 23:12 [bug#33899] [PATCH 0/5] Distributing substitutes over IPFS Ludovic Courtès
  2018-12-28 23:15 ` [bug#33899] [PATCH 1/5] Add (guix json) Ludovic Courtès
@ 2019-01-07 14:43 ` Hector Sanjuan
  2019-01-14 13:17   ` Ludovic Courtès
  2019-05-13 18:51 ` Alex Griffin
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 23+ messages in thread
From: Hector Sanjuan @ 2019-01-07 14:43 UTC (permalink / raw)
  To: 33899@debbugs.gnu.org; +Cc: mail, go-ipfs-wg

‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Saturday, December 29, 2018 12:12 AM, Ludovic Courtès <ludo@gnu.org> wrote:

> Hello Guix!
>
> Here is a first draft adding support to distribute and retrieve substitutes
> over IPFS. This builds on discussions at the R-B Summit with Héctor Sanjuan
> of IPFS, lewo of Nix, Pierre Neidhardt, and also on the work Florian
> Paul Schmidt posted on guix-devel last month.
>
> The IPFS daemon exposes an HTTP API and the (guix ipfs) module provides
> bindings to a subset of that API. This module also implements a custom
> “directory” format to store directory trees in IPFS (IPFS already provides
> “UnixFS” and “tar” but they store too many or too few file attributes.)
>
> ‘guix publish’ and ‘guix substitute’ use (guix ipfs) to
> store and retrieve store items. Complete directory trees are stored in
> IPFS “as is”, rather than as compressed archives (nars). This allows for
> deduplication in IPFS. ‘guix publish’ adds a new “IPFS” field in
> narinfos and ‘guix substitute’ can then query those objects over IPFS.
> So the idea is that you still get narinfos over HTTP(S), and then you
> have the option of downloading substitutes over IPFS.
>
> I’ve pushed these patches in ‘wip-ipfs-substitutes’. This is rough on the
> edges and probably buggy, but the adventurous among us might want to give
> it a spin. :-)
>
> Thanks,
> Ludo’.


Hey! Happy new year! This is great news. I'm very glad to see this.
I haven't tried this yet but looking at the code there are a couple
of things to point out.

1) The doc strings usually refer to the IPFS HTTP API as GATEWAY. go-ipfs
has a read/write API (on :5001) and a read-only API that we call "gateway"
and which runs on :8080. The gateway, apart from handling most of the
read-only methods from the HTTP API, also handles paths like "/ipfs/<cid>"
or "/ipns/<name>" gracefully, and returns an autogenerated webpage for
directory-type CIDs. The gateway does not allow "publishing". Therefore I think
the doc strings should say "IPFS daemon API" rather than "GATEWAY".

2) I'm not proficient enough in Scheme to grasp the details of the
"directory" format. If I understand it right, you keep a separate manifest
object listing the directory structure, the contents and the executable bit
for each. Thus, when adding a store item you add all the files separately and
this manifest. And when retrieving a store item you fetch the manifest and
reconstruct the tree by fetching the contents in it (and applying the
executable flag). Is this correct? This works, but it can be improved:

You can add all the files/folders in a single request. If I'm
reading it right, now each file is added separately (and gets pinned
separately). It would probably make sense to add it all in a single request,
letting IPFS store the directory structure as "unixfs". You can
additionally add the sexp file with the dir-structure and executable flags
as an extra file to the root folder. This would allow fetching the whole thing
with a single request too (/api/v0/get?arg=<hash>), and pinning a single hash
recursively (rather than each file separately). After getting the whole thing, you
will need to chmod +x things accordingly.
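The reconstruction step described above (fetch the whole tree in one request, then re-apply the executable flags recorded in the manifest) could look roughly like this. This is an illustrative Python sketch, not the Guile code from the patches, and the manifest layout (a path-to-flag mapping) is a hypothetical stand-in for the sexp file:

```python
import os
import stat

def apply_executable_flags(root, manifest):
    """After fetching the whole tree (e.g. via /api/v0/get), chmod +x
    the files that the manifest marks as executable.  MANIFEST maps a
    relative path to a boolean executable flag (hypothetical format)."""
    for relpath, executable in manifest.items():
        if executable:
            path = os.path.join(root, relpath)
            mode = os.stat(path).st_mode
            os.chmod(path, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
```

Since UnixFS v1 does not preserve the executable bit, some such post-processing step is needed regardless of whether files are added one by one or in a single request.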

It will probably take some trial and error to get the multipart right
to upload everything in a single request. The Go HTTP client code doing
this can be found at:

https://github.com/ipfs/go-ipfs-files/blob/master/multifilereader.go#L96

As you see, a directory part in the multipart will have the Content-Type header
set to "application/x-directory". The best way to see how "abspath" etc. is set
is probably to sniff an `ipfs add -r <testfolder>` operation (localhost:5001).
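To make the multipart layout concrete, here is a rough sketch of how such a body could be assembled, following the convention described above (an application/x-directory part per folder). The exact part headers go-ipfs expects (e.g. "abspath") should be confirmed by sniffing `ipfs add -r` as suggested; this is an assumption-laden illustration, not a verified client:

```python
def build_add_multipart(boundary, entries):
    """Build a raw multipart/form-data body for a single /api/v0/add
    request.  ENTRIES is a list of (path, data) pairs where DATA is
    None for a directory part and bytes for a regular file."""
    body = b""
    for path, data in entries:
        ctype = ("application/x-directory" if data is None
                 else "application/octet-stream")
        body += (f"--{boundary}\r\n"
                 f'Content-Disposition: form-data; name="file"; '
                 f'filename="{path}"\r\n'
                 f"Content-Type: {ctype}\r\n\r\n").encode()
        body += (data or b"") + b"\r\n"
    body += f"--{boundary}--\r\n".encode()
    return body
```

Sending one such body means the daemon sees the whole tree at once and can build the "unixfs" directory nodes itself, instead of receiving N independent uploads.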

Once UnixFSv2 lands, you will be in a position to just drop the sexp file
altogether.

Let me know if you have any doubts, I'll make my best to answer them. In the
meantime I'll try to get more familiar with Guix.

Cheers,

Hector

PS. There is a place where it says "ifps" instead of "ipfs". A very common typo.


* [bug#33899] [PATCH 0/5] Distributing substitutes over IPFS
  2019-01-07 14:43 ` [bug#33899] [PATCH 0/5] Distributing substitutes over IPFS Hector Sanjuan
@ 2019-01-14 13:17   ` Ludovic Courtès
  2019-01-18  9:08     ` Hector Sanjuan
  0 siblings, 1 reply; 23+ messages in thread
From: Ludovic Courtès @ 2019-01-14 13:17 UTC (permalink / raw)
  To: Hector Sanjuan
  Cc: go-ipfs-wg@ipfs.io, Pierre Neidhardt, 33899@debbugs.gnu.org

Hi Hector,

Happy new year to you too!  :-)

Hector Sanjuan <code@hector.link> skribis:

> 1) The doc strings usually refer to the IPFS HTTP API as GATEWAY. go-ipfs
> has a read/write API (on :5001) and a read-only API that we call "gateway"
> and which runs on :8080. The gateway, apart from handling most of the
> read-only methods from the HTTP API, also handles paths like "/ipfs/<cid>"
> or "/ipns/<name>" gracefully, and returns an autogenerated webpage for
> directory-type CIDs. The gateway does not allow to "publish". Therefore I think
> the doc strings should say "IPFS daemon API" rather than "GATEWAY".

Indeed, I’ll change that.

> 2) I'm not proficient enough in schema to grasp the details of the
> "directory" format. If I understand it right, you keep a separate manifest
> object listing the directory structure, the contents and the executable bit
> for each. Thus, when adding a store item you add all the files separately and
> this manifest. And when retrieving a store item you fetch the manifest and
> reconstruct the tree by fetching the contents in it (and applying the
> executable flag). Is this correct? This works, but it can be improved:

That’s correct.

> You can add all the files/folders in a single request. If I'm
> reading it right, now each file is added separately (and gets pinned
> separately). It would probably make sense to add it all in a single request,
> letting IPFS store the directory structure as "unixfs". You can
> additionally add the sexp file with the dir-structure and executable flags
> as an extra file to the root folder. This would allow fetching the whole thing
> with a single request too /api/v0/get?arg=<hash>. And to pin a single hash
> recursively (and not each separately). After getting the whole thing, you
> will need to chmod +x things accordingly.

Yes, I’m well aware of “unixfs”.  The problem, as I see it, is that it
stores “too much” in a way (we don’t need to store the mtimes or
permissions; we could ignore them upon reconstruction though), and “not
enough” in another way (the executable bit is lost, IIUC.)

> > It will probably take some trial and error to get the multipart right
> to upload all in a single request. The Go code HTTP Clients doing
> this can be found at:
>
> https://github.com/ipfs/go-ipfs-files/blob/master/multifilereader.go#L96
>
> As you see, a directory part in the multipart will have the content-type Header
> set to "application/x-directory". The best way to see how "abspath" etc is set
> is probably to sniff an `ipfs add -r <testfolder>` operation (localhost:5001).
>
> Once UnixFSv2 lands, you will be in a position to just drop the sexp file
> altogether.

Yes, that makes sense.  In the meantime, I guess we have to keep using
our own format.

What are the performance implications of adding and retrieving files one
by one like I did?  I understand we’re doing N HTTP requests to the
local IPFS daemon where “ipfs add -r” makes a single request, but this
alone can’t be much of a problem since communication is happening
locally.  Does pinning each file separately somehow incur additional
overhead?

Thanks for your feedback!

Ludo’.


* [bug#33899] [PATCH 0/5] Distributing substitutes over IPFS
  2019-01-14 13:17   ` Ludovic Courtès
@ 2019-01-18  9:08     ` Hector Sanjuan
  2019-01-18  9:52       ` Ludovic Courtès
  0 siblings, 1 reply; 23+ messages in thread
From: Hector Sanjuan @ 2019-01-18  9:08 UTC (permalink / raw)
  To: Ludovic Courtès
  Cc: go-ipfs-wg@ipfs.io, Pierre Neidhardt, 33899@debbugs.gnu.org

‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Monday, January 14, 2019 2:17 PM, Ludovic Courtès <ludo@gnu.org> wrote:

> Hi Hector,
>
> Happy new year to you too! :-)
>
> Hector Sanjuan code@hector.link skribis:
>
> > 1.  The doc strings usually refer to the IPFS HTTP API as GATEWAY. go-ipfs
> >     has a read/write API (on :5001) and a read-only API that we call "gateway"
> >     and which runs on :8080. The gateway, apart from handling most of the
> >     read-only methods from the HTTP API, also handles paths like "/ipfs/<cid>"
> >     or "/ipns/<name>" gracefully, and returns an autogenerated webpage for
> >     directory-type CIDs. The gateway does not allow "publishing". Therefore I think
> >     the doc strings should say "IPFS daemon API" rather than "GATEWAY".
> >
>
> Indeed, I’ll change that.
>
> > 2.  I'm not proficient enough in Scheme to grasp the details of the
> >     "directory" format. If I understand it right, you keep a separate manifest
> >     object listing the directory structure, the contents and the executable bit
> >     for each. Thus, when adding a store item you add all the files separately and
> >     this manifest. And when retrieving a store item you fetch the manifest and
> >     reconstruct the tree by fetching the contents in it (and applying the
> >     executable flag). Is this correct? This works, but it can be improved:
> >
>
> That’s correct.
>
> > You can add all the files/folders in a single request. If I'm
> > reading it right, now each file is added separately (and gets pinned
> > separately). It would probably make sense to add it all in a single request,
> > letting IPFS store the directory structure as "unixfs". You can
> > additionally add the sexp file with the dir-structure and executable flags
> > as an extra file to the root folder. This would allow fetching the whole thing
> > with a single request too /api/v0/get?arg=<hash>. And to pin a single hash
> > recursively (and not each separately). After getting the whole thing, you
> > will need to chmod +x things accordingly.
>
> Yes, I’m well aware of “unixfs”. The problem, as I see it, is that it
> stores “too much” in a way (we don’t need to store the mtimes or
> permissions; we could ignore them upon reconstruction though), and “not
> enough” in another way (the executable bit is lost, IIUC.)

Actually the only metadata that Unixfs stores is size:
https://github.com/ipfs/go-unixfs/blob/master/pb/unixfs.proto and by all
means the amount of metadata is negligible for the actual data stored
and serves to give you a progress bar when you are downloading.

Having IPFS understand what files are part of a single item is important
because you can pin/unpin,diff,patch all of them as a whole. Unixfs
also takes care of handling the case where the directories need to
be sharded because there are too many entries. When the user
puts the single root hash in ipfs.io/ipfs/<hash>, it will display
correctly the underlying files and the people will be
able to navigate the actual tree with both web and cli. Note that
every file added to IPFS is getting wrapped as a Unixfs block
anyways. You are just saving some "directory" nodes by adding
them separately.

There is an alternative way which is using IPLD to implement a custom
block format that carries the executable bit information and nothing
else. But I don't see significant advantages at this point for the extra
work it requires.

>
>> > It will probably take some trial and error to get the multipart right
> > to upload all in a single request. The Go code HTTP Clients doing
> > this can be found at:
> > https://github.com/ipfs/go-ipfs-files/blob/master/multifilereader.go#L96
> > As you see, a directory part in the multipart will have the content-type Header
> > set to "application/x-directory". The best way to see how "abspath" etc is set
> > is probably to sniff an `ipfs add -r <testfolder>` operation (localhost:5001).
> > Once UnixFSv2 lands, you will be in a position to just drop the sexp file
> > altogether.
>
> Yes, that makes sense. In the meantime, I guess we have to keep using
> our own format.
>
> What are the performance implications of adding and retrieving files one
> by one like I did? I understand we’re doing N HTTP requests to the
> local IPFS daemon where “ipfs add -r” makes a single request, but this
> alone can’t be much of a problem since communication is happening
> locally. Does pinning each file separately somehow incur additional
> overhead?
>

Yes, pinning separately is slow and incurs overhead. Pins are themselves
stored in a merkle tree, so each pin involves reading, patching, and saving
it. This gets quite slow when you have very large pinsets because the pin
blocks grow in size, and your pinset will grow very large if you do this.
Additionally, the pinning operation itself requires a global lock, making it
slower still.

But, even if it were fast, you would not have a way to easily unpin
anything that becomes obsolete, or an overview of where things belong.
It is also unlikely that a single IPFS daemon will be able to store
everything you build, so you might find yourself using IPFS Cluster
soon to distribute the storage across multiple nodes, and then you will
effectively be adding remotely.


> Thanks for your feedback!
>
> Ludo’.

Thanks for working on this!

Hector


* [bug#33899] [PATCH 0/5] Distributing substitutes over IPFS
  2019-01-18  9:08     ` Hector Sanjuan
@ 2019-01-18  9:52       ` Ludovic Courtès
  2019-01-18 11:26         ` Hector Sanjuan
  0 siblings, 1 reply; 23+ messages in thread
From: Ludovic Courtès @ 2019-01-18  9:52 UTC (permalink / raw)
  To: Hector Sanjuan
  Cc: go-ipfs-wg@ipfs.io, Pierre Neidhardt, 33899@debbugs.gnu.org

Hello,

Hector Sanjuan <code@hector.link> skribis:

> ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
> On Monday, January 14, 2019 2:17 PM, Ludovic Courtès <ludo@gnu.org> wrote:

[...]

>> Yes, I’m well aware of “unixfs”. The problem, as I see it, is that it
>> stores “too much” in a way (we don’t need to store the mtimes or
>> permissions; we could ignore them upon reconstruction though), and “not
>> enough” in another way (the executable bit is lost, IIUC.)
>
> Actually the only metadata that Unixfs stores is size:
> https://github.com/ipfs/go-unixfs/blob/master/pb/unixfs.proto and by all
> means the amount of metadata is negligible for the actual data stored
> and serves to give you a progress bar when you are downloading.

Yes, the format I came up with also stores the size so we can eventually
display a progress bar.

> Having IPFS understand what files are part of a single item is important
> because you can pin/unpin,diff,patch all of them as a whole. Unixfs
> also takes care of handling the case where the directories need to
> be sharded because there are too many entries.

Isn’t there a way, then, to achieve the same behavior with the custom
format?  The /api/v0/add entry point has a ‘pin’ argument; I suppose we
could leave it to false except when we add the top-level “directory”
node?  Wouldn’t that give us behavior similar to that of Unixfs?
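The suggestion above amounts to flipping the 'pin' query argument on /api/v0/add per node. A minimal sketch of the URL construction, assuming the daemon's default API endpoint on localhost:5001 (the function name is illustrative, not from the patches):

```python
from urllib.parse import urlencode

def add_url(base, pin):
    """Query URL for /api/v0/add: pin=false for the individual file
    nodes, pin=true only for the top-level "directory" node, so that a
    single recursive pin keeps the whole store item alive."""
    return f"{base}/api/v0/add?" + urlencode({"pin": "true" if pin else "false"})
```

For example, every file would be posted to `add_url("http://localhost:5001", False)` and only the final "directory" manifest to `add_url("http://localhost:5001", True)`.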

> When the user puts the single root hash in ipfs.io/ipfs/<hash>, it
> will display correctly the underlying files and the people will be
> able to navigate the actual tree with both web and cli.

Right, though that’s less important in my view.

> Note that every file added to IPFS is getting wrapped as a Unixfs
> block anyways. You are just saving some "directory" nodes by adding
> them separately.

Hmm weird.  When I do /api/v0/add, I’m really just passing a byte
vector; there’s no notion of a “file” here, AFAICS.  Or am I missing
something?

>> > It will probably need some trial an error to get the multi-part right
>> > to upload all in a single request. The Go code HTTP Clients doing
>> > this can be found at:
>> > https://github.com/ipfs/go-ipfs-files/blob/master/multifilereader.go#L96
>> > As you see, a directory part in the multipart will have the content-type Header
>> > set to "application/x-directory". The best way to see how "abspath" etc is set
>> > is probably to sniff an `ipfs add -r <testfolder>` operation (localhost:5001).
>> > Once UnixFSv2 lands, you will be in a position to just drop the sexp file
>> > altogether.
>>
>> Yes, that makes sense. In the meantime, I guess we have to keep using
>> our own format.
>>
>> What are the performance implications of adding and retrieving files one
>> by one like I did? I understand we’re doing N HTTP requests to the
>> local IPFS daemon where “ipfs add -r” makes a single request, but this
>> alone can’t be much of a problem since communication is happening
>> locally. Does pinning each file separately somehow incur additional
>> overhead?
>>
>
> Yes, pinning separately is slow and incurs overhead. Pins are themselves
> stored in a merkle tree, so each pin involves reading, patching, and saving
> it. This gets quite slow when you have very large pinsets because the pin
> blocks grow in size, and your pinset will grow very large if you do this.
> Additionally, the pinning operation itself requires a global lock, making
> it slower still.

OK, I see.

> But, even if it were fast, you would not have a way to easily unpin
> anything that becomes obsolete, or an overview of where things belong.
> It is also unlikely that a single IPFS daemon will be able to store
> everything you build, so you might find yourself using IPFS Cluster
> soon to distribute the storage across multiple nodes, and then you will
> effectively be adding remotely.

Currently, ‘guix publish’ stores things as long as they are requested,
and then for the duration specified with ‘--ttl’.  I suppose we could
have similar behavior with IPFS: if an item hasn’t been requested for
the specified duration, then we unpin it.

Does that make sense?
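For illustration, that eviction policy could be sketched as follows (a Python sketch with an assumed 30-day TTL; the helper and data layout are hypothetical, not actual ‘guix publish’ code, which is Scheme):

```python
import time

TTL = 30 * 24 * 3600  # assumed retention period, in seconds

def items_to_unpin(last_requested, now=None):
    """Return the store items (CIDs) whose last request is older than TTL.
    `last_requested` maps CID -> Unix timestamp of the last substitute
    request; the expired ones would then be passed to /api/v0/pin/rm."""
    now = time.time() if now is None else now
    return [cid for cid, ts in last_requested.items() if now - ts > TTL]

last = {"QmFresh": 1_000_000,                    # requested "now"
        "QmStale": 1_000_000 - 40 * 24 * 3600}   # last requested 40 days ago
print(items_to_unpin(last, now=1_000_000))       # -> ['QmStale']
```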

Thanks for your help!

Ludo’.

^ permalink raw reply	[flat|nested] 23+ messages in thread

* [bug#33899] [PATCH 0/5] Distributing substitutes over IPFS
  2019-01-18  9:52       ` Ludovic Courtès
@ 2019-01-18 11:26         ` Hector Sanjuan
  2019-07-01 21:36           ` Pierre Neidhardt
  0 siblings, 1 reply; 23+ messages in thread
From: Hector Sanjuan @ 2019-01-18 11:26 UTC (permalink / raw)
  To: Ludovic Courtès
  Cc: go-ipfs-wg@ipfs.io, Pierre Neidhardt, 33899@debbugs.gnu.org

‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Friday, January 18, 2019 10:52 AM, Ludovic Courtès <ludo@gnu.org> wrote:

> Hello,
>
> Hector Sanjuan code@hector.link skribis:
>
> > ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
> > On Monday, January 14, 2019 2:17 PM, Ludovic Courtès ludo@gnu.org wrote:
>
> [...]
>

>
> Isn’t there a way, then, to achieve the same behavior with the custom
> format? The /api/v0/add entry point has a ‘pin’ argument; I suppose we
> could leave it to false except when we add the top-level “directory”
> node? Wouldn’t that give us behavior similar to that of Unixfs?
>

Yes. What you could do is add every file flatly/separately (with pin=false)
and, at the end, add an IPLD object with references to all the files you
added, including the exec-bit information (and size?).
This is just a JSON file:

{
   "name": "package name",
   "contents": [
       {
           "path": "/file/path", # so you know where to extract it later
           "exec": true,
           "ipfs": { "/": "Qmhash..." }
       },
       ...
   ]
}

This needs to be added to IPFS with the /api/v0/dag/put endpoint (this
converts it to CBOR - IPLD-Cbor is the actual block format used here).
When this is pinned (?pin=true), this will pin all the things referenced
from it recursively in the way we want.

So this will be quite similar to unixfs. But note that if this blob
ever grows over the 2M block-size limit because you have a package with
many files, you will need to start solving problems that unixfs solves
automatically now (directory sharding).

Because IPLD-cbor is supported, ipfs, the gateway, etc. will know how to
display these manifests, the info in them, and their links.
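For instance, building such a manifest could be sketched like this (Python illustration; the package name, paths, and CIDs are placeholders, and the actual dag/put HTTP call is only indicated in a comment):

```python
import json

def make_manifest(name, entries):
    """Build the JSON manifest described above.  `entries` is a list of
    (path, executable?, cid) tuples.  The {"/": cid} form is IPLD link
    notation: /api/v0/dag/put turns each one into a real CBOR link, so
    pinning the manifest recursively pins every referenced file."""
    return {
        "name": name,
        "contents": [{"path": p, "exec": x, "ipfs": {"/": c}}
                     for p, x, c in entries],
    }

manifest = make_manifest("hello-2.10", [         # hypothetical package
    ("/bin/hello", True, "QmHashOfBinary..."),   # placeholder CIDs
    ("/share/doc/README", False, "QmHashOfDoc..."),
])
# json.dumps(manifest) would then be POSTed to /api/v0/dag/put (?pin=true).
print(json.dumps(manifest, indent=2))
```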


> > When the user puts the single root hash in ipfs.io/ipfs/<hash>, it
> > will display correctly the underlying files and the people will be
> > able to navigate the actual tree with both web and cli.
>
> Right, though that’s less important in my view.
>
> > Note that every file added to IPFS is getting wrapped as a Unixfs
> > block anyways. You are just saving some "directory" nodes by adding
> > them separately.
>
> Hmm weird. When I do /api/v0/add, I’m really just passing a byte
> vector; there’s no notion of a “file” here, AFAICS. Or am I missing
> something?

They are wrapped in Unixfs blocks anyway by default. From the moment
the file is >256K it will get chunked into several pieces, and a
Unixfs block (or multiple, for a really big file) is necessary to
reference them. In this case the root hash will be a Unixfs node
with links to the parts.

There is a "raw-leaves" option which does not wrap the individual
blocks with unixfs, so if the file is small enough not to be chunked,
you can avoid the default unixfs wrapping this way.
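To make the chunking threshold concrete, here is a sketch of the default fixed-size chunker's behaviour (Python illustration; 256 KiB is go-ipfs's default chunk size, the rest is an assumption-free arithmetic demo):

```python
CHUNK_SIZE = 256 * 1024  # go-ipfs default fixed-size chunker

def chunk(data, size=CHUNK_SIZE):
    """Split file content into fixed-size leaf blocks, as the default
    chunker does.  A multi-chunk file additionally needs a UnixFS node
    linking the leaves; with raw-leaves, a single-chunk file can be
    stored as one raw block with no UnixFS wrapper at all."""
    return [data[i:i + size] for i in range(0, len(data), size)]

small = b"x" * 1000            # one chunk: can stay a single raw block
big = b"y" * (600 * 1024)      # 600 KiB: three chunks, needs a linking node
print(len(chunk(small)), len(chunk(big)))  # -> 1 3
```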


>
> > > > It will probably need some trial and error to get the multi-part right
> > > > to upload all in a single request. The Go code HTTP Clients doing
> > > > this can be found at:
> > > > https://github.com/ipfs/go-ipfs-files/blob/master/multifilereader.go#L96
> > > > As you see, a directory part in the multipart will have the content-type Header
> > > > set to "application/x-directory". The best way to see how "abspath" etc is set
> > > > is probably to sniff an `ipfs add -r <testfolder>` operation (localhost:5001).
> > > > Once UnixFSv2 lands, you will be in a position to just drop the sexp file
> > > > altogether.
> > >
> > > Yes, that makes sense. In the meantime, I guess we have to keep using
> > > our own format.
> > > What are the performance implications of adding and retrieving files one
> > > by one like I did? I understand we’re doing N HTTP requests to the
> > > local IPFS daemon where “ipfs add -r” makes a single request, but this
> > > alone can’t be much of a problem since communication is happening
> > > locally. Does pinning each file separately somehow incur additional
> > > overhead?
> >
> > Yes, pinning separately is slow and incurs overhead. Pins are stored
> > in a merkle tree themselves, so it involves reading, patching and saving.
> > This gets quite slow when you have very large pinsets, because your pin
> > block size grows. Your pinset will grow very large if you do this.
> > Additionally, the pinning operation itself requires a global lock,
> > making it slower.
>
> OK, I see.

I should add that even if you want to /add all files separately (and then
put the IPLD manifest I described above), you can still add them all in the same
request (it becomes easier as you just need to put more parts in the multipart
and don't have to worry about names/folders/paths).

The /add endpoint will forcefully close the HTTP connection for every
/add (long story), and small delays might add up to a big one. This is
especially relevant when using IPFS Cluster, where /add might send the
blocks somewhere else and needs to do some other things.
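A rough sketch of the single multipart /add request described above (Python; only the body construction is shown — abspath headers and the actual HTTP POST are omitted, cf. the go-ipfs-files link quoted earlier in the thread):

```python
import uuid

def multipart_add_body(parts):
    """Build a multipart/form-data body for one /api/v0/add request.
    `parts` is a list of (name, content_type, payload) tuples; a
    directory entry uses content type "application/x-directory" with an
    empty payload, as described earlier.  Sketch only: a real client
    must also set the abspath-related headers."""
    boundary = uuid.uuid4().hex
    out = bytearray()
    for name, ctype, payload in parts:
        out += (f"--{boundary}\r\n"
                f'Content-Disposition: form-data; name="file"; filename="{name}"\r\n'
                f"Content-Type: {ctype}\r\n\r\n").encode()
        out += payload + b"\r\n"
    out += f"--{boundary}--\r\n".encode()
    return boundary, bytes(out)

boundary, body = multipart_add_body([
    ("dir", "application/x-directory", b""),
    ("dir/one.txt", "application/octet-stream", b"first file"),
    ("dir/two.txt", "application/octet-stream", b"second file"),
])
# `body` would be POSTed once to /api/v0/add with
# Content-Type: multipart/form-data; boundary=<boundary>
print(body.count(b"Content-Disposition"))  # -> 3
```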


>
> > But, even if it was fast, you will not have a way to easily unpin
> > anything that becomes obsolete or have an overview of where things
> > belong. It is also unlikely that a single IPFS daemon will be able to
> > store everything you build, so you might find yourself using IPFS Cluster
> > soon to distribute the storage across multiple nodes and then you will
> > be effectively adding remotely.
>
> Currently, ‘guix publish’ stores things as long as they are requested,
> and then for the duration specified with ‘--ttl’. I suppose we could
> have similar behavior with IPFS: if an item hasn’t been requested for
> the specified duration, then we unpin it.
>
> Does that make sense?

Yes, in fact I wanted IPFS Cluster to support a TTL so that things are
automatically unpinned when the TTL expires, too.

>
> Thanks for your help!
>
> Ludo’.

Thanks!

Hector


* [bug#33899] [PATCH 0/5] Distributing substitutes over IPFS
  2018-12-28 23:12 [bug#33899] [PATCH 0/5] Distributing substitutes over IPFS Ludovic Courtès
  2018-12-28 23:15 ` [bug#33899] [PATCH 1/5] Add (guix json) Ludovic Courtès
  2019-01-07 14:43 ` [bug#33899] [PATCH 0/5] Distributing substitutes over IPFS Hector Sanjuan
@ 2019-05-13 18:51 ` Alex Griffin
  2020-12-29  9:59 ` [bug#33899] Ludo's patch rebased on master Maxime Devos
  2021-06-06 17:54 ` [bug#33899] [PATCH 0/5] Distributing substitutes over IPFS Tony Olagbaiye
  4 siblings, 0 replies; 23+ messages in thread
From: Alex Griffin @ 2019-05-13 18:51 UTC (permalink / raw)
  To: 33899

Do I understand correctly that the only reason you don't just store nar files is for deduplication? Reading [this page][1] suggests to me that you might be overthinking it. IPFS already uses a content-driven chunking algorithm that might provide good enough deduplication on its own. It also looks like you can use your own chunker, so a future improvement could be implementing a custom chunker that makes sure to split nar files at the file boundaries within them.

[1]: https://github.com/ipfs/archives
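The deduplication property described above can be demonstrated with a toy content-defined chunker (Python; IPFS's real chunkers are Rabin/Buzhash-based, and this simplistic rolling-sum variant only illustrates why chunk boundaries survive an offset shift):

```python
import random

def cdc_chunks(data, mask=0x3F, window=16):
    """Cut a chunk whenever the sum of the last `window` bytes matches a
    boundary pattern.  Boundaries depend only on local content, not on
    byte offsets, so shifting the data leaves most chunks unchanged."""
    chunks, start = [], 0
    for i in range(window, len(data)):
        if sum(data[i - window:i]) & mask == 0:
            chunks.append(bytes(data[start:i]))
            start = i
    chunks.append(bytes(data[start:]))
    return chunks

random.seed(0)
payload = bytes(random.randrange(256) for _ in range(20000))
a = cdc_chunks(payload)
b = cdc_chunks(b"some nar header prepended" + payload)  # every offset shifts
# Only the chunks near the start differ; a content-addressed store would
# store or fetch all the shared chunks once.
print(len(set(a) & set(b)) > len(a) // 2)
```

A fixed-size chunker, by contrast, would produce entirely different chunks for `b`, since every 256 KiB boundary lands on shifted content.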

-- 
Alex Griffin


* [bug#33899] [PATCH 0/5] Distributing substitutes over IPFS
  2019-01-18 11:26         ` Hector Sanjuan
@ 2019-07-01 21:36           ` Pierre Neidhardt
  2019-07-06  8:44             ` Pierre Neidhardt
                               ` (2 more replies)
  0 siblings, 3 replies; 23+ messages in thread
From: Pierre Neidhardt @ 2019-07-01 21:36 UTC (permalink / raw)
  To: Hector Sanjuan, Ludovic Courtès, Antoine Eiche
  Cc: go-ipfs-wg@ipfs.io, 33899@debbugs.gnu.org


Hi!

(Re-sending to debbugs, sorry for the double email :p)

A little update/recap after many months! :)

I talked with Héctor and some other people from IPFS + I reviewed Ludo's
patch, so now I have a little better understanding of the current state
of affairs.

- We could store the substitutes as tarballs on IPFS, but this has
  some possible downsides:

  - We would need to use IPFS' tar chunker to deduplicate the content of
    the tarball.  But the tar chunker is not well maintained currently,
    and it's not clear whether it's reproducible at the moment, so it
    would need some more work.

  - Tarballs might induce some performance cost.  Nix had attempted
    something similar in the past and this may have incurred a significant
    performance penalty, although this remains to be confirmed.
    Lewo?

- Ludo's patch stores all files on IPFS individually.  This way we don't
  need to touch the tar chunker, so it's less work :)
  This raises some other issues however:

  - Extra metadata:  IPFS stores files on UnixFSv1 which does not
    include the executable bit.

    - Right now we store an s-exp manifest with a list of files and a
      list of executable bits.  But maybe we don't have to roll our own.

    - UnixFSv1 has some metadata field, but Héctor and Alex did not
      recommend using it (not sure why though).

    - We could use UnixFSv2 but it's not released yet and it's unclear when
      it's going to be released.  So we can't really count on it right now.

    - IPLD: As Héctor suggested in the previous email, we could leverage
      IPLD and generate a JSON object that references the files with
      their paths together with an "executable?" property.
      A problem would arise if this IPLD object grows over the 2M
      block-size limit because then we would have to shard it (something
      that UnixFS would do automatically for us).

  - Flat storage vs. tree storage: Right now we are storing the files
    separately, but this has some shortcomings, namely we need multiple
    "get" requests instead of just one, and that IPFS does
    not "know" that those files are related.  (We lose the web view of
    the tree, etc.)  Storing them as a tree could be better.
    I don't understand if that would work with the "IPLD manifest"
    suggested above.  Héctor?

  - Pinning: Pinning all files separately incurs an overhead.  It's
    enough to just pin the IPLD object since it propagates recursively.
    When adding a tree, then it's no problem since pinning is only done once.

  - IPFS endpoint calls: instead of adding each file individually, it's
    possible to add them all in one go.  Can we add all files at once
    while using a flat storage? (I.e. not adding them all under a common
    root folder.)

To sum up, here is what remains to be done on the current patch:

- Add all files in one go without pinning them.
- Store as the file tree?  Can we still use the IPLD object to reference
  the files in the tree?  Else use the "raw-leaves" option to avoid
  wrapping small files in UnixFS blocks.
- Remove the Scheme manifest if IPLD can do the job.
- Generate the IPLD object and pin it.

Any corrections?
Thoughts?

Cheers!

-- 
Pierre Neidhardt
https://ambrevar.xyz/



* [bug#33899] [PATCH 0/5] Distributing substitutes over IPFS
  2019-07-01 21:36           ` Pierre Neidhardt
@ 2019-07-06  8:44             ` Pierre Neidhardt
  2019-07-12 20:02             ` Molly Mackinlay
  2019-07-12 20:15             ` Ludovic Courtès
  2 siblings, 0 replies; 23+ messages in thread
From: Pierre Neidhardt @ 2019-07-06  8:44 UTC (permalink / raw)
  To: Hector Sanjuan, Ludovic Courtès, Antoine Eiche
  Cc: go-ipfs-wg@ipfs.io, 33899@debbugs.gnu.org


Link to the Nix integration discussion:
https://github.com/NixOS/nix/issues/859.

-- 
Pierre Neidhardt
https://ambrevar.xyz/



* [bug#33899] [PATCH 0/5] Distributing substitutes over IPFS
  2019-07-01 21:36           ` Pierre Neidhardt
  2019-07-06  8:44             ` Pierre Neidhardt
@ 2019-07-12 20:02             ` Molly Mackinlay
  2019-07-15  9:20               ` Alex Potsides
  2019-07-12 20:15             ` Ludovic Courtès
  2 siblings, 1 reply; 23+ messages in thread
From: Molly Mackinlay @ 2019-07-12 20:02 UTC (permalink / raw)
  To: Pierre Neidhardt
  Cc: Alex Potsides, Hector Sanjuan, Andrew Nesbitt,
	33899@debbugs.gnu.org, Eric Myhre, Jessica Schilling,
	go-ipfs-wg@ipfs.io, Antoine Eiche


Thanks for the update Pierre! Also adding Alex, Jessica, Eric and Andrew
from the package managers discussions at IPFS Camp as FYI.

Generating the ipld manifest with the metadata and the tree of files should
also be fine AFAIK - I’m sure Hector and Eric can expand more on how to
compose them, but data storage format shouldn’t make a big difference for
the ipld manifest.

On Mon, Jul 1, 2019 at 2:36 PM Pierre Neidhardt <mail@ambrevar.xyz> wrote:

> Hi!
>
> (Re-sending to debbugs, sorry for the double email :p)
>
> A little update/recap after many months! :)
>
> I talked with Héctor and some other people from IPFS + I reviewed Ludo's
> patch so now I have a little better understanding of the current state
> of affairs.
>
> - We could store the substitutes as tarballs on IPFS, but this has
>   some possible downsides:
>
>   - We would need to use IPFS' tar chunker to deduplicate the content of
>     the tarball.  But the tar chunker is not well maintained currently,
>     and it's not clear whether it's reproducible at the moment, so it
>     would need some more work.
>
>   - Tarballs might induce some performance cost.  Nix had attempted
>     something similar in the past and this may have incurred a significant
>     performance penalty, although this remains to be confirmed.
>     Lewo?
>
> - Ludo's patch stores all files on IPFS individually.  This way we don't
>   need to touch the tar chunker, so it's less work :)
>   This raises some other issues however:
>
>   - Extra metadata:  IPFS stores files on UnixFSv1 which does not
>     include the executable bit.
>
>     - Right now we store a s-exp manifest with a list of files and a
>       list of executable bits.  But maybe we don't have to roll out our
> own.
>
>     - UnixFSv1 has some metadata field, but Héctor and Alex did not
>       recommend using it (not sure why though).
>
>     - We could use UnixFSv2 but it's not released yet and it's unclear when
>       it's going to be released.  So we can't really count on it right now.
>
>     - IPLD: As Héctor suggested in the previous email, we could leverage
>       IPLD and generate a JSON object that references the files with
>       their paths together with an "executable?" property.
>       A problem would arise if this IPLD object grows over the 2M
>       block-size limit because then we would have to shard it (something
>       that UnixFS would do automatically for us).
>
>   - Flat storage vs. tree storage: Right now we are storing the files
>     separately, but this has some shortcomings, namely we need multiple
>     "get" requests instead of just one, and that IPFS does
>     not "know" that those files are related.  (We lose the web view of
>     the tree, etc.)  Storing them as tree could be better.
>     I don't understand if that would work with the "IPLD manifest"
>     suggested above.  Héctor?
>
>   - Pinning: Pinning all files separately incurs an overhead.  It's
>     enough to just pin the IPLD object since it propagates recursively.
>     When adding a tree, then it's no problem since pinning is only done
> once.
>
>   - IPFS endpoint calls: instead of adding each file individually, it's
>     possible to add them all in one go.  Can we add all files at once
>     while using a flat storage? (I.e. not adding them all under a common
>     root folder.)
>
> To sum up, here is what remains to be done on the current patch:
>
> - Add all files in one go without pinning them.
> - Store as the file tree?  Can we still use the IPLD object to reference
>   the files in the tree?  Else use the "raw-leaves" option to avoid
>   wrapping small files in UnixFS blocks.
> - Remove the Scheme manifest if IPLD can do the job.
> - Generate the IPLD object and pin it.
>
> Any corrections?
> Thoughts?
>
> Cheers!
>
> --
> Pierre Neidhardt
> https://ambrevar.xyz/
>
> --
> You received this message because you are subscribed to the Google Groups
> "Go IPFS Working Group" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to go-ipfs-wg+unsubscribe@ipfs.io.
> To view this discussion on the web visit
> https://groups.google.com/a/ipfs.io/d/msgid/go-ipfs-wg/87zhlxe8t9.fsf%40ambrevar.xyz
> .
>



* [bug#33899] [PATCH 0/5] Distributing substitutes over IPFS
  2019-07-01 21:36           ` Pierre Neidhardt
  2019-07-06  8:44             ` Pierre Neidhardt
  2019-07-12 20:02             ` Molly Mackinlay
@ 2019-07-12 20:15             ` Ludovic Courtès
  2019-07-14 22:31               ` Hector Sanjuan
  2 siblings, 1 reply; 23+ messages in thread
From: Ludovic Courtès @ 2019-07-12 20:15 UTC (permalink / raw)
  To: Pierre Neidhardt
  Cc: Hector Sanjuan, Antoine Eiche, go-ipfs-wg@ipfs.io,
	33899@debbugs.gnu.org

Hello!

Pierre Neidhardt <mail@ambrevar.xyz> skribis:

> A little update/recap after many months! :)

Thank you, and apologies for the delay!

>   - Extra metadata:  IPFS stores files on UnixFSv1 which does not
>     include the executable bit.
>
>     - Right now we store a s-exp manifest with a list of files and a
>       list of executable bits.  But maybe we don't have to roll out our own.
>
>     - UnixFSv1 has some metadata field, but Héctor and Alex did not
>       recommend using it (not sure why though).
>
>     - We could use UnixFSv2 but it's not released yet and it's unclear when
>       it's going to be released.  So we can't really count on it right now.

UnixFSv1 is not an option because it lacks the executable bit; UnixFSv2
would be appropriate, though it stores timestamps that we don’t need
(not necessarily a problem).

>   - Flat storage vs. tree storage: Right now we are storing the files
>     separately, but this has some shortcomings, namely we need multiple
>     "get" requests instead of just one, and that IPFS does
>     not "know" that those files are related.  (We lose the web view of
>     the tree, etc.)  Storing them as tree could be better.
>     I don't understand if that would work with the "IPLD manifest"
>     suggested above.  Héctor?

I don’t consider the web view a strong argument :-) since we could write
tools to deal with whatever format we use.

Regarding multiple GET requests: we could pipeline them, and it seems
more like an implementation detail to me.  The real question is whether
making separate GET requests prevents some optimization in IPFS.

>   - Pinning: Pinning all files separately incurs an overhead.  It's
>     enough to just pin the IPLD object since it propagates recursively.
>     When adding a tree, then it's no problem since pinning is only done once.

Where’s the overhead exactly?

>   - IPFS endpoint calls: instead of adding each file individually, it's
>     possible to add them all in one go.  Can we add all files at once
>     while using a flat storage? (I.e. not adding them all under a common
>     root folder.)

Again, is the concern that we’re making one GET and thus one round trip
per file, or is there some additional cost under the hood?

Thanks for the summary and explanations!

Ludo’.


* [bug#33899] [PATCH 0/5] Distributing substitutes over IPFS
  2019-07-12 20:15             ` Ludovic Courtès
@ 2019-07-14 22:31               ` Hector Sanjuan
  2019-07-15  9:24                 ` Ludovic Courtès
  0 siblings, 1 reply; 23+ messages in thread
From: Hector Sanjuan @ 2019-07-14 22:31 UTC (permalink / raw)
  To: Ludovic Courtès
  Cc: Antoine Eiche, go-ipfs-wg@ipfs.io, Pierre Neidhardt,
	33899@debbugs.gnu.org

Hey! Thanks for reviving this discussion!

‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Friday, July 12, 2019 10:15 PM, Ludovic Courtès <ludo@gnu.org> wrote:

> Hello!
>
> Pierre Neidhardt mail@ambrevar.xyz skribis:
>
> > A little update/recap after many months! :)
>
> Thank you, and apologies for the delay!
>
> > -   Extra metadata: IPFS stores files on UnixFSv1 which does not
> >     include the executable bit.
> >     -   Right now we store a s-exp manifest with a list of files and a
> >         list of executable bits. But maybe we don't have to roll out our own.
> >
> >     -   UnixFSv1 has some metadata field, but Héctor and Alex did not
> >         recommend using it (not sure why though).
> >
> >     -   We could use UnixFSv2 but it's not released yet and it's unclear when
> >         it's going to be released. So we can't really count on it right now.
> >
>
> UnixFSv1 is not an option because it lacks the executable bit; UnixFSv2
> would be appropriate, though it stores timestamps that we don’t need
> (not necessarily a problem).
>
> > -   Flat storage vs. tree storage: Right now we are storing the files
> >     separately, but this has some shortcomings, namely we need multiple
> >     "get" requests instead of just one, and that IPFS does
> >     not "know" that those files are related. (We lose the web view of
> >     the tree, etc.) Storing them as tree could be better.
> >     I don't understand if that would work with the "IPLD manifest"
> >     suggested above. Héctor?
> >
>
> I don’t consider the web view a strong argument :-) since we could write
> tools to deal with whatever format we use.
>
> Regarding multiple GET requests: we could pipeline them, and it seems
> more like an implementation detail to me. The real question is whether
> making separate GET requests prevents some optimization in IPFS.
>
> > -   Pinning: Pinning all files separately incurs an overhead. It's
> >     enough to just pin the IPLD object since it propagates recursively.
> >     When adding a tree, then it's no problem since pinning is only done once.
> >
>
> Where’s the overhead exactly?

There are reasons why we are proposing to create a single DAG with an
IPLD object at the root. Pinning has a big overhead because it
involves locking, reading, parsing, and writing an internal pin-DAG. This
is especially relevant when the pinset is very large.

Doing multiple GET requests also has overhead, like being unable to use
a single bitswap session (which, when downloading something new means a
big overhead since every request will have to find providers).

And it's not just the web view; it's the ability to walk/traverse all
the objects related to a given root natively, which also allows comparing
multiple trees and being more efficient for some things ("pin update",
for example). Your original idea is to create a manifest with
references to different parts. I'm just asking you to
create that manifest in a format where those references are understood
not only by you, the file creator, but by IPFS and any tool that can
read IPLD, by making it an IPLD object (which is just JSON).

The process of adding "something" to ipfs is as follows.

----
1. Add to IPFS: multipart upload equivalent to "ipfs add -r":

~/ipfstest $ ipfs add -r -q .
QmXVgwHR2c8KiPPxaoZAj4M4oNGW1qjZSsxMNE8sLWZWTP

2. Add manifest as IPLD object. dag/put a json file like:

cat <<EOF | ipfs dag put
{
  "executables": ["ipfstest/1.txt"],
  "root": {
    "/": "QmXVgwHR2c8KiPPxaoZAj4M4oNGW1qjZSsxMNE8sLWZWTP"
  }
}
EOF
---

That's it: "QmXVgwHR2c8KiPPxaoZAj4M4oNGW1qjZSsxMNE8sLWZWTP" is the root
of your package files.

"bafyreievcw5qoowhepwskcxybochrui65bbtsliuy7r6kyail4w5lyqnjm"
is the root of your manifest file with the list of executables
and a pointer to the other root.
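On the consumer side, a sketch of how the manifest could be used after fetching the files (Python illustration; the field names follow the example above, while the helper itself and the extraction layout are assumptions, not anything from the patch):

```python
import os
import stat

def apply_exec_bits(manifest, extract_root):
    """After retrieving the tree under manifest["root"] (e.g. via
    /api/v0/get) into `extract_root`, restore the execute permission
    recorded in the manifest's "executables" list — the one piece of
    metadata UnixFSv1 drops."""
    for rel in manifest.get("executables", []):
        target = os.path.join(extract_root, rel)
        mode = os.stat(target).st_mode
        os.chmod(target, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)

# Usage with the manifest from the dag put example above:
# apply_exec_bits({"executables": ["ipfstest/1.txt"], "root": {...}}, "/tmp/out")
```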


Hector


* [bug#33899] [PATCH 0/5] Distributing substitutes over IPFS
  2019-07-12 20:02             ` Molly Mackinlay
@ 2019-07-15  9:20               ` Alex Potsides
  0 siblings, 0 replies; 23+ messages in thread
From: Alex Potsides @ 2019-07-15  9:20 UTC (permalink / raw)
  To: Molly Mackinlay
  Cc: Hector Sanjuan, Antoine Eiche, Andrew Nesbitt,
	33899@debbugs.gnu.org, Eric Myhre, Pierre Neidhardt,
	Jessica Schilling, go-ipfs-wg@ipfs.io


The reason not to use the UnixFSv1 metadata field was that it's in the spec
<https://github.com/ipfs/specs/tree/master/unixfs#data-format> but it has
not really been implemented.  As it stands in v1, you'd have to add explicit
metadata types to the spec (executable, owner?, group?, etc.) because
protobufs need to know about everything ahead of time, and each
implementation would have to update to implement those.  This is all
possible & not a technical blocker, but since most effort is centred around
UnixFSv2, the timescales might not fit with people's requirements.

The more pragmatic approach Hector suggested was to wrap a CID that
resolves to the UnixFSv1 file in a JSON object that you could use to store
application-specific metadata - something similar to the UnixFSv1.5 section
<https://github.com/ipfs/camp/blob/master/DEEP_DIVES/package-managers/README.md#unixfs-v15>
in our notes from the Package Managers deep dive we did at camp.

a.

On Fri, Jul 12, 2019 at 9:03 PM Molly Mackinlay <molly@protocol.ai> wrote:

> Thanks for the update Pierre! Also adding Alex, Jessica, Eric and Andrew
> from the package managers discussions at IPFS Camp as FYI.
>
> Generating the ipld manifest with the metadata and the tree of files
> should also be fine AFAIK - I’m sure Hector and Eric can expand more on how
> to compose them, but data storage format shouldn’t make a big difference
> for the ipld manifest.
>
> On Mon, Jul 1, 2019 at 2:36 PM Pierre Neidhardt <mail@ambrevar.xyz> wrote:
>
>> Hi!
>>
>> (Re-sending to debbugs, sorry for the double email :p)
>>
>> A little update/recap after many months! :)
>>
>> I talked with Héctor and some other people from IPFS + I reviewed Ludo's
>> patch so now I have a little better understanding of the current state
>> of affairs.
>>
>> - We could store the substitutes as tarballs on IPFS, but this has
>>   some possible downsides:
>>
>>   - We would need to use IPFS' tar chunker to deduplicate the content of
>>     the tarball.  But the tar chunker is not well maintained currently,
>>     and it's not clear whether it's reproducible at the moment, so it
>>     would need some more work.
>>
>>   - Tarballs might induce some performance cost.  Nix had attempted
>>     something similar in the past and this may have incurred a significant
>>     performance penalty, although this remains to be confirmed.
>>     Lewo?
>>
>> - Ludo's patch stores all files on IPFS individually.  This way we don't
>>   need to touch the tar chunker, so it's less work :)
>>   This raises some other issues however:
>>
>>   - Extra metadata:  IPFS stores files on UnixFSv1 which does not
>>     include the executable bit.
>>
>>     - Right now we store a s-exp manifest with a list of files and a
>>       list of executable bits.  But maybe we don't have to roll out our
>> own.
>>
>>     - UnixFSv1 has some metadata field, but Héctor and Alex did not
>>       recommend using it (not sure why though).
>>
>>     - We could use UnixFSv2 but it's not released yet and it's unclear
>> when
>>       it's going to be released.  So we can't really count on it right
>> now.
>>
>>     - IPLD: As Héctor suggested in the previous email, we could leverage
>>       IPLD and generate a JSON object that references the files with
>>       their paths together with an "executable?" property.
>>       A problem would arise if this IPLD object grows over the 2M
>>       block-size limit because then we would have to shard it (something
>>       that UnixFS would do automatically for us).
>>
>>   - Flat storage vs. tree storage: Right now we are storing the files
>>     separately, but this has some shortcomings, namely we need multiple
>>     "get" requests instead of just one, and that IPFS does
>>     not "know" that those files are related.  (We lose the web view of
>>     the tree, etc.)  Storing them as tree could be better.
>>     I don't understand if that would work with the "IPLD manifest"
>>     suggested above.  Héctor?
>>
>>   - Pinning: Pinning all files separately incurs an overhead.  It's
>>     enough to just pin the IPLD object since it propagates recursively.
>>     When adding a tree, then it's no problem since pinning is only done
>> once.
>>
>>   - IPFS endpoint calls: instead of adding each file individually, it's
>>     possible to add them all in one go.  Can we add all files at once
>>     while using a flat storage? (I.e. not adding them all under a common
>>     root folder.)
>>
>> To sum up, here is what remains to be done on the current patch:
>>
>> - Add all files in one go without pinning them.
>> - Store as the file tree?  Can we still use the IPLD object to reference
>>   the files in the tree?  Else use the "raw-leaves" option to avoid
>>   wrapping small files in UnixFS blocks.
>> - Remove the Scheme manifest if IPLD can do the job.
>> - Generate the IPLD object and pin it.
>>
>> Any corrections?
>> Thoughts?
>>
>> Cheers!
>>
>> --
>> Pierre Neidhardt
>> https://ambrevar.xyz/
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "Go IPFS Working Group" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to go-ipfs-wg+unsubscribe@ipfs.io.
>> To view this discussion on the web visit
>> https://groups.google.com/a/ipfs.io/d/msgid/go-ipfs-wg/87zhlxe8t9.fsf%40ambrevar.xyz
>> .
>>
>



* [bug#33899] [PATCH 0/5] Distributing substitutes over IPFS
  2019-07-14 22:31               ` Hector Sanjuan
@ 2019-07-15  9:24                 ` Ludovic Courtès
  2019-07-15 10:10                   ` Pierre Neidhardt
  0 siblings, 1 reply; 23+ messages in thread
From: Ludovic Courtès @ 2019-07-15  9:24 UTC (permalink / raw)
  To: Hector Sanjuan
  Cc: Antoine Eiche, go-ipfs-wg@ipfs.io, Pierre Neidhardt,
	33899@debbugs.gnu.org

Hello Héctor!  :-)

Hector Sanjuan <code@hector.link> skribis:

> On Friday, July 12, 2019 10:15 PM, Ludovic Courtès <ludo@gnu.org> wrote:

[...]

>> > -   Pinning: Pinning all files separately incurs an overhead. It's
>> >     enough to just pin the IPLD object since it propagates recursively.
>> >     When adding a tree, then it's no problem since pinning is only done once.
>> >
>>
>> Where’s the overhead exactly?
>
> There are reasons why we are proposing to create a single DAG with an
> IPLD object at the root. Pinning has a big overhead because it
> involves locking, reading, parsing, and writing an internal pin-DAG. This
> is specially relevant when the pinset is very large.
>
> Doing multiple GET requests also has overhead, like being unable to use
> a single bitswap session (which, when downloading something new means a
> big overhead since every request will have to find providers).
>
> And it's not just the web view, it's the ability to walk/traverse all
> the objects related to a given root natively, which also allows comparing
> multiple trees and being more efficient for some things ("pin update",
> for example).  Your original idea is to create a manifest with
> references to different parts.  I'm just asking you to
> create that manifest in a format where those references are understood
> not only by you, the file creator, but by IPFS and any tool that can
> read IPLD, by making this an IPLD object (which is just JSON).

OK, I see.  Put this way, it seems like creating a DAG with an IPLD
object as its root is pretty compelling.
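
For illustration, a root object along those lines, encoded as DAG-JSON,
might look like this, with each {"/": …} link being a CID that IPFS tools
can traverse natively (the layout and CIDs below are a hypothetical sketch,
not a format we have settled on):

```json
{
  "version": 0,
  "items": [
    { "name": "bin/hello",
      "executable": true,
      "cid": { "/": "QmHypotheticalLeafCid1111111111111111111111111" } },
    { "name": "share/doc/README",
      "executable": false,
      "cid": { "/": "QmHypotheticalLeafCid2222222222222222222222222" } }
  ]
}
```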

Thanks for clarifying!

Ludo’.

^ permalink raw reply	[flat|nested] 23+ messages in thread

* [bug#33899] [PATCH 0/5] Distributing substitutes over IPFS
  2019-07-15  9:24                 ` Ludovic Courtès
@ 2019-07-15 10:10                   ` Pierre Neidhardt
  2019-07-15 10:21                     ` Hector Sanjuan
  0 siblings, 1 reply; 23+ messages in thread
From: Pierre Neidhardt @ 2019-07-15 10:10 UTC (permalink / raw)
  To: Ludovic Courtès, Hector Sanjuan
  Cc: Antoine Eiche, go-ipfs-wg@ipfs.io, 33899@debbugs.gnu.org

[-- Attachment #1: Type: text/plain, Size: 313 bytes --]

Héctor mentioned a possible issue with the IPLD manifest growing too big
(above 2MB, when a package contains very many files).
In that case we would need to implement some form of sharding.

Héctor, do you confirm?  Any idea on how to tackle this elegantly?

-- 
Pierre Neidhardt
https://ambrevar.xyz/

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 487 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* [bug#33899] [PATCH 0/5] Distributing substitutes over IPFS
  2019-07-15 10:10                   ` Pierre Neidhardt
@ 2019-07-15 10:21                     ` Hector Sanjuan
  0 siblings, 0 replies; 23+ messages in thread
From: Hector Sanjuan @ 2019-07-15 10:21 UTC (permalink / raw)
  To: Pierre Neidhardt
  Cc: Antoine Eiche, go-ipfs-wg\@ipfs.io, 33899\@debbugs.gnu.org

‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Monday, July 15, 2019 12:10 PM, Pierre Neidhardt <mail@ambrevar.xyz> wrote:

> Héctor mentioned a possible issue with the IPLD manifest growing too big
> (in case of too many files in a package), that is, above 2MB.
> Then we would need to implement some form of sharding.
>
> Héctor, do you confirm? Any idea on how to tackle this elegantly?
>

Building the DAG node the way I proposed (referencing a single root) should
be fine: unless you put very many executable files in that list, it should
stay well within the 2MB limit.
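
As a rough back-of-the-envelope check (the 2MB block limit is from this
thread; the ~100 bytes per manifest entry — name, CID, size field — is an
assumed figure for illustration only):

```python
BLOCK_LIMIT = 2 * 1024 * 1024   # ~2 MB IPFS block-size limit (from the thread)
BYTES_PER_ENTRY = 100           # assumed: file name + CID (~46 chars) + size

def max_entries(limit=BLOCK_LIMIT, per_entry=BYTES_PER_ENTRY):
    """Estimate how many leaf references fit in a single manifest block."""
    return limit // per_entry

print(max_entries())  # → 20971
```

So under these assumptions a single manifest accommodates on the order of
20,000 entries, which is more files than almost any store item contains.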


--
Hector

^ permalink raw reply	[flat|nested] 23+ messages in thread

* [bug#33899] Ludo's patch rebased on master
  2018-12-28 23:12 [bug#33899] [PATCH 0/5] Distributing substitutes over IPFS Ludovic Courtès
                   ` (2 preceding siblings ...)
  2019-05-13 18:51 ` Alex Griffin
@ 2020-12-29  9:59 ` Maxime Devos
  2021-06-06 17:54 ` [bug#33899] [PATCH 0/5] Distributing substitutes over IPFS Tony Olagbaiye
  4 siblings, 0 replies; 23+ messages in thread
From: Maxime Devos @ 2020-12-29  9:59 UTC (permalink / raw)
  To: 33899


[-- Attachment #1.1: Type: text/plain, Size: 641 bytes --]

Hi Guix,

I've rebased Ludovic's patch on master
(08d8c2d3c08e4f35325553e75abc76da40630334),
resolving merge conflicts.

Make and make check succeed, except for
tests/cve.scm and tests/swh.scm.  For completeness,
I've attached the logs of the failing tests.
I don't think they are related to the changes
in the patch, though.

I most likely won't have time to test and complete
this patch in the near future.

On an unrelated note, I've changed e-mail addresses
due to excessive spam-filtering.
-- 
Maxime Devos <maximedevos@telenet.be>
PGP Key: C1F3 3EE2 0C52 8FDB 7DD7  011F 49E3 EE22 1917 25EE
Freenode handle: mdevos

[-- Attachment #1.2: 0001-Add-guix-json.patch --]
[-- Type: text/x-patch, Size: 3723 bytes --]

From cc19a6bee26032fa32e83d2435d33dac76bec58d Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Ludovic=20Court=C3=A8s?= <ludo@gnu.org>
Date: Mon, 17 Dec 2018 00:05:55 +0100
Subject: [PATCH 1/5] Add (guix json).

* guix/swh.scm: Use (guix json).
(define-json-reader, define-json-mapping): Move to...
* guix/json.scm: ... here.  New file.
* Makefile.am (MODULES): Add it.
---
 Makefile.am   |  1 +
 guix/json.scm | 63 +++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 64 insertions(+)
 create mode 100644 guix/json.scm

diff --git a/Makefile.am b/Makefile.am
index 1a3ca227a4..81f502d877 100644
--- a/Makefile.am
+++ b/Makefile.am
@@ -95,6 +95,7 @@ MODULES =					\
   guix/bzr-download.scm            		\
   guix/git-download.scm				\
   guix/hg-download.scm				\
+  guix/json.scm					\
   guix/swh.scm					\
   guix/monads.scm				\
   guix/monad-repl.scm				\
diff --git a/guix/json.scm b/guix/json.scm
new file mode 100644
index 0000000000..d446f6894e
--- /dev/null
+++ b/guix/json.scm
@@ -0,0 +1,63 @@
+;;; GNU Guix --- Functional package management for GNU
+;;; Copyright © 2018 Ludovic Courtès <ludo@gnu.org>
+;;;
+;;; This file is part of GNU Guix.
+;;;
+;;; GNU Guix is free software; you can redistribute it and/or modify it
+;;; under the terms of the GNU General Public License as published by
+;;; the Free Software Foundation; either version 3 of the License, or (at
+;;; your option) any later version.
+;;;
+;;; GNU Guix is distributed in the hope that it will be useful, but
+;;; WITHOUT ANY WARRANTY; without even the implied warranty of
+;;; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+;;; GNU General Public License for more details.
+;;;
+;;; You should have received a copy of the GNU General Public License
+;;; along with GNU Guix.  If not, see <http://www.gnu.org/licenses/>.
+
+(define-module (guix json)
+  #:use-module (json)
+  #:use-module (srfi srfi-9)
+  #:export (define-json-mapping))
+
+;;; Commentary:
+;;;
+;;; This module provides tools to define mappings from JSON objects to SRFI-9
+;;; records.  This is useful when writing bindings to HTTP APIs.
+;;;
+;;; Code:
+
+(define-syntax-rule (define-json-reader json->record ctor spec ...)
+  "Define JSON->RECORD as a procedure that converts a JSON representation,
+read from a port, string, or hash table, into a record created by CTOR and
+following SPEC, a series of field specifications."
+  (define (json->record input)
+    (let ((table (cond ((port? input)
+                        (json->scm input))
+                       ((string? input)
+                        (json-string->scm input))
+                       ((hash-table? input)
+                        input))))
+      (let-syntax ((extract-field (syntax-rules ()
+                                    ((_ table (field key json->value))
+                                     (json->value (hash-ref table key)))
+                                    ((_ table (field key))
+                                     (hash-ref table key))
+                                    ((_ table (field))
+                                     (hash-ref table
+                                               (symbol->string 'field))))))
+        (ctor (extract-field table spec) ...)))))
+
+(define-syntax-rule (define-json-mapping rtd ctor pred json->record
+                      (field getter spec ...) ...)
+  "Define RTD as a record type with the given FIELDs and GETTERs, à la SRFI-9,
+and define JSON->RECORD as a conversion from JSON to a record of this type."
+  (begin
+    (define-record-type rtd
+      (ctor field ...)
+      pred
+      (field getter) ...)
+
+    (define-json-reader json->record ctor
+      (field spec ...) ...)))
-- 
2.29.2


[-- Attachment #1.3: 0002-tests-file-now-recurses-on-directories.patch --]
[-- Type: text/x-patch, Size: 2482 bytes --]

From f4cbc586fa09f24214261d2ee4e1e6a213a6c2d5 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Ludovic=20Court=C3=A8s?= <ludo@gnu.org>
Date: Fri, 28 Dec 2018 15:58:58 +0100
Subject: [PATCH 2/5] =?UTF-8?q?tests:=20'file=3D=3F'=20now=20recurses=20on?=
 =?UTF-8?q?=20directories.?=

* guix/tests.scm (not-dot?): New procedure.
(file=?)[executable?]: New procedure.
In 'regular case, check whether the executable bit is preserved.
Add 'directory case.
---
 guix/tests.scm | 25 +++++++++++++++++++++----
 1 file changed, 21 insertions(+), 4 deletions(-)

diff --git a/guix/tests.scm b/guix/tests.scm
index fc3d521163..d0f9e6d35a 100644
--- a/guix/tests.scm
+++ b/guix/tests.scm
@@ -30,11 +30,13 @@
   #:use-module (guix build-system gnu)
   #:use-module (gnu packages base)
   #:use-module (gnu packages bootstrap)
+  #:use-module (srfi srfi-1)
   #:use-module (srfi srfi-26)
   #:use-module (srfi srfi-34)
   #:use-module (srfi srfi-64)
   #:use-module (rnrs bytevectors)
   #:use-module (ice-9 match)
+  #:use-module (ice-9 ftw)
   #:use-module (ice-9 binary-ports)
   #:use-module (web uri)
   #:export (open-connection-for-tests
@@ -182,16 +184,31 @@ too expensive to build entirely in the test store."
             (loop (1+ i)))
           bv))))
 
+(define (not-dot? entry)
+  (not (member entry '("." ".."))))
+
 (define (file=? a b)
-  "Return true if files A and B have the same type and same content."
+  "Return true if files A and B have the same type and same content,
+recursively."
+  (define (executable? file)
+    (->bool (logand (stat:mode (lstat file)) #o100)))
+
   (and (eq? (stat:type (lstat a)) (stat:type (lstat b)))
        (case (stat:type (lstat a))
          ((regular)
-          (equal?
-           (call-with-input-file a get-bytevector-all)
-           (call-with-input-file b get-bytevector-all)))
+          (and (eqv? (executable? a) (executable? b))
+               (equal?
+                (call-with-input-file a get-bytevector-all)
+                (call-with-input-file b get-bytevector-all))))
          ((symlink)
           (string=? (readlink a) (readlink b)))
+         ((directory)
+          (let ((lst1 (scandir a not-dot?))
+                (lst2 (scandir b not-dot?)))
+            (and (equal? lst1 lst2)
+                 (every file=?
+                        (map (cut string-append a "/" <>) lst1)
+                        (map (cut string-append b "/" <>) lst2)))))
          (else
           (error "what?" (lstat a))))))
 
-- 
2.29.2


[-- Attachment #1.4: 0003-Add-guix-ipfs.patch --]
[-- Type: text/x-patch, Size: 13014 bytes --]

From 3dcd999dbb6860317459a006bc03bbc8d9d1fdc0 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Ludovic=20Court=C3=A8s?= <ludo@gnu.org>
Date: Fri, 28 Dec 2018 01:07:58 +0100
Subject: [PATCH 3/5] Add (guix ipfs).

* guix/ipfs.scm, tests/ipfs.scm: New files.
* Makefile.am (MODULES, SCM_TESTS): Add them.
---
 Makefile.am    |   2 +
 guix/ipfs.scm  | 250 +++++++++++++++++++++++++++++++++++++++++++++++++
 tests/ipfs.scm |  55 +++++++++++
 3 files changed, 307 insertions(+)
 create mode 100644 guix/ipfs.scm
 create mode 100644 tests/ipfs.scm

diff --git a/Makefile.am b/Makefile.am
index 81f502d877..ff7deacc44 100644
--- a/Makefile.am
+++ b/Makefile.am
@@ -123,6 +123,7 @@ MODULES =					\
   guix/cache.scm				\
   guix/cve.scm					\
   guix/workers.scm				\
+  guix/ipfs.scm					\
   guix/build-system.scm				\
   guix/build-system/android-ndk.scm		\
   guix/build-system/ant.scm			\
@@ -450,6 +451,7 @@ SCM_TESTS =					\
   tests/hackage.scm				\
   tests/import-utils.scm			\
   tests/inferior.scm				\
+  tests/ipfs.scm				\
   tests/lint.scm				\
   tests/modules.scm				\
   tests/monads.scm				\
diff --git a/guix/ipfs.scm b/guix/ipfs.scm
new file mode 100644
index 0000000000..e941feda6f
--- /dev/null
+++ b/guix/ipfs.scm
@@ -0,0 +1,250 @@
+;;; GNU Guix --- Functional package management for GNU
+;;; Copyright © 2018 Ludovic Courtès <ludo@gnu.org>
+;;;
+;;; This file is part of GNU Guix.
+;;;
+;;; GNU Guix is free software; you can redistribute it and/or modify it
+;;; under the terms of the GNU General Public License as published by
+;;; the Free Software Foundation; either version 3 of the License, or (at
+;;; your option) any later version.
+;;;
+;;; GNU Guix is distributed in the hope that it will be useful, but
+;;; WITHOUT ANY WARRANTY; without even the implied warranty of
+;;; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+;;; GNU General Public License for more details.
+;;;
+;;; You should have received a copy of the GNU General Public License
+;;; along with GNU Guix.  If not, see <http://www.gnu.org/licenses/>.
+
+(define-module (guix ipfs)
+  #:use-module (guix json)
+  #:use-module (guix base64)
+  #:use-module ((guix build utils) #:select (dump-port))
+  #:use-module (srfi srfi-1)
+  #:use-module (srfi srfi-11)
+  #:use-module (srfi srfi-26)
+  #:use-module (rnrs io ports)
+  #:use-module (rnrs bytevectors)
+  #:use-module (ice-9 match)
+  #:use-module (ice-9 ftw)
+  #:use-module (web uri)
+  #:use-module (web client)
+  #:use-module (web response)
+  #:export (%ipfs-base-url
+            add-file
+            add-file-tree
+            restore-file-tree
+
+            content?
+            content-name
+            content-hash
+            content-size
+
+            add-empty-directory
+            add-to-directory
+            read-contents
+            publish-name))
+
+;;; Commentary:
+;;;
+;;; This module implements bindings for the HTTP interface of the IPFS
+;;; gateway, documented here: <https://docs.ipfs.io/reference/api/http/>.  It
+;;; allows you to add and retrieve files over IPFS, and a few other things.
+;;;
+;;; Code:
+
+(define %ipfs-base-url
+  ;; URL of the IPFS gateway.
+  (make-parameter "http://localhost:5001"))
+
+(define* (call url decode #:optional (method http-post)
+               #:key body (false-if-404? #t) (headers '()))
+  "Invoke the endpoint at URL using METHOD.  Decode the resulting JSON body
+using DECODE, a one-argument procedure that takes an input port; when DECODE
+is false, return the input port.  When FALSE-IF-404? is true, return #f upon
+404 responses."
+  (let*-values (((response port)
+                 (method url #:streaming? #t
+                         #:body body
+
+                         ;; Always pass "Connection: close".
+                         #:keep-alive? #f
+                         #:headers `((connection close)
+                                     ,@headers))))
+    (cond ((= 200 (response-code response))
+           (if decode
+               (let ((result (decode port)))
+                 (close-port port)
+                 result)
+               port))
+          ((and false-if-404?
+                (= 404 (response-code response)))
+           (close-port port)
+           #f)
+          (else
+           (close-port port)
+           (throw 'ipfs-error url response)))))
+
+;; Result of a file addition.
+(define-json-mapping <content> make-content content?
+  json->content
+  (name   content-name "Name")
+  (hash   content-hash "Hash")
+  (bytes  content-bytes "Bytes")
+  (size   content-size "Size" string->number))
+
+;; Result of a 'patch/add-link' operation.
+(define-json-mapping <directory> make-directory directory?
+  json->directory
+  (hash   directory-hash "Hash")
+  (links  directory-links "Links" json->links))
+
+;; A "link".
+(define-json-mapping <link> make-link link?
+  json->link
+  (name   link-name "Name")
+  (hash   link-hash "Hash")
+  (size   link-size "Size" string->number))
+
+;; A "binding", also known as a "name".
+(define-json-mapping <binding> make-binding binding?
+  json->binding
+  (name   binding-name "Name")
+  (value  binding-value "Value"))
+
+(define (json->links json)
+  (match json
+    (#f    '())
+    (links (map json->link links))))
+
+(define %multipart-boundary
+  ;; XXX: We might want to find a more reliable boundary.
+  (string-append (make-string 24 #\-) "2698127afd7425a6"))
+
+(define (bytevector->form-data bv port)
+  "Write to PORT a 'multipart/form-data' representation of BV."
+  (display (string-append "--" %multipart-boundary "\r\n"
+                          "Content-Disposition: form-data\r\n"
+                          "Content-Type: application/octet-stream\r\n\r\n")
+           port)
+  (put-bytevector port bv)
+  (display (string-append "\r\n--" %multipart-boundary "--\r\n")
+           port))
+
+(define* (add-data data #:key (name "file.txt") recursive?)
+  "Add DATA, a bytevector, to IPFS.  Return a content object representing it."
+  (call (string-append (%ipfs-base-url)
+                       "/api/v0/add?arg=" (uri-encode name)
+                       "&recursive="
+                       (if recursive? "true" "false"))
+        json->content
+        #:headers
+        `((content-type
+           . (multipart/form-data
+              (boundary . ,%multipart-boundary))))
+        #:body
+        (call-with-bytevector-output-port
+         (lambda (port)
+           (bytevector->form-data data port)))))
+
+(define (not-dot? entry)
+  (not (member entry '("." ".."))))
+
+(define (file-tree->sexp file)
+  "Add FILE, recursively, to the IPFS, and return an sexp representing the
+directory's tree structure.
+
+Unlike IPFS's own \"UnixFS\" structure, this format preserves exactly what we
+need: like the nar format, it preserves the executable bit, but does not save
+the mtime or other Unixy attributes irrelevant in the store."
+  ;; The natural approach would be to insert each directory listing as an
+  ;; object of its own in IPFS.  However, this does not buy us much in terms
+  ;; of deduplication, but it does cause a lot of extra round trips when
+  ;; fetching it.  Thus, this sexp is "flat" in that only the leaves are
+  ;; inserted into the IPFS.
+  (let ((st (lstat file)))
+    (match (stat:type st)
+      ('directory
+       (let* ((parent  file)
+              (entries (map (lambda (file)
+                              `(entry ,file
+                                      ,(file-tree->sexp
+                                        (string-append parent "/" file))))
+                            (scandir file not-dot?)))
+              (size    (fold (lambda (entry total)
+                               (match entry
+                                 (('entry name (kind value size))
+                                  (+ total size))))
+                             0
+                             entries)))
+         `(directory ,entries ,size)))
+      ('symlink
+       `(symlink ,(readlink file) 0))
+      ('regular
+       (let ((size (stat:size st)))
+         (if (zero? (logand (stat:mode st) #o100))
+             `(file ,(content-name (add-file file)) ,size)
+             `(executable ,(content-name (add-file file)) ,size)))))))
+
+(define (add-file-tree file)
+  "Add FILE to the IPFS, recursively, using our own canonical directory
+format.  Return the resulting content object."
+  (add-data (string->utf8 (object->string
+                           `(file-tree (version 0)
+                                       ,(file-tree->sexp file))))))
+
+(define (restore-file-tree object file)
+  "Restore to FILE the tree pointed to by OBJECT."
+  (let restore ((tree (match (read (read-contents object))
+                        (('file-tree ('version 0) tree)
+                         tree)))
+                (file file))
+    (match tree
+      (('file object size)
+       (call-with-output-file file
+         (lambda (output)
+           (dump-port (read-contents object) output))))
+      (('executable object size)
+       (call-with-output-file file
+         (lambda (output)
+           (dump-port (read-contents object) output)))
+       (chmod file #o555))
+      (('symlink target size)
+       (symlink target file))
+      (('directory (('entry names entries) ...) size)
+       (mkdir file)
+       (for-each restore entries
+                 (map (cut string-append file "/" <>) names))))))
+
+(define* (add-file file #:key (name (basename file)))
+  "Add FILE under NAME to the IPFS and return a content object for it."
+  (add-data (match (call-with-input-file file get-bytevector-all)
+              ((? eof-object?) #vu8())
+              (bv bv))
+            #:name name))
+
+(define* (add-empty-directory #:key (name "directory"))
+  "Return a content object for an empty directory."
+  (add-data #vu8() #:recursive? #t #:name name))
+
+(define* (add-to-directory directory file name)
+  "Add FILE to DIRECTORY under NAME, and return the resulting directory.
+DIRECTORY and FILE must be hashes identifying objects in the IPFS store."
+  (call (string-append (%ipfs-base-url)
+                       "/api/v0/object/patch/add-link?arg="
+                       (uri-encode directory)
+                       "&arg=" (uri-encode name) "&arg=" (uri-encode file)
+                       "&create=true")
+        json->directory))
+
+(define* (read-contents object #:key offset length)
+  "Return an input port to read the content of OBJECT from."
+  (call (string-append (%ipfs-base-url)
+                       "/api/v0/cat?arg=" object)
+        #f))
+
+(define* (publish-name object)
+  "Publish OBJECT under the current peer ID."
+  (call (string-append (%ipfs-base-url)
+                       "/api/v0/name/publish?arg=" object)
+        json->binding))
diff --git a/tests/ipfs.scm b/tests/ipfs.scm
new file mode 100644
index 0000000000..3b662b22bd
--- /dev/null
+++ b/tests/ipfs.scm
@@ -0,0 +1,55 @@
+;;; GNU Guix --- Functional package management for GNU
+;;; Copyright © 2018 Ludovic Courtès <ludo@gnu.org>
+;;;
+;;; This file is part of GNU Guix.
+;;;
+;;; GNU Guix is free software; you can redistribute it and/or modify it
+;;; under the terms of the GNU General Public License as published by
+;;; the Free Software Foundation; either version 3 of the License, or (at
+;;; your option) any later version.
+;;;
+;;; GNU Guix is distributed in the hope that it will be useful, but
+;;; WITHOUT ANY WARRANTY; without even the implied warranty of
+;;; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+;;; GNU General Public License for more details.
+;;;
+;;; You should have received a copy of the GNU General Public License
+;;; along with GNU Guix.  If not, see <http://www.gnu.org/licenses/>.
+
+(define-module (test-ipfs)
+  #:use-module (guix ipfs)
+  #:use-module ((guix utils) #:select (call-with-temporary-directory))
+  #:use-module (guix tests)
+  #:use-module (web uri)
+  #:use-module (srfi srfi-64))
+
+;; Test the (guix ipfs) module.
+
+(define (ipfs-gateway-running?)
+  "Return true if the IPFS gateway is running at %IPFS-BASE-URL."
+  (let* ((uri    (string->uri (%ipfs-base-url)))
+         (socket (socket AF_INET SOCK_STREAM 0)))
+    (define connected?
+      (catch 'system-error
+        (lambda ()
+          (format (current-error-port)
+                  "probing IPFS gateway at localhost:~a...~%"
+                  (uri-port uri))
+          (connect socket AF_INET INADDR_LOOPBACK (uri-port uri))
+          #t)
+        (const #f)))
+
+    (close-port socket)
+    connected?))
+
+(unless (ipfs-gateway-running?)
+  (test-skip 1))
+
+(test-assert "add-file-tree + restore-file-tree"
+  (call-with-temporary-directory
+   (lambda (directory)
+     (let* ((source  (dirname (search-path %load-path "guix/base32.scm")))
+            (target  (string-append directory "/r"))
+            (content (pk 'content (add-file-tree source))))
+       (restore-file-tree (content-name content) target)
+       (file=? source target)))))
-- 
2.29.2
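
To make the custom "directory" format in this patch concrete, here is a
rough Python re-implementation of the layout that file-tree->sexp produces
(illustrative sketch only: the IPFS "add" call is stubbed out with a plain
SHA-256 digest, and tuples stand in for sexps):

```python
import hashlib
import os
import stat


def add_data(data: bytes) -> str:
    """Stand-in for the IPFS 'add' call: return a fake content identifier."""
    return hashlib.sha256(data).hexdigest()[:16]


def file_tree_to_sexp(path: str):
    """Mimic (guix ipfs)'s file-tree->sexp: a flat manifest in which only
    leaves are 'added' to IPFS; directories carry their entries and the
    total size, and the executable bit is the only mode information kept."""
    st = os.lstat(path)
    if stat.S_ISDIR(st.st_mode):
        entries = []
        total = 0
        for name in sorted(os.listdir(path)):
            child = file_tree_to_sexp(os.path.join(path, name))
            entries.append(('entry', name, child))
            total += child[2]          # child's size is always element 2
        return ('directory', entries, total)
    if stat.S_ISLNK(st.st_mode):
        return ('symlink', os.readlink(path), 0)
    with open(path, 'rb') as f:        # regular file: add leaf content
        data = f.read()
    kind = 'executable' if st.st_mode & 0o100 else 'file'
    return (kind, add_data(data), st.st_size)
```

This mirrors the rationale stated in the patch: only leaves become IPFS
objects, so fetching a tree needs one manifest lookup plus one request per
leaf, rather than a round trip per directory.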


[-- Attachment #1.5: 0004-publish-Add-IPFS-support.patch --]
[-- Type: text/x-patch, Size: 12285 bytes --]

From 21cf092c67e10e60682f3c14d6b438ce7d905eef Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Ludovic=20Court=C3=A8s?= <ludo@gnu.org>
Date: Fri, 28 Dec 2018 18:27:59 +0100
Subject: [PATCH 4/5] publish: Add IPFS support.

* guix/scripts/publish.scm (show-help, %options): Add '--ipfs'.
(narinfo-string): Add IPFS parameter and honor it.
(render-narinfo/cached): Add #:ipfs? and honor it.
(bake-narinfo+nar, make-request-handler, run-publish-server): Likewise.
(guix-publish): Honor '--ipfs' and parameterize %IPFS-BASE-URL.
---
 doc/guix.texi            | 34 +++++++++++++++++++
 guix/scripts/publish.scm | 73 +++++++++++++++++++++++++++-------------
 2 files changed, 83 insertions(+), 24 deletions(-)

diff --git a/doc/guix.texi b/doc/guix.texi
index 1f33fd3b76..e52083fc5d 100644
--- a/doc/guix.texi
+++ b/doc/guix.texi
@@ -12267,6 +12267,16 @@ http://example.org/file/hello-2.10.tar.gz/sha256/0ssi1@dots{}ndq1i
 Obviously, these URLs only work for files that are in the store; in
 other cases, they return 404 (``Not Found'').
 
+@cindex peer-to-peer, substitute distribution
+@cindex distributed storage, of substitutes
+@cindex IPFS, for substitutes
+
+It is also possible to publish substitutes over @uref{https://ipfs.io, IPFS},
+a distributed, peer-to-peer storage mechanism.  To enable it, pass the
+@option{--ipfs} option alongside @option{--cache}, and make sure you're
+running @command{ipfs daemon}.  Capable clients will then be able to choose
+whether to fetch substitutes over HTTP or over IPFS.
+
 @cindex build logs, publication
 Build logs are available from @code{/log} URLs like:
 
@@ -12363,6 +12373,30 @@ thread per CPU core is created, but this can be customized.  See
 When @option{--ttl} is used, cached entries are automatically deleted
 when they have expired.
 
+@item --ipfs[=@var{gateway}]
+When used in conjunction with @option{--cache}, instruct @command{guix
+publish} to publish substitutes over the @uref{https://ipfs.io, IPFS
+distributed data store} in addition to HTTP.
+
+@quotation Note
+As of version @value{VERSION}, IPFS support is experimental.  You're welcome
+to share your experience with the developers by emailing
+@email{guix-devel@@gnu.org}!
+@end quotation
+
+The IPFS HTTP interface must be reachable at @var{gateway}, by default
+@code{localhost:5001}.  To get it up and running, it is usually enough to
+install IPFS and start the IPFS daemon:
+
+@example
+$ guix package -i go-ipfs
+$ ipfs init
+$ ipfs daemon
+@end example
+
+For more information on how to get started with IPFS, please refer to the
+@uref{https://docs.ipfs.io/introduction/usage/, IPFS documentation}.
+
 @item --workers=@var{N}
 When @option{--cache} is used, request the allocation of @var{N} worker
 threads to ``bake'' archives.
diff --git a/guix/scripts/publish.scm b/guix/scripts/publish.scm
index c31cef3181..998dfa560d 100644
--- a/guix/scripts/publish.scm
+++ b/guix/scripts/publish.scm
@@ -64,8 +64,8 @@
   #:use-module ((guix build utils)
                 #:select (dump-port mkdir-p find-files))
   #:use-module ((guix build syscalls) #:select (set-thread-name))
+  #:use-module ((guix ipfs) #:prefix ipfs:)
   #:export (%default-gzip-compression
-
             %public-key
             %private-key
             signed-string
@@ -94,6 +94,8 @@ Publish ~a over HTTP.\n") %store-directory)
   (display (G_ "
       --cache-bypass-threshold=SIZE
                          serve store items below SIZE even when not cached"))
+  (display (G_ "
+      --ipfs[=GATEWAY]   publish items over IPFS via GATEWAY"))
   (display (G_ "
       --workers=N        use N workers to bake items"))
   (display (G_ "
@@ -210,6 +212,10 @@ usage."
                 (lambda (opt name arg result)
                   (alist-cons 'cache-bypass-threshold (size->number arg)
                               result)))
+        (option '("ipfs") #f #t
+                (lambda (opt name arg result)
+                  (alist-cons 'ipfs (or arg (ipfs:%ipfs-base-url))
+                              result)))
         (option '("workers") #t #f
                 (lambda (opt name arg result)
                   (alist-cons 'workers (string->number* arg)
@@ -308,14 +314,16 @@ with COMPRESSION, starting at NAR-PATH."
 
 (define* (narinfo-string store store-path key
                          #:key (compressions (list %no-compression))
-                         (nar-path "nar") (file-sizes '()))
+                         (nar-path "nar") (file-sizes '()) ipfs)
   "Generate a narinfo key/value string for STORE-PATH; an exception is raised
 if STORE-PATH is invalid.  Produce a URL that corresponds to COMPRESSION.  The
 narinfo is signed with KEY.  NAR-PATH specifies the prefix for nar URLs.
 
 Optionally, FILE-SIZES is a list of compression/integer pairs, where the
 integer is size in bytes of the compressed NAR; it informs the client of how
-much needs to be downloaded."
+much needs to be downloaded.
+
+When IPFS is true, it is the IPFS object identifier for STORE-PATH."
   (let* ((path-info  (query-path-info store store-path))
          (compressions (actual-compressions store-path compressions))
          (hash       (bytevector->nix-base32-string
@@ -363,7 +371,12 @@ References: ~a~%"
                                  (apply throw args))))))
          (signature  (base64-encode-string
                       (canonical-sexp->string (signed-string info)))))
-    (format #f "~aSignature: 1;~a;~a~%" info (gethostname) signature)))
+    (format #f "~aSignature: 1;~a;~a~%~a" info (gethostname) signature
+
+            ;; Append IPFS info below the signed part.
+            (if ipfs
+                (string-append "IPFS: " ipfs "\n")
+                ""))))
 
 (define* (not-found request
                     #:key (phrase "Resource not found")
@@ -510,10 +523,12 @@ interpreted as the basename of a store item."
 (define* (render-narinfo/cached store request hash
                                 #:key ttl (compressions (list %no-compression))
                                 (nar-path "nar")
-                                cache pool)
+                                cache pool ipfs?)
   "Respond to the narinfo request for REQUEST.  If the narinfo is available in
 CACHE, then send it; otherwise, return 404 and \"bake\" that nar and narinfo
-requested using POOL."
+requested using POOL.
+
+When IPFS? is true, additionally publish binaries over IPFS."
   (define (delete-entry narinfo)
     ;; Delete NARINFO and the corresponding nar from CACHE.
     (let* ((nar     (string-append (string-drop-right narinfo
@@ -556,7 +571,8 @@ requested using POOL."
                  (bake-narinfo+nar cache item
                                    #:ttl ttl
                                    #:compressions compressions
-                                   #:nar-path nar-path)))
+                                   #:nar-path nar-path
+                                   #:ipfs? ipfs?)))
 
              (when ttl
                (single-baker 'cache-cleanup
@@ -617,7 +633,7 @@ requested using POOL."
 
 (define* (bake-narinfo+nar cache item
                            #:key ttl (compressions (list %no-compression))
-                           (nar-path "/nar"))
+                           (nar-path "/nar") ipfs?)
   "Write the narinfo and nar for ITEM to CACHE."
   (define (compressed-nar-size compression)
     (let* ((nar  (nar-cache-file cache item #:compression compression))
@@ -644,7 +660,11 @@ requested using POOL."
                                           (%private-key)
                                           #:nar-path nar-path
                                           #:compressions compressions
-                                          #:file-sizes sizes)
+                                          #:file-sizes sizes
+                                          #:ipfs
+                                          (and ipfs?
+                                               (ipfs:content-name
+                                                (ipfs:add-file-tree item))))
                           port)))
 
              ;; Make the cached narinfo world-readable, contrary to what
@@ -996,7 +1016,8 @@ methods, return the applicable compression."
                                cache pool
                                narinfo-ttl
                                (nar-path "nar")
-                               (compressions (list %no-compression)))
+                               (compressions (list %no-compression))
+                               ipfs?)
   (define compression-type?
     string->compression-type)
 
@@ -1027,7 +1048,8 @@ methods, return the applicable compression."
                                      #:pool pool
                                      #:ttl narinfo-ttl
                                      #:nar-path nar-path
-                                      #:compressions compressions)
+                                      #:compressions compressions
+                                      #:ipfs? ipfs?)
                (render-narinfo store request hash
                                #:ttl narinfo-ttl
                                #:nar-path nar-path
@@ -1089,7 +1112,7 @@ methods, return the applicable compression."
                              advertise? port
                              (compressions (list %no-compression))
                              (nar-path "nar") narinfo-ttl
-                             cache pool)
+                             cache pool ipfs?)
   (when advertise?
     (let ((name (service-name)))
       ;; XXX: Use a callback from Guile-Avahi here, as Avahi can pick a
@@ -1098,13 +1121,13 @@ methods, return the applicable compression."
       (avahi-publish-service-thread name
                                     #:type publish-service-type
                                     #:port port)))
-
   (run-server (make-request-handler store
                                     #:cache cache
                                     #:pool pool
                                     #:nar-path nar-path
                                     #:narinfo-ttl narinfo-ttl
-                                    #:compressions compressions)
+                                    #:compressions compressions
+                                    #:ipfs? ipfs?)
               concurrent-http-server
               `(#:socket ,socket)))
 
@@ -1166,6 +1189,7 @@ methods, return the applicable compression."
            (repl-port (assoc-ref opts 'repl))
            (cache     (assoc-ref opts 'cache))
            (workers   (assoc-ref opts 'workers))
+           (ipfs      (assoc-ref opts 'ipfs))
 
            ;; Read the key right away so that (1) we fail early on if we can't
            ;; access them, and (2) we can then drop privileges.
@@ -1204,16 +1228,17 @@ consider using the '--user' option!~%")))
         (set-thread-name "guix publish")
 
         (with-store store
-          (run-publish-server socket store
-                              #:advertise? advertise?
-                              #:port port
-                              #:cache cache
-                              #:pool (and cache (make-pool workers
-                                                           #:thread-name
-                                                           "publish worker"))
-                              #:nar-path nar-path
-                              #:compressions compressions
-                              #:narinfo-ttl ttl))))))
+          (parameterize ((ipfs:%ipfs-base-url ipfs))
+            (run-publish-server socket store
+                                #:advertise? advertise?
+                                #:port port
+                                #:cache cache
+                                #:pool (and cache (make-pool workers
+                                                             #:thread-name
+                                                             "publish worker"))
+                                #:nar-path nar-path
+                                #:compressions compressions
+                                #:narinfo-ttl ttl)))))))
 
 ;;; Local Variables:
 ;;; eval: (put 'single-baker 'scheme-indent-function 1)
-- 
2.29.2


[-- Attachment #1.6: 0005-DRAFT-substitute-Add-IPFS-support.patch --]
[-- Type: text/x-patch, Size: 9021 bytes --]

From d300bd6b37680f26fbc9b339264476fcc35e1787 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Ludovic=20Court=C3=A8s?= <ludo@gnu.org>
Date: Fri, 28 Dec 2018 18:40:06 +0100
Subject: [PATCH 5/5] DRAFT substitute: Add IPFS support.

Missing:

  - documentation
  - command-line options
  - progress report when downloading over IPFS
  - fallback when we fail to fetch from IPFS

* guix/scripts/substitute.scm (<narinfo>)[ipfs]: New field.
(read-narinfo): Read "IPFS".
(process-substitution/http): New procedure, with code formerly in
'process-substitution'.
(process-substitution): Check for IPFS and call 'ipfs:restore-file-tree'
when IPFS is true.
---
 guix/scripts/substitute.scm | 112 ++++++++++++++++++++----------------
 1 file changed, 63 insertions(+), 49 deletions(-)

diff --git a/guix/scripts/substitute.scm b/guix/scripts/substitute.scm
index feae2df9cb..8a888c5e01 100755
--- a/guix/scripts/substitute.scm
+++ b/guix/scripts/substitute.scm
@@ -43,6 +43,7 @@
   #:use-module (guix progress)
   #:use-module ((guix build syscalls)
                 #:select (set-thread-name))
+  #:use-module ((guix ipfs) #:prefix ipfs:)
   #:use-module (ice-9 rdelim)
   #:use-module (ice-9 regex)
   #:use-module (ice-9 match)
@@ -233,7 +234,7 @@ provide."
 (define-record-type <narinfo>
   (%make-narinfo path uri-base uris compressions file-sizes file-hashes
                  nar-hash nar-size references deriver system
-                 signature contents)
+                 ipfs signature contents)
   narinfo?
   (path         narinfo-path)
   (uri-base     narinfo-uri-base)        ;URI of the cache it originates from
@@ -246,6 +247,7 @@ provide."
   (references   narinfo-references)
   (deriver      narinfo-deriver)
   (system       narinfo-system)
+  (ipfs         narinfo-ipfs)
   (signature    narinfo-signature)      ; canonical sexp
   ;; The original contents of a narinfo file.  This field is needed because we
   ;; want to preserve the exact textual representation for verification purposes.
@@ -288,7 +290,7 @@ s-expression: ~s~%")
 must contain the original contents of a narinfo file."
   (lambda (path urls compressions file-hashes file-sizes
                 nar-hash nar-size references deriver system
-                signature)
+                ipfs signature)
     "Return a new <narinfo> object."
     (define len (length urls))
     (%make-narinfo path cache-url
@@ -312,6 +314,7 @@ must contain the original contents of a narinfo file."
                      ((or #f "") #f)
                      (_ deriver))
                    system
+                   ipfs
                    (false-if-exception
                     (and=> signature narinfo-signature->canonical-sexp))
                    str)))
@@ -330,7 +333,7 @@ No authentication and authorization checks are performed here!"
                    (narinfo-maker str url)
                    '("StorePath" "URL" "Compression"
                      "FileHash" "FileSize" "NarHash" "NarSize"
-                     "References" "Deriver" "System"
+                     "References" "Deriver" "System" "IPFS"
                      "Signature")
                    '("URL" "Compression" "FileSize" "FileHash"))))
 
@@ -962,6 +965,48 @@ the URI, its compression method (a string), and the compressed file size."
     (((uri compression file-size) _ ...)
      (values uri compression file-size))))
 
+(define* (process-substitution/http narinfo destination uri
+                                    compression
+                                    #:key print-build-trace?)
+  (unless print-build-trace?
+    (format (current-error-port)
+            (G_ "Downloading ~a...~%") (uri->string uri)))
+  (let*-values (((raw download-size)
+                 ;; Note that Hydra currently generates Nars on the fly
+                 ;; and doesn't specify a Content-Length, so
+                 ;; DOWNLOAD-SIZE is #f in practice.
+                 (fetch uri #:buffered? #f #:timeout? #f))
+                ((progress)
+                 (let* ((dl-size  (or download-size
+                                      (and (equal? compression "none")
+                                           (narinfo-size narinfo))))
+                        (reporter (if print-build-trace?
+                                      (progress-reporter/trace
+                                       destination
+                                       (uri->string uri) dl-size
+                                       (current-error-port))
+                                      (progress-reporter/file
+                                       (uri->string uri) dl-size
+                                       (current-error-port)
+                                       #:abbreviation nar-uri-abbreviation))))
+                   (progress-report-port reporter raw)))
+                ((input pids)
+                 ;; NOTE: This 'progress' port of current process will be
+                 ;; closed here, while the child process doing the
+                 ;; reporting will close it upon exit.
+                 (decompressed-port (string->symbol compression)
+                                    progress)))
+    ;; Unpack the Nar at INPUT into DESTINATION.
+    (restore-file input destination)
+    (close-port input)
+
+    ;; Wait for the reporter to finish.
+    (every (compose zero? cdr waitpid) pids)
+
+    ;; Skip a line after what 'progress-reporter/file' printed, and another
+    ;; one to visually separate substitutions.
+    (display "\n\n" (current-error-port))))
+
 (define* (process-substitution store-item destination
                                #:key cache-urls acl print-build-trace?)
   "Substitute STORE-ITEM (a store file name) from CACHE-URLS, and write it to
@@ -969,55 +1014,24 @@ DESTINATION as a nar file.  Verify the substitute against ACL."
   (define narinfo
     (lookup-narinfo cache-urls store-item
                     (cut valid-narinfo? <> acl)))
-
+  (define ipfs (and=> narinfo narinfo-ipfs))
   (unless narinfo
     (leave (G_ "no valid substitute for '~a'~%")
            store-item))
-
-  (let-values (((uri compression file-size)
-                (narinfo-best-uri narinfo)))
-    ;; Tell the daemon what the expected hash of the Nar itself is.
-    (format #t "~a~%" (narinfo-hash narinfo))
-
-    (unless print-build-trace?
-      (format (current-error-port)
-              (G_ "Downloading ~a...~%") (uri->string uri)))
-
-    (let*-values (((raw download-size)
-                   ;; Note that Hydra currently generates Nars on the fly
-                   ;; and doesn't specify a Content-Length, so
-                   ;; DOWNLOAD-SIZE is #f in practice.
-                   (fetch uri #:buffered? #f #:timeout? #f))
-                  ((progress)
-                   (let* ((dl-size  (or download-size
-                                        (and (equal? compression "none")
-                                             (narinfo-size narinfo))))
-                          (reporter (if print-build-trace?
-                                        (progress-reporter/trace
-                                         destination
-                                         (uri->string uri) dl-size
-                                         (current-error-port))
-                                        (progress-reporter/file
-                                         (uri->string uri) dl-size
-                                         (current-error-port)
-                                         #:abbreviation nar-uri-abbreviation))))
-                     (progress-report-port reporter raw)))
-                  ((input pids)
-                   ;; NOTE: This 'progress' port of current process will be
-                   ;; closed here, while the child process doing the
-                   ;; reporting will close it upon exit.
-                   (decompressed-port (string->symbol compression)
-                                      progress)))
-      ;; Unpack the Nar at INPUT into DESTINATION.
-      (restore-file input destination)
-      (close-port input)
-
-      ;; Wait for the reporter to finish.
-      (every (compose zero? cdr waitpid) pids)
-
-      ;; Skip a line after what 'progress-reporter/file' printed, and another
-      ;; one to visually separate substitutions.
-      (display "\n\n" (current-error-port)))))
+  ;; Tell the daemon what the expected hash of the Nar itself is.
+  (format #t "~a~%" (narinfo-hash narinfo))
+  (if ipfs
+      (begin
+        (unless print-build-trace?
+          (format (current-error-port)
+                  (G_ "Downloading from IPFS ~s...~%") ipfs))
+        (ipfs:restore-file-tree ipfs destination))
+      (let-values (((uri compression file-size)
+                    (narinfo-best-uri narinfo)))
+        (process-substitution/http narinfo destination uri
+                                   compression
+                                   #:print-build-trace?
+                                   print-build-trace?))))
 
 \f
 ;;;
-- 
2.29.2

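[Editor's note: the commit message above lists "fallback when we fail to
fetch from IPFS" as missing.  A rough, untested sketch of what that could
look like, reusing the names this patch introduces
('ipfs:restore-file-tree', 'process-substitution/http', 'narinfo-best-uri');
the procedure name 'process-substitution/with-fallback' and the catch-all
error handling are hypothetical:]

```scheme
(define (process-substitution/with-fallback narinfo destination ipfs
                                            print-build-trace?)
  ;; Try IPFS first when the narinfo advertises a content name; fall
  ;; back to the regular HTTP(S) nar download on any IPFS error.
  (define (fetch-over-http)
    (let-values (((uri compression file-size)
                  (narinfo-best-uri narinfo)))
      (process-substitution/http narinfo destination uri compression
                                 #:print-build-trace?
                                 print-build-trace?)))

  (if ipfs
      (catch #t                          ;any error from the IPFS daemon
        (lambda ()
          (ipfs:restore-file-tree ipfs destination))
        (lambda args
          (format (current-error-port)
                  (G_ "IPFS retrieval of ~s failed; falling back to HTTP~%")
                  ipfs)
          (fetch-over-http)))
      (fetch-over-http)))
```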

[-- Attachment #1.7: swh.log --]
[-- Type: text/x-log, Size: 2888 bytes --]

test-name: lookup-origin
location: /home/sylviidae/guix/git/guix/tada/tests/swh.scm:49
source:
+ (test-equal
+   "lookup-origin"
+   (list "git" "http://example.org/guix.git")
+   (with-json-result
+     %origin
+     (let ((origin
+             (lookup-origin "http://example.org/guix.git")))
+       (list (origin-type origin) (origin-url origin)))))
expected-value: ("git" "http://example.org/guix.git")
actual-value: ("git" "http://example.org/guix.git")
result: PASS

test-name: lookup-origin, not found
location: /home/sylviidae/guix/git/guix/tada/tests/swh.scm:56
source:
+ (test-equal
+   "lookup-origin, not found"
+   #f
+   (with-http-server
+     `((404 "Nope."))
+     (parameterize
+       ((%swh-base-url (%local-url)))
+       (lookup-origin "http://example.org/whatever"))))
expected-value: #f
actual-value: #f
result: PASS

test-name: lookup-directory
location: /home/sylviidae/guix/git/guix/tada/tests/swh.scm:62
source:
+ (test-equal
+   "lookup-directory"
+   '(("one" 123) ("two" 456))
+   (with-json-result
+     %directory-entries
+     (map (lambda (entry)
+            (list (directory-entry-name entry)
+                  (directory-entry-length entry)))
+          (lookup-directory "123"))))
expected-value: (("one" 123) ("two" 456))
actual-value: #f
actual-error:
+ (json-invalid #<input: string 7ff2c93a3150>)
result: FAIL

test-name: rate limit reached
location: /home/sylviidae/guix/git/guix/tada/tests/swh.scm:70
source:
+ (test-equal
+   "rate limit reached"
+   3000000000
+   (let ((too-many
+           (build-response
+             #:code
+             429
+             #:reason-phrase
+             "Too many requests"
+             #:headers
+             '((x-ratelimit-remaining . "0")
+               (x-ratelimit-reset . "3000000000")))))
+     (with-http-server
+       `((,too-many "Too bad."))
+       (parameterize
+         ((%swh-base-url (%local-url)))
+         (catch 'swh-error
+                (lambda ()
+                  (lookup-origin "http://example.org/guix.git"))
+                (lambda (key url method response)
+                  (@@ (guix swh) %general-rate-limit-reset-time)))))))
expected-value: 3000000000
actual-value: 3000000000
result: PASS

test-name: %allow-request? and request-rate-limit-reached?
location: /home/sylviidae/guix/git/guix/tada/tests/swh.scm:89
source:
+ (test-assert
+   "%allow-request? and request-rate-limit-reached?"
+   (let* ((key (gensym "skip-request"))
+          (skip-if-limit-reached
+            (lambda (url method)
+              (or (not (request-rate-limit-reached? url method))
+                  (throw key #t)))))
+     (parameterize
+       ((%allow-request? skip-if-limit-reached))
+       (catch key
+              (lambda ()
+                (lookup-origin "http://example.org/guix.git")
+                #f)
+              (const #t)))))
actual-value: #t
result: PASS


[-- Attachment #1.8: cve.log --]
[-- Type: text/x-log, Size: 4050 bytes --]

test-name: json->cve-items
location: /home/sylviidae/guix/git/guix/tada/tests/cve.scm:56
source:
+ (test-equal
+   "json->cve-items"
+   '("CVE-2019-0001"
+     "CVE-2019-0005"
+     "CVE-2019-14811"
+     "CVE-2019-17365"
+     "CVE-2019-1010180"
+     "CVE-2019-1010204"
+     "CVE-2019-18192")
+   (map (compose cve-id cve-item-cve)
+        (call-with-input-file %sample json->cve-items)))
expected-value: ("CVE-2019-0001" "CVE-2019-0005" "CVE-2019-14811" "CVE-2019-17365" "CVE-2019-1010180" "CVE-2019-1010204" "CVE-2019-18192")
actual-value: #f
actual-error:
+ (json-invalid
+   #<input: /home/sylviidae/guix/git/guix/tada/tests/cve-sample.json 15>)
result: FAIL

test-name: cve-item-published-date
location: /home/sylviidae/guix/git/guix/tada/tests/cve.scm:67
source:
+ (test-equal
+   "cve-item-published-date"
+   '(2019)
+   (delete-duplicates
+     (map (compose date-year cve-item-published-date)
+          (call-with-input-file %sample json->cve-items))))
expected-value: (2019)
actual-value: #f
actual-error:
+ (json-invalid
+   #<input: /home/sylviidae/guix/git/guix/tada/tests/cve-sample.json 16>)
result: FAIL

test-name: json->vulnerabilities
location: /home/sylviidae/guix/git/guix/tada/tests/cve.scm:73
source:
+ (test-equal
+   "json->vulnerabilities"
+   %expected-vulnerabilities
+   (call-with-input-file
+     %sample
+     json->vulnerabilities))
expected-value: (#<<vulnerability> id: "CVE-2019-0001" packages: (("junos" (or "18.21-s4" (or "18.21-s3" "18.2"))))> #<<vulnerability> id: "CVE-2019-0005" packages: (("junos" (or "18.11" "18.1")))> #<<vulnerability> id: "CVE-2019-14811" packages: (("ghostscript" (< "9.28")))> #<<vulnerability> id: "CVE-2019-17365" packages: (("nix" (<= "2.3")))> #<<vulnerability> id: "CVE-2019-1010180" packages: (("gdb" _))> #<<vulnerability> id: "CVE-2019-1010204" packages: (("binutils" (and (>= "2.21") (<= "2.31.1"))) ("binutils_gold" (and (>= "1.11") (<= "1.16"))))>)
actual-value: #f
actual-error:
+ (json-invalid
+   #<input: /home/sylviidae/guix/git/guix/tada/tests/cve-sample.json 17>)
result: FAIL

test-name: vulnerabilities->lookup-proc
location: /home/sylviidae/guix/git/guix/tada/tests/cve.scm:77
source:
+ (test-equal
+   "vulnerabilities->lookup-proc"
+   (list (list (third %expected-vulnerabilities))
+         (list (third %expected-vulnerabilities))
+         '()
+         (list (fifth %expected-vulnerabilities))
+         (list (fifth %expected-vulnerabilities))
+         (list (fourth %expected-vulnerabilities))
+         '()
+         (list (sixth %expected-vulnerabilities))
+         '()
+         (list (sixth %expected-vulnerabilities))
+         '())
+   (let* ((vulns (call-with-input-file
+                   %sample
+                   json->vulnerabilities))
+          (lookup (vulnerabilities->lookup-proc vulns)))
+     (list (lookup "ghostscript")
+           (lookup "ghostscript" "9.27")
+           (lookup "ghostscript" "9.28")
+           (lookup "gdb")
+           (lookup "gdb" "42.0")
+           (lookup "nix")
+           (lookup "nix" "2.4")
+           (lookup "binutils" "2.31.1")
+           (lookup "binutils" "2.10")
+           (lookup "binutils_gold" "1.11")
+           (lookup "binutils" "2.32"))))
expected-value: ((#<<vulnerability> id: "CVE-2019-14811" packages: (("ghostscript" (< "9.28")))>) (#<<vulnerability> id: "CVE-2019-14811" packages: (("ghostscript" (< "9.28")))>) () (#<<vulnerability> id: "CVE-2019-1010180" packages: (("gdb" _))>) (#<<vulnerability> id: "CVE-2019-1010180" packages: (("gdb" _))>) (#<<vulnerability> id: "CVE-2019-17365" packages: (("nix" (<= "2.3")))>) () (#<<vulnerability> id: "CVE-2019-1010204" packages: (("binutils" (and (>= "2.21") (<= "2.31.1"))) ("binutils_gold" (and (>= "1.11") (<= "1.16"))))>) () (#<<vulnerability> id: "CVE-2019-1010204" packages: (("binutils" (and (>= "2.21") (<= "2.31.1"))) ("binutils_gold" (and (>= "1.11") (<= "1.16"))))>) ())
actual-value: #f
actual-error:
+ (json-invalid
+   #<input: /home/sylviidae/guix/git/guix/tada/tests/cve-sample.json 18>)
result: FAIL


[-- Attachment #1.9: Maxime Devos.pgp --]
[-- Type: application/pgp-encrypted, Size: 613 bytes --]

[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 260 bytes --]

^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [bug#33899] [PATCH 0/5] Distributing substitutes over IPFS
  2018-12-28 23:12 [bug#33899] [PATCH 0/5] Distributing substitutes over IPFS Ludovic Courtès
                   ` (3 preceding siblings ...)
  2020-12-29  9:59 ` [bug#33899] Ludo's patch rebased on master Maxime Devos
@ 2021-06-06 17:54 ` Tony Olagbaiye
  4 siblings, 0 replies; 23+ messages in thread
From: Tony Olagbaiye @ 2021-06-06 17:54 UTC (permalink / raw)
  To: 33899@debbugs.gnu.org


[-- Attachment #1.1.1: Type: text/plain, Size: 66 bytes --]

Hi,

Has this task stagnated? What's the news?

Thanks,
ix :)

[-- Attachment #1.1.2.1: Type: text/html, Size: 289 bytes --]

[-- Attachment #1.2: publickey - mail@fron.io - 0xE4296916.asc --]
[-- Type: application/pgp-keys, Size: 641 bytes --]

[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 249 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

end of thread, other threads:[~2021-06-06 22:10 UTC | newest]

Thread overview: 23+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-12-28 23:12 [bug#33899] [PATCH 0/5] Distributing substitutes over IPFS Ludovic Courtès
2018-12-28 23:15 ` [bug#33899] [PATCH 1/5] Add (guix json) Ludovic Courtès
2018-12-28 23:15   ` [bug#33899] [PATCH 2/5] tests: 'file=?' now recurses on directories Ludovic Courtès
2018-12-28 23:15   ` [bug#33899] [PATCH 3/5] Add (guix ipfs) Ludovic Courtès
2018-12-28 23:15   ` [bug#33899] [PATCH 4/5] publish: Add IPFS support Ludovic Courtès
2018-12-28 23:15   ` [bug#33899] [PATCH 5/5] DRAFT substitute: " Ludovic Courtès
2019-01-07 14:43 ` [bug#33899] [PATCH 0/5] Distributing substitutes over IPFS Hector Sanjuan
2019-01-14 13:17   ` Ludovic Courtès
2019-01-18  9:08     ` Hector Sanjuan
2019-01-18  9:52       ` Ludovic Courtès
2019-01-18 11:26         ` Hector Sanjuan
2019-07-01 21:36           ` Pierre Neidhardt
2019-07-06  8:44             ` Pierre Neidhardt
2019-07-12 20:02             ` Molly Mackinlay
2019-07-15  9:20               ` Alex Potsides
2019-07-12 20:15             ` Ludovic Courtès
2019-07-14 22:31               ` Hector Sanjuan
2019-07-15  9:24                 ` Ludovic Courtès
2019-07-15 10:10                   ` Pierre Neidhardt
2019-07-15 10:21                     ` Hector Sanjuan
2019-05-13 18:51 ` Alex Griffin
2020-12-29  9:59 ` [bug#33899] Ludo's patch rebased on master Maxime Devos
2021-06-06 17:54 ` [bug#33899] [PATCH 0/5] Distributing substitutes over IPFS Tony Olagbaiye

Code repositories for project(s) associated with this external index

	https://git.savannah.gnu.org/cgit/guix.git
