* distributed substitutes: file slicing
From: Csepp @ 2023-06-20 22:44 UTC (permalink / raw)
  To: guix-devel; +Cc: pukkamustard

I have a question / suggestion about the distributed substitutes
project: would downloads be split into uniformly sized chunks or could
the sizes vary?
Specifically, in an extreme case where an update introduced a single
extra byte at the beginning of a file, would that result in completely
new chunks?

An alternative I've been thinking about is this:
find the store references in a file and split it along these references,
optionally apply further chunking to the non-reference blobs.

It's probably best to do this at the NAR level??

Storing reference offsets is already something that we should be doing to
speed other operations up, so this could tie in nicely with that.
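
To make the splitting idea concrete, here is a rough Python sketch (the store-path regex and helper name are only illustrative, not anything Guix actually uses; real code would reuse the daemon's reference scanner):

    import re

    # Illustrative approximation of a Guix store reference:
    # /gnu/store/<32-char nix-base32 hash>-<name>
    STORE_REF = re.compile(rb"/gnu/store/[0-9a-df-np-sv-z]{32}-[\w.+-]+")

    def split_at_references(data: bytes):
        """Yield ("blob", ...) and ("ref", ...) pieces in order."""
        pos = 0
        for m in STORE_REF.finditer(data):
            if m.start() > pos:
                yield ("blob", data[pos:m.start()])  # could be chunked further
            yield ("ref", m.group(0))                # each reference is its own piece
            pos = m.end()
        if pos < len(data):
            yield ("blob", data[pos:])

The non-reference blobs would then stay identical across rebuilds that only change store hashes; only the "ref" pieces would differ.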



* Re: distributed substitutes: file slicing
From: Attila Lendvai @ 2023-06-21 14:32 UTC (permalink / raw)
  To: Csepp; +Cc: guix-devel, pukkamustard

> I have a question / suggestion about the distributed substitutes
> project: would downloads be split into uniformly sized chunks or could
> the sizes vary?
> Specifically, in an extreme case where an update introduced a single
> extra byte at the beginning of a file, would that result in completely
> new chunks?


Most (all?) distributed storage solutions have a chunker (including ERIS with its 32 KiB chunks, or Swarm with 4 KiB chunks), and the chunks are content-addressed, i.e. chunking also serves as deduplication at chunk granularity.

If the file doesn't just grow at the end but its contents shift by a couple of bytes somewhere in the middle, then chunk-level deduplication stops working from that point onward.
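
To make the failure mode concrete, a tiny fixed-size chunking sketch in Python (chunk size shrunk to 8 bytes for readability; ERIS really uses 32 KiB blocks):

    from hashlib import sha256

    def chunk_ids(data: bytes, size: int = 8):
        return [sha256(data[i:i + size]).hexdigest()[:8]
                for i in range(0, len(data), size)]

    a = b"abcdefghijklmnopqrstuvwxyz"
    b = b"!" + a                                   # one byte inserted at the front

    print(set(chunk_ids(a)) & set(chunk_ids(b)))   # set(): no chunk is shared

Every chunk boundary shifts by one byte, so every chunk hash changes and nothing deduplicates.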

IIRC rar was the first archiver to introduce a very fast deduplication algorithm that detected even non-aligned duplicated blocks of varying sizes. I don't think any distributed storage system has anything like that.


> An alternative I've been thinking about is this:
> find the store references in a file and split it along these references,
> optionally apply further chunking to the non-reference blobs.


Chunking storage systems store only whole chunks, so splitting files too aggressively can increase the wasted storage: more so with large chunks, less so with smaller ones.
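
Back of the envelope, assuming 32 KiB chunks and that every piece is stored as whole chunks (the piece sizes are made up for illustration):

    CHUNK = 32 * 1024

    def stored_bytes(piece_sizes):
        # each piece is padded up to a whole number of chunks
        return sum(-(-s // CHUNK) * CHUNK for s in piece_sizes)

    stored_bytes([100 * 1024])       # one piece:  128 KiB stored
    stored_bytes([10 * 1024] * 10)   # ten pieces: 320 KiB stored for the same data

So slicing at every store reference could more than double the stored size unless the slices are packed back together or the chunk size is small.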


> It's probably best to do this at the NAR level??
> 
> Storing reference offsets is already something that we should be doing to
> speed other operations up, so this could tie in nicely with that.


If optimizing grafting is worth this much trouble, then maybe the best approach is to extend the NAR format to store the mutable references in a separate table at the end of the file. That would speed up Guix operations like grafting, and help any storage system that does deduplication, which includes some copy-on-write filesystems.
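
As a hedged sketch of what such a trailer could look like (not an existing NAR extension, just one possible layout): a count followed by (offset, length) pairs pointing at the store references inside the archive, so grafting and dedup-aware tools could patch or skip them without rescanning the whole NAR.

    import struct

    def encode_reference_table(references):
        # hypothetical trailer: u64 count, then (offset, length) u64 pairs, little-endian
        # `references` is a list of (offset, reference_bytes) tuples
        body = b"".join(struct.pack("<QQ", off, len(ref)) for off, ref in references)
        return struct.pack("<Q", len(references)) + body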

-- 
• attila lendvai
• PGP: 963F 5D5F 45C7 DFCD 0A39
--
“If you shut up truth and bury it under the ground, it will but grow, and gather to itself such explosive power that the day it bursts through it will knock down everything that stands in its way.”
	— Émile Zola (1840–1902)




* Re: distributed substitutes: file slicing
From: Csepp @ 2023-06-21 23:06 UTC (permalink / raw)
  To: Csepp; +Cc: guix-devel, pukkamustard


Csepp <raingloom@riseup.net> writes:

> I have a question / suggestion about the distributed substitutes
> project: would downloads be split into uniformly sized chunks or could
> the sizes vary?
> Specifically, in an extreme case where an update introduced a single
> extra byte at the beginning of a file, would that result in completely
> new chunks?
>
> An alternative I've been thinking about is this:
> find the store references in a file and split it along these references,
> optionally apply further chunking to the non-reference blobs.
>
> It's probably best to do this at the NAR level??
>
> Storing reference offsets is already something that we should be doing to
> speed other operations up, so this could tie in nicely with that.

I was sent this link that others might find interesting:
https://alternativebit.fr/posts/nixos/future-of-nix-substitution/



* Re: distributed substitutes: file slicing
From: pukkamustard @ 2023-06-25  9:48 UTC (permalink / raw)
  To: Csepp; +Cc: guix-devel


Csepp <raingloom@riseup.net> writes:

> I have a question / suggestion about the distributed substitutes
> project: would downloads be split into uniformly sized chunks or could
> the sizes vary?

For the proposal that uses ERIS (https://issues.guix.gnu.org/52555) the
chunks are uniformly sized (32KiB).

> Specifically, in an extreme case where an update introduced a single
> extra byte at the beginning of a file, would that result in completely
> new chunks?

Yes, that would be the case.

ERIS uses fixed block sizes, and such extreme cases would result in completely new
chunks: very bad de-duplication.

The reason for using fixed block sizes is security/privacy. With variable-sized
blocks, the block sizes are observable by a potential censor and are a function
of the content itself, which leaks information about the transferred content.

I believe there are documented cases of HTTPS connections being
blocked/censored based on the size of requests [citation needed]. This is
something ERIS tries to prevent.

That being said, I think there is still room for optimizing the
de-duplication even with fixed-size blocks.

> An alternative I've been thinking about is this:
> find the store references in a file and split it along these references,
> optionally apply further chunking to the non-reference blobs.
>
> It's probably best to do this at the NAR level??

I like the idea!

If I understand correctly, we would split whenever a store reference
appears. When a single store reference changes (which probably happens
quite often), only the preceding block changes.

I think there is also a way to do something similar while preserving
fixed-size blocks:

Maintain a lookup table of all store references appearing in a store
item. When serializing, this lookup table goes to the front (or back)
with appropriate padding so that it is block-aligned. All store
references in the remaining serialization are replaced by a reference
into the lookup table. Now when a store reference changes, only the
lookup table changes; the remaining content stays the same and is
de-duplicated.
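
A minimal sketch of that transformation (the placeholder bytes and padding are assumptions, not a proposed format): swap each store reference in the payload for a small fixed-width index into the table, and pad the table to a whole block so that reference churn only ever touches the table's block(s).

    BLOCK = 32 * 1024

    def extract_references(data, refs):
        """Return (block-aligned reference table, payload with refs replaced by indices)."""
        payload = data
        for i, ref in enumerate(refs):
            # hypothetical placeholder, 8 bytes: marker + little-endian index
            payload = payload.replace(ref, b"\0REF" + i.to_bytes(4, "little"))
        table = b"\n".join(refs)
        table += b"\0" * (-len(table) % BLOCK)   # pad the table to a block boundary
        return table, payload

When a graft rewrites a reference, only `table` changes; `payload` (and hence all of its blocks) stays bit-identical and de-duplicates.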

A similar idea for also allowing de-duplication when individual files
change: https://codeberg.org/eris/eer/src/branch/eris-fs/eer/eris-fs/index.md

Also check out the Guix `wip-digests` branch. There are some related
interesting ideas there.

I'm working on rebasing and updating the decentralized substitute
patches; sorry for the slowness. At first they would only address
block-wise transfer, with a naive encoding that does not de-duplicate
very well.

As outlined above, I think de-duplication can be added later, and it's
great to start thinking about it and experimenting with ideas now.

-pukkamustard



* Re: distributed substitutes: file slicing
From: Florian Klink @ 2023-06-26 13:41 UTC (permalink / raw)
  To: Csepp; +Cc: guix-devel, pukkamustard

On 23-06-21 00:44:06, Csepp wrote:
>I have a question / suggestion about the distributed substitutes
>project: would downloads be split into uniformly sized chunks or could
>the sizes vary?
>Specifically, in an extreme case where an update introduced a single
>extra byte at the beginning of a file, would that result in completely
>new chunks?
>
>An alternative I've been thinking about is this:
>find the store references in a file and split it along these references,
>optionally apply further chunking to the non-reference blobs.
>
>It's probably best to do this at the NAR level??
>
>Storing reference offsets is already something that we should be doing to
>speed other operations up, so this could tie in nicely with that.

A bit late to the party, but I've been toying around with a different
model to represent contents inside store paths - see [tvix-store-docs]
for more details.

Essentially, tvix-store internally uses a model similar to git trees,
but with BLAKE3 as the digest for blobs (regular file contents).
Even with all that, you can still put on a NAR "lens" and get back a
byte-for-byte identical NAR representation of a store path.

Because BLAKE3 enables [verified streaming][bao], there's no need to
make granular chunking part of the encoded information; it can be purely
a transport concern. It also allows easy "seeking" into different parts
of a store path and, thanks to content-addressability, easy partial
fetching.

I've been playing around with a blob storage implementation that stores
these blobs with content-defined chunking (and will eventually expose
more granular chunking data to clients).
Due to the "decomposition" of the NAR (storing blobs separately from the
surrounding "skeleton"), we always look at file contents separately.

I haven't yet run any benchmarks on whether it makes sense to "blank out"
store paths before ingesting and to apply these references dynamically
on top, but I would be interested in some discussion around such
experiments.


flokli

--

[tvix-store-docs]: https://cs.tvl.fyi/depot/-/tree/tvix/store/docs
[bao]: https://github.com/oconnor663/bao


