unofficial mirror of meta@public-inbox.org
* what storage system(s) are you using?
From: Eric Wong @ 2020-08-05  3:11 UTC
  To: meta

I've been mostly using ext4 on SSDs since I started public-inbox
and it works well.

1.6.0 will have some changes to make things less slow on
rotational HDDs (and even faster on SSD).  It's still slow, just
a bit less slow than before.  Testing things on giant inboxes is
like watching grass grow.  Small inboxes that can fit into the
page cache aren't too bad...

I'm also evaluating btrfs since its raid1 is handy when I've got
a bunch of old mismatched HDDs for backups.  btrfs may become
the default FS for Fedora (and maybe other distros will follow),
so I anticipate we'll see more btrfs adoption as time goes on.

Out-of-the-box, btrfs is not remotely suited for random write
patterns from Xapian and SQLite.  However, it gives some extra
peace of mind with checksumming and compression of git refs.
Since we don't care much about data integrity of Xapian or
SQLite data, 1.6.0 will set the nodatacow attribute on those
files/directories:

  https://public-inbox.org/meta/20200728222158.17457-1-e@yhbt.net/
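
Roughly the same effect as chattr(1) +C.  Below is a minimal
Python sketch of that ioctl for illustration, not the code from
the patch above; the ioctl numbers assume 64-bit Linux, the path
is made up, and nodatacow only sticks to empty files or to
directories (where new children inherit it):

  import fcntl, os, struct

  FS_IOC_GETFLAGS = 0x80086601  # _IOR('f', 1, long) on 64-bit Linux
  FS_IOC_SETFLAGS = 0x40086602  # _IOW('f', 2, long) on 64-bit Linux
  FS_NOCOW_FL = 0x00800000      # the "C" in chattr +C

  def set_nocow(path):
      # read the current inode flags, then set the NOCOW bit
      fd = os.open(path, os.O_RDONLY)
      try:
          buf = fcntl.ioctl(fd, FS_IOC_GETFLAGS, struct.pack('l', 0))
          flags = struct.unpack('l', buf)[0]
          fcntl.ioctl(fd, FS_IOC_SETFLAGS,
                      struct.pack('l', flags | FS_NOCOW_FL))
      finally:
          os.close(fd)

  set_nocow('/path/to/inbox/xap15')  # hypothetical Xapian directory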

With the default CoW, even a TRIM-ed SSD was abysmal at indexing
LKML.

The space_cache=v2 mount option seems to help significantly with
large, multi-TB FSes (still testing...).  This will be noted in a
public-inbox-tuning(7) manpage...
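
Since it's only a mount option, here's a rough Python/ctypes
sketch of the equivalent mount(2) call for illustration; the
device and mountpoint are made up and it needs root.  Once a
filesystem has been mounted with space_cache=v2, the free space
tree persists for subsequent mounts:

  import ctypes, os

  libc = ctypes.CDLL("libc.so.6", use_errno=True)

  # same as: mount -t btrfs -o space_cache=v2 /dev/sdb1 /srv/inbox
  ret = libc.mount(b"/dev/sdb1", b"/srv/inbox", b"btrfs",
                   ctypes.c_ulong(0), b"space_cache=v2")
  if ret != 0:
      err = ctypes.get_errno()
      raise OSError(err, os.strerror(err))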

I haven't done much with XFS outside of a VM.  Since it doesn't
do CoW for normal writes, I expect it to behave similarly to ext4
as far as this codebase is concerned.  From what I recall years
ago, unlink(2) was slow on XFS with Maildirs, and that's currently
the case with btrfs, too...
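
That sort of thing is easy to re-check on whatever filesystem is
handy; a rough micro-benchmark sketch (the path is whatever
directory you want to test):

  import os, tempfile, time

  # create many small Maildir-sized files, then time their removal
  def time_unlinks(dir_path, n=10000):
      paths = []
      for _ in range(n):
          fd, p = tempfile.mkstemp(dir=dir_path)
          os.write(fd, b"x" * 512)
          os.close(fd)
          paths.append(p)
      t0 = time.perf_counter()
      for p in paths:
          os.unlink(p)
      return time.perf_counter() - t0

  print("%.3fs for 10k unlinks" % time_unlinks("/srv/test"))  # made-up path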

Another interesting bit: SQLite uses some F2FS-only APIs.  So
F2FS could be good for SSD users, but I've yet to try it...
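
I'm assuming that means the atomic batch write support.  Whether
the SQLite you're linked against was even built with it is easy to
check; the pragma lists compile options with the SQLITE_ prefix
stripped:

  import sqlite3

  # F2FS atomic batch writes need SQLITE_ENABLE_BATCH_ATOMIC_WRITE
  # at SQLite build time; list the compile options to find out
  con = sqlite3.connect(":memory:")
  opts = [row[0] for row in con.execute("PRAGMA compile_options")]
  print("ENABLE_BATCH_ATOMIC_WRITE" in opts)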

I also don't know if network filesystems (Ceph, Gluster, NFS,
Lustre, AFS, ...) work at all.  Maybe they're fine for git
storage, but probably not with SQLite, Xapian or flock(2).


* Re: what storage system(s) are you using?
From: Konstantin Ryabitsev @ 2020-08-06 17:32 UTC
  To: Eric Wong; +Cc: meta

On Wed, Aug 05, 2020 at 03:11:27AM +0000, Eric Wong wrote:
> I've been mostly using ext4 on SSDs since I started public-inbox
> and it works well.

As you know, I hope to move lore.kernel.org to a system with a hybrid 
lvm-cache setup, specifically:

12 x 1.8TB rotational drives set up in an lvm raid-6 array
2  x 450GB SSDs set up as an lvm-cache volume

This gives us 18TB capacity with a 900GB cache layer, and the FS on top
of that is XFS.

This is what is currently serving mirrors.edge.kernel.org (4 nodes 
around the world).

Current lore.kernel.org just uses an AWS EBS disk, but since AWS is a
black box, there's no knowing what layers of abstraction sit beneath
that.

-K


* Re: what storage system(s) are you using?
From: Eric Wong @ 2020-08-21 18:51 UTC
  To: meta

Konstantin Ryabitsev <konstantin@linuxfoundation.org> wrote:
> On Wed, Aug 05, 2020 at 03:11:27AM +0000, Eric Wong wrote:
> > I've been mostly using ext4 on SSDs since I started public-inbox
> > and it works well.
> 
> As you know, I hope to move lore.kernel.org to a system with a hybrid 
> lvm-cache setup, specifically:
> 
> 12 x 1.8TB rotational drives set up in an lvm raid-6 array
> 2  x 450GB SSDs set up as an lvm-cache volume
> 
> This gives us 18TB capacity with a 900GB cache layer, and the FS on top
> of that is XFS.
> 
> This is what is currently serving mirrors.edge.kernel.org (4 nodes 
> around the world).

Do you have any numbers on read IOPS or seek latency for the
RAID-6 array?  Also, how much RAM for the page cache?

Xapian is going to be tricky(*), and it's looking like group search
will require a separate index :<  The upside is it may be able
to gradually replace existing indices for WWW and deduplicate
much data for cross-posted messages.

IMAP/JMAP is a different story...

Removing or relocating inboxes isn't going to be fun, either.

(*) Xapian's built-in sharding works well when the shard count
    matches the CPU core count, but trying to use Xapian's
    MultiDatabase (via ->add_database) with the current mirror of
    lore (almost 400 shards) doesn't work well at all.
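
For reference, the MultiDatabase approach is just gluing shards
together into one logical database; a sketch using the Python
bindings (same idea as the Perl ->add_database above, shard paths
are made up):

  import xapian  # xapian-bindings

  # combine per-inbox shards for a group search; fine for a handful
  # of shards, painful with ~400 of them
  combined = xapian.Database()
  for shard in ("/srv/lore/foo/xap15/0", "/srv/lore/foo/xap15/1"):
      combined.add_database(xapian.Database(shard))

  enq = xapian.Enquire(combined)
  enq.set_query(xapian.QueryParser().parse_query("some query"))
  for m in enq.get_mset(0, 10):
      print(m.docid, m.percent)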

