unofficial mirror of meta@public-inbox.org
* [PATCH] doc: add public-inbox-tuning(7) manpage
@ 2020-08-15  5:21 Eric Wong
  0 siblings, 0 replies; only message in thread
From: Eric Wong @ 2020-08-15  5:21 UTC (permalink / raw)
  To: meta

Determining storage device speed and latencies doesn't
seem portable or even possible with the wide variety
of storage layers in use.

This means we need to write a tuning document and hope
users read and improve on it :P
---
 Documentation/public-inbox-tuning.pod    | 139 +++++++++++++++++++++++
 Documentation/public-inbox-v2-format.pod |   6 +-
 MANIFEST                                 |   1 +
 Makefile.PL                              |   2 +-
 4 files changed, 144 insertions(+), 4 deletions(-)
 create mode 100644 Documentation/public-inbox-tuning.pod

diff --git a/Documentation/public-inbox-tuning.pod b/Documentation/public-inbox-tuning.pod
new file mode 100644
index 00000000..abc53d1e
--- /dev/null
+++ b/Documentation/public-inbox-tuning.pod
@@ -0,0 +1,139 @@
+=head1 NAME
+
+public-inbox-tuning - tuning public-inbox
+
+=head1 DESCRIPTION
+
+public-inbox intends to support a wide variety of hardware.  While
+we strive to provide the best out-of-the-box performance possible,
+tuning knobs are an unfortunate necessity in some cases.
+
+=over 4
+
+=item 1
+
+New inboxes: public-inbox-init -V2
+
+=item 2
+
+Process spawning
+
+=item 3
+
+Performance on rotational hard disk drives
+
+=item 4
+
+Btrfs (and possibly other copy-on-write filesystems)
+
+=item 5
+
+Performance on solid state drives
+
+=item 6
+
+Read-only daemons
+
+=back
+
+=head2 New inboxes: public-inbox-init -V2
+
+If you're starting a new inbox (and not mirroring an existing one),
+the L<-V2|public-inbox-v2-format(5)> format requires L<DBD::SQLite>,
+but is orders of magnitude more scalable than the original C<-V1>
+format.
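+
+For example, a new C<-V2> inbox could be initialized with (the inbox
+name, path, URL, and address below are all hypothetical):
+
+	public-inbox-init -V2 mylist /path/to/mylist \
+		https://example.com/mylist/ mylist@example.com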
+
+=head2 Process spawning
+
+Our optional use of L<Inline::C> speeds up subprocess spawning from
+large daemon processes.
+
+To enable L<Inline::C>, either set the C<PERL_INLINE_DIRECTORY>
+environment variable to point to a writable directory, or create
+C<~/.cache/public-inbox/inline-c> for any user(s) running
+public-inbox processes.
+
+More (optional) L<Inline::C> use will be introduced in the future
+to lower memory use and improve scalability.
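+
+For example, either of the following enables L<Inline::C> use (the
+directory only needs to be writable by the public-inbox processes):
+
+	# per-environment:
+	export PERL_INLINE_DIRECTORY=/path/to/writable/dir
+
+	# or per-user:
+	mkdir -p ~/.cache/public-inbox/inline-c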
+
+=head2 Performance on rotational hard disk drives
+
+Random I/O performance is poor on rotational HDDs.  Xapian indexing
+performance degrades significantly as DBs grow larger than available
+RAM.  Attempts to parallelize random I/O on HDDs lead to pathological
+slowdowns as inboxes grow.
+
+While C<-V2> introduced Xapian shards as a parallelization
+mechanism for SSDs, enabling C<publicInbox.indexSequentialShard>
+repurposes sharding as a mechanism to reduce the kernel page cache
+footprint when indexing on HDDs.
+
+Initializing a mirror with a high C<--jobs> count to create more
+shards (in C<-V2> inboxes) will keep each shard smaller and
+reduce its kernel page cache footprint.
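+
+For example, a hypothetical 16-shard index of a mirrored inbox could
+be built with (path is hypothetical):
+
+	public-inbox-index --jobs=16 /path/to/inbox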
+
+Users with large amounts of RAM are advised to set a large value
+for C<publicinbox.indexBatchSize> as documented in
+L<public-inbox-config(5)>.
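+
+For example, to raise the batch size to 128 megabytes (the value here
+is only a suggestion; tune it to available RAM):
+
+	git config --file ~/.public-inbox/config \
+		publicinbox.indexBatchSize 128m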
+
+C<dm-crypt> users on Linux 4.0+ are advised to try the
+C<--perf-same_cpu_crypt> C<--perf-submit_from_crypt_cpus>
+switches of L<cryptsetup(8)> to reduce I/O contention from
+kernel workqueue threads.
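+
+For example, with cryptsetup 2.2+, an already-open mapping can be
+reloaded with those switches (the mapping name is hypothetical):
+
+	cryptsetup refresh --perf-same_cpu_crypt \
+		--perf-submit_from_crypt_cpus luks-inbox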
+
+=head2 Btrfs (and possibly other copy-on-write filesystems)
+
+L<btrfs(5)> performance degrades from fragmentation when using
+large databases and random writes.  The Xapian + SQLite indices
+used by public-inbox are no exception to that.
+
+public-inbox 1.6.0+ disables copy-on-write (CoW) on Xapian and SQLite
+indices on btrfs to achieve acceptable performance (even on SSD).
+Disabling copy-on-write also disables checksumming; thus raid1
+(or higher) configurations may suffer corruption on unsafe shutdowns.
+
+Fortunately, these SQLite and Xapian indices are designed to be
+recoverable from git if missing.
+
+Large filesystems benefit significantly from the C<space_cache=v2>
+mount option documented in L<btrfs(5)>.
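+
+For example, an L<fstab(5)> entry using that mount option (device and
+mount point are hypothetical):
+
+	/dev/sdb1  /srv/inbox  btrfs  space_cache=v2,noatime  0  0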
+
+Older, non-CoW filesystems generally work well out-of-the-box
+for our Xapian and SQLite indices.
+
+=head2 Performance on solid state drives
+
+While SSD read performance is generally good, SSD write performance
+degrades as the drive ages and/or gets full.  Issuing C<TRIM> commands
+via L<fstrim(8)> or similar is required to sustain write performance.
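+
+For example, many distros ship a weekly timer which trims all
+supported mounted filesystems:
+
+	systemctl enable --now fstrim.timer
+
+Or, for a one-off run on a given mount point (path is hypothetical):
+
+	fstrim -v /srv/inbox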
+
+=head2 Read-only daemons
+
+L<public-inbox-httpd(1)>, L<public-inbox-imapd(1)>, and
+L<public-inbox-nntpd(1)> are all designed for C10K (or higher)
+levels of concurrency from a single process.  SMP systems may
+use C<--worker-processes=NUM> as documented in L<public-inbox-daemon(8)>
+for parallelism.
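+
+For example, on a 4-core system (PSGI and other arguments omitted):
+
+	public-inbox-httpd --worker-processes=4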
+
+The open file descriptor limit (C<RLIMIT_NOFILE>, C<ulimit -n> in L<sh(1)>,
+C<LimitNOFILE=> in L<systemd.exec(5)>) may need to be raised to
+accommodate many concurrent clients.
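+
+For example, a L<systemd.exec(5)> override (the unit name is
+hypothetical, and the limit should match expected client counts):
+
+	# systemctl edit public-inbox-httpd@1.service
+	[Service]
+	LimitNOFILE=30000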
+
+Transport Layer Security (IMAPS, NNTPS, or via STARTTLS) significantly
+increases the memory use of client sockets; be sure to account for
+that in capacity planning.
+
+=head1 CONTACT
+
+Feedback encouraged via plain-text mail to L<mailto:meta@public-inbox.org>
+
+Information for *BSDs and non-traditional filesystems especially
+welcome.
+
+Our archives are hosted at L<https://public-inbox.org/meta/>,
+L<http://hjrcffqmbrq6wope.onion/meta/>, and other places.
+
+=head1 COPYRIGHT
+
+Copyright 2020 all contributors L<mailto:meta@public-inbox.org>
+
+License: AGPL-3.0+ L<https://www.gnu.org/licenses/agpl-3.0.txt>
diff --git a/Documentation/public-inbox-v2-format.pod b/Documentation/public-inbox-v2-format.pod
index 6876989c..86a9b8f2 100644
--- a/Documentation/public-inbox-v2-format.pod
+++ b/Documentation/public-inbox-v2-format.pod
@@ -117,9 +117,9 @@ Rotational storage devices perform significantly worse than
 solid state storage for indexing of large mail archives; but are
 fine for backup and usable for small instances.
 
-As of public-inbox 1.6.0, the C<--sequential-shard> option of
-L<public-inbox-index(1)> may be used with a high shard count
-to ensure individual shards fit into page cache when the entire
+As of public-inbox 1.6.0, the C<publicInbox.indexSequentialShard>
+option of L<public-inbox-index(1)> may be used with a high shard
+count to ensure individual shards fit into page cache when the entire
 Xapian DB cannot.
 
 Our use of the L</OVERVIEW DB> requires Xapian document IDs to
diff --git a/MANIFEST b/MANIFEST
index 3d690177..6cb5f6bf 100644
--- a/MANIFEST
+++ b/MANIFEST
@@ -35,6 +35,7 @@ Documentation/public-inbox-mda.pod
 Documentation/public-inbox-nntpd.pod
 Documentation/public-inbox-overview.pod
 Documentation/public-inbox-purge.pod
+Documentation/public-inbox-tuning.pod
 Documentation/public-inbox-v1-format.pod
 Documentation/public-inbox-v2-format.pod
 Documentation/public-inbox-watch.pod
diff --git a/Makefile.PL b/Makefile.PL
index 831649f9..88da5b45 100644
--- a/Makefile.PL
+++ b/Makefile.PL
@@ -34,7 +34,7 @@ $v->{my_syntax} = [map { "$_.syntax" } @syn];
 $v->{-m1} = [ map { (split('/'))[-1] } @EXE_FILES ];
 $v->{-m5} = [ qw(public-inbox-config public-inbox-v1-format
 		public-inbox-v2-format) ];
-$v->{-m7} = [ qw(public-inbox-overview) ];
+$v->{-m7} = [ qw(public-inbox-overview public-inbox-tuning) ];
 $v->{-m8} = [ qw(public-inbox-daemon) ];
 my @sections = (1, 5, 7, 8);
 $v->{check_80} = [];
