unofficial mirror of meta@public-inbox.org
* sample robots.txt to reduce WWW load
@ 2024-04-01 13:21 Eric Wong
  2024-04-03 20:58 ` Konstantin Ryabitsev
  0 siblings, 1 reply; 4+ messages in thread
From: Eric Wong @ 2024-04-01 13:21 UTC (permalink / raw)
  To: meta

Performance is still slow, and crawler traffic patterns tend to
do bad things with caches at all levels, so I've regretfully had
to experiment with robots.txt to mitigate performance problems.

The /s/ solver endpoint remains expensive but commit
8d6a50ff2a44 (www: use a dedicated limiter for blob solver, 2024-03-11)
seems to have helped significantly.

All the multi-message endpoints (/[Tt]*) are of course expensive
and always have been.  git blob access over a SATA 2 SSD isn't too
fast, and HTML rendering is quite expensive in Perl.  Keeping
multiple zlib contexts around for HTTP gzip also hurts memory
usage, so we want to minimize how long clients hold onto
longer-lived allocations.

Anyway, the robots.txt below is what I've been experimenting with,
and (after a few days, once bots picked it up) it seems to have
significantly cut load on my system, so I can actually work on the
performance problems[1] which show up.

==> robots.txt <==
User-Agent: *
Disallow: /*/s/
Disallow: /*/T/
Disallow: /*/t/
Disallow: /*/t.atom
Disallow: /*/t.mbox.gz
Allow: /

I also disable git-archive snapshots for cgit || WwwCoderepo:

Disallow: /*/snapshot/*
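
If something like nginx sits in front as a reverse proxy, one way to
serve a static robots.txt such as the above could look like this
(the filesystem path is illustrative, not from any real setup):

==> nginx.conf (sketch) <==
# serve robots.txt as a static file instead of proxying it
location = /robots.txt {
    root /srv/www;  # i.e. /srv/www/robots.txt holds the rules above
}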


[1] I'm testing a glibc patch which hopefully reduces fragmentation.
    I've gotten rid of many of the Disallow: entries temporarily
    since I want crawler traffic back while checking whether it helps.


* Re: sample robots.txt to reduce WWW load
  2024-04-01 13:21 sample robots.txt to reduce WWW load Eric Wong
@ 2024-04-03 20:58 ` Konstantin Ryabitsev
  2024-04-03 22:31   ` Eric Wong
  0 siblings, 1 reply; 4+ messages in thread
From: Konstantin Ryabitsev @ 2024-04-03 20:58 UTC (permalink / raw)
  To: Eric Wong; +Cc: meta

On Mon, Apr 01, 2024 at 01:21:45PM +0000, Eric Wong wrote:
> Performance is still slow, and crawler traffic patterns tend to
> do bad things with caches at all levels, so I've regretfully had
> to experiment with robots.txt to mitigate performance problems.

This has been the source of grief for us, because aggressive bots don't appear
to be paying any attention to robots.txt, and they are fudging their
user-agent string to pretend to be a regular browser. I am dealing with one
that is hammering us from China Mobile IP ranges and is currently trying to
download every possible snapshot of torvalds/linux, while pretending to be
various versions of Chrome.

So, while I welcome having a robots.txt recommendation, it kinda assumes that
robots will actually play nice and won't try to suck down as much as possible
as quickly as possible for training some LLM-du-jour.

/end rant

-K


* Re: sample robots.txt to reduce WWW load
  2024-04-03 20:58 ` Konstantin Ryabitsev
@ 2024-04-03 22:31   ` Eric Wong
  2024-04-08 23:58     ` Eric Wong
  0 siblings, 1 reply; 4+ messages in thread
From: Eric Wong @ 2024-04-03 22:31 UTC (permalink / raw)
  To: Konstantin Ryabitsev; +Cc: meta

Konstantin Ryabitsev <konstantin@linuxfoundation.org> wrote:
> On Mon, Apr 01, 2024 at 01:21:45PM +0000, Eric Wong wrote:
> > Performance is still slow, and crawler traffic patterns tend to
> > do bad things with caches at all levels, so I've regretfully had
> > to experiment with robots.txt to mitigate performance problems.
> 
> This has been the source of grief for us, because aggressive bots don't appear
> to be paying any attention to robots.txt, and they are fudging their
> user-agent string to pretend to be a regular browser. I am dealing with one
> that is hammering us from China Mobile IP ranges and is currently trying to
> download every possible snapshot of torvalds/linux, while pretending to be
> various versions of Chrome.

Ouch, that's from cgit doing `git archive` on every single commit?
Yeah, that's a PITA and not something varnish can help with :/

I suppose you're already using some nginx knobs to throttle
or limit requests from their IP ranges?
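
For example (not from any real config), nginx's limit_req machinery
can cap per-IP request rates; the zone name, rate, and upstream
address below are illustrative:

==> nginx.conf (sketch) <==
# in the http{} context: shared state keyed on the client address
limit_req_zone $binary_remote_addr zone=crawlers:10m rate=2r/s;

server {
    location / {
        # per-IP throttle; excess beyond the burst is rejected (503 by default)
        limit_req zone=crawlers burst=20 nodelay;
        # or drop whole CIDR blocks outright (placeholder range):
        # deny 192.0.2.0/24;
        proxy_pass http://127.0.0.1:8080;  # public-inbox-httpd listener
    }
}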

It's been years since I've used nginx myself, but AFAIK its
proxy buffering is either full (buffer everything before sending)
or disabled entirely.  IOW, there's no lazy buffering that sends
whatever it can right away but falls back to buffering when a
client is the bottleneck.

I recommend "proxy_buffering off" in nginx for
public-inbox-{httpd,netd} since the lazy buffering done by our
Perl logic is ideal for git-{archive,http-backend} trickling to
slow clients.  This ensures the git memory hogs finish as
quickly as possible and we can slowly trickle to slow (or
throttled) clients with minimal memory overhead.
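
A minimal sketch of that (the listener address is illustrative):

==> nginx.conf (sketch) <==
location / {
    # let the Perl side's lazy buffering deal with slow clients
    proxy_buffering off;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://127.0.0.1:8080;  # public-inbox-httpd listener
}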

When I run cgit nowadays, it's _always_ run by
public-inbox-{httpd,netd} to get this lazy buffering behavior.
Previously, I used another poorly-marketed (epoll|kqueue)
multi-threaded Ruby HTTP server for the same behavior (I still
rely on that server for HTTPS instead of nginx, since I don't
yet have a Perl reverse proxy).

All that said, PublicInbox::WwwCoderepo (JS-free cgit
replacement + inbox integration UI) only generates archive links
for tags and not every single commit.

> So, while I welcome having a robots.txt recommendation, it kinda assumes that
> robots will actually play nice and won't try to suck down as much as possible
> as quickly as possible for training some LLM-du-jour.

robots.txt actually made a significant difference before I
started playing around with jemalloc-inspired size classes for
malloc in glibc[1] and mwrap-perl[2].

I've unleashed the bots again and let them run rampant on the
https://80x24.org/lore/ HTML pages.  Will need to add malloc
tracing on my own to generate reproducible results to prove it's
worth adding to glibc malloc...

[1] https://public-inbox.org/libc-alpha/20240401191925.M515362@dcvr/
[2] https://80x24.org/mwrap-perl/20240403214222.3258695-2-e@80x24.org/


* Re: sample robots.txt to reduce WWW load
  2024-04-03 22:31   ` Eric Wong
@ 2024-04-08 23:58     ` Eric Wong
  0 siblings, 0 replies; 4+ messages in thread
From: Eric Wong @ 2024-04-08 23:58 UTC (permalink / raw)
  To: Konstantin Ryabitsev; +Cc: meta

Eric Wong <e@80x24.org> wrote:
> I've unleashed the bots again and let them run rampant on the
> https://80x24.org/lore/ HTML pages.  Will need to add malloc
> tracing on my own to generate reproducible results to prove it's
> worth adding to glibc malloc...

Unfortunately, mwrap w/ tracing is expensive enough to affect
memory use:  slower request/response processing due to slower
malloc means larger queues build up.

I'm going to have to figure out lower-overhead tracing mechanisms
if I actually want to prove that size classes reduce
fragmentation...

