From: Bob Proulx
Newsgroups: gmane.emacs.help
Subject: Re: Long response times from elpa.gnu.org
Date: Sat, 8 Feb 2014 14:02:38 -0700
Message-ID: <20140208210238.GA1018@hysteria.proulx.com>
References: <20140208194705.GA15885@hysteria.proulx.com> <8361opweus.fsf@gnu.org>
In-Reply-To: <8361opweus.fsf@gnu.org>
To: help-gnu-emacs@gnu.org
Content-Type: text/plain; charset=us-ascii
User-Agent: Mutt/1.5.21 (2010-09-15)

Johan Andersson wrote:
> Contacting host: elpa.gnu.org:80
                   ^^^^^^^^^^^^
> Failed to download `gnu' archive.

Stefan Monnier wrote:
> I indeed saw this problem recently (a couple weeks ago), and when
> I logged into elpa.gnu.org to investigate, I saw a flood of connections
                ^^^^^^^^^^^^

Eli Zaretskii wrote:
> > Bob Proulx wrote:
> > > When you see such slow response times, please go to savannah.gnu.org and
> > > open a support request about it.
> >
> > Good idea but as far as I know elpa.gnu.org is not a Savannah machine.
>
> ??? Then how come I have in my elpa/.git/config this snippet:
>
>     [remote "origin"]
>         url = git+ssh://git.savannah.gnu.org/srv/git/emacs/elpa

Because git.savannah.gnu.org != elpa.gnu.org.  Those are different
systems.  How is elpa.gnu.org related to git.sv.gnu.org?
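A quick lookup is enough to check whether two names point at the same
address.  The following is only an illustrative sketch; the two
hostnames come from this thread, and whatever addresses it prints will
vary over time:

    #!/usr/bin/env python3
    # Resolve both hostnames and print their addresses side by side
    # so they can be compared.
    import socket

    for name in ("elpa.gnu.org", "git.savannah.gnu.org"):
        try:
            addrs = sorted({info[4][0]
                            for info in socket.getaddrinfo(name, None)})
            print("%s: %s" % (name, ", ".join(addrs)))
        except socket.gaierror as err:
            print("%s: lookup failed (%s)" % (name, err))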
However, if you are talking about the vcs.sv.gnu.org VM (hosting
git.sv.gnu.org, aka git.savannah.gnu.org), then the entire VM stack
has known performance problems.  There are at least 24 VMs hosted on
one Xen-based dom0 system.  Karl and I believe that the dom0 is I/O
saturated when several of the systems are active at the same time.
The I/O saturation causes long I/O waits with little CPU usage,
producing the appearance of a high load average while the CPU is
idle.

Meanwhile any performance metrics observed on the VM are fake data
and always report that everything is okay.  The only way to really
know what is happening would be to observe the dom0 host during the
performance brownouts.  If we had some visibility into what the dom0
reports then we would know something.  So far we don't.  Until
someone can look at the dom0 we can't actually know anything.

The FSF admins so far are unconvinced that the dom0 is at its I/O
capacity.  They think the dom0 should be able to handle all of the
current load plus more.  And so we have the current status.  But I
don't think anything short of adding hardware to increase the data
I/O capability will improve the situation.  Three dom0 systems
instead of one would give 3x the capability.  Put vcs onto its own
hardware and I believe the problem would go away.  Note that they
require their systems to run a coreboot BIOS, and therefore I can't
just send additional hardware to Boston to help.

Bob
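P.S.  To make the symptom concrete, here is a rough, purely
illustrative sketch (assuming a Linux /proc/stat) that samples the
aggregate cpu line twice and reports how much of the interval went to
iowait versus real work and idle.  A box showing a high load average
while most of its time lands in iowait matches the brownouts described
above.  Run inside a guest it only shows the guest's view; as said
above, only observation on the dom0 (for example with xentop) could
confirm the saturation.

    #!/usr/bin/env python3
    # Sample /proc/stat twice and report busy / idle / iowait shares.
    # Ignores irq/softirq/steal time for simplicity; this is only a
    # sketch, not a monitoring tool.
    import time

    def cpu_times():
        with open("/proc/stat") as f:
            fields = f.readline().split()[1:]  # drop the "cpu" label
        user, nice, system, idle, iowait = (int(x) for x in fields[:5])
        return user + nice + system, idle, iowait

    busy1, idle1, wait1 = cpu_times()
    time.sleep(5)
    busy2, idle2, wait2 = cpu_times()

    busy = busy2 - busy1
    idle = idle2 - idle1
    wait = wait2 - wait1
    total = float(busy + idle + wait)
    print("busy %.1f%%  idle %.1f%%  iowait %.1f%%"
          % (100 * busy / total, 100 * idle / total, 100 * wait / total))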