From: myglc2 <myglc2@gmail.com>
To: "Ludovic Courtès" <ludo@gnu.org>
Cc: Guix-devel <guix-devel@gnu.org>
Subject: Re: Guix on clusters and in HPC
Date: Mon, 24 Oct 2016 22:56:51 -0400
Message-ID: <86vawh9lvw.fsf@gmail.com>
In-Reply-To: <87r37divr8.fsf@gnu.org> (Ludovic Courtès's message of "Tue, 18 Oct 2016 16:20:43 +0200")

On 10/18/2016 at 16:20 Ludovic Courtès writes:

> Hello,
>
> I’m trying to gather a “wish list” of things to be done to facilitate
> the use of Guix on clusters and for high-performance computing (HPC).

The scheduler that I am most familiar with, SGE, supports the
proposition that compute hosts are heterogeneous and that each has a
fixed software and/or hardware configuration. As a result, users need
to specify the resources, such as SW packages, number of CPUs, and/or
memory, needed for a given job. These requirements in turn control
where a given job can run. QMAKE, the integration of GNU Make with the
SGE scheduler, further allows a make recipe step to specify the
resources for the SGE job that processes that step.
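
For example (a rough sketch from memory; the parallel environment and
resource names below are site-specific, so treat them as placeholders),
a resource-constrained job submission and a scheduler-driven make run
might look like:

  # ask for 8 slots in the "smp" parallel environment and 4G of
  # memory per slot for this batch job
  qsub -pe smp 8 -l h_vmem=4G run_analysis.sh

  # drive a Makefile through the scheduler; options before "--" go to
  # SGE, options after "--" go to make itself
  qmake -cwd -v PATH -- -j 8 all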

While SGE is dated and can be a bear to use, it provides a useful
yardstick for HPC/Cluster functionality. So it is useful to consider how
Guix(SD) might impact this model. Presumably a defining characteristic
of GuixSD clusters is that the software configuration of compute hosts
no longer needs to be fixed and the user can "dial in" a specific SW
configuration for each job step.  This is in many ways a good thing. But
it also generates new requirements. How does one specify the SW config
for a given job or recipe step (rough command equivalents are sketched
after the list):

1) VM image?

2) VM?

3) Installed System Packages?

4) Installed (user) packages?
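
Roughly speaking, and assuming current Guix tooling, these map onto
commands like the following (config.scm and manifest.scm are
placeholder file names):

  guix system vm-image config.scm      # 1) build a VM image of a full system
  guix system vm config.scm            # 2) build a script that boots a VM
  guix system reconfigure config.scm   # 3) install packages system-wide
  guix package -m manifest.scm         # 4) populate a per-user profile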

Based on my experiments with Guix/Debian, GuixSD, VMs, and VM images,
it is not obvious to me which of these levels of abstraction is
appropriate. Perhaps any mix of them should be supported. In any case,
tools to manage this aspect of a GuixSD cluster are needed, and they
need to be integrated with the cluster scheduler to produce a
manageable GuixSD HPC cluster.
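
To make option 4 concrete, a per-job user-level SW config could be
expressed today as a profile built from a manifest. The profile path,
manifest name, and run-analysis command below are placeholders I made
up for illustration:

  # build (or reuse) a profile containing exactly what this job needs
  guix package -p /scratch/$USER/profiles/job1234 -m manifest.scm

  # inside the job script handed to the scheduler, activate it
  GUIX_PROFILE=/scratch/$USER/profiles/job1234
  . "$GUIX_PROFILE/etc/profile"
  run-analysis input.dat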

The most forward-thinking group that I know discarded their cluster
hardware a year ago and replaced it with StarCluster
(http://star.mit.edu/cluster/). StarCluster automates the creation,
care, and feeding of HPC clusters on AWS using the Grid Engine
scheduler and AMIs. The group has a full-time "StarCluster jockey" who
manages their cluster, and they seem quite happy with the approach. So
you may want to consider StarCluster as a model when you think about
cluster management requirements.
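
In case it helps, the day-to-day StarCluster workflow is roughly the
following (from memory, and the cluster name is invented; cluster
templates live in ~/.starcluster/config):

  starcluster start mycluster       # boot an SGE cluster on EC2 from a template
  starcluster sshmaster mycluster   # log in to the master; qsub/qstat work as usual
  starcluster terminate mycluster   # tear the whole cluster down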

Thread overview: 30+ messages
2016-10-18 14:20 Guix on clusters and in HPC Ludovic Courtès
2016-10-18 14:55 ` Christopher Allan Webber
2016-10-18 16:04   ` Ludovic Courtès
2016-10-18 16:47 ` Roel Janssen
2016-10-19 11:11   ` Ricardo Wurmus
2016-10-21 12:11     ` Roel Janssen
2016-10-20 14:08   ` Ludovic Courtès
2016-10-21  9:32     ` Ricardo Wurmus
2016-10-26 11:51       ` Ludovic Courtès
2016-11-01 23:25         ` Ben Woodcroft
2016-11-02 16:03           ` Pjotr Prins
2016-11-04 22:05             ` Pjotr Prins
2016-11-05  2:17               ` Chris Marusich
2016-11-05 16:15               ` Roel Janssen
2016-11-08 12:39               ` Ludovic Courtès
2016-11-03 13:47           ` Ludovic Courtès
2016-10-19  7:17 ` Thomas Danckaert
2016-10-20 14:17   ` Ludovic Courtès
2016-10-25  2:56 ` myglc2 [this message]
2016-10-26 12:00   ` Ludovic Courtès
2016-11-01  0:11     ` myglc2
2016-10-26 12:08   ` Ricardo Wurmus
2016-10-31 22:01     ` myglc2
2016-11-01  7:15       ` Ricardo Wurmus
2016-11-01 12:03         ` Ben Woodcroft
2016-11-03 13:44           ` Ludovic Courtès
2016-11-19  6:18             ` Ben Woodcroft
2016-11-21 14:07               ` Ludovic Courtès
2016-10-26 15:43 ` Eric Bavier
2016-10-26 16:31   ` Ludovic Courtès
