* [bug#27850] gnu: mpi: openmpi: Don't enable thread-multiple
@ 2017-07-27 15:01 Dave Love
2017-07-31 15:16 ` Ludovic Courtès
0 siblings, 1 reply; 13+ messages in thread
From: Dave Love @ 2017-07-27 15:01 UTC (permalink / raw)
To: 27850
The performance penalty for thread-multiple is supposed to be mitigated
in the most recent openmpi, but not in this version, and most
applications are happy with MPI_THREAD_FUNNELED.
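For context, the thread level being discussed is what an application requests at startup via MPI_Init_thread. A hedged sketch (the calls are standard MPI, but the program itself is illustrative and needs an MPI toolchain such as mpicc to build):

```c
/* Minimal sketch of MPI thread-level negotiation.  Most applications only
   need MPI_THREAD_FUNNELED (only the main thread makes MPI calls), which is
   why building the library with thread-multiple support is rarely required. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
  int provided;
  /* Request FUNNELED; 'provided' reports what the library actually grants.
     A library built without --enable-mpi-thread-multiple can still grant
     this level. */
  MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
  if (provided < MPI_THREAD_FUNNELED) {
    fprintf(stderr, "insufficient thread support: %d\n", provided);
    MPI_Abort(MPI_COMM_WORLD, EXIT_FAILURE);
  }
  MPI_Finalize();
  return 0;
}
```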
[-- Attachment #2: 0001-gnu-mpi-openmpi-Don-t-enable-thread-multiple.patch --]
[-- Type: text/x-diff, Size: 1988 bytes --]
From b860f75ed48c6d15e8f19b80ceede315df4db1fe Mon Sep 17 00:00:00 2001
From: Dave Love <fx@gnu.org>
Date: Thu, 27 Jul 2017 15:52:34 +0100
Subject: [PATCH] gnu: mpi: openmpi: Don't enable thread-multiple. gnu: mpi:
openmpi-thread-multiple: Version with thread-multiple enabled.
* gnu/packages/mpi.scm (openmpi): Don't enable thread-multiple.
The support affects performance significantly.
(openmpi-thread-multiple): New package.
---
gnu/packages/mpi.scm | 18 +++++++++++++++++-
1 file changed, 17 insertions(+), 1 deletion(-)
diff --git a/gnu/packages/mpi.scm b/gnu/packages/mpi.scm
index 54fdd35ad..678d7dd88 100644
--- a/gnu/packages/mpi.scm
+++ b/gnu/packages/mpi.scm
@@ -130,10 +130,10 @@ bind processes, and much more.")
(native-inputs
`(("pkg-config" ,pkg-config)
("perl" ,perl)))
+ (outputs '("out" "lib" "debug"))
(arguments
`(#:configure-flags `("--enable-static"
- "--enable-mpi-thread-multiple"
"--enable-builtin-atomics"
"--enable-mpi-ext=all"
@@ -182,3 +182,19 @@ best MPI library available. Open MPI offers advantages for system and
software vendors, application developers and computer science researchers.")
;; See file://LICENSE
(license bsd-2)))
+
+(define-public openmpi-thread-multiple
+ (package
+ (inherit openmpi)
+ (name "openmpi-thread-multiple")
+ (arguments
+ (substitute-keyword-arguments (package-arguments openmpi)
+ ((#:configure-flags flags)
+ `(cons "--enable-mpi-thread-multiple" ,flags))))
+ (description (string-append (package-description openmpi)
+ "\
+
+This version of openmpi has an implementation of MPI_Init_thread that provides
+MPI_THREAD_MULTIPLE. This won't work correctly with all transports (such as
+openib), and the performance is generally worse than the vanilla openmpi
+package, which only provides MPI_THREAD_FUNNELED."))))
--
2.11.0
* [bug#27850] gnu: mpi: openmpi: Don't enable thread-multiple
2017-07-27 15:01 [bug#27850] gnu: mpi: openmpi: Don't enable thread-multiple Dave Love
@ 2017-07-31 15:16 ` Ludovic Courtès
2017-07-31 18:09 ` Dave Love
From: Ludovic Courtès @ 2017-07-31 15:16 UTC (permalink / raw)
To: Dave Love; +Cc: 27850
Hi Dave,
Dave Love <fx@gnu.org> skribis:
> The performance penalty for thread-multiple is supposed to be mitigated
> in the most recent openmpi, but not in this version, and most
> applications are happy with MPI_THREAD_FUNNELED.
>
> From b860f75ed48c6d15e8f19b80ceede315df4db1fe Mon Sep 17 00:00:00 2001
> From: Dave Love <fx@gnu.org>
> Date: Thu, 27 Jul 2017 15:52:34 +0100
> Subject: [PATCH] gnu: mpi: openmpi: Don't enable thread-multiple. gnu: mpi:
> openmpi-thread-multiple: Version with thread-multiple enabled.
>
> * gnu/packages/mpi.scm (openmpi): Don't enable thread-multiple.
> The support affects performance significantly.
> (openmpi-thread-multiple): New package.
> ---
> gnu/packages/mpi.scm | 18 +++++++++++++++++-
> 1 file changed, 17 insertions(+), 1 deletion(-)
>
> diff --git a/gnu/packages/mpi.scm b/gnu/packages/mpi.scm
> index 54fdd35ad..678d7dd88 100644
> --- a/gnu/packages/mpi.scm
> +++ b/gnu/packages/mpi.scm
> @@ -130,10 +130,10 @@ bind processes, and much more.")
> (native-inputs
> `(("pkg-config" ,pkg-config)
> ("perl" ,perl)))
> + (outputs '("out" "lib" "debug"))
This should go to a separate patch. Also please check the relative size
of each output and the “typical” closure size of openmpi users, to
motivate the need for a separate “lib” output (the “debug” output is
OK).
I suspect some of the packages that depend on openmpi need to be changed
to have a dependency on both the “out” and the “lib” outputs, no?
> (arguments
> `(#:configure-flags `("--enable-static"
>
> - "--enable-mpi-thread-multiple"
Should we upgrade our openmpi package instead of doing this?
> + (description (string-append (package-description openmpi)
> + "\
> +
> +This version of openmpi has an implementation of MPI_Init_thread that provides
> +MPI_THREAD_MULTIPLE. This won't work correctly with all transports (such as
> +openib), and the performance is generally worse than the vanilla openmpi
> +package, which only provides MPI_THREAD_FUNNELED."))))
Nitpick: we use literal strings in ‘description’ and ‘synopsis’ so that
they are picked up by xgettext for translation.
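Concretely, the nitpick amounts to inlining the text as one literal string rather than building it with string-append. A rough sketch of what that would look like (the elided opening text would be the openmpi description, which is not reproduced here):

```scheme
(define-public openmpi-thread-multiple
  (package
    (inherit openmpi)
    (name "openmpi-thread-multiple")
    (arguments
     (substitute-keyword-arguments (package-arguments openmpi)
       ((#:configure-flags flags)
        `(cons "--enable-mpi-thread-multiple" ,flags))))
    ;; A single literal string (no string-append), so xgettext can
    ;; extract it for translation.
    (description "...

This version of openmpi has an implementation of MPI_Init_thread that
provides MPI_THREAD_MULTIPLE.  This won't work correctly with all
transports (such as openib), and the performance is generally worse than
the vanilla openmpi package, which only provides MPI_THREAD_FUNNELED.")))
```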
Thanks for looking into this!
Ludo’.
* [bug#27850] gnu: mpi: openmpi: Don't enable thread-multiple
2017-07-31 15:16 ` Ludovic Courtès
@ 2017-07-31 18:09 ` Dave Love
2017-08-01 9:27 ` Ludovic Courtès
From: Dave Love @ 2017-07-31 18:09 UTC (permalink / raw)
To: Ludovic Courtès; +Cc: 27850
Ludovic Courtès <ludovic.courtes@inria.fr> writes:
>> diff --git a/gnu/packages/mpi.scm b/gnu/packages/mpi.scm
>> index 54fdd35ad..678d7dd88 100644
>> --- a/gnu/packages/mpi.scm
>> +++ b/gnu/packages/mpi.scm
>> @@ -130,10 +130,10 @@ bind processes, and much more.")
>> (native-inputs
>> `(("pkg-config" ,pkg-config)
>> ("perl" ,perl)))
>> + (outputs '("out" "lib" "debug"))
>
> This should go to a separate patch.
Yes, that's a mistake, and I'd actually forgotten I'd sent the patch.
(I detest git.) I have a new version to send, which I'll edit for the
nitpicks.
>> (arguments
>> `(#:configure-flags `("--enable-static"
>>
>> - "--enable-mpi-thread-multiple"
>
> Should we upgrade our openmpi package instead of doing this?
I don't know whether they've fixed all the breakage I knew about in
OMPI 2 or whether there's still any penalty from thread-multiple. 1.10
seems fairly safe, but I don't have strong opinions if people think 2 is
solid. Apart from ABI incompatibility, I assume it has the usual
incompatibilities at the mpirun/MCA level, and that they aren't well
documented.
* [bug#27850] gnu: mpi: openmpi: Don't enable thread-multiple
2017-07-31 18:09 ` Dave Love
@ 2017-08-01 9:27 ` Ludovic Courtès
2017-08-01 16:06 ` Dave Love
From: Ludovic Courtès @ 2017-08-01 9:27 UTC (permalink / raw)
To: Dave Love; +Cc: 27850
Dave Love <fx@gnu.org> skribis:
> Ludovic Courtès <ludovic.courtes@inria.fr> writes:
[...]
>>> (arguments
>>> `(#:configure-flags `("--enable-static"
>>>
>>> - "--enable-mpi-thread-multiple"
>>
>> Should we upgrade our openmpi package instead of doing this?
>
> I don't know whether they've fixed all the breakage I knew about in
> OMPI 2 or whether there's still any penalty from thread-multiple. 1.10
> seems fairly safe, but I don't have strong opinions if people think 2 is
> solid. Apart from ABI incompatibility, I assume it has the usual
> incompatibilities at the mpirun/MCA level, and that they aren't well
> documented.
ABI compatibility is normally not an issue with Guix, so I’d be in favor
of upgrading to 2.0.3. Would you like to do it?
Thanks,
Ludo’.
* [bug#27850] gnu: mpi: openmpi: Don't enable thread-multiple
2017-08-01 9:27 ` Ludovic Courtès
@ 2017-08-01 16:06 ` Dave Love
2017-08-01 17:39 ` Ludovic Courtès
From: Dave Love @ 2017-08-01 16:06 UTC (permalink / raw)
To: Ludovic Courtès; +Cc: 27850
Ludovic Courtès <ludovic.courtes@inria.fr> writes:
> Dave Love <fx@gnu.org> skribis:
>
>> Ludovic Courtès <ludovic.courtes@inria.fr> writes:
>
> [...]
>
>>>> (arguments
>>>> `(#:configure-flags `("--enable-static"
>>>>
>>>> - "--enable-mpi-thread-multiple"
>>>
>>> Should we upgrade our openmpi package instead of doing this?
>>
>> I don't know whether they've fixed all the breakage I knew about in
>> OMPI 2 or whether there's still any penalty from thread-multiple. 1.10
>> seems fairly safe, but I don't have strong opinions if people think 2 is
>> solid. Apart from ABI incompatibility, I assume it has the usual
>> incompatibilities at the mpirun/MCA level, and that they aren't well
>> documented.
>
> ABI compatibility is normally not an issue with Guix, so I’d be in favor
> of upgrading to 2.0.3. Would you like to do it?
Maybe, but what about the non-ABI compatibility I expect there is? (I
don't know whether there's still any penalty from thread-multiple
anyhow; I guess not, as I see it's not the default.) 2.1 probably also
needs non-trivial work in figuring out whether it still needs a bundled
libevent, for instance.
If anyone's using it seriously, I'd have thought effort would be better
spent on support for SLURM (as it's in Guix) and supporting
high-performance fabrics (which I started on).
* [bug#27850] gnu: mpi: openmpi: Don't enable thread-multiple
2017-08-01 16:06 ` Dave Love
@ 2017-08-01 17:39 ` Ludovic Courtès
2017-08-01 20:10 ` Dave Love
From: Ludovic Courtès @ 2017-08-01 17:39 UTC (permalink / raw)
To: Dave Love; +Cc: 27850
Hi Dave,
Dave Love <fx@gnu.org> skribis:
> Ludovic Courtès <ludovic.courtes@inria.fr> writes:
>
>> Dave Love <fx@gnu.org> skribis:
>>
>>> Ludovic Courtès <ludovic.courtes@inria.fr> writes:
>>
>> [...]
>>
>>>>> (arguments
>>>>> `(#:configure-flags `("--enable-static"
>>>>>
>>>>> - "--enable-mpi-thread-multiple"
>>>>
>>>> Should we upgrade our openmpi package instead of doing this?
>>>
>>> I don't know whether they've fixed all the breakage I knew about in
>>> OMPI 2 or whether there's still any penalty from thread-multiple. 1.10
>>> seems fairly safe, but I don't have strong opinions if people think 2 is
>>> solid. Apart from ABI incompatibility, I assume it has the usual
>>> incompatibilities at the mpirun/MCA level, and that they aren't well
>>> documented.
>>
>> ABI compatibility is normally not an issue with Guix, so I’d be in favor
>> of upgrading to 2.0.3. Would you like to do it?
>
> Maybe, but what about the non-ABI compatibility I expect there is? (I
> don't know whether there's still any penalty from thread-multiple
> anyhow; I guess not, as I see it's not the default.)
I propose this because you had written that the “performance penalty for
thread-multiple is supposed to be mitigated in the most recent openmpi.”
If it’s not, then fine.
> 2.1 probably also needs non-trivial work in figuring out whether it
> still needs a bundled libevent, for instance.
Sure, that’s packaging. :-)
> If anyone's using it seriously, I'd have thought effort would be better
> spent on support for SLURM (as it's in Guix) and supporting
> high-performance fabrics (which I started on).
You already mentioned openfabrics a couple of times I think. Mentioning
it more won’t turn it into an actual package. :-) It’s on my to-do
list, I guess it’s on yours too, so we’ll get there.
What do you have in mind for SLURM?
As for “using it seriously”, I think this is a needlessly aggressive way
to express your frustration. People *are* using Guix “seriously” in HPC
already, but (1) different application domains emphasize different
aspects of “HPC”, and (2) there’s on-going work to improve Guix for HPC
and your feedback is invaluable here.
HTH,
Ludo’.
* [bug#27850] gnu: mpi: openmpi: Don't enable thread-multiple
2017-08-01 17:39 ` Ludovic Courtès
@ 2017-08-01 20:10 ` Dave Love
2017-08-21 13:17 ` bug#27850: " Ludovic Courtès
From: Dave Love @ 2017-08-01 20:10 UTC (permalink / raw)
To: Ludovic Courtès; +Cc: 27850
Ludovic Courtès <ludovic.courtes@inria.fr> writes:
>> Maybe, but what about the non-ABI compatibility I expect there is? (I
>> don't know whether there's still any penalty from thread-multiple
>> anyhow; I guess not, as I see it's not the default.)
>
> I propose this because you had written that the “performance penalty for
> thread-multiple is supposed to be mitigated in the most recent openmpi.”
> If it’s not, then fine.
I don't know the value of "mitigated". I could ask or, better, measure
when I get back from holiday (at least micro-benchmarks over
Infiniband).
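A hedged sketch of the kind of measurement meant here, using the OSU micro-benchmarks (the benchmark binaries, hostfile, and its contents are assumptions; each variant of openmpi would need its own build of the benchmarks):

```shell
# Compare point-to-point latency and bandwidth of the two openmpi builds
# over InfiniBand.  Assumes osu_latency/osu_bw were compiled with the
# mpicc of the build under test, and 'hostfile' lists two IB-connected nodes.
mpirun -np 2 --hostfile hostfile ./osu_latency
mpirun -np 2 --hostfile hostfile ./osu_bw
```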
>> If anyone's using it seriously, I'd have thought effort would be better
>> spent on support for SLURM (as it's in Guix) and supporting
>> high-performance fabrics (which I started on).
>
> You already mentioned openfabrics a couple of times I think. Mentioning
> it more won’t turn it into an actual package. :-) It’s on my to-do
> list, I guess it’s on yours too, so we’ll get there.
Sure. It's only what seems important. I'll post what I've got, but if
someone else is doing it, fine, and I won't duplicate effort.
> What do you have in mind for SLURM?
There's integration with SLURM (--with-slurm), PBS/Torque, and LSF (or,
I guess, Open Lava in the free world). I don't know much about them,
but they build MCA modules. Unlike the gridengine support, they link
against libraries for the resource managers, so you want them to be
add-ons which are only installed when required (not like the Fedora
packaging).
> As for “using it seriously”, I think this is a needlessly aggressive way
> to express your frustration.
I'm sorry I'm mis-communicating trans-Manche, at least. It wasn't meant
like that at all and I'll try to be more careful. Please assume I'm a
friendly hacker, even if I have strong opinions, which I hope I can
justify!
> People *are* using Guix “seriously” in HPC
I meant openmpi, not Guix generally. "Seriously" meant applications
which are communication-intensive (like the latency-sensitive DFT
applications).
> already, but (1) different application domains emphasize different
> aspects of “HPC”, and (2) there’s on-going work to improve Guix for HPC
> and your feedback is invaluable here.
I hope I can give useful feedback, and any criticism is meant
constructively. However, I'm not representative of UK HPC people --
happier to use functional Scheme than Python, and believing in packaging
for a start!
Happy hacking.
* bug#27850: gnu: mpi: openmpi: Don't enable thread-multiple
2017-08-01 20:10 ` Dave Love
@ 2017-08-21 13:17 ` Ludovic Courtès
2017-08-23 11:08 ` [bug#27850] " Dave Love
From: Ludovic Courtès @ 2017-08-21 13:17 UTC (permalink / raw)
To: Dave Love; +Cc: 27850-done
Hello Dave,
Sorry for the looong delay!
Dave Love <fx@gnu.org> skribis:
> Ludovic Courtès <ludovic.courtes@inria.fr> writes:
>
>>> Maybe, but what about the non-ABI compatibility I expect there is? (I
>>> don't know whether there's still any penalty from thread-multiple
>>> anyhow; I guess not, as I see it's not the default.)
>>
>> I propose this because you had written that the “performance penalty for
>> thread-multiple is supposed to be mitigated in the most recent openmpi.”
>> If it’s not, then fine.
>
> I don't know the value of "mitigated". I could ask or, better, measure
> when I get back from holiday (at least micro-benchmarks over
> Infiniband).
OK, makes sense. I asked an Open MPI developer here at work and they
confirmed that it’s reasonable to assume that thread-multiple support
has some overhead.
I went ahead and applied the patch you posted, minus the extra outputs,
and without ‘string-append’ in the description (which prevents l10n).
>> What do you have in mind for SLURM?
>
> There's integration with SLURM (--with-slurm), PBS/Torque, and LSF (or,
> I guess, Open Lava in the free world). I don't know much about them,
> but they build MCA modules. Unlike the gridengine support, they link
> against libraries for the resource managers, so you want them to be
> add-ons which are only installed when required (not like the Fedora
> packaging).
I see. I suppose we could make them separate outputs to avoid the
overhead, if that’s justified?
> I hope I can give useful feedback, and any criticism is meant
> constructively. However, I'm not representative of UK HPC people --
> happier to use functional Scheme than Python, and believing in packaging
> for a start!
Got it!
Thank you,
Ludo’.
* [bug#27850] gnu: mpi: openmpi: Don't enable thread-multiple
2017-08-21 13:17 ` bug#27850: " Ludovic Courtès
@ 2017-08-23 11:08 ` Dave Love
2017-08-23 21:34 ` Ludovic Courtès
From: Dave Love @ 2017-08-23 11:08 UTC (permalink / raw)
To: Ludovic Courtès; +Cc: 27850-done
Ludovic Courtès <ludo@gnu.org> writes:
> Hello Dave,
>
> Sorry for the looong delay!
I'm sure you deserve holidays.
> I went ahead and applied the patch you posted, minus the extra outputs,
What's the problem with a runtime output? I think it's a problem if
running MPI programs requires a development environment on compute
nodes.
> and without ‘string-append’ in the description (which prevents l10n).
OK. I'll post a general question about that. I'm happy to support
localization (or even localisation for misguided people
<https://en.wikipedia.org/wiki/Oxford_spelling>).
>>> What do you have in mind for SLURM?
>>
>> There's integration with SLURM (--with-slurm), PBS/Torque, and LSF (or,
>> I guess, Open Lava in the free world). I don't know much about them,
>> but they build MCA modules. Unlike the gridengine support, they link
>> against libraries for the resource managers, so you want them to be
>> add-ons which are only installed when required (not like the Fedora
>> packaging).
>
> I see. I suppose we could make them separate outputs to avoid the
> overhead, if that’s justified?
Yes, I think so. It's probably best if someone does it who uses those
resource managers and can test the result.
* [bug#27850] gnu: mpi: openmpi: Don't enable thread-multiple
2017-08-23 11:08 ` [bug#27850] " Dave Love
@ 2017-08-23 21:34 ` Ludovic Courtès
2017-09-01 10:35 ` Dave Love
From: Ludovic Courtès @ 2017-08-23 21:34 UTC (permalink / raw)
To: Dave Love; +Cc: 27850-done
Dave Love <fx@gnu.org> skribis:
> Ludovic Courtès <ludo@gnu.org> writes:
[...]
>> I went ahead and applied the patch you posted, minus the extra outputs,
>
> What's the problem with a runtime output?
It’s mostly that this should be a separate patch, as discussed earlier.
It’s not a problem per se as long as it does help reduce the size of
what people commonly install, which is not obvious to me.
>>>> What do you have in mind for SLURM?
>>>
>>> There's integration with SLURM (--with-slurm), PBS/Torque, and LSF (or,
>>> I guess, Open Lava in the free world). I don't know much about them,
>>> but they build MCA modules. Unlike the gridengine support, they link
>>> against libraries for the resource managers, so you want them to be
>>> add-ons which are only installed when required (not like the Fedora
>>> packaging).
>>
>> I see. I suppose we could make them separate outputs to avoid the
>> overhead, if that’s justified?
>
> Yes, I think so. It's probably best if someone does it who uses those
> resource managers and can test the result.
OK, I may give it a try at a later point.
Thanks,
Ludo’.
* [bug#27850] gnu: mpi: openmpi: Don't enable thread-multiple
2017-08-23 21:34 ` Ludovic Courtès
@ 2017-09-01 10:35 ` Dave Love
2017-09-01 22:10 ` Ludovic Courtès
From: Dave Love @ 2017-09-01 10:35 UTC (permalink / raw)
To: Ludovic Courtès; +Cc: 27850-done
Ludovic Courtès <ludo@gnu.org> writes:
> Dave Love <fx@gnu.org> skribis:
>
>> Ludovic Courtès <ludo@gnu.org> writes:
>
> [...]
>
>>> I went ahead and applied the patch you posted, minus the extra outputs,
>>
>> What's the problem with a runtime output?
>
> It’s mostly that this should be a separate patch, as discussed earlier.
OK. I've lost track, sorry. I'll have another look.
> It’s not a problem per se as long as it does help reduce the size of
> what people commonly install, which is not obvious to me.
I'm afraid I don't know about "commonly", particularly with Guix, but
I'm surprised if people would expect to get the respective MPI
development environment for an MPI program they install any more than
they'd expect the compiler generally. The separation other
distributions have seems appropriate to me as a system manager and
packager -- even if it's not the typical HPC way (Spack et al?) -- but
that's just my opinion.
* [bug#27850] gnu: mpi: openmpi: Don't enable thread-multiple
2017-09-01 10:35 ` Dave Love
@ 2017-09-01 22:10 ` Ludovic Courtès
2017-09-07 16:16 ` Dave Love
From: Ludovic Courtès @ 2017-09-01 22:10 UTC (permalink / raw)
To: Dave Love; +Cc: 27850-done
Heya,
Dave Love <fx@gnu.org> skribis:
> I'm afraid I don't know about "commonly", particularly with Guix, but
> I'm surprised if people would expect to get the respective MPI
> development environment for an MPI program they install any more than
> they'd expect the compiler generally. The separation other
> distributions have seems appropriate to me as a system manager and
> packager -- even if it's not the typical HPC way (Spack et al?) -- but
> that's just my opinion.
As I wrote before, Guix currently doesn’t have as fine-grained a
separation as Debian, Fedora, & co.: we add an extra output when we have
evidence that it provides noticeable space savings.
We could discuss that policy of course, but that’s another story.
Ludo’.
* [bug#27850] gnu: mpi: openmpi: Don't enable thread-multiple
2017-09-01 22:10 ` Ludovic Courtès
@ 2017-09-07 16:16 ` Dave Love
From: Dave Love @ 2017-09-07 16:16 UTC (permalink / raw)
To: Ludovic Courtès; +Cc: 27850-done
Ludovic Courtès <ludo@gnu.org> writes:
> Heya,
>
> Dave Love <fx@gnu.org> skribis:
>
>> I'm afraid I don't know about "commonly", particularly with Guix, but
>> I'm surprised if people would expect to get the respective MPI
>> development environment for an MPI program they install any more than
>> they'd expect the compiler generally. The separation other
>> distributions have seems appropriate to me as a system manager and
>> packager -- even if it's not the typical HPC way (Spack et al?) -- but
>> that's just my opinion.
>
> As I wrote before, Guix currently doesn’t have as fine grain a
> separation as Debian, Fedora, & co.: we add an extra output when we have
> evidence that it provides noticeably space savings.
Sorry again, I'd forgotten, with holiday breaks and things, about the
dependency on gfortran being removed. (I'm not sure that's a good idea,
since the library is version-specific.) Since I've expurgated the
contribution to the closure from valgrind it looks a lot better.
Keeping a dependency on gfortran I think would only contribute the
"self" of gfortran. That's actually a lot smaller than libgfortran,
which has a load of stuff that presumably shouldn't be in it, like
libstdc++ and the sanitizer support.
> We could discuss that policy of course, but that’s another story.
>
> Ludo’.