* Unable to run host commands with singularity in a `guix pack -f squashfs` container
From: Sebastian Gibb
Date: 2021-07-08 19:29 UTC
To: help-guix
Hi,
I am trying to use `guix pack` to create a reproducible container that could be
used on an HPC. Unfortunately I am not able to schedule jobs from within the
container or run any command from the host system.
The HPC uses CentOS 8.1. It provides Slurm 20.02.5 for job scheduling and
Singularity 3.4.2 for virtualisation and user-supplied software bundles.
I generated my container as follows:
```
# Passing --relocatable twice (the equivalent of -RR) makes the pack's
# wrappers fall back to PRoot when unprivileged user namespaces are not
# available on the execution host.
cp $(/usr/local/bin/guix time-machine --commit=c78d6c6 -- pack \
    --relocatable --relocatable \
    --format=squashfs \
    --entry-point=bin/bash \
    --symlink=/bin=bin \
    --symlink=/lib=lib \
    --symlink=/share=share \
    --save-provenance \
    bash coreutils) mwe.squashfs
```
Next I try to use the host's `ssh` or Slurm's `sinfo`/`sbatch` from within the
container, but I get the error `No such file or directory`. Using `ls` or `cat`
I can access these files, but I am not able to execute them:
```
SINGULARITY_BIND="/usr" singularity run mwe.squashfs
> /usr/bin/ssh
runscript: /usr/bin/ssh: No such file or directory
> ls /usr/bin/ssh
/usr/bin/ssh
> cat /usr/bin/ssh
ELF>�@($
@8
...
```
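My guess, which I have not verified on this cluster, is that `exec`ing an ELF
binary fails with "No such file or directory" when the program interpreter
recorded in the binary (the dynamic loader, typically
`/lib64/ld-linux-x86-64.so.2` on CentOS) is not visible inside the container,
even though the binary itself is readable. A quick check, assuming `readelf`
(from `binutils`) is available in or bound into the container:
```
# Print the loader the host binary requests, then see whether that path
# exists inside the container.
readelf -l /usr/bin/ssh | grep -i interpreter
ls -l /lib64/ld-linux-x86-64.so.2   # example path; use whatever readelf printed
```
If the loader path is indeed missing, binding the host's `/lib64` alongside
`/usr` might be enough to start host binaries, although I am not sure how that
interacts with the `/lib` symlink the pack creates.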
I could work around the `ssh` problem by adding `openssh-sans-x` to the
`guix pack` command and binding `/etc/group`, `/etc/passwd` and `/var/run` into
my Singularity container. But if I include `slurm` I always get an error for
`sinfo`/`sbatch`:
"slurm_partitions: Zero Bytes were transmitted or received"
(maybe I need to bind some more paths into Singularity?)
1. Is there a way to use the host commands from within the singularity/squashfs
container generated by `guix pack`?
2. Or can I bind some more files/directories to my singularity command to get
the `slurm` commands working? (A sketch of the kind of binding I have in mind
follows below.)
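For question 2, the additional binding I have in mind is roughly the following;
the extra paths (`/etc/slurm`, the MUNGE socket under `/var/run`) are guesses
based on a typical Slurm/MUNGE installation, not something I have confirmed on
this cluster:
```
# Guessed extra binds for the Slurm client tools: the cluster's Slurm
# configuration plus /var/run for the MUNGE authentication socket.
SINGULARITY_BIND="/usr,/etc/passwd,/etc/group,/var/run,/etc/slurm" \
    singularity run mwe.squashfs
```
(Then again, the "Zero Bytes" error might just as well be a client/server
incompatibility rather than a missing bind.)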
By the way: I am not able to modify the PATH variable in `guix pack`-generated
containers. It seems that SINGULARITY_PREPEND_PATH/SINGULARITY_APPEND_PATH are
ignored:
```
> SINGULARITY_PREPEND_PATH="/usr/bin" \
    singularity exec container/mwe.squashfs /bin/bash -c 'echo $PATH'
WARNING: passwd file doesn't exist in container, not updating
WARNING: group file doesn't exist in container, not updating
/gnu/store/266jw5fcbygya3fkfbxkaa4yl23hrwci-profile/bin
```
3. How can I modify the PATH variable?
(I know I could create a wrapper start script along the lines of
`export PATH=$PATH:/usr/bin; /bin/bash`, sketched below, but even then I can't
execute `ssh`/`sinfo`/`sbatch` and other host commands.)
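For reference, the wrapper I have in mind would look roughly like this; it is
only a sketch and merely adjusts PATH, so it does not solve the execution
problem described above:
```
#!/bin/sh
# Sketch of a wrapper start script: put the host's /usr/bin on PATH and then
# hand over to an interactive shell.
export PATH="$PATH:/usr/bin"
exec /bin/bash "$@"
```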
Best wishes,
Sebastian
* Unable to run host commands with singularity in a `guix pack -f squashfs` container
From: Sebastian Gibb
Date: 2021-07-15 19:39 UTC
To: help-guix
Hi,
> The HPC uses CentOS 8.1. It provides Slurm 20.02.5 for job scheduling and
> Singularity 3.4.2 for virtualisation and user-supplied software bundles.
> ...
> I could work around the `ssh` problem by adding `openssh-sans-x` to the
> `guix pack` command and binding `/etc/group`, `/etc/passwd` and `/var/run`
> into my Singularity container. But if I include `slurm` I always get an error
> for `sinfo`/`sbatch`:
> "slurm_partitions: Zero Bytes were transmitted or received"
> (maybe I need to bind some more paths into Singularity?)
I got it working by putting a Slurm with the same major.minor version as the
host's into the singularity container.
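Concretely, that means adding `openssh-sans-x` and a matching Slurm client to
the pack, roughly like this; whether `slurm@20.02` is the right package
specification depends on the Guix revision used, so treat it as a sketch:
```
cp $(/usr/local/bin/guix time-machine --commit=c78d6c6 -- pack \
    --relocatable --relocatable \
    --format=squashfs \
    --entry-point=bin/bash \
    --symlink=/bin=bin \
    --symlink=/lib=lib \
    --symlink=/share=share \
    --save-provenance \
    bash coreutils openssh-sans-x slurm@20.02) mwe.squashfs
```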
Best wishes,
Sebastian