unofficial mirror of guile-user@gnu.org 
* object serialization
@ 2005-02-05 17:40 Mario Storti
From: Mario Storti @ 2005-02-05 17:40 UTC


Hi all,

I'm writing a parallel Finite Element
(http://www.cimec.org.ar/petscfem) program and I'm experimenting with
extending it with Guile.

The program runs in parallel, using message passing with the MPI library
(http://www-unix.mcs.anl.gov/mpi/). I have wrapped some basic MPI
functions (MPI_Comm_rank, MPI_Comm_size, MPI_Recv and MPI_Send), and it
seems to work fine, but I would like to ask whether anyone knows of an
existing port of MPI to Guile.

When running in parallel I had to compile my own Guile interpreter,
since I need all processes to read and interpret the script. This
prevents me from using the interpreter interactively (when running in
parallel), because MPI does not broadcast the standard input to the
other processes. I think this could be fixed by modifying the REPL:
when running interactively in parallel, the REPL on the master node
should be in charge of broadcasting the standard input to the nodes.
Any ideas?
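
(One possible shape for that broadcasting step, as a sketch only;
read_line_everywhere is a name invented here, not part of any existing
binding:)

#include <stdio.h>
#include <string.h>
#include <mpi.h>

/* Sketch: rank 0 reads a line from the terminal; every rank, rank 0
   included, ends up with a copy to hand to its evaluator.  A length
   of 0 signals end-of-input to the slave ranks.  */
static int
read_line_everywhere (char *buf, int bufsize)
{
  int rank, len = 0;
  MPI_Comm_rank (MPI_COMM_WORLD, &rank);
  if (rank == 0 && fgets (buf, bufsize, stdin) != NULL)
    len = (int) strlen (buf) + 1;   /* include the trailing NUL */
  MPI_Bcast (&len, 1, MPI_INT, 0, MPI_COMM_WORLD);
  if (len > 0)
    MPI_Bcast (buf, len, MPI_CHAR, 0, MPI_COMM_WORLD);
  return len;
}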

Also, I'm not very happy with the way I do the MPI initialization. I
had to write my own Guile interpreter because MPI needs access to the
argc, argv arguments of main(), so that MPI initialization is _always_
done. I would rather have a Scheme `mpi-init' function called by the
user. But, on the other hand, I can't do the finalization in
`inner_main()' because I get a lot of `net_recv' errors _before_
reaching MPI_Finalize(). I solved this by writing a Scheme
`mpi-finalize' function and forcing the user to always end her script
with (mpi-finalize). It works, but it looks ugly to me and lacks
symmetry between the initialization and finalization of MPI. Any ideas?
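
(For reference, here is roughly how such a primitive might be
registered; this is a sketch, and scm_mpi_finalize / init_mpi_finalize
are names made up for the example:)

#include <libguile.h>
#include <mpi.h>

static SCM
scm_mpi_finalize (void)
{
  MPI_Finalize ();
  return SCM_UNSPECIFIED;
}

/* Called from init_mpi (), alongside the other wrappers, so that
   Scheme code can end with (mpi-finalize).  */
static void
init_mpi_finalize (void)
{
  scm_c_define_gsubr ("mpi-finalize", 0, 0, 0, scm_mpi_finalize);
}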

//---:---<*>---:---<*>---:---<*>---:---<*>---:---<*>---:---<*>---: 
#include <libguile.h>
#include <mpi.h>

void init_mpi (void); /* loads the wrapped MPI functions */

static void
inner_main (void *closure, int argc, char **argv) {
  MPI_Init(&argc, &argv);
  init_mpi();
  scm_shell(argc, argv); /* note: scm_shell() normally does not return
                            (it exits the process), so the call below
                            is never actually reached */
  MPI_Finalize();
}

//---:---<*>---:---<*>---:---<*>---:---<*>---:---<*>---:---<*>---: 
int main (int argc, char **argv) {
  scm_boot_guile (argc, argv, inner_main, 0);
  return 0; // never reached
}

Third question: I wish to be able to send and receive any Scheme
object. I know that there is some standard form of object
serialization (SRFI-10, the hash-comma reader extension), but I would
like to hear opinions about this. Any pointers to object serialization
in Guile?
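
(One simple-minded possibility, sketched below, is to serialize with
`write' and revive with `read'; this covers exactly the objects that
have a readable external representation, so closures, ports and the
like are out. mpi_send_object is a made-up name, and
scm_to_locale_string follows the newer Guile C API.)

#include <stdlib.h>
#include <string.h>
#include <libguile.h>
#include <mpi.h>

/* Sketch: send OBJ's textual form, as produced by `write', to DEST.  */
static void
mpi_send_object (SCM obj, int dest, int tag)
{
  SCM str = scm_object_to_string (obj, SCM_UNDEFINED);
  char *buf = scm_to_locale_string (str);
  int len = (int) strlen (buf) + 1;   /* include the trailing NUL */
  MPI_Send (&len, 1, MPI_INT, dest, tag, MPI_COMM_WORLD);
  MPI_Send (buf, len, MPI_CHAR, dest, tag, MPI_COMM_WORLD);
  free (buf);
}

/* The receiver does two matching MPI_Recv calls and revives the
   object by `read'ing the buffer back from a string port.  */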

TIA for your help. 

Regards, Mario

=====
-------------------------
Mario Alberto Storti
Centro Internacional de Metodos Computacionales
  en Ingenieria - CIMEC (INTEC/CONICET-UNL)
INTEC, Guemes 3450 - 3000 Santa Fe, Argentina
Tel/Fax: +54-342-4511594, cel: +54-342-156144983
e-mail: mstorti@intec.unl.edu.ar
http://www.cimec.org.ar/mstorti, http://www.cimec.org.ar
-------------------------




* object serialization
@ 2005-02-06 10:46 Mikael Djurfeldt
From: Mikael Djurfeldt @ 2005-02-06 10:46 UTC
  Cc: djurfeldt

[Resending to list as well.]

On Sat, 5 Feb 2005 11:40:36 -0600 (CST), Mario Storti
<mariostorti@yahoo.com> wrote:
> I'm writing a parallel Finite Element
> (http://www.cimec.org.ar/petscfem) program and I'm experimenting with
> extending it with Guile.

That's very exciting!  I've written a neuron simulator which is
parallelized with MPI and extended with Guile.  (It's not publicly
available right now.  It's GPL'ed, all right, but not in a releasable
state and doesn't have enough docs.)

> The program runs in parallel, using message passing with the MPI library
> (http://www-unix.mcs.anl.gov/mpi/). I have wrapped some basic MPI
> functions (MPI_Comm_rank, MPI_Comm_size, MPI_Recv and MPI_Send), and it
> seems to work fine, but I would like to ask whether anyone knows of an
> existing port of MPI to Guile.

I've never seen one.  I don't think it exists.  However, I would welcome one!

> When running in parallel I had to compile my own Guile interpreter,
> since I need all processes to read and interpret the script. This
> prevents me from using the interpreter interactively (when running in
> parallel), because MPI does not broadcast the standard input to the
> other processes. I think this could be fixed by modifying the REPL:
> when running interactively in parallel, the REPL on the master node
> should be in charge of broadcasting the standard input to the nodes.
> Any ideas?

Maybe you should have a look at how to implement custom port objects.
There should be some documentation in the reference manual, and
guile-readline is one example. The idea would then be that you
interact with the master process through a custom standard input port.
At every newline, it sends the data to a matching custom standard
input port on the slaves.

Once you have figured out how to make the ports, you can simply
redirect input with:

(set-current-input-port MY-PORT)
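
(A crude interim alternative, sketched here, skips the custom port
entirely and just evaluates each broadcast line on every rank; it
reuses the hypothetical read_line_everywhere from the first message
and assumes one complete expression per line, which a real REPL would
not:)

#include <libguile.h>

static void
parallel_repl (void)
{
  char line[4096];
  while (read_line_everywhere (line, sizeof line) > 0)
    scm_c_eval_string (line);   /* evaluate the line on every rank */
}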

> Also, I'm not very happy with the way I do the MPI initialization. I
> had to write my own Guile interpreter because MPI needs access to the
> argc, argv arguments of main(), so that MPI initialization is _always_
> done. I would rather have a Scheme `mpi-init' function called by the
> user.

My view on this situation is that since the installed Guile
interpreter currently can't run any custom code before parsing its
arguments and since MPI *has to* parse the arguments before that,
there's no other choice but to write your own interpreter like you've
done.

> But, on the other hand, I can't do the finalization in
> `inner_main()' because I get a lot of `net_recv' errors _before_
> reaching MPI_Finalize().

But this should be due to some synchronization problem in your
program.  Maybe not everybody has received all data before the senders
begin to finalize?  Have you tried putting a call to MPI_Barrier right
before MPI_Finalize?
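
That is, something like:

  MPI_Barrier (MPI_COMM_WORLD);  /* every rank waits until all arrive */
  MPI_Finalize ();

or the equivalent barrier call at the Scheme level, inside
`mpi-finalize' itself.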

Best regards,
Mikael D.



* Re: object serialization
@ 2005-02-06 13:14 ` Mario Storti
From: Mario Storti @ 2005-02-06 13:14 UTC


 --- Mikael Djurfeldt <mdjurfeldt@gmail.com> wrote:
> On Sat, 5 Feb 2005 11:40:36 -0600 (CST), Mario Storti
> <mariostorti@yahoo.com> wrote:
> > I'm writing a parallel Finite Element
> > (http://www.cimec.org.ar/petscfem) program and I'm experimenting
> > with extending it with Guile.
> 
> That's very exciting!  I've written a neuron simulator which is
> parallelized with MPI and extended with Guile.  (It's not publicly
> available right now.  It's GPL'ed, all right, but not in a releasable
> state and doesn't have enough docs.)

Sounds nice...

> > The program runs in parallel, using message passing with the MPI
> > library (http://www-unix.mcs.anl.gov/mpi/). I have wrapped some
> > basic MPI functions (MPI_Comm_rank, MPI_Comm_size, MPI_Recv and
> > MPI_Send), and it seems to work fine, but I would like to ask
> > whether anyone knows of an existing port of MPI to Guile.
> 
> I've never seen one.  I don't think it exists.  However, I would
> welcome one!
> 
> > When running in parallel I had to compile my own Guile interpreter,
> > since I need all processes to read and interpret the script. This
> > prevents me from using the interpreter interactively (when running
> > in parallel), because MPI does not broadcast the standard input to
> > the other processes. I think this could be fixed by modifying the
> > REPL: when running interactively in parallel, the REPL on the
> > master node should be in charge of broadcasting the standard input
> > to the nodes. Any ideas?
> 
> Maybe you should have a look at how to implement custom port objects.
> There should be some documentation in the reference manual, and
> guile-readline is one example. The idea would then be that you
> interact with the master process through a custom standard input
> port. At every newline, it sends the data to a matching custom
> standard input port on the slaves.
> 
> Once you have figured out how to make the ports, you can simply
> redirect input with:
> 
> (set-current-input-port MY-PORT)

I will investigate this.

> > Also, I'm not very happy with the way I do the MPI initialization.
> > I had to write my own Guile interpreter because MPI needs access to
> > the argc, argv arguments of main(), so that MPI initialization is
> > _always_ done. I would rather have a Scheme `mpi-init' function
> > called by the user.
> 
> My view on this situation is that since the installed Guile
> interpreter currently can't run any custom code before parsing its
> arguments and since MPI *has to* parse the arguments before that,
> there's no other choice but to write your own interpreter like you've
> done.
> 
> > But, on the other hand, I can't do the finalization in
> > `inner_main()' because I get a lot of `net_recv' errors _before_
> > reaching MPI_Finalize().
> 
> But this should be due to some synchronization problem in your
> program.  Maybe not everybody has received all data before the
> senders begin to finalize?  Have you tried putting a call to
> MPI_Barrier right before MPI_Finalize?

Hmmmmm... I will check... That's possible. 

If I write something about MPI wrapping, I will keep you informed.

Regards, Mario

=====
-------------------------
Mario Alberto Storti
Centro Internacional de Metodos Computacionales
  en Ingenieria - CIMEC (INTEC/CONICET-UNL)
INTEC, Guemes 3450 - 3000 Santa Fe, Argentina
Tel/Fax: +54-342-4511594, cel: +54-342-156144983
e-mail: mstorti@intec.unl.edu.ar
http://www.cimec.org.ar/mstorti, http://www.cimec.org.ar
-------------------------



