From: Alexander Shirokov
Newsgroups: gmane.lisp.guile.user
Subject: Re: custom guile stdin port for MPI users
Date: Tue, 9 Jan 2007 00:24:38 -0500 (EST)
To: Mario Storti
Cc: guile-user@gnu.org

Hi Mario,

thanks for your reply! It looks like I will probably have to try to do
it on my own. I would be interested to see an example of how you
wrapped MPI_Bcast, MPI_Send and MPI_Recv. Would it be difficult for you
to show me one? Having an example would be very helpful, since I am a
beginner in Guile, and I will let you know how it goes.

Thank you,
Alex
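In case it helps to show where I am starting from, here is my own rough
guess at what such wrappers might look like, pieced together from the
Guile 1.8 reference manual. It is untested; the names mpi-send,
mpi-recv and mpi-bcast are only my guesses at what you called your
procedures, and it handles nothing but strings, with fixed-size buffers
and no error checking. Please correct me where I am wrong.

/* mpi_prims.c - my untested sketch of wrapping a few MPI calls
   as Guile primitives (string messages only, no error handling). */
#include <libguile.h>
#include <mpi.h>
#include <stdlib.h>
#include <string.h>

/* (mpi-send STR DEST TAG) - send a string to rank DEST */
static SCM
guile_mpi_send (SCM str, SCM dest, SCM tag)
{
  char *buf = scm_to_locale_string (str);   /* malloc'ed copy */
  MPI_Send (buf, strlen (buf) + 1, MPI_CHAR,
            scm_to_int (dest), scm_to_int (tag), MPI_COMM_WORLD);
  free (buf);
  return SCM_UNSPECIFIED;
}

/* (mpi-recv SOURCE TAG) - receive a string from rank SOURCE */
static SCM
guile_mpi_recv (SCM source, SCM tag)
{
  char buf[1024];
  MPI_Status status;
  MPI_Recv (buf, sizeof (buf), MPI_CHAR,
            scm_to_int (source), scm_to_int (tag),
            MPI_COMM_WORLD, &status);
  return scm_from_locale_string (buf);
}

/* (mpi-bcast STR ROOT) - every rank returns the root's string */
static SCM
guile_mpi_bcast (SCM str, SCM root)
{
  char buf[1024] = "";
  int r = scm_to_int (root), rank;
  MPI_Comm_rank (MPI_COMM_WORLD, &rank);
  if (rank == r)
    {
      char *s = scm_to_locale_string (str);
      strncpy (buf, s, sizeof (buf) - 1);
      free (s);
    }
  MPI_Bcast (buf, sizeof (buf), MPI_CHAR, r, MPI_COMM_WORLD);
  return scm_from_locale_string (buf);
}

void
init_mpi_primitives (void)
{
  scm_c_define_gsubr ("mpi-send", 3, 0, 0, guile_mpi_send);
  scm_c_define_gsubr ("mpi-recv", 2, 0, 0, guile_mpi_recv);
  scm_c_define_gsubr ("mpi-bcast", 2, 0, 0, guile_mpi_bcast);
}

My hope is that after calling init_mpi_primitives() before scm_shell()
I could type, say, (mpi-bcast "hello" 0) at the prompt. Is that close
to what you did?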
On Fri, 1 Dec 2006, Mario Storti wrote:

> >>>>>> On Tue, 28 Nov 2006 13:52:30 -0500 (EST),
> >>>>>> Alexander Shirokov said:
>
>> I would like to embed the Guile interpreter into my application - a
>> parallel program using MPI (Message Passing Interface) operating on
>> massive data and computations. I would like that program to be able
>> to process standard input, so that I can have a live interactive
>> session with my application. Below I describe the problem I
>> encountered.
>
> ...
>
>> With Guile, however, I am limited to using
>>
>>     scm_shell(argc, argv);
>>
>> which is supposed to do the stdin processing itself - I hoped it
>> would do so even in a parallel environment. I inserted
>>
>>     MPI_Init(&argc, &argv);
>>     MPI_Finalize();
>>
>> into the tortoise.c program of the Guile tutorial (a complete copy
>> of the program is attached) and compiled it with 'mpicc', but I do
>> not get the expected behavior. For example, when I run on 4
>> processes:
>>
>>     mpirun -np 4 ./tortoise2
>>     guile> (tortoise-move 100)
>>
>> the next guile prompt does not appear after the entered command has
>> completed.
>>
>> I looked into the Guile archives using the search term "MPI" and
>> found that another person had the same problem a year ago. That
>> user received a very informative message:
>> http://lists.gnu.org/archive/html/guile-user/2005-02/msg00018.html
>> but unfortunately the thread stops there. I did some follow-up and
>> found good documentation on setting custom ports on stdin at
>> http://www.gnu.org/software/guile/docs/docs-1.8/guile-ref/Port-Types.html#Port-Types
>> but my expertise in Scheme and in setting up custom ports ran out
>> there.
>>
>> There are many people using MPI; I think a solution would be greatly
>> appreciated by a sizable community of MPI users.
>
> One issue in wrapping MPI for Guile is calling `MPI_Init()' before
> entering Guile. This is done in the code you sent. With that code you
> can use MPI non-interactively (I guess). For instance, try to write a
> small script and then run it in batch mode with MPI:
>
> $ mpirun -np xx tortoise -s myscript.scm
>
> That should work. (I use `scm_boot_guile' instead of `gh_enter'. I
> think that the `gh_...' stuff is deprecated, but I don't know if this
> is relevant to the discussion.) Note that with a small effort you
> have something that is not completely useless: you can use it
>
> * interactively in sequential mode, and
> * in parallel (but not interactively).
>
> I have made some experiments along these lines, wrapping the simplest
> MPI functions (mpi-send, mpi-recv, mpi-bcast, ...) and some basic
> stuff from PETSc.
>
> Now, if you want to use it in parallel and in an interactive
> environment, then I think the solution is to replace the REPL
> evaluator, so that each time it finds a sexp (or whatever) to be
> evaluated, it sends the expression to the nodes with `MPI_Bcast'. I
> know of something similar being done with Python, and I think it is
> much the same idea:
>
> http://www.cimec.org.ar/python/
> http://sourceforge.net/projects/mpi4py/
>
> I think that this broadcasting of the input from the master to the
> nodes is something that you cannot avoid whenever you want to wrap
> MPI for any scripting language.
>
> Mario
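P.S. Re-reading your suggestion about replacing the REPL evaluator,
here is how I currently understand it - again only my own untested
sketch: rank 0 reads a line, broadcasts the text to every rank with
MPI_Bcast, and every rank evaluates it. The prompt, the 4096-byte
buffer, the one-expression-per-line assumption and the "(quit)"
convention are all my inventions, and there is no error handling.

/* repl_bcast.c - untested sketch of a broadcasting REPL, as I
   understand your suggestion.  Rank 0 reads, everybody evaluates. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <libguile.h>
#include <mpi.h>

static void
inner_main (void *data, int argc, char **argv)
{
  char line[4096];
  int rank;
  SCM result;

  MPI_Comm_rank (MPI_COMM_WORLD, &rank);
  for (;;)
    {
      if (rank == 0)
        {
          printf ("guile-mpi> ");
          fflush (stdout);
          if (fgets (line, sizeof (line), stdin) == NULL)
            strcpy (line, "(quit)");    /* EOF: tell everybody to stop */
        }
      /* rank 0's expression text reaches every rank */
      MPI_Bcast (line, sizeof (line), MPI_CHAR, 0, MPI_COMM_WORLD);
      if (strncmp (line, "(quit)", 6) == 0)
        break;
      if (line[0] == '\0' || line[0] == '\n')
        continue;                       /* skip empty input */
      result = scm_c_eval_string (line);   /* evaluated on every rank */
      if (rank == 0)
        {
          scm_display (result, scm_current_output_port ());
          scm_newline (scm_current_output_port ());
        }
    }
  MPI_Finalize ();
  exit (0);
}

int
main (int argc, char **argv)
{
  MPI_Init (&argc, &argv);
  scm_boot_guile (argc, argv, inner_main, NULL);   /* does not return */
  return 0;
}

If every rank really does evaluate the same expression, then I suppose
wrappers like mpi-send and mpi-recv could be called from inside the
evaluated code. Does this match what you had in mind?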