
[Starpu-devel] Some interrogations about MPI and StarPU


  • From: Xavier Lacoste <xavier.lacoste@inria.fr>
  • To: starpu-devel@lists.gforge.inria.fr
  • Subject: [Starpu-devel] Some interrogations about MPI and StarPU
  • Date: Wed, 29 Oct 2014 14:18:51 +0100
  • List-archive: <http://lists.gforge.inria.fr/pipermail/starpu-devel/>
  • List-id: "Developers list. For discussion of new features, code changes, etc." <starpu-devel.lists.gforge.inria.fr>

Hello,

I have a few questions about MPI handling in StarPU:

I forgot to call starpu_mpi_cache_flush_all_data() at the end of my algorithm, before the wait_for_all().
I suspected this to be responsible for the memory overhead (one test case runs out of memory with StarPU while it works fine with the native PaStiX scheduler).
I reran my tests with the flush and got results similar to those without it, sometimes better, sometimes worse, but always close. And the failing run still fails.
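For reference, the end of my algorithm now looks roughly like this (just a sketch; I use MPI_COMM_WORLD here for illustration):

    /* ... all task insertions ... */

    /* flush the StarPU-MPI communication cache for every registered data */
    starpu_mpi_cache_flush_all_data(MPI_COMM_WORLD);

    /* wait for all submitted tasks to complete */
    starpu_task_wait_for_all();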

So my first question is:
- Does starpu_wait_for_all() also post a starpu_mpi_cache_flush_all_data()? That could explain the similar performance.

- Would it be better to flush each piece of data as soon as I know it can be flushed, or is the final flush_all_data() equivalent? (See the sketch after these questions for what I mean by a per-data flush.)

- When I insert a task involving a data reception, will StarPU allocate the receiving buffer and post the MPI_Irecv right away, or will it probe until the data has arrived before allocating and receiving it?
If it is the first case, that could explain the memory overhead, as I insert all my receiving tasks at the beginning of the algorithm.
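
To illustrate the second question, here is what I have in mind for the per-data flush (sketch only; cl, remote_handle and local_handle are made-up names, and I assume tasks are inserted with starpu_mpi_insert_task()):

    /* last task on this node that reads the remotely owned data */
    starpu_mpi_insert_task(MPI_COMM_WORLD, &cl,
                           STARPU_R,  remote_handle,
                           STARPU_RW, local_handle,
                           0);

    /* tell StarPU-MPI it can drop its cached copy of remote_handle now,
       instead of waiting for the final flush_all_data() */
    starpu_mpi_cache_flush(MPI_COMM_WORLD, remote_handle);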

Regards,

XL.

----------------------------------------
Xavier Lacoste
INRIA Bordeaux Sud-Ouest
200, avenue de la Vieille Tour
33405 Talence Cedex
Tel: +33 (0)5 24 57 40 69








Attachment: signature.asc
Description: Message signed with OpenPGP using GPGMail



