
Re: [Starpu-devel] MPI scaling


  • From: Samuel Thibault <samuel.thibault@ens-lyon.org>
  • To: Xavier Lacoste <xavier.lacoste@inria.fr>
  • Cc: starpu-devel@lists.gforge.inria.fr
  • Subject: Re: [Starpu-devel] MPI scaling
  • Date: Tue, 24 Jun 2014 14:17:11 +0200
  • List-archive: <http://lists.gforge.inria.fr/pipermail/starpu-devel/>
  • List-id: "Developers list. For discussion of new features, code changes, etc." <starpu-devel.lists.gforge.inria.fr>

Hello,

Xavier Lacoste, on Mon 23 Jun 2014 09:19:19 +0200, wrote:
> I'm using the eager scheduler as I have no GPUs

Maybe you'd want to give the lws scheduler a try.
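
For instance, a minimal, untested sketch of selecting the policy at
initialization time (the same thing can be done with the STARPU_SCHED
environment variable; adapt the policy name to what you want to try):

    #include <starpu.h>

    int main(void)
    {
        struct starpu_conf conf;
        starpu_conf_init(&conf);          /* fill in the default configuration */
        conf.sched_policy_name = "lws";   /* locality work stealing; "prio", "eager", ... also possible */
        if (starpu_init(&conf) != 0)
            return 1;

        /* ... register data and submit tasks as usual ... */

        starpu_shutdown();
        return 0;
    }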

> I attach the 4x8 execution trace. I would like to know if there is a way
> to see whether a communication is triggered as soon as the last local
> update has been performed.

We can indeed see big idle areas. Is "local update" a particular kind
of task? You could make vite show it in a different color and thus see
where it is. Perhaps the problem here is rather that these tasks are
scheduled late; giving them a priority and using the prio scheduler
would help.
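
Something along these lines, as an untested sketch (update_cl and handle
are placeholders for your "local update" codelet and data handle;
starpu_insert_task is the equivalent call in older releases):

    #include <starpu.h>

    /* Submit the "local update" task with the highest priority so that the
     * prio scheduler runs it (and thus triggers the MPI send) as early as
     * possible. */
    void submit_update(struct starpu_codelet *update_cl, starpu_data_handle_t handle)
    {
        starpu_task_insert(update_cl,
                           STARPU_RW, handle,
                           STARPU_PRIORITY, STARPU_MAX_PRIO,
                           0);
    }

and then run with STARPU_SCHED=prio so that the priorities are actually
taken into account.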

> Have you got any advice on how to improve MPI scaling?

Well, that's the subject of Marc's thesis :)
Here it seems to me to be more a problem of task ordering.

> In my case here, using all 8 cores of a node is worse than using 7
> threads per node... Have you already experienced that?

That may not be surprising indeed, since this lets the MPI thread have
its own core.
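
If you want to make that explicit on the StarPU side, you can limit the
number of CPU workers so that one core per node stays free for the MPI
progression thread (untested sketch; the same effect can be obtained with
STARPU_NCPU=7 in the environment):

    #include <starpu.h>

    /* Sketch: keep 7 CPU workers on an 8-core node, leaving one core for
     * the MPI thread; adjust to the actual core count of your machines. */
    int init_starpu_with_spare_core(void)
    {
        struct starpu_conf conf;
        starpu_conf_init(&conf);
        conf.ncpus = 7;
        return starpu_init(&conf);
    }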

Samuel



