
starpu-devel - Re: [Starpu-devel] Processor grid and data distribution

  • From: Samuel Thibault <samuel.thibault@inria.fr>
  • To: Hatem Ltaief <hatem.ltaief@kaust.edu.sa>
  • Cc: "starpu-devel@lists.gforge.inria.fr" <starpu-devel@lists.gforge.inria.fr>, Amani Alonazi <amani.alonazi@kaust.edu.sa>
  • Subject: Re: [Starpu-devel] Processor grid and data distribution
  • Date: Thu, 18 Oct 2018 16:24:33 +0200
  • List-archive: <http://lists.gforge.inria.fr/pipermail/starpu-devel/>
  • List-id: "Developers list. For discussion of new features, code changes, etc." <starpu-devel.lists.gforge.inria.fr>
  • Organization: I am not organized

Hello,

Hatem Ltaief, on Thu, 18 Oct 2018 09:43:08 +0000, wrote:
> Is there a way to specify an arbitrary processor grid and data distribution
> in an MPI-based application? How?
> In fact, are the data structure and the way it is distributed via processor
> grid completely uncoupled?

I'm not sure what you mean by processor grid.

The way it is usually done is to set, for each piece of data, the MPI
node it resides on, with starpu_mpi_data_register or
starpu_mpi_data_set_rank. By default, each task gets executed on the
node which owns the data that the task writes to. One can use
starpu_mpi_node_selection_register_policy to register another selection
policy, or use STARPU_EXECUTE_ON_NODE to explicitly specify the node
which shall execute the task. The data location and the task execution
location are thus completely uncoupled.
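
For instance, here is a minimal sketch of a 2D block-cyclic
distribution over a P x Q processor grid, with one task per tile. The
names are assumptions for illustration: an NT x NT tiled matrix whose
tile handles were already registered (e.g. with
starpu_matrix_data_register), and a placeholder codelet update_cl. Note
that with starpu_mpi_task_insert, every MPI node submits the whole task
graph:

#include <starpu.h>
#include <starpu_mpi.h>

#define P  2   /* rows of the processor grid */
#define Q  2   /* columns of the processor grid */
#define NT 8   /* tile rows/columns of the matrix */

/* Owner of tile (i,j) under a 2D block-cyclic distribution. */
static int tile_owner(int i, int j)
{
    return (i % P) * Q + (j % Q);
}

extern struct starpu_codelet update_cl; /* placeholder codelet */

static void distribute_and_submit(starpu_data_handle_t tiles[NT][NT])
{
    int i, j;

    /* Tell StarPU-MPI where each tile resides; the tag must be
     * unique per handle. */
    for (i = 0; i < NT; i++)
        for (j = 0; j < NT; j++)
            starpu_mpi_data_register(tiles[i][j], i*NT + j,
                                     tile_owner(i, j));

    /* By default each task runs on the owner of the data it writes
     * to, here tile_owner(i,j); transfers of any other data the task
     * reads are handled automatically. */
    for (i = 0; i < NT; i++)
        for (j = 0; j < NT; j++)
            starpu_mpi_task_insert(MPI_COMM_WORLD, &update_cl,
                                   STARPU_RW, tiles[i][j],
                                   0);
}

To override the default placement for a given task, one would add e.g.
"STARPU_EXECUTE_ON_NODE, 0," before the terminating 0 of the
starpu_mpi_task_insert call: the task then runs on node 0, while the
tile still resides where it was registered, which is what makes the two
uncoupled.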

Samuel



