

Re: [Starpu-devel] A question regarding the MPI cache and starpu_mpi_cache_flush


  • From: Samuel Thibault <samuel.thibault@inria.fr>
  • To: Mirko Myllykoski <mirkom@cs.umu.se>
  • Cc: starpu-devel@lists.gforge.inria.fr
  • Subject: Re: [Starpu-devel] A question regarding the MPI cache and starpu_mpi_cache_flush
  • Date: Tue, 14 Feb 2017 10:52:52 +0100
  • List-archive: <http://lists.gforge.inria.fr/pipermail/starpu-devel/>
  • List-id: "Developers list. For discussion of new features, code changes, etc." <starpu-devel.lists.gforge.inria.fr>
  • Organization: I am not organized

Hello,

Mirko Myllykoski, on Tue 14 Feb 2017 09:52:34 +0100, wrote:
> However, the
> documentation states that the subroutine has to be called by all MPI nodes.
> If I understand everything correctly, this means that all MPI nodes should
> have a corresponding data handle (either a placeholder, a copy or the actual
> block).

I have just fixed the documentation: it only needs to be called by the
MPI nodes which know about the data (more precisely, only by the MPI
nodes which have actually done anything with the data so far, but that
is usually more difficult to determine).
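
For illustration, here is a minimal sketch of the intended usage,
assuming one node owns a vector and the other nodes only register a
placeholder; the tag value, vector size, and the elided task-submission
code are placeholders, not taken from your application:

#include <stdint.h>
#include <mpi.h>
#include <starpu.h>
#include <starpu_mpi.h>

#define N 1024

int main(int argc, char **argv)
{
	double vec[N];
	starpu_data_handle_t handle;
	int rank;

	starpu_init(NULL);
	starpu_mpi_init(&argc, &argv, 1);
	starpu_mpi_comm_rank(MPI_COMM_WORLD, &rank);

	if (rank == 0)
		/* Node 0 owns the actual block. */
		starpu_vector_data_register(&handle, STARPU_MAIN_RAM,
					    (uintptr_t)vec, N, sizeof(vec[0]));
	else
		/* The other nodes only register a placeholder. */
		starpu_vector_data_register(&handle, -1, 0, N, sizeof(vec[0]));

	starpu_mpi_data_register(handle, 42 /* tag */, 0 /* owner */);

	/* ... starpu_mpi_task_insert() calls involving `handle` ... */

	starpu_task_wait_for_all();

	/* The nodes which know about `handle` drop their cached copy. */
	starpu_mpi_cache_flush(MPI_COMM_WORLD, handle);

	starpu_data_unregister(handle);
	starpu_mpi_shutdown();
	starpu_shutdown();
	return 0;
}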

> Related question: Is there a way to obtain a pointer to the data that is
> stored inside a temporary data handle (home_node -1)?

Yes, you can use starpu_data_acquire() then starpu_data_get_local_ptr()
then starpu_data_release().
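
For instance, a minimal sketch (the helper name, element type and count
are assumptions; the handle is supposed to have been written by some
task already):

#include <stdio.h>
#include <starpu.h>

/* Hypothetical helper: read back the contents of a temporary handle
 * (registered with home_node -1), assuming it holds n doubles. */
static void dump_temporary(starpu_data_handle_t handle, unsigned n)
{
	unsigned i;

	/* Block until the data is available in main memory. */
	if (starpu_data_acquire(handle, STARPU_R) != 0)
		return;

	double *ptr = (double *) starpu_data_get_local_ptr(handle);
	for (i = 0; i < n; i++)
		printf("%g\n", ptr[i]);

	/* Let StarPU tasks access the data again. */
	starpu_data_release(handle);
}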

Samuel



