[Starpu-devel] A question regarding the MPI cache and starpu_mpi_cache_flush


  • From: Mirko Myllykoski <mirkom@cs.umu.se>
  • To: starpu-devel@lists.gforge.inria.fr
  • Subject: [Starpu-devel] A question regarding the MPI cache and starpu_mpi_cache_flush
  • Date: Tue, 14 Feb 2017 09:52:34 +0100
  • List-archive: <http://lists.gforge.inria.fr/pipermail/starpu-devel/>
  • List-id: "Developers list. For discussion of new features, code changes, etc." <starpu-devel.lists.gforge.inria.fr>

Hi,

I am developing numerical software that deals with large dense matrices (on the order of 50 000 x 50 000). The matrices are divided into smaller blocks (for example 128 x 128), which are then registered with StarPU. I am trying to reduce the overhead by making each MPI node register only those blocks that it actually needs. Some blocks are needed by multiple MPI nodes, which is why I implemented a separate subsystem that automatically registers a "placeholder" data handle and calls the starpu_mpi_data_register() subroutine whenever an MPI node that does not own a block requests a data handle for it.
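For reference, the registration path in this subsystem looks roughly like the following (a simplified sketch; block_owner(), block_tag(), block_ptr, ld, i and j are just placeholders for my own helpers and variables):

    /* Simplified sketch of the registration subsystem; block_owner(),
     * block_tag(), block_ptr, ld, i and j stand in for my own code. */
    starpu_data_handle_t handle;
    int my_rank, owner = block_owner(i, j);
    starpu_mpi_comm_rank(MPI_COMM_WORLD, &my_rank);

    if (my_rank == owner)
        /* the owning node registers the actual block */
        starpu_matrix_data_register(&handle, STARPU_MAIN_RAM,
                                    (uintptr_t) block_ptr, ld,
                                    128, 128, sizeof(double));
    else
        /* a non-owning node registers a placeholder with no local buffer */
        starpu_matrix_data_register(&handle, -1, (uintptr_t) NULL,
                                    128, 128, 128, sizeof(double));

    starpu_mpi_data_register(handle, block_tag(i, j), owner);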

In a certain situation, I would like to flush a data handle from the MPI cache using the starpu_mpi_cache_flush() subroutine. However, the documentation states that the subroutine has to be called by all MPI nodes. If I understand correctly, this means that every MPI node must have a corresponding data handle (either a placeholder, a cached copy, or the actual block).
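In other words, my reading of the documentation is that every node would have to execute something like this (a sketch, assuming 'handle' is valid on every node):

    /* my understanding of the documented requirement: every node holds a
     * handle for the block (placeholder, cached copy, or the owned block)
     * and takes part in the flush */
    starpu_mpi_cache_flush(MPI_COMM_WORLD, handle);

    /* with my subsystem, however, some nodes never registered the block,
     * so they have no handle to pass here */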

Is this correct? Is there any way to make this work with my block management subsystem?

Related question: is there a way to obtain a pointer to the data stored inside a temporary data handle (registered with home_node == -1)?
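To make the question concrete, I would like to do something along these lines (just a sketch; I do not know whether this is valid for a handle registered with home_node == -1):

    /* acquire the handle in main memory, then read a pointer to its data */
    starpu_data_acquire(handle, STARPU_R);
    double *ptr = (double *) starpu_matrix_get_local_ptr(handle);
    /* ... use ptr ... */
    starpu_data_release(handle);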

Best Regards,
Mirko Myllykoski



