
starpu-devel - Re: [Starpu-devel] [daxpy with StarPU-MPI]

  • From: Maxim Abalenkov <maxim.abalenkov@gmail.com>
  • To: Samuel Thibault <samuel.thibault@inria.fr>
  • Cc: starpu-devel@lists.gforge.inria.fr
  • Subject: Re: [Starpu-devel] [daxpy with StarPU-MPI]
  • Date: Wed, 17 Oct 2018 12:54:08 +0100
  • List-archive: <http://lists.gforge.inria.fr/pipermail/starpu-devel/>
  • List-id: "Developers list. For discussion of new features, code changes, etc." <starpu-devel.lists.gforge.inria.fr>

Dear Samuel et al.,

I hope all is well with you. Would you please be so kind as to check the logic of my StarPU-MPI “daxpy” code? I am following the Handbook code snippet on StarPU-MPI. I use two vectors x and y of length N=10, split into NT=5 segments of nb=2 elements each. When I run on a single processor with mpiexec -np 1 ./tst_daxpy_mpi, the first three segments are calculated correctly, while the last two remain unchanged; it seems there is a synchronisation issue. When I launch the code on two processors, it crashes with a segmentation fault. Thank you and have a good day ahead!
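
For context, the overall structure I am aiming for is roughly the sketch below. It is illustrative only: the codelet name daxpy_cpu, the round-robin ownership and the tag numbering are my shorthand here and may differ from the attached files.

/* Illustrative sketch of a block-wise daxpy (y := alpha*x + y) with
   starpu_mpi_task_insert(); names, ownership and tags are assumptions. */
#include <starpu.h>
#include <starpu_mpi.h>

#define N  10
#define NT 5

static void daxpy_cpu(void *buffers[], void *cl_arg)
{
  double alpha;
  starpu_codelet_unpack_args(cl_arg, &alpha);
  unsigned nb = STARPU_VECTOR_GET_NX(buffers[0]);
  double *x = (double *) STARPU_VECTOR_GET_PTR(buffers[0]);
  double *y = (double *) STARPU_VECTOR_GET_PTR(buffers[1]);
  for (unsigned i = 0; i < nb; i++)
    y[i] += alpha * x[i];
}

static struct starpu_codelet daxpy_cl =
{
  .cpu_funcs = { daxpy_cpu },
  .nbuffers  = 2,
  .modes     = { STARPU_R, STARPU_RW },
};

int main(int argc, char **argv)
{
  int rank, size;
  double alpha = 2.0;
  unsigned nb = N/NT;
  double x[N], y[N];
  starpu_data_handle_t hx[NT], hy[NT];

  starpu_init(NULL);
  starpu_mpi_init(&argc, &argv, 1);
  starpu_mpi_comm_rank(MPI_COMM_WORLD, &rank);
  starpu_mpi_comm_size(MPI_COMM_WORLD, &size);

  for (unsigned k = 0; k < NT; k++)
  {
    int owner = k % size;                 /* round-robin distribution */
    if (rank == owner)
    {
      /* the owner holds real memory and initialises its segments */
      for (unsigned i = 0; i < nb; i++) { x[k*nb+i] = 1.0; y[k*nb+i] = 2.0; }
      starpu_vector_data_register(&hx[k], STARPU_MAIN_RAM,
                                  (uintptr_t) &x[k*nb], nb, sizeof(double));
      starpu_vector_data_register(&hy[k], STARPU_MAIN_RAM,
                                  (uintptr_t) &y[k*nb], nb, sizeof(double));
    }
    else
    {
      /* other ranks register placeholders without memory */
      starpu_vector_data_register(&hx[k], -1, (uintptr_t) NULL, nb, sizeof(double));
      starpu_vector_data_register(&hy[k], -1, (uintptr_t) NULL, nb, sizeof(double));
    }
    starpu_mpi_data_register(hx[k], 2*k,   owner);   /* distinct MPI tags */
    starpu_mpi_data_register(hy[k], 2*k+1, owner);
  }

  for (unsigned k = 0; k < NT; k++)
    starpu_mpi_task_insert(MPI_COMM_WORLD, &daxpy_cl,
                           STARPU_VALUE, &alpha, sizeof(alpha),
                           STARPU_R,  hx[k],
                           STARPU_RW, hy[k],
                           0);

  starpu_task_wait_for_all();

  for (unsigned k = 0; k < NT; k++)
  {
    starpu_data_unregister(hx[k]);
    starpu_data_unregister(hy[k]);
  }
  starpu_mpi_shutdown();
  starpu_shutdown();
  return 0;
}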

Best wishes,
Maxim

Attachment: daxpy_mpi.c
Description: Binary data

Attachment: tst_daxpy_mpi.c
Description: Binary data


Maxim Abalenkov \\ maxim.abalenkov@gmail.com
+44 7 486 486 505 \\ http://mabalenk.gitlab.io

On 16 Oct 2018, at 17:22, Maxim Abalenkov <maxim.abalenkov@gmail.com> wrote:

Dear Samuel et al.,

Thank you very much for your reply. I have another small question. The “scatter/gather” snippet in the Handbook uses “starpu_task_insert” instead of “starpu_mpi_task_insert” (http://starpu.gforge.inria.fr/doc/html/MPISupport.html#MPICollective). Which is the correct routine to use? I would expect it to be “starpu_mpi_task_insert”. So far, my code crashes with the plain “starpu_task_insert”.
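
Concretely, the pattern I am trying to make work is sketched below; hx, hy, NT, ROOT, alpha and daxpy_cl come from my own code, and the Handbook places a plain starpu_task_insert where I have put the MPI variant.

/* Sketch of the scatter / compute / gather pattern I am attempting. */
starpu_mpi_scatter_detached(hx, NT, ROOT, MPI_COMM_WORLD, NULL, NULL, NULL, NULL);
starpu_mpi_scatter_detached(hy, NT, ROOT, MPI_COMM_WORLD, NULL, NULL, NULL, NULL);

for (unsigned k = 0; k < NT; k++)
{
  /* The Handbook snippet calls starpu_task_insert() at this point;
     I expected starpu_mpi_task_insert() to be needed in the MPI case. */
  starpu_mpi_task_insert(MPI_COMM_WORLD, &daxpy_cl,
                         STARPU_VALUE, &alpha, sizeof(alpha),
                         STARPU_R,  hx[k],
                         STARPU_RW, hy[k],
                         0);
}

starpu_mpi_gather_detached(hy, NT, ROOT, MPI_COMM_WORLD, NULL, NULL, NULL, NULL);
starpu_task_wait_for_all();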

Best wishes,
Maxim

Maxim Abalenkov \\ maxim.abalenkov@gmail.com
+44 7 486 486 505 \\ http://mabalenk.gitlab.io

On 16 Oct 2018, at 16:32, Samuel Thibault <samuel.thibault@inria.fr> wrote:

Hello,

Maxim Abalenkov, on Tue, 16 Oct 2018 16:25:18 +0100, wrote:
I came up with an alternative:

starpu_mpi_scatter_detached(hx, NT, ROOT, MPI_COMM_WORLD, 0, 0, 0, 0);

But I am not sure it is correct.

Indeed, the four null parameters were missing from the documentation.
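
For reference, the prototype is roughly the following (the last four arguments are optional callbacks, triggered on the send and receive sides, together with their arguments; passing NULL for all four is fine):

int starpu_mpi_scatter_detached(starpu_data_handle_t *data_handles, int count,
                                int root, MPI_Comm comm,
                                void (*scallback)(void *), void *sarg,
                                void (*rcallback)(void *), void *rarg);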

b) Is it possible to use vector filters to automatically partition the data in
StarPU-MPI, similarly to the normal StarPU?

It seems we haven't tested that case, but it should work, yes.

Below is a code snippet I use in normal StarPU, but I am not certain it can be
adapted to StarPU-MPI, because of the need to explicitly register the vector
segments with MPI.

I guess you just need, after calling starpu_data_partition, to register
each piece of subdata with MPI via starpu_mpi_data_register, with a different
tag for each piece.
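
Something along these lines, say (untested sketch; the tag base and the round-robin owner are arbitrary choices, and the tags must not collide with those of other handles):

/* Untested sketch: partition an already-registered vector handle hx into
   NT pieces and give each piece its own MPI tag and owner. */
struct starpu_data_filter f =
{
  .filter_func = starpu_vector_filter_block,
  .nchildren   = NT,
};
starpu_data_partition(hx, &f);

for (unsigned k = 0; k < NT; k++)
{
  starpu_data_handle_t sub = starpu_data_get_sub_data(hx, 1, k);
  starpu_mpi_data_register(sub, 100 + k, k % size);  /* tag, owner */
}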

c) Finally, what is the recommended way of initialising the vector data in the
"scatter/gather" example? So far, I am initialising each vector segment on the
root processor. Is there a better way, e.g. initialising the entire vectors
before splitting them into segments and registering them with StarPU and MPI?

You can initialize the whole vector in just one task, yes; it will be more
efficient, but less parallel. You can also simply let each MPI node initialize
its own piece, which will allow for even more parallelism.
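
With a single partitioned handle, the one-task variant could look roughly like this (untested sketch; vec_init_cpu is just an illustrative name):

/* Untested sketch: fill the whole vector with a single task, before
   partitioning it and registering the pieces with MPI as above. */
static void vec_init_cpu(void *buffers[], void *cl_arg)
{
  (void) cl_arg;
  unsigned n = STARPU_VECTOR_GET_NX(buffers[0]);
  double *v = (double *) STARPU_VECTOR_GET_PTR(buffers[0]);
  for (unsigned i = 0; i < n; i++)
    v[i] = 1.0;
}

static struct starpu_codelet vec_init_cl =
{
  .cpu_funcs = { vec_init_cpu },
  .nbuffers  = 1,
  .modes     = { STARPU_W },
};

/* hx registered as a single vector of N elements, owned by the root: */
starpu_mpi_task_insert(MPI_COMM_WORLD, &vec_init_cl, STARPU_W, hx, 0);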

Samuel




