- From: Philippe SWARTVAGHER <philippe.swartvagher@inria.fr>
- To: starpu-devel@lists.gforge.inria.fr
- Cc: Alexandre Denis <alexandre.denis@inria.fr>
- Subject: [Starpu-devel] StarPU overhead for MPI
- Date: Wed, 29 Jan 2020 17:19:10 +0100
- List-archive: <http://lists.gforge.inria.fr/pipermail/starpu-devel/>
- List-id: "Developers list. For discussion of new features, code changes, etc." <starpu-devel.lists.gforge.inria.fr>
Hello,
Attached to this mail is a plot resulting from the execution of mpi/tests/sendrecv_bench. This plot shows the latency of data transfers as a function of data size. The reference curve is raw NewMadeleine (nmad/examples/benchmarks/nm_bench_sendrecv). These two programs are the same, except that in the StarPU benchmark the measured period runs from just before starpu_mpi_send to just after starpu_mpi_recv. So this plot should show the StarPU-MPI overhead.
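For reference, here is a minimal sketch of the kind of ping-pong measurement described above (this is not the actual sendrecv_bench source; the tag values, iteration count and payload size are placeholders chosen only for illustration):

/* Minimal StarPU-MPI ping-pong latency sketch (illustration only,
 * not the real mpi/tests/sendrecv_bench code). */
#include <starpu.h>
#include <starpu_mpi.h>
#include <stdint.h>
#include <stdio.h>

#define NITER 1000

int main(int argc, char **argv)
{
	char buf[1] = { 0 };              /* payload; the real bench varies its size */
	starpu_data_handle_t handle;
	MPI_Status status;
	int rank;

	if (starpu_init(NULL) != 0) return 1;
	starpu_mpi_init(&argc, &argv, 1); /* let StarPU initialize MPI */
	starpu_mpi_comm_rank(MPI_COMM_WORLD, &rank);

	starpu_vector_data_register(&handle, STARPU_MAIN_RAM, (uintptr_t)buf, 1, sizeof(char));

	double start = starpu_timing_now(); /* timestamp in microseconds */
	for (int i = 0; i < NITER; i++)
	{
		if (rank == 0)
		{
			/* measured period: just before the send ... */
			starpu_mpi_send(handle, 1, 0x42, MPI_COMM_WORLD);
			/* ... to just after the matching receive */
			starpu_mpi_recv(handle, 1, 0x43, MPI_COMM_WORLD, &status);
		}
		else if (rank == 1)
		{
			starpu_mpi_recv(handle, 0, 0x42, MPI_COMM_WORLD, &status);
			starpu_mpi_send(handle, 0, 0x43, MPI_COMM_WORLD);
		}
	}
	double end = starpu_timing_now();

	if (rank == 0)
		printf("one-way latency: %.2f us\n", (end - start) / (2.0 * NITER));

	starpu_data_unregister(handle);
	starpu_mpi_shutdown();
	starpu_shutdown();
	return 0;
}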
The overhead seems rather large (a small communication lasts 10 times longer). Do you know what could be the origin of this overhead?
StarPU was compiled without FxT and without --enable-debug.
I tried adding events to the trace to see which functions could consume a lot of time. I tried this on dalton/joe01. A transfer of 0 bytes lasts 109 µs, both in median and in average. Nothing really relevant appeared, but 14 µs elapse between just before the call to starpu_mpi_recv and just after the call to nm_sr_recv_irecv.
Any idea?
--
Philippe SWARTVAGHER
PhD student
TADaaM team, Inria Bordeaux Sud-Ouest
Attachment:
microbench_sendrecv_plafrim.png
Description: PNG image