
starpu-devel - Re: [Starpu-devel] Issue with distributed NUMA-aware StarPU and dmda scheduler

List subject: Developers list for StarPU

  • From: Samuel Thibault <samuel.thibault@inria.fr>
  • To: Philippe SWARTVAGHER <philippe.swartvagher@inria.fr>
  • Cc: starpu-devel@lists.gforge.inria.fr
  • Subject: Re: [Starpu-devel] Issue with distributed NUMA-aware StarPU and dmda scheduler
  • Date: Fri, 13 Mar 2020 17:55:39 +0100
  • List-archive: <http://lists.gforge.inria.fr/pipermail/starpu-devel/>
  • List-id: "Developers list. For discussion of new features, code changes, etc." <starpu-devel.lists.gforge.inria.fr>
  • Organization: I am not organized

Hello,

Philippe SWARTVAGHER, on Fri 13 Mar 2020 14:02:38 +0100, wrote:
> #5  0x00007ffff7fb0053 in _starpu_mpi_handle_request_termination (req=req@entry=0x55555581b640) at ../../../mpi/src/mpi/starpu_mpi_mpi.c:861

It seems that the MPI layer frees the packed data with free(), while the
documentation says that the pack_data method is supposed to use
starpu_malloc_on_node_flags to allocate it. Please try the attached patch,
which releases it properly.
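
For reference, here is roughly what a pack_data method looks like when it
follows that contract; the interface type and field names below are only
illustrative (this is not the actual StarPU vector interface):

/* Rough sketch of a pack_data method for a hypothetical data interface:
 * the packed buffer must be allocated with starpu_malloc_on_node_flags()
 * so that StarPU-MPI can later release it with starpu_free_on_node(). */
#include <string.h>
#include <starpu.h>

struct my_interface
{
	uintptr_t ptr;   /* data in main memory */
	size_t nx;       /* number of elements */
	size_t elemsize; /* size of one element */
};

static int my_pack_data(starpu_data_handle_t handle, unsigned node, void **ptr, starpu_ssize_t *count)
{
	struct my_interface *iface = (struct my_interface *)
		starpu_data_get_interface_on_node(handle, node);

	*count = iface->nx * iface->elemsize;

	if (ptr != NULL)
	{
		/* Allocate through StarPU, not plain malloc(). */
		*ptr = (void *) starpu_malloc_on_node_flags(node, *count, 0);
		memcpy(*ptr, (void *) iface->ptr, *count);
	}

	return 0;
}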

Samuel
diff --git a/mpi/src/mpi/starpu_mpi_mpi.c b/mpi/src/mpi/starpu_mpi_mpi.c
index 01c46fe30..1548641d6 100644
--- a/mpi/src/mpi/starpu_mpi_mpi.c
+++ b/mpi/src/mpi/starpu_mpi_mpi.c
@@ -858,7 +858,7 @@ static void _starpu_mpi_handle_request_termination(struct _starpu_mpi_req *req)
 			int ret;
 			ret = MPI_Wait(&req->backend->size_req, MPI_STATUS_IGNORE);
 			STARPU_MPI_ASSERT_MSG(ret == MPI_SUCCESS, "MPI_Wait returning %s", _starpu_mpi_get_mpi_error_code(ret));
-			free(req->ptr);
+			starpu_free_on_node(STARPU_MAIN_RAM, req->ptr, req->backend->envelope->size);
 			req->ptr = NULL;
 		}
 		else if (req->request_type == RECV_REQ)


