Subject: Developers list for StarPU
List archives
- From: Maxim Abalenkov <maxim.abalenkov@gmail.com>
- To: starpu-devel@lists.gforge.inria.fr
- Subject: [Starpu-devel] [daxpy with StarPU-MPI]
- Date: Tue, 16 Oct 2018 16:25:18 +0100
- List-archive: <http://lists.gforge.inria.fr/pipermail/starpu-devel/>
- List-id: "Developers list. For discussion of new features, code changes, etc." <starpu-devel.lists.gforge.inria.fr>
Dear Samuel et al.,
I hope all is well with you. I’m on a mission to learn StarPU-MPI. To that end, I decided to implement a simple vector addition routine “daxpy”, which computes y := alpha*x + y. Currently I’m following the code snippet given in the Handbook that uses the “scatter” and “gather” routines. I’m using a “rolling” StarPU distribution built from the Git repository. If possible, I would like to ask a few questions:
a) What is the correct way to call the “starpu_mpi_scatter/gather_detached” routines? If I call them as shown in the Handbook, I obtain the following error message:
tst_daxpy_mpi.c:137:5: error: too few arguments to function 'starpu_mpi_scatter_detached'
starpu_mpi_scatter_detached(hx, NT, ROOT, MPI_COMM_WORLD);
^~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from daxpy_mpi.h:10,
from tst_daxpy_mpi.c:10:
/opt/starpu13-gnu73/include/starpu/1.3/starpu_mpi.h:76:5: note: declared here
int starpu_mpi_scatter_detached(starpu_data_handle_t *data_handles, int count, int root, MPI_Comm comm, void (*scallback)(void *), void *sarg, void (*rcallback)(void *), void *rarg);
I came up with an alternative that passes null pointers for the four callback arguments:
starpu_mpi_scatter_detached(hx, NT, ROOT, MPI_COMM_WORLD, 0, 0, 0, 0);
But I am not sure it is correct.
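For reference, below is the full round trip I am attempting (hx and hy are arrays of NT data handles; whether this matches the intended usage is exactly my question):

// scatter both vectors from the root to the owning ranks
starpu_mpi_scatter_detached(hx, NT, ROOT, MPI_COMM_WORLD, NULL, NULL, NULL, NULL);
starpu_mpi_scatter_detached(hy, NT, ROOT, MPI_COMM_WORLD, NULL, NULL, NULL, NULL);

// ... submit the daxpy tasks on the scattered segments ...

// gather the updated y segments back on the root
starpu_mpi_gather_detached(hy, NT, ROOT, MPI_COMM_WORLD, NULL, NULL, NULL, NULL);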
b) Is it possible to use vector filters to automatically partition the data in StarPU-MPI, as in plain StarPU? Below is a code snippet I use with plain StarPU, but I’m not certain it can be adapted to StarPU-MPI, given the need to explicitly register the vector segments with MPI (a sketch of the adaptation I imagine follows the snippet):
// data handles for vectors
starpu_data_handle_t hx, hy;

// register vectors
starpu_vector_data_register(&hx, STARPU_MAIN_RAM, (uintptr_t)x, N, sizeof(x[0]));
starpu_vector_data_register(&hy, STARPU_MAIN_RAM, (uintptr_t)y, N, sizeof(y[0]));

// partition vectors in segments
struct starpu_data_filter f = {
    .filter_func = starpu_vector_filter_block,
    .nchildren = NT
};
starpu_data_partition(hx, &f);
starpu_data_partition(hy, &f);

// submit tasks on vector segments
for (int i = 0; i < starpu_data_get_nb_children(hx); i++) {
    starpu_data_handle_t hxi = starpu_data_get_sub_data(hx, 1, i);
    starpu_data_handle_t hyi = starpu_data_get_sub_data(hy, 1, i);
    // perform vector addition
    core_starpu_daxpy(ALPHA, hxi, INCX, hyi, INCY);
}

// gather vector segments
starpu_data_unpartition(hx, STARPU_MAIN_RAM);
starpu_data_unpartition(hy, STARPU_MAIN_RAM);
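To make the question concrete, this is the adaptation I imagine: after partitioning, each sub-handle would be registered with StarPU-MPI under a unique tag and an owner rank. The round-robin owner choice is only a placeholder; I do not know whether this is the intended pattern.

int nranks;
MPI_Comm_size(MPI_COMM_WORLD, &nranks);

starpu_data_partition(hx, &f);
starpu_data_partition(hy, &f);

for (int i = 0; i < NT; i++) {
    starpu_data_handle_t hxi = starpu_data_get_sub_data(hx, 1, i);
    starpu_data_handle_t hyi = starpu_data_get_sub_data(hy, 1, i);
    // register each segment with StarPU-MPI: unique tag plus owner rank
    starpu_mpi_data_register(hxi, 2*i,   i % nranks);
    starpu_mpi_data_register(hyi, 2*i+1, i % nranks);
}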
c) Finally, what is the recommended way of initialising the vector data in the “scatter/gather” example? So far I am initialising each vector segment on the root processor, as shown below. Is there a better way, e.g. initialising the entire vectors before splitting them into segments and registering them with StarPU and MPI? (A sketch of that alternative follows the snippet.)
int const ROOT = 0;

// arrays of pointers to the vector segments
double **x, **y;

// allocate and initialise the segments on the root only
if (rank == ROOT) {
    x = (double **) malloc((size_t)NT * sizeof(double *));
    y = (double **) malloc((size_t)NT * sizeof(double *));
    // for each segment
    for (int i = 0; i < NT; i++) {
        // length of segment
        int nb = get_nb(i, NT, N);
        starpu_malloc((void **)&x[i], nb * sizeof(double));
        starpu_malloc((void **)&y[i], nb * sizeof(double));
        // initialise vector segments with random values
        int iseed[4] = {0, 0, 0, 1};
        LAPACKE_dlarnv_work(2, iseed, nb, x[i]);
        LAPACKE_dlarnv_work(2, iseed, nb, y[i]);
        // @test print out initial segments
        printf("Initial segment x[%d]:\n", i);
        print_dvec(nb, x[i], "cm");
        printf("Initial segment y[%d]:\n", i);
        print_dvec(nb, y[i], "cm");
    }
}
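The alternative I have in mind would look roughly like this: allocate and fill each whole vector on the root with a single dlarnv call, and only afterwards split it into segments and register the pieces (I have not verified this against the scatter/gather example):

// alternative: initialise the whole vectors on the root up front
double *x = NULL, *y = NULL;
if (rank == ROOT) {
    starpu_malloc((void **)&x, (size_t)N * sizeof(double));
    starpu_malloc((void **)&y, (size_t)N * sizeof(double));
    // one dlarnv call fills a whole contiguous vector
    int iseed[4] = {0, 0, 0, 1};
    LAPACKE_dlarnv_work(2, iseed, N, x);
    LAPACKE_dlarnv_work(2, iseed, N, y);
}
// the per-rank registration and scattering of the segments would follow,
// which is exactly the part I am unsure about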
Thank you and have a good evening ahead!
—
Best wishes,
Maxim