
starpu-devel - Re: [Starpu-devel] Processor grid and data distribution

Subject: Developers list for StarPU

List archives

  • From: Hatem Ltaief <hatem.ltaief@kaust.edu.sa>
  • To: Samuel Thibault <samuel.thibault@inria.fr>
  • Cc: "starpu-devel@lists.gforge.inria.fr" <starpu-devel@lists.gforge.inria.fr>, Amani Alonazi <amani.alonazi@kaust.edu.sa>
  • Subject: Re: [Starpu-devel] Processor grid and data distribution
  • Date: Fri, 19 Oct 2018 13:19:44 +0000
  • Accept-language: en-GB, en-US
  • List-archive: <http://lists.gforge.inria.fr/pipermail/starpu-devel/>
  • List-id: "Developers list. For discussion of new features, code changes, etc." <starpu-devel.lists.gforge.inria.fr>

Hi Sam,
Please find attached a cartoon of what we would like to achieve with fine-grained tasks. We would like to use a data distribution that increases data locality and incurs less data movement (halo exchanges) across the NICs.
I just found out about the StarPU stencil example, which I believe has pretty much what we would like to achieve.
In particular, the function assign_blocks_to_mpi_nodes should do it.
Thanks!
Hatem


Attachment: data-distrib.pdf
Description: data-distrib.pdf


On Oct 18, 2018, at 5:24 PM, Samuel Thibault <samuel.thibault@inria.fr> wrote:

Hello,

Hatem Ltaief, on Thu, 18 Oct 2018 09:43:08 +0000, wrote:
Is there a way to specify an arbitrary processor grid and data distribution in an MPI-based application? How?
In fact, are the data structure and the way it is distributed via processor grid completely uncoupled?

I'm not sure what you mean by processor grid.

The way it is usually done is to set for each data the MPI node it
resides on with starpu_mpi_data_register or starpu_mpi_data_set_rank.
By default each task gets executed on the node that owns the
data written by the task.  One can use
starpu_mpi_node_selection_register_policy to select another policy,
or use STARPU_EXECUTE_ON_NODE to explicitly specify the node that shall
execute the task. The data location and the task execution location are
thus completely uncoupled.

Samuel




Archives managed by MHonArc 2.6.19+.