starpu-devel - Re: [Starpu-devel] Control data movement to/from device


Re: [Starpu-devel] Control data movement to/from device


  • From: Samuel Thibault <samuel.thibault@inria.fr>
  • To: Amani Alonazi <amani.alonazi@kaust.edu.sa>
  • Cc: starpu-devel@lists.gforge.inria.fr, Hatem Ltaief <Hatem.Ltaief@kaust.edu.sa>
  • Subject: Re: [Starpu-devel] Control data movement to/from device
  • Date: Thu, 18 Apr 2019 13:03:37 +0200
  • List-archive: <http://lists.gforge.inria.fr/pipermail/starpu-devel/>
  • List-id: "Developers list. For discussion of new features, code changes, etc." <starpu-devel.lists.gforge.inria.fr>
  • Organization: I am not organized

Amani Alonazi, on Wed, 17 Apr 2019 11:55:27 +0300, wrote:
> The problem is that using 4 GPUs (4N workload) takes longer than
> using 1 GPU (N workload).

Taking longer would not be too surprising, since parallelization is
hard in general :) But I guess you mean much longer.

> The application in theory requires only 600GB to be in GPU memory and
> then back to CPU memory.

I wonder what the dataset looks like compared to the computation. Can
the computation be split exactly into 4 pieces of 600GB, one for each
of the 4 GPUs? I guess the scheduler, which is relatively blind with
regard to the whole data+task set, does not manage to figure out how to
split it, and thus the runtime has to make transfers to compensate for
the bad splits.
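
The split itself can be expressed with StarPU's partitioning filters.
A minimal sketch, assuming the dataset is a single registered vector
handle (the function name is illustrative, not part of StarPU):

#include <starpu.h>

/* Split one registered vector into 4 contiguous sub-handles,
   one chunk per GPU. */
void split_in_four(starpu_data_handle_t vector_handle)
{
	struct starpu_data_filter f =
	{
		.filter_func = starpu_vector_filter_block,
		.nchildren = 4,
	};
	starpu_data_partition(vector_handle, &f);
	/* Chunk i is then starpu_data_get_sub_data(vector_handle, 1, i),
	   and can be passed to the tasks meant for GPU i. */
}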

If you do know how to split exactly, you could try to force, for each
task, which GPU it should run on by setting

task->execute_on_a_specific_worker = 1;
task->workerid = ...;
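
Putting the two together, a minimal sketch (not tested; it assumes the
data has been split into per-GPU chunks, and the codelet and helper
names are illustrative):

#include <starpu.h>

extern struct starpu_codelet my_codelet; /* illustrative codelet */

/* Submit one task per chunk, pinning chunk i to a fixed CUDA worker so
   its data never has to move between devices. */
int submit_pinned_tasks(starpu_data_handle_t *chunks, unsigned nchunks)
{
	int cuda_ids[STARPU_NMAXWORKERS];
	unsigned ncuda = starpu_worker_get_ids_by_type(STARPU_CUDA_WORKER,
						       cuda_ids, STARPU_NMAXWORKERS);
	if (ncuda == 0)
		return -1;

	for (unsigned i = 0; i < nchunks; i++)
	{
		struct starpu_task *task = starpu_task_create();
		task->cl = &my_codelet;
		task->handles[0] = chunks[i];
		task->execute_on_a_specific_worker = 1;
		task->workerid = cuda_ids[i % ncuda];
		int ret = starpu_task_submit(task);
		if (ret != 0)
			return ret;
	}
	return 0;
}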

Samuel



