
starpu-devel - Re: [Starpu-devel] OpenMP StarPU

Subject: Developers list for StarPU

List archives

Re: [Starpu-devel] OpenMP StarPU


  • From: Samuel Thibault <samuel.thibault@ens-lyon.org>
  • To: Roberto Ribeiro <rbrt.ribeiro@gmail.com>
  • Cc: starpu-devel@lists.gforge.inria.fr
  • Subject: Re: [Starpu-devel] OpenMP StarPU
  • Date: Tue, 25 Sep 2012 18:03:36 +0200
  • List-archive: <http://lists.gforge.inria.fr/pipermail/starpu-devel>
  • List-id: "Developers list. For discussion of new features, code changes, etc." <starpu-devel.lists.gforge.inria.fr>

Roberto Ribeiro, on Tue 25 Sep 2012 16:50:31 +0100, wrote:
> The reason is basically that I can't make it work. With
> single_combined_worker, starpu_combined_worker_get_size() always
> returns a single CPU. I've used your vector_scal.c example, compiled
> with the given command, and the output is always: "running task with 1 CPUs".

Remember that the pheft scheduler benchmarks the parallel implementation
with varying numbers of cores, to determine which configurations are most
efficient. You can also try to use pgreedy instead.

> How do we do it then? We may still specify a single codelet, use:
>   .type = STARPU_FORKJOIN,
>   .cpu_funcs = { CPU_func, NULL },
>   .cuda_funcs = { CUDA_func, NULL },
> and create subtasks with it?

Yes. You also need a performance model in order to use the pheft
scheduler; otherwise, if I remember correctly, it may play it safe and
never attempt parallel execution.
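For reference, a minimal sketch of what such a fork-join codelet with a
history-based performance model could look like. This follows StarPU's
documented parallel-task API; the function name `scal_cpu_func` and the
perfmodel symbol `"vector_scal_parallel"` are placeholders, not from the
original message, and a calibration run under pheft would still be needed
before the model is useful:

```c
/* Sketch of a parallel (STARPU_FORKJOIN) codelet with a history-based
 * performance model, needed by the pheft scheduler. */
#include <limits.h>
#include <starpu.h>

static struct starpu_perfmodel vector_scal_model = {
    .type = STARPU_HISTORY_BASED,
    .symbol = "vector_scal_parallel",   /* placeholder symbol name */
};

void scal_cpu_func(void *buffers[], void *cl_arg)
{
    struct starpu_vector_interface *v = buffers[0];
    unsigned n = STARPU_VECTOR_GET_NX(v);
    float *val = (float *)STARPU_VECTOR_GET_PTR(v);
    float factor = *(float *)cl_arg;

    /* In a fork-join implementation, StarPU gives the whole combined
     * worker to this call; query its size to drive e.g. OpenMP. */
    int nworkers = starpu_combined_worker_get_size();
    (void)nworkers;  /* e.g. pass to omp_set_num_threads() */

    for (unsigned i = 0; i < n; i++)
        val[i] *= factor;
}

static struct starpu_codelet scal_cl = {
    .type = STARPU_FORKJOIN,
    .max_parallelism = INT_MAX,          /* let pheft pick the width */
    .cpu_funcs = { scal_cpu_func, NULL },
    .nbuffers = 1,
    .modes = { STARPU_RW },
    .model = &vector_scal_model,
};
```

With this in place, pheft can benchmark the codelet at several combined-worker
sizes and record the timings under the perfmodel symbol.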

Samuel





Archives managed by MHonArc 2.6.19+.
