
starpu-devel - Re: [Starpu-devel] OpenMP StarPU

Subject: Developers list for StarPU

List archives

Re: [Starpu-devel] OpenMP StarPU


  • From: Roberto Ribeiro <rbrt.ribeiro@gmail.com>
  • To: Samuel Thibault <samuel.thibault@ens-lyon.org>
  • Subject: Re: [Starpu-devel] OpenMP StarPU
  • Date: Tue, 25 Sep 2012 16:50:31 +0100
  • Authentication-results: iona.labri.fr (amavisd-new); dkim=pass header.i=@gmail.com
  • List-archive: <http://lists.gforge.inria.fr/pipermail/starpu-devel>
  • List-id: "Developers list. For discussion of new features, code changes, etc." <starpu-devel.lists.gforge.inria.fr>
  • Resent-date: Tue, 25 Sep 2012 17:52:50 +0200
  • Resent-from: Samuel Thibault <samuel.thibault@ens-lyon.org>
  • Resent-message-id: <20120925155250.GO6096@type.oeaw.ads>
  • Resent-to: starpu-devel@lists.gforge.inria.fr

Hi,

Thanks for the quick answer.

The reason is basically that I can't make it work. With single_combined_worker enabled, starpu_combined_worker_get_size() always returns a single CPU. I used your vector_scal.c example, compiled it with the given command, and the output is always: "running task with 1 CPUs".
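
For reference, here is roughly what I am running, as a cut-down sketch in the spirit of the handbook's vector_scal example (the STARPU_SCHED hint in the comment and the exact field names are my reading of the handbook, so please correct me if they are off):

    #include <limits.h>
    #include <stdio.h>
    #include <starpu.h>

    /* CPU implementation: just report the size of the combined worker. */
    static void scal_cpu_func(void *buffers[], void *cl_arg)
    {
        (void) buffers; (void) cl_arg;
        printf("running task with %d CPUs\n", starpu_combined_worker_get_size());
    }

    static struct starpu_codelet cl =
    {
        .type = STARPU_FORKJOIN,          /* parallel (fork-join) task */
        .max_parallelism = INT_MAX,
        .cpu_funcs = { scal_cpu_func, NULL },
        .nbuffers = 0,
    };

    int main(void)
    {
        struct starpu_conf conf;
        starpu_conf_init(&conf);
        conf.single_combined_worker = 1;  /* one combined worker at a time */

        if (starpu_init(&conf) != 0)
            return 1;

        struct starpu_task *task = starpu_task_create();
        task->cl = &cl;
        task->synchronous = 1;
        starpu_task_submit(task);         /* run with a parallel-task-aware
                                             scheduler, e.g. STARPU_SCHED=pheft */

        starpu_shutdown();
        return 0;
    }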

How do we do it then? Can we still specify a single codelet, with
    .type = STARPU_FORKJOIN,
    .cpu_funcs = {CPU_func, NULL},
    .cuda_funcs = {CUDA_func, NULL},
and create subtasks with it?
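
That is, something along these lines, as a sketch only (the CPU_func and CUDA_func bodies here are placeholders that just scale a vector by 2, and I assume a CUDA-enabled StarPU build with starpu_cublas_init() called after starpu_init() so cuBLAS uses the worker's stream):

    #include <limits.h>
    #include <starpu.h>
    #include <cublas.h>         /* legacy cuBLAS API */
    #include <cuda_runtime.h>

    /* CPU implementation: OpenMP across the CPUs of the combined worker. */
    static void CPU_func(void *buffers[], void *cl_arg)
    {
        (void) cl_arg;
        float *v = (float *) STARPU_VECTOR_GET_PTR(buffers[0]);
        int n = (int) STARPU_VECTOR_GET_NX(buffers[0]);
        int size = starpu_combined_worker_get_size();

        #pragma omp parallel for num_threads(size)
        for (int i = 0; i < n; i++)
            v[i] *= 2.0f;
    }

    /* CUDA implementation: same operation through cuBLAS. */
    static void CUDA_func(void *buffers[], void *cl_arg)
    {
        (void) cl_arg;
        float *v = (float *) STARPU_VECTOR_GET_PTR(buffers[0]);
        int n = (int) STARPU_VECTOR_GET_NX(buffers[0]);

        cublasSscal(n, 2.0f, v, 1);
        cudaStreamSynchronize(starpu_cuda_get_local_stream());
    }

    static struct starpu_codelet cl =
    {
        .type = STARPU_FORKJOIN,
        .max_parallelism = INT_MAX,
        .cpu_funcs  = { CPU_func, NULL },
        .cuda_funcs = { CUDA_func, NULL },
        .nbuffers = 1,
        .modes = { STARPU_RW },
    };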

RR


On Tue, Sep 25, 2012 at 4:34 PM, Samuel Thibault <samuel.thibault@ens-lyon.org> wrote:
Hello,

Roberto Ribeiro, on Tue 25 Sep 2012 17:23:17 +0200, wrote:
> I'm looking for an alternative way to use OpenMP with StarPU without the parallel
> tasks feature.

Out of curiosity, why do you want an alternative, exactly?

> My first approach is to initialize the system with a single CPU
> (conf.ncpus = 1) and use #pragma omp parallel for inside the codelet function.
> However, for some unknown reason the threads are created but they stall and
> the kernel remains sequential. Are you aware of anything that may cause this?

I guess this is because the only thread started by StarPU is bound to
only one core.
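
A quick way to check that from inside the codelet function is to print the thread's affinity mask (a Linux/glibc-specific sketch, nothing StarPU-specific; with conf.ncpus = 1 one would expect it to report a single core, which the OpenMP threads then inherit):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    /* Print how many cores the calling thread is allowed to run on. */
    static void print_affinity(void)
    {
        cpu_set_t set;
        if (sched_getaffinity(0, sizeof(set), &set) == 0)
            printf("this thread may run on %d core(s)\n", CPU_COUNT(&set));
    }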

> One more thing, in your handbook you say:
>
>
>     The solution is then to use only one combined worker at a time. This can be
>     done by setting single_combined_worker to 1 in the starpu_conf structure,
>     or setting the STARPU_SINGLE_COMBINED_WORKER environment variable to 1.
>     StarPU will then run only one parallel task at a time.
>
>
> Does this mean that we can't have a GPU worker running concurrently and
> executing tasks with the same starpu_codelet?

No, I have just added “(but other CPU and GPU tasks are not affected
and can be run concurrently).”

> For instance, can we schedule a GEMM in a CPU+GPU system using any CPU
> BLAS implementation and cuBLAS?

Yes.
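
For instance, a codelet along these lines would do; this is a sketch only (not the xgemm example shipped with StarPU), assuming column-major tiles registered through the matrix interface and starpu_cublas_init() called after starpu_init():

    #include <starpu.h>
    #include <cblas.h>          /* any CBLAS: ATLAS, OpenBLAS, MKL, ... */
    #include <cublas.h>         /* legacy cuBLAS API */
    #include <cuda_runtime.h>

    /* C = A * B on a CPU worker, with whatever CBLAS is linked in. */
    static void gemm_cpu(void *buffers[], void *cl_arg)
    {
        (void) cl_arg;
        float *A = (float *) STARPU_MATRIX_GET_PTR(buffers[0]);
        float *B = (float *) STARPU_MATRIX_GET_PTR(buffers[1]);
        float *C = (float *) STARPU_MATRIX_GET_PTR(buffers[2]);
        int m = STARPU_MATRIX_GET_NX(buffers[2]);   /* rows of C */
        int n = STARPU_MATRIX_GET_NY(buffers[2]);   /* columns of C */
        int k = STARPU_MATRIX_GET_NY(buffers[0]);   /* columns of A */
        int lda = STARPU_MATRIX_GET_LD(buffers[0]);
        int ldb = STARPU_MATRIX_GET_LD(buffers[1]);
        int ldc = STARPU_MATRIX_GET_LD(buffers[2]);

        cblas_sgemm(CblasColMajor, CblasNoTrans, CblasNoTrans,
                    m, n, k, 1.0f, A, lda, B, ldb, 0.0f, C, ldc);
    }

    /* Same operation on a CUDA worker, through cuBLAS. */
    static void gemm_cuda(void *buffers[], void *cl_arg)
    {
        (void) cl_arg;
        float *A = (float *) STARPU_MATRIX_GET_PTR(buffers[0]);
        float *B = (float *) STARPU_MATRIX_GET_PTR(buffers[1]);
        float *C = (float *) STARPU_MATRIX_GET_PTR(buffers[2]);
        int m = STARPU_MATRIX_GET_NX(buffers[2]);
        int n = STARPU_MATRIX_GET_NY(buffers[2]);
        int k = STARPU_MATRIX_GET_NY(buffers[0]);
        int lda = STARPU_MATRIX_GET_LD(buffers[0]);
        int ldb = STARPU_MATRIX_GET_LD(buffers[1]);
        int ldc = STARPU_MATRIX_GET_LD(buffers[2]);

        cublasSgemm('N', 'N', m, n, k, 1.0f, A, lda, B, ldb, 0.0f, C, ldc);
        cudaStreamSynchronize(starpu_cuda_get_local_stream());
    }

    static struct starpu_codelet gemm_cl =
    {
        .cpu_funcs  = { gemm_cpu, NULL },
        .cuda_funcs = { gemm_cuda, NULL },
        .nbuffers   = 3,
        .modes      = { STARPU_R, STARPU_R, STARPU_RW },
    };

The scheduler then picks a CPU or CUDA worker for each task according to its policy (and performance models, if any).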

Samuel



