- From: Xavier Lacoste <xavier.lacoste@inria.fr>
- To: Samuel Thibault <samuel.thibault@ens-lyon.org>
- Cc: starpu-devel@lists.gforge.inria.fr
- Subject: Re: [Starpu-devel] MPI scaling
- Date: Tue, 24 Jun 2014 17:11:51 +0200
- List-archive: <http://lists.gforge.inria.fr/pipermail/starpu-devel/>
- List-id: "Developers list. For discussion of new features, code changes, etc." <starpu-devel.lists.gforge.inria.fr>
On 24 June 2014 at 15:29, Xavier Lacoste <xavier.lacoste@inria.fr> wrote:
>
>
> On 24 June 2014 at 14:17, Samuel Thibault <samuel.thibault@ens-lyon.org> wrote:
>
>> Hello,
>>
>> Xavier Lacoste, on Mon 23 Jun 2014 09:19:19 +0200, wrote:
>>> I'm using the eager scheduler as I have no GPUs.
>>
>> Maybe you'd want to give a try to the lws scheduler.
>
> Ok I'll have a try.
>
> I don't see it in StarPU 1.1.2; is it new in 1.2? Is it different from ws?
>
> The variable STARPU_SCHED can be set to one of the following strings:
> eager -> eager policy with a central queue
> prio -> eager (with priorities)
> random -> weighted random based on worker overall performance
> ws -> work stealing
> dm -> performance model
> dmda -> data-aware performance model
> dmdar -> data-aware performance model (ready)
> dmdas -> data-aware performance model (sorted)
> pheft -> parallel HEFT
I assume you meant "ws" rather than "lws". I tried it and it seems to improve
the execution time. I now have to set priorities and try the prio scheduler.
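For reference, a minimal sketch of selecting the policy programmatically
through the sched_policy_name field of starpu_conf (the same thing
STARPU_SCHED does at run time); "ws" is the name listed for 1.1.2, "lws" may
only exist in newer releases:

    #include <starpu.h>

    int main(void)
    {
        struct starpu_conf conf;
        starpu_conf_init(&conf);

        /* Select the scheduling policy in the code instead of (or in
         * addition to) exporting STARPU_SCHED in the environment. */
        conf.sched_policy_name = "ws";

        if (starpu_init(&conf) != 0)
            return 1;

        /* ... register data and submit tasks ... */

        starpu_shutdown();
        return 0;
    }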
>>
>>> I attach the 4x8 execution trace. I would like to know whether there is a
>>> way to see if a communication is triggered as soon as the last local update
>>> has been performed.
>>
>> We can see big idle areas indeed. Is "local update" a particular kind
>> of task? You could make vite show it in a different color and thus see
>> where it is. Perhaps the problem here is rather that these tasks are
>> scheduled late, and giving them a priority and using the prio scheduler
>> would help.
>>
>
> This local update task is the same task as the update to local GEMMs
> (except that the written data is a temporary buffer and will have to be sent
> over MPI). I could rename them to get a different color.
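As a sketch of that renaming (the kernel function, buffer count and access
modes below are placeholders): registering the remote-contribution updates
under a codelet with a distinct name should make them appear as a separate
state, and hence a separate color, in the vite trace:

    #include <starpu.h>

    /* Hypothetical kernel shared by both kinds of update. */
    void update_cpu(void *buffers[], void *cl_arg);

    /* Local updates keep their usual name... */
    static struct starpu_codelet local_update_cl =
    {
        .cpu_funcs = { update_cpu },
        .nbuffers  = 2,
        .modes     = { STARPU_RW, STARPU_R },
        .name      = "update_local",
    };

    /* ...while updates writing the temporary buffer that will be sent over
     * MPI get their own name, and therefore their own color in the trace. */
    static struct starpu_codelet remote_update_cl =
    {
        .cpu_funcs = { update_cpu },
        .nbuffers  = 2,
        .modes     = { STARPU_RW, STARPU_R },
        .name      = "update_remote_contrib",
    };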
>
> I already tried using eager and giving a non-zero priority to these update
> tasks (as I saw in the online documentation that non-zero-priority tasks
> are put at the front of the queue). Is this different from the prio
> scheduler's behaviour?
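For illustration, a minimal sketch of submitting such an update task with a
non-zero priority (the codelet and data handles are placeholders);
STARPU_MAX_PRIO marks the task as most urgent for schedulers that take
priorities into account:

    #include <starpu.h>

    extern struct starpu_codelet update_cl;   /* hypothetical codelet */

    void submit_update(starpu_data_handle_t dst, starpu_data_handle_t contrib)
    {
        struct starpu_task *task = starpu_task_create();

        task->cl = &update_cl;
        task->handles[0] = dst;
        task->handles[1] = contrib;

        /* Non-zero priority so the scheduler can favour this task over
         * priority-0 work. */
        task->priority = STARPU_MAX_PRIO;

        int ret = starpu_task_submit(task);
        STARPU_CHECK_RETURN_VALUE(ret, "starpu_task_submit");
    }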
>
>>> Have you got any advice on how to improve MPI scaling?
>>
>> Well, that's the topic of Marc's thesis :)
>> Here it rather looks to me like a problem of task ordering.
>>
>>> In my case here, using all 8 cores of a node is worse than using 7
>>> threads per node... Have you already experienced that?
>>
>> That may not be surprising indeed, since this lets the MPI thread have
>> its own core.
>>
>> Samuel
>
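Regarding that last point, a minimal sketch of leaving one core free for the
MPI progression thread by limiting the number of CPU workers (the same effect
as running with STARPU_NCPUS=7 on an 8-core node); the starpu_mpi_init call
follows the 1.1/1.2 signature:

    #include <starpu.h>
    #include <starpu_mpi.h>

    int main(int argc, char **argv)
    {
        struct starpu_conf conf;
        starpu_conf_init(&conf);

        /* Use only 7 CPU workers so the starpu_mpi progression thread
         * gets a core of its own on an 8-core node. */
        conf.ncpus = 7;

        if (starpu_init(&conf) != 0)
            return 1;
        starpu_mpi_init(&argc, &argv, 1);

        /* ... submit tasks, starpu_task_wait_for_all(), ... */

        starpu_mpi_shutdown();
        starpu_shutdown();
        return 0;
    }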