- From: Cyril Roelandt <cyril.roelandt@inria.fr>
- To: "starpu-devel@lists.gforge.inria.fr" <starpu-devel@lists.gforge.inria.fr>
- Subject: [Starpu-devel] What would you like to see most in the schedulers ?
- Date: Tue, 04 Dec 2012 19:31:27 +0100
- List-archive: <http://lists.gforge.inria.fr/pipermail/starpu-devel>
- List-id: "Developers list. For discussion of new features, code changes, etc." <starpu-devel.lists.gforge.inria.fr>
Hi everyone!
We have just merged heft and dmda. What should we do next with our schedulers? Here are some ideas:
* Merge pheft into dm*: I think Andra would like that to happen. I wonder how hard this would really be, and whether it can be done cleanly.
* Make dm use compute_all_performance_predictions() in _dm_push_task(). It should not be too hard, but compute_all_performance_predictions() does a bit too much (it takes power consumption into account, for instance). This would add bundle support to the dm strategy and would probably help maintainability; a first sketch of this follows the list.
* Try to optimize dmda: there is probably still room for improvement. For instance, this loop nest looks quite wasteful:
for (worker = 0; worker < nworkers; worker++)
{
    ...
    for (nimpl = 0; nimpl < STARPU_MAXIMPLEMENTATIONS; nimpl++)
    {
        if (!starpu_worker_can_execute_task(worker, task, nimpl))
        {
            /* no one on that queue may execute this task */
            continue;
        }
        ...
    }
}
Most codelets provide only one implementation, so we end up calling starpu_worker_can_execute_task() (STARPU_MAXIMPLEMENTATIONS - 1) times per worker for nothing. We could probably compute the exact number of implementations once, before entering these nested loops, and avoid a lot of useless function calls; a second sketch after the list illustrates this.
* Improve the readability and maintainability of dm*.
* Anything else?
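To make the second point a bit more concrete, here is a rough sketch; this is not actual StarPU code: compute_task_length_predictions() is a made-up name for the timing-only part that compute_all_performance_predictions() could be split into, and _dm_push_task() is heavily simplified (dm really minimizes the expected termination time, not just the raw predicted length):

#include <float.h>
#include <starpu.h>

/* Hypothetical timing-only half of compute_all_performance_predictions():
 * fill predicted[w][i] with the expected length of implementation i on
 * worker w, leaving power consumption and data penalties to the
 * dmda-specific path. Stubbed out here; this function does not exist
 * in StarPU today. */
static void compute_task_length_predictions(struct starpu_task *task,
    double predicted[STARPU_NMAXWORKERS][STARPU_MAXIMPLEMENTATIONS])
{
    (void) task;
    (void) predicted;
}

/* Rough paraphrase of what _dm_push_task() could become: one shared
 * prediction pass, then the usual argmin over (worker, implementation). */
static int _dm_push_task(struct starpu_task *task, unsigned prio)
{
    double predicted[STARPU_NMAXWORKERS][STARPU_MAXIMPLEMENTATIONS];
    unsigned worker, nimpl, best_worker = 0, best_impl = 0;
    unsigned nworkers = starpu_worker_get_count();
    double best = DBL_MAX;

    compute_task_length_predictions(task, predicted);

    for (worker = 0; worker < nworkers; worker++)
        for (nimpl = 0; nimpl < STARPU_MAXIMPLEMENTATIONS; nimpl++)
        {
            if (!starpu_worker_can_execute_task(worker, task, nimpl))
                continue;
            if (predicted[worker][nimpl] < best)
            {
                best = predicted[worker][nimpl];
                best_worker = worker;
                best_impl = nimpl;
            }
        }

    /* ... push the task (or its whole bundle) to best_worker's queue
     * with implementation best_impl, as dm already does ... */
    (void) prio; (void) best_worker; (void) best_impl;
    return 0;
}

Presumably, sharing that prediction pass is also what would bring bundle support to dm, since the bundle handling would then live in the common code.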
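As for the dmda loop above, here is a minimal sketch of the counting idea, assuming that an unused implementation slot is one for which no driver function is set in the codelet; count_task_implementations() is a made-up helper, not part of the StarPU API:

#include <starpu.h>

/* Hypothetical helper: count how many implementation slots the codelet
 * actually fills, so the scheduler can stop iterating there instead of
 * always going up to STARPU_MAXIMPLEMENTATIONS. */
static unsigned count_task_implementations(struct starpu_task *task)
{
    struct starpu_codelet *cl = task->cl;
    unsigned nimpl, count = 0;

    for (nimpl = 0; nimpl < STARPU_MAXIMPLEMENTATIONS; nimpl++)
    {
        /* A slot is in use if at least one driver provides a
         * function for it. */
        if (cl->cpu_funcs[nimpl] || cl->cuda_funcs[nimpl]
                || cl->opencl_funcs[nimpl])
            count = nimpl + 1;
    }
    return count;
}

Calling this once before the worker loop and using the result as the inner loop bound would mean that a codelet with a single implementation costs one starpu_worker_can_execute_task() call per worker instead of STARPU_MAXIMPLEMENTATIONS.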
So, what would you be interested in?
Cyril Roelandt.