[Starpu-devel] StarPU 1.2.0rc2, FAIL: datawizard/manual_reduction


  • From: Guowei HE <g.he@fz-juelich.de>
  • To: <starpu-devel@lists.gforge.inria.fr>
  • Subject: [Starpu-devel] StarPU 1.2.0rc2, FAIL: datawizard/manual_reduction
  • Date: Mon, 11 May 2015 11:29:20 +0200
  • List-archive: <http://lists.gforge.inria.fr/pipermail/starpu-devel/>
  • List-id: "Developers list. For discussion of new features, code changes, etc." <starpu-devel.lists.gforge.inria.fr>

Dear StarPU developers,

I have just installed a new StarPU and got a test failure, but looking at the
error log the output seems normal. Could you kindly give me more insight? Thanks!
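
In case it helps to reproduce, here is roughly how the failing test can be
re-run on its own from the build tree (a minimal sketch only; the path matches
the test-suite log below, and STARPU_SCHED is the standard environment variable
the scheduler warnings refer to):

  # from the top of the StarPU build tree, where tests/test-suite.log lives
  cd tests
  ./datawizard/manual_reduction ; echo "exit code: $?"
  # optionally with another scheduler, as the warnings in the log suggest:
  STARPU_SCHED=dmda ./datawizard/manual_reduction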

--
Kind regards,

Guowei HE
IBG-3: Agrosphere
Forschungszentrum Jülich
Phone: +49 2461 61-8832
email: g.he@fz-juelich.de



------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------
Forschungszentrum Juelich GmbH
52425 Juelich
Sitz der Gesellschaft: Juelich
Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498
Vorsitzender des Aufsichtsrats: MinDir Dr. Karl Eugen Huthmacher
Geschaeftsfuehrung: Prof. Dr.-Ing. Wolfgang Marquardt (Vorsitzender),
Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt,
Prof. Dr. Sebastian M. Schmidt
------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------

===========================================
   StarPU 1.2.0rc2: tests/test-suite.log
===========================================

# TOTAL: 178
# PASS:  136
# SKIP:  40
# XFAIL: 1
# FAIL:  1
# XPASS: 0
# ERROR: 0

.. contents:: :depth: 2

SKIP: main/starpu_init
======================

Testing with env=-1 - conf=-1
[starpu][initialize_eager_center_policy] Warning: you are running the default eager scheduler, which is not very smart. Make sure to read the StarPU documentation about adding performance models in order to be able to use the dmda or dmdas scheduler instead.
Number of CPUS:   1
#Execution_time_in_seconds 0.119259 ./main/starpu_init

FAIL: datawizard/manual_reduction
=================================

[starpu][initialize_eager_center_policy] Warning: you are running the default eager scheduler, which is not very smart. Make sure to read the StarPU documentation about adding performance models in order to be able to use the dmda or dmdas scheduler instead.
5041 = 42 + 4999
10041 = 5041 + 5000
#Execution_time_in_seconds 0.233139 ./datawizard/manual_reduction
`./datawizard/manual_reduction' exited with return code 1

SKIP: datawizard/interfaces/multiformat/advanced/multiformat_cuda_opencl
========================================================================

[starpu][initialize_eager_center_policy] Warning: you are running the default eager scheduler, which is not very smart. Make sure to read the StarPU documentation about adding performance models in order to be able to use the dmda or dmdas scheduler instead.
#Execution_time_in_seconds 0.111760 ./datawizard/interfaces/multiformat/advanced/multiformat_cuda_opencl

SKIP: datawizard/interfaces/multiformat/advanced/multiformat_data_release
=========================================================================

[starpu][initialize_eager_center_policy] Warning: you are running the default eager scheduler, which is not very smart. Make sure to read the StarPU documentation about adding performance models in order to be able to use the dmda or dmdas scheduler instead.
#Execution_time_in_seconds 0.123353 ./datawizard/interfaces/multiformat/advanced/multiformat_data_release

SKIP: datawizard/gpu_register
=============================

[starpu][initialize_eager_center_policy] Warning: you are running the default eager scheduler, which is not very smart. Make sure to read the StarPU documentation about adding performance models in order to be able to use the dmda or dmdas scheduler instead.
#Execution_time_in_seconds 0.105065 ./datawizard/gpu_register

SKIP: datawizard/gpu_ptr_register
=================================

[starpu][initialize_eager_center_policy] Warning: you are running the default eager scheduler, which is not very smart. Make sure to read the StarPU documentation about adding performance models in order to be able to use the dmda or dmdas scheduler instead.
#Execution_time_in_seconds 0.111164 ./datawizard/gpu_ptr_register

SKIP: datawizard/readonly
=========================

[starpu][initialize_eager_center_policy] Warning: you are running the default eager scheduler, which is not very smart. Make sure to read the StarPU documentation about adding performance models in order to be able to use the dmda or dmdas scheduler instead.
copy 0
submission of task 0x10283d0 wih codelet 0x6020a0 failed (symbol `none') (err: ENODEV)
WARNING: No one can execute this task
#Execution_time_in_seconds 0.123073 ./datawizard/readonly

XFAIL: errorcheck/invalid_blocking_calls
========================================

[starpu][initialize_eager_center_policy] Warning: you are running the default eager scheduler, which is not very smart. Make sure to read the StarPU documentation about adding performance models in order to be able to use the dmda or dmdas scheduler instead.

[starpu][starpu_data_acquire_on_node][assert failure] Acquiring a data synchronously is not possible from a codelet or from a task callback, use starpu_data_acquire_cb instead.

lt-invalid_blocking_calls: datawizard/user_interactions.c:239: starpu_data_acquire_on_node: Assertion `_starpu_worker_may_perform_blocking_calls()' failed.
[error] `./errorcheck/invalid_blocking_calls' killed with signal 6; test marked as failed
while looking for core file of ./errorcheck/invalid_blocking_calls: core: No such file or directory
#Execution_time_in_seconds 0.352678 ./errorcheck/invalid_blocking_calls

SKIP: openmp/init_exit_01
=========================

#Execution_time_in_seconds 0.057531 ./openmp/init_exit_01

SKIP: openmp/init_exit_02
=========================

#Execution_time_in_seconds 0.045760 ./openmp/init_exit_02

SKIP: openmp/environment
========================

#Execution_time_in_seconds 0.045375 ./openmp/environment

SKIP: openmp/api_01
===================

#Execution_time_in_seconds 0.043518 ./openmp/api_01

SKIP: openmp/parallel_01
========================

#Execution_time_in_seconds 0.048156 ./openmp/parallel_01

SKIP: openmp/parallel_02
========================

#Execution_time_in_seconds 0.048868 ./openmp/parallel_02

SKIP: openmp/parallel_03
========================

#Execution_time_in_seconds 0.050040 ./openmp/parallel_03

SKIP: openmp/parallel_barrier_01
================================

#Execution_time_in_seconds 0.036483 ./openmp/parallel_barrier_01

SKIP: openmp/parallel_master_01
===============================

#Execution_time_in_seconds 0.035507 ./openmp/parallel_master_01

SKIP: openmp/parallel_master_inline_01
======================================

#Execution_time_in_seconds 0.035630 ./openmp/parallel_master_inline_01

SKIP: openmp/parallel_single_wait_01
====================================

#Execution_time_in_seconds 0.035341 ./openmp/parallel_single_wait_01

SKIP: openmp/parallel_single_nowait_01
======================================

#Execution_time_in_seconds 0.035618 ./openmp/parallel_single_nowait_01

SKIP: openmp/parallel_single_inline_01
======================================

#Execution_time_in_seconds 0.034975 ./openmp/parallel_single_inline_01

SKIP: openmp/parallel_single_copyprivate_01
===========================================

#Execution_time_in_seconds 0.035131 ./openmp/parallel_single_copyprivate_01

SKIP: openmp/parallel_single_copyprivate_inline_01
==================================================

#Execution_time_in_seconds 0.035668 ./openmp/parallel_single_copyprivate_inline_01

SKIP: openmp/parallel_critical_01
=================================

#Execution_time_in_seconds 0.035382 ./openmp/parallel_critical_01

SKIP: openmp/parallel_critical_inline_01
========================================

#Execution_time_in_seconds 0.035764 ./openmp/parallel_critical_inline_01

SKIP: openmp/parallel_critical_named_01
=======================================

#Execution_time_in_seconds 0.035636 ./openmp/parallel_critical_named_01

SKIP: openmp/parallel_critical_named_inline_01
==============================================

#Execution_time_in_seconds 0.046206 ./openmp/parallel_critical_named_inline_01

SKIP: openmp/parallel_simple_lock_01
====================================

#Execution_time_in_seconds 0.056543 ./openmp/parallel_simple_lock_01

SKIP: openmp/parallel_nested_lock_01
====================================

#Execution_time_in_seconds 0.056992 ./openmp/parallel_nested_lock_01

SKIP: openmp/parallel_for_01
============================

#Execution_time_in_seconds 0.055715 ./openmp/parallel_for_01

SKIP: openmp/parallel_for_02
============================

#Execution_time_in_seconds 0.047313 ./openmp/parallel_for_02

SKIP: openmp/parallel_for_ordered_01
====================================

#Execution_time_in_seconds 0.036418 ./openmp/parallel_for_ordered_01

SKIP: openmp/parallel_sections_01
=================================

#Execution_time_in_seconds 0.035471 ./openmp/parallel_sections_01

SKIP: openmp/parallel_sections_combined_01
==========================================

#Execution_time_in_seconds 0.044922 ./openmp/parallel_sections_combined_01

SKIP: openmp/task_01
====================

#Execution_time_in_seconds 0.040446 ./openmp/task_01

SKIP: openmp/task_02
====================

#Execution_time_in_seconds 0.043067 ./openmp/task_02

SKIP: openmp/taskwait_01
========================

#Execution_time_in_seconds 0.053124 ./openmp/taskwait_01

SKIP: openmp/taskgroup_01
=========================

#Execution_time_in_seconds 0.045002 ./openmp/taskgroup_01

SKIP: openmp/taskgroup_02
=========================

#Execution_time_in_seconds 0.045065 ./openmp/taskgroup_02

SKIP: openmp/array_slice_01
===========================

#Execution_time_in_seconds 0.050750 ./openmp/array_slice_01

SKIP: openmp/cuda_task_01
=========================

#Execution_time_in_seconds 0.051132 ./openmp/cuda_task_01

SKIP: perfmodels/feed
=====================

[starpu][initialize_eager_center_policy] Warning: you are running the default eager scheduler, which is not very smart. Make sure to read the StarPU documentation about adding performance models in order to be able to use the dmda or dmdas scheduler instead.
#Execution_time_in_seconds 0.115788 ./perfmodels/feed



