Re: [Starpu-devel] Segfault occurs when using openMPI memory pinning


  • From: Florent Pruvost <florent.pruvost@inria.fr>
  • To: starpu-devel@lists.gforge.inria.fr
  • Subject: Re: [Starpu-devel] Segfault occurs when using openMPI memory pinning
  • Date: Wed, 06 May 2015 15:13:17 +0200
  • List-archive: <http://lists.gforge.inria.fr/pipermail/starpu-devel/>
  • List-id: "Developers list. For discussion of new features, code changes, etc." <starpu-devel.lists.gforge.inria.fr>

Hi,

Concerning the MPI bug, I'm able to reproduce it on PlaFRIM.

Modules on PlaFRIM:

1) compiler/gcc/stable
2) lib/gmp/current
3) compiler/mkl/stable
4) mpi/openmpi/stable
5) trace/fxt/latest
6) hardware/hwloc/1.9
7) build/cmake/2.8.8
8) linalg/plasma/latest
9) tools/ddt/stable
10) build/ac269-am114-lt242-m41417
11) editor/emacs/24.3

Build StarPU
----------------

svn: https://scm.gforge.inria.fr/svn/starpu/branches/starpu-1.1
Rev: 15434

$ ../configure --disable-build-doc --disable-gcc-extensions --disable-opencl --prefix=/home/pruvost/work/install/starpu-1.1/mpi/gnu --enable-debug --with-fxt --disable-cuda CC=gcc CXX=g++ F77=gfortran --no-create --no-recursion

$ make -j4 && make install
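
(Side note, not in the original message: a quick way to check that this install is the one picked up later is to query the pkg-config files StarPU ships; the PKG_CONFIG_PATH below is derived from the --prefix used above, and the module name starpumpi-1.1 is an assumption based on the usual StarPU 1.1 naming.)

$ export PKG_CONFIG_PATH=/home/pruvost/work/install/starpu-1.1/mpi/gnu/lib/pkgconfig:$PKG_CONFIG_PATH
$ pkg-config --modversion starpumpi-1.1    # should print the version of the StarPU build above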

Build Chameleon:
-----------------------

svn: svn+ssh://fpruvost@scm.gforge.inria.fr/svnroot/morse/trunk/chameleon
rev: 2219

$ cmake .. -DSTARPU_DIR=/home/pruvost/work/install/starpu-1.1/mpi/gnu -DCHAMELEON_USE_MPI=ON

$ make -j4


Testcase:
-----------

$ qsub -IX -l nodes=4:ppn=8

$ mpirun -pernode OMPI_MCA_mpi_leave_pinned=1 ./timing/time_dpotrf_tile --n_range=20000:20000:1 --nb=320 --nowarmup --p=2

bug = segfault
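
(For reference, not part of the original run: OMPI_MCA_mpi_leave_pinned=1 is the environment-variable spelling of OpenMPI's mpi_leave_pinned MCA parameter, so the same test can also be launched by passing the setting through mpirun itself; a sketch of the equivalent invocations, everything else unchanged:)

# pass the MCA parameter explicitly to mpirun
$ mpirun -pernode --mca mpi_leave_pinned 1 ./timing/time_dpotrf_tile --n_range=20000:20000:1 --nb=320 --nowarmup --p=2

# or export the environment variable to the remote ranks
$ mpirun -pernode -x OMPI_MCA_mpi_leave_pinned=1 ./timing/time_dpotrf_tile --n_range=20000:20000:1 --nb=320 --nowarmup --p=2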


Backtrace:
--------------

# gdb
$ mpirun -pernode screen -L -m -D env LD_LIBRARY_PATH=$LD_LIBRARY_PATH OMPI_MCA_mpi_leave_pinned=1 gdb --args ./timing/time_dpotrf_tile --n_range=20000:20000:1 --nb=320 --nowarmup --p=2

Locally, from my laptop:

# ssh connection to all the nodes reserved with qsub
$ cssh fourmi001 fourmi002 fourmi003 fourmi004

# in cssh:
$ screen -r

# in gdb, on all nodes simultaneously thanks to cssh:
(gdb) r
(gdb) backtrace
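
(A possible non-interactive variant, not used in the original session: gdb can be driven in batch mode so that each rank runs until the crash and prints its own backtrace, which avoids typing the commands on every node through cssh. A sketch, reusing the same env wrapper as above; the backtraces of all ranks end up interleaved on mpirun's standard output:)

# run every rank under gdb in batch mode; the backtrace is printed automatically after the crash
$ mpirun -pernode env LD_LIBRARY_PATH=$LD_LIBRARY_PATH OMPI_MCA_mpi_leave_pinned=1 gdb -batch -ex run -ex backtrace --args ./timing/time_dpotrf_tile --n_range=20000:20000:1 --nb=320 --nowarmup --p=2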

An MPI process is down:

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fffe2709700 (LWP 377)]
0x00007ffff11311f3 in mca_rcache_vma_delete ()
from /opt/cluster/plafrim2/apps/mpi/openmpi/1.6.5/gcc/lib/openmpi/mca_rcache_vma.so
Missing separate debuginfos, use: debuginfo-install glibc-2.12-1.107.el6_4.5.x86_64 infinipath-psm-3.0.1-115.1015_open.1.1.el6_4.x86_64 libpciaccess-0.13.1-2.el6.x86_64 libxml2-2.7.6-12.el6_4.1.x86_64 numactl-2.0.7-6.el6.x86_64 zlib-1.2.3-29.el6.x86_64
(gdb) backtrace
#0 0x00007ffff11311f3 in mca_rcache_vma_delete ()
from /opt/cluster/plafrim2/apps/mpi/openmpi/1.6.5/gcc/lib/openmpi/mca_rcache_vma.so
#1 0x00007ffff0d2ccc3 in mca_mpool_rdma_register ()
from /opt/cluster/plafrim2/apps/mpi/openmpi/1.6.5/gcc/lib/openmpi/mca_mpool_rdma.so
#2 0x00007fffef482e9b in mca_btl_openib_prepare_dst ()
from /opt/cluster/plafrim2/apps/mpi/openmpi/1.6.5/gcc/lib/openmpi/mca_btl_openib.so
#3 0x00007ffff00dc7a9 in mca_pml_ob1_recv_request_get_frag ()
at ../../../../ompi/mca/bml/bml.h:357
#4 0x00007ffff00db281 in mca_pml_ob1_recv_frag_callback_rget ()
at pml_ob1_recvfrag.c:650
#5 0x00007fffef48b0c4 in btl_openib_handle_incoming ()
at btl_openib_component.c:3131
#6 0x00007fffef48c9d1 in btl_openib_component_progress ()
at btl_openib_component.c:3679
#7 0x00007ffff449446a in opal_progress ()
from /opt/cluster/plafrim2/apps/mpi/openmpi/1.6.5/gcc/lib/libmpi.so.1
#8 0x00007ffff43e26eb in ompi_request_default_test ()
from /opt/cluster/plafrim2/apps/mpi/openmpi/1.6.5/gcc/lib/libmpi.so.1
#9 0x00007ffff44002d9 in PMPI_Test ()
from /opt/cluster/plafrim2/apps/mpi/openmpi/1.6.5/gcc/lib/libmpi.so.1
#10 0x00007ffff7bbc313 in _starpu_mpi_test_detached_requests ()
at ../../../mpi/src/starpu_mpi.c:1052
#11 0x00007ffff7bbde12 in _starpu_mpi_progress_thread_func (arg=0x7f9c40)
at ../../../mpi/src/starpu_mpi.c:1326
#12 0x00007ffff5105851 in start_thread () from /lib64/libpthread.so.0
#13 0x00007ffff345d94d in clone () from /lib64/libc.so.6

Good luck!

Florent


On 06/05/2015 11:40, Nathalie Furmento wrote:
Marc,


Are you able to reproduce the bug on the PlaFRIM platform?

If yes, could you provide the list of modules you are using, the queue, the options you gave when compiling StarPU and Chameleon, the application you are running, and all other relevant information needed to reproduce the bug?

And what does "StarPU 1.1 r15399 behaves the same as 1.2" mean?

Cheers,

Nathalie


On 06/05/2015 11:30, Marc Sergent wrote:
Hello,

I want to perform a distributed DPOTRF on 4 nodes with the Chameleon solver on top of StarPU, with OpenMPI memory pinning activated (export OMPI_MCA_mpi_leave_pinned_pipeline=1). My test case is N=65536, NB=512, on 4 heterogeneous nodes (8 CPUs + 2 GPUs), with Chameleon (r2201) on top of StarPU 1.2 (r15399). I ran my experiments on the TGCC Curie platform with BullxMPI 1.2.8.2. StarPU 1.1 r15399 behaves the same as 1.2, and the segfault can also be reproduced with CPU-only nodes.

[curie7065:07257] *** Process received signal ***
[curie7065:07257] Signal: Segmentation fault (11)
[curie7065:07257] Signal code: Address not mapped (1)
[curie7065:07257] Failing at address: 0x40
[curie7065:07257] [ 0] /lib64/libpthread.so.0(+0xf710) [0x2b1af7ed9710]
[curie7065:07257] [ 1] /opt/mpi/bullxmpi/1.2.8.2/lib/bullxmpi/mca_rcache_vma.so(mca_rcache_vma_delete+0x1b) [0x2b1affb0703b]
[curie7065:07257] [ 2] /opt/mpi/bullxmpi/1.2.8.2/lib/bullxmpi/mca_mpool_rdma.so(mca_mpool_rdma_register+0xe4) [0x2b1afff0ca84]
[curie7065:07257] [ 3] /opt/mpi/bullxmpi/1.2.8.2/lib/bullxmpi/mca_pml_ob1.so(mca_pml_ob1_rdma_btls+0x13a) [0x2b1b0095119a]
[curie7065:07257] [ 4] /opt/mpi/bullxmpi/1.2.8.2/lib/bullxmpi/mca_pml_ob1.so(mca_pml_ob1_isend+0x8ba) [0x2b1b0095085a]
[curie7065:07257] [ 5] /opt/mpi/bullxmpi/1.2.8.2/lib/libmpi.so.1(MPI_Isend+0xff) [0x2b1afa48ca8f]
[curie7065:07257] [ 6] /ccc/cont003/home/gen1567/sergentm/libs/lib/libstarpumpi-1.1.so.2(+0x4c21) [0x2b1af018cc21]
[curie7065:07257] [ 7] /ccc/cont003/home/gen1567/sergentm/libs/lib/libstarpumpi-1.1.so.2(+0x9b4a) [0x2b1af0191b4a]
[curie7065:07257] [ 8] /lib64/libpthread.so.0(+0x79d1) [0x2b1af7ed19d1]
[curie7065:07257] [ 9] /lib64/libc.so.6(clone+0x6d) [0x2b1afcb318fd]

The segfault can also occur in other MPI routines (mostly Irecv and Test). I tried to attach gdb to capture a backtrace, but the segfault did not occur while the debugger was attached.

I have also sometimes seen this kind of message, but I have not been able to catch a backtrace from that point:

[[37962,1],1][btl_openib_component.c:3544:handle_wc] from curie7136 to: curie7138 error polling LP CQ with status LOCAL PROTOCOL ERROR status number 4 for wr_id 2b813a9fb550 opcode 128 vendor error 84 qp_idx 3
[curie7136:7161] Attempt to free memory that is still in use by an ongoing MPI communication (buffer 0x2b816d80c000, size 2101248). MPI job will now abort.
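
(Not something tried in this thread: when the crash refuses to show up under an attached debugger, a post-mortem backtrace can sometimes be recovered from a core dump instead. A minimal sketch, assuming core dumps are allowed on the compute nodes and that the limit has to be raised inside each rank's own environment; the angle-bracket placeholders stand for the usual mpirun options and application arguments:)

# wrap each rank so the core-size limit is raised before the binary starts
$ mpirun <usual options> bash -c 'ulimit -c unlimited && exec ./timing/time_dpotrf_tile <usual arguments>'

# once a rank has crashed, open its core file post mortem
$ gdb ./timing/time_dpotrf_tile <core file>
(gdb) backtrace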

Do you have any idea of what is happening?

Thanks in advance,
Marc Sergent



_______________________________________________
Starpu-devel mailing list
Starpu-devel@lists.gforge.inria.fr
http://lists.gforge.inria.fr/cgi-bin/mailman/listinfo/starpu-devel




