
Re: [Cado-nfs-discuss] Fw:Re:Re: makefb not found in distributed environment


  • From: meng <qsmeng@126.com>
  • To: "Zimmermann Paul" <Paul.Zimmermann@inria.fr>
  • Cc: "cado-nfs-discuss@lists.gforge.inria.fr" <cado-nfs-discuss@lists.gforge.inria.fr>
  • Subject: Re: [Cado-nfs-discuss] Fw:Re:Re: makefb not found in distributed environment
  • Date: Sun, 28 Jul 2013 21:08:16 +0800 (CST)
  • List-archive: <http://lists.gforge.inria.fr/pipermail/cado-nfs-discuss>
  • List-id: A discussion list for Cado-NFS <cado-nfs-discuss.lists.gforge.inria.fr>

Dear Paul,
>> /home/greatnet/meng/cado-nfs/build/master/sieve/las -I 11 -rpowlim 2047 -apowlim 2047 -poly /tmp/c100.poly -fb /tmp/c100.roots -q0 2040000 -q1 2060000 -mt 2 -out /tmp/c100.rels.2040000-2060000.gz
>> the gdb result is given below :
>> #0  0x00000039f5a7a050 in memset () from /lib64/libc.so.6
>> #1  0x0000000000413c0b in init_rat_norms_bucket_region (S=0x6a0000 "", j=<value optimized out>, si=<value optimized out>)
>>     at /home/greatnet/meng/cado-nfs/sieve/las-norms.c:332
>> #2  0x000000000040980e in process_bucket_region (th=0x65fac0) at /home/greatnet/meng/cado-nfs/sieve/las.c:2858
>> #3  0x00000039f660683d in start_thread () from /lib64/libpthread.so.0
>> #4  0x00000039f5ad4fad in clone () from /lib64/libc.so.6
>> The version is 8efeb35.
I suspect this is related to a dynamic library, since compilation succeeds.
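To check this, one thing I plan to try (not done yet; the path is as on my machine) is listing the shared libraries the binary actually resolves at run time:

    ldd /home/greatnet/meng/cado-nfs/build/master/sieve/las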
>$ makefb -I 11 -poly c100.poly -alim 2000000 > c100.roots
I can reproduce this step, and the md5sum of c100.roots is the same as yours.
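For reference, the comparison was simply (c100.roots being the file produced by the command quoted above):

    md5sum c100.roots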
>$ las -I 11 -rpowlim 2047 -apowlim 2047 -poly /tmp/c100.poly -fb /tmp/c100.roots -q0 2040000 -q1 2060000 -mt 2 -out /tmp/c100.rels.2040000-2060000.gz -rlim 1000000 -alim 2000000 -lpbr 25 -lpba 25 -mfbr 50 -mfba 50 -rlambda 2.1 -alambda 2.2
I can't reproduce this step.
>First please checkout revision d7da39a (or later), recompile the "las" binary
>(maybe after removing the old one to be sure), and try again.
It still fails with this latest version. But it is strange that the old revision f784e49c succeeds. I wonder whether I can simply replace the sieve directory in the latest version with the sieve directory from that old revision.
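In case it is clearer in commands, what I have in mind is roughly the following (untried, and assuming f784e49c is the right revision id):

    git checkout f784e49c -- sieve/    # put the old sieve/ into the current tree
    rm -rf build/
    make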
I will solve this later.
Now, when I try MPI with version f78e49c, there is a problem.
After successfully installing openmpi-1.6.5 on both computers (I can run the example programs correctly), I do the following on both computers:
cp local.sh.example local.sh
vi local.sh, add MPI=1 and MPI=/usr/local/openmpi-1.6.5/, and save it.
rm -rf build/
make
vi mach_desc on the local host and add mpi=1 after the names of the two computers.
Then run cadofactor.pl. Finally the screen displays:

Info:Calling Block-Wiedemann...
Error:Command `/home/greatnet/cado-nfs/build/master/linalg/bwc/bwc.pl :complete seed=1 thr=2x2 mpi=2x1 hosts=node2,localhost mpi_extra_args='--mca btl_tcp_if_exclude lo,virbr0' matrix=/home/greatnet/c59/c59.small.bin nullspace=left mm_impl=bucket interleaving=0 interval=100 mn=64 wdir=/home/greatnet/c59/c59.bwc shuffled_product=1 bwc_bindir=/home/greatnet/cado-nfs/build/master/linalg/bwc ' terminated unexpectedly with exit status 1.
Error:STDOUT 'thr=2x2',
Error:STDOUT 'interval=100',
Error:STDOUT 'mpi=2x1',
Error:STDOUT 'seed=1',
Error:STDOUT 'mn=64',
Error:STDOUT 'interleaving=0',
Error:STDOUT 'splits=0,64',
Error:STDOUT 'ys=0..64',
Error:STDOUT 'matrix=/home/greatnet/c59/c59.small.bin'
Error:STDOUT ];
Error:STDERR --------------------------------------------------------------------------
Error:STDERR mpiexec was unable to launch the specified application as it could not access
Error:STDERR or execute an executable:
Error:STDERR
Error:STDERR Executable: /home/greatnet/cado-nfs/build/master/linalg/bwc/mf_bal
Error:STDERR Node: node2
Error:STDERR
Error:STDERR while attempting to start process rank 0.
Error:STDERR --------------------------------------------------------------------------
Error:STDERR /usr/local/openmpi-1.6.5/bin//mpiexec: exited with status 131
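One check I plan to make, in case node2 simply cannot see or execute the build tree at this path:

    ssh node2 ls -l /home/greatnet/cado-nfs/build/master/linalg/bwc/mf_bal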

Is there something I did wrong?
Thank you.
Regards,
Meng




