[R] R 2.1.0 RH Linux Built from Source Segmentation Fault
Peter Dalgaard
p.dalgaard at biostat.ku.dk
Fri May 20 09:04:08 CEST 2005
Bruce Foster <bef at northwestern.edu> writes:
...
> The machines are AMD Athlon MP 2400+ with 2 GB RAM, dual CPUs, and
> lots of free disk space.
Are there any per-user or per-process limits? Resource usage looks
suspiciously close to 256M. Also, if your installation allows
overcommitment of memory, the OS can kill processes at unpredictable
times.
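One quick way to check both of these on the affected machines might be the following (the /proc interface and its values are Linux-specific; defaults vary by kernel and distribution):

```shell
# Per-process memory limits for the current shell/user.
# "unlimited" rules out a shell-imposed cap; a number near 262144 kB
# (256M) would match the suspicious ceiling noted above.
ulimit -v

# Kernel overcommit policy: 0 = heuristic overcommit (the usual default),
# 1 = always overcommit, 2 = strict accounting. With 0 or 1 the OOM
# killer can terminate a long-running job at an unpredictable point.
cat /proc/sys/vm/overcommit_memory
```

If `ulimit -v` reports a finite value, the limit may also be set system-wide in `/etc/security/limits.conf` rather than in the user's shell startup files.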
> I've got a user running Monte Carlo codes that fail with segmentation
> faults on a frequent basis. The jobs run for a long time (up to a day)
> before failure.
>
> If a failed job is rerun, chances are high that it will run to completion.
>
> I'm at a loss about approaching this problem. R (as it is here)
> doesn't seem to give much of a hint as to where things are when it
> crashes.
>
> I'm looking for some guidance to diagnose this problem so we can focus
> on a solution.
(A) Use set.seed(...) to get a fixed sequence of random numbers. If it
still fails unpredictably, my bet is that it is a resource problem.
(B) Once you have a case that fails predictably, run it under a
debugger and try to work back to the point of failure. There are
various debugging tricks you can use, but just get there first and
show us a stack backtrace from the failure point (the bt command in
gdb).
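A minimal sketch of that workflow on Linux, assuming a source build of R (the path to the R executable is illustrative; use the `bin/exec/R` binary under your own R installation):

```shell
# Allow core files to be written before rerunning the failing job.
ulimit -c unlimited
ulimit -c            # verify: should print "unlimited"

# After the job segfaults, a "core" (or "core.<pid>") file appears in
# the working directory. Load it together with the R binary:
#   gdb /usr/local/lib/R/bin/exec/R core
#   (gdb) bt         # stack backtrace at the failure point
#
# Alternatively, run R under the debugger from the start:
#   R -d gdb
#   (gdb) run        # then reproduce the crash and type "bt"
```

Running interactively under `R -d gdb` is slower but catches the fault the moment it happens, which is often easier than working from a day-old core file.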
For more detailed guidance you should probably move the discussion to
the r-devel list.
--
O__ ---- Peter Dalgaard Blegdamsvej 3
c/ /'_ --- Dept. of Biostatistics 2200 Cph. N
(*) \(*) -- University of Copenhagen Denmark Ph: (+45) 35327918
~~~~~~~~~~ - (p.dalgaard at biostat.ku.dk) FAX: (+45) 35327907