[R] classification for huge datasets: SVM yields memory troubles

Andreas wolf.privat at gmx.de
Mon Dec 13 21:56:22 CET 2004


Hi,

I'm a beginner with the SVM module, but I have seen that there is a parameter
called:

cachesize    # cache memory in MB (default 40)
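
For illustration, here is a minimal sketch of how that parameter can be passed
(this assumes the svm() function from the e1071 package; the toy data below
only mimics the shape of the problem described in the original post and is not
real data):

library(e1071)

## toy data: few observations, many variables, two groups
set.seed(1)
x <- matrix(rnorm(30 * 30000), nrow = 30)
y <- factor(rep(c("g1", "g2"), each = 15))

## raise the kernel cache from the 40 MB default to, say, 500 MB
fit <- svm(x, y, kernel = "linear", cachesize = 500)
summary(fit)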

Please let me know whether this parameter solves your problem; I may have to
deal with a similar number of samples in the near future.

Regards, Andreas

"Christoph Lehmann" <christoph.lehmann at gmx.ch> schrieb im Newsbeitrag
news:41BD8A9F.4040509 at gmx.ch...
> Hi
> I have a matrix with 30 observations and roughly 30000 variables, and each
> observation belongs to one of two groups. With svm and slda I run into
> memory trouble ('cannot allocate vector of size' roughly 2G). PCA followed
> by LDA runs fine. Is there any way to work around the memory issue with
> SVMs? Or can you recommend any other classification method for such huge
> datasets?
>
>
> P.S. I run SuSE 9.1 on a P IV machine with 2 GB of RAM.
> Thanks for a hint.
>
> Christoph
>



