Good Reference for Efficient R – TonyStark®


This evening I was reading Norman Matloff’s excellent book, The Art of R Programming. He mentions,

If you are adding rows or columns one at a time within a loop, and the matrix will eventually become large, it’s better to allocate a large matrix in the first place.

The reason is that each time the matrix grows, R must allocate a fresh block of memory and copy the old contents into it, and this can be expensive. But just how much of a performance concern is it? To answer that question, I wrote a simple example.
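The original example code is not preserved in this excerpt, but a minimal sketch of the kind of benchmark described (the function names and sizes here are my own, not the author's) could look like this:

```r
# Sketch of the comparison described above (not the original code).
# Growing a matrix one row at a time forces R to reallocate and copy
# the entire matrix on every iteration.
grow_matrix <- function(n, k) {
  m <- matrix(nrow = 0, ncol = k)
  for (i in 1:n) {
    m <- rbind(m, rnorm(k))   # reallocates and copies m each time
  }
  m
}

# Preallocating once means each iteration only writes into memory
# that already exists.
prealloc_matrix <- function(n, k) {
  m <- matrix(NA_real_, nrow = n, ncol = k)
  for (i in 1:n) {
    m[i, ] <- rnorm(k)        # in-place row assignment, no reallocation
  }
  m
}

# Try a smaller size first; the gap widens rapidly as n grows.
system.time(grow_matrix(10000, 10))
system.time(prealloc_matrix(10000, 10))
```

Both functions produce a matrix of the same shape; only the allocation pattern differs, which is exactly what the timings isolate.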

Check out the results when building a matrix with 10 columns and one hundred thousand rows: 1 second versus 22 minutes!

And how long does it take to create this matrix the proper way, without using a for loop at all?
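The original one-liner and its timing output were lost from this excerpt, but the fully vectorized approach being described (dimensions match the example above; this is my reconstruction, not the author's code) would be along these lines:

```r
# Sketch (not the original code): generate all the values in a single
# rnorm() call and shape them into the matrix at once. No loop, and
# only one allocation.
system.time(m <- matrix(rnorm(100000 * 10), nrow = 100000, ncol = 10))
```

On typical hardware this completes in a small fraction of a second, which is why avoiding the loop entirely is the preferred approach when the values can be generated vectorized.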

Thanks to Matt Leonawicz for the blog post on…

