In this section, we will illustrate how to load large datasets directly from a URL with the help of the ff package, and how to use the biglm package to fit a linear regression model to datasets that are larger than the available memory. The biglm package can handle such datasets because it never loads all of the data into RAM at once: it loads the data in chunks, processes the current chunk to update the sufficient statistics required for the model, discards that chunk, and loads the next one. This process is repeated until all of the data has been incorporated into the calculation.
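The chunk-wise updating described above can be sketched with biglm's `update()` method. The sketch below is illustrative only: it splits the built-in `mtcars` data frame into four pieces as a stand-in for chunks streamed from an out-of-memory file (in the ff workflow, the chunks would instead come from an ffdf object), and the formula and chunk count are arbitrary choices, not from the text.

```r
## Minimal sketch of biglm's chunk-wise model updating.
## mtcars and the formula mpg ~ wt + hp are placeholders standing in
## for a dataset too large to fit in memory.
library(biglm)

# Split the rows into 4 interleaved chunks (a stand-in for reading
# successive blocks of a large file).
chunks <- split(mtcars, rep(1:4, length.out = nrow(mtcars)))

# Fit the model on the first chunk; biglm keeps only the compact
# sufficient statistics, not the data itself.
fit <- biglm(mpg ~ wt + hp, data = chunks[[1]])

# Feed in the remaining chunks one at a time; each call updates the
# sufficient statistics and the chunk can then be discarded.
for (chunk in chunks[-1]) {
  fit <- update(fit, moredata = chunk)
}

summary(fit)
```

Because biglm accumulates an incremental QR decomposition, the final coefficients are identical to those from fitting `lm(mpg ~ wt + hp, data = mtcars)` on the full dataset in one pass.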
The following example models the unemployment compensation amount as a linear function of a few socio-economic variables.