You probably won't spend as much time getting the data as you will getting it into shape. Raw data is often inconsistent, duplicated, or full of holes. You have to fix it before it's usable.
This is an iterative, interactive process. If the dataset is very large, I may create a sample to work with at this stage. Generally, I start by examining the data files. Once I find a problem, I code a solution and run it against the dataset. After each change, I archive the data, either in a ZIP file or, if the files are small enough, in Git (http://git-scm.com/) or another version control system. A version control system is nice because I can track the transformation code alongside the data itself and include comments about what I'm doing. Then I look at the data again, and the whole process starts over. Even once I've moved on to analyzing the data, I may find more issues or need to reshape the data to make it easier to analyze, and I return to this cleaning step.
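As a minimal sketch of the first two steps, here is one way to sample a large file without loading it all into memory (reservoir sampling) and then apply a simple cleaning pass that drops exact duplicates and rows with empty fields. The data, column names, and cleaning rules are hypothetical stand-ins, not a prescription:

```python
import csv
import io
import random

def sample_rows(reader, k, seed=0):
    """Reservoir-sample k rows: works in one pass over an arbitrarily large file."""
    rng = random.Random(seed)
    sample = []
    for i, row in enumerate(reader):
        if i < k:
            sample.append(row)
        else:
            # Replace an existing sample member with probability k/(i+1).
            j = rng.randrange(i + 1)
            if j < k:
                sample[j] = row
    return sample

def clean(rows):
    """Drop exact duplicate rows and rows containing empty fields."""
    seen = set()
    out = []
    for row in rows:
        key = tuple(row)
        if key in seen or any(not field.strip() for field in row):
            continue
        seen.add(key)
        out.append(row)
    return out

# Hypothetical messy data standing in for a real file on disk.
raw = io.StringIO("id,name\n1,Ada\n1,Ada\n2,\n3,Grace\n")
rows = list(csv.reader(raw))
header, body = rows[0], rows[1:]
cleaned = clean(sample_rows(body, k=3))
```

Because each fix is a small, rerunnable function, the whole script can be committed to version control right next to the data it produced, which is what makes the archive-and-iterate loop above practical.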