Some of the methods for moving data in and out of Databricks have already been explained in Chapter 8, Spark Databricks, and Chapter 9, Databricks Visualization. In this section, I would like to provide an overview of all of the methods available for moving data, examining the options for tables, workspaces, jobs, and Spark code.
The table import functionality of the Databricks cloud allows data to be imported from an AWS S3 bucket, from the Databricks file system (DBFS), via JDBC, and finally from a local file. This section gives an overview of each type of import, starting with S3. Importing table data from AWS S3 requires the AWS access key, the AWS secret key, and the S3 bucket name. The following screenshot shows an example. I have already provided an example of S3 bucket creation, including adding an access policy, so I will not cover it again.
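The same S3 access can also be exercised programmatically from a notebook once the credentials are known. The following is only a minimal Scala sketch; the access key, secret key, bucket name, and file path shown are placeholders for illustration, not values taken from the import form:

// Minimal sketch: reading a file from S3 inside a Databricks Scala notebook,
// where the SparkContext is already available as sc. The credential values,
// bucket name, and file path below are placeholders.

val awsAccessKey = "YOUR_AWS_ACCESS_KEY"   // placeholder
val awsSecretKey = "YOUR_AWS_SECRET_KEY"   // placeholder
val bucketName   = "your-s3-bucket"        // placeholder

// Pass the credentials to Hadoop's S3 connector.
sc.hadoopConfiguration.set("fs.s3n.awsAccessKeyId", awsAccessKey)
sc.hadoopConfiguration.set("fs.s3n.awsSecretAccessKey", awsSecretKey)

// Read a text file from the bucket into an RDD and print the first few lines.
val rawData = sc.textFile(s"s3n://$bucketName/data/sample.csv")
rawData.take(5).foreach(println)

This is simply a point of reference for scripted access; the table import form itself asks for the same three pieces of information.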
Once the form details are added, you will be able to browse your S3 bucket for a data source. Selecting DBFS...