Let's use a type mapping to improve our data import.
Delete any existing output directory:
$ hadoop fs -rmr employees
Execute Sqoop with an explicit type mapping:
$ sqoop import --connect jdbc:mysql://10.0.0.100/hadooptest \
    --username hadoopuser -P --table employees \
    --hive-import --hive-table employees \
    --map-column-hive start_date=timestamp
You will receive the following response:
12/05/23 14:53:38 INFO hive.HiveImport: Hive import complete.
Examine the created table definition:
$ hive -e "describe employees"
You will receive the following response:
OK
first_name	string
dept	string
salary	int
start_date	timestamp
Time taken: 2.547 seconds
Examine the imported data:
$ hive -e "select * from employees"
You will receive the following response:
OK
Failed with exception java.io.IOException:java.lang.IllegalArgumentException: Timestamp format must be yyyy-mm-dd hh:mm:ss[.fffffffff]
Time taken: 2.73 seconds
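The failure is a format mismatch: the column is now declared as a Hive TIMESTAMP, which must parse as yyyy-mm-dd hh:mm:ss[.fffffffff], but the values Sqoop imported from the MySQL DATE column are date-only strings with no time portion. The sketch below illustrates the mismatch with Python's strptime; it mimics the parsing rule, it is not Hive's actual parser, and the sample values are hypothetical.

```python
from datetime import datetime

# Approximation of Hive's TIMESTAMP pattern yyyy-MM-dd HH:mm:ss
# (fractional seconds omitted for simplicity).
HIVE_TS_FORMAT = "%Y-%m-%d %H:%M:%S"

def parses_as_hive_timestamp(value: str) -> bool:
    """Return True if the string matches the timestamp pattern."""
    try:
        datetime.strptime(value, HIVE_TS_FORMAT)
        return True
    except ValueError:
        return False

# A date-only value, as exported from a MySQL DATE column, fails:
print(parses_as_hive_timestamp("2002-02-02"))           # False
# A full date-and-time value succeeds:
print(parses_as_hive_timestamp("2002-02-02 00:00:00"))  # True
```

This is why the table definition looks correct but the select fails: the metadata was changed by --map-column-hive, while the underlying data files still hold the date-only strings.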