Sqoop basic usage, plus a Hive error fix
Import the MySQL table user_info into the HDFS path /test:
bin/sqoop import \
--connect jdbc:mysql://hadoop102:3306/gmall \
--username root \
--password 123456 \
--table user_info \
--columns id,login_name \
--where "id>=1 and id<=20" \
--target-dir /test \
--delete-target-dir \
--fields-terminated-by '\t' \
--num-mappers 2 \
--split-by id

Option notes:
--connect: gmall is the database name
--table / --columns / --where: table, columns, and row filter; together equivalent to --query "select id,login_name from user_info where id>=1 and id<=20 and \$CONDITIONS" (full sketch below; note the $ must be escaped inside double quotes)
--target-dir: HDFS output directory; --delete-target-dir removes it first if it already exists
--fields-terminated-by: field delimiter in the output files
--num-mappers 2 with --split-by id: split the import into two slices by id (roughly id=1..10 and id=11..20)
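A minimal sketch of the equivalent --query form, assuming the same cluster and credentials: with --query, Sqoop requires the $CONDITIONS placeholder (escaped as \$CONDITIONS inside double quotes) and an explicit --target-dir, and --split-by is still needed once --num-mappers is greater than 1.

bin/sqoop import \
--connect jdbc:mysql://hadoop102:3306/gmall \
--username root \
--password 123456 \
--query "select id,login_name from user_info where id>=1 and id<=20 and \$CONDITIONS" \
--target-dir /test \
--delete-target-dir \
--fields-terminated-by '\t' \
--num-mappers 2 \
--split-by id

With two mappers, each mapper writes one output file; a quick check of the result:

hadoop fs -ls /test                    # expect part-m-00000 and part-m-00001
hadoop fs -cat /test/part-m-* | head   # tab-separated id and login_name rows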
Export a Hive table (jichang here) into MySQL:
bin/sqoop export \
--connect jdbc:mysql://hadoop102:3306/bigdata \
--username root \
--password 123456 \
--table jichang \
--num-mappers 1 \
--export-dir /user/hive/warehouse/jichang \
--input-fields-terminated-by ","
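Note that sqoop export does not create the target table; it must already exist in MySQL with columns matching the exported fields. A sketch, where the id/name schema is an assumption to adjust to the actual Hive table:

# The column names and types below are placeholders; match them to the Hive table.
mysql -h hadoop102 -uroot -p123456 bigdata -e "
CREATE TABLE IF NOT EXISTS jichang (
    id   INT,
    name VARCHAR(64)
);"

# After the export finishes, spot-check the row count.
mysql -h hadoop102 -uroot -p123456 bigdata -e "SELECT COUNT(*) FROM jichang;"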
Addendum: a Hive error
FAILED: HiveException java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
Fix: make sure the Hadoop cluster is running, then start the metastore service:
nohup bin/hive --service metastore >> logs/metastore.log 2>&1 &
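This error typically means the Hive client cannot reach the metastore (cluster down or metastore service not started). To confirm the service came up, check the process and its listening port; 9083 is assumed here as the default hive.metastore.port:

# Confirm the metastore process is alive ([m] avoids matching the grep itself).
ps -ef | grep -i '[m]etastore'

# The metastore thrift service listens on 9083 by default (hive.metastore.port).
netstat -lnpt 2>/dev/null | grep 9083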