hadoop-spark cluster installation --- 6. Sqoop import and export
1. Preparation
Upload the sqoop tarball to node01
cd /tools
tar -zxvf sqoop-1.4.6.bin__hadoop-2.0.4-alpha.tar.gz -C /ren
cd /ren
mv sqoop-1.4.6.bin__hadoop-2.0.4-alpha sqoop-1.4.6
vi /etc/profile
export SQOOP_HOME=/ren/sqoop-1.4.6
export PATH=$PATH:$SQOOP_HOME/bin
source /etc/profile
2. Configuration
Copy the JDBC driver into sqoop's lib directory: cp /root/mysql-connector-java-5.1.6-bin.jar /ren/sqoop-1.4.6/lib/
cd /ren/sqoop-1.4.6/conf
mv sqoop-env-template.sh sqoop-env.sh
vi sqoop-env.sh
Set the following:
export HADOOP_COMMON_HOME=/ren/hadoop-2.7.3
export HADOOP_MAPRED_HOME=/ren/hadoop-2.7.3
export HIVE_HOME=/ren/hive-1.2.1
export ZOOCFGDIR=/ren/zookeeper-3.4.9
3. Synchronize and start
scp -r /ren/sqoop-1.4.6 root@node02:/ren/
scp -r /ren/sqoop-1.4.6 root@node03:/ren/
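If sqoop will also be run from node02 and node03, the /etc/profile changes from step 1 must be replicated there too. A minimal sketch, assuming /etc/profile is meant to be identical on all three nodes:
scp /etc/profile root@node02:/etc/
scp /etc/profile root@node03:/etc/
Then run source /etc/profile in any already-open shell on those nodes; new logins pick it up automatically.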
Run sqoop version to check the installed version.
List the tables in a database: sqoop list-tables --connect xxx --username xxx --password xxx
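For example, against the MySQL instance used in the examples of section 4:
sqoop list-tables --connect jdbc:mysql://123.59.135.103:4306/crub --username crub --password crub@mfb.2016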
If a dependency check error is reported on startup, comment out the corresponding dependency-check lines in bin/configure-sqoop.
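The checks in question look roughly like the block below (based on the shape of bin/configure-sqoop in 1.4.x; exact wording varies by version). Prefixing each line with # silences the warning:
#if [ ! -d "${HBASE_HOME}" ]; then
#  echo "Warning: $HBASE_HOME does not exist! HBase imports will fail."
#  echo 'Please set $HBASE_HOME to the root of your HBase installation.'
#fi
Similar blocks exist for HCAT_HOME, ACCUMULO_HOME and ZOOKEEPER_HOME.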
4. Example
Create the Hive table: sqoop create-hive-table --connect jdbc:mysql://123.59.135.103:4306/crub --username crub --password crub@mfb.2016 --table user_grow_path_d --hive-table user
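To confirm the table was created with the expected schema (assuming the hive CLI is on the PATH):
hive -e 'desc user'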
Import into Hive: sqoop import --connect jdbc:mysql://123.59.135.103:4306/crub --username crub --password crub@mfb.2016 --table user_grow_path_d --hive-table user --hive-import --hive-overwrite --direct
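This section's title also covers export; a minimal export sketch in the opposite direction, assuming the Hive table's data lives under /user/hive/warehouse/user with Hive's default \001 field delimiter (both are assumptions, check your warehouse path and delimiter first):
sqoop export --connect jdbc:mysql://123.59.135.103:4306/crub --username crub --password crub@mfb.2016 --table user_grow_path_d --export-dir /user/hive/warehouse/user --input-fields-terminated-by '\001'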
If the error "mysqldump not found" appears (Error: java.io.IOException: Cannot run program "mysqldump": error=2, No such file or directory), it is because --direct mode shells out to mysqldump on each node; install the MySQL client on every datanode.
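A minimal sketch, assuming the datanodes run CentOS with yum (the mysql client package includes mysqldump); run on each datanode:
yum install -y mysql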
