HDFS-related commands
1. On the Spark node, switch to the hdfs user: su hdfs
2. List a directory: hdfs dfs -ls /zxvmax/telecom/lte/netmaxl/nbi
3. Push data to HDFS:
When creating the table, the textfile format is recommended for testing (it is human-readable and easy to load by hand):
STORED AS textfile LOCATION '/zxvmax/telecom/lte/subject/lte_function_ems_pm_cellthrput/';
① Create the partition in Spark SQL:
alter table lte_function_ems_pm_cellthrput add partition (p_provincecode=510000,p_date='2016-08-16',p_hour=11);
show partitions lte_function_ems_pm_cellthrput;   -- list the table's partitions
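When many hour partitions need to be registered, the ALTER TABLE statements can be generated mechanically instead of typed one by one. A minimal Python sketch (it only produces SQL text for the table from the example above; the helper name is my own, and the output is meant to be run in Spark SQL):

```python
# Sketch: generate ALTER TABLE ... ADD PARTITION statements for a
# range of hours on one day.  Produces SQL text only; it does not
# talk to Spark or HDFS itself.

TABLE = "lte_function_ems_pm_cellthrput"

def add_partition_sql(province, date, hour):
    """Build one ADD PARTITION statement in the same form as the example above."""
    return (f"alter table {TABLE} add partition "
            f"(p_provincecode={province},p_date='{date}',p_hour={hour});")

# Generate statements for hours 0..2 of one day.
stmts = [add_partition_sql(510000, "2016-08-16", h) for h in range(3)]
for s in stmts:
    print(s)
```

Each printed line is a complete statement, so the output can be pasted into the Spark SQL shell or fed to it as a script.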
② Push the data file to HDFS:
Contents of 1.txt (Unix line endings, UTF-8):
2016-03-05,634887,1,31.11,32.11
hdfs dfs -put /home/10192057/1.txt /zxvmax/telecom/lte/subject/lte_function_ems_pm_cellthrput/p_provincecode=510000/p_date=2016-03-05/p_hour=1
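The -put target above follows Hive's partition directory layout: each partition column becomes a `key=value` path segment under the table's LOCATION, in partition-column order. A small Python sketch of that mapping (the helper name is my own; the root path is the LOCATION from the table definition above):

```python
# Sketch: build the HDFS target directory for a Hive-style partition.
# Each partition column maps to one "key=value" path segment, joined
# under the table's root LOCATION in partition-column order.

def partition_path(table_root, **parts):
    """Join the table root with key=value segments for each partition column."""
    segments = [f"{k}={v}" for k, v in parts.items()]
    return "/".join([table_root.rstrip("/")] + segments)

root = "/zxvmax/telecom/lte/subject/lte_function_ems_pm_cellthrput"
target = partition_path(root,
                        p_provincecode=510000,
                        p_date="2016-03-05",
                        p_hour=1)
print(target)
# /zxvmax/telecom/lte/subject/lte_function_ems_pm_cellthrput/p_provincecode=510000/p_date=2016-03-05/p_hour=1
```

Note that the directory only becomes a queryable partition after the matching ALTER TABLE ... ADD PARTITION statement has been run; putting the file alone is not enough.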
③ Data can also be deleted:
hdfs dfs -rm /zxvmax/telecom/lte/subject/lte_function_ems_pm_cellthrput/p_provincecode=510000/p_date=2016-03-05/p_hour=1/1.txt
hdfs dfs -rm -r <directory>   (recursively delete a directory)
hadoop fs -rm ...   (hadoop fs is the older, filesystem-generic equivalent of hdfs dfs)
④ View the storage path the table was created with:
desc formatted lte_qcell_prru_location;
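desc formatted prints a block of key/value metadata, and the storage path appears on the Location: line. A small Python sketch that extracts it from captured output (the sample text below is an illustrative excerpt in the typical Hive/Spark layout, not real command output):

```python
# Sketch: pull the Location line out of `desc formatted` output.
# sample_output is a made-up excerpt in the usual key/value format.

sample_output = """\
# Detailed Table Information
Database:           default
Table:              lte_qcell_prru_location
Location:           hdfs://ns1/zxvmax/telecom/lte/subject/lte_qcell_prru_location
Table Type:         EXTERNAL_TABLE
"""

def table_location(desc_text):
    """Return the value of the first 'Location:' line, or None if absent."""
    for line in desc_text.splitlines():
        if line.startswith("Location:"):
            # Split only on the first colon so the hdfs:// URI stays intact.
            return line.split(":", 1)[1].strip()
    return None

print(table_location(sample_output))
```

In practice the same idea works on the output of `spark-sql -e "desc formatted <table>"` captured into a file or pipe.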