Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try.
Error message: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[192.168.13.130:50010,DS-d105d41c-49cc-48b9-8beb-28058c2a03f7,DISK]], original=[DatanodeInfoWithStorage[192.168.13.130:50010,DS-d105d41c-49cc-48b9-8beb-28058c2a03f7,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration
This error appeared while I was appending a local file to a txt file on HDFS; when I ran the append a second time, the error message changed.

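For context, here is a minimal sketch of the kind of append that triggers this error, assuming the Java HDFS client API; the NameNode address and both file paths are hypothetical placeholders:

import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class HdfsAppendExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical NameNode address; replace with your own cluster's
        conf.set("fs.defaultFS", "hdfs://192.168.13.130:9000");

        try (FileSystem fs = FileSystem.get(conf);
             InputStream in = Files.newInputStream(Paths.get("/tmp/local.txt"));
             FSDataOutputStream out = fs.append(new Path("/user/test/data.txt"))) {
            // Copy the local bytes onto the end of the HDFS file;
            // the second such append is where the pipeline error surfaced
            IOUtils.copyBytes(in, out, 4096, false);
        }
    }
}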
The property name dfs.client.block.write.replace-datanode-on-failure.policy stands out in the first error message.
So I opened etc/hadoop/hdfs-site.xml under my Hadoop installation, found that this policy was not defined there, added the property below, and restarted HDFS, which resolved the error.
<property>
    <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
    <value>NEVER</value>
</property>
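The error message itself points out that a client may set this policy in its own configuration. As an alternative to editing hdfs-site.xml, here is a minimal client-side sketch in Java; the property name comes straight from the error message, and the class name is just an illustration:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class ClientPolicyExample {
    public static FileSystem openFileSystem() throws Exception {
        Configuration conf = new Configuration();
        // NEVER: never try to replace a failed DataNode in the write pipeline
        conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");
        return FileSystem.get(conf);
    }
}

Setting it this way affects only that client's writes, while the hdfs-site.xml route applies to every client that loads that configuration file.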
Root-cause analysis: the replication factor defaults to 3. During a write to HDFS, when one of my DataNodes fails, the client tries to keep the replica count at 3 and looks for another usable DataNode to swap into the pipeline. But there are only 3 DataNodes in total, all of them already in the pipeline, so no replacement can be found and the write fails with "Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try".
To check the replication factor you already have, look in etc/hadoop/hdfs-site.xml; if the lines below are absent, the replication factor defaults to 3 (a programmatic check is sketched after the property). Per the Apache documentation, NEVER means "never add a new datanode": once the policy is set to NEVER, no replacement DataNode is ever added on failure. As a rule of thumb, leaving the replacement feature enabled is not recommended for clusters with 3 or fewer DataNodes.
<property>
    <name>dfs.replication</name>
    <value>3</value>
</property>
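To confirm what a running client actually sees, here is a minimal sketch that prints the configured default replication and the actual replication of one file; the HDFS path is a hypothetical placeholder:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Default replication as seen by this client; falls back to 3 if unset
        System.out.println("dfs.replication = " + conf.getInt("dfs.replication", 3));

        try (FileSystem fs = FileSystem.get(conf)) {
            // Hypothetical path; point this at a file that exists in your HDFS
            FileStatus status = fs.getFileStatus(new Path("/user/test/data.txt"));
            System.out.println("actual replication = " + status.getReplication());
        }
    }
}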
What is a pipeline: see "Hadoop架构: 流水线(PipeLine)" by 执生 on 博客园 (cnblogs.com)
Reference: "java.io.IOException Failed to replace a bad datanode", fffffyp的博客, CSDN博客
