HDFS OutOfMemoryError

Pushing a large number of local files to HDFS with

hadoop fs -put ${local_path} ${hdfs_path}

fails with:

Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
    at java.util.Arrays.copyOfRange(Arrays.java:3664)
    at java.lang.StringBuffer.toString(StringBuffer.java:669)
    at java.net.URI.toString(URI.java:1945)
    at java.net.URI.<init>(URI.java:742)
    at org.apache.hadoop.fs.Path.initialize(Path.java:202)
    at org.apache.hadoop.fs.Path.<init>(Path.java:196)
    at org.apache.hadoop.fs.Path.getPathWithoutSchemeAndAuthority(Path.java:80)
    at org.apache.hadoop.fs.shell.CommandWithDestination.checkPathsForReservedRaw(CommandWithDestination.java:355)
    at org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:322)
    at org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:263)
    at org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:248)
    at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
    at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:373)
    at org.apache.hadoop.fs.shell.CommandWithDestination.recursePath(CommandWithDestination.java:291)
    at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:319)
    at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:373)
    at org.apache.hadoop.fs.shell.CommandWithDestination.recursePath(CommandWithDestination.java:291)
    at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:319)
    at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:373)
    at org.apache.hadoop.fs.shell.CommandWithDestination.recursePath(CommandWithDestination.java:291)
    at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:319)
    at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:373)
    at org.apache.hadoop.fs.shell.CommandWithDestination.recursePath(CommandWithDestination.java:291)
    at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:319)
    at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:373)
    at org.apache.hadoop.fs.shell.CommandWithDestination.recursePath(CommandWithDestination.java:291)
    at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:319)
    at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:373)
    at org.apache.hadoop.fs.shell.CommandWithDestination.recursePath(CommandWithDestination.java:291)
    at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:319)
    at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:373)
    at org.apache.hadoop.fs.shell.CommandWithDestination.recursePath(CommandWithDestination.java:291)


The trace shows the shell client recursing through the directory tree, building a Path/URI object per file until the client JVM heap is exhausted, so the next question is where that heap limit is set. The hadoop launcher script contains:

hadoop-2.7.3/bin/hadoop:    exec "$JAVA" $JAVA_HEAP_MAX $HADOOP_OPTS $CLASS "$@"

Searching for JAVA_HEAP_MAX turns up:

hadoop-2.7.3/libexec/hadoop-config.sh:  JAVA_HEAP_MAX="-Xmx""$HADOOP_HEAPSIZE""m"
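These hits can be reproduced with a simple grep; the exact invocation below is an assumption, not part of the original post:

# search the launcher and config scripts for where the heap flag is assembled
grep -rn 'JAVA_HEAP_MAX\|HADOOP_HEAPSIZE' hadoop-2.7.3/bin hadoop-2.7.3/libexec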


Searching next for HADOOP_HEAPSIZE finds:

hadoop-2.7.3/libexec/hadoop-config.sh:  JAVA_HEAP_MAX="-Xmx""$HADOOP_HEAPSIZE""m"

as well as:

hadoop-2.7.3/share/doc/hadoop/hadoop-project-dist/hadoop-common/ClusterSetup.html:<li><tt>HADOOP_HEAPSIZE</tt> / <tt>YARN_HEAPSIZE</tt> - The maximum amount of heapsize to use, in MB e.g. if the varibale is set to 1000 the heap will be set to 1000MB. This is used to configure the heap size for the daemon. By default, the value is 1000. If you want to configure the values separately for each deamon you can use.</li>
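For context, the surrounding logic in libexec/hadoop-config.sh (paraphrased from a 2.7.x tree; verify against your copy) defaults the heap to 1000 MB and only overrides it when HADOOP_HEAPSIZE is set:

# default max heap for the launched JVM
JAVA_HEAP_MAX=-Xmx1000m

# check envvars which might override default args
if [ "$HADOOP_HEAPSIZE" != "" ]; then
  JAVA_HEAP_MAX="-Xmx""$HADOOP_HEAPSIZE""m"
fi

So with nothing set, the fs shell runs with -Xmx1000m, which a sufficiently large recursive -put can exhaust.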



Fix: raise the maximum JVM heap for the client:

export HADOOP_HEAPSIZE=100000
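Note the unit is MB, so 100000 amounts to roughly a 98 GB -Xmx; a much smaller ceiling usually suffices for the fs shell. A minimal sketch of the full fix, with 4096 as an assumed example value to tune for your machine:

# raise the client JVM max heap (in MB) before re-running the upload;
# 4096 is an assumed example value, not from the original post
export HADOOP_HEAPSIZE=4096
hadoop fs -put ${local_path} ${hdfs_path}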

