1. Install ob-loader-dumper

1.1 Install Java

1) Installation

[root@qcloud-yuanshu-ob-test-15-140 bin]# java -version
-bash: java: command not found

[root@qcloud-yuanshu-ob-test-15-140 bin]# yum install -y java-1.8.0-openjdk
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package java-1.8.0-openjdk.x86_64 1:1.8.0.412.b08-1.el7_9 will be installed
--> Processing Dependency: java-1.8.0-openjdk-headless(x86-64) = 1:1.8.0.412.b08-1.el7_9 for package: 1:java-1.8.0-openjdk-1.8.0.412.b08-1.el7_9.x86_64
--> Processing Dependency: xorg-x11-fonts-Type1 for package: 1:java-1.8.0-openjdk-1.8.0.412.b08-1.el7_9.x86_64
--> Processing Dependency: libjvm.so(SUNWprivate_1.1)(64bit) for package: 1:java-1.8.0-openjdk-1.8.0.412.b08-1.el7_9.x86_64
--> Processing Dependency: libjava.so(SUNWprivate_1.1)(64bit) for package: 1:java-1.8.0-openjdk-1.8.0.412.b08-1.el7_9.x86_64

[root@qcloud-yuanshu-ob-test-15-140 bin]# java -version
openjdk version "1.8.0_412"
OpenJDK Runtime Environment (build 1.8.0_412-b08)
OpenJDK 64-Bit Server VM (build 25.412-b08, mixed mode)
[root@qcloud-yuanshu-ob-test-15-140 bin]# 

2) Set environment variables

[root@qcloud-yuanshu-ob-test-15-140 bin]# update-alternatives --config java

There is 1 program that provides 'java'.

  Selection    Command
-----------------------------------------------
*+ 1           java-1.8.0-openjdk.x86_64 (/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.412.b08-1.el7_9.x86_64/jre/bin/java)

Enter to keep the current selection[+], or type selection number: 
[root@qcloud-yuanshu-ob-test-15-140 bin]# 


[root@qcloud-yuanshu-ob-test-15-140 bin]# echo 'export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.412.b08-1.el7_9.x86_64' >> ~/.bashrc
[root@qcloud-yuanshu-ob-test-15-140 bin]# echo 'export PATH=$JAVA_HOME/bin:$PATH' >> ~/.bashrc
[root@qcloud-yuanshu-ob-test-15-140 bin]# source ~/.bashrc
[root@qcloud-yuanshu-ob-test-15-140 bin]# echo $JAVA_HOME
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.412.b08-1.el7_9.x86_64
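
Optionally, confirm which JDK binary the java on PATH now resolves to (a quick sanity check):

which java
readlink -f "$(which java)"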

1.2 Install ob-loader-dumper

1) Download

ob-loader-dumper-4.3.3.1-RELEASE.zip
Download link: https://obbusiness-private.oss-cn-shanghai.aliyuncs.com/download-center/opensource/ob_loader_dumper/4.3.3.1/ob-loader-dumper-4.3.3.1-RELEASE.zip
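
If you download directly on the server, something like the following works (assuming wget is available and the target directory already exists):

cd /data/dba/oceanbase/obl
wget 'https://obbusiness-private.oss-cn-shanghai.aliyuncs.com/download-center/opensource/ob_loader_dumper/4.3.3.1/ob-loader-dumper-4.3.3.1-RELEASE.zip'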

2) Installation

[root@qcloud-yuanshu-ob-test-15-140 obl]# pwd
/data/dba/oceanbase/obl
[root@qcloud-yuanshu-ob-test-15-140 obl]# unzip ob-loader-dumper-4.3.3.1-RELEASE.zip 
Archive:  ob-loader-dumper-4.3.3.1-RELEASE.zip
   creating: ob-loader-dumper-4.3.3.1-RELEASE/
   creating: ob-loader-dumper-4.3.3.1-RELEASE/docs/
   creating: ob-loader-dumper-4.3.3.1-RELEASE/bin/
   creating: ob-loader-dumper-4.3.3.1-RELEASE/bin/windows/
......
[root@qcloud-yuanshu-ob-test-15-140 obl]# ll
total 144M
drwxr-xr-x 3 root root   90 Jul 21 16:55 .
drwxr-xr-x 4 root root   31 Jul 21 16:47 ..
drwxr-xr-x 8 root root  105 Apr 14 11:08 ob-loader-dumper-4.3.3.1-RELEASE
-rw-r--r-- 1 root root 144M Jul 21 16:55 ob-loader-dumper-4.3.3.1-RELEASE.zip

[root@qcloud-yuanshu-ob-test-15-140 obl]# cd ob-loader-dumper-4.3.3.1-RELEASE/
[root@qcloud-yuanshu-ob-test-15-140 ob-loader-dumper-4.3.3.1-RELEASE]# ls
bin  conf  docs  ext  lib  LICENSE  NOTICE  tools
[root@qcloud-yuanshu-ob-test-15-140 ob-loader-dumper-4.3.3.1-RELEASE]# ll
total 28K
drwxr-xr-x 8 root root  105 Apr 14 11:08 .
drwxr-xr-x 3 root root   90 Jul 21 16:55 ..
drwxr-xr-x 3 root root   97 Apr 14 11:08 bin
drwxr-xr-x 2 root root  150 Apr 14 11:08 conf
drwxr-xr-x 2 root root  124 Apr 14 11:08 docs
drwxr-xr-x 3 root root   21 Apr 14 11:08 ext
drwxr-xr-x 2 root root 8.0K Apr 14 11:08 lib
-rw-r--r-- 1 root root 6.8K Jun 12  2024 LICENSE
-rw-r--r-- 1 root root 4.7K Feb 25 10:54 NOTICE
drwxr-xr-x 2 root root   24 Apr 14 11:08 tools
[root@qcloud-yuanshu-ob-test-15-140 ob-loader-dumper-4.3.3.1-RELEASE]# 
[root@qcloud-yuanshu-ob-test-15-140 ob-loader-dumper-4.3.3.1-RELEASE]# 
[root@qcloud-yuanshu-ob-test-15-140 ob-loader-dumper-4.3.3.1-RELEASE]# cd bin/
[root@qcloud-yuanshu-ob-test-15-140 bin]# ll
total 28K
drwxr-xr-x 3 root root   97 Apr 14 11:08 .
drwxr-xr-x 8 root root  105 Apr 14 11:08 ..
-rwxr-xr-x 1 root root 9.0K Apr  3 13:30 obdumper
-rwxr-xr-x 1 root root 1.1K Apr  3 13:30 obdumper-debug
-rwxr-xr-x 1 root root 6.9K Apr 10 18:19 obloader
-rwxr-xr-x 1 root root 1.1K Apr  3 13:30 obloader-debug
drwxr-xr-x 2 root root   98 Apr 10 18:19 windows

3) Set up wrapper scripts

Create small wrapper scripts under /usr/bin so that obloader and obdumper can be run from any directory:

---- obloader
[root@qcloud-yuanshu-ob-test-15-140 ob-loader-dumper-4.3.3.1-RELEASE]# cat > /usr/bin/obloader <<'EOF'
> #!/bin/bash
> export LD_LIBRARY_PATH=/data/dba/oceanbase/obl/ob-loader-dumper-4.3.3.1-RELEASE/lib:$LD_LIBRARY_PATH
> export CLASSPATH=/data/dba/oceanbase/obl/ob-loader-dumper-4.3.3.1-RELEASE/lib/*:.
> java -Xms512m -Xmx2g com.oceanbase.tools.loaddump.cmd.Obloader "$@"
> EOF
[root@qcloud-yuanshu-ob-test-15-140 ob-loader-dumper-4.3.3.1-RELEASE]# chmod +x /usr/bin/obloader
[root@qcloud-yuanshu-ob-test-15-140 ob-loader-dumper-4.3.3.1-RELEASE]# echo $JAVA_HOME
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.412.b08-1.el7_9.x86_64/jre
[root@qcloud-yuanshu-ob-test-15-140 ob-loader-dumper-4.3.3.1-RELEASE]# obloader --version
Version: 4.3.3.1-RELEASE 


---- obdumper
[root@qcloud-yuanshu-ob-test-15-140 ob-loader-dumper-4.3.3.1-RELEASE]# cat > /usr/bin/obdumper <<'EOF'
> #!/bin/bash
> export LD_LIBRARY_PATH=/data/dba/oceanbase/obl/ob-loader-dumper-4.3.3.1-RELEASE/lib:$LD_LIBRARY_PATH
> export CLASSPATH=/data/dba/oceanbase/obl/ob-loader-dumper-4.3.3.1-RELEASE/lib/*:.
> java -Xms512m -Xmx2g com.oceanbase.tools.loaddump.cmd.Obdumper "$@"
> EOF
[root@qcloud-yuanshu-ob-test-15-140 ob-loader-dumper-4.3.3.1-RELEASE]# 
[root@qcloud-yuanshu-ob-test-15-140 ob-loader-dumper-4.3.3.1-RELEASE]# 
[root@qcloud-yuanshu-ob-test-15-140 ob-loader-dumper-4.3.3.1-RELEASE]# chmod +x /usr/bin/obdumper
[root@qcloud-yuanshu-ob-test-15-140 ob-loader-dumper-4.3.3.1-RELEASE]# 
[root@qcloud-yuanshu-ob-test-15-140 ob-loader-dumper-4.3.3.1-RELEASE]# obdumper --version
Version: 4.3.3.1-RELEASE
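
An alternative to the wrapper scripts is simply putting the shipped bin directory on PATH; the bin/obloader and bin/obdumper scripts listed above are normally runnable as-is. A minimal sketch:

echo 'export PATH=/data/dba/oceanbase/obl/ob-loader-dumper-4.3.3.1-RELEASE/bin:$PATH' >> ~/.bashrc
source ~/.bashrc
obloader --version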

2. Prerequisites for bypass (direct path) import

2.1 Determine the rpc_port

rpc_port is the port used for remote RPC access to the observer (the default is 2882).

[root@qcloud-yuanshu-ob-test-15-140 dump]# ss -tuln | grep 2882
tcp    LISTEN     0      1024      *:2882                  *:*                  
tcp    LISTEN     0      1024   [::]:2882               [::]:*   
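
Besides checking the listening socket, the RPC port can also be confirmed from inside the cluster. A sketch of two queries that should work in the sys tenant (column names per the standard sys views):

SHOW PARAMETERS LIKE 'rpc_port';
-- svr_port is the RPC port, sql_port is the MySQL protocol port
SELECT svr_ip, svr_port, sql_port, zone FROM oceanbase.DBA_OB_SERVERS;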

2.2 Determine the tenant_name

In this test the tenant_name is sys.

[root@qcloud-yuanshu-ob-test-15-140 data]# obclient -h 10.250.15.140 -P 2881 -uroot@sys -padmin123 -A
Welcome to the OceanBase.  Commands end with ; or \g.
Your OceanBase connection id is 3221733977
Server version: OceanBase_CE 4.3.3.1 (r101000012024102216-2df04a2a7a203b498f23e1904d4b7a000457ce43) (Built Oct 22 2024 17:46:45)

Copyright (c) 2000, 2018, OceanBase and/or its affiliates. All rights reserved.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

obclient [(none)]> show tenant;
+---------------------+
| Current_tenant_name |
+---------------------+
| sys                 |
+---------------------+
1 row in set (0.002 sec)

obclient [(none)]> select tenant_id, tenant_name, primary_zone from oceanbase.DBA_OB_TENANTS;
+-----------+-------------+--------------+
| tenant_id | tenant_name | primary_zone |
+-----------+-------------+--------------+
|         1 | sys         | RANDOM       |
+-----------+-------------+--------------+
1 row in set (0.004 sec)

obclient [(none)]> 
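
Here the data is loaded into the sys tenant itself, so -t and --sys-user both refer to sys. When the target is a business (user) tenant, -t and -u would point at that tenant, while --sys-user/--sys-password supply a sys-tenant account that the bypass import still needs. An illustrative sketch only (the tenant name my_tenant and the passwords are placeholders, not from this environment):

obloader -h 10.250.15.140 -P 2881 -u root -p '<tenant_password>' -t my_tenant -D members_test \
    --table hxr_card_copy_dir --csv -f /data/dba/mysql/dump/hxr_card_copy.csv --column-separator='\t' \
    --direct --rpc-port 2882 --sys-user=root --sys-password='<sys_password>' --parallel=4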

2.3 Configure session.config.json

The session configuration passed to obloader via --session-config controls the direct path load session; the three values below are the heartbeat interval, the overall task timeout, and the heartbeat timeout, all in milliseconds.

cat /data/dba/oceanbase/obl/ob-loader-dumper-4.3.3.1-RELEASE/conf/session.config.json
{
  "direct_path_load": {
    "heartbeat_interval_ms": 10000,
    "task_timeout_ms": 3600000,
    "heartbeat_timeout_ms": 60000
  }
}
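
For very large tables, where the server-side commit/compaction phase of a direct load can run long, the task timeout is the value to raise. Illustrative values only (not from this test):

{
  "direct_path_load": {
    "heartbeat_interval_ms": 10000,
    "task_timeout_ms": 7200000,
    "heartbeat_timeout_ms": 60000
  }
}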

2.4 Check that the CSV data is well formed

The target table has 6 columns, so every tab-separated row should have exactly 6 fields.

[root@qcloud-yuanshu-ob-test-15-140 dump]# awk -F'\t' '{print NF}' /data/dba/mysql/dump/hxr_card_copy_dir/xaa.csv | sort -nu
5
6
[root@qcloud-yuanshu-ob-test-15-140 dump]# awk -F'\t' '{print NF}' /data/dba/mysql/dump/hxr_card_copy_dir/xab.csv | sort -nu
2
5
6
[root@qcloud-yuanshu-ob-test-15-140 dump]# awk -F'\t' '{print NF}' /data/dba/mysql/dump/hxr_card_copy_dir/xac.csv | sort -nu
2
6
-- Find the rows that do not have 6 fields
[root@qcloud-yuanshu-ob-test-15-140 dump]# awk -F'\t' 'NF!=6 {print "行号:", NR, "列数:", NF, "内容:", $0}' /data/dba/mysql/dump/hxr_card_copy_dir/xaa.csv
行号: 804439 列数: 5 内容: 804439       QW028488        电子卡  QW028488        2099-12-31 2

[root@qcloud-yuanshu-ob-test-15-140 dump]# awk -F'\t' 'NF!=6 {print "行号:", NR, "列数:", NF, "内容:", $0}' /data/dba/mysql/dump/hxr_card_copy_dir/xab.csv
行号: 1 列数: 2 内容: 3:59:59   2024-01-01 00:00:00

[root@qcloud-yuanshu-ob-test-15-140 dump]# awk -F'\t' 'NF!=6 {print "行号:", NR, "列数:", NF, "内容:", $0}' /data/dba/mysql/dump/hxr_card_copy_dir/xac.csv
行号: 1 列数: 2 内容: 31 23:59:59       2024-01-01 00:00:00
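
The incomplete rows above appear to be matching halves of single records: the 5-field row at the end of xaa and the 2-field row at the start of xab (likewise for xab/xac) join into one 6-field row whose datetime value was cut in the middle, which suggests the source file was split by size rather than on line boundaries (splitting by line count, e.g. split -l, would avoid this). A quick per-file count of malformed rows before loading, as a sketch:

for f in /data/dba/mysql/dump/hxr_card_copy_dir/xa*.csv; do
  echo "$f: $(awk -F'\t' 'NF!=6' "$f" | wc -l) rows without 6 fields"
done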

3. Bypass import tests

3.1 Import a single file

Key parameters for bypass import:

--direct                  # enable bypass (direct path) import

--rpc-port 2882           # observer RPC port (see 2.1)

--sys-user=root           # sys-tenant user used by the direct load

-t sys                    # target tenant

--session-config=/data/dba/oceanbase/obl/ob-loader-dumper-4.3.3.1-RELEASE/conf/session.config.json

--parallel=4              # more parallelism speeds up the import: ~45 s without parallelism, ~35 s with 4 threads

1) Import command

obloader -h 10.250.15.140 -P 2881 -uroot -padmin123 -D members_test \
    --table hxr_card_copy_dir --skip-header --block-size=10240 \
    --csv -f /data/dba/mysql/dump/hxr_card_copy.csv --column-separator='\t' --max-errors=10 \
    --direct --rpc-port 2882 --sys-user=root -t sys --parallel=4 \
    --session-config=/data/dba/oceanbase/obl/ob-loader-dumper-4.3.3.1-RELEASE/conf/session.config.json

2) Import details

[root@qcloud-yuanshu-ob-test-15-140 dump]# obloader -h 10.250.15.140 -P 2881 -uroot -padmin123 -D members_test   --table hxr_card_copy_dir --skip-header --block-size=10240  --csv   -f /data/dba/mysql/dump/hxr_card_copy.csv --column-separator='\t'  --max-errors=10   --direct --rpc-port 2882 --sys-user=root -t sys  --session-config=/data/dba/oceanbase/obl/ob-loader-dumper-4.3.3.1-RELEASE/conf/session.config.json  --parallel=4
2025-07-22 14:19:05 [INFO] Parsed args:
[--csv] true
[--file-path] /data/dba/mysql/dump/hxr_card_copy.csv
[--column-separator] \t
[--skip-header] true
[--parallel] 4
[--session-config] /data/dba/oceanbase/obl/ob-loader-dumper-4.3.3.1-RELEASE/conf/session.config.json
[--host] 10.250.15.140
[--port] 2881
[--user] root
[--tenant] sys
[--password] ******
[--database] members_test
[--sys-user] root
[--table] [hxr_card_copy_dir]
[--max-errors] 10
[--block-size] 10240
[--direct] true
[--rpc-port] 2882

2025-07-22 14:19:06 [WARN] Both `--max-errors` and `--max-discards` are not supported yet. They will be set to 0 by default
2025-07-22 14:19:06 [DEBUG] Failed to detect a valid hadoop home directory java.io.FileNotFoundException: HADOOP_HOME and hadoop.home.dir are unset.
        at org.apache.hadoop.util.Shell.checkHadoopHomeInner(Shell.java:520) ~[hadoop-common-3.3.6.jar:?]
        at org.apache.hadoop.util.Shell.checkHadoopHome(Shell.java:491) ~[hadoop-common-3.3.6.jar:?]
        at org.apache.hadoop.util.Shell.<clinit>(Shell.java:568) ~[hadoop-common-3.3.6.jar:?]
        at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:79) ~[hadoop-common-3.3.6.jar:?]
        at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:3879) ~[hadoop-common-3.3.6.jar:?]
        at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:3874) ~[hadoop-common-3.3.6.jar:?]
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3662) ~[hadoop-common-3.3.6.jar:?]
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:557) ~[hadoop-common-3.3.6.jar:?]
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:290) ~[hadoop-common-3.3.6.jar:?]
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:541) ~[hadoop-common-3.3.6.jar:?]
        at com.oceanbase.tools.loaddump.common.model.storage.StorageConfig.getFileSystem(StorageConfig.java:332) ~[ob-loader-dumper-4.3.3.1-RELEASE.jar:?]
        at com.oceanbase.tools.loaddump.client.LoadClient.rectifyFilePath(LoadClient.java:104) ~[ob-loader-dumper-4.3.3.1-RELEASE.jar:?]
        at com.oceanbase.tools.loaddump.client.LoadClient.access$000(LoadClient.java:68) ~[ob-loader-dumper-4.3.3.1-RELEASE.jar:?]
        at com.oceanbase.tools.loaddump.client.LoadClient$Builder.build(LoadClient.java:220) ~[ob-loader-dumper-4.3.3.1-RELEASE.jar:?]
        at com.oceanbase.tools.loaddump.cmd.Obloader.run(Obloader.java:269) ~[ob-loader-dumper-4.3.3.1-RELEASE.jar:?]
        at com.oceanbase.tools.loaddump.cmd.Obloader.main(Obloader.java:231) ~[ob-loader-dumper-4.3.3.1-RELEASE.jar:?]

2025-07-22 14:19:06 [DEBUG] setsid exited with exit code 0
2025-07-22 14:19:06 [DEBUG] field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName=Ops, valueName=Time, about=, interval=10, type=DEFAULT, value=[GetGroups])
2025-07-22 14:19:06 [DEBUG] field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName=Ops, valueName=Time, about=, interval=10, type=DEFAULT, value=[Rate of failed kerberos logins and latency (milliseconds)])
2025-07-22 14:19:06 [DEBUG] field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName=Ops, valueName=Time, about=, interval=10, type=DEFAULT, value=[Rate of successful kerberos logins and latency (milliseconds)])
2025-07-22 14:19:06 [DEBUG] field private org.apache.hadoop.metrics2.lib.MutableGaugeInt org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailures with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName=Ops, valueName=Time, about=, interval=10, type=DEFAULT, value=[Renewal failures since last successful login])
2025-07-22 14:19:06 [DEBUG] field private org.apache.hadoop.metrics2.lib.MutableGaugeLong org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailuresTotal with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName=Ops, valueName=Time, about=, interval=10, type=DEFAULT, value=[Renewal failures since startup])
2025-07-22 14:19:06 [DEBUG] UgiMetrics, User and group related metrics
2025-07-22 14:19:06 [DEBUG] Setting hadoop.security.token.service.use_ip to true
2025-07-22 14:19:06 [DEBUG]  Creating new Groups object
2025-07-22 14:19:06 [DEBUG] Trying to load the custom-built native-hadoop library...
2025-07-22 14:19:06 [DEBUG] Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError: no hadoop in java.library.path
2025-07-22 14:19:06 [DEBUG] java.library.path=/data/dba/oceanbase/obl/ob-loader-dumper-4.3.3.1-RELEASE/lib::/data/observer/lib/:/home/admin/observer/lib:/home/admin/observer/lib:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2025-07-22 14:19:06 [WARN] Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2025-07-22 14:19:06 [DEBUG] Falling back to shell based
2025-07-22 14:19:06 [DEBUG] Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping
2025-07-22 14:19:06 [DEBUG] Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000; warningDeltaMs=5000
2025-07-22 14:19:06 [DEBUG] Hadoop login
2025-07-22 14:19:06 [DEBUG] hadoop login commit
2025-07-22 14:19:06 [DEBUG] Using local user: UnixPrincipal: root
2025-07-22 14:19:06 [DEBUG] Using user: "UnixPrincipal: root" with name: root
2025-07-22 14:19:06 [DEBUG] User entry: "root"
2025-07-22 14:19:06 [DEBUG] UGI loginUser: root (auth:SIMPLE)
2025-07-22 14:19:06 [DEBUG] Starting: Acquiring creator semaphore for file:///
2025-07-22 14:19:06 [DEBUG] Acquiring creator semaphore for file:///: duration 0:00.001s
2025-07-22 14:19:06 [DEBUG] Starting: Creating FS file:///
2025-07-22 14:19:06 [DEBUG] Loading filesystems
2025-07-22 14:19:06 [DEBUG] hdfs:// = class org.apache.hadoop.hdfs.DistributedFileSystem from /data/dba/oceanbase/obl/ob-loader-dumper-4.3.3.1-RELEASE/lib/hadoop-hdfs-client-3.3.6.jar
2025-07-22 14:19:06 [DEBUG] webhdfs:// = class org.apache.hadoop.hdfs.web.WebHdfsFileSystem from /data/dba/oceanbase/obl/ob-loader-dumper-4.3.3.1-RELEASE/lib/hadoop-hdfs-client-3.3.6.jar
2025-07-22 14:19:06 [DEBUG] swebhdfs:// = class org.apache.hadoop.hdfs.web.SWebHdfsFileSystem from /data/dba/oceanbase/obl/ob-loader-dumper-4.3.3.1-RELEASE/lib/hadoop-hdfs-client-3.3.6.jar
2025-07-22 14:19:06 [DEBUG] file:// = class org.apache.hadoop.fs.LocalFileSystem from /data/dba/oceanbase/obl/ob-loader-dumper-4.3.3.1-RELEASE/lib/hadoop-common-3.3.6.jar
2025-07-22 14:19:06 [DEBUG] viewfs:// = class org.apache.hadoop.fs.viewfs.ViewFileSystem from /data/dba/oceanbase/obl/ob-loader-dumper-4.3.3.1-RELEASE/lib/hadoop-common-3.3.6.jar
2025-07-22 14:19:06 [DEBUG] har:// = class org.apache.hadoop.fs.HarFileSystem from /data/dba/oceanbase/obl/ob-loader-dumper-4.3.3.1-RELEASE/lib/hadoop-common-3.3.6.jar
2025-07-22 14:19:06 [DEBUG] http:// = class org.apache.hadoop.fs.http.HttpFileSystem from /data/dba/oceanbase/obl/ob-loader-dumper-4.3.3.1-RELEASE/lib/hadoop-common-3.3.6.jar
2025-07-22 14:19:06 [DEBUG] https:// = class org.apache.hadoop.fs.http.HttpsFileSystem from /data/dba/oceanbase/obl/ob-loader-dumper-4.3.3.1-RELEASE/lib/hadoop-common-3.3.6.jar
2025-07-22 14:19:06 [DEBUG] s3n:// = class org.apache.hadoop.fs.s3native.NativeS3FileSystem from /data/dba/oceanbase/obl/ob-loader-dumper-4.3.3.1-RELEASE/lib/hadoop-aws-3.3.6.jar
2025-07-22 14:19:06 [DEBUG] obs:// = class org.apache.hadoop.fs.obs.OBSFileSystem from /data/dba/oceanbase/obl/ob-loader-dumper-4.3.3.1-RELEASE/lib/hadoop-huaweicloud-3.4.0.jar
2025-07-22 14:19:06 [DEBUG] Looking for FS supporting file
2025-07-22 14:19:06 [DEBUG] looking for configuration option fs.file.impl
2025-07-22 14:19:06 [DEBUG] Filesystem file defined in configuration option
2025-07-22 14:19:06 [DEBUG] FS for file is class org.apache.hadoop.fs.LocalFileSystem
2025-07-22 14:19:06 [DEBUG] Creating FS file:///: duration 0:00.049s
2025-07-22 14:19:06 [INFO] Log files will be written to /data/dba/mysql/dump/logs
2025-07-22 14:19:06 [INFO] Trying to establish JDBC connection to `root@sys`...
2025-07-22 14:19:06 [DEBUG] JDBC url for business tenant: jdbc:oceanbase://10.250.15.140:2881/members_test
2025-07-22 14:19:06 [WARN] removeAbandoned is true, not use in production.
2025-07-22 14:19:06 [INFO] {dataSource-1} inited
2025-07-22 14:19:06 [INFO] Server Mode: OBMYSQL-4.3.3.1
2025-07-22 14:19:06 [INFO] Querying table column metadata, this might take a while...
2025-07-22 14:19:06 [DEBUG] Query column metadata for the table: "hxr_card_copy_dir" finished
2025-07-22 14:19:06 [INFO] Splitting data files into 10 GB logical chunks...
2025-07-22 14:19:06 [DEBUG] File: "/data/dba/mysql/dump/hxr_card_copy.csv" has not been splitted. 149521470 < 10737418240
2025-07-22 14:19:06 [INFO] Split 1 data files to 1 logical chunks success. Elapsed: 25.08 ms
2025-07-22 14:19:06 [DEBUG] Ignore to clean any tables as --truncate-table or --delete-from-table is not specified
2025-07-22 14:19:06 [INFO] Bootstrap with Max Heap: 1 GB, Safe Heap: 1.42 GB
2025-07-22 14:19:06 [INFO] Filtering out empty tables...
2025-07-22 14:19:06 [INFO] Found 1 empty tables before executing. Elapsed: 11.32 ms
2025-07-22 14:19:06 [INFO] Direct load method for Table: 'hxr_card_copy_dir' is 'full'
2025-07-22 14:19:06,682 main WARN No Root logger was configured, creating default ERROR-level Root logger with Console appender
2025-07-22 14:19:06 [DEBUG] Using SLF4J as the default logging framework
2025-07-22 14:19:06 [DEBUG] -Dio.netty.noUnsafe: false
2025-07-22 14:19:06 [DEBUG] Java version: 8
2025-07-22 14:19:06 [DEBUG] sun.misc.Unsafe.theUnsafe: available
2025-07-22 14:19:06 [DEBUG] sun.misc.Unsafe.copyMemory: available
2025-07-22 14:19:06 [DEBUG] sun.misc.Unsafe.storeFence: available
2025-07-22 14:19:06 [DEBUG] java.nio.Buffer.address: available
2025-07-22 14:19:06 [DEBUG] direct buffer constructor: available
2025-07-22 14:19:06 [DEBUG] java.nio.Bits.unaligned: available, true
2025-07-22 14:19:06 [DEBUG] jdk.internal.misc.Unsafe.allocateUninitializedArray(int): unavailable prior to Java9
2025-07-22 14:19:06 [DEBUG] java.nio.DirectByteBuffer.<init>(long, {int,long}): available
2025-07-22 14:19:06 [DEBUG] sun.misc.Unsafe: available
2025-07-22 14:19:06 [DEBUG] -Dio.netty.tmpdir: /tmp (java.io.tmpdir)
2025-07-22 14:19:06 [DEBUG] -Dio.netty.bitMode: 64 (sun.arch.data.model)
2025-07-22 14:19:06 [DEBUG] -Dio.netty.maxDirectMemory: 1908932608 bytes
2025-07-22 14:19:06 [DEBUG] -Dio.netty.uninitializedArrayAllocationThreshold: -1
2025-07-22 14:19:06 [DEBUG] java.nio.ByteBuffer.cleaner(): available
2025-07-22 14:19:06 [DEBUG] -Dio.netty.noPreferDirect: false
2025-07-22 14:19:06 [DEBUG] -Dio.netty.native.workdir: /tmp (io.netty.tmpdir)
2025-07-22 14:19:06 [DEBUG] -Dio.netty.native.deleteLibAfterLoading: true
2025-07-22 14:19:06 [DEBUG] -Dio.netty.native.tryPatchShadedId: true
2025-07-22 14:19:06 [DEBUG] -Dio.netty.native.detectNativeLibraryDuplicates: true
2025-07-22 14:19:06 [DEBUG] -Dio.netty.eventLoopThreads: 8
2025-07-22 14:19:06 [DEBUG] -Dio.netty.globalEventExecutor.quietPeriodSeconds: 1
2025-07-22 14:19:06 [DEBUG] -Dio.netty.threadLocalMap.stringBuilder.initialSize: 1024
2025-07-22 14:19:06 [DEBUG] -Dio.netty.threadLocalMap.stringBuilder.maxSize: 4096
2025-07-22 14:19:06 [DEBUG] -Dio.netty.noKeySetOptimization: false
2025-07-22 14:19:06 [DEBUG] -Dio.netty.selectorAutoRebuildThreshold: 512
2025-07-22 14:19:06 [DEBUG] org.jctools-core.MpscChunkedArrayQueue: available
2025-07-22 14:19:06 [DEBUG] -Dio.netty.leakDetection.level: simple
2025-07-22 14:19:06 [DEBUG] -Dio.netty.leakDetection.targetRecords: 4
2025-07-22 14:19:06 [DEBUG] -Dio.netty.allocator.numHeapArenas: 8
2025-07-22 14:19:06 [DEBUG] -Dio.netty.allocator.numDirectArenas: 8
2025-07-22 14:19:06 [DEBUG] -Dio.netty.allocator.pageSize: 8192
2025-07-22 14:19:06 [DEBUG] -Dio.netty.allocator.maxOrder: 9
2025-07-22 14:19:06 [DEBUG] -Dio.netty.allocator.chunkSize: 4194304
2025-07-22 14:19:06 [DEBUG] -Dio.netty.allocator.smallCacheSize: 256
2025-07-22 14:19:06 [DEBUG] -Dio.netty.allocator.normalCacheSize: 64
2025-07-22 14:19:06 [DEBUG] -Dio.netty.allocator.maxCachedBufferCapacity: 32768
2025-07-22 14:19:06 [DEBUG] -Dio.netty.allocator.cacheTrimInterval: 8192
2025-07-22 14:19:06 [DEBUG] -Dio.netty.allocator.cacheTrimIntervalMillis: 0
2025-07-22 14:19:06 [DEBUG] -Dio.netty.allocator.useCacheForAllThreads: false
2025-07-22 14:19:06 [DEBUG] -Dio.netty.allocator.maxCachedByteBuffersPerChunk: 1023
2025-07-22 14:19:06 [DEBUG] -Dio.netty.processId: 29371 (auto-detected)
2025-07-22 14:19:06 [DEBUG] -Djava.net.preferIPv4Stack: false
2025-07-22 14:19:06 [DEBUG] -Djava.net.preferIPv6Addresses: false
2025-07-22 14:19:06 [DEBUG] Loopback interface: lo (lo, 127.0.0.1)
2025-07-22 14:19:06 [DEBUG] /proc/sys/net/core/somaxconn: 65535
2025-07-22 14:19:06 [DEBUG] -Dio.netty.machineId: 52:54:00:ff:fe:dd:86:70 (auto-detected)
2025-07-22 14:19:06 [DEBUG] -Dio.netty.allocator.type: pooled
2025-07-22 14:19:06 [DEBUG] -Dio.netty.threadLocalDirectBufferSize: 0
2025-07-22 14:19:06 [DEBUG] -Dio.netty.maxThreadLocalCharBufferSize: 16384
2025-07-22 14:19:07 [DEBUG] -Dio.netty.recycler.maxCapacityPerThread: 4096
2025-07-22 14:19:07 [DEBUG] -Dio.netty.recycler.ratio: 8
2025-07-22 14:19:07 [DEBUG] -Dio.netty.recycler.chunkSize: 32
2025-07-22 14:19:07 [DEBUG] -Dio.netty.recycler.blocking: false
2025-07-22 14:19:07 [DEBUG] -Dio.netty.recycler.batchFastThreadLocalOnly: true
2025-07-22 14:19:07 [DEBUG] -Dio.netty.buffer.checkAccessible: true
2025-07-22 14:19:07 [DEBUG] -Dio.netty.buffer.checkBounds: true
2025-07-22 14:19:07 [DEBUG] Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector@676d5f40
2025-07-22 14:19:07 [DEBUG] Use c.l.d.LiteBlockingWaitStrategy as available cpu(s) is 4
2025-07-22 14:19:07 [INFO] Create 512 slots for ring buffer finished. [0.0.0.0]
2025-07-22 14:19:07 [INFO] Start 8 database writer threads finished. [0.0.0.0]
2025-07-22 14:19:07 [INFO] Start 8 record file reader threads success
2025-07-22 14:19:07 [DEBUG] Created instance IOStatisticsContextImpl{id=1, threadId=48, ioStatistics=counters=();
gauges=();
minimums=();
maximums=();
means=();
}
2025-07-22 14:19:07 [DEBUG] Automatic reset batch size 0 to 800 for loading table "hxr_card_copy_dir"
2025-07-22 14:19:07 [INFO] Direct load for table "hxr_card_copy_dir" begins in async mode
2025-07-22 14:19:07 [INFO] Direct load for table "hxr_card_copy_dir" waiting until begin phase is done...
2025-07-22 14:19:08 [INFO] Direct load for table "hxr_card_copy_dir" begin phase finished
2025-07-22 14:19:12 [INFO] 

1. Enqueue Performance Monitor: 
-------------------------------------------------------------------------------------------------------
 Dimension \ Metric |             Tps              |          Throughput          |       Buffer       
-------------------------------------------------------------------------------------------------------
     1.sec.avg      |     247471.2 Records/sec     |         19.98 MB/sec         |      3 Slots       
     1.min.avg      |     247360.0 Records/sec     |         19.97 MB/sec         |      3 Slots       
       Total        |       1240800 Records        |           100.2 MB           |      3 Slots       
-------------------------------------------------------------------------------------------------------

2. Dequeue Performance Monitor: 
-------------------------------------------------------------------------------------------------------
 Dimension \ Metric |             Tps              |          Throughput          |       Buffer       
-------------------------------------------------------------------------------------------------------
     1.sec.avg      |    246433.77 Records/sec     |         19.89 MB/sec         |      6 Slots       
     1.min.avg      |     245920.0 Records/sec     |         19.85 MB/sec         |      6 Slots       
       Total        |       1238400 Records        |           100.0 MB           |      6 Slots       
-------------------------------------------------------------------------------------------------------


2025-07-22 14:19:14 [INFO] File: "/data/dba/mysql/dump/hxr_card_copy.csv" has been parsed finished
2025-07-22 14:19:14 [INFO] Commit load task on table "hxr_card_copy_dir". This might take a while. Please wait...
2025-07-22 14:19:16 [INFO] ----------   Finished Tasks: 0       Running Tasks: 1        Progress: 100.00%       ----------
2025-07-22 14:19:17 [INFO] 

1. Enqueue Performance Monitor: 
-------------------------------------------------------------------------------------------------------
 Dimension \ Metric |             Tps              |          Throughput          |       Buffer       
-------------------------------------------------------------------------------------------------------
     1.sec.avg      |    199122.63 Records/sec     |         16.15 MB/sec         |      2 Slots       
     1.min.avg      |    239671.65 Records/sec     |         19.36 MB/sec         |      2 Slots       
       Total        |       1992811 Records        |           161.6 MB           |      2 Slots       
-------------------------------------------------------------------------------------------------------

2. Dequeue Performance Monitor: 
-------------------------------------------------------------------------------------------------------
 Dimension \ Metric |             Tps              |          Throughput          |       Buffer       
-------------------------------------------------------------------------------------------------------
     1.sec.avg      |    198956.45 Records/sec     |         16.13 MB/sec         |      2 Slots       
     1.min.avg      |    238461.92 Records/sec     |         19.26 MB/sec         |      2 Slots       
       Total        |       1992811 Records        |           161.6 MB           |      2 Slots       
-------------------------------------------------------------------------------------------------------


2025-07-22 14:19:21 [INFO] ----------   Finished Tasks: 0       Running Tasks: 1        Progress: 100.00%       ----------
2025-07-22 14:19:27 [INFO] ----------   Finished Tasks: 0       Running Tasks: 1        Progress: 100.00%       ----------
2025-07-22 14:19:30 [INFO] ----------   Finished Tasks: 0       Running Tasks: 1        Progress: 100.00%       ----------
2025-07-22 14:19:33 [INFO] ----------   Finished Tasks: 0       Running Tasks: 1        Progress: 100.00%       ----------
2025-07-22 14:19:36 [INFO] ----------   Finished Tasks: 0       Running Tasks: 1        Progress: 100.00%       ----------
2025-07-22 14:19:39 [INFO] ----------   Finished Tasks: 0       Running Tasks: 1        Progress: 100.00%       ----------
2025-07-22 14:19:41 [INFO] Load task on table "hxr_card_copy_dir" is committed successfully! Elapsed: 0ms
2025-07-22 14:19:41 [DEBUG] Drain and halt the worker group finished
2025-07-22 14:19:41 [INFO] [Timer] Table: hxr_card_copy_dir, Write Elapsed: 6.11s, Commit Elapsed: 0ms, Total Elapsed: 32.84s
2025-07-22 14:19:41 [DEBUG] Shutdown task context finished
2025-07-22 14:19:41 [INFO] ----------   Finished Tasks: 1       Running Tasks: 0        Progress: 100.00%       ----------
2025-07-22 14:19:41 [INFO] 

All Load Tasks Finished: 

----------------------------------------------------------------------------------------------------------------------------
        No.#        |        Type        |             Name             |            Count             |       Status       
----------------------------------------------------------------------------------------------------------------------------
         1          |       TABLE        |      hxr_card_copy_dir       |      1992811 -> 1992811      |      SUCCESS       
----------------------------------------------------------------------------------------------------------------------------

Total Count: 1992811            End Time: 2025-07-22 14:19:41


2025-07-22 14:19:41 [INFO] Load record finished. Total Elapsed: 34.88 s
2025-07-22 14:19:41 [DEBUG] FileSystem.close() by method: org.apache.hadoop.fs.FilterFileSystem.close(FilterFileSystem.java:529)); Key: (root (auth:SIMPLE))@file://; URI: file:///; Object Identity Hash: 664e5dee
2025-07-22 14:19:41 [DEBUG] FileSystem.close() by method: org.apache.hadoop.fs.RawLocalFileSystem.close(RawLocalFileSystem.java:895)); Key: null; URI: file:///; Object Identity Hash: 431f1eaf
2025-07-22 14:19:41 [INFO] System exit 0
2025-07-22 14:19:41 [DEBUG] Completed shutdown in 0.001 seconds; Timeouts: 0

3) Verify in the database

The table was truncated and confirmed empty before the load; the second count below was taken after the load finished.

MySQL [members_test]> truncate table hxr_card_copy_dir;
Query OK, 0 rows affected (0.10 sec)

MySQL [members_test]> select count(*) from hxr_card_copy_dir;
+----------+
| count(*) |
+----------+
|        0 |
+----------+
1 row in set (0.04 sec)

MySQL [members_test]> select count(*) from hxr_card_copy_dir;
+----------+
| count(*) |
+----------+
|  1992811 |
+----------+
1 row in set (0.04 sec)

MySQL [members_test]> 
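
If rows are rejected or an error occurs, obloader writes details under the logs directory created next to the data file (the "Log files will be written to /data/dba/mysql/dump/logs" line above). A quick scan, as a sketch:

ls -l /data/dba/mysql/dump/logs/
grep -ril "error" /data/dba/mysql/dump/logs/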

3.2 Import multiple files

Key parameters for bypass import (the file pattern can be confirmed with the quick check after this list):

--direct

--rpc-port 2882

--sys-user=root

-t sys

--session-config=/data/dba/oceanbase/obl/ob-loader-dumper-4.3.3.1-RELEASE/conf/session.config.json

--parallel=3

-f /data/dba/mysql/dump/hxr_card_copy_dir             # directory containing the CSV files

--file-regular-expression="xa.*\.csv"                 # CSV file name pattern (a regular expression matching multiple files)
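
Before running the import, it helps to confirm which files the --file-regular-expression pattern actually matches under the -f directory. A sketch:

ls /data/dba/mysql/dump/hxr_card_copy_dir/ | grep -E 'xa.*\.csv'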

1) Import command

obloader -h 10.250.15.140 -P 2881 -uroot -padmin123 -D members_test \
    --table hxr_card_copy_dir --skip-header --block-size=10240 \
    --csv -f /data/dba/mysql/dump/hxr_card_copy_dir --column-separator='\t' --max-errors=10 \
    --file-regular-expression="xa.*\.csv" \
    --direct --rpc-port 2882 --sys-user=root -t sys --parallel=3 \
    --session-config=/data/dba/oceanbase/obl/ob-loader-dumper-4.3.3.1-RELEASE/conf/session.config.json

2) Import details

[root@qcloud-yuanshu-ob-test-15-140 dump]# obloader -h 10.250.15.140 -P 2881 -uroot -padmin123 -D members_test   --table hxr_card_copy_dir --skip-header --block-size=10240  --csv   -f /data/dba/mysql/dump/hxr_card_copy_dir --column-separator='\t'  --max-errors=10 --file-regular-expression="xa.*\.csv"  --direct --rpc-port 2882 --sys-user=root -t sys  --session-config=/data/dba/oceanbase/obl/ob-loader-dumper-4.3.3.1-RELEASE/conf/session.config.json --parallel=3
2025-07-22 14:40:26 [INFO] Parsed args:
[--csv] true
[--file-path] /data/dba/mysql/dump/hxr_card_copy_dir
[--column-separator] \t
[--skip-header] true
[--parallel] 3
[--session-config] /data/dba/oceanbase/obl/ob-loader-dumper-4.3.3.1-RELEASE/conf/session.config.json
[--host] 10.250.15.140
[--port] 2881
[--user] root
[--tenant] sys
[--password] ******
[--database] members_test
[--sys-user] root
[--table] [hxr_card_copy_dir]
[--max-errors] 10
[--file-regular-expression] xa.*\.csv
[--block-size] 10240
[--direct] true
[--rpc-port] 2882

2025-07-22 14:40:26 [WARN] Both `--max-errors` and `--max-discards` are not supported yet. They will be set to 0 by default
2025-07-22 14:40:26 [DEBUG] Failed to detect a valid hadoop home directory java.io.FileNotFoundException: HADOOP_HOME and hadoop.home.dir are unset.
        at org.apache.hadoop.util.Shell.checkHadoopHomeInner(Shell.java:520) ~[hadoop-common-3.3.6.jar:?]
        at org.apache.hadoop.util.Shell.checkHadoopHome(Shell.java:491) ~[hadoop-common-3.3.6.jar:?]
        at org.apache.hadoop.util.Shell.<clinit>(Shell.java:568) ~[hadoop-common-3.3.6.jar:?]
        at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:79) ~[hadoop-common-3.3.6.jar:?]
        at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:3879) ~[hadoop-common-3.3.6.jar:?]
        at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:3874) ~[hadoop-common-3.3.6.jar:?]
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3662) ~[hadoop-common-3.3.6.jar:?]
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:557) ~[hadoop-common-3.3.6.jar:?]
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:290) ~[hadoop-common-3.3.6.jar:?]
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:541) ~[hadoop-common-3.3.6.jar:?]
        at com.oceanbase.tools.loaddump.common.model.storage.StorageConfig.getFileSystem(StorageConfig.java:332) ~[ob-loader-dumper-4.3.3.1-RELEASE.jar:?]
        at com.oceanbase.tools.loaddump.client.LoadClient.rectifyFilePath(LoadClient.java:104) ~[ob-loader-dumper-4.3.3.1-RELEASE.jar:?]
        at com.oceanbase.tools.loaddump.client.LoadClient.access$000(LoadClient.java:68) ~[ob-loader-dumper-4.3.3.1-RELEASE.jar:?]
        at com.oceanbase.tools.loaddump.client.LoadClient$Builder.build(LoadClient.java:220) ~[ob-loader-dumper-4.3.3.1-RELEASE.jar:?]
        at com.oceanbase.tools.loaddump.cmd.Obloader.run(Obloader.java:269) ~[ob-loader-dumper-4.3.3.1-RELEASE.jar:?]
        at com.oceanbase.tools.loaddump.cmd.Obloader.main(Obloader.java:231) ~[ob-loader-dumper-4.3.3.1-RELEASE.jar:?]

2025-07-22 14:40:26 [DEBUG] setsid exited with exit code 0
2025-07-22 14:40:26 [DEBUG] field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName=Ops, valueName=Time, about=, interval=10, type=DEFAULT, value=[GetGroups])
2025-07-22 14:40:26 [DEBUG] field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName=Ops, valueName=Time, about=, interval=10, type=DEFAULT, value=[Rate of failed kerberos logins and latency (milliseconds)])
2025-07-22 14:40:26 [DEBUG] field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName=Ops, valueName=Time, about=, interval=10, type=DEFAULT, value=[Rate of successful kerberos logins and latency (milliseconds)])
2025-07-22 14:40:26 [DEBUG] field private org.apache.hadoop.metrics2.lib.MutableGaugeInt org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailures with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName=Ops, valueName=Time, about=, interval=10, type=DEFAULT, value=[Renewal failures since last successful login])
2025-07-22 14:40:26 [DEBUG] field private org.apache.hadoop.metrics2.lib.MutableGaugeLong org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailuresTotal with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName=Ops, valueName=Time, about=, interval=10, type=DEFAULT, value=[Renewal failures since startup])
2025-07-22 14:40:26 [DEBUG] UgiMetrics, User and group related metrics
2025-07-22 14:40:26 [DEBUG] Setting hadoop.security.token.service.use_ip to true
2025-07-22 14:40:26 [DEBUG]  Creating new Groups object
2025-07-22 14:40:26 [DEBUG] Trying to load the custom-built native-hadoop library...
2025-07-22 14:40:26 [DEBUG] Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError: no hadoop in java.library.path
2025-07-22 14:40:26 [DEBUG] java.library.path=/data/dba/oceanbase/obl/ob-loader-dumper-4.3.3.1-RELEASE/lib::/data/observer/lib/:/home/admin/observer/lib:/home/admin/observer/lib:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2025-07-22 14:40:26 [WARN] Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2025-07-22 14:40:26 [DEBUG] Falling back to shell based
2025-07-22 14:40:26 [DEBUG] Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping
2025-07-22 14:40:26 [DEBUG] Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000; warningDeltaMs=5000
2025-07-22 14:40:26 [DEBUG] Hadoop login
2025-07-22 14:40:26 [DEBUG] hadoop login commit
2025-07-22 14:40:26 [DEBUG] Using local user: UnixPrincipal: root
2025-07-22 14:40:26 [DEBUG] Using user: "UnixPrincipal: root" with name: root
2025-07-22 14:40:26 [DEBUG] User entry: "root"
2025-07-22 14:40:26 [DEBUG] UGI loginUser: root (auth:SIMPLE)
2025-07-22 14:40:26 [DEBUG] Starting: Acquiring creator semaphore for file:///
2025-07-22 14:40:26 [DEBUG] Acquiring creator semaphore for file:///: duration 0:00.000s
2025-07-22 14:40:26 [DEBUG] Starting: Creating FS file:///
2025-07-22 14:40:26 [DEBUG] Loading filesystems
2025-07-22 14:40:26 [DEBUG] hdfs:// = class org.apache.hadoop.hdfs.DistributedFileSystem from /data/dba/oceanbase/obl/ob-loader-dumper-4.3.3.1-RELEASE/lib/hadoop-hdfs-client-3.3.6.jar
2025-07-22 14:40:26 [DEBUG] webhdfs:// = class org.apache.hadoop.hdfs.web.WebHdfsFileSystem from /data/dba/oceanbase/obl/ob-loader-dumper-4.3.3.1-RELEASE/lib/hadoop-hdfs-client-3.3.6.jar
2025-07-22 14:40:26 [DEBUG] swebhdfs:// = class org.apache.hadoop.hdfs.web.SWebHdfsFileSystem from /data/dba/oceanbase/obl/ob-loader-dumper-4.3.3.1-RELEASE/lib/hadoop-hdfs-client-3.3.6.jar
2025-07-22 14:40:26 [DEBUG] file:// = class org.apache.hadoop.fs.LocalFileSystem from /data/dba/oceanbase/obl/ob-loader-dumper-4.3.3.1-RELEASE/lib/hadoop-common-3.3.6.jar
2025-07-22 14:40:26 [DEBUG] viewfs:// = class org.apache.hadoop.fs.viewfs.ViewFileSystem from /data/dba/oceanbase/obl/ob-loader-dumper-4.3.3.1-RELEASE/lib/hadoop-common-3.3.6.jar
2025-07-22 14:40:26 [DEBUG] har:// = class org.apache.hadoop.fs.HarFileSystem from /data/dba/oceanbase/obl/ob-loader-dumper-4.3.3.1-RELEASE/lib/hadoop-common-3.3.6.jar
2025-07-22 14:40:26 [DEBUG] http:// = class org.apache.hadoop.fs.http.HttpFileSystem from /data/dba/oceanbase/obl/ob-loader-dumper-4.3.3.1-RELEASE/lib/hadoop-common-3.3.6.jar
2025-07-22 14:40:26 [DEBUG] https:// = class org.apache.hadoop.fs.http.HttpsFileSystem from /data/dba/oceanbase/obl/ob-loader-dumper-4.3.3.1-RELEASE/lib/hadoop-common-3.3.6.jar
2025-07-22 14:40:26 [DEBUG] s3n:// = class org.apache.hadoop.fs.s3native.NativeS3FileSystem from /data/dba/oceanbase/obl/ob-loader-dumper-4.3.3.1-RELEASE/lib/hadoop-aws-3.3.6.jar
2025-07-22 14:40:27 [DEBUG] obs:// = class org.apache.hadoop.fs.obs.OBSFileSystem from /data/dba/oceanbase/obl/ob-loader-dumper-4.3.3.1-RELEASE/lib/hadoop-huaweicloud-3.4.0.jar
2025-07-22 14:40:27 [DEBUG] Looking for FS supporting file
2025-07-22 14:40:27 [DEBUG] looking for configuration option fs.file.impl
2025-07-22 14:40:27 [DEBUG] Filesystem file defined in configuration option
2025-07-22 14:40:27 [DEBUG] FS for file is class org.apache.hadoop.fs.LocalFileSystem
2025-07-22 14:40:27 [DEBUG] Creating FS file:///: duration 0:00.050s
2025-07-22 14:40:27 [INFO] Log files will be written to /data/dba/mysql/dump/hxr_card_copy_dir/logs
2025-07-22 14:40:27 [INFO] Trying to establish JDBC connection to `root@sys`...
2025-07-22 14:40:27 [DEBUG] JDBC url for business tenant: jdbc:oceanbase://10.250.15.140:2881/members_test
2025-07-22 14:40:27 [WARN] removeAbandoned is true, not use in production.
2025-07-22 14:40:27 [INFO] {dataSource-1} inited
2025-07-22 14:40:27 [INFO] Server Mode: OBMYSQL-4.3.3.1
2025-07-22 14:40:27 [INFO] Querying table column metadata, this might take a while...
2025-07-22 14:40:27 [DEBUG] Query column metadata for the table: "hxr_card_copy_dir" finished
2025-07-22 14:40:27 [INFO] Listing all matched data files in dest path...
2025-07-22 14:40:27 [INFO] File: "/data/dba/mysql/dump/hxr_card_copy_dir/logs/oceanbase-table-client/oceanbase-table-client-runtime.log" is empty, ignore it
2025-07-22 14:40:27 [INFO] File: "/data/dba/mysql/dump/hxr_card_copy_dir/logs/oceanbase-table-client/oceanbase-table-client-boot.log" is empty, ignore it
2025-07-22 14:40:27 [INFO] File: "/data/dba/mysql/dump/hxr_card_copy_dir/logs/bolt/remoting-rpc.log" is empty, ignore it
2025-07-22 14:40:27 [INFO] File: "/data/dba/mysql/dump/hxr_card_copy_dir/logs/bolt/remoting-tr-adapter.log" is empty, ignore it
2025-07-22 14:40:27 [INFO] File: "/data/dba/mysql/dump/hxr_card_copy_dir/logs/bolt/remoting-msg.log" is empty, ignore it
2025-07-22 14:40:27 [INFO] File: "/data/dba/mysql/dump/hxr_card_copy_dir/logs/bolt/remoting-http.log" is empty, ignore it
2025-07-22 14:40:27 [INFO] File: "/data/dba/mysql/dump/hxr_card_copy_dir/logs/bolt/common-error.log" is empty, ignore it
2025-07-22 14:40:27 [INFO] Cannot find a binding for "file:/data/dba/mysql/dump/hxr_card_copy_dir/.load.ckpt", ignore it
2025-07-22 14:40:27 [INFO] Cannot find a binding for "file:/data/dba/mysql/dump/hxr_card_copy_dir/logs/oceanbase-table-client/oceanbase-table-client.log", ignore it
2025-07-22 14:40:27 [INFO] Cannot find a binding for "file:/data/dba/mysql/dump/hxr_card_copy_dir/logs/oceanbase-table-client/oceanbase-table-client-monitor.log", ignore it
2025-07-22 14:40:27 [INFO] Cannot find a binding for "file:/data/dba/mysql/dump/hxr_card_copy_dir/logs/oceanbase-table-client/oceanbase-table-client-direct.log", ignore it
2025-07-22 14:40:27 [INFO] Cannot find a binding for "file:/data/dba/mysql/dump/hxr_card_copy_dir/logs/bolt/common-default.log", ignore it
2025-07-22 14:40:27 [INFO] Cannot find a binding for "file:/data/dba/mysql/dump/hxr_card_copy_dir/logs/bolt/connection-event.log", ignore it
2025-07-22 14:40:27 [INFO] Binding table: "hxr_card_copy_dir" to file: "file:/data/dba/mysql/dump/hxr_card_copy_dir/xaa.csv" finished
2025-07-22 14:40:27 [INFO] Binding table: "hxr_card_copy_dir" to file: "file:/data/dba/mysql/dump/hxr_card_copy_dir/xab.csv" finished
2025-07-22 14:40:27 [INFO] Binding table: "hxr_card_copy_dir" to file: "file:/data/dba/mysql/dump/hxr_card_copy_dir/xac.csv" finished
2025-07-22 14:40:27 [INFO] Find 3 data files in: "/data/dba/mysql/dump/hxr_card_copy_dir" success. Elapsed: 15.37 ms
2025-07-22 14:40:27 [INFO] Splitting data files into 10 GB logical chunks...
2025-07-22 14:40:27 [DEBUG] File: "/data/dba/mysql/dump/hxr_card_copy_dir/xab.csv" has not been splitted. 59999932 < 10737418240
2025-07-22 14:40:27 [DEBUG] File: "/data/dba/mysql/dump/hxr_card_copy_dir/xaa.csv" has not been splitted. 59999953 < 10737418240
2025-07-22 14:40:27 [DEBUG] File: "/data/dba/mysql/dump/hxr_card_copy_dir/xac.csv" has not been splitted. 29521438 < 10737418240
2025-07-22 14:40:27 [INFO] Split 3 data files to 3 logical chunks success. Elapsed: 95.48 ms
2025-07-22 14:40:27 [DEBUG] Ignore to clean any tables as --truncate-table or --delete-from-table is not specified
2025-07-22 14:40:27 [INFO] Bootstrap with Max Heap: 1 GB, Safe Heap: 1.42 GB
2025-07-22 14:40:27 [INFO] Filtering out empty tables...
2025-07-22 14:40:27 [INFO] Found 1 empty tables before executing. Elapsed: 7.630 ms
2025-07-22 14:40:27 [INFO] Direct load method for Table: 'hxr_card_copy_dir' is 'full'
2025-07-22 14:40:27,476 main WARN No Root logger was configured, creating default ERROR-level Root logger with Console appender
2025-07-22 14:40:27 [DEBUG] Using SLF4J as the default logging framework
2025-07-22 14:40:27 [DEBUG] -Dio.netty.noUnsafe: false
2025-07-22 14:40:27 [DEBUG] Java version: 8
2025-07-22 14:40:27 [DEBUG] sun.misc.Unsafe.theUnsafe: available
2025-07-22 14:40:27 [DEBUG] sun.misc.Unsafe.copyMemory: available
2025-07-22 14:40:27 [DEBUG] sun.misc.Unsafe.storeFence: available
2025-07-22 14:40:27 [DEBUG] java.nio.Buffer.address: available
2025-07-22 14:40:27 [DEBUG] direct buffer constructor: available
2025-07-22 14:40:27 [DEBUG] java.nio.Bits.unaligned: available, true
2025-07-22 14:40:27 [DEBUG] jdk.internal.misc.Unsafe.allocateUninitializedArray(int): unavailable prior to Java9
2025-07-22 14:40:27 [DEBUG] java.nio.DirectByteBuffer.<init>(long, {int,long}): available
2025-07-22 14:40:27 [DEBUG] sun.misc.Unsafe: available
2025-07-22 14:40:27 [DEBUG] -Dio.netty.tmpdir: /tmp (java.io.tmpdir)
2025-07-22 14:40:27 [DEBUG] -Dio.netty.bitMode: 64 (sun.arch.data.model)
2025-07-22 14:40:27 [DEBUG] -Dio.netty.maxDirectMemory: 1908932608 bytes
2025-07-22 14:40:27 [DEBUG] -Dio.netty.uninitializedArrayAllocationThreshold: -1
2025-07-22 14:40:27 [DEBUG] java.nio.ByteBuffer.cleaner(): available
2025-07-22 14:40:27 [DEBUG] -Dio.netty.noPreferDirect: false
2025-07-22 14:40:27 [DEBUG] -Dio.netty.native.workdir: /tmp (io.netty.tmpdir)
2025-07-22 14:40:27 [DEBUG] -Dio.netty.native.deleteLibAfterLoading: true
2025-07-22 14:40:27 [DEBUG] -Dio.netty.native.tryPatchShadedId: true
2025-07-22 14:40:27 [DEBUG] -Dio.netty.native.detectNativeLibraryDuplicates: true
2025-07-22 14:40:27 [DEBUG] -Dio.netty.eventLoopThreads: 8
2025-07-22 14:40:27 [DEBUG] -Dio.netty.globalEventExecutor.quietPeriodSeconds: 1
2025-07-22 14:40:27 [DEBUG] -Dio.netty.threadLocalMap.stringBuilder.initialSize: 1024
2025-07-22 14:40:27 [DEBUG] -Dio.netty.threadLocalMap.stringBuilder.maxSize: 4096
2025-07-22 14:40:27 [DEBUG] -Dio.netty.noKeySetOptimization: false
2025-07-22 14:40:27 [DEBUG] -Dio.netty.selectorAutoRebuildThreshold: 512
2025-07-22 14:40:27 [DEBUG] org.jctools-core.MpscChunkedArrayQueue: available
2025-07-22 14:40:27 [DEBUG] -Dio.netty.leakDetection.level: simple
2025-07-22 14:40:27 [DEBUG] -Dio.netty.leakDetection.targetRecords: 4
2025-07-22 14:40:27 [DEBUG] -Dio.netty.allocator.numHeapArenas: 8
2025-07-22 14:40:27 [DEBUG] -Dio.netty.allocator.numDirectArenas: 8
2025-07-22 14:40:27 [DEBUG] -Dio.netty.allocator.pageSize: 8192
2025-07-22 14:40:27 [DEBUG] -Dio.netty.allocator.maxOrder: 9
2025-07-22 14:40:27 [DEBUG] -Dio.netty.allocator.chunkSize: 4194304
2025-07-22 14:40:27 [DEBUG] -Dio.netty.allocator.smallCacheSize: 256
2025-07-22 14:40:27 [DEBUG] -Dio.netty.allocator.normalCacheSize: 64
2025-07-22 14:40:27 [DEBUG] -Dio.netty.allocator.maxCachedBufferCapacity: 32768
2025-07-22 14:40:27 [DEBUG] -Dio.netty.allocator.cacheTrimInterval: 8192
2025-07-22 14:40:27 [DEBUG] -Dio.netty.allocator.cacheTrimIntervalMillis: 0
2025-07-22 14:40:27 [DEBUG] -Dio.netty.allocator.useCacheForAllThreads: false
2025-07-22 14:40:27 [DEBUG] -Dio.netty.allocator.maxCachedByteBuffersPerChunk: 1023
2025-07-22 14:40:27 [DEBUG] -Dio.netty.processId: 4535 (auto-detected)
2025-07-22 14:40:27 [DEBUG] -Djava.net.preferIPv4Stack: false
2025-07-22 14:40:27 [DEBUG] -Djava.net.preferIPv6Addresses: false
2025-07-22 14:40:27 [DEBUG] Loopback interface: lo (lo, 127.0.0.1)
2025-07-22 14:40:27 [DEBUG] /proc/sys/net/core/somaxconn: 65535
2025-07-22 14:40:27 [DEBUG] -Dio.netty.machineId: 52:54:00:ff:fe:dd:86:70 (auto-detected)
2025-07-22 14:40:27 [DEBUG] -Dio.netty.allocator.type: pooled
2025-07-22 14:40:27 [DEBUG] -Dio.netty.threadLocalDirectBufferSize: 0
2025-07-22 14:40:27 [DEBUG] -Dio.netty.maxThreadLocalCharBufferSize: 16384
2025-07-22 14:40:27 [DEBUG] -Dio.netty.recycler.maxCapacityPerThread: 4096
2025-07-22 14:40:27 [DEBUG] -Dio.netty.recycler.ratio: 8
2025-07-22 14:40:27 [DEBUG] -Dio.netty.recycler.chunkSize: 32
2025-07-22 14:40:27 [DEBUG] -Dio.netty.recycler.blocking: false
2025-07-22 14:40:27 [DEBUG] -Dio.netty.recycler.batchFastThreadLocalOnly: true
2025-07-22 14:40:27 [DEBUG] -Dio.netty.buffer.checkAccessible: true
2025-07-22 14:40:27 [DEBUG] -Dio.netty.buffer.checkBounds: true
2025-07-22 14:40:27 [DEBUG] Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector@19f61c31
2025-07-22 14:40:27 [DEBUG] Use c.l.d.LiteBlockingWaitStrategy as available cpu(s) is 4
2025-07-22 14:40:27 [INFO] Create 512 slots for ring buffer finished. [0.0.0.0]
2025-07-22 14:40:27 [INFO] Start 8 database writer threads finished. [0.0.0.0]
2025-07-22 14:40:27 [DEBUG] Created instance IOStatisticsContextImpl{id=1, threadId=50, ioStatistics=counters=();
gauges=();
minimums=();
maximums=();
means=();
}
2025-07-22 14:40:27 [DEBUG] Created instance IOStatisticsContextImpl{id=2, threadId=51, ioStatistics=counters=();
gauges=();
minimums=();
maximums=();
means=();
}
2025-07-22 14:40:27 [INFO] Start 8 record file reader threads success
2025-07-22 14:40:28 [DEBUG] Created instance IOStatisticsContextImpl{id=3, threadId=52, ioStatistics=counters=();
gauges=();
minimums=();
maximums=();
means=();
}
2025-07-22 14:40:28 [DEBUG] Automatic reset batch size 0 to 700 for loading table "hxr_card_copy_dir"
2025-07-22 14:40:28 [DEBUG] Automatic reset batch size 0 to 700 for loading table "hxr_card_copy_dir"
2025-07-22 14:40:28 [DEBUG] Automatic reset batch size 0 to 700 for loading table "hxr_card_copy_dir"
2025-07-22 14:40:28 [INFO] Direct load for table "hxr_card_copy_dir" begins in async mode
2025-07-22 14:40:28 [INFO] Direct load for table "hxr_card_copy_dir" waiting until begin phase is done...
2025-07-22 14:40:29 [INFO] Direct load for table "hxr_card_copy_dir" begin phase finished
2025-07-22 14:40:32 [INFO] File: "/data/dba/mysql/dump/hxr_card_copy_dir/xac.csv" has been parsed finished
2025-07-22 14:40:33 [INFO] 

1. Enqueue Performance Monitor: 
-------------------------------------------------------------------------------------------------------
 Dimension \ Metric |             Tps              |          Throughput          |       Buffer       
-------------------------------------------------------------------------------------------------------
     1.sec.avg      |    234568.38 Records/sec     |         19.01 MB/sec         |      7 Slots       
     1.min.avg      |     234780.0 Records/sec     |         19.03 MB/sec         |      7 Slots       
       Total        |       1183596 Records        |           95.9 MB            |      7 Slots       
-------------------------------------------------------------------------------------------------------

2. Dequeue Performance Monitor: 
-------------------------------------------------------------------------------------------------------
 Dimension \ Metric |             Tps              |          Throughput          |       Buffer       
-------------------------------------------------------------------------------------------------------
     1.sec.avg      |    233336.16 Records/sec     |         18.91 MB/sec         |      7 Slots       
     1.min.avg      |     233520.0 Records/sec     |         18.92 MB/sec         |      7 Slots       
       Total        |       1182896 Records        |           95.9 MB            |      7 Slots       
-------------------------------------------------------------------------------------------------------


2025-07-22 14:40:34 [INFO] File: "/data/dba/mysql/dump/hxr_card_copy_dir/xab.csv" has been parsed finished
2025-07-22 14:40:35 [INFO] File: "/data/dba/mysql/dump/hxr_card_copy_dir/xaa.csv" has been parsed finished
2025-07-22 14:40:35 [INFO] Commit load task on table "hxr_card_copy_dir". This might take a while. Please wait...
2025-07-22 14:40:37 [INFO] ----------   Finished Tasks: 2       Running Tasks: 1        Progress: 100.00%       ----------
2025-07-22 14:40:38 [INFO] 

1. Enqueue Performance Monitor: 
-------------------------------------------------------------------------------------------------------
 Dimension \ Metric |             Tps              |          Throughput          |       Buffer       
-------------------------------------------------------------------------------------------------------
     1.sec.avg      |    198562.15 Records/sec     |         16.1 MB/sec          |      4 Slots       
     1.min.avg      |    229103.27 Records/sec     |         18.57 MB/sec         |      4 Slots       
       Total        |       1992807 Records        |           161.6 MB           |      4 Slots       
-------------------------------------------------------------------------------------------------------

2. Dequeue Performance Monitor: 
-------------------------------------------------------------------------------------------------------
 Dimension \ Metric |             Tps              |          Throughput          |       Buffer       
-------------------------------------------------------------------------------------------------------
     1.sec.avg      |    198153.34 Records/sec     |         16.07 MB/sec         |      2 Slots       
     1.min.avg      |    228044.75 Records/sec     |         18.48 MB/sec         |      2 Slots       
       Total        |       1992807 Records        |           161.6 MB           |      2 Slots       
-------------------------------------------------------------------------------------------------------


2025-07-22 14:40:42 [INFO] ----------   Finished Tasks: 2       Running Tasks: 1        Progress: 100.00%       ----------
2025-07-22 14:40:48 [INFO] ----------   Finished Tasks: 2       Running Tasks: 1        Progress: 100.00%       ----------
2025-07-22 14:40:51 [INFO] ----------   Finished Tasks: 2       Running Tasks: 1        Progress: 100.00%       ----------
2025-07-22 14:40:54 [INFO] ----------   Finished Tasks: 2       Running Tasks: 1        Progress: 100.00%       ----------
2025-07-22 14:40:57 [INFO] ----------   Finished Tasks: 2       Running Tasks: 1        Progress: 100.00%       ----------
2025-07-22 14:41:00 [INFO] ----------   Finished Tasks: 2       Running Tasks: 1        Progress: 100.00%       ----------
2025-07-22 14:41:03 [INFO] ----------   Finished Tasks: 2       Running Tasks: 1        Progress: 100.00%       ----------
2025-07-22 14:41:06 [INFO] ----------   Finished Tasks: 2       Running Tasks: 1        Progress: 100.00%       ----------
2025-07-22 14:41:07 [INFO] Load task on table "hxr_card_copy_dir" is committed successfully! Elapsed: 0ms
2025-07-22 14:41:08 [DEBUG] Drain and halt the worker group finished
2025-07-22 14:41:08 [INFO] [Timer] Table: hxr_card_copy_dir, Write Elapsed: 6.16s, Commit Elapsed: 0ms, Total Elapsed: 38.31s
2025-07-22 14:41:08 [DEBUG] Shutdown task context finished
2025-07-22 14:41:08 [INFO] ----------   Finished Tasks: 3       Running Tasks: 0        Progress: 100.00%       ----------
2025-07-22 14:41:08 [INFO] 

All Load Tasks Finished: 

----------------------------------------------------------------------------------------------------------------------------
        No.#        |        Type        |             Name             |            Count             |       Status       
----------------------------------------------------------------------------------------------------------------------------
         1          |       TABLE        |      hxr_card_copy_dir       |      1992807 -> 1992807      |      SUCCESS       
----------------------------------------------------------------------------------------------------------------------------

Total Count: 1992807            End Time: 2025-07-22 14:41:08


2025-07-22 14:41:08 [INFO] Load record finished. Total Elapsed: 41.00 s
2025-07-22 14:41:08 [DEBUG] FileSystem.close() by method: org.apache.hadoop.fs.FilterFileSystem.close(FilterFileSystem.java:529)); Key: (root (auth:SIMPLE))@file://; URI: file:///; Object Identity Hash: 41f4fe5
2025-07-22 14:41:08 [DEBUG] FileSystem.close() by method: org.apache.hadoop.fs.RawLocalFileSystem.close(RawLocalFileSystem.java:895)); Key: null; URI: file:///; Object Identity Hash: 15f8701f
2025-07-22 14:41:08 [INFO] System exit 0
[root@qcloud-yuanshu-ob-test-15-140 dump]# 

3) Verify in the database

The table was truncated and confirmed empty before the load; the second count below was taken after the load finished.

MySQL [members_test]> truncate table hxr_card_copy_dir;
Query OK, 0 rows affected (0.07 sec)

MySQL [members_test]> select count(*) from hxr_card_copy_dir;
+----------+
| count(*) |
+----------+
|        0 |
+----------+
1 row in set (0.03 sec)

MySQL [members_test]> select count(*) from hxr_card_copy_dir;
+----------+
| count(*) |
+----------+
|  1992807 |
+----------+
1 row in set (0.04 sec)

MySQL [members_test]> 

4. Summary

 Import method                 | Table             | Rows    | Parallelism | Elapsed (s)
-------------------------------+-------------------+---------+-------------+------------
 SQL file                      | hxr_card_copy_dir | 1992807 | -           | 63
 LOAD DATA (csv)               | hxr_card_copy_dir | 1992807 | -           | 92
 obloader, single csv file     | hxr_card_copy_dir | 1992807 | 4           | 36
 obloader, multiple csv files  | hxr_card_copy_dir | 1992807 | 4           | 45

 

 
