20171130 QinHD oracle migrate
Source DB: RHEL6, 11.2.0.4, single instance; SYSAUX lost, but the database can still start; user rm owns 973 regular tables and 73 LOB tables.
Target DB: RHEL6, 11.2.0.4 RAC; user rm, same 973 regular tables and 73 LOB tables.
Gigabit link: theoretically 1000/8 = 125 MB/s; after IO overhead, realistically about 1000/20 = 50 MB/s.
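The bandwidth figures above translate into a rough copy-time budget. A quick sketch (DATA_GB is a hypothetical total volume, not a figure from this migration):

```shell
# Back-of-the-envelope transfer estimate for the gigabit link.
BW_MBPS=1000                      # link bandwidth in megabits/s
THEORETICAL=$((BW_MBPS / 8))      # ~125 MB/s on the wire
PRACTICAL=$((BW_MBPS / 20))       # ~50 MB/s after disk IO overhead
DATA_GB=200                       # assumed data volume, adjust to your case
SECS=$((DATA_GB * 1024 / PRACTICAL))
echo "theoretical ${THEORETICAL} MB/s, practical ${PRACTICAL} MB/s"
echo "estimated time for ${DATA_GB} GB: ${SECS} s (~$((SECS / 60)) min)"
```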
The conventional approaches fail: RMAN, Data Pump, transportable tablespaces, exp in user (owner) mode, and exp in full-database mode all fail.
exp in table mode and in tablespace mode do work; however, the metadata part of the tablespace export fails, so transportable tablespaces are still not an option.
###Option 1
exp table by table works, but any table with LOBs is painfully slow (long enough to make you cry).
http://m.blog.csdn.net/msdnchina/article/details/78153875
The effect of imp's COMMIT:
imp's COMMIT is mainly used to relieve rollback-segment pressure when loading large tables, avoiding ORA-01555 during long imports. Combined with BUFFER it commits in batches of the buffer size, which effectively reduces undo pressure.
However, for tables with LOB, LONG, REF, or ROWID columns, commit=y commits row by row, and row-by-row commits on a bulk load are well known to kill performance, so the choice depends on the table's column types.
There is one more factor: when committing in buffer-sized batches, a tablespace problem or a lost connection can fail the imp with only part of the data committed. The impact is significant: rerunning the import will hit large numbers of primary-key violations, and imp's handling of those errors costs a lot of performance. This is probably why Oracle's imp defaults to commit=n.
With sufficiently large rollback segments, do not add commit=y; frequent commits hurt efficiency.
Give imp a large BUFFER.
The effect of imp's BUFFER:
In production on Oracle 9i (SGA about 8 GB, PGA about 6 GB), a colleague needed to import a table of over 2.7 GB into the production database. With imp's default buffer of about 30 KB the import had produced no result after roughly an hour. Since the PGA was large, we agreed to set buffer=409600000 (about 400 MB), and the import finished within 15 minutes. Using impdp with parallel=n would of course be better still.
The imp BUFFER parameter defines how much data is read from the export file per fetch; the larger it is, the fewer reads the import process makes, and the faster the import. The right size depends on the application and database scale; a few hundred MB is usually plenty.
Use INDEXES=N and create the indexes manually after the data is loaded,
or import twice: a first pass for the data, a second pass for the indexes:
imp user/pwd fromuser=user1 touser=user2 file=/tmp/imp_db_pipe1 commit=y feedback=10000 buffer=10240000 ignore=y rows=y indexes=n
imp user/pwd fromuser=user1 touser=user2 file=/tmp/imp_index_pipe1 commit=y feedback=10000 buffer=10240000 ignore=y rows=n indexes=y
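The file=/tmp/imp_db_pipe1 in the commands above suggests a named-pipe setup, which also lets a compressed dump be imported without the uncompressed file ever touching disk. A sketch (file names and paths are hypothetical; the exp/imp invocations are commented out so the skeleton runs anywhere):

```shell
# Stream a compressed dump into imp through a named pipe.
PIPE="$(mktemp -u /tmp/imp_db_pipe.XXXXXX)"
mkfifo "$PIPE"
# Feeder (background): decompress the dump into the pipe.
# gunzip -c /backup/rm_tables.dmp.gz > "$PIPE" &
# Consumer: imp reads from the pipe exactly as from a regular dump file.
# imp user/pwd fromuser=rm touser=rm file="$PIPE" commit=y feedback=10000 \
#     buffer=10240000 ignore=y rows=y indexes=n
echo "pipe ready: $PIPE"
```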
imp -help
Keyword Description (Default) Keyword Description (Default)
--------------------------------------------------------------------------
USERID username/password FULL import entire file (N)
BUFFER size of data buffer FROMUSER list of owner usernames
FILE input files (EXPDAT.DMP) TOUSER list of usernames
SHOW just list file contents (N) TABLES list of table names
IGNORE ignore create errors (N) RECORDLENGTH length of IO record
GRANTS import grants (Y) INCTYPE incremental import type
INDEXES import indexes (Y) COMMIT commit array insert (N)
ROWS import data rows (Y) PARFILE parameter filename
LOG log file of screen output CONSTRAINTS import constraints (Y)
When a keyword's default is Y, that behavior applies even if the keyword is omitted.
For example, INDEXES defaults to Y, so indexes are imported even when the imp command line does not mention the parameter.
imp buffer=10240000 ignore=y rows=y indexes=n
###Option 2
Use SQL Developer (a GUI tool) to copy the 973 regular tables without landing the data on disk; migrate the remaining 73 LOB tables with datacopy.
In fact, datacopy could migrate all of the tables directly; its degree-of-parallelism option shortens the migration window, whereas exp/imp has no parallelism.
Note that the same table structures must be created on the new database first.
cp /home/oracle/shihfa/datacopy_linux64_10204 /home/oracle/shihfa/datacopy
chmod u+x datacopy
[oracle@chnweb1 shihfa]$ ./datacopy
DataCopy: Fast Oracle Data Copy Utility, Release 3.0.1
(c) Copyright Lou Fangxin (AnySQL.net) 2010, all rights reserved.
Usage: datacopy keyword=value [,keyword=value,...]
Valid Keywords:
user = username/password@tnsname for source and target.
user1 = username/password@tnsname for source database.
user2 = username/password@tnsname for target database.
table = table name for both source and target.
tables = table name list for both source and target.
table1 = source table name to query data from.
table2 = target table name to insert data into.
query1 = select SQL for source database.
query2 = insert SQL for target database.
read = set DB_FILE_MULTIBLOCK_READ_COUNT at session level
sort = set SORT_AREA_SIZE at session level (UNIT:MB)
hash = set HASH_AREA_SIZE at session level (UNIT:MB)
serial = set _serial_direct_read to TRUE at session level
array = array fetch size
rows = print log information for every given rows.
long = maximum size for long, long raw, CLOB, BLOB columns.
crypt = encrypt the connection info only, no data copy (YES/NO).
parfile = read command option from parameter file
* wait = wait time in microsecond after each array.
* direct = direct mode (YES/NO).
* nolog = no archive log for direct mode (YES/NO).
* parallel= allow parallel load for direct mode (YES/NO).
* sync = INSERT,UPDATE,DELETE,UPDINS,INSUPD,DELINS,MERGE,REPLACE.
* unique = primary key or unique key columns of target table.
* conflict= conflict columns for update on target table.
* filler = filler columns (exclude columns) for target table.
* charset = character set name of the target database.
* ncharset= national character set name of the target database.
* safe = double column buffer for character set conversion.
* degree = parallelize data copy degree (2-128).
log = log file name for screen messages.
Notes:
datacopy user1=scott/tiger user2=scott/tiger table=emp
datacopy user1=scott/tiger user2=scott/tiger table=emp direct=yes
datacopy user1=scott/tiger user2=scott/tiger table1=emp table2=emp_his
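The 73 LOB tables can be driven through datacopy with a small wrapper loop. A sketch with placeholder credentials and a hypothetical table list; DRYRUN=1 only prints the commands so the loop can be sanity-checked without an Oracle client:

```shell
# Run datacopy over a list of LOB tables, one log file per table.
DRYRUN=1
SRC="rm/passwd_old@to_old"
DST="rm/passwd_new@to_new"
TABLES="UNIMAS_DC360S_GABDA_54 UNIMAS_DC360S_GABDA_55"   # hypothetical list
for t in $TABLES; do
  CMD="./datacopy user1=$SRC user2=$DST table=$t parallel=yes degree=8 log=${t}.log"
  if [ "$DRYRUN" = 1 ]; then echo "$CMD"; else $CMD; fi
done
```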
###tnsnames.ora for the old and new databases
to_new =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.26.139.91)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = orcl)
)
)
to_old=
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.26.139.80)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = orcl)
)
)
###Create a dblink on the new database for table-difference comparison
create public database link DBL_TO_OLD
connect to rm IDENTIFIED BY "passwd_old"
using 'to_old';
##Create the table structures on the new database
create table UNIMAS_DC360S_GABDA_54 as select * from rm.UNIMAS_DC360S_GABDA_54@dbl_to_old where 1=0;
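The CTAS above has to be repeated for every table to be copied. A sketch that generates the DDL from a one-name-per-line list file (the two names written to the list here are placeholders):

```shell
# Generate "empty copy over the dblink" DDL for each table in a list file.
LIST=$(mktemp)
printf '%s\n' UNIMAS_DC360S_GABDA_54 UNIMAS_DC360S_GABDA_55 > "$LIST"
SQL=$(while read -r t; do
  echo "create table $t as select * from rm.${t}@dbl_to_old where 1=0;"
done < "$LIST")
echo "$SQL"    # paste or spool this into sqlplus on the new database
rm -f "$LIST"
```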
##Run the migration on the old host
datacopy user1=rm/passwd_old@to_old user2=rm/passwd_new@to_new table=UNIMAS_DC360S_GABDA_54 parallel=yes degree=8
###Follow-up
Afterwards, once the data has been compared, migrate the constraints, indexes, triggers, procedures, functions, packages, etc. with DBArtisan.
