This post is about how to migrate Oracle ASM data to another storage system. In my environment, the migration is from HDD storage to SSD storage, and the redo blocksize also needs to be changed to 4K for better performance.
1. Preparation for migration
1.1 Back up the whole database
RMAN> backup database format '/backup/rman_full_%U' tag='db_full_bak' plus archivelog;
1.2 Verify file types and counts in ASM
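The original query block was lost to formatting; as a sketch, the rollup query reused later in section 2.3 for verification works here to record a before-migration baseline of file types and counts per diskgroup:

sqlplus "/as sysasm"
select d.name group_name, f.type, count(*) cnt
from v$asm_file f join v$asm_diskgroup d on f.group_number = d.group_number
group by rollup (d.name, f.type);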
1.3 Verify the asm_diskstring parameter
The asm_diskstring parameter must include the path of the new storage.
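A sketch of the check and the change, run in the ASM instance; the device paths below are placeholders for this environment's HDD and SSD paths, not values from the original post:

SQL> show parameter asm_diskstring
-- append the new SSD path; keep the existing HDD path until the migration completes
SQL> alter system set asm_diskstring='/dev/mapper/hdd*','/dev/mapper/ssd*' scope=both sid='*';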
- Query OCR and voting disk information
Because the voting disk is backed up automatically by Oracle, there is no need to back it up again.
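The standard clusterware tools cover this check; run them as root or the grid owner:

# OCR integrity and locations
ocrcheck
# voting disk locations
crsctl query css votedisk
# automatic OCR backups kept by Oracle
ocrconfig -showbackup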
- Back up ASM diskgroup metadata
ASMCMD [+] > md_backup /home/grid/ocrvote.bak -G OCR
ASMCMD [+] > md_backup /home/grid/datadg.bak -G DBDATA
ASMCMD [+] > md_backup /home/grid/fradg.bak -G FRA
srvctl config asm > /home/grid/ocr_config.bak
2. Migrating ASM diskgroups
2.1 Adding new disks to the diskgroup
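A minimal sketch, assuming the DBDATA diskgroup from section 1 and hypothetical SSD device paths; the rebalance power of 8 is just a moderate starting point:

SQL> alter diskgroup DBDATA add disk '/dev/mapper/ssd01','/dev/mapper/ssd02' rebalance power 8;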
2.2 Dropping old disks
Also, monitor the rebalance and adjust the rebalance power if needed; see the sketch below.
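A sketch of the drop and the monitoring; HDD_DISK01/HDD_DISK02 are hypothetical ASM disk names, so check v$asm_disk for the real ones first:

SQL> alter diskgroup DBDATA drop disk HDD_DISK01, HDD_DISK02;
-- monitor progress; EST_MINUTES estimates the remaining time
SQL> select group_number, operation, state, power, est_minutes from v$asm_operation;
-- adjust the rebalance power on the fly if the window is tight or performance suffers
SQL> alter diskgroup DBDATA rebalance power 4;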
Repeat steps 2.1 - 2.2 for every diskgroup in ASM, one diskgroup at a time.
2.3 Verify migration result
sqlplus "/as sysasm"

set line 200 pagesize 200
col name for a10
col path for a40
select group_number, name, OS_MB, TOTAL_MB, FREE_MB, HOT_USED_MB, COLD_USED_MB, PATH, SECTOR_SIZE
from v$asm_disk order by 1;

col PATH for a30
col DG_NAME for a10
col DG_STATE for a10
col FAILGROUP for a15
col name for a20
set lines 200 pages 10000
select dg.name dg_name, dg.state dg_state, dg.type, d.disk_number dsk_no, d.path, d.mount_status, d.FAILGROUP, d.name, d.state
from v$asm_diskgroup dg, v$asm_disk d
where dg.group_number = d.group_number
order by dg_name, dsk_no;

select d.name group_name, f.type, count(*) cnt
from v$asm_file f join v$asm_diskgroup d on f.group_number = d.group_number
group by rollup (d.name, f.type);
3. Modify the redo log blocksize
We need to set the hidden parameter _disk_sector_size_override to allow creating redo logs with a 4K blocksize on disks that report a 512-byte sector size. This parameter is dynamic, so no instance restart is needed.
alter system set "_disk_sector_size_override"=TRUE scope=both;
alter database add logfile thread 1 group 9 ('+FRA') size 2048M blocksize 4096;
alter database add logfile thread 1 group 10 ('+FRA') size 2048M blocksize 4096;
alter database add logfile thread 1 group 11 ('+FRA') size 2048M blocksize 4096;
alter database add logfile thread 1 group 12 ('+FRA') size 2048M blocksize 4096;
alter database add logfile thread 2 group 13 ('+FRA') size 2048M blocksize 4096;
alter database add logfile thread 2 group 14 ('+FRA') size 2048M blocksize 4096;
alter database add logfile thread 2 group 15 ('+FRA') size 2048M blocksize 4096;
alter database add logfile thread 2 group 16 ('+FRA') size 2048M blocksize 4096;
Ensure the log groups to be dropped are in INACTIVE status; by switching logs and issuing checkpoints, we can put the target redo groups into the INACTIVE state.
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
SQL> alter system checkpoint;
SQL> alter system checkpoint;
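Once the old groups show INACTIVE, drop them and verify the new blocksize. A sketch, assuming the original 512-byte groups are numbered 1 through 8 (check v$log in your environment first):

SQL> select group#, thread#, status, blocksize from v$log;
-- groups 1-8 are an assumption; drop each old group only once it is INACTIVE
SQL> alter database drop logfile group 1;
SQL> alter database drop logfile group 2;
-- repeat for the remaining old groups, then confirm only 4K groups remain
SQL> select group#, thread#, status, blocksize from v$log;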
4. Risk assessment
- If the ASM diskgroup is very large, the rebalance may not complete within the change window
- Setting the rebalance power too high can degrade database performance
It is recommended to schedule multiple change windows if the data volume is large.
5. Backout
Adding disks is a low-risk operation, unless the rebalance power is set so high that it impacts performance. In that case, lower the rebalance power:
alter diskgroup DG_NAME rebalance power 1;
If something goes wrong while dropping disks, stop the drop operation with the command below; it is only available while the disk status is DROPPING in v$asm_disk:
ALTER DISKGROUP data1 UNDROP DISKS;
If something goes wrong with the OCR or voting disk during the migration, or the RAC cannot be started afterwards, follow the steps below to restore the OCR:
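The original command block was lost to formatting. Below is a sketch of the restore flow described in the referenced MOS notes, assuming the md_backup files taken in section 1; the OCR backup path and cluster name are placeholders, so check ocrconfig -showbackup for the real location:

# stop the clusterware on all nodes
crsctl stop crs -f
# on one node, start CRS in exclusive mode without the CRS daemon
crsctl start crs -excl -nocrs
# recreate the OCR diskgroup from the metadata backup taken in section 1
asmcmd md_restore /home/grid/ocrvote.bak --full -G OCR
# restore the OCR from an automatic backup (placeholder path)
ocrconfig -restore $GRID_HOME/cdata/<cluster_name>/backup00.ocr
# recreate the voting disk in the restored diskgroup
crsctl replace votedisk +OCR
# restart the clusterware normally
crsctl stop crs -f
crsctl start crs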
6. Alternative way of migrating
The previous steps apply to online migration; we can also migrate with an RMAN copy, which requires stopping the database.
Assume we have created a new diskgroup named DATA_NEW; the goal is to restore the whole database to the DATA_NEW diskgroup.
- Back up the database as a copy into the new diskgroup
RMAN> run {
allocate channel c1 type disk;
allocate channel c2 type disk;
backup as copy database format '+data_new';
release channel c1;
release channel c2;
}
RMAN> list copy;
- After backing up, restart the database in mount mode
srvctl stop database -d DB_NAME
srvctl start database -d DB_NAME -startoption mount
- Switch the database to the copy and recover it
RMAN> switch database to copy;
RMAN> recover database;
- Restart the database in normal mode
srvctl stop database -d DB_NAME
srvctl start database -d DB_NAME
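To confirm the result, a quick check that the database files now live in DATA_NEW; note that switch database to copy moves only datafiles, so tempfiles, redo logs, and controlfiles may still point at the old diskgroup and need separate handling:

SQL> select name from v$datafile;
SQL> select name from v$tempfile;
SQL> select member from v$logfile;
SQL> select name from v$controlfile;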
References:
How To Recreate A Disk Group Used By CRS
OCR / Vote disk Maintenance Operations: (ADD/REMOVE/REPLACE/MOVE) (Doc ID 428681.1)
Software Patch Level and 12c Grid Infrastructure OCR Backup/Restore (Doc ID 1558920.1)
How to Restore ASM-based OCR After a Complete Loss of the CRS Diskgroup on Linux/Unix (Doc ID 2331776.1)
EOF