Oracle disk space maintenance

/*********************************************Oracle disk space maintenance
1. Linux disk recognition
2. AIX disk recognition
3. Adding, dropping, and replacing data disks
4. Ops log and related notes/tests
***************************************************/

1. Linux disk recognition (all nodes)
fdisk -l
/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sdg
/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sdh
cp /etc/udev/rules.d/99-oracle-asmdevices.rules /etc/udev/rules.d/99-oracle-asmdevices.rules.0402bak
vi /etc/udev/rules.d/99-oracle-asmdevices.rules   (edit the WWID and the alias for the new disks)
KERNEL=="sd*", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u --device=/dev/$name", RESULT=="360050768018087f690000000000003ab", SYMLINK+="asm-diskc", OWNER="grid", GROUP="asmadmin", MODE="0660"
ll /dev/asm* |grep asm-diskg
ll /dev/asm* |grep asm-diskh
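
A small verification sketch for the steps above (assuming the new devices are /dev/sdg and /dev/sdh, as in the commands, and that the rules have already been reloaded on this node); it only confirms that the WWID reported by scsi_id appears in the rules file and that the symlinks exist with the right ownership:

# run as root (bash) on every node
RULES=/etc/udev/rules.d/99-oracle-asmdevices.rules
for dev in sdg sdh; do
    wwid=$(/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$dev)
    echo "/dev/$dev -> WWID $wwid"
    # the WWID must appear in a RESULT== clause of the rules file on this node
    grep -q "$wwid" "$RULES" || echo "WARNING: $wwid not found in $RULES"
done
# the symlinks must exist and be owned by grid:asmadmin
ls -l /dev/asm-disk[gh]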


2. AIX disk recognition (all nodes)
su - root
for i in {401..404}; do ls -l /dev/rhdisk$i; done
for i in {401..404}; do ls -l /dev/hdisk$i; done
for i in {401..404}; do lsattr -El hdisk$i|grep reserve_policy; done
for i in {401..404}; do chown -R grid:asmadmin /dev/rhdisk$i; done
for i in {401..404}; do chmod 660 /dev/rhdisk$i; done
for i in {401..404} ; do chdev -l hdisk$i -a reserve_policy=no_reserve; done
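
A combined pre-check sketch for the same hypothetical hdisk401-hdisk404 range; it prints ownership, permissions, and reserve_policy in one pass so anything missed can be corrected before the disks are handed to ASM:

# run as root (bash/ksh) on every node
for i in 401 402 403 404; do
    echo "== hdisk$i =="
    ls -l /dev/rhdisk$i                    # owner should be grid:asmadmin, mode 660
    lsattr -El hdisk$i -a reserve_policy   # should report no_reserve
done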


3. Key operations
su - grid
sqlplus / as sysasm
select group_number, name,state,type,total_MB / 1024 total_GB,free_mb / 1024 FREE_GB,free_mb / total_MB * 100 free_per,(case when free_mb / total_mb * 100 < 15 then '*' else '' end) care from V$ASM_DISKGROUP;
select GROUP_NUMBER,DISK_NUMBER,HEADER_STATUS,NAME,TOTAL_MB,FREE_MB,FREE_MB/TOTAL_MB*100 free_per,PATH,STATE from V$ASM_DISK WHERE GROUP_NUMBER!=0;
--Check rebalance progress (run as the grid user; the oracle user can run it but gets no rows back); a polling sketch follows the commented examples below
select * from V$ASM_OPERATION;
show parameter background_dump_dest   (check the ASM alert log location)
/*
alter diskgroup DATA add disk '/dev/asm-diskg','/dev/asm-diskh' rebalance power 11;
alter diskgroup DATA drop disk 'DATA_0044' rebalance power 8;
alter diskgroup DATA add disk '/dev/asm-sde','/dev/asm-sdf','/dev/asm-sdg' drop disk DATA_0000,DATA_0001 rebalance power 8;
*/
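
A minimal polling sketch for the progress check above (assumes it is run as the grid user with the ASM instance environment set; the 60-second interval is arbitrary):

# run as grid (bash)
while true; do
    rows=$(sqlplus -S / as sysasm <<'EOF'
set heading off feedback off pagesize 0
select count(*) from v$asm_operation where operation = 'REBAL';
EOF
)
    [ "$(echo $rows | tr -d ' ')" = "0" ] && { echo "rebalance finished"; break; }
    sqlplus -S / as sysasm <<'EOF'
set linesize 200
select group_number, operation, state, power, sofar, est_work, est_minutes from v$asm_operation;
EOF
    sleep 60
done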

4. Ops log and related knowledge points
--Difference between hdisk and rhdisk (AIX)
01. hdisk is the block device; rhdisk is the character (raw) device.
02. On AIX, ASM must use character devices as ASM disks, not block devices (using block devices requires ASMLib support); a quick way to tell them apart is shown below.
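
A quick illustration (hypothetical disk number 401); the first character of the mode string is b for the block device and c for the character device, and ASM should be pointed at the latter:

# on AIX, compare the two device files of the same disk
ls -l /dev/hdisk401 /dev/rhdisk401
# output resembles:
#   brw-------  ...  /dev/hdisk401    <- block device, not used directly by ASM
#   crw-------  ...  /dev/rhdisk401   <- character (raw) device, give this to ASM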

--log:0
Basic views
select * from V$ASM_DISK
select * from V$ASM_DISKGROUP
select * from V$ASM_OPERATION

log:19.10
Notes on the HEADER_STATUS column of v$asm_disk
17:49:59 SYS@+ASM1>alter diskgroup DATA add disk '/dev/rhdisk137','/dev/rhdisk138','/dev/rhdisk139','/dev/rhdisk140','/dev/rhdisk141' rebalance power 8;
alter diskgroup DATA add disk '/dev/rhdisk137','/dev/rhdisk138','/dev/rhdisk139','/dev/rhdisk140','/dev/rhdisk141' rebalance power 8
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15033: disk '/dev/rhdisk141' belongs to diskgroup "DATA_RG"

/*
MEMBER: the disk already belongs to a disk group -- do not add it again
CANDIDATE: the normal status for a disk that is about to be added
(a pre-check sketch follows below)
*/
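
A minimal pre-check sketch for this (the /dev/rhdisk137-141 paths are taken from the error above; adjust to the disks actually being added). It lists the header status of the candidate paths so a MEMBER disk is caught before the ALTER DISKGROUP is issued:

# run as grid (bash)
sqlplus -S / as sysasm <<'EOF'
set linesize 200
col path format a30
select path, header_status, mount_status, group_number
  from v$asm_disk
 where path in ('/dev/rhdisk137','/dev/rhdisk138','/dev/rhdisk139',
                '/dev/rhdisk140','/dev/rhdisk141')
 order by path;
-- only add disks whose HEADER_STATUS is CANDIDATE (or FORMER/PROVISIONED);
-- a MEMBER disk belongs to another disk group, as the ORA-15033 above shows
EOF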
log:20.03
Thoughts on ASM_POWER_LIMIT and the default rebalance power: once POWER has been specified on a statement, changing the ASM_POWER_LIMIT parameter afterwards does not change the power of that running operation; it can, however, be changed with ALTER DISKGROUP ... REBALANCE POWER. The valid range is 0-11 before 11.2.0.2 and 0-1024 from 11.2.0.2 onwards (per the documentation quoted below), yet there still seems to be a cap: in this test the effective power never exceeded 11, most likely because the disk group's COMPATIBLE.ASM was still below 11.2.0.2 (see the check sketch after the v$asm_operation output below).
Beginning with Oracle Database 11g Release 2 (11.2.0.2), if the COMPATIBLE.ASM disk group attribute is set to 11.2.0.2 or higher, then the range of values is 0 to 1024.

GROUP_NUMBER OPERATION STATE POWER ACTUAL SOFAR EST_WORK EST_RATE EST_MINUTES
------------ --------------- --------------- ---------- ---------- ---------- ---------- ---------- -----------
2 REBAL RUN 8 8 2287859 4406404 7196 294

Elapsed: 00:00:01.13
15:35:22 SYS@+ASM1>alter diskgroup DATA rebalance power 1023;
Diskgroup altered.

Elapsed: 00:00:08.41
15:35:39 SYS@+ASM1>select * from v$asm_operation;

GROUP_NUMBER OPERATION STATE POWER ACTUAL SOFAR EST_WORK EST_RATE EST_MINUTES
------------ --------------- --------------- ---------- ---------- ---------- ---------- ---------- -----------
2 REBAL RUN 11 11 5 2116044 45 43200
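
A small check sketch for the capping seen above; it is a sketch under the assumption that honouring a power above 11 requires COMPATIBLE.ASM >= 11.2.0.2 on the disk group, as the quote above states:

# run as grid (bash)
sqlplus -S / as sysasm <<'EOF'
set linesize 200
select group_number, name, compatibility, database_compatibility from v$asm_diskgroup;
-- if COMPATIBILITY for DATA is below 11.2.0.2, a requested power such as 1023 is
-- capped (the POWER/ACTUAL columns above show 11)
-- alter diskgroup DATA set attribute 'compatible.asm' = '11.2.0.2.0';   -- one-way change
alter diskgroup DATA rebalance power 8;   -- change the power of the running operation
select group_number, operation, state, power, actual from v$asm_operation;
EOF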

log:20200409
Wipe the disks (zero the headers with dd)
dd if=/dev/zero of=/dev/sdi bs=8192K count=10
dd if=/dev/zero of=/dev/sdj bs=8192K count=10
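
A hedged safety wrapper around the dd commands above (device names sdi and sdj are from the example; the check relies on the /dev/asm-* symlink naming used elsewhere in this note). Zeroing a header is destructive, so the sketch refuses to touch a device that an ASM udev symlink still points at:

# run as root (bash)
for dev in sdi sdj; do
    # if any /dev/asm-* symlink still points at this device, do not touch it
    if ls -l /dev/asm-* 2>/dev/null | grep -q "> ${dev}$"; then
        echo "SKIP /dev/$dev: an asm-* symlink still points at it"
        continue
    fi
    echo "Wiping header of /dev/$dev"
    dd if=/dev/zero of=/dev/$dev bs=8192k count=10
done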

When udevadm trigger alone fails to bind the disks, reloading the rules first fixes it.
To reload the udev rules, run the following:
[root@perfeader1 rules.d]# udevadm control --reload-rules
[root@perfeader1 rules.d]# udevadm trigger --type=devices --action=change
Note: running trigger without any filters can cause the VIP to fail over (it re-triggers events for every device, network interfaces included), so keep the --type/--action filters, or scope it further as sketched below.
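
A narrower variant, assuming a systemd-era udev (e.g. RHEL 7) where udevadm trigger supports --sysname-match; it reloads the rules and re-triggers change events only for the new disks, then verifies the symlinks:

# run as root (bash)
udevadm control --reload-rules
for dev in sdg sdh; do
    udevadm trigger --type=devices --action=change --sysname-match=$dev
done
# verify that the symlinks were (re)created with the expected ownership
ls -l /dev/asm-disk[gh]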

log:20200512
01. On how to write the /etc/udev/rules.d/99-oracle-asmdevices.rules file and how to get it to take effect.
02. SYMLINK: the SYMLINK+= assignment creates a symbolic link to the device node, with the same effect as "ln -s".
[root@zbxtdb1 ~]# ll /dev/asm*
lrwxrwxrwx 1 root root 3 May 12 15:43 /dev/asm-diskc -> sdc
lrwxrwxrwx 1 root root 3 May 12 16:15 /dev/asm-diskk -> sdk
lrwxrwxrwx 1 root root 3 May 12 16:15 /dev/asm-diskl -> sdl
lrwxrwxrwx 1 root root 3 May 12 16:15 /dev/asm-diskm -> sdm


SQL> alter diskgroup DATA add disk '/dev/asm-diskm','/dev/asm-diskl' rebalance power 11;
alter diskgroup DATA add disk '/dev/asm-diskm','/dev/asm-diskl' rebalance power 11
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15075: disk(s) are not visible cluster-wide
--Variant 1
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c2970e0af4fa6b4620a216d3e2d9", NAME="asm-sdc", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c2983d24b30159fb7a2412f5f647", NAME="asm-sdd", OWNER="grid", GROUP="asmadmin", MODE="0660"

--Variant 2
KERNEL=="sd*",SUBSYSTEM=="block",PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name",RESULT=="360050768018087f690000000000004ca",NAME="asm_data4", ACTION=="add|change",OWNER="grid",GROUP="asmdba", MODE="0660"
KERNEL=="sd*",SUBSYSTEM=="block",PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name",RESULT=="360050768018087f690000000000004cb",NAME="asm_data5", ACTION=="add|change",OWNER="grid",GROUP="asmdba", MODE="0660"

--Variant 3 (flagged as questionable)
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="360002ac000000000000000440001c817", SYMLINK+="asm-diskk", OWNER="grid", GROUP="asmdba", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="360002ac000000000000000850001c817", SYMLINK+="asm-diskl", OWNER="grid", GROUP="asmdba", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="360002ac000000000000000860001c817", SYMLINK+="asm-diskm", OWNER="grid", GROUP="asmdba", MODE="0660"

--Variant 4
KERNEL=="sd*", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u --device=/dev/$name", RESULT=="360050768018087f69000000000000508", SYMLINK+="asm-diskg", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u --device=/dev/$name", RESULT=="360050768018087f69000000000000509", SYMLINK+="asm-diskh", OWNER="grid", GROUP="asmadmin", MODE="0660"


log:20.08
01. Replacing disks: How to ADD/DROP ASM DISK in SINGLE COMMAND (Doc ID 1910831.1). That MOS note says the added and dropped disks should be the same size; in testing, adding and dropping disks of different sizes and counts also works. The real point seems to be keeping added capacity >= dropped capacity, and the size restriction is presumably there so the rebalance is guaranteed to complete (a quick capacity check is sketched after the references below).
02. The failure-group (failgroup) concept: tied to the disk group's redundancy strategy.
03. OCR-related references:
How to Swap Voting Disks Across Storage in a Diskgroup (Doc ID 1558007.1)
OCR / Vote disk Maintenance Operations: (ADD/REMOVE/REPLACE/MOVE) (Doc ID 428681.1)
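
A minimal capacity sanity check before an add-and-drop in one command; it is a sketch that assumes the disks to be dropped are DATA_0000 and DATA_0001, as in the commented example in section 3:

# run as grid (bash)
sqlplus -S / as sysasm <<'EOF'
set linesize 200
select d.name, d.total_mb, d.free_mb, d.total_mb - d.free_mb used_mb
  from v$asm_disk d
 where d.name in ('DATA_0000','DATA_0001');
select g.name, g.total_mb, g.free_mb
  from v$asm_diskgroup g
 where g.name = 'DATA';
-- rule of thumb from the note above: the capacity being added should be at least
-- the used space on the disks being dropped, or the rebalance cannot complete
EOF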

log:2020.08.07
/**********************************************************************
01. Replacing the CRS disk group
02. Testing recovery of the CRS disk group
*************************************************************************/
01. Replacing the CRS disk group
Two shared disks are currently available:
/dev/asm-sdd --2G
/dev/asm-sdc --5G
SQL> select group_number, name,state,type,total_MB / 1024 total_GB,free_mb / 1024 FREE_GB,free_mb / total_MB * 100 free_per,(case when free_mb / total_mb * 100 < 15 then '*' else '' end) care from V$ASM_DISKGROUP;
select GROUP_NUMBER,DISK_NUMBER,HEADER_STATUS,NAME,TOTAL_MB,FREE_MB,FREE_MB/TOTAL_MB*100 free_per,PATH,STATE from V$ASM_DISK WHERE GROUP_NUMBER!=0;


GROUP_NUMBER NAME STATE TYPE TOTAL_GB FREE_GB FREE_PER C
------------ -------------------- ----------- ------ ---------- ---------- ---------- -
1 CRS MOUNTED EXTERN 2 1.61328125 80.6640625
2 DATA MOUNTED EXTERN 3 .454101563 15.1367188

SQL> select GROUP_NUMBER,DISK_NUMBER,HEADER_STATUS,NAME,TOTAL_MB,FREE_MB ,PATH,STATE from V$ASM_DISK;
GROUP_NUMBER DISK_NUMBER HEADER_STA NAME TOTAL_MB FREE_MB PATH STATE
------------ ----------- ---------- -------------------- ---------- ---------- ---------------------------------------- --------
0 3 FORMER 0 0 /dev/asm-sdd NORMAL
0 5 FORMER 0 0 /dev/asm-sdc NORMAL
2 2 MEMBER DATA_0002 1024 155 /dev/asm-sde NORMAL
2 3 MEMBER DATA_0003 1024 155 /dev/asm-sdf NORMAL
1 0 MEMBER CRS_0000 2048 1652 /dev/asm-sdb NORMAL
2 4 MEMBER DATA_0004 1024 155 /dev/asm-sdg NORMAL


su - grid
sqlplus / as sysasm
create diskgroup OCR_VOTE external redundancy disk '/dev/asm-sdd' ATTRIBUTE 'compatible.asm'='11.2.0.4.0','compatible.rdbms'='11.2.0.4.0';
/* compatible.asm must be specified here, otherwise a compatibility error is raised (see the walkthrough below) */
On the other node:
alter diskgroup OCR_VOTE mount;
1). # Replace the OCR
[root@zytrac1 grid]# /xxx/app/11.2.0/grid/bin/ocrcheck
[root@zytrac1 grid]# /xxx/app/11.2.0/grid/bin/ocrconfig -add +OCR_VOTE
[root@zytrac1 grid]# /xxx/app/11.2.0/grid/bin/ocrcheck
[root@zytrac1 grid]# /xxx/app/11.2.0/grid/bin/ocrconfig -delete +OCR
[root@zytrac1 grid]# /xxx/app/11.2.0/grid/bin/ocrcheck

2). # Replace the voting disk
[root@zytrac1 grid]# /xxx/app/11.2.0/grid/bin/crsctl query css votedisk
[root@zytrac1 grid]# /xxx/app/11.2.0/grid/bin/crsctl replace votedisk +OCR_VOTE
[root@zytrac1 grid]# /xxx/app/11.2.0/grid/bin/crsctl query css votedisk

3). # Migrate the ASM instance's spfile
SQL> conn / as sysdba
Connected.
SQL> show parameter pfile;
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
spfile string +CRS/zytrac/asmparameterfile/registry.253.989686401

SQL> create pfile='/home/grid/initASM.ora' from spfile;
File created.
[root@zytrac1 grid]# /xxx/app/11.2.0/grid/bin/srvctl stop instance -d zytrac -n zytrac1
[root@zytrac1 grid]# /xxx/app/11.2.0/grid/bin/srvctl stop instance -d zytrac -n zytrac2
[root@zytrac1 grid]# /xxx/app/11.2.0/grid/bin/srvctl stop asm -o abort -f

SQL> startup pfile='/home/grid/initASM.ora';
ASM instance started
Total System Global Area 1135747072 bytes
Fixed Size 2260728 bytes
Variable Size 1108320520 bytes
ASM Cache 25165824 bytes
ASM diskgroups mounted
SQL> show parameter spfile
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
spfile string
SQL> create spfile='+OCR_VOTE' from pfile='/home/grid/initASM.ora';
File created.
[root@ray31 ~]# /xxx/app/11.2.0/grid/bin/srvctl stop asm -o abort -f
[grid@ray31 ~]$ sqlplus /nolog
SQL*Plus: Release 11.2.0.4.0 Production on Mon Nov 18 15:57:40 2019
Copyright (c) 1982, 2013, Oracle. All rights reserved.
SQL> conn / as sysasm
Connected to an idle instance.
SQL> startup
ASM instance started
Total System Global Area 1135747072 bytes
Fixed Size 2260728 bytes
Variable Size 1108320520 bytes
ASM Cache 25165824 bytes
ASM diskgroups mounted
SQL> show parameter spfile


[root@ray31 ~]# /xxx/app/11.2.0/grid/bin/crsctl stop cluster -all
[root@ray31 ~]# /xxx/app/11.2.0/grid/bin/crsctl start cluster -all

#Drop the old CRS disk group
#On the other node(s), dismount it first:
SQL> alter diskgroup CRS dismount;
Diskgroup altered.
#Then, on one node, drop the disk group:
SQL> drop diskgroup CRS including contents;
Diskgroup dropped.


--Walkthrough (errors hit along the way)
[root@zytrac1 grid]# /xxx/app/11.2.0/grid/bin/ocrconfig -add +OCR_VOTE
PROT-30: The Oracle Cluster Registry location to be added is not usable
PROC-50: The Oracle Cluster Registry location to be added is inaccessible on nodes zytrac2.
[root@zytrac1 grid]# /xxx/app/11.2.0/grid/bin/ocrconfig -add +OCR_VOTE
PROT-30: The Oracle Cluster Registry location to be added is not usable
PROC-8: Cannot perform cluster registry operation because one of the parameters is invalid.
ORA-15056: additional error message
ORA-17502: ksfdcre:4 Failed to create file +OCR_VOTE.255.1
ORA-15221: ASM operation requires compatible.asm of 11.1.0.0.0 or higher
ORA-06512: at line 4

SQL> set linesize 200
SQL> SELECT group_number, name, compatibility, database_compatibility FROM v$asm_diskgroup;
GROUP_NUMBER NAME COMPATIBILITY DATABASE_COMPATIBILITY
------------ ------------------------------ ------------------------------------------------------------ ------------------------------------------------------------
1 CRS 11.2.0.0.0 10.1.0.0.0
2 DATA 10.1.0.0.0 10.1.0.0.0
3 OCR_VOTE 10.1.0.0.0 10.1.0.0.0

The compatible.asm attribute of the OCR_VOTE disk group was too old (10.1.0.0.0);
the minimum required is 11.1.0.0.0.

solution:
ALTER DISKGROUP OCR_VOTE SET ATTRIBUTE 'compatible.asm' = '11.2';

SQL> alter diskgroup CRS dismount
NOTE: Active use of SPFILE in group
Thu Apr 29 22:27:37 2021
NOTE: Instance updated compatible.asm to 11.2.0.0.0 for grp 1
ORA-15032: not all alterations performed
ORA-15027: active use of diskgroup "CRS" precludes its dismount
ERROR: alter diskgroup CRS dismount

-- The dismount above fails because the ASM spfile still resides in the CRS disk group:
SQL> show parameter spfile

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
spfile string +CRS/zytrac/asmparameterfile/registry.253.989686401


02. Testing recovery of the CRS disk group
Reference: How to Restore ASM Based OCR After Complete Loss of the CRS Diskgroup on Linux/Unix Systems (Doc ID 1062983.1)
To restore the CRS ASM disk group, three kinds of files need to be restored or recreated:
1). the OCR
2). the voting files
3). the ASM spfile

01. Locate the most recent OCR backup
[root@zytrac1 zytrac1]# ls -lrt /xxx/app/11.2.0/grid/cdata/zytrac
total 45552
-rw------- 1 root root 7774208 Jan 16 06:07 day.ocr
-rw------- 1 root root 7774208 Jan 16 06:07 week.ocr
-rw------- 1 root root 7774208 Apr 27 00:25 backup02.ocr
-rw------- 1 root root 7774208 Apr 27 00:25 day_.ocr
-rw------- 1 root root 7774208 Apr 27 04:25 backup01.ocr
-rw------- 1 root root 7774208 Apr 27 08:25 backup00.ocr

02. Make sure the clusterware is stopped on all nodes
# /xxx/app/11.2.0/grid/bin/crsctl stop crs -f

03. For 11.2.0.2 and above: start the stack in exclusive mode without CRS (-nocrs)
# /xxx/app/11.2.0/grid/bin/crsctl start crs -excl -nocrs
CRS-4123: Oracle High Availability Services has been started.
CRS-2672: Attempting to start 'ora.mdnsd' on 'zytrac1'
CRS-2676: Start of 'ora.mdnsd' on 'zytrac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'zytrac1'
CRS-2676: Start of 'ora.gpnpd' on 'zytrac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'zytrac1'
CRS-2672: Attempting to start 'ora.gipcd' on 'zytrac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'zytrac1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'zytrac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'zytrac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'zytrac1'
CRS-2676: Start of 'ora.diskmon' on 'zytrac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'zytrac1' succeeded
CRS-2672: Attempting to start 'ora.drivers.acfs' on 'zytrac1'
CRS-2679: Attempting to clean 'ora.cluster_interconnect.haip' on 'zytrac1'
CRS-2672: Attempting to start 'ora.ctssd' on 'zytrac1'
CRS-2681: Clean of 'ora.cluster_interconnect.haip' on 'zytrac1' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'zytrac1'
CRS-2676: Start of 'ora.ctssd' on 'zytrac1' succeeded
CRS-2676: Start of 'ora.drivers.acfs' on 'zytrac1' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'zytrac1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'zytrac1'
CRS-2676: Start of 'ora.asm' on 'zytrac1' succeeded

04. Recreate the CRS disk group
$ sqlplus / as sysasm
SQL*Plus: Release 11.2.0.1.0 Production on Tue Mar 30 11:47:24 2010
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Real Application Clusters and Automatic Storage Management options

SQL> create diskgroup CRS external redundancy disk '/dev/asm-sdb' attribute 'COMPATIBLE.ASM' = '11.2';
Diskgroup created.
SQL> exit


05. Restore the most recent OCR backup, as root:
# cd /xxx/app/11.2.0/grid/cdata/zytrac
# /xxx/app/11.2.0/grid/bin/ocrconfig -restore backup00.ocr

06. Recreate the voting files
# /xxx/app/11.2.0/grid/bin/crsctl replace votedisk +CRS
Successful addition of voting disk a47758313ba34f79bfa3382169082bcd.
Successful deletion of voting disk 8c057512823a4f38bfcb7635442f2306.
Successfully replaced voting disk group with +CRS.
CRS-4266: Voting file(s) successfully replaced

07. Recreate the ASM spfile (from the pfile /home/grid/initASM.ora saved earlier; its contents are listed after the command)
SQL> create spfile='+CRS' from pfile='/home/grid/initASM.ora';

+ASM1.asm_diskgroups='DATA','OCR_VOTE'#Manual Mount
+ASM2.asm_diskgroups='DATA','OCR_VOTE'#Manual Mount
*.asm_diskstring='/dev/asm*'
*.asm_power_limit=1
*.diagnostic_dest='/xxx/app/grid'
*.instance_type='asm'
*.large_pool_size=12M
*.remote_login_passwordfile='EXCLUSIVE'

08. Stop the clusterware
/xxx/app/11.2.0/grid/bin/crsctl stop crs -f

09. Start the clusterware on all nodes
# $CRS_HOME/bin/crsctl start crs
CRS-4123: Oracle High Availability Services has been started.

10. Verify the restore succeeded
[root@zytrac1 zytrac]# crsctl check cluster -all
**************************************************************
zytrac1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
zytrac2:
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4529: Cluster Synchronization Services is online
CRS-4534: Cannot communicate with Event Manager
**************************************************************

log:20200824 (judgment from experience)
SQL> alter diskgroup data add disk '/dev/asmdisk/data1_new','/dev/asmdisk/data2_new' rebalance power 11;
SQL> alter diskgroup arch add disk '/dev/asmdisk/arch1_new','/dev/asmdisk/arch2_new' rebalance power 11;

2017-03-30 13:29:51.794 [OCSSD(17100)]CRS-1719: Cluster Synchronization Service daemon (CSSD) clssnmvWorkerThread_0 not scheduled for 7620 msecs.
2017-03-30 13:29:59.794 [OCSSD(17100)]CRS-1719: Cluster Synchronization Service daemon (CSSD) clssnmvWorkerThread_0 not scheduled for 15620 msecs.
2017-03-30 13:30:07.795 [OCSSD(17100)]CRS-1719: Cluster Synchronization Service daemon (CSSD) clssnmvWorkerThread_0 not scheduled for 23620 msecs.
2017-03-30 13:30:15.795 [OCSSD(17100)]CRS-1719: Cluster Synchronization Service daemon (CSSD) clssnmvWorkerThread_0 not scheduled for 31620 msecs.
CRS-1719: Cluster Synchronization Service daemon (CSSD) %s not scheduled for %d msecs.
// Cause: Excessive system load prevented threads of the Cluster Synchronization
//         Services daemon (CSSD) from being scheduled for execution for the time
//         indicated in the message. This indicates the system is overloaded.
// Action: Take steps to reduce the system load, or increase the system
//         resources available to handle the load.

Root cause found: the rebalance power had been set too high. After changing the power to 0 to stop the rebalance, the CRS-1719 errors disappeared immediately.
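
A minimal sketch of that mitigation (the DATA and ARCH disk groups are the ones from the commands above); power 0 stops the current rebalance, and it can be resumed later at a lower power:

# run as grid (bash)
sqlplus -S / as sysasm <<'EOF'
alter diskgroup DATA rebalance power 0;   -- power 0 stops the running rebalance
alter diskgroup ARCH rebalance power 0;
select group_number, operation, state, power from v$asm_operation;
-- later, once the load has dropped, resume at a gentler power:
-- alter diskgroup DATA rebalance power 2;
-- alter diskgroup ARCH rebalance power 2;
EOF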


posted @ 2021-09-28 14:58  AnneZhou