Ceph Reef (18.2.X): Compression Algorithms and Compression Modes

                                              Author: 尹正杰

Copyright notice: This is an original work; reproduction without permission is prohibited and may incur legal liability.

I. Overview of Ceph compression

1. Overview of Ceph compression

Ceph supports efficient data transfer and can compress stored data: the BlueStore storage engine provides inline data compression to save disk space. Note, however, that the default compression mode is none, so no data is compressed until a mode is enabled.

Ceph supports the common compression algorithms none, zlib (not recommended), lz4 (performance close to snappy), zstd (higher compression ratio but more CPU-intensive), and snappy; the default algorithm is snappy.
	
Ceph supports four compression modes: none, passive, aggressive, and force; the default is none. (A sketch of driving these modes with client hints follows the list.)
	- none:
	    Never compress data.
	- passive:
	    Compress data only if the client hints that it is COMPRESSIBLE.
	- aggressive:
	    Compress data unless the client hints that it is INCOMPRESSIBLE.
	- force:
	    Always compress data, regardless of hints.
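	The passive and aggressive modes act on allocation hints sent by clients. For RBD workloads, the rbd_compression_hint option (it appears in the config dump later in this article) controls which hint the client sends. A minimal sketch, assuming the example pool used in the cases below:
	    ceph osd pool set yinzhengjie-rbd compression_mode passive             # compress only data hinted COMPRESSIBLE
	    rbd config pool set yinzhengjie-rbd rbd_compression_hint compressible  # have RBD clients send that hint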

2. Commands for enabling compression

Commands for enabling compression (for cluster-wide compression, it is best to set it in the configuration file):
ceph osd pool set <pool_name> compression_algorithm snappy  # set the compression algorithm
ceph osd pool set <pool_name> compression_mode aggressive   # set the compression mode
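
To read a per-pool compression setting back, ceph osd pool get can be used (note it reports an error if the option has never been set on the pool); a short sketch with a placeholder pool name:
ceph osd pool get <pool_name> compression_algorithm  # show the algorithm configured on the pool
ceph osd pool get <pool_name> compression_mode       # show the mode configured on the pool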


Other available compression parameters:
    compression_required_ratio:
    	The required compression ratio, a double-precision floating-point value.
    	Its value is SIZE_COMPRESSED/SIZE_ORIGINAL, i.e. the compressed size divided by the original size;
    	a chunk is stored compressed only if it compresses at least this well. Default: 0.875000.

    compression_max_blob_size:
    	Maximum size of a chunk to compress; larger writes are broken into blobs of this size before compression.
    	Unsigned integer; the default of 0 means the per-device (hdd/ssd) global default applies.

    compression_min_blob_size:
    	Minimum size of a chunk to compress; writes smaller than this are stored uncompressed.
    	Unsigned integer; the default of 0 means the per-device (hdd/ssd) global default applies.
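
    A hedged sketch of tuning these per pool (the numeric values are illustrative, not recommendations):
    	ceph osd pool set <pool_name> compression_required_ratio 0.8  # keep compressed data only if it is at most 80% of the original size
    	ceph osd pool set <pool_name> compression_min_blob_size 8192  # do not compress writes smaller than 8 KiB
    	ceph osd pool set <pool_name> compression_max_blob_size 65536 # split larger writes into 64 KiB blobs before compressing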
    

Global compression options:
	Compression attributes can also be set in the Ceph configuration file, where they apply to all pools; the relevant parameters are listed below (a configuration sketch follows the list):
		bluestore_compression_algorithm
		bluestore_compression_mode
		bluestore_compression_required_ratio
		bluestore_compression_min_blob_size
		bluestore_compression_max_blob_size
		bluestore_compression_min_blob_size_ssd
		bluestore_compression_max_blob_size_ssd
		bluestore_compression_min_blob_size_hdd
		bluestore_compression_max_blob_size_hdd
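
	A minimal sketch of setting these globally, either through the centralized config database or in ceph.conf (the lz4/aggressive values here are illustrative assumptions, not recommendations):
		ceph config set osd bluestore_compression_algorithm lz4    # stored in the mon config database, applies to all OSDs
		ceph config set osd bluestore_compression_mode aggressive

	Or, equivalently, in the [osd] section of ceph.conf:
		[osd]
		bluestore_compression_algorithm = lz4
		bluestore_compression_mode = aggressive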

II. Ceph compression examples

1. View the default compression algorithm

[root@ceph141 ~]# ceph --admin-daemon /var/run/ceph/c0ed6ca0-5fbc-11ef-9ff6-cf3a9f02b0d4/ceph-osd.0.asok config show | grep compression
    "bluestore_compression_algorithm": "snappy",
    "bluestore_compression_max_blob_size": "0",
    "bluestore_compression_max_blob_size_hdd": "65536",
    "bluestore_compression_max_blob_size_ssd": "65536",
    "bluestore_compression_min_blob_size": "0",
    "bluestore_compression_min_blob_size_hdd": "8192",
    "bluestore_compression_min_blob_size_ssd": "65536",
    "bluestore_compression_mode": "none",
    "bluestore_compression_required_ratio": "0.875000",
    "bluestore_rocksdb_options": "compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0",
    "filestore_rocksdb_options": "max_background_jobs=10,compaction_readahead_size=2097152,compression=kNoCompression",
    "kstore_rocksdb_options": "compression=kNoCompression",
    "leveldb_compression": "true",
    "mon_rocksdb_options": "write_buffer_size=33554432,compression=kNoCompression,level_compaction_dynamic_level_bytes=true",
    "ms_osd_compression_algorithm": "snappy",
    "rbd_compression_hint": "none",
[root@ceph141 ~]# 
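
The default values above can also be read from the monitors' centralized config database, without going through the OSD's admin socket; a short sketch (assuming an unmodified cluster):
ceph config get osd bluestore_compression_algorithm  # prints "snappy" on a default cluster
ceph config get osd bluestore_compression_mode       # prints "none" on a default cluster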

2. Change the compression algorithm

[root@ceph141 ~]# ceph osd pool ls detail | grep yinzhengjie-rbd  # by default no compression algorithm is shown on the pool
pool 2 'yinzhengjie-rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 16 pgp_num 16 autoscale_mode on last_change 63 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd read_balance_score 1.88
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph osd pool set yinzhengjie-rbd compression_algorithm zstd  # set the compression algorithm
set pool 2 compression_algorithm to zstd
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph osd pool ls detail | grep yinzhengjie-rbd  # check that the compression algorithm took effect
pool 2 'yinzhengjie-rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 16 pgp_num 16 autoscale_mode on last_change 344 flags hashpspool,selfmanaged_snaps stripe_width 0 compression_algorithm zstd application rbd read_balance_score 1.88
[root@ceph141 ~]# 

3. Change the compression mode

[root@ceph141 ~]# ceph osd pool ls detail | grep yinzhengjie-rbd
pool 2 'yinzhengjie-rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 16 pgp_num 16 autoscale_mode on last_change 344 flags hashpspool,selfmanaged_snaps stripe_width 0 compression_algorithm zstd application rbd read_balance_score 1.88
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph osd pool set yinzhengjie-rbd compression_mode aggressive
set pool 2 compression_mode to aggressive
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph osd pool ls detail | grep yinzhengjie-rbd
pool 2 'yinzhengjie-rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 16 pgp_num 16 autoscale_mode on last_change 345 flags hashpspool,selfmanaged_snaps stripe_width 0 compression_algorithm zstd compression_mode aggressive application rbd read_balance_score 1.88
[root@ceph141 ~]# 
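
To confirm that compression actually saves space, write a highly compressible object and inspect the pool's compression statistics; a hedged sketch (the file and object names are made up for illustration):
dd if=/dev/zero of=/tmp/zeros.bin bs=1M count=8            # an 8 MiB file of zeros compresses extremely well
rados -p yinzhengjie-rbd put test-compress /tmp/zeros.bin  # write it into the pool
ceph df detail | egrep 'POOL|yinzhengjie-rbd'              # the USED COMPR / UNDER COMPR columns show the savings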

4. Restore the algorithm and mode

[root@ceph141 ~]# ceph osd pool ls detail | grep yinzhengjie-rbd
pool 2 'yinzhengjie-rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 16 pgp_num 16 autoscale_mode on last_change 345 flags hashpspool,selfmanaged_snaps stripe_width 0 compression_algorithm zstd compression_mode aggressive application rbd read_balance_score 1.88
[root@ceph141 ~]# 
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph osd pool set yinzhengjie-rbd compression_mode none
set pool 2 compression_mode to none
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph osd pool set yinzhengjie-rbd compression_algorithm snappy
set pool 2 compression_algorithm to snappy
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph osd pool ls detail | grep yinzhengjie-rbd
pool 2 'yinzhengjie-rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 16 pgp_num 16 autoscale_mode on last_change 347 flags hashpspool,selfmanaged_snaps stripe_width 0 compression_algorithm snappy compression_mode none application rbd read_balance_score 1.88
[root@ceph141 ~]# 
