
rsync file checksum and sync principles, plus rsync server configuration

2013-07-11 16:06  梁小白

Reference: http://rsync.samba.org/how-rsync-works.html

What we care about is its algorithm for checksumming files between the sender and the receiver; the relevant passages from the original are quoted below:

The Sender

The sender process reads the file index numbers and associated block checksum sets one at a time from the generator.

For each file id the generator sends it will store the block checksums and build a hash index of them for rapid lookup.

Then the local file is read and a checksum is generated for the block beginning with the first byte of the local file. This block checksum is looked for in the set that was sent by the generator, and if no match is found, the non-matching byte will be appended to the non-matching data and the block starting at the next byte will be compared. This is what is referred to as the “rolling checksum”

If a block checksum match is found it is considered a matching block and any accumulated non-matching data will be sent to the receiver followed by the offset and length in the receiver's file of the matching block and the block checksum generator will be advanced to the next byte after the matching block.

Matching blocks can be identified in this way even if the blocks are reordered or at different offsets. This process is the very heart of the rsync algorithm.
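To make the matching step concrete, here is a minimal Python sketch of the idea. It is not rsync's actual implementation: the 700-byte block size, the simple additive weak checksum, and all function names are illustrative assumptions.

import hashlib

BLOCK_SIZE = 700  # illustrative only; real rsync chooses a block size per file

def weak_sum(block):
    # Toy stand-in for rsync's rolling checksum: cheap to compute, and in real
    # rsync it is updated incrementally as the window slides forward one byte
    # (here it is simply recomputed for clarity).
    return sum(block) & 0xFFFF

def strong_sum(block):
    # Stand-in for the per-block strong checksum.
    return hashlib.md5(block).digest()

def build_index(block_checksums):
    # block_checksums: list of (weak, strong) pairs, one per basis-file block,
    # as sent by the generator. Index them by weak sum for rapid lookup.
    index = {}
    for block_no, (weak, strong) in enumerate(block_checksums):
        index.setdefault(weak, []).append((block_no, strong))
    return index

def delta(local_data, index):
    """Yield ('data', bytes) and ('match', block_no) instructions for the receiver."""
    literal = bytearray()
    i = 0
    while i < len(local_data):
        window = local_data[i:i + BLOCK_SIZE]
        hit = None
        for block_no, strong in index.get(weak_sum(window), []):
            if strong_sum(window) == strong:   # weak hit confirmed by the strong checksum
                hit = block_no
                break
        if hit is None:
            literal.append(local_data[i])      # no match: accumulate non-matching data
            i += 1                             # roll the window forward by one byte
        else:
            if literal:
                yield ('data', bytes(literal)) # flush accumulated literal bytes first
                literal = bytearray()
            yield ('match', hit)               # a block the receiver can copy from its basis file
            i += len(window)                   # jump to the byte after the matched block
    if literal:
        yield ('data', bytes(literal))

The point of the two-level check is that the cheap weak sum filters out almost every offset, so the expensive strong checksum only runs on likely matches.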

In this way, the sender will give the receiver instructions for how to reconstruct the source file into a new destination file. These instructions detail all the matching data that can be copied from the basis file (if one exists for the transfer), and include any raw data that was not available locally. At the end of each file's processing a whole-file checksum is sent and the sender proceeds with the next file.

Generating the rolling checksums and searching for matches in the checksum set sent by the generator require a good deal of CPU power. Of all the rsync processes it is the sender that is the most CPU intensive.

 

The Receiver

The receiver will read from the sender data for each file identified by the file index number. It will open the local file (called the basis) and will create a temporary file.

The receiver will expect to read non-matched data and/or to match records all in sequence for the final file contents. When non-matched data is read it will be written to the temp-file. When a block match record is received the receiver will seek to the block offset in the basis file and copy the block to the temp-file. In this way the temp-file is built from beginning to end.

The file's checksum is generated as the temp-file is built. At the end of the file, this checksum is compared with the file checksum from the sender. If the file checksums do not match the temp-file is deleted. If the file fails once it will be reprocessed in a second phase, and if it fails twice an error is reported.

After the temp-file has been completed, its ownership and permissions and modification time are set. It is then renamed to replace the basis file.
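Putting the receiver's steps together, here is a rough Python sketch under the same illustrative assumptions as the sender sketch above (fixed-size blocks referenced by index, MD5 as the whole-file checksum); this is not rsync's wire format or code.

import hashlib
import os

BLOCK_SIZE = 700  # must match the block size assumed in the sender sketch

def rebuild(basis_path, temp_path, instructions, sender_whole_file_sum):
    """Build temp_path from the basis file plus the sender's instruction stream."""
    whole = hashlib.md5()
    with open(basis_path, 'rb') as basis, open(temp_path, 'wb') as temp:
        for kind, value in instructions:
            if kind == 'data':
                chunk = value                  # literal (non-matched) bytes from the sender
            else:  # 'match': copy the referenced block out of the basis file
                basis.seek(value * BLOCK_SIZE) # real rsync sends an offset and length instead
                chunk = basis.read(BLOCK_SIZE)
            temp.write(chunk)
            whole.update(chunk)                # whole-file checksum is built as we write
    if whole.digest() != sender_whole_file_sum:
        os.remove(temp_path)                   # mismatch: discard; a later pass would retry
        raise ValueError("whole-file checksum mismatch")
    # ownership, permissions and modification time would be set here, then:
    os.replace(temp_path, basis_path)          # replace the basis file with the finished copy

Building the whole-file checksum while writing means no extra pass over the temp-file is needed for the final verification.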

Copying data from the basis file to the temp-file makes the receiver the most disk intensive of all the rsync processes. Small files may still be in disk cache, mitigating this, but for large files the cache may thrash as the generator has moved on to other files and there is further latency caused by the sender. As data is read, possibly at random, from one file and written to another, if the working set is larger than the disk cache, then what is called a seek storm can occur, further hurting performance.

 

If you're still in a fog after reading this, well, a quick Bing search shows that someone has already done the groundwork, diagrams included:

http://coolshell.cn/articles/7425.html#more-7425

 

Appendix: rsync server configuration

1. Edit the /etc/rsyncd.conf file:

# global settings
uid = nobody
gid = nobody
use chroot = yes
max connections = 4
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
lock file = /var/run/rsync.lock

# module definition (parameters set here override the global ones for this module)
[MYSERVER]
path = /
read only = no
write only = no
list = yes
uid = root
gid = root
auth users = root
secrets file = /root/rsyncd.secrets


2. Create /root/rsyncd.secrets with a user:password entry:
root:123456

Set the file permissions to 400, otherwise rsync will report an error:
chmod 400 /root/rsyncd.secrets

3. Edit the /etc/xinetd.d/rsync file:

# default: off
# description: The rsync server is a good addition to an ftp server, as it \
# allows crc checksumming etc.
service rsync
{
disable = no
flags = IPv6
socket_type = stream
wait = no
user = root
server = /usr/bin/rsync
server_args = --daemon --config=/etc/rsyncd.conf
log_on_failure += USERID
}

4. Install xinetd (if it is not already present) and enable the rsync service:

yum install xinetd
chkconfig rsync on

5. service xinetd restart

 

Client configuration

1. Create the /tmp/rsync.pass file (on the client side this file contains only the password, no username):
123456

Set the file permissions to 400, otherwise rsync will report an error:
chmod 400 /tmp/rsync.pass

2. Run rsync (used for manual testing):
rsync -az --password-file=/tmp/rsync.pass --progress /local/file1 root@192.168.20.221::MYSERVER/remote/

 

Finally, here are the results of a recent POC:

Scenario 1 (transfer when file content has changed):
command: rsync --password-file=/tmp/rsync.pass --progress scptest2.vmdk root@9.112.224.244::MYSERVER/root/rsync/
result: only the changed data is transferred.

Scenario 2 (transfer resumes after a network break):
command: rsync --password-file=/tmp/rsync.pass --progress scptest2.vmdk root@9.112.224.244::MYSERVER/root/rsync/
result: the rsync client waits until the network recovers and then resumes the transfer.

Scenario 3 (partial transfer):
command: rsync --password-file=/tmp/rsync.pass --progress --partial scptest2.vmdk root@9.112.224.244::MYSERVER/root/rsync/
result: if rsync was run with the --partial parameter before the transfer broke, it continues by transferring only the remaining data.

If scenarios 1, 2, and 3 all happen on the same transfer, the rsync client can still resume and finish it, because rsync's checksum-based algorithm detects exactly which parts of the file have changed.