Swap partition: while the dd command was running, top showed it using roughly 14% of the system's CPU, while its MEM share stayed at roughly 0.

 

【Resource-unfriendly code】

from pyltp import SentenceSplitter, Segmentor, Postagger

d_dir = '/usr/local/ltp_data_v3.4.0/'


def gen_one_sentence_part(paragraph):
    # Keep only the text before the first punctuation mark (full-width or ASCII).
    one_piece_split = ['，', ',', '？', '?', '。', '.']
    for i in one_piece_split:
        paragraph = paragraph.split(i)[0]
    return paragraph


def gen_segmentor_words(paragraph):
    # Word segmentation; the later analyses depend on this output.
    # Note: cws.model is reloaded on every call.
    sentence = SentenceSplitter.split(paragraph)[0]
    segmentor = Segmentor()
    s = '%s%s' % (d_dir, "cws.model")
    segmentor.load(s)
    words = segmentor.segment(sentence)
    # del only drops the Python references; see the release() note below.
    del sentence, segmentor
    return words


def gen_postagger(words):
    # Part-of-speech tagging; pos.model is also reloaded on every call.
    postagger = Postagger()
    s = '%s%s' % (d_dir, "pos.model")
    postagger.load(s)
    postags = postagger.postag(words)
    del postagger
    return postags


ori_f = 'list_b_only_title.txt'
r_f = '%s%s' % (ori_f, '.del_ns.txt')  # output path (not written in this snippet)
res = {}
with open(ori_f, 'r', encoding='utf8') as fo:
    for i in fo:
        p = i.replace('\n', '').replace('"', '')
        p = gen_one_sentence_part(p)
        words = gen_segmentor_words(p)
        res[p] = gen_postagger(words)


【Release the models: model.release()】
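A minimal sketch (an illustration of the intended fix, not the original script) of how the pipeline above could be restructured so that each model is loaded once, reused for every input line, and freed explicitly with release() at the end, instead of being reloaded for every line. The model directory, model files, and input file name are taken from the code above; the punctuation-truncation helper is omitted for brevity.

from pyltp import SentenceSplitter, Segmentor, Postagger

d_dir = '/usr/local/ltp_data_v3.4.0/'

# Load each model exactly once.
segmentor = Segmentor()
segmentor.load('%s%s' % (d_dir, 'cws.model'))
postagger = Postagger()
postagger.load('%s%s' % (d_dir, 'pos.model'))

res = {}
try:
    with open('list_b_only_title.txt', 'r', encoding='utf8') as fo:
        for line in fo:
            p = line.replace('\n', '').replace('"', '')
            if not p:
                continue
            sentence = SentenceSplitter.split(p)[0]  # first sentence only
            words = segmentor.segment(sentence)      # word segmentation
            res[p] = postagger.postag(words)         # part-of-speech tags
finally:
    # Explicitly free the memory held by the loaded models.
    segmentor.release()
    postagger.release()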

free -g
Memory and swap before and after starting the script (the xlfg command in the transcript appears to be a local alias for free -g):

[root@hadoop1 tmp]# date
Thu Dec 14 15:10:43 CST 2017
[root@hadoop1 tmp]# xlfg
              total        used        free      shared  buff/cache   available
Mem:             15           0          14           0           0          14
Swap:             7           0           6
[root@hadoop1 tmp]# date
Thu Dec 14 15:10:54 CST 2017
[root@hadoop1 tmp]# xlfg
              total        used        free      shared  buff/cache   available
Mem:             15          13           0           0           0           1
Swap:             7           0           6
[root@hadoop1 tmp]# xlfg
              total        used        free      shared  buff/cache   available
Mem:             15          14           0           0           0           0
Swap:             7           0           6
[root@hadoop1 tmp]# date
Thu Dec 14 15:11:01 CST 2017
[root@hadoop1 tmp]# xlfg
              total        used        free      shared  buff/cache   available
Mem:             15          15           0           0           0           0
Swap:             7           1           6
[root@hadoop1 tmp]# xlfg
              total        used        free      shared  buff/cache   available
Mem:             15          15           0           0           0           0
Swap:             7           2           5
[root@hadoop1 tmp]# xlfg
              total        used        free      shared  buff/cache   available
Mem:             15          15           0           0           0           0
Swap:             7           2           5
[root@hadoop1 tmp]#





[root@hadoop1 nlp]# xlfg
              total        used        free      shared  buff/cache   available
Mem:             15           0          14           0           0          14
Swap:             7           0           6
[root@hadoop1 nlp]# dd if=/dev/zero of=/home/swap_xl bs=1024 count=1024*1024*4
dd: invalid number '1024*1024*4'
[root@hadoop1 nlp]# dd if=/dev/zero of=/home/swap_xl bs=1024 count=80^C
[root@hadoop1 nlp]# ^C
[root@hadoop1 nlp]# ^C
[root@hadoop1 nlp]# dd if=/dev/zero of=/home/swap_xl bs=1024 count=10485760
10485760+0 records in
10485760+0 records out
10737418240 bytes (11 GB) copied, 55.8535 seconds, 192 MB/s
[root@hadoop1 nlp]# /sbin/mkswap /home/swap_xl
Setting up swapspace version 1, size = 10485756 KiB
no label, UUID=5121c17c-6664-40d3-a310-e79b24d7c6b1
[root@hadoop1 nlp]# /sbin/swapon /home/swap_xl
swapon: /home/swap_xl: insecure permissions 0644, 0600 suggested.
[root@hadoop1 nlp]# xlfg
              total        used        free      shared  buff/cache   available
Mem:             15           0           4           0          10          14
Swap:            17           0          16
[root@hadoop1 nlp]#


When the existing SWAP space is not enough, how do you add SWAP space by hand? All of the following steps must be done as root:
First create a backing file with the dd command, for example
dd if=/dev/zero of=/home/swap bs=1024 count=524288
This creates the file /home/swap. Note that dd does not accept arithmetic expressions such as count=1024*1024 (see the error in the transcript above); give it a plain number or use a size suffix such as bs=1M. The file here is 524288 blocks of 1 KiB each, i.e. 512 MiB. Next, turn this file into a swap area:
/sbin/mkswap /home/swap
Then enable the swap area so it actually gets used:
/sbin/swapon /home/swap
(As the transcript above shows, swapon warns about insecure permissions such as 0644; chmod 600 /home/swap avoids the warning.)
Run free -m again and you will see roughly 512 MiB more swap. However, after a reboot the swap is back to its original size: the new swap file is not activated automatically and would have to be enabled by hand again. To make it permanent, add the following line to /etc/fstab:
/home/swap swap swap defaults 0 0
After that the extra swap space is available automatically at boot.

 

The commands actually run on this machine (a 10 GiB, roughly 11 GB, swap file, matching the transcript above):
dd if=/dev/zero of=/home/swap_xl bs=1024 count=10485760
/sbin/mkswap /home/swap_xl
/sbin/swapon /home/swap_xl


[root@hadoop1 tmp]# dd  --help
Usage: dd [OPERAND]...
  or:  dd OPTION
Copy a file, converting and formatting according to the operands.

  bs=BYTES        read and write up to BYTES bytes at a time
  cbs=BYTES       convert BYTES bytes at a time
  conv=CONVS      convert the file as per the comma separated symbol list
  count=N         copy only N input blocks
  ibs=BYTES       read up to BYTES bytes at a time (default: 512)
  if=FILE         read from FILE instead of stdin
  iflag=FLAGS     read as per the comma separated symbol list
  obs=BYTES       write BYTES bytes at a time (default: 512)
  of=FILE         write to FILE instead of stdout
  oflag=FLAGS     write as per the comma separated symbol list
  seek=N          skip N obs-sized blocks at start of output
  skip=N          skip N ibs-sized blocks at start of input
  status=LEVEL    The LEVEL of information to print to stderr;
                  'none' suppresses everything but error messages,
                  'noxfer' suppresses the final transfer statistics,
                  'progress' shows periodic transfer statistics

N and BYTES may be followed by the following multiplicative suffixes:
c =1, w =2, b =512, kB =1000, K =1024, MB =1000*1000, M =1024*1024, xM =M
GB =1000*1000*1000, G =1024*1024*1024, and so on for T, P, E, Z, Y.

Each CONV symbol may be:

  ascii     from EBCDIC to ASCII
  ebcdic    from ASCII to EBCDIC
  ibm       from ASCII to alternate EBCDIC
  block     pad newline-terminated records with spaces to cbs-size
  unblock   replace trailing spaces in cbs-size records with newline
  lcase     change upper case to lower case
  ucase     change lower case to upper case
  sparse    try to seek rather than write the output for NUL input blocks
  swab      swap every pair of input bytes
  sync      pad every input block with NULs to ibs-size; when used
            with block or unblock, pad with spaces rather than NULs
  excl  fail if the output file already exists
  nocreat do not create the output file
  notrunc   do not truncate the output file
  noerror   continue after read errors
  fdatasync physically write output file data before finishing
  fsync     likewise, but also write metadata

Each FLAG symbol may be:

  append    append mode (makes sense only for output; conv=notrunc suggested)
  direct    use direct I/O for data
  directory fail unless a directory
  dsync     use synchronized I/O for data
  sync      likewise, but also for metadata
  fullblock accumulate full blocks of input (iflag only)
  nonblock  use non-blocking I/O
  noatime   do not update access time
  nocache   discard cached data
  noctty    do not assign controlling terminal from file
  nofollow  do not follow symlinks
  count_bytes  treat 'count=N' as a byte count (iflag only)
  skip_bytes  treat 'skip=N' as a byte count (iflag only)
  seek_bytes  treat 'seek=N' as a byte count (oflag only)

Sending a USR1 signal to a running 'dd' process makes it
print I/O statistics to standard error and then resume copying.

  $ dd if=/dev/zero of=/dev/null& pid=$!
  $ kill -USR1 $pid; sleep 1; kill $pid
  18335302+0 records in
  18335302+0 records out
  9387674624 bytes (9.4 GB) copied, 34.6279 seconds, 271 MB/s

Options are:

      --help     display this help and exit
      --version  output version information and exit

GNU coreutils online help: <http://www.gnu.org/software/coreutils/>
Report dd translation bugs to <http://translationproject.org/team/zh_CN.html>
For complete documentation, run: info coreutils 'dd invocation'