jieba Chinese Word Segmentation and Text Word-Frequency Counting
Chinese text must first be segmented into individual words; jieba is an excellent third-party Chinese word-segmentation library, and it provides three segmentation modes.
The three segmentation modes of jieba
- Precise mode: segments the text exactly, producing no redundant words
- Full mode: scans out every possible word in the text, with redundancy
- Search-engine mode: builds on precise mode by segmenting long words again


Code example:
import jieba
# Precise mode
seg_list1 = jieba.lcut("中国是一个伟大的国家")
# Full mode
seg_list2 = jieba.lcut("中国是一个伟大的国家", cut_all=True)
# Search-engine mode
seg_list3 = jieba.lcut_for_search("中国是一个伟大的国家")
# Add a new word to jieba's dictionary
jieba.add_word("蟒蛇语言")
Next we count word frequencies in two texts: an English one, Hamlet, and a Chinese one, Romance of the Three Kingdoms (《三国演义》).
Dataset download links:
Hamlet: https://python123.io/resources/pye/hamlet.txt
Romance of the Three Kingdoms: https://python123.io/resources/pye/threekingdoms.txt
1. Word-frequency count for the English text of Hamlet:
def getText():
    # jieba is not needed here: English words are already separated by spaces.
    txt = open("hamlet.txt", "r").read()
    txt = txt.lower()
    for ch in "!@#$%^&*(),.?/\\[]~+-=<>';:|{}''":
        txt = txt.replace(ch, " ")  # str.replace returns a new string; reassign it
    return txt

hamletTxt = getText()
words = hamletTxt.split()
counts = {}
for word in words:
    counts[word] = counts.get(word, 0) + 1
items = list(counts.items())
items.sort(key=lambda x: x[1], reverse=True)
for i in range(10):
    word, count = items[i]
    print("{0:<10}{1:>5}".format(word, count))
Output:
the        1137
and         936
to          728
of          664
a           527
i           513
my          513
in          423
you         405
hamlet      401
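The character-by-character punctuation loop and the manual counting dictionary above can both be replaced with standard-library tools: `str.translate` strips all punctuation in one pass, and `collections.Counter` does the counting and ranking. A minimal sketch (the sample sentence here is illustrative, not from the Hamlet text):

```python
from collections import Counter
import string

text = "The cat, the hat; the bat!"
# Map every punctuation character to a space in a single pass.
table = str.maketrans(string.punctuation, " " * len(string.punctuation))
cleaned = text.lower().translate(table)
counts = Counter(cleaned.split())      # counts each word
print(counts.most_common(1))           # [('the', 3)]
```

`Counter.most_common(n)` returns the n highest-frequency pairs already sorted, replacing the `items.sort(key=..., reverse=True)` step.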
2. Word-frequency count and character-appearance ranking (top ten) for Romance of the Three Kingdoms:
import jieba

txt = open("threekingdoms.txt", "r", encoding="utf-8").read()
words = jieba.lcut(txt)
# High-frequency words that are not character names
excludes = {"将军", "却说", "二人", "不可", "荆州", "不能", "如此", "商议", "如何",
            "主公", "军士", "左右", "军马", "引兵", "次日", "大喜", "天下", "东吴",
            "于是", "今日", "不敢", "魏兵", "陛下", "一人", "都督", "人马", "不知"}
counts = {}
for word in words:
    if len(word) == 1:  # skip single characters (mostly function words)
        continue
    elif word == "诸葛亮" or word == "孔明曰":
        rword = "孔明"
    elif word == "关公" or word == "云长":
        rword = "关羽"
    elif word == "玄德" or word == "玄德曰":
        rword = "刘备"
    elif word == "孟德" or word == "丞相":
        rword = "曹操"
    else:
        rword = word
    counts[rword] = counts.get(rword, 0) + 1
for word in excludes:
    counts.pop(word, None)  # pop avoids a KeyError if an exclude word never occurred
items = list(counts.items())
items.sort(key=lambda x: x[1], reverse=True)
for i in range(10):
    word, count = items[i]
    print("{0:<10}{1:>5}".format(word, count))
Output:
曹操        1451
孔明        1383
刘备        1252
关羽         784
张飞         358
吕布         300
赵云         278
孙权         264
司马懿        221
周瑜         217
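The `elif` chain above merges each character's aliases (e.g. 玄德 → 刘备) into one canonical name. As the alias list grows, a lookup table is easier to extend than a chain of branches. A sketch of the same merging logic, table-driven; the short token list stands in for the real jieba output so the example runs without the novel:

```python
# Map each alias to a canonical name; unlisted words map to themselves.
ALIASES = {
    "诸葛亮": "孔明", "孔明曰": "孔明",
    "关公": "关羽", "云长": "关羽",
    "玄德": "刘备", "玄德曰": "刘备",
    "孟德": "曹操", "丞相": "曹操",
}

tokens = ["玄德", "曹操", "云长", "玄德曰", "孔明"]  # illustrative sample tokens
counts = {}
for word in tokens:
    if len(word) == 1:            # same single-character filter as above
        continue
    name = ALIASES.get(word, word)  # one lookup replaces the elif chain
    counts[name] = counts.get(name, 0) + 1
print(counts)  # {'刘备': 2, '曹操': 1, '关羽': 1, '孔明': 1}
```

Adding another alias is then a one-line change to `ALIASES` rather than a new `elif` branch.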
