Python Text Word Frequency Counting
How Word Frequency Counting Works
We can use a dictionary to solve the word frequency counting problem.
- Input: read an article from a file.
- Processing: use the dictionary data structure to count how often each word appears.
- Output: the ten most frequently occurring words in the article and their counts (a minimal sketch of this counting pattern follows the list).
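The following is a minimal sketch of the dictionary-based counting pattern described above; the sample sentence and the helper name top_n are illustrative assumptions rather than part of the original example:

# Minimal sketch of dictionary-based word counting
def top_n(words, n=10):
    counts = {}
    for word in words:
        # get() supplies 0 the first time a word is seen
        counts[word] = counts.get(word, 0) + 1
    # sort the (word, count) pairs by count, largest first
    items = sorted(counts.items(), key=lambda x: x[1], reverse=True)
    return items[:n]

print(top_n("to be or not to be".split(), n=3))
# [('to', 2), ('be', 2), ('or', 1)]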
English Word Frequency Counting
English text separates words with spaces or punctuation, so extracting the individual words and counting them is relatively easy.
Code:
#e10.1CalHamlet.py
def getText():
    txt = open("hamlet.txt", "r").read()
    txt = txt.lower()
    for ch in '!"#$%&()*+,-./:;<=>?@[\\]^_`{|}~':
        txt = txt.replace(ch, " ")   # replace special characters in the text with spaces
    return txt

hamletTxt = getText()
words = hamletTxt.split()
counts = {}
for word in words:
    counts[word] = counts.get(word, 0) + 1
items = list(counts.items())
items.sort(key=lambda x: x[1], reverse=True)
for i in range(10):
    word, count = items[i]
    print("{0:<10}{1:>5}".format(word, count))
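For comparison, the same statistic can also be produced with the standard library's collections.Counter. This is an alternative sketch rather than the wiki's original code; it reuses the hamletTxt variable produced by getText() above:

# Alternative sketch using collections.Counter (equivalent result)
from collections import Counter

counts = Counter(hamletTxt.split())           # count every whitespace-separated word
for word, count in counts.most_common(10):    # the ten most frequent words
    print("{0:<10}{1:>5}".format(word, count))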
Chinese Word Frequency Analysis
Using the jieba Library
- For details, see: Using the Python jieba library
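A quick, hedged illustration of what jieba.lcut returns; the sample sentence is chosen only for demonstration:

# jieba.lcut returns the segmentation result as a list of words
import jieba

print(jieba.lcut("中华人民共和国是一个伟大的国家"))
# Typically: ['中华人民共和国', '是', '一个', '伟大', '的', '国家']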
Chinese Word Frequency Analysis
#e10.3CalThreeKingdoms.py
import jieba

excludes = set()   # words to drop from the result, e.g. {"将军", "却说", "丞相"}
txt = open("三国演义.txt", "r", encoding='utf-8').read()
words = jieba.lcut(txt)
counts = {}
for word in words:
    if len(word) == 1:   # skip single-character segmentation results
        continue
    else:
        counts[word] = counts.get(word, 0) + 1
for word in excludes:
    del counts[word]
items = list(counts.items())
items.sort(key=lambda x: x[1], reverse=True)
for i in range(15):
    word, count = items[i]
    print("{0:<10}{1:>5}".format(word, count))
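As a follow-up, the excludes set can be filled with frequent words that are not character names, as hinted by the comment in the code above. A minimal sketch, assuming the three example words from that comment; the pop() guard is an addition for safety, not part of the original code:

# Populate excludes with non-name words taken from the comment above
excludes = {"将军", "却说", "丞相"}
for word in excludes:
    counts.pop(word, None)   # a default value avoids a KeyError if a word was never counted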