Table of Contents
- 1. Installing pycorrector on Win10
- 2. Training a language model on Ubuntu
- 3. Using kenlm
  - 3.1 Scoring with kenlm
  - 3.2 Word segmentation
  - 3.3 Scoring with 2- or 3-grams
  - 3.4 Matrix processing with numpy
- 4. Edit distance
- 5. Using pycorrector with pandas
1. Installing pycorrector on Win10
https://github.com/shibing624/pycorrector
1. pip install -i https://pypi.tuna.tsinghua.edu.cn/simple pycorrector
   Error: No module named 'pypinyin'
2. pip install -i https://pypi.tuna.tsinghua.edu.cn/simple pypinyin
   Error: No module named 'kenlm'
3. pip install https://github.com/kpu/kenlm/archive/master.zip
   Error: Microsoft Visual C++ is missing
4. Install Microsoft Visual C++ (download link: https://pan.baidu.com/s/1toZQAaJXa3xnflhjDMx6lg, extraction code: ky7w). After installing it, repeat step 3 and then step 1, and finally run
   pip install jieba
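Once these dependencies are installed, a quick sanity check can be run; the example sentence below comes from the pycorrector README, and the exact output depends on the bundled model:
import pycorrector
corrected_sent, detail = pycorrector.correct('少先队员因该为老人让座')
print(corrected_sent, detail)  # prints the corrected sentence and the list of corrections made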
2. Training a language model on Ubuntu
wget -O - http://kheafield.com/code/kenlm.tar.gz |tar xz
cd kenlm
mkdir -p build
cd build
cmake ..
make -j 4
If cmake is not installed:
sudo apt install cmake
If boost is missing:
sudo apt-get install libboost-all-dev
If Eigen3 is missing (the original screenshot of this error is not reproduced here), it can typically be resolved by installing the Eigen3 headers, e.g. sudo apt-get install libeigen3-dev.
build/bin/lmplz -o 3 --verbose_header --text people2014corpus_words.txt --arpa result/people2014corpus_words.arps
build/bin/build_binary ./result/people2014corpus_words.arps ./result/people2014corpus_words.klm
Here -o 3 sets the n-gram order to 3; build_binary then converts the text ARPA model into kenlm's binary format, which loads much faster.
If lmplz or build_binary is reported as not found, add the kenlm binaries to your PATH by editing your profile:
gedit .profile
then reload it:
source .profile
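For example, assuming kenlm was cloned into your home directory (the path is an assumption; adjust it to your own layout), append a line like:
export PATH="$PATH:$HOME/kenlm/build/bin"
After sourcing, lmplz and build_binary can be invoked without the build/bin/ prefix.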
3. Using kenlm
3.1 Scoring with kenlm
pycorrector ships a file containing a large set of candidate characters. Each candidate is substituted, one by one, into the slot opened up by edit-distance candidate generation, the resulting strings are scored with the language model, and the candidate giving the lowest perplexity is taken as the correction; a minimal sketch of this idea follows the example below.
import kenlm
lm = kenlm.Model('C:/Users/1/Anaconda3/Lib/site-packages/pycorrector/data/kenlm/people_chars_lm.klm')
print(lm.score('銀行', bos=True, eos=True))  # bos/eos add the begin/end-of-sentence markers
chars = ['中國工商銀行',
         '往來賬業務']
print(lm.score(' '.join(chars), bos=True, eos=True))  # kenlm scores space-separated tokens
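As a minimal sketch of the approach described above (the candidate list and the suspected position are assumptions for illustration), candidate replacements can be scored and compared directly; a higher log score means lower perplexity:
sentence = '中國二商銀行'  # '二' is the suspected error character
for c in ['二', '工', '三']:  # hypothetical candidate set
    cand = sentence.replace('二', c)
    # people_chars_lm.klm is a character-level model, so join the characters with spaces
    print(cand, lm.score(' '.join(cand), bos=True, eos=True))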
3.2 Word segmentation
Segmentation methods fall mainly into dictionary matching (forward maximum matching, backward maximum matching, bidirectional matching, etc.) and statistical approaches (HMM, CRF, and deep learning). Mainstream toolkits include NLPIR from the Institute of Computing Technology (CAS), LTP from Harbin Institute of Technology, THULAC from Tsinghua University, the HanLP segmenter, and the Python jieba library. For more segmentation methods and tools, see this Zhihu thread: https://www.zhihu.com/question/19578687
s="我在課堂學習自然語言1000處理"#不能1=
b=jieba.cut(s)
print("/ ".join(b))
我/ 在/ 課堂/ 學習/ 自然語言/ 1000/ 處理
b=jieba.cut(s)
print(b)
<generator object Tokenizer.cut at 0x000001DDD9CFB728>
b = jieba.lcut(s)  # the l prefix means "list": lcut returns a list instead of a generator
print(b)
['我', '在', '課堂', '學習', '自然語言', '1000', '處理']
b= jieba.cut(s, cut_all=True)
print("Full Mode: " + "/ ".join(b)) # 全模式
Full Mode: 我/ 在/ 課堂/ 學習/ 自然/ 自然語言/ 語言/ 1000/ 處理
The jieba.cut method accepts three parameters:
• the string to be segmented
• cut_all: whether to use full mode
• HMM: whether to use the HMM model
The jieba.cut_for_search method accepts two parameters; it segments at a finer granularity, suitable for building a search engine's inverted index:
• the string to be segmented
• whether to use the HMM model
import jieba
seg_list = jieba.cut("我在課堂學習自然語言1000處理", cut_all=True)
print("Full Mode: " + "/ ".join(seg_list)) # 全模式
seg_list = jieba.cut("我在課堂學習自然語言處理", cut_all=False)
print("Default Mode: " + "/ ".join(seg_list)) # 精确模式
seg_list = jieba.cut("他畢業于北京航空航天大學,在百度深度學習研究院進行研究") # 預設是精确模式
print(", ".join(seg_list))
seg_list = jieba.cut_for_search("小明碩士畢業于中國科學院計算所,後在斯坦福大學深造")  # search engine mode
print(", ".join(seg_list))
Full Mode: 我/ 在/ 課堂/ 學習/ 自然/ 自然語言/ 語言/ 1000/ 處理
Default Mode: 我/ 在/ 課堂/ 學習/ 自然語言/ 處理
他, 畢業, 于, 北京航空航天大學, ,, 在, 百度, 深度, 學習, 研究院, 進行, 研究
小明, 碩士, 畢業, 于, 中國, 科學, 學院, 科學院, 中國科學院, 計算, 計算所, ,, 後, 在, 福大, 大學, 斯坦福, 斯坦福大學, 深造
Adding a user-defined dictionary: you will often need to segment text for your own domain, which has its own specialized vocabulary:
1. Load a user dictionary with jieba.load_userdict(file_name).
2. A small number of words can be added manually:
  2.1 Use add_word(word, freq=None, tag=None) and del_word(word) to modify the dictionary dynamically at runtime.
  2.2 Use suggest_freq(segment, tune=True) to tune the frequency of a single word so that it can (or cannot) be split out, as in the example below (see also the sketch after it).
Before tuning, 中将 is wrongly kept together: 如果/放到/舊/字典/中将/出錯/。
jieba.suggest_freq(('中', '将'), True)
print('/'.join(jieba.cut('如果放到舊字典中将出錯。', HMM=False)))
After tuning: 如果/放到/舊/字典/中/将/出錯/。
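For completeness, a small sketch of methods 1 and 2.1 above; the added word and the file name are illustrative, not from the original post:
import jieba
jieba.add_word('取款憑條', freq=100, tag='n')  # register a domain term at runtime
# jieba.load_userdict('userdict.txt')          # or load many terms from a file, one per line: word [freq] [tag]
print(jieba.lcut('浙江蕭山農村商業銀行對公取款憑條客戶聯'))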
import jieba_fast as jieba  # jieba_fast is a C-accelerated drop-in replacement for jieba
jieba.lcut('浙江蕭山農村商業銀行對公取款憑條客戶聯')
from pycorrector.tokenizer import segment as seg
seg('浙江蕭山農村商業銀行對公取款憑條客戶聯')
import thulac
thu1 = thulac.thulac()  # default mode: segmentation plus POS tagging
text = thu1.cut("福州運恒計程車服務有限公司通用機打發票出租汽車專用")
print(text)
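If only segmentation is needed, thulac's documented seg_only switch turns off POS tagging, and text=True returns a plain string instead of a list:
thu2 = thulac.thulac(seg_only=True)  # segmentation only, no POS tags
print(thu2.cut("福州運恒計程車服務有限公司通用機打發票出租汽車專用", text=True))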
3.3 Scoring with 2- or 3-grams
import kenlm
lm = kenlm.Model(r'C:\ProgramData\Anaconda3\Lib\site-packages\pycorrector\data\kenlm\people_chars_lm.klm')  # raw string avoids backslash escape issues
sentence = '中國二商銀行'  # '二' is an error for '工' (中國工商銀行)
# 2-gram
ngram_avg_scores = []
n = 2
scores = []
for i in range(6 - n + 1):  # 6 == len(sentence)
    word = sentence[i:i + n]
    score = lm.score(word, bos=False, eos=False)
    scores.append(score)
print(scores)
# pad n-1 copies at each end so every character position is covered by n windows
for _ in range(n - 1):
    scores.insert(0, scores[0])
    scores.append(scores[-1])
print(scores)
avg_scores = [sum(scores[i:i + n]) / len(scores[i:i + n]) for i in range(6)]
ngram_avg_scores.append(avg_scores)
print(ngram_avg_scores)
# 3-gram
ngram_avg_scores = []
n = 3
scores = []
for i in range(6 - n + 1):
    word = sentence[i:i + n]
    score = lm.score(word, bos=False, eos=False)
    scores.append(score)
print(scores)
for _ in range(n - 1):
    scores.insert(0, scores[0])
    scores.append(scores[-1])
print(scores)
avg_scores = [sum(scores[i:i + n]) / len(scores[i:i + n]) for i in range(6)]
ngram_avg_scores.append(avg_scores)
print(ngram_avg_scores)
# 2- and 3-gram combined
ngram_avg_scores = []
for n in [2, 3]:
    scores = []
    for i in range(6 - n + 1):
        word = sentence[i:i + n]
        score = lm.score(word, bos=False, eos=False)
        scores.append(score)
    # print(scores)
    # if not scores:
    #     continue
    for _ in range(n - 1):
        scores.insert(0, scores[0])
        scores.append(scores[-1])
    # print(scores)
    avg_scores = [sum(scores[i:i + n]) / len(scores[i:i + n]) for i in range(6)]
    ngram_avg_scores.append(avg_scores)
print(ngram_avg_scores)
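The three blocks above can be folded into one helper; a minimal sketch that mirrors the post's scoring calls but replaces the hardcoded 6 with len(sentence):
def ngram_avg(lm, sentence, ns=(2, 3)):
    """Average n-gram log score covering each character position."""
    length = len(sentence)
    all_avg = []
    for n in ns:
        # score each length-n window (the post scores the raw substring directly)
        scores = [lm.score(sentence[i:i + n], bos=False, eos=False)
                  for i in range(length - n + 1)]
        # pad n-1 copies at each end so every position is covered by n windows
        for _ in range(n - 1):
            scores.insert(0, scores[0])
            scores.append(scores[-1])
        all_avg.append([sum(scores[i:i + n]) / n for i in range(length)])
    return all_avg

print(ngram_avg(lm, '中國二商銀行'))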
3.4 Matrix processing with numpy
import numpy as np
# average the stacked n-gram scores position by position
# sent_scores = list(np.average(np.array(ngram_avg_scores), axis=0))
np.array(ngram_avg_scores)
sent_scores = list(np.average(np.array(ngram_avg_scores), axis=0))
sent_scores
scoress = np.array(sent_scores)
scoress
scores2 = scoress[:, None]  # add a second axis: shape (6,) -> (6, 1)
scores2
median = np.median(scores2, axis=0)  # the median sorts first, then takes the middle value (or the mean of the two middle values for an even count); not np.mean
median
# margin_median = np.sqrt(np.sum((scores2 - median) ** 2, axis=-1))
margin_median = np.sqrt(np.sum((scores2 - median) ** 2, axis=1))  # absolute deviation of each position's score from the median
margin_median
# median absolute deviation (MAD)
med_abs_deviation = np.median(margin_median)
med_abs_deviation
ratio = 0.6745  # scaling constant that makes the MAD comparable to a standard deviation (robust z-score)
y_score = ratio * margin_median / med_abs_deviation
y_score
# scores = scores.flatten()
# maybe_error_indices = np.where((y_score > threshold) & (scores < median))
scores2 = scores2.flatten()
scores2
print('scores2 :', scores2)
print('median :', median)
print('y_score :', y_score)
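To complete the detection step hinted at by the commented-out line above, positions with a high robust z-score and a below-median language-model score are flagged as suspicious; the threshold value here is an assumption for illustration:
threshold = 2  # illustrative value, not from the original post
maybe_error_indices = np.where((y_score > threshold) & (scores2 < median))
print('maybe_error_indices:', maybe_error_indices)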
4. Edit distance
import re
import os
from collections import Counter

def candidates(word):
    """Generate possible spelling corrections for word."""
    return known([word]) or known(edits1(word)) or known(edits2(word)) or [word]

def known(words):
    """The subset of `words` that appear in the dictionary WORDS."""
    return set(w for w in words if w in WORDS)

def edits1(word):
    """All edits that are one edit away from `word`."""
    letters = 'abcdefghijklmnopqrstuvwxyz'
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [L + R[1:] for L, R in splits if R]
    transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
    replaces = [L + c + R[1:] for L, R in splits if R for c in letters]
    inserts = [L + c + R for L, R in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

def edits2(word):
    """All edits that are two edits away from `word`."""
    return (e2 for e1 in edits1(word) for e2 in edits1(e1))
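Note that the functions above reference a global WORDS frequency table that this excerpt never defines. In Peter Norvig's original spelling corrector, which this code follows, it is built from a large corpus, for example:
def words(text):
    return re.findall(r'\w+', text.lower())

WORDS = Counter(words(open('big.txt').read()))  # big.txt is the corpus from Norvig's article; any large text file works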
word = '中國工商銀行'
for i in range(len(word) + 1):  # enumerate all split points, as used by edits1
    print(word[:i], word[i:])
sentence = '##我愛##/中國###//'
sentence.strip('#''/')  # adjacent literals '#''/' concatenate to '#/', so this strips leading and trailing '#' and '/' characters
from pycorrector.tokenizer import Tokenizer
tokenize = Tokenizer()  # an instance of the Tokenizer class
sentence = '中國是聯合國第五大常任理事國'
token = tokenize.tokenize(sentence)
token
5. Using pycorrector with pandas
Dataset link: https://pan.baidu.com/s/1c1EGc_tY4K7rfoS-NbGhMg (extraction code: kp4h)
import pandas as pd
data = pd.read_csv('data.txt',sep = ' ',header = None)
data
new_data = data.dropna()  # drop rows containing NaN (e.g. row 24); note the index labels are not renumbered
new_data
new_data.index = range(0, 3754)  # renumber the index; 3754 is the number of remaining rows
new_data
new_data.columns = ['Right', 'Wrong']  # don't forget the = sign
new_data
a = new_data['Right'] == new_data['Wrong']
a
new_data_1 = pd.concat([new_data, a], axis=1)  # append a column
new_data_1
error_sentences = new_data_1['Wrong']
error_sentences
import pycorrector
corrector = []
for error_sentence in error_sentences:
    corrected_sent, detail = pycorrector.correct(error_sentence)  # pass the variable itself, not the string 'error_sentence'
    corrector.append(corrected_sent)
    print(corrected_sent)
corrector
new_data_2 = pd.concat([new_data_1, pd.DataFrame(corrector)], axis=1)  # note the exact capitalization of DataFrame, and corrector is a variable, no quotes
new_data_2.columns = ['Right','Wrong','t/f','correct']
new_data_2
b = new_data_2['Right'] == new_data_2['correct']
new_data_3 = pd.concat([new_data_2, b], axis=1)  # append a column
new_data_3.columns = ['Right','Wrong','t/f','correct','T/F']
new_data_3
# count how many originally-correct sentences were wrongly changed, and how many wrong sentences were successfully corrected
data_change = new_data_3[new_data_3['t/f'] != new_data_3['T/F']]
data_change
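Finally, a quick summary of pycorrector's performance on this set, using the columns defined above:
print('accuracy:', new_data_3['T/F'].mean())  # fraction of corrections that exactly match the reference
print(data_change['T/F'].value_counts())      # True: wrong sentences fixed; False: correct sentences broken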