Machine Learning Notes

Classification

Accuracy: the proportion of all samples that are predicted correctly

$$accuracy = \frac{TP+TN}{TP+TN+FP+FN} = \frac{T}{T+F}$$
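As a quick sanity check on the formula, here is a minimal Python sketch; the TP/TN/FP/FN counts are made-up illustrative values, not from any real model.

```python
# Hypothetical confusion-matrix counts, for illustration only.
TP, TN, FP, FN = 40, 45, 5, 10

# Accuracy: correct predictions (TP + TN) over all predictions.
accuracy = (TP + TN) / (TP + TN + FP + FN)
print(accuracy)  # 0.85
```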

Precision: among the samples predicted positive, the proportion that are truly positive

$$precision = \frac{TP}{TP+FP} = \frac{TP}{P'}$$

Recall: among the actual positive samples, the proportion predicted positive

$$recall = \frac{TP}{TP+FN} = \frac{TP}{P}$$
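Continuing with the same hypothetical counts, a short sketch showing that precision and recall differ only in which error term sits in the denominator (FP versus FN):

```python
# Same hypothetical counts as above.
TP, FP, FN = 40, 5, 10

# Precision: share of predicted positives (P' = TP + FP) that are correct.
precision = TP / (TP + FP)   # 40 / 45 ≈ 0.889

# Recall: share of actual positives (P = TP + FN) that were found.
recall = TP / (TP + FN)      # 40 / 50 = 0.800
print(precision, recall)
```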

F1: the harmonic mean of precision and recall

$$
\begin{align*}
\frac{2}{F_1} &= \frac{1}{precision} + \frac{1}{recall} \\
F_1 &= \frac{2 \cdot precision \cdot recall}{precision + recall} \\
F_1 &= \frac{2TP}{2TP + FP + FN} \\
F_1 &= \frac{2TP}{P' + P}
\end{align*}
$$
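A quick numeric check that the harmonic-mean form and the pure-count form of F1 agree, using the same hypothetical counts:

```python
TP, FP, FN = 40, 5, 10
precision = TP / (TP + FP)
recall = TP / (TP + FN)

# Harmonic mean of precision and recall.
f1_from_pr = 2 * precision * recall / (precision + recall)

# The same value written directly in counts, as derived above.
f1_from_counts = 2 * TP / (2 * TP + FP + FN)

print(f1_from_pr, f1_from_counts)  # both ≈ 0.842
```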

F-score:

$$F_{score} = (1+\beta^2) \cdot \frac{precision \cdot recall}{\beta^2 \cdot precision + recall}$$
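A small sketch of the weighted version; the helper name f_beta is my own, and the counts are the same hypothetical ones as above. β = 1 reduces to F1, while β > 1 shifts the score toward recall.

```python
def f_beta(precision: float, recall: float, beta: float) -> float:
    """F-score as defined above: beta > 1 favours recall, beta < 1 favours precision."""
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

precision, recall = 40 / 45, 40 / 50
print(f_beta(precision, recall, beta=1.0))  # ≈ 0.842, same as F1
print(f_beta(precision, recall, beta=2.0))  # ≈ 0.816, pulled toward recall (0.8)
```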
|                          | P (actual positive) | N (actual negative) |
|--------------------------|---------------------|---------------------|
| P' (predicted positive)  | TP                  | FP                  |
| N' (predicted negative)  | FN                  | TN                  |
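The same quantities can be cross-checked with scikit-learn, assuming it is available; note that sklearn.metrics.confusion_matrix puts the actual class on the rows and the prediction on the columns, i.e. transposed relative to the table above.

```python
from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score

# Tiny made-up binary example (1 = positive, 0 = negative).
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 1]

# With labels [0, 1] the matrix is [[TN, FP], [FN, TP]].
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(tp, fp, fn, tn)                   # 3 1 1 3

print(precision_score(y_true, y_pred))  # TP / (TP + FP) = 0.75
print(recall_score(y_true, y_pred))     # TP / (TP + FN) = 0.75
print(f1_score(y_true, y_pred))         # 0.75
```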

Sequences

BLEU (Bilingual Evaluation Understudy)

$$CP_n(C,S) = \frac{\sum_i \sum_k \min\big(h_k(c_i),\ \max_{j \in m} h_k(s_{ij})\big)}{\sum_i \sum_k h_k(c_i)}$$

Here $c_i$ is the $i$-th candidate sentence, $s_{ij}$ is the $j$-th of its $m$ reference translations, and $h_k(\cdot)$ counts occurrences of the $k$-th n-gram.

Penalty factor BP (Brevity Penalty)

$$
b(C,S) =
\begin{cases}
1, & l_c > l_s \\
e^{1 - \frac{l_s}{l_c}}, & l_c \le l_s
\end{cases}
$$

$$
BLEU_N(C,S) = b(C,S)\,\exp\left(\sum_{n=1}^{N} \omega_n \log CP_n(C,S)\right)
$$

where $l_c$ is the candidate length, $l_s$ is the effective reference length, and the weights $\omega_n$ are usually uniform ($\omega_n = 1/N$).

BLEU is commonly used to evaluate machine translation.
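The pieces above fit together as in the following minimal sentence-level sketch (no smoothing, uniform weights, closest reference length for l_s); the function names are my own and this is illustrative rather than a reference implementation.

```python
import math
from collections import Counter

def ngram_counts(tokens, n):
    """Counter of the n-grams (as tuples) occurring in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def clipped_precision(candidate, references, n):
    """CP_n: candidate n-gram counts, clipped by the maximum count in any reference."""
    cand = ngram_counts(candidate, n)
    max_ref = Counter()
    for ref in references:
        for gram, cnt in ngram_counts(ref, n).items():
            max_ref[gram] = max(max_ref[gram], cnt)
    clipped = sum(min(cnt, max_ref[gram]) for gram, cnt in cand.items())
    total = sum(cand.values())
    return clipped / total if total else 0.0

def bleu(candidate, references, N=4):
    """BLEU_N = BP * exp(sum_n w_n log CP_n) with uniform weights w_n = 1/N."""
    l_c = len(candidate)
    # Effective reference length: the reference length closest to the candidate's.
    l_s = min((len(r) for r in references), key=lambda length: abs(length - l_c))
    bp = 1.0 if l_c > l_s else math.exp(1 - l_s / l_c)
    precisions = [clipped_precision(candidate, references, n) for n in range(1, N + 1)]
    if min(precisions) == 0:
        return 0.0  # log(0) is undefined; real implementations apply smoothing instead
    return bp * math.exp(sum(math.log(p) / N for p in precisions))

cand = "the cat sat on the mat".split()
refs = ["the cat is on the mat".split(), "there is a cat on the mat".split()]
print(bleu(cand, refs, N=2))
```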

ROUGE (Recall-Oriented Understudy for Gisting Evaluation)

ROUGE-N: co-occurrence statistics over N-grams
ROUGE-L: precision, recall, and F-measure based on the longest common subsequence
ROUGE-W: precision, recall, and F-measure based on a weighted longest common subsequence
ROUGE-S: precision, recall, and F-measure based on skip-bigram (non-contiguous word pair) co-occurrence

ROUGE-N

$$ROUGE\text{-}N = \frac{\sum_{S \in \{Reference\ Summaries\}} \sum_{gram_n \in S} Count_{match}(gram_n)}{\sum_{S \in \{Reference\ Summaries\}} \sum_{gram_n \in S} Count(gram_n)}$$
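A small recall-oriented sketch of the formula: matched n-grams divided by the total number of n-grams in the reference summaries. The helper names and example sentences are made up.

```python
from collections import Counter

def ngrams(tokens, n):
    """Counter of the n-grams (as tuples) occurring in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n(candidate, references, n=2):
    """Matched reference n-grams over total reference n-grams (recall-oriented)."""
    cand = ngrams(candidate, n)
    matched = total = 0
    for ref in references:
        ref_counts = ngrams(ref, n)
        total += sum(ref_counts.values())
        matched += sum(min(cnt, cand[gram]) for gram, cnt in ref_counts.items())
    return matched / total if total else 0.0

summary = "the cat sat on the mat".split()
references = ["the cat is sitting on the mat".split()]
print(rouge_n(summary, references, n=1))  # unigram recall
print(rouge_n(summary, references, n=2))  # bigram recall
```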

ROUGE-L: longest common subsequence (LCS)

$$R_{lcs} = \frac{LCS(X,Y)}{m}, \quad m = \mathrm{len}(X)$$

$$P_{lcs} = \frac{LCS(X,Y)}{n}, \quad n = \mathrm{len}(Y)$$

$$F_{lcs} = \frac{(1+\beta^2) R_{lcs} P_{lcs}}{R_{lcs} + \beta^2 P_{lcs}}$$
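A sketch of ROUGE-L using a standard dynamic-programming LCS; X is the reference of length m and Y is the candidate of length n, matching the definitions above. The β value here is an arbitrary illustration; in practice β is often set large so that F_lcs is dominated by recall.

```python
def lcs_length(x, y):
    """Length of the longest common subsequence of two token lists (O(m*n) DP)."""
    m, n = len(x), len(y)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def rouge_l(reference, candidate, beta=1.2):
    """R_lcs, P_lcs and F_lcs as defined above."""
    lcs = lcs_length(reference, candidate)
    r = lcs / len(reference)   # recall: LCS over reference length m
    p = lcs / len(candidate)   # precision: LCS over candidate length n
    if r == 0 or p == 0:
        return 0.0
    return (1 + beta ** 2) * r * p / (r + beta ** 2 * p)

X = "the cat is sitting on the mat".split()  # reference
Y = "the cat sat on the mat".split()         # candidate
print(rouge_l(X, Y))
```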