

Chinese Text Auto-Correction for Film and TV Search, with a Java Implementation
2014-08-26  博客园  巫峡

1. Background:

This week my project required correcting misspelled film titles typed into the search box, in order to improve the search hit rate and the user experience. So I looked into automatic correction of Chinese text (more formally, proofreading) and put together an initial implementation; these are my notes.

2. Overview:

Proofreading and correcting Chinese input means that when the user types uncommon or incorrect text, the system flags it as possibly wrong; the simplest example is the red underline you see while typing in Word. There are currently two main approaches to implementing this:

(1) Dictionary-based segmentation: match the input character string against the entries of a large "machine dictionary"; if an entry is found, the match succeeds. This approach is easy to implement and works best when the input strings are nouns or names from one or a few specific domains;

(2) Statistics-based segmentation: the usual choice is an N-Gram language model, which is in fact an (N-1)-order Markov model. A brief introduction to the model:

By the chain rule (Bayes' formula), the probability of a character string X1X2…Xm is the product of each character's conditional probability given the characters before it:

P(X1X2…Xm) = P(X1) · P(X2|X1) · P(X3|X1X2) · … · P(Xm|X1X2…Xm-1)

To simplify the computation, assume that the occurrence of a character Xi depends only on the N-1 characters immediately preceding it; the formula above then becomes:

P(X1X2…Xm) ≈ ∏ P(Xi|Xi-N+1…Xi-1)

This is the (N-1)-order Markov model. The computed probability is compared against a threshold; if it falls below the threshold, the string is flagged as a likely misspelling.
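To make the scoring concrete, here is a small, self-contained sketch of how a bigram score could be computed from corpus counts and compared against a threshold. It is illustrative only and not taken from the original post: the toy counts, the total token count and the threshold value are made-up assumptions (bigrams are keyed by simple concatenation, matching the counting code in section 3.2 below).

import java.util.HashMap;
import java.util.Map;

public class BigramScoreDemo {
    public static void main(String[] args) {
        // Toy counts standing in for what calculateTokenCount (section 3.2) would produce;
        // the numbers are invented for illustration.
        Map<String, Integer> counts = new HashMap<String, Integer>();
        counts.put("速度", 12);
        counts.put("激情", 9);
        counts.put("速度激情", 7); // a bigram is stored as the concatenation of its two tokens
        int totalTokensCount = 1000; // assumed total number of counted tokens in the corpus

        String[] query = {"速度", "激情"}; // the segmented input string
        // P(w1) * P(w2|w1) * ..., where P(wi|wi-1) is estimated as count(wi-1 wi) / count(wi-1)
        double p = (double) counts.getOrDefault(query[0], 0) / totalTokensCount;
        for (int i = 1; i < query.length; i++) {
            int prevCount = counts.getOrDefault(query[i - 1], 0);
            int pairCount = counts.getOrDefault(query[i - 1] + query[i], 0);
            p *= (prevCount == 0) ? 0.0 : (double) pairCount / prevCount;
        }

        double threshold = 1e-4; // assumed value; in practice it would be tuned on the corpus
        System.out.println(p < threshold ? "possible misspelling" : "looks fine");
    }
}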

3. Implementation:

Since the input strings in my project are almost all names of films, TV series, variety shows and anime programs, the scope of the corpus is relatively stable, so here I combine a 2-Gram (bigram) language model with dictionary-based segmentation.

First, the overall idea:

Segment the corpus —> compute the probability of each bigram token (under the corpus sample, its frequency is used in place of its probability) —> segment the input string and find the longest and second-longest consecutive matched substrings —> match the longest and second-longest substrings against the film titles in the corpus —> on a partial match, flag the input as misspelled and return the corrected string (which is why the dictionary matters so much).

Note: word segmentation here uses the ICTCLAS Java API.
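The last two pipeline steps, matching the longest matched substrings against the corpus titles and returning a correction, are what the later subsections implement. Purely to make the idea concrete before the code, here is a rough, hypothetical helper; the class and method names, the titles list and the ranking rule are assumptions of mine, not the post's implementation.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class TitleSuggestDemo {
    // Hypothetical helper (not the post's code): given the longest correctly-matched
    // substring of the query, return corpus titles containing it as correction candidates.
    static List<String> suggestTitles(String maxToken, List<String> corpusTitles) {
        List<String> candidates = new ArrayList<String>();
        for (String title : corpusTitles) {
            if (title.contains(maxToken)) {
                candidates.add(title);
            }
        }
        // one possible ranking rule: prefer titles closest in length to the matched token
        candidates.sort(Comparator.comparingInt(t -> Math.abs(t.length() - maxToken.length())));
        return candidates;
    }
}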

Now the code:

Create a class ChineseWordProofread.

3.1 Initialize the segmentation package and segment the film corpus

public ICTCLAS2011 initWordSegmentation() {
    ICTCLAS2011 wordSeg = new ICTCLAS2011();
    try {
        String argu = "F:\\Java\\workspace\\wordProofread"; // set your project path
        System.out.println("ICTCLAS_Init");
        if (ICTCLAS2011.ICTCLAS_Init(argu.getBytes("GB2312"), 0) == false) {
            System.out.println("Init Fail!");
            //return null;
        }
        /*
         * Set the POS tag set. The ID selects the tag set:
         * 1 = ICT level-1 tag set, 0 = ICT level-2 tag set,
         * 2 = PKU level-2 tag set, 3 = PKU level-1 tag set
         */
        wordSeg.ICTCLAS_SetPOSmap(2);
    } catch (Exception ex) {
        System.out.println("words segmentation initialization failed");
        System.exit(-1);
    }
    return wordSeg;
}

public boolean wordSegmentate(String argu1, String argu2) {
    boolean ictclasFileProcess = false;
    try {
        // file-level segmentation: argu1 is the input corpus file, argu2 the segmented output file
        ictclasFileProcess = wordSeg.ICTCLAS_FileProcess(argu1.getBytes("GB2312"), argu2.getBytes("GB2312"), 0);
        //ICTCLAS2011.ICTCLAS_Exit();
    } catch (Exception ex) {
        System.out.println("file process segmentation failed");
        System.exit(-1);
    }
    return ictclasFileProcess;
}
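A hedged usage sketch (not from the original post): the input file name is a placeholder, the ICTCLAS2011 native library and its data files are assumed to be set up under the project path, and the instance returned by initWordSegmentation is assumed to be stored in the wordSeg field that wordSegmentate reads.

// Hypothetical usage: "movie.txt" is a placeholder for the raw title corpus;
// "movie_result.txt" matches the segmented-output file name referenced in section 3.2.
ChineseWordProofread proofread = new ChineseWordProofread();
ICTCLAS2011 wordSeg = proofread.initWordSegmentation();
boolean segmented = proofread.wordSegmentate("movie.txt", "movie_result.txt");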
3.2 Compute token frequencies

public Map<String,Integer> calculateTokenCount(String afterWordSegFile) {
    Map<String,Integer> wordCountMap = new HashMap<String,Integer>();
    File movieInfoFile = new File(afterWordSegFile);
    BufferedReader movieBR = null;
    try {
        movieBR = new BufferedReader(new FileReader(movieInfoFile));
    } catch (FileNotFoundException e) {
        System.out.println("movie_result.txt file not found");
        e.printStackTrace();
    }
    String wordsline = null;
    try {
        while ((wordsline = movieBR.readLine()) != null) {
            String[] words = wordsline.trim().split(" ");
            for (int i = 0; i < words.length; i++) {
                // count the unigram
                int wordCount = wordCountMap.get(words[i]) == null ? 0 : wordCountMap.get(words[i]);
                wordCountMap.put(words[i], wordCount + 1);
                totalTokensCount += 1;
                if (words.length > 1 && i < words.length - 1) {
                    // count the bigram, keyed by concatenating the token with the next one
                    StringBuffer wordStrBuf = new StringBuffer();
                    wordStrBuf.append(words[i]).append(words[i + 1]);
                    int wordStrCount = wordCountMap.get(wordStrBuf.toString()) == null ? 0 : wordCountMap.get(wordStrBuf.toString());
                    wordCountMap.put(wordStrBuf.toString(), wordStrCount + 1);
                    totalTokensCount += 1;
                }
            }
        }
    } catch (IOException e) {
        System.out.println("read movie_result.txt file failed");
        e.printStackTrace();
    }
    return wordCountMap;
}
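Continuing the hypothetical usage sketch from 3.1 (again, not the post's code): build the frequency map from the segmented corpus, then read counts back out. The example keys are illustrative; their values depend entirely on the corpus.

// Hypothetical usage: frequency stands in for probability, as described in the pipeline above.
Map<String, Integer> wordCountMap = proofread.calculateTokenCount("movie_result.txt");
Integer unigramCount = wordCountMap.get("速度");     // count of a single token
Integer bigramCount  = wordCountMap.get("速度激情"); // a bigram's key is the concatenation of its two tokens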
3.3 Find the correct tokens in the input string

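A minimal sketch of this step, assuming the wordCountMap built in 3.2 and a hypothetical segmentQuery helper that returns the ICTCLAS tokens of the input string (and the usual java.util imports); this is not the post's implementation, just an illustration of the idea that a token seen in the segmented film corpus is treated as correctly spelled.

// Minimal sketch, not the post's implementation. segmentQuery is a hypothetical
// helper that segments the input query with ICTCLAS and returns its tokens.
public List<String> findCorrectTokens(String query, Map<String, Integer> wordCountMap) {
    List<String> correctTokens = new ArrayList<String>();
    for (String token : segmentQuery(query)) {
        // a token that occurs in the corpus counts is considered correct
        if (wordCountMap.getOrDefault(token, 0) > 0) {
            correctTokens.add(token);
        }
    }
    return correctTokens;
}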
3.4 Get the longest and second-longest consecutive matched strings (each may be a single character)