Implementation result:

  The previous article included the design mock-up of the full-text search results page; below is a screenshot of the finished feature after it went live:

  [Screenshot: full-text search results page]

  The basic style mimics Baidu's search results; the green pagination adds a touch of freshness.

  Roughly 30,000+ articles have been crawled and indexed so far. The index files are not particularly large, and query speed is excellent.

  [Screenshot: search results page with pagination]

A knife rusts if it is never sharpened; a person falls behind if he never learns. Learn something new every day.

 

Technical background:

  The last time I built full-text search was back in 2013. The core design and code were written by the architect at the time; I was only a participant.

  Back then we used the classic combination: PanGu segmenter + Lucene.Net.

  As mentioned in earlier articles, PanGu hasn't been updated in many years, so in the SupportYun system I have been using JieBaNet for word segmentation.

  So, is there also a ready-made JieBaNet + Lucene.Net full-text search solution?

  After much searching, I found a simple example on GitHub: https://github.com/anderscui/jiebaForLuceneNet

  The implementation described below was inspired by that demo; it is worth a look if you are interested.

  Versions used: Lucene.Net 3.0.3.0 and JieBaNet 0.38.3.0 (with some small adjustments and extensions, covered in earlier articles).

  First, we extend Lucene.Net's Tokenizer and Analyzer with JieBaNet-based implementations.

  1. JiebaForLuceneAnalyzer, a JieBa analyzer extending Lucene.Net's Analyzer

    /// <summary>
    /// A JieBa-based analyzer extending Lucene.Net's Analyzer.
    /// </summary>
    public class JiebaForLuceneAnalyzer : Analyzer
    {
        protected static readonly ISet<string> DefaultStopWords = StopAnalyzer.ENGLISH_STOP_WORDS_SET;

        private static readonly ISet<string> StopWords;

        static JiebaForLuceneAnalyzer()
        {
            // Load stop words from JieBaNet's configured file,
            // falling back to Lucene's English defaults if it is missing.
            var stopWordsFile = Path.GetFullPath(JiebaNet.Analyser.ConfigManager.StopWordsFile);
            if (File.Exists(stopWordsFile))
            {
                StopWords = new HashSet<string>();
                foreach (var line in File.ReadAllLines(stopWordsFile))
                {
                    StopWords.Add(line.Trim());
                }
            }
            else
            {
                StopWords = DefaultStopWords;
            }
        }

        public override TokenStream TokenStream(string fieldName, TextReader reader)
        {
            var seg = new JiebaSegmenter();
            TokenStream result = new JiebaForLuceneTokenizer(seg, reader);
            result = new LowerCaseFilter(result);
            result = new StopFilter(true, result, StopWords);
            return result;
        }
    }
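As a quick sanity check, the analyzer can be driven directly to inspect how a sentence gets segmented. A minimal sketch (the field name and sample sentence are arbitrary, and this is my own smoke test, not code from the post):

```csharp
// Feed a sentence through JiebaForLuceneAnalyzer and print each term
// the TokenStream produces after lower-casing and stop-word filtering.
var analyzer = new JiebaForLuceneAnalyzer();
var stream = analyzer.TokenStream("IndexTextContent",
    new StringReader("基于JieBaNet与Lucene.Net实现全文搜索"));
var termAttr = stream.GetAttribute<ITermAttribute>();
while (stream.IncrementToken())
{
    Console.WriteLine(termAttr.Term);
}
```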

  2. JiebaForLuceneTokenizer, a JieBa tokenizer extending Lucene.Net's Tokenizer

    /// <summary>
    /// A JieBa-based tokenizer extending Lucene.Net's Tokenizer.
    /// </summary>
    public class JiebaForLuceneTokenizer : Tokenizer
    {
        private readonly JiebaSegmenter segmenter;
        private readonly ITermAttribute termAtt;
        private readonly IOffsetAttribute offsetAtt;
        private readonly ITypeAttribute typeAtt;

        private readonly List<Token> tokens;
        private int position = -1;

        public JiebaForLuceneTokenizer(JiebaSegmenter seg, TextReader input) : this(seg, input.ReadToEnd()) { }

        public JiebaForLuceneTokenizer(JiebaSegmenter seg, string input)
        {
            segmenter = seg;
            termAtt = AddAttribute<ITermAttribute>();
            offsetAtt = AddAttribute<IOffsetAttribute>();
            typeAtt = AddAttribute<ITypeAttribute>();

            // Segment the whole input up front; IncrementToken then walks the token list.
            tokens = segmenter.Tokenize(input, TokenizerMode.Search).ToList();
        }

        public override bool IncrementToken()
        {
            ClearAttributes();
            position++;
            if (position < tokens.Count)
            {
                var token = tokens[position];
                termAtt.SetTermBuffer(token.Word);
                offsetAtt.SetOffset(token.StartIndex, token.EndIndex);
                typeAtt.Type = "Jieba";
                return true;
            }

            End();
            return false;
        }

        public IEnumerable<Token> Tokenize(string text, TokenizerMode mode = TokenizerMode.Search)
        {
            return segmenter.Tokenize(text, mode);
        }
    }

If an ideal never yields even a little to reality, the ideal too will turn to dust.

 

Solution design:

  One question you must always consider when designing full-text search: our system is split into many modules whose fields differ widely, so how can a single index support both per-module search and site-wide search, and even searches conditioned on specific fields?

  SupportYun faces exactly this problem, because its data naturally splits into two categories, activities and articles, with quite different fields. What I wanted was a search that works site-wide (results spanning both activities and articles), works within the article or activity section separately, and supports a handful of designated fields as search conditions.

  Building such a feature takes some care at the design level. Here is my approach:
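The IndexManager below works against an IndexContent model that this post does not show. Inferred purely from how its members are used in GetDocument (so the shapes and names here are my assumptions, not the actual SupportYun types), it might look roughly like this:

```csharp
// Hypothetical reconstruction of the index DTOs that IndexManager consumes.
public enum IndexEnum
{
    NotIndex,                 // maps to Field.Index.NO
    NotUseAnalyzerButIndex,   // maps to Field.Index.NOT_ANALYZED
    UseAnalyzerIndex          // maps to Field.Index.ANALYZED
}

public class FieldTag<T>
{
    public T Value { get; set; }
    public bool Store { get; set; }       // store the raw value in the index?
    public IndexEnum Index { get; set; }  // how (and whether) to index it
}

public class IndexContent
{
    public string ModuleType { get; set; }   // e.g. article vs. activity module
    public string TableName { get; set; }    // source table; may be empty
    public Guid RowId { get; set; }          // primary key of the source row
    public string Title { get; set; }
    public string IndexTextContent { get; set; }
    public DateTime CollectTime { get; set; }

    // Reserved, module-specific extension fields
    public FieldTag<string> Tag1 { get; set; }
    // ... Tag2 through Tag8 declared the same way ...
    public FieldTag<float> FloatTag9 { get; set; }
    public FieldTag<float> FloatTag10 { get; set; }
}
```

The per-field Store/Index flags are what let different modules reuse the same reserved Tag slots with different indexing behavior.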

  I. Index creation

    1. We design an IndexManager to handle the basic index create, update, and delete operations.

    public class IndexManager
    {
        /// <summary>
        /// Index storage directory.
        /// </summary>
        public static readonly string IndexStorePath = ConfigurationManager.AppSettings["IndexStorePath"];

        // Lock on a dedicated object rather than the path string: interned
        // strings are shared process-wide and make unreliable lock targets.
        private static readonly object SyncLock = new object();

        private IndexWriter indexWriter;
        private FSDirectory entityDirectory;

        ~IndexManager()
        {
            DisposeWriter();
        }

        /// <summary>
        /// Adds new content to the index.
        /// </summary>
        public void BuildIndex(List<IndexContent> indexContents)
        {
            try
            {
                EnsureWriter();
                lock (SyncLock)
                {
                    foreach (var indexContent in indexContents)
                    {
                        indexWriter.AddDocument(GetDocument(indexContent));
                    }
                    indexWriter.Commit();
                    indexWriter.Optimize();
                }
            }
            catch (Exception exception)
            {
                LogUtils.ErrorLog(exception);
            }
            finally
            {
                DisposeWriter();
            }
        }

        /// <summary>
        /// Deletes documents from the index.
        /// </summary>
        /// <param name="moduleType"></param>
        /// <param name="tableName">May be empty</param>
        /// <param name="rowID"></param>
        public void DeleteIndex(string moduleType, string tableName, string rowID)
        {
            try
            {
                EnsureWriter();
                lock (SyncLock)
                {
                    indexWriter.DeleteDocuments(BuildKeyQuery(moduleType, tableName, rowID));
                    indexWriter.Commit();
                    indexWriter.Optimize();
                }
            }
            catch (Exception exception)
            {
                LogUtils.ErrorLog(exception);
            }
            finally
            {
                DisposeWriter();
            }
        }

        /// <summary>
        /// Updates the index: deletes the old document, then adds the new one.
        /// </summary>
        /// <param name="indexContent"></param>
        public void UpdateIndex(IndexContent indexContent)
        {
            try
            {
                EnsureWriter();
                lock (SyncLock)
                {
                    indexWriter.DeleteDocuments(BuildKeyQuery(indexContent.ModuleType,
                        indexContent.TableName, indexContent.RowId.ToString()));

                    indexWriter.AddDocument(GetDocument(indexContent));

                    indexWriter.Commit();
                    indexWriter.Optimize();
                }
            }
            catch (Exception exception)
            {
                LogUtils.ErrorLog(exception);
            }
            finally
            {
                DisposeWriter();
            }
        }

        private void EnsureWriter()
        {
            if (entityDirectory == null)
            {
                entityDirectory = FSDirectory.Open(new DirectoryInfo(IndexStorePath));
            }
            if (indexWriter == null)
            {
                Analyzer analyzer = new JiebaForLuceneAnalyzer();
                indexWriter = new IndexWriter(entityDirectory, analyzer, IndexWriter.MaxFieldLength.LIMITED);
            }
        }

        // Null the fields after disposing so a later call can safely reopen them.
        private void DisposeWriter()
        {
            if (indexWriter != null)
            {
                indexWriter.Dispose();
                indexWriter = null;
            }
            if (entityDirectory != null)
            {
                entityDirectory.Dispose();
                entityDirectory = null;
            }
        }

        // A document is uniquely identified by ModuleType + RowId (+ TableName when present).
        private static BooleanQuery BuildKeyQuery(string moduleType, string tableName, string rowID)
        {
            var query = new BooleanQuery
            {
                {new TermQuery(new Term("ModuleType", moduleType)), Occur.MUST},
                {new TermQuery(new Term("RowId", rowID)), Occur.MUST}
            };
            if (!string.IsNullOrEmpty(tableName))
            {
                query.Add(new TermQuery(new Term("TableName", tableName)), Occur.MUST);
            }
            return query;
        }

        private Document GetDocument(IndexContent indexContent)
        {
            var doc = new Document();
            doc.Add(new Field("ModuleType", indexContent.ModuleType, Field.Store.YES, Field.Index.NOT_ANALYZED));
            doc.Add(new Field("TableName", indexContent.TableName, Field.Store.YES, Field.Index.NOT_ANALYZED));
            doc.Add(new Field("RowId", indexContent.RowId.ToString().ToLower(), Field.Store.YES, Field.Index.NOT_ANALYZED));
            doc.Add(new Field("Title", indexContent.Title, Field.Store.YES, Field.Index.ANALYZED));
            doc.Add(new Field("IndexTextContent", ReplaceIndexSensitiveWords(indexContent.IndexTextContent), Field.Store.YES, Field.Index.ANALYZED));
            doc.Add(new Field("CollectTime", indexContent.CollectTime.ToString("yyyy-MM-dd HH:mm:ss"), Field.Store.YES, Field.Index.NO));

            // Reserved fields
            doc.Add(new Field("Tag1", indexContent.Tag1.Value, GetStoreEnum(indexContent.Tag1.Store), GetIndexEnum(indexContent.Tag1.Index)));
            doc.Add(new Field("Tag2", indexContent.Tag2.Value, GetStoreEnum(indexContent.Tag2.Store), GetIndexEnum(indexContent.Tag2.Index)));
            doc.Add(new Field("Tag3", indexContent.Tag3.Value, GetStoreEnum(indexContent.Tag3.Store), GetIndexEnum(indexContent.Tag3.Index)));
            doc.Add(new Field("Tag4", indexContent.Tag4.Value, GetStoreEnum(indexContent.Tag4.Store), GetIndexEnum(indexContent.Tag4.Index)));
            doc.Add(new Field("Tag5", indexContent.Tag5.Value, GetStoreEnum(indexContent.Tag5.Store), GetIndexEnum(indexContent.Tag5.Index)));
            doc.Add(new Field("Tag6", indexContent.Tag6.Value, GetStoreEnum(indexContent.Tag6.Store), GetIndexEnum(indexContent.Tag6.Index)));
            doc.Add(new Field("Tag7", indexContent.Tag7.Value, GetStoreEnum(indexContent.Tag7.Store), GetIndexEnum(indexContent.Tag7.Index)));
            doc.Add(new Field("Tag8", indexContent.Tag8.Value, GetStoreEnum(indexContent.Tag8.Store), GetIndexEnum(indexContent.Tag8.Index)));
            var field = new NumericField("FloatTag9", GetStoreEnum(indexContent.FloatTag9.Store),
                indexContent.FloatTag9.Index != IndexEnum.NotIndex);
            field = field.SetFloatValue(indexContent.FloatTag9.Value);
            doc.Add(field);
            field = new NumericField("FloatTag10", GetStoreEnum(indexContent.FloatTag10.Store),
                indexContent.FloatTag10.Index != IndexEnum.NotIndex);
            field = field.SetFloatValue(indexContent.FloatTag10.Value);
            doc.Add(field);
            return doc;
        }

        /// <summary>
        /// Stopgap method, temporary: strips whitespace that should not be indexed.
        /// </summary>
        /// <param name="str"></param>
        /// <returns></returns>
        private string ReplaceIndexSensitiveWords(string str)
        {
            // string.Replace removes every occurrence, so a single pass suffices.
            return str.Replace(" ", "").Replace("\n", "");
        }

        private Field.Index GetIndexEnum(IndexEnum index)
        {
            switch (index)
            {
                case IndexEnum.NotIndex:
                    return Field.Index.NO;
                case IndexEnum.NotUseAnalyzerButIndex:
                    return Field.Index.NOT_ANALYZED;
                case IndexEnum.UseAnalyzerIndex:
                    return Field.Index.ANALYZED;
                default:
                    return Field.Index.NO;
            }
        }

        private Field.Store GetStoreEnum(bool store)
        {
            return store ? Field.Store.YES : Field.Store.NO;
        }
    }
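Although this part of the series stops at index creation, the design above already implies how querying works: because ModuleType is indexed NOT_ANALYZED, a site-wide search simply omits the module filter, while a per-module search ANDs in a TermQuery on it. A rough sketch of that idea (field names follow the code above, but the searcher plumbing here is my own assumption, not the SupportYun code):

```csharp
// Hypothetical search helper: one index serves both site-wide and
// per-module search by optionally AND-ing a ModuleType term filter.
public static TopDocs Search(string keywords, string moduleType, int maxHits = 20)
{
    var directory = FSDirectory.Open(new DirectoryInfo(IndexManager.IndexStorePath));
    var searcher = new IndexSearcher(directory, readOnly: true);

    // Parse the user's keywords against the two analyzed fields,
    // using the same JieBa analyzer the index was built with.
    var parser = new MultiFieldQueryParser(Lucene.Net.Util.Version.LUCENE_30,
        new[] { "Title", "IndexTextContent" }, new JiebaForLuceneAnalyzer());

    var query = new BooleanQuery { { parser.Parse(keywords), Occur.MUST } };
    if (!string.IsNullOrEmpty(moduleType))
    {
        // Restrict to one module; skip this clause for site-wide search.
        query.Add(new TermQuery(new Term("ModuleType", moduleType)), Occur.MUST);
    }
    return searcher.Search(query, maxHits);
}
```

The same pattern extends to the "search by designated fields" requirement: each extra condition becomes another MUST clause on a NOT_ANALYZED field.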
