ES Tokenization Notes
stop words
Stop words fall into two categories. The first is extremely common words: "Web", for example, appears on almost every website, so a search engine cannot guarantee truly relevant results for such a term, it does little to narrow the search scope, and it also hurts search efficiency. The second category is even larger: modal particles, adverbs, prepositions, conjunctions and so on, which usually carry no clear meaning on their own and only play a role inside a complete sentence, such as the common Chinese "的" and "在". Source: http://www.cnblogs.com/flyingchen/archive/2010/02/23/1671915.html
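As a quick illustration, here is a minimal sketch of the stop filter dropping English stop words through the `_analyze` API. It assumes a local cluster at `localhost:9200` and the JSON request body of recent ES versions; older releases took query-string parameters instead.

```python
# Minimal sketch: stop-word removal via the _analyze API.
# Assumes a local ES cluster and a recent API shape.
import requests

resp = requests.post(
    "http://localhost:9200/_analyze",
    json={
        "tokenizer": "standard",
        "filter": ["lowercase", "stop"],  # the stop filter defaults to the English list
        "text": "The Web is full of the usual words",
    },
)
print([t["token"] for t in resp.json()["tokens"]])
# Expected roughly: ['web', 'full', 'usual', 'words']
```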
word_delimiter
word_delimiter splits a token into subwords and optionally performs transformations on the subword groups. A word is split into subwords by the following rules (a sketch follows the list):
- Split on delimiters (by default, all non-alphanumeric characters): "Wi-Fi" → "Wi", "Fi"
- Split on case transitions: "PowerShot" → "Power", "Shot"
- Split on letter-number transitions: "SD500" → "SD", "500"
- Leading and trailing intra-word delimiters on each subword are ignored: "//hello---there, 'dude'" → "hello", "there", "dude"
- A trailing "'s" is removed from each subword: "O'Neil's" → "O", "Neil"
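A minimal sketch exercising these rules through the `_analyze` API (same assumed local cluster; the whitespace tokenizer keeps the raw words intact so word_delimiter does the actual splitting):

```python
# Minimal sketch: word_delimiter subword splitting via the _analyze API.
import requests

resp = requests.post(
    "http://localhost:9200/_analyze",
    json={
        "tokenizer": "whitespace",     # split only on spaces first
        "filter": ["word_delimiter"],  # then split each word into subwords
        "text": "Wi-Fi PowerShot SD500 O'Neil's",
    },
)
print([t["token"] for t in resp.json()["tokens"]])
# Expected roughly: ['Wi', 'Fi', 'Power', 'Shot', 'SD', '500', 'O', 'Neil']
```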
stemming and lemmatization
I experimented with the stemming and lemmatization mentioned in the article (a sketch follows the list):
- Reducing a word to its root form, e.g. "cars" → "car"; this operation is called stemming.
- Mapping a word to its dictionary form, e.g. "drove" → "drive"; this operation is called lemmatization.
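A minimal sketch contrasting the two: the Porter stemmer (spelled `porter_stem` in newer ES releases; the token filter table below lists the older camelCase logical name) reduces "cars" but leaves the irregular "drove" alone, which is exactly the gap lemmatization fills.

```python
# Minimal sketch: stemming via the _analyze API.
import requests

resp = requests.post(
    "http://localhost:9200/_analyze",
    json={
        "tokenizer": "standard",
        "filter": ["porter_stem"],  # "porterStem" in older releases
        "text": "cars drove",
    },
)
print([t["token"] for t in resp.json()["tokens"]])
# Expected roughly: ['car', 'drove'] -- stemming alone cannot map 'drove' to 'drive'
```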
Below are some of the built-in analyzers.
| analyzer | logical name | description |
|---|---|---|
| standard analyzer | standard | standard tokenizer, standard filter, lower case filter, stop filter |
| simple analyzer | simple | lower case tokenizer |
| stop analyzer | stop | lower case tokenizer, stop filter |
| keyword analyzer | keyword | no tokenization; the whole input becomes a single token (not_analyzed) |
| pattern analyzer | pattern | regex tokenization, matching \W+ by default |
| language analyzers | lang | analyzers for various languages |
| snowball analyzer | snowball | standard tokenizer, standard filter, lower case filter, stop filter, snowball filter |
| custom analyzer | custom | one tokenizer, zero or more token filters, zero or more char filters (see the sketch after this table) |
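A minimal sketch of the custom-analyzer composition from the last row, registered in index settings; the index name `demo` and analyzer name `my_analyzer` are illustrative.

```python
# Minimal sketch: register and test a custom analyzer.
import requests

ES = "http://localhost:9200"  # assumed local cluster

requests.put(f"{ES}/demo", json={
    "settings": {
        "analysis": {
            "analyzer": {
                "my_analyzer": {
                    "type": "custom",
                    "char_filter": ["html_strip"],    # zero or more char filters
                    "tokenizer": "standard",          # exactly one tokenizer
                    "filter": ["lowercase", "stop"],  # zero or more token filters
                }
            }
        }
    }
})

resp = requests.post(f"{ES}/demo/_analyze",
                     json={"analyzer": "my_analyzer",
                           "text": "<b>The Quick Foxes</b>"})
print([t["token"] for t in resp.json()["tokens"]])
# Expected roughly: ['quick', 'foxes']
```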
tokenizer
List of ES built-in tokenizers.
| tokenizer | logical name | description |
|---|---|---|
| standard tokenizer | standard | |
| edge ngram tokenizer | edgeNGram | |
| keyword tokenizer | keyword | does not tokenize; emits the whole input as one token |
| letter tokenizer | letter | splits at non-letter characters |
| lowercase tokenizer | lowercase | letter tokenizer, lower case filter |
| ngram tokenizers | nGram | |
| whitespace tokenizer | whitespace | splits on whitespace |
| pattern tokenizer | pattern | splits using a configurable regular expression |
| uax url email tokenizer | uax_url_email | like standard, but keeps URLs and email addresses as single tokens |
| path hierarchy tokenizer | path_hierarchy | handles path-like strings such as /path/to/something (see the sketch after this table) |
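A minimal sketch of the path_hierarchy tokenizer from the last row, again via the assumed local `_analyze` endpoint:

```python
# Minimal sketch: path_hierarchy tokenizer via the _analyze API.
import requests

resp = requests.post(
    "http://localhost:9200/_analyze",
    json={"tokenizer": "path_hierarchy", "text": "/path/to/something"},
)
print([t["token"] for t in resp.json()["tokens"]])
# Expected roughly: ['/path', '/path/to', '/path/to/something']
```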
token filter
List of ES built-in token filters.
| token filter | logical name | description |
|---|---|---|
| standard filter | standard | |
| ascii folding filter | asciifolding | |
| length filter | length | removes tokens that are too long or too short |
| lowercase filter | lowercase | converts tokens to lowercase |
| ngram filter | nGram | |
| edge ngram filter | edgeNGram | |
| porter stem filter | porterStem | Porter stemming algorithm |
| shingle filter | shingle | builds word n-grams (token shingles) |
| stop filter | stop | removes stop words |
| word delimiter filter | word_delimiter | splits a token into subwords |
| stemmer token filter | stemmer | |
| stemmer override filter | stemmer_override | |
| keyword marker filter | keyword_marker | |
| keyword repeat filter | keyword_repeat | |
| kstem filter | kstem | |
| snowball filter | snowball | |
| phonetic filter | phonetic | plugin |
| synonym filter | synonym | handles synonyms (see the sketch after this table) |
| compound word filter | dictionary_decompounder, hyphenation_decompounder | decomposes compound words |
| reverse filter | reverse | reverses each token |
| elision filter | elision | strips elisions (e.g. "l'avion" → "avion") |
| truncate filter | truncate | truncates tokens to a fixed length |
| unique filter | unique | |
| pattern capture filter | pattern_capture | |
| pattern replace filter | pattern_replace | replaces matches of a regular expression |
| trim filter | trim | trims surrounding whitespace |
| limit token count filter | limit | limits the number of tokens |
| hunspell filter | hunspell | Hunspell dictionary-based stemming |
| common grams filter | common_grams | |
| normalization filter | arabic_normalization, persian_normalization | |
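A minimal sketch of the synonym filter with an inline synonym list; the index name `syn_demo`, the filter/analyzer names, and the word pair are all illustrative.

```python
# Minimal sketch: inline synonym filter defined in index settings.
import requests

ES = "http://localhost:9200"  # assumed local cluster

requests.put(f"{ES}/syn_demo", json={
    "settings": {
        "analysis": {
            "filter": {
                "my_synonyms": {
                    "type": "synonym",
                    "synonyms": ["laptop, notebook"],  # treated as equivalents
                }
            },
            "analyzer": {
                "syn_analyzer": {
                    "type": "custom",
                    "tokenizer": "standard",
                    "filter": ["lowercase", "my_synonyms"],
                }
            },
        }
    }
})

resp = requests.post(f"{ES}/syn_demo/_analyze",
                     json={"analyzer": "syn_analyzer", "text": "laptop"})
print([t["token"] for t in resp.json()["tokens"]])
# Expected roughly: ['laptop', 'notebook'] emitted at the same position
```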
character filter
List of ES built-in character filters.
| character filter | logical name | description |
|---|---|---|
| mapping char filter | mapping | replaces characters according to a configured mapping |
| html strip char filter | html_strip | strips HTML elements (see the sketch after this table) |
| pattern replace char filter | pattern_replace | replaces characters matching a regular expression |
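A minimal sketch of the html_strip char filter via the `_analyze` API; the keyword tokenizer keeps the stripped text as one token so the effect is easy to see.

```python
# Minimal sketch: html_strip character filter via the _analyze API.
import requests

resp = requests.post(
    "http://localhost:9200/_analyze",
    json={
        "tokenizer": "keyword",  # keep the output as a single token
        "char_filter": ["html_strip"],
        "text": "<p>I&apos;m so <b>happy</b>!</p>",
    },
)
print([t["token"] for t in resp.json()["tokens"]])
# Expected roughly: ["\nI'm so happy!\n"]
```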
icu plugin