# STConvert Analysis for Elasticsearch
STConvert is an analyzer that converts Chinese characters between Traditional and Simplified. [Chinese Simplified/Traditional conversion] [Simplified to Traditional] [Traditional to Simplified] [Simplified/Traditional query expansion]

You can download the pre-built package from the release page.
The plugin includes the analyzer `stconvert`, the tokenizer `stconvert`, the token filter `stconvert`, and the char filter `stconvert`.
Supported config:

- `convert_type`: default `s2t`. Options:
  - `s2t`: convert characters from Simplified Chinese to Traditional Chinese
  - `t2s`: convert characters from Traditional Chinese to Simplified Chinese
- `keep_both`: default `false`. When `true`, keep both the original and the converted text (see the sketch after this list).
- `delimiter`: default `,`. The separator used to join the two forms when `keep_both` is enabled.
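As a rough illustration of how `keep_both` and `delimiter` might be combined, here is a sketch (the index and analyzer names are hypothetical, and the exact shape of the dual-form output is an assumption, not something this README documents):

```
PUT /stconvert-both/
{
  "settings" : {
    "analysis" : {
      "analyzer" : {
        "tsconvert_both" : {
          "tokenizer" : "tsconvert_both"
        }
      },
      "tokenizer" : {
        "tsconvert_both" : {
          "type" : "stconvert",
          "delimiter" : "#",
          "keep_both" : true,
          "convert_type" : "t2s"
        }
      }
    }
  }
}
```

Analyzing ๅ้ with this analyzer should emit the original and the converted form joined by `#`, rather than the converted form alone.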
Custom example:

```
PUT /stconvert/
{
  "settings" : {
    "analysis" : {
      "analyzer" : {
        "tsconvert" : {
          "tokenizer" : "tsconvert"
        }
      },
      "tokenizer" : {
        "tsconvert" : {
          "type" : "stconvert",
          "delimiter" : "#",
          "keep_both" : false,
          "convert_type" : "t2s"
        }
      },
      "filter": {
        "tsconvert" : {
          "type" : "stconvert",
          "delimiter" : "#",
          "keep_both" : false,
          "convert_type" : "t2s"
        }
      },
      "char_filter" : {
        "tsconvert" : {
          "type" : "stconvert",
          "convert_type" : "t2s"
        }
      }
    }
  }
}
```
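To sanity-check the custom analyzer defined above, you can run it through `_analyze`. This request is a sketch based on the settings just created; with `convert_type: t2s` and `keep_both: false`, the traditional input should come back as the simplified ๅฝ้:

```
GET stconvert/_analyze
{
  "analyzer" : "tsconvert",
  "text" : "ๅ้"
}
```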
## Analyze tests

```
GET stconvert/_analyze
{
  "tokenizer" : "keyword",
  "filter" : ["lowercase"],
  "char_filter" : ["tsconvert"],
  "text" : "ๅฝ้ๅ้"
}
```

Output:

```
{
  "tokens": [
    {
      "token": "ๅฝ้ๅฝ้",
      "start_offset": 0,
      "end_offset": 4,
      "type": "word",
      "position": 0
    }
  ]
}
```
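The token filter defined in the custom example can be exercised the same way. This is a sketch assuming the `stconvert` index created above; the `keyword` tokenizer keeps the text as a single token, which the `tsconvert` filter should then rewrite to the simplified ๅฝ้:

```
GET stconvert/_analyze
{
  "tokenizer" : "keyword",
  "filter" : ["tsconvert"],
  "text" : "ๅ้"
}
```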
## Normalizer usage

```
DELETE index

PUT index
{
  "settings": {
    "analysis": {
      "char_filter": {
        "tsconvert": {
          "type": "stconvert",
          "convert_type": "t2s"
        }
      },
      "normalizer": {
        "my_normalizer": {
          "type": "custom",
          "char_filter": [
            "tsconvert"
          ],
          "filter": [
            "lowercase"
          ]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "foo": {
        "type": "keyword",
        "normalizer": "my_normalizer"
      }
    }
  }
}

PUT index/_doc/1
{
  "foo": "ๅ้"
}

PUT index/_doc/2
{
  "foo": "ๅฝ้"
}

GET index/_search
{
  "query": {
    "term": {
      "foo": "ๅฝ้"
    }
  }
}

GET index/_search
{
  "query": {
    "term": {
      "foo": "ๅ้"
    }
  }
}
```
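Both term queries above should return both documents: the `t2s` char filter in `my_normalizer` rewrites the traditional ๅ้ to the simplified ๅฝ้ at index time and again at query time, so either spelling normalizes to the same keyword.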