Table of Contents
1. Creating, modifying, and deleting indices
(1) Syntax for manually creating an index
(2) Modifying an index
(3) Deleting an index
2. Modifying the analyzer and defining your own analyzer
(1) The default analyzer: standard
(2) Changing analyzer settings
(3) Building a custom analyzer
3. A deep dive into the underlying data structure of types
(1) Theory
(2) Hands-on example
(3) Summary
4. An in-depth look at the mapping root object
(1) root object: the mapping JSON for a type
(2) properties: includes type, index, analyzer
(3) _source
(4) _all
(5) Identity metadata: _index, _type, _id
5. Customizing your dynamic mapping strategy
(1) Customizing the dynamic setting
(2) Customizing dynamic mapping strategies
a. date_detection
b. Defining your own dynamic mapping templates (type level)
c. Defining your own default mapping template (index level)
6. Zero-downtime reindexing with scroll search and index aliases
(1) Rebuilding an index
1. Creating, modifying, and deleting indices
(1) Syntax for manually creating an index
PUT /my_index
{
"settings": {... any settings...},
"mappings": {
"type_one":{... any mappings...},
"type_two":{... any mappings...},
...
}
}
PUT /my_index
{
"settings": {
"number_of_shards": 1,
"number_of_replicas": 0
},
"mappings": {
"my_type":{
"properties": {
"my_field":{
"type":"text"
}
}
}
}
}
(2) Modifying an index
//--------- change the number of replicas
PUT /my_index/_settings
{
"number_of_replicas": 1
}
(3) Deleting an index
- DELETE /my_index
- DELETE /index_one,index_two
- DELETE /index_*
- DELETE /_all ----- deletes all indices
- With action.destructive_requires_name: true set in elasticsearch.yml, DELETE /_all can no longer be used to delete every index at once
2. Modifying the analyzer and defining your own analyzer
(1) The default analyzer: standard
- standard tokenizer: splits on word boundaries
- standard token filter: does nothing
- lowercase token filter: lowercases every term
- stop token filter (disabled by default): removes stopwords such as a, the, it
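The pipeline above can be approximated in Python. This is only a rough sketch of the behavior (a real Lucene tokenizer follows the Unicode word-boundary rules, not a simple `\w+` pattern):

```python
import re

def standard_analyze(text, stopwords=None):
    # standard tokenizer (simplified): split on word boundaries
    tokens = re.findall(r"\w+", text)
    # lowercase token filter
    tokens = [t.lower() for t in tokens]
    # stop token filter: disabled unless stopwords are supplied
    if stopwords:
        tokens = [t for t in tokens if t not in stopwords]
    return tokens

print(standard_analyze("A dog is in THE house"))
# ['a', 'dog', 'is', 'in', 'the', 'house']
print(standard_analyze("A dog is in THE house", {"a", "the", "is", "in"}))
# ['dog', 'house']
```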
(2) Changing analyzer settings
Enable the english stopwords token filter:
PUT /my_index
{
"settings": {
"analysis": {
"analyzer": {
"es_std":{
"type":"standard",
"stopwords":"_englist_"
}
}
}
}
}
Check the analysis after the change:
//----------- with standard, the tokens are: a, dog, is, in, the, house
GET /my_index/_analyze
{
"analyzer":"standard",
"text":"a dog is in the house"
}
//---------- with es_std, the tokens are: dog, house
GET /my_index/_analyze
{
"analyzer": "es_std",
"text": "a dog is in the house"
}
(3) Building a custom analyzer
PUT /my_index
{
"settings": {
"analysis": {
"char_filter": {
"&_to_and":{
"type":"mapping",
"mappings":["&=>and"] //&转化成and
}
},
"filter": {
"my_stopwords":{
"type":"stop",
"stopwords":["the","a"] //忽略the a
}
},
"analyzer": {
"my_analyzer":{
"type":"custom",
"char_filter":["html_strip","&_to_and"], //忽略html标签,&转成and
"tokenizer":"standard",
"filter":["lowercase","my_stopwords"]
}
}
}
}
}
//-------------- test; the tokens are: tomandjerry, are, friend, in, house, haha
GET /my_index/_analyze
{
"analyzer": "my_analyzer",
"text":"tom&jerry are a friend in the house,<a>,HAHA!!"
}
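The char filter → tokenizer → token filter chain defined above can be approximated in Python to see why those tokens come out (a simplification for illustration, not the real Lucene implementation):

```python
import re

def my_analyzer(text):
    text = re.sub(r"<[^>]*>", "", text)   # html_strip char filter: drop HTML tags
    text = text.replace("&", "and")       # &_to_and mapping char filter
    tokens = re.findall(r"\w+", text)     # standard tokenizer (simplified)
    tokens = [t.lower() for t in tokens]  # lowercase token filter
    return [t for t in tokens if t not in {"the", "a"}]  # my_stopwords filter

print(my_analyzer("tom&jerry are a friend in the house,<a>,HAHA!!"))
# ['tomandjerry', 'are', 'friend', 'in', 'house', 'haha']
```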
Apply the custom analyzer in a type:
PUT /my_index/_mapping/my_type
{
"properties": {
"content":{
"type": "text",
"analyzer": "my_analyzer"
}
}
}
3. A deep dive into the underlying data structure of types
(1) Theory
- A type separates similar kinds of data within one index: the data is similar, but may have different fields, with different settings controlling how it is indexed and analyzed.
- When field values are indexed in the underlying Lucene, they are all opaque bytes; Lucene does not distinguish between value types.
- Lucene has no concept of a type. Within a document, the type is actually stored as an ordinary field named _type, and es filters on _type to select documents of a given type.
- The multiple types of one index are stored together. Therefore, within one index, a field with the same name in two types cannot have different types (or other conflicting settings), because Lucene would have no way to handle that.
(2) Hands-on example
(1) Insert two documents
PUT goods_index/electronic_goods/1
{
"name": "geli kongtiao",
"price": 1999.0,
"service_period": "one year"
}
PUT goods_index/eat_goods/2
{
"name": "aozhou dalongxia",
"price": 199.0,
"eat_period": "one week"
}
The index is named goods_index, and it contains two types: electronic_goods and eat_goods.
Let's look at the index's mapping.
(2) View the mapping
GET /goods_index/_mapping
{
"goods_index": {
"mappings": {
"electronic_goods": {
"properties": {
"name": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"price": {
"type": "float"
},
"service_period": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
}
}
},
"eat_goods": {
"properties": {
"eat_period": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"name": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"price": {
"type": "float"
}
}
}
}
}
}
The multiple types of one index are stored together. The underlying storage structure in Lucene looks like this:
(3) Lucene's underlying storage
{
"goods_index": {
"mappings": {
"_type": {
"type": "string",
"index": "not_analyzed"
},
"name": {
"type": "string"
},
"price": {
"type": "double"
},
"service_period": {
"type": "string"
},
"eat_period": {
"type": "string"
}
}
}
}
The two documents above are stored like this at the bottom layer:
{
"_type": "electronic_goods",
"name": "geli kongtiao",
"price": 1999.0,
"service_period": "one year",
"eat_period": ""
}
{
"_type": "eat_goods",
"name": "aozhou dalongxia",
"price": 199.0,
"service_period": "",
"eat_period": "one week"
}
The _type field holds the type name. Both types have a name field, and because types are stored together, the two name fields share one storage space. If name were, say, a date type in electronic_goods but text in eat_goods, that conflict could not be resolved: Lucene's underlying structure stores the union of the fields of electronic_goods and eat_goods.
(3) Summary
Put types with similar structures into one index; such types should share many fields. If you put two types with completely different fields into one index, every document will have at least half of its fields empty at the Lucene level, which causes serious performance problems.
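The waste is easy to quantify. A small hypothetical sketch using the two goods documents from above: at the Lucene level every document carries a slot for the union of both types' fields, so the disjoint fields stay empty:

```python
electronic = {"name": "geli kongtiao", "price": 1999.0, "service_period": "one year"}
eat = {"name": "aozhou dalongxia", "price": 199.0, "eat_period": "one week"}

# Lucene stores the union of both types' fields for every document
all_fields = sorted(set(electronic) | set(eat))
docs = [{f: d.get(f) for f in all_fields} for d in (electronic, eat)]

empty = sum(v is None for d in docs for v in d.values())
print(f"{empty} of {len(docs) * len(all_fields)} field slots are empty")
# 2 of 8 field slots are empty
```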
4. An in-depth look at the mapping root object
(1) root object: the mapping JSON for a type
The root object is simply the mapping JSON of a type, including properties, metadata (_id, _source, _type), settings (analyzer), and other settings (such as include_in_all).
(2) properties: includes type, index, analyzer
PUT my_index/_mapping/my_type
{
"properties":{
"title":{
"type":"string", //类型
"index":"analyzed", //要不要分词
"analyzer":"standard" //使用哪个分词器
}
}
}
(3)_source
Benefits:
- A query returns the complete document directly; there is no need to fetch the document id first and then send a second request for the document
- partial update is implemented on top of _source
- reindexing works directly from _source, with no need to query the data from a database (or other external store) and modify it first
- you can return a customized set of fields based on _source
- debugging queries is easier, because you can see _source directly
If you need none of the above, _source can be disabled:
PUT /my_index/_mapping/my_type2
{
"_source":{
"enabled":false
}
}
(4)_all
All fields are packed together into a single _all field, which is indexed. When a search specifies no field, it runs against the _all field.
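Conceptually (a simplified sketch, not the actual implementation), _all behaves like a concatenation of every field value into one searchable string:

```python
def build_all(doc):
    # _all sketch: every field value joined into one searchable string
    return " ".join(str(v) for v in doc.values())

doc = {"name": "geli kongtiao", "price": 1999.0, "service_period": "one year"}
print(build_all(doc))                 # geli kongtiao 1999.0 one year
print("kongtiao" in build_all(doc))   # True: a field-less search still matches
```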
Disable _all:
PUT /my_index/_mapping/my_type3
{
"_all":{
"enabled":false
}
}
You can also set include_in_all at the field level to control whether a field's value is included in the _all field:
PUT /my_index/_mapping/my_type4
{
"properties":{
"my_field":{
"type":"text",
"include_in_all":false
}
}
}
(5) Identity metadata: _index, _type, _id
5. Customizing your dynamic mapping strategy
(1) Customizing the dynamic setting
- true: when an unknown field is encountered, apply dynamic mapping
- false: when an unknown field is encountered, ignore it
- strict: when an unknown field is encountered, raise an error
//---------- customize the dynamic setting
PUT /my_index
{
"mappings": {
"my_type": {
"dynamic":"strict",
"properties": {
"title":{
"type": "text"
},
"address":{
"type": "object",
"dynamic":"true"
}
}
}
}
}
//--------- 1. Test strict at the top level (unknown fields raise an error) by adding an unknown field, content
PUT my_index/my_type/1
{
"title":"my article",
"content":"this is my article",
"address":{
"province":"guangdong",
"city":"guangzhou"
}
}
The error is:
{
"error": {
"root_cause": [
{
"type": "strict_dynamic_mapping_exception",
"reason": "mapping set to strict, dynamic introduction of [content] within [my_type] is not allowed"
}
],
"type": "strict_dynamic_mapping_exception",
"reason": "mapping set to strict, dynamic introduction of [content] within [my_type] is not allowed"
},
"status": 400
}
//---------- 2. Test adding an unknown field under address (dynamic: true -- unknown fields get dynamic mapping)
PUT my_index/my_type/1
{
"title":"my article",
"address":{
"province":"guangdong",
"city":"guangzhou"
}
}
Result:
{
"_index": "my_index",
"_type": "my_type",
"_id": "1",
"_version": 1,
"result": "created",
"_shards": {
"total": 2,
"successful": 1,
"failed": 0
},
"created": true
}
//----------- check the mapping
GET /my_index/_mapping/my_type
Result:
{
"my_index": {
"mappings": {
"my_type": {
"dynamic": "strict",
"properties": {
"address": {
"dynamic": "true",
"properties": {
"city": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"province": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
}
}
},
"title": {
"type": "text"
}
}
}
}
}
}
(2) Customizing dynamic mapping strategies
a. date_detection
By default, es recognizes dates in certain formats, such as yyyy-MM-dd. If the first value to arrive for a field is something like 2017-01-01, the field is dynamically mapped as date; if a later value like "hello world" arrives, indexing fails. You can turn off date_detection for a type and, when needed, map specific fields as date manually:
PUT /my_index/_mapping/my_type
{
"date_detection":false
}
b. Defining your own dynamic mapping templates (type level)
//--------------- 1. Define a template: fields ending in "_en" get the following mapping
PUT my_index
{
"mappings": {
"my_type": {
"dynamic_templates":[
{"en":{
"match":"*_en",
"match_mapping_type":"string",
"mapping":{
"type":"string",
"analyzer":"english"
}
}}
]
}
}
}
//----------------- 2. Insert test data
PUT my_index/my_type/1
{
"title":"this is my first article"
}
PUT my_index/my_type/2
{
"title_en":"this is my first article"
}
//------------------ 3. Query
GET my_index/my_type/_search
{
"query": {
"match": {
"title": "is"
}
}
}
Result:
{
"took": 145,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"failed": 0
},
"hits": {
"total": 1,
"max_score": 0.2824934,
"hits": [
{
"_index": "my_index",
"_type": "my_type",
"_id": "1",
"_score": 0.2824934,
"_source": {
"title": "this is my first article"
}
}
]
}
}
GET my_index/my_type/_search
{
"query": {
"match": {
"title_en": "is"
}
}
}
Result:
{
"took": 3,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"failed": 0
},
"hits": {
"total": 0,
"max_score": null,
"hits": []
}
}
title matched no dynamic template, so it gets the default standard analyzer, which does not filter stopwords; "is" enters the inverted index, so searching for "is" finds the document. title_en matched the dynamic template and uses the english analyzer, which filters stopwords; "is" is dropped at index time, so searching for "is" finds nothing.
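The difference can be sketched by building the two inverted-index term sets by hand (a toy illustration; the stopword list here is only a small assumed subset of the english analyzer's real list):

```python
STOPWORDS = {"a", "an", "and", "is", "it", "my", "the", "this"}  # assumed subset

def standard_terms(text):
    return set(text.lower().split())          # no stopword filtering

def english_terms(text):
    return standard_terms(text) - STOPWORDS   # english analyzer drops stopwords

title = standard_terms("this is my first article")    # indexed terms of title
title_en = english_terms("this is my first article")  # indexed terms of title_en

print("is" in title)     # True: searching title for "is" finds the doc
print("is" in title_en)  # False: "is" never entered the index
```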
c. Defining your own default mapping template (index level)
PUT /my_index
{
"mappings":{
"_default":{
"_all":{"enabled":false}
},
"blog":{
"_all":{"enabled": false}
}
}
}
6. Zero-downtime reindexing with scroll search and index aliases
(1) Rebuilding an index
A field's settings cannot be changed in place. To change a field, you must create a new index with the new mapping, query the data out of the old index in batches, and write it into the new index with the bulk api. For the batch queries, use the scroll api, and reindex with multiple concurrent threads; each scroll queries one slice of the data (for example, one date range) and hands it to one thread.
(1) Initially, data is inserted relying on dynamic mapping, but some values happen to be dates like 2017-01-01, so the title field gets auto-mapped as date when it should really be a string.
PUT my_index/my_type/1
{
"title":"2017-01-01"
}
PUT my_index/my_type/2
{
"title":"2017-01-02"
}
PUT my_index/my_type/3
{
"title":"2017-01-03"
}
GET my_index/_mapping/my_type
Result:
{
"my_index": {
"mappings": {
"my_type": {
"properties": {
"title": {
"type": "date"
}
}
}
}
}
}
(2) Later, when a string title value is inserted into the index, it fails:
PUT my_index/my_type/4
{
"title":"my first article"
}
Result:
{
"error": {
"root_cause": [
{
"type": "mapper_parsing_exception",
"reason": "failed to parse [title]"
}
],
"type": "mapper_parsing_exception",
"reason": "failed to parse [title]",
"caused_by": {
"type": "illegal_argument_exception",
"reason": "Invalid format: \"my first article\""
}
},
"status": 400
}
(3) At this point, changing the type of title is impossible:
PUT my_index/_mapping/my_type
{
"properties": {
"title":{
"type": "text"
}
}
}
Result:
{
"error": {
"root_cause": [
{
"type": "illegal_argument_exception",
"reason": "mapper [title] of different type, current_type [date], merged_type [text]"
}
],
"type": "illegal_argument_exception",
"reason": "mapper [title] of different type, current_type [date], merged_type [text]"
},
"status": 400
}
(4) The only option now is to reindex: create a new index, query the data out of the old index, and import it into the new one.
(5) Say the old index is old_index and the new one is new_index. The downstream java application is already using old_index. Would we really have to stop the java application, change the index it uses to new_index, and restart it? That would mean downtime and reduced availability.
(6) Instead, give the java application an alias that points to the old index. The application operates through the goods_index alias, which at this point actually points to the old my_index:
PUT my_index/_alias/goods_index
(7) Create a new index with title's type changed to string:
PUT my_index_new
{
"mappings": {
"my_type":{
"properties": {
"title":{
"type": "text"
}
}
}
}
}
(8) Query the data out in batches with the scroll api:
GET my_index/_search?scroll=1m
{
"query": {
"match_all": {}
},
"sort": ["_doc"],
"size": 1
}
Result:
{
"_scroll_id": "DnF1ZXJ5VGhlbkZldGNoBQAAAAAAABvLFjU4LWIxRl9CU1BLR1RHMHd6SWpuX3cAAAAAAAAbyRY1OC1iMUZfQlNQS0dURzB3eklqbl93AAAAAAAAG8oWNTgtYjFGX0JTUEtHVEcwd3pJam5fdwAAAAAAABvMFjU4LWIxRl9CU1BLR1RHMHd6SWpuX3cAAAAAAAAbzRY1OC1iMUZfQlNQS0dURzB3eklqbl93",
"took": 113,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"failed": 0
},
"hits": {
"total": 3,
"max_score": null,
"hits": [
{
"_index": "my_index",
"_type": "my_type",
"_id": "2",
"_score": null,
"_source": {
"title": "2017-01-02"
},
"sort": [
0
]
}
]
}
}
(9) Use the bulk api to write the batch returned by scroll into the new index:
POST /_bulk
{"index":{"_index":"my_index_new", "_type":"my_type","_id":"2"}}
{"title":"2017-01-02"}
(10) Repeat (8)-(9) in a loop, querying batch after batch and writing each batch into the new index with the bulk api.
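The loop in steps (8)-(10) can be sketched in Python. The FakeES class below is a tiny in-memory stand-in for a real client (a hypothetical stub for illustration; with an actual client you would call the real search, scroll, and bulk endpoints):

```python
class FakeES:
    """In-memory stand-in for an Elasticsearch client (hypothetical)."""
    def __init__(self, docs):
        self.docs, self.pos, self.new_index = docs, 0, []

    def search_scroll(self, size):
        # step (8): return one batch of hits plus a scroll id
        batch, self.pos = self.docs[self.pos:self.pos + size], self.pos + size
        return {"_scroll_id": "scroll-1", "hits": {"hits": batch}}

    def scroll(self, scroll_id, size):
        return self.search_scroll(size)

    def bulk(self, actions):
        self.new_index.extend(actions)

es = FakeES([{"_id": str(i), "_source": {"title": f"2017-01-0{i}"}} for i in (1, 2, 3)])

resp = es.search_scroll(size=1)
while resp["hits"]["hits"]:                  # step (10): loop until a batch is empty
    actions = []
    for hit in resp["hits"]["hits"]:         # step (9): bulk-write the batch
        actions.append({"index": {"_index": "my_index_new",
                                  "_type": "my_type", "_id": hit["_id"]}})
        actions.append(hit["_source"])
    es.bulk(actions)
    resp = es.scroll(resp["_scroll_id"], size=1)

print(len(es.new_index) // 2)  # 3: all three documents were copied
```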
(11) Switch the goods_index alias over to my_index_new. The java application reads the new index's data through the alias, with no restart: zero downtime, high availability.
POST /_aliases
{
"actions": [
{"remove": {"index": "my_index","alias": "goods_index"}},
{"add": {"index": "my_index_new","alias": "goods_index"}}
]
}
(12) Query directly through the goods_index alias:
GET /goods_index/my_type/_search