Elasticsearch Blueprints


Using a lowercased analyzer


In the previous attempt, we saw that our solution addressed the problem only partially. A better approach here is the keyword tokenizer, which keeps the input text intact as a single token before it reaches the filters.

Note

Unlike the not_analyzed approach, the keyword tokenizer approach still allows us to apply filters, such as lowercase, to the text.

Now, let's see how we can implement this analyzer.

First, we need to define our analyzer in the index settings while creating the index:

curl -X PUT "http://localhost:9200/news" -d '{
  "settings": {
    "analysis": {
      "analyzer": {
        "flat": {
          "type": "custom",
          "tokenizer": "keyword",
          "filter": ["lowercase"]
        }
      }
    }
  }
}'
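To confirm the analyzer behaves as intended, we can run a sample string through the _analyze API (the sample text here is arbitrary, and the exact response format varies across Elasticsearch versions):

```shell
# Analyze a sample string with the custom "flat" analyzer.
# The entire input should come back as a single, lowercased token,
# for example "vincent van gogh", rather than three separate tokens.
curl -X GET "http://localhost:9200/news/_analyze?analyzer=flat" -d 'Vincent Van Gogh'
```

This shows the key difference from the standard analyzer: the text is not split on whitespace, only lowercased.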

Next, in the mapping, we attach the flat analyzer we just created to the required field:

curl -X PUT "http://localhost:9200/news/public/_mapping" -d '{
  "public": {
    "properties": {
      "Author": {
        "type": "string",
        "analyzer": "flat"
      }
    }
  }
}'
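With this mapping in place, the Author field supports exact but case-insensitive matching. A quick sketch of how it can be used (the document ID and author name below are hypothetical, chosen only for illustration):

```shell
# Index a hypothetical document into the news index.
curl -X PUT "http://localhost:9200/news/public/1" -d '{
  "Author": "Jack London"
}'

# A term query is not analyzed at search time, so we query with the
# lowercased form that the flat analyzer stored. This matches documents
# whose Author is "Jack London", "JACK LONDON", "jack london", and so on.
curl -X POST "http://localhost:9200/news/public/_search" -d '{
  "query": {
    "term": { "Author": "jack london" }
  }
}'
```

Because the whole field value was indexed as one lowercased token, a partial match such as "london" alone will not match, which is exactly the exact-match behavior we wanted.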