Index

# Close indices
curl -XPOST 'localhost:9200/logstash-2014.07.*/_close'

# Reopen an index
curl -XPOST 'localhost:9200/my_index/_open'

# Delete a given type from an index (index/type)
curl -XDELETE 'http://localhost:9200/logstash-2014.08.22/apacheaccesslogs'

Replicas

# Set replicas to 0 for all indices
# Warning: only for a single-server (dev) architecture
curl -XPUT 'localhost:9200/_settings' -d '{ "index" : { "number_of_replicas" : 0 } }'
# Set replicas to 0 for a specific index
# Warning: only for a single-server (dev) architecture
curl -XPUT 'localhost:9200/MonIndex/_settings' -d '{ "index" : { "number_of_replicas" : 0 } }'
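To check the value actually applied, the setting can be read back and extracted; a sketch against a captured sample response (layout assumed from the ES 1.x/2.x pretty-printed format; on a live cluster, pipe `curl -s 'localhost:9200/MonIndex/_settings?pretty'` into the sed instead of the heredoc):

```shell
# Extract number_of_replicas from a captured _settings response;
# the heredoc below mimics the pretty-printed layout (an assumption).
replicas=$(sed -n 's/.*"number_of_replicas" : "\([0-9]*\)".*/\1/p' <<'EOF'
{
  "MonIndex" : {
    "settings" : {
      "index" : {
        "number_of_replicas" : "0"
      }
    }
  }
}
EOF
)
echo "$replicas"
```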

List indices with details

curl 'localhost:9200/_cat/indices?v'
health index               pri rep docs.count docs.deleted store.size pri.store.size 
yellow logstash-2014.09.28   5   1        376            0    407.7kb        407.7kb 
yellow logstash-2015.07.30   5   1         66            0      217kb          217kb 
yellow kibana-int            5   1          3            0     43.3kb         43.3kb 
yellow nodes_stats           1   1          2            0      1.5mb          1.5mb 
yellow logstash-2014.10.22   5   1         17            0     40.1kb         40.1kb 
yellow logstash-2014.10.18   5   1          7            0     36.3kb         36.3kb 
yellow logstash-2014.09.08   5   1        114            0    241.9kb        241.9kb 
yellow logstash-2014.10.12   5   1        896            0    961.7kb        961.7kb 
yellow logstash-2014.09.10   5   1         93            0    184.4kb        184.4kb 
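The tabular listing lends itself to quick aggregation with awk (column numbers taken from the header above); a sketch against a captured sample — on a live cluster, replace the heredoc with `curl -s 'localhost:9200/_cat/indices?v'`:

```shell
# Sum docs.count (5th column) over all data rows, skipping the header.
total=$(awk 'NR > 1 { sum += $5 } END { print sum }' <<'EOF'
health index               pri rep docs.count docs.deleted store.size pri.store.size
yellow logstash-2014.09.28   5   1        376            0    407.7kb        407.7kb
yellow logstash-2015.07.30   5   1         66            0      217kb          217kb
yellow kibana-int            5   1          3            0     43.3kb         43.3kb
EOF
)
echo "$total"
```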

Stats

curl 'localhost:9200/_stats'

Other stats endpoints:
/_stats
/_stats/{metric}
/_stats/{metric}/{indexMetric}
/{index}/_stats
/{index}/_stats/{metric}
/_cluster/stats
/_nodes/stats

where {metric} can be one of:
indices, docs, store, indexing, search, get, merge, 
refresh, flush, warmer, filter_cache, id_cache, 
percolate, segments, fielddata, completion

Settings

/_nodes/settings
/_cluster/settings
/_settings
/_nodes/process

Cluster status

curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'
{
  "cluster_name" : "elasticsearch",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 41,
  "active_shards" : 41,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 5
}
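For scripting, the status field can be pulled out of the response; a sketch over a captured sample like the one above (pipe live curl output into the sed instead of the heredoc). Yellow means all primaries are allocated but some replicas are not, which is expected with a single data node:

```shell
# Pull the "status" field out of a cluster-health response.
status=$(sed -n 's/.*"status" : "\([a-z]*\)".*/\1/p' <<'EOF'
{
  "cluster_name" : "elasticsearch",
  "status" : "yellow",
  "timed_out" : false
}
EOF
)
echo "$status"
```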


curl -XGET 'http://localhost:9200/_cluster/state'

Snapshot/Restore

Snapshot directory

Create the snapshot directory

mkdir /tmp/my_backup
chmod 777 /tmp/my_backup

Register the snapshot repository

curl -XPUT http://127.0.0.1:9200/_snapshot/my_backup -d '
{
  "type": "fs",
  "settings": {
    "location": "/tmp/my_backup"
  }
}'

Create a snapshot

curl -XPUT http://127.0.0.1:9200/_snapshot/my_backup/snapshot_2 -d '
{
  "indices": "logstash-2015.11.12",
  "ignore_unavailable": "true",
  "include_global_state": false
}'
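Snapshot names like snapshot_2 are arbitrary; a common convention (an assumption here, not required by the API) is to date-stamp them. A sketch that only builds and prints the command:

```shell
# Build a date-stamped snapshot name (e.g. snapshot_2016.11.15) and
# print the matching curl command; drop the echo to actually run it.
SNAP="snapshot_$(date +%Y.%m.%d)"
echo curl -XPUT "http://127.0.0.1:9200/_snapshot/my_backup/${SNAP}"
```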

Restore

curl -XPOST http://127.0.0.1:9200/_snapshot/my_backup/snapshot_2/_restore

Elasticsearch 2.x

Add the following line to /etc/elasticsearch/elasticsearch.yml:

path.repo: ["/tmp/my_backup"]

Then restart Elasticsearch.

_cat

curl 'http://127.0.0.1:9200/_cat'
=^.^=
/_cat/allocation
/_cat/shards
/_cat/shards/{index}
/_cat/master
/_cat/nodes
/_cat/indices
/_cat/indices/{index}
/_cat/segments
/_cat/segments/{index}
/_cat/count
/_cat/count/{index}
/_cat/recovery
/_cat/recovery/{index}
/_cat/health
/_cat/pending_tasks
/_cat/aliases
/_cat/aliases/{alias}
/_cat/thread_pool
/_cat/plugins
/_cat/fielddata
/_cat/fielddata/{fields}
/_cat/nodeattrs
/_cat/repositories
/_cat/snapshots/{repository}

Examples:

curl 'http://127.0.0.1:9200/_cat/master'
h5yLY6U5QgKn3bjKZiD84g 127.0.0.1 127.0.0.1 node1 

Verbose output

curl 'http://127.0.0.1:9200/_cat/master?v'
id                     host      ip        node  
h5yLY6U5QgKn3bjKZiD84g 127.0.0.1 127.0.0.1 node1 

Help

curl 'http://127.0.0.1:9200/_cat/master?help'
id   |   | node id    
host | h | host name  
ip   |   | ip address 
node | n | node name  

Headers

curl 'http://127.0.0.1:9200/_cat/master?h=host,id'
127.0.0.1 h5yLY6U5QgKn3bjKZiD84g 

Template

Get all templates

curl 'http://127.0.0.1:9200/_template?pretty'

Get a specific template

curl 'http://127.0.0.1:9200/_template/logstash?pretty'

Add a new template

curl -XPUT 'http://127.0.0.1:9200/_template/MonTemplate' -d '.....'

Display options

  • ?pretty=false : raw, unformatted output (default)
  • ?pretty=true : pretty-printed JSON output
  • ?format=yaml : YAML output
  • ?human=true : adds a human-readable variant of convertible fields (time- or size-based fields)
Some of these can be combined: [pretty | format] together with human.

Show unassigned indices/shards:

# curl -s localhost:9200/_cat/shards  | grep UNASSIGNED 
logstash-2016.11.14 4 p UNASSIGNED 
logstash-2016.11.14 4 r UNASSIGNED 
logstash-2016.11.15 3 p UNASSIGNED 
logstash-2016.11.15 3 r UNASSIGNED 
logstash-2016.11.15 4 p UNASSIGNED 
logstash-2016.11.15 4 r UNASSIGNED 
logstash-2016.11.15 0 p UNASSIGNED 
logstash-2016.11.15 0 r UNASSIGNED 
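To see which indices are affected at a glance, the listing can be aggregated; a sketch over a captured sample (on a live cluster, feed `curl -s localhost:9200/_cat/shards` into the awk instead of the heredoc):

```shell
# Count UNASSIGNED shards per index (state is the 4th column here).
result=$(awk '$4 == "UNASSIGNED" { print $1 }' <<'EOF' | sort | uniq -c
logstash-2016.11.14 4 p UNASSIGNED
logstash-2016.11.14 4 r UNASSIGNED
logstash-2016.11.15 3 p UNASSIGNED
logstash-2016.11.15 3 r UNASSIGNED
logstash-2016.11.15 4 p UNASSIGNED
logstash-2016.11.15 4 r UNASSIGNED
EOF
)
echo "$result"
```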

Assign an index/shard to a cluster node

curl -XPOST 'localhost:9200/_cluster/reroute' -d '{
    "commands" : [ {
        "allocate" : {
            "index" : "logstash-2016.11.15", "shard" : 4, "node" : "MonNode", "allow_primary" : true
        }
    } ]
}'

Show the number of file descriptors in use

curl 'http://127.0.0.1:9200/_cluster/stats?pretty' | grep -A 4 file
      "open_file_descriptors" : { 
        "min" : 63505, 
        "max" : 65870, 
        "avg" : 64687 
      } 

or, if the PID is known:

# ls /proc/22505/fd/ | wc -l 
65966 

or:

# curl 'localhost:9200/_nodes/stats/process?pretty&human=true'
{
  "cluster_name" : "elasticsearch",
  "nodes" : {
    "VGaVGsCoQKO4tI8uvOL4eQ" : {
      "timestamp" : 1479478000398,
      "name" : "Alasta Lab",
      "transport_address" : "127.0.0.1:9300",
      "host" : "127.0.0.1",
      "ip" : [ "127.0.0.1:9300", "NONE" ],
      "process" : {
        "timestamp" : 1479478000399,
        "open_file_descriptors" : 3549,
        "max_file_descriptors" : 65535,
        "cpu" : {
          "percent" : 6,
          "total" : "45.5m",
          "total_in_millis" : 2735680
        },
        "mem" : {
          "total_virtual" : "2.8gb",
          "total_virtual_in_bytes" : 3071959040
        }
      }
    }
  }
}

Show the file descriptor limit

# su - elasticsearch -s /bin/bash
$ ulimit -Sn
65535
$ ulimit -Hn
65535

or with the PID:

# cat /proc/22505/limits 
Limit                     Soft Limit           Hard Limit           Units 
Max cpu time              unlimited            unlimited            seconds 
Max file size             unlimited            unlimited            bytes 
Max data size             unlimited            unlimited            bytes 
Max stack size            10485760             unlimited            bytes 
Max core file size        0                    unlimited            bytes 
Max resident set          unlimited            unlimited            bytes 
Max processes             1024                 774254               processes 
Max open files            128000               128000               files 
Max locked memory         65536                65536                bytes 
Max address space         unlimited            unlimited            bytes 
Max file locks            unlimited            unlimited            locks 
Max pending signals       774254               774254               signals 
Max msgqueue size         819200               819200               bytes 
Max nice priority         0                    0 
Max realtime priority     0                    0 
Max realtime timeout      unlimited            unlimited            us 

List indices sorted by date

curl -s http://127.0.0.1:9200/_cat/shards | awk '{print $1}' | sort -u
.kibana
logstash-2015.11.29
logstash-2015.11.30
logstash-2015.12.01
logstash-2015.12.02
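Because the date is embedded in the name in a lexicographically sortable form, old indices can be selected by plain string comparison; a sketch that only prints the candidates (the cutoff value and the captured list are examples — it deletes nothing):

```shell
# Print logstash-YYYY.MM.DD indices strictly older than the cutoff;
# string comparison works because the date format sorts naturally.
CUTOFF="2015.12.01"   # example cutoff: keep this date and newer
old=$(awk -v limit="logstash-$CUTOFF" '/^logstash-/ && $0 < limit' <<'EOF'
.kibana
logstash-2015.11.29
logstash-2015.11.30
logstash-2015.12.01
logstash-2015.12.02
EOF
)
echo "$old"
```

A real cleanup job would then feed each printed name to curl -XDELETE.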