ELK Log Analysis System (4): Elasticsearch Data Storage
- October 14, 2019
- Notes
1. Overview
After Logstash ships the formatted data over, Elasticsearch is responsible for storing the log data and making it searchable.
Elasticsearch's search API is quite powerful, but I won't go into it in detail here, since Kibana calls the Elasticsearch API for us;
this post covers the relevant Elasticsearch configuration and the problems I ran into. As for how to actually use Elasticsearch's search features, I'll find time to write that up later.
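For context, the hand-off from Logstash is just an `elasticsearch` output block in the pipeline config. A minimal sketch (the host and index pattern here are assumptions, not taken from this setup):

```conf
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]    # service name inside the docker-elk network
    index => "syslog-%{+YYYY.MM.dd}"   # hypothetical daily index pattern
  }
}
```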
2. Configuration
Config file path: docker-elk/elasticsearch/config/elasticsearch.yml
- Disable security, otherwise Kibana cannot connect: xpack.security.enabled: false
- Enable cross-origin requests, otherwise Kibana reports that it cannot connect: http.cors.enabled: true
Also, since Elasticsearch is an easy target for attacks, it is best not to expose its ports to the outside world.
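One way to keep the ports off public interfaces while still reaching them from the host is to bind them to loopback in docker-compose.yml. A sketch of the ports section only (adapt to your own compose file):

```yaml
services:
  elasticsearch:
    ports:
      - "127.0.0.1:9200:9200"   # REST API reachable only from the host itself
      - "127.0.0.1:9300:9300"   # transport port likewise
```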
```yaml
cluster.name: "docker-cluster"
network.host: 0.0.0.0

## Use single node discovery in order to disable production mode and avoid bootstrap checks
## see https://www.elastic.co/guide/en/elasticsearch/reference/current/bootstrap-checks.html
#
discovery.type: single-node

## X-Pack settings
## see https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-xpack.html
#
xpack.license.self_generated.type: trial
xpack.security.enabled: false
xpack.monitoring.collection.enabled: true

http.cors.enabled: true
http.cors.allow-origin: "*"
```
Elasticsearch's data directory is /usr/share/elasticsearch/data
To verify it is working:
Visit http://192.168.1.165:9200 ; if you get a response like the following, it is up:
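The root endpoint replies with a small JSON document, roughly like this (abbreviated; field values vary with your version and node name):

```json
{
  "name" : "elasticsearch",
  "cluster_name" : "docker-cluster",
  "version" : {
    "number" : "7.4.0"
  },
  "tagline" : "You Know, for Search"
}
```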
3. Troubleshooting
3.1. index has exceeded [1000000] - maximum allowed to be analyzed for highlighting
The full error message looks like this:
{"type":"illegal_argument_exception","reason":"The length of [message] field of [l60ZgW0Bv9XMTlnX27A_] doc of [syslog] index has exceeded [1000000] - maximum allowed to be analyzed for highlighting. This maximum can be set by changing the [index.highlight.max_analyzed_offset] index level setting. For large texts, indexing with offsets or term vectors is recommended!"}
Cause: index.highlight.max_analyzed_offset defaults to 1000000 characters, and the [message] field exceeded it.
This is an index-level setting, so it cannot be set in the configuration file; it can only be changed through the REST API.
```shell
# Raise the highlight max analyzed offset
curl -XPUT "http://192.168.1.165:9200/_settings" -H 'Content-Type: application/json' -d'
{
  "index" : {
    "highlight.max_analyzed_offset" : 100000000
  }
}'
```
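What trips this error is essentially a length check on the field being highlighted; a minimal sketch of the condition (the message content here is made up):

```python
# Sketch of the limit Elasticsearch enforces before highlighting a field:
# if the text to analyze is longer than index.highlight.max_analyzed_offset,
# the request fails with illegal_argument_exception.
DEFAULT_MAX_ANALYZED_OFFSET = 1_000_000   # characters, the default

message = "x" * 1_500_000                 # hypothetical oversized syslog line
too_long = len(message) > DEFAULT_MAX_ANALYZED_OFFSET

print("highlighting would fail:", too_long)
```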
3.2. circuit_breaking_exception, '[parent] Data too large, data for [<http_request>] would be [246901928/235.4mb], which is larger than the limit of [246546432/235.1mb]'
The full error message looks like this:
elasticsearch.exceptions.TransportError: TransportError(429, 'circuit_breaking_exception', '[parent] Data too large, data for [<http_request>] would be [246901928/235.4mb], which is larger than the limit of [246546432/235.1mb], real usage: [246901768/235.4mb], new bytes reserved: [160/160b], usages [request=0/0b, fielddata=11733/11.4kb, in_flight_requests=160/160b, accounting=6120593/5.8mb]')
Cause:
The JVM heap is too small for the data the current request needs, so the parent circuit breaker rejects the request: https://github.com/docker-library/elasticsearch/issues/98
Solutions:
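The numbers in the message line up once you convert bytes to mebibytes the way Elasticsearch prints them (truncated to one decimal place); a quick check:

```python
import math

would_use = 246_901_928   # bytes the request would bring usage to
limit     = 246_546_432   # parent breaker limit in bytes

def to_mb(b):
    # Elasticsearch prints sizes truncated to one decimal place
    return math.floor(b / 2**20 * 10) / 10

# 235.4mb would exceed the 235.1mb limit, so the breaker trips (HTTP 429)
print(f"{to_mb(would_use)}mb > {to_mb(limit)}mb")
```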
- Increase the heap size
On the host, first make sure the mmap count satisfies Elasticsearch's bootstrap check: sudo sysctl -w vm.max_map_count=262144
Then pass JVM options through docker-compose to set the JVM's initial heap size to 1 GB and its maximum heap size to 3 GB.
docker-compose file path: docker-elk/docker-compose.yml
```yaml
services:
  elasticsearch:
    build:
      context: elasticsearch/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./elasticsearch/config/elasticsearch.yml
        target: /usr/share/elasticsearch/config/elasticsearch.yml
        read_only: true
      - type: volume
        source: elasticsearch
        target: /usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xms1g -Xmx3g"
      ELASTIC_PASSWORD: changeme
      LOGSPOUT: ignore
    networks:
      - elk
```
- Raise the circuit breaker limit (indices.breaker.total.limit, which defaults to 70% of the heap)
```shell
# Allow the parent breaker to use up to 90% of the heap
curl -X PUT "http://192.168.1.165:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
  "transient" : {
    "indices.breaker.total.limit" : "90%"
  }
}'
```
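To sanity-check what these two knobs give you together: with -Xmx3g and the breaker at 90%, the parent breaker allows roughly 2.7 GB of usage before rejecting requests (simple arithmetic, not an API call):

```python
heap_mb = 3 * 1024            # -Xmx3g from ES_JAVA_OPTS
breaker = 0.90                # indices.breaker.total.limit: 90%
limit_mb = heap_mb * breaker

print(f"parent breaker limit ~= {limit_mb:.1f} MB")   # 2764.8 MB
```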
4. Installing a visualization plugin
Start it with docker:
docker run -d --name elasticsearch-head -p 9100:9100 mobz/elasticsearch-head:5
Elasticsearch must be configured to allow cross-origin requests (http.cors.enabled above), otherwise the plugin reports that it cannot connect.
elasticsearch-head entry point: http://192.168.1.165:9100
The plugin looks like this:
This plugin probably does not support recent Elasticsearch versions well; later I may switch to a plugin that does.