Installing ELK as a Centralized Log File Management Server on CentOS 7
This tutorial explains how to set up a centralized log file management server using ELK on CentOS 7. For anyone unfamiliar with it, ELK is a combination of three services: Elasticsearch, Logstash, and Kibana. To build a complete centralized log management server with this stack, all three packages are required, since each serves a different purpose and they work together. In essence, the stack works like this:
- Each client you want to manage generates logs for its own relevant services.
- The server that manages the logging information from all clients uses the Logstash package to collect the data and transform it into useful values. By definition, Logstash is an open-source, server-side data processing pipeline that can ingest data from multiple sources simultaneously and transform it.
- Once the data has been collected and transformed, the management server uses Elasticsearch to index and analyze it. Elasticsearch also provides a common query language that you can use to generate whatever reports you need.
- With the relevant data verified and analyzed, this is where the Kibana package comes into the picture: it visualizes the data into appropriate views and combines them into a clean dashboard that is easy to understand.
The following diagram summarizes the workflow:
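In text form, the data flow in the setup used by this tutorial is roughly:

MySQL client 1 (Filebeat) --\
                             +--> Logstash (5044) --> Elasticsearch (9200) --> Kibana (5601)
MySQL client 2 (Filebeat) --/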
1. Preliminary Notes
For this tutorial, I am using the 64-bit version of CentOS Linux 7.4. We will use three servers: the first will act as the management server and the other two as clients. For this exercise, the management server will monitor an existing MySQL service that has already been set up, configured, and is running on each client. Since MySQL is a database service mainly used for OLTP purposes, we will have the management server track two logging processes: the health of the MySQL service itself and slow query transactions. By the end of this tutorial, we will see that anything logged by the MySQL services on the dedicated clients can be viewed, visualized, and analyzed in real time directly from the management server.
2. Installation Phase (Client Side)
For the installation phase, we will install Filebeat on both MySQL DB servers that act as clients. Let's start the process; below are the steps:
[root@mysql_db1 opt]# cd
[root@mysql_db1 ~]# cd /opt/
[root@mysql_db1 opt]# wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.2.1-x86_64.rpm
--2018-06-09 10:50:46-- https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.2.1-x86_64.rpm
Resolving artifacts.elastic.co (artifacts.elastic.co)... 107.21.237.188, 107.21.253.15, 184.73.245.233, ...
Connecting to artifacts.elastic.co (artifacts.elastic.co)|107.21.237.188|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 12697093 (12M) [binary/octet-stream]
Saving to: ‘filebeat-6.2.1-x86_64.rpm’
100%[==============================================================================>] 12,697,093 2.20MB/s in 6.9s
2018-06-09 10:51:00 (1.75 MB/s) - ‘filebeat-6.2.1-x86_64.rpm’ saved [12697093/12697093]
[root@mysql_db1 opt]# yum localinstall -y filebeat-6.2.1-x86_64.rpm
Loaded plugins: fastestmirror, ovl
Examining filebeat-6.2.1-x86_64.rpm: filebeat-6.2.1-1.x86_64
Marking filebeat-6.2.1-x86_64.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package filebeat.x86_64 0:6.2.1-1 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
========================================================================================================================
Package Arch Version Repository Size
========================================================================================================================
Installing:
filebeat x86_64 6.2.1-1 /filebeat-6.2.1-x86_64 49 M
Transaction Summary
========================================================================================================================
Install 1 Package
Total size: 49 M
Installed size: 49 M
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : filebeat-6.2.1-1.x86_64 1/1
Verifying : filebeat-6.2.1-1.x86_64 1/1
Installed:
filebeat.x86_64 0:6.2.1-1
Complete!
Once done, we will list the default modules shipped with the Filebeat package and enable the mysql module that our case requires. Below are the steps:
[root@mysql_db1 opt]# filebeat modules list
Enabled:
Disabled:
apache2
auditd
icinga
kafka
logstash
mysql
nginx
osquery
postgresql
redis
system
traefik
[root@mysql_db1 opt]# filebeat modules enable mysql
Enabled mysql
With that done, let's now edit the configuration needed for the mysql module we just enabled. By default, once the mysql module is enabled, Filebeat automatically creates a YAML file for it in the modules.d directory. However, if the file was not created, feel free to create a new YAML file in the same location. Below are the steps:
[root@mysql_db1 opt]# vi /etc/filebeat/modules.d/mysql.yml
- module: mysql
  error:
    enabled: true
    var.paths: ["/var/lib/mysql/mysql-error.log*"]
  slowlog:
    enabled: true
    var.paths: ["/var/lib/mysql/log-slow-queries.log*"]
As shown above, we have decided to track two logging processes of the MySQL service: the health of the database itself (the error log) and the slow query log.
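Note that these paths assume the MySQL instances on the clients already write their error log and slow query log to those locations. If yours do not, here is a minimal my.cnf sketch that would produce matching files (the paths and threshold are assumptions; adjust them to your environment and restart MySQL afterwards):

[mysqld]
log_error           = /var/lib/mysql/mysql-error.log
slow_query_log      = 1
slow_query_log_file = /var/lib/mysql/log-slow-queries.log
long_query_time     = 2    # queries slower than 2 seconds get logged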
Now that this is in place, let's make a few changes in Filebeat's main configuration file, filebeat.yml. Below is the set of configurations:
[root@mysql_db1 opt]# vi /etc/filebeat/filebeat.yml
#=========================== Filebeat prospectors =============================
filebeat.prospectors:
- type: log
  enabled: false
  paths:
    - /var/lib/mysql/mysql-error.log
    - /var/lib/mysql/log-slow-queries.log

#============================= Filebeat modules ===============================
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 3

#================================ General =====================================
setup.kibana:

#----------------------------- Logstash output --------------------------------
output.logstash:
  hosts: ["172.17.0.6:5044"]
Notice that above we set the Logstash host to the IP address 172.17.0.6. This IP is the address of our centralized management server, which will collect the log data shipped by Filebeat. I hardcoded the IP because I have not made any changes to the /etc/hosts file and am not using any DNS server for this tutorial. However, if you have made such changes, feel free to use the management server's hostname instead.
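For example, an entry like the following in /etc/hosts on each client would let you write hosts: ["elk_master:5044"] instead of the raw IP:

172.17.0.6    elk_master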
Since everything has now been set up as planned, let's start the Filebeat service. Below are the steps:
[root@mysql_db1 opt]# filebeat setup -e
2018-06-09T11:04:37.277Z INFO instance/beat.go:468 Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2018-06-09T11:04:37.277Z INFO instance/beat.go:475 Beat UUID: 98503460-035e-4476-8e4d-10470433dba5
2018-06-09T11:04:37.277Z INFO instance/beat.go:213 Setup Beat: filebeat; Version: 6.2.1
2018-06-09T11:04:37.277Z INFO pipeline/module.go:76 Beat name: lara
2018-06-09T11:04:37.278Z ERROR instance/beat.go:667 Exiting: Template loading requested but the Elasticsearch output is not configured/enabled
Exiting: Template loading requested but the Elasticsearch output is not configured/enabled
[root@mysql_db1 opt]# filebeat -e &
[1] 22010
[root@mysql_db1 opt]# 2018-06-09T12:45:18.812Z INFO instance/beat.go:468 Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2018-06-09T12:45:18.813Z INFO instance/beat.go:475 Beat UUID: 98503460-035e-4476-8e4d-10470433dba5
2018-06-09T12:45:18.813Z INFO instance/beat.go:213 Setup Beat: filebeat; Version: 6.2.1
2018-06-09T12:45:18.813Z INFO pipeline/module.go:76 Beat name: lara
2018-06-09T12:45:18.813Z INFO [monitoring] log/log.go:97 Starting metrics logging every 30s
2018-06-09T12:45:18.813Z INFO instance/beat.go:301 filebeat start running.
2018-06-09T12:45:18.814Z INFO registrar/registrar.go:71 No registry file found under: /var/lib/filebeat/registry. Creating a new registry file.
2018-06-09T12:45:18.819Z INFO registrar/registrar.go:108 Loading registrar data from /var/lib/filebeat/registry
2018-06-09T12:45:18.819Z INFO registrar/registrar.go:119 States Loaded from registrar: 0
2018-06-09T12:45:18.819Z WARN beater/filebeat.go:261 Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2018-06-09T12:45:18.820Z INFO crawler/crawler.go:48 Loading Prospectors: 1
2018-06-09T12:45:18.821Z INFO log/prospector.go:111 Configured paths: [/var/lib/mysql/log-slow-queries.log*]
2018-06-09T12:45:18.822Z INFO log/prospector.go:111 Configured paths: [/var/lib/mysql/mysql-error.log*]
2018-06-09T12:45:18.822Z INFO crawler/crawler.go:82 Loading and starting Prospectors completed. Enabled prospectors: 0
2018-06-09T12:45:18.822Z INFO cfgfile/reload.go:127 Config reloader started
2018-06-09T12:45:18.840Z INFO log/prospector.go:111 Configured paths: [/var/lib/mysql/log-slow-queries.log*]
2018-06-09T12:45:18.840Z INFO log/prospector.go:111 Configured paths: [/var/lib/mysql/mysql-error.log*]
2018-06-09T12:45:18.840Z INFO cfgfile/reload.go:258 Starting 1 runners ...
2018-06-09T12:45:18.840Z INFO cfgfile/reload.go:219 Loading of config files completed.
2018-06-09T12:45:18.841Z INFO log/harvester.go:216 Harvester started for file: /var/lib/mysql/mysql-error.log
2018-06-09T12:45:18.841Z INFO log/harvester.go:216 Harvester started for file: /var/lib/mysql/log-slow-queries.log
2018-06-09T12:45:20.841Z ERROR pipeline/output.go:74 Failed to connect: dial tcp 172.17.0.6:5044: getsockopt: connection refused
2018-06-09T12:45:22.842Z ERROR pipeline/output.go:74 Failed to connect: dial tcp 172.17.0.6:5044: getsockopt: connection refused
2018-06-09T12:45:26.842Z ERROR pipeline/output.go:74 Failed to connect: dial tcp 172.17.0.6:5044: getsockopt: connection refused
[root@mysql_db1 ~]# tail -f /var/log/filebeat/filebeat
2018-06-09T10:53:28.853Z INFO instance/beat.go:468 Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2018-06-09T10:53:28.853Z INFO instance/beat.go:475 Beat UUID: 98503460-035e-4476-8e4d-10470433dba5
Notice that errors appear in the log once the Filebeat service is started. This is because the assigned management server has not been set up yet. For this initial phase you can ignore the error log, as it will recover automatically once our management server is up and starts receiving data.
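Two optional checks worth knowing about: the Filebeat RPM also ships a systemd unit, so instead of running the binary in the foreground you can manage it as a regular service, and once the management server is up you can re-verify the connection to the Logstash output. A sketch:

[root@mysql_db1 ~]# systemctl enable filebeat
[root@mysql_db1 ~]# systemctl restart filebeat
[root@mysql_db1 ~]# filebeat test output    # checks the connection to 172.17.0.6:5044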
Once the basic configuration on this client is done, you can repeat the same steps on the other MySQL server that also acts as a client.
Moving forward, we will proceed with setting up the management server itself.
3. Installation Phase (Centralized Management Server Side)
Now that we are done with the client-side setup, let's start on the configuration needed for the management server itself. In short, three core packages need to be installed and configured on the management server: Elasticsearch, Logstash, and Kibana.
For this phase, we will start with the installation and configuration needed for Elasticsearch; below are the steps:
[root@elk_master ~]# cd /opt/
[root@elk_master opt]# ls
[root@elk_master opt]# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.2.1.tar.gz
--2018-06-09 12:47:59-- https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.2.1.tar.gz
Resolving artifacts.elastic.co (artifacts.elastic.co)... 107.21.237.188, 54.235.82.130, 107.21.253.15, ...
Connecting to artifacts.elastic.co (artifacts.elastic.co)|107.21.237.188|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 29049089 (28M) [binary/octet-stream]
Saving to: ‘elasticsearch-6.2.1.tar.gz’
100%[==============================================================================>] 29,049,089 2.47MB/s in 16s
2018-06-09 12:48:21 (1.76 MB/s) - ‘elasticsearch-6.2.1.tar.gz’ saved [29049089/29049089]
[root@elk_master opt]#
[root@elk_master opt]#
[root@elk_master opt]# tar -zxvf elasticsearch-6.2.1.tar.gz
[root@elk_master opt]# ln -s /opt/elasticsearch-6.2.1 /opt/elasticsearch
[root@elk_master opt]# ll
total 28372
lrwxrwxrwx 1 root root 24 Jun 9 12:49 elasticsearch -> /opt/elasticsearch-6.2.1
drwxr-xr-x 8 root root 143 Feb 7 19:36 elasticsearch-6.2.1
-rw-r--r-- 1 root root 29049089 May 15 04:56 elasticsearch-6.2.1.tar.gz
Having finished the Elasticsearch installation, let's proceed to the configuration part. We will assign the directory /data/data to store the collected log data once it has been analyzed. The same directory will also store the indices that Elasticsearch uses for faster querying. The directory /data/logs will be used by Elasticsearch for its own logging. Below are the steps:
[root@elk_master opt]# mkdir -p /data/data
[root@elk_master opt]# mkdir -p /data/logs
[root@elk_master opt]#
[root@elk_master opt]# cd elasticsearch
[root@elk_master elasticsearch]# ls
bin config lib LICENSE.txt logs modules NOTICE.txt plugins README.textile
[root@elk_master elasticsearch]# cd config/
[root@elk_master config]# vi elasticsearch.yml
# ---------------------------------- Cluster -----------------------------------
cluster.name: log_cluster
#
# ------------------------------------ Node ------------------------------------
#
node.name: elk_master
#
# ----------------------------------- Paths ------------------------------------
#
path.data: /data/data
path.logs: /data/logs
#
network.host: 172.17.0.6
Once done, note that Elasticsearch requires Java in order to work. Below are the steps to install and configure Java on the server.
[root@elk_master config]# wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u131-b11/d54c1d3a095b4ff2b6607d096fa80163/jdk-8u131-linux-x64.rpm"
--2018-06-09 12:57:05-- http://download.oracle.com/otn-pub/java/jdk/8u131-b11/d54c1d3a095b4ff2b6607d096fa80163/jdk-8u131-linux-x64.rpm
Resolving download.oracle.com (download.oracle.com)... 23.49.16.62
Connecting to download.oracle.com (download.oracle.com)|23.49.16.62|:80... connected.
HTTP request sent, awaiting response... 302 Moved Temporarily
Location: https://edelivery.oracle.com/otn-pub/java/jdk/8u131-b11/d54c1d3a095b4ff2b6607d096fa80163/jdk-8u131-linux-x64.rpm [following]
--2018-06-09 12:57:10-- https://edelivery.oracle.com/otn-pub/java/jdk/8u131-b11/d54c1d3a095b4ff2b6607d096fa80163/jdk-8u131-linux-x64.rpm
Resolving edelivery.oracle.com (edelivery.oracle.com)... 104.103.48.174, 2600:1417:58:181::2d3e, 2600:1417:58:188::2d3e
Connecting to edelivery.oracle.com (edelivery.oracle.com)|104.103.48.174|:443... connected.
HTTP request sent, awaiting response... 302 Moved Temporarily
Location: http://download.oracle.com/otn-pub/java/jdk/8u131-b11/d54c1d3a095b4ff2b6607d096fa80163/jdk-8u131-linux-x64.rpm?AuthParam=1528549151_b1fd01d854bc0423600a83c36240028e [following]
--2018-06-09 12:57:11-- http://download.oracle.com/otn-pub/java/jdk/8u131-b11/d54c1d3a095b4ff2b6607d096fa80163/jdk-8u131-linux-x64.rpm?AuthParam=1528549151_b1fd01d854bc0423600a83c36240028e
Connecting to download.oracle.com (download.oracle.com)|23.49.16.62|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 169983496 (162M) [application/x-redhat-package-manager]
Saving to: ‘jdk-8u131-linux-x64.rpm’
100%[==============================================================================>] 169,983,496 2.56MB/s in 64s
2018-06-09 12:58:15 (2.54 MB/s) - ‘jdk-8u131-linux-x64.rpm’ saved [169983496/169983496]
[root@elk_master config]# yum localinstall -y jdk-8u131-linux-x64.rpm
[root@elk_master config]# vi /root/.bash_profile
export JAVA_HOME=/usr/java/jdk1.8.0_131
PATH=$JAVA_HOME/bin:$PATH:$HOME/bin
export PATH
[root@elk_master config]# . /root/.bash_profile
[root@elk_master config]# java -version
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
Done; Elasticsearch is now installed and configured on the server. However, due to its security policy, Elasticsearch refuses to run as the root user, so we will create an additional user to own and run the Elasticsearch service. Below are the steps to create a dedicated user for it:
[root@elk_master config]# useradd -s /bin/bash shahril
[root@elk_master config]# passwd shahril
Changing password for user shahril.
New password:
BAD PASSWORD: The password fails the dictionary check - it is too simplistic/systematic
Retype new password:
passwd: all authentication tokens updated successfully.
[root@elk_master config]# chown -R shahril:shahril /data/
[root@elk_master config]# sysctl -w vm.max_map_count=262144
vm.max_map_count = 262144
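Note that sysctl -w only changes the setting for the running system. To make vm.max_map_count survive a reboot, persist it as well:

[root@elk_master config]# echo 'vm.max_map_count = 262144' >> /etc/sysctl.conf
[root@elk_master config]# sysctl -p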
Once done, log in as that user and you can start the Elasticsearch service.
[root@elk_master config]# su - shahril
Last login: Sat Jun 9 13:03:07 UTC 2018 on pts/1
[shahril@elk_master ~]$
[shahril@elk_master ~]$
[shahril@elk_master ~]$
[shahril@elk_master ~]$ /opt/elasticsearch/bin/elasticsearch &
[1] 7295
[shahril@elk_master ~]$ [2018-06-09T13:06:26,667][INFO ][o.e.n.Node ] [elk_master] initializing ...
[2018-06-09T13:06:26,721][INFO ][o.e.e.NodeEnvironment ] [elk_master] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [394.3gb], net total_space [468.2gb], types [rootfs]
[2018-06-09T13:06:26,722][INFO ][o.e.e.NodeEnvironment ] [elk_master] heap size [990.7mb], compressed ordinary object pointers [true]
[2018-06-09T13:06:26,723][INFO ][o.e.n.Node ] [elk_master] node name [elk_master], node ID [xjNoA9mMSGiXYmFPRNlXBg]
[2018-06-09T13:06:26,723][INFO ][o.e.n.Node ] [elk_master] version[6.2.1], pid[7295], build[7299dc3/2018-02-07T19:34:26.990113Z], OS[Linux/3.10.0-693.17.1.el7.x86_64/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_131/25.131-b11]
[2018-06-09T13:06:26,723][INFO ][o.e.n.Node ] [elk_master] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch.U6ilAwt9, -XX:+HeapDumpOnOutOfMemoryError, -XX:+PrintGCDetails, -XX:+PrintGCDateStamps, -XX:+PrintTenuringDistribution, -XX:+PrintGCApplicationStoppedTime, -Xloggc:logs/gc.log, -XX:+UseGCLogFileRotation, -XX:NumberOfGCLogFiles=32, -XX:GCLogFileSize=64m, -Des.path.home=/opt/elasticsearch, -Des.path.conf=/opt/elasticsearch/config]
[2018-06-09T13:06:27,529][INFO ][o.e.p.PluginsService ] [elk_master] loaded module [aggs-matrix-stats]
[2018-06-09T13:06:27,529][INFO ][o.e.p.PluginsService ] [elk_master] loaded module [analysis-common]
[2018-06-09T13:06:27,529][INFO ][o.e.p.PluginsService ] [elk_master] loaded module [ingest-common]
[2018-06-09T13:06:27,530][INFO ][o.e.p.PluginsService ] [elk_master] loaded module [lang-expression]
[2018-06-09T13:06:27,530][INFO ][o.e.p.PluginsService ] [elk_master] loaded module [lang-mustache]
[2018-06-09T13:06:27,530][INFO ][o.e.p.PluginsService ] [elk_master] loaded module [lang-painless]
[2018-06-09T13:06:27,530][INFO ][o.e.p.PluginsService ] [elk_master] loaded module [mapper-extras]
[2018-06-09T13:06:27,530][INFO ][o.e.p.PluginsService ] [elk_master] loaded module [parent-join]
[2018-06-09T13:06:27,530][INFO ][o.e.p.PluginsService ] [elk_master] loaded module [percolator]
[2018-06-09T13:06:27,531][INFO ][o.e.p.PluginsService ] [elk_master] loaded module [rank-eval]
[2018-06-09T13:06:27,532][INFO ][o.e.p.PluginsService ] [elk_master] loaded module [reindex]
[2018-06-09T13:06:27,532][INFO ][o.e.p.PluginsService ] [elk_master] loaded module [repository-url]
[2018-06-09T13:06:27,533][INFO ][o.e.p.PluginsService ] [elk_master] loaded module [transport-netty4]
[2018-06-09T13:06:27,533][INFO ][o.e.p.PluginsService ] [elk_master] loaded module [tribe]
[2018-06-09T13:06:27,534][INFO ][o.e.p.PluginsService ] [elk_master] no plugins loaded
Excellent; Elasticsearch is now up and running without any issues, and you will notice that additional ports related to the Elasticsearch service have been opened on the server. You can verify the listening ports as below:
[root@elk_master config]# netstat -apn|grep -i :9
tcp 0 0 172.17.0.6:9200 0.0.0.0:* LISTEN 7295/java
tcp 0 0 172.17.0.6:9300 0.0.0.0:* LISTEN 7295/java
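You can also confirm that Elasticsearch responds over HTTP; for example, a quick call to its REST API should report the log_cluster cluster name we configured earlier:

[root@elk_master config]# curl http://172.17.0.6:9200/_cluster/health?pretty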
Now let's start with the setup and configuration of the Logstash service. Below are the steps needed for the installation process:
[root@elk_master opt]# wget https://artifacts.elastic.co/downloads/logstash/logstash-6.2.1.rpm
--2018-06-09 13:07:51-- https://artifacts.elastic.co/downloads/logstash/logstash-6.2.1.rpm
Resolving artifacts.elastic.co (artifacts.elastic.co)... 107.21.253.15, 23.21.67.46, 107.21.237.188, ...
Connecting to artifacts.elastic.co (artifacts.elastic.co)|107.21.253.15|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 140430729 (134M) [binary/octet-stream]
Saving to: ‘logstash-6.2.1.rpm’
100%[==============================================================================>] 140,430,729 2.19MB/s in 60s
2018-06-09 13:08:57 (2.24 MB/s) - ‘logstash-6.2.1.rpm’ saved [140430729/140430729]
[root@elk_master opt]# yum localinstall -y logstash-6.2.1.rpm
Loaded plugins: fastestmirror, ovl
Examining logstash-6.2.1.rpm: 1:logstash-6.2.1-1.noarch
Marking logstash-6.2.1.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package logstash.noarch 1:6.2.1-1 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
========================================================================================================================
Package Arch Version Repository Size
========================================================================================================================
Installing:
logstash noarch 1:6.2.1-1 /logstash-6.2.1 224 M
Transaction Summary
========================================================================================================================
Install 1 Package
Total size: 224 M
Installed size: 224 M
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : 1:logstash-6.2.1-1.noarch 1/1
Using provided startup.options file: /etc/logstash/startup.options
Successfully created system startup script for Logstash
Verifying : 1:logstash-6.2.1-1.noarch 1/1
Installed:
logstash.noarch 1:6.2.1-1
Complete!
Once the installation is complete, apply the needed configuration as below:
[root@elk_master opt]# vi /etc/logstash/conf.d/02-mysql-log.conf
input {
  beats {
    port => 5044
    host => "0.0.0.0"
  }
}
filter {
  if [fileset][module] == "mysql" {
    if [fileset][name] == "error" {
      grok {
        match => { "message" => ["%{LOCALDATETIME:[mysql][error][timestamp]} (\[%{DATA:[mysql][error][level]}\] )?%{GREEDYDATA:[mysql][error][message]}",
          "%{TIMESTAMP_ISO8601:[mysql][error][timestamp]} %{NUMBER:[mysql][error][thread_id]} \[%{DATA:[mysql][error][level]}\] %{GREEDYDATA:[mysql][error][message1]}",
          "%{GREEDYDATA:[mysql][error][message2]}"] }
        pattern_definitions => {
          "LOCALDATETIME" => "[0-9]+ %{TIME}"
        }
        remove_field => "message"
      }
      mutate {
        rename => { "[mysql][error][message1]" => "[mysql][error][message]" }
      }
      mutate {
        rename => { "[mysql][error][message2]" => "[mysql][error][message]" }
      }
      date {
        match => [ "[mysql][error][timestamp]", "ISO8601", "YYMMdd H:m:s" ]
        remove_field => "[mysql][error][time]"
      }
    }
    else if [fileset][name] == "slowlog" {
      grok {
        match => { "message" => ["^# User@Host: %{USER:[mysql][slowlog][user]}(\[[^\]]+\])? @ %{HOSTNAME:[mysql][slowlog][host]} \[(%{IP:[mysql][slowlog][ip]})?\](\s*Id:\s* %{NUMBER:[mysql][slowlog][id]})?\n# Query_time: %{NUMBER:[mysql][slowlog][query_time][sec]}\s* Lock_time: %{NUMBER:[mysql][slowlog][lock_time][sec]}\s* Rows_sent: %{NUMBER:[mysql][slowlog][rows_sent]}\s* Rows_examined: %{NUMBER:[mysql][slowlog][rows_examined]}\n(SET timestamp=%{NUMBER:[mysql][slowlog][timestamp]};\n)?%{GREEDYMULTILINE:[mysql][slowlog][query]}"] }
        pattern_definitions => {
          "GREEDYMULTILINE" => "(.|\n)*"
        }
        remove_field => "message"
      }
      date {
        match => [ "[mysql][slowlog][timestamp]", "UNIX" ]
      }
      mutate {
        gsub => ["[mysql][slowlog][query]", "\n# Time: [0-9]+ [0-9][0-9]:[0-9][0-9]:[0-9][0-9](\\.[0-9]+)?$", ""]
      }
    }
  }
}
output {
  elasticsearch {
    hosts => "172.17.0.6"
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}
Notice that in the configuration above we set the input to be fetched from the clients' Filebeat service on port 5044. We also set up appropriate filters so that Logstash structures the raw data fetched from each client. This is necessary to make the data easier to view and analyze on the Elasticsearch side.
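Before starting the service, you can optionally ask Logstash to validate the configuration syntax and exit, which catches typos early:

[root@elk_master opt]# /usr/share/logstash/bin/logstash --path.settings /etc/logstash -f /etc/logstash/conf.d/02-mysql-log.conf --config.test_and_exit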
Next, we need to install the beats input plugin for Logstash, so that Logstash is able to receive the raw data shipped from the clients.
[root@elk_master opt]# /usr/share/logstash/bin/logstash-plugin install logstash-input-beats
Validating logstash-input-beats
Installing logstash-input-beats
Installation successful
Since the installation and configuration needed for Logstash are done, we can start the service directly. Below are the steps:
[root@elk_master opt]# service logstash restart
Redirecting to /bin/systemctl restart logstash.service
[root@elk_master opt]# service logstash status
Redirecting to /bin/systemctl status logstash.service
● logstash.service - logstash
Loaded: loaded (/etc/systemd/system/logstash.service; disabled; vendor preset: disabled)
Active: active (running) since Sat 2018-06-09 13:17:40 UTC; 5s ago
Main PID: 8106 (java)
CGroup: /docker/2daaf895e0efa67ef70dbabd87b56d53815e94ff70432f692385f527e2dc488b/system.slice/logstash.service
└─8106 /bin/java -Xms256m -Xmx1g -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFracti...
Jun 09 13:17:40 elk_master systemd[1]: Started logstash.
Jun 09 13:17:40 elk_master systemd[1]: Starting logstash...
[root@elk_master opt]#
[root@elk_master opt]# tail -f /var/log/logstash/logstash-plain.log
[2018-06-09T13:17:59,496][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://172.17.0.6:9200/]}}
[2018-06-09T13:17:59,498][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://172.17.0.6:9200/, :path=>"/"}
[2018-06-09T13:17:59,976][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://172.17.0.6:9200/"}
[2018-06-09T13:18:00,083][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>nil}
[2018-06-09T13:18:00,083][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-06-09T13:18:00,095][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//172.17.0.6"]}
[2018-06-09T13:18:00,599][INFO ][logstash.inputs.beats ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2018-06-09T13:18:00,652][INFO ][logstash.pipeline ] Pipeline started succesfully {:pipeline_id=>"main", :thread=>"#<Thread:0x70567cf0@/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:246 sleep>"}
[2018-06-09T13:18:00,663][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2018-06-09T13:18:00,660][INFO ][logstash.agent ] Pipelines running {:count=>1, :pipelines=>["main"]}
[2018-06-09T13:18:24,060][INFO ][o.e.c.m.MetaDataCreateIndexService] [elk_master] [filebeat-6.2.1-2018.06.04] creating index, cause [auto(bulk api)], templates [], shards [5]/[1], mappings []
[2018-06-09T13:18:24,189][INFO ][o.e.c.m.MetaDataCreateIndexService] [elk_master] [filebeat-6.2.1-2018.06.09] creating index, cause [auto(bulk api)], templates [], shards [5]/[1], mappings []
[2018-06-09T13:18:24,288][INFO ][o.e.c.m.MetaDataCreateIndexService] [elk_master] [filebeat-6.2.1-2018.06.08] creating index, cause [auto(bulk api)], templates [], shards [5]/[1], mappings []
[2018-06-09T13:18:24,591][INFO ][o.e.c.m.MetaDataMappingService] [elk_master] [filebeat-6.2.1-2018.06.04/yPD91Ww0SD2ei4YI-FgLgQ] create_mapping [doc]
[2018-06-09T13:18:24,781][INFO ][o.e.c.m.MetaDataMappingService] [elk_master] [filebeat-6.2.1-2018.06.08/Qnv0gplFTgW0z1C6haZESg] create_mapping [doc]
[2018-06-09T13:18:24,882][INFO ][o.e.c.m.MetaDataMappingService] [elk_master] [filebeat-6.2.1-2018.06.09/dihjTJw3SjGncXYln2MXbA] create_mapping [doc]
[2018-06-09T13:18:24,996][INFO ][o.e.c.m.MetaDataMappingService] [elk_master] [filebeat-6.2.1-2018.06.09/dihjTJw3SjGncXYln2MXbA] update_mapping [doc]
As you can see, the Logstash service has now started successfully and begun collecting data from each client. As an alternative, you can use the curl command against Elasticsearch to confirm that the Filebeat indices are being created and updated. Below is an example:
[root@elk_master opt]# curl -kL http://172.17.0.6:9200/_cat/indices?v
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
yellow open filebeat-6.2.1-2018.06.09 dihjTJw3SjGncXYln2MXbA 5 1 6 0 35.2kb 35.2kb
yellow open filebeat-6.2.1-2018.06.04 yPD91Ww0SD2ei4YI-FgLgQ 5 1 350 0 186.4kb 186.4kb
yellow open filebeat-6.2.1-2018.06.08 Qnv0gplFTgW0z1C6haZESg 5 1 97 0 89.4kb 89.4kb
Last but not least, we need to set up and configure the Kibana service to complete our centralized management server. Just a footnote: since Kibana is only used to simplify viewing the collected and analyzed data through visualization, it is not as demanding a package as Elasticsearch or Logstash if you are setting up the server on a smaller box. To proceed, below are the steps for installation and configuration:
[root@elk_master opt]# wget https://artifacts.elastic.co/downloads/kibana/kibana-6.2.1-linux-x86_64.tar.gz
--2018-06-09 13:21:41-- https://artifacts.elastic.co/downloads/kibana/kibana-6.2.1-linux-x86_64.tar.gz
Resolving artifacts.elastic.co (artifacts.elastic.co)... 107.21.237.188, 107.21.237.95, 107.21.253.15, ...
Connecting to artifacts.elastic.co (artifacts.elastic.co)|107.21.237.188|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 83465500 (80M) [binary/octet-stream]
Saving to: ‘kibana-6.2.1-linux-x86_64.tar.gz’
100%[==============================================================================>] 83,465,500 2.76MB/s in 41s
2018-06-09 13:22:28 (1.94 MB/s) - ‘kibana-6.2.1-linux-x86_64.tar.gz’ saved [83465500/83465500]
[root@elk_master opt]# tar -zxvf kibana-6.2.1-linux-x86_64.tar.gz
[root@elk_master opt]# ln -s /opt/kibana-6.2.1-linux-x86_64 /opt/kibana
[root@elk_master opt]# vi kibana/config/kibana.yml
server.host: "172.17.0.6"
server.port: 5601
elasticsearch.url: "http://172.17.0.6:9200"
Notice that above I have pointed Kibana at the Elasticsearch service in its configuration and assigned the port that the Kibana service will use once started. Now that everything is in place, we can start the final service. Below are the steps:
[root@elk_master opt]# /opt/kibana/bin/kibana --version
6.2.1
[root@elk_master opt]# /opt/kibana/bin/kibana &
[1] 8640
[root@elk_master opt]# log [13:26:20.034] [info][status][plugin:kibana@6.2.1] Status changed from uninitialized to green - Ready
log [13:26:20.073] [info][status][plugin:elasticsearch@6.2.1] Status changed from uninitialized to yellow - Waiting for Elasticsearch
log [13:26:20.193] [info][status][plugin:timelion@6.2.1] Status changed from uninitialized to green - Ready
log [13:26:20.200] [info][status][plugin:console@6.2.1] Status changed from uninitialized to green - Ready
log [13:26:20.212] [info][status][plugin:metrics@6.2.1] Status changed from uninitialized to green - Ready
log [13:26:20.233] [info][listening] Server running at http://172.17.0.6:5601
log [13:26:20.276] [info][status][plugin:elasticsearch@6.2.1] Status changed from yellow to green - Ready
[root@elk_master opt]# netstat -apn|grep -i :5601
tcp 0 0 172.17.0.6:5601 0.0.0.0:* LISTEN 8640/node
Great, everything is now up and running, as shown by the netstat command above. Now let's view Kibana's dashboard and configure it. Browse to the URL http://172.17.0.6:5601/app and you will see the dashboard shown below.
Next, on the dashboard, click the Management tab and then define an index pattern; in our case, the index pattern is defined based on the name of the generated log indices. Enter the information, then click 'Next step'.
After that, enter the field that will be used as the time series. Once done, click 'Create index pattern'. Below is an example:
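For this setup, based on the index names we saw earlier in the _cat/indices output, the values would look roughly like this:

Index pattern:          filebeat-*
Time Filter field name: @timestamp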
Great, the management server is now ready to use. Let's proceed to testing its availability.
4. Testing Phase
Before we start the test, let's set an expectation for the final result. For this test, we will execute database queries on a client (a MySQL server) that exceed the slow query time threshold. Once executed, our centralized management server should automatically show the resulting slow query information as a graph in the Kibana dashboard. Now that everything is clear, let's start the test; below are the steps:
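One note on the client command below: --login-path refers to MySQL credentials saved beforehand with mysql_config_editor. If you have not created one, a sketch of how it could be set up (the login-path name, host, and user are assumptions):

[root@mysql_db1 ~]# mysql_config_editor set --login-path=root --host=localhost --user=root --password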
Log in to any of the client servers and execute slow SQL queries like the ones below:
[root@mysql_db1 ~]# mysql --login-path=root -P 3306 --prompt='TEST>'
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 193
Server version: 5.7.21-log MySQL Community Server (GPL)
Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
TEST>select sleep(5);
+----------+
| sleep(5) |
+----------+
| 0 |
+----------+
1 row in set (5.01 sec)
TEST>select sleep(6);
+----------+
| sleep(6) |
+----------+
| 0 |
+----------+
1 row in set (6.00 sec)
TEST>select sleep(10) 'run for 10 seconds';
+--------------------+
| run for 10 seconds |
+--------------------+
| 0 |
+--------------------+
1 row in set (10.00 sec)
TEST>select sleep(3) 'test again';
+------------+
| test again |
+------------+
| 0 |
+------------+
1 row in set (3.00 sec)
TEST>exit
Bye
As shown above, we managed to generate some slow queries, which are automatically written to each client's slow query log. Now let's go to the dashboard and see whether the log data has been successfully harvested by the centralized server and turned into visualized graphs.
Great; as shown above, the list of log entries has been successfully harvested and can be viewed through the Kibana dashboard. You can use the tabs on the left side to filter which column types to show or hide; below is an example:
Using the text field at the top of the dashboard, you can type a search query to show only certain information or the subset of the data you need.
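For example, filters like the following (the field names come from the Filebeat mysql module; the 5-second threshold is just an illustration) narrow the view down to slow query entries:

fileset.name:slowlog
fileset.name:slowlog AND mysql.slowlog.query_time.sec:>5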
Very nice; as expected, the slow query SQL we generated earlier on one of our client servers automatically shows up in our Kibana dashboard.