ELK Deployment

Elasticsearch 7.14.1 172.16.10.60
Kibana 172.16.10.20
Logstash 172.16.10.10
Filebeat 172.16.10.60
Nginx 172.16.10.60

1. Elasticsearch Deployment

1. Download the elasticsearch rpm package
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.14.1-x86_64.rpm
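Then install the downloaded package locally, for example:
rpm -ivh elasticsearch-7.14.1-x86_64.rpm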
2. Edit the elasticsearch.yml configuration file

Create the elasticsearch data directory and grant ownership to the elasticsearch user:
mkdir /data/elasticsearch -p
chown -R elasticsearch.elasticsearch /data/elasticsearch
vim /etc/elasticsearch/elasticsearch.yml

cluster.name: wgj-application
node.name: node-wgj-1
path.data: /data/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 172.16.10.60
http.port: 9200
cluster.initial_master_nodes: ["node-wgj-1"]

3. System tuning and JVM parameter tuning

Elasticsearch uses a mix of NioFS and MMapFS for its various files. Make sure the maximum map count is configured so that enough virtual memory is available for mmapped files. This can be set temporarily:

sysctl -w vm.max_map_count=655300
or, to persist it, vim /etc/sysctl.conf
vm.max_map_count = 655300
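To apply the /etc/sysctl.conf entry and confirm the value, for example:
sysctl -p
sysctl vm.max_map_count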

Under the default Linux configuration the maximum number of open files is 1024 (check with `ulimit -n`). Elasticsearch opens many small files while building indices, so this limit is easily exceeded. Raise the file descriptor limit for the elasticsearch user as follows:

vim /etc/security/limits.conf
elasticsearch - nofile 65535
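Once the node is running (see step 5), the effective limit can be confirmed through the nodes stats API, for example:
curl 'http://172.16.10.60:9200/_nodes/stats/process?pretty' | grep max_file_descriptors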

Disable swap. Linux's swap mechanism moves the contents of some memory pages to an area on disk, called the swap space, when memory runs low. If swap is left enabled, Elasticsearch's heap may be paged out to disk, garbage collection can slow from milliseconds to minutes, and nodes may respond slowly or even drop out of the cluster. There are three ways to avoid swapping (disabling swap, tuning swappiness, or enabling memory locking); here swap is simply disabled:

swapoff -a
To disable swap permanently, comment out the swap entry in /etc/fstab.
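For example, the swap entry can be commented out and the result verified (a rough sketch; adjust the sed expression to your fstab layout):
sed -i '/ swap / s/^/#/' /etc/fstab
free -h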

4. Enable on boot and start elasticsearch

systemctl enable elasticsearch
systemctl start elasticsearch

5. Check that it is running

netstat -tnlp |grep 9200
curl http://172.16.10.60:9200
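Cluster health can also be checked through the _cat API, for example:
curl http://172.16.10.60:9200/_cat/health?v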

2. Kibana Deployment

1. Download the Kibana rpm package
wget https://artifacts.elastic.co/downloads/kibana/kibana-7.14.1-x86_64.rpm
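Then install the downloaded package locally, for example:
rpm -ivh kibana-7.14.1-x86_64.rpm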
2. Edit the kibana.yml configuration file
vim /etc/kibana/kibana.yml

server.port: 5601
server.host: 172.16.10.20
server.name: "wgj-kibana"
elasticsearch.hosts: ["http://172.16.10.60:9200"]
logging.dest: /var/log/kibana/kibana.log
logging.verbose: false
i18n.locale: "zh-CN" ## use the Chinese-language UI
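Because logging.dest points into /var/log/kibana, that directory may need to be created and handed to the kibana user first (a sketch, mirroring the data-directory steps above):
mkdir -p /var/log/kibana
chown -R kibana.kibana /var/log/kibana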
3. Enable on boot and start kibana
systemctl enable kibana.service
systemctl start kibana.service
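Kibana listens on port 5601 on 172.16.10.20; it can be checked the same way as elasticsearch, for example:
netstat -tnlp |grep 5601
curl -I http://172.16.10.20:5601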

3. Logstash Deployment

1. Add the logstash repository
cat /etc/yum.repos.d/logstash.repo
[logstash-7.x]
name=Elastic repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
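The Elastic signing key can also be imported up front (optional, since gpgkey is already set in the repo file), for example:
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch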
2. Install logstash
yum install -y logstash ## installs the latest version in the repo by default
3. Edit the logstash.yml configuration file

Create the logstash data directory and grant ownership to the logstash user:
mkdir -p /data/logstash
chown -R logstash.logstash /data/logstash

vim /etc/logstash/logstash.yml

path.data: /data/logstash
pipeline.workers: 2
pipeline.batch.size: 125
path.config: /etc/logstash/conf.d/*.conf
path.settings: /etc/logstash
config.test_and_exit: false
http.host: 172.16.10.10
http.port: 9600-9700
log.level: debug
path.logs: /var/log/logstash

4. Create /etc/logstash/pattern.d and add a grok pattern for nginx that matches the nginx log format

cat /etc/nginx/nginx.conf

log_format test '$remote_addr - $remote_user [$time_local] '
                '"$request" $status $body_bytes_sent '
                '"$http_referer" "$http_user_agent"';


cat /etc/logstash/pattern.d/mypattern

NGINXCOMBINEDLOG %{IPORHOST:remote_addr} - %{DATA:remote_user} \[%{HTTPDATE:time_local}\] \"%{WORD:request_method} %{DATA:uri} HTTP/%{NUMBER:http_version}\" %{NUMBER:response_code} %{NUMBER:body_sent_bytes} \"%{DATA:http_referrer}\" \"%{DATA:http_user_agent}\"
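For reference, this pattern targets combined-style access log lines of the following shape (an illustrative, made-up entry):
172.16.10.1 - - [21/Sep/2021:10:00:00 +0800] "GET /index.html HTTP/1.1" 200 612 "-" "curl/7.29.0"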

5. Create the logstash pipeline configuration for nginx
input {
  beats {
    host => "0.0.0.0"
    port => 5044
  }
}

filter {
  grok {
    patterns_dir => [ "/etc/logstash/pattern.d" ]
    match => { "message" => "%{NGINXCOMBINEDLOG}" }
  }
  date {
    match => [ "time_local", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
  geoip {
    source => "remote_addr"
  }
}

output {
  elasticsearch {
    hosts => ["172.16.10.60:9200"]
    index => "wgj_index_pattern-%{+YYYY.MM.dd}"
  }
}
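Assuming the pipeline above is saved as /etc/logstash/conf.d/nginx.conf (a hypothetical filename; any *.conf under path.config works), it can be syntax-checked before starting the service, for example:
/usr/share/logstash/bin/logstash --path.settings /etc/logstash -f /etc/logstash/conf.d/nginx.conf --config.test_and_exit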

6. Start logstash
systemctl start logstash
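Logstash can also be enabled on boot, and the beats input port plus the API port from logstash.yml can be checked once it is up, for example:
systemctl enable logstash
netstat -tnlp |grep -E '5044|9600'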

4. Filebeat Deployment

1. Add the filebeat repository
cat /etc/yum.repos.d/filebeat.repo
[elastic-7.x]
name=Elastic repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
2. Install filebeat
yum install -y filebeat
3. Edit the filebeat.yml configuration file
vim /etc/filebeat/filebeat.yml

logging.level: info
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 7

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  fields:
    env: test
    nginx_log_type: access
- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log
  fields:
    env: test
    nginx_log_type: error

setup.template.settings:
  index.number_of_shards: 1

output.logstash:
  ## this port matches the beats input port in the logstash nginx pipeline
  hosts: ["172.16.10.10:5044"]

4. Start filebeat
systemctl start filebeat
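Once filebeat is shipping data, an index matching the pattern defined in the logstash output should appear in elasticsearch; one way to check, for example:
curl 'http://172.16.10.60:9200/_cat/indices?v' | grep wgj_index_pattern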