
Complete ELK Stack Notes

Published by 蔺要红 on 03-28 · Category: ELK


ELK Stack --- the technologies combined

Official website:
https://www.elastic.co
 

Elasticsearch is an open-source distributed search server built on Lucene. Its features include: distributed operation, near-zero configuration, automatic discovery, automatic index sharding, index replication, a RESTful interface, multiple data sources, and automatic search load balancing. It provides a distributed, multi-tenant full-text search engine behind a RESTful web API. Elasticsearch is written in Java, released as open source under the Apache License, and is the second most popular enterprise search engine. Designed for the cloud, it achieves real-time search while being stable, reliable, fast, and easy to install and use. In Elasticsearch, all nodes hold data of equal standing.

 

Logstash is a fully open-source tool that collects, filters, and analyzes your logs, supports a large number of input methods, and stores the data for later use (such as searching). On that note, Logstash ships with a web interface for searching and browsing all collected logs. It typically runs in a client/server architecture: the client is installed on every host whose logs need collecting, while the server filters and transforms the logs received from all nodes and forwards them to Elasticsearch.

 

Kibana is a browser-based front end for Elasticsearch, likewise open source and free. It provides a friendly web interface for the logs shipped by Logstash and stored in Elasticsearch, helping you aggregate, analyze, and search important log data.


Summary: Logstash (collection) --- Elasticsearch (storage + search) --- Kibana (web UI)
 

 

Standardization:
Log directory: /data/logs/, subdivided into access_log, error_log, runtime_log
Format: write logs as JSON (easy to do in PHP, Python, etc.) and agree on naming rules
Rotation: by day or by hour via crontab; runtime_log is rotated hourly by the developers
Push all raw log files to the NAS (storage) with rsync; never use NFS; delete copies older than 7-15 days
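The rotation and NAS-push policy above can be sketched as a daily cron job. Everything here is an assumption to adapt: `LOG_DIR`, the `nas.example.com` target, and the 7-day local retention.

```shell
#!/bin/sh
# Hypothetical daily rotation: rename *.log with yesterday's date, push the
# results to the NAS over rsync (never NFS, per the note above), and purge
# local copies older than 7 days. LOG_DIR and NAS_HOST are placeholders.
LOG_DIR=${LOG_DIR:-/data/logs}
NAS_HOST=${NAS_HOST:-nas.example.com}
DATE=$(date -d yesterday +%F 2>/dev/null || date +%F)

rotate() {
    for f in "$LOG_DIR"/*.log; do
        [ -e "$f" ] || continue                 # skip when the glob matches nothing
        mv "$f" "${f%.log}-$DATE.log"           # access.log -> access-2019-03-27.log
    done
}

push_and_purge() {
    rsync -az "$LOG_DIR"/ "$NAS_HOST":/backup/logs/    # push raw logs to the NAS
    find "$LOG_DIR" -name '*-*.log' -mtime +7 -delete  # keep 7 days locally
}
```

Wire `rotate` and `push_and_purge` into crontab (e.g. `5 0 * * *`); runtime_log would use an hourly entry instead, as noted above.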

Installation:
# Ports:
ES: 9200/9300
ZK: 2181/22181/32181
kafka: 9092
logstash: 960X (X increases by 1 for every additional instance started)

# Lab environment
39.99.236.143     172.26.81.178
39.99.238.68      172.26.81.177
39.99.232.245     172.26.81.179

## Set the hostname (one command per node)
hostnamectl set-hostname elk  && bash
hostnamectl set-hostname elk2   && bash
hostnamectl set-hostname elk3  && bash


# Add to /etc/hosts on all three nodes
172.26.81.178   elk
172.26.81.177   elk2
172.26.81.179   elk3

# Install the JDK
yum install java -y
[ ! -d /data/server ] && mkdir -p /data/server
cd /data/server

# Download the packages
wget https://mirrors.huaweicloud.com/elasticsearch/7.3.0/elasticsearch-7.3.0-linux-x86_64.tar.gz
wget https://mirrors.huaweicloud.com/kibana/7.3.0/kibana-7.3.0-linux-x86_64.tar.gz
wget http://mirrors.tuna.tsinghua.edu.cn/apache/kafka/2.3.0/kafka_2.12-2.3.0.tgz
wget https://mirrors.huaweicloud.com/logstash/7.3.0/logstash-7.3.0.tar.gz
wget https://mirrors.linyaohong.com/scripts/install.sh

sh install.sh

tar xf elasticsearch-7.3.0-linux-x86_64.tar.gz 
tar xf kafka_2.12-2.3.0.tgz 
tar xf kibana-7.3.0-linux-x86_64.tar.gz 
tar xf logstash-7.3.0.tar.gz 

# Create an unprivileged user and hand the install tree to it
useradd elk
chown -R elk:elk  *
su elk

##############################   es start    ###############################

cd /data/server/elasticsearch-7.3.0

cat >restart.sh <<EOF
#!/bin/sh
echo \`ps -ef|grep elasticsearch|grep -v grep|awk '{print \$2}'\`
echo "start kill elasticsearch"
kill -9 \`ps -ef|grep elasticsearch|grep -v grep|awk '{print \$2}'\`
cd /data/server/elasticsearch-7.3.0
/data/server/elasticsearch-7.3.0/bin/elasticsearch -d
echo starting
echo \`ps -ef|grep elasticsearch|grep -v grep|awk '{print \$2}'\`
EOF

chmod +x restart.sh
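The `kill -9 $(ps -ef|grep elasticsearch|...)` line in restart.sh can match unrelated processes whose command line merely mentions "elasticsearch". A slightly safer sketch (not from the original notes) matches the JVM entry class and sends SIGTERM, which lets Elasticsearch shut down cleanly instead of being killed hard:

```shell
#!/bin/sh
# Sketch only: stop Elasticsearch by matching its JVM main class. The [o]
# bracket trick keeps pkill from matching this script's own command line.
stop_es() {
    if pkill -f '[o]rg.elasticsearch.bootstrap.Elasticsearch' 2>/dev/null; then
        echo "stopped"
    else
        echo "not running"
    fi
}
```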

# Back up the original config
mv config/elasticsearch.yml  config/elasticsearch.yml.bak

vi config/elasticsearch.yml
# Replace the config with:
##########
cluster.name: klzz
node.name: node-1            # node-2 / node-3 on the other hosts
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["elk", "elk2", "elk3"]
cluster.initial_master_nodes: ["node-1", "node-2", "node-3"]
xpack.security.enabled: false
transport.tcp.port: 9300

http.cors.enabled: true
http.cors.allow-origin: "*"

./restart.sh
# Startup check
# If ports 9200 and 9300 are listening, the node started correctly
# List all indices
curl 'elk:9200/_cat/indices?v'
# Check cluster health
curl 'elk:9200/_cluster/health?pretty'
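The health endpoint returns JSON; a small sketch of extracting the `status` field without extra tooling (the sample response below is illustrative, not captured from this cluster):

```shell
#!/bin/sh
# parse_status pulls the "status" value (green/yellow/red) out of a
# _cluster/health response; sed keeps it dependency-free.
parse_status() {
    sed -n 's/.*"status" *: *"\([a-z]*\)".*/\1/p'
}

# Illustrative response; against a live cluster you would pipe curl instead:
#   curl -s 'elk:9200/_cluster/health' | parse_status
sample='{"cluster_name":"klzz","status":"green","number_of_nodes":3}'
echo "$sample" | parse_status    # -> green
```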

# Ad-hoc Logstash test (foreground run with a separate data path)
./bin/logstash -f /tmp/test.conf  --path.data=/tmp/test >> /tmp/test_log.log
##############################   es end    ###############################

#############################   kibana start  ###############################

cd   /data/server/kibana-7.3.0-linux-x86_64
cat >restart.sh <<EOF
#!/bin/sh
echo \`ps -ef|grep kibana-7.3.0|grep -v grep|awk '{print \$2}'\`
echo "start kill kibana-7.3.0"
kill -9 \`ps -ef|grep kibana-7.3.0|grep -v grep|awk '{print \$2}'\`
cd /data/server/kibana-7.3.0-linux-x86_64
nohup /data/server/kibana-7.3.0-linux-x86_64/bin/kibana &
echo starting
echo \`ps -ef|grep kibana|grep -v grep|awk '{print \$2}'\`
EOF

cd /data/server/kibana-7.3.0-linux-x86_64/config
cp kibana.yml kibana.yml.bak

vi kibana.yml

# Replace the config with the following (configuring Kibana on the master alone is enough; requests are load-balanced across the listed ES hosts)
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://elk:9200", "http://elk2:9200", "http://elk3:9200"]
elasticsearch.requestTimeout: 30000000
elasticsearch.shardTimeout: 30000000
elasticsearch.startupTimeout: 5000000
i18n.locale: "zh-CN"

chmod +x restart.sh

#############################   kibana end  ###############################


#############################   kafka + zk install  ###############################
cd  /data/server/kafka_2.12-2.3.0/config/
# Edit the config files
vim server.properties
vim zookeeper.properties
# broker.id is 1, 2, 3 on the three nodes respectively
broker.id=1
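For reference, minimal sketches of the two files being edited. Hostnames and ports come from these notes; the `log.dirs` path and the timing values are assumptions to adapt.

```
# --- server.properties (node 1; broker.id differs per node) ---
broker.id=1
listeners=PLAINTEXT://0.0.0.0:9092
log.dirs=/data/server/kafka_2.12-2.3.0/kafka-logs
zookeeper.connect=elk:2181,elk2:2181,elk3:2181

# --- zookeeper.properties (identical on all three nodes) ---
dataDir=/data/server/kafka_2.12-2.3.0/zookeeper
clientPort=2181
initLimit=10
syncLimit=5
server.1=elk:2888:3888
server.2=elk2:2888:3888
server.3=elk3:2888:3888
```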
# On each of the three servers, create a zookeeper data directory under /data/server/kafka_2.12-2.3.0

mkdir zookeeper
echo  1 > zookeeper/myid    # on elk
echo  2 > zookeeper/myid    # on elk2
echo  3 > zookeeper/myid    # on elk3
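Since broker.id and myid must agree with the node number, a small sketch that derives the id from the hostnames set earlier (it assumes the elk / elk2 / elk3 naming scheme):

```shell
#!/bin/sh
# node_id maps hostname elk -> 1, elk2 -> 2, elk3 -> 3.
node_id() {
    h=${1:-$(hostname)}
    n=${h#elk}          # strip the "elk" prefix; empty string on the first node
    echo "${n:-1}"
}

# e.g. write this machine's zk id:
#   node_id > zookeeper/myid
```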

# Start zookeeper (directly, or via the zkrestart.sh script below)
# /data/server/kafka_2.12-2.3.0/bin/zookeeper-server-start.sh -daemon /data/server/kafka_2.12-2.3.0/config/zookeeper.properties
cat >zkrestart.sh <<EOF
#!/bin/sh
echo \`ps -ef|grep zookeeper|grep -v grep|awk '{print \$2}'\`
echo "start kill zookeeper"
kill -9 \`ps -ef|grep zookeeper|grep -v grep|awk '{print \$2}'\`
/data/server/kafka_2.12-2.3.0/bin/zookeeper-server-start.sh -daemon /data/server/kafka_2.12-2.3.0/config/zookeeper.properties
echo starting
echo \`ps -ef|grep zookeeper|grep -v grep|awk '{print \$2}'\`
EOF
chmod +x zkrestart.sh

# Restart kafka by hand: find its PID first
netstat -lntp|grep 9092|awk   '{print $7}'|awk -F "/" '{print $1}'
# 9092
# 32181
# 2181   
# Start kafka
/data/server/kafka_2.12-2.3.0/bin/kafka-server-start.sh -daemon /data/server/kafka_2.12-2.3.0/config/server.properties

# List kafka topics (any ZK node works)
/data/server/kafka_2.12-2.3.0/bin/kafka-topics.sh --zookeeper elk:2181 --list
/data/server/kafka_2.12-2.3.0/bin/kafka-topics.sh --zookeeper elk2:2181 --list
/data/server/kafka_2.12-2.3.0/bin/kafka-topics.sh --zookeeper elk3:2181 --list
# Delete a topic
/data/server/kafka_2.12-2.3.0/bin/kafka-topics.sh --delete --zookeeper localhost:2181 --topic prod_php_www.mixins
# List topics
/data/server/kafka_2.12-2.3.0/bin/kafka-topics.sh --zookeeper localhost:2181 --list
# Console producer
/data/server/kafka_2.12-2.3.0/bin/kafka-console-producer.sh --topic test_kfk --broker-list 100.121.193.81:9092
# Console consumer
/data/server/kafka_2.12-2.3.0/bin/kafka-console-consumer.sh \
--bootstrap-server 10.104.159.56:9092,10.104.144.186:9092,10.104.37.244:9092 \
--from-beginning \
--topic test_kfk

#############################   kafka zk end  ###############################


#############################   logstash   ###############################

cd /data/server/logstash-7.3.0/config/

vim  input-kafka.conf
# Start logstash:
nohup /data/server/logstash-7.3.0/bin/logstash -f /data/server/logstash-7.3.0/config/input-kafka.conf --config.reload.automatic &

cat >restart.sh <<EOF
#!/bin/sh
echo \`ps -ef|grep logstash|grep -v grep|awk '{print \$2}'\`
echo "start kill logstash"
kill -9 \`ps -ef|grep logstash|grep -v grep|awk '{print \$2}'\`
nohup /data/server/logstash-7.3.0/bin/logstash -f /data/server/logstash-7.3.0/config/input-kafka.conf --config.reload.automatic &
echo starting
echo \`ps -ef|grep logstash|grep -v grep|awk '{print \$2}'\`
EOF

chmod +x restart.sh

#############################  filebeat client setup  ###############################
[ ! -d /data/server ] && mkdir /data/server
cd /data/server/
wget https://mirrors.huaweicloud.com/filebeat/7.3.0/filebeat-7.3.0-linux-x86_64.tar.gz
tar zxf filebeat-7.3.0-linux-x86_64.tar.gz
cd /data/server/filebeat-7.3.0-linux-x86_64/

cat >restart.sh <<EOF
#!/bin/sh
echo \`ps -ef|grep filebeat|grep -v grep|awk '{print \$2}'\`
echo "start kill filebeat"
kill -9 \`ps -ef|grep filebeat|grep -v grep|awk '{print \$2}'\`
cd /data/server/filebeat-7.3.0-linux-x86_64/
rm -f  nohup.out
./filebeat &
echo starting
echo \`ps -ef|grep filebeat|grep -v grep|awk '{print \$2}'\`
EOF

chmod +x restart.sh
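The notes don't include the filebeat.yml that `./filebeat` reads, so here is a hedged sketch that ships JSON logs to one of the Kafka topics the logstash config later in these notes subscribes to. The paths and the `fields.type` value are assumptions; `fields.type` is what feeds the `%{[fields][type]}` index pattern in the logstash output.

```yaml
# filebeat.yml sketch -- NOT from the original notes; adapt to your hosts.
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /data/logs/access_log/*.log
    json.keys_under_root: true      # logs are written as JSON per the standard above
    fields:
      type: prod_nginx_access       # hypothetical type name

output.kafka:
  hosts: ["elk:9092", "elk2:9092", "elk3:9092"]
  topic: "vipthink"                 # one of the topics in input-kafka.conf
  required_acks: 1
```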


#############################  filebeat  end   ###############################


# Add x-pack user management to kibana

# 1. Replace the cracked x-pack jar; double-check the file size (2.55 MB)
cd /data/server/
wget https://mirrors.linyaohong.com/klzz/x-pack-core.tar.gz
tar zxvf x-pack-core.tar.gz
mv x-pack-core-7.3.0.jar  elasticsearch-7.3.0/modules/x-pack-core/
chown -R elk:elk *
su elk
# 2. Check that elasticsearch.yml still has xpack.security.enabled: false
# 3. Restart es
# 4. Upload the license in Kibana: Settings -> License Management

# Generate certificates (run on a single master node)
cd /data/server/elasticsearch-7.3.0
# press Enter twice at the prompts
./bin/elasticsearch-certutil ca
# press Enter three times at the prompts
./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12

# Move the generated files (elastic-certificates.p12, elastic-stack-ca.p12) into config/, owned by the elk user
chown elk.elk elastic-*
mv elastic-* config/

# Then copy elastic-certificates.p12 to every other master node and set the same ownership.

# Update elasticsearch.yml:

#xpack.security.enabled: false
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12

# Generate passwords for the built-in users
./bin/elasticsearch-setup-passwords  auto

[elk@elk3 elasticsearch-7.3.0]$ ./bin/elasticsearch-setup-passwords  auto
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,logstash_system,beats_system,remote_monitoring_user.
The passwords will be randomly generated and printed to the console.
Please confirm that you would like to continue [y/N]y


Changed password for user apm_system
PASSWORD apm_system = IjWby5yx3uQ5FJMJiySD

Changed password for user kibana
PASSWORD kibana = xsbAt2qlDytNRi3MY5T1

Changed password for user logstash_system
PASSWORD logstash_system = AiI2763iM2NQkWjaBL2K

Changed password for user beats_system
PASSWORD beats_system = oBOhzrSVMX5jvCdOO5rL

Changed password for user remote_monitoring_user
PASSWORD remote_monitoring_user = KnFnmhENWwhOlkJbpPCZ

Changed password for user elastic
PASSWORD elastic = e9o4qkpVdWR7K2FYz2qh


# Edit the kibana config: add the kibana user's credentials
elasticsearch.username: "kibana"
elasticsearch.password: "xsbAt2qlDytNRi3MY5T1"

# After restarting kibana, log in with the elastic user:
Changed password for user elastic
PASSWORD elastic = e9o4qkpVdWR7K2FYz2qh

# The superuser role is the administrator role; add a dedicated ops account. Do not change the default passwords -- logstash needs them

# Add a read-only developer role; its index privileges:
read  index  create  write

# plus view permissions on the Kibana workspaces


# Edit the logstash config file

#
input {

  kafka {
    bootstrap_servers => "elk:9092,elk2:9092,elk3:9092"    # Kafka broker addresses
    topics => ["vipthink","ccvipthink","jyvipthink"]
    codec => "json"                            # must match the shipper's output codec
    consumer_threads => 1                      # number of consumer threads
    decorate_events => true                    # attach kafka metadata (message size, source topic, consumer group) to each event
    type => "logstash_mixins"
  }

}
output {

  if [type] == "logstash_mixins" {
      elasticsearch {
          action   => "index"
          hosts    => ["elk:9200","elk2:9200","elk3:9200"]                           # The operation on ES
          index    => "%{[fields][type]}-%{+YYYY.MM.dd}"
          user     => "elastic"
          password => "e9o4qkpVdWR7K2FYz2qh"
      }
  }
}
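The `index => "%{[fields][type]}-%{+YYYY.MM.dd}"` pattern above produces one index per log type per day, with the date taken from the event timestamp in UTC. A sketch of the resulting names (the `nginx_access` type is illustrative):

```shell
#!/bin/sh
# Mirror logstash's %{[fields][type]}-%{+YYYY.MM.dd} naming in plain shell.
index_name() {
    printf '%s-%s\n' "$1" "$(date -u +%Y.%m.%d)"
}

index_name nginx_access    # e.g. nginx_access-2019.03.28 on that date
```

Daily indices make retention simple: dropping old data is a whole-index delete rather than a per-document purge.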


# nginx JSON access-log format (implements the JSON standardization above):

  log_format json '{ "@timestamp": "$time_iso8601", '
                         '"time": "$time_iso8601", '
                         '"remote_addr": "$remote_addr", '
                         '"remote_user": "$remote_user", '
                         '"http_host": "$host", '
                         '"body_bytes_sent": $body_bytes_sent, '
                         '"request_time": $request_time, '
                         '"status": $status, '
                         '"host": "$host", '
                         '"request": "$request", '
                         '"request_method": "$request_method", '
                         '"uri": "$uri", '
                         '"http_referrer": "$http_referer", '
                         '"http_x_forwarded_for": "$http_x_forwarded_for", '
                         '"http_user_agent": "$http_user_agent", '
                         '"up_addr": "$upstream_addr", '
                         '"up_status": "$upstream_status", '
                         '"up_rept": "$upstream_response_time" '
                    '}';
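A quick way to confirm the format above emits valid JSON is to feed one line through a parser. The sample line is hand-built for illustration, and `python3` is assumed to be available:

```shell
#!/bin/sh
# Sanity-check one hand-written sample of the log_format above.
line='{ "@timestamp": "2019-03-28T10:00:00+08:00", "remote_addr": "1.2.3.4", "body_bytes_sent": 512, "request_time": 0.012, "status": 200, "request": "GET / HTTP/1.1" }'

if echo "$line" | python3 -m json.tool >/dev/null 2>&1; then
    echo "valid JSON"
else
    echo "invalid JSON"
fi
```

Note that the unquoted fields ($body_bytes_sent, $request_time, $status) must always be numeric; the upstream fields can be "-" when no upstream was involved, which is presumably why the format quotes them.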