Implementing ELK on CentOS 6

1 Introduction

This article walks through deploying ELK (Elasticsearch 1.4 + Logstash 1.4.2 + Kibana 3) on CentOS 6.5 with SSL certificate authentication, and shows how to combine these components to collect logs. This chapter focuses on collecting SYSTEM (syslog) logs.

The main use case for centralized log collection is inspecting and analyzing system and application logs, either ad hoc or permanently, from a single window. This is a great convenience, and it also gives users some flexibility in how the data is presented.

2 Goals

This article shows how to collect syslog from multiple target hosts with Logstash, and how to analyze and display the collected logs with Kibana.

2.1 The four components

Logstash: the server-side component that receives and processes incoming logs

Elasticsearch: stores all the collected logs

Kibana: a web interface for searching and visualizing logs

Logstash Forwarder: the client-side agent that sends logs to the logstash server over the lumberjack network protocol

We will install the first three components on a single server, which will act as our Logstash server. Logstash Forwarder will be installed on every server whose logs we want to collect; all logs will be shipped to the Logstash server.

2.2 Basic concepts

NRT: Near Real Time (NRT) analysis, with latency of about one second;

Cluster: a cluster is uniquely identified by its name, which defaults to elasticsearch;

Node: part of a cluster; stores data. A single cluster can have as many nodes as you want. If no other Elasticsearch nodes are running on your network, starting a single node will by default form a new single-node cluster named elasticsearch.

Index: index names must be lowercase; within a single cluster you can define as many indexes as you want.

Type: within one index you can define one or more types.

Document: the smallest unit that can be indexed. For example, one document might describe a single user, another a single product, and another a single order. Documents are expressed as JSON. An index/type can store many documents.

Shards & replicas: an index may store a billion documents taking up 1TB of disk space; that may not fit on a single node, and a single node alone may be too slow to serve search requests. To solve this, Elasticsearch can subdivide an index into multiple pieces called shards. When creating an index, we can simply define the number of shards we want.

Sharding exists for two primary reasons:

- It lets you horizontally split/scale your content volume

- It lets you distribute and parallelize operations across shards. By default, each index in Elasticsearch is allocated 5 primary shards and 1 replica, which means that if you have at least two nodes in your cluster, your index will have 5 primary shards and another 5 replica shards (1 complete replica), for a total of 10 shards per index.
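The shard arithmetic above can be sketched in a few lines; this is just illustrative Python, not an Elasticsearch API call:

```python
def total_shards(primaries: int, replicas: int) -> int:
    """Total shard copies for one index: each replica set duplicates every primary."""
    return primaries * (1 + replicas)

# Elasticsearch 1.x defaults: 5 primary shards, 1 replica -> 10 shards per index.
print(total_shards(5, 1))  # 10
```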

3 Deployment environment

3.1 Environment preparation

ELK hardware test environment:

HostName | InnerIp      | OuterIp | HardWare      | System                     | Version                                           | Role
AppS2    | 192.168.1.38 | \       | RAM:1GB CPU:1 | CentOS release 6.5 (Final) | ElasticSearch 1.4.2, LogStash 1.4.2, Kibana 3.0.1 | ELK Server
AppS3    | 192.168.1.39 | \       |               |                            | 0.3.1                                             | Logstash Forwarder
Manager  | 192.168.1.40 | \       |               |                            | ansible 1.8.2                                     | AnsibleManager

3.2 Server configuration

3.2.1 Install Java 7

ELK runs on a Java 7 environment; install it with the following command:

# yum install java-1.7.0-openjdk -y

3.2.2 Install ElasticSearch

//import the Elasticsearch GPG key into rpm

# rpm --import http://packages.elasticsearch.org/GPG-KEY-elasticsearch

//create new yum repository file for ElasticSearch

# vi /etc/yum.repos.d/elasticsearch.repo

//add the following to elasticsearch.repo

[elasticsearch-1.4]
name=Elasticsearch repository for 1.4.x packages
baseurl=http://packages.elasticsearch.org/elasticsearch/1.4/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1

//install elasticsearch

# yum install elasticsearch-1.4.1 -y
//edit /etc/elasticsearch/elasticsearch.yml
script.disable_dynamic: true  //add this line
network.host: localhost  //uncomment; prevents outsiders from using the HTTP API to read data from, or even shut down, the Elasticsearch cluster
discovery.zen.ping.multicast.enabled: false //uncomment; disables multicast discovery

3.2.3 Start Elasticsearch

# service elasticsearch restart

//add to startup services

# /sbin/chkconfig --add elasticsearch

3.2.4 Install Kibana

# cd /data/software; curl -O https://download.elasticsearch.org/kibana/kibana/kibana-3.0.1.tar.gz
# tar -xvf kibana-3.0.1.tar.gz
# vim kibana-3.0.1/config.js  //change the Elasticsearch port from 9200 to 80

elasticsearch: "http://"+window.location.hostname+":80",

//create the kibana directory under nginx

# mkdir -p /usr/share/nginx/kibana3
# cp -R kibana-3.0.1/* /usr/share/nginx/kibana3/

3.2.5 Install Logstash

Logstash can be installed via yum:

# vim /etc/yum.repos.d/logstash.repo

//add the following configuration

[logstash-1.4]
name=logstash repository for 1.4.x packages
baseurl=http://packages.elasticsearch.org/logstash/1.4/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1

Install:

# yum -y install logstash-1.4.2

3.2.6 Install Nginx

# yum install nginx

//Kibana uses Elasticsearch's port 9200 by default, but users could then access Elasticsearch directly, so we serve it through the web server's port 80 instead of exposing port 9200. Kibana also provides an nginx configuration file you can download and use directly.

# curl -OL https://gist.githubusercontent.com/thisismitch/2205786838a6a5d61f55/raw/f91e06198a7c455925f6e3099e3ea7c186d0b263/nginx.conf

//the nginx.conf configuration

# cat nginx.conf
#
# Nginx proxy for Elasticsearch + Kibana
#
# In this setup, we are password protecting the saving of dashboards. You may
# wish to extend the password protection to all paths.
#
# Even though these paths are being called as the result of an ajax request, the
# browser will prompt for a username/password on the first request
#
# If you use this, you'll want to point config.js at http://FQDN:80/ instead of
# http://FQDN:9200
#
server {
  listen                *:80 ;
 
  server_name           kibana2.ihuilian.com.;
  access_log              /var/log/nginx/kibana2.access.log;
 
  location / {
    root    /usr/share/nginx/kibana3;
    index    index.html  index.htm;
  }
 
  location ~ ^/_aliases$ {
    proxy_pass http://127.0.0.1:9200;
    proxy_read_timeout 90;
  }
  location ~ ^/.*/_aliases$ {
    proxy_pass http://127.0.0.1:9200;
    proxy_read_timeout 90;
  }
  location ~ ^/_nodes$ {
    proxy_pass http://127.0.0.1:9200;
    proxy_read_timeout 90;
  }
  location ~ ^/.*/_search$ {
    proxy_pass http://127.0.0.1:9200;
    proxy_read_timeout 90;
  }
  location ~ ^/.*/_mapping {
    proxy_pass http://127.0.0.1:9200;
    proxy_read_timeout 90;
  }
 
  # Password protected end points
  location ~ ^/kibana-int/dashboard/.*$ {
    proxy_pass http://127.0.0.1:9200;
    proxy_read_timeout 90;
    limit_except GET {
      proxy_pass http://127.0.0.1:9200;
      auth_basic "Restricted";
      auth_basic_user_file   /etc/nginx/conf.d/kibana2.htpasswd;
    }
  }
  location ~ ^/kibana-int/temp.*$ {
    proxy_pass http://127.0.0.1:9200;
    proxy_read_timeout 90;
    limit_except GET {
      proxy_pass http://127.0.0.1:9200;
      auth_basic "Restricted";
      auth_basic_user_file   /etc/nginx/conf.d/kibana2.htpasswd;
    }
  }
}

//after saving and exiting

# cp nginx.conf /etc/nginx/conf.d/default.conf

//install httpd-tools (htpasswd) to generate a username/password pair:

# yum install httpd-tools-2.2.15 -y

//generate the username and password

# htpasswd -c /etc/nginx/conf.d/kibana2.htpasswd user

//start Nginx

# service nginx restart

//enable at boot

# chkconfig nginx on

3.2.7 SSL certificates

As mentioned above, for security we access Elasticsearch through the web tier; SSL certificate authentication further improves access security.

# vim /etc/pki/tls/openssl.cnf

//add the following under the [v3_ca] section

subjectAltName = IP:192.168.1.38

Generate the SSL certificate and key:

# cd /etc/pki/tls
# openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt

Generating a 2048 bit RSA private key
..........................................+++
.....................+++
writing new private key to 'private/logstash-forwarder.key'
-----

3.2.8 Configure Logstash

Logstash configuration files live in /etc/logstash/conf.d and use a JSON-like syntax. A configuration has three main sections: inputs, filters, and outputs.

First create the input file 01-lumberjack-input.conf, which uses the lumberjack input protocol spoken by Logstash Forwarder.

The input configuration:

# vim /etc/logstash/conf.d/01-lumberjack-input.conf

input {
  lumberjack {      # collect logs over the lumberjack protocol
    port => 5000    # listen on port 5000
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

The filter configuration:

# vim /etc/logstash/conf.d/10-syslog.conf

# This filter matches logs tagged syslog and uses grok to parse them into structured, queryable fields.
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
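To see roughly what the grok pattern above extracts from a syslog line, here is an approximate Python equivalent; the regex and the sample log line are illustrations, not the actual grok library:

```python
import re

# Approximate Python equivalent of the grok syslog pattern above (illustration only).
SYSLOG_RE = re.compile(
    r"(?P<syslog_timestamp>\w{3}\s+\d{1,2} \d{2}:\d{2}:\d{2}) "
    r"(?P<syslog_hostname>\S+) "
    r"(?P<syslog_program>[^\[:]+)(?:\[(?P<syslog_pid>\d+)\])?: "
    r"(?P<syslog_message>.*)"
)

line = "Dec 23 14:30:01 AppS3 sshd[1234]: Failed password for root from 192.168.1.99"
m = SYSLOG_RE.match(line)
print(m.group("syslog_program"), m.group("syslog_pid"))  # sshd 1234
```

The date filter then parses the extracted syslog_timestamp ("MMM  d HH:mm:ss" or "MMM dd HH:mm:ss") into the event's @timestamp.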

The output configuration:

# vim /etc/logstash/conf.d/30-lumberjack-output.conf

# This output stores logs in Elasticsearch. With this setup Logstash will also accept logs that do not match any filter rule; those logs simply will not be structured.
output {
  elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}

Start Logstash:

# service logstash restart

3.3 Client configuration

3.3.1 Install Logstash Forwarder

//send the server's SSL certificate to each shipping server

Download from the official site: https://www.elastic.co/downloads/logstash

logstash-forwarder-0.4.0-1.x86_64.rpm

//install with the following command

# rpm -ihv logstash-forwarder-0.4.0-1.x86_64.rpm

//add the Logstash Forwarder init script

# cd /etc/init.d/; sudo curl -o logstash-forwarder http://logstashbook.com/code/4/logstash_forwarder_redhat_init
# chmod +x logstash-forwarder

//the init script depends on the config file /etc/sysconfig/logstash-forwarder

# curl -o /etc/sysconfig/logstash-forwarder http://logstashbook.com/code/4/logstash_forwarder_redhat_sysconfig

//edit and save

# vim /etc/sysconfig/logstash-forwarder

//copy the SSL certificate into the target directory

# cp /tmp/logstash-forwarder.crt /etc/pki/tls/certs/

3.3.2 Configure Logstash Forwarder

//edit and save

//the shipper will connect to the logstash server on port 5000

# vim /etc/logstash-forwarder
{
    "network": {
      "servers": [ "192.168.1.38:5000" ],
      "timeout": 15,
      "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
    },
    "files": [
      {
        "paths": [
          "/var/log/messages",
          "/var/log/secure"
        ],
        "fields": { "type": "syslog" }
      }
    ]
}
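/etc/logstash-forwarder must be strict JSON, and a stray comma or smart quote will stop the forwarder. A quick sanity check is to parse the file before restarting the service; a minimal sketch, with the config embedded as a string (in practice you would json.load the real file):

```python
import json

# The forwarder config embedded here for illustration; in practice use:
#   config = json.load(open("/etc/logstash-forwarder"))
config_text = """
{
  "network": {
    "servers": [ "192.168.1.38:5000" ],
    "timeout": 15,
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [ "/var/log/messages", "/var/log/secure" ],
      "fields": { "type": "syslog" }
    }
  ]
}
"""
config = json.loads(config_text)  # raises ValueError on malformed JSON
print(config["network"]["servers"])  # ['192.168.1.38:5000']
```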

//start logstash-forwarder

# service logstash-forwarder start

//add to startup services

# chkconfig --add logstash-forwarder

//repeat this configuration on every other server whose logs you want to collect

3.4 Connecting to Kibana

//With everything above configured, we can collect all the logs we want; Kibana provides a friendly web interface for working with them

//In a browser, enter kibana2.ihuilian.com (use the hostname you configured) or the IP of the logstash server. The first thing you see is the Kibana welcome page.

//Click Logstash dashboard to open the preconfigured dashboard. You should see something like a histogram of log events plus the log messages themselves (if you do not, one of the four components is misconfigured; please check)

[Screenshot: the Kibana Logstash dashboard]

//Try the following exercises:

- Search for "root" to see if anyone is trying to log into your servers as root

[Screenshot: search results for "root"]

- Search for a particular hostname

[Screenshot: search results for a hostname]

(It seems only whole-term matches are supported.)

- Change the time frame by selecting an area on the histogram or from the menu above

- Click on messages below the histogram to see how the data is being filtered

4 Using Kibana

4.1 Dashboard settings

[Screenshots: dashboard panel settings]


4.2 Auto-refresh

[Screenshot: auto-refresh settings]

In fact, you can add any exported dashboard to that directory and access it as http://YOUR-HOST-HERE/index.html#dashboard/file/YOUR-DASHBOARD.json. Neat trick, eh?

http://kibana.ihuilian.com/#/dashboard/file/default.json

5 Q&A

Log collection is slow

No matching rule was found for the file

5.1 Adding a new shipper failed; its logs never showed up

a) Checked the logs: nothing abnormal

b) Verified the SSL certificate: OK

c) # service logstash-forwarder restart returned OK (the restart had actually failed but still reported success; I missed the problem by trusting a third-party init script. Trust the system's own primitive commands first; third-party scripts often have problems of one degree or another)

d) Restarted logstash, elasticsearch, kibana and nginx on the server; the host still could not be found

e) Redeployed the shipper environment from scratch, verifying each step in detail

f) Found the bug in the logstash-forwarder init script; after fixing it, new hosts were added normally

Large logs accumulate gradually, 100 entries at a time

[Screenshots: logs arriving in batches of 100]

6 Monitoring nginx logs

//define the nginx log format

log_format logstash '$http_host $remote_addr [$time_local] "$request" $status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" $request_time $upstream_response_time';

access_log /var/log/nginx/AppM.access.log logstash;
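To sanity-check the format, here is a rough Python regex that parses one line in the logstash log_format defined above; the regex and the sample line are illustrative, not part of the original setup:

```python
import re

# Illustrative parser for the "logstash" nginx log format defined above.
NGINX_RE = re.compile(
    r'(?P<http_host>\S+) (?P<remote_addr>\S+) \[(?P<time_local>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<body_bytes_sent>\d+) '
    r'"(?P<http_referer>[^"]*)" "(?P<http_user_agent>[^"]*)" '
    r'(?P<request_time>\S+) (?P<upstream_response_time>\S+)'
)

line = ('kibana2.ihuilian.com 192.168.1.39 [23/Dec/2015:14:30:01 +0800] '
        '"GET /index.html HTTP/1.1" 200 612 "-" "curl/7.19.7" 0.003 0.002')
m = NGINX_RE.match(line)
print(m.group("status"), m.group("request_time"))  # 200 0.003
```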

//update logstash-forwarder

# vim /etc/logstash-forwarder

{
  "network": {
    "servers": [ "192.168.1.38:5000" ],
    "timeout": 15,
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [
        "/var/log/messages*",
        "/var/log/secure*"
      ],
      "fields": { "type": "syslog" }
    },{
      "paths": [
        "/var/log/nginx/AppM.access.log*"
      ],
      "fields": { "type": "nginx-access" }
    }
  ]
}

Restart logstash-forwarder for the change to take effect.

7 References:

https://www.digitalocean.com/community/tutorials/how-to-use-logstash-and-kibana-to-centralize-logs-on-centos-6

http://www.wklken.me/posts/2015/04/26/elk-for-nginx-log.html

http://www.cnblogs.com/yjf512/p/4199105.html

http://www.tuicool.com/articles/UnUzimJ

http://www.learnes.net/getting_started/README.html

http://bigbo.github.io/pages/2015/02/28/elasticsearch_hadoop/

https://github.com/lmenezes/elasticsearch-kopf

http://logstash.es/

https://github.com/chenryn/kibana-guide-cn/blob/master/v4/dashboard.md

http://kibana.logstash.es/content/


Original article by kang. If you repost it, please credit the source: http://www.178linux.com/79151

