
Install Elasticsearch

(Reference: https://www.elastic.co/guide/en/elasticsearch/reference/current/rpm.html#install-rpm)
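Following the referenced guide, first import the Elastic GPG signing key:

sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch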

Create a file named elasticsearch.repo under /etc/yum.repos.d/

with the following content:

[elasticsearch-6.x]
name=Elasticsearch repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

Then install Elasticsearch:

sudo yum install elasticsearch

Enable Elasticsearch to start automatically at boot:

sudo /bin/systemctl daemon-reload 
sudo /bin/systemctl enable elasticsearch.service

Start Elasticsearch:

sudo systemctl start elasticsearch.service

Stop Elasticsearch:

sudo systemctl stop elasticsearch.service
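
Either way, the current state of the service can be checked with systemctl:

sudo systemctl status elasticsearch.service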

Edit the configuration file (vim /etc/elasticsearch/elasticsearch.yml):

# By default Elasticsearch only accepts connections from localhost. For public access, change this to 0.0.0.0, or list the specific IPs that are allowed to connect
network.host: 0.0.0.0

# Add these two lines, otherwise Elasticsearch will fail to start
xpack.ml.enabled: false
bootstrap.system_call_filter: false

# Required from Elasticsearch 7.x onward
cluster.initial_master_nodes: node-1
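
After changing elasticsearch.yml, restart the service and make sure it still responds; a quick check, assuming the default HTTP port 9200:

sudo systemctl restart elasticsearch.service
curl http://localhost:9200

The curl call should return a small JSON document with the cluster name and version.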

 

Install Kibana

sudo yum install kibana

The Kibana configuration file is at /etc/kibana/kibana.yml

Set the following in the configuration file:

elasticsearch.url: http://127.0.0.1:9200
server.host: "0"
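
After saving the config, enable and start Kibana through systemd, the same way as Elasticsearch:

sudo systemctl enable kibana.service
sudo systemctl start kibana.service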

 

Kibana listens on port 5601 by default, so once it is running you can open http://{your_kibana_host}:5601/ in a browser to reach Kibana.

Changing server.port to 80 runs into an error (binding to ports below 1024 requires root), so the port can be forwarded with iptables instead:

iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 5601
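
To confirm the redirect rule was added, list the NAT table (standard iptables options):

sudo iptables -t nat -L PREROUTING -n --line-numbers

Note that rules added this way do not survive a reboot unless they are saved (for example with the iptables-services package and service iptables save on CentOS 7).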

Reference: https://hk.saowen.com/a/74ccd1a263a239bd422f3c11990ade95f2f92ff70886d42b60a0efae6093dec5

 

Install Logstash on the machine that produces the logs

sudo yum install logstash

Logstash must be installed on the server where the logs originate.

If you want to collect nginx logs, Logstash has to be installed on the nginx server; if you want to collect MySQL logs, it has to be installed on the MySQL server.
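
Note that Logstash runs on the JVM, so a Java runtime has to be available on that server; on CentOS 7 this would typically be OpenJDK 8 from the base repositories (an assumption about your environment):

java -version
sudo yum install java-1.8.0-openjdk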

The conf path can be set in /etc/logstash/pipelines.yml; the default is /etc/logstash/conf.d/*.conf.
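
For reference, the default entry in pipelines.yml looks like this (the stock file shipped with the RPM):

- pipeline.id: main
  path.config: "/etc/logstash/conf.d/*.conf"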

These conf files define the log inputs (an input can be a log file, a MySQL DB, Redis, TCP requests, and so on), outputs, filters, etc.

An example:

input {
    file {
        path => ["/var/log/nginx/*.access.log","/var/log/nginx/*.error.log","/var/log/mysqld.log"]
        type => "system"
        # Read the log files from the beginning; without this setting, only lines written after Logstash starts are collected
        start_position => "beginning"
        # How often (in seconds) the files are checked for new content
        stat_interval => 10
    }
}

filter {
    # nginx access log
    if [path] =~ "access.log" {
        mutate { add_field => { "log_type" => "nginx_access_log" } }
        grok {
            match => { "message" => "%{IPORHOST:remote_addr} - %{USERNAME:remote_user} \[%{HTTPDATE:time_local}\] \"%{DATA:request}\" %{INT:status} %{NUMBER:bytes_sent} \"%{DATA:http_referer}\" \"%{DATA:http_user_agent}\"" }
        }
    }
    # nginx error log
    if [path] =~ "error.log" {
        mutate { add_field => { "log_type" => "nginx_error_log" } }
        grok {
            match => { "message" => "(?<timestamp>%{YEAR}[./]%{MONTHNUM}[./]%{MONTHDAY} %{TIME}) \[%{LOGLEVEL:severity}\] %{POSINT:pid}#%{NUMBER:threadid}\: \*%{NUMBER:connectionid} %{GREEDYDATA:errormessage}, client: %{IP:client}, server: %{GREEDYDATA:server}, request: %{GREEDYDATA:request}" }
        }
    }
    # mysql log
    else if [path] =~ "mysql" {
        mutate { add_field => { "log_type" => "mysql_log" } }
        grok {
            match => {
                # mysql log pattern
                "message" => "%{TIMESTAMP_ISO8601:log_time}\ %{NUMBER:log_number}\ \[%{DATA:log_level}\]\ %{GREEDYDATA:log_message}"
            }
        }
    }
}

## Add your filters / logstash plugins configuration here

output {
    elasticsearch {
        hosts => "your_es_hostname:9200"
    }
}

Parsing patterns are defined inside filter, and custom fields can be added there as well.

In this example I use grok to parse the log content; Logstash has many other useful plugins, so check the official documentation.
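
Before loading a new conf, it can be syntax-checked first; a minimal check, assuming the default RPM install path under /usr/share/logstash:

/usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit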

By default Logstash does not automatically reload your conf files, which means that after a conf file changes you have to restart Logstash for the change to take effect.

This can be avoided by starting Logstash with the following flag:

logstash --config.reload.automatic &

After running this command, any changes to your conf files take effect right away.
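
When Logstash runs as a systemd service instead of from the command line, the same behaviour can be enabled permanently in /etc/logstash/logstash.yml:

config.reload.automatic: true
config.reload.interval: 3s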
