Day 03 - Logstash


Setting up the Logstash environment

1. Download Logstash
[10:17:18 root@elk2:/usr/local]#wget http://192.168.13.253/Resources/ElasticStack/softwares/logstash-7.17.23-amd64.deb

2. Install Logstash
[10:17:35 root@elk2:/usr/local]#dpkg -i logstash-7.17.23-amd64.deb

3. Create a symlink so the logstash program can be found via the PATH variable
[10:19:11 root@elk2:~]#ln -svf /usr/share/logstash/bin/logstash /usr/local/bin

4. View the help output
# Logstash starts a JVM, so it is very slow to start.
[10:20:39 root@elk2:~]#logstash -h
Using JAVA_HOME defined java: /usr/share/elasticsearch/jdk
WARNING: Using JAVA_HOME while Logstash distribution comes with a bundled JDK.
DEPRECATION: The use of JAVA_HOME is now deprecated and will be removed starting from 8.0. Please configure LS_JAVA_HOME instead.
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Usage:
    bin/logstash [OPTIONS]

Options:
    -n, --node.name NAME          Specify the name of this logstash instance; defaults
                                  to the current hostname. (default: "elk2")
    -f, --path.config CONFIG_PATH Load the logstash config from a specific file or
                                  directory. A directory's files are concatenated in
                                  lexicographical order; globs are also supported.
    -e, --config.string CONFIG_STRING
                                  Use the given string as the configuration data. Same
                                  syntax as the config file. If no input is specified,
                                  "input { stdin { type => stdin } }" is used; if no
                                  output, "output { stdout { codec => rubydebug } }".
    --pipeline.id ID              Sets the ID of the pipeline. (default: "main")
    -w, --pipeline.workers COUNT  Sets the number of pipeline workers to run. (default: 2)
    --pipeline.ordered ORDERED    Preserve events order; `auto`, `true` or `false`. Only
                                  effective with a single pipeline worker. (default: "auto")
    -b, --pipeline.batch.size SIZE
                                  Size of batches the pipeline is to work in. (default: 125)
    -u, --pipeline.batch.delay DELAY_IN_MS
                                  How long to wait while polling for the next event when
                                  creating pipeline batches. (default: 50)
    --path.data PATH              Writable directory where Logstash stores its data.
                                  (default: "/usr/share/logstash/data")
    -l, --path.logs PATH          Write logstash internal logs to the given file; without
                                  this flag, logs go to standard output.
                                  (default: "/usr/share/logstash/logs")
    --log.level LEVEL             fatal, error, warn, info, debug or trace. (default: "info")
    -V, --version                 Emit the version of logstash and its friends, then exit.
    -t, --config.test_and_exit    Check configuration for valid syntax and then exit.
    -r, --config.reload.automatic Monitor configuration changes and reload whenever it is
                                  changed. NOTE: use SIGHUP to manually reload the config.
    --api.enabled ENABLED         Can be used to disable the Web API, which is enabled by
                                  default. (default: true)
    --api.http.host HTTP_HOST     Web API binding host (default: "127.0.0.1")
    --api.http.port HTTP_PORT     Web API http port (default: 9600..9700)
    --path.settings SETTINGS_DIR  Directory containing logstash.yml; can also be set via
                                  the LS_SETTINGS_DIR environment variable.
                                  (default: "/usr/share/logstash/config")
    -h, --help                    print help
    ...                           (remaining options elided)

Logstash architecture

# Logstash server:
A host on which the Logstash environment is deployed.

# Logstash instance:
A running Logstash process.

# Pipeline:
A single Logstash instance can run multiple pipelines; if none is defined, there is a single default "main" pipeline. A pipeline is composed of plugins:
- input: required; where the data comes from.
- filter: optional; how the data is processed before it flows on to the output plugins.
- output: required; where the data goes.
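The input → filter → output flow can be sketched conceptually in plain Python (an illustration of the data flow only, not how Logstash is actually implemented):

```python
# Conceptual sketch of a Logstash pipeline: events flow input -> filter -> output.
def input_stdin(lines):
    """Input plugin: turn raw lines into events (dicts)."""
    for line in lines:
        yield {"message": line, "type": "stdin"}

def filter_upcase(events):
    """Filter plugin: optional transformation applied to every event."""
    for event in events:
        event["message"] = event["message"].upper()
        yield event

def output_collect(events):
    """Output plugin: deliver each processed event somewhere (here: a list)."""
    return list(events)

events = output_collect(filter_upcase(input_stdin(["hello", "world"])))
print(events)  # [{'message': 'HELLO', 'type': 'stdin'}, {'message': 'WORLD', 'type': 'stdin'}]
```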

Two ways to start Logstash

`---------------------# Start from the command line---------------------`
[10:21:39 root@elk2:~]#logstash -e "input { stdin { type => stdin } }  output { stdout { codec => rubydebug } }"

# Observe:
[INFO ] 2024-10-29 10:34:13.751 [[main]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"main"}

# Test by typing: 1111111111111111111
{
    "host" => "elk2",
    "type" => "stdin",
    "@timestamp" => 2024-10-29T02:36:28.784Z,
    "message" => "1111111111111111111",
    "@version" => "1"
}

# Stop the service with Ctrl+C

`---------------------# Start from a configuration file---------------------`
# Method 1:
# Note: hot reloading of a modified config requires the -r flag
[10:39:24 root@elk2:/etc/logstash/conf.d]#vim 01-stdin-to-stdout.conf
input {
  stdin { type => stdin }
}
output {
  stdout {
    codec => rubydebug
  }
}

[10:44:11 root@elk2:/etc/logstash/conf.d]#logstash -f 01-stdin-to-stdout.conf

# Observe:
[INFO ] 2024-10-29 10:44:36.810 [[main]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"main"}

# Test by typing: 222222222222222
{
    "@version" => "1",
    "host" => "elk2",
    "@timestamp" => 2024-10-29T02:45:06.936Z,
    "type" => "stdin",
    "message" => "222222222222222"
}

# Method 2:
[10:45:29 root@elk2:/etc/logstash/conf.d]#vim 01-stdin-to-stdout.conf
input {
  stdin { type => stdin }
}
output {
  stdout {
    #codec => rubydebug
    codec => json
  }
}

# Observe:
[INFO ] 2024-10-29 10:46:56.507 [[main]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"main"}

# Test by typing: 333333333333333333333
{"host":"elk2","@version":"1","@timestamp":"2024-10-29T02:49:54.283Z","type":"stdin","message":"333333333333333333333"}

Generating logs with a Python script and collecting them with Logstash

[11:10:43 root@elk2:~]#vim generate_log.py
#!/usr/bin/env python
# -*- coding: UTF-8 -*-
# @author : Jason Yin

import datetime
import random
import logging
import time
import sys

LOG_FORMAT = "%(levelname)s %(asctime)s [com.edu.%(module)s] - %(message)s "
DATE_FORMAT = "%Y-%m-%d %H:%M:%S"

# Basic configuration of the root logging.Logger instance
logging.basicConfig(level=logging.INFO, format=LOG_FORMAT, datefmt=DATE_FORMAT,
                    filename=sys.argv[1], filemode='a')

actions = ["浏览页面", "评论商品", "加入收藏", "加入购物车", "提交订单", "使用优惠券",
           "领取优惠券", "搜索", "查看订单", "付款", "清空购物车"]

while True:
    time.sleep(random.randint(1, 5))
    user_id = random.randint(1, 10000)
    # Round the generated float to 2 decimal places.
    price = round(random.uniform(15000, 30000), 2)
    action = random.choice(actions)
    svip = random.choice([0, 1, 2])
    logging.info("DAU|{0}|{1}|{2}|{3}".format(user_id, action, svip, price))

# Once configured, start it and leave it running; open another window for further commands
[11:14:39 root@elk2:~]#python3 generate_log.py /tmp/apps.log

# Second window
[11:34:46 root@elk2:~]#cat /etc/logstash/conf.d/02-file-to-stdout.conf
input {
  file {
    # Path of the file to collect
    path => "/tmp/apps.log"
    # Where to start reading the file on *first* collection: beginning, or end (the default)
    start_position => "beginning"
  }
}
output {
  stdout {
    codec => rubydebug
  }
}

# Start Logstash in the second window
[11:37:04 root@elk2:~]#logstash -rf /etc/logstash/conf.d/02-file-to-stdout.conf

# In a third window, watch /tmp/apps.log
[11:38:32 root@elk2:~]#tail -f /tmp/apps.log
INFO 2024-10-29 11:38:12 [com.generate_log] - DAU|843|加入购物车|2|28042.12
INFO 2024-10-29 11:38:17 [com.generate_log] - DAU|6363|领取优惠券|0|20939.4
INFO 2024-10-29 11:38:19 [com.generate_log] - DAU|6314|清空购物车|2|18861.92
.........................

Basic usage of Logstash filter plugins

# Plugin overview
mutate
date
grok
useragent
geoip

Case: parsing the custom app log with the mutate plugin

# Note: keep the script in the first window running.
[12:00:23 root@elk2:~]#python3 generate_log.py /tmp/apps.log

1. Write the configuration file
[12:01:28 root@elk2:~]#vim /etc/logstash/conf.d/03-file-mutate-stdout.conf
input {
  file {
    path => "/tmp/apps.log"
    start_position => "beginning"
  }
}
filter {
  mutate {
    # Split the message field on "|"
    split => { "message" => "|" }
    # Add fields from the split pieces
    add_field => {
      "userId" => "%{[message][1]}"
      "action" => "%{[message][2]}"
      "svip"   => "%{[message][3]}"
      "price"  => "%{[message][4]}"
      "other"  => "%{[message][0]}"
    }
    # Remove fields
    remove_field => [ "message", "@version", "host" ]
  }
  mutate {
    # Convert field data types
    convert => {
      "userId" => "integer"
      "price"  => "float"
    }
  }
}
output {
  stdout {
    codec => rubydebug
  }
}

2. Start the Logstash instance
[12:04:31 root@elk2:/etc/logstash/conf.d]#logstash -rf 03-file-mutate-stdout.conf

3. In the second window, verify the output produced by step 2.
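What the two mutate blocks do to each log line can be mimicked in plain Python (a sketch only; the field names follow the config above):

```python
def parse_app_log(message):
    """Mimic the mutate filters: split on "|", map the pieces to fields,
    then convert userId to integer and price to float."""
    parts = message.split("|")       # split => { "message" => "|" }
    return {
        "other":  parts[0],          # "%{[message][0]}": the log prefix up to "DAU"
        "userId": int(parts[1]),     # convert => "integer"
        "action": parts[2],
        "svip":   parts[3],
        "price":  float(parts[4]),   # convert => "float"
    }

line = "INFO 2024-10-29 11:38:12 [com.generate_log] - DAU|843|加入购物车|2|28042.12"
print(parse_app_log(line))
```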

Case: handling date formats with the date plugin

# Note: keep the script in the first window running.
[12:00:23 root@elk2:~]#python3 generate_log.py /tmp/apps.log

1. Write the configuration file
[15:06:53 root@elk2:/etc/logstash/conf.d]#vim 03-file-mutate-stdout.conf
input {
  file {
    path => "/tmp/apps.log"
    start_position => "beginning"
  }
}
filter {
  mutate {
    # Split the message field on "|"
    split => { "message" => "|" }
    # Add fields from the split pieces
    add_field => {
      "userId" => "%{[message][1]}"
      "action" => "%{[message][2]}"
      "svip"   => "%{[message][3]}"
      "price"  => "%{[message][4]}"
      "other"  => "%{[message][0]}"
    }
  }
  mutate {
    # Convert field data types
    convert => {
      "userId" => "integer"
      "price"  => "float"
    }
  }
  mutate {
    split => { "other" => " " }
    add_field => {
      "dt" => "%{[other][1]} %{[other][2]}"
    }
    # Remove fields
    remove_field => [ "message", "@version", "host", "other" ]
  }
  date {
    # e.g. 2024-10-29 14:43:21
    # Syntax reference:
    #   https://www.elastic.co/guide/en/logstash/current/plugins-filters-date.html#plugins-filters-date-match
    match => [ "dt", "yyyy-MM-dd HH:mm:ss" ]
    # Store the converted value in this field; if unset, "@timestamp" is overwritten
    #target => "datetime-li"
    # Time zone setting, reference:
    #  https://joda-time.sourceforge.net/timezones.html
    timezone => "Asia/Shanghai"
  }
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    # Addresses of the ES cluster
    hosts => ["10.0.0.91:9200","10.0.0.92:9200","10.0.0.93:9200"]
    # Index name in the ES cluster
    index => "linux-logstash-%{+yyyy.MM.dd}"
  }
}

2. Start the Logstash instance
[15:13:59 root@elk2:/etc/logstash/conf.d]#sed -i 's#2024#2023#g' /tmp/apps.log
[15:14:02 root@elk2:/etc/logstash/conf.d]#rm -rf /usr/share/logstash/data/
[15:14:14 root@elk2:/etc/logstash/conf.d]#logstash -rf 03-file-mutate-stdout.conf

# Run the generator script again to produce fresh data
[15:17:05 root@elk2:~]#python3 generate_log.py /tmp/apps.log

3. Query the data in Kibana
Index Management --> create the index pattern --> open Discover --> select the index pattern you created.
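The Joda-style pattern "yyyy-MM-dd HH:mm:ss" used by the date filter corresponds to strptime's "%Y-%m-%d %H:%M:%S" in Python; the following sketch shows how the "dt" field would be parsed and normalized to UTC (Asia/Shanghai is a fixed UTC+8 offset here, an assumption that holds for this zone):

```python
from datetime import datetime, timezone, timedelta

# The "dt" field assembled by the mutate filter, e.g. "2024-10-29 14:43:21".
dt = "2024-10-29 14:43:21"

# Joda pattern "yyyy-MM-dd HH:mm:ss" maps to strptime "%Y-%m-%d %H:%M:%S".
local = datetime.strptime(dt, "%Y-%m-%d %H:%M:%S")

# timezone => "Asia/Shanghai" is UTC+8; @timestamp is stored in UTC.
shanghai = timezone(timedelta(hours=8))
utc = local.replace(tzinfo=shanghai).astimezone(timezone.utc)
print(utc.isoformat())  # 2024-10-29T06:43:21+00:00
```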


Analyzing data in Kibana (charts)

Create the aggregated index pattern

Create a UV visualization in the visualization library

ELFK - user behavior statistics

ELFK - platform transaction volume

Create a dashboard (Dashboard)

Integrating Logstash with Filebeat

1. Write the Logstash configuration file
[16:17:14 root@elk2:/etc/logstash/conf.d]#vim 04-beats-grok-stdout.conf
input {
  beats {
    # Port to listen on
    port => 8888
  }
}
filter {
  mutate {
    remove_field => [ "@version", "log", "tags", "agent", "ecs", "input" ]
  }
}
output {
  stdout {
    codec => rubydebug
  }
}

2. Start Logstash
[16:28:59 root@elk2:/etc/logstash/conf.d]# logstash -rf 04-beats-grok-stdout.conf
...
[INFO ] 2024-10-29 16:29:45.880 [[main]-pipeline-manager] beats - Starting input listener {:address=>"0.0.0.0:8888"}
...

3. Write the Filebeat configuration file
[16:29:59 root@elk3:/etc/filebeat]#vim 09-nginx-to-logstash.yaml
filebeat.inputs:
- type: log
  paths:
    - /var/log/nginx/access.log*
# Output to Logstash on port 8888
output.logstash:
  hosts: ["10.0.0.92:8888"]

4. Start Filebeat to ship data to the Logstash node
[16:30:23 root@elk3:/etc/filebeat]#rm -rf /var/lib/filebeat/
[16:30:45 root@elk3:/etc/filebeat]#filebeat -e -c 09-nginx-to-logstash.yaml

Parsing nginx access logs with grok

# First window
[20:24:48 root@elk2:/etc/logstash/conf.d]#logstash -rf 04-beats-grok-stdout.conf

1. Write the Logstash configuration file (second window)
[20:06:53 root@elk2:/etc/logstash/conf.d]#vim 04-beats-grok-stdout.conf
input {
  beats {
    # Port to listen on
    port => 8888
  }
}
filter {
  mutate {
    remove_field => [ "@version", "log", "tags", "agent", "ecs", "input" ]
  }
  # Extract arbitrary text with regular expressions
  grok {
    # Parse the access log in the message field with the built-in pattern HTTPD_COMMONLOG
    match => { "message" => "%{HTTPD_COMMONLOG}" }
  }
}
output {
  stdout {
    codec => rubydebug
  }
}

2. Start Filebeat
[19:06:16 root@elk3:~]#rm -rf /var/lib/filebeat/
[20:11:03 root@elk3:/etc/filebeat/conf.d]#filebeat -e -c 09-nginx-to-logstash.yaml

3. Sample output:
{
    "verb" => "GET",
    "ident" => "-",
    "request" => "/",
    "bytes" => "396",
    "@timestamp" => 2024-10-29T12:52:45.683Z,
    "clientip" => "221.218.209.11",
    "timestamp" => "28/Oct/2024:17:11:50 +0800",
    "host" => {
        "name" => "elk3"
    },
    "auth" => "-",
    "httpversion" => "1.1",
    "response" => "200",
    "message" => "221.218.209.11 - - [28/Oct/2024:17:11:50 +0800] \"GET / HTTP/1.1\" 200 396 \"-\" \"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.0 Safari/605.1.15\""
}
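The built-in HTTPD_COMMONLOG grok pattern expands to a much richer pattern set; for a well-formed log line, the simplified Python regex below captures the same field names (a sketch, not the real grok pattern):

```python
import re

# Simplified equivalent of grok's HTTPD_COMMONLOG:
# clientip ident auth [timestamp] "verb request HTTP/version" response bytes
COMMONLOG = re.compile(
    r'(?P<clientip>\S+) (?P<ident>\S+) (?P<auth>\S+) '
    r'\[(?P<timestamp>[^\]]+)\] '
    r'"(?P<verb>\S+) (?P<request>\S+) HTTP/(?P<httpversion>[^"]+)" '
    r'(?P<response>\d+) (?P<bytes>\S+)'
)

line = '221.218.209.11 - - [28/Oct/2024:17:11:50 +0800] "GET / HTTP/1.1" 200 396'
fields = COMMONLOG.match(line).groupdict()
print(fields["clientip"], fields["verb"], fields["response"])  # 221.218.209.11 GET 200
```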

Resolving nginx client IPs to coordinates with geoip

# First window
[20:36:57 root@elk2:/etc/logstash/conf.d]#logstash -rf 05-beats-geoip-stdout.conf
......
[INFO ] 2024-10-29 20:37:38.375 [[main]<beats] Server - Starting server on port: 8888
.....

1. Write the Logstash configuration file
[20:26:50 root@elk2:/etc/logstash/conf.d]#vim 05-beats-geoip-stdout.conf
input {
  beats {
    port => 8888
  }
}
filter {
  mutate {
    remove_field => [ "@version", "log", "tags", "agent", "ecs", "input" ]
  }
  grok {
    match => { "message" => "%{HTTPD_COMMONLOG}" }
  }
  # Use the given public-IP field to derive the user's coordinates, city, country, etc.
  geoip {
    source => "clientip"
  }
}
output {
  stdout {
    codec => rubydebug
  }
}

2. Start the Filebeat instance
[20:27:37 root@elk3:/etc/filebeat]#rm -rf /var/lib/filebeat/
[20:27:40 root@elk3:/etc/filebeat]#filebeat -e -c 09-nginx-to-logstash.yaml

3. Check the output in the first window:
{
    "@timestamp" => 2024-10-29T12:59:48.839Z,
    "clientip" => "150.88.66.22",
    "response" => "200",
    "bytes" => "396",
    "auth" => "-",
    "verb" => "GET",
    "timestamp" => "28/Oct/2024:17:12:27 +0800",
    "request" => "/iphone",
    "host" => {
        "name" => "elk3"
    },
    "geoip" => {
        "longitude" => 139.6895,
        "country_code2" => "JP",
        "continent_code" => "AS",
        "location" => {
            "lon" => 139.6895,
            "lat" => 35.6897
        },
        "ip" => "150.88.66.22",
        "country_code3" => "JP",
        "country_name" => "Japan",
        "latitude" => 35.6897,
        "timezone" => "Asia/Tokyo"
    },
    "ident" => "-",
    "message" => "150.88.66.22 - - [28/Oct/2024:17:12:27 +0800] \"GET /iphone HTTP/1.1\" 200 396 \"-\" \"Mozilla/5.0 (Linux; Android 8.0.0; SM-G955U Build/R16NW) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Mobile Safari/537.36\"",
    "httpversion" => "1.1"
}
}

Identifying the user's device type with useragent

1. Write the Logstash configuration file
[20:59:29 root@elk2:/etc/logstash/conf.d]#vim 06-beats-useragent-stdout.conf
input {
  beats {
    port => 8888
  }
}
filter {
  mutate {
    remove_field => [ "@version", "log", "tags", "agent", "ecs", "input" ]
  }
  grok {
    match => { "message" => "%{HTTPD_COMMONLOG}" }
  }
  geoip {
    source => "clientip"
  }
  useragent {
    # Field from which to derive the device type
    source => "message"
    # Field in which to store the parsed result
    target => "linux-useragent"
  }
}
output {
  stdout {
    codec => rubydebug
  }
}

[21:03:34 root@elk2:/etc/logstash/conf.d]#logstash -rf 06-beats-useragent-stdout.conf

2. Start the Filebeat instance
[21:04:12 root@elk3:/etc/filebeat]#rm -rf /var/lib/filebeat/
[21:04:14 root@elk3:/etc/filebeat]#filebeat -e -c 09-nginx-to-logstash.yaml

3. Sample output:
{
    "linux-useragent" => {
        "os_version" => "8.0.0",
        "major" => "116",
        "os_major" => "8",
        "patch" => "0",
        "os" => "Android",
        "device" => "Samsung SM-G955U",
        "os_name" => "Android",
        "os_full" => "Android 8.0.0",
        "os_patch" => "0",
        "minor" => "0",
        "version" => "116.0.0.0",
        "os_minor" => "0",
        "name" => "Chrome Mobile"
    },
    "@timestamp" => 2024-10-29T13:04:43.443Z,
    "response" => "200",
    "request" => "/iphone",
    "auth" => "-",
    "bytes" => "396",
    "clientip" => "150.88.66.22",
    "ident" => "-",
    "host" => {
        "name" => "elk3"
    },
    "verb" => "GET",
    "httpversion" => "1.1",
    "geoip" => {
        "longitude" => 139.6895,
        "country_code3" => "JP",
        "timezone" => "Asia/Tokyo",
        "latitude" => 35.6897,
        "continent_code" => "AS",
        "country_code2" => "JP",
        "country_name" => "Japan",
        "ip" => "150.88.66.22",
        "location" => {
            "lat" => 35.6897,
            "lon" => 139.6895
        }
    },
    "timestamp" => "28/Oct/2024:17:12:27 +0800",
    "message" => "150.88.66.22 - - [28/Oct/2024:17:12:27 +0800] \"GET /iphone HTTP/1.1\" 200 396 \"-\" \"Mozilla/5.0 (Linux; Android 8.0.0; SM-G955U Build/R16NW) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Mobile Safari/537.36\""
}
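The useragent filter parses the UA string against the full uap-core pattern database; the crude Python sketch below only hints at the idea for the two fields shown above and is in no way the real parser:

```python
import re

def crude_ua_parse(ua):
    """Very rough sketch of what the useragent filter extracts (the real
    filter uses the uap-core pattern database, not a couple of regexes)."""
    result = {"os": "Other", "name": "Other"}
    os_match = re.search(r"Android (?P<ver>[\d.]+)", ua)
    if os_match:
        result["os"] = "Android"
        result["os_version"] = os_match.group("ver")
    if "Chrome/" in ua and "Mobile" in ua:
        result["name"] = "Chrome Mobile"
    return result

ua = ("Mozilla/5.0 (Linux; Android 8.0.0; SM-G955U Build/R16NW) "
      "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Mobile Safari/537.36")
print(crude_ua_parse(ua))  # {'os': 'Android', 'name': 'Chrome Mobile', 'os_version': '8.0.0'}
```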

Parsing the nginx access time with the date plugin

1. Write the Logstash configuration file
[21:04:41 root@elk2:/etc/logstash/conf.d]#vim 07-beats-date-stdout.conf
input {
  beats {
    port => 8888
  }
}
filter {
  mutate {
    remove_field => [ "@version", "log", "tags", "agent", "ecs", "input" ]
  }
  grok {
    match => { "message" => "%{HTTPD_COMMONLOG}" }
  }
  geoip {
    source => "clientip"
  }
  useragent {
    source => "message"
    target => "linux-useragent"
  }
  date {
    # e.g. 28/Oct/2024:17:12:27 +0800
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}
output {
  stdout {
    codec => rubydebug
  }
}

[21:06:25 root@elk2:/etc/logstash/conf.d]#logstash -rf 07-beats-date-stdout.conf

2. Start the Filebeat instance
[21:04:12 root@elk3:/etc/filebeat]#rm -rf /var/lib/filebeat/
[21:04:14 root@elk3:/etc/filebeat]#filebeat -e -c 09-nginx-to-logstash.yaml

3. Sample output:
{
    "response" => "200",
    "timestamp" => "28/Oct/2024:17:12:27 +0800",
    "httpversion" => "1.1",
    "auth" => "-",
    "geoip" => {
        "continent_code" => "AS",
        "longitude" => 139.6895,
        "country_code3" => "JP",
        "country_code2" => "JP",
        "location" => {
            "lon" => 139.6895,
            "lat" => 35.6897
        },
        "timezone" => "Asia/Tokyo",
        "ip" => "150.88.66.22",
        "latitude" => 35.6897,
        "country_name" => "Japan"
    },
    "verb" => "GET",
    "bytes" => "396",
    "linux-useragent" => {
        "os_name" => "Android",
        "minor" => "0",
        "os_patch" => "0",
        "major" => "116",
        "os_full" => "Android 8.0.0",
        "os_minor" => "0",
        "os_major" => "8",
        "patch" => "0",
        "os" => "Android",
        "version" => "116.0.0.0",
        "os_version" => "8.0.0",
        "device" => "Samsung SM-G955U",
        "name" => "Chrome Mobile"
    },
    "clientip" => "150.88.66.22",
    "@timestamp" => 2024-10-28T09:12:27.000Z,
    "ident" => "-",
    "host" => {
        "name" => "elk3"
    },
    "request" => "/iphone",
    "message" => "150.88.66.22 - - [28/Oct/2024:17:12:27 +0800] \"GET /iphone HTTP/1.1\" 200 396 \"-\" \"Mozilla/5.0 (Linux; Android 8.0.0; SM-G955U Build/R16NW) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Mobile Safari/537.36\""
}
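Note how the date filter turned `28/Oct/2024:17:12:27 +0800` into the UTC `@timestamp` `2024-10-28T09:12:27.000Z`. The Joda pattern "dd/MMM/yyyy:HH:mm:ss Z" maps to strptime's "%d/%b/%Y:%H:%M:%S %z", so the same conversion can be checked in Python:

```python
from datetime import datetime, timezone

# Joda pattern "dd/MMM/yyyy:HH:mm:ss Z" maps to strptime "%d/%b/%Y:%H:%M:%S %z".
ts = "28/Oct/2024:17:12:27 +0800"
parsed = datetime.strptime(ts, "%d/%b/%Y:%H:%M:%S %z")

# @timestamp is stored in UTC: 17:12:27 +0800 becomes 09:12:27 UTC.
print(parsed.astimezone(timezone.utc).isoformat())  # 2024-10-28T09:12:27+00:00
```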

Writing the data to the ES cluster

1. Write the Logstash configuration file
[21:07:37 root@elk2:/etc/logstash/conf.d]#vim 08-beats-filter-es.conf
input {
  beats {
    port => 8888
  }
}
filter {
  mutate {
    remove_field => [ "@version", "log", "tags", "agent", "ecs", "input" ]
  }
  grok {
    match => { "message" => "%{HTTPD_COMMONLOG}" }
  }
  geoip {
    source => "clientip"
  }
  useragent {
    source => "message"
    target => "linux-useragent"
  }
  date {
    # e.g. 28/Oct/2024:17:12:27 +0800
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => ["10.0.0.91:9200","10.0.0.92:9200","10.0.0.93:9200"]
    index => "linux-elfk-nginx-%{+yyyy.MM.dd}"
  }
}

[21:10:04 root@elk2:/etc/logstash/conf.d]#logstash -rf 08-beats-filter-es.conf

2. Start the Filebeat instance
[21:10:21 root@elk3:/etc/filebeat]#rm -rf /var/lib/filebeat/
[21:10:22 root@elk3:/etc/filebeat]#filebeat -e -c 09-nginx-to-logstash.yaml

3. Query and verify the data in Kibana
# Re-collect the data
[21:10:21 root@elk3:/etc/filebeat]#rm -rf /var/lib/filebeat/
[21:10:22 root@elk3:/etc/filebeat]#filebeat -e -c 09-nginx-to-logstash.yaml


ELFK case study: map visualization failure caused by data mapping


Solution:


{
  "number_of_replicas": 0,
  "number_of_shards": 6
}


[21:40:37 root@elk3:/etc/filebeat]#rm -rf /var/lib/filebeat/
[21:40:39 root@elk3:/etc/filebeat]#filebeat -e -c 09-nginx-to-logstash.yaml 


Delete the index


Recreate the index


Failure fixed


IP count

UV count

Geolocation

Device type


Summary of the day

- Setting up the Logstash environment
- Logstash architecture
  - Logstash instance
  - Logstash pipeline: input, filter, output
- Common Logstash plugins
  - input: stdin, beats, file
  - filter: mutate, date, grok, geoip, useragent
  - output: stdout, elasticsearch
- ELK architecture vs. ELFK architecture
- Failure case study: data mapping
