🚀 Steps to Send Nginx Logs from Filebeat to ELK Server

🛠️ 1. Install and Configure Filebeat on Nginx Server

# Add Elasticsearch GPG Key
curl -fsSL https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elastic.gpg

# Add Elastic APT Repository
echo "deb [signed-by=/usr/share/keyrings/elastic.gpg] https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list

# Update and Install Filebeat
sudo apt update
sudo apt install filebeat

# Verify Installation
filebeat version

🔧 2. Configure Filebeat

✏️ Modify Filebeat Configuration

sudo nano /etc/filebeat/filebeat.yml

🔹 Disable Elasticsearch Output (Comment out the following lines):

#output.elasticsearch:
#  hosts: ["localhost:9200"]

🔹 Enable Logstash Output (Uncomment and modify):

output.logstash:
  hosts: ["3.109.209.101:5044"]

🔹 Enable Nginx Module

sudo filebeat modules enable nginx

🔹 Check Enabled Modules

sudo filebeat modules list

🔹 Modify Nginx Module Configuration

sudo nano /etc/filebeat/modules.d/nginx.yml

Ensure these lines are uncommented:

- module: nginx
  access:
    enabled: true
  error:
    enabled: true

🔹 Restart Filebeat

sudo systemctl restart filebeat
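
If the service does not come back up cleanly, the systemd journal usually shows why (the APT package installed above runs Filebeat as a systemd unit):

# Inspect recent Filebeat service logs
sudo journalctl -u filebeat -n 50 --no-pager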

📡 3. Configure Logstash on ELK Server

✏️ Create Input Configuration

sudo nano /etc/logstash/conf.d/02-beats-input.conf

input {
  beats {
    port => 5044
  }
}

✏️ Create Filter for Nginx Logs

sudo nano /etc/logstash/conf.d/25-nginx-filter.conf

filter {
  if [fileset][module] == "nginx" {
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
    date {
      match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
    }
  }
}
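
Nginx's default access log uses the same combined format as Apache, which is why %{COMBINEDAPACHELOG} applies here; lines that don't match get tagged _grokparsefailure. Once events are flowing, a quick query against Elasticsearch (index pattern assumed from the output configuration below) shows whether any parse failures are occurring:

# Look for events that failed grok parsing (run on the ELK server)
curl -s 'http://localhost:9200/filebeat-*/_search?q=tags:_grokparsefailure&size=1&pretty'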

✏️ Create Output Configuration

sudo nano /etc/logstash/conf.d/30-elasticsearch-output.conf

output {
  if [@metadata][beat] == "filebeat" {
    if [@metadata][pipeline] {
      elasticsearch {
        hosts => ["localhost:9200"]
        manage_template => false
        index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
        pipeline => "%{[@metadata][pipeline]}"
      }
    } else {
      elasticsearch {
        hosts => ["localhost:9200"]
        manage_template => false
        index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
      }
    }
  } else if [fields][log_type] == "nginx" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "nginx-logs-%{+YYYY.MM.dd}"
    }
  }
}

✅ Test and Restart Logstash

sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash -t
sudo systemctl restart logstash
sudo systemctl enable logstash
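
Once Logstash restarts, it should be listening for Beats connections on port 5044; a quick check (using ss from iproute2, available on most modern distributions):

# Confirm the Beats input port is open on the ELK server
sudo ss -tlnp | grep 5044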

📊 4. Load Nginx Dashboards in Kibana

Run the following on the Nginx Server:

sudo filebeat setup --pipelines --modules nginx
sudo filebeat setup --index-management \
  -E output.logstash.enabled=false \
  -E 'output.elasticsearch.hosts=["3.109.209.101:9200"]' \
  -E setup.kibana.host="3.109.209.101:5601"
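
The --index-management run above loads the Filebeat index template into Elasticsearch; the prebuilt Nginx dashboards themselves can be loaded with the --dashboards flag (same override pattern, pointing at the example ELK server):

sudo filebeat setup --dashboards \
  -E output.logstash.enabled=false \
  -E 'output.elasticsearch.hosts=["3.109.209.101:9200"]' \
  -E setup.kibana.host="3.109.209.101:5601"

# Optional sanity check: confirm the Nginx ingest pipelines were registered in Elasticsearch
curl -s 'http://3.109.209.101:9200/_ingest/pipeline?pretty' | grep nginx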

🔹 Check Elasticsearch Network Settings

sudo nano /etc/elasticsearch/elasticsearch.yml

Ensure the following is set:

cluster.name: elasticsearch
node.name: node-1
network.host: 0.0.0.0
discovery.seed_hosts: []
cluster.initial_master_nodes: ["node-1"]

Restart Elasticsearch:

sudo systemctl restart elasticsearch
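
Give Elasticsearch a moment to start, then confirm it is up and reachable:

# Cluster health should report green or yellow
curl -s 'http://localhost:9200/_cluster/health?pretty'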

🔹 Configure Kibana

sudo nano /etc/kibana/kibana.yml

Set:

server.host: "0.0.0.0"

Restart Kibana:

sudo systemctl restart kibana
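
Kibana can take a minute to come up; check the service and its status API (available in Kibana 7.x) once it is running:

sudo systemctl status kibana
# The status API should respond once Kibana is ready
curl -s 'http://localhost:5601/api/status' | head -c 300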

🚀 5. Start and Test Filebeat

🔄 Restart and Enable Filebeat on Nginx Server

sudo systemctl restart filebeat
sudo systemctl enable filebeat

🛠️ Test Output

sudo filebeat test output

🔍 Verify Filebeat Configuration

sudo nano /etc/filebeat/filebeat.yml

Ensure the log paths are set correctly. Note that the Nginx module enabled earlier already collects the Nginx access and error logs, so this generic input is optional and can produce duplicate events if both read the same files:

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/*.log  # Modify this path according to your logs

✅ Final Checks

sudo filebeat test config
sudo systemctl restart filebeat
sudo systemctl status filebeat
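
On the ELK server, the daily Filebeat index should now exist and grow as requests hit Nginx (index name pattern assumed from the Logstash output configuration above):

# List Filebeat indices and their document counts
curl -s 'http://localhost:9200/_cat/indices/filebeat-*?v'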

🎉 Congratulations! 🎉

Your Nginx logs should now be visible in the Kibana dashboard! 🚀📊