
Logging

Effective logging is crucial for monitoring blockchain node health, diagnosing issues, and maintaining security. This chapter provides comprehensive coverage of logging configuration, log management, and best practices for blockchain nodes.


┌─────────────────────────────────────────────────────────────────────────────┐
│ LOG VERBOSITY LEVELS │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ Level Value When to Use │
│ ━━━━━ ━━━━━ ━━━━━━━━━━━━━━━━━ │
│ │
│ TRACE 5 Extremely detailed, every function call │
│ ──── Use for debugging specific issues │
│ │
│ DEBUG 4 Detailed information for debugging │
│ ──── Use during development and troubleshooting │
│ │
│ INFO 3 Normal operation messages │
│ ──── Recommended for production │
│ │
│ WARN 2 Warning messages (non-critical issues) │
│ ──── Always monitor in production │
│ │
│ ERROR 1 Error messages (critical issues) │
│ ──── Critical - needs immediate attention │
│ │
│ CRIT 0 Critical issues (node may crash) │
│ ──── Requires immediate action │
│ │
│ Recommended Production Setting: INFO (3) │
│ │
└─────────────────────────────────────────────────────────────────────────────┘

# Start Geth with specific verbosity
geth \
  --mainnet \
  --verbosity 3 \
  --vmodule "miner=5,eth/*=3"

# Log to file
geth \
  --mainnet \
  --verbosity 3 \
  --log.file /var/log/geth.log

# Export recent journal entries to a file with an external tool
# (no -f here: following would never terminate the redirect)
journalctl -u geth --since "1 hour ago" > /var/log/geth-hourly.log
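Verbosity can also be changed at runtime through Geth's debug JSON-RPC namespace, avoiding a restart. A minimal sketch that builds the request payload; it assumes Geth was started with --http --http.api debug, and the endpoint shown in the comment is an example:

```shell
# Build the JSON-RPC payload for the debug_verbosity method,
# which changes log verbosity on a running node
rpc_payload() {
  # $1 = verbosity level (0-5)
  printf '{"jsonrpc":"2.0","method":"debug_verbosity","params":[%d],"id":1}' "$1"
}

# Send it to the node's HTTP-RPC endpoint, e.g.:
# curl -s -X POST -H 'Content-Type: application/json' \
#   --data "$(rpc_payload 4)" http://127.0.0.1:8545
```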
# Module-specific verbosity
# Format: --vmodule <pattern>=<level>[,<pattern>=<level>,...]
# (give all pairs in one comma-separated flag; repeating
#  --vmodule just overrides the earlier value)
geth \
  --vmodule "eth/*=3,miner=5,txpool=4,server=3"

# Available modules:
# - eth: Ethereum protocol
# - eth/downloader: Block downloader
# - eth/fetcher: Block/tx fetcher
# - miner: Block mining/creation
# - txpool: Transaction pool
# - server: P2P server
# - network: Network connectivity
# - rpc: RPC server
# - les: Light Ethereum Subprotocol
# Enable JSON logging (for log aggregation)
geth \
  --mainnet \
  --log.json \
  --log.file /var/log/geth.json

# JSON log format example:
# {"level":"info","ts":1699999999.123,"caller":"miner/worker.go:347","msg":"Commit new mining work","number":18500000,"hash":"0xabc...","tx":100,"gas":1000000}
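One benefit of the JSON format is that it can be summarized with standard tools. A minimal sketch using only grep/sort/uniq, assuming one JSON object per line with a "level" field as in the example above:

```shell
# Count geth log entries per level, most frequent first
count_levels() {
  # $1 = path to a JSON-format log file
  grep -o '"level":"[a-z]*"' "$1" | sort | uniq -c | sort -rn
}
```

Running `count_levels /var/log/geth.json` prints one line per level with its count, which makes a sudden spike in warnings or errors easy to spot.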

# View logs for geth service
sudo journalctl -u geth
# Follow logs in real-time
sudo journalctl -u geth -f
# View last 100 lines
sudo journalctl -u geth -n 100
# View logs since specific time
sudo journalctl -u geth --since "2024-01-01 00:00:00"
sudo journalctl -u geth --since "1 hour ago"
sudo journalctl -u geth --since yesterday
# Filter by priority
sudo journalctl -u geth -p err
sudo journalctl -u geth -p warning
# Show only errors
sudo journalctl -u geth -b -p err
# Rotate journal (if needed)
sudo journalctl --vacuum-size=500M
sudo journalctl --vacuum-time=7d
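For quick health checks (for example from cron), the journal output can be piped through a small counter. A sketch:

```shell
# Count lines matching a pattern on stdin, e.g.:
#   journalctl -u geth --since "1 hour ago" -o cat | count_pattern ERROR
count_pattern() {
  # grep -c prints the count but exits 1 when it is zero;
  # mask that so the function always succeeds
  grep -c -- "$1" || true
}
```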
# Create logrotate config for geth
# (use tee rather than "sudo cat >": the redirection runs in the
#  calling shell, so plain cat would not have write access to /etc)
sudo tee /etc/logrotate.d/geth > /dev/null << 'EOF'
/var/log/geth.log {
    daily
    rotate 7
    compress
    delaycompress
    missingok
    notifempty
    create 0640 root root
    postrotate
        systemctl reload geth > /dev/null 2>&1 || true
    endscript
}
/var/log/geth.json {
    daily
    rotate 30
    compress
    delaycompress
    missingok
    notifempty
    create 0640 root root
}
EOF
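Before relying on a new rotation config, logrotate can be dry-run with -d, which parses the rules and prints the planned actions without rotating anything. A sketch against a scratch copy in /tmp (paths are examples), so it runs without root:

```shell
# Write a throwaway copy of the rotation rules and dry-run it
conf=/tmp/geth-logrotate-test.conf
cat > "$conf" << 'EOF'
/tmp/geth-test.log {
    daily
    rotate 7
    compress
    missingok
}
EOF
touch /tmp/geth-test.log

# -d = debug/dry run: report what would happen, change nothing
if command -v logrotate > /dev/null; then
    logrotate -d "$conf"
fi
```

For the real config the equivalent is `sudo logrotate -d /etc/logrotate.d/geth`; adding -f instead forces an immediate rotation, which is a quick way to confirm the postrotate step works.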

ELK Stack (Elasticsearch, Logstash, Kibana)

┌─────────────────────────────────────────────────────────────────┐
│ ELK STACK FOR BLOCKCHAIN NODES │
├─────────────────────────────────────────────────────────────────┤
│ │
│ Node Logs ──▶ Filebeat ──▶ Logstash ──▶ Elasticsearch ──▶ Kibana│
│ │
│ ┌─────────────────────────────────────────────────────────────┐ │
│ │ COMPONENTS: │ │
│ │ │ │
│ │ Filebeat: Lightweight log shipper │ │
│ │ - Installed on each node │ │
│ │ - Monitors log files │ │
│ │ - Forwards to Logstash │ │
│ │ │ │
│ │ Logstash: Log processing pipeline │ │
│ │ - Parses and transforms logs │ │
│ │ - Filters (grok, mutate, etc.) │ │
│ │ - Sends to Elasticsearch │ │
│ │ │ │
│ │ Elasticsearch: Search and analytics engine │ │
│ │ - Stores logs indexed by time │ │
│ │ - Full-text search │ │
│ │ │ │
│ │ Kibana: Visualization and dashboards │ │
│ │ - Search and filter logs │ │
│ │ - Create visualizations │ │
│ │ - Build dashboards │ │
│ └─────────────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────┘
/etc/filebeat/filebeat.yml
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/geth.log
      - /var/log/geth.json
    json.keys_under_root: true
    json.overwrite_keys: true
    fields:
      host.name: geth-node-1
      network.name: mainnet
    fields_under_root: true

output.logstash:
  hosts: ["logstash.example.com:5044"]

setup.dashboards.enabled: true
setup.template.enabled: true
# Promtail configuration (Loki agent)
server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://loki.example.com:3100/loki/api/v1/push

scrape_configs:
  - job_name: geth
    static_configs:
      - targets:
          - localhost
        labels:
          job: geth
          node: mainnet
          __path__: /var/log/geth.log

Pattern          Description           Action
━━━━━━━          ━━━━━━━━━━━           ━━━━━━
ERROR            Critical issues       Immediate attention
WARN.*memory     Memory pressure       Check resources
WARN.*disk       Disk issues           Check storage
dropped.*tx      Transaction dropped   Check gas price
sync.*behind     Sync issues           Check network
Invalid.*block   Invalid block         Check consensus
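The patterns above map directly onto a shell case statement (using glob equivalents of the regexes); a minimal sketch for use inside alert scripts:

```shell
# Map a log line to the suggested action from the pattern table
classify_line() {
  case "$1" in
    *ERROR*)         echo "Immediate attention" ;;
    *WARN*memory*)   echo "Check resources" ;;
    *WARN*disk*)     echo "Check storage" ;;
    *dropped*tx*)    echo "Check gas price" ;;
    *sync*behind*)   echo "Check network" ;;
    *Invalid*block*) echo "Check consensus" ;;
    *)               echo "OK" ;;
  esac
}
```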
alert-errors.sh
#!/bin/bash
# Monitor for errors and alert by mail and Slack
# (-F survives log rotation; read -r preserves backslashes)
tail -F /var/log/geth.log | grep --line-buffered ERROR | while read -r line; do
    echo "$line" | mail -s "Geth Error Alert" admin@example.com
    # Send to Slack (escape double quotes so the JSON stays valid)
    safe=${line//\"/\\\"}
    curl -X POST -H 'Content-type: application/json' \
        --data "{\"text\":\"🔥 Geth Error: $safe\"}" \
        https://hooks.slack.com/services/YOUR/WEBHOOK
done
# Loki alert for errors
# (the |= "ERROR" filter restricts the count to error lines,
#  rather than all log output)
groups:
  - name: geth-logs
    rules:
      - alert: GethErrors
        expr: count_over_time({job="geth"} |= "ERROR" [5m]) > 0
        for: 2m
        labels:
          severity: critical
        annotations:
          summary: "Geth errors detected"
          description: "There are {{ $value }} error messages in the last 5 minutes"

Question                                        Answer
━━━━━━━━                                        ━━━━━━
What log level is recommended for production?   INFO (3) - balances detail with performance
What is log rotation?                           Automatically archiving and deleting old log files
How do you monitor logs centrally?              Use ELK Stack, Grafana Loki, or similar
What are important log patterns to watch?       Errors, warnings, sync status, memory/disk issues
Why is JSON logging useful?                     Enables easy parsing and analysis by log aggregation tools

  • Configure appropriate verbosity for your environment
  • Use log rotation to manage disk space
  • Implement centralized logging for production
  • Set up alerts for critical log patterns
  • Regularly review and analyze logs

In Chapter 38: Grafana Dashboards, we’ll explore visualization and dashboards.


Last Updated: 2026-02-20