How to monitor server and Docker using Grafana

Now we have a WordPress site, Docker with a nice UI, and Proxmox for virtualization. Let's add some monitoring for all of this.
I chose Prometheus + Node exporter + IPMI exporter as the server-side monitoring solution, with cAdvisor for container monitoring and Grafana for visualization.
Grafana's site offers many ready-made dashboards that are very simple to import.
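Importing can be done through the Grafana UI, or scripted against Grafana's HTTP API once the stack is up. A sketch of the second approach, assuming dashboard ID 1860 ("Node Exporter Full" on grafana.com) and the admin password from the compose file below; depending on the dashboard, you may still need to map its data source input in the UI:

```shell
# Fetch a public dashboard's JSON from grafana.com (1860 = "Node Exporter Full")
curl -s https://grafana.com/api/dashboards/1860/revisions/latest/download \
  -o dashboard.json

# Wrap it and push it through Grafana's HTTP API
# (credentials and port match the compose file in this post)
curl -s -u admin:adminpassword \
  -H "Content-Type: application/json" \
  -d "{\"dashboard\": $(cat dashboard.json), \"overwrite\": true}" \
  http://localhost:3000/api/dashboards/db
```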

docker-compose.yml:
prometheus:
    image: prom/prometheus
    restart: always
    volumes:
        - ./prometheus.yml:/etc/prometheus/prometheus.yml
        - ./data/prometheus_data:/prometheus
    command:
        - '--config.file=/etc/prometheus/prometheus.yml'
    links:
        - node-exporter
        - cadvisor
        - ipmi-exporter

ipmi-exporter:
    image: lovoo/ipmi_exporter:latest
    restart: always
    devices:
        - "/dev/ipmi0:/dev/ipmi0"

node-exporter:
    image: prom/node-exporter
    restart: always

cadvisor:
    image: google/cadvisor
    volumes:
        - /:/rootfs:ro
        - /var/run:/var/run:rw
        - /sys:/sys:ro
        - /var/lib/docker/:/var/lib/docker:ro
    restart: always

grafana:
    image: grafana/grafana
    restart: always
    volumes:
        - ./data/grafana_data:/var/lib/grafana
    environment:
        - GF_SECURITY_ADMIN_PASSWORD=adminpassword
        - "VIRTUAL_HOST=grafana.example.com"
        - "LETSENCRYPT_HOST=grafana.example.com"
        - "LETSENCRYPT_EMAIL=email@example.com"
    ports:
        - "3000:3000"
    links:
        - prometheus
        - node-exporter
        - cadvisor

prometheus.yml:
# my global config
global:
  scrape_interval:     15s # By default, scrape targets every 15 seconds.
  evaluation_interval: 15s # By default, evaluate rules every 15 seconds.
  # scrape_timeout is set to the global default (10s).

  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
      monitor: 'my-project'

# Load and evaluate rules in this file every 'evaluation_interval' seconds.
rule_files:
  - "alert.rules"
  # - "first.rules"
  # - "second.rules"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'node'

    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets: ['cadvisor:8080', 'node-exporter:9100', 'ipmi-exporter:9289']
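Once metrics start flowing, Grafana panels are driven by PromQL queries against these exporters. A few examples; the metric names come from cAdvisor and node-exporter and can differ between exporter versions:

```
# Per-container CPU usage in cores (cAdvisor)
rate(container_cpu_usage_seconds_total{image!=""}[5m])

# Available memory on the host (node-exporter)
node_memory_MemAvailable_bytes

# Any scrape target that is currently down
up == 0
```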

Just start the stack with docker-compose and import ready-made dashboards or create your own.
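A quick way to check that everything is wired up. Note that only Grafana's port is published to the host in the compose file above, so the exporters have to be probed from inside the compose network:

```shell
# Start the stack in the background
docker-compose up -d

# Grafana should answer on the published port
curl -s http://localhost:3000/api/health

# The exporters are only reachable on the compose network;
# probe one from inside the Prometheus container
docker-compose exec prometheus wget -qO- http://node-exporter:9100/metrics | head

# Prometheus also lists the state of all scrape targets
# on its /targets page (port 9090 inside the network)
```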
