Syslog Simple: Troubleshooting Common Log Issues

Syslog is a lightweight, widely supported protocol for collecting and forwarding log messages from devices and applications across a network. While simple in concept, real-world deployments often run into practical issues that make logs unreliable, incomplete, or difficult to interpret. This guide walks through common syslog problems and gives practical troubleshooting steps, examples, and configuration tips to get things working reliably.


Overview: how syslog works (brief)

Syslog typically involves three components:

  • A syslog client (device or application) that generates messages.
  • A syslog server/collector that receives and stores messages.
  • Optional forwarders/relays that route messages between systems.

Messages include a facility and severity, a timestamp, hostname, and structured or free-form content. Syslog transports include UDP (default, lightweight, unreliable), TCP (reliable), and TLS-encrypted TCP (secure). Understanding which transport you use helps diagnose connectivity and loss issues.
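For orientation, an RFC 5424 message on the wire looks roughly like this (hostname, app name, and message text are placeholders):

<165>1 2024-05-01T12:34:56.000Z web01 myapp 1234 ID47 - User login succeeded

The <165> priority encodes facility local4 and severity notice (165 = 20 × 8 + 5); the "-" marks empty structured data.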


Common issue 1 — No logs arriving at the collector

Symptoms

  • Collector shows no incoming messages from one or more clients.
  • Clients report successful send but collector log shows nothing.

Checks and fixes

  1. Network reachability
    • Ping or traceroute from client to collector IP.
    • Confirm any intermediate firewalls or ACLs allow syslog ports (default UDP/514, TCP/514). If using TLS, check the configured port (commonly 6514).
  2. Confirm client configuration
    • Verify client points to the correct collector IP/hostname and port.
    • Check whether client uses UDP vs TCP vs TLS; mismatch will stop delivery.
  3. Use packet captures
    • On collector, run tcpdump/wireshark to confirm packets arrive:
      
      sudo tcpdump -n -i any port 514 
    • If you see packets but no application logs, the collector process may not be bound or running.
  4. Verify collector service
    • Ensure syslog daemon (rsyslog/syslog-ng/journald+forwarder) is running and configured to listen on network interfaces.
    • Check binding: some daemons default to localhost only. For rsyslog, ensure $ModLoad imudp and $UDPServerRun 514 (UDP) and $ModLoad imtcp with $InputTCPServerRun 514 (TCP) are enabled (legacy syntax), or use the module()/input() statements shown in the example below.
  5. SELinux/AppArmor and permission issues
    • Security policies can block network binds—check audit logs (auditd) for denied operations.
  6. DNS issues
    • If client uses hostname, test name resolution and ensure collector resolves correctly.

Example rsyslog network enable (rsyslog.conf snippet)

# UDP
module(load="imudp")
input(type="imudp" port="514")

# TCP
module(load="imtcp")
input(type="imtcp" port="514")
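After enabling the inputs, a quick end-to-end test confirms delivery (10.0.0.1 is a placeholder for your collector; the destination file varies by distribution, e.g. /var/log/syslog on Debian/Ubuntu or /var/log/messages on RHEL):

# on the client: send a test message over UDP
logger -n 10.0.0.1 -P 514 -d "syslog delivery test from $(hostname)"

# on the collector: confirm the message was written
sudo grep "syslog delivery test" /var/log/syslog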

Common issue 2 — Messages arrive but timestamps are wrong

Symptoms

  • Logs show future or past timestamps.
  • Correlation across systems is inconsistent.

Checks and fixes

  1. Check system clocks
    • Ensure NTP (ntpd, chronyd, systemd-timesyncd) is running and synchronized on both client and server.
    • Example: systemctl status chronyd && chronyc tracking
  2. Timezone vs UTC
    • Confirm which timezone each host is using and whether logs should be normalized to UTC at collection.
  3. Message timestamp semantics
    • Some syslog senders include their own timestamp in the message payload, and the collector may prepend its own. Use structured formats (RFC 5424) to avoid ambiguity, and normalize at the collector if needed (see the timestamp snippet after this list).
  4. Daylight Saving Time
    • Verify DST settings, especially for distributed systems.
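As a concrete step for point 3, rsyslog can write high-precision RFC 3339 timestamps to files by switching the default file template, a one-line addition to rsyslog.conf that uses a built-in template:

# use high-precision RFC 3339 timestamps for all file outputs
$ActionFileDefaultTemplate RSYSLOG_FileFormat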

Common issue 3 — Message loss or unreliability

Symptoms

  • Intermittent missing messages.
  • High-volume bursts cause dropped logs.

Checks and fixes

  1. Transport choice
    • UDP is lossy by design. For critical logs, use TCP or TLS, ideally combined with disk-assisted queues, for far more reliable delivery (TCP alone does not guarantee delivery across connection failures).
  2. Buffering and rate limits
    • Collector or network devices may drop packets under load. Increase kernel/socket receive buffers (SO_RCVBUF; see the sysctl sketch after this list) or tune rsyslog queue settings.
    • Example rsyslog queue tuning (rsyslog.conf):
      
      $ActionQueueType LinkedList
      $ActionQueueFileName srvrfwd
      $ActionQueueMaxDiskSpace 1g
      $ActionQueueSaveOnShutdown on
  3. Disk and I/O bottlenecks
    • Ensure the collector has sufficient disk throughput and space; slow disk writes can cause backpressure and drops.
  4. Rate-limiting on devices
    • Network gear often rate-limits syslog generation—adjust logging levels or rate limits.
  5. Log rotation and pruning
    • Ensure rotation configuration doesn’t remove files prematurely; archive large volumes instead of deleting.
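For the buffering point above, the kernel's ceiling for socket receive buffers can be raised with sysctl (the 8 MiB value is only an example; size it to your burst rate and persist it in /etc/sysctl.d/ if it helps):

# inspect current limits
sysctl net.core.rmem_max net.core.rmem_default

# raise the maximum socket receive buffer to 8 MiB (runtime only)
sudo sysctl -w net.core.rmem_max=8388608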

Common issue 4 — Misformatted or garbled messages

Symptoms

  • Messages appear concatenated, truncated, or include non-printable characters.
  • Character encoding issues (e.g., UTF-8 vs ISO-8859-1).

Checks and fixes

  1. Framing and transport
    • UDP preserves message boundaries, but messages larger than the path MTU can be truncated or lost. TCP is stream-based, so proper delimiting is required: RFC 6587 defines octet-counting framing and non-transparent framing (messages terminated by a trailer character, usually LF).
  2. MTU fragmentation
    • Large syslog messages may be fragmented and lost—limit message sizes or switch to TCP.
  3. Encoding
    • Ensure senders and collectors agree on encoding (prefer UTF-8). Configure applications to emit UTF-8.
  4. Truncation limits
    • Some daemons impose maximum message lengths; increase limits if supported (rsyslog's $MaxMessageSize / global(maxMessageSize=...) setting, shown below).
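Example raising the rsyslog message size limit (8k here is illustrative; the setting must appear near the top of rsyslog.conf, before the network inputs are loaded):

# legacy syntax
$MaxMessageSize 8k

# or the equivalent RainerScript form
global(maxMessageSize="8k")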

Example enabling octet-counting framing in rsyslog forwarder

action(type="omfwd" target="10.0.0.1" port="6514" protocol="tcp" TCP_Framing="octet-counted") 

Common issue 5 — Duplicate messages

Symptoms

  • Same log entries appear multiple times from one host.
  • Duplicates across multiple collectors or in a storage backend.

Checks and fixes

  1. Multiple forwarders
    • Check if a host sends logs to several collectors or if collectors forward to one another (loop).
  2. Collector config loops
    • Avoid circular forwarding. Use unique filters or tags when forwarding.
  3. Redelivery after ack failure
    • TCP/TLS with retries can cause duplicates if acknowledgements are misinterpreted. Ensure idempotent storage or dedupe at ingestion (use unique IDs or hashing).
  4. Client misconfiguration
    • Some agents (e.g., rsyslog and syslog-ng both installed) may each send the same local logs; disable one (a quick check follows this list).
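A quick check for the double-agent case (unit names assume the distribution defaults):

systemctl is-active rsyslog syslog-ng
systemctl list-unit-files | grep -E 'rsyslog|syslog-ng'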

Common issue 6 — Permission and ownership problems writing logs

Symptoms

  • Collector fails to write files or rotate logs; errors in syslog daemon logs.
  • Log files owned by unexpected users or have restrictive permissions.

Checks and fixes

  1. File system permissions
    • Confirm the syslog process user (often syslog or root) has write access to log directories.
  2. AppArmor/SELinux denials
    • Check audit logs and allow necessary access with proper policies or adjust booleans.
  3. Immutable attributes or ACLs
    • Ensure chattr +i hasn’t been set and that ACLs don’t block writes.
  4. Disk quotas and full partitions
    • Check df -h and quota outputs.
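A quick diagnostic pass for these checks (paths assume the usual /var/log layout; ausearch requires auditd):

ls -ld /var/log /var/log/*.log      # ownership and permissions
lsattr /var/log/*.log               # look for the immutable (i) attribute
sudo ausearch -m avc -ts recent     # recent SELinux denials
df -h /var/log                      # free space on the log partition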

Common issue 7 — Structured logging and parsing issues

Symptoms

  • Fields not parsed correctly in downstream systems; Kibana/Elastic dashboards show missing values.
  • Key/value pairs appended in message text rather than parsed.

Checks and fixes

  1. Use RFC5424 or JSON structured messages
    • These make parsing deterministic. Configure applications to emit JSON or RFC5424 structured data.
  2. Configure parser rules
    • Update grok, rsyslog mmnormalize, or syslog-ng parsing templates to match your message formats.
  3. Tags and templates
    • Add or standardize tags on the client side so collectors can route and parse appropriately.
  4. Example rsyslog template for JSON
    
    template(name="jsonTpl" type="list") {
      constant(value="{")
      constant(value="\"@timestamp\":\"")
      property(name="timereported" dateFormat="rfc3339")
      constant(value="\",\"host\":\"")
      property(name="hostname")
      constant(value="\",\"message\":\"")
      property(name="msg" format="json")
      constant(value="\"}\n")
    }
    action(type="omfile" file="/var/log/json.log" template="jsonTpl")
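With that template, each record is written as one JSON object per line, roughly like this (values are illustrative):

{"@timestamp":"2024-05-01T12:34:56+00:00","host":"web01","message":"User login succeeded"}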

Common issue 8 — Security and spoofing concerns

Symptoms

  • Logs show unexpected hostnames/IPs or fake source information.
  • Unauthenticated senders post arbitrary messages.

Checks and fixes

  1. Use TLS and client authentication
    • Encrypt transport with TLS and optionally require client certificates to verify sender identity (a client-side rsyslog sketch appears after this list).
  2. Network controls
    • Limit which devices can connect to the collector via firewall rules and allowlist IPs.
  3. Reliable metadata
    • Collect additional metadata (e.g., device ID, interface data, syslog-proxy headers) rather than relying solely on the sender-supplied hostname.
  4. Audit and alerting
    • Alert on anomalies like sudden bursts, unknown hostnames, or messages with mismatched source IP vs declared host.
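A client-side rsyslog TLS forwarding sketch for point 1 (certificate paths and the peer name are placeholders for your own PKI):

global(
  DefaultNetstreamDriver="gtls"
  DefaultNetstreamDriverCAFile="/etc/rsyslog.d/certs/ca.pem"
  DefaultNetstreamDriverCertFile="/etc/rsyslog.d/certs/client-cert.pem"
  DefaultNetstreamDriverKeyFile="/etc/rsyslog.d/certs/client-key.pem"
)
action(type="omfwd" target="logs.example.com" port="6514" protocol="tcp"
       StreamDriver="gtls" StreamDriverMode="1" StreamDriverAuthMode="x509/name"
       StreamDriverPermittedPeers="logs.example.com")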

Collector-specific notes (rsyslog, syslog-ng, journald)

  • rsyslog: Highly flexible, supports multiple inputs/outputs, queues, and modules. Common pitfalls: forgetting to load imudp/imtcp modules, not tuning action queues, and default localhost binding.
  • syslog-ng: Strong parsing capabilities and flexible destinations. Watch for driver-specific options and use log-filters to avoid loops.
  • journald: Systemd’s journal is local; to forward, use systemd-journal-remote or a forwarder (e.g., journald -> rsyslog). Ensure persistent storage is enabled if you need logs across reboots.
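For journald, persistence and forwarding live in /etc/systemd/journald.conf (a minimal sketch; restart systemd-journald afterwards):

[Journal]
Storage=persistent
ForwardToSyslog=yes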

Debugging workflow — step-by-step

  1. Reproduce the issue and collect exact symptoms (times, hosts, sample messages).
  2. Gather evidence:
    • Client config file(s)
    • Collector config file(s)
    • Output of ss/netstat/lsof showing listening ports (a command sketch follows this list)
    • tcpdump/capture of traffic
    • Syslog daemon logs and system logs
  3. Isolate network vs application:
    • If packets arrive at collector (tcpdump) but not processed, focus on collector config.
    • If packets don’t arrive, check network/firewall and client sending.
  4. Check resource constraints (disk, CPU, memory).
  5. Change one thing at a time and monitor results.
  6. If necessary, enable debug logging on rsyslog/syslog-ng for more detail, but remember to turn it off after troubleshooting to avoid huge logs.
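A compact evidence-gathering pass on the collector might look like this (assumes rsyslog and the default port 514):

ss -lnup | grep 514                          # UDP listeners on the syslog port
ss -lntp | grep 514                          # TCP listeners
sudo tcpdump -n -i any port 514 -c 20        # confirm traffic is arriving
sudo rsyslogd -N1                            # validate configuration syntax
journalctl -u rsyslog --since "1 hour ago"   # recent daemon errors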

Best practices to avoid problems

  • Use TCP/TLS for critical logs; reserve UDP only for low-risk, high-volume telemetry where loss is acceptable.
  • Centralize time sync (NTP/chrony) and normalize timestamps to UTC.
  • Standardize log formats (RFC5424 or JSON) and document field schemas.
  • Implement rate limiting and backpressure handling to prevent overload.
  • Use unique identifiers and hashing to deduplicate in the ingestion pipeline.
  • Monitor the syslog pipeline (ingest rates, queue lengths, drop counters) and alert on anomalies.
  • Regularly test failover and disaster-recovery procedures for your logging backend.

Quick checklist (compact)

  • Network reachable? ports open? collector listening?
  • NTP synchronized across hosts?
  • Correct transport (UDP/TCP/TLS) configured on both ends?
  • No accidental duplicate forwarders/loops?
  • Collector has disk, permission, and I/O capacity?
  • Structured messages for parsing; parsers configured properly?
  • TLS/authentication and firewall allowlisting in place?

Troubleshooting syslog is often detective work: verify where messages stop (sender, network, or collector), collect evidence, and apply targeted fixes. With the right transport, timestamps, parsers, and resource planning, a “simple” syslog pipeline becomes reliable and scalable.
