grafana/loki

Grafana Loki Wiki

Last updated on Jan 06, 2026 (Commit: a0aef98)

Overview

Relevant Files
  • README.md
  • pkg/loki/loki.go
  • docs/sources/get-started/architecture.md
  • docs/sources/get-started/components.md
  • pkg/scheduler/scheduler.go
  • pkg/ruler/evaluator_local.go
  • pkg/ruler/evaluator_remote.go
  • pkg/indexgateway/gateway.go
  • pkg/compactor/compactor.go

Loki is a horizontally-scalable, highly-available, multi-tenant log aggregation system inspired by Prometheus. Unlike traditional logging systems that perform full-text indexing on log contents, Loki takes a label-based approach: it stores compressed, unstructured logs while indexing only metadata (labels). This design makes Loki simpler to operate and significantly cheaper to run compared to alternatives.

Core Philosophy

Loki differs from Prometheus in two key ways: it focuses on logs instead of metrics, and it uses a push model where agents send logs directly to Loki, rather than Prometheus's pull model. Loki indexes and groups log streams using the same labels as Prometheus, enabling seamless switching between metrics and logs within Grafana. This label-based approach is especially well-suited for Kubernetes environments, where pod labels are automatically scraped and indexed.

Deployment Modes

Loki can run in three deployment modes, allowing you to scale from simple single-instance setups to large distributed systems:

  • Single Binary: All components run in one process, ideal for development and small deployments
  • Simple Scalable: Components are grouped into read, write, and backend tiers for moderate scaling
  • Microservices: Each component runs independently, enabling full horizontal scaling for large multi-tenant deployments

Multi-Tenancy

Loki is designed for multi-tenant deployments with tenant isolation enforced via the X-Scope-OrgID HTTP header. This allows multiple organizations to safely share the same Loki cluster with complete data isolation.
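
For illustration, here is a minimal Go sketch of a push request that sets the tenant header. The endpoint path and JSON body follow Loki's documented push API; the URL, tenant ID, labels, and log line are placeholders.

package main

import (
    "bytes"
    "fmt"
    "net/http"
    "time"
)

func main() {
    // One stream with one entry, in the shape of Loki's push API.
    // Timestamps are nanoseconds since the Unix epoch, sent as strings.
    body := fmt.Sprintf(`{"streams":[{"stream":{"app":"example"},"values":[["%d","hello from tenant-a"]]}]}`,
        time.Now().UnixNano())

    req, err := http.NewRequest("POST", "http://localhost:3100/loki/api/v1/push", bytes.NewBufferString(body))
    if err != nil {
        panic(err)
    }
    req.Header.Set("Content-Type", "application/json")
    // Tenant isolation: everything written with this header is stored under tenant "tenant-a".
    req.Header.Set("X-Scope-OrgID", "tenant-a")

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    fmt.Println(resp.Status) // expect 204 No Content on success
}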

Write Path

The write path demonstrates Loki's distributed architecture:

  1. Agents (Alloy) send log streams to the Distributor
  2. The Distributor hashes each stream using a consistent hash ring to determine target Ingesters
  3. Logs are sent to the Ingester and its replicas (based on replication factor)
  4. The Ingester creates or appends to chunks for each unique label set
  5. The Distributor waits for a quorum of Ingesters to acknowledge writes
  6. A success response is returned only after quorum acknowledgment

Query Path

Queries flow through multiple components for optimization and fairness:

  1. Query Frontend receives queries and optionally splits them for parallelization
  2. Query Scheduler (optional) provides advanced queuing with hierarchical per-tenant queues, preventing any single tenant from monopolizing query resources
  3. Queriers execute the actual queries against stored data
  4. Index Gateway serves metadata queries, helping determine which chunks to fetch

Key Components

Ruler: Evaluates alerting and recording rules periodically. It supports two evaluation modes: local evaluation (using an internal querier) or remote evaluation (delegating to the Query Frontend to leverage query acceleration techniques like splitting, sharding, and caching).

Index Gateway: Manages index access for shipper-based storage (TSDB or BoltDB). It can run in simple mode (serving all indexes) or ring mode (distributing indexes across instances using consistent hashing). The Index Gateway also coordinates with the Bloom Gateway for efficient chunk filtering.

Compactor: Compacts multiple index files produced by Ingesters into single index files per day and tenant, making index lookups more efficient. It also handles retention policies and delete requests.

Ring: Loki uses consistent hash rings for distributed coordination. Components like Distributors, Ingesters, Query Schedulers, Compactors, and Rulers register themselves in rings, enabling service discovery and load distribution. The Ring uses a Key-Value store (Consul, etcd, or Memberlist) to maintain cluster state.

Storage

Loki stores all data in a single object storage backend (S3, GCS, Azure Blob Storage, etc.) using the index shipper pattern. This unified approach stores both indexes and chunks in the same backend. Two index formats are supported:

  • TSDB (recommended): Originally developed by Prometheus, extensible and supports all new Loki features
  • BoltDB (deprecated): Legacy format, no longer recommended for new deployments

Architecture & Core Components

Relevant Files
  • pkg/loki/modules.go
  • pkg/distributor/distributor.go
  • pkg/ingester/ingester.go
  • pkg/querier/querier.go
  • pkg/storage/store.go
  • docs/sources/get-started/components.md

Loki is a modular, horizontally-scalable log aggregation system built on a microservices architecture. It can run as a single binary or be deployed as independent components across multiple machines.

Write Path

The write path handles incoming log data through a series of validation and distribution steps:

  1. Distributor receives HTTP/gRPC push requests from clients and agents
  2. Validation ensures logs meet requirements (valid labels, timestamp bounds, size limits)
  3. Consistent Hashing determines target ingesters using tenant ID and label set
  4. Replication sends data to n ingesters (typically 3) in parallel
  5. Quorum Acknowledgment waits for majority (floor(n/2) + 1) of ingesters to confirm
  6. Ingester appends logs to in-memory chunks and persists via Write-Ahead Log (WAL)
  7. Chunk Flushing compresses and uploads chunks to object storage at configurable intervals
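
Steps 4 and 5 above are the heart of the write path's availability story. The following is a minimal sketch of the quorum wait, using illustrative types rather than Loki's actual distributor code: the write succeeds as soon as floor(n/2)+1 replicas acknowledge, and fails as soon as that becomes impossible.

import (
    "context"
    "fmt"
)

// Stream and IngesterClient are illustrative stand-ins for Loki's internal types.
type Stream struct{ Labels, Line string }

type IngesterClient interface {
    Push(ctx context.Context, s Stream) error
}

// pushWithQuorum sends the stream to every replica concurrently and succeeds
// once floor(n/2)+1 replicas have acknowledged the write.
func pushWithQuorum(ctx context.Context, replicas []IngesterClient, s Stream) error {
    quorum := len(replicas)/2 + 1
    results := make(chan error, len(replicas))

    for _, r := range replicas {
        go func(r IngesterClient) { results <- r.Push(ctx, s) }(r)
    }

    acks, failures := 0, 0
    for range replicas {
        if err := <-results; err != nil {
            failures++
        } else {
            acks++
        }
        if acks >= quorum {
            return nil // majority reached; remaining responses can be ignored
        }
        if failures > len(replicas)-quorum {
            return fmt.Errorf("quorum failed: only %d of %d replicas acknowledged", acks, len(replicas))
        }
    }
    return fmt.Errorf("quorum not reached")
}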

Read Path

The read path retrieves logs through a query execution pipeline:

  1. Query Frontend accepts LogQL queries and provides caching and query splitting
  2. Query Scheduler (optional) manages fair queuing across tenants
  3. Querier executes queries by fetching data from two sources:
    • Ingesters for recent in-memory data (within query_ingesters_within window)
    • Object Storage for historical data via the Store
  4. Index Gateway provides metadata lookups for efficient chunk discovery
  5. Deduplication removes duplicate entries from replicated data
  6. Result Aggregation combines results from multiple sources
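
Step 5 works because replication makes each entry appear verbatim in several ingesters. Below is a minimal sketch of that deduplication with an illustrative Entry type; Loki's querier does the equivalent with merge iterators over streams.

import (
    "sort"
    "time"
)

// Entry is an illustrative log entry returned by one replica.
type Entry struct {
    Timestamp time.Time
    Line      string
}

// dedupe merges the entries of a single stream from several replicas,
// dropping exact duplicates (same timestamp and line).
func dedupe(replicas ...[]Entry) []Entry {
    var all []Entry
    for _, r := range replicas {
        all = append(all, r...)
    }
    sort.Slice(all, func(i, j int) bool {
        if !all[i].Timestamp.Equal(all[j].Timestamp) {
            return all[i].Timestamp.Before(all[j].Timestamp)
        }
        return all[i].Line < all[j].Line
    })

    var out []Entry
    for _, e := range all {
        if n := len(out); n > 0 && e.Timestamp.Equal(out[n-1].Timestamp) && e.Line == out[n-1].Line {
            continue // duplicate copy from another replica
        }
        out = append(out, e)
    }
    return out
}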

Core Components

Distributor: Stateless entry point for writes. Validates data, applies rate limiting per tenant, and distributes writes across ingesters using consistent hashing. Scales horizontally behind a load balancer.

Ingester: Stateful component holding in-memory chunks. Receives writes, maintains WAL for durability, and flushes compressed chunks to storage. Uses a hash ring for discovery and lifecycle management (PENDING, JOINING, ACTIVE, LEAVING, UNHEALTHY states).

Querier: Executes LogQL queries by merging results from ingesters and storage. Handles deduplication, filtering, and aggregation. Can run standalone or as workers pulling jobs from query frontend/scheduler.

Query Frontend: Optional caching and query optimization layer. Splits large queries, caches results, and provides fair scheduling. Stateless and horizontally scalable.

Index Gateway: Serves metadata queries for shipper-based stores (TSDB, BoltDB). Enables efficient chunk discovery without scanning all data. Can run in simple or ring mode.

Store: Abstraction over object storage backends (S3, GCS, Azure). Manages chunk storage, index files, and provides SelectLogs/SelectSamples interfaces for querying.

Deployment Modes

Loki supports three deployment patterns:

  • Single Binary (target=all): All components in one process
  • Simple Scalable (target=read/write/backend): Logical grouping for easier scaling
  • Microservices (individual targets): Each component deployed independently

Key Architectural Patterns

Consistent Hashing: Distributors use a hash ring to deterministically route streams to ingesters, enabling horizontal scaling without rebalancing all data.
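
A minimal sketch of the ring lookup follows, assuming a simplified ring of (token, ingester) pairs; Loki's actual ring also tracks instance state, zones, and replication.

import (
    "hash/fnv"
    "sort"
)

// ringEntry is an illustrative token registered by one ingester.
type ringEntry struct {
    Token    uint32
    Ingester string
}

// lookupOwner hashes the tenant and stream labels, then walks clockwise on the
// ring to the first token at or after that hash; that token's owner receives the stream.
func lookupOwner(ring []ringEntry, tenant, labels string) string {
    sort.Slice(ring, func(i, j int) bool { return ring[i].Token < ring[j].Token })

    h := fnv.New32a()
    h.Write([]byte(tenant + "/" + labels))
    key := h.Sum32()

    i := sort.Search(len(ring), func(i int) bool { return ring[i].Token >= key })
    if i == len(ring) {
        i = 0 // wrap around to the start of the ring
    }
    return ring[i].Ingester
}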

Quorum Consistency: Writes require acknowledgment from a quorum of replicas, balancing durability with availability.

Async Store: Queriers can query both ingesters and storage concurrently, with configurable lookback windows to optimize performance.

Module Dependencies: The system uses a dependency injection pattern where modules declare their dependencies, enabling flexible composition and testing.

LogQL Query Language & Engine

Relevant Files
  • pkg/logql/engine.go - Query execution engine
  • pkg/logql/evaluator.go - Expression evaluation logic
  • pkg/logql/syntax/parser.go - Query parser
  • pkg/logql/syntax/ast.go - Abstract syntax tree definitions
  • pkg/logql/log/pipeline.go - Log processing pipeline
  • pkg/engine/basic_engine.go - Basic execution engine

LogQL is Loki's query language, combining log stream selection with optional processing pipelines. The query engine parses, validates, and executes queries against log data.

Query Structure

Every LogQL query has two main components:

  1. Log Stream Selector - Filters logs by label matchers (e.g., {app="mysql"})
  2. Log Pipeline (optional) - Processes selected logs through stages

Example query:

{container="query-frontend"} |= "error" | json | level="ERROR"

The selector {container="query-frontend"} identifies streams, then the pipeline filters for "error", parses JSON, and filters by level.

Query Types

LogQL supports two query types determined by parameters:

  • Instant Queries - Return a single point-in-time result
  • Range Queries - Return time-series data across a time range with step intervals

The engine automatically detects the type from start, end, and step parameters.

Parser & Syntax

The parser (pkg/logql/syntax/parser.go) uses a YACC-based grammar to convert query strings into an Abstract Syntax Tree (AST). Key expression types:

  • LogSelectorExpr - Stream selection with matchers
  • SampleExpr - Metric extraction and aggregation
  • StageExpr - Pipeline stages (filters, parsers, formatters)

Validation ensures queries have at least one equality or regex matcher, preventing queries that would scan every stream.

Evaluator & Pipeline Processing

The evaluator (pkg/logql/evaluator.go) executes the AST:

  1. Log Queries - Create iterators over matching streams, apply pipeline stages
  2. Metric Queries - Extract samples, apply range/vector aggregations

The log pipeline (pkg/logql/log/pipeline.go) chains stages that filter, parse, and transform log lines. Each stage processes a line and returns the modified line plus extracted labels.
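
A minimal sketch of that chaining is shown below, using an illustrative stage type; Loki's real stages additionally extract and mutate the label set as lines flow through.

import "strings"

// stage is an illustrative pipeline step: it returns the (possibly rewritten)
// line and whether the line survives this stage.
type stage func(line string) (string, bool)

// runPipeline applies stages in order and stops as soon as one drops the line.
func runPipeline(line string, stages []stage) (string, bool) {
    for _, s := range stages {
        var ok bool
        if line, ok = s(line); !ok {
            return "", false
        }
    }
    return line, true
}

// Example stages: a line filter for "error" followed by an upper-casing formatter.
var pipeline = []stage{
    func(l string) (string, bool) { return l, strings.Contains(l, "error") },
    func(l string) (string, bool) { return strings.ToUpper(l), true },
}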

Key Interfaces

  • Query - Represents an executable query with Exec(ctx) method
  • StepEvaluator - Evaluates metric queries step-by-step
  • StreamPipeline - Processes individual log lines through stages
  • Params - Query parameters (time range, limits, expression)

Performance Considerations

  • Stream selectors are critical—more specific matchers reduce data scanned
  • Pipelines are applied per-line, so early filters improve performance
  • Range queries use step intervals to create time-series data
  • The engine supports query sharding for distributed execution

Storage & Indexing

Relevant Files
  • pkg/storage/store.go
  • pkg/storage/chunk/chunk.go
  • pkg/ingester/wal.go
  • pkg/compactor/compactor.go
  • pkg/indexgateway/gateway.go
  • pkg/storage/stores/shipper/indexshipper/tsdb/

Loki stores two primary data types: chunks (compressed log data) and indexes (metadata for finding logs). All data resides in a single object storage backend (S3, GCS, Azure, etc.), with an index shipper managing periodic index files.

Data Format

Chunks are containers for log entries from a specific stream (unique label set) within a time range. Each chunk is compressed using Snappy and includes:

  • Magic number and version bytes
  • Structured metadata (label names/values)
  • Compressed log blocks with individual entries
  • CRC32 checksums for integrity validation

Indexes map label combinations to chunk locations. Loki supports two index formats:

  • TSDB (recommended): Prometheus-inspired format with symbols, series, label indices, and postings tables. Compact, efficient, and supports dynamic query sharding.
  • BoltDB (deprecated): Legacy key-value store format, replaced by TSDB in Loki 2.8+.

Write Path & WAL

The ingester receives log streams and appends entries to in-memory chunks. To ensure durability, Loki uses a Write-Ahead Log (WAL):

  1. Series and entries are logged to disk before being acknowledged
  2. Periodic checkpoints flush in-memory data to disk
  3. On restart, the WAL replays to recover unflushed data
  4. Disk throttling prevents writes when storage exceeds a threshold

The WAL is optional but enabled by default. Configuration includes checkpoint duration, replay memory ceiling, and disk full thresholds.
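
A minimal sketch of the write-ahead idea follows, using illustrative newline-delimited records; Loki's WAL uses a segmented, checksummed record format with periodic checkpoints rather than this simple layout.

import (
    "bufio"
    "os"
)

// walAppend persists one record and syncs it to disk before the write is acknowledged.
func walAppend(f *os.File, record []byte) error {
    if _, err := f.Write(append(record, '\n')); err != nil {
        return err
    }
    return f.Sync() // durable before the caller acknowledges the push
}

// walReplay is called on startup: it re-reads every record so in-memory chunks
// can be rebuilt before the ingester starts serving traffic.
func walReplay(path string, apply func(record []byte)) error {
    f, err := os.Open(path)
    if err != nil {
        return err
    }
    defer f.Close()

    scanner := bufio.NewScanner(f)
    for scanner.Scan() {
        apply(append([]byte(nil), scanner.Bytes()...))
    }
    return scanner.Err()
}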

Index Lifecycle

Head: Per-tenant in-memory accumulator for index entries. Created when chunks are flushed, enabling consistent queries across flushed chunks and unflushed in-memory data.

Compaction: The compactor periodically merges the index files uploaded by individual ingesters into a single index file per period. It:

  1. Downloads multi-tenant index files
  2. Merges series and chunks across files
  3. Applies retention policies (deletes expired data)
  4. Uploads compacted indexes back to storage
  5. Deletes old index files

Compaction runs on a single elected leader to avoid conflicts.

Index Gateway

The Index Gateway serves metadata queries for shipper-based stores, enabling efficient chunk discovery without scanning all data. It:

  • Maintains multiple index clients for different time periods
  • Queries indexes in order (newest first)
  • Supports bloom filtering for accelerated search
  • Handles series volume and label queries
  • Limits response size to 1000 entries per request

The gateway can run in simple mode (single instance) or ring mode (distributed with leader election).

Storage Configuration

Schema configuration defines index and chunk storage per time period:

schema_config:
  configs:
    - from: "2023-01-05"
      store: tsdb
      object_store: gcs
      schema: v13
      index:
        period: 24h
        prefix: index_

Each period can use different backends, enabling gradual migrations. The store abstraction handles multiple object storage providers transparently.
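
Because each period can point at a different index type and object store, anything reading or writing data must first resolve which period a timestamp belongs to. Here is a minimal sketch of that lookup; it is illustrative, and Loki's config package performs the equivalent resolution internally.

import (
    "sort"
    "time"
)

// periodConfig is an illustrative mirror of one entry in schema_config.configs.
type periodConfig struct {
    From        time.Time
    Store       string // e.g. "tsdb"
    ObjectStore string // e.g. "gcs"
}

// activePeriod returns the latest period whose From is not after ts; that
// period's store and schema govern all data written at ts.
func activePeriod(configs []periodConfig, ts time.Time) periodConfig {
    sort.Slice(configs, func(i, j int) bool { return configs[i].From.Before(configs[j].From) })
    active := configs[0]
    for _, c := range configs {
        if !c.From.After(ts) {
            active = c
        }
    }
    return active
}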

Deployment Modes & Operations

Relevant Files
  • cmd/loki/main.go
  • pkg/loki/loki.go
  • production/helm/loki/values.yaml
  • docs/sources/get-started/deployment-modes.md
  • docs/sources/operations/scalability.md
  • docs/sources/operations/loki-canary/_index.md

Loki's flexible architecture allows deployment in three distinct modes, each optimized for different scale and operational requirements. The -target command-line flag controls which components run in a given instance, enabling seamless transitions between modes as your needs evolve.

Single Binary Mode

The simplest deployment runs all components in a single process using -target=all. This mode is ideal for development, testing, and small deployments handling up to approximately 20GB of logs per day. All microservices—distributor, ingester, querier, query frontend, and backend components—execute within one binary.

Advantages: Minimal operational overhead and easy to get started; high availability is possible by running multiple instances against shared object storage with replication_factor=3.

Limitations: Query parallelization is constrained by instance count and max_query_parallelism configuration.

Simple Scalable Mode

Simple Scalable (SSD) separates execution into three independent targets: write, read, and backend. This is the default Helm chart configuration and scales to approximately 1TB of logs per day.

  • Write target (-target=write): Stateful, runs Distributor and Ingester components
  • Read target (-target=read): Stateless, runs Query Frontend and Querier components
  • Backend target (-target=backend): Stateful, runs Compactor, Index Gateway, Query Scheduler, Ruler, and experimental Bloom components

A reverse proxy (Nginx in the Helm chart) routes client requests to appropriate read or write nodes. This mode balances operational complexity with scalability.

Microservices Mode

Each component runs as a distinct process with its own -target flag. This provides maximum granularity for scaling individual components independently. Microservices mode is recommended for deployments exceeding 1TB/day or requiring precise control over resource allocation.

Components: Distributor, Ingester, Querier, Query Frontend, Query Scheduler, Compactor, Index Gateway, Ruler, Bloom Builder, Bloom Gateway, Bloom Planner, and Overrides Exporter.

Complexity: Most operationally demanding but most efficient for large clusters.

Operational Considerations

Query Scheduler Separation

For deployments with multiple query frontends, extract the Query Scheduler into a separate process. This allows the in-memory queue to be shared across frontends, improving query distribution. Configure via -frontend.scheduler-address and -querier.scheduler-address.

Remote Rule Evaluation

Complex or high-volume rules can degrade ruler performance. Enable remote rule evaluation by configuring the ruler to delegate queries to a dedicated query-frontend pool:

ruler:
  evaluation:
    mode: remote
    query_frontend:
      address: dns:///<query-frontend-service>:<grpc-port>

This externalizes rule evaluation, reducing ruler resource usage significantly.

Memory Ballast

In compute-constrained environments, configure ballast_bytes to allocate unused virtual memory. This inflates the heap size, reducing garbage collection frequency and improving performance.
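
The ballast is the common Go trick of holding a large allocation that is never touched, so the garbage collector's heap goal grows and collections run less often. A minimal sketch of the idea follows (ballast_bytes wires up the equivalent inside Loki; runServer is a placeholder).

import "runtime"

func main() {
    // 1 GiB that is never written: it mostly consumes virtual address space,
    // but it raises the GC heap goal and therefore reduces GC frequency.
    ballast := make([]byte, 1<<30)

    runServer() // placeholder for the application's real work

    runtime.KeepAlive(ballast) // keep the ballast reachable for the process lifetime
}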

Monitoring with Loki Canary

Loki Canary is a standalone tool that audits log ingestion performance and correctness. It generates synthetic log entries, verifies they are captured without loss, and exposes Prometheus metrics for monitoring.

Key Metrics:

  • loki_canary_entries_total: Total synthetic entries written
  • loki_canary_missing_entries_total: Entries not found in Loki
  • loki_canary_response_latency: End-to-end latency histogram
  • loki_canary_out_of_order_entries_total: Out-of-order log detection

Canary supports spot-check queries (verifying logs persist from ingester to storage) and metric tests (validating log rate consistency). Deploy as a DaemonSet or standalone pod with unique label values per instance.

Migration Between Modes

Loki decouples storage from ingestion/query logic, enabling mode transitions with minimal configuration changes. The Helm chart supports migration modes: SingleBinary<->SimpleScalable and SimpleScalable<->Distributed for zero-downtime transitions.

Bloom Filters & Query Acceleration

Relevant Files
  • pkg/bloomgateway/bloomgateway.go
  • pkg/bloomgateway/querier.go
  • pkg/bloombuild/builder/builder.go
  • pkg/bloombuild/planner/planner.go
  • pkg/storage/bloom/v1/
  • docs/sources/operations/bloom-filters.md

Bloom filters are probabilistic data structures that accelerate "needle in a haystack" queries by reducing the amount of data Loki must load and process. They enable efficient filtering of chunks based on structured metadata without examining every log line.

Bloom Building Pipeline

The Bloom Planner runs as a single instance and periodically scans TSDB index files to identify gaps in bloom coverage. It compares existing bloom metadata against new or modified TSDB files, then creates build tasks for streams requiring new blooms.

The Bloom Builder is a stateless, horizontally scalable component that pulls tasks from the planner's queue. For each stream, it:

  1. Iterates through log lines in new chunks
  2. Extracts structured metadata key-value pairs
  3. Appends hashes to a scalable bloom filter: hash(key), hash(key=value), and chunk-prefixed variants
  4. Aggregates multiple stream blooms into block files organized by fingerprint ranges

Blocks are stored with accompanying metadata files that enable discovery by gateways and the planner. Builders attempt to reuse existing blooms when TSDB files change, avoiding full rebuilds.
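
A minimal sketch of step 3 is shown below, using a fixed-size filter for brevity (Loki builds scalable bloom filters and tokenizes structured metadata more elaborately). Each key/value pair contributes series-level tokens plus chunk-prefixed variants, so a later query can ask either "does this series ever contain key=value" or "does this specific chunk contain it".

import "hash/fnv"

// bloom is an illustrative fixed-size bloom filter backed by a bit set.
type bloom struct{ bits []uint64 }

func newBloom(m uint32) *bloom { return &bloom{bits: make([]uint64, (m+63)/64)} }

func (b *bloom) add(token string) {
    for seed := byte(0); seed < 3; seed++ { // three hash functions via seeding
        h := fnv.New32a()
        h.Write([]byte{seed})
        h.Write([]byte(token))
        i := h.Sum32() % uint32(len(b.bits)*64)
        b.bits[i/64] |= 1 << (i % 64)
    }
}

// addMetadata adds the tokens described in step 3 for one key/value pair.
func addMetadata(b *bloom, chunkRef, key, value string) {
    b.add(key)                                // hash(key)
    b.add(key + "=" + value)                  // hash(key=value)
    b.add(chunkRef + "/" + key)               // chunk-prefixed variants
    b.add(chunkRef + "/" + key + "=" + value)
}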

Bloom Querying & Filtering

The Bloom Gateway receives filtering requests from the Index Gateway containing a list of chunks and a LogQL filter expression. It:

  1. Resolves which bloom blocks contain the relevant fingerprint ranges
  2. Loads bloom pages from object storage (with LRU caching on local SSD)
  3. Tests each chunk's bloom against the filter expression
  4. Returns only chunks with statistical confidence of matching

The gateway is horizontally scalable with client-side sharding using jumphash for consistent distribution across instances. Each instance owns a subset of the fingerprint range.
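
Jumphash here refers to jump consistent hashing (Lamping and Veach), which maps a 64-bit key to one of n buckets while moving only about 1/(n+1) of keys when a bucket is added. A sketch of the standard algorithm follows; the gateway client's surrounding sharding logic is Loki-specific.

// jumpHash returns a bucket in [0, buckets) for key. Because assignments stay
// stable as the bucket count changes, fingerprint ranges move minimally when
// Bloom Gateway replicas are added or removed.
func jumpHash(key uint64, buckets int32) int32 {
    var b, j int64 = -1, 0
    for j < int64(buckets) {
        b = j
        key = key*2862933555777941757 + 1
        j = int64(float64(b+1) * (float64(int64(1)<<31) / float64((key>>33)+1)))
    }
    return int32(b)
}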

Bloom Filter Data Structure

Blooms use a scalable bloom filter (SBF) with configurable false-positive rate (default 1%). The v1 format stores:

  • Series Index: Metadata about each stream (fingerprint, chunk references)
  • Bloom Pages: Compressed bloom data organized into pages for memory-efficient querying
  • Schema: Magic number, version (V3), and compression codec (e.g., Snappy)

Blocks are built in memory and freed after writing to object storage. The binary format supports lazy loading of bloom pages to limit concurrent memory usage.

Query Acceleration Benefits

For a query like {cluster="prod"} | traceID="3c0e3dcd33e7":

  • Without blooms: Download all chunks matching {cluster="prod"} for 24 hours, iterate every log line
  • With blooms: Skip chunks where the bloom indicates the trace ID is absent, process only likely candidates

This is especially effective for large-scale deployments (>75TB/month) where most chunks don't match the filter.

Configuration & Sizing

Enable blooms via bloom_build and bloom_gateway config blocks. Key sizing considerations:

  • Builders: Process ~4MB/second/core; memory = block size (default 128MB)
  • Gateways: Memory = worker_concurrency × block_query_concurrency × max_query_page_size (e.g., 4 × 8 × 64MB = 2GB minimum)
  • Storage: Blooms typically <1% of raw structured metadata size
  • Retention: Applied per-tenant, respecting both general and stream-specific retention policies

Gateways benefit from multiple locally-attached NVMe SSDs for high IOPS throughput.

Pattern Ingestion & Data Objects

Relevant Files
  • pkg/pattern/ingester.go
  • pkg/pattern/drain/drain.go
  • pkg/pattern/stream.go
  • pkg/dataobj/dataobj.go
  • pkg/dataobj/builder.go
  • pkg/dataobj/README.md

The pattern ingester detects and persists log patterns using the Drain algorithm, while data objects provide a columnar container format for efficient storage and retrieval of structured log data.

Pattern Detection with Drain

The Drain algorithm identifies recurring log patterns by building a parse tree that clusters similar log messages. Each stream maintains separate pattern detectors for different log levels (debug, info, warn, error, etc.).

Key Concepts:

  • Parse Tree: A hierarchical tree structure where tokens from log messages are matched against existing clusters
  • LogCluster: Represents a detected pattern with tokens (constant parts) and placeholders for variable parts
  • Similarity Threshold: Controls pattern-matching sensitivity (default 0.3); a line joins an existing cluster only when its tokens are sufficiently similar to the cluster's pattern
  • Token Depth: The first 30 tokens determine cluster placement, ensuring constant prefixes are preserved

Pattern Lifecycle:

  1. Log lines are tokenized based on format (JSON, logfmt, or punctuation-based)
  2. Tokens are matched against the parse tree to find similar clusters
  3. If similarity exceeds the threshold, the line joins an existing cluster; otherwise, a new cluster is created
  4. Clusters track volume (total bytes) and sample counts over time intervals

// Example: training the detector on a single log line
cluster := drain.Train(logLine, timestamp)
if cluster != nil {
    // The line matched an existing pattern or started a new cluster; the
    // cluster tracks its token pattern, occurrence count, and volume.
}

Data Objects: Columnar Storage Container

Data objects are self-contained files stored in object storage, composed of multiple sections containing structured data. Each section holds both data and metadata regions, enabling efficient columnar storage and querying.

File Structure:

┌─────────────────────────────────┐
│ Header (Magic: "THOR")          │
├─────────────────────────────────┤
│ Section Data (all sections)     │
├─────────────────────────────────┤
│ Section Metadata (all sections) │
├─────────────────────────────────┤
│ File Metadata (protobuf)        │
├─────────────────────────────────┤
│ Footer (size + magic)           │
└─────────────────────────────────┘

Section Types:

  • Logs: Columnar log records (stream_id, timestamp, metadata, message)
  • Streams: Stream metadata and statistics (labels, min/max timestamps, row counts)
  • Pointers: References to other objects for index lookups
  • IndexPointers: Time-range based references to index objects

Building and Flushing Data Objects

The builder pattern accumulates sections and flushes them into a complete data object. Sections are encoded independently, then combined with file metadata.

// Create builder
builder := dataobj.NewBuilder(nil)

// Append section builders
builder.Append(logsBuilder)
builder.Append(streamsBuilder)

// Flush to a complete data object; check the error before using the result
obj, closer, err := builder.Flush()
if err != nil {
    return err
}
defer closer.Close()

Encoding Strategy:

  • Sections buffer data in memory up to a size threshold
  • When full, data is sorted and compressed (zstd)
  • Multiple stripes are merged during flush for better compression
  • File metadata includes section layout and dictionary for string deduplication

Integration: Patterns to Data Objects

The pattern ingester pushes detected patterns back to Loki for persistence. Patterns are aggregated with metrics (occurrence counts, volume) and stored as time-series data, enabling pattern-based log analysis and querying.

Data Flow:

  1. Ingester receives log entries
  2. Drain detects patterns per stream and log level
  3. Pattern samples (timestamp, count) are accumulated
  4. Samples are pushed to Loki via HTTP or written to data objects
  5. Data objects are uploaded to object storage for long-term retention

Development, Testing & Tools

Relevant Files
  • CLAUDE.md - Development guide and code style
  • Makefile - Build system and targets
  • cmd/logcli/main.go - CLI tool for querying Loki
  • cmd/loki-canary/main.go - Canary testing tool
  • integration/loki_single_binary_test.go - Integration tests
  • pkg/logql/bench/README.md - LogQL benchmark suite

Build System

Loki uses a comprehensive Makefile-based build system supporting multiple targets and configurations. The build process is containerized by default for consistency, but can run natively with BUILD_IN_CONTAINER=false.

Core Build Targets:

make all                      # Build all executables (loki, logcli, promtail, loki-canary)
make loki                     # Build Loki server
make logcli                   # Build CLI query tool
make promtail                 # Build log shipper
make loki-canary              # Build canary testing tool
make test                     # Run all unit tests
make test-integration         # Run integration tests
make lint                     # Run all linters
make format                   # Format code (gofmt and goimports)

The build system uses Go 1.25.5 and supports cross-compilation for multiple platforms (Linux, macOS, Windows, FreeBSD) and architectures (amd64, arm64, arm).

Frontend Development

The Loki UI is built with Vite and located in pkg/ui/frontend. Frontend commands are run from that directory:

make build          # Build the frontend
make dev            # Start development server
make test           # Run frontend tests
make lint           # Lint TypeScript/React code
make check-deps     # Check for vulnerabilities

Testing Infrastructure

Loki provides multiple testing layers:

Unit Tests: Run with make test (or go test ./...). Tests use table-driven patterns and structured logging. Coverage is tracked in coverage.txt.

Integration Tests: Located in integration/ directory, tagged with //go:build integration. Run with make test-integration. These tests use a cluster abstraction to spin up Loki components and validate end-to-end behavior like ingestion and querying.

Benchmark Suite: The LogQL benchmark suite in pkg/logql/bench/ generates realistic log data and benchmarks queries:

make generate                 # Generate test dataset
make bench                    # Run all benchmarks
make run                      # Interactive benchmark UI

Development Tools

LogCLI: Command-line tool for querying Loki. Supports multiple output formats (default, raw, jsonl), timezone handling, and statistics. Built from cmd/logcli/main.go.

Loki-Canary: Synthetic monitoring tool that continuously writes and reads logs to detect data loss or latency issues. Supports TLS, authentication, and metrics export on port 3500.

Code Quality: Linting uses golangci-lint with a 15-minute timeout. Additional checks include faillint for forbidden imports (e.g., enforcing go.uber.org/atomic over sync/atomic), and links in Markdown documentation are checked with lychee.

Code Style Guidelines

Follow standard Go conventions: gofmt/goimports formatting, CamelCase for exported identifiers, structured logging with go-kit/log. Always check errors with if err != nil { return ... }. Use table-driven tests and Conventional Commits format (<type>: description). Frontend code uses TypeScript with functional components and lowercase-dash directory naming.
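
A minimal example of the table-driven pattern follows; normalizeTenant is an illustrative function invented for the example, not part of Loki.

import "testing"

// normalizeTenant is an illustrative function under test: it falls back to a
// default single-tenant ID when no tenant is supplied.
func normalizeTenant(id string) string {
    if id == "" {
        return "fake"
    }
    return id
}

func TestNormalizeTenant(t *testing.T) {
    tests := []struct {
        name  string
        input string
        want  string
    }{
        {name: "explicit tenant", input: "tenant-a", want: "tenant-a"},
        {name: "empty falls back to default", input: "", want: "fake"},
    }
    for _, tc := range tests {
        t.Run(tc.name, func(t *testing.T) {
            if got := normalizeTenant(tc.input); got != tc.want {
                t.Errorf("normalizeTenant(%q) = %q, want %q", tc.input, got, tc.want)
            }
        })
    }
}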

Documentation Standards

Documentation follows the Grafana Writers' Toolkit style guide using CommonMark markdown. Large features require Loki Improvement Documents (LIDs). Configuration documentation is auto-generated from code. Upgrade steps are documented in docs/sources/setup/upgrade/_index.md. Preview docs locally with make docs from the /docs directory.