Overview
Relevant Files
- README.md
- cmd/daprd/README.md
- dapr/README.md
- pkg/runtime/runtime.go
- pkg/runtime/processor/processor.go
- pkg/actors/actors.go
Dapr is a distributed application runtime that provides a set of integrated APIs with built-in best practices and patterns for building secure, reliable microservices. It is a graduated CNCF project designed to increase developer productivity by 20-40% through out-of-the-box features like workflow, pub/sub, state management, secret stores, bindings, actors, distributed locks, and cryptography.
Core Architecture
Dapr operates as a sidecar (container or process) injected alongside each application instance. Applications communicate with the sidecar via standard HTTP or gRPC protocols, enabling language-agnostic development without requiring SDKs or framework dependencies. This design allows Dapr to support all existing and future programming languages seamlessly.
The runtime consists of two main layers:
- Application-Facing APIs - Exposed through HTTP and gRPC endpoints for state management, pub/sub messaging, service invocation, actors, bindings, secrets, workflows, and distributed locks.
- Control Plane Services - Kubernetes operator, placement service for actor distribution, sentry for certificate authority and mTLS, scheduler for job scheduling, and the injector for sidecar injection.
Key Features
- Pluggable Component Architecture - Support for multiple implementations of state stores, pub/sub providers, secret stores, and other building blocks without vendor lock-in.
- Built-in Security - mTLS communication, certificate management via Sentry, and access control policies.
- Observability - Integrated tracing, metrics collection, and health monitoring.
- Resiliency - Configurable retry policies, circuit breakers, and timeout handling.
- Multi-Platform - Runs natively on Kubernetes, as a standalone binary, containerized, or on IoT devices.
Runtime Components
The core runtime (daprd) manages:
- Direct Messaging - Service-to-service invocation with service discovery and load balancing.
- Pub/Sub Adapter - Event-driven messaging with guaranteed delivery semantics.
- Actor Runtime - Virtual actor model with state management, reminders, and timers.
- Component Processor - Lifecycle management for state stores, bindings, and other components.
- Workflow Engine - Durable workflow orchestration and job scheduling.
Deployment Modes
Dapr supports multiple deployment scenarios:
- Kubernetes - Native integration with Operator and CRDs for declarative configuration.
- Standalone - Self-hosted binary for local development or edge deployments.
- Containerized - Docker containers for cloud or on-premises environments.
- IoT/Edge - Lightweight footprint (58MB binary, 4MB physical memory) for resource-constrained devices.
The runtime emphasizes consistency and portability through open APIs, enabling seamless transitions between platforms and underlying implementations without code rewrites.
Architecture & System Services
Relevant Files
- pkg/runtime/runtime.go
- cmd/daprd/app/app.go
- cmd/operator/app/app.go
- cmd/placement/app/app.go
- cmd/sentry/app/app.go
- cmd/scheduler/app/app.go
- cmd/injector/app/app.go
- pkg/operator/operator.go
- pkg/placement/placement.go
- pkg/sentry/sentry.go
- pkg/scheduler/server/server.go
Dapr is a distributed runtime composed of multiple system services that work together to provide a complete microservices platform. The architecture follows a sidecar pattern where each application gets its own Dapr runtime instance, coordinated by control-plane services.
System Services Overview
The Dapr system consists of six core services:
Daprd (Sidecar Runtime) - The main runtime that runs alongside each application. It exposes HTTP and gRPC APIs for state management, pub/sub, service invocation, actors, and other building blocks. Each daprd instance manages components, handles direct messaging, and coordinates with control-plane services.
Operator - A Kubernetes controller that manages Dapr resources (Components, Configurations, Subscriptions, HTTPEndpoints). It watches for resource changes, validates them, and distributes updates to all daprd instances. The operator also manages sidecar lifecycle and pod watchdog functionality.
Placement Service - Manages actor placement and distribution across daprd instances using a Raft-based consensus algorithm. It maintains distributed hash tables that map actor types to their host locations, enabling stateful actor routing and failover.
Sentry - The certificate authority (CA) for the control plane. It issues and manages mTLS certificates for all Dapr services and applications. Sentry supports multiple identity validators (Kubernetes, OIDC, JWKS) and provides SPIFFE-based identity.
Scheduler - Handles job scheduling and cron-based task execution. It uses embedded etcd for distributed state management and coordinates scheduled jobs across the cluster.
Sidecar Injector - A Kubernetes mutating webhook that automatically injects the Dapr sidecar container into pods. It patches pod specifications with the daprd container, environment variables, and volume mounts based on pod annotations.
Runtime Architecture
Component Registry and Initialization
The runtime uses a component registry pattern to load and manage pluggable components. During initialization, daprd loads components from the configured path and registers them with the appropriate subsystems (state stores, pub/sub, bindings, etc.). The processor handles component lifecycle and routes requests to the correct component implementation.
Security and mTLS
All inter-service communication is secured with mTLS. The security handler manages certificate acquisition from Sentry and enforces mutual authentication. Each service has a SPIFFE identity that uniquely identifies it within the trust domain.
Graceful Shutdown
Services use a RunnerCloser pattern for coordinated startup and shutdown. Multiple runners execute concurrently, and graceful shutdown waits for all runners to complete within a configured grace period before forcefully terminating.
HTTP & gRPC APIs
Relevant Files
- pkg/api/http/http.go
- pkg/api/http/server.go
- pkg/api/http/middlewares.go
- pkg/api/grpc/grpc.go
- pkg/api/grpc/server.go
- pkg/api/grpc/endpoints.go
- dapr/proto/runtime/v1/dapr.proto
Dapr exposes two primary API protocols for application communication: HTTP and gRPC. Both protocols provide access to the same core building blocks (state, pub/sub, bindings, actors, etc.) but with different transport characteristics and performance profiles.
HTTP API
The HTTP API is built on the chi router and provides RESTful endpoints for all Dapr services. The server supports both HTTP/1.1 and HTTP/2 Cleartext (H2C) by default, enabling efficient multiplexing for high-throughput scenarios.
Key Components:
- Server Setup (pkg/api/http/server.go): Manages multiple listeners (public and internal ports), applies the middleware stack, and handles graceful shutdown.
- Endpoint Registration (pkg/api/http/http.go): Constructs endpoint definitions for state, pub/sub, bindings, actors, secrets, configuration, crypto, and workflow operations.
- Routing: Routes follow the pattern /{version}/{route} (e.g., /v1.0/state/{storeName}/{key}).
Middleware Stack (applied in order):
- Max body size enforcement
- Context setup (request metadata)
- Distributed tracing (if enabled)
- Metrics collection (if enabled)
- CORS handling
- API token authentication
- Component middleware
- Request/response logging
Public vs. Internal Endpoints:
- Public endpoints (separate port): Health checks and metadata endpoints accessible without authentication.
- Internal endpoints (main port): Full API access with authentication and access control.
gRPC API
The gRPC API implements the Dapr service definition from dapr.proto and provides bidirectional streaming, lower latency, and efficient binary serialization. Two server types exist: external (for applications) and internal (for Dapr-to-Dapr communication).
Service Definition:
The Dapr service exposes methods for:
- Service invocation (InvokeService)
- State management (GetState, SaveState, DeleteState, ExecuteStateTransaction)
- Pub/Sub (PublishEvent, SubscribeTopicEventsAlpha1)
- Bindings (InvokeBinding)
- Actors (timers, reminders, state)
- Secrets, configuration, crypto, workflows, and jobs
Middleware Stack:
- Metadata extraction (gRPC headers)
- API access control (allowlist/denylist)
- Token authentication
- Distributed tracing
- Metrics collection
- API logging
Key Features:
- Reflection: gRPC reflection is enabled for dynamic service discovery.
- Streaming: Supports bidirectional streams for pub/sub subscriptions and service invocation.
- Message Size: Configurable max receive/send message sizes and header list sizes.
- Security: Optional mTLS support via security handlers.
- Proxy Mode: Unknown service handler for proxying requests to other services.
API Access Control
Both protocols support allowlist and denylist rules per endpoint. Rules are defined in the API specification and enforced via middleware:
- HTTP: Checked during route registration and request handling.
- gRPC: Enforced via unary and stream interceptors.
Health check endpoints (/v1.0/healthz) bypass authentication by default.
Configuration
Both servers are configured via ServerConfig:
- Port: Listen address and port
- MaxRequestBodySize: Limits request payload size
- ReadBufferSize: HTTP header buffer size
- APILogHealthChecks: Whether to log health check requests
- UnixDomainSocket: Optional Unix socket support (gRPC only)
Component System & Pluggable Architecture
Relevant Files
- pkg/runtime/processor/components.go
- pkg/runtime/registry/registry.go
- pkg/components/pluggable/grpc.go
- pkg/components/loader/localloader.go
- pkg/internal/loader/kubernetes/components.go
- pkg/runtime/hotreload/reconciler/components.go
- pkg/components/category.go
Overview
Dapr's component system provides a pluggable architecture for integrating external services and capabilities. Components are modular building blocks that implement specific interfaces (state stores, pub/sub brokers, bindings, etc.) and can be loaded dynamically at runtime. The system supports both built-in components and external pluggable components via gRPC.
Component Categories
Components are organized into distinct categories, each serving a specific purpose:
- State Stores (state.*) - Persistent data storage backends
- Pub/Sub (pubsub.*) - Message brokers for publish-subscribe patterns
- Bindings (bindings.*) - Input/output connectors for external systems
- Secret Stores (secretstores.*) - Secure credential management
- Configuration (configuration.*) - Dynamic configuration providers
- Locks (lock.*) - Distributed locking mechanisms
- Crypto (crypto.*) - Cryptographic operations
- Name Resolution (nameresolution.*) - Service discovery
- Middleware (middleware.*) - HTTP request/response processing
- Conversations (conversation.*) - AI conversation management
Component Lifecycle
Component Registration & Discovery
Each component category maintains a Registry that maps component names to factory functions. When a component is declared, the system:
- Loads the manifest (from Kubernetes, local files, or operator)
- Determines the category by parsing the type prefix (e.g., state.redis)
- Looks up the factory in the appropriate category registry
- Creates an instance with the provided metadata
- Calls Init() to configure the component
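The registration-and-lookup flow can be sketched as follows; Register and Create are hypothetical helpers standing in for Dapr's per-category registries:

```go
package main

import (
	"fmt"
	"strings"
)

// Factory creates a component instance from metadata (simplified).
type Factory func(metadata map[string]string) (any, error)

// registries maps category -> component name -> factory, mirroring the
// per-category Registry described above.
var registries = map[string]map[string]Factory{}

// Register adds a factory under its category and name.
func Register(category, name string, f Factory) {
	if registries[category] == nil {
		registries[category] = map[string]Factory{}
	}
	registries[category][name] = f
}

// Create parses a component type such as "state.redis" into category and
// name, looks up the factory, and instantiates the component.
func Create(compType string, md map[string]string) (any, error) {
	category, name, ok := strings.Cut(compType, ".")
	if !ok {
		return nil, fmt.Errorf("invalid component type %q", compType)
	}
	f, ok := registries[category][name]
	if !ok {
		return nil, fmt.Errorf("no factory registered for %s", compType)
	}
	return f(md)
}

func main() {
	Register("state", "redis", func(md map[string]string) (any, error) {
		return "redis state store at " + md["host"], nil
	})
	inst, err := Create("state.redis", map[string]string{"host": "localhost:6379"})
	fmt.Println(inst, err)
}
```

In the real runtime the factory returns a typed interface (state store, pub/sub, etc.) and Init() is then called with the component's parsed metadata.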
For pluggable components (external gRPC services), discovery happens via Unix socket reflection. The system scans a socket directory, uses gRPC reflection to identify available services, and registers them dynamically.
Pluggable Components & gRPC
Pluggable components run as separate processes and communicate with Dapr via gRPC over Unix sockets. The GRPCConnector generic type handles:
- Connection management - Dial, ping, and close operations
- Instance multiplexing - Multiple component instances over a single connection using metadata headers
- Error conversion - Maps gRPC errors to business errors
- Lifecycle - Context cancellation and graceful shutdown
Each pluggable component type (state, pubsub, bindings, etc.) wraps the generic connector with type-specific gRPC clients generated from protobuf definitions.
Component Processing & Dependency Management
The Processor orchestrates component initialization with support for dependency ordering. Components can depend on secret stores for credential resolution. The processor:
- Queues pending components in a channel
- Preprocesses each component to resolve secret dependencies
- Defers components with unmet dependencies
- Initializes ready components with a configurable timeout (default 5 seconds)
- Processes dependent components once their dependencies are ready
Failed components can be configured to either fail fast or be ignored via the ignoreErrors flag.
Hot-Reload & Reconciliation
The Reconciler monitors component changes and applies updates without restarting Dapr. When a component is updated:
- Verify authorization and actor state store constraints
- Close the existing component instance
- Reinitialize with new configuration
- Notify subscribers of the change
This enables zero-downtime configuration updates for most component types.
Component Store
The ComponentStore maintains the runtime state of all loaded components, indexed by category and name. It provides thread-safe access to:
- Active component instances
- Pending components awaiting initialization
- Component metadata and configuration
- Special component assignments (e.g., actor state store)
This centralized store ensures consistent component visibility across all runtime subsystems.
State Management & Persistence
Relevant Files
- pkg/runtime/processor/state/state.go
- pkg/actors/state/state.go
- pkg/encryption/state.go
- pkg/components/state/state_config.go
- pkg/runtime/compstore/statestore.go
- pkg/outbox/outbox.go
Dapr provides a pluggable state management system that abstracts away the complexity of persisting application state across multiple backend stores. The system handles initialization, encryption, transactional operations, and outbox patterns for reliable state persistence.
State Store Initialization
State stores are initialized through the runtime processor during component setup. The pkg/runtime/processor/state package manages the lifecycle of state store components:
- Component Registration: State stores are registered via a component registry that supports multiple versions and implementations
- Encryption Setup: Automatic encryption keys are extracted from secret stores and registered for encrypted state stores
- Actor State Store: When actors are enabled, a designated state store can be marked as the actor state store using the actorstatestore property
- Configuration Persistence: State configuration (key prefix strategies) is saved for each store to ensure consistent key naming
```go
// State store initialization with encryption
encKeys, err := encryption.ComponentEncryptionKey(comp, secretStore)
if err != nil {
	return err
}
if encKeys.Primary.Key != "" {
	encryption.AddEncryptedStateStore(comp.ObjectMeta.Name, encKeys)
}
```
Key Prefix Strategies
State keys are automatically prefixed based on configurable strategies to prevent collisions in shared stores:
- appid (default): Keys prefixed with the application ID
- namespace: Keys prefixed with namespace and app ID for multi-tenant scenarios
- name: Keys prefixed with the state store name
- none: No prefix applied
The separator || is used to delimit prefixes from keys. Original keys can be recovered using GetOriginalStateKey().
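A sketch of the prefixing logic under the strategies listed above; the helper names are illustrative, the real logic lives in pkg/components/state:

```go
package main

import (
	"fmt"
	"strings"
)

const separator = "||"

// prefixedKey applies one of the key prefix strategies described above.
func prefixedKey(strategy, appID, namespace, storeName, key string) string {
	switch strategy {
	case "none":
		return key
	case "namespace":
		return namespace + "." + appID + separator + key
	case "name":
		return storeName + separator + key
	default: // "appid" is the default strategy
		return appID + separator + key
	}
}

// originalKey recovers the caller's key from a prefixed one, analogous
// to GetOriginalStateKey.
func originalKey(key string) string {
	if _, k, ok := strings.Cut(key, separator); ok {
		return k
	}
	return key
}

func main() {
	k := prefixedKey("appid", "orders", "", "statestore", "cart-42")
	fmt.Println(k)              // orders||cart-42
	fmt.Println(originalKey(k)) // cart-42
}
```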
Actor State Operations
The pkg/actors/state package provides specialized state management for actors with three core operations:
- Get: Retrieves single actor state with optional distributed locking
- GetBulk: Retrieves multiple state keys in a single operation
- TransactionalStateOperation: Performs atomic multi-key updates with ACID guarantees
All operations include:
- Partition key metadata for distributed state stores
- Resiliency policies for fault tolerance
- Placement-based distributed locking when needed
```go
// Transactional actor state update
operations := []contribstate.TransactionalStateOperation{ /* ... */ }
err := s.TransactionalStateOperation(ctx, ignoreHosted, req, lock) // req wraps operations
```
State Encryption
The pkg/encryption/state.go module provides transparent encryption for sensitive state:
- Automatic Encryption: Values are encrypted using registered keys before storage
- Key Rotation: Supports primary and secondary keys for seamless key rotation
- Metadata Embedding: Encryption key names are appended to encrypted values for decryption routing
- Selective Encryption: Only state stores with registered encryption keys are encrypted
```go
// Encrypt value with registered key
encrypted, err := encryption.TryEncryptValue(storeName, value)

// Decrypt with automatic key selection
decrypted, err := encryption.TryDecryptValue(storeName, encrypted)
```
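Under the hood this is authenticated symmetric encryption. A self-contained AES-GCM sketch of the seal/open round trip (not Dapr's exact wire format, which also embeds the key name for decryption routing):

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// sealValue encrypts plaintext with AES-GCM, prepending the nonce so
// openValue can recover it.
func sealValue(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}

// openValue splits off the nonce and authenticates + decrypts the rest.
func openValue(key, sealed []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	n := gcm.NonceSize()
	if len(sealed) < n {
		return nil, fmt.Errorf("sealed value too short")
	}
	return gcm.Open(nil, sealed[:n], sealed[n:], nil)
}

func main() {
	key := make([]byte, 32) // AES-256 key, e.g. loaded from a secret store
	rand.Read(key)
	sealed, _ := sealValue(key, []byte(`{"balance":100}`))
	plain, _ := openValue(key, sealed)
	fmt.Println(string(plain))
}
```

GCM also authenticates the ciphertext, so tampered state values fail to decrypt rather than silently corrupting data.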
Outbox Pattern
The outbox pattern combines state and pub/sub for reliable event publishing:
- Transactional Publishing: State changes and event publishing happen atomically
- Internal Topics: Events are published to internal topics for guaranteed delivery
- Subscription Management: Applications subscribe to internal topics to process published events
This ensures that state updates and their corresponding events are never lost, even during failures.
Component Store
The ComponentStore maintains thread-safe access to all registered state stores:
- State Store Registry: Maps store names to their implementations
- Actor State Store: Tracks the designated actor state store separately
- Thread Safety: Uses RWMutex for concurrent read/write access
- Lifecycle Management: Supports adding, retrieving, and deleting stores
Pub/Sub & Event-Driven Messaging
Relevant Files
- pkg/runtime/pubsub/adapter.go
- pkg/runtime/processor/pubsub/pubsub.go
- pkg/runtime/subscription/subscription.go
- pkg/components/pubsub/pluggable.go
- pkg/runtime/pubsub/publisher/publisher.go
- pkg/runtime/subscription/postman/postman.go
Dapr's pub/sub system enables event-driven communication between applications through pluggable message brokers. The architecture separates concerns into component initialization, publishing, subscription, and message delivery.
Architecture Overview
Core Components
PubsubItem (pkg/runtime/pubsub/adapter.go) wraps a pub/sub component with scoped access controls:
- Component: The underlying pub/sub implementation
- ScopedSubscriptions & ScopedPublishings: Topic access restrictions per app
- AllowedTopics & ProtectedTopics: Global topic policies
- NamespaceScoped: Whether topics are namespace-prefixed
Processor (pkg/runtime/processor/pubsub/pubsub.go) initializes pub/sub components:
- Creates component instances from registry
- Applies metadata and consumer ID configuration
- Registers subscriptions with the subscriber
- Handles component lifecycle (init/close)
Subscription (pkg/runtime/subscription/subscription.go) manages topic subscriptions:
- Subscribes to broker topics with callback handlers
- Transforms messages into CloudEvents
- Routes events based on matching rules
- Supports dead-letter queues for failed messages
- Handles both single and bulk message delivery
Publishing Flow
The publisher adapter validates requests before forwarding to components:
- Validation: Check if pubsub exists and topic is allowed
- Namespace Handling: Prepend namespace if component is namespace-scoped
- Resiliency: Apply outbound policies (retries, timeouts)
- Component Publish: Forward to underlying broker
Bulk publishing uses concurrent workers (max 100) to publish multiple messages in parallel, collecting failed entries for partial success responses.
Subscription & Message Delivery
Subscriptions follow a multi-stage pipeline:
- Message Reception: Component delivers messages via callback
- CloudEvent Transformation: Convert raw payloads or binary content to CloudEvent format
- Route Matching: Evaluate CEL expressions against event properties
- Delivery: Postman sends to app via HTTP or gRPC
- Error Handling: Retry with resiliency policies or send to dead-letter topic
CloudEvents are the standard envelope format. Metadata prefixed with cloudevent. overrides event properties (e.g., cloudevent.type, cloudevent.source). Trace context (traceid, traceparent) is propagated through event metadata.
Bulk Operations
Bulk subscribe enables efficient batch processing:
- Messages grouped by route path
- Sent together in a single request to the app
- Responses include per-message status (success/failure)
- Failed messages can be sent to dead-letter queues
Access Control
The IsOperationAllowed function enforces topic policies:
- If AllowedTopics is set, only those topics are permitted
- If ProtectedTopics is set, a scoped topic must match
- Scoped topics (per-app) are checked against subscription/publishing scopes
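Those checks can be sketched as a single predicate; this is simplified relative to the real IsOperationAllowed, which also distinguishes subscribe from publish scopes:

```go
package main

import "fmt"

// isOperationAllowed applies the policy rules described above:
// a global allowlist, protected topics, and per-app scoped topics.
func isOperationAllowed(topic string, allowed, protected, scoped []string) bool {
	contains := func(list []string, t string) bool {
		for _, x := range list {
			if x == t {
				return true
			}
		}
		return false
	}
	// If an allowlist exists, the topic must be on it.
	if len(allowed) > 0 && !contains(allowed, topic) {
		return false
	}
	// Protected topics require an explicit per-app scope.
	if contains(protected, topic) && !contains(scoped, topic) {
		return false
	}
	// If scopes are defined for this app, the topic must be in them.
	if len(scoped) > 0 && !contains(scoped, topic) {
		return false
	}
	return true
}

func main() {
	fmt.Println(isOperationAllowed("orders", []string{"orders"}, nil, nil)) // true
	fmt.Println(isOperationAllowed("audit", nil, []string{"audit"}, nil))   // false
}
```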
Actors & Virtual Actor Model
Relevant Files
- pkg/actors/actors.go
- pkg/actors/state/state.go
- pkg/actors/reminders/reminders.go
- pkg/actors/timers/timers.go
- pkg/actors/router/router.go
- pkg/actors/table/table.go
- pkg/actors/internal/placement/placement.go
- pkg/actors/internal/scheduler/scheduler.go
Overview
The Dapr actors framework implements the Virtual Actor Model, a distributed computing pattern where actors are lightweight, stateful entities that process messages sequentially. Each actor is uniquely identified by an actor type and actor ID, and can maintain private state, communicate with other actors, and schedule reminders or timers.
Core Architecture
Key Components
Actor Runtime (actors.go): The main orchestrator that initializes and manages all actor subsystems. It coordinates the table, router, state store, reminders, and timers. The runtime can be disabled if placement addresses are not configured.
Actor Table (table.go): Maintains a registry of active actor instances, organized by actor type. It handles actor creation, lifecycle management, and reentrancy tracking. Supports dynamic registration and unregistration of actor types.
Router (router.go): Routes actor method invocations to local or remote actors. It handles placement lookups, distributed locking, and resiliency policies. Supports both direct calls and streaming invocations.
State Management (state.go): Provides transactional state operations for actors. Requires a state store with ETag and transactional support. Uses partition keys for isolation and supports bulk operations.
Reminders (reminders.go): Persistent, durable reminders that survive actor deactivation. Managed by the Scheduler service, reminders can only be created on hosted actor types. Supports CRUD operations and listing.
Timers (timers.go): In-memory timers that are lost on actor deactivation. Stored in memory and automatically cleaned up. Simpler than reminders, used for short-lived scheduling.
Placement & Locking
The Placement Service determines which Dapr instance hosts each actor. It uses consistent hashing to distribute actors across the cluster. The placement client maintains a lock for state consistency during actor operations, preventing concurrent modifications.
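The consistent-hashing idea can be sketched with a basic hash ring; the real placement service adds virtual nodes and replicates the table via Raft:

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

// lookupHost maps an actor to a host: hosts sit on a ring of hash
// positions, and the actor belongs to the first host at or clockwise
// from its own hash.
func lookupHost(hosts []string, actorType, actorID string) string {
	hash := func(s string) uint32 {
		h := fnv.New32a()
		h.Write([]byte(s))
		return h.Sum32()
	}
	type node struct {
		pos  uint32
		host string
	}
	ring := make([]node, 0, len(hosts))
	for _, h := range hosts {
		ring = append(ring, node{hash(h), h})
	}
	sort.Slice(ring, func(i, j int) bool { return ring[i].pos < ring[j].pos })

	key := hash(actorType + "/" + actorID)
	for _, n := range ring {
		if key <= n.pos {
			return n.host
		}
	}
	return ring[0].host // wrap around the ring
}

func main() {
	hosts := []string{"sidecar-a:50002", "sidecar-b:50002", "sidecar-c:50002"}
	fmt.Println(lookupHost(hosts, "OrderActor", "order-17"))
}
```

Because only keys adjacent to a changed host move, adding or removing a sidecar relocates a small fraction of actors instead of reshuffling the whole cluster.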
Reentrancy
Actors support configurable reentrancy through the ReentrancyStore. This allows an actor to call itself or other actors without deadlock, with a configurable maximum stack depth to prevent infinite recursion.
State Isolation
Actor state is isolated using composite keys: appID || actorType || actorID || stateKey. The state store receives partition keys for optimization, enabling efficient sharding and isolation per actor instance.
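For illustration, the composite key construction (the helper name is assumed; the real constructor lives in pkg/actors/state):

```go
package main

import "fmt"

// actorStateKey builds the composite state key described above, using
// the same "||" separator as the state key prefixing scheme.
func actorStateKey(appID, actorType, actorID, key string) string {
	return appID + "||" + actorType + "||" + actorID + "||" + key
}

func main() {
	fmt.Println(actorStateKey("orders", "OrderActor", "order-17", "status"))
	// orders||OrderActor||order-17||status
}
```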
Bindings, Workflows & Jobs
Relevant Files
- pkg/components/bindings/output_pluggable.go
- pkg/components/bindings/input_pluggable.go
- pkg/runtime/wfengine/wfengine.go
- pkg/scheduler/server/server.go
- pkg/runtime/processor/binding/binding.go
- pkg/runtime/scheduler/scheduler.go
Bindings: Input & Output
Bindings enable Dapr applications to connect to external systems and services. They come in two flavors:
Input Bindings receive events from external sources and push them to your application. The grpcInputBinding implements a bi-directional gRPC stream that continuously reads messages from a pluggable component. When a message arrives, it's passed to a handler function, which processes it and sends back an acknowledgment. This pattern ensures reliable message delivery with proper error handling.
Output Bindings send data from your application to external systems. The grpcOutputBinding wraps a gRPC client that invokes operations on remote components. Each binding declares supported operations (e.g., create, delete, get) which are discovered during initialization. Requests include data, metadata, and an operation type, with responses containing the result and optional content type information.
Both binding types are registered in a central Registry that maps component names to factory functions. During runtime initialization, the processor discovers and initializes bindings based on component configuration, storing them in the ComponentStore for later access.
Workflow Engine: Orchestrations & Activities
The Dapr Workflow Engine enables developers to write durable, resilient workflows using code. It introduces internal actors as core runtime primitives—actors implemented directly in daprd with no host application dependency.
Orchestrations (workflows) are the main execution units. Each workflow instance corresponds to a single internal actor. The workflow logic runs in the host application and communicates with the engine via gRPC. The engine manages workflow state, including an inbox for incoming events and a history log of all executed steps. State is stored using a key-value scheme (inbox-NNNNNN, history-NNNNNN) that supports arbitrarily large workflows.
Activities are individual tasks called by orchestrations. When an orchestration schedules an activity, a separate activity actor is created. The activity actor sends work items to the engine, which forwards them to the host application. Once the application completes the activity, the result is sent back as a new event to the orchestration actor.
The engine uses reminders for resilience. Each workflow step is driven by a reminder (e.g., start, new-event, timer, run-activity). If a process crashes mid-execution, the reminder reschedules the work, allowing recovery from the last checkpoint stored in the state store.
Scheduler Service: Job Management
The Scheduler is a distributed service that manages time-based job execution across a Dapr cluster. It uses etcd for distributed coordination and cron for job scheduling.
When a job is scheduled (via ScheduleJob), the Scheduler stores it in etcd and broadcasts it to all connected daprd sidecars. Each sidecar watches for jobs targeting its app and executes them at the scheduled time. Jobs can target actors (for reminders) or other components.
The Scheduler maintains a pool of worker goroutines that execute jobs concurrently. It supports cron expressions, one-time schedules, and TTL-based expiration. The WatchJobs and WatchHosts gRPC streams keep sidecars synchronized with the cluster state, enabling dynamic scaling and failover.
Integration: Workflows & Scheduler
Workflows and the Scheduler work together seamlessly. When a workflow is created, a reminder is scheduled to trigger its initial execution. As the workflow progresses, additional reminders are created for each step (activities, timers, events). The Scheduler ensures these reminders fire reliably, even across process restarts or cluster topology changes.
Security, mTLS & Observability
Relevant Files
- pkg/security/security.go
- pkg/sentry/sentry.go
- pkg/sentry/server/server.go
- pkg/diagnostics/http_monitoring.go
- pkg/diagnostics/grpc_monitoring.go
- pkg/diagnostics/tracing.go
- pkg/metrics/exporter.go
Dapr implements comprehensive security through mutual TLS (mTLS) and observability through integrated metrics, tracing, and monitoring. These systems work together to ensure secure inter-service communication while providing visibility into system behavior.
mTLS and Certificate Management
Dapr uses SPIFFE (Secure Production Identity Framework for Everyone) for identity and certificate management. Each service receives a unique SPIFFE ID of the form spiffe://{trust-domain}/ns/{namespace}/{app-id}, which is embedded in X.509 certificates issued by the Sentry certificate authority.
The security handler manages certificate lifecycle:
- Certificate Acquisition - Services request certificates from Sentry via gRPC, providing a certificate signing request (CSR).
- Automatic Rotation - Certificates are automatically rotated before expiration, with the SPIFFE library handling renewal.
- Trust Anchors - Root CA certificates are loaded from files or static configuration and watched for updates.
- Identity Binding - Certificates are bound to specific SPIFFE IDs, enabling fine-grained authorization policies.
```go
// Security handler provides mTLS configuration
type Handler interface {
	GRPCServerOptionMTLS() grpc.ServerOption
	GRPCDialOptionMTLS(spiffeid.ID) grpc.DialOption
	MTLSClientConfig(spiffeid.ID) *tls.Config
	CurrentTrustAnchors(context.Context) ([]byte, error)
}
```
Sentry Certificate Authority
Sentry is the control-plane service responsible for issuing and managing certificates. It validates certificate requests using pluggable validators:
- Kubernetes Validator - Verifies requests from pods using Kubernetes service account tokens.
- JWKS Validator - Validates JWT tokens against a JSON Web Key Set.
- Insecure Validator - Allows any request (development only).
Sentry also provides OIDC endpoints for external systems to verify Dapr identities via JWT tokens.
Observability Stack
Dapr integrates three observability pillars:
Metrics - OpenCensus-based metrics collection with Prometheus export. Metrics cover HTTP/gRPC latency, request/response sizes, error rates, and component-specific counters. The metrics exporter listens on a configurable port and exposes metrics in Prometheus format.
Tracing - Distributed tracing using OpenTelemetry with W3C Trace Context propagation. Traces capture request flows across services, including span attributes for method names, status codes, and error details.
Health Monitoring - Liveness and readiness probes for all services. The healthz subsystem tracks component health and signals readiness when all dependencies are initialized.
HTTP and gRPC Monitoring
Both HTTP and gRPC protocols have dedicated monitoring implementations that track:
- Request/Response Metrics - Bytes sent/received, latency percentiles, request counts.
- Status Tracking - HTTP status codes and gRPC error codes.
- Path Matching - Optional path-based metric cardinality control to prevent metric explosion.
- Health Probes - Separate tracking for liveness/readiness probes to avoid skewing latency metrics.
The monitoring system uses tag-based dimensions (app ID, method, status) to enable flexible querying and aggregation in Prometheus.
Kubernetes Integration & Operator
Relevant Files
- pkg/operator/operator.go
- pkg/operator/watchdog.go
- pkg/injector/service/injector.go
- pkg/apis/components/v1alpha1/types.go
- pkg/operator/handlers/dapr_handler.go
- pkg/operator/api/api.go
- pkg/operator/cache/cache.go
- charts/dapr/README.md
Overview
Dapr's Kubernetes integration consists of three core components: the Operator, the Sidecar Injector, and the Watchdog. Together, they manage the lifecycle of Dapr sidecars, handle resource configuration, and ensure pods maintain their sidecar containers in a Kubernetes cluster.
Architecture
Dapr Operator
The Operator is the central control plane component that manages Dapr's Kubernetes resources. It runs as a deployment and uses controller-runtime to watch for changes to Dapr CRDs (Components, Configurations, Subscriptions, HTTPEndpoints, and Resiliencies).
Key responsibilities:
- Manages Custom Resource Definitions (CRDs) for Dapr components and configurations
- Runs a gRPC API server that serves component metadata to sidecar runtimes
- Handles webhook conversion for subscription API versioning (v1alpha1 <-> v2alpha1)
- Coordinates with the Sentry service for mTLS certificate management
- Supports leader election for high-availability deployments
The operator initializes with a filtered cache that reduces memory overhead by stripping unnecessary fields from watched resources. It also manages health probes for security, API server, webhook, and informer components.
Sidecar Injector
The Sidecar Injector is a mutating webhook that automatically injects the Dapr sidecar container into pods annotated with dapr.io/enabled: true. It runs as a separate deployment and intercepts pod creation requests.
Injection process:
- Pod creation request reaches the Kubernetes API server
- Webhook intercepts the request and validates the pod
- Injector patches the pod spec with the daprd container and required volumes
- Injector configures environment variables, ports, and resource limits
- Modified pod spec is returned to the API server
The injector validates that requests come from allowed service accounts (deployment controllers, statefulset controllers, etc.) to prevent unauthorized sidecar injection. It also handles scheduler integration when enabled.
Dapr Watchdog
The Watchdog is a leader-elected controller that periodically polls pods to ensure they have the Dapr sidecar injected. It runs only on the cluster leader to avoid duplicate work.
Watchdog workflow:
- Scans pods with the dapr.io/enabled: true annotation
- Checks if the daprd container exists in the pod spec
- If missing, deletes the pod to trigger a restart with sidecar injection
- Applies rate limiting to prevent pod restart storms (configurable via maxPodRestartsPerMin)
- Optionally patches pod labels to mark them as processed
This ensures that even if the injector fails or is temporarily unavailable, pods will eventually get their sidecars.
CRD Management
Dapr defines several Kubernetes CRDs for configuration:
- Component: Defines state stores, pub/sub brokers, bindings, and other building blocks
- Configuration: Global Dapr settings (logging, tracing, metrics)
- Subscription: Pub/sub topic subscriptions (v1alpha1 and v2alpha1 versions)
- HTTPEndpoint: HTTP service endpoints for invocation
- Resiliency: Retry and timeout policies
The operator watches these resources and exposes them via its gRPC API, allowing sidecars to query configuration at runtime.
Service Reconciliation
The operator includes a Service Reconciler that automatically creates Kubernetes Services for Dapr-enabled Deployments and StatefulSets. This enables sidecar-to-sidecar communication and access to application ports.
The reconciler creates headless services (ClusterIP: None) with ports for:
- HTTP API (3500)
- gRPC API (50001)
- Internal gRPC (50002)
- Metrics (9090)
Optional support for Argo Rollouts service reconciliation is available via configuration.
Deployment & Configuration
Dapr is deployed via Helm chart (charts/dapr/README.md). Key operator configuration options:
- dapr_operator.watchInterval: Polling interval for the watchdog (default: disabled)
- dapr_operator.maxPodRestartsPerMin: Rate limit for pod restarts (default: 20)
- dapr_operator.serviceReconciler.enabled: Enable automatic service creation (default: true)
- dapr_operator.watchNamespace: Restrict operator to specific namespace (default: all)
- global.ha.enabled: Enable high-availability mode with leader election
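As a hypothetical values override combining the options above (keys taken from the list; the values shown are illustrative, not recommended defaults):

```yaml
# values.yaml excerpt passed to `helm install dapr dapr/dapr -f values.yaml`
dapr_operator:
  watchInterval: "1m"        # enable the watchdog, polling every minute
  maxPodRestartsPerMin: 20
  serviceReconciler:
    enabled: true
global:
  ha:
    enabled: true            # leader election across control-plane replicas
```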
The operator requires RBAC permissions to watch resources, create services, patch pods, and manage webhooks.