Overview
Relevant Files
- README.md
- ROADMAP.md
- project/PRINCIPLES.md
Moby is an open-source project created by Docker to enable and accelerate software containerization. It provides a modular "Lego set" of toolkit components that can be assembled into custom container-based systems. The project serves as the upstream for Docker and welcomes community contributions, experimentation, and alternative implementations.
What is Moby?
Moby is designed for engineers, integrators, and enthusiasts who want to modify, hack, and build systems based on containers. It is not a commercially supported product, but rather a collaborative platform for working with open-source container technologies. The project emphasizes modularity and flexibility, allowing components to be swapped with alternative implementations while maintaining compatibility.
Key characteristics:
- Modular architecture: Well-defined components with clear APIs that work together
- Batteries included but swappable: Includes full-featured components, but most can be replaced
- Usable security: Secure defaults without compromising usability
- Developer-focused: APIs designed for developers building tools, not end users
Core Components
Moby includes several major subsystems:
- Container Runtime: Lifecycle management for containers (create, start, stop, delete)
- Image Management: Distribution, storage, and layer management for container images
- Networking: libnetwork-based networking with support for multiple drivers
- Volume Management: Persistent storage for containers
- Build System: Integration with BuildKit for advanced image building
- Swarm Orchestration: Clustering and orchestration capabilities
- Client & API: REST and gRPC interfaces for programmatic access
Design Principles
The Moby project follows core principles that guide development:
- Modular design with minimal dependencies between components
- Straightforward, readable code over clever implementations
- Portability across machines and platforms
- Comprehensive testing and documentation requirements
- Conservative approach to new features
Relationship with Docker
Moby serves as the upstream for Docker products. Docker is committed to using Moby as its foundation, but other projects are equally encouraged to build on Moby components. Releases are supported on a best-effort basis by maintainers and the community.
Getting Started
To understand the codebase structure, explore the main directories:
- daemon/ – Core daemon implementation and subsystems
- client/ – Go client library for interacting with the daemon
- api/ – API definitions and types
- cmd/ – Command-line tools (dockerd, docker-proxy)
- integration/ – Integration tests
- pkg/ – Reusable utility packages
Architecture & Core Components
Relevant Files
- daemon/daemon.go
- daemon/command/daemon.go
- cmd/dockerd/main.go
- daemon/server/server.go
- daemon/command/httphandler.go
The Docker daemon is organized around a central Daemon struct that orchestrates all core functionality. The architecture follows a layered design with clear separation between the CLI entry point, daemon initialization, and request handling.
Daemon Initialization Flow
The startup sequence begins in cmd/dockerd/main.go, which creates a DaemonRunner and invokes its Run() method. This delegates to daemon/command/daemon.go, where the daemonCLI struct manages the complete initialization pipeline:
- Configuration Loading – Merges CLI flags, environment variables, and config files
- Containerd Setup – Initializes or connects to containerd for container runtime
- Daemon Creation – Instantiates the core Daemon struct via NewDaemon()
- Component Initialization – Sets up image service, network controller, volumes, and plugins
- API Server Launch – Starts HTTP listeners and registers routers
Core Daemon Structure
The Daemon struct (in daemon/daemon.go) is the heart of the system. Key responsibilities include:
- Container Management – Maintains container store, execution commands, and state
- Image Service – Manages image storage, layers, and distribution
- Networking – Controls libnetwork for container networking and DNS
- Volumes – Manages persistent storage and volume drivers
- Events – Publishes daemon events for monitoring and logging
- Plugins – Loads and manages authorization, logging, and volume plugins
- Configuration – Stores daemon config with atomic pointer for safe reloading
HTTP Request Handling
The daemon exposes a REST API through an HTTP server. Request flow:
- HTTP Handler (daemon/command/httphandler.go) – Routes requests to either the gRPC or REST API based on content type
- Server Mux (daemon/server/server.go) – Creates HTTP routes and applies middleware
- Routers – Specialized routers handle containers, images, networks, volumes, and system endpoints
- Middleware Chain – Applies authorization, versioning, and experimental feature checks
Startup and Restoration
During daemon startup, NewDaemon() performs critical initialization:
- Loads existing containers from disk in parallel (respecting file descriptor limits)
- Restores container state by querying containerd
- Reconnects containers to networks and volumes
- Restarts containers with auto-restart policies
- Initializes the network controller with active sandboxes
The restoration process uses semaphores to limit concurrent operations, preventing resource exhaustion on systems with many containers.
Graceful Shutdown
The daemon implements graceful shutdown via Shutdown():
- Stops accepting new API requests
- Waits for running containers to exit (with configurable timeout)
- Forcibly kills containers if timeout is exceeded
- Cleans up resources (networks, volumes, plugins)
- Closes database connections and listeners
Loading diagram...
Container Lifecycle Management
Relevant Files
- daemon/container/container.go
- daemon/container/state.go
- daemon/create.go
- daemon/start.go
- daemon/monitor.go
- daemon/internal/restartmanager/restartmanager.go
- daemon/health.go
Containers in Docker progress through distinct lifecycle states, each with specific transitions and responsibilities. Understanding this flow is essential for managing container behavior, restart policies, and resource cleanup.
Container States
A container can exist in one of several states, tracked in the State struct:
- Created: Container exists but has not been started. Configuration is persisted to disk.
- Running: Container process is executing. The daemon maintains a reference to the containerd task.
- Paused: Container is running but suspended (via the freezer cgroup on Linux). Both Running and Paused flags are true.
- Restarting: Container exited and is waiting to restart per its restart policy. Both Running and Restarting flags are true.
- Stopped: Container process has exited. Exit code and timestamp are recorded.
- Dead: Container is marked for removal and cannot be restarted.
Note: States are not always mutually exclusive. A paused container remains Running=true because the process is still active.
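The flag-based design can be sketched as follows; this is a simplified illustration, not the daemon's actual State struct:

```go
package main

import "fmt"

// State mirrors the design described above: "paused" and "restarting"
// are modifiers on a running container, not separate top-level states.
type State struct {
	Running    bool
	Paused     bool
	Restarting bool
	ExitCode   int
}

// String derives the user-visible status from the flag combination.
func (s State) String() string {
	switch {
	case s.Running && s.Paused:
		return "paused"
	case s.Running && s.Restarting:
		return "restarting"
	case s.Running:
		return "running"
	default:
		return "exited"
	}
}

func main() {
	fmt.Println(State{Running: true, Paused: true}) // paused
	fmt.Println(State{Running: true})               // running
}
```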
Creation Phase
The ContainerCreate flow in daemon/create.go initializes a container:
- Validates configuration and merges image defaults
- Creates the container object with newContainer()
- Sets up security options (SELinux, AppArmor)
- Creates the read-write layer via the image service
- Registers mount points and volumes
- Persists configuration to disk via CheckpointTo()
The container is now in the Created state, ready to start.
Start Phase
ContainerStart in daemon/start.go prepares and launches the container:
- Validates the container is not already running or paused
- Sets up networking (creates sandbox, configures endpoints)
- Mounts container directories and volumes
- Generates the OCI runtime spec
- Creates a containerd container and task
- Calls task.Start() to begin execution
- Updates state to Running with PID and start time
- Initializes health monitoring if configured
Pause & Resume
Pausing suspends all processes without terminating them:
- State.Paused is set to true while Running remains true
- Health monitoring is paused
- Resume clears the Paused flag and resumes health checks
Stop & Exit Handling
When a container stops (via signal or natural exit):
- ProcessEvent receives an EventExit from libcontainerd
- handleContainerExit is called, which:
  - Records exit code and timestamp
  - Sets state to Stopped
  - Evaluates the restart policy via RestartManager.ShouldRestart()
- If restart is needed, state transitions to Restarting with exponential backoff
- If not restarting, cleanup occurs and auto-remove is triggered if configured
Restart Policy
The RestartManager in daemon/internal/restartmanager/restartmanager.go enforces restart policies:
- no: Never restart
- always: Always restart, with exponential backoff (from 100ms up to 1 minute)
- on-failure: Restart only if exit code is non-zero, up to max retries
- unless-stopped: Always restart unless manually stopped
Backoff resets if the container runs for >10 seconds, preventing rapid restart loops.
Cleanup
Cleanup() in daemon/start.go releases resources when a container stops:
- Deletes the containerd container
- Releases network resources and sandbox
- Unmounts IPC, secrets, and volumes
- Closes stdio streams
- Unregisters exec commands
Health Monitoring
If a healthcheck is configured, initHealthMonitor() spawns a monitor goroutine that:
- Runs probes at configured intervals
- Tracks consecutive failures
- Transitions to Unhealthy after threshold failures
- Stops when container is paused or stopped
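The consecutive-failure threshold can be sketched as below; the names and the assumption that a single successful probe resets the streak are illustrative:

```go
package main

import "fmt"

// healthState tracks consecutive probe failures and flips to
// "unhealthy" once the failure threshold is reached; a success
// resets the streak (an assumption of this sketch).
type healthState struct {
	failures  int
	threshold int
	status    string
}

func (h *healthState) record(probeOK bool) string {
	if probeOK {
		h.failures = 0
		h.status = "healthy"
	} else {
		h.failures++
		if h.failures >= h.threshold {
			h.status = "unhealthy"
		}
	}
	return h.status
}

func main() {
	h := &healthState{threshold: 3, status: "starting"}
	for _, ok := range []bool{false, false, false} {
		h.record(ok)
	}
	fmt.Println(h.status) // unhealthy
}
```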
Persistence
Container state is persisted via CheckpointTo(), which:
- Serializes the container config to config.v2.json
- Serializes host config to hostconfig.json
- Stores entries in the in-memory ViewDB for fast queries
- Enables recovery after daemon restart
Image Management & Distribution
Relevant Files
- daemon/images/service.go
- daemon/images/image_pull.go
- daemon/images/image_push.go
- daemon/internal/image/store.go
- daemon/internal/layer/layer_store.go
- daemon/internal/refstore/store.go
- daemon/internal/distribution/pull.go
- daemon/internal/distribution/push.go
- daemon/internal/distribution/xfer/download.go
- daemon/internal/distribution/xfer/upload.go
Overview
The image management system handles the complete lifecycle of container images: storage, retrieval, distribution, and reference management. It integrates with containerd for content storage and provides APIs for pulling, pushing, tagging, and deleting images.
Core Components
ImageService (daemon/images/service.go) is the central orchestrator that coordinates all image operations. It manages:
- Image Store: Persists image configurations and metadata
- Layer Store: Manages read-only and read-write filesystem layers
- Reference Store: Maps human-readable names and tags to image digests
- Download/Upload Managers: Handle concurrent layer transfers during pull/push operations
- Distribution Metadata Store: Tracks V2 registry metadata for efficient re-pulls
Layer Architecture
Layers form the foundation of image storage. Each layer is identified by two hashes:
- DiffID: Hash of the individual layer tar stream
- ChainID: Content-addressable hash of the entire layer chain (current layer + all parents)
The layer store maintains a chain of read-only layers (roLayer) and read-write layers (RWLayer) for containers. Maximum chain depth is limited to 125 layers to prevent filesystem driver issues.
// Layer interface provides access to layer content
type Layer interface {
ChainID() ChainID // Hash of entire chain
DiffID() DiffID // Hash of this layer
Parent() Layer // Next layer in chain
TarStream() io.ReadCloser
Size() int64
}
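The ChainID derivation can be sketched as a fold over the DiffIDs, following the OCI image spec: the first layer's ChainID is its DiffID, and each subsequent one hashes "parentChainID diffID" (the DiffID values below are shortened placeholders):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// chainID folds a list of DiffIDs into a single ChainID:
// ChainID(L0) = DiffID(L0)
// ChainID(Ln) = SHA-256(ChainID(Ln-1) + " " + DiffID(Ln))
func chainID(diffIDs []string) string {
	if len(diffIDs) == 0 {
		return ""
	}
	id := diffIDs[0]
	for _, diff := range diffIDs[1:] {
		sum := sha256.Sum256([]byte(id + " " + diff))
		id = fmt.Sprintf("sha256:%x", sum)
	}
	return id
}

func main() {
	// A single-layer chain's ChainID equals its DiffID.
	fmt.Println(chainID([]string{"sha256:aaa"})) // sha256:aaa
	// Adding a layer changes the ChainID of the whole chain.
	fmt.Println(chainID([]string{"sha256:aaa", "sha256:bbb"})[:7]) // sha256:
}
```

Because the ChainID commits to every parent, two images sharing a layer tar (same DiffID) on top of different parents still get distinct ChainIDs.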
Reference Management
The Reference Store (daemon/internal/refstore/store.go) maintains bidirectional mappings between image references and digests. It supports:
- Tags: Human-readable names like myrepo:latest
- Digests: Content-addressable references like myrepo@sha256:abc123...
References are persisted in JSON format and cached in memory for fast lookups. The store prevents ambiguous references and enforces that digest references cannot be overwritten.
Pull & Push Operations
Pull (daemon/images/image_pull.go) downloads images from registries:
- Resolves the reference to a manifest
- Identifies required layers using the LayerDownloadManager
- Downloads layers concurrently (configurable limit)
- Registers layers in the layer store
- Creates image metadata and updates references
Push (daemon/images/image_push.go) uploads images to registries:
- Validates the image exists locally
- Collects all layers referenced by the image
- Uploads layers concurrently using the LayerUploadManager
- Pushes the manifest to the registry
- Logs distribution metadata for future pulls
Concurrent Transfer Management
The LayerDownloadManager and LayerUploadManager handle concurrent transfers with configurable limits. They:
- Track transfer progress and report it to clients
- Implement retry logic with exponential backoff
- Support cross-repository blob mounts to optimize storage
- Manage dependencies between layers (parents must be available before children)
Image Deletion
Image deletion is complex due to dependencies:
- Check if image is referenced by running containers
- Remove all tags pointing to the image
- Delete child images if they have no other parents
- Release layers and clean up storage
- Emit deletion events for each untagged reference
// ImageDelete returns a list of deletion responses
func (i *ImageService) ImageDelete(ctx context.Context, imageRef string,
options imagebackend.RemoveOptions) ([]imagetypes.DeleteResponse, error)
Storage Integration
Images leverage containerd's content store for blob storage and leases for lifecycle management. The system uses:
- Content Store: Immutable blob storage indexed by digest
- Leases: Prevent garbage collection of in-use content
- Namespaces: Isolate image content from other containerd clients
This architecture enables efficient deduplication and sharing of layers across images while maintaining strong consistency guarantees.
Networking & Network Management
Relevant Files
- daemon/libnetwork/controller.go
- daemon/libnetwork/network.go
- daemon/libnetwork/endpoint.go
- daemon/libnetwork/sandbox.go
- daemon/libnetwork/drvregistry/networks.go
- daemon/network.go
- api/types/network/
Architecture Overview
The networking system is built on libnetwork, a pluggable network abstraction layer that manages container connectivity. The core components are:
- Controller: Central orchestrator managing networks, endpoints, and sandboxes
- Network: Logical connectivity zone with a specific driver type
- Endpoint: Connection point between a container and a network
- Sandbox: Network namespace for a container, holding multiple endpoints
- Driver Registry: Manages available network drivers and IPAM implementations
Network Drivers
Docker supports multiple network driver types, each with different scopes and capabilities:
- bridge: Local driver for single-host networking. Default for containers.
- host: Container uses host's network namespace directly. No isolation.
- null: No networking. Container is isolated.
- overlay: Global driver for multi-host networking in Swarm mode using VXLAN.
- macvlan: Assigns MAC addresses to containers, making them appear as physical devices.
- ipvlan: Similar to macvlan but operates at Layer 3 (IP level).
Drivers are registered via the drvregistry.Networks registry, which maintains driver instances and their capabilities (data scope and connectivity scope).
Network Creation & Lifecycle
When a network is created:
- Controller validates the network name and type
- Appropriate driver is loaded from the registry
- Driver creates network resources (bridges, VXLAN tunnels, etc.)
- Network is persisted to the datastore
- DNS resolver is started for service discovery
Networks can be queried by full ID, full name, or partial ID (if unambiguous).
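The lookup order can be sketched as: exact ID first, then exact name, then unique ID prefix, with ambiguous prefixes rejected. This is a sketch over a plain map, not libnetwork's real index:

```go
package main

import (
	"fmt"
	"strings"
)

// findNetwork resolves a reference to a network ID, mirroring the
// full-ID / full-name / partial-ID resolution described above.
func findNetwork(networks map[string]string, ref string) (string, error) {
	if _, ok := networks[ref]; ok {
		return ref, nil // full ID match
	}
	for id, name := range networks {
		if name == ref {
			return id, nil // full name match
		}
	}
	var matches []string
	for id := range networks {
		if strings.HasPrefix(id, ref) {
			matches = append(matches, id)
		}
	}
	switch len(matches) {
	case 1:
		return matches[0], nil
	case 0:
		return "", fmt.Errorf("network %q not found", ref)
	default:
		return "", fmt.Errorf("network %q is ambiguous (%d matches)", ref, len(matches))
	}
}

func main() {
	nets := map[string]string{"abc123": "frontend", "abd456": "backend"}
	id, _ := findNetwork(nets, "frontend")
	fmt.Println(id) // abc123
}
```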
Endpoints & Sandboxes
An Endpoint represents a container's connection to a network. Multiple endpoints can connect a single container to multiple networks. Each endpoint:
- Allocates an IP address via IPAM
- Stores DNS names and aliases
- Tracks exposed ports and service information
A Sandbox is a network namespace containing one or more endpoints. It manages:
- Container's network interfaces
- DNS resolution (/etc/resolv.conf)
- Host entries (/etc/hosts)
- Port mappings and firewall rules
When an endpoint joins a sandbox, the driver configures the actual network interface and connectivity.
IPAM (IP Address Management)
IPAM drivers allocate and manage IP addresses for networks. The default IPAM:
- Manages address pools per network
- Supports IPv4 and IPv6
- Reserves gateway and auxiliary addresses
- Tracks allocated IPs to prevent conflicts
Custom IPAM drivers can be registered for specialized allocation strategies.
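The pool bookkeeping can be sketched in miniature: hand out addresses sequentially, reserve the network and gateway addresses, and track what is in use. The allocator below is illustrative, not the default IPAM driver's actual implementation:

```go
package main

import (
	"fmt"
	"net"
)

// allocator hands out addresses from a subnet, skipping reserved
// addresses and refusing already-allocated IPs.
type allocator struct {
	subnet *net.IPNet
	used   map[string]bool
}

func newAllocator(cidr string) (*allocator, error) {
	_, subnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, err
	}
	a := &allocator{subnet: subnet, used: map[string]bool{}}
	a.used[subnet.IP.String()] = true     // reserve the network address
	a.used[nextIP(subnet.IP).String()] = true // reserve .1 as gateway
	return a, nil
}

// nextIP returns a copy of ip incremented by one.
func nextIP(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

func (a *allocator) Allocate() (net.IP, error) {
	for ip := a.subnet.IP; a.subnet.Contains(ip); ip = nextIP(ip) {
		if !a.used[ip.String()] {
			a.used[ip.String()] = true
			return ip, nil
		}
	}
	return nil, fmt.Errorf("address pool %s exhausted", a.subnet)
}

func main() {
	a, _ := newAllocator("10.0.0.0/29")
	ip, _ := a.Allocate()
	fmt.Println(ip) // 10.0.0.2
}
```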
Service Discovery & DNS
Networks maintain service records for DNS resolution. When containers are connected to a network:
- Service names resolve to Virtual IPs (VIPs) in Swarm mode
- DNS queries are handled by the embedded resolver
- PTR records map IPs back to service names
- External DNS servers can be configured as fallbacks
Volume Management
Relevant Files
- daemon/volumes.go
- daemon/volume/service/service.go
- daemon/volume/service/store.go
- daemon/volume/volume.go
- daemon/volume/local/local.go
- daemon/volume/drivers/adapter.go
- api/types/volume/volume.go
- api/types/volume/cluster_volume.go
Volumes provide persistent data storage for containers, decoupled from container lifecycles. The volume management system handles creation, mounting, reference counting, and lifecycle management across local and cluster-wide deployments.
Core Components
VolumesService is the primary API entry point for volume operations. It wraps the VolumeStore and provides high-level methods like Create(), Get(), Mount(), Unmount(), and Release(). The service automatically generates anonymous volume names when needed and applies default drivers.
VolumeStore manages volume persistence and reference counting. It maintains four key data structures: names (volume name to volume mapping), refs (reference counts per volume), labels (user-defined metadata), and options (driver-specific configuration). A per-volume lock ensures thread-safe operations without blocking the entire store.
Volume Drivers implement the actual storage backend. The built-in local driver stores volumes on the host filesystem at /var/lib/docker/volumes/. Plugin drivers communicate via RPC. The volumeDriverAdapter wraps drivers and enforces scope constraints (local or global).
Volume Types
Local Volumes are stored on the host machine and scoped to a single node. They support quota management on Linux and are ideal for single-host deployments. The local driver stores data in _data subdirectories within the volumes path.
Cluster Volumes (Swarm CSI) are distributed across cluster nodes with advanced features: access modes (single-node or multi-node), capacity ranges, topology constraints, and availability states (active, pause, drain). Each cluster volume has a Swarm ID and publish status per node.
Reference Counting and Lifecycle
Volumes use reference counting to prevent premature deletion. When a container mounts a volume, it increments the reference count. The Release() method decrements the count. Volumes are only removed when references reach zero and explicit deletion is requested.
Anonymous volumes (created without a name) are automatically labeled with com.docker.volume.anonymous and cleaned up when their sole referencing container is removed.
Mount Operations
Mount operations require a unique reference ID (typically the container ID) to track which entity mounted the volume. The same reference must be used for unmounting. This enables multiple containers to safely share volumes while maintaining accurate reference counts.
mountID := "container-id"
mountPath, err := service.Mount(ctx, volume, mountID)
// ... use volume ...
err = service.Unmount(ctx, volume, mountID)
Metadata Persistence
Volume metadata (name, driver, labels, options) is persisted to a BoltDB database. This ensures volumes survive daemon restarts. The store can restore volumes from disk and re-establish connections to external drivers during startup.
Build System & BuildKit Integration
Relevant Files
- daemon/builder/builder.go
- daemon/builder/backend/backend.go
- daemon/builder/dockerfile/builder.go
- daemon/internal/builder-next/builder.go
- daemon/command/daemon.go
- daemon/build.go
Docker supports two build backends: the legacy Builder V1 (Dockerfile interpreter) and the modern BuildKit system. The build system is initialized during daemon startup and provides a unified interface for both backends.
Build Backend Initialization
During daemon startup (daemon/command/daemon.go), the build system is initialized via initBuildkit():
- Session Manager is created for managing build sessions
- Dockerfile BuildManager is instantiated for V1 builds
- BuildKit Builder is created with comprehensive options (networking, cgroups, snapshotter, etc.)
- Backend wrapper combines both builders into a unified interface
The BuildKit builder receives configuration including the engine ID, network controller, registry hosts, and identity mapping for rootless mode.
Build Routing
The Backend struct in daemon/builder/backend/backend.go acts as a dispatcher:
- Checks options.Version to determine which builder to use
- BuildKit path: Calls b.buildkit.Build(ctx, config)
- V1 path: Calls b.builder.Build(ctx, config) (Dockerfile interpreter)
Both paths return a builder.Result containing the image ID. Post-processing includes squashing (if requested) and tagging.
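The dispatcher role can be sketched as a switch on the requested builder version; the version values and builder callbacks here are illustrative stand-ins, not the Backend struct's real fields:

```go
package main

import "fmt"

// buildConfig carries the requested builder version, where "1" means
// the classic builder and "2" means BuildKit (an assumption of this
// sketch).
type buildConfig struct {
	Version string
}

// dispatch routes a build to the matching backend and returns the
// resulting image ID.
func dispatch(cfg buildConfig, v1, buildkit func() (string, error)) (string, error) {
	switch cfg.Version {
	case "2":
		return buildkit()
	case "1", "":
		return v1()
	default:
		return "", fmt.Errorf("unsupported builder version %q", cfg.Version)
	}
}

func main() {
	id, _ := dispatch(buildConfig{Version: "2"},
		func() (string, error) { return "v1-image", nil },
		func() (string, error) { return "buildkit-image", nil })
	fmt.Println(id) // buildkit-image
}
```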
BuildKit Execution Flow
The BuildKit builder (daemon/internal/builder-next/builder.go) manages builds through:
- Job tracking: Maps build IDs to buildJob structs for cancellation and upload handling
- Controller: Orchestrates the build using BuildKit's control plane
- Frontends: Supports dockerfile.v0 and gateway.v0 frontends
- Solve: Executes the build graph and returns the image digest
BuildKit supports multi-stage builds, caching strategies, and parallel execution through its solver.
Dockerfile V1 Execution
The Dockerfile builder (daemon/builder/dockerfile/builder.go) processes builds sequentially:
- Detects build context (local, git, remote URL)
- Parses Dockerfile instructions
- Executes each instruction by creating temporary containers
- Commits layer changes after each step
- Returns final image ID
Key Interfaces
- builder.Backend: Abstracts daemon operations (image creation, container execution)
- builder.Source: Represents build context (filesystem, git, remote)
- builder.Image: Wraps image metadata and run configuration
- buildbackend.BuildConfig: Contains build options, source, and progress writer
Build Cancellation & Events
- BuildKit builds can be cancelled via Cancel(ctx, id) using stored context cancellation functions
- Build events are logged to the daemon's event service (prune, completion)
- BuildKit callbacks notify the image service when images are exported or tagged
Swarm & Cluster Orchestration
Relevant Files
- daemon/cluster/cluster.go
- daemon/cluster/noderunner.go
- daemon/cluster/swarm.go
- daemon/cluster/services.go
- daemon/cluster/tasks.go
- api/types/swarm/swarm.go
Docker's swarm orchestration system enables distributed container management across multiple nodes. The architecture integrates SwarmKit (a separate orchestration library) with Docker's daemon to provide clustering, service scheduling, and task management.
Core Components
Cluster is the main entry point for all swarm operations. It exists in every daemon instance but only becomes active when swarm mode is enabled. The Cluster struct manages the lifecycle of the swarmkit node and provides methods for initialization, joining, and leaving a swarm.
NodeRunner manages the continuous execution of the swarmkit node with automatic restart capabilities. It implements a backoff reconnection loop to handle transient failures, ensuring the node recovers gracefully from network issues or unexpected shutdowns.
NodeState represents the current state of a node, including access to gRPC clients when the node is a manager. This allows API handlers to communicate with the manager's control plane.
Initialization & Cluster Formation
Swarm initialization begins with the Init() method, which validates addresses, creates a new swarmkit node, and starts the cluster. The process involves:
- Resolving listen, advertise, and data path addresses
- Validating default address pools for overlay networks
- Creating a nodeRunner with the configuration
- Waiting for the node to become ready (20-second timeout)
Nodes can join an existing swarm via Join(), which requires a valid join token and remote manager addresses. The join process is similar to initialization but includes the join token and remote address for discovery.
Locking Strategy
The cluster uses two synchronization primitives:
- controlMutex: Protects the entire lifecycle of cluster reconfiguration operations (init, join, leave). Ensures only one reconfiguration action completes before another starts.
- mu (RWMutex): Protects the current cluster state. Allows concurrent reads of cluster state while preventing writes during reconfiguration. API handlers hold read locks to ensure the node isn't shut down mid-request.
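The two-lock pattern can be sketched as follows; the struct and method names are illustrative, not the Cluster struct's actual code:

```go
package main

import (
	"fmt"
	"sync"
)

// clusterSketch models the locking strategy: controlMutex serializes
// whole reconfiguration operations, while an RWMutex lets API
// handlers read cluster state concurrently and blocks them only
// while the state is actually being swapped.
type clusterSketch struct {
	controlMutex sync.Mutex
	mu           sync.RWMutex
	state        string
}

func (c *clusterSketch) Reconfigure(newState string) {
	c.controlMutex.Lock() // one reconfiguration at a time
	defer c.controlMutex.Unlock()
	c.mu.Lock() // block readers while state changes
	c.state = newState
	c.mu.Unlock()
}

func (c *clusterSketch) State() string {
	c.mu.RLock() // concurrent reads are fine
	defer c.mu.RUnlock()
	return c.state
}

func main() {
	c := &clusterSketch{state: "inactive"}
	c.Reconfigure("active")
	fmt.Println(c.State()) // active
}
```

Holding controlMutex across the full reconfiguration (not just the state swap) is what guarantees init, join, and leave cannot interleave.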
Service & Task Orchestration
Services are the primary abstraction for running distributed workloads. The GetServices() method queries the manager's control plane for all services, supporting filters by name, ID, label, mode (replicated/global), and runtime.
Tasks represent individual container instances scheduled by the orchestrator. The GetTasks() method retrieves tasks with filtering by service, node, and runtime type. Tasks are automatically assigned to nodes by the manager's scheduler and orchestrator components.
Configuration & Raft Consensus
Swarm uses Raft consensus for manager coordination. Key configuration parameters include:
- RaftHeartbeatTick: Frequency of leader heartbeats (default: 1 tick)
- RaftElectionTick: Timeout before followers initiate elections (default: 10x heartbeat tick)
- SnapshotInterval: Log entries between Raft snapshots
- AutoLockManagers: Enables encryption of manager TLS keys and Raft data at rest
Data Path & Networking
The data path port (default: 4789 UDP) carries VXLAN traffic for overlay networks. Each swarm can define a default address pool with configurable subnet sizes, enabling automatic subnet allocation for overlay networks without manual configuration.
// Example: Cluster initialization with custom Raft settings
c, err := cluster.New(cluster.Config{
Root: "/var/lib/docker",
Name: "node-1",
Backend: daemon,
RaftHeartbeatTick: 1,
RaftElectionTick: 10,
})
Manager vs. Worker Nodes
Manager nodes run the full orchestration stack (Raft, scheduler, dispatcher, orchestrators) and handle API requests. Worker nodes run only the agent component, receiving task assignments from managers. A node's role is determined at initialization and can be changed via the Update() method.
Client & API Layer
Relevant Files
- client/client.go
- client/client_interfaces.go
- client/request.go
- api/swagger.yaml
- api/types/
Overview
The Client & API Layer provides the HTTP interface between Docker clients and the daemon. The Engine API is a RESTful HTTP API defined in api/swagger.yaml, while the Go Client in client/ offers a type-safe wrapper for programmatic access.
Key Components
APIClient Interface (client_interfaces.go)
The APIClient interface defines all methods available to interact with the daemon. It composes multiple sub-interfaces:
- ContainerAPIClient – Container operations (create, list, start, stop, etc.)
- ImageAPIClient – Image operations (pull, push, build, inspect)
- NetworkAPIClient – Network management
- VolumeAPIClient – Volume operations
- SwarmManagementAPIClient – Swarm and orchestration features
- SystemAPIClient – System-level operations
Client Struct (client.go)
The Client struct implements APIClient and manages:
- HTTP connection pooling and transport configuration
- API version negotiation (supports versions 1.44 to 1.53)
- TLS/SSL configuration for secure connections
- Custom HTTP headers and authentication
- OpenTelemetry tracing integration
Request Handling (request.go)
Low-level HTTP methods handle communication:
- get(), post(), put(), delete(), head() – Standard HTTP verbs
- postRaw(), putRaw() – Raw body handling for streaming
- prepareJSONRequest() – JSON encoding and Content-Type headers
- Automatic error handling and response parsing
API Types
The api/types/ directory contains Go structs representing API objects:
- container/ – Container configuration, state, and responses
- image/ – Image metadata and inspection results
- network/ – Network configuration and endpoints
- volume/ – Volume definitions and operations
- swarm/ – Swarm services, tasks, nodes, and secrets
- common/ – Shared types like error responses
Swagger Definition
The api/swagger.yaml file is the single source of truth for the API specification. It defines:
- Paths - All REST endpoints with HTTP methods, parameters, and responses
- Definitions - Reusable object schemas referenced throughout the API
- Versioning - Current version is 1.53; clients negotiate compatible versions
Client Initialization
// Create client with environment configuration
cli, err := client.New(client.FromEnv)
// Or with custom options
cli, err := client.New(
client.WithHost("unix:///var/run/docker.sock"),
client.WithAPIVersion("1.50"),
)
The FromEnv option reads DOCKER_HOST, DOCKER_API_VERSION, DOCKER_CERT_PATH, and DOCKER_TLS_VERIFY environment variables.
API Version Negotiation
By default, the client automatically negotiates the API version on the first request. This allows compatibility with older daemon versions. The negotiation process:
- Client attempts to use MaxAPIVersion (1.53)
- On first request, queries the daemon version via /version
/version - Downgrades to compatible version if needed
- Caches result for subsequent requests
Pinning a version manually with WithAPIVersion() or WithAPIVersionFromEnv() disables negotiation.
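The downgrade step boils down to taking min(clientMax, daemonMax) over "major.minor" version strings. A minimal sketch, assuming well-formed version strings (the real negotiation also handles daemons that report no version):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// lessThan compares two "major.minor" API version strings numerically.
func lessThan(a, b string) bool {
	pa, pb := strings.SplitN(a, ".", 2), strings.SplitN(b, ".", 2)
	amaj, _ := strconv.Atoi(pa[0])
	bmaj, _ := strconv.Atoi(pb[0])
	if amaj != bmaj {
		return amaj < bmaj
	}
	amin, _ := strconv.Atoi(pa[1])
	bmin, _ := strconv.Atoi(pb[1])
	return amin < bmin
}

// negotiate returns the highest version both sides support.
func negotiate(clientMax, daemonMax string) string {
	if lessThan(daemonMax, clientMax) {
		return daemonMax // downgrade to the daemon's maximum
	}
	return clientMax
}

func main() {
	fmt.Println(negotiate("1.53", "1.47")) // 1.47
	fmt.Println(negotiate("1.53", "1.99")) // 1.53
}
```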
Plugins & Extensibility
Relevant Files
- daemon/pkg/plugin – Plugin manager and lifecycle
- daemon/pkg/plugin/v2 – V2 plugin implementation
- pkg/plugins – V1 plugin discovery and client
- pkg/plugingetter – Plugin getter interface
- api/types/plugin – Plugin API types
- daemon/server/router/plugin – Plugin HTTP routes
Docker supports two plugin architectures: V1 (legacy) and V2 (managed). Plugins extend Docker’s functionality through well-defined extension points like volume drivers, network drivers, logging drivers, and authorization hooks.
Plugin Architecture
The plugin system consists of three layers:
- Plugin Manager (daemon/pkg/plugin/manager.go) – Manages plugin lifecycle: loading, enabling, disabling, and removal. Handles plugin restoration on daemon startup and coordinates with the executor.
- Plugin Store (daemon/pkg/plugin/store.go) – In-memory registry of installed plugins with reference counting. Filters plugins by capability and manages state persistence.
- Plugin Getter (pkg/plugingetter/getter.go) – Abstraction layer providing unified access to both V1 and V2 plugins. Implements reference counting for resource management.
Plugin Types & Capabilities
Plugins declare their capabilities using a standardized format: prefix.capability/version (e.g., docker.volumedriver/1.0).
Common extension points:
- Volume Drivers – Custom storage backends (docker.volumedriver/1.0)
- Network Drivers – Custom networking (docker.networkdriver/1.0)
- Logging Drivers – Custom log collection (docker.logdriver/1.0)
- Authorization – Request/response filtering (docker.authz/1.0)
Each plugin runs an HTTP server on a UNIX socket; the daemon drives it with RPC-style JSON requests over HTTP.
V1 vs V2 Plugins
V1 Plugins (Legacy):
- Discovered via socket files in /run/docker/plugins or spec files in /etc/docker/plugins
- Run directly on the host
- Lazily loaded on-demand
- Simpler but less isolated
V2 Plugins (Managed):
- Packaged as OCI images
- Run in isolated containers with configurable mounts and capabilities
- Explicitly enabled/disabled via API
- Support for settings and environment variables
- Full lifecycle management by the daemon
Plugin Lifecycle
On daemon startup, the manager reloads plugins from disk. Enabled plugins are restored and restarted if live-restore is disabled. The plugin activation handshake occurs at /Plugin.Activate, where the plugin returns its manifest listing implemented subsystems.
Extension Points
Subsystems register handlers via plugins.Handle(interface, callback). When a plugin activates, matching handlers execute automatically. This enables dynamic plugin discovery without hardcoded dependencies.
Example: Volume driver plugins implement Create, Remove, Mount, Unmount, List, Get, and Capabilities methods. The daemon wraps these via generated proxy objects using pluginrpc-gen.
Reference Counting
Plugins use reference counting to track active usage. Operations support three modes: Lookup (no change), Acquire (increment), and Release (decrement). This prevents premature plugin removal while in use.