Overview
Relevant Files
README.md, REPO_LAYOUT.md, api/README.md, mobile/README.md
Envoy is a cloud-native, high-performance edge/middle/service proxy hosted by the Cloud Native Computing Foundation (CNCF). It provides a universal data plane API that enables dynamic configuration and interoperability across microservices architectures.
Core Purpose
Envoy acts as a communication bus for modern service-oriented architectures. It abstracts away network complexity by providing a single, programmable proxy that can be deployed at the edge, in the middle, or as a service proxy. The project emphasizes performance, observability, and extensibility through a comprehensive set of filters, protocols, and integrations.
Repository Structure
The repository is organized into several key components:
- api/ – Universal data plane API definitions using Protocol Buffers. These APIs drive Envoy configuration and are used by other proxy solutions for interoperability.
- source/ – Core Envoy implementation, split into common/ (reusable library code), server/ (standalone server logic), exe/ (binary-specific code), and extensions/ (pluggable functionality).
- envoy/ – Public interface headers for core Envoy, primarily abstract classes defining extension points.
- test/ – Comprehensive test suite including unit tests, integration tests, mocks, and fuzz testing.
- mobile/ – Envoy Mobile, a multiplatform client HTTP/networking library for iOS and Android.
- contrib/ – Community-contributed extensions with optional compilation.
- bazel/ – Build configuration and dependency management using Bazel.
- docs/ – User-facing documentation and API reference generation.
Extension Architecture
Envoy's extensibility is built on a strict namespace and directory layout:
- HTTP Filters – source/extensions/filters/http/ for L7 request/response processing
- Network Filters – source/extensions/filters/network/ for L4 protocol handling
- Transport Sockets – TLS, mTLS, and custom encryption implementations
- Clusters – Upstream endpoint discovery and load balancing strategies
- Access Loggers – Pluggable logging backends (file, gRPC, etc.)
- Tracers – Distributed tracing integrations (Jaeger, Zipkin, etc.)
Each extension lives in its own namespace and can be compiled in or out based on build configuration.
Key Technologies
Protocols: HTTP/1.1, HTTP/2, HTTP/3 (QUIC), gRPC, TCP, UDP
Configuration: xDS (Envoy Data Plane API), YAML, JSON
Build System: Bazel with bzlmod support
Languages: C++ (core), Go (tools), Python (scripts)
Testing: GoogleTest, integration tests, fuzzing (OSS-Fuzz)
Community & Support
Envoy maintains active community channels including mailing lists (envoy-announce, envoy-users, envoy-dev), Slack workspace, and bi-weekly community meetings. The project follows a structured governance model with clear contribution guidelines and security policies.
Architecture & Request Processing
Relevant Files
envoy/server/instance.h, envoy/http/filter.h, envoy/network/filter.h, envoy/router/router.h, envoy/upstream/cluster_manager.h, source/exe/main_common.cc, source/server/server.cc, source/common/http/filter_manager.cc, source/common/listener_manager/active_stream_listener_base.cc
Envoy's architecture follows a layered request processing model that separates concerns between network-level and application-level handling. Understanding this flow is essential for extending Envoy with custom filters or modifying request behavior.
High-Level Request Flow
Server Initialization
The server starts via main_common.cc, which creates a Server::Instance (implemented in server.cc). This instance initializes:
- Event Dispatcher – The main event loop managing all I/O operations
- Cluster Manager – Manages upstream clusters, endpoints, and connection pools
- Listener Manager – Creates and manages listeners that accept downstream connections
- Configuration – Loads bootstrap config and manages dynamic updates
Listener and Filter Chain Matching
When a TCP connection arrives at a listener:
- The listener accepts the connection and creates a ConnectionSocket
- Filter chain matching occurs in active_stream_listener_base.cc: the system matches the socket against configured filter chains based on destination IP, port, SNI, ALPN, and source IP
- A matching filter chain provides a transport socket (e.g., TLS) and network filters
- Network filters are instantiated and initialized in order
Network Filter Chain
Network filters operate on raw bytes. The most important is the HTTP Connection Manager, which:
- Operates on plaintext bytes (TLS decryption, if configured, is handled by the transport socket)
- Runs the HTTP codec to demultiplex frames into streams
- For each HTTP stream, creates an HTTP filter chain
HTTP Filter Chain and Filter Manager
The FilterManager (in filter_manager.cc) orchestrates HTTP filter execution:
- Decoder filters process the request in order: headers, metadata, data, trailers
- Each filter returns a status (Continue, StopIteration, StopAllIterationAndBuffer, etc.)
- If a filter returns StopIteration, subsequent filters are skipped until continueDecoding() is called
- The router filter (terminal decoder filter) selects a route and cluster, then creates an upstream request
- Encoder filters process the response in reverse order before sending to the client
Router and Upstream Request
The router filter:
- Matches the request against route configuration
- Selects a cluster using the matched route entry
- Calls the Cluster Manager to obtain a connection pool for the cluster
- Performs load balancing to select an endpoint
- Creates an UpstreamRequest that forwards headers/data to the upstream endpoint
- Receives the upstream response and passes it back through encoder filters
Cluster Manager and Connection Pooling
The ClusterManager maintains:
- Cluster definitions – Configuration for each upstream cluster
- Thread-local cluster state – Per-worker-thread connection pools and load balancers
- Health checking – Monitors endpoint health
- Circuit breakers – Prevents overload of upstream services
Connection pools are created on-demand per cluster and reused across requests to the same endpoint.
Key Design Patterns
Filter Status Control: Filters can pause iteration and resume later, enabling buffering, async operations, and complex transformations.
Separation of Concerns: Network filters handle protocol details; HTTP filters handle application logic. This allows independent evolution of each layer.
Thread Safety: Each worker thread has its own dispatcher and thread-local state. The cluster manager uses thread-local storage to avoid locks on the hot path.
xDS & Dynamic Configuration
Relevant Files
api/envoy/service/discovery/v3/discovery.proto, api/envoy/service/listener/v3/lds.proto, api/envoy/service/cluster/v3/cds.proto, api/envoy/service/route/v3/rds.proto, api/envoy/service/endpoint/v3/eds.proto, envoy/config/subscription.h, source/docs/xds.md, source/common/config/xds_manager_impl.h
Envoy's xDS (Extensible Discovery Service) system enables dynamic configuration of proxies through a unified protocol. Instead of requiring restarts to update listeners, clusters, routes, or endpoints, Envoy can subscribe to configuration changes from a management server and apply them at runtime.
Core Discovery Services
xDS comprises five primary discovery services, each managing a specific resource type:
- LDS (Listener Discovery Service) - Dynamically discovers listeners that define how Envoy accepts incoming connections
- CDS (Cluster Discovery Service) - Discovers upstream clusters and their configuration
- RDS (Route Discovery Service) - Discovers route configurations for HTTP routing decisions
- EDS (Endpoint Discovery Service) - Discovers endpoints (backend instances) within clusters
- SDS (Secret Discovery Service) - Discovers TLS certificates and secrets
Each service uses the same underlying protocol with DiscoveryRequest and DiscoveryResponse messages, allowing Envoy to request resources and receive updates.
Protocol Modes
xDS supports multiple transport and update mechanisms:
Transport Options:
- Filesystem - Watch configuration files on disk
- REST - Poll HTTP endpoints for configuration
- gRPC - Stream configuration updates via gRPC
Update Patterns:
- State-of-the-World (SotW) - Server sends complete resource set on each update
- Delta - Server sends only added/removed resources, reducing bandwidth
gRPC supports both patterns, while filesystem and REST use SotW. The delta protocol is more efficient for large configurations with frequent partial updates.
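As an illustration of the gRPC transport in SotW/ADS mode, a bootstrap fragment along these lines points LDS and CDS at a management server. This is a hedged sketch: the xds_cluster name and the address are placeholders, not values from this repository.

```yaml
dynamic_resources:
  ads_config:
    api_type: GRPC
    transport_api_version: V3
    grpc_services:
      - envoy_grpc:
          cluster_name: xds_cluster   # placeholder name
  lds_config:
    resource_api_version: V3
    ads: {}
  cds_config:
    resource_api_version: V3
    ads: {}
static_resources:
  clusters:
    - name: xds_cluster
      type: STRICT_DNS
      typed_extension_protocol_options:
        envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
          "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
          explicit_http_config:
            http2_protocol_options: {}   # xDS gRPC requires HTTP/2
      load_assignment:
        cluster_name: xds_cluster
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: xds.example.local   # placeholder address
                      port_value: 18000
```

Switching api_type to DELTA_GRPC selects the incremental (delta) variant of the same services.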
Subscription Architecture
The Subscription interface abstracts transport details. Internally, GrpcMuxImpl manages multiplexing multiple resource types over a single gRPC stream (ADS mode) or separate streams. SubscriptionState tracks per-resource-type state, while WatchMap handles multiple subscribers to the same resource type.
Resource Watches and Multiplexing
Multiple Envoy subsystems can subscribe to overlapping resources. For example, two EDS subscriptions requesting clusters {X, Y} and {Y, Z} result in a single subscription to the union {X, Y, Z}. Updates are delivered to the appropriate subscribers via WatchMap, enabling efficient resource sharing.
xdstp:// Naming Scheme
Envoy supports a structured naming scheme (xdstp://) for better scalability and federation. This scheme encodes resource authority, type, and context parameters in the resource name, enabling:
- Glob collections for LDS, CDS, and SRDS
- Singleton resources for RDS and EDS
- Node context parameter injection at the transport layer
The XdsManager coordinates xdstp:// subscriptions, routing requests to appropriate authorities based on bootstrap configuration.
Configuration Update Flow
When a configuration update arrives:
- A DiscoveryResponse is received with versioned resources
- Resources are decoded using OpaqueResourceDecoder
- SubscriptionCallbacks::onConfigUpdate() is invoked with the decoded resources
- The subscribing subsystem validates and applies the configuration
- If successful, the version is ACK'd in the next DiscoveryRequest
- If rejected, a NACK is sent with error details
The nonce field in gRPC responses ensures the server can track which updates were processed, handling out-of-order or duplicate messages.
Key Interfaces
- Subscription – Abstract subscription to resources; hides transport details
- SubscriptionCallbacks – Receives decoded resource updates and failure notifications
- OpaqueResourceDecoder – Decodes protobuf resources from wire format
- XdsManager – Coordinates all xDS subscriptions and authorities
HTTP Filters & L7 Processing
Relevant Files
envoy/http/filter.h, source/common/http/filter_manager.h, source/extensions/filters/http/well_known_names.h, source/extensions/filters/http/buffer/buffer_filter.h, source/extensions/filters/http/jwt_authn/filter.h, source/extensions/filters/http/ext_proc/ext_proc.h
HTTP filters implement Layer 7 (application layer) processing in Envoy. They form a chain that processes both incoming requests (decode path) and outgoing responses (encode path), allowing filters to inspect, modify, or reject traffic.
Filter Architecture
Filters inherit from StreamDecoderFilter (request processing) or StreamEncoderFilter (response processing), or both. The FilterManager orchestrates filter chain execution, maintaining state and handling buffering. Each filter receives callbacks via StreamDecoderFilterCallbacks or StreamEncoderFilterCallbacks to interact with the stream.
Filter Lifecycle
- Initialization: Filter created per stream; setDecoderFilterCallbacks() called
- Processing: decodeHeaders(), decodeData(), decodeTrailers() invoked in order
- Completion: onStreamComplete() called before access logs, then onDestroy()
Return Status Codes
Filters control chain iteration via status codes:
- FilterHeadersStatus::Continue: Pass to next filter
- FilterHeadersStatus::StopIteration: Pause chain; filter must call continueDecoding() to resume
- FilterHeadersStatus::StopAllIterationAndBuffer: Buffer data, pause all filters
- FilterHeadersStatus::ContinueAndDontEndStream: Continue but delay stream end (for adding body)
Data processing uses FilterDataStatus with similar semantics. Trailers use FilterTrailersStatus.
Common Filter Patterns
Synchronous filters (e.g., Buffer, JWT Auth) process data immediately and return status. Asynchronous filters (e.g., External Processing) return StopIteration, perform async work, then call continueDecoding() when ready.
Filters can modify headers/trailers, buffer data, inject new data via injectDecodedDataToFilterChain(), or send local replies via sendLocalReply().
Built-in Filters
Envoy includes 50+ filters: authentication (JWT, OAuth, Basic Auth), transformation (compression, transcoding), routing (Router, Dynamic Forward Proxy), resilience (fault injection, rate limiting), and observability (tap, stats).
Filter Configuration
Filters are configured in the HTTP connection manager with per-route overrides. The FilterChainFactory creates filter instances per stream. Well-known filter names follow the pattern envoy.filters.http.name (e.g., envoy.filters.http.jwt_authn).
Network Filters & L4 Processing
Relevant Files
envoy/network/filter.h, source/extensions/filters/network/well_known_names.h, source/common/network/filter_manager_impl.h, source/common/tcp_proxy/tcp_proxy.h, source/extensions/filters/network/redis_proxy/proxy_filter.h
Network filters operate at Layer 4 (TCP/UDP) and form the foundation of Envoy's proxy architecture. They process raw bytes and connection events, enabling diverse proxy tasks such as TCP forwarding, protocol-specific handling (Redis, Mongo, Thrift), and request filtering.
Filter Types & Execution Model
Envoy supports three main filter types:
Read Filters process incoming data via onData() and handle new connections via onNewConnection(). They can stop iteration and resume later using continueReading() or inject data directly via injectReadDataToFilterChain().
Write Filters process outgoing data via onWrite(). They operate in LIFO order (last added, first called), allowing filters to buffer or modify data before transmission.
Combined Filters inherit from both ReadFilter and WriteFilter, handling bidirectional traffic in a single instance.
Filter Chain Execution
Filters are chained in FIFO order for reads and LIFO for writes. Each filter returns FilterStatus::Continue to proceed or FilterStatus::StopIteration to pause. The filter manager maintains separate lists for upstream and downstream filters, enabling complex scenarios like rate limiting or asynchronous operations.
// Filter returns status to control chain iteration
enum class FilterStatus {
  Continue,     // Proceed to next filter
  StopIteration // Pause chain, resume later
};
Listener Filters & Filter Chain Selection
Listener Filters execute before connection creation, operating on raw socket data. They can inspect initial bytes (e.g., TLS SNI, protocol detection) and decide whether to accept or reject connections. They run in FIFO order and can stop iteration to wait for more data.
Filter Chain Matching selects the appropriate network filter chain based on connection metadata (source/destination IP, SNI, etc.). Each listener can have multiple filter chains with match criteria; the most specific match wins. If no match, the default filter chain is used.
Terminal Filters
Terminal filters like TCP Proxy establish upstream connections and manage bidirectional data flow. They implement startUpstreamSecureTransport() to handle TLS upgrades and are aware of upstream host selection. Non-terminal filters should not implement upstream-aware methods.
Common Network Filters
Envoy includes filters for:
- TCP Proxy: Raw TCP forwarding with load balancing
- HTTP Connection Manager: HTTP/1.1, HTTP/2, HTTP/3 handling
- Redis Proxy: Redis protocol parsing and command routing
- Rate Limiting: Global and local rate limit enforcement
- RBAC: Role-based access control
- External Authorization: Delegated auth decisions
- Mongo/Thrift/Dubbo Proxies: Protocol-specific proxying
Data Sharing & State Management
Filters can share state via FilterState (per-connection) and dynamic metadata. This enables communication between filters without direct coupling, useful for passing routing decisions or authentication results downstream.
Upstream Clusters & Load Balancing
Relevant Files
envoy/upstream/cluster_manager.h, envoy/upstream/load_balancer.h, envoy/upstream/health_checker.h, source/common/upstream/cluster_manager_impl.h, source/common/upstream/health_checker_impl.h
Overview
Envoy's upstream cluster management system handles the discovery, health checking, and load balancing of backend services. The ClusterManager is the central component that manages all upstream clusters, while LoadBalancers select individual hosts for requests, and HealthCheckers monitor host availability.
Cluster Manager Architecture
The ClusterManager maintains a persistent, thread-safe registry of all upstream clusters. It operates in two initialization phases:
- Primary Phase: Initializes clusters with static endpoints or those discovered via DNS/CDS at startup
- Secondary Phase: Initializes clusters that depend on other clusters (e.g., EDS clusters depending on the EDS server itself)
Each cluster is wrapped in a ClusterEntry that holds the cluster configuration, a thread-aware load balancer, and connection pools. Thread-local copies of clusters are maintained for lock-free access during request processing.
// Central cluster registry
ClusterManagerImpl maintains:
- cluster_map_: Map of cluster name to ClusterData
- thread_local_clusters_: Thread-local cluster cache
- warming_clusters_: Clusters still initializing
Load Balancing System
Load balancers select hosts based on the configured algorithm. The system uses a two-tier hierarchy:
- ThreadAwareLoadBalancer: Shared across threads, creates worker-local load balancers
- LoadBalancer: Worker-local instance that selects individual hosts
// Load balancer interface
class LoadBalancer {
  HostSelectionResponse chooseHost(LoadBalancerContext* context);
  HostConstSharedPtr peekAnotherHost(LoadBalancerContext* context);
  OptRef<ConnectionLifetimeCallbacks> lifetimeCallbacks();
};
Key Algorithms:
- Round Robin: Distributes requests evenly across healthy hosts
- Least Request: Selects host with fewest active connections
- Ring Hash: Consistent hashing for session affinity
- Subset: Filters hosts by metadata before applying primary algorithm
Load balancers consider priority levels and health status when selecting hosts. Healthy hosts are preferred; degraded hosts are used only when healthy hosts are overloaded.
Health Checking
Active health checking monitors upstream host availability. The HealthChecker runs periodic checks and notifies the cluster when host status changes.
class HealthChecker {
  void addHostCheckCompleteCb(HostStatusCb callback);
  void start();
};
Health Check Types:
- HTTP: Sends HTTP requests and validates response codes
- TCP: Establishes TCP connections
- gRPC: Uses gRPC health check protocol
- Custom: Extensible via plugins
When a host fails health checks, it's marked unhealthy and removed from load balancing. The cluster also supports outlier detection, which ejects hosts based on error rates or latency.
Host Sets & Priority
Hosts are organized into HostSets grouped by priority level in a PrioritySet:
PrioritySet
├── Priority 0 (HostSet) - Primary endpoints
├── Priority 1 (HostSet) - Failover endpoints
└── Priority N (HostSet) - Additional failover tiers
Each HostSet contains hosts from a specific locality or priority level. Load balancers use priority-aware selection: they distribute load across priority 0 until it reaches capacity, then spill over to priority 1, and so on.
Connection Pooling
The ClusterManager maintains connection pools per host. Pools are thread-local and created on-demand:
- HTTP/1.1 pools: One request at a time per connection, with keep-alive reuse across requests
- HTTP/2 pools: Multiplexed streams over persistent connections
- TCP pools: Reused connections for L4 proxying
Pools are automatically cleaned up when hosts are removed or health checks fail.
Dynamic Configuration
Clusters can be added, updated, or removed via Cluster Discovery Service (CDS). The ClusterManager validates new configurations and performs graceful transitions:
absl::StatusOr<bool> addOrUpdateCluster(
    const envoy::config::cluster::v3::Cluster& cluster,
    const std::string& version_info);
Callbacks notify filters when clusters are ready, enabling coordinated initialization across the proxy.
Transport Sockets & TLS/mTLS
Relevant Files
envoy/network/transport_socket.h, envoy/ssl/context.h, source/common/tls/ssl_socket.h, source/extensions/transport_sockets/alts/tsi_socket.h, source/common/network/raw_buffer_socket.h, source/common/tls/context_impl.h, source/common/tls/client_context_impl.h, source/common/tls/server_context_impl.h
Overview
Transport sockets are the abstraction layer in Envoy that handles all I/O operations on network connections. They can perform transformations on data (encryption, compression, etc.) and manage protocol negotiation. The primary implementations are TLS/mTLS for secure communication, ALTS for gRPC security, and raw buffer sockets for plaintext connections.
Core Architecture
The transport socket system is built on three main interfaces:
TransportSocket - The primary interface for read/write operations:
- doRead() and doWrite() handle encrypted/plaintext data transformation
- protocol() returns the negotiated protocol (e.g., ALPN result)
- ssl() provides SSL connection metadata if applicable
- onConnected() signals when the underlying transport is ready
TransportSocketCallbacks - Bidirectional communication with the connection:
- ioHandle() provides access to the underlying file descriptor
- raiseEvent() notifies the connection of state changes (e.g., handshake complete)
- shouldDrainReadBuffer() enforces read limits and backpressure
TransportSocketFactory - Creates socket instances:
- Upstream factories create client-side sockets for outbound connections
- Downstream factories create server-side sockets for inbound connections
- Factories manage SSL contexts and certificate configuration
TLS/mTLS Implementation
The SslSocket class implements TLS encryption using BoringSSL. Key features:
Handshake Management:
- Performs SSL/TLS handshake during connection establishment
- Supports both client and server modes via the InitialState enum
- Handles asynchronous certificate validation and selection
- Manages private key operations through callbacks
Certificate Configuration:
- ContextImpl manages SSL contexts and certificate chains
- Multiple certificates per context enable SNI-based selection
- Supports PKCS#12 and PEM certificate formats
- Integrates with Secret Discovery Service (SDS) for dynamic updates
mTLS Verification:
- CertificateValidationContext configures peer certificate validation
- Validates Subject Alternative Names (SANs) against configured matchers
- Supports trust chain verification modes: VERIFY_PEER, ACCEPT_UNTRUSTED
- Handles Certificate Revocation Lists (CRLs) for revocation checking
Session Management:
- Client contexts cache SSL sessions for resumption
- Configurable session key limits prevent unbounded memory growth
- Supports session ticket keys for stateless resumption
ALTS Transport Socket
The TsiSocket class implements gRPC's Application Layer Transport Security (ALTS):
- Uses gRPC TSI (Transport Security Interface) for handshaking
- Wraps a raw buffer socket for underlying I/O
- Provides frame protection/unprotection for encrypted communication
- Supports peer identity validation through custom validators
Raw Buffer Socket
The RawBufferSocket class handles plaintext connections:
- Direct pass-through of data without transformation
- Used when no encryption is configured
- Implements the same interface as secure sockets for consistency
Transport Socket Options
TransportSocketOptions allows dynamic configuration per connection:
- SNI Override - Force specific server name for certificate validation
- SAN Override - Override Subject Alternative Names to verify
- ALPN Override - Force specific application protocols
- Proxy Protocol - Include PROXY protocol headers
- Filter State - Share downstream connection state with upstream
Data Flow
Configuration Flow
TLS contexts are created during listener/cluster initialization:
- Load Certificates - Read certificate chains and private keys from files or SDS
- Initialize SSL_CTX - Configure the BoringSSL context with certificates and validation rules
- Set Callbacks - Register certificate selection and validation callbacks
- Create Factory - Wrap context in upstream/downstream factory for socket creation
When a connection is established, the factory creates a socket instance that uses the pre-configured context for handshaking.
Error Handling
Invalid or missing certificates result in placeholder socket implementations:
- NotReadySslSocket – Returned when SDS secrets haven't been fetched yet
- ErrorSslSocket – Returned when certificate loading fails
- Both close connections immediately with appropriate error messages
Performance Considerations
- Context Reuse - SSL contexts are shared across many connections
- Session Resumption - Reduces handshake overhead for repeated connections
- OCSP Stapling - Includes revocation status in handshake, avoiding extra requests
- Congestion Window - Optional initial congestion window configuration for QUIC
Envoy Mobile & Client Networking
Relevant Files
mobile/library/kotlin/io/envoyproxy/envoymobile/Engine.kt, mobile/library/java/io/envoyproxy/envoymobile/engine/EnvoyEngine.java, mobile/library/swift/Engine.swift, mobile/library/common/internal_engine.h, mobile/library/common/http/client.h, mobile/library/common/network/connectivity_manager.h, mobile/library/java/io/envoyproxy/envoymobile/engine/AndroidNetworkMonitorV2.java, mobile/library/objective-c/EnvoyNetworkMonitor.mm
Envoy Mobile is a multiplatform client HTTP/networking library built on Envoy's core networking layer. It provides language-specific APIs for iOS (Swift/Objective-C), Android (Kotlin/Java), and C++ while sharing a common C++ engine implementation.
Architecture Overview
Engine Lifecycle
The Engine is the central component managing all networking operations. It runs on a dedicated thread with its own event dispatcher. Initialization follows this pattern:
- Creation: EnvoyEngineImpl (Java) or EnvoyEngineImpl (Objective-C) wraps the C++ InternalEngine
- Configuration: Bootstrap configuration is passed via the EnvoyConfiguration protobuf
- Startup: runWithConfig() spawns the engine thread and initializes the HTTP client
- Callbacks: Engine lifecycle callbacks notify when startup completes
HTTP Streams
Streams are created via streamClient() and managed by the HTTP Client class. Each stream:
- Maintains separate request/response state with explicit flow control support
- Buffers data up to a configurable high watermark (default 2MB per stream)
- Supports HTTP/1.1, HTTP/2, and HTTP/3 protocols
- Emits callbacks for headers, data, trailers, completion, and errors
Stream operations (send headers, data, trailers) are posted to the engine's event dispatcher for thread-safe execution.
Network Connectivity Management
Envoy Mobile actively monitors platform network changes through platform-specific monitors:
Android (AndroidNetworkMonitorV2):
- Registers with ConnectivityManager to track network availability
- Detects network type changes (WiFi, cellular, VPN) via NetworkCallback
- Calls engine methods: onNetworkConnect(), onNetworkDisconnect(), onDefaultNetworkChangedV2()
- Handles VPN transitions by purging inaccessible networks
iOS (EnvoyNetworkMonitor):
- Uses nw_path_monitor (Network framework) for path monitoring
- Falls back to SCNetworkReachability for older iOS versions
- Detects interface changes and network availability transitions
DNS & Connection Management
The ConnectivityManager singleton coordinates DNS and connection state:
- DNS Refresh: On network changes, DNS cache is refreshed and connections are drained (unless disabled)
- Preferred Network: Tracks the OS default network and applies socket options accordingly
- Interface Enumeration: Provides IPv4/IPv6 interface lists for socket binding
- Proxy Settings: Manages system proxy configuration and applies it to upstream connections
- Network Fault Reporting: Tracks connection failures per network to enable failover logic
Platform Integration
Android:
- JNI layer (jni_impl.cc) bridges Java/Kotlin to the C++ engine
- AndroidNetworkMonitor and AndroidProxyMonitor integrate with Android system services
- Network type detection maps Android transport types to Envoy connection types
iOS:
- Objective-C++ bridge (EnvoyEngineImpl.mm) wraps the C++ engine
- Network monitoring uses Apple's Network framework APIs
- System proxy settings are queried via PAC proxy resolver
Key Features
- Async Startup: Engine initialization is non-blocking; callbacks notify when ready
- Thread Safety: Engine runs on dedicated thread; all operations are dispatcher-posted
- Stats & Logging: Configurable log levels and stats dumping for debugging
- Graceful Shutdown: terminate() drains connections and stops the event loop
- Flow Control: Optional explicit flow control for backpressure handling
Contrib Extensions & Specialized Features
Relevant Files
contrib/extensions_metadata.yaml, contrib/contrib_build_config.bzl, contrib/all_contrib_extensions.bzl, contrib/golang/filters/http/source/golang_filter.h, contrib/kafka/filters/network/source/broker/filter.h, contrib/postgres_proxy/filters/network/source/postgres_filter.h
Contrib extensions provide specialized, optional features for Envoy that are not part of the core distribution. These extensions are maintained separately and can be selectively compiled into custom Envoy builds. They cover protocol-specific proxies, hardware acceleration, advanced load balancing, and integration with specialized systems.
Extension Categories
The contrib directory organizes extensions into several functional categories:
HTTP Filters include Golang filter support for custom request/response processing, DynamoDB filter for AWS integration, SXG (Signed HTTP Exchange) for web packaging, and specialized filters like checksum validation and language detection.
Network Filters provide protocol-specific proxying: Kafka broker and mesh filters for message streaming, PostgreSQL and MySQL proxies for database traffic, RocketMQ for message queues, and SIP proxy for VoIP. The Golang network filter enables custom protocol handling.
Compression extensions leverage hardware acceleration via QAT (Intel QuickAssist Technology) for QATZip and QATZstd compression algorithms.
TLS Key Providers offer hardware-accelerated cryptographic operations: CryptoMB (Intel Multi-Buffer), QAT (Intel QuickAssist), and KAE (Kunpeng Accelerator Engine for ARM).
Load Balancing includes Peak EWMA (Exponentially Weighted Moving Average) for adaptive load distribution based on request latency.
Other Extensions cover connection balancing (DLB), regex engines (Hyperscan), input matchers, UDP tap sinks, and xDS delegates for dynamic configuration.
Architecture & Registration
contrib/
├── contrib_build_config.bzl # Maps extension names to build targets
├── extensions_metadata.yaml # Metadata: status, security posture
├── all_contrib_extensions.bzl # Platform-specific filtering
└── [extension_name]/
└── filters/[type]/source/
├── config.h/cc # Factory registration
└── [filter_impl].h/cc # Implementation
Each extension registers via a factory pattern. For example, the Postgres proxy filter:
class PostgresConfigFactory
: public Common::FactoryBase<
envoy::extensions::filters::network::postgres_proxy::v3alpha::PostgresProxy> {
Network::FilterFactoryCb createFilterFactoryFromProtoTyped(...) override;
};
REGISTER_FACTORY(PostgresConfigFactory, NamedNetworkFilterConfigFactory);
Platform & Build Constraints
Extensions have platform-specific constraints defined in all_contrib_extensions.bzl:
- ARM64 Skip: CryptoMB, QAT, DLB, QATZip/Zstd (require x86-64 hardware)
- x86 Skip: KAE (ARM-specific)
- PPC Skip: CryptoMB, QAT, KAE, Hyperscan, DLB, QAT compression
- FIPS Linux x86: QATZip, KAE (FIPS compliance restrictions)
Extension Status & Security
Each extension declares a stability status (alpha, stable, wip) and security posture:
- robust_to_untrusted_downstream_and_upstream: Safe for untrusted traffic
- requires_trusted_downstream_and_upstream: Requires trusted clients and servers
- data_plane_agnostic: No security assumptions
This metadata guides deployment decisions and security policies.
Key Implementation Patterns
Protocol proxies (Kafka, PostgreSQL, MySQL) follow a common pattern: request/response decoders parse protocol frames, metrics facades track statistics, and optional rewriters modify traffic. The Golang filters enable custom logic via CGO bindings to Go code, supporting both HTTP and network-level processing.
Hardware acceleration extensions (QAT, CryptoMB, KAE) integrate with system libraries for cryptographic and compression operations, improving performance on compatible platforms.