istio/istio

Istio Service Mesh

Last updated on Dec 18, 2025 (Commit: c33a3c5)

Overview

Relevant Files
  • README.md
  • pilot/cmd/pilot-discovery/main.go
  • istioctl/cmd/istioctl/main.go
  • architecture/networking/pilot.md
  • architecture/security/istio-agent.md
  • architecture/ambient/ztunnel.md

Istio is an open-source service mesh platform that provides a uniform way to secure, connect, and monitor microservices in distributed applications. It layers transparently onto existing Kubernetes clusters, enabling traffic management, security policies, and observability without requiring changes to application code.

Core Architecture

Istio is composed of three primary components working together:

  1. Istiod (Control Plane) - The central control plane that manages service discovery, configuration, and certificate management. It runs as a modular monolith housing proxy configuration (XDS), Kubernetes controllers, and certificate authority (CA) functionality.

  2. Data Plane Proxies - Handle actual traffic between services:

    • Envoy Sidecars - Full-featured proxies deployed alongside each microservice, providing L7 routing, circuit breakers, and telemetry collection
    • Ztunnel - A lightweight Rust-based proxy for ambient mesh mode, providing secure connectivity without sidecar overhead
  3. Istio Agent - Runs alongside each Envoy sidecar, acting as an intermediary between Istiod and Envoy for certificate distribution (SDS) and configuration updates (XDS).

Key Responsibilities


Istiod ingests configuration from Kubernetes resources (VirtualServices, DestinationRules, etc.) and translates it into Envoy-compatible configuration delivered via the xDS protocol. It also manages mTLS certificates for workload-to-workload authentication.
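As an illustration of the kind of resource Istiod ingests, here is a minimal VirtualService using the standard Istio networking API (the `reviews` service and subset names are placeholders, in the style of the Bookinfo sample):

```yaml
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v2
      weight: 90
    - destination:
        host: reviews
        subset: v3
      weight: 10
```

Istiod translates a rule like this into Envoy route configuration (RDS) with weighted clusters, which Envoy then enforces on every request.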

Envoy Proxies intercept all inbound and outbound traffic, enforcing routing policies and security rules and collecting telemetry data. They communicate with Istiod through the Aggregated Discovery Service (ADS) protocol.

Ztunnel provides a minimal L4 proxy for ambient mesh deployments, handling traffic interception and secure tunneling via HBONE (HTTP-Based Overlay Network) without the overhead of full Envoy sidecars.

Deployment Modes

  • Sidecar Mode - Envoy proxy injected into each pod, providing full L7 capabilities
  • Ambient Mode - Ztunnel on nodes with optional waypoint proxies for L7 features, reducing resource consumption

Key Tools

  • istioctl - Command-line utility for managing Istio, debugging, and analyzing mesh configuration
  • pilot-discovery - The Istiod discovery server component
  • pilot-agent - Local agent running in pods for certificate and configuration management

Architecture & Core Components

Relevant Files
  • pilot/pkg/bootstrap/server.go
  • pilot/pkg/model/context.go
  • pilot/pkg/xds/discovery.go
  • architecture/networking/pilot.md
  • architecture/ambient/ztunnel.md
  • architecture/security/istio-agent.md

Istio's architecture is built around a modular control plane (Istiod) that dynamically configures data plane proxies through a three-stage pipeline: config ingestion, translation, and xDS delivery. This section covers the core components and their interactions.


Istiod: The Control Plane

Istiod is a modular monolith structured as a Server that houses multiple responsibilities:

  • XDS Discovery Server - Serves dynamic proxy configurations via gRPC
  • Config Controllers - Watches Kubernetes resources and external sources
  • Service Discovery - Maintains service and endpoint information
  • Certificate Authority - Issues and manages workload certificates
  • Webhooks - Validates and mutates Istio resources

The Server struct orchestrates these components, managing gRPC servers (secure and insecure), HTTP servers for debugging and monitoring, and file watchers for configuration changes.

Config Ingestion Pipeline

Istiod ingests configuration from multiple sources and aggregates them:

  1. ConfigStore - Reads Istio resources (VirtualServices, DestinationRules, etc.) from Kubernetes CRDs, files, or xDS
  2. ServiceDiscovery - Precomputes service-oriented resources from Kubernetes Services/Endpoints and Istio ServiceEntries
  3. Watcher - Monitors mesh configuration and network topology changes

These layers feed into the Environment struct, which provides a unified API for accessing all mesh state.

PushContext: Immutable State Snapshot

PushContext is a critical abstraction that captures a point-in-time snapshot of the entire mesh state. It is regenerated on configuration changes and includes:

  • Service registry and endpoint information
  • Routing rules and policies
  • Security policies and certificates
  • Computed indexes for efficient lookups

By being immutable, PushContext enables lock-free access during configuration generation, improving performance and reducing contention.

XDS Generation and Push Mechanism

The DiscoveryServer manages the push pipeline:

  1. Generators - Specialized components (LdsGenerator, CdsGenerator, EdsGenerator, etc.) transform PushContext into Envoy-compatible xDS resources
  2. Push Queue - Buffers and debounces configuration updates to avoid overwhelming proxies
  3. Connections - Tracks active gRPC streams to connected proxies

When configuration changes, the server:

  • Creates a new PushContext
  • Determines which proxies need updates via ProxyNeedsPush
  • Enqueues push requests for each affected proxy
  • Generators produce resource-specific configurations
  • Updates are sent via gRPC streaming

Data Plane Components

Envoy Sidecars - Full-featured L7 proxies deployed alongside services, receiving complete xDS configuration for routing, security, and observability.

Ztunnel - A lightweight Rust-based L4 proxy for ambient mesh mode. Instead of consuming standard xDS types, ztunnel uses custom Istio-specific resources (Address and Authorization) optimized for minimal resource footprint. It handles traffic interception and secure tunneling via HBONE (HTTP-Based Overlay Network) protocol.

Istio Agent - Runs alongside Envoy sidecars, acting as an intermediary for:

  • SDS (Secret Discovery Service) - Provides workload certificates to Envoy
  • ADS (Aggregated Discovery Service) - Forwards xDS requests to Istiod
  • Certificate Rotation - Manages certificate lifecycle and renewal

Key Design Patterns

Modular Monolith - Istiod combines multiple concerns (discovery, config, security) in a single process, simplifying deployment while maintaining clear internal boundaries.

Immutable Snapshots - PushContext enables safe concurrent access without locks, improving scalability.

Incremental Updates - Endpoints use an optimized path that bypasses PushContext regeneration, as they change most frequently.

Custom xDS for Ztunnel - Ambient mode uses domain-specific resources instead of generic Envoy types, yielding roughly 10x reductions in configuration size and CPU cost.

Control Plane: Istiod & Pilot

Relevant Files
  • pilot/cmd/pilot-discovery/app/cmd.go
  • pilot/pkg/bootstrap/server.go
  • pilot/pkg/bootstrap/discovery.go
  • pilot/pkg/xds/discovery.go
  • pilot/pkg/xds/ads.go
  • pilot/pkg/model/context.go

Istiod is Istio's control plane, implemented as a modular monolith that orchestrates mesh-wide configuration and security. The core component is the Server struct in bootstrap/server.go, which manages multiple subsystems including the XDS discovery service, configuration controllers, certificate authority, and webhooks.


Startup Flow

The pilot-discovery command in cmd.go initiates the control plane:

  1. Command Initialization - Parses flags and validates configuration
  2. Server Creation - bootstrap.NewServer() creates the Server instance
  3. Component Initialization - Sets up XDS server, controllers, CA, and webhooks
  4. Service Start - Launches the gRPC servers (secure and insecure) and the HTTP servers used for debugging and monitoring
  5. Readiness Check - Waits for cache synchronization before accepting client connections

XDS Discovery Server

The DiscoveryServer in xds/discovery.go is the heart of Pilot, implementing the Envoy xDS protocol:

  • Client Management - Maintains active gRPC connections from proxies in the adsClients map
  • Resource Generation - Uses pluggable generators (CDS, LDS, RDS, EDS, SDS) to create proxy configurations
  • Push Mechanism - Debounces configuration changes and pushes updates to connected proxies via PushQueue
  • Caching - Optionally caches generated resources to reduce CPU overhead


Key Components

Server Struct - Orchestrates all subsystems:

  • XDSServer - Serves dynamic proxy configurations
  • configController - Watches Kubernetes resources (VirtualServices, DestinationRules, etc.)
  • kubeClient - Kubernetes API client for resource discovery
  • CA - Issues and manages workload certificates
  • grpcServer / secureGrpcServer - Handles XDS connections

Connection Lifecycle:

  1. Proxy connects via gRPC to StreamAggregatedResources()
  2. Server initializes connection with proxy metadata
  3. Proxy sends discovery requests for resource types (CDS, LDS, RDS, EDS)
  4. Server generates and pushes responses
  5. On config changes, server enqueues push to PushQueue and sends updates

Push Queue - Deduplicates and orders configuration pushes:

  • Merges multiple updates for the same proxy
  • Processes pushes sequentially to avoid overwhelming proxies
  • Supports both full and incremental (EDS) updates

Resource Generators

Generators in bootstrap/discovery.go create type-specific configurations:

  • CDS - Cluster definitions for upstream services
  • LDS - Listener configurations for inbound/outbound traffic
  • RDS - Route configurations for traffic routing
  • EDS - Endpoint lists for load balancing
  • SDS - Secret configurations for mTLS certificates
  • ECDS - Extension configurations for filters and plugins

Each generator implements the XdsResourceGenerator interface and receives the proxy metadata, watched resources, and current push context to generate appropriate configurations.

Readiness and Health

Istiod exposes multiple health endpoints:

  • /ready - HTTP readiness probe (port 8080) - returns success when caches are synced
  • Monitoring endpoint (port 15014) - Prometheus metrics and debug information
  • Webhook HTTPS server (port 15017) - Validates and mutates Istio resources

The server waits for cache synchronization before marking itself ready, ensuring proxies receive complete configuration on connection.

Data Plane: Proxies & Networking

Relevant Files
  • architecture/ambient/ztunnel.md
  • cni/pkg/nodeagent/server.go
  • cni/README.md
  • pkg/istio-agent/xds_proxy.go
  • pkg/hbone/dialer.go
  • cni/pkg/trafficmanager/interface.go

The data plane is responsible for intercepting, encrypting, and routing workload traffic. Istio supports two primary data plane modes: sidecar proxies (Envoy) and ambient mode (Ztunnel).

Ztunnel: Lightweight Node Proxy

Ztunnel is a minimal L4 proxy written in Rust that runs as a DaemonSet in ambient mode. Unlike Envoy sidecars, Ztunnel is shared across all pods on a node, reducing resource overhead significantly.

Key responsibilities:

  • Intercept all ingress and egress traffic from mesh pods
  • Enforce mTLS encryption using SPIFFE identities
  • Route traffic to waypoint proxies for L7 policies
  • Maintain connection pooling for efficiency

Ztunnel receives dynamic configuration via xDS using custom resource types (Address and Authorization) optimized for its specific use case, rather than generic Envoy types.

Traffic Redirection

The CNI plugin and node agent work together to redirect pod traffic to Ztunnel using iptables or nftables rules:

  • Egress (port 15001): All outbound traffic from pods is redirected to Ztunnel, which preserves the original Service IP for routing decisions
  • Inbound passthrough (port 15006): Non-HBONE ingress traffic is redirected here for RBAC policy enforcement
  • HBONE (port 15008): Incoming HBONE tunneled traffic from other ztunnels or waypoints

HBONE Protocol

HBONE (HTTP-Based Overlay Network) is the secure tunneling protocol used for mesh communication. It consists of:

  • HTTP/2 CONNECT tunnels over mutual TLS with SPIFFE certificates
  • Connection pooling keyed by {source identity, destination identity, destination IP}
  • Metadata carried in headers: the :authority pseudo-header (target destination), plus Forwarded (source IP), Baggage (telemetry), and Traceparent (tracing)

HBONE enables secure, multiplexed communication between ztunnels and waypoints without requiring direct pod-to-pod networking.

XDS Proxy

The XDS proxy in istio-agent consolidates all XDS connections into a single gRPC stream to Istiod. It:

  • Proxies Envoy discovery requests to the control plane
  • Handles response forwarding and error propagation
  • Manages health checks and connection lifecycle
  • Supports custom handlers for internal resource types (DNS, proxy config)

Pod Lifecycle Integration

The CNI agent coordinates with Ztunnel throughout the pod lifecycle:

  1. Pod startup: CNI plugin synchronously sets up iptables rules and signals Ztunnel via the ZDS API
  2. Running: Ztunnel lazily fetches certificates and workload info as needed
  3. Shutdown: CNI agent signals pod deletion; Ztunnel gracefully closes connections

This ensures networking is ready before application startup and remains available throughout the pod's lifetime.


Security & Certificate Management

Relevant Files
  • security/pkg/nodeagent/sds/server.go
  • security/pkg/server/ca/server.go
  • security/pkg/nodeagent/cache/secretcache.go
  • security/pkg/pki/ca/ca.go
  • architecture/security/istio-agent.md
  • architecture/ambient/peer-authentication.md

Overview

Istio's security model centers on mutual TLS (mTLS) authentication and certificate management. The system uses a distributed architecture where the Istio agent (running on each workload) acts as an intermediary between Envoy proxies and the Certificate Authority (Istiod). Certificates are provisioned, rotated, and distributed through the Secret Discovery Service (SDS) protocol.

Certificate Provisioning Flow

The certificate lifecycle begins when Envoy requests credentials via SDS. The Istio agent's SDS server receives the request and delegates to the SecretManager, which handles three provisioning modes:

  1. CA-signed certificates (default): The agent generates a Certificate Signing Request (CSR) and submits it to Istiod or an external CA. Istiod authenticates the request, validates the caller's identity, and signs the certificate with a configurable TTL.

  2. File-mounted certificates: Pre-provisioned certificates from well-known paths (/var/run/secrets/workload-spiffe-credentials) are served directly without CA interaction. This mode is common in VM deployments.

  3. Cached certificates: Previously issued certificates are returned if still valid, reducing CA load.

Certificate Authority (CA) Server

The CA server (security/pkg/server/ca/server.go) implements the IstioCertificateService gRPC API. When handling a CSR:

  • Authentication: Validates the caller using configured authenticators (mTLS, JWT, or X-Forwarded-Client-Cert headers).
  • Authorization: Checks if the caller is authorized to request certificates, with support for impersonation in multicluster scenarios.
  • Signing: Generates a certificate with the caller's identity as the Subject Alternative Name (SAN), using the requested TTL or a default value.
  • Response: Returns the leaf certificate, intermediate chain, and root certificate separately so clients can distinguish trust anchors.

SDS Server Architecture

The SDS server (security/pkg/nodeagent/sds/server.go) exposes certificates via a Unix Domain Socket (UDS) at /var/run/secrets/workload-spiffe-uds/socket. Key features:

  • gRPC over UDS: Supports up to 100,000 concurrent streams for high-scale deployments.
  • Push notifications: When certificates are rotated, the server pushes updates to connected Envoy proxies without requiring re-requests.
  • Graceful startup: Implements retry logic with exponential backoff (up to 5 attempts) to handle transient socket setup failures.

Certificate Rotation

Certificates are automatically rotated before expiration using a grace period mechanism. The SecretManager schedules a rotation callback at a configurable point in the certificate's lifetime (default: when half of the lifetime has elapsed, with jitter to stagger renewals). When triggered:

  1. The rotation task clears the cached certificate.
  2. A new CSR is generated and sent to the CA.
  3. The updated certificate is pushed to all subscribed Envoy proxies.
  4. If no proxies are subscribed, the rotation is skipped to avoid unnecessary CA load.

mTLS and PeerAuthentication

Istio enforces mTLS policies through PeerAuthentication resources. In sidecar mode, policies are directly applied to Envoy. In ambient mode, policies are converted to Authorization rules and sent to ztunnel (the L4 proxy). Policies support three modes:

  • STRICT: Only mTLS traffic is accepted.
  • PERMISSIVE: Both mTLS and plaintext traffic are accepted (useful during migrations).
  • DISABLE: mTLS is not enforced.

Port-level policies allow fine-grained control, enabling different modes on different ports of the same workload.
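A minimal example using the standard PeerAuthentication API (the workload selector and port number are placeholders): STRICT mTLS for the workload overall, with PERMISSIVE on one port, e.g. to keep a legacy plaintext client working during migration.

```yaml
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: reviews-mtls
  namespace: default
spec:
  selector:
    matchLabels:
      app: reviews
  mtls:
    mode: STRICT
  portLevelMtls:
    8080:
      mode: PERMISSIVE
```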

Trust Domain and SPIFFE

All workload identities follow the SPIFFE standard: spiffe://<trust-domain>/ns/<namespace>/sa/<service-account>. The trust domain is configured globally and used to establish trust boundaries. Certificates issued by different trust domains are not automatically trusted, enabling secure multi-cluster deployments.

Configuration

Key environment variables control certificate behavior:

  • CA_ADDR: CA endpoint (defaults to Istiod discovery address)
  • OUTPUT_CERTS: Directory to persist fetched certificates (enables certificate reuse across restarts)
  • FILE_MOUNTED_CERTS: Disable CA path and use only pre-mounted certificates
  • SECRET_GRACE_PERIOD_RATIO: Fraction of certificate lifetime before rotation (default: 0.5)
  • WORKLOAD_RSA_KEY_SIZE: RSA key size for workload certificates (default: 2048 bits)

Installation & Configuration

Relevant Files
  • operator/README.md
  • operator/cmd/mesh
  • manifests/profiles
  • operator/pkg/apis/types.go

Istio installation is managed through the IstioOperator API, a declarative configuration system that replaces direct Helm templating. The operator acts as a client-side CLI tool that generates and applies Kubernetes manifests based on your configuration.

Core Concepts

The IstioOperator resource defines three main configuration layers:

  1. Profiles - Pre-configured starting points (default, minimal, demo, ambient, etc.)
  2. Component Configuration - Kubernetes-level settings (resources, replicas, HPA, tolerations)
  3. Values API - Istio runtime configuration (mesh behavior, logging, security)

Installation Methods

Quick Install

istioctl install

This generates manifests and applies them in dependency order, waiting for CRDs to be available.

Generate Manifests Only

istioctl manifest generate -f config.yaml > manifests.yaml

Useful for review or GitOps workflows before applying.

Using Profiles

Select a profile as your base configuration:

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: minimal

Available profiles: default, minimal, demo, ambient, remote, openshift, preview.

Configuration Customization

Component Enablement

Enable or disable components like CNI, gateways, or ztunnel:

spec:
  components:
    cni:
      enabled: true
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
    egressGateways:
    - name: istio-egressgateway
      enabled: false

Kubernetes Resource Overlays

Configure resources, replicas, HPA, and pod settings using standard Kubernetes APIs:

spec:
  components:
    pilot:
      k8s:
        resources:
          requests:
            cpu: 1000m
            memory: 4096Mi
        hpaSpec:
          maxReplicas: 10
          minReplicas: 2
        nodeSelector:
          master: "true"

CLI Overrides

Use --set flags to override configuration values:

istioctl manifest generate --set values.global.mtls.auto=true

For nested paths with dots, escape with backslash:

istioctl manifest generate --set "values.sidecarInjectorWebhook.injectedAnnotations.container\.apparmor\.security\.beta\.kubernetes\.io/istio-proxy=runtime/default"

Advanced Customization

Mesh Configuration

Configure runtime behavior through meshConfig:

spec:
  meshConfig:
    accessLogFile: /dev/stdout
    enableTracing: true

Advanced Overlays

For fine-grained control over generated resources, use JSON patch overlays:

spec:
  components:
    pilot:
      k8s:
        overlays:
        - kind: Deployment
          name: istio-pilot
          patches:
          - path: spec.template.spec.containers.[name:discovery].args.[30m]
            value: "60m"

Verification Commands

# List available profiles
istioctl profile list

# Show profile values
istioctl profile dump demo

# Compare manifests
istioctl manifest diff manifest1.yaml manifest2.yaml

# Show specific configuration subtree
istioctl profile dump --config-path components.pilot

Installation Workflow


The operator validates all configuration against schemas before applying, catching syntax errors early. Use --force to bypass validation if needed, though this is not recommended for production.

CLI Tools & Debugging

Relevant Files
  • istioctl/cmd/root.go
  • istioctl/pkg/admin/admin.go
  • istioctl/pkg/analyze/analyze.go
  • istioctl/pkg/proxystatus/proxystatus.go
  • istioctl/pkg/internaldebug/internal-debug.go
  • pkg/ctrlz/ctrlz.go
  • pkg/ctrlz/options.go

Istio provides two complementary debugging systems: istioctl for CLI-based troubleshooting and ControlZ for runtime introspection. Together, they enable operators to diagnose mesh issues, inspect proxy configurations, and control component behavior.

istioctl: The Control Plane CLI

istioctl is the primary command-line interface for Istio operators. It connects to Kubernetes clusters and communicates with the control plane to retrieve diagnostic information.

Core Commands:

  • istioctl analyze - Validates Istio configuration files and detects misconfigurations. Runs a suite of analyzers that check for common issues like missing VirtualServices, incorrect AuthorizationPolicies, and namespace configuration problems.
  • istioctl proxy-status - Shows synchronization status between Istiod and Envoy proxies. Displays which proxies are out-of-sync and why.
  • istioctl proxy-config - Dumps the complete Envoy configuration for a specific proxy, including clusters, listeners, routes, and endpoints.
  • istioctl admin - Manages Istiod configuration at runtime, including log level adjustments via istioctl admin log.
  • istioctl describe - Provides detailed information about Istio resources (VirtualServices, DestinationRules, etc.).
  • istioctl experimental debug - Accesses internal Istiod debug endpoints for advanced troubleshooting.

Configuration:

istioctl reads defaults from $HOME/.istioctl/config.yaml and respects environment variables prefixed with ISTIOCTL_. The root command uses Cobra for CLI parsing and integrates with Kubernetes client configuration.

ControlZ: Runtime Introspection

ControlZ is an embedded HTTP server that runs in Istio's Go components (most notably Istiod) on port 9876 by default. It provides both a web UI and a REST API for real-time component inspection.

Built-in Topics:

  • Scopes - View and modify logging levels for different components dynamically
  • Memory - Monitor heap usage and garbage collection statistics
  • Environment - Display environment variables and configuration
  • Process - Show process information (PID, uptime, resource usage)
  • Arguments - List command-line flags and their values
  • Version - Display component version information
  • Signals - Trigger profiling and diagnostic dumps

Accessing ControlZ:

# Web UI
curl http://localhost:9876/scopez

# JSON API
curl http://localhost:9876/scopej

# Enable pprof profiling
curl http://localhost:9876/debug/pprof/heap

Configuration:

ControlZ is configured via command-line flags:

--ctrlz_port 9876          # Port to listen on
--ctrlz_address localhost  # Address to bind to (use '*' for all interfaces)


Key Integration Points

  • Cobra CLI Framework - All istioctl commands use Cobra for consistent flag parsing and help generation
  • Kubernetes Client - istioctl uses the standard Kubernetes client library to connect to clusters
  • XDS Protocol - Proxy status and config commands communicate with Istiod via the xDS discovery protocol
  • HTTP Server - ControlZ uses Gorilla mux for routing and Go's standard HTTP server for serving the UI and API

The combination of istioctl's diagnostic commands and ControlZ's runtime visibility makes Istio highly debuggable and operator-friendly.

Testing & Integration Framework

Relevant Files
  • pkg/test/framework/suite.go
  • pkg/test/framework/test.go
  • pkg/test/framework/testcontext.go
  • pkg/test/framework/resource/context.go
  • tests/integration/README.md
  • tests/integration/tests.mk

The Istio test framework provides a fluent, environment-agnostic API for writing integration tests. It abstracts away cluster setup, resource lifecycle management, and test orchestration, allowing developers to focus on test logic.


Suite & Test Structure

A test suite is bootstrapped in TestMain using framework.NewSuite(). Each suite can define setup/teardown functions, labels, and environment requirements. Individual tests are created with framework.NewTest(t) and execute within a TestContext.

func TestMain(m *testing.M) {
    framework.NewSuite(m).
        Setup(istio.Setup(nil, nil)).
        Label(label.CustomSetup).
        Run()
}

func TestMyLogic(t *testing.T) {
    framework.NewTest(t).
        Label(label.CustomSetup).
        Run(func(ctx framework.TestContext) {
            // Test logic here
        })
}

Resource Management & Lifecycle

The framework automatically tracks resources created during tests. When a test exits, all tracked resources are cleaned up. Resources are organized in a hierarchical scope: suite-level resources persist across tests, while test-level resources are cleaned up after each test.

Key context methods:

  • ctx.TrackResource(r) — Register a resource for automatic cleanup
  • ctx.Clusters() — Access Kubernetes clusters in the environment
  • ctx.Environment() — Get the test environment (Kubernetes, native, etc.)
  • ctx.Cleanup(fn) — Register a cleanup function

Parallel & Sub-Tests

Tests support both sequential and parallel execution. Use RunParallel() for tests that can run concurrently with siblings. Sub-tests are created with ctx.NewSubTest() and inherit the parent's context.

ctx.NewSubTest("subtest-name").
    RunParallel(func(ctx framework.TestContext) {
        // Runs in parallel with other RunParallel siblings
    })

Components & Abstractions

Components provide environment-agnostic APIs for Istio resources (Pilot, Galley, namespaces, echo workloads, etc.). Each component implements resource.Resource and is tracked by the framework. This allows tests to run against different environments without code changes.

Running Tests

Tests are tagged with the integ build tag and run via go test:

go test -tags=integ ./tests/integration/pilot/...
go test -tags=integ ./tests/integration/... -p 1  # Sequential suite execution

Use --istio.test.select for label-based filtering and --istio.test.kube.config to specify kubeconfig. The tests.mk Makefile defines CI targets that automatically configure flags for different environments (KinD, GKE, IPv6, etc.).

Diagnosing Failures

The framework generates diagnostic output in a work directory (default: system temp). Enable --istio.test.ci for verbose logging and state dumps. Use --istio.test.nocleanup to preserve cluster state for investigation. Test logs and resource dumps are saved to the work directory for post-mortem analysis.