Overview
Relevant Files
- main.go
- commands.go
- README.md
- docs/architecture.md
Terraform is an Infrastructure as Code (IaC) tool that enables users to define, provision, and manage infrastructure across multiple cloud providers and services using declarative configuration files. This repository contains Terraform Core, the command-line interface and execution engine that orchestrates infrastructure changes.
Key Features
Terraform's core capabilities include:
- Infrastructure as Code: Define infrastructure using HCL (HashiCorp Configuration Language), enabling version control and code reuse.
- Execution Plans: Generate and review plans before applying changes, preventing unexpected modifications.
- Resource Graph: Build dependency graphs for parallel execution and efficient resource provisioning.
- Change Automation: Apply complex infrastructure changes with minimal manual intervention.
Architecture Overview
Core Components
CLI Layer (command package): Parses user commands and arguments, constructs operations, and delegates to backends. The mapping of command names to implementations is defined in commands.go.
Backends: Determine where Terraform stores state snapshots. The local backend executes operations locally, while others like remote and cloud delegate to external systems.
Terraform Context: The main execution engine that orchestrates planning and applying infrastructure changes using resource graphs.
Providers: Plugins that implement resource types for specific cloud platforms or services. Terraform automatically downloads providers from the Terraform Registry.
Workflow
- User runs a Terraform command (e.g., terraform plan)
- CLI parses arguments and creates an operation
- Backend loads current state and configuration
- Terraform Context builds a resource dependency graph
- Graph is executed, communicating with providers
- State is updated with results
Getting Started
To understand the codebase, start with main.go for entry point logic, then explore commands.go for command registration. The internal/command package contains individual command implementations, while internal/backend handles state management and operation execution.
Architecture & Request Flow
Relevant Files
- internal/command/meta.go
- internal/backend/backend.go
- internal/backend/backendrun/operation.go
- internal/terraform/context.go
- internal/terraform/graph_builder_plan.go
- internal/terraform/graph_walk.go
- internal/dag/walk.go
Terraform's architecture follows a layered request flow from CLI commands through backends to the core execution engine. Understanding this flow is essential for navigating the codebase.
Request Flow: CLI to Execution
When a user runs terraform plan or terraform apply, the request flows through these layers:
- CLI Command Layer (internal/command/) - Parses arguments and options
- Backend Layer (internal/backend/) - Manages state and operation execution
- Terraform Context (internal/terraform/) - Orchestrates the core logic
- Graph Execution - Builds and walks the dependency graph
Command and Meta
The Meta struct in internal/command/meta.go is the central hub for command execution. It holds:
- WorkingDir - The working directory context
- Streams - Output handles (stdout, stderr, stdin)
- View - Output rendering abstraction
- Backend configuration - State management settings
- Plugin discovery - Provider and provisioner factories
Commands use Meta.RunOperation() to execute operations on a backend, which blocks until completion and handles interruption signals.
Backend Interface
The Backend interface (internal/backend/backend.go) defines state management:
- ConfigSchema() - Describes backend configuration structure
- PrepareConfig() - Validates and normalizes configuration
- Configure() - Initializes the backend with configuration
- StateMgr() - Returns a state manager for a workspace
- Workspaces() - Lists available workspaces
The OperationsBackend interface extends this for backends that execute operations (local, cloud, remote). Most backends only store state; the local backend wraps others to execute operations locally.
Operation and Execution
An Operation (internal/backend/backendrun/operation.go) describes what to do:
- Type - Plan, apply, refresh, destroy, etc.
- ConfigDir - Path to configuration
- Targets - Specific resources to target
- Variables - Input variable values
- Plan - For apply operations, the plan to apply
The backend's Operation() method returns a RunningOperation with channels for monitoring progress and cancellation.
Terraform Context
The Context struct (internal/terraform/context.go) is the core execution engine. It:
- Holds plugins (providers and provisioners)
- Manages parallelism via semaphores
- Coordinates graph building and walking
- Handles graceful shutdown and interruption
NewContext() initializes it with options including provider factories, hooks, and parallelism settings.
Graph Building and Walking
Graph building transforms configuration into an execution plan:
- PlanGraphBuilder or ApplyGraphBuilder creates a graph via sequential transformers
- Transformers add nodes and edges: ConfigTransformer, StateTransformer, ReferenceTransformer, ProviderTransformer, etc.
- Graph.Walk() executes vertices respecting dependency edges
- ContextGraphWalker evaluates each vertex, managing provider/provisioner lifecycle
The DAG walker (internal/dag/walk.go) respects dependencies and executes vertices concurrently where possible, using a semaphore to limit parallelism.
Vertex Evaluation
Each graph vertex represents a resource, data source, or provider. During evaluation:
- Retrieve provider from context
- Load current state
- Evaluate configuration expressions
- Call provider with current state and configuration
- Record changes in the plan or apply them to state
This process repeats for each vertex in dependency order, with concurrent execution where safe.
CLI Commands & User Interface
Relevant Files
- internal/command/apply.go
- internal/command/plan.go
- internal/command/init.go
- internal/command/meta.go
- internal/command/arguments/
- internal/command/views/
- commands.go
Command Architecture
Terraform's CLI is built on a layered architecture that separates concerns across argument parsing, command execution, and output rendering. Each command follows a consistent pattern: parse arguments, prepare backend state, execute operations, and render results through a view layer.
The Meta struct in meta.go serves as the foundation for all commands, encapsulating shared state like working directory, streams, configuration, and backend information. Commands embed Meta to inherit these capabilities without duplication.
Command Execution Flow
Commands follow a standardized execution pipeline:
- Argument Parsing - The arguments package parses CLI flags into structured types (e.g., Apply, Plan). Each command type defines its specific flags and validation rules.
- View Configuration - Global view arguments (like -json and -no-color) are parsed first and applied to the View object, which controls output formatting.
- Backend Preparation - Commands call PrepareBackend() to initialize the backend, which manages state storage and remote operations.
- Operation Execution - The backend executes the operation (plan, apply, etc.) and returns results.
- Result Rendering - The views package renders results in the configured format (human-readable or JSON).
Key Components
Meta Struct - Holds shared command state including WorkingDir, Streams, View, plugin directories, and provider configuration. Commands access backend, configuration, and state through Meta methods.
Arguments Package - Defines command-specific argument types and parsing functions. Each command has a corresponding Parse* function that validates flags and returns diagnostics for errors.
Views Package - Provides output rendering abstraction. The View base class handles colorization and diagnostics. Command-specific views (e.g., ApplyView, PlanView) format operation results.
Command Registration - commands.go registers all available commands in the Commands map, creating command factories that instantiate commands with the shared Meta object.
Example: Plan Command
The PlanCommand demonstrates the pattern:
func (c *PlanCommand) Run(rawArgs []string) int {
    // Parse global view arguments
    common, rawArgs := arguments.ParseView(rawArgs)
    c.View.Configure(common)

    // Parse command-specific arguments
    args, diags := arguments.ParsePlan(rawArgs)
    view := views.NewPlan(args.ViewType, c.View)

    // Prepare backend and execute
    be, beDiags := c.PrepareBackend(args.State, args.ViewType)
    opReq, opDiags := c.OperationRequest(be, view, args.ViewType, ...)

    // Delegate to backend for execution (simplified: the real method checks
    // the collected diagnostics and waits on the returned RunningOperation)
    return be.Operation(ctx, opReq)
}
Output Formatting
The View system supports multiple output formats controlled by the -json flag. Human-readable output uses colorization (controlled by -no-color) and structured text. JSON output provides machine-readable results for automation and tooling integration.
Diagnostics (warnings and errors) are rendered through the view layer, which handles source code context display and formatting based on the selected output mode.
Configuration Loading & Parsing
Relevant Files
- internal/configs/config.go
- internal/configs/configload/loader.go
- internal/configs/parser.go
- internal/configs/module.go
- internal/configs/parser_config_dir.go
- internal/configs/config_build.go
Configuration loading in Terraform transforms raw HCL files on disk into a complete, validated module tree. The process involves three main layers: parsing (reading files), module assembly (combining files into modules), and tree building (resolving module dependencies).
The Three-Layer Pipeline
Layer 1: Parsing (Parser)
The Parser reads HCL files from disk and caches their contents. It supports both native HCL syntax (.tf files) and JSON syntax (.tf.json files). The LoadConfigDir method orchestrates directory scanning:
- Identifies primary config files (.tf, .tf.json)
- Identifies override files (override.tf, *_override.tf)
- Optionally loads test files (.tftest.hcl) and query files (.tfquery.hcl)
- Parses each file into a File structure containing raw declarations
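As a rough illustration of the classification step, here is a hypothetical classify helper that buckets filenames the way a directory scan might. The function and its rules are inventions of this sketch; the authoritative logic lives in internal/configs/parser_config_dir.go and differs in detail.

```go
package main

import (
	"fmt"
	"strings"
)

// classify buckets a config-directory filename into a category.
// Illustrative only; not the real parser_config_dir.go rules.
func classify(name string) string {
	switch {
	case strings.HasSuffix(name, ".tftest.hcl"):
		return "test"
	case strings.HasSuffix(name, ".tfquery.hcl"):
		return "query"
	case name == "override.tf" || name == "override.tf.json" ||
		strings.HasSuffix(name, "_override.tf") ||
		strings.HasSuffix(name, "_override.tf.json"):
		return "override" // override files are merged after primary files
	case strings.HasSuffix(name, ".tf") || strings.HasSuffix(name, ".tf.json"):
		return "primary"
	default:
		return "ignored" // non-config files are skipped by the scan
	}
}

func main() {
	for _, f := range []string{"main.tf", "variables.tf.json", "prod_override.tf", "checks.tftest.hcl", "notes.txt"} {
		fmt.Printf("%-20s %s\n", f, classify(f))
	}
}
```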
Layer 2: Module Assembly (Module)
Multiple File objects are combined into a single Module via NewModule. This step:
- Merges declarations from all primary files
- Applies overrides from override files
- Detects duplicate declarations (variables, resources, etc.)
- Validates provider requirements and experiments
- Produces a single namespace containing all module elements
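The merge-and-override behavior can be sketched with simplified types. Declaration and mergeModule below are hypothetical stand-ins for the real NewModule logic: primary files must not collide, while override files deliberately replace earlier declarations.

```go
package main

import "fmt"

// Declaration is a simplified stand-in for a parsed block
// (variable, resource, module call, ...).
type Declaration struct {
	Name  string
	Value string
}

// mergeModule combines primary-file declarations into one namespace,
// rejecting duplicates, then applies override-file declarations on top
// (overrides replace rather than conflict). A loose sketch only.
func mergeModule(primary, overrides []Declaration) (map[string]string, error) {
	ns := make(map[string]string)
	for _, d := range primary {
		if _, dup := ns[d.Name]; dup {
			return nil, fmt.Errorf("duplicate declaration %q", d.Name)
		}
		ns[d.Name] = d.Value
	}
	for _, d := range overrides {
		ns[d.Name] = d.Value // override wins silently
	}
	return ns, nil
}

func main() {
	ns, err := mergeModule(
		[]Declaration{{"region", "us-east-1"}, {"instance_type", "t3.micro"}},
		[]Declaration{{"instance_type", "m5.large"}},
	)
	fmt.Println(ns, err)
}
```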
Layer 3: Tree Building (Config & BuildConfig)
The BuildConfig function constructs the complete module tree by:
- Creating a root Config wrapping the root Module
- Walking child module calls via a ModuleWalker interface
- Recursively loading and building each child module
- Establishing parent-child relationships and module paths
- Resolving provider types across the tree
Key Data Structures
// File: Single configuration file with raw declarations
type File struct {
    Variables   []*Variable
    Resources   []*Resource
    ModuleCalls []*ModuleCall
    // ... other declarations
}

// Module: Merged namespace from multiple files
type Module struct {
    SourceDir        string
    Variables        map[string]*Variable
    ManagedResources map[string]*Resource
    ModuleCalls      map[string]*ModuleCall
    // ... other merged declarations
}

// Config: Node in the module tree
type Config struct {
    Root     *Config      // Self-referential for root
    Parent   *Config      // Parent module (nil for root)
    Path     addrs.Module // Path from root
    Children map[string]*Config
    Module   *Module
}
The Loader Abstraction
The Loader in configload extends the base Parser to handle module installation and manifest tracking. It provides:
- LoadConfig(rootDir): Loads root module and builds complete tree
- Module manifest management (.terraform/modules/modules.json)
- Integration with the module registry for remote sources
- Support for snapshot-based loading (for testing)
// Typical usage; disco is a host service discovery object
loader, err := configload.NewLoader(&configload.Config{
    ModulesDir: ".terraform/modules",
    Services:   disco,
})
if err != nil {
    return err
}
cfg, diags := loader.LoadConfig(rootDir)
Error Handling
Configuration loading is fault-tolerant. Even with errors, partial results are returned:
- Syntax errors return incomplete File objects
- Duplicate declarations return incomplete Module objects
- Missing modules return incomplete Config trees
This allows static analysis and error reporting even when the configuration is invalid.
State Management & Backends
Relevant Files
- internal/states/state.go
- internal/states/statemgr/statemgr.go
- internal/states/statemgr/filesystem.go
- internal/backend/backend.go
- internal/backend/local/backend.go
- internal/cloud/backend.go
- internal/states/remote/state.go
Terraform's state management system separates concerns between the State data structure and State Managers, which handle persistence. Backends provide the abstraction layer that allows the CLI to work seamlessly with both local and remote state storage.
State Structure
The State type in internal/states/state.go is the in-memory representation of Terraform's state. It contains:
- Modules: A map of module states, keyed by module instance address
- RootOutputValues: Output values defined in the root module
- CheckResults: Snapshots of check statuses from the most recent run
State is not concurrency-safe by default. Callers must use SyncState for concurrent access or ensure single-threaded access through proper locking.
State Managers & Interfaces
State managers implement interfaces from internal/states/statemgr/ to handle reading, writing, and locking state:
- Transient: Reader + Writer interfaces for in-memory snapshots
- Persistent: Refresher + Persister interfaces for durable storage
- Storage: Union of Transient and Persistent
- Full: Storage + Locker (the complete interface most managers implement)
The Locker interface provides mutual-exclusion locks to prevent concurrent modifications across processes. Lock info includes operation type, user, version, and timestamp.
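The interface layering above can be sketched with invented names and an in-memory implementation. The real interfaces in internal/states/statemgr carry richer signatures (snapshots, lock metadata, diagnostics); this sketch only shows how small capabilities compose into a Full manager and how a held lock rejects a second locker.

```go
package main

import (
	"fmt"
	"sync"
)

// Small capability interfaces composed into a Full manager,
// loosely mirroring the statemgr layering (names simplified).
type Reader interface{ State() map[string]string }
type Writer interface{ WriteState(map[string]string) error }
type Locker interface {
	Lock(who string) error
	Unlock() error
}
type Full interface {
	Reader
	Writer
	Locker
}

// memState is a minimal in-memory Full implementation.
type memState struct {
	mu     sync.Mutex
	state  map[string]string
	holder string
}

func (m *memState) State() map[string]string { return m.state }

func (m *memState) WriteState(s map[string]string) error {
	m.state = s
	return nil
}

func (m *memState) Lock(who string) error {
	m.mu.Lock()
	defer m.mu.Unlock()
	if m.holder != "" {
		return fmt.Errorf("state locked by %s", m.holder)
	}
	m.holder = who
	return nil
}

func (m *memState) Unlock() error {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.holder = ""
	return nil
}

func main() {
	var mgr Full = &memState{}
	_ = mgr.Lock("apply-1234")
	fmt.Println(mgr.Lock("plan-5678")) // second lock attempt fails while held
	_ = mgr.Unlock()
}
```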
Filesystem State Manager
Filesystem is the default local state manager. It:
- Reads initial state from a configurable path
- Writes new snapshots to a separate output path (allowing read-only source files)
- Maintains optional backup files before overwriting state
- Implements file-level locking on Unix/Windows systems
- Supports state migration between different storage backends
Backend Abstraction
The Backend interface in internal/backend/backend.go defines how the CLI interacts with state storage:
type Backend interface {
    ConfigSchema() *configschema.Block
    PrepareConfig(cty.Value) (cty.Value, tfdiags.Diagnostics)
    Configure(cty.Value) tfdiags.Diagnostics
    StateMgr(workspace string) (statemgr.Full, tfdiags.Diagnostics)
    DeleteWorkspace(name string) tfdiags.Diagnostics
    Workspaces() ([]string, tfdiags.Diagnostics)
}
Each backend returns a statemgr.Full for a given workspace, handling workspace-specific configuration and state isolation.
Local Backend
The Local backend (internal/backend/local/backend.go) is the default. It:
- Stores state in the local filesystem using Filesystem managers
- Supports multiple workspaces via the terraform.tfstate.d/ directory
- Allows per-operation state path overrides (-state, -state-out, -state-backup flags)
- Executes all operations locally (plan, apply, refresh)
Cloud Backend
The Cloud backend (internal/cloud/backend.go) integrates with HCP Terraform or Terraform Enterprise:
- Delegates state storage to remote servers
- Enforces workspace naming constraints (no default workspace)
- Implements remote runs with policy evaluation and cost estimation
- Uses the cloud.State manager, which syncs state via the TFE API
- Supports state locking through remote infrastructure
Remote State Managers
For backends storing state remotely (S3, GCS, Azure, etc.), the remote.State manager:
- Wraps a backend-specific Client interface
- Maintains in-memory transient snapshots
- Persists state by serializing to JSON and uploading via the client
- Tracks lineage (unique state ID) and serial (version counter) for conflict detection
- Supports conditional persistence to avoid intermediate snapshots during apply operations
The separation of concerns allows Terraform to support diverse storage backends while maintaining consistent state semantics across all implementations.
Graph Building & Execution
Relevant Files
- internal/terraform/graph_builder.go
- internal/terraform/graph.go
- internal/dag/dag.go
- internal/dag/walk.go
- internal/terraform/context_plan.go
- internal/terraform/context_apply.go
Terraform's execution model is built on a Directed Acyclic Graph (DAG) that represents resources, data sources, and their dependencies. The graph is constructed through sequential transformations and then executed with full parallelism respecting dependency constraints.
Graph Building Pipeline
Graph construction follows a transformer pattern where each transformer mutates the graph to add nodes and edges:
- BasicGraphBuilder orchestrates the build process by executing a sequence of GraphTransformer implementations
- Each transformer adds specific elements: resources, state, providers, references, etc.
- After all transformers complete, the graph is validated to ensure it has a single root and no cycles
Key transformers include:
- ConfigTransformer — Creates nodes for all resources and data sources in the configuration
- StateTransformer — Adds nodes for resources in the current state (including deposed instances)
- ReferenceTransformer — Analyzes expressions and creates edges between nodes that reference each other
- AttachStateTransformer — Attaches state data to resource instance nodes
- ProviderTransformer — Adds provider configuration nodes and connects resources to their providers
- DiffTransformer (apply only) — Creates nodes for resource instances with planned changes
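The transformer pattern can be illustrated with toy types. Graph, the two example transformers, and build below are simplified stand-ins for BasicGraphBuilder and the real GraphTransformer interface; only the shape (an ordered sequence of graph mutations) is faithful.

```go
package main

import "fmt"

// Graph is a bare-bones adjacency structure standing in for Terraform's Graph.
type Graph struct {
	Nodes []string
	Edges map[string][]string // node -> dependencies
}

// GraphTransformer mirrors the real interface's shape: each step mutates the graph.
type GraphTransformer interface {
	Transform(*Graph) error
}

// configTransformer adds one node per configured resource (illustrative).
type configTransformer struct{ resources []string }

func (t configTransformer) Transform(g *Graph) error {
	g.Nodes = append(g.Nodes, t.resources...)
	return nil
}

// referenceTransformer adds dependency edges between existing nodes (illustrative).
type referenceTransformer struct{ refs map[string][]string }

func (t referenceTransformer) Transform(g *Graph) error {
	for from, to := range t.refs {
		g.Edges[from] = append(g.Edges[from], to...)
	}
	return nil
}

// build runs transformers in sequence, as BasicGraphBuilder does,
// stopping at the first error.
func build(steps ...GraphTransformer) (*Graph, error) {
	g := &Graph{Edges: map[string][]string{}}
	for _, s := range steps {
		if err := s.Transform(g); err != nil {
			return nil, err
		}
	}
	return g, nil
}

func main() {
	g, _ := build(
		configTransformer{resources: []string{"aws_vpc.main", "aws_subnet.a"}},
		referenceTransformer{refs: map[string][]string{"aws_subnet.a": {"aws_vpc.main"}}},
	)
	fmt.Println(g.Nodes, g.Edges)
}
```

Ordering matters: a reference transformer can only connect nodes that earlier transformers have already added, which is why the real builder runs its steps as a fixed sequence.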
Graph Execution Model
The DAG walker (internal/dag/walk.go) executes vertices with full parallelism while respecting dependencies:
// Walker executes vertices in parallel, respecting dependency edges
type Walker struct {
    Callback WalkFunc // Called for each vertex
    Reverse  bool     // Reverse edge direction if true
    // ... internal state for tracking execution
}
Execution flow:
- Dependency Waiting — Each vertex waits for its upstream dependencies to complete successfully
- Parallel Execution — Vertices with no blocking dependencies execute concurrently
- Error Propagation — If a dependency fails, downstream vertices are skipped
- Goroutine Per Vertex — The walker creates two goroutines per vertex: one for execution, one for dependency tracking
Graph Walking in Context
When Terraform plans or applies, it calls Graph.Walk(walker) with a ContextGraphWalker that:
- Evaluates each vertex (resource, data source, provider, etc.)
- Manages provider and provisioner lifecycle
- Enforces parallelism limits via a semaphore (Context.parallelSem)
- Handles dynamic graph expansion for count and for_each
Dynamic Expansion
Some graph nodes implement GraphNodeDynamicExpandable to expand into subgraphs at runtime. This allows count and for_each to be evaluated during execution rather than requiring all instances to be known upfront. When a node expands, its subgraph is merged into the main graph and walked immediately.
Providers & Plugin System
Relevant Files
- internal/providers/provider.go
- internal/plugin/grpc_provider.go
- internal/getproviders/registry_source.go
- internal/providercache/dir.go
- internal/plugin/discovery/find.go
- internal/command/meta_providers.go
Terraform's provider system is built on a plugin architecture that enables extensibility through gRPC-based communication. Providers are external processes that implement the providers.Interface, allowing Terraform to manage resources across diverse infrastructure platforms.
Provider Architecture
The provider system consists of three main layers:
- Provider Interface (internal/providers/provider.go): Defines the contract that all providers must implement, including schema retrieval, validation, planning, and resource operations.
- gRPC Transport Layer (internal/plugin/grpc_provider.go): Handles serialization and RPC communication between Terraform core and provider processes using Protocol Buffers. Supports both protocol versions 5 and 6.
- Plugin Management (internal/command/meta_providers.go): Orchestrates provider discovery, initialization, and lifecycle management.
Installation & Discovery
Provider installation follows a multi-source strategy:
- Registry Source (internal/getproviders/registry_source.go): Queries provider registries for available versions and downloads packages.
- Provider Cache (internal/providercache/dir.go): Stores unpacked provider binaries organized by platform ($GOOS-$GOARCH).
- Plugin Discovery (internal/plugin/discovery/find.go): Scans directories for executables matching the naming pattern terraform-provider-$NAME-v$VERSION.
The discovery process searches multiple locations in order: current working directory, user home directory, and system paths.
Provider Initialization
When a provider is needed, Terraform:
- Locates the provider executable via discovery
- Spawns the provider as a separate process
- Establishes a gRPC connection using the handshake protocol
- Calls GetProviderSchema() to retrieve resource and data source definitions
- Caches the schema globally to avoid repeated instantiation
- Calls ConfigureProvider() with user-supplied configuration
- Executes operations (read, plan, apply) through gRPC calls
Schema Caching
Schemas are cached at two levels:
- Global Cache (providers.SchemaCache): Shared across all provider instances for the same address, reducing startup time.
- Instance Cache: Each GRPCProvider instance caches its schema locally to avoid redundant RPC calls.
Plugin Protocol
Terraform supports multiple plugin protocol versions:
- Protocol 5: Legacy version with full feature support
- Protocol 6: Modern version with enhanced capabilities
The VersionedPlugins map in internal/plugin/plugin.go registers handlers for each protocol version. During client initialization, the protocol version is negotiated, and the appropriate handler is selected.
Provider Factory Pattern
Providers are instantiated through factories (providers.Factory), which are functions that create new provider instances. This pattern enables:
- Lazy initialization of providers
- Multiple concurrent instances of the same provider
- Dependency injection for testing
- Clean separation between provider discovery and execution
The factory pattern is central to Terraform's ability to manage provider lifecycles efficiently and support both managed (spawned by Terraform) and unmanaged (pre-existing) provider processes.
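A sketch of the factory idea with a trimmed-down Provider interface: a factory is just a function that yields a fresh instance on each call, so registration is cheap and instantiation is deferred. fakeProvider and the registry-address map key here are illustrative only, not the real providers package types.

```go
package main

import "fmt"

// Provider is a trimmed stand-in for providers.Interface.
type Provider interface {
	Configure(config map[string]string) error
}

// Factory matches the shape of providers.Factory: a function returning
// a fresh provider instance (or an error) each time it is called.
type Factory func() (Provider, error)

// fakeProvider is a toy implementation used for illustration and testing.
type fakeProvider struct{ region string }

func (p *fakeProvider) Configure(cfg map[string]string) error {
	p.region = cfg["region"]
	return nil
}

func main() {
	// Factories are registered up front but invoked lazily, so two
	// callers get independent instances of the "same" provider.
	factories := map[string]Factory{
		"registry.terraform.io/hashicorp/aws": func() (Provider, error) {
			return &fakeProvider{}, nil
		},
	}
	a, _ := factories["registry.terraform.io/hashicorp/aws"]()
	b, _ := factories["registry.terraform.io/hashicorp/aws"]()
	fmt.Println(a != b) // distinct instances
}
```

This is also what makes testing straightforward: a test can inject a factory returning a mock without touching discovery or plugin launching.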
Expression Evaluation & Language
Relevant Files
- internal/lang/scope.go
- internal/lang/eval.go
- internal/lang/functions.go
- internal/addrs/parse_ref.go
- internal/lang/data.go
- internal/lang/langrefs/references.go
Terraform's expression evaluation system transforms HCL code into computed values through a multi-stage pipeline. The core components work together to parse references, resolve data, and execute functions within a controlled scope.
Scope: The Evaluation Context
The Scope struct is the central hub for expression evaluation. It encapsulates:
- Data source - An interface providing access to variables, resources, outputs, and other referenceable objects
- ParseRef function - Determines which reference types are valid in this context (e.g., testing scopes allow output references)
- Self address - Optional alias for the current resource instance
- Functions - Built-in and external functions available for use
- Configuration - Base directory, pure-only mode, plan timestamp, and experiment flags
type Scope struct {
    Data            Data
    ParseRef        langrefs.ParseRef
    SelfAddr        addrs.Referenceable
    BaseDir         string
    PureOnly        bool
    ExternalFuncs   ExternalFuncs
    FunctionResults *FunctionResults
}
Reference Parsing & Resolution
References are extracted from HCL expressions and converted to typed addresses. The ParseRef function in internal/addrs/parse_ref.go handles this conversion:
- Traversal parsing - Converts HCL traversals like aws_instance.example.id into structured references
- Reference types - Supports var, local, resource, data, module, count, each, path, terraform, and self
- Remaining traversal - Captures attribute access beyond the reference subject (e.g., .id in aws_instance.example.id)
The Data interface provides methods to retrieve values for each reference type:
type Data interface {
    GetResource(addrs.Resource, tfdiags.SourceRange) (cty.Value, tfdiags.Diagnostics)
    GetInputVariable(addrs.InputVariable, tfdiags.SourceRange) (cty.Value, tfdiags.Diagnostics)
    GetLocalValue(addrs.LocalValue, tfdiags.SourceRange) (cty.Value, tfdiags.Diagnostics)
    GetModule(addrs.ModuleCall, tfdiags.SourceRange) (cty.Value, tfdiags.Diagnostics)
    // ... and more
}
Evaluation Pipeline
Expression evaluation follows this sequence:
- Extract references - Use langrefs.ReferencesInExpr() to find all traversals in an expression
- Parse references - Convert traversals to typed addresses using ParseRef
- Build eval context - Call Scope.EvalContext() to construct an hcl.EvalContext with variables and functions
- Evaluate expression - Use HCL's native evaluation with the context
- Type conversion - Convert result to the requested type
func (s *Scope) EvalExpr(expr hcl.Expression, wantType cty.Type) (cty.Value, tfdiags.Diagnostics) {
    refs, diags := langrefs.ReferencesInExpr(s.ParseRef, expr)
    ctx, ctxDiags := s.EvalContext(refs)
    val, evalDiags := expr.Value(ctx)
    // Convert to wantType and return
}
Function Resolution
The Functions() method builds a map of available functions, organized by namespace:
- Core functions - 100+ built-in functions (string, math, collection, crypto, etc.)
- Filesystem functions - File I/O operations with consistency checking
- Provider functions - External functions under the provider::NAME:: namespace
- Impure functions - bcrypt, timestamp, and uuid return unknown values during planning
Functions are wrapped with descriptions and consistency validators to ensure deterministic behavior during plan/apply cycles.
Key Design Patterns
Lazy evaluation - Functions are only resolved when needed, allowing scopes to be created without full data availability.
Immutability checking - Filesystem and template functions track results to detect inconsistencies between plan and apply phases.
Scope isolation - Different contexts (resources, provisioners, testing) use different ParseRef implementations to control what can be referenced.
Error recovery - Unknown values are substituted when errors occur, allowing type checking to proceed even with invalid references.
Plans & Change Tracking
Relevant Files
- internal/plans/plan.go
- internal/plans/changes.go
- internal/plans/changes_src.go
- internal/plans/action.go
- internal/plans/mode.go
- internal/plans/planfile/tfplan.go
- internal/plans/deferring.go
- docs/planning-behaviors.md
Terraform's planning system is the foundation for the plan-and-apply workflow. A plan describes the set of changes required to move from the current state to a goal state derived from configuration. Plans are not applied directly but contain an approximation of the final result, with unknown values resolved during apply.
Core Plan Structure
The Plan type represents a complete planned set of changes. It contains:
- UIMode: The planning mode (Normal, Destroy, or RefreshOnly) used for UI presentation only
- Changes: A ChangesSrc object tracking all planned resource, output, and action invocation changes
- VariableValues & VariableMarks: Persisted variable values and sensitivity marks for the apply phase
- DeferredResources: Resources whose changes were deferred due to unknown values or dependencies
- TargetAddrs & ForceReplaceAddrs: Addresses specified by planning options like -target and -replace
- Complete & Applyable: Flags indicating whether the plan is complete and ready to apply
Change Actions
The Action type represents the operation Terraform will perform on a resource instance:
NoOp             // No change needed
Create           // Create new resource
Read             // Read data source
Update           // Modify existing resource
DeleteThenCreate // Replace: destroy then create
CreateThenDelete // Replace: create then destroy
Delete           // Destroy resource
Forget           // Remove from state without destroying
The IsReplace() method identifies replace actions, which decompose into separate create and delete operations.
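The replace semantics can be made concrete with a small self-contained sketch. The real Action type in internal/plans/action.go is defined differently (this sketch uses strings), and Decompose is an invented helper added here purely to show the create/delete decomposition.

```go
package main

import "fmt"

// Action enumerates planned operations, echoing the list above.
// Values are simplified to strings for this sketch.
type Action string

const (
	NoOp             Action = "no-op"
	Create           Action = "create"
	Update           Action = "update"
	Delete           Action = "delete"
	DeleteThenCreate Action = "delete-then-create"
	CreateThenDelete Action = "create-then-delete"
)

// IsReplace reports whether the action is one of the two replace orderings.
func (a Action) IsReplace() bool {
	return a == DeleteThenCreate || a == CreateThenDelete
}

// Decompose splits a replace into its primitive steps, in execution order.
// (Hypothetical helper, not part of the real plans package.)
func (a Action) Decompose() []Action {
	switch a {
	case DeleteThenCreate:
		return []Action{Delete, Create}
	case CreateThenDelete:
		return []Action{Create, Delete}
	default:
		return []Action{a}
	}
}

func main() {
	fmt.Println(DeleteThenCreate.IsReplace(), DeleteThenCreate.Decompose())
}
```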
Resource Instance Changes
ResourceInstanceChangeSrc represents a planned change to a resource instance before decoding. Key fields include:
- Addr: Absolute address of the resource instance
- Action: The planned operation (Create, Update, Delete, etc.)
- Before & After: Prior and proposed object values (encoded as DynamicValue)
- RequiredReplace: Paths that forced a Replace action instead of Update
- ActionReason: User-facing context for why this action was chosen (e.g., tainted, replace-triggered-by)
- Private: Provider-opaque data preserved across plan and apply
The Decode() method converts ResourceInstanceChangeSrc to ResourceInstanceChange using the resource schema.
Deferred Changes
DeferredResourceInstanceChangeSrc tracks resources whose changes were deferred:
type DeferredResourceInstanceChangeSrc struct {
    DeferredReason providers.DeferredReason
    ChangeSrc      *ResourceInstanceChangeSrc
}
Deferral occurs when unknown values or pending dependencies prevent immediate planning. The deferred change may be incomplete and must be parsed carefully.
Planning Modes
Three mutually-exclusive modes control planning behavior:
- NormalMode: Default mode; synchronizes state with remote objects and plans changes to match configuration
- DestroyMode: Plans destruction of all managed resources regardless of configuration
- RefreshOnlyMode: Synchronizes state without proposing any change actions
Plan Serialization
Plans are serialized to disk using Protocol Buffers (protobuf) in the tfplan file format (version 3). The planfile package handles reading and writing:
- readTfplan() deserializes protobuf data into a Plan object
- writeTfplan() serializes a Plan to protobuf format
- Version checking ensures plans cannot be transferred between different Terraform versions
- Only root module outputs survive serialization; nested outputs are recalculated during apply
Planning Behaviors
Terraform supports three design patterns for special planning behaviors:
- Configuration-driven: Activated by module annotations (e.g., lifecycle blocks, moved blocks)
- Provider-driven: Activated by provider responses during the PlanResourceChange RPC
- Single-run: Activated by planning options like -replace or -refresh-only
These patterns allow module authors, providers, and operators to customize planning without changing core logic.
Testing, Validation & Diagnostics
Relevant Files
- internal/terraform/context_validate.go
- internal/moduletest/suite.go
- internal/moduletest/run.go
- internal/tfdiags/diagnostic.go
- internal/tfdiags/diagnostics.go
- internal/checks/state.go
- internal/checks/status.go
- internal/backend/local/test.go
Diagnostics System
Terraform uses a rich diagnostics framework to report errors, warnings, and contextual information. The tfdiags package provides a Diagnostics type that acts as a list of diagnostic messages, replacing traditional Go errors with more expressive information.
Each diagnostic includes:
- Severity: Either Error or Warning
- Description: Address, summary, and detailed explanation
- Source: File location and context range
- Expression Context: Optional HCL expression and evaluation context for expression-related errors
The Diagnostics.Append() method is the primary interface for building diagnostic lists. It accepts various types (Go errors, HCL diagnostics, other Diagnostics) and normalizes them into a unified list.
Configuration Validation
The Context.Validate() function performs semantic validation of Terraform configurations without requiring external state or variable values. It:
- Checks configuration dependencies and provider schemas
- Populates all input variables with unknown values of their declared types
- Builds a validation graph using PlanGraphBuilder with the walkValidate operation
- Walks the graph to validate resource configurations, expressions, and provider compatibility
Validation is distinct from planning—it catches configuration errors early without considering current infrastructure state. The Plan operation includes all validation checks plus additional state-aware validation.
Module Testing Framework
The moduletest package implements Terraform's testing system for validating module behavior. Tests are defined in .tftest.hcl files and organized into:
- Suite: A collection of test files with overall status and command mode (Normal or Cleanup)
- File: A single test file containing multiple runs
- Run: An individual test execution with configuration, variables, and assertions
Each run progresses through states: Pending → Running → Pass/Fail/Error/Skip. The Status.Merge() function aggregates statuses hierarchically—errors take precedence over failures, which take precedence over passes.
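The precedence rule can be modeled as an ordered enum whose Merge keeps the higher value. The error-beats-failure-beats-pass ordering comes from the description above; the exact placement of Pending and Skip in this ordering is an assumption of the sketch.

```go
package main

import "fmt"

// Status models aggregate test outcomes. Higher values take precedence
// when merging; Pending/Skip placement is assumed for this sketch.
type Status int

const (
	Pending Status = iota
	Skip
	Pass
	Fail
	Error
)

func (s Status) String() string {
	return [...]string{"pending", "skip", "pass", "fail", "error"}[s]
}

// Merge returns the higher-precedence of two statuses, so folding a
// file's run statuses yields the file's overall status.
func (s Status) Merge(other Status) Status {
	if other > s {
		return other
	}
	return s
}

func main() {
	overall := Pending
	for _, run := range []Status{Pass, Fail, Pass} {
		overall = overall.Merge(run)
	}
	fmt.Println(overall) // one failing run fails the whole file
}
```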
Checks System
The checks package tracks validation assertions declared in configurations. It maintains a State object that:
- Maps configuration objects to their associated checks
- Tracks dynamic checkable objects (resources, outputs, etc.)
- Records individual check results as they execute
Check statuses include:
- StatusPass: Condition evaluated to true
- StatusFail: Condition evaluated to false
- StatusError: Condition evaluation encountered an error
- StatusUnknown: Result not yet determined
The ReportCheckableObjects() and ReportCheckResult() methods allow Terraform Core to incrementally report check outcomes during graph walks.
Test Execution Pipeline
Test execution validates expected failures, upgrades check warnings to errors during testing, and aggregates diagnostics. The framework supports filtering specific test files and cleanup mode for resource teardown.