
WireGuard/wireguard

WireGuard Kernel Module

Last updated on Dec 19, 2019 (Commit: edad0d6)

Overview

Relevant Files
  • README.md
  • src/main.c
  • src/version.h
  • src/device.h
  • src/noise.h
  • src/peer.h

WireGuard is a modern, high-performance VPN tunnel that runs as a Linux kernel module. It combines state-of-the-art cryptography with a lean, auditable codebase to provide secure network tunneling with minimal overhead. Unlike traditional VPN solutions like IPsec or OpenVPN, WireGuard prioritizes simplicity, speed, and security through careful design choices.

Core Architecture

WireGuard operates as a kernel-space VPN implementation with three primary layers:

  1. Device Layer (device.h, device.c) — Manages the virtual network interface and coordinates all subsystems. Each WireGuard device maintains peer connections, encryption/decryption queues, and socket bindings for IPv4 and IPv6.

  2. Cryptographic Layer (noise.h, noise.c) — Implements the Noise Protocol Framework for authenticated key exchange. Handles handshake state machines, keypair management, and symmetric key derivation using ChaCha20-Poly1305 and Curve25519.

  3. Peer Management (peer.h, peer.c) — Tracks connected peers with their public keys, endpoints, and allowed IP ranges. Each peer maintains its own encryption/decryption queues and timer-based keepalive mechanisms.

Key Components

Noise Protocol Implementation: WireGuard uses the Noise Protocol Framework for its handshake, providing forward secrecy and identity hiding. The handshake itself is a two-message exchange (initiation and response); together with the cookie reply and data packets, these make up the protocol's four message types, and an optional pre-shared key can be mixed in for additional security.

Multicore Packet Processing: The codebase uses per-CPU work queues for encryption and decryption operations, enabling efficient parallel processing on multicore systems. Packets are staged in per-peer queues before cryptographic operations.

Allowed IPs Routing: The allowedips module implements a radix tree for efficient IP address matching, determining which packets should be encrypted for which peers.

Module Initialization

The module initialization sequence (main.c) follows this order:

  1. Initialize cryptographic primitives (ChaCha20, Poly1305, BLAKE2s, Curve25519)
  2. Run self-tests in DEBUG mode
  3. Initialize the Noise Protocol state
  4. Register the virtual network device
  5. Set up the generic netlink interface for userspace configuration
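
The corresponding check in main.c aborts module loading if any of these primitives fails to initialize:
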
if ((ret = chacha20_mod_init()) || (ret = poly1305_mod_init()) ||
    (ret = chacha20poly1305_mod_init()) || (ret = blake2s_mod_init()) ||
    (ret = curve25519_mod_init()))
    return ret;

Data Flow

On the send side, packets entering the virtual interface are staged per peer, encrypted in parallel on the device-wide encryption queue, and transmitted over UDP. On the receive side, incoming UDP packets are routed by message type: handshake messages go to dedicated handshake workqueues, while data packets are decrypted on the decryption queue and delivered to the network stack via NAPI. The Packet Send & Receive Pipeline section describes each stage in detail.

Version and Licensing

The current version is defined in version.h as 0.0.20191219. WireGuard is released under the GPLv2 license, ensuring it remains open-source and freely available for modification and distribution.

Architecture & Core Components

Relevant Files
  • src/device.h — Device structure and initialization
  • src/device.c — Device lifecycle and network operations
  • src/peer.h — Peer structure and reference counting
  • src/peer.c — Peer creation, removal, and lifecycle
  • src/messages.h — Message types and protocol constants

WireGuard's architecture is built around three core abstractions: devices, peers, and message types. These components work together to create a high-performance VPN tunnel that processes packets through dedicated encryption and decryption pipelines.

Device Layer

The wg_device structure is the central hub of WireGuard. Each device represents a virtual network interface and manages:

  • Network Interface — Registered with the Linux kernel as a standard network device with custom transmit (wg_xmit) and receive handlers
  • Peer Management — Maintains hashtables for peer lookup by public key and by session index
  • Encryption/Decryption Queues — Per-device crypt_queue structures that distribute cryptographic work across CPU cores
  • Workqueues — Separate queues for handshake processing, key exchange, and packet encryption/decryption
  • Socket Bindings — Dual IPv4 and IPv6 UDP sockets for sending and receiving packets

Device initialization happens in wg_newlink(), which allocates hashtables, initializes workqueues, and registers the device with the kernel. The device maintains a list of all connected peers and enforces limits (max 2^20 peers per device).
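
A condensed sketch of the structure as declared in device.h around this release (abridged; some members omitted):

struct wg_device {
    struct net_device *dev;
    struct crypt_queue encrypt_queue, decrypt_queue;
    struct sock __rcu *sock4, *sock6;             /* IPv4/IPv6 UDP sockets */
    struct noise_static_identity static_identity; /* device key pair */
    struct workqueue_struct *handshake_receive_wq, *handshake_send_wq;
    struct workqueue_struct *packet_crypt_wq;
    struct cookie_checker cookie_checker;
    struct pubkey_hashtable *peer_hashtable;      /* lookup by public key */
    struct index_hashtable *index_hashtable;      /* lookup by session index */
    struct allowedips peer_allowedips;            /* cryptokey routing trie */
    struct mutex device_update_lock, socket_update_lock;
    struct list_head peer_list;
    u32 fwmark;
    u16 incoming_port;
    /* ... */
};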

Peer Structure

Each wg_peer represents a remote endpoint and contains:

  • Cryptographic State — Noise protocol handshake state and active keypairs for encryption/decryption
  • Endpoint Information — Remote address, source address, and cached routing information
  • Per-Peer Queues — Separate transmit and receive queues for staging packets before cryptographic processing
  • Timers — Keepalive, handshake retransmission, and key material expiration timers
  • Reference Counting — Uses kref for safe memory management across async contexts

Peers are created via wg_peer_create() and removed through a two-phase process: peer_make_dead() marks the peer as dead (preventing new references), followed by peer_remove_after_dead() which flushes workqueues and frees resources.
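
The peer lifecycle API in peer.h is small; its core entry points look roughly like this (signatures abridged from the header):

/* Create a peer on the given device, keyed by its Curve25519 public key
 * and optional preshared key. */
struct wg_peer *wg_peer_create(struct wg_device *wg,
                               const u8 public_key[NOISE_PUBLIC_KEY_LEN],
                               const u8 preshared_key[NOISE_SYMMETRIC_KEY_LEN]);

/* Take and drop kref-counted references from asynchronous contexts. */
struct wg_peer *wg_peer_get_maybe_zero(struct wg_peer *peer);
void wg_peer_put(struct wg_peer *peer);

/* Two-phase removal: mark dead, flush queues and workers, then free. */
void wg_peer_remove(struct wg_peer *peer);
void wg_peer_remove_all(struct wg_device *wg);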

Message Types

WireGuard defines four message types in messages.h:

  1. MESSAGE_HANDSHAKE_INITIATION — Initiates Noise protocol key exchange
  2. MESSAGE_HANDSHAKE_RESPONSE — Completes the handshake
  3. MESSAGE_HANDSHAKE_COOKIE — Anti-DoS cookie response
  4. MESSAGE_DATA — Encrypted user data packets

Each message begins with a common header identifying its type. Handshake messages additionally carry MAC fields for DoS protection, while data messages carry the receiver's session index and a nonce counter.
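
The type values come from the message_type enum in messages.h:

enum message_type {
    MESSAGE_INVALID = 0,
    MESSAGE_HANDSHAKE_INITIATION = 1,
    MESSAGE_HANDSHAKE_RESPONSE = 2,
    MESSAGE_HANDSHAKE_COOKIE = 3,
    MESSAGE_DATA = 4
};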

Packet Flow Architecture

Outbound packets move from per-peer staging queues into the device-wide encrypt_queue, where per-CPU workers perform the cryptographic work, and then into per-peer TX queues for in-order transmission; inbound data packets follow the mirror-image path through decrypt_queue and per-peer RX queues before NAPI delivery.

Concurrency Model

WireGuard uses a multi-stage pipeline to maximize throughput:

  • Per-Device Queues — Distribute encryption and decryption work across all online CPUs using per-CPU multicore_worker work items
  • Per-Peer Queues — Stage packets before and after cryptographic operations
  • NAPI Polling — Receive-side processing uses NAPI for efficient batching
  • RCU Synchronization — Read-copy-update for lock-free peer lookups during packet processing

The crypt_queue structure uses a lock-free ptr_ring to pass packets between stages, with per-CPU workers handling encryption/decryption in parallel.

Synchronization Primitives

  • device_update_lock — Protects peer list modifications and device configuration
  • socket_update_lock — Protects socket binding changes
  • endpoint_lock — Protects peer endpoint updates
  • RCU — Enables lock-free packet processing during peer removal

Noise Protocol & Cryptography

Relevant Files
  • src/noise.h
  • src/noise.c
  • src/crypto/zinc.h
  • src/crypto/zinc/chacha20poly1305.c
  • src/messages.h

WireGuard implements the Noise_IKpsk2 protocol, a modern cryptographic handshake that combines Elliptic Curve Diffie-Hellman (ECDH) key exchange with symmetric encryption. This section explains the cryptographic foundations and handshake flow.

Noise Protocol Overview

WireGuard uses Noise_IKpsk2_25519_ChaChaPoly_BLAKE2s, which specifies:

  • 25519: Curve25519 for ECDH operations (32-byte keys)
  • ChaChaPoly: ChaCha20-Poly1305 AEAD cipher for authenticated encryption
  • BLAKE2s: BLAKE2s hash function for key derivation and integrity

The protocol provides mutual authentication, forward secrecy, and identity hiding through a two-message (1-RTT) handshake followed by symmetric data encryption.

Cryptographic Primitives

// Key sizes (from messages.h)
NOISE_PUBLIC_KEY_LEN = 32        // Curve25519 key
NOISE_SYMMETRIC_KEY_LEN = 32     // ChaCha20-Poly1305 key
NOISE_HASH_LEN = 32              // BLAKE2s hash output
NOISE_AUTHTAG_LEN = 16           // Poly1305 authentication tag

Curve25519 performs ECDH key agreement. The implementation clamps private keys before use (in curve25519.h) to ensure proper scalar handling.

ChaCha20-Poly1305 provides authenticated encryption. ChaCha20 is a stream cipher generating a keystream XORed with plaintext, while Poly1305 computes a MAC over the ciphertext and additional data.

BLAKE2s is a cryptographic hash used for key derivation via HKDF (the HMAC-based extract-and-expand key derivation function), instantiated here with HMAC-BLAKE2s.

Handshake State Machine


The handshake progresses through five states defined in noise_handshake_state enum. Each peer maintains ephemeral keys, remote public keys, and a chaining key for key derivation.
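
The enum in noise.h is approximately:

enum noise_handshake_state {
    HANDSHAKE_ZEROED,
    HANDSHAKE_CREATED_INITIATION,
    HANDSHAKE_CONSUMED_INITIATION,
    HANDSHAKE_CREATED_RESPONSE,
    HANDSHAKE_CONSUMED_RESPONSE
};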

Handshake Messages

Initiation Message (148 bytes):

  • Unencrypted ephemeral public key (32 bytes)
  • Encrypted static public key (48 bytes: 32 + 16-byte tag)
  • Encrypted timestamp (28 bytes: 12 + 16-byte tag)
  • MAC fields for DoS protection

Response Message (92 bytes):

  • Unencrypted ephemeral public key (32 bytes)
  • Encrypted empty payload (16 bytes: tag only)
  • MAC fields
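
For reference, the initiation message's layout in messages.h looks roughly like this; the 148-byte total also counts the 4-byte header, the 4-byte sender index, and the two 16-byte MACs:

struct message_handshake_initiation {
    struct message_header header;                  /* 4-byte type field */
    __le32 sender_index;                           /* initiator's session index */
    u8 unencrypted_ephemeral[NOISE_PUBLIC_KEY_LEN];
    u8 encrypted_static[noise_encrypted_len(NOISE_PUBLIC_KEY_LEN)];
    u8 encrypted_timestamp[noise_encrypted_len(NOISE_TIMESTAMP_LEN)];
    struct message_macs macs;                      /* mac1 and mac2 */
};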

Key Derivation and Mixing

The kdf() function implements HKDF using BLAKE2s-HMAC. It extracts entropy from input data and expands it into multiple keys:

// Mix Diffie-Hellman result into chaining key and symmetric key
mix_dh(chaining_key, key, private, public)
  → curve25519(dh_result, private, public)
  → kdf(chaining_key, key, NULL, dh_result, ...)

// Mix preshared key (PSK) for additional security
mix_psk(chaining_key, hash, key, psk)
  → kdf(chaining_key, temp_hash, key, psk, ...)

The mix_hash() function updates the handshake hash with new data, ensuring all messages contribute to the final session keys.

Session Key Management

After successful handshake, wg_noise_handshake_begin_session() creates a noise_keypair containing:

  • Sending key: For encrypting outbound packets
  • Receiving key: For decrypting inbound packets
  • Counter state: Tracks packet numbers to prevent replay attacks

The noise_keypairs structure maintains three keypairs (current, previous, next) to handle key rotation and in-flight packets during rekeying.
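
The rotation slots are RCU-protected pointers; a sketch of the structure from noise.h:

struct noise_keypairs {
    struct noise_keypair __rcu *current_keypair;
    struct noise_keypair __rcu *previous_keypair;
    struct noise_keypair __rcu *next_keypair;
    spinlock_t keypair_update_lock;
};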

Replay Protection

Each symmetric key includes a noise_counter union with:

  • Atomic counter: Fast path for sending (atomic64_t)
  • Receive counter: Backtracking array (2048-bit window) for replay detection

The backtracking array allows out-of-order packets to be accepted within a sliding window, rejecting replays without penalizing legitimate packet reordering.
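
As a simplified illustration of the idea (not the kernel's word-at-a-time implementation in counter_validate() in receive.c), the window logic amounts to:

#include <linux/bitops.h>
#include <linux/kernel.h>
#include <linux/types.h>

#define WINDOW_BITS 2048                    /* size of the backtracking window */

struct replay_window {
    unsigned long bitmap[WINDOW_BITS / BITS_PER_LONG];
    u64 highest;                            /* highest counter accepted so far */
};

/* Returns true if the counter is fresh and marks it as seen. */
static bool window_accept(struct replay_window *w, u64 counter)
{
    if (counter > w->highest) {
        /* Window advances: clear the bit slots we slide over. */
        u64 i, diff = min_t(u64, counter - w->highest, WINDOW_BITS);

        for (i = 1; i <= diff; ++i)
            clear_bit((w->highest + i) % WINDOW_BITS, w->bitmap);
        w->highest = counter;
    } else if (w->highest - counter >= WINDOW_BITS) {
        return false;                       /* too far behind the window */
    }
    /* Reject duplicates inside the window, otherwise record the counter. */
    return !test_and_set_bit(counter % WINDOW_BITS, w->bitmap);
}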

Timestamp Handling

Timestamps use TAI64N format (64-bit seconds + 32-bit nanoseconds) rounded to prevent timing side-channels. The initiation message includes an encrypted timestamp that responders verify to detect replay attacks and rate-limit handshake floods (max 50 initiations/second).

Packet Send & Receive Pipeline

Relevant Files
  • src/send.c
  • src/receive.c
  • src/queueing.h
  • src/queueing.c
  • src/messages.h
  • src/device.h

WireGuard's packet processing pipeline is split into two parallel paths: send (encryption) and receive (decryption). Both use multi-stage queuing with cryptographic workers to maximize throughput while maintaining security.

Send Pipeline

The send path processes outgoing packets through these stages:

  1. Staging Queue - Packets arrive at peer->staged_packet_queue and wait for a valid encryption key.
  2. Key Validation - wg_packet_send_staged_packets() checks if the current keypair is valid and assigns nonces to all queued packets.
  3. Encryption Queue - Valid packets are enqueued to the device-wide encrypt_queue for parallel processing.
  4. Encryption Worker - wg_packet_encrypt_worker() runs on multiple CPUs, encrypting packets using ChaCha20-Poly1305 with SIMD acceleration.
  5. TX Queue - Encrypted packets move to per-peer tx_queue with state PACKET_STATE_CRYPTED.
  6. TX Worker - wg_packet_tx_worker() sends encrypted packets via UDP and updates timers.

Key management triggers rekeying: if message count exceeds REKEY_AFTER_MESSAGES or time exceeds REKEY_AFTER_TIME, a new handshake is initiated.
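
A condensed, illustrative version of that check (modeled on keep_key_fresh() in send.c; treat the field names as a sketch rather than the exact source):

/* Initiate a new handshake if the current sending key is getting old. */
static void keep_key_fresh_sketch(struct wg_peer *peer)
{
    struct noise_keypair *keypair;
    bool rekey = false;

    rcu_read_lock_bh();
    keypair = rcu_dereference_bh(peer->keypairs.current_keypair);
    if (keypair && READ_ONCE(keypair->sending.is_valid) &&
        (atomic64_read(&keypair->sending.counter.counter) > REKEY_AFTER_MESSAGES ||
         (keypair->i_am_the_initiator &&
          wg_birthdate_has_expired(keypair->sending.birthdate, REKEY_AFTER_TIME))))
        rekey = true;
    rcu_read_unlock_bh();

    if (rekey)
        wg_packet_send_queued_handshake_initiation(peer, false);
}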

Receive Pipeline

The receive path mirrors the send path:

  1. Packet Reception - wg_packet_receive() validates the UDP packet header and routes by message type.
  2. Handshake Processing - Handshake packets queue to incoming_handshakes and are processed by wg_packet_handshake_receive_worker() on the dedicated handshake workqueue.
  3. Data Packet Routing - Data packets lookup the keypair via key_idx and enqueue to decrypt_queue.
  4. Decryption Worker - wg_packet_decrypt_worker() decrypts packets and validates the Poly1305 authentication tag.
  5. NAPI Poll - wg_packet_rx_poll() validates packet nonces using a replay detection bitmap (RFC 6479), then delivers to the network stack.

Queueing Architecture

Each device owns two crypt_queue instances (encrypt_queue and decrypt_queue) whose ptr_ring buffers feed per-CPU workers, while each peer owns staging, TX, and RX queues that preserve packet ordering around the parallel cryptographic stage.
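
The shared queue type is small; a condensed sketch of the definitions in queueing.h:

struct multicore_worker {
    void *ptr;                       /* back-pointer to the owning queue or peer */
    struct work_struct work;
};

struct crypt_queue {
    struct ptr_ring ring;            /* lock-free ring of sk_buffs */
    union {
        struct multicore_worker __percpu *worker; /* device-wide parallel queues */
        struct work_struct work;                  /* per-peer serial queues */
    };
    /* ... */
};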

Packet State Machine

Packets transition through three states:

  • PACKET_STATE_UNCRYPTED - Initial state when queued for encryption/decryption.
  • PACKET_STATE_CRYPTED - Successfully encrypted or decrypted; ready for transmission or delivery.
  • PACKET_STATE_DEAD - Encryption/decryption failed; packet is dropped.

State changes use atomic operations with acquire/release semantics to coordinate between workers without locks.
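
The states are defined in queueing.h and stored in each packet's control block; workers publish a result with atomic_set_release() and consumers read it with atomic_read_acquire():

enum packet_state {
    PACKET_STATE_UNCRYPTED,
    PACKET_STATE_CRYPTED,
    PACKET_STATE_DEAD
};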

Handshake Packets

Handshake messages (initiation, response, cookie) bypass the data pipeline:

  • Initiation - Triggered by wg_packet_send_queued_handshake_initiation() when keys expire or on demand.
  • Response - Sent immediately upon receiving a valid initiation via wg_packet_send_handshake_response().
  • Cookie - Under load, the responder replies with an encrypted cookie that the initiator must echo (via MAC2) in a retried initiation before its handshake is processed, preventing DoS amplification.

Handshakes use dedicated workqueues (handshake_send_wq, handshake_receive_wq) to avoid blocking data traffic.

Performance Optimizations

  • Per-CPU Workers - Encryption and decryption workers run on multiple CPUs to parallelize cryptographic operations.
  • SIMD Context - Workers acquire SIMD context once and reuse it across multiple packets to reduce overhead.
  • Ptr Ring - Lock-free ring buffers coordinate work between stages without spinlocks.
  • NAPI Polling - Receive-side uses NAPI to batch packet processing and reduce interrupt overhead.
  • Checksum Offloading - Decrypted packets are marked CHECKSUM_UNNECESSARY since Poly1305 already verified integrity.

Peer Lookup & Allowed IPs Routing

Relevant Files
  • src/peerlookup.h
  • src/peerlookup.c
  • src/allowedips.h
  • src/allowedips.c
  • src/peer.h

WireGuard uses two complementary lookup systems to route packets to the correct peer: peer lookup by public key and allowed IPs routing. These systems work together to identify which peer should handle an incoming or outgoing packet.

Peer Lookup by Public Key

The peer lookup system maintains two hash tables for fast peer identification:

Public Key Hashtable (pubkey_hashtable) uses SipHash to securely hash peer public keys into a 2048-entry table. When a WireGuard message arrives, the sender's public key is extracted and hashed to find the corresponding peer. This is critical for handshake messages and data packets that need peer authentication.

Index Hashtable (index_hashtable) maps session indices to peers. During the Noise Protocol handshake, each side generates a random 32-bit index for the session. This index is included in all subsequent data packets, allowing fast peer lookup without extracting and hashing the full public key on every packet. The index table supports both handshake and keypair entries.

struct pubkey_hashtable {
    DECLARE_HASHTABLE(hashtable, 11);  /* 2048 entries */
    siphash_key_t key;
    struct mutex lock;
};

struct index_hashtable {
    DECLARE_HASHTABLE(hashtable, 13);  /* 8192 entries */
    spinlock_t lock;
};

Allowed IPs Routing

The allowed IPs system routes packets based on source or destination IP addresses using a binary trie data structure. Each peer is associated with a set of allowed IPv4 and IPv6 CIDR ranges. When a packet arrives, the destination IP is matched against the trie to find the responsible peer.

The trie is optimized for modern CPUs using bit-level operations. Each node stores:

  • A peer reference (if this node represents an allowed IP range)
  • Two child pointers (for bit 0 and bit 1 in the IP address)
  • The IP prefix and CIDR length
  • Cached bit positions for fast traversal

struct allowedips_node {
    struct wg_peer __rcu *peer;
    struct allowedips_node __rcu *bit[2];
    u8 bits[16] __aligned(__alignof(u64));
    u8 cidr, bit_at_a, bit_at_b, bitlen;
};
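
The public interface in allowedips.h is correspondingly small; the insertion and lookup entry points are roughly:

/* Insert a CIDR range for a peer (serialized by the device_update_lock). */
int wg_allowedips_insert_v4(struct allowedips *table, const struct in_addr *ip,
                            u8 cidr, struct wg_peer *peer, struct mutex *lock);
int wg_allowedips_insert_v6(struct allowedips *table, const struct in6_addr *ip,
                            u8 cidr, struct wg_peer *peer, struct mutex *lock);

/* Match an outgoing packet by destination, or a decrypted packet by source. */
struct wg_peer *wg_allowedips_lookup_dst(struct allowedips *table,
                                         struct sk_buff *skb);
struct wg_peer *wg_allowedips_lookup_src(struct allowedips *table,
                                         struct sk_buff *skb);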

Lookup Flow

Incoming handshake messages are resolved to a peer through pubkey_hashtable; incoming data packets carry a session index that is resolved through index_hashtable to the keypair and its peer; outgoing packets are matched against the allowed IPs trie by destination address, and decrypted packets are validated against it by source address.

Key Design Decisions

RCU Synchronization: Both systems use Read-Copy-Update (RCU) for lock-free reads. Lookups acquire rcu_read_lock_bh() to safely traverse structures while updates use mutex or spinlock protection.

Randomized Session Indices: The index hashtable assigns each session a random 32-bit index and simply retries on collision rather than handing out predictable sequential values; with far more possible indices than peers, the first attempt succeeds with probability greater than 99.9%.

Sequence Numbers: The allowed IPs table maintains a sequence counter (seq) that increments on modifications, allowing callers to detect configuration changes without holding locks.

Memory Efficiency: A single allowedips_node layout serves both IPv4 and IPv6 prefixes, with the 16-byte bits array aligned for u64 access, so node allocations stay compact for both address families.

DoS Protection: Cookies & Rate Limiting

Relevant Files
  • src/cookie.h
  • src/cookie.c
  • src/ratelimiter.h
  • src/ratelimiter.c
  • src/messages.h

WireGuard implements two layers of DoS protection: cryptographic cookies and per-IP rate limiting. Together they prevent unauthenticated attackers from flooding the handshake process and protect the host against resource exhaustion.

Cookie MAC Validation

The cookie system uses MAC authentication to verify that incoming handshake packets originate from plausible sources. Each handshake message carries two MAC fields:

  • MAC1: Computed over the message body (everything preceding the MAC fields) using a key derived from the responder's static public key. A valid MAC1 proves the sender at least knows whom it is talking to.
  • MAC2: Computed using a responder-issued cookie, proving the sender received a prior reply at its claimed source address.

When a client initiates a handshake, it sends MAC1 but leaves MAC2 zeroed, since it has not yet received a cookie. The responder validates MAC1 and, if the MAC is valid but the responder is under load, replies with a MESSAGE_HANDSHAKE_COOKIE containing an encrypted cookie instead of continuing the handshake. The client then includes this cookie when recomputing MAC2 on its retried message.

enum cookie_mac_state {
    INVALID_MAC,
    VALID_MAC_BUT_NO_COOKIE,
    VALID_MAC_WITH_COOKIE_BUT_RATELIMITED,
    VALID_MAC_WITH_COOKIE
};

The cookie itself is a keyed hash of the client's IP address and UDP source port under a secret that rotates periodically (every two minutes), and the cookie reply that delivers it is encrypted with XChaCha20-Poly1305. This makes cookies specific to each source address and prevents them from being reused across different sources.

Rate Limiting Per IP

Once a packet passes cookie validation, it enters the rate limiter. WireGuard enforces a per-IP limit of 20 packets per second with a burst allowance of 5 packets. This is implemented using a token bucket algorithm:

enum {
    PACKETS_PER_SECOND = 20,
    PACKETS_BURSTABLE = 5,
    PACKET_COST = NSEC_PER_SEC / PACKETS_PER_SECOND,
    TOKEN_MAX = PACKET_COST * PACKETS_BURSTABLE
};

The rate limiter maintains a hash table of per-IP entries, keyed by network namespace and IP address. Each entry tracks:

  • Tokens: Available capacity (max TOKEN_MAX)
  • Last update time: Used to calculate token replenishment
  • Lock: Protects concurrent access

When a packet arrives, the limiter calculates elapsed time since the last packet, adds proportional tokens, and deducts one token per packet. If tokens are insufficient, the packet is dropped.
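
In code terms, the accounting is roughly the following (a sketch of wg_ratelimiter_allow() in ratelimiter.c, with tokens measured in nanoseconds so that elapsed time replenishes them one-for-one):

u64 now = ktime_get_coarse_boottime_ns();
u64 tokens = min_t(u64, TOKEN_MAX, entry->tokens + now - entry->last_time_ns);
bool allowed = false;

entry->last_time_ns = now;
if (tokens >= PACKET_COST) {
    entry->tokens = tokens - PACKET_COST;  /* charge one packet */
    allowed = true;
} else {
    entry->tokens = tokens;                /* keep what accrued, drop the packet */
}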

Memory Management

The rate limiter uses a garbage collection mechanism to prevent unbounded memory growth. Entries older than 1 second are automatically removed via a deferred work queue. The table size is dynamically calculated based on available RAM, with a maximum of 8 entries per hash bucket.

Integration with Handshake Flow

The cookie and rate limiting mechanisms work together:

  1. Client sends MESSAGE_HANDSHAKE_INITIATION with a valid MAC1
  2. If under load, the server replies with MESSAGE_HANDSHAKE_COOKIE instead of processing the handshake
  3. Client retries the MESSAGE_HANDSHAKE_INITIATION, now carrying both MAC1 and MAC2
  4. Server validates both MACs and the per-IP rate limit before consuming the handshake

This design ensures that even unauthenticated attackers cannot exhaust server resources, as they cannot produce valid MAC1 values without knowing the server's public key.

Timers & Connection Management

Relevant Files
  • src/timers.h
  • src/timers.c
  • src/peer.h
  • src/messages.h

WireGuard maintains peer connectivity through a sophisticated timer system that manages handshakes, keepalives, and key material lifecycle. Each peer has five independent timers that coordinate to ensure reliable communication and security.

Timer Architecture

Each peer owns five kernel timers, armed and re-armed from the send and receive paths; the Core Timers section below describes when each fires and what it does.

Core Timers

Retransmit Handshake (timer_retransmit_handshake): Fires after REKEY_TIMEOUT (5 seconds) plus jitter when a handshake initiation is sent. If no response arrives, the timer retries up to MAX_TIMER_HANDSHAKES (18) times. After exhausting retries, staged packets are purged and a key-zeroing timer is scheduled.

Send Keepalive (timer_send_keepalive): Triggered when data is received. Fires after KEEPALIVE_TIMEOUT (10 seconds) to send an empty authenticated packet, keeping the connection alive through NAT. If another packet arrives before expiry, the flag timer_need_another_keepalive is set to reschedule after sending.

New Handshake (timer_new_handshake): Scheduled when data is sent. Fires after KEEPALIVE_TIMEOUT + REKEY_TIMEOUT (15 seconds) plus jitter if no authenticated packet is received. This prevents stale sessions and ensures fresh keys.

Zero Key Material (timer_zero_key_material): Fires after REJECT_AFTER_TIME * 3 (540 seconds) to securely erase all ephemeral keys and handshake state. Queues work on the handshake workqueue to avoid holding locks during key zeroing.

Persistent Keepalive (timer_persistent_keepalive): Optional timer for user-configured intervals. Sends keepalive packets at fixed intervals, useful for peers behind restrictive NAT or firewalls.
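
The timers are driven indirectly: the send and receive paths call small event hooks declared in timers.h, roughly the following set, and each hook arms or clears the appropriate timers:

void wg_timers_data_sent(struct wg_peer *peer);           /* arm new-handshake timer */
void wg_timers_data_received(struct wg_peer *peer);       /* arm send-keepalive timer */
void wg_timers_any_authenticated_packet_sent(struct wg_peer *peer);
void wg_timers_any_authenticated_packet_received(struct wg_peer *peer);
void wg_timers_any_authenticated_packet_traversal(struct wg_peer *peer);
void wg_timers_handshake_initiated(struct wg_peer *peer); /* arm retransmit timer */
void wg_timers_handshake_complete(struct wg_peer *peer);
void wg_timers_session_derived(struct wg_peer *peer);     /* schedule key zeroing */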

Timer Safety

The mod_peer_timer() helper ensures timers only fire when the device is running and the peer is alive. It uses RCU read-side locking to safely check peer state before modifying timers. All timers are synchronized during peer cleanup with del_timer_sync() to prevent use-after-free.

Key Lifecycle Integration

Timers coordinate with the noise protocol to manage session freshness. When a session is derived (after handshake completion), wg_timers_session_derived() schedules key zeroing. The wg_birthdate_has_expired() inline function checks if a key's age exceeds limits, triggering rekeying when REKEY_AFTER_TIME (120 seconds) is reached or REKEY_AFTER_MESSAGES (2^60) is exceeded.
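
The age check itself is a one-liner; as declared in timers.h it is essentially:

static inline bool wg_birthdate_has_expired(u64 birthday_nanoseconds,
                                            u64 expiration_seconds)
{
    return (s64)(birthday_nanoseconds + expiration_seconds * NSEC_PER_SEC)
           <= (s64)ktime_get_coarse_boottime_ns();
}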

Timer Constants

Constant              Value    Purpose
REKEY_TIMEOUT         5 sec    Handshake retransmit interval
KEEPALIVE_TIMEOUT     10 sec   Keepalive packet interval
REKEY_AFTER_TIME      120 sec  Force rekey after this duration
REJECT_AFTER_TIME     180 sec  Reject packets after this duration
MAX_TIMER_HANDSHAKES  18       Max retransmit attempts
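
These values come from the limits enum in messages.h; an abridged excerpt:

enum limits {
    REKEY_AFTER_MESSAGES = 1ULL << 60,
    REKEY_TIMEOUT = 5,
    REKEY_AFTER_TIME = 120,
    REJECT_AFTER_TIME = 180,
    INITIATIONS_PER_SECOND = 50,
    MAX_PEERS_PER_DEVICE = 1U << 20,
    KEEPALIVE_TIMEOUT = 10,
    MAX_TIMER_HANDSHAKES = 90 / REKEY_TIMEOUT,
    /* ... */
};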

Configuration & Network I/O

Relevant Files
  • src/netlink.c – Generic netlink interface for device and peer configuration
  • src/netlink.h – Netlink API declarations
  • src/socket.c – UDP socket creation and packet transmission
  • src/socket.h – Socket API declarations
  • src/device.h – Device structure with socket references
  • src/peer.h – Peer endpoint and socket cache definitions

WireGuard uses two complementary I/O mechanisms: generic netlink for userspace configuration and UDP sockets for encrypted packet transmission. These systems work together to provide a complete network interface.

Netlink Configuration Interface

The netlink subsystem (netlink.c) exposes WireGuard configuration through the Linux generic netlink (genl) family. This allows userspace tools like wg to query and modify device and peer settings without direct system calls.

Key operations:

  • Device queries (WG_CMD_GET_DEVICE) – Dump device state including listen port, firewall mark, and all peers with their allowed IPs
  • Device configuration (WG_CMD_SET_DEVICE) – Set private keys, listen ports, firewall marks, and manage peer lists
  • Peer management – Add, update, or remove peers with public keys, preshared keys, endpoints, and allowed IP ranges

The netlink layer validates all incoming attributes using strict policies (device_policy, peer_policy, allowedip_policy) that enforce exact lengths for cryptographic keys and proper types for all parameters. This prevents malformed configuration from reaching the core logic.
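
The commands themselves come from the shared UAPI header (uapi/wireguard.h); the command enum is simply:

enum wg_cmd {
    WG_CMD_GET_DEVICE,
    WG_CMD_SET_DEVICE,
    __WG_CMD_MAX
};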

UDP Socket Layer

The socket subsystem (socket.c) manages IPv4 and IPv6 UDP sockets for packet transmission and reception. Each WireGuard device maintains two sockets: one for IPv4 and one for IPv6 (if available).

Socket initialization (wg_socket_init):

  • Creates UDP sockets bound to a configurable port (default 51820)
  • Registers a custom receive callback (wg_receive) that routes incoming packets to the packet processing pipeline
  • Configures socket options for atomic allocation and maximum send buffer size
  • Attempts to bind both IPv4 and IPv6 sockets to the same port; retries up to 100 times if the port is in use

Packet transmission (send4 and send6):

  • Performs route lookup using the kernel’s routing table to determine the outgoing interface and source address
  • Caches routing decisions per peer using dst_cache to avoid repeated lookups
  • Validates source addresses and detects routing loops before transmission
  • Uses UDP tunnel helpers to encapsulate packets with proper checksums and TTL/hop limits
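
Callers elsewhere in the module do not invoke send4/send6 directly; they use the helpers declared in socket.h, roughly:

int wg_socket_send_buffer_to_peer(struct wg_peer *peer, void *data,
                                  size_t len, u8 ds);
int wg_socket_send_skb_to_peer(struct wg_peer *peer, struct sk_buff *skb,
                               u8 ds);
int wg_socket_send_buffer_as_reply_to_skb(struct wg_device *wg,
                                          struct sk_buff *in_skb,
                                          void *out_buffer, size_t len);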

Endpoint management:

  • Extracts source and destination addresses from incoming packets via wg_socket_endpoint_from_skb
  • Updates peer endpoints dynamically when packets arrive from new addresses
  • Maintains endpoint locks to safely handle concurrent updates from multiple CPUs
  • Clears cached routes when endpoints change or firewall marks are updated

Configuration & Socket Synchronization

Device configuration changes trigger socket reinitialization via wg_socket_reinit, which atomically swaps socket pointers using RCU (Read-Copy-Update). This allows in-flight packets to complete on old sockets while new configuration takes effect immediately.

The device_update_lock and socket_update_lock mutexes coordinate between netlink configuration threads and packet processing workers, ensuring consistent state across the system.

/* Example: Setting a new listen port via netlink */
if (info->attrs[WGDEVICE_A_LISTEN_PORT]) {
    ret = set_port(wg, nla_get_u16(info->attrs[WGDEVICE_A_LISTEN_PORT]));
    /* Internally calls wg_socket_init() to rebind sockets */
}

Firewall Mark & Routing Policy

The firewall mark (fwmark) allows WireGuard packets to be routed through policy-based routing rules. When the mark is updated, all peer endpoint source addresses are cleared, forcing a fresh route lookup on the next packet transmission. This ensures packets respect the new routing policy immediately.