hasura/graphql-engine

Hasura GraphQL Engine

Last updated on Dec 15, 2025 (Commit: 355aade)

Overview

Relevant Files
  • README.md
  • V2-README.md
  • v3/README.md

Hasura is an open-source GraphQL engine that instantly provides secure, composable GraphQL APIs over existing databases and services. It accelerates API development by 10x by eliminating the need to write backend code for common data access patterns.

What Hasura Does

Hasura connects to your databases and automatically generates a GraphQL API with built-in authorization, real-time subscriptions, and event triggers. You can extend it with custom business logic through remote schemas, actions, and webhooks. It supports multiple databases including PostgreSQL, MySQL, MS SQL Server, BigQuery, MongoDB, and ClickHouse.

Dual-Version Architecture

The repository contains two major engine versions:

Hasura V2 (Stable) - A production-ready Haskell-based engine with mature features including GraphQL queries, mutations, subscriptions, event triggers, and fine-grained access control. It powers the current stable releases and is widely deployed in production.

Hasura V3 (Next-Generation) - A Rust-based engine built on the Open Data Domain Specification (OpenDD) and Native Data Connector (NDC) specifications. It powers Hasura Data Delivery Network (DDN) and provides improved performance, extensibility, and support for diverse data sources through a connector-based architecture.

Core Components

┌─────────────────────────────────────────────────────┐
│  Hasura GraphQL Engine Repository                   │
├─────────────────────────────────────────────────────┤
│  server/          → V2 Engine (Haskell)             │
│  v3/              → V3 Engine (Rust)                │
│  cli/             → CLI Tool (Go)                   │
│  frontend/        → Admin Console (React/TypeScript)│
│  dc-agents/       → Data Connector SDK              │
│  docs/            → Documentation (Docusaurus)      │
└─────────────────────────────────────────────────────┘

Server (server/) - The V2 GraphQL engine written in Haskell. Handles query execution, authorization, subscriptions, and event triggers.

V3 Engine (v3/) - The next-generation Rust-based engine with modular crates for metadata resolution, query planning, execution, and GraphQL schema generation.

CLI Tool (cli/) - Go-based command-line tool for project initialization, migrations, metadata management, and deployment.

Frontend Console (frontend/) - React-based admin UI for managing databases, tables, permissions, and GraphQL schema configuration.

Data Connectors (dc-agents/) - SDK and reference implementations for building connectors that enable Hasura to work with any data source.

Key Features

  • Instant GraphQL APIs - Auto-generate GraphQL from database schemas
  • Real-time Subscriptions - Convert any query to a live subscription
  • Event Triggers - Execute webhooks on database changes
  • Fine-grained Authorization - Dynamic access control with role-based permissions
  • Remote Schemas - Merge custom GraphQL schemas for business logic
  • Actions - Extend with custom REST APIs and business logic
  • Multi-database Support - PostgreSQL, MySQL, MS SQL Server, BigQuery, MongoDB, ClickHouse
  • Admin Console - Web UI for schema management and API configuration

Architecture & Core Concepts

Relevant Files
  • server/documentation/overview.md
  • server/documentation/deep-dives/schema.md
  • server/documentation/deep-dives/role-permissions.md
  • architecture/live-queries.md
  • architecture/streaming-subscriptions.md

Hasura is a GraphQL engine that automatically generates a composable, secure GraphQL API from your database schema. The architecture centers on three core concepts: the schema cache, schema parsers, and query execution.

Schema Cache

The schema cache is a live, in-memory copy of your metadata and database introspection. It contains:

  • SourceInfo for each connected database (tables, columns, functions, relationships)
  • Tracked metadata (permissions, relationships, custom types)
  • Cached schema parsers for each role (query, mutation, subscription parsers)

The schema cache is rebuilt whenever metadata changes, ensuring the GraphQL schema stays synchronized with your configuration.
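
A rough TypeScript sketch of the cache's shape (the engine defines this in Haskell; all names below are illustrative):

interface TableInfo {
  columns: string[];                      // introspected column names
  relationships: string[];                // tracked relationships to other tables
}

interface SourceInfo {
  tables: Map<string, TableInfo>;         // tracked tables for this database
  functions: Set<string>;                 // tracked SQL functions
}

interface RoleParsers {
  query: unknown;                         // cached query parser for this role
  mutation: unknown;
  subscription: unknown;
}

interface SchemaCache {
  sources: Map<string, SourceInfo>;       // one entry per connected database
  roleParsers: Map<string, RoleParsers>;  // cached schema parsers per role
}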

Schema Parsers & GraphQL Schema Generation

Hasura uses a unified approach: the same code that generates the GraphQL schema also generates the parsers that validate and transform incoming queries. This ensures consistency between schema and execution.

Key concepts:

  • Parser combinators build both type information and parsing functions simultaneously
  • Introspectable parsers transform GraphQL AST into Hasura's Intermediate Representation (IR)
  • Memoization handles recursive schema structures (e.g., relationships between tables) using lazy evaluation
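
The core idea can be sketched in TypeScript (the engine implements this with Haskell parser combinators; every name here is illustrative):

// One value carries both the advertised GraphQL type and the function that
// parses a selection of that type into IR, so schema and execution cannot drift.
type FieldNode = { name: string; selections?: FieldNode[] };

interface Parser<IR> {
  typeDefinition: string;             // contributes to the introspection schema
  parse: (ast: FieldNode) => IR;      // transforms the GraphQL AST into IR
}

// Combining parsers composes both halves at once.
function objectParser<IR>(name: string, fields: Record<string, Parser<IR>>): Parser<IR[]> {
  return {
    typeDefinition: `type ${name} { ${Object.keys(fields).join(' ')} }`,
    parse: (ast) =>
      (ast.selections ?? []).map((sel) => {
        const field = fields[sel.name];
        if (!field) throw new Error(`field ${sel.name} is not in the schema`);
        return field.parse(sel);
      }),
  };
}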

Query Execution Pipeline

When a GraphQL query arrives:

  1. Transport layer (HTTP/WebSocket) receives the request
  2. Parser selection retrieves the correct parser based on user role
  3. Parsing transforms GraphQL AST into IR
  4. Execution planning generates backend-specific SQL or API calls
  5. Result assembly joins results from multiple sources (databases, remote schemas)
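
Condensed into a TypeScript sketch, where each stub stands in for an engine subsystem (all names are illustrative):

type Session = { role: string; vars: Record<string, string> };
type IR = { kind: 'ir' };
type Plan = { steps: Array<() => Promise<unknown>> };

// Stubs standing in for the engine's parsing and planning subsystems.
const parseForRole = (_role: string, _query: string): IR => ({ kind: 'ir' });
const planExecution = (_ir: IR, _vars: Record<string, string>): Plan => ({ steps: [] });

async function serveGraphQL(query: string, session: Session) {
  const ir = parseForRole(session.role, query);                  // steps 2-3: role parser, AST -> IR
  const plan = planExecution(ir, session.vars);                  // step 4: backend-specific plan
  const results = await Promise.all(plan.steps.map((s) => s())); // run against sources
  return results;                                                // step 5: assemble results
}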

Roles & Permissions

Hasura implements two types of permissions:

Schema Permissions determine which fields/tables are visible to a role. If a role cannot access any columns in a table, the table is removed from their schema entirely.

Data Permissions filter rows at query time using WHERE clauses. They don't change the schema but restrict what data is returned. These are implemented as boolean predicates embedded in generated SQL.

Declarative Authorization

Authorization rules are declarative and applied at the query layer, not post-fetch. This means:

  • Permissions are embedded as predicates in the generated SQL
  • Session variables (user ID, roles, etc.) are passed as query parameters
  • The database engine handles filtering efficiently
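
For example, a row-level select permission and the kind of parameterised SQL it might compile to (the permission uses Hasura's boolean-expression syntax; the SQL shape is illustrative):

// Role "user" may only select articles they authored.
const selectPermission = {
  filter: { author_id: { _eq: 'X-Hasura-User-Id' } },
};

// At query time the predicate is embedded in the generated SQL and the
// session variable is bound as a parameter, so filtering happens in-database.
const generatedSql = `
  SELECT id, title
  FROM articles
  WHERE author_id = $1  -- $1 bound from the X-Hasura-User-Id session variable
`;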

Live Queries & Streaming Subscriptions

Hasura supports two subscription patterns:

Live Queries poll the database at a fixed interval, refetching the result set and pushing updates to clients when it changes. Multiple clients' queries are multiplexed into single SQL queries for efficiency.

Streaming Subscriptions provide cursor-based pagination over append-only tables, enabling exactly-once delivery semantics and efficient handling of large result sets.

Both use batching and multiplexing to reduce database load: hundreds of GraphQL clients can map to a single database connection.
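
A sketch of the multiplexing idea (the SQL Hasura actually generates differs in detail; this only shows the shape):

// N subscribers running the same query with different session variables are
// batched into one statement over an array of variable sets.
const variableSets = [
  { user_id: '7' },
  { user_id: '42' },   // one entry per subscribed client
];

const multiplexedSql = `
  SELECT vars.ix, q.result
  FROM unnest($1::json[]) WITH ORDINALITY AS vars(v, ix)
  CROSS JOIN LATERAL (
    SELECT json_agg(row_to_json(t)) AS result
    FROM (SELECT id, title FROM articles
          WHERE author_id = (vars.v ->> 'user_id')) t
  ) q;
`;
// Each result row is fanned back out to the subscriber at index "ix".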

Backend Abstraction

Most GraphQL logic is backend-agnostic. Backend-specific code lives in Hasura.Backends.[Name] modules and handles:

  • SQL generation and translation
  • Type mapping
  • Function execution
  • Permission predicate compilation

This allows Hasura to support PostgreSQL, MySQL, MSSQL, BigQuery, and other databases with a unified GraphQL API.
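
The Haskell server expresses this with a Backend type class; a TypeScript analogue might look like the following (illustrative names only):

interface Backend {
  translateQuery(ir: unknown): string;        // IR -> native query language
  mapScalarType(graphqlType: string): string; // GraphQL scalar -> column type
  compilePredicate(filter: unknown): string;  // permission expression -> native predicate
}

// A hypothetical PostgreSQL instance of the abstraction.
const postgres: Backend = {
  translateQuery: () => 'SELECT ...',
  mapScalarType: (t) => (t === 'String' ? 'text' : t.toLowerCase()),
  compilePredicate: () => 'author_id = $1',
};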

V2 Engine (Haskell)

Relevant Files
  • server/src-exec/Main.hs - Entry point
  • server/src-lib/Hasura/App.hs - Core application initialization
  • server/src-lib/Hasura/Server/App.hs - WAI application setup
  • server/src-lib/Hasura/GraphQL/ - GraphQL execution pipeline
  • server/src-lib/Hasura/RQL/ - Intermediate representation & metadata
  • server/graphql-engine.cabal - Build configuration

The V2 Engine is the Haskell-based GraphQL server that powers Hasura. It transforms metadata into a live GraphQL API by combining schema caching, query parsing, and multi-backend execution.

Core Components

Schema Cache - A live, in-memory copy of metadata containing source information, tracked tables, functions, and role-based GraphQL parsers. Rebuilt when metadata changes.

GraphQL Parser - Unified schema building combinators in Hasura.GraphQL.Schema that generate both the GraphQL schema and corresponding parsers from metadata simultaneously.

Intermediate Representation (IR) - Backend-agnostic representation in Hasura.RQL.IR that translates incoming GraphQL queries into executable operations.

Query Execution - Hasura.GraphQL.Execute processes queries, mutations, and subscriptions, generating execution plans that run against configured backends.

Transport Layer - HTTP and WebSocket handlers in Hasura.GraphQL.Transport that receive queries, apply role-based parsers, and execute operations.

Startup Flow

  1. Initialization (Main.hs) - Parses environment variables and command-line arguments
  2. Connection Setup (Hasura.App) - Establishes metadata database and source connections
  3. Catalog Migration - Ensures metadata schema is current
  4. Schema Cache Build - Loads metadata and builds initial schema cache
  5. Server Start - Launches WAI application with configured routes and middleware

Key Modules

  • Hasura.Server - Network stack, routes, and authentication
  • Hasura.Backends - Backend-specific implementations (Postgres, MSSQL, BigQuery, etc.)
  • Hasura.Metadata - Metadata representation and validation
  • Hasura.Eventing - Event triggers and scheduled triggers
  • Hasura.Authentication - Session and role management

V3 Engine (Rust)

Relevant Files
  • v3/crates/engine - HTTP server and request routing
  • v3/crates/graphql/frontend - GraphQL request orchestration
  • v3/crates/graphql/lang-graphql - GraphQL lexer, parser, and validation
  • v3/crates/graphql/ir - Intermediate representation generation
  • v3/crates/plan - Query planning and execution tree construction
  • v3/crates/execute - NDC query execution and result processing
  • v3/crates/metadata-resolve - OpenDD metadata validation and resolution
  • v3/crates/open-dds - OpenDD metadata structures

The V3 Engine is a Rust-based GraphQL execution engine built on the Open Data Domain Specification (OpenDD) and the Native Data Connector (NDC) protocol. It powers Hasura's Data Delivery Network (DDN) by translating GraphQL queries into connector-agnostic execution plans.

Request Processing Pipeline

1. Parsing & Validation - The lang-graphql crate lexes and parses raw GraphQL documents into an AST, then validates against the schema to produce a normalized AST with type information.

2. Intermediate Representation (IR) - The graphql/ir crate transforms the normalized AST and resolved metadata into an IR that captures the semantic structure of the query (query root fields, selections, filters, etc.).

3. Query Planning - The plan crate converts the IR into an execution tree that specifies which NDC queries to run, in what order, and how to join results. This stage handles relationships, remote joins, and argument presets.

4. Execution - The execute crate runs NDC queries concurrently, processes remote predicates, handles remote joins, and aggregates results into ndc_models::RowSet objects.

5. Response Formatting - Each frontend (GraphQL, JSON:API, SQL) transforms the raw row sets into its output format.
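
The pipeline can be pictured as five typed stages, one per crate (a TypeScript stand-in for the Rust implementation; all types and names are placeholders):

type NormalizedAst = { kind: 'normalized' };  // lang-graphql output
type Ir = { kind: 'ir' };                     // graphql/ir output
type ExecutionTree = { kind: 'plan' };        // plan output
type RowSet = { rows: unknown[] };            // execute output (ndc_models::RowSet)

const parseAndValidate = (_doc: string): NormalizedAst => ({ kind: 'normalized' });
const toIr = (_ast: NormalizedAst): Ir => ({ kind: 'ir' });
const planQuery = (_ir: Ir): ExecutionTree => ({ kind: 'plan' });
const execute = async (_tree: ExecutionTree): Promise<RowSet[]> => [];
const formatResponse = (rowSets: RowSet[]): unknown => rowSets; // per-frontend formatting

async function handle(document: string) {
  return formatResponse(await execute(planQuery(toIr(parseAndValidate(document)))));
}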

Key Design Principles

Metadata-Driven Schema - The GraphQL schema is entirely determined by OpenDD metadata at startup. NDC availability or schema changes do not affect the server's ability to start or serve requests.

Separation of Concerns - The engine is decoupled from database-specific logic via the NDC abstraction. All data access goes through NDC agents, enabling support for any data source.

Instant Startup - Schema construction is fast and deterministic, eliminating the slow metadata introspection that plagued V2. The server starts reliably regardless of connector availability.

Core Components

Engine State - Holds the resolved metadata, GraphQL schema, authentication config, and HTTP client. Built once at startup and shared across all requests.

GraphQL Frontend - Orchestrates the full request pipeline: parsing, validation, IR generation, planning, and execution. Handles both HTTP and WebSocket connections.

Execution Tree - A recursive data structure representing the query execution plan. Supports nested queries, remote joins, and parallel field execution.

NDC Integration - Sends IR-derived queries to connectors via HTTP, handles response streaming, and processes results according to the execution plan.

CLI Tool

Relevant Files
  • cli/cli.go
  • cli/commands/root.go
  • cli/commands/migrate.go
  • cli/commands/metadata.go
  • cli/migrate/migrate.go
  • cli/pkg/metadata

The Hasura CLI is a command-line tool that provides a text-based interface to the Hasura GraphQL Engine's Metadata API. It enables developers to manage projects programmatically, making it ideal for version control, CI/CD pipelines, and infrastructure-as-code workflows.

Core Components

ExecutionContext is the central singleton that holds all contextual information during CLI execution. It manages configuration, logging, HTTP clients, and project state. Every command receives this context, ensuring consistent behavior across the CLI.

Project Structure follows a standard layout created by hasura init:

  • config.yaml - Project configuration (endpoint, admin secret, directories)
  • migrations/ - Database migration files
  • metadata/ - GraphQL schema and configuration (v2+)
  • seeds/ - Seed data files

Command Categories

GraphQL Commands manage core project operations:

  • init - Initialize a new Hasura project
  • migrate - Create, apply, and manage database migrations
  • metadata - Export, apply, and manage GraphQL schema configuration
  • console - Launch the web-based management console
  • actions - Create and manage custom GraphQL actions
  • seed - Manage database seed data
  • deploy - Apply metadata and migrations in one operation

Utility Commands provide additional functionality:

  • plugins - Extend CLI with custom commands
  • version - Display CLI and server versions
  • scripts - Run configuration upgrade scripts
  • completion - Generate shell auto-completion
  • update-cli - Update to the latest CLI version

Configuration System

The config.yaml file defines project settings:

version: 3
endpoint: https://my-graphql-engine.com
admin_secret: secret_key
metadata_directory: metadata
migrations_directory: migrations
seeds_directory: seeds
actions:
  kind: synchronous
  handler_webhook_baseurl: http://localhost:3000

Configuration can be overridden via environment variables (e.g., HASURA_GRAPHQL_ADMIN_SECRET) or command-line flags, enabling flexible deployment scenarios.
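
For instance, resolving the admin secret might follow the conventional flag > environment > config-file precedence (a hypothetical helper; the CLI's exact precedence rules live in its config-loading code):

function resolveAdminSecret(flagValue: string | undefined, config: { admin_secret?: string }) {
  return flagValue                              // --admin-secret flag wins
    ?? process.env.HASURA_GRAPHQL_ADMIN_SECRET  // then the environment variable
    ?? config.admin_secret;                     // then config.yaml
}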

Migration System

The migration system tracks database schema changes using timestamped SQL files. Migrations are applied sequentially and tracked in the database's hdb_catalog.schema_migrations table. The CLI supports:

  • Creating new migrations
  • Applying pending migrations
  • Rolling back migrations
  • Squashing multiple migrations into one
  • Checking migration status
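
Sequential application reduces to comparing local timestamped versions against those recorded in hdb_catalog.schema_migrations (a hypothetical sketch):

function pendingMigrations(localVersions: bigint[], applied: Set<bigint>): bigint[] {
  return localVersions
    .filter((v) => !applied.has(v))    // skip migrations already recorded
    .sort((a, b) => (a < b ? -1 : 1)); // timestamp order = application order
}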

Metadata Management

Metadata represents the GraphQL schema configuration and is stored either as a directory of YAML files (directory mode) or a single JSON/YAML file (file mode). The CLI can:

  • Export metadata from a running engine
  • Apply local metadata changes to the engine
  • Detect and resolve inconsistencies
  • Diff local and remote metadata

Plugin System

The CLI supports extensibility through plugins. Custom commands can be registered and invoked like built-in commands, allowing teams to create domain-specific tooling that integrates seamlessly with the Hasura workflow.

Console & Frontend

Relevant Files
  • frontend/apps/console-ce - Community Edition console app
  • frontend/apps/console-ee - Enterprise Edition console app
  • frontend/libs/console/legacy-ce - CE shared library with core features
  • frontend/libs/console/legacy-ee - EE shared library with enterprise features
  • frontend/package.json - Dependencies and build scripts
  • frontend/nx.json - Nx monorepo configuration

The Hasura Console is a React-based admin dashboard for managing databases and testing GraphQL APIs. It is organized as an Nx monorepo with separate Community Edition (CE) and Enterprise Edition (EE) variants.

Project Structure

Apps (frontend/apps/):

  • console-ce and console-ee are thin entry points that render the main console application
  • Both use Webpack for bundling and Tailwind CSS for styling
  • Entry point: src/main.tsx renders ConsoleCeApp or ConsoleEeApp from the respective libraries

Libraries (frontend/libs/console/):

  • legacy-ce contains the core console implementation with 40+ feature modules
  • legacy-ee extends CE with enterprise-specific features
  • Organized into components/, features/, hooks/, and utils/ directories

Key Features

The console includes comprehensive modules for:

  • Data Management: Browse rows, insert, delete, modify tables, manage databases
  • GraphQL: API explorer, GraphiQL interface, schema introspection
  • Permissions: Row-level security, function permissions, logical model permissions
  • Remote Schemas: Connect and manage external GraphQL schemas
  • Events: Event triggers, cron triggers, adhoc events
  • Actions: Custom business logic integration
  • Monitoring: Prometheus metrics, OpenTelemetry, query response caching
  • Settings: Database configuration, authentication, API limits

State Management

The console uses a hybrid approach:

  • Redux for global application state (metadata, data, actions, events)
  • React Query for server state and caching
  • Apollo Client for GraphQL operations
  • Zustand for lightweight local state

Build & Development

npm install                    # Install dependencies
nx serve console-ce            # Dev server (requires .env)
nx build console-ce            # Production build
nx test console-ce             # Unit tests via Jest
nx e2e console-ce-e2e --watch  # E2E tests via Cypress

The monorepo uses Nx for task orchestration, caching, and dependency management. Both CE and EE variants share the same build pipeline with feature flags controlling enterprise-only functionality.

Technology Stack

  • React 17 with TypeScript
  • Redux Toolkit for state management
  • Tailwind CSS + Ant Design for UI
  • Radix UI components for accessible primitives
  • Jest & Cypress for testing
  • Storybook for component documentation

Data Connectors & NDC

Relevant Files
  • dc-agents/README.md
  • dc-agents/DOCUMENTATION.md
  • dc-agents/sdk/README.md
  • dc-agents/reference/
  • dc-agents/sqlite/
  • dc-agents/dc-api-types/

Data Connectors enable Hasura GraphQL Engine to query external data sources by delegating execution to specialized REST API services called agents. This architecture allows runtime configuration of new databases without modifying the core HGE codebase.

Core Concept

A Data Connector Agent is a stateless HTTP service that abstracts a datasource behind a well-defined wire format. HGE communicates with agents via REST endpoints, sending queries and receiving results. Each agent describes its capabilities (supported operations, scalar types, mutations) and exposes a schema of available tables and functions.

Key Agent Endpoints

Agents must implement the following REST endpoints:

  • GET /capabilities - Returns agent capabilities (queries, mutations, relationships, scalar types) and configuration schema
  • POST /schema - Returns table/function definitions, columns, types, and nullability information
  • POST /query - Executes queries with filtering, pagination, ordering, and relationship traversal
  • POST /mutation - Performs inserts, updates, and deletes with optional atomicity guarantees
  • GET /health - Health check endpoint for agent and data source availability

Configuration & Headers

Agents receive configuration via the X-Hasura-DataConnector-Config header (JSON object validated against the agent's config schema). The X-Hasura-DataConnector-SourceName header identifies which HGE source is making the request, enabling per-source connection pooling and caching.
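
A sketch of calling an agent with those headers (the URL, source name, and config body are placeholders):

async function fetchAgentSchema(): Promise<unknown> {
  const res = await fetch('http://localhost:8100/schema', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-Hasura-DataConnector-Config': JSON.stringify({ db: './chinook.sqlite' }),
      'X-Hasura-DataConnector-SourceName': 'my_sqlite_source',
    },
    body: JSON.stringify({}),
  });
  if (!res.ok) throw new Error(`schema request failed: ${res.status}`);
  return res.json(); // table/function definitions, columns, types, nullability
}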

Query Execution Model

Queries are sent as JSON structures containing:

  • Target - Table or function to query
  • Relationships - Join definitions for traversing related tables
  • Query - Fields, filters (where), ordering, pagination (limit/offset), and aggregations
  • Filters - Recursive expression trees supporting and, or, not, exists, binary_op, and unary_op

Agents return rows as JSON arrays with nested relationship results for joined data.
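
An approximate example of such a request body (the authoritative shapes are defined in dc-api-types; treat the field names here as indicative only):

const queryRequest = {
  target: { type: 'table', name: ['Album'] },  // table to query
  relationships: [],                           // join definitions, if any
  query: {
    fields: {
      title: { type: 'column', column: 'Title' },
    },
    where: {                                   // recursive expression tree
      type: 'binary_op',
      operator: 'equal',
      column: { name: 'ArtistId' },
      value: { type: 'scalar', value: 1 },
    },
    limit: 10,
    offset: 0,
  },
};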

Capabilities Declaration

Agents declare support for:

  • Query features - Foreach queries, data redaction
  • Data schema - Primary keys, foreign keys, column nullability
  • Mutations - Insert/update/delete with atomicity levels (row, single_operation, homogeneous_operations, heterogeneous_operations)
  • Scalar types - Custom types with comparison operators, aggregate functions, and update operators
  • Relationships - Object and array relationships with column mapping

Reference Implementations

The SDK includes two reference agents:

  • Reference Agent (TypeScript) - In-memory Chinook dataset for testing and learning
  • SQLite Agent (TypeScript) - Production-ready SQLite connector with mutations, native queries, and metrics

Both agents follow best practices: self-describing capabilities, stateless design, type-safe APIs, and comprehensive test coverage.

SDK Workflow

The Data Connector SDK provides a complete development environment:

  1. Spin up stack with docker compose up (HGE, Postgres, agent, tests)
  2. Modify reference agent or replace with custom implementation
  3. Run test suite with docker compose run tests
  4. Query via HGE at http://localhost:8080
  5. Explore OpenAPI schema at http://localhost:8300

Supported Data Sources

The Hasura Hub lists available agents across categories:

  • OLTP - PostgreSQL, MySQL, MariaDB, Oracle, SQL Server, SQLite
  • OLAP - BigQuery, Snowflake, Athena
  • NoSQL - MongoDB
  • APIs - GitHub, Prometheus, Salesforce, Zendesk (coming soon)
  • Vector - Weaviate (coming soon)

Development Principles

When building agents, follow these guidelines:

  • Self-describing - Declare capabilities via /capabilities endpoint
  • Stateless - Each request carries all required information
  • Defer logic - Offload processing to the backend when possible
  • Type-safe - Expect and return types per OpenAPI schema
  • Backwards compatible - Preserve compatibility as agent evolves
  • Well-tested - Use provided test suite to validate implementation

Testing & Deployment

Relevant Files
  • server/tests-py - Python integration test suite
  • server/test-postgres - Haskell-based PostgreSQL tests
  • server/test-mssql - SQL Server backend tests
  • install-manifests - Deployment configurations
  • docker-compose.yaml - Local development environment
  • scripts/make/tests.mk - Test execution targets
  • .circleci/config.yml - CI/CD pipeline configuration

Test Infrastructure

The repository uses a multi-layered testing approach covering unit tests, integration tests, and backend-specific tests. Tests are organized by language and backend support:

Python Integration Tests (server/tests-py) provide comprehensive coverage of GraphQL queries, mutations, subscriptions, and metadata operations. These tests run against multiple backends (PostgreSQL, SQL Server, BigQuery, Citus, CockroachDB) and support filtering by backend using the --backend flag.

Haskell API Tests (server/lib/api-tests) use the Hspec framework and test the core GraphQL engine functionality. These tests are backend-agnostic and run against all supported data sources simultaneously.

Backend-Specific Tests include dedicated test suites for PostgreSQL (server/test-postgres) and SQL Server (server/test-mssql) with specialized test cases for database-specific features.

Running Tests

Quick Start:

# Python integration tests (PostgreSQL)
./server/tests-py/run.sh

# Filter specific tests
./server/tests-py/run.sh -- test_graphql_queries.py

# Test against different backends
scripts/dev.sh test --integration --backend mssql

Available Backends: postgres (default), mssql, bigquery, citus, cockroach

Haskell API Tests:

make test-postgres
make test-sqlserver
make test-backends  # All backends

Test Configuration

Tests use YAML-based configuration files for setup and teardown. The structure follows a pattern where each test class defines:

  • setup.yaml - Initial database and metadata state
  • teardown.yaml - Cleanup after tests
  • Backend-specific variants: setup_mssql.yaml, teardown_postgres.yaml, etc.

Test classes extend base fixtures (DefaultTestSelectQueries, DefaultTestMutations) that handle per-class or per-method setup/teardown automatically.

Deployment

Docker Compose is the primary local deployment method. The root docker-compose.yaml orchestrates multiple database services (PostgreSQL, SQL Server, CockroachDB, Citus) on non-standard ports (65001+) to avoid conflicts.

Production Deployments are configured in install-manifests/:

  • Kubernetes - Standard deployment with health checks and resource management
  • Docker Compose - Multi-database setups with optional HTTPS (Caddy), PostGIS, pgAdmin
  • Cloud Platforms - Azure Container Instances, Google Cloud Run, AWS ECS templates
  • Docker Run - Single-container deployment script

Key Environment Variables:

  • HASURA_GRAPHQL_DATABASE_URL - Primary database connection
  • HASURA_GRAPHQL_ENABLE_CONSOLE - Serve the web console (admin UI)
  • HASURA_GRAPHQL_DEV_MODE - Development mode (disable in production)

CI/CD Pipeline

CircleCI orchestrates the build and test pipeline with parallel job execution:

  1. Build Stage - Compiles server, CLI, and console
  2. Test Stage - Runs tests against all backends in parallel
  3. Artifact Collection - Stores test reports and logs

Tests are skipped automatically if only non-server files changed (via .ciignore). The pipeline waits for database services (PostgreSQL, SQL Server) to be ready before executing tests.

Test Matrix: The test-matrix target generates a feature compatibility matrix across all backends, useful for tracking which features work on which databases.

Metadata & API Types

Relevant Files
  • server/src-lib/Hasura/RQL/Types/Metadata.hs
  • server/src-lib/Hasura/RQL/DDL/Metadata/Types.hs
  • metadata-api-types/typescript
  • contrib/metadata-types/src/types
  • v3/crates/open-dds/src/lib.rs

Hasura's metadata system is the core configuration layer that defines all GraphQL schema, permissions, relationships, and integrations. The metadata API provides endpoints to export, import, and manage this configuration programmatically.

Core Metadata Structure

The Metadata type in Haskell represents the complete GraphQL Engine configuration:

data Metadata = Metadata
  { _metaSources :: Sources
  , _metaRemoteSchemas :: RemoteSchemas
  , _metaQueryCollections :: QueryCollections
  , _metaAllowlist :: MetadataAllowlist
  , _metaCustomTypes :: CustomTypes
  , _metaActions :: Actions
  , _metaCronTriggers :: CronTriggers
  , _metaRestEndpoints :: Endpoints
  , _metaApiLimits :: ApiLimit
  , _metaMetricsConfig :: MetricsConfig
  , _metaInheritedRoles :: InheritedRoles
  , _metaNetwork :: Network
  , _metaBackendConfigs :: BackendMap BackendConfigWrapper
  , _metaOpenTelemetryConfig :: OpenTelemetryConfig
  }

Each field represents a distinct configuration domain. Sources define database connections and tracked tables. Remote schemas integrate external GraphQL APIs. Custom types, actions, and REST endpoints extend the GraphQL schema. Permissions and roles control access.

Metadata Versioning

Metadata carries an explicit version number to track backwards-incompatible format changes:

  • Version 1: Legacy format (deprecated)
  • Version 2: Introduced source-based organization
  • Version 3: Current version with enhanced structure

The current version is always MVVersion3. When exporting metadata via the API, the latest version is always used.

Metadata API Operations

All metadata operations use the /v1/metadata endpoint with POST requests:

{
  "type": "<operation-type>",
  "version": 1 | 2,
  "args": {}
}

Core Operations:

  • export_metadata – Export current metadata as JSON
  • replace_metadata – Import/replace entire metadata
  • reload_metadata – Refresh metadata from database changes
  • clear_metadata – Reset all configuration
  • get_inconsistent_metadata – List validation errors
  • drop_inconsistent_metadata – Remove invalid objects
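
For example, exporting metadata with a plain HTTP call (operation and endpoint as listed above; the URL and secret are placeholders):

async function exportMetadata(): Promise<unknown> {
  const res = await fetch('http://localhost:8080/v1/metadata', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'x-hasura-admin-secret': process.env.HASURA_GRAPHQL_ADMIN_SECRET ?? '',
    },
    body: JSON.stringify({ type: 'export_metadata', version: 2, args: {} }),
  });
  if (!res.ok) throw new Error(`export failed: ${res.status}`);
  return res.json(); // the complete metadata document
}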

TypeScript Type Definitions

The @hasura/metadata-api package provides TypeScript types for metadata structures:

export interface HasuraMetadataV3 {
  version: 3
  sources: Source[]
  actions?: Action[]
  custom_types?: CustomTypes
  remote_schemas?: RemoteSchema[]
  query_collections?: QueryCollectionEntry[]
  allowlist?: AllowList[]
  cron_triggers?: CronTrigger[]
  api_limits?: APILimits
  rest_endpoints: RestEndpoint[]
  inherited_roles?: InheritedRole[]
}

These types enable type-safe metadata manipulation in CLI tools, console, and client applications.

OpenDD Metadata (V3 Engine)

The V3 engine uses the Open Data Domain Specification (OpenDD) for metadata representation:

pub enum MetadataWithVersion {
  V1(MetadataV1),
  V2(MetadataV2),
  V3(MetadataV3),
}

pub struct MetadataV3 {
  pub subgraphs: Vec<Subgraph>,
  pub flags: OpenDdFlags,
}

OpenDD organizes metadata into subgraphs, each containing typed objects (data connectors, types, relationships, commands). This enables modular, composable metadata definitions.

Metadata Consistency

Metadata can become inconsistent when referenced objects are deleted or configurations conflict. The system tracks inconsistencies and provides APIs to inspect and resolve them. The allow_inconsistent_metadata flag in replace_metadata permits importing partially invalid configurations.

Contributing & Development

Relevant Files
  • CONTRIBUTING.md
  • server/CONTRIBUTING.md
  • server/STYLE.md
  • cli/CONTRIBUTING.md
  • v3/CONTRIBUTING.md

Hasura is a monorepo containing both V2 and V3 engines. Each component has specific setup requirements and contribution workflows. First-time contributors are welcome—reach out on the Discord #contrib channel if you have questions.

Repository Structure

The project consists of three main V2 components and the V3 engine:

  • Server (Haskell) – Core GraphQL engine logic
  • CLI (Go) – Command-line interface for migrations and metadata management
  • Console (JavaScript) – Web-based admin interface
  • V3 Engine (Rust) – Next-generation engine architecture

All contributions require signing a Contributor License Agreement (CLA); this can be done before or after submitting a pull request.

Getting Started

V2 Server (Haskell)

Prerequisites: GHC, Cabal, Docker, PostgreSQL >= 10, Node.js, Python >= 3.9.

# Using Nix (recommended)
nix develop

# Or install manually and set up project
ln -s cabal/dev.project cabal.project.local
cabal new-update
cabal new-build graphql-engine

Run the server with scripts/dev.sh graphql-engine (requires Docker). Tests use scripts/dev.sh test for Python integration tests or make test-unit for Haskell unit tests.

V2 CLI (Go)

Prerequisites: Go >= 1.16, Docker, Node.js.

cd cli
make deps
make build

Run tests with make test-all after setting HASURA_TEST_CLI_HGE_DOCKER_IMAGE.

V3 Engine (Rust)

Prerequisites: Rust compiler, protobuf-compiler.

# Using Docker
docker compose run --build --rm dev_setup bash

# Or locally
cargo build

Code Standards

Haskell (V2 Server):

  • Format with ormolu: ormolu -ei '*.hs'
  • Lint with hlint: hlint --hint=../.hlint.yaml
  • No compiler warnings; use camel case for functions, upper camel case for types
  • Prefer sum types over Bool; use strict data fields by default
  • Target line length: 80 characters (soft limit)

Go (CLI):

  • Follow standard Go conventions
  • Use the spf13/cobra package for CLI commands

Rust (V3):

  • Standard Rust conventions with cargo fmt and cargo clippy

Workflow

  1. Fork the repository and create a feature branch
  2. Make changes and ensure tests pass
  3. Commit with clear messages: add/fix/change (imperative, no period)
  4. Reference issues: fix #123 or close #456
  5. Rebase onto master before submitting a pull request
  6. CI automatically builds and runs tests

Contribution Areas

  • Documentation – Fix errors, add missing content
  • CLI – Issues labeled c/cli and good-first-issue
  • Community content – Boilerplates, sample apps, tools
  • V3 Engine – Core engine improvements in Rust

See good-first-issue for beginner-friendly tasks.