Overview
Relevant Files
- README.md
- V2-README.md
- v3/README.md
Hasura is an open-source GraphQL engine that instantly provides secure, composable GraphQL APIs over existing databases and services. It accelerates API development by as much as 10x, eliminating the need to write backend code for common data access patterns.
What Hasura Does
Hasura connects to your databases and automatically generates a GraphQL API with built-in authorization, real-time subscriptions, and event triggers. You can extend it with custom business logic through remote schemas, actions, and webhooks. It supports multiple databases including PostgreSQL, MySQL, MS SQL Server, BigQuery, MongoDB, and ClickHouse.
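For a concrete sense of what this looks like, the generated API is served over HTTP at /v1/graphql and can be called like any GraphQL endpoint. A minimal TypeScript sketch, assuming a local instance with a tracked users table and an admin secret (all illustrative):

// Minimal sketch: call a Hasura-generated GraphQL API over HTTP.
// Endpoint, table, and secret are illustrative assumptions.
const response = await fetch("http://localhost:8080/v1/graphql", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "x-hasura-admin-secret": "secret_key",
  },
  body: JSON.stringify({
    query: "query { users(limit: 10) { id name } }",
  }),
});
const { data, errors } = await response.json();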
Dual-Version Architecture
The repository contains two major engine versions:
Hasura V2 (Stable) - A production-ready Haskell-based engine with mature features including GraphQL queries, mutations, subscriptions, event triggers, and fine-grained access control. It powers the current stable releases and is widely deployed in production.
Hasura V3 (Next-Generation) - A Rust-based engine built on the Open Data Domain Specification (OpenDD) and Native Data Connector (NDC) specifications. It powers Hasura Data Delivery Network (DDN) and provides improved performance, extensibility, and support for diverse data sources through a connector-based architecture.
Core Components
┌─────────────────────────────────────────────────────┐
│ Hasura GraphQL Engine Repository │
├─────────────────────────────────────────────────────┤
│ server/ → V2 Engine (Haskell) │
│ v3/ → V3 Engine (Rust) │
│ cli/ → CLI Tool (Go) │
│ frontend/ → Admin Console (React/TypeScript)│
│ dc-agents/ → Data Connector SDK │
│ docs/ → Documentation (Docusaurus) │
└─────────────────────────────────────────────────────┘
Server (server/) - The V2 GraphQL engine written in Haskell. Handles query execution, authorization, subscriptions, and event triggers.
V3 Engine (v3/) - The next-generation Rust-based engine with modular crates for metadata resolution, query planning, execution, and GraphQL schema generation.
CLI Tool (cli/) - Go-based command-line tool for project initialization, migrations, metadata management, and deployment.
Frontend Console (frontend/) - React-based admin UI for managing databases, tables, permissions, and GraphQL schema configuration.
Data Connectors (dc-agents/) - SDK and reference implementations for building connectors that enable Hasura to work with any data source.
Key Features
- Instant GraphQL APIs - Auto-generate GraphQL from database schemas
- Real-time Subscriptions - Convert any query to a live subscription
- Event Triggers - Execute webhooks on database changes
- Fine-grained Authorization - Dynamic access control with role-based permissions
- Remote Schemas - Merge custom GraphQL schemas for business logic
- Actions - Extend with custom REST APIs and business logic
- Multi-database Support - PostgreSQL, MySQL, MS SQL Server, BigQuery, MongoDB, ClickHouse
- Admin Console - Web UI for schema management and API configuration
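Of these features, event triggers are the easiest to sketch end-to-end: Hasura POSTs a JSON payload describing each database change to a webhook you provide. A hedged example receiver using Express (route, port, and table are assumptions; the payload fields follow the documented event format):

import express from "express";

// Hypothetical webhook receiver for a Hasura event trigger.
const app = express();
app.use(express.json());

app.post("/user-created", (req, res) => {
  const { event, table } = req.body;
  // event.op is "INSERT" | "UPDATE" | "DELETE" | "MANUAL";
  // event.data.old / event.data.new hold the affected rows.
  console.log(`${event.op} on ${table.schema}.${table.name}`, event.data.new);
  res.status(200).json({ ok: true }); // any 2xx acknowledges delivery
});

app.listen(3000);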
Architecture & Core Concepts
Relevant Files
- server/documentation/overview.md
- server/documentation/deep-dives/schema.md
- server/documentation/deep-dives/role-permissions.md
- architecture/live-queries.md
- architecture/streaming-subscriptions.md
Hasura is a GraphQL engine that automatically generates a composable, secure GraphQL API from your database schema. The architecture centers on three core concepts: the schema cache, schema parsers, and query execution.
High-Level System Architecture
Schema Cache
The schema cache is a live, in-memory copy of your metadata and database introspection. It contains:
- SourceInfo for each connected database (tables, columns, functions, relationships)
- Tracked metadata (permissions, relationships, custom types)
- Cached schema parsers for each role (query, mutation, subscription parsers)
The schema cache is rebuilt whenever metadata changes, ensuring the GraphQL schema stays synchronized with your configuration.
Schema Parsers & GraphQL Schema Generation
Hasura uses a unified approach: the same code that generates the GraphQL schema also generates the parsers that validate and transform incoming queries. This ensures consistency between schema and execution.
Key concepts:
- Parser combinators build both type information and parsing functions simultaneously
- Introspectable parsers transform GraphQL AST into Hasura's Intermediate Representation (IR)
- Memoization handles recursive schema structures (e.g., relationships between tables) using lazy evaluation
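These concepts are specific to the Haskell internals, but the core trick is language-agnostic. A toy TypeScript sketch of a parser combinator that carries both type information and a parsing function (all names here are invented for illustration):

// Toy sketch: one value describes both the advertised GraphQL type and
// how to turn an incoming value into IR, so schema and execution cannot drift.
// All names/types are invented for illustration.
interface Parser<IR> {
  typeInfo: { name: string; kind: "SCALAR" | "LIST" };
  parse(node: unknown): IR;
}

const intParser: Parser<number> = {
  typeInfo: { name: "Int", kind: "SCALAR" },
  parse: (node) => {
    if (typeof node !== "number" || !Number.isInteger(node)) {
      throw new Error("expected Int");
    }
    return node;
  },
};

// A combinator derives a larger parser (and its type info) from a smaller one.
function list<IR>(inner: Parser<IR>): Parser<IR[]> {
  return {
    typeInfo: { name: `[${inner.typeInfo.name}]`, kind: "LIST" },
    parse: (node) => (node as unknown[]).map(inner.parse),
  };
}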
Query Execution Pipeline
When a GraphQL query arrives:
- Transport layer (HTTP/WebSocket) receives the request
- Parser selection retrieves the correct parser based on user role
- Parsing transforms GraphQL AST into IR
- Execution planning generates backend-specific SQL or API calls
- Result assembly joins results from multiple sources (databases, remote schemas)
Roles & Permissions
Hasura implements two types of permissions:
Schema Permissions determine which fields/tables are visible to a role. If a role cannot access any columns in a table, the table is removed from their schema entirely.
Data Permissions filter rows at query time using WHERE clauses. They don't change the schema but restrict what data is returned. These are implemented as boolean predicates embedded in generated SQL.
Declarative Authorization
Authorization rules are declarative and applied at the query layer, not post-fetch. This means:
- Permissions are embedded as predicates in the generated SQL
- Session variables (user ID, roles, etc.) are passed as query parameters
- The database engine handles filtering efficiently
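For example, a select permission can declare a filter that references a session variable. The sketch below uses an assumed articles table and user role, expressed as a TypeScript object mirroring the shape of Hasura's permission metadata:

// Illustrative select permission: `filter` becomes a boolean predicate
// in the generated SQL; X-Hasura-User-Id is bound as a query parameter.
const selectPermission = {
  role: "user",
  permission: {
    columns: ["id", "title", "author_id"],
    filter: { author_id: { _eq: "X-Hasura-User-Id" } },
  },
};
// Conceptually compiled to:
//   SELECT id, title, author_id FROM articles WHERE author_id = $1
// with $1 taken from the session's X-Hasura-User-Id.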
Live Queries & Streaming Subscriptions
Hasura supports two subscription patterns:
Live Queries periodically refetch the entire result set and push new results to clients when the underlying data changes. Multiple clients' queries are multiplexed into single SQL queries for efficiency.
Streaming Subscriptions provide cursor-based pagination over append-only tables, enabling exactly-once delivery semantics and efficient handling of large result sets.
Both use batching and multiplexing to reduce database load: hundreds of GraphQL clients can map to a single database connection.
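From the client side, both patterns are consumed over the same WebSocket protocol. A minimal sketch with the graphql-ws client library (endpoint and fields are assumptions):

import { createClient } from "graphql-ws";

// Minimal sketch: a client subscription. Server-side, Hasura multiplexes
// many such subscriptions into batched SQL queries.
const client = createClient({ url: "ws://localhost:8080/v1/graphql" });

const dispose = client.subscribe(
  { query: 'subscription { orders(where: {status: {_eq: "open"}}) { id total } }' },
  {
    next: (result) => console.log("updated result set:", result.data),
    error: (err) => console.error(err),
    complete: () => console.log("subscription closed"),
  }
);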
Backend Abstraction
Most GraphQL logic is backend-agnostic. Backend-specific code lives in Hasura.Backends.[Name] modules and handles:
- SQL generation and translation
- Type mapping
- Function execution
- Permission predicate compilation
This allows Hasura to support PostgreSQL, MySQL, MSSQL, BigQuery, and other databases with a unified GraphQL API.
V2 Engine (Haskell)
Relevant Files
- server/src-exec/Main.hs - Entry point
- server/src-lib/Hasura/App.hs - Core application initialization
- server/src-lib/Hasura/Server/App.hs - WAI application setup
- server/src-lib/Hasura/GraphQL/ - GraphQL execution pipeline
- server/src-lib/Hasura/RQL/ - Intermediate representation & metadata
- server/graphql-engine.cabal - Build configuration
The V2 Engine is the Haskell-based GraphQL server that powers Hasura. It transforms metadata into a live GraphQL API by combining schema caching, query parsing, and multi-backend execution.
Architecture Overview
Core Components
Schema Cache - A live, in-memory copy of metadata containing source information, tracked tables, functions, and role-based GraphQL parsers. Rebuilt when metadata changes.
GraphQL Parser - Unified schema building combinators in Hasura.GraphQL.Schema that generate both the GraphQL schema and corresponding parsers from metadata simultaneously.
Intermediate Representation (IR) - Backend-agnostic representation in Hasura.RQL.IR that translates incoming GraphQL queries into executable operations.
Query Execution - Hasura.GraphQL.Execute processes queries, mutations, and subscriptions, generating execution plans that run against configured backends.
Transport Layer - HTTP and WebSocket handlers in Hasura.GraphQL.Transport that receive queries, apply role-based parsers, and execute operations.
Startup Flow
- Initialization (Main.hs) - Parses environment variables and command-line arguments
- Connection Setup (Hasura.App) - Establishes metadata database and source connections
- Catalog Migration - Ensures the metadata schema is current
- Schema Cache Build - Loads metadata and builds initial schema cache
- Server Start - Launches WAI application with configured routes and middleware
Key Modules
- Hasura.Server - Network stack, routes, and authentication
- Hasura.Backends - Backend-specific implementations (Postgres, MSSQL, BigQuery, etc.)
- Hasura.Metadata - Metadata representation and validation
- Hasura.Eventing - Event triggers and scheduled triggers
- Hasura.Authentication - Session and role management
V3 Engine (Rust)
Relevant Files
- v3/crates/engine - HTTP server and request routing
- v3/crates/graphql/frontend - GraphQL request orchestration
- v3/crates/graphql/lang-graphql - GraphQL lexer, parser, and validation
- v3/crates/graphql/ir - Intermediate representation generation
- v3/crates/plan - Query planning and execution tree construction
- v3/crates/execute - NDC query execution and result processing
- v3/crates/metadata-resolve - OpenDD metadata validation and resolution
- v3/crates/open-dds - OpenDD metadata structures
The V3 Engine is a Rust-based GraphQL execution engine built on the Open Data Domain (OpenDD) specification and Native Data Connector (NDC) protocol. It powers Hasura's Data Delivery Network (DDN) by translating GraphQL queries into connector-agnostic execution plans.
Architecture Overview
Request Processing Pipeline
1. Parsing & Validation - The lang-graphql crate lexes and parses raw GraphQL documents into an AST, then validates against the schema to produce a normalized AST with type information.
2. Intermediate Representation (IR) - The graphql/ir crate transforms the normalized AST and resolved metadata into an IR that captures the semantic structure of the query (query root fields, selections, filters, etc.).
3. Query Planning - The plan crate converts the IR into an execution tree that specifies which NDC queries to run, in what order, and how to join results. This stage handles relationships, remote joins, and argument presets.
4. Execution - The execute crate runs NDC queries concurrently, processes remote predicates, handles remote joins, and aggregates results into ndc_models::RowSet objects.
5. Response Formatting - Each frontend (GraphQL, JSON:API, SQL) transforms the raw row sets into its output format.
Key Design Principles
Metadata-Driven Schema - The GraphQL schema is entirely determined by OpenDD metadata at startup. NDC availability or schema changes do not affect the server's ability to start or serve requests.
Separation of Concerns - The engine is decoupled from database-specific logic via the NDC abstraction. All data access goes through NDC agents, enabling support for any data source.
Instant Startup - Schema construction is fast and deterministic, eliminating the slow metadata introspection that plagued V2. The server starts reliably regardless of connector availability.
Core Components
Engine State - Holds the resolved metadata, GraphQL schema, authentication config, and HTTP client. Built once at startup and shared across all requests.
GraphQL Frontend - Orchestrates the full request pipeline: parsing, validation, IR generation, planning, and execution. Handles both HTTP and WebSocket connections.
Execution Tree - A recursive data structure representing the query execution plan. Supports nested queries, remote joins, and parallel field execution.
NDC Integration - Sends IR-derived queries to connectors via HTTP, handles response streaming, and processes results according to the execution plan.
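The wire format for those queries is fixed by the NDC specification. As a rough, non-authoritative illustration, a request sent to a connector has a shape along these lines (simplified from ndc_models::QueryRequest):

// Simplified sketch of an NDC query request; the NDC spec is authoritative.
const ndcQueryRequest = {
  collection: "articles",
  query: {
    fields: {
      id: { type: "column", column: "id" },
      title: { type: "column", column: "title" },
    },
    limit: 10,
  },
  arguments: {},
  collection_relationships: {},
};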
CLI Tool
Relevant Files
- cli/cli.go
- cli/commands/root.go
- cli/commands/migrate.go
- cli/commands/metadata.go
- cli/migrate/migrate.go
- cli/pkg/metadata
The Hasura CLI is a command-line tool that provides a text-based interface to the Hasura GraphQL Engine's Metadata API. It enables developers to manage projects programmatically, making it ideal for version control, CI/CD pipelines, and infrastructure-as-code workflows.
Architecture Overview
Core Components
ExecutionContext is the central singleton that holds all contextual information during CLI execution. It manages configuration, logging, HTTP clients, and project state. Every command receives this context, ensuring consistent behavior across the CLI.
Project Structure follows a standard layout created by hasura init:
- config.yaml - Project configuration (endpoint, admin secret, directories)
- migrations/ - Database migration files
- metadata/ - GraphQL schema and configuration (v2+)
- seeds/ - Seed data files
Command Categories
GraphQL Commands manage core project operations:
- init - Initialize a new Hasura project
- migrate - Create, apply, and manage database migrations
- metadata - Export, apply, and manage GraphQL schema configuration
- console - Launch the web-based management console
- actions - Create and manage custom GraphQL actions
- seed - Manage database seed data
- deploy - Apply metadata and migrations in one operation
Utility Commands provide additional functionality:
- plugins - Extend CLI with custom commands
- version - Display CLI and server versions
- scripts - Run configuration upgrade scripts
- completion - Generate shell auto-completion
- update-cli - Update to the latest CLI version
Configuration System
The config.yaml file defines project settings:
version: 3
endpoint: https://my-graphql-engine.com
admin_secret: secret_key
metadata_directory: metadata
migrations_directory: migrations
seeds_directory: seeds
actions:
  kind: synchronous
  handler_webhook_baseurl: http://localhost:3000
Configuration can be overridden via environment variables (e.g., HASURA_GRAPHQL_ADMIN_SECRET) or command-line flags, enabling flexible deployment scenarios.
Migration System
The migration system tracks database schema changes using timestamped SQL files. Migrations are applied sequentially and tracked in the database's hdb_catalog.schema_migrations table. The CLI supports:
- Creating new migrations
- Applying pending migrations
- Rolling back migrations
- Squashing multiple migrations into one
- Checking migration status
Metadata Management
Metadata represents the GraphQL schema configuration and is stored either as a directory of YAML files (directory mode) or a single JSON/YAML file (file mode). The CLI can:
- Export metadata from a running engine
- Apply local metadata changes to the engine
- Detect and resolve inconsistencies
- Diff local and remote metadata
Plugin System
The CLI supports extensibility through plugins. Custom commands can be registered and invoked like built-in commands, allowing teams to create domain-specific tooling that integrates seamlessly with the Hasura workflow.
Console & Frontend
Relevant Files
- frontend/apps/console-ce - Community Edition console app
- frontend/apps/console-ee - Enterprise Edition console app
- frontend/libs/console/legacy-ce - CE shared library with core features
- frontend/libs/console/legacy-ee - EE shared library with enterprise features
- frontend/package.json - Dependencies and build scripts
- frontend/nx.json - Nx monorepo configuration
The Hasura Console is a React-based admin dashboard for managing databases and testing GraphQL APIs. It is organized as an Nx monorepo with separate Community Edition (CE) and Enterprise Edition (EE) variants.
Architecture Overview
Project Structure
Apps (frontend/apps/):
- console-ce and console-ee are thin entry points that render the main console application
- Both use Webpack for bundling and Tailwind CSS for styling
- Entry point: src/main.tsx renders ConsoleCeApp or ConsoleEeApp from the respective libraries
Libraries (frontend/libs/console/):
- legacy-ce contains the core console implementation with 40+ feature modules
- legacy-ee extends CE with enterprise-specific features
- Organized into components/, features/, hooks/, and utils/ directories
Key Features
The console includes comprehensive modules for:
- Data Management: Browse rows, insert, delete, modify tables, manage databases
- GraphQL: API explorer, GraphiQL interface, schema introspection
- Permissions: Row-level security, function permissions, logical model permissions
- Remote Schemas: Connect and manage external GraphQL schemas
- Events: Event triggers, cron triggers, adhoc events
- Actions: Custom business logic integration
- Monitoring: Prometheus metrics, OpenTelemetry, query response caching
- Settings: Database configuration, authentication, API limits
State Management
The console uses a hybrid approach:
- Redux for global application state (metadata, data, actions, events)
- React Query for server state and caching
- Apollo Client for GraphQL operations
- Zustand for lightweight local state
Build & Development
npm install # Install dependencies
nx serve console-ce # Dev server (requires .env)
nx build console-ce # Production build
nx test console-ce # Unit tests via Jest
nx e2e console-ce-e2e --watch # E2E tests via Cypress
The monorepo uses Nx for task orchestration, caching, and dependency management. Both CE and EE variants share the same build pipeline with feature flags controlling enterprise-only functionality.
Technology Stack
- React 17 with TypeScript
- Redux Toolkit for state management
- Tailwind CSS + Ant Design for UI
- Radix UI components for accessible primitives
- Jest & Cypress for testing
- Storybook for component documentation
Data Connectors & NDC
Relevant Files
- dc-agents/README.md
- dc-agents/DOCUMENTATION.md
- dc-agents/sdk/README.md
- dc-agents/reference/
- dc-agents/sqlite/
- dc-agents/dc-api-types/
Data Connectors enable Hasura GraphQL Engine to query external data sources by delegating execution to specialized REST API services called agents. This architecture allows runtime configuration of new databases without modifying the core HGE codebase.
Core Concept
A Data Connector Agent is a stateless HTTP service that abstracts a datasource behind a well-defined wire format. HGE communicates with agents via REST endpoints, sending queries and receiving results. Each agent describes its capabilities (supported operations, scalar types, mutations) and exposes a schema of available tables and functions.
Architecture Overview
Key Agent Endpoints
Agents must implement the following REST endpoints:
- GET /capabilities - Returns agent capabilities (queries, mutations, relationships, scalar types) and configuration schema
- POST /schema - Returns table/function definitions, columns, types, and nullability information
- POST /query - Executes queries with filtering, pagination, ordering, and relationship traversal
- POST /mutation - Performs inserts, updates, and deletes with optional atomicity guarantees
- GET /health - Health check endpoint for agent and data source availability
Configuration & Headers
Agents receive configuration via the X-Hasura-DataConnector-Config header (JSON object validated against the agent's config schema). The X-Hasura-DataConnector-SourceName header identifies which HGE source is making the request, enabling per-source connection pooling and caching.
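In a TypeScript agent this amounts to parsing those headers on each request. A hedged Express-style sketch (the route, port, and config shape are invented for illustration):

import express from "express";

// Hypothetical agent endpoint consuming the per-source headers.
const app = express();
app.use(express.json());

app.post("/query", (req, res) => {
  // Config is JSON, validated against the agent's declared config schema.
  const config = JSON.parse(req.header("X-Hasura-DataConnector-Config") ?? "{}");
  // Source name enables per-source connection pooling and caching.
  const sourceName = req.header("X-Hasura-DataConnector-SourceName");
  console.log(`query from source ${sourceName}`, config);
  res.json({ rows: [] }); // a real agent would execute req.body here
});

app.listen(8100);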
Query Execution Model
Queries are sent as JSON structures containing:
- Target - Table or function to query
- Relationships - Join definitions for traversing related tables
- Query - Fields, filters (where), ordering, pagination (limit/offset), and aggregations
- Filters - Recursive expression trees supporting and, or, not, exists, binary_op, and unary_op
Agents return rows as JSON arrays with nested relationship results for joined data.
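A hedged sketch of such a request body (field names simplified from the dc-api-types OpenAPI schema; illustrative only):

// Simplified sketch of a data connector query request.
const queryRequest = {
  target: { type: "table", name: ["Album"] },
  relationships: [],
  query: {
    fields: {
      id: { type: "column", column: "AlbumId" },
      title: { type: "column", column: "Title" },
    },
    where: {
      type: "binary_op",
      operator: "equal",
      column: { name: "ArtistId" },
      value: { type: "scalar", value: 1 },
    },
    limit: 20,
    offset: 0,
  },
};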
Capabilities Declaration
Agents declare support for:
- Query features - Foreach queries, data redaction
- Data schema - Primary keys, foreign keys, column nullability
- Mutations - Insert/update/delete with atomicity levels (row, single_operation, homogeneous_operations, heterogeneous_operations)
- Scalar types - Custom types with comparison operators, aggregate functions, and update operators
- Relationships - Object and array relationships with column mapping
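An abbreviated /capabilities response might look like the following (a sketch; the dc-api-types schema is authoritative):

// Abbreviated sketch of a capabilities declaration.
const capabilities = {
  queries: { foreach: {} },
  data_schema: {
    supports_primary_keys: true,
    supports_foreign_keys: true,
    column_nullability: "nullable_and_non_nullable",
  },
  mutations: { atomicity_support_level: "heterogeneous_operations" },
  relationships: {},
  scalar_types: {
    DateTime: { comparison_operators: { same_day_as: "DateTime" } },
  },
};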
Reference Implementations
The SDK includes two reference agents:
- Reference Agent (TypeScript) - In-memory Chinook dataset for testing and learning
- SQLite Agent (TypeScript) - Production-ready SQLite connector with mutations, native queries, and metrics
Both agents follow best practices: self-describing capabilities, stateless design, type-safe APIs, and comprehensive test coverage.
SDK Workflow
The Data Connector SDK provides a complete development environment:
- Spin up the stack with docker compose up (HGE, Postgres, agent, tests)
- Modify the reference agent or replace it with a custom implementation
- Run the test suite with docker compose run tests
- Query via HGE at http://localhost:8080
- Explore the OpenAPI schema at http://localhost:8300
Supported Data Sources
The Hasura Hub lists available agents across categories:
- OLTP - PostgreSQL, MySQL, MariaDB, Oracle, SQL Server, SQLite
- OLAP - BigQuery, Snowflake, Athena
- NoSQL - MongoDB
- APIs - GitHub, Prometheus, Salesforce, Zendesk (coming soon)
- Vector - Weaviate (coming soon)
Development Principles
When building agents, follow these guidelines:
- Self-describing - Declare capabilities via the /capabilities endpoint
- Stateless - Each request carries all required information
- Defer logic - Offload processing to the backend when possible
- Type-safe - Expect and return types per OpenAPI schema
- Backwards compatible - Preserve compatibility as agent evolves
- Well-tested - Use provided test suite to validate implementation
Testing & Deployment
Relevant Files
- server/tests-py - Python integration test suite
- server/test-postgres - Haskell-based PostgreSQL tests
- server/test-mssql - SQL Server backend tests
- install-manifests - Deployment configurations
- docker-compose.yaml - Local development environment
- scripts/make/tests.mk - Test execution targets
- .circleci/config.yml - CI/CD pipeline configuration
Test Infrastructure
The repository uses a multi-layered testing approach covering unit tests, integration tests, and backend-specific tests. Tests are organized by language and backend support:
Python Integration Tests (server/tests-py) provide comprehensive coverage of GraphQL queries, mutations, subscriptions, and metadata operations. These tests run against multiple backends (PostgreSQL, SQL Server, BigQuery, Citus, CockroachDB) and support filtering by backend using the --backend flag.
Haskell API Tests (server/lib/api-tests) use the Hspec framework and test the core GraphQL engine functionality. These tests are backend-agnostic and run against all supported data sources simultaneously.
Backend-Specific Tests include dedicated test suites for PostgreSQL (server/test-postgres) and SQL Server (server/test-mssql) with specialized test cases for database-specific features.
Running Tests
Quick Start:
# Python integration tests (PostgreSQL)
./server/tests-py/run.sh
# Filter specific tests
./server/tests-py/run.sh -- test_graphql_queries.py
# Test against different backends
scripts/dev.sh test --integration --backend mssql
Available Backends: postgres (default), mssql, bigquery, citus, cockroach
Haskell API Tests:
make test-postgres
make test-sqlserver
make test-backends # All backends
Test Configuration
Tests use YAML-based configuration files for setup and teardown. The structure follows a pattern where each test class defines:
- setup.yaml - Initial database and metadata state
- teardown.yaml - Cleanup after tests
- Backend-specific variants: setup_mssql.yaml, teardown_postgres.yaml, etc.
Test classes extend base fixtures (DefaultTestSelectQueries, DefaultTestMutations) that handle per-class or per-method setup/teardown automatically.
Deployment
Docker Compose is the primary local deployment method. The root docker-compose.yaml orchestrates multiple database services (PostgreSQL, SQL Server, CockroachDB, Citus) on non-standard ports (65001+) to avoid conflicts.
Production Deployments are configured in install-manifests/:
- Kubernetes - Standard deployment with health checks and resource management
- Docker Compose - Multi-database setups with optional HTTPS (Caddy), PostGIS, pgAdmin
- Cloud Platforms - Azure Container Instances, Google Cloud Run, AWS ECS templates
- Docker Run - Single-container deployment script
Key Environment Variables:
- HASURA_GRAPHQL_DATABASE_URL - Primary database connection
- HASURA_GRAPHQL_ENABLE_CONSOLE - Enable the GraphQL console
- HASURA_GRAPHQL_DEV_MODE - Development mode (disable in production)
CI/CD Pipeline
CircleCI orchestrates the build and test pipeline with parallel job execution:
- Build Stage - Compiles server, CLI, and console
- Test Stage - Runs tests against all backends in parallel
- Artifact Collection - Stores test reports and logs
Tests are skipped automatically if only non-server files changed (via .ciignore). The pipeline waits for database services (PostgreSQL, SQL Server) to be ready before executing tests.
Test Matrix: The test-matrix target generates a feature compatibility matrix across all backends, useful for tracking which features work on which databases.
Metadata & API Types
Relevant Files
- server/src-lib/Hasura/RQL/Types/Metadata.hs
- server/src-lib/Hasura/RQL/DDL/Metadata/Types.hs
- metadata-api-types/typescript
- contrib/metadata-types/src/types
- v3/crates/open-dds/src/lib.rs
Hasura's metadata system is the core configuration layer that defines all GraphQL schema, permissions, relationships, and integrations. The metadata API provides endpoints to export, import, and manage this configuration programmatically.
Core Metadata Structure
The Metadata type in Haskell represents the complete GraphQL Engine configuration:
data Metadata = Metadata
{ _metaSources :: Sources
, _metaRemoteSchemas :: RemoteSchemas
, _metaQueryCollections :: QueryCollections
, _metaAllowlist :: MetadataAllowlist
, _metaCustomTypes :: CustomTypes
, _metaActions :: Actions
, _metaCronTriggers :: CronTriggers
, _metaRestEndpoints :: Endpoints
, _metaApiLimits :: ApiLimit
, _metaMetricsConfig :: MetricsConfig
, _metaInheritedRoles :: InheritedRoles
, _metaNetwork :: Network
, _metaBackendConfigs :: BackendMap BackendConfigWrapper
, _metaOpenTelemetryConfig :: OpenTelemetryConfig
}
Each field represents a distinct configuration domain. Sources define database connections and tracked tables. Remote schemas integrate external GraphQL APIs. Custom types, actions, and REST endpoints extend the GraphQL schema. Permissions and roles control access.
Metadata Versioning
Metadata follows semantic versioning to track backwards-incompatible changes:
- Version 1: Legacy format (deprecated)
- Version 2: Introduced source-based organization
- Version 3: Current version with enhanced structure
The current version is always MVVersion3. When exporting metadata via the API, the latest version is always used.
Metadata API Operations
All metadata operations use the /v1/metadata endpoint with POST requests:
{
"type": "<operation-type>",
"version": 1 | 2,
"args": {}
}
Core Operations:
- export_metadata – Export current metadata as JSON
- replace_metadata – Import/replace entire metadata
- reload_metadata – Refresh metadata from database changes
- clear_metadata – Reset all configuration
- get_inconsistent_metadata – List validation errors
- drop_inconsistent_metadata – Remove invalid objects
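For example, exporting the current metadata is a single authenticated POST (a minimal sketch; the endpoint URL and admin secret are assumptions):

// Minimal sketch: export metadata via /v1/metadata.
const res = await fetch("http://localhost:8080/v1/metadata", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "x-hasura-admin-secret": "secret_key",
  },
  body: JSON.stringify({ type: "export_metadata", version: 2, args: {} }),
});
const exported = await res.json();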
TypeScript Type Definitions
The @hasura/metadata-api package provides TypeScript types for metadata structures:
export interface HasuraMetadataV3 {
version: 3
sources: Source[]
actions?: Action[]
custom_types?: CustomTypes
remote_schemas?: RemoteSchema[]
query_collections?: QueryCollectionEntry[]
allowlist?: AllowList[]
cron_triggers?: CronTrigger[]
api_limits?: APILimits
rest_endpoints: RestEndpoint[]
inherited_roles?: InheritedRole[]
}
These types enable type-safe metadata manipulation in CLI tools, console, and client applications.
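For instance, a minimal metadata document can be written as a value of that interface and type-checked at build time (a sketch; the source and table names are assumptions, and the full Source shape is elided):

// Sketch: a minimal type-checked metadata document.
const metadata: HasuraMetadataV3 = {
  version: 3,
  sources: [
    {
      name: "default",
      kind: "postgres",
      tables: [{ table: { schema: "public", name: "users" } }],
    } as Source, // cast because the full Source shape is elided here
  ],
  rest_endpoints: [],
};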
OpenDD Metadata (V3 Engine)
The V3 engine uses OpenDD (Open Data Domain) for metadata representation:
pub enum MetadataWithVersion {
V1(MetadataV1),
V2(MetadataV2),
V3(MetadataV3),
}
pub struct MetadataV3 {
pub subgraphs: Vec<Subgraph>,
pub flags: OpenDdFlags,
}
OpenDD organizes metadata into subgraphs, each containing typed objects (data connectors, types, relationships, commands). This enables modular, composable metadata definitions.
Metadata Consistency
Metadata can become inconsistent when referenced objects are deleted or configurations conflict. The system tracks inconsistencies and provides APIs to inspect and resolve them. The allow_inconsistent_metadata flag in replace_metadata permits importing partially invalid configurations.
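A hedged sketch of using that flag (endpoint and secret assumed as before):

// Sketch: import metadata while tolerating known inconsistencies.
await fetch("http://localhost:8080/v1/metadata", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "x-hasura-admin-secret": "secret_key",
  },
  body: JSON.stringify({
    type: "replace_metadata",
    version: 2,
    args: {
      allow_inconsistent_metadata: true,
      metadata: { version: 3, sources: [] }, // placeholder document
    },
  }),
});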
Contributing & Development
Relevant Files
- CONTRIBUTING.md
- server/CONTRIBUTING.md
- server/STYLE.md
- cli/CONTRIBUTING.md
- v3/CONTRIBUTING.md
Hasura is a monorepo containing both V2 and V3 engines. Each component has specific setup requirements and contribution workflows. First-time contributors are welcome—reach out on the Discord #contrib channel if you have questions.
Repository Structure
The project consists of three main V2 components and the V3 engine:
- Server (Haskell) – Core GraphQL engine logic
- CLI (Go) – Command-line interface for migrations and metadata management
- Console (JavaScript) – Web-based admin interface
- V3 Engine (Rust) – Next-generation engine architecture
All contributions require signing a CLA before or after submitting a pull request.
Getting Started
V2 Server (Haskell)
Prerequisites: GHC, Cabal, Docker, PostgreSQL >= 10, Node.js, Python >= 3.9.
# Using Nix (recommended)
nix develop
# Or install manually and set up project
ln -s cabal/dev.project cabal.project.local
cabal new-update
cabal new-build graphql-engine
Run the server with scripts/dev.sh graphql-engine (requires Docker). Tests use scripts/dev.sh test for Python integration tests or make test-unit for Haskell unit tests.
V2 CLI (Go)
Prerequisites: Go >= 1.16, Docker, Node.js.
cd cli
make deps
make build
Run tests with make test-all after setting HASURA_TEST_CLI_HGE_DOCKER_IMAGE.
V3 Engine (Rust)
Prerequisites: Rust compiler, protobuf-compiler.
# Using Docker
docker compose run --build --rm dev_setup bash
# Or locally
cargo build
Code Standards
Haskell (V2 Server):
- Format with ormolu: ormolu -ei '*.hs'
- Lint with hlint: hlint --hint=../.hlint.yaml
- No compiler warnings; use camel case for functions, upper camel case for types
- Prefer sum types over Bool; use strict data fields by default
- Target line length: 80 characters (soft limit)
Go (CLI):
- Follow standard Go conventions
- Use the spf13/cobra package for CLI commands
Rust (V3):
- Standard Rust conventions with cargo fmt and cargo clippy
Workflow
- Fork the repository and create a feature branch
- Make changes and ensure tests pass
- Commit with clear messages: add/fix/change (imperative, no period)
- Reference issues: fix #123 or close #456
- Rebase with master before submitting a pull request
- CI automatically builds and runs tests
Contribution Areas
- Documentation – Fix errors, add missing content
- CLI – Issues labeled c/cli and good-first-issue
- Community content – Boilerplates, sample apps, tools
- V3 Engine – Core engine improvements in Rust
See good-first-issue for beginner-friendly tasks.