BigQuery Connector
Connect to a BigQuery database and expose it to a Hasura v3 project
About

- Released: February 21, 2025
- Last Updated: February 26, 2025

With this connector, Hasura allows you to instantly create a real-time GraphQL API on top of your data models in BigQuery. This connector supports BigQuery's functionalities listed in the table below, allowing for efficient and scalable data operations. Additionally, users benefit from all the powerful features of Hasura’s Data Delivery Network (DDN) platform, including query pushdown capabilities that delegate query operations to the database, thereby enhancing query optimization and performance.

This connector implements the Data Connector Spec.

Features

Below, you'll find a matrix of all supported features for the BigQuery connector:

| Feature | Supported | Notes |
| --- | --- | --- |
| Native Queries + Logical Models | | |
| Native Mutations | | |
| Simple Object Query | | |
| Filter / Search | | |
| Simple Aggregation | | |
| Sort | | |
| Paginate | | |
| Table Relationships | | |
| Views | | |
| Remote Relationships | | |
| Custom Fields | | |
| Mutations | | |
| Distinct | | |
| Enums | | |
| Naming Conventions | | |
| Default Values | | |
| User-defined Functions | | |

Prerequisites

  1. Create a Hasura Cloud account
  2. Please ensure you have the DDN CLI and Docker installed
  3. Create a supergraph
  4. Create a subgraph

The steps below explain how to initialize and configure a connector on your local machine (typically for development purposes). Once the connector has been configured, you can learn how to deploy it to Hasura DDN here.

Using the BigQuery connector

With the context set for an existing subgraph, initialize the connector:

ddn connector init -i

When the wizard runs, you'll be prompted to enter the following env vars necessary for your connector to function:

| Name | Description | Required |
| --- | --- | --- |
| JDBC_URL | The JDBC URL to connect to the database | Yes |

After the CLI initializes the connector, you'll need to configure your JDBC connection string.

Configuring your JDBC connection string

The connector uses the official BigQuery JDBC driver. You can find documentation on configuring the JDBC connection string here. As an example, using a service account with a full key file downloaded from Google:

APP_FOO_JDBC_URL=jdbc:bigquery://https://www.googleapis.com/bigquery/v2:443;Project=project-id;OAuthType=0;OAuthServiceAcctEmail=service-account-email;OAuthPvtKey=/etc/connector/key.json;
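Because the connection string is long and semicolon-delimited, it can help to assemble it from its parts before pasting it into the env var. A minimal sketch in shell, using the same placeholder project ID, service-account email, and mounted key path as the example above:

```shell
# Placeholder values -- replace with your own project details.
PROJECT_ID="project-id"
SA_EMAIL="service-account-email"
KEY_PATH="/etc/connector/key.json"   # path as mounted inside the container

# OAuthType=0 selects service-account authentication in the BigQuery JDBC driver.
JDBC_URL="jdbc:bigquery://https://www.googleapis.com/bigquery/v2:443;Project=${PROJECT_ID};OAuthType=0;OAuthServiceAcctEmail=${SA_EMAIL};OAuthPvtKey=${KEY_PATH};"

echo "$JDBC_URL"
```

The assembled value is what goes on the right-hand side of the `APP_FOO_JDBC_URL=` assignment shown above.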

Note: since the files get mounted into the Docker container, it is important that the file path is /etc/connector/<your-key-file>.json.

Make sure you place your key.json in the connector folder, i.e. /<subgraph>/connector/<connectorname>/key.json. The key should be the full key file downloaded from the Google Cloud console, which looks like:

{
  "type": "service_account",
  "project_id": "project-id",
  "private_key_id": "private-key-id",
  "private_key": "-----BEGIN PRIVATE KEY-----\nprivate-key\n-----END PRIVATE KEY-----\n",
  "client_email": "service-account-email",
  "client_id": "client-id",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/service-account-email"
}
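Before wiring the key file into the connection string, a quick check that it contains the fields service-account authentication relies on can save a debugging round-trip. A sketch in shell; the sample file and the field list are illustrative, not exhaustive:

```shell
# Write a minimal sample key file (placeholder values, not real credentials).
cat > key.json <<'EOF'
{
  "type": "service_account",
  "project_id": "project-id",
  "private_key": "-----BEGIN PRIVATE KEY-----\nplaceholder\n-----END PRIVATE KEY-----\n",
  "client_email": "service-account-email"
}
EOF

# Service-account authentication needs at least these fields present.
for field in type project_id private_key client_email; do
  if grep -q "\"$field\"" key.json; then
    echo "ok: $field"
  else
    echo "missing: $field"
  fi
done
```

Run the same loop against your real downloaded key file (point it at the file's actual path) to confirm nothing was truncated in transit.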

Once that is done, you can continue with introspecting the connector and building your supergraph.

License

The Hasura BigQuery connector is available under the Apache License 2.0.
