
Get Started with Hasura DDN and BigQuery

Overview

This tutorial will guide you through setting up a Hasura DDN project with Google BigQuery. You'll learn how to:

  • Set up a new Hasura DDN project
  • Connect it to a BigQuery database
  • Generate Hasura metadata
  • Create a build
  • Run your first query
  • Create relationships

Additionally, we'll familiarize you with the steps and workflows necessary to iterate on your API.

You'll also need:

  • A Google Cloud account and project
  • A Google Cloud service account with appropriate permissions for the project and BigQuery dataset you'll use
  • The service account JSON key file downloaded from Google Cloud Console

Prerequisites

Install the DDN CLI

Minimum version requirements

To use this guide, ensure you've installed or updated your CLI to at least v2.28.0.
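If you already have the CLI installed, you can check which version you're on by running:

ddn version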

Simply run the installer script in your terminal:

curl -L https://graphql-engine-cdn.hasura.io/ddn/cli/v4/get.sh | bash
ARM-based Linux Machines

Currently, the CLI does not support installation on ARM-based Linux systems.

Install Docker

The Docker-based workflow helps you iterate and develop locally without deploying any changes to Hasura DDN, making the development experience faster and your feedback loops shorter. You'll need Docker Compose v2.20 or later.
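You can verify your installed Compose version with:

docker compose version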

Validate the installation

You can verify that the DDN CLI is installed correctly by running:

ddn doctor

Tutorial

Step 1. Authenticate your CLI

Before you can create a new Hasura DDN project, you need to authenticate your CLI:
ddn auth login

This will launch a browser window prompting you to log in or sign up for Hasura DDN. After you log in, the CLI will acknowledge your login, giving you access to Hasura Cloud resources.

Step 2. Scaffold out a new local project

Next, create a new local project:
ddn supergraph init my-project && cd my-project

Once you move into this directory, you'll see your project scaffolded out for you. You can view the structure by either running ls in your terminal, or by opening the directory in your preferred editor.

Step 3. Create a new BigQuery dataset

In the Google Cloud console, navigate to the BigQuery page and create a new dataset called hasura_demo. To do so, click "Studio" in the left sidebar, then click the three-dot menu next to your project's name in the "Explorer" pane and select "Create dataset". Give it an ID of hasura_demo and click "Create dataset".
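Alternatively, if you have the bq command-line tool installed, you can create the dataset from your terminal; replace your-project-id with your Google Cloud project ID:

bq mk --dataset your-project-id:hasura_demo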

In "IAM & Admin", under "Service accounts", make sure you have a service account with at least the "BigQuery User" role. The broader "Owner" or "BigQuery Admin" roles also include these permissions.

You should also create and download a JSON key file for the service account by clicking on its name and selecting "Keys" in the navigation menu.
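If you prefer the command line, you can also create and download a key with the gcloud CLI; replace your-service-account-email with your service account's email:

gcloud iam service-accounts keys create key.json --iam-account=your-service-account-email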

The JSON key file should look like this example:

{
  "type": "service_account",
  "project_id": "project-id",
  "private_key_id": "private-key-id",
  "private_key": "-----BEGIN PRIVATE KEY-----\nprivate-key\n-----END PRIVATE KEY-----\n",
  "client_email": "service-account-email",
  "client_id": "client-id",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/service-account-email"
}

Step 4. Seed your BigQuery database

Create a table named users in your dataset by clicking on the hasura_demo dataset in the BigQuery console and then clicking the + Create table button in the top right. Select Empty table and name it users.

You can use the following schema text definition to create the table in the BigQuery console:

[
  {
    "name": "user_id",
    "type": "INTEGER",
    "mode": "REQUIRED"
  },
  {
    "name": "name",
    "type": "STRING",
    "mode": "NULLABLE"
  },
  {
    "name": "age",
    "type": "INTEGER",
    "mode": "NULLABLE"
  }
]

Once created, click on the table and then the "Query" button to seed it with some data:
INSERT INTO hasura_demo.users (user_id, name, age) VALUES (1, 'Alice', 25), (2, 'Bob', 30), (3, 'Charlie', 35);
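To confirm the seed worked, you can run a quick check in the same query editor:

SELECT * FROM hasura_demo.users ORDER BY user_id;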

Step 5. Configure a service account key file

Step 5.1. Move your downloaded service account key file to your connector folder

Move your service account key file to the connector directory:
mv /path/to/your/key.json app/connector/my_bigquery/key.json

Step 5.2. Configure your JDBC connection string

In the connector's .env file, add a connection string following this format:

APP_MY_BIGQUERY_JDBC_URL=jdbc:bigquery://https://www.googleapis.com/bigquery/v2:443;ProjectId=your-project-id;DefaultDataset=your-dataset;OAuthType=0;OAuthServiceAcctEmail=your-service-account-email;OAuthPvtKey=/etc/connector/your-key.json

In the connection string, make sure to replace the following placeholders; a filled-in example follows the list:

  • your-project-id with your Google Cloud project ID
  • your-dataset with your default BigQuery dataset
  • your-service-account-email with your service account's email
  • your-key.json with the name of the key file you placed in the connector folder
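For illustration, with a hypothetical project ID of my-gcp-project, the hasura_demo dataset, and the key.json file from Step 5.1, the finished entry would look something like this:

APP_MY_BIGQUERY_JDBC_URL=jdbc:bigquery://https://www.googleapis.com/bigquery/v2:443;ProjectId=my-gcp-project;DefaultDataset=hasura_demo;OAuthType=0;OAuthServiceAcctEmail=my-service-account@my-gcp-project.iam.gserviceaccount.com;OAuthPvtKey=/etc/connector/key.json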

Step 6. Initialize your BigQuery connector

In your project directory, run:
ddn connector init my_bigquery -i

From the dropdown, select hasura/bigquery-jdbc (you can type to filter the list), and enter the value of the connection string you just created for the JDBC_URL environment variable.

Step 7. Introspect your BigQuery database

Use the CLI to introspect your BigQuery database:
ddn connector introspect my_bigquery

After running this, you should see a representation of your database's schema in the app/connector/my_bigquery/configuration.json file.

You can check which resources are available — and their status — at any point using the CLI:
ddn connector show-resources my_bigquery

Step 8. Add your model

Track your BigQuery tables as models in your DDN metadata:
ddn model add my_bigquery users
Add all resources at once

You can add all of your models at once by running:

ddn model add my_bigquery "*"

You can also do the same for commands and relationships.
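The corresponding wildcard commands are:

ddn command add my_bigquery "*"
ddn relationship add my_bigquery "*"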

Open the app/metadata directory and you'll find newly generated files for each table you track. The DDN CLI uses these Hasura Metadata Language files to represent your BigQuery tables in your API as models.

Step 9. Create a new build

To create a local build, run:
ddn supergraph build local

The build is stored as a set of JSON files in engine/build.
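If you're curious, you can list the generated files:

ls engine/build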

Step 10. Start your local services

Start your local Hasura DDN Engine and BigQuery connector:
ddn run docker-start

Your terminal will be taken over by logs for the different services.

Step 11. Run your first query

In a new terminal tab, open your local console:
ddn console --local
In the GraphiQL explorer of the console, write a query for your table:
query GetUsers {
  users {
    userId
    name
    age
  }
}
You'll get a response like this:
{
  "data": {
    "users": [
      {
        "userId": "1",
        "name": "Alice",
        "age": "25"
      },
      {
        "userId": "3",
        "name": "Charlie",
        "age": "35"
      },
      {
        "userId": "2",
        "name": "Bob",
        "age": "30"
      }
    ]
  }
}

Step 12. Iterate on your BigQuery schema

Add a new table named posts to your BigQuery dataset. You can use the following schema text definition to create the table in the console:
[
  {
    "name": "user_id",
    "type": "INTEGER",
    "mode": "REQUIRED"
  },
  {
    "name": "post_id",
    "type": "INTEGER",
    "mode": "REQUIRED"
  },
  {
    "name": "title",
    "type": "STRING",
    "mode": "NULLABLE",
    "maxLength": "255"
  },
  {
    "name": "content",
    "type": "STRING",
    "mode": "NULLABLE"
  }
]

Once created, click on the table and then the "Query" button to seed it with some data:
INSERT INTO hasura_demo.posts (user_id, post_id, title, content) VALUES
(1, 1, 'My First Post', 'This is the first post for Alice.'),
(1, 2, 'Another Post', 'Alice writes again!'),
(2, 3, 'Bobs Post', 'Bob shares his thoughts.'),
(3, 4, 'Hello World', 'Charlie joins the conversation.');
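Optionally, you can verify the inserts and preview the user_id link between the two tables (the same link you'll model as a relationship in Step 14) with a join:

SELECT p.post_id, p.title, u.name AS author
FROM hasura_demo.posts AS p
JOIN hasura_demo.users AS u ON p.user_id = u.user_id
ORDER BY p.post_id;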

Step 13. Refresh your metadata and rebuild your project

tip

The following steps are necessary each time you make changes to your source schema. This includes adding, modifying, or dropping tables.

Step 13.1. Re-introspect your data source

Run the introspection command again:
ddn connector introspect my_bigquery

In app/connector/my_bigquery/configuration.json, you'll see the schema updated to include operations for the posts table.

Step 13.2. Update your metadata

Add the posts model:
ddn model add my_bigquery posts

In app/metadata/my_bigquery.hml, you'll see posts present in the metadata.

Step 14. Create a Relationship in your Hasura metadata

Let's now create a Relationship in our Hasura metadata between the users and posts tables.

The Hasura VS Code extension can help you author this.

Add a new relationship to your Hasura metadata:
---
kind: Relationship
version: v1
definition:
  name: user
  sourceType: Posts
  target:
    model:
      name: Users
      relationshipType: Object
  mapping:
    - source:
        fieldPath:
          - fieldName: userId
      target:
        modelField:
          - fieldName: userId

Since this is authored directly in the metadata, you don't need to run any CLI commands to generate it from the connector's introspection. Building your supergraph will make it available in your API.

Step 15. Rebuild your project and restart your services

Bring down the services by pressing CTRL+C in the terminal tab logging their activity.

As your metadata has changed, create a new build:
ddn supergraph build local
Bring everything back up:
ddn run docker-start
Run the console again:
ddn console --local

Step 16. Run your first query utilizing the relationship

Run your first query with a relationship in the GraphiQL console:
query GetPostsWithAuthors {
  posts {
    postId
    title
    content
    user {
      age
      name
      userId
    }
  }
}
You'll get a response like this:
{
  "data": {
    "posts": [
      {
        "postId": "1",
        "title": "My First Post",
        "content": "This is the first post for Alice.",
        "user": {
          "age": "25",
          "name": "Alice",
          "userId": "1"
        }
      },
      {
        "postId": "2",
        "title": "Another Post",
        "content": "Alice writes again!",
        "user": {
          "age": "25",
          "name": "Alice",
          "userId": "1"
        }
      },
      {
        "postId": "3",
        "title": "Bobs Post",
        "content": "Bob shares his thoughts.",
        "user": {
          "age": "30",
          "name": "Bob",
          "userId": "2"
        }
      },
      {
        "postId": "4",
        "title": "Hello World",
        "content": "Charlie joins the conversation.",
        "user": {
          "age": "35",
          "name": "Charlie",
          "userId": "3"
        }
      }
    ]
  }
}

Next steps

Congratulations on completing your first Hasura DDN project with BigQuery! 🎉

Here's what you just accomplished:

  • You started with a fresh project and connected it to a BigQuery dataset in your Google Cloud project.
  • You set up metadata to represent your tables and relationships, which acts as the blueprint for your API.
  • Then, you created a build — essentially compiling everything into a ready-to-use API — and successfully ran your first GraphQL queries to fetch data.
  • You learned how to iterate on your schema and refresh your metadata to reflect changes.
  • You added a relationship between your users and posts tables.

Now, you're equipped to connect and expose your data, empowering you to iterate and scale with confidence. Great work!

Take a look at our BigQuery docs to learn more about how to use Hasura DDN with BigQuery.