Get Started with Hasura DDN and Databricks
Overview
This tutorial takes about twenty minutes to complete. You'll learn how to:
- Set up a new Hasura DDN project
- Connect it to a hosted Databricks instance
- Generate Hasura metadata
- Create a build
- Run your first query
- Create relationships
Additionally, we'll familiarize you with the steps and workflows necessary to iterate on your API.
This tutorial assumes you're starting from scratch; you'll connect a hosted Databricks instance to Hasura. If you already have data seeded, you can easily follow along with the same steps. Hasura will never modify your source schema.
Prerequisites
Install the DDN CLI
macOS and Linux
Simply run the installer script in your terminal:
curl -L https://graphql-engine-cdn.hasura.io/ddn/cli/v4/get.sh | bash
Currently, the CLI does not support installation on ARM-based Linux systems.
Windows
- Download the latest DDN CLI installer for Windows.
- Run the DDN_CLI_Setup.exe installer file and follow the instructions. This will only take a minute.
- By default, the DDN CLI is installed under C:\Users\{Username}\AppData\Local\Programs\DDN_CLI.
- The DDN CLI is added to your %PATH% environment variable so that you can use the ddn command from your terminal.
Install Docker
The Docker-based workflow helps you iterate and develop locally without deploying any changes to Hasura DDN, making the development experience faster and your feedback loops shorter. You'll need Docker Compose v2.20 or later.
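You can verify your Compose version from your terminal:
docker compose version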
Validate the installation
You can verify that the DDN CLI is installed correctly by running:
ddn doctor
Tutorial
Step 1. Authenticate your CLI
ddn auth login
This will launch a browser window prompting you to log in or sign up for Hasura DDN. After you log in, the CLI will acknowledge your login, giving you access to Hasura Cloud resources.
Step 2. Scaffold out a new local project
ddn supergraph init my-project && cd my-project
Once you move into this directory, you'll see your project scaffolded out for you. You can view the structure by running ls in your terminal or by opening the directory in your preferred editor.
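The exact contents vary by CLI version, but an ls from the project root should show something like this (the listing below is illustrative, not exhaustive):
ls
# app/  engine/  globals/  compose.yaml  hasura.yaml  supergraph.yaml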
Step 3. Create and seed a new Databricks database
Head to Databricks and create an account if you don't already have one. Then, create a new instance.
From your instance's dashboard, choose SQL Editor and create a new query. At the top of the query editor, there's a breadcrumb showing which catalog and schema you're currently using. Before proceeding, ensure you've selected main and default.
CREATE TABLE default.users (
  id BIGINT GENERATED ALWAYS AS IDENTITY,
  name STRING NOT NULL,
  age INT NOT NULL
);
COMMENT ON TABLE default.users IS 'The users table contains information about application users';
INSERT INTO default.users (name, age)
VALUES
('Alice', 25),
('Bob', 30),
('Charlie', 35);
Choose Run all statements to create the table, add the comment, and insert the users.
You can verify the data was inserted by running:
SELECT * FROM users;
Step 4. Initialize your Databricks connector
ddn connector init my_databricks -i
From the dropdown, select hasura/databricks-jdbc (you can type to filter the list).
JDBC_URL
You'll be prompted for your JDBC URL; you can construct the base of this using your Databricks UI under SQL Warehouses » <name-of-warehouse> » Connection details.
jdbc:databricks://<host>:<port>/default;transportMode=http;ssl=1;AuthMech=3;httpPath=/sql/1.0/warehouses/<warehouse-id>;UID=token;PWD=<access-token>;ConnCatalog=main;
The Databricks connector uses a JDBC URL that includes the following (an illustrative example appears after this list):
- An access token, which you can generate with the Create a personal access token button at the top right of the same UI where you found the base for your connection string.
- A ConnCatalog parameter, which references the catalog to connect to and use for SQL queries during introspection.
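For illustration only, a fully-assembled URL might look like the following; every value here is a placeholder, so substitute your own host, warehouse ID, and access token:
# example only: all values below are placeholders
JDBC_URL='jdbc:databricks://dbc-a1b2c3d4-e5f6.cloud.databricks.com:443/default;transportMode=http;ssl=1;AuthMech=3;httpPath=/sql/1.0/warehouses/abcdef1234567890;UID=token;PWD=dapi0123456789abcdef;ConnCatalog=main;'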
JDBC_SCHEMAS
This comma-separated list of schemas is case-sensitive and should not include any spaces. For our tutorial, we'll simply enter default for this value.
Step 5. Introspect your Databricks instance
ddn connector introspect my_databricks
After running this, you should see a representation of your database's schema in the app/connector/my_databricks/configuration.json file; you can view this using cat or open the file in your editor.
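If you have jq installed, you can take a quick look at the file's top-level structure without opening it:
jq 'keys' app/connector/my_databricks/configuration.json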
To list the resources discovered during introspection, run:
ddn connector show-resources my_databricks
Step 6. Add your model
ddn model add my_databricks main.default.users
Open the app/metadata directory and you'll find a newly-generated file: MainDefaultUsers.hml. The DDN CLI will use this Hasura Metadata Language file to represent the users table from Databricks in your API as a model.
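If you're curious which metadata objects were generated, you can grep the file for their kinds; the exact set may vary by CLI version:
grep 'kind:' app/metadata/MainDefaultUsers.hml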
Step 7. Create a new build
ddn supergraph build local
The build is stored as a set of JSON files in engine/build.
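You can confirm this by listing the directory:
ls engine/build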
Step 8. Start your local services
ddn run docker-start
Your terminal will be taken over by logs for the different services.
Step 9. Run your first query
ddn console --local
In the console, run the following query:
query GetUsers {
mainDefaultUsers {
id
name
age
}
}
You should see a response like this:
{
"data": {
"mainDefaultUsers": [
{
"id": 1,
"name": "Alice",
"age": 25
},
{
"id": 2,
"name": "Bob",
"age": 30
},
{
"id": 3,
"name": "Charlie",
"age": 35
}
]
}
}
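You can also send the same query from your terminal with curl. This assumes the local engine is listening on its default port of 3280; adjust the URL (and add any headers your auth configuration requires) if your setup differs:
curl -s http://localhost:3280/graphql \
  -H 'Content-Type: application/json' \
  -d '{"query": "query GetUsers { mainDefaultUsers { id name age } }"}'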
Step 10. Iterate on your Databricks schema
Back in your Databricks SQL editor, create a posts table:
CREATE TABLE posts (
id BIGINT GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1),
user_id INT NOT NULL,
title STRING NOT NULL,
content STRING NOT NULL,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
)
USING DELTA
TBLPROPERTIES (
'delta.feature.allowColumnDefaults' = 'supported'
);
COMMENT ON TABLE default.posts IS 'Posts are written by users and mapped to them using their id column';
INSERT INTO posts (user_id, title, content) VALUES
(1, 'My First Post', 'This is Alice''s first post.'),
(1, 'Another Post', 'Alice writes again!'),
(2, 'Bob''s Post', 'Bob shares his thoughts.'),
(3, 'Hello World', 'Charlie joins the conversation.');
Choose Run all statements to create the table, add the comment, and insert the posts.
-- Fetch all posts with user information
SELECT
posts.id AS post_id,
posts.title,
posts.content,
posts.created_at,
users.name AS author
FROM
posts
JOIN
users ON posts.user_id = users.id;
You should see a list of posts returned with the author's information joined from the users table.
Step 11. Refresh your metadata and rebuild your project
The following steps are necessary each time you make changes to your source schema. This includes adding, modifying, or dropping tables.
Step 11.1. Re-introspect your data source
ddn connector introspect my_databricks
In app/connector/my_databricks/configuration.json, you'll see the schema updated to include operations for the posts table. In app/metadata/my_databricks.hml, you'll see posts present in the metadata as well.
Step 11.2. Update your metadata
ddn model add my_databricks main.default.posts
Step 11.3. Create a new build
ddn supergraph build local
Step 11.4. Restart your services
ddn run docker-start
Step 12. Query your new build
Head back to your console and query the new posts model:
query GetPosts {
mainDefaultPosts {
id
title
content
}
}
You should see a response like this:
{
"data": {
"mainDefaultPosts": [
{
"id": "1",
"title": "My First Post",
"content": "This is Alices first post."
},
{
"id": "2",
"title": "Another Post",
"content": "Alice writes again!"
},
{
"id": "3",
"title": "Bobs Post",
"content": "Bob shares his thoughts."
},
{
"id": "4",
"title": "Hello World",
"content": "Charlie joins the conversation."
}
]
}
}
Step 13. Create a relationship
Open app/metadata/MainDefaultPosts.hml and add the following to the end of the file:
---
kind: Relationship
version: v1
definition:
name: user
sourceType: MainDefaultPosts
target:
model:
name: MainDefaultUsers
relationshipType: Object
mapping:
- source:
fieldPath:
- fieldName: userId
target:
modelField:
- fieldName: id
This will create a relationship that maps the userId of any post to the id of a user, allowing for nested queries.
Step 14. Rebuild your project
ddn supergraph build local
ddn run docker-start
Step 15. Query using your relationship
In your console, run the following query to fetch each post along with its author:
query GetPosts {
mainDefaultPosts {
id
title
content
user {
id
name
age
}
}
}
You should see each post returned with its related user:
{
"data": {
"mainDefaultPosts": [
{
"id": "1",
"title": "My First Post",
"content": "This is Alices first post.",
"user": {
"id": "1",
"name": "Alice",
"age": 25
}
},
{
"id": "2",
"title": "Another Post",
"content": "Alice writes again!",
"user": {
"id": "1",
"name": "Alice",
"age": 25
}
},
{
"id": "3",
"title": "Bobs Post",
"content": "Bob shares his thoughts.",
"user": {
"id": "2",
"name": "Bob",
"age": 30
}
},
{
"id": "4",
"title": "Hello World",
"content": "Charlie joins the conversation.",
"user": {
"id": "3",
"name": "Charlie",
"age": 35
}
}
]
}
}
Next steps
Congratulations on completing your first Hasura DDN project with Databricks! 🎉
Here's what you just accomplished:
- You started with a fresh project and connected it to a hosted Databricks instance.
- You set up metadata to represent your tables and relationships, which acts as the blueprint for your API.
- Then, you created a build — essentially compiling everything into a ready-to-use API — and successfully ran your first GraphQL queries to fetch data.
- Along the way, you learned how to iterate on your schema and refresh your metadata to reflect changes.
Now, you're equipped to connect and expose your data, empowering you to iterate and scale with confidence. Great work!
Take a look at our Databricks docs to learn more about how to use Hasura DDN with Databricks. Or, if you're ready, get started with adding permissions to control access to your API.