
Get Started with Hasura DDN and Cloud Storage

Overview

This tutorial takes about twenty minutes to complete. You'll learn how to:

  • Set up a new Hasura DDN project
  • Connect it to a cloud storage instance
  • Generate Hasura metadata
  • Create a build
  • Run your first query
  • Mutate data

Additionally, we'll familiarize you with the steps and workflows necessary to iterate on your API.

Available Cloud Storage

In this tutorial, we'll use the Storage connector's local file system, but you can just as easily configure the connector to work with hosted providers such as Amazon S3, Google Cloud Storage, and Azure Blob Storage.

Hasura will never modify your source schema. Learn more about these sources in the connector's configuration section.

Prerequisites

Install the DDN CLI

Simply run the installer script in your terminal:

curl -L https://graphql-engine-cdn.hasura.io/ddn/cli/v4/get.sh | bash
ARM-based Linux Machines

Currently, the CLI does not support installation on ARM-based Linux systems.

Install Docker

The Docker-based workflow helps you iterate and develop locally without deploying any changes to Hasura DDN, making the development experience faster and your feedback loops shorter. You'll need Docker Compose v2.20 or later.
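You can confirm that your installed Compose version meets this requirement by running:

docker compose version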

Validate the installation

You can verify that the DDN CLI is installed correctly by running:

ddn doctor

Tutorial

Step 1. Authenticate your CLI

Before you can create a new Hasura DDN project, you need to authenticate your CLI:
ddn auth login

This will launch a browser window prompting you to log in or sign up for Hasura DDN. After you log in, the CLI will acknowledge your login, giving you access to Hasura Cloud resources.

Step 2. Scaffold out a new local project

Next, create a new local project:
ddn supergraph init my-project && cd my-project

Once you move into this directory, you'll see your project scaffolded out for you. You can view the structure by either running ls in your terminal, or by opening the directory in your preferred editor.
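As a rough, illustrative sketch (the exact contents may vary by CLI version), the scaffold looks something like this:

my-project
├── app/
├── engine/
├── globals/
├── compose.yaml
├── hasura.yaml
└── supergraph.yaml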

Step 3. Initialize your Storage connector

In your project directory, run:
ddn connector init my_storage -i

Select hasura/storage from the list of connectors. You can start typing storage to quickly filter the list; hit ENTER to accept any default values for which you're prompted.

Step 4. Update your configuration.yaml

Update the app/connector/my_storage/configuration.yaml to utilize the local file system by replacing its contents with:
# yaml-language-server: $schema=https://raw.githubusercontent.com/hasura/ndc-storage/main/jsonschema/configuration.schema.json
clients:
  - id: fs
    type: fs
    defaultDirectory:
      value: /home/nonroot/data
concurrency:
  query: 5
  mutation: 1
runtime:
  maxDownloadSizeMBs: 20
  maxUploadSizeMBs: 20
generator:
  promptqlCompatible: false
Navigating the file system

"Local" in this case refers to the container in which the connector is running. The files we create and query will be ephemeral and unavailable after the container is stopped.

Step 5. Introspect your cloud storage

Next, use the CLI to introspect your cloud storage:
ddn connector introspect my_storage

Because this connector's schema maps to file system conventions shared by S3-compatible providers, the connector doesn't generate any local configuration files. Instead, the introspection command above generates a my_storage.hml file that contains all the models and commands necessary to interact with your file system via the exposed GraphQL API.

You can check which resources are available — and their status — at any point using the CLI:
ddn connector show-resources my_storage

Step 6. Add your models

The connector contains two models: storage_buckets and storage_objects.

Track them both with this command:
ddn model add my_storage "*"

Open the app/metadata directory and you'll find newly generated files for both of these models. The DDN CLI uses these Hasura Metadata Language (HML) files to represent the available bucket(s) and the objects within them.
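For orientation, a model definition in HML looks roughly like the abridged sketch below. The field names shown are illustrative; the generated files will contain considerably more detail:

kind: Model
version: v2
definition:
  name: storage_objects
  source:
    dataConnectorName: my_storage
    collection: storage_objects
  graphql:
    selectMany:
      queryRootField: storageObjects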

Step 7. Create a new build

To create a local build, run:
ddn supergraph build local

The build is stored as a set of JSON files in engine/build.

Step 8. Start your local services

Start your local Hasura DDN Engine and Storage connector:
ddn run docker-start

Your terminal will be taken over by logs for the different services.

Step 9. Run your first query

In a new terminal tab, open your local console:
ddn console --local
In the GraphiQL explorer of the console, write this query:
query GET_ALL_BUCKETS {
  storageBuckets(args: {}) {
    name
  }
}
You'll get the following response:
{
  "data": {
    "storageBuckets": [
      {
        "name": "/home/nonroot/data"
      }
    ]
  }
}

This is the same directory you provided in your configuration.yaml file, and it's where we'll soon write new files via the API.
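You can also run the same query outside the console. Assuming the local engine exposes its GraphQL endpoint at http://localhost:3280/graphql (a common default for local DDN setups; check the console or your service logs for the actual URL), a curl equivalent would look like:

curl -s http://localhost:3280/graphql \
  -H 'Content-Type: application/json' \
  -d '{"query": "query GET_ALL_BUCKETS { storageBuckets(args: {}) { name } }"}'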

Step 10. Add your commands

Let's write a text file to this directory (bucket). To do so, we can use one of the pre-existing commands in our my_storage.hml file.

Run the following to add the upload_storage_object_as_text command:
ddn command add my_storage "upload_storage_object_as_text"

We'll also need another command that allows us to download the file as text.

We can add that using the following:
ddn command add my_storage "download_storage_object_as_text"

As before, these commands will generate new HML files in the app/metadata directory.

Step 11. Create a new build and restart your services

Create the new build:
ddn supergraph build local
Kill your locally running services with CTRL+C and then restart them with:
ddn run docker-start

Step 12. Create a new text file via a mutation

From the GraphiQL explorer in your console, execute the following mutation:
mutation ADD_TXT_FILE {
  uploadStorageObjectAsText(data: "This is a sample text file.", name: "sample.txt", bucket: "/home/nonroot/data") {
    name
    size
  }
}
This will return a value like the following:
{
  "data": {
    "uploadStorageObjectAsText": {
      "name": "sample.txt",
      "size": 27
    }
  }
}
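If you prefer not to inline arguments, the same mutation can be written with GraphQL variables. The scalar types below are assumptions; confirm them against the schema in the console's explorer:

mutation ADD_TXT_FILE($data: String!, $name: String!, $bucket: String!) {
  uploadStorageObjectAsText(data: $data, name: $name, bucket: $bucket) {
    name
    size
  }
}

With variables:

{
  "data": "This is a sample text file.",
  "name": "sample.txt",
  "bucket": "/home/nonroot/data"
}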

Step 13. Query the new file's contents

You can now query that file's text value:
query GET_TEXT_VALUE_FROM_OBJECT {
  downloadStorageObjectAsText(name: "sample.txt") {
    data
  }
}
You'll get a response like this:
{
  "data": {
    "downloadStorageObjectAsText": {
      "data": "This is a sample text file."
    }
  }
}

Next steps

Congratulations on completing your first Hasura DDN project to interact with cloud storage! 🎉

Here's what you just accomplished:

  • You started with a fresh project and connected it to the Storage connector.
  • You set up metadata to represent your file system and methods of interacting with it, which acts as the blueprint for your API.
  • Then, you created a build — essentially compiling everything into a ready-to-use API — and successfully ran your first GraphQL queries to fetch data.
  • You utilized commands to write a file and then access its text value.
  • Along the way, you learned how to iterate on your schema and refresh your metadata to reflect changes.

Now, you're equipped to connect and expose your data, empowering you to iterate and scale with confidence. Great work!

Take a look at our Storage connector docs to learn more about how to use Hasura DDN with cloud storage providers. Or, if you're ready, get started with adding permissions to control access to your API.