Get Started with Hasura DDN and Cloud Storage
Overview
This tutorial takes about twenty minutes to complete. You'll learn how to:
- Set up a new Hasura DDN project
- Connect it to a cloud storage instance
- Generate Hasura metadata
- Create a build
- Run your first query
- Mutate data
Additionally, we'll familiarize you with the steps and workflows necessary to iterate on your API.
In this tutorial, we'll use the Storage connector's local file system, but you can easily configure the connector to work with S3-compatible cloud storage providers.
Hasura will never modify your source schema. Learn more about these sources in the connector's configuration section.
Prerequisites
Install the DDN CLI
macOS and Linux

Simply run the installer script in your terminal:

curl -L https://graphql-engine-cdn.hasura.io/ddn/cli/v4/get.sh | bash

Currently, the CLI does not support installation on ARM-based Linux systems.

Windows

- Download the latest DDN CLI installer for Windows.
- Run the DDN_CLI_Setup.exe installer file and follow the instructions. This will only take a minute.
- By default, the DDN CLI is installed under C:\Users\{Username}\AppData\Local\Programs\DDN_CLI.
- The DDN CLI is added to your %PATH% environment variable so that you can use the ddn command from your terminal.
Install Docker
The Docker-based workflow helps you iterate and develop locally without deploying any changes to Hasura DDN, making the development experience faster and your feedback loops shorter. You'll need Docker Compose v2.20 or later.
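You can confirm which version of Docker Compose you have installed by running:

docker compose version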
Validate the installation
You can verify that the DDN CLI is installed correctly by running:
ddn doctor
Tutorial
Step 1. Authenticate your CLI
ddn auth login
This will launch a browser window prompting you to log in or sign up for Hasura DDN. After you log in, the CLI will acknowledge your login, giving you access to Hasura Cloud resources.
Step 2. Scaffold out a new local project
ddn supergraph init my-project && cd my-project
Once you move into this directory, you'll see your project scaffolded out for you. You can view the structure by either running ls in your terminal, or by opening the directory in your preferred editor.
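For orientation, a freshly scaffolded project looks roughly like the sketch below. The exact set of files can vary between CLI versions, and some directories are only populated as you work through later steps:

my-project/
├── app/
│   ├── connector/      # connector configuration is added here in Step 3
│   └── metadata/       # generated Hasura Metadata Language (HML) files
├── engine/             # local builds land here (engine/build)
├── globals/            # project-wide metadata (auth and GraphQL config)
├── compose.yaml        # Docker Compose file for your local services
├── hasura.yaml         # marks the project root
└── supergraph.yaml     # the supergraph definition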
Step 3. Initialize your Storage connector
ddn connector init my_storage -i
Select hasura/storage from the list of connectors. You can start typing storage to quickly filter the list; hit ENTER to accept any default values for which you're prompted.
Step 4. Update your configuration.yaml

Open the configuration.yaml file in your connector's directory (app/connector/my_storage by default) and replace its contents with the following:
# yaml-language-server: $schema=https://raw.githubusercontent.com/hasura/ndc-storage/main/jsonschema/configuration.schema.json
clients:
  - id: fs
    type: fs
    defaultDirectory:
      value: /home/nonroot/data
concurrency:
  query: 5
  mutation: 1
runtime:
  maxDownloadSizeMBs: 20
  maxUploadSizeMBs: 20
generator:
  promptqlCompatible: false
"Local" in this case refers to the container in which the connector is running. The files we create and query will be ephemeral and unavailable after the container is stopped.
Step 5. Introspect your cloud storage
ddn connector introspect my_storage
As the schema for this connector is mapped to file system conventions for S3-compatible providers, the connector does not generate any local configuration files. Instead, the introspection command above will generate a my_storage.hml file that contains all the models and commands necessary to interact with your file system via the exposed GraphQL API.

You can list the resources discovered during introspection by running:

ddn connector show-resources my_storage
Step 6. Add your models
The connector contains two models: storage_buckets and storage_objects.
ddn model add my_storage "*"
Open the app/metadata directory and you'll find newly generated files for both of these models. The DDN CLI uses these Hasura Metadata Language (HML) files to represent the available bucket(s) and the objects within them.
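To give a rough sense of what these files contain, a generated model definition looks something like the abridged sketch below. The names and field values here are assumptions for illustration; your generated metadata is the source of truth:

kind: Model
version: v2
definition:
  # Assumed names for illustration; check the files in
  # app/metadata for the real definitions.
  name: storage_buckets
  objectType: storage_bucket
  source:
    dataConnectorName: my_storage
    collection: storage_buckets
  graphql:
    selectMany:
      queryRootField: storageBuckets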
Step 7. Create a new build
ddn supergraph build local
The build is stored as a set of JSON files in engine/build.
Step 8. Start your local services
ddn run docker-start
Your terminal will be taken over by logs for the different services.
Step 9. Run your first query
ddn console --local

This command opens the Hasura console for your local project. From the console's GraphiQL explorer, run the following query:
query GET_ALL_BUCKETS {
  storageBuckets(args: {}) {
    name
  }
}
You should see a response like this:

{
  "data": {
    "storageBuckets": [
      {
        "name": "/home/nonroot/data"
      }
    ]
  }
}
This is the same directory you provided in your configuration.yaml file and where we'll soon write new files via the API.
Step 10. Add your commands
Let's write a text file to this directory (bucket). To do so, we can use one of the pre-existing commands in our my_storage.hml file.
ddn command add my_storage "upload_storage_object_as_text"
We'll also need another command that allows us to download the file as text.
ddn command add my_storage "download_storage_object_as_text"
As before, these commands will generate new HML files in the app/metadata directory.
Step 11. Create a new build and restart your services

Stop your running services with CTRL+C, then create a new build and restart them:

ddn supergraph build local
ddn run docker-start
Step 12. Create a new text file via a mutation
Head back to the console and run the following mutation:

mutation ADD_TXT_FILE {
  uploadStorageObjectAsText(data: "This is a sample text file.", name: "sample.txt", bucket: "/home/nonroot/data") {
    name
    size
  }
}
You should see a response like this, confirming the file's name and size in bytes:

{
  "data": {
    "uploadStorageObjectAsText": {
      "name": "sample.txt",
      "size": 27
    }
  }
}
Step 13. Query the new file's contents
Back in the console, run this query:

query GET_TEXT_VALUE_FROM_OBJECT {
  downloadStorageObjectAsText(name: "sample.txt") {
    data
  }
}
You should see the file's contents returned as text:

{
  "data": {
    "downloadStorageObjectAsText": {
      "data": "This is a sample text file."
    }
  }
}
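If you'd also like to verify the file from the storage_objects side, you can try a query along the following lines. The argument and field shapes here are assumptions for illustration; check the models generated in your my_storage.hml file for the exact names:

query LIST_OBJECTS {
  # Argument and field names are illustrative; confirm them
  # against your generated metadata.
  storageObjects(args: { recursive: true }) {
    objects {
      name
      size
    }
  }
}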
Next steps
Congratulations on completing your first Hasura DDN project to interact with cloud storage! 🎉
Here's what you just accomplished:
- You started with a fresh project and connected it to the Storage connector.
- You set up metadata to represent your file system and methods of interacting with it, which acts as the blueprint for your API.
- Then, you created a build — essentially compiling everything into a ready-to-use API — and successfully ran your first GraphQL queries to fetch data.
- You utilized commands to write a file and then access its text value.
- Along the way, you learned how to iterate on your schema and refresh your metadata to reflect changes.
Now, you're equipped to connect and expose your data, empowering you to iterate and scale with confidence. Great work!
Take a look at our Storage connector docs to learn more about how to use Hasura DDN with cloud storage providers. Or, if you're ready, get started with adding permissions to control access to your API.