Get Started with Hasura DDN and Prometheus
Overview
This tutorial takes about twenty minutes to complete. You'll learn how to:
- Set up a new Hasura DDN project
- Connect it to a Prometheus instance
- Generate Hasura metadata
- Create a build
- Run your first query
Additionally, we'll familiarize you with the steps and workflows necessary to iterate on your API.
This tutorial assumes you're starting from scratch; you'll connect a locally-running Prometheus instance — set up to scrape metrics from itself — to Hasura, but you can easily follow the steps if you already have an existing Prometheus server. Hasura will never modify your source schema.
Prerequisites
Install the DDN CLI
macOS and Linux
Simply run the installer script in your terminal:
curl -L https://graphql-engine-cdn.hasura.io/ddn/cli/v4/get.sh | bash
Currently, the CLI does not support installation on ARM-based Linux systems.
Windows
- Download the latest DDN CLI installer for Windows.
- Run the DDN_CLI_Setup.exe installer file and follow the instructions. This will only take a minute.
- By default, the DDN CLI is installed under C:\Users\{Username}\AppData\Local\Programs\DDN_CLI.
- The DDN CLI is added to your %PATH% environment variable so that you can use the ddn command from your terminal.
Install Docker
The Docker-based workflow helps you iterate and develop locally without deploying any changes to Hasura DDN, making the development experience faster and your feedback loops shorter. You'll need Docker Compose v2.20 or later.
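Since the Compose version requirement matters here, it's worth a quick check before continuing; this prints the installed Compose plugin version:
docker compose version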
Validate the installation
You can verify that the DDN CLI is installed correctly by running:
ddn doctor
Tutorial
Step 1. Authenticate your CLI
ddn auth login
This will launch a browser window prompting you to log in or sign up for Hasura DDN. After you log in, the CLI will acknowledge your login, giving you access to Hasura Cloud resources.
Step 2. Scaffold out a new local project
ddn supergraph init my-project && cd my-project
Once you move into this directory, you'll see your project scaffolded out for you. You can view the structure by running ls in your terminal or by opening the directory in your preferred editor.
Step 3. Initialize your Prometheus connector
ddn connector init my_prometheus -i
From the dropdown, select hasura/prometheus (you can type to filter the list). Then, enter the following connection string when prompted:
http://local.hasura.dev:9090
Step 4. Start the local Prometheus instance
First, create a Compose file for the Prometheus service:
touch app/connector/my_prometheus/compose.prometheus.yaml
Then add the following to it:
services:
prometheus:
image: prom/prometheus:latest
container_name: prometheus
ports:
- "9090:9090"
volumes:
- ./prometheus.yml:/etc/prometheus/prometheus.yml
command:
- "--config.file=/etc/prometheus/prometheus.yml"
Next, create a minimal Prometheus configuration that tells the server to scrape its own metrics:
touch app/connector/my_prometheus/prometheus.yml
Add the following:
global:
scrape_interval: 15s
scrape_configs:
- job_name: "prometheus"
static_configs:
- targets: ["local.hasura.dev:9090"]
Finally, start the service:
docker compose -f app/connector/my_prometheus/compose.prometheus.yaml up -d
You can open Prometheus by visiting http://localhost:9090. Go ahead and navigate here and poke around; we'll need some requests against the server for our queries later!
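If you'd rather stay in the terminal, you can confirm the server is healthy and generate a few requests with curl; both endpoints below are part of Prometheus's standard HTTP API, so only the port needs adjusting if you changed the mapping above:
# Prometheus returns a short "healthy" message when it's ready
curl http://localhost:9090/-/healthy
# Fire a handful of queries so prometheus_http_requests_total has samples to report on later
for i in $(seq 1 10); do curl -s 'http://localhost:9090/api/v1/query?query=up' > /dev/null; done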
Step 5. Introspect your Prometheus server
ddn connector introspect my_prometheus
After running this, you should see a representation of your Prometheus server's schema in the app/connector/my_prometheus/configuration.yaml file; you can view it using cat or open the file in your editor.
You can list the resources the CLI discovered during introspection at any point by running:
ddn connector show-resources my_prometheus
Step 6. Add your model
ddn model add my_prometheus prometheus_http_requests_total
Open the app/metadata directory and you'll find a newly-generated file: PrometheusHttpRequestsTotal.hml. The DDN CLI will use this Hasura Metadata Language file to represent the prometheus_http_requests_total metric from Prometheus in your API as a model.
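If you'd like to confirm the mapping without opening an editor, you can search for the metric name in the generated file; the exact field layout depends on your CLI version, but the underlying metric name should appear verbatim:
grep -n 'prometheus_http_requests_total' app/metadata/PrometheusHttpRequestsTotal.hml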
Step 7. Create a new build
ddn supergraph build local
The build is stored as a set of JSON files in engine/build.
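If you're curious what a build looks like on disk, you can list the generated files (the exact layout varies by CLI version):
find engine/build -name '*.json'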
Step 8. Start your local services
ddn run docker-start
Your terminal will be taken over by logs for the different services.
Step 9. Run your first query
ddn console --local
We'll write a GraphQL query that's the equivalent of this PromQL query:
sum(rate(prometheus_http_requests_total{job="prometheus"}[1m]))
query GET_REQUESTS_PER_SECOND {
  prometheusHttpRequestsTotal(
    args: { fn: [{ rate: "1m" }, { sum: [] }] }
    where: { timestamp: { _gt: "2025-03-26" }, job: { _eq: "prometheus" } }
  ) {
    job
    timestamp
    value
  }
}
For the most accurate results, change the _gt date value to a recent one. You should see a response similar to the following:
{
"data": {
"prometheusHttpRequestsTotal": [
{
"job": "prometheus",
"timestamp": 1743004075,
"value": 0.08889283968176363
}
]
}
}
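If you'd like to sanity-check the number, you can run the original PromQL directly against Prometheus's HTTP API (the /api/v1/query endpoint accepts the query as a form parameter) and compare:
curl -s http://localhost:9090/api/v1/query \
  --data-urlencode 'query=sum(rate(prometheus_http_requests_total{job="prometheus"}[1m]))'
The value won't match your GraphQL response exactly, since it's evaluated at a different instant, but it should be in the same ballpark.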
Step 10. Add another model
ddn model add my_prometheus process_cpu_seconds_total
In app/metadata, you'll find a newly-generated file: ProcessCpuSecondsTotal.hml. This will be used to expose the counter metric via your supergraph API.
Step 11. Rebuild your project
Just as before, create a new local build:
ddn supergraph build local
Then restart your services so the engine picks up the new build:
ddn run docker-start
Step 12. Query your new build
Head back to the console and run a query against your new model:
query GET_CPU_PROCESS_TOTAL {
  processCpuSecondsTotal(
    args: { fn: [{ rate: "1m" }, { sum: [] }] }
    where: { timestamp: { _gt: "2025-03-26" }, job: { _eq: "prometheus" } }
  ) {
    timestamp
    value
  }
}
You should see a response similar to the following:
{
"data": {
"processCpuSecondsTotal": [
{
"timestamp": 1743010998,
"value": 0.002222123461179495
}
]
}
}
Next steps
Congratulations on completing your first Hasura DDN project with Prometheus! 🎉
Here's what you just accomplished:
- You started with a fresh project and connected it to a local Prometheus server.
- You set up metadata to represent your counter metrics, which acts as the blueprint for your API.
- Then, you created a build — essentially compiling everything into a ready-to-use API — and successfully ran your first GraphQL queries to fetch data.
- Along the way, you learned how to iterate on your schema and refresh your metadata to reflect changes.
Now, you're equipped to connect and expose your data, empowering you to iterate and scale with confidence. Great work!
Take a look at our Prometheus docs to learn more about how to use Hasura DDN with Prometheus. Or, if you're ready, get started with adding permissions to control access to your API.