Get Started with Hasura DDN and Elasticsearch
Overview
This tutorial takes about fifteen minutes to complete. You'll learn how to:
- Set up a new Hasura DDN project
- Connect it to an Elasticsearch instance
- Generate Hasura metadata
- Create a build
- Run your first query
Additionally, we'll familiarize you with the steps and workflows necessary to iterate on your API.
This tutorial assumes you're starting from scratch. We'll use a locally running Elasticsearch instance via Docker and connect it to Hasura, but you can just as easily follow along if you already have data seeded or are using Elasticsearch's hosted service; Hasura will never modify your source schema.
Prerequisites
Install the DDN CLI
macOS and Linux
Simply run the installer script in your terminal:
curl -L https://graphql-engine-cdn.hasura.io/ddn/cli/v4/get.sh | bash
Windows
- Download the latest DDN CLI installer for Windows.
- Run the DDN_CLI_Setup.exe installer file and follow the instructions. This will only take a minute.
- By default, the DDN CLI is installed under C:\Users\{Username}\AppData\Local\Programs\DDN_CLI.
- The DDN CLI is added to your %PATH% environment variable so that you can use the ddn command from your terminal.
Install Docker
The Docker-based workflow helps you iterate and develop locally without deploying any changes to Hasura DDN, making the development experience faster and your feedback loops shorter. You'll need Docker Compose v2.27.1 or later.
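You can confirm your installed version meets this requirement by running:
docker compose version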
Validate the installation
You can verify that the DDN CLI is installed correctly by running:
ddn doctor
Tutorial
Step 1. Authenticate your CLI
ddn auth login
This will launch a browser window prompting you to log in or sign up for Hasura DDN. After you log in, the CLI will acknowledge your login, giving you access to Hasura Cloud resources.
Step 2. Scaffold out a new local project
ddn supergraph init my-project && cd my-project
Once you move into this directory, you'll see your project scaffolded out for you. You can view the structure by either running ls in your terminal, or by opening the directory in your preferred editor.
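As a rough sketch, the scaffold looks something like this (the exact files vary by CLI version, so treat this as illustrative):
.
├── .hasura/
├── app/
├── engine/
├── globals/
├── .env
├── compose.yaml
├── hasura.yaml
└── supergraph.yaml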
Step 3. Initialize your Elasticsearch connector
ddn connector init my_es -i
From the dropdown, start typing elasticsearch and hit enter to select the connector. Accept the default port when prompted, then provide the following values:
- ELASTICSEARCH_URL: http://local.hasura.dev:9200/
- ELASTICSEARCH_USERNAME: elastic
- ELASTICSEARCH_PASSWORD: elastic
For now, you can leave the rest of the values blank (or at their defaults, if present).
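Under the hood, the CLI stores these as environment variables for the connector. As an illustrative sketch only (the exact file location and any variable-name prefixes depend on your CLI version), the values map to entries like:
# illustrative only — the CLI manages these files for you
ELASTICSEARCH_URL="http://local.hasura.dev:9200/"
ELASTICSEARCH_USERNAME="elastic"
ELASTICSEARCH_PASSWORD="elastic"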
Step 4. Start the local Elasticsearch container
Run the following to start a single-node Elasticsearch container over plain HTTP, with the password matching the value you gave the connector:
docker run --rm -p 127.0.0.1:9200:9200 -d --name elasticsearch \
-e ELASTIC_PASSWORD="elastic" \
-e "discovery.type=single-node" \
-e "xpack.security.http.ssl.enabled=false" \
-e "xpack.license.self_generated.type=trial" \
docker.elastic.co/elasticsearch/elasticsearch:8.17.1
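Elasticsearch can take a few seconds to boot. Before creating any indexes, you can confirm it's accepting connections; the root endpoint returns cluster info as JSON:
curl -u elastic:elastic http://localhost:9200/
Once it responds, create a customers index with the mapping below: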
curl -X PUT "http://localhost:9200/customers/" -u elastic:elastic -H 'Content-Type: application/json' -d'
{
"mappings": {
"properties": {
"customer_id": {
"type": "keyword"
},
"name": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword"
}
}
},
"email": {
"type": "keyword",
"index": true
},
"location": {
"type": "geo_point"
}
}
}
}
'
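If you'd like to double-check that the mapping was applied, you can fetch it back:
curl -u elastic:elastic http://localhost:9200/customers/_mapping
Next, bulk-insert five sample customer documents: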
curl -X POST "http://localhost:9200/_bulk" -u elastic:elastic -H 'Content-Type: application/json' -d'
{ "index": { "_index": "customers", "_id": "1" } }
{ "customer_id": "CUST001", "name": "John Doe", "email": "[email protected]", "location": { "lat": 40.7128, "lon": -74.0060 } }
{ "index": { "_index": "customers", "_id": "2" } }
{ "customer_id": "CUST002", "name": "Jane Smith", "email": "[email protected]", "location": { "lat": 34.0522, "lon": -118.2437 } }
{ "index": { "_index": "customers", "_id": "3" } }
{ "customer_id": "CUST003", "name": "Alice Johnson", "email": "[email protected]", "location": { "lat": 51.5074, "lon": -0.1278 } }
{ "index": { "_index": "customers", "_id": "4" } }
{ "customer_id": "CUST004", "name": "Bob Brown", "email": "[email protected]", "location": { "lat": 48.8566, "lon": 2.3522 } }
{ "index": { "_index": "customers", "_id": "5" } }
{ "customer_id": "CUST005", "name": "Charlie Davis", "email": "[email protected]", "location": { "lat": 35.6895, "lon": 139.6917 } }
'
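Bulk-indexed documents only become searchable after a refresh, which Elasticsearch performs automatically within about a second. If a search comes back empty, you can force one:
curl -X POST "http://localhost:9200/customers/_refresh" -u elastic:elastic
Now run a quick search to confirm the documents are queryable: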
curl --location 'http://localhost:9200/customers/_search' \
-H 'Content-Type: application/json' \
-u elastic:elastic \
--data '{
"_source": [
"_id",
"name"
]
}'
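You should get back a response resembling the following, trimmed here for brevity. Note that only name appears in _source, since that's all the request asked for:
{
  "hits": {
    "total": { "value": 5, "relation": "eq" },
    "hits": [
      { "_index": "customers", "_id": "1", "_source": { "name": "John Doe" } },
      ...
    ]
  }
}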
Step 5. Introspect your Elasticsearch database
ddn connector introspect my_es
After running this, you should see a representation of your database's schema in the app/connector/my_es/configuration.json file; you can view this using cat or open the file in your editor.
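If you have jq installed, you can also peek at the file's top-level structure without opening it:
jq 'keys' app/connector/my_es/configuration.json
To list everything the introspection discovered, and what's already tracked, run: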
ddn connector show-resources my_es
Step 6. Add your model
ddn model add my_es customers
Open the app/metadata directory and you'll find a newly-generated file: Customers.hml. The DDN CLI will use this Hasura Metadata Language file to represent the customers index from Elasticsearch in your API as a model.
You can import all the indexes from your Elasticsearch database as models by running ddn model add my_es "*".
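As a trimmed sketch, assuming the shape generated by recent CLI versions (exact fields and version numbers vary), the model definition inside Customers.hml looks something like:
kind: Model
version: v2
definition:
  name: Customers
  objectType: Customers
  source:
    dataConnectorName: my_es
    collection: customers
  graphql:
    selectMany:
      queryRootField: customers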
Step 7. Create a new build
ddn supergraph build local
The build is stored as a set of JSON files in engine/build.
Step 8. Start your local services
ddn run docker-start
Your terminal will be taken over by logs for the different services.
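By default, the local engine serves your GraphQL API at http://localhost:3280/graphql; this is an assumption based on the standard scaffold, so check your compose file if it differs. You can smoke-test it directly from another terminal (a default local scaffold typically accepts unauthenticated requests; your project's auth config may require headers):
curl -s http://localhost:3280/graphql \
  -H 'Content-Type: application/json' \
  -d '{"query":"query { customers { name } }"}'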
Step 9. Run your first query
ddn console --local
This opens the Hasura DDN console in your browser, connected to your local services. In the GraphiQL explorer, run the following query:
query MyQuery {
customers {
email
name
}
}
You should see a response like this:
{
"data": {
"customers": [
{
"email": "[email protected]",
"name": "John Doe"
},
{
"email": "[email protected]",
"name": "Jane Smith"
},
{
"email": "[email protected]",
"name": "Alice Johnson"
},
{
"email": "[email protected]",
"name": "Bob Brown"
},
{
"email": "[email protected]",
"name": "Charlie Davis"
}
]
}
}
Step 10. Iterate on your Elasticsearch schema
Let's add a transactions index to see how schema changes flow through to your API. First, create the index with its mapping:
curl -X PUT "http://localhost:9200/transactions/" -u elastic:elastic -H 'Content-Type: application/json' -d'
{
"mappings": {
"properties": {
"transaction_id": {
"type": "keyword"
},
"timestamp": {
"type": "date",
"format": "strict_date_optional_time||epoch_millis"
},
"customer_id": {
"type": "keyword"
},
"transaction_details": {
"properties": {
"item_id": {
"type": "keyword"
},
"item_name": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"price": {
"type": "float"
},
"quantity": {
"type": "integer"
},
"currency": {
"type": "keyword"
}
}
}
}
}
}
'
Then, bulk-insert some sample transaction documents:
curl -X POST "http://localhost:9200/_bulk" -u elastic:elastic -H 'Content-Type: application/json' -d'
{ "index": { "_index": "transactions", "_id": "1" } }
{ "transaction_id": "TXN001", "timestamp": "2024-09-01T12:00:00", "customer_id": "CUST001", "transaction_details": { "item_id": "ITEM001", "item_name": "Laptop", "price": 999.99, "quantity": 1, "currency": "USD" } }
{ "index": { "_index": "transactions", "_id": "2" } }
{ "transaction_id": "TXN002", "timestamp": "2024-09-02T14:30:00", "customer_id": "CUST002", "transaction_details": { "item_id": "ITEM002", "item_name": "Smartphone", "price": 599.99, "quantity": 2, "currency": "USD" } }
{ "index": { "_index": "transactions", "_id": "3" } }
{ "transaction_id": "TXN003", "timestamp": "2024-09-03T09:45:00", "customer_id": "CUST003", "transaction_details": { "item_id": "ITEM003", "item_name": "Tablet", "price": 299.99, "quantity": 1, "currency": "USD" } }
{ "index": { "_index": "transactions", "_id": "4" } }
{ "transaction_id": "TXN004", "timestamp": "2024-09-04T16:15:00", "customer_id": "CUST004", "transaction_details": { "item_id": "ITEM004", "item_name": "Headphones", "price": 199.99, "quantity": 1, "currency": "USD" } }
{ "index": { "_index": "transactions", "_id": "5" } }
{ "transaction_id": "TXN005", "timestamp": "2024-09-05T11:30:00", "customer_id": "CUST005", "transaction_details": { "item_id": "ITEM005", "item_name": "Monitor", "price": 149.99, "quantity": 2, "currency": "USD" } }
'
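As a sanity check, you can confirm the documents landed:
curl -u elastic:elastic 'http://localhost:9200/transactions/_search?size=1&pretty'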
Step 11. Refresh your metadata and rebuild your project
The following steps are necessary each time you make changes to your source schema. This includes adding, modifying, or dropping indexes.
Step 11.1. Kill your services
Bring down the services by pressing CTRL+C in the terminal tab logging their activity.
Step 11.2. Re-introspect your data source
ddn connector introspect my_es
In app/connector/my_es/configuration.json, you'll see the schema updated to include the transactions index. In app/metadata/my_es.hml, you'll see transactions present in the metadata as well.
Step 11.3. Update your metadata
ddn model add my_es transactions
Open the app/metadata directory and you'll find a newly-generated file: Transactions.hml. The DDN CLI will use this Hasura Metadata Language file to represent the transactions index from Elasticsearch in your API as a model.
As before, you can import all the indexes from your Elasticsearch database as models by running ddn model add my_es "*".
Step 11.4. Create a new build
ddn supergraph build local
Step 11.5. Restart your services
ddn run docker-start
Step 12. Query your new build
Head back to the console and run a query against your new transactions model:
query MyQuery {
transactions {
customerId
transactionId
transactionDetails {
currency
price
}
}
}
You should see a response like this:
{
"data": {
"transactions": [
{
"customerId": "CUST001",
"transactionId": "TXN001",
"transactionDetails": [
{
"currency": "USD",
"price": 999.99
}
]
},
{
"customerId": "CUST002",
"transactionId": "TXN002",
"transactionDetails": [
{
"currency": "USD",
"price": 599.99
}
]
},
{
"customerId": "CUST003",
"transactionId": "TXN003",
"transactionDetails": [
{
"currency": "USD",
"price": 299.99
}
]
},
{
"customerId": "CUST004",
"transactionId": "TXN004",
"transactionDetails": [
{
"currency": "USD",
"price": 199.99
}
]
},
{
"customerId": "CUST005",
"transactionId": "TXN005",
"transactionDetails": [
{
"currency": "USD",
"price": 149.99
}
]
}
]
}
}
Next steps
Congratulations on completing your first Hasura DDN project with Elasticsearch! 🎉
Here's what you just accomplished:
- You started with a fresh project and connected it to a local Elasticsearch database.
- You set up metadata to represent your indexes, which acts as the blueprint for your API.
- Then, you created a build — essentially compiling everything into a ready-to-use API — and successfully ran your first GraphQL queries to fetch data.
- Along the way, you learned how to iterate on your schema and refresh your metadata to reflect changes.
Now, you're equipped to connect and expose your data, empowering you to iterate and scale with confidence. Great work!
Take a look at our Elasticsearch docs to learn more about how to use Hasura DDN with Elasticsearch. Or, if you're ready, get started with adding permissions to control access to your API.