This guide explains how to initialize a connector, configure its environment variables, and link it to your data source.
Once initialized, you'll be ready to introspect the source and integrate it into your API.
You'll need a project before initializing a connector.
A wizard will appear with a dropdown list. If you know the name of the connector, start typing the name. Otherwise, use
the arrows to scroll through the list. Hit ENTER when you've selected your desired connector; you'll then be prompted
to enter some values.
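For example, you might launch the wizard with a command like the one below; `my_connector` is a placeholder name of your choosing, and the `-i` interactive flag is shown as an assumption, so confirm the exact usage with `ddn connector init --help`.

```sh
# Start the interactive wizard for a new connector named "my_connector" (placeholder)
ddn connector init my_connector -i
```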
Customization
You can customize which subgraph this connector is added to by
changing your project's context or using flags. More information can be found
in the CLI docs for the ddn connector init command.
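For illustration, either approach might look like the sketch below; the `--subgraph` flag and the `ddn context set` subcommand are assumptions to verify against those CLI docs, and the paths are placeholders.

```sh
# Option 1: change the project context so new connectors land in a different subgraph
ddn context set subgraph ./my_subgraph/subgraph.yaml

# Option 2: pass the target subgraph explicitly when initializing the connector
ddn connector init my_connector -i --subgraph ./my_subgraph/subgraph.yaml
```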
The CLI will assign a random port for the connector to use during local development. You can hit ENTER to accept the
suggested value or enter your own. Then, depending on your connector, there may be a set of environment variables that
it requires:
You can construct the base of your JDBC connection URL using your Databricks UI under SQL Warehouses » <name-of-warehouse> » Connection details.

| ENV | Example | Description |
|-----|---------|-------------|
| `JDBC_SCHEMAS` | `default,public` | A comma-separated list of schemas within the referenced catalog. |
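For reference, a hedged sketch of what the resulting values might look like is below; the `JDBC_URL` variable name, host, warehouse path, and token are placeholders or assumptions, so copy the real base URL from the Connection details page and match whatever variable names the CLI prompts for.

```sh
# Placeholder values only; the base URL comes from SQL Warehouses » <name-of-warehouse> » Connection details
JDBC_URL="jdbc:databricks://dbc-a1b2c3d4-e5f6.cloud.databricks.com:443/default;transportMode=http;ssl=1;AuthMech=3;httpPath=/sql/1.0/warehouses/abc123;UID=token;PWD=<personal-access-token>"
JDBC_SCHEMAS="default,public"
```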
| ENV | Example | Description |
|-----|---------|-------------|
| `DUCKDB_URL` | `/path/to/*.duckdb` | The path to the DuckDB database file which will be mounted to the connector's container. |
| ENV | Example | Description |
|-----|---------|-------------|
| `ELASTICSEARCH_URL` | `https://example.es.gcp.cloud.es.io:9200` | The comma-separated list of Elasticsearch host addresses for connection (use `local.hasura.dev` instead of `localhost` if your connector is running on your local machine). |
| `ELASTICSEARCH_USERNAME` | `default` | The username for authenticating to the Elasticsearch cluster. |
| `ELASTICSEARCH_API_KEY` | | The Elasticsearch API key for authenticating to the Elasticsearch cluster. |
| `ELASTICSEARCH_CA_CERT_PATH` | `/etc/connector/cacert.pem` | The path to the Certificate Authority (CA) certificate for verifying the Elasticsearch server's SSL certificate. |
| `ELASTICSEARCH_INDEX_PATTERN` | `hasura*` | The pattern for matching Elasticsearch indices, potentially including wildcards, used by the connector. |
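To make the local-development note concrete, here is a hedged sketch with placeholder values; it assumes the cluster runs on your machine (hence `local.hasura.dev`) and that you authenticate with an API key rather than a username and password.

```sh
# Placeholder values for a locally running Elasticsearch cluster reached from the connector's container
ELASTICSEARCH_URL="http://local.hasura.dev:9200"
ELASTICSEARCH_API_KEY="<api-key>"       # alternative to username/password authentication
ELASTICSEARCH_INDEX_PATTERN="hasura*"   # restricts which indices the connector matches
```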
The HTTP connector won't prompt you for any environment variables. When you initialize the connector, a config.yaml
will be generated where you can list valid specs for your API(s).
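As a loose sketch only (the field names below are assumptions rather than the documented schema, so consult the HTTP connector's configuration docs for the real format), such a file might reference an OpenAPI spec like this:

```yaml
# Hypothetical config.yaml sketch; field names are assumed, verify against the HTTP connector docs
files:
  - file: https://example.com/openapi.json # placeholder URL to an OpenAPI document
    spec: oas3                             # assumed identifier for an OpenAPI 3.x spec
```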
When you first initialize the connector, the CLI will only ask for the API's endpoint. You can further configure the
connector — such as choosing different configurations for introspection vs. execution — using the connector's
configuration file.
Snowflake allows you to set a number of defaults. Your JDBC connection string can be minimal (i.e., including only the account identifier, username, and password) or more granular, depending on your settings within Snowflake.
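As a rough sketch (the `JDBC_URL` variable name and the parameter set are assumptions, and every value below is a placeholder), a minimal versus a more granular connection string might look like this:

```sh
# Minimal form: relies on defaults configured in Snowflake
JDBC_URL="jdbc:snowflake://myorg-myaccount.snowflakecomputing.com/?user=MY_USER&password=<password>"

# More granular form: pins the database, warehouse, schema, and role explicitly
JDBC_URL="jdbc:snowflake://myorg-myaccount.snowflakecomputing.com/?user=MY_USER&password=<password>&db=MY_DB&warehouse=MY_WH&schema=PUBLIC&role=MY_ROLE"
```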
| ENV | Example | Description |
|-----|---------|-------------|
| `ACCESS_KEY_ID` | `AKIAIOSFODNN7EXAMPLE` | Your AWS access key ID used to authenticate with S3. |
| `SECRET_ACCESS_KEY` | `wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY` | Your AWS secret access key used to authenticate with S3. |
| `STORAGE_ENDPOINT` | `https://s3.amazonaws.com` | The S3 service endpoint URL. |
| `DEFAULT_BUCKET` | `my-app-bucket` | The default S3 bucket name where files will be stored. |
These environment variables are AWS-specific
Upon initialization, the connector will prompt you for the environment variables listed above. However, you can
configure this connector to work with any S3-compatible cloud provider. Check the configuration docs for more
information.
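For instance, a hypothetical non-AWS setup might reuse the same variables with another provider's endpoint; the MinIO host and credentials below are placeholders, not values the connector requires.

```sh
# Placeholder values for an S3-compatible provider such as a self-hosted MinIO server
ACCESS_KEY_ID="<minio-access-key>"
SECRET_ACCESS_KEY="<minio-secret-key>"
STORAGE_ENDPOINT="https://minio.example.com:9000"
DEFAULT_BUCKET="my-app-bucket"
```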
| ENV | Example | Description |
|-----|---------|-------------|
| `TURSO_URL` | `libsql://...` | The connection string for the Turso database, using the libsql protocol. You can generate this from the database's overview in your Turso dashboard. |
| `TURSO_AUTH_TOKEN` | `eyJ...` | A Turso auth token with access to the same database; this is also available via the Turso dashboard. |
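For illustration only, a local set of values might look like the sketch below; the shape of the libsql URL is an assumption based on typical Turso connection strings, so copy the exact string and token from your dashboard.

```sh
# Placeholder values; copy the real connection string and token from your Turso dashboard
TURSO_URL="libsql://my-database-my-org.turso.io"
TURSO_AUTH_TOKEN="eyJ..."
```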
If your data source requires a connection string or endpoint, the CLI will confirm that it successfully tested the
connection to your source. Additionally, it generates configuration files, which you can find in the connector
directory of the subgraph where you added the connector (default: app). Finally, the CLI will create a
DataConnectorLink in your connector's metadata directory.
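As an illustrative layout only (exact file names vary by connector, and `my_connector` plus the default `app` subgraph are placeholders), the generated artifacts typically land along these lines:

```text
app/
├── connector/
│   └── my_connector/        # generated connector configuration files
└── metadata/
    └── my_connector.hml     # metadata including the DataConnectorLink
```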
Now that you've initialized a connector and connected it to your data, you're ready to introspect the source and
populate the configuration files with source-specific information that Hasura will need to build your API. Check out the
introspection page to learn more.
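When you're ready, introspection is typically a single command; `my_connector` is the same placeholder name used above.

```sh
# Introspect the data source behind the newly initialized connector
ddn connector introspect my_connector
```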