Connect to a source
Introduction
This guide explains how to initialize a connector, configure its environment variables, and link it to your data source. Once initialized, you'll be ready to introspect the source and integrate it into your PromptQL application.
You'll need a project before initializing a connector.
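If you don't have a project yet, one way to scaffold it is with the CLI; for example, assuming you want to call the project my_project:

```sh
# Scaffold a new local project (the project name here is an assumption)
ddn supergraph init my_project
cd my_project
```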
Step 1. Initialize a connector
Regardless of which connector you're using, you'll always begin by initializing it with a unique name:
```sh
ddn connector init <your_name_for_the_connector> -i
```
A wizard will appear with a dropdown list. If you know the name of the connector, start typing it. Otherwise, use the arrows to scroll through the list. Hit ENTER when you've selected your desired connector; you'll then be prompted to enter some values.
You can customize which subgraph this connector is added to by changing your project's context or using flags. More information can be found in the CLI docs for the ddn connector init command.
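For example, one way to target a specific subgraph is to set it in your project's context before initializing; the subgraph path and connector name below are assumptions for illustration:

```sh
# Point the CLI's context at the target subgraph (path is an assumption)
ddn context set subgraph ./app/subgraph.yaml

# Connector commands will now default to that subgraph
ddn connector init my_pg -i
```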
Step 2. Add environment variables
The CLI will assign a random port for the connector to use during local development. You can hit ENTER to accept the suggested value or enter your own. Then, depending on your connector, there may be a set of environment variables that it requires:
Athena

ENV | Example | Description
---|---|---
JDBC_URL | jdbc:athena://<host>:<port>/<database>?user=<username>&password=<password> | The JDBC URL to connect to the Amazon Athena database.
JDBC_SCHEMAS | public,app | The schemas to use for the database.

BigQuery

ENV | Example | Description
---|---|---
JDBC_URL | jdbc:bigquery://https://www.googleapis.com/bigquery/v2:443;ProjectId=project-id;DefaultDataset=dataset;OAuthType=0;OAuthServiceAcctEmail=service-account-email;OAuthPvtKey=/etc/connector/key.json; | The JDBC URL to connect to the BigQuery database.

Databricks

ENV | Example | Description
---|---|---
JDBC_URL | jdbc:databricks://<host>:<port>/default;transportMode=http;ssl=1;AuthMech=3;httpPath=/sql/1.0/warehouses/<warehouse-id>;UID=token;PWD=<access-token>;ConnCatalog=main; | You can construct the base of this using your Databricks UI under SQL Warehouses » <name-of-warehouse> » Connection details.
JDBC_SCHEMAS | default,public | A comma-separated list of schemas within the referenced catalog.

MySQL

ENV | Example | Description
---|---|---
JDBC_URL | jdbc:mysql://user:password@host:3306/db_name | This connector requires a JDBC URL.

PostgreSQL

ENV | Example | Description
---|---|---
JDBC_URL | jdbc:postgresql://<host>:<port>/<database>?user=<username>&password=<password> | The JDBC URL to connect to the PostgreSQL database.
JDBC_SCHEMAS | public,app | The schemas to use for the database. Optional; this can also be included in the connection string.

Redshift

ENV | Example | Description
---|---|---
JDBC_URL | jdbc:redshift://<host>:<port>/<database>?user=<username>&password=<password> | The JDBC URL to connect to the Amazon Redshift database.
JDBC_SCHEMAS | public,app | The schemas to use for the database.

Snowflake

ENV | Example | Description
---|---|---
JDBC_URL | jdbc:snowflake://<account-identifier>.<region>.snowflakecomputing.com?user=YOUR_USERNAME&password=YOUR_PASSWORD&db=YOUR_DATABASE&warehouse=YOUR_WAREHOUSE&schema=YOUR_SCHEMA&role=YOUR_ROLE | This connector requires a JDBC URL.
Snowflake allows you to set a number of defaults. Your JDBC URL can be minimal (i.e., only including the account identifier, username, and password) or more granular, depending on your settings within Snowflake.
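Whichever connector you choose, the values you enter are typically stored as environment variables in your project's .env file. As a rough sketch, for a hypothetical PostgreSQL connector named my_pg, the entries might look like this (the variable prefix the CLI generates for your connector may differ):

```env
# Hypothetical .env entries for a PostgreSQL connector named my_pg
APP_MY_PG_JDBC_URL=jdbc:postgresql://localhost:5432/mydb?user=me&password=secret
APP_MY_PG_JDBC_SCHEMAS=public,app
```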
If your data source requires a connection string or endpoint, the CLI will confirm that it successfully tested the connection to your source. Additionally, it generates configuration files, which you can find in the connector directory of the subgraph where you added the connector (default: app). Finally, the CLI will create a DataConnectorLink in your subgraph's metadata directory.
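As a rough sketch, the generated DataConnectorLink looks something like the following, assuming a connector named my_pg in the app subgraph; the exact fields and environment variable names are produced by the CLI and may differ:

```yaml
# Simplified sketch of app/metadata/my_pg.hml; names are assumptions
kind: DataConnectorLink
version: v1
definition:
  name: my_pg
  url:
    readWriteUrls:
      readUrl:
        valueFromEnv: APP_MY_PG_READ_URL
      writeUrl:
        valueFromEnv: APP_MY_PG_WRITE_URL
```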
Next steps
Now that you've initialized a connector and connected it to your data, you're ready to introspect the source and populate the configuration files with source-specific information that Hasura will need to build your application. Check out the introspection page to learn more.
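As a preview, introspection is a single CLI call; for example, for a connector named my_pg:

```sh
# Introspect the source and populate the connector's configuration files
ddn connector introspect my_pg
```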