Native Data Connector for PostgreSQL
Introduction
The Native Data Connector for PostgreSQL is our flagship connector, with rich support for all kinds of queries. In the sections below, we'll try to give an overview of the features of the PostgreSQL connector and how to configure it in a Hasura project.
To get started with PostgreSQL and Hasura DDN, check out these docs 🚀
As much as possible, we attempt to provide explicit support in this connector for database projects that identify as being derived from PostgreSQL. This includes various popular hosting options for PostgreSQL proper.
The following variants and hosting offerings enjoy the same level of support as a dedicated installation of the latest stable PostgreSQL release:
- AWS Aurora, distributed PostgreSQL derivative and service offering
- Citus, PostgreSQL extension
- Cockroach, distributed PostgreSQL-compatible service offering
- YugabyteDB, distributed PostgreSQL derivative and service offering
The PostgreSQL data connector makes available any database resource that is listed in its configuration. In order to populate the configuration, the connector supports introspecting the database via the CLI update command; see updating with introspection.
Queryable collections
The PostgreSQL data connector supports several types of queryable collections:
- Tables
- Views
- Native Operations
Tables and views are typically discovered automatically during the introspection process.
Native queries on the other hand are named SQL select statements that you specify in the data connector configuration. They are similar to a database view, but more expressive as they admit parameters similar to a function and do not require any DDL permissions on the database. See the configuration reference on Native Queries.
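As a sketch, a Native Query is just a parameterized SELECT statement. The table and argument names below are illustrative, and the exact argument-placeholder syntax is defined in the Native Queries configuration reference:

```sql
-- Illustrative Native Query: select people matching a name pattern.
-- {{name_pattern}} stands for an argument supplied at request time
-- (placeholder syntax shown here is an assumption; see the reference).
SELECT id, name
FROM person
WHERE name LIKE {{name_pattern}}
```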
Scalar types
The PostgreSQL data connector supports any scalar type that PostgreSQL knows how to encode and decode as JSON; no scalar type is handled as a special case.
Many scalar types have aliases in PostgreSQL, which can be found in the PostgreSQL documentation on data types. The connector exposes only the canonical name of a scalar type (as per `pg_catalog.pg_type(typname)`) in its NDC schema.
In order to use standard GraphQL scalar types rather than custom scalar types in the resulting GraphQL schema, the DDN configuration enables expressing a mapping between data connector types and GraphQL types.
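For instance, such a mapping might look like the metadata fragment below. This is a sketch: the connector name and type names are placeholders, and the exact object shape is defined in the DDN metadata reference.

```yaml
kind: DataConnectorScalarRepresentation
version: v1
definition:
  dataConnectorName: my_postgres # placeholder connector name
  dataConnectorScalarType: int8 # the PostgreSQL type, by canonical name
  representation: Int64 # the GraphQL-facing scalar type
  graphql:
    comparisonExpressionTypeName: Int64ComparisonExp
```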
The data connector doesn't currently support range types, and the introspection process will not include entities of these types.
Manually adding entries for a type to the configuration may work to an extent, but there is no guarantee that it will work as expected.
Structural types
The data connector supports PostgreSQL array and composite types, and they will appear as such in the schema it exposes.
Collections may have columns that are arrays of any supported type, and columns that are of a composite type, and arbitrary combinations thereof.
As an example (irrespective of whether this is a good database design or not) the following may be tracked with full typing fidelity:
```sql
CREATE TYPE address AS (address_lines varchar[], zipcode varchar, country varchar);
CREATE TABLE person(id int primary key, name varchar, address address);
```
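Once tracked, such nested types can be queried directly. The query below is a sketch of the resulting shape; the actual field names depend on how your metadata names the models and fields:

```graphql
query {
  person {
    id
    name
    address {
      address_lines
      zipcode
      country
    }
  }
}
```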
Note that the `json` and `jsonb` types are not considered structural types. They are opaque scalar types that do not admit any schema.
Queries
The connector supports all query operations; see the query documentation for details.
Filtering
The connector supports the introspection and usage of all binary comparison functions and operators defined for any tracked scalar type, built-in and user-defined alike.
However, because PostgreSQL operators don't translate syntactically well to GraphQL, and because not all binary procedures are necessarily appropriate comparison operators, the introspection process needs some guidance in order to map names and select which functions to include. See the configuration reference for Comparison Operators for details.
Mutations
We currently support two types of mutations:
Point mutations
These are single-row insert, delete, and update mutations generated for each database table, and exposed as procedures which can be used as Commands in OpenDD. To expose these, set `mutationsVersion: "v2"` in your configuration.
- An insert procedure is generated per table:

  ```
  v2_insert_<table>(
    objects: [<table-object>],
    post_check: <boolexpr>
  )
  ```

  Using it, you can insert multiple objects and include a post-check for permissions.

- A delete procedure is generated for each unique constraint on each table:

  ```
  v2_delete_<table>_by_<column_and_...>(
    key_<column1>: <value>,
    key_<column2>: <value>,
    ...,
    pre_check: <boolexpr>
  )
  ```

  Using it, you can delete a single row using the uniqueness constraint, and include a pre-check for permissions.

- An update procedure is generated for each unique constraint on each table:

  ```
  v2_update_<table>_by_<column_and_...>(
    key_<column1>: <value>,
    key_<column2>: <value>,
    ...,
    update_columns: { <column>: { _set: <value> }, ... },
    pre_check: <boolexpr>,
    post_check: <boolexpr>
  )
  ```

  Using it, you can update a single row using the uniqueness constraint by updating the relevant columns, and include a pre-check and post-check for permissions.
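Assuming a `person(id, name)` table has been tracked with `mutationsVersion: "v2"`, an insert might look as follows. This is a sketch: the exact field names in your GraphQL schema (including the `affected_rows`/`returning` result shape and the form of `post_check`) depend on your metadata.

```graphql
mutation {
  v2_insert_person(
    objects: [{ id: 1, name: "Alice" }]
    post_check: {}
  ) {
    affected_rows
    returning {
      id
      name
    }
  }
}
```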
The `pre_check` and `post_check` arguments of a mutation can be set in metadata for all queries using an Argument Preset on the mutation's Command Permissions.
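A preset might be sketched as in the fragment below. The command name, role, and expression are placeholders, and the exact shape of the preset value is defined by the Command Permissions reference; treat every field here as illustrative.

```yaml
kind: CommandPermissions
version: v1
definition:
  commandName: v2_insert_person # placeholder command name
  permissions:
    - role: user
      allowExecution: true
      argumentPresets:
        - argument: post_check
          value:
            # An expression of the command's boolean-expression type;
            # see the Command Permissions reference for the exact shape.
            booleanExpression: {}
```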
Native Mutations
You can also write your own SQL queries and expose them via the GraphQL engine using Native Mutations.
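As with Native Queries, a Native Mutation is a parameterized SQL statement. The sketch below uses illustrative table and argument names; the exact argument syntax is defined in the Native Operations reference.

```sql
-- Illustrative Native Mutation: insert a person and return the new row.
-- {{id}} and {{name}} stand for request-time arguments
-- (placeholder syntax shown here is an assumption; see the reference).
INSERT INTO person (id, name)
VALUES ({{id}}, {{name}})
RETURNING id, name
```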
Troubleshooting the PostgreSQL Connector
I received the error message `error communicating with database`
This error message indicates that there was a problem connecting to the database. Some common issues are a typo in the connection string, a firewall blocking access, etc.
If you receive such an error, make sure that the PostgreSQL database is live, and that you can connect to it from your machine using `psql` with the same connection string supplied to your `ndc-postgres` connector.
I received the error message `invalid port number`
If there isn't a problem with the port number itself, this might indicate a problem with the connection string URI in general. One issue in particular might arise from a password containing certain characters, such as '#'. In order to use passwords containing these characters, they need to be escaped. This can be done using the `encodeURIComponent` function in Node.js:
```shell
$ node
Welcome to Node.js v20.10.0.
Type ".help" for more information.
> encodeURIComponent('^password#');
'%5Epassword%23'
```
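The escaped password can then be interpolated into the connection string. The sketch below uses placeholder credentials, host, and database names:

```javascript
// Assemble a PostgreSQL connection string with the password
// percent-encoded. All values here are placeholders.
const user = 'app_user';
const password = '^password#'; // contains characters that must be escaped
const host = 'localhost';
const database = 'mydb';

const connectionString =
  `postgresql://${user}:${encodeURIComponent(password)}@${host}:5432/${database}`;

console.log(connectionString);
// postgresql://app_user:%5Epassword%23@localhost:5432/mydb
```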
I've changed my database schema and would like these changes propagated in Hasura
Running the `ddn connector introspect` command tells the CLI to update the `ndc-postgres` configuration. Note that the `metadata.tables` part of the configuration will be refreshed and overwritten when a change occurs.
I'd like to tweak how the connector works or does database introspection
The connector settings can be modified by editing the connector configuration. For example, you can tweak the Pool Settings, change the Isolation Level of transactions, or choose which schema not to introspect.
See the Configuration Reference for more information.
I'm having issues getting my Native Query to run
There are certain requirements that the SQL of a Native Query must fulfill, such as omitting the semicolon in the SQL, or using arguments only in place of scalar values.
More information can be found in the Native Queries SQL Requirements section.
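As an illustration of those requirements, a well-formed Native Query statement is a single SELECT, without a trailing semicolon, with arguments appearing only where scalar values may appear. The table and argument names below are placeholders:

```sql
-- OK: single SELECT, no trailing semicolon, argument in scalar position.
SELECT id, name
FROM person
WHERE id = {{person_id}}

-- Not OK: a trailing semicolon, or an argument in place of an identifier,
-- e.g.  SELECT * FROM {{table_name}};
```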