Version: v3.x beta

Native Data Connector for PostgreSQL

Introduction

The Native Data Connector for PostgreSQL is our flagship connector, with rich support for all kinds of queries. In the sections below, we'll try to give an overview of the features of the PostgreSQL connector and how to configure it in a Hasura project.

Flavors of PostgreSQL

As much as possible, this connector attempts to provide explicit support for database projects that are derived from, or compatible with, PostgreSQL. This includes various popular hosting options for PostgreSQL itself.

The following variants and hosting offerings enjoy the same level of support as a dedicated installation of the latest stable PostgreSQL release:

  • AWS Aurora, distributed PostgreSQL derivative and service offering
  • Citus, PostgreSQL extension
  • Cockroach, distributed PostgreSQL-compatible service offering
  • YugabyteDB, distributed PostgreSQL derivative and service offering

The PostgreSQL data connector makes available any database resource that is listed in its configuration. In order to populate the configuration, the connector supports introspecting the database via the CLI update command.
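
For example, a minimal introspection run might look like the following. This is a sketch, not a definitive invocation: the ndc-postgres-cli binary name and the CONNECTION_URI environment variable are assumptions based on the ndc-postgres repository, so adjust both to match your setup.

# run from the connector's configuration directory
export CONNECTION_URI='postgresql://user:password@localhost:5432/mydb'
ndc-postgres-cli update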

Queryable collections

The PostgreSQL data connector supports several types of queryable collections:

  • Tables
  • Views
  • Native queries

Tables and views are typically discovered automatically during the introspection process.

Native queries, on the other hand, are named SQL SELECT statements that you specify in the data connector configuration. They are similar to database views, but more expressive: they admit parameters, much like functions, and do not require any DDL permissions on the database. See the configuration reference on Native Queries.
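
For illustration, a native query is little more than a parameterized SELECT. A minimal sketch, using a hypothetical person table and the argument-placeholder syntax described in the Native Queries reference:

-- a named SELECT with a single scalar argument; note the absence of a trailing semicolon
SELECT id, name
FROM person
WHERE id = {{id}}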

Scalar types

The PostgreSQL data connector supports any scalar type that PostgreSQL knows how to encode and decode as JSON; no scalar type is handled as a special case.

Data type names

Many scalar types have aliases in PostgreSQL, which can be found in the PostgreSQL documentation on data types. The connector exposes only the canonical name of a scalar type (as recorded in the typname column of pg_catalog.pg_type) in its NDC schema.
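
For example, the alias integer resolves to the canonical name int4. You can check which name the connector will expose with a standard catalog query:

-- look up the canonical pg_type name behind an alias
SELECT typname FROM pg_catalog.pg_type WHERE oid = 'integer'::regtype;
-- returns: int4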

To use standard GraphQL scalar types rather than custom scalar types in the resulting GraphQL schema, the DDN configuration lets you express a mapping between data connector types and GraphQL types.

Limited support

The data connector doesn't currently support domain types, enum types, or range types, and the introspection process will not include entities of these types.

Manually adding entities of these types to the configuration may work to an extent, but there is no guarantee that, e.g., domain constraints will be checked.

Structural types

The data connector supports PostgreSQL array and composite types, and they will appear as such in the schema it exposes.

Collections may have columns that are arrays of any supported type, columns of a composite type, and arbitrary combinations thereof (arrays of composites, composites containing arrays, and so on).

As an example (irrespective of whether this is good database design or not), the following may be tracked with full typing fidelity:

CREATE TYPE address AS (address_lines varchar[], zipcode varchar, country varchar);
CREATE TABLE person(id int primary key, name varchar, address address);
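
For instance (the data is purely illustrative), rows of this shape keep their nested structure when written and read through standard SQL:

-- composite values are written with a ROW(...) cast and read with (column).field syntax
INSERT INTO person
VALUES (1, 'Alice', ROW(ARRAY['1 Main St', 'Apt 2'], '12345', 'US')::address);

SELECT name, (address).zipcode, ((address).address_lines)[1] FROM person;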

Note that the json and jsonb types are not considered structural types. They are opaque scalar types that do not admit any schema.

Limited support

The data connector doesn't currently support introspecting composite types, apart from those defined implicitly by a table's row type. Custom-defined composite types need to be added to the configuration manually in order to be usable.

Queries

The connector supports all query operations; see the query documentation for details.

Filtering

The connector supports the introspection and usage of all binary comparison functions and operators defined for any tracked scalar type, built-in and user-defined alike.

However, because PostgreSQL operators don't translate syntactically well to GraphQL, and because not all binary functions are necessarily appropriate comparison operators, the introspection process needs some guidance in order to map names and select which functions to include. See the configuration reference for Comparison Operators for details.

Mutations

Mutations are still undergoing active development and are likely to change in the future. See mutationsVersion.

Troubleshooting the PostgreSQL Connector

I received the error message "error communicating with database"

This error message indicates that there was a problem connecting to the database. Common causes include a typo in the connection string or a firewall blocking access.

If you receive such an error, make sure that the PostgreSQL database is live and that you can connect to it from your machine with psql, using the same connection string supplied to your ndc-postgres connector.
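
For example, the following connectivity check uses nothing but plain psql; substitute the connection string your connector is configured with:

# if this fails, the problem lies with the database or network, not the connector
psql 'postgresql://user:password@host:5432/dbname' -c 'SELECT 1'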

I received the error message "invalid port number"

If there isn't a problem with the port number itself, this might indicate a problem with the connection string URI in general. One issue in particular might arise from a password containing certain characters, such as '#'. In order to use passwords containing these characters, they need to be percent-encoded. This can be done using the encodeURIComponent function in Node.js:

$ node
Welcome to Node.js v20.10.0.
Type ".help" for more information.
> encodeURIComponent('^password#');
'%5Epassword%23'

I've changed my database schema and would like these changes propagated in Hasura

Running the ddn dev command tells the CLI to watch for local development changes and update the ndc-postgres configuration when changes occur. Note that the metadata.tables part of the configuration will be refreshed and overwritten whenever a change occurs.

I'd like to tweak how the connector works or does database introspection

The connector settings can be modified by editing the connector configuration. For example, you can tweak the Pool Settings, change the Isolation Level of transactions, or choose which schemas not to introspect.

Visit the Configuration page for more information.

I'm having issues getting my Native Query to run

There are certain requirements that the SQL of a Native Query must fulfill, such as omitting the trailing semicolon, or using arguments only in place of scalar values.
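
As a quick sanity check, compare the following two statements (table and argument names are illustrative):

-- OK: no trailing semicolon, and the argument sits in a scalar position
SELECT id, name FROM person WHERE id = {{id}}

-- not OK: trailing semicolon, and the argument is used as a table name
SELECT id, name FROM {{table}};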

More information can be found in the Native Queries SQL Requirements section.
