A guide to building a unified data access layer on domain REST APIs

A step-by-step guide to integrating and composing data from different domain APIs using Hasura Data Delivery Network (DDN).

Background

If you haven’t read the previous post in this series, please do so – it will set the motivation for what we are about to do and shed some light on the design choices we’ll make in this tutorial. We compared and contrasted data sources like databases, REST APIs with/without OpenAPI spec, RPCs, etc. We evaluated them for efficiency and effectiveness when connecting them to a unified data access platform. It turns out that REST APIs are sub-optimal for this purpose and, in an ideal world, building a platform on a set of domain APIs would be an anti-pattern!

But we must contend with the reality of engineering team structures, incentives, and the limited agency that platform developers and architects sometimes have in choosing the right sources (databases >> REST APIs). So here we are, trying to do the best with what we have and looking for an efficient way to build a platform on a set of domain REST APIs. Let's do exactly that.

We’ll pick the relatively harder challenge of working with REST APIs without an OpenAPI specification. More accurately, we’ll ignore the specification in our sample APIs for now. Once you’re done with this guide, you can check out how the OpenAPI Lambda Connector allows you to bulk import APIs that are documented in the OpenAPI/Swagger format into the Hasura DDN supergraph by automating some of the steps in this guide.

Our goals for this tutorial are to connect two REST endpoints to a supergraph and to integrate the data from these endpoints by defining a relationship between the types returned by the endpoints. If we succeed, a single call to the resultant supergraph will fetch data from these two endpoints in a meaningful way.

Prerequisites

You’ll need the following resources to follow this tutorial:

  1. Domain services: You can easily customize this tutorial to use any REST endpoints; however, we’ll use this sample GitHub repo to quickly bring up a set of APIs on our local machine in a couple of easy steps. It is highly recommended that you do the same for the first iteration, and then customize as needed.
  2. Hasura DDN prerequisites: Follow the three steps in the pre-requisites section of this docs page. This will ensure you have Docker and the latest Hasura DDN CLI installed and ready to help you build your new supergraph.

Instructions

Bring up the domain services: Follow this sample repo’s README to get the following APIs up and running:

  1. /author takes a mandatory authorid URL param to fetch an author object (id and name).
  2. /author-article takes a mandatory authorid URL param to fetch one article written by this author as an article object (id, title, and author_id).

While not representative of the complexity and richness of a set of real-world domain services, these two simple endpoints are sufficient to achieve the goals of this exercise. In a similar vein, these two APIs are part of the same service but they needn’t be – think of them as domain APIs managed by two different teams.
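
For reference, the endpoint descriptions above imply payload shapes along these lines. This is an illustrative TypeScript sketch only – the field names are taken from the descriptions, and the sample repo’s actual responses are authoritative:

// Payload shapes implied by the endpoint descriptions (illustrative only).
interface Author {
  id: number;
  name: string;
}

interface Article {
  id: number;
  title: string;
  author_id: number; // references Author.id
}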

Authentication with DDN Cloud: Run the following command in a terminal to authenticate yourself with Hasura DDN Cloud. This is a housekeeping step but it will eventually help us create a copy of our work in the cloud and invite supergraph collaborators.

ddn auth login

Initialize the supergraph: Run the following commands to create a new directory called “mysupergraph” and initialize it with Hasura DDN supergraph scaffolding:

ddn supergraph init mysupergraph
cd mysupergraph

Initialize a connector: A connector is a Hasura DDN component that serves as a gateway to an underlying source, transforming the source schema into common metadata. For example, the connectors for databases like PostgreSQL, SQL Server, etc. convert the underlying relational data model into source-agnostic metadata. Similarly, there are dedicated connectors for gRPC and the OpenAPI specification, and even one for Google Calendar. As you can see, a connector can be built for any underlying schema.

For everything else, there are language lambda connectors that help do the same with some code. We use code – usually functions, hence the name lambda – to call the unstructured source, and in the process use the language runtime (and some helpful annotations) to infer the types of the response objects. We do this because the supergraph is a unified semantic layer that needs to know about the data it’s working with.

Our REST endpoints don’t have a schema like a Swagger (OpenAPI) spec to work with, so we need one of these language lambda connectors. In this tutorial, we’ll use the TypeScript connector. If you’re uncomfortable with TypeScript, don’t worry – ready-to-use code will be supplied shortly, and you can always switch to another language connector later. We’ll select the “hasura/nodejs” connector in response to the command below, and go with the default connector name (mynodejs) and port configuration:

ddn connector init -i

If you look inside the supergraph project directory, you’ll see a new functions.ts file under the app/connector/mynodejs folder. This is where our functions will go.
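
Before copying in the supplied code, it may help to see the shape of such a function. Below is a minimal, illustrative sketch – the base URL and error handling are assumptions, and the actual functions.ts supplied in the next step is authoritative. The @readonly JSDoc tag is what the NodeJS Lambda connector uses to expose a function as a query rather than a mutation:

// Illustrative sketch only – the sample repo's functions.ts is authoritative.
interface Author {
  id: number;
  name: string;
}

/**
 * Fetches a single author from the /author endpoint.
 *
 * @readonly Exposes this as a query (function) rather than a mutation (procedure).
 */
export async function getAuthor(authorId: number): Promise<Author> {
  // Assumed base URL – point this at wherever your sample service is running.
  const response = await fetch(`http://localhost:3000/author?authorid=${authorId}`);
  if (!response.ok) {
    throw new Error(`Failed to fetch author ${authorId}: HTTP ${response.status}`);
  }
  return (await response.json()) as Author;
}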

  1. TypeScript functions: You can find sample TypeScript code here (along with, for reference, the rest of the Hasura DDN configuration for this tutorial). For now, we won’t get into much detail about the annotations or the code itself. You should also know that all these functions were generated using AI. Copy the contents of the functions.ts file into your own file. The AI was also kind enough to supply a test file to verify the functions, so feel free to copy the test.ts file too if you’d like.
  2. Introspect REST APIs: We’re ready to gather all the information we need from the underlying source (our two REST endpoints via the TS functions) to be able to generate Hasura’s source-agnostic metadata. Run the following introspection command and notice the changes in the mysupergraph/app/metadata/mynodejs.hml file. If all goes well, you’ll see the following CLI hints:
ddn connector introspect mynodejs

..
...
HINT Add all Models from the introspected schema: ddn model add mynodejs "*"
HINT Add all Commands (includes Mutations) from the introspected schema: ddn command add mynodejs "*"
HINT Add all Relationships from the introspected schema: ddn relationship add mynodejs "*"
...
..

Generate Hasura metadata: Language connectors typically convert unstructured data calls into commands, and because there’s no schema, they can’t infer any data relationships automatically. So our first step is to generate metadata for the commands by adding the introspected ones. To add the two REST endpoints as commands, we’ll run the following:

ddn command add mynodejs "*"

Notice the new Hasura metadata files (.hml) in the metadata directory corresponding to the newly added commands.
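
For reference, one of the generated command files should look roughly like the abridged sketch below. Your generated metadata is authoritative, and the exact names and argument types may differ:

kind: Command
version: v1
definition:
  name: GetAuthor
  outputType: Author!
  arguments:
    - name: authorId
      type: Float!
  source:
    dataConnectorName: mynodejs
    dataConnectorCommand:
      function: getAuthor
  graphql:
    rootFieldName: getAuthor
    rootFieldKind: Query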

Build and run the supergraph: We are good to test our supergraph now. We have done enough to call our REST endpoints from a unified interface (which happens to be GraphQL; you can also choose a REST interface for your supergraph). Run the following command to create a local build artifact that we’ll subsequently run using Docker:

ddn supergraph build local

In a separate terminal window (from the same folder), run the following to start the supergraph services:

ddn run docker-start

In the original terminal window or another new one, run the following command to open the local Hasura DDN console:

ddn console --local

Testing the “gateway” supergraph: At this stage, our supergraph is a unified gateway to the underlying sources. It exposes the REST endpoints as part of a unified GraphQL API/schema and can do the same with its own REST interface. We should be able to call our two REST endpoints in a single GraphQL query such as the following:

query AuthorsAndArticlesSeparately {
  getAuthor(authorId: 1) {
    id
    name
  }
  getAuthorArticle(authorId: 2) {
    id
    title
    authorId
  }
}

After running the query, click on the “Trace” button to see Hasura DDN’s query planner in action, although it hasn’t had to work very hard just yet. This should give you a sense of the orchestration that DDN takes care of behind the scenes.

API integration and composition: To make the supergraph more useful to consumers of your API, we need to define the relationships between the types/data returned by the source endpoints. Thankfully, we can do this declaratively and won’t have to write any glue code or resolvers – Hasura DDN already understands the data types returned by the individual endpoints, knows how to call the endpoints, and understands how to optimize these calls depending on the source. Unfortunately, we won’t see much of that optimization in action in this tutorial, given the nature of the sources we’re dealing with; however, check out the docs on Hasura DDN’s query planning capabilities.

In our example, each article (the Article type) has an author, as indicated by the authorId field. Therefore, there’s a 1:1 relationship between the article and the author, and these two types are returned by two different REST endpoints. To define this relationship, we head to the Hasura DDN metadata for the GetAuthorArticle command/endpoint, i.e., the app/metadata/GetAuthorArticle.hml file. At the very end of the file (after the CommandPermissions section), add the following metadata snippet:

---
kind: Relationship
version: v1
definition:
  name: articleAuthor
  sourceType: Article
  target:
    command: 
      name: GetAuthor
  mapping:
    - source:
        fieldPath:
          - fieldName: authorId
      target:
        argument:
          argumentName: authorId

This metadata defines a relationship, named articleAuthor, between the Article type and the GetAuthor command by specifying that the value of the authorId field in the Article type can be used as the argument (also named authorId) when invoking the GetAuthor command.

A quick aside for your future iterations – if you use VS Code, don’t forget to install the Hasura VS Code extension, which can help you auto-complete and validate such metadata definitions.

As our metadata has been modified, we must build our supergraph again and run the resultant artifacts. So back to the build and run commands:

ddn supergraph build local

In a separate terminal window (from the same folder), run the following to start the supergraph services:

ddn run docker-start

In the original terminal window or another new one, run the following command to open the local Hasura DDN console:

ddn console --local

Unified data access layer on domain services: Run the following GraphQL query to see your data access platform in action:

query ArticleAndItsAuthor {
  getAuthorArticle(authorId: 2) {
    id
    title
    articleAuthor {
      id
      name
    }
  }
}

As the name of the query implies, we have fetched related data from two different sources in a single query, thus achieving this tutorial’s goals. Check out the trace information to see what Hasura DDN does under the hood.

Deploying your supergraph to Hasura Cloud: We have everything working on our local machine. Now, you might feel the urge to shout from the rooftops about metadata-driven supergraphs. Or just save your work and share it with your coworkers and friends. To do this, we initialize a project and create a (non-local) cloud build by running the following commands:

ddn project init

ddn supergraph build create

The second command here will give you the details of your free Hasura DDN project and how to access its console. Please note that to make the project API work, you’ll need to host the mock APIs somewhere publicly accessible.

Next steps

In the first post of this series, we looked at why you might want to build a supergraph on a set of domain databases instead of domain services. Still, we were also conscious of the organizational design realities that make this difficult to kick off.

Hopefully, this tutorial helps you prove out the value of building a unified data access layer over disparate domain services using a metadata-driven approach, and build on that success to negotiate access to the underlying databases.

If you need more help building a unified data access layer, please do connect with us. We’re happy to assist you!
