Automating the logistics of developing with Hasura Self-Hosted

16 May, 2023 | 4 min read

About Equi

Equi helps investors unlock generational wealth by providing access to investments normally reserved for institutions and the ultra-wealthy. We evaluate thousands of investments to provide our customers with one diversified, managed portfolio of alternative asset investments (things that are not stocks and bonds). Best of all, we are powered by an intuitive, easy-to-use software platform that our customers can use anywhere. Learn more at www.equi.com.

Why Equi Chose Hasura

The Equi engineering team prides itself on being nimble and efficient. We’re a small team aiming for top 1% in productivity. That’s why, when it came time to architect our core backend service, we chose Hasura. Hasura helps us write dramatically fewer backend controllers, enforces row-level permissions, and provides a management UI that makes day-to-day operations a breeze.

Hasura’s self-hosted version (the graphql-engine container) is incredibly powerful. You can run it on any infrastructure and get a powerful API generator that can either be a gateway to other services in your stack or sit alongside your services as part of a larger ecosystem. Updating Hasura’s configuration (which Hasura calls metadata) is easy with the web console, and the Hasura CLI can export all the metadata as a set of files you can add to source control.

With multiple developers changing this configuration, however, even a small team can run into accidental omissions or errors.

In this blog post, we'll explore ways to automate these processes to create a more streamlined and consistent development experience when working with Hasura.

The Scenario

You have a local development environment that uses Hasura and at least one remote schema (i.e. an application that provides data or actions to your graph). For the purposes of this walkthrough we assume you're using Docker Compose to orchestrate these containers. This setup allows for a consistent and isolated development environment that's easy to replicate across different stages of the project.
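For concreteness, a pared-down docker-compose.yml for this scenario might look like the sketch below. The service names, image tags, and the remote-schema service are illustrative, not prescriptive; adapt them to your own stack.

```yaml
version: '3.8'
services:
  postgres:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: postgrespassword

  hasura:
    image: hasura/graphql-engine:v2.13.0
    ports:
      - '8080:8080'
    depends_on:
      - postgres
    environment:
      HASURA_GRAPHQL_DATABASE_URL: postgres://postgres:postgrespassword@postgres:5432/postgres
      HASURA_GRAPHQL_ADMIN_SECRET: LOCAL_HASURA_ADMIN_SECRET
      HASURA_GRAPHQL_ENABLE_CONSOLE: 'true'

  # A remote schema: any GraphQL server that contributes data or actions to the graph
  remote-schema:
    build: ./packages/remote-schema
    ports:
      - '4000:4000'
```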

The Hasurawatcher Sidecar Container

The first automation we'll implement is a sidecar container that imports metadata and reloads remote schemas on boot. This container will run a simple script that waits for Hasura to load, then executes the necessary import commands. By doing so, we'll ensure that metadata and remote schemas are always up-to-date when your local instance starts.  

CODE: watchconfig.sh (note: this script assumes your Hasura metadata files are mounted inside the Docker container at /hasuraconfig/, as in the docker-compose example below, and that you substitute your actual local admin secret for LOCAL_HASURA_ADMIN_SECRET).

#!/bin/sh


# This script is intended to be run inside the 'hasurawatcher' docker container


# Give hasura itself some time to load
echo "waiting for hasura to load..."
yarn wait-on -l http-get://hasura:8080/console


echo "hasura loaded, beginning initial import"


./node_modules/hasura-cli/hasura metadata apply --admin-secret LOCAL_HASURA_ADMIN_SECRET --skip-update-check --endpoint http://hasura:8080/ --project /hasuraconfig/
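If you would rather not depend on Node's wait-on package for the health check, a plain-curl equivalent might look like the following sketch. `wait_for_url` is a made-up helper name; /healthz is Hasura's built-in health endpoint, and the retry count and sleep interval are arbitrary.

```shell
#!/bin/sh
# Hypothetical curl-based alternative to `yarn wait-on`.
# Polls a URL until it responds successfully or the retry budget runs out.

wait_for_url() {
  url="$1"
  retries="${2:-30}"
  i=0
  until curl -fsS "$url" > /dev/null 2>&1; do
    i=$((i + 1))
    if [ "$i" -ge "$retries" ]; then
      echo "gave up waiting for $url" >&2
      return 1
    fi
    sleep 2
  done
}

# Usage (inside the watcher container):
# wait_for_url http://hasura:8080/healthz
```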

CODE: docker-compose.yml

hasurawatcher:
  container_name: hasura_watcher
  stop_signal: SIGKILL
  depends_on:
    hasura:
      condition: service_started
  build:
    context: .
    target: hasurawatcher
    dockerfile: ./Dockerfile
  volumes:
    - './packages/hasura/:/hasuraconfig/:Z'
  networks:
    - equi-local-network
  extra_hosts:
    - 'host.docker.internal:host-gateway'
  command: sh watchconfig.sh
  restart: unless-stopped
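If you would rather have the watcher start only once Hasura is actually accepting requests (instead of merely having started), one option is a Compose healthcheck against Hasura's built-in /healthz endpoint, paired with a service_healthy condition. This sketch assumes curl is available inside the graphql-engine image you use; if it isn't, the wait-on step in watchconfig.sh already covers this case.

```yaml
hasura:
  # ...existing configuration...
  healthcheck:
    test: ['CMD', 'curl', '-f', 'http://localhost:8080/healthz']
    interval: 5s
    timeout: 3s
    retries: 12

hasurawatcher:
  # ...existing configuration...
  depends_on:
    hasura:
      condition: service_healthy
```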

CODE: Dockerfile

FROM node:18 AS hasurawatcher

# watchconfig.sh is assumed to live next to package.json in packages/hasura/
COPY packages/hasura/package.json packages/hasura/watchconfig.sh ./
RUN apt-get update && apt-get install -y inotify-tools
RUN yarn

packages/hasura/package.json: (make sure you replace <LOCAL_HASURA_ADMIN_SECRET> with your actual local hasura admin secret)

{
  "name": "hasura-local-tools",
  "version": "0.0.1",
  "description": "",
  "author": "",
  "private": true,
  "license": "UNLICENSED",
  "scripts": {
    "hasura": "./node_modules/hasura-cli/hasura --skip-update-check",
    "reload:local": "yarn hasura metadata reload --admin-secret <LOCAL_HASURA_ADMIN_SECRET>",
    "export:local": "./node_modules/hasura-cli/hasura metadata export --admin-secret <LOCAL_HASURA_ADMIN_SECRET> --skip-update-check",
    "import:local": "./node_modules/hasura-cli/hasura metadata apply --admin-secret <LOCAL_HASURA_ADMIN_SECRET> --skip-update-check"
  },
  "devDependencies": {
    "hasura-cli": "^2.13.0",
    "js-yaml": "^4.1.0",
    "wait-on": "^6.0.1"
  }
}

Ensuring metadata changes from remote are imported

To keep your local environment in sync with remote changes, we'll add an inotify script to our sidecar container. This script will continuously monitor the metadata directory for changes and automatically apply them to your local Hasura instance. This ensures that any updates made by a fellow developer are automatically propagated to your local environment after you git pull without any manual intervention.

echo "Initial import complete, watching for local changes to hasura config..."
# Watch for changes forever and import
# (git pulls often replace files rather than modify them in place,
# so watch for create/move/delete events as well as modifications)
while inotifywait -q -r -e modify,create,delete,move /hasuraconfig/; do
   echo "Saw a change in hasura config! Re-applying..."
   ls /hasuraconfig/  # list the metadata files (useful for debugging)
   ./node_modules/hasura-cli/hasura metadata apply --admin-secret LOCAL_HASURA_ADMIN_SECRET --skip-update-check --endpoint http://hasura:8080/ --project /hasuraconfig/
done
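One refinement worth considering: inotifywait fires once per file event, so a git pull that touches many metadata files can trigger several back-to-back applies. A hypothetical guard is to hash the directory contents and only re-apply when something actually changed. The names here are illustrative, and the sketch assumes find, sort, and md5sum exist in the container (they do in the node:18 image).

```shell
#!/bin/sh
# Hypothetical refinement of the watch loop above: skip redundant re-applies
# by comparing a content hash of the metadata directory between events.

config_hash() {
  # Order-independent hash of every file's contents under the given directory
  find "$1" -type f -exec md5sum {} + | sort | md5sum | cut -d ' ' -f 1
}

# Run the loop only when invoked as `sh thisscript.sh watch`, so the helper
# above can also be sourced and reused on its own.
if [ "${1:-}" = "watch" ]; then
  last_hash=""
  while inotifywait -q -e modify -r /hasuraconfig/; do
    current_hash="$(config_hash /hasuraconfig/)"
    if [ "$current_hash" != "$last_hash" ]; then
      echo "Hasura config content changed, re-applying..."
      ./node_modules/hasura-cli/hasura metadata apply --admin-secret LOCAL_HASURA_ADMIN_SECRET --skip-update-check --endpoint http://hasura:8080/ --project /hasuraconfig/
      last_hash="$current_hash"
    fi
  done
fi
```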

Making it easier to export and commit metadata changes

To make it simple for developers to export and commit metadata changes, we'll provide a script with local authentication that exports metadata to the appropriate location. This script can be easily executed whenever a developer makes a change in the Hasura console, ensuring that all changes are saved to version control. This script is also referenced in the package.json file above.

"export:local": "./node_modules/hasura-cli/hasura metadata export --admin-secret LOCAL_HASURA_ADMIN_SECRET --skip-update-check",

Finally, write up some instructions for your team so they know how this setup works.  For example:

README.MD

Our team has implemented a process (called hasurawatcher) that monitors your local file system and automatically imports any changes to Hasura metadata into your local dockerized Hasura instance. All you have to do to import a coworker's changes is pull them down with your source control system; they will be imported automatically.
Committing Local Hasura Metadata Changes
Whenever you make changes to Hasura metadata (e.g. permissions, relationships, remote schema configuration), run `yarn export:local`, which exports all the metadata to your local file system, ready for a git add / commit.

Conclusion

By implementing these automations in your Hasura development workflow, you'll make it vastly simpler for your developers to keep their Hasura instance in sync with other developers, saving time and frustration on a regular basis.

With metadata and remote schemas consistently synchronized across your local instance, other running services, the file system, and higher environments, you can focus on building powerful applications with confidence.

Happy developing!

About the author

Zach Goldberg is a seasoned tech entrepreneur, innovation enthusiast, and author with a "founder's mentality" who believes that engineering software should be more science than art. A passionate learner, leader, and executive, Zach is one of the rare CTOs who has cultivated extensive, nuanced, and proven expertise that spans industries while serving numerous startups and scaled organizations, including Microsoft, Google, Dama Financial, Savvy Insurance, and Lottery.com. He is currently at Equi, an investment firm.

