
How it works

What is cached exactly?

Hasura Caching is a form of response caching: it caches the result of a given query. Hasura caches the response of a given "unique" query, and the cache is only hit if the same query is made again (i.e., it has the same cache key). How the cache key is computed is explained in the next section.

This means that there is no sharing of the cache if:

  • the session variables differ, even if the GraphQL query, variables, and operation name are all the same
  • the GraphQL variables differ, even if the GraphQL query is the same
  • the operation name is different, even if the GraphQL query is the same
  • the GraphQL query differs only in one or a few fields

How is the cache key computed?

If the @cached directive is used in a GraphQL operation, Hasura computes a cache key. This is then used to look up and store values in the cache.

The cache key is a hash of:

  • the GraphQL query
  • the operation name
  • the GraphQL variables
  • the role and session variables
  • request headers (for Remote Schemas and Actions that forward client headers)

If the computed cache key is found in the cache, then there is a cache hit.

TTL matters

Note that a cache hit also depends on the TTL, not just the cache key. See the Cache Invalidation section below for more about cache TTL.

GraphQL query

This includes the entire GraphQL query text. Any difference in the GraphQL query text is considered a different query. Only whitespace is ignored.

For example, all the following queries are considered different:

query MyCachedQuery @cached {
  users {
    id
    name
  }
}

query MyCachedQuery @cached {
  users {
    name
    id
  }
}

query MyCachedQuery @cached {
  users {
    id
    name
    created_at
  }
}

If the order of objects inside an input argument is changed, the query is still considered different:

query MyCachedQuery @cached {
  profile(where: { _and: [{ id: { _gt: 1 } }, { name: { _ilike: "%x%" } }] }) {
    id
    name
  }
}

query MyCachedQuery @cached {
  profile(where: { _and: [{ name: { _ilike: "%x%" } }, { id: { _gt: 1 } }] }) {
    id
    name
  }
}

Only whitespace is ignored, so the following queries are considered the same:

query MyCachedQuery @cached {
  users {
    name
    id
  }
}

query MyCachedQuery @cached {
  users { name id }
}

Operation name

In the following example, the operation name is MyCachedQuery:

query MyCachedQuery @cached {
  users {
    id
    name
  }
}

If we use the same operation name but change the query, it is considered a different query:

query MyCachedQuery @cached {
  users {
    id
  }
}
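
Conversely, keeping the query body identical but changing only the operation name also produces a different cache key. For example, the following two operations (MyOtherCachedQuery is just an illustrative name) are considered different queries:

query MyCachedQuery @cached {
  users {
    id
  }
}

query MyOtherCachedQuery @cached {
  users {
    id
  }
}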

GraphQL variables

The following example declares and uses two variables called minDate and maxDate. When executing the operation, you pass the actual values of these variables.

If these variable values differ across requests, the queries are considered different:

query getNewlyJoinedUsers($minDate: timestamptz!, $maxDate: timestamptz!) @cached {
  users(where: { _and: [{ created_at: { _gt: $minDate } }, { created_at: { _lt: $maxDate } }] }) {
    id
    name
  }
}
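
For instance, assuming hypothetical date ranges, sending the operation above with each of the following variable payloads results in two distinct cache entries, because the variable values differ:

{ "minDate": "2023-01-01T00:00:00Z", "maxDate": "2023-01-31T00:00:00Z" }
{ "minDate": "2023-02-01T00:00:00Z", "maxDate": "2023-02-28T00:00:00Z" }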

Role and session variables

Hasura resolves session variables via the authentication process. The role and session variables are used to compute the cache key.

A session variable will only influence the cache key for a query if it is referenced by the execution plan. In practice, this means that session variables are only factored into cache keys if they are referenced in the permissions for a query.

For example, if a JWT resolves to, say, the x-hasura-user-id and x-hasura-org-id session variables, but the query's permissions only reference x-hasura-user-id, then only the role and x-hasura-user-id are used to compute the cache key.
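
As a sketch of this behavior, assume a hypothetical orders table whose select permission for the user role filters rows by x-hasura-user-id. Two users issuing the identical query below would then get separate cache entries, because the session variable referenced by the permission is part of the cache key:

query MyOrders @cached {
  # cached separately per x-hasura-user-id, since the permission references it
  orders {
    id
    total
  }
}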

Request headers

When executing Remote Schema or Action queries that have forward_client_headers set to true, request headers (except the x-request-id header) are included in the cache key computation.

Cache Invalidation

Cache invalidation in Hasura is based on TTLs. Hasura doesn't support any other "automatic" way to invalidate the cache (for example, on specific mutations). However, there are API endpoints to clear the cache manually.

The default TTL is 60 seconds. This can be increased via the ttl argument of the @cached directive.
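
For example, the following query (with an illustrative TTL of 120 seconds) keeps its cached response for up to two minutes:

query MyCachedQuery @cached(ttl: 120) {
  # ttl is specified in seconds
  users {
    id
    name
  }
}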

Rate Limiting

Cache writes are rate limited, with a rate depending on your plan. The rate limit is based on a leaky bucket algorithm. If you exceed the rate limit, the HTTP response will indicate this with a warning header:

 Warning: 199 - cache-store-capacity-exceeded