Version: v2.x

Hasura GraphQL Engine Logs

Accessing logs

Based on your deployment method, the Hasura GraphQL Engine logs can be accessed as follows:

Detailed Logging in Hasura Cloud

If you’re looking for more in-depth logging information, along with a Console for browsing logs, please see Observability with Hasura Cloud.

Different log-types

The Hasura GraphQL Engine has different log-types depending on the sub-system or layer. A log-type is simply the type field in a log line, which indicates which sub-system the log comes from.

For example, the HTTP webserver logs incoming requests as an access log under the type http-log. Similarly, logs from the websocket layer are called websocket-log, logs from the Event Trigger system are called event-trigger, etc.

You can configure the GraphQL Engine to enable/disable certain log-types using the --enabled-log-types flag or the HASURA_GRAPHQL_ENABLED_LOG_TYPES env var. See the GraphQL Engine server config reference.

Default enabled log-types are: startup, http-log, webhook-log, websocket-log, jwk-refresh-log

Configurable log-types

All the log-types that can be enabled/disabled are:

| Log type | Description | Log Level |
| --- | --- | --- |
| startup | Information that is logged during startup | info |
| query-log | Logs the entire GraphQL query with variables, the generated SQL statements (only for database queries, not for mutations/subscriptions or Remote Schema and Action queries), and the operation name (if provided in the GraphQL request) | info |
| execution-log | Logs data-source-specific information generated during the request | info |
| http-log | HTTP access and error logs at the webserver layer (handling GraphQL and Metadata requests) | info and error |
| websocket-log | Websocket events and error logs at the websocket server layer (handling GraphQL requests) | info and error |
| webhook-log | Logs responses and errors from the authorization webhook (if set up) | info and error |
| jwk-refresh-log | Logs information and errors about periodic refreshing of JWK | info and error |

Internal log-types

Apart from the above, there are other internal log-types which cannot be configured:

| Log type | Description | Log Level |
| --- | --- | --- |
| pg-client | Logs from the Postgres client library | warn |
| metadata | Logs inconsistent Metadata items | warn |
| telemetry-log | Logs errors (if any) while sending out telemetry data | info |
| event-trigger | Logs HTTP responses from the webhook, HTTP exceptions and internal errors | info and error |
| ws-server | Debug logs from the websocket server, mostly used internally for debugging | debug |
| schema-sync-thread | Logs internal events, when it detects the schema has changed on Postgres and when it reloads the schema | info and error |
| health-check-log | Logs source Health Check events, which include the health status of a data source | info and warn |
| event-trigger-process | Logs the statistics of events fetched from the database for each source | info and warn |
| scheduled-trigger-process | Logs the statistics of scheduled and cron events fetched from metadata storage | info and warn |
| cron-event-generator-process | Logs the cron triggers fetched from metadata storage for event generation | info and warn |

Logging levels

You can set the desired logging level on the server using the --log-level flag or the HASURA_GRAPHQL_LOG_LEVEL env var. See the GraphQL Engine server config reference.

The default log-level is info.

Setting a log-level will print all logs of that severity and above. The severity hierarchy, from lowest to highest, is: debug < info < warn < error.

For example, setting --log-level=warn will enable only warn and error level logs. So even if you have enabled query-log, it won't be printed, because the level of query-log is info.

See log-types for more details on log-level of each log-type.
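The same filtering rule can be mimicked when post-processing a captured log stream. A minimal sketch in Python (the keep helper and severity map are illustrative, not part of any Hasura tooling):

```python
import json

# Severity order, lowest to highest: debug < info < warn < error.
SEVERITY = {"debug": 0, "info": 1, "warn": 2, "error": 3}

def keep(line: str, configured_level: str) -> bool:
    """Return True if a log line would be emitted at the configured log-level."""
    entry = json.loads(line)
    return SEVERITY[entry["level"]] >= SEVERITY[configured_level]

line = '{"level": "info", "type": "query-log", "detail": {}}'
keep(line, "warn")  # False: query-log entries are info-level
keep(line, "info")  # True
```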

Log structure and metrics

All requests are identified by a request id. If the client sends an x-request-id header then that is used, otherwise a request id is generated for each request. It is also sent back to the client as a response header (x-request-id), which is useful to correlate logs between the server and the client.
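Because the request id appears under the detail key of every request-scoped log line, correlating the server logs for one request is a simple grouping step. A hypothetical sketch, assuming each input line is one JSON log entry:

```python
import json
from collections import defaultdict

def group_by_request_id(lines):
    """Group log entries by detail.request_id to see all log-types for one request."""
    groups = defaultdict(list)
    for line in lines:
        entry = json.loads(line)
        rid = entry.get("detail", {}).get("request_id")
        if rid is not None:
            groups[rid].append(entry["type"])
    return dict(groups)

lines = [
    '{"type": "query-log", "detail": {"request_id": "abc"}}',
    '{"type": "http-log", "detail": {"request_id": "abc"}}',
]
group_by_request_id(lines)  # {'abc': ['query-log', 'http-log']}
```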

query-log structure

On enabling verbose logging, i.e. enabling query-log, GraphQL Engine will log the full GraphQL query object on each request.

It will also log the generated SQL for GraphQL queries (but not mutations and subscriptions).

{
  "timestamp": "2019-06-03T13:25:10.915+0530",
  "level": "info",
  "type": "query-log",
  "detail": {
    "kind": "database",
    "request_id": "840f952d-c489-4d21-a87a-cc23ad17926a",
    "query": {
      "variables": {
        "limit": 10
      },
      "operationName": "getProfile",
      "query": "query getProfile($limit: Int!) {\n profile(limit: $limit, where: {username: {_like: \"%a%\"}}) {\n username\n }\n myusername: profile (where: {username: {_eq: \"foobar\"}}) {\n username\n }\n}\n"
    },
    "generated_sql": {
      "profile": {
        "prepared_arguments": ["{\"x-hasura-role\":\"admin\"}", "%a%"],
        "query": "SELECT coalesce(json_agg(\"root\" ), '[]' ) AS \"root\" FROM (SELECT row_to_json((SELECT \"_1_e\" FROM (SELECT \"_0_root.base\".\"username\" AS \"username\" ) AS \"_1_e\" ) ) AS \"root\" FROM (SELECT * FROM \"public\".\"profile\" WHERE ((\"public\".\"profile\".\"username\") LIKE ($2)) ) AS \"_0_root.base\" LIMIT 10 ) AS \"_2_root\" "
      },
      "myusername": {
        "prepared_arguments": ["{\"x-hasura-role\":\"admin\"}", "foobar"],
        "query": "SELECT coalesce(json_agg(\"root\" ), '[]' ) AS \"root\" FROM (SELECT row_to_json((SELECT \"_1_e\" FROM (SELECT \"_0_root.base\".\"username\" AS \"username\" ) AS \"_1_e\" ) ) AS \"root\" FROM (SELECT * FROM \"public\".\"profile\" WHERE ((\"public\".\"profile\".\"username\") = ($2)) ) AS \"_0_root.base\" ) AS \"_2_root\" "
      }
    },
    "connection_template": {
      "result": {
        "routing_to": "primary",
        "value": null
      }
    }
  }
}

The type in the log will be query-log. All the details are nested under the detail key.

This log contains the following important fields:

  • kind: indicates the type or kind of operation. kind can be database, action, remote-schema, cached or introspection.
  • request_id: A unique ID for each request. If the client sends an x-request-id header then that is respected, otherwise a UUID is generated for each request. This is useful to correlate between http-log and query-log.
  • query: Contains the full GraphQL request including the variables and operation name.
  • generated_sql: Contains the generated SQL for GraphQL queries. For mutations and subscriptions this field will be null.
  • connection_template (*): If there is a connection template associated with the source, this contains the result of the connection template resolution.

(*) Supported in Cloud and Enterprise editions only.
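As a small worked example of consuming this structure, the following hypothetical helper pulls the generated SQL out of a query-log line, one entry per top-level GraphQL field:

```python
import json

def generated_sql(query_log_line: str) -> dict:
    """Map each top-level GraphQL field in a query-log entry to its generated SQL.

    Mutations and subscriptions yield an empty map, since generated_sql is null.
    """
    entry = json.loads(query_log_line)
    sql = entry["detail"].get("generated_sql") or {}
    return {field: item["query"] for field, item in sql.items()}

sample = (
    '{"type": "query-log", "detail": {"generated_sql": '
    '{"profile": {"prepared_arguments": [], "query": "SELECT 1"}}}}'
)
generated_sql(sample)  # {'profile': 'SELECT 1'}
```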

execution-log structure

On enabling verbose logging, i.e. enabling execution-log, GraphQL Engine may also log statistics about the request once it has completed.

{
  "detail": {
    "request_id": "ff8d2809-8afb-4d6e-a6c3-0fb61ac813e7",
    "statistics": {
      "job": {
        "id": "1230123-sd2fsdjs23djj24s57dfj",
        "location": "europe-west2",
        "state": "DONE"
      }
    }
  },
  "level": "info",
  "timestamp": "2023-03-15T10:29:08.928+0000",
  "type": "execution-log"
}

This log contains 2 important fields:

  • request_id: A unique ID for each request. If the client sends an x-request-id header then that is respected, otherwise a UUID is generated for each request. This is useful to correlate between execution-log and query-log.
  • statistics: Statistics about the completed request. These will differ depending on the backend used for the request. For instance, BigQuery returns details about the job it created.

http-log structure

This is what the HTTP access logs look like:

  • On success response:
{
  "timestamp": "2019-05-30T23:40:24.654+0530",
  "level": "info",
  "type": "http-log",
  "detail": {
    "request_id": "072b3617-6653-4fd5-b5ee-580e9d098c3d",
    "operation": {
      "query_execution_time": 0.009240042,
      "user_vars": {
        "x-hasura-role": "user"
      },
      "error": null,
      "request_id": "072b3617-6653-4fd5-b5ee-580e9d098c3d",
      "parameterized_query_hash": "7116865cef017c3b09e5c9271b0e182a6dcf4c01",
      "response_size": 105,
      "query": null
    },
    "http_info": {
      "status": 200,
      "http_version": "HTTP/1.1",
      "url": "/v1/graphql",
      "ip": "127.0.0.1",
      "method": "POST"
    }
  }
}
  • On error response:
{
  "timestamp": "2019-05-29T15:22:37.834+0530",
  "level": "error",
  "type": "http-log",
  "detail": {
    "operation": {
      "query_execution_time": 0.000656144,
      "user_vars": {
        "x-hasura-role": "user",
        "x-hasura-user-id": "1"
      },
      "error": {
        "path": "$.selectionSet.profile.selectionSet.usernamex",
        "error": "field 'usernamex' not found in type: 'profile'",
        "code": "validation-failed"
      },
      "request_id": "072b3617-6653-4fd5-b5ee-580e9d098c3d",
      "response_size": 142,
      "query": {
        "variables": {
          "limit": 10
        },
        "operationName": "getProfile",
        "query": "query getProfile($limit: Int!) { profile(limit: $limit, where:{username: {_like: \"%a%\"}}) { usernamex} }"
      }
    },
    "http_info": {
      "status": 200,
      "http_version": "HTTP/1.1",
      "url": "/v1/graphql",
      "ip": "127.0.0.1",
      "method": "POST"
    }
  }
}

The type in the log will be http-log for HTTP access/error log. This log contains basic information about the HTTP request and the GraphQL operation.

It has two important "keys" under the detail section - operation and http_info.

http_info lists various information regarding the HTTP request, e.g. IP address, URL path, HTTP status code etc.

operation lists various information regarding the GraphQL query/operation.

  • query_execution_time: the time taken to parse the GraphQL query (from the JSON request), compile it to SQL with permissions and user session variables, execute it and fetch the results back from Postgres. The unit is seconds.
  • user_vars: contains the user session variables, i.e. the x-hasura-* session variables inferred from the authorization mode.
  • request_id: A unique ID for each request. If the client sends an x-request-id header then that is respected, otherwise a UUID is generated for each request.
  • response_size: Size of the response in bytes.
  • error: optional. Will contain the error object when there is an error, otherwise it will be null. This key can be used to detect whether there was an error in the request. Note that the status code for failed requests is still 200 on the /v1/graphql endpoint.
  • query: optional. This will contain the GraphQL query object only when there is an error. On a successful response this will be null.
  • parameterized_query_hash (*): Hash of the incoming GraphQL query after resolving variables, with all the leaf nodes (i.e. scalar values) discarded. This value is only logged when the request is successful. For example, all the queries in the snippet below compute the same parameterized query hash.
# sample query
query {
  authors(where: { id: { _eq: 2 } }) {
    id
    name
  }
}

# query with a different leaf value to that of the sample query
query {
  authors(where: { id: { _eq: 203943 } }) {
    id
    name
  }
}

# query with use of a variable, the value of
# the variable `id` can be anything
query {
  authors(where: { id: { _eq: $id } }) {
    id
    name
  }
}

# query with use of a boolean expression variable,
# where the `whereBoolExp` value is in the form of
#
# {
#   "id": {
#     "_eq": <id>
#   }
# }

query {
  authors(where: $whereBoolExp) {
    id
    name
  }
}

(*) Supported in Cloud and Enterprise editions only.
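To build intuition for why those queries hash identically, here is a deliberately naive approximation in Python. It discards scalar leaves with regexes before hashing; Hasura's real implementation operates on the parsed GraphQL query, so treat this purely as an illustration:

```python
import hashlib
import re

def naive_parameterized_hash(query: str) -> str:
    """Illustration only: replace scalar leaves with '?' before hashing.

    Not Hasura's algorithm, which works on the parsed GraphQL query
    rather than on regexes over the raw text.
    """
    normalized = re.sub(r'"(?:[^"\\]|\\.)*"', "?", query)  # string literals
    normalized = re.sub(r"\b\d+\b", "?", normalized)       # numeric literals
    normalized = re.sub(r"\s+", " ", normalized).strip()   # whitespace
    return hashlib.sha1(normalized.encode()).hexdigest()

q1 = "query { authors(where: { id: { _eq: 2 } }) { id name } }"
q2 = "query { authors(where: { id: { _eq: 203943 } }) { id name } }"
naive_parameterized_hash(q1) == naive_parameterized_hash(q2)  # True
```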

websocket-log structure

This is what the Websocket logs look like:

  • On successful operation start:
{
  "timestamp": "2019-06-10T10:52:54.247+0530",
  "level": "info",
  "type": "websocket-log",
  "detail": {
    "event": {
      "type": "operation",
      "detail": {
        "request_id": "d2ede87d-5cb7-44b6-8736-1d898117722a",
        "operation_id": "1",
        "query": {
          "variables": {},
          "query": "subscription {\n author {\n name\n }\n}\n"
        },
        "operation_type": {
          "type": "started"
        },
        "operation_name": null
      }
    },
    "connection_info": {
      "websocket_id": "f590dd18-75db-4602-8693-8150239df7f7",
      "jwt_expiry": null,
      "msg": null
    },
    "user_vars": {
      "x-hasura-role": "admin"
    }
  }
}
  • On operation stop:
{
  "timestamp": "2019-06-10T11:01:40.939+0530",
  "level": "info",
  "type": "websocket-log",
  "detail": {
    "event": {
      "type": "operation",
      "detail": {
        "request_id": null,
        "operation_id": "1",
        "query": null,
        "operation_type": {
          "type": "stopped"
        },
        "operation_name": null
      }
    },
    "connection_info": {
      "websocket_id": "7f782190-fd58-4305-a83f-8e17177b204e",
      "jwt_expiry": null,
      "msg": null
    },
    "user_vars": {
      "x-hasura-role": "admin"
    }
  }
}
  • On error:
{
  "timestamp": "2019-06-10T10:55:20.650+0530",
  "level": "error",
  "type": "websocket-log",
  "detail": {
    "event": {
      "type": "operation",
      "detail": {
        "request_id": "150e3e6a-e1a7-46ba-a9d4-da6b192a4005",
        "operation_id": "1",
        "query": {
          "variables": {},
          "query": "subscription {\n author {\n namex\n }\n}\n"
        },
        "operation_type": {
          "type": "query_err",
          "detail": {
            "path": "$.selectionSet.author.selectionSet.namex",
            "error": "field 'namex' not found in type: 'author'",
            "code": "validation-failed"
          }
        },
        "operation_name": null
      }
    },
    "connection_info": {
      "websocket_id": "49932ddf-e54d-42c6-bffb-8a57a1c6dcbe",
      "jwt_expiry": null,
      "msg": null
    },
    "user_vars": {
      "x-hasura-role": "admin"
    }
  }
}
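One way to use these entries is to reconstruct the operation lifecycle per connection, keyed by websocket_id. A hypothetical sketch:

```python
import json
from collections import defaultdict

def operations_by_socket(lines):
    """Collect operation lifecycle events per websocket connection."""
    sockets = defaultdict(list)
    for line in lines:
        detail = json.loads(line)["detail"]
        ws_id = detail["connection_info"]["websocket_id"]
        op = detail["event"]["detail"]["operation_type"]["type"]
        sockets[ws_id].append(op)
    return dict(sockets)

lines = [
    '{"detail": {"connection_info": {"websocket_id": "ws-1"}, '
    '"event": {"detail": {"operation_type": {"type": "started"}}}}}',
    '{"detail": {"connection_info": {"websocket_id": "ws-1"}, '
    '"event": {"detail": {"operation_type": {"type": "stopped"}}}}}',
]
operations_by_socket(lines)  # {'ws-1': ['started', 'stopped']}
```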

health-check-log structure

The GraphQL Engine performs recurring Health Checks on data sources and logs their status along with other details. This is what the Health Check log looks like:

  • On successful Health Check
{
  "level": "info",
  "timestamp": "2022-07-28T12:23:56.555+0530",
  "type": "health-check-log",
  "detail": {
    "source_name": "mssql",
    "status": "OK",
    "timestamp": "2022-07-28T06:53:56.555Z",
    "error": null,
    "internal": {
      "interval": 5,
      "max_retries": 3,
      "retry_iteration": 0,
      "timeout": 3
    }
  }
}
  • When the Health Check times out:
{
  "level": "warn",
  "timestamp": "2022-07-28T12:28:16.165+0530",
  "type": "health-check-log",
  "detail": {
    "source_name": "mssql",
    "status": "TIMEOUT",
    "timestamp": "2022-07-28T06:58:16.165Z",
    "error": null,
    "internal": {
      "interval": 5,
      "max_retries": 3,
      "retry_iteration": 3,
      "timeout": 3
    }
  }
}
  • When Health Check results in an error
{
  "level": "warn",
  "timestamp": "2022-07-28T12:30:06.643+0530",
  "type": "health-check-log",
  "detail": {
    "source_name": "postgres",
    "status": "ERROR",
    "timestamp": "2022-07-28T07:00:06.643Z",
    "error": {
      "message": "connection error",
      "extra": "connection to server at \"localhost\" (::1), port 6432 failed: Connection refused\n\tIs the server running on that host and accepting TCP/IP connections?\nconnection to server at \"localhost\" (127.0.0.1), port 6432 failed: Connection refused\n\tIs the server running on that host and accepting TCP/IP connections?\n"
    },
    "internal": {
      "interval": 10,
      "max_retries": 3,
      "retry_iteration": 3,
      "timeout": 5
    }
  }
}

The type in the log will be health-check-log and the details of the Health Check will be under the detail key.

The detail field value is an object containing the following members.

| Name | Type | Description |
| --- | --- | --- |
| source_name | string | The name of the source |
| status | HealthCheckStatus string | The health status of the source |
| timestamp | string | The timestamp in UTC when the Health Check finished |
| error | HealthCheckError value | The details of the error |
| internal | Internal object | Internals of the Health Check config |
  • HealthCheckStatus is a mandatory field whose values are as follows.

    | Health Check status | Description | Log level |
    | --- | --- | --- |
    | OK | Health Check succeeded with no errors. | info |
    | FAILED | Health Check failed, possibly due to a bad connection config. | warn |
    | TIMEOUT | Health Check timed out. The timeout value is specified in the Health Check config. | warn |
    | ERROR | Health Check resulted in an exception. | warn |
  • HealthCheckError contains more information about the Health Check exception when the status is ERROR. For other statuses the value will be null. The error object contains the following fields:

    • message: string. A brief description of the error.
    • extra: any JSON. Contains extra, detailed information about the error.
  • Internal is an object containing the following fields:

    • interval: int. Health Check interval in seconds.
    • max_retries: int. Maximum number of retries configured.
    • retry_iteration: int. The iteration on which the Health Check succeeded. In case of an unsuccessful Health Check, the retry iteration is the same as max_retries.
    • retry_interval: int. The retry interval in seconds.
    • timeout: int. Health Check timeout value in seconds.
Note

The GraphQL Engine logs the Health Check status only when

  • the status is not OK
  • the previous check status was not OK and the current status is OK
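The note above amounts to a small state-machine rule, which can be sketched as follows (should_log is an illustrative name, not part of the engine):

```python
def should_log(previous_status, current_status):
    """Health Check entries are logged when the status is not OK,
    or on recovery, i.e. a not-OK status followed by OK."""
    if current_status != "OK":
        return True
    return previous_status is not None and previous_status != "OK"

should_log("OK", "TIMEOUT")  # True: current status is not OK
should_log("TIMEOUT", "OK")  # True: recovery
should_log("OK", "OK")       # False: nothing to report
```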

event-trigger-process log structure

Every 10 minutes, the Hasura Engine logs the count of events fetched from the database for each source where Event Triggers are defined. The log also contains the number of fetches that occurred within the 10-minute timeframe.

{
  "detail": {
    "num_events_fetched": {
      "default": 2,
      "source_1": 1
    },
    "num_fetches": 601
  },
  "level": "info",
  "timestamp": "2023-01-24T20:20:45.036+0530",
  "type": "event-trigger-process"
}

The type in the log will be event-trigger-process and the details of the event trigger process will be under the detail key.

The detail field value is an object containing the following members.

| Name | Type | Description |
| --- | --- | --- |
| num_events_fetched | FetchedEventsSource object | The count of total events fetched for each source |
| num_fetches | int | The number of fetches that happened within 10 minutes |
  • FetchedEventsSource is a map of source name to the count of total events fetched. It contains only the sources that have Event Triggers defined.

    {
      "source_1": <events_count_fetched_from_source_1>,
      "source_2": <events_count_fetched_from_source_2>,
      ...
    }
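When consuming these entries, a quick sanity metric is the total event count and the average number of events per fetch. A minimal, hypothetical sketch:

```python
import json

def fetch_stats(line: str) -> dict:
    """Summarize an event-trigger-process entry across all sources."""
    detail = json.loads(line)["detail"]
    total = sum(detail["num_events_fetched"].values())
    return {
        "total_events": total,
        "avg_events_per_fetch": total / detail["num_fetches"],
    }

line = ('{"type": "event-trigger-process", "detail": '
        '{"num_events_fetched": {"default": 2, "source_1": 1}, "num_fetches": 601}}')
fetch_stats(line)["total_events"]  # 3
```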

scheduled-trigger-process log structure

Every 10 minutes, the Hasura Engine logs the count of cron and scheduled events fetched from metadata storage. The log also contains the number of times new events were fetched within the 10-minute timeframe.

{
  "detail": {
    "num_cron_events_fetched": 0,
    "num_fetches": 60,
    "num_one_off_scheduled_events_fetched": 0
  },
  "level": "info",
  "timestamp": "2023-01-24T20:10:44.330+0530",
  "type": "scheduled-trigger-process"
}

The type in the log will be scheduled-trigger-process and the details of the scheduled trigger process will be under the detail key.

The detail field value is an object containing the following members.

| Name | Type | Description |
| --- | --- | --- |
| num_cron_events_fetched | int | The count of total cron events fetched |
| num_one_off_scheduled_events_fetched | int | The count of total one-off scheduled events fetched |
| num_fetches | int | The number of fetches that happened within 10 minutes |
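Since each entry covers a 10-minute window, num_fetches also gives the approximate interval between fetches from metadata storage. A hypothetical sketch (the 600-second window comes from the logging cadence described above):

```python
import json

WINDOW_SECONDS = 600  # each entry covers a 10-minute window

def avg_fetch_interval(line: str) -> float:
    """Approximate seconds between fetches from metadata storage."""
    detail = json.loads(line)["detail"]
    return WINDOW_SECONDS / detail["num_fetches"]

line = ('{"type": "scheduled-trigger-process", "detail": '
        '{"num_cron_events_fetched": 0, "num_fetches": 60, '
        '"num_one_off_scheduled_events_fetched": 0}}')
avg_fetch_interval(line)  # 10.0
```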

Monitoring frameworks

You can integrate the logs emitted by the Hasura Engine with external monitoring tools of your choice for better visibility.

For some examples, see Guides: Integrating with monitoring frameworks