Build safer AI assistants with PromptQL and human-in-the-loop guardrails

Imagine a customer service AI assistant about to issue a $50,000 refund instead of $500 due to a misplaced decimal. Without proper guardrails, this simple error could cost a company dearly. In today's AI landscape, where systems handle increasingly sensitive decisions, guardrails aren't optional extras – they're essential safeguards that protect both businesses and users.

One of the most effective guardrails in AI systems today is human-in-the-loop (HITL) oversight, which combines the computational power of AI with the discernment and accountability of human decision-making. This blog explores why HITL systems are crucial, how they align with the principles of agentic AI, and how PromptQL simplifies their implementation in the AI assistant interface.

Guardrails in the age of agentic AI

AI systems excel at automating repetitive tasks, analyzing data, and generating insights. However, they’re not infallible. Without the right checks, even the most advanced AI can:

  1. Make inaccurate decisions based on incomplete or irrelevant data.
  2. Compromise user trust by executing actions without sufficient oversight.
  3. Violate safety and compliance regulations by mishandling data or making unauthorized decisions.

A well-designed guardrail system ensures AI applications are transparent, reliable, and controllable, addressing these risks effectively.

The role of human-in-the-loop in agentic AI

Agentic AI, which emphasizes autonomy with accountability, thrives on structured workflows and intelligent decision-making. While these systems can autonomously generate query plans, retrieve data, and adapt dynamically, there are scenarios where human intervention becomes a non-negotiable step.

Key scenarios for HITL oversight:

  • Sensitive decisions: Approving or denying refunds, escalating customer complaints, or making policy exceptions.
  • Data manipulation: Performing updates, deletions, or transformations in the database that could have irreversible consequences.
  • High-stakes outputs: Generating reports or insights that directly impact strategic decisions or compliance.

Human-in-the-loop systems bring the following benefits:

  • Accountability: A human review step ensures decisions are not made in a vacuum.
  • Transparency: Human oversight adds a layer of clarity to AI workflows.
  • Safety: Manual approval mitigates the risk of errors or unintended consequences.

How PromptQL makes guardrails easy

PromptQL is a data access agent that creates and runs query plans. At its core, it is designed to enable agentic AI workflows that are both intelligent and controllable, and its modular design makes HITL implementation straightforward.
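To make the pattern concrete, here is a minimal sketch of a human-in-the-loop gate in Python: a sensitive action is described, the workflow pauses, and the action runs only after explicit approval. The `PendingAction` and `require_approval` names are illustrative stand-ins, not part of PromptQL's API.

```python
# A minimal sketch of a human-in-the-loop gate around a sensitive action.
# These names are hypothetical; PromptQL applies this pattern, but this is
# not its actual interface.
from dataclasses import dataclass
from typing import Callable


@dataclass
class PendingAction:
    description: str            # human-readable summary shown to the reviewer
    execute: Callable[[], str]  # the side-effecting step, run only on approval


def require_approval(action: PendingAction) -> str:
    """Pause the workflow and ask a human to approve or deny the action."""
    print(f"Pending action: {action.description}")
    decision = input("Approve? [y/N] ").strip().lower()
    if decision == "y":
        return action.execute()
    return "Action denied; flagged for further review."


# Example: gate a refund behind explicit human approval.
refund = PendingAction(
    description="Refund $500 on invoice INV-1042 for ticket #8831",
    execute=lambda: "Refund issued.",  # placeholder for the real mutation
)
print(require_approval(refund))
```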

Example: A customer support assistant

Recently, I built a customer support assistant using Hasura PromptQL, showcasing how guardrails can enhance agentic AI workflows. The system was tasked with managing customer invoices and support tickets. One of its standout features was manual approval for refunds – a critical step in ensuring responsible automation.

Human-in-the-loop approval for data manipulations in PromptQL

Here’s how it worked:

  1. Query planning: When a refund action was triggered, PromptQL generated a query plan to gather invoice, ticket, and payment records.
  2. Data aggregation: The agent gathered all relevant data through Hasura’s unified data access layer, which was connected to PostgreSQL and Zendesk.
  3. HITL oversight: Before executing the refund, the workflow paused, presenting the aggregated data for manual approval or denial.
  4. Execution: Depending on the decision, the workflow either completed the refund or flagged the request for further review.

This combination of automated query planning and manual intervention ensured that the system was both efficient and safe.
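For illustration, the sketch below mirrors those four steps in Python. The query plan, data-layer calls, and refund mutation are hypothetical mocks standing in for PromptQL and Hasura; only the shape of the flow is meant to carry over.

```python
# A simplified sketch of the refund workflow described above, with hypothetical
# helpers standing in for PromptQL's query planning and Hasura's data layer.
from typing import Any


def plan_refund_query(ticket_id: int) -> list[str]:
    """Step 1: a stand-in for PromptQL's generated query plan."""
    return [
        f"fetch invoice for ticket {ticket_id} from PostgreSQL",
        f"fetch ticket {ticket_id} details from Zendesk",
        f"fetch payment records for ticket {ticket_id}",
    ]


def gather_data(plan: list[str]) -> dict[str, Any]:
    """Step 2: aggregate records via the unified data access layer (mocked)."""
    print("Executing plan:", *plan, sep="\n  - ")
    return {"invoice": "INV-1042", "amount": 500.00, "ticket": 8831}


def human_approval(context: dict[str, Any]) -> bool:
    """Step 3: pause and present the aggregated data for manual review."""
    print(f"Refund request: {context}")
    return input("Approve refund? [y/N] ").strip().lower() == "y"


def execute_refund(context: dict[str, Any]) -> str:
    """Step 4: run the mutation only after approval."""
    return f"Refunded ${context['amount']:.2f} on {context['invoice']}."


def refund_workflow(ticket_id: int) -> str:
    context = gather_data(plan_refund_query(ticket_id))
    if human_approval(context):
        return execute_refund(context)
    return "Refund flagged for further review."


print(refund_workflow(8831))
```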

Passing authentication headers

The underlying data source is typically secured with an authentication layer. When the Hasura DDN project is configured with an authentication token and granular role-based access control (RBAC) rules, PromptQL enforces those permissions. This ensures that your AI-powered app accesses only authorized data, keeping data handling secure and compliant.
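As a hedged example, the snippet below forwards an end user's token when calling a PromptQL-backed API so that DDN can evaluate its RBAC rules. The endpoint URL and request body are hypothetical placeholders; only the idea of passing the user's bearer token with each request is the point.

```python
# Sketch: forward the end user's token so RBAC is enforced per request.
# The endpoint path and payload shape below are assumptions, not a documented contract.
import requests

PROMPTQL_ENDPOINT = "https://my-ddn-project.example.com/promptql/query"  # hypothetical
USER_JWT = "eyJhbGciOi..."  # the end user's token, issued by your auth provider

response = requests.post(
    PROMPTQL_ENDPOINT,
    headers={
        # Forwarding the user's token lets Hasura DDN apply its RBAC rules,
        # so the assistant only touches data this user is allowed to access.
        "Authorization": f"Bearer {USER_JWT}",
        "Content-Type": "application/json",
    },
    json={"message": "Refund invoice INV-1042"},  # illustrative request body
    timeout=30,
)
response.raise_for_status()
print(response.json())
```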

Read more about how Authentication works with Hasura DDN

Why PromptQL stands out

  • Dynamic query planning: PromptQL breaks down user intents into actionable steps, ensuring that only the required data is retrieved.
  • Adaptability: The system retries and iterates on workflows when errors occur, enabling robust handling of edge cases (see the sketch after this list).
  • Composable workflows: With its integration into standardized and composable APIs, PromptQL allows developers to add HITL steps without rewriting workflows.
  • Memory-aware context: By limiting the context passed to LLMs, PromptQL ensures that HITL actions are informed without overloading the system.
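To illustrate the adaptability point above, here is a rough sketch of a retry-and-iterate loop around a failing workflow step. The `run_step` and `revise_plan` helpers are hypothetical placeholders, not PromptQL internals; the escalation to a human reviewer ties back to the HITL theme.

```python
# A rough illustration of retrying and iterating on a failing workflow step.
# `run_step` and `revise_plan` are hypothetical stand-ins, not PromptQL APIs.
import random


def run_step(step: str) -> str:
    """Pretend to execute one step of a query plan; sometimes it fails."""
    if random.random() < 0.5:
        raise RuntimeError(f"transient error while running: {step}")
    return f"ok: {step}"


def revise_plan(step: str, error: Exception) -> str:
    """Adjust the step based on the error before the next attempt."""
    return f"{step} (revised after '{error}')"


def run_with_retries(step: str, max_attempts: int = 3) -> str:
    for attempt in range(1, max_attempts + 1):
        try:
            return run_step(step)
        except RuntimeError as err:
            print(f"Attempt {attempt} failed: {err}")
            step = revise_plan(step, err)
    return "Step failed after retries; escalate to a human reviewer."


print(run_with_retries("fetch payment records"))
```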

Building smarter and safer AI workflows

As AI continues to evolve, the systems that succeed will be those that find the right balance between autonomy and accountability. By integrating human-in-the-loop oversight, we can build workflows that are not just smarter – but safer, too. PromptQL provides the foundation for building these hybrid systems.

What are your thoughts on human-in-the-loop systems in AI? Learn more about PromptQL with these resources.
