Building a powerful customer support AI assistant with PromptQL in 5 minutes

In today's data-driven world, connecting AI to your organization's data remains a significant challenge. What if you could build AI assistants that can query databases, access customer support tickets, and even take actions – all using natural language? That's exactly what PromptQL enables, and as demonstrated in our recent demo, you can set it up in under 5 minutes.

What is PromptQL?

PromptQL is a data agent that gets you to 100% accuracy on your enterprise data. Unlike traditional AI approaches that struggle with complex queries across multiple data sources, PromptQL creates and runs query plans programmatically, effectively spinning up AI agents on the fly.

At its core, PromptQL is built on Hasura Data Delivery Network (DDN), which provides universal data access across databases, APIs, SaaS platforms, and more. This foundation allows PromptQL to connect directly to your existing data sources while maintaining granular, role-based authorization and security.

The key innovation of PromptQL is its approach to query planning. Instead of trying to process data within the LLM's context window (which leads to hallucinations and errors), PromptQL:

  1. Uses AI to create a structured query plan.
  2. Executes that plan programmatically outside the LLM (programmatic agents on the fly).
  3. Stores results in structured artifacts that can be referenced later.
  4. Combines retrieval, computation, and AI reasoning in a predictable way.
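The four steps above can be sketched as a tiny plan runner. This is an illustrative sketch only, not PromptQL's actual runtime: the LLM emits a structured plan, a loop executes each step in ordinary code, and every result is stored as a named artifact that later steps can reference.

```python
# Minimal sketch of the plan-then-execute model (illustrative, not
# PromptQL's real API): steps run outside the LLM, and results are
# kept in structured artifacts rather than the model's context window.

artifacts = {}  # structured results that later steps can reference

def run_plan(plan):
    """Execute each step programmatically, feeding it the artifacts
    it names as inputs and storing its output under a new name."""
    for step in plan:
        inputs = [artifacts[name] for name in step["inputs"]]
        artifacts[step["output"]] = step["fn"](*inputs)
    return artifacts

# Hypothetical two-step plan: retrieve rows, then aggregate them in code.
plan = [
    {"fn": lambda: [{"user": "a", "amount": 10}, {"user": "a", "amount": 5}],
     "inputs": [], "output": "invoices"},
    {"fn": lambda invoices: sum(i["amount"] for i in invoices),
     "inputs": ["invoices"], "output": "total"},
]

run_plan(plan)
print(artifacts["total"])  # 15
```

Because the aggregation happens in code rather than in the model's context, the same plan produces the same answer every time it runs.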

This approach delivers 5x better performance than traditional methods like RAG or function calling, with near-perfect repeatability even as tasks become more complex.

Now, let's see PromptQL in action.

PromptQL in action: A customer support assistant

Our demo showcases a powerful assistant built for a customer support team. Connected to user data in PostgreSQL, product usage metrics in BigQuery, and support tickets in Zendesk, it demonstrates what's possible with integrated data access.

Here’s roughly how the architecture looks:

Query 1: Revenue analysis

When asked: "Who are our top five customers by revenue?"

PromptQL's approach is fundamentally different. Rather than trying to answer directly, it develops a query plan:

  • Check if revenue data is available somewhere
  • Look at all invoices generated to date
  • Group invoices by user
  • Sum invoice amounts for each user
  • Return the top five by total revenue

The result? A clean table showing each customer, their email, and total revenue generated, with no hallucinated or incorrect data.
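The plan above boils down to a join-and-aggregate query. Here is a self-contained sketch using an in-memory SQLite database with hypothetical `users` and `invoices` tables; in the demo, PromptQL generates and runs an equivalent query against the real PostgreSQL source.

```python
# Sketch of the revenue query plan against mock tables (the table and
# column names here are assumptions for illustration).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT);
    CREATE TABLE invoices (user_id INTEGER, amount REAL);
    INSERT INTO users VALUES (1, 'a@example.com'), (2, 'b@example.com');
    INSERT INTO invoices VALUES (1, 100.0), (1, 250.0), (2, 80.0);
""")

# Group invoices by user, sum the amounts, return the top five.
top_customers = conn.execute("""
    SELECT u.email, SUM(i.amount) AS revenue
    FROM invoices i JOIN users u ON u.id = i.user_id
    GROUP BY u.id
    ORDER BY revenue DESC
    LIMIT 5
""").fetchall()

print(top_customers)  # [('a@example.com', 350.0), ('b@example.com', 80.0)]
```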

Query 2: Contextual follow-ups

Next, when asked: "How many support tickets do each of these customers have?"

This follow-up query showcases PromptQL's structured memory. It:

  • Retrieves the previous customer list from its artifact (not from LLM memory)
  • For each customer, queries the ticket system
  • Returns ticket counts for each customer

The assistant doesn't try to "remember" customer details in its context – it pulls them from a structured artifact in real-time.
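A sketch of that follow-up, with the customer list read from the saved artifact rather than re-derived from chat history. The ticket data here is mocked; the real assistant queries Zendesk through DDN.

```python
# The artifact saved by query 1 (mocked here) is the source of truth
# for "these customers" -- not the LLM's conversational memory.
customers_artifact = ["a@example.com", "b@example.com"]

# Mock Zendesk tickets; in the demo these come from the ticket system.
tickets = [
    {"requester": "a@example.com"},
    {"requester": "a@example.com"},
    {"requester": "b@example.com"},
]

# For each customer in the artifact, count their tickets.
ticket_counts = {
    email: sum(1 for t in tickets if t["requester"] == email)
    for email in customers_artifact
}

print(ticket_counts)  # {'a@example.com': 2, 'b@example.com': 1}
```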

Query 3: Complex multi-step operations

The third query demonstrates complex multi-step operations: "For the third user, fetch the details of their tickets including the comments on them. Then summarize each ticket and classify how well they have been resolved. Don't just give me a binary classification – be more nuanced in that, so create five categories."

This complex query required PromptQL to:

  1. Identify the third user from the artifact
  2. Fetch all their ticket details including comments
  3. Iterate through each ticket
  4. Call an LLM to summarize each ticket
  5. Call another LLM to categorize resolution quality with five nuanced categories

The result was a comprehensive analysis of all tickets, with summaries and resolution classifications – all executed flawlessly across multiple data sources.
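The per-ticket loop can be sketched as follows. The `call_llm` function is a stand-in stub (a real run invokes the configured model inside the executed plan), and the five category names are assumptions for illustration.

```python
# Sketch of the multi-step flow: iterate tickets from the artifact and
# call an LLM per ticket, once to summarize and once to classify.
RESOLUTION_CATEGORIES = [
    "fully resolved", "mostly resolved", "partially resolved",
    "unresolved", "escalated",
]

def call_llm(prompt: str) -> str:
    # Stub: a real implementation would call the configured LLM here.
    return "Short summary." if "summarize" in prompt else "fully resolved"

def analyze_tickets(tickets):
    results = []
    for t in tickets:
        summary = call_llm(f"summarize: {t['comments']}")
        category = call_llm(
            f"classify into one of {RESOLUTION_CATEGORIES}: {t['comments']}"
        )
        results.append({"id": t["id"], "summary": summary,
                        "category": category})
    return results

report = analyze_tickets([{"id": 1, "comments": ["Fixed, thanks!"]}])
print(report)
```

The key point is structural: the loop, the data fetching, and the bookkeeping all run as ordinary code, and the LLM is called only for the two judgment tasks it is actually good at.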

Query 4: Taking action based on analysis

Finally, we pushed the assistant further: "Issue $50 in credits to this user's highest revenue generating project."

This action-oriented query required:

  1. Understanding who "this user" is from context
  2. Retrieving all projects for this user
  3. Calculating revenue per project
  4. Identifying the highest-revenue project
  5. Preparing to issue $50 credit
  6. Requesting human confirmation
  7. Executing the credit after approval

The human-in-the-loop confirmation is crucial - you don't want AI having free rein over your data manipulation.
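The confirmation gate can be sketched like this, assuming a hypothetical `issue_credit` mutation: the action is fully prepared, then held until a human explicitly approves it.

```python
# Sketch of the human-in-the-loop step. `issue_credit` is a
# hypothetical mutation; the approve callback represents the human
# reviewing the prepared action before anything is executed.
def issue_credit_with_approval(project_id: str, amount: float,
                               approve) -> str:
    pending = {"project_id": project_id, "amount": amount}
    if not approve(pending):
        return "cancelled"
    # issue_credit(project_id, amount)  # real mutation would run here
    return f"issued ${amount:.2f} to {project_id}"

# In the demo the human confirms in the console; here we auto-approve.
result = issue_credit_with_approval("proj-42", 50.0, approve=lambda p: True)
print(result)  # issued $50.00 to proj-42
```

Declining the prepared action returns `"cancelled"` without the mutation ever running, which is the property the guardrail exists to guarantee.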

Read more: Build safer AI assistants with PromptQL and human-in-the-loop guardrails

Building your own assistant in 5 minutes

The most impressive aspect of PromptQL is how quickly you can set up your own assistant.

Pre-requisites: You’ll need the Hasura DDN CLI (`ddn`, authenticated via a Hasura account) and Docker installed on your local machine.

Link to the GitHub repo: https://github.com/hasura/promptql-customer-support-ticket-example

Step 1: Clone the project

git clone [email protected]:hasura/promptql-customer-support-ticket-example.git

Step 2: Set up the database

cd postgres
docker compose up -d

Step 3: Build Hasura DDN project

cd ddn-project
ddn supergraph build local

Step 4: Run PromptQL

ddn run docker-start

Step 5: Try out a query

Once PromptQL is up and running, head to

https://console.hasura.io/local/graphql?url=http://localhost:3280

and try out the queries mentioned above. Or you can start with a simple prompt to ask "What can you do?" and go from there.

Summary

Whether you're building internal tools for your team, customer-facing assistants, or personal productivity tools, PromptQL provides the foundation for AI that truly understands and acts on your data.

PromptQL opens up possibilities for numerous applications. Visit promptql.hasura.io to get started, or follow one of the getting started guides in the documentation.

Check out the full demo at youtu.be/Ow1iNlX31Dk and start building today!



Blog
27 Feb, 2025