Bridging AI and enterprise data with PromptQL x MCP

The Model Context Protocol (MCP) represents an advancement in AI integration, functioning as the "USB-C port for AI agents" by providing a universal standard for connecting AI models with external tools and data sources.

Since its introduction by Anthropic in late 2024, MCP has transformed the AI landscape by solving the integration challenge that previously required custom connectors for each AI-tool pairing. As of April 2025, the MCP ecosystem includes more than 1,000 open-source connectors, with major players such as Block and Apollo, and development tools such as Cursor and Replit, embracing the standard.

For enterprises and developers, MCP dramatically reduces development time while enabling more capable, context-aware AI systems that can seamlessly interact with the tools and data sources where critical information lives.

Building on this foundation, Hasura's PromptQL offers a highly reliable approach to AI data access through its MCP server implementation. This post explores how our newly released PromptQL MCP server enhances the MCP ecosystem while enabling powerful new AI-data interaction patterns for enterprises.

Current challenges with MCP implementation

As MCP adoption grows, several implementation considerations have emerged when deploying in enterprise settings:

Technical limitations

  • Varied output structures: Current MCP implementations often lead to AI assistants interpreting raw tool outputs and generating free-form responses. As Dharmesh Shah, CTO of HubSpot, notes, this presents challenges for trust and reliability: "The promise of MCP is a large pool of arbitrary clients being able to connect to a large pool of arbitrary servers... but, we're not there yet."
  • State management: Standard MCP tool calls are essentially stateless – there's no shared memory between tool invocations beyond what's carried in the LLM context window. While MCP can support stateful interactions, practical implementations often face challenges maintaining state across complex workflows, particularly in serverless deployments.
  • LLM-dependent execution: Current MCP implementations place the burden of execution logic directly on the LLM.

    Consider this typical pattern:
    User: "What were our Q2 sales by region?"
    LLM: [decides to call database_query tool]
    Tool: [returns raw JSON with 1,000+ rows of data]
    LLM: [must parse JSON, perform aggregations, reason about data in its context]

    This approach becomes challenging as data volume increases or complex multi-step logic is required. Each step consumes context tokens, and the LLM must process the raw data itself, something it cannot do reliably at scale (a minimal sketch of this pattern follows the list below).
  • Orchestration complexity: While MCP doesn't preclude multi-step workflows, coordinating complex sequences across disparate tools often falls to the LLM itself. As task complexity grows, maintaining accuracy and repeatability becomes increasingly difficult.
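To make the context cost concrete, here is a minimal sketch of the kind of tool handler described above. The database_query tool, its schema, and the sales.db file are hypothetical; the point is simply that the full result set is serialized straight back into the model's context, so every row costs tokens and every aggregation is left to the LLM.

import json
import sqlite3

def database_query(sql: str) -> str:
    """Hypothetical MCP-style tool: run a read-only query and return raw rows as JSON text."""
    conn = sqlite3.connect("sales.db")  # illustrative local database
    try:
        cursor = conn.execute(sql)
        columns = [col[0] for col in cursor.description]
        rows = [dict(zip(columns, row)) for row in cursor.fetchall()]
    finally:
        conn.close()
    # 1,000+ result rows become 1,000+ rows of JSON inside the prompt,
    # and any grouping or arithmetic still has to happen inside the LLM.
    return json.dumps(rows)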

Strategic and developer challenges

  • Tool ecosystem fragmentation: Organizations often maintain multiple individual MCP servers without a unified control plane, creating operational overhead and integration challenges.
  • Security and trust deficits: Enterprises lack guarantees that AI agents won't expose sensitive data or execute unsafe actions. As Ranjan Goel points out, “Currently, MCP connectors rely on an honor-based system when communicating with LLMs and injecting code/data combinations. This poses security risks, especially in enterprise environments where data integrity and trust are paramount.”
  • Debugging complexity: When an AI agent produces unexpected results, developers face challenges identifying the root cause, as reasoning logic often remains embedded in the LLM's decision process.

PromptQL MCP server solution

PromptQL's MCP server implementation enhances the MCP ecosystem by addressing many of these considerations:

LLM as planner, not executor

PromptQL extends the MCP model by establishing a clear contract between the LLM and data systems. The LLM is responsible for generating a structured plan (essentially code) that is then executed by a deterministic runtime:

User: "What were our Q2 sales by region?"

Client: [sends the natural language query to the PromptQL MCP server]

LLM in PromptQL: [generates plan to find, access, process and format the data, and implements it as code]

PromptQL Runtime: [executes plan, handles all data processing]

This architecture delivers significantly higher accuracy (~2× improvement) and near-perfect repeatability on complex tasks compared to traditional approaches.
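A simplified sketch of that contract is shown below. It reuses the executor.query and store_artifact names that also appear in the plan example later in this post, but it is illustrative only and not PromptQL's actual runtime API: the model produces plan_code, and a deterministic runtime executes it without the raw rows ever re-entering the model's context.

class Executor:
    """Illustrative stand-in for the data runtime a plan is allowed to call."""
    def __init__(self):
        self.artifacts = {}

    def query(self, sql: str):
        # In a real deployment this would run against the governed data layer.
        raise NotImplementedError

    def store_artifact(self, key, title, kind, data):
        # Plans hand back structured artifacts instead of raw rows.
        self.artifacts[key] = {"title": title, "kind": kind, "data": data}

def run_plan(plan_code: str, executor: Executor) -> dict:
    """Deterministically execute the LLM-generated plan; return only its artifacts."""
    exec(plan_code, {"executor": executor})
    return executor.artifacts

Only the resulting artifacts – typically small, already-aggregated tables – go back to the model, which is what keeps accuracy and repeatability high as data volumes grow.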

Unified data access via Hasura's DDN

PromptQL leverages Hasura's Data Delivery Network (DDN) connectors to provide unified access to:

  1. Relational databases (PostgreSQL, MySQL, SQL Server)
  2. NoSQL databases (MongoDB, DynamoDB)
  3. APIs (RESTful, GraphQL, SOAP)
  4. SaaS apps (Salesforce, Zendesk)
  5. Unstructured data sources (vector stores, web content)

This means a single PromptQL MCP server can replace dozens of individual MCP tool endpoints that an LLM would otherwise need to understand and coordinate.
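As a rough sketch of what "one server instead of dozens" looks like in practice, the snippet below uses the FastMCP helper from the reference MCP Python SDK to expose a single natural-language tool. The forward_to_promptql helper is a placeholder, not the actual PromptQL client API.

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("promptql")

def forward_to_promptql(question: str) -> str:
    # Placeholder: wire up the real PromptQL API here.
    raise NotImplementedError

@mcp.tool()
def ask_data(question: str) -> str:
    """Answer a natural-language question against the unified data graph."""
    # PromptQL plans and executes the query across every DDN-connected source
    # (PostgreSQL, MongoDB, Salesforce, vector stores, ...), so the client
    # only ever sees this one tool.
    return forward_to_promptql(question)

if __name__ == "__main__":
    mcp.run()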

Fine-grained access control and governance

Every query planned and executed by PromptQL inherits Hasura's enterprise-grade security model:

  1. Row-level and column-level security policies
  2. Role-based access control
  3. Audit logging of all data access

This addresses key security considerations highlighted by industry leaders, ensuring the AI can only retrieve data appropriate for the user's permissions – a critical requirement for enterprise adoption.
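The snippet below is purely illustrative (it is not Hasura's policy syntax); it shows what row-level enforcement means in practice: before a planned query runs, the runtime appends the caller's predicate, so the plan can never read rows the user's role does not permit.

# Illustrative only, not Hasura's policy format.
ROW_POLICIES = {
    "sales_rep": "region = {user_region}",  # reps see only their own region
    "analyst": "1 = 1",                     # analysts see everything
}

def apply_row_policy(sql: str, role: str, user_region: str) -> str:
    """Wrap a planned query so the caller's row-level predicate always applies."""
    predicate = ROW_POLICIES[role].format(user_region=repr(user_region))
    return f"SELECT * FROM ({sql}) AS plan_output WHERE {predicate}"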

Architectural evolution: From tool fragments to a unified data agent

Hasura envisions two complementary approaches to MCP integration:

1. PromptQL as a unified data agent

Instead of exposing multiple individual MCP servers to an LLM, PromptQL functions as a single unified agent. This simplifies integration and prompt engineering while providing a consistent security model across all data sources.

Example: Consider an enterprise dashboard query:

User: "Compare revenue and customer satisfaction across regions last quarter"

With traditional MCP tools, this might require:

  1. Calling a revenue_database tool to fetch sales data
  2. Calling a customer_service_api tool to get satisfaction metrics
  3. Calling an analytics_tool to join and analyze the datasets

Each step involves a context switch and independent error handling.

With PromptQL's unified approach:

# Plan generated by the LLM; the PromptQL runtime executes it deterministically.

# Fetch quarterly revenue from the invoicing source.
revenue = executor.query("""
    SELECT project_id, SUM(amount) AS revenue
    FROM InvoiceItem
    WHERE quarter = 'Q1-2025'
    GROUP BY project_id
""")

# Fetch satisfaction metrics from the customer-service source for the same period.
satisfaction = executor.query("""
    SELECT * FROM GetCustomerServiceSatisfaction(
        STRUCT('2025-01-01' AS start_date, '2025-03-31' AS end_date)
    )
""")

# Store the joined results in a single artifact for the client to render.
executor.store_artifact(
    'quarterly_metrics',
    'Revenue and Satisfaction Q1 2025',
    'table',
    join(revenue, satisfaction, on='project_id')
)

This plan is executed as a single coordinated workflow, with consistent error handling and security policies applied throughout.

2. Ingesting external MCP tools into Hasura DDN

PromptQL can also leverage external MCP servers through Hasura's connector architecture. This allows organizations to:

  • Integrate existing MCP tools into a unified data graph
  • Apply consistent authorization policies across all tools
  • Compose data from different sources in a single query

For example, if your organization uses MCP servers for Slack, Google Drive, and Salesforce, PromptQL can incorporate them all into a single query plan while enforcing consistent access controls.
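As a hypothetical plan in the style of the earlier example, the snippet below composes data from two ingested MCP sources. SalesforceOpportunities and SlackMessages are illustrative stand-ins for whatever models those connectors actually expose in the data graph.

# Hypothetical plan; table names are illustrative stand-ins.
deals = executor.query("""
    SELECT account_name, amount, close_date
    FROM SalesforceOpportunities
    WHERE stage = 'Closed Won' AND close_date >= '2025-01-01'
""")

mentions = executor.query("""
    SELECT channel, text, posted_at
    FROM SlackMessages
    WHERE text ILIKE '%renewal%' AND posted_at >= '2025-01-01'
""")

# Both sources land in the same plan, under the same authorization policies,
# regardless of which MCP server each one originally came from.
executor.store_artifact('won_deals', 'Closed-won deals, 2025', 'table', deals)
executor.store_artifact('renewal_chatter', 'Slack renewal discussion, 2025', 'table', mentions)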

Conclusion: Enhancing the AI-data experience

PromptQL's MCP server implementation represents an evolution in how AI assistants interact with enterprise data, building upon the foundation that MCP has established. By extending MCP with structured planning and programmatic execution, it addresses many emerging challenges:

  • Higher accuracy: Structured planning and deterministic execution
  • Better security: Inherited from Hasura's enterprise-grade access controls
  • Improved developer experience: Clear visibility into execution and errors
  • Unified architecture: One control plane for all AI-data interactions

The open source PromptQL MCP server is available now on GitHub, enabling developers to connect AI assistants like Claude to their enterprise data through a standardized, secure, and powerful interface.
