Bridging AI and enterprise data with PromptQL x MCP
Current challenges with MCP implementation
Technical limitations
- Varied output structures: Current MCP implementations often leave AI assistants to interpret raw tool outputs and generate free-form responses. As Dharmesh Shah, CTO of HubSpot, notes, this presents challenges for trust and reliability: "The promise of MCP is a large pool of arbitrary clients being able to connect to a large pool of arbitrary servers... but, we're not there yet."
- State management: Standard MCP tool calls are essentially stateless – there's no shared memory between tool invocations beyond what's carried in the LLM context window. While MCP can support stateful interactions, practical implementations often face challenges maintaining state across complex workflows, particularly in serverless deployments.
- LLM-dependent execution: Current MCP implementations place the burden of execution logic directly on the LLM.
Consider this typical pattern:
User: "What were our Q2 sales by region?"
LLM: [decides to call database_query tool]
Tool: [returns raw JSON with 1,000+ rows of data]
LLM: [must parse JSON, perform aggregations, reason about data in its context]
This approach breaks down as data volume grows or multi-step logic is required. Each step consumes context tokens, and the LLM must parse and aggregate raw data itself, a task it handles poorly at scale.
- Orchestration complexity: While MCP doesn't preclude multi-step workflows, coordinating complex sequences across disparate tools often falls to the LLM itself. As task complexity grows, maintaining accuracy and repeatability becomes increasingly difficult.
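To make these limitations concrete, here is a minimal sketch of the client side of the pattern above, using the MCP Python SDK. The database_query tool, the server script, and the SQL are hypothetical; the point is that whatever the tool returns flows straight back into the model's context:

import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Launch a (hypothetical) database MCP server over stdio
    server = StdioServerParameters(command="python", args=["db_server.py"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # The LLM decides to call the tool; the result comes back whole
            result = await session.call_tool(
                "database_query",
                arguments={"sql": "SELECT region, amount FROM sales WHERE quarter = 'Q2'"},
            )
            # result.content (potentially 1,000+ rows) is appended to the
            # LLM's context, where the model must parse and aggregate it itself
            print(result.content)

asyncio.run(main())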
Strategic and developer challenges
- Tool ecosystem fragmentation: Organizations often maintain multiple individual MCP servers without a unified control plane, creating operational overhead and integration challenges.
- Security and trust deficits: Enterprises lack guarantees that AI agents won't expose sensitive data or execute unsafe actions. As Ranjan Goel points out, “Currently, MCP connectors rely on an honor-based system when communicating with LLMs and injecting code/data combinations. This poses security risks, especially in enterprise environments where data integrity and trust are paramount.”
- Debugging complexity: When an AI agent produces unexpected results, developers face challenges identifying the root cause, as reasoning logic often remains embedded in the LLM's decision process.
PromptQL MCP server solution
LLM as planner, not executor
User: "What were our Q2 sales by region?"<The client sends the natural language query to the PromptQL MCP server>
LLM in PromptQL: [generates a plan to find, access, process, and format the data, and implements it as code]
PromptQL Runtime: [executes plan, handles all data processing]
Unified data access via Hasura's DDN
- Relational databases (PostgreSQL, MySQL, SQL Server)
- NoSQL databases (MongoDB, DynamoDB)
- APIs (RESTful, GraphQL, SOAP)
- SaaS apps (Salesforce, Zendesk)
- Unstructured data sources (vector databases, web content)
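Once connected, all of these sources are queryable through a single DDN GraphQL endpoint. A minimal sketch, assuming a hypothetical project URL and illustrative model names:

import requests

DDN_ENDPOINT = "https://your-project.ddn.hasura.app/graphql"  # hypothetical project URL

def run_graphql(query, headers=None):
    # One HTTP call reaches every connected source through the same graph
    response = requests.post(DDN_ENDPOINT, json={"query": query}, headers=headers or {})
    response.raise_for_status()
    return response.json()

# A single query spanning a Postgres-backed model and a Salesforce-backed model
data = run_graphql("""
  query QuarterlyOverview {
    invoiceItems(where: {quarter: {_eq: "Q1-2025"}}) { projectId amount }
    salesforceAccounts(limit: 10) { name industry }
  }
""")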
Fine-grained access control and governance
- Row-level and column-level security policies
- Role-based access control
- Audit logging of all data access
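Because these policies are enforced at the API layer, the same query returns different rows depending on the caller's role, before any data reaches the AI agent. A sketch reusing the run_graphql helper above (the analyst role and header wiring depend on your auth setup):

# The engine evaluates row- and column-level policies against session
# variables such as x-hasura-role, so filtering happens server-side
analyst_view = run_graphql(
    """
    query { invoiceItems { projectId amount } }
    """,
    headers={"x-hasura-role": "analyst", "Authorization": "Bearer <token>"},
)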
Architectural evolution: From tool fragments to a unified data agent
1. PromptQL as a unified data agent
User: "Compare revenue and customer satisfaction across regions last quarter"
- Call revenue_database tool to fetch sales data
- Call customer_service_api tool to get satisfaction metrics
- Call analytics_tool to join and analyze the datasets
- Each step requires context switching and independent error handling

With PromptQL, the LLM instead emits a single plan, and the runtime executes it as code:
# Aggregate Q1-2025 revenue per project from the sales source
revenue = executor.query("""
    SELECT project_id, SUM(amount) AS revenue
    FROM InvoiceItem
    WHERE quarter = 'Q1-2025'
    GROUP BY project_id
""")

# Pull satisfaction metrics for the same period from the support source
satisfaction = executor.query("""
    SELECT * FROM GetCustomerServiceSatisfaction(
        STRUCT('2025-01-01' AS start_date, '2025-03-31' AS end_date)
    )
""")

# Join in the runtime and store the result as a single artifact
executor.store_artifact(
    'quarterly_metrics',
    'Revenue and Satisfaction Q1 2025',
    'table',
    join(revenue, satisfaction, on='project_id')
)

Because the join runs in the PromptQL runtime and the result lands in an artifact, the raw rows never pass through the model's context window.
2. Ingesting external MCP tools into Hasura DDN
- Integrate existing MCP tools into a unified data graph
- Apply consistent authorization policies across all tools
- Compose data from different sources in a single query, as sketched below
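For example, once a support-ticket MCP tool is ingested as a command in the graph, one query can combine its output with native database models. Reusing the run_graphql helper from the earlier sketch (the command and field names are hypothetical):

# A native Postgres model and an MCP-tool-backed command in one request,
# governed by the same authorization policies
data = run_graphql("""
  query RegionalHealth {
    invoiceItems(where: {quarter: {_eq: "Q1-2025"}}) { projectId amount }
    customerSatisfaction(args: {start_date: "2025-01-01", end_date: "2025-03-31"}) {
      projectId
      score
    }
  }
""")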
Conclusion: Enhancing the AI-data experience
- Higher accuracy: Structured planning and deterministic execution
- Better security: Inherited from Hasura's enterprise-grade access controls
- Improved developer experience: Clear visibility into execution and errors
- Unified architecture: One control plane for all AI-data interactions