Sub-Processors
Last Updated: March 5, 2026
Hedra engages certain third-party service providers ("Sub-Processors") to assist in providing our services. This page lists all Sub-Processors currently engaged by Hedra and describes the data they process and their locations.
Important Notice
This list reflects our current implementation as of March 2026. As an early-stage product, we are transparent about which sub-processors are always used (✅ Active) versus those that depend on customer configuration (⚠️ Conditional).
Active Sub-Processors
These sub-processors are used for all Hedra customers:
Neon
neon.tech
Purpose: Database hosting (PostgreSQL)
Data Processed: Customer metadata, query history, audit logs, encrypted credentials
Location: United States (AWS us-east-1)
Compliance: SOC 2 Type II
Vercel
vercel.com
Purpose: Application hosting (web dashboard, API)
Data Processed: HTTP requests, session data
Location: United States / Global (Vercel Edge Network)
Compliance: SOC 2 Type II, ISO 27001
Mailgun
mailgun.com
Purpose: Email delivery (transactional emails)
Data Processed: Email addresses, transactional content
Location: United States
Compliance: SOC 2 Type II
Slack
slack.com
Purpose: Messaging platform integration
Data Processed: User identity, workspace info, bot messages
Location: United States, European Union (multi-region)
Compliance: SOC 2 Type II, ISO 27001, GDPR
Conditional Sub-Processors
These sub-processors are only used if you configure Hedra to use them. You can choose alternative providers or self-hosted options to avoid these sub-processors entirely.
OpenAI
openai.com
Purpose: LLM inference for natural language queries
Data Processed: Natural language queries, CSV file content
Location: United States
Compliance: SOC 2 Type II
When Used: If you configure LLM_PROVIDER=openai
Alternatives: Anthropic Claude, self-hosted Ollama, or self-hosted vLLM
Anthropic
anthropic.com
Purpose: LLM inference for natural language queries
Data Processed: Natural language queries
Location: United States
Compliance: SOC 2 Type II
When Used: If you configure LLM_PROVIDER=anthropic
Alternatives: OpenAI GPT, self-hosted Ollama, or self-hosted vLLM
Customer Controls
Choose Your LLM Provider
Select between OpenAI, Anthropic, or self-hosted models (Ollama, vLLM) to control which LLM sub-processors process your data.
Self-Hosted LLMs
Deploy Ollama or vLLM in your own infrastructure to eliminate OpenAI and Anthropic as sub-processors entirely. Your data never leaves your environment.
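As an illustrative sketch only: this page documents the values LLM_PROVIDER=openai and LLM_PROVIDER=anthropic, but the self-hosted value ("ollama") and the OLLAMA_BASE_URL variable below are assumptions, not documented Hedra settings. A self-hosted configuration might look something like:

```shell
# Hypothetical environment configuration (names other than LLM_PROVIDER are illustrative).
# Pointing inference at a self-hosted model keeps query content inside your own network,
# removing OpenAI and Anthropic from your sub-processor list.
LLM_PROVIDER=ollama                        # assumed value; "openai" and "anthropic" are the documented options
OLLAMA_BASE_URL=http://localhost:11434     # assumed variable; 11434 is Ollama's default port
```

Consult your deployment's actual configuration reference for the supported variable names and values.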
Database Data Never Leaves Your Environment
Customer databases are never migrated to Hedra or any sub-processor infrastructure. Only query metadata is processed by our services.
Not Yet Implemented
As an early-stage product, the following features are not yet available:
- Sub-processor change notification system (30-day advance notice)
- Formal Data Processing Addendums (DPAs) with all sub-processors
- Sub-processor objection process
- Email notifications for sub-processor changes
Changes to This List
We will update this page whenever we add or remove sub-processors. The "Last Updated" date at the top of this page indicates when changes were last made.
Planned: In the future, we will implement email notifications to workspace administrators with 30-day advance notice before adding new sub-processors or changing existing ones.
Questions About Sub-Processors?
If you have questions about our sub-processors or need specific compliance documentation (such as Data Processing Addendums), please contact us: