Help Center

Answers, fast.
Real humans on the other end.

Quick answers about your Oya account, billing, privacy, and how AI accuracy actually works. For product how-tos — building agents, connecting integrations, writing skills — head to the docs.

Account

Sign in, security, teammates, and managing your Oya account.

How do I sign up?

Go to oya.ai and click Get Started. Sign up with email or Google. New accounts get $5 in free credits immediately, plus another $5 when you connect GitHub. No credit card required to start.

I forgot my password — how do I reset it?

Click "Forgot password" on the sign-in page. We'll email you a reset link. If you signed up with Google, sign in with Google instead — there's no password to reset.

How do I update my email or display name?

You can change both in your account settings. If you need to change the email on a paid account, please contact us via the form below so billing continues without interruption.

How do I delete my account?

Submit a request via the contact form below with the subject "Delete my account". We'll permanently remove your account, agents, run history, knowledge base, and connected integrations within 30 days. Some financial records (invoices, tax-relevant transactions) are retained for legal compliance.

How do I export my data?

Submit a request via the contact form below with the subject "Data export". We'll provide a machine-readable export of your agents, skills, run history, knowledge base content, and account metadata. EU/UK users have this right under GDPR; we offer it to everyone.

Billing & payments

How payments work — for product details on per-run pricing, see the pricing page.

Do my $5 free credits expire?

No. Your free credits don't expire. Once they're used, you decide whether to add more. There are no subscriptions to cancel — Oya is purely pay-as-you-go.

What payment methods do you accept?

Cards via Stripe, including Stripe Link for one-click checkout. All transactions are in USD.

Can I get an invoice or receipt?

Yes. Every charge generates a receipt that's emailed to your account email. You can also download invoices from your Billing page. For custom invoice formatting (e.g. for your accounting team), contact us via the form.

I'm interested in enterprise pricing or a higher-volume plan — who do I contact?

Use the contact form below with category "Billing" and let us know your expected usage. We'll get back to you with options.

Privacy & data

Where your data lives, how it's used, and how to report security issues.

Do you train AI models on my data?

No. We do not use customer data — your prompts, conversations, knowledge base content, or run outputs — to train any AI model, ours or anyone else's. The data is used only to do the work you asked your AI employee to do, and then logged in your account so you can audit what happened.

Where is my data stored?

Account data and run history are stored in Supabase (US region). Sandbox execution happens on Daytona (US region). Knowledge base content is stored on AWS. We do not currently offer a non-US data residency option.

Who can access my data internally?

Only the Oya team members who need to. We access account data when you contact support and ask us to look, when investigating an incident, or as required for security and abuse prevention. Access is logged. We don't browse customer data for any other reason.

How do I report a security vulnerability?

Use the contact form below with the category "Security" and a description of what you found. Please give us a reasonable window to investigate before public disclosure. We don't currently run a paid bug bounty, but we appreciate responsible disclosure and will credit researchers who ask.

AI accuracy

How Oya separates reasoning from execution to keep AI employees reliable — and what you can do to push reliability further.

The Oya difference

Most AI agents mix reasoning and action into one LLM call. Oya splits them.

A whole class of “the AI claimed it did something it didn’t actually do” errors becomes structurally impossible — because the layer that thinks is not the layer that executes.

Layer 1: Orchestrator (reasoning LLM). Decides which skill to run; never touches data directly.

Layer 2: Interpreter (typed skill LLM). Translates the request into a structured tool call, constrained by schema.

Layer 3: Execution (sandboxed code). Deterministic Python/JS against real APIs; no LLM in the loop here.

Every model call, tool invocation, input, and output is captured in your audit trail. You can replay any run and verify what actually happened — not just what the agent said it did.
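The three layers can be sketched in miniature. Everything below is illustrative: the function names, the hard-coded decision logic, and the audit-trail shape are assumptions for the sketch, not Oya's real API.

```python
# Illustrative sketch of the three-layer separation. Names and shapes
# are invented for this example; they are not Oya's actual interfaces.

def orchestrator(request: str) -> str:
    """Layer 1: a reasoning LLM picks a skill by name. It never sees data."""
    # A real system would call an LLM here; we hard-code the decision.
    return "send_invoice" if "invoice" in request else "lookup_order"

def interpreter(skill: str, request: str) -> dict:
    """Layer 2: a skill LLM emits a typed tool call, constrained by schema."""
    # The schema limits which keys and value types are legal.
    return {"skill": skill, "args": {"customer_id": 42}}

def execute(call: dict) -> dict:
    """Layer 3: deterministic sandboxed code. No LLM in the loop."""
    return {"ok": True, "skill": call["skill"], "result": "done"}

audit_trail = []
req = "send the invoice to customer 42"
skill = orchestrator(req)          # thinks
call = interpreter(skill, req)     # structures
result = execute(call)             # does
audit_trail.append({"request": req, "skill": skill, "call": call, "result": result})
```

Because the layer that thinks never executes, the audit trail records what the code actually did, not what a model claims it did.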

A note on AI accuracy

Even with reasoning and execution separated, large language models can still produce inaccurate, incomplete, or out-of-date responses. They are tools to assist your work, not authoritative sources. Always verify important information — especially numbers, dates, names, and any business or financial action — before relying on AI-generated output.

How is Oya different from a typical AI chatbot?

A typical AI agent is a single LLM doing three jobs in one prompt: thinking, deciding what to do, and doing it. Oya splits those into separate layers:

1. An orchestration LLM reasons about your request and picks which skill to run. It never touches your data directly and never executes actions.

2. A skill-interpreter LLM translates the request into a typed tool call, constrained by a schema so it can't invent values.

3. The skill itself runs as deterministic code in an isolated sandbox: real APIs, real responses, captured in your audit trail.

A whole class of "the AI claimed it did X but actually didn't" errors becomes structurally impossible, because the LLM doing the thinking is not the layer doing the doing.

Why does my AI employee sometimes still give wrong answers?

Even with reasoning and execution separated, the orchestration LLM still chooses tools and frames language. It can pick the wrong skill if your system prompt or skill descriptions are ambiguous. It can summarize a tool result inaccurately when explaining what it did. And it has the usual LLM weaknesses for recent events, niche domains, and anything outside its training data. Separation kills a category of errors; it doesn't kill all of them.

How do I make my AI employee more accurate?

Four high-leverage moves, in roughly this order of impact:

1. Ground answers in the knowledge base. Upload your source documents (PDFs, CSVs, URLs) and the agent retrieves exact passages, with no LLM hop in the lookup.

2. Use the fact-check skill before sending. For any output going to a human or external system, have the agent call fact_check on its draft, passing the raw tool outputs as evidence. It returns {passed, unsupported_claims}; if the check fails, the agent rewrites and re-checks. Costs about $0.0005 per check.

3. Insert approval gates in your routines. For high-stakes actions (money movement, public posts, destructive operations), pause the routine for human confirmation in Slack or Discord before executing.

4. Tighten your skill schemas. Each skill defines a typed tool schema: enum fields can only hold listed values, and integer fields can't hold strings. This kills a whole class of hallucinations at the API boundary.
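The fact-check-before-send loop described above can be sketched as follows. The `fact_check` function here is a toy stand-in (a plain substring check) that only mimics the documented return shape `{passed, unsupported_claims}`; the real skill compares the draft against evidence with a separate model.

```python
# Toy stand-in for the fact-check skill. Only the return shape
# {passed, unsupported_claims} matches the documentation; the real
# skill uses a separate model, not a substring check.

def fact_check(draft: str, evidence: list[str]) -> dict:
    joined = " ".join(evidence)
    # Treat each sentence of the draft as a claim; a claim is "supported"
    # here only if it appears verbatim in the gathered evidence.
    unsupported = [s for s in draft.split(". ") if s and s not in joined]
    return {"passed": not unsupported, "unsupported_claims": unsupported}

evidence = ["Order 1042 shipped on 2024-03-02"]
draft = "Order 1042 shipped on 2024-03-02"

report = fact_check(draft, evidence)
if not report["passed"]:
    # In a routine, the agent would rewrite the draft using
    # report["unsupported_claims"] and re-check before sending.
    pass
```

The point of the pattern is the loop, not the checker: draft, check against raw tool outputs, rewrite on failure, and only then send.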

When should I NOT trust the AI's output without verifying?

When the cost of being wrong is high: money movement, public posts, destructive actions (deletion, kicking, canceling), legal/medical/safety/financial advice, recent events or post-training-data info, or anything that touches customers directly. Treat AI output as a draft for these cases — Oya's audit trail makes it cheap to verify, but you still have to do it.

Does Oya verify AI output for me?

Not automatically; verification across every conceivable workflow isn't a feature any platform can guarantee. What we give you is a workflow built from these pieces:

• Full audit trail: every run records which model was called, what prompt it received, which skills fired, every input and output, and the cost. You can replay any run and check what actually happened vs. what was claimed.

• fact-check skill: a separate model that compares claims in a draft against the raw evidence the agent gathered.

• Approval gates: pause execution before any irreversible step.

• Knowledge base grounding: retrieval over your trusted sources, surfaced as exact snippets.

• Typed skill schemas: structural blocks at the skill boundary.

Verification is a workflow you compose from these building blocks; we don't hide it inside a black box.
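To picture what the typed-skill-schema building block does at the API boundary, here is a minimal validator sketch. The schema format, field names, and skill are all invented for illustration; Oya's actual schema syntax may differ.

```python
# Minimal sketch of schema enforcement at the skill boundary.
# The schema format and the "refund" skill are hypothetical examples.

SCHEMA = {
    "action": {"type": "enum", "values": ["refund", "credit"]},
    "amount_cents": {"type": "int"},
}

def validate(args: dict) -> list[str]:
    """Return a list of violations; an empty list means the call may run."""
    errors = []
    for field, spec in SCHEMA.items():
        value = args.get(field)
        if spec["type"] == "enum" and value not in spec["values"]:
            errors.append(f"{field}: {value!r} not in {spec['values']}")
        if spec["type"] == "int" and not isinstance(value, int):
            errors.append(f"{field}: expected int, got {type(value).__name__}")
    return errors

ok = validate({"action": "refund", "amount_cents": 500})
# A hallucinated tool call is rejected before any code runs:
bad = validate({"action": "delete_everything", "amount_cents": "lots"})
```

An enum field can only hold one of its listed values and an integer field can't hold a string, so an invented value never reaches the sandboxed code.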

Send us a message

Bug, billing question, security report, feature idea, or anything else. We typically reply within one business day.


Or join the community

Real-time help from us and other Oya users.