GitHub Versioning
Sync agents to individual GitHub repos with auto-generated README, usage examples, configuration export, and encrypted secrets.
Overview
Every agent can be linked to a GitHub repository that stays in sync with your agent's configuration. When you update your agent — change skills, modify the soul, add a gateway, create a routine — the repo is automatically updated in the background.
Each repo contains a comprehensive README with working code examples, a full agent configuration export (agent.json), encrypted secrets, and source files for all enabled skills.
- Account-level GitHub connection via OAuth (repo scope)
- One repo per agent, auto-created as oya-agent-{name}
- Auto-sync on every mutation (deploy, skills, gateways, routines, triggers, soul, memories, scratchpad)
- README with cURL, Python, TypeScript, Swift, Kotlin, streaming, and widget examples
- agent.json with full config for import
- Encrypted secrets.enc for credentials (encrypted with your password)
- Skill source files (SKILL.md + script.py) for each enabled skill
Connecting GitHub
GitHub is connected at the account level — one connection covers all your agents. Navigate to Settings and click Connect GitHub. This initiates a standard OAuth flow with repo scope.
- Only the repo scope is requested — Oya creates and pushes to repos on your behalf.
- The connection is stored securely. You can disconnect at any time from Settings.
- Disconnecting removes all repo links but does not delete repos from GitHub.
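Behind the Connect GitHub button is GitHub's standard OAuth web flow. As a minimal sketch, this is roughly the authorize URL the browser is redirected to (the endpoint is GitHub's documented OAuth authorize URL; the client ID and redirect URI below are placeholders, not Oya's real values):

```python
from urllib.parse import urlencode

# GitHub's documented OAuth authorize endpoint. The client_id and
# redirect_uri used by Oya are not public; these are placeholders.
AUTHORIZE_URL = "https://github.com/login/oauth/authorize"

def authorize_url(client_id: str, redirect_uri: str) -> str:
    """Build the URL the user is sent to; only the repo scope is requested."""
    params = {
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": "repo",
    }
    return f"{AUTHORIZE_URL}?{urlencode(params)}"

print(authorize_url("CLIENT_ID", "https://example.com/callback"))
```

After the user approves, GitHub redirects back with a temporary code that the server exchanges for an access token; that token is what lets Oya create and push to repos on your behalf.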
Repo Structure
When an agent is synced, the repo is created (or updated) with the following structure:
oya-agent-my-assistant/
├── README.md # Agent description + usage examples
├── agent.json # Full agent configuration (no secrets)
├── secrets.enc # Credentials, encrypted with your password
├── script.py # Generated script (automation agents only)
├── examples/
│ ├── curl.sh # cURL examples with thread continuation
│ ├── chat.py # Python (OpenAI SDK) with streaming
│ ├── chat.ts # TypeScript (OpenAI SDK)
│ ├── chat.swift # Swift (MacPaw OpenAI SDK)
│ ├── chat.kt # Kotlin (aallam openai-client)
│ └── widget.html # Embeddable chat widget
└── skills/
├── web-search/
│ ├── SKILL.md # Skill definition
│ └── script.py # Skill implementation
└── gmail-send/
├── SKILL.md
└── script.py
agent.json
The agent.json file contains a comprehensive export of the agent's configuration, designed for future import support:
- Agent metadata — name, mode, deploy status, chat model
- Soul — persona, brand, behavior rules, welcome message
- System prompt and scratchpad
- Skills — with config schema and resource requirements
- Gateways — platform, name, active status (credentials encrypted separately)
- Routines — schedule, prompt, output channel
- Triggers — type, name, config
- MCP connections — name, transport, URL, discovered tools
- Memories — type, content, metadata
- Knowledge base — folder, filename, mime type, summary
- Tools config — enabled tools (script-mode agents)
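To make the shape concrete, here is an illustrative fragment of what such an export might look like. The exact schema is not published; the field names below are guesses derived from the list above, not Oya's actual format:

```json
{
  "name": "my-assistant",
  "mode": "chat",
  "deployed": true,
  "chat_model": "gemini/gemini-2.0-flash",
  "soul": {
    "persona": "Friendly support assistant",
    "behavior_rules": ["Always answer in the user's language"],
    "welcome_message": "Hi! How can I help you today?"
  },
  "skills": [
    { "slug": "web-search", "config_schema": {}, "resources": [] }
  ],
  "gateways": [
    { "platform": "slack", "name": "Team Slack", "active": true }
  ],
  "routines": [],
  "triggers": [],
  "memories": []
}
```

Note that, per the list above, gateway credentials are not in this file; they live in secrets.enc.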
Encrypted Secrets
The secrets.enc file contains all credentials, encrypted so they are not stored in plain text in the repo.
What's encrypted:
- Gateway configs — full config including tokens, OAuth credentials, team IDs, user emails, cloud IDs
- Skill credentials — API keys and tokens configured per-skill
- MCP headers — authentication headers for MCP server connections
- Agent secrets — any secrets stored in agent config
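The exact cipher and key-derivation scheme are not specified here. Conceptually, password-based encryption first stretches the password into a symmetric key with a key-derivation function before any encryption happens. A generic standard-library sketch of that derivation step (illustrative only, not Oya's actual implementation):

```python
import hashlib
import os

def derive_key(password: str, salt: bytes) -> bytes:
    # Stretch the password into a 32-byte symmetric key with scrypt.
    # The random salt must be stored alongside the ciphertext so the
    # same key can be re-derived at decryption time.
    return hashlib.scrypt(
        password.encode(), salt=salt, n=2**14, r=8, p=1, dklen=32
    )

salt = os.urandom(16)
key = derive_key("correct horse battery staple", salt)
print(len(key))  # 32
```

Because the key is derived from your password, the secrets.enc file is useless to anyone who can read the repo but does not know the password.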
Usage Examples
The README and example files include working code for all supported platforms. Each example uses the agent's actual model and shows thread continuation.
cURL
# First message — starts a new thread
curl -X POST https://oya.ai/api/v1/chat/completions \
-H "Authorization: Bearer a2a_your_key_here" \
-H "Content-Type: application/json" \
-d '{"model":"gemini/gemini-2.0-flash","messages":[{"role":"user","content":"Hello"}]}'
# Continue a conversation using thread_id from the first response:
curl -X POST https://oya.ai/api/v1/chat/completions \
-H "Authorization: Bearer a2a_your_key_here" \
-H "Content-Type: application/json" \
-d '{"model":"gemini/gemini-2.0-flash","messages":[{"role":"user","content":"Follow up"}],"thread_id":"THREAD_ID"}'
Python
# pip install openai
from openai import OpenAI
client = OpenAI(
api_key="a2a_your_key_here",
base_url="https://oya.ai/api/v1",
)
# First message — starts a new thread
response = client.chat.completions.create(
model="gemini/gemini-2.0-flash",
messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
# Continue the conversation using thread_id
thread_id = response.thread_id
response = client.chat.completions.create(
model="gemini/gemini-2.0-flash",
messages=[{"role": "user", "content": "Follow up question"}],
extra_body={"thread_id": thread_id},
)
print(response.choices[0].message.content)
TypeScript
// npm install openai
// Run with: npx tsx script.ts
import OpenAI from "openai";
const client = new OpenAI({
apiKey: "a2a_your_key_here",
baseURL: "https://oya.ai/api/v1",
});
async function main() {
// First message — starts a new thread
const response = await client.chat.completions.create({
model: "gemini/gemini-2.0-flash",
messages: [{ role: "user", content: "Hello" }],
});
console.log(response.choices[0].message.content);
// Continue the conversation using thread_id
const threadId = (response as any).thread_id;
const followUp = await client.chat.completions.create({
model: "gemini/gemini-2.0-flash",
messages: [{ role: "user", content: "Follow up question" }],
// @ts-ignore — custom field
thread_id: threadId,
});
console.log(followUp.choices[0].message.content);
}
main();
Swift
// Package.swift:
// .package(url: "https://github.com/MacPaw/OpenAI.git", from: "0.4.0")
// Run with: swift run
import Foundation
import OpenAI
@main
struct Main {
static func main() async throws {
let config = OpenAI.Configuration(
token: "a2a_your_key_here",
host: "oya.ai",
scheme: "https"
)
let client = OpenAI(configuration: config)
let query = ChatQuery(
messages: [.user(.init(content: .string("Hello")))],
model: "gemini/gemini-2.0-flash"
)
let result = try await withCheckedThrowingContinuation { continuation in
_ = client.chats(query: query) { continuation.resume(with: $0) }
}
print(result.choices.first?.message.content ?? "")
}
}
Kotlin
// build.gradle.kts dependencies:
// implementation("com.aallam.openai:openai-client:4.0.1")
// implementation("io.ktor:ktor-client-cio:3.0.0")
import com.aallam.openai.api.chat.ChatCompletionRequest
import com.aallam.openai.api.chat.ChatMessage
import com.aallam.openai.api.chat.ChatRole
import com.aallam.openai.api.model.ModelId
import com.aallam.openai.client.OpenAI
import com.aallam.openai.client.OpenAIHost
import kotlinx.coroutines.runBlocking
fun main() = runBlocking {
val openai = OpenAI(
token = "a2a_your_key_here",
host = OpenAIHost(baseUrl = "https://oya.ai/api/v1/")
)
val completion = openai.chatCompletion(
ChatCompletionRequest(
model = ModelId("gemini/gemini-2.0-flash"),
messages = listOf(ChatMessage(role = ChatRole.User, content = "Hello"))
)
)
println(completion.choices.first().message.content)
}
Streaming
# Python: reuses the client configured in the Python example above
stream = client.chat.completions.create(
model="gemini/gemini-2.0-flash",
messages=[{"role": "user", "content": "Tell me about AI agents"}],
stream=True,
)
for chunk in stream:
delta = chunk.choices[0].delta.content
if delta:
print(delta, end="", flush=True)
Embeddable Widget
<!-- Oya Chat Widget -->
<script
src="https://oya.ai/widget.js"
data-agent-id="your-agent-id"
data-api-key="a2a_your_key_here"
data-title="Chat with us"
data-color="#6366f1"
data-welcome="Hi! How can I help you today?"
></script>
Auto Sync
The repo is automatically synced in the background whenever you mutate agent state. Sync triggers include:
- Agent updates — name, system prompt, scratchpad, soul, deploy
- Skills — add, update, remove, sync
- Gateways — create, update, delete
- Routines — create, update, delete
- Triggers — create, update, delete
- Memories — add, delete
You can also trigger a manual sync from the agent's GitHub panel at /agents/[id]/github.
Managing Repos
- View repo — The GitHub panel shows the repo URL, last sync time, sync status, and commit SHA.
- Manual sync — Click "Sync now" to push the latest state.
- Unlink — Removes the link between the agent and the repo. The repo remains on GitHub.
- Disconnect — Remove the GitHub connection entirely from Settings. All repo links are removed (repos stay on GitHub).
Supported Models
All generated examples use the agent's configured model. Supported models:
- gemini/gemini-2.0-flash
- gemini/gemini-2.5-flash
- gemini/gemini-2.5-pro
- gemini/gemini-3-flash-preview
- gemini/gemini-3-pro-preview
- anthropic/claude-sonnet-4-6
- anthropic/claude-haiku-4-5-20251001