CogOS resolves configuration from multiple sources (highest priority wins):
  1. Environment variables (API_KEY, MODEL, BASE_URL)
  2. YAML config file (configs/cogos.yaml, created by cogos init)
  3. Dataclass defaults
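Conceptually, the layered lookup behaves like this (a minimal sketch, not CogOS internals; the helper name is illustrative):

```python
import os

def resolve_setting(env_name, yaml_value, default):
    """Illustrative three-layer lookup: env var > YAML value > default."""
    env_value = os.environ.get(env_name)
    if env_value:                       # highest priority: environment variable
        return env_value
    if yaml_value not in (None, ""):    # next: value from configs/cogos.yaml
        return yaml_value
    return default                      # lowest priority: dataclass default

# A set env var overrides the YAML value; an empty YAML value falls through.
os.environ["MODEL"] = "gpt-5.4"
print(resolve_setting("MODEL", "other-model", "default-model"))  # gpt-5.4
```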

Config File

Generated by cogos init:
llm:
  api_key: ""                            # or use API_KEY env var
  model: "gpt-5.4"                       # any OpenAI-compatible model
  base_url: "https://api.openai.com/v1"  # endpoint URL
  request_delay: 0.5

agent:
  max_iterations: 5
  temperature: 0.0
  max_tokens: 3000
  disable_thinking: true
  schema_update_rounds: 0               # auto-update every N chat rounds (0 = disabled)

chatbot:
  max_tokens: 4096
  context_rounds: 10                    # recent chat rounds as context (0 = single-turn)

persistence:
  session_dir: "./sessions"
  schemas_dir: "./schemas"              # standalone schema files for cross-session sharing

schema:
  inspect_max_depth: 3                  # depth for schema overview in prompts
  inspect_show_values: false            # include values in overview (costs tokens)

templates:
  templates_dir: "./templates"           # user-defined template JSON files
  custom_templates_dir: "./templates/custom"  # user custom templates (git-ignored)
  schema_template: ""                    # auto-load on startup (e.g. "general")

listen:
  listen_mode: false                     # enable via CLI --listen or set to true
  build_every_n_turns: 5                 # run build CM every N conversation turns

helpers:
  enable_forget_handling: false          # strip forgotten phrases from recalled data
  enable_sensitive_handling: false       # add PII warning section to chatbot prompt

server:
  host: "0.0.0.0"
  port: 8000

Config File Resolution

Priority | Path                                  | Description
1        | Explicit path (-c / from_file("...")) | User-specified
2        | configs/cogos.yaml                    | User config (created by cogos init)
3        | configs/cogos.default.yaml            | Default template (tracked by git)

Key Settings

Config Key                        | Description                                                                                                                             | Default
chatbot.context_rounds            | Number of recent chat rounds included as conversation context. When exceeded, schema memory is auto-injected. Set to 0 for single-turn. | 10
agent.schema_update_rounds        | Auto-update schema from chat context every N rounds. Set to 0 to disable.                                                               | 0
persistence.schemas_dir           | Directory for standalone schema files shared across sessions. Backups are stored in the backups/ subdirectory.                          | ./schemas
listen.build_every_n_turns        | In listen mode, run the build CM every N conversation turns.                                                                            | 5
schema.inspect_max_depth          | Depth for the schema overview in CM prompts.                                                                                            | 3
helpers.enable_forget_handling    | Strip forgotten phrases from recalled data in chat().                                                                                   | false
helpers.enable_sensitive_handling | Add a PII warning section to the chatbot prompt.                                                                                        | false
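As a rough illustration of what chatbot.context_rounds controls (a hypothetical helper, not CogOS internals):

```python
def trim_context(history, context_rounds):
    """Keep only the most recent rounds of a chat history.

    history: list of (user_msg, assistant_msg) round tuples.
    context_rounds == 0 means single-turn: no prior rounds are kept.
    """
    if context_rounds <= 0:
        return []
    return history[-context_rounds:]

rounds = [("hi", "hello"), ("how are you?", "fine"), ("bye", "goodbye")]
trim_context(rounds, 2)  # keeps only the last two rounds
```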

Supported LLM Providers

Any OpenAI-compatible API works:
Provider   | base_url                     | model example
OpenAI     | https://api.openai.com/v1    | gpt-5.4
OpenRouter | https://openrouter.ai/api/v1 | google/gemini-3.1-flash-lite-preview
vLLM       | http://localhost:8000/v1     | meta-llama/Llama-3-8b
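For example, pointing CogOS at a local vLLM server needs only the three environment variables from the resolution order above (the API_KEY value here is a placeholder; local servers typically ignore it):

```shell
# Configure CogOS for a local vLLM endpoint via environment variables
export BASE_URL="http://localhost:8000/v1"
export MODEL="meta-llama/Llama-3-8b"
export API_KEY="not-needed-locally"   # placeholder value
```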

Loading Config in Python

from cogos import CogOSConfig

config = CogOSConfig.from_file()                       # auto-resolve + env vars
config = CogOSConfig.from_file("configs/cogos.yaml")   # explicit path
config = CogOSConfig.from_env()                        # env vars only
config = CogOSConfig(model="gpt-5.4")                  # direct construction