1. Clone the repository
git clone https://github.com/cogos-ai/CogOS.git
cd CogOS
2. Create environment & install dependencies
# Create and activate a Python 3.10 conda environment
conda create -n cogos python=3.10 -y
conda activate cogos
# Install dependencies
pip install -r requirements.txt
# Install CogOS as a package (registers the `cogos` CLI command)
pip install -e .
For development mode with testing and linting tools, run `pip install -e ".[dev]"` instead.
3. Initialize the project
# Generates config + template files
cogos init
This creates the files CogOS needs to run:
| File | Purpose |
| --- | --- |
| `configs/cogos.yaml` | Your config; edit this to set LLM provider, model, etc. |
| `templates/general.json` | Built-in general template (editable) |
| `templates/roleplay.json` | Built-in roleplay template (editable) |
Then configure your LLM provider (pick one):
**YAML Config**

Open `configs/cogos.yaml` and set `api_key`, `model`, and `base_url`:

llm:
  api_key: "your-api-key"
  model: "gpt-4o"
  base_url: "https://api.openai.com/v1"

**Environment Variables**

cp .env.example .env
# Edit .env: fill in API_KEY, MODEL, BASE_URL
Environment variables override YAML config values.
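The precedence rule can be sketched as follows. Note that `load_llm_config`, `ENV_MAP`, and the exact env-var-to-YAML mapping are illustrative assumptions for this sketch, not CogOS's actual loader:

```python
import os

# Hypothetical sketch of env-over-YAML precedence (not CogOS's real loader).
# Key and env var names mirror the quickstart: api_key/API_KEY, etc.
ENV_MAP = {"api_key": "API_KEY", "model": "MODEL", "base_url": "BASE_URL"}

def load_llm_config(yaml_values: dict) -> dict:
    """Start from the YAML values, then let environment variables win."""
    config = dict(yaml_values)
    for key, env_name in ENV_MAP.items():
        if env_name in os.environ:
            config[key] = os.environ[env_name]
    return config

yaml_values = {"api_key": "your-api-key", "model": "gpt-4o",
               "base_url": "https://api.openai.com/v1"}
os.environ["MODEL"] = "gpt-4o-mini"  # simulate an override
print(load_llm_config(yaml_values)["model"])  # env var wins over YAML
```

Keeping secrets like `API_KEY` in `.env` rather than in the YAML file also keeps them out of version control.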
4. Verify installation
These examples run without an API key:
# Schema operations
python examples/01_basic_schema.py
# Input converters
python examples/03_converters.py
5. Run with LLM
Requires a configured API key:
# Full chat with memory
python examples/02_chat_with_memory.py
# Or start the web server
cogos serve
# Open http://localhost:8000
# Or run as a background daemon
cogos serve start
cogos serve status
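A quick way to confirm the server came up is to probe its TCP port. This helper is a generic standard-library sketch (not part of CogOS); port 8000 is taken from the quickstart above, so adjust it if you changed the server's port:

```python
import socket

def is_listening(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# After `cogos serve` is running, this should report True:
print(is_listening("127.0.0.1", 8000))
```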
Next Steps
- Python API: Learn how to use CogOS programmatically.
- CLI Reference: Explore the full command-line interface.
- Service Management: Run CogOS as a daemon with auto-start on boot.
- Schema Templates: Use preset templates or create your own.
- Configuration: Customize LLM provider, persistence, and more.