
Vector Memory System

CogTog's persistent memory system enables agents to learn from past experiences and recall relevant context across sessions using vector embeddings.

System Overview

How Vector Memory Works

Semantic Search

Memories are stored as vector embeddings, enabling semantic similarity search. Agents can recall relevant information based on meaning, not just keywords.

1. Store Memory
The agent calls remember() with content. The text is converted to a vector embedding using local Ollama or OpenAI.

2. Query Similar Memories
When the agent calls recall(), the query is embedded and compared to stored vectors using cosine similarity.

3. Return Ranked Results
The most relevant memories are returned sorted by similarity score, filtered by type, project, and importance.
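The ranking step above can be sketched in TypeScript. This is an illustrative sketch, not CogTog's actual implementation: the Memory shape and the cosineSimilarity/rankMemories helpers are assumed names.

```typescript
// Illustrative memory record; the real stored shape may differ.
interface Memory {
  content: string;
  embedding: number[];
  importance: number;
}

// Cosine similarity between two equal-length vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Return memories sorted by similarity to the query embedding,
// filtered by a minimum importance and capped at `limit` results.
function rankMemories(
  queryEmbedding: number[],
  memories: Memory[],
  minImportance = 0,
  limit = 10
): Memory[] {
  return memories
    .filter((m) => m.importance >= minImportance)
    .map((m) => ({ m, score: cosineSimilarity(queryEmbedding, m.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, limit)
    .map((x) => x.m);
}
```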

Local Embeddings

CogTog uses local Ollama (the nomic-embed-text model) by default for privacy. It falls back to OpenAI embeddings if Ollama is unavailable, and to deterministic hash-based embeddings for fully offline use.

Memory Types

Different types of memories help agents categorize and prioritize information effectively.

fact (base importance: 0.5)
Discovered facts about the project or codebase (e.g., "The API uses JWT authentication")

decision (base importance: 0.6)
A decision that was made and its reasoning (e.g., "Chose React over Vue due to team experience")

pattern (base importance: 0.6)
A code pattern or convention discovered (e.g., "All components use styled-components")

success (base importance: 0.7; learning)
A successful approach to remember (e.g., "Running tests before commit prevented breaking main")

mistake (base importance: 0.8; learning)
A past mistake to avoid (e.g., "Avoid: Forgetting to update package.json version")

preference (base importance: 0.5)
A user or project preference (e.g., "Prefers tabs over spaces")

user_preference / user_context (base importance: 0.9 / 0.85)
Personal user information for personalization (e.g., "User is a senior frontend developer specializing in React")

context / relationship (base importance: 0.5)
General context information or relationships between entities
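The base-importance values above amount to a lookup table. A minimal TypeScript sketch follows; the BASE_IMPORTANCE map and defaultImportance helper are illustrative names rather than CogTog's real API, though the values mirror the list above.

```typescript
// Base importance per memory type, as documented above.
const BASE_IMPORTANCE: Record<string, number> = {
  fact: 0.5,
  decision: 0.6,
  pattern: 0.6,
  success: 0.7,
  mistake: 0.8,
  preference: 0.5,
  user_preference: 0.9,
  user_context: 0.85,
  context: 0.5,
  relationship: 0.5,
};

// Hypothetical helper: a memory stored without an explicit importance
// could fall back to its type's base value.
function defaultImportance(type: string): number {
  return BASE_IMPORTANCE[type] ?? 0.5;
}
```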

Memory Tools

remember


Store information in long-term memory. Agents use this to remember important discoveries, patterns, and learnings.

content: "The API rate limit is 100 requests/minute"
type: "fact"
importance: 0.7
topics: ["api", "rate-limiting"]
source: "agent_discovery"

recall


Search for relevant memories based on semantic similarity. Returns ranked results with optional filtering.

query: "How do I handle API errors?"
types: ["mistake", "success", "pattern"]
minImportance: 0.5
limit: 10
// Returns memories about error handling patterns

search_memory


Advanced memory search with project and session filters, used to build task context automatically. Supported filters:

projectId
sessionId
agentRole
includeRelated
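Taken together, the three tools might be invoked like this. Everything here is illustrative: callTool is a stand-in dispatcher rather than CogTog's real runtime, and the projectId/sessionId values are made-up examples; the argument shapes follow the examples in this section.

```typescript
// Record of calls, standing in for a real tool runtime.
const toolCalls: Array<{ name: string; args: Record<string, unknown> }> = [];

// Stub dispatcher (assumption): a real one would route to the tool.
function callTool(name: string, args: Record<string, unknown>): void {
  toolCalls.push({ name, args });
}

// Store a discovered fact.
callTool("remember", {
  content: "The API rate limit is 100 requests/minute",
  type: "fact",
  importance: 0.7,
  topics: ["api", "rate-limiting"],
  source: "agent_discovery",
});

// Recall relevant learnings before acting.
callTool("recall", {
  query: "How do I handle API errors?",
  types: ["mistake", "success", "pattern"],
  minImportance: 0.5,
  limit: 10,
});

// Scoped search to build task context (IDs are made-up examples).
callTool("search_memory", {
  projectId: "demo-project",
  sessionId: "session-1",
  agentRole: "frontend",
  includeRelated: true,
});
```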

Embedding Service

The embedding service converts text into vector representations. CogTog tries multiple embedding sources with automatic fallback:

1. Local Ollama (default)
Uses the nomic-embed-text model via a local Ollama instance. Private, fast, and free.

2. OpenAI API (fallback)
Uses text-embedding-ada-002 if Ollama is unavailable. Reliable and cloud-hosted, but requires an API key.

3. Hash-Based (offline)
Deterministic hash-based pseudo-semantic embeddings for offline use. Always available and fully offline, but lower quality.

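
The three-tier fallback can be sketched as a chain of providers. This is a sketch under assumptions: the Ollama and OpenAI tiers are represented abstractly as async functions, and hashEmbed is one plausible way to build a deterministic, pseudo-semantic offline embedding (character trigrams hashed into a fixed-size unit vector) — CogTog's real hash scheme may differ.

```typescript
type Embedder = (text: string) => Promise<number[]>;

// Deterministic offline fallback: hash character trigrams into a
// fixed-size vector, then normalize to unit length.
function hashEmbed(text: string, dim = 64): number[] {
  const vec = new Array(dim).fill(0);
  for (let i = 0; i < text.length - 2; i++) {
    const gram = text.slice(i, i + 3);
    let h = 0;
    for (const ch of gram) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
    vec[h % dim] += 1;
  }
  const norm = Math.sqrt(vec.reduce((s: number, v: number) => s + v * v, 0)) || 1;
  return vec.map((v: number) => v / norm);
}

// Try each provider in order (e.g. Ollama, then OpenAI); fall back
// to the always-available hash embedding if all of them fail.
async function embedWithFallback(
  text: string,
  providers: Embedder[]
): Promise<number[]> {
  for (const provider of providers) {
    try {
      return await provider(text);
    } catch {
      // Provider unavailable; try the next tier.
    }
  }
  return hashEmbed(text);
}
```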
Automatic Caching

Embeddings are cached to avoid recomputing embeddings for the same text. The cache automatically clears when needed to save memory.
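
A minimal cache along these lines, assuming exact-text keys and a simple clear-on-cap eviction policy (the docs only say the cache clears "when needed", so the policy here is an assumption):

```typescript
// Hypothetical embedding cache keyed by the exact input text.
class EmbeddingCache {
  private cache = new Map<string, number[]>();
  constructor(private maxEntries = 1000) {}

  get(text: string): number[] | undefined {
    return this.cache.get(text);
  }

  set(text: string, embedding: number[]): void {
    // Simple reset when the cap is hit (assumed policy, not CogTog's).
    if (this.cache.size >= this.maxEntries) {
      this.cache.clear();
    }
    this.cache.set(text, embedding);
  }
}
```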

Persistence Across Sessions

Memories are automatically saved to disk and persist across application restarts. This enables true long-term learning.

Automatic Persistence: memories are saved to disk immediately when created.
Project Isolation: memories can be scoped to specific projects.
Automatic Pruning: low-importance memories older than 90 days are pruned automatically.

Memory Limits

Default limit: 10,000 memories. When limit is reached, memories with importance below 0.2 are pruned first, then oldest memories.
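
The pruning policy can be sketched as a two-pass filter. The StoredMemory shape and prune helper are assumed names; the thresholds (10,000 cap, 0.2 importance floor) come from the text above.

```typescript
interface StoredMemory {
  content: string;
  importance: number;
  createdAt: number; // epoch milliseconds
}

// When over the limit: drop memories with importance below 0.2 first,
// then the oldest, until the store is back under the cap.
function prune(memories: StoredMemory[], maxMemories = 10_000): StoredMemory[] {
  if (memories.length <= maxMemories) return memories;

  // Pass 1: remove low-importance memories.
  let kept = memories.filter((m) => m.importance >= 0.2);

  // Pass 2: if still over the limit, keep only the newest entries.
  if (kept.length > maxMemories) {
    kept = [...kept]
      .sort((a, b) => b.createdAt - a.createdAt) // newest first
      .slice(0, maxMemories);
  }
  return kept;
}
```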

How Agents Use Memory

Automatic Context Building

Before starting tasks, agents automatically recall relevant memories to build context. Past mistakes and successes inform current decisions.

Learning from Outcomes

After task completion, agents automatically store insights as success/mistake memories. This creates a self-improving feedback loop.

Personalization

User preferences and context are stored in memory, allowing agents to adapt their behavior and communication style to each user.

Intelligent Recall

Agents proactively search memory when encountering similar situations, avoiding repeated mistakes and reusing successful patterns.
