How Amon Hen Works

An inside look at how your knowledge becomes grounded, contextual answers - and how the system gets smarter every time you use it.

On This Page

  1. The Core Idea: Context-First Advice
  2. Where Knowledge Comes From
  3. How a Question Gets Answered
  4. How the System Learns
  5. Where Data Lives
  6. Knowledge Versioning
  7. The Role of OpenAI

The Core Idea: Context-First Advice

Most AI tools generate answers from broad, general training data. Amon Hen works differently. Before answering any question, it assembles a project-specific context - a structured collection of rules, decisions, notes, and reference material - and instructs the AI model to reason exclusively within that context.

This means answers reflect your project's actual rules and decisions rather than generic best practices.

Think of it as a senior advisor who has read every guideline, decision record, and operating principle relevant to your work - and answers accordingly.

Where Knowledge Comes From

Amon Hen's knowledge base is built from four sources, each serving a different stage of a project's life.

1. Starter Packs - Bootstrapping

When a project is first created, it can be seeded with a starter pack - a pre-built collection of domain knowledge organized by industry vertical. These give the system an immediate foundation of expertise before any conversations have taken place.

2. Manual Knowledge Entry

Users and administrators can directly add knowledge items to a project. Each item is one of four types:

Rule - A standing principle, constraint, or guideline that always applies.

Decision - A specific choice made, paired with documented rationale explaining why.

Note - An observation, clarification, or contextual fact useful for future reference.

Code Context - Reference material, templates, formulas, or structured data the system can draw on.

Every knowledge item is scoped - it belongs to a specific project, to a workspace, or applies globally - so the system knows exactly when to use it.

3. Document Import - Bulk Knowledge Ingestion

For projects with existing documentation, Amon Hen can extract knowledge directly from uploaded files. Upload PDFs, Word documents (.docx), or plain text files, and the system will parse the content and use AI to identify rules, decisions, notes, and reference material.

The extraction pipeline works in three stages:

  1. Upload - Files are validated (10 MB max per file, up to 5 per batch) and stored alongside the project
  2. Extract - AI analyzes the document text against the project's existing knowledge base, identifying new items and ignoring duplicates
  3. Review - Extracted items are placed in a draft proposal for human review before becoming active context. Nothing is auto-committed from documents
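The upload-stage checks described above can be sketched as a small validation function. This is a minimal sketch, not the actual implementation; the function name, tuple shape, and allowed-extension set are assumptions based on the file types and limits the text states.

```python
MAX_FILE_BYTES = 10 * 1024 * 1024   # 10 MB per file, per the limits above
MAX_BATCH_SIZE = 5                  # files per batch
ALLOWED_EXTENSIONS = {".pdf", ".docx", ".txt"}

def validate_batch(files):
    """Reject a batch before any parsing happens.

    `files` is a list of (filename, size_in_bytes) tuples (illustrative shape).
    Returns a list of error strings; an empty list means the batch passes.
    """
    errors = []
    if len(files) > MAX_BATCH_SIZE:
        errors.append(f"batch has {len(files)} files; max is {MAX_BATCH_SIZE}")
    for name, size in files:
        ext = name[name.rfind("."):].lower() if "." in name else ""
        if ext not in ALLOWED_EXTENSIONS:
            errors.append(f"{name}: unsupported type {ext or '(none)'}")
        if size > MAX_FILE_BYTES:
            errors.append(f"{name}: {size} bytes exceeds 10 MB limit")
    return errors
```

Validating before storage keeps oversized or unsupported files out of the extraction pipeline entirely.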

URL ingestion is also supported for web-based documentation. The system fetches the page and feeds it through the same extraction pipeline.

Document import is designed for onboarding: load existing policies, standards, procedures, or reference material and let the system organize it into structured knowledge items you can review and approve.

4. Conversational Learning - Automatic Extraction

This is the most distinctive capability. Amon Hen learns from its own advisory conversations.

When a user asks a question and rates the response as useful, the system triggers a background knowledge extraction process. AI analyzes the exchange, identifies new knowledge not already in the project's context, assigns a confidence score, and checks for duplicates.

Extracted items are then routed based on confidence:

| Confidence | Routing | What Happens |
| --- | --- | --- |
| 85%+ | Auto-Committed | Immediately added to the knowledge base and available in future answers. |
| 60–84% | Proposed for Review | Saved as a draft proposal. A user must approve it before it becomes active. |
| < 60% | Discarded | Not reliable enough to store. Quietly dropped. |
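The confidence routing above reduces to a small decision function. A minimal sketch - the function and return labels are illustrative, but the thresholds come straight from the table.

```python
def route_extracted_item(confidence):
    """Route a freshly extracted knowledge item by its confidence score (0.0-1.0)."""
    if confidence >= 0.85:
        return "auto-commit"   # immediately active in future answers
    if confidence >= 0.60:
        return "propose"       # draft proposal, needs human approval
    return "discard"           # quietly dropped
```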

This creates a virtuous cycle: good advice leads to feedback, feedback triggers learning, and learning improves future advice.

How a Question Gets Answered

When you submit a question, five things happen behind the scenes - typically in under two seconds.

1. Context Assembly

The system begins with your question. All active rules are gathered first - rules are always included in full because they represent standing constraints. For decisions, notes, and code context, the system embeds your question and retrieves the most relevant items by semantic similarity. Superseded items are filtered out automatically. The result is a focused, question-specific context window rather than a dump of everything in the project.
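The assembly logic above - rules included in full, other items ranked by similarity, superseded items dropped - can be sketched as follows. The dictionary schema and cosine-similarity ranking are assumptions for illustration; the source only states that retrieval is by semantic similarity.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def assemble_context(items, question_embedding, k=3):
    """Build a question-specific context window.

    `items` are dicts with 'type', 'embedding', and 'superseded' keys
    (illustrative schema). Rules are included unconditionally; decisions,
    notes, and code context compete for the top-k slots by similarity.
    """
    live = [it for it in items if not it["superseded"]]          # filter superseded
    rules = [it for it in live if it["type"] == "rule"]          # always included
    others = [it for it in live if it["type"] != "rule"]
    others.sort(key=lambda it: cosine(it["embedding"], question_embedding),
                reverse=True)
    return rules + others[:k]
```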

2. Profile Selection

Amon Hen picks the best response style based on historical feedback: Concise for quick decisions, Analytical for complex reasoning, or Strict-Literal for compliance-sensitive work. The system self-calibrates over time.
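One way the self-calibration could work is picking the profile with the best average historical score. This is a sketch under that assumption - the source says only that selection is driven by feedback metrics, not how they are aggregated.

```python
def select_profile(metrics):
    """Pick a response profile from historical feedback.

    `metrics` maps profile name -> list of past feedback scores (0.0-1.0),
    an illustrative shape for the stored profile metrics.
    """
    scored = {p: sum(s) / len(s) for p, s in metrics.items() if s}
    if not scored:
        return "Analytical"   # assumed default when there is no history yet
    return max(scored, key=scored.get)
```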

3. Prompt Construction

A system prompt is assembled with the selected profile instructions, depth settings, and the full project context formatted as authoritative reference material. Token budgeting ensures everything fits within the model's processing window.
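Token budgeting could look like the greedy sketch below: rules go in unconditionally as standing constraints, and remaining items are added in relevance order until the budget is spent. The greedy strategy and whitespace token counter are assumptions, not the actual algorithm.

```python
def fit_to_budget(rules, ranked_items, budget,
                  count_tokens=lambda s: len(s.split())):
    """Trim context to a token budget (illustrative sketch).

    `rules` are always included; `ranked_items` must already be sorted by
    relevance so the most useful items are admitted first.
    """
    selected, used = [], 0
    for text in rules + ranked_items:
        cost = count_tokens(text)
        if text in rules or used + cost <= budget:
            selected.append(text)
            used += cost
    return selected
```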

4. AI Generation

The assembled prompt, conversation history, and your question are sent to OpenAI's GPT-4.1-mini. A low temperature (0.3) keeps answers consistent rather than creative. The model can also search the web if the project context doesn't fully cover the question.
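The request that gets sent can be sketched as a payload builder. The field names follow OpenAI's chat-message format, and the model name and temperature come from the text; everything else (argument names, exact message layout) is illustrative.

```python
def build_request(profile_instructions, context_block, history, question):
    """Assemble a chat-completion payload (sketch, not the real implementation).

    `history` is a list of prior {"role": ..., "content": ...} turns.
    The system message carries both the profile instructions and the
    assembled project context.
    """
    messages = [{"role": "system",
                 "content": profile_instructions + "\n\n" + context_block}]
    messages += history                                      # prior turns
    messages.append({"role": "user", "content": question})   # current question
    return {"model": "gpt-4.1-mini", "temperature": 0.3, "messages": messages}
```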

5. Response Delivery

The AI's response is parsed to extract the main answer and follow-up hints. The full exchange - question, answer, and which context items were used - is saved to the database. A unique decision ID is returned so you can provide feedback later.

How the System Learns

Learning in Amon Hen is gated entirely by user feedback. Extraction never runs unprompted - the system learns only when you tell it an answer was useful.

The Feedback Loop

Ask a question → Receive advice → Rate the answer → Knowledge extracted → Context expands

Feedback Scoring

| Outcome | Score | Triggers Learning? |
| --- | --- | --- |
| Accepted | 1.0 | Yes |
| Minor Edit | 0.7 | Yes |
| Major Edit | 0.3 | No |
| Rejected | 0.0 | No |

What Gets Extracted

The extraction process looks for new principles, decisions with rationale, contextual facts useful in future conversations, and patterns the knowledge base doesn't already capture. It avoids trivial information, anything already known (via content-hash deduplication), and meta-commentary.
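Content-hash deduplication can be sketched like this. The normalization rules (lowercase, collapsed whitespace) and the choice of SHA-256 are assumptions - the source says only that deduplication is hash-based.

```python
import hashlib

def content_hash(text):
    """Hash a knowledge item's content so trivially reworded duplicates collide."""
    normalized = " ".join(text.lower().split())   # assumed normalization
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def is_duplicate(candidate, existing_hashes):
    """Check a candidate item against the hashes of everything already stored."""
    return content_hash(candidate) in existing_hashes
```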

Safety Rails

Learning is bounded by several guardrails: extraction jobs are rate-limited, content-hash deduplication blocks repeat items, mid-confidence proposals require human review before activation, and every mutation is recorded in the audit log.

Where Data Lives

All persistent data is stored in a PostgreSQL database hosted on Neon, a serverless Postgres provider.

| What | Where | Retention |
| --- | --- | --- |
| Knowledge items | context_items | Permanent |
| Knowledge relationships | context_edges | Permanent |
| Version history | context_supersessions | Permanent |
| Advisory exchanges | advisory_exchanges | 90 days |
| Learning jobs | extraction_jobs | 90 days |
| Learning proposals | proposals | Until reviewed |
| Feedback scores | advisory_decisions | Permanent |
| Profile metrics | advisory_profile_metrics | Permanent |
| All mutations | audit_log | Permanent |
| Uploaded documents | documents | Permanent |
| Document extraction jobs | document_extraction_jobs | 90 days |
| Item embeddings | context_items.embedding | Permanent |
| Users & projects | users + projects | Permanent |

One database. No separate vector store. Embeddings are stored as a column on the same context_items table in PostgreSQL (via pgvector). Rules are always included deterministically. Non-rule items are retrieved by semantic similarity to the user's question. This keeps the system focused, auditable, and efficient.

Knowledge Versioning

Knowledge items support supersession - a form of versioning where a new item replaces an older one while preserving the full history.

  1. A new item is created with the updated content
  2. The old item is linked as "superseded by" the new one
  3. The context assembly query automatically returns only the latest version (the "chain tip")
  4. The old version remains in the database for audit purposes

This means the knowledge base always reflects current understanding while maintaining a complete paper trail of how that understanding evolved.
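The chain-tip behavior described above can be sketched with a map of supersession links. The `superseded_by` mapping (old item id → replacing item id) is an illustrative schema for the stored supersession records.

```python
def resolve_tip(item_id, superseded_by):
    """Follow supersession links to the newest version in a chain."""
    while item_id in superseded_by:
        item_id = superseded_by[item_id]
    return item_id

def chain_tips(item_ids, superseded_by):
    """Return only chain tips: any item that appears as a key has been
    replaced, so context assembly skips it while the row stays for audit."""
    return [i for i in item_ids if i not in superseded_by]
```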

The Role of OpenAI

Amon Hen uses OpenAI's API in three specific ways:

Advisory Responses

When you ask a question, GPT-4.1-mini generates the answer. But the model is instructed with Amon Hen's assembled context - it doesn't rely on its general training data alone. The context acts as authoritative reference material that the model must ground its reasoning in.

Knowledge Extraction

When learning is triggered, GPT-4.1-mini analyzes the Q&A exchange to identify new knowledge items. It receives the existing context so it can avoid duplicating what's already known, and assigns confidence scores to each extracted item.

Semantic Embeddings

Every knowledge item is embedded at creation time using OpenAI's text-embedding model. These embeddings are stored in PostgreSQL (via pgvector) and used during context assembly to match the most relevant non-rule items to each incoming question.

In all three cases, the AI is a tool, not the product. Amon Hen provides the knowledge, the model provides the analysis. The platform's value is in what the model reasons over, not just that it reasons.

Summary

| Concept | How Amon Hen Handles It |
| --- | --- |
| Knowledge sourcing | Starter packs, manual entry, document import, and automatic extraction from conversations |
| Knowledge storage | PostgreSQL with structured types and vector embeddings (pgvector) |
| Versioning | Supersession chains preserve history while serving current versions |
| Context assembly | Rules always included; non-rule items retrieved by semantic similarity to the question |
| AI reasoning | GPT-4.1-mini generates responses grounded in project-specific context |
| Response quality | Three profiles auto-selected by historical feedback scores |
| Learning | Positive feedback triggers extraction with confidence-based routing |
| Safety | Rate limits, deduplication, human review, full audit logging |
| Data retention | Knowledge is permanent; transient records expire after 90 days |

Amon Hen is designed so that the more you use it, the better it gets - not through opaque model fine-tuning, but through transparent, auditable knowledge accumulation you can inspect, approve, and manage at every step.