The Core Idea: Context-First Advice
Most AI tools generate answers from broad, general training data. Amon Hen works differently. Before answering any question, it assembles a project-specific context - a structured collection of rules, decisions, notes, and reference material - and instructs the AI model to reason exclusively within that context.
This means:
- Answers are consistent and traceable
- The system is far less likely to hallucinate outside your domain
- Responses improve as the knowledge base grows
Think of it as a senior advisor who has read every guideline, decision record, and operating principle relevant to your work - and answers accordingly.
Where Knowledge Comes From
Amon Hen's knowledge base is built from four sources, each serving a different stage of a project's life.
1. Starter Packs - Bootstrapping
When a project is first created, it can be seeded with a starter pack - a pre-built collection of domain knowledge organized by industry vertical. These give the system an immediate foundation of expertise before any conversations have taken place.
- A healthcare project might start with 70+ rules covering nursing scope of practice, medication administration, infection control, and communication templates
- A business project might include entity formation, licensing, insurance, pricing, and cash flow management
- A process mining project might contain analysis patterns and conformance checking rules
2. Manual Knowledge Entry
Users and administrators can directly add knowledge items to a project. Each item is one of four types:
Rule
A standing principle, constraint, or guideline that always applies.
Decision
A specific choice made, paired with documented rationale explaining why.
Note
An observation, clarification, or contextual fact useful for future reference.
Code Context
Reference material, templates, formulas, or structured data the system can draw on.
Every knowledge item is scoped - it belongs to a specific project, a workspace, or is global - so the system knows exactly when to use it.
3. Document Import - Bulk Knowledge Ingestion
For projects with existing documentation, Amon Hen can extract knowledge directly from uploaded files. Upload PDFs, Word documents (.docx), or plain text files, and the system will parse the content and use AI to identify rules, decisions, notes, and reference material.
The extraction pipeline works in three stages:
- Upload - Files are validated (10 MB max per file, up to 5 per batch) and stored alongside the project
- Extract - AI analyzes the document text against the project's existing knowledge base, identifying new items and ignoring duplicates
- Review - Extracted items are placed in a draft proposal for human review before becoming active context. Nothing is auto-committed from documents
URL ingestion is also supported for web-based documentation. The system fetches the page and feeds it through the same extraction pipeline.
Document import is designed for onboarding: load existing policies, standards, procedures, or reference material and let the system organize it into structured knowledge items you can review and approve.
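The upload-stage validation described above (10 MB per file, up to 5 files per batch, PDF/.docx/plain text) might look roughly like this. The function name and error strings are illustrative, not Amon Hen's actual code:

```python
MAX_FILE_BYTES = 10 * 1024 * 1024   # 10 MB per file
MAX_BATCH_SIZE = 5                  # files per batch
ALLOWED_EXTENSIONS = {".pdf", ".docx", ".txt"}

def validate_batch(files: list[tuple[str, int]]) -> list[str]:
    """Return validation errors for a batch of (filename, size_bytes) pairs."""
    errors = []
    if len(files) > MAX_BATCH_SIZE:
        errors.append(f"batch has {len(files)} files; limit is {MAX_BATCH_SIZE}")
    for name, size in files:
        ext = name[name.rfind("."):].lower() if "." in name else ""
        if ext not in ALLOWED_EXTENSIONS:
            errors.append(f"{name}: unsupported type {ext or '(none)'}")
        if size > MAX_FILE_BYTES:
            errors.append(f"{name}: {size} bytes exceeds 10 MB limit")
    return errors
```

An empty error list means the batch proceeds to the extraction stage.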
4. Conversational Learning - Automatic Extraction
This is the most distinctive capability. Amon Hen learns from its own advisory conversations.
When a user asks a question and rates the response as useful, the system triggers a background knowledge extraction process. AI analyzes the exchange, identifies new knowledge not already in the project's context, assigns a confidence score, and checks for duplicates.
Extracted items are then routed based on confidence:
Auto-Committed
Immediately added to the knowledge base and available in future answers.
Proposed for Review
Saved as a draft proposal. A user must approve before it becomes active.
Discarded
Not reliable enough to store. Quietly dropped.
This creates a virtuous cycle: good advice leads to feedback, feedback triggers learning, and learning improves future advice.
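The three-way routing above reduces to a pair of confidence thresholds. The cutoff values below are illustrative assumptions - the real thresholds are internal to Amon Hen:

```python
AUTO_COMMIT_THRESHOLD = 0.85   # illustrative; actual value is internal
PROPOSAL_THRESHOLD = 0.5       # illustrative; actual value is internal

def route_extracted_item(confidence: float) -> str:
    """Route an extracted knowledge item by its confidence score."""
    if confidence >= AUTO_COMMIT_THRESHOLD:
        return "auto_commit"   # active in future answers immediately
    if confidence >= PROPOSAL_THRESHOLD:
        return "propose"       # draft awaiting human approval
    return "discard"           # too unreliable to store
```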
How a Question Gets Answered
When you submit a question, five things happen behind the scenes - typically in under two seconds.
Context Assembly
The system begins with your question. All active rules are gathered first - rules are always included in full because they represent standing constraints. For decisions, notes, and code context, the system embeds your question and retrieves the most relevant items by semantic similarity. Superseded items are filtered out automatically. The result is a focused, question-specific context window rather than a dump of everything in the project.
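The assembly logic described above - all rules included deterministically, other items ranked by similarity, superseded items dropped - can be sketched in a few lines. The dict-based item shape and `top_k` parameter are assumptions for illustration:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def assemble_context(question_vec, items, top_k=3):
    """items: dicts with 'type', 'text', 'embedding', 'superseded' keys."""
    live = [it for it in items if not it["superseded"]]
    # Rules are standing constraints: always included in full.
    rules = [it["text"] for it in live if it["type"] == "rule"]
    # Everything else competes on semantic similarity to the question.
    others = [it for it in live if it["type"] != "rule"]
    others.sort(key=lambda it: cosine(question_vec, it["embedding"]), reverse=True)
    return rules + [it["text"] for it in others[:top_k]]
```

In production the similarity ranking runs inside PostgreSQL via pgvector rather than in application code.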
Profile Selection
Amon Hen picks the best response style based on historical feedback: Concise for quick decisions, Analytical for complex reasoning, or Strict-Literal for compliance-sensitive work. The system self-calibrates over time.
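One simple way to self-calibrate, shown here as an assumption rather than Amon Hen's actual algorithm, is to pick the profile with the best average feedback score, giving untried profiles a neutral prior so they still get explored:

```python
PROFILES = ("concise", "analytical", "strict_literal")

def select_profile(feedback_history: dict[str, list[float]]) -> str:
    """Pick the profile with the best average historical feedback score.
    Profiles with no history default to a neutral prior of 0.5."""
    PRIOR = 0.5
    def avg(scores):
        return sum(scores) / len(scores) if scores else PRIOR
    return max(PROFILES, key=lambda p: avg(feedback_history.get(p, [])))
```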
Prompt Construction
A system prompt is assembled with the selected profile instructions, depth settings, and the full project context formatted as authoritative reference material. Token budgeting ensures everything fits within the model's processing window.
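Token budgeting can be sketched as a greedy fill: rules are non-negotiable, then the highest-ranked remaining items are added until the budget runs out. The rough 4-characters-per-token heuristic is an illustrative assumption, not the tokenizer the platform actually uses:

```python
def fit_to_budget(rules, others, budget_tokens, est=lambda s: len(s) // 4):
    """Greedy token budgeting: rules are always kept, then relevance-ranked
    items are added until the budget is exhausted.
    est is a rough tokens-per-string heuristic (~4 chars per token)."""
    used = sum(est(r) for r in rules)
    kept = list(rules)
    for text in others:  # assumed pre-sorted by relevance
        cost = est(text)
        if used + cost > budget_tokens:
            break
        kept.append(text)
        used += cost
    return kept
```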
AI Generation
The assembled prompt, conversation history, and your question are sent to OpenAI's GPT-4.1-mini. A low temperature (0.3) keeps answers consistent and focused. The model can also search the web if the project context doesn't fully cover the question.
Response Delivery
The AI's response is parsed to extract the main answer and follow-up hints. The full exchange - question, answer, and which context items were used - is saved to the database. A unique decision ID is returned so you can provide feedback later.
How the System Learns
Learning in Amon Hen is driven entirely by user feedback. Extraction never runs unprompted - the system learns only when you tell it an answer was useful.
The Feedback Loop
Feedback Scoring
| Outcome | Score | Triggers Learning? |
|---|---|---|
| Accepted | 1.0 | Yes |
| Minor Edit | 0.7 | Yes |
| Major Edit | 0.3 | No |
| Rejected | 0.0 | No |
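The scoring table above implies a simple trigger rule: only outcomes scoring at or above the Minor Edit level (0.7) start a learning job. A minimal sketch, with the threshold inferred from the table:

```python
FEEDBACK_SCORES = {
    "accepted": 1.0,
    "minor_edit": 0.7,
    "major_edit": 0.3,
    "rejected": 0.0,
}
LEARNING_THRESHOLD = 0.7  # inferred from the table: 0.7 triggers, 0.3 does not

def triggers_learning(outcome: str) -> bool:
    """True when a feedback outcome should start knowledge extraction."""
    return FEEDBACK_SCORES[outcome] >= LEARNING_THRESHOLD
```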
What Gets Extracted
The extraction process looks for new principles, decisions with rationale, contextual facts useful in future conversations, and patterns the knowledge base doesn't already capture. It avoids trivial information, anything already known (via content-hash deduplication), and meta-commentary.
Safety Rails
- Rate limiting - Maximum 10 extractions per hour per project to prevent runaway learning
- Idempotency - Each exchange can only be extracted once
- Human review - Moderate-confidence items require explicit approval
- Audit trail - Every auto-commit and proposal is logged in an immutable audit log
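The 10-per-hour rate limit above is naturally a per-project sliding window. A minimal sketch, with an injectable clock so the behavior is easy to verify (class and method names are illustrative):

```python
import time
from collections import defaultdict, deque

MAX_EXTRACTIONS_PER_HOUR = 10

class ExtractionRateLimiter:
    """Sliding-window limiter: at most 10 extractions per project per hour."""

    def __init__(self, now=time.monotonic):
        self.now = now
        self.events = defaultdict(deque)  # project_id -> recent timestamps

    def allow(self, project_id: str) -> bool:
        window = self.events[project_id]
        cutoff = self.now() - 3600
        while window and window[0] < cutoff:   # evict events older than 1 hour
            window.popleft()
        if len(window) >= MAX_EXTRACTIONS_PER_HOUR:
            return False
        window.append(self.now())
        return True
```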
Where Data Lives
All persistent data is stored in a PostgreSQL database hosted on Neon, a serverless Postgres provider.
| What | Where | Retention |
|---|---|---|
| Knowledge items | context_items | Permanent |
| Knowledge relationships | context_edges | Permanent |
| Version history | context_supersessions | Permanent |
| Advisory exchanges | advisory_exchanges | 90 days |
| Learning jobs | extraction_jobs | 90 days |
| Learning proposals | proposals | Until reviewed |
| Feedback scores | advisory_decisions | Permanent |
| Profile metrics | advisory_profile_metrics | Permanent |
| All mutations | audit_log | Permanent |
| Uploaded documents | documents | Permanent |
| Document extraction jobs | document_extraction_jobs | 90 days |
| Item embeddings | context_items.embedding | Permanent |
| Users & projects | users + projects | Permanent |
One database. No separate vector store. Embeddings are stored as a column on the same context_items table in PostgreSQL (via pgvector). Rules are always included deterministically. Non-rule items are retrieved by semantic similarity to the user's question. This keeps the system focused, auditable, and efficient.
Knowledge Versioning
Knowledge items support supersession - a form of versioning where a new item replaces an older one while preserving the full history.
- A new item is created with the updated content
- The old item is linked as "superseded by" the new one
- The context assembly query automatically returns only the latest version (the "chain tip")
- The old version remains in the database for audit purposes
This means the knowledge base always reflects current understanding while maintaining a complete paper trail of how that understanding evolved.
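Resolving the "chain tip" described above amounts to following supersession links until none remain. A minimal sketch, assuming supersessions are available as an old-id-to-new-id mapping (in Amon Hen this resolution happens inside the context assembly query, not in application code):

```python
def chain_tip(item_id, superseded_by):
    """Follow 'superseded by' links to the newest version of an item.
    superseded_by maps old_id -> new_id; ids absent as keys are tips."""
    seen = {item_id}
    while item_id in superseded_by:
        item_id = superseded_by[item_id]
        if item_id in seen:          # guard against accidental cycles
            raise ValueError("supersession cycle detected")
        seen.add(item_id)
    return item_id

def live_items(all_ids, superseded_by):
    """Items not superseded by anything: the current versions."""
    return [i for i in all_ids if i not in superseded_by]
```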
The Role of OpenAI
Amon Hen uses OpenAI's API in three specific ways:
Advisory Responses
When you ask a question, GPT-4.1-mini generates the answer. But the model is instructed with Amon Hen's assembled context - it doesn't rely on its general training data alone. The context acts as authoritative reference material that the model must ground its reasoning in.
Knowledge Extraction
When learning is triggered, GPT-4.1-mini analyzes the Q&A exchange to identify new knowledge items. It receives the existing context so it can avoid duplicating what's already known, and assigns confidence scores to each extracted item.
Semantic Embeddings
Every knowledge item is embedded at creation time using OpenAI's text-embedding model. These embeddings are stored in PostgreSQL (via pgvector) and used during context assembly to match the most relevant non-rule items to each incoming question.
In all three cases, the AI is a tool, not the product. Amon Hen provides the knowledge, the model provides the analysis. The platform's value is in what the model reasons over, not just that it reasons.
Summary
| Concept | How Amon Hen Handles It |
|---|---|
| Knowledge sourcing | Starter packs, manual entry, document import, and automatic extraction from conversations |
| Knowledge storage | PostgreSQL with structured types and vector embeddings (pgvector) |
| Versioning | Supersession chains preserve history while serving current versions |
| Context assembly | Rules always included; non-rule items retrieved by semantic similarity to the question |
| AI reasoning | GPT-4.1-mini generates responses grounded in project-specific context |
| Response quality | Three profiles auto-selected by historical feedback scores |
| Learning | Positive feedback triggers extraction with confidence-based routing |
| Safety | Rate limits, deduplication, human review, full audit logging |
| Data retention | Knowledge is permanent; transient records expire after 90 days |
Amon Hen is designed so that the more you use it, the better it gets - not through opaque model fine-tuning, but through transparent, auditable knowledge accumulation you can inspect, approve, and manage at every step.