Intelligent Extraction
How ATLAS transforms raw content into structured knowledge
Beyond Simple Storage
Most tools just store your content. ATLAS understands it.
When you add content to ATLAS, AI models analyze every piece to extract structured knowledge that's searchable, connectable, and actionable.
What Gets Extracted
📌 Concepts
Key ideas with importance scores
Every piece of content yields concepts — the fundamental ideas it contains. Each concept includes:
Name: Clear, descriptive identifier
Description: What this concept means
Importance: Score from 0.0 to 1.0
Category: Domain classification
Source: Where it came from
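As a rough sketch, a single extracted concept could be represented like this (the `Concept` class is illustrative only, not ATLAS's actual schema; the example that follows fills in these same fields):

```python
from dataclasses import dataclass

@dataclass
class Concept:
    """Illustrative shape of one extracted concept (not ATLAS's real schema)."""
    name: str          # clear, descriptive identifier
    description: str   # what this concept means
    importance: float  # score from 0.0 to 1.0
    category: str      # domain classification
    source: str        # where it came from
```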
Example:
Name: "Antifragility"
Description: "Systems that gain from disorder and volatility"
Importance: 0.92
Category: Systems Thinking
Source: Twitter bookmark on Taleb thread
💡 Insights
Actionable learnings and observations
Not just facts, but implications. What should you remember? What matters?
✅ Actions
Things to do, research, or explore
Your content often implies next steps. ATLAS captures them.
📝 Quotes
Memorable passages worth preserving
The exact words that captured an idea perfectly.
🧩 Frameworks
Mental models and thinking tools
Structured approaches to problems. Decision matrices. Evaluation criteria. Reusable thinking patterns.
The Extraction Process
Step 1: Content Analysis
The LLM receives the full content along with extraction prompts optimized for each content type (tweets vs. long articles vs. conversations).
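A minimal sketch of that idea, assuming a hypothetical prompt table keyed by content type (the prompt text and type names are placeholders, not ATLAS's actual prompts):

```python
# Hypothetical prompt templates keyed by content type.
EXTRACTION_PROMPTS = {
    "tweet": "Extract the key concepts from this short post: {content}",
    "article": "Extract concepts, insights, actions, quotes, and frameworks "
               "from this long-form article: {content}",
    "conversation": "Extract concepts and decisions from this conversation: {content}",
}

def build_prompt(content_type: str, content: str) -> str:
    """Pick the template that matches the content type, defaulting to 'article'."""
    template = EXTRACTION_PROMPTS.get(content_type, EXTRACTION_PROMPTS["article"])
    return template.format(content=content)

print(build_prompt("tweet", "Antifragile systems gain from volatility."))
```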
Step 2: Structured Output
The AI returns JSON-structured extractions that map to ATLAS's schema. No fuzzy outputs — clean, queryable data.
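To make that concrete, here is a hand-written example of what such an output might look like and how it could be parsed (the JSON keys are assumptions that mirror the extraction types above, not the real schema):

```python
import json

# Hypothetical model output for one piece of content.
raw_output = """
{
  "concepts": [
    {"name": "Antifragility",
     "description": "Systems that gain from disorder and volatility",
     "importance": 0.92,
     "category": "Systems Thinking"}
  ],
  "insights": ["Robust systems resist shocks; antifragile systems improve from them"],
  "actions": ["Look for areas of your work where volatility could be an asset"],
  "quotes": [],
  "frameworks": []
}
"""

extraction = json.loads(raw_output)  # structured, queryable data
for concept in extraction["concepts"]:
    print(concept["name"], concept["importance"])
```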
Step 3: Importance Scoring
Each concept gets an importance score based on:
Relevance to your existing knowledge graph
Novelty of the idea
Actionability
Connection potential
Step 4: Deduplication
If a concept already exists in your graph, ATLAS merges the new reference rather than creating duplicates.
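A minimal sketch of the merge step, assuming concepts are matched by a normalized name (ATLAS's real matching is presumably smarter than exact string comparison):

```python
def upsert_concept(graph: dict, name: str, source: str) -> None:
    """Merge a new reference into an existing concept, or create it once."""
    key = name.strip().lower()
    if key in graph:
        graph[key]["sources"].append(source)  # merge the new reference
    else:
        graph[key] = {"name": name, "sources": [source]}  # first occurrence

graph: dict = {}
upsert_concept(graph, "Antifragility", "Twitter bookmark")
upsert_concept(graph, "antifragility", "Apple Note")  # merged, not duplicated
print(graph["antifragility"]["sources"])  # ['Twitter bookmark', 'Apple Note']
```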
Step 5: Relationship Mapping
New concepts automatically link to related existing concepts based on semantic similarity and co-occurrence.
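Conceptually, the similarity side of that linking might look like the sketch below (cosine similarity over embedding vectors with an assumed threshold; the vectors and cutoff are placeholders):

```python
import math

def cosine(a: list, b: list) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def link_related(new_vec: list, existing: dict, threshold: float = 0.8) -> list:
    """Names of existing concepts similar enough to link to the new one."""
    return [name for name, vec in existing.items()
            if cosine(new_vec, vec) >= threshold]

existing = {"Antifragility": [0.9, 0.1, 0.2], "Stoicism": [0.1, 0.8, 0.3]}
print(link_related([0.85, 0.15, 0.25], existing))  # ['Antifragility']
```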
Importance Scoring
The 0.0 to 1.0 importance score is crucial for surfacing what matters.
Scoring Factors
Novelty (25%): Is this new to your graph?
Density (25%): How connected to other concepts?
Actionability (20%): Does it suggest concrete actions?
Source Quality (15%): Where did this come from?
Recency (15%): Is this timely?
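As a worked illustration of those weights (the per-factor values below are invented for the example; how ATLAS actually measures each factor is not shown here):

```python
# Weights from the table above.
WEIGHTS = {
    "novelty": 0.25,
    "density": 0.25,
    "actionability": 0.20,
    "source_quality": 0.15,
    "recency": 0.15,
}

def importance(factors: dict) -> float:
    """Weighted sum of factor scores, each expected in the 0.0 to 1.0 range."""
    return round(sum(WEIGHTS[k] * factors.get(k, 0.0) for k in WEIGHTS), 2)

# Hypothetical factor scores for one concept.
print(importance({
    "novelty": 1.0,
    "density": 0.8,
    "actionability": 0.5,
    "source_quality": 0.8,
    "recency": 0.6,
}))  # 0.76, landing in the "important concepts" band described below
```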
What Scores Mean
0.9 - 1.0: Core concepts, foundational ideas
0.7 - 0.9: Important concepts, frequent references
0.5 - 0.7: Useful supporting concepts
0.3 - 0.5: Contextual, situational concepts
0.0 - 0.3: Peripheral, low-signal concepts
Current Stats
Concepts: 12,768+
Insights: 5,691+
Actions: 4,694+
Relationships: 9,807+
Content Items: 5,731+
The Compounding Effect
Here's what makes intelligent extraction powerful:
Month 1
You add your Twitter bookmarks. ATLAS extracts 500 concepts.
Month 3
You add Apple Notes and GitHub stars. ATLAS extracts 1,000 more concepts — but 300 of them connect to existing ones, enriching both.
Month 6
Your graph has 3,000 concepts with 8,000 relationships. New content immediately slots into context.
Month 12
ATLAS knows your intellectual landscape. It can answer questions about your specific interests, connect disparate ideas, and surface forgotten insights at the right moment.
Knowledge compounds. Storage doesn't.