ARCHIVED 2026-03-23 — Superseded by ultimate-claude-code-skills-playbook.md
Solanasis MCP & Plugins Installation Plan
Date: March 15, 2026
Status: READY FOR REVIEW — No changes made yet
Reviewer: Senior Review Agent (verified)
Purpose: Complete, step-by-step plan for installing remaining MCP integrations, marketplace plugins, and custom skills
CRITICAL CONTEXT: COWORK SESSION VS LOCAL INSTALLATION
THIS IS THE MOST IMPORTANT DECISION: You are currently in a Cowork session (a VM). Many installations will NOT persist after the session ends.
What Persists vs What Resets
| Component | Cowork Session (VM) | Local Machine | Decision |
|---|---|---|---|
| Marketplace plugins (.plugin files) | Resets after session | Persist (in ~/.claude/) | INSTALL LOCALLY via Claude Code CLI |
| MCP servers (.mcp.json config) | Resets after session | Persist | CONFIGURE LOCALLY via Claude Code CLI |
| CLAUDE.md files | Can work in session | Persist better locally | CREATE BOTH (user + project) |
| Subagent files (~/.claude/agents/) | Resets after session | Persist | CREATE LOCALLY |
| Custom skills (.claude/skills/) | Can work in session | Persist better locally | Create in session, copy to local |
| Hooks (settings.json) | Can work in session | Better locally | LOCAL ONLY |
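The "create in session, copy to local" row above can be sketched as a small export script. All paths here are assumptions; substitute your actual session mount and local home:

```shell
#!/bin/sh
# Sketch: copy skills built in a Cowork session to the local machine
# before the VM resets. SESSION_SKILLS is a placeholder path.
SESSION_SKILLS="${SESSION_SKILLS:-/sessions/example/mnt/.claude/skills}"
LOCAL_SKILLS="${LOCAL_SKILLS:-$HOME/.claude/skills}"

mkdir -p "$LOCAL_SKILLS"
if [ -d "$SESSION_SKILLS" ]; then
  # cp -a preserves the per-skill directory layout (each skill holds a SKILL.md)
  cp -a "$SESSION_SKILLS"/. "$LOCAL_SKILLS"/
  echo "copied skills to $LOCAL_SKILLS"
else
  echo "no session skills found at $SESSION_SKILLS"
fi
```

Run it near the end of every Cowork session so nothing built in the VM is lost.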
Architecture Implication
Cowork is useful for:
- Testing and validating new skills/plugins
- Building custom skills using skill-creator
- Verifying MCP connections work
- Planning and documentation
Cowork is NOT sufficient for:
- Permanent plugin installation (use Claude Code CLI)
- MCP server setup (use Claude Code CLI + environment)
- System-wide configuration (use local machine)
Therefore, this plan has TWO execution paths:
- Testing & validation (CAN happen in Cowork NOW)
- Permanent installation (MUST happen LOCALLY on your machine)
SECTION 1: MASTER INSTALLATION SEQUENCE
Phase 1A: MCP Servers (LOCAL CLI — Must Do First)
These are the research tools that make Chrome unnecessary for basic lookups.
Step 1: Add DuckDuckGo MCP (Recommended Over Brave)
Why first: Unlimited free searches, zero API key, zero setup friction. Gives Claude a fast search alternative to Chrome.
Local machine setup:
# Option 1: npm install (recommended)
npm install -g duckduckgo-mcp
# Option 2: Docker (if npm not available)
docker run -d -p 3000:3000 nickclyde/duckduckgo-mcp

Verify installation:
# Test that the command is available
which duckduckgo-mcp
# Should return: /usr/local/bin/duckduckgo-mcp (or similar)

Add to Claude Code CLI config:
Edit `~/.claude/settings.json` (create it if it doesn’t exist):
{
"mcpServers": {
"duckduckgo": {
"command": "npx",
"args": ["-y", "duckduckgo-mcp"]
}
}
}

Verify in Claude Code:
- Run `claude` in the terminal
- Type `/mcp` → should see `duckduckgo` listed
- Test with: “Search for ‘SOC 2 compliance requirements’” → should use DuckDuckGo, not Chrome
Cowork test: After local setup, open a Cowork session and try a research query. DuckDuckGo should appear in the tools list.
Step 2: Add Exa MCP (Free Remote Endpoint)
Why: Semantic search for business research, company research, people search. Higher quality than DuckDuckGo for technical queries. Free tier with no API key.
Local machine setup — Three options (pick one):
Option A: Add Exa’s free remote MCP endpoint (RECOMMENDED — zero local setup)
Edit ~/.claude/settings.json:
{
"mcpServers": {
"exa": {
"command": "curl",
"args": ["-sL", "https://mcp.exa.ai/mcp?tools=web_search_exa,web_search_advanced_exa,company_research_exa,people_search_exa,get_code_context_exa,crawling_exa,deep_researcher_start,deep_researcher_check"]
}
}
}

Note: This uses curl to fetch the remote endpoint. Alternative: use the official Exa MCP GitHub repo (Option B) if curl doesn’t work with remote MCPs.
Option B: Install locally from Exa GitHub repo
git clone https://github.com/exaai/exa-mcp-server.git
cd exa-mcp-server
npm install
npm run build

Then add to ~/.claude/settings.json:
{
"mcpServers": {
"exa": {
"command": "node",
"args": ["/path/to/exa-mcp-server/dist/index.js"],
"env": {
"EXA_API_KEY": "${EXA_API_KEY}"
}
}
}
}

Leave `EXA_API_KEY` empty for the free tier. (JSON does not allow `//` comments, so don’t paste a comment into the config.)

Option C: Use Docker (if npm/curl is not working)
docker run -d \
-e EXA_API_KEY="" \
-p 3001:3001 \
exaai/mcp-server:latest

Then configure in ~/.claude/settings.json to connect to localhost:3001.
Recommendation: Option A (remote endpoint) — no local installation needed.
Verify:
- Run `claude` in the terminal
- Type `/mcp` → should see `exa` listed
- Test with: “Research Acme Corp using company_research_exa” → should return structured company data
Step 3: Set Environment Variable for Exa API Key (Optional — Free Tier Works Without)
If you want to lift rate limits later:
- Get free Exa API key: https://api.exa.ai/
- Add to your shell profile (`~/.bashrc`, `~/.zshrc`, etc.):

export EXA_API_KEY="your_api_key_here"

- Reload the shell: `source ~/.bashrc` (or `source ~/.zshrc`)
Free tier limits: 1,000 requests/month (~33/day). Plenty for current Solanasis scale.
Step 4: Disable Claude.ai MCP Inheritance (CRITICAL)
Claude.ai may auto-inject MCP servers into your CLI sessions. This can cause conflicts.
Disable with environment variable:
# Run Claude Code with this flag:
ENABLE_CLAUDEAI_MCP_SERVERS=false claude
# Or add to your shell profile permanently:
# echo 'export ENABLE_CLAUDEAI_MCP_SERVERS=false' >> ~/.bashrc

Verify: Run `claude`, then `/mcp` — should ONLY see the servers you’ve explicitly configured, not extra Claude.ai servers.
Phase 1B: CLAUDE.md Configuration (LOCAL — Can Start Now)
These instructions guide Claude’s behavior persistently across sessions.
Step 5: Create User-Level CLAUDE.md
File location: ~/.claude/CLAUDE.md
cat > ~/.claude/CLAUDE.md << 'EOF'
# Global Instructions — Dmitri Zasage / Solanasis
## Identity
- Working with Dmitri Zasage, CEO of Solanasis LLC (Colorado)
- Solanasis = fractional CIO/CSIO/COO (fCIO/fCSIO/fCOO) firm for SMBs and nonprofits
- Core offerings: Security Assessments, Disaster Recovery Verification, Data Migrations, CRM Setup, Systems Integration, Responsible AI Implementation
- Currently a one-person operation with 1099 contractors
- Growth hacking mindset — unconventional/Smartcuts approach
## Planning: Proportional to Complexity
- **Routine tasks** (research a prospect, draft an email, update a doc): Just do it. No formal planning.
- **Medium tasks** (multi-step research, document creation, config changes): State a brief plan (3-5 bullets) before starting.
- **Complex tasks** (new services, system redesigns, multi-day projects, strategic decisions): Use the `planner` subagent for a structured plan.
Don't waste time planning the obvious. Save planning for work where the approach genuinely matters.
## Verification: Proportional to Stakes
Launch the `senior-reviewer` subagent ONLY for:
- **Client deliverables** — proposals, reports, assessments, anything going to someone outside Solanasis
- **Strategic decisions** — pricing, new offerings, partnerships, major process changes
For everything else (internal docs, research, emails, routine tasks), self-review is sufficient. Don't burn tokens verifying draft emails.
**Decision tree:**
- Client deliverables → ALWAYS run senior-reviewer
- Strategic decisions → ALWAYS run senior-reviewer
- Internal docs, research, analysis → Self-review only
- Draft emails, messages, routine tasks → No verification
- Quick answers, lookups, conversation → No verification
## Research: Use the Research Agent for Depth
Launch the `research-agent` subagent for:
- **Prospect/company research** — Before sales calls, when prepping for client meetings
- **Competitor analysis** — Understanding the competitive landscape
- **Market research** — Industry trends, technology landscape, compliance requirements
- **Technology evaluation** — Comparing tools, vendors, platforms before recommending to clients
- **Security/compliance research** — Threat landscape, framework requirements, vulnerability context
For simple lookups ("What does Acme Corp do?"), just use WebSearch directly — no need for the full research agent.
**Decision tree:**
- Deep prospect prep → research-agent
- Competitor analysis → research-agent
- Market/tech research → research-agent
- Strategic research feeding a decision → research-agent → then senior-reviewer on the recommendation
- Quick factual lookup → WebSearch directly
- Checking a specific URL → WebFetch directly
## Tool Routing — Efficiency, Not Restriction
The goal is efficiency, not restriction. Use whichever tool gets the job done fastest.
### Quick Guide
- **WebSearch** — fastest for finding information, searching topics, news, docs. Use this when you just need to look something up.
- **WebFetch** — fastest for reading a specific URL you already have.
- **Exa/DuckDuckGo MCP** — for deeper semantic search, company research, people research (if installed).
- **Chrome** — for everything interactive: logins, dashboards, forms, screenshots, site testing, web apps, visual inspection, or anything that needs a real browser.
### Efficiency tip
WebSearch returns results in one call. Chrome requires navigate → wait → read_page (3+ calls, more tokens). For a simple "what is X?" question, WebSearch is 10x faster. But for "log into ClickUp and check the sprint," Chrome is the only option.
**Use Chrome freely.** It's the right tool for interactive work. Just don't use it as a search engine when WebSearch is faster.
## Communication Preferences
- Bullet points and numbered lists preferred over long paragraphs
- Sub-bullet points for additional context
- Always spell out acronyms on first use (unless super common like AI, IT, CEO)
- Include "pro tips" whenever relevant to help Dmitri learn
- Clarifying questions via multiple choice:
- Option A = recommended answer (with explanation of why)
- Remaining options in order of recommendation
- Include context with each option for learning
- Provide textarea for additional notes per answer
## Technical Conventions
- SQL: lowercase keywords and built-in functions, snake_case for tables/cols/procs/funcs/views/triggers
- C#: snake_case for variables that match DB column names
- Codebehind pattern for Blazor, JSInterop for forms
- HTML generated via C# classes
EOF

Verify:
cat ~/.claude/CLAUDE.md | head -20
# Should show the Identity section

Step 6: Update Project-Level CLAUDE.md
File location: /sessions/admiring-modest-gauss/mnt/_solanasis/solanasis-docs/.claude/CLAUDE.md
Read existing file first:
cat /sessions/admiring-modest-gauss/mnt/_solanasis/solanasis-docs/.claude/CLAUDE.md

Add these sections to the end (don’t overwrite security rules):
## MCP Tools
### Available MCP Servers
- DuckDuckGo (web, news, image search — unlimited, free)
- Exa (semantic search, company research, people search — free tier)
- ClickUp connector (project management)
- Google Calendar (event management)
- Gmail (email search, drafts)
- Canva (design)
### Preferred Research Tools (in order)
1. WebSearch — For finding info quickly
2. Exa (if available) — For business/company research
3. DuckDuckGo (if available) — Unlimited backup search
4. WebFetch — For reading a specific URL
5. Chrome read_page — Only if above tools fail (JS-heavy pages)
DO NOT use Chrome for basic web research. Chrome is for interactive tasks only.
## Active Connectors
- ClickUp (project management)
- Google Calendar
- Gmail
- Canva (design)
## Tech Stack Context
- ClickUp for PM, Xero for accounting, Coda as wiki
- Google Workspace, Google Voice for business phone
- Website: solanasis.com (cPanel/Namecheap Stellar VPS)
- Brevo for email marketing (List ID: 2)
- Payment: 50% upfront / 50% on delivery (full upfront under $2,500)

Verify:
tail -20 /sessions/admiring-modest-gauss/mnt/_solanasis/solanasis-docs/.claude/CLAUDE.md

Phase 1C: Subagent Files (LOCAL — Create in CLI)
These enable planning, research, and verification workflows.
Step 7: Create Subagent Directory
mkdir -p ~/.claude/agents

Step 8: Create Senior Reviewer Subagent
File: ~/.claude/agents/senior-reviewer.md
cat > ~/.claude/agents/senior-reviewer.md << 'AGENT_EOF'
---
name: senior-reviewer
description: >
Senior quality reviewer. Use this agent to verify ANY substantive work
before presenting to the user. This includes: documents, plans, research
findings, configurations, code, templates, and client deliverables.
Use proactively after completing work.
tools:
- Read
- Grep
- Glob
- Bash
- WebSearch
- WebFetch
disallowedTools:
- Write
- Edit
- Agent
model: opus
maxTurns: 10
---
# Senior Reviewer Agent
You are a senior technical and strategic reviewer for Solanasis, a
fractional CIO/CSIO/COO (fCIO/fCSIO/fCOO) firm targeting SMBs and
nonprofits.
## Your Job
Review the work that was just completed and provide an honest assessment.
You are the quality gate before anything goes to the user (Dmitri, the CEO).
## Review Checklist
For EVERY review, check:
### Accuracy
- Are facts correct? Cross-reference claims with web search if needed.
- Are technical details accurate (framework names, tool capabilities, pricing, configuration syntax)?
- Are there unsupported claims or hallucinations?
### Completeness
- Does the output fully address what was asked?
- Are there obvious gaps or missing sections?
- Would Dmitri need to ask follow-up questions to use this?
### Best Practices
- Is this truly the best approach, or is there a better way?
- Are there industry best practices being ignored?
- Would an experienced consultant do it differently?
### Solanasis Context
- Does this align with Solanasis's business model (small team, 1099 contractors, credibility-building phase)?
- Is this practical for a one-person operation scaling up?
- Does it fit the "unconventional/Smartcuts" approach Dmitri prefers?
### Tool Usage Review
- Were the right tools used? (WebSearch vs Chrome, etc.)
- Was the approach efficient or wasteful?
- Could this have been done faster/better?
## Output Format
Return a structured verdict:
Senior Review Verdict
Status: APPROVED | APPROVED WITH NOTES | NEEDS REVISION
Accuracy: [Pass/Fail] — [brief note]
Completeness: [Pass/Fail] — [brief note]
Best Practices: [Pass/Fail] — [brief note]
Solanasis Fit: [Pass/Fail] — [brief note]
Efficiency: [Pass/Fail] — [brief note]
Issues Found: (if any)
- [Issue 1]
- [Issue 2]
Suggestions: (if any)
- [Suggestion 1]
- [Suggestion 2]
If status is NEEDS REVISION, be specific about what needs to change.
AGENT_EOF
Verify:
cat ~/.claude/agents/senior-reviewer.md | head -20

Step 9: Create Planner Subagent
File: ~/.claude/agents/planner.md
cat > ~/.claude/agents/planner.md << 'AGENT_EOF'
---
name: planner
description: >
Task planning agent. Use for any multi-step task to plan the approach
before execution. Determines which tools to use, what order to work in,
and identifies potential issues. Use proactively for complex requests.
tools:
- Read
- Grep
- Glob
- WebSearch
- WebFetch
disallowedTools:
- Write
- Edit
- Bash
- Agent
model: opus
maxTurns: 5
---
# Task Planner Agent
You plan the most efficient approach for completing tasks.
## Your Job
Given a task description, produce a brief execution plan that:
1. **Identifies the right tools** — Specifically:
- WebSearch for finding information (NEVER Chrome for research)
- WebFetch for reading specific URLs (NEVER Chrome navigate + read_page)
- Chrome ONLY for authenticated sites, form filling, or explicit browser automation
- Read/Glob/Grep for local files
- Bash for commands and scripts
2. **Orders the steps** — What needs to happen first, what can be parallel
3. **Flags risks** — What could go wrong, what to watch out for
4. **Estimates scope** — Quick (< 5 min), Medium (5-15 min), Complex (15+ min)
## Output Format
Execution Plan
Scope: Quick | Medium | Complex
Steps:
- [Step] — using [Tool]
- [Step] — using [Tool]
- [Step] — using [Tool]
Tool Routing:
- Research: WebSearch (no Chrome needed)
- URL reading: WebFetch (no Chrome needed)
- [Any Chrome needed?]: [Yes/No — reason]
Risks: [Any gotchas]
Verification: [What senior-reviewer should check]
Keep plans concise. 3-7 steps for most tasks.
AGENT_EOF
Verify:
cat ~/.claude/agents/planner.md | head -20

Step 10: Create Research Agent
File: ~/.claude/agents/research-agent.md
cat > ~/.claude/agents/research-agent.md << 'AGENT_EOF'
---
name: research-agent
description: >
Deep research specialist. Use this agent for ANY research task that requires
searching multiple sources, synthesizing findings, or building a comprehensive
picture. This includes: prospect/company research, competitor analysis, market
research, technology evaluation, security landscape research, compliance
requirements, vendor comparison, and due diligence. Use proactively whenever
research depth matters.
tools:
- WebSearch
- WebFetch
- Read
- Grep
- Glob
- Bash
disallowedTools:
- Write
- Edit
- Agent
- mcp__Claude_in_Chrome__navigate
- mcp__Claude_in_Chrome__form_input
- mcp__Claude_in_Chrome__file_upload
model: opus
maxTurns: 15
---
# Research Agent — Solanasis Deep Research Specialist
You are a senior research analyst for Solanasis, a fractional CIO/CSIO/COO
(fCIO/fCSIO/fCOO) firm targeting SMBs and nonprofits.
## Your Job
Conduct thorough, multi-source research and return a structured, citation-rich
report. You are the research equivalent of the senior-reviewer — a specialist
that ensures research quality before findings reach the CEO (Dmitri).
## Research Tool Priority (Efficiency Order)
Use the fastest tool that gets the job done:
1. **WebSearch** — First choice for finding information, news, documentation,
articles. Fast and cheap.
2. **Exa MCP tools** (if available) — For semantic search, company research
(`company_research_exa`), people search (`people_search_exa`), deep
research reports (`deep_researcher_start`/`deep_researcher_check`).
3. **DuckDuckGo MCP** (if available) — Alternative search with no rate limits.
4. **WebFetch** — For reading specific URLs you've found. Extract content
from articles, docs, reports.
5. **Read/Grep/Glob** — For analyzing local files, uploaded documents,
previous research.
6. **Chrome read-only tools** — `read_page`, `get_page_text` are available
as FALLBACK for pages that require JavaScript rendering or have
anti-scraping protection. State why WebFetch didn't work if you use these.
## Research Standards
### Source Quality
- Prefer official sources (company websites, SEC filings, government databases, vendor documentation)
- Cross-reference claims across 2+ sources when possible
- Flag single-source claims as "unverified" or "reported by [source]"
- Note publication dates — flag anything older than 12 months as potentially outdated
### Intellectual Honesty
- Clearly distinguish: verified facts vs. inferences vs. speculation
- Say "I couldn't find information on X" rather than making it up
- Note conflicting information when sources disagree
- Flag gaps in the research — what COULDN'T you find that matters?
### Solanasis Context
- We target SMBs (Small and Medium Businesses) and nonprofits
- Our wedge offerings: security assessments, disaster recovery (DR) verification, data migrations
- Goal: become their operational resilience partner with recurring revenue
- Growth hacking mindset — look for unconventional angles
- Currently building credibility — partner/association references are valuable
## Output Format
Research Report: [Topic]
Date: [date]
Research Depth: Quick Scan | Standard | Deep Dive
Sources Consulted: [number]
Key Findings
- [Finding 1] — [Source]
- [Finding 2] — [Source]
- [Finding 3] — [Source]
Analysis
[Synthesized narrative connecting the findings]
Solanasis Relevance
[How this connects to our business — opportunities, risks, action items]
Gaps & Limitations
- [What couldn’t be found]
- [What needs further investigation]
Sources
Adjust depth based on what's asked. Quick prospect lookups don't need the full template — use judgment.
AGENT_EOF
Verify:
cat ~/.claude/agents/research-agent.md | head -20
ls -la ~/.claude/agents/
# Should show: planner.md, senior-reviewer.md, research-agent.md

Phase 1D: Test MCP & Subagents in Claude Code
Step 11: Verify MCP Installation
# Start Claude Code CLI
claude
# In the CLI, run:
/mcp
# Should output: duckduckgo, exa (and any other MCPs you configured)
# Test DuckDuckGo
# (in Claude chat)
Search for "SOC 2 compliance requirements"
# Should use duckduckgo tool, not Chrome
# Test Exa
# (in Claude chat)
Research the company "Anthropic" using Exa
# Should use exa tools, return structured company data

Success criteria:
- `/mcp` shows both `duckduckgo` and `exa`
- WebSearch is used for basic queries (faster than Chrome)
- Exa is used for company/people research (better than WebSearch)
- Chrome is NOT used for research-only tasks
Step 12: Test Subagents
# In Claude Code CLI, test each subagent:
# Test 1: Planner
Plan a task to research three SMB prospects and draft cold emails
# Should invoke planner, return a structured 5-step plan
# Test 2: Research Agent
Research the cybersecurity compliance landscape for SMBs in 2026
# Should invoke research-agent, return a structured report
# Test 3: Senior Reviewer
Write a client proposal, then ask for review
# Should invoke senior-reviewer after you finish, return verdict

Success criteria:
- Planner delegates complex tasks automatically
- Research agent is used for depth research automatically (via description)
- Senior reviewer is launched for client work automatically
Phase 2A: Marketplace Plugins Installation (LOCAL CLI)
Step 13: Install Phase 1 Plugins (20 Skills from 6 Plugins)
Location: Local Claude Code CLI (installed on your machine, persists)
Commands to run locally:
claude
# In the CLI, use plugin install or equivalent command
# (Exact syntax depends on your Claude Code version)
# Recommended installation order (install in parallel if possible):
/plugin install operations@knowledge-work-plugins
/plugin install sales@knowledge-work-plugins
/plugin install marketing@knowledge-work-plugins
/plugin install engineering@knowledge-work-plugins
/plugin install customer-support@knowledge-work-plugins
/plugin install data@knowledge-work-plugins

Note: If these commands don’t work, try:
/plugin marketplace search operations # Then click install on the Operations plugin
Verify:
/plugin list
# Should show 6 new plugins (operations, sales, marketing, engineering, customer-support, data)
# Plus existing: legal, productivity, brand-voice, cowork-plugin-management

Step 14: Configure Model Invocation Mapping
After the plugins install, set 8 of the 20 skills to `disableModelInvocation: true` to keep the context budget manageable.
Location: .claude/plugins/ directory or plugin configuration files
Skills to set to MANUAL-ONLY (8 total):
| Plugin | Skill | Reason |
|---|---|---|
| Operations | change-management | Invoke only when planning migrations |
| Operations | resource-planning | Invoke only for staffing/capacity |
| Sales | create-an-asset | Invoke explicitly for proposals |
| Sales | competitive-intelligence | Invoke explicitly for battlecards |
| Sales | daily-briefing | Invoke explicitly each morning |
| Marketing | campaign-planning | Invoke explicitly for campaigns |
| Customer Support | knowledge-management | Invoke after resolving issues |
| Data | data-validation | Invoke explicitly during migration QA |
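The eight skills in the table above can be audited with a short shell loop before editing anything. The plugin directory layout varies by Claude Code version, so `PLUGIN_ROOT` and the `SKILL.md` path pattern are assumptions:

```shell
#!/bin/sh
# Sketch: report which of the eight manual-only skills still lack the
# disableModelInvocation flag. PLUGIN_ROOT is an assumed location.
PLUGIN_ROOT="${PLUGIN_ROOT:-$HOME/.claude/plugins}"
missing=0
for skill in change-management resource-planning create-an-asset \
             competitive-intelligence daily-briefing campaign-planning \
             knowledge-management data-validation; do
  f=$(find "$PLUGIN_ROOT" -type f -path "*${skill}*/SKILL.md" 2>/dev/null | head -1)
  if [ -z "$f" ]; then
    echo "not installed yet: $skill"
  elif ! grep -q 'disableModelInvocation: true' "$f"; then
    echo "needs flag: $f"
    missing=$((missing + 1))
  fi
done
echo "skills still needing the flag: $missing"
```

Re-run it after editing the frontmatter; the count should drop to zero.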
Skills to keep AUTO-INVOKE (12 total):
| Plugin | Skill | Reason |
|---|---|---|
| Operations | compliance-tracking | Triggers on “SOC 2”, “GDPR” — frequent |
| Operations | risk-assessment | Triggers on “risk”, “what could go wrong” — frequent |
| Operations | vendor-management | Triggers on “evaluate vendor” — frequent |
| Operations | process-optimization | Triggers on “bottleneck” — frequent |
| Sales | account-research | Triggers on “research [company]” — daily |
| Sales | draft-outreach | Triggers on “draft outreach to” — daily |
| Sales | call-prep | Triggers on “prep me for my call” — daily |
| Marketing | content-creation | Triggers on “write a blog post” — frequent |
| Engineering | documentation | Triggers on “write docs” — frequent |
| Engineering | incident-response | Triggers on “incident”, “production down” — critical |
| Data | data-exploration | Triggers on “profile this dataset” — migration work |
| Data | sql-queries | Triggers on “write a query” — migration work |
How to disable model invocation:
Option A: Via CLI (if available)
# (Syntax varies by Claude Code version — check `/help`)
/plugin skill disable-model-invocation change-management
/plugin skill disable-model-invocation resource-planning
# ... repeat for all 8 skills

Option B: Edit plugin SKILL.md files directly
Find the plugin files in ~/.claude/plugins/ or the marketplace folder, then edit each SKILL.md to add:
disableModelInvocation: true

Or in the skill frontmatter:
---
name: change-management
disableModelInvocation: true
---

Verify:
# In Claude CLI or Cowork, ask:
"What skills can you invoke?"
# Should list 12 skills (auto-invoke), not 20

Phase 2B: Custom Skills Build
Step 15: Create Research-First Skill
File: ~/.claude/skills/research-first/SKILL.md
mkdir -p ~/.claude/skills/research-first
cat > ~/.claude/skills/research-first/SKILL.md << 'SKILL_EOF'
---
name: research-first
description: >
Quick focused research on a specific topic. Research using MCP tools
(Exa, DuckDuckGo, WebSearch) before falling back to WebFetch or Chrome.
For deeper research, delegate to the research-agent subagent.
Invoke manually with /research-first.
disableModelInvocation: true
---
# Research-First Skill
A lightweight skill for focused research tasks, complementing the deeper `research-agent` subagent.
## How to Use
Invoke manually: `/research-first [topic]`
Examples:
- `/research-first SOC 2 Type II compliance for SaaS`
- `/research-first disaster recovery best practices for SMBs`
- `/research-first latest data migration tools 2026`
## Research Sequence
1. **Exa semantic search** — Company research, people search, deep researcher tools
2. **DuckDuckGo** — Web and news search if Exa unavailable
3. **WebSearch** — General web search fallback
4. **WebFetch** — Read specific documents you found
5. **Chrome read_page** — Only if above fails (JS-heavy pages)
**NEVER use Chrome as the primary research tool.** Chrome is for interactive tasks only.
## Output Format
Return findings in this format:
Research: [Topic]
Top Findings
- [Finding] — Source
- [Finding] — Source
- [Finding] — Source
Key Insights
[Synthesized summary]
Next Steps
[What to do with this research]
Keep research focused and actionable. Adjust depth based on the topic.
SKILL_EOF
Verify:
# In Claude CLI:
/skill research-first
# Should prompt for topic and return research results

Phase 3: Context Budget Verification
Step 16: Calculate Total Context Load
Before plugin installation, verify context window budget:
| Component | Size | Count | Subtotal |
|---|---|---|---|
| Existing skills | ~200 chars | 6 | ~1,200 chars |
| Phase 1 plugins (12 auto-invoke) | ~200 chars | 12 | ~2,400 chars |
| Subagent descriptions | ~300 chars | 3 | ~900 chars |
| CLAUDE.md content | ~800 chars | 1 | ~800 chars |
| Memory/context files | variable | 1 | ~500 chars |
| TOTAL ACTIVE | | | ~5,800 chars |
Available context budget (at 128K context):
- System prompt: ~2,000 chars
- Reserved for conversation: ~120,000 chars
- Available for tool descriptions: ~2,560 chars
Status: ⚠️ OVER BUDGET
Mitigation: Implement Model Invocation Mapping
- Keep 12 auto-invoke (high-frequency) skills enabled
- Set 8 skills to manual-only (disable model invocation)
- Result: ~2,400 chars active (under ~2,560 budget)
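Using the plan’s own estimate of ~200 characters per skill description, the mitigation arithmetic can be sketched as:

```shell
#!/bin/sh
# Sketch of the budget arithmetic above, using the plan's estimates:
# ~200 chars per skill description, ~2,560 chars available for descriptions.
per_skill=200
budget=2560
before=$((20 * per_skill))   # all 20 plugin skills auto-invocable
after=$((12 * per_skill))    # after setting 8 skills to manual-only
echo "before=${before} after=${after} budget=${budget}"
```

With all 20 skills auto-invocable, the descriptions alone (~4,000 chars) blow past the ~2,560-char budget; trimming to 12 lands at ~2,400, just under it.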
Verify after installation:
# In Claude CLI, check how many skills are loaded
/skill list
# Count auto-invoke vs manual-only
# Ask Claude:
"Are you seeing 'Excluded skills' warnings?"
# If YES → not enough context budget, disable more skills
# If NO → context budget is OK

SECTION 2: INSTALLATION CHECKLIST (STEP-BY-STEP)
Prerequisites
- Claude Code CLI installed locally (NOT Cowork)
- Terminal access to your machine (not the VM)
- npm installed (`npm --version` returns a version)
- Git installed (`git --version` returns a version)
- Max plan is active (needed for subagent token budget)
Phase 1: MCP Servers (LOCAL)
- DuckDuckGo MCP installed (`npm install -g duckduckgo-mcp`)
- DuckDuckGo added to `~/.claude/settings.json`
- DuckDuckGo tested in Claude CLI (`/mcp` shows it)
- Exa MCP configured (remote endpoint or local)
- Exa tested in Claude CLI
- ENABLE_CLAUDEAI_MCP_SERVERS=false verified (or disabled)
Phase 1: CLAUDE.md (LOCAL)
- User-level CLAUDE.md created (`~/.claude/CLAUDE.md`)
- Project-level CLAUDE.md updated (`solanasis-docs/.claude/CLAUDE.md`)
- Both files verified with the `cat` command
Phase 1: Subagents (LOCAL)
- `~/.claude/agents/` directory created
- senior-reviewer.md created and verified
- planner.md created and verified
- research-agent.md created and verified
- Subagents tested in Claude CLI
Phase 2: Plugins (LOCAL)
- operations plugin installed (`/plugin install operations@knowledge-work-plugins`)
- sales plugin installed
- marketing plugin installed
- engineering plugin installed
- customer-support plugin installed
- data plugin installed
- Model invocation mapping applied (8 skills set to manual-only)
- Total skills: 12 auto-invoke, 8 manual-only
Phase 2: Custom Skills
- research-first skill created (`~/.claude/skills/research-first/SKILL.md`)
- research-first skill tested
Phase 3: Verification
- Context budget verified (no “Excluded skills” warnings)
- Chrome permissions set to Allow (not Ask)
- DuckDuckGo/Exa preferred for research (tested)
- Chrome reserved for interactive tasks (tested)
- Subagents auto-invoke correctly (tested)
- Plugins show in `/plugin list` (tested)
Phase 3: Cowork Session
- Open Cowork session with solanasis-docs folder
- Verify plugins appear in skill menu
- Verify MCP tools available in chat
- Test full workflow: research → plan → create → verify
SECTION 3: WHAT CAN BE TESTED IN COWORK NOW
These items CAN be tested in the current Cowork session before local installation:
- Skill logic validation — Create/test custom skills using skill-creator (already installed)
- Plugin compatibility — Read plugin docs, verify they don’t conflict with existing skills
- Workflow simulation — Manually run through the research → plan → create → verify loop
- CLAUDE.md effectiveness — See if instructions guide tool selection correctly
- Context budget impact — Estimate token costs of new skills
These items CANNOT be tested in Cowork (will reset after session):
- Permanent MCP server installation
- Plugin installation (marketplace plugins only install in local Claude Code)
- Subagent file creation (needs to persist to `~/.claude/agents/`)
- System-wide configuration (the ENABLE_CLAUDEAI_MCP_SERVERS flag)
SECTION 4: ROLLBACK PLAN (IF SOMETHING BREAKS)
If Context Window Overflows
Symptom: Claude says “Excluded skills” or responses become evasive
Fix:
- Move more skills to the manual-only list (from 8 manual-only to 12)
- Remove Phase 2 plugins temporarily
- Keep Phase 1A (MCP) + Phase 1B (CLAUDE.md) + 3 critical skills
If MCP Servers Stop Working
Symptom: WebSearch works but DuckDuckGo/Exa don’t appear in /mcp
Fix:
# Check for conflicts
/mcp
# Remove and reinstall
npm uninstall -g duckduckgo-mcp
npm install -g duckduckgo-mcp
# Restart Claude
claude

If Plugins Cause Conflicts
Symptom: Skill descriptions collide, or unexpected behaviors emerge
Fix:
- Note which plugin caused the issue
- Run `/plugin uninstall [plugin-name]`
- Verify the conflict is gone
- Reinstall one plugin at a time
If Subagents Don’t Auto-Invoke
Symptom: You ask for the research agent to be used, but it isn’t invoked
Fix:
- Check subagent description field — must include “use proactively”
- Verify file location: `~/.claude/agents/[name].md`
- Verify the YAML frontmatter is valid (no syntax errors)
- Restart Claude Code: `exit`, then `claude`
If Chrome Gets Blocked Accidentally
Symptom: “I don’t have permission to use Chrome”
Fix:
/permissions
# Find Chrome tools, set to "Allow"

SECTION 5: RISKS & GOTCHAS
High-Risk Items
| Risk | Impact | Mitigation |
|---|---|---|
| MCP server command path wrong | Claude can’t find Exa/DDG tools | Verify with which duckduckgo-mcp before adding to settings |
| Context window overflow | “Excluded skills” warnings, degraded performance | Model invocation mapping (already in plan) |
| Plugin version conflicts | Skill descriptions collide, unexpected behavior | Install one plugin at a time, test between installs |
| Subagent nesting (Claude limitation) | Senior reviewer can’t delegate to research agent | Design subagents to be self-sufficient (already done) |
| ENABLE_CLAUDEAI_MCP_SERVERS injection | Unexpected MCP servers appear in sessions | Set environment variable at CLI startup |
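The first mitigation in the table — verifying the command path — can be run as a one-shot preflight. A sketch, assuming the npm package exposes a duckduckgo-mcp binary on PATH (the name follows the install commands used elsewhere in this plan):

```shell
# Preflight: confirm the MCP command resolves before wiring it into settings
if command -v duckduckgo-mcp >/dev/null 2>&1; then
  echo "found: $(command -v duckduckgo-mcp)"
else
  # npm global binaries live under the global prefix's bin directory
  echo "duckduckgo-mcp not on PATH — check $(npm prefix -g 2>/dev/null || echo '<npm global prefix>')/bin"
fi
```

Either outcome prints exactly one line, so this slots easily into a larger setup script.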
Medium-Risk Items
| Risk | Impact | Mitigation |
|---|---|---|
| npm/Docker not available | Can’t install DuckDuckGo locally | Use Exa remote endpoint instead |
| Brave Search MCP setup friction | Requires API key, more complex | Skip for now, use DuckDuckGo + Exa |
| Cowork session loss | All unsaved custom skills/plugins reset | Copy custom skills to local machine immediately |
| Plugin marketplace unavailable | Can’t install plugins in Cowork | All plugins must be installed via local Claude Code CLI |
| Research-first skill underutilized | Skill exists but not used | Include in CLAUDE.md to remind when to use manually |
Low-Risk Items
| Risk | Impact | Mitigation |
|---|---|---|
| Exa free tier rate limit hit | Search tools slow after 1,000 queries/month | Get free API key to lift limit |
| DuckDuckGo search quality issues | Some queries return weak results | Use Exa or WebSearch as fallback |
| Skill descriptions need tweaking | Auto-invoke too aggressive or too passive | Update descriptions after first week of use |
SECTION 6: VERIFICATION TESTS (RUN AFTER EACH PHASE)
Test 1: MCP Tools Available
claude
/mcp
# Output should include: duckduckgo, exa
Test 2: Research-First Tool Selection
# In Claude chat:
"What is the latest zero-day vulnerability landscape?"
# Verify: Uses Exa or DuckDuckGo, NOT Chrome
Test 3: Chrome Reserved for Interactive
# In Claude chat:
"Log into ClickUp and show me the sprint board"
# Verify: Uses Chrome (because it requires login)
Test 4: Subagents Auto-Invoke
# In Claude chat:
"Research this prospect before my call: Acme Corp"
# Verify: research-agent subagent invokes automatically
Test 5: Planning on Complex Tasks
# In Claude chat:
"Design a security assessment service offering for SMBs"
# Verify: planner subagent invokes, returns structured plan
Test 6: Verification on Client Work
# In Claude chat:
"Draft a proposal for Acme Corp for a security assessment"
# Verify: senior-reviewer subagent invokes after you finish
Test 7: Plugin Skills Appear
# In Claude chat:
"What skills do I have available?"
# Verify: Lists 20 skills (12 auto + 8 manual)
Test 8: Context Budget OK
# In Claude chat:
"Do you see any 'Excluded skills' warnings?"
# Verify: "No" — context budget is sufficient
SECTION 7: AFTER INSTALLATION — FIRST WEEK OPERATIONS
Day 1: Verify All Connections
- Test each MCP tool (Exa, DuckDuckGo)
- Test each subagent (planner, research-agent, senior-reviewer)
- Test 5 critical plugins (operations, sales, engineering, data)
- Verify CLAUDE.md is guiding tool selection
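The file-level portion of the Day 1 checklist can be automated. A sketch, assuming the agent filenames used elsewhere in this plan (the loop only checks that the files exist; MCP connections and subagent behavior still need the interactive tests in Section 6):

```shell
# Day 1 smoke check: confirm the configuration files this plan creates are in place
for f in ~/.claude/CLAUDE.md \
         ~/.claude/agents/planner.md \
         ~/.claude/agents/research-agent.md \
         ~/.claude/agents/senior-reviewer.md; do
  if [ -f "$f" ]; then echo "OK       $f"; else echo "MISSING  $f"; fi
done
```

Any MISSING line points to a file that must be recreated locally before the rest of the week's monitoring is meaningful.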
Day 2-3: Run Real Workflows
- Research a prospect → draft outreach → create asset (uses sales skills)
- Plan a security assessment → document findings (uses operations + engineering)
- Extract and validate data from a sample migration (uses data skills)
Day 4-5: Monitor Chrome Usage
- Track: How often is Chrome used for research vs. interactive?
- Goal: Chrome used < 20% for research-only tasks
- If > 20%: Add PreToolUse Chrome gate hook (Section 5 of Architecture doc)
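The Chrome gate hook could look roughly like this in settings.json — a sketch only: the mcp__chrome matcher pattern is an assumption about how the Chrome tools are named in your setup (check /mcp for the real prefix), while exit code 2 is what tells Claude Code to block the call and surface the stderr message:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "mcp__chrome.*",
        "hooks": [
          {
            "type": "command",
            "command": "echo 'Chrome is reserved for interactive tasks; use Exa/DuckDuckGo for research' >&2; exit 2"
          }
        ]
      }
    ]
  }
}
```

As written this blocks every Chrome call; a production gate would instead inspect the tool input passed on stdin and block only the research-style navigation that Exa or DuckDuckGo can cover.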
Day 6-7: Adjust Model Invocation Mapping
- Identify which manual-only skills should auto-invoke
- Re-enable any skills you find yourself invoking > 3x/day
- Disable any skills that auto-fire incorrectly
End of Week 1: Measure ROI
- Count: How many client deliverables used plugins?
- Count: How many were improved by senior-reviewer?
- Count: How many research tasks used research-agent?
- Decision: Keep all plugins, or disable any that aren’t earning their context?
APPENDIX: EXACT COMMAND REFERENCE
MCP Server Installation Commands
# DuckDuckGo (npm)
npm install -g duckduckgo-mcp
# Exa (remote endpoint — no local install needed)
# Just add to ~/.claude/settings.json
# Verify MCP is active
claude
/mcp
# Test DuckDuckGo
"Search for SOC 2 compliance using DuckDuckGo"
# Test Exa
"Research Anthropic Inc using Exa"
Plugin Installation Commands
claude
# Install marketplace plugins
/plugin install operations@knowledge-work-plugins
/plugin install sales@knowledge-work-plugins
/plugin install marketing@knowledge-work-plugins
/plugin install engineering@knowledge-work-plugins
/plugin install customer-support@knowledge-work-plugins
/plugin install data@knowledge-work-plugins
# List installed plugins
/plugin list
# Disable model invocation for a skill
/plugin skill disable-model-invocation [skill-name]
File Creation Commands
# Create directories
mkdir -p ~/.claude/agents
mkdir -p ~/.claude/skills/research-first
# Create files (use cat << 'EOF' pattern for multi-line content)
cat > ~/.claude/CLAUDE.md << 'EOF'
[content here]
EOF
# Verify files exist
ls -la ~/.claude/
ls -la ~/.claude/agents/
Environment Variables
# Disable Claude.ai MCP inheritance
export ENABLE_CLAUDEAI_MCP_SERVERS=false
claude
# Or add to shell profile permanently
echo 'export ENABLE_CLAUDEAI_MCP_SERVERS=false' >> ~/.bashrc
source ~/.bashrc
APPENDIX: DEFINITION OF “DONE”
Installation is complete when:
- MCP Tools — All research tools (Exa, DuckDuckGo, WebSearch) are available and tested
- CLAUDE.md — Both user-level and project-level instructions are in place
- Subagents — All three (planner, research-agent, senior-reviewer) are created and auto-invoke correctly
- Plugins — All 6 Phase 1 plugins installed, 12 auto-invoke, 8 manual-only
- Custom Skills — research-first skill created and tested
- Context Budget — No “Excluded skills” warnings, all 20 skills accessible
- Verification Tests — All 8 verification tests pass
- First Week — Full week of monitoring Chrome usage, adjusting model invocation mapping
Success Metric: Claude can research → plan → create → verify a full client deliverable without manual intervention, using the right tools efficiently.
Document Status: READY FOR EXECUTION
Next Step: Review this plan with Dmitri, then proceed with Phase 1 (local installation)
Expected Timeline: 4-6 hours for Phase 1, 2-3 hours for Phase 2, 1 week for Phase 3 (monitoring)