ARCHIVED 2026-03-23 — Superseded by ultimate-claude-code-skills-playbook.md

Solanasis MCP & Plugins Installation Plan

Date: March 15, 2026
Status: READY FOR REVIEW — No changes made yet
Reviewer: Senior Review Agent (verified)
Purpose: Complete, step-by-step plan for installing remaining MCP integrations, marketplace plugins, and custom skills


CRITICAL CONTEXT: COWORK SESSION VS LOCAL INSTALLATION

THIS IS THE MOST IMPORTANT DECISION: You are currently in a Cowork session (a VM). Many installations will NOT persist after the session ends.

What Persists vs What Resets

| Component | Cowork Session (VM) | Local Machine | Decision |
| --- | --- | --- | --- |
| Marketplace plugins (.plugin files) | Resets after session | Persist (in ~/.claude/) | INSTALL LOCALLY via Claude Code CLI |
| MCP servers (.mcp.json config) | Resets after session | Persist | CONFIGURE LOCALLY via Claude Code CLI |
| CLAUDE.md files | Can work in session | Persist better locally | CREATE BOTH (user + project) |
| Subagent files (~/.claude/agents/) | Resets after session | Persist | CREATE LOCALLY |
| Custom skills (.claude/skills/) | Can work in session | Persist better locally | Create in session, copy to local |
| Hooks (settings.json) | Can work in session | Better locally | LOCAL ONLY |

Architecture Implication

Cowork is useful for:

  • Testing and validating new skills/plugins
  • Building custom skills using skill-creator
  • Verifying MCP connections work
  • Planning and documentation

Cowork is NOT sufficient for:

  • Permanent plugin installation (use Claude Code CLI)
  • MCP server setup (use Claude Code CLI + environment)
  • System-wide configuration (use local machine)

Therefore, this plan has TWO execution paths:

  1. Testing & validation (CAN happen in Cowork NOW)
  2. Permanent installation (MUST happen LOCALLY on your machine)
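The "create in session, copy to local" handoff for custom skills (path 1 feeding path 2) is a simple copy before the session ends. A sketch; all paths below are illustrative stand-ins for the real Cowork session mount and ~/.claude/skills on your machine:

```shell
# Sketch: pull session-built skills down to the local machine before the VM resets.
SESSION_SKILLS="/tmp/demo-session/.claude/skills"   # stand-in for the session mount
LOCAL_SKILLS="/tmp/demo-local/.claude/skills"       # stand-in for ~/.claude/skills

# Simulate a skill built during the session
mkdir -p "$SESSION_SKILLS/research-first" "$LOCAL_SKILLS"
echo "name: research-first" > "$SESSION_SKILLS/research-first/SKILL.md"

# -a preserves directory structure and file modes; copies every skill in one pass
cp -a "$SESSION_SKILLS/." "$LOCAL_SKILLS/"
ls "$LOCAL_SKILLS/research-first/SKILL.md"
```

Run the equivalent copy (or download the files through the session UI) before closing any Cowork session where you built skills.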

SECTION 1: MASTER INSTALLATION SEQUENCE

Phase 1A: MCP Servers (LOCAL CLI — Must Do First)

These are the research tools that make Chrome unnecessary for basic lookups.

Step 1: Install DuckDuckGo MCP Server

Why first: Unlimited free searches, zero API key, zero setup friction. Gives Claude a fast search alternative to Chrome.

Local machine setup:

# Option 1: npm install (recommended)
npm install -g duckduckgo-mcp
 
# Option 2: Docker (if npm not available)
docker run -d -p 3000:3000 nickclyde/duckduckgo-mcp

Verify installation:

# Test that the command is available
which duckduckgo-mcp
# Should return: /usr/local/bin/duckduckgo-mcp (or similar)

Add to Claude Code CLI config:

Edit ~/.claude/settings.json (create it if it doesn't exist). Note that the package name must match what you actually installed: the npx entry below fetches a scoped package on demand, while Option 1 above installs the unscoped duckduckgo-mcp globally. If npx can't resolve the scoped name, point "command" at the globally installed binary instead.

{
  "mcpServers": {
    "duckduckgo": {
      "command": "npx",
      "args": ["-y", "@duckduckgo/duckduckgo-mcp"]
    }
  }
}
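Before restarting Claude, it's worth confirming the edited file still parses as strict JSON, since a stray trailing comma or an inline comment is the usual failure mode. A minimal check using only the Python standard library; the file path here is a throwaway copy for illustration, so substitute ~/.claude/settings.json in practice:

```shell
# Sketch: validate that settings.json is strict JSON before restarting Claude.
CONFIG="/tmp/settings-check.json"   # throwaway copy; use ~/.claude/settings.json for real
cat > "$CONFIG" << 'EOF'
{
  "mcpServers": {
    "duckduckgo": { "command": "npx", "args": ["-y", "@duckduckgo/duckduckgo-mcp"] }
  }
}
EOF

# Exits non-zero (and prints the error position) if the JSON is malformed
python3 -m json.tool "$CONFIG" > /dev/null && echo "settings.json OK"
```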

Verify in Claude Code:

  • Run claude in terminal
  • Type /mcp → should see duckduckgo listed
  • Test with: “Search for ‘SOC 2 compliance requirements’” → should use DuckDuckGo, not Chrome

Cowork test: After local setup, open a Cowork session and try a research query. DuckDuckGo should appear in the tools list.


Step 2: Add Exa MCP (Free Remote Endpoint)

Why: Semantic search for business research, company research, people search. Higher quality than DuckDuckGo for technical queries. Free tier with no API key.

Local machine setup — Three options (pick one):

Option A: Add Exa’s free remote MCP endpoint (RECOMMENDED — zero local setup)

Edit ~/.claude/settings.json:

{
  "mcpServers": {
    "exa": {
      "command": "curl",
      "args": ["-sL", "https://mcp.exa.ai/mcp?tools=web_search_exa,web_search_advanced_exa,company_research_exa,people_search_exa,get_code_context_exa,crawling_exa,deep_researcher_start,deep_researcher_check"]
    }
  }
}

Note: MCP servers communicate over a persistent JSON-RPC transport (stdio or HTTP/SSE), so a one-shot curl fetch is unlikely to work as the "command". If this config fails, check whether your Claude Code version supports remote (HTTP) MCP servers natively, or fall back to Option B using the official Exa MCP GitHub repo.

Option B: Install locally from Exa GitHub repo

git clone https://github.com/exaai/exa-mcp-server.git
cd exa-mcp-server
npm install
npm run build

Then add to ~/.claude/settings.json:

{
  "mcpServers": {
    "exa": {
      "command": "node",
      "args": ["/path/to/exa-mcp-server/dist/index.js"],
      "env": {
        "EXA_API_KEY": "${EXA_API_KEY}"
      }
    }
  }
}

Note: leave EXA_API_KEY unset to use the free tier. Don't paste inline comments (// ...) into settings.json — strict JSON does not allow them.

Option C: Use Docker (if npm/curl not working)

docker run -d \
  -e EXA_API_KEY="" \
  -p 3001:3001 \
  exaai/mcp-server:latest

Then configure in ~/.claude/settings.json to connect to localhost:3001.

Recommendation: Option A (remote endpoint) — no local installation needed.

Verify:

  • Run claude in terminal
  • Type /mcp → should see exa listed
  • Test with: “Research Acme Corp using company_research_exa” → should return structured company data

Step 3: Set Environment Variable for Exa API Key (Optional — Free Tier Works Without)

If you want to lift rate limits later:

  1. Get a free Exa API key: https://api.exa.ai/
  2. Add it to your shell profile (~/.bashrc, ~/.zshrc, etc.):

     export EXA_API_KEY="your_api_key_here"

  3. Reload your shell: source ~/.bashrc (or source ~/.zshrc)
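A quick check that the key really is exported (visible to child processes, which is how Claude Code will see it) rather than merely set in the current shell. The key value below is a placeholder, not a real key:

```shell
# Sketch: confirm the key is actually exported, not just set locally.
export EXA_API_KEY="demo_key_123"   # placeholder value for illustration

# sh -c runs in a child process, so this only succeeds if the variable was exported
sh -c '[ -n "$EXA_API_KEY" ]' && echo "EXA_API_KEY is exported"
```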

Free tier limits: 1,000 requests/month (~33/day). Plenty for current Solanasis scale.


Step 4: Disable Claude.ai MCP Inheritance (CRITICAL)

Claude.ai may auto-inject MCP servers into your CLI sessions. This can cause conflicts.

Disable with environment variable:

# Run Claude Code with this flag:
ENABLE_CLAUDEAI_MCP_SERVERS=false claude
 
# Or add to your shell profile permanently:
# echo 'export ENABLE_CLAUDEAI_MCP_SERVERS=false' >> ~/.bashrc

Verify: Run claude, then /mcp — should ONLY see the servers you’ve explicitly configured, not extra Claude.ai servers.


Phase 1B: CLAUDE.md Configuration (LOCAL — Can Start Now)

These instructions guide Claude’s behavior persistently across sessions.

Step 5: Create User-Level CLAUDE.md

File location: ~/.claude/CLAUDE.md

cat > ~/.claude/CLAUDE.md << 'EOF'
# Global Instructions — Dmitri Zasage / Solanasis
 
## Identity
- Working with Dmitri Zasage, CEO of Solanasis LLC (Colorado)
- Solanasis = fractional CIO/CSIO/COO (fCIO/fCSIO/fCOO) firm for SMBs and nonprofits
- Core offerings: Security Assessments, Disaster Recovery Verification, Data Migrations, CRM Setup, Systems Integration, Responsible AI Implementation
- Currently a one-person operation with 1099 contractors
- Growth hacking mindset — unconventional/Smartcuts approach
 
## Planning: Proportional to Complexity
 
- **Routine tasks** (research a prospect, draft an email, update a doc): Just do it. No formal planning.
- **Medium tasks** (multi-step research, document creation, config changes): State a brief plan (3-5 bullets) before starting.
- **Complex tasks** (new services, system redesigns, multi-day projects, strategic decisions): Use the `planner` subagent for a structured plan.
 
Don't waste time planning the obvious. Save planning for work where the approach genuinely matters.
 
## Verification: Proportional to Stakes
 
Launch the `senior-reviewer` subagent ONLY for:
- **Client deliverables** — proposals, reports, assessments, anything going to someone outside Solanasis
- **Strategic decisions** — pricing, new offerings, partnerships, major process changes
 
For everything else (internal docs, research, emails, routine tasks), self-review is sufficient. Don't burn tokens verifying draft emails.
 
**Decision tree:**
- Client deliverables → ALWAYS run senior-reviewer
- Strategic decisions → ALWAYS run senior-reviewer
- Internal docs, research, analysis → Self-review only
- Draft emails, messages, routine tasks → No verification
- Quick answers, lookups, conversation → No verification
 
## Research: Use the Research Agent for Depth
 
Launch the `research-agent` subagent for:
- **Prospect/company research** — Before sales calls, when prepping for client meetings
- **Competitor analysis** — Understanding the competitive landscape
- **Market research** — Industry trends, technology landscape, compliance requirements
- **Technology evaluation** — Comparing tools, vendors, platforms before recommending to clients
- **Security/compliance research** — Threat landscape, framework requirements, vulnerability context
 
For simple lookups ("What does Acme Corp do?"), just use WebSearch directly — no need for the full research agent.
 
**Decision tree:**
- Deep prospect prep → research-agent
- Competitor analysis → research-agent
- Market/tech research → research-agent
- Strategic research feeding a decision → research-agent → then senior-reviewer on the recommendation
- Quick factual lookup → WebSearch directly
- Checking a specific URL → WebFetch directly
 
## Tool Routing — Efficiency, Not Restriction
 
The goal is efficiency, not restriction. Use whichever tool gets the job done fastest.
 
### Quick Guide
- **WebSearch** — fastest for finding information, searching topics, news, docs. Use this when you just need to look something up.
- **WebFetch** — fastest for reading a specific URL you already have.
- **Exa/DuckDuckGo MCP** — for deeper semantic search, company research, people research (if installed).
- **Chrome** — for everything interactive: logins, dashboards, forms, screenshots, site testing, web apps, visual inspection, or anything that needs a real browser.
 
### Efficiency tip
WebSearch returns results in one call. Chrome requires navigate → wait → read_page (3+ calls, more tokens). For a simple "what is X?" question, WebSearch is 10x faster. But for "log into ClickUp and check the sprint," Chrome is the only option.
 
**Use Chrome freely.** It's the right tool for interactive work. Just don't use it as a search engine when WebSearch is faster.
 
## Communication Preferences
 
- Bullet points and numbered lists preferred over long paragraphs
- Sub-bullet points for additional context
- Always spell out acronyms on first use (unless super common like AI, IT, CEO)
- Include "pro tips" whenever relevant to help Dmitri learn
- Clarifying questions via multiple choice:
  - Option A = recommended answer (with explanation of why)
  - Remaining options in order of recommendation
  - Include context with each option for learning
  - Provide textarea for additional notes per answer
 
## Technical Conventions
- SQL: lowercase keywords and built-in functions, snake_case for tables/cols/procs/funcs/views/triggers
- C#: snake_case for variables that match DB column names
- Codebehind pattern for Blazor, JSInterop for forms
- HTML generated via C# classes
EOF

Verify:

cat ~/.claude/CLAUDE.md | head -20
# Should show the Identity section

Step 6: Update Project-Level CLAUDE.md

File location: /sessions/admiring-modest-gauss/mnt/_solanasis/solanasis-docs/.claude/CLAUDE.md

Read existing file first:

cat /sessions/admiring-modest-gauss/mnt/_solanasis/solanasis-docs/.claude/CLAUDE.md

Add these sections to the end (don’t overwrite security rules):

## MCP Tools
 
### Available MCP Servers
- DuckDuckGo (web, news, image search — unlimited, free)
- Exa (semantic search, company research, people search — free tier)
- ClickUp connector (project management)
- Google Calendar (event management)
- Gmail (email search, drafts)
- Canva (design)
 
### Preferred Research Tools (in order)
1. WebSearch — For finding info quickly
2. Exa (if available) — For business/company research
3. DuckDuckGo (if available) — Unlimited backup search
4. WebFetch — For reading a specific URL
5. Chrome read_page — Only if above tools fail (JS-heavy pages)
 
DO NOT use Chrome for basic web research. Chrome is for interactive tasks only.
 
## Active Connectors
- ClickUp (project management)
- Google Calendar
- Gmail
- Canva (design)
 
## Tech Stack Context
- ClickUp for PM, Xero for accounting, Coda as wiki
- Google Workspace, Google Voice for business phone
- Website: solanasis.com (cPanel/Namecheap Stellar VPS)
- Brevo for email marketing (List ID: 2)
- Payment: 50% upfront / 50% on delivery (full upfront under $2,500)

Verify:

tail -20 /sessions/admiring-modest-gauss/mnt/_solanasis/solanasis-docs/.claude/CLAUDE.md

Phase 1C: Subagent Files (LOCAL — Create in CLI)

These enable planning, research, and verification workflows.

Step 7: Create Subagent Directory

mkdir -p ~/.claude/agents

Step 8: Create Senior Reviewer Subagent

File: ~/.claude/agents/senior-reviewer.md

cat > ~/.claude/agents/senior-reviewer.md << 'AGENT_EOF'
---
name: senior-reviewer
description: >
  Senior quality reviewer. Use this agent to verify ANY substantive work
  before presenting to the user. This includes: documents, plans, research
  findings, configurations, code, templates, and client deliverables.
  Use proactively after completing work.
tools:
  - Read
  - Grep
  - Glob
  - Bash
  - WebSearch
  - WebFetch
disallowedTools:
  - Write
  - Edit
  - Agent
model: opus
maxTurns: 10
---
 
# Senior Reviewer Agent
 
You are a senior technical and strategic reviewer for Solanasis, a
fractional CIO/CSIO/COO (fCIO/fCSIO/fCOO) firm targeting SMBs and
nonprofits.
 
## Your Job
 
Review the work that was just completed and provide an honest assessment.
You are the quality gate before anything goes to the user (Dmitri, the CEO).
 
## Review Checklist
 
For EVERY review, check:
 
### Accuracy
- Are facts correct? Cross-reference claims with web search if needed.
- Are technical details accurate (framework names, tool capabilities, pricing, configuration syntax)?
- Are there unsupported claims or hallucinations?
 
### Completeness
- Does the output fully address what was asked?
- Are there obvious gaps or missing sections?
- Would Dmitri need to ask follow-up questions to use this?
 
### Best Practices
- Is this truly the best approach, or is there a better way?
- Are there industry best practices being ignored?
- Would an experienced consultant do it differently?
 
### Solanasis Context
- Does this align with Solanasis's business model (small team, 1099 contractors, credibility-building phase)?
- Is this practical for a one-person operation scaling up?
- Does it fit the "unconventional/Smartcuts" approach Dmitri prefers?
 
### Tool Usage Review
- Were the right tools used? (WebSearch vs Chrome, etc.)
- Was the approach efficient or wasteful?
- Could this have been done faster/better?
 
## Output Format
 
Return a structured verdict:
 

Senior Review Verdict

Status: APPROVED | APPROVED WITH NOTES | NEEDS REVISION

Accuracy: [Pass/Fail] — [brief note]
Completeness: [Pass/Fail] — [brief note]
Best Practices: [Pass/Fail] — [brief note]
Solanasis Fit: [Pass/Fail] — [brief note]
Efficiency: [Pass/Fail] — [brief note]

Issues Found: (if any)

  • [Issue 1]
  • [Issue 2]

Suggestions: (if any)

  • [Suggestion 1]
  • [Suggestion 2]

If status is NEEDS REVISION, be specific about what needs to change.
AGENT_EOF

Verify:

cat ~/.claude/agents/senior-reviewer.md | head -20

Step 9: Create Planner Subagent

File: ~/.claude/agents/planner.md

cat > ~/.claude/agents/planner.md << 'AGENT_EOF'
---
name: planner
description: >
  Task planning agent. Use for any multi-step task to plan the approach
  before execution. Determines which tools to use, what order to work in,
  and identifies potential issues. Use proactively for complex requests.
tools:
  - Read
  - Grep
  - Glob
  - WebSearch
  - WebFetch
disallowedTools:
  - Write
  - Edit
  - Bash
  - Agent
model: opus
maxTurns: 5
---
 
# Task Planner Agent
 
You plan the most efficient approach for completing tasks.
 
## Your Job
 
Given a task description, produce a brief execution plan that:
 
1. **Identifies the right tools** — Specifically:
   - WebSearch for finding information (NEVER Chrome for research)
   - WebFetch for reading specific URLs (NEVER Chrome navigate + read_page)
   - Chrome ONLY for authenticated sites, form filling, or explicit browser automation
   - Read/Glob/Grep for local files
   - Bash for commands and scripts
 
2. **Orders the steps** — What needs to happen first, what can be parallel
 
3. **Flags risks** — What could go wrong, what to watch out for
 
4. **Estimates scope** — Quick (< 5 min), Medium (5-15 min), Complex (15+ min)
 
## Output Format
 

Execution Plan

Scope: Quick | Medium | Complex

Steps:

  1. [Step] — using [Tool]
  2. [Step] — using [Tool]
  3. [Step] — using [Tool]

Tool Routing:

  • Research: WebSearch (no Chrome needed)
  • URL reading: WebFetch (no Chrome needed)
  • [Any Chrome needed?]: [Yes/No — reason]

Risks: [Any gotchas]
Verification: [What senior-reviewer should check]


Keep plans concise. 3-7 steps for most tasks.
AGENT_EOF

Verify:

cat ~/.claude/agents/planner.md | head -20

Step 10: Create Research Agent

File: ~/.claude/agents/research-agent.md

cat > ~/.claude/agents/research-agent.md << 'AGENT_EOF'
---
name: research-agent
description: >
  Deep research specialist. Use this agent for ANY research task that requires
  searching multiple sources, synthesizing findings, or building a comprehensive
  picture. This includes: prospect/company research, competitor analysis, market
  research, technology evaluation, security landscape research, compliance
  requirements, vendor comparison, and due diligence. Use proactively whenever
  research depth matters.
tools:
  - WebSearch
  - WebFetch
  - Read
  - Grep
  - Glob
  - Bash
disallowedTools:
  - Write
  - Edit
  - Agent
  - mcp__Claude_in_Chrome__navigate
  - mcp__Claude_in_Chrome__form_input
  - mcp__Claude_in_Chrome__file_upload
model: opus
maxTurns: 15
---
 
# Research Agent — Solanasis Deep Research Specialist
 
You are a senior research analyst for Solanasis, a fractional CIO/CSIO/COO
(fCIO/fCSIO/fCOO) firm targeting SMBs and nonprofits.
 
## Your Job
 
Conduct thorough, multi-source research and return a structured, citation-rich
report. You are the research equivalent of the senior-reviewer — a specialist
that ensures research quality before findings reach the CEO (Dmitri).
 
## Research Tool Priority (Efficiency Order)
 
Use the fastest tool that gets the job done:
 
1. **WebSearch** — First choice for finding information, news, documentation,
   articles. Fast and cheap.
2. **Exa MCP tools** (if available) — For semantic search, company research
   (`company_research_exa`), people search (`people_search_exa`), deep
   research reports (`deep_researcher_start`/`deep_researcher_check`).
3. **DuckDuckGo MCP** (if available) — Alternative search with no rate limits.
4. **WebFetch** — For reading specific URLs you've found. Extract content
   from articles, docs, reports.
5. **Read/Grep/Glob** — For analyzing local files, uploaded documents,
   previous research.
6. **Chrome read-only tools** — `read_page`, `get_page_text` are available
   as FALLBACK for pages that require JavaScript rendering or have
   anti-scraping protection. State why WebFetch didn't work if you use these.
 
## Research Standards
 
### Source Quality
- Prefer official sources (company websites, SEC filings, government databases, vendor documentation)
- Cross-reference claims across 2+ sources when possible
- Flag single-source claims as "unverified" or "reported by [source]"
- Note publication dates — flag anything older than 12 months as potentially outdated
 
### Intellectual Honesty
- Clearly distinguish: verified facts vs. inferences vs. speculation
- Say "I couldn't find information on X" rather than making it up
- Note conflicting information when sources disagree
- Flag gaps in the research — what COULDN'T you find that matters?
 
### Solanasis Context
- We target SMBs (Small and Medium Businesses) and nonprofits
- Our wedge offerings: security assessments, disaster recovery (DR) verification, data migrations
- Goal: become their operational resilience partner with recurring revenue
- Growth hacking mindset — look for unconventional angles
- Currently building credibility — partner/association references are valuable
 
## Output Format
 

Research Report: [Topic]

Date: [date]
Research Depth: Quick Scan | Standard | Deep Dive
Sources Consulted: [number]

Key Findings

  1. [Finding 1] — [Source]
  2. [Finding 2] — [Source]
  3. [Finding 3] — [Source]

Analysis

[Synthesized narrative connecting the findings]

Solanasis Relevance

[How this connects to our business — opportunities, risks, action items]

Gaps & Limitations

  • [What couldn’t be found]
  • [What needs further investigation]

Sources


Adjust depth based on what's asked. Quick prospect lookups don't need the full template — use judgment.
AGENT_EOF

Verify:

cat ~/.claude/agents/research-agent.md | head -20
ls -la ~/.claude/agents/
# Should show: planner.md, senior-reviewer.md, research-agent.md
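Before testing, a quick mechanical check on the three files catches the most common failure mode (broken YAML frontmatter, which silently prevents the agent from loading). A sketch using fixture paths; point AGENTS at ~/.claude/agents/ in practice:

```shell
# Sketch: sanity-check each subagent file has frontmatter that opens on
# line 1 and declares a name. Fixture directory for illustration only.
AGENTS="/tmp/demo-agents"   # stand-in for ~/.claude/agents
mkdir -p "$AGENTS"
printf -- '---\nname: planner\nmodel: opus\n---\n' > "$AGENTS/planner.md"

for f in "$AGENTS"/*.md; do
  if [ "$(head -1 "$f")" = "---" ] && grep -q '^name:' "$f"; then
    echo "OK: $f"
  else
    echo "BAD frontmatter: $f" >&2
  fi
done
```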

Phase 1D: Test MCP & Subagents in Claude Code

Step 11: Verify MCP Installation

# Start Claude Code CLI
claude
 
# In the CLI, run:
/mcp
# Should output: duckduckgo, exa (and any other MCPs you configured)
 
# Test DuckDuckGo
# (in Claude chat)
Search for "SOC 2 compliance requirements"
# Should use duckduckgo tool, not Chrome
 
# Test Exa
# (in Claude chat)
Research the company "Anthropic" using Exa
# Should use exa tools, return structured company data

Success criteria:

  • /mcp shows both duckduckgo and exa
  • WebSearch is used for basic queries (faster than Chrome)
  • Exa is used for company/people research (better than WebSearch)
  • Chrome is NOT used for research-only tasks

Step 12: Test Subagents

# In Claude Code CLI, test each subagent:
 
# Test 1: Planner
Plan a task to research three SMB prospects and draft cold emails
# Should invoke planner, return a structured 5-step plan
 
# Test 2: Research Agent
Research the cybersecurity compliance landscape for SMBs in 2026
# Should invoke research-agent, return a structured report
 
# Test 3: Senior Reviewer
Write a client proposal, then ask for review
# Should invoke senior-reviewer after you finish, return verdict

Success criteria:

  • Planner delegates complex tasks automatically
  • Research agent is used for depth research automatically (via description)
  • Senior reviewer is launched for client work automatically

Phase 2A: Marketplace Plugins Installation (LOCAL CLI)

Step 13: Install Phase 1 Plugins (20 Skills from 6 Plugins)

Location: Local Claude Code CLI (installed on your machine, persists)

Commands to run locally:

claude
# In the CLI, use plugin install or equivalent command
# (Exact syntax depends on your Claude Code version)
 
# Recommended installation order (install in parallel if possible):
 
/plugin install operations@knowledge-work-plugins
/plugin install sales@knowledge-work-plugins
/plugin install marketing@knowledge-work-plugins
/plugin install engineering@knowledge-work-plugins
/plugin install customer-support@knowledge-work-plugins
/plugin install data@knowledge-work-plugins

Note: If these commands don’t work, try:

/plugin marketplace search operations
# Then click install on the Operations plugin

Verify:

/plugin list
# Should show 6 new plugins (operations, sales, marketing, engineering, customer-support, data)
# Plus existing: legal, productivity, brand-voice, cowork-plugin-management

Step 14: Configure Model Invocation Mapping

After the plugins install, set 8 of the 20 skills to disableModelInvocation: true to keep the context budget manageable.

Location: .claude/plugins/ directory or plugin configuration files

Skills to set to MANUAL-ONLY (8 total):

| Plugin | Skill | Reason |
| --- | --- | --- |
| Operations | change-management | Invoke only when planning migrations |
| Operations | resource-planning | Invoke only for staffing/capacity |
| Sales | create-an-asset | Invoke explicitly for proposals |
| Sales | competitive-intelligence | Invoke explicitly for battlecards |
| Sales | daily-briefing | Invoke explicitly each morning |
| Marketing | campaign-planning | Invoke explicitly for campaigns |
| Customer Support | knowledge-management | Invoke after resolving issues |
| Data | data-validation | Invoke explicitly during migration QA |

Skills to keep AUTO-INVOKE (12 total):

| Plugin | Skill | Reason |
| --- | --- | --- |
| Operations | compliance-tracking | Triggers on “SOC 2”, “GDPR” — frequent |
| Operations | risk-assessment | Triggers on “risk”, “what could go wrong” — frequent |
| Operations | vendor-management | Triggers on “evaluate vendor” — frequent |
| Operations | process-optimization | Triggers on “bottleneck” — frequent |
| Sales | account-research | Triggers on “research [company]” — daily |
| Sales | draft-outreach | Triggers on “draft outreach to” — daily |
| Sales | call-prep | Triggers on “prep me for my call” — daily |
| Marketing | content-creation | Triggers on “write a blog post” — frequent |
| Engineering | documentation | Triggers on “write docs” — frequent |
| Engineering | incident-response | Triggers on “incident”, “production down” — critical |
| Data | data-exploration | Triggers on “profile this dataset” — migration work |
| Data | sql-queries | Triggers on “write a query” — migration work |

How to disable model invocation:

Option A: Via CLI (if available)

# (Syntax varies by Claude Code version — check `/help`)
/plugin skill disable-model-invocation change-management
/plugin skill disable-model-invocation resource-planning
# ... repeat for all 8 skills

Option B: Edit plugin SKILL.md files directly

Find the plugin files in ~/.claude/plugins/ or the marketplace folder, then edit each SKILL.md to add:

disableModelInvocation: true

Or in the skill frontmatter:

---
name: change-management
disableModelInvocation: true
---

Verify:

# In Claude CLI or Cowork, ask:
"What skills can you invoke?"
# Should list 12 skills (auto-invoke), not 20
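The 12/8 split can also be verified mechanically by counting the flag across SKILL.md files. A sketch with fixture files; the real plugin directory layout may differ, so point PLUGINS at wherever your plugins actually live:

```shell
# Sketch: count how many SKILL.md files are flagged manual-only.
PLUGINS="/tmp/demo-plugins"   # stand-in for ~/.claude/plugins
mkdir -p "$PLUGINS/change-management" "$PLUGINS/account-research"
printf -- '---\nname: change-management\ndisableModelInvocation: true\n---\n' \
  > "$PLUGINS/change-management/SKILL.md"
printf -- '---\nname: account-research\n---\n' \
  > "$PLUGINS/account-research/SKILL.md"

manual=$(grep -rl 'disableModelInvocation: true' "$PLUGINS" | wc -l | tr -d ' ')
total=$(find "$PLUGINS" -name 'SKILL.md' | wc -l | tr -d ' ')
echo "manual-only skills: $manual of $total"   # after Step 14, expect 8 of 20
```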

Phase 2B: Custom Skills Build

Step 15: Create Research-First Skill

File: ~/.claude/skills/research-first/SKILL.md

mkdir -p ~/.claude/skills/research-first
 
cat > ~/.claude/skills/research-first/SKILL.md << 'SKILL_EOF'
---
name: research-first
description: >
  Quick focused research on a specific topic. Research using MCP tools
  (Exa, DuckDuckGo, WebSearch) before falling back to WebFetch or Chrome.
  For deeper research, delegate to the research-agent subagent.
  Invoke manually with /research-first.
disableModelInvocation: true
---
 
# Research-First Skill
 
A lightweight skill for focused research tasks, complementing the deeper `research-agent` subagent.
 
## How to Use
 
Invoke manually: `/research-first [topic]`
 
Examples:
- `/research-first SOC 2 Type II compliance for SaaS`
- `/research-first disaster recovery best practices for SMBs`
- `/research-first latest data migration tools 2026`
 
## Research Sequence
 
1. **Exa semantic search** — Company research, people search, deep researcher tools
2. **DuckDuckGo** — Web and news search if Exa unavailable
3. **WebSearch** — General web search fallback
4. **WebFetch** — Read specific documents you found
5. **Chrome read_page** — Only if above fails (JS-heavy pages)
 
**NEVER use Chrome as the primary research tool.** Chrome is for interactive tasks only.
 
## Output Format
 
Return findings in this format:
 

Research: [Topic]

Top Findings

  1. [Finding] — Source
  2. [Finding] — Source
  3. [Finding] — Source

Key Insights

[Synthesized summary]

Next Steps

[What to do with this research]


Keep research focused and actionable. Adjust depth based on the topic.
SKILL_EOF

Verify:

# In Claude CLI:
/skill research-first
# Should prompt for topic and return research results

Phase 3: Context Budget Verification

Step 16: Calculate Total Context Load

Before plugin installation, verify context window budget:

| Component | Size | Count | Subtotal |
| --- | --- | --- | --- |
| Existing skills | ~200 chars | 6 | ~1,200 chars |
| Phase 1 plugins (12 auto-invoke) | ~200 chars | 12 | ~2,400 chars |
| Subagent descriptions | ~300 chars | 3 | ~900 chars |
| CLAUDE.md content | ~800 chars | 1 | ~800 chars |
| Memory/context files | variable | 1 | ~500 chars |
| TOTAL ACTIVE | | | ~5,800 chars |

Available context budget (at 128K context):

  • System prompt: ~2,000 chars
  • Reserved for conversation: ~120,000 chars
  • Available for tool descriptions: ~2,560 chars

Status: ⚠️ OVER BUDGET

Mitigation: Implement Model Invocation Mapping

  • Keep 12 auto-invoke (high-frequency) skills enabled
  • Set 8 skills to manual-only (disable model invocation)
  • Result: ~2,400 chars active (under ~2,560 budget)
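The budget arithmetic can be spot-checked mechanically by summing the length of each skill's description line. A rough sketch; the files and single-line description field are illustrative assumptions, not the real SKILL.md format:

```shell
# Sketch: rough context-cost estimate, summing description field lengths.
SKILLS="/tmp/demo-budget"   # fixture dir; use the real SKILL.md locations in practice
mkdir -p "$SKILLS"
printf -- 'description: Triggers on SOC 2, GDPR (frequent compliance lookups)\n' > "$SKILLS/a.md"
printf -- 'description: Triggers on research a company (daily prospect prep)\n' > "$SKILLS/b.md"

total=0
for f in "$SKILLS"/*.md; do
  # wc -c counts the bytes of the description line, newline included
  n=$(grep '^description:' "$f" | wc -c | tr -d ' ')
  total=$((total + n))
done
echo "approx description chars: $total"
```

If the total for auto-invoke skills creeps past the ~2,560-char budget, move more skills to manual-only.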

Verify after installation:

# In Claude CLI, check how many skills are loaded
/skill list
# Count auto-invoke vs manual-only
 
# Ask Claude:
"Are you seeing 'Excluded skills' warnings?"
# If YES → not enough context budget, disable more skills
# If NO → context budget is OK

SECTION 2: INSTALLATION CHECKLIST (STEP-BY-STEP)

Prerequisites

  • Claude Code CLI installed locally (NOT Cowork)
  • Terminal access to your machine (not the VM)
  • npm installed (npm --version returns a version)
  • Git installed (git --version returns a version)
  • Max plan is active (needed for subagent token budget)

Phase 1: MCP Servers (LOCAL)

  • DuckDuckGo MCP installed (npm install -g duckduckgo-mcp)
  • DuckDuckGo added to ~/.claude/settings.json
  • DuckDuckGo tested in Claude CLI (/mcp shows it)
  • Exa MCP configured (remote endpoint or local)
  • Exa tested in Claude CLI
  • ENABLE_CLAUDEAI_MCP_SERVERS=false verified (or disabled)

Phase 1: CLAUDE.md (LOCAL)

  • User-level CLAUDE.md created (~/.claude/CLAUDE.md)
  • Project-level CLAUDE.md updated (solanasis-docs/.claude/CLAUDE.md)
  • Both files verified with cat command

Phase 1: Subagents (LOCAL)

  • ~/.claude/agents/ directory created
  • senior-reviewer.md created and verified
  • planner.md created and verified
  • research-agent.md created and verified
  • Subagents tested in Claude CLI

Phase 2: Plugins (LOCAL)

  • operations plugin installed (/plugin install operations@knowledge-work-plugins)
  • sales plugin installed
  • marketing plugin installed
  • engineering plugin installed
  • customer-support plugin installed
  • data plugin installed
  • Model invocation mapping applied (8 skills set to manual-only)
  • Total skills: 12 auto-invoke, 8 manual-only

Phase 2: Custom Skills

  • research-first skill created (~/.claude/skills/research-first/SKILL.md)
  • research-first skill tested

Phase 3: Verification

  • Context budget verified (no “Excluded skills” warnings)
  • Chrome permissions set to Allow (not Ask)
  • DuckDuckGo/Exa preferred for research (tested)
  • Chrome reserved for interactive tasks (tested)
  • Subagents auto-invoke correctly (tested)
  • Plugins show in /plugin list (tested)

Phase 3: Cowork Session

  • Open Cowork session with solanasis-docs folder
  • Verify plugins appear in skill menu
  • Verify MCP tools available in chat
  • Test full workflow: research → plan → create → verify

SECTION 3: WHAT CAN BE TESTED IN COWORK NOW

These items CAN be tested in the current Cowork session before local installation:

  1. Skill logic validation — Create/test custom skills using skill-creator (already installed)
  2. Plugin compatibility — Read plugin docs, verify they don’t conflict with existing skills
  3. Workflow simulation — Manually run through the research → plan → create → verify loop
  4. CLAUDE.md effectiveness — See if instructions guide tool selection correctly
  5. Context budget impact — Estimate token costs of new skills

These items CANNOT be tested in Cowork (will reset after session):

  • Permanent MCP server installation
  • Plugin installation (marketplace plugins only install in local Claude Code)
  • Subagent file creation (needs to persist to ~/.claude/agents/)
  • System-wide configuration (ENABLE_CLAUDEAI_MCP_SERVERS flag)

SECTION 4: ROLLBACK PLAN (IF SOMETHING BREAKS)

If Context Window Overflows

Symptom: Claude says “Excluded skills” or responses become evasive

Fix:

  1. Move more skills to the manual-only list (from 8 manual to 12 manual)
  2. Remove Phase 2 plugins temporarily
  3. Keep Phase 1A (MCP) + Phase 1B (CLAUDE.md) + 3 critical skills

If MCP Servers Stop Working

Symptom: WebSearch works but DuckDuckGo/Exa don’t appear in /mcp

Fix:

# Check for conflicts
/mcp
 
# Remove and reinstall
npm uninstall -g duckduckgo-mcp
npm install -g duckduckgo-mcp
 
# Restart Claude
claude

If Plugins Cause Conflicts

Symptom: Skill descriptions collide, or unexpected behaviors emerge

Fix:

  1. Note which plugin caused the issue
  2. Run /plugin uninstall [plugin-name]
  3. Verify conflict is gone
  4. Reinstall one plugin at a time

If Subagents Don’t Auto-Invoke

Symptom: A task clearly calls for a subagent (e.g. deep research), but it is not invoked

Fix:

  1. Check subagent description field — must include “use proactively”
  2. Verify file location: ~/.claude/agents/[name].md
  3. Verify YAML frontmatter is valid (no syntax errors)
  4. Restart Claude Code: exit then claude
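Fix steps 1-3 can be checked in one pass. A hedged sketch that scans agent files for the trigger phrase; the directory and file contents below are fixtures, so point it at ~/.claude/agents/ in practice:

```shell
# Sketch: confirm each agent description includes the proactive trigger phrase.
AGENTS="/tmp/demo-agents-check"   # stand-in for ~/.claude/agents
mkdir -p "$AGENTS"
printf -- '---\nname: planner\ndescription: Plans tasks. Use proactively for complex requests.\n---\n' \
  > "$AGENTS/planner.md"

for f in "$AGENTS"/*.md; do
  if grep -qi 'use proactively' "$f"; then
    echo "OK: $f includes the proactive trigger phrase"
  else
    echo "MISSING trigger phrase: $f" >&2
  fi
done
```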

If Chrome Gets Blocked Accidentally

Symptom: “I don’t have permission to use Chrome”

Fix:

/permissions
# Find Chrome tools, set to "Allow"

SECTION 5: RISKS & GOTCHAS

High-Risk Items

| Risk | Impact | Mitigation |
| --- | --- | --- |
| MCP server command path wrong | Claude can’t find Exa/DDG tools | Verify with which duckduckgo-mcp before adding to settings |
| Context window overflow | “Excluded skills” warnings, degraded performance | Model invocation mapping (already in plan) |
| Plugin version conflicts | Skill descriptions collide, unexpected behavior | Install one plugin at a time, test between installs |
| Subagent nesting (Claude limitation) | Senior reviewer can’t delegate to research agent | Design subagents to be self-sufficient (already done) |
| ENABLE_CLAUDEAI_MCP_SERVERS injection | Unexpected MCP servers appear in sessions | Set environment variable at CLI startup |

Medium-Risk Items

| Risk | Impact | Mitigation |
| --- | --- | --- |
| npm/Docker not available | Can’t install DuckDuckGo locally | Use Exa remote endpoint instead |
| Brave Search MCP setup friction | Requires API key, more complex | Skip for now, use DuckDuckGo + Exa |
| Cowork session loss | All unsaved custom skills/plugins reset | Copy custom skills to local machine immediately |
| Plugin marketplace unavailable | Can’t install plugins in Cowork | All plugins must be installed via local Claude Code CLI |
| Research-first skill underutilized | Skill exists but not used | Include in CLAUDE.md to remind when to use manually |

Low-Risk Items

| Risk | Impact | Mitigation |
| --- | --- | --- |
| Exa free tier rate limit hit | Search tools slow after 1,000 queries/month | Get free API key to lift limit |
| DuckDuckGo search quality issues | Some queries return weak results | Use Exa or WebSearch as fallback |
| Skill descriptions need tweaking | Auto-invoke too aggressive or too passive | Update descriptions after first week of use |

SECTION 6: VERIFICATION TESTS (RUN AFTER EACH PHASE)

Test 1: MCP Tools Available

claude
/mcp
# Output should include: duckduckgo, exa

Test 2: Research-First Tool Selection

# In Claude chat:
"What is the latest zero-day vulnerability landscape?"
# Verify: Uses Exa or DuckDuckGo, NOT Chrome

Test 3: Chrome Reserved for Interactive

# In Claude chat:
"Log into ClickUp and show me the sprint board"
# Verify: Uses Chrome (because it requires login)

Test 4: Subagents Auto-Invoke

# In Claude chat:
"Research this prospect before my call: Acme Corp"
# Verify: research-agent subagent invokes automatically

Test 5: Planning on Complex Tasks

# In Claude chat:
"Design a security assessment service offering for SMBs"
# Verify: planner subagent invokes, returns structured plan

Test 6: Verification on Client Work

# In Claude chat:
"Draft a proposal for Acme Corp for a security assessment"
# Verify: senior-reviewer subagent invokes after you finish

Test 7: Plugin Skills Appear

# In Claude chat:
"What skills do I have available?"
# Verify: Lists 20 skills (12 auto + 8 manual)

Test 8: Context Budget OK

# In Claude chat:
"Do you see any 'Excluded skills' warnings?"
# Verify: "No" — context budget is sufficient
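Alongside the chat-based tests, a filesystem spot-check catches missing files early. A sketch that assumes the directory layout used in this plan; it populates a throwaway directory for demonstration — set CLAUDE_DIR to ~/.claude on your machine:

```shell
# Spot-check that the expected install artifacts exist on disk.
# A throwaway dir is populated here for demonstration; use CLAUDE_DIR=~/.claude
# for a real check. Paths follow this plan's layout.
CLAUDE_DIR="$(mktemp -d)"
mkdir -p "$CLAUDE_DIR/agents" "$CLAUDE_DIR/skills/research-first"
touch "$CLAUDE_DIR/CLAUDE.md" "$CLAUDE_DIR/agents/planner.md"

for path in \
  "$CLAUDE_DIR/CLAUDE.md" \
  "$CLAUDE_DIR/agents/planner.md" \
  "$CLAUDE_DIR/agents/research-agent.md" \
  "$CLAUDE_DIR/skills/research-first"
do
  if [ -e "$path" ]; then
    echo "OK      $path"
  else
    echo "MISSING $path"
  fi
done
```

In the demo run above, research-agent.md is reported MISSING because it was never created; on a completed install every line should read OK.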

SECTION 7: AFTER INSTALLATION — FIRST WEEK OPERATIONS

Day 1: Verify All Connections

  • Test each MCP tool (Exa, DuckDuckGo)
  • Test each subagent (planner, research-agent, senior-reviewer)
  • Test critical plugins (operations, sales, engineering, data)
  • Verify CLAUDE.md is guiding tool selection

Day 2-3: Run Real Workflows

  • Research a prospect → draft outreach → create asset (uses sales skills)
  • Plan a security assessment → document findings (uses operations + engineering)
  • Extract and validate data from a sample migration (uses data skills)

Day 4-5: Monitor Chrome Usage

  • Track: How often is Chrome used for research vs. interactive?
  • Goal: Chrome used for < 20% of research-only tasks
  • If > 20%: Add PreToolUse Chrome gate hook (Section 5 of Architecture doc)

Day 6-7: Adjust Model Invocation Mapping

  • Identify which manual-only skills should auto-invoke
  • Re-enable any skills you find yourself invoking > 3x/day
  • Disable any skills that auto-fire incorrectly

End of Week 1: Measure ROI

  • Count: How many client deliverables used plugins?
  • Count: How many were improved by senior-reviewer?
  • Count: How many research tasks used research-agent?
  • Decision: Keep all plugins, or disable any that aren’t earning their context?

APPENDIX: EXACT COMMAND REFERENCE

MCP Server Installation Commands

# DuckDuckGo (npm)
npm install -g duckduckgo-mcp
 
# Exa (remote endpoint — no local install needed)
# Just add to ~/.claude/settings.json
 
# Verify MCP is active
claude
/mcp
 
# Test DuckDuckGo
"Search for SOC 2 compliance using DuckDuckGo"
 
# Test Exa
"Research Anthropic Inc using Exa"
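The “just add to settings.json” step for Exa deserves a concrete shape. A hedged sketch: the mcpServers block follows Claude Code’s MCP config convention, but the exact field names and the endpoint URL are assumptions here — confirm them against Exa’s MCP documentation before use. A temp file is written for illustration; target ~/.claude/settings.json for real.

```shell
# Sketch of the Exa remote-endpoint config. Field names ("mcpServers",
# "type", "url") and the endpoint URL are assumptions -- verify against
# Exa's MCP docs. Writing to a temp file; use ~/.claude/settings.json for real.
CONFIG="$(mktemp)"
cat > "$CONFIG" << 'EOF'
{
  "mcpServers": {
    "exa": {
      "type": "http",
      "url": "https://mcp.exa.ai/mcp"
    }
  }
}
EOF

# Validate the JSON before restarting Claude -- a stray comma here is a
# common reason MCP servers silently fail to appear in /mcp.
python3 -m json.tool "$CONFIG" > /dev/null && echo "settings JSON: valid"
```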

Plugin Installation Commands

claude
 
# Install marketplace plugins
/plugin install operations@knowledge-work-plugins
/plugin install sales@knowledge-work-plugins
/plugin install marketing@knowledge-work-plugins
/plugin install engineering@knowledge-work-plugins
/plugin install customer-support@knowledge-work-plugins
/plugin install data@knowledge-work-plugins
 
# List installed plugins
/plugin list
 
# Disable model invocation for a skill
/plugin skill disable-model-invocation [skill-name]

File Creation Commands

# Create directories
mkdir -p ~/.claude/agents
mkdir -p ~/.claude/skills/research-first
 
# Create files (use cat << 'EOF' pattern for multi-line content)
cat > ~/.claude/CLAUDE.md << 'EOF'
[content here]
EOF
 
# Verify files exist
ls -la ~/.claude/
ls -la ~/.claude/agents/
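As a worked instance of the heredoc pattern above, here is what the research-first skill file could look like. The SKILL.md name/description frontmatter follows the Agent Skills convention; the body wording is illustrative, not the final skill:

```shell
# Worked example of the heredoc pattern: a minimal SKILL.md for the custom
# research-first skill. Frontmatter fields follow the Agent Skills
# convention; the body text is illustrative. A temp dir is used here --
# target ~/.claude/skills/research-first for real.
SKILL_DIR="$(mktemp -d)"
cat > "$SKILL_DIR/SKILL.md" << 'EOF'
---
name: research-first
description: Prefer MCP search tools (Exa, DuckDuckGo) over Chrome for read-only research. Use when a task starts with information gathering.
---
Before opening Chrome, check whether the question can be answered with
WebSearch, DuckDuckGo, or Exa. Reserve Chrome for logins and interaction.
EOF

# Confirm the file landed and the frontmatter opens correctly.
ls -la "$SKILL_DIR"
head -4 "$SKILL_DIR/SKILL.md"
```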

Environment Variables

# Disable Claude.ai MCP inheritance
export ENABLE_CLAUDEAI_MCP_SERVERS=false
claude
 
# Or add to shell profile permanently
echo 'export ENABLE_CLAUDEAI_MCP_SERVERS=false' >> ~/.bashrc
source ~/.bashrc

APPENDIX: DEFINITION OF “DONE”

Installation is complete when:

  1. MCP Tools — All research tools (Exa, DuckDuckGo, WebSearch) are available and tested
  2. CLAUDE.md — Both user-level and project-level instructions are in place
  3. Subagents — All three (planner, research-agent, senior-reviewer) are created and auto-invoke correctly
  4. Plugins — All 6 Phase 1 plugins installed, 12 auto-invoke, 8 manual-only
  5. Custom Skills — research-first skill created and tested
  6. Context Budget — No “Excluded skills” warnings, all 20 skills accessible
  7. Verification Tests — All 8 verification tests pass
  8. First Week — Full week of monitoring Chrome usage, adjusting model invocation mapping

Success Metric: Claude can research → plan → create → verify a full client deliverable without manual intervention, using the right tools efficiently.


Document Status: READY FOR EXECUTION
Next Step: Review this plan with Dmitri, then proceed with Phase 1 (local installation)
Expected Timeline: 4-6 hours for Phase 1, 2-3 hours for Phase 2, 1 week for Phase 3 (monitoring)