Solanasis Discovery Calls for Operational Resilience

Research-Grade Playbook, Briefing Memo, and AI Handoff

Date: 2026-03-16
Prepared for: Solanasis
Prepared from: This discussion plus source verification against official guidance


Executive Summary

This document converts the discussion into a structured, evidence-labeled operating artifact for Solanasis discovery calls.

Top takeaways

  1. [Verified] A strong operational resilience discovery call should not be framed as a generic IT sales call. It should focus on the prospect’s critical operations, dependencies, governance, ability to respond, and ability to recover.
    Why this is verified: The Federal Reserve defines operational resilience as the ability to deliver operations, including critical operations and core business lines, through disruption from any hazard. NIST CSF 2.0 organizes cybersecurity outcomes around Govern, Identify, Protect, Detect, Respond, and Recover rather than a narrow tool checklist.
    Evidence: Federal Reserve operational resilience topic page; NIST CSF 2.0 (see Primary References below for URLs).

  2. [Verified] The “baseline basics” discussed in the conversation are legitimate baseline topics, not random MSP trivia. Examples include MFA, password managers, backup protection and restore testing, patching, and separation of admin vs. daily-use accounts.
    Evidence: NIST Small Business Cybersecurity Basics; CISA Cyber Guidance for Small Businesses; CIS IG1 (see Primary References below for URLs).

  3. [Assistant-stated but unverified] The recommended call structure is to start with business impact and critical operations first, then move into baseline controls and resilience readiness.
    Why this matters: It positions Solanasis as a strategic operator, not a commodity IT vendor.
    Note: This is a strategic synthesis, not a rule stated verbatim by the cited sources.

  4. [Assistant-stated but unverified] A lightweight pre-call questionnaire with answer choices such as Yes / Partial / No / Not Sure is a good fit for founder-led discovery.
    Why this matters: It creates useful signal without overwhelming the prospect.
    Note: This is a practical recommendation, not an externally validated standard.

  5. [Assistant-stated but unverified] Using a live “mirror whiteboard” during the call is a high-value tactic.
    Why this matters: It helps the prospect feel understood, surfaces dependency gaps quickly, and creates immediate value.
    Note: This was not verified against formal research; it is a facilitation tactic.

  6. [User-stated] Solanasis is not mainly trying to sell one-off projects. The user wants to use discovery calls to determine fit for an Operational Resilience Baseline and potentially become an ongoing operational partner on a recurring retainer.

  7. [Verified + User-stated] A good bridge from discovery into paid work is:
    Discovery → Baseline / Assessment → Stabilization / Remediation → Ongoing operational partner retainer.
    Why partly verified: The need for governance, preparedness, response, and recovery is supported by NIST and CISA. The specific commercial packaging is a Solanasis strategy decision, not something prescribed by those sources.

  8. [Verified] Incident response readiness is not just a technical issue. NIST SP 800-61r3 explicitly treats incident response as part of broader cybersecurity risk management across organizational operations.
    Evidence: NIST SP 800-61r3 (see Primary References below for URL).

Bottom-line recommendation

[Assistant-stated but unverified] Solanasis should run discovery calls as mutual-fit resilience diagnostics, not free audits and not generic sales calls. The call should:

  • clarify what must keep working,
  • identify critical dependencies and obvious fragility,
  • test baseline control maturity without drowning the prospect in compliance jargon,
  • assess leadership readiness and internal capacity,
  • and naturally tee up either a paid baseline, a short stabilization sprint, or a recurring operational partner engagement.

Purpose of This Document

[User-stated] The user asked for a downloadable Markdown artifact that extracts, verifies, labels, and improves the discussion so another AI can continue the work without the original chat.

This document is intended to serve four purposes simultaneously:

  1. Guide — how Solanasis should think about discovery calls.
  2. Playbook — a repeatable process with questions, stages, and outputs.
  3. Briefing memo — what was discussed, what is verified, and what still needs validation.
  4. Handoff artifact for another AI — enough context, evidence labels, and next-step structure to continue work cleanly.

Discussion Context

What the discussion was about

  • [User-stated] The user asked how Solanasis should run discovery calls when evaluating a prospect for operational resilience work and possibly for a recurring operational partner retainer.
  • [User-stated] The user specifically asked about:
    • what questions to ask,
    • whether a mirror whiteboard is useful,
    • whether Solanasis should have a starter questionnaire covering basics such as password managers and separate accounts,
    • and how to pitch the engagement so discovery naturally leads to recurring revenue.
  • [User-stated] The user wants a thorough guide, not surface-level advice.

Relevant context from the user’s broader project profile

  • [User-stated] Solanasis is a fractional CIO / CISO / COO-style firm focused on operational resilience, security assessments, disaster recovery verification, data migrations, CRM setup, systems integration, and responsible AI implementation.
  • [User-stated] The user wants recurring revenue and wants Solanasis positioned as a trusted operational partner, not just a one-time technical fixer.
  • [User-stated] The user prefers practical, first-principles, founder-led, non-bureaucratic approaches.

Scope limitations

  • [Verified] The source verification in this document focused on official or primary sources where possible: Federal Reserve, NIST, CISA, and CIS.
  • [Assistant-stated but unverified] The conversation also included tactical selling and facilitation recommendations that are based on operator judgment rather than formal source validation.
  • [Tentative / speculative] The acronym “ORB” appears to mean Operational Resilience Baseline, but this exact acronym was not clearly defined in the conversation and should be confirmed before standardizing it externally.

Evidence Labeling Method

Use this legend when reading the rest of the document:

  • Verified — confirmed against cited external sources or directly observable from the current discussion.
  • User-stated — stated by the user; not independently verified unless noted.
  • Assistant-stated but unverified — recommended or asserted in the prior discussion, but not directly verified against an authoritative external source.
  • Tentative / speculative — inference, placeholder, or idea that may be useful but still needs confirmation.

Key Facts and Verified Findings

1) Definition and framing of operational resilience

  • [Verified] Operational resilience is broader than cybersecurity alone. It is about the ability to continue delivering important operations through disruption from any hazard.
    Evidence: Federal Reserve operational resilience topic page.
    Why it matters for Solanasis: Discovery should cover operations, people, vendors, recovery, and continuity — not just technical controls.
    Source: https://www.federalreserve.gov/supervisionreg/topics/operational-resilience.htm

  • [Verified] The cited Federal Reserve definition is written for financial institutions, but the underlying concept — preserving critical operations through disruption — is portable to SMB and nonprofit messaging.
    Evidence note: The source itself is banking-focused; portability to Solanasis’s target market is a strategic application, not a binding legal mapping.
    Source: https://www.federalreserve.gov/supervisionreg/topics/operational-resilience.htm

2) NIST CSF 2.0 is a credible organizing spine

  • [Verified] NIST CSF 2.0 organizes cybersecurity outcomes under six Functions (Govern, Identify, Protect, Detect, Respond, and Recover) and is intentionally flexible rather than a one-size-fits-all checklist.
    Why it matters for Solanasis: The six Functions give discovery a defensible structure without forcing compliance jargon onto the prospect.
    Source: https://nvlpubs.nist.gov/nistpubs/CSWP/NIST.CSWP.29.pdf

3) The “basic controls” discussed are valid baseline topics

  • [Verified] NIST’s Small Business Cybersecurity Basics page recommends MFA, strong passwords, considering a password manager, regular backups, backup protection and testing, antivirus, patching, phishing protection, and employee training.
    Why it matters: These questions belong in a discovery baseline and are not arbitrary.
    Source: https://www.nist.gov/itl/smallbusinesscyber/cybersecurity-basics

  • [Verified] NIST SP 800-63B states that verifiers should allow the use of password managers and notes that password managers increase the likelihood of stronger passwords, especially when they include password generators.
    Why it matters: Asking whether the prospect uses a password manager is a defensible baseline question.
    Source: https://pages.nist.gov/800-63-4/sp800-63b.html

  • [Verified] CISA’s CPGs state that administrators should maintain separate user accounts for activities unrelated to their admin role.
    Why it matters: Asking about separate admin accounts is a legitimate minimum-baseline question.
    Sources: https://www.cisa.gov/cybersecurity-performance-goals-cpgs and https://www.cisa.gov/cybersecurity-performance-goals-2-0-cpg-2-0

  • [Verified] CISA’s small business guidance says restores should be tested regularly, including partial and full restores.
    Why it matters: “Do you back up?” is not enough; discovery should ask whether recovery has been tested.
    Source: https://www.cisa.gov/cyber-guidance-small-businesses

4) Incident response and recovery belong in discovery

  • [Verified] NIST SP 800-61r3 treats incident response as part of broader cybersecurity risk management and says all six CSF 2.0 Functions play vital roles.
    Why it matters: Discovery should include who responds, who communicates, what gets restored first, and how lessons are fed back into the operating system.
    Source: https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-61r3.pdf

  • [Verified] NIST SP 800-61r3 ties preparation to Govern, Identify, and Protect, while Detect, Respond, and Recover cover discovery and handling of incidents.
    Why it matters: This supports a discovery model that covers governance and preparation before discussing incident handling details.
    Source: https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-61r3.pdf

5) CIS IG1 is a credible “minimum viable cyber hygiene” reference point

  • [Verified] CIS Controls Implementation Group 1 (IG1) is framed by CIS as essential cyber hygiene: a minimum baseline that smaller organizations can realistically reach.
    Why it matters: IG1 gives Solanasis an external anchor for what “baseline” means, so the paid baseline is not perceived as arbitrary.
    Source: https://www.cisecurity.org/controls/implementation-groups/ig1

6) Exercises and manual fallback are relevant resilience topics

  • [Verified] CISA publishes ready-made tabletop exercise packages, which supports including exercise readiness in resilience conversations.
    Source: https://www.cisa.gov/resources-tools/services/cisa-tabletop-exercise-packages

  • [Verified] CISA’s guidance on primary mitigations for operational technology notes that reverting to manual controls can be vital after incidents.
    Caveat: OT-focused; use carefully for general SMB messaging.
    Source: https://www.cisa.gov/resources-tools/resources/primary-mitigations-reduce-cyber-threats-operational-technology


Major Decisions and Conclusions

Commercial and positioning conclusions

  • [User-stated] Solanasis wants discovery calls to evaluate fit not only for a one-time operational resilience engagement, but also for an ongoing retainer as an operational partner.

  • [Assistant-stated but unverified] The call should be positioned as a mutual-fit resilience diagnostic, not a free audit.
    Reasoning: This protects scope, elevates the conversation, and better supports a move into paid baseline work.

  • [Assistant-stated but unverified] Solanasis should lead with the idea that most organizations do not need more random tools; they need clarity on what must keep working, what is fragile, and what must be fixed first.

  • [Assistant-stated but unverified] The recommended commercial ladder is:

    1. Discovery call
    2. Operational Resilience Baseline (or similarly named assessment)
    3. Stabilization / remediation sprint if needed
    4. Ongoing operational partner retainer
  • [Tentative / speculative] The exact packaging name “Operational Resilience Baseline” should likely be retained because it sounds strategic and productized, but the acronym and public-facing naming should still be finalized.

Delivery and facilitation conclusions

  • [Assistant-stated but unverified] A live mirror whiteboard is worth using because it turns the conversation into a shared system map rather than a one-way interview.
  • [Assistant-stated but unverified] The questionnaire should be lightweight before the first call and deeper after paid engagement begins.
  • [Assistant-stated but unverified] The discovery should start with business impact, then critical operations, then dependencies, then baseline controls, then recovery / response, then partner fit.

Reasoning, Tradeoffs, and Why It Matters

Why start with critical operations instead of tools

  • [Verified] Operational resilience and the CSF both imply that risk management starts with mission, governance, assets, and impact — not just tools.
    Sources: Federal Reserve operational resilience page; NIST CSF 2.0.

  • [Assistant-stated but unverified] Starting with business impact makes the prospect feel strategically understood and keeps the conversation out of commodity IT territory.

  • Tradeoff:

    • Upside: better executive engagement, easier retainer positioning, clearer prioritization.
    • Downside: if the prospect is extremely tool-focused, they may initially want to jump straight to controls. The facilitator has to bridge both worlds.

Why use a lightweight baseline questionnaire

  • [Verified] The topics discussed in the proposed questionnaire are legitimate baseline items according to NIST/CISA/CIS guidance.
  • [Assistant-stated but unverified] A short questionnaire creates signal without creating pre-call friction.
  • Tradeoff:
    • Short form: easier completion, better for founder-led sales, less intimidating.
    • Long form: more information, but higher abandonment and lower rapport.

Why ask about separate admin accounts, password managers, and restore testing

  • [Verified] These are not obscure best practices; they appear in official guidance and are strongly tied to preventable exposure.
  • Why it matters commercially: Prospects often underestimate how much risk is concentrated in a few missing basics. Those basics create clean, visible early wins.

Why use a mirror whiteboard

  • [Assistant-stated but unverified] The mirror whiteboard is a facilitation technique that:

    • proves listening,
    • surfaces contradictions and hidden dependencies,
    • gives value during the call,
    • creates a clean bridge into a paid baseline deliverable.
  • Tradeoff:

    • Upside: trust, clarity, shared reality.
    • Downside: can bog down if over-designed or if the facilitator types too much and stops listening.

Why not give away a full audit in discovery

  • [Assistant-stated but unverified] Free discovery should expose the shape of the problem, not produce a complete remediation plan.
  • Why it matters: If Solanasis solves the entire framing and prioritization problem for free, the paid baseline gets devalued.

Recommended Discovery Process

Status note for this whole section:
The process below is primarily [Assistant-stated but unverified], but it is intentionally designed to align with the [Verified] resilience and cyber-baseline sources cited above.

A. Pre-call objectives

  1. Define the real goal of the call

    • Determine whether there is a good fit for a paid baseline and possibly a retainer.
    • Avoid sliding into free consulting.
  2. Get the right people on the call when possible

    • Executive sponsor / owner
    • Operations lead
    • IT / MSP contact if appropriate
    • Optional compliance / finance stakeholder for regulated prospects
  3. Send a lightweight pre-call questionnaire

    • Keep it to 10–15 questions.
    • Use answer options: Yes / Partial / No / Not Sure
    • Make clear that the questionnaire is not an exam and “Not Sure” is acceptable.
  4. Prepare a simple live note / whiteboard template
    Suggested boxes:

    • Critical operations
    • Core systems / data
    • Key people / vendors
    • Likely disruptions / failure points
    • Current safeguards
    • Priority gaps / next steps

B. Discovery call flow (45–60 minutes)

1. Opening (0–5 minutes)

Goal: set context, reduce defensiveness, get permission to structure the conversation.

Suggested language:

Thanks for making the time. The goal today is to understand what absolutely has to keep working in your organization, where the obvious fragility may be, and whether there’s a practical way for us to help. I’ll mirror back what I’m hearing in real time so we can make sure we’re seeing the same picture.

2. Business and impact (5–15 minutes)

Goal: identify critical operations and consequences of failure.

Core questions:

  • What does your organization actually do in practical terms?
  • What are the top 3 operations that absolutely must keep working?
  • If one critical system went down for a day, what hurts?
  • If it went down for a week, what becomes existential?
  • What client obligations, deadlines, or regulatory pressures matter most?
  • Where do you already feel fragile?

3. Dependencies and operating model (15–25 minutes)

Goal: identify concentration risk and hidden fragility.

Core questions:

  • Which systems, tools, vendors, and people do those operations depend on?
  • Where do you have single points of failure?
  • If a key vendor disappeared tomorrow, what would be hardest to recover?
  • Where are domains, backups, admin access, and cloud tenant control actually held?
  • What knowledge is trapped in one person’s head?

4. Baseline controls and hygiene (25–40 minutes)

Goal: test whether the fundamentals are present.

Core questions:

  • Do you require MFA on email and critical business systems?
  • Do privileged users have separate admin accounts?
  • Do you use a password manager or credential vault?
  • Are onboarding and offboarding access changes handled consistently?
  • Are devices centrally managed?
  • Are systems patched on a routine cadence?
  • Are backups in place for critical data and platforms?
  • Have you tested restoring from backup recently?
  • Do you know who leads response if a cyber or IT incident occurs?
  • Do you have a documented list of key vendors and owners?

5. Response, recovery, and continuity (40–50 minutes)

Goal: find out whether they can operate through disruption.

Core questions:

  • If a user is phished tomorrow, what happens first?
  • If your email or line-of-business system goes down tomorrow, what is your manual fallback?
  • What gets restored first, second, and third?
  • Have you ever tabletop-tested a disruption?
  • Do you know how long recovery would realistically take?

6. Fit, urgency, and path forward (50–60 minutes)

Goal: understand whether this is a report buyer, a rescue situation, or a real partner-fit opportunity.

Core questions:

  • What made you take this conversation now?
  • Are you looking for clarity, implementation help, or ongoing oversight?
  • What would “better” look like in 90 days?
  • How much internal capacity do you actually have to implement change?
  • Is leadership aligned on this being a priority?

C. Suggested internal scoring model

Status: [Assistant-stated but unverified]

Score each prospect on four axes, each rated 1–5.

  1. Hygiene risk

    • MFA, admin separation, password management, backups, restore testing, patching, endpoint management
  2. Continuity risk

    • ability to maintain critical operations, fallback paths, recovery priorities, vendor resilience, tabletop readiness
  3. Leadership readiness

    • executive buy-in, willingness to hear hard truths, ownership, budget realism, internal follow-through
  4. Partner fit

    • recurring need, cross-functional complexity, desire for leadership-level guidance, not just tools or ticket handling

Suggested labels:

  • Green — mostly mature; advisory or targeted hardening
  • Yellow — meaningful gaps; good baseline candidate
  • Orange — fragile operations; strong baseline + stabilization opportunity
  • Red — severe exposure or likely mismatch unless treated as urgent stabilization
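The four-axis scoring and color labels above can be sketched as a small scoring helper. This is a minimal illustration, not part of any cited guidance: the `label()` thresholds (averaging the three risk-oriented axes and cutting at 4 / 3 / 2) are assumptions that Solanasis would need to calibrate, since the playbook defines the labels qualitatively rather than numerically.

```python
from dataclasses import dataclass

@dataclass
class ProspectScore:
    """Four discovery axes, each rated 1 (weak) to 5 (strong).

    Axis names follow the playbook; the thresholds in label()
    are illustrative assumptions, not sourced rules.
    """
    hygiene: int       # MFA, admin separation, backups, restore testing, patching
    continuity: int    # fallback paths, recovery priorities, vendor resilience
    leadership: int    # executive buy-in, ownership, budget realism
    partner_fit: int   # recurring need, desire for leadership-level guidance

    def label(self) -> str:
        # Average the three risk-oriented axes; partner_fit is tracked
        # separately because it signals retainer potential, not risk.
        avg = (self.hygiene + self.continuity + self.leadership) / 3
        if avg >= 4:
            return "Green"   # mostly mature; advisory or targeted hardening
        if avg >= 3:
            return "Yellow"  # meaningful gaps; good baseline candidate
        if avg >= 2:
            return "Orange"  # fragile; baseline + stabilization opportunity
        return "Red"         # severe exposure; urgent stabilization or pass

# Example: weak hygiene, middling continuity, engaged leadership.
print(ProspectScore(hygiene=2, continuity=3, leadership=4, partner_fit=5).label())
```

Keeping partner fit out of the risk average is a deliberate sketch-level choice: a Red-risk prospect can still be a strong retainer fit if leadership is ready, and conflating the two would hide that signal.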

D. What the paid baseline should produce

Status: [Assistant-stated but unverified]

Recommended outputs:

  1. Critical operations map
  2. Core systems / data / vendor dependency map
  3. Baseline scorecard
  4. High-priority risks and concentration points
  5. Immediate quick wins
  6. 30 / 60 / 90-day plan
  7. Ownership recommendations
  8. Recommended operating cadence if Solanasis stays involved

E. How to transition into the retainer

Status: [Assistant-stated but unverified]

Suggested language:

Based on what we heard, the next logical move would be a baseline so we can turn these concerns into a clear, prioritized plan. Some clients stop there. Others want us to stay involved as the operational partner who helps drive the plan, coordinate vendors, and keep resilience from becoming a one-time exercise.

F. Recommended one-page post-call follow-up

Status: [Assistant-stated but unverified]

Include:

  • top 3 critical operations,
  • top dependencies,
  • top 3–5 gaps,
  • likely first priorities,
  • your recommended next step,
  • and a simple offer to move into the baseline.

Suggested Question Bank

Status:
The content domains below are [Verified] as legitimate discovery topics.
The exact wording and sequence are [Assistant-stated but unverified].

1. Business-critical operations

  • What absolutely must keep working?
  • Which services are clients or beneficiaries most dependent on?
  • What would cause the most operational pain if it failed tomorrow?
  • What deadline-driven obligations cannot slip?

2. Data and systems

  • What data would hurt the most to lose, expose, or corrupt?
  • Which systems are essential to serve clients or run payroll / finance / operations?
  • Which SaaS apps are actually mission-critical?

3. Identity and access

  • Is MFA required on email, finance, admin, and key SaaS platforms?
  • Do admins use separate accounts?
  • Are there any shared logins still in use?
  • Do you use a password manager or vault?

4. Devices and maintenance

  • Do you know what devices and systems are in scope?
  • Are devices centrally managed?
  • Are systems patched consistently?
  • Are any legacy or unsupported systems still in use?

5. Backups and recovery

  • What is backed up?
  • Who controls the backups?
  • Are backups protected from tampering or deletion?
  • When was the last real restore test?
  • How long would recovery actually take?

6. Vendor and concentration risk

  • Which vendors are critical?
  • If your MSP or main SaaS vendor disappeared, what would break?
  • Who controls the keys to your domains, DNS, email admin, and cloud tenancy?
  • Are vendor relationships and owners documented?

7. Response and continuity

  • Who leads during an incident?
  • What is the communication path?
  • Do you have cyber insurance, and if so, have you aligned controls to its requirements?
  • What is the manual fallback for critical operations?
  • Have you tabletop-tested likely scenarios?

8. Leadership and fit

  • What is driving urgency now?
  • Is leadership aligned around this?
  • Do you want a roadmap, implementation help, or an ongoing partner?
  • What has prevented progress so far?

Example Lightweight Pre-Call Questionnaire

Status: [Assistant-stated but unverified]
Note: The specific questionnaire below is proposed, not sourced verbatim.

Use answer choices: Yes / Partial / No / Not Sure

  1. We require MFA on email and critical business systems.
  2. Privileged users have separate admin accounts.
  3. We use a password manager or credential vault.
  4. Employee onboarding and offboarding access changes follow a consistent process.
  5. We maintain an inventory of critical systems, devices, and SaaS apps.
  6. Systems are patched on a routine schedule.
  7. Critical data and business systems are backed up.
  8. We have tested restoring from backup recently.
  9. We know who leads response to a cyber or operational incident.
  10. We maintain a list of critical vendors and who owns those relationships.
  11. We know which operations must be restored first after disruption.
  12. We have at least a basic fallback / manual workaround for key disruptions.

How to use it

  • Score each answer:
    • Yes = 2
    • Partial = 1
    • No = 0
    • Not Sure = 0
  • Group results under:
    • Identity & access
    • Systems & maintenance
    • Backup & recovery
    • Governance & continuity
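The scoring scheme above (Yes = 2, Partial = 1, No / Not Sure = 0, grouped into four categories) can be sketched as a short scoring function. The point values come from the playbook; the mapping of the twelve question numbers to the four groups is an assumption for illustration and should be confirmed before use.

```python
# Point values as defined in the playbook.
POINTS = {"Yes": 2, "Partial": 1, "No": 0, "Not Sure": 0}

# Assumed mapping of question numbers to the four result groups.
GROUPS = {
    "Identity & access": [1, 2, 3, 4],
    "Systems & maintenance": [5, 6],
    "Backup & recovery": [7, 8],
    "Governance & continuity": [9, 10, 11, 12],
}

def score_questionnaire(answers: dict[int, str]) -> dict[str, int]:
    """Sum points per group; unanswered or unrecognized answers score 0."""
    return {
        group: sum(POINTS.get(answers.get(q, "No"), 0) for q in qs)
        for group, qs in GROUPS.items()
    }

# Example responses to the 12 statements above.
answers = {1: "Yes", 2: "Partial", 3: "No", 4: "Not Sure",
           5: "Yes", 6: "Yes", 7: "Partial", 8: "No",
           9: "Yes", 10: "Partial", 11: "Not Sure", 12: "Yes"}
print(score_questionnaire(answers))
```

Group subtotals like these make a useful one-glance pre-call snapshot: a low "Backup & recovery" subtotal, for example, flags restore testing as a likely early discovery thread.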

Primary References Used to Verify the Discussion

  1. Federal Reserve — Operational Resilience
    https://www.federalreserve.gov/supervisionreg/topics/operational-resilience.htm
    Use for: definition of operational resilience; critical operations; all-hazards framing.
    Caveat: financial-sector source, not SMB-specific.

  2. NIST Cybersecurity Framework (CSF) 2.0
    https://nvlpubs.nist.gov/nistpubs/CSWP/NIST.CSWP.29.pdf
    Use for: Govern / Identify / Protect / Detect / Respond / Recover; framework is not a one-size-fits-all checklist.

  3. NIST CSF 2.0 Small Business Quick-Start Guide
    https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.1300.pdf
    Use for: SMB-friendly adaptation of CSF 2.0.

  4. NIST Cybersecurity Basics
    https://www.nist.gov/itl/smallbusinesscyber/cybersecurity-basics
    Use for: MFA, password manager, backups, backup testing, patching, training.

  5. NIST SP 800-63B
    https://pages.nist.gov/800-63-4/sp800-63b.html
    Use for: password managers and password-entry usability guidance.

  6. NIST SP 800-61r3 — Incident Response Recommendations and Considerations for Cyber Risk Management
    https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-61r3.pdf
    Use for: incident response as part of broader cyber risk management; role of all CSF functions.

  7. CISA Cybersecurity Performance Goals (CPGs)
    https://www.cisa.gov/cybersecurity-performance-goals-cpgs
    Use for: baseline security outcomes including admin account separation and recovery-related practices.

  8. CISA Cybersecurity Performance Goals 2.0
    https://www.cisa.gov/cybersecurity-performance-goals-2-0-cpg-2-0
    Use for: updated phrasing of baseline outcomes; separate admin accounts; backups/recovery.

  9. CISA Cyber Guidance for Small Businesses
    https://www.cisa.gov/cyber-guidance-small-businesses
    Use for: backup and restore testing, tabletop exercises, practical SMB guidance.

  10. CISA Tabletop Exercise Packages
    https://www.cisa.gov/resources-tools/services/cisa-tabletop-exercise-packages
    Use for: validating the recommendation to include exercise readiness in resilience conversations.

  11. CISA — Primary Mitigations to Reduce Cyber Threats to OT
    https://www.cisa.gov/resources-tools/resources/primary-mitigations-reduce-cyber-threats-operational-technology
    Use for: evidence that reverting to manual controls can be vital after incidents.
    Caveat: OT-focused; use carefully for general SMB messaging.

  12. CIS Controls — Implementation Groups / IG1
    https://www.cisecurity.org/controls/implementation-groups/ig1
    https://www.cisecurity.org/controls/implementation-groups
    Use for: essential cyber hygiene / minimum baseline framing.

Non-source tools mentioned or implied in the discussion

  • [Assistant-stated but unverified] Shared whiteboarding or live diagramming tools may be useful during discovery.
  • [Tentative / speculative] Specific product selection (for example Miro vs. alternatives, free vs. AI-assisted options) was not verified in this artifact and should be evaluated separately if the user wants a tool recommendation.

Risks, Caveats, and Red Flags

Source and framework caveats

  • [Verified] The Federal Reserve source is a banking-sector supervisory source. It is useful conceptually, but it is not a direct SMB compliance requirement.
  • [Verified] NIST CSF 2.0 is intentionally flexible and not a checklist. Misusing it as rigid checklist theater would be a mistake.
  • [Verified] Some CISA guidance cited is sector-specific (for example OT or government-oriented). Use care when translating those points into general SMB sales language.

Discovery-call delivery risks

  • [Assistant-stated but unverified] The biggest risk is drifting into a free audit.
  • [Assistant-stated but unverified] Another major risk is turning the call into a technical interrogation before the business context is clear.
  • [Assistant-stated but unverified] Overusing jargon can cause the prospect to feel judged or lost.
  • [Assistant-stated but unverified] If the wrong stakeholders attend, the call may produce low-quality signals and weak follow-through.
  • [Assistant-stated but unverified] If Solanasis does not clearly distinguish baseline vs. implementation vs. retainer, scope creep becomes likely.

Commercial red flags to watch for

  • [Assistant-stated but unverified]
    • Prospect wants free recommendations but has no intention to buy.
    • Prospect wants “everything fixed” but has no sponsor or owner.
    • Prospect sees resilience as only an IT issue, not an operational leadership issue.
    • Prospect has severe vendor lock-in or undocumented admin control but is unwilling to confront it.
    • Prospect wants guaranteed outcomes without organizational participation.
    • Prospect is highly price-sensitive but expects fractional-executive value.

Additional missing considerations that should be included

These did not get enough attention in the original discussion and should be added to the operating model:

  1. [Verified + Assistant synthesis] Cyber insurance requirements
    Many SMBs and nonprofits are now indirectly shaped by insurer security expectations. Discovery should ask whether they carry cyber insurance and whether they know the conditions attached.
    Evidence basis: This document did not independently verify insurer-specific requirements, but asking the question is operationally important because it changes control priorities and incident workflow.

  2. [Assistant-stated but unverified] Contractual and client obligations
    Discovery should ask about customer contracts, funder requirements, or board expectations that may raise the resilience bar.

  3. [Assistant-stated but unverified] Privacy and legal response paths
    The call should eventually clarify whether counsel, breach notification, regulated data, or donor / client confidentiality issues are in play.

  4. [Assistant-stated but unverified] Board / executive reporting cadence
    If Solanasis wants retainer revenue, part of the offer may be regular operating reviews and leadership updates.

  5. [Assistant-stated but unverified] Vendor governance
    Discovery should probe not just vendor dependence but whether vendor ownership, escalation paths, renewal cycles, and access rights are documented.


Open Questions / What Still Needs Verification

  1. [Tentative / speculative] Does the user want “Operational Resilience Baseline” to be the final product name, or is there another preferred branded name?
  2. [Tentative / speculative] What exact deliverables should be included in the baseline package for Solanasis by default?
  3. [Tentative / speculative] How should Solanasis price the baseline vs. stabilization sprint vs. operational partner retainer in this specific lane?
  4. [Tentative / speculative] Which sectors are first-priority targets for this discovery playbook (nonprofits, RIAs, dental groups, defense subcontractors, etc.)?
  5. [Tentative / speculative] Should Solanasis create sector-specific variants of the questionnaire and call flow?
  6. [Tentative / speculative] Which whiteboard / collaborative tool should be used if the user wants a live mirror board with AI support? This artifact did not verify specific product options or current pricing/capabilities.
  7. [Tentative / speculative] How far should the free discovery go before Solanasis insists on a paid baseline?
  8. [Tentative / speculative] Should the baseline questionnaire be delivered as:
    • a web form,
    • a PDF / Markdown worksheet,
    • a CRM intake form,
    • or an AI-assisted conversational intake?
  9. [Tentative / speculative] What exact rubric should qualify a prospect as “retainer-ready” versus “baseline-only”?
  10. [Tentative / speculative] Does the user want discovery artifacts in Markdown worksheet form by default, given their stated preference for fillable offline artifacts in related iterative workflows?

Suggested Next Steps

Immediate next steps

  1. [Assistant-stated but unverified] Convert this playbook into three separate working assets:

    • a pre-call questionnaire,
    • a live discovery worksheet / whiteboard template,
    • and a post-call recap template.
  2. [Assistant-stated but unverified] Decide the exact productized offer names:

    • Discovery Call
    • Operational Resilience Baseline
    • Stabilization Sprint
    • Operational Partner Retainer
  3. [Assistant-stated but unverified] Build a simple scoring sheet so Solanasis can consistently decide:

    • no fit,
    • baseline fit,
    • urgent stabilization fit,
    • recurring partner fit.
  4. [Assistant-stated but unverified] Create a one-page visual example of what the baseline output looks like. This will help prospects understand that Solanasis sells clarity and operating structure, not just technical cleanup.

  5. [Assistant-stated but unverified] Create sector variants of the call questions for Solanasis’s best near-term lanes.
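The scoring sheet in step 3 can be sketched in code. This is a hypothetical illustration only: the question names, weights, and thresholds below are placeholder assumptions for Solanasis to tune, not values mandated by NIST, CISA, or any cited framework. It assumes the pre-call questionnaire's Yes / Partial / No / Not Sure answer scale.

```python
# Hypothetical scoring sketch. All question names, weights, and thresholds
# are placeholder assumptions to be tuned by Solanasis, not framework values.

ANSWER_POINTS = {"Yes": 2, "Partial": 1, "No": 0, "Not Sure": 0}

# Questions treated as critical (e.g. MFA, tested backups) weigh double.
CRITICAL = {"mfa_enforced", "backups_restore_tested"}

def score_fit(answers: dict[str, str]) -> str:
    """Map questionnaire answers to one of the four fit categories."""
    total = 0
    max_total = 0
    critical_gaps = 0
    for question, answer in answers.items():
        weight = 2 if question in CRITICAL else 1
        total += ANSWER_POINTS.get(answer, 0) * weight
        max_total += 2 * weight
        if question in CRITICAL and answer in ("No", "Not Sure"):
            critical_gaps += 1
    coverage = total / max_total if max_total else 0.0
    if coverage >= 0.85 and critical_gaps == 0:
        return "no fit"  # already well covered; little for Solanasis to add
    if critical_gaps >= 2 or coverage < 0.35:
        return "urgent stabilization fit"
    if coverage >= 0.6:
        return "recurring partner fit"
    return "baseline fit"

print(score_fit({
    "mfa_enforced": "No",
    "backups_restore_tested": "Not Sure",
    "patching_cadence": "Partial",
    "admin_accounts_separated": "No",
}))  # → urgent stabilization fit
```

The value of a sheet like this is consistency, not precision: the same answers always produce the same recommendation, which keeps founder-led calls from drifting into ad hoc judgment.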

Strong next artifact candidates

  • Discovery worksheet (fillable Markdown)
  • 12-question pre-call questionnaire
  • Discovery scoring rubric
  • One-page “What to Expect from an Operational Resilience Baseline” explainer
  • Follow-up email templates
  • Sector-specific variants (for example nonprofit and RIA versions)

Handoff Notes for Another AI

Use these notes to continue the work without needing the original conversation.

What the user is trying to build

  • [User-stated] The user wants Solanasis discovery calls to:
    • qualify the prospect,
    • create immediate clarity,
    • avoid giving away free consulting,
    • and lead naturally into recurring operational partner work.

What tone and approach the user prefers

  • [User-stated] Practical, founder-led, first-principles, non-bureaucratic, useful.
  • [User-stated] Wants thorough deliverables and real operating documents, not high-level summaries.
  • [User-stated] Cares about recurring revenue and positioning as a high-trust operational partner.

What has already been established

  • [Verified] Framework anchors: Federal Reserve operational resilience concept, NIST CSF 2.0, NIST small business basics, NIST incident response guidance, CISA CPGs, CISA SMB guidance, CIS IG1.
  • [Assistant-stated but unverified] Discovery should be run as a resilience diagnostic, using a mirror whiteboard and a lightweight questionnaire.
  • [Assistant-stated but unverified] The recommended commercial ladder is discovery → baseline → stabilization sprint → operational partner retainer.

What another AI should do next

  1. Turn the playbook into actual operational templates.
  2. Preserve evidence labels when introducing new claims.
  3. Keep product/tool recommendations separate from framework-backed claims.
  4. Ask the user to confirm product naming, target verticals, and desired artifact formats before standardizing them.
  5. If recommending specific whiteboard or intake tools, verify current features, pricing, and integrations first.

What another AI should avoid

  • Do not treat Federal Reserve banking guidance as a direct SMB compliance requirement.
  • Do not present the questionnaire or call structure as if it were directly mandated by NIST or CISA.
  • Do not claim that the mirror whiteboard tactic is research-validated unless that is separately verified.
  • Do not collapse baseline, remediation, and retainer into a single blurry offer.

Reviewer Notes and Improvements Made

Reviewer availability: No dedicated reviewer-agent capability was available in this session. A serious self-review pass was performed.

Self-review actions taken

  1. Removed unsupported certainty
    Ensured that tactical and sales-process recommendations were not mislabeled as externally verified claims.

  2. Separated framework-backed guidance from operator tactics
    Guidance backed by official sources was clearly separated from facilitation suggestions such as mirror whiteboarding.

  3. Added missing caveats
    Especially around:

    • banking-sector vs. SMB applicability,
    • OT guidance being OT-specific,
    • and CSF 2.0 not being a rigid checklist.
  4. Added missing implementation concerns
    Including:

    • cyber insurance,
    • vendor governance,
    • legal / privacy response paths,
    • executive reporting cadence,
    • and the risk of free-audit scope creep.
  5. Improved handoff value for another AI
    Added:

    • Handoff Notes for Another AI,
    • Open Questions / What Still Needs Verification,
    • and a structured appendix.

Improvements beyond the original discussion

  • Added evidence labels across the document
  • Added source-backed framework alignment
  • Added missing caveats and edge cases
  • Added a clearer commercialization ladder
  • Added a more explicit scoring model
  • Added next-artifact recommendations

Optional Appendix: Structured Summary (YAML-Style)

document:
  title: "Solanasis Discovery Calls for Operational Resilience"
  date: "2026-03-16"
  purpose:
    - guide
    - playbook
    - briefing_memo
    - ai_handoff
 
user_goal:
  - qualify prospects for operational resilience work
  - potentially convert into recurring operational partner retainers
  - avoid free-audit scope creep
  - use a practical, founder-led approach
 
verified_frameworks:
  - name: "Federal Reserve Operational Resilience"
    use: "definition and all-hazards framing"
    caveat: "banking-focused source"
    link: "https://www.federalreserve.gov/supervisionreg/topics/operational-resilience.htm"
  - name: "NIST CSF 2.0"
    use: "organizing model: Govern, Identify, Protect, Detect, Respond, Recover"
    link: "https://nvlpubs.nist.gov/nistpubs/CSWP/NIST.CSWP.29.pdf"
  - name: "NIST CSF 2.0 Small Business Quick-Start Guide"
    use: "SMB-friendly application"
    link: "https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.1300.pdf"
  - name: "NIST Cybersecurity Basics"
    use: "MFA, passwords, password managers, backups and restore testing, patching"
    link: "https://www.nist.gov/itl/smallbusinesscyber/cybersecurity-basics"
  - name: "NIST SP 800-63B"
    use: "password managers supported and useful"
    link: "https://pages.nist.gov/800-63-4/sp800-63b.html"
  - name: "NIST SP 800-61r3"
    use: "incident response as part of broader cyber risk management"
    link: "https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-61r3.pdf"
  - name: "CISA CPGs / CPG 2.0"
    use: "baseline security outcomes, separate admin accounts, recovery"
    links:
      - "https://www.cisa.gov/cybersecurity-performance-goals-cpgs"
      - "https://www.cisa.gov/cybersecurity-performance-goals-2-0-cpg-2-0"
  - name: "CISA Small Business Cyber Guidance"
    use: "restore testing, tabletop exercises, SMB practices"
    link: "https://www.cisa.gov/cyber-guidance-small-businesses"
  - name: "CIS IG1"
    use: "essential cyber hygiene baseline"
    links:
      - "https://www.cisecurity.org/controls/implementation-groups/ig1"
      - "https://www.cisecurity.org/controls/implementation-groups"
 
playbook_core:
  pre_call:
    - send_lightweight_questionnaire
    - prep_mirror_whiteboard
    - confirm_right_stakeholders
  call_flow:
    - business_impact
    - critical_operations
    - dependencies
    - baseline_controls
    - response_and_recovery
    - partner_fit
  post_call:
    - summarize_critical_ops
    - summarize_dependencies
    - highlight_top_gaps
    - recommend_baseline_or_next_step
 
assistant_recommendations_unverified:
  - use_mirror_whiteboard
  - run_discovery_as_mutual_fit_diagnostic
  - keep_questionnaire_lightweight
  - sell_baseline_before_retainer
  - use_scoring_model_for_fit
 
open_questions:
  - confirm_final_product_name
  - define_default_baseline_deliverables
  - finalize_pricing_and_packaging
  - choose_target_vertical_variants
  - verify_best_collaboration_tool_for_live_discovery