Solanasis Handoff Guide: Manus, OpenClaw, Structured Backends, Supabase, Storage Patterns, and Client Agent Defaults — 2026-03-15
Executive Summary
This document consolidates and upgrades the key findings from the discussion about:
- whether Manus can serve as a real operational agent layer,
- what OpenClaw can still do that Manus cannot fully replace,
- how multi-model orchestration should actually be handled,
- where Supabase fits for structured agent backends,
- when GitHub, object storage, Postgres, pgvector, vector databases, and local embeddings each make sense,
- and what the strong default architecture should be when building a custom AI agent for a client that has no existing database.
Bottom-line conclusions
- [Verified] Manus is a hosted agent platform with Projects, Skills, Wide Research, Browser Operator, API, webhooks, Slack, Zapier, Mail Manus, and custom MCP server support. It is stronger as an execution/orchestration product than as a pure code-harness replacement.
  Sources: Projects, Skills, Wide Research, Browser Operator, API, Webhooks, Zapier, Slack, Custom MCP
- [Verified] OpenClaw remains stronger than Manus in several areas that matter to power users: self-hosting, local-first control, wide channel coverage, external coding-harness brokering via ACP (Claude Code, Codex, Gemini CLI, etc.), provider flexibility, Markdown-file memory, hooks, cron, and per-agent sandbox/tool policy.
  Sources: OpenClaw GitHub README, ACP Agents, Providers, Memory, Hooks, Cron Jobs, Multi-Agent Sandbox & Tools
- [Verified] There is no current public evidence of a first-party Manus feature that natively lets Claude Opus and ChatGPT debate each other inside Manus as peer models. What Manus clearly supports is orchestration through API + webhooks + custom MCP + connectors.
  Sources: Create Task API, OpenAI SDK Compatibility, Webhooks, Custom MCP
- [Assistant-stated but unverified as a benchmarked industry rule] The strongest default for client AI agents is not "start with a vector database." It is: system-of-record relational DB first, object storage for files, full-text/hybrid retrieval before semantic-only retrieval, then pgvector or a dedicated vector layer only where justified.
- [Verified] Supabase is not just "managed Postgres." It is a platform that bundles Postgres, auto-generated APIs, Auth, Edge Functions, Realtime, and Storage. Storage is S3-compatible object storage, while Postgres stores metadata in the `storage` schema.
  Sources: Supabase Platform, Supabase Architecture, Storage, Storage Schema
- [Verified] The user's DBA instinct is correct: do not put large document/image blobs in ordinary Postgres tables unless there is a special reason. Keep actual files in object storage, and keep metadata, permissions, extracted text references, chunk references, hashes, and state in the database.
  Sources: Supabase Storage, Storage Schema
- [Verified] GitHub is excellent for code, prompts, schemas, human-maintained Markdown knowledge, and selected version-worthy artifacts. It is not a great primary store for large operational document sets. GitHub documents browser upload limits, 100 MiB repository file blocking, Git LFS usage, billing, and repository size guidance.
  Sources: About large files on GitHub, About Git LFS, Git LFS billing
- [Verified] If a client has no existing backend, a strong default stack is: Supabase + Postgres + Storage + Auth/RLS + Edge Functions + full-text search first + optional pgvector later.
  Sources: Supabase Platform, RLS, Storage Access Control, Full Text Search, AI & Vectors
- [Verified] Embedding generation does not require a dedicated vector vendor, but it does require an embedding source. Current official sources confirm at least three valid paths:
  - external provider pricing such as OpenAI embeddings,
  - local embeddings via Ollama,
  - a Supabase Edge Functions guide claiming a built-in AI inference API for embeddings.
  However, Supabase's Automatic Embeddings guide also still references calls to "a provider like OpenAI," creating a documentation inconsistency that should be treated as an open verification item.
  Sources: OpenAI pricing, Ollama Embeddings, Supabase Generate Embeddings, Supabase Automatic Embeddings
Purpose of This Document
This artifact is intended to serve as:
- a guide for decision-making,
- a playbook for building client AI-agent backends,
- a briefing memo on Manus, OpenClaw, Supabase, and storage architecture,
- and a handoff document for another AI so it can continue the work without needing the original conversation.
This is not just a narrative summary. It is a structured, verified, operational memo.
Discussion Context
User goals and constraints
- [User-stated] The user is a power user of Claude, Claude Code / CoWork, and ChatGPT, and is looking for the closest practical equivalent to a true OpenClaw-like operational agent while evaluating Manus.
- [User-stated] The user maintains a local knowledge base of AI-generated files/Markdown around Solanasis and wants to understand how tools like Manus or a custom agent stack can work with that knowledge.
- [User-stated] The user wants agents that can perform GTM, website changes, ClickUp-related actions, and more general operational delegation.
- [User-stated] The user is especially interested in how to build custom AI agents for clients where the client does not already have a database.
- [User-stated] The user is cost-aware, especially around embedding costs, vector databases, and SaaS plan burn.
- [User-stated] The user has a DBA background and prefers that large documents/images live outside the relational database in a dedicated object store or equivalent.
- [User-stated] The user feels GitHub-hosted documents can be cleaner for human-curated versioned text artifacts than storing raw documents directly in a DB.
Preferences inferred from discussion
- [User-stated] The user values structured data over loose Markdown-only “memory.”
- [User-stated] The user wants strong defaults, clear tradeoffs, and up-to-date research, not hype.
- [User-stated] The user wants architecture that is strong enough for client delivery, not just internal tinkering.
Evidence Status Legend
- Verified = directly supported by a current reliable source.
- User-stated = provided by the user in the discussion.
- Assistant-stated but unverified = proposed in the discussion but not fully verified against current sources.
- Tentative / speculative = inference, opinion, or reasonable architectural guidance that should not be treated as a fact.
Key Facts and Verified Findings
1) Manus: what is currently verified
1.1 Core product shape
- [Verified] Manus Projects are persistent workspaces with a master instruction and a knowledge base of files/documents that are automatically applied to tasks created in that project.
  Sources: Projects
- [Verified] Manus Skills are modular, file-system-based resources that package specific capabilities/workflows for the agent.
  Sources: Skills
- [Verified] Wide Research is Manus's parallel-processing system for many similar items; it "deploys hundreds of independent agents that work in parallel."
  Sources: Wide Research
- [Verified] Manus Browser Operator runs in the user's actual browser with existing sessions and local IP, not just in the cloud browser.
  Sources: Browser Operator
- [Verified] The Manus API supports task creation with the agent profiles `manus-1.6`, `manus-1.6-lite`, and `manus-1.6-max`, as well as task continuation and project assignment.
  Sources: Create Task API, OpenAI SDK Compatibility
- [Verified] Manus webhooks support `task_created`, `task_progress`, and `task_stopped` lifecycle events.
  Sources: Webhooks
- [Verified] Manus integrates with Slack, Zapier, and MCP connectors/custom MCP servers.
  Sources: Slack Integration, Zapier, Integrations Overview, Custom MCP, MCP Connectors
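As a sketch of consuming those lifecycle events on the receiving end: only the three event names are from the Manus webhook docs; the payload fields and handler wiring below are illustrative assumptions, not the documented schema.

```python
# Dispatch Manus webhook events by lifecycle type. The event names
# (task_created / task_progress / task_stopped) are documented by Manus;
# the payload shape ("type", "task_id") and handlers are illustrative.

def on_task_created(event: dict) -> str:
    return f"task {event.get('task_id', '?')} created"

def on_task_progress(event: dict) -> str:
    return f"task {event.get('task_id', '?')} progress"

def on_task_stopped(event: dict) -> str:
    return f"task {event.get('task_id', '?')} stopped"

HANDLERS = {
    "task_created": on_task_created,
    "task_progress": on_task_progress,
    "task_stopped": on_task_stopped,
}

def dispatch(event: dict) -> str:
    handler = HANDLERS.get(event.get("type", ""))
    if handler is None:
        # Ignore unknown event types so future additions don't break the hook.
        return "ignored"
    return handler(event)
```

The `task_stopped` branch is usually where downstream automation hangs off, since that is when a finished task's output becomes available.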
1.2 Browser Operator limits and power-user implications
- [Verified] Browser Operator currently has documented limitations:
  - complex interactions like drag-and-drop and multi-step forms may not work perfectly,
  - some sites with aggressive anti-bot measures may require manual intervention.
  Sources: Browser Operator limitations
- [Verified] Browser Operator requires one-time authorization per task, provides visible real-time activity, and can be stopped by closing the dedicated tab.
  Sources: Browser Operator security & control
- [Assistant-stated but unverified in formal benchmark form] Browser Operator is powerful for authenticated SaaS actions but is more fragile than direct API/MCP/Zapier integrations for production workflows.
1.3 Manus website builder / app builder
- [Verified] Manus web projects with the database feature include a fully managed MySQL database and a visual data browser/editor.
  Sources: Project Analytics
- [Verified] Manus positions itself as a full-stack website/app builder and documents a separate usage and pricing model for web experiences.
  Sources: Getting Started (Web), Usage and Pricing
1.4 Manus pricing and credit model
- [Verified] The Manus Help Center currently documents:
  - Free plan: $0/month, Manus 1.6 Lite in Agent Mode, 300 daily refresh credits, 1 concurrent task, 2 scheduled tasks.
  - Pro: starting from $20/month with 4,000 monthly credits.
  - A higher Pro option: starting from $40/month with 8,000 monthly credits.
  - Team: starting from $20/seat/month.
  Sources: Current membership pricing, Daily refresh credits
- [Verified] Official Manus web/blog/pricing materials also reference plan names like Standard, Customizable, and Extended, and numbers such as $200/month and 40,000 credits, which creates an official naming inconsistency versus the Help Center's "Free / Pro / Team" language.
  Sources: Manus pricing page, Manus vs Synthesia blog pricing snippet, Current membership pricing
- [Verified] Manus help content states users cannot know the exact credit cost of a task before it begins because cost depends on execution complexity.
  Sources: Is there a way to check how many credits a task will cost before I begin?
1.5 User-reported Manus behavior
- [Tentative / speculative, based on user reports only] Recent Reddit posts indicate credit-burn complaints around Telegram/always-on agent usage and general pricing opacity. These reports are not official facts, but they are relevant as risk signals.
  Sources: Telegram costing me 1,348 credits, Always-on agent is so expensive, 2 million credits disappeared
- [Tentative / speculative, based on user reports only] Recent Reddit prompt-advice trends emphasize shorter, more specific prompts, phased workflows, and avoiding expensive wandering sessions.
  Sources: Best way to use ChatGPT for Manus prompts?, Guide I wish I had for Manus, What prompts have worked best?
2) OpenClaw: what is currently verified
2.1 Core product shape
- [Verified] OpenClaw describes itself as a personal AI assistant you run on your own devices; the gateway is its control plane.
  Sources: OpenClaw GitHub README
- [Verified] The OpenClaw README currently claims support across many channels, including WhatsApp, Telegram, Slack, Discord, Google Chat, Signal, iMessage, BlueBubbles, IRC, Microsoft Teams, Matrix, Feishu, LINE, Mattermost, Nextcloud Talk, Nostr, Synology Chat, Tlon, Twitch, Zalo, and WebChat.
  Sources: OpenClaw GitHub README
2.2 ACP and coding harness integration
- [Verified] OpenClaw ACP Agents can run external coding harnesses such as Pi, Claude Code, Codex, OpenCode, and Gemini CLI through an ACP backend plugin.
  Sources: ACP Agents
- [Verified] OpenClaw docs explicitly describe asking OpenClaw to "run this in Codex" or "start Claude Code in a thread."
  Sources: ACP Agents
2.3 Provider flexibility and local models
- [Verified] OpenClaw supports many providers and documents provider selection as `provider/model`.
  Sources: Providers
- [Verified] OpenClaw also documents integrations with providers/gateways such as LiteLLM, OpenRouter, Ollama, Bedrock, and more.
  Sources: LiteLLM, OpenRouter, Ollama, Bedrock
2.4 Memory, hooks, automation
- [Verified] OpenClaw memory is documented as plain Markdown in the agent workspace, with files as the source of truth.
  Sources: Memory
- [Verified] OpenClaw documents Hooks (event-driven scripts), Cron Jobs (gateway scheduler), and semantic memory indexing/search tooling.
  Sources: Hooks, Cron Jobs, `openclaw memory`
- [Verified] OpenClaw supports multi-agent setups with per-agent sandbox configuration and tool restrictions.
  Sources: Multi-Agent Sandbox & Tools
2.5 Implications relative to Manus
- [Assistant-stated but strongly source-backed] OpenClaw is closer to an AI operating system / self-hosted control plane. Manus is closer to a hosted agent product with polished integrations and UX.
3) Manus vs OpenClaw: distilled comparison
3.1 What OpenClaw can do that Manus does not clearly match natively
- [Verified] A self-hosted, always-on gateway runtime.
  Sources: OpenClaw GitHub README, Cron Jobs
- [Verified] Much broader first-class messaging-surface coverage.
  Sources: OpenClaw GitHub README
- [Verified] Native brokering of external coding harnesses (Claude Code, Codex, Gemini CLI, etc.) via ACP.
  Sources: ACP Agents
- [Verified] Markdown-file memory as a first-class local source of truth.
  Sources: Memory
- [Verified] Local/heterogeneous model-provider flexibility via Ollama/OpenRouter/LiteLLM/etc.
  Sources: Providers, LiteLLM, Ollama
- [Verified] Hooks/cron and granular multi-agent sandbox/tool profiles.
  Sources: Hooks, Cron Jobs, Multi-Agent Sandbox & Tools
3.2 What Manus does especially well
- [Verified] Hosted, polished Wide Research parallelization.
  Sources: Wide Research
- [Verified] A strong browser-based authenticated action layer via Browser Operator.
  Sources: Browser Operator
- [Verified] Easier first-party UX around Projects, Skills, Slack, Zapier, and web deliverables.
  Sources: Projects, Skills, Slack Integration, Zapier, Website Builder
3.3 Best synthesis
- [Tentative / strategic recommendation] For a power user like the user in this discussion, the most practical stance is:
- Claude Code for direct repo work,
- ChatGPT for strategy and verification,
- Manus for hosted orchestration and async GTM/research workflows,
- OpenClaw if local-first ownership, ACP harness routing, or self-hosted control becomes core.
4) Multi-model orchestration: what is actually real
- [Verified] Manus's public API/docs do not document a native "Claude vs ChatGPT debate inside Manus" feature.
  Sources: Create Task API, OpenAI SDK Compatibility
- [Verified] Manus does support the building blocks needed for orchestration:
  - async tasks,
  - task continuation,
  - project assignment,
  - webhooks,
  - custom MCP servers,
  - connectors.
  Sources: Create Task API, Webhooks, Custom MCP, MCP Connectors
- [Tentative / strategic recommendation] The best "multi-model" pattern is likely:
  - Manus performs/coordinates a task.
  - A webhook or custom service sends the output to one or more external models (e.g. ChatGPT, Claude/Opus).
  - A comparison/referee step produces consensus or flags disagreement.
  - Results return to ClickUp/Slack/Notion/database or back into Manus as a continuation turn.
- [Verified] OpenClaw can more directly route work to external coding harnesses through ACP.
  Sources: ACP Agents
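The webhook-to-referee pattern described above can be sketched in a few lines. This is a hypothetical skeleton, not a documented Manus feature: `ask_model` stands in for real provider API calls, the payload field names are illustrative, and the "approve"/"revise" verdict protocol is an assumed convention.

```python
# Sketch of the multi-model referee step: fan a finished task's output
# out to reviewer models, then compare verdicts. All names here are
# illustrative assumptions; real payloads and model clients would differ.

def referee(task_output: str, verdicts: dict[str, str]) -> dict:
    """Reduce per-model verdicts to consensus, or flag disagreement."""
    normalized = {model: v.strip().lower() for model, v in verdicts.items()}
    unique = set(normalized.values())
    return {
        "task_output": task_output,
        "verdicts": normalized,
        "consensus": next(iter(unique)) if len(unique) == 1 else None,
        "needs_human_review": len(unique) > 1,
    }

def handle_task_stopped(event: dict, ask_model) -> dict:
    """Webhook-style handler: send the task output to each reviewer
    model (ask_model is a stub for a real provider call)."""
    output = event.get("output", "")
    reviewers = ["chatgpt", "claude-opus"]
    verdicts = {m: ask_model(m, f"Review this output:\n{output}") for m in reviewers}
    return referee(output, verdicts)
```

The key design point is that disagreement is surfaced rather than silently resolved, which keeps a human in the loop exactly where the models diverge.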
5) Backends for agent systems: where each category fits
5.1 Strong default principle
- [Assistant-stated but well-supported] Most serious agent systems should be decomposed into:
  - a structured system-of-record DB,
  - a retrieval/search layer,
  - an object store,
  - optionally a cache/queue,
  - optionally a graph,
  - optionally a time-series/observability layer.
- [Verified] Supabase's own platform docs support this kind of decomposition: Postgres, API, Auth, Storage, Edge Functions, Realtime.
  Sources: Supabase Platform, Architecture
5.2 Postgres + pgvector
- [Verified] Neon pricing currently shows:
  - Free: $0
  - Launch: usage-based; typical spend around $0.106 per CU-hour and $0.35 per GB-month
  - Scale: usage-based, at higher rates
  Sources: Neon Pricing
- [Verified] Neon explicitly includes `pgvector` in its extensions library.
  Sources: Neon Pricing
- [Tentative / strategic recommendation] Postgres + pgvector remains the cleanest "one place for rows + vectors" default when scale or retrieval specialization does not justify a separate vector store.
5.3 SQLite / libSQL / Turso
- [Verified] Turso pricing currently shows:
  - Free: $0
  - Developer: $4.99/month
  - Scaler: $24.92/month
  - Pro: $416.58/month
  Sources: Turso Pricing
- [Tentative / strategic recommendation] Turso/libSQL fits well for:
  - edge/local-first apps,
  - portable per-agent databases,
  - offline-ish architectures,
  - or cases where a central heavyweight Postgres service is overkill.
5.4 Dedicated vector databases
Pinecone
- [Verified] Pinecone pricing currently shows:
  - Starter: free
  - Standard: $50/month minimum
  - Enterprise: $500/month minimum
  - Read Units: $24 per million (Enterprise)
  Sources: Pinecone Pricing
Qdrant
- [Verified] The Qdrant free tier currently includes:
  - a single-node cluster,
  - 0.5 vCPU,
  - 1 GB RAM,
  - 4 GB disk.
  Billing above the free tier is resource-based.
  Sources: Qdrant Pricing
Weaviate
- [Verified] Weaviate pricing currently shows:
  - Flex starts at $45/month
  - Premium starts at $400/month
  - a free trial is available
  Sources: Weaviate Pricing
- [Conflict note / Verified] Earlier discussion referenced a much lower Weaviate "Plus" tier price than the current page's Flex at $45/month and Premium at $400/month. Treat older figures as stale unless a custom quote applies.
  Sources: Weaviate Pricing
- [Assistant-stated but unverified as a hard universal threshold] Most teams should not start with a dedicated vector database unless retrieval is unusually central, specialized, or already pushing past what pgvector/Postgres can comfortably support.
5.5 Redis
- [Verified] The Redis Cloud pricing page currently shows:
  - Essentials from $0.007/hour
  - Pro from $0.014/hour
  - Pro minimum $200/month
  Sources: Redis Pricing
- [Tentative / strategic recommendation] Redis fits as the fast ephemeral sidecar:
  - queues,
  - locks,
  - session state,
  - hot caches,
  - short-lived coordination.
5.6 MongoDB Atlas
- [Verified] The MongoDB pricing page currently shows:
  - Flex: $30/month
  - Dedicated: starts at $56.94/month
  Sources: MongoDB Pricing
- [Tentative / strategic recommendation] MongoDB fits when document-shaped JSON data is the center of gravity, but it is not the strongest default for relational control-plane needs.
5.7 Neo4j
- [Verified] Neo4j AuraDB Professional pricing currently includes a 1 GB / 1 CPU tier at $65.70/month.
- [Verified] AuraDB Business Critical is shown at $146/GB/month with a minimum 2 GB cluster.
  Sources: Neo4j Pricing
- [Tentative / strategic recommendation] Neo4j should be introduced when relationship/path logic is truly central (trust graphs, introductions, dependency networks, knowledge graphs), not just because graph DBs sound advanced.
5.8 Firestore / Firebase
- [Verified] The Firebase pricing page currently shows free Cloud Firestore quotas including:
  - 1 GiB stored data
  - 10 GiB/month network egress
  - 20K writes/day
  - 50K reads/day
  - 20K deletes/day
  Sources: Firebase Pricing
- [Assistant-stated but currently unverified from the main Firebase pricing page used here] Exact paid per-operation Firestore rates cited earlier in the conversation were not re-verified in this handoff document. Treat those earlier figures as needing a re-check against current Google Cloud pricing pages before reuse.
5.9 Tiger Data / Timescale category
- [Verified] Tiger Data pricing currently shows:
  - Performance: starts at $0.177/GB-month
  - Scale: compute starts at $36/month
  Sources: Tiger Data Pricing
- [Tentative / strategic recommendation] This category matters if agent telemetry, time-series events, audit trails, or observability become significant.
6) Supabase: where it fits and what it changes
6.1 Platform role
- [Verified] Supabase Platform docs state each project comes with:
  - a dedicated Postgres database,
  - auto-generated APIs,
  - Auth,
  - Edge Functions,
  - Realtime,
  - Storage.
  Sources: Supabase Platform
- [Verified] Supabase Architecture docs state the Storage API is an S3-compatible object storage service that stores metadata in Postgres.
  Sources: Supabase Architecture
- [Verified] Supabase docs state every project is a full Postgres database with `postgres`-level access.
  Sources: Database Overview
Implication
- [Verified] Supabase is best thought of as a backend platform around Postgres, not merely as a database host. That makes it a strong fit for agent apps that need data + API + auth + storage + functions in one place.
6.2 Storage pattern
- [Verified] Supabase Storage is a robust, scalable solution for files; it can manage files of any size with access controls.
  Sources: Supabase Storage
- [Verified] Supabase Storage Schema docs state Storage uses Postgres to store metadata for buckets and objects in a dedicated `storage` schema, and that records there should be treated as read-only, with operations going through the API.
  Sources: Storage Schema
- [Verified] Supabase Storage Access Control integrates with Postgres RLS.
  Sources: Storage Access Control
Implication
- [Verified] The correct design is:
  - actual files in object storage,
  - metadata in Postgres,
  - RLS/permissions at the metadata/API level.
  This aligns with the user's DBA preference and resolves any ambiguity from the conversation.
6.3 Search and vector support
- [Verified] Supabase AI docs support:
  - semantic search,
  - keyword search,
  - hybrid search.
  Sources: AI & Vectors, Keyword Search, Semantic Search, Hybrid Search
- [Verified] Supabase full-text search docs confirm Postgres has built-in full-text search capabilities.
  Sources: Full Text Search
- [Verified] Hybrid search docs explicitly recommend combining keyword and semantic search where appropriate.
  Sources: Hybrid Search
Implication
- [Assistant-stated but well-supported] For many client agents, full-text search first and hybrid search later is a stronger, cheaper default than "semantic search everything."
6.4 Security and multi-tenancy
- [Verified] Supabase RLS docs describe RLS as a Postgres primitive that can provide defense in depth and say it can be combined with Supabase Auth.
  Sources: RLS, Auth
- [Verified] Supabase API docs say the data APIs are designed to work with RLS.
  Sources: Securing your API
Implication
- [Verified] Supabase is a credible default for multi-tenant client agent systems where user/role/document segregation matters.
6.5 Functions and automation
- [Verified] Supabase Edge Functions are globally distributed TypeScript functions and can be used for webhooks and third-party integrations.
  Sources: Edge Functions
- [Verified] Supabase also supports local stack development via `supabase start`, including the database, auth, storage, and edge-functions runtime.
  Sources: Development Environment / `supabase start`
Implication
- [Verified] Supabase can serve as the backend control plane for custom agent workflows without needing a fully separate backend platform on day 1.
6.6 Supabase pricing
- [Verified] The Supabase pricing page snippet currently shows Pro: $25/month.
  Sources: Supabase Pricing
- [Assistant-stated but partially unverified] Exact higher-tier pricing details were not fully re-extracted in this handoff because the pricing page rendered incompletely in the fetched view. Re-check live pricing before quoting in proposals.
7) Embeddings, pgvector, and cost reality
7.1 What is verified
- [Verified] OpenAI's current developer pricing page lists:
  - `text-embedding-3-small`: $0.02 / 1M tokens
  - `text-embedding-3-large`: $0.13 / 1M tokens
  Sources: OpenAI API pricing, text-embedding-3-small, text-embedding-3-large
- [Verified] Ollama docs support embedding generation locally and show example usage with `embeddinggemma`.
  Sources: Ollama Embeddings
- [Verified] Supabase AI docs include a guide saying text embeddings can be generated in Edge Functions using a built-in AI inference API, "so no external API is required."
  Sources: Generate Embeddings
- [Verified / conflicting official docs] Supabase's Automatic Embeddings guide also says semantic search requires asynchronous API calls to a provider like OpenAI.
  Sources: Automatic Embeddings
Conflict note
- [Verified conflict] Supabase's official docs currently send mixed signals:
  - one guide says no external API is required,
  - another says semantic search requires API calls to a provider like OpenAI.
  Treat this as an open verification item before committing to a "zero external embedding cost" sales claim.
7.2 Strategic interpretation
- [Assistant-stated but grounded] The real cost risk is usually not pgvector itself; it is:
  - embedding too much low-value text,
  - re-embedding too often,
  - failing to structure data first,
  - or adopting a dedicated vector stack too early.
- [Tentative / strategic recommendation] Embeddings should be introduced in phases:
  - model structured data first,
  - use full-text search for many cases,
  - add hybrid search,
  - add semantic embeddings only where search quality or retrieval semantics clearly justify it.
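At the OpenAI per-token rates verified in 7.1, one-time embedding spend is easy to bound up front. A minimal back-of-envelope helper (the corpus sizes in the example are illustrative, not from the discussion):

```python
# Back-of-envelope embedding cost at OpenAI's published per-token rates
# ($0.02 and $0.13 per 1M tokens, as verified above).

PRICE_PER_MTOK = {
    "text-embedding-3-small": 0.02,  # USD per 1M tokens
    "text-embedding-3-large": 0.13,
}

def embedding_cost_usd(total_tokens: int, model: str) -> float:
    """Cost of embedding a corpus once with the given model."""
    return total_tokens / 1_000_000 * PRICE_PER_MTOK[model]

# Illustrative corpus: 2,000 documents averaging 5,000 tokens = 10M tokens.
tokens = 2_000 * 5_000
small = embedding_cost_usd(tokens, "text-embedding-3-small")  # 0.20 USD
large = embedding_cost_usd(tokens, "text-embedding-3-large")  # 1.30 USD
```

Even the large model embeds a 10M-token corpus once for about $1.30, which supports the point above: the cost risk comes from re-embedding churn and low-value text volume, not the base rate.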
8) GitHub vs object storage vs relational DB
8.1 GitHub limitations
- [Verified] GitHub docs say:
  - browser-uploaded files can be no larger than 25 MiB,
  - files larger than 100 MiB are blocked in normal Git usage,
  - Git is not designed to handle large SQL files,
  - repositories should be kept at reasonable sizes.
  Sources: About large files on GitHub
- [Verified] Git LFS stores pointer files in the repo and the actual large file elsewhere.
  Sources: About Git LFS
- [Verified] Current Git LFS maximum file sizes by plan are documented as:
  - GitHub Free: 2 GB
  - GitHub Pro: 2 GB
  - GitHub Team: 4 GB
  - GitHub Enterprise Cloud: 5 GB
  Sources: About Git LFS
- [Verified] GitHub bills Git LFS storage and bandwidth beyond the included quota.
  Sources: Git LFS billing
8.2 Practical role split
- [Verified + strategic recommendation] Use GitHub for:
  - code,
  - migrations,
  - prompts,
  - schemas,
  - hand-maintained Markdown knowledge,
  - config,
  - selected version-worthy text artifacts.
- [Verified + strategic recommendation] Use object storage for:
  - PDFs,
  - images,
  - screenshots,
  - recordings,
  - generated reports,
  - inbound client uploads,
  - other large operational artifacts.
- [Verified + strategic recommendation] Use Postgres for:
  - metadata,
  - ownership,
  - permissions,
  - hashes,
  - extracted text references,
  - chunk pointers,
  - processing state,
  - audit trails.
This is the recommended architecture for Solanasis-style client systems.
Major Decisions and Conclusions
1) The strong default for client AI agents
- [Assistant-stated recommendation] Start with:
- Supabase
- Postgres as the system of record
- Supabase Storage (or equivalent S3-compatible object storage) for actual files
- Auth + RLS for permissions
- Edge Functions for webhook/tooling/automation
- full-text search first
- hybrid search second
- pgvector later only where justified
2) The strong default for file handling
- [Verified + recommendation] Do not store raw document/image blobs in ordinary Postgres tables by default. Use object storage + metadata in Postgres.
3) The strong default for retrieval
- [Assistant-stated recommendation] Treat vector search as a secondary index, not the foundation.
- [Verified] Keyword/full-text and hybrid search are first-class supported patterns in Supabase/Postgres.
Sources: Full Text Search, Hybrid Search
4) The strong default for Manus/OpenClaw
- [Assistant-stated recommendation]
- Use Manus where you want hosted orchestration, research, and browser-driven action.
- Use OpenClaw when self-hosting, ACP harness control, local-first memory, and deep tool/runtime ownership matter.
- Use Claude Code as the primary repo-operating scalpel.
- Use ChatGPT as strategy/verifier/research reviewer.
Reasoning, Tradeoffs, and Why It Matters
Why not start with a vector DB?
- [Assistant-stated, strongly supported] Because most client agent problems are initially about:
- getting data into a structured system,
- modeling entities and permissions,
- storing artifacts safely,
- and creating predictable workflows.
Vector search is useful, but it does not replace:
- joins,
- statuses,
- relationships,
- audit history,
- tenant boundaries,
- approval flow,
- or operational state.
Why Supabase instead of “just Postgres”?
- [Verified] Supabase removes a large amount of backend glue:
- APIs,
- auth,
- storage,
- functions,
- realtime,
- RLS integration.
Sources: Supabase Platform, Architecture
Why object storage instead of GitHub for client files?
- [Verified] Because GitHub has file/repo limits, LFS billing, and is optimized for versioned development artifacts rather than ongoing operational binary storage.
Sources: About large files on GitHub, About Git LFS, Git LFS billing
Why full-text and hybrid search first?
- [Verified] Postgres already supports full-text search, and Supabase explicitly supports keyword, semantic, and hybrid search.
  Sources: Full Text Search, AI & Vectors, Hybrid Search
- [Assistant-stated recommendation] This reduces cost, complexity, and accidental semantic drift.
Recommended Playbook / Process
A. Client-agent backend default blueprint
Step 1 — Define the agent’s actual job
- [Assistant-stated recommendation] Before schema design, classify the agent into one or more roles:
- retrieval assistant,
- workflow operator,
- triage assistant,
- document analyst,
- dashboard/report assistant,
- action-taking SaaS agent.
This determines what must be structured.
Step 2 — Model the business in structured tables first
Suggested baseline tables:
- `clients`
- `users`
- `memberships` / `user_client_roles`
- `projects` / `engagements`
- `documents`
- `document_versions`
- `document_extracted_text`
- `document_chunks`
- `tasks`
- `task_runs`
- `tool_calls`
- `approvals`
- `artifacts`
- `audit_events`

- [Assistant-stated recommendation] Anything filterable, permission-sensitive, or operationally meaningful should be a table column before it becomes an embedding target.
Step 3 — Keep actual files out of the main tables
- [Verified] Store files in object storage, not in ordinary relational blob columns by default.
Sources: Supabase Storage, Storage Schema
Suggested columns in documents / artifacts:
`storage_provider`, `bucket`, `storage_key`, `mime_type`, `file_size_bytes`, `sha256`, `uploaded_by`, `visibility`, `status`, `version_number`
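As an illustration of populating those columns at upload time, a small sketch: the column names follow the suggestion above, while the bucket name, key scheme, and default values are hypothetical choices.

```python
# Sketch: derive document metadata before handing the bytes to object
# storage. Column names follow the suggested schema above; the bucket
# name "client-uploads" and key scheme are illustrative assumptions.
import hashlib
import mimetypes
from pathlib import Path

def document_row(path: Path, uploaded_by: str) -> dict:
    """Build the metadata row that goes to Postgres; the file itself
    goes to object storage under storage_key."""
    data = path.read_bytes()
    return {
        "storage_provider": "supabase",
        "bucket": "client-uploads",
        "storage_key": f"{uploaded_by}/{path.name}",
        "mime_type": mimetypes.guess_type(path.name)[0] or "application/octet-stream",
        "file_size_bytes": len(data),
        "sha256": hashlib.sha256(data).hexdigest(),
        "uploaded_by": uploaded_by,
        "visibility": "private",
        "status": "uploaded",
        "version_number": 1,
    }
```

The `sha256` column is what makes de-duplication and re-embedding decisions cheap later: if the hash is unchanged, extraction and embedding jobs can be skipped.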
Step 4 — Implement permissions immediately
- [Verified] Use RLS from the start if the agent is multi-tenant or client-facing.
Sources: RLS, Storage Access Control
Step 5 — Implement text extraction and full-text search
- Extract text from supported documents.
- Store normalized text separately.
- Index full-text search.
- Only then decide where chunking is needed.
- [Verified] Postgres full-text search is already available.
  Sources: Full Text Search
Step 6 — Add chunking only where retrieval needs it
- [Assistant-stated recommendation] Chunk:
- long SOPs,
- policies,
- playbooks,
- contracts,
- long reports,
- meeting summaries meant for later retrieval.
Do not chunk:
- every log line,
- transient noise,
- low-value duplicates,
- data that should be a structured row.
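A minimal sketch of the kind of chunker Step 6 implies, for the document types listed above. The word-window sizing and overlap are illustrative defaults, not values from the discussion:

```python
# Minimal word-window chunker with overlap, for long SOPs, policies,
# contracts, and similar retrieval targets. chunk_size and overlap
# are illustrative defaults.

def chunk_text(text: str, chunk_size: int = 400, overlap: int = 50) -> list[str]:
    """Split text into overlapping word windows for retrieval indexing."""
    assert overlap < chunk_size, "overlap must be smaller than chunk_size"
    words = text.split()
    if not words:
        return []
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break
    return chunks
```

The overlap exists so that a sentence straddling a window boundary still appears whole in at least one chunk; production chunkers usually split on headings or paragraphs first and fall back to windows like this.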
Step 7 — Introduce hybrid retrieval before full semantic dependence
- Start with filters + FTS.
- Then FTS + embeddings for recall improvement.
- Then reranking if needed.
- [Verified] Hybrid search is a documented Supabase pattern.
  Sources: Hybrid Search
Step 8 — Add embeddings only where they prove value
Candidate embedding targets:
- approved SOPs,
- curated knowledge-base docs,
- stable summaries,
- selected ticket/document corpora,
- meeting notes that users actually need to semantically search.
Non-candidate targets at first:
- raw chat exhaust,
- noisy drafts,
- every binary attachment,
- low-value repetitive logs.
Step 9 — Choose the embedding source deliberately
Options:
- [Verified] OpenAI embeddings (cheap, hosted, low-friction).
  Sources: OpenAI API pricing
- [Verified] Ollama local embeddings (local control, operational overhead).
  Sources: Ollama Embeddings
- [Verified, but needs confirmation in practice] The Supabase built-in AI inference path.
  Sources: Generate Embeddings
Step 10 — Track cost, latency, and drift
Minimum observability tables:
- embedding_jobs
- retrieval_queries
- retrieval_results
- llm_runs
- tool_call_log
- cost_events
- [Assistant-stated recommendation] If you do not meter this early, client agents become hard to price and hard to debug.
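A minimal sketch of the metering idea: an in-memory stand-in for the `llm_runs`/`cost_events` tables. The per-token prices are placeholder values, not published rates, and in production each event would be an inserted Postgres row:

```python
from dataclasses import dataclass, field

@dataclass
class CostMeter:
    """Toy ledger: record one cost event per LLM run, sum for reporting.
    Prices are per 1M tokens and purely illustrative."""
    price_in_per_mtok: float = 3.00
    price_out_per_mtok: float = 15.00
    events: list = field(default_factory=list)

    def record_llm_run(self, run_id: str, tokens_in: int, tokens_out: int) -> float:
        cost = (tokens_in * self.price_in_per_mtok
                + tokens_out * self.price_out_per_mtok) / 1_000_000
        # In production: INSERT INTO cost_events (run_id, cost_usd, ...) VALUES ...
        self.events.append({"run_id": run_id, "cost_usd": round(cost, 6)})
        return cost

    def total(self) -> float:
        return sum(e["cost_usd"] for e in self.events)

meter = CostMeter()
meter.record_llm_run("run-1", tokens_in=2_000, tokens_out=500)
meter.record_llm_run("run-2", tokens_in=10_000, tokens_out=1_200)
print(round(meter.total(), 6))  # 0.0615
```

With per-run rows like this, "what does this agent cost per ticket?" becomes a query instead of a guess.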
B. Strong default stack for Solanasis client work
Recommended default stack
- Backend platform: Supabase
- Primary database: Postgres
- File store: Supabase Storage or S3-compatible store
- Auth & permissions: Supabase Auth + RLS
- Automation: Edge Functions + DB triggers/webhooks
- Search v1: Postgres full-text search
- Search v2: Hybrid search
- Vector layer: pgvector only if/when justified
- Embeddings source: Start with lowest-friction option; re-evaluate based on client privacy/volume
- GitHub: code, migrations, prompts, curated Markdown, versioned text assets
Optional later additions
- Redis: queues / locks / cache
- Neo4j: trust graph / introductions / dependencies
- Dedicated vector DB: only if pgvector/Search no longer fits
- Turso/libSQL: local-first or edge-heavy deployments
C. Manus/OpenClaw usage playbook
If using Manus
Use Manus for:
- GTM research
- partner ecosystem mapping
- authenticated browser work
- Slack/Zapier-triggered automation
- async deliverables
- recurring research tasks
Avoid using Manus as the first choice for:
- deep local repo surgery,
- fragile UI-only production automations when API/Zapier/MCP is available,
- unchecked always-on usage before credit burn is understood.
If using OpenClaw
Use OpenClaw for:
- self-hosted agent runtime
- local-first memory
- ACP routing to Claude Code / Codex / Gemini CLI
- broad messaging-surface workflows
- hook/cron based local automations
- per-agent sandbox policy control
If combining them
- Manus = hosted operator
- OpenClaw = self-hosted orchestrator/control plane
- Claude Code = coding harness
- ChatGPT = synthesis/reviewer
Tools, Resources, Links, and References
Manus
- Projects: https://manus.im/docs/features/projects
- Skills: https://manus.im/docs/features/skills
- Wide Research: https://manus.im/docs/features/wide-research
- Browser Operator: https://manus.im/docs/integrations/manus-browser-operator
- Integrations Overview: https://manus.im/docs/integrations/integrations
- MCP Connectors: https://manus.im/docs/integrations/mcp-connectors
- Custom MCP: https://manus.im/docs/integrations/custom-mcp
- Zapier: https://manus.im/docs/integrations/zapier
- Slack Integration: https://manus.im/docs/integrations/slack-integration
- Manus API: https://manus.im/docs/integrations/manus-api
- Create Task API: https://open.manus.im/docs/api-reference/create-task
- OpenAI SDK Compatibility: https://open.manus.im/docs/openai-compatibility
- Webhooks: https://open.manus.im/docs/webhooks
- Website Builder / Getting Started: https://manus.im/docs/website-builder/getting-started
- Project Analytics (managed MySQL in web projects): https://manus.im/docs/website-builder/project-analytics
- Pricing page: https://manus.im/pricing
- Help Center pricing: https://help.manus.im/en/articles/11711111-what-is-the-current-membership-pricing-for-manus
- Daily credits: https://help.manus.im/en/articles/11711121-will-my-daily-refresh-credits-accumulate-or-reset
- Credit cost uncertainty: https://help.manus.im/en/articles/13185575-is-there-a-way-to-check-how-many-credits-a-task-will-cost-before-i-begin
OpenClaw
- GitHub repo / README: https://github.com/openclaw/openclaw
- ACP Agents: https://docs.openclaw.ai/tools/acp-agents
- Providers: https://docs.openclaw.ai/providers
- LiteLLM provider: https://docs.openclaw.ai/providers/litellm
- OpenRouter provider: https://docs.openclaw.ai/providers/openrouter
- Ollama provider: https://docs.openclaw.ai/providers/ollama
- Bedrock provider: https://docs.openclaw.ai/providers/bedrock
- Memory: https://docs.openclaw.ai/concepts/memory
- Hooks: https://docs.openclaw.ai/automation/hooks
- Cron Jobs: https://docs.openclaw.ai/automation/cron-jobs
- Multi-Agent Sandbox & Tools: https://docs.openclaw.ai/tools/multi-agent-sandbox-tools
Supabase
- Platform: https://supabase.com/docs/guides/platform
- Architecture: https://supabase.com/docs/guides/getting-started/architecture
- Database Overview: https://supabase.com/docs/guides/database/overview
- Full Text Search: https://supabase.com/docs/guides/database/full-text-search
- AI & Vectors: https://supabase.com/docs/guides/ai
- Keyword Search: https://supabase.com/docs/guides/ai/keyword-search
- Semantic Search: https://supabase.com/docs/guides/ai/semantic-search
- Hybrid Search: https://supabase.com/docs/guides/ai/hybrid-search
- Automatic Embeddings: https://supabase.com/docs/guides/ai/automatic-embeddings
- Generate Embeddings (built-in AI inference API): https://supabase.com/docs/guides/ai/quickstarts/generate-text-embeddings
- Auth: https://supabase.com/docs/guides/auth
- RLS: https://supabase.com/docs/guides/database/postgres/row-level-security
- Securing API: https://supabase.com/docs/guides/api/securing-your-api
- Storage: https://supabase.com/docs/guides/storage
- Storage Access Control: https://supabase.com/docs/guides/storage/security/access-control
- Storage Schema: https://supabase.com/docs/guides/storage/schema/design
- Edge Functions: https://supabase.com/docs/guides/functions
- Local stack / supabase start: https://supabase.com/docs/guides/functions/development-environment
- Pricing: https://supabase.com/pricing
GitHub
- About large files on GitHub: https://docs.github.com/repositories/working-with-files/managing-large-files/about-large-files-on-github
- About Git LFS: https://docs.github.com/repositories/working-with-files/managing-large-files/about-git-large-file-storage
- Git LFS billing: https://docs.github.com/billing/managing-billing-for-git-large-file-storage/about-billing-for-git-large-file-storage
Pricing references for databases and related tooling
- Neon: https://neon.com/pricing
- Turso: https://turso.tech/pricing
- Pinecone: https://www.pinecone.io/pricing/
- Qdrant: https://qdrant.tech/pricing/
- Weaviate: https://weaviate.io/pricing
- MongoDB: https://www.mongodb.com/pricing
- Redis: https://redis.io/pricing/
- Neo4j: https://neo4j.com/pricing/
- Firebase: https://firebase.google.com/pricing
- Tiger Data: https://www.tigerdata.com/pricing
- OpenAI embeddings pricing: https://developers.openai.com/api/docs/pricing/
- Ollama embeddings: https://docs.ollama.com/capabilities/embeddings
User-reported sources used cautiously
- https://www.reddit.com/r/ManusOfficial/comments/1r9z73v/telegram_costing_me_1348_credits_to_communicate/
- https://www.reddit.com/r/ManusOfficial/comments/1r8w4zz/manus_new_alwayson_agent_is_so_expensive/
- https://www.reddit.com/r/ManusOfficial/comments/1kgjsjv/best_way_to_use_chatgpt_for_manus_prompts/
- https://www.reddit.com/r/ManusOfficial/comments/1rbz0q8/here_is_the_guide_i_wish_i_had_for_manus_and/
- https://www.reddit.com/r/ManusOfficial/comments/1qeh8q5/what_prompts_have_worked_best_for_you_when_using/
Risks, Caveats, and Red Flags
- [Verified] Manus official surfaces are not perfectly consistent:
  - API docs show https://api.manus.ai/v1/tasks
  - OpenAI SDK compatibility docs show base_url="https://api.manus.im"
  This is a real documentation inconsistency that should be validated in practice before production integration.
  Sources: Create Task API, OpenAI SDK Compatibility
- [Verified] Manus pricing nomenclature also appears inconsistent across official surfaces (Pro vs Standard vs Customizable vs Extended).
  Sources: Help Center pricing, Pricing page, Blog pricing snippet
- [Verified] Browser Operator is in beta rollout and has documented limitations. It should not be treated as a universally reliable API substitute.
  Sources: Browser Operator
- [Verified conflict] Supabase’s embedding docs currently conflict on whether an external provider is required. Do not promise “zero external embedding cost” without a live implementation check.
  Sources: Generate Embeddings, Automatic Embeddings
- [Tentative / user-reported only] Manus credit burn appears to be a recurring complaint among recent users. Treat usage forecasting carefully and insist on a monitored trial before designing client-facing commercial commitments around it.
  Sources: Reddit links above
- [Verified] GitHub is not a substitute for operational file storage at scale.
  Sources: About large files on GitHub
- [Assistant-stated but important] A major failure mode in client agent projects is over-reliance on embeddings to compensate for poor schema design. This usually leads to higher cost, weaker precision, and harder debugging.
Open Questions / What Still Needs Verification
- [Open verification item] Which Manus API base URL is currently canonical in production for all endpoints: api.manus.ai or api.manus.im?
  Why open: official docs currently show both.
  Sources: Create Task API, OpenAI SDK Compatibility
- [Open verification item] What is the currently correct canonical public pricing taxonomy for Manus: Free/Pro/Team vs Standard/Customizable/Extended?
  Why open: official docs and marketing surfaces disagree.
  Sources: same as pricing conflict above
- [Open verification item] In Supabase today, what is the exact production-ready path for “no external API required” embeddings?
  Why open: docs conflict between built-in inference and provider-based language.
  Sources: Generate Embeddings, Automatic Embeddings
- [Open verification item] For specific Solanasis client sectors, what privacy/compliance posture is required:
- SMB general
- nonprofit
- RIA / wealth-adjacent
- healthcare-adjacent
- defense-adjacent
This affects whether hosted embeddings or cloud-hosted agent runtimes are acceptable.
- [Open verification item] What exact ClickUp action surface is needed first:
- read-only context,
- create/update task,
- comment,
- attachment upload,
- status transitions,
- time tracking,
- recurring triage?
This determines whether Zapier is enough or a custom MCP/server is warranted.
- [Open verification item] What concrete client deliverable shape should the first Solanasis custom agent support?
- internal ops copilot,
- document Q&A,
- evidence binder navigator,
- GTM research agent,
- website/content ops agent,
- service desk triage agent,
- renewal/risk/compliance workflow assistant.
- [Open verification item] Do any target clients require on-prem or VPC-isolated architecture from day one?
  If yes, the Supabase-hosted default may need adjustment.
- [Open verification item] Paid Firestore per-operation pricing was not re-verified in this artifact and should be checked directly on current Google Cloud pricing pages before citing.
Suggested Next Steps
Near-term (recommended)
- Build a Solanasis default client agent reference architecture using:
- Supabase
- object storage
- RLS
- full-text search
- optional pgvector
- Create a starter schema pack for:
- documents
- artifacts
- extracted text
- chunks
- tasks
- task runs
- approvals
- audit events
- Write a retrieval ladder policy:
- filter only
- filter + full-text
- filter + hybrid
- semantic fallback
- reranking only when justified
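The ladder can be sketched as an escalation loop that stops at the first strategy returning enough results. The strategy functions below are stubs standing in for real filter/FTS/hybrid queries, and the threshold is an illustrative default:

```python
def retrieval_ladder(query: str, strategies, min_hits: int = 3):
    """Walk cheap-to-expensive retrieval strategies in order; return the first
    (name, hits) pair that clears the threshold, else the last attempt."""
    hits: list[str] = []
    for name, fn in strategies:
        hits = fn(query)
        if len(hits) >= min_hits:
            return name, hits
    return strategies[-1][0], hits

ladder = [
    ("filter_only", lambda q: []),                  # structured filters: nothing
    ("full_text",   lambda q: ["d1", "d2"]),        # FTS: too few results
    ("hybrid",      lambda q: ["d1", "d2", "d3"]),  # hybrid clears the threshold
]
print(retrieval_ladder("renewal policy", ladder))  # ('hybrid', ['d1', 'd2', 'd3'])
```

Logging which rung answered each query is also the cheapest way to learn whether the semantic layer is earning its cost.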
- Define a file governance policy:
- what goes in GitHub,
- what goes in object storage,
- what metadata lives in Postgres,
- how retention/versioning/hashing work.
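The governance split above can be encoded as a small routing rule. The size threshold and extension list here are illustrative assumptions, not fixed policy; the point is that the rule is explicit and testable rather than tribal knowledge:

```python
def storage_target(path: str, size_bytes: int) -> str:
    """Route an asset: curated small text artifacts to Git, everything else
    (large or binary) to object storage. Metadata always lands in Postgres."""
    text_exts = (".md", ".sql", ".txt", ".yaml", ".json")  # illustrative set
    if path.endswith(text_exts) and size_bytes < 1_000_000:
        return "github"
    return "object_storage"

print(storage_target("prompts/agent.md", 4_096))          # github
print(storage_target("recordings/call.mp4", 52_428_800))  # object_storage
```

A rule like this also makes the GitHub-is-not-a-file-store caveat enforceable at ingestion time instead of being rediscovered during a repo cleanup.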
- Prototype one real client-style agent against this architecture:
- upload docs,
- extract text,
- search/filter,
- answer questions,
- record every retrieval and answer.
- Run a measured trial on Manus if still evaluating it for GTM or browser-based operations:
- one paid month,
- instrument credit burn,
- test Browser Operator on actual workflows,
- test webhook/API orchestration,
- test Slack/Zapier fit,
- then decide whether it is a core tool or a niche adjunct.
- Decide whether OpenClaw is needed as:
- a self-hosted internal orchestrator,
- or whether Claude Code + ChatGPT + Supabase + small custom services already cover the essential needs.
Handoff Notes for Another AI
You are continuing work for a user who:
- is a strong technical founder / architect,
- cares about structured data and cost discipline,
- does not want fuzzy hype,
- wants defaults that work for real client delivery,
- and has a DBA mindset about keeping large files out of ordinary relational storage.
Assume these working positions unless the user changes them
- The user prefers structured relational data first, not Markdown-only agent memory.
- The user is open to GitHub for curated text artifacts, but not as the only backend.
- The user is highly interested in Supabase as a default backend platform if it is actually the right choice.
- The user wants a client-deliverable default architecture, not just an internal tinkering stack.
- The user is skeptical of unnecessary vector spend and wants embeddings introduced only where justified.
- The user views Claude Code as the likely default code operator unless something clearly better emerges.
- The user is interested in Manus and OpenClaw, but needs the practical comparison, not fandom.
Priority follow-up work for another AI
- Turn this into a Solanasis client-agent starter kit:
- schema design,
- storage layout,
- RLS patterns,
- ingestion pipeline,
- retrieval ladder.
- Produce a decision matrix:
- Supabase vs Neon vs Turso vs custom stack
- object storage vs GitHub vs LFS
- pgvector vs Pinecone/Qdrant/Weaviate
- OpenAI embeddings vs Ollama vs Supabase-native inference path
- Validate open items:
- current Manus API base URL,
- true Manus pricing taxonomy,
- Supabase embeddings path ambiguity,
- ClickUp integration design needs,
- sector-specific compliance constraints.
- If asked to go deeper technically, create:
- table schema proposals,
- RLS templates,
- chunking rules,
- storage key conventions,
- ingestion job workflow,
- cost guardrails.
Reviewer Notes and Improvements Made
Reviewer availability
- [Verified] No separate reviewer-agent capability was used here.
- [Verified] A serious self-review pass was performed.
Improvements made during self-review
- Corrected ambiguity around Supabase file storage:
- clarified that the recommendation is object storage + Postgres metadata, not storing blobs directly in ordinary Postgres tables.
- Flagged official-source conflicts rather than hiding them:
- Manus API base URL inconsistency,
- Manus pricing taxonomy inconsistency,
- Supabase embeddings workflow inconsistency.
- Updated stale pricing assumptions where current official pages contradicted earlier statements:
- Weaviate current pricing page now shows Flex starts at 400/month, not the earlier lower figure referenced in prior discussion.
- Separated fact from recommendation more clearly:
- Verified vs User-stated vs Assistant-stated vs Tentative.
- Strengthened operational usefulness by adding:
- default stack recommendation,
- retrieval ladder,
- file governance split,
- schema starter guidance,
- concrete next steps,
- handoff notes for another AI.
- Preserved uncertainty honestly:
- where verification was incomplete, that is stated plainly.
Optional Appendix — Structured Summary (YAML-style)
document_date: 2026-03-15
topic:
  - Manus
  - OpenClaw
  - Supabase
  - structured backends for AI agents
  - object storage vs GitHub vs Postgres
  - vector search and embeddings cost
user_context:
  status: User-stated
  notes:
    - Power user of Claude, Claude Code/CoWork, and ChatGPT
    - Maintains local Markdown/AI-generated knowledge files for Solanasis
    - Wants client-ready custom AI agents with structured backends
    - Cost-aware, skeptical of unnecessary vector spend
    - "DBA mindset: keep large files out of normal DB tables"
main_recommendation:
  status: Assistant-stated but strongly source-backed
  summary: >
    Strong default for client agents: Supabase + Postgres + object storage +
    Auth/RLS + Edge Functions + full-text search first + hybrid search second +
    pgvector later only where justified.
manus:
  status: Verified
  strengths:
    - Projects
    - Skills
    - Wide Research
    - Browser Operator
    - API + webhooks
    - Slack + Zapier + MCP
  weaknesses_or_cautions:
    - credit unpredictability
    - Browser Operator beta limitations
    - official doc inconsistencies
openclaw:
  status: Verified
  strengths:
    - self-hosted control plane
    - ACP harness integration (Claude Code, Codex, Gemini CLI)
    - broad channel support
    - Markdown-file memory
    - hooks and cron
    - flexible providers including local models
supabase:
  status: Verified
  role: backend platform around Postgres
  includes:
    - Postgres
    - API
    - Auth
    - Edge Functions
    - Realtime
    - Storage
storage_rule:
  status: Verified
  summary: actual files in object storage, metadata in Postgres
retrieval_policy:
  status: Assistant-stated recommendation
  order:
    - structured filters
    - full-text search
    - hybrid search
    - semantic/vector only where justified
open_questions:
  - canonical Manus API base URL
  - canonical Manus pricing taxonomy
  - true Supabase built-in embeddings path in production
  - client sector compliance requirements
  - ClickUp action surface requirements