ARCHIVED 2026-03-24 — Superseded by .claude-plans/deep-plan-baserow-migration-2026-03-24.md (in solanasis-scripts). Originally described the Coda CSV → Baserow Cloud migration. Cloud-to-self-hosted migration completed 2026-03-24.

Baserow Migration Plan — Coda CSV Export to Baserow

Project: solanasis-scripts (multi-tool repo for Solanasis operational scripts)
Subdirectory: baserow/ (Baserow CLI + migration tools)
Status: COMPLETE (migrated 2026-03-07, finalized 2026-03-08)
Last Updated: 2026-03-08
Source data: C:\Users\zasya\Documents\coda-data-export\NHGOLB87tr-DB\
Baserow API docs: https://baserow.io/docs/apis/rest-api/introduction


Table of Contents

  1. Specifications
  2. Data Analysis Findings
  3. Architecture & File Structure
  4. API Reference Summary
  5. Table Definitions & Field Mappings
  5B. Reference Integrity Analysis
  6. Data Quality Issues & Mitigations
  7. Relationship Resolution Strategy
  8. CLI Command Spec
  9. Migration Execution Flow
  10. Test Plan
  11. Implementation Checklist
  12. Senior Reviewer Findings
  13. Post-Migration: URL Enrichment

1. Specifications

1.1 Goal

Migrate Dmitri’s Coda database export (CSV files from NHGOLB87tr-DB) into Baserow, preserving relationships (tags, locations, organizations) as link_row fields.

1.2 Deliverables

  1. README.md — Baserow API reference guide for ongoing Claude use
  2. brow CLI — Reusable command-line wrapper for Baserow API
  3. Migration script — Automated CSV-to-Baserow migration with relationship resolution
  4. SETUP.md — User setup instructions for credentials

1.3 Tech Stack

  • Runtime: Node.js 22 + TypeScript (matches existing solanasis-site stack)
  • Execution: tsx (already installed globally, v4.21.0)
  • CLI framework: commander (industry standard, lightweight)
  • CSV parsing: csv-parse (streaming, handles multiline, RFC 4180 compliant)
  • Env: dotenv (consistent with solanasis-site)
  • HTTP: Native fetch (Node 22 built-in, no deps needed)
  • Testing: vitest (fast, native TS/ESM support, no Babel needed)

1.4 Non-Goals (Explicitly Skipped)

  • CRM data (6sDSk92JyW-CRM) — sample data, 19 CSV files
  • Financial tables (per user decision):
    • Finances-grid-ewFx_PED36-default.csv (42 rows)
    • Credit-Cards-grid-zAlgZHbtBG-default.csv (23 rows)
    • Budget-grid-9iaPZcc0AX-default.csv (4 rows)
    • SAM Paychecks-grid--JIWn8ynoW-default.csv (3 rows)
    • SAM Paychecks Totals-grid-XbZvXyhU-p-default.csv (3 rows)
    • Table-grid-IFI2utooLR-default.csv — this is Bank Balances (headers: Name, Month, Ending Balance, Notes)
  • Enum lookups (tiny, static): High-Med-Low (3), Interval-Type (3), Version Num (3)
  • Software-Features-grid-TZJ5vIoHFY-default.csv — different project (The Source Platform)
  • Journal-grid-PID7-d1fGT-default.csv — only 1 real entry (“asd”)
  • Filtered views (same data, different view): “People to Respond”, “View of Time Logs”
  • Cut by user (2026-03-07): Books, Book-Tags, Time Logs, Time-Log Categories, Wishlist, Quotes, BSW 2025 Hike — not needed in Baserow

1.5 Auth Architecture

  • DB token (BASEROW_DB_TOKEN): Row operations (CRUD on rows). Header: Authorization: Token {t} (renamed from BASEROW_API_KEY for clarity per R20)
  • JWT (BASEROW_EMAIL + BASEROW_PASSWORD): Schema operations (workspaces, databases, tables, fields). Header: Authorization: JWT {t}
  • JWT obtained via POST /api/user/token-auth/ → {access_token, refresh_token, user} (note: token field is deprecated, use access_token)
  • access_token lifetime: 10 min. Auto-refresh when token age > 9 min.
  • refresh_token lifetime: 168 hours (7 days). Use POST /api/user/token-refresh/ with {refresh_token} → new {access_token}.
  • 2FA handling: If user has 2FA enabled, token-auth returns {two_factor_auth, token} instead. Must handle this case.
  • Fallback: If JWT auth fails, user can manually create tables in Baserow UI and provide table IDs for rows-only migration.
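The 9-minute refresh rule above can be isolated as a pure helper, which keeps it unit-testable independently of the HTTP client. A minimal sketch (`shouldRefreshAccessToken` is a hypothetical name, not an existing function in the repo):

```typescript
// Access tokens expire after 10 minutes (section 1.5); refresh once the token
// is older than 9 minutes, leaving a 1-minute safety margin.
const ACCESS_TOKEN_TTL_MS = 10 * 60 * 1000;
const REFRESH_MARGIN_MS = 1 * 60 * 1000;

function shouldRefreshAccessToken(issuedAtMs: number, nowMs: number = Date.now()): boolean {
  return nowMs - issuedAtMs > ACCESS_TOKEN_TTL_MS - REFRESH_MARGIN_MS;
}
```

The client would call this before every request and, when it returns true, hit POST /api/user/token-refresh/ before proceeding.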

1.6 Target Environment (discovered 2026-03-07)

  • Workspace: “Dmitri Sunshine’s workspace” (ID: 183254)
  • Database: “Personal CRM” (ID: 387807)
  • Existing tables (DO NOT DELETE — user will manually):
    • Contacts (ID: 873314) — 2 rows, 11 fields (simple flat schema, not reusable)
    • Interactions (ID: 873315) — 0 rows, 4 fields
  • Decision: Create 5 new tables alongside existing ones (cut from 12)
  • No 2FA on account — standard JWT auth works
  • DB token confirmed working for row operations
  • JWT confirmed working — user: “Dmitri Sunshine”

1.7 Conventions

  • All source files include permalinks to Baserow API docs in comments
  • user_field_names=true on all row endpoints (human-readable field names)
  • All functions that hit the API go through the centralized BaserowClient
  • No code duplication — shared utilities for date parsing, currency stripping, etc.
  • ESM modules ("type": "module" in package.json)

2. Data Analysis Findings

2.1 Accurate Row Counts (blank rows filtered)

| Table | CSV File | Raw Lines | Data Rows | Columns |
|---|---|---|---|---|
| Tag | Tag-grid-r5xQIRHdcF-default.csv | 55 | 51 | Name, Notes |
| Location | Location-grid-fXQ_7w3JVA-default.csv | 37 | 33 | Name, State, Notes |
| Book-Tags | Book-Tags-grid-FZI-5fxkSD-default.csv | 4 | 3 | Name, Notes |
| Time-Log Categories | Time-Log Categories-grid-gYy4hgGKjG-default.csv | 5 | 4 | Name, Notes |
| Organization | Organization-grid-X1DpSi_91e-default.csv | 118 | 66 | Name, Tag, Location, Website, LinkedIn, Summary, Partnership-Potential, Notes, Twitter |
| People | People-grid-EszwOBhhvI-default.csv | 196 | 160 | Name, Tags, Location, Title, Organization, Phone Number, Email, LinkedIn, IG, Twitter, FB, Blog, Website, Notes, Interest Form Message, Response to Interest Form, Connected From, Referral Source, LinkedIn Initial Outreach |
| Books | Books-grid-8XNJ7myI6L-default.csv | 31 | 29 | Name, Author, Link, Book-Tags, Quick Notes, Notes |
| Meeting Notes | Meeting Notes-grid-Q8pQH8wPNz-default.csv | 313 | 37 | Name, LinkedIn, Date, Follow-up Date, Notes |
| Time Logs | Time Logs-grid-dOc1PXyLFa-default.csv | 679 | 440 | Date, Time Start, Time-End, Notes, Categories, Duration, Logged |
| Wishlist | Wishlist-grid-1-AdR1Cd0Y-default.csv | 20 | 20 | Name, Date Added, Got It!, Amount, Possibilites [sic], Why, Notes |
| Quotes | Quotes-grid-6tMhOGUNb4-default.csv | 3 | 1 | Name, Author, Notes |
| BSW 2025 Hike | BSW 2025 Boulder Ventures Hike-grid-6_gjtLV5s0-default.csv | 50 | 10 | Column 1 (Name), Column 2 (First), Column 3 (Last), Column 4 (Email) |

Total data rows migrated: 351 (5 tables after user cuts; 345 initial + 6 orgs added post-migration to fix missing references)

2.2 Date Format

All dates across all tables use M/D/YYYY format (e.g., 10/21/2024, 1/1/2025). Must convert to ISO YYYY-MM-DD for Baserow date fields.
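The conversion is a small pure function. A sketch (`parseCodaDate` is a hypothetical helper name, not existing code):

```typescript
// M/D/YYYY (Coda export) → YYYY-MM-DD (Baserow date field).
// Returns null for empty or unparseable values so the field can be omitted.
function parseCodaDate(value: string): string | null {
  const m = value.trim().match(/^(\d{1,2})\/(\d{1,2})\/(\d{4})$/);
  if (!m) return null;
  const [, month, day, year] = m;
  return `${year}-${month.padStart(2, "0")}-${day.padStart(2, "0")}`;
}
```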

2.3 Boolean Values

"true" / "false" string literals in:

  • Time Logs → Logged column
  • Wishlist → Got It! column

2.4 Currency Values

Wishlist Amount column has mixed formats:

  • Standard: $1,000.00, $200.00
  • Shorthand: $1K, $2K
  • Decision: Store as text field (not number) to preserve original formatting

2.5 Phone Number Formats

Mixed formats in People Phone Number:

  • Raw digits: 7865535271
  • International: +1 (737) 294-3882, +52 55 7887 2380
  • US: (604) 417-6074, 702-941-1004
  • Decision: Store as text field (Baserow phone_number type may reject some formats)

2.6 URL Variations

LinkedIn URLs sometimes missing https:// prefix:

  • linkedin.com/in/yevmuchnik (missing protocol)
  • https://www.linkedin.com/in/christopherpina/ (correct)
  • Decision: Store as url field, normalize by prepending https:// if URL has no protocol

2.7 CSV Encoding

Organization CSV contains Unicode curly quotes (valid UTF-8). No special encoding handling needed. Read all CSVs with fs.readFileSync(path, 'utf-8').

2.8 BSW Hike Table Format

Generic column headers: Column 1, Column 2, Column 3, Column 4. Actual semantics: Full Name, First Name, Last Name, Email. Rename during migration.


3. Architecture & File Structure

solanasis-scripts/
├── package.json
├── tsconfig.json
├── .env                              # BASEROW_DB_TOKEN, BASEROW_EMAIL, BASEROW_PASSWORD, CSV_SOURCE_DIR, BASEROW_DATABASE_ID
├── .env.example                      # Template (no secrets)
├── .gitignore
├── README.md                         # Baserow API reference guide (Deliverable 1)
├── SETUP.md                          # User setup instructions (Deliverable 4)
├── BASEROW-MIGRATION-PLAN.md         # This file
├── vitest.config.ts
├── baserow/
│   ├── src/
│   │   ├── lib/
│   │   │   ├── baserow-client.ts     # Core API client (dual auth, rate limiting, pagination, retry)
│   │   │   ├── types.ts              # TypeScript interfaces for Baserow API
│   │   │   ├── csv-parser.ts         # CSV reading + blank row filtering + encoding handling
│   │   │   └── field-mapper.ts       # Data transforms: dates, booleans, URLs, currency
│   │   ├── cli/
│   │   │   ├── index.ts              # CLI entry point (commander) — "brow" command
│   │   │   └── commands/
│   │   │       ├── auth.ts           # login, test-token
│   │   │       ├── tables.ts         # list, create, delete tables
│   │   │       ├── fields.ts         # list, create fields
│   │   │       ├── rows.ts           # list, create, batch-create, delete rows
│   │   │       └── preflight.ts      # full connectivity + permission checks
│   │   ├── migrate/
│   │   │   ├── index.ts              # Migration orchestrator (--plan / --run)
│   │   │   ├── schema.ts            # Table + field creation logic
│   │   │   ├── import-csv.ts         # CSV → Baserow row insertion (batched, max 200)
│   │   │   └── relationships.ts      # link_row resolution (name → row ID)
│   │   └── config/
│   │       └── tables.ts             # All table definitions + field mappings + CSV paths
│   └── tests/
│       ├── lib/
│       │   ├── baserow-client.test.ts
│       │   ├── csv-parser.test.ts
│       │   └── field-mapper.test.ts
│       ├── migrate/
│       │   ├── schema.test.ts
│       │   ├── import-csv.test.ts
│       │   └── relationships.test.ts
│       └── fixtures/
│           ├── tags.csv              # Minimal test CSVs
│           ├── people.csv
│           └── locations.csv

Why baserow/ subdirectory?

The repo is solanasis-scripts (multi-tool). The Baserow tools live in baserow/ to keep the repo organized for future tools (e.g., brevo/, cloudflare/).

Entry Point

npx tsx baserow/src/cli/index.ts <command> [options]
# OR via npm script:
npm run brow -- <command> [options]

4. API Reference Summary

Full reference in README.md (Deliverable 1). Key points here.

4.1 Authentication

| Method | Header | Scope | Endpoints |
|---|---|---|---|
| DB Token | Authorization: Token {t} | Row CRUD on tables the token has access to | Row endpoints only |
| JWT | Authorization: JWT {t} | Full access: workspaces, databases, tables, fields, rows | All endpoints |

JWT Lifecycle (verified against OpenAPI schema https://api.baserow.io/api/schema.json):

POST https://api.baserow.io/api/user/token-auth/
Body: {"email": "...", "password": "..."}
Response (no 2FA): {
  "access_token": "eyJ...",   // valid 10 min — use in Authorization: JWT {access_token}
  "refresh_token": "eyJ...",  // valid 168 hours (7 days)
  "token": "eyJ...",          // DEPRECATED — same as access_token
  "user": {"first_name": "...", "username": "...", "language": "..."}
}
Response (2FA enabled): {
  "two_factor_auth": "totp",
  "token": "temp_token_for_2fa_verify"
}
  • Refresh: POST /api/user/token-refresh/ with {"refresh_token": "..."} → same response shape

4.2 Key Endpoints (verified against OpenAPI schema — 267 total paths)

| Operation | Method | Path | Auth | Notes |
|---|---|---|---|---|
| Auth (JWT) | POST | /api/user/token-auth/ | None | Body: {email, password} |
| Refresh JWT | POST | /api/user/token-refresh/ | None | Body: {refresh_token} |
| List workspaces | GET | /api/workspaces/ | JWT | |
| List apps/databases | GET | /api/applications/workspace/{workspace_id}/ | JWT | Returns all apps in workspace |
| List tables | GET | /api/database/tables/database/{database_id}/ | JWT | |
| Create table | POST | /api/database/tables/database/{database_id}/ | JWT | Body: {name} (required) |
| Get table | GET | /api/database/tables/{table_id}/ | JWT | |
| Delete table | DELETE | /api/database/tables/{table_id}/ | JWT | |
| List fields | GET | /api/database/fields/table/{table_id}/ | JWT | |
| Create field | POST | /api/database/fields/table/{table_id}/ | JWT | Body: {name, type, ...type_opts} |
| Update field | PATCH | /api/database/fields/{field_id}/ | JWT | Rename or change type |
| Delete field | DELETE | /api/database/fields/{field_id}/ | JWT | |
| List rows | GET | /api/database/rows/table/{table_id}/?user_field_names=true | Token | Paginated: ?size=200&page=1 |
| Create row | POST | /api/database/rows/table/{table_id}/?user_field_names=true | Token | |
| Batch create | POST | /api/database/rows/table/{table_id}/batch/?user_field_names=true | Token | Body: {items: [...]} max 200 |
| Delete row | DELETE | /api/database/rows/table/{table_id}/{row_id}/ | Token | |
| Batch delete | POST | /api/database/rows/table/{table_id}/batch-delete/ | Token | Body: {items: [id1, id2]} |

Table creation note: When creating a table with just {name}, Baserow auto-creates it with a primary “Name” text field and 3 default fields (Notes, Active, etc.). We need to delete the extra default fields after creation, or use the data + first_row_header approach to control initial fields.

4.3 Rate Limits

  • 20 requests/second (per token)
  • 10 concurrent requests
  • Response: 429 Too Many Requests with Retry-After header
  • Strategy (simplified per R14): Configurable delay between requests (default 100ms) + reactive exponential backoff on 429. For ~854 rows in sequential batches, request rate naturally stays well under limits.
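The reactive part of that strategy might look like the following sketch. `fetchWithBackoff` and the injected `doFetch`/`sleep` parameters are hypothetical names chosen so the retry logic is testable without the network; this is not the actual baserow-client code:

```typescript
// Retry on 429, honoring Retry-After (seconds) when present, otherwise
// exponential backoff: base, 2x base, 4x base, ...
type MinimalResponse = { status: number; headers: { get(name: string): string | null } };

async function fetchWithBackoff(
  doFetch: () => Promise<MinimalResponse>,
  sleep: (ms: number) => Promise<void>,
  maxRetries = 5,
  baseDelayMs = 500,
): Promise<MinimalResponse> {
  for (let attempt = 0; ; attempt++) {
    const res = await doFetch();
    if (res.status !== 429 || attempt >= maxRetries) return res;
    const retryAfter = res.headers.get("Retry-After");
    const delayMs = retryAfter ? Number(retryAfter) * 1000 : baseDelayMs * 2 ** attempt;
    await sleep(delayMs);
  }
}
```

Injecting `doFetch` and `sleep` keeps unit tests fast (no real timers, no real HTTP), in line with the mocked-fetch approach in section 10.1.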

4.4 Pagination

  • Query: ?size=200&page=1 (max 200 per page)
  • Response: {count, next, previous, results: [...]}
  • Auto-paginate: follow next until null
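Following `next` until null can be written as a small loop over an injected page fetcher. A sketch (`fetchAllPages` and `getPage` are hypothetical names; in practice `getPage` would be a fetch against the list-rows endpoint):

```typescript
// Shape of a Baserow paginated list response (section 4.4).
interface Page<T> { count: number; next: string | null; previous: string | null; results: T[] }

async function fetchAllPages<T>(
  firstUrl: string,
  getPage: (url: string) => Promise<Page<T>>,
): Promise<T[]> {
  const all: T[] = [];
  let url: string | null = firstUrl;
  while (url) {
    const page: Page<T> = await getPage(url);
    all.push(...page.results);
    url = page.next; // null on the last page terminates the loop
  }
  return all;
}
```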

4.5 Batch Operations (verified: maxItems: 200, minItems: 1 in OpenAPI schema)

  • Batch create: POST .../batch/?user_field_names=true with {"items": [{...}, ...]} — max 200 rows
  • Batch update: PATCH .../batch/?user_field_names=true with {"items": [{"id": 1, ...}, ...]}
  • Batch delete: POST .../batch-delete/ with {"items": [1, 2, 3]}
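Because the batch endpoints cap items at 200, row payloads need chunking before submission. A minimal sketch (`chunk` is a hypothetical helper, not existing code):

```typescript
// Split an array into consecutive slices of at most `size` items,
// matching the maxItems: 200 limit on Baserow batch endpoints.
const BATCH_MAX = 200;

function chunk<T>(items: T[], size: number = BATCH_MAX): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) out.push(items.slice(i, i + size));
  return out;
}
```

For example, the 440 Time Logs rows would go out as three batch-create requests (200 + 200 + 40).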

4.6 Field Types Used

| Baserow Type | Coda Source | Creation Payload | Value Format |
|---|---|---|---|
| text | Text fields | {name, type: "text"} | "string" |
| long_text | Notes, multiline | {name, type: "long_text"} | "string\nwith\nnewlines" |
| url | URLs | {name, type: "url"} | "https://..." |
| email | Email | {name, type: "email"} | "user@example.com" |
| date | Dates | {name, type: "date", date_format: "ISO", date_include_time: false} | "2024-10-21" |
| boolean | Checkboxes | {name, type: "boolean"} | true / false |
| link_row | References | {name, type: "link_row", link_row_table_id: N} | [1, 5, 12] (row IDs) |
  • Created on source table: {name: "Tags", type: "link_row", link_row_table_id: <tag_table_id>}
  • Automatically creates reverse link field on target table
  • Values are arrays of Baserow row IDs: [1, 5, 12]
  • When reading with user_field_names=true: returns [{id: 1, value: "Boulder"}, ...]
  • When writing: send [1, 5, 12] (just IDs)
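Because of that read/write asymmetry, link_row values read back from the API must be mapped to bare ID arrays before re-submission. A sketch (`toLinkRowWriteValue` is a hypothetical name):

```typescript
// Shape returned when reading link_row values with user_field_names=true.
interface LinkRowRef { id: number; value: string }

// Convert the read shape to the write shape (array of row IDs).
function toLinkRowWriteValue(read: LinkRowRef[]): number[] {
  return read.map((ref) => ref.id);
}
```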

5. Table Definitions & Field Mappings

Phase A — Lookup Tables (no dependencies)

A1: Tag

| CSV Column | Baserow Field | Type | Notes |
|---|---|---|---|
| Name | Name | text | Primary field (auto-created by Baserow) |
| Notes | Notes | long_text | |
  • Dedup: “Philanthropy Consultant” appears twice → import only once
  • CSV: Tag-grid-r5xQIRHdcF-default.csv
  • Rows: 51 → ~50 after dedup

A2: Location

| CSV Column | Baserow Field | Type | Notes |
|---|---|---|---|
| Name | Name | text | Primary field |
| State | State | text | |
| Notes | Notes | long_text | |
  • Dedup: “Costa Rica” appears twice → import only once
  • CSV: Location-grid-fXQ_7w3JVA-default.csv
  • Rows: 33 → ~32 after dedup

A3: Book-Tags

| CSV Column | Baserow Field | Type | Notes |
|---|---|---|---|
| Name | Name | text | Primary field |
| Notes | Notes | long_text | |
  • CSV: Book-Tags-grid-FZI-5fxkSD-default.csv
  • Rows: 3

A4: Time-Log Categories

| CSV Column | Baserow Field | Type | Notes |
|---|---|---|---|
| Name | Name | text | Primary field |
| Notes | Notes | long_text | |
  • Values: ONE|Boulder, Creators-Hub, SAM, OOS
  • CSV: Time-Log Categories-grid-gYy4hgGKjG-default.csv
  • Rows: 4

Phase B — Core Tables (reference lookups)

B1: Organization

| CSV Column | Baserow Field | Type | Notes |
|---|---|---|---|
| Name | Name | text | Primary field |
| Tag | Tags | link_row → Tag | Single tag per org in CSV |
| Location | Location | link_row → Location | Single location per org |
| Website | Website | url | |
| LinkedIn | LinkedIn | url | |
| Summary | Summary | long_text | |
| Partnership-Potential | Partnership Potential | long_text | |
| Notes | Notes | long_text | Multiline, may contain encoding issues |
| Twitter | Twitter | url | |
  • CSV: Organization-grid-X1DpSi_91e-default.csv
  • Rows: 72 (66 from CSV + 6 added post-migration: Herban Wellness, Polestar Gardens / Village, Antler, Questco, Rootstock Philanthropy, Earth05)
  • Encoding: Contains Unicode curly quotes (valid UTF-8, no special handling needed)

B2: People

| CSV Column | Baserow Field | Type | Notes |
|---|---|---|---|
| Name | Name | text | Primary field |
| Tags | Tags | link_row → Tag | Comma-separated multi-value |
| Location | Location | link_row → Location | May be comma-separated or contain #r refs |
| Title | Title | text | |
| Organization | Organization | link_row → Organization | Single value |
| Phone Number | Phone Number | text | Mixed formats, not phone_number type |
| Email | Email | email | |
| LinkedIn | LinkedIn | url | May need https:// prefix |
| IG | Instagram | url | |
| Twitter | Twitter | url | |
| FB | Facebook | url | |
| Blog | Blog | url | |
| Website | Website | url | |
| Notes | Notes | long_text | |
| Interest Form Message | Interest Form Message | long_text | |
| Response to Interest Form | Response to Interest Form | long_text | |
| Connected From | Connected From | text | Free-text, 2 unique values |
| Referral Source | Referral Source | text | Free-text, 9 unique values |
| LinkedIn Initial Outreach | LinkedIn Initial Outreach | date | M/D/YYYY format |
  • CSV: People-grid-EszwOBhhvI-default.csv
  • Rows: 160
  • Complex references: Tags (multi), Location (multi with broken refs), Organization (single)

B3: Books

| CSV Column | Baserow Field | Type | Notes |
|---|---|---|---|
| Name | Name | text | Primary field |
| Author | Author | text | Some have leading spaces |
| Link | Link | url | Amazon/Scribd URLs |
| Book-Tags | Book Tags | link_row → Book-Tags | Comma-separated |
| Quick Notes | Quick Notes | long_text | |
| Notes | Notes | long_text | |
  • CSV: Books-grid-8XNJ7myI6L-default.csv
  • Rows: 29

Phase C — Dependent Tables

C1: Meeting Notes

| CSV Column | Baserow Field | Type | Notes |
|---|---|---|---|
| (none) | Name | formula: field('Person') | Primary field — auto-derives from Person link |
| Name | Person | link_row → People | 37/37 names matched. Handles Unicode NFC normalization for ñ etc. |
| LinkedIn | LinkedIn | url | |
| Date | Date | date | M/D/YYYY → YYYY-MM-DD |
| Follow-up Date | Follow-up Date | date | M/D/YYYY → YYYY-MM-DD |
| Notes | Notes | long_text | Multiline |
  • CSV: Meeting Notes-grid-Q8pQH8wPNz-default.csv
  • Rows: 37
  • Table ID: 873613 (rebuilt 2026-03-08; original 873567 deleted)
  • Primary field: Formula field('Person') — no redundant Name text column
  • link_row relationships: 6 total across all tables
    • Org.Tags → Tag, Org.Location → Location
    • People.Tags → Tag, People.Location → Location, People.Organization → Organization
    • Meeting Notes.Person → People

C2: Time Logs

| CSV Column | Baserow Field | Type | Notes |
|---|---|---|---|
| Date | Date | date | M/D/YYYY → YYYY-MM-DD |
| Time Start | Time Start | text | "12:00 PM" format — text, not time type |
| Time-End | Time End | text | "12:45 PM" format |
| Notes | Notes | long_text | |
| Categories | Categories | text | Category names (NOT link_row for simplicity; per 6.10 the \| in ONE\|Boulder is part of the name) |
| Duration | Duration | text | "45 mins", "2 hrs 15 mins" format |
| Logged | Logged | boolean | "true"/"false" → true/false |
  • CSV: Time Logs-grid-dOc1PXyLFa-default.csv
  • Rows: 440 (454 raw, 14 garbage rows filtered by requiring Date field)
  • Decision: Categories stored as text (not link_row) since there are only 3 values in use and resolving them into link_row adds complexity for minimal benefit.

Phase D — Misc Tables

D1: Wishlist

| CSV Column | Baserow Field | Type | Notes |
|---|---|---|---|
| Name | Name | text | Primary field |
| Date Added | Date Added | date | M/D/YYYY → YYYY-MM-DD |
| Got It! | Got It | boolean | "true"/"false" |
| Amount | Amount | text | Mixed formats ($1,000.00, $1K) — text, not number |
| Possibilites | Possibilities | text | Fix typo in field name |
| Why | Why | long_text | |
| Notes | Notes | long_text | |
  • CSV: Wishlist-grid-1-AdR1Cd0Y-default.csv
  • Rows: 20

D2: Quotes

| CSV Column | Baserow Field | Type | Notes |
|---|---|---|---|
| Name | Quote | text | Rename primary field from "Name". Keep as text type (R23: changing primary field type may not be supported) |
| Author | Author | text | |
| Notes | Notes | long_text | |
  • CSV: Quotes-grid-6tMhOGUNb4-default.csv
  • Rows: 1

D3: BSW 2025 Hike

| CSV Column | Baserow Field | Type | Notes |
|---|---|---|---|
| Column 1 | Full Name | text | Primary field, rename from generic header |
| Column 2 | First Name | text | |
| Column 3 | Last Name | text | |
| Column 4 | Email | email | |
  • CSV: BSW 2025 Boulder Ventures Hike-grid-6_gjtLV5s0-default.csv
  • Rows: 10

5B. Reference Integrity Analysis (verified 2026-03-07)

Tags

  • Tag table: 51 entries (50 unique after dedup of “Philanthropy Consultant”)
  • People → Tags: 46 unique values referenced. All match Tag table EXCEPT:
    • #r48, #r49, #r54 — broken Coda refs → skip
  • Org → Tag: 10 unique values. All match Tag table ✓

Locations

  • Location table: 33 entries (32 unique after dedup of “Costa Rica”)
  • People → Location: 27 unique values. All resolve after splitting EXCEPT:
    • #r31, #r36 — broken Coda refs → skip
    • Denver CO,Costa Rica → split into 2, both resolve ✓
    • Denver,#r28 → split, Denver resolves, #r28 skipped ✓
  • Org → Location: 14 unique values. All match Location table ✓

Organizations

  • Org table: 72 entries (66 from CSV + 6 added post-migration)
  • People → Organization: 16 unique values. All 16 now resolve ✓
    • 6 orgs were missing from the original CSV and created post-migration:
      • Herban Wellness (person: Katya Difani)
      • Polestar Gardens / Village (person: Terry Curran)
      • Antler (person: Rio Hodges)
      • Questco (person: Adrienne Milligan Majcina, SHRM-CP)
      • Rootstock Philanthropy (person: Brad Smith, M.Ed.)
      • Earth05 (person: Maria Dahrieh)

Book-Tags

  • Book-Tags table: 3 entries (Biz, Top 10, Social)
  • Books → Book-Tags: 2 values referenced (Biz, Top 10). All match ✓

Time-Log Categories

  • Categories table: 4 entries (ONE|Boulder, Creators-Hub, SAM, OOS)
  • Time Logs → Categories: 3 values used (ONE|Boulder, Creators-Hub, SAM). All match ✓
  • OOS exists in table but not referenced by any Time Log

6. Data Quality Issues & Mitigations

6.1 Broken Coda References

People CSV contains internal Coda row references that didn’t export properly:

  • Tags: #r48, #r49, #r54 → Strip (log warning, skip)
  • Location: #r31, #r36 → Strip
  • Compound: Denver,#r28 → Split on comma, keep Denver, strip #r28

Detection rule: Any value starting with #r followed by digits is a broken Coda reference. Regex: /^#r\d+$/

6.2 Duplicate Names in Lookup Tables

  • Tag: “Philanthropy Consultant” × 2 → Keep first, skip second
  • Location: “Costa Rica” × 2 → Keep first, skip second

Strategy: Build Map<string, number> during import. On duplicate name, use existing row ID.
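That strategy can be sketched as a pure function (`buildLookupMap` is a hypothetical name; in the real import the map would be populated as rows are created, deciding before each create whether the name already has an ID):

```typescript
// First occurrence of a name wins; later duplicates reuse the existing row ID
// instead of creating a second row (section 6.2).
function buildLookupMap(rows: { name: string; baserowId: number }[]): Map<string, number> {
  const map = new Map<string, number>();
  for (const row of rows) {
    const key = row.name.trim();
    if (!map.has(key)) map.set(key, row.baserowId);
  }
  return map;
}
```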

6.3 Compound Location Values

People reference multiple locations in one field:

  • Denver CO,Costa Rica → Split, resolve both, create multi-value link_row [id_denver, id_costarica]
  • Denver,#r28 → Split, resolve Denver (matches Location table), strip #r28

Edge case: Denver vs Denver CO — the Location table contains both Denver and Denver CO as separate entries, so a bare Denver resolves to the Denver row as-is.

6.4 Missing Organization References (corrected per reviewer R7)

6 orgs referenced by People that don’t exist in Org table:

  • Herban Wellness (Katya Difani), Polestar Gardens / Village (Terry Curran)
  • Antler (Rio Hodges), Questco (Adrienne Milligan Majcina, SHRM-CP)
  • Rootstock Philanthropy (Brad Smith, M.Ed.), Earth05 (Maria Dahrieh)

Strategy: Log warning, skip link_row for these 6 rows.

6.5 URL Normalization

LinkedIn URLs sometimes missing protocol:

  • linkedin.com/in/yevmuchnik → https://linkedin.com/in/yevmuchnik

Rule: If URL doesn’t start with http:// or https://, prepend https://.

6.6 Twitter Handle Normalization (from reviewer R2)

1 out of 12 People Twitter values is a handle, not URL:

  • @FitFounder (Dan Go) — all 14 other Twitter values (People + Org) are full URLs

Rule: If value starts with @, convert to https://twitter.com/{handle} (strip @).
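Together with the protocol rule in 6.5, the normalization might look like this sketch (`normalizeUrl` is a hypothetical helper name):

```typescript
// Normalize URL-ish values per sections 6.5 and 6.6:
// @handles become twitter.com URLs; bare domains get an https:// prefix.
function normalizeUrl(value: string): string | null {
  const v = value.trim();
  if (!v) return null;
  if (v.startsWith("@")) return `https://twitter.com/${v.slice(1)}`;
  if (/^https?:\/\//i.test(v)) return v; // already has a protocol
  return `https://${v}`;
}
```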

6.7 Whitespace Trimming (from reviewer R3, R16)

Apply .trim() to ALL values before API submission:

  • Leading spaces in Author: " David Ehrlichman"
  • Leading spaces in 3 emails: " rio@antler.co", " adriana@agamistudios.com", " reed@regenesisgroup.com"
  • Trailing \n in all BSW Hike cells: "Allie Clark\n" → "Allie Clark"
  • JS .trim() strips \n, \r, \t, spaces — handles all cases

6.8 CSV Encoding (corrected per reviewer R6)

Organization CSV uses valid UTF-8 with Unicode curly quotes (U+201C/U+201D). Not Windows-1252. Standard fs.readFileSync(path, 'utf-8') works perfectly.

6.9 Partial/Garbage Rows (from reviewer R5)

Time Logs has 14 trailing garbage rows (only Logged=false). One partial row has only Time Start=6:30 PM.

Rule: Each table config specifies a requiredField. Rows missing that field are skipped. Corrected count: Time Logs 454→440.

| Table | Required Field | Before | After |
|---|---|---|---|
| Time Logs | Date | 454 | 440 |
| All others | Name / Column 1 | same | same |
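The requiredField rule can be sketched as follows (`filterRows` is a hypothetical helper; each entry in the tables config would carry its own `requiredField`):

```typescript
// A row is kept only if its table's required field is non-empty after trimming
// (section 6.9). Dropped count feeds the migration warnings log.
type CsvRow = Record<string, string>;

function filterRows(rows: CsvRow[], requiredField: string): { kept: CsvRow[]; dropped: number } {
  const kept = rows.filter((r) => (r[requiredField] ?? "").trim() !== "");
  return { kept, dropped: rows.length - kept.length };
}
```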

6.10 Categories Not Pipe-Separated (from reviewer R8)

The | in ONE|Boulder is part of the name, NOT a delimiter. Each Time Log row has exactly one category. No splitting needed.

6.11 CRLF Line Endings (from reviewer R21)

CSVs use Windows \r\n. csv-parse handles natively. Any manual splitting must use /\r?\n/.


7. Relationship Resolution Strategy

7.1 Resolution Order

Must import lookup tables first to build ID maps:

1. Import Tag          → tagMap:      Map<tagName, baserowRowId>
2. Import Location     → locationMap: Map<locationName, baserowRowId>
3. Import Organization → orgMap:      Map<orgName, baserowRowId>  (resolves: Tag, Location)
4. Import People       (resolves: Tag, Location, Organization)
5. Import Meeting Notes (Person link_row → People, added in the 2026-03-08 rebuild — see C1)

7.2 Resolution Algorithm (updated per reviewer R1 — “try whole first”)

Locations like “Boston, MA” contain commas. Naive comma-splitting would break them. Strategy: Try exact match on whole string first. Only split if no whole match.

function resolveRefs(
  csvValue: string,                    // e.g., "Boston, MA" or "Boulder,Denver,#r48"
  lookupMap: Map<string, number>,
  separator: string = ','
): { ids: number[]; warnings: string[] } {
  const warnings: string[] = [];
  const ids: number[] = [];
  const trimmed = csvValue.trim();
 
  if (!trimmed) return { ids, warnings };
 
  // 1. Try whole value first (handles "Boston, MA", "Arlington, VA", etc.)
  const wholeMatch = lookupMap.get(trimmed);
  if (wholeMatch !== undefined) {
    return { ids: [wholeMatch], warnings };
  }
 
  // 2. If no whole match, split on separator and resolve each part
  for (const raw of trimmed.split(separator)) {
    const name = raw.trim();
    if (!name) continue;
    if (/^#r\d+$/.test(name)) {
      warnings.push(`Skipped broken Coda ref: ${name}`);
      continue;
    }
    const id = lookupMap.get(name);
    if (id !== undefined) {
      if (!ids.includes(id)) ids.push(id);  // deduplicate
    } else {
      warnings.push(`No match for "${name}" in lookup table`);
    }
  }
 
  return { ids, warnings };
}

Affected cases verified:

  • "Boston, MA" → whole match ✓ (2 People rows: Magenta Ceiba, Mel Robbins)
  • "Denver CO,Costa Rica" → no whole match → split → both resolve ✓
  • "Denver,#r28" → no whole match → split → Denver resolves, #r28 skipped ✓
  • "Super-Connector,Events-Producer" → no whole match → split → both resolve ✓
7.3 Schema Creation Order

  1. Create all tables first (to get table IDs)
  2. Then create link_row fields (which need target table IDs)
  3. Then import data

Order:

CREATE TABLES: Tag, Location, Organization, People, Meeting Notes
CREATE LINK_ROW FIELDS:
  - Org.Tags → Tag table
  - Org.Location → Location table
  - People.Tags → Tag table
  - People.Location → Location table
  - People.Organization → Organization table
IMPORT DATA (in dependency order: Tag → Location → Organization → People → Meeting Notes)

8. CLI Command Spec

Entry point

npx tsx baserow/src/cli/index.ts <command> [options]

Commands

| Command | Auth | Description |
|---|---|---|
| preflight | Both | Full connectivity + permission check |
| auth login | JWT | Obtain JWT, display workspace/database info |
| auth test | DB Token | Verify DB token works |
| workspaces | JWT | List workspaces |
| tables <db_id> | JWT | List tables in database |
| table-create <db_id> <name> | JWT | Create empty table |
| fields <table_id> | Either | List fields in table |
| field-create <table_id> <name> <type> | JWT | Create field with type |
| rows <table_id> [--limit N] | DB Token | List rows (paginated) |
| row-create <table_id> --data '{}' | DB Token | Create single row |
| batch-create <table_id> --file data.json | DB Token | Batch create (max 200) |
| row-delete <table_id> <row_id> | DB Token | Delete single row |
| migrate --plan | Both | Dry-run: show what would be created/imported |
| migrate --run | Both | Execute full migration |
| migrate --run --table <name> | Both | Migrate single table |

Global Options

  • --env <path> — Path to .env file (default: .env)
  • --verbose — Show detailed API calls

9. Migration Execution Flow

STEP 1: PREFLIGHT
├── 1.1 Load .env → verify BASEROW_DB_TOKEN, BASEROW_EMAIL, BASEROW_PASSWORD, CSV_SOURCE_DIR, BASEROW_DATABASE_ID
├── 1.2 Test DB token → try listing rows on any known table (or handle 404)
├── 1.3 Obtain JWT → POST /api/user/token-auth/
├── 1.4 List workspaces → find user's workspace
├── 1.5 List databases in workspace → find/confirm target database
├── 1.6 Validate 5 CSV files exist and parse headers
└── 1.7 Print summary: tables to create, rows to import, issues found

STEP 2: SCHEMA CREATION (JWT auth)
├── 2.1 Create 5 tables in database 387807 (POST with just {name})
│   └── Baserow auto-creates default fields (primary "Name" text + extras)
├── 2.2 For each new table: list default fields, delete unwanted defaults
│   └── Delete auto-created fields not in our schema
├── 2.3 Create additional non-link fields for each table
│   └── Skip fields that already exist from defaults
├── 2.4 Create link_row fields (after ALL tables exist — need target table IDs)
│   ├── Org.Tags → Tag table
│   ├── Org.Location → Location table
│   ├── People.Tags → Tag table
│   ├── People.Location → Location table
│   └── People.Organization → Organization table
└── 2.5 Save state map: {tableName → tableId, fieldName → fieldId}

STEP 3: DATA IMPORT (DB token auth, batches of 200)
├── 3.1 Phase A: Import lookups → build ID maps
│   ├── Tag (51 rows → ~50 after dedup)
│   └── Location (33 rows → ~32 after dedup)
├── 3.2 Phase B: Import core tables with resolved references
│   ├── Organization (66 rows, resolves Tag + Location)
│   └── People (160 rows, resolves Tags + Location + Organization)
└── 3.3 Phase C: Import remaining tables
    └── Meeting Notes (37 rows; Person link_row added in the 2026-03-08 rebuild — see C1)

STEP 4: VERIFICATION
├── 4.1 Compare row counts: Baserow API count vs CSV count
├── 4.2 Spot-check 3 random rows per table (print to console)
├── 4.3 Verify link_row resolution on People table (check a few Tags)
├── 4.4 Print summary: tables created, rows imported, warnings
└── 4.5 Save migration report to `migration-report.json`

Re-run Safety (updated per reviewer R9, R10)

State persistence: After each phase, save state to migration-state.json:

  • Table IDs, field IDs
  • Lookup maps (name → Baserow row ID) for Tag, Location, Book-Tags, Organization
  • Phase completion status
  • Row counts per table

On re-run:

  1. Load migration-state.json if it exists
  2. Check if tables already exist (by name) in target database
  3. If table exists and has rows, skip by default (log: “Table ‘Tag’ already has 50 rows — skipping”)
  4. Reconstruct lookup maps from state file (no need to re-query API)
  5. Resume from the last incomplete phase

No “Overwrite” option — too dangerous for link_row targets (changing table IDs breaks references). User must manually delete tables in Baserow UI if they want a fresh start.

Single-Table Mode Safety (R19)

migrate --run --table People requires lookup tables to exist:

  • Validate that target tables for link_row fields (Tag, Location, Organization) exist in Baserow
  • If missing, error with: “Cannot import People: Tag table not found. Run full migration first.”
  • Alternatively: auto-import dependencies (offer as prompt)

10. Test Plan

10.1 Unit Tests (vitest)

baserow/tests/lib/csv-parser.test.ts

  • Parses simple CSV with headers
  • Filters blank rows (all empty cells)
  • Handles multiline values in quoted fields
  • Handles commas within quoted fields
  • Handles encoding issues (replacement chars)
  • Returns accurate row count

baserow/tests/lib/field-mapper.test.ts

  • Converts M/D/YYYY dates to YYYY-MM-DD
  • Converts MM/DD/YYYY dates (zero-padded) to YYYY-MM-DD
  • Returns null for empty date strings
  • Converts "true" → true and "false" → false
  • Returns null for empty boolean strings
  • Normalizes URLs: adds https:// when missing protocol
  • Normalizes URLs: leaves https:// URLs unchanged
  • Normalizes Twitter handles: @FitFounder → https://twitter.com/FitFounder
  • Returns null for empty URL strings
  • Trims whitespace from all string values (spaces, tabs, \n, \r)
  • Trims trailing newlines from BSW Hike-style values
  • Trims leading spaces from email values
  • Strips broken Coda references (#r\d+)
  • Splits comma-separated values correctly
  • Handles values with commas inside location names (e.g., “Boston, MA”)
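Two of the transforms these tests target, sketched for illustration (the signatures are assumptions; the real field-mapper.ts may differ):

```typescript
// parseDate: "M/D/YYYY" or "MM/DD/YYYY" -> "YYYY-MM-DD"; null for empty input.
function parseDate(value: string): string | null {
  const trimmed = value.trim();
  if (!trimmed) return null;
  const m = trimmed.match(/^(\d{1,2})\/(\d{1,2})\/(\d{4})$/);
  if (!m) return null;
  const [, month, day, year] = m;
  return `${year}-${month.padStart(2, "0")}-${day.padStart(2, "0")}`;
}

// parseBoolean: "true"/"false" -> boolean; null for empty strings.
function parseBoolean(value: string): boolean | null {
  const trimmed = value.trim().toLowerCase();
  if (!trimmed) return null;
  return trimmed === "true";
}

console.log(parseDate("3/7/2026")); // "2026-03-07"
console.log(parseBoolean("false")); // false
```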

baserow/tests/lib/baserow-client.test.ts (use vi.spyOn(globalThis, 'fetch') mocks — R18)

  • Constructs correct headers for DB token auth
  • Constructs correct headers for JWT auth (Authorization: JWT {access_token})
  • Handles JWT refresh via refresh_token when access_token expires
  • Falls back to re-auth if refresh_token also fails (R13)
  • Retries on 429 with exponential backoff
  • Retries on 5xx with exponential backoff
  • Does not retry on 4xx (except 429)
  • Paginates through all pages
  • Handles empty result sets
  • Redacts tokens in verbose log output (R12)
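The 429/5xx retry behavior these tests cover can be sketched like this (the attempt count and delays are illustrative defaults, not the client's actual configuration):

```typescript
// Retries on 429 and 5xx with exponential backoff; other 4xx pass through.
async function requestWithRetry(
  url: string,
  init: RequestInit,
  attempts = 3,
  baseDelayMs = 500,
): Promise<Response> {
  let last: Response | undefined;
  for (let i = 0; i < attempts; i++) {
    const res = await fetch(url, init);
    if (res.status !== 429 && res.status < 500) return res; // success or non-retryable 4xx
    last = res;
    const delay = baseDelayMs * 2 ** i; // 500ms, 1s, 2s, ...
    await new Promise((r) => setTimeout(r, delay));
  }
  return last!;
}
```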

baserow/tests/migrate/relationships.test.ts

  • Resolves single tag name to row ID
  • Resolves multiple comma-separated tags to row IDs
  • Tries whole-string match first before splitting (R1: “Boston, MA”)
  • Falls back to comma-split when whole string doesn’t match
  • Skips broken Coda references with warning
  • Handles unmatched names with warning
  • Handles empty/null values gracefully
  • Resolves compound location values (e.g., “Denver CO,Costa Rica”)
  • Deduplicates resolved IDs
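The “try whole first” strategy (R1) that these tests exercise, sketched with an assumed signature for resolveRefs():

```typescript
// Try the whole value as one name first (so "Boston, MA" resolves as-is),
// then fall back to comma-splitting; skip broken Coda refs; dedupe IDs.
function resolveRefs(
  raw: string,
  lookup: Map<string, number>, // name -> Baserow row ID
): number[] {
  const value = raw.trim();
  if (!value) return [];
  const whole = lookup.get(value);
  if (whole !== undefined) return [whole];
  const ids = new Set<number>();
  for (const part of value.split(",")) {
    const name = part.trim();
    if (!name) continue;
    if (/^#r\d+$/.test(name)) {
      console.warn(`Skipping broken Coda reference: ${name}`);
      continue;
    }
    const id = lookup.get(name);
    if (id !== undefined) ids.add(id);
    else console.warn(`Unmatched reference: ${name}`);
  }
  return [...ids];
}

const lookup = new Map([["Boston, MA", 1], ["Denver CO", 2], ["Costa Rica", 3]]);
console.log(resolveRefs("Boston, MA", lookup));           // [1] — whole match wins
console.log(resolveRefs("Denver CO,Costa Rica", lookup)); // [2, 3]
```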

baserow/tests/migrate/schema.test.ts

  • Generates correct field creation payloads for each type
  • Creates link_row fields with correct target table IDs
  • Handles the Name primary field (doesn’t recreate it)
  • Identifies and deletes unwanted default fields
  • Renames primary field when table def specifies different name
  • Skips table creation when table already exists (re-run safety)

baserow/tests/migrate/import-csv.test.ts

  • Batches rows correctly (200 per batch)
  • Handles final partial batch
  • Applies field transforms (dates, booleans, URLs)
  • Resolves link_row references during import
  • Handles dedup for lookup tables
  • Skips rows missing required field (R5)
  • Reports accurate import counts

10.2 Integration Tests (manual, against real Baserow API)

  • brow preflight passes all checks
  • brow auth login returns JWT and workspace info
  • brow auth test confirms DB token works
  • brow workspaces lists workspaces
  • brow tables <id> lists tables in database
  • Create + delete a test table via CLI
  • Import Tag table (smallest useful test) → verify in Baserow UI
  • Full migration → verify all tables + row counts

10.3 Test Fixtures

Minimal CSV files in baserow/tests/fixtures/ for unit tests:

  • tags.csv — 5 tags including one duplicate (“Philanthropy Consultant” × 2)
  • locations.csv — 5 locations including one duplicate (“Costa Rica” × 2) + comma-in-name (“Boston, MA”)
  • people.csv — 5 people with: multi-value tags, broken Coda refs (#r48), compound locations (Denver CO,Costa Rica), @handle Twitter, leading-space email
  • timelogs.csv — 5 valid rows + 2 garbage rows (one with only Logged=false, one partial)
  • bsw-hike.csv — 3 rows with trailing \n in every cell value
  • empty.csv — headers only, no data rows
  • multiline.csv — values with newlines in quoted fields


11. Implementation Checklist

Phase 0: Project Setup

  • 0.1 Initialize solanasis-scripts as git repo
  • 0.2 Create GitHub private repo dzinreach/solanasis-scripts
  • 0.3 Set up remote and initial push
  • 0.4 Create package.json (name, type:module, scripts, dependencies)
  • 0.5 Create tsconfig.json (strict, ESM, path aliases)
  • 0.6 Create vitest.config.ts
  • 0.7 Create .gitignore (node_modules, .env, dist, migration-state.json, migration-report.json)
  • 0.8 Create .env.example (template: BASEROW_DB_TOKEN, BASEROW_EMAIL, BASEROW_PASSWORD, CSV_SOURCE_DIR, BASEROW_DATABASE_ID)
  • 0.9 Create .env with actual credentials (gitignored) — NOTE: password with # must be quoted
  • 0.10 npm install dependencies (+ @rollup/rollup-win32-x64-msvc for Windows)
  • 0.11 Verify npx tsx and npx vitest work

Phase 1: Core Library (baserow/src/lib/)

  • 1.1 Create types.ts — all TypeScript interfaces
    • 1.1a BaserowConfig, AuthType, BaserowField, BaserowRow, BaserowTable
    • 1.1b PaginatedResponse<T>, BatchCreateResponse, JwtAuthResponse
    • 1.1c TableDefinition, FieldMapping, MigrationState, MigrationReport
  • 1.2 Create baserow-client.ts — core API client
    • 1.2a Dual auth (DB token + JWT)
    • 1.2b JWT refresh (refresh_token first, fall back to re-auth per R13)
    • 1.2c Simple rate limiting: configurable delay (default 100ms) + reactive 429 handling (R14)
    • 1.2e Auto-retry (3 attempts, exponential backoff on 429/5xx)
    • 1.2f Auto-pagination helper (listAllRows)
    • 1.2g user_field_names=true by default
    • 1.2h Methods: request, listRows, listAllRows, createRow, batchCreateRows, deleteRow, batchDeleteRows
  • 1.3 Write baserow-client.test.ts — 9 tests passing
  • 1.4 Create csv-parser.ts — CSV reading utility
    • 1.4a Read CSV with UTF-8 encoding
    • 1.4b Parse with csv-parse/sync
    • 1.4c Filter rows missing required field (R5)
    • 1.4d Return typed CsvParseResult with row count stats
  • 1.5 Write csv-parser.test.ts + fixtures — 8 tests passing
  • 1.6 Create field-mapper.ts — data transformation
    • 1.6a parseDate — M/D/YYYY → YYYY-MM-DD
    • 1.6b parseBoolean — “true”/“false” → boolean
    • 1.6c normalizeUrl — adds https://, converts @handle, rejects non-URL strings
    • 1.6d trimValue — handles trailing \n, leading spaces
    • 1.6e isBrokenCodaRef — detects #r\d+ patterns
    • 1.6f splitMultiValue — comma-separated splitting
  • 1.7 Write field-mapper.test.ts — 34 tests passing
  • 1.8 Run all tests → 75 green
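The auto-pagination helper (1.2f) might look like the sketch below; the { next, results } envelope matches Baserow’s paginated list responses, while the function name and error handling here are simplified assumptions:

```typescript
// Follows Baserow's `next` URL until it is null, accumulating all results.
async function listAllRows<T>(
  firstPageUrl: string,
  headers: Record<string, string>,
): Promise<T[]> {
  const all: T[] = [];
  let url: string | null = firstPageUrl;
  while (url) {
    const res = await fetch(url, { headers });
    if (!res.ok) throw new Error(`Baserow request failed: ${res.status}`);
    const page = (await res.json()) as { next: string | null; results: T[] };
    all.push(...page.results);
    url = page.next; // null on the last page
  }
  return all;
}
```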

Phase 2: CLI Framework (baserow/src/cli/)

  • 2.1 Create cli/index.ts — commander setup with all subcommands
  • 2.2 Create commands/auth.ts — login and test commands
  • 2.3 Create commands/preflight.ts — full connectivity check
  • 2.4 Create commands/tables.ts — list, create
  • 2.5 Create commands/fields.ts — list, create
  • 2.6 Create commands/rows.ts — list, create, batch-create, delete
  • 2.7 Manual smoke test: brow auth login ✓, brow auth test ✓, brow preflight

Phase 3: Migration Engine (baserow/src/migrate/)

  • 3.1 Create config/tables.ts — 5 table definitions with field mappings
    • 3.1a Each definition: tableName, csvFile, requiredField, primaryField, fields[], dedupField, phase
    • 3.1b CSV paths resolved at runtime from CSV_SOURCE_DIR env var (R17)
  • 3.2 Create migrate/schema.ts — table + field creation
    • 3.2a Create table via API + delete auto-created sample rows
    • 3.2b List auto-created default fields, delete unwanted ones
    • 3.2c Rename primary field if needed
    • 3.2d Create non-link fields (skip if already exists)
    • 3.2e Create link_row fields (after all tables created)
    • 3.2f Handle “table already exists” gracefully
  • 3.3 Write schema.test.ts — 5 tests passing
  • 3.4 Create migrate/relationships.ts — name → ID resolution
    • 3.4a resolveRefs() with “try whole first” (R1)
    • 3.4b buildLookupMap() from imported rows
    • 3.4c Handle broken Coda refs, missing names, compound values
  • 3.5 Write relationships.test.ts — 14 tests passing
  • 3.6 Create migrate/import-csv.ts — data import with batching
    • 3.6a Read CSV → transform fields → resolve refs → batch rows
    • 3.6b Batch into chunks of 200
    • 3.6c Track import counts and warnings
    • 3.6d Build lookup maps for dependent tables
  • 3.7 Write import-csv.test.ts — 5 tests passing
  • 3.8 Create migrate/index.ts — orchestrator
    • 3.8a --plan mode: dry run
    • 3.8b --run mode: execute full migration
    • 3.8c --table <name> mode: single table with dependency validation
    • 3.8d Verification step: compare row counts
    • 3.8e Re-run safety: state persistence, skip completed tables
  • 3.9 Run all tests → 75 green
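The 200-row batching from 3.6b reduces to a small chunking helper; this sketch is illustrative (Baserow caps batch row creation at 200 rows per request):

```typescript
// Split rows into chunks of at most `size` for batch-create requests.
function chunkRows<T>(rows: T[], size = 200): T[][] {
  const chunks: T[][] = [];
  for (let i = 0; i < rows.length; i += size) {
    chunks.push(rows.slice(i, i + size));
  }
  return chunks;
}

// e.g. 440 rows -> batches of 200, 200, and a final partial batch of 40.
console.log(
  chunkRows(Array.from({ length: 440 }, (_, i) => i)).map((c) => c.length),
); // [ 200, 200, 40 ]
```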

Phase 4: Documentation

  • 4.1 Write README.md — Baserow API reference guide + project structure
  • 4.2 Write SETUP.md — User setup instructions
  • 4.3 Verify permalinks in all source files

Phase 5: Integration Testing

  • 5.1 Run brow preflight → all green (11/11 checks passed)
  • 5.2 Run brow auth login → “Dmitri Sunshine (admin@solanasis.com)”, 1 workspace, 1 database
  • 5.3 Run brow migrate --plan → accurate summary (5 tables, 5 link_row relationships)
  • 5.4 (Skipped single-table test — went directly to full migration)
  • 5.5 Run brow migrate --run → full migration completed
    • Run 1: Tag(50), Location(32), Organization(66) imported. People failed on invalid LinkedIn (person name in URL field).
    • Fix: Updated normalizeUrl() to reject non-URL strings (no dots/slashes).
    • Run 2: Resumed from state. People(160), Meeting Notes(37) imported.
    • Cleaned up 2 default sample rows per table (10 total).
  • 5.6 Verify row counts: Tag=50, Location=32, Organization=66, People=160, Meeting Notes=37 (345 total)
  • 5.7 Spot-check People: Tags link_row resolves (“Creators-Hub Possibility”), Location resolves (“Boulder CO”), dates in ISO format
  • 5.8 Re-run: all tables skipped, all counts [OK]

Phase 6: Finalize

  • 6.1 Final test run: 75 tests, 6 files, all passing
  • 6.2 Commit all changes
  • 6.3 Push to GitHub
  • 6.4 Update MEMORY.md

Post-Migration Fixes (2026-03-08)

  • Rebuilt Meeting Notes table with formula primary field field('Person')
    • Deleted old table (873567), created new (873613)
    • Primary field auto-derives from Person link_row — no redundant Name text column
    • 37/37 Person links resolved
  • Created 6 missing Organizations (Herban Wellness, Antler, Questco, Polestar Gardens / Village, Rootstock Philanthropy, Earth05)
    • These were referenced by People but absent from Organization CSV
    • Linked all 6 People → Organization relationships (16/16 now resolved, was 10/16)
  • Deleted obsolete upgrade-meeting-notes.ts (superseded by table rebuild)
  • Final row counts: Tag=50, Location=32, Organization=72, People=160, Meeting Notes=37 (351 total)
  • Remaining known gaps: 2 people with broken Coda refs (unresolvable from CSV export)
    • Dominic Kalms: Tags #r48,#r49, Location #r31
    • Paul Foley: Location #r36

Issues Found During Implementation

  1. dotenv # comment parsing — Password Ejs$N4G4#4qH was truncated at #. Fix: quote the value in .env.
  2. Non-URL in URL field — People row 72 had person name “Stephen (CSM) Shepherd” in LinkedIn field. normalizeUrl turned it into https://Stephen (CSM) Shepherd which Baserow rejected. Fix: added heuristic (requires . or / to be treated as URL).
  3. Default sample rows — Baserow creates 2 sample rows per new table. Added batchDeleteRows in createTableWithFields after table creation.
  4. Windows rollup binary — @rollup/rollup-win32-x64-msvc needed explicit install with --os=win32 --cpu=x64.
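The normalizeUrl heuristic from issue 2 can be sketched as follows (the exact regexes in the real implementation may differ):

```typescript
// A value with no "." or "/" (e.g. a person's name) is not URL-like -> null.
// Twitter handles are checked first, since "@FitFounder" has neither.
function normalizeUrl(value: string): string | null {
  const v = value.trim();
  if (!v) return null;
  if (/^@[A-Za-z0-9_]+$/.test(v)) return `https://twitter.com/${v.slice(1)}`;
  if (!v.includes(".") && !v.includes("/")) return null; // rejects "Stephen (CSM) Shepherd"
  if (/^https?:\/\//i.test(v)) return v;                 // already has a protocol
  return `https://${v}`;                                 // add missing protocol
}

console.log(normalizeUrl("@FitFounder"));            // https://twitter.com/FitFounder
console.log(normalizeUrl("example.com"));            // https://example.com
console.log(normalizeUrl("Stephen (CSM) Shepherd")); // null
```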

12. Senior Reviewer Findings

Reviewed by Plan agent. 23 findings. All verified against actual CSV data.

Summary

| Severity | Count | Addressed |
|----------|-------|-----------|
| CRITICAL | 1 (R1) | Yes — “try whole first” resolution strategy |
| HIGH | 4 (R2–R5) | Yes — all integrated into plan |
| MEDIUM | 10 (R6–R14, R23) | Yes — all integrated |
| LOW | 8 (R15–R22) | Yes — all integrated |

Issue Log

| ID | Sev | Category | Issue | Resolution |
|----|-----|----------|-------|------------|
| R1 | CRITICAL | Data | Comma-split breaks “Boston, MA” location resolution | Added “try whole first” strategy to resolveRefs() in Section 7.2 |
| R2 | HIGH | Data | Twitter @FitFounder handle is not a valid URL | Added Twitter handle normalization rule in Section 6.6 |
| R3 | HIGH | Data | BSW Hike cells have trailing \n | Documented in Section 6.7; .trim() handles it |
| R4 | HIGH | API | Missing PATCH field endpoint for renames | Added to Section 4.2 endpoints table |
| R5 | HIGH | Data | 14 garbage rows in Time Logs pass “not all empty” filter | Added requiredField per-table config in Section 6.9; Time Logs 454→440 |
| R6 | MEDIUM | Data | Encoding was valid UTF-8, not Windows-1252 | Corrected Section 6.8 |
| R7 | MEDIUM | Data | Count said “5 orgs” but listed 6 | Fixed to “6 orgs” in Section 6.4 |
| R8 | MEDIUM | Data | ONE\|Boulder pipe is part of name, not delimiter | Documented in Section 6.10 |
| R9 | MEDIUM | Safety | “Overwrite” re-run mode undefined, dangerous for link_row targets | Removed Overwrite option; Skip only + state persistence |
| R10 | MEDIUM | Safety | Lookup maps lost on partial failure | Added migration-state.json checkpoint file |
| R11 | MEDIUM | Testing | Missing test cases for R1/R3/R5 edge cases | Added to test plan Sections 10.1, 10.3 |
| R12 | MEDIUM | Security | Verbose mode could log tokens/PII | Added redaction requirement to test plan |
| R13 | MEDIUM | API | JWT refresh vs re-auth conflated | Clarified: use refresh_token first, fall back to re-auth |
| R14 | MEDIUM | Architecture | Token bucket + semaphore is over-engineering | Simplified to delay + reactive 429 handling |
| R15 | LOW | DX | Inconsistent “Time-Log Cat” abbreviation | Using full name everywhere |
| R16 | LOW | Data | 3 emails have leading spaces | Covered by global .trim() in Section 6.7 |
| R17 | LOW | Architecture | Hardcoded CSV paths fragile | Added CSV_SOURCE_DIR env var |
| R18 | LOW | Testing | No mock strategy specified for API tests | Added vi.spyOn(globalThis, 'fetch') note |
| R19 | LOW | Safety | Single-table mode ignores dependency order | Added dependency validation in re-run safety |
| R20 | LOW | DX | BASEROW_API_KEY name confusing | Renamed to BASEROW_DB_TOKEN |
| R21 | LOW | Data | CRLF line endings not mentioned | Documented in Section 6.12 |
| R22 | LOW | Architecture | Monorepo-lite structure needs documentation | Noted for README.md |
| R23 | LOW | Safety | Primary field type change may not be supported | Quotes primary field stays as text |

13. Post-Migration: URL Enrichment

After the migration was completed, a URL enrichment tool was built to verify and gap-fill URL fields.

See: ENRICH-PLAN.md for the full plan, matching algorithm, and execution steps.

Summary:

  • brow enrich --plan — dry-run: match CSV rows to Baserow, report gaps
  • brow enrich --run — apply gap-fill updates (never overwrites existing data)
  • brow enrich --export — export LinkedIn URLs to CSV for Phase 2 Chrome extension enrichment
  • 25 additional unit tests (100 total across the project)