ARCHIVED 2026-03-24 — Superseded by
.claude-plans/deep-plan-baserow-migration-2026-03-24.md (in solanasis-scripts). Originally described the Coda CSV → Baserow Cloud migration. Cloud-to-self-hosted migration completed 2026-03-24.
Baserow Migration Plan — Coda CSV Export to Baserow
Project: solanasis-scripts (multi-tool repo for Solanasis operational scripts)
Subdirectory: baserow/ (Baserow CLI + migration tools)
Status: COMPLETE (migrated 2026-03-07, finalized 2026-03-08)
Last Updated: 2026-03-08
Source data: C:\Users\zasya\Documents\coda-data-export\NHGOLB87tr-DB\
Baserow API docs: https://baserow.io/docs/apis/rest-api/introduction
Table of Contents
- Specifications
- Data Analysis Findings
- Architecture & File Structure
- API Reference Summary
- Table Definitions & Field Mappings
- Reference Integrity Analysis
- Data Quality Issues & Mitigations
- Relationship Resolution Strategy
- CLI Command Spec
- Migration Execution Flow
- Test Plan
- Implementation Checklist
- Senior Reviewer Findings
- Post-Migration: URL Enrichment
1. Specifications
1.1 Goal
Migrate Dmitri’s Coda database export (CSV files from NHGOLB87tr-DB) into Baserow, preserving relationships (tags, locations, organizations) as link_row fields.
1.2 Deliverables
- README.md — Baserow API reference guide for ongoing Claude use
- brow CLI — Reusable command-line wrapper for the Baserow API
- Migration script — Automated CSV-to-Baserow migration with relationship resolution
- SETUP.md — User setup instructions for credentials
1.3 Tech Stack
- Runtime: Node.js 22 + TypeScript (matches existing solanasis-site stack)
- Execution: tsx (already installed globally, v4.21.0)
- CLI framework: commander (industry standard, lightweight)
- CSV parsing: csv-parse (streaming, handles multiline, RFC 4180 compliant)
- Env: dotenv (consistent with solanasis-site)
- HTTP: native fetch (Node 22 built-in, no deps needed)
- Testing: vitest (fast, native TS/ESM support, no Babel needed)
1.4 Non-Goals (Explicitly Skipped)
- CRM data (6sDSk92JyW-CRM) — sample data, 19 CSV files
- Financial tables (per user decision):
  - Finances-grid-ewFx_PED36-default.csv (42 rows)
  - Credit-Cards-grid-zAlgZHbtBG-default.csv (23 rows)
  - Budget-grid-9iaPZcc0AX-default.csv (4 rows)
  - SAM Paychecks-grid--JIWn8ynoW-default.csv (3 rows)
  - SAM Paychecks Totals-grid-XbZvXyhU-p-default.csv (3 rows)
  - Table-grid-IFI2utooLR-default.csv — this is Bank Balances (headers: Name, Month, Ending Balance, Notes)
- Enum lookups (tiny, static): High-Med-Low (3), Interval-Type (3), Version Num (3)
- Software-Features-grid-TZJ5vIoHFY-default.csv — different project (The Source Platform)
- Journal-grid-PID7-d1fGT-default.csv — only 1 real entry (“asd”)
- Filtered views (same data, different view): “People to Respond”, “View of Time Logs”
- Cut by user (2026-03-07): Books, Book-Tags, Time Logs, Time-Log Categories, Wishlist, Quotes, BSW 2025 Hike — not needed in Baserow
1.5 Auth Architecture
- DB token (BASEROW_DB_TOKEN): Row operations (CRUD on rows). Header: Authorization: Token {t} (renamed from BASEROW_API_KEY for clarity per R20)
- JWT (BASEROW_EMAIL + BASEROW_PASSWORD): Schema operations (workspaces, databases, tables, fields). Header: Authorization: JWT {t}
- JWT obtained via POST /api/user/token-auth/ → {access_token, refresh_token, user} (note: the token field is deprecated; use access_token)
- access_token lifetime: 10 min. Auto-refresh when token age > 9 min.
- refresh_token lifetime: 168 hours (7 days). Use POST /api/user/token-refresh/ with {refresh_token} → new {access_token}.
- 2FA handling: If the user has 2FA enabled, token-auth returns {two_factor_auth, token} instead. Must handle this case.
- Fallback: If JWT auth fails, the user can manually create tables in the Baserow UI and provide table IDs for a rows-only migration.
1.6 Target Environment (discovered 2026-03-07)
- Workspace: “Dmitri Sunshine’s workspace” (ID: 183254)
- Database: “Personal CRM” (ID: 387807)
- Existing tables (DO NOT DELETE — user will manually):
- Contacts (ID: 873314) — 2 rows, 11 fields (simple flat schema, not reusable)
- Interactions (ID: 873315) — 0 rows, 4 fields
- Decision: Create 5 new tables alongside existing ones (cut from 12)
- No 2FA on account — standard JWT auth works
- DB token confirmed working for row operations
- JWT confirmed working — user: “Dmitri Sunshine”
1.7 Conventions
- All source files include permalinks to Baserow API docs in comments
- user_field_names=true on all row endpoints (human-readable field names)
- All functions that hit the API go through the centralized BaserowClient
- No code duplication — shared utilities for date parsing, currency stripping, etc.
- ESM modules ("type": "module" in package.json)
2. Data Analysis Findings
2.1 Accurate Row Counts (blank rows filtered)
| Table | CSV File | Raw Lines | Data Rows | Columns |
|---|---|---|---|---|
| Tag | Tag-grid-r5xQIRHdcF-default.csv | 55 | 51 | Name, Notes |
| Location | Location-grid-fXQ_7w3JVA-default.csv | 37 | 33 | Name, State, Notes |
| Book-Tags | Book-Tags-grid-FZI-5fxkSD-default.csv | 4 | 3 | Name, Notes |
| Time-Log Categories | Time-Log Categories-grid-gYy4hgGKjG-default.csv | 5 | 4 | Name, Notes |
| Organization | Organization-grid-X1DpSi_91e-default.csv | 118 | 66 | Name, Tag, Location, Website, LinkedIn, Summary, Partnership-Potential, Notes, Twitter |
| People | People-grid-EszwOBhhvI-default.csv | 196 | 160 | Name, Tags, Location, Title, Organization, Phone Number, Email, LinkedIn, IG, Twitter, FB, Blog, Website, Notes, Interest Form Message, Response to Interest Form, Connected From, Referral Source, LinkedIn Initial Outreach |
| Books | Books-grid-8XNJ7myI6L-default.csv | 31 | 29 | Name, Author, Link, Book-Tags, Quick Notes, Notes |
| Meeting Notes | Meeting Notes-grid-Q8pQH8wPNz-default.csv | 313 | 37 | Name, LinkedIn, Date, Follow-up Date, Notes |
| Time Logs | Time Logs-grid-dOc1PXyLFa-default.csv | 679 | 440 | Date, Time Start, Time-End, Notes, Categories, Duration, Logged |
| Wishlist | Wishlist-grid-1-AdR1Cd0Y-default.csv | 20 | 20 | Name, Date Added, Got It!, Amount, Possibilites [sic], Why, Notes |
| Quotes | Quotes-grid-6tMhOGUNb4-default.csv | 3 | 1 | Name, Author, Notes |
| BSW 2025 Hike | BSW 2025 Boulder Ventures Hike-grid-6_gjtLV5s0-default.csv | 50 | 10 | Column 1 (Name), Column 2 (First), Column 3 (Last), Column 4 (Email) |
Total data rows migrated: 351 (5 tables after user cuts; 345 initial + 6 orgs added post-migration to fix missing references)
2.2 Date Format
All dates across all tables use M/D/YYYY format (e.g., 10/21/2024, 1/1/2025).
Must convert to ISO YYYY-MM-DD for Baserow date fields.
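A minimal sketch of this conversion (the function name is illustrative, not the actual field-mapper.ts API):

```typescript
// Convert Coda's M/D/YYYY dates (e.g. "10/21/2024", "1/1/2025") to the
// ISO YYYY-MM-DD strings Baserow date fields expect. Blank input maps to
// null so the Baserow field stays empty rather than receiving a bad string.
function parseDate(value: string): string | null {
  const trimmed = value.trim();
  if (!trimmed) return null;
  const match = /^(\d{1,2})\/(\d{1,2})\/(\d{4})$/.exec(trimmed);
  if (!match) throw new Error(`Unrecognized date: "${value}"`);
  const [, month, day, year] = match;
  return `${year}-${month.padStart(2, "0")}-${day.padStart(2, "0")}`;
}
```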
2.3 Boolean Values
"true" / "false" string literals in:
- Time Logs → Logged column
- Wishlist → Got It! column
2.4 Currency Values
Wishlist Amount column has mixed formats:
- Standard:
$1,000.00,$200.00 - Shorthand:
$1K,$2K - Decision: Store as text field (not number) to preserve original formatting
2.5 Phone Number Formats
Mixed formats in People Phone Number:
- Raw digits:
7865535271 - International:
+1 (737) 294-3882,+52 55 7887 2380 - US:
(604) 417-6074,702-941-1004 - Decision: Store as text field (Baserow
phone_numbertype may reject some formats)
2.6 URL Variations
LinkedIn URLs sometimes missing https:// prefix:
- linkedin.com/in/yevmuchnik (missing protocol)
- https://www.linkedin.com/in/christopherpina/ (correct)
- Decision: Store as url field; normalize by prepending https:// if the URL has no protocol
2.7 CSV Encoding
Organization CSV contains Unicode curly quotes (valid UTF-8). No special encoding handling needed.
Read all CSVs with fs.readFileSync(path, 'utf-8').
2.8 BSW Hike Table Format
Generic column headers: Column 1, Column 2, Column 3, Column 4.
Actual semantics: Full Name, First Name, Last Name, Email.
Rename during migration.
3. Architecture & File Structure
solanasis-scripts/
├── package.json
├── tsconfig.json
├── .env # BASEROW_DB_TOKEN, BASEROW_EMAIL, BASEROW_PASSWORD, CSV_SOURCE_DIR, BASEROW_DATABASE_ID
├── .env.example # Template (no secrets)
├── .gitignore
├── README.md # Baserow API reference guide (Deliverable 1)
├── SETUP.md # User setup instructions (Deliverable 4)
├── BASEROW-MIGRATION-PLAN.md # This file
├── vitest.config.ts
├── baserow/
│ ├── src/
│ │ ├── lib/
│ │ │ ├── baserow-client.ts # Core API client (dual auth, rate limiting, pagination, retry)
│ │ │ ├── types.ts # TypeScript interfaces for Baserow API
│ │ │ ├── csv-parser.ts # CSV reading + blank row filtering + encoding handling
│ │ │ └── field-mapper.ts # Data transforms: dates, booleans, URLs, currency
│ │ ├── cli/
│ │ │ ├── index.ts # CLI entry point (commander) — "brow" command
│ │ │ └── commands/
│ │ │ ├── auth.ts # login, test-token
│ │ │ ├── tables.ts # list, create, delete tables
│ │ │ ├── fields.ts # list, create fields
│ │ │ ├── rows.ts # list, create, batch-create, delete rows
│ │ │ └── preflight.ts # full connectivity + permission checks
│ │ ├── migrate/
│ │ │ ├── index.ts # Migration orchestrator (--plan / --run)
│ │ │ ├── schema.ts # Table + field creation logic
│ │ │ ├── import-csv.ts # CSV → Baserow row insertion (batched, max 200)
│ │ │ └── relationships.ts # link_row resolution (name → row ID)
│ │ └── config/
│ │ └── tables.ts # All table definitions + field mappings + CSV paths
│ └── tests/
│ ├── lib/
│ │ ├── baserow-client.test.ts
│ │ ├── csv-parser.test.ts
│ │ └── field-mapper.test.ts
│ ├── migrate/
│ │ ├── schema.test.ts
│ │ ├── import-csv.test.ts
│ │ └── relationships.test.ts
│ └── fixtures/
│ ├── tags.csv # Minimal test CSVs
│ ├── people.csv
│ └── locations.csv
Why baserow/ subdirectory?
The repo is solanasis-scripts (multi-tool). The Baserow tools live in baserow/ to keep the repo organized for future tools (e.g., brevo/, cloudflare/).
Entry Point
npx tsx baserow/src/cli/index.ts <command> [options]
# OR via npm script:
npm run brow -- <command> [options]

4. API Reference Summary
Full reference in README.md (Deliverable 1). Key points here.
4.1 Authentication
| Method | Header | Scope | Endpoint |
|---|---|---|---|
| DB Token | Authorization: Token {t} | Row CRUD on tables token has access to | Row endpoints only |
| JWT | Authorization: JWT {t} | Full access: workspaces, databases, tables, fields, rows | All endpoints |
JWT Lifecycle (verified against OpenAPI schema https://api.baserow.io/api/schema.json):
POST https://api.baserow.io/api/user/token-auth/
Body: {"email": "...", "password": "..."}
Response (no 2FA): {
"access_token": "eyJ...", // valid 10 min — use in Authorization: JWT {access_token}
"refresh_token": "eyJ...", // valid 168 hours (7 days)
"token": "eyJ...", // DEPRECATED — same as access_token
"user": {"first_name": "...", "username": "...", "language": "..."}
}
Response (2FA enabled): {
"two_factor_auth": "totp",
"token": "temp_token_for_2fa_verify"
}
- Refresh: POST /api/user/token-refresh/ with {"refresh_token": "..."} → same response shape
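The lifecycle above can be sketched as a small token manager. This is an illustrative sketch, not the actual BaserowClient implementation; the refresh HTTP call is injected so the policy (reuse under 9 minutes, refresh after) is testable without the network:

```typescript
// Illustrative JWT state: access_token is valid 10 min, so we refresh
// proactively once it is older than 9 min (per section 1.5).
interface JwtState {
  accessToken: string;
  refreshToken: string;
  obtainedAt: number; // epoch ms when accessToken was issued
}

const NINE_MINUTES = 9 * 60 * 1000;

async function getAccessToken(
  state: JwtState,
  // In real code this would POST /api/user/token-refresh/ with {refresh_token}
  refresh: (refreshToken: string) => Promise<{ access_token: string }>,
  now: () => number = Date.now,
): Promise<string> {
  if (now() - state.obtainedAt <= NINE_MINUTES) return state.accessToken;
  const { access_token } = await refresh(state.refreshToken);
  state.accessToken = access_token;
  state.obtainedAt = now();
  return access_token;
}
```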
4.2 Key Endpoints (verified against OpenAPI schema — 267 total paths)
| Operation | Method | Path | Auth | Notes |
|---|---|---|---|---|
| Auth (JWT) | POST | /api/user/token-auth/ | None | Body: {email, password} |
| Refresh JWT | POST | /api/user/token-refresh/ | None | Body: {refresh_token} |
| List workspaces | GET | /api/workspaces/ | JWT | |
| List apps/databases | GET | /api/applications/workspace/{workspace_id}/ | JWT | Returns all apps in workspace |
| List tables | GET | /api/database/tables/database/{database_id}/ | JWT | |
| Create table | POST | /api/database/tables/database/{database_id}/ | JWT | Body: {name} (required) |
| Get table | GET | /api/database/tables/{table_id}/ | JWT | |
| Delete table | DELETE | /api/database/tables/{table_id}/ | JWT | |
| List fields | GET | /api/database/fields/table/{table_id}/ | JWT | |
| Create field | POST | /api/database/fields/table/{table_id}/ | JWT | Body: {name, type, ...type_opts} |
| Update field | PATCH | /api/database/fields/{field_id}/ | JWT | Rename or change type |
| Delete field | DELETE | /api/database/fields/{field_id}/ | JWT | |
| List rows | GET | /api/database/rows/table/{table_id}/?user_field_names=true | Token | Paginated: ?size=200&page=1 |
| Create row | POST | /api/database/rows/table/{table_id}/?user_field_names=true | Token | |
| Batch create | POST | /api/database/rows/table/{table_id}/batch/?user_field_names=true | Token | Body: {items: [...]} max 200 |
| Delete row | DELETE | /api/database/rows/table/{table_id}/{row_id}/ | Token | |
| Batch delete | POST | /api/database/rows/table/{table_id}/batch-delete/ | Token | Body: {items: [id1, id2]} |
Table creation note: When creating a table with just {name}, Baserow auto-creates it with a primary “Name” text field and 3 default fields (Notes, Active, etc.). We need to delete the extra default fields after creation, or use the data + first_row_header approach to control initial fields.
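The "delete the extra default fields" decision can be sketched as a pure selection over the fields returned by the list-fields endpoint. The helper name and minimal field shape below are illustrative, not the actual schema.ts API:

```typescript
// Minimal slice of a Baserow field object (the real response has more keys).
interface BaserowField {
  id: number;
  name: string;
  primary: boolean;
}

// Given the auto-created fields of a fresh table and the field names our
// schema wants, pick which defaults to DELETE. The primary field is never
// deleted (Baserow does not allow it); it gets renamed/retyped instead.
function defaultFieldsToDelete(
  existing: BaserowField[],
  wantedNames: string[],
): BaserowField[] {
  const wanted = new Set(wantedNames);
  return existing.filter((f) => !f.primary && !wanted.has(f.name));
}
```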
4.3 Rate Limits
- 20 requests/second (per token)
- 10 concurrent requests
- Response: 429 Too Many Requests with Retry-After header
- Strategy (simplified per R14): Configurable delay between requests (default 100ms) + reactive exponential backoff on 429. For ~854 rows in sequential batches, the request rate naturally stays well under the limits.
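The backoff policy can be sketched as a pure delay calculation (helper name and exact delay curve are illustrative choices, not verified BaserowClient internals). A 429 response's Retry-After value, when present, wins over the computed exponential delay:

```typescript
// Delay (in ms) before retry attempt `attempt` (0-based). When the server
// sends Retry-After (seconds), honor it; otherwise back off exponentially
// from 500ms, capped at 30s.
function backoffMs(attempt: number, retryAfterSeconds?: number): number {
  if (retryAfterSeconds !== undefined) return retryAfterSeconds * 1000;
  return Math.min(500 * 2 ** attempt, 30_000);
}
```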
4.4 Pagination
- Query: ?size=200&page=1 (max 200 per page)
- Response: {count, next, previous, results: [...]}
- Auto-paginate: follow next until it is null
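The auto-pagination loop can be sketched as follows; the page-fetching call is injected so the loop is testable offline (in real code it would wrap fetch with the auth header and user_field_names=true):

```typescript
// Shape of a paginated Baserow list response.
interface Page<T> {
  count: number;
  next: string | null;
  previous: string | null;
  results: T[];
}

// Accumulate every page by following `next` until it is null.
async function listAll<T>(
  firstUrl: string,
  fetchJson: (url: string) => Promise<Page<T>>,
): Promise<T[]> {
  const all: T[] = [];
  let url: string | null = firstUrl;
  while (url) {
    const page: Page<T> = await fetchJson(url);
    all.push(...page.results);
    url = page.next;
  }
  return all;
}
```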
4.5 Batch Operations (verified: maxItems: 200, minItems: 1 in OpenAPI schema)
- Batch create: POST .../batch/?user_field_names=true with {"items": [{...}, ...]} — max 200 rows
- Batch update: PATCH .../batch/?user_field_names=true with {"items": [{"id": 1, ...}, ...]}
- Batch delete: POST .../batch-delete/ with {"items": [1, 2, 3]}
4.6 Field Types Used
| Baserow Type | Coda Source | Creation Payload | Value Format |
|---|---|---|---|
| text | Text fields | {name, type: "text"} | "string" |
| long_text | Notes, multiline | {name, type: "long_text"} | "string\nwith\nnewlines" |
| url | URLs | {name, type: "url"} | "https://..." |
| email | | {name, type: "email"} | "user@example.com" |
| date | Dates | {name, type: "date", date_format: "ISO", date_include_time: false} | "2024-10-21" |
| boolean | Checkboxes | {name, type: "boolean"} | true / false |
| link_row | References | {name, type: "link_row", link_row_table_id: N} | [1, 5, 12] (row IDs) |
4.7 link_row Details
- Created on the source table: {name: "Tags", type: "link_row", link_row_table_id: <tag_table_id>}
- Automatically creates a reverse link field on the target table
- Values are arrays of Baserow row IDs: [1, 5, 12]
- When reading with user_field_names=true: returns [{id: 1, value: "Boulder"}, ...]
- When writing: send [1, 5, 12] (just IDs)
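The read-shape to write-shape asymmetry is easy to trip over; a tiny illustrative helper (not the actual relationships.ts API) makes the conversion explicit:

```typescript
// What a link_row value looks like when READ with user_field_names=true.
interface LinkRowValue {
  id: number;
  value: string;
}

// Writes want bare row-ID arrays, so strip the display values.
function toWriteShape(links: LinkRowValue[]): number[] {
  return links.map((l) => l.id);
}
```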
5. Table Definitions & Field Mappings
Phase A — Lookup Tables (no dependencies)
A1: Tag
| CSV Column | Baserow Field | Type | Notes |
|---|---|---|---|
| Name | Name | text | Primary field (auto-created by Baserow) |
| Notes | Notes | long_text |
- Dedup: “Philanthropy Consultant” appears twice → import only once
- CSV: Tag-grid-r5xQIRHdcF-default.csv
- Rows: 51 → ~50 after dedup
A2: Location
| CSV Column | Baserow Field | Type | Notes |
|---|---|---|---|
| Name | Name | text | Primary field |
| State | State | text | |
| Notes | Notes | long_text |
- Dedup: “Costa Rica” appears twice → import only once
- CSV: Location-grid-fXQ_7w3JVA-default.csv
- Rows: 33 → ~32 after dedup
A3: Book-Tags
| CSV Column | Baserow Field | Type | Notes |
|---|---|---|---|
| Name | Name | text | Primary field |
| Notes | Notes | long_text |
- CSV: Book-Tags-grid-FZI-5fxkSD-default.csv
- Rows: 3
A4: Time-Log Categories
| CSV Column | Baserow Field | Type | Notes |
|---|---|---|---|
| Name | Name | text | Primary field |
| Notes | Notes | long_text |
- Values: ONE|Boulder, Creators-Hub, SAM, OOS
- CSV: Time-Log Categories-grid-gYy4hgGKjG-default.csv
- Rows: 4
Phase B — Core Tables (reference lookups)
B1: Organization
| CSV Column | Baserow Field | Type | Notes |
|---|---|---|---|
| Name | Name | text | Primary field |
| Tag | Tags | link_row → Tag | Single tag per org in CSV |
| Location | Location | link_row → Location | Single location per org |
| Website | Website | url | |
| url | |||
| Summary | Summary | long_text | |
| Partnership-Potential | Partnership Potential | long_text | |
| Notes | Notes | long_text | Multiline, may contain encoding issues |
| Twitter | Twitter | url | |
- CSV: Organization-grid-X1DpSi_91e-default.csv
- Rows: 72 (66 from CSV + 6 added post-migration: Herban Wellness, Polestar Gardens / Village, Antler, Questco, Rootstock Philanthropy, Earth05)
- Encoding: Contains Unicode curly quotes (valid UTF-8, no special handling needed)
B2: People
| CSV Column | Baserow Field | Type | Notes |
|---|---|---|---|
| Name | Name | text | Primary field |
| Tags | Tags | link_row → Tag | Comma-separated multi-value |
| Location | Location | link_row → Location | May be comma-separated or contain #r refs |
| Title | Title | text | |
| Organization | Organization | link_row → Organization | Single value |
| Phone Number | Phone Number | text | Mixed formats, not phone_number type |
| Email | Email | email | |
| LinkedIn | LinkedIn | url | May need https:// prefix |
| IG | IG | url | |
| Twitter | Twitter | url | |
| FB | FB | url | |
| Blog | Blog | url | |
| Website | Website | url | |
| Notes | Notes | long_text | |
| Interest Form Message | Interest Form Message | long_text | |
| Response to Interest Form | Response to Interest Form | long_text | |
| Connected From | Connected From | text | Free-text, 2 unique values |
| Referral Source | Referral Source | text | Free-text, 9 unique values |
| LinkedIn Initial Outreach | LinkedIn Initial Outreach | date | M/D/YYYY format |
- CSV: People-grid-EszwOBhhvI-default.csv
- Rows: 160
- Complex references: Tags (multi), Location (multi with broken refs), Organization (single)
B3: Books
| CSV Column | Baserow Field | Type | Notes |
|---|---|---|---|
| Name | Name | text | Primary field |
| Author | Author | text | Some have leading spaces |
| Link | Link | url | Amazon/Scribd URLs |
| Book-Tags | Book Tags | link_row → Book-Tags | Comma-separated |
| Quick Notes | Quick Notes | long_text | |
| Notes | Notes | long_text |
- CSV: Books-grid-8XNJ7myI6L-default.csv
- Rows: 29
Phase C — Dependent Tables
C1: Meeting Notes
| CSV Column | Baserow Field | Type | Notes |
|---|---|---|---|
| — | Name | formula: field('Person') | Primary field — auto-derives from Person link |
| Name | Person | link_row → People | 37/37 names matched. Handles Unicode NFC normalization for ñ etc. |
| url | |||
| Date | Date | date | M/D/YYYY → YYYY-MM-DD |
| Follow-up Date | Follow-up Date | date | M/D/YYYY → YYYY-MM-DD |
| Notes | Notes | long_text | Multiline |
- CSV: Meeting Notes-grid-Q8pQH8wPNz-default.csv
- Rows: 37
- Table ID: 873613 (rebuilt 2026-03-08; original 873567 deleted)
- Primary field: Formula field('Person') — no redundant Name text column
- link_row relationships: 6 total across all tables
- Org.Tags → Tag, Org.Location → Location
- People.Tags → Tag, People.Location → Location, People.Organization → Organization
- Meeting Notes.Person → People
C2: Time Logs
| CSV Column | Baserow Field | Type | Notes |
|---|---|---|---|
| Date | Date | date | M/D/YYYY → YYYY-MM-DD |
| Time Start | Time Start | text | "12:00 PM" format — text, not time type |
| Time-End | Time End | text | "12:45 PM" format |
| Notes | Notes | long_text | |
| Categories | Categories | text | Pipe-separated category names (NOT link_row for simplicity) |
| Duration | Duration | text | "45 mins", "2 hrs 15 mins" format |
| Logged | Logged | boolean | "true"/"false" → true/false |
- CSV: Time Logs-grid-dOc1PXyLFa-default.csv
- Rows: 440 (454 raw, 14 garbage rows filtered by requiring the Date field)
- Decision: Categories stored as text (not link_row) since there are only 3 values and parsing pipe-separated into link_row adds complexity for minimal benefit.
Phase D — Misc Tables
D1: Wishlist
| CSV Column | Baserow Field | Type | Notes |
|---|---|---|---|
| Name | Name | text | Primary field |
| Date Added | Date Added | date | M/D/YYYY → YYYY-MM-DD |
| Got It! | Got It | boolean | "true"/"false" |
| Amount | Amount | text | Mixed formats ($1,000.00, $1K) — text, not number |
| Possibilites | Possibilities | text | Fix typo in field name |
| Why | Why | long_text | |
| Notes | Notes | long_text |
- CSV: Wishlist-grid-1-AdR1Cd0Y-default.csv
- Rows: 20
D2: Quotes
| CSV Column | Baserow Field | Type | Notes |
|---|---|---|---|
| Name | Quote | text | Rename primary field from “Name”. Keep as text type (R23: changing primary field type may not be supported) |
| Author | Author | text | |
| Notes | Notes | long_text |
- CSV: Quotes-grid-6tMhOGUNb4-default.csv
- Rows: 1
D3: BSW 2025 Hike
| CSV Column | Baserow Field | Type | Notes |
|---|---|---|---|
| Column 1 | Full Name | text | Primary field, rename from generic header |
| Column 2 | First Name | text | |
| Column 3 | Last Name | text | |
| Column 4 | Email | email | |
- CSV: BSW 2025 Boulder Ventures Hike-grid-6_gjtLV5s0-default.csv
- Rows: 10
5B. Reference Integrity Analysis (verified 2026-03-07)
Tags
- Tag table: 51 entries (50 unique after dedup of “Philanthropy Consultant”)
- People → Tags: 46 unique values referenced. All match Tag table EXCEPT:
  - #r48, #r49, #r54 — broken Coda refs → skip
- Org → Tag: 10 unique values. All match Tag table ✓
Locations
- Location table: 33 entries (32 unique after dedup of “Costa Rica”)
- People → Location: 27 unique values. All resolve after splitting EXCEPT:
  - #r31, #r36 — broken Coda refs → skip
  - Denver CO,Costa Rica → split into 2, both resolve ✓
  - Denver,#r28 → split; Denver resolves, #r28 skipped ✓
- Org → Location: 14 unique values. All match Location table ✓
Organizations
- Org table: 72 entries (66 from CSV + 6 added post-migration)
- People → Organization: 16 unique values. All 16 now resolve ✓
- 6 orgs were missing from the original CSV and created post-migration:
  - Herban Wellness (person: Katya Difani)
  - Polestar Gardens / Village (person: Terry Curran)
  - Antler (person: Rio Hodges)
  - Questco (person: Adrienne Milligan Majcina, SHRM-CP)
  - Rootstock Philanthropy (person: Brad Smith, M.Ed.)
  - Earth05 (person: Maria Dahrieh)
Book-Tags
- Book-Tags table: 3 entries (Biz, Top 10, Social)
- Books → Book-Tags: 2 values referenced (Biz, Top 10). All match ✓
Time-Log Categories
- Categories table: 4 entries (ONE|Boulder, Creators-Hub, SAM, OOS)
- Time Logs → Categories: 3 values used (ONE|Boulder, Creators-Hub, SAM). All match ✓
- OOS exists in table but not referenced by any Time Log
6. Data Quality Issues & Mitigations
6.1 Broken Coda References
People CSV contains internal Coda row references that didn’t export properly:
- Tags: #r48, #r49, #r54 → Strip (log warning, skip)
- Location: #r31, #r36 → Strip
- Compound: Denver,#r28 → Split on comma, keep Denver, strip #r28
Detection rule: Any value starting with #r followed by digits is a broken Coda reference.
Regex: /^#r\d+$/
6.2 Duplicate Names in Lookup Tables
- Tag: “Philanthropy Consultant” × 2 → Keep first, skip second
- Location: “Costa Rica” × 2 → Keep first, skip second
Strategy: Build Map<string, number> during import. On duplicate name, use existing row ID.
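A sketch of that strategy (function name and the injected createRow call are illustrative, not the actual import-csv.ts API):

```typescript
// Import lookup-table names, skipping blanks and duplicates. The first
// occurrence of a name wins; later duplicates reuse the existing row ID
// instead of creating a second Baserow row.
async function importLookup(
  names: string[],
  createRow: (name: string) => Promise<number>, // returns the new Baserow row ID
): Promise<Map<string, number>> {
  const map = new Map<string, number>();
  for (const raw of names) {
    const name = raw.trim();
    if (!name || map.has(name)) continue;
    map.set(name, await createRow(name));
  }
  return map;
}
```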
6.3 Compound Location Values
People reference multiple locations in one field:
- Denver CO,Costa Rica → Split, resolve both, create multi-value link_row [id_denver, id_costarica]
- Denver,#r28 → Split, resolve Denver (matches Location table), strip #r28
Edge case: Denver vs Denver CO — the Location table contains both Denver and Denver CO as separate entries, so a bare Denver value maps to its own Denver row.
6.4 Missing Organization References (corrected per reviewer R7)
6 orgs referenced by People that don’t exist in Org table:
- Herban Wellness (Katya Difani), Polestar Gardens / Village (Terry Curran)
- Antler (Rio Hodges), Questco (Adrienne Milligan Majcina, SHRM-CP)
- Rootstock Philanthropy (Brad Smith, M.Ed.), Earth05 (Maria Dahrieh)
Strategy: Log warning, skip link_row for these 6 rows.
6.5 URL Normalization
LinkedIn URLs sometimes missing protocol:
- linkedin.com/in/yevmuchnik → https://linkedin.com/in/yevmuchnik
Rule: If URL doesn’t start with http:// or https://, prepend https://.
6.6 Twitter Handle Normalization (from reviewer R2)
1 out of 12 People Twitter values is a handle, not URL:
- @FitFounder (Dan Go) — all 14 other Twitter values (People + Org) are full URLs
Rule: If value starts with @, convert to https://twitter.com/{handle} (strip @).
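The rules from 6.5 and 6.6 can be sketched in one helper. This combines both for brevity (function name illustrative); in practice the @-handle rule would only apply to Twitter columns:

```typescript
// Normalize a URL-ish value per sections 6.5/6.6: convert @handles to
// twitter.com URLs, prepend https:// when the protocol is missing, and
// map empty input to null so url fields stay blank.
function normalizeUrl(value: string): string | null {
  const v = value.trim();
  if (!v) return null;
  if (v.startsWith("@")) return `https://twitter.com/${v.slice(1)}`;
  if (/^https?:\/\//.test(v)) return v;
  return `https://${v}`;
}
```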
6.7 Whitespace Trimming (from reviewer R3, R16)
Apply .trim() to ALL values before API submission:
- Leading spaces in Author: " David Ehrlichman"
- Leading spaces in 3 emails: " rio@antler.co", " adriana@agamistudios.com", " reed@regenesisgroup.com"
- Trailing \n in all BSW Hike cells: "Allie Clark\n" → "Allie Clark"
- JS .trim() strips \n, \r, \t, and spaces — handles all cases
6.8 CSV Encoding (corrected per reviewer R6)
Organization CSV uses valid UTF-8 with Unicode curly quotes (U+201C/U+201D).
Not Windows-1252. Standard fs.readFileSync(path, 'utf-8') works perfectly.
6.9 Partial/Garbage Rows (from reviewer R5)
Time Logs has 14 trailing garbage rows (only Logged=false). One partial row has only Time Start=6:30 PM.
Rule: Each table config specifies a requiredField. Rows missing that field are skipped.
Corrected count: Time Logs 454→440.
| Table | Required Field | Before | After |
|---|---|---|---|
| Time Logs | Date | 454 | 440 |
| All others | Name/Column 1 | same | same |
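The requiredField rule reduces to a one-line filter; a sketch (names illustrative, not the actual csv-parser.ts API):

```typescript
// A parsed CSV row keyed by header name.
type CsvRow = Record<string, string>;

// Drop rows whose required column is missing or blank — e.g. Time Logs
// rows without a Date are the trailing garbage rows described above.
function filterRows(rows: CsvRow[], requiredField: string): CsvRow[] {
  return rows.filter((row) => (row[requiredField] ?? "").trim() !== "");
}
```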
6.10 Categories Not Pipe-Separated (from reviewer R8)
The | in ONE|Boulder is part of the name, NOT a delimiter.
Each Time Log row has exactly one category. No splitting needed.
6.11 CRLF Line Endings (from reviewer R21)
CSVs use Windows \r\n. csv-parse handles natively. Any manual splitting must use /\r?\n/.
7. Relationship Resolution Strategy
7.1 Resolution Order
Must import lookup tables first to build ID maps:
1. Import Tag → tagMap: Map<tagName, baserowRowId>
2. Import Location → locationMap: Map<locationName, baserowRowId>
3. Import Organization → orgMap: Map<orgName, baserowRowId> (resolves: Tag, Location)
4. Import People (resolves: Tag, Location, Organization)
5. Import Meeting Notes (no link_row)
7.2 Resolution Algorithm (updated per reviewer R1 — “try whole first”)
Locations like “Boston, MA” contain commas. Naive comma-splitting would break them. Strategy: Try exact match on whole string first. Only split if no whole match.
function resolveRefs(
csvValue: string, // e.g., "Boston, MA" or "Boulder,Denver,#r48"
lookupMap: Map<string, number>,
separator: string = ','
): { ids: number[]; warnings: string[] } {
const warnings: string[] = [];
const ids: number[] = [];
const trimmed = csvValue.trim();
if (!trimmed) return { ids, warnings };
// 1. Try whole value first (handles "Boston, MA", "Arlington, VA", etc.)
const wholeMatch = lookupMap.get(trimmed);
if (wholeMatch !== undefined) {
return { ids: [wholeMatch], warnings };
}
// 2. If no whole match, split on separator and resolve each part
for (const raw of trimmed.split(separator)) {
const name = raw.trim();
if (!name) continue;
if (/^#r\d+$/.test(name)) {
warnings.push(`Skipped broken Coda ref: ${name}`);
continue;
}
const id = lookupMap.get(name);
if (id !== undefined) {
if (!ids.includes(id)) ids.push(id); // deduplicate
} else {
warnings.push(`No match for "${name}" in lookup table`);
}
}
return { ids, warnings };
}

Affected cases verified:
- "Boston, MA" → whole match ✓ (2 People rows: Magenta Ceiba, Mel Robbins)
- "Denver CO,Costa Rica" → no whole match → split → both resolve ✓
- "Denver,#r28" → no whole match → split → Denver resolves, #r28 skipped ✓
- "Super-Connector,Events-Producer" → no whole match → split → both resolve ✓
7.3 link_row Field Creation Timing
- Create all tables first (to get table IDs)
- Then create link_row fields (which need target table IDs)
- Then import data
Order:
CREATE TABLES: Tag, Location, Organization, People, Meeting Notes
CREATE LINK_ROW FIELDS:
- Org.Tags → Tag table
- Org.Location → Location table
- People.Tags → Tag table
- People.Location → Location table
- People.Organization → Organization table
IMPORT DATA (in dependency order: Tag → Location → Organization → People → Meeting Notes)
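Once all five tables exist, the link_row creation payloads can be derived from a table-ID map. A sketch (spec shape and helper name are illustrative, not the actual schema.ts API):

```typescript
// One planned link_row field: which table gets it and which table it targets.
interface LinkRowFieldSpec {
  sourceTable: string;
  fieldName: string;
  targetTable: string;
}

// Build the POST /api/database/fields/table/{tableId}/ payloads. Throws if
// a spec names a table we never created — a guard for the ordering rule
// that all tables must exist before link_row fields are created.
function linkRowPayloads(
  specs: LinkRowFieldSpec[],
  tableIds: Map<string, number>,
): { tableId: number; body: { name: string; type: "link_row"; link_row_table_id: number } }[] {
  return specs.map((s) => {
    const tableId = tableIds.get(s.sourceTable);
    const targetId = tableIds.get(s.targetTable);
    if (tableId === undefined || targetId === undefined) {
      throw new Error(`Unknown table in spec: ${s.sourceTable} → ${s.targetTable}`);
    }
    return {
      tableId,
      body: { name: s.fieldName, type: "link_row", link_row_table_id: targetId },
    };
  });
}
```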
8. CLI Command Spec
Entry point
npx tsx baserow/src/cli/index.ts <command> [options]

Commands
| Command | Auth | Description |
|---|---|---|
| preflight | Both | Full connectivity + permission check |
| auth login | JWT | Obtain JWT, display workspace/database info |
| auth test | DB Token | Verify DB token works |
| workspaces | JWT | List workspaces |
| tables <db_id> | JWT | List tables in database |
| table-create <db_id> <name> | JWT | Create empty table |
| fields <table_id> | Either | List fields in table |
| field-create <table_id> <name> <type> | JWT | Create field with type |
| rows <table_id> [--limit N] | DB Token | List rows (paginated) |
| row-create <table_id> --data '{}' | DB Token | Create single row |
| batch-create <table_id> --file data.json | DB Token | Batch create (max 200) |
| row-delete <table_id> <row_id> | DB Token | Delete single row |
| migrate --plan | Both | Dry-run: show what would be created/imported |
| migrate --run | Both | Execute full migration |
| migrate --run --table <name> | Both | Migrate single table |
Global Options
- --env <path> — Path to .env file (default: .env)
- --verbose — Show detailed API calls
9. Migration Execution Flow
STEP 1: PREFLIGHT
├── 1.1 Load .env → verify BASEROW_DB_TOKEN, BASEROW_EMAIL, BASEROW_PASSWORD, CSV_SOURCE_DIR, BASEROW_DATABASE_ID
├── 1.2 Test DB token → try listing rows on any known table (or handle 404)
├── 1.3 Obtain JWT → POST /api/user/token-auth/
├── 1.4 List workspaces → find user's workspace
├── 1.5 List databases in workspace → find/confirm target database
├── 1.6 Validate 5 CSV files exist and parse headers
└── 1.7 Print summary: tables to create, rows to import, issues found
STEP 2: SCHEMA CREATION (JWT auth)
├── 2.1 Create 5 tables in database 387807 (POST with just {name})
│ └── Baserow auto-creates default fields (primary "Name" text + extras)
├── 2.2 For each new table: list default fields, delete unwanted defaults
│ └── Delete auto-created fields not in our schema
├── 2.3 Create additional non-link fields for each table
│ └── Skip fields that already exist from defaults
├── 2.4 Create link_row fields (after ALL tables exist — need target table IDs)
│ ├── Org.Tags → Tag table
│ ├── Org.Location → Location table
│ ├── People.Tags → Tag table
│ ├── People.Location → Location table
│ └── People.Organization → Organization table
└── 2.5 Save state map: {tableName → tableId, fieldName → fieldId}
STEP 3: DATA IMPORT (DB token auth, batches of 200)
├── 3.1 Phase A: Import lookups → build ID maps
│ ├── Tag (51 rows → ~50 after dedup)
│ └── Location (33 rows → ~32 after dedup)
├── 3.2 Phase B: Import core tables with resolved references
│ ├── Organization (66 rows, resolves Tag + Location)
│ └── People (160 rows, resolves Tags + Location + Organization)
└── 3.3 Phase C: Import remaining tables
└── Meeting Notes (37 rows, no link_row)
STEP 4: VERIFICATION
├── 4.1 Compare row counts: Baserow API count vs CSV count
├── 4.2 Spot-check 3 random rows per table (print to console)
├── 4.3 Verify link_row resolution on People table (check a few Tags)
├── 4.4 Print summary: tables created, rows imported, warnings
└── 4.5 Save migration report to `migration-report.json`
Re-run Safety (updated per reviewer R9, R10)
State persistence: After each phase, save state to migration-state.json:
- Table IDs, field IDs
- Lookup maps (name → Baserow row ID) for Tag, Location, Book-Tags, Organization
- Phase completion status
- Row counts per table
On re-run:
- Load migration-state.json if it exists
- Check if tables already exist (by name) in the target database
- If a table exists and has rows, skip by default (log: “Table ‘Tag’ already has 50 rows — skipping”)
- Reconstruct lookup maps from state file (no need to re-query API)
- Resume from the last incomplete phase
No “Overwrite” option — too dangerous for link_row targets (changing table IDs breaks references). User must manually delete tables in Baserow UI if they want a fresh start.
Single-Table Mode Safety (R19)
migrate --run --table People requires lookup tables to exist:
- Validate that target tables for link_row fields (Tag, Location, Organization) exist in Baserow
- If missing, error with: “Cannot import People: Tag table not found. Run full migration first.”
- Alternatively: auto-import dependencies (offer as prompt)
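The dependency check can be sketched as below. The dependency map follows the link_row relationships listed in step 2.4; the function name is illustrative:

```typescript
// link_row dependencies per table, drawn from step 2.4 of the execution flow.
const LINK_DEPS: Record<string, string[]> = {
  Organization: ["Tag", "Location"],
  People: ["Tag", "Location", "Organization"],
};

// Return the dependency tables that are missing from the target database.
function validateDeps(table: string, existingTables: Set<string>): string[] {
  return (LINK_DEPS[table] ?? []).filter((dep) => !existingTables.has(dep));
}
```

If the returned list is non-empty, the CLI would abort with the error message quoted above.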
10. Test Plan
10.1 Unit Tests (vitest)
baserow/tests/lib/csv-parser.test.ts
- Parses simple CSV with headers
- Filters blank rows (all empty cells)
- Handles multiline values in quoted fields
- Handles commas within quoted fields
- Handles encoding issues (replacement chars)
- Returns accurate row count
baserow/tests/lib/field-mapper.test.ts
- Converts `M/D/YYYY` dates to `YYYY-MM-DD`
- Converts `MM/DD/YYYY` dates (zero-padded) to `YYYY-MM-DD`
- Returns null for empty date strings
- Converts `"true"` → `true`, `"false"` → `false`
- Returns null for empty boolean strings
- Normalizes URLs: adds `https://` when missing protocol
- Normalizes URLs: leaves `https://` URLs unchanged
- Normalizes Twitter handles: `@FitFounder` → `https://twitter.com/FitFounder`
- Returns null for empty URL strings
- Trims whitespace from all string values (spaces, tabs, `\n`, `\r`)
- Trims trailing newlines from BSW Hike-style values
- Trims leading spaces from email values
- Strips broken Coda references (`#r\d+`)
- Splits comma-separated values correctly
- Handles values with commas inside location names (e.g., “Boston, MA”)
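The two trickiest transforms under test can be sketched as follows. These are illustrative versions, not the actual `field-mapper.ts` code, but they capture the documented behavior, including the non-URL heuristic added later during Phase 5:

```typescript
// M/D/YYYY or MM/DD/YYYY -> YYYY-MM-DD; null for anything else.
function parseDate(value: string): string | null {
  const m = value.trim().match(/^(\d{1,2})\/(\d{1,2})\/(\d{4})$/);
  if (!m) return null;
  const [, mo, d, y] = m;
  return `${y}-${mo.padStart(2, "0")}-${d.padStart(2, "0")}`;
}

// Adds https://, converts @handle to a Twitter URL, rejects non-URL strings.
function normalizeUrl(value: string): string | null {
  const v = value.trim();
  if (!v) return null;
  if (v.startsWith("@")) return `https://twitter.com/${v.slice(1)}`;
  if (/^https?:\/\//.test(v)) return v;
  // Heuristic from the implementation notes: a bare string must contain
  // a dot or slash to be treated as a URL (rejects plain person names).
  if (!/[./]/.test(v)) return null;
  return `https://${v}`;
}
```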
baserow/tests/lib/baserow-client.test.ts (use vi.spyOn(globalThis, 'fetch') mocks — R18)
- Constructs correct headers for DB token auth
- Constructs correct headers for JWT auth (`Authorization: JWT {access_token}`)
- Handles JWT refresh via refresh_token when access_token expires
- Falls back to re-auth if refresh_token also fails (R13)
- Retries on 429 with exponential backoff
- Retries on 5xx with exponential backoff
- Does not retry on 4xx (except 429)
- Paginates through all pages
- Handles empty result sets
- Redacts tokens in verbose log output (R12)
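The retry behavior under test (3 attempts, backoff on 429/5xx only) can be sketched as a small wrapper around a fetch call. Names and the `baseDelayMs` parameter are illustrative, not the actual client API:

```typescript
// Retry on 429/5xx with exponential backoff; pass through 2xx/3xx/4xx.
async function requestWithRetry(
  doFetch: () => Promise<Response>,
  maxAttempts = 3,
  baseDelayMs = 1000,
): Promise<Response> {
  for (let attempt = 1; ; attempt++) {
    const res = await doFetch();
    const retryable = res.status === 429 || res.status >= 500;
    if (!retryable || attempt >= maxAttempts) return res;
    // Exponential backoff: baseDelayMs, 2x, 4x, ...
    await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** (attempt - 1)));
  }
}
```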
baserow/tests/migrate/relationships.test.ts
- Resolves single tag name to row ID
- Resolves multiple comma-separated tags to row IDs
- Tries whole-string match first before splitting (R1: “Boston, MA”)
- Falls back to comma-split when whole string doesn’t match
- Skips broken Coda references with warning
- Handles unmatched names with warning
- Handles empty/null values gracefully
- Resolves compound location values (e.g., “Denver CO,Costa Rica”)
- Deduplicates resolved IDs
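The resolution behavior exercised by these tests can be sketched as below (warning emission omitted for brevity; the function name matches the plan's `resolveRefs()` but the body is an illustrative reconstruction):

```typescript
// Resolve a raw CSV reference value to Baserow row IDs.
// `lookup` maps a name to its Baserow row ID.
function resolveRefs(raw: string, lookup: Map<string, number>): number[] {
  const value = raw.trim();
  if (!value) return [];
  // R1: try the whole string first, so "Boston, MA" resolves as one name.
  const whole = lookup.get(value);
  if (whole !== undefined) return [whole];
  // Otherwise split on commas and resolve each part.
  const ids = value
    .split(",")
    .map((part) => part.trim())
    .filter((part) => part && !/^#r\d+$/.test(part)) // skip broken Coda refs
    .map((part) => lookup.get(part))
    .filter((id): id is number => id !== undefined);
  return [...new Set(ids)]; // deduplicate resolved IDs
}
```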
baserow/tests/migrate/schema.test.ts
- Generates correct field creation payloads for each type
- Creates link_row fields with correct target table IDs
- Handles the Name primary field (doesn’t recreate it)
- Identifies and deletes unwanted default fields
- Renames primary field when table def specifies different name
- Skips table creation when table already exists (re-run safety)
baserow/tests/migrate/import-csv.test.ts
- Batches rows correctly (200 per batch)
- Handles final partial batch
- Applies field transforms (dates, booleans, URLs)
- Resolves link_row references during import
- Handles dedup for lookup tables
- Skips rows missing required field (R5)
- Reports accurate import counts
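The batching under test is a plain chunking step around Baserow's batch-create endpoint (200 rows per call, per the plan). A minimal sketch:

```typescript
// Split rows into batches of at most `size` (Baserow batch limit: 200).
function chunk<T>(rows: T[], size = 200): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < rows.length; i += size) {
    batches.push(rows.slice(i, i + size));
  }
  return batches;
}
```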
10.2 Integration Tests (manual, against real Baserow API)
- `brow preflight` passes all checks
- `brow auth login` returns JWT and workspace info
- `brow auth test` confirms DB token works
- `brow workspaces` lists workspaces
- `brow tables <id>` lists tables in database
- Create + delete a test table via CLI
- Import Tag table (smallest useful test) → verify in Baserow UI
- Full migration → verify all tables + row counts
10.3 Test Fixtures
Minimal CSV files in `baserow/tests/fixtures/` for unit tests:
- `tags.csv` — 5 tags including one duplicate (“Philanthropy Consultant” × 2)
- `locations.csv` — 5 locations including one duplicate (“Costa Rica” × 2) + comma-in-name (“Boston, MA”)
- `people.csv` — 5 people with: multi-value tags, broken Coda refs (#r48), compound locations (Denver CO,Costa Rica), `@handle` Twitter, leading-space email
- `timelogs.csv` — 5 valid rows + 2 garbage rows (one with only `Logged=false`, one partial)
- `bsw-hike.csv` — 3 rows with trailing `\n` in every cell value
- `empty.csv` — headers only, no data rows
- `multiline.csv` — values with newlines in quoted fields
11. Implementation Checklist
Phase 0: Project Setup
- 0.1 Initialize `solanasis-scripts` as git repo
- 0.2 Create GitHub private repo `dzinreach/solanasis-scripts`
- 0.3 Set up remote and initial push
- 0.4 Create `package.json` (name, type:module, scripts, dependencies)
- 0.5 Create `tsconfig.json` (strict, ESM, path aliases)
- 0.6 Create `vitest.config.ts`
- 0.7 Create `.gitignore` (node_modules, .env, dist, migration-state.json, migration-report.json)
- 0.8 Create `.env.example` (template: `BASEROW_DB_TOKEN`, `BASEROW_EMAIL`, `BASEROW_PASSWORD`, `CSV_SOURCE_DIR`, `BASEROW_DATABASE_ID`)
- 0.9 Create `.env` with actual credentials (gitignored) — NOTE: a password containing `#` must be quoted
- 0.10 `npm install` dependencies (plus `@rollup/rollup-win32-x64-msvc` for Windows)
- 0.11 Verify `npx tsx` and `npx vitest` work
Phase 1: Core Library (baserow/src/lib/)
- 1.1 Create `types.ts` — all TypeScript interfaces
  - 1.1a `BaserowConfig`, `AuthType`, `BaserowField`, `BaserowRow`, `BaserowTable`
  - 1.1b `PaginatedResponse<T>`, `BatchCreateResponse`, `JwtAuthResponse`
  - 1.1c `TableDefinition`, `FieldMapping`, `MigrationState`, `MigrationReport`
- 1.2 Create `baserow-client.ts` — core API client
  - 1.2a Dual auth (DB token + JWT)
  - 1.2b JWT refresh (refresh_token first, fall back to re-auth per R13)
  - 1.2c Simple rate limiting: configurable delay (default 100ms) + reactive 429 handling (R14)
  - 1.2e Auto-retry (3 attempts, exponential backoff on 429/5xx)
  - 1.2f Auto-pagination helper (`listAllRows`)
  - 1.2g `user_field_names=true` by default
  - 1.2h Methods: `request`, `listRows`, `listAllRows`, `createRow`, `batchCreateRows`, `deleteRow`, `batchDeleteRows`
- 1.3 Write `baserow-client.test.ts` — 9 tests passing
- 1.4 Create `csv-parser.ts` — CSV reading utility
  - 1.4a Read CSV with UTF-8 encoding
  - 1.4b Parse with `csv-parse/sync`
  - 1.4c Filter rows missing required field (R5)
  - 1.4d Return typed `CsvParseResult` with row count stats
- 1.5 Write `csv-parser.test.ts` + fixtures — 8 tests passing
- 1.6 Create `field-mapper.ts` — data transformation
  - 1.6a `parseDate` — M/D/YYYY → YYYY-MM-DD
  - 1.6b `parseBoolean` — “true”/“false” → boolean
  - 1.6c `normalizeUrl` — adds `https://`, converts `@handle`, rejects non-URL strings
  - 1.6d `trimValue` — handles trailing `\n`, leading spaces
  - 1.6e `isBrokenCodaRef` — detects `#r\d+` patterns
  - 1.6f `splitMultiValue` — comma-separated splitting
- 1.7 Write `field-mapper.test.ts` — 34 tests passing
- 1.8 Run all tests → 75 green
Phase 2: CLI Framework (baserow/src/cli/)
- 2.1 Create `cli/index.ts` — commander setup with all subcommands
- 2.2 Create `commands/auth.ts` — `login` and `test` commands
- 2.3 Create `commands/preflight.ts` — full connectivity check
- 2.4 Create `commands/tables.ts` — list, create
- 2.5 Create `commands/fields.ts` — list, create
- 2.6 Create `commands/rows.ts` — list, create, batch-create, delete
- 2.7 Manual smoke test: `brow auth login` ✓, `brow auth test` ✓, `brow preflight` ✓
Phase 3: Migration Engine (baserow/src/migrate/)
- 3.1 Create `config/tables.ts` — 5 table definitions with field mappings
  - 3.1a Each definition: tableName, csvFile, requiredField, primaryField, fields[], dedupField, phase
  - 3.1b CSV paths resolved at runtime from `CSV_SOURCE_DIR` env var (R17)
- 3.2 Create `migrate/schema.ts` — table + field creation
  - 3.2a Create table via API + delete auto-created sample rows
  - 3.2b List auto-created default fields, delete unwanted ones
  - 3.2c Rename primary field if needed
  - 3.2d Create non-link fields (skip if already exists)
  - 3.2e Create link_row fields (after all tables created)
  - 3.2f Handle “table already exists” gracefully
- 3.3 Write `schema.test.ts` — 5 tests passing
- 3.4 Create `migrate/relationships.ts` — name → ID resolution
  - 3.4a `resolveRefs()` with “try whole first” (R1)
  - 3.4b `buildLookupMap()` from imported rows
  - 3.4c Handle broken Coda refs, missing names, compound values
- 3.5 Write `relationships.test.ts` — 14 tests passing
- 3.6 Create `migrate/import-csv.ts` — data import with batching
  - 3.6a Read CSV → transform fields → resolve refs → batch rows
  - 3.6b Batch into chunks of 200
  - 3.6c Track import counts and warnings
  - 3.6d Build lookup maps for dependent tables
- 3.7 Write `import-csv.test.ts` — 5 tests passing
- 3.8 Create `migrate/index.ts` — orchestrator
  - 3.8a `--plan` mode: dry run
  - 3.8b `--run` mode: execute full migration
  - 3.8c `--table <name>` mode: single table with dependency validation
  - 3.8d Verification step: compare row counts
  - 3.8e Re-run safety: state persistence, skip completed tables
- 3.9 Run all tests → 75 green
Phase 4: Documentation
- 4.1 Write `README.md` — Baserow API reference guide + project structure
- 4.2 Write `SETUP.md` — user setup instructions
- 4.3 Verify permalinks in all source files
Phase 5: Integration Testing
- 5.1 Run `brow preflight` → all green (11/11 checks passed)
- 5.2 Run `brow auth login` → “Dmitri Sunshine (admin@solanasis.com)”, 1 workspace, 1 database
- 5.3 Run `brow migrate --plan` → accurate summary (5 tables, 5 link_row relationships)
- 5.4 (Skipped single-table test — went directly to full migration)
- 5.5 Run `brow migrate --run` → full migration completed
  - Run 1: Tag(50), Location(32), Organization(66) imported. People failed on invalid LinkedIn (person name in URL field).
  - Fix: updated `normalizeUrl()` to reject non-URL strings (no dots/slashes).
  - Run 2: resumed from state. People(160), Meeting Notes(37) imported.
  - Cleaned up 2 default sample rows per table (10 total).
- 5.6 Verify row counts: Tag=50, Location=32, Organization=66, People=160, Meeting Notes=37 (345 total)
- 5.7 Spot-check People: Tags link_row resolves (“Creators-Hub Possibility”), Location resolves (“Boulder CO”), dates in ISO format
- 5.8 Re-run: all tables skipped, all counts [OK]
Phase 6: Finalize
- 6.1 Final test run: 75 tests, 6 files, all passing
- 6.2 Commit all changes
- 6.3 Push to GitHub
- 6.4 Update MEMORY.md
Post-Migration Fixes (2026-03-08)
- Rebuilt Meeting Notes table with formula primary field `field('Person')`
  - Deleted old table (873567), created new (873613)
  - Primary field auto-derives from Person link_row — no redundant Name text column
  - 37/37 Person links resolved
- Created 6 missing Organizations (Herban Wellness, Antler, Questco, Polestar Gardens / Village, Rootstock Philanthropy, Earth05)
  - These were referenced by People but absent from the Organization CSV
  - Linked all 6 People → Organization relationships (16/16 now resolved, was 10/16)
- Deleted obsolete `upgrade-meeting-notes.ts` (superseded by table rebuild)
- Final row counts: Tag=50, Location=32, Organization=72, People=160, Meeting Notes=37 (351 total)
- Remaining known gaps: 2 people with broken Coda refs (unresolvable from CSV export)
  - Dominic Kalms: Tags `#r48`, `#r49`, Location `#r31`
  - Paul Foley: Location `#r36`
Issues Found During Implementation
- dotenv `#` comment parsing — password `Ejs$N4G4#4qH` was truncated at `#`. Fix: quote the value in `.env`.
- Non-URL in URL field — People row 72 had the person name “Stephen (CSM) Shepherd” in the LinkedIn field. `normalizeUrl` turned it into `https://Stephen (CSM) Shepherd`, which Baserow rejected. Fix: added a heuristic (a value must contain `.` or `/` to be treated as a URL).
- Default sample rows — Baserow creates 2 sample rows per new table. Added `batchDeleteRows` in `createTableWithFields` after table creation.
- Windows rollup binary — `@rollup/rollup-win32-x64-msvc` needed explicit install with `--os=win32 --cpu=x64`.
12. Senior Reviewer Findings
Reviewed by Plan agent. 23 findings. All verified against actual CSV data.
Summary
| Severity | Count | Addressed |
|---|---|---|
| CRITICAL | 1 (R1) | Yes — “try whole first” resolution strategy |
| HIGH | 4 (R2-R5) | Yes — all integrated into plan |
| MEDIUM | 10 (R6-R14, R23) | Yes — all integrated |
| LOW | 8 (R15-R22) | Yes — all integrated |
Issue Log
| ID | Sev | Category | Issue | Resolution |
|---|---|---|---|---|
| R1 | CRITICAL | Data | Comma-split breaks “Boston, MA” location resolution | Added “try whole first” strategy to resolveRefs() in Section 7.2 |
| R2 | HIGH | Data | Twitter @FitFounder handle is not a valid URL | Added Twitter handle normalization rule in Section 6.6 |
| R3 | HIGH | Data | BSW Hike cells have trailing \n | Documented in Section 6.7; .trim() handles it |
| R4 | HIGH | API | Missing PATCH field endpoint for renames | Added to Section 4.2 endpoints table |
| R5 | HIGH | Data | 14 garbage rows in Time Logs pass “not all empty” filter | Added requiredField per-table config in Section 6.9; Time Logs 454→440 |
| R6 | MEDIUM | Data | Encoding was valid UTF-8, not Windows-1252 | Corrected Section 6.8 |
| R7 | MEDIUM | Data | Count said “5 orgs” but listed 6 | Fixed to “6 orgs” in Section 6.4 |
| R8 | MEDIUM | Data | ONE|Boulder pipe is part of name, not delimiter | Documented in Section 6.10 |
| R9 | MEDIUM | Safety | ”Overwrite” re-run mode undefined, dangerous for link_row targets | Removed Overwrite option; Skip only + state persistence |
| R10 | MEDIUM | Safety | Lookup maps lost on partial failure | Added migration-state.json checkpoint file |
| R11 | MEDIUM | Testing | Missing test cases for R1/R3/R5 edge cases | Added to test plan Sections 10.1, 10.3 |
| R12 | MEDIUM | Security | Verbose mode could log tokens/PII | Added redaction requirement to test plan |
| R13 | MEDIUM | API | JWT refresh vs re-auth conflated | Clarified: use refresh_token first, fall back to re-auth |
| R14 | MEDIUM | Architecture | Token bucket + semaphore is over-engineering | Simplified to delay + reactive 429 handling |
| R15 | LOW | DX | Inconsistent “Time-Log Cat” abbreviation | Using full name everywhere |
| R16 | LOW | Data | 3 emails have leading spaces | Covered by global .trim() in Section 6.7 |
| R17 | LOW | Architecture | Hardcoded CSV paths fragile | Added CSV_SOURCE_DIR env var |
| R18 | LOW | Testing | No mock strategy specified for API tests | Added vi.spyOn(globalThis, 'fetch') note |
| R19 | LOW | Safety | Single-table mode ignores dependency order | Added dependency validation in re-run safety |
| R20 | LOW | DX | BASEROW_API_KEY name confusing | Renamed to BASEROW_DB_TOKEN |
| R21 | LOW | Data | CRLF line endings not mentioned | Documented in Section 6.12 |
| R22 | LOW | Architecture | Monorepo-lite structure needs documentation | Noted for README.md |
| R23 | LOW | Safety | Primary field type change may not be supported | Quotes primary field stays as text |
13. Post-Migration: URL Enrichment
After the migration was completed, a URL enrichment tool was built to verify and gap-fill URL fields.
See: ENRICH-PLAN.md for the full plan, matching algorithm, and execution steps.
Summary:
- `brow enrich --plan` — dry run: match CSV rows to Baserow, report gaps
- `brow enrich --run` — apply gap-fill updates (never overwrites existing data)
- `brow enrich --export` — export LinkedIn URLs to CSV for Phase 2 Chrome extension enrichment
- 25 additional unit tests (100 total across the project)