Sentinel, PE Deal-Flow System Review Dashboard

This is the architecture-first dashboard for the Claude Code project created to build the PE listing detection, enrichment, scoring, outreach, and approval system. It shows what’s already defined, what the system will do, what still needs your call, and how implementation should proceed without turning into reckless spammy bullshit.

Claude Code project created · Spec-first · Human approval required · Speed-to-opportunity engine · Dashboard-ready architecture
Spec docs created: 8 (README, PRD, architecture, technical spec, dashboard spec, implementation plan, TDD strategy, open questions)
Core system stages: 8 (watch, detect, extract, enrich, score, draft, review, track)
Initial source target: 3–5 (start narrow, prove signal, then expand)
Critical rule: 0 auto-sent outbound messages in V1

System Flow

What the product is designed to do

  • Watch: Monitor business-for-sale sources using sitemap, feed, HTML diff, or email ingest.
  • Detect: Identify net-new listings fast and dedupe repeats across sources.
  • Extract: Parse title, summary, location, financials, broker info, and provenance.
  • Enrich: Add company, broker, contact, and digital footprint data with confidence scoring.
  • Score: Rank against the buyer thesis and route into high, medium, or low priority buckets.
  • Draft: Generate serious-buyer outreach drafts and internal memos grounded in listing facts.
  • Review: Human approval gate for approve, edit, reject, or snooze decisions.
  • Track: Move deals through the pipeline from first-seen to reply, call, diligence, and close/loss.
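The Detect stage's cross-source dedupe can be sketched as a fingerprint over normalized listing fields. This is a minimal illustration, not the spec'd approach: the field names and the choice of title-plus-location as the dedupe key are assumptions pending TECHNICAL-SPEC.md.

```typescript
import { createHash } from "node:crypto";

// Hypothetical minimal listing shape; the real schema lives in TECHNICAL-SPEC.md.
interface RawListing {
  source: string;
  title: string;
  location: string;
}

// Normalize the fields most likely to survive cross-posting, then hash.
// Two listings that normalize identically are treated as the same deal
// even when they arrive from different sources.
function listingFingerprint(l: RawListing): string {
  const norm = [l.title, l.location]
    .map((s) => s.toLowerCase().replace(/[^a-z0-9]+/g, " ").trim())
    .join("|");
  return createHash("sha256").update(norm).digest("hex").slice(0, 16);
}
```

The point of normalizing before hashing is that brokers repost the same deal with cosmetic differences (punctuation, casing), and a raw-text hash would miss those repeats.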

What I’ve done

Artifacts created in the Claude Code project

  • README.md, product framing and repo orientation
  • PRD.md, user, problem, scope, and requirements
  • ARCHITECTURE.md, service layout and data flow
  • TECHNICAL-SPEC.md, schemas, services, API domains, and risks
  • DASHBOARD-SPEC.md, operator review interface definition
  • IMPLEMENTATION-PLAN.md, phased build gates
  • TDD-STRATEGY.md, fixture-first build method
  • OPEN-QUESTIONS.md, unresolved decisions for you

Method used

The build approach

  • Started from the Cody Schneider pattern research
  • Converted that into a dedicated operator-grade product concept named Sentinel
  • Kept it architecture-first, not implementation-first
  • Separated product definition, system design, dashboard design, and build sequencing
  • Forced human approval into the core workflow so it doesn’t become a blind auto-blaster
  • Designed it so it can serve Saint personally first, then become productizable later

Core Modules

Main components defined so far

Source Watchers (defined)

Monitor BizBuySell, BizQuest, broker sites, and other candidate sources using conservative source-specific polling.
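"Conservative source-specific polling" can be sketched as a per-source minimum interval plus jitter. The config shape and interval values below are placeholders, not decisions; real intervals are a Phase 0 call.

```typescript
// Placeholder config shape; real per-source intervals are a Phase 0 decision.
interface SourceConfig {
  name: string;
  minIntervalMinutes: number; // conservative per-source floor
}

// A source is due only once its minimum interval has elapsed; jitter keeps
// the schedule from looking machine-regular to the target site.
function isDue(cfg: SourceConfig, lastPolledMs: number, nowMs: number, jitterMs = 0): boolean {
  return nowMs >= lastPolledMs + cfg.minIntervalMinutes * 60_000 + jitterMs;
}

function pickDueSources(
  configs: SourceConfig[],
  lastPolled: Record<string, number>,
  nowMs: number,
): string[] {
  return configs
    .filter((c) => isDue(c, lastPolled[c.name] ?? 0, nowMs))
    .map((c) => c.name);
}
```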

Extraction Layer (defined)

Normalizes ugly marketplace data into a stable listing schema with provenance and extraction confidence.
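The stable listing schema could look roughly like this. Every field name here is an assumption standing in for whatever TECHNICAL-SPEC.md actually defines; the important properties are that provenance travels with the record and that extraction confidence is explicit rather than implied.

```typescript
// All field names are illustrative placeholders, not the spec'd schema.
interface Provenance {
  source: string;        // e.g. "bizbuysell"
  url: string;
  fetchedAt: string;     // ISO timestamp of the raw capture
  rawArtifactId: string; // pointer to the stored raw HTML/feed item
}

interface NormalizedListing {
  id: string;
  title: string;
  summary: string;
  location: string | null;    // null when the source omits or hides it
  askingPrice: number | null;
  cashFlow: number | null;
  brokerName: string | null;
  provenance: Provenance;
  extractionConfidence: number; // 0-1; low values get flagged, not trusted
}

// Low-confidence extractions should surface for review rather than flow
// silently into scoring. The threshold is an assumed placeholder.
function needsManualCheck(l: NormalizedListing, threshold = 0.6): boolean {
  return l.extractionConfidence < threshold;
}
```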

Enrichment Engine (defined)

Adds broker info, company metadata, contact paths, and digital signals without pretending certainty where none exists.

Scoring Engine (defined)

Ranks each opportunity against a buyer thesis using weighted dimensions and clear explanation traces.
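Weighted scoring with an explanation trace can be sketched as below. The dimension names, weights, and bucket thresholds are all assumptions pending the thesis decision in the open questions; the real point is that every score ships with a human-readable trace.

```typescript
// Hypothetical thesis dimensions; the real set comes from Phase 0 decisions.
type Dimension = "industryFit" | "sizeFit" | "geographyFit" | "dealType";

interface ScoredResult {
  total: number;                   // 0-100
  bucket: "high" | "medium" | "low";
  trace: string[];                 // one explanation line per dimension
}

function scoreListing(
  scores: Record<Dimension, number>,  // each dimension pre-scored 0-1
  weights: Record<Dimension, number>, // weights summing to 1
): ScoredResult {
  let total = 0;
  const trace: string[] = [];
  for (const d of Object.keys(weights) as Dimension[]) {
    const contrib = scores[d] * weights[d] * 100;
    total += contrib;
    trace.push(`${d}: ${scores[d].toFixed(2)} x ${weights[d]} = ${contrib.toFixed(1)}`);
  }
  // Assumed thresholds; tuning these is exactly the kind of call that stays with the operator.
  const bucket = total >= 70 ? "high" : total >= 40 ? "medium" : "low";
  return { total, bucket, trace };
}
```

Keeping the trace alongside the number is what makes a score defensible in the review queue instead of a black-box ranking.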

Draft Generator (defined)

Creates grounded outreach drafts and internal memos that sound like a serious buyer, not a bulk sender.

Approval Queue (defined)

Human-in-the-loop review gate for approve, edit, reject, snooze, and reassign actions.
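The gate can be modeled as a tiny state machine. This sketch covers four of the five actions (reassign is an ownership change, not a status change) and the transition map itself is an assumption, not the dashboard spec.

```typescript
type DraftStatus = "pending" | "approved" | "rejected" | "snoozed";
type ReviewAction = "approve" | "edit" | "reject" | "snooze";

// Only queued drafts can be acted on; terminal states stay terminal,
// which keeps the audit trail honest.
function applyReview(status: DraftStatus, action: ReviewAction): DraftStatus {
  if (status !== "pending" && status !== "snoozed") {
    throw new Error(`cannot ${action} a draft in status "${status}"`);
  }
  switch (action) {
    case "approve": return "approved"; // approval still means manual send in V1
    case "edit":    return "pending";  // edited draft goes back into the queue
    case "reject":  return "rejected";
    case "snooze":  return "snoozed";
  }
}
```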

Pipeline Tracker (defined)

Tracks opportunities from detection to diligence, offer, close, or archive.

Notifications (defined)

Supports Telegram alerts, daily digest email, and dashboard attention queues.
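An assumed routing policy consistent with those three channels: high-priority deals alert immediately, everything else batches into the digest or just sits in the dashboard. The actual policy is an open call, not spec'd.

```typescript
type Priority = "high" | "medium" | "low";
type Channel = "telegram" | "digest" | "dashboard";

// Assumed policy: only high-priority deals interrupt; medium waits for the
// daily digest; low stays visible but quiet.
function channelsFor(priority: Priority): Channel[] {
  switch (priority) {
    case "high":   return ["telegram", "digest", "dashboard"];
    case "medium": return ["digest", "dashboard"];
    case "low":    return ["dashboard"];
  }
}
```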

Live Implementation (pending build)

No product code yet. That stays gated until source list, thesis, workflow, and stack decisions are approved.

Phase Plan

How implementation should proceed

Phase 0, approval and scope lock: sources, thesis, score thresholds, channels, stack.

Phase 1, data foundation: schema, first connector, raw artifacts, diff engine, persistence.

Phase 2, intelligence layer: enrichment, confidence scoring, thesis config, score engine.

Phase 3, operator workflow: drafting, review queue, approvals, audit log.

Phase 4, dashboard and notifications: overview, source health, alerts, digest.

Phase 5+, live pilot and hardening: prove quality, improve resilience, expand sources.
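The Phase 1 diff engine reduces to a set comparison between crawls, sketched here over listing IDs or fingerprints (snapshot storage and ID choice are assumed away):

```typescript
// Net-new detection: anything in the current crawl that was absent from the
// previous snapshot is a candidate new listing. Disappearances are tracked
// too, since a delisting is itself a pipeline signal.
function diffCrawl(
  previous: Set<string>,
  current: Set<string>,
): { added: string[]; removed: string[] } {
  return {
    added: [...current].filter((id) => !previous.has(id)),
    removed: [...previous].filter((id) => !current.has(id)),
  };
}
```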

Open Questions

What still needs your decision

1. What exact buyer thesis is V1 built for?

Industry, size, geography, deal type, exclusions. This drives scoring and drafts.

2. Which 3–5 sources ship first?

BizBuySell and BizQuest are obvious, but the niche broker sites matter.

3. Is this Saint-first, Clearfork-client-first, or both in sequence?

That affects naming, permissions, and product boundaries.

4. What outbound channels belong in V1?

Email only, or email plus LinkedIn drafts?

5. What stack do we want?

Current recommendation is boring and practical: Postgres/Supabase, TypeScript services, web dashboard.

6. Should approval trigger sending, or should sending stay fully manual at first?

My bias is the safer path: manual first.

Risks and Constraints

The ugly truth, plainly stated

  • Scraping constraints: some listing sources will fight automation, change markup, or enforce anti-bot controls.
  • Data quality: marketplace listings are inconsistent as hell, and enrichment confidence will vary wildly.
  • Outreach quality risk: if drafts sound generic, the whole advantage collapses.
  • Overbuild risk: if we start too broad, we’ll drown in source complexity before proving value.

Recommended Next Actions

What I’d do next