Project Architecture · Double Diamond Sequence with HITL Gates
Full Process Structure
Stage 1 · POV Generation (Discovery / POV) · HITL: POV Validation
Stage 2 · Three Anonymous Parallel Runs (Ideation ×3) · Isolated · No shared context
Stage 3 · Consolidation & Prioritisation · Separate chat · No KB
Stage 4 · Feasibility & RAG Assessment (Feasibility Study) · HITL Gate
Stage 5 · Quiet Signal System (Concept Dev.) · HITL: Concept Review
Stage 5a · Success Criteria (H1 + H2) · HITL Gate
Stage 6 · Critique & Learning (Methodology Critique)
HITL Methodology: Human-in-the-loop review gates occurred at four points in the sequence: POV validation before ideation began; after the feasibility study, before concept development proceeded; after concept development, before success criteria were written; and after the full critique output, before submission. At each gate, a human assessor reviewed the AI output against source evidence before the next stage was authorised to proceed. The system detects. The human decides.
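The gate discipline described above can be sketched as an ordered checklist in which no stage runs until every earlier gate has a recorded human approval. A minimal sketch; the data structure and function are illustrative, not part of the project tooling.

```python
# The four HITL gates, in the order described in the methodology.
HITL_GATES = [
    "POV validation (before ideation)",
    "Feasibility review (before concept development)",
    "Concept review (before success criteria)",
    "Critique review (before submission)",
]

def first_open_gate(approvals):
    """Return the earliest gate without a recorded human approval,
    or None if every gate has been passed. Approval is always a
    human input: the system detects, the human decides."""
    for gate in HITL_GATES:
        if not approvals.get(gate, False):
            return gate
    return None
```

With no approvals recorded, `first_open_gate({})` returns the POV gate; only once all four are approved does it return `None` and authorise submission.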

Working Document Chat: A separate working document chat was used in parallel throughout the project to develop and refine the prompt sequence and overall project architecture. That chat served as the structural design environment for the entire methodology — iterating the prompt sequence, testing knowledge base structure, and generating the final prompt appendix. Its existence is noted here; its detailed contents are referenced only in the final prompt sequence document included in this appendix.
1 · Discovery Stage · Double Diamond — Diverge

Point of View

The POV was the convergent output of the Discovery phase — synthesised from qualitative research including peer-to-peer community forum sentiment, NPS data, and financial results via a HITL-overseen prompt sequence. It was stress-tested against Design Thinking criteria before being used as the brief embedded in the Ideation stage. A deliberately anonymised canonical short form was prepared for the three parallel ideation runs to prevent the ideation sessions from being primed by project context.
Master Point of View Statement — P1 Discovery
Generated & HITL-Validated · Discovery Phase
A long-tenured subscriber to a premium DTC wine community needs the operational experience of receiving their order to be held to the same standard of care as the product itself — because the company's primary measure of customer health (a prompted, managed satisfaction metric) consistently reports 'excellent,' whilst the unsolicited, peer-to-peer community forum — where customers speak to each other, not to the brand — reveals a pattern of quiet, irreversible disengagement expressed not as complaint, but as conclusion: 'I am not angry. I am finished.' This gap between the managed metric and the lived reality is not a data discrepancy. It is where the highest-value customers are disappearing.

Canonical Short Form for Ideation Use
Anonymised POV for Three Parallel Sessions

The canonical short form was used as the only briefing material for the three anonymous ideation sessions. Each session received this statement and nothing else — no project context, no financial data, no knowledge base access.

User
A 5+ year subscriber who pre-funds independent makers and co-owns the brand story.
Need
Needs the final physical delivery to be treated with the same care and intention as the product itself.
Insight
Because when a cost-optimised carrier fails that moment, it doesn't produce a complaint — it produces a conclusion: 'I am not angry. I am finished.'

HITL Critical Finding
The Primary Methodological Risk

The primary methodological risk in this project is the use of prompted, managed satisfaction metrics as the primary measure of customer health. NPS and post-interaction surveys are administered by the business and answered in the context of a direct relationship with the brand. They consistently overstate satisfaction because they measure willingness to respond positively in a managed context, not the actual state of the customer relationship.

The unmediated peer-to-peer community forum — where customers speak to each other, not to the brand — produces a materially different signal. In this context, customers express authentic sentiment without the social pressure of a direct brand relationship. The divergence between these two data sources is not noise. It is the finding.

Any concept developed in response to P1 must use unmediated community signal as its primary data source. Concepts that rely on prompted metrics — even as a secondary input — risk replicating the measurement failure they are designed to solve.

Human-in-the-loop oversight is required at every point where AI-generated signal is translated into a customer communication or operational decision. The system detects. The human decides.

2 · Ideation Stage · Double Diamond — Diverge

Ideation — Three Parallel Anonymous Runs

The ideation stage generated maximum idea diversity by running three completely isolated sessions in parallel, using only the anonymised POV as the brief. No session had access to project context, the knowledge base, or the other sessions' outputs. The three sessions produced 161 ideas in total (52, 54, and 55 respectively), which were then consolidated in a separate stage. The three outputs are presented here session by session.
Methodology Note
Why Three Anonymous Sessions — Why No Shared Context

The decision to run three isolated sessions with no shared context was the single most consequential structural choice in the methodology. Each session received only the anonymised POV statement. No session could see the others' output before generating its own. This manufactured genuine divergence rather than the pseudo-divergence of sequential prompting within a single conversation.

Why this matters: Convergence across isolated sessions is structurally different from convergence within a single session. When three isolated processes reach the same idea independently, that convergence is evidence — not an artefact of shared context. Community Vocabulary Shift appeared independently in all three sessions, making it the single most structurally validated idea in the corpus.

Financial context and project knowledge were deliberately excluded from all three ideation sessions. Ideas should not be filtered for commercial viability at the point of generation — premature commercial anchoring eliminates structurally important ideas before they can be assessed. The feasibility study made the commercial case for retained ideas retrospectively at Stage 4, not at Stage 2.
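The cross-session convergence test can be sketched as a fuzzy title match across the three isolated outputs. The similarity cutoff is an illustrative assumption, and the title lists below are small selections from the session outputs above, not the full corpora.

```python
import difflib

def normalise(title):
    # 'The Silence Classifier' and 'Silence Classifier' should match.
    t = title.lower().strip()
    return t[4:] if t.startswith("the ") else t

def convergent_ideas(sessions, cutoff=0.85):
    """Ideas whose title appears (fuzzily) in every isolated session.
    Convergence across isolated runs is evidence, not an artefact
    of shared context."""
    base, *rest = [[normalise(t) for t in s] for s in sessions]
    out = []
    for title in base:
        if all(difflib.get_close_matches(title, other, 1, cutoff)
               for other in rest):
            out.append(title)
    return out

chat1 = ["Forum Divergence Score", "The Silence Classifier",
         "Community Vocabulary Shift", "The Relationship Ledger"]
chat2 = ["Forum Divergence Score", "Silence Classifier",
         "Community Vocabulary Shift", "The Stewardship Score"]
chat3 = ["Forum Divergence Score", "The Silence Classifier",
         "Community Vocabulary Shift", "Memory Banking"]

converged = convergent_ideas([chat1, chat2, chat3])
```

On this sample, only the three ideas present in all sessions survive; session-specific ideas such as The Relationship Ledger are excluded.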

Chat 1 Output — 52 Ideas · Territories: Physical, Signal, Relational
Chat 1 · 01–10 (selected)
Forum Divergence Score
Signal Intelligence
A real-time index comparing sentiment in managed review channels versus unmanaged peer community forums — the gap is the early warning. NLP dual-channel sentiment comparison.
Chat 1 · Signal
The Silence Classifier
Signal Intelligence
AI distinguishes between 'satisfied silence' and 'concluded silence' — two states that look identical in engagement metrics but represent entirely different subscriber trajectories. Temporal engagement pattern recognition and change-point detection.
Chat 1 · Signal
Community Vocabulary Shift
Signal Intelligence
Monitors the language long-tenure subscribers use in community forums over time — a shift from 'we' to 'they' language is an early-exit signal. Longitudinal NLP pronoun and framing-shift detection.
Chat 1 · Signal
The Anticipation Metric
Pre-Departure Interception
Measures whether subscribers are opening delivery notification emails earlier or later over time — declining anticipation is a leading indicator. Email open-time distribution analysis against personal tenure baseline.
Chat 1 · Relational
Exit Interview Before They Leave
Relational Stewardship
AI identifies subscribers showing pre-departure signals and triggers a proactive, human-authored conversation that treats the interview as an investment — not a retention script. Departure-risk scoring + personalised outreach.
Chat 1 · Relational
Commitment Reciprocity Test
Relational Stewardship
Before any price increase, AI identifies the highest-LTV subscribers and asks directly: does the price still feel fair? Not as a survey — as a genuine bilateral question. Commitment imbalance detection + honest outreach generation.
Chat 1 · Operational
Failure-First Insert
Physical Delivery
When any delivery issue is detected — damaged box, delayed carrier, product quality flag — the next box includes a pre-emptive acknowledgement inserted before the subscriber raises the issue. NLP + predictive logistics monitoring.
Chat 1 · Operational
Grief Protocol Box
Physical Delivery
When a subscriber has been silent for 90+ days, the next box ships with an additional quiet gift — not in the catalogue — with a note: 'We noticed your quiet. No action needed.' Silence-detection AI without requiring explicit feedback.
Chat 1 · Relational
The Relationship Ledger
Relational Stewardship
A private subscriber dashboard showing their contribution to the community — referrals made, winemakers funded, years of membership — turning invisible commitment into visible record. CRM integration + contribution data aggregation.
Chat 1 · Contrarian
Tell Them You're Worried About Losing Them
Relational Stewardship
Rather than pretending everything is fine, send at-risk long-tenure subscribers a frank note: 'We've noticed you've been quieter. We don't want to lose you.' Radical vulnerability as retention strategy. Departure risk scoring + honest outreach generation.
Chat 1 · Signal
Referral Velocity Reversal
Signal Intelligence
Tracks not just referral volume but referral velocity over tenure — a subscriber who referred three friends in year one and zero in year five is a fundamentally different risk signal. Time-series analysis + personal baseline modelling.
Chat 1 · Relational
The Tenure Council
Co-Ownership & Influence
A standing advisory group of the highest-LTV subscribers who receive early access to strategic decisions — product direction, pricing reviews, market strategy — and are genuinely consulted before decisions are made. Stakeholder influence modelling.
Chat 1 · Legacy
The Shared Future Letter
Legacy & Memory
Annually, the company sends long-tenure subscribers a letter describing where the brand is going and explicitly asking: 'Is this still the journey you want to be on?' Making continued subscription a conscious re-commitment. Generative narrative + CRM integration.
Chat 1 · Contrarian
The Mutual Exit Interview Programme
Legacy & Memory
Publish anonymised departure reasons from long-tenure subscribers in the company's annual report — not to perform transparency, but to make the cost of ignoring community signals a matter of public record. Aggregated exit data anonymisation + thematic summarisation.
Chat 1 · Operational
Legacy Naming Rights
Legacy & Memory
At 10 years, offer the subscriber the permanent naming of one SKU or product variant after themselves or a person they nominate — a genuine act of co-ownership that no discount can replicate. Tenure milestone detection + ceremonial communication generation.
Chat 1 produced 52 ideas across three territories. The above represents a selection of ideas subsequently carried into consolidation. The full corpus of 52 ideas is captured in the Consolidation stage output.
Chat 2 Output — 54 Ideas · Independent Anonymous Session
Chat 2 · Signal
The Warm Goodbye Classifier
Pre-Departure Interception
An NLP model trained specifically on the calm, positive, conclusive language that precedes voluntary churn — distinguishing 'I am finished' from neutral or positive interactions. Standard sentiment tools flag anger, not conclusion. Specialised departure-language NLP classifier.
Chat 2 · Signal
Forum Divergence Score
Signal Intelligence
Real-time NLP sentiment comparison across managed NPS channels and unmanaged peer forum — the gap is the primary early-warning instrument. [Convergence: this idea surfaced independently in all three sessions.]
Chat 2 · Signal
Community Vocabulary Shift
Signal Intelligence
Monitors the pronoun shift from 'we love what they're building' to 'they seem to have changed direction' — longitudinal NLP detecting the earliest language signal of disengagement. [Convergence: surfaced independently in all three sessions.]
Chat 2 · Physical
The Broken Seal Guarantee
Physical Delivery
Every box arrives with a tamper seal and a QR code linking to the exact quality-check log for that specific unit — making quality verification personal, not statistical. AI-generated per-unit QC certificates linked to individual shipping labels.
Chat 2 · Signal
Silence Classifier
Signal Intelligence
AI distinguishes satisfied silence from concluded silence. Temporal engagement pattern recognition and change-point detection trained specifically on departure-precursor behavioural sequences.
Chat 2 · Physical
Grief Protocol Box
Physical Delivery
A quietly personalised box that ships when silence exceeds 90 days — no call to action, just acknowledgement. 'We noticed your quiet. No action needed.' Silence-detection AI identifies engagement gaps without requiring explicit feedback. [Convergence: independent emergence across sessions.]
Chat 2 · Relational
The Stewardship Score
Relational Stewardship
Internal teams are evaluated not on satisfaction survey scores but on a stewardship score — an AI-generated composite measuring tenure retention, relationship depth, and long-tenure engagement across their accounts. Composite performance scoring ML + multi-signal weighting.
Chat 2 · Relational
Alumni Network with Re-entry Pathway
Legacy & Memory
Former long-tenure subscribers maintained in a distinct alumni relationship — receiving occasional non-commercial updates and a standing personalised re-entry invitation based on their original preferences. Post-churn relationship ML + personalised re-engagement NLP.
Chat 2 · Physical
The Counterintuitive Downgrade Offer
Physical Delivery
High-tenure subscribers who signal fatigue are proactively offered a lower-cost tier — 'the version for people who already know us' — reducing volume but preserving the relationship. Lifetime value model identifies subscribers where relationship preservation outweighs revenue.
Chat 2 · Signal
Referral Velocity Reversal
Signal Intelligence
Tracks referral velocity decline over subscriber tenure as a proxy for loss of social conviction. Time-series analysis + personal baseline modelling with anomaly alerting. [Convergence: surfaced independently in multiple sessions.]
Chat 2 · Physical
Community-Tested Batch Label
Physical Delivery
Products tested and approved by the peer community ship with a physical 'Community Passed' mark — making the community's authority visible in the product itself. NLP sentiment analysis of forum threads surfaces community consensus signals.
Chat 2 · Physical
The No-Box Option
Physical Delivery
Subscribers can opt into a 'nothing this month' pause that still ships a single personal note — keeping the physical touchpoint alive without the product burden. AI generates bespoke notes calibrated to subscriber history and current season.
Chat 2 produced 54 ideas. Key independent convergences with Chat 1: Forum Divergence Score, Community Vocabulary Shift, Silence Classifier, Grief Protocol Box, and Referral Velocity Reversal. These convergences became evidence of structural validity in the consolidation phase.
Chat 3 Output — 55 Ideas · Independent Anonymous Session
Chat 3 · Signal
Forum Divergence Score
Signal Intelligence
NLP sentiment comparison across NPS managed channel and unmanaged peer forum. [Third independent emergence — confirmed cross-session convergence and structural validity of this as the foundational instrument.]
Chat 3 · Signal
Community Vocabulary Shift
Signal Intelligence
Longitudinal NLP pronoun-shift detection: 'we' → 'they' as the earliest language-level signal of disengagement. [Third independent emergence — the single most structurally validated idea in the corpus of 161.]
Chat 3 · Relational
The Peer Influence Map
Pre-Departure Interception
Maps the subscriber's influence network within the community — identifying which Angels are downstream referral nodes whose departure would generate cascading churn beyond the individual. Network graph analysis + influence propagation modelling.
Chat 3 · Relational
Contribution Archaeology
Legacy & Memory
Surfaces the complete history of a subscriber's community contribution — every forum post, every winemaker funded, every friend referred — compiled into a 'Your Story With Us' document at year five and ten. NLP + CRM integration for full history reconstruction.
Chat 3 · Relational
The Honest Annual Report
Relational Stewardship
An annual subscriber communication that includes not just the positives but the delivery failures, the community complaints, and the operational gaps — sent to long-tenure subscribers as a gesture of radical transparency. NLP data aggregation + editorial AI curation + human review.
Chat 3 · Signal
The Silence Classifier
Signal Intelligence
AI trained to detect 'concluded silence' versus 'satisfied silence' from the temporal pattern of engagement — not the sentiment of utterances. Change-point detection + departure-precursor sequence recognition.
Chat 3 · Signal
The Anticipation Metric
Pre-Departure Interception
Declining speed of delivery email opening is a predictive leading indicator preceding satisfaction score decline by weeks. Email open-time distribution tracking against personal tenure baseline.
Chat 3 · Relational
The Disagreement Forum
Co-Ownership & Influence
A dedicated unmanaged space within the community where subscribers can formally disagree with a company decision — with a guaranteed leadership response. AI topic clustering + structured response routing to appropriate decision-maker.
Chat 3 · Operational
Failure-First Insert
Physical Delivery
Pre-emptive acknowledgement of known delivery issues included in the box before the subscriber raises the complaint. Proactive logistics monitoring + personalised apology generation.
Chat 3 · Signal
Referral Velocity Reversal
Signal Intelligence
Referral cessation as proxy for loss of social conviction. Personal referral baseline modelling with anomaly alerting. [Independent emergence confirmed cross-session validity.]
Chat 3 · Relational
Memory Banking
Legacy & Memory
Every community interaction, product preference, and significant life event mentioned by a long-tenure subscriber is stored and surfaced at relevant moments — birthday, anniversary, product launch — creating genuine continuity of relationship. NLP extraction + contextual memory retrieval.
Chat 3 · Physical
The Last Box Protocol
Legacy & Memory
When a long-tenure subscriber cancels, their final shipment is curated from their complete history — not a catalogue pick — with a handwritten note that acknowledges the relationship without a call to re-subscribe. AI-curated personalised farewell from subscriber's complete taste graph.
Chat 3 produced 55 ideas. Combined total across all three sessions: 161 ideas. Cross-session convergence on Forum Divergence Score, Community Vocabulary Shift, Silence Classifier, and Referral Velocity Reversal confirmed structural validity of these as the foundational layer of the concept.
3 · Consolidation Stage · Double Diamond — Converge

Consolidation — Prioritised Shortlist of 18 Ideas

The three ideation outputs were consolidated in a separate standalone chat with no project knowledge base — ensuring natural clusters emerged from the ideas themselves rather than confirming a predetermined structure. 161 ideas were de-duplicated, clustered, and reduced to 18 prioritised ideas. This stage fed directly into the Feasibility Study, which used the emergent cluster architecture as its organising structure.
Six Emerging Clusters — Not Imposed in Advance
Cluster | Ideas | Focus | Territory
Signal Intelligence | 01–04 | Making the unmanaged community signal legible | Signal
Pre-Departure Interception | 05–07 | Detecting the decision before it is announced | Signal
Relational Stewardship | 08–11 | Mutual recognition, reciprocity, and radical transparency | Relational
Co-Ownership & Influence | 12–13 | Highest-value subscriber as structural stakeholder | Relational
Physical Delivery as Trust Signal | 14–16 | The box as a trust instrument, not a fulfilment event | Operational
Legacy & Memory | 17–18 | Making the subscriber's investment visible and honoured | Relational
18 Selected Ideas — Full Detail
01 · Signal Intelligence
Forum Divergence Score
Signal Intelligence
A real-time index comparing sentiment in managed review channels versus unmanaged peer community forums — the gap between the two becomes the primary early-warning instrument for silent exit. NLP dual-channel sentiment comparison.
Directly addresses the core finding: the critical signal lives in unmanaged peer data, not prompted satisfaction metrics. This is the instrument that makes the gap visible.
★ Quick Win — foundational infrastructure
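A minimal sketch of the divergence index, with a toy lexicon standing in for a production NLP sentiment model. The word lists and sample posts are illustrative, not project data.

```python
POSITIVE = {"excellent", "love", "great", "wonderful"}
NEGATIVE = {"finished", "disappointed", "failed", "cancel"}

def sentiment(text):
    # Toy lexicon scorer standing in for a production NLP model.
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def _mean(xs):
    return sum(xs) / len(xs)

def divergence_score(managed_posts, forum_posts):
    """Mean managed-channel sentiment minus mean peer-forum sentiment.
    A large positive gap is the early-warning signal: the managed
    metric says 'excellent' while the forum says 'finished'."""
    return (_mean([sentiment(p) for p in managed_posts])
            - _mean([sentiment(p) for p in forum_posts]))

managed = ["excellent service as always", "great box this month"]
forum = ["i am not angry i am finished", "delivery failed again"]
```

On this toy data the managed channel scores positive, the forum scores negative, and the gap between them, not either level alone, is the instrument.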
02 · Signal Intelligence
The Silence Classifier
Signal Intelligence
AI distinguishes between 'satisfied silence' and 'concluded silence' — two states that look identical in engagement metrics. Temporal engagement pattern recognition and change-point detection.
The verbatim 'I am not angry. I am finished.' is precisely the silence this classifier is designed to detect. Addresses the core failure mode where standard metrics cannot distinguish disengagement from contentment.
★ Quick Win — deployable on existing engagement data
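A minimal sketch of the satisfied-versus-concluded distinction, comparing the current silence against the subscriber's own cadence. The 3× threshold and the sample event histories are illustrative assumptions.

```python
def classify_silence(event_days, today):
    """event_days: sorted day numbers of past engagement events.
    The same silence means different things at different personal
    baselines -- a change-point from a regular cadence reads as
    concluded; a long gap within a sparse cadence does not."""
    gaps = [b - a for a, b in zip(event_days, event_days[1:])]
    baseline = sum(gaps) / len(gaps)
    current_gap = today - event_days[-1]
    if current_gap > 3 * baseline:
        return "concluded silence"   # sharp break from personal cadence
    return "satisfied silence"       # within the subscriber's normal rhythm

regular = [0, 30, 60, 90, 120]   # monthly cadence, then five months dark
sparse = [0, 90, 180, 270]       # quarterly cadence, three months dark
```

The monthly subscriber who goes dark for five months is flagged; the quarterly subscriber with an identical-looking 90-day gap is not.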
03 · Signal Intelligence
Community Vocabulary Shift
Signal Intelligence
Monitors the language long-tenure subscribers use in community forums — a shift from 'we' to 'they' language is an early-exit signal. Longitudinal NLP pronoun and framing-shift detection.
Surfaced independently across all three ideation sessions — the single most convergent idea in the corpus. Three isolated processes reaching the same idea independently is evidence, not coincidence.
★ Quick Win — NLP on existing forum corpus
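The 'we' → 'they' signal can be sketched as a simple pronoun-share measure over a period's forum posts; the pronoun sets and sample posts are illustrative assumptions.

```python
import re

def we_they_ratio(posts):
    """Share of first-person-plural among we/they pronouns across posts.
    A falling ratio over successive periods is the 'we' -> 'they'
    early-exit signal."""
    text = " ".join(posts).lower()
    we = len(re.findall(r"\b(we|our|us)\b", text))
    they = len(re.findall(r"\b(they|their|them)\b", text))
    return we / (we + they) if (we + they) else None

year_one = ["we love what we're building here", "our winemakers are superb"]
year_five = ["they seem to have changed direction", "their focus isn't us anymore"]
```

Tracked longitudinally per subscriber, the drop from a 'we'-dominated to a 'they'-dominated ratio is the earliest language-level disengagement signal.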
04 · Signal Intelligence
Referral Velocity Reversal
Signal Intelligence
Tracks referral velocity over the subscriber's full tenure — cessation is a proxy for the loss of social conviction. Time-series analysis + personal baseline modelling with anomaly alerting.
A subscriber who referred three friends in year one and zero in year five is a fundamentally different risk signal from a subscriber who has never referred. Invisible in standard churn models.
★ Quick Win — derivable from existing CRM data
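A minimal sketch of the velocity-reversal alert against a personal baseline; the two-year window and 25% threshold are illustrative assumptions.

```python
def referral_velocity_alert(referrals_by_year, window=2):
    """Flag a subscriber whose recent referral velocity has collapsed
    relative to their own earlier baseline. Subscribers who never
    referred are not flagged -- they have no social conviction to lose."""
    early, recent = referrals_by_year[:-window], referrals_by_year[-window:]
    baseline = sum(early) / len(early)
    if baseline == 0:
        return False
    return sum(recent) / len(recent) < 0.25 * baseline

fading_angel = [3, 2, 2, 0, 0]      # referred heavily early, nothing recently
never_referred = [0, 0, 0, 0, 0]
```

This is the distinction the idea names: the fading referrer is flagged, the never-referrer is not, even though both show zero recent referrals.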
05 · Pre-Departure Interception
The Anticipation Metric
Pre-Departure Interception
Measures whether subscribers are opening delivery notification emails earlier or later over time — declining anticipation is a leading indicator. Email open-time distribution analysis against personal tenure baseline.
Counterintuitive: the speed at which someone opens a delivery email encodes their emotional investment. A falling open-velocity curve is visible weeks before any explicit dissatisfaction signal.
★ Quick Win — existing email platform data
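The personal-baseline trend can be sketched as a least-squares slope over successive open latencies; the sample latencies are illustrative.

```python
def open_latency_slope(latencies_hours):
    """Least-squares slope of email open latency across successive
    deliveries. A rising slope (slower and slower opens) is the
    declining-anticipation leading indicator."""
    n = len(latencies_hours)
    xs = range(n)
    mx, my = sum(xs) / n, sum(latencies_hours) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, latencies_hours))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

cooling = [1, 2, 4, 9, 20, 30]   # opening the delivery email later and later
```

A positive slope against the subscriber's own history, not an absolute threshold, is the signal; a flat series produces a slope of zero.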
06 · Pre-Departure Interception
The Warm Goodbye Classifier
Pre-Departure Interception
NLP model trained specifically on the calm, positive, conclusive language that precedes voluntary churn — distinguishing 'I am finished' from neutral interactions. Specialised departure-language classifier trained on historical churn-correlated text.
Standard sentiment tools flag anger, not conclusion. This classifier closes the most dangerous blind spot: the subscriber who is polite, positive, and already decided.
Medium-Term — requires training data from historical departures
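A toy stand-in for the trained classifier: hand-written phrase lists replace the historical training data, and both lists are illustrative assumptions. It shows the register distinction the real model would have to learn.

```python
CONCLUSIVE = ("i am finished", "my last box", "time to move on")
ANGRY = ("furious", "unacceptable", "demand a refund")

def departure_register(message):
    """Separate conclusion from anger. Standard sentiment tools flag
    the second and miss the first -- the polite, positive subscriber
    who has already decided."""
    text = message.lower()
    if any(p in text for p in CONCLUSIVE):
        return "warm goodbye"   # calm, conclusive, already decided
    if any(p in text for p in ANGRY):
        return "complaint"      # loud, recoverable, visible to NPS
    return "neutral"
```

The verbatim signal itself classifies as a warm goodbye rather than a complaint, which is exactly the blind spot this idea closes.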
07 · Pre-Departure Interception
The Peer Influence Map
Pre-Departure Interception
Maps the subscriber's influence network within the community — identifying which Angels are downstream referral nodes whose departure would generate cascading churn. Network graph analysis + influence propagation modelling.
A high-influence Angel's departure creates downstream risk beyond the individual LTV loss. This model makes that risk visible before it compounds.
Medium-Term — requires network mapping infrastructure
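The cascade-risk idea can be sketched as a traversal of the referral graph; the edge data below is hypothetical.

```python
def downstream_cohort(referrals, angel):
    """Everyone whose membership traces back to this Angel via
    referrals -- the cohort put at risk by their departure, over and
    above the individual LTV loss."""
    seen, frontier = set(), [angel]
    while frontier:
        node = frontier.pop()
        for child in referrals.get(node, []):
            if child not in seen:
                seen.add(child)
                frontier.append(child)
    return seen

# Hypothetical referral edges: referrer -> subscribers they referred.
referrals = {"A": ["B", "C"], "B": ["D"], "C": [], "D": ["E", "F"]}
```

Here Angel A's departure puts five downstream subscribers at risk, while Angel C's puts none: two identical individual LTVs, two very different risks.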
08 · Relational Stewardship
Exit Interview Before They Leave
Relational Stewardship
AI identifies subscribers showing pre-departure signals and triggers a proactive human conversation — not a retention script, but a genuine bilateral enquiry. Departure-risk scoring + personalised outreach generation.
Reframes the relationship as mutual rather than transactional at the critical moment. The highest-value subscribers leave in conclusion, not complaint — the intervention must match that register.
★ Quick Win — process design, not technology
09 · Relational Stewardship
Commitment Reciprocity Test
Relational Stewardship
Before any price increase, AI identifies the highest-LTV subscribers and asks directly: does the price still feel fair? Not as a survey — as a genuine question. Commitment imbalance detection + honest outreach generation.
The most important single intervention at zero technology cost. That question, sent and genuinely answered, would do more to close the metric-truth gap than any model.
★ Quick Win — available at zero technology cost
10 · Relational Stewardship
The Relationship Ledger
Relational Stewardship
A private subscriber dashboard showing their full contribution — referrals, winemakers funded, years of membership, community posts — turning invisible commitment into visible shared record. CRM + contribution data aggregation.
Makes the relationship bilateral and visible. The highest-value subscribers have invested years into a brand story they cannot see reflected back. The Ledger makes that visible.
Medium-Term — UI design and data integration required
11 · Relational Stewardship
The Honest Annual Report
Relational Stewardship
An annual communication to long-tenure subscribers that includes operational gaps, delivery failures, and community complaints alongside positives — radical transparency as a relational posture. NLP data aggregation + editorial AI curation + human review.
Values-aware Angels have already found the gaps in the community forum. Pre-emptive transparency is the reputationally correct posture with the UK market specifically.
Medium-Term — editorial and cultural commitment required
12 · Co-Ownership & Influence
The Tenure Council
Co-Ownership & Influence
A standing advisory group of the highest-LTV subscribers who receive early access to strategic decisions and are genuinely consulted before those decisions are made. Stakeholder influence modelling + structured consultation protocols.
Structurally important but requires cultural change. Requires leadership to be genuinely willing to receive and respond to structured criticism — performative transparency with values-aware Angels is more damaging than none.
Ambitious — governance design and cultural commitment required
13 · Co-Ownership & Influence
The Disagreement Forum
Co-Ownership & Influence
A dedicated space within the community where subscribers can formally disagree with a company decision — with a guaranteed leadership response within a defined timeframe. AI topic clustering + structured response routing.
Makes disagreement structurally safe. The alternative — disagreement migrating to unmanaged peer forums where the company cannot respond — is the P1 failure mode itself.
Medium-Term — process design and leadership commitment
14 · Physical Delivery
Failure-First Insert
Physical Delivery as Trust Signal
When any delivery issue is detected, the next box includes a pre-emptive acknowledgement inserted before the subscriber raises the issue. NLP + predictive logistics monitoring + personalised apology generation.
The POV identifies the delivery moment as the trigger for the 'I am finished' conclusion. Pre-empting the failure before it becomes a complaint converts a trust-destroying event into a trust-building one.
★ Quick Win — logistics data + insert print process
15 · Physical Delivery
The Grief Protocol Box
Physical Delivery as Trust Signal
When a subscriber has been silent for 90+ days, the next box ships with an additional quiet gift not in the catalogue, with a note: 'We noticed your quiet. No action needed.' Silence-detection AI without requiring explicit feedback.
The physical box becomes the relational instrument rather than the fulfilment vehicle. A quiet acknowledgement without a call to action is structurally different from a win-back offer.
★ Quick Win — process and small gift budget
16 · Physical Delivery
Counterintuitive Downgrade Offer
Physical Delivery as Trust Signal
High-tenure subscribers who signal fatigue are proactively offered a lower-cost tier — 'the version for people who already know us' — reducing volume but preserving the relationship. Lifetime value model identifies subscribers where relationship preservation outweighs revenue.
Revenue optimisation at the moment of departure accelerates the conclusion. Reducing ask before it is demanded is a structurally different posture — and may preserve the long-term relationship.
Medium-Term — pricing and tier design required
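The preservation-versus-revenue trade can be sketched as an expected-value rule. The per-Angel revenue (~£324) and replacement cost (~£398) figures appear in the feasibility study in this appendix; the churn probability and downgrade price below are illustrative assumptions.

```python
def offer_downgrade(p_churn, annual_revenue, downgrade_revenue, replacement_cost):
    """Offer the lower tier when the expected cost of losing the Angel
    (lost revenue plus replacement CAC) exceeds the revenue given up
    by downgrading them."""
    expected_churn_cost = p_churn * (annual_revenue + replacement_cost)
    revenue_given_up = annual_revenue - downgrade_revenue
    return expected_churn_cost > revenue_given_up

# Fatigued high-tenure Angel: 60% churn risk, GBP 324 annual revenue,
# GBP 180 hypothetical lower tier, GBP 398 replacement cost.
decision = offer_downgrade(p_churn=0.6, annual_revenue=324,
                           downgrade_revenue=180, replacement_cost=398)
```

At high churn risk the rule says downgrade; for a low-risk subscriber with the same figures it says hold the current tier, which is why the LTV model, not blanket policy, drives the offer.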
17 · Legacy & Memory
Contribution Archaeology
Legacy & Memory
Surfaces the complete history of a subscriber's community contribution — every forum post, winemaker funded, referral made — compiled into a 'Your Story With Us' document at year five and ten. NLP + CRM integration for full history reconstruction.
The subscriber's investment is invisible to them. Making it visible — comprehensively, not as a highlight reel — converts abstract loyalty into concrete shared record. The most powerful retention mechanism is showing someone what they have already built.
Ambitious — data archaeology and reconstruction at scale
18 · Legacy & Memory
The Alumni Network
Legacy & Memory
Former long-tenure subscribers maintained in a distinct alumni relationship — receiving occasional non-commercial updates and a standing personalised re-entry invitation. Post-churn relationship ML + personalised re-engagement NLP calibrated to departure context.
Converts cancellation from a terminal event into a relationship state. The highest-value subscribers rarely leave in anger — they leave in conclusion. An alumni network maintains the connection through which many will return.
Medium-Term — community infrastructure and non-commercial editorial stance
Summary Table
# | Idea Title | Cluster | Horizon
01 | Forum Divergence Score | Signal Intelligence | Quick Win
02 | The Silence Classifier | Signal Intelligence | Quick Win
03 | Community Vocabulary Shift | Signal Intelligence | Quick Win
04 | Referral Velocity Reversal | Signal Intelligence | Quick Win
05 | The Anticipation Metric | Pre-Departure Interception | Quick Win
06 | The Warm Goodbye Classifier | Pre-Departure Interception | Medium-Term
07 | The Peer Influence Map | Pre-Departure Interception | Medium-Term
08 | Exit Interview Before They Leave | Relational Stewardship | Quick Win
09 | Commitment Reciprocity Test | Relational Stewardship | Quick Win
10 | The Relationship Ledger | Relational Stewardship | Medium-Term
11 | The Honest Annual Report | Relational Stewardship | Medium-Term
12 | The Tenure Council | Co-Ownership & Influence | Ambitious
13 | The Disagreement Forum | Co-Ownership & Influence | Medium-Term
14 | Failure-First Insert | Physical Delivery as Trust Signal | Quick Win
15 | The Grief Protocol Box | Physical Delivery as Trust Signal | Quick Win
16 | Counterintuitive Downgrade Offer | Physical Delivery as Trust Signal | Medium-Term
17 | Contribution Archaeology | Legacy & Memory | Ambitious
18 | The Alumni Network | Legacy & Memory | Medium-Term
4 · Feasibility Stage · Double Diamond — Converge

Feasibility Study — P1 Response Assessment

The Feasibility Study assessed whether the 18 prioritised ideas constituted a viable, coherent, and responsible response to P1. This was the first stage to have access to the full financial context — deliberately withheld during ideation to prevent commercial anchoring. The study fed the conditions and cluster architecture directly into the Concept Development stage.
Problem Statement
P1 Definition
The silent churn of long-tenure high-LTV subscribers caused by operational delivery failure — a problem invisible to prompted satisfaction metrics but evidenced in unsolicited peer-to-peer community forum sentiment, confirmed by a 24% annual churn rate in the highest-value customer cohort.
Verbatim signal: "I am not angry. I am finished."
Financial Context — HY26 Public Data, UK Estimated
Metric · Figure · Source
Annual churn rate — highest LTV cohort · 24% · CRM analysis
Estimated UK high-LTV Angel cohort · ~24,000 · Segment estimate
Annualised revenue per Angel · ~£324 · HY26 derived
Replacement cost per churned Angel · ~£398 · CAC estimate
Acquisition payback window · 44 months · HY26 derived
Annual replacement burden (current) · ~£2.3m · Calculated
Revenue preserved per 5pp churn reduction · ~£478,000 · Calculated
Group NPS · 76 · HY26 reported
ICO maximum fine exposure (group) · ~£8m · 4% of £200m group revenue
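The derived rows in the table reconcile against its own inputs. A quick arithmetic check, using only the table's estimates (this is a sanity check, not a financial model):

```python
# Cross-checking the table's derived figures from its own estimated inputs.
cohort = 24_000            # estimated UK high-LTV Angel cohort
churn_rate = 0.24          # annual churn, highest-LTV cohort
replacement_cost = 398     # CAC estimate per churned Angel, GBP

annual_replacement_burden = cohort * churn_rate * replacement_cost
saved_per_5pp = cohort * 0.05 * replacement_cost
ico_max_fine = 0.04 * 200_000_000  # 4% of GBP 200m group revenue

print(round(annual_replacement_burden))  # 2292480 -> reported as ~GBP 2.3m
print(round(saved_per_5pp))              # 477600  -> reported as ~GBP 478,000
print(round(ico_max_fine))               # 8000000 -> reported as ~GBP 8m
```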
RAG Assessment by Feasibility Dimension
Dimension · RAG · Assessment
Strategic Feasibility · GREEN · The prioritised ideas align tightly with the stated business direction of a smaller, materially more profitable business centred on core Angel retention. The seven Quick Win ideas are low-capital, high-signal interventions operating on existing data infrastructure. The strategic direction demands high-LTV retention as the primary value driver; the ideation output treats this as the organising principle throughout.
Commercial Feasibility · GREEN · The commercial case is strongly positive. At 24% annual churn in the highest-LTV cohort, with replacement cost of approximately £398 per churned Angel and a 44-month acquisition payback window, the cost of inaction at ~£2.3m annually substantially exceeds any credible implementation cost for the ideas prioritised. The Quick Win ideas individually carry negligible implementation cost relative to this figure.
Technical Feasibility · GREEN · All AI capabilities required — NLP sentiment analysis, temporal pattern recognition, change-point detection, generative personalisation — are mature, commercially available, and deployable without bespoke model development. The Quick Win ideas rely on NLP techniques operational in production environments in 2025–26. No capability gap exists that would prevent deployment within a 6–12 month timeframe.
Operational Feasibility · AMBER · The Quick Win ideas are implementable within the constraints of a business undergoing active cost reduction and restructuring, provided implementation is sequenced correctly. Ideas 01–05 are derivable from existing CRM and email platform data with modest engineering investment. The full 18-idea set implemented simultaneously would exceed operational capacity — but the Quick Win subset is operationally viable as a phased first tranche.
Human & Ethical Feasibility · AMBER · The ideation output is ethically well-constructed in intent. However, an Amber rating is warranted because the ideas do not specify the HITL architecture in sufficient detail. The Silence Classifier and Community Vocabulary Shift depend on NLP models that, if poorly calibrated, could reduce the nuanced signal of concluded loyalty to a binary churn flag — recreating the managed metric problem the project was designed to solve. Resolvable at concept development stage.
Reputational Feasibility · GREEN · The UK Angel community is values-aware and digitally engaged. The ideas most likely to generate positive reputational impact demonstrate radical transparency: the Honest Annual Report, the Commitment Reciprocity Test, and the Failure-First Insert each signal that the company trusts the Angel with the truth before she discovers it independently in the forum. Proactive transparency is the reputationally correct posture for the UK market specifically.
Critical Gaps
Gap 1 — HITL Architecture Unspecified
The consolidated ideation output names AI capabilities for each idea but does not specify how the Human-in-the-Loop layer is designed. The risk — that algorithmic processing of community data may produce a sanitised signal as misleading as the original NPS — is not resolved by the current ideation output. A dedicated design session is required before concept development to specify who receives the unmediated signal, in what form, and with what intervention authority. This is a design constraint to be resolved in concept development, not a reason to return to ideation.
Gap 2 — No DPIA Framework Present
None of the 18 prioritised ideas include a DPIA or privacy-by-design specification. The Forum Divergence Score, Silence Classifier, Community Vocabulary Shift, and Peer Influence Map all involve behavioural profiling of identifiable subscribers at a scale that would trigger the UK GDPR Article 35 threshold. The concept development phase must include legal and data protection review as a parallel workstream, not a sequential gate.
Gap 3 — No Winemaker-Side Equity Dimension
The ideation output is entirely subscriber-facing. The relational credibility of the brand with Angels is partly constituted by the perceived fairness of its treatment of winemakers. A concept that improves Angel-facing relational stewardship without addressing the winemaker equity dimension risks an internal inconsistency that values-aware Angels are likely to identify. This gap does not require a return to ideation on P1 but flags a related problem to be scoped separately.
Critical Path — Minimum Conditions Before Deployment
# · Condition · Detail
1 · DPIA Completed · A Data Protection Impact Assessment under UK GDPR Article 35 must be completed and approved before any AI-driven behavioural profiling is deployed. This is a hard legal gate. ICO maximum fine exposure at group level is approximately £8m at current revenue.
2 · HITL Design Specified · The Human-in-the-Loop architecture must be designed and documented before deployment. The interface through which community signal reaches human reviewers must preserve verbatim community language, not only model scores. The individuals authorised to act must have relationship authority, not only retention incentives.
3 · Quick Win Sequencing Agreed · Recommended first tranche: Forum Divergence Score, Community Vocabulary Shift, and Failure-First Insert — three ideas requiring no new customer-facing infrastructure, validatable against existing data before broader rollout.
4 · Target Cohort Defined · Before Exit Interview Before They Leave or Commitment Reciprocity Test are deployed, the specific tenure cohort at risk must be defined from CRM data — minimum 5-year tenure, pre-funded balance, historical referral activity.
5 · Senior Leadership Sponsorship · The Tenure Council and Disagreement Forum require that senior leadership are genuinely willing to receive and respond to structured criticism from the Angel community. Without that commitment, both ideas become performative — and performative transparency with a values-aware community is more damaging than no transparency at all.
Closing Recommendation
Verdict: Proceed to Concept Development — With Conditions
The financial case is unambiguous. At 24% annual churn in the highest-LTV cohort, with a £398 replacement cost per churned Angel and a 44-month acquisition payback window, the cost of inaction compounds each quarter. The group NPS of 76 is not a signal that P1 is resolved — it is evidence that the metric-truth gap is functioning exactly as the POV describes: the managed metric says everything is fine; the unmanaged community signal confirms that the highest-value customers have already decided. The recommendation is to proceed to concept development on the Quick Win ideas in the first sprint, with the DPIA and HITL design workstreams running in parallel as non-negotiable prerequisites to any AI-driven deployment against the Angel subscriber base. The most important single intervention available at zero technology cost is Idea 09: before the next price review, ask the highest-LTV subscribers whether it still feels fair. That question, sent and genuinely answered, would do more to close the metric-truth gap than any model.
5
Concept Development Stage · Double Diamond — Converge

The Quiet Signal System

The Concept Development stage synthesised the 18 prioritised ideas — bounded by the Feasibility Study conditions — into a single named concept with an AI capability stack, Angel experience narrative, HITL architecture, and prototype specification. The visual outputs produced at this stage (storyboard and service blueprint) are embedded inline below in their original rendered form.
Concept Name & One-Line Description
The Quiet Signal System
An AI-powered relational intelligence system that reads the unmediated peer community forum as its primary data source, detects the behavioural and linguistic signals of concluded departure before any cancellation action, and surfaces those signals — verbatim, not scored — to a named human relationship steward who decides what to do next.

The Problem It Solves — P1 Reference

P1 is the silent churn of long-tenure high-LTV subscribers caused by operational delivery failure — invisible to prompted satisfaction metrics but evidenced in unsolicited peer-to-peer community forum sentiment. The company's NPS of 76 says everything is fine. The unmanaged community forum says the highest-value customers have already decided.

The Quiet Signal System is designed to close the gap between what the managed metric measures and what the unmanaged community reveals — and to do so before the subscriber acts on their conclusion rather than after.


The Angel Experience — What Changes and Why It Matters

Before the Quiet Signal System is live, the Angel rates 9/10 when asked, posts less frequently in the community forum, stops referring friends, and opens delivery emails later. No complaint is lodged. The company's dashboard shows health. She has already decided.

After the system is live, the Angel receives a letter — not a survey, not a discount code. The letter names her specific five-year contribution, her referrals, a product she helped shape. It asks one question: what must we do to deserve year six? It arrives before she has acted on her conclusion. She did not ask for this. She did not expect it.

What changes is not the system's sophistication — it is the relational register. The company stops measuring her and starts listening to her. The gap between the managed metric and her lived reality is closed not by a better model, but by a human who read what she actually wrote and chose to respond as though it mattered.


The AI Capability Stack — Four Layers
Layer 1 · Signal Intelligence
Unmediated Signal Detection
Continuously reads the peer-to-peer community forum using NLP dual-channel sentiment comparison (managed NPS vs. unmanaged forum), longitudinal pronoun-shift detection (we → they), and temporal change-point analysis for 'satisfied vs. concluded' silence. Referral velocity reversal monitored continuously as the social conviction signal.
Enables: A real-time, subscriber-level signal of concluded departure that did not exist before — derived entirely from unmediated peer language, not from prompted responses.
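The pronoun-shift component of Layer 1 lends itself to a very small sketch. A minimal illustration, assuming forum posts arrive as plain strings; the function name, wordlists, and the ratio itself are illustrative assumptions, not the production model:

```python
import re

def pronoun_ratio(posts):
    """Share of first-person-plural pronouns ('we', 'our', 'us') among all
    brand-referring pronouns in a batch of forum posts. A fall toward 0.0
    marks the 'we -> they' transition described above."""
    we_group = they_group = 0
    for post in posts:
        tokens = re.findall(r"[a-z']+", post.lower())
        we_group += sum(t in {"we", "our", "us"} for t in tokens)
        they_group += sum(t in {"they", "their", "them"} for t in tokens)
    total = we_group + they_group
    return we_group / total if total else None  # None: no pronoun evidence

# Illustrative longitudinal comparison for one subscriber.
earlier = ["We love the direction", "Proud of what we helped shape"]
recent = ["They seem to have changed direction",
          "Not sure it's the path they should take"]

print(pronoun_ratio(earlier), pronoun_ratio(recent))  # 1.0 0.0
```

In practice the ratio would be tracked per subscriber over rolling time windows, with the change-point test applied to the resulting series rather than to a single pair of batches.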
Layer 2 · Pre-Departure Interception
Anticipation & Departure Modelling
Tracks delivery email open-velocity against personal tenure baseline (Anticipation Metric). Runs the Warm Goodbye Classifier — trained specifically on the calm, conclusive language that precedes voluntary churn, not on complaint language. Maps peer influence networks to identify high-downstream-risk departures (Peer Influence Map).
Enables: Interception of the departure decision 4–6 weeks before any cancellation action — the specific window in which relational intervention is still possible.
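The Anticipation Metric compares a subscriber against her own history, not a population norm. A minimal change-detection sketch; the window sizes, the 2x threshold, and the function name are illustrative assumptions:

```python
from statistics import mean

def anticipation_decay(open_delays_hours, baseline_window=6,
                       recent_window=3, factor=2.0):
    """Flag a subscriber whose recent delivery-email open delay has materially
    slowed against her own tenure baseline. Thresholds are illustrative."""
    if len(open_delays_hours) < baseline_window + recent_window:
        return False  # not enough history to establish a personal baseline
    baseline = mean(open_delays_hours[:baseline_window])
    recent = mean(open_delays_hours[-recent_window:])
    return recent > factor * baseline

# A subscriber who used to open delivery emails within ~2 hours, now days later.
delays = [1.5, 2.0, 1.0, 2.5, 1.8, 2.2, 30.0, 48.0, 72.0]
print(anticipation_decay(delays))  # True: the decay crosses the threshold
```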
Layer 3 · Relational Generation
Personalised Intervention Authoring
Uses the subscriber's complete five-year history — tenure, referral record, community contributions, products linked to personal context — to generate a personalised intervention letter draft for human review. Also maintains the Relationship Ledger for each Angel: a private record of their full contribution made visible to them on request.
Enables: Human-quality, relationship-specific outreach at scale — but the AI drafts, the human decides. No communication is sent without relationship steward approval.
Layer 4 · Operational Orchestration
Physical Delivery as Trust Instrument
Monitors logistics and product quality signals to trigger the Failure-First Insert before the subscriber raises the issue. Orchestrates the Grief Protocol Box when 90-day silence is confirmed. If cancellation is confirmed, curates the Last Box Protocol from the full five-year taste graph. Coordinates physical delivery timing with relational intervention sequence.
Enables: The physical box operating as a relational instrument. The delivery moment, which the POV identifies as the current source of the 'I am finished' conclusion, becomes the trust-building event instead.

The Community Signal Engine
Primary Data Source: Unmediated Peer-to-Peer Forum, Not NPS
The Quiet Signal System uses the unmanaged peer-to-peer community forum — where subscribers speak to each other, not to the brand — as its primary signal source. This is a direct structural response to the HITL Critical Finding: any concept that relies on prompted metrics risks replicating the measurement failure it is designed to solve.

The Forum Divergence Score compares managed NPS sentiment against unmanaged forum sentiment in real time. The Community Vocabulary Shift model monitors long-tenure subscriber language longitudinally. The Silence Classifier detects concluded departure from temporal engagement patterns. None of these three instruments use survey data. All three use unmediated language.
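As a sketch of the first instrument, the divergence reduces to a gap between two normalised signals. The normalisation and the 0.5 review threshold below are illustrative assumptions, not the production scoring:

```python
def forum_divergence(nps, forum_sentiment):
    """Gap between the managed metric and the unmanaged forum signal.
    nps is on the standard [-100, 100] scale; forum_sentiment on [-1, 1]."""
    return nps / 100.0 - forum_sentiment

# NPS of 76 looks healthy; long-tenure forum sentiment has drifted to -0.18.
# Neither number alone raises an alert; the gap between them does.
score = forum_divergence(76, -0.18)
print(round(score, 2))  # 0.94, where anything above ~0.5 warrants human review
```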

When the system surfaces a signal to the relationship steward, it surfaces the verbatim forum language first — not a churn score, not a risk percentile. The steward reads what the Angel actually wrote before making any decision. This design choice prevents the most predictable failure mode: converting a nuanced relational signal into a managed metric that recreates the original problem.

Feasibility Confidence

The Quiet Signal System passes all five conditions set in the Feasibility Study. The Quick Win ideas (Forum Divergence Score, Community Vocabulary Shift, Silence Classifier, Anticipation Metric, Failure-First Insert, and Grief Protocol Box) are deployable within existing CRM and email platform infrastructure without requiring new customer-facing technology. The HITL architecture is explicitly designed to surface verbatim forum language to human reviewers rather than model scores — directly addressing the Amber rating on Human and Ethical Feasibility. The DPIA and legal review workstreams are designated as parallel prerequisites, not sequential gates. The Ambitious ideas (Tenure Council and Contribution Archaeology) are scoped as a separate governance conversation that does not block the Quick Win sprint. The commercial case — at £2.3m annual replacement burden with a 44-month payback window — is sufficient to justify the concept's development and prototype investment.


Ideas Incorporated and Set Aside

Incorporated in the concept: Ideas 01–05 (Signal Intelligence and Pre-Departure Interception Quick Wins), 08–09 (Relational Stewardship Quick Wins), 14–15 (Physical Delivery Quick Wins), 06–07 (Warm Goodbye Classifier and Peer Influence Map), and 18 (Alumni Network). These form the complete operating loop of the Quiet Signal System from detection through intervention to post-outcome stewardship.

Idea 10 — The Relationship Ledger Not incorporated at prototype stage. Valuable as a subscriber-facing feature but requires UI development that is outside the scope of Layer 1 validation. Scoped for delivery once the prototype has confirmed the classifier's reliability.
Idea 11 — The Honest Annual Report Requires editorial commitment and cultural change beyond the system's technical scope. Recommended as a parallel workstream, not a system component.
Idea 12 — The Tenure Council Requires governance design and senior leadership cultural commitment. Scoped as a separate governance conversation. Not blocked — deferred.
Idea 13 — The Disagreement Forum Community infrastructure requirement. Recommended as a medium-term addition to the concept once Quick Wins are validated.
Idea 16 — Counterintuitive Downgrade Offer Requires pricing and tier design. Operationally important but outside the scope of the signal system itself.
Idea 17 — Contribution Archaeology Ambitious. Requires data reconstruction at scale. Recommended as a five-year anniversary intervention once the concept is operational.

The Prototype — Layer 1 Proof of Concept
The Relational Health Dashboard

What is being tested: The single foundational assumption the entire system depends on — that AI can reliably distinguish concluded departure from active complaint in unmediated peer community language. This is the specific signal that algorithmic sanitisation consistently misses and that makes the metric-truth gap possible.

Why this is the correct place to start: Before any relational intervention is built, before any letter is sent, before any human steward is trained to act, the system must prove that Layer 1 is reliable. If the Silence Classifier cannot reliably distinguish 'I am not angry, I am finished' from 'I am frustrated but engaged,' all subsequent intervention layers are built on a false positive. Layer 1 must be validated first.

What the dashboard does: The relationship steward opens the dashboard and sees a short list of Angel subscribers whose Layer 1 signals have crossed the escalation threshold. For each subscriber, the dashboard surfaces: the verbatim community forum language that triggered the signal, not a score; a five-year history brief showing tenure, referral count, pre-funded balance, and last meaningful community contribution; the Anticipation Metric decay curve showing email open-velocity over tenure; and the specific Layer 1 signal type that triggered the alert — vocabulary shift, silence classification, or divergence score.

The steward reads first. Then decides. The system advises — the human acts. No communication is generated, no intervention is triggered, and no signal is acted upon without the steward's explicit decision. This is the HITL architecture made operational.
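The dashboard entry described above can be captured as a small record type. A minimal sketch assuming a Python service; the class name, field names, and the example values are illustrative, not a production schema. Note that, by design, no churn score field exists:

```python
from dataclasses import dataclass, field

@dataclass
class StewardAlert:
    """One Relational Health Dashboard entry, verbatim language first.
    Deliberately carries no churn score or risk percentile field."""
    verbatim_forum_language: str   # what the Angel actually wrote
    signal_type: str               # 'vocabulary_shift' | 'silence' | 'divergence'
    tenure_years: int
    referral_count: int
    prefunded_balance_gbp: float
    last_contribution: str
    open_velocity_curve: list = field(default_factory=list)  # Anticipation decay

# Illustrative entry mirroring the storyboard example (details hypothetical).
alert = StewardAlert(
    verbatim_forum_language="they seem to have changed direction recently...",
    signal_type="vocabulary_shift",
    tenure_years=5,
    referral_count=12,
    prefunded_balance_gbp=120.0,
    last_contribution="community tasting note, 14 months ago",
)
assert not hasattr(alert, "churn_score")  # the steward reads, then decides
```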


Visual Outputs — Rendered Inline from Original Production
Storyboard & Service Blueprint

The storyboard and service blueprint below were produced as the visual outputs of the Concept Development stage. They are rendered here in their original form, as produced by the Quiet Signal System concept development prompt.

A
The Angel Experience — Before, During & After
Five frames showing the Angel's journey as the Quiet Signal System moves from absence to activation. Each frame names the specific system component operating in the background.
01 / 05
9 NPS
The Invisible Drift

The Angel rates 9/10 when asked. But she has posted in the community forum half as often this year. She opens her box, uses the products, and says nothing. The company's dashboard shows a healthy NPS. The forum shows something different.

System: Inactive — no signal yet captured from unmanaged forum channels
02 / 05
'they' · DIVERGENCE · NPS: 76 · FORUM: ↓18
The Signal Surfaces

The Forum Divergence Score flags a gap: NPS holds at 76, but community language analysis detects a shift from 'we love what they're building' to 'they seem to have changed direction.' The Silence Classifier scores this as concluded rather than transient.

Forum Divergence Score + Community Vocabulary Shift + Silence Classifier (Layer 1 active)
03 / 05
RELATIONAL HEALTH DASHBOARD · "they seem to have changed direction recently..." · HUMAN REVIEW
The Relationship Steward Reads the Signal

The relational health dashboard surfaces the subscriber's verbatim forum language — not a churn score — to a named relationship steward. The steward reads what the Angel actually wrote, sees her five-year history, referral count, and pre-funded balance. No script. No retention discount. A human with context.

HITL handoff — AI surfaces verbatim signal; human steward reads and decides response
04 / 05
5 YEARS · 12 REFERRALS · £1,620 SPEND · WHAT MUST WE DO FOR YEAR SIX?
The Exit Interview Before She Leaves

The Angel receives a letter — not a survey, not a discount code. It names her specific five-year contribution, her referrals, a product she helped shape. It asks one question: what must we do to deserve year six? It arrives before she has acted on her conclusion. She did not ask for this.

Exit Interview Before They Leave (Idea 08) + Referral Velocity Reversal context (Idea 04)
05 / 05
RENEWED · ALUMNI · COMMUNITY VOCABULARY SHIFT — MONITORED · 'WE' LANGUAGE RECOVERED — SIGNAL POSITIVE
Renewal or Honoured Departure

The Angel responds — or she doesn't. Either way, the relationship is changed. If she stays, Community Vocabulary Shift monitors the recovery of 'we' language as the forward signal. If she leaves, the Last Box Protocol and Alumni Network preserve the connection without pressure. The metric-truth gap is closed.

Alumni Network (Idea 18) + Last Box Protocol + Community Vocabulary Shift monitoring (Idea 03)

B
Service Blueprint — Angel Journey & AI System with HITL Architecture
The blueprint maps the Angel's journey across the top against the AI system operating below. Amber markers show the moments where the system hands off to a human relationship steward — the HITL architecture made tangible.
Journey Stage
Stage 01
Ongoing Subscription — Silent Drift
Stage 02
Signal Detected in Forum
Stage 03
Human Steward Reviews
Stage 04
Personalised Intervention
Stage 05
Response & Outcome
Stage 06
Post-Outcome Stewardship
Angel Experience — Visible to the subscriber
Receives and uses boxes. Rates 9/10 when surveyed.
Posts less frequently in the community forum. Stops referring friends. Opens delivery emails later. No complaint lodged. Everything looks fine from the outside.
No change in experience — entirely backstage.
The Angel does not know her forum language has shifted. She does not know she has been identified as at risk. Her next box arrives as normal.
Still no visible change.
The relationship steward reads her history in silence. No contact yet. The Failure-First Insert, if relevant, is in the box — she reads that the company acknowledges the quality issue before she had to raise it.
Receives a letter she did not ask for and did not expect.
The letter names her five-year tenure, her twelve referrals, a product she commented on. It asks: 'What must we do to deserve year six?' No survey link. No discount. No call to action. It feels like correspondence, not commerce.
She responds — or she does not.
If she responds: a steward reads her reply and acts without a script. If she cancels: the Last Box Protocol sends a curated farewell — no win-back, just thanks. If she continues silently: the Commitment Reciprocity Test is triggered ahead of any price review.
Renewed relationship — or Alumni status with standing re-entry invitation.
If renewed: community vocabulary shift is monitored for recovery of 'we' language. If departed: Alumni Network maintains a non-commercial connection. She knows the door is open. There was no dark pattern.
Line of Visibility — below here is invisible to the Angel
AI System Layer
Layer 1 — Forum Divergence Score running continuously
NLP dual-channel sentiment comparison across NPS (managed) and peer community forum (unmanaged). Baseline established per subscriber from historical forum participation.
Layer 1 · Signal
Silence Classifier + Community Vocabulary Shift detect departure signal
Change-point detection identifies 'concluded silence.' Pronoun-shift NLP flags 'we → they' transition. Referral Velocity Reversal confirms cessation of advocacy. All three signals corroborate.
Layer 1 · Layer 2
AI pre-briefs the relationship steward — verbatim-first
Dashboard surfaces the subscriber's unmediated forum language at the top — not a churn score. Below: five-year tenure summary, referral count, pre-funded balance, last meaningful community contribution, Anticipation Metric decay curve.
Layer 1 · Layer 3
→ HITL handoff — steward receives brief
Layer 3 — Generative personalisation of intervention letter
AI drafts letter using full subscriber history. Steward reviews and edits before send — AI does not send without human approval.
Layer 3 · Layer 4
→ HITL approval gate — no send without steward sign-off
Response analysis + outcome routing
NLP analyses reply sentiment and extracts actionable themes. Routes to: renewal journey, cancellation (Last Box Protocol), or continued monitoring (Commitment Reciprocity Test queue).
Layer 2 · Layer 4
→ HITL — steward reads reply before routing
Community Vocabulary Shift monitors recovery or departure
Forward signal: 'we' language recovery confirms relational repair. Churn cohort analysis routes departed Angels to Alumni Network.
Layer 1 · Layer 2
Human-in-the-Loop Layer — Relationship Steward Touchpoints
Steward not yet involved. System is monitoring. No human action required at this stage.
Signal detected but not yet routed to a human. Automated threshold check: does corroboration across the three signals exceed the escalation threshold? If yes, a dashboard alert is generated.
★ Steward receives dashboard alert
Reads verbatim forum language. Reviews five-year history brief. Decides: is this concluded silence or transient frustration? Chooses intervention type. The system advises — the human decides.
→ Primary decision point
★ Steward reviews and approves letter
Reads AI draft. Edits for genuine personal register. Signs off. AI does not send autonomously. The human voice is final.
→ Approval gate — mandatory
★ Steward reads reply; routes outcome
If the Angel replies: steward reads in full before AI routing suggestion. Confirms or overrides routing. If the Angel cancels: steward personalises farewell note. If silent: steward decides monitoring continuation.
→ Outcome decision point
Ongoing stewardship — periodic check-ins
For renewed subscribers: steward receives community vocabulary shift report quarterly. For Alumni: steward reviews re-entry invitation timing. The relationship does not revert to automated monitoring only.
Operational Layer — Physical delivery and fulfilment backstage
Standard fulfilment — no differentiation yet
Box ships. Carrier handles last-mile. No tenure recognition in packaging at this stage of drift.
No layer active
Failure-First Insert queued if quality issue detected
NLP quality-signal monitoring flags any known issue with the upcoming shipment. If flagged: Failure-First Insert generated and added to next box before the Angel receives it.
Layer 4 · Operational
Grief Protocol Box may be triggered if 90-day silence confirmed
Silence-detection AI cross-references the engagement gap with the physical delivery schedule. If 90 days of silence is confirmed: next box includes a quiet, unrequested gift. The note says: 'We noticed your quiet. No action needed.'
Layer 4
Next box ships with Tenure Ribbon and personalised Covenant card
Physical delivery is aligned with relational intervention. Covenant card references tenure and community contribution in personalised copy. Tactile and written signal arrive together.
Layer 4
Last Box Protocol if cancellation confirmed
AI curates a final box from the subscriber's complete five-year preference history. No catalogue items. Personalised farewell letter. No call to action. Shipped at the company's cost as a genuine thank you.
Layer 4
Subscriber-Curated Annual Batch for renewed Angels
If the relationship is renewed: next anniversary box is curated from the subscriber's full five-year taste graph and labelled 'Your Year in Review.' Physical delivery becomes the ongoing expression of memory and recognition.
Layer 4
Layer 1 — Signal Intelligence (Forum Divergence Score, Silence Classifier, Community Vocabulary Shift)
Layer 2 — Pre-Departure Interception (Warm Goodbye Classifier, Referral Velocity, Anticipation Metric)
Layer 3 — Relational Generation (Generative personalisation, Relationship Ledger)
Layer 4 — Operational Orchestration (Failure-First Insert, Grief Protocol Box, Last Box Protocol)
★ HITL Handoff — AI surfaces signal; human decides and acts
5a
Stretch Goal · Assignment 6 · Defining Success Framework

Success Criteria — Horizon 1 & Horizon 2

The Success Criteria were generated using the Defining Success brief as the structural framework, applied to the Quiet Signal System concept and the Feasibility Study financial context. The criteria are organised across two horizons: Horizon 1 measures prototype validation of the single foundational assumption; Horizon 2 measures full system impact once Layer 1 is confirmed reliable. A set of simulation comments then stress-tests the classifier across the full range of signal types it must detect.
Horizon 1 — Prototype Success Criteria
Customer Perspective — What Success Looks Like for the Angel
What is being measured: Whether the Angel who receives a relational intervention experiences it as genuine, not commercial.
Successful result: At least 70% of Angels who receive a letter report (when asked) that it did not feel like a retention script. The letter referenced specific, accurate details about their tenure.
Measurement tool / approach: Post-intervention qualitative interview (small sample, not a survey). Verbatim responses reviewed by human steward, not scored by NLP.

What is being measured: Whether the intervention arrives before the Angel has taken any cancellation action.
Successful result: 100% of Layer 1 alerts result in human steward review before the subscriber's next billing cycle. No intervention arrives after a cancellation decision has been made.
Measurement tool / approach: CRM timestamp comparison: alert generation date vs. cancellation action date. Alert must precede cancellation by a minimum of 14 days to qualify as a genuine interception.

What is being measured: Whether the Angel's community language recovers after a successful intervention.
Successful result: Community Vocabulary Shift model detects recovery of 'we' language (vs. 'they') within 90 days of intervention in at least 50% of renewed subscribers.
Measurement tool / approach: Layer 1 longitudinal NLP monitoring post-intervention. Pronoun ratio tracked at 30, 60, and 90 days. Compared against pre-intervention baseline.
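The 14-day interception rule reduces to a timestamp comparison. A minimal sketch; the function name and qualifying rule for never-cancelled subscribers are assumptions:

```python
from datetime import date

def genuine_interception(alert_date, cancellation_date, min_lead_days=14):
    """An alert counts as a genuine interception only if it precedes any
    cancellation action by at least min_lead_days. A subscriber who never
    cancels also qualifies (assumed interpretation)."""
    if cancellation_date is None:
        return True
    return (cancellation_date - alert_date).days >= min_lead_days

print(genuine_interception(date(2026, 3, 1), date(2026, 3, 20)))  # True, 19 days
print(genuine_interception(date(2026, 3, 1), date(2026, 3, 10)))  # False, 9 days
```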
Team Perspective — What Success Looks Like for the Relationship Steward
What is being measured: Whether the dashboard output is readable and actionable without further interpretation.
Successful result: Actionable is defined as: the steward can read the dashboard entry and make a binary decision — intervene or monitor — within 10 minutes, without consulting any other data source. The verbatim forum language at the top of the entry must be the primary input to that decision.
Measurement tool / approach: Steward time-to-decision logged per alert. Steward confidence score (1–5) collected per alert. Correlation between confidence score and intervention outcome tracked across prototype period.

What is being measured: Whether the AI draft letter requires substantial rewriting or only minor editing.
Successful result: At least 60% of AI draft letters are approved by the steward with edits of fewer than 50 words. Letters requiring substantial rewrite (150+ words changed) are flagged as Layer 3 failures and reviewed.
Measurement tool / approach: Word diff between AI draft and steward-approved version, logged per letter. Patterns in high-edit letters reviewed monthly to identify systematic Layer 3 weaknesses.

What is being measured: Whether the steward reports that the verbatim forum language accurately reflects the subscriber's departure trajectory.
Successful result: At least 80% of steward post-intervention assessments confirm that the forum language surfaced by the dashboard was consistent with the subscriber's subsequent behaviour (cancellation, renewal, or continued engagement).
Measurement tool / approach: Steward retrospective assessment at 90 days post-intervention. Binary: did the signal prove accurate? Logged per case. Used to calibrate Layer 1 threshold settings.
Business Perspective — What Success Looks Like Commercially
What Is Being MeasuredSuccessful ResultMeasurement Tool / Approach
Whether the prototype demonstrates a statistically meaningful churn reduction in the intervened cohort · Churn rate in the prototype cohort (Angels who received a Layer 1 alert and a subsequent intervention) is at least 8 percentage points lower than the control cohort churn rate over the same period. At current £398 replacement cost, this represents approximately £32,000 in saved replacement cost per 1,000 Angels in the prototype cohort. · Randomised control trial design: 50% of Angels flagged by Layer 1 in the prototype period receive intervention; 50% are monitored without intervention. 12-month churn comparison. CRM survival analysis.
Whether the commercial case for building the full system is validated by the prototype result · If the prototype demonstrates an 8pp churn reduction in the tested cohort, the projected annual benefit at full deployment (24,000 Angel cohort) exceeds the full system build cost within 18 months of deployment. Prototype result becomes the commercial gate for full system investment approval. · Financial model updated with actual prototype churn results. Presented to senior leadership at prototype close-out. Commercial gate decision documented.
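At close-out, the randomised design above reduces to a two-proportion comparison between intervened and control cohorts. A minimal sketch of that gate check, using a normal-approximation z-score as a simplification of the full survival analysis; the 8pp threshold comes from the criteria above, and `churn_gate` is an illustrative name:

```python
from math import sqrt

def churn_gate(churn_treated: float, churn_control: float,
               n_treated: int, n_control: int,
               required_reduction: float = 0.08) -> dict:
    """Check whether the intervened cohort shows at least an 8pp
    churn reduction, with a rough z-score for the difference."""
    diff = churn_control - churn_treated
    # pooled proportion and standard error for the difference
    p = (churn_treated * n_treated + churn_control * n_control) / (n_treated + n_control)
    se = sqrt(p * (1 - p) * (1 / n_treated + 1 / n_control))
    return {
        "reduction_pp": round(diff * 100, 1),
        "z": round(diff / se, 2) if se else float("inf"),
        "gate_passed": diff >= required_reduction,
    }
```

A z-score above roughly 2 indicates the observed reduction is unlikely to be chance at these cohort sizes; the commercial gate still requires the full 8pp.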

Simulation Comments — Classifier Stress Test

The following simulation comments are designed to rigorously test the full range of signal types the Layer 1 classifier must detect. Satisfactory examples confirm correct classification; unsatisfactory examples expose the limits and failure modes the classifier must be trained to handle.

Comment Text · Expected Signal Type · Detection Challenge
"They've really changed how they handle the community these days. Not sure it's the direction I'd have chosen."
CONCLUDED DEPARTURE — vocabulary shift signal. 'They' not 'we'. No complaint, no anger.
Core classifier challenge: calm, reflective language with no negative sentiment marker. Standard tools miss this entirely.
"Still enjoy the products. Just don't post here as much as I used to."
CONCLUDED SILENCE — silence classifier signal. Positive surface sentiment masking withdrawal pattern.
The most dangerous signal type. Positive sentiment + reduced engagement = the P1 failure mode exactly. Requires change-point detection, not sentiment analysis.
"My last two referrals haven't converted. Probably says something about where things are."
REFERRAL VELOCITY REVERSAL — cessation of social conviction. Indirect signal, not complaint.
Subscriber is attributing referral failure to brand trajectory, not to their own network. A leading indicator of loss of pride-in-recommendation. Easy to classify as neutral frustration; must be classified as concluded departure trajectory.
"The last box arrived absolutely battered. This is unacceptable for the price we pay."
ACTIVE COMPLAINT — not departure signal. Engaged subscriber. Intervention = resolution, not relational outreach.
Test of false positive rate. Anger signals engagement, not conclusion. The classifier must distinguish 'I am angry' (engaged) from 'I am finished' (concluded). Misclassifying this as departure wastes steward capacity and potentially patronises the subscriber.
"Would love to see more natural wine options. Anyone else feeling this?"
ACTIVE ENGAGEMENT — product request. No departure signal.
Community-directed question. Positive engagement indicator. Must not be classified as disengagement. Tests classifier's ability to distinguish product advocacy from departure language.
"I've been a subscriber for six years and this was my last box. Thank you for everything."
CONFIRMED DEPARTURE — post-decision announcement. Warm, positive, final.
Edge case: departure has already been decided and announced. The classifier must recognise this as too late for pre-departure interception and route to Last Box Protocol and Alumni Network, not to relational intervention queue. Tests routing logic, not classification.
"Does anyone know if the price is going up again this year? Just trying to plan."
PRICE SENSITIVITY SIGNAL — potential pre-departure indicator, depending on tenure and context.
Ambiguous. Could be financial planning, could be pre-departure due diligence. Classification must be contextual: if from a 5+ year subscriber with declining engagement, route to Commitment Reciprocity Test queue. If from a recent subscriber, classify as neutral. Tests contextual calibration.
"I used to feel like this community knew me. Lately it just feels like a shop."
CONCLUDED DEPARTURE TRAJECTORY — relational disengagement with clear 'before/after' framing. High priority alert.
The verbatim 'I am not angry. I am finished.' in paraphrased form. The 'used to feel' construction is the departure signal. Must be classified at the highest departure-probability tier and routed immediately to steward review — not held in monitoring queue.
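The stress-test comments above can be frozen as a labelled regression suite, so that any candidate Layer 1 classifier is scored against the expected signal types before deployment. A minimal sketch of such a harness; the label strings and the `classify` stub are illustrative, and a real classifier (or routing model) would replace the stub:

```python
# (comment, expected_label) pairs drawn from the stress test above
STRESS_SUITE = [
    ("They've really changed how they handle the community these days.",
     "concluded-departure"),
    ("Still enjoy the products. Just don't post here as much as I used to.",
     "concluded-silence"),
    ("The last box arrived absolutely battered. This is unacceptable.",
     "active-complaint"),
    ("I've been a subscriber for six years and this was my last box.",
     "confirmed-departure"),
]

def evaluate(classify, suite=STRESS_SUITE) -> float:
    """Return the fraction of stress-test comments the classifier
    labels as expected. A perfect score is necessary, not sufficient:
    the suite tests known failure modes, not the full signal space."""
    hits = sum(1 for text, expected in suite if classify(text) == expected)
    return hits / len(suite)
```

A sentiment-only baseline is expected to fail the first two cases, which is precisely the P1 failure mode the suite exists to expose.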

Horizon 2 — Full System Success Indicators
Three Measurable Indicators at Scale

Each indicator becomes measurable only once the prototype has validated Layer 1. None of these targets should be pursued as operational goals before the Horizon 1 prototype has confirmed the classifier's reliability.

Indicator 1 · Churn Reduction
Annual Churn Rate in Highest-LTV Cohort
Directly connected to P1 failure mode: the silent departure of long-tenure Angels invisible to prompted metrics. The full Quiet Signal System is designed to make this churn visible and interceptable before cancellation.

Target: Annual churn rate in the identified 5+ year, pre-funded Angel cohort reduces from 24% to ≤16% within 24 months of full system deployment — a reduction of at least 8 percentage points.

Data source: CRM annual survival analysis, segmented by cohort definition used in the feasibility study (5+ year tenure, pre-funded balance, historical referral activity). Measured at 12 and 24 months post-deployment.
Target: 24% → ≤16% churn within 24 months
Indicator 2 · Metric-Truth Gap
Forum Divergence Score — Managed vs. Unmanaged Gap
Directly connected to P1 failure mode: the gap between the managed satisfaction metric and the unmanaged community signal. The full system is working at scale when this gap narrows — not because NPS has fallen, but because forum sentiment has recovered.

Target: The Forum Divergence Score across the UK Angel cohort narrows by at least 40% within 18 months of full system deployment. Recovery in forum sentiment (not managed metric score) is the signal. NPS remaining stable or improving is a secondary indicator, not the primary one.

Data source: Layer 1 Forum Divergence Score aggregated across the full UK Angel cohort. Monthly reporting. Baseline established at system launch.
Target: Forum Divergence Score narrows ≥40% within 18 months
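The 40% target above is a ratio against the launch baseline rather than an absolute score. A minimal gate check, assuming the Forum Divergence Score is reported as a positive gap between the managed metric and forum sentiment; the function name is illustrative and the score's own formula is defined in the Layer 1 specification, not here:

```python
def divergence_gate(baseline_gap: float, current_gap: float,
                    required_narrowing: float = 0.40) -> bool:
    """True if the Forum Divergence Score has narrowed by at least
    40% against the baseline established at system launch."""
    if baseline_gap <= 0:
        raise ValueError("baseline gap must be positive")
    narrowing = (baseline_gap - current_gap) / baseline_gap
    return narrowing >= required_narrowing
```

Because the criterion requires forum sentiment to recover rather than NPS to fall, the monthly report should also log which side of the gap moved.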
Indicator 3 · Commercial Recovery
Annual Replacement Burden — Angel Cohort
Directly connected to P1 failure mode: the £2.3m annual replacement burden for the high-LTV cohort at 24% churn, confirmed in the Feasibility Study commercial assessment. The full system is working at scale when this burden is measurably and sustainably reduced.

Target: Annual replacement burden for the identified Angel cohort reduces from ~£2.3m to ≤£1.5m within 24 months of full system deployment — representing a reduction of at least £800,000 in replacement cost annually, against an estimated full system build and operating cost of ≤£400,000 per annum.

Data source: CRM churn volume in the defined cohort × £398 replacement cost per churned Angel (updated annually from CAC data). Reported quarterly. Commercial gate review at 12 months.
Target: Replacement burden £2.3m → ≤£1.5m within 24 months
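The commercial figures above can be reproduced directly from the cohort definition: 24,000 Angels × 24% churn × £398 ≈ £2.29m, and the ≤£1.5m ceiling implies cohort churn of roughly 15.7%. A minimal sketch of the quarterly calculation (the function name is illustrative):

```python
def replacement_burden(cohort_size: int, churn_rate: float,
                       replacement_cost: float = 398.0) -> float:
    """Annual replacement cost: churned Angels x cost per replacement."""
    return cohort_size * churn_rate * replacement_cost

baseline = replacement_burden(24_000, 0.24)      # current burden, about £2.29m
implied_churn = 1_500_000 / (24_000 * 398.0)     # churn rate needed for the £1.5m ceiling
```

The £398 figure is updated annually from CAC data, so the function's default should be treated as a point-in-time input, not a constant.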
6
Terminal Stage · Methodology Assessment

Methodology Critique

The methodology critique was the terminal stage of the project — issued as a structured prompt into the full project knowledge base, designed to assess the process that produced the Quiet Signal System rather than the concept itself. The critique prompt explicitly excluded concept quality assessment and required naming the stage, the decision, and the evidence for every claim made.
The Critique Prompt
Prompt Text — Issued into Full Project Knowledge Base
"You are critiquing a methodology, not a solution. Do not assess the quality of the Quiet Signal System concept or the ideation outputs. Assess only the process that produced them.

Drawing on the prompt sequence document and the feasibility study in this knowledge base, critique the methodology used across the six stages of this assignment. Structure your critique as follows: What worked by design — where the methodology made a deliberate structural choice that produced a better outcome than a less structured approach would have. Be specific: name the stage, the decision, and the evidence that it worked. What worked by accident — where the output was good but for reasons the methodology did not anticipate or control for. What should be done differently next time — identify the most significant structural weakness. Do not list minor improvements. The single most important lesson — one sentence.

Where the methodology has a genuine weakness, name it plainly. A diplomatic answer is not useful here."
The Six-Stage Architecture
Stage · Phase · Where Run · Purpose · Structural Logic
1 · POV Generation · Project chat with full knowledge base · Synthesise strongest strategic POV grounded in P1 · Single session, no isolation needed — the POV is a convergent output, not a divergent one
2 · Ideation · Three anonymous browser windows — no shared context · Generate maximum idea diversity without cross-contamination · Isolation manufactures genuine divergence; convergence across sessions becomes evidence, not artefact
3 · Consolidation · Separate standalone chat — no project knowledge base · De-duplicate, cluster, and select 15–20 ideas · No pre-imposed taxonomy; natural clusters must emerge from the ideas themselves
4 · Feasibility Study · Project chat with full knowledge base including financials · Assess viability; quantify cost of inaction; identify critical gaps · Financial context enters here, not at ideation — protecting early-stage ideas from premature commercial filtering
5 · Concept Development · Project chat — builds on feasibility findings · Synthesise ideas into a single coherent named concept with AI capability stack · Concept is bounded by feasibility conditions; HITL architecture made explicit in prototype specification
5a · Success Criteria · Project chat — adds Defining Success framework · Generate measurable criteria across two horizons: prototype and full system · Horizon 1 tests the single foundational assumption before Horizon 2 is pursued
What Worked By Design
Stage 2 — Three Parallel Anonymous Ideation Runs

The decision to run three isolated sessions using only the anonymised POV, with no shared context and no cross-contamination between windows, was the single most consequential structural choice in the entire methodology. The evidence that it worked is not the volume of ideas produced — 161 ideas across nine territory-sessions is a quantity any single session could approximate — but the convergence signal it generated. Community Vocabulary Shift appeared independently in all three sessions.

When three isolated sessions converge on the same idea, that convergence is the signal.

That convergence finding could not have been produced by a single session, however well-prompted. The methodology manufactured a form of triangulation that is structurally impossible in conventional chat-based use. This was deliberate, and it worked.

Stage 3 — Consolidation as a Distinct, Isolated Stage

Running the consolidation in a separate chat with no project knowledge base, using only the three pasted outputs, meant the consolidator had to find natural patterns rather than confirm predetermined ones. The six clusters that emerged — Signal Intelligence, Pre-Departure Interception, Relational Stewardship, Co-Ownership and Influence, Physical Delivery as Trust Signal, Legacy and Memory — were not imposed in advance. The feasibility study then uses these clusters as its organising architecture, which means the emergent structure held up through subsequent scrutiny. Imposed taxonomies rarely do.

The HITL Critical Finding as a Persistent Constraint

Embedding the HITL Critical Finding as a named, labelled knowledge base entry that every subsequent prompt was required to address directly prevented a predictable failure mode: that a design process about detecting unmediated signal would itself produce outputs optimised for managed metrics. The feasibility study's Amber rating on Human and Ethical Feasibility — specifically the observation that the Silence Classifier could recreate the managed metric problem it was designed to solve — demonstrates that the constraint was actively applied, not merely cited. A constraint that does not generate friction has not been embedded.

What Worked By Accident
The Financial Context Note Arrived at Exactly the Right Moment

The Financial Context document entered the knowledge base at Stage 4 — the feasibility study — rather than Stage 2 during ideation. This sequencing was almost certainly not a deliberate decision to protect the ideation phase from commercial anchoring; it reads more as though the financial data was simply assembled when it was needed. But the accidental effect was significant: the ideation outputs were not constrained by the £398 replacement cost figure or the 44-month payback window.

Ideas like the Last Box Protocol, the Alumni Network, and the Tenure Council — which are commercially expensive to defend at ideation stage — survived into the consolidation because no one was filtering against a cost model. The feasibility study then made the commercial case for them retrospectively. If the financial context had been present during ideation, some of the most structurally interesting ideas would likely have been self-censored before they reached the consolidation.

The POV's Emotional Specificity

The verbatim "I am not angry. I am finished." is embedded in the POV and appears repeatedly across all three ideation outputs and the feasibility study. Its persistence across sessions suggests it did genuine generative work — anchoring the ideation in a specific human register rather than a generic churn problem.

The methodology benefited from having access to unusually precise qualitative evidence, and the prompt design was good enough not to dilute it. That combination was fortunate. A less specific verbatim — or no verbatim at all — would have produced a materially different ideation corpus. The difference between a well-evidenced discovery phase and a generic brief is visible in every subsequent output.

What Should Be Done Differently Next Time
The DPIA and Legal Review Should Have Been a Design Input, Not a Post-Hoc Condition

This is the most significant structural weakness. The feasibility study correctly identifies the mandatory Data Protection Impact Assessment as a hard legal deployment gate, and the AI Risk Register is thorough on UK GDPR Article 35 exposure, Equality Act obligations, and ICO fine risk. But both documents treat legal compliance as a condition to be satisfied after the concept is formed. The DPIA gap, the missing Standard Contractual Clauses, the absence of a Legitimate Interests Assessment — these are named in the feasibility study as remediation requirements, not as constraints that shaped the concept.

The consequence is that several of the highest-priority ideas — the Silence Classifier, the Community Vocabulary Shift monitor, the Peer Influence Map — involve behavioural profiling of identifiable subscribers at a scale that may require material redesign once legal review is completed. The feasibility study acknowledges this but defers it: "the concept development phase must include legal and data protection review as a parallel workstream." A parallel workstream to concept development is not the same as a design input to it.

If the DPIA had been scoped — even at a high level — before the consolidation and concept development stages, the AI capability stack might have been designed differently from the outset. Running the process again, a simplified privacy-by-design checklist should be present in the knowledge base from Stage 3 onwards, not introduced as a remediation condition in Stage 4.

The Single Most Important Lesson
Structural isolation between stages — running each phase in a separate context with no accumulated prompt history — converts the AI from an idea-generator into an evidence-generator, because convergence across isolated sessions produces findings that a single continuous conversation cannot.

Summary Assessment — Methodology Critique Dimensions
"I am not angry. I am finished." appeared in all three sessions and in the feasibility study — generative anchor that was not designed to function this way.
Critique DimensionVerdictMost Significant Evidence
Isolation at Ideation Stage Worked by design Community Vocabulary Shift convergence across all three sessions — impossible to produce without structural isolation.
Consolidation as Distinct Stage Worked by design Six emergent clusters held up as the organising architecture through the feasibility study and concept development — not replaced or revised.
HITL Critical Finding as Persistent Constraint Worked by design Feasibility Study Amber rating on Human & Ethical Feasibility — the constraint generated active friction, not just citation.
Financial Context Timing Worked by accident Last Box Protocol, Alumni Network, and Tenure Council survived into consolidation because no commercial filter was present at ideation.
POV Emotional Specificity Worked by accident
DPIA as Post-Hoc Condition Structural weakness Several highest-priority ideas may require material redesign after legal review — the concept is commercially and strategically compelling but legally provisional.