Refining SEEK Recommendations with GenAI
Designed and shipped a GenAI-powered preference elicitation system that captured 10× more user signals but revealed key UX insights about friction timing and integration. The feature was rolled back after reducing conversion by 4%, leading to strategic pivots toward inline micro-questions and real-time model integration.

Richard Simms
Impact at a glance: 10× signal capture increase • 12% adoption rate • 60% completion • Feature rolled back due to 4% conversion drop • Key learnings inform next iteration strategy
The Challenge
SEEK's job recommendations relied on limited user preference data, resulting in suboptimal relevance for job seekers. While our feed suggested jobs from existing models, we lacked rich, structured signals about evolving candidate preferences—commute tolerance, compensation expectations, team size preferences, and cultural fit factors.
The core question: How might we continuously learn user preferences without creating survey fatigue?
To avoid heavy upfront questionnaires, we explored adaptive, high-information-gain prompts—asking just the next most informative question given what we already knew and what the candidate was viewing in the feed. This approach maximizes marginal information gain while minimizing cognitive load, treating preference learning as a continuous conversation, not a one-off survey.
Strategic Approach
Design and ship a GenAI-powered preference elicitation system that:
- Collects high-signal preference data with minimal friction
- Adapts to context (profile, behavior, visible jobs) and evolving interest shifts
- Outputs structured entities the recommender can actually use (weights, facets, constraints)
- Preserves trust through clarity/explainability and fast value payoffs
We framed this as a system that selects the single most informative next question based on profile, revealed behavior (views/saves/applies), and the context of jobs shown—then loops.
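The selection loop above can be sketched as an expected-information-gain pick over a small pool of candidate questions. This is an illustrative sketch only, not SEEK's production logic: the facet names, belief distributions, and `answer_rate` estimates are all hypothetical, and a real system would derive them from profile data and observed behavior.

```python
import math

def entropy(dist):
    """Shannon entropy (bits) of a discrete belief over answer options."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

def next_question(questions, answer_rate):
    """Pick the question with the highest expected information gain.

    `questions` maps question id -> current belief over its answer options;
    `answer_rate` maps question id -> estimated probability the user answers
    rather than skips. Expected gain = uncertainty resolved * answer probability.
    """
    return max(questions, key=lambda q: entropy(questions[q]) * answer_rate[q])

# Hypothetical beliefs: near-uniform means high uncertainty, so high value to ask.
questions = {
    "work_mode": [0.34, 0.33, 0.33],   # remote / hybrid / on-site: unknown
    "salary_floor": [0.8, 0.1, 0.1],   # already fairly confident
    "team_size": [0.5, 0.5],
}
answer_rate = {"work_mode": 0.7, "salary_floor": 0.6, "team_size": 0.65}
print(next_question(questions, answer_rate))  # -> work_mode
```

After each answer, the chosen facet's belief collapses, its entropy drops, and the loop naturally moves on to the next most uncertain facet.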
Implementation Journey
Phase 1: Discovery & Alignment
- Facilitated discovery workshop with PM & Engineering to identify ranking signals that actually move the needle
- Prioritized preference atoms (location bands, salary floors, role seniority, hybrid/on-site, stack exclusions)
- Defined experience principles: one-tap micro-interactions, visible payoff ("watch your feed change"), optionality
Phase 2: Prototype Development
- Built AI prototype: conversational flow that explains, asks, and reflects back structured signals
- Designed system prompts that began with structured questions and pre-generated options before shifting to fully generative questions and options

- Validated explainability hooks with rationales next to new jobs ("we boosted these because you prefer hybrid within 45 min")
Phase 3: Adaptive Evolution
- Moved from open chat to LLM-generated, single-question turns that adapt to profile, recent interactions, and live feed context
- Implemented marginal information gain selection: always ask what reduces the most uncertainty now
- Added preference change detection triggers to revisit stale assumptions when behavior drifted
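One simple way to frame the drift trigger in the last bullet: compare a stated preference against the same facet observed on recently saved/applied jobs, and re-ask when they diverge. This is a minimal sketch under assumed inputs (the facet values, the 0.4 threshold, and the interaction window are all illustrative, not the shipped heuristic).

```python
def drift_detected(stated_pref, recent_interactions, threshold=0.4):
    """Flag a stale preference when recent behavior disagrees with it.

    `stated_pref` is the option the user told us (e.g. "hybrid");
    `recent_interactions` lists the same facet observed on jobs the user
    saved or applied to. If the share of interactions matching the stated
    preference drops below `threshold`, we schedule a gentle re-ask.
    """
    if not recent_interactions:
        return False
    match_rate = sum(1 for x in recent_interactions if x == stated_pref) / len(recent_interactions)
    return match_rate < threshold

# User said "hybrid", but 7 of their last 8 applies were fully remote roles.
recent = ["remote"] * 7 + ["hybrid"]
print(drift_detected("hybrid", recent))  # -> True
```

The re-ask itself then flows through the same information-gain selection as any other question, so drifted facets compete for the one question slot rather than nagging on their own schedule.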
Phase 4: Integration & Testing
- Positioned feature alongside context-aware recommendations so answers immediately shaped what appeared next
- Built structured output contract that the recommender could consume immediately
- Measured adoption, completion, and conversion impact across user segments
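A structured output contract like the one above might look like the following sketch: a typed entity per elicited signal, serialized as JSON for the ranker. The field names (`facet`, `weight`, `hard_constraint`, `source`) are hypothetical, chosen to illustrate the weights/facets/constraints split described earlier, not the actual SEEK schema.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class PreferenceSignal:
    """One structured entity an elicitation turn emits for the ranker."""
    facet: str               # e.g. "work_mode", "commute_minutes"
    value: str               # normalised answer, e.g. "hybrid"
    weight: float            # how strongly to boost matching jobs (0..1)
    hard_constraint: bool    # True -> filter out non-matches; False -> soft boost
    source: str = "elicitation"  # distinguishes elicited vs inferred signals

signals = [
    PreferenceSignal("work_mode", "hybrid", 0.8, False),
    PreferenceSignal("commute_minutes", "45", 0.6, True),
]
payload = json.dumps([asdict(s) for s in signals])
print(payload)
```

Keeping the contract this small is deliberate: the recommender consumes it without an LLM in the loop, and the `source` field lets downstream teams weigh explicit answers differently from inferred behavior.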
Results & Analysis
Metrics That Improved ✅
Signal capture: ~10× more preference entities per user session (≈21.4 vs 2.3 baseline), ~80% novel vs existing profile/search—exactly the kind of data our models were missing.
Adoption & completion: ~12% adoption overall (≈20% among active recommendations users), ~60% completion—healthy first-version engagement for a new, optional interaction.
Challenges Encountered ⚠️
Downstream conversion: JDV/UV and app-start/UV dipped ~4% overall, with a heavier dip on mobile.
Hypothesis: The chat-like interaction introduced friction before the "apply" moment, and signals weren't fully wired into the live ranker fast enough to offset that cost.
The Decision
We rolled back v1 and kept the learning: the method (adaptive, high-info-gain preference collection) is sound; the container (longer chat) wasn't right for the feed.
Key Deliverables
✅ Adaptive Q&A pattern: One concise, highly relevant question at a time; "Skip / Prefer not to say" always present
✅ Context-aware prompts that consider what's on screen (nearby seniorities/industries) to keep questions legible and useful
✅ Structured output contract (schema & weights) the recommender can consume immediately
✅ Explainability micro-copy: Short rationales attached to refreshed jobs to build trust ("We prioritized X because you told us Y")
Key Learnings
- Information gain wins—but UX must be feather-light: The theory is solid; the form factor matters. We'll shift from "conversation" to inline micro-moments embedded directly in cards/rows.
- Show the payoff instantly: Tie answers to visible feed changes or a small "Why these?" explainer to reinforce trust and control.
- Close the loop with ranking: Signals must immediately steer the model; otherwise the interaction cost hurts conversion. Aligning pipelines and refresh cadence is critical.
- Preference drift is real: Monitor behavior to re-ask when drift is detected—without nagging users.
What's Next
Based on learnings from v1, the next iteration focuses on:
- Inline micro-questions in the feed (no chat UI): one-tap, context-specific questions on job cards with immediate re-ranking
- Signal → Ranker integration with AIPS recommendations so captured entities directly influence candidate/job pair scoring in near-real-time
- Explainability at the edge ("Match reasons" snippets on cards) to raise confidence and teach the model through hide/boost feedback
- Safeguards & privacy UX: transparent controls over what's stored, why, and how it improves fit
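The signal-to-ranker step above can be sketched as a thin re-ranking pass over base scores: hard constraints filter, soft signals add a weighted boost on match. This is a toy illustration under assumed shapes (flat job dicts with a base `score`, signal tuples), not the AIPS integration itself.

```python
def rerank(jobs, signals):
    """Apply elicited preference signals on top of base ranker scores.

    `jobs` is a list of dicts with a base `score` and facet values;
    `signals` is a list of (facet, value, weight, hard) tuples. Hard
    constraints filter out non-matching jobs; soft signals add a
    weighted boost when the job matches the stated preference.
    """
    out = []
    for job in jobs:
        keep, boost = True, 0.0
        for facet, value, weight, hard in signals:
            match = job.get(facet) == value
            if hard and not match:
                keep = False
                break
            if match:
                boost += weight
        if keep:
            out.append({**job, "score": job["score"] + boost})
    return sorted(out, key=lambda j: j["score"], reverse=True)

jobs = [
    {"id": "a", "score": 1.0, "work_mode": "on-site"},
    {"id": "b", "score": 0.9, "work_mode": "hybrid"},
]
signals = [("work_mode", "hybrid", 0.5, False)]
print([j["id"] for j in rerank(jobs, signals)])  # -> ['b', 'a']
```

Because the boost is applied at score time rather than baked into the model, an answer can visibly reorder the feed on the very next refresh, which is the "watch your feed change" payoff the experience principles call for.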
Why This Matters
Modern job discovery needs to be context-aware, explainable, and continuously learning from the smallest, highest-value interactions. This work establishes the information design and experience primitives to do exactly that—without turning the feed into a form.
The transparent documentation of both successes and failures provides a blueprint for other teams building GenAI features: strong signal capture is achievable, but the interaction design and integration timing are critical for maintaining user conversion while improving relevance.
Technical Notes
- Adaptive, high-information-gain approach codified in internal Information Design Requirements
- System prompts and entity schema designed for structured output the recommender can consume
- Explainability patterns align with AI-Enhanced Feed strategy for match rationales and behavior-aware boosts