CASE STUDY — LOOP WORK

Most job search tools
track what happened.
I built one that learns from it.

Loop Work is a persistent AI agent and relational database system I built in Notion to manage my job search. It doesn't just track applications — it compounds knowledge from every JD analysis, interview, and debrief to get smarter over time.

Here's what I built, how it works, and what it taught me.

LOOPER 1 · SKILL DISCOVERY

AGENT

KRYSTAL

SKILLS DISCOVERED

FRAMEWORK DESIGN · COGNITIVE LOAD REDUCTION · UNIVERSAL TRANSLATION · TECHNICAL IMPLEMENTATION · PROACTIVE MONITORING

↳ 5 skills added to inventory

THE PROBLEM

Existing tools track status.
None of them help you think.

I started where most people start. ChatGPT. A folder of resume variants. A long document of accomplishments I'd been adding to for years. I was clicking apply, giving context, starting over. Every session from scratch. Nothing I learned carried forward.

The tools that exist aren't much better. Spreadsheets are flexible but unstructured — they capture what happened, not what it means. Job boards and dedicated trackers like Huntr and Teal have added AI — resume builders, keyword matching, cover letter generators. But the AI is stateless. Every session starts from scratch. Nothing compounds. The tool doesn't get smarter about you.

None of them answer the questions that actually matter: Am I positioning myself correctly? Which roles am I genuinely strongest for? What did I learn from that last interview that should change how I approach the next application?

100–200+

applications average
before one offer

~2%

of cold applications
reach a human reviewer

Source: ZipRecruiter / Glassdoor, 2025

THE INSIGHT

“A job search isn't a task list.
It's a product system.”

Once I framed it that way, the design decisions followed naturally.

Applications and interviews are different objects with different lifecycles. An application is a canonical record. An interview is a tactical event that relates to it. Modeling them as one thing creates schema conflicts — the same way merging two distinct data types into one table creates problems in any production system.

AI isn't a writing tool in this system. It's a workflow multiplier — analyzing job descriptions, surfacing skills I couldn't articulate, and writing those discoveries back into every future application. The system compounds. It doesn't reset.

THE ARCHITECTURE

Four interconnected systems. One through-line: every interaction compounds.

JOB LOOP · 01

Job Pipeline DB

Relational database. One page per application. Fit score ≥75 auto-creates an application page with full JD analysis, tailored resume, and cover letter embedded. <75 flags the gaps before anything is created.
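
The threshold rule is simple enough to sketch. A minimal version, assuming a 0–100 score — the names here (`route_jd`, `FIT_THRESHOLD`, the dict keys) are mine for illustration, not actual Notion properties:

```python
# Sketch of the fit-score gate. Assumes a 0-100 score; all names
# are illustrative, not the real Notion property names.
FIT_THRESHOLD = 75

def route_jd(fit_score: int, gaps: list[str]) -> dict:
    """Decide what the agent does with an analyzed job description."""
    if fit_score >= FIT_THRESHOLD:
        # At or above threshold: auto-create the application page
        # with the analysis and tailored materials embedded.
        return {
            "action": "auto_create",
            "embed": ["jd_analysis", "tailored_resume", "cover_letter"],
        }
    # Below threshold: surface the gaps and wait for a human decision.
    return {"action": "flag_gaps", "gaps": gaps}
```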

INTERVIEW LOOP · 02

Interview Schedule DB

One page per interviewer, not per company. Two-way relation back to Job Loop. Self-relation links each round to the one before it — context carries forward automatically.

STAR BANK · 03

Canonical Story Bank

~28 entries. Not 28 unique stories — a handful of high-signal experiences reframed for different competencies. Maintain once, deploy in any context.

AI WORKFLOW LAYER · 04

Persistent AI Agents

The foundation connecting all three systems. Not one tool — two specialized agents with distinct scopes. Looper 1 handles JD intake and fit scoring. Looper 2 handles interview prep and debrief capture. Both write back to the system and persist context across sessions.

3 CROSS-DATABASE RELATIONS

Job Loop ↔ Interview Loop
type: two-way · trigger: always

Interview Loop → Previous Round
type: self-relation · trigger: round 2+

Fit score ≥75 → Job Loop page
type: threshold rule · trigger: auto-create + embed resume

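
The three relations can be mirrored in a minimal data model — a sketch with illustrative names, showing how the two-way relation and the round-to-round self-relation hang together:

```python
# Minimal data model mirroring the cross-database relations.
# Class and field names are illustrative, not the real schema.
from dataclasses import dataclass, field

@dataclass
class JobLoopPage:
    role: str
    fit_score: int
    # Two-way relation: the job knows its interviews.
    interviews: list["InterviewLoopPage"] = field(default_factory=list)

@dataclass
class InterviewLoopPage:
    interviewer: str
    round_num: int
    job: JobLoopPage                                    # back-relation to Job Loop
    previous_round: "InterviewLoopPage | None" = None   # self-relation, round 2+

def schedule_round(job: JobLoopPage, interviewer: str) -> InterviewLoopPage:
    """Create a round linked both ways to its job and to the prior round."""
    prev = job.interviews[-1] if job.interviews else None
    page = InterviewLoopPage(interviewer, len(job.interviews) + 1, job, prev)
    job.interviews.append(page)  # maintain the two-way relation
    return page
```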
WHY THIS IS DIFFERENT

The AI isn't a tool I invoke.
It's an agent embedded in the workflow.

Plenty of trackers have bolted on AI. I haven't found one that compounds after each use — or builds a picture of which roles you're actually strongest for, based on your own feedback loops.

Note: I built this workflow months before Notion shipped their custom agents feature. At the time, it was a persistent instructions page that the AI read at the start of every session. Same outcome, more manual setup.

CAPABILITY 01 · SKILL DISCOVERY

The agent doesn't find gaps.
It finds skills you couldn't articulate.

When the AI analyzes a new job description, it maps requirements against my documented experience and asks clarifying questions — not to generate content, but to surface framing I didn't have.

The answers to those questions get written back into the canonical skills bank. Application #20 is smarter than application #1 because of what #1–19 taught the system.
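
The compounding loop is just a read–gap–write cycle. A deliberately simplified sketch, with set operations standing in for the real skills bank:

```python
# Sketch of the compounding write-back: gaps surfaced by one JD
# analysis become questions, and the answers are written back into
# a canonical skills bank that every later analysis reads.
# Sets stand in for the real Notion database.
def analyze_jd(jd_requirements: set[str], skills_bank: set[str]) -> set[str]:
    """Return requirements not yet covered by documented skills."""
    return jd_requirements - skills_bank

def record_discovery(skills_bank: set[str], new_skills: set[str]) -> set[str]:
    """Write discovered skills back so application N+1 starts smarter."""
    return skills_bank | new_skills
```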

Progressive skill discovery
AI AGENT → KRYSTAL · QUESTION

“The testing intake system — was this a documentation template, or did it have any technical integration behind it?”

WHY THIS MATTERS

A template shows process thinking. Technical integration shows product ops thinking. The difference changes which competency this story leads with.

CAPABILITY 02 · TARGETING INTELLIGENCE

My background spans enough role types that targeting felt like guessing.
Until the data said otherwise.

That background — product ops, technical program management, live operations, internal tooling — is a strength, but the range made it hard to know which roles to prioritize without data.

The targeting intelligence framework was built from actual outcome patterns: which role types advanced to interviews, which JD signals predicted strong fit, which industries had the highest success rate. It updates as patterns shift.

The result gets more accurate with each application.

GREEN FLAGS

  • "Internal tools" or "operator-facing"
  • "Live operations" or "live events"
  • "Cross-functional coordination"
  • "Scaling operations"
  • "Technical translator"
  • "High-stakes execution"

RED FLAGS

  • Heavy marketing/growth metrics
  • "A/B testing" as primary focus
  • B2C product without ops context
  • "GTM strategy" without systems framing
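
One way to read the flags above is as a rough prior on whether a JD deserves full analysis. A sketch with illustrative keywords and equal weights — the real scoring is richer than a keyword count:

```python
# Rough prior from the green/red flag lists: count green-flag
# phrases minus red-flag phrases in a JD. Keyword lists and equal
# weighting are illustrative, not the actual targeting framework.
GREEN = ["internal tools", "operator-facing", "live operations",
         "cross-functional", "scaling operations", "technical translator"]
RED = ["a/b testing", "gtm strategy", "growth metrics"]

def jd_prior(jd_text: str) -> int:
    """Green flags minus red flags; higher means worth a full analysis."""
    text = jd_text.lower()
    return sum(g in text for g in GREEN) - sum(r in text for r in RED)
```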

BEFORE LOOP WORK

High volume, inconsistent results

AFTER LOOP WORK

19% interview rate

CAPABILITY 03 · COMPOUNDING CONTEXT

Every interview round starts with the last one already loaded.

Interview prep follows a codified 12-step template — the structure is decided in advance, so cognitive effort goes to content, not format. But the more important design decision is the link between rounds.

Each Interview Loop page has a Previous Round property that links to the prior round's page. When round 2 prep starts, the AI pulls key takeaways from round 1's debrief into the new page automatically. Manual reflection, structural carryover.
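
The carryover itself is mechanical once the self-relation exists. A sketch, with hypothetical field names, of what round-2 prep pulls forward from the prior debrief:

```python
# Sketch of round-2+ context carryover: when a new round is prepped,
# key items from the previous round's debrief are copied into the new
# page's context header. Dict keys are hypothetical field names.
from typing import Optional

def build_prep_context(previous_debrief: Optional[dict]) -> list[str]:
    """Pull forward the items the next round's prep should start with."""
    if previous_debrief is None:
        return []  # round 1: nothing to carry forward
    carry_keys = ["takeaways", "concerns", "logistics"]
    return [item for k in carry_keys for item in previous_debrief.get(k, [])]
```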

The same principle applies to resume tailoring. The most time-consuming part of any AI workflow is the setup. Built once, it compounds. What used to take 30 minutes per resume now takes seconds.

loop-work / interview-loop / round-2

↩ QUICK CONTEXT — FROM ROUND 1 DEBRIEF

PREVIOUS ROUND · SELF-RELATION

Round 2 prep continues below ↓

SYSTEM IN ACTION

Mockups for confidentiality. Workflow is real.

Four screens from the actual workflow.

loop-work / job-loop / [role]

FIT SCORING OUTPUT

FIT SCORE

82

AUTO-CREATED ✓

Technical complexity

4/5

Operator-facing

5/5

Cross-functional scope

4/5

Live operations context

5/5

Every JD is analyzed and given a 0–100 Fit Score. If it’s ≥75, the agent auto-creates a Job Loop application page (and a linked Resume Variant); if it’s <75, it flags the gaps and asks before creating anything.

loop-work / job-loop / [role] / jd-analysis

HIDDEN EXPECTATIONS SURFACED

What the JD says

"Structure Ambiguous Problem Spaces"

What it actually means

They have growing pains. Product Support is scaling and they need someone who can identify what's breaking, define what success looks like, and build the plan.

Your biggest selling point

You did exactly this with template sprawl, metadata management, and testing intake.

The agent surfaces what the JD implies but doesn't say — hidden expectations, positioning risks, your strongest selling point — then maps your evidence so you walk into the first conversation already positioned.

loop-work / interview-loop / [company] / round-1

INTERVIEW PREP TEMPLATE

01

Interviewer research

02

Role positioning

03

STAR stories selected

04

Technical questions prepared

05

Questions to ask them

06

Comp context loaded

+ 6 more steps

When an interview is scheduled, the agent generates a structured Interview Loop prep page per interviewer: quick win positioning, research checklists, predicted questions, prepared answers, and questions to ask — pre-filled from the Job Loop analysis and your existing materials.

loop-work / interview-loop / [company] / round-2

ROUND 2+ CONTEXT CARRYOVER

↩ FROM ROUND 1 DEBRIEF

Strong on technical depth

Probe: stakeholder influence

Their concern: bandwidth

Comp band confirmed

Round 2 prep →

For later rounds, prep starts with context from the previous round: takeaways, concerns, confirmed logistics, and what changed. The new Interview Loop page links back to the prior round so your debrief carries forward without redoing the work.

PRODUCT THINKING SIGNALS

How I work, shown through one project.

SYSTEMS THINKING

Diagnosis before solution.

I didn't open Notion and start building.

I spent time understanding the failure mode first — why every session restarted from scratch, why nothing compounded, why the tools that existed solved the wrong problem.

The system design followed from that diagnosis, not from feature ideas.

SCOPE DISCIPLINE

Knowing what it doesn't do.

Loop Work does four things. That's not an accident — it's the result of explicitly deciding what it doesn't do.

It doesn't manage networking, offer negotiation, or emotional wellbeing.

Each of those is a different product problem. Knowing where the boundary is makes the four things actually work.

REAL ITERATION

Every version from actual friction.

Five versions shipped. None of them planned in advance.

v1.0 tracked applications. v1.4 has two specialized agents with distinct scopes.

Every change came from a gap I discovered in real use — not a roadmap, not a vision doc.

The system is smarter now because I kept using it honestly.

KNOWN GAPS

The infrastructure works.
The habits need reinforcement.

The debrief-to-learning loop works structurally — but it depends on consistent manual input after every interview. In practice, capture has been uneven. Some rounds have detailed reflections. Others don't.

This is a known adoption gap, and the kind of friction any product builder would recognize: you can design the right structure, but you can't automate discipline.

Three things I'd improve next:

  • Lower-friction debrief capture (quick-capture format, less activation energy than a full Notion page)
  • Resume export automation: the tailored resume lives on the Job Loop page but still requires manual export for submission. I'm exploring Make and Zapier automations to push structured content directly into a formatted Google Doc template — the step shouldn't be manual.
  • Cross-application pattern analysis: I've run ad hoc analysis through Notion AI — surfacing which story types and positioning angles correlate with advancement. The systematic layer is next. Notion shipped custom dashboards with aggregated tables in March 2026; that's the infrastructure I'm building toward.

The longer-term question is whether any of this is useful to others going through the same thing.

WHY THIS EXISTS

246K
TECH LAYOFFS IN 2025

42
APPS PER INTERVIEW

83 DAYS
MEDIAN JOB SEARCH

The numbers aren't personal. They're structural.

Nearly 246,000 tech workers were laid off in 2025 — and 2026 is already tracking ahead of that pace. AI alone drove roughly 55,000 of those U.S. cuts — not because the work disappeared, but because companies found a cheaper way to do it. The average job seeker now submits 42 applications to land a single interview. Only 2.4% of candidates make it through to that stage. The median time from layoff to first offer stretched to 83 days by Q4 2025.

This is the environment. Knowing it changes how you move through it.

I didn't start with a product spec. I started by noticing where my own process was breaking down. So I did what I do: observed the friction points, mapped the workflow, and started making it more intuitive to navigate.

Somewhere in the middle of that, I realized what I was actually doing. I was treating myself as a product. My skills were features behind feature flags — things I knew I could do and had done before, but had never been able to articulate until a job description forced the question. Every interview was a QA release: real conditions, live feedback, data I could act on. The goal wasn't just to pass — it was to get better with each cycle and prepare for the production release: landing the role.

That's Loop Work. Not because I set out to build a product. Because that's how I think.

Stats: Layoffs.fyi · LinkedIn Economic Graph · Bureau of Labor Statistics · Q4 2025

THE SYSTEM IS ONE SIDE OF THE STORY

Loop Work shows how I think.
The about page shows where I want to apply it.

Seven years of live operations, internal tooling, and building things no one had figured out yet — for the teams and environments where that kind of thinking matters most.

MEET KRYSTAL →